Non-Abelian Geometric Phases Carried by the Spin Fluctuation Tensor

Bharath H. M.
School of Physics, Georgia Institute of Technology

December 30, 2023

The expectation values of the first and second moments of the quantum mechanical spin operator can be used to define a spin vector and a spin fluctuation tensor, respectively. The former is a vector inside the unit ball in three space, while the latter is represented by an ellipsoid in three space. They are both experimentally accessible in many physical systems. By considering transport of the spin vector along loops in the unit ball, it is shown that the spin fluctuation tensor picks up geometric phase information. For the physically important case of spin one, the geometric phase is formulated in terms of an SO(3) operator. Loops defined in the unit ball fall into two classes: those which do not pass through the origin and those which pass through the origin. The former class of loops subtends a well defined solid angle at the origin, while the latter does not, and the corresponding geometric phase is non-Abelian. To deal with both classes, a notion of generalized solid angle is introduced, which helps to clarify the interpretation of the geometric phase information. The experimental systems that can be used to observe this geometric phase are also discussed.

§ INTRODUCTION

Berry's geometric phase <cit.> has attracted renewed interest in recent years due to applications in, for example, phase transitions with topological order parameters <cit.> and fault tolerant quantum computation <cit.>. Although Berry's phase was defined for adiabatic transport of a quantum system along a loop in the parameter space of the system's Hamiltonian, it was later realized that it is a kinematic property, which does not depend on the dynamics of the system <cit.>.
In <cit.>, it is shown that a quantum system transported along a closed loop picks up a geometric phase irrespective of how the transport was induced. In <cit.>, the geometric phase has been generalized to open paths in the space of mixed states by defining an SU(N) operator corresponding to every path in the space of N× N density matrices, with no reference to the question of how the transport along the path is induced. In other words, Berry's phase depends only on the path in the parameter space along which the quantum state is transported, and not on the dynamical equation governing the transport or the rate of transport. This insight has resulted in a kinematic formulation <cit.>, <cit.>, which we shall also adopt in this paper. The space of ground state eigenvectors of a non-degenerate Hamiltonian has a line bundle structure over its parameter space. Geometrically, Berry's phase can be viewed as the holonomy of Berry's connection form on this line bundle <cit.>. When the Hamiltonian is degenerate, Berry's phase generalizes to a non-Abelian Wilczek-Zee phase, which can also be formulated as a holonomy <cit.>. In general, a geometric phase can be defined as a holonomy element of a connection form on a fiber bundle structure imposed on the space of quantum states <cit.>, <cit.>, <cit.>. The geometric phase is essentially the geometric information stored in the overall phase of the wave-function of a quantum mechanical system. In this paper, we show that such geometric information may be extracted from the second and higher order spin moments of a quantum spin system, which we formulate as a non-Abelian geometric phase. We restrict our analysis to pure quantum states, i.e., quantum states that can be represented by a vector in a finite-dimensional Hilbert space. Vectors in a finite-dimensional Hilbert space may be regarded as states of a quantum spin system.
Corresponding to every pure state, one can define a spin vector in real space as s⃗ = (⟨ S_x⟩, ⟨ S_y⟩, ⟨ S_z⟩)^T, where S_i are the Hilbert space spin operators and ⟨ S_i⟩ are their expectation values with respect to the given pure state. For a spin-1/2 system, the real space spin vector has unit length and therefore lies on the unit sphere, known as the Bloch sphere. For a spin-1 system, the length of the real space spin vector lies in the interval [0,1] and therefore s⃗ lies in the closed unit ball, known as the Bloch ball (𝔹):

𝔹 = {s⃗∈ℝ^3 : |s⃗| ≤ 1}

A measure of the quantum fluctuations of the spin in the quantum state is given by the covariance matrix, a rank-2 tensor:

𝐓 = ([ ⟨ S_x^2⟩-⟨ S_x⟩^2  1/2⟨{S_x,S_y}⟩-⟨ S_x⟩⟨ S_y⟩  1/2⟨{S_x,S_z}⟩-⟨ S_x⟩⟨ S_z⟩; 1/2⟨{S_x,S_y}⟩-⟨ S_x⟩⟨ S_y⟩  ⟨ S_y^2⟩-⟨ S_y⟩^2  1/2⟨{S_y,S_z}⟩-⟨ S_y⟩⟨ S_z⟩; 1/2⟨{S_x,S_z}⟩-⟨ S_x⟩⟨ S_z⟩  1/2⟨{S_y,S_z}⟩-⟨ S_y⟩⟨ S_z⟩  ⟨ S_z^2⟩-⟨ S_z⟩^2 ])

Here, {S_i, S_j} = S_iS_j + S_jS_i is the anticommutator of S_i and S_j. Hereafter, we refer to this covariance matrix as the spin fluctuation tensor. When the real space spin vector is transported along a loop in 𝔹, geometric phase information is encoded in the spin fluctuation tensor. To see this, we introduce a geometric picture of the spin fluctuation tensor. The latter is a symmetric, positive semi-definite matrix with three non-negative eigenvalues and orthogonal eigenvectors. It may be represented by an ellipsoid whose principal axes have lengths given by the square roots of the eigenvalues and whose orientation is determined by the eigenvectors. The pair (s⃗, 𝐓) can be visualized by a vector in 𝔹 with an ellipsoid, representing the spin quantum fluctuations, centered at its tip (FIG1(a)). Let us consider a loop inside 𝔹 along which the spin vector is transported.
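Before turning to transport, the pair (s⃗, 𝐓) can be computed directly from a state vector. The sketch below is ours, not the paper's: it assumes hbar = 1, the basis ordering (z_-1, z_0, z_+1) used later in the text, and the standard spin-1 matrices in the z-basis; the function names are hypothetical.

```python
import numpy as np

# Spin-1 operators with hbar = 1, in the basis ordering (m = -1, 0, +1),
# matching the component convention psi = (z_-1, z_0, z_+1) used in the text.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]]) / np.sqrt(2)
Sz = np.diag([-1.0, 0.0, 1.0]).astype(complex)
S = [Sx, Sy, Sz]

def expval(A, psi):
    """<psi, A psi>/<psi, psi> for a (possibly unnormalized) state vector."""
    return (np.vdot(psi, A @ psi) / np.vdot(psi, psi)).real

def spin_vector(psi):
    """s = (<Sx>, <Sy>, <Sz>), a point of the Bloch ball B."""
    return np.array([expval(A, psi) for A in S])

def fluctuation_tensor(psi):
    """Covariance matrix T_ij = (1/2)<{S_i, S_j}> - <S_i><S_j>."""
    s = spin_vector(psi)
    return np.array([[expval(S[i] @ S[j] + S[j] @ S[i], psi) / 2 - s[i] * s[j]
                      for j in range(3)] for i in range(3)])

psi = np.array([0, 0, 1], dtype=complex)   # the stretched state m = +1
s, T = spin_vector(psi), fluctuation_tensor(psi)
```

For this stretched state the sketch returns s⃗ = ẑ and 𝐓 = diag(1/2, 1/2, 0): the ellipsoid degenerates into a disk perpendicular to s⃗, consistent with the boundary case |s⃗| = 1 described in sec 3.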
Analogous to the parallel transport of tangent vectors on a sphere, one can introduce a notion of parallel transport of the ellipsoids along the loop, where each of the ellipsoid's axes is parallel transported (the center of the Bloch ball poses a certain non-triviality, which we address in this paper). Upon traversing the loop, the ellipsoid will, in general, return with a different orientation, capturing the geometric phase of the loop (FIG1(b)). In this paper, we rigorously formulate this geometric phase as an element of the group SO(3) and provide a geometric interpretation of it. The key step in formulating our geometric phase is to rigorously define parallel transport of the ellipsoids. In the sequel we show, however, that this parallel transport cannot be defined using the standard theory of connections on a fiber bundle. Hereafter, we restrict ourselves to spin-1 systems. The quantum state of a spin-1 system is represented by a non-zero vector ψ=(z_-1, z_0, z_+1)^T in the 3-dimensional complex Hilbert space ℂ^3; here the superscript T denotes matrix transpose. The physical properties of the spin system are invariant under multiplication of this vector by a non-zero complex scalar, i.e., ψ and λψ are physically equivalent state vectors for λ∈ℂ-{0}; this defines an equivalence class under the equivalence relation ψ∼λψ. The quotient space under this equivalence is a four dimensional manifold. Topologically, the manifold is the complex projective plane (ℂℙ^2), defined as the space of all lines in a 3-dimensional complex vector space passing through the origin:

ℂℙ^2 = {ψ∈ℂ^3-{0⃗} : ψ∼λψ for λ∈ℂ-{0}}

Equivalently, ℂℙ^2 is the space of all 1-dimensional subspaces of ℂ^3. Each 1-dimensional subspace of ℂ^3 represents an equivalence class. The spin expectation values can be written as ⟨ S_i⟩ = ⟨ψ, S_iψ⟩/⟨ψ, ψ⟩, where ⟨·, ·⟩ is the standard inner product on ℂ^3.
We may define a map ϕ : ℂℙ^2 →𝔹 that takes every equivalence class of ℂ^3 to its real space spin vector: ϕ(ψ)= s⃗. In terms of coordinates, for a vector ψ = (z_-1, z_0, z_+1)^T representing the equivalence class {λψ: λ∈ℂ-{0}}, the image under this map is:ϕ(([ z_-1;z_0; z_+1;]))=1/|z_-1|^2+|z_0|^2+|z_+1|^2([ √(2)Re(z_-1z_0^*+z_0z_+1^*); √(2)Im(z_-1z_0^*+z_0z_+1^*); |z_+1|^2-|z_-1|^2; ]) = s⃗∈𝔹We note that the map ϕ is independent of the choice of the representative in any equivalence class and therefore, it is well defined. In other words, ϕ(ψ)=ϕ(λψ) for λ∈ℂ-{0}. The components of the spin fluctuation tensor can also be written in terms of the coordinates of ψ. Together, the spin vector and the spin fluctuation tensor contain all the information about the spin-1 quantum state. Indeed, every spin-1 state is uniquely represented by the pair (s⃗, 𝐓). Defining a parallel transport of ellipsoids along a loop in 𝔹 is tantamount to defining a horizontal lift of loops in 𝔹 to ℂℙ^2. The map ϕ:ℂℙ^2→𝔹 does not, however, have a fiber bundle structure. Any fiber bundle over 𝔹 is necessarily a product bundle as 𝔹 is a contractible space. The space ℂℙ^2, being 4-dimensional, is not a product bundle over 𝔹 because it has non-trivial second homology. Any 4-dimensional product bundle over 𝔹, being homotopic to the 1-dimensional fiber itself, would have a trivial second homology. Therefore, this geometric phase cannot be formulated as a holonomy of loops in 𝔹, in general. Circumventing this difficulty is the first of the two problems that we address in this paper.The interpretation of this geometric phase poses a separate problem. Berry's phase associated with a loop on the Bloch sphere, ∂𝔹, is generally interpreted as the solid angle enclosed by the loop <cit.>. The definition of solid angles easily extends to loops in the Bloch ball, provided they do not pass through the center. 
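The coordinate expression for ϕ given above can be checked numerically against the operator definition ⟨ S_i⟩ = ⟨ψ, S_iψ⟩/⟨ψ, ψ⟩. The helper below is a sketch under the same conventions as before (hbar = 1, basis order (z_-1, z_0, z_+1)); the function name phi is ours.

```python
import numpy as np

def phi(psi):
    """Coordinate form of phi: CP^2 -> B for psi = (z_-1, z_0, z_+1)."""
    zm, z0, zp = psi
    w = zm * np.conj(z0) + z0 * np.conj(zp)
    n = abs(zm)**2 + abs(z0)**2 + abs(zp)**2
    return np.array([np.sqrt(2) * w.real, np.sqrt(2) * w.imag,
                     abs(zp)**2 - abs(zm)**2]) / n

# Cross-check against <S_i> computed from the spin-1 matrices.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]]) / np.sqrt(2)
Sz = np.diag([-1.0, 0.0, 1.0])

psi = np.array([0.3 - 0.1j, 0.5j, 0.8 + 0.2j])   # an arbitrary, unnormalized state
s = phi(psi)
expect = [(np.vdot(psi, A @ psi) / np.vdot(psi, psi)).real for A in (Sx, Sy, Sz)]
assert np.allclose(s, expect)           # phi reproduces the first moments
assert np.allclose(s, phi(2.7j * psi))  # well defined on equivalence classes
assert np.linalg.norm(s) <= 1 + 1e-12   # lands in the Bloch ball
```

The second assertion is the statement ϕ(ψ)=ϕ(λψ) made in the text; the check passes for any non-zero complex λ.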
A convenient way to determine this solid angle is to radially project the loop onto the boundary ∂𝔹, where it subtends the same solid angle as the original loop (FIG2 (a)). We refer to such loops as non-singular loops. Loops in 𝔹 that pass through the center break into discontinuous pieces when projected to the boundary of 𝔹; their solid angle cannot be defined by projection and therefore we refer to them as singular loops (FIG2 (b)). It can be seen intuitively that for non-singular loops, the geometric phase is a rotation of the ellipsoid, about the vector s⃗ at the base point of the loop, by an angle equal to the solid angle subtended by the loop. However, the interpretation of the geometric phase of singular loops is non-trivial. It is the second problem we address in this paper. We restate these two problems as:

(i) Given that ℂℙ^2 is not a fiber bundle over 𝔹, can we still define a curve in ℂℙ^2 to be a horizontal lift of a loop in 𝔹 and formulate a definition of geometric phase?

(ii) What is the interpretation of this geometric phase? In particular, can we attach a meaning to “solid angles" for singular loops?

Pivotal to our solution of (i) is the idea that horizontal lifts in every known version of geometric phase minimize a certain metric in the fiber bundle <cit.>. In sec 2A, we provide an outline of our solution to (i). As for the interpretation of geometric phases of singular loops, the term is justified by noting that the difficulty in defining solid angles for such a loop cannot be resolved by perturbing it and taking a limit <cit.>. It requires a more detailed construction and a generalization of the notion of solid angles, which we provide in the next two sections and outline in sec 2B. While sec 2 as a whole outlines all of our results, the details of the definitions and proofs of the theorems therein are provided in sec 3.
Along with a few examples, we address the question of how to observe this geometric phase experimentally in sec 4.

§ OUTLINE OF RESULTS

We state our solutions to (i) and (ii) in sec 2A and sec 2B respectively.

§.§ Definition of Horizontal Lift And Geometric Phase

In definitions 1 and 2 below, we answer (i) by invoking the important role played by metrics in the theory of geometric phase <cit.>, <cit.>, <cit.>. In the definitions of Berry's phase and Uhlmann's phase, horizontal lifts are constructed using Berry's connection form <cit.> and Uhlmann's connection form <cit.>, respectively. It has been noted that in both of these cases, the horizontal lift can also be defined as the lift with minimal length in the respective fiber bundle <cit.>. For a general Ehresmann connection <cit.>, if the horizontal subspace of the tangent space of a fiber bundle is defined as the orthogonal complement of the vertical subspace under a Riemannian metric, the resulting horizontal lift of a loop always minimizes the length among all lifts of the loop. While ℂℙ^2 is not a fiber bundle over 𝔹, it has a standard, natural (i.e., maximally symmetric) metric — the Fubini-Study metric (s_FS) <cit.>. It is essentially the “angle" between two quantum state vectors in the Hilbert space:

s_FS(ψ_1, ψ_2) = cos^-1(|⟨ψ_1, ψ_2⟩|/√(⟨ψ_1, ψ_1⟩⟨ψ_2, ψ_2⟩))

We note that this definition extends to ℂℙ^2 when we employ any Hilbert space representatives of the equivalence classes corresponding to the points in ℂℙ^2, i.e., it is invariant under scalar multiplications: s_FS(ψ_1, ψ_2)=s_FS(λ_1ψ_1, λ_2ψ_2), where λ_1, λ_2 ∈ℂ-{0}.
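A direct implementation of s_FS makes the scalar invariance concrete. This is a minimal sketch of ours; np.clip guards the arccos argument against rounding outside [0, 1].

```python
import numpy as np

def s_FS(psi1, psi2):
    """Fubini-Study distance between the rays through psi1 and psi2."""
    overlap = abs(np.vdot(psi1, psi2))
    norms = np.sqrt(np.vdot(psi1, psi1).real * np.vdot(psi2, psi2).real)
    return np.arccos(np.clip(overlap / norms, 0.0, 1.0))

psi1 = np.array([1.0, 0.0, 0.0], dtype=complex)
psi2 = np.array([1.0, 1.0, 0.0], dtype=complex)
assert np.isclose(s_FS(psi1, psi1), 0.0)
assert np.isclose(s_FS(psi1, psi2), np.pi / 4)   # "angle" between the two rays
# invariance under non-zero scalar multiples of either argument:
assert np.isclose(s_FS(2j * psi1, -0.5 * psi2), s_FS(psi1, psi2))
# orthogonal rays are at the maximal distance pi/2:
assert np.isclose(s_FS(psi1, np.array([0, 1, 0], dtype=complex)), np.pi / 2)
```

Because of the absolute value in the numerator, the distance between rays never exceeds π/2, which is why the definition descends to ℂℙ^2.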
We define a horizontal lift for loops in 𝔹 using this metric.

Definition 1 (Horizontal Lift): A continuous path γ̃:[0,1]→ℂℙ^2 is called a horizontal lift of a loop γ:[0,1]→𝔹 iff ϕ∘γ̃ = γ and γ̃ minimizes the Fubini-Study length in ℂℙ^2.

In sec 3, we show that the earlier described intuitive notion of parallel transport of the ellipsoids along a loop in 𝔹 is equivalent to the above definition of a horizontal lift of the loop. We show that, corresponding to every choice of γ̃(0) satisfying ϕ(γ̃(0))=γ(0), there is a unique horizontal lift of γ. In eqn_u, we provide explicit equations to compute the horizontal lift for a given loop and a given initial point of the lift. Before proceeding to define a geometric phase using this horizontal lift, we note that not every loop in 𝔹 has a well-defined horizontal lift in ℂℙ^2. The relevant regularity conditions on the loop are summarized in theorem 1.

Theorem 1 (Existence criteria for horizontal lifts): A continuous, piece-wise differentiable loop γ: [0,1]→𝔹 has a horizontal lift if it is differentiable at every t∈ [0,1] where γ(t)=0⃗∈𝔹.

This theorem essentially states that a loop in 𝔹 has a horizontal lift if it has no “kinks" while passing through the center of 𝔹. We refer to the loops satisfying the condition mentioned in this theorem as liftable loops. Clearly, any piece-wise differentiable loop not passing through the center of 𝔹 is liftable. FIG3 shows two examples of liftable loops and one example of a loop that is not liftable. FIG3 (b) is an important example of a loop that appears to have a kink at the center of 𝔹, but is nevertheless liftable. The apparent non-differentiability at the center is removable: if we choose the center as the starting and ending point of the loop, i.e., γ(0)=γ(1)=0⃗∈𝔹, the loop satisfies all conditions mentioned in the theorem. However, the loop in FIG3 (c) is not liftable.
There are multiple points of non-differentiability at the center, and so this loop does not satisfy the conditions mentioned in the above theorem. In general, a loop is liftable if there is at least one parametrization under which it is differentiable at every visit to the center. We now define the geometric phase using the horizontal lift defined above. For a given loop γ and a horizontal lift γ̃, the end points γ̃(0) and γ̃(1) are in ℂℙ^2 and therefore, there is an operator U∈ SU(3) such that γ̃(1)=Uγ̃(0). This is because SU(3) acts transitively on ℂℙ^2. The operator is not unique — there are infinitely many such operators. Through its irreducible representation in SU(3), SO(3) can be regarded as a subgroup of SU(3). We denote the representation by 𝒟 : SO(3)→ SU(3). In sec 3, we show that there is an SO(3) choice for the operator U, i.e., there is an operator R∈ SO(3) with a representation 𝒟(R)∈ SU(3) such that γ̃(1)=𝒟(R)γ̃(0). However, this operator is still not unique — it has a twofold ambiguity. We clear up this ambiguity and provide a more rigorous definition in sec 3. We also show that this operator is independent of the choice of γ̃(0), and so it is well-defined for γ. We define this SO(3) operator as the geometric phase of γ.

Definition 2 (Geometric Phase): If γ is a liftable loop in 𝔹, its geometric phase is the operator R∈ SO(3) such that γ̃(1)=𝒟(R)γ̃(0) holds for every horizontal lift γ̃ of γ, where 𝒟(R) ∈ SU(3) is the representation of R in SU(3).

In sec 3, eqn_R, we provide an explicit way of computing the geometric phase of a given loop. Going back to the earlier described geometric picture of representing a quantum state by a spin vector and an ellipsoid centered at its tip, the end points γ̃(0) and γ̃(1) are two quantum states with the same spin vector but different ellipsoids; i.e., we can represent them as γ̃(0)≡(s⃗, 𝐓_1) and γ̃(1)≡(s⃗, 𝐓_2).
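The representation 𝒟 can be made concrete numerically. Assuming the standard spin-1 choice 𝒟(R_n̂(θ)) = exp(-iθ n̂·S⃗) (a conventional sign choice of ours, not spelled out above), one can verify that 𝒟(R) lies in SU(3) and that it rotates the pair (s⃗, 𝐓) by R, which is what lets an SO(3)-valued geometric phase act on the ellipsoid. A sketch:

```python
import numpy as np

Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]]) / np.sqrt(2)
Sz = np.diag([-1.0, 0.0, 1.0]).astype(complex)
S = [Sx, Sy, Sz]

def moments(psi):
    """Spin vector and fluctuation tensor of a (unnormalized) state."""
    ev = lambda A: (np.vdot(psi, A @ psi) / np.vdot(psi, psi)).real
    s = np.array([ev(A) for A in S])
    T = np.array([[ev(S[i] @ S[j] + S[j] @ S[i]) / 2 - s[i] * s[j]
                   for j in range(3)] for i in range(3)])
    return s, T

def D(n, theta):
    """Spin-1 representation of R_n(theta): exp(-i theta n.S), via eigh."""
    w, V = np.linalg.eigh(theta * (n[0] * Sx + n[1] * Sy + n[2] * Sz))
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def R_axis_angle(n, theta):
    """Rodrigues formula for the right-hand rotation about n by theta."""
    K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

n, theta = np.array([1.0, 2.0, 2.0]) / 3.0, 1.3   # arbitrary unit axis and angle
U, R = D(n, theta), R_axis_angle(n, theta)
psi = np.array([0.3 - 0.1j, 0.5j, 0.8 + 0.2j])
s1, T1 = moments(psi)
s2, T2 = moments(U @ psi)
assert np.allclose(U.conj().T @ U, np.eye(3))     # unitary ...
assert np.isclose(np.linalg.det(U), 1)            # ... with det 1, i.e., SU(3)
assert np.allclose(s2, R @ s1)                    # the spin vector rotates by R
assert np.allclose(T2, R @ T1 @ R.T)              # the ellipsoid rotates with it
```

The last assertion is precisely the transformation rule 𝐓↦ R𝐓R^T by which the geometric phase acts on the fluctuation ellipsoid.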
The geometric phase of γ, we show, is precisely the rotation R which rotates the ellipsoid 𝐓_1 to 𝐓_2, i.e., 𝐓_2 = R𝐓_1R^T. In the axis-angle representation, a right-hand rotation about a unit vector n̂∈ℝ^3 by an angle θ∈ [0,2π) is denoted by R_n̂(θ). For non-singular loops, the geometric phase is R=R_γ(0)(Ω), a rotation about the spin vector γ(0) by an angle Ω equal to the solid angle enclosed by γ (FIG1 (b)). To see this, we need the following simple facts about the ellipsoids, which follow from the map phi. One of the eigenvectors of 𝐓 coincides with s⃗, with eigenvalue 1-|s⃗|^2. Therefore, the ellipsoid is always oriented with one axis parallel to s⃗ (see Appendix for a detailed derivation). The other two eigenvalues are 1/2(1±√(1-|s⃗|^2)), and that leaves only one degree of freedom for the ellipsoid when the spin vector is fixed, namely, rotation about the spin vector (FIG1 (a)). Therefore, if γ(t)≠ 0 throughout the loop, the geometric phase is necessarily a rotation about the vector γ(0). The parallel transport of the ellipsoid is reminiscent of the parallel transport of a tangent line to S^2 along a loop, whose holonomy is the solid angle of the loop. Therefore, the angle of rotation of the ellipsoid is also this solid angle. The above interpretation, however, does not work for singular loops. We provide a generalization of the above interpretation in the following section.

§.§ Interpretation of Geometric Phase

To answer point (ii), we define a generalized solid angle for all loops inside 𝔹 in definition 3 below. The idea is to first project a loop in the Bloch ball onto the real projective plane (ℝℙ^2) and then to define a solid angle for this projection. We begin with the definition of the projection. We recall that ℝℙ^2 is the space of all lines through the origin of ℝ^3. Equivalently, it is the space obtained from the 2-sphere S^2 by identifying diametrically opposite points.
We use the following notation for points in ℝℙ^2:

Notation: The projection of a unit vector n̂∈ S^2 to ℝℙ^2 is the equivalence class {+n̂, -n̂} and will be denoted by ±n̂.

Every loop in S^2 can be projected to a loop in ℝℙ^2. As described earlier, the solid angle of a non-singular loop can be pictured by radially projecting it to the boundary of 𝔹, which is S^2 (FIG2 (a)). A singular loop can also be projected to S^2 after removing the point(s) at the center. The projected path will, however, be discontinuous (FIG2 (b)). Every time the loop crosses the center of 𝔹, the projected path makes a discontinuous jump across the diameter of S^2 parallel to the tangent of the loop at the center. This holds for all liftable loops. The discontinuity can be removed by identifying diametrically opposite points on S^2, and in doing so, we obtain an ℝℙ^2. Thus, every liftable loop γ in 𝔹 can be projected to a continuous path α: [0,1] →ℝℙ^2:

α(t) = ±γ(t)/|γ(t)| if γ(t)≠ 0,  α(t) = ±γ̇(t)/|γ̇(t)| if γ(t) = 0

Here, γ̇=dγ/dt. Note that the projection is in general an open path. We will next define a solid angle for paths in ℝℙ^2 as an appropriate U(1) holonomy. Indeed, the relevant U(1) bundle over ℝℙ^2 is isomorphic to the lens space L(4,1). We recall that the lens space L(4,1) is the quotient of the 3-sphere S^3 by the discrete Z_4 action (z_1,z_2) ↦ (iz_1,iz_2), where S^3 is represented as the set of all normalized vectors in ℂ^2, i.e.,

S^3 = {(z_1, z_2)∈ℂ^2 : |z_1|^2+|z_2|^2=1},

and Z_4= {1, i, -1, -i}. Thus, L(4,1) is obtained by identifying the orbits of Z_4 in S^3,

L(4,1) = S^3/(z_1,z_2)∼ (iz_1, iz_2)

S^3 is a 4-sheeted covering space of L(4,1). The lens space L(4,1) is a U(1) bundle over both ℝℙ^2 and S^2 (we will show this explicitly in sec 3). In fact, this is the only lens space that is a U(1) bundle over ℝℙ^2 <cit.>. The solid angle of a loop in S^2 can be defined as the U(1) holonomy of its lift in L(4,1).
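Before turning to the holonomy, the two-case formula for α can be exercised on a concrete singular loop. The sketch below (our construction) takes γ to be a circle of radius 1/2 through the origin, represents points of ℝℙ^2 by unit vectors up to sign, and checks that the projection jumps across a diameter on S^2 yet is continuous and closed in ℝℙ^2:

```python
import numpy as np

def rp2_dist(a, b):
    """Distance between RP^2 points represented by unit vectors up to sign."""
    return min(np.linalg.norm(a - b), np.linalg.norm(a + b))

def gamma(t):
    """A singular loop: a circle of radius 1/2 through the origin of B."""
    return 0.5 * np.array([np.sin(2 * np.pi * t), 1 - np.cos(2 * np.pi * t), 0.0])

def dgamma(t):
    return np.pi * np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.0])

def alpha(t, eps=1e-9):
    """Projection into RP^2: radial away from the center, tangent at the center."""
    g = gamma(t)
    v = g if np.linalg.norm(g) > eps else dgamma(t)
    return v / np.linalg.norm(v)

ts = np.linspace(0.0, 1.0, 2001)
pts = [alpha(t) for t in ts]
# The S^2 representatives jump across a diameter at the center crossing ...
assert max(np.linalg.norm(p - q) for p, q in zip(pts, pts[1:])) > 1.9
# ... but in RP^2 the projection is continuous and closed.
assert max(rp2_dist(p, q) for p, q in zip(pts, pts[1:])) < 0.01
assert rp2_dist(pts[0], pts[-1]) < 1e-6
```

This particular α traverses the half great circle from +x̂ to -x̂, i.e., a noncontractible closed loop in ℝℙ^2, even though its S^2 representatives form an open arc.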
Similarly, we define the solid angle of a loop in ℝℙ^2 as the U(1) holonomy of its lift in L(4,1). An important property of this solid angle is that it is preserved under the projection map from S^2 to ℝℙ^2 — the solid angle of a loop in S^2 is equal to the solid angle of its projection in ℝℙ^2. We prove this in lemma 3 in sec 3B. The appropriate generalization of a holonomy to open paths is a vertical displacement <cit.>. The vertical displacement of the horizontal lift of a path in ℝℙ^2 is a map from the fiber above the initial point of the path to the fiber above the final point of the path. Noting that SO(3)≈ L(2,1) is a double cover of L(4,1) and that it acts transitively on L(4,1), the vertical displacement can be represented by an SO(3) action on L(4,1), i.e., by an operator V∈ SO(3). We provide the details in sec 3. We now define the generalized solid angle of a loop in 𝔹.

Definition 3 (Generalized Solid Angle): Let γ be a liftable loop in 𝔹 and let α be its projection in ℝℙ^2. If α̃ is a horizontal lift of α in L(4,1) with a vertical displacement V∈ SO(3), and k̂ is any unit vector normal to both α(0) and α(1), the generalized solid angle (Ω) of the loop γ is given by Ω = cos^-1(k̂· Vk̂).

In sec 3C, we show that the expression Ω = cos^-1(k̂· Vk̂) is the correct holonomy of α when it is closed, and a meaningful definition of the solid angle of α also when it is open. Furthermore, we show that it is equal to the standard solid angle of γ when γ is non-singular. Hence, we refer to this angle as the generalized solid angle of γ. The following theorem establishes the connection between the generalized solid angle and the geometric phase:

Theorem 2: If γ is a liftable loop in 𝔹 and α is its projection in ℝℙ^2, then the geometric phase of γ is equal to the vertical displacement of α.

Thus, the geometric phase of any loop inside 𝔹 can be interpreted in terms of the generalized solid angle of its projection into ℝℙ^2.
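For intuition on the holonomy behind definition 3, recall that L(2,1) is the space of unit tangent vectors to a sphere (shown in sec 3): parallel-transporting a tangent vector around a loop on S^2 brings it back rotated by the enclosed solid angle. The following discrete check of this classical fact uses a simple projective-transport scheme (the discretization is ours, not the paper's; convergence is only first order, hence the loose tolerance):

```python
import numpy as np

def holonomy_angle(theta0, steps=100000):
    """Parallel-transport a unit tangent vector around the latitude circle at
    polar angle theta0, by projecting onto each new tangent plane, and return
    the angle between the initial and final vectors."""
    t = np.linspace(0.0, 2.0 * np.pi, steps + 1)
    p = np.stack([np.sin(theta0) * np.cos(t),
                  np.sin(theta0) * np.sin(t),
                  np.cos(theta0) * np.ones_like(t)], axis=1)
    u = np.array([np.cos(theta0), 0.0, -np.sin(theta0)])  # tangent at p[0]
    u0 = u.copy()
    for k in range(1, steps + 1):
        u = u - np.dot(u, p[k]) * p[k]   # project onto the new tangent plane
        u = u / np.linalg.norm(u)
    return np.arccos(np.clip(np.dot(u0, u), -1.0, 1.0))

theta0 = np.pi / 3
omega = 2 * np.pi * (1 - np.cos(theta0))   # enclosed solid angle = pi
ang = holonomy_angle(theta0)
assert abs(ang - omega) < 1e-2
```

At polar angle π/3 the latitude circle encloses a solid angle of exactly π, and the transported vector indeed returns reversed; Theorem 2 packages the same mechanism, lifted to L(4,1), into the vertical displacement V.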
This interpretation builds on the m=0 geometric phases introduced in <cit.>. In the following section, we fill in the details of definitions 1, 2 and 3, and provide proofs of theorem 1 and theorem 2. Before proceeding, we make a few remarks contrasting our geometric phase with Berry's phase. Unlike Berry's phase, our geometric phase does not arise naturally from the dynamics of the system. For any liftable loop inside 𝔹, our geometric phase is well defined, regardless of how the physical system is transported along the loop. Therefore, our geometric phase is similar to the mixed state geometric phase introduced in <cit.> and the non-adiabatic geometric phases introduced in <cit.>. Both of these formulations have been observed experimentally <cit.>. In sec 4, we briefly address the question of how to observe our geometric phase experimentally.

§ FORMULATION AND PROOFS OF THEOREM 1 AND THEOREM 2

The basic idea behind the proof of theorem 1 is that although ϕ: ℂℙ^2→𝔹 does not have a fiber bundle structure, it is closely related to a fiber bundle. In fact, it can be constructed as a quotient of a fiber bundle. 𝔹 can be constructed as a quotient space of S^2×[0,1], by collapsing the sphere S^2×{0} to a point. We show in lemma 2(a) below that ℂℙ^2 can also be constructed as a quotient space of L(4,1)×[0,1], by collapsing L(4,1)×{0} and L(4,1)×{1} to an ℝℙ^2 and an S^2, respectively. L(4,1)× [0,1] is an S^1 bundle over S^2×[0,1], because L(4,1) is an S^1 bundle over S^2. Thus, ℂℙ^2 →𝔹 can be constructed from the fiber bundle L(4,1)× [0,1]→ S^2 ×[0,1]. Before proceeding to state and prove lemma 2, we develop a geometrical construction of L(4,1).
We show, in lemma 1, that L(4,1) is the space of all tangent lines to a unit sphere.

Lemma 1: L(4,1) is homeomorphic to the space of all tangent lines to a unit sphere, and it is an S^1 bundle over both S^2 and ℝℙ^2.

Proof: A tangent line (ℓ) to a sphere is uniquely represented by the pair ℓ = (v̂, ±û) (FIG4(a)) of orthogonal unit vectors, v̂ representing the point of tangency of ℓ and û representing the direction of ℓ. Here, -û and +û represent the same tangent line and therefore, we use a “±" sign before û as a shorthand for the equivalence class {+û, -û}. We show that the space of all tangent lines to a sphere, i.e., {ℓ = (v̂, ±û): v̂·û=0}, is homeomorphic to L(4,1) by explicitly constructing a 4-sheeted covering map from S^3 to this space and showing that this space is also obtained as a quotient of S^3 under a Z_4 action (L41). Noting that SU(2) is homeomorphic to S^3 and that SO(3) acts transitively on the space of tangent lines to a sphere, we construct a composition of the following two maps:

SU(2) →^f SO(3) →^g {ℓ = (v̂, ±û): v̂·û=0}

f is the standard double cover from SU(2) to SO(3), i.e., f : e^in̂·σ⃗θ/2↦ R_n̂(θ)∈ SO(3), where n̂ is a unit vector in ℝ^3 and σ⃗=(σ_x, σ_y, σ_z) are the Pauli matrices:

σ_x=([ 0 1; 1 0 ]), σ_y=([ 0 -i; i 0 ]), σ_z=([ 1 0; 0 -1 ])

The map g is constructed from the action of SO(3) on the space of tangent lines to a sphere. Fixing a tangent line ℓ_0 = (ẑ, ±x̂) (FIG4(a)), we obtain:

g : R_n̂(θ) ↦ R_n̂(θ)ℓ_0 = (R_n̂(θ)ẑ, ± R_n̂(θ)x̂)

We now show that g ∘ f : SU(2)→{(v̂, ±û):v̂·û=0} is the required 4-sheeted covering map. The action of SO(3) on a tangent line to a sphere has a Z_2 stabilizer. For instance, the stabilizer of ℓ_0 is {1, R_ẑ(π)}. Therefore, g is a double covering map. For an arbitrary tangent line ℓ, the pre-image set under g contains two points in SO(3). If ℓ = R_n̂(θ)ℓ_0 for some n̂ and θ, then its pre-image set is g^-1(ℓ)={R_n̂(θ), R_n̂(θ)R_ẑ(π)}.
Further, f^-1∘ g^-1(ℓ) is a set of 4 elements in SU(2), given by:

f^-1∘ g^-1(ℓ)= e^in̂·σ⃗θ/2{1, iσ_z, -1, -iσ_z}

Thus, the pre-image set is generated by a Z_4 action; therefore, g∘ f is the required covering map and L(4,1)≈{ℓ = (v̂, ±û): v̂·û=0}. We can now define the bundle maps π_1 : L(4,1)→ S^2 and π_2 : L(4,1)→ℝℙ^2:

π_1: (v̂, ±û)↦v̂∈ S^2,  π_2: (v̂, ±û) ↦±û∈ℝℙ^2

π_1 takes every tangent line to its point of tangency, and π_2 takes every tangent line to the parallel line through the center, which is an element of ℝℙ^2. It is straightforward to verify that they are both S^1 bundle maps ▪.

A natural metric on L(4,1) is induced by the round metric (i.e., the standard Cartesian metric) on S^3. This metric, at a point ℓ = (v̂, ±û)∈ L(4,1), is:

ds^2 = dv̂· dv̂ + dû· dû-(v̂· dû)^2

The first term (dv̂· dv̂) corresponds to the distance covered by the point of contact on S^2. The term dû· dû-(v̂· dû)^2 corresponds to the angle of rotation of the tangent line about its point of contact. Using a similar argument, it can be shown that the lens space L(2,1) is the space of all unit tangent vectors to a unit sphere, i.e., L(2,1) ≈{(v̂, û):û·v̂=0} (FIG4 (b)).

Lemma 2: (a) ℂℙ^2 can be constructed from the stack L(4,1)× [0,1] by collapsing L(4,1)×{0} to an ℝℙ^2 and L(4,1)×{1} to an S^2 using the respective bundle maps π_2 and π_1. That is,

ℂℙ^2 = L(4,1)× [0,1]/π

where π=1 on L(4,1)×(0,1), π=π_1 on L(4,1)×{1} and π=π_2 on L(4,1)×{0}.

(b) Writing 𝔹^∘-{0} = S^2 ×(0,1), where 𝔹^∘ is the interior of 𝔹, the restriction of ϕ to L(4,1)×(0,1) is ϕ = π_1× 1 : L(4,1)× (0,1) → S^2× (0,1).

(c) ℂℙ^2 is the space of all chords to a unit sphere, and ϕ maps each chord to its center.

Proof: We begin with a proof of (a). Let us consider the pre-image sets of ϕ:

ϕ^-1(s⃗) = ℝℙ^0 if |s⃗|=1,  ℝℙ^1 if 0<|s⃗|<1,  ℝℙ^2 if |s⃗|=0

This can be shown using the explicit algebraic expression of ϕ, the map phi.
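The covering map g∘ f and the Z_4 pre-image structure can be verified numerically. The sketch below is our code: it builds the double cover from the adjoint action on the Pauli matrices (a version of f, possibly differing from the convention above by a sign), canonicalizes the sign of û purely for comparison purposes, and checks that the four elements U{1, iσ_z, -1, -iσ_z} all map to the same tangent line:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def f(U):
    """SU(2) -> SO(3) double cover via the adjoint action on the Paulis."""
    return np.array([[0.5 * np.trace(si @ U @ sj @ U.conj().T).real
                      for sj in paulis] for si in paulis])

def g(R):
    """Tangent line l = (R z, +/- R x), with the sign of u canonicalized."""
    v, u = R @ np.array([0.0, 0.0, 1.0]), R @ np.array([1.0, 0.0, 0.0])
    if u[np.argmax(np.abs(u))] < 0:
        u = -u
    return v, u

# An arbitrary U = exp(i n.sigma theta/2) in SU(2), built by diagonalizing n.sigma.
n, theta = np.array([1.0, 2.0, 2.0]) / 3.0, 1.2
w, V = np.linalg.eigh(n[0] * sx + n[1] * sy + n[2] * sz)
U = V @ np.diag(np.exp(0.5j * theta * w)) @ V.conj().T

# The Z_4 orbit U{1, i sz, -1, -i sz} projects to a single tangent line.
Z4 = [np.eye(2), 1j * sz, -np.eye(2), -1j * sz]
v0, u0 = g(f(U))
for c in Z4[1:]:
    v, u = g(f(U @ c))
    assert np.allclose(v, v0) and np.allclose(u, u0)
```

Right-multiplying by ±1 leaves f(U) unchanged, while ±iσ_z contributes the stabilizer rotation R_ẑ(π), which fixes v̂ and reverses û; this is exactly the 4-sheeted structure used in the proof.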
However, it is more illuminating to use the earlier described geometric picture of representing a point in ℂℙ^2 as a vector and an ellipsoid, i.e., (s⃗, 𝐓) (FIG1 (a)). The axes of the ellipsoid have squared lengths 1-|s⃗|^2 and 1/2(1±√(1-|s⃗|^2)) (see Appendix). Therefore, its dimensions depend only on the length of the spin vector. When |s⃗|≠ 0, one of its axes is parallel to s⃗. For a given spin vector with 0<|s⃗|<1, the ellipsoid has one degree of freedom — rotation about s⃗, which produces the set of all quantum states with spin vector s⃗. This set is an ℝℙ^1, because the ellipsoid has a twofold symmetry when rotated about s⃗. On the boundary of 𝔹, when |s⃗|=1, the lengths of the two transverse axes of the ellipsoid are equal and the length of the third axis is zero. Therefore, the ellipsoid degenerates into a disk perpendicular to s⃗. It has no degrees of freedom; it is the only quantum state with the given spin vector. Thus, the pre-image set of this spin vector is just a point, i.e., ℝℙ^0. Finally, when |s⃗|=0, the ellipsoid again degenerates to a disk at the center of 𝔹. This time, however, it has two degrees of freedom. The pre-image set ϕ^-1(0⃗) is the space of all orientations of a disk in ℝ^3 centered at the origin. This is indeed ℝℙ^2. It follows, now, that the pre-image set of the boundary of 𝔹, i.e., ϕ^-1({s⃗:|s⃗|=1}), is a sphere in ℂℙ^2. For a shell of radius 0<r<1, the pre-image set is a lens space L(4,1):

ϕ^-1({s⃗:|s⃗|=r}) = L(4,1),  0<r<1

To show this, we use lemma 1 and construct a bijective map from the pre-image of the shell to L(4,1). Consider the map (s⃗, 𝐓)↦ (v̂, ±û), where v̂=s⃗/r and û is the eigenvector of 𝐓 normal to s⃗ with the larger eigenvalue. Indeed, there is a one-to-one correspondence between the orientations of an ellipsoid at s⃗ and tangent lines at s⃗ to a sphere of radius |s⃗|.
Thus, it follows from lemma 1 that the pre-image of a shell is homeomorphic to L(4,1). We can now construct ℂℙ^2 using the pre-image sets:

ϕ^-1({s⃗:|s⃗|=1}) = S^2,  ϕ^-1({s⃗:0<|s⃗|<1}) = L(4,1)×(0,1),  ϕ^-1(0⃗) = ℝℙ^2

ℂℙ^2 is therefore obtained by attaching an ℝℙ^2 and an S^2 to either end of L(4,1)×(0,1). The attaching maps are easily seen to be π_2 and π_1, using the geometric picture. Thus, ℂℙ^2 is obtained from L(4,1)× [0,1] by collapsing L(4,1)×{0} to an ℝℙ^2 and L(4,1)×{1} to an S^2 using the respective bundle maps. (b) follows trivially from the above construction of pre-image sets. The geometrical construction of ℂℙ^2 claimed in (c) can be shown as follows. The chords passing through the center of a unit sphere form an ℝℙ^2. The chords at a distance r∈(0,1) from the center form an L(4,1), and the chords at a distance 1 from the center degenerate to points on a sphere, forming an S^2. Thus, the space of all chords to a unit sphere has the same structure as ℂℙ^2 and is homeomorphic to it. ▪

Lemma 2(c) is also a consequence of the Majorana constellation <cit.>, which has been used very fruitfully to understand geometric phases <cit.>. States of a spin-1 system can be considered as symmetric states of two coupled spin-1/2 systems. A spin-1/2 state is a point on a Bloch sphere (i.e., ℂℙ^1) and therefore, a spin-1 state is an unordered pair of points on the Bloch sphere (see ref. <cit.> for a detailed description of this representation). This is equivalent to a chord[This picture has a generalization: ℂℙ^n is an unordered product of n ℂℙ^1's. It is the space of all unordered sets of n points on a unit sphere. That is, ℂℙ^n = ℂℙ^1×⋯×ℂℙ^1/∼ where (r_1,⋯ r_i,⋯ r_j,⋯ r_n)∼(r_1,⋯ r_j,⋯ r_i,⋯ r_n) for r_i ∈ℂℙ^1. This is known as the Majorana constellation.]. ϕ maps each chord to its center. We can represent a chord as (r, v̂, ±û), where rv̂ is the center of the chord and û is its direction.
This corresponds to a quantum state whose spin vector is rv̂ and whose ellipsoid is oriented such that the eigenvector normal to s⃗ with the larger eigenvalue is parallel to û. It is straightforward to construct this quantum state ψ∈ℂ^3. For instance, written in the standard basis,

(r, ẑ, ±x̂)↦ψ = (√((1-r)/2), 0, √((1+r)/2)) ∈ℂ^3

The quantum state corresponding to any chord can be obtained by performing rotations on both sides of the above equation. Conversely, the chord corresponding to a given quantum state can be obtained from its spin vector and fluctuation tensor (s⃗, 𝐓) — it is the chord centered at s⃗ and oriented parallel to the largest axis of 𝐓 perpendicular to s⃗. The Fubini-Study metric on ℂℙ^2 can be applied to the space of all chords to a unit sphere. At (r, v̂, ±û), the metric is:

ds_FS^2 = 1/2(1-√(1-r^2))dv̂· dv̂+ √(1-r^2)(û· dv̂)^2+ (1-r^2)(dû· dû-(v̂· dû)^2)+1/4(1-r^2)dr^2

This follows from FS. We now proceed to prove theorem 1.

§.§ Proof of theorem 1

Without loss of generality, we may assume that γ̇(t)≠ 0 whenever it is well-defined. Therefore, γ^-1(0⃗) is a zero-dimensional compact manifold, i.e., it is a finite set of points. Adding the end points 0 and 1 to this finite set, we obtain a set of points

γ^-1(0⃗)∪{0,1}={a_0,⋯, a_n+1}

where a_i<a_i+1, a_0=0 and a_n+1=1. This set divides the loop into n+1 pieces, γ_j:[a_j-1,a_j]→𝔹 for j=1,2,⋯, n+1. Each piece γ_j may start and end at the center of 𝔹, but lies away from the center otherwise. That is, its interior lies away from the center: γ((a_j-1,a_j))⊂ S^2×(0,1]. The closure of this path in S^2× [0,1] has a horizontal lift in L(4,1)× [0,1], defined using the standard theory of connections <cit.>, because this space has a circle bundle structure over S^2×[0,1]. We denote this horizontal lift by γ̃_j: [a_j-1,a_j]→ L(4,1)×[0,1]. This path can be projected to ℂℙ^2 by composing it with π, as shown in lemma 2(a).
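Before continuing with the proof, the chord-state correspondence of lemma 2(c) can be checked numerically. Under the conventions used earlier (hbar = 1, basis order (z_-1, z_0, z_+1); the code below is our sketch), the state ψ = (√((1-r)/2), 0, √((1+r)/2))^T has spin vector rẑ, fluctuation eigenvalue 1-r^2 along ẑ and transverse eigenvalues 1/2(1±√(1-r^2)), with the larger transverse axis along the chord direction ±x̂:

```python
import numpy as np

Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]]) / np.sqrt(2)
Sz = np.diag([-1.0, 0.0, 1.0]).astype(complex)
S = [Sx, Sy, Sz]

def moments(psi):
    """Spin vector and fluctuation tensor of a (unnormalized) state."""
    ev = lambda A: (np.vdot(psi, A @ psi) / np.vdot(psi, psi)).real
    s = np.array([ev(A) for A in S])
    T = np.array([[ev(S[i] @ S[j] + S[j] @ S[i]) / 2 - s[i] * s[j]
                   for j in range(3)] for i in range(3)])
    return s, T

r = 0.6
psi = np.array([np.sqrt((1 - r) / 2), 0.0, np.sqrt((1 + r) / 2)], dtype=complex)
s, T = moments(psi)
q = np.sqrt(1 - r**2)
assert np.allclose(s, [0, 0, r])   # spin vector r z-hat, the chord center
assert np.allclose(T, np.diag([(1 + q) / 2, (1 - q) / 2, 1 - r**2]))
```

For r = 0.6 this gives 𝐓 = diag(0.9, 0.1, 0.64): the transverse axes split, and the larger one singles out the chord direction.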
The idea behind this proof is to show that these projected paths can be attached continuously under the assumptions of the theorem, and that the resulting path in ℂℙ^2 is a lift of γ that minimizes the Fubini-Study length. Within (a_j-1, a_j), we may write γ_j(t) = (γ_j(t)/|γ_j(t)|, |γ_j(t)|)∈ S^2×(0,1], where the two components represent the coordinates in S^2 and (0,1] respectively, i.e., γ_j(t)/|γ_j(t)|∈ S^2 and |γ_j(t)|∈ (0,1]. Let us define the closure of the first component as β_j: [a_j-1,a_j]→ S^2: β_j(t) = γ_j(t)/|γ_j(t)| for a_j-1<t<a_j, and β_j(t) = lim_t'→ a_kγ_j(t')/|γ_j(t')| for t=a_k, k =j, j-1. Note that β_j are indeed the closures of the discontinuous radial projections shown in FIG2 (b). Let β̃_j denote a horizontal lift of β_j in L(4,1). We define paths γ̃_j:[a_j-1,a_j]→ L(4,1)×[0,1] as: γ̃_j(t)= (β̃_j(t), |γ_j(t)|). We next show that the paths projected to ℂℙ^2, i.e., π∘γ̃_j, can be attached continuously at all a_j for j=1, 2⋯ n. Note that γ(a_j)=0⃗ for j=1,2⋯ n. The end points of the two neighboring paths, γ̃_j and γ̃_j+1, at a_j, projected to ℂℙ^2, are given by: π∘γ̃_j(a_j)=π∘ (β̃_j(a_j), 0)≡π_2∘β̃_j(a_j)∈ℝℙ^2 = ϕ^-1(0⃗) and π∘γ̃_j+1(a_j)=π∘ (β̃_j+1(a_j), 0)≡π_2∘β̃_j+1(a_j)∈ℝℙ^2 =ϕ^-1(0⃗). It suffices to show that the first point of the lift, β̃_j+1(a_j), can be chosen such that the above two points coincide in ℂℙ^2. We begin with a simple observation: since γ is liftable, it is differentiable at a_j, and it follows that [We have used lim_t→ a_k^±γ_j(t)/|γ_j(t)| =lim_t→ a_k^±γ_j(t)-γ_j(a_k)/|γ_j(t)-γ_j(a_k)|= ±γ̇_j(a_k)/|γ̇_j(a_k)|]: β_j(a_j) = lim_t→ a_jγ_j(t)/|γ_j(t)| = γ̇(a_j)/|γ̇(a_j)| and β_j+1(a_j) = lim_t→ a_jγ_j+1(t)/|γ_j+1(t)| = -γ̇(a_j)/|γ̇(a_j)|. Let β̃_j(a_j)=(β_j(a_j), ±û)∈ L(4,1) for some û normal to β_j(a_j), following lemma 1. We may choose β̃_j+1(a_j)= (β_j+1(a_j), ±û) ∈π_1^-1(β_j+1(a_j)). This is a valid choice because û is normal to β_j+1(a_j) (this follows from β_j+1(a_j)= -β_j(a_j)).
It now follows that π_2∘β̃_j(a_j) = π_2∘β̃_j+1(a_j)= ±û∈ℝℙ^2, and therefore γ̃_j and γ̃_j+1 can be attached continuously. It remains to show that the lift γ̃ obtained by attaching π∘γ̃_j minimizes the Fubini-Study length. It suffices to show this for the interior of each segment π∘γ̃_j, which is contained in L(4,1)× (0,1). Consider γ̃_j(t)=(r(t), v̂(t), ±û(t)) as a path in the set of all chords of a unit sphere, following lemma 2(c) and using the notation (r, v̂, ±û) for a chord with center at rv̂ and in direction û. It follows from the construction of γ̃_j that: r(t) = |γ_j(t)|, (v̂(t), ±û(t))= β̃_j(t) ∈ L(4,1) and v̂(t)= β_j(t). r(t) and v̂(t) are determined by |γ_j(t)| and β_j(t) respectively. The key observation is that the horizontal lift β̃_j minimizes the length under the induced round metric on L(4,1) (Metric on L41) among all lifts of β_j <cit.>, <cit.>. That is, û(t) is chosen so as to minimize the length of β̃_j in L(4,1). From Metric on L41, it follows that û̇·û̇ = (v̂·û̇)^2 (here, û̇=dû/dt). This is the condition for minimizing the length. From Fubini-Study, it follows that the same condition minimizes the Fubini-Study length of γ̃_j in L(4,1)×(0,1). Thus, γ̃ is a horizontal lift of γ. ▪ We next demonstrate that the horizontal lift defined by minimizing the Fubini-Study metric is equivalent to the intuitive notion of parallel transport of ellipsoids inside 𝔹. It is easier to use chords in 𝔹, following lemma 2(c), instead of ellipsoids. Let ψ be a quantum state vector represented by the chord (r, v̂, ±û). Its spin vector is s⃗=rv̂. We show that for every infinitesimal change ds⃗ of the spin vector, the corresponding intuitive parallel transport of the chord also minimizes the Fubini-Study length. ds⃗ is a 3-dimensional vector and can be written as a superposition of v̂, û and v̂×û. It suffices to consider separately the three cases where ds⃗ is parallel to each of these three vectors. Let us begin with the case ds⃗=|ds⃗|v̂ (FIG5 (a)).
Intuitively, the chord should be moved radially, parallel to itself, i.e., after parallel transport, the new chord will be (r+|ds⃗|, v̂, ±û). From Fubini-Study, it follows that this is consistent with the minimization of the Fubini-Study metric. When ds⃗ is perpendicular to both v̂ and û, intuitively, the chord should be moved to (r, v̂+ds⃗/r, ±û) — consistent with minimization of the Fubini-Study metric (FIG5 (b)). Finally, when ds⃗ is parallel to û, the chord should be parallel transported like a tangent line to the shell of radius r. Using straightforward geometry, it follows that the new chord is (r, v̂+ds⃗/r, ±(û-|ds⃗|/rv̂)) (FIG5 (c)). This satisfies the correct minimization condition for the Fubini-Study metric, dû· dû=(v̂· dû)^2. Geometric phase was defined in sec 2 as the operator R∈ SO(3) such that γ̃(1)=𝒟(R)γ̃(0) holds for all lifts γ̃ of γ. However, this operator is not unique — it has a two-fold ambiguity because γ̃(0) has a non-trivial stabilizer in SO(3). For instance, when |s⃗|≠ 0, R_s⃗(π)γ̃(0)=γ̃(0). We now use the details of the construction of γ̃ to resolve this ambiguity and provide a rigorous definition of R. Corresponding to each segment β_j in S^2, we define a vertical displacement R_j ∈ SO(3) such that its lift satisfies β̃_j(a_j)= R_j β̃_j(a_j-1). Here, β̃_j is considered a path in the space of tangent lines to a sphere and R_j acts on the tangent lines as a rotation. To define R_j uniquely, we note that SO(3)≈ L(2,1) is a double cover of L(4,1). As remarked earlier, L(2,1) is the space of all unit tangent vectors to a unit sphere. β̃_j can be lifted to L(2,1), and the end points of this lift will define a unique R_j ∈ SO(3). For example, if β̃_j(t) = (v̂(t), ±û(t)), we may assume without loss of generality that (v̂(t), û(t)) represents a continuous path in the space of all unit tangent vectors, i.e., in L(2,1). Indeed, this is a lift of β̃_j in L(2,1). The only other lift is (v̂(t), -û(t)).
Both of these lifts define the same, unique vertical displacement R_j∈ SO(3) with R_j v̂(a_j-1)= v̂(a_j) and R_j û(a_j-1)= û(a_j). Noting that L(4,1) is a U(1) bundle over S^2, it is straightforward to show that this operator is independent of the choice of the first point, β̃_j(a_j-1), of the lift <cit.>. We now define the geometric phase as R= R_n+1R_n⋯ R_1. It follows that 𝒟(R)γ̃(0)=γ̃(1). We end this section with an explicit formula to compute the horizontal lift in ℂℙ^2 and the geometric phase of a given loop in 𝔹. It suffices to compute β̃_j and R_j for each piece γ_j of the loop. Assuming that β_j = v̂(t) for t∈ [a_j, a_j+1], we are to find a û(t) such that (v̂(t), ±û(t)) is a horizontal lift of β_j with a given initial point û(a_j). Using the minimization condition for Metric on L41 and û(t)·v̂(t)=0, it follows that û(t) is the solution to the differential equation with the given initial value: d/dtû(t) = - (dv̂(t)/dt·û(t))v̂(t). To find the geometric phase, we introduce X:[a_j,a_j+1]→ SO(3) satisfying û(t)=X(t)û(a_j), v̂(t)=X(t)v̂(a_j) and X(a_j)=1. The geometric phase will then be R_j= X(a_j+1). It is straightforward to see that X(t) is the solution to the following initial value problem: d/dtX(t)= (dv̂(t)/dtv̂(t)^T-v̂(t)dv̂(t)/dt^T)X, X(a_j) = 1. The above two equations, along with CP2-chord, provide a complete set of equations to compute the horizontal lift and the geometric phase for any loop in 𝔹. Before proving theorem 2, we make a few remarks regarding the points on the boundary of 𝔹. The pre-image set of these points is trivial (pre-image sets). This implies that the corresponding quantum states cannot carry any geometric phase information. Nevertheless, the definitions of horizontal lifts and geometric phase given above are valid even for loops that visit the boundary of 𝔹.
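As a quick numerical check of the two initial value problems above, the sketch below (assuming NumPy; the fourth-order Runge-Kutta stepper, the finite-difference derivative, and the step count are choices of this illustration, not part of the formalism) integrates the equation for X(t) along the simplest non-singular loop, a circle at colatitude θ. For such a loop the geometric phase must be a rotation about β(0) by the circle's solid angle 2π(1−cos θ):

```python
import numpy as np

def vertical_displacement(vhat, t0, t1, n=2000):
    """RK4 integration of dX/dt = (vdot v^T - v vdot^T) X, X(t0) = identity.
    vhat: callable t -> unit vector tracing a segment on S^2."""
    X, d = np.eye(3), 1e-6
    def A(t):
        v = vhat(t)
        vdot = (vhat(t + d) - vhat(t - d)) / (2 * d)  # numerical derivative
        return np.outer(vdot, v) - np.outer(v, vdot)  # antisymmetric generator
    h = (t1 - t0) / n
    for k in range(n):
        t = t0 + k * h
        k1 = A(t) @ X
        k2 = A(t + h/2) @ (X + h/2 * k1)
        k3 = A(t + h/2) @ (X + h/2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return X

# Non-singular test loop: circle at colatitude theta, whose solid angle
# is 2*pi*(1 - cos(theta)); the geometric phase rotates about circle(0).
theta = 1.0
circle = lambda t: np.array([np.sin(theta)*np.cos(2*np.pi*t),
                             np.sin(theta)*np.sin(2*np.pi*t),
                             np.cos(theta)])
R = vertical_displacement(circle, 0.0, 1.0)
angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
print(angle, 2*np.pi*(1 - np.cos(theta)))  # both ~ 2.888
```

The trace formula extracts the rotation angle of R, which is insensitive to the orientation convention of the loop.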
To understand what the horizontal lifts and geometric phases of loops that visit the boundary of 𝔹 mean, we note that such loops can be pushed to the interior of 𝔹 through infinitesimal perturbations. It is straightforward to see that the horizontal lift (geometric phase) of such loops is indeed equal to the limit of the horizontal lifts (geometric phases) of the perturbed loops. Therefore, although no geometric phase can be extracted physically from this particular class of loops, for the purpose of theoretical completeness, it is possible to consistently define a geometric phase for them.

§.§ Proof of Theorem 2

As shown in lemma 1, L(4,1) admits two S^1 bundle structures, namely, π_1: L(4,1)→ S^2 and π_2 : L(4,1)→ℝℙ^2. Accordingly, loops in S^2 and loops in ℝℙ^2 both have well-defined solid angles in terms of the respective U(1) holonomies. The natural projection from S^2 to ℝℙ^2 preserves the solid angle. This is the core ingredient in the interpretation of the geometric phase and the proof of theorem 2. We prove this fact in lemma 3 and then proceed to prove theorem 2. We denote the natural projection map from S^2 to ℝℙ^2 by p. Lemma 3: Let β be a piece-wise differentiable path in S^2 and p∘β be its projection in ℝℙ^2. The vertical displacements of the horizontal lifts of β and p∘β in L(4,1) are equal. Proof: Let β(t)=v̂(t) and let β̃(t)= (v̂(t), ±û(t)) be its horizontal lift in L(4,1). The projection of β in ℝℙ^2 is p∘β = ±v̂(t). We first show that the path obtained by interchanging the two vectors û and v̂ in β̃, i.e., (û(t), ±v̂(t)), is a horizontal lift of p∘β in L(4,1). From the condition û(t)·v̂(t)=0, it follows that (dû/dt)·v̂(t)+û(t)·(dv̂/dt)=0. Therefore, the paths (v̂(t), ±û(t)) and (û(t), ±v̂(t)) have the same length in L(4,1) (see Metric on L41). Further, (û(t), ±v̂(t)) is a lift of p∘β because π_2∘ (û(t), ±v̂(t))= ±v̂(t) = p∘β(t). We show, by contradiction, that it is indeed a horizontal lift.
If it is not a horizontal lift, let (û'(t), ±v̂(t)) be the unique horizontal lift with the initial value û'(0)=û(0). It must have a shorter length than (û(t), ±v̂(t)). It now follows that (v̂(t), ±û'(t)) is a lift of β with a length shorter than β̃(t)=(v̂(t), ±û(t)), and they have the same initial point, i.e., (v̂(0), ±û'(0)) = (v̂(0), ±û(0)). This contradicts the hypothesis that β̃ is a horizontal lift. Thus, (û(t), ±v̂(t)) is a horizontal lift of p∘β. Let us now consider the lifts of β̃ and of this horizontal lift of p∘β in L(2,1), i.e., (v̂(t), û(t)) and (û(t), v̂(t)) respectively. It is straightforward to see that the vertical displacements are identical and are given by the unique SO(3) operator V which satisfies Vv̂(0)=v̂(1) and Vû(0)=û(1). ▪ We now return to prove theorem 2. Although the pieces β_j in S^2 cannot be attached continuously, their projections in ℝℙ^2 can be attached continuously: p∘β_j(a_j)= ±γ̇(a_j)/|γ̇(a_j)|= p∘β_j+1(a_j). This follows from beta. Indeed, the path obtained by attaching the segments p∘β_j in ℝℙ^2 is α, the projection of γ defined in Projection to rp2. From lemma 3, it follows that the vertical displacements of β_j and p∘β_j are equal. Thus, the vertical displacement of α is given by V=R_n+1R_n ⋯ R_1, where R_j is the vertical displacement of β_j. This is equal to the geometric phase of γ, defined in def of gp. ▪

§.§ Generalized Solid Angle

The notion of generalized solid angle was introduced through definition 3 in sec 2. In the following, we show that this generalized solid angle reduces to the standard solid angle for non-singular loops. Furthermore, we discuss the reasons why this definition is a meaningful generalization of solid angles for singular loops. In particular, we discuss the case when the projected path α is open in ℝℙ^2. When γ is non-singular, its projection α is necessarily closed.
We consider the following three cases separately — (i) γ is non-singular, (ii) γ is singular and α is closed, and (iii) γ is singular and α is an open path. For a non-singular loop, by definition |γ(t)|≠ 0 throughout. Therefore, it consists of only one piece, i.e., a_0=0 and a_1=1. The corresponding projected path in S^2, β = γ/|γ|, is closed. From lemma 3 and the definition of the geometric phase given by def of gp, it follows that the geometric phase (R) of γ is a rotation about β (0) (or equivalently, about α (0)) by an angle equal to the solid angle of γ. This angle is obtained by the expression cos^-1(k̂· Rk̂) for some unit vector k̂ normal to α(0). Thus, the generalized solid angle is consistent with the standard solid angle for non-singular loops. For a singular loop, the standard solid angle is not well-defined. However, if the projection α is closed, i.e., α(0)=α(1), the geometric phase (i.e., the vertical displacement of α) is still a rotation about α(0) — it maps the fiber above α(0) in L(4,1) to itself. Therefore, the angle of rotation about α(0) is well-defined and is the natural extension of solid angles to this case. Finally, we consider the case where α is open. FIG3(b) shows one such example of a loop γ, whose projection is open in ℝℙ^2. That is, γ(0)=γ(1)=0⃗ but ±γ̇(0)/|γ̇(0)|=α(0)≠α(1)=±γ̇(1)/|γ̇(1)|. Solid angles are well-defined for open paths in S^2 by closing them with a geodesic <cit.>, <cit.> (see also ref. <cit.> for an alternative formulation). We adopt a similar technique to define solid angles for open paths in ℝℙ^2. The geometric phase (R) maps the fiber above α(0) to the fiber above α(1) in L(4,1). Indeed, it can be written uniquely as a product of two rotations, one that takes α(0) to α(1) and another that rotates about α(1): R = R_α(1)(Ω_2)R_k̂(Ω_1), where k̂ is a vector normal to α(0) and α(1), and Ω_1 is the angle between α(0) and α(1). The natural definition of the solid angle for such a path is Ω_2, which is given by cos^-1(k̂· Rk̂).
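As a small numerical illustration of this decomposition (NumPy assumed; the specific vectors and angles below are hypothetical choices, not taken from the text), one can build R = R_α(1)(Ω_2)R_k̂(Ω_1) with α(0)=x̂, α(1)=ŷ, k̂=ẑ, and recover Ω_2 from cos^-1(k̂· Rk̂):

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit vector (Rodrigues' formula,
    counterclockwise convention)."""
    axis = np.asarray(axis, dtype=float)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle)*K + (1 - np.cos(angle))*(K @ K)

# Hypothetical open projection: alpha(0) = x-hat, alpha(1) = y-hat, so
# k-hat = z-hat is normal to both and Omega_1 = pi/2 takes x-hat to y-hat.
Omega1, Omega2 = np.pi/2, 0.7
R = rot([0, 1, 0], Omega2) @ rot([0, 0, 1], Omega1)
khat = np.array([0.0, 0.0, 1.0])
print(np.arccos(np.clip(khat @ R @ khat, -1, 1)))  # recovers Omega_2 = 0.7
```

Since R_k̂(Ω_1) fixes k̂ and k̂ ⊥ α(1), the inner product k̂·Rk̂ isolates cos Ω_2, which is why the recovery works.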
§ EXAMPLES AND EXPERIMENTAL CONSIDERATIONS

In this section, we discuss two examples and make a few remarks on how this geometric phase can be observed experimentally. To begin, we present the general procedure to determine the horizontal lift and geometric phase for any given loop, in five steps: 1. Verify that the loop is liftable. If it is not liftable, the horizontal lift and the geometric phase are not well defined. 2. Identify the “zeros" of the loop, i.e., the set {a_0, a_1, ⋯ , a_n+1}= γ^-1(0⃗) at which the loop visits the center of 𝔹. Because the loop is liftable, it will be differentiable at each a_i. 3. Divide the loop into the segments γ_i:[a_i-1, a_i]→𝔹 and determine their projections β_i(t), as defined in the proof of theorem-1. 4. Solve eqn_u for β̃_i(t) (and hence obtain γ̃_i(t)) and eqn_R for X_i(t), and hence obtain R_i. 5. By concatenating γ̃_i, obtain the horizontal lift γ̃, i.e., the lift that minimizes the Fubini-Study length. The geometric phase is, using (<ref>), R=R_n+1R_n⋯ R_1 and the generalized solid angle is given by cos^-1(k̂· Rk̂) for some unit vector k̂ normal to both β_1(0) and β_n+1(1). We now illustrate this procedure by computing the generalized solid angle of the singular loop shown in FIG2(b), as a first example. This is also the loop “a" in FIG7. In Cartesian coordinates, this loop is (using the notation γ_a(t)=(x(t), y(t), z(t))): γ_a(t)=(sin (2π t)sin (π/6)cos(2π t), -sin (2π t)sin (π/6)sin(2π t), sin (2π t)cos (π/6)). In spherical polar coordinates, this loop can be represented as r(t)=sin(2π t) (allowing negative values), θ(t)=π/6 and ϕ(t)=-2π t. We now follow the procedure outlined above to compute its horizontal lift, geometric phase and generalized solid angle: 1. This loop is liftable, because γ_a(t) is differentiable everywhere. 2. This loop has zeros at t=0, 1/2 and 1. Thus, n=1 and a_0=0, a_1=1/2, a_2=1. 3. The segments γ_1 and γ_2 are given by restricting the loop to [0,1/2] and [1/2, 1] respectively.
The corresponding projections β_i, in Cartesian coordinates, are given by: β_1(t) =(1/2cos( 2π t), -1/2sin(2π t), √(3)/2), t∈ (0,1/2); β_2(t)= (-1/2cos( 2π t), 1/2sin(2π t),-√(3)/2), t∈ (1/2,1). FIG6 (a) shows these projections. 4. eqn_u and eqn_R both have analytical solutions. In particular, the solutions to eqn_R are: X_1(t) =R_β_1(t)(-√(3)π t)R_z(2π t) for t ∈ [0, 1/2], and X_2(t) =R_β_2(t)(√(3)π (t-1/2))R_z(2π (t-1/2)) for t ∈ [1/2, 1]. This implies that eqn_u also has an analytic solution. Using the notation from the proof of theorem-1, β̃_i(t)=(β_i(t), ± u_i(t)) for i=1,2, it is straightforward to see that u_1(t)=X_1(t)u_1(0) and u_2(t)=X_2(t)u_1(1/2). The horizontal lifts depend on the choice of the initial point, u_1(0). 5. Thus, R_1=R_β_1(1/2)(-√(3)π /2)R_z(π)= R_z(π)R_β_1(0)(-√(3)π /2) and R_2= R_β_2(1)(√(3)π/2)R_z(π). Noting that R_z(2π)=R_β_1(0)(2π) and β_1(0)=-β_2(1), it follows that the geometric phase is R_a=R_2R_1 = R_β_1(0)((2-√(3))π). Following GSA, the generalized solid angle is Ω =π (2-√(3)). The most straightforward way to obtain the concatenated lift γ̃, starting from an initial point γ̃(0), is to use the equation γ̃(t)= (|γ_a(t)|, β̃(t)), where we represent a quantum state by a chord. The projection of this loop to ℝℙ^2 is α(t)=± (1/2cos( 2π t), -1/2sin(2π t), √(3)/2). If every point in ℝℙ^2 is represented by a diameter of S^2, this loop represents a cone (see FIG6(b)). The generalized solid angle of this loop is indeed equal to the solid angle of the cone, as shown by theorem 2. The geometric phase of this loop, R_a=R_β_1(0)((2-√(3))π), is a rotation about the axis (1/2, 0, √(3)/2) by an angle (2-√(3))π. The axis is the tangent to the loop at t=0. This already indicates that geometric phases of two loops based at the center can be non-commuting.
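The closed-form result R_a=R_β_1(0)((2-√(3))π) can be checked by integrating eqn_R numerically along the two segments and composing the vertical displacements. A sketch (assuming NumPy; the RK4 integrator, finite-difference derivative, and step counts are choices of this illustration):

```python
import numpy as np

def vertical_displacement(vhat, t0, t1, n=2000):
    # RK4 integration of dX/dt = (vdot v^T - v vdot^T) X with X(t0) = identity.
    X, d = np.eye(3), 1e-6
    def A(t):
        v = vhat(t)
        vdot = (vhat(t + d) - vhat(t - d)) / (2 * d)
        return np.outer(vdot, v) - np.outer(v, vdot)
    h = (t1 - t0) / n
    for k in range(n):
        t = t0 + k * h
        k1 = A(t) @ X
        k2 = A(t + h/2) @ (X + h/2 * k1)
        k3 = A(t + h/2) @ (X + h/2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return X

# The two projected segments of the loop gamma_a:
beta1 = lambda t: np.array([0.5*np.cos(2*np.pi*t),
                            -0.5*np.sin(2*np.pi*t), np.sqrt(3)/2])
beta2 = lambda t: -beta1(t)

R1 = vertical_displacement(beta1, 0.0, 0.5)
R2 = vertical_displacement(beta2, 0.5, 1.0)
Ra = R2 @ R1

angle = np.arccos(np.clip((np.trace(Ra) - 1) / 2, -1, 1))
print(angle / np.pi)              # rotation angle in units of pi: ~ 2 - sqrt(3)
print(Ra @ beta1(0) - beta1(0))   # axis is beta_1(0): residual ~ 0
```

The trace and fixed-axis tests are insensitive to the sign convention chosen for rotations, so they probe only the convention-independent content of R_a.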
If, for instance, we choose another loop congruent to this one but oriented differently (see FIG7(b)), its geometric phase will also be a rotation by an angle (2-√(3))π, but about a different axis, because the loop would start with a different tangent. For instance, FIG7(b) shows a loop γ_b, congruent to γ_a, obtained by rotating the latter about the z-axis by π/2. Its geometric phase is R_b=R_z(π/2)R_aR_z(-π/2). Although the two geometric phases do not commute, the product of the geometric phases is not equal to the geometric phase of the concatenation of the loops, because the latter is not liftable. We provide another example, illustrating the non-Abelian nature of this geometric phase, where the geometric phases of two liftable loops do not commute and the concatenations of the loops are liftable. These loops can also be implemented easily in experiments, as we argue below. Let us consider the loops γ_c and γ_d shown in FIG7(c) and FIG7(d) respectively. It suffices to work out one of them in detail. γ_c can be parametrized as: γ_c(t)= (0, 0, 2t) for 0≤ t≤ 1/4; (0, 1/2sin (2π(t-1/4)), 1/2cos (2π(t-1/4))) for 1/4 ≤ t≤ 1/2; ( 1/2sin (2π(t-1/2)), 1/2cos (2π(t-1/2)), 0) for 1/2 ≤ t≤ 3/4; (2(1-t), 0, 0) for 3/4≤ t≤ 1. We now follow the procedure to compute this loop's horizontal lift and geometric phase: 1. Despite its appearance, this loop does not have a non-differentiable point at the center; it visits the center only twice, at t=0 and t=1, under the given parametrization, and it is differentiable at both these times. Therefore, this loop is liftable. 2. The set γ_c^-1(0⃗) is quite obviously {0,1}. 3. There is only one segment, and its projection β (t) is given by: β(t)= (0, 0, 1) for 0≤ t≤ 1/4; (0,sin (2π(t-1/4)), cos (2π(t-1/4))) for 1/4 ≤ t≤ 1/2; (sin (2π(t-1/2)),cos (2π(t-1/2)), 0) for 1/2 ≤ t≤ 3/4; (1, 0, 0) for 3/4≤ t≤ 1. 4. The solutions to eqn_R, written in the axis-angle representation, are: X(t)= 1 for 0≤ t≤ 1/4; R_x(-2π(t-1/4)) for 1/4 ≤ t≤ 1/2; R_z(-2π(t-1/2))R_x(-π/2) for 1/2 ≤ t≤ 3/4; R_z(-π/2)R_x(-π/2) for 3/4≤ t≤ 1.
The lift β̃ is easy to obtain using this solution. 5. The geometric phase is R_c=R_z(-π/2)R_x(-π/2). The lift can be obtained, again, by using the equation γ̃(t)= (|γ_c(t)|, β̃(t)). Similarly, for γ_d, the geometric phase is R_d = R_x(π/2)R_z(-π/2). Thus, their geometric phases do not commute. They both have a generalized solid angle of π/2. In order to experimentally observe this geometric phase, the physical spin-1 system must satisfy the following criteria: (i) the quantum state vector of the system must be controllable, and (ii) a complete state tomography, i.e., a measurement of all the components of the spin vector and the spin fluctuation tensor, must be possible. Physical systems that satisfy both of these criteria include trapped atoms, ions, nitrogen-vacancy centers, superconducting qubits and experiments using nuclear magnetic resonance. In the following, we briefly describe how the loops shown in FIG7 can be induced and their geometric phase measured using trapped atoms. We consider a Bose-Einstein condensate of trapped rubidium atoms, which is a spin-1 system. The spin eigenstates corresponding to eigenvalues 0 and ± 1 are magnetically sensitive. Therefore, using a strong spatial gradient in the magnetic field, a condensate can be spatially separated into three clouds, corresponding to the eigenstates 0 and ± 1. This is also known as Stern-Gerlach separation. The population of atoms in each cloud can be estimated by measuring the intensity of fluorescent light emitted by each cloud. These populations are proportional to |z_0|^2 and |z_±|^2, where (z_-1, z_0, z_+1) is the quantum state vector of the system. The elements of the spin vector and the spin fluctuation tensor can be extracted from such measurements performed in different bases. This technique has been demonstrated in <cit.>. Further, the loops shown in FIG7 can be induced in this system using rotating magnetic fields and microwave pulses.
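The non-commutativity of R_c and R_d is easy to verify with explicit rotation matrices. In the sketch below (NumPy assumed; the counterclockwise sign convention for R_x and R_z is a choice of this illustration, and the conclusion does not depend on it), we also recover the generalized solid angle π/2 of γ_c from cos^-1(k̂· R_ck̂), with k̂=ŷ normal to both α(0)=±ẑ and α(1)=±x̂:

```python
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

Rc = Rz(-np.pi/2) @ Rx(-np.pi/2)   # geometric phase of gamma_c
Rd = Rx(np.pi/2) @ Rz(-np.pi/2)    # geometric phase of gamma_d
print(np.allclose(Rc @ Rd, Rd @ Rc))  # False: the two phases do not commute

# Generalized solid angle of gamma_c, with k-hat = y-hat:
y = np.array([0.0, 1.0, 0.0])
print(np.arccos(np.clip(y @ Rc @ y, -1, 1)))  # ~ pi/2
```

This makes the non-Abelian character of the singular-loop geometric phase concrete with two ordinary 3x3 matrices.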
The curved parts of this loop can be induced by rotating the spin vector using a radio-frequency magnetic field, and the linear parts can be induced using microwave transitions. These techniques have also been demonstrated recently <cit.>. The geometric phase of these loops can be measured by starting with two condensates prepared in the same quantum state, inducing the loop on one of them, and then comparing the measured spin fluctuation tensors of the two. For instance, the ellipsoid at the center of the Bloch sphere degenerates into a disk and has two independent parameters. Therefore, in an experiment where a loop that starts and ends at the center is induced, the geometric phase can be observed by measuring this pair of parameters of the spin fluctuation tensor. Such an experiment has recently been done in a system of ultracold rubidium atoms <cit.>.

§ CONCLUSIONS

To conclude, we have shown that the geometrical properties of a loop traversed by the spin vector inside the Bloch ball can be extracted from the spin fluctuation tensor of a spin-1 quantum state. This property crucially depends on the Fubini-Study metric on ℂℙ^2, and it reflects the deep synchrony between the geometry of real space and the geometry of the abstract space of quantum states. For a loop inside the Bloch ball, we call a path in the space of quantum states a lift if it projects down to the given loop upon evaluating the expectation value of the spin vector at each point, and a horizontal lift if it also minimizes its length under the Fubini-Study metric. Not every loop inside the Bloch ball has a well-defined horizontal lift. A loop is liftable, i.e., has a well-defined horizontal lift, if it has at least one parameterization under which it is differentiable at every visit to the center.
We have defined a geometric phase corresponding to each loop in the Bloch ball in the form of an SO(3) operator. Among liftable loops, those that actually visit the center, which we call singular loops, are particularly interesting because of two non-trivial properties — first, their geometric phase is non-Abelian and, second, they do not have a well-defined solid angle and therefore prompt a generalization of that notion. We have introduced the notion of generalized solid angles, defined for both singular and non-singular loops, which reduces to the standard solid angle for non-singular loops, and used it to provide an interpretation of the geometric phase of liftable singular loops. On the experimental side, we have discussed the possibility of observing this geometric phase in a system of a Bose-Einstein condensate of rubidium atoms. We have argued that in this system, it is possible to use existing experimental techniques to induce the loops shown in FIG7, and that the geometric phase can be observed by comparing the tomographies performed on the quantum state before and after inducing the loop. Although we have considered a spin-1 system, our analysis can be generalized to any spin system. A spin-S system has independent moment tensors up to order 2S. A natural extension of our work is to explore the geometric information carried by these higher order tensors. We end with a remark on the possible theoretical applications of our geometric phase. One of the recent theoretical applications has been in characterizing topological phases of matter. Berry's phase along a loop in the parameter space of a Hamiltonian is given by the integral of the Berry curvature evaluated over the region enclosed by the loop <cit.>. The total integral of the Berry curvature over the entire parameter space (usually the momentum space in condensed matter systems) is a topological invariant of the parameter space known as the Chern number.
A topological phase transition is characterized by a “sudden change” of the Chern number. Recent explorations <cit.>, <cit.> have shown that mixed state generalizations of Berry's phase <cit.>, <cit.> can also be used to characterize topological phase transitions. The geometric phase introduced in this paper could also be used to characterize topological states of 1-dimensional quantum systems. In particular, the nature of singular loops is akin to critical points of quantum phase transitions, an example of which can be seen in <cit.>.

§ ACKNOWLEDGMENTS

We thank T. A. B. Kennedy for discussions and providing extensive feedback during the preparation of the manuscript. We thank M. S. Chapman, Carlos Sa de Melo and Matthew Boguslawski for stimulating insights and discussions. We also thank John Etnyre for help with the formulation. Finally, we acknowledge support from the National Science Foundation, grant no. NSF PHYS-1506294.

§ APPENDIX A: EIGENVALUES OF THE SPIN FLUCTUATION TENSOR

In this section, we derive the expressions for the quantum state vector with a given spin vector and the eigenvalues of its spin fluctuation tensor. The system has an SO(3) symmetry, i.e., if R∈ SO(3) and 𝒟(R)∈ SU(3) is its representation, under the transformation ψ→𝒟(R)ψ, the spin vector transforms as s⃗→ Rs⃗ and the spin fluctuation tensor transforms as 𝐓→ R𝐓R^T. Therefore, for the purpose of deriving the eigenvalues of 𝐓, without loss of generality we may assume that s⃗=(0,0,|s⃗|)^T. Any normalized quantum state vector ψ=(z_-1, z_0, z_+1)^T (i.e., ⟨ψ, ψ⟩=1) with this spin vector must satisfy: z_-1z_0^*+z_0z_+1^* = 0 and |z_+1|^2-|z_-1|^2 = |s⃗|. This follows from the map phi. When 0<|s⃗|<1, the solutions to the above equations are ψ = ( √((1-|s⃗|)/2) e^-iθ, 0, √((1+|s⃗|)/2) e^iθ )^T. Each θ∈ [0,π) produces a distinct quantum state with spin vector equal to s⃗.
From tensor, the corresponding spin fluctuation tensor can be computed: 𝐓= ([ 1/2+√(1-|s⃗|^2)/2cos 2θ √(1-|s⃗|^2)/2sin 2θ 0; √(1-|s⃗|^2)/2sin 2θ 1/2-√(1-|s⃗|^2)/2cos 2θ 0; 0 0 1-|s⃗|^2; ]). One of the eigenvectors of this matrix is s⃗=(0, 0, |s⃗|)^T with an eigenvalue 1-|s⃗|^2. The other two eigenvalues are easily seen to be (1±√(1-|s⃗|^2))/2.

§ APPENDIX B: THE CONNECTION FORM OF THIS GEOMETRIC PHASE

The well-known parallel transport of tangent vectors on a sphere is formulated using the affine connection. However, the geometric phase introduced in this paper cannot be formulated using an affine connection, because its features are compatible only with the Ehresmann connection. In order to put our geometric phase in perspective with the other well-known examples, and to bridge the gap between the affine connection and the Ehresmann connection, in this section we show how the former naturally generalizes to the latter and that minimizing the length is a concise way of defining parallel transports. We begin with an overview of the affine connection. Let M be an n-manifold and TM be its tangent bundle. At a point q∈ M, with local coordinates (x^1,x^2, ⋯ , x^n), the tangent plane T_qM is an n dimensional vector space spanned by e_μ= ∂/∂ x^μ, μ = 1,2,⋯, n. Central to an affine connection is the covariant derivative, which comes from “differentiating" the basis vectors, ∂/∂ x^νe_μ=Γ^σ_μνe_σ. In the more formal language, the covariant derivative, denoted by “D", is defined as De_μ = Γ^σ_μνe_σ⊗ dx^ν. The basic problem that the affine connection is designed to solve is to define a parallel transport γ̃, i.e., a horizontal lift, in TM for a path γ in M. This can be restated as follows: for a point q∈ M on γ with a given lift (q, v)∈ TM on γ̃ (where v=v^μe_μ∈ T_qM) and the local tangent vector y=y^μe_μ of γ, along which q is moved, how do we change the coordinates v^μ in order to maintain the vector v parallel to itself?
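These expressions are easy to verify numerically. The sketch below (assuming NumPy, and assuming the basis ordering (z_-1, z_0, z_+1) for the spin-1 matrices, consistent with ψ above) builds 𝐓 directly from its definition and confirms the eigenvalues 1-|s⃗|^2 and (1±√(1-|s⃗|^2))/2, independently of θ:

```python
import numpy as np

# Spin-1 matrices in the ordered basis (z_{-1}, z_0, z_{+1}); this basis
# ordering is an assumption of the sketch, chosen to match psi above.
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]]) / np.sqrt(2)
Sz = np.diag([-1.0, 0.0, 1.0])
S = [Sx, Sy, Sz]

def fluctuation_tensor(psi):
    """T_ij = <{S_i, S_j}>/2 - <S_i><S_j> for a normalized state psi."""
    avg = lambda A: np.real(psi.conj() @ A @ psi)
    return np.array([[avg((Si @ Sj + Sj @ Si) / 2) - avg(Si) * avg(Sj)
                      for Sj in S] for Si in S])

s, theta = 0.6, 0.4   # |s| and the phase angle theta of the state above
psi = np.array([np.sqrt((1 - s)/2) * np.exp(-1j*theta), 0.0,
                np.sqrt((1 + s)/2) * np.exp(1j*theta)])

evals = np.sort(np.linalg.eigvalsh(fluctuation_tensor(psi)))
c = np.sqrt(1 - s**2)
print(evals)                                   # ~ [0.1, 0.64, 0.9]
print(np.sort([(1 - c)/2, 1 - s**2, (1 + c)/2]))
```

Varying θ rotates 𝐓 within the plane normal to s⃗ but leaves its eigenvalues unchanged, as the closed-form matrix above shows.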
In other words, we are to decide the local tangent vector t of γ̃ that moves the point (q,v)∈ TM such that q is moved along y and v remains parallel to itself. This tangent vector is in the 2n dimensional tangent plane of TM at (q,v), i.e., t∈ T_(q,v)(TM). This space is spanned by the basis vectors (e_1, e_2, ⋯, e_n, f_1, f_2, ⋯ , f_n), where f_μ=∂/∂ v^μ. Quite obviously, t=y^μe_μ+z^νf_ν, for a suitable choice of coefficients z^ν such that Dv(t)=0. This condition is the parallel transport criterion. Using Dv= D(v^μe_μ)=dv^μ e_μ+ v^μ De_μ, we get z^μe_μ+ Γ^σ_μνe_σ y^νv^μ =0. For convenience, the connection matrix is defined as ω_ν^μ=Γ^μ_νσy^σ, in terms of which z^μ=-ω^μ_νv^ν <cit.>. In order to move towards the Ehresmann connection, we rewrite the affine connection in a coordinate-free form. Two observations are crucial. First, in the 2n dimensional tangent space T_(q,v)(TM), the n dimensional subspace spanned by f_μ represents changes to the tangent v alone and is therefore known as the vertical subspace of T_(q,v)(TM). And second, the special vectors t∈ T_(q,v)(TM) that satisfy Dv(t)=0 also form an n dimensional subspace, spanned by {e_μ-ω_μ^νf_ν:μ= 1, 2, ⋯, n}. This space complements the vertical subspace and is known as the horizontal subspace of T_(q,v)(TM). It is straightforward to see that all that the affine connection does is to identify this horizontal subspace at each point of TM. Indeed, any n dimensional subspace of T_(q,v)(TM) that complements the vertical subspace uniquely defines the elements ω_μ^ν of the connection matrix. This follows from the fact that the vector e_μ can be written uniquely as a sum of two vectors, one in the horizontal subspace and one in the vertical subspace. Thus, in the coordinate-free form, a connection is a specification of a horizontal subspace at each point of TM that complements the vertical subspace. Indeed, it is an n dimensional distribution over TM. This definition extends to any fiber bundle and is known as the Ehresmann connection.
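The parallel transport criterion above can be made concrete with a short computation (NumPy assumed; the colatitude and step count are illustrative choices). On the unit sphere with coordinates (θ, φ) and metric ds^2 = dθ^2 + sin^2θ dφ^2, transporting a tangent vector around the circle θ=θ_0 by integrating dv^μ/dφ = -Γ^μ_νφ v^ν reproduces the classic holonomy: in the orthonormal frame the vector rotates by 2π cos θ_0.

```python
import numpy as np

# Nonzero Christoffel symbols of the round sphere (up to symmetry):
# Gamma^theta_{phi phi} = -sin(theta) cos(theta),
# Gamma^phi_{theta phi} = cos(theta)/sin(theta).
theta0, N = 1.0, 4000
dphi = 2 * np.pi / N
v = np.array([1.0, 0.0])          # (v^theta, v^phi), initially e_theta

def zdot(v):
    # right-hand side of dv^mu/dphi = -Gamma^mu_{nu phi} v^nu at theta0
    return np.array([np.sin(theta0) * np.cos(theta0) * v[1],
                     -np.cos(theta0) / np.sin(theta0) * v[0]])

for _ in range(N):                # RK4 steps in phi around the full circle
    k1 = zdot(v)
    k2 = zdot(v + dphi/2 * k1)
    k3 = zdot(v + dphi/2 * k2)
    k4 = zdot(v + dphi * k3)
    v = v + dphi/6 * (k1 + 2*k2 + 2*k3 + k4)

# In the orthonormal frame (v^theta, sin(theta0) v^phi) the vector has
# rotated by 2*pi*cos(theta0):
print(v[0], np.cos(2*np.pi*np.cos(theta0)))
```

The 2π cos θ_0 rotation is equivalent to the 2π(1−cos θ_0) solid-angle deficit quoted elsewhere in the text, the two differing by a full turn.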
Apart from being coordinate-free, it also has two other advantages over the affine and Levi-Civita connections. The covariant derivative, upon which the affine and Levi-Civita connections are based, is rooted in differentiating vector fields over a manifold (i.e., sections of the vector bundle). In a fiber bundle where the fiber has non-trivial topology, in general there are no global sections. The Ehresmann connection is therefore the most natural way of defining horizontal lifts. Furthermore, the Ehresmann connection generalizes naturally to structures that are not bundles: the dimension of the horizontal space can be non-uniform. Such a connection, however, cannot be traced back to a connection matrix, and therefore regarding a connection as a distribution becomes inevitable. If a fiber bundle has a natural Riemannian metric, the horizontal subspace can be defined as the orthogonal complement of the vertical subspace. The resulting horizontal lifts minimize the path length: the tangent vector to the horizontal lift is always confined to the local horizontal subspace, and because the vertical subspace is orthogonal to it, any added component would only make the lift longer. We illustrate with two examples that, by minimizing the length of a lift, we recover the standard connection form: * Berry phase for spin-1/2 systems: Let n̂(t) be a loop on the Bloch sphere and ψ(t) be one of its lifts in ℂ^2. We assume that ψ(t) is normalized. All other lifts of n̂(t) with the same initial point are of the form e^ix(t)ψ(t), where x(t) is a real scalar with x(0)=0. The length s of these lifts is given, under the Euclidean metric, by s=∫√(||ψ̇(t)||^2 +|ẋ|^2+ iẋ(⟨ψ̇, ψ⟩ - ⟨ψ, ψ̇⟩)) dt. The first order functional derivative of s with respect to x(t) is proportional to ⟨ψ̇, ψ⟩ - ⟨ψ, ψ̇⟩, and it should vanish if ψ is a horizontal lift, according to the new definition. Together with ||ψ(t)||=1, we obtain the correct parallel transport criterion ⟨ψ̇, ψ⟩=0, i.e., the Berry connection form <cit.>.
The same argument applies for the Aharonov-Anandan phase <cit.>. * Wilczek-Zee phase <cit.>: We follow the generalization of the Wilczek-Zee phase in <cit.>, and show that the connection form thereof can be derived by minimizing the length. Let us assume that the Hilbert space can be decomposed into a direct sum of two subspaces with dimensions n and m respectively (denoted by V_n and V_m). V_n is the time dependent n dimensional eigenspace of the Hamiltonian. We are to define parallel transport of a vector ψ∈ V_n(0) in time, i.e., we are to define a path ψ(t)∈ V_n(t) with ψ(0)=ψ and minimal length. Given any lift, ψ(t), we can construct other such lifts by a transformation U(t)ψ(t), where U(t) is a unitary operator with all of its non-trivial eigenvectors in V_n(t). The length of such a path is s=∫√(||ψ̇(t)||^2 + ⟨U̇ψ|U̇ψ⟩ + ⟨ U^†U̇ψ|ψ̇⟩ + ⟨ψ̇ |U^†U̇ψ⟩) dt. Again, setting U^†U̇=iH(t) for some Hermitian operator H(t), the first order variation of s is proportional to ⟨ H(t)ψ|ψ̇⟩ - ⟨ψ̇ |H(t)ψ⟩. H(t) is an arbitrary Hermitian operator with zero eigenvalues in V_m(t). Therefore, s is stationary iff ψ̇ is orthogonal to V_n(t), indeed the same condition as equation (7) in ref. <cit.>. In our system, at most of the points in ℂℙ^2, the vertical subspace is one dimensional and the horizontal subspace, defined by the Fubini-Study metric, is three dimensional. However, at some points, i.e., within the pre-image of the center of the Bloch ball, the vertical and the horizontal subspaces are both two dimensional. This feature adds all the non-trivialities to the system and therefore we have to use the Ehresmann connection in this problem.
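Returning to the spin-1/2 example above, the parallel-transport criterion ⟨ψ̇, ψ⟩=0 can be checked numerically. The sketch below (an illustrative computation with our own function names) transports a spin-1/2 coherent state around a circle of constant polar angle θ on the Bloch sphere and accumulates the gauge-invariant discrete (Pancharatnam) product of overlaps; the resulting phase equals half the enclosed solid angle, π(1−cos θ), up to an overall sign convention.

```python
import numpy as np

def coherent_state(theta, phi):
    # spin-1/2 state pointing along (theta, phi) on the Bloch sphere
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def geometric_phase(theta, n=4000):
    """Phase of prod_k <psi_k|psi_{k+1}> around a circle of latitude.
    The product is gauge invariant, hence equal to the holonomy of the
    Berry connection in the limit of a fine discretization."""
    phis = 2 * np.pi * np.arange(n) / n
    states = [coherent_state(theta, p) for p in phis]
    overlap = 1.0 + 0j
    for k in range(n):
        overlap *= np.vdot(states[k], states[(k + 1) % n])
    return np.angle(overlap)
```

For θ=1 rad this returns π(1−cos 1)≈1.444, i.e., half the solid angle subtended by the loop at the origin, as expected for spin 1/2.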
Farzad.Mahfouzi@gmail.com Department of Physics and Astronomy, California State University, Northridge, CA, USA nick.kioussis@csun.edu Department of Physics and Astronomy, California State University, Northridge, CA, USA Motivated by the need to understand current-induced magnetization dynamics at the nanoscale, we have developed a formalism, within the framework of the Keldysh Green function approach, to study the current-induced dynamics of a ferromagnetic (FM) nanoisland overlayer on a spin-orbit-coupling (SOC) Rashba plane. In contrast to the commonly employed classical micromagnetic LLG simulations, the magnetic moments of the FM are treated quantum mechanically. We obtain the density matrix of the whole system consisting of conduction electrons entangled with the local magnetic moments and calculate the effective damping rate of the FM. We investigate two opposite limiting regimes of FM dynamics: (1) The precessional regime where the magnetic anisotropy energy (MAE) and precessional frequency are smaller than the exchange interactions, and (2) The local spin-flip regime where the MAE and precessional frequency are comparable to the exchange interactions. In the former case, we show that due to the finite size of the FM domain, the “Gilbert damping” does not diverge in the ballistic electron transport regime, in sharp contrast to Kambersky's breathing Fermi surface theory for damping in metallic FMs. In the latter case, we show that above a critical bias the excited conduction electrons can switch the local spin moments resulting in demagnetization and reversal of the magnetization. Furthermore, our calculations show that the bias-induced antidamping efficiency in the local spin-flip regime is much higher than that in the rotational excitation regime.
72.25.Mk, 75.70.Tj, 85.75.-d, 72.10.Bg Current Induced Damping of Nanosized Quantum Moments in the Presence of Spin-Orbit Interaction Nicholas Kioussis December 30, 2023 ==============================================================================================§ INTRODUCTION
Understanding the current-induced magnetization switching (CIMS) at the nanoscale is mandatory for the scalability of non-volatile magnetic random access memory (MRAM) of the next-generation miniaturized spintronic devices. However, the local magnetic moments of a nanoisland require quantum mechanical treatment rather than the classical treatment of magnetization commonly employed in micromagnetic simulations, which is the central theme of this work. The first approach of CIMS employs the spin transfer torque (STT) <cit.> in magnetic tunnel junctions (MTJ) consisting of two ferromagnetic (FM) layers (i.e., a switchable free layer and a fixed layer) separated by an insulating layer, which involves spin-angular-momentum transfer from conduction electrons to local magnetization <cit.>. Although STT has proven very successful and brings the important benefit of improved scalability, it requires high current densities (≥ 10^10 A/cm^2) that are uncomfortably high for the MTJs involved and hence high power consumption. The second approach involves an in-plane current in a ferromagnet-heavy-metal bilayer where the magnetization switching is through the so-called spin-orbit torque (SOT) for both out-of-plane and in-plane magnetized layers. <cit.> The most attractive feature of the SO-STT method is that the current does not flow through the tunnel barrier, thus offering potentially faster and more efficient magnetization switching compared to their MTJ counterparts. As in the case of STT, the SO-STT has two components: a field-like and an antidamping component.
While the field-like component reorients the equilibrium direction of the FM, the antidamping component provides the energy necessary for the FM dynamics by either enhancing or decreasing the damping rate of the FM depending on the direction of the current relative to the magnetization orientation as well as the structural asymmetry of the material. For sufficiently large bias the SOT can overcome the intrinsic damping of the FM leading to excitation of the magnetization precession. <cit.> The underlying mechanism of the SOT for both out-of-plane and in-plane magnetized layers remains elusive and is still under debate. It results from either the bulk Spin Hall Effect (SHE) <cit.>, or the interfacial Rashba-type spin-orbit coupling, <cit.> or both <cit.>. Motivated by the necessity of scaling down the size of magnetic bits and increasing the switching speed, the objective of this work is to develop a fully quantum mechanical formalism, based on the Keldysh Green function (GF) approach, to study the current-induced local moment dynamics of a bilayer consisting of a FM overlayer on a SOC Rashba plane, shown in Fig. <ref>. Unlike the commonly used approaches to investigate the magnetization dynamics of quantum FMs, such as the master equation <cit.>, the scattering <cit.> or quasi-classical <cit.> methods, our formalism allows the study of magnetization dynamics in the presence of nonequilibrium flow of electrons. We consider two different regimes of FM dynamics: In the first case, which we refer to as the single domain dynamics, the MAE and the precession frequency are smaller than the exchange interactions, and the FM can be described by a single quantum magnetic moment, of a typically large spin, S, whose dynamics are governed mainly by the quantized rotational modes of the magnetization.
We show that the magnetic degrees of freedom entering the density matrix of the conduction electron-local moment entangled system simply shift the chemical potential of the Fermi-Dirac distribution function by the rotational excitation energies of the FM from its ground state. We also demonstrate that the effective damping rate is simply the net current along the auxiliary m-direction, where m= -S, -S+1, …, +S are the eigenvalues of the total S_z of the FM. Our results for the change of the damping rate due to the presence of a bias voltage are consistent with the anti-damping SOT of classical magnetic moments, <cit.>, where due to the Rashba spin-momentum locking, the anti-damping SOT, to lowest order in the magnetic exchange coupling, is of the form, m⃗×(m⃗×ŷ), where ŷ is an in-plane unit vector normal to the transport direction. In the adiabatic and ballistic transport regimes, due to the finite S value of the nanosize ferromagnet, our formalism yields a finite “Gilbert damping”, in sharp contrast to Kambersky's breathing Fermi surface theory for damping in metallic FMs. <cit.> On the other hand, Costa and Muniz <cit.> and Edwards <cit.> demonstrated that the problem of divergent Gilbert damping is removed by taking into account the collective excitations. Furthermore, Edwards points out <cit.> the necessity of including the effect of long-range Coulomb interaction in calculating damping for large SOC. In the second case, which corresponds to independent local moment dynamics, the FM has a large MAE and hence the rotational excitation energy is comparable to the local spin-flip excitation (exchange energy). We investigate the effect of bias on the damping rate of the local spin moments. We show that above a critical bias voltage the flowing conduction electrons can excite (switch) the local spin moments resulting in demagnetization and reversal of the magnetization.
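For reference, the classical anti-damping SOT form quoted above, m⃗×(m⃗×ŷ), can be sketched in a few lines (an illustration of the vector structure only, not of the transport calculation developed below); the torque is orthogonal to m⃗ and vanishes when m⃗ is parallel to ŷ.

```python
import numpy as np

def antidamping_sot(m, e_y=(0.0, 1.0, 0.0)):
    """Direction of the anti-damping spin-orbit torque, m x (m x e_y)."""
    m = np.asarray(m, dtype=float)
    return np.cross(m, np.cross(m, np.asarray(e_y)))
```

For unit m⃗, the identity m⃗×(m⃗×ŷ) = m⃗(m⃗·ŷ) − ŷ makes both properties explicit: the torque lies in the plane spanned by m⃗ and ŷ and has no component along m⃗.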
Furthermore, we find that, in sharp contrast to the single domain precessional dynamics, the current-induced damping is nonzero for in-plane and out-of-plane directions of the equilibrium magnetization. The bias-induced antidamping efficiency in the local moment switching regime is much higher than that in the single domain precessional dynamics. The paper is organized as follows. In Sec. <ref> we present the Keldysh formalism for the density matrix of the entangled quantum moment-conduction electron system and the effective damping/antidamping torque. In Sec. <ref> we present results for the current-induced damping rate in the single domain regime. In Sec. <ref> we present results for the current-induced damping rate in the independent local moment regime. We conclude in Sec. <ref>. § THEORETICAL FORMALISM Fig. <ref> shows a schematic view of the ferromagnetic heterostructure under investigation consisting of a 2D ferromagnet-Rashba plane bilayer attached to two semi-infinite normal (N) leads whose chemical potentials are shifted by the external bias, V_bias. The magnetization of the FM precesses around the axis specified by the unit vector, n⃗_M, with frequency ω and cone angle θ. The FM has length, L_x^FM, along the transport direction. The total Hamiltonian describing the coupled conduction electron-localized spin moment system in the heterostructure in Fig. <ref> can be written as, H_tot=∑_rr',σσ'Tr_{s_d}[(1_sĤ^σσ'_rr'+δ_rr'δ_σσ'1_sμ_r+δ_rr' J_sdσ⃗_σσ'·s⃗_d(r)+δ_σσ'δ_rr'H_M)ψ^*_{s_d'}r'σ'ψ_{s_d}rσ]. Here, s⃗_d(r) is the local spin moment at atomic position r, the trace is over the different configurations of the local spin moments, {s_d}, ψ_{s_d}rσ=|{s_d}⟩⊗ψ^e_rσ is the quasi-particle wave-function associated with the conduction electron (ψ^e) entangled to the FM states (|{s_d}⟩), J_sd is the s-d exchange interaction, 1_s is the identity matrix in spin configuration space, and σ̂_x,y,z are the Pauli matrices.
We use the convention that, except for r, bold symbols represent operators in the magnetic configuration space and symbols with hat represent operators in the single particle Hilbert space of the conduction electrons. The magnetic Hamiltonian H_M is given by H_M= -gμ_B∑_rB⃗^ext(r)·s⃗_d(r)-∑_⟨r,r'⟩J^dd_rr'/s_d^2s⃗_d(r')·s⃗_d(r)- ∑_rJ_sd/s_ds⃗_c(r)·s⃗_d(r), where the first term is the Zeeman energy due to the external magnetic field, the second term is the magnetic coupling between the local moments, and the third term is the energy associated with the intrinsic magnetic field acting on the local moment, s⃗_d(r), induced by the local spin of the conduction electrons, s⃗_c(r). The Rashba model of a two-dimensional electron gas with spin-orbit coupling interacting with a system of localized magnetic moments has been extensively employed <cit.> to describe the effect of enhanced spin-orbit coupling solely at the interface on the current-induced torques in ultrathin ferromagnetic (FM)/heavy metal (HM) bilayers. The effects of (i) the ferromagnet inducing a moment in the HM and (ii) the HM with strong spin-orbit coupling inducing a large spin-orbit effect in the ferromagnet (Rashba spin-orbit coupling) lead to a thin layer where the magnetism and the spin-orbit coupling coexist. <cit.> The single-electron tight-binding Hamiltonian <cit.> for the conduction electrons of the 2D Rashba plane, H^σσ'_rr', which is finite along the transport direction x and infinite along the y direction, is of the form, Ĥ^σσ'_xx'(k_ya)=[tcos(k_ya)δ_σσ'-t_sosin(k_ya)σ^x_σσ']δ_xx'+t(δ_x,x'+1+δ_x+1,x')δ_σσ'+it_so(δ_x,x'+1-δ_x+1,x')σ^y_σσ'. Here, x,x' denote atomic coordinates along the transport direction, a is the in-plane lattice constant, and t_so is the Rashba SOI strength.
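The tight-binding Hamiltonian above can be assembled explicitly for a finite chain of N_x sites at fixed transverse momentum k_y. The sketch below (our own illustrative construction; the function name and spin-block layout are choices, not the authors' code) builds the matrix, which can then be diagonalized to obtain the Rashba-split bands.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rashba_chain(nx, kya, t=1.0, t_so=0.5):
    """H^{sigma sigma'}_{xx'}(k_y a) of the equation above:
    onsite block  t cos(k_y a) 1 - t_so sin(k_y a) sigma_x,
    hopping block t 1 + i t_so sigma_y for x = x' + 1 (h.c. reverse)."""
    onsite = t * np.cos(kya) * np.eye(2) - t_so * np.sin(kya) * SX
    hop = t * np.eye(2) + 1j * t_so * SY
    H = np.kron(np.eye(nx, dtype=complex), onsite)
    for x in range(nx - 1):
        H[2 * (x + 1):2 * (x + 2), 2 * x:2 * (x + 1)] = hop
        H[2 * x:2 * (x + 1), 2 * (x + 1):2 * (x + 2)] = hop.conj().T
    return H
```

With t=1 eV and t_so=0.5 eV as in the text, np.linalg.eigvalsh(rashba_chain(50, 0.3)) yields the spin-split spectrum of the finite strip; setting t_so=0 restores the twofold spin degeneracy.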
The values of the local effective exchange interaction, J_sd= 1 eV, and of the nearest-neighbor hopping matrix element, t=1 eV, represent a realistic choice for simulating the exchange interaction of 3d ferromagnetic transition metals and their alloys (Fe, Co). <cit.> The Fermi energy, E_F=3.1 eV, is about 1 eV below the upper band edge at 4 eV, consistent with the ab initio calculations of the (111) Pt surface <cit.>. Furthermore, we have used t_so=0.5 eV which yields a Rashba parameter, α_R = t_soa ≈1.4 eVÅ  (a=2.77 Å is the in-plane lattice constant of the (111) Pt surface) consistent with the experimental value of about 1-1.5 eVÅ <cit.> and the ab initio value of 1 eVÅ <cit.>. However, because other experimental measurements for Pt/Co/Pt stacks report <cit.> a Rashba parameter which is an order of magnitude smaller, in Fig. 3 we show the damping rate for different values of the Rashba SOI. For the results in Sec. <ref>, we assume a real-space tight-binding description for propagation along the y-axis. The single particle propagator of the coupled electron-spin system is determined from the equation of motion of the retarded Green function, (E-iη-μ̂-Ĥ-H_M- J_sd/2σ̂⃗̂·ŝ⃗̂_d)Ĝ^r(E)=1̂, where η is the broadening of the conduction electron states due to inelastic scattering from defects and/or phonons, and for simplicity we ignore writing the identity matrices 1̂ and 1 in the expression. The density matrix of the entire system consisting of the noninteracting electrons (fermionic quasi-particles) and the local magnetic spins is determined (see Appendix <ref> for details of the derivation for a single FM domain) from the expression, ρ̂=∫dE/πĜ^r(E)ηf(E-μ̂-H_M)Ĝ^a(E). It is important to emphasize that Eq.
(<ref>) is the central result of this formalism which demonstrates that the effect of the local magnetic degrees of freedom is to shift the chemical potential of the Fermi-Dirac distribution function by the eigenvalues, ε_m, of H_M|m⟩=ε_m|m⟩, i.e., the excitation energies of the FM from its ground state. Here, |m⟩ are the eigenstates of the Heisenberg model describing the FM. The density matrix can then be used to calculate the local spin density operator of the conduction electrons, [s⃗_c(r)]^mm'=∑_ss'ρ_ss',rr^mm'σ⃗_ss'/2, which along with Eqs. (<ref>), (<ref>), and (<ref>) form a closed set of equations that can be solved self-consistently. Since the objective of this work is the damping/anti-damping (transitional) behavior of the FM in the presence of bias voltage, we only present results for the first iteration. Eq. (<ref>) shows that the underlying mechanism of the damping phenomenon is the flow of conduction electrons from states of higher chemical potential to those of lower one, where the FM state relaxes to its ground state by transferring energy to the conduction electrons. Therefore, the FM dynamical properties in this formalism are completely governed by its coupling to the conduction electrons, where conservation of energy and angular momentum dictates the excitations as well as the fluctuations of the FM state through the Fermi distribution function of the electrons coupled to the reservoirs. This is different from the conventional Boltzmann distribution function which is commonly used to investigate the thermal and quantum fluctuations of the magnetization. Due to the fact that the number of magnetic configurations (i.e., the size of the FM Hilbert space) grows exponentially with the dimension of the system, it becomes prohibitively expensive to consider all possible eigenstates of the H_M operator. Thus, in the following sections we consider two opposite limiting cases of magnetic configurations.
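The structure of the density-matrix expression above can be checked numerically in the simplest setting: for a toy Hermitian Hamiltonian with a uniform chemical potential and no magnetic degrees of freedom, the energy integral of G^r η f G^a must reduce to the equilibrium density matrix f(H−μ). The sketch below is our own toy check, using the standard retarded-GF convention G^r=(E−H+iη)^{-1} and a finite energy grid, and holds in the small-η limit.

```python
import numpy as np

def fermi(E, mu=0.0, kT=0.1):
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def keldysh_density(H, eta=0.02, mu=0.0, kT=0.1):
    """rho = (1/pi) \int dE G^r(E) eta f(E - mu) G^a(E), equilibrium limit
    of the density-matrix formula above (no magnetic shifts)."""
    Es = np.linspace(-12.0, 12.0, 12001)
    I = np.eye(H.shape[0])
    rho = np.zeros_like(I, dtype=complex)
    for E in Es:
        Gr = np.linalg.inv((E + 1j * eta) * I - H)
        rho += Gr @ (eta * fermi(E, mu, kT) * Gr.conj().T)
    return rho * (Es[1] - Es[0]) / np.pi
```

As η→0 the Lorentzian (1/π) η G^r G^a tends to the spectral projector δ(E−H), so the integral recovers ρ = f(H−μ); with the magnetic shifts H_M restored, each FM eigenstate simply sees its own shifted chemical potential, which is the content of the equation above.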
In the first case we assume a single magnetic moment for the whole FM, which is valid for small FMs with strong exchange coupling between the local moments and small MAE. In this case the dynamics is mainly governed by the FM rotational modes and local spin flips can be ignored. In the second case we ignore the correlations between different local moments and employ a mean-field approximation: at each step we focus on an individual atom, treating the local moment under consideration as a quantum mechanical object while the rest of the moments are treated classically. We should mention that a more accurate modeling of the system should contain both the single domain rotation of the FM and the local spin flips, as well as the effect of nonlocal correlations between the local moments and the conduction electrons, which are ignored in this work. § SINGLE DOMAIN ROTATIONAL SWITCHING In the regime where the energy required for the excitation of a single local spin moment (≈ meV) is much larger than the MAE (≈μ eV), the low-energy excited states correspond to rotation of the total angular momentum of the FM acting as a single domain and the effects of local spin flips, described by the second term in Eq. (<ref>), can be ignored. In this regime all of the local moments behave collectively and the local moment operators can be replaced by the average spin operator, s⃗_d(r)=∑_r's⃗_d(r')/N_d=s_dS⃗/S, where N_d is the number of local moments and S⃗ is the total angular momentum with amplitude S. The magnetic energy operator is given by H_M=-B⃗·S, where, B⃗=gμ_BB⃗^ext+J_sds⃗_c. Here, for simplicity we assume s⃗_c to be scalar and independent of the FM state. The eigenstates of the H_M operator are then simply the eigenstates, |S,m⟩, of the total angular momentum S_z, with eigenvalues mω = -Sω,… ,+Sω, where ω=B_z is the Larmor frequency. Thus, the wave function of the coupled electron-spin configuration system, shown schematically in Fig.
<ref>, is of the form, ψ_ms'r(t)=|S,m⟩⊗ψ_s'r(t). One can see that the magnetic degrees of freedom corresponding to the different eigenstates of the S_z operator enter as an additional auxiliary dimension for the electronic system, where the variation of the magnetic energy, ⟨ S,m|H_M|S,m⟩=mω, shifts the chemical potentials of the electrons along this dimension. The gradient of the chemical potential along the auxiliary direction is the Larmor frequency (μ eV≈ GHz), which appears as an effective “electric field” in that direction. Substituting Eq. (<ref>) in Eq. (<ref>)(b) and averaging over one precession period we find that the average rate of angular momentum loss/gain, which we refer to as the effective “damping rate” per magnetic moment, can be written as 𝒯_m =1/2(𝒯^-_m-𝒯^+_m), where 𝒯^±_m =J_sd/2SN_dTr_el[ σ̂^∓S^±_mρ̂_m,m± 1] is the current along the auxiliary m-direction in Fig. <ref> associated with the m↔ m± 1 transition of the total S_z of the FM. Here, Tr_el is the trace over the conduction electron degrees of freedom, and S_m^±=√(S(S+1)-m(m±1)) are the ladder operator matrix elements. It is important to note that within this formalism the damping rate is simply the net current across the m^th-layer along the auxiliary direction associated with the transition rate of the FM from state m to its nearest-neighbor states (m± 1). Fig. <ref> shows the damping rate as a function of the precession cone angle, θ = cos^-1(m/S), for different values of bias and for an in-plane effective magnetic field (a) along and (b) normal to the transport direction, and (c) an out-of-plane magnetic field. For cases (a) and (c) the damping rate is negative and relatively independent of bias for low bias values.
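The matrix elements S^±_m=√(S(S+1)−m(m±1)) that couple neighboring m-layers are easy to tabulate. A minimal sketch (illustrative; the basis ordering m=−S,…,+S and ħ=1 are our choices) builds S_z and the ladder matrices and checks the su(2) algebra:

```python
import numpy as np

def spin_matrices(S):
    """S_z, S^+ and S^- in the basis |S, m>, m = -S, ..., +S (hbar = 1)."""
    ms = np.arange(-S, S + 1)
    Sz = np.diag(ms.astype(float))
    Sp = np.zeros((ms.size, ms.size))
    for i, m in enumerate(ms[:-1]):
        # <S, m+1| S^+ |S, m> = sqrt(S(S+1) - m(m+1))
        Sp[i + 1, i] = np.sqrt(S * (S + 1) - m * (m + 1))
    return Sz, Sp, Sp.T
```

The off-diagonal elements of S^± are precisely the factors weighting the currents 𝒯^±_m between adjacent layers of the auxiliary m-direction.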
A negative damping rate implies that the FM relaxes towards the magnetic field by losing its angular momentum, similar to the Gilbert damping rate term in the classical LLG equation, where its average value over the azimuthal precession angle, φ=ω t, is of the form, 𝒯=-α s_d∫dφ/2πm⃗×(m⃗×B⃗)·n⃗_M, which is nonzero (zero) when the unit vector n⃗_M is along (perpendicular to) the effective magnetic field. The dependence of the damping rate on the bias voltage when the effective magnetic field B⃗ is in-plane and normal to the transport direction can be understood by the spin-flip reflection mechanism accompanied by Rashba spin-momentum locking described in Ref. <cit.>. One can see that a large enough bias can result in a sign reversal of the damping rate and hence a magnetization reversal of the FM. It is worth mentioning that, due to the zero-point quantum fluctuations of the magnetization, at θ=0,π (i.e., m=± S) we have 𝒯≠0, which is inversely proportional to the size of the magnetic moment, S. In Fig. <ref>(a) we present the effective damping rate versus bias for different values of the Rashba SOC. The results show a linear response regime with respect to the bias voltage where both the zero-bias damping rate and the slope, d𝒯/dV, increase with the Rashba SOC. This is consistent with Kambersky's mechanism of Gilbert damping due to the SOC of itinerant electrons, <cit.> and the SOT mechanism <cit.>. Fig. <ref>(b) shows that in the absence of bias voltage the damping rate is proportional to t_so^2 and the effect of the spin current pumped into the left and right reservoirs is negligible.
This result of the t_so^2 dependence of the zero-bias damping rate is in agreement with recent calculations of Costa and Muniz <cit.> and Edwards <cit.> which took into account the collective excitations. In the presence of an external bias, 𝒯 varies linearly with the SOC, suggesting that to the lowest order it can be fitted to 𝒯=sin^2(θ) t_so(c_1 t_soħω+c_2eV_bias), where c_1 and c_2 are fitting parameters. The bias-induced efficiency of the anti-damping SOT, Θ≡ħω(𝒯(V_bias)-𝒯(0))/eV_bias𝒯(0), describes how efficient the energy conversion between the magnetization dynamics and the conduction electrons is. Accordingly, for a given bias-induced efficiency, Θ, one needs to apply an external bias equal to ħω/eΘ to overcome the zero-bias damping of the FM. Fig. <ref> displays the anti-damping efficiency versus the position of the Fermi energy of the FM from the bottom (-4t=-4 eV) to the top (4t=4 eV) of the conduction electron band for the two-dimensional square lattice. The result is independent of the bias voltage and the Larmor frequency in the linear response regime (i.e., V_bias,ω≪ t). We find that the efficiency peaks when the Fermi level is in the vicinity of the bottom or top of the energy band where the transport is driven by electron- or hole-like carriers and the Gilbert damping is minimum. The sign reversal of the antidamping SOT is due to the electron- or hole-like driven transport similar to the Hall effect. <cit.> Classical Regime of the Zero-Bias Damping Rate — In the following we show that in the case of classical magnetic moments (S→∞) and the adiabatic regime (ω→ 0), the formalism developed in this paper leads to the conventional expressions for the damping rate. In this limit the system becomes locally periodic and one can carry out a Fourier transformation from the m≡ S_z space to the space of the azimuthal angle of the magnetization orientation, φ.
Conservation of the angular momentum suggests that the majority- (minority-) spin electrons can propagate only along the ascending (descending) m-direction, where the hopping between two nearest-neighbor m-layers is accompanied by a spin-flip. As shown in Fig. <ref>, the existence of spin-flip hopping requires the presence of intralayer SOC-induced noncollinear spin terms which rotate the spin direction of the conduction electrons as they propagate in each m-layer. This is necessary for the persistent flow of electrons along the φ auxiliary direction and therefore damping of the magnetization dynamics. Using the Drude expression of the longitudinal conductivity along the φ-direction for the damping rate, we find that, within the relaxation time approximation, η/ω→∞, where the relaxation time of the excited conduction electrons is much shorter than the time scale of the FM dynamics, 𝒯 is given by 𝒯=-ω/η∑_n∫dk_xdk_ydφ/(2π)^3(v^φ_nk⃗)^2f'(ε_nk⃗(φ)). Here, v_nk⃗^φ=∂ε_nk⃗(φ)/∂φ is the group velocity along the φ-direction in Fig. <ref>, and ε_n,k⃗=ε^0(|k⃗|)±|h⃗(k⃗)| for the 2D-Rashba plane, where ε^0(|k⃗|) is the spin independent dispersion of the conduction electrons and h⃗=at_soê_z×k⃗+1/2 J_sdm⃗, is the spin texture of the electrons due to the SOC and the s-d exchange interaction. For small precession cone angle, θ, the Gilbert damping constant can be determined from α=-𝒯/s_dωsin^2(θ), where the zero-temperature 𝒯 is evaluated by Eq. (<ref>). We find that α≈1/η t_so^2[(k^+_Fa)^2D^+(E_F)+(k^-_Fa)^2D^-(E_F)](1+cos^2(γ)), where D^+(-)(E) is the density of states of the majority (minority) band, γ is the angle between the precession axis and the normal to the Rashba plane, and the Fermi wave-vectors (k^±_F) are obtained from, ε_0(k^±_F)=E_F∓ J_sd/2. Eq. (<ref>) shows that the Gilbert damping increases as the precession axis changes from in-plane (γ=π/2) to out-of-plane (γ=0), <cit.> which can also be seen in Fig. <ref>. It is important to emphasize that in contrast to Eq.
(<ref>) the results shown in Fig. <ref> correspond to the ballistic regime with η=0 in the central region, where the relaxation of the excited electrons occurs solely inside the metallic reservoirs. To clarify how the damping rate changes from the ballistic to the diffusive regime we present in Fig. <ref> the damping rate versus the broadening, η, of states in the presence (solid line) and absence (dashed line) of bias voltage. We find that in both ballistic (η/ω≈ 0) and diffusive (η/ω≫ 1) regimes the damping rate is independent of the size of the FM domain, S. On the other hand, in the intermediate regime the FM dynamics become strongly dependent on the effective domain size, where the minimum of the damping rate varies linearly with S. This can be understood by the fact that the effective chemical potential difference between the first, m=-S, and last, m=S, layers in Fig. <ref> is proportional to S and for coherent electron transport the conductance is independent of the length of the system along the transport direction. Therefore, in this case the FM motion is driven by coherent dynamics. § DEMAGNETIZATION MECHANISM OF SWITCHING In Sec. <ref> we considered the case of a single FM domain where its low-energy excitations, involving the precession of the total angular momentum, can be described by the eigenstates |m⟩ of S_z, and local spin-flip processes were neglected. However, for ultrathin FM films or FM nanoclusters, where the MAE per atom (≈ meV) is comparable to the exchange energy between the local moments (Curie temperature), the low-energy excitations involve both magnetization rotation and local moment spin flips due to conduction electron scattering, which can in turn also change S. In this case the switching is accompanied by the excitation of local collective modes that effectively lowers the amplitude of the magnetic ordering parameter.
For simplicity we employ the mean-field approximation for the 2D FM nanocluster, where the spin under consideration at position r is treated quantum mechanically, interacting with all remaining spins through an effective magnetic field, B⃗. The spatial matrix elements of the local spin operator are [ŝ⃗̂_d,r]_r_1r_2=s⃗_d(r_1)δ_r_1r_2(1-δ_r_1r) 1_s+1/2δ_r_1r_2δ_r_1rτ⃗, where the τ⃗ are the Pauli matrices. The magnetic energy can be expressed as, H_M(r)=-B⃗(r)·τ⃗/2, where the effective local magnetic field is given by, B⃗(r)=gμ_BB⃗^ext+4∑_r'J^dd_rr's⃗_d(r')+2J_sds⃗_c(r). The equation of motion for the single particle propagator of the electronic wavefunction entangled with the local spin moment under consideration can then be obtained from, ( E-μ̂-H_M(r)-Ĥ- J_sd/2σ̂⃗̂·ŝ⃗̂_d,r)Ĝ^r_r(E)=1̂. The density matrix is determined from Eq. (<ref>), which can in turn be used to calculate the spin density of the conduction electrons, s⃗_c(r)=Tr(σ̂⃗̂ρ̂_rr)/2, and the direction and amplitude of the local magnetic moments, s⃗_d(r)=Tr(τ⃗ρ̂_rr)/2. Fig. <ref> shows the spatial dependence of the spin-1/2 local moment switching rate for a FM/Rashba bilayer (Fig. 1) for two bias values (V_bias=±0.4V) and for an in-plane effective magnetic field (a) along and (b) normal to the transport direction, and (c) an out-of-plane magnetic field. The size of the FM island is 25a×25a, where a is the lattice constant. A negative local moment switching rate (blue) denotes that, once excited, the local moment relaxes to its ground state pointing along the direction of the effective magnetic field, while a positive local damping rate (red) denotes that the local moments remain in the excited state during the bias pulse duration. Therefore, the damping rate of the local moments under bias voltage can be either enhanced or reduced and even change sign depending on the sign of the bias voltage and the direction of the magnetization.
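The extraction of s⃗_c(r)=Tr(σ̂⃗̂ρ̂_rr)/2 and s⃗_d(r)=Tr(τ⃗ρ̂_rr)/2 used above amounts to Pauli-matrix traces over a 2×2 spin block of the density matrix. A minimal sketch (an illustrative helper of our own, not the authors' code):

```python
import numpy as np

PAULI = (np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex))

def spin_density(rho_block):
    """s = Tr(sigma rho)/2 for one site's 2x2 spin block of the density matrix."""
    return np.array([np.trace(s @ rho_block).real / 2 for s in PAULI])
```

For a fully polarized spin-up block the result is (0, 0, 1/2), and for a state polarized along +x it is (1/2, 0, 0), reproducing the expected moment directions.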
We find that the bias-induced change of the damping rate is highest when the FM magnetization is in-plane and normal to the transport direction, similar to the single domain case. Furthermore, the voltage-induced damping rate is peaked close to either the left or right edge of the FM (where the reservoirs are attached) depending on the sign of the bias. Note that there is also a finite voltage-induced damping rate when the magnetization is in-plane and along the transport direction (x) or out-of-the-plane (z). Fig. <ref> shows the bias dependence of the average (over all sites) damping rate for in-plane (a and b) and out-of-plane (c) directions of the effective magnetic field (direction of the equilibrium magnetization) and for two values of |B|. This quantity describes the damping rate of the amplitude of the magnetic order parameter. For an in-plane magnetization along the transport direction (Fig. <ref>) the bias behavior of the damping rate is linear and finite, in contrast to the single domain case [Fig. <ref>(a)] where the damping rate was found to have a negligible response under bias. On the other hand, the bias behavior of the current-induced damping rate is similar to that of the single domain case when the equilibrium magnetization direction is in-plane and normal to the transport direction [Fig. <ref>(b)]. For an out-of-plane effective magnetic field [Fig. <ref>(c)] the damping torque has an even dependence on the voltage bias. In order to quantify the efficiency of the voltage-induced excitations of the local moments, we calculate the relative change of the average damping rate in the presence of a bias voltage and present the result versus the Fermi energy for different orientations of the magnetization in Fig. <ref>. We find that the efficiency is maximum for an in-plane equilibrium magnetization normal to the transport direction and it exhibits an electron-hole asymmetry.
The bias-induced antidamping efficiency due to spin flips can reach a peak around 20%, which is much higher than the peak efficiency of about 2% in the single domain precession mechanism in Fig. <ref> for the same system parameters. Future work will be aimed at determining the switching phase diagram <cit.> by calculating the local antidamping and field-like torques self-consistently for different FM configurations. § CONCLUDING REMARKS In conclusion, we have developed a formalism to investigate the current-induced damping rate of a nanoscale FM/SOC 2D Rashba plane bilayer in the quantum regime within the framework of the Keldysh Green function method. We considered two different regimes of FM dynamics, namely, the single domain FM and independent local moment regimes. In the first regime we assume the rotation of the FM as the only degree of freedom, while the second regime takes into account only the local spin-flip mechanism and ignores the rotation of the FM. When the magnetization (precession axis) is in-plane and normal to the transport direction, similar to the conventional SOT for classical FMs, we show that the bias voltage can change the damping rate of the FM and for large enough voltage it can lead to a sign reversal. In the case of independent spin-1/2 local moments we show that the bias-induced damping rate of the local quantum moments can lead to demagnetization of the FM and has strong spatial dependence. Finally, in both regimes we have calculated the bias-induced damping efficiency as a function of the position of the Fermi energy of the 2D Rashba plane.
§ DERIVATION OF ELECTRONIC DENSITY MATRIX
Using the Heisenberg equation of motion for the angular momentum operator, S⃗(t), and the commutation relations for the angular momentum, we obtain the following Landau-Lifshitz equations of motion,

∓ i∂/∂ t S^±(t) = h^z S^±(t) - h^±(t) S^z(t)

-i∂/∂ t S^z(t) = 1/2(h^+(t)S^-(t) - h^-(t)S^+(t))

h⃗_mm'(t) = 1/ħ∑_r J_sd s⃗^mm'_c(r) + gμ_B δ_mm' B⃗(t),

where S^±(t) are the angular momentum (spin) ladder operators and s⃗^mm'_c(r)=1/2∑_σσ'σ⃗_σσ'ρ^mm'_σσ',rr is the local spin density of the conduction electrons, which is an operator in magnetic configuration space. Here, ρ is the density matrix of the system, and the subscripts r, m, σ refer to the atomic site index, magnetic state and spin of the conduction electrons, respectively. In the following we assume a precessing solution for Eq. (<ref>)(a) with a fixed cone angle and Larmor frequency ω=h^z. Extending the Hilbert space of the electrons to include the angular momentum degree of freedom, we define ψ_ms'i(t)=|S,m⟩⊗ψ_s'i(t). The equation of motion for the Green function (GF) is then given by

(E-iη-Ĥ(k)+nω-n/2S J_sd(k)σ^z)Ĝ^r_nm(E,k) - √(S(S+1)-n(n+1))/2S J_sd(k)σ^-Ĝ^r_n+1m(E,k) - √(S(S+1)-n(n-1))/2S J_sd(k)σ^+Ĝ^r_n-1m(E,k) = 1̂δ_nm,

where n=(-S,-S+1,...,S) and a gauge transformation has been employed to remove the time dependence. The density matrix of the system is of the form

ρ̂_nm = e^-i(n-m)ω t∑_p=-S^S∫dE/2π Ĝ^r_np 2η f_pμ̂ Ĝ^a_pm,

where f_pμ̂(E)=f(E-pω-μ̂) is the equilibrium Fermi distribution function of the electrons. Since pω are the eigenvalues of the magnetic energy, one can generalize this expression by transforming into a basis where the magnetic energy is not diagonal, which in turn leads to Eq. (<ref>) for the density matrix of the conduction electron-local moment entangled system.

§ RECURSIVE RELATION FOR GFS
Since in this work we are interested in diagonal blocks of the GFs, and in general for FMs at low temperature we have S≫ 1, we need a recursive algorithm to be able to solve the system numerically.
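Such a layer-by-layer scheme has the same fixed-point structure as the textbook surface Green function recursion. As a schematic illustration only (a scalar toy chain, not the spin- and magnetic-state-resolved matrices of this appendix), an ascending recursion can be sketched in Python:

```python
# Toy ascending recursion for a retarded surface Green function: a
# semi-infinite 1D tight-binding chain (on-site energy 0, hopping t)
# satisfies g(E) = 1 / (E + i*eta - t^2 * g(E)); iterating this map is
# the scalar analogue of building g^{u,r}_n from g^{u,r}_{n-1}.
def surface_gf(E, t=1.0, eta=1e-9, n_iter=200):
    g = 0.0 + 0.0j
    for _ in range(n_iter):
        g = 1.0 / (E + 1j * eta - t * t * g)
    return g

# Outside the band (|E| > 2t) the iteration converges to the decaying
# root of g**2 - E*g + 1 = 0, i.e. g = (E - (E**2 - 4)**0.5) / 2 for t = 1.
g_out = surface_gf(3.0)
```

In the actual calculation each surface GF is a matrix in spin space, the hopping vertex t² is replaced by the (S^∓_n)²/4S² J_sd σ^± factors, and the recursion terminates at the extremal magnetic quantum numbers rather than running to a fixed point.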
The surface Keldysh GFs corresponding to the ascending, ĝ^u,r/<, and descending, ĝ^d,r/<, recursion schemes read

ĝ^u,r_n(E,k)=1/E-ω_n-iη_n-Ĥ(k)-Σ̂_n^r(E,k)-n/2S J_sd(k)σ^z-(S^-_n)^2/4S^2 J_sd(k)σ^+ĝ^u,r_n-1(E,k)σ^- J_sd(k)

Σ̂^u,<_n(E,k)=-∑_α(2iη_n+Σ̂^r_n,α (E,k)-Σ̂^a_n,α (E,k))f_nα+(S^-_n)^2/4S^2 J_sdσ^+ĝ^u,r_n-1Σ̂^u,<_n-1ĝ^u,a_n-1σ^- J_sd

ĝ^d,r_n(E,k)=1/E-ω_n-iη_n-Σ̂_n^r(E,k)-Ĥ(k)-n/2S J_sd(k)σ^z-(S^+_n)^2/4S^2 J_sd(k)σ^-ĝ^d,r_n+1(E,k)σ^+ J_sd(k)

Σ̂^d,<_n(E,k)=-∑_α(2iη_n+Σ̂^r_n,α (E,k)-Σ̂^a_n,α (E,k))f_nα+(S^+_n)^2/4S^2 J_sdσ^-ĝ^d,r_n+1Σ̂^d,<_n+1ĝ^d,a_n+1σ^+ J_sd

where Σ̂^r_n(E,k)=∑_αΣ̂^r_α(E-ω_n,k) is the self-energy of the leads, α=L,R refers to the left and right leads in the two-terminal device in Fig. <ref>, and S^±_m=√(S(S+1)-m(m±1)). Using the surface GFs we can calculate the GFs as follows,

Ĝ^r_n,m(E,k) =1/E-ω_n-iη_n-Ĥ(k)-Σ̂_n^r-n/2S J_sd(k)σ^z-Σ̂^r,u_n-Σ̂^r,d_n, n=m
=S^+_n/2Sĝ^u,r_n(E,k) J_sd(k)σ^-Ĝ^r_n+1,m(E,k), n≠ m
=S^-_n/2Sĝ^d,r_n(E,k) J_sd(k)σ^+Ĝ^r_n-1,m(E,k), n≠ m

where the ascending and descending self-energies are given by

Σ̂^r,u_n=(S^-_n)^2/4S^2 J_sd(k)σ^+ĝ^u,r_n-1(E,k)σ^- J_sd(k)

Σ̂^r,d_n=(S^+_n)^2/4S^2 J_sd(k)σ^-ĝ^d,r_n+1(E,k)σ^+ J_sd(k)

The average rate of angular momentum loss/gain can be obtained from the real part of the loss of angular momentum in one period of precession,

𝒯'_n=1/2(𝒯'^-_n-𝒯'^+_n)=1/2(∑_k Tr[S^-_n/2Sσ^+ J_sd(k)ρ̂_nn+1(k)-S^+_n/2Sσ^- J_sd(k)ρ̂_nn-1(k)])

which can be interpreted as the current flowing across the layer n,

𝒯'^-/+_n= ∑_k∫dE/2 π i Tr {[Σ̂^d/u,r_n(E,k)-Σ̂^d/u,a_n(E,k)] Ĝ^<_nn(E,k) + Σ̂^d/u,<_n(E)[ Ĝ^r_nn(E,k)-Ĝ_nn^a(E,k) ] }.

The work at CSUN is supported by NSF-Partnership in Research and Education in Materials (PREM) Grant DMR-1205734, NSF Grant No. ERC-Translational Applications of Nanoscale Multiferroic Systems (TANMS)-1160504, and US Army of Defense Grant No. W911NF-16-1-0487.

Slonczewski1996 J. C. Slonczewski, Current-driven excitation of magnetic multilayers, J. Magn. Magn. Mater. 159, L1-L7 (1996).
Berger1996 L. Berger, Emission of spin waves by a magnetic multilayer traversed by a current, Phys. Rev. B 54, 9353 (1996). Ralph2008 D. Ralph and M. Stiles, Spin transfer torques, J. Magn. Magn. Mater. 320, 1190 (2008); A. Brataas, A. D. Kent, and H. Ohno, Current-induced torques in magnetic materials, Nature Mater. 11, 372 (2012). Theodonis2006 Ioannis Theodonis, Nicholas Kioussis, Alan Kalitsov, Mairbek Chshiev, and W. H. Butler, Anomalous Bias Dependence of Spin Torque in Magnetic Tunnel Junctions, Phys. Rev. Lett. 97, 237205 (2006). Gambardella2011 P. Gambardella and I. M. Miron, Current-induced spin-orbit torques, Phil. Trans. R. Soc. A 369, 3175 (2011). Miron2010 I. M. Miron et al., Nature Mater. 9, 230 (2010); I. M. Miron, K. Garello, G. Gaudin, Pierre-Jean Zermatten, M. V. Costache, S. Auffret, S. Bandiera, B. Rodmacq, A. Schuhl, and P. Gambardella, Perpendicular switching of a single ferromagnetic layer induced by in-plane current injection, Nature 476, 189 (2011). Liu2012 L. Liu, C. F. Pai, Y. Li, H. W. Tseng, D. C. Ralph, and R. A. Buhrman, Spin-torque switching with the giant spin Hall effect of tantalum, Science 336, 555 (2012). Liu2012b L. Liu, T. Moriyama, D. C. Ralph, and R. A. Buhrman, Spin-Torque Ferromagnetic Resonance Induced by the Spin Hall Effect, Phys. Rev. Lett. 106, 036601 (2011); Luqiao Liu, O. J. Lee, T. J. Gudmundsen, D. C. Ralph, and R. A. Buhrman, Current-Induced Switching of Perpendicularly Magnetized Magnetic Layers Using Spin Torque from the Spin Hall Effect, Phys. Rev. Lett. 109, 096602 (2012). Dyakonov1971 M. I. Dyakonov and V. I. Perel, Current-induced spin orientation of electrons in semiconductors, Phys. Lett. 35A, 459 (1971). Hirch1999 J. E. Hirsch, Spin Hall Effect, Phys. Rev. Lett. 83, 1834 (1999). Jungwirth2012 T. Jungwirth, J. Wunderlich and K. Olejník, Spin Hall effect devices, Nature Materials 11 (2012). Sinova2014 J. Sinova, S. O. Valenzuela, J. Wunderlich, C. H. Back, and T. Jungwirth, Spin Hall Effects, Rev. Mod. Phys.
87, 1213 (2015). Bijl2012 E. van der Bijl and R. A. Duine, Current-induced torques in textured Rashba ferromagnets, Phys. Rev. B 86, 094406 (2012). Kim2012 Kyoung-Whan Kim, Soo-Man Seo, Jisu Ryu, Kyung-Jin Lee, and Hyun-Woo Lee, Magnetization dynamics induced by in-plane currents in ultrathin magnetic nanostructures with Rashba spin-orbit coupling, Phys. Rev. B 85, 180404(R) (2012). Kurebayashi2014 H. Kurebayashi, Jairo Sinova, D. Fang, A. C. Irvine, T. D. Skinner, J. Wunderlich, V. Novák, R. P. Campion, B. L. Gallagher, E. K. Vehstedt, L. P. Zârbo, K. Výborný, A. J. Ferguson and T. Jungwirth, An antidamping spin-orbit torque originating from the Berry curvature, Nature Nanotechnology 9, 211 (2014). MahfouziPRB2016 F. Mahfouzi, B. K. Nikolić, and N. Kioussis, Antidamping spin-orbit torque driven by spin-flip reflection mechanism on the surface of a topological insulator: A time-dependent nonequilibrium Green function approach, Phys. Rev. B 93, 115419 (2016). Freimuth2014 Frank Freimuth, Stefan Blügel, and Yuriy Mokrousov, Spin-orbit torques in Co/Pt(111) and Mn/W(001) magnetic bilayers from first principles, Phys. Rev. B 90, 174423 (2014). Kim2013 J. Kim, J. Sinha, M. Hayashi, M. Yamanouchi, S. Fukami, T. Suzuki, S. Mitani and H. Ohno, Layer thickness dependence of the current-induced effective field vector in Ta|CoFeB|MgO, Nature Materials 12, 240-245 (2013). Xiao2014 Xin Fan, H. Celik, J. Wu, C. Ni, Kyung-Jin Lee, V. O. Lorenz, and J. Q. Xiao, Quantifying interface and bulk contributions to spin-orbit torque in magnetic bilayers, Nat. Commun. 5, 3042, DOI: 10.1038/ncomms4042 (2014). Chudnovskiy2014 A. Chudnovskiy, Ch. Hubner, B. Baxevanis, and D. Pfannkuche, Spin switching: From quantum to quasiclassical approach, Phys. Status Solidi B 251, No. 9, 1764 (2014). Fahnle2011 M. Fahnle and C. Illg, Electron theory of fast and ultrafast dissipative magnetization dynamics, J. Phys.: Condens. Matter 23, 493201 (2011). Swiebodzinski2010 J. Swiebodzinski, A. Chudnovskiy, T. Dunn, and A.
Kamenev, Spin torque dynamics with noise in magnetic nanosystems, Phys. Rev. B 82, 144404 (2010). Li2015 Hang Li, H. Gao, Liviu P. Zarbo, K. Vyborny, Xuhui Wang, Ion Garate, Fatih Dogan, A. Cejchan, Jairo Sinova, T. Jungwirth, and Aurelien Manchon, Intraband and interband spin-orbit torques in noncentrosymmetric ferromagnets, Phys. Rev. B 91, 134402 (2015). Kambersky2007 V. Kamberský, On the Landau-Lifshitz relaxation in ferromagnetic metals, Can. J. Phys. 48, 2906 (1970); V. Kamberský, Spin-orbital Gilbert damping in common magnetic metals, Phys. Rev. B 76, 134416 (2007). Costa2015 A. T. Costa and R. B. Muniz, Breakdown of the adiabatic approach for magnetization damping in metallic ferromagnets, Phys. Rev. B 92, 014419 (2015). Edwards2016 D. M. Edwards, The absence of intraband scattering in a consistent theory of Gilbert damping in pure metallic ferromagnets, Journal of Physics: Condensed Matter 28, 8 (2016). Haney2013 Paul M. Haney, Hyun-Woo Lee, Kyung-Jin Lee, Aurelien Manchon, and M. D. Stiles, Current induced torques and interfacial spin-orbit coupling: Semiclassical modeling, Phys. Rev. B 87, 174411 (2013). Park2013 Jin-Hong Park, Choong H. Kim, Hyun-Woo Lee, and Jung Hoon Han, Orbital chirality and Rashba interaction in magnetic bands, Phys. Rev. B 87, 041301(R) (2013). Data_book Supriyo Datta, Quantum Transport: Atom to Transistor, Cambridge University Press, New York (2005). Zhang2004 S. Zhang and Z. Li, Roles of Nonequilibrium Conduction Electrons on the Magnetization Dynamics of Ferromagnets, Phys. Rev. Lett. 93, 127204 (2004). Sanvito2005 Maria Stamenova, Stefano Sanvito, and Tchavdar N. Todorov, Current-driven magnetic rearrangements in spin-polarized point contacts, Phys. Rev. B 72, 134407 (2005). Eastman1980 D. E. Eastman, F. J. Himpsel, and J. A. Knapp, Experimental Exchange-Split Energy-Band Dispersions for Fe, Co, and Ni, Phys. Rev. Lett. 44, 95 (1980).
Kokalj1999 Anton Kokalj and Mauro Causà, Periodic density functional theory study of Pt(111): surface features of slabs of different thicknesses, J. Phys.: Condens. Matter 11, 7463 (1999). Miron2011 Ioan Mihai Miron, Thomas Moore, Helga Szambolics, Liliana Daniela Buda-Prejbeanu, Stéphane Auffret, Bernard Rodmacq, Stefania Pizzini, Jan Vogel, Marlio Bonfim, Alain Schuhl and Gilles Gaudin, Fast current-induced domain-wall motion controlled by the Rashba effect, Nature Materials 10, 419-423 (2011). Haazen2013 P. P. J. Haazen, E. Mure, J. H. Franken, R. Lavrijsen, H. J. M. Swagten, and B. Koopmans, Domain wall depinning governed by the spin Hall effect, Nat. Mater. 12, 299-303 (2013). Kittel_book Charles Kittel, Introduction to Solid State Physics, Wiley (2004). mahfouzi_SPIN2016 F. Mahfouzi, N. Kioussis, Ferromagnetic Damping/Anti-damping in a Periodic 2D Helical Surface; A Nonequilibrium Keldysh Green Function Approach, SPIN 06, 1640009 (2016).
[E-mail: ]rcalvo@nanogune.eu or mreyescalvo@gmail.com
Department of Physics, Stanford University, Stanford, California 94305, USA
Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park, California 94025, USA
CIC nanoGUNE, 20018 Donostia-San Sebastian, Spain
Ikerbasque, Basque Foundation for Science, 48013 Bilbao, Spain
[Present address:]Rudolf Peierls Centre for Theoretical Physics, Oxford University, UK
Department of Physics, University of California, Berkeley, California 94720, USA
Raymond and Beverly Sackler School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel
Physikalisches Institut (EP3) and Röntgen Center for Complex Material Systems, Universität Würzburg, Am Hubland, 97074 Würzburg, Germany
State Key Laboratory of Surface Physics and Department of Physics, Fudan University, Shanghai 200433, China
[E-mail: ]goldhaber-gordon@stanford.edu
We study the electronic transport across an electrostatically-gated lateral junction in a HgTe quantum well, a canonical 2D topological insulator, with and without applied magnetic field. We control carrier density inside and outside a junction region independently and hence tune the number and nature of 1D edge modes propagating in each of those regions. Outside the bulk gap, magnetic field drives the system to the quantum Hall regime, and chiral states propagate at the edge. In this regime, we observe fractional plateaus which reflect the equilibration between 1D chiral modes across the junction. As carrier density approaches zero in the central region and at moderate fields, we observe oscillations in resistance that we attribute to Fabry-Perot interference in the helical states, enabled by the broken time reversal symmetry. At higher fields, those oscillations disappear, in agreement with the expected absence of helical states when band inversion is lifted.
Interplay of chiral and helical states in a Quantum Spin Hall Insulator lateral junction D.
Goldhaber-Gordon December 30, 2023 ========================================================================================

Above a certain critical thickness, the 2-dimensional electron gas (2DEG) of a HgTe quantum well presents an inverted band structure characteristic of a 2D topological insulator (2D-TI) <cit.>. At the edge of the topological insulator, quantum spin Hall (QSH) helical states propagate <cit.>. When the Fermi level lies in the bulk gap of a 2D-TI, conduction is dominated by those edge states <cit.> and is in principle protected by time-reversal symmetry (TRS) against single-electron backscattering processes. The application of a magnetic field is expected to lift such protection. Nonetheless, band inversion and counterpropagating QSH-like edge states are predicted to persist up to a critical magnetic field B_ c. Above this field, band inversion should disappear, leaving a 2D band structure identical to that of a topologically trivial [The electronic structure of a 2DEG in the Quantum Hall regime is also topologically nontrivial. Here, however, we use trivial to describe only the lowest energy bands characterized by the Z2 topological invariant.] semiconductor in the Quantum Hall (QH) regime <cit.>. Previous experiments on HgTe quantum wells in this thickness range show that the resistance in the bulk gap increases in the presence of moderate magnetic fields <cit.> as predicted <cit.>, but our understanding of the evolution of edge conduction with magnetic field is incomplete. For the related problem of assigning quantum numbers to different chiral quantum Hall (QH) modes, a fruitful approach has been to study scattering between those modes by measuring transport through junctions between regions of different carrier density. This approach has been widely applied in GaAs quantum wells (for a review see Ref. <cit.>) and more recently in the Dirac 2DEG of graphene <cit.>.
Thus, to characterize helical modes under broken TRS, studying their interplay with quantum Hall chiral states could be a promising strategy. In this work, we explore electronic transmission across a lateral heterojunction fabricated on a HgTe quantum well with inverted band structure. Above a critical field, our results are consistent with expectations for equilibration of QH edge modes. Results are similar below the critical field for high carrier densities, but clearly differ when the junction is tuned through zero density. There, we first observe how the maximum of resistance associated with the bulk gap narrows and shifts towards lower values of carrier density. We find this to be a consequence of the remaining band inversion and the existence of helical edge states. Over the density regime corresponding to the peak shift, the resistance of our device presents oscillations which we attribute to Fabry-Perot interference of helical states enabled by the lifting of TRS protection. Fig. <ref>(a) presents the geometry of our device. A Hall bar mesa is defined following the method described in Refs. <cit.> on a HgTe quantum well with inverted band structure. The 2DEG is formed in a quantum well epitaxially grown over a conductive substrate <cit.>, allowing for the application of an overall back-gate voltage. A narrow top gate electrode is placed at the center of the device, defining two regions with separately-tunable density (inset of Fig. <ref>(c)). Central region refers to the area covered by the top-gate electrode and outer region to its surroundings. The carrier density in the outer region n can be tuned by the applied back-gate voltage V_bg, while the density n' in the central region depends on both back- and top-gate (V_tg) voltages. Both n and n' can be estimated using a simple capacitor model <cit.>. The evolution of the four-terminal resistance R measured across the junction at 2.1 K (inset of Fig.
<ref>(c)) as a function of both V_bg and V_tg at zero applied magnetic field is shown in Fig. <ref>(b). As previously reported <cit.>, the resistance presents a finite maximum when the chemical potential lies in the bulk gap and conduction is dominated by the QSH edge states. As a function of spatially-uniform carrier density, four-terminal resistance (Curve 2 in Fig. <ref>(c)) shows a maximum value higher than h/2e^2, the value associated with the ballistic Quantum Spin Hall regime. This is expected: the edge mean free path in similar heterostructures has been reported to be a few microns <cit.>, substantially less than the edge length between contacts in the present geometry, so backscattering should result in increased resistance. In contrast, as a function only of density in the central region n' (Curve 1 in Fig. <ref>(c)) the maximum resistance is lower than h/2e^2, suggesting the presence of bulk conduction in parallel to the QSH edge states. A detailed look at the data (Fig. S2 of <cit.>) reveals oscillations in the resistance which likely arise from the Fabry-Perot like interference of bulk conduction paths (for a detailed analysis see <cit.>). The locations of resistance maxima in the (V_ tg,V_ bg) parameter space (Fig. <ref>(a)) fall along two lines: a horizontal line around V_ bg=V_ bg0=0 V representing zero density in the outer region, and a diagonal line representing zero density in the central region of the junction (n'=0) (see <cit.>). The two lines define four quadrants of electron and hole densities in the central and outer regions of the junction, labeled in Fig. <ref>(b). At finite fields, four-terminal resistance R in the n-n'-n quadrant shows a sharp tiled pattern of fractional resistance values ranging from 0 to h/e^2 (Fig. <ref>(a,b), B=3 and 5 T respectively). The Landauer-Büttiker formalism describes those fractional values as the result of full equilibration between co-propagating edge states in the junction (cf. <cit.>).
In the unipolar regime and in a 4-terminal configuration, the predicted resistance across the junction is given by

R = h/e^2 (N-N')/(N N')

where N and N' are the number of quantum Hall states propagating in the outer and central regions of the junction respectively. The bottom row of tiles in Fig. <ref>(a,b) corresponds to N=1, with N' increasing from left to right. Our data show a good agreement with the fractional plateaus at R=0, 1/2, 2/3, 3/4, ... expected for N'=1, 2, 3, 4, ... (N=1 linecut in Fig. <ref>(c)). Similarly, a second row of tiles appears for N=2, and in the corresponding linecut of Fig. <ref>(a), plateaus of resistance can be observed near the expected values for N=2 paired with a range of N', although deviations from the ideal behavior are larger here than for N=1. Likewise, plateaus are observed near but not exactly at the expected values for the p-p'-p and the n-p-n quadrants (see <cit.> for details). These results highlight the role of the strong spin-orbit interaction and inversion symmetry breaking in HgTe. If the s_z component of spin were conserved, transmission would then be spin-selective and only those states with the same spin polarization would equilibrate with each other, leading to fewer plateaus in resistance R. In contrast, our data suggest that full equilibration occurs for all possible values of N and N'. Inversion symmetry breaking provides a mechanism for spin mixing that allows this to happen <cit.>. Adapting the theory in Ref. Khaetskii92, we estimate that the equilibration length for our system is around 2 μm at 2 T, which is indeed smaller than the junction width. In the n-n'-n quadrant, the tiled structure of fractional resistance values associated with a given pair of values (N,N') is similar at both 3 and 5 T (Fig. <ref>(a) and (b) respectively). A contrasting behavior emerges around zero density. First, the zero density n'=0 line determined from zero magnetic field data in Fig. <ref>(b) is overlaid for reference on Fig. <ref>(a) and (b).
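As a quick numerical check of the equilibration formula (in units of h/e², and taking the magnitude of N-N' so the quoted positive plateau values come out directly):

```python
from fractions import Fraction

def r_junction(N, Np):
    """Four-terminal resistance across the junction in units of h/e^2,
    assuming full equilibration of edge modes: |N - N'| / (N * N')."""
    return Fraction(abs(N - Np), N * Np)

# Bottom row of tiles, N = 1 with N' = 1, 2, 3, 4:
row_N1 = [r_junction(1, Np) for Np in (1, 2, 3, 4)]  # 0, 1/2, 2/3, 3/4
```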
The maximum of resistance at B=3 T is clearly shifted towards lower values of V_ tg with respect to that line. Remarkably, it returns to the original position at B=5 T (Fig. <ref>(b)). This effect can also be observed in the horizontal linecuts taken from the corresponding 2D resistance plots at similar outer densities n for 0, 3 and 5 T (Fig. <ref>(d)). Furthermore, at 3 T and near zero density, in the range of voltages where the resistance peak was found at zero field, this plot now shows strong oscillations in the resistance. To explain these results, we present calculations of the band structure below and above the critical field (details in <cit.>) for the magnetic fields considered in the experiment: B=0, 3, 5 T (Fig. <ref>(a)). At zero field, when the Fermi energy lies in the bulk gap, we find the usual counterpropagating helical edge states. At both B=3 and 5 T, conduction and valence bands turn into a set of discrete Landau levels (LLs). One chiral Quantum Hall state propagates at the edge for each filled Landau level in the bulk, so the total number of modes N is given by the integer part of the filling factor ν. For fields B < B_ c such as B = 3 T, the lowest order hole-like and electron-like Landau levels are inverted in the bulk and cross near the edge, so when the Fermi level is in the bulk gap there are counter-propagating helical edge states (below we refer to this regime as ν=0). By 5 T, which is above B_ c, band inversion has disappeared, and the band structure resembles that of a trivial semiconductor, with a gap between electron and hole Landau levels. In the junction geometry considered in our experiment, at finite field the electronic transmission across the device will result from the matching of edge modes corresponding to different fillings in the central and outer regions of the junction (ν' and ν respectively).
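For reference, the mode counting invoked here (N equal to the integer part of ν) uses the standard 2DEG filling factor ν = n h / (e B); the sketch below ignores Zeeman and band-inversion details, and the density is an illustrative number rather than one extracted from the device:

```python
import math

PLANCK_H = 6.62607015e-34      # Planck constant (J s)
ELEM_CHARGE = 1.602176634e-19  # elementary charge (C)

def filling_factor(n_2d, B):
    """Filling factor nu = n h / (e B) for sheet density n_2d (m^-2) at field B (T)."""
    return n_2d * PLANCK_H / (ELEM_CHARGE * B)

def n_edge_modes(n_2d, B):
    """Number of chiral edge modes: integer part of the filling factor."""
    return math.floor(filling_factor(n_2d, B))

# The same density supports fewer chiral modes at higher field:
# n = 2.4e15 m^-2 gives nu ~ 3.3 (N = 3) at 3 T and nu ~ 1.99 (N = 1) at 5 T.
```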
When the central region has ν' ≠ 0, the edge mode structure is the usual one observed in quantum Hall experiments with standard 2DEGs (see Figs. <ref>(b,c)) and the resulting resistance is given by a Landauer-Büttiker expression (Eq. <ref>). When ν'=0, however, the situation changes drastically depending on whether B is larger or smaller than B_ c. Above the critical field, the Fermi level always lies in a bulk gap with no edge modes, and incoming modes are always reflected, as illustrated in Fig. <ref>(d) for the case 1-0-1. Below the critical field, in contrast, band inversion implies the presence in the inner region of two QSH-like edge states with opposite chiralities (Fig. <ref>(e)). Edge modes cannot simply terminate, so a mode must also propagate along the 1-0 and 0-1 interfaces. We believe that the matching of chiral to helical edge states in the 1-0-1 scenario is the origin of the resistance oscillations and the shift in the position of the resistance maximum we observe at 3 T (Fig. <ref>(d)). To understand this, we first note that since TRS is broken at finite field, the crossing of QSH edge modes when B<B_ c is only protected in the presence of extra symmetries such as mirror symmetry. In the experiment, such symmetries are absent, so there should always be a small minigap (Figs. <ref>(a-c)). The location of this edge minigap within the bulk gap depends on details such as the potential at the edge. Therefore the edge and bulk charge neutrality points do not necessarily occur for the same value of gate voltage, and accordingly the center of the resistance maximum originating from the minigap at finite field does not necessarily align with the center of the bulk gap. The observed gate voltage position shift and narrowing of the resistance maximum at 3 T compared to zero or 5 T (see Fig. <ref>(d)) is consistent with an origin in the edge state minigap.
In the 1-0-1 configuration, when the incoming chiral edge mode from the outer region reaches the junction it can scatter into two possible outgoing modes: the co-propagating helical edge mode or the chiral mode parallel to the junction. When the chemical potential in the central region is very close to the bottom of the lowest Landau level, the incoming edge mode is almost perfectly matched to the co-propagating helical one, while the counterpropagating edge mode forms a loop spanning the whole junction (Fig. <ref>(a)). This must be so because the counterpropagating mode has smaller momentum and therefore is located farther from the edge. Transport in this scenario is almost equivalent to the 1-1-1 situation, seen in the experimental data as an extension of the R=0 plateau to lower densities (Fig. <ref>(a)). As the chemical potential approaches the crossing of the helical edge modes, the chiral mode connects to the one parallel to the junction, while the helical modes form a loop at either edge (Fig. <ref>(b)). These loops should disappear at the minigap, and reappear with opposite orientation below it (Fig. <ref>(c)). The existence of these loops, allowed because the protection from backscattering is lifted by B, implies that coherent transport should be affected by multiple reflections at the interfaces. This effect should manifest in Fabry-Perot type oscillations as a function of chemical potential, because the accumulated phase δ = k L depends smoothly on chemical potential. This explains the oscillations observed at 3 T in Fig. <ref>(d), in the density range assigned to the bulk gap and adjacent to the edge minigap, and their disappearance beyond B_ c (i.e. at 5 T) where no helical modes exist. Resistance oscillations are also present at zero field in a similar density regime; these we associate with bulk states. However, at 3 T, the oscillations have an amplitude about an order of magnitude higher than their zero-field counterparts. Moreover, our data (Fig.
<ref>(c) and Fig. S4(d)) indicate that the bulk states causing the oscillations at zero field must be fully localized at 3 T (see <cit.>), further suggesting that the oscillations at 3 T have their origin in edge rather than bulk states. Furthermore, the oscillations are periodic in n', with no substantial dependence on n (Fig. 4(d)). This is consistent with our interpretation: changing n' will change the edge momentum of the loop modes and hence the phase, while n only determines the momentum of the incoming modes, which should have no effect on the phase of the oscillations. Fig. <ref>(d) also shows that oscillations are present for n values corresponding to N=1, but they disappear when approaching N=2. This is consistent with the N=2 Quantum Hall chiral edge modes not being fully established at B=3 T, as already seen in the imperfectly-quantized equilibration plateaus in Fig. <ref>(c). Taken together, the amplitude, position and field- and carrier density-dependence of resistance oscillations support our interpretation of their origin in the constructive interference of helical edge states. Finally, assuming a Fabry-Perot scenario yields some quantitative estimates for system parameters. Based on bulk 2D Fabry-Perot oscillations at B=0 we estimate the effective length L^* of the central region to be 0.6 μm (see section 4 of <cit.> for details). This length need not be the same as the physical top-gate length, due to the smooth shape of the gate-induced potential. Given that we observe coherent FP oscillations from the QSH edge states as well, a lower bound on the edge localization length can be set at L^*. We expect that at larger channel lengths or with higher disorder, the QSH loop responsible for interference will break up into more loops and coherence will gradually be lost, in a way similar to Ref. tkachov2010ballistic.
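The interference picture invoked here is the standard two-interface Fabry-Perot form: a loop mode accumulating one-pass phase δ = kL between two weakly reflecting interfaces. The reflection amplitude r below is an arbitrary illustrative value (backscattering is only permitted because TRS is broken), not a number extracted from the data:

```python
import cmath
import math

def fp_transmission(delta, r=0.3):
    """Two-interface Fabry-Perot transmission versus one-pass phase delta:
    T = (1 - r^2)^2 / |1 - r^2 exp(2i*delta)|^2, for equal lossless
    interfaces with reflection amplitude r."""
    t2 = 1.0 - r * r
    return t2 * t2 / abs(1.0 - r * r * cmath.exp(2j * delta)) ** 2

# Transmission resonances (T = 1) recur each time delta = k*L advances by
# pi, so a smooth dependence of k on chemical potential yields periodic
# oscillations in the measured resistance.
```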
We may also estimate the 1D edge mode carrier density: given the above value of L^* we infer C_ 1D=δ(n_1D)/δ(V_ tg)≃ 1.4× 10^6 cm^-1 V^-1 (see section 6 of <cit.> for details). While our results below the critical field are compatible with those in Ref. <cit.>, we present here evidence for physical scenarios that were not accessible in that work. More specifically, the dual-gate configuration of our device allows us to perform a detailed study of the equilibration of QH states in HgTe QWs and to infer the role played here by spin-orbit interaction. More importantly, we present one of the very few pieces of evidence for a transition between QSH and QH regimes in a 2D-TI. Finally, we observe signatures of coherent interference in helical states, likely due to the geometry of our junction. Our results suggest that valuable information about the QSH state under broken TRS can be inferred from the electronic transmission across a QH-QSH-QH heterojunction. While the present work was under review, a related theoretical work by Nanclares et al. <cit.> has been published. The work at Stanford was supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under contract DE-AC02-76SF00515 to D.G.-G. and S.-C.Z. The Center for Probing the Nanoscale, an NSF NSEC under grant PHY-0830228 to D.G.-G., supported early stages of the project. The European Union under the project FP7-PEOPLE-2010-274769 supported M.R.C.'s stay at Stanford. We also acknowledge support from the National Thousand-Young-Talents Program to J.W. and funding from an AFOSR MURI (F.J.) and from the DARPA FENA program (R.I.). The Würzburg group acknowledges additional financial support from the German Research Foundation (The Leibniz Program, Sonderforschungsbereich 1170 ‘Tocotronics’ and Schwerpunktprogramm 1666), the EU ERC-AG program (Project 3-TOP), the Elitenetzwerk Bayern IDK ‘Topologische Isolatoren’ and the Helmholtz Foundation (VITI).
Optimal Experiment Design for Causal Discovery from Fixed Number of Experiments

AmirEmad Ghassami (ghassam2@illinois.edu), Department of ECE, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA
Saber Salehkaleybar (sabersk@illinois.edu), Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA
Negar Kiyavash (kiyavash@illinois.edu), Department of ECE and ISE, Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA

We study the problem of causal structure learning over a set of random variables when the experimenter is allowed to perform at most M experiments in a non-adaptive manner. We consider the optimal learning strategy in terms of minimizing the portion of the structure that remains unknown given the limited number of experiments, in both Bayesian and minimax settings. We characterize the theoretical optimal solution and propose an algorithm, which designs the experiments efficiently in terms of time complexity. We show that for bounded-degree graphs, in the minimax case and in the Bayesian case with uniform priors, our proposed algorithm is a ρ-approximation algorithm, where ρ is independent of the order of the underlying graph. Simulations on both synthetic and real data show that the performance of our algorithm is very close to the optimal solution.

§ INTRODUCTION
Causal structures are commonly represented by directed acyclic graphs (DAGs), where the vertices of the graph are random variables and a directed edge from X to Y indicates that variable X is a direct cause of variable Y <cit.>. Given a set of variables, there are two main approaches for uncovering the causal relationships among them. The first is to perform conditional independence tests on the set of variables based on observational measurements <cit.>.
The second involves intervening on some of the variables to recover their causal effect on the rest of the variables. Unlike purely observational tests, a sufficient set of interventional experiments can uniquely identify the underlying causal graph. In the study of the intervention-based inference approach, a setup is often considered in which the experimenter performs M experiments on the set of variables. In each experiment a set of at most k variables is intervened on. In this setting, two natural questions arise: * What is the smallest required number of experiments in order to learn all the causal relations? * For a fixed number of experiments, what portion of the causal relationships can be learned? The first problem has been addressed in the literature under different assumptions (see the Related Work subsection). To the best of our knowledge, the second question has not been studied in the literature, and it is this question that we address herein. Specifically, we consider a setup with M experiments, each containing exactly one intervention. The reason we consider single-intervention experiments is that in many applications, such as some experiments in biology, simultaneous intervention on multiple variables may not be feasible. For the underlying structure on the variables, we assume that the number of cycles of length three in the structure is negligible compared to the order of the graph. Structures satisfying this assumption arise in many applications. For instance, the causal structure in the gene regulatory network (GRN) of some bacteria, such as Escherichia coli (E-coli) and Saccharomyces cerevisiae (S. cerevisiae), has a tree-like structure (see Figure <ref>), and hence satisfies our assumption <cit.>. Contributions. Unlike most of the previous work, we utilize a hybrid inference scheme instead of an adaptive approach (Subsection <ref>).
In this approach, first an observational test, such as the IC algorithm <cit.>, is performed on the set of variables. This test reveals the skeleton as well as the orientation of some of the edges of the causal graph. Next, based on the result of the initial test, the complete set of M experiments is designed in a non-adaptive manner. Having the complete set of experiments enables the experimenter to perform the interventional experiments in parallel. The formal description of the problem of interest is provided in Subsection <ref>. We study the problem of structure learning in both Bayesian and minimax settings. In Section <ref>, we present the optimal solution for both settings. This solution is optimal in the sense of recovering the structure that minimizes the loss for a given number of interventional experiments. Finding this optimal solution is in general computationally intense. In Section <ref>, we propose the ProBal algorithm, which finds the set of experiments in a computationally efficient manner. We show that for bounded-degree graphs, in the minimax setting and in the Bayesian setting with uniform prior, our proposed algorithm is a ρ-approximation algorithm, where ρ is independent of the order of the underlying graph. In Section <ref>, using synthetic and real data, we show that the performance of ProBal is very close to the optimal solution. Related Work. The best-known algorithms for general purely observational recovery approaches are IC <cit.> and PC <cit.>. Such purely observational approaches reconstruct the causal graph up to Markov equivalence classes, and hence the direction of some of the edges may remain unresolved. Of course, under some conditions, full causal structure learning using merely observational data is feasible <cit.>. There is a large body of research on learning causal structures using interventional data <cit.>.
Specifically, Pearl <cit.> considers the SEM model and defines the so-called do-intervention to infer the causal relations among a set of variables. A similar approach for representing interventions is adopted in <cit.>, but it allows interventions to have non-degenerate distributions. Woodward <cit.> proposed another type of intervention which, unlike Pearl's, does not depend on a specified model of the causal relations among the random variables. Peters et al. introduced invariant causal prediction <cit.>, which is a causal discovery method that uses different experimental settings to predict the set of ancestors of a variable. In that work, data comes from different unknown experimental settings (which could result from interventions). See <cit.> for some validations of this method. Regarding the first question discussed earlier, <cit.> consider the complete graph as the underlying causal structure to obtain worst-case bounds on the number of required experiments. In that work, both cases of experiments containing bounded and unbounded numbers of interventions are studied. In <cit.>, it has been shown that there is a connection between the problem of finding a separating system in a graph and designing a proper set of experiments for causal inference, and hence results from combinatorics help in finding the fundamental bounds. In <cit.>, two algorithms that minimize the number of experiments in the worst case are developed. The proposed algorithms are adaptive, and in the one with polynomial complexity, the size of the experiments can be as large as half the order of the graph, which may not be practical in many real-life applications. In <cit.>, the authors present information-theoretic lower bounds on the number of required experiments for both deterministic and randomized adaptive approaches.
They also proposed an adaptive algorithm that allows chordal graphs to be learned completely.

§ MODEL DESCRIPTION

§.§ Preliminaries

In this subsection we introduce some definitions and concepts that we require later. Consider a directed graph D=(V,E) with vertex set V and set of directed edges E. D is a DAG if it is a finite graph with no directed cycles. A DAG D is called causal if its vertices represent random variables V={X_1, ..., X_n} and a directed edge (X_i,X_j) indicates that variable X_i is a direct cause of variable X_j. We consider a structural equation model <cit.>, which is a collection of n equations X_i=f_i(PA_X_i,N_i), i=1,...,n, where PA_X_i denotes the parents of X_i in D, and the N_i's are jointly independent noise variables. We assume here that in our network all variables are observable. Also, throughout the rest of the paper, we make the faithfulness assumption on the probability distribution. Two causal DAGs D_1 and D_2 over V are Markov equivalent if every distribution that is compatible with one of the graphs is also compatible with the other. The set of all graphs over V is partitioned into a set of mutually exclusive and exhaustive Markov equivalence classes, which are the equivalence classes induced by the Markov equivalence relation <cit.>. A v-structure is a structure containing two converging directed edges whose tails are not connected by an edge. v-structures are also known as immoralities, and a graph with no immorality is called a moral graph. Using purely observational data (referred to as the null experiment by <cit.>), one can utilize a “complete" conditional independence based algorithm to learn the causal structure as much as possible. By complete we mean that the algorithm is capable of distinguishing all the orientations up to the Markov equivalence class.
Such an algorithm includes performing a conditional independence test followed by applying the Meek rules <cit.>. On the other hand, interventions can enable us to differentiate among the different causal structures inside a Markov equivalence class. Define an intervention I on variable X∈ V(D) as removing the influence of all the variables on X and randomizing the value of this variable. We denote this intervention by I=X. An inference algorithm consists of a set of M experiments[Note that in most of the other work in this area, each intervention is what we refer to as an experiment here, and hence each intervention can contain as many as n variable randomizations.] ℰ_total={ℰ_1, ℰ_2, ..., ℰ_M}, where each experiment contains k interventions, i.e., ℰ_i={I_1^(i), I_2^(i), ..., I_k^(i)} for 1≤ i ≤ M. As shown in <cit.>, observing the result of the null experiment, one can find the orientation of the edge between any two variables X_i and X_j if there exists ℰ_k∈ℰ_total such that (X_i∈ℰ_k, X_j∉ℰ_k) or (X_j∈ℰ_k, X_i∉ℰ_k). An inference algorithm may be (1) adaptive, in which case the experiments are performed sequentially and the information obtained from the previous experiments is used to design the next one; (2) passive, in which all the experiments are designed beforehand; (3) hybrid, in which the experimenter first performs a purely observational study to obtain the skeleton and some of the orientations in the causal DAG, and then designs the rest of the experiments in a passive manner. The third approach is referred to as the passive setup by <cit.>, while <cit.> use the term passive for a setting in which the interventions are selected without performing the null experiment.
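The orientation-recovery rule above (an edge's direction is learned when some experiment intervenes on exactly one of its endpoints) can be sketched in a few lines. This is a minimal illustration with our own function names and graph encoding, not code from the paper:

```python
def recoverable_edges(skeleton_edges, experiments):
    # An edge {u, v} of the skeleton is oriented iff some experiment
    # contains exactly one of its two endpoints (the rule stated above).
    return {(u, v) for (u, v) in skeleton_edges
            if any((u in e) != (v in e) for e in experiments)}

# Skeleton a - b - c: the single intervention {b} separates b from both
# of its neighbors, so both edge orientations are recovered.
edges = [("a", "b"), ("b", "c")]
print(recoverable_edges(edges, [{"b"}]))       # both edges
print(recoverable_edges(edges, [{"a", "b"}]))  # only ("b", "c")
```

Note that intervening on both endpoints of an edge (second call) reveals nothing about that edge, which is one reason single-intervention experiments are not wasteful in this sense.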
§.§ Problem Definition

Since in many practical applications it is resource-intensive to perform the experiments sequentially, we will investigate the hybrid approach for the design of experiments. A nice feature of the hybrid approach is that it allows us to parallelize the performance of the experiments. For example, in the study of GRNs introduced in Section <ref>, the GRNs of all E-coli cells are the same and experiments can be performed on different cells simultaneously. A chord of a cycle is an edge not in the cycle whose endpoints are in the cycle. A hole in a graph is a cycle of length at least 4 having no chord. A graph is chordal if it has no hole. Let Z be the set of edges whose orientations are identified after the observational test. Consider the moral graph D\ Z. As noted in <cit.>, D\ Z consists of a set of disjoint chordal moral graphs {G_1, G_2, ..., G_K} (see Figure <ref> as an example). To learn the structure, it suffices to learn {G_1, G_2, ..., G_K}. We focus on one such graph, say G. Hence, in the remainder of the paper, G is assumed to be a moral chordal DAG. Let S be the set of all edges in a chordal graph which do not belong to any triangle. We refer to a connected component of G\ S of order larger than one as a “cyst”. We assume that the number of triangles is negligible compared to the order of the chordal graph. Formally, we assume that the proportion of the number of triangles to the order of G goes to zero as n=|V(G)| tends to infinity. Assumption <ref> implies that both the number of cysts and their orders are negligible. As mentioned in Section <ref>, structures with this property occur frequently in genomic and other applications. As mentioned earlier, in some scenarios the experimenter may be restricted to performing a limited number of experiments. Hence, we focus on the following problem: suppose the experimenter is allowed to perform M experiments, each of size k=1.
What portion of the graph can be reconstructed on average and in the worst case? We shall distribute our total budget of M experiments over the components {G_1, G_2, ..., G_K} proportionally to their size. Thus, we assume that we are capable of performing m experiments ℰ={ℰ_1, ℰ_2, ..., ℰ_m}⊆ℰ_total on component G. Consider the graph obtained by contracting all the vertices of a cyst into a single vertex. Clearly, the contracted graph is a tree, and hence we work on a moral tree structure after the contractions. If we are required to intervene on a vertex in the resulting tree that corresponds to a cyst, we will intervene on all the vertices of that cyst. This is in some cases necessary, as shown in Figure <ref>. Note that after intervening on all the nodes of a cyst, we recover all the orientations inside the cyst. We emphasize that some edge orientations remain unidentified in the cysts that we have not intervened on; however, due to Assumption <ref>, the number of such edges will not scale with n. Also, due to Assumption <ref>, we can count intervening on a contracted cyst as one intervention without impacting the scaling results. A root variable is a variable for which the number of edges entering that variable is zero. A moral chordal graph G has only one root, unless all its roots belong to the same cyst. It suffices to prove that we cannot have two roots in two separate cysts. We prove this by contradiction. Suppose roots r_1 and r_2 exist in two separate cysts. Since the component is assumed to be connected, there exists a vertex v where a directed path from r_1 to v meets a directed path from r_2 in v. Let the parents of v on these paths be p_1 and p_2, respectively. If p_1 and p_2 are not adjacent, we will have a v-structure on the set {p_1,v,p_2}, which is a contradiction. Otherwise, p_1 and p_2 are adjacent and {p_1,v,p_2} are in a cyst. Without loss of generality, assume p_1 is a parent of p_2. Let p_3 be the parent of p_2 on the path from r_2.
In this case, we will have the same argument for the set {p_1,p_2,p_3}, and we either have a v-structure, or p_3 is also in the cyst containing v. Continuing this argument, we either have a v-structure or r_1 and r_2 are in the same cyst as v, which is a contradiction. In a moral chordal graph, randomizing the root suffices to fully learn the orientations of all the edges not belonging to any cysts. Let r be the root vertex. Consider an edge e=(u,v). Since e does not belong to any directed cycle, and there is a directed path from r to u and also a directed path from r to v, e should be directed from u to v; otherwise it contradicts Lemma <ref>. Therefore, in the sequel, we focus on a moral tree structure, and from Lemma <ref>, we know that randomizing the root variable will give us all the orientations. We model this problem as follows. Let 𝒫={P^θ:θ∈Θ} be the set of probability distributions over the location of the root of the tree. We assume that the probability distributions of interest are all positive. The following remark is a consequence of Lemma <ref>. For each location of the root, only one moral tree is consistent with the skeleton of the tree. Therefore, 𝒫 is the set of probability distributions over the realizations of the moral tree. Define U(ℰ) as the set of edges whose orientation is not found after performing the experiment set ℰ. For the given skeleton obtained from the observational test, let T_v be the moral tree of order n with vertex v as its root. We define the loss of an experiment set ℰ on T_v as l(ℰ,T_v)=|U(ℰ)|, and the average loss of the experiment set under distribution P^θ as L_θ(ℰ)=∑_v P^θ(v) l(ℰ,T_v). The problem of finding the best experiment set for the worst case can be stated as the following minimax problem: min_ℰ max_θ L_θ(ℰ), s.t. |ℰ|=m, |ℰ_i|=1 ∀ i: 1≤ i≤ m. In some real-life applications, the experimenter may have prior knowledge about the possible location of the root in the tree.
That is, the probability distribution P^θ∈𝒫 over the location of the root in the tree is known. In such a setting, we can investigate the following Bayesian version of the problem: min_ℰ L_θ(ℰ), s.t. |ℰ|=m, |ℰ_i|=1 ∀ i: 1≤ i≤ m.

§ OPTIMAL SOLUTION

Consider a moral tree T of order n. In this structure, every non-root vertex has incoming degree d^-(v)=1, the root vertex r has incoming degree d^-(r)=0, and recall that intervening on the root identifies the whole tree. Assume the experiment set ℰ={{I^(1)_1},...,{I^(m)_1}} was performed on the moral tree T. Let ℐ={I^(1)_1,...,I^(m)_1} be the set of variables on which we intervened. In this case, the subgraph T\ℐ will be a forest containing J components {C_1,...,C_J}. Performing experiment set ℰ on T_v rooted at v, the loss is l(ℰ,T_v) = 0 if v∈ℐ, and l(ℰ,T_v) = |C_j|-|B_j| if v∈ C_j, where C_j∈{C_1,...,C_J}, B_j=C_j∩ N(ℐ), and N(ℐ) denotes the set of neighbors of variables in ℐ. After performing a single intervention I=X, the orientation of all the edges entering the descendants of X and the edges entering X itself will be recovered. Thus l(X,T_v)=|ND_v(X)|-1_{X≠ v}. As a result, intervening on the variables in ℐ, if v∈ℐ, since |ND_v(v)|=0 the loss is equal to zero; otherwise, if v∈ C_j we will have: l(ℰ,T_v)=|⋂_X∈ℐ ND_v(X)|-|⋂_X∈ℐ ND_v(X)∩ N(ℐ)| = |C_j|-|B_j|. The average loss can be bounded as follows: ∑_j=1^J P^θ(C_j)|C_j|-m ≤ L_θ(ℰ) ≤ ∑_j=1^J P^θ(C_j)|C_j|-1. Using Lemma <ref>, we have L_θ(ℰ)=∑_v P^θ(v) l(ℰ,T_v)=∑_v∈ℐ P^θ(v)×0 +∑_j=1^J∑_v∈ C_j P^θ(v)(|C_j|-|B_j|)=∑_j=1^J(|C_j|-|B_j|)∑_v∈ C_j P^θ(v)=∑_j=1^J P^θ(C_j)|C_j| -∑_j=1^J P^θ(C_j)|B_j|. Note that for all j, 1≤|B_j|≤ m, and the result is immediate. Since we have assumed that m≪ n, in the sequel we will focus on minimizing L̂_θ(ℰ)=∑_j=1^J P^θ(C_j)|C_j|. In the special case of uniform P^θ, the function L̂ can be written as L̂_θ(ℰ)=(1/n)∑_j=1^J|C_j|^2. Bayesian Setting. In the Bayesian setting, we seek the set of experiments that minimizes L̂_θ.
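The loss L̂_θ(ℰ)=∑_j P^θ(C_j)|C_j| can be evaluated directly for a candidate intervention set ℐ by computing the components of the forest T\ℐ. A minimal sketch, with our own encoding of the tree as an undirected adjacency dictionary (not code from the paper):

```python
from collections import deque

def loss_hat(adj, intervened, prob):
    # L-hat(E) = sum_j P(C_j) * |C_j| over the components C_j of T \ I.
    seen = set(intervened)
    total = 0.0
    for start in adj:
        if start in seen:
            continue
        # BFS collects one component of the forest left after removing I.
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.append(v)
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        total += sum(prob[v] for v in comp) * len(comp)
    return total

# Path 0-1-2-3-4 with a uniform prior: intervening on the middle vertex
# leaves two components of size 2, so L-hat = (1/5)(2^2 + 2^2) = 1.6.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
uniform = {v: 1 / 5 for v in adj}
print(loss_hat(adj, {2}, uniform))
```

For a uniform prior this reproduces the formula L̂_θ(ℰ)=(1/n)∑_j|C_j|^2 stated above.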
Minimizing L̂_θ exactly can be done by checking all n-choose-m possible vertex sets of size m. This brute-force solution is computationally intensive. In Section <ref> we will introduce an efficient approximation algorithm instead. Minimax Setting. As mentioned in Section <ref>, in the minimax setting we are interested in finding the optimal ℰ in min_ℰ max_θ L̂_θ(ℰ). Note that the loss is maximized if all the probability mass of the root is put on the largest component. That is, min_ℰ max_θ L̂_θ(ℰ)=min_ℰ max_θ ∑_j=1^J P^θ(C_j)|C_j| ≤ min_ℰ max_j|C_j|. Therefore, it suffices to choose a variable set ℐ={X_1,...,X_m} that minimizes max_j |C_j|. We can again use a brute-force solution, which suffers from the same computational complexity issues as in the Bayesian case. We will propose an approximation algorithm for the minimax approach in Section <ref>.

§ EFFICIENT LEARNING ALGORITHM

In this section, we propose an algorithm for finding experiment sets efficiently in both Bayesian and minimax settings. First, we define the concept of a separator vertex, which plays a key role in the proposed algorithm. We shall see that in our method we require the prior on the location of the root variable. Using separators, we propose the probability balancer (ProBal) algorithm, which allows for experiment design. The main idea is to iteratively decompose the tree into two subtrees, referred to as segments, sharing a separator vertex. In Subsection <ref>, we consider the minimax extension in which no prior is available to the experimenter. Let the lobes of a vertex v in a tree be the components remaining in the graph after removing v. A vertex v in a tree T is a separator if the probability of the root being in each of its lobes is less than 1/2. There exists a separator vertex in any tree. Consider any arbitrary vertex v_1 in the tree. If all its lobes have probability less than 1/2, then it is a separator; otherwise, only one of its lobes, say lobe b, has probability larger than or equal to 1/2.
Consider v_2, which is the neighbor of v_1 in b. Since the probabilities should add up to 1, the lobe connected to v_2 through v_1 should have probability less than 1/2, and hence we continue by checking the probability of the other lobes of v_2. This process will result in finding a separator because there are no cycles in a tree and we have assumed that the tree is finite. The lobes of a separator vertex can be partitioned into two wings such that the probability of the root being in each of them is less than 2/3. Suppose v is a separator vertex with lobes b_1,⋯,b_l sorted in ascending order of their probabilities P^θ(b_i)=p_i. We also add lobe b_0 with p_0=0. Let j be the largest index such that ∑_i=0^j p_i≤2/3. If j=l, any arbitrary partitioning of the lobes is acceptable. Assume j<l. If ∑_i=j+1^l p_i≤2/3, then {{b_0,⋯,b_j},{b_j+1,⋯,b_l}} is the desired partitioning. Otherwise, we have ∑_i=0^j+1 p_i>2/3 and ∑_i=j+1^l p_i>2/3, which implies that p_j+1>1/3, and since v is a separator, p_j+1<1/2. Therefore, P^θ({b_0,⋯,b_l}\{b_j+1})<2/3, and {{b_j+1},{b_0,⋯,b_l}\{b_j+1}} is the desired partitioning. The ProBal algorithm searches for m variables in a given tree T in an iterative manner. It starts with the original tree as the initial segment and in each round breaks it into smaller segments in the following manner. Let 𝒢 be the set containing all the segments. At each round the algorithm picks the segment G_m with the largest P^θ(G_m) in the set 𝒢, finds the most suitable separator (described below), and adds it to the intervention set ℐ (if it is not already in ℐ). This is done using the function FindSep. Then, using the function Div, the algorithm divides G_m into two new segments G_1 and G_2, and replaces G_m with {G_1,G_2} in the set 𝒢, unless they have a star structure with the used separator as the center vertex.
The reason that we ignore the star structures is that, since the center is already chosen to be intervened on, the orientations of all the edges of the structure are discovered and there is no need for further interventions on the other variables in the star. The process of choosing separators continues until m variables are collected or the whole graph is resolved. The set ℐ will be returned as the set of intervention variables. The functions FindSep(·) and Div(·) are described below. * In the function FindSep(·), first we normalize the probability values in the input segment to obtain distribution P^θ. For any variable X∈ G_m, we compute the probability of the root being in the lobes of X, and partition the lobes into two wings W_1^*(X) and W_2^*(X) such that the probability of the root being in the wings is as balanced as possible. Define the unbalancedness of X as s(X) ≜ |P^θ(W_1^*(X)) - (1/2)P^θ(W_1^*(X)∪ W_2^*(X))|. The function returns the variable X^* with minimum s(X). * The function Div(·) outputs segments G_1=G_m\ W_2^*(X^*) and G_2=G_m\ W_1^*(X^*). In both segments, it sets the probability of X^* to zero. (a) A leaf will not be chosen as the separator more than once. (b) If the chosen separator X^* in G_m is not a leaf, and the segments G_1 and G_2 are produced, then max{|V(G_1)|,|V(G_2)|}<|V(G_m)|. (a) Suppose a leaf variable X is chosen as the separator in one of the rounds of the algorithm. Consequently, its probability will be set to zero in the segment containing it.
Since X is a leaf, one of its wings will be empty, and hence after normalization of the probability in function FindSep(·), the wings of X will have probabilities 0 and 1, while any other variable X' in the segment with non-zero probability will have wings that balance the measure 1-P^θ(X'). (Note that the variables in this segment cannot all have zero probability, because the distribution is assumed to be positive; also, all the other variables could not have been picked as separators before, otherwise the algorithm would not have kept this segment.) Therefore, the function FindSep(·) will not choose X. (b) Since X^* is not a leaf, |V(W_2^*(X^*))|≠ 0, and since |V(G_1)|=|V(G_m)|-|V(W_2^*(X^*))|, we have |V(G_1)|<|V(G_m)|. Similarly, |V(G_2)|<|V(G_m)|. The ProBal algorithm runs in time O(n^3). One can find the probability of each lobe of a vertex in linear time by running the depth-first search (DFS) algorithm. Therefore, FindSep(·) runs in time O(n^2). Additionally, Div(·) runs in o(n^2). From Lemma <ref>, the ProBal algorithm will end in at most n rounds. Thus, the time complexity of the algorithm is O(n^3).

§.§ Analysis

In this subsection we find bounds on the performance of the ProBal algorithm. We will show that in the case of a uniform prior, the proposed algorithm is a ρ-approximation algorithm, where ρ is independent of the order of the graph. After running the ProBal algorithm for r rounds and obtaining the experiment set ℰ, the loss L̂_θ(ℰ) is upper bounded as L̂_θ(ℰ)≤(2/3)^⌊log_2(r+1)⌋ n. First we claim that with r=2^k-1, the set 𝒢 defined in the algorithm, which contains at most 2^k segments, has the property max_G∈𝒢 P^θ(G)≤(2/3)^k. We use induction to prove this claim. The base of the induction is clear from Proposition <ref> and the fact that we set the probability of the separator itself to zero in function Div(·). For the induction step we need to show that after r=2^(k+1)-1 rounds, max_G∈𝒢 P^θ(G)≤(2/3)^(k+1).
By the induction hypothesis, with r=2^k-1 rounds, max_G∈𝒢 P^θ(G)≤(2/3)^k. Now, after the extra 2^k rounds, if a different segment were divided in each of those rounds, by Proposition <ref> the desired result could be concluded. Otherwise, at least one of the segments, say G', was not divided while there exists another segment, say G”, which was divided more than once. This implies that in the second dividing of G”, at least one of its sub-segments, say G_1”, obtained from the first division, had a larger probability than P^θ(G'). But by Proposition <ref>, we have P^θ(G_1”)≤(2/3)(2/3)^k. Therefore, P^θ(G')≤(2/3)^(k+1). Each component C_j belongs to a segment G∈𝒢. Hence, by the claim above, for all j, P^θ(C_j)≤(2/3)^k. This yields L̂_θ(ℰ)=∑_j=1^J P^θ(C_j)|C_j|≤(2/3)^k n ≤(2/3)^⌊log_2(r+1)⌋ n. In the following we prove that in the case of a uniform prior on bounded-degree graphs, ProBal is a ρ-approximation algorithm, where ρ is independent of the order of the graph. To this end, we first obtain a lower bound on the loss of the optimum algorithm introduced in Section <ref>. Consider a tree T of order n with maximum degree Δ(T) and a uniform probability distribution over the location of the root of the tree. Then for any experiment set ℰ, we have (n-m)^2/(n(Δ(T)m-1)) ≤ L̂_θ(ℰ). For J components, from Corollary <ref>, the loss function may be lower bounded as follows: L̂_θ(ℰ)=(1/n)∑_j=1^J|C_j|^2 ≥ (1/n)J((n-m)/J)^2. Since the tree is connected, the maximum number of components created by an experiment set of size m is Δ(T)m-1, which implies the result. Consider a tree T of order n with maximum degree Δ(T). In the case of a uniform prior distribution, if m≤ϵ n, ProBal is a ρ-approximation algorithm, where ρ=(3/2)(m∧ r)^(log_2(2/3))(Δ(T)m-1)/(1-ϵ)^2, where m∧ r ≜ min{r,m}. ρ is constant in n, polynomial of degree less than 0.42 in m, and linear in Δ(T).
The proof is immediate from Theorem <ref> and Lemma <ref>.

§.§ Minimax Setting

As shown in (<ref>), in the minimax setting we need to minimize the size of the largest component generated by the set of intervention variables. If in ProBal, instead of balancing the probability of existence of the root variable in the wings, we chose the separators to balance the number of vertices in the wings, then regardless of the probability distribution P^θ, it would guarantee that in each round of the algorithm the largest segment G_m would be divided into two segments, each of which would contain at most (2/3)|V(G_m)|+1 vertices. Therefore, the orientation of at most (2/3)|V(G_m)| of the edges in G_m may not be found. This is equivalent to running the ProBal algorithm with a uniform distribution P^θ. Consider a tree T of order n with maximum degree Δ(T). Let L̂_W(ℰ) be the minimax loss of the experiment set ℰ. (a) After running ProBal for r rounds and obtaining the experiment set ℰ, the loss L̂_W(ℰ) is upper bounded as follows: L̂_W(ℰ)≤(2/3)^⌊log_2(r+1)⌋ n. (b) For any experiment set ℰ, we have (n-m)/(Δ(T)m-1) ≤ L̂_W(ℰ). (c) If m≤ϵ n, ProBal is a ρ-approximation algorithm, where ρ=(3/2)(m∧ r)^(log_2(2/3))(Δ(T)m-1)/(1-ϵ). Since the performance of ProBal in the minimax case is the same as in the Bayesian case with a uniform distribution, the proof of part (a) follows from Theorem <ref>. For part (b), since the tree is connected, the maximum number of components created by an experiment set of size m is Δ(T)m-1, and we can minimize the order of the largest by making the orders equal. That is, the order of the largest component is at least (n-m)/(Δ(T)m-1). The proof of part (c) is immediate from parts (a) and (b).

§ EXPERIMENTAL RESULTS

In this section we evaluate the performance of our proposed algorithm on both synthetic and real data.
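Before turning to the experiments, the vertex-count balancing used in this minimax variant can be illustrated with a small sketch. This is our own simplification, not the paper's FindSep: instead of partitioning lobes into two wings, it simply picks the vertex whose removal minimizes the largest remaining component.

```python
from collections import deque

def largest_lobe(adj, v):
    # Size of the biggest component of the tree after removing vertex v.
    seen, biggest = {v}, 0
    for s in adj[v]:
        if s in seen:
            continue
        size, queue = 0, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        biggest = max(biggest, size)
    return biggest

def minimax_separator(adj):
    # Vertex-count analogue of a separator: minimize the worst lobe.
    return min(adj, key=lambda v: largest_lobe(adj, v))

# Path 0-1-2-3-4: cutting at the middle vertex leaves lobes of size 2 and 2.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(minimax_separator(adj), largest_lobe(adj, 2))  # 2 2
```

Iterating this choice on the resulting segments mirrors the guarantee above that each round leaves segments of at most (2/3)|V(G_m)|+1 vertices.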
§.§ Synthetic Data

We generated 1000 instances of trees based on the Barabási-Albert (BA) model <cit.> and on a bounded-degree (BD) model created according to a Galton-Watson branching process <cit.>. For both models we considered uniform and degree-based distributions for the location of the root of the tree. In the degree-based distribution, the probability of vertex v being the root is proportional to its degree. Figure <ref> depicts the loss of ProBal as well as the loss of the optimal solution with respect to the order of the tree and the number of interventions. As shown in this figure, in all cases the performance of the ProBal algorithm is very close to the optimal solution. There are worst-case scenarios where special graphs with specifically designed distributions can reach the upper bounds, but as seen in Figure <ref>, for many distributions the ProBal algorithm performs much better than what is predicted by our theoretical upper bound.

§.§ Real Data

We examined the performance of ProBal on real data. The graph that we work on is the GRN of the E-coli bacterium, which we sourced from the RegulonDB database <cit.>. In a GRN, the transcription factors are the main actors in activating genes. The interactions between transcription factors and regulated genes in a species' genome can be represented by a directed graph. In this graph, links are drawn whenever a transcription factor regulates a gene's expression. Moreover, some of the vertices have both functions, i.e., they are both transcription factors and regulated genes. Figure <ref> depicts the normalized average loss of ProBal with respect to the total budget for the number of interventions. As seen in this figure, seven interventions are enough to reconstruct more than 95 percent of the network.

§ CONCLUSION

We studied the problem of experiment design for causal inference when only a limited number of experiments are available.
In our model, each experiment consists of intervening on a single vertex, which makes the model suitable for applications in which intervening on several variables simultaneously is not feasible. Also, in our model, experiments are designed merely on the basis of an initial, purely observational test, which enables the experimenter to perform the interventional tests in parallel. We assumed that the underlying structure on the variables contains a negligible number of triangles compared to the size of the graph. This assumption is satisfied in many applications, such as the structure of the GRN of some bacteria.We addressed the following question: “How much of the causal structure can be learned when only a limited number of experiments are available?” We characterized the optimal solution to this problem and then proposed an algorithm that designs the experiments in a time-efficient manner. We showed that for bounded-degree graphs, in both the minimax setting and the Bayesian setting with uniform prior, our proposed algorithm is a ρ-approximation algorithm, where ρ is independent of the order of the underlying graph.We examined our proposed algorithm on synthetic as well as real datasets. The results show that the performance of our proposed algorithm is very close to the optimal theoretical solution. One direction of future work is to extend this experiment design problem to even more general causal structures. § PROOF OF THEOREM <REF>First, we claim that with r=2^k-1, the set 𝒢 defined in the algorithm, which contains at most 2^k segments, has the property max_G∈𝒢P^θ(G)≤(2/3)^k. We use induction to prove this claim. The base of the induction is clear from Proposition <ref> and the fact that we set the probability of the separator itself to zero in the function Div(·). For the induction step, we need to show that after r=2^(k+1)-1 rounds, max_G∈𝒢P^θ(G)≤(2/3)^(k+1). By the induction hypothesis, with r=2^k-1 rounds, max_G∈𝒢P^θ(G)≤(2/3)^k.
Now, after the extra 2^k rounds, if a different segment is divided in each of those rounds, the desired result follows from Proposition <ref>; otherwise, at least one of the segments, say G', is not divided, while another segment, say G”, is divided more than once. This implies that in the second division of G”, at least one of its sub-segments, say G_1”, obtained from the first division, had a larger probability than P^θ(G'). But by Proposition <ref>, we have P^θ(G_1”)≤(2/3)(2/3)^k. Therefore, P^θ(G')≤(2/3)^(k+1).Each component C_j belongs to a segment G∈𝒢. Hence, by the claim above, for all j, P^θ(C_j)≤(2/3)^k. This concludes the proof:L̂_θ(ℰ)=∑_j=1^JP^θ(C_j)|C_j|≤(2/3)^kn ≤(2/3)^⌊log_2(r+1) ⌋n.
Instituto de Física de São Carlos, Universidade de São Paulo, Caixa Postal 369, 13560-970 São Carlos, São Paulo, Brazil Departament de Genètica i de Microbiologia, Grup de Genòmica, Bioinformàtica i Biologia Evolutiva (GGBE), Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona), Spain

The idea that a genetically fixed behavior evolved from the once differential learning ability of individuals that performed the behavior is known as the Baldwin effect. A highly influential paper [Hinton G.E., Nowlan S.J., 1987. How learning can guide evolution. Complex Syst. 1, 495–502] claimed that this effect can be observed in silico, but here we argue that what was actually shown is that the learning ability is easily selected for. We then demonstrate that the Baldwin effect does happen in the in silico scenario by estimating the probability of, and waiting times for, the learned behavior to become innate. Depending on parameter values, we find that learning can increase the chance of fixation of the learned behavior by several orders of magnitude compared with the non-learning situation. The revival of the Baldwin Effect Mauro Santos December 30, 2023 =================================§ INTRODUCTION As pointed out by Maynard Smith, a recurrent issue in evolutionary biology is whether natural selection can explain the existence of complex structures that are of value to the organism only when fully formed <cit.>.
This is a favorite topic of intelligent design creationists because it goes directly to the heart of Darwin's theory of evolution <cit.>: “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.” Although Darwin could find no such case, intelligent design creationists coined the term `irreducible complexity' <cit.> (Behe's focus is on adaptation at the molecular level) to argue that evolution cannot account for the intricate engineering found in all organisms: remove any part of a complex structure and the whole thing stops working. These `new' arguments were refuted by showing that a standard Darwinian process of descent with modification can explain the eventual complexity of adaptive molecular features <cit.>. On another level, a number of influential evolutionary psychologists and cognitive scientists claim that the human mind and human language are examples of such complex structures, and they do not feel comfortable with putative neo-Darwinian explanations <cit.>. These scholars obviously do not embrace the irreducible-complexity argument of creationists. Instead, they assert that a much-neglected process during the Modern Synthesis of the 1930s and 1940s should be at work to explain the evolutionary emergence of such high-order phenotypic traits: Baldwinian evolution.The idea of Baldwinian evolution has been available since the end of the 1800s and involves the notion that traits learned or acquired in earlier generations could become genetically fixed in the course of evolution <cit.>; this is the so-called Baldwin effect <cit.>.
Because of the obvious Lamarckian scent and the absence of a Darwinian mechanism to implement the idea, the Baldwin effect remained on the fringe of evolutionary biology until 1987, when Hinton and Nowlan offered a simple evolutionary algorithm that showed how learning could guide evolution <cit.> (see also <cit.>). However, perhaps because of the computational limitations at the time, the original simulations as well as their numerous replications (e.g., <cit.>) solely offered evidence that the inheritable flexible traits, which can be learned during the organism's lifetime, are selected. Here we run Hinton and Nowlan's evolutionary algorithm until the finite-size population becomes genetically homogeneous and show that, for the original algorithm parameters <cit.>, learning increases the fixation probability of the target fixed trait by 6 orders of magnitude with respect to the non-learning situation, thus turning a virtual impossibility into an unremarkable event. This is perhaps the essence of the Baldwin effect as a theoretical explanation for the non-reducible complex structures referred to above by evolutionary psychologists and cognitive scientists. It should be stressed, however, that we do not claim that these scholars are right; we simply demonstrate something they have taken for granted but that has not been proved in any of the numerous papers discussing Hinton and Nowlan's work. The rest of this paper is organized as follows. In Section <ref>, we describe Hinton and Nowlan's evolutionary algorithm and argue that following the short-time dynamics for a few runs is not enough to show the Baldwin effect. This effect is shown in Section <ref>, where we present the statistics of the fixation of the target fixed trait using a very large number of runs in which the dynamics is followed until the complete homogenization of the population.
In Section <ref>, we show that selection at the individual level does not optimize the learning parameter of Hinton and Nowlan's model. Finally, Section <ref> is reserved for our concluding remarks.§ SIMULATIONS OF HINTON AND NOWLAN'S MODELIn their proof-of-concept paper, Hinton and Nowlan <cit.> proposed to test the theoretical plausibility of the Baldwin effect by simulating the evolution of a population of N haploid sexual individuals, each represented by a chromosome of L loci with three alleles at each locus: 1, 0, and ?. It is assumed that the L loci code for neural connections, so that alleles 1 specify innately correct connections, alleles 0 innately incorrect connections, and alleles ? guessable (plastic) connections containing a switch that can be on (right) or off (wrong). Learning consists of giving each individual up to a maximum of G trials, in each of which a random combination of switch settings (with equal probability for on and off) is tried. Those individuals that have a 0 allele at any locus will never produce the right connectivity of the neural network. On the other hand, if the combination of switch settings and the genetically specified connections produces the correct neural network (i.e., a fully connected neural network) at trial g ≤ G, the individual stops guessing and becomes mature for mating.To determine the mating probability, each individual i=1,2, …,N is evaluated according to a fitness function w_i. In the case that individual i has the L alleles correctly set innately (i.e., it has the correct genotype), it is assigned the maximum fitness value, w_i = L. In the case that individual i has P correct alleles and Q = L - P plastic alleles, its fitness is a random variable given by w_i = 1 + ( L - 1)( 1 - g/G) if g ≤ G and w_i = 1 otherwise. Here the number of guesses g =1,…, ∞ is a geometrically distributed random variable with success probability 1/2^Q.
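In code, the fitness rule just described might be sketched as follows (Python is our choice here; the function and variable names are ours, not the authors'):

```python
import random

def number_of_guesses(Q, rng):
    """Draw g from a geometric distribution with success probability 1/2**Q."""
    p = 1.0 / 2 ** Q
    g = 1
    while rng.random() >= p:
        g += 1
    return g

def fitness(chromosome, G, rng):
    """Fitness w_i of one individual; alleles are '1' (innately correct),
    '0' (innately incorrect) and '?' (plastic)."""
    L = len(chromosome)
    if '0' in chromosome:
        return 1.0                  # basal fitness: the network can never be correct
    Q = chromosome.count('?')
    if Q == 0:
        return float(L)             # correct genotype: maximum fitness
    g = number_of_guesses(Q, rng)   # learning: guess until the setting is right
    return (1.0 + (L - 1) * (1.0 - g / G)) if g <= G else 1.0

rng = random.Random(1)
assert fitness('1' * 20, 1000, rng) == 20.0
assert fitness('0' + '1' * 19, 1000, rng) == 1.0
assert 1.0 <= fitness('1' * 18 + '??', 1000, rng) <= 20.0
```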
Hence, even if the individual has learned the correct setting of neural switches, its fitness is lower than that of an individual born with the correct setting. The difference is the cost of learning, γ_g = g(L -1 )/G for g ≤ G and γ_g = L-1 otherwise. Finally, in the case that individual i has at least one innately incorrect allele, it is assigned the basal fitness value w_i = 1. The generations do not overlap and the population size is fixed, so that to create the next generation from the current one we must perform exactly N matings. The parents in a mating are two different individuals chosen at random from the current generation with probability proportional to their fitness. The single offspring of each mating is generated by applying the one-point crossover operation: we pick one point 1 ≤ m ≤ L-1 at random along the parents' chromosomes and form one offspring chromosome by taking all alleles from the first parent up to the crossover point m, and all alleles from the second parent beyond the crossover point. Thus the offspring will always be a recombinant chromosome. Of course, none of the learning is passed on to the offspring, which inherits the same allelic configuration that its parents had at the different loci. In the absence of learning (G=1 in Eq. (<ref>)), we have a `needle-in-the-haystack' problem for which no search mechanism can do much better than a random search on the configuration space. However, provided the initial frequency p_1 of alleles 1 is not too low (say, p_1 > 0.4), for the parameter setting N=1000 and L=20 used in Hinton and Nowlan's simulations the evolutionary algorithm can produce the correct genotype with a sporting chance and, somewhat surprisingly, once this individual is produced it rapidly takes over the population and reaches fixation, despite the disrupting effect of recombination <cit.>.
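The reproduction step described above, fitness-proportional parent selection followed by one-point crossover, can be sketched as follows (again with invented names; the check that the two parents are distinct individuals is omitted for brevity):

```python
import random

def select_parent(population, weights, rng):
    """Pick a parent with probability proportional to its fitness."""
    return rng.choices(population, weights=weights, k=1)[0]

def one_point_crossover(parent_a, parent_b, rng):
    """Single recombinant offspring from one crossover point 1 <= m <= L-1."""
    L = len(parent_a)
    m = rng.randrange(1, L)
    return parent_a[:m] + parent_b[m:]

rng = random.Random(7)
child = one_point_crossover('1' * 10, '?' * 10, rng)
assert len(child) == 10
# prefix comes from the first parent, suffix from the second
assert child[0] == '1' and child[-1] == '?'
```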
In a biological context, the `needle-in-the-haystack' fitness landscape entails the existence of structures that are useless until completely formed and whose evolution is difficult to explain within a purely Darwinian perspective that, roughly speaking, views evolution as a hill-climbing process on a fitness landscape <cit.>. Although such structures are unlikely to exist in nature – we adhere to the standard view that any complex biological organ evolved from less complex but nonetheless useful (or differently functioning) organs – the Baldwin effect offers a theoretical framework to consider these landscapes within a purely Darwinian perspective. In Hinton and Nowlan's scenario, learning creates an increased-fitness zone around the `needle' by allowing individuals whose connections are near perfect to learn the correct setting <cit.>. Somewhat surprisingly, although Baldwin's suggestion that behavioral goals in earlier generations could become genetically determined in the course of evolution clearly points to the necessity of looking at the fixation probability of the targeted traits, the quantitative studies around Hinton and Nowlan's work mainly focused on the time evolution of the allele frequencies. These studies did not analyze the fixation of allele 1 at all the L loci. For instance, Fig. <ref> shows a typical result of a single run using the same parameters and time scale as the original simulations of Hinton and Nowlan (see Figure 2 of Ref. <cit.>). The population is able to quickly learn the solution as innately incorrect alleles 0 are eliminated from the population, although the frequency of the plastic alleles ? remains relatively high. The trouble is that this type of graph shows only that the plastic alleles (and, consequently, learning) are selected, which is not exactly an extraordinary result. To demonstrate the Baldwin effect, the plastic alleles should be eliminated as well. For instance, in the particular run shown in Fig. <ref>, an allele ?
fixed at one of the L loci at generation t=91, thus precluding the fixation of the correct genotype. Indisputable evidence of the Baldwin effect requires that we run the simulations until fixation of the correct genotype and average the results over very many runs. This is the aim of this paper. § FIXATION PROBABILITY AND MEAN TIME TO FIXATIONTo calculate the fixation probabilities we must carry out a large number of independent simulations and follow the dynamics until all N chromosomes become identical (recall that in their original simulations Hinton and Nowlan neglected mutation). We denote by P_1 the fraction of runs in which we observed the fixation of the correct genotype. Note that since we are interested in P_1 only, we can abort a run whenever the fixation of alleles 0 or ? is observed at any locus. For instance, the run shown in Fig. <ref> was aborted at generation t=91, when the allele ? fixed at one of the L loci. Hence, only the runs that lead to the fixation of the correct genotype must be followed until the complete homogenization of the population. Another trick that greatly speeds up the simulation of the learning process consists of exploiting the fact that the number of guesses g is a random variable following a geometric distribution with success probability 1/2^Q, where Q is the number of plastic alleles in the genome of the learner. In fact, we can easily obtain a geometric deviate g ∈{1,2, …} from a uniform deviate r ∈ ( 0,1) using the transformation method <cit.>g = ⌈ln ( 1 - r)/ln ( 1 - 1/2^Q )⌉ where the ceiling-bracket notation ⌈x ⌉ stands for the least integer greater than or equal to x. The number of runs varies from 10^5 to 10^8 to guarantee that a statistically significant number of fixations of the correct genotype occurs.
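The transformation just quoted maps directly into code (a sketch; we guard against the measure-zero case r = 0 and assume Q ≥ 1):

```python
import math
import random

def geometric_deviate(Q, rng):
    """g = ceil(ln(1 - r) / ln(1 - 1/2**Q)) for a uniform deviate r in (0, 1)."""
    p = 1.0 / 2 ** Q           # success probability of the geometric distribution
    r = rng.random()
    g = math.ceil(math.log(1.0 - r) / math.log(1.0 - p))
    return max(g, 1)           # r = 0 would give g = 0; the deviate starts at 1

rng = random.Random(3)
draws = [geometric_deviate(3, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
assert min(draws) >= 1
assert 7.5 < mean < 8.5        # the expected value is 2**Q = 8
```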
This is important because we use that sample to estimate the (conditional) mean time to fixation of the correct genotype, which we denote by T_1. Unless stated otherwise, the initial allele frequencies are set to p_1=p_0 = 0.25 and p_? = 0.5, in accord with Hinton and Nowlan's simulations. Figure <ref> summarizes our findings for the population size N=1000. As the independent variable we choose the parameter G, the reciprocal of which measures the difficulty of learning. For instance, for large G an individual with a few plastic alleles is certain to learn the correct setting of switches, and the learning cost γ_g is typically small. This results in a quasi-neutral evolution scenario, where the fitness of the correct genotype differs very little from the fitness of genotypes with a few plastic alleles. As expected, this situation is unfavorable to the fixation of the correct genotype and, accordingly, the upper panel of Fig. <ref> shows a drop of the fixation probability P_1 in the regime where learning is easy. For small G, only individuals with a very small number of plastic alleles (and no 0 alleles) have a chance of guessing the correct switch setting. For most individuals, learning that setting is nearly impossible. The ineffectiveness of learning in this case is reflected in the very low probability of fixation of the correct genotype. In particular, for G=1 (the non-learning limit) we find that P_1 ≈ 10^-7 for L=20 (see the discussion of Fig. <ref> for the details of this estimate), whereas for G=1000 (the value chosen by Hinton and Nowlan) we find P_1 ≈ 0.167. This gap of 6 orders of magnitude shows that the Baldwin effect in Hinton and Nowlan's scenario can make a virtual impossibility happen with a sporting chance. Once the correct genotype is fixed in the population, there is no trace left of the plastic alleles, which acted as a scaffolding mechanism to aid that fixation.
The lower panel of Fig. <ref> shows that the fixation takes place on a tenable time scale. Contrary to claims that the parameters of the original simulation of Hinton and Nowlan were very carefully selected <cit.> to facilitate the `observation' of the Baldwin effect, the results shown in Fig. <ref> indicate that setting the maximum number of guesses to G=1000 greatly overestimates the values that optimize the fixation probability P_1 or the fixation time T_1. (We emphasize again that the simulations of Hinton and Nowlan offered no evidence of the Baldwin effect – they showed only that learning is selected.) In particular, for the parameters of Fig. <ref> we find that the optimal value of G that maximizes the fixation probability P_1 is G_opt≈ 2^0.4 L. The exponential increase of G_opt with L is expected, since the number of switch settings to be explored by the learning or guessing procedure is 2^p_? L, where p_? = 0.5 is the frequency of the plastic allele in the initial population. For fixed G and N, increasing the chromosome length L always results in a decrease of the probability of fixation (see Fig. <ref>), and for large L we find that P_1 decreases exponentially fast with increasing L. This decrease can be compensated by increasing the population size N, as illustrated in Fig. <ref>. In fact, in the regime where the fixation of the correct genotype is relatively rare, say P_1 < 0.2, this probability is practically unchanged with the increase of L provided that N increases such that the ratio N^5/2^L is kept fixed. In addition, Fig. <ref> shows that the conditional mean time to fixation T_1 is a monotonically increasing function of N.
In the region N ≪ 2^L, we find T_1 ∝ ln N, regardless of the chromosome length L, whereas for large N the fixation time levels off towards asymptotic values that increase with increasing L. The (theoretical) importance of the Baldwin effect is better appreciated when one considers the probability of fixation of the correct genotype, P_1, as a function of the initial frequency of the correct allele, p_1, in the population. In fact, if that allele is widespread in the initial population (say, p_1 ≈ 0.5 for L=20 and N=1000), then the correct genotype has a sporting chance of fixation without the need to invoke Baldwinian evolution <cit.>. Accordingly, in Fig. <ref> we show P_1 against p_1 for the non-learning case (G=1) and for the learning case with G=1000. The initial frequency of the incorrect allele is set to p_0 = p_1, and the initial frequency of the plastic allele is then p_? = 1 - 2p_1. Of course, for p_1=0 we have P_1 = 0, since the fixation of the correct genotype is impossible if the correct allele is not present in the initial population. Most interestingly, these results show that if the plastic allele is rare (i.e., p_1 ≈ 0.5), then learning will actually lessen the odds of fixation of the correct genotype compared with the non-learning scenario. However, when the plastic allele is not under-represented in the initial population, learning can produce an astronomical increase of those odds. For instance, since in the range p_1 > 0.34 the data for L=20 and G=1 are very well fitted by the function P_1 = 2^(a - b/p_1) with a ≈ 17.6 and b ≈ 10.2 (see Fig. <ref>), we can use this fit to estimate the value of P_1 at p_1=0.25 and obtain P_1 ≈ 9 × 10^-8. Recalling that P_1 ≈ 0.167 for G=1000, we conclude that Baldwinian evolution boosts the chances of fixation of the correct genotype by about 6 orders of magnitude for the parameter set used in the original simulation of Hinton and Nowlan.
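For concreteness, the extrapolation just described amounts to the following arithmetic (using the rounded fit parameters quoted above, which give a value close to the estimate in the text):

```python
a, b = 17.6, 10.2                      # fit P_1 = 2**(a - b/p_1) for G = 1, L = 20
p1 = 0.25
P1_no_learning = 2 ** (a - b / p1)     # ~1e-7; matches the quoted 9e-8 up to rounding
P1_learning = 0.167                    # measured fixation probability for G = 1000
boost = P1_learning / P1_no_learning   # roughly 6 orders of magnitude
assert 5e-8 < P1_no_learning < 2e-7
assert 1e6 < boost < 1e7
```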
However, if those authors had used p_1 = 0.1 instead, then that improvement would amount to 24 orders of magnitude.§ ADAPTIVE MAXIMUM NUMBER OF GUESSESWe note that the value of the maximum number of guesses G that maximizes the probability of fixation of the correct genotype (see Fig. <ref>) cannot be selected by the evolutionary dynamics. To check that, instead of assuming that all individuals have the same G, we assign uniformly distributed values of G in the set {1,2, …, 2000 } to the N individuals at generation t=0. More specifically, we introduce an additional locus to the chromosomes, say locus L+1, which stores the value of G. Similarly to the other loci, this G-locus is inherited by the offspring. In the rare event that the correct genotype appears in the randomly assembled initial population, we assign the value G=0 to its G-locus. All the other chromosome types, regardless of their allelic compositions, have their G-loci assigned to random values of G. Figure <ref> shows the probability distribution Π of the values of G for the runs that ended with the fixation of the correct genotype. For 10^5 independent runs that led to the desired fixation, we found that 83% of them resulted in the fixation of the G-locus as well (i.e., the individuals descended from a single ancestor), 16% resulted in two distinct values of G at the G-locus, 0.9% in three values, and 0.1% in four values. The distribution of those values of G is shown in Fig. <ref> together with the initial uniform distribution. The results indicate that the fitness function in Eq. (<ref>) favors individuals with large values of G. In fact, two individuals that used the same number of guesses g to find the correct switch setting will have different fitness if they are assigned different G values.
All else being equal, the fitness function in Eq. (<ref>) increases with increasing G. Recalling that the maximum population fitness is achieved when the correct genotype is fixed in the population, we have here an example where selection at the individual level, which favors large G, conflicts with selection at the population level, which favors intermediate values of G. In fact, it would be interesting to see what value of G would dominate in a metapopulation scenario <cit.>, since, in addition to the pressure of individual selection for the largest possible Gs, there is also a conflict between the value of G that maximizes the fixation probability and the value of G that minimizes the fixation time of the correct genotype (see Fig. <ref>).§ CONCLUSIONSIn our previous work <cit.> we criticized Hinton and Nowlan's paper <cit.> (and also Maynard Smith's <cit.>) on the grounds that they incorrectly assumed that a sexual population would never be able to fix the correct genotype, even if it is discovered many times, because recombination would always disrupt this genotype. By assuming that the frequency of alleles 1 at t=0 was p_1 = 0.5, we proved Maynard Smith's claim that “In a sexual population of 1000 with initial allele frequencies of 0.5, a fit individual would arise about once in 1000 generations … Mating would disrupt the optimum genotype, however, and its offspring would have lost the adaptation. In effect, a sexual population would never evolve the correct settings” to be wrong, because once the good genotype appears it is expected to increase exponentially and eventually fix. However, Fig. <ref> and Fig. <ref> show that if we set the initial frequency of alleles to p_1 = p_0 = 0.25 and p_? = 0.5, as in Hinton and Nowlan's simulations, and do not allow learning, the good genotype has (as expected) very low chances of appearing in the population.
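A one-line estimate (our own back-of-the-envelope arithmetic, not from the paper) makes this point concrete: with p_1 = 0.25 and L = 20, a random chromosome is innately all-correct with probability p_1^L ≈ 9 × 10^-13, so even an initial population of N = 1000 almost surely contains no copy of the good genotype:

```python
p1, L, N = 0.25, 20, 1000
p_individual = p1 ** L                        # chance one random chromosome is all 1s
p_population = 1 - (1 - p_individual) ** N    # chance at least one appears at t = 0
assert p_individual < 1e-12
assert p_population < 1e-9
```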
An interesting result is that the number of queries (fitness evaluations) assumed by Hinton and Nowlan (G ≈ 2^10, because at t=0 the average number of plastic alleles ? per individual is 10) does not maximize the fixation probability of the good genotype (Fig. <ref>). The essence of the fitness function in Eq. (<ref>) is to alter the evolutionary dynamics by smoothing the fitness landscape and providing a direct path of increasing fitness to reach the highest peak. The problem is that the cost of learning, γ_g = g( L - 1)/G, decreases with increasing G, which, in turn, flattens the shape of the fitness function and increases the `neutrality' zone around the peak. This makes the probability of fixation of the good genotype (the Baldwin effect) highly dependent on the maximum number of guesses allowed to each individual (Fig. <ref>). To sum up, our aim in this paper was to demonstrate and to quantify the probability of the Baldwin effect in the in silico scenario devised by Hinton and Nowlan <cit.>, something that has been surprisingly overlooked in the copious literature around this seminal work. Whether this effect offers a Darwin-compliant theoretical explanation for the evolution of non-reducible complex structures, or for the evolutionary emergence of high-order phenotypic traits such as consciousness or language, is ultimately an empirical question. The research of JFF was supported in part by grant 15/21689-2, Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and by grant 303979/2013-5, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). MS is funded by grant CGL2013-42432-P from the Ministerio de Economía y Competitividad (Spain), and grant 2014 SGR 1346 from the Generalitat de Catalunya. This research used resources of the LCCA - Laboratory of Advanced Scientific Computation of the University of São Paulo. Maynard_87 Maynard Smith, J., 1987. Natural selection: when learning guides evolution. Nature 329, 761–762. Darwin_59 Darwin, C., 1859.
On the Origin of Species. John Murray, London, p. 189.
Behe_96 Behe, M.J., 1996. Darwin's Black Box: The Biochemical Challenge to Evolution. Free Press, New York.
Lynch_05 Lynch, M., 2005. Simple evolutionary pathways to complex proteins. Protein Sci. 14, 2217–2225.
Pinker_90 Pinker, S., Bloom, P., 1990. Natural language and natural selection. Behav. Brain Sci. 13, 707–727.
Dennet_91 Dennett, D., 1991. Consciousness Explained. Little, Brown and Company, Boston.
Dennet_95 Dennett, D., 1995. Darwin's Dangerous Idea. Simon and Schuster, New York.
Briscoe_97 Briscoe, E.J., 1997. Co-evolution of language and of the language acquisition device. In: Cohen, P.R., Wahlster, W. (Eds.), Proceedings of the Thirty-Fifth Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics. Somerset, New Jersey, pp. 418–427.
Deacon_97 Deacon, T., 1997. The Symbolic Species: The Co-evolution of Language and the Brain. W.W. Norton, New York.
Pinker_97 Pinker, S., 1997. How the Mind Works. W.W. Norton, New York.
Calvin_00 Calvin, W.H., Bickerton, D., 2000. Lingua ex Machina: Reconciling Darwin and Chomsky with the Human Brain. MIT Press, Cambridge.
Dor_01 Dor, D., Jablonka, E., 2001. How language changed the genes: toward an explicit account of the evolution of language. In: Trabant, J., Ward, S. (Eds.), New Essays on the Origin of Language. Trends in Linguistics: Studies and Monographs. Mouton de Gruyter, Berlin, pp. 151–175.
Yamauchi_04 Yamauchi, H., 2004. Baldwinian Accounts of Language Evolution. Ph.D. Thesis. Univ. Edinburgh, Edinburgh.
Baldwin_96 Baldwin, J.M., 1896. A new factor in evolution. Am. Nat. 30, 441–451.
Morgan_96 Morgan, C.L., 1896. On modification and variation. Science 4, 733–740.
Osborn_96 Osborn, H.F., 1896. Ontogenic and phylogenic variation. Science 4, 786–789.
Simpson_53 Simpson, G.G., 1953. The Baldwin effect. Evolution 7, 110–117.
Hinton_87 Hinton G.E., Nowlan S.J., 1987.
How learning can guide evolution. Complex Syst. 1, 495–502.
Dennet_03 Dennett, D., 2003. The Baldwin effect: a crane, not a skyhook. In: Weber, B.H., Depew, D.J. (Eds.), Evolution and Learning: the Baldwin Effect Reconsidered. MIT Press, Cambridge, pp. 69–79.
Belew_90 Belew, R.K., 1990. Evolution, learning, and culture: computational metaphors for adaptive algorithms. Complex Syst. 4, 11–49.
Fontanari_90 Fontanari, J.F., Meir, R., 1990. The effect of learning on the evolution of asexual populations. Complex Syst. 4, 401–414.
Ackley_91 Ackley, D., Littman, M., 1991. Interactions between learning and evolution. In: Langton, C., Taylor, C., Farmer, D., Rasmussen, S. (Eds.), Proceedings of the Second Conference on Artificial Life. Addison-Wesley, Redwood City, CA, pp. 487–509.
Harvey_93 Harvey, I., 1993. The puzzle of the persistent question marks: a case study of genetic drift. In: Forrest, S. (Ed.), Proceedings of the 5th International Conference on Genetic Algorithms. Morgan Kaufmann, San Mateo, CA, pp. 15–22.
Santos_15 Santos, M., Szathmáry, E., Fontanari, J.F., 2015. Phenotypic Plasticity, the Baldwin Effect, and the Speeding up of Evolution: the Computational Roots of an Illusion. J. Theor. Biol. 371, 127–136.
Press_92 Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, MA.
Fontanari_06 Fontanari, J.F., Santos, M., Szathmáry, E., 2006. Coexistence and error propagation in pre-biotic vesicle models: a group selection approach. J. Theor. Biol. 239, 247–256.
Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further as it permits read/write access to any data on a device undergoing restore—even data not yet restored—by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds.This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few “nines” of availability are added to the system using simple and low-overhead software techniques.§ INTRODUCTIONAdvancements in hardware technology have significantly improved the performance of database systems over the last decade, allowing for throughput in the order of thousands of transactions per second and data volumes in the order of petabytes. Availability, on the other hand, has not seen drastic improvements, and the research goal postulated by Jim Gray in his ACM Turing Award Lecture of a system “unavailable for less than one second per hundred years” <cit.> remains an open challenge.
Improvements in reliable hardware and data center technology have contributed significantly to the availability goal, but proper software techniques are required not only to avoid failures but also to repair failed systems as quickly as possible. This is especially relevant given that a significant share of failures is caused by human errors and unpredictable defects in software and firmware, which are immune to hardware improvements <cit.>. In the context of database logging and recovery, the state of the art has unfortunately not changed much since the early 90’s, and no significant advancements have been achieved on the software front towards the availability goal. Instant restore is a technique for media recovery that drastically reduces mean time to repair by means of simple software techniques. It works by extending the write-ahead logging mechanism of ARIES <cit.> and, as such, can be incrementally implemented on the vast majority of existing database systems. The key idea is to introduce a different organization of the log archive to enable efficient on-demand, incremental recovery of individual data pages. This allows transactions to access recovered data from a failed device orders of magnitude faster than state-of-the-art techniques, all of which require complete restoration of the entire device before access to the application's working set is allowed. The problem of inefficient media recovery in state-of-the-art techniques, including ARIES and its optimizations, can be attributed to two major deficiencies. First, the media recovery process has a very inefficient random access pattern, which in practice encourages excessive redundancy and high-frequency incremental backups—solutions that only alleviate the problem instead of eliminating it.
The second deficiency is that the recovery process is not incremental and requires full recovery before any data can be accessed—on-demand schedules are not possible and there is no prioritization scheme to make most needed data available earlier. Previous work addressed the first problem with a technique called single-pass restore <cit.>, while the present paper focuses on the second one.The effect of instant restore is illustrated in Figure <ref>, where transaction throughput is plotted over time and a media failure occurs after 10 minutes. In single-pass restore, as in ARIES, transaction processing halts until the device is fully restored (the red line in the chart), while instant restore continues processing transactions, using them to guide the restore process (blue and green lines). In a scenario where the application working set fits in the buffer pool (blue line), there is actually no visible effect on transaction throughput. We emphasize that traditional ARIES media recovery would take much longer than the scale used in the diagram; therefore, the baseline used to measure our present work is single-pass restore. More detailed and comprehensive experiments are presented in Section <ref>. In the remainder of this paper, Section <ref> describes related work, both previous work leading to the current design as well as competing approaches. Then, Section <ref> describes the instant restore technique. Finally, Section <ref> presents an empirical evaluation, while Section <ref> concludes this paper. A high-level description of instant restore was previously published in a book chapter <cit.> among related instant recovery techniques. 
The additional contribution here is a much more detailed discussion of the design—including practical implementation aspects—as well as the first empirical evaluation of the technique with an open-source prototype.§ RELATED WORKThis section starts by establishing the scope of our work with respect to failure classes considered in transaction recovery literature and defining basic assumptions. Afterwards, we discuss existing media recovery techniques, focusing mainly on the limitations that will be addressed later in Section <ref>.§.§ Failure classes and assumptionsDatabase literature traditionally considers three classes of database failures <cit.>, which are summarized in Table <ref> (along with single-page failures, a fourth class to be discussed in Section <ref>). In the scope of this paper, it is important to distinguish between system and media failures, which are conceptually quite different in their causes, effects, and recovery measures.System failures are usually caused by a software fault or power loss, and what is lost—hence what must be recovered—is the state of the server process in main memory; this typically entails recovering page images in the buffer pool (i.e., “repeating history” <cit.>) as well as lists of active transactions and their acquired locks, so that they can be properly aborted. The process of recovering from system failures is called restart.Instant restart <cit.> is an orthogonal technique that provides on-demand, incremental data access following a system failure. While the goals are similar, the design and implementation of instant restore require quite different techniques.In a media failure, which is the focus here, a persistent storage device fails but the system might continue running, serving transactions that only touch data in the buffer pool or on other healthy devices. 
If the system and media failures happen simultaneously, or perhaps one as a cause of the other, their recovery processes are executed independently, and, by recovering pages in the buffer pool, the processes coordinate transparently. Readers are referred to the literature for further details <cit.>.The present work makes the same assumptions as most prior research on database recovery. The log and its archival copy reside on “stable storage”, i.e., they are assumed to never fail. We consider media failure on the database device only, i.e., the permanent storage location of data pages. Recovery from such failures requires a backup copy (possibly days or weeks old) of the lost device and all log records since the backup was taken; such log records may reside either in the active transaction log or in the log archive. The process of recovering from media failures is called restore. The following sections briefly describe previous restore methods. §.§ ARIES restoreTechniques to recover databases from media failures were initially presented in the seminal work of Gray <cit.> and later incorporated into the ARIES family of recovery algorithms <cit.>. In ARIES, restore after a media failure first loads a backup image and then applies a redo log scan, similar to the redo scan of restart after a system failure.Fig. <ref> illustrates the process, which we now briefly describe. After loading full and incremental backups into the replacement device, a sequential scan is performed on the log archive and each update is replayed on its corresponding page in the buffer pool. A global minLSN value (called “media recovery redo point” by Mohan et al. <cit.>) is maintained on backup devices to determine the begin point of the log scan.Because log records are ordered strictly by LSN, pages are read into the buffer pool in random order, as illustrated in the restoration of pages A and B in Fig. <ref>. 
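The redo logic of this log scan can be sketched in a few lines. The following is an illustrative toy model only, not the actual ARIES implementation: the record layout, the dict standing in for the buffer pool, and all names are assumptions, and buffer pool eviction (which causes the repeated random reads described above) is omitted for brevity.

```python
# Toy sketch of an ARIES-style media recovery redo pass. Log records carry
# (lsn, page_id, update); pages track their PageLSN so that no update is
# replayed twice. Pages are fetched from the backup in the random order in
# which their log records appear in the LSN-ordered log.

def aries_restore(backup, log_archive, min_lsn):
    """Replay all archived updates with LSN >= min_lsn onto backup pages."""
    buffer_pool = {}
    for lsn, page_id, update in log_archive:    # strictly ordered by LSN
        if lsn < min_lsn:
            continue                            # already reflected in backup
        if page_id not in buffer_pool:          # random read from backup image
            buffer_pool[page_id] = dict(backup[page_id])
        page = buffer_pool[page_id]
        if page["page_lsn"] < lsn:              # replay each update exactly once
            page.update(update)
            page["page_lsn"] = lsn
    return buffer_pool                          # later written back randomly
```

Note that a page's final version is only reached when its last log record is processed, which may be arbitrarily close to the end of the scan.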
Furthermore, as the buffer pool fills up, they are also written in random order into the replacement device, except perhaps for some minor degree of clustering. As the log scan progresses, evicted pages might be read multiple times, also randomly. This mechanism is quite inefficient, especially for magnetic drives with high access latencies. Thus, it is no surprise that multiple hours of downtime are required in systems with high-capacity drives and high transaction rates <cit.>.Another fundamental limitation of the ARIES restore algorithm is that it is not incremental, i.e., pages cannot be restored to their most up-to-date version one-by-one and made available to running transactions incrementally. As shown in the example of Fig. <ref>, the last update to page A may be at the very end of the log; thus, page A will be out-of-date until almost the end of the long log scan. Some optimizations may alleviate this situation (e.g., reusing checkpoint information), but there is no general mechanism for incremental restoration. Furthermore, even if pages could somehow be released incrementally when their last update is replayed, the hottest pages of the application working set are most likely to be released only at the very end of the log scan, and probably not even then, because they might contain updates of uncommitted transactions and thus require subsequent undo. This leads to yet another limitation of this log-scan-based approach: even if pages could be restored incrementally, there is no effective way to provide on-demand restoration, i.e., to restore most important pages first. Despite a variety of optimizations proposed to the basic ARIES algorithm <cit.>, none of them solves these problems in a general and effective manner. 
In summary, all proposed techniques that enable earlier access to recovered data items suffer from the same problem: early access is only provided for data for which early access is not really needed—hot data in the application working set is not prioritized and most accesses must wait for complete recovery.Finally, industrial database systems that implement ARIES recovery suffer from the same problems. IBM’s DB2 speeds up log replay by sorting log records after restoring the backup and before applying the log records to the replacement database <cit.>. While a sorted log enables a more efficient access pattern, incremental and on-demand restoration is not provided. Furthermore, the delay imposed by the offline sort may be as high as the total downtime incurred by the traditional method.Oracle attempts to eliminate the overhead of reading incremental backups by incrementally maintaining a full backup image <cit.>. While this makes the access pattern slightly more efficient, it does not address the deficiencies discussed earlier. §.§ ReplicationGiven the extremely high cost of media recovery in existing systems, replication solutions such as disk mirroring or RAID <cit.> are usually employed in practice to increase mean time to failure. However, it is important to emphasize that, from the database system's perspective, a failed disk in a redundant array does not constitute a media failure as long as it can be repaired automatically. Restore techniques aim to improve mean time to repair whenever a failure that cannot be masked by lower levels of the system occurs. Therefore, replication techniques can be seen largely as orthogonal to media restore techniques as implemented in database recovery mechanisms.However, a substantial reduction in mean time to repair, especially if done solely with simple software techniques, opens many opportunities to manage the trade-off between operational costs and availability. 
One option can be to maintain a highly-available infrastructure (with whatever costs it already requires) while availability is increased by deploying software with more efficient recovery. Alternatively, replication costs can be reduced (e.g., downgrading RAID-10 into RAID-5) while maintaining the same availability. Such flexibility, with solutions tackling both mean time to failure and mean time to repair, is essential in the pursuit of Gray's availability goal <cit.>, especially considering the impact of human errors and unpredictable failures that occur in large deployments <cit.>.§.§ In-memory databasesEarly work on in-memory databases focused mainly on restart after a system failure, employing traditional backup and log-replay techniques for media recovery <cit.>. The work of Levi and Silberschatz <cit.> was among the first to consider the challenge of incremental restart after a system failure. While an extension of their work for media recovery is conceivable, it would not address the efficiency problem discussed in Section <ref>. Thus, it would, in the best case and with a more complex algorithm, perform no better than the related work discussed later in Section <ref>. Recent proposals for recovery on both volatile and non-volatile in-memory systems usually ignore the problem of media failures, employing the unspecific term “recovery” to describe system restart only <cit.>. Therefore, recovery from media failures in modern systems either relies on the traditional techniques or is simply not supported, employing replication as the only means to maintain service upon storage hardware faults. As discussed above, while relying on replication is a valid solution to increase mean time to failure, a highly available system must also provide efficient repair facilities. In this aspect, traditional database system designs—using ARIES physiological logging and buffer management—provide more reliable behavior.
Therefore, we believe that improving traditional techniques for more efficient recovery with low overhead on memory-optimized workloads is an important open research challenge.§.§ Single-page repair Single-page failures are considered a fourth class of database failures <cit.>, along with the other classes summarized in Table <ref>. This class covers failures restricted to a small set of individual pages of a storage device; recovery applies online, localized repair to each affected page instead of invoking media recovery on the whole device. The single-page repair algorithm, illustrated in Fig. <ref> (with backup and replacement devices omitted for simplification), has two basic requirements: first, the LSN of the most recent update of each page is known (i.e., the current PageLSN value) without having to access the page; second, starting from the most recent log record, the complete history of updates to a page can be retrieved. The former requirement can be provided with a page recovery index—a data structure mapping page identifiers to their most recent PageLSN value. Alternatively, the current PageLSN can be stored together with the parent-to-child node pointer in a B-tree data structure <cit.>. The latter requirement is provided by per-page log record chains, which are straightforward to maintain using the PageLSN fields in the buffer pool. For each page update, the LSN of the last log record to affect the same page (i.e., the pre-update PageLSN value) is recorded in the log record; this allows the history of updates to be derived by following the resulting chain of backward pointers. We refer to the paper for further details <cit.>.In principle, single-page repair could be used to recover from a media failure by simply repairing each page of the failed device individually. One advantage of this technique is that it yields incremental and on-demand restore, addressing the second deficiency of traditional media recovery algorithms mentioned in Section <ref>.
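The two requirements combine into a simple repair procedure: walk the per-page chain backward from the current PageLSN down to the backup's minLSN, then replay the collected history oldest-first. A minimal sketch, with illustrative names and data layouts (not the paper's actual implementation):

```python
# Sketch of single-page repair via the per-page log record chain.
# log_by_lsn maps LSN -> (page_id, prev_page_lsn, update); the page
# recovery index supplies the most recent PageLSN of the failed page.

def repair_page(page_id, backup_page, recovery_index, log_by_lsn, min_lsn):
    # 1. Walk the backward chain from the most recent update down to minLSN.
    chain = []
    lsn = recovery_index[page_id]           # current PageLSN, no page access
    while lsn is not None and lsn >= min_lsn:
        _, prev_lsn, update = log_by_lsn[lsn]
        chain.append((lsn, update))
        lsn = prev_lsn
    # 2. Replay the collected history oldest-first on the backup image.
    page = dict(backup_page)
    for lsn, update in reversed(chain):
        page.update(update)
        page["page_lsn"] = lsn
    return page
```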
To illustrate how this would work in practice, consider the example of Fig. <ref>. If the first page to be accessed after the failure is A, it would be the first to be restored. Using information from the page recovery index (which can be maintained in main memory or fetched directly from backups), the last red log record on the right side of the diagram would be fetched first. Then, following the per-page chain, all red log records until minLSN would be retrieved and replayed in the backup image of page A, thus yielding its most recent version to running transactions.While the benefit of on-demand and incremental restore is a major advantage over traditional ARIES recovery, this algorithm still suffers from the first deficiency discussed in Section <ref>—namely the inefficient access pattern. The authors of the original publication even foresee the application to media failures <cit.>, arguing that while a page is the unit of recovery, multiple pages can be repaired in bulk in a coordinated fashion. However, the access pattern with larger restoration granules would approach that of traditional ARIES restore—i.e., random access during log replay. Thus, while the technique introduces a very useful degree of flexibility, it does not provide a unified solution for the two deficiencies discussed. §.§ Single-pass restore Our previous work introduced a technique called single-pass restore, which aims to perform media recovery in a single sequential pass over both backup and log archive devices <cit.>. Eliminating random access effectively addresses the first deficiency discussed in Section <ref>. This is achieved by partially sorting the log on page identifiers, using a stable sort to maintain LSN order within log records of the same page. 
The access pattern is essentially the same as that of a sort-merge join: external sort with run generation and merge followed by another merge between the two inputs—log and backup in the media recovery case.The idea itself is as old as the first recovery algorithms (see Section 5.8.5.1 of Gray's paper <cit.>) and is even employed in DB2's “fast log apply” <cit.>. However, the key advantage of single-pass restore is that the two phases of the sorting process—run generation and merge—are performed independently: sorted runs are generated during the log archiving process (i.e., moving log records from the latency-optimized transaction log device into high-capacity, bandwidth-optimized secondary storage) with negligible overhead; the merge phase, on the other hand, happens both asynchronously as a maintenance service and also during media recovery, in order to obtain a single sorted log stream for recovery. Importantly, merging runs of the log archive and applying the log records to backed-up pages can be done in a single sequential pass, similar to a merge join. The process is illustrated in Fig. <ref>. We refer to the original publication for further details <cit.>.Having addressed the access pattern deficiency of media recovery algorithms, single-pass restore still leaves open the problem of incremental and on-demand restoration. Nevertheless, given its superiority over traditional ARIES restore (see <cit.> and <cit.> for an in-depth discussion), it is a promising approach to use as a starting point in addressing the two deficiencies in a unified way. Therefore, as mentioned in Section <ref>, single-pass restore is taken as the baseline for the present work. §.§ Summary of related workAs the previous sections discussed, none of the state-of-the-art media recovery schemes is able to effectively eliminate the two main deficiencies of traditional ARIES: the inefficient access pattern and the lack of early access to important data before complete recovery.
Ideally, a restore mechanism would combine the incremental availability and on-demand schedule provided by single-page repair with the efficient, bulk access pattern of single-pass restore. Moreover, this combination should allow for a continuous adjustment between these two behaviors, and a simple adaptive technique should make the best decision dynamically based on system behavior. These challenges are addressed by instant restore, which we describe next. § INSTANT RESTOREThe main goal of instant restore is to preserve the efficiency of single-pass restore while allowing more fine-granular restoration units (i.e., smaller than the whole device) that can be recovered incrementally and on demand. We propose a generalized approach based on segments, which consist of contiguous sets of data pages. If a segment is chosen to be as large as a whole device, our algorithm behaves exactly like single-pass restore; on the other extreme, if a segment is chosen to be a single page, the algorithm behaves like single-page repair. As discussed in this section and evaluated empirically in Section <ref>, the optimal restore behavior lies somewhere between these two extremes, and simple adaptive techniques are proposed to robustly deliver good restore performance without turning knobs manually.This section is divided into four parts: first, we introduce the log data structure employed to provide efficient access to log records belonging to a given segment or page; after that, we present the restore algorithm based on this data structure; then, we discuss techniques to choose the best segment size dynamically and thus optimize restore behavior; finally, we discuss the issue of coordinating processes of different recovery modes (e.g., restart and restore at the same time) as well as concurrent threads of the same recovery mode (e.g., multiple restore threads).
§.§ Indexed log archiveIn order to restore a given segment incrementally, instant restore requires efficient access to log records pertaining to pages in that segment. In single-page repair, such access is provided for individual pages, using the per-page chain among log records <cit.>. As already discussed, this is not efficient for restoration units much larger than a single page. Therefore, we build upon the partially sorted log archive organization introduced in single-pass restore <cit.>.In instant restore, the partially sorted log archive is extended with an index. The log archiving process sorts log records in an in-memory workspace and saves them into runs on persistent storage. These runs must then be indexed, so that log records of a given page or segment identifier can be fetched directly. Sorting and indexing of log records is done online and without any interference with transaction processing, in addition to standard log archiving tasks such as compression.Fig. <ref> illustrates indexed log archive runs and a range lookup for a segment containing pages G to K. As explained in previous work <cit.>, runs are mapped to contiguous LSN ranges to simplify log archiving restart and garbage collection. In an index lookup for instant restore, the set of runs to consider would be restricted by the given minLSN (see Section <ref>) of the backup image, since runs older than that LSN are not needed. Furthermore, Bloom filters can be appended to each run to restrict this set even further. The result of the lookup in each indexed run is then fed into a merge process that delivers a single stream of log records sorted primarily by page identifier and secondarily by LSN. This stream can then be used by the restore algorithm to replay updates on backed-up images of segments.Multiple choices exist for the physical data structure of the indexed log archive.
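Before turning to that physical data structure, the lookup-and-merge logic above can be sketched with a heap-based k-way merge. This is an illustrative model, not the prototype's code: a run is assumed to be a list of (page_id, lsn, update) tuples sorted by page identifier, and runs are numbered in LSN order so that the run number breaks ties within a page.

```python
import heapq

# Sketch of the merge process: a range lookup on each indexed run is fed
# into a k-way merge that yields one stream sorted primarily by page
# identifier and secondarily by LSN.

def run_lookup(run, run_no, lo_page, hi_page):
    """Range lookup on one run, tagged with the run number for tie-breaking."""
    for rec in run:
        if lo_page <= rec[0] <= hi_page:
            yield (rec[0], run_no, rec)

def merged_segment_log(runs, lo_page, hi_page):
    """Single sorted stream of log records for pages in [lo_page, hi_page]."""
    lookups = [run_lookup(r, n, lo_page, hi_page) for n, r in enumerate(runs)]
    for _page_id, _run_no, rec in heapq.merge(*lookups):
        yield rec
```

A restore thread would consume this stream once per segment, replaying each record on the corresponding backed-up page.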
Ideally, the B-tree component of the indexing subsystem can be reused, but there is an important caveat in terms of providing atomicity and durability to this structure. A typical index relies on write-ahead logging, but that is not an option for the indexed log archive because it would introduce a kind of self-reference loop—updates to the log data structure itself would have to be logged and used later on for recovery. This self-reference loop could be dealt with by introducing special logging and recovery modes (e.g., a separate “meta”-log for the indexed log archive), but the resulting algorithm would be too cumbersome.A more viable solution is to rely on an atomic data structure, like the shadow-based B-tree proposed by Rodeh <cit.>. Since the log archive is mostly a read-only data structure, where the only writes are bulk appends or merges, such shadowing approaches are perfectly suitable. In our prototype, we chose a different approach, where each partition of the log archive is maintained in its own read-only file; temporary shadow files are then used for merges and appends. In this scheme, atomicity is provided by the file rename operation, which is atomic in standard filesystems <cit.>. §.§ Restore algorithmWhen a media failure is detected, a restore manager component is initialized and all page read and write requests from the buffer pool are intercepted by this component. The diagram in Fig. <ref> illustrates the interaction of the restore manager with the buffer pool and all persistent devices involved in the restore process: failed and replacement devices, log archive, and backup. For reasons discussed in previous work <cit.>, incremental backups are made obsolete by the partially sorted log archive; thus, the algorithm performs just as well with full backups only. 
Nevertheless, incremental backups can be easily incorporated, and the description below considers a single full backup without loss of generality.In the following discussion, the numbers in parentheses refer to the numbered steps in Fig. <ref>. The restore manager keeps track of which segments were already restored using a segment recovery bitmap, which is initialized with zeros. When a page access occurs, the restore manager first looks up its segment in the bitmap (1). If set to one, it indicates that the segment was already restored and can be accessed directly on the replacement device (2a). If set to zero, a segment restore request is enqueued into a restore scheduler (2b), which coordinates the restoration of individual segments (3).To restore a given segment, an older version is first fetched from the backup directly (4). This is in contrast to ARIES restore, which first loads entire backups into the replacement device and then reads pages from there <cit.>. This has the implication that backups must reside on random-access devices (i.e., not on tape) and allow direct access to individual segments, which might require an index if backup images are compressed. These requirements, which are also present in single-page repair <cit.>, seem quite reasonable given the very low cost per byte of current high-capacity hard disks. For moderately-sized databases, it is even advisable to maintain log archive and backups on flash storage.While the backed-up image of a segment is loaded, the indexed log archive data structure is probed for the log records pertaining to that segment (5). This initializes the merge logic illustrated in Fig. <ref>. Then, log replay is performed to bring the segment to its most recent state, after which it can be written back into a replacement device (6).Finally, once a segment is restored, the bitmap is updated (7) and all pending read and write requests can proceed. 
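The steps above can be sketched as follows. This toy model is synchronous for clarity, whereas the text describes asynchronous I/O and a separate scheduler component; all names, data layouts, and the dict-based devices are illustrative assumptions. Numbers in the comments refer to the numbered steps described above.

```python
# Sketch of the on-demand restore path of instant restore.

class RestoreManager:
    def __init__(self, backup, log_index, replacement, num_segments):
        self.backup = backup                    # segment_id -> backed-up pages
        self.log_index = log_index              # segment_id -> sorted log records
        self.replacement = replacement          # restored segments land here
        self.restored = [False] * num_segments  # segment recovery bitmap

    def read_page(self, page_id, pages_per_segment):
        seg = page_id // pages_per_segment
        if not self.restored[seg]:              # (1) bitmap lookup
            self._restore_segment(seg)          # (2b)+(3) schedule and restore
        return self.replacement[seg][page_id]   # (2a) direct access

    def _restore_segment(self, seg):
        # (4) fetch older segment version directly from the backup
        pages = {pid: dict(p) for pid, p in self.backup[seg].items()}
        # (5) probe the indexed log archive, (6) replay updates
        for page_id, lsn, update in self.log_index.get(seg, []):
            page = pages[page_id]
            if page["page_lsn"] < lsn:
                page.update(update)
                page["page_lsn"] = lsn
        self.replacement[seg] = pages           # (6) write restored segment
        self.restored[seg] = True               # (7) update bitmap
```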
Typically, a requested page will remain in the buffer pool after its containing segment is restored, so that no additional I/O access is required on the replacement device.All read and write operations described above—log archive index probe, segment fetch, and segment write after restoration—happen asynchronously with minimal coordination. The read operations are essentially merged index scans—a very common pattern in query processing <cit.>. The write of a restored segment is also easily made asynchronous, whereby the only requirement is that marking a segment as restored on the bitmap, and consequently enabling access by waiting threads, be done by a callback function after completion of the write.To illustrate the access pattern of instant restore, similarly to the diagrams in Section <ref>, Fig. <ref> shows an example scenario with three log archive runs and two pages, A and B, belonging to the same segment. The main difference to the previous diagrams is the segment-wise, incremental access pattern, which delivers the efficiency of pure sequential access with the responsiveness of on-demand random reads.Using this mechanism, user transactions accessing data either in the buffer pool or on segments already restored can execute without any additional delay, whereby the media failure goes completely unnoticed. Access to segments not yet restored are used to guide the restore process, triggering the restoration of individual segments on demand. As such, the time to repair observed by transactions accessing data not yet restored is multiple orders of magnitude lower than the time to repair the whole device. Furthermore, time to repair observed by an individual transaction is independent of the total capacity of the failed device. This is in contrast to previous methods, which require longer downtime for larger devices.§.§ Latency vs. 
bandwidth trade-offOne major contribution of instant restore is that it generalizes single-page repair and single-pass restore, providing a continuum of choices between the two. In order to optimize restore behavior, the restore manager must adaptively and robustly choose the best option within this continuum. In practice, this boils down to choosing the correct granularity of access to both backup and log archive, in order to balance restore latency and bandwidth.Restore latency is defined as the additional delay imposed on the page reads and writes of an individual transaction due to restore operations. Hence, it follows that if a single page can be read and restored in the same time it takes to just read it, the restore latency is zero—this is the “gold standard” of restore performance and availability. For a single transaction, restore latency can be reduced by setting a small segment size—e.g., a single page. However, this is not the optimal behavior when considering average restore latency across all transactions. Therefore, restore bandwidth, i.e., the number of bytes restored per second, must also be optimized. The optimized restore behavior is illustrated in Fig. <ref>: in the beginning of the restore process, pages which are needed more urgently should be restored first, so that restore latency is decreased; towards the end, fewer and fewer transactions must wait for restore, so the system can effectively increase restore bandwidth while a low restore latency is maintained.It is also worth noting that devices with low latency and inherent support for parallelism, e.g., solid-state drives, make these trade-offs less pronounced. This does not mean, however, that instant restore is any less significant for such devices—a point which we would like to emphasize with the next two paragraphs.As discussed earlier, previous restore techniques suffered from two deficiencies: inefficient access pattern and lack of incremental and on-demand recovery.
Solid-state devices shorten the efficiency gap between restore algorithms with sequential and random access, but this gap will never be entirely closed—if anything, thanks to the locality and predictability of sequential access.As for the second deficiency, low-latency devices directly contribute to the reduction of restore latency, because the time to recover a single segment is reduced with faster access to backup and log archive runs. Therefore, with instant restore, any improvement on I/O latency directly translates into lower time to repair—as perceived by a single transaction—and thus higher availability. Non-incremental techniques, where the restore latency is basically the time for complete recovery, do not benefit as much from low-latency storage hardware when it comes to improving restore latency. In terms of latency and bandwidth trade-off in the instant restore algorithm, the first choice to be made is the segment size. In order to simplify the tracking of restore progress with a simple bitmap data structure, a fixed segment size must be chosen when initializing the restore manager. We recommend choosing a minimum size such that acceptable bandwidth is delivered even for purely random access, but not too many segments exist such that the bitmap would be too large; e.g., 1 MB seems like a reasonable choice in practice.In order to exploit opportunities for increasing bandwidth, multiple contiguous segments should be restored in a single step when applicable. One technique to achieve that dynamically and adaptively is to simply run single-pass restore concurrently with instant restore. Since the two processes rely on the same algorithm, no additional code complexity is required. Furthermore, the coordination between them is essentially the same as that between concurrent instant restore processes—they both rely on the buffer pool and the segment recovery bitmap. 
Section <ref> exposes details of that coordination.Alternatively, the scheduler component of the restore manager can employ a preemptive policy, where multiple contiguous segments are restored as long as no requests arrive in its incoming queue. As shown empirically in Section <ref>, this simple technique automatically prioritizes latency in the beginning of the restoration process, when the most important pages are being requested; then, as fewer and fewer transactions access data not yet restored, bandwidth is increased gradually with larger restoration units. This technique essentially delivers the behavior presented in Fig. <ref>.In terms of log archive access, the size of initial runs poses an important trade-off between minimizing merge effort and minimizing the lag between generating a log record and persisting it into the log archive. In order to generate larger runs, log records must be kept longer in the in-memory sort workspace. On the other hand, correct recovery requires that all log records up to the time of device failure be properly archived before restore can begin; thus, smaller initial runs imply lower restore latency for the first post-failure transactions. While this choice is important, simple techniques largely mitigate these concerns. One option is to enable access to log records while they are still in the main-memory sort workspace. This is possible because, as discussed in Section <ref>, a media failure does not incur loss of the server process and its in-memory contents. Alternatively, single-page repair could be used to replay log records that are not yet archived when a segment is restored. As with concurrent single-pass restore, these individual recovery techniques are orthogonal and can thus be applied concurrently with minimal coordination.
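The preemptive scheduling policy described at the start of this subsection can be sketched as follows: serve on-demand requests first, and while the incoming queue is empty, keep restoring the next contiguous segments to raise bandwidth. This is an illustrative single-threaded model with assumed names (`queue`, `restored`, `restore_fn`), not the prototype's scheduler.

```python
from collections import deque

# Sketch of a preemptive restore scheduler: on-demand requests take priority
# (low latency); sequential bulk restore fills idle time (high bandwidth).

def run_scheduler(queue, restored, restore_fn):
    cursor = 0                                  # next segment for bulk restore
    while not all(restored):
        if queue:                               # on-demand request: latency first
            seg = queue.popleft()
        else:                                   # idle: bandwidth first
            while cursor < len(restored) and restored[cursor]:
                cursor += 1
            seg = cursor
        if not restored[seg]:
            restore_fn(seg)
            restored[seg] = True
```

With requests arriving only at the beginning, the observed behavior matches the description above: requested segments are restored first, then the remainder is restored sequentially.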
Using the techniques sketched above, the lag incurred by the archiving process would be minimized.

Besides these concerns specific to instant restore, established techniques to choose initial run size and merge fan-in based on device characteristics directly apply <cit.>. This is mainly because the access pattern of instant restore basically resembles that of an external sort followed by a merge join.

§.§ Coordination of recovery actions

As mentioned briefly above, the segment recovery bitmap enables the coordination of concurrent restore processes, allowing configurable scheduling policies. Another important aspect to be considered is the coordination among restore and the other recovery modes summarized in Table <ref>. This section discusses how to coordinate all such recovery actions without violating transactional consistency.

The first failure class—transaction failure—is the easiest to handle because its recovery is made transparent to the other classes thanks to rollback by compensation actions, as introduced in ARIES <cit.> and refined in the multi-level transaction model <cit.>. The implication is that recovery for the other failure classes must distinguish only between uncommitted and committed transactions. Transactions that abort are simply considered committed—it just happens that they revert all changes they made, i.e., they “commit nothing”. Therefore, for the purposes of instant restore, transactions that issue an abort behave exactly like any other in-flight transaction, including those that started after the failure: they hold locks to protect their reads and updates and access data through the buffer pool, which possibly triggers segment restoration as described in Section <ref>.

As for the other three classes, recovery coordination using the techniques presented in this work requires a distinction between two general forms of recovery: using a transaction log and using the indexed log archive.
The former is assumed to be a linear data structure ordered strictly by LSN and containing embedded chains among log records of the same transaction and of the same page—whether it resides on active or archive devices does not matter for this discussion. Single-page repair and restart after a system failure both use the transaction log, and whether the old page image comes from a backup or from the persistent database also does not matter for this discussion. Since they perform log replay on a single page at a time, they are coordinated using the fix and unfix operations of the buffer pool. Because replaying updates on a page requires an exclusive latch, the same page cannot be recovered concurrently by different recovery processes of any kind. Furthermore, tracking the page LSN of the fixed buffer pool frame guarantees that updates are never replayed more than once and that no updates are missed. This mechanism ensures correctness of concurrent restart and single-page repair processes.

The second form of recovery—using the indexed log archive—is used solely for instant restore at the segment granularity. Here, a segment, whose size is fixed when a failure is detected, is the unit of recovery, and coordination relies on the segment recovery bitmap. Using two states—restored and not restored—avoids restoring a segment more than once in sequence, but additional measures are required to prevent that from happening concurrently. One option is to simply employ a map with three states, the additional one being “undergoing restore”. A thread encountering the “not restored” state attempts to atomically change it to “undergoing restore”: if it succeeds, it initiates the restore request for the segment in question; otherwise, it simply waits until the state changes to “restored”.

Alternatively, coordination of segment restore requests can reuse the lock manager.
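The three-state coordination described above can be sketched as follows. The names (`SegmentRecoveryMap`, `ensure_restored`) are ours, and a real implementation would use an atomic compare-and-swap on the map entry, which this sketch emulates with a condition variable:

```python
import threading

NOT_RESTORED, RESTORING, RESTORED = 0, 1, 2

class SegmentRecoveryMap:
    """Three-state recovery map: the thread that wins the atomic
    NOT_RESTORED -> RESTORING transition performs the restore; all
    other threads wait until the state becomes RESTORED."""

    def __init__(self, num_segments):
        self.state = [NOT_RESTORED] * num_segments
        self.cond = threading.Condition()   # emulates CAS + wakeup

    def ensure_restored(self, seg, restore_fn):
        with self.cond:
            if self.state[seg] == NOT_RESTORED:     # atomic test-and-set
                self.state[seg] = RESTORING
            else:
                while self.state[seg] != RESTORED:  # someone else restores
                    self.cond.wait()
                return False                        # we did not restore it
        restore_fn(seg)                             # restore outside the lock
        with self.cond:
            self.state[seg] = RESTORED
            self.cond.notify_all()
        return True
```

However many threads race on the same segment, `restore_fn` runs exactly once, and every caller returns only after the segment is restored.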
A shared lock is acquired before verifying the bitmap state, and, in order to restore a segment, the shared lock must be upgraded to exclusive with an unconditional request. The thread that is granted the upgrade is then in charge of restoration, while the others will automatically wait and be awoken by the lock protocol, after which they see the “restored” state.

While the segment recovery bitmap provides coordination of concurrent restore processes, the buffer fix protocol is again used to coordinate restore with the other recovery modes. Concomitant restart and restore processes may occur in practice because some failures tend to cause related failures. A hardware fault, for instance, may not only corrupt persistent data but also cause an operating system crash. In this case, the recovery processes will be automatically coordinated with the methods described above. Restart recovery will fix pages in the buffer pool prior to performing any redo or undo action. The fix call, in turn, will issue a read request on the device. If the device has failed, the restore manager will intercept this request and follow the restore protocol described above. Only after the containing segment is restored does the fix call return. After that, the page may still require log replay in the redo phase of restart, which is fine—the two recovery modes will simply replay different ranges of the page's history.

§.§ Summary of instant restore

Instant restore is enabled by an indexed log archive data structure that can be generated online with very low overhead. By partitioning data pages into segments, the recovery algorithm provides incremental and on-demand access to restored data. The algorithm requires a simple bitmap data structure to keep track of progress and coordinate restoration of individual segments under configurable scheduling policies.

The generalized nature of instant restore enables a wide range of choices for trading restore latency and bandwidth.
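The interplay of the fix path, restore interception, and page-LSN-guarded redo can be illustrated with a toy model. All names here (`BufferPool`, `restore_mgr`, `read_page`) are ours, not from the Zero codebase, and the "log" is reduced to a list of `(lsn, page_id, value)` tuples:

```python
class BufferPool:
    """Toy fix path: a page miss on a failed device first goes through the
    restore manager (which may trigger segment restore), then restart redo
    replays only log records newer than the page LSN, so the two recovery
    modes replay disjoint ranges of the page's history."""

    def __init__(self, restore_mgr, log):
        self.restore_mgr = restore_mgr   # intercepts reads on a failed device
        self.log = log                   # [(lsn, page_id, value), ...] in LSN order
        self.frames = {}                 # page_id -> {'lsn': ..., 'value': ...}

    def fix(self, page_id):
        if page_id not in self.frames:
            # read request; served by the restore manager if the device failed
            self.frames[page_id] = self.restore_mgr.read_page(page_id)
        return self.frames[page_id]

    def redo(self, page_id):
        page = self.fix(page_id)         # may trigger segment restore first
        for lsn, pid, value in self.log:
            if pid == page_id and lsn > page['lsn']:   # page-LSN guard:
                page['value'] = value                   # never replay twice,
                page['lsn'] = lsn                       # never skip an update
        return page
```

A stub restore manager returning a page image as of some LSN shows the division of labor: restore brings the page up to the state covered by the log archive, and redo replays only the remaining tail of its history.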
These choices can be made adaptively and robustly by the system using simple techniques. Moreover, while instant restore mitigates many of the issues with high-capacity hard disks, making them a more attractive option, it still benefits greatly from modern storage devices such as solid-state drives. Therefore, the technique is equally relevant for improving availability with any kind of storage hardware.

Lastly, the restore processes can be easily coordinated with processes from other recovery modes—the independence of these modes and the integrated coordination using the buffer pool ensure transaction consistency in the presence of an arbitrary mix of failure classes.

§ EXPERIMENTS

Our experimental evaluation covers three main measures of interest during recovery from a media failure: restore latency, restore bandwidth, and transaction throughput. Moreover, we evaluate the overhead of log archiving with sorting and indexing in order to assess the cost of instant restore during normal processing. Before presenting the empirical analysis, a brief summary of our experimental environment is provided.

§.§ Environment

We implemented instant restore in a fork of the Shore-MT storage manager <cit.> called Zero. The code is available as open source [<http://github.com/caetanosauer/zero>]. The workload consists of the TPC-C benchmark as implemented in Shore-MT, but adapted to use the Foster B-tree <cit.> data structure for both table and index data.

All experiments were performed on dual six-core CPUs with two thread contexts each, yielding support for 24 hardware threads. The system has 100 GB of high-speed RAM and several Samsung 840 Pro 250 GB SSDs connected to a dedicated I/O controller. The operating system is Ubuntu Linux 14.04 with Kernel 3.13.0-68, and all code is compiled with GCC 4.8 with optimizations enabled.

The experiments all use the same workload, with media failure and recovery set up as follows.
Initial database size is 100 GB, with full backup and log archive of the same size—i.e., recovery starts from a full backup of 100 GB and must replay roughly the same amount of log records. Log archive runs are a little over 1.5 GB in size, resulting in 64 inputs in the restore merge logic. All persistent data is stored on SSDs and 24 hardware threads are used at all times. The benchmark starts with a warmed-up buffer pool, whose total size we vary in the experiments.

§.§ Restore latency

Our first experiment evaluates restore latency, as defined in Section <ref>, by analyzing the total latency of individual transactions before and after a media failure. The hypothesis under test is that average transaction latency immediately following a media failure is on the order of a few seconds or less, after which it gradually decreases to the pre-failure latency. Furthermore, with larger memory, i.e., where a larger portion of the working set fits in the buffer pool, average latency should remain at the pre-failure level throughout the recovery process. The results are shown in Fig. <ref>.

After ten minutes of normal processing, during which the average latency is 1–2 ms, a media failure occurs. The immediate effect is that average transaction latency spikes up (to about 100 ms with the buffer pool size of 30 GB) but then decreases linearly until pre-failure latency is reestablished. For the largest buffer pool size of 45 GB, there is a small perturbation in the observed latency, but the average value seems to remain between 1 and 2 ms. From this, we can conclude that for any buffer pool size above 45 GB, a media failure goes completely unnoticed.

For this experiment, we also look at the distribution of individual latencies, in order to analyze the worst-case behavior. As the plot in Fig. <ref> shows, the largest latency observed by a single transaction, with the smallest buffer pool, is 5.1 s. The total recovery time, which is shown later in Fig.
<ref>, is in the range of 17–25 minutes—this is the restore latency incurred by single-pass restore.

These results successfully confirm our hypothesis: restore latency is reduced from 25 minutes to 5 seconds in the worst case, which corresponds to two orders of magnitude or two additional 9's of availability. For the average case, still considering the smallest buffer pool, another order of magnitude is gained, with latency dropping below 100 ms.

Note that the average restore latency is independent of total device capacity, and thus of total recovery time. Therefore, the availability improvement could be four or five orders of magnitude in certain cases. This would be expected, for instance, for very large databases (on the order of terabytes) stored on relatively low-latency devices. In these cases, the gap between a full sequential read and a single random read—hence, between mean time to repair with single-pass restore and with instant restore—is very pronounced.

§.§ Restore bandwidth

Next, we evaluate restore bandwidth for the same experiment described earlier for restore latency. The hypothesis here is that, in general, restore bandwidth gradually increases throughout the recovery process until it reaches the bandwidth of single-pass restore. Given these two general behaviors, two special cases are, again, the small and large buffer pools. In the former, bandwidth may not reach single-pass speeds due to prioritization of low latency for the many incoming requests (recall that each buffer pool miss incurs a read on the replacement device, which, in turn, incurs a restore request). In the latter case, restore bandwidth should be as large as single-pass restore.

Fig. <ref> shows the results of this experiment for four buffer pool sizes. For the smallest buffer pool of 30 GB, restore bandwidth remains roughly constant in the first 15 minutes.
This indicates that during this initial period, most segments are restored individually in response to an on-demand request resulting from a buffer pool miss. As the buffer size increases, the rate of on-demand requests decreases as restore progresses, resulting in more opportunities for multiple segments to be restored at once. In all cases, as predicted in the diagram of Fig. <ref>, restore bandwidth gradually increases throughout the recovery process, reaching the maximum speed of 240 MB/s towards the end for the larger buffer pool sizes.

§.§ Transaction throughput

The next experiments evaluate how media failure and recovery impact transaction throughput with instant restore. We take the same experiment performed in the previous sections and look at transaction throughput for each buffer pool size individually. As instant restore progresses, transactions continue to access data in the buffer pool, triggering restore requests for each page miss. Therefore, we expect that the larger the buffer pool is (i.e., the more of the working set fits into main memory), the less impact a media failure has on transaction throughput. This effect was already presented in the diagram of Fig. <ref>—the present section analyzes it in more detail.

Fig. <ref> presents the results. In the four plots shown, transaction throughput is measured with the red line on the left y-axis. At minute 10, a media failure occurs, after which a green straight line shows the pre-failure average throughput. The number of page reads per second is shown with the blue line on the right y-axis. Moreover, total recovery time, which varies depending on the buffer pool size, is shown as the shaded interval on the x-axis.

The goal of instant restore in this experiment is to re-establish the pre-failure transaction throughput (i.e., the dotted green line) as soon as possible.
Similar to the evaluation in the previous experiments, our hypothesis is that this occurs sooner the larger the buffer pool is.

The results show that for a small buffer pool of 20 GB, transaction throughput drops substantially, and it only regains the pre-failure level at the very end of the recovery process. As the buffer size is increased to 25 and then 35 GB, pre-failure throughput is re-established at around minute 7, i.e., at 1/3 of the total recovery time. Lastly, for the largest buffer pool of 50 GB, the media failure does not produce any noticeable slowdown, as predicted in our hypothesis.

§.§ Log archiving overhead

The overhead imposed on running transactions by sorting and indexing log records in the log archiving procedure is measured in the experiment of Figure <ref>. The chart on the left side shows average transaction throughput of a TPC-C execution using two variants of log archiving: with sorting and indexing, as required by instant restore, vs. simply copying files. The chart on the right side measures CPU utilization. The charts show distributions of values using a candlestick chart; the box in the middle covers data points between the first and third quartiles of the distribution (i.e., half of the observations), while the extremities show the minimum and maximum values; the line in the middle of the box shows the median value.

As the chart on the left side shows, there is a small difference between a simplified implementation of traditional log archiving (i.e., plain filesystem copy) and log archiving with sorting and indexing. The difference between the median points is less than 3% (i.e., 11.2 vs. 10.9 ktps). The CPU utilization measurement shows that it is proportional to transaction throughput, leading to the conclusion that the archiving process does not consume too much CPU. Furthermore, note that the “copy” variant is quite primitive, since an industrial-strength implementation would incur additional overhead by compressing log records.
In that case, the overhead of our technique could be even less than 3%.

§.§ Summary of experiments

We have shown that instant restore greatly improves upon the baseline single-pass restore algorithm. Restore latency, which is the additional latency incurred on transactions by media recovery actions, is cut down by multiple orders of magnitude, which directly translates into the same improvement in availability. Restore bandwidth adaptively and gradually approaches the maximum sequential speed as the recovery process progresses. The same gradual improvement is observed for transaction throughput during media recovery. These measures are equally affected by an increase in buffer pool size, up to a point where media failures cause no disruption at all to running transactions. Lastly, we have shown that the online archiving procedure required by instant restore induces very little overhead (3% or less) on normal processing—a small price to pay for the vast improvement in availability.

§ CONCLUSIONS

Instant restore improves perceived mean time to repair, and thus database availability, in the presence of media failures. We identified two main deficiencies of traditional recovery techniques, such as the ARIES design <cit.>: (i) media recovery is very inefficient due to its random access pattern on database pages, which means that time to repair is unacceptably long; and (ii) data on a failed device cannot be accessed before recovery is completed. The first deficiency was addressed in our previous work on single-pass restore <cit.>, which introduces a partial sort order on the log archive, converting the random access pattern of log replay into a sequential one. The second deficiency is addressed with the instant restore technique, which was first described in earlier work <cit.> and discussed in more detail, implemented, and evaluated in this paper.
By generalizing single-pass restore and other recovery methods such as single-page repair, instant restore is the first media recovery method to effectively eliminate the two deficiencies discussed. In comparison with traditional ARIES media restore, instant restore delivers not only the benefits of single-pass restore (i.e., substantially higher bandwidth and therefore shorter recovery time), but also much quicker access (e.g., seconds instead of hours) to the application working set after a failure. Instant restore introduces a new organization of the log archive data structure, where log records are partially sorted and indexed. Maintenance of this data structure incurs very little overhead and is performed continuously and online.

Our empirical analysis shows that instant restore is able to effectively deliver the efficiency of single-pass restore while cutting down restore latency by multiple orders of magnitude. The experiments also analyze the impact of a failure on transaction throughput, which largely depends on the size of the working set in relation to the buffer pool size. The results confirm our expectation that the pre-failure transaction throughput is re-established earlier as memory size increases—up to a point where a media failure goes completely unnoticed. The net effect is that availability is greatly improved and the number of missed transactions due to media failures is significantly reduced.
A quantum-mechanical perspective on linear response theory within polarizable embedding

Author to whom correspondence should be addressed. Electronic mail: nalist@kth.se
Division of Theoretical Chemistry and Biology, School of Biotechnology, KTH Royal Institute of Technology, Roslagstullsbacken 15, SE-106 91 Stockholm, Sweden

panor@kth.se
Division of Theoretical Chemistry and Biology, School of Biotechnology, KTH Royal Institute of Technology, Roslagstullsbacken 15, SE-106 91 Stockholm, Sweden

kongsted@sdu.dk
Department of Physics, Chemistry and Pharmacy, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark

hjj@sdu.dk
Department of Physics, Chemistry and Pharmacy, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark

The derivation of linear response theory within polarizable embedding is carried out from a rigorous quantum-mechanical treatment of a composite system. Two different subsystem decompositions (symmetric and nonsymmetric) of the linear response function are presented, and the pole structures as well as residues of the individual terms are analyzed and discussed. This theoretical analysis clarifies which form of the response function to use in polarizable embedding, and we highlight complications in separating out subsystem contributions to molecular properties. For example, based on the nonsymmetric decomposition of the complex linear response function, we derive conservation laws for integrated absorption cross sections, providing a solid basis for proper calculations of the intersubsystem intensity borrowing inherent to coupled subsystems and how that can lead to negative subsystem intensities. We finally identify steps and approximations required to achieve the transition from a quantum-mechanical description of the composite system to polarizable embedding with a classical treatment of the environment, thus providing a thorough justification for the descriptions used in polarizable embedding models.
PACS numbers: 33.20.-t, 33.70.Ca, 33.80.-b

Hans Jørgen Aagaard Jensen

December 30, 2023
==============================

§ INTRODUCTION

The many types of spectroscopy that have been developed over the years (some of which are still under development) serve as an indispensable tool to gain fundamental understanding of the structure and dynamics of molecular systems.<cit.> Equally indispensable are the large number of theoretical methods that nowadays allow for accurate in silico simulations of said systems and spectroscopies, and which can provide complementary information that is inaccessible in experiment. When designing calculations of molecular properties, it is often necessary to consider the influence of the surrounding environment to obtain adequate accuracy. Given the typically steep computational scaling of quantum-chemical methods with respect to the number of basis functions, the description of large molecular systems is out of the question using conventional algorithms. One way to partially address the challenges of treating large systems is to exploit localization techniques and simplified treatments of long-range interactions to achieve reduced, ultimately linear, scaling.<cit.> While linear scaling formulations, aided by increasing computer power, significantly extend the applicability range of quantum chemistry, they are generally too costly for routine applications, especially if the system and property of interest put special demands on method, basis set and conformational sampling. For instance, even in cases where (time-dependent) density functional theory ((TD)DFT) is still feasible, it may suffer from electron-transfer overstabilization in the ground state <cit.> and artificial low-lying charge-transfer transitions<cit.> due to the self-interaction errors associated with approximate standard exchange–correlation functionals.
Another complication is that the delocalized picture obtained from such brute-force calculations is not straightforward to interpret in terms of chemical concepts and generally requires additional decomposition analyses of the wave function and properties into local components. An alternative route is to adopt subsystem approaches in which the system is divided into chemically well-defined constituents which may be treated as separate entities perturbed by the other subsystems, and as such they offer a natural decomposition into environmental effects. Several methods belong to this category and one may generally distinguish two conceptually different approaches: (i) strict subsystem methods that treat the subsystems consistently on an equal footing and which upon recombination yield the properties of the entire system,<cit.> and (ii) the so-called embedding approaches in which an accurate quantum-mechanical description is intended for only a smaller part of the system, while the remaining parts (the environment) and its effects on the former are described with more efficient, approximate methods. A recent review of subsystem and embedding approaches can be found in Ref. . Apart from the obvious computational cost argument, embedding approaches are motivated by the fact that the chemical identity of subsystems is largely intact in molecular complexes and that many electronic transitions are localized in nature. Prototypical examples are solute–solvent systems and chromophore–protein complexes.

Frozen-density embedding<cit.> (FDE) provides in principle a full quantum-mechanical description of the system based on a subsystem formulation of DFT and can be applied in both a self-consistent and an embedding mode, thereby bridging (i) and (ii).
In principle, it is a formally exact framework, although its quality with respect to supermolecular DFT in practical calculations is limited by approximations in the employed nonadditive parts of the exchange–correlation and kinetic energy functionals.

More efficient, though more approximate, embedding approaches are quantum–classical models, such as hybrid quantum mechanics/molecular mechanics methods, which use a discrete but classical representation of the environment.<cit.> These approaches are classified in a hierarchy according to the extent of the coupling between the quantum and classical subsystems. In electrostatic embedding, the environment is represented as a purely external electrostatic potential perturbing the density of the quantum subsystem, while a more realistic description incorporating the mutual polarization effects between the quantum and classical subsystems is offered by polarizable embedding schemes. A variety of embedding schemes to model environment polarization have been proposed and applied, such as induced dipoles and fluctuating charges.<cit.>

Inclusion of the explicit environmental response is particularly important for processes where substantial rearrangements may occur, e.g., upon electronic excitation.<cit.> In the FDE scheme, state-specific polarization of the environment can be included in the self-consistent mode of the formalism by interchanging the roles of the subsystems in so-called freeze-and-thaw iterations.<cit.> In polarizable embedding, this is replaced by a self-consistent determination of the model-specific embedding parameters describing the polarization of the environment. Both FDE<cit.> and polarizable embedding approaches<cit.> have been generalized to a response formalism to allow for the calculation of response and transition properties of molecules embedded in large molecular complexes.
The computational cost associated with the explicit coupling of the subsystem excitation manifolds in a fully coupled FDE scheme generally hinders the inclusion of the dynamical response of the entire environment in large complexes,<cit.> but allows in a truncated form to describe a few strongly coupled local excitations, as relevant for chromophoric aggregates.<cit.> The classical treatment in polarizable embedding schemes, on the other hand, offers an efficient inclusion of the dynamic environmental response, provided the perturbing field is nonresonant with respect to local excitations in the environment.<cit.> This restriction has, however, been lifted by introducing phenomenological excited-state lifetimes in the response formalism, see Refs. .

The objective of the present work is to formulate a rigorous derivation of linear response theory within polarizable embedding starting from a quantum-mechanical treatment of the entire system. While various polarizable embedding schemes differ in the specific representation of the environment, the underlying mathematical structure and physical content of their working equations are the same.<cit.> For the present aim, we will focus on the explicit expressions for the polarizable embedding (PE) model<cit.> in which the environment is described in terms of a distributed multipole representation, thus providing a well-defined link to the environment charge density. We note that the main conclusions will also be valid for, e.g., polarizable density embedding,<cit.> which goes beyond pure polarizable embedding. While the formally exact FDE framework has previously been used as a common theoretical platform for discussing embedding models,<cit.> our derivation will be based on an explicit parameterization of the wave function of the combined system.
The origin and inherent limitations of polarizable embedding become particularly transparent in this framework since approximations are introduced by imposing restrictions directly on the corresponding wave function parameterization, as we shall see.

The implications of the choice of parameterization are pertinent to the extension to response properties: the equivalence between response theory and state-specific approaches is lost when employing a nonlinear parameterization,<cit.> as is the case for polarizable embedding, such that different physical pictures emerge in the two formalisms.

On the basis of a four-state model, it was shown in Ref.  that the exact second-order excitation energy for a system of two interacting subsystems contains three contributions in addition to the zeroth-order term describing the excitation within a frozen environment: (i) a term describing the differential polarization in the environment upon excitation in the embedded molecule, i.e., a classical induction effect, (ii) the difference in dispersion interaction between the environment and the embedded molecule in its ground state and excited state as given by the Casimir–Polder formula, and (iii) a term that describes the coupling of the excited state of interest with all lower-lying states of the embedded molecule, i.e., de-excitations, in terms of a screened interaction mediated by the environment through its dynamic polarizability (evaluated at the excitation frequency). For a generalization beyond a few-state model, see Refs. . Regardless of the debated interpretation of this term as a dispersion effect,<cit.> a nonresonant excitonic coupling between subsystems<cit.> or a resonant coupling,<cit.> it is a manifestation of the mutual correlation of the electrons in the subsystems.
Employing a direct-product ansatz in a linear response framework recovers only the third term, although it should be noted that the correspondence is incomplete for all but the lowest excitation since de-excitations to lower-lying states different from the ground state are missing in the response treatment based on a ground-state reference.<cit.> On the other hand, only the state-specific induction term (i) is recovered in a state-specific formalism. Whereas previous theoretical analyses have primarily focused on excitation energies,<cit.> we shall here be concerned with the general linear response function, and not only its poles. Based on the linear response function of the combined system, we will derive two different subsystem decompositions, referred to as symmetric and nonsymmetric, respectively. These provide a direct link to the standard response formulations of polarizable embedding based on the classical description of the energy contribution of the interaction with the environment.<cit.> In addition to justifying the form of the environmental effects in polarizable embedding schemes, the present analysis also sheds light on the basic features of coupled systems and potential complications in separating out subsystem contributions to molecular properties. 
As discussed by Pavanello in the subsystem DFT framework, transitions in composite systems are not strictly localized in the sense that the pole structure of the combined system is inherited by the individual subsystem contributions to the linear response function.<cit.> We will extend that analysis by considering the consequences for the evaluation of transition strengths in an embedding framework by contrasting the symmetric and nonsymmetric decompositions.

This exposition will further benefit from our consideration of the complex linear response function of the combined system, which puts emphasis on conservation laws for integrated intensities and their ramifications for the decomposed forms of the response function used in practical polarizable embedding schemes.

Finally, the key features of linear response theory of a combined system subjected to a weak external electric field will be illustrated numerically by considering a six-level model of a hydrogen-bonded dimer complex formed by water and para-nitroaniline. This simple example serves to demonstrate the results otherwise found in the theoretical analysis.

§ THEORY

Our presentation is intended to be self-contained and thus we provide a rather extensive theory section, starting in the first part with a brief review of the conventional quantum-mechanical direct-product ansatz for the wave function of the combined system.<cit.> We then provide a rigorous formulation of linear response theory for a composite system within this theoretical framework. Our formulation allows us, upon additional approximations, to recover the PE model. In practice, we exploit two different decompositions that allow for solving the response equations in effective subsystem spaces. This leads to the definition of effective quantities that provide insight into how subsystem properties are modified in the presence of another interacting subsystem.
The features of the individual terms in the two decompositions are compared with a view to the evaluation of subsystem contributions to response and transition properties. Finally, to overcome some of the challenges related to the subsystem formulation in conventional linear response theory, we extend the treatment to a complex response framework. In addition, such a formulation allows us to demonstrate the intensity borrowing that occurs in a coupled system. In the second part of this section, we analyze the problem of recovering the PE model and its extension to quantum-mechanical linear response theory based on the decompositions derived in the first part. Specifically, we assume a perturbation treatment of the environment subsystem as well as a multipole representation of its charge distribution. In this way, we provide a rigorous derivation of the environmental effects emerging in linear response theory within polarizable embedding.<cit.> Atomic units (a.u.) will be used throughout and, in order to keep equations more compact, we will allow ourselves to leave out physical constants that have a numerical value of 1 a.u.

§.§ Preliminary Considerations

Consider a composite system consisting of N interacting subsystems, each with an integer number of electrons and with fixed relative positions.
A fundamental assumption made in quantum–classical embedding models is that of nonoverlapping subsystem charge densities (zero-overlap approximation), which implies that exchange–repulsion vanishes. As a consequence, the exact wave function of the combined system can be written in the basis of direct-product states constructed from the complete antisymmetrized subsystem spaces.<cit.> The nonrelativistic Born–Oppenheimer Hamiltonian for the interacting system decomposes naturally as

ℋ̂=∑_A=1^Nℋ̂_A+∑_A>B^N𝒱̂_AB ,

where ℋ̂_A is the electronic Hamiltonian of the isolated subsystem A and 𝒱̂_AB is the interaction operator describing the interactions between the nuclei and electrons in subsystem A with those in B. In second quantization, the interaction operator takes the form

𝒱̂_AB = ∑_m∈B^M_BZ_m∑_pq∈Av_pq(𝐑_m)Ê_pq+∑_n∈A^M_AZ_n∑_rs∈Bv_rs(𝐑_n)Ê_rs +∑_pq∈A∑_rs∈B v_pq,rsÊ_pqÊ_rs+ ∑_n∈A^M_A∑_m∈B^M_BZ_nZ_m/|𝐑_n-𝐑_m| ,

where Z_n and 𝐑_n denote the charge and the position vector, respectively, of nucleus n of the M_A nuclei in subsystem A, while Z_m, 𝐑_m and M_B are the corresponding quantities belonging to subsystem B. Ê_pq is the usual second-quantization singlet one-electron excitation operator,<cit.> and p,q,r,s are used as general spatial molecular orbital indices. The affiliation of the corresponding orbitals to a given subsystem is indicated by the summation. The associated one- and two-electron integrals are defined as

v_pq(𝐑)= -∫ϕ_p^*(𝐫)ϕ_q(𝐫)/|𝐑-𝐫|d𝐫 ,

v_pq,rs = ∬ϕ_p^*(𝐫_1) ϕ_r^*(𝐫_2) ϕ_q(𝐫_1)ϕ_s(𝐫_2)/|𝐫_1-𝐫_2| d𝐫_1 d𝐫_2 = -∫ϕ_p^*(𝐫_1) ϕ_q(𝐫_1) v_rs(𝐫_1) d𝐫_1 .

The first two terms in Eq. (<ref>) describe the instantaneous intersubsystem Coulomb interaction between the electrons in subsystem A and the M_B nuclei in subsystem B and vice versa, whereas the third and fourth terms are the intersubsystem electron–electron and nucleus–nucleus repulsion terms, respectively.
Notice that no exchange integrals between the subsystems survive within the zero-overlap approximation, and the two-electron excitation operator therefore factorizes into subsystem contributions. Occasionally, we will use an alternative representation of the interaction operator

𝒱̂_AB=∫ρ̂_A(𝐫)𝒱̂_B(𝐫)d𝐫=∫ρ̂_B(𝐫)𝒱̂_A(𝐫)d𝐫=ρ̂^A_𝐫 𝒱̂^B_𝐫 ,

given in terms of the first-order reduced density and electrostatic potential operators:

ρ̂_A(𝐫)=∑_n∈A^M_AZ_nδ(𝐫-𝐑_n)-∑_pq∈Aϕ_p^*(𝐫)ϕ_q(𝐫)Ê_pq , 𝒱̂_B(𝐫)=∑_m∈B^M_BZ_m/|𝐫-𝐑_m|+∑_rs∈Bv_rs(𝐫)Ê_rs .

The last equality of Eq. (<ref>) introduces a shorthand notation for the spatial integration with respect to repeated space variables. To maximize the comparability with the effective environment models, we approximate the electronic wave function of the combined system with a single direct product of subsystem wave functions

|Ψ_AΨ_B…Ψ_N⟩ = |Ψ_A⟩⊗ |Ψ_B⟩⊗…|Ψ_N⟩ .

The variation of the associated energy functional according to the Rayleigh–Ritz variational principle leads to a set of coupled effective subsystem equations

∀ A:ℱ̂_A|0_A⟩=E_A|0_A⟩ ,

where the effective subsystem Hamiltonian is given by

ℱ̂_A =ℋ̂_A+∑_B≠A^N⟨ 0_B| 𝒱̂_AB| 0_B⟩=ℋ̂_A + ∑_pq∈A∑_B≠A[∑_m∈B^M_BZ_mv_pq(𝐑_m)+∑_rs∈Bv_pq,rsD_B,rs]Ê_pq .

Here, D_B,rs=⟨ 0_B | Ê_rs| 0_B⟩ is an element of the first-order reduced density matrix for subsystem B. The two terms collected in the square bracket represent the classical electrostatic potential generated by the ground states of the remaining subsystems in their polarized states, i.e., in the presence of the other subsystems. In practice, any wave function ansatz may be invoked for the individual subsystems and the associated optimization conditions derived upon applying the variational principle. Equation (<ref>) is analogous to Hartree self-consistent field theory, where the orbitals replace the subsystem wave functions. In particular, the parameterization in Eq.
(<ref>) discards direct-product states in which the subsystems are simultaneously excited, which implies that no dispersion effects are included in this approximation. However, some intersubsystem electron correlation effects are introduced when the direct-product ansatz is used in a response framework, as mentioned in the Introduction.

§ RESPONSE THEORY OF COMPOSITE SYSTEMS

We will now consider the electronic response of the composite system to an external optical field within the framework introduced in the previous section. In this case, nuclear motions are neglected and only the pure electronic response will be considered. Special attention will be paid to the physical aspects of the intersubsystem interactions in the presence of the applied field, and how they influence molecular properties. For the sake of notational simplicity, we shall restrict this analysis to two subsystems (leading to an embedded subsystem A and an environment B in polarizable embedding) and work within a configuration interaction (CI) framework for the individual subsystems. The framework can easily be extended to treat the individual subsystems of the environment, still assuming nonoverlapping subsystems, by decomposing the environment wave function into a product of subsystem contributions (see Eq. <ref>). This will be used in Sec. <ref>.
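Before developing the response formalism, the coupled ground-state equations of the previous section, ∀ A: ℱ̂_A|0_A⟩=E_A|0_A⟩, can be illustrated numerically. The sketch below is not part of the derivation: it uses hypothetical two-level subsystems with a simple product coupling 𝒱̂_AB = g·Ô_A⊗Ô_B, so that the mean-field Hamiltonian ℱ̂_A = Ĥ_A + ⟨0_B|𝒱̂_AB|0_B⟩ can be iterated to self-consistency, in direct analogy with Hartree theory.

```python
import numpy as np

# Hypothetical two-level subsystems coupled through V_AB = g * O_A (x) O_B.
# The mean-field equations F_A |0_A> = E_A |0_A> are solved by simple
# self-consistent (Gauss-Seidel style) iteration.
H_A = np.diag([0.0, 1.0])
H_B = np.diag([0.0, 1.5])
O_A = np.array([[0.2, 0.5], [0.5, -0.2]])
O_B = np.array([[0.3, 0.4], [0.4, -0.3]])
g = 0.1

def ground_state(H):
    w, v = np.linalg.eigh(H)
    return w[0], v[:, 0]

psi_A = ground_state(H_A)[1]
psi_B = ground_state(H_B)[1]
for _ in range(100):
    # F_A = H_A + <0_B|V_AB|0_B>, i.e. H_A plus the mean field of B
    F_A = H_A + g * (psi_B @ O_B @ psi_B) * O_A
    E_A, psi_A = ground_state(F_A)
    F_B = H_B + g * (psi_A @ O_A @ psi_A) * O_B
    E_B, psi_B = ground_state(F_B)
```

At convergence, each subsystem ground state is polarized by the presence of the other: psi_A acquires a nonzero excited-state component that would be absent for the isolated subsystem.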
We also note that, even though we explicitly consider the CI parameterization for the wave functions of the individual subsystems, the principles apply more generally to other variational wave function models. The field–matter interaction will be described within the semi-classical framework, where the incident field is treated as a classical plane wave that perturbs the molecular system. The perturbation operator describing the action of a monochromatic external field on the composite system can then be expressed in terms of Fourier components as

V̂^t = ∑_±ωV̂_α^ωF_α^ωe ^-iω t=∑_±ω(V̂_A,α^ω+V̂_B,α^ω)F_α^ωe ^-iω t ,

where F_α^ω are Fourier amplitudes associated with the one-electron perturbation operators V̂_α^ω, using Greek subscripts as (possibly composite) Cartesian labels. In order to maintain Hermiticity of V̂^t, we have F_α^ω=[F_α^-ω]^* and V̂_α^ω=[V̂_α^-ω]^†. Note that the additivity of the perturbation operator in the last equality of Eq. (<ref>) is a consequence of the zero-overlap assumption. Here and henceforth, the Einstein summation convention is adopted for repeated Greek indices. Our derivation will follow the quasi-energy formulation of response theory,<cit.> which can be viewed as a time-dependent generalization of the ordinary energy-differentiation technique of time-independent perturbation theory and which, in its time-averaged formulation, is restricted to time-periodic perturbations. The response functions are defined as the coefficients of the time-averaged quasi-energy in a Taylor series expansion in terms of the external field strengths

{Q_AB}_T= E_AB + ∑_ω⟨ 0_A0_B|V̂_α^ω|0_A0_B⟩ F^ω_αδ_ω+1/2∑_ω_1,ω_2⟨⟨V̂_α^ω_1;V̂_β^ω_2⟩⟩δ_ω_1+ω_2 F_α^ω_1F_β^ω_2+1/6∑_ω_1,ω_2,ω_3⟨⟨V̂_α^ω_1;V̂_β^ω_2,V̂_γ^ω_3⟩⟩δ_ω_1+ω_2+ω_3 F_α^ω_1F_β^ω_2F_γ^ω_3+⋯ ,

where {·}_T indicates that the time-average over one period of oscillation in the external field has been taken.
The symbol δ_ω, not to be confused with a Dirac-delta or a discrete Kronecker-delta function, is unity when the continuous frequency variable vanishes and zero otherwise. An instructive exposition of the quasi-energy derivative approach as well as a comparison to the alternative Ehrenfest formulation can be found in Ref. .

§.§ The Direct-Product Approximation

The phase-isolated part of the time-dependent direct-product wave function for the composite system may be defined by an exponential unitary parameterization as

|0_A 0_B⟩ = e^i[Λ̂_A(t)+Λ̂_B(t)]|0_A0_B⟩=e^iΛ̂_A(t)|0_A⟩⊗ e^iΛ̂_B(t)|0_B⟩; [Λ̂_A,Λ̂_B]=0 .

The intersubsystem commutation relation above follows from the zero-overlap assumption. The time-dependent Hermitian operator Λ̂_A(t) for subsystem A is parameterized in terms of a set of time-dependent amplitudes {λ_i_A} and takes the form<cit.>

Λ̂_A(t) = ∑_i> 0 (λ_i_A(t)q̂_i_A^†+λ^*_i_A(t)q̂_i_A) =𝐐_A^†Λ_A , 𝐐_A^†=[ 𝐪_A^† 𝐪_A ]; Λ_A=[ λ_A(t) λ_A(t)^* ]^T ,

where an identity operator for subsystem B is implied. The state-transfer operators q̂_i_A^†=|i_A⟩⟨ 0_A| and their adjoints are built from a set of orthonormalized states {|i_A⟩} that span the orthogonal complement space of the reference state |0_A⟩, the latter satisfying the variational condition given by Eq. (<ref>). Before proceeding, we briefly comment on the employed parameterization: (i) The direct-product parameterization in Eq. (<ref>) is nonlinear: that is, it produces states outside the excitation manifold defined by the linear action of the operator Λ̂_A(t)+Λ̂_B(t). In particular, the exponential parameterization contains states in which both subsystems are excited simultaneously due to products of subsystem excitations, despite the absence of such transitions in the state-transfer operators.
This is analogous to the nonlinear exponential parameterization of a Hartree–Fock or Kohn–Sham determinant, which is based on a generator of single-electron excitations yet encompasses multi-electron excited determinants. As mentioned in the Introduction, a direct consequence of the use of a nonlinear parameterization is that the properties derived from a response framework will differ from those based on the state-specific formulation. (ii) The lack of a Λ̂_AB(t) operator in the exponent of the first equality in Eq. (<ref>), and thus of a direct coupling between direct-product states of the type ⟨ i_A0_B| and |i_Aj_B⟩ as well as ⟨ i_A0_B| and |k_Aj_B⟩, is the origin of the neglect of state-specific relaxation and London dispersion between the subsystems. (iii) The expansion of the time-dependent wave function in the basis of subsystem states means that only intrasubsystem transitions are included, while intersubsystem transitions are excluded. Therefore, a main approximation in the above parameterization is the exclusion of charge-transfer transitions between subsystems, in line with our initial restriction to a fixed number of electrons in a given subsystem. Having settled on an approximate parameterization of the phase-isolated wave function, we can now construct the time-dependent quasi-energy of the composite system

Q_AB(t)= ⟨0_A 0_B | (ℋ̂ +V̂^t-i∂/∂ t) |0_A 0_B⟩ .

The time-dependent amplitudes are expanded in orders of the perturbation and determined by imposing the variational principle for the time-averaged quasi-energy at the various orders. Following Eq. (<ref>), explicit expressions for response functions of the combined system can then be obtained as perturbation-strength derivatives of the time-averaged quasi-energy in Eq. (<ref>), evaluated at zero field strengths.
The linear response function becomes

⟨⟨V̂_α^-ω; V̂_β^ω⟩⟩ = d^2{Q_AB}_T/dF_α^-ω dF_β^ω|_𝐅=0= -𝐕_α^ω†(𝐄^[2]-ω𝐒^[2])^-1𝐕_β^ω .

Ordering the Fourier components of the configuration amplitudes in the operators in Eq. (<ref>) according to Λ^ω=(λ^ω_A, λ^ω*_A, λ^ω_B, λ^ω*_B)^T leads to the following intra- and intersubsystem blocked forms of the vectors and matrices:

-i𝐕_β^ω= ∂^2{Q_AB}_T/∂ F_β^ω ∂Λ^ω^*|_𝐅=0=-i [ 𝐕^ω_A,β; 𝐕^ω_B,β ] ,

𝐄^[2]-ω𝐒^[2]= ∂^2{Q_AB}_T/∂Λ^ω^* ∂Λ^ω|_𝐅=0 = [ 𝐄_A^[2] 𝐄_AB^[2]; 𝐄_BA^[2] 𝐄_B^[2] ] -ω[ 𝐒_A^[2] 0; 0 𝐒_B^[2] ] .

The diagonal blocks of the electronic Hessian and overlap matrices, arising upon differentiation of the quasi-energy with respect to the wave function parameters belonging to the same subsystem (I=A,B), take the same overall form as for the isolated subsystems

𝐄_I^[2] =[ 𝐀^I 𝐁^I; 𝐁^I^* 𝐀^I^* ]; 𝐒_I^[2] =[ 1 0; 0 -1 ]; 𝐕_I,α^ω= [ 𝐠_α^I; -𝐠_α^I^* ] ,

where the symbols 0 and 1 are used to denote appropriately sized null and identity matrices. As exemplified for subsystem A, the elements of the subsystem blocks take the form

A^A_ij=⟨ 0_A0_B |[q̂_i_A,[ℋ̂_A+𝒱̂_AB,q̂_j_A^†]] |0_A0_B⟩ =⟨ i_A 0_B | ℋ̂_A+𝒱̂_AB| j_A 0_B⟩-δ_i_Aj_A⟨ 0_A0_B | ℋ̂_A+𝒱̂_AB | 0_A0_B⟩ =⟨ i_A | ℋ̂_A+ ⟨ 0_B | 𝒱̂_AB| 0_B⟩ | j_A⟩-δ_i_Aj_A⟨ 0_A | ℋ̂_A+ ⟨ 0_B |𝒱̂_AB | 0_B⟩ | 0_A⟩ ,

B^A_ij=⟨ 0_A 0_B |[q̂_i_A,[ℋ̂_A+𝒱̂_AB,q̂_j_A]] |0_A0_B⟩=0 ,

g_α,i^A=⟨0_A0_B| [q̂_i_A,V̂_A,α^ω] |0_A0_B⟩= ⟨0_A| [q̂_i_A,V̂_A,α^ω] |0_A⟩ .

In addition to the isolated subsystem term, the electronic subsystem Hessian in Eq. (<ref>) incorporates a contribution from the intersubsystem coupling. In particular, it describes the coupling between intrasubsystem excitations under the influence of the electrostatic potential produced by the electronic ground state of the other subsystem. Note that the vanishing 𝐁^I blocks result from our choice of a CI parameterization within the subsystems.
However, we choose to keep these blocks in our above illustration of the structure of the electronic Hessian of the subsystems, as they could be nonzero for other wave function models. The generally rectangular off-diagonal block 𝐄_AB^[2] and its conjugate transpose 𝐄_BA^[2] in Eq. (<ref>) describe the intersubsystem coupling. Using the same ordering as in Eq. (<ref>), the structure of this block can be written as

𝐄_AB^[2] = [ Γ Θ; Θ^* Γ^* ] ,

with elements given by

Γ_ij=⟨ 0_A 0_B |[q̂_i_A,[𝒱̂_AB,q̂_j_B^†]] |0_A0_B⟩= ⟨ i_A 0_B |𝒱̂_AB| 0_A j_B⟩ ,

Θ_ij=⟨ 0_A 0_B |[q̂_i_A,[𝒱̂_AB,q̂_j_B]] |0_A0_B⟩ = -⟨ i_A j_B |𝒱̂_AB| 0_A 0_B⟩ .

The off-diagonal blocks couple excitations in one subsystem with those in the other and, as follows from these expressions, the coupling is described through a Coulombic interaction of transition densities in the two subsystems. Note that the off-diagonal blocks 𝐒_AB^[2] and 𝐒_BA^[2] vanish due to the zero-overlap assumption. By analogy to exact-state theory, a pole and residue analysis of the linear response function in Eq. (<ref>) determines expressions for excitation energies and transition strengths for ground- to excited-state transitions in the combined system for the specific choice of parameterization in Eq. (<ref>). Accordingly, excitation energies are found as eigenvalues of the generalized eigenvalue equation involving the electronic Hessian and the metric in terms of the overlap matrix:<cit.>

𝐄^[2]𝐗 =𝐒^[2]𝐗Ω .

We recall the well-known feature of this generalized eigenvalue problem that the eigenvalues come in pairs ±ω_n and the associated eigenvectors are related,<cit.> and we shall use positive and negative indices (ω_-n=-ω_n) to denote paired solutions.
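The paired structure of the roots can be checked numerically on a small RPA-type model. The blocks below (Amat, Bmat, named to avoid confusion with the subsystem labels) are arbitrary illustrative matrices, chosen so that 𝐄^[2] is positive definite:

```python
import numpy as np

# Minimal generalized eigenvalue problem E2 X = S2 X W with paired roots.
# Amat/Bmat are arbitrary symmetric blocks (illustrative only).
Amat = np.array([[2.0, 0.3], [0.3, 2.5]])
Bmat = np.array([[0.1, 0.2], [0.2, 0.1]])
E2 = np.block([[Amat, Bmat], [Bmat, Amat]])
S2 = np.diag([1.0, 1.0, -1.0, -1.0])

# S2 is its own inverse, so the generalized problem reduces to eig(S2 @ E2).
w = np.sort(np.linalg.eigvals(S2 @ E2).real)
pairs_ok = np.allclose(w, -w[::-1])   # roots come in pairs -w_n, +w_n
```

Sorting the four roots and comparing against their negated reversal confirms the pairing ±ω_n stated above.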
Here, 𝐗 is the matrix of eigenvectors satisfying the orthonormality relation

𝐗^†𝐒^[2]𝐗 =σ; σ_nm=sgn(n)δ_nm ,

and Ω is the diagonal matrix containing the associated eigenvalues ±ω_n. For later reference, we introduce the partitioned form of the eigenvector matrix according to the blocked structure in Eq. (<ref>)

𝐗= [ 𝐗^A 𝐗^AB; 𝐗^BA 𝐗^B ] ,

where the off-diagonal blocks describe the degree of delocalization of the excitations in the combined system. The transition strength associated with excitation n in the composite system can be obtained from the residue of the linear response function as

T^0n_αβ=lim_ω→ω_n(ω-ω_n)⟨⟨V̂_α^-ω;V̂_β^ω⟩⟩ =𝐕_α^-ω_n†𝐗_n𝐗_n^†𝐕_β^ω_n ,

and used to compute the dimensionless oscillator strength

f_0n=2ω_n/3T_αα^0n .

Having outlined the response formalism for the combined system, the main objective in the two following subsections is to obtain expressions for subsystem contributions to properties of the combined system by solving effective equations within the subsystem spaces. As will be shown in Section <ref>, this alternative approach to the solution of the full system provides a direct link to polarizable embedding.

§.§.§ Subsystem decomposition: electronic response properties

To decompose the linear response function of the combined system into subsystem contributions, we use that the inverse of a blocked matrix with nonsingular square diagonal blocks (𝐔 and 𝐙) may be written as

[ 𝐔 𝐕; 𝐖 𝐙 ]^-1= [ (𝐔-𝐕𝐙^-1𝐖)^-1 -(𝐔-𝐕𝐙^-1𝐖)^-1𝐕𝐙^-1; -(𝐙-𝐖𝐔^-1𝐕)^-1𝐖𝐔^-1 (𝐙-𝐖𝐔^-1𝐕)^-1 ] ,

which corresponds to using the Löwdin partitioning technique.<cit.> Using this identity, we can rewrite the matrix resolvent in Eq. (<ref>) in a partitioned form and obtain the following subsystem decomposition of the linear response function:

⟨⟨V̂_α^-ω;V̂_β^ω⟩⟩ =- (𝐕^ω†_A,α𝐍^ω_A,β +𝐕^ω†_B,α𝐍_B,β^ω) = -𝐕_A,α^ω†(𝐄_A^[2](ω)-ω𝐒^[2]_A)^-1𝐕_A,β^ω-𝐕_B,α^ω†(𝐄_B^[2](ω)-ω𝐒_B^[2])^-1𝐕_B,β^ω .

Since the subsystems are treated on the same footing in this representation, Eq.
(<ref>) will in the following be referred to as the symmetric subsystem decomposition (SD). In contrast to Eq. (<ref>), the dimensions in the above expression have been reduced to those of the excitation manifolds of the individual subsystems (e.g., dim_A×dim_A) instead of that of the full system (dim_A+dim_B)×(dim_A+dim_B). As indicated by tildes, the response vectors 𝐍^ω in Eq. (<ref>) are modified quantities that satisfy the following set of effective linear response equations

(𝐄_A^[2](ω)-ω𝐒_A^[2])𝐍_A,β^ω = 𝐕^ω_A,β ,

(𝐄_B^[2](ω)-ω𝐒_B^[2])𝐍_B,β^ω = 𝐕^ω_B,β .

In addition to the implicit modifications through the polarization of the reference state vectors (changes induced by interactions within the subsystem through the interaction part of the diagonal blocks, e.g., 𝐄_A^[2]), the presence of the other subsystem manifests itself in explicit contributions to the electronic Hessian and property gradients. In Eqs. (<ref>) and (<ref>), this has been written compactly by introducing effective Hessians and property gradients defined, here for subsystem A, as

𝐄_A^[2](ω) = 𝐄^[2]_A-𝐄_AB^[2](𝐄_B^[2]-ω𝐒^[2]_B)^-1𝐄_BA^[2] ,

𝐕_A,β^ω = 𝐕_A,β^ω-𝐄_AB^[2](𝐄_B^[2]-ω𝐒^[2]_B)^-1𝐕_B,β^ω .

These two expressions are central to the present work, since they show how the properties of subsystem A are affected by the presence of another interacting subsystem. As they reveal, the coupling between the subsystems is governed by three different mechanisms. The response vector for subsystem A describes changes induced by: (i) the indirect coupling through the modification of the pure intrasubsystem term 𝐄^[2]_A to include the ground-state electrostatic potential of subsystem B. (ii) The explicit coupling to subsystem B through the frequency-dependent term in the electronic Hessian.
A similar result was obtained in the work by Neugebauer, in which a subsystem partitioning of the eigenvalue problem within the linear response generalization of subsystem DFT is discussed.<cit.> (iii) The direct interaction between the applied external field and subsystem B through the modified property gradient. In particular, we recognize the matrix resolvent of the second term in Eqs. (<ref>) and (<ref>) as a matrix representation of the frequency-dependent linear polarizability of the polarized reference state of subsystem B (i.e., self-consistently polarized by subsystem A), evaluated at the optical frequency ω.<cit.> This can be seen by rewriting the intersubsystem coupling blocks according to

Γ_ij= ∑_pq∈A∑_rs∈Bv_pq,rs⟨ 0_A0_B |[q̂_i_A,[Ê_pqÊ_rs,q̂_j_B^†]]|0_A0_B⟩= ∑_pq∈A∑_rs∈Bv_pq,rs⟨ 0_A|[q̂_i_A,Ê_pq]| 0_A⟩⟨ 0_B | [Ê_rs,q̂_j_B^†]|0_B⟩= -∑_pq∈A∑_rs∈Bv_pq,rs⟨ 0_A|[q̂_i_A,Ê_pq]| 0_A⟩⟨ 0_B | [q̂_j_B^†,Ê_rs]|0_B⟩= g_𝒱̂_𝐫,i^Ag_ρ̂_𝐫,j^B* ,

and similarly

Θ_ij= -∑_pq∈A∑_rs∈Bv_pq,rs⟨ 0_A|[q̂_i_A,Ê_pq]| 0_A⟩⟨ 0_B | [q̂_j_B,Ê_rs]|0_B⟩= -g_𝒱̂_𝐫,i^Ag_ρ̂_𝐫,j^B ,

where the vectors are defined in the same way as the property gradient in Eq. (<ref>) but with the perturbation operator replaced by either the electrostatic potential operator or the density operator. The off-diagonal block of the electronic Hessian can then be written as

𝐄_AB^[2]= 𝐕_𝒱̂_𝐫^A𝐕_ρ̂_𝐫^B† =𝐕_ρ̂_𝐫^A𝐕_𝒱̂_𝐫^B† .

Substituting this expression into the last term of Eq.
(<ref>), we obtain

𝐄_AB^[2](𝐄_B^[2]-ω𝐒_B^[2])^-1𝐄_BA^[2] = 𝐕_𝒱̂_𝐫^A[ 𝐕^†_ρ̂_𝐫^B(𝐄_B^[2]-ω𝐒_B^[2])^-1𝐕_ρ̂^B_𝐫^']𝐕_𝒱̂_𝐫^'^A† = 𝐕_𝒱̂_𝐫^A C_𝐫,𝐫'^B(ω)𝐕_𝒱̂_𝐫^'^A† ,

where the factor in square brackets is identified with the frequency-dependent generalized linear polarizability of the polarized ground state of subsystem B:

C_𝐫,𝐫'^B(ω)=⟨⟨ρ̂_B(𝐫);ρ̂_B(𝐫^')⟩⟩_ω ,

which, upon assuming the dipole approximation for subsystem B, reduces to the electric dipole–dipole polarizability tensor α^B(ω). The property gradients for subsystem A describe the electrostatic potential generated by a transition density. The last term in Eq. (<ref>) can thus be interpreted as the linear response of subsystem B induced by the electrostatic potential due to a transition density of subsystem A, which in turn produces an electrostatic potential acting back on A. In the same way, the last term of the effective property gradient in Eq. (<ref>) can be interpreted as the electrostatic potential acting on subsystem A due to the linear polarization induced in subsystem B by its direct interaction with the external field. In other words, subsystem B acts as a field source, so that the effective field strength at the location of subsystem A (in its absence) differs from the external field, as represented by the bare and effective property gradients, respectively. Note that because of the tensorial nature of the environment polarizability, the external field can be screened differently in different directions. It is also worth underlining the relative distance dependence of these two interaction mechanisms.
Assuming a dipole approximation for the interaction operator, it follows that the former effect decays quickly with increasing intermolecular distance r, displaying an r^-6 behavior, whereas the latter effect is comparably long range, dropping off as r^-3. This difference in interaction range is important to consider in practical calculations in order to obtain a balanced description of the various contributing terms when dealing with finite-sized systems.<cit.> As we shall see later, the second terms of Eqs. (<ref>) and (<ref>) reduce exactly to the form of the so-called dynamic reaction field and the effective external field (EEF) effects, the latter also referred to as the local field effect, appearing in polarizable embedding.<cit.> The effective external field effect is of the same origin as the cavity-field effect in polarizable continuum models leading to effective properties,<cit.> but is defined with respect to the external probing field rather than the Maxwell field within the dielectric.<cit.> It should also be noted that the same terms describe the so-called image- and near-field effects in the context of treating molecules adsorbed on metal nanoparticles.<cit.> Finally, we note that neither of the subsystem contributions to the linear response function in Eq. (<ref>) is symmetric with respect to the left and right property gradients. For that reason, there is no guarantee that the effective subsystem polarizability tensors are symmetric, as discussed before,<cit.> or that the diagonal elements are positive in the static limit. This is in contrast to the polarizability tensor of the combined system, which is symmetric, as it should be. Physically, the nonsymmetric form of the individual terms can be viewed as describing the linear response of a property associated with the operator V̂_A,α^-ω to the actual perturbing field acting on subsystem A in the presence of subsystem B.
§.§.§ Subsystem decomposition: electronic transition properties

In this section, we perform an analysis of the transition properties of the composite system as relevant for resonant external fields. As pointed out in the Introduction, the lowest electronic transitions are often localized in nature and can be attributed predominantly to a particular subsystem, i.e., the excitation vectors have their dominant contribution in the excitation manifold of one of the subsystems. The condition under which such excitations occur follows from perturbation theory: the magnitude of the coupling strengths of the excited state of interest with electronic states in the other subsystem should be small compared with their energetic separation. This ratio dictates the magnitude difference between the diagonal and off-diagonal blocks in Eq. (<ref>) and thus the extent of the delocalization of a given electronic excitation in the combined system. Rather than solving the full generalized eigenvalue equation in Eq. (<ref>), the excitation energies of the combined system may be determined from the effective response equations in Eq. (<ref>) upon zeroing the right-hand sides. Folding the effects of subsystem B into the equation for A yields the pseudo generalized eigenvalue equation

𝐄_A^[2](ω_n)𝐗_n=ω_n𝐒_A^[2]𝐗_n ,

which has the dimension of the excitation manifold of subsystem A. Provided that there are no degenerate states located in the B part, the excitation energies derived from the reduced system in Eq. (<ref>) are identical to those associated with the subsystem A part of the parent system in Eq. (<ref>), i.e., the full system. This reformulation thus turns the generalized eigenvalue problem for the full system into a dressed subsystem problem, as has been shown before in the subsystem TDDFT framework.<cit.> In other words, this form allows us to compute the poles associated with the A (B) block of Eq.
(<ref>) without having to consider the full problem with the dimension of both subsystems. This is particularly attractive for determining localized transitions, as relevant from the perspective of embedding calculations. The nonlinearity of Eq. (<ref>) introduced by the frequency-dependent effective Hessian, however, requires knowledge of the solutions beforehand or that the problem is solved iteratively one excitation at a time, including the construction of a new Hessian for each eigenvalue in question. As a consequence, the subsystem A components will not be orthogonal to each other (only the full eigenvectors will). If only the subsystem term 𝐄_A^[2] is included in Eq. (<ref>), the result will be denoted by "FP" for frozen ground-state polarization, implying that subsystem B is not allowed to respond to the density changes in A upon excitation (that is, neglecting intersubsystem couplings). As discussed in the previous section, the second term in Eq. (<ref>) couples an excitation in subsystem A with those in B by the interaction of the associated transition densities (see Eq. (<ref>)). Let us now return to the evaluation of the transition strengths associated with the excitations in the composite system, but now taking a decomposed form of the linear response function as the starting point. As has been shown by Pavanello,<cit.> each subsystem contribution to the response function (i.e., the symmetric decomposition in Eq. (<ref>)) displays poles at all excitations in the combined system. This complication reflects the delocalization necessarily present in coupled quantum systems, which is not compliant with our heuristic view of local excitations. As a consequence, transition strengths of electronic transitions "localized" in subsystem A cannot be identified as the residues of the subsystem A contribution to the response function (the first term in Eq.
(<ref>)). To facilitate the identification of residues, we will explore an alternative partitioning of the matrix resolvent in Eq. (<ref>), which can be obtained by applying the Woodbury matrix identity (for 𝐔 and 𝐙 nonsingular square matrices)

(𝐙-𝐖𝐔^-1𝐕)^-1𝐖𝐔^-1=𝐙^-1𝐖(𝐔-𝐕𝐙^-1𝐖)^-1 ,

(𝐙-𝐖𝐔^-1𝐕)^-1=𝐙^-1+𝐙^-1𝐖(𝐔-𝐕𝐙^-1𝐖)^-1𝐕𝐙^-1 ,

to the third and fourth blocks of Eq. (<ref>) (alternatively, the first and second blocks to get the residues related to the B part). Hereby, the linear response function of the combined system can be written as

⟨⟨V̂_α^-ω;V̂_β^ω⟩⟩= -𝐕^ω†_A,α(𝐄_A^[2]-ω𝐒_A^[2] )^-1𝐕^ω_A,β -𝐕_B,α^ω†(𝐄_B^[2]-ω𝐒_B^[2] )^-1𝐕_B,β^ω ,

in which the subsystems, in contrast to Eq. (<ref>), are treated on an unequal footing. Equation (<ref>) will therefore be referred to as the nonsymmetric decomposition (NSD). The same approach has been used in the original formulations of second-order polarization propagator approximation (SOPPA) theory,<cit.> in which the effective quantities describe the doubles correction to the particle-hole spectrum, while the pure double excitations are given by the second term. As will be discussed in Sec. <ref>, the expressions for polarizable embedding in a linear response framework follow, in a spirit similar to SOPPA and the algebraic diagrammatic construction,<cit.> from a perturbation analysis of the individual matrices appearing in Eq. (<ref>). Note that the second term originates from the first term on the right-hand side of Eq. (<ref>) and corresponds to the linear response function for the ground-state polarized subsystem B. An important outcome of this alternative representation of the response function of the combined system is that the poles of the A-dominated excitations (all excitations of the composite system) are contained entirely in the first term. However, as a consequence of the appearance of the second term, the partitioning in Eq.
(<ref>) introduces additional poles in the individual terms at the transition frequencies of the ground-state polarized subsystem B. Hence, the correct pole structure for the composite system is only recovered upon taking the sum of the two terms. Nevertheless, the first term in Eq. (<ref>) can be used to identify the correct decomposed expression for the residues of the excitations located mainly in subsystem A, since the second term does not affect their excitation energies and transition moments but is needed only in the calculation of the response function. Until now, the normalization of the effective eigenvectors obtained from Eq. (<ref>) has been of no concern, since it does not affect excitation energies. However, it is necessary to consider a renormalization of the effective eigenvectors before the transition strengths can be evaluated in the decomposed formulation. In particular, since the eigenvector of the full equation, corresponding to a transition mainly localized in subsystem A, is normalized to ± 1 according to

𝐗^A†_n𝐒_A^[2]𝐗^A_n+ 𝐗^BA†_n𝐒_B^[2]𝐗^BA_n=σ_n ,

the subsystem A component (first term) must have a norm less than unity. The A and B components of the eigenvector are related through Eq. (<ref>), which can be used to rewrite the normalization condition in terms of the A component. This leads to the following renormalization factor for the effective eigenvector for the given pole ω_n

(Γ_n^A)^-1 =𝐗_n^A†𝐒_A^[2]𝐗_n^A+ 𝐗_n^A†𝐄_AB^[2](𝐄_B^[2]-ω_n𝐒_B^[2])^-1𝐒_B^[2](𝐄_B^[2]-ω_n𝐒_B^[2])^-1𝐄_BA^[2]𝐗_n^A .

The transition strength between the ground state and an excited state mainly located in subsystem A can then be written, in the partitioned form, as

T_αβ^0n=Γ_n^A𝐕_A,α^-ω_n†𝐗^A_n 𝐗^A†_n𝐕_A,β^ω_n .

What we have achieved up to this point is to recast Eqs. (<ref>) and (<ref>) into dressed subsystem expressions that separate out subsystem contributions to molecular and transition properties of the combined system. As will become clear from the final steps taken in Sec.
<ref>, this provides a justification of the various environmental effects appearing in polarizable embedding. The theoretical analysis further allowed us to identify which decomposed form of the response function to use for a specific purpose. We underline that such clarification does not follow from a derivation anticipating the classical description of the environment from the outset, where the Ehrenfest and quasi-energy derivative formulations of response theory lead to different response functions.<cit.> Although the subsystem decompositions based on standard response theory are illustrative and provide insight into the mechanisms governing the interaction between subsystems, their practical application to transition properties is limited for several reasons: (i) the solution of the pseudo generalized eigenvalue problem in Eq. (<ref>) requires an iterative scheme, (ii) a new effective subsystem A Hessian has to be constructed for each "eigenvalue" of interest, and (iii) the problem is ill-defined if poles in subsystem B are too close to the one being solved for. Another important point is that the normalization factor in Eq. (<ref>), necessary for the evaluation of absorption intensities, cannot be straightforwardly converted into an effective environment analog as needed when turning to embedding models. As will be discussed next, these complications can be avoided by considering the combined system within a complex response theory framework, in which absorption properties can be computed without having to resolve the individual excitations, which is also advantageous for systems in which the density of electronic states is high. In addition, such a framework clearly illustrates the intensity borrowing that occurs between interacting subsystems.
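The iterative workflow just described can be sketched on a toy model: solve the frequency-dependent dressed subsystem problem self-consistently for one root, renormalize the subsystem-A eigenvector with Γ_n, and compare with the full-space result. For transparency the sketch assumes a small symmetric model with an identity metric (𝐒_A^[2], 𝐒_B^[2] → 1); Amat, Bmat, C and the gradients are hypothetical numbers:

```python
import numpy as np

# Amat/Bmat are the diagonal Hessian blocks of subsystems A and B,
# C the coupling block, vA/vB property gradients; identity metric.
Amat = np.array([[1.0, 0.1], [0.1, 1.2]])
Bmat = np.diag([3.0, 3.5])
C = np.array([[0.2, 0.1], [0.1, 0.2]])
vA = np.array([0.7, -0.3])
vB = np.array([0.4, 0.2])

# Reference: full eigenproblem and exact transition strength
full = np.block([[Amat, C], [C.T, Bmat]])
w_all, X = np.linalg.eigh(full)
w_ref, x_ref = w_all[0], X[:, 0]                 # lowest, A-dominated root
T_ref = (np.concatenate([vA, vB]) @ x_ref) ** 2

# (1) self-consistent dressed eigenvalue, one root at a time
w = np.linalg.eigvalsh(Amat)[0]                  # start from the bare root
for _ in range(50):
    A_eff = Amat - C @ np.linalg.solve(Bmat - w * np.eye(2), C.T)
    vals, vecs = np.linalg.eigh(A_eff)
    w, xA = vals[0], vecs[:, 0]

# (2) renormalization factor Gamma_n and effective transition strength
R = np.linalg.inv(Bmat - w * np.eye(2))          # (E_B^[2] - w S_B^[2])^{-1}
gamma_inv = xA @ xA + xA @ C @ R @ R @ C.T @ xA  # 1 / Gamma_n
vA_eff = vA - C @ R @ vB                         # effective gradient at w_n
T_eff = (vA_eff @ xA) ** 2 / gamma_inv
```

The converged root matches the lowest eigenvalue of the full problem, and the renormalized subsystem-A residue reproduces the full transition strength; note that T_eff is invariant to the scaling of xA, since the Γ_n factor restores the normalization.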
§.§ Subsystem decomposition in a complex response framework

In complex response theory, effects of radiative and nonradiative relaxation mechanisms for the decay of the excited states are modeled in a phenomenological manner by assigning finite lifetimes (τ_n) to the excited states. This leads to complex-valued response functions that are well-defined across the entire frequency range and thus provides resonant-convergent properties. The complex linear response function takes the following form<cit.>⟨⟨V̂_α^-ω;V̂_β^ω⟩⟩= -𝐕^ω†_α (𝐄^[2]-(ω+iγ)𝐒^[2])^-1𝐕^ω_β .In practice, it is customary to adopt a common lifetime, and hence damping parameter γ=(2τ)^-1, for all excited states such that the so-called relaxation matrix becomes γ=γ1.

Similar to the conventional response framework, the complex linear response function may be expressed in alternative subsystem-decomposed forms. Applying Eq. (<ref>) yields its symmetric subsystem decomposition⟨⟨V̂_α^-ω;V̂_β^ω⟩⟩= -𝐕_A,α^ω†(𝐄_A^[2](ω)-(ω+iγ)𝐒_A^[2])^-1𝐕_A,β^ω-𝐕_B,α^ω†(𝐄_B^[2](ω)-(ω+iγ)𝐒_B^[2])^-1𝐕_B,β^ω .The effective subsystem vector and matrix quantities are now complex, here given for subsystem A:𝐄_A^[2](ω)= 𝐄_A^[2]-𝐄_AB^[2](𝐄_B^[2]-(ω+iγ)𝐒_B^[2])^-1𝐄_BA^[2] , 𝐕^ω_A,α= 𝐕_A,α^ω-𝐄_AB^[2](𝐄_B^[2]-(ω+iγ)𝐒_B^[2])^-1𝐕_B,α^ω .Note that even in the usual case of a real (or purely imaginary) external perturbation, the property gradient will be complex as a result of the damped response of the other subsystem (the second term of Eq. (<ref>)). By further rewriting according to Eqs.
(<ref>) and (<ref>), we obtain the nonsymmetric decomposition⟨⟨V̂_α^-ω;V̂_β^ω⟩⟩ = -𝐕_A,α^ω†'(𝐄_A^[2](ω)-(ω+iγ)𝐒_A^[2])^-1𝐕_A,β^ω -𝐕_B,α^ω†(𝐄_B^[2]-(ω+iγ)𝐒_B^[2])^-1𝐕_B,β^ω .It should be noted that the conjugate transpose of the effective quantities is here assumed, as indicated by the prime, to act only on vectors and matrices but without changing the sign in front of the damping parameter.

In actual calculations, the value of the response function is determined by solving the complex linear response equation. For the subsystem decompositions, this implies solving the complex analogs of Eq. (<ref>). To be amenable to practical implementation, these may be expressed as a coupled set of linear equations for the real and imaginary components<cit.> (indicated by superscripts R and I, respectively). By using Eq. (<ref>) and assuming real wave functions, we obtain the following expression for the effective response equations for subsystem A [[ 𝐄_A^[2]-𝐕_𝒱̂^A_𝐫C_𝐫,𝐫'^B,R(ω)𝐕_𝒱̂^A_𝐫'^† -ω𝐒_A^[2] 𝐕_𝒱̂^A_𝐫C_𝐫,𝐫'^B,I(ω)𝐕_𝒱̂ ^A_𝐫'^†+γ𝐒_A^[2];-𝐕_𝒱̂^A_𝐫 C_𝐫,𝐫'^B,I(ω)𝐕_𝒱̂^A _𝐫'^† -γ𝐒_A^[2]𝐄_A^[2]-𝐕_𝒱̂^A_𝐫C_𝐫,𝐫'^B,R(ω)𝐕_𝒱̂^A_𝐫'^†-ω𝐒_A^[2];]] [[ 𝐍_A,β^ω,R; 𝐍_A,β^ω,I ]]=[[ 𝐕_A,β^ω,R-𝐕_𝒱̂^A_𝐫C_𝐫,β^B,R(ω); 𝐕_A,β^ω,I-𝐕_𝒱̂^A_𝐫C_𝐫,β^B,I(ω) ]] , where C_𝐫,β^B(ω)=⟨⟨ρ̂^B(𝐫);V̂^ω_B,β⟩⟩_ω.
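The equivalence between the direct complex solve and the coupled real/imaginary formulation can be checked with a minimal sketch. The matrices below are random symmetric stand-ins (not an actual electronic Hessian), and the subsystem B coupling terms are omitted so that only the γ𝐒 block structure is exercised.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric Hessian E, unit metric S, real gradient V.
n = 4
E = np.diag([0.2, 0.3, 0.5, 0.7]) + 0.01 * rng.standard_normal((n, n))
E = 0.5 * (E + E.T)
S = np.eye(n)
V = rng.standard_normal(n)
w, gamma = 0.25, 0.005

# Direct complex solve of (E - (w + i*gamma) S) N = V.
N_complex = np.linalg.solve(E - (w + 1j * gamma) * S, V)

# Equivalent coupled real equations for N^R and N^I:
#   [ E - w S     gamma S ] [N^R]   [V^R]
#   [ -gamma S    E - w S ] [N^I] = [V^I]   (V real here, so V^I = 0)
A = np.block([[E - w * S, gamma * S],
              [-gamma * S, E - w * S]])
rhs = np.concatenate([V, np.zeros(n)])
NR, NI = np.split(np.linalg.solve(A, rhs), 2)

assert np.allclose(N_complex, NR + 1j * NI)
```

The off-diagonal ±γ𝐒 blocks mirror the structure of the effective subsystem equations above; in the full formulation they are additionally dressed by the imaginary part of the subsystem B polarizability.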
As seen, the coupling between the real and imaginary components of the effective response vector is, in addition to the vacuum contribution from the damping within subsystem A, mediated by the imaginary part of the generalized linear polarizability for subsystem B.

The real part of the complex electric dipole–dipole polarizability is related to the refractive index of the system and, as follows from energy-loss considerations of the perturbing electromagnetic field, the imaginary part is directly proportional to the linear absorption cross section σ(ω).<cit.> For a sample that is isotropic with respect to the light polarization, we haveσ(ω)=ω/3ϵ_0 c_0Im[α_αα(ω)] ,where ϵ_0 is the vacuum permittivity and c_0 is the speed of light in vacuum. The physical significance of the imaginary component of the polarizability can alternatively be recognized by the relation between the integrated absorption cross section and the sum of oscillator strengths (see Appendix <ref>)I=∫_0^∞σ(ω)dω=π/2ϵ_0 c_0∑_n>0f_n0 .In the framework of exact state theory or variational approximate state theory in the complete basis set limit, the Thomas–Reiche–Kuhn sum rule further implies that the sum of oscillator strengths is equal to the number of electrons in the system, N_e. Accordingly, for the combined system, we have∑_n>0f_n0^AB=N_e^A+N_e^B ,and likewise for the subsystems∑_n>0f_n0^vac,I=∑_n>0f_n0^FP,I=N_e^I; I=A,B .Together, Eqs. (<ref>) and (<ref>) reflect that excitations in one subsystem can borrow intensity from transitions in the other subsystem, while the total integrated cross section is preserved. By combining Eqs. (<ref>)–(<ref>) and recalling that the second term of the complex linear response function in the nonsymmetric subsystem decomposition in Eq.
(<ref>) is identical to the linear response functions of subsystem B within the FP approximation, it follows that ∫σ^NSD_1(ω) dω=π/2ϵ_0 c_0N_e^A; ∫σ^NSD_2(ω) dω=π/2ϵ_0 c_0N_e^B , where σ^NSD_1 and σ^NSD_2 denote the contribution to the absorption cross section from the first and second term in Eq. (<ref>), respectively. That is, if an A-dominated transition gains in intensity due to the coupling to excitations in subsystem B, then σ^NSD_1(ω) will take on negative values around poles in subsystem B. Consequently, in these regions of the spectrum, σ^NSD_1(ω) cannot in itself be associated with an absorption spectrum; rather, it becomes imperative to consider the total absorption cross section σ(ω). This discussion is important due to its implications in the context of polarizable embedding (see next section) in which one focuses on the calculation of σ^NSD_1(ω) and, as we have seen, caution is called for in the interpretation of the results of such a calculation.

§ POLARIZABLE EMBEDDING

Having derived the expressions for the direct-product ansatz for the combined system, we will in this section detail the additional steps that lead to the definition of the PE model, following the work of Ángyán for the derivation of the effective embedding operator.<cit.> In particular, the subsystems will now be treated at different levels, where a classical description will be adopted for subsystem B. To reflect this distinction, subsystem A will in this section be referred to as the quantum region and B as the environment. Furthermore, instead of considering the environment as a whole, the individual subsystems constituting the environment, B={b_1,b_2, …,b_N-1}, will be treated separately by decomposing the environment wave function into a product of subsystem contributions, still assuming nonoverlapping subsystem charge densities.
We will assume that the unperturbed (i.e., vacuum) environment subsystem eigenfunctions and eigenenergies {|0_b^(0)⟩,|j_b^(0)⟩} and {E_0_b^(0),E_j_b^(0)}, for b∈B, are known, where superscripts (n) specify the order with respect to the perturbation (see below).

§.§ Working Equations

In the PE model, we invoke a perturbation treatment of all but subsystem A and assume that the environment is only linearly responsive. This corresponds to requiring that Eq. (<ref>) for b∈B is fulfilled only through first order in terms of the electrostatic potentials from the ground states of the other subsystems. Within this approximation, the interaction operator acting on subsystem A takes the form𝒱̂^int= ∑_pq∈A∑_b∈B∑_m∈ b ^M_bZ_m[v_pq(𝐑_m)+∑_rs∈bv_pq,rsD_b,rs^(0)]Ê_pq_𝒱̂^es+∑_pq∈A∑_b∈B∑_rs∈ bv_pq,rsD_b,rs^(1)Ê_pq_𝒱̂^ind ,where an element of the zeroth- and first-order electronic densities of subsystem b are defined asD_b,rs^(0)=⟨ 0_b^(0)|Ê_rs|0_b^(0)⟩ and D_b,rs^(1)=⟨ 0_b^(1)|Ê_rs|0_b^(0)⟩ +⟨ 0_b^(0)|Ê_rs|0_b^(1)⟩, respectively. The effective interaction operator acting on subsystem A consists of contributions from the permanent and induced charge distributions, 𝒱̂^es and 𝒱̂^ind, respectively, of the environment subsystems. In terms of the first-order reduced density and electrostatic potential operators in Eqs. (<ref>) and (<ref>), they read𝒱̂^es = ∑_b∈B∫ρ̂_A^e(𝐫)⟨𝒱̂_b(𝐫)⟩_0_b^(0) d𝐫 , 𝒱̂^ind = ∑_b∈B∫ρ̂_A^e(𝐫) ⟨𝒱̂_b(𝐫)⟩_0_b^(1) d𝐫 ,where superscript "e" signifies that only the electronic part of the operator is included. We have further introduced a shorthand notation for expectation values, e.g.,  ⟨𝒱̂_b(𝐫)⟩_0_b^(0)=⟨ 0_b^(0) | 𝒱̂_b(𝐫)| 0_b^(0)⟩. The 𝒱̂^es operator is straightforwardly constructed from the charge densities of the unperturbed environment subsystems and contains the contributions from both the nuclei and electrons in the environment.
The induction operator requires the first-order densities to be known. Using standard Rayleigh–Schrödinger perturbation theory yields the following first-order perturbation expression for the wave function of the environment subsystems<cit.>∀ b∈B:|0_b^(1)⟩ =-∑_j> 0 |j_b^(0)⟩∫⟨ j_b^(0) | 𝒱̂_b(𝐫)|0_b^(0)⟩/(E_j_b^(0)-E_0_b^(0))×(⟨ρ̂_A(𝐫) ⟩_0_A + ∑_b'∈B\b[⟨ρ̂_b'(𝐫)⟩_0_b'^(0) +⟨ρ̂_b'(𝐫)⟩_0_b'^(1)] ) d𝐫 ,where the expectation value involving subsystem A is over the fully polarized state within the framework of Eq. (<ref>). As pointed out by Stone,<cit.> this expression is however inconsistent with the first-order perturbation analysis: through the coupling to the other environment subsystems in the third term, Eq. (<ref>) contains contributions to infinite order in the electrostatic potential generated by subsystem A. A strict first-order expression can be obtained by neglecting the many-body polarization among the environment subsystems, i.e., removing the first-order term of Eq. (<ref>). Substituting the first-order correction to the wave function in Eq. (<ref>) into 𝒱̂^ind and using the definition of the generalized static polarizability, C_𝐫,𝐫'^b,(0)(ω=0), of the zeroth-order ground states of the environment subsystems, the first-order induction operator can be written as 𝒱̂^ind =- ∑_b∈B∫ρ̂_A^e(𝐫) ∫[∬C_𝐫”,𝐫”'^b,(0)(0)/|𝐫-𝐫”||𝐫'-𝐫”'| d𝐫” d𝐫”' ]×(⟨ρ̂_A(𝐫') ⟩_0_A + ∑_b'∈B\ b[⟨ρ̂_b'(𝐫')⟩_0_b'^(0) +⟨ρ̂_b'(𝐫')⟩_0_b'^(1)] )d𝐫 d𝐫' . In the PE model, the charge distributions of the environment subsystems are represented by multipole expansions rather than densities. To this end, it is expedient to use a 3-dimensional multi-index notation in which a multi-index k=(k_x,k_y,k_z) is an ordered list of nonnegative integers.<cit.> The norm and factorial of a multi-index are defined as |k|=k_x+k_y+k_z and k!=k_x!· k_y!· k_z!, respectively, the sum of two multi-indexes as k± l=(k_x± l_x,k_y± l_y,k_z± l_z), and the multi-index power of a vector as 𝐫^k=x^k_x· y^k_y· z^k_z.
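The multi-index conventions just introduced are compact but easy to misread; a short sketch makes them concrete. The helper names below are our own, chosen for illustration, and the final loop checks the count of Cartesian components per order quoted in the next paragraph.

```python
import numpy as np
from itertools import product
from math import factorial

# Minimal helpers for the 3-dimensional multi-index notation.
def mi_norm(k):              # |k| = k_x + k_y + k_z
    return sum(k)

def mi_fact(k):              # k! = k_x! * k_y! * k_z!
    return factorial(k[0]) * factorial(k[1]) * factorial(k[2])

def mi_power(r, k):          # r^k = x^k_x * y^k_y * z^k_z
    return r[0]**k[0] * r[1]**k[1] * r[2]**k[2]

k, l = (2, 0, 1), (0, 1, 1)
r = np.array([1.0, -2.0, 0.5])

assert mi_norm(k) == 3
assert mi_fact(k) == 2
assert mi_power(r, k) == 0.5                             # 1.0**2 * (-2.0)**0 * 0.5**1
assert tuple(a + b for a, b in zip(k, l)) == (2, 1, 2)   # k + l

# Number of Cartesian components of order n = |k|: (n+1)(n+2)/2.
for n in range(5):
    comps = [c for c in product(range(n + 1), repeat=3) if sum(c) == n]
    assert len(comps) == (n + 1) * (n + 2) // 2
```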
Using this notation, a Taylor series expansion of the electrostatic potential operator can be written as<cit.>1/|𝐑_j-𝐑_i|= ∑_|k|=0^∞(-1)^|k|/k!(∇_j^k 1/|𝐑_j-𝐑_o|)(𝐑_i-𝐑_o)^k ,where 𝐑_o is the expansion point and the summation over k runs over the (|k|+1)(|k|+2)/2 Cartesian components. The multipole form of the potential operator involves components of the Cartesian interaction tensors defined as derivatives of the potential operator<cit.> T_ij^(k) = ∂^k_j1/|𝐫_j-𝐫_i| ;∂^k_j = ∂^|k|/∂ x_j^k_x∂ y_j^k_y∂ z_j^k_z ,where the superscript multi-index notation should not be confused with the perturbation order. Note that in writing Eq. (<ref>), we have used the symmetry properties of the interaction tensors, that is, T_ij^(k)=(-1)^|k|T_ji^(k). Applying this to the electrostatic potential operators for the environment subsystems yields𝒱̂_b(𝐑_b')=∑_|k|=0^∞(-1)^|k|/k!T_bb'^(k)M̂_b^(k)(𝐑_b) ,where the expansion point 𝐑_b is chosen to reside within the charge density of subsystem b. A Cartesian component of the |k|'th-order multipole moment operator acting on subsystem b is given byM̂_b^(k)(𝐑_b) = ∫ρ̂_b(𝐫)(𝐫-𝐑_b)^kd𝐫=∑_m∈b^M_bZ_m(𝐑_m-𝐑_b)^k+∑_rs∈bm_rs^(k)(𝐑_b)Ê_rs ,where the associated electronic multipole integral is given bym_rs^(k)(𝐑_b)=-∫ϕ_r^*(𝐫)(𝐫-𝐑_b)^kϕ_s(𝐫) d𝐫 .The expectation value of Eq. (<ref>) corresponds to an electric multipole moment, M^(k)_b(𝐑_b), of subsystem b. For instance, letting |k|=0 gives the charge, while |k|=1 covers the three Cartesian components, i.e., (1,0,0), (0,1,0) and (0,0,1), of a dipole moment. Only the lowest-order nonvanishing multipole moment of the permanent charge distribution is independent of the choice of origin (here 𝐑_b). For the sake of brevity, we will suppress this explicit origin dependence in the following. Finally, substitution of Eq.
(<ref>) into 𝒱̂^es yields the multipole-expanded form of the electrostatic interaction operator𝒱̂^es=∑_pq∈A∑_b∈B∑_|k|=0^∞(-1)^|k|/k!M_b^(k)t_pq^(k)(𝐑_b)Ê_pq ,where the electrostatic potential integral is defined over the interaction tensors in Eq. (<ref>) ast_pq^(k)(𝐑_b)=-∫ϕ_p^*(𝐫_i)T_bi^(k)ϕ_q(𝐫_i) d𝐫_i, where the index i refers to an electronic coordinate. Comparing Eq. (<ref>) to the nonexpanded form in Eq. (<ref>), it is clear that the two-electron integrals have been replaced by one-electron integrals for subsystem A over the interaction tensors multiplied by the permanent multipole moments of the environment subsystems. We proceed in the same way for the induction operator in Eq. (<ref>) by replacing all occurrences of the potential operator by its Taylor series expansion. We further define a component of the |k|'th-order induced multipole moment belonging to an environment subsystem b asM̅_b^(k)(𝐑_b) =∫⟨ρ̂_b(𝐫)⟩_0_b^(1) (𝐫-𝐑_b)^k d𝐫= ⟨ 0_b^(1)| M̂_b^(k)|0_b^(0)⟩+ ⟨ 0_b^(0)| M̂_b^(k)|0_b^(1)⟩ ,where we have introduced the bar notation to distinguish induced multipole moments from their permanent counterparts. By contrast to the permanent moments, there are no induced monopoles and the nuclear contributions vanish irrespective of the multipole order. This is a result of the intermediate normalization of the corrections to the wave functions of the environment subsystems. As detailed in Appendix <ref>, the induction part of the interaction operator can be written in the multipole-expanded form as𝒱̂^ind =-∑_b∈B∑_|k|=1^∞1/k!F̂_A^e,(k)(𝐑_b)M̅_b^(k)= -∑_b∈B∑_|k|=1^∞∑_|l|=1^∞1/k!· l!F̂_A^e,(k)(𝐑_b)P_b^(k,l)×(⟨F̂_A^(l)(𝐑_b)⟩_0_A+∑_b'∈B\ b[⟨F̂_b'^(l)(𝐑_b)⟩^(0)_0_b'+⟨F̂_b'^(l)(𝐑_b)⟩_0_b'^(1)] ) ,where the two alternative expressions arise by expanding Eqs. (<ref>) and (<ref>), respectively. P_b^(k,l) are static electronic polarizabilities of subsystem b, analogous to that in Eq.
(<ref>), defined asP_b^(k,l)= ∑_j_b≠ 0_b[⟨ 0_b^(0) | M̂_b^(k)| j_b^(0)⟩⟨ j_b^(0) | M̂_b^(l)|0_b^(0)⟩/(E_j_b^(0)-E_0_b^(0))+⟨ 0_b^(0) | M̂_b^(l)| j_b^(0)⟩⟨ j_b^(0) | M̂_b^(k)|0_b^(0)⟩/(E_j_b^(0)-E_0_b^(0))] ,recalling that this definition employs the traced multipole moment operators. Furthermore, F̂_A^(k)(𝐑_b) is the (|k|-1)'th-order electric-field derivative operator, which describes the field derivative produced by subsystem A at point 𝐑_bF̂_A^(k)(𝐑_b)=-∑_n∈A^M_AZ_nT_nb^(k)_F_A^n,(k)(𝐑_b)+(-1)^|k|+1∑_pq∈A t_pq^(k)(𝐑_b)Ê_pq_F̂_A^e,(k)(𝐑_b) ,defining, as is customary, the field (|k|=1) as minus the gradient of the electrostatic potential. The operator has been partitioned into nuclear and electronic contributions. For the environment subsystems, we invoke a multipole expansion of the field operator by analogy to Eq. (<ref>). Accordingly, we can write the zeroth- and first-order electric-field derivatives of the environment subsystems in Eq. (<ref>), in terms of static and induced multipole moments, respectively, as⟨F̂_b'^(l)(𝐑_b)⟩_0_b'^(0) =∑_|k|=0^∞(-1)^|k|+1/k!T^(k+l)_b'bM_b'^(k) ,⟨F̂_b'^(l)(𝐑_b)⟩_0_b'^(1) =∑_|k|=1^∞(-1)^|k|+1/k!T^(k+l)_b'bM̅_b'^(k) .Note the difference between the lower limits of the two summations. Finally, by equating the right-hand sides of Eqs. (<ref>) and (<ref>), we obtain the equation determining the induced multipole moments∀ b∈B:M̅_b^(k) = ∑_|l|=1^∞1/l!P_b^(k,l)F^tot(l)(𝐑_b)= ∑_|l|=1^∞1/l!P_b^(k,l)(⟨F̂_A^(l)(𝐑_b)⟩_0_A +∑_b'∈B\b[⟨F̂_b'^(l)(𝐑_b)⟩_0_b'^(0)+∑_|m|=1^∞(-1)^|m|+1/m!T_b'b^(m+l)M̅_b'^(m)]) ,where F^tot(l)(𝐑_b) is the total (|l|-1)'th-order electric-field derivative acting on subsystem b. As follows from the second equality, it consists of the physical electric-field contributions from the nuclei and electrons in subsystem A and the permanent multipoles of the other environment subsystems, collectively denoted F^(l)(𝐑_b), as well as the contribution from the remaining first-order induced multipole moments.
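The interaction tensors T^(k) appearing throughout these expressions are simply successive derivatives of 1/|𝐫_j−𝐫_i|. A quick numerical check, assuming the familiar closed form of the second-order (dipole–dipole) tensor, validates it against a finite-difference Hessian of the potential; the coordinates are arbitrary illustration values.

```python
import numpy as np

def potential(r_j, r_i):
    return 1.0 / np.linalg.norm(r_j - r_i)

def T2_analytic(r_j, r_i):
    # Second-order interaction tensor: T^(2) = (3 R R^T - R^2 I) / R^5.
    R = r_j - r_i
    R2 = R @ R
    return (3.0 * np.outer(R, R) - R2 * np.eye(3)) / R2**2.5

r_i = np.array([0.0, 0.0, 0.0])
r_j = np.array([1.0, 2.0, -1.5])

# Central finite-difference Hessian of 1/|r_j - r_i| with respect to r_j.
h = 1e-4
T2_fd = np.empty((3, 3))
for a in range(3):
    for b in range(3):
        ea, eb = np.eye(3)[a], np.eye(3)[b]
        T2_fd[a, b] = (potential(r_j + h*ea + h*eb, r_i)
                       - potential(r_j + h*ea - h*eb, r_i)
                       - potential(r_j - h*ea + h*eb, r_i)
                       + potential(r_j - h*ea - h*eb, r_i)) / (4 * h * h)

assert np.allclose(T2_fd, T2_analytic(r_j, r_i), atol=1e-6)
```

Since |k| = 2 is even, T^(2) is also symmetric under exchange of the two sites, consistent with T_ij^(k) = (−1)^|k| T_ji^(k).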
Hence, the first term describes the mutual coupling between subsystem A with all environment subsystems, whereas the second and third terms account for the mutual polarization between the environment subsystems.

In practice, the multipole expansions in Eqs. (<ref>) and (<ref>) are terminated at a finite order K_s, and to improve the convergence properties of the multipole representation, distributed multipole expansions (using S=∑_b∈BS_b to denote the total number of expansion points) are used instead of one-center expansions. For the expansion over induced moments in Eqs. (<ref>) and (<ref>), the dipole approximation is introduced and only the dipole–dipole polarizability tensor is taken into account. In this case, Eq. (<ref>) gives a set of coupled equations determining the induced dipole moments<cit.>μ̅_s(0)= ∑_s'=1^Sℛ_ss'(0)𝐅(𝐑_s') ,where the polarizability tensors for the individual sites have been replaced by a (3S×3S)-dimensional classical linear response matrix (also known as the relay matrix) given byℛ(ω) =([ α_1(ω)^-1 -𝐓^(2)_12 ⋯ -𝐓^(2)_1S; -𝐓^(2)_21 α_2(ω)^-1 ⋱ ⋮; ⋮ ⋱ ⋱ -𝐓^(2)_(S-1)S; -𝐓^(2)_S1 ⋯ -𝐓^(2)_S(S-1) α_S(ω)^-1; ])^-1.This matrix holds the inverse of the distributed electronic dipole–dipole polarizability tensors on the diagonal and second-order interaction tensors in the off-diagonal blocks. Upon contraction with unit vectors, ℛ(ω) models the dipole–dipole polarizability of the environment. By combining Eqs. (<ref>) and (<ref>) in truncated and distributed forms, we finally obtain the embedding operator defining the PE model:v̂_PE= ∑_pq∈A∑_s=1^S∑_|k|=0^K_s(-1)^|k|/k!M_s^(k)t_pq^(k)(𝐑_s)Ê_pq-∑_s=1^Sμ̅_s,α(0)F̂^e_A,α(𝐑_s) . The induced dipoles, and in turn the embedding operator, depend on the wave function of subsystem A through the electric fields. In other words, upon averaging over the environment wave functions, the Hamiltonian turns into a nonlinear effective operator.
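The relay-matrix construction can be sketched for the smallest nontrivial case of two polarizable sites. The site positions, isotropic polarizabilities, and external fields below are hypothetical numbers; the final checks confirm that solving with the relay matrix is equivalent to the mutual-polarization condition μ_s = α_s(𝐅_s + Σ_s' 𝐓^(2)_ss' μ_s').

```python
import numpy as np

def T2(Ri, Rj):
    # Dipole-dipole interaction tensor between sites i and j.
    R = Rj - Ri
    R2 = R @ R
    return (3.0 * np.outer(R, R) - R2 * np.eye(3)) / R2**2.5

# Two polarizable sites with hypothetical isotropic polarizabilities (a.u.).
sites = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 3.0])]
alphas = [1.5 * np.eye(3), 0.8 * np.eye(3)]
F = np.concatenate([np.array([0.0, 0.0, 0.10]),   # field at site 1
                    np.array([0.0, 0.0, 0.05])])  # field at site 2

# Classical response (relay) matrix: inverse polarizabilities on the
# diagonal, minus dipole-dipole interaction tensors off the diagonal.
A = np.block([[np.linalg.inv(alphas[0]), -T2(sites[0], sites[1])],
              [-T2(sites[1], sites[0]), np.linalg.inv(alphas[1])]])
mu = np.linalg.solve(A, F)            # induced dipoles: mu = R(0) F

# Self-consistency: each dipole responds to the external field plus the
# field of the other induced dipole.
mu1, mu2 = mu[:3], mu[3:]
assert np.allclose(mu1, alphas[0] @ (F[:3] + T2(sites[0], sites[1]) @ mu2))
assert np.allclose(mu2, alphas[1] @ (F[3:] + T2(sites[1], sites[0]) @ mu1))
```

Inverting (or factorizing) the relay matrix once and reusing it corresponds to solving the coupled induced-dipole equations to all orders in the mutual site–site polarization.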
§.§ Response theory framework

The extension of the PE model within a quantum-mechanical response framework usually proceeds by assuming the classical description of the environment from the outset, i.e., starting from Eq. (<ref>) and the associated energy functional (see, e.g., Ref. ). For the present analysis, we will instead begin from the response expressions derived in Sec. <ref> and outline the additional assumptions that lead to the expressions within the PE framework. As briefly alluded to in Sec. <ref>, the differentiated treatment in the PE model is achieved by a perturbation analysis of the quantities in the linear response function, although, as we will presently discuss, the choice of truncation is not fully coherent from a perturbation theory point of view. Since special attention is given to subsystem A, and the sole purpose of subsystem B in this context is to obtain a realistic description of the properties of A, the order counting will be performed on the effective subsystem A quantities. As a first step toward the PE model, one includes terms in the pure subsystem A blocks and vectors through second order (in the sense of Eq. (<ref>)), using a first-order corrected wave function for the environment that is normalized through second order. The pure B blocks as well as the coupling blocks are evaluated only through lowest nonvanishing order. That is, only the zeroth-order contribution to the wave function of the environment subsystem is included. For the electronic Hessian, this implies that the 𝐄_B^[2] matrix must be known through zeroth order and the 𝐄_AB^[2] matrix through first order.
Accordingly, the electronic Hessian and metric matrices are approximated as𝐄^[2]= [ 𝐄_A^[2](0,1,2) 𝐄_AB^[2](1);𝐄_BA^[2](1) 𝐄_B^[2](0) ]; 𝐒^[2]= [ 𝐒_A^[2](0,1,2) 0;0 𝐒_B^[2](0) ].Although the pure B block is treated only through zeroth order, its effect on excitations in A is correct through second order as can be seen from the resulting effective electronic Hessian for subsystem A 𝐄_A^[2](II)= 𝐄_A^[2](0,1,2)_I-𝐄_AB^[2](1)(𝐄_B^[2](0)-ω𝐒_B^[2](0))^-1𝐄_BA^[2](1)_II .This order truncation thus provides excitation energies of A-dominated transitions that are consistent through second order. The explicit expressions for the terms are given byI =⟨ 0_A | [𝐐_A,[ℋ̂_A+𝒱̂^int,𝐐_A^†]]|0_A⟩ , II = 𝐕_𝒱̂^A_𝐫C_𝐫,𝐫'^B,(0)(ω)𝐕_𝒱̂^A_𝐫'^† ,using Eq. (<ref>) for a single environment subsystem and implying that |0_A⟩ has been derived from the effective Hamiltonian including Eq. (<ref>) rather than the full operator in Eq. (<ref>). The metric matrices retain their structures in Eq. (<ref>). Based on the chosen truncation, the expression for the effective property gradient for subsystem A becomes𝐕_A,α^ω= 𝐕_A,α^ω(2)-𝐄_AB^[2](1)(𝐄_B^[2](0)-ω𝐒_B^[2](0))^-1𝐕_B,α^ω(0) .The analogy to SOPPA is thus imperfect, since keeping only the zeroth-order correction to the B part of the property gradients means that the effective transition moments, and thereby the linear response function, are not consistent through second order.

To arrive at the PE model, we further need to decompose the environment into individual subsystems and invoke a truncated multipole representation of the interaction operator with respect to the environment subsystems. For practical feasibility but without theoretical justification, the lowest-order approximation invoked for the combined environment is also employed for all individual subsystems constituting the environment, meaning that the ground-state polarization among the environment subsystems, otherwise implied in 𝐄^[2](0)_B, is neglected.
Taking the simplest two-subsystem environment as an example, the structure of the PE analog of II in Eq. (<ref>) then takes the form illustrated in Fig. <ref>. In particular, upon rewriting the matrix resolvent according to Eq. (<ref>), contraction with the property gradients and repeated use of Eq. (<ref>) on the resulting subsystem blocks, we recognize the series expansions of the corresponding blocks in the relay matrix. For instance, the first block can be rewritten as𝐕_μ̂^b_1^(0)† (𝐄_b_1^[2](0)-ω𝐒_b_1^[2](0)-𝐄_b_1b_2^[2](1)(𝐄_b_2^[2](0)-ω𝐒_b_2^[2](0))^-1𝐄_b_2b_1^[2](1))^-1𝐕_μ̂ ^b_1^(0)=(α_b_1(ω)^-1+𝐓^(2)_b_1b_2α_b_2(ω)𝐓^(2)_b_2b_1)^-1 ,that is, in terms of the relay matrix in Eq. (<ref>). Rewriting the second term of Eq. (<ref>) in a similar manner, we finally obtain the PE analogs of the effective electronic Hessian and effective property gradient defined in Ref. :𝐄_A^[2],PE=⟨ 0_A| [𝐐_A,[ℋ̂_A+v̂_PE,𝐐_A^†]]| 0_A⟩-∑_s,s'=1^S⟨ 0_A| [𝐐_A,𝐅̂^e_A(𝐑_s)]| 0_A⟩ℛ_ss'(ω)⟨ 0_A| [𝐐_A^†,𝐅̂^e_A(𝐑_s')]| 0_A⟩ , 𝐕_A,α^ω,PE=⟨ 0_A| [𝐐_A,V̂^ω_A,α]| 0_A⟩ -∑_s,s'=1^S⟨ 0_A| [𝐐_A,𝐅̂^e_A(𝐑_s)]| 0_A⟩ℛ_ss'(ω)𝐞_α ,taking μ̂_α as the perturbation and using 𝐞_α to denote a unit vector in the Cartesian α direction. The above expressions define the environmental effects included in the PE model within a linear response framework. Specifically, Eq. (<ref>) defines the static (through v̂_PE) and dynamic reaction field effects (second term), while the second term of Eq. (<ref>) defines the effective external field effect.<cit.> In this way, we have shown the transition from a full quantum-mechanical description of the linear response of the combined system to the differentiated subsystem treatment in the PE model. Based on the insight from the theoretical analysis in Sec.
<ref>, these effective quantities can then be used in the first term of the SD (NSD) to obtain the subsystem A contributions to molecular (transition) properties.

A commonly adopted possibility to further reduce the computational complexity of the calculations is to assume frequency-independent environment subsystems, corresponding to imposing 𝐒_B^[2]=0. This zero-frequency (ZF) approximation offers significant simplifications: (i) the nonlinearity of the effective electronic Hessian is lost such that Eq. (<ref>) reduces to a standard generalized eigenvalue problem, (ii) the renormalization factor, otherwise needed in Eq. (<ref>), becomes unity because the excitation is restricted to subsystem A in this approximation, and (iii) the additional zeroth-order poles in the first term of Eq. (<ref>) corresponding to the FP approximation are removed. According to previous studies,<cit.> the zero-frequency limit is a good approximation at off-resonant and optical frequencies, where the larger excitation energies of the environment subsystems imply that their frequency dispersion is typically small. In such cases, the dynamical response of the environment to the excitations in subsystem A is captured reasonably well by the static limit.

§ NUMERICAL ILLUSTRATION

To illustrate the basic features of the response of a combined system to a perturbing external field and the importance of the various intersubsystem interactions, we will in this section perform a numerical inspection of the working expressions presented in Sec. <ref>. For this purpose, we consider a simplified six-level model (SLM) for a para-nitroaniline (pNA)–water complex and its linear response to a uniform electric-field perturbation.
In addition to the respective ground states, the SLM also includes the first and second singlet excited states for pNA—these are the nπ^* state and the intramolecular amino-to-nitro charge-transfer transition referred to as ππ^*—and the first and third singlet excited states for water—these are states 1B_1 and 1A_1, respectively, using the symmetry labels referring to the irreducible representations of the C_2v point group of the parent molecule. The manifold of states in the SLM is depicted in Fig. <ref>, defining pNA and water as subsystem A and B, respectively. The set of monomer parameters and electronic couplings reported in Tables <ref> and <ref> have been obtained at the TD-DFT level of theory employing CAM-B3LYP<cit.>/aug-cc-pVDZ<cit.> in the presence of the ground-state-frozen embedding potential of the other system (i.e., corresponding to the FP approximation), as obtained from a PE calculation. In other words, the second term in Eq. (<ref>) is excluded in the response calculation. The embedding potentials consisted of atom-centered permanent electric multipoles up to quadrupoles and anisotropic electric dipole–dipole polarizabilities and were computed according to the LoProp<cit.> scheme using DALTON and the Loprop-for-Dalton Python script.<cit.> The calculations were performed using a development version of the DALTON program<cit.> that contains the implementation of electronic couplings between transition densities.<cit.> The nuclear configuration of the complex has been taken from a molecular dynamics simulation.<cit.> We use this polarized basis as an approximate representation for the eigenvectors of the subsystem electronic Hessians, such that the diagonal blocks of the full Hessian are diagonal.
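The structure of such a calculation can be mimicked with a minimal sketch: two transitions per subsystem in the diagonal (polarized) basis, a symmetric coupling block, and a scalar property gradient. The numbers below are hypothetical stand-ins, not the actual SLM parameters; the check verifies that the full complex response function equals the sum of the dressed subsystem A term and the FP subsystem B term of the nonsymmetric decomposition.

```python
import numpy as np

# Toy four-state model (two transitions per subsystem), identity metric.
# All numbers are hypothetical, chosen only to illustrate the decomposition.
E_A = np.diag([0.15, 0.21]); V_A = np.array([0.9, 0.1])
E_B = np.diag([0.33, 0.38]); V_B = np.array([0.4, 0.7])
C = 0.02 * np.ones((2, 2))          # intersubsystem coupling block

w, gamma = 0.20, 0.004
z = w + 1j * gamma

# Full complex response function, -V^T (E^[2] - z S^[2])^(-1) V, with S = 1.
E = np.block([[E_A, C], [C.T, E_B]])
V = np.concatenate([V_A, V_B])
full = -V @ np.linalg.solve(E - z * np.eye(4), V)

# Nonsymmetric decomposition: dressed subsystem A term plus the
# frozen-polarization (FP) response of subsystem B.
GB = np.linalg.inv(E_B - z * np.eye(2))        # subsystem B resolvent
effE = E_A - C @ GB @ C.T                      # effective A Hessian
effV = V_A - C @ GB @ V_B                      # effective A gradient
# Note: plain transpose (no complex conjugation) on effV, matching the
# primed conjugate-transpose convention of the nonsymmetric decomposition.
term1 = -effV @ np.linalg.solve(effE - z * np.eye(2), effV)
term2 = -V_B @ GB @ V_B                        # FP subsystem B response

assert np.isclose(full, term1 + term2)
```

Scanning ω through the poles of the toy model reproduces, qualitatively, the features discussed below: the first term carries both the physical A-dominated poles and the unphysical FP poles of subsystem B.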
As expected from the strengths of the leading-order transition dipoles and their relative orientations (the charge-transfer transition is directed along the x-axis), only the electronic coupling between the ππ^* and 1A_1 states is significant whereas the nπ^* and 1B_1 states will essentially be unaffected. However, the state mixing still remains small because their energy difference is significantly larger than their electronic coupling. Therefore, to better illustrate the different aspects of the response of a combined system, the coupling blocks of the electronic Hessian have been scaled by a factor of 12. The largest absolute intersubsystem coupling element is then equal to 0.0212 a.u., which is smaller than the relevant difference between the excitation energies in the subsystems by a factor of ∼9.

Fig. <ref>a shows the isotropic electric dipole–dipole polarizability, Eq. (<ref>), of the pNA–water complex within the SLM. The excitation energies of the full system (vertical dotted lines) can be identified as the poles of the linear response function. In the frequency region around the two lowest poles (ω≈ 0.10-0.20 a.u.), the isotropic polarizability is dominated by the α_xx component of the tensor with a dispersion that is in turn dictated by the charge-transfer transition (second pole). This leads to an apparent absence of a pole at the first excitation in subsystem A, but this is merely a consequence of the nπ^*-transition being nearly electric-dipole forbidden and close in energy to the intense ππ^*-transition. The symmetric decomposition of the polarizability according to Eq. (<ref>) into subsystem A and B contributions is shown in Fig. <ref>b. Note that while the diagonal elements of the polarizability of the combined system in the static limit are guaranteed to be positive, the same does not hold for the individual subsystem contributions. As exemplified in Fig. <ref> by the subsystem A contribution to α_yy (the first term in Eq.
(<ref>)), this situation does indeed occur in the present case. As seen in Eq. (<ref>), there are two contributions to the modified property gradient for subsystem A. The contribution from the first term, i.e., the bare gradient 𝐕^ω_A,β, is guaranteed to give a positive contribution to α_yy for subsystem A but, due to the large response in subsystem B, the second term in the effective property gradient becomes dominant and leads to an overall negative value for α_yy(0) of subsystem A. Furthermore, we note that both the subsystem A and B contributions in the symmetric decomposition contain poles at all the excitation energies of the combined system, also referred to as the physical excitations. As we discussed previously, this implies that transition moments, in contrast to excitation energies, cannot be determined from any single one of the terms. With the nonsymmetric decomposition (Eq. (<ref>), Fig. <ref>c), on the other hand, all the physical poles are collected in the first term, and it is thus the proper choice when determining residues for the transitions mainly located in subsystem A. In addition to the physical poles, however, there are unphysical zeroth-order poles in both the first and second terms of Eq. (<ref>). Specifically, they contain poles at the excitation energies of subsystem B within the FP approximation, i.e., where the environment polarization is fixed during the response calculation. This inclusion of both physical and unphysical poles in the first term of Eq. (<ref>) becomes particularly apparent in the frequency region close to the fourth excitation in Fig. <ref>c (see red solid line at ω≈ 0.35 a.u.).

The excitation energies and associated one-photon transition strengths are reported in Table <ref> for the two lowest transitions in the model system, i.e., those predominantly localized in subsystem A. First, the results provide clear evidence for the equivalence between the properties obtained from the decomposed subsystem expressions in Eqs.
(<ref>) and (<ref>) and from the consideration of the full system expressed in terms of Eq. (<ref>). We further consider various approximate models that are defined according to which terms are retained in the response expressions. The impact of the renormalization factor defined in Eq. (<ref>) and appearing in Eq. (<ref>) depends on the degree of delocalization of the given transition, and its neglect (denoted by "-renorm" in Table <ref>) would lead to an overestimation of transition strengths. The effective field strengths experienced by subsystem A in the presence of subsystem B can be different for different directions as a consequence of the anisotropy of the polarizability of B and the relative orientation of the interacting subsystems, as seen by comparing to the assumption that only subsystem A interacts with the external field (denoted by "-EEF" in Table <ref>). For excitations in subsystem A far from resonances in B (the environment), as in the present case, the impact of renormalization is smaller than that of the EEF effect, as expected from the relative distance dependency of the two contributions.

As discussed in Sec. <ref>, the various polarizable embedding models typically assume the effective electronic Hessian of subsystem A to be frequency-independent, thereby losing the nonlinearity of the linear response equations and the poles associated with excitations predominantly in subsystem B. The zero-frequency limit of the inverse of the isotropic polarizability of the combined system is illustrated in Fig. <ref> along with the full frequency-dependent counterpart. Associated excitation energies and transition moments are given in Table <ref>. As anticipated from the weak dispersion of the real polarizability for subsystem B at the resonance frequencies in subsystem A (see Fig.
<ref>c), the neglect of transitions in subsystem B leads to only small changes in the transition properties of the A-dominated excitations. We note that the ZF approximation is analogous to the familiar adiabatic approximation in TD-DFT, where the exchange–correlation kernel is assumed to be frequency independent and hence leads to a linear eigenvalue problem with solutions only at Kohn–Sham one-electron excitations.<cit.> Double- and higher-electron excitations and their effects on the Kohn–Sham single excitations are thus neglected in adiabatic TD-DFT,<cit.> as are the excitations in subsystem B in our case. As discussed in Sec. <ref>, the coupling of subsystem excitations gives rise to intensity borrowing. To illustrate this effect, we report in Fig. <ref>a the linear absorption cross section defined in Eq. (<ref>) for the full system together with that associated with subsystem A, i.e., the first term of Eq. (<ref>). In Fig. <ref>b, the corresponding cross sections for the subsystems within the FP approximation are reported. First, we note that upon integrating the absorption cross section for the full system (black solid line in Fig. <ref>a) across the full frequency range, we obtain a reference value of I_AB=0.0829 a.u. Due to the very limited description of the electronic structure of the pNA–water complex by means of the SLM, this value is far from the exact value of 11.81 a.u. obtained from the conservation law in Eq. (<ref>) for a system with 82 electrons. This discrepancy is of no concern here, however, since the key point is that the value of the integrated cross section is identical to the corresponding summed result for the two subsystems within the FP approximation (I_A^FP = 0.0606 and I_B^FP = 0.0223 a.u., respectively). Let us now examine the nonsymmetric decomposition of the interacting system with individual cross sections σ_1^NSD(ω) and σ_2^NSD(ω) for the two terms in Eq. (<ref>), respectively.
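The conservation of the integrated cross section under intensity borrowing can be illustrated numerically. The following is a minimal sketch (not the paper's actual calculation): it builds σ(ω) from Lorentzian absorption lineshapes, uses the identity I = π/(2ϵ_0c_0) Σ_n f_0n, and checks that redistributing oscillator strength between two bands leaves the integral unchanged. The band positions and strengths are invented for illustration.

```python
import numpy as np

# Sketch: with Lorentzian lineshapes, the integrated absorption cross
# section depends only on the summed oscillator strengths,
#   I = pi/(2*eps0*c0) * sum_n f_0n,
# so intensity borrowing redistributes area without changing the total.
# Band positions/strengths are invented for illustration. Atomic units.
eps0 = 1.0 / (4.0 * np.pi)  # vacuum permittivity (a.u.)
c0 = 137.035999             # speed of light (a.u.)

def cross_section(w, bands, gamma):
    """sigma(w) built from the absorption lineshapes A_n(+/-w)."""
    sigma = np.zeros_like(w)
    for wn, fn in bands:
        Tn = 3.0 * fn / (2.0 * wn)  # isotropic transition strength from f_0n
        A_plus = gamma / ((wn - w) ** 2 + gamma**2)
        A_minus = gamma / ((wn + w) ** 2 + gamma**2)
        sigma += w / (3.0 * eps0 * c0) * Tn * (A_plus - A_minus)
    return sigma

def integrate(y, x):
    """Trapezoidal rule (avoids NumPy-version issues with trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w = np.linspace(0.0, 60.0, 2_000_000)
gamma = 0.005

# Uncoupled bands vs. bands after a hypothetical intensity borrowing that
# moves 0.01 units of oscillator strength from the second band to the first.
bands_fp = [(0.20, 0.05), (0.35, 0.03)]
bands_coupled = [(0.20, 0.06), (0.35, 0.02)]

I_fp = integrate(cross_section(w, bands_fp, gamma), w)
I_coupled = integrate(cross_section(w, bands_coupled, gamma), w)
I_exact = np.pi / (2.0 * eps0 * c0) * 0.08  # same total f in both cases

print(I_fp, I_coupled, I_exact)  # all three agree to within ~0.1%
```

The near-equality of the three numbers mirrors the observation in the text that the summed FP cross sections reproduce the integrated cross section of the coupled system, even though coupling redistributes intensity between bands.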
These two individual cross sections obey their own respective conservation laws, as expressed in Eq. (<ref>). Since the nonsymmetric decomposition is constructed such that the second term corresponds exactly to subsystem B within the FP approximation, the cross section σ_2^NSD(ω) is identical to the red solid line in Fig. <ref>b. Finally, we include in Fig. <ref> also the partially integrated absorption cross section for two separate finite energy intervals, shown as gray-shaded areas in the figure. In agreement with the increase in transition strength found in Table <ref>, the lowest band, dominated by the ππ^* state, is intensified upon coupling the excitations in the two subsystems. This gain in intensity in one part of the spectrum is counteracted by a reduction of the intensity in another part, and, in the nonsymmetric decomposition, this will be seen only in σ_1^NSD(ω). Since the lowest subsystem A-dominated band (second transition) couples effectively only to the subsystem B-dominated 1A_1 band (fourth transition), there is an intensity borrowing from the latter to the former. As a consequence, σ_1^NSD(ω) takes on negative values in the region of the fourth transition (ω≈ 0.35 a.u.), as seen in Fig. <ref>a (green solid line). Accordingly, in frequency regions dominated by transitions in subsystem A, σ_1^NSD(ω) can be associated with an absorption spectrum, whereas this cannot readily be done in frequency regions that include transitions in subsystem B.

§ SUMMARY AND CONCLUSIONS

In this work, we have provided a rigorous derivation of the various contributing terms appearing in linear response theory for a quantum molecular system embedded in a polarizable environment.
The origin of the three distinct types of mechanisms for the environmental effects within polarizable embedding—the static and dynamic environment responses entering the intra- and intersubsystem blocks of the electronic Hessian, respectively, as well as the effective external field effect modifying the property gradient—follows in a straightforward manner from a subsystem partitioning of a quantum-mechanical direct-product treatment of the response of the entire system. In particular, the effective external field effect is a consequence of the direct interaction between the environment and the probing external field and leads to the definition of effective subsystem properties. A crucial point is which decomposed form of the response function ought to be used in a given context. In the present theoretical analysis, we have demonstrated the basic features of two alternative subsystem decompositions, which clarify such discussions and highlight potential issues in defining subsystem contributions to response and transition properties. Our first decomposition, provided in Eq. (<ref>), treats the subsystems on an equal footing and results in two linear response function terms that are related by a permutation of the subsystem indices. For that reason, we have denoted this decomposition scheme as symmetric. This is the natural choice for computing subsystem contributions to molecular properties, but, as already shown in the framework of subsystem DFT,<cit.> the coupling of the subsystems manifests itself in that each subsystem contribution contains the poles of the entire system—a feature that emphasizes the approximate nature of our picture of "localized" transitions. As made clear in the present work, transition strengths for a subsystem-dominated excitation can therefore not be found from a residue analysis of the symmetric subsystem linear response function. Our second decomposition provided in Eq.
(<ref>) treats, on the other hand, the subsystems on an unequal footing and results in two linear response function terms that are both symmetric with respect to the left and right property gradients. This is an unnatural choice for the development of polarizable embedding models because the first term of the response function (describing the response properties of the chromophore of interest) contains poles not only from all transitions in the fully interacting system but also from the ground-state polarized, but otherwise uncoupled, environment. It is demonstrated that the linear response function in the nonsymmetric decomposition lends itself to a determination of transition strengths, but only after taking proper account of the renormalization of the subsystem excitation vectors. However, the renormalization factor is not accessible within the framework of polarizable embedding because it cannot be rewritten in terms of the response kernel of the environment. This complication can be avoided in practice by assuming the static limit for the environment response or by turning to the framework of complex linear response theory. We have shown that the integrated absorption cross sections of the two terms in the symmetric decomposition of the linear response function are preserved independently of one another, and, as a consequence, it is demonstrated (also in the numerical example of the water–pNA dimer) that subsystem absorption cross sections (and likewise oscillator strengths) will take on negative values if intersubsystem intensity borrowing takes place.

N. H. L. thanks J. Oddershede (University of Southern Denmark, Odense, Denmark) for helpful discussions, and the Carlsberg Foundation for a postdoctoral fellowship (Grant No. CF15-0792). P. N. acknowledges financial support from the Swedish Research Council (Grant No. 621-2014-4646). J. K.
thanks the Danish Council for Independent Research (the Sapere Aude program) and the Villum Foundation for financial support. Computation/simulation for the work described in this paper has been supported by the DeIC National HPC Center, University of Southern Denmark (SDU).

§ DERIVATION OF EQ. (<REF>)

In this appendix, we show how to derive Eq. (<ref>). The diagonal representation of the complex linear response function is

-⟨⟨μ̂_α^-ω;μ̂_β^ω⟩⟩ = ∑_n>0 [ T_αβ^0n/(ω_n-(ω+iγ)) + T_βα^0n/(ω_n+(ω+iγ)) ],

where we have used the definition of the transition strengths in Eq. (<ref>) as well as the relation between eigenvectors for paired eigenvalues, i.e., 𝐗_n=[ 𝐗_n^1 𝐗_n^2 ] and 𝐗_-n=[ 𝐗_n^2* 𝐗_n^1* ].<cit.> Introducing the dispersion and absorption lineshape functions

𝒟_n(±ω) = (ω_n∓ω)/((ω_n∓ω)^2+γ^2) ,  𝒜_n(±ω) = γ/((ω_n∓ω)^2+γ^2) ,

the real and imaginary components of the complex linear response function can be written as

-Re[⟨⟨μ̂_α^-ω;μ̂_β^ω⟩⟩] = ∑_n>0 [ T_αβ^0n 𝒟_n(ω) + T_βα^0n 𝒟_n(-ω) ],
-Im[⟨⟨μ̂_α^-ω;μ̂_β^ω⟩⟩] = ∑_n>0 [ T_αβ^0n 𝒜_n(ω) - T_βα^0n 𝒜_n(-ω) ].

The integrated absorption cross section can therefore be written as

I = ∫_0^∞ σ(ω) dω = (1/2) ∫_-∞^∞ σ(ω) dω = 1/(2ϵ_0c_0) ∑_n>0 (T_αα^0n/3) ∫_-∞^∞ ω [ γ/((ω_n-ω)^2+γ^2) - γ/((ω_n+ω)^2+γ^2) ] dω ,

where we have used that σ(ω) is an even function. Each term in the square bracket can be written as a weighted sum of the dispersion and absorption lineshape functions according to

γω/((ω_n∓ω)^2+γ^2) = ∓γ𝒟_n(±ω) ± ω_n𝒜_n(±ω) .

Since 𝒟_n(±ω) is an odd function around ±ω_n, its integral over the entire frequency range vanishes. Therefore, only 𝒜_n(±ω), which apart from a factor is a Lorentzian function, contributes:

I = 1/(2ϵ_0c_0) ∑_n>0 (ω_n/3) T_αα^0n ∫_-∞^∞ [ 𝒜_n(ω)+𝒜_n(-ω) ] dω = π/(2ϵ_0c_0) ∑_n>0 (2ω_n/3) T_αα^0n = π/(2ϵ_0c_0) ∑_n>0 f_0n .

§ DERIVATION OF EQS. (<REF>) AND (<REF>)

In this appendix, we derive the two multipole-expanded representations of the induction operator given by the first and second equalities in Eq. (<ref>). The corresponding nonexpanded representations of the operator were given in Eqs.
(<ref>) and (<ref>), respectively, but to ease the discussion, we reiterate both expressions here:

𝒱̂^ind = ∑_b∈B ∫ ρ̂^e_A(𝐫) ⟨𝒱̂_b(𝐫)⟩_0_b^(1) d𝐫
= -∑_b∈B ∫ ρ̂_A^e(𝐫) ∫ [ ∬ C_𝐫”,𝐫”'^b,(0)(0)/(|𝐫-𝐫”||𝐫'-𝐫”'|) d𝐫” d𝐫”' ] × (⟨ρ̂_A(𝐫')⟩_0_A + ∑_b'∈B\b [⟨ρ̂_b'(𝐫')⟩_0_b'^(0) + ⟨ρ̂_b'(𝐫')⟩_0_b'^(1)]) d𝐫 d𝐫' .

We begin by considering the expression provided by the first equality. By introducing the multipole-expanded form of the electrostatic potential operator of subsystem b given in Eq. (<ref>), this becomes

𝒱̂^ind = ∑_b∈B ∫ ρ̂^e_A(𝐫) ⟨𝒱̂_b(𝐫)⟩_0_b^(1) d𝐫 = ∑_b∈B ∑_|k|=1^∞ ((-1)^|k|/k!) ⟨M̂_b^(k)(𝐑_b)⟩_0_b^(1) ∫ ρ̂^e_A(𝐫) T^(k)_br d𝐫 = ∑_b∈B ∑_|k|=1^∞ (1/k!) M̅_b^(k) ∫ ρ̂^e_A(𝐫) T^(k)_rb d𝐫 ,

where we have used the definition of the induced multipole moments in Eq. (<ref>). Note that the multi-index summation excludes zero, since there is no induced monopole. We identify 𝒱̂_A^(k)(R_b)=∫ρ̂_A(r)T_rb^(k) dr as a component of the k'th-order derivative of the potential operator, whose expectation value gives the k'th-order derivative of the electrostatic potential at R_b generated by the charge density of subsystem A. Since it is common to work in terms of the field, F̂_A^(k)(R_b)=-𝒱̂_A^(k)(R_b) for |k|=1, and its derivatives, we multiply Eq. (<ref>) by 1=(-1)^2, associating a minus sign with the interaction operator. Hereby, we arrive at the first of the two alternative multipole-expanded representations of the induction operator given by Eq. (<ref>):

𝒱̂^ind = -∑_b∈B ∑_|k|=1^∞ (1/k!) F̂_A^e,(k)(𝐑_b) M̅_b^(k) ,

where the field and field-derivative operators are defined in Eq. (<ref>). Proceeding to Eq.
(<ref>), we start by rewriting the double integral in square brackets by introducing a Taylor expansion of the two interaction operators around a point 𝐑_b inside the charge distribution of subsystem b:

∬ C_𝐫”,𝐫”'^b,(0)(0)/(|r-r”||r'-r”'|) dr” dr”' = ∑_|k|=1^∞ ∑_|l|=1^∞ ((-1)^(|k|+|l|)/(k!·l!)) T_br^(k) P_b^(k,l) T_br'^(l) = ∑_|k|=1^∞ ∑_|l|=1^∞ (1/(k!·l!)) T_rb^(k) P_b^(k,l) T_r'b^(l) ,

where the last equality follows from the symmetry of the interaction tensors. P_b^(k,l) is the generalized electronic polarizability defined in Eq. (<ref>). The summations exclude zero, since P_b^(k,l) involves transition matrix elements, which vanish for the constant monopole operator as a consequence of the orthogonality of the unperturbed wave functions of subsystem b. For the same reason, P_b^(k,l) contains no nuclear contribution. Substituting Eq. (<ref>) into (<ref>) yields

𝒱̂^ind = -∑_b∈B ∑_|k|=1^∞ ∑_|l|=1^∞ (1/(k!·l!)) ∫ ρ̂^e_A(r) T_rb^(k) dr P_b^(k,l) × ∫ T_r'b^(l) (⟨ρ̂_A(r')⟩_0_A + ∑_b'∈B\b [⟨ρ̂_b'(r')⟩_0_b'^(0) + ⟨ρ̂_b'(r')⟩_0_b'^(1)]) dr' .

As before, we recognize 𝒱̂_b'^(l)(R_b)=∫ T_rb^(l) ρ̂_b'(r) dr as a component of the l'th-order derivative of the potential operator, which can be translated to the corresponding field and field-derivative operators upon multiplication of Eq. (<ref>) by 1=(-1)^2. In this case, we associate a minus sign with each of the interaction tensors. Equation (<ref>) can then be rewritten as

𝒱̂^ind = -∑_b∈B ∑_|k|=1^∞ ∑_|l|=1^∞ (1/(k!·l!)) F̂_A^e,(k)(R_b) P_b^(k,l) × (⟨F̂^(l)_A(R_b)⟩_0_A + ∑_b'∈B\b [⟨F̂_b'^(l)(R_b)⟩_0_b'^(0) + ⟨F̂^(l)_b'(R_b)⟩_0_b'^(1)]) .

We shall represent ⟨F̂_b'^(l)(R_b)⟩_0_b'^(0) and ⟨F̂_b'^(l)(R_b)⟩_0_b'^(1) in terms of the permanent and induced multipole moments. By taking minus the l'th-order derivative of the expectation value of Eq.
(<ref>), we obtain

⟨F̂_b'^(l)(R_b)⟩_0_b'^(0) = ∑_|k|=0^∞ ((-1)^(|k|+1)/k!) T_b'b^(k+l) M_b'^(k) .

Analogously, the first-order correction can be recast in a multipole-expanded form as

⟨F̂^(l)_b'(R_b)⟩_0_b'^(1) = -∫ T_rb^(l) ⟨ρ̂_b'(r)⟩_0_b'^(1) dr = -∑_|k|=1^∞ ((-1)^|k|/k!) T_b'b^(k+l) ∫ (r-R_b)^k ⟨ρ̂_b'(r)⟩_0_b'^(1) dr = ∑_|k|=1^∞ ((-1)^(|k|+1)/k!) T_b'b^(k+l) M̅_b'^(k) .

Finally, by substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we arrive at the multipole-expanded representation of the induction operator given by Eq. (<ref>):

𝒱̂^ind = -∑_b∈B ∑_|k|=1^∞ ∑_|l|=1^∞ (1/(k!·l!)) F̂_A^e,(k)(R_b) P_b^(k,l) × (⟨F̂^(l)_A(R_b)⟩_0_A + ∑_b'∈B\b [⟨F̂_b'^(l)(R_b)⟩_0_b'^(0) + ∑_|m|=1^∞ ((-1)^(|m|+1)/m!) T_b'b^(m+l) M̅_b'^(m)]) .

§ REFERENCES

1. J. C. Lindon, G. E. Tranter, and J. L. Holmes, eds., Encyclopedia of Spectroscopy and Spectrometry (Academic Press, San Diego, 2010).
2. K. Kristensen, I.-M. Høyvik, B. Jansik, P. Jørgensen, T. Kjærgaard, S. Reine, and J. Jakowski, MP2 Energy and Density for Large Molecular Systems with Internal Error Control Using the Divide-Expand-Consolidate Scheme, Phys. Chem. Chem. Phys. 14, 15706–15714 (2012).
3. M. Ziółkowski, B. Jansik, T. Kjærgaard, and P. Jørgensen, Linear Scaling Coupled Cluster Method with Correlation Energy Based Error Control, J. Chem. Phys. 133, 014107 (2010).
4. C. Ochsenfeld, J. Kussmann, and D. S. Lambrecht, Linear-Scaling Methods in Quantum Chemistry, Rev. Comp. Ch. 23, 1 (2007).
5. C. D. Sherrill, Frontiers in Electronic Structure Theory, J. Chem. Phys. 132, 110902 (2010).
6. S. Jakobsen, K. Kristensen, and F. Jensen, Electrostatic Potential of Insulin: Exploring the Limitations of Density Functional Theory and Force Field Methods, J. Chem. Theory Comput. 9, 3978–3985 (2013).
7. J. M. H. Olsen, N. H. List, K. Kristensen, and J. Kongsted, Accuracy of Protein Embedding Potentials: An Analysis in Terms of Electrostatic Potentials, J. Chem. Theory Comput. 11, 1832–1842 (2015).
8. A. Dreuw, J. L. Weisman, and M. Head-Gordon, Long-Range Charge-Transfer Excited States in Time-Dependent Density Functional Theory Require Non-Local Exchange, J. Chem. Phys. 119, 2943–2946 (2003).
9. O. Gritsenko and E. J. Baerends, Asymptotic Correction of the Exchange–Correlation Kernel of Time-Dependent Density Functional Theory for Long-Range Charge-Transfer Excitations, J. Chem. Phys. 121, 655–660 (2004).
10. A. Dreuw and M. Head-Gordon, Failure of Time-Dependent Density Functional Theory for Long-Range Charge-Transfer Excited States: The Zincbacteriochlorin-Bacteriochlorin and Bacteriochlorophyll-Spheroidene Complexes, J. Am. Chem. Soc. 126, 4007–4016 (2004).
11. M. S. Gordon, D. G. Fedorov, S. R. Pruitt, and L. V. Slipchenko, Fragmentation Methods: A Route to Accurate Calculations on Large Systems, Chem. Rev. 112, 632–672 (2012).
12. C. R. Jacob and J. Neugebauer, Subsystem Density-Functional Theory, WIREs Comput. Mol. Sci. 4, 325–362 (2014).
13. A. S. P. Gomes and C. R. Jacob, Quantum-Chemical Embedding Methods for Treating Local Electronic Excitations in Complex Chemical Systems, Annu. Rep. Prog. Chem., Sect. C: Phys. Chem. 108, 222–277 (2012).
14. T. A. Wesolowski and A. Warshel, Frozen Density Functional Approach for Ab Initio Calculations of Solvated Molecules, J. Phys. Chem. 97, 8050–8053 (1993).
15. A. Warshel and M. Levitt, Theoretical Studies of Enzymatic Reactions: Dielectric, Electrostatic and Steric Stabilization of the Carbonium Ion in the Reaction of Lysozyme, J. Mol. Biol. 103, 227–249 (1976).
16. H. M. Senn and W. Thiel, QM/MM Methods for Biomolecular Systems, Angew. Chem. Int. Ed. 48, 1198–1229 (2009).
17. M. W. van der Kamp and A. J. Mulholland, Combined Quantum Mechanics/Molecular Mechanics (QM/MM) Methods in Computational Enzymology, Biochemistry 52, 2708–2728 (2013).
18. P. E. M. Lopes, B. Roux, and A. D. MacKerell Jr., Molecular Modeling and Dynamics Studies with Explicit Inclusion of Electronic Polarizability: Theory and Applications, Theor. Chem. Acc. 124, 11–28 (2009).
19. K. Sneskov, T. Schwabe, O. Christiansen, and J. Kongsted, Scrutinizing the Effects of Polarization in QM/MM Excited State Calculations, Phys. Chem. Chem. Phys. 13, 18551–18560 (2011).
20. T. A. Wesolowski and J. Weber, Kohn-Sham Equations with Constrained Electron Density: An Iterative Evaluation of the Ground-State Electron Density of Interacting Molecules, Chem. Phys. Lett. 248, 71–76 (1996).
21. M. E. Casida and T. A. Wesołowski, Generalization of the Kohn–Sham Equations with Constrained Electron Density Formalism and Its Time-Dependent Response Theory Formulation, Int. J. Quant. Chem. 96, 577–588 (2004).
22. S. Höfener, A. S. P. Gomes, and L. Visscher, Molecular Properties via a Subsystem Density Functional Theory Formulation: A Common Framework for Electronic Embedding, J. Chem. Phys. 136, 044104 (2012).
23. J. Neugebauer, Chromophore-Specific Theoretical Spectroscopy: From Subsystem Density Functional Theory to Mode-Specific Vibrational Spectroscopy, Phys. Rep. 489, 1–87 (2010).
24. J. Neugebauer, On the Calculation of General Response Properties in Subsystem Density Functional Theory, J. Chem. Phys. 131, 084104 (2009).
25. J. Neugebauer, Couplings between Electronic Transitions in a Subsystem Formulation of Time-Dependent Density Functional Theory, J. Chem. Phys. 126, 134116 (2007).
26. J. M. H. Olsen, K. Aidas, and J. Kongsted, Excited States in Solution through Polarizable Embedding, J. Chem. Theory Comput. 6, 3721–3734 (2010).
27. F. Lipparini, C. Cappelli, and V. Barone, Linear Response Theory and Electronic Transition Energies for a Fully Polarizable QM/Classical Hamiltonian, J. Chem. Theory Comput. 8, 4153–4165 (2012).
28. S. Yoo, F. Zahariev, A. Sok, and M. S. Gordon, Solvent Effects on Optical Properties of Molecules: A Combined Time-Dependent Density Functional Theory/Effective Fragment Potential Approach, J. Chem. Phys. 129, 144112 (2008).
29. N. H. List, J. M. H. Olsen, and J. Kongsted, Excited States in Large Molecular Systems through Polarizable Embedding, Phys. Chem. Chem. Phys. 18, 20234–20250 (2016).
30. L. Jensen, P. T. van Duijnen, and J. G. Snijders, A Discrete Solvent Reaction Field Model for Calculating Molecular Linear Response Properties in Solution, J. Chem. Phys. 119, 3800–3809 (2003).
31. S. M. Morton and L. Jensen, A Discrete Interaction Model/Quantum Mechanical Method to Describe the Interaction of Metal Nanoparticles and Molecular Absorption, J. Chem. Phys. 135, 134103 (2011).
32. C. Curutchet, A. Muñoz-Losa, S. Monti, J. Kongsted, G. D. Scholes, and B. Mennucci, Electronic Energy Transfer in Condensed Phase Studied by a Polarizable QM/MM Model, J. Chem. Theory Comput. 5, 1838–1848 (2009).
33. C. R. Jacob, J. Neugebauer, L. Jensen, and L. Visscher, Comparison of Frozen-Density Embedding and Discrete Reaction Field Solvent Models for Molecular Properties, Phys. Chem. Chem. Phys. 8, 2349–2359 (2006).
34. J. L. Payton, S. M. Morton, J. E. Moore, and L. Jensen, A Discrete Interaction Model/Quantum Mechanical Method for Simulating Surface-Enhanced Raman Spectroscopy, J. Chem. Phys. 136, 214103 (2012).
35. N. H. List, H. J. Aa. Jensen, J. Kongsted, and E. D. Hedegård, A Unified Framework for the Polarizable Embedding and Continuum Methods within Multiconfigurational Self-Consistent Field Theory, Adv. Quantum Chem. 66, 195 (2013).
36. J. M. H. Olsen and J. Kongsted, Molecular Properties through Polarizable Embedding, Adv. Quantum Chem. 61, 107–143 (2011).
37. J. M. H. Olsen, C. Steinmann, K. Ruud, and J. Kongsted, Polarizable Density Embedding: A New QM/QM/MM-based Computational Strategy, J. Phys. Chem. A 119, 5344–5355 (2015).
38. J. Olsen and P. Jørgensen, in Modern Electronic Structure Theory, edited by D. R. Yarkony, Vol. 2 (World Scientific, 1995), pp. 857–990.
39. S. Corni, R. Cammi, B. Mennucci, and J. Tomasi, Electronic Excitation Energies of Molecules in Solution within Continuum Solvation Models: Investigating the Discrepancy between State-Specific and Linear-Response Methods, J. Chem. Phys. 123, 134512 (2005).
40. N. H. List, Theoretical Description of Electronic Transitions in Large Molecular Systems in the Optical and X-Ray Regions, Ph.D. thesis, University of Southern Denmark, Odense, Denmark (2015), http://www.diva-portal.org/smash/record.jsf?pid=diva2:1072871&dswid=7352.
41. T. Schwabe, General Theory for Environmental Effects on (Vertical) Electronic Excitation Energies, J. Chem. Phys. 145, 154105 (2016).
42. N. Rösch and M. C. Zerner, Calculation of Dispersion Energy Shifts in Molecular Electronic Spectra, J. Phys. Chem. 98, 5817–5823 (1994).
43. E. G. McRae, Theory of Solvent Effects on Molecular Electronic Spectra. Frequency Shifts, J. Phys. Chem. 61, 562–572 (1957).
44. S. Y. Buhmann, Dispersion Forces II (Springer Berlin Heidelberg, 2012).
45. B. Lunkenheimer and A. Köhn, Solvent Effects on Electronically Excited States Using the Conductor-Like Screening Model and the Second-Order Correlated Method ADC(2), J. Chem. Theory Comput. 9, 977–994 (2012).
46. C. Daday, C. Curutchet, A. Sinicropi, B. Mennucci, and C. Filippi, Chromophore–Protein Coupling Beyond Nonpolarizable Models: Understanding Absorption in Green Fluorescent Protein, J. Chem. Theory Comput. 11, 4825–4839 (2015).
47. L. Jensen, M. Swart, and P. T. van Duijnen, Microscopic and Macroscopic Polarization within a Combined Quantum Mechanics and Molecular Mechanics Model, J. Chem. Phys. 122, 034103 (2005).
48. M. Pavanello, On the Subsystem Formulation of Linear-Response Time-Dependent DFT, J. Chem. Phys. 138, 204118 (2013).
49. J. G. Ángyán, Common Theoretical Framework for Quantum Chemical Solvent Effect Theories, J. Math. Chem. 10, 93–137 (1992).
50. R. McWeeny and B. T. Sutcliffe, Methods of Molecular Quantum Mechanics, 2nd ed., Vol. 2 (Academic Press, London, 1989).
51. T. Helgaker, P. Jørgensen, and J. Olsen, Molecular Electronic-Structure Theory (Wiley, 2000).
52. O. Christiansen, P. Jørgensen, and C. Hättig, Response Functions from Fourier Component Variational Perturbation Theory Applied to a Time-Averaged Quasienergy, Int. J. Quant. Chem. 68, 1–52 (1998).
53. K. Sasagane, F. Aiga, and R. Itoh, Higher-Order Response Theory Based on the Quasienergy Derivatives: The Derivation of the Frequency-Dependent Polarizabilities and Hyperpolarizabilities, J. Chem. Phys. 99, 3738–3778 (1993).
54. P. Norman, A Perspective on Nonresonant and Resonant Electronic Response Theory for Time-Dependent Molecular Properties, Phys. Chem. Chem. Phys. 13, 20519–20535 (2011).
55. J. Olsen and P. Jørgensen, Linear and Nonlinear Response Functions for an Exact State and for an MCSCF State, J. Chem. Phys. 82, 3235–3264 (1985).
56. N. H. List, S. Coriani, O. Christiansen, and J. Kongsted, Identifying the Hamiltonian Structure in Linear Response Theory, J. Chem. Phys. 140, 224103 (2014).
57. P.-O. Löwdin, Studies in Perturbation Theory. V. Some Aspects on the Exact Self-Consistent Field Theory, J. Math. Phys. 3, 1171–1184 (1962).
58. P.-O. Löwdin, Studies in Perturbation Theory: Part I. An Elementary Iteration-Variation Procedure for Solving the Schrödinger Equation by Partitioning Technique, J. Mol. Spectrosc. 10, 12–33 (1963).
59. P.-O. Löwdin, Studies in Perturbation Theory. IV. Solution of Eigenvalue Problem by Projection Operator Formalism, J. Math. Phys. 3, 969–982 (1962).
60. C.-P. Hsu, G. R. Fleming, M. Head-Gordon, and T. Head-Gordon, Excitation Energy Transfer in Condensed Media, J. Chem. Phys. 114, 3065–3072 (2001).
61. J. M. Rinaldi, S. M. Morton, and L. Jensen, A Discrete Interaction Model/Quantum Mechanical Method for Simulating Nonlinear Optical Properties of Molecules Near Metal Surfaces, Mol. Phys. 111, 1322–1331 (2013).
62. C. J. F. Böttcher and P. Bordewijk, Theory of Electric Polarization, Vol. 1: Dielectrics in Static Fields (Elsevier, Amsterdam, 1973).
63. R. Wortmann and D. M. Bishop, Effective Polarizabilities and Local Field Corrections for Nonlinear Optical Experiments in Condensed Media, J. Chem. Phys. 108, 1001–1007 (1998).
64. R. Cammi, B. Mennucci, and J. Tomasi, On the Calculation of Local Field Factors for Microscopic Static Hyperpolarizabilities of Molecules in Solution with the Aid of Quantum-Mechanical Methods, J. Phys. Chem. A 102, 870–875 (1998).
65. S. Pipolo, S. Corni, and R. Cammi, The Cavity Electromagnetic Field within the Polarizable Continuum Model of Solvation, J. Chem. Phys. 140, 164114 (2014).
66. N. H. List, H. J. Aa. Jensen, and J. Kongsted, Local Electric Fields and Molecular Properties in Heterogeneous Environments through Polarizable Embedding, Phys. Chem. Chem. Phys. 18, 10070–10080 (2016).
67. E. S. Nielsen, P. Jørgensen, and J. Oddershede, Transition Moments and Dynamic Polarizabilities in a Second Order Polarization Propagator Approach, J. Chem. Phys. 73, 6238–6246 (1980).
68. A. Dreuw and M. Wormit, The Algebraic Diagrammatic Construction Scheme for the Polarization Propagator for the Calculation of Excited States, WIREs Comput. Mol. Sci. 5, 82–95 (2015).
69. P. Norman, D. M. Bishop, H. J. Aa. Jensen, and J. Oddershede, Nonlinear Response Theory with Relaxation: The First-Order Hyperpolarizability, J. Chem. Phys. 123, 194103 (2005).
70. P. Norman, D. M. Bishop, H. J. Aa. Jensen, and J. Oddershede, Near-Resonant Absorption in the Time-Dependent Self-Consistent Field and Multiconfigurational Self-Consistent Field Approximations, J. Chem. Phys. 115, 10323–10334 (2001).
71. J. Kauczor, P. Jørgensen, and P. Norman, On the Efficiency of Algorithms for Solving Hartree-Fock and Kohn-Sham Response Equations, J. Chem. Theory Comput. 7, 1610–1630 (2011).
72. R. W. Boyd, Nonlinear Optics (Academic Press, 2003).
73. N. H. List, J.
Kauczor, author T. Saue, author H. J. Aa. Jensen,and author P. Norman, title title Beyond the Electric-Dipole Approximation: A Formulation and Implementation of Molecular Response Theory for the Description of Absorption of Electromagnetic Field Radiation, @noopjournal journal J. Chem. Phys. volume 142, pages 244111 (year 2015)NoStop [Stone(1989)]stone1989induction author author A. J. Stone, title title The Induction Energy of an Assembly of Polarizable Molecules, @noopjournal journal Chem. Phys. Lett. volume 155, pages 102–110 (year 1989)NoStop [Saint Raymond(1991)]saint1991elementary author author X. Saint Raymond, @nooptitle Elementary Introduction to the Theory of Pseudodifferential Operators,Vol. volume 3 (publisher CRC Press, year 1991)NoStop [Olsen(2012)]olsen2012thesis author author J. M. H.Olsen, title Development of Quantum Chemical Methods towards Rationalization and Optimal Design of Photoactive Proteins, @noopPh.D. thesis, school University of Southern Denmark, address Odense, Denmark (year 2012), note DOI: 10.6084/m9.figshare.156852NoStop [Stone(2002)]stone2013theory author author A. Stone, @nooptitle The Theory of Intermolecular Forces (publisher Oxford University Press,year 2002)NoStop [Buckingham(1967)]buckingham1967 author author A. D. Buckingham, title title Permanent and Induced Molecular Moments and Long-Range Intermolecular Forces,@noopjournal journal Adv. Chem. Phys.volume 12, pages 107–142 (year 1967)NoStop [Applequist, Carl, andFung(1972)]applequist1972atom author author J. Applequist, author J. R. Carl,and author K.-K. Fung, title title Atom Dipole Interaction Model for Molecular Polarizability. Application to Polyatomic Molecules and Determination of Atom Polarizabilities, @noopjournal journal J. Am. Chem. Soc. volume 94, pages 2952–2960 (year 1972)NoStop [Harczuk, Vahtras, andÅgren(2015)]harczuk2015frequency author author I. Harczuk, author O. Vahtras, and author H. 
Ågren,title title Frequency-Dependent Force Fields for QMMM Calculations, @noopjournal journal Phys. Chem. Chem. Phys. volume 17, pages 7800–7812 (year 2015)NoStop [Nørby et al.(2016)Nørby, Vahtras, Norman, andKongsted]norby2016assessing author author M. S. Nørby, author O. Vahtras, author P. Norman,and author J. Kongsted, title title Assessing Frequency-Dependent Site Polarisabilities in Linear Response Polarisable Embedding, @noopjournal journal Mol. Phys. , pages 1–9 (year 2016)NoStop [Yanai, Tew, and Handy(2004)]yanai2004new author author T. Yanai, author D. P. Tew, and author N. C. Handy,title title A New Hybrid Exchange–Correlation Functional using the Coulomb-Attenuating Method (CAM-B3LYP), @noopjournal journal Chem. Phys. Lett. volume 393, pages 51–57 (year 2004)NoStop [Dunning(1989)]dunning author author T. H. Dunning, title title Gaussian Basis Sets for Use in Correlated Molecular Calculations. I. The Atoms Boron through Neon and Hydrogen, @noopjournal journal J. Chem. Phys. volume 90, pages 1007–1023 (year 1989)NoStop [Gagliardi, Lindh, andKarlström(2004)]loprop author author L. Gagliardi, author R. Lindh, and author G. Karlström,title title Local Properties of Quantum Chemical Systems: The LoProp Approach. @noopjournal journal J. Chem. Phys. volume 121, pages 4494–4500 (year 2004)NoStop [Vahtras(2014)]Vahtras:13276 author author O. 
Vahtras, @nooptitle LoProp for Dalton, howpublished see <http://dx.doi.org/10.5281/zenodo.13276> (year 2014)NoStop [Aidas et al.(2014)Aidas, Angeli, Bak, Bakken, Bast, Boman, Christiansen, Cimiraglia, Coriani, Dahle, Dalskov, Ekström, Enevoldsen, Eriksen, Ettenhuber, Fernández, Ferrighi, Fliegl, Frediani, Hald, Halkier, Hättig, Heiberg, Helgaker, Hennum, Hettema, Hjertenæs, Høst, Høyvik, Iozzi, Jansik, Jensen, Jonsson, Jørgensen, Kauczor, Kirpekar, Kjærgaard, Klopper, Knecht, Kobayashi, Koch, Kongsted, Krapp, Kristensen, Ligabue, Lutnæs, Melo, Mikkelsen, Myhre, Neiss, Nielsen, Norman, Olsen, Olsen, Osted, Packer, Pawlowski, Pedersen, Provasi, Reine, Rinkevicius, Ruden, Ruud, Rybkin, Sałek, Samson, de Merás, Saue, Sauer, Schimmelpfennig, Sneskov, Steindal, Sylvester-Hvid, Taylor, Teale, Tellgren, Tew, Thorvaldsen, Thøgersen, Vahtras, Watson, Wilson, Ziolkowski,and Ågren]dalton author author K. Aidas, author C. Angeli, author K. L. Bak, author V. Bakken, author R. Bast, author L. Boman, author O. Christiansen, author R. Cimiraglia, author S. Coriani, author P. Dahle, author E. K.Dalskov, author U. Ekström, author T. Enevoldsen, author J. J. Eriksen, author P. Ettenhuber, author B. Fernández, author L. Ferrighi, author H. Fliegl, author L. Frediani, author K. Hald, author A. Halkier, author C. Hättig, author H. Heiberg, author T. Helgaker, author A. C. Hennum, author H. Hettema, author E. Hjertenæs, author S. Høst, author I.-M.Høyvik, author M. F.Iozzi, author B. Jansik, author H. J. Aa.Jensen, author D. Jonsson, author P. Jørgensen, author J. Kauczor, author S. Kirpekar, author T. Kjærgaard, author W. Klopper, author S. Knecht, author R. Kobayashi, author H. Koch, author J. Kongsted, author A. Krapp, author K. Kristensen, author A. Ligabue, author O. B. Lutnæs, author J. I. Melo, author K. V. Mikkelsen, author R. H. Myhre, author C. Neiss, author C. B. Nielsen, author P. Norman, author J. Olsen, author J. M. H.Olsen, author A. Osted, author M. J. Packer, author F. 
Pawlowski, author T. B. Pedersen, author P. F. Provasi, author S. Reine, author Z. Rinkevicius, author T. A.Ruden, author K. Ruud, author V. Rybkin, author P. Sałek, author C. C. M. Samson, author A. S. de Merás, author T. Saue, author S. P. A. Sauer, author B. Schimmelpfennig, author K. Sneskov, author A. H.Steindal, author K. O.Sylvester-Hvid, author P. R.Taylor, author A. M.Teale, author E. I. Tellgren, author D. P. Tew, author A. J. Thorvaldsen, author L. Thøgersen, author O. Vahtras, author M. A. Watson, author D. J. D. Wilson, author M. Ziolkowski,and author H. Ågren, title title The Dalton Quantum Chemistry Program System,@noopjournal journal WIREs Comput. Mol. Sci. volume 4, pages 269–284 (year 2014)NoStop [dal()]dalton2 @nooptitle Dalton, A Molecular Electronic Structure Program, release dalton2016.0 (2016), howpublished see <http://daltonprogram.org/>NoStop [Steinmann and Kongsted(2015)]steinmann2015electronic author author C. Steinmann and author J. Kongsted, title title Electronic Energy Transfer in Polarizable Heterogeneous Environments: A Systematic Investigation of Different Quantum Chemical Approaches, @noopjournal journal J. Chem. Theory Comput.volume 11, pages 4283–4293 (year 2015)NoStop [Maitra et al.(2004)Maitra, Zhang, Cave, and Burke]maitra2004double author author N. T. Maitra, author F. Zhang, author R. J. Cave,andauthor K. Burke, title title Double Excitations Within Time-Dependent Density Functional Theory Linear Response, @noopjournal journal J. Chem. Phys. volume 120, pages 5932–5937 (year 2004)NoStop [Cave et al.(2004)Cave, Zhang, Maitra, and Burke]cave2004dressed author author R. J. Cave, author F. Zhang, author N. T. Maitra,andauthor K. Burke, title title A Dressed TDDFT Treatment of the 2 1 A g States of Butadiene and Hexatriene, @noopjournal journal Chem. Phys. Lett. volume 389, pages 39–42 (year 2004)NoStop [Casida and Huix-Rotllant(2015)]casida2015many author author M. E. Casida and author M. 
Huix-Rotllant, title title Many-Body Perturbation Theory (MBPT) and Time-Dependent Density-Functional Theory (TD-DFT): MBPT Insights About What Is Missing In, and Corrections To, the TD-DFT Adiabatic Approximation, in @noopbooktitle Density-Functional Methods for Excited States (publisher Springer, year 2015) pp. pages 1–60NoStop [Caballero, Moreira, andBofill(2013)]caballero2013comparison author author M. Caballero, author I. d. P. Moreira,and author J. M. Bofill, title title A Comparison Model between Density Functional and Wave Function Theories by Means of the Löwdin Partitioning Technique, @noopjournal journal J. Chem. Phys. volume 138,pages 174107 (year 2013)NoStop
http://arxiv.org/abs/1702.07792v1
{ "authors": [ "Nanna Holmgaard List", "Patrick Norman", "Jacob Kongsted", "Hans Jørgen Aagaard Jensen" ], "categories": [ "physics.chem-ph" ], "primary_category": "physics.chem-ph", "published": "20170224224551", "title": "A quantum-mechanical perspective on linear response theory within polarizable embedding" }
Polariton condensation in photonic crystals with high molecular orientation

I. G. Savenko

Institute of Photonics, University of Eastern Finland, P.O. Box 111 Joensuu, FI-80101 Finland
ITMO University, St. Petersburg 197101, Russia
Center for Theoretical Physics of Complex Systems, Institute for Basic Science (IBS), Daejeon 34051, Republic of Korea
Nonlinear Physics Centre, Research School of Physics and Engineering, The Australian National University, Canberra ACT 2601, Australia

December 30, 2023

We study Frenkel exciton-polariton Bose-Einstein condensation in a two-dimensional defect-free triangular photonic crystal with an organic semiconductor active medium containing bound excitons with dipole moments oriented perpendicular to the layers. We find the photonic Bloch modes of the structure and consider their strong coupling regime with the excitonic component. Using the Gross-Pitaevskii equation for exciton polaritons and the Boltzmann equation for the external exciton reservoir, we demonstrate the formation of a condensate at the points in reciprocal space where the photon group velocity equals zero. Further, we demonstrate condensation at non-zero momentum states for TM-polarized photons in a system with incoherent pumping, and show that the condensation threshold varies between different points in reciprocal space, controlled by the detuning.

PACS: 78.67.Pt, 78.66.Fd, 78.45.+h

§ INTRODUCTION

The large exciton binding energy and oscillator strength of organic materials embedded in light-confining structures such as optical cavities make it possible to achieve the giant Rabi-oscillation energies desired for room-temperature exciton-polariton (EP) condensation <cit.>. In this respect, two-dimensional (2D) photonic crystals (PCs), which can be easily integrated with organic materials <cit.>, are a current area of focus.
The low group velocity of the optical Bloch modes at the photonic band edge provides for a long lifetime of the slow waves and thus seems promising for the realization of polariton condensation, similar to the enhancement of coherent emission in defect-free photonic crystals <cit.>. A considerable number of organic semiconductors, such as thiophene/phenylene co-oligomer single crystals, 1,4-bis(5-phenylthiophen-2-yl)benzene and 2,5-bis(4-biphenyl)thiophene <cit.>, have transition dipole moments oriented along the vertical direction with respect to the main crystal face. For this reason, these organic crystals are unsuitable for strong interaction with the optical modes of a Fabry-Perot cavity, where the electric field is oriented perpendicular to the dipole moment. Instead, as we will show for 2D PCs, transverse magnetic (TM) modes have an electric field component perpendicular to the plane of the crystal and can therefore be strongly coupled with the excitons. There also exist other materials, such as the cyano-substituted compound 2,5-bis(cyano biphenyl-4-yl)thiophene, in which the transition dipole moment lies in the in-plane direction with respect to the crystal face. While such materials can be assumed to couple strongly with Fabry-Perot cavities or with the transverse electric (TE) modes of PCs <cit.>, supporting Γ-point condensation in reciprocal space, this case is trivial and beyond the scope of our manuscript. In this manuscript, we consider a 2D PC represented by a triangular lattice of pillars supporting the emergence of band gaps for both TE and TM polarizations. In principle, 2D PCs provide two types of exciton-photon quasiparticles. The first type results from coupling between excitonic and photonic modes below the light cone (the free-photon dispersion); such modes are called guided PC polaritons. The second type, called radiative polaritons, arises from excitons coupled to modes lying above the light cone.
Polaritons of the latter type can be effectively analyzed by angle-resolved spectroscopy, under the condition that the exciton-photon coupling is much greater than the intrinsic photon linewidth <cit.>. These two modes can be employed differently; in particular, the radiative modes can be used as efficient reflectors <cit.>, whereas the guided modes can be utilized for the realization of strong light-matter interaction in such devices as vertical cavity surface emitting lasers <cit.> and polariton lasers <cit.>. Both types of polaritons can be made of photons representing slow Bloch modes (SBM), confined in 2D PCs in the vicinity of the extremum points at the edge of the photonic band gap, where the group velocity approximately equals zero <cit.>. The slow velocity of the modes results in long optical paths, or in other words, nearly total light confinement. This effect has been used for mode synchronization in organic lasers <cit.>, and has also proved efficient for lasing-threshold reduction in vertical cavity surface emitting lasers <cit.> and in 2D PC lasers in the strong coupling regime. Notably, such lasers exhibit much lower threshold gains <cit.>. The lifetime of the Bloch modes is mostly determined by the lateral quality factor of the PC, which itself depends on the size and quality of the nanostructure. The vertical quality factor can be considered infinitely high for 2D PCs of sizes greater than a hundred microns.

§ PHOTONIC BAND STRUCTURE

Our system schematic is presented in Fig. <ref>. The structure consists of aluminum nitride (AlN) pillars of radius 450 nm forming the photonic crystal, with a lattice constant of 1 μm and a refractive index (n) of 2.15. The transparent layer consists of a polymer material with optical properties close to air. The substrate provides for optical confinement in the vertical direction and should therefore be chosen from materials with refractive indices higher than that of AlN (e.g. GaN).
However, we consider the system to be effectively 2D in our calculations and neglect the influence of the substrate on the optical properties of the system. Further, in order to make the light-matter coupling effective and strong, the thin active layer is set firmly onto the PC. The structure is exposed to an electromagnetic wave incident on the edge of the PC, nearly parallel to the layers of the structure. It should be noted that normal incidence does not provide for the light confinement necessary for a finite photon lifetime in the system, and is thus outside the focus of our study. We employ a standard Fourier method in order to decouple Maxwell's equations and treat the TE and TM polarizations independently, using separate equations for the magnetic and electric fields; in each polarization one field is directed along the z axis and the other lies in the xy plane. In the case of TE polarization, we can represent the fields in the following form <cit.>: 𝐇(𝐫) = (0, 0, H_z(𝐫)), 𝐄(𝐫) = (E_x(𝐫), E_y(𝐫), 0), where 𝐫 = (x, y) is the coordinate in the xy plane. Inserting (<ref>) and (<ref>) into Maxwell's equations in the frequency domain, we find

∂/∂x [ (1/ϵ(𝐫)) ∂H_z(𝐫)/∂x ] + ∂/∂y [ (1/ϵ(𝐫)) ∂H_z(𝐫)/∂y ] + (ω^2/c^2) H_z(𝐫) = 0.

Further, we use the Bloch decomposition, H_z(𝐫) = ∑_G A_G(𝐤) e^-i(𝐤+𝐆)·𝐫, where the summation is taken over the reciprocal lattice vectors 𝐆 = n𝐛_1 + m𝐛_2, with n and m integers. Inserting (<ref>) into (<ref>), we obtain the eigenvalue problem for the TE modes:

∑_G' ϵ_G,G'^-1 (𝐤+𝐆')·(𝐤+𝐆) A_G' = (ω^2/c^2) A_G,

where ϵ_G,G' is the matrix of Fourier coefficients of the dielectric permittivity. Let us now consider TM polarization, where 𝐄(𝐫) = (0, 0, E_z(𝐫)), 𝐇(𝐫) = (H_x(𝐫), H_y(𝐫), 0).
Then we write down Maxwell's equation,

(1/ϵ(𝐫)) ∂^2 E_z(𝐫)/∂x^2 + (1/ϵ(𝐫)) ∂^2 E_z(𝐫)/∂y^2 + (ω^2/c^2) E_z(𝐫) = 0,

and by substituting the ansatz E_z(𝐫) = ∑_G B_G(𝐤) e^-i(𝐤+𝐆)·𝐫 we arrive at the eigenvalue problem

∑_G' ϵ_G,G'^-1 |𝐤+𝐆'|^2 B_G' = (ω^2/c^2) B_G.

It should be noted that the Fourier approach employed here is only valid for perfectly periodic photonic crystals. In reality, one should also account for the fact that the 2D PC is neither infinitely large nor lossless. For dielectric materials with small absorption coefficients, one can apply an effective Fourier approach in which the imaginary part of the permittivity is treated as a perturbation <cit.>. In order to account for the finite lifetime of photons, we use experimental data for the absorption coefficient <cit.>, which gives an imaginary part of the permittivity of about 10^-4. We then use the complex-valued permittivity and find complex eigenfrequencies, which yield the photon lifetime. The finite size of the PC and imperfections of a real sample lead to a further reduction in photon lifetime, an effect which is important for the Bloch waves below the light cone. For these modes, material losses do not reduce the photon lifetime, and imperfections of the geometry can be considered the only source of dissipation. We include this mechanism of energy relaxation in our model phenomenologically, as discussed below. When we apply Eq. (<ref>) to our geometry (Fig. <ref>), we find that the structure supports two energy minima in the spectrum of the TM modes (see Fig. <ref>). It is important that the TM mode supports the emergence of the SBM, often associated with the Van Hove singularity <cit.>. One of the minima is located at the high-symmetry M-point of the Brillouin zone, and the other lies above the light cone in the vicinity of the K-point. Modes lying above the light cone are usually referred to as radiative or quasi-guided modes, since they can radiate into free space.
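The plane-wave (Fourier) expansion described above can be prototyped in a few dozen lines. The sketch below builds the permittivity matrix analytically for circular rods (using the standard Bessel-function form of the Fourier coefficients of a disc) and solves the TM eigenvalue problem for the triangular lattice with the parameters quoted in the text (pillar radius 0.45 μm, lattice constant 1 μm, n = 2.15); the truncation `N` and all helper names are illustrative choices, not from the original.

```python
import numpy as np
from scipy.special import j1

# Plane-wave expansion for the TM bands of a triangular lattice of
# dielectric pillars (a sketch; truncation and normalization are illustrative).
a = 1.0                     # lattice constant (um)
r = 0.45                    # pillar radius (um)
eps_rod, eps_bg = 2.15**2, 1.0

# Reciprocal lattice vectors of the triangular lattice (b_i . a_j = 2*pi*delta_ij).
b1 = (2 * np.pi / a) * np.array([1.0, -1.0 / np.sqrt(3.0)])
b2 = (2 * np.pi / a) * np.array([0.0,  2.0 / np.sqrt(3.0)])
N = 5                       # plane-wave truncation: (2N+1)^2 waves
G = np.array([n * b1 + m * b2
              for n in range(-N, N + 1) for m in range(-N, N + 1)])

def eps_fourier(g):
    """Fourier coefficient of eps(r) for circular rods of radius r."""
    f = np.pi * r**2 / (np.sqrt(3.0) / 2.0 * a**2)   # filling fraction
    gr = np.linalg.norm(g) * r
    if gr < 1e-9:
        return eps_bg + f * (eps_rod - eps_bg)
    return (eps_rod - eps_bg) * f * 2.0 * j1(gr) / gr

# Permittivity matrix eps_{G,G'} in the plane-wave basis, and its inverse.
eps_mat = np.array([[eps_fourier(Gi - Gj) for Gj in G] for Gi in G])
inv_eps = np.linalg.inv(eps_mat)

def tm_bands(k, nbands=6):
    """Lowest TM eigenfrequencies, returned as omega*a/(2*pi*c)."""
    kg2 = np.sum((k + G)**2, axis=1)                 # |k+G'|^2
    w2 = np.linalg.eigvals(inv_eps @ np.diag(kg2))   # (omega/c)^2
    w2 = np.sort(np.real(w2))
    return np.sqrt(np.clip(w2[:nbands], 0.0, None)) * a / (2.0 * np.pi)

# M point of the triangular-lattice Brillouin zone.
M = 0.5 * b1
print(tm_bands(M))
```

Scanning k along Γ-M-K-Γ with `tm_bands` reproduces the qualitative band diagram; a symmetrized eigenproblem or Li's inverse rule would improve convergence but is beyond this sketch.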
The modes lying below the light cone (guided modes) are confined and can only radiate through imperfect sidewalls of the crystal or other disorder in the PC structure <cit.>.

§ QUALITY FACTOR

The main condition for the strong coupling regime, which provides for EP formation, can be roughly stated as Ω_R > 1/τ_C, 1/τ_X, where Ω_R is the Rabi frequency, standing for the rate of energy exchange between the excitonic and photonic components, and τ_C and τ_X are the lifetimes of the photons and excitons, respectively. In the case of organic polaritons, based on Frenkel excitons with typically long lifetimes, the particle lifetime is mostly determined by the photonic component. By definition, the latter is set by the quality factor of the PC and the mode frequency, τ_C = 2πQ/ω_real. The relation between the optical pumping rate and the inverse EP lifetime determines the possibility of Bose-Einstein condensate (BEC) formation: clearly, an increase of both the lifetime and the pumping rate allows one to reach the critical polariton concentration for BEC formation. In 2D PCs, the inverse of the full quality factor Q is the sum of the inverse vertical and lateral quality factors: Q^-1 = Q_v^-1 + Q_l^-1. The lifetime of the Bloch modes which lie above the light cone and can couple to free-space modes is determined by the vertical quality factor, a characteristic of the radiation losses in the perpendicular direction <cit.>, Q_v = -ω_real/(2ω_im), where ω_real and ω_im are the real and imaginary parts of the frequency of the Bloch modes <cit.>. We can estimate Q_v of a 2D PC of finite size L using the assumption that such a PC supports modes with mean in-plane k ∝ 1/L. It is known that if, approximately, L > 100 μm, then Q_v > 50000 <cit.>.
Thus the total Q is mainly determined by the lateral losses. In turn, Q_l depends on the band structure, size, and disorder of the nanostructure <cit.>,

Q_l = π/(1-R(λ_0)) [ (2cL^2/(λ_0 α)) · 1/(pπ - ϕ_r - (λ_0/π) dϕ_r/dλ|_λ_0) ],

where R is the modal reflectivity, α is the band curvature in the vicinity of the minimum (the second derivative of the dispersion), such that the group velocity can be expressed as v_g = αk, p is an integer, and ϕ_r is the phase of the modal reflectivity at the edges of the 2D PC. In our simulations, we choose a typical value for the lateral quality factor, Q_l ≈ 2000 <cit.>; the polariton lifetime in this structure is then comparable to that of a Fabry-Perot microcavity, τ_p ≈ 5 ps. Note that the typical quality factor of AlN hexagonal crystals is high enough for strong coupling, with such structures commonly used as microwire waveguides <cit.>. It is, however, insufficient to provide a thermal state for BEC, and therefore we consider nonequilibrium condensation and describe our system with a kinetic approach. Another beneficial property of AlN is its small lattice mismatch with other nitride-based semiconductor alloys <cit.>; moreover, stress-free AlN layers can easily be grown on Si/SiC substrates <cit.>.

§ DISPERSION RELATION

In organic active media, one observes tightly bound Frenkel excitons with a typical size of about one angstrom. This allows us to neglect the influence of the periodic structure of the 2D PC on the exciton wave function and to consider only the non-uniform electric fields. EPs emerge as mixed modes of the electromagnetic field and the exciton resonance, with the lower-polariton dispersion

ω_LP(k) = (ω_k^C + ω^X)/2 - √((ω_k^C - ω^X)^2 + Ω_R^2)/2,

where <cit.>

ħΩ_R = √(2|μ|^2 ħω_C (N/V)/ϵ).

The molecular packing density here can be estimated as N/V ≈ 10^-3 Å^-3, with |μ| ≈ 25 Debye. Estimates with formula (<ref>) give exaggerated values of more than 1 eV, which do not comply with experimental data; this discrepancy is due to about 99% of the excitons remaining uncoupled.
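The order-of-magnitude claim above (a naive Rabi energy well above 1 eV) and the shape of the lower-polariton branch can be checked numerically. In the sketch below the photon energy ħω_C ≈ 3 eV is an assumed round number, since the text does not quote one, and all names are illustrative.

```python
import numpy as np

# Order-of-magnitude check of hbar*Omega_R = sqrt(2|mu|^2 hbar*omega_C (N/V)/eps),
# taking eps ~ eps0; hbar*omega_C ~ 3 eV is an assumed value (not given in the text).
eps0 = 8.854e-12            # vacuum permittivity, F/m
e_charge = 1.602e-19        # J per eV
mu = 25 * 3.336e-30         # 25 Debye in C*m
n_mol = 1e-3 * 1e30         # 1e-3 Angstrom^-3 in m^-3
hw_c = 3.0 * e_charge       # assumed photon energy, J

h_omega_R = np.sqrt(2 * mu**2 * hw_c * n_mol / eps0)  # J
print(f"naive hbar*Omega_R = {h_omega_R / e_charge:.1f} eV")  # well above 1 eV

# Lower-polariton branch for the (much smaller) effective Rabi energy
# used in the simulations, hbar*Omega_R = 100 meV, exciton taken at 3.0 eV:
def lower_polariton(E_c, E_x=3.0, Omega=0.1):
    """Lower-polariton energy (eV) for photon energy E_c (eV)."""
    return 0.5 * (E_c + E_x) - 0.5 * np.sqrt((E_c - E_x)**2 + Omega**2)

E_c = np.linspace(2.8, 3.2, 5)      # photon energies near resonance, eV
print(lower_polariton(E_c))
```

At zero detuning the branch sits half a Rabi energy below the bare exciton, consistent with the dispersion formula above.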
For our simulations, we choose a value of the Rabi energy typical for a planar geometry with an organic active layer, ħΩ_R ≈ 100 meV <cit.>. While the Rabi energy in typical Fabry-Perot cavities with organic active regions varies between 100 and 800 meV <cit.>, in our case it lies at the lower limit, since we have laminated the active layer on top of the PC; this configuration does not allow full overlap between the electric field and the dipole. Such values of Ω_R, typical for organic microcavities, are much higher than in GaAs/InGaAs quantum-well-based microcavities due to the extremely high transition dipole moment characteristic of organic materials <cit.>. In a Fabry-Perot microcavity the electric field is localized at the quantum well, which plays the role of a defect in a 1D PC. In our defect-free 2D PC, by contrast, the active region covers the whole surface of the sample; therefore, the overlap of the dipole and the photon field in each elementary cell is small, yet for the whole sample the Rabi constant is quite high. Figure <ref>a shows the results of the photonic band gap calculation: the red and blue lines correspond to TE and TM modes, respectively, and the black line shows the EP dispersion. The latter curve has two minima which can be considered as traps for polaritons, where their condensation might take place. It should be noted that we do not describe polaritons based on TE modes since, first, most organic active regions have dipole moments oriented perpendicular to the surface of lamination, and second, the band gap for TE modes is smaller than the trap for the polaritons, resulting in nonzero group velocity and instability of the condensate.

§ CONDENSATION KINETICS

Having found the bare EP dispersion, we can describe EP dynamics within the mean-field approximation, where the EP field operator, Ψ̂(𝐫,t), is averaged over the z-direction and treated as the classical variable ψ(𝐫,t) with Fourier image ψ(𝐤,t).
The corresponding equation of motion reads <cit.>:

iħ dψ(𝐫,t)/dt = F^-1[ ħω_LP(k) ψ(𝐤,t) - (iħ/2τ(k)) ψ(𝐤,t) ] + (iħγ/2) n_X(𝐫,t) ψ(𝐫,t) + α|ψ(𝐫,t)|^2 ψ(𝐫,t),

where F^-1 is the inverse Fourier transform, α is a parameter describing the strength of particle-particle interactions, and τ is the polariton lifetime. The interaction strength can be estimated as α ≈ (10^-22/L) eV cm^3, where L is the thickness of the active layer <cit.>. We use a value three orders of magnitude smaller than is usual in GaAs microcavities <cit.>. On one hand, such a small value of α does not lead to a significant blueshift; on the other hand, the main driver of condensation is still the cubic term in Eq. (<ref>). The term -i(ħ/2τ)ψ accounts for the radiative decay of particles. Safely assuming that the exciton lifetime τ_X lies in the nanosecond range and is much greater than the photon lifetime, we take the polariton lifetime τ to be determined mostly by the microcavity photon lifetime, τ_C = 1/Im[ω_k^C], which is set by the material properties and geometry of the PC (see the discussion above). We model the evolution of the exciton density in the reservoir, n_X, using the equation <cit.>:

∂n_X(𝐫,t)/∂t = P - n_X/τ_X - γ n_X |ψ(𝐫,t)|^2,

where τ_X is the exciton lifetime, P is the incoherent pumping power, and γ is the rate of polariton formation fed by the excitonic reservoir. Using (<ref>) and (<ref>), we calculate the polariton distribution in reciprocal space (Fig. <ref>). The colormaps demonstrate that EP condensation occurs at nonzero-momentum states: EPs condense at the minima, where the photon group velocity vanishes. At both points, one located at the M-point in k-space and the other between the Γ and K points (see Fig. <ref>), we observe threshold-like behavior. At small, under-threshold pumping powers, the particles are thermally distributed at high energies above the ground state(s), as seen in Fig. <ref>a. The minima remain nearly unoccupied.
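The threshold behavior encoded in these two coupled equations can be illustrated with a zero-dimensional (single-mode) reduction that drops the spatial and spectral structure and works in the frame rotating at the mode frequency; all parameter values below are dimensionless and purely illustrative. In this reduction the condensation threshold is P_th = 1/(γτ τ_X), and above it the condensate density saturates at |ψ|² = (Pγτ - 1/τ_X)/γ.

```python
import numpy as np

# Single-mode (zero-dimensional) reduction of the condensate/reservoir model:
# all parameters are dimensionless and illustrative, not from the paper.
tau, tau_X = 5.0, 100.0     # polariton and exciton lifetimes
gamma, alpha = 0.1, 0.01    # gain rate and interaction strength
P = 0.05                    # pump; threshold is 1/(gamma*tau*tau_X) = 0.02
dt, steps = 0.01, 400_000

psi = 1e-3 + 0j             # small seed amplitude
n_X = 0.0                   # reservoir density

for _ in range(steps):
    # d(psi)/dt = [gamma*n_X/2 - 1/(2*tau) - i*alpha*|psi|^2] * psi
    dpsi = (0.5 * gamma * n_X - 0.5 / tau - 1j * alpha * abs(psi)**2) * psi
    # d(n_X)/dt = P - n_X/tau_X - gamma*n_X*|psi|^2
    dn = P - n_X / tau_X - gamma * n_X * abs(psi)**2
    psi += dt * dpsi
    n_X += dt * dn

# Steady state: n_X -> 1/(gamma*tau) = 2, |psi|^2 -> (P*gamma*tau - 1/tau_X)/gamma = 0.15
print(abs(psi)**2, n_X)
```

Below P_th the same integration leaves |ψ|² at the seed level, reproducing the threshold-like onset described in the text.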
With an increase of pumping power (Fig. <ref>b), the particles start to accumulate at the inflection points of the dispersion and we observe the bottleneck effect <cit.>. The last panel (Fig. <ref>c) corresponds to above-threshold pumping, when the particles start to Bose-condense at the minima. Figure <ref> illustrates that the condensate formation varies between different points in k-space. We attribute this to the difference in the detunings of the exciton energy and the PC photon energy. Consequently, as expected, the less-detuned M point is more susceptible to condensation and exhibits a lower threshold.

§ CONCLUSIONS

We have demonstrated the formation of organic exciton polaritons in a two-dimensional photonic crystal formed by a triangular lattice of AlN pillars, and shown that Bose-Einstein condensation can take place at the minima of the band diagram where the photon group velocity equals zero. Such a dispersion acts as a set of traps for particles, and it can be employed to achieve polariton condensation at non-zero momenta, which may be useful, for example, in valleytronics <cit.> and for spontaneous symmetry breaking. It should also be mentioned that one can replace our periodic (solid-state) crystal with an optical lattice produced by crossed laser beams, as in cold atomic systems; the optical properties of the PC then become easy to vary in situ. In the framework of our model, we found different particle densities at different points in k-space, controlled by the exciton-photon detuning, which varies from point to point. In contrast to Bose-Einstein condensation in conventional quantum wells based on inorganic semiconductors, here organic materials with high molecular orientation provide selective coupling with TM (as opposed to TE) polarized modes and produce strong coupling due to the giant magnitude of the dipole moment compared with regular inorganic excitons.

§ ACKNOWLEDGEMENT

We thank T.
Ellenbogen for the suggestion of this research project and useful discussions, and Joel Rasmussen (RECON) for a critical reading of our manuscript. We acknowledge support of the IBS-R024-D1, the Australian Research Council Discovery Projects funding scheme (Project No. DE160100167), President of Russian Federation (Project No. MK-5903.2016.2), and Dynasty Foundation. D.V.K. thanks the IBS Center of Theoretical Physics of Complex Systems for hospitality. 99 Daska2014 K. S. Daskalakis, S. A. Maier, R. Murray, S. Kéna-Cohen, Nonlinear interactions in an organic polariton condensate, Nat. Mater. 13, 271-278 (2014).Plumhof2013 J. D. Plumhof, T. Stöferle, L. Mai, U. Scherf, R. F. Mahrt, Room-temperature Bose-Einstein condensation of cavity exciton-polaritons in a polymer, Nat. Mater. 13, 247-252 (2013).Crist2007 S. Christopoulos, G. B. H. von Högersthal, A. J. D. Grundy, P.G. Lagoudakis, A. V. Kavokin, J. J. Baumberg, G. Christmann, R. Butté, E. Feltin, J.-F. Carlin, N. Grandjean, Room-Temperature Polariton Lasing in Semiconductor Microcavities, Phys. Rev. Lett. 98, 126405 (2007).Kaname K. Goto, K. Yamashita, H. Yanagi, T. Yamao and S. Hotta, Strong exciton-photon coupling in organic single crystal microcavity with high molecular orientation, Appl. Phys. Lett. 109, 061101 (2016).Yamao T. Yamao, K. Yamamoto, Y. Taniguchi, and S. Hotta, Spectrally narrowed emissions occurring near an interface between a single crystal thiophene/phenylene co-oligomer and a glass substrate, Appl. Phys. Lett. 91, 201117 (2007). NojimaBasic S. Nojima, Jpn. J. Appl. Phys., Part 2 37, L565 (1998).Notomi M. Notomi, H. Suzuki, and T. Tamamura, Appl. Phys. Lett. 78, 1325 (2001).Nojima2001 S. Nojima, J. Phys. Soc. of Japan 70(11), 3432-3445 (2001). NojimaPRB S. Nojima, Photonic-crystal laser mediated by polaritons, Phys. Rev. B 61, 9940(2000).PRX J.-H. Jiang and S. John, Photonic Crystal Architecture for Room-Temperature Equilibrium Bose-Einstein Condensation of Exciton Polaritons, Phys. Rev. 
X 4, 031025 (2014). GeraceExp D. Bajoni, D. Gerace, M. Galli, J. Bloch, R. Braive, I. Sagnes, A. Miard, A. Lemaître, M. Patrini, and L. C. Andreani, Phys. Rev. B 80, 201308 (2009). reflect S. Boutami, B. B. Bakir, H. Hattori, X. Letartre, J. L. Leclerq, P. Rojo-Romeo, M. Garrigues, C. Seassal and P. Viktorovitch, Broadband and compact 2D photonic crystal reflectors with controllable polarization dependence, IEEE Photon. Technol. Lett. 18, 835 (2006). vecs J. Mouette, C. Seassal, X. Letame, P. Rojo-Romeo, J.-L. Leclercq, P. Regreny, P. Viktorovitch, E. Jalaguier, P. Perreau and H. Moriceau, Very low threshold vertical emitting laser operation in InP graphite photonic crystal slab on silicon, IEEE Electron. Lett. 39, 526 (2002). GeraceTheory D. Gerace and L. C. Andreani, Phys. Rev. B 75, 235325 (2007). CosendeyApl G. Cosendey, A. Castiglia, G. Rossbach, J.-F. Carlin, N. Grandjean, Blue monolithic AlInN-based vertical cavity surface emitting laser diode on free-standing GaN substrate, Appl. Phys. Lett. 101, 151113 (2012). Plihal M. Plihal, A. Shambrook, A. A. Maradudin, Two-dimensional photonic band structures, Optics Comm. 80(3-4), 199-204 (1991). PlihalHex M. Plihal and A. A. Maradudin, Photonic band structure of two-dimensional systems: The triangular lattice, Phys. Rev. B 44, 8565 (1991). aln J. Kischkat, S. Peters, B. Gruska, M. Semtsiv, M. Chashnikova, M. Klinkmüller, O. Fedosenko, S. Machulik, A. Aleksandrova, G. Monastyrskyi, Y. Flores, and W. T. Masselink, Mid-infrared optical properties of thin films of aluminum oxide, titanium dioxide, silicon dioxide, aluminum nitride, and silicon nitride, Appl. Opt. 51, 6789-6798 (2012). VanHove L. Van Hove, The Occurrence of Singularities in the Elastic Frequency Distribution of a Crystal, Phys. Rev. 89, 1189 (1953). Ferrier L. Ferrier, P. Rojo-Romeo, E. Drouard, X. Letartre, and P. Viktorovitch, Optics Express 16(5), 3145 (2007). Sauvan2005 C. Sauvan, P. Lalanne and J. P.
Hugonin, Slow-wave effect and mode-profile matching in photonic crystal microcavities, Phys. Rev. B 71, 165118 (2005). appl1 B. Ben Bakir, Ch. Seassal, X. Letartre and P. Viktorovitch, Surface-emitting microlaser combining two-dimensional photonic crystal membrane and vertical Bragg mirror, Appl. Phys. Lett. 88, 081113 (2006). appl2 J. Mouette, C. Seassal, X. Letame, P. Rojo-Romeo, J.-L. Leclercq, P. Regreny, P. Viktorovitch, E. Jalaguier, P. Perreau and H. Moriceau, Very low threshold vertical emitting laser operation in InP graphite photonic crystal slab on silicon, IEEE Electron. Lett. 39, 526 (2002). ourAPL2016 D. V. Karpov and I. G. Savenko, Operation of a semiconductor microcavity under electric excitation, Applied Physics Letters 109(6), 061110 (2016). Kukushkin2016 V. N. Bessolov, D. V. Karpov, E. V. Konenkova, A. A. Lipovskii, A. V. Osipov, A. V. Redkov, I. P. Soshnikov, S. A. Kukushkin, Pendeo-epitaxy of stress-free AlN layer on a profiled SiC/Si substrate, Thin Solid Films 606, 74-79 (2016). BP2T H. Tamura, I. Hamada, H. Shang, K. Oniwa, Md. Akhtaruzzaman, T. Jin, N. Asao, Y. Yamamoto, T. Kanagasekaran, H. Shimotani, S. Ikeda, and K. Tanigaki, Theoretical Analysis on the Optoelectronic Properties of Single Crystals of Thiophene-furan-phenylene Co-Oligomers: Efficient Photoluminescence due to Molecular Bending, Phys. Chem. C 117(16), 8072-8078 (2013). Org1 D. G. Lidzey, D. C. Bradley, M. S. Skolnick, T. Virgili, S. Walker, D. M. Whittaker, Strong exciton-photon coupling in an organic semiconductor microcavity, Nature 395, 53-55 (1998). Org2 D. G. Lidzey, D. D. C. Bradley, T. Virgili, A. Armitage, M. S. Skolnick, S. Walker, Room Temperature Polariton Emission from Strongly Coupled Organic Semiconductor Microcavities, Phys. Rev. Lett. 82, 3316-3319 (1999). Org3 V. Agranovich, H. Benisty, C. Weisbuch, Organic and inorganic quantum wells in a microcavity: Frenkel-Wannier-Mott excitons hybridization and energy transformation, Solid State Commun.
102, 631-636 (1997). bottleneck F. Tassone and Y. Yamamoto, Exciton-exciton scattering dynamics in a semiconductor microcavity and stimulated scattering into polaritons, Phys. Rev. B 59, 10830 (1999). Wouters M. Wouters and I. Carusotto, Excitations in a nonequilibrium Bose-Einstein condensate of exciton polaritons, Phys. Rev. Lett. 99, 140402 (2007). bootleneck1 F. Stokker-Cheregi, A. Vinattieri, F. Semond, M. Leroux, I. R. Sellers, J. Massies, D. Solnyshkov, G. Malpuech, M. Colocci, and M. Gurioli, Polariton relaxation bottleneck and its thermal suppression in bulk GaN microcavities, Applied Physics Letters 95(4), 042119 (2008). bootleneck2 A. Imamoglu, R. J. Ram, S. Pau, and Y. Yamamoto, Phys. Rev. A 53, 4250 (1996). RefValley M. Sun, I. G. Savenko, H. Flayac, T. C. H. Liew, Multivalley engineering in semiconductor microcavities, arXiv:1610.05473, to appear in Scientific Reports (2017).
http://arxiv.org/abs/1702.08015v1
{ "authors": [ "D. V. Karpov", "I. G. Savenko" ], "categories": [ "physics.optics", "cond-mat.mes-hall", "quant-ph" ], "primary_category": "physics.optics", "published": "20170226102124", "title": "Polariton condensation in photonic crystals with high molecular orientation" }
Laboratoire Kastler Brossel, ENS-PSL Research University, CNRS, UPMC, Collège de France, 24 rue Lhomond, 75005 Paris

zwu@mail.tsinghua.edu.cn
Institute for Advanced Study, Tsinghua University, Beijing, 100084, China

Laboratoire Kastler Brossel, ENS-PSL Research University, CNRS, UPMC, Collège de France, 24 rue Lhomond, 75005 Paris

Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark

We present a mixed-dimensional atomic gas system to unambiguously detect and systematically probe mediated interactions. In our scheme, fermionic atoms are confined in two parallel planes and interact via exchange of elementary excitations in a three-dimensional background gas. This interaction gives rise to a frequency shift of the out-of-phase dipole oscillations of the two clouds, which we calculate using a strong-coupling theory taking the two-body mixed-dimensional scattering into account exactly. The shift is shown to be easily measurable for strong interactions and can be used as a probe for mediated interactions.

Long range mediated interactions in a mixed dimensional system
Georg M. Bruun
December 30, 2023

Mediated interactions were originally introduced to provide a quantum-mechanical explanation for the peculiar “action at a distance” interactions like gravity and electromagnetism, and they now constitute a major overarching paradigm in physics. In particle physics, exchange of gauge bosons is responsible for the propagation of fundamental interactions <cit.>. In condensed matter, the attraction between the electrons in BCS superconductors arises from the exchange of lattice phonons <cit.>, and it is speculated that the mechanism behind high-T_c superconductivity lies in the exchange of spin fluctuations <cit.>.
The concept of mediated interactions is also important in classical physics, where fluctuations of classical fields are responsible for phenomena such as the finite-temperature Casimir effect in electrodynamics <cit.> and in biophysics <cit.>. Ultracold atoms have emerged as a versatile platform for the investigation of many-body physics, and a host of schemes have been proposed to explore mediated interactions using these systems. For instance, mediated interactions lead to the formation of a p-wave superfluid in spin-imbalanced fermionic systems <cit.>; they are responsible for the formation of a topological superfluid with a high critical temperature in 2D systems <cit.>, and in 1D quantum liquids they have been shown to result in Casimir-like forces between impurities <cit.>. In most cases, however, the mediated interaction is weak and in competition with direct interactions between atoms, making its experimental observation challenging.

In this paper, we apply the mixed-dimensional setup proposed in <cit.> and illustrated in Fig. <ref> to study mediated interactions. Specifically, we consider two parallel layers located at z_1=0 and z_2=d, which contain an equal number of spin-polarized non-interacting fermions (A-species). The layers are immersed in a uniform 3D gas of interacting spin-1/2 fermions (B-species), which can be tuned through the BEC-BCS crossover. The presence of the 3D gas induces a mediated interaction between the A-particles: one A-particle locally perturbs the surrounding B-particles, thereby inducing excitations in the 3D gas, which in turn affect the dynamics of a second A-particle. If the A-particles are harmonically trapped, this mediated coupling leads to a beating between the oscillations in the two planes. Measuring the beating frequency between the 2D clouds therefore gives access to the strength of the mediated interaction.
This scheme is similar to Coulomb drag experiments in bilayer electronic systems <cit.>, which were recently generalized to the case of dipolar gases <cit.>. To analyze the dynamics of this system, we develop a systematic many-body theory for the mediated inter-plane interaction that includes the low-energy mixed-dimensional A-B scattering exactly. We then derive an expression for the associated interaction energy between the two planes and calculate the frequency of the out-of-phase dipole oscillations of the 2D clouds in the xy-plane. In the weak A-B interaction limit, our results recover the perturbative expression for a mediated interaction proportional to the density-density response function of the 3D gas. In the strong A-B interaction limit, however, the weak-coupling result breaks down completely. In the latter case we focus on the BEC regime of the 3D gas and show that the mediated interaction gives rise to a significant and easily detectable shift in the out-of-phase dipole oscillation frequency of the two clouds.

2D-3D scattering.– The interaction between the A and B particles is short range and can be characterised by an effective 2D-3D scattering length a_eff <cit.>. Solving for the scattering matrix in the many-body medium yields

𝒯_AB(p_⊥,iω_ν) = g/[1 - gΠ(p_⊥,iω_ν)],

where g = 2π a_eff/√(m_B m_r), and m_r = m_A m_B/(m_A+m_B) is the reduced mass (ħ = k_B = 1). Here m_A denotes the mass of an A-fermion and m_B that of the scattering particle in the 3D gas, namely the mass of a B-fermion (dimer) in the BCS (BEC) regime. Π(p_⊥,iω_ν) is the renormalised 2D-3D pair propagator for the center-of-mass (COM) momentum p_⊥ = (p_x,p_y) in the plane, and iω_ν is either a bosonic (BCS regime) or fermionic (BEC regime) Matsubara frequency. Equation (<ref>) includes many-body effects in the ladder approximation (see the Supplemental Material), and recovers the correct low-energy 2D-3D scattering matrix in vacuum <cit.>.
Mediated interaction for weak 2D-3D interaction.– Consider first the case of a weak 2D-3D interaction, where a_eff is much smaller than the interparticle spacing of the A and B particles. We then have 𝒯_AB ≃ g from (<ref>), and second-order perturbation theory gives

V_m.i.(q_⊥,iω_ν) = g^2 ∫_-∞^∞ dq_z e^{iq_z d} χ_B(q_⊥,q_z,iω_ν),

which describes the mediated interaction between two A-particles in different planes. Here (q_⊥,iω_ν) = (q_x,q_y,iω_ν) are the transferred momentum and frequency, and χ_B(q_⊥,q_z,iω_ν) is the density-density response function of the B-cloud. The integration over the momentum q_z comes from the fact that it is not conserved in the 2D-3D scattering. Deep in the BCS limit, where the B-fermions form an ideal Fermi gas, the mediated interaction (<ref>) is of the form of a Ruderman-Kittel-Kasuya-Yosida potential <cit.>. When the B-fermions are deep in the BEC limit, where they form a weakly interacting BEC of dimers, the mediated interaction takes the form of a Yukawa potential <cit.>. At zero frequency, Fourier transforming (<ref>) back to real space gives

V_m.i.(r) = (g^2 m_B/16π^3) [2p_F r cos(2p_F r) - sin(2p_F r)]/r^4 (BCS limit),
V_m.i.(r) = -(g^2 n_B m_B/π r) e^{-√2 r/ξ_B} (BEC limit),

where p_F is the Fermi momentum of the 3D Fermi gas in the BCS regime, and n_B is the density of the 3D BEC of dimers with coherence length ξ_B = 1/√(8π n_B a_B). Here, a_B = 0.6 a_BB is the scattering length between the deeply bound dimers of B-fermions <cit.>.

Mediated interaction for strong 2D-3D interaction.– For a strong 2D-3D interaction, where a_eff is comparable to or larger than the interparticle spacing, the mediated interaction between the two layers takes on a more complex form. The reason is that we need to retain the full COM momentum and frequency dependence of the 2D-3D scattering matrix given by (<ref>).
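As a quick consistency check of the BCS- and BEC-limit potentials above, they can be evaluated numerically. The sketch below (our own, in Python, with ħ = 1 and all arguments in dimensionless units; the function names are not from the paper) exhibits the short-range attraction and sign oscillations of the RKKY-type potential, and the monotonic exponential decay of the Yukawa-type potential.

```python
import math

def v_bcs(r, g, m_B, p_F):
    # RKKY-type mediated potential in the BCS limit (hbar = 1):
    # V(r) = g^2 m_B / (16 pi^3) * [2 p_F r cos(2 p_F r) - sin(2 p_F r)] / r^4
    x = 2.0 * p_F * r
    return g**2 * m_B / (16.0 * math.pi**3) * (x * math.cos(x) - math.sin(x)) / r**4

def v_bec(r, g, m_B, n_B, xi_B):
    # Yukawa-type mediated potential in the BEC limit:
    # V(r) = -g^2 n_B m_B / (pi r) * exp(-sqrt(2) r / xi_B)
    return -g**2 * n_B * m_B / (math.pi * r) * math.exp(-math.sqrt(2.0) * r / xi_B)

# Short range: both limits are attractive; at larger r the BCS potential
# oscillates in sign while the BEC potential decays monotonically.
print(v_bcs(0.1, 1.0, 1.0, 1.0) < 0.0, v_bcs(2.3, 1.0, 1.0, 1.0) > 0.0)
print(v_bec(1.0, 1.0, 1.0, 1.0, 1.0) < 0.0)
```

The sign change of the BCS-limit potential reflects the 2p_F Friedel-type oscillations of the fermionic medium, while the BEC-limit potential is attractive at all distances.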
We shall from now on concentrate on the BEC limit of the B-fermions, namely when they form a weakly interacting BEC of dimers, which can be treated within Bogoliubov theory. The mediated interaction between the A-particles is calculated including all processes where a single Bogoliubov phonon in the BEC is exchanged between the two layers. In a diagrammatic language, these processes are shown in Fig. <ref> (a). Summing up the contributions from the four terms in Fig. <ref> (a) gives

V_m.i.(p_1,p_2;q) = n_B 𝒯_AB(p_1+q)𝒯_AB(p_2) G̅^B_11(q_⊥,iω_ν) + n_B 𝒯_AB(p_1)𝒯_AB(p_2-q) G̅^B_11(-q_⊥,-iω_ν) + n_B 𝒯_AB(p_1+q)𝒯_AB(p_2-q) G̅^B_12(q_⊥,iω_ν) + n_B 𝒯_AB(p_1)𝒯_AB(p_2) G̅^B_21(q_⊥,iω_ν),

where p_1 ≡ (p_1⊥,iω_m_1), p_2 ≡ (p_2⊥,iω_m_2), and q ≡ (q_⊥,iω_ν). Here ω_m = (2m+1)π/β and ω_ν = 2νπ/β are Fermi and Bose Matsubara frequencies respectively, where β = 1/T is the inverse temperature and m and ν are integers. In (<ref>), the Green's functions of the BEC are integrated over the z-component of the momentum as

G̅^B_αβ(q_⊥,iω_ν) ≡ ∫_-∞^∞ (dq_z/2π) G^B_αβ(q_⊥,q_z,iω_ν) e^{iq_z d}.

The Green's functions of the 3D BEC are as usual

G^B_11(k,iω_ν) = u_k^2/(iω_ν - E_k) - v_k^2/(iω_ν + E_k), G^B_12(k,iω_ν) = g_B n_B/(ω_ν^2 + E_k^2),

where k = (k_⊥,k_z) and G^B_21(k,iω_ν) = G^B_12(k,iω_ν). We have defined u_k^2, v_k^2 = [(ε_k + g_B n_B)/E_k ± 1]/2, E_k = √(ε_k(ε_k + 2g_B n_B)) is the Bogoliubov spectrum with ε_k = k^2/2m_B, and g_B = 4π a_B/m_B. Note that the mediated interaction (<ref>) depends on both p_1 and p_2 as well as q due to the momentum and frequency dependence of the 2D-3D scattering. In fact, in the weak-interaction limit 𝒯_AB ≃ g, one recovers (<ref>) from the more general expression (<ref>).

Thermodynamic potential.– We now derive an expression for the correction to the thermodynamic potential Ω due to the mediated interaction between the two planes for a general strength of the 2D-3D interaction. The dominant contribution is the Hartree term illustrated in Fig. <ref> (b).
For a homogeneous system, this term gives the correction per unit area as (for the rest of the paper the ⊥ subscript will be dropped in the vector notation and all bold-face letters now denote in-plane 2D vectors)

Ω̅_m.i. = (1/β^2) ∑_{m_1,m_2} ∫ [d^2p_1/(2π)^2] [d^2p_2/(2π)^2] V_m.i.(p_1,p_2;0) G^A_1(p_1,iω_m_1) G^A_2(p_2,iω_m_2),

where G^A_j(p,iω_m) = 1/(iω_m - p^2/2m_A + μ_A) is the Green's function for the A-fermions in the j-th layer, with μ_A being the chemical potential. Using (<ref>) together with the identity 2G̅_11(0,0) + 2G̅_12(0,0) = -√2 n_B m_B ξ_B exp(-√2 d/ξ_B) yields

Ω̅_m.i. = -√2 m_B ξ_B n_B e^{-√2 d/ξ_B} Ω̅_1 Ω̅_2,

where

Ω̅_j = (1/β) ∑_m ∫ [d^2p/(2π)^2] 𝒯_AB(p,iω_m) G^A_j(p,iω_m).

We point out that the Matsubara frequency summation in the above expression can in fact be performed analytically (see the Supplemental Material), which greatly simplifies the numerical calculation of the thermodynamic potential density.

Local-density approximation.– Using the local-density approximation, we can generalize (<ref>), which was derived assuming a homogeneous system, to the case of trapped 2D Fermi clouds. This yields the total correction

Ω_m.i.(d_1-d_2) = ∫ d^2r_1 d^2r_2 [2G̅^B_11(r_1-r_2,0) + 2G̅^B_12(r_1-r_2,0)] Ω̅_1(r_1-d_1) Ω̅_2(r_2-d_2),

where G̅^B_ij(r,0) is the Fourier transform of G̅^B_ij(q,0) back to real 2D space, and Ω̅_j(r) is given by (<ref>) using a local chemical potential μ_A(r) = μ_A + m_A ω_⊥^2 r^2/2. In (<ref>), we have allowed the two A-clouds to be rigidly displaced by distances d_1 and d_2 along the x-axis in order to analyse their coupled dipole oscillations, see Fig. <ref>. Since G̅^B_ij already contains a Fourier transform with respect to the z-momentum, see (<ref>), the bosonic Green's functions entering (<ref>) now simply add up to the density-density correlation function of the BEC evaluated at the 3D real-space distance r = |r_1 - r_2 + d e_z|, with e_z the unit vector normal to the planes. Using this, we finally obtain

Ω_m.i.(d_1-d_2) = -(m_B n_B/π) ∫ d^2r_1 d^2r_2 (e^{-√2 r/ξ_B}/r) Ω̅_1(r_1-d_1) Ω̅_2(r_2-d_2).

Equation (<ref>) can be understood as follows.
Consider two area elements of the 2D gases, one located at r_1 - d_1 in layer 1 and the other at r_2 - d_2 in layer 2. The contribution from these two elements can be approximated by the expression in (<ref>), in which the relative distance is taken to be r instead of d. Equation (<ref>) then sums up all such contributions in the two clouds. For weak interactions, we see from (<ref>) that Ω̅_j(r_j - d_j) = g n_j(r_j - d_j), where n_j(r_j - d_j) denotes the equilibrium fermion density in layer j rigidly displaced by the distance d_j along the x-axis. Equation (<ref>) then simplifies to

Ω_m.i.(d_1-d_2) = -(g^2 m_B n_B/π) ∫ d^2r_1 d^2r_2 (e^{-√2 r/ξ_B}/r) n_1(r_1-d_1) n_2(r_2-d_2),

which is the usual Hartree approximation for the interaction energy between the two planes mediated by a Yukawa interaction.

Coupled dipole oscillations.– Consider now the situation where the two clouds perform dipole oscillations around their equilibrium positions, see Fig. <ref>. For small displacements d_1 and d_2, the COM velocities and the beating frequencies are small compared to the speed of sound in the 3D gas and the trapping frequencies, respectively, yielding rigid and undamped oscillations of the 2D clouds <cit.>. The COM dynamics is then determined by the energy increase δE associated with the displacements of the clouds. For rigid displacements, we have δE = Ω_m.i.(d_1-d_2) - Ω_m.i.(0) + [μ_A(d_1) + μ_A(d_2) - 2μ_A]N_A, which gives

δE(d_1,d_2) = (1/2) N_A m_A ω_⊥^2 (d_1^2 + d_2^2) + Ω_m.i.(d_1-d_2) - Ω_m.i.(0),

where N_A is the number of fermions in each layer.
Taylor expanding Ω_m.i.(d_1-d_2) to second order in d_1-d_2, we readily see that the motion of the two clouds separates into an in-phase oscillation with frequency ω_⊥, and an out-of-phase oscillation with frequency

ω_r = ω_⊥ √(1 + 2I/(N_A m_A ω_⊥^2)),

where

I = ∂^2 Ω_m.i.(d_1-d_2)/∂d_1^2 |_{d_1-d_2=0}.

The microscopic expression for ω_r for an arbitrary strength of the 2D-3D interaction in terms of (<ref>), (<ref>), (<ref>), and (<ref>) is the main result of this letter, and it explicitly shows how the mediated interaction can be probed by measuring the frequency of the out-of-phase dipole oscillations of the two clouds.

Results.– We now calculate the frequency ω_r for a realistic cold-atom system consisting of N_A = 1000 ^40K atoms trapped in each plane, immersed in a 3D BEC of ^6Li dimers. The transverse trapping frequency for the ^40K clouds is ω_⊥ = 2π × 380 Hz, the density of the BEC is n_B = 10^18 m^-3, and the coherence length is ξ_B = 2.7 μm. We furthermore assume that the temperature is zero. In Fig. <ref>, we show the frequency ω_r/ω_⊥ as a function of the 2D-3D interaction strength 1/k_F a_eff at a fixed interlayer distance d = 0.4 μm. The frequency increases monotonically as a_eff increases. For weak interactions, it agrees with the second-order result (dashed line). For stronger interactions, the full frequency/momentum dependence of the 2D-3D scattering is important, and the perturbative result deviates significantly from the full strong-coupling theory. In particular, whereas the perturbative result diverges for 1/k_F a_eff → 0, the strong-coupling theory predicts a finite frequency saturating at ω_r ≃ 1.48 ω_⊥. Importantly, the frequency shift becomes significant for -2 ≲ 1/k_F a_eff ≤ 0, which includes a region sufficiently far from unitarity so that the predicted 3-body loss is small <cit.>.
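The scaling of the out-of-phase frequency with the curvature I of the mediated interaction energy can be made concrete in a few lines. The sketch below is a hedged illustration, not the paper's numerics: the values of I fed in are placeholders, since computing I requires the full integrals above.

```python
import math

def omega_r(omega_perp, I, N_A, m_A):
    # Out-of-phase dipole frequency:
    # omega_r = omega_perp * sqrt(1 + 2 I / (N_A m_A omega_perp^2))
    return omega_perp * math.sqrt(1.0 + 2.0 * I / (N_A * m_A * omega_perp**2))

omega_perp = 2.0 * math.pi * 380.0   # transverse trap frequency from the text
N_A, m_A = 1000, 1.0                 # m_A in arbitrary units; I below is a placeholder
for frac in (0.0, 0.1, 0.5):         # curvature as a fraction of N_A m_A omega_perp^2
    I = frac * N_A * m_A * omega_perp**2
    print(omega_r(omega_perp, I, N_A, m_A) / omega_perp)
```

For I = 0 the two clouds decouple and ω_r = ω_⊥; an attractive inter-plane interaction (I > 0) stiffens the relative mode and raises ω_r.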
This demonstrates the usefulness of our proposal to detect mediated interactions. Note that this result can only be obtained using a strong-coupling theory, since the perturbative result is only accurate for weak interactions, where the frequency shift is minute. In Fig. <ref>, we plot ω_r/ω_⊥ as a function of the ratio of the interparticle distances n_B^{1/3}/n_F^{1/2} (keeping n_F fixed), with 1/k_F a_eff = -0.1 and all other physical parameters the same as for Fig. <ref>a. The density of the BEC enters the mediated interaction in two ways, which is most clearly seen in the weak-coupling limit given by (<ref>): first, the strength of the interaction is proportional to n_B; second, the range of the interaction is determined by the BEC coherence length ξ_B ∝ 1/√(n_B). Thus, increasing the density increases the strength but reduces the range of the mediated interaction, and it is not a priori obvious what the net effect on the frequency shift will be. From Fig. <ref>, we see that for the chosen parameters, ω_r in fact increases monotonically with increasing BEC density [We restrict all figures to negative values of the 2D-3D scattering length. Indeed, for 1/k_F a_eff > 0, a 2D fermion can form a bound-dimer state with a 3D boson. The frequency shift in this region therefore depends on whether the system forms these dimers, or whether it is on the so-called repulsive branch, where the effective 2D-3D interaction is repulsive. This complicates the analysis, which will be presented in a future publication.].

Conclusions.– We demonstrated that a mixed-dimensional setup consisting of two layers of identical fermions immersed in a 3D background gas is a powerful probe to investigate mediated interactions systematically. The mediated interaction between the two layers modifies the out-of-phase dipole oscillation frequency of the 2D clouds, and we calculate this shift using a strong-coupling theory taking into account the low-energy scattering between the 2D and 3D particles.
Using this theory, we showed that for strong 2D-3D coupling the resulting frequency shift is clearly measurable. Finally, we note that the advantages of our proposal are twofold. First, if the 2D trapping is realized using optical potentials, the distance between the planes is a few hundred nanometres, which is much larger than the range of interatomic interactions. Any observed coupling between the two planes is therefore solely due to an interaction mediated via the 3D gas. Second, the shift of the center-of-mass oscillation frequency is a very precise spectroscopic tool that can be used as a probe of weak interactions, as demonstrated recently in <cit.>.

FC and DS acknowledge support from Région Ile de France (DIM IFRAF/NanoK), ANR (Grant SpiFBox) and the European Union (ERC Grant ThermoDynaMix). GMB and ZW wish to acknowledge the support of the Villum Foundation via Grant No. VKR023163. DS and ZW contributed equally to this work.

weinberg1995 S. Weinberg. The Quantum Theory of Fields. Number vb. 1 in The Quantum Theory of Fields 3 Volume Hardback Set. Cambridge University Press, 1995. schrieffer1983 J. R. Schrieffer. Theory of Superconductivity. Advanced Book Program Series. Advanced Book Program, Perseus Books, 1983. Scalapino1995 D. J. Scalapino. The case for d_{x^2-y^2} pairing in the cuprate superconductors. Physics Reports, 250(6):329-365, 1995. milton2001casimir Kimball A. Milton. The Casimir effect: physical manifestations of zero-point energy. World Scientific, 2001. machta2012critical Benjamin B. Machta, Sarah L. Veatch, and James P. Sethna. Critical Casimir forces in cellular membranes. Physical Review Letters, 109(13):138101, 2012. bulgac2006ipw A. Bulgac, M. M. N. Forbes, and A. Schwenk. Induced P-wave superfluidity in asymmetric Fermi gases. Phys. Rev. Lett., 97:020402, 2006. lobo2006nsp C. Lobo, A. Recati, S. Giorgini, and S. Stringari. Normal state of a polarized Fermi gas at unitarity. Phys. Rev. Lett., 97(20):200403, 2006. Mora2010Normal C. Mora and F.
Chevy. Normal phase of an imbalanced Fermi gas. Phys. Rev. Lett., 104(23):230402, Jun 2010. yu2010comment Z. Yu, S. Zöllner, and C. J. Pethick. Comment on "Normal phase of an imbalanced Fermi gas". Phys. Rev. Lett., 105(18):188901, Oct 2010. Wu2016 Zhigang Wu and G. M. Bruun. Topological superfluid in a Fermi-Bose mixture with a high critical temperature. Phys. Rev. Lett., 117:245302, Dec 2016. Midtgaard2016 J. Melkær Midtgaard, Z. Wu, and G. M. Bruun. Topological superfluidity of lattice fermions inside a Bose-Einstein condensate. Phys. Rev. A, 94:063631, 2016. Caracanhas2017 M. A. Caracanhas, F. Schreck, and C. Morais Smith. Fermi-Bose mixture in mixed dimensions. arXiv:1701.04702, Jan 2017. schecter2014phonon Michael Schecter and Alex Kamenev. Phonon-mediated Casimir interaction between mobile impurities in one-dimensional quantum liquids. Physical Review Letters, 112(15):155301, 2014. Nishida2010 Yusuke Nishida. Phases of a bilayer Fermi gas. Phys. Rev. A, 82:011605, Jul 2010. rojo1999electron A. G. Rojo. Electron-drag effects in coupled electron systems. Journal of Physics: Condensed Matter, 11(5):R31, 1999. matveeva2011dipolar N. Matveeva, A. Recati, and S. Stringari. Dipolar drag in bilayer harmonically trapped gases. The European Physical Journal D, 65(1-2):219-222, 2011. nishida2008universal Y. Nishida and S. Tan. Universal Fermi gases in mixed dimensions. Phys. Rev. Lett., 101(17):170401, 2008. Nishida2009 Yusuke Nishida. Induced p-wave superfluidity in two dimensions: Brane world in cold atoms and nonrelativistic defect CFTs. Annals of Physics, 324(4):897-919, 2009. RKKY1 M. A. Ruderman and C. Kittel. Indirect exchange coupling of nuclear magnetic moments by conduction electrons. Phys. Rev., 96:99-102, Oct 1954. RKKY2 Tadao Kasuya. A theory of metallic ferro- and antiferromagnetism on Zener's model. Progress of Theoretical Physics, 16(1):45-57, 1956. RKKY3 Kei Yosida. Magnetic properties of Cu-Mn alloys. Phys.
Rev., 106:893-898, Jun 1957. Yukawa Hideki Yukawa. On the interaction of elementary particles. Proc. Phys. Math. Soc. Japan, 17(48), 1935. Petrov2004 D. S. Petrov, C. Salomon, and G. V. Shlyapnikov. Weakly bound dimers of fermionic atoms. Phys. Rev. Lett., 93:090404, Aug 2004. Ferrier2014Mixture I. Ferrier-Barbut, M. Delehaye, S. Laurent, A. T. Grier, M. Pierce, B. S. Rem, F. Chevy, and C. Salomon. A mixture of Bose and Fermi superfluids. Science, 345:1035-1038, 2014. nishida2011liberating Yusuke Nishida and Shina Tan. Liberating Efimov physics from three dimensions. Few-Body Systems, 51(2-4):191-206, 2011. Note1 We restrict all figures to negative values of the 2D-3D scattering length. Indeed, for 1/k_F a_eff > 0, a 2D fermion can form a bound-dimer state with a 3D boson. The frequency shift in this region therefore depends on whether the system forms these dimers, or whether it is on the so-called repulsive branch, where the effective 2D-3D interaction is repulsive. This complicates the analysis, which will be presented in a future publication. roy2016two Richard Roy, Alaina Green, Ryan Bowler, and Subhadeep Gupta. Two-element mixture of Bose and Fermi superfluids. arXiv:1607.03221, 2016.

Supplemental Material

Daniel Suchet^1, Zhigang Wu^2, Frédéric Chevy^1, and G. M. Bruun^3

^1 Laboratoire Kastler Brossel, ENS-PSL Research University, CNRS, UPMC, Collège de France, 24 rue Lhomond, 75005 Paris
^2 Institute for Advanced Study, Tsinghua University, Beijing, 100084, China
^3 Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark

§ 2D-3D SCATTERING MATRIX

We provide some details on the 2D-3D scattering matrix given in Eq. (1) of the main text. In the strong 2D-3D interaction limit and in the presence of a 3D BEC background, we need the in-medium scattering amplitude between a 2D fermion and a 3D boson to determine the mediated interaction.
In terms of the well-known T-matrix approximation, the scattering amplitude 𝒯_AB satisfies an integral equation represented diagrammatically in Fig. <ref>. Here the ⊥ subscript is used to distinguish in-plane 2D vectors from the 3D ones. Using the standard procedure, the scattering matrix can be expressed in terms of the 2D-3D zero-energy scattering amplitude in vacuum, g = 2π a_eff/√(m_B m_r), where m_r = m_A m_B/M is the reduced mass, M = m_A + m_B is the total mass, and a_eff is the 2D-3D scattering length. In doing so, it can be shown that 𝒯_AB only depends on the total momentum and frequency p_⊥ = p_1⊥ + p_2⊥ = p_3⊥ + p_4⊥ and ω_ν = ω_n_1 + ω_ν_2 = ω_n_3 + ω_ν_4. We find

𝒯_AB(p_⊥,iω_ν) = g/[1 - gΠ(p_⊥,iω_ν)].

Here Π(p_⊥,iω_ν) is the renormalised pair propagator, given by

Π(p_⊥,iω_ν) = ∫ [d^3p'/(2π)^3] { u_k_+^2 [1 + b(E_k_+) - f(ξ_k_-)]/[iω_ν - E_k_+ - ξ_k_-] + v_k_+^2 [b(E_k_+) + f(ξ_k_-)]/[iω_ν + E_k_+ - ξ_k_-] + 1/[p_z'^2/2m_B + p'^2_⊥/2m_r + i0^+] },

where p' = (p'_⊥,p'_z), k_+ ≡ (m_B/M)p_⊥ + p', k_- ≡ (m_A/M)p_⊥ - p'_⊥, and b(x) = 1/(e^{βx}-1) and f(x) = 1/(e^{βx}+1) are the Bose and Fermi distribution functions, respectively. For weakly interacting bosons, it is a good approximation to replace the normal Green's function G^B_11(k,iω_ν) by the non-interacting boson Green's function G^B_0(k,iω_ν) = 1/(iω_ν - ε_k + μ_B) in the scattering T-matrix. With this simplification, we find at T = 0

Π(p_⊥,iω_ν) = ∫ [d^3p'/(2π)^3] { [1 - θ(k_F - |(m_A/M)p_⊥ - p'_⊥|)]/[iω_ν - (p_⊥^2/2M + p'^2_⊥/2m_r + p_z'^2/2m_B) + μ_A] + 1/[p_z'^2/2m_B + p'^2_⊥/2m_r + i0^+] },

where k_F = √(2m_A μ_A) is the Fermi momentum of the A-species. Expressed in terms of dimensionless variables, the pair propagator is

Π(p_⊥,iω_ν) = 2m_A k_F ∫ [d^2p'_⊥/(2π)^2] ∫ (dp'_z/2π) { [1 - θ(1 - |p'_⊥ - α_A p_⊥|)]/[iω_ν - (α_A p_⊥^2 + α_B^{-1} p'^2_⊥ + α_A α_B^{-1} p_z'^2) + 1] + 1/[α_A α_B^{-1} p_z'^2 + α_B^{-1} p'^2_⊥ + i0^+] },

where α_A = m_A/M and α_B = m_B/M. Here the frequency variables are scaled in units of the chemical potential μ_A and the momentum variables in units of the Fermi momentum k_F.
We write Π(p_⊥,iω_ν) = Π_0(p_⊥,iω_ν) + ΔΠ(p_⊥,iω_ν), where

Π_0(p_⊥,iω_ν) ≡ 2m_A k_F ∫ [d^2p'_⊥/(2π)^2] ∫ (dp'_z/2π) { 1/[iω_ν - (α_A p_⊥^2 + α_B^{-1} p'^2_⊥ + α_A α_B^{-1} p_z'^2) + 1] + 1/[α_A α_B^{-1} p_z'^2 + α_B^{-1} p'^2_⊥ + i0^+] } = -(i m_A k_F/2π) α_A^{1/2} α_B^{-3/2} √(iω_ν + 1 - α_A p_⊥^2)

is the pair propagator in vacuum and

ΔΠ(p_⊥,iω_ν) ≡ -2m_A k_F ∫ [d^2p'_⊥/(2π)^2] ∫ (dp'_z/2π) θ(1 - |p'_⊥ - α_A p_⊥|)/[iω_ν - (α_A p_⊥^2 + α_B^{-1} p'^2_⊥ + α_A α_B^{-1} p_z'^2) + 1] = (i m_A k_F/√(α_A α_B^{-1})) ∫ [d^2p'_⊥/(2π)^2] θ(1 - |p'_⊥ - α_A p_⊥|)/√(iω_ν + 1 - α_A p_⊥^2 - α_B^{-1} p'^2_⊥)

is the medium correction. Here √z always denotes the root of the complex number z that lies in the upper half plane. From Eqs. (<ref>)-(<ref>) we find (from now on we drop the ⊥ sign from the 2D vectors)

Π(p,iω_ν) = -(i m_A k_F/2π^2) α_A^{1/2} α_B^{-3/2} ∫_0^{π/2} dθ [√(iω_ν - γ_+(θ,p)) - √(iω_ν - γ_-(θ,p))]

for α_A p ≤ 1. Here and in the following

γ_±(θ,p) ≡ α_B^{-1} p_±^2(θ) + α_A p^2 - 1, where p_±(θ) = ±α_A p cosθ + √(1 - α_A^2 p^2 sin^2θ).

For α_A p > 1 we find

Π(p,iω_ν) = -(i m_A k_F/2π^2) α_A^{1/2} α_B^{-3/2} { √(iω_ν - (α_A p^2 - 1)) + (1/π) ∫_0^{θ_0} dθ [√(iω_ν - γ_+(θ,p)) - √(iω_ν - γ_-(θ,p))] },

where θ_0 = sin^{-1}(1/α_A p).

§ CALCULATION OF Ω̄_J

We now determine Ω̄_j given in Eq. (9) of the main text, reproduced below:

Ω̄_j = (1/β) ∑_m ∫ [d^2p/(2π)^2] 𝒯_AB(p,iω_m) G^A_j(p,iω_m).

In terms of the dimensionless momenta and frequencies introduced earlier, we get

Ω̄_j = (2g m_A/β) ∑_m ∫ [d^2p/(2π)^2] 1/{[1 - gΠ(p,iω_m)][iω_m - (p^2-1)]} = (g m_A/πβ) ∑_m ∫_0^∞ dp p/{[1 - gΠ(p,iω_m)][iω_m - (p^2-1)]}.

Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>) and performing the Matsubara frequency summation, we find for a negative 2D-3D scattering length a_eff < 0, in the zero-temperature limit β → ∞,

Ω̄_j = 2 a_eff α_A^{1/2} α_B^{-1} μ_A ∫_0^1 dp S(p),

where

S(p) = 1/{1 - k_F a_eff √α_B (1/π) ∫_0^{π/2} dθ [√(1 - p^2 + γ_+(θ,p)) + √(1 - p^2 + γ_-(θ,p))]}.
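The final expression for Ω̄_j lends itself to direct numerical evaluation. A possible sketch (our own, using a simple midpoint rule; the mass ratio is a placeholder corresponding to a 40K atom and a 6Li2 dimer) evaluates γ_±(θ,p) and the angular integral entering S(p). In the weak-coupling limit k_F a_eff → 0, S(p) → 1 and Ω̄_j ∝ a_eff, as expected.

```python
import math

def gamma_pm(theta, p, alpha_A, alpha_B, sign):
    # gamma_pm(theta, p) = p_pm^2 / alpha_B + alpha_A p^2 - 1, with
    # p_pm = ± alpha_A p cos(theta) + sqrt(1 - alpha_A^2 p^2 sin^2(theta))
    root = math.sqrt(1.0 - (alpha_A * p * math.sin(theta))**2)
    p_pm = sign * alpha_A * p * math.cos(theta) + root
    return p_pm**2 / alpha_B + alpha_A * p**2 - 1.0

def S(p, kF_aeff, alpha_A, alpha_B, n=400):
    # midpoint rule for (1/pi) int_0^{pi/2} dtheta [sqrt(1-p^2+gamma_+) + sqrt(1-p^2+gamma_-)]
    h = 0.5 * math.pi / n
    acc = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        for s in (+1.0, -1.0):
            acc += math.sqrt(1.0 - p**2 + gamma_pm(th, p, alpha_A, alpha_B, s))
    integral = acc * h / math.pi
    return 1.0 / (1.0 - kF_aeff * math.sqrt(alpha_B) * integral)

# Mass fractions for a 40K fermion and a 6Li2 dimer: alpha_A = m_A/M, alpha_B = m_B/M
alpha_A = 40.0 / 52.0
alpha_B = 12.0 / 52.0
print(S(0.5, 0.0, alpha_A, alpha_B))            # -> 1.0 in the non-interacting limit
print(0.0 < S(0.5, -1.0, alpha_A, alpha_B) < 1.0)
```

For a_eff < 0 the denominator exceeds unity, so 0 < S(p) < 1, and the attractive in-medium scattering suppresses Ω̄_j relative to its weak-coupling value.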
http://arxiv.org/abs/1702.08129v1
{ "authors": [ "Daniel Suchet", "Zhigang Wu", "Frédéric Chevy", "Georg M. Bruun" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20170227030244", "title": "Long range mediated interactions in a mixed dimensional system" }
Simplified proposal for realizing multiqubit tunable phase gate in circuit QED Wen-An Li[E-mail: liwenan@126.com] and Yuan Chen December 30, 2023 ============================================================================== The capacity of a discrete-time multi-input multi-output (MIMO) Gaussian channel with output quantization is investigated for different receiver architectures. A general formulation of this problem is proposed in which the antenna outputs are processed by analog combiners while sign quantizers are used for analog-to-digital conversion. To exemplify this approach, four analog receiver architectures of varying generality and complexity are considered: (a) multiple antenna selection and sign quantization of the antenna outputs, (b) single antenna selection and multilevel quantization, (c) multiple antenna selection and multilevel quantization, and (d) linear combining of the antenna outputs and multilevel quantization. Achievable rates are studied as a function of the number of available sign quantizers and compared among the different architectures. In particular, it is shown that architecture (a) is sufficient to attain the optimal high signal-to-noise ratio performance for a MIMO receiver in which the number of antennas is larger than the number of sign quantizers. Numerical evaluations of the average performance are presented for the case in which the channel gains are i.i.d. Gaussian.

§ INTRODUCTION

Low-resolution quantization is an important technology for massive MIMO and millimeter-wave communication systems, as it allows the transceivers to operate at low power levels <cit.>. Although the performance of MIMO receivers with large antenna arrays and low-resolution quantizers has been investigated in the literature under different assumptions on the hardware limitations and antenna architectures, a complete fundamental information-theoretic understanding is currently not available.
In this paper, we propose a unified framework to analyze and compare low-resolution receiver architectures. More specifically, we assume that the receiver is comprised of N_SQ sign quantizers that process the antenna outputs. Each sign quantizer is connected to the antenna outputs via an analog combining circuit with limited processing capabilities. Through this general formulation, we study the effects of limited processing and low-resolution quantization on the capacity of MIMO channels. Op-amp voltage comparators are employed in nearly all analog-to-digital converters to obtain multilevel quantization. Given the receiver's ability to partially reconfigure its circuitry depending on the channel realization, it is of interest to determine which configuration of the comparators yields the largest capacity.

§.§.§ Literature Review

Quantization in MIMO systems is a well-investigated topic in the literature; for the sake of brevity, we focus here on the results regarding sign quantization.[In the literature, the term “one-bit quantization” most often refers to sign quantization of the antenna outputs. Here, as in <cit.>, we prefer the term “sign quantization” since we distinguish between sign and threshold quantization.] The authors in <cit.> are perhaps the first to point out that the capacity loss in MIMO channels due to coarse quantization is surprisingly small, although this observation is supported mostly through numerical evaluations.
In <cit.>, the authors derive fundamental properties of the capacity-achieving distribution for a single-input single-output (SISO) channel with output quantization. A lower bound on the capacity of sign-quantized MIMO channels with Gaussian inputs based on the Bussgang decomposition is derived in <cit.>. The high signal-to-noise ratio (SNR) asymptotics for complex MIMO channels with sign quantization are studied in <cit.>. For the SISO channel with threshold quantization, <cit.> shows that, in the limit of vanishing SNR, asymmetric quantizers outperform symmetric ones.

§.§.§ Contributions

We focus, in the following, on four analog receiver architectures with different levels of complexity: (a) multiple antenna selection and sign quantization, (b) single antenna selection and multilevel quantization, (c) multiple antenna selection and multilevel quantization, and (d) linear combining and multilevel quantization. Architecture (c) is more general than both (a) and (b), while (d) is the most general one. We study the SIMO and the MIMO channel and provide capacity bounds for each architecture as a function of the number of sign quantizers. For the SIMO channel, our results suggest conditions under which the capacity of the architecture with multiple antenna selection and multilevel quantization closely approaches that of the architecture with linear combining and multilevel quantization. For the MIMO channel with linear combining and multilevel quantization, we derive an approximately optimal usage of the sign quantizers as a variation of the classic water-filling power allocation scheme. This solution shows that, if the number of antennas at the receiver is larger than the number of sign quantizers, sign quantization is sufficient to attain the optimal performance in the high-SNR regime. Numerical evaluations are provided for the case in which the channel gains are i.i.d. Gaussian distributed.

§.§.§ Paper Organization

Sec. <ref> introduces the channel model.
Sec. <ref> reviews the results available for the case of sign quantization of the channel outputs. The main results are given in Sec. <ref>. Numerical evaluations are provided in Sec. <ref>. Sec. <ref> concludes the paper.

§.§.§ Notation

We adopt the standard notation H_2(x) = -x log x - (1-x) log(1-x) and Q(x) = 1/√(2π) ∫_x^{+∞} exp(-u^2/2) du. All logarithms are taken in base two. For the SISO model, we set h = 1 w.l.o.g.; for the MISO and SIMO models, we denote the channel matrix as h^T and h, respectively. For the MIMO case, the vector λ = [λ_1 … λ_{min{N_t,N_r}}] contains the eigenvalues of the matrix H H^T. The identity matrix of size n × n is indicated as I_n, and the all-zero/all-one matrices of size n × m as 0_{n×m}/1_{n×m}. Finally, P_π indicates the set of all permutation matrices.

§ CHANNEL MODEL

§.§.§ Problem Formulation

We consider a discrete-time real-valued MIMO channel with N_t transmit antennas and N_r receive antennas. At the n^th channel use, the antenna output vector w_n = [W_1,n … W_N_r,n]^T is obtained from the channel input vector x_n = [X_1,n … X_N_t,n]^T as w_n = H x_n + z_n, n ∈ [1 … N], where H is a full-rank matrix of size N_r × N_t [This condition guarantees the existence of a right pseudo-inverse for H and holds with high probability in a richly scattering environment.] and z_n is an N_r-vector of i.i.d. additive Gaussian noise samples with zero mean and unitary variance. The channel matrix H is assumed to be known at both transmitter and receiver and to be fixed throughout the transmission block-length N. The channel input vector is subject to the average power constraint ∑_{n=1}^N E[|x_n|_2^2] ≤ N P, where |x_n|_2 indicates the 2-norm. The antenna output vector is processed through N_SQ sign quantizers, each receiving a linear combination of the antenna output vector plus a constant,[It must be noted that generating a precise voltage reference is another major hurdle in analog-to-digital conversion. Although possible in our framework, in the following we do not consider such a limitation.]
i.e. y_n = sign(V w_n + t), n ∈ [1 … N], where V is the analog combining matrix of size N_SQ × N_r, t is a threshold vector of length N_SQ, and sign(·) is the function producing the sign of each component of its vector argument as plus or minus one, so that y_n ∈ {-1,+1}^N_SQ. For a given choice of combining matrix V and threshold vector t, the capacity of the model in (<ref>) is given by C(V,t) = max_{P_x(·), E[|x|_2^2] ≤ P} I(x; y), where we have explicitly expressed the dependency of the capacity on the parameters {V, t}.[The capacity C(V,t) is also a function of the channel matrix H, although not explicitly indicated.] The analog processing capabilities at the receiver are modeled as a set of feasible values of {V, t}, denoted as F. Our goal is to maximize the capacity expression in (<ref>) over F, namely C(F) = max_{{V,t} ∈ F} C(V,t).

§.§.§ Relevant Architectures

The formulation in (<ref>) attempts to capture the tension between the quantization of few antennas with high precision versus the quantization of many antennas with low precision. This is accomplished by treating the sign quantizers as a resource to be allocated optimally among a set of possible configurations F. Note that multilevel quantization can be obtained by using M-1 sign quantizers and appropriate thresholds t, resulting in log(M) information bits. It follows that sign quantization produces the most information bits per sign quantizer, and increasing the number of quantization levels increases the information bits only logarithmically.
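As a quick concreteness check of the model y_n = sign(V w_n + t) and of the multilevel-from-sign observation above, the following sketch (in Python; all names are ours, not from the paper) builds a combining matrix whose rows all select antenna 0 and whose thresholds are the level boundaries, then recovers the quantization level by counting positive quantizer outputs.

```python
def receiver(w, V, t):
    # y = sign(V w + t): each of the N_SQ rows of V is one analog combiner,
    # t holds the per-quantizer thresholds.
    def sign(v):
        return 1 if v > 0 else -1
    return [sign(sum(V[i][j] * w[j] for j in range(len(w))) + t[i])
            for i in range(len(V))]

# An (N_SQ+1)-level quantizer for antenna 0 (out of Nr antennas) from N_SQ
# sign quantizers: every row of V selects antenna 0, thresholds are the
# level boundaries (hypothetical values, for illustration only).
Nr, boundaries = 4, [-1.0, 0.0, 1.0]
V = [[1.0 if j == 0 else 0.0 for j in range(Nr)] for _ in boundaries]
t = [-b for b in boundaries]               # y_i = sign(w_0 - boundary_i)
w = [0.5, -2.0, 3.0, 0.0]                  # one antenna-output realization
y = receiver(w, V, t)
level = sum(1 for v in y if v > 0)         # quantization level in {0,...,N_SQ}
```

With three sign quantizers, the four recoverable levels carry log(4) = 2 information bits, illustrating the logarithmic growth noted above.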
To exemplify the insights provided by our approach, we study four analog receiver architectures:

(a) Multiple antenna selection and sign quantization: Here, F in (<ref>) is selected as F_a = { V = [I_N_SQ, 0_{N_SQ × (N_r - N_SQ)}] P_π, P_π ∈ P_π, t = 0_{N_SQ × 1} }, that is, each sign quantizer is connected to one of the channel outputs. Figure <ref> represents this model for N_r = 4 and N_SQ = 3.

(b) Single antenna selection and multilevel quantization: For this receiver architecture, the sign quantizers are used to construct an (N_SQ+1)-level quantizer: F_b = { V = [1_{N_SQ × 1}, 0_{N_SQ × (N_r - 1)}] P_π, P_π ∈ P_π, t ∈ R^N_SQ }. Figure <ref> shows this model for N_r = 4 and N_SQ = 3.

(c) Multiple antenna selection and multilevel quantization: Here, each sign quantizer can select an antenna output and a voltage offset before performing quantization. This is obtained by choosing F_c = { V : V_ij ∈ {0,1}, ∑_{j=1}^{N_r} V_ij = 1, t ∈ R^N_SQ }. This receiver architecture encompasses those in Fig. <ref> and Fig. <ref> as special cases. Figure <ref> again shows this model for N_r = 4 and N_SQ = 3.

(d) Linear combining and multilevel quantization: Corresponds to the set of all possible choices of V and t.

§ SIGN QUANTIZATION

The effect of quantization on the capacity of the MIMO channel has been investigated thoroughly in the literature. For conciseness, we review only the results on sign quantization of the channel outputs, corresponding to the architecture in Fig. <ref> for N_SQ = N_r, which will be relevant in the remainder of the paper. The capacity of the SISO channel with sign quantization of the outputs is attained by antipodal signaling <cit.>: the capacity of the SISO channel with sign quantization of the antenna output with N_SQ = 1 is C_SISO = 1 - H_2(Q(√(P))). The capacity of the MISO channel with sign output quantization is obtained from the result in Lem.
<ref> by transforming this model into a SISO channel through transmitter beamforming, thus yielding C_MISO = 1 - H_2(Q(|h|_2 √(P))). For the SIMO and MIMO channels, the capacity with sign quantization is known in the high-SNR regime. <cit.>: The capacity of the SIMO channel with sign quantization of the antenna outputs with N_SQ = N_r at high SNR satisfies log(N_r) ≤ C_SIMO,a^(SNR→∞) ≤ log(N_r + 1). <cit.>: The capacity of the MIMO channel with sign quantization and N_SQ = N_r, and for which H satisfies a general position condition (see <cit.>), is bounded at high SNR as 1/2 log(K(N_SQ, N_t)) ≤ C_MIMO,a^(SNR→∞) ≤ 1/2 log(K(N_SQ, N_t) + 1) if N_t < N_SQ, where K(N_SQ, N_t) = ∑_{k=0}^{2N_t - 1} 2 binom(N_SQ - 1, k). If N_t ≥ N_SQ, then C_MIMO,a^(SNR→∞) = N_SQ. At finite SNR, upper and lower bounds on the capacity of the MIMO channel with sign quantization are known but are not tight in general <cit.>.

§ MAIN RESULTS

We begin by considering the capacity of the SISO channel for the receiver architectures in Sec. <ref>. Capacity for architecture (a) is provided in Lem. <ref> (necessarily N_SQ = 1), while architectures (b), (c), and (d) all correspond to the same model, in which the channel output is quantized through an (N_SQ+1)-level quantizer. The capacity for this latter model can be bounded to within a small additive gap, as shown in the next proposition. The capacity of the SISO channel with multilevel output quantization, N_SQ > 1, is upper-bounded as C_SISO ≤ 1/2 log min{P+1, (N_SQ+1)^2}, and capacity is to within 1 bit-per-channel-use from the upper bound in (<ref>). The upper bound (<ref>) is the minimum between the capacity of the model without quantization constraints and the capacity of the channel without additive noise. For the achievability proof, the input is chosen as an equiprobable M-PAM signal for M = min{⌊√(P)⌋, N_SQ + 1}, in which the distance between the constellation points is such that the power constraint is met with equality. At the receiver, the quantization thresholds are selected as the midpoints of the M-PAM constellation points. The full proof is in App. <ref>.
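The closed-form SISO expressions above are straightforward to evaluate. A minimal numerical sketch (our helper names; Q(x) implemented via the complementary error function) computes the sign-quantization capacity 1 - H_2(Q(√P)) from the lemma and the multilevel upper bound 1/2 log min{P+1, (N_SQ+1)^2} from the proposition:

```python
import math

def H2(x):
    # binary entropy in bits, with the 0*log(0) = 0 convention
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def Q(x):
    # Gaussian tail function Q(x) = 1/2 erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def siso_sign_capacity(P):
    # Lemma: C_SISO = 1 - H2(Q(sqrt(P))), attained by antipodal signaling
    return 1.0 - H2(Q(math.sqrt(P)))

def siso_multilevel_upper(P, n_sq):
    # Proposition: C_SISO <= (1/2) log2 min{P+1, (N_SQ+1)^2}
    return 0.5 * math.log2(min(P + 1, (n_sq + 1) ** 2))
```

For instance, at P = 15 and N_SQ = 3 the two terms in the multilevel minimum coincide, both yielding 2 bits per channel use, while the sign-quantization capacity saturates at 1 bit as P grows.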
For the SIMO and MIMO cases, given the generality of the formulation in (<ref>), rather than attempting to find the exact capacity C(F) for each architecture in Sec. <ref>, we instead focus on an approximate characterization in the spirit of Prop. <ref>, that is: (i) the upper bound is obtained as the minimum of two simple upper bounds, and (ii) the achievability proof relies on a transmission scheme whose performance can be easily compared to the upper bound to show a small gap between the two bounds. This approach provides an approximate characterization of capacity which is useful in comparing the performance of different architectures. In the following, we extend the result in Prop. <ref> to the SIMO and MIMO cases.[Note that the MISO case follows from the SISO case as in (<ref>).]

§.§.§ SIMO case

The capacity for architecture (a) is obtained by selecting the antenna with the largest gain; for architecture (b), the capacity is a rather straightforward extension of the result in Prop. <ref>. The capacity of the SIMO channel with single antenna selection and multilevel quantization is upper-bounded as C_SIMO,b ≤ 1/2 log min{1 + h_max^2 P, (N_SQ+1)^2}, where h_max = max_i h_i, and the upper bound in (<ref>) can be attained to within 1/2. The proof is provided in App. <ref>. For architecture (c), sampling more antennas allows the receiver to collect more information on the input but reduces the number of samples that can be acquired from each antenna. The capacity of the SIMO channel with multiple antenna selection and multilevel quantization for P > log(N_SQ) > 2 and h_i^2 > 1 is bounded as max_K 1/2 log min{1 + |h^(K)|_2^2 P, (N_SQ/K + 1)^2} - 2 ≤ C_SIMO,c ≤ 1/2 log min{1 + |h|_2^2 P, (N_SQ+1)^2}, where h^(K) is the vector of the K largest channel gains. The upper bound is derived similarly to Prop.
<ref>. The achievable rate with finite uniform output quantization is related to the achievable rate with infinite uniform output quantization by bounding the largest difference between these two quantities under the conditions P > log(N_SQ) and h_i^2 > 1. In the model with infinite output quantization, a dither can be used to make the quantization noise independent of the channel input and of the additive noise, so that the worst additive noise lemma may then be used to lower-bound the attainable rate as in (<ref>). The full proof is provided in App. <ref>. The capacity of the SIMO channel with linear combining and multilevel quantization is upper-bounded as C_SIMO,d ≤ 1/2 log min{1 + |h|_2^2 P, (N_SQ+1)^2}, and the upper bound in (<ref>) can be attained to within 1/2. With this architecture, maximal ratio combining at the receiver results in an equivalent SISO channel with channel gain |h|_2. The result in Prop. <ref> can then be used to obtain the approximate capacity. The results in Prop. <ref>, Prop. <ref> and Prop. <ref> are related as follows. The results for architecture (a) in Lem. <ref> and architecture (b) in Prop. <ref> show that the two architectures yield the same high-SNR behaviour when N_r ≥ N_SQ. When N_r < N_SQ, though, architecture (b) can attain a higher performance at high SNR. Architectures (c) and (d) differ as follows: in the former, the estimate of the transmitted message is implicitly obtained by combining the quantized information while, in the latter, combining occurs before quantization. From Prop. <ref> we gather the conditions under which combining after quantization roughly attains the same performance as combining before quantization: this occurs when the number of quantizers is sufficiently large, so that the first term in the minimum in (<ref>) dominates the channel performance.
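The comparison among the SIMO bounds can be made concrete with a small sketch (hypothetical helper names; the architecture-(c) lower bound uses ⌊N_SQ/K⌋ as a stand-in for the N_SQ/K levels per selected antenna, together with the -2 offset from the proposition):

```python
import math

def simo_upper_b(h, P, n_sq):
    # architecture (b): single antenna selection + multilevel quantization
    return 0.5 * math.log2(min(1 + max(h) ** 2 * P, (n_sq + 1) ** 2))

def simo_upper_d(h, P, n_sq):
    # architecture (d): linear combining (MRC) + multilevel quantization
    g = sum(x ** 2 for x in h)
    return 0.5 * math.log2(min(1 + g * P, (n_sq + 1) ** 2))

def simo_lower_c(h, P, n_sq):
    # architecture (c): pick the best K antennas, split the quantizers evenly
    hs = sorted((x ** 2 for x in h), reverse=True)
    best = 0.0
    for K in range(1, len(h) + 1):
        g = sum(hs[:K])
        r = 0.5 * math.log2(min(1 + g * P, (n_sq // K + 1) ** 2)) - 2
        best = max(best, r)
    return max(best, 0.0)
```

By construction, the combining-before-quantization bound (d) dominates the single-antenna bound (b), with the two meeting once the (N_SQ+1)^2 term is the active one in the minimum.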
The capacity of the SIMO channel with multiple antenna selection and multilevel quantization is upper-bounded as C_SIMO,c ≤ 1/2 log(1 + |h|_2^2 P), and the upper bound in (<ref>) can be attained to within 1 when N_SQ > √(|h|_2^2 P + 1) and h_i^2 > 1. Under these assumptions, the minimum in (<ref>) is attained by setting K = N_r, in which case the trivial outer bound of (<ref>) can be attained to within 2.

§.§.§ MIMO case

For architecture (a), inner and outer bounds are derived in <cit.>; for architecture (b), an upper bound is derived in the next proposition. The capacity of the MIMO channel with single antenna selection and multilevel quantization is upper-bounded as C_MIMO,b ≤ 1/2 log min{1 + |h_max^T|_2^2 P, (N_SQ+1)^2}, where h_max^T is the row of H with the largest norm, and the upper bound in (<ref>) can be attained to within 2. The proof is provided in App. <ref>. For architecture (d), the approximate capacity can be obtained as a variation of the classic water-filling solution. By decomposing the channel matrix through the singular value decomposition, the channel can be transformed into K = min{N_t, N_r} parallel channels with gains {σ_i}. Capacity is then obtained as max ∑_{i=1}^K 1/2 log min{1 + σ_i^2 P_i, (N_SQ,i + 1)^2}, where the maximization is over P_i ∈ R^+, ∑_i P_i = P, N_SQ,i ∈ N, ∑_i N_SQ,i = N_SQ, and K ∈ [0, min{N_t, N_r}]. By relaxing the integer constraint on the parameters N_SQ,i, we obtain the outer bound C ≤ R^⋆(H, P, N_SQ) = ∑_{i=1}^{min{N_t,N_r}} 1/2 log(1 + σ_i^2 P_i) if ∑_{i=1}^{min{N_t,N_r}} (√(1 + σ_i^2 P_i) - 1) ≤ N_SQ, and K log(N_SQ/K + 1) otherwise, where the P_i are chosen as P_i = (μ - σ_i^{-2})^+, μ is the smallest value for which ∑_i P_i = P, and K = ∑_i 1_{P_i > 0}. The approximate capacity for architecture (d) is obtained by showing that a rate sufficiently close to (<ref>) is achievable. The capacity-approaching transmission strategy is interpreted as follows: the classic water-filling solution is approximately optimal as long as each channel output can be quantized using N_SQ,i ≈ √(1 + σ_i^2 P_i) - 1 quantizers.
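The allocation rule just described can be sketched numerically as follows (our naming; lams holds the singular values σ_i of H, so that the i-th eigenchannel has SNR σ_i^2 P_i; this is a rough sketch of R^⋆, not the paper's exact procedure):

```python
import math

def waterfill(lams, P, tol=1e-9):
    # classic water-filling: P_i = (mu - 1/lam_i^2)^+, with mu set so sum P_i = P
    lams = sorted(lams, reverse=True)
    K = len(lams)
    while K > 0:
        mu = (P + sum(1 / l ** 2 for l in lams[:K])) / K
        if mu - 1 / lams[K - 1] ** 2 >= -tol:
            break
        K -= 1                       # drop the weakest channel and retry
    return [max(mu - 1 / l ** 2, 0.0) for l in lams]

def rate_upper(lams, P, n_sq):
    # R*: water-filling rate if the quantizer budget suffices, otherwise
    # uniform quantizer assignment over the K active channels.
    lams = sorted(lams, reverse=True)
    Ps = waterfill(lams, P)
    K = sum(1 for p in Ps if p > 0)
    need = sum(math.sqrt(1 + l ** 2 * p) - 1 for l, p in zip(lams, Ps))
    if need <= n_sq:
        return sum(0.5 * math.log2(1 + l ** 2 * p) for l, p in zip(lams, Ps))
    return K * math.log2(n_sq / K + 1)
```

The quantizer budget check mirrors the condition ∑_i (√(1 + σ_i^2 P_i) - 1) ≤ N_SQ above: roughly √(1 + SNR_i) quantization levels per eigenchannel suffice to make quantization no longer the bottleneck.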
If this condition is not satisfied, then the optimal solution is to uniformly assign the quantizers to all the active antennas. This leads to the next proposition. The capacity of a MIMO channel with linear combining and multilevel quantization is upper-bounded as C_MIMO,d ≤ R^⋆(H, P, N_SQ), and capacity is to within a gap of (3/2) K from the upper bound in (<ref>), for R^⋆(H, P, N_SQ) and K in (<ref>). The proof is provided in App. <ref>. The result in Prop. <ref> shows that sign quantization is sufficient to attain the optimal performance in the high-SNR regime, since K = N_SQ yields the largest rate in (<ref>) when P → ∞. This follows from the fact that sign quantization, among all possible architectures, yields the largest number of information bits. The optimality of this solution arises from the fact that the number of sign quantizers is a fixed resource that limits, at the receiver side, the largest attainable rate.

§ NUMERICAL EVALUATIONS

In the following, we evaluate the results in Sec. <ref> by considering the expected value of the capacity C(F) in (<ref>) when the channel gains H_ij are drawn from a Gaussian distribution with mean zero and variance one. We begin by numerically evaluating the performance of the SIMO channel with single antenna selection and multilevel quantization in Prop. <ref> and with linear combining in Prop. <ref>. Figure <ref> shows the upper bound expressions in (<ref>) and (<ref>) as a function of the number of receiver antennas N_r, for a fixed transmit power P and number of sign quantizers N_SQ. For N_r = 1, the performance of the two architectures is the same as for the SISO channel in Prop. <ref>, while, as N_r increases, the performance approaches log(N_SQ+1), albeit at a slower rate in the single antenna selection case. As the power increases, the transition between these two regimes requires fewer antennas.
Consequently, the performance loss of the receiver architecture in Figure <ref>, in comparison with the linear combining receiver, decreases as the transmit power grows large. The performance of multiple antenna selection for the SIMO case is shown in Figure <ref>: in this figure, we plot the upper bounds in Prop. <ref> and Prop. <ref> together with those in Prop. <ref>. From Figure <ref> we observe how increasing the number of selected antennas impacts the achievable rate, reducing the gap from the performance of the architecture with linear combining and multilevel quantization. The performance for the MIMO case is presented in Fig. <ref>: in this figure, we show the performance difference between architectures (a) and (b), with the bounds for architecture (a) from <cit.>. Single antenna selection with multilevel quantization performs well when the number of receive antennas is small, but its performance is surpassed by multi-antenna selection and sign quantization as the number of receiver antennas grows. This follows from the fact that the attainable rate with single antenna selection converges to log(N_SQ+1) as N_r grows, while sign quantization converges to N_SQ. It is interesting to observe that these two simple receiver architectures, together, are able to closely approach the performance in Prop. <ref>.

§ CONCLUSION

A general approach to model receiver architectures for MIMO channels with low-resolution output quantization has been proposed. In our formulation, the antenna outputs undergo analog processing before being quantized using N_SQ sign quantizers. Analog processing is embedded in the channel model description, while the channel output corresponds to the output of the sign quantizers. Through this formulation, it is then possible to optimize the capacity expression over the set of feasible analog processing operations while keeping the number of sign quantizers fixed.

§ PROOF OF PROP.
<REF>

∙ Converse: The capacity of the SISO channel with multilevel quantization is necessarily dominated by the capacity of the AWGN channel without quantization constraints and by the capacity of the channel with output quantization but no additive noise. The upper bound C_SISO ≤ 1/2 log(P+1) is obtained as the capacity of the channel without quantization constraints. The upper bound C_SISO ≤ log(N_SQ+1) is obtained as the capacity of the channel without additive noise. The intersection of the outer bounds in (<ref>) and (<ref>) yields the outer bound in (<ref>). In the following, we refer to this upper bound as the trivial upper bound for brevity.

∙ Achievability: If N_SQ = 1, then capacity is provided by Lem. <ref> for any P > 0. Let us first consider the case in which P ≤ 6 and N_SQ > 1: in this parameter regime, it can be verified through numerical evaluations that the capacity expression in (<ref>) is to within 1/2 from the infinite-quantization capacity in (<ref>). This implies that the achievability proof in Lem. <ref> is sufficient to show the approximate capacity in this parameter regime. For P > 6 and N_SQ ≥ 2, consider the achievable scheme in which the channel input is an equiprobable M-PAM constellation while, at the receiver, the M-1 sign quantizer thresholds are chosen as the midpoints of the transmitted constellation points. The parameter M is chosen according to whether performance is limited by the transmit power or by the number of available quantizers. When 1/2 log(P+1) ≥ log(N_SQ+1), the number of available sign quantizers dominates the performance and M is chosen as N_SQ+1, which is the largest number of channel inputs that can be distinguished at the receiver. When log(N_SQ+1) > 1/2 log(P+1), the available transmit power dominates the performance and M is chosen as ⌊√(P)⌋. Following this reasoning, we define M = min{N_SQ+1, ⌊√(P)⌋} ≥ 3, and denote the support of the input X as X = {x_1, …, x_M}, where the {x_m}_1^M are in increasing order.
For M even, we choose X as X = Δ · {-(M-1)/2, …, -1/2, +1/2, …, +(M-1)/2}, while, for M odd, we let X be equal to X = Δ · {-(M-1)/2, …, -1, 0, +1, …, +(M-1)/2}, for Δ = √(12 P / (M^2 - 1)). For X in either (<ref>) or (<ref>), let the channel input be uniformly distributed on the set X; note that, by construction, the power constraint is attained with equality, i.e. E[X^2] = P. At the receiver, the channel output is quantized using M-1 sign quantizers, each with threshold t_m obtained as t_m = 1/2 (x_m + x_{m+1}), m ∈ [1, …, M-1]. Note that, by definition, M-1 ≤ N_SQ, so that the constraint on the number of available sign quantizers is respected. In particular, in the case in which N_SQ + 1 > √(P), not all the sign quantizers are employed at the receiver. In this scenario, a better performance can be attained by employing all the available quantizers: for simplicity in the analysis, we only consider the sub-optimal strategy which employs M-1 of the N_SQ available quantizers. For convenience of notation, we express y in (<ref>) through the random variable X̂ with support X defined as P[X̂ = x_m] = P[W ≤ t_1] for m = 1, P[t_{m-1} < W ≤ t_m] for m ∈ [2, …, M-1], and P[W > t_{M-1}] for m = M. The mapping in (<ref>) is a one-to-one mapping, since y is of the form y = [-1 … -1, +1 … +1]^T with M^- entries equal to -1 and M^+ entries equal to +1, where M^-, M^+ ≥ 0 and M^- + M^+ = M-1, so that the M-1 sign quantizer outputs have a one-to-one correspondence with the M possible values of X̂. With the definition in (<ref>), and for the channel input uniformly distributed over the support in (<ref>) and (<ref>), we obtain the inner bound R^IN = H(X̂) - H(X̂|X), where, for the interior constellation points, P[X̂ = x̂ | X = x] = P[|Z - (x̂ - x)| < Δ/2] and P[X̂ = x̂] = 1/M ∑_{m=1}^M P[|Z - (x̂ - x_m)| < Δ/2], with Z ∼ N(0,1) and x, x̂ ∈ X.
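The constellation and threshold construction above can be sketched as follows (our parameterization Δ(m - (M-1)/2), m = 0, …, M-1, which reproduces both the even and the odd supports):

```python
import math

def pam_constellation(M, P):
    # Equiprobable M-PAM with spacing Delta = sqrt(12 P / (M^2 - 1)), so that
    # the power constraint E[X^2] = Delta^2 (M^2 - 1)/12 = P holds with equality.
    delta = math.sqrt(12 * P / (M ** 2 - 1))
    points = [delta * (m - (M - 1) / 2) for m in range(M)]
    # M-1 sign-quantizer thresholds at the midpoints of adjacent points
    thresholds = [(a + b) / 2 for a, b in zip(points, points[1:])]
    return points, thresholds
```

For example, M = 4 and P = 5 give Δ = 2, points {-3, -1, +1, +3} (average power exactly 5), and midpoint thresholds {-2, 0, +2}.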
The entropy term H(X̂) in (<ref>) is lower-bounded as H(X̂) ≥ M min_{x̂ ∈ X} (-P_X̂(x̂) log P_X̂(x̂)), and, given the symmetry in the input constellation, the minimum of P_X̂(x̂) is obtained at x̂ = ±Δ/2 for M even, and at x̂ = 0 for M odd. Note, moreover, that the minimum of P_X̂(x̂) is at most 1/M ≤ 1/3 < e^{-1}: for x < e^{-1}, the function -x log(x) is positive and increasing in x, so that a lower bound on P_X̂(x̂) produces a lower bound on the RHS of (<ref>). For this reason, when M is even, we lower-bound P_X̂(+Δ/2) = P_X̂(-Δ/2) as P_X̂(+Δ/2) = 1/M [1 - 2Q(Δ/2) + ∑_{k=2}^{M/2} (Q((k-2)Δ + Δ/2) - Q((k-1)Δ + Δ/2)) + ∑_{k=1}^{M/2} (Q((k-1)Δ + Δ/2) - Q(kΔ + Δ/2))] = 1/M [1 - 2Q(Δ/2) + Q(Δ/2) - Q((M-1)Δ/2) + Q(Δ/2) - Q((M+1)Δ/2)] = 1/M [1 - Q((M-1)Δ/2) - Q((M+1)Δ/2)] ≥ 1/M [1 - 2Q((M-1)Δ/2)]. Similarly, for the case of M odd, we have P_X̂(0) = 1/M [1 - 2Q(Δ/2) + 2 ∑_{k=1}^{(M-1)/2} (Q((k-1)Δ + Δ/2) - Q(kΔ + Δ/2))] = 1/M [1 - 2Q(MΔ/2)]. By plugging (<ref>) and (<ref>) in (<ref>), depending on the value of M, we obtain the bound min_{x̂ ∈ X} P_X̂(x̂) ≥ 1/M [1 - 2Q((M-1)Δ/2)]. Let ε = Q((M-1)Δ/2) for convenience of notation and further bound (<ref>) as H(X̂) ≥ -M · 1/M (1-2ε) log(1/M (1-2ε)) = log M - (1-2ε) log(1-2ε) - 2ε log M ≥ log M - 2ε log M ≥ log M - 0.02, where (<ref>) follows from the fact that -(1-2ε) log(1-2ε) is nonnegative, while (<ref>) follows from the bound ε = Q(1/2 (M-1) √(12P/(M^2-1))) = Q(√((M-1)^2/(M^2-1)) √(3P)) ≤ Q(√(3P)), so that 2ε log M ≤ 2 Q(√(3P)) log(√(P)) ≤ 0.02, where (<ref>) follows from the fact that Q(√(3P)) log(√(P)) is a decreasing function for P > 6. Accordingly, we conclude that H(X̂) ≥ log M - 0.02. Next, we wish to upper-bound the entropy term H(X̂|X) in (<ref>). Note that, for each X = x_m, H(X̂|X = x_m) corresponds to the entropy of a Gaussian random variable with mean x_m and unitary variance which is quantized with M-level uniform quantization of step Δ. From the “grouping rule for entropy” <cit.>, the value of this entropy is smaller than the entropy of a Gaussian variable with infinite uniform quantization of step Δ.
Let us denote by N^Δ the infinite quantization of a Gaussian variable with step Δ; more specifically, N^Δ is defined as the random variable with support Z for which P[N^Δ = k], k ∈ Z, is obtained as P[N^Δ = 0] = P[-Δ/2 ≤ X < +Δ/2] and P[N^Δ = k] = P[(k-1)Δ + Δ/2 ≤ X < kΔ + Δ/2], k ∈ Z ∖ {0}. The entropy H(N^Δ) can be expressed as H(N^Δ) = -(1 - 2Q(Δ/2)) log(1 - 2Q(Δ/2)) - 2 ∑_{k=0}^∞ (Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2)) log(Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2)) ≤ 0.15 - 2 ∑_{k=0}^∞ (Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2)) log(Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2)). For Δ in (<ref>), we necessarily have Δ > 2√3, and thus Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2) < Q(Δ/2) - Q(3Δ/2) < Q(Δ/2) < e^{-1}. Using the bound in (<ref>), together with the fact that -x log(x) is an increasing function of x for x ≤ e^{-1}, we have that an upper bound on the term Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2) results in an upper bound on the quantity in (<ref>). Next, note that for k > 1 we have Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2) ≤ Q(kΔ) - Q(2kΔ) ≤ e^{-k^2 Δ^2 / 2} - e^{-2 k^2 Δ^2}, so that, by numerical integration methods, we obtain the bound -2 ∑_{k=1}^∞ (Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2)) log(Q(kΔ + Δ/2) - Q((k+1)Δ + Δ/2)) ≤ 0.03 + ∫_{x=1}^∞ (e^{-x^2 Δ^2 / 2} - e^{-2 x^2 Δ^2}) dx ≤ 0.25. Plugging the bound (<ref>) in (<ref>), we obtain H(N^Δ) ≤ 0.15 + 0.25 = 0.4. Finally, combining (<ref>) and (<ref>), I(X; X̂) ≥ log(M) - 1/2, which is the desired result.

§ PROOF OF PROP. <REF>

When only one antenna can be selected, the result in Prop. <ref> can be used to bound the capacity maximization in (<ref>) to within 1/2 from the trivial outer bound C(F) ≤ max_k 1/2 log min{1 + h_k^2 P, (N_SQ+1)^2}. The function on the RHS of (<ref>) is increasing in k when the h_k are ordered in increasing order, thus yielding the desired result.

§ PROOF OF PROP. <REF>

The outer bound in (<ref>) is the trivial outer bound as defined in App.
<ref>, while the inner bound in (<ref>) is derived in the following. In the remainder of this appendix, the channel coefficients h_i are taken positive: this assumption is without loss of optimality, as the noise distribution is symmetric. Also, in the following, we assume without loss of generality that the terms h_k are in descending order.

Achievability: If |h|_2^2 P ≤ 15 or N_SQ ≤ 3, then 1/2 log min{1 + |h^(K)|_2^2 P, (N_SQ/K + 1)^2} ≤ 1/2 log min{1 + |h|_2^2 P, (N_SQ + 1)^2} ≤ 2, from which we conclude that (<ref>) is less than zero in this parameter subset. Since rate zero is trivially achievable, the inequality in (<ref>) proves that (<ref>) is achievable. If |h|_2^2 P > 15 and N_SQ > 3, the achievability of the bound in (<ref>) is shown by letting the channel input be the sum of an M-PAM signal plus a dither. For this receiver architecture, dithered quantization is necessary to evaluate the performance of the combining of the sampled channel outputs. Similarly to (<ref>), let us define M as M = ⌊min{N_SQ/K, |h^(K)|_2 √(P) - 1}⌋. For M in (<ref>), note that (<ref>) = 1/2 log min{1 + |h^(K)|_2^2 P, (N_SQ/K + 1)^2} ≤ log(M+2), so that, when M ≤ 2, the expression in (<ref>) is less than zero, which is trivially achievable. For M ≥ 3, let the channel input be obtained as X = S + U, where S is an M-PAM signal for M in (<ref>), with support as in (<ref>) for M even, or as in (<ref>) for M odd, but where Δ is chosen as Δ = √(12 P̃ / (M^2 - 1)). The variable U in (<ref>) is a quantization dither, that is, U ∼ U([-Δ/2, +Δ/2]) and U ⊥ S. Since E[U^2] = Δ^2 / 12, the power constraint is satisfied with equality by setting P̃ = P - Δ^2/12, which yields Δ^2 = 12 P / M^2.
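A quick check of the dithered power budget above (our naming; the sketch verifies that E[S^2] + E[U^2] = P once Δ^2 = 12P/M^2, using a centered parameterization of the PAM support):

```python
import math

def dithered_pam(M, P):
    # X = S + U with U ~ Unif[-Delta/2, +Delta/2]; the spacing is chosen so
    # that E[S^2] + E[U^2] = P, which forces Delta^2 = 12 P / M^2.
    delta = math.sqrt(12 * P / M ** 2)
    s = [delta * (m - (M - 1) / 2) for m in range(M)]
    es2 = sum(x * x for x in s) / M          # = Delta^2 (M^2 - 1) / 12
    eu2 = delta ** 2 / 12                    # dither power
    return s, es2 + eu2                      # total power, equal to P
```

The identity Δ^2 (M^2 - 1)/12 + Δ^2/12 = Δ^2 M^2 / 12 = P is exactly the derivation of (<ref>).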
At the receiver, the K antennas with the best SNR are each quantized with an (M+1)-level quantizer. More specifically, the k^th antenna output, k ∈ [1 … K], is quantized with thresholds t_i^(k), i ∈ [0 … M], chosen as t_0^(k) = h_k (x_1 - Δ/2), t_m^(k) = (h_k/2)(x_m + x_{m+1}) for m ∈ [1, …, M-1], and t_M^(k) = h_k (x_M + Δ/2). Note that, although the channel input has M possible values, the receiver uses an (M+1)-level quantizer on each of the K best antenna outputs: two additional quantization levels are used to detect whether the channel output is below h_k (x_1 - Δ/2) or above h_k (x_M + Δ/2) (as specified at the beginning of the appendix, the channel coefficients are assumed to be positive and of decreasing magnitude without loss of generality). Note that the total number of quantizers employed at the receiver is K(M+1) ≤ N_SQ, so that the constraint on the total number of available sign quantizers is satisfied. As in the proof in App. <ref>, it is possible that not all the sign quantizers are utilized in this achievable scheme.
Next, similarly to (<ref>), we define X̂^(k) for k ∈ [1, …, K] as P[X̂^(k) = x_m] = P[W_k ≤ t_0^(k)] for m = 0, P[t_{m-1}^(k) < W_k ≤ t_m^(k)] for m ∈ [1, …, M], and P[W_k > t_M^(k)] for m = M+1, where x_m for m ∈ [1, M] is as in (<ref>), while we additionally let x_0 = x_1 - Δ and x_{M+1} = x_M + Δ. As for the mapping in (<ref>), the mapping in (<ref>) is a one-to-one correspondence between the quantized value of W_k and X̂^(k). Finally, let X̃^(k) = X̂^(k) - U and X̃ = [X̃^(1), …, X̃^(K)]. We next lower-bound the achievable rate as follows: first, (i) we show that the capacity of the channel with finite quantization levels is to within a constant gap from that of the channel with infinite quantization levels; successively, (ii) we lower-bound the capacity of the model with infinite quantization levels. This lower bound, minus the gap between the capacities of the models with finite and infinite quantization, corresponds to the achievable rate in (<ref>). Define X̂_∞^(k) as the quantization of W_k for k ∈ [1, …, K] with infinite quantization levels and with step Δ as in (<ref>). Similarly, let X̃_∞^(k) = X̂_∞^(k) - U and X̃_∞ = [X̃_∞^(1) … X̃_∞^(K)]. The rate achievable with the transmission strategy described above is lower-bounded as R^IN ≥ I(X̃; X) ≥ I(X̃_∞; X) - H(X̃_∞ | X̃) ≥ I(X̃_∞; X) - ∑_{k=1}^K H(X̂_∞^(k) | X̂^(k)). The expression in (<ref>) is interpreted as follows: I(X̃_∞; X) is the attainable rate for the model with infinite output quantization, while ∑_{k=1}^K H(X̂_∞^(k) | X̂^(k)) is an upper bound on the performance gap between the attainable rates with infinite and finite quantization. Let us first bound the performance gap between the channels with finite and infinite output quantization: for each term H(X̂_∞^(k) | X̂^(k)), we observe that, if W_k / h_k ∈ [x_1 - Δ/2, x_M + Δ/2], then X̂_∞^(k) = X̂^(k). Using this observation, and given the symmetry of the input and noise distributions, we write H(X̂_∞^(k) | X̂^(k)) = H(X̂_∞^(k) | X̂^(k) = x_{M+1}) P[X̂^(k) = x_{M+1}] + H(X̂_∞^(k) | X̂^(k) = x_0) P[X̂^(k) = x_0] = 2 H(X̂_∞^(k) | X̂^(k) = x_{M+1}) P[X̂^(k) = x_{M+1}].
If i ∈ [0, M+1], then P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}] = 0; on the other hand, for i > M+1, we have P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}] = ∑_{m=1}^M P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}, X = x_m] P[X = x_m] ≤ 1/M ∑_{m=1}^M P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}, X = x_M] ≤ P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}, X = x_M] ≤ (Q(h_k((i-1)Δ + Δ/2)) - Q(h_k(iΔ + Δ/2))) / Q(h_k(MΔ + Δ/2)) ≤ Q(h_k((i-1)Δ + Δ/2)) / Q(h_k(MΔ + Δ/2)) ≤ (1 - (h_k(MΔ + Δ/2))^{-2})^{-1} e^{-h_k^2 (i^2 - M^2) Δ^2} ≤ (1 - (h_k M Δ)^{-2})^{-1} e^{-h_k^2 (i - M)^2 Δ^2}. The case i < 0 can be bounded in a symmetric manner to yield P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}] ≤ Q(h_k((i-1)Δ + Δ/2)) / Q(h_k(MΔ + Δ/2)) ≤ (1 - (h_k M Δ)^{-2})^{-1} e^{-h_k^2 (i - M)^2 Δ^2}. Since M^2 Δ^2 = 12 P and P h_k^2 > 1 by assumption, we have h_k M Δ ≥ 2√3, k ∈ [1, …, K], which implies Q(i h_k Δ) ≤ e^{-1} for all k. Since -x log x is a positive increasing function in x for x ∈ [0, 1/e], we can write H(X̂_∞^(k) | X̂^(k) = x_{M+1}) = -∑_{i=M}^∞ P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}] log P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}] - ∑_{i=0}^{-∞} P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}] log P[X̂_∞^(k) = x_i | X̂^(k) = x_{M+1}] ≤ -2 ∑_{i=M}^∞ Q(i h_k Δ) log Q(i h_k Δ) ≤ ∑_{j=0}^∞ (6/5) h_k^2 j^2 Δ^2 e^{-h_k^2 j^2 Δ^2} ≤ 0.15, where we have used the fact that √(15P) ≤ MΔ ≤ √(18P) and, similarly, √12 ≤ Δ ≤ √15 for P > 6. Plugging the bound in (<ref>) into (<ref>) yields ∑_k H(X̂_∞^(k) | X̂^(k)) ≤ 0.3, which shows that the capacity of the channel with infinite quantization is at most 0.3 from the capacity of the channel with finite output quantization. Having bounded the performance gap between finite and infinite quantization, we next lower-bound the rate attainable in the model with infinite quantization of the K antenna outputs with the highest SNR.
For this model, the attainable rate can be lower bounded using the result in <cit.> that Gaussian distributed noise is the worst additive noise under a covariance constraint. More specifically, let us define ≜ 1/|^(K)|_2^2 ∑ h_k^2 ^(k) - U. Note that, from the properties of dithered quantization <cit.>, we have ^(k)=S + Z_k/h_k + N_k, where N_k∼(-/2, /2) is independent from S and Z_k. Using this observation, we have I(;X) = I(;X|U) ≥ I(;S) ≥ I(S+Z^;S), where Z^∼(0,) for = 1/|^(K)|_2^4 ( ∑_k=1^K h_k Z_k + ∑_k=1^K h_k^2 N_k ). Note that, from the achievability proof of Prop. <ref>, we have I(S+Z^;S) ≥ log(M)-0.6-log(). A bound on  in (<ref>) is obtained as follows: ∑_k=1^K h_k Z_k = |^(K)|_2^2 and ∑_k h_k^2 N_k ≤ 1/12 |^(K)|_4 + 2/12 ∏_i>j h_i^2 h_j^2 ≤ |^(K)|_4^2/12, so that, since h_i>1 by assumption, ≤ (|^(K)|_2^2+|^(K)|_4^2)/|^(K)|_2^4 ≤ 1+(|^(K)|_4/|^(K)|_2^2)^2 ≤ 2. Substituting M in (<ref>) and bounding  as in (<ref>) into (<ref>) finally yields (<ref>). § PROOF OF PROP. <REF> With single antenna selection, the capacity maximization in (<ref>) can be rewritten as () ≤ max_k 1/2 log min{ 1+|_k|_2^2 P, (N_SQ+1)^2 }, where _k is the k^th row of . In other words, the capacity is the maximum among the capacities of the MISO channels between the transmitter and each of the antennas at the receiver. For each MISO channel, the capacity can be attained using the result in Prop. <ref>, since transmitter pre-coding can be used to turn the MISO channel into a SISO channel. § PROOF OF PROP. <REF> Through the classic VBLAST architecture, the channel can be equivalently written as a set of parallel channels _i=_i _i+_i, i=1,…, min{,}, where [_1, …, _min{,}] are the eigenvalues of  and _i ∼(0,1).
Since the capacity of the parallel channels is obtained as the sum of the capacities of the individual channels, an upper bound to the capacity is R^OUT=max ∑_i=1^min{,} 1/2 log min{ _i^2 P_i+1, (N_SQ,i+1)^2 }, where the maximization is over ∑ P_i=P and ∑ N_SQ,i=N_SQ, and where P_i is the input power and N_SQ,i the number of sign quantizers allocated to the i^th equivalent channel. Additionally, the upper bound in (<ref>) can be attained to within min{,} following the result in Prop. <ref>. We next wish to determine an approximate expression for the solution of the optimization in (<ref>) as a function of the available power and the number of sign quantizers. To simplify this analysis, we relax the optimization problem and let N_SQ take values in ^+. Under this relaxation of the optimization problem in (<ref>), we have that the term min{_i^2 P_i+1, (N_SQ,i+1)^2} must be attained by either the power or the sign quantizer allocation on all channels simultaneously. This can be shown by contradiction: assume that there exist two subchannels j and k such that min{ _j^2 P_j^*+1, (N_SQ,j^*+1)^2 } = _j^2 P_j^*+1 and min{ _k^2 P_k^*+1, (N_SQ,k^*+1)^2 } = (N_SQ,k^*+1)^2 in the optimal solution; then there must exist _1,_2>0 such that min{ _j^2 (P_j^*+_2)+1, (N_SQ,j^*-_1+1)^2 } = _j^2 (P_j^*+_2)+1 > _j^2 P_j^*+1 and min{ _k^2 (P_k^*-_2)+1, (N_SQ,k^*+_1+1)^2 } = (N_SQ,k^*+_1+1)^2 > (N_SQ,k^*+1)^2, which contradicts the claim of optimality. For the case in which the power constraint is active, the optimal solution corresponds to the classical waterfilling solution for the channel with infinite quantization levels. For the case in which the constraint on the quantization is active, the maximization problem becomes max_{∑ N_SQ,i=N_SQ} ∑_i=1^min{,} log( N_SQ,i+1 ).
The optimization problem in (<ref>) is equivalent to the waterfilling problem with equal channel gains, and thus the uniform allocation of quantizers across all sub-channels is optimal: 1+_i^2 P_i=(N_SQ/K+1)^2, where K is the number of active channels. Since N_SQ+1 ≤ 2 N_SQ, the assignment ⌊ N_SQ/K ⌋ incurs a loss of at most 1 bit per channel, so that the overall gap between the inner and upper bounds is 2 min{,}.
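The allocation step above can be checked numerically. The sketch below (names are ours; `n_sq` is the total number of sign quantizers and `K` the number of active subchannels) compares the rate of the integer allocation ⌊N_SQ/K⌋ per channel against the relaxed real-valued optimum, and confirms the loss of at most 1 bit per channel.

```python
import math

def uniform_quantizer_allocation(n_sq, K):
    """Rate of the integer allocation floor(n_sq / K) per subchannel versus
    the relaxed optimum of sum_i log2(N_i + 1) subject to sum_i N_i = n_sq,
    whose solution is the uniform split N_i = n_sq / K."""
    achieved = K * math.log2(n_sq // K + 1)   # integer (floored) allocation
    relaxed = K * math.log2(n_sq / K + 1)     # relaxed optimum
    return achieved, relaxed

achieved, relaxed = uniform_quantizer_allocation(100, 7)
# Since x + 1 <= 2 (floor(x) + 1), flooring costs at most 1 bit per channel,
# i.e. relaxed - achieved <= K.
```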
http://arxiv.org/abs/1702.08133v2
{ "authors": [ "Stefano Rini", "Luca Barletta", "Yonina C. Eldar", "Elza Erkip" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170227033143", "title": "A General Framework for Low-Resolution Receivers for MIMO Channels" }
Second harmonic spectroscopy to optically detect valley polarization in 2D materials

Department of Physics and Nanotechnology, Aalborg University, DK-9220 Aalborg Øst, Denmark Centre for Advanced 2D Materials, National University of Singapore, 6 Science Drive 2, Singapore 117546 Corresponding author: vpereira@nus.edu.sg Centre for Advanced 2D Materials, National University of Singapore, 6 Science Drive 2, Singapore 117546 Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542

Valley polarization (VP), an induced imbalance in the populations of a multi-valley electronic system, allows emission of second harmonic (SH) light even in centrosymmetric crystals such as graphene. Whereas in systems such as MoS_2 or BN this adds to their intrinsic quadratic response, SH generation in a multi-valley inversion-symmetric crystal can provide a direct measure of valley polarization. By computing the nonlinear response and characterizing theoretically the respective SH as a function of polarization, temperature, electron density, and degree of VP, we demonstrate the possibility of disentangling and individually quantifying the intrinsic and valley contributions to the SH. A specific experimental setup is proposed to obtain direct quantitative information about the degree of VP and allow its remote mapping. This approach could prove useful for direct, contactless, real-space monitoring of valley injection and other applications of valley transport and valleytronics.

78.67.-n, 78.67.Wj, 81.05.ue, 42.65.An

Vitor M. Pereira
§ INTRODUCTION Interactions of light with matter beyond linear response are a rich source of fundamentally interesting phenomena as well as of many-fold opportunities for applications <cit.>. In particular, the use of non-linear optical spectroscopy for characterizing the electronic properties of crystalline materials has emerged as a fruitful, simple and important technique because, among other advantages, it allows fast, non-invasive probing of electronic systems and is sensitive to intermediate coherent electronic transitions <cit.>.
Compared to linear optical absorption, for example, in non-linear optical spectroscopy there is a larger freedom in utilizing the expanded set of selection rules and conditions involving the polarization dependence or polarization state, in order to extract more microscopic details with the same type of measurement <cit.>. We can understand this in the simplest and most general way by recalling that the n-th order response is governed by a (n+1)-rank tensor and that, for a given crystalline symmetry, the number of independent optical constants increases with the order <cit.>. Therefore, since a single frequency measurement at higher order of response can capture a larger number of independent quantities of the system, it more strongly constrains its microscopic details (e.g. the modeling of its bandstructure) while, at the same time, becomes a more versatile approach that is capable of probing a richer set of phenomena. This has a large potential for characterization and applications and, consequently, is of high interest. Two-dimensional crystals such as graphene, boron nitride, and transition-metal dichalcogenides (TMD) have been shown to have a particularly strong non-linear optical response, especially given their atomically thin character. In the latter, second harmonic generation (SHG) is particularly robust <cit.> and is routinely used for simple characterization tasks such as identifying crystal orientation, uniformity, or layer number <cit.>. With a setup that allows for translation of the beam along the sample, it becomes possible to spatially map the SHG by probing the sample in scanning mode with resolution limited by the spot size <cit.>. In graphene, on the other hand, SHG is forbidden in equilibrium by its D_6h point group symmetry (PGS).
As discussed below, the vanishing quadratic response in graphene arises at the microscopic level from the exact cancellation of finite contributions from the 𝐊 and 𝐊' valleys. Previous theoretical calculations indicate that disrupting the valley cancellation by population imbalance can lead to a finite SH with estimated magnitudes on par with conventional nonlinear crystals <cit.>, which means that VP is expected to generate a very strong non-linear signal. Golub and Tarasenko <cit.> calculated the frequency-dependent SH susceptibility from an explicit integration of the time-dependent density matrix using an effective Dirac description of the electronic structure of graphene; trigonal warping is explicitly included both in the effective Hamiltonian and in the coupling to light. Wehling et al. <cit.> computed the SH optical conductivity from a diagrammatic expansion of the current response to second order, thus expressing all quantities in terms of Green's functions. Their description is based on a tight-binding (TB) formulation of the electronic problem in graphene, including the TB derivation of the generalized coupling to light and the velocity operator in higher orders from a Peierls substitution in the hoppings. The different methods and approximations used in these references lead to not entirely compatible results. Moreover, the scope of these calculations is limited by other strict approximations: zero temperature, very small VP and, above all, they pertain to (gapless) graphene only. Therefore, it becomes important to address the more general and rich problem of valley-induced SHG in 2D materials beyond graphene using an approach capable of addressing a more general set of conditions, such as finite temperature, variable carrier density, and polarization of the incoming and outgoing radiation.
From the technical point of view, we address these characteristics (see <ref>) within the length gauge formalism <cit.>. To be encompassing and allow a controlled breaking of inversion symmetry, we study the quadratic optical response to light of a generic two-band electronic system on a honeycomb lattice. This allows us to quantify the interplay between intrinsic contributions and those induced by a finite VP as a function of frequency and polarization. At the qualitative level this choice allows us to interpolate between the behavior of graphene (gapless) and that of semiconducting TMD (gapped). Since much effort is currently invested to theoretically and experimentally develop methods and concepts to harness the valley degree of freedom in these and related systems for valleytronic applications <cit.>, it is important to establish practical, versatile and reliable probes able to quantify and track the degree of valley polarization (VP), just as, in spintronics, it is crucial to have probes capable of quantifying spin polarization, injection, relaxation, etc. Light is a demonstrably effective means of inducing a VP in these materials <cit.> and, here, we discuss a specific proposal of its utilization as an effective qualitative and quantitative probe as well through SH spectroscopy. Even though SHG in otherwise SH-dark graphene provides direct access to the degree of an induced VP, our discussion extends from this case to systems with intrinsic SHG, where the two effects can be present and contribute to independent second-order optical constants.
As the key underlying physics is not dependent on specific microscopic details other than the crystal symmetry, we begin by discussing a specific experimental procedure that should allow one to use SHG as a useful probe in valleytronics and, subsequently, analyze the microscopic details of SHG in the framework discussed above. § FINGERPRINT OF VALLEY POLARIZATION IN SHG Threefold rotational symmetry severely restricts the in-plane components of the quadratic conductivity, which obey σ_αββ = σ_βαβ = σ_ββα = -σ_ααα, where α,β ∈ { 1, 2 }, α ≠ β <cit.>. Alone, this symmetry reduces the number of independent components to just σ_111 and σ_222 which, for simplicity, we shall replace by the dimensionless counterparts σ̅_i ≡ σ_iii/σ_0, where σ_0 ≡ e^3a/4γ_0ħ sets the natural scale of the second order 2D conductivity (see below) <cit.>. Furthermore, in a honeycomb lattice in equilibrium whose mirror plane 𝐞_m is parallel to 𝐞_2 [fig. <ref>(b)], only σ̅_2 survives, which defines the intrinsic quadratic response of the system. Since at frequencies much smaller than the bandwidth (ω ≪ γ_0) electronic processes are governed by states in the vicinity of the two inequivalent points 𝐊 and 𝐊' in the Brillouin zone (BZ), that intrinsic response is the combination of the contributions from each of these two valleys, which contribute independently (additively) in a translationally-invariant system. A crucial aspect, though, is that the PGS of 𝐊/𝐊' is still D_3h but with the mirror plane perpendicular to that of the real space lattice (𝐞_m). In other words, if taken independently, each valley contributes σ̅_1 ≠ 0, σ̅_2 = 0, and it is their sum that yields the overall σ̅_1 = 0, σ̅_2 ≠ 0 expected on symmetry grounds at equilibrium (in particular, if inversion symmetry is furthermore present, as in graphene, the two valleys exactly cancel each other and σ̅_1 = σ̅_2 = 0) <cit.>.
However, if there exists an imbalance in the population of the valleys, there is no symmetry constraint to quench either σ̅_1 or σ̅_2; in particular, SHG can arise through σ̅_1 ≠ 0 in a lattice with inversion symmetry, which immediately suggests the detection of this valley-induced SHG response as a direct optical probe of VP. Generically, in a VP crystal without inversion, both components will be present, with a direct impact on the polarization dependence of the SHG that we explore here to disentangle and independently quantify the intrinsic (σ̅_2) and valley-induced (σ̅_1) SHG. We first derive the dependence of the SHG on the polarization state of the excitation field in general terms to establish the procedure for the individual component extraction, and afterwards analyze the frequency dependence of σ̅_1 and σ̅_2 in a microscopic model for graphene and for gapped graphene that applies qualitatively to the response in TMD. We begin with the generic parameterization of an incoming monochromatic field E_0 normal to the sample, as illustrated in fig. <ref>(a), that is initially p-polarized before transmitting through a λ/4 plate with fast axis at an angle φ with the plane of propagation. This permits the selection of any incoming polarization state, including linear polarization. For a general orientation of the propagation plane (ζ) the (complex) amplitude of the electric field reaching the sample, 𝐄_ω, reads 𝐄_ω = E_0 ( a sinζ + b cosζ, -a cosζ + b sinζ, 0 ), where a ≡ i sin(2φ)/√(2), b ≡ [1 - i cos(2φ)]/√(2), and 𝐞_1 is aligned with the lattice zigzag direction.
The second order two-dimensional current density, j_i^(2)(ω_1, ω_2) = ∑_jk σ_ijk^(2) E_ω_1^j E_ω_2^k, can hence be written as j_1^(2)(ω_1, ω_2) = σ_0 ( f_1 σ̅_1 + f_2 σ̅_2 ) E_ω_1 E_ω_2/2, j_2^(2)(ω_1, ω_2) = σ_0 ( f_2 σ̅_1 - f_1 σ̅_2 ) E_ω_1 E_ω_2/2, where the auxiliary functions f_1 and f_2 read f_1 ≡ 2sin( 2ζ + 2φ ) sin( 2φ ) - 2i cos( 2ζ + 2φ ), f_2 ≡ 2cos( 2ζ + 2φ ) sin( 2φ ) + 2i sin( 2ζ + 2φ ). Even though we will be focusing on SHG arising from a single monochromatic source (ω_1 = ω_2 = ω), we explicitly distinguish ω_1 and ω_2 to underline that our analysis applies to any second-order process. The sheet current eq. <ref> radiates, in turn, an electromagnetic field with a flux density I = μ_0 c |j^(2)(ω_1,ω_2)|^2/8, or I/I_0 = ( |f_1|^2 + |f_2|^2 )( |σ̅_1|^2 + |σ̅_2|^2 ) - 8i ( σ̅_1 σ̅_2^* - σ̅_1^* σ̅_2 ) sin( 2φ ), where I_0 ≡ μ_0 c σ_0^2 |E_ω_1|^2 |E_ω_2|^2/32 = (μ_0 c)^3 σ_0^2 I_ω_1 I_ω_2/8 [W/m^2]. Whereas this shows that the total SHG intensity cannot discriminate the relative magnitudes of σ̅_1 and σ̅_2, that can be achieved by filtering the SH field with a linear polarizer parallel to the sample and rotated by ξ with respect to 𝐞_1, so that the electric field at the detector reads 𝐄_ξ = 𝐄_2ω · (cosξ, sinξ, 0). If the incoming light is linearly polarized parallel to the analyzer (ξ = ζ), the SH intensity at the detector reads I_∥/I_0 = 4|σ̅_1|^2 [ sin^2( 3ζ + 2φ ) sin^2(2φ) + cos^2( 3ζ + 2φ ) ] + 4|σ̅_2|^2 [ cos^2( 3ζ + 2φ ) sin^2(2φ) + sin^2( 3ζ + 2φ ) ] - 2( σ̅_1 σ̅_2^* + σ̅_1^* σ̅_2 ) sin(6ζ + 4φ ) cos^2( 2φ ) - 4i( σ̅_1 σ̅_2^* - σ̅_1^* σ̅_2 ) sin(2φ ), while at cross orientation (ξ = ζ + 90^∘) it is given by eq. <ref> with the replacement 3ζ → 3ζ + 90^∘. Eq. <ref> or any of its variants can thus be used to directly obtain σ̅_1 and σ̅_2, as well as the orientation of the lattice, by fitting experimental SH intensities as a function of polarization.
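The angular dependence of I_∥/I_0 above can be evaluated directly. The sketch below (a minimal illustration; the function name is ours, angles are in radians, and `s1`, `s2` stand for the complex reduced conductivities σ̄_1 and σ̄_2) implements the expression term by term.

```python
import numpy as np

def intensity_parallel(zeta, phi, s1, s2):
    """I_par / I_0 for an analyzer parallel to the incoming polarization,
    following the expression in the text; zeta is the orientation of the
    propagation plane, phi the quarter-wave-plate angle."""
    A = 3 * zeta + 2 * phi
    s, c = np.sin(2 * phi), np.cos(2 * phi)
    cross_sum = (s1 * np.conj(s2) + np.conj(s1) * s2).real   # s1 s2* + c.c.
    cross_dif = s1 * np.conj(s2) - np.conj(s1) * s2          # purely imaginary
    return (4 * abs(s1) ** 2 * (np.sin(A) ** 2 * s ** 2 + np.cos(A) ** 2)
            + 4 * abs(s2) ** 2 * (np.cos(A) ** 2 * s ** 2 + np.sin(A) ** 2)
            - 2 * cross_sum * np.sin(6 * zeta + 4 * phi) * c ** 2
            - (4j * cross_dif * s).real)
```

For linearly polarized excitation (φ = 0), a purely intrinsic response (s1 = 0) vanishes along ζ = 0 and peaks at ζ = 30°, while a purely valley-induced response (s2 = 0) gives the same six-fold flower rotated by 30°; for equal magnitudes with a relative phase of ±90° the φ = 0 pattern is isotropic.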
This is a concept similar to the usage of SHG as a remote, non-invasive probe of lattice orientation, layer number and other properties in recent applications of two-dimensional crystals having intrinsic SHG <cit.>. What we now propose and explicitly show is that, in addition, it follows from the general features of eq. <ref> that the same concept can be applied to monitor and quantify the presence of VP, which is of clear interest for applications envisaged in the realm of valleytronics, somewhat similarly to the uses of the Kerr rotation to monitor and map spin accumulation in spintronics <cit.>. Fig. <ref> illustrates the two simplest and extreme cases of entirely intrinsic (red) and entirely VP-induced SHG (black). For definiteness, we consider φ = 0, so that the incoming light reaches the sample linearly polarized. As exactly anticipated from the earlier discussion on the differences between the PGS of the crystal and that of each valley, the angular pattern of I_∥ is rotated by 30^∘ between the two cases: VP in an inversion symmetric crystal (σ̅_2 = 0) leads to SHG whose intensity is a direct measure of the degree of polarization δμ ≡ μ_𝐊 - μ_𝐊' (more details below), and its flower-shaped pattern directly reveals the PGS of the 𝐊 points. The more general case of a system already having an intrinsic SHG (σ̅_2 ≠ 0) is illustrated by the blue and gray curves. They reveal that an emerging VP is signaled by three distinct features: (i) the progressive rotation of the flower pattern away from the principal directions set by the lattice orientation; (ii) the increase in intensity as the contributions arising from σ̅_1 add to the intrinsic SH intensity, as per eq. <ref>; (iii) the minimal intensity is no longer zero. Since the zero of the intrinsic SH response is usually well resolved experimentally <cit.>, any of these effects can be used for qualitative monitoring of the degree of VP in the system, a fast alternative to fitting the angular patterns to eq.
<ref> when the actual magnitudes of σ̅_1,2 are not required. Note that σ̅_1,2 are complex quantities and, hence, the orientation and shape of the pattern is determined not just by their relative magnitudes but also by their relative phase, as easily seen in the case σ̅_1 = σ̅_2 e^iδ: I_∥/I_0 = 2|σ̅_1|^2 [ 1 + cosδ sin(6ζ) ] (notably, when δ = ±90^∘ the six-fold pattern vanishes and becomes isotropic). An obvious but important implication of these modifications is that the orientation of the SH intensity pattern no longer correlates directly with the lattice orientation in the presence of both intrinsic and VP-induced SHG. But this same fact can be utilized to detect and quantify both intrinsic and valley-induced conductivities. As σ̅_1 is to leading order linear in δμ, a reversal of the VP (δμ → -δμ) changes its sign. On account of the cross-term in eq. <ref>, this translates into a rotation of the pattern by 30^∘, which is equivalent to a reflection about the principal directions set by the lattice orientation, as shown in fig. <ref>. Consequently, the intersection of two patterns associated with opposite VP defines the orientation of the lattice modulo 30^∘. While this still doesn't uniquely distinguish ZZ and AC directions, we note that a unique identification is possible whenever |σ̅_1| ≠ |σ̅_2| (which comprises essentially all cases), because the two non-equivalent intersections will then occur at different SH intensities, and it follows from eq. <ref> that the intersection at lower (higher) intensity, highlighted with black (red) markers in the plot, occurs along the direction 𝐞_1 ⇔ ZZ (𝐞_2 ⇔ AC) when |σ̅_2| > |σ̅_1|. Conversely, when |σ̅_2| < |σ̅_1| the lower (higher) intersection occurs along 𝐞_2 ⇔ AC (𝐞_1 ⇔ ZZ). This is clearly seen in fig.
<ref>, where the reversal of δμ allows the immediate conclusion that the direction 𝐞_1 corresponds to ζ = 0 because the two curves intersect there with the lowest intensity. These considerations are relevant not just because they illustrate how to use all the available information for a facile and expeditious characterization of the nonlinear optical constants, but also because the success of a full nonlinear fit of an experimental trace of I_∥ vs ζ to eq. <ref> can depend strongly on the assumed alignment of the lattice. Finally, it is clear from eq. <ref> that, if the lattice orientation is known, measuring I_∥/I_0 at three non-equivalent orientations such as ζ = 0^∘, 15^∘, 30^∘ suffices to uniquely determine the magnitudes of σ̅_1 and σ̅_2, as well as their relative phase. The discussion so far was done for a linearly polarized excitation field (φ = 0). An alternative consists in analyzing the SH signal as a function of the polarization state of the excitation field determined by φ, which can be tuned continuously with the rotation of a λ/4 plate <cit.>. Since the roles of φ and ζ are very much equivalent in eq. <ref>, an analysis analogous to the one above can be straightforwardly done in this case. For example, with a fixed analyzer at ξ = ξ_zz = 0 (ξ_ac = 30^∘), the intensity can still be read from eq. <ref> with the replacement 3ζ → 2ζ (3ζ → 2ζ + 90^∘). Since the description of the φ-dependence is similar to the one above, we omit it for brevity. § FREQUENCY DEPENDENCE OF SIGMA_1 AND SIGMA_2 In order to determine the typical dependence of both σ̅_1 and σ̅_2 on excitation frequency and chemical potential for representative cases, we focus the analysis now on graphene-based systems, where recent reports have demonstrated the possibility of generating valley-polarized currents with high valley relaxation lengths, both in mono and bilayers <cit.>.
The electronic degrees of freedom of a graphene monolayer are extremely well described by a single-orbital tight-binding (TB) model for electrons in the honeycomb lattice of fig. <ref>(b). This is a single (hopping) parameter model for graphene, which can only have σ̅_1 ≠ 0. In addition, in order to study the characteristics and relative magnitudes of σ̅_1 and σ̅_2 in a non inversion-symmetric system, it is desirable to have a model where inversion symmetry can be broken in a controlled way. That is easily incorporated in the single-orbital tight-binding via a sublattice potential ±Δ/2, which explicitly breaks the sublattice symmetry and allows one to study the effects of VP in a more general “gapped graphene” setting. Whereas the case of graphene should be captured with good accuracy within this framework, the case of “gapped graphene” is expected to convey the main qualitative features expected in gapped systems such as doped MoS_2 (doping suppresses excitonic effects, and renders a single-particle description of the optical response appropriate). The second order conductivity tensor is computed perturbatively for a translationally-invariant system, treating the interaction with light via the direct coupling, 𝐫 · 𝐄, in the dipole approximation, as described in references <cit.>. We consider only the clean limit, but account phenomenologically for disorder broadening of the conductivity. Each component σ̅^(2)_λαβ(ω_1,ω_2) is obtained from the formal result (25) of reference <cit.>. Our results show explicitly that, as expected, the 𝐤-space integration for small photon energies ħω ≤ γ_0 is dominated by the vicinity of the K points. This allows us to use the equilibrium results to compute the contribution of each separate valley at different chemical potential by restricting the momentum integration to either of the shaded regions in fig. <ref>(c), while still working with the full tight-binding.
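A minimal sketch of this two-band model is given below (parameter values and names are ours, chosen for illustration only: γ_0 the hopping and Δ the sublattice potential); it verifies numerically that the sublattice term opens a direct gap of exactly Δ at the K point.

```python
import numpy as np

def gapped_graphene_bands(kx, ky, gamma0=3.0, delta=0.5, a=1.0):
    """Two bands of the honeycomb tight-binding model with sublattice
    potential +/- delta/2:  H(k) = [[delta/2, gamma0*f(k)],
    [gamma0*conj(f(k)), -delta/2]], where f(k) sums the three
    nearest-neighbor phase factors (a is the C-C distance)."""
    # nearest-neighbor vectors of the honeycomb lattice
    d = a * np.array([[1.0, 0.0],
                      [-0.5, np.sqrt(3) / 2],
                      [-0.5, -np.sqrt(3) / 2]])
    f = np.sum(np.exp(1j * (d @ np.array([kx, ky]))))
    e = np.sqrt((delta / 2) ** 2 + gamma0 ** 2 * abs(f) ** 2)
    return -e, e

# f(k) vanishes at the K point, so the direct gap there is exactly delta
K = (2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3)))
lower, upper = gapped_graphene_bands(*K)
```

Because the full f(k) is kept, the model automatically retains the trigonal warping of the bands away from K, which, as noted below, is essential for a finite SHG in the clean limit.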
Being able to keep the full tight-binding bandstructure, rather than a Dirac-type approximation, is important because, on the one hand, this allows us to immediately accommodate any refinement of the bandstructure model or straightforwardly extend the analysis to a different material. On the other hand, SHG in the clean limit arises only when the trigonal warping of the bands is explicitly considered <cit.>, which is guaranteed in the TB scheme. Starting with the case of graphene, it is instructive to consider first the individual contribution of each valley. Since the point group symmetry of K has no inversion, each valley contributes a finite SHG (through σ̅_1 ≠ 0) but, in equilibrium (μ_𝐊 = μ_𝐊'), time-reversal symmetry forces an exact cancellation of each valley's contribution, and a system such as graphene has no intrinsic SHG (cf. dashed curves in the top panel of fig. <ref>). When a VP is induced as in fig. <ref>, there is no cancellation anymore and the overall effect at finite frequency is the appearance of two features at ω = μ and 2μ and a strong enhancement when ω → 0, consistent with two previous reports based on a related calculation in the Dirac approximation <cit.>. The bottom panel shows the decomposition of σ̅_1 in terms of the inter and intraband contributions defined in references <cit.> [We follow the notation of <cit.>, where ee represents purely interband, ie and ei mixed inter-intraband processes, and ii purely intraband processes. These results are in qualitative agreement with <cit.>, but exhibit magnitudes 500-fold larger, as the intensity of the resonant features increases significantly at low temperature.].
Whereas the behavior at ħω = μ and 2μ is due to the resonant denominators coming from interband processes, the signal is much amplified towards the DC limit because the lack of inversion within each valley implies that when ω → 0 the response is dominated by purely intraband transitions, since the conductivity terms σ_1^(ii) are now strictly finite (they cancel by symmetry in equilibrium <cit.>). We note that, despite being at the same level of single-particle approximation, our results in fig. <ref> disagree with the previous calculations of the SHG in valley-polarized graphene <cit.> (which, in turn, themselves disagree with each other [With respect to Ref. <cit.>, despite capturing the divergence ∝ 1/ω^2 in the DC limit, our results are qualitatively different in the range μ ≲ ħω ≲ 2μ in what regards the shape of the resonances at μ and 2μ, as well as the relative sign of σ̅_1 in the two limits ω → 0, ∞. Regarding the results of Ref. <cit.>, there is some qualitative agreement, but a detailed analysis shows that the results do not match quantitatively in the amplitude and shape of the resonances.]). We attribute these differences to the long-standing problem of taking proper account of the intra-band contributions in the calculation of nonlinear response functions. Since a VP leads to explicitly finite intra-band terms even in the presence of a band-gap (see below), such contributions must be handled with care, which is addressed here in the framework of Aversa and Sipe that has been proven reliable in the DC limit <cit.>. The dependence of this valley-induced SHG on T and μ is addressed in fig. <ref>. The features at ω = μ and 2μ are strongly temperature-dependent and disappear as soon as k_B T ≳ δμ (top panel) because, at this point, the temperature broadening of the Fermi-Dirac function whittles down the effective valley polarization. At fixed T, the response grows linearly with δμ <cit.> when ω ≲ μ, except near the resonances at ω = μ and 2μ.
Setting Δ ≠ 0 explicitly breaks the inversion symmetry of graphene and an intrinsic SHG (σ̅_2 ≠ 0) obtains. For definiteness, consider the case when Δ/2 and μ are comparable, which we illustrate in fig. <ref> for δμ = 10 meV at T = 50 K. We see that lower frequencies ω ≲ μ + Δ/2 are dominated by the VP mechanism (|σ̅_1| ≫ |σ̅_2|), while the intrinsic response dominates for ω ≳ μ. This happens because σ̅_2 is Pauli-blocked at ω ≲ μ but, as seen above, σ̅_1 is enhanced at lower frequencies and varies weakly with μ (except if |μ ± δμ| ≈ Δ/2, cf. black curve, since then VP is strongly affected by small changes in μ). As a result, even in a situation where σ̅_1 and σ̅_2 might have comparable maximum magnitudes, it is possible to separate the VP-dominated and intrinsic-dominated regimes by tuning the relative position of the excitation frequency and μ, since the latter can be used to push up the spectral region for which Re σ̅_2 ≠ 0. Furthermore, the rapid whittling of σ̅_1 when k_B T ≳ δμ, in contrast with the robustness of σ̅_2 up to temperatures significantly above room temperature <cit.>, results in a strong temperature dependence of the ratio |σ̅_1|/|σ̅_2|. § CONCLUDING REMARKS We studied the generation and polarization dependence of SH in threefold symmetric 2D materials with a finite VP, and performed specific microscopic calculations of the SH conductivity for a model that applies (accurately) to graphene and (qualitatively) to semiconducting TMD such as MoS_2. Our results show that VP and intrinsic (when present) quadratic response generate distinct contributions with contrasting symmetry properties, which can be disentangled by analyzing the dependence of SHG on the orientation of the polarization plane [cf. eq.
<ref>] or on the state of polarization φ. To achieve this, the SHG signal I_∥ can be used to determine the orientation of the lattice by either reversing the sign of the VP, or by probing the dependence on ζ at photon energies above (below) μ, in the regime dominated by the intrinsic (VP) contribution, where the maxima indicate the armchair (zigzag) directions of the lattice. Knowledge of the lattice orientation (thus obtained or otherwise) permits a direct application of eq. <ref> to extract the two independent nonlinear optical constants σ̅_1 and σ̅_2. Since σ̅_1 is proportional to δμ, the SH fingerprint of these systems can be used to directly identify and quantify an underlying imbalance between the populations in the valleys 𝐊 and 𝐊'. If performed with a small spot size in a scanning mode, such measurements provide a means to directly map valley polarization throughout a system, measure the spatial decay of valley currents, and investigate the possibility or efficacy of their injection across heterostructure junctions and interfaces. Our data for σ_iii is presented in units of σ_0 = 2.88 × 10^-15 S m/V. If converted to 3D quadratic susceptibilities using an effective graphene thickness of d = 3.4 Å, this corresponds to χ^(2) = σ_0/(ω ε_0 d) ≈ 6.3 nm/V at ω = 0.1 eV. As a reference, χ^(2) in a good non-linear bulk crystal has typical values of 0.01 nm/V (ZnO) <cit.>, 0.5 nm/V (GaAs, MoS_2) <cit.>, 2 nm/V (monolayer GaSe) <cit.>. Hence, SHG due to valley polarization can largely exceed the typical non-linear response of bulk materials such as GaSe. The ability to vary the reference Fermi level in most atomically thin crystals through gating <cit.>, when combined with frequency-dependent measurements, further expands the versatility of SH spectroscopy to assess valley-dependent properties, rendering it a valuable characterization tool in the nascent field of valleytronics. FH thanks A. H. Castro Neto and M. Milletarì for their support and discussions throughout this project.
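The quoted unit conversion can be checked with standard physical constants (our script; CODATA values):

```python
# Check of the quoted conversion chi^(2) = sigma_0 / (omega * eps0 * d)
e = 1.602176634e-19         # elementary charge [C]
hbar = 1.054571817e-34      # reduced Planck constant [J s]
eps0 = 8.8541878128e-12     # vacuum permittivity [F/m]

sigma0 = 2.88e-15           # [S m / V], value quoted in the text
d = 3.4e-10                 # [m], effective graphene thickness
omega = 0.1 * e / hbar      # [rad/s], for a photon energy of 0.1 eV

chi2 = sigma0 / (omega * eps0 * d)   # [m/V]
print(chi2 * 1e9)                    # ~6.3 nm/V, as stated in the text
```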
He was supported by the National Research Foundation Singapore under the CRP award NRF-CRP6-2010-05 and by the QUSCOPE center sponsored by the Villum Foundation. VMP was supported by the Singapore Ministry of Education through grant MOE2015-T2-2-059. Numerical computations were carried out at the HPC facilities of the NUS Centre for Advanced 2D Materials.
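The conversion quoted above between the sheet-conductivity scale and an effective bulk susceptibility, χ^(2) = σ_0/(ω ε_0 d), is easy to verify numerically. The sketch below is illustrative only; the constants are standard physical constants, and ω = 0.1 eV is understood as the photon energy ħω converted to an angular frequency.

```python
# Numerical check of chi^(2) = sigma_0 / (omega * eps_0 * d).
hbar = 6.582119569e-16   # reduced Planck constant, eV*s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
sigma0 = 2.88e-15        # conductivity scale quoted in the text, S*m/V
d = 3.4e-10              # effective graphene thickness, m
omega = 0.1 / hbar       # photon energy of 0.1 eV as an angular frequency, rad/s

chi2 = sigma0 / (omega * eps0 * d)  # result in m/V
print(round(chi2 * 1e9, 1))         # -> 6.3 (nm/V), matching the quoted value
```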
http://arxiv.org/abs/1702.08181v2
{ "authors": [ "F. Hipolito", "Vitor M. Pereira" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170227082609", "title": "Second harmonic spectroscopy to optically detect valley polarization in 2D materials" }
Gain and loss in open quantum systems

Hichem Eleuch^1[email: hichemeleuch@tamu.edu] and Ingrid Rotter^2[email: rotter@pks.mpg.de, corresponding author]

December 30, 2023
=======================================================================================================================

We present a simple, yet useful result about the expected value of the determinant of a random sum of rank-one matrices. Computing such expectations in general may involve a sum over exponentially many terms. Nevertheless, we show that an interesting and useful class of such expectations that arise in, e.g., D-optimal estimation and random graphs can be computed efficiently via computing a single determinant.

§ PROBLEM DEFINITION

* [n] ≜ {1,2,…,n}, and for any finite set 𝒲, binom(𝒲,k) is the set of k-subsets of 𝒲.

* Suppose we are given a pair of m real n-vectors, {𝐚_i}_i=1^m and {𝐛_i}_i=1^m. Define
𝐀 ≜ [ 𝐚_1 𝐚_2 ⋯ 𝐚_m ], 𝐁 ≜ [ 𝐛_1 𝐛_2 ⋯ 𝐛_m ].

* Let {π_i}_i=1^m be m independent Bernoulli random variables distributed as
π_i ∼ Bernoulli(p_i), i ∈ [m], π_i ⊥ π_j, i,j ∈ [m], i ≠ j,
where {p_i}_i=1^m are given. Define 𝐩 ≜ [p_1 p_2 ⋯ p_m]^⊤ and π ≜ [π_1 π_2 ⋯ π_m]^⊤.

* We are interested in computing the expression below,
e(𝐀,𝐁,𝐩) ≜ 𝔼_π [det(∑_i=1^m π_i 𝐚_i 𝐛_i^⊤)] = 𝔼_π [det(𝐀 Π 𝐁^⊤)],
where Π ≜ diag(π_1,π_2,…,π_m). Note that the naive way of computing this expectation leads to a computationally intractable sum over {0,1}^m.

§ MAIN RESULT

e(𝐀,𝐁,𝐩) = det(∑_i=1^m p_i 𝐚_i 𝐛_i^⊤) = det(𝐀 𝐏 𝐁^⊤),
where 𝐏 ≜ diag(p_1,p_2,…,p_m).

The proof outline is as follows: Step 1. First, the Cauchy-Binet formula is used to expand the determinant as a sum over binom(m,n) terms. Step 2. The expected value of each of the binom(m,n) terms can be easily computed. Step 3. Finally, the Cauchy-Binet formula is applied again to shrink the sum.

We begin by applying the Cauchy-Binet formula:
𝔼_π [det(∑_i=1^m π_i 𝐚_i 𝐛_i^⊤)] = 𝔼_π [∑_𝒬 ∈ binom([m],n) det(∑_k ∈ 𝒬 π_k 𝐚_k 𝐛_k^⊤)] = ∑_𝒬 ∈ binom([m],n) 𝔼_π [det(∑_k ∈ 𝒬 π_k 𝐚_k 𝐛_k^⊤)].
Since |𝒬| = n, we will have rank(∑_k ∈ 𝒬 π_k 𝐚_k 𝐛_k^⊤) < n if there exists k ∈ 𝒬 for which π_k = 0. Hence, the determinant can be non-zero only when π_k = 1 for all k ∈ 𝒬.
Therefore,
det(∑_k ∈ 𝒬 π_k 𝐚_k 𝐛_k^⊤) = det(∑_k ∈ 𝒬 𝐚_k 𝐛_k^⊤) iff π_k = 1 for all k ∈ 𝒬, and 0 otherwise.
But from the independence assumption we know that
ℙ [⋀_k ∈ 𝒬 π_k = 1] = ∏_k ∈ 𝒬 p_k.
Each individual expectation in (<ref>) can be computed as follows.
𝔼_π [det(∑_k ∈ 𝒬 π_k 𝐚_k 𝐛_k^⊤)] = det(∑_k ∈ 𝒬 𝐚_k 𝐛_k^⊤) ∏_k ∈ 𝒬 p_k = det(∑_k ∈ 𝒬 p_k 𝐚_k 𝐛_k^⊤).
Plugging (<ref>) back into (<ref>) yields
𝔼_π [det(∑_i=1^m π_i 𝐚_i 𝐛_i^⊤)] = ∑_𝒬 ∈ binom([m],n) det(∑_k ∈ 𝒬 p_k 𝐚_k 𝐛_k^⊤).
Note that (<ref>) is nothing but the Cauchy-Binet expansion of det(∑_i=1^m p_i 𝐚_i 𝐛_i^⊤). This concludes the proof.

§ MOTIVATION & APPLICATIONS

e(𝐀,𝐁,𝐩) arises in the following problems:

* Estimation: Suppose 𝐱 ∈ ℝ^n is an unknown quantity to be estimated using m observations {z_i}_i=1^m (m ≥ n) generated according to
𝐳 = 𝐇𝐱 + ϵ, where ϵ ∼ 𝒩(0,Σ),
where 𝐳 ≜ [z_1 z_2 ⋯ z_m]^⊤. To simplify our notation, let us define 𝐇̃ ≜ Σ^-1/2 𝐇. The maximum likelihood estimator 𝐱̂ has the following form:
𝐱̂ = (𝐇̃^⊤𝐇̃)^-1 𝐇̃^⊤ Σ^-1/2 𝐳.
It is well known that 𝐱̂ is unbiased and efficient; i.e., it achieves the Cramér-Rao lower bound,
Cov[𝐱̂] = (𝐇̃^⊤𝐇̃)^-1.
Geometrically speaking, the hypervolumes of the uncertainty hyperellipsoids are proportional to √(det Cov[𝐱̂]) (see, e.g., <cit.>). The D-optimality (determinant-optimality) criterion is defined as (det Cov[𝐱̂])^-1. Note that det Cov[𝐱̂] = (det ℱ)^-1 where ℱ ≜ 𝐇̃^⊤𝐇̃ is the so-called Fisher information matrix. Hence, minimizing the determinant of the estimation error covariance matrix is equivalent to maximizing the D-optimality criterion, det(𝐇̃^⊤𝐇̃). Now consider the following scenarios.

* Sensor Failure: The ith "sensor" may "fail" independently with probability 1-p_i, for all i ∈ [m]. In this case, the row corresponding to each failed sensor has to be removed from 𝐇̃. Hence, e(𝐇̃^⊤,𝐇̃^⊤,𝐩) gives the expected value of the D-optimality criterion.

* Sensor Selection: The goal in D-optimal sensor selection is to select a subset (e.g., k-subset) of m available sensors (observations) such that the D-optimality criterion is maximized. Joshi and Boyd <cit.> proposed an approximate solution to this problem through convex relaxation.
In <cit.>, we showed that their convex program can be interpreted as the problem of finding the optimal probabilities {p_i}_i=1^m for randomly selecting (e.g., k) sensors via independent coin tosses such that the expected value of the D-optimality criterion, i.e., e(𝐇̃^⊤,𝐇̃^⊤,𝐩), is maximized. See <cit.> for the details. For sufficiently smooth nonlinear measurement models, 𝐇 should be replaced by the normalized Jacobian of the measurement function.

* Spanning Trees in Random Graphs:[We first presented Theorem <ref>, and its special case used for computing the weighted number of spanning trees, in <cit.>. Recently we discovered an earlier result for computing the expected number of spanning trees in unweighted anisotropic random graphs by Joel E. Cohen in 1986 <cit.>. Cohen in <cit.> provides a different proof and extends his result to the case of random directed graphs. Our result, however, considers weighted graphs, while our Theorem <ref> extends it to the general case of a random sum of arbitrary rank-one matrices.] Networks with "reliable" (against, e.g., noise in estimation, or failure in communication) topologies are crucial in many applications across science and engineering. In general, the notion of reliability in networks is closely related to graph connectivity. Among the existing combinatorial and spectral graph connectivity criteria, the number of spanning trees (sometimes referred to as graph complexity or tree-connectivity) stands out: despite its combinatorial origin, it can also be characterized solely by the spectrum of the graph Laplacian (Kirchhoff) matrix. This result is due to Kirchhoff's matrix-tree theorem (and its extensions): Consider graph 𝒢 = (𝒱,ℰ,w) where 𝒱 = {v_i}_i=0^n, ℰ ⊆ binom(𝒱,2), and w : ℰ → ℝ_>0. The reduced Laplacian matrix of 𝒢, denoted by 𝐋_𝒢, is obtained by removing an arbitrary row and the corresponding column from the (weighted) Laplacian matrix of 𝒢; e.g., those of v_0.
The weighted number of spanning trees is given by
t_w(𝒢) ≜ ∑_𝒯 ∈ 𝕋(𝒢) ∏_e ∈ ℰ(𝒯) w(e) = det(𝐋_𝒢),
where 𝕋(𝒢) is the set of all spanning trees of 𝒢, and ℰ(𝒯) denotes the edge set of graph 𝒯. Note that in the case of unit weights, t_w(𝒢) is simply the number of spanning trees in 𝒢. Now consider a random graph 𝒢_π whose ith edge is "operational" with probability p_i, independent of other edges (Figure <ref>).[Here, "operational" means that the corresponding vertices are connected via an edge.] Define indicator variables {π_i}_i=1^m such that π_i = 1 iff the ith edge is operational, otherwise π_i = 0. The reduced (unweighted) incidence matrix of 𝒢, 𝐀 = [𝐚_1 𝐚_2 ⋯ 𝐚_m], is obtained by removing an arbitrary row from the (unweighted) incidence matrix of 𝒢. From Theorem <ref> we know that
𝔼_π [ t_w(𝒢_π) ] = 𝔼_π [ det( ∑_i=1^m π_i w(e_i) 𝐚_i 𝐚_i^⊤ ) ].
Define 𝐀_w ≜ 𝐀√(𝐖), in which 𝐖 ≜ diag(w(e_1), w(e_2), …, w(e_m)). Note that this expression is equal to e(𝐀_w,𝐀_w,𝐩). From Theorem <ref> we have
𝔼_π [ t_w(𝒢_π) ] = 𝔼_π [ det( ∑_i=1^m π_i w(e_i) 𝐚_i 𝐚_i^⊤ ) ] = e(𝐀_w,𝐀_w,𝐩) = det( ∑_i=1^m p_i w(e_i) 𝐚_i 𝐚_i^⊤ ) = ∑_𝒯 ∈ 𝕋(𝒢) ∏_e_i ∈ ℰ(𝒯) p_i w(e_i).
It is worth mentioning that, according to the above equations, the expected weighted number of spanning trees is given by computing the weighted number of spanning trees after multiplying the edge weights by their probabilities; i.e., 𝔼_π [ t_w(𝒢_π) ] = t_w_p(𝒢), where w_p : e_i ↦ p_i w(e_i).

§ RANDOM SUM OF RANK-R MATRICES

It is not immediately clear whether there is an efficient way of computing
𝔼_π [ det( ∑_i=1^m π_i 𝐀_i 𝐁_i^⊤ ) ]
in which 𝐀_i and 𝐁_i belong to ℝ^n × r_i for i ∈ [m]. Nevertheless, the following results provide some preliminary insights into this more general case. The proofs of the following lemmas follow that of Theorem <ref>—i.e., the Cauchy-Binet formula.

𝔼_π [ det( ∑_i=1^m π_i 𝐀_i 𝐀_i^⊤ ) ] ≥ det( ∑_i=1^m p_i 𝐀_i 𝐀_i^⊤ ).

Consider a random graph 𝒢_π (over graph 𝒢) whose edge set ℰ is partitioned into k blocks {ℰ_i}_i=1^k.
The edges in the ith block are operational, independent of other blocks, with probability p_i. Let 𝐀_i be the collection of the columns of the reduced weighted incidence matrix that belong to the ith block of edges ℰ_i. We have
𝔼_π [ t_w(𝒢_π) ] = 𝔼_π [ det( ∑_i=1^k π_i 𝐀_i 𝐀_i^⊤ ) ] = ∑_𝒯 ∈ 𝕋(𝒢) ∏_e_i ∈ ℰ(𝒯) p_b_i^1/n_b_i(𝒯) w(e_i),
where b_i is the block index that contains e_i and n_i(𝒯) ≜ |ℰ(𝒯) ∩ ℰ_i|.
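The main result is easy to check numerically. The sketch below (illustrative, not from the paper) compares the brute-force expectation over all 2^m realizations of π against the single-determinant closed form det(A·diag(p)·B^⊤):

```python
import itertools
import numpy as np

def expected_det_bruteforce(A, B, p):
    """E_pi[det(sum_i pi_i a_i b_i^T)] by enumerating all 2^m outcomes."""
    n, m = A.shape
    total = 0.0
    for pi in itertools.product([0, 1], repeat=m):
        prob = np.prod([p[i] if s else 1.0 - p[i] for i, s in enumerate(pi)])
        M = sum(s * np.outer(A[:, i], B[:, i]) for i, s in enumerate(pi))
        total += prob * np.linalg.det(M)
    return total

def expected_det_theorem(A, B, p):
    """Closed form: E_pi[det(A Pi B^T)] = det(A diag(p) B^T)."""
    return np.linalg.det(A @ np.diag(p) @ B.T)

rng = np.random.default_rng(7)
n, m = 3, 6
A = rng.standard_normal((n, m))
B = rng.standard_normal((n, m))
p = rng.uniform(size=m)
assert np.isclose(expected_det_bruteforce(A, B, p),
                  expected_det_theorem(A, B, p))
```

Note that the brute-force sum already has 2^6 = 64 terms for this tiny instance, while the closed form is a single n × n determinant.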
http://arxiv.org/abs/1702.08247v3
{ "authors": [ "Kasra Khosoussi" ], "categories": [ "cs.DS", "math.PR" ], "primary_category": "cs.DS", "published": "20170227115520", "title": "On the Expected Value of the Determinant of Random Sum of Rank-One Matrices" }
^1Department of Physical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Knowledge City, Sector 81, Mohali 140306, India. ^2Department of Condensed Matter Physics and Material Sciences, Tata Institute of Fundamental Research, Mumbai 400005, India.

The honeycomb lattice iridates A_2IrO_3 (A = Na, Li) are candidates for realization of the Kitaev-Heisenberg model, although their proximity to Kitaev's quantum spin liquid (QSL) is still debated. We report on heat capacity C and entropy S_mag for A_2IrO_3 (A = Na, Li) in the temperature range 0.075 K ≤ T ≤ 155 K. We find a well-separated two-peak structure for the magnetic heat capacity C_mag for both materials, and S_mag for Na_2IrO_3 shows a shoulder between the peaks with a value close to (1/2)Rln2. These features signal the fractionalization of spins into Majorana Fermions close to Kitaev's QSL [Phys. Rev. B 92, 115122 (2015); Phys. Rev. B 93, 174425 (2016)]. These results provide the first thermodynamic evidence that A_2IrO_3 are situated close to the Kitaev QSL. Additionally, we measure the high temperature (T ≤ 1000 K) magnetic susceptibility χ and estimate the Weiss temperature θ in the true paramagnetic state. We find θ ≈ -127 K and -105 K for Na_2IrO_3 and Li_2IrO_3, respectively.

Heat capacity evidence for proximity to the Kitaev QSL in A_2IrO_3 (A = Na, Li)

Kavita Mehlawat^1, A.
Thamizhavel^2 and Yogesh Singh^1

December 30, 2023
===============================================================================

The family of layered honeycomb lattice iridates A_2IrO_3 (A = Na, Li) has garnered a lot of recent attention as possible realizations of the Kitaev-Heisenberg model <cit.>. Although these materials were found to be magnetically ordered at low temperatures <cit.>, there is growing evidence of dominant Kitaev interactions from ab initio estimations of the exchange parameters <cit.>, from inelastic Raman scattering measurements <cit.>, and from direct evidence of dominant bond-directional exchange interactions in Na_2IrO_3 <cit.>. It is still debated how close or far the real materials are from the quantum spin liquid (QSL) expected in the strong Kitaev limit. Recently, predictions for signatures of Kitaev's QSL in Raman scattering experiments have been made <cit.> and have been observed in experiments on Na_2IrO_3 and (Na_1-xLi_x)_2IrO_3 <cit.>. Optical spectroscopy measurements on Na_2IrO_3 also claimed to observe signatures consistent with proximity to a QSL state <cit.>. However, direct thermodynamic evidence is still lacking due to the unavailability of clear criteria applicable to the real materials. Recently, thermodynamic properties of the Kitaev model on a honeycomb lattice have been calculated, and it was shown that the heat capacity would show a two-peak structure coming from the fractionalization of the quantum spins into two kinds (dispersive and dispersionless) of Majorana Fermions <cit.>. The two peaks in the heat capacity would then come from the thermal excitation of these two kinds of Majorana Fermions. The Rln2 entropy of the spins would be shared equally between the two Majoranas, and so the temperature dependence of the magnetic entropy S_mag would show a half-plateau between the two heat capacity peaks, with the value of S_mag at the plateau pinned to (1/2)Rln2 <cit.>. These predictions for the Kitaev model are, however, still
not applicable to the real materials, which are magnetically ordered <cit.> and have terms other than the dominant Kitaev term present in their Hamiltonian <cit.>. More recently, thermodynamic properties have been determined for a generalized Kitaev-Heisenberg model and for the ab initio Hamiltonian for Na_2IrO_3 arrived at in Ref. Yamaji2014. It was predicted that even for a magnetically ordered material proximate to Kitaev's QSL the two-peak structure in the heat capacity would survive, while the plateau in S_mag at (1/2)Rln2 becomes a shoulder at the same numerical value <cit.>. These features vanish when the material is deep into the magnetically ordered state, away from the QSL state <cit.>. Thus, a well-separated two-peak structure in the magnetic heat capacity and a shoulder in S_mag between the two peaks with a value close to (1/2)Rln2 are the predicted hallmarks placing the A_2IrO_3 materials proximate to Kitaev's QSL <cit.>. In this work we report high temperature (T ≤ 1000 K) magnetic susceptibility χ versus temperature and heat capacity measurements in the range 0.075 K ≤ T ≤ 155 K on polycrystalline samples of the honeycomb lattice iridates A_2IrO_3. Our measurements provide three new results.
(i) The high temperature χ(T) data gave the Weiss temperatures θ = -127(4) K and -105(2) K for Na_2IrO_3 and Li_2IrO_3, respectively. While θ for Na_2IrO_3 is similar to values found previously using lower temperature (T < 300 K) χ(T) data <cit.>, θ for Li_2IrO_3 is larger by a factor of 3 compared to the values found using lower temperature χ(T) data <cit.>. This indicates that, in contrast to what was believed previously, the magnetic energy scales in both materials should be similar. (ii) The magnetic contribution to the heat capacity shows a two-peak structure, and the T dependence of the entropy S_mag shows a shoulder between the two peaks with a value close to (1/2)Rln2 for Na_2IrO_3. This is in excellent agreement with recent predictions <cit.> and provides the first thermodynamic evidence that Na_2IrO_3 lies proximate to Kitaev's QSL. The results for Li_2IrO_3 are qualitatively similar, although the quantitative agreement is not as strong. (iii) Lastly, the low temperature C for Na_2IrO_3 and Li_2IrO_3 show very different T dependences. While C(T) for Na_2IrO_3 shows a conventional T^3 behaviour, C(T) for Li_2IrO_3 shows a clear T^2 dependence, suggesting novel 2-dimensional magnetic excitations for this material.

Experimental: Polycrystalline samples of A_2IrO_3 were synthesized using a solid state reaction method, starting with high purity chemicals and heating the pelletized mixtures between 900 and 1000 °C in 50 °C steps. The step-wise heating, instead of going directly to 1000 °C, was found to be essential for the formation of high quality samples. Powder X-ray diffraction on crushed pieces of the samples confirmed the formation of single phase samples with the correct lattice parameters <cit.>. The magnetic susceptibility χ versus temperature data were measured in the temperature range T = 2 K to 400 K using the VSM option of a physical property measurement system from Quantum Design (QD-PPMS). The χ(T) data between T = 300 K and 1000 K were measured using the VSM oven option of
the QD-PPMS. The heat capacity C data were measured in the temperature range 2 K to 155 K using a QD-PPMS. The C data from 75 mK to 3 K were measured using the dilution refrigerator (DR) option of a QD-PPMS.

Magnetic Susceptibility: Figure <ref> shows the χ versus T data for A_2IrO_3 between 2 and 1000 K. The two separate measurements, between 2 and 400 K and between 300 and 1000 K, for each sample match quite well. Sharp anomalies were observed at T_N ≈ 15 K for both samples, in agreement with previous reports <cit.>. The new data are the ones above T = 400 K. These data are plotted as 1/χ(T) in the inset of Fig. <ref>. Data for Na_2IrO_3 are approximately linear in this temperature range. The data above T ≈ 750 K were fit by the expression χ(T) = χ_0 + C/(T - θ), with χ_0, C, and θ as fit parameters. Here χ_0 is a T-independent term, C is the Curie constant, and θ is the Weiss temperature. The fit gave the values χ_0 = 2.66(3) × 10^-5 cm^3/mol, C = 0.395(1) cm^3 K/mol, and θ = -127(4) K. These values are close to those found by fitting the χ(T) data for T ≤ 300 K <cit.>. In particular, the value of θ, which gives the overall scale of the magnetic interactions, comes out to be very close to the value -125(6) K found previously for polycrystalline Na_2IrO_3 <cit.>. For Li_2IrO_3, the 1/χ(T) data shown in the inset of Fig.
<ref> show a prominent downward curvature. This suggests a large and positive χ_0. The large curvature also means that to obtain a reliable value of θ one needs to be well in the paramagnetic state. The fit to the data above T = 700 K gave the values χ_0 = 1.45(3) × 10^-4 cm^3/mol, C = 0.403(2) cm^3 K/mol, and θ = -105(2) K. The value of χ_0 is slightly larger than obtained previously. This suggests a large Van Vleck paramagnetic contribution for Li_2IrO_3. The most conspicuous difference between the low temperature and high temperature fit parameters is the value of θ = -105(2) K, which is about a factor of 3 larger in magnitude compared to the value -33 K obtained previously <cit.>. This indicates that, in contrast to what was believed previously based on the old θ values, the magnetic energy scales in both materials might be similar.

Heat Capacity: Figure <ref> (a) shows the heat capacity divided by temperature, C/T, versus T data for Na_2IrO_3 between T = 75 mK and 155 K. A sharp λ-type anomaly near 15 K confirms the antiferromagnetic transition for Na_2IrO_3 <cit.>. This anomaly is much sharper than observed for previous polycrystalline samples <cit.>, indicating the high quality of the sample used in the current study. The data at lower temperatures show an upturn below about 1 K, which we will return to later. We also show in Fig. <ref> (a) the approximate lattice contribution, obtained by measuring the heat capacity of the iso-structural non-magnetic analog Na_2SnO_3 and then rescaling the data to account for the molecular mass difference between Na_2IrO_3 and Na_2SnO_3. By subtracting this lattice contribution from the total C(T) one can obtain the magnetic contribution C_mag shown (in units of Rln2) in Fig.
<ref> (b). In addition to the low temperature anomaly, we find another broad peak centered around ≈ 110 K. Such a two-peak structure has been predicted recently for a generalized Kitaev-Heisenberg Hamiltonian for parameters placing the material in the magnetic state proximate to Kitaev's QSL <cit.>. A two-peak structure is, however, not uncommon in frustrated and/or low-dimensional magnetic materials, where the high temperature anomaly occurs when short-ranged magnetic correlations start to develop, while the low temperature peak occurs on the development of long-ranged correlations <cit.>. However, the definitive signature of closeness to Kitaev's QSL has been predicted to be the T dependence of the entropy S_mag, which must show a half-plateau pinned at or close to the value (1/2)Rln2 between the two heat capacity peaks <cit.>. We present the T dependence of S_mag in units of Rln2 in Fig. <ref> (b). We note that there is a distinct shoulder in S_mag(T) between the two heat capacity peaks and that the value of S_mag between the peaks is close to the predicted value (1/2)Rln2, shown as the horizontal dashed line in Fig. <ref> (b). S_mag reaches 90% of Rln2 at the highest T of our measurements. The high temperature peak in C_mag is quite broad and one can see a tail extending to even higher temperatures. It is evident that the full Rln2 entropy will be recovered at a slightly higher temperature. The two-peak structure in the heat capacity and the T dependence of S_mag, with a half-plateau between the peaks and its numerical value ≈ (1/2)Rln2, are in excellent agreement with theoretical predictions and provide direct evidence leading to the inference that Na_2IrO_3 is situated close to Kitaev's QSL.

We now turn to the heat capacity data on Li_2IrO_3. The C/T versus T data for Li_2IrO_3 are shown in Fig.
<ref> (a) between T = 90 mK and 120 K. The lattice contribution, estimated by measuring the heat capacity of the isostructural non-magnetic material Li_2SnO_3 and rescaling the data to account for the difference in molecular masses of Li_2IrO_3 and Li_2SnO_3, is also shown in Fig. <ref> (a). The 15 K anomaly signalling the onset of long-ranged zig-zag magnetic order is clearly visible, as is a weak shoulder around 7 K. This shoulder below the main magnetic anomaly has been observed for all previous polycrystalline samples as well <cit.>. This second anomaly is most likely associated with disorder, as its relative magnitude compared to the 15 K anomaly can be suppressed by improving the quality of the samples <cit.>. It must be noted, however, that the second anomaly cannot be completely suppressed even for the best samples (including single crystals <cit.>), and our current sample is at least as good as the best polycrystalline samples produced thus far <cit.>. As for Na_2IrO_3, the low-T data for Li_2IrO_3 show an abrupt upturn below about 1 K. We will discuss the low temperature data for A_2IrO_3 separately later. The C_mag data for Li_2IrO_3, obtained by subtracting the lattice part from the total C(T), are shown in Fig. <ref> (b). Although there is more scatter in the obtained data compared to Na_2IrO_3, the two-peak structure in C_mag(T) is clearly visible for Li_2IrO_3 too. The two peaks occur at 15 K and ∼ 90 K, respectively. The magnetic entropy S_mag shown in Fig.
<ref> (b) also shows a shoulder between the two peaks, although the quantitative match with predictions is not as strong as for Na_2IrO_3. Specifically, the S_mag value between the two heat capacity peaks reaches only about 35% of Rln2, and the value (1/2)Rln2 is reached only close to the start of the high temperature peak. The value of S_mag at 120 K is only about 65% of Rln2, and it seems unlikely that the rest will be recovered under the tail of the high temperature peak beyond 120 K. The possibility that Li_2SnO_3 isn't a good approximation for the lattice heat capacity of Li_2IrO_3 presents itself. Nevertheless, the C_mag(T) data with the two-peak structure and the S_mag(T) with a plateau between the two peaks are qualitatively consistent with predictions for materials close to Kitaev's QSL <cit.>.

We finally discuss the low temperature behaviours of C(T) for A_2IrO_3 and show that they follow qualitatively different T dependences. Figure <ref> (a) shows the C versus T data for Na_2IrO_3 measured down to T = 75 mK in the DR. The data between 2 K and 4 K follow a T^3 behaviour (shown as the solid curve through the data in Fig. <ref> (a)), expected for a 3-dimensional insulator in the antiferromagnetic state. In this case the T^3 contribution to the heat capacity will be a combination of phonons and antiferromagnetic spin waves. From Fig. <ref> (a) it is evident that below ≈ 2 K there is an excess C(T) above the T^3 contribution. If the T^3 contribution is subtracted from C(T), one gets a difference heat capacity ΔC(T), shown in the inset of Fig.
<ref> (a). A clear anomaly peaked at ≈ 0.8 K can be observed in the ΔC(T) data. The entropy under this peak, however, is quite small (≤ 1% of Rln2), suggesting that it could be extrinsic in origin. The upturn at the lowest temperatures could be the start of a nuclear Schottky anomaly. Figure <ref> (b) shows the C/T versus T data for Li_2IrO_3 measured down to T = 90 mK. Below 2 K the C/T data follow a linear-in-T behaviour down to about 0.3 K, where an abrupt upturn is observed. This upturn could again be the high temperature tail of a nuclear Schottky anomaly. The C ∼ T^2 behaviour for Li_2IrO_3 is unusual and suggests different magnetic excitations compared to Na_2IrO_3, which shows a conventional C ∼ T^3 behaviour.

Summary and Discussion: The A_2IrO_3 materials have a high magnetic energy scale, as suggested by recent Raman <cit.> and RIXS <cit.> measurements as well as by ab initio estimations of the exchange parameters <cit.>. This prompted us to measure the high temperature (T ≤ 1000 K) magnetic susceptibility and use it to estimate the Weiss temperature θ in the true paramagnetic state. This gave the first new result: θ for Li_2IrO_3, which was previously estimated to be ∼ -30 K, is actually ∼ -100 K and much closer to the value for Na_2IrO_3, indicating that the magnetic interaction scales for the two materials are very similar.
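A three-parameter Curie-Weiss fit of the kind used above, χ(T) = χ_0 + C/(T − θ), can be sketched numerically: since χ is linear in χ_0 and C once θ is fixed, one can scan the single nonlinear parameter θ and solve a linear least-squares problem at each step. The sketch below uses noise-free synthetic data generated from the quoted high-temperature Li_2IrO_3 fit values, purely as an illustration of the procedure, not the actual measured data.

```python
import numpy as np

# Quoted high-temperature fit values for Li2IrO3 (cm^3/mol, cm^3 K/mol, K);
# the "data" below are synthetic, generated from these values.
chi0_true, C_true, theta_true = 1.45e-4, 0.403, -105.0
T = np.linspace(700.0, 1000.0, 61)
chi = chi0_true + C_true / (T - theta_true)

best = (np.inf, None, None)
for theta in np.arange(-200.0, 0.0, 0.5):          # scan the nonlinear parameter
    X = np.column_stack([np.ones_like(T), 1.0 / (T - theta)])
    coef = np.linalg.lstsq(X, chi, rcond=None)[0]  # linear solve for (chi0, C)
    resid = np.sum((X @ coef - chi) ** 2)
    if resid < best[0]:
        best = (resid, theta, coef)

_, theta_fit, (chi0_fit, C_fit) = best
print(theta_fit)  # -> -105.0 (recovers the Weiss temperature)
```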
Secondly, motivated by recent theoretical criteria for placing materials close to Kitaev's QSL state <cit.>, we have presented the heat capacity C and magnetic entropy S_mag from T = 75 mK to T > θ. We find a two-peak structure in C_mag, and S_mag shows a clear shoulder with a value (1/2)Rln2 between the two peaks. Nearly the full Rln2 entropy is recovered for Na_2IrO_3 above the high temperature peak. For materials in or close to the Kitaev QSL state, a two-peak structure arises from the fractionalization of the spin into two kinds of Majorana Fermions <cit.>. The entropy Rln2 is released in two equal parts, leading to a plateau in S_mag between the two peaks with a value (1/2)Rln2. Thus our results provide the first thermodynamic evidence of proximity to the Kitaev QSL for Na_2IrO_3. Although the results for Li_2IrO_3 are qualitatively similar, the quantitative agreement with theoretical predictions is not as strong. Lastly, the low temperature C for Li_2IrO_3 shows an unusual T^2 dependence, suggesting 2-dimensional magnetic excitations which could be confirmed in future inelastic scattering measurements on single crystals.

Acknowledgments.– We thank the X-ray facility at IISER Mohali. KM acknowledges UGC-CSIR India for a fellowship. YS acknowledges DST, India for support through Ramanujan Grant #SR/S2/RJN-76/2010 and through DST grant #SB/S2/CMP-001/2013.

Jackeli2009 G. Jackeli and G. Khaliullin, Phys. Rev. Lett. 102, 017205 (2009). Chaloupka2010 J. Chaloupka, G. Jackeli, and G. Khaliullin, Phys. Rev. Lett. 105, 027204 (2010). Singh2010 Y. Singh and P. Gegenwart, Phys. Rev. B 82, 064412 (2010). Liu2011 X. Liu, T. Berlijn, W-G. Yin, W. Ku, A. Tsvelik, Y-J. Kim, H. Gretarsson, Y. Singh, P. Gegenwart, and J. P. Hill, Phys. Rev. B 83, 220403 (2011). Kimchi2011 I. Kimchi and Y.-Z. You, Phys. Rev. B 84, 180407 (2011). Choi2012 S. K. Choi, R. Coldea, A. N. Kolmogorov, T. Lancaster, I. I. Mazin, S. J. Blundell, P. G. Radaelli, Y. Singh, P. Gegenwart, K. R. Choi, S. W. Cheong, P. J.
Baker, C. Stock, and J. Taylor, Phys. Rev. Lett. 108, 127204 (2012). Ye2012 F. Ye, S. Chi, H. Cao, B. C. Chakoumakos, J. A. Fernandez-Baca, R. Custelcean, T. F. Qi, O. B. Korneta, and G. Cao, Phys. Rev. B 85, 180403 (2012). Singh2012 Y. Singh, S. Manni, J. Reuther, T. Berlijn, R. Thomale, W. Ku, S. Trebst, and P. Gegenwart, Phys. Rev. Lett. 108, 127203 (2012). Gretarsson2012 H. Gretarsson, J. P. Clancy, X. Liu, J. P. Hill, E. Bozin, Y. Singh, S. Manni, P. Gegenwart, J. Kim, A. H. Said, D. Casa, T. Gog, M. H. Upton, H-S. Kim, J. Yu, V. M. Katukuri, L. Hozoi, J. van den Brink, and Young-June Kim, Phys. Rev. Lett. 110, 076402 (2013). Comin2012 R. Comin, G. Levy, B. Ludbrook, Z.-H. Zhu, C. N. Veenstra, J. A. Rosen, Y. Singh, P. Gegenwart, D. Stricker, J. N. Hancock, D. van der Marel, I. S. Elfimov, and A. Damascelli, Phys. Rev. Lett. 109, 266406 (2012). Chaloupka2013 J. Chaloupka, G. Jackeli, and G. Khaliullin, Phys. Rev. Lett. 110, 097204 (2013). Gretarsson2013 H. Gretarsson, J. P. Clancy, Y. Singh, P. Gegenwart, J. P. Hill, J. Kim, M. H. Upton, A. H. Said, D. Casa, T. Gog, and Y-J. Kim, Phys. Rev. B 87, 220407 (2013). Foyevtsova2013 K. Foyevtsova, H. O. Jeschke, I. I. Mazin, D. I. Khomskii, and R. Valenti, Phys. Rev. B 88, 035107 (2013). Katukuri2014 V. M. Katukuri, S. Nishimoto, V. Yushankhai, A. Stoyanova, H. Kandpal, S. Choi, R. Coldea, I. Rousochatzakis, L. Hozoi, and J. van den Brink, New J. Phys. 16, 013056 (2014). Yamaji2014 Y. Yamaji, Y. Nomura, M. Kurita, R. Arita, and M. Imada, Phys. Rev. Lett. 113, 107201 (2014). Sizyuk2014 Y. Sizyuk, C. Price, P. Wölfle, and N. B. Perkins, Phys. Rev. B 90, 155126 (2014). Chun2015 S. H. Chun, J-W. Kim, J. Kim, H. Zheng, C. C. Stoumpos, C. D. Malliakas, J. F. Mitchell, K. Mehlawat, Yogesh Singh, Y. Choi, T. Gog, A. Al-Zein, M. Moretti Sala, M. Krisch, J. Chaloupka, G. Jackeli, G. Khaliullin, and B. J. Kim, Nature Phys. 11, 462 (2015). Gupta2016 S. N. Gupta, P. V. Sriluckshmy, K. Mehlawat, A. Balochi, D. K. Mishra, S. R. Hassan, T. V. Ramakrishnan, D.
V. S. Muthu, Y. Singh, and A. K. Sood, Europhys. Lett. 114, 47004 (2016). Alpichshev2015 Zh. Alpichshev, F. Mahmood, G. Cao, and N. Gedik, Phys. Rev. Lett. 114, 017203 (2015). Knolle2014 J. Knolle, G-W. Chern, D. L. Kovrizhin, R. Moessner, and N. B. Perkins, Phys. Rev. Lett. 113, 187201 (2014). Nasu2015 J. Nasu, M. Udagawa, and Y. Motome, Phys. Rev. B 92, 115122 (2015). Yamaji2016 Y. Yamaji, T. Suzuki, T. Yamada, S. I. Suga, N. Kawashima, and M. Imada, Phys. Rev. B 93, 174425 (2016). Hardy2003 V. Hardy, S. Lambert, M. R. Lees, and D. M. Paul, Phys. Rev. B 68, 014424 (2003). Manni-thesis S. Manni, Synthesis and investigation of frustrated honeycomb lattice iridates and rhodates, Ph.D. thesis, Georg-August-Universität Göttingen (2014). Freund2016 F. Freund, S. C. Williams, R. D. Johnson, R. Coldea, P. Gegenwart, and A. Jesche, Nature Scientific Reports 6, 35362 (2016).
http://arxiv.org/abs/1702.08331v1
{ "authors": [ "K. Mehlawat", "A. Thamizhavel", "Yogesh Singh" ], "categories": [ "cond-mat.str-el", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.str-el", "published": "20170227153256", "title": "Heat capacity evidence for proximity to the Kitaev QSL in A$_2$IrO$_3$ ($A =$ Na, Li)" }
^1Faculty of Physics, Adam Mickiewicz University, ul. Umultowska 85, 61-614 Poznań, Poland ^2Institute of Molecular Physics, Polish Academy of Sciences, ul. M. Smoluchowskiego 17, 60-179 Poznań, Poland ^3Department of Physics and Medical Engineering, Rzeszów University of Technology, al. Powstańców Warszawy 6, 35-959 Rzeszów, Poland

Current-induced spin polarization in a two-dimensional electron gas with Rashba spin-orbit interaction is considered theoretically in terms of the Matsubara Green functions. This formalism allows one to describe the temperature dependence of the induced spin polarization. The electron gas is assumed to be coupled to a magnetic substrate via exchange interaction. Analytical and numerical results on the temperature dependence of the spin polarization have been obtained in the linear response regime. The spin polarization has been presented as a sum of two terms: one proportional to the relaxation time and the other related to the Berry phase corresponding to the electronic bands of the magnetized Rashba gas. The spin-orbit torque due to the Rashba interaction is also discussed. Such a torque appears as a result of the exchange coupling between the non-equilibrium spin polarization and the magnetic moment of the underlayer.

71.70.Ej, 75.76.+j, 85.75.-d, 72.25.Mk

Current-induced spin polarization of a magnetized two-dimensional electron gas with Rashba spin-orbit interaction

A. Dyrdał^1, J. Barnaś^1,2 and V. K. Dugaev^3

December 30, 2023
=================================================================================================================

§ INTRODUCTION

Spin-orbit interaction leads, in general, to a number of interesting transport phenomena that enable the generation and control of spin currents in a purely electrical manner. Two of the most prominent examples are the spin Hall and spin Nernst effects.
The former (latter) effect consists in the generation of a pure spin current flowing perpendicularly to an external electric field (temperature gradient) applied to the system. These effects currently play an essential role in the processes of electrical generation and detection of spin currents <cit.>. For instance, the spin current can be used as the origin of a spin torque exerted on the magnetic moments of a ferromagnet in a bilayer system consisting of a magnetic layer attached to a nonmagnetic one with strong spin-orbit coupling. This torque, in turn, may induce magnetic dynamics and can even reverse the magnetic moment of the magnetic layer when the spin current exceeds some critical value. Another consequence of the spin-orbit interaction in a system with mobile electrons is the current-induced nonequilibrium spin polarization of conduction electrons. This effect was predicted theoretically in the '70s <cit.> for a two-dimensional electron gas (2DEG) with Rashba spin-orbit interaction, and was then studied in various systems exhibiting spin-orbit interaction <cit.>. The current-induced spin polarization was also observed experimentally <cit.>, and currently it is attracting the attention of many researchers <cit.>. The current-induced spin polarization can also arise in a magnetic system when it includes spin-orbit coupling. In such a case the induced non-equilibrium spin polarization interacts with the local magnetization via exchange coupling and creates a torque exerted on the magnetic moment <cit.>. Moreover, it has also been shown that not only an external electric field, but also a temperature gradient may lead to spin-orbit driven spin polarization <cit.>. These observations initiated a wide interest in the field- and thermally-induced spin-orbit torques and new ways of magnetization switching that could be an alternative to the switching induced by spin transfer torques <cit.>.
In this paper we present theoretical results on the current-induced spin polarization of a magnetic 2DEG with Rashba spin-orbit interaction. Such a system is a basic model of various magnetic semiconductor heterostructures. The system consists of a 2DEG deposited on a magnetic substrate and interacting with the substrate via exchange interaction (see Fig. 1). To calculate the current-induced spin polarization we use the Matsubara Green function formalism, which enables a description of the temperature variation of the induced spin polarization. We derive some general formulas for the polarization and also present numerical results. The induced spin polarization is shown to generally include a term due to the Berry curvature of the corresponding electron bands. Similar terms also appear in the spin-orbit torques following from the exchange interaction between the electrons and the magnetic underlayer.

The paper is organized as follows. In Section 2 we describe the model system, present the theoretical formalism, and derive general formulas for the current-induced spin polarization. In Section 3 we present analytical and numerical results in some specific situations: first, we consider the nonequilibrium spin polarization in the absence of exchange field (Section 3 A), then we present results for the exchange field oriented perpendicularly to the plane of the 2DEG (Section 3 B), and for the exchange field oriented in the plane of the 2DEG and perpendicular (collinear) to the electric current, Section 3 C (Section 3 D). In Section 4 we discuss the spin polarization in the general case of an arbitrarily oriented exchange field. In Section 5, in turn, we consider the relation of the nonequilibrium spin polarization to the Berry curvature of the corresponding electronic bands. The induced spin-orbit torque is briefly discussed in Section 6, while a summary and final conclusions are given in Section 7.

§ THEORETICAL OUTLINE

We consider a magnetized 2DEG with Rashba spin-orbit interaction, as shown schematically in Fig. 1.
The 2DEG is assumed to be deposited on a ferromagnetic substrate which creates an effective exchange field acting on the electron gas.

§.§ Model

The single-particle Hamiltonian describing such a system can be written in the following form: Ĥ = ħ^2 k^2/2m σ_0 + α (k_y σ_x - k_x σ_y) + 𝐇·σ, where σ_0 and σ_n (for n = x, y, z) are the unit and Pauli matrices defined in the spin space, σ = (σ_x, σ_y, σ_z), the parameter α in the second term of the Hamiltonian describes the strength of the Rashba spin-orbit interaction, while k_x and k_y are the in-plane wavevector components. The third term of the above Hamiltonian describes the effect of the exchange field due to a magnetic substrate. This exchange field can be written as 𝐇 = J𝐌, with J standing for the exchange parameter (J>0 for a ferromagnetic coupling between the 2DEG and the magnetic substrate). Note that the exchange field 𝐇 is measured here in energy units. In spherical coordinates (see Fig. 1), the components of the exchange field, 𝐇 = (H_x, H_y, H_z), can be written as H_x = JM_x = JM sin(θ) cos(ξ), H_y = JM_y = JM sin(θ) sin(ξ), H_z = JM_z = JM cos(θ), where M = |𝐌|, while θ and ξ are the polar and azimuthal angles, as defined in Fig. 1. In general, we take into account the temperature dependence of the magnetization M(T), and assume it obeys Bloch's law, M = M(T) = M_0[1 - (T/T_c)^3/2], where T_c is the Curie temperature of the magnetic substrate and M_0 is the corresponding zero-temperature magnetization.

Eigenvalues of the Hamiltonian (<ref>) take the form E_± = ε_k ± λ_𝐤, where ε_k = ħ^2 k^2/2m (with k^2 = k_x^2 + k_y^2), while λ_𝐤 = [H^2 + α^2 k^2 - 2α (H_y k_x - H_x k_y)]^1/2. Below we present the theoretical method based on the Matsubara Green function formalism, and also derive a general formula for the nonequilibrium spin polarization induced by an external electric field.
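The eigenvalues E_± = ε_k ± λ_𝐤 can be cross-checked by direct numerical diagonalization of the 2×2 Hamiltonian. A minimal sketch (NumPy, ħ = m = 1 units; all parameter values below are illustrative, not those used in the paper):

```python
import numpy as np

# Illustrative parameters (hbar = m = 1 units, arbitrary values)
hbar, m, alpha = 1.0, 1.0, 0.5
H = np.array([0.3, -0.2, 0.4])            # exchange field (H_x, H_y, H_z)

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(kx, ky):
    """2x2 Hamiltonian: kinetic term + Rashba term + exchange field H.sigma."""
    eps = hbar**2 * (kx**2 + ky**2) / (2*m)
    return eps*s0 + alpha*(ky*sx - kx*sy) + H[0]*sx + H[1]*sy + H[2]*sz

def e_analytic(kx, ky):
    """Analytic eigenvalues E_± = eps_k ± lambda_k, returned in ascending order."""
    eps = hbar**2 * (kx**2 + ky**2) / (2*m)
    lam = np.sqrt(H @ H + alpha**2*(kx**2 + ky**2)
                  - 2*alpha*(H[1]*kx - H[0]*ky))
    return np.array([eps - lam, eps + lam])

kx, ky = 0.8, -0.5
print(np.linalg.eigvalsh(hamiltonian(kx, ky)), e_analytic(kx, ky))
```

The agreement of the two outputs also confirms that λ_𝐤 is the norm of the effective spin-space field combining the Rashba and exchange terms.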
§.§ Method and general solution for current-induced spin polarization

To describe the spin polarization induced by an external electric field we introduce a time-dependent external electromagnetic field of frequency ω/ħ (note that here ω is measured in energy units) described by the vector potential 𝐀(t) = 𝐀(ω)exp(-iω t/ħ). The electric field is related to 𝐀 via the formula 𝐀(ω) = (ħ/iω)𝐄(ω). The Hamiltonian Ĥ_𝐀^E describing the interaction of the system with the external field (treated as a perturbation) takes the form Ĥ_𝐀^E(t) = -𝐣̂^el·𝐀(t). Here, the operator of the electric current density is defined as 𝐣̂^el = e𝐯̂, with e being the charge of the electron (e < 0) and 𝐯̂ = (1/ħ)∂Ĥ/∂𝐤 being the electron velocity operator. The x and y components of the velocity operator have the following explicit form: v̂_x = ħ k/m cos(ϕ) σ_0 - α/ħ σ_y, v̂_y = ħ k/m sin(ϕ) σ_0 + α/ħ σ_x, where ϕ is the angle between the wavevector 𝐤 and the x axis, i.e. k_x = k cos(ϕ) and k_y = k sin(ϕ), while the last terms in Eq. (<ref>) and Eq. (<ref>) represent the components of the anomalous velocity that originates from the Rashba spin-orbit interaction.

Without loss of generality, we assume in this paper that the external electric field is oriented along the x axis. Thus, the α-th (α = x, y, z) component of the quantum-mechanical average value of the spin polarization induced by the external electric field can be found in the Matsubara Green function formalism from the following formula: S_α(i ω_m) = 1/β ∑_𝐤, n Tr{ŝ_α G_𝐤(i ε_n + i ω_m) Ĥ_𝐀^E(i ω_m) G_𝐤(i ε_n)}, where ŝ_α = ħσ_α/2 is the operator of the α-th spin component, β = 1/k_BT (with T and k_B denoting the temperature and the Boltzmann constant, respectively), ε_n = (2n+1)π k_BT and ω_m = 2mπ k_BT are the fermionic and bosonic Matsubara energies, while G_𝐤(i ε_n) are the Matsubara Green functions (in 2×2 matrix form).
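The contour-integration treatment of Matsubara sums of this type can be illustrated on its simplest instance, (1/β)∑_n (iε_n - E)^{-1} = f(E), which follows from the same argument as used below. A hedged numerical sketch (k_B = 1, arbitrary units; pairing the terms n and -n-1 makes the sum absolutely convergent):

```python
import numpy as np

# Fermionic Matsubara sum (k_B = 1): (1/beta) sum_n 1/(i eps_n - E) -> f(E),
# with eps_n = (2n+1) pi T.  Pairing n with -n-1 gives the absolutely
# convergent form 1/2 - (2E/beta) sum_{n>=0} 1/(eps_n^2 + E^2).
T, E = 0.1, 0.3                      # illustrative temperature and energy
beta = 1.0 / T
n = np.arange(200000)
eps_n = (2*n + 1) * np.pi * T
matsubara = 0.5 - (2*E/beta) * np.sum(1.0/(eps_n**2 + E**2))
fermi = 1.0 / (np.exp(beta*E) + 1.0)  # Fermi-Dirac distribution
print(matsubara, fermi)
```

The truncation error of the sum falls off as 1/N, so a few hundred thousand frequencies already reproduce the Fermi function to better than 10^-5 here.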
Note that the perturbation term now takes the form Ĥ_𝐀^E(i ω_m) = -ev̂_x A_x(i ω_m), with the amplitude of the vector potential A_x(i ω_m) determined by the amplitude E_x(i ω_m) of the electric field through the relation A_x(i ω_m) = E_x(i ω_m) ħ/i(i ω_m). Taking into account the explicit form of Ĥ_𝐀^E(i ω_m), one can rewrite Eq. (<ref>) in the form S_α(i ω_m) = -1/β e E_x(iω_m) ħ/i(i ω_m) × ∑_𝐤, n Tr{ŝ_α G_𝐤(i ε_n + i ω_m) v̂_x G_𝐤(i ε_n)}.

The sum over Matsubara energies in the above expression can be calculated by the method of contour integration <cit.>, 1/β ∑_n ŝ_α G_𝐤(i ε_n + i ω_m) v̂_x G_𝐤(i ε_n) = -∫_𝒞 dz/2π i f(z) ŝ_α G_𝐤(z + i ω_m) v̂_x G_𝐤(z), where 𝒞 denotes the appropriate contour of integration and f(z) is a meromorphic function of the form (e^β z + 1)^-1, which has simple poles at the fermionic Matsubara energies, z = i ε_n (for details see Refs [abrikosov,mahan]). Upon analytical continuation one obtains S_α(ω) = -e ħ/ω E_x Tr ∑_𝐤 ∫ d ε/2 π f(ε) ŝ_α ( G_𝐤^R(ε + ω) v̂_x [G_𝐤^R(ε) - G_𝐤^A(ε)] + [G_𝐤^R(ε) - G_𝐤^A(ε)] v̂_x G_𝐤^A(ε - ω)). Here, f(ε) is the Fermi-Dirac distribution function and G_𝐤^R/A(ε) is the impurity-averaged retarded/advanced Green function corresponding to the Hamiltonian (1). The Green functions take the following explicit form: G_𝐤^R/A(ε) = G_𝐤0^R/A(ε) σ_0 + G_𝐤x^R/A(ε) σ_x + G_𝐤y^R/A(ε) σ_y + G_𝐤z^R/A(ε) σ_z, where G_𝐤0^R/A(ε) = 1/2 [G_+(ε) + G_-(ε)], G_𝐤x^R/A(ε) = 1/2 λ_𝐤 (α k_y + H_x)[G_+(ε) - G_-(ε)], G_𝐤y^R/A(ε) = 1/2 λ_𝐤 (-α k_x + H_y)[G_+(ε) - G_-(ε)], G_𝐤z^R/A(ε) = 1/2 λ_𝐤 H_z [G_+(ε) - G_-(ε)], with G_±^R(ε) = [ε + μ - E_± + iΓ]^-1 and G_±^A(ε) = [ε + μ - E_± - iΓ]^-1. Note that we assumed Γ = ħ/2τ, with an equal effective relaxation time τ in the two subbands.
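The Pauli decomposition of the Green function given above can be verified against direct matrix inversion; a minimal sketch (ħ = m = 1 units, illustrative parameter values only):

```python
import numpy as np

# Verify the Pauli decomposition of the retarded Green function against
# direct inversion of (eps + mu - H_k + i Gamma).  Illustrative parameters.
alpha, mu, Gamma = 0.5, 0.2, 0.05
H = np.array([0.3, -0.2, 0.4])            # exchange field (H_x, H_y, H_z)

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def gR(kx, ky, eps):
    """Retarded Green function by direct matrix inversion."""
    eps_k = (kx**2 + ky**2) / 2
    Hk = eps_k*s0 + alpha*(ky*sx - kx*sy) + H[0]*sx + H[1]*sy + H[2]*sz
    return np.linalg.inv((eps + mu + 1j*Gamma)*s0 - Hk)

def gR_pauli(kx, ky, eps):
    """Same object assembled from the scalar components G_0, G_x, G_y, G_z."""
    eps_k = (kx**2 + ky**2) / 2
    lam = np.sqrt(H @ H + alpha**2*(kx**2 + ky**2)
                  - 2*alpha*(H[1]*kx - H[0]*ky))
    Gp = 1.0/(eps + mu - (eps_k + lam) + 1j*Gamma)   # G_+
    Gm = 1.0/(eps + mu - (eps_k - lam) + 1j*Gamma)   # G_-
    g0 = (Gp + Gm)/2
    gx = (alpha*ky + H[0])*(Gp - Gm)/(2*lam)
    gy = (-alpha*kx + H[1])*(Gp - Gm)/(2*lam)
    gz = H[2]*(Gp - Gm)/(2*lam)
    return g0*s0 + gx*sx + gy*sy + gz*sz

print(np.allclose(gR(0.7, -0.4, 0.1), gR_pauli(0.7, -0.4, 0.1)))
```

This check makes explicit that the scalar coefficients multiplying σ_x, σ_y, σ_z are the components of the effective spin-space field divided by λ_𝐤.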
Using equation (<ref>) as a starting point and performing integration over ε we get finallythe following formula for the three components of the current-induced spin polarization: S_x = - e E_xħ ×∫d^2𝐤/(2π)^2{1/2 Γħ^2 k_x/2 m λ_𝐤 (α k_y + H_x) [f'(E_+) - f'(E_-)] .+ α/Γ(α k_y + H_x) (α k_x - H_y)/(2λ_𝐤)^2 + (2Γ)^2 [f'(E_+) + f'(E_-)]- α H_z/(2 λ_𝐤)^2(2 Γ)^2/(2λ_𝐤)^2 + (2Γ)^2 [f'(E_+) + f'(E_-)]- . α H_z/4 λ_𝐤^3 [f(E_-) - f(E_+)]},S_y = e E_xħ ×∫d^2𝐤/(2π)^2{1/4Γα/λ_𝐤^2 (α k_x - H_y)^2 [f'(E_+) + f'(E_-)]. +ħ^2/m λ_𝐤 (α k_x - H_y) 1/4Γ [f'(E_+) - f'(E_-)] + . αΓ/λ_𝐤^2(1 - (α k_x - H_y)^2/λ_𝐤^2) f'(E_+) + f'(E_-)/(E_+ - E_-)^2 + (2Γ)^2},S_z = - e E_xħ ×∫d^2𝐤/(2π)^2{α H_z/Γα k_x - H_y/(2 λ_𝐤)^2 + (2Γ)^2 [f'(E_+) + f'(E_-)] . + ħ^2 k_x/2mH_z/λ_𝐤1/2Γ [f'(E_+) - f'(E_-)] - α/4 λ_𝐤^2(2Γ)^2 (H_x + α k_y)/(2 λ_𝐤)^2 + (2Γ)^2 [f'(E_+) + f'(E_-)] -. α/4 λ_𝐤^3 (H_x + α k_y) [f(E_+) - f(E_-)]}. Details on the derivation of theabove equations are presented in the Appendix A. Before presenting results on the current-induced spin polarization for an arbitrary orientation of the exchange field, we consider first some special cases. § SPECIAL CASES §.§ Zero exchange field First, we reconsider the limit of zero exchange field, i.e. the limit of a nonmagnetized 2DEG, when only the y component of spin polarization survives. The general expression for S_y takes then the following form: S_y = e E_xħ∫dk k/(2π)^2{απ/4 Γ [f'(E_+) + f'(E_-)] . + απΓf'(E_+) + f'(E_-)/(2α k)^2 + (2Γ)^2+ . πħ^2 k/4 m Γ [f'(E_+) - f'(E_-)]},where f'=∂ f/∂ E. Note that in this limit the eigenvalues have the form E_± = ε_k ±α k.In the low-temperature regime, the above integrals can be evaluated analytically and one arrives atS_y = - e E_xħ∫dk k/(2π)^2{απ/4 Γ [δ(E_+-μ) + δ(E_--μ)] . + απΓδ(E_+-μ) + δ(E_--μ)/(2α k)^2 + (2Γ)^2+ . 
πħ^2 k/4 m Γ [δ(E_+-μ) - δ(E_--μ)]}. When both subbands are occupied (which corresponds to μ > 0), the Dirac delta functions in the above equation can be written in the form δ(E_± - μ) = m/√(2 mμħ^2 + m^2α^2) δ(k - k_±), and finally one obtains S_y = 1/2 e E_x m α/2πħ^2 τ - e E_x ħ m α/16 πΓ√(2 m μħ^2 + m^2α^2) × [ k_+/[1 + (α k_+/Γ)^2] + k_-/[1 + (α k_-/Γ)^2] ], with k_± = ∓m α/ħ^2 + 1/ħ^2 √(m^2α^2 + 2 m μħ^2). The first term of Eq. (<ref>) corresponds to the Edelstein expression for the current-induced spin polarization in the so-called bubble approximation, S^0_y = 1/2 e E_x m α/2πħ^2 τ. Note that the impurity vertex correction is neglected in our considerations. Such a correction leads to some renormalization of the spin polarization (for details see e.g. Refs [edelstein90,BDDIchapt]). The second term in (<ref>) is a correction which originates from the imaginary term in the numerator of the Green function and from products of two retarded or two advanced Green functions (omitted in Ref. edelstein90). Note that the second term in Eq. (<ref>) vanishes in the quasi-ballistic limit (low impurity concentration), when Γ → 0.

In the general case, i.e. for arbitrary T and arbitrary chemical potential μ, one should use the general formula (16). However, one point requires a comment. It is known that for impurities with a short-range (δ-like) potential and μ > 0, the parameter Γ is constant, while for negative μ it increases and diverges when μ approaches the bottom of the lower energy band <cit.>. Thus, at a certain value of μ, μ = μ_loc, the Ioffe-Regel localization condition <cit.> is obeyed, and the states become localized below μ_loc. Accordingly, the results are valid beyond the localization regime, i.e. for μ > μ_loc.

Now, we present some numerical results. In Fig. <ref>(a) we show the temperature dependence of the spin polarization for four different values of the chemical potential μ.
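As a consistency check on the wavevectors k_± quoted above, one can verify directly that they are the Fermi momenta of the two Rashba subbands, i.e. that E_±(k_±) = μ with E_± = ħ²k²/2m ± αk. A short sketch (ħ = m = 1 units, illustrative values):

```python
import numpy as np

# Check that k_± satisfy E_±(k_±) = mu for the zero-field Rashba bands
# E_± = hbar^2 k^2/(2m) ± alpha k.  Illustrative parameters, hbar = m = 1.
hbar, m, alpha, mu = 1.0, 1.0, 0.3, 0.5

kp = -m*alpha/hbar**2 + np.sqrt(m**2*alpha**2 + 2*m*mu*hbar**2)/hbar**2   # k_+
km = +m*alpha/hbar**2 + np.sqrt(m**2*alpha**2 + 2*m*mu*hbar**2)/hbar**2   # k_-

E_plus  = hbar**2*kp**2/(2*m) + alpha*kp
E_minus = hbar**2*km**2/(2*m) - alpha*km
print(E_plus, E_minus)    # both equal mu = 0.5
```

The same check pins down which root of the quadratic equation belongs to which subband: the upper band crosses the Fermi level at the smaller wavevector k_+.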
Here, we should mention that the chemical potential also depends on temperature; thus a fixed value of the chemical potential means that the carrier concentration varies. If, however, the system is gated, one can keep the chemical potential constant. Apart from this, the relaxation time τ (and thus the parameter Γ) may also depend on temperature T. This dependence, however, is neglected in Fig. <ref>. The spin polarization S_y was obtained from Eq. (16) and is normalized there to the corresponding value of S^0_y (note that S^0_y does not depend explicitly on temperature). For the largest value of μ, the S_y component remains almost constant in the temperature range shown in Fig. <ref>(a), and is roughly equal to the corresponding value of S^0_y. For smaller values of μ, in turn, the spin polarization becomes monotonically reduced with increasing T (see the curves for μ = 0.02 eV and μ = 0.005 eV). For still lower values of μ, the temperature dependence is nonmonotonic: it first decreases and then slightly increases with temperature. To understand this behaviour we plot in Fig. <ref>(c) the spin polarization S_y as a function of the chemical potential for several values of temperature and the same Γ as in Fig. <ref>(a). This figure clearly shows that the spin polarization tends to S^0_y with increasing μ. Such a behavior is reasonable, as the second term in Eq. (19) decreases with increasing μ (the effective role of finite Γ decreases with increasing E_+ - E_-). For small values of μ, however, the second term in Eq. (19) plays a role and the spin polarization is reduced. The temperature dependence appears when E_+ - E_- at the Fermi level is of the order of or smaller than k_BT, which takes place in the region of small values of μ. Moreover, this figure also shows that S_y decreases with increasing T, except in a narrow region of small values of μ, where the temperature dependence is nonmonotonic, exactly as in Fig. <ref>(a). The temperature dependence is also shown in Fig. <ref>(b) for several values of Γ.
This figure also shows that the correction due to the second term in Eq. (19) increases with increasing Γ. The latter behavior is shown explicitly in Fig. <ref>(d) for the indicated values of μ. The decrease of the spin polarization with increasing Γ is physically clear, as the effective separation of the two Rashba bands becomes reduced with increasing Γ. The second term in Eq. (19) then plays an important role and leads to a reduction of the spin polarization.

§.§ Exchange field perpendicular to plane of 2DEG

Consider now a magnetized 2DEG, and let us begin with the situation when the exchange field (or equivalently the substrate magnetization) is perpendicular to the plane of the 2DEG, 𝐇 = (0, 0, H_z). The eigenvalues of Hamiltonian (1) now reduce to the form E_± = ε_k ± ζ, with ζ = √(H^2 + α^2 k^2). The general expressions describing the two nonzero components of the spin polarization take the forms S_x = e E_x ħ ∫ dk k/2π α H_z/(2 ζ)^2 { Γ^2/(ζ^2 + Γ^2) [f'(E_+) - f'(E_-)] - 1/ζ [f(E_+) - f(E_-)] }, S_y = e E_x ħ ∫ dk k/(2π)^2 απ { α^2 k^2/4 Γζ^2 [f'(E_+) + f'(E_-)] + ħ^2 k^2/4 Γ m ζ [f'(E_+) - f'(E_-)] + Γ(2 - α^2 k^2/ζ^2) [f'(E_+) + f'(E_-)]/[(2ζ)^2 + (2Γ)^2] }, while S_z = 0. Accordingly, the electric field now generates a spin polarization with both in-plane components nonzero, while the component normal to the plane of the 2DEG (along the exchange field) vanishes exactly. Thus, the exchange field generates a spin polarization along the electric field and also modifies the spin polarization along the y axis.

In the low-temperature limit, equations (<ref>) and (<ref>) lead to the following analytical expressions: S_x = -ħ e/8 π E_x H_z/α [ (ζ_+ - ζ_-)/ζ_+ζ_- + α^2 ( ν_+/ζ_+^2 1/[1 + (ζ_+/Γ)^2] - ν_-/ζ_-^2 1/[1 + (ζ_-/Γ)^2] ) ], S_y = -ħ e/16 π E_x α/Γ [ k_+^2/ζ_+ - k_-^2/ζ_- ] - ħ e/16 π E_x [ (1 - H_z/ζ_+) ν_+/[1 + (ζ_+/Γ)^2]
+ (1 - H_z/ζ_-) ν_-/[1 + (ζ_-/Γ)^2] ], where ν_± = m/ħ^2 (1 ± m α^2/ħ^2ζ_±)^-1 represent the densities of states of the E_± subbands, respectively, ζ_± = ζ(k = k_±), and k_± = √(2m)/ħ^2 √(m α^2 + μħ^2 ∓ √(m^2α^4 + 2 m α^2ħ^2μ + H_z^2ħ^4)) are the Fermi wavevectors of the two subbands.

In the ballistic limit (extremely long relaxation time), Eq. (<ref>) takes the form S_x = -e E_x ħ ∫ dk k/2π α H_z/4 ζ^3 [f(E_+) - f(E_-)], which after integration over k leads to S_x = -ħ e/8 π E_x H_z/α (ζ_+ - ζ_-)/ζ_+ζ_-, i.e. to the first term in Eq. (<ref>). The above expression does not depend on the relaxation time, and due to its mathematical form it may be identified as the Berry-phase-related contribution to the spin polarization, which in turn may be responsible for the anti-damping spin torque. For details see Section V.

Numerical results on the current-induced spin polarization of the magnetized 2DEG with the exchange field oriented perpendicularly to the plane are shown in Fig. <ref>. The dependence of S_x and S_y on the exchange field JM_0 is presented in Figs <ref>(a) and <ref>(b), respectively. Figure <ref>(a) clearly shows that S_x vanishes in the limit of zero exchange field and then its magnitude grows rather fast with increasing JM_0. It then decreases to zero for large exchange fields. The magnitude of S_y, in turn, is nonzero for zero exchange field, and increases with increasing JM_0. It reaches a maximum at some value of JM_0, and then decreases with a further increase in JM_0. Such a behavior can be understood, since the relative role of the Rashba coupling decreases with increasing JM_0. Note that the S_x component is antisymmetric with respect to sign reversal of JM_0, while the S_y component is symmetric.
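The equivalence of the two forms of the ballistic-limit result can be checked numerically: at T = 0 the occupation difference f(E_+) - f(E_-) equals -1 for k_+ < k < k_-, so direct quadrature over this window should reproduce the closed form. A sketch (ħ = m = 1 units, with the common factor e E_x ħ divided out; parameter values are illustrative only):

```python
import numpy as np

# Ballistic-limit consistency check (hbar = m = 1, units of e*E_x*hbar).
# Requires mu > |Hz| so that both subbands are occupied.
alpha, Hz, mu = 0.3, 0.2, 0.5

root = np.sqrt(alpha**4 + 2*alpha**2*mu + Hz**2)
kp = np.sqrt(2*(alpha**2 + mu - root))    # Fermi wavevector k_+
km = np.sqrt(2*(alpha**2 + mu + root))    # Fermi wavevector k_-

zeta = lambda k: np.sqrt(Hz**2 + alpha**2*k**2)
zp, zm = zeta(kp), zeta(km)

# Closed form: S_x = -(Hz/alpha)(zeta_+ - zeta_-)/(8 pi zeta_+ zeta_-)
Sx_closed = -(Hz/alpha)*(zp - zm)/(8*np.pi*zp*zm)

# Midpoint-rule quadrature of the k-integral between k_+ and k_-,
# where f(E_+) - f(E_-) = -1 at T = 0:
k = np.linspace(kp, km, 200001)
kmid = 0.5*(k[:-1] + k[1:])
Sx_quad = np.sum(kmid*alpha*Hz/(4*zeta(kmid)**3))*(k[1] - k[0])/(2*np.pi)
print(Sx_quad, Sx_closed)
```

The quadrature also confirms in passing that k_± solve E_±(k) = μ for E_± = k²/2 ± ζ(k).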
Figures <ref>(c) and <ref>(d) present the x and y components of the spin polarization as a function of temperature for fixed values of the chemical potential and JM_0. In the numerical calculations we have assumed T_c = 150 K, and therefore the S_x component vanishes for T ≥ 150 K. In turn, the S_y component is remarkably enhanced below T_c and drops to a weakly temperature-dependent value (for fixed chemical potential and parameter Γ) when T ≥ 150 K. The variation of the spin polarization with the chemical potential is presented in Figs <ref>(e) and <ref>(f) for different magnitudes of the exchange field JM_0. The magnitudes of both components increase monotonically with μ when μ lies in the energy region between the bottom edges of the two subbands. For μ in the vicinity of the bottom of the higher energy band, these components reach maximum values, and for larger μ they decrease with increasing μ. Note that the S_x component is roughly three orders of magnitude smaller than the S_y component.

In Figs <ref>(g) and <ref>(h) we show the x and y components of the spin polarization as a function of the Rashba coupling constant. These figures clearly show that the absolute values of both components increase roughly linearly with α. However, some deviations from this linear dependence appear above certain values of α, where the increase is slower.

§.§ Exchange field in plane of 2DEG and perpendicular to electric field

In this section we consider the current-induced spin polarization for the magnetization vector (exchange field) oriented along the y axis, i.e. when the exchange field is in the plane of the two-dimensional electron gas and perpendicular to the current. In such a case the x and z components of the current-induced spin polarization vanish exactly, and the only nonzero component is S_y, as in the case of zero exchange field.
This component, however, is modified by the exchange field. Numerical results for S_y are shown in Fig. <ref>. The variation of S_y with the exchange field JM_0, Fig. <ref>(a), clearly shows that the spin polarization decreases relatively fast with increasing absolute value of JM_0 and is suppressed when the Zeeman-like term (due to exchange coupling to the substrate) dominates over the Rashba term. The suppression of the spin polarization at large JM_0 appears due to a strong modification of the electronic states by the Zeeman-like term, and takes place for all values of the chemical potential. The temperature dependence of S_y is shown in Fig. <ref>(b) for two values of the chemical potential and two values of JM_0. For the larger value of JM_0, the spin polarization S_y vanishes in a broad temperature region and then increases when T approaches the Curie temperature, reaching the magnitude of S_y in the limit of a nonmagnetized 2DEG. This behavior is consistent with that in Fig. 4(a). In Fig. <ref>(c) we show S_y as a function of the chemical potential. As follows from this figure, S_y increases monotonically with the chemical potential increasing from the minimum of the lower subband, and then becomes saturated for large values of μ. The rate of this increase, as well as the chemical potential at which the saturation appears, depend on JM_0. The spin polarization as a function of the Rashba parameter α is shown in Fig. <ref>(d) for the indicated values of the exchange field. In general, the y component of the spin polarization now increases nonlinearly with the Rashba constant.

§.§ Exchange field in plane of 2DEG and collinear with electric field

When the exchange field is oriented along the x axis, i.e. collinear with the external electric field, the x component of the spin polarization vanishes, whereas the y and z components are nonzero. In general, the S_z component of the spin polarization is roughly three orders of magnitude smaller than the S_y component.
The variation of both components with the exchange field JM_0, temperature, chemical potential, and Rashba constant is presented in Figs <ref>(a-d). The behavior of the S_y and S_z components with JM_0, T, μ and α is qualitatively similar to the corresponding behavior of the components S_y and S_x in the case with the exchange field normal to the plane of the 2DEG, see Fig. <ref>. There are some differences of a rather quantitative character, which follow from the different electronic bands in these two situations. For instance, the S_z component varies with the chemical potential in a slightly different manner than the S_x component in Fig. 3. Weak differences also appear in the variation of the S_y component with temperature for T below the Curie temperature T_c. Similarly as in Fig. 3, both components behave almost linearly with the Rashba parameter α.

§ NUMERICAL RESULTS FOR ARBITRARILY ORIENTED EXCHANGE FIELD

Up to now we have discussed only some specific situations, with the exchange field oriented along three main directions: (i) along the electric field, (ii) normal to the electric field and to the plane of the 2DEG, and (iii) normal to the electric field and oriented in the plane of the 2DEG. Now let us consider the general case, when the exchange field is oriented arbitrarily. This orientation is described by the polar θ and azimuthal ξ angles, as shown in Fig. 1. In general, all three components of the spin polarization (i.e. S_x, S_y and S_z) can be nonzero. In Fig. <ref> we present these components as a function of both the θ and ξ angles, see the left panel of this figure. The right panel, in turn, presents several vertical cross-sections of the corresponding density plots from the left panel. In the specific configurations considered in the preceding section, the results shown in Fig. 6 reduce to the corresponding ones discussed in Sec. 3. This figure shows the regions in the (θ, ξ) plane where particular components of the spin polarization are large, and where they are small or suppressed to zero.
The results in the general case, like those presented in Fig. <ref>, are required when considering the magnetic dynamics induced by the spin torque due to the spin polarization. The magnetic moment (and thus also the exchange field) then precesses in space, and this time evolution is associated with a time evolution of the spin polarization. In this paper, however, we do not consider dynamical properties and focus rather on evaluating the spin polarization in static situations.

§ RELATION WITH THE BERRY CURVATURE

Recently H. Kurebayashi et al. <cit.>, based on experimental results, have proposed an anti-damping spin-orbit torque mediated by the Berry phase <cit.>. In other words, they showed that the Berry curvature gives rise to a spin-orbit torque in systems with broken inversion symmetry. Our results given by Eqs. (<ref>)-(<ref>) show that when the exchange field is nonzero, the inversion symmetry is broken and the general expressions for the x and z components of the spin polarization contain terms that do not depend on the relaxation rate, but are functions of the Fermi-Dirac distribution function instead of its derivative. Thus, adopting the notation well known in the context of the anomalous Hall effect, we can rewrite Eqs (<ref>)-(<ref>) as follows: S_α = S_α^I + S_α^II, where the first term depends on the states in a close vicinity of the Fermi level, S_α^I = S_α[f'(E_±)], while the second term contains information from all electronic states, S_α^II = S_α[f(E_±)]. Now we show that the terms S_α^II are related to the Berry curvature. To do this, let us rewrite the Hamiltonian (<ref>) in the following form: H = ε_k σ_0 + 𝐧·σ, where 𝐧 = (α k_y + H_x, -α k_x + H_y, H_z), consistently with the Pauli decomposition of the Green function and with λ_𝐤 = |𝐧|, and ε_k = ħ^2 (k_x^2 + k_y^2)/2m.
The eigenvectors corresponding to the eigenvalues E_± can be written as | Ψ_+⟩ = ( [ √(λ_𝐤 + n_z/2 λ_𝐤); (n_x + i n_y)/√(2 λ_𝐤 (λ_𝐤 + n_z)); ]), | Ψ_-⟩ = ( [ -(n_x - i n_y)/√(2 λ_𝐤 (λ_𝐤 + n_z)); √(λ_𝐤 + n_z/2 λ_𝐤); ]). The Berry curvature of the n-th (n = 1, 2) band, ℬ⃗_n(𝐤), is defined as the curl of the Berry connection 𝒜⃗_n(𝐤) = i ⟨Ψ_n| ∇_𝐤|Ψ_n⟩ (for details see Refs [Volovik,DiXiaoRevModPhys,NagaosaRevModPhys]). Thus one can write ℬ_n^z(𝐤) = i [∂/∂ k_x ⟨Ψ_n| ∂/∂ k_y|Ψ_n⟩ - ∂/∂ k_y ⟨Ψ_n| ∂/∂ k_x|Ψ_n⟩]. Combining Eqs. (28) to (30) we find for the Berry curvature ℬ_±^z(𝐤) = ∓ α^2 H_z/2 λ_𝐤^3. Taking the expression above into account, the Berry-phase-related terms in the electrically generated spin polarization can be written as S_x^II = 1/2 e E_x ħ/α ∑_n ∫ d^2𝐤/(2π)^2 f(E_n) ℬ_n^z(𝐤), S_y^II = 0, S_z^II = -1/2 e E_x ħ ∑_n ∫ d^2𝐤/(2π)^2 f(E_n) (α k_y + H_x)/(α H_z) ℬ_n^z(𝐤). Note that these terms disappear in the absence of exchange field.

§ SPIN-ORBIT TORQUE

Due to the exchange interaction, the current-induced spin polarization exerts a torque 𝒯 on the magnetic moment 𝐌. This torque enters the Landau-Lifshitz-Gilbert equation for the magnetic dynamics, ∂𝐦/∂ t = -γ 𝐦 × 𝐡_eff + α_g 𝐦 × ∂𝐦/∂ t + 𝒯, where 𝐦 = 𝐌/M is a unit vector along the magnetic moment 𝐌, 𝐡_eff is the effective magnetic field, which includes the external magnetic field, dipolar field, and anisotropy field, α_g is the Gilbert damping factor, and γ is the gyromagnetic factor. To find the torque we write the coupling energy of the magnetic moment and the induced spin polarization as E_ex = (2J/ħ) 𝐒·𝐌 = -𝐌·𝐡_so, where 𝐡_so is defined as 𝐡_so = -(2J/ħ) 𝐒. Taking the above into account, one can write the torque as a sum of a field-like torque 𝒯_f and a damping-like torque 𝒯_d, 𝒯 = 𝒯_f + 𝒯_d. These components can be written in terms of the spin-orbit field 𝐡_so as 𝒯_f = -γ 𝐦 × 𝐡_so for the field-like term, and 𝒯_d = -α_g γ 𝐦 × (𝐦 × 𝐡_so) for the damping-like term.
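The Berry curvature result ℬ_±^z = ∓α²H_z/2λ_𝐤³ can be verified gauge-invariantly by computing the Berry phase around a small k-space plaquette. A sketch with illustrative parameters, taking the spin-space vector 𝐧 = (αk_y + H_x, -αk_x + H_y, H_z) consistent with the Pauli decomposition of the Green function, and the convention 𝒜 = i⟨u|∇u⟩:

```python
import numpy as np

# Gauge-invariant numerical Berry curvature for the two bands of n.sigma,
# compared with the analytic result -/+ alpha^2 H_z / (2 lambda^3).
alpha = 0.4
H = np.array([0.15, -0.1, 0.25])          # illustrative exchange field

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def nvec(kx, ky):
    return np.array([alpha*ky + H[0], -alpha*kx + H[1], H[2]])

def u(kx, ky, band):
    """Eigenvector of n.sigma; band = 0 (lower, E_-) or 1 (upper, E_+)."""
    n = nvec(kx, ky)
    hmat = n[0]*sig[0] + n[1]*sig[1] + n[2]*sig[2]
    return np.linalg.eigh(hmat)[1][:, band]

def berry_z(kx, ky, band, d=1e-4):
    """B^z from the phase of the Wilson loop around one small plaquette
    (independent of the arbitrary eigenvector phases returned by eigh)."""
    u1 = u(kx, ky, band);         u2 = u(kx + d, ky, band)
    u3 = u(kx + d, ky + d, band); u4 = u(kx, ky + d, band)
    w = np.vdot(u1, u2)*np.vdot(u2, u3)*np.vdot(u3, u4)*np.vdot(u4, u1)
    return -np.angle(w)/d**2

kx, ky = 0.6, -0.35
lam = np.linalg.norm(nvec(kx, ky))
B_plus_analytic = -alpha**2*H[2]/(2*lam**3)   # upper band; lower has opposite sign
print(berry_z(kx, ky, 1), B_plus_analytic)
```

The Wilson-loop construction sidesteps any gauge fixing of the eigenvectors, which is why it is a convenient check on sign conventions in expressions of this type.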
Since the spin polarization includes terms related to the Berry curvature, the resulting spin-orbit torques include terms related to the Berry curvature as well.

§ SUMMARY AND CONCLUSIONS

Using the Matsubara Green function method, we have calculated the current-induced spin polarization in a magnetized two-dimensional electron gas with Rashba spin-orbit interaction. The exchange field is shown to have a significant impact on the spin polarization. First, for some orientations of the exchange field, the component of the spin polarization that appears in the absence of exchange field can be enhanced by the exchange field, while for other orientations this component can be suppressed. Second, the exchange field also generates components of the spin polarization which are absent in the limit of vanishing exchange field. We also note that the states at the band edges may become localized due to disorder, and the results may not be valid in the localization regime. Analytical and/or numerical results have been presented for some special cases, with the exchange field oriented along the current or perpendicular to the current (in-plane and perpendicular to the plane of the 2DEG in the latter case). Numerical results have also been presented for the general case of an arbitrary orientation of the exchange field. We have found that the exchange field leads to terms in the spin polarization that can be related to the Berry curvature of the corresponding electron bands. Since the calculated spin polarization generates a torque which may induce dynamics of the magnetic moment, this torque includes terms related to the Berry curvature as well.

This work was supported by the Polish Ministry of Science and Higher Education through a research project Iuventus Plus in the years 2015-2017 (project No. 0083/IP3/2015/73). A.D. also acknowledges support from the Foundation for Polish Science (FNP). V.D. acknowledges support from the National Science Center in Poland under Grant No. DEC-2012/06/M/ST3/00042.

§ DERIVATION OF EQS.
(<REF>), (<REF>), (<REF>) The current induced spin polarizaton is evaluated starting from the equation (<ref>) that we rewrite in the following form: S_α = - e E_xħ/ω∫d^2𝐤/(2π)^2(𝒯_S_α^(1) + 𝒯_S_α^(2)),where:𝒯_S_α^(1) = ∫d ε/2 π f(ε) ℐ_S_α^(1)(ε + ω, ε), 𝒯_S_α^(2) = ∫d ε/2 π f(ε) ℐ_S_α^(2)(ε, ε - ω),and the following notation has been introduced:ℐ_S_α^(1)(ε + ω, ε) = Tr{ŝ_α G_𝐤^R(ε + ω) v̂_x [G_𝐤^R(ε) - G_𝐤^A(ε)]}, ℐ_S_α^(2)(ε, ε - ω) = Tr{ŝ_α [G_𝐤^R(ε) - G_𝐤^A(ε)] v̂_x G_𝐤^A(ε - ω) }, According to the above notation the S_x component of spin polarization is described by the following expressions:ℐ_S_x^(1)(ε + ω, ε) = ħ^2 k_x/2 m λ_𝐤 (α k_y - H_x) [G_𝐤 -^R(ε+ ω) G_𝐤 -^A(ε) - G_𝐤 -^R(ε + ω) G_𝐤 -^R(ε) + G_𝐤 +^R(ε + ω) G_𝐤 +^R (ε) - G_𝐤 +^R(ε + ω) G_𝐤 +^A (ε) ]- α/2 λ_𝐤^2 (α k_y - H_x) (α k_x + H_y) [ G_𝐤 -^R(ε + ω) G_𝐤 -^A(ε) - G_𝐤 -^R(ε + ω) G_𝐤 +^A(ε)-G_𝐤 -^R(ε + ω) G_𝐤 -^R(ε) + G_𝐤 -^R(ε + ω) G_𝐤 +^R(ε)- G_𝐤 +^R(ε + ω) G_𝐤 -^A(ε) + G_𝐤 +^R(ε + ω) G_𝐤 +^A(ε)+ . G_𝐤 +^R(ε + ω) G_𝐤 -^R(ε) - G_𝐤 +^R(ε + ω) G_𝐤 +^R(ε)]- i α/2λ_𝐤 H_z[ G_𝐤 -^R(ε + ω) G_𝐤 +^A(ε) - G_𝐤 -^R(ε + ω) G_𝐤 +^R(ε)-G_𝐤 +^R(ε + ω) G_𝐤 -^A(ε) + G_𝐤 +^R(ε + ω) G_𝐤 -^R(ε)]ℐ_S_x^(2)(ε, ε - ω) = ħ^2 k_x/2 m λ_𝐤 (α k_y - H_x) [ G_𝐤 -^A(ε) G_𝐤 -^A(ε - ω) - G_𝐤 +^A(ε) G_𝐤 +^A(ε - ω) -G_𝐤 -^R(ε) G_𝐤 -^A(ε) + G_𝐤 +^R(ε) G_𝐤 +^A(ε - ω) ]- α/2 λ_𝐤^2 (α k_x + H_y) (α k_y - H_x) [ G_𝐤 -^A(ε) G_𝐤 -^A(ε - ω) - G_𝐤 +^A(ε) G_𝐤 -^A(ε - ω) .- G_𝐤 -^R(ε) G_𝐤 -^A(ε - ω) + G_𝐤 +^R(ε) G_𝐤 -^A(ε - ω)- G_𝐤 -^A(ε) G_𝐤 +^A(ε - ω) + G_𝐤 +^A(ε) G_𝐤 +^A(ε - ω)+ . G_𝐤 -^R(ε) G_𝐤 +^A(ε - ω) - G_𝐤 +^R(ε) G_𝐤 +^A(ε - ω)]- i α/λ_2𝐤 H_z[ G_𝐤 -^A(ε) G_𝐤 +^A(ε - ω) - G_𝐤 +^A(ε) G_𝐤 -^A(ε - ω) -G_𝐤 -^R(ε) G_𝐤 +^A(ε - ω) + G_𝐤 +^R(ε) G_𝐤 -^A(ε - ω) ]Inserting Eqs. (<ref>), (<ref>) into Eqs. 
(<ref>) and (<ref>) respectively we get: ℜ𝔢[ 𝒯_S_x^(1) + 𝒯_S_x^(2)]=ωħ^2 k_x/2 m λ_𝐤 (α k_y - H_x) 2 Γ/ω^2 + (2Γ)^2 [f'(E_+) - f'(E_-)] + ωα/2 λ_𝐤^2 (α k_x + H_y) (α k_y - H_x) 2 Γ/ω^2 + (2Γ)^2 [f'(E_+) + f'(E_-)]- ωΓα/λ_𝐤^2 (α k_x + H_y) (α k_y - H_x) ( f'(E_-)/(E_+ - E_- - ω)^2 + (2Γ)^2 +f'(E_+)/(E_+ - E_- + ω)^2 + (2Γ)^2)-ωα/2 λ_𝐤 H_z( E_+ - E_- - ω/(E_+ - E_- - ω)^2 + (2Γ)^2f'(E_-) -E_+ - E_- + ω/(E_+ - E_- + ω)^2 + (2Γ)^2 f'(E_+) )+ ωα/2 λ_𝐤 H_zE_+ - E_-/(E_+ - E_-)^2 - ω^2 [f'(E_+) + f'(E_-)]- ωα/λ_𝐤 H_zf(E_+) - f(E_-)/(E_+ - E_-)^2 - ω^2+ ω^2α/2 λ_𝐤 H_zf'(E_-) - f'(E_+)/(E_+ - E_-)^2 - ω^2 In the limit of ω→ 0 we find x component of current-induced spin polarization given by Eq.(<ref>).In turn, the S_y component of spin polarization is expressed by the following functions:ℐ_S_y^(1)(ε + ω, ε) = α/2[ G_𝐤 -^R(ε+ω) G_𝐤 -^A(ε) + G_𝐤 +^R(ε+ω) G_𝐤 +^A(ε).- . G_𝐤 -^R(ε+ω) G_𝐤 -^R(ε) - G_𝐤 +^R(ε+ω) G_𝐤 +^R(ε)]- ħ^2 k_x/2 m λ_𝐤 (α k_x + H_y)[ G_𝐤 -^R(ε+ω) G_𝐤 -^A(ε) - G_𝐤 -^R(ε+ω) G_𝐤 -^R(ε).- . G_𝐤 +^R(ε+ω) G_𝐤 +^A(ε) + G_𝐤 +^R(ε+ω) G_𝐤 +^R(ε)]- α/2 λ_𝐤^2[ (α k_y - H_x)^2 + H_z^2][ G_𝐤 -^R(ε+ω) G_𝐤 -^A(ε) - G_𝐤 +^R(ε+ω) G_𝐤 -^A(ε).-G_𝐤 -^R(ε+ω) G_𝐤 -^R(ε) + G_𝐤 -^R(ε+ω) G_𝐤 +^R(ε)+ G_𝐤 -^R(ε+ω) G_𝐤 +^R(ε) - G_𝐤 +^R(ε+ω) G_𝐤 +^R(ε)- . G_𝐤 -^R(ε+ω) G_𝐤 +^A(ε) + G_𝐤 +^R(ε+ω) G_𝐤 +^A(ε)]ℐ_S_y^(2)(ε, ε - ω) = α/2[ G_𝐤 -^A(ε) G_𝐤 -^A(ε - ω) - G_𝐤 -^R(ε) G_𝐤 -^A(ε - ω) .+ . G_𝐤 +^A(ε) G_𝐤 +^A(ε - ω) - G_𝐤 +^R(ε) G_𝐤 +^A(ε - ω) ]- ħ^2 k_x/2 m λ_𝐤 (α k_x + H_y) [ G_𝐤 -^A(ε) G_𝐤 -^A(ε-ω) - G_𝐤 +^A(ε) G_𝐤 +^A(ε-ω).- . G_𝐤 -^R(ε) G_𝐤 -^A(ε-ω) + G_𝐤 +^R(ε) G_𝐤 +^A(ε-ω)]- α/2 λ_𝐤^2[ (α k_y - H_x)^2 + H_z^2] [ G_𝐤 -^A(ε) G_𝐤 -^A(ε-ω) - G_𝐤 +^A(ε) G_𝐤 -^A(ε-ω).- G_𝐤 -^R(ε) G_𝐤 -^A(ε-ω) + G_𝐤 +^R(ε) G_𝐤 -^A(ε-ω)- G_𝐤 -^A(ε) G_𝐤 +^A(ε-ω)+ G_𝐤 +^A(ε) G_𝐤 +^A(ε-ω)+ . G_𝐤 -^R(ε) G_𝐤 +^A(ε-ω) - G_𝐤 +^R(ε) G_𝐤 +^A(ε-ω)]After integration over εin Eqs. 
(<ref>) and (<ref>) with integrands given by(<ref>),(<ref>) we obtain the following expression: ℜ𝔢[ 𝒯_S_y^(1) + 𝒯_S_y^(2)] = - ħ^2 k_x/2 m λ_𝐤 (α k_x + H_y) 2 Γω/ω^2 + (2 Γ)^2 [f'(E_+) - f'(E_-)]- α/λ_𝐤^2(α k_x + H_y)^2 2 Γω/ω^2 + (2 Γ)^2 [f'(E_+) - f'(E_-)] - α/λ_𝐤^2Γω[ (α k_y - H_x)^2 + H_z^2] (f'(E_-) /(E_+ - E_- - ω)^2 + (2Γ)^2 + f'(E_+) /(E_+ - E_- + ω)^2 + (2Γ)^2)In the limit ω→ 0 we obtain the formula describing y component of current-induced spin polarization given by Eq.(<ref>).Finally, the S_z component of the nonequilibrium spin polarization is described by following traces:ℐ_S_z^(1)(ε + ω, ε) = α^2 k_x/2 λ_𝐤^2 H_z[ G_𝐤 -^R(ε + ω) G_𝐤 -^A(ε) - G_𝐤 -^R(ε + ω) G_𝐤 +^A(ε) .- G_𝐤 -^R(ε + ω) G_𝐤 -^R(ε) + G_𝐤 -^R(ε + ω) G_𝐤 +^R(ε)- G_𝐤 +^R(ε + ω) G_𝐤 -^A(ε) + G_𝐤 +^R(ε + ω) G_𝐤 +^A(ε)+ . G_𝐤 +^R(ε + ω) G_𝐤 -^R(ε) - G_𝐤 +^R(ε + ω) G_𝐤 +^R(ε)]+ ħ^2 k_x/2 m λ_𝐤 H_z[ G_𝐤 -^R(ε + ω) G_𝐤 -^R(ε) - G_𝐤 -^R(ε + ω) G_𝐤 -^A(ε).+ . G_𝐤 +^R(ε + ω) G_𝐤 +^A(ε) - G_𝐤 +^R(ε + ω) G_𝐤 +^R(ε)]+ α/2 λ_𝐤^2 H_y H_z[ G_𝐤 -^R(ε + ω) G_𝐤 -^A(ε) - G_𝐤 -^R(ε + ω) G_𝐤 +^A(ε).- G_𝐤 -^R(ε + ω) G_𝐤 -^R(ε) + G_𝐤 -^R(ε + ω) G_𝐤 +^R(ε)- G_𝐤 +^R(ε + ω) G_𝐤 -^A(ε) + G_𝐤 +^R(ε + ω) G_𝐤 +^A(ε)+ . G_𝐤 +^R(ε + ω) G_𝐤 -^R (ε) - G_𝐤 +^R(ε + ω) G_𝐤 +^R(ε) ]+ i α/2 λ_𝐤 (H_x - α k_y) [ G_𝐤 -^R(ε + ω) G_𝐤 +^A(ε) - G_𝐤 -^R(ε + ω) G_𝐤 +^R(ε).- . G_𝐤 +^R(ε + ω) G_𝐤 -^A(ε) + G_𝐤 +^R(ε + ω) G_𝐤 -^R(ε) ]ℐ_S_z^(2)(ε, ε - ω) = α^2 k_x/2 λ_𝐤^2 H_z[ G_𝐤 -^A(ε) G_𝐤 -^A(ε - ω) - G_𝐤 +^A(ε) G_𝐤 -^A(ε - ω).- G_𝐤 -^R(ε) G_𝐤 -^A(ε - ω) + G_𝐤 +^R(ε) G_𝐤 -^A(ε - ω)- G_𝐤 -^A(ε) G_𝐤 +^A(ε - ω) + G_𝐤 +^A(ε) G_𝐤 +^A(ε - ω)+ . G_𝐤 -^R(ε) G_𝐤 +^A(ε - ω) - G_𝐤 +^R(ε) G_𝐤 +^A(ε - ω)]+ ħ^2 k_x/2 m λ_𝐤 H_z[ G_𝐤 +^A(ε) G_𝐤 +^A(ε - ω) - G_𝐤 -^A(ε) G_𝐤 -^A(ε - ω) .+ . G_𝐤 -^R(ε) G_𝐤 -^A(ε - ω) - G_𝐤 +^R(ε) G_𝐤 +^A(ε - ω)]+ α/2 λ_𝐤^2 H_y H_z[ G_𝐤 -^A(ε) G_𝐤 -^A(ε - ω) - G_𝐤 +^A(ε) G_𝐤 -^A(ε - ω) .- G_𝐤 -^R(ε) G_𝐤 -^A(ε - ω) + G_𝐤 +^R(ε) G_𝐤 -^A(ε - ω)- G_𝐤 -^A(ε) G_𝐤 +^A(ε - ω) + G_𝐤 +^A(ε) G_𝐤 +^A(ε - ω)+ . 
G_𝐤 -^R(ε) G_𝐤 +^A(ε - ω) - G_𝐤 +^R(ε) G_𝐤 +^A(ε - ω)]+ i α/2 λ_𝐤 (H_x - α k_y) [ G_𝐤 -^A(ε) G_𝐤 +^A(ε - ω) - G_𝐤 +^A(ε) G_𝐤 -^A(ε - ω).- . G_𝐤 -^R(ε) G_𝐤 +^A(ε - ω)+ G_𝐤 +^R(ε) G_𝐤 -^A(ε - ω)]These two equations combining with Eqs. (<ref>) and (<ref>) lead to the following expression:ℜ𝔢[ 𝒯_S_z^I + 𝒯_S_z^II] = - ω2 Γ/ω^2 + (2Γ)^2ħ^2 k_x/2 m λ_𝐤 H_z [f'(E_+) - f'(E_-)] - ωα/2 λ_𝐤^2 H_z (α k_x + H_y)2 Γ/ω^2 + (2Γ)^2 [f'(E_+) + f'(E_-)] + ωΓα/λ_𝐤^2 (α k_x + H_y) H_z[f'(E_-)/(E_+ - E_- - ω)^2 + (2Γ)^2 + f'(E_+)/(E_+ - E_- + ω)^2 + (2Γ)^2] + ωα/2 λ_𝐤 (H_x - α k_y) [E_+ - E_- - ω/(E_+ - E_- - ω)^2 + (2Γ)^2 f'(E_-) + E_+ - E_- + ω/(E_+ - E_- + ω)^2 + (2Γ)^2 f'(E_+)] - ωα (H_x - α k_y) f'(E_+) + f'(E_-)/(E_+ - E_-)^2 - ω^2 - ωα/λ_𝐤 (H_x - α k_y) f(E_-) - f(E_+)/(E_+ - E_-)^2 - ω^2 - ω^2α/2 λ_𝐤 (H_x - α k_y) f'(E_-) - f'(E_+)/(E_+ - E_-)^2 - ω^2In the dc-limit we get Eq. (<ref>). Sinova_RMP2015 J. Sinova, S. O. Valenzuela, J. Wunderlich, C. H. Back, and T. Jungwirth, Rev. Mod. Phys. 87, 1213 (2015)SinovaZutic2012 J. Sinova and I. Zutic, Nat. Mater. 11, 368 (2012). dyakonov71 M. I. Dyakonov and V. I. Perel, Phys. Letters A 35, 459 (1971). Ivchenko78 E. L. Ivchenko and G. E. Pikus, Posma Zh. Eksp. Teor. Fiz. 27, 640 (1978) [JETP Lett. 27, 604 (1978)]. edelstein90 V. M. Edelstein, Sol. State Communs. 73, 233 (1990). aronov89 A. G. Aronov and Y. B. Lynda-Geller, JETP Lett 50, 431 (1989). liu08 M.-H. Liu, S.-H. Chen, and C.-R. Chang, Phys. Rev. B 78, 165316 (2008). gorini08 C. Gorini, P. Schwab, and M. Dzierzawa, 78, 125327 (2008). wang09 C. M. Wang, H. T. Cui, and Q. Lin, Phys. Status Solidi B 246, 2301 (2009). schwab10 P. Schwab, R. Raimondi and C. Gorini,Europhysics Lett. 90, 67004 (2010). golub11 L. E. Golub and E. L. Ivchenko, 84, 115303 (2011).dyrdal13 A. Dyrdal, M. Inglot, V. K. Dugaev, J. Barnas, 87, 245309 (2013). dyrdal14 A. Dyrdal and J. Barnas, 89, 075422 (2014). vorobev1979 L. E. Vorob'ev, E. L. Ivchenko, G. E. Pikus, I. I. Ferbstein, V. A. Shylygin, and A. V. 
Shturbin, Pis'ma Zh. Eksp. Teor. Fiz. 29, 485 (1979) [JETP Lett. 29, 441 (1979)]. kato04 Y. K. Kato, R. C. Myers, A. C. Gossard, and D. D. Awschalom, 93, 176601 (2004). silov04 A. Yu. Silov, P. A. Blaynov, J. H. Wolter, R. Hey, K. H. Ploog, and N. S. Averkiev, Appl. Phys. Lett. 85, 5929 (2004). sih05 V. Sih, R. C. Myers, Y. K. Kato, W. H. Lau, A. C. Gossard, and D. D. Awschalom, Nature Phys. 1, 31 (2005). norman14 B. M. Norman, C. J. Trowbridge, D. D. Awschalom, and V. Sih, 112, 056601 (2014). yang06 C. L. Yang, H. T. He, L. Ding, L. J. Cui, Y. P. Zeng, J. N. Wang, and W. K. Ge, Phys. Rev. Lett. 96, 186605 (2006). stern06 N. P. Stern, S. Ghosh, G. Xiang, M. Zhu, N. Samarth, and D. D. Awschalom, 97, 126603 (2006). koehl09 W. F. Koehl, M. H. Wong, C. Poblenc, B. Swenson, U. K. Mishra, J. S. Speck, and D. D. Awschalom, 95, 072110 (2009). kuhlen12 S. Kuhlen, K. Schmalbuch, M. Hagedorn, P. Schlammes, M. Patt, M. Lepsa, G. Güntherodt, and B. Beschoten, 109, 146603 (2012). Manchon08 A. Manchon and S. Zhang, 78, 212405 (2008). Abiague09 A. Matos-Abiague and R. L. Rodriguez-Suarez, 80, 094424 (2009). Gambardella11 P. Gambardella and I. M. Miron, Phil. Trans. R. Soc. A 369, 3175 (2011). Garello13 K. Garello, I. M. Miron, C. O. Avci, F. Freimuth, Y. Mokrousov, S. Blugel, S. Auffret, O. Boulle, G. Gaudin, and P. Gambardella, Nature Nanotechnology 8, 587–593 (2013). Kurebayashi14 H. Kurebayashi, J. Sinova, D. Fang, A. C. Irvine, T. D. Skinner, J. Wunderlich, V. Novak, R. P. Campion, B. L. Gallagher, E. K. Vehstedt, L. P. Zarbo, K. Vyborny, A. J. Ferguson, and T. Jungwirth. wang10 C. M. Wang and M. Q. Pang, Solid State Communications 150, 1509 (2010). XiaoMa2016 C. Xiao, D. Li, and Z. Ma, Front. Phys. 11, 117201 (2016). Li04 Z. Li and S. Zhang, 69, 134416 (2004). Hatami07 M. Hatami, G. E. W. Bauer, Q. Zhang, and P. J. Kelly, 99, 06603 (2007). Ansermet10 H. Yu, S. Granville, D. P. Yu, and J.-Ph. Ansermet, 104, 146601 (2010). abrikosov A. A. Abrikosov, L. P. Gorkov, and I. E.
Dzyaloshinski, Methods of Quantum Field Theory in Statistical Physics (Dover, New York, 1963). mahan G. D. Mahan, Many Particle Physics (Kluwer Academic/Plenum Publishers, New York, 2000). BDDIchapt J. Barnaś, A. Dyrdał, V. K. Dugaev, and M. Inglot, Thermal spin polarization in bidimensional systems, in Magnetic Nano- and Microwires: Design, Synthesis, Properties and Applications, edited by Manuel Vazquez (Woodhead Publishing, Elsevier, 2015). Brosco V. Brosco, L. Benfatto, E. Cappelluti, and C. Grimaldi, Phys. Rev. Lett. 116, 166602 (2016). Dyrdal2016 A. Dyrdał, J. Barnaś, and V. K. Dugaev, 94, 035306 (2016). Ioffe A. F. Ioffe and A. R. Regel, Prog. Semicond. 4, 237 (1960). Kurebayashi H. Kurebayashi, J. Sinova, D. Fang, A. C. Irvine, J. Wunderlich, V. Novak, R. P. Campion, B. L. Gallagher, E. K. Vehstedt, L. P. Zarbo, K. Vyborny, A. J. Ferguson, and T. Jungwirth, Nature Nanotech. 9, 211–217 (2014). Berry M. V. Berry, Proc. R. Soc. Lond. Ser. A 392, 45 (1984). Volovik G. E. Volovik, Zh. Eksp. Teor. Fiz. 94, 123 (1988). DiXiaoRevModPhys D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010). NagaosaRevModPhys N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Rev. Mod. Phys. 82, 1539 (2010).
http://arxiv.org/abs/1702.08162v1
{ "authors": [ "A. Dyrdal", "J. Barnas", "V. K. Dugaev" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170227070311", "title": "Current-induced spin polarization of a magnetized two-dimensional electron gas with Rashba spin-orbit interaction" }
Angela Di Virgilio et al. Observational and Experimental Gravity INFN-Sezione di Pisa, Largo B. Pontecorvo 3, 56124 Pisa, Italy, angela.divirgilio@pi.infn.it National Tsing Hua University, Hsinchu, Taiwan, 30013 ROC, weitou@gmail.com INAF - Osservatorio Astrofisico di Arcetri, 50125 Firenze, Italy Institute of Astronomy & Astrophysics, Academia Sinica, Taipei, Taiwan, 10617 ROC Center for Measurement Standards, ITRI, Hsinchu, Taiwan, 30011 ROC Observational and Experimental Gravity Sheau-shi Pan December 30, 2023 ======================================== We indicate the progress of experimental gravity, present an outlook in this field, and summarise the Observational/Experimental Parallel Session together with a related plenary talk on gravitational waves of the 2nd LeCosPA Symposium. PACS numbers: 04.80.Cc, 04.80.Nn, 04.80.-y, 95.55.Ym, 98.80.Es § PROGRESS AND OUTLOOK In the one hundred years since the advent of General Relativity (GR), the first milestone in observational and experimental gravity was the observation of light deflection during the solar eclipse of 1919. Since then, the three classical tests of GR (perihelion advance, gravitational redshift, and light deflection) have been verified<cit.> to 10^-3-10^-4. With the advent of the space age in 1957 and the development of space radio communication, Shapiro proposed a fourth test (Shapiro time delay) of GR: electromagnetic wave packets passing near the Sun would be retarded due to the gravitational curving of spacetime <cit.>. The Shapiro time delay is measured to agree with GR, in terms of the Eddington parameter γ (equal to 1 for GR), as 1.000021 ± 0.000023 from Cassini spacecraft Doppler tracking<cit.>. Lense-Thirring frame dragging has been measured to agree with GR to about 10 percent by satellite laser ranging (SLR) of LAGEOS 1 and LAGEOS 2, and by the GP-B gyro relativity experiment<cit.>.
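As a quick numerical check of the scale of the light-deflection test, the GR prediction for a ray grazing the Sun is δ = 4GM/(c²b), the γ = 1 value of (1+γ)2GM/(c²b); the standard solar constants below are our own illustrative inputs, not values quoted in the text:

```python
# Sketch: GR light deflection for a ray grazing the Sun (Eddington gamma = 1).
# The solar constants are standard values supplied for illustration.
GM_SUN = 1.32712e20   # solar gravitational parameter GM, m^3/s^2
C = 2.99792458e8      # speed of light, m/s
R_SUN = 6.957e8       # solar radius = impact parameter of a grazing ray, m

def light_deflection_arcsec(gm, b):
    """Deflection angle 4*GM/(c^2*b), converted from radians to arcseconds."""
    rad = 4.0 * gm / (C ** 2 * b)
    return rad * (180.0 / 3.141592653589793) * 3600.0

deflection = light_deflection_arcsec(GM_SUN, R_SUN)  # ~1.75 arcsec
```

The classical tests quoted above probe this prediction, and its post-Newtonian refinements, at the 10^-3-10^-4 level.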
The GP-B experiment has also verified the equivalence principle for rotating bodies to ultimate precision <cit.>. At present, the Lunar Laser Ranging (LLR) test and solar-system radio tests of GR have about the same accuracy. In the near future, interplanetary laser ranging and spacecraft laser ranging in the solar system will improve the accuracy of tests of relativistic gravity by another 3-4 orders of magnitude, reaching the second post-Newtonian order<cit.>. The precise timing of pulses from pulsars is now catching up with solar-system observations and will also reach the second post-Newtonian order soon, if not earlier<cit.>. The development of ring laser gyroscopes makes it possible to measure absolute angular velocity and to estimate rotation rates relative to the local inertial frame with ultra-precision. This is ideal for measuring the Lense-Thirring frame dragging on Earth. GINGER (Gyroscopes IN GEneral Relativity) is proposed to measure this frame dragging to 1 percent; GINGERino is under implementation (Sec. 2)<cit.>. Technology based on the success of the GINGER experiment could be applied to significantly improve the tie between the astronomical reference frame and the solar-system dynamical frame. The Einstein Equivalence Principle is an important cornerstone of GR and metric theories of gravity. From the nonbirefringence of the cosmic propagation of electromagnetic wave packets, the constitutive tensor of spacetime for local linear gravity coupling to electromagnetism must be of core metric form with an axion (pseudoscalar) degree of freedom and a dilaton (scalar) degree of freedom; from observations this is empirically verified to 10^-38, i.e. to 10^-4×(M_Higgs/M_Planck)^2<cit.>. This is significant in constraining the infrared behaviour of quantum gravity. Empirically, the axion is constrained by the non-observation of cosmic polarization rotation (Sec.
2); the dilaton is constrained by the agreement of the CMB spectrum with the Planck spectrum<cit.>. The Galileo equivalence principle is experimentally verified by Eötvös-type experiments and free-fall experiments<cit.> to about 10^-13. The next missions/mission proposals for testing the Galileo equivalence principle aim at improvements of 2-5 orders of magnitude<cit.>. In a series of papers in the 1970s, Rubin, Ford, and Thonnard measured the rotation curves of a number of disk galaxies and found that rotation speeds were larger than would be expected from the gravitational attraction arising from the visible mass distribution<cit.>. The authors interpreted their findings as evidence for a new dark matter component. Logically this conflict, known as the missing mass problem, could arise from a mass discrepancy, an acceleration discrepancy, or possibly even both. In 1983, Milgrom proposed the phenomenological modified Newtonian dynamics (MOND) law for small accelerations <cit.>. Under this hypothesis, the gravitational dynamics become modified when the acceleration is smaller than a_0∼ 10^-10 m s^-2. However, great efforts in finding the missing mass have not been fruitful, nor has the construction of a viable relativistic gravitational theory incorporating the MOND law been fully successful. It remains an open question. It is also interesting to note that a_0∼Λ^1/2<cit.>. On the cosmological scale, the discovery of cosmic acceleration has indicated the existence of a cosmological constant or cosmological-constant-like dark energy. The search for a microscopic theory behind this phenomenon may give a clue to the microscopic origin of gravity<cit.>. Observations of cosmic structure have confirmed the inflationary scenario, in which structure could have originated from quantum fluctuations in the inflationary period<cit.>.
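The MOND law mentioned above can be made concrete with a small numerical sketch: adopting the simple interpolating function μ(x) = x/(1+x) (our illustrative choice, as is the galaxy mass below), the effective acceleration a solves a μ(a/a₀) = g_N, and the circular speed flattens at v ≈ (GMa₀)^{1/4} far outside the visible mass, instead of falling off in the Newtonian way:

```python
import math

# Sketch of the MOND rotation-curve phenomenology with the simple
# interpolating function mu(x) = x/(1+x); then a*mu(a/a0) = g_N solves to
# a = (g_N + sqrt(g_N^2 + 4*g_N*a0)) / 2.  The galaxy mass is illustrative.
G = 6.674e-11           # m^3 kg^-1 s^-2
A0 = 1.0e-10            # MOND acceleration scale, m/s^2
M = 1.0e11 * 1.989e30   # 1e11 solar masses of visible matter, kg
KPC = 3.086e19          # one kiloparsec, m

def v_circ(r, mond=True):
    """Circular speed (m/s) at radius r (m), Newtonian or MONDian."""
    g_n = G * M / r ** 2
    if not mond:
        return math.sqrt(g_n * r)
    a = 0.5 * (g_n + math.sqrt(g_n ** 2 + 4.0 * g_n * A0))
    return math.sqrt(a * r)

v_flat = (G * M * A0) ** 0.25              # asymptotic flat speed, ~190 km/s
v_far = v_circ(100 * KPC)                  # MOND: stays near v_flat
v_newton = v_circ(100 * KPC, mond=False)   # Newton: has fallen off sharply
```

This is only the phenomenology; as the text notes, embedding it in a viable relativistic theory is the unsolved part.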
CMB polarization observations have constrained the tensor-to-scalar ratio r of inflationary/primordial GWs to be less than 0.07-0.1<cit.>. Most of the experimental gravitation community is working on GW experiments/observations in various frequency bands from aHz to THz<cit.>. Real-time detection is possible above 300 pHz, while below 300 pHz detection is possible through GW imprints or indirectly. Advanced LIGO has achieved a 3.5-fold better sensitivity, with a reach for neutron-star binary merger events of 70 Mpc, and began its first observing run (O1) in the middle of September 2015, searching for GWs. On September 14 the first GW observation took place: the merger of two black holes was recorded<cit.>. We will soon see a global network of second-generation km-size interferometers for GW detection. Another avenue for real-time direct detection is the PTAs. The PTA bound on the stochastic GW background already excludes most theoretical models; this may mean that very low frequency GWs could also be detected at any time, albeit on a longer time scale. Although the launch of a space GW mission is only expected in about 20 years, detection in the low-frequency band may have the largest signal-to-noise ratios. This will enable the detailed study of black hole co-evolution with galaxies and of the dark energy issue. We will see improvements of a few to several orders of magnitude in GW detection sensitivities over all frequency bands in the next hundred years<cit.>. Gravitational deflection has already been applied to astrophysics and cosmology as gravitational lensing to weigh the lensing sources<cit.>; it has become an important tool of astrophysics and cosmology. Besides GW observations, electromagnetic observations of black holes have been proposed (Sec. 2). It is well known that satellite positioning systems need to incorporate GR corrections.
With clocks reaching 10^-18 precision and beyond, applications to measuring the Earth's gravity and altitude could be realized. § COSMIC POLARIZATION ROTATION, LENSE-THIRRING FRAME DRAGGING AND BLACK HOLE SHADOW OBSERVATION §.§ Summary on CPR A review of cosmic polarization rotation (CPR), i.e. a rotation of the polarization angle (PA) for radiation traveling over large distances across the universe, was presented by Sperello di Serego Alighieri<cit.>. CPR is very relevant for this Symposium, since it would be observed if there were a pseudoscalar field coupling to electromagnetism, which Ni<cit.> found as a unique counter-example to the conjecture that any consistent Lorentz-invariant theory of gravity obeying the weak equivalence principle (WEP) would also obey the Einstein equivalence principle (EEP). In fact, since general relativity (GR) is based on the EEP, our confidence in the EEP and GR would be greatly increased if we could show that there is no CPR, because in this case the EEP would be tested to the same high accuracy as the WEP. The search for CPR is also important because it would tell us if and how one of the three elementary pieces of information (direction, energy, and PA) which photons carry to us about the universe is changed while they travel. Since 1990 CPR has been searched for using the polarization of radio galaxies, both in the radio and in the ultraviolet, and, more recently, using the polarization of the cosmic microwave background (CMB). The results of a recent review on CPR <cit.> and of a few updates were presented. In summary, the results so far are consistent with a null CPR, with upper limits of the order of 1^∘. Two current problems in CPR searches were discussed. The first involves the PA calibration at CMB frequencies, which is becoming the limiting factor, imposing a systematic error of about 1^∘, larger than the statistical errors of the best CMB polarization experiments.
Improvements are expected from more precise measurements of the polarization angle of celestial sources at CMB frequencies and from a calibration source on a satellite<cit.>. The second problem results from the unfortunate choice by the CMB community of a PA convention opposite to the standard one, adopted by all astronomers for many decades and enforced by the International Astronomical Union<cit.>. This is causing obvious confusion and misunderstanding, particularly for CPR, for which results use both conventions. A recommendation has been issued that all astronomers, including CMB polarimetrists, use the standard PA convention<cit.>. Concerning CPR tests, improvements are expected from better targeted high-resolution radio polarization measurements of radio galaxies and quasars, from more accurate ultraviolet polarization measurements of radio galaxies with the coming generation of giant optical telescopes, and from future CMB polarization measurements. An update<cit.> of the CPR constraint was presented from the analysis of the recent measurements of sub-degree B-mode polarization in the cosmic microwave background from 100 square degrees of SPTpol data<cit.>. The CPR fluctuation constraint from the joint ACTpol-BICEP2-POLARBEAR polarization data is 23.7 mrad (1.36^∘)<cit.>. With the new SPTpol data included, the CPR fluctuation constraint is updated to 17 mrad (1^∘) with the tensor-to-scalar ratio r = 0.05 ± 0.1.<cit.> §.§ The GINGER Project and the Lense-Thirring Measurement GINGER (Gyroscopes IN GEneral Relativity) <cit.> is based on an array of ring lasers and aims at measuring the Lense-Thirring effect at the level of 1%. Large-frame ring lasers are at present the most sensitive devices to measure absolute angular rotation, and it has already been demonstrated that they have a sensitivity very close to what is necessary to measure the Lense-Thirring effect<cit.>. At present, the actual construction of GINGER is under discussion.
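For a sense of scale of the ring-laser signal, the Sagnac beat frequency of a ring of area A, perimeter P and wavelength λ is f = (4A/(λP)) n̂·Ω; the side length, HeNe wavelength and Gran Sasso latitude below are illustrative assumptions of ours, not specifications from the text:

```python
import math

# Sketch: Sagnac beat frequency f = (4A / (lambda * P)) * (n . Omega) for a
# horizontal square ring laser sensing the Earth's rotation.  Side length,
# wavelength and latitude are illustrative assumptions.
OMEGA_EARTH = 7.292115e-5   # Earth rotation rate, rad/s
LAM = 632.8e-9              # HeNe laser wavelength, m

def sagnac_beat_hz(side_m, latitude_deg):
    """Beat frequency (Hz) of a horizontal square ring of given side length."""
    area = side_m ** 2
    perimeter = 4.0 * side_m
    omega_vertical = OMEGA_EARTH * math.sin(math.radians(latitude_deg))
    return 4.0 * area / (LAM * perimeter) * omega_vertical

f_beat = sagnac_beat_hz(3.6, 42.4)  # a few hundred Hz for a meter-scale ring
```

The Lense-Thirring correction targeted by GINGER is roughly one part in 10^9 of this Earth-rotation carrier, which is why the long-term stability of the scale factor 4A/(λP) dominates the experimental challenge.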
At the same time, experimental activity is in progress; the large-frame prototype GINGERino<cit.> is investigating whether the Gran Sasso underground laboratory is adequate for an experiment such as GINGER, and the prototype called GP2 has been installed in Pisa in order to develop a suitable control strategy to constrain the geometry of the ring laser and guarantee the long-term stability of its scale factor. The long-term stability of the scale factor of the ring laser is the most challenging experimental problem of the research activity around GINGER. §.§ The Greenland Telescope Project and BH Shadow Observation The size and shape of the shadow cast by a black hole event horizon are determined by the null geodesics and directly related to the background spacetime metric. Direct imaging of such a shadow, and the test of physics in strong gravity, is one of the important goals of modern astronomy. The ongoing Greenland Telescope (GLT)<cit.> project at the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA) is devoted to this exciting area <cit.>. The target source of the GLT project is the supermassive black hole located at the center of M87. The baselines between the future telescope in Greenland (hence the name Greenland Telescope), the Submillimeter Array in Hawaii, and the Atacama Large Millimeter/submillimeter Array in Chile will play a key role in future Very Long Baseline Interferometry (VLBI) observations. With the longest baseline >9000 km, the angular resolution can reach ∼20 μas at 350 GHz, high enough to resolve the black hole shadow of M87, which has an estimated angular size ∼40 μas. The first light and related VLBI test of the GLT in Thule, on the northwest coast of Greenland, will be obtained in 2016.
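The quoted resolution is essentially the diffraction limit θ ≈ λ/B; a quick check with the baseline and observing frequency given above:

```python
# Sketch: diffraction-limited VLBI resolution theta ~ lambda / B for the
# ~9000 km baseline at 350 GHz quoted above, in micro-arcseconds.
C = 2.99792458e8        # speed of light, m/s
NU = 350e9              # observing frequency, Hz
BASELINE = 9.0e6        # baseline length, m (~9000 km)
RAD_TO_UAS = (180.0 / 3.141592653589793) * 3600.0 * 1e6

theta_uas = (C / NU) / BASELINE * RAD_TO_UAS   # ~20 micro-arcseconds
resolves_m87_shadow = theta_uas < 40.0         # M87 shadow is ~40 uas across
```

A beam of ~20 μas against a ~40 μas shadow gives roughly two resolution elements across the feature, which is what makes the M87 shadow a viable target.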
The GLT will then be established at the Summit station in 2018/2019 <ref>. The GLT project is a collaborative project between ASIAA, the Smithsonian Astrophysical Observatory, MIT Haystack Observatory, and the National Radio Astronomy Observatory. uno See, e.g., W.-T. Ni, Int. J. Mod. Phys. D 25, 1630003 (2016). due I. Shapiro, Phys. Rev. Lett. 13, 789 (1964). tre B. Bertotti, L. Iess, and P. Tortora, Nature 425, 374-376 (2003). quattro I. Ciufolini and E. C. Pavlis, Nature 431, 958-960 (2004). cinque C. W. F. Everitt et al., Phys. Rev. Lett. 106, 221101 (2011). sei W.-T. Ni, Phys. Rev. Lett. 106, 221101 (2011). sette See, e.g., R. N. Manchester, Int. J. Mod. Phys. D 24, 1530018 (2015). otto J. Belfi et al., 'First Results of GINGERino', arXiv:1601.02874. nove W.-T. Ni, Phys. Lett. A 379, 1297 (2015); and references therein. dieci W.-T. Ni, Phys. Lett. A 378, 3413 (2014). undici J. G. Williams, S. G. Turyshev, and D. H. Boggs, Int. J. Mod. Phys. D 18, 1129 (2009). dodici S. Schlamminger et al., Phys. Rev. Lett. 100, 041101 (2008); and references therein. tredici http://smsc.cnes.fr/MICROSCOPE/index.htm. quattordici http://www.sstd.rl.ac.uk/fundphys/step/; http://einstein.stanford.edu/. quindici V. Rubin and W. K. Ford, Jr., Astrophys. J. 159, 379 (1970). sedici V. C. Rubin, N. Thonnard, and W. K. Ford, Jr., Astrophys. J. 225, L107 (1978). diciassette V. Rubin, N. Thonnard, and W. K. Ford, Jr., Astrophys. J. 238, 471 (1980). diciotto M. Milgrom, Astrophys. J. 270, 365 (1983). diciannove See, e.g., M. Bucher and W.-T. Ni, Int. J. Mod. Phys. D 24, 1530030 (2015). venti See, e.g., M. Davis, Int. J. Mod. Phys. D 23, 1430021 (2014). ventuno See, e.g., K. Sato and J. Yokoyama, Int. J. Mod. Phys. D 24, 1530025 (2015). ventidue BICEP2/Keck and Planck Collab., Phys. Rev. Lett. 114, 101301 (2015); C.-L. Kuo, this meeting. wei0 W.-T. Ni, 'Sensitivities of gravitational-wave detection', these proceedings. GW LIGO and Virgo Collaboration, Phys. Rev.
Lett. 116, 061102 (2016). ventitre See, e.g., W.-T. Ni, Int. J. Mod. Phys. D 24, 1530031 (2015) (arXiv:1511.00231). ventiquattro See, e.g., T. Futamase, Int. J. Mod. Phys. D 24, 1530011 (2015). sperello0 S. di Serego Alighieri, 'Gaining confidence on General Relativity with Cosmic Polarization Rotation', these proceedings. Nio77 W.-T. Ni, Phys. Rev. Lett. 38, 301 (1977). diS15 S. di Serego Alighieri, Int. J. Mod. Phys. D 24, 1530016 (2015). Kau16 J. P. Kaufman, B. G. Keating, and B. R. Johnson, MNRAS 455, 1981 (2016). iau74 IAU Commission 40, Polarization Definitions, Transactions of the IAU, Vol. XVB, p. 166 (1974). iau15 IAU Recommendation, http://www.iau.org/news/announcements/detail/ann16004 (2015). pan W.-P. Pan et al., 'New constraints on cosmic polarization rotation including SPTpol B-mode polarization observations', these proceedings. keisler R. Keisler et al., Astrophys. J. 807, 151 (2015). mei H.-H. Mei et al., Astrophys. J. 805, 107 (2015). angela A. Di Virgilio, 'GINGER, an array of ring lasers to test General Relativity', these proceedings. PRD84 F. Bosi et al., Phys. Rev. D 84, 122002 (2011). pu0 H.-Y. Pu, 'Observing the Black Hole Shadow of M87 and the Greenland Telescope Project', these proceedings. pu1 ASIAA website for the GLT project: http://vlbi.asiaa.sinica.edu.tw/project.php pu5 M. Inoue et al., 'Greenland Telescope Project: Direct Confirmation of Black Hole with Sub-millimeter VLBI', Radio Science 49(7), 564-571 (2014).
http://arxiv.org/abs/1702.08187v1
{ "authors": [ "Angela D. V. Di Virgilio", "Wei-Tou Ni", "Wei-Tou Ni", "Sperello di Serego Alighieri", "Hung-Yi Pu", "Sheau-shi Pan" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170227083435", "title": "Observational and Experimental Gravity" }
Supplemental material for “Correlators in simultaneous measurement of non-commuting qubit observables” Alexander N. Korotkov December 30, 2023 ====================================================================================================== We establish a functional weak law of large numbers for observable macroscopic state variables of interacting particle systems (e.g., voter and contact processes) over fast time-varying sparse random networks of interactions. We show that, as the number of agents N grows large, the proportion of agents (Y_k^N(t)) at a certain state k converges in distribution – or, more precisely, weakly with respect to the uniform topology on the space of càdlàg sample paths – to the solution of an ordinary differential equation over any compact interval [0,T]. Although the limiting process is Markov, the prelimit processes, i.e., the normalized macrostate vector processes (𝐘^N(t))=(Y_1^N(t),…,Y_K^N(t)), are non-Markov, as they are tied to the high-dimensional microscopic state of the system, which precludes the direct application of standard arguments for establishing weak convergence. The techniques developed in the paper for establishing weak convergence might be of independent interest. Keywords: Interacting particle systems; large-scale systems; thermodynamic limit; functional weak law of large numbers; time-varying sparse random networks; fluid limits; non-Markovian processes § INTRODUCTION Systems of interacting agents – often abstracted as interacting particle systems – can model many applications involving large-scale networked agents, from voting processes to the diffusion of opinions or epidemics in large populations; examples include the Harris contact process, the voter process, and the Glauber-Ising model <cit.> in statistical mechanics.
Due to the large scale of such systems, it is often not feasible to follow the pathwise dynamics of the high-dimensional microstate, i.e., of the collective state of all individuals in the population. As an alternative, one could attempt to study the evolution of the system by observing macroscopic quantities, that is, state variables defined as global averages or low-resolution functionals of the microstate. For instance, in epidemics, observing the binary state of each individual – infected or not infected – is prohibitive in a large population; instead, one often tracks the fraction of infected nodes in the population – a macroscopic observable. However, and in light of the discussion in the introduction of <cit.>, while the microstate of a system and the local rules of interaction completely determine its evolution (determinism principle), two systems at the same macrostate may evolve very differently. For instance, two distinct communities A and B may each have 50% of infected individuals, but community A may have a large number of contacts intertwining infected and healthy individuals – microscopic information – as opposed to community B, which may have a more clustered configuration. These microscopic configurations cause very different evolutions of the system at the macroscale: for example, the infected population will tend to increase at a much faster rate in community A. Fig. <ref> illustrates this example. It remains a major challenge in statistical mechanics to understand how the microscopics are exactly quotiented out, in the limit of large-scale interacting agents, to engender deterministic laws at the macroscale (a.k.a. fluid limit dynamics) without resorting to the simplifying hypotheses of: i) full mixing, where the underlying network of interactions is complete[Any agent can interact with any other agent at any time.]; ii) ideal gases, i.e., no interaction among agents; or iii) ergodicity.
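The community A/B example above can be quantified in a toy computation: with unit contact rates, the instantaneous rate of new infections is proportional to the number of infected-healthy edges, which the macrostate alone does not reveal. The two graphs below are illustrative constructions of ours:

```python
# Toy version of the community A/B example: identical macrostates (half the
# nodes infected) but very different infected-healthy contact counts, hence
# very different epidemic growth rates.  Graphs are illustrative.
n = 50                                  # infected nodes 0..n-1, healthy n..2n-1
infected = set(range(n))

# Community A: each infected node is in contact with a healthy node
edges_a = {(i, n + i) for i in range(n)}

# Community B: clustered contacts within each group plus a single bridge
edges_b = {(i, (i + 1) % n) for i in range(n)}             # infected ring
edges_b |= {(n + i, n + (i + 1) % n) for i in range(n)}    # healthy ring
edges_b.add((0, n))                                        # one cross contact

def infection_pressure(edges):
    """Number of edges joining an infected and a healthy node; with unit
    contact rates this is proportional to the rate of new infections."""
    return sum((u in infected) != (v in infected) for u, v in edges)

rate_a = infection_pressure(edges_a)    # 50 cross edges
rate_b = infection_pressure(edges_b)    # 1 cross edge
```

Both configurations have the same macrostate (a 50% infected fraction), yet community A's infected population initially grows about fifty times faster – exactly the microstate dependence that a fluid limit must quotient out.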
In such cases, the macrostates are overall Markov. More formally, and for the sake of clarity, we introduce the concept of realization. Let (X(t)) be Markov. We say that the stochastic process (X(t)) is a refinement of the process (Y(t)) if (Y(t)) is measurable with respect to (X(t)). We say that (Y(t))=(F(X(t))) realizes its refinement (X(t)) when (Y(t)) is Markov, i.e., its local (in time) evolution at each time t depends on the finer process (X(t)) only through (Y(t)) itself at each time t. For instance, if F is bijective, then (Y(t)) trivially realizes (X(t)). In statistical mechanics, (X(t)) plays the role of the high-dimensional microscopic process conveying all the information of the interacting particle system and (Y(t)) plays the role of the low-dimensional macroscopic observable. In such a framework, F is not bijective – many different microstate configurations yield the same macroscopic observation. In general, even though the microstate (X(t)) is Markov, the macrostates (Y(t)) are not, as they are tied to the microstate, i.e., they do not realize the microstate. This paper proves that, in interacting particle systems over appropriate fast time-varying networks, the macrostates asymptotically realize the microstate and become Markov: as the number of agents grows large, knowledge of the macrostates Y(t) at the present time t becomes sufficient to foresee their immediate future. In other words, the determinism principle is recovered in the limit of large-scale (time-varying) networks for the macrostate quantities. Formally, we prove that a sequence of non-Markov macrostate processes (𝐘^N(t)) converges in distribution – i.e., weakly with respect to the uniform topology on the space of càdlàg sample paths – to a process (𝐲(t)) that is the solution to an ordinary differential equation whose vector field only depends on (𝐲(t)), and thus, the limiting process (𝐲(t)) is Markov.
In other words, the process (Y^N(t)) realizes the microstate (X^N(t)) asymptotically in N (though not for finite N). When the exact ODE fluid limit dynamics associated with macroscopic state variables of a complex system exists, it provides a means to study interacting particle systems at the low-dimensional macroscale without keeping track of their microscopics. In particular, fluid limits are relevant to study the stability and metastability – often observed in dynamical systems exhibiting multiple equilibria with large basins of attraction – of systems at the macroscale, e.g., <cit.>, and they are useful to study the qualitative dynamics of such large-scale systems outside their thermodynamic equilibrium (if there is one). These in turn map to questions of determining thresholds and conditions on the parameters of the dynamical system under which it is ergodic or not (due to having multiple invariant measures). Hence it is worth seeking conditions on the underlying dynamics of the network (if not static) and of the interactions among agents that yield an exact fluid dynamics. To the best of our knowledge, such conditions are still sparse, and exact concentration results for macroscopic variables associated with stochastic processes over networks remain a very open question in the literature: they are only available for complete networks or trivial variants, e.g., a supernetwork of densely connected cliques (a.k.a. a network of communities) or complete multipartite networks.
The major reason lies in the fact that in such densely connected cases the macroscopic variables of interest realize the microscopics, i.e., they are Markov, and convergence results follow as a corollary of now-standard arguments such as Kurtz's theorem <cit.> (for weak convergence on the sample path) or Stein's method (for weak convergence of some limiting distribution, e.g., a Gibbs measure), whereas for other settings, involving non-Markovian prelimit processes, asymptotic properties are relatively less explored. For instance, a complete-network assumption is crucial to prove the limiting theorems in <cit.>, as it leads to full exchangeability of the underlying macroscopic quantities (which in turn allows one to resort to Stein's method). In our model we do not assume any of the densely connected types of networks mentioned, and the underlying process is not exchangeable (though partial exchangeability is present, as we will remark momentarily). In fact, in our case, the network will vary over time while preserving its sparsity – the number of edges is ∼ O(N). Note that one may also establish relevant bounds on macroscopic quantities – instead of seeking exact fluid limits – via stochastic domination on sparse networks, by bounding the corresponding processes by their complete-network counterparts as, e.g., in <cit.>. That is not the goal of the current paper. In this work, we show that for a particular dynamics of the network of contacts – namely, one in which the two agents involved in an interaction randomly shuffle their positions – we obtain an exact fluid limit concentration.
The question of determining the broadest class of network dynamics that leads to an exact fluid limit remains open. Note that one can obtain the limiting partial differential equation (PDE) dynamics of exclusion processes over lattices via the framework of hydrodynamics, e.g., <cit.>, where one seeks to determine the PDE thermodynamic limit associated with the evolution of the number of particles locally in space (e.g., following exclusion-process dynamics). In particular, one is interested in the limiting behavior of a process (η^N(x,t)) – the number of particles in a small interval or patch of space about x, whose length is of order O(1/N) – whereas we are interested in the evolution over time (without spatial dependence, i.e., an ODE instead of a general PDE) of the fraction of agents at a particular state. The above framework is different from the one studied in this paper: e.g., the former requires a local renormalization of the scale of time, and each vector process (η^N(x,t)) – where the vector entries collect the number of particles at each patch of the discretized space – in the sequence is Markov. To summarize, this paper shows that under a time-varying random rewiring dynamics of the sparse network of contacts or interactions (described in Section <ref>), one can obtain exact weak convergence of the macrostate quantities and, in particular, characterize their macroscale dynamics (ODE fluid limit). This work thus helps fill the gap in the characterization of exact fluid limit dynamics for processes over large sparse random networks of contacts. As explained more formally in Section <ref>, besides classical interacting particle system examples, the model adopted here may be applicable to other scenarios of practical interest, such as malware spread in mobile communication networks, an application of increasing relevance given the emergence of large-scale distributed denial of service (DDoS) attacks <cit.>. Outline of the paper.
Section <ref> introduces the main definitions and the dynamics assumed; Section <ref> formalizes the main goal of the paper; Section <ref> establishes an important result on the concentration of a rate process; Section <ref> finally establishes the weak convergence of the macroprocess and illustrates a simulation result; and Section <ref> concludes the paper.§ PROBLEM FORMULATION In this section, we present the general class of interacting particle systems denoted as Finite Markov Information-Exchange (FMIE) introduced in <cit.>, and we define the model and dynamics we assume. §.§ Main Constructs Consider N agents and let X^N_ik(t)∈{0,1} be the indicator that node i is in state k∈𝒳:={1,2,…,K} at time t, where K<∞ is fixed. We represent by𝐗^N(t)=(X_ik^N(t))_ik,X_ik^N(t)∈{0,1}the matrix microstate collecting the state of each of the N agents at time t. Each node can only be at one particular state at a time, i.e., the rows of (𝐗^N(t)) sum to 1. The underlying network of potential interactions, at time t, is captured by the binary adjacency matrix on N nodes A^N(t)∈{0,1}^N × N. Let𝐗_k^N(t)=(X_1k^N(t),…,X_Nk^N(t))∈{0,1}^Nbe the k-column of the matrix 𝐗^N(t). We consider the macroscopic state variablesY_k^N(t)=∑_i=1^N X_ik^N(t)=1^⊤𝐗_k^N(t)to be the number of agents at the state k∈𝒳 at time t and its normalized counterpartY_k^N(t)=1/N1^⊤𝐗_k^N(t)to be the fraction of nodes at the state k∈𝒳. Also, denote by𝐘^N(t)=(Y_1^N(t),…,Y_K^N(t)),the vector process representing the empirical distribution of nodes across states 𝒳.The microstate (𝐗^N(t)) is updated by two distinct processes: i) the peer-to-peer interactions given by the microscopic dynamics; and ii) the network random rewiring. Both are described in the next two subsections. 
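As a minimal illustration of these constructs (a hypothetical Python sketch; the names are ours, not the paper's), the macrostate 𝐘^N is just the column average of the one-hot microstate matrix 𝐗^N:

```python
import numpy as np

def macrostate(X):
    """Empirical distribution Y^N = (1/N) 1^T X for a one-hot N x K microstate."""
    return X.mean(axis=0)

# hypothetical microstate: N = 4 agents, K = 3 states; each row is one-hot
X = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])
Y = macrostate(X)   # fraction of agents in each of the K states
```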
§.§ Peer-to-peer Interaction Dynamics We assume d_i(t) clocks at each node i, where d_i(t) is the degree of node i at time t and each clock is dedicated to a current neighbor of i: once a clock ticks, agent i interacts with the corresponding neighbor (in the current network topology or geometry). The clocks are independent and exponentially distributed – hence, (𝐗^N(t)) is Markov. The state 𝐗^N(t) is updated by these local interactions. If i interacts with j (as the clock of i pointing to j rings) at time t, the states of i and j are updated as(𝐞_i^⊤𝐗^N(t),𝐞_j^⊤𝐗^N(t))=G(𝐞_i^⊤𝐗^N(t_-),𝐞_j^⊤𝐗^N(t_-))where 𝐞_ℓ is the canonical vector with 1 at the ℓth entry, and zero otherwise, and G : ℰ×ℰ→ℰ×ℰ is the update function withℰ:= {𝐞_1,𝐞_2,…,𝐞_K}.G is a function that maps the states of the interacting nodes onto their new states, similarly to the definition in <cit.>. For instance, if a node in state 3 interacts with a node in state 5, then the new states of both nodes are given by the tuple G(𝐞_3,𝐞_5).From the peer-to-peer dynamics described, the updates in the microstate (𝐗^N(t)) change the macrostate (𝐘^N(t)) according to the following Martingale problem <cit.> pathwise dynamicsY_k^N(ω,t)=Y_k^N(ω,0)+M_k^N(ω,t) + 1/N∫_0^t ∑_mℓ∈𝒳^2γ_mℓ c_mℓ(k) 𝐗_m^N ⊤(s_-) A^N(s_-)𝐗_ℓ^N(s_-)_=:ℱ^N_k(𝐗^N(t_-),A^N(t_-)) dswhere Y_k^N(ω,t) is the fraction of agents at the state k at time t for the realization ω, (M_1^N(t),…,M_K^N(t)) is a normalized martingale process (refer to equation (<ref>)), γ_mℓ is the rate of the exponential clocks from nodes at the state m to contact nodes at the state ℓ, and c_mℓ(k)∈{-2,-1,0,1,2} gives the increment in the number of nodes at state k due to the interaction between nodes in states m and ℓ. For instance, if an interaction between node i in state 1 and node j in state 2 turns both i and j to state 4, then c_12(4)=2. The terms c_mℓ(k) are uniquely determined from the given update function G. 
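The claim that the increments c_mℓ(k) are determined by G can be made concrete: compare the total state-k occupancy of the interacting pair before and after the update. A sketch follows; the SIS-style rule G below is our hypothetical example, not taken from the paper.

```python
import numpy as np

def c_from_G(G, K):
    """Tabulate c_{m,l}(k): the net change in the number of state-k nodes when a
    node in state m contacts a node in state l and both are updated by G."""
    E = np.eye(K, dtype=int)                   # canonical one-hot states e_1..e_K
    c = np.zeros((K, K, K), dtype=int)
    for m in range(K):
        for l in range(K):
            new_m, new_l = G(E[m], E[l])
            c[m, l] = (np.asarray(new_m) + np.asarray(new_l)) - (E[m] + E[l])
    return c

# Hypothetical K = 2 infection rule: when the clock of an infected node
# (state index 1) rings towards a susceptible contact (index 0), the contact
# becomes infected; all other pairings leave the states unchanged.
def G(x_i, x_j):
    if x_i[1] == 1 and x_j[0] == 1:
        return x_i, np.array([0, 1])
    return x_i, x_j

c = c_from_G(G, 2)
```

Note that the constraint c_kk(k) ≤ 0 discussed in the next remark holds automatically for any G built this way.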
Note also that the clock-rates γ_mℓ may or may not depend upon the states m and ℓ of the interacting nodes. For instance, for the analysis of contact processes, γ_mℓ is often assumed independent of the states and represented simply as γ (rate of infection).Remark on c_mℓ(k). Note that, if two nodes at state k interact, then the number of nodes in state k cannot be incremented as a result of this interaction (the two nodes interacting are already at this state). Hence, the tensor 𝐜 is constrained to c_kk(k)≤ 0 for all k. This leads, on the other hand, to the fact that the hyper-cube [0,1]^K is invariant under the stochastic dynamics (<ref>). §.§ Rewiring Network Dynamics Once an update on the microstate happens, the edges of the network are randomly rewired, i.e.,A^N(t)=P^⊤ A^N(t_-) P,where P∈𝒫_ er(N) is drawn uniformly randomly from the set of N× N permutation matrices 𝒫_ er(N) – each time an update occurs.An alternative representation to the random rewiring is the following:𝐗_m^N ⊤(t_-)(P^⊤A^N(t_-)P)𝐗_ℓ^N(t_-)=(P𝐗_m^N(t_-))^⊤A^N(t_-)(P𝐗_ℓ^N(t_-)),in other words, we can consider equivalently that the network A^N is fixed and that the positions of the nodes permute just after an update. This interpretation is assumed throughout the paper. In fact, such partial exchangeability allows us to consider any network A^N with a fixed number of edges. For each N, we consider a regular network with degree d^N and thus, the degree, in a sense, controls the sparsity of the network – or, if you will, the real-time bandwidth of nodes for peer-to-peer contact. Note that, given any even number N and an arbitrary degree d<N, one can always construct a d-regular bipartite graph on N nodes. In other words, and for the sake of our problem, we can assume that the graph is regular bipartite (which will be convenient momentarily). § SUMMARY AND GOAL OF THE PAPER The model introduced in Section <ref> arises from the fact that agents often wander around as their states evolve, instead of being static. 
The macroscopic dynamical laws derived may be useful, e.g., to study large-scale interacting particle systems supported in sparse dynamical network environments. One particular application is malware propagation <cit.> in mobile device networks. Mobile devices move around fast in an ad-hoc manner – hence, the underlying network of contacts changes fast over time in an ad-hoc manner – and their real-time bandwidth for peer-to-peer communication is usually low <cit.> – hence, the geometry of the support network of interactions is sparse – nevertheless, the massive number of infected mobile devices corresponds to a large-scale (botnet) network that may launch, for instance, DDoS attacks <cit.>, a modern threat over the Internet-of-Things (IoT) of increasing importance <cit.> and that may display non-trivial metastable behavior <cit.>.One can partition the class of interacting particle systems into three broad categories: * Fast network mixing dynamics: the support network dynamics, i.e., the movement of the interacting agents runs at a much faster time-scale than the interactions among the agents themselves;* Static network: interactions among agents run at a much faster rate than the network dynamics;* Mesoscale: both dynamics run at comparable speeds.The fluid limit paradigm varies greatly depending on the class under analysis. Our work sits primarily within the first category: fast network mixing dynamics.To summarize, the microstate (𝐗^N(t)) is updated in two ways: i) peer-to-peer interactions given by the microscopic dynamics; and ii) network random rewiring just described. Fig. <ref> summarizes the model for the case of a contact process and assuming that (the fixed) A^N is a cycle network. Note from equation (<ref>) that the empirical process (𝐘^N(t)) is not Markov as it is tied to the finer microscopics (𝐗^N(t)). 
Our goal is to prove that the non-Markovian sequence (𝐘^N(t)) converges weakly as N goes to infinity, with respect to the uniform topology on the space of càdlàg sample paths, to the solution (𝐲(t)) of the ODEẏ_k(t) = d ∑_mℓ∈𝒳^2γ_mℓ c_mℓ(k) y_m(t)y_ℓ(t),for k∈𝒳={1,…,K}, where d is the asymptotic average degree of the limiting network, and γ_mℓ is the rate of the exponential clocks from nodes at state m to contact nodes at state ℓ. More compactly, the limiting equation is given by the ODEẏ_k(t) = d 𝐲(t)^⊤(Γ⊙ C(k)) 𝐲(t)=:f_k(𝐲(t)),where C(k):=(c_mℓ(k))_mℓ, Γ:=(γ_mℓ)_mℓ, and ⊙ is the pointwise Hadamard product. Recall that the constraints on C, in order to make the microscopic model physically meaningful (as noted at the end of Subsection <ref>), imply that the hyper-cube [0,1]^K is invariant under the ODE dynamics (<ref>).This tacitly implies that, in order to perform macroscopic analysis at large scale, one can replace the complex stochastic microscopic dynamics in equation (<ref>) by the lower-dimensional ODE in equation (<ref>).In the next sections, we prove weak convergence by establishing four major results:i) (Convergence on the line – Subsection <ref>). The quadratic rate in the pathwise dynamics|𝐗_m^N ⊤(t) A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)| ℙ⟹ 0concentrates on the line (i.e., at each time t) in probability exponentially fast. To prove this step, it is crucial to introduce and prove the result for an auxiliary process (𝐗^N(t)) that is coupled with the original one (𝐗^N(t)), which is done in Subsection <ref>.ii) (Tightness – Subsection <ref>). Since, for each time t, the quadratic rate converges exponentially fast, it is tight via Theorem <ref> – note that convergence on the line alone does not imply tightness <cit.>.iii) (Martingale converges to zero in probability – Subsection <ref>). 
We show that||M_k^N(t)||_sup[0,T]ℙ⟶ 0,for every k∈𝒳, where ||·||_sup[0,T] is the sup-norm on the space of sample paths over the interval [0,T].iv) (Weak convergence – Section <ref>). Relying on points i), ii) and iii), and by a standard invocation of the Skorokhod Representation Theorem <cit.>, one can show weak convergence of the empirical process (𝐘^N(t)) to the solution of an ODE over each compact interval [0,T].In what follows, we refer to the processR^N_mℓ(t):=𝐗_m^N ⊤(t) A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)as the gap process. § WEAK CONVERGENCE OF THE GAP PROCESS In this section, we prove the following concentration in probability for the gap process (refer to Theorem <ref> for its formal statement)||𝐗_m^N ⊤(t) A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)||_sup[0,T]ℙ⟹ 0 where Y_k^N(t) is the fraction of nodes at the state k∈𝒳 at time t∈[0,T]. In other words, the gap process is tight and this will be crucial to establish weak convergence of the macroscopic process (𝐘^N(t)). To prove such a tightness result, we first establish it on the line for an auxiliary process that is coupled to our original microscopic process 𝐗^N(t).§.§ Conditional Large Deviation on the Line for an Auxiliary Process Let (𝐗_m^N(t))=(X_1m^N(t),…,X_Nm^N(t)) be defined as follows: whenever there is an interaction at t-, and assuming we have α_m N nodes at state m just after that interaction, the coordinates X_im^N(t) are updated by the realization of N i.i.d. Bernoulli random variables (conditioned on 𝐘^N(t)) with conditional lawℙ(X_im^N(t)=1|Y_m^N(t_-)=α_m.)=α_m. Remark. One can partition the set of edges E^N comprising A^N into sets of independent edges E^N=⋃_k E^N_k (a.k.a., matchings <cit.>), i.e., edges that do not share nodes in common. In other words,i_1 i_2, i_3 i_4 ∈ E^N_k ⇒ i_m≠ i_nm≠ n, m,n∈{1,2,3,4},and note from the definition of 𝐗_m^N(t) that X_i_1m(t)X_i_2m(t) and X_i_3m(t)X_i_4m(t) are independent (given 𝐘^N(t)) if i_1 i_2, i_3 i_4 ∈ E_k for some k. 
A matching that pairs all nodes is called a perfect matching. It is a simple and well-known fact that the set of edges of a d^N-regular bipartite graph admits a partition into d^N perfect matchings of size N (number of nodes), e.g., refer to <cit.>.The following lemma follows from a Bernstein concentration inequality <cit.> for independent random variables (refer to Corollary <ref> in the Appendix). For simplicity, in what follows, we denote α_mℓ:=α_mα_ℓ. Let [α] round α>0 so that [α N]∈ℕ is the closest integer (from above) to α N. Let t∈[0,T] be fixed. For any ϵ>0, there exist N_0,k>0 so thatℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d [α_mℓ]|>ϵ|Y_m^N(t_-)=[α_m],Y_ℓ^N(t_-)=[α_ℓ].)≤ 2e^-kN,for all N>N_0, where k does not depend on α_mℓ (this latter information is relevant for Theorem <ref> and follows from Corollary <ref> in the Appendix).As discussed in Subsection <ref>, without loss of generality, we assume that A^N is a regular bipartite network with degree d^N. One can thus partition the quadratic term 𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t) into d^N sums of N independent terms𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)= ∑_i_1 j_1∈ E_1X^N_i_1 mX^N_j_1 ℓ +… + ∑_i_d j_d ∈ E_dX^N_i_dmX^N_j_dℓwhere each sum runs over a perfect matching and comprises N independent terms (as remarked before), where each term has mean α_mℓ. 
Thus,ℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N [α_mℓ]|>ϵ|Y_m^N(t_-)=[α_m],Y_ℓ^N(t_-)=[α_ℓ].)≤ℙ(|∑_i_1 j_1∈ E_1X^N_i_1 mX^N_j_1 ℓ +… + ∑_i_d j_d ∈ E_dX^N_i_dmX^N_j_dℓ/N-d^N [α_mℓ]|>ϵ|Y_m^N(t_-)=[α_m],Y_ℓ^N(t_-)=[α_ℓ].)≤ℙ(|∑_i_1 j_1X^N_i_1 mX^N_j_1 ℓ/N-[α_mℓ]| +… + |∑_i_d j_dX^N_i_d mX^N_j_d ℓ/N-[α_mℓ]|>ϵ|Y_m^N(t_-)=[α_m],Y_ℓ^N(t_-)=[α_ℓ].)≤ℙ(|∑_i_1 j_1X^N_i_1 mX^N_j_1 ℓ/N-[α_mℓ]|>ϵ/d^N|Y_m^N(t_-)=[α_m],Y_ℓ^N(t_-)=[α_ℓ].)+…+… + ℙ(|∑_i_d j_dX^N_i_d mX^N_j_d ℓ/N-[α_mℓ]|>ϵ/d^N|Y_m^N(t_-)=[α_m],Y_ℓ^N(t_-)=[α_ℓ].)≤ 2d^N e^-kNfor N large enough, where k is a function of the degree d^N and ϵ, but it does not depend on α_mℓ (refer also to Corollary <ref> in the Appendix); and the last inequality follows from the Bernstein concentration inequality.§.§ Large Deviations on the Line for the Gap Process Now, we observe that the main process (𝐗^N(t)) can be obtained (in distribution) from (𝐗^N(t)) as follows: for any m∈𝒳 * if 1^⊤𝐗_m^N(t) > 1^⊤𝐗_m^N(t), then choose randomly 1^⊤𝐗_m^N(t)-1^⊤𝐗_m^N(t) of the 1's of (𝐗_m^N(t)) to flip to zero and declare the new vector as 𝐙_m^N(t);* if 1^⊤𝐗_m^N(t) < 1^⊤𝐗_m^N(t), then choose randomly 1^⊤𝐗_m^N(t)- 1^⊤𝐗_m^N(t) of the zero's of (𝐗_m^N(t)) to flip to one and declare the new vector as 𝐙_m^N(t);* if 1^⊤𝐗_m^N(t) = 1^⊤𝐗_m^N(t), then set 𝐙_m^N(t)=𝐗_m^N(t).Clearly, 𝐙^N(t)d=𝐗^N(t) and the above construction couples both processes 𝐗^N(t) and 𝐗^N(t) as we can write𝐗_m^N(t)+𝐄_m^N(t)d=𝐗_m^N(t)where the vector 𝐄_m^N(t)∈{-1,0,1}^N flips the appropriate entries of the vector 𝐗_m^N(t), and the above equality holds in distribution for each m∈{1,…,K}. We have the theorem. The following holdsℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N|> ϵ|𝐘^N(t_-).)≤ 2e^-kN.where k does not depend on 𝐘^N(t). We get successivelyℙ(|𝐗_m^N ⊤(t)A 𝐗_ℓ^N(t)/N-𝐗_m^N ⊤(t)A 𝐗_ℓ^N(t)/N|> ϵ|𝐘^N(t_-)=α.) = ℙ(|(𝐗_m^N ⊤(t)+𝐄_m^N(t))^⊤A (𝐗_ℓ^N ⊤(t)+𝐄_ℓ^N(t))/N-𝐗_m^N ⊤(t)A 𝐗_ℓ^N(t)/N|> ϵ|𝐘^N(t_-)=α.) 
=ℙ(|𝐄_m^N ⊤(t)A 𝐗_ℓ^N(t)/N+𝐗_m^N ⊤(t)A 𝐄_ℓ^N(t)/N+𝐄_m^N ⊤(t)A 𝐄_ℓ^N(t)/N|> ϵ|𝐘^N(t_-)=α.)≤ℙ(|𝐄_m^N ⊤(t)A 𝐗_ℓ^N(t)/N| >ϵ/3|𝐘^N(t-)=α.)+ℙ(|𝐗_m^N ⊤(t)A 𝐄_ℓ^N(t)/N| >ϵ/3|𝐘^N(t_-)=α.) + ℙ(|𝐄_m^N ⊤(t)A 𝐄_ℓ^N(t)/N| >ϵ/3|𝐘^N(t_-)=α.)≤ℙ(d^N |𝐄_m^N ⊤(t)1/N| >ϵ/3|𝐘^N(t_-)=α.)+ℙ(d^N |1^⊤𝐄_ℓ^N(t)/N| >ϵ/3|𝐘^N(t_-)=α.) + ℙ(d^N |𝐄_m^N ⊤(t)1/N| >ϵ/3|𝐘^N(t_-)=α.).Each term on the right hand side of the last inequality can be bounded as followsℙ(d^N|𝐄_i^N ⊤(t)1/N|> ϵ/3|𝐘^N(t_-)=α.) =ℙ(|1^⊤(𝐗_i^N(t)-𝐗_i^N(t))/N|> ϵ/3 d^N|𝐘^N(t_-)=α.) =ℙ(|(1^⊤𝐗_i^N(t)- α_i N)/N|> ϵ/3 d^N|𝐘^N(t_-)=α.) =ℙ(|(1^⊤𝐗_i^N(t) )/N-α_i|> ϵ/3 d^N|𝐘^N(t_-)=α.)≤ 2 e^-kNfor any α, where k does not depend on α. And the theorem follows. The next theorem follows as corollary to Lemma <ref> and Theorem <ref>. We haveℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)|>ϵ)≤ M e^-kN,for all t≥ 0, and some M>0. ℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)|>ϵ) =E[ℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)|>ϵ| 𝐘^N (t_-) .)] = E[ℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)-𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N+𝐗_m^N ⊤(t)A 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)|>ϵ| 𝐘^N(t_-) .)]≤ E[ℙ(|𝐗_m^N ⊤(t)A 𝐗_ℓ^N(t)-𝐗_m^N ⊤(t)A 𝐗_ℓ^N(t)/N|>ϵ| 𝐘^N(t_-) .)] +E[ℙ(|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t)Y_ℓ^N(t)|>ϵ| 𝐘^N(t_-) .)]≤ M e^-kN,where the last inequality follows from Lemma <ref>, Theorem <ref>, and the fact that k does not depend on (𝐘^N(t_-)).§.§ Tightness of the Gap Process The following theorem is crucial to what followsLet M^Nd∼𝒩_N(0,1) be a Poisson random variable with parameter N. Let (Z^N_i)_i=1^∞ be a sequence of independent (and independent of M^N) Bernoulli random variables with lawℙ(Z^N_i=1)=1/N^α,for all i∈ℕ, with α>1. 
Then,∑_i=0^M^N Z^N_iℙ⟶ 0,or equivalently,ℙ(∑_i=0^M^N Z^N_i≥ 1)⟶ 0, as N goes to infinity.The idea behind this theorem is that Z^N_i will play the role of the indicator of an ϵ-deviation in our gap process(R^N_mℓ(t))=(𝐗_m^N(t)A^N𝐗_ℓ^N(t)/N-Y_m^N(t)Y_ℓ^N(t)).Thus, the theorem states that the probability that there will be at least one ϵ-deviation during the whole time interval [0,T] (i.e., across all shuffles in [0,T]) decreases to zero as N grows large (as stated formally in Theorem <ref>). First note thatℙ(.∑_i=0^M^N Z^N_i≥ 1|M^N) = 1-ℙ(.∑_i=0^M^N Z^N_i = 0|M^N)= 1-ℙ(.Z^N_i = 0∀i≤ M^N|M^N) = 1-(1-1/N^α)^M^N = 1-((1-1/N^α)^N^α)^M^N/N^α = 1-e(N)^M^N/N^α,where we definede(N):=(1-1/N^α)^N^α.Now,ℙ(∑_i=0^M^N Z^N_i≥ 1)= E[ℙ(.∑_i=0^M^N Z^N_i≥ 1|M^N)]= ∑_k≥ 0(1-e(N)^k/N^α) N^k e^-N/k!.We have thate^-N∑_k(1-e(N)^k/N^α) N^k/k!= e^-N(∑_kN^k/k! - ∑_ke(N)^k/N^αN^k/k!) = 1-e^-N∑_ke(N)^k/N^αN^k/k!=1-e^-N× e^e(N)^1/N^αN = 1-e^-N× e^(1-1/N^α)N = 1- e^-N/N^α⟶ 0 as N→∞,for any α>1. The next theorem is the main result of this section. We havelim_N→∞ℙ(sup_t∈[0,T]|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t) Y_ℓ^N(t)|>ϵ)=0,for any ϵ>0.Let M^N ∼𝒩_d^N N(0,T) be a Poisson random variable with parameter d^N N and let M̂^N count the number of interactions (i.e., a state change happens) across the time interval [0,T]. SetZ^N(t):=1_{|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t) Y_ℓ^N(t)|>ϵ}(t)to be the indicator of an ϵ-deviation in the gap process. Now, note that, under an appropriate coupling,M̂^N ≤_a.s. M^N d∼𝒩_λ(N)(0,T)where λ(N)=d^N N, and d^N is the degree of the network A^N. In particular, the intensity λ of the Poisson upper-bounding the number of shuffles on the interval increases linearly with N. 
It follows thatℙ(sup_t∈[0,T]|𝐗_m^N ⊤(t)A^N 𝐗_ℓ^N(t)/N-d^N Y_m^N(t) Y_ℓ^N(t)|>ϵ) =ℙ(∑_i=1^M^NZ^N(t_i)≥ 1) ≤ ℙ(∑_i=0^M^N Z^N_i≥ 1) N→∞⟶ 0where the last inequality follows from Theorem <ref> and the large deviation on the line, Theorem <ref>.§.§ Martingale Converges in Probability to Zero In this subsection, we prove the following theorem.For any ϵ>0, the following holdsℙ(sup_[0,T]|M_k^N(t)|>ϵ)N→∞⟶ 0for each k∈𝒳 and T≥ 0.We prove that for each T≥ 0, we haveE(M_k^N(T))^2 N→∞⟶ 0,that is, the martingale vanishes in ℒ^2 on the line. The theorem will follow as corollary to Doob's inequality, i.e.,P(sup_0≤ t≤ T|M_k^N(t)|>ϵ)≤ E(M_k^N(T))^2/ϵ^2N→∞⟶ 0, ∀ϵ>0, ∀ T≥0.For each k∈𝒳, the martingale is given byM^N_k(t)=∑_mℓ∈𝒳^2∑_n=0^d^N N∫_0^t c_mℓ(k) 1_{X_m^N⊤(s_-)A^N X_ℓ^N(s_-)=n }(𝒩^(m,ℓ,n)_γ_mℓn(ds)-γ_mℓ nds)where {𝒩^(m,ℓ,n)_γ_mℓn}_(m,ℓ,n) is a family of pair-wise independent Poisson processes indexed by the triple (m,ℓ,n) and each with mean or parameter γ_mℓn. We haveE(M_k^N(T))^2 =E(∑_mℓ∈𝒳^2∑_n∫_0^T c_mℓ(k) 1_{X_m^N⊤(s_-)A^N X_ℓ^N(s_-)=n }(𝒩^(m,ℓ,n)_γ_mℓn(ds)-γ_mℓ nds))^2= ∑_mℓ∈𝒳^2∑_n E(∫_0^T c_mℓ(k) 1_{X_m^N⊤(s_-)A^N X_ℓ^N(s_-)=n }(𝒩^(m,ℓ,n)_γ_mℓn(ds)-γ_mℓ nds))^2= ∑_mℓ∈𝒳^2∑_n E(∫_0^T c^2_mℓ(k) 1_{X_m^N⊤(s_-)A^N X_ℓ^N(s_-)=n }γ_mℓ n ds)≤ ∑_mℓ∈𝒳^2 E(∫_0^T∑_n1_{X_m^N⊤(s_-)A^N X_ℓ^N(s_-)=n } 4 γ d^N N ds)≤4 K^2 γ d^N N T,where γ:=max_mℓγ_mℓ; the second equality (<ref>) follows from Theorem <ref> and the independence of all the underlying Poisson processes involved (hence, the cross terms in the square are zero-mean martingales). 
The third equality (<ref>) is due to the Itô isometry Theorem (refer to <cit.> or <cit.>) and the fact that the quadratic variation of a compensated Poisson martingale is given by⟨𝒩_γ(t)-γ t ⟩=γ t.The first inequality (<ref>) is due toc^2_mℓ(k)≤ 4 and n≤ d^N N = 2 × |E^N| (twice the number of edges of A^N).The last inequality (<ref>) holds since the family of subsets of the interval [0,T]I_n(ω):={s∈[0,T] :X_m^N⊤(ω,s_-)A^N X_ℓ^N(ω,s_-)=n},for each fixed pair (m,ℓ), indexed by n, are realization-wise disjoint and thus for each pair (m,ℓ)∑_n 1_{X_m^N⊤(ω,s_-)A^N X_ℓ^N(ω,s_-)=n}=1_⋃_n{X_m^N⊤(ω,s_-)A^N X_ℓ^N(ω,s_-)=n}≤ 1_[0,T](ω,s_-).for all ω∈Ω.Therefore, for the normalized martingale, we have for all fixed TE(M_k^N(T))^2=1/N^2 E(M_k^N(T))^2≤4K^2 γ d^N T/N⟶ 0.and the result now follows from Doob's inequality (<ref>). § WEAK CONVERGENCE OF THE MACROPROCESS The stochastic dynamical system for the macroscopics (<ref>) can be rewritten as followsY_k^N(t) =Y_k^N(0)+M_k^N(t) + d^N∫_0^t ∑_mℓ∈𝒳^2γ_mℓ c_mℓ(k) Y_m^N(s_-)Y_ℓ^N(s_-) ds + ∫_0^t ∑_mℓ∈𝒳^2γ_mℓ c_mℓ(k) (𝐗_m^N ⊤(s_-) A^N𝐗_ℓ^N(s_-)- d^N Y_m^N(s_-)Y_ℓ^N(s_-))ds.for each k∈𝒳. The next theorem follows from the equicontinuity condition in the Arzelà-Ascoli Theorem (refer to Theorem <ref> in the Appendix). The sequence of macro-processes (𝐘^N(t))=(Y_1^N(t),…,Y_K^N(t)) is C-tight, i.e., its set of weak-accumulation points is nonempty and lies almost surely in C_[0,T], that is,{(𝐘^N_k(t))⇒(𝐘(t))}⇒ℙ((𝐘(t))∈ C_[0,T])=1. We show that (𝐘^N(t)) fulfills the bound and equicontinuity conditions in equations (<ref>)-(<ref>) in the Arzelà-Ascoli Theorem, Theorem <ref>. Indeed, we haveℙ(sup_0≤ t≤ TY_m^N(t)≥ k)=0, ∀ k>1,and the first condition holds since 0 ≤Y_m^N(t)≤ 1 almost surely for all t∈[0,T] and m∈𝒳 (as noted at the end of Subsection <ref>).For the equicontinuity condition, for every k∈𝒳, we haveω(Y_k^N, δ, T) = sup_|u-v|≤δ, u,v∈[0,T]{|Y_k^N(u)-Y_k^N(v)|}= sup_|u-v|≤δ, u,v∈[0,T]{|M_k^N(u)-M_k^N(v).. 
..+d^N∫_u^v∑_mℓ∈𝒳^2γ_mℓ c_mℓ(k) Y_m^N(s_-)Y_ℓ^N(s_-) ds..+..∫_u^v ∑_mℓ∈𝒳^2γ_mℓ c_mℓ(k) (𝐗_m^N ⊤(s_-)A^N𝐗_ℓ^N(s_-)/N-d^N Y_m^N(s_-)Y_ℓ^N(s_-)) ds|}≤ sup_0≤ t≤ T|M_k^N(t)|+γ^(k) d^N δ+γ^(k)δsup_0≤ t≤ T|R_mℓ^N(t)|=: ω_2(Y_k^N,δ,T), where we defined γ^(k):=∑_mℓ∈𝒳^2γ_mℓ|c_mℓ(k)|. Now, for any ϵ>0, we haveℙ(ω(Y_i^N,δ,T)≥ϵ)≤ℙ(ω_2(Y_i^N,δ,T)≥ϵ).Moreover,ℙ(ω_2(Y_k^N,δ,T)≥ϵ)≤ ℙ(sup_0≤ t≤ T|M_k^N(t)|>ϵ/3)+ℙ(γ^(k) d^N δ >ϵ/3)+ℙ(γ^(k)δsup_0≤ t≤ T|R_mℓ^N(t)| >ϵ/3).By applying the lim sup_N on both sides of the inequality (<ref>)-(<ref>), we obtainlim sup_N→∞ℙ(ω_2(Y_k^N,δ,T)≥ϵ)≤ℙ(γ d δ >ϵ/3),from Theorem <ref>, the martingale convergence Theorem <ref>, and the assumption d^NN→∞⟶ d. Therefore, we can apply lim_δ→ 0 to equation (<ref>)lim_δ→ 0lim sup_N→∞ℙ(ω_2(Y^N,δ,T)≥ϵ)≤lim_δ→ 0ℙ(γ d δ>ϵ/3)=0,and thus,lim_δ→ 0lim sup_N→∞ℙ(ω(Y_k^N,δ,T)≥ϵ)≤lim_δ→ 0lim sup_N→∞ℙ(ω_2(Y_k^N,δ,T)≥ϵ)=0.We conclude that (𝐘^N(t)) is a tight family with almost surely continuous weak-accumulation points,(𝐘^N_n(t))n→∞⇒(𝐘(t))withℙ((𝐘(t))∈ C_[0,T])=1. Let 𝐘^N(0)⇒𝐘(0). Any weak accumulation process (𝐘(t)) of (𝐘^N(t)) obeys the integral equationY_k(ω,t)=Y_k(ω,0)+d ∑_mℓ∈𝒳^2∫_0^tγ_mℓ c_mℓ(k) Y_m(ω,s)Y_ℓ(ω,s)ds,for k∈𝒳 and almost all ω∈Ω.Define the functionalℱ_k : D^K× K_[0,T]× D^K_[0,T]⟶ℝwithℱ_k((𝐫(t),𝐲(t))) := y_k(t)-y_k(0)+d ∑_mℓ∈𝒳^2∫_0^tγ_mℓ c_mℓ(k) y_m(s) y_ℓ(s)ds+∑_mℓ∈𝒳^2∫_0^tγ_mℓ c_mℓ(k) r_mℓ(s)ds.where D^K_[0,T] stands for the space of càdlàg sample paths from the interval [0,T] to the cube [0,1]^K endowed with the Skorokhod metric (it is a Polish space, refer to <cit.>). The functional ℱ_k is measurable: indeed, the sum `+' operator is measurable (with respect to the product topology D_[0,T]× D_[0,T]); the integral operator `(∫_0^t (·)ds)' is measurable; and composition of measurable operators is measurable (for these observations, refer to <cit.>).Let (𝐘^N_n(t))⇒(𝐘(t)) and remark from Theorem <ref> that (𝐑^N_n(t))⇒0. 
We now prove thatℱ_k(𝐑^N_n(t),𝐘^N_n(t))⇒ℱ_k(0,𝐘(t)).From the Skorokhod's Representation Theorem <cit.>,[∃(𝐘^n(t)), (𝐑^n(t)), (𝐘(t)) : (𝐘^n(t))d= (𝐘^N_n(t)), (𝐑^n(t))d= (𝐑^N_n(t));(𝐘(t))d=(𝐘(t));(𝐘^n(ω,t))U[0,T]⟶(𝐘(ω,t)), (𝐑^n(ω,t))U[0,T]⟶0. ]for almost all ω∈Ω, where U[0,T] stands for uniform convergence in the interval [0,T]. Since,(𝐘^n(ω,t))⟶(𝐘(ω,t))a.s. uniformly over the compact interval [0,T], we can interchange the limit with the integral via the Dominated Convergence Theorem (e.g., <cit.>),∫_0^tY_m^n(ω,s)Y_ℓ^n(ω,s)ds⟶ ∫_0^t Y_m(ω,s) Y_ℓ(ω,s) ds ∫_0^tR_mℓ^n(ω,s) ds⟶0.Therefore,ℱ_k(𝐑^N_n(t),𝐘^N_n(t))d= ℱ_k(𝐑^n(t),𝐘^n(t))⟶ℱ_k(0,𝐘(t))d= ℱ_k(0,𝐘(t))where the first and last equality are due to the measurability of ℱ_k; and the convergence `⟶' is in a realization-wise sense with respect to the uniform topology on the space of sample paths. In particular, this implies convergence in probability, and thus, convergence (<ref>) holds in a weak sense (refer to Corollary 1.6 from <cit.>), i.e.,ℱ_k(𝐑^N_n(t),𝐘^N_n(t))⇒ℱ_k(0,𝐘(t)).Remark that (𝐘^N(t)) obeys the following stochastic dynamicsℱ_k(𝐑^N(ω,t),𝐘^N(ω,t))=M_k^N(ω,t),and since(𝐌^N(t))⇒ 0we haveℱ_k(0,𝐘(ω,t))d= 0,or in other words,Y_k(ω,t)=Y_k(ω,0)+d ∑_mℓ∈𝒳^2∫_0^tγ_mℓ c_mℓ(k) Y_m(ω,s)Y_ℓ(ω,s)ds,for almost all ω∈Ω. Now, from the uniqueness of the integral equation (the vector field is Lipschitz) we conclude uniqueness of the accumulation point and the following result follows. Let 𝐘^N(0)⇒𝐲(0). We have(𝐘^N(t))⇒(𝐲(t))=(y_1(t),…,y_k(t))where (𝐲(t)) is the solution to the ODEẏ_k(t) = d 𝐲(t)^⊤(Γ⊙ C(k)) 𝐲(t)k={1,2,…,K}with initial condition 𝐲(0), Γ=[γ_mℓ]_mℓ, C(k)=[c_mℓ(k)]_mℓ and ⊙ is the pointwise Hadamard product.Since the vector field is Lipschitz, the continuous (and thus, differentiable) solution (𝐘(t)) of (<ref>) is unique. Thus, any weak limit of (𝐘^N(t)) with initial condition given by 𝐘^N(0) and converging in distribution to 𝐘(0) is equal to the unique solution (𝐘(t)) of (<ref>) with initial condition (𝐘(0)). 
Therefore, by Prokhorov's Theorem <cit.>, the whole sequence converges(𝐘^N(t))⇒(𝐘(t))to the solution of (<ref>). Equation (<ref>) is the integral version of the ODE (<ref>). Fig. <ref> depicts a numerical simulation illustrating the concentration result proved for the case of a binary state {0,1} contact process, where A^N was assumed to be a cycle network (i.e., d^N=2) for all N=100,1000,4000. We observe that as the number of nodes increases, the stochastic dynamics (captured by the blue noisy curves) concentrates about the solution to the limiting ODE (captured by the red smooth curves). § CONCLUDING REMARKS Deriving the exact macroscopic dynamical laws from the microscopic laws of interacting particle systems is challenging outside the scope of uncoupled systems (a.k.a. ideal gases), full networks of contacts, or networks of communities. Within such frameworks, low-resolution macroscopic state variables, such as the fraction of nodes at a particular state, realize the system.In this paper, we proved that under a time-varying random rewiring dynamics of the sparse network of contacts (described in Subsection <ref>), the non-Markov macro-state variables associated with the fraction of nodes at each state k realize the system asymptotically in N. That is, one can obtain the exact fluid limit macroscopic dynamics associated with general FMIE interacting particle systems. To establish such a result, one primarily has to prove the tightness and finite-dimensional distribution convergence of built-in rate processes (e.g., the gap process converges to zero on the line) of the macroscopic process (e.g., the fraction of nodes at a particular state). 
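A simulation of the kind reported in Fig. <ref> can be sketched as follows (our illustrative code, not the authors'): a pure-infection binary contact process on a cycle with a uniform shuffle after every event, compared against its logistic fluid limit ẏ = dγ y(1−y) with d = 2 and γ = 1.

```python
import numpy as np

def shuffled_contact_process(N, gamma=1.0, T=3.0, seed=0):
    """Pure-infection contact process on a cycle (d = 2) with a uniform shuffle
    of the agents after every clock tick; returns (times, infected fraction)."""
    rng = np.random.default_rng(seed)
    state = np.zeros(N, dtype=np.int64)
    state[: N // 10] = 1                         # 10% initially infected
    t, ts, ys = 0.0, [0.0], [state.mean()]
    total_rate = gamma * 2 * N                   # d = 2 clocks per node, rate gamma each
    while True:
        t += rng.exponential(1.0 / total_rate)   # next clock tick
        if t > T:
            break
        i = int(rng.integers(N))                 # owner of the ticking clock
        j = (i + rng.choice((-1, 1))) % N        # the neighbour the clock points to
        if state[i] == 1 and state[j] == 0:      # infection update rule G
            state[j] = 1
        rng.shuffle(state)                       # random rewiring == node shuffle
        ts.append(t)
        ys.append(state.mean())
    return np.array(ts), np.array(ys)

ts, ys = shuffled_contact_process(4000)
# logistic fluid limit dy/dt = 2 y (1 - y) started at y(0) = 0.1, in closed form
y_ode = 0.1 * np.exp(2 * ts) / (0.9 + 0.1 * np.exp(2 * ts))
max_gap = np.abs(ys - y_ode).max()
```

Plotting `ys` against `y_ode` for increasing N reproduces the qualitative picture of the figure: the noisy sample path hugs the smooth ODE solution ever more tightly.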
The main difficulty in establishing such a result for interacting particle systems over networks – or general systems whose rules are set at the microscopics and respect the peer-to-peer disposition of nodes – is that the pre-limit macroscopic processes are non-Markov (unless the underlying network of contacts is complete or it is a network of communities), and one of two steps is often hard: i) tightness of the rates; ii) convergence on the line of the rates. By introducing an intermediate process (𝐗^N(t)), appropriately coupled with the original process (𝐗^N(t)), we were able to address both steps mentioned above. A natural future direction is to characterize more general classes of dynamical networks for which such exact concentration results are attainable. The next theorem provides an important concentration inequality <cit.>.Let (Z_i) be a sequence of zero-mean independent random variables bounded by some constant c>0, i.e., |Z_i|≤ c a.s. for all i. Letσ^2(N)= 1/N∑_i=1^N Var(Z_i)be the sample mean variance. Then, for any ϵ>0,ℙ(1/N∑_i=1^N Z_i≥ϵ)≤ e^-Nϵ^2/(2σ(N)^2+2 c ϵ/3) We restate the previous theorem as a more useful corollary.Under the same assumptions as in Theorem <ref>, we haveℙ(|1/N∑_i=1^N Z_i |≥ϵ)≤ 2 e^-Nϵ^2/(2σ(N)^2+2 c ϵ/3) Note thatℙ(-1/N∑_i=1^N Z_i≤ -ϵ)≤ e^-Nϵ^2/(2σ(N)^2+2 c ϵ/3)and by symmetry in the assumptions of the theorem – namely, if (Z_i)_i fulfills the conditions, then (-Z_i)_i fulfills them as well – we haveℙ(1/N∑_i=1^N Z_i≤ -ϵ)≤ e^-Nϵ^2/(2σ(N)^2+2 c ϵ/3)and therefore,ℙ(|1/N∑_i=1^N Z_i| ≥ϵ) =ℙ({1/N∑_i=1^N Z_i ≥ϵ}∪{1/N∑_i=1^N Z_i≤ -ϵ})≤ ℙ({1/N∑_i=1^N Z_i ≥ϵ}) + ℙ( {1/N∑_i=1^N Z_i≤ -ϵ})≤2 e^-Nϵ^2/(2σ(N)^2+2 c ϵ/3) If in addition to the assumptions in Theorem <ref>, we have bounded variance, i.e.,Var(Z_i)≤ v,∀i∈ℕfor some v>0, thenℙ(|1/N∑_i=1^N Z_i |≥ϵ)≤ 2 e^-kNwith k=ϵ^2/(2v+2 c ϵ/3). Let (Z^N(t)) be a sequence of càdlàg processes. 
Then, the sequence of probability measures ℙ_Z^N induced on D_[0,T] by (Z^N(t)) is tight and any weak limit point of this sequence is concentrated on the subset of continuous functions C_[0,T]⊂ D_[0,T] if and only if the following two conditions hold for each ϵ>0:lim_k→∞lim sup_N→∞ℙ(sup_0≤ t≤ T|Z^N(t)|≥ k)=0 lim_δ→ 0lim sup_N→∞ℙ(ω(Z^N,δ,T)≥ϵ) =0where we defined the modulus of continuityω(x, δ, T)=sup{|x(u)-x(v)| : 0≤ u,v≤ T, |u-v|≤δ}. Let (𝐘(t)) be an (ℱ_t)-adapted càdlàg process with discrete range and piecewise constant (i.e., constant when it does not jump). Let 𝒩_λ(t) and 𝒩_μ(t) be two independent (ℱ_t)-adapted Poisson processes (hence their compensated versions are (ℱ_t)-martingales, as it is trivial to establish). Assume the rates λ,μ are nonnegative. Let f, g be two bounded functions defined over the discrete range of 𝐘(t). Then,(∫_0^t f(𝐘(s-))(𝒩_λ(ds)-λ ds) ∫_0^t g(𝐘(s-))(𝒩_μ(ds)-μ ds))is an (ℱ_t)-martingale.
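As a quick numerical sanity check of the Bernstein corollary (a sketch with centered Bernoulli variables, so c = 1 and v = p(1−p); the parameter values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, eps, trials = 500, 0.3, 0.12, 20000
Z = rng.binomial(1, p, size=(trials, N)) - p      # centered, |Z_i| <= c = 1
emp = np.mean(np.abs(Z.mean(axis=1)) >= eps)      # empirical deviation frequency
v = p * (1 - p)                                   # Var(Z_i) <= v
bound = 2 * np.exp(-N * eps**2 / (2 * v + 2 * 1 * eps / 3))
```

The empirical frequency of an ϵ-deviation of the sample mean stays below the exponential bound, consistent with the corollary.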
Augusto Almeida Santos, Soummya Kar, José M. F. Moura, and João Xavier, "Thermodynamic Limit of Interacting Particle Systems over Time-varying Sparse Random Networks," arXiv:1702.08447 [math.PR], 2017.
Alex.Hamilton@unsw.edu.au ^1School of Physics, University of New South Wales, Sydney NSW 2052, Australia ^2NTT Basic Research Laboratories, NTT corporation, Atsugi-shi, Kanagawa 243-0198, Japan ^3Graduate School of Science, Tohoku University, Sendai-shi, Miyagi 980-8578 Japan Zeeman splitting of 1D hole subbands is investigated in quantum point contacts (QPCs) fabricated on a (311) oriented GaAs-AlGaAs heterostructure.Transport measurements can determine the magnitude of the g-factor, but cannot usually determine the sign.Here we use a combination of tilted fields and a unique off-diagonal element in the hole g-tensor to directly detect the sign of g^*.We are able to tune not only the magnitude, but also the sign of the g-factor by electrical means, which is of interest for spintronics applications.Furthermore, we show theoretically that the resulting behaviour of g^* can be explained by the momentum dependence of the spin-orbit interaction. Electrical control of the sign of the g-factor in a GaAs hole quantum point contact A. R. 
Hamilton^1 December 30, 2023 ===================================================================================Electrical manipulation of spin is the underlying principle of many proposed spintronic and quantum computing device architectures <cit.>.In particular, electrical control of the effective Landé g-factor in semiconductor nanostructures has been a major focus of recent research, with theoretical investigations predicting strong g^* tunability in both magnitude and sign <cit.>.The ability to invert the sign of the g-factor and tune the system through a state of zero spin polarisation (g^* = 0) could be a valuable asset in engineering solid-state spin devices <cit.>.In this regard, quantum confined hole systems in GaAs are prime candidates due to the strong coupling between spin and orbital motion in the valence band <cit.>.The spin 3/2 nature of valence band holes in GaAs leads to several unique properties such as a tensor structure of g^* with large anisotropy between all three spatial directions <cit.>, and tunability of the g-factor across orders of magnitude <cit.>.Previous studies of the g-factor of quantum confined holes revealed a non-monotonic dependence of |g^*| on the gate bias, suggestive of a change in sign of g^* <cit.>.However, these studies could not directly detect the sign of g^*, only its magnitude.In this work, we utilise a novel approach to directly detect the sign of g^* by exploiting a unique property of the (311) GaAs hole g-tensor, and demonstrate a gate-controlled sign change of g^* in a hole quantum point contact (QPC) on (311) GaAs.We also introduce a theoretical model showing that the observed sign reversal of g^* arises from the in-plane momentum dependence of the spin-orbit interaction in the valence band.Typically it is not possible to experimentally probe the directional k-dependence of the 2D hole g-tensor, since transport measurements represent an average over all k-states at the Fermi surface.However, by using an 
electrostatically controlled QPC fabricated along particular in-plane directions of a 2D hole system, we can perform a direct spectroscopic measurement of g^*, and investigate its dependence on the magnitude and direction of the in-plane momentum <cit.>. The device used in this work was fabricated from a (311)A-oriented heterostructure, in which a 2D hole system is induced at an AlGaAs/GaAs interface by applying a negative voltage (-0.7 V) to a heavily p-doped cap layer <cit.>. The peak 2D hole mobility was μ = 6.0 × 10^5 cm^2 V^-1 s^-1 at a density p = 1.3 × 10^11 cm^-2 and temperature T = 40 mK. The 2D holes are further confined, using a split-gate geometry, to two short one-dimensional (1D) channels or quantum point contacts (QPCs) - see Fig. 1a. The two orthogonal 400 nm long 1D channels, oriented along the [233] and [011] crystal directions (which we label QPC[233] and QPC[011] respectively), were defined by electron-beam lithography and shallow wet etching of the cap layer. Measurements were carried out in a dilution refrigerator, with a base temperature below 40 mK, using standard ac lock-in techniques with a 100 μV excitation at 31 Hz. A three-axis vector magnet was used to independently control all three components of the magnetic field, eliminating the need to thermally cycle the device. The fields were applied along [233] and [311] as shown by the schematic in Fig. 1b. Fig. 1c shows the conductance as QPC[233] is pinched off, revealing clean 1D conductance plateaus in units of 2e^2/h at B = 0, which evolve to spin-resolved half plateaus when a magnetic field is applied along the in-plane [233] direction. The g-factor was extracted by measuring the Zeeman splitting in gate voltage Δ V_SG(B), which is then converted to a Zeeman energy splitting Δ E_Z(B) using the well known source-drain bias spectroscopy technique <cit.> (see Supplemental Material <cit.> section 1). Figs.
2a and 2b show the Zeeman splitting of the 1D subbands in the two orthogonal QPCs with a magnetic field B_[233] applied. The greyscale plots show the transconductance ∂ G/∂ V_SG, with the dark regions corresponding to the risers between plateaus in Fig. 1c, hence marking the 1D subband edges. For both QPCs there is a clear linear Zeeman splitting of the 1D states, from which we extract the g-factor. The measured g^*_[233] for QPC[011] is plotted in Fig. 2c along with earlier data from Ref. <cit.> taken at a higher 2D hole density. In both cases, g^*_[233] shows a monotonic decrease with increasing subband index n. The equivalent g-factor for QPC[233] is shown in Fig. 2d, and we again show earlier data taken at a higher density <cit.>. In contrast to QPC[011], QPC[233] shows a non-monotonic evolution of g^*_[233] as a function of subband index, with a clear minimum at n = 5. This marked difference in the g-factor for orthogonal current directions is due to a combination of the crystallographic anisotropy of the (311) surface and the in-plane momentum dependence of g^*, as shown later. We now use a novel approach to prove that the trend observed in Fig.
2d is due to a sign change of the in-plane g-factor g^*_[233] as the 1D channel is tuned from the 2D to the 1D limit. Although the observed non-monotonic trend of g^*_[233] is suggestive of a sign reversal, these measurements alone cannot determine the sign of g^*. In the following section, we show that the sign of g^* can be explicitly extracted by simultaneously applying orthogonal magnetic fields to exploit an unusual property of the (311) hole g-tensor: Uniquely to (311) oriented GaAs 2D systems, theory <cit.> and experiment <cit.> have shown that when a field is applied along the in-plane [233] direction, in addition to an in-plane polarisation with g-factor g_xx, there exists an anomalous out-of-plane polarisation due to an off-diagonal term g_xz in the g-tensor. The Hamiltonian describing the Zeeman term for 2D heavy holes in (311) GaAs is then: H = (μ_B/2)(g_xx B_x σ_x + g_xz B_x σ_z + g_zx B_z σ_x + g_yy B_y σ_y + g_zz B_z σ_z), where x, y and z refer to the [233], [011] and [311] directions respectively, with theoretical 2D values g_xx = g_yy = -0.16, g_xz = 0.65, g_zz = 7.2 <cit.> and g_zx ≃ 0 <cit.>. With the magnetic field applied along [011], the Zeeman splitting is Δ E_Z = g^*_[011]μ_B B_[011], where g^*_[011] is simply the isotropic component of the g-tensor, g_yy. However, when the field is applied along [233], the Zeeman splitting is Δ E_Z = g^*_[233]μ_B B_[233], where |g^*_[233]| = √(g_xx^2 + g_xz^2). If magnetic fields are applied along both the in-plane [233] and out-of-plane [311] directions, the total Zeeman splitting measured in experiment is: Δ E_Z^2 = (g_xx μ_B B_[233])^2 + (g_xz μ_B B_[233] + g_zz μ_B B_[311])^2. The resulting Zeeman splitting is unusual in that it is sensitive to the relative signs of the g_xz and g_zz terms: if both g_xz B_[233] and g_zz B_[311] have the same sign, the total Zeeman splitting is large.
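As a numerical illustration of eqn. 2, the sketch below evaluates the combined-field Zeeman splitting using the theoretical 2D g-tensor values quoted above; these numbers and the field values are placeholders for illustration, not the measured 1D values.

```python
import math

# Illustration of eqn. 2: total Zeeman splitting of (311) 2D heavy holes in
# combined in-plane (B_x along [233]) and out-of-plane (B_z along [311]) fields.
# The g-tensor values are the theoretical 2D numbers quoted in the text;
# the measured 1D values differ, so treat these as placeholders.
MU_B = 5.788e-5  # Bohr magneton in eV/T

def zeeman_splitting(B_x, B_z, g_xx=-0.16, g_xz=0.65, g_zz=7.2):
    """Total Zeeman splitting (eV) from eqn. 2."""
    in_plane = g_xx * MU_B * B_x
    out_of_plane = g_xz * MU_B * B_x + g_zz * MU_B * B_z
    return math.sqrt(in_plane**2 + out_of_plane**2)

# With a fixed B_z = 0.2 T the splitting is asymmetric in the sign of B_x:
# g_xz*B_x and g_zz*B_z add for one sign and partially cancel for the other.
dE_plus = zeeman_splitting(+1.0, 0.2)
dE_minus = zeeman_splitting(-1.0, 0.2)

# Flipping the sign of g_xz reverses which field direction gives the larger
# splitting -- the experimental signature used to detect the sign change.
dE_plus_flipped = zeeman_splitting(+1.0, 0.2, g_xz=-0.65)
```

For B_[311] = 0 the splitting is symmetric in B_[233], which is the control check shown in Figs. 3c and 3d.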
However, if one of the two terms is negative, the total Zeeman splitting is suppressed. Therefore, applying both B_[233] and B_[311] simultaneously allows the relative signs of g_xz and g_zz to be extracted. To check if there is a sign change of g^*_[233] as suggested by Fig. 2d, we again measure the Zeeman splitting of 1D subbands as a function of B_[233], but now apply an additional fixed magnetic field along the out-of-plane [311] direction. The magnitude of the total Zeeman splitting depends on the relative signs of the g_xz B_[233] and g_zz B_[311] terms in eqn. 2, resulting in an asymmetry in the Zeeman splitting around B_[233] = 0. Crucially, if the sign of g_xz changes with respect to g_zz, the asymmetry in the Zeeman splitting as a function of B_[233] should reverse, providing direct proof of a sign reversal <cit.>. Turning to the experimental results, Fig. 3 shows the Zeeman splitting of both QPC[011] and QPC[233] in combined magnetic fields applied in and out of the plane. When a fixed out-of-plane field B_[311] = 0.2 T is introduced (Figs. 3a and 3b), the data becomes asymmetric around B_[233] = 0. We note that for 1D holes on the high symmetry (100) plane, the data is always symmetric, even in combined magnetic fields, due to the absence of the off-diagonal g_xz term (see Supplemental Material <cit.> section 2). Starting with QPC[011] (Fig. 3a), the lower subbands do not appear to show any asymmetry in the combined fields, suggesting that the cancellation/addition of g_zz and g_xz is minimal (this is because g_zz is small for low subbands - see Supplemental Material <cit.> section 3). However, for subbands 5 and 6, the asymmetry around B_[233] = 0 becomes increasingly apparent as g_zz becomes large. Subband 6 clearly shows a strong Zeeman splitting for B_[233] > 0, and a relatively weak splitting for B_[233] < 0. This confirms the predicted effect due to the competition between the g_zz and g_xz terms in eqn. 2. In the case of QPC[233] (Fig.
3b), the asymmetry of the Zeeman splitting around B_[233] = 0 again increases with subband index. However, the most significant aspect of the data is that the asymmetry is reversed for subband 6, which can only occur if g_xz has changed sign between n = 5 and n = 6 <cit.>. This is consistent with the data in Fig. 2d, where there is a clear minimum around n = 5. In order to confirm that the asymmetry in the Zeeman splitting is caused by the combination of magnetic fields, we also show the Zeeman splitting as a function of B_[233] with B_[311] = 0 (Figs. 3c and 3d). In this case, the g_zz B_[311] term in eqn. 2 becomes zero, so the Zeeman splitting is simply Δ E_Z^2 = (g_xx^2 + g_xz^2) B_[233]^2 = g^*2_[233] B_[233]^2, resulting in a symmetric evolution of the subbands on either side of B_[233] = 0. The symmetry is clearly evident for both QPCs in Figs. 3c and 3d. We now turn to the question of what is causing the sign change of g_xz for QPC[233], and show theoretically that the data can be well explained by the dependence of the 2D g-factor on the in-plane momentum. The 1D subband index effectively corresponds to quantised values of the in-plane momentum ⟨ p_∥^2 ⟩: in the 1D region, ⟨ p_∥^2 ⟩ is determined by the difference between the Fermi energy E_F in the 2D reservoirs and the top of the saddle point potential created by the QPC gates <cit.>. In the 1D limit at n = 1, the saddle point is high in energy and ⟨ p_∥^2 ⟩ is small.
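This momentum tuning can be sketched with the g_xz expansion derived below (eqn. 3). The coefficients C_1, C_2 and C_3 are not quoted in the text, so the values used here are hypothetical, chosen only so that the QPC[233] trace crosses zero near ⟨ p_∥^2 ⟩/⟨ p_z^2 ⟩ ≈ 0.3 while QPC[011] stays positive, mimicking Fig. 4a.

```python
# Hypothetical coefficients for eqn. 3 (C1, C2, C3 depend on band parameters
# and confinement and are not given in the text; these are illustrative only).
C1, C2, C3 = 0.2, 1.4, 0.1

def g_xz(px2_ratio, py2_ratio):
    """eqn. 3: g_xz as a function of <p_x^2>/<p_z^2> and <p_y^2>/<p_z^2>."""
    return 0.39 - C1 * px2_ratio - C2 * py2_ratio - C3 * (px2_ratio - py2_ratio)

def g_xz_qpc(orientation, r):
    # At a 1D subband edge the momentum along the current direction vanishes:
    # QPC[233] (current along x) has <p_x^2> = 0 and quantised <p_y^2>,
    # and vice versa for QPC[011].
    if orientation == "[233]":
        return g_xz(0.0, r)
    return g_xz(r, 0.0)

# QPC[233] changes sign as the channel opens towards the 2D limit,
# while QPC[011] only decreases slightly and stays positive.
sign_change = g_xz_qpc("[233]", 0.0) > 0 > g_xz_qpc("[233]", 0.4)
```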
As the subband index increases, the saddle point decreases in energy, so ⟨ p_∥^2 ⟩ grows larger and eventually saturates at ⟨ p_∥^2 ⟩ = p_F^2. Hence, by tuning the 1D subband index, we are effectively probing the effects of finite momentum on g^*. We now analyse how g_xz should depend on the in-plane momentum and directly relate this to the measurements of g_xz vs n for both QPCs. We begin with the Luttinger Hamiltonian and take into account both the axial and cubic terms corresponding to the crystallographic anisotropy of the (311) surface. The 2D (z) confinement at the GaAs-AlGaAs interface is taken as a triangular potential, and is assumed to be far greater than the in-plane (x,y) confinement due to the QPC, meaning we treat the hole system as quasi-2D in the (x,y)-plane with strong quantisation in the z-direction. The in-plane momentum is then taken into account using perturbation theory with the parameter ⟨ p_∥^2 ⟩ / ⟨ p_z^2 ⟩, where ⟨ p_∥^2 ⟩ = (⟨ p_x^2 ⟩, ⟨ p_y^2 ⟩). We consider a magnetic field applied in the [233] (x) direction, and derive an expression for g_xz as a function of ⟨ p_x^2 ⟩ and ⟨ p_y^2 ⟩ (see Supplemental Material section 5 for the full derivation <cit.>): g_xz = 0.39 - C_1⟨ p_x^2 ⟩/⟨ p_z^2 ⟩ - C_2⟨ p_y^2 ⟩/⟨ p_z^2 ⟩ - C_3(⟨ p_x^2 ⟩ - ⟨ p_y^2 ⟩)/⟨ p_z^2 ⟩. The constants C_1, C_2 and C_3 depend on band structure parameters and the 2D confinement potential. We have also included the Dresselhaus interaction, which suppresses the g-factor by ≃ 40%. We note that the Rashba interaction makes a negligible contribution to g^* <cit.>. The QPC confinement is taken into account as follows: for QPC[233], the current is along the x direction, so ⟨ p_x^2 ⟩ = 0 since the spin splitting is measured at the subband edge, and ⟨ p_y^2 ⟩ takes quantised values corresponding to the 1D subbands. Conversely, for the orthogonal QPC[011], ⟨ p_y^2 ⟩ = 0 and ⟨ p_x^2 ⟩ takes quantised values. In Fig. 4a, the theoretically calculated g_xz is plotted as a function of ⟨ p_∥^2 ⟩ / ⟨ p_z^2
⟩. The blue trace shows QPC[011] with ⟨ p_∥^2 ⟩ = ⟨ p_x^2 ⟩, and the red trace shows QPC[233] with ⟨ p_∥^2 ⟩ = ⟨ p_y^2 ⟩. Due to the differing dependence of g_xz on ⟨ p_x^2 ⟩ and ⟨ p_y^2 ⟩ in eqn. 3 (originating from the crystallographic anisotropy of the (311) surface), the two orthogonal QPCs show strikingly different behaviour. g_xz for QPC[011] is positive and decreases slightly with increasing ⟨ p_∥^2 ⟩ / ⟨ p_z^2 ⟩ (and subband index), whereas g_xz for QPC[233] starts at a positive value but changes sign at larger ⟨ p_∥^2 ⟩ / ⟨ p_z^2 ⟩. The experimentally measured g_xz for both QPCs, obtained from g^*_[233] in Figs. 2c and 2d (g_xz = √(g^*2_[233] - g_xx^2) = √(g^*2_[233] - g^*2_[011]) - see section 4 of the Supplemental Material <cit.>), is plotted in Fig. 4b. The data shows good agreement with the theory, with g_xz for QPC[011] decreasing slightly as the in-plane momentum increases. Meanwhile, g_xz for QPC[233] decreases strongly and changes sign around n = 5. In the limit of the largest measurable subband - subband 7 - we use the known 2D density and confinement potential to numerically estimate the quantity ⟨ p_∥^2 ⟩ / ⟨ p_z^2 ⟩, giving ⟨ p_∥^2 ⟩ / ⟨ p_z^2 ⟩ ≃ 0.2. The sign change (at n = 5) should therefore occur at ⟨ p_∥^2 ⟩ / ⟨ p_z^2 ⟩ ≲ 0.2, which is reasonably close to the theoretically predicted value of ⟨ p_∥^2 ⟩ / ⟨ p_z^2 ⟩ = 0.3. This small discrepancy may be due to the fact that the theory does not take into account the effects of 1D quantisation, which may alter the confinement parameters used to derive eqn.
3. Nevertheless, the behaviour we observe for g_xz in both QPCs is qualitatively consistent with that predicted by theory. Finally, we note that although the form of g_xz obtained from the theory agrees well with experiment, a quantitative comparison shows that the range of g_xz measured experimentally (-0.65 < g_xz < 1.5) is larger than that predicted by theory (-0.3 < g_xz < 0.4). This enhancement of the g-factor in experiment may be attributed to many-body interactions (not included in the theoretical calculation), previously observed in both 1D electron and hole systems <cit.>. In conclusion, Zeeman splitting measurements of 1D subbands were carried out for two orthogonal hole QPCs on (311)A GaAs. Due to the low symmetry of the (311) surface, the total Zeeman splitting in combined fields becomes sensitive to the sign of different components of the g-tensor. In this way, we are able to prove that g_xz changes sign when the 1D channel is oriented along [233], consistent with a theoretical model of g^* versus in-plane momentum. Our experimental results shed light on the complex spin physics of holes, and demonstrate gate-controlled tuning of not only the magnitude but also the sign of the g-factor, which is desirable for spintronics applications. The authors acknowledge the late J. Cochrane for technical support, and thank T. Li and U. Zülicke for enlightening discussions. YH acknowledges support by KAKENHI Grant No. 26287059. This work was supported by the Australian Research Council under the DP scheme, and was performed in part using facilities of the NSW Node of the Australian National Fabrication Facility.
DattaApl90 S. Datta and B. Das, Appl. Phys. Lett. 56, 665 (1990).
LossPRA98 D. Loss and D. P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
WolfSci01 S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton, S. von Molnar, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger, Science 294, 1488 (2001).
Awsbook02 D. D. Awschalom, N. Samarth, and D. Loss, Eds., Semiconductor Spintronics and Quantum Computation (Springer-Verlag, Berlin, Germany, 2002).
PradoPRB04 S. J. Prado, C. Trallero-Giner, A. M. Alcalde, V. Lopez-Richard, and G. E. Marques, Phys. Rev. B 69, 201310(R) (2004).
KuglerPRB09 M. Kugler, T. Andlauer, T. Korn, A. Wagner, S. Fehringer, R. Schulz, M. Kubová, C. Gerl, D. Schuh, W. Wegscheider, P. Vogl, and C. Schuller, Phys. Rev. B 80, 035325 (2009).
AndlauerPRB09 T. Andlauer and P. Vogl, Phys. Rev. B 79, 045307 (2009).
SalisNat01 G. Salis, Y. Kato, K. Ensslin, D. C. Driscoll, A. C. Gossard, and D. D. Awschalom, Nature 414, 619-622 (2001).
KatoSci03 Y. Kato, R. C. Myers, D. C. Driscoll, A. C. Gossard, J. Levy, and D. D. Awschalom, Science 299, 1201 (2003).
BennettNcom13 A. J. Bennett, M. A. Pooley, Y. ao, N. Sköld, I. Farrer, D. A. Ritchie, and A. J. Shields, Nat. Comm. 4, 1522 (2013).
WinklerBook03 R. Winkler, Spin-Orbit Coupling Effects in Two-Dimensional Electron and Hole Systems (Springer Tracts in Modern Physics, Vol. 191, Springer, Berlin, 2003).
KesterenPRB90 H. W. van Kesteren, E. C. Cosman, W. A. J. A. van der Poel, and C. T. Foxon, Phys. Rev. B 41, 5283-5292 (1990).
WinklerPRL00 R. Winkler, S. J. Papadakis, E. P. De Poortere, and M. Shayegan, Phys. Rev. Lett. 85, 4574 (2000).
DanneauPRL06 R. Danneau, O. Klochan, W. R. Clarke, L. H. Ho, A. P. Micolich, M. Y. Simmons, A. R. Hamilton, M. Pepper, D. A. Ritchie, and U. Zülicke, Phys. Rev. Lett. 97, 026403 (2006).
SrinivasanNL13 A. Srinivasan, L. A. Yeoh, O. Klochan, T. P. Martin, J. C. H. Chen, A. P. Micolich, A. R. Hamilton, D. Reuter, and A. D. Wieck, Nano Lett. 13, 148-152 (2013).
NichelePRL14 F. Nichele, S. Chesi, S. Hennel, A. Wittmann, C. Gerl, W. Wegscheider, D. Loss, T. Ihn, and K. Ensslin, Phys. Rev. Lett. 113, 046801 (2014).
KlochanNJP09 O. Klochan, A. P. Micolich, L. H. Ho, A. R. Hamilton, K. Muraki, and Y. Hirayama, New J. Phys. 11, 043018 (2009).
ChenNJP10 J. C. H. Chen, O. Klochan, A. P. Micolich, A. R. Hamilton, T. P. Martin, L. H. Ho, U. Zülicke, D. Reuter, and A. D. Wieck, New J. Phys. 12, 033043 (2010).
ClarkeJAP06 W. R. Clarke, A. P. Micolich, A. R. Hamilton, M. Y. Simmons, K. Muraki, and Y. Hirayama, J. Appl. Phys. 99, 023707 (2006).
PatelPRB91 N. K. Patel, J. T. Nicholls, L. Martin-Moreno, M. Pepper, J. E. F. Frost, D. A. Ritchie, and G. A. C. Jones, Phys. Rev. B 44, 13549 (1991).
supp1 See Supplemental Material for details.
WinklerSST08 R. Winkler, D. Culcer, S. J. Papadakis, B. Habib, and M. Shayegan, Semicond. Sci. Technol. 23, 114017 (2008).
YeohPRL14 L. A. Yeoh, A. Srinivasan, O. Klochan, R. Winkler, U. Zülicke, M. Y. Simmons, D. A. Ritchie, M. Pepper, and A. R. Hamilton, Phys. Rev. Lett. 113, 236401 (2014).
signofgxx We note that while the Zeeman splitting in combined fields is sensitive to a relative sign change between g_xz and g_zz, it cannot discern a sign change in the isotropic component of the g-tensor g_xx, as is evident from eqn. 2.
gzzsign It is possible that a sign reversal of g_zz instead of g_xz could also result in the opposite asymmetry of subband 6 seen in Fig. 3d, but this is inconsistent with the monotonically increasing trend of g_zz vs. n (see Supplemental Material section 3).
Buttiker M. Buttiker, Phys. Rev. B 41, 7906 (1990).
ThomasPRL96 K. J. Thomas, J. T. Nicholls, M. Y. Simmons, M. Pepper, D. R. Mace, and D. A. Ritchie, Phys. Rev. Lett. 77, 135 (1996).
DaneshvarPRB97 A. J. Daneshvar, C. J. B. Ford, A. R. Hamilton, M. Y. Simmons, M. Pepper, and D. A. Ritchie, Phys. Rev. B 55, R13409(R) (1997).
http://arxiv.org/abs/1702.08135v1
{ "authors": [ "A. Srinivasan", "K. L. Hudson", "D. S. Miserev", "L. A. Yeoh", "O. Klochan", "K. Muraki", "Y. Hirayama", "O. P. Sushkov", "A. R. Hamilton" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170227040056", "title": "Electrical control of the sign of the g-factor in a GaAs hole quantum point contact" }
aiurov@unm.edu Center for High Technology Materials, University of New Mexico, 1313 Goddard SE, Albuquerque, NM, 87106, USA Department of Physics and Astronomy, Hunter College of the City University of New York, 695 Park Avenue, New York, NY 10065, USA Department of Physics and Astronomy, Hunter College of the City University of New York, 695 Park Avenue, New York, NY 10065, USA Donostia International Physics Center (DIPC), P de Manuel Lardizabal, 4, 20018 San Sebastian, Basque Country, Spain Air Force Research Laboratory, Space Vehicles Directorate, Kirtland Air Force Base, NM 87117, USA Center for High Technology Materials, University of New Mexico, 1313 Goddard SE, Albuquerque, NM, 87106, USA The dressed states arising from the interaction of electrons and holes with off-resonant electromagnetic radiation have been investigated for recently fabricated gapped and anisotropic black phosphorus. Our calculations were carried out for the low-energy electronic subbands near the Γ point. States for both linear and circular polarizations of the incoming radiation have been computed. However, our principal emphasis is on linearly polarized light with arbitrary polarization, since this case has not been given much attention for dressing fields imposed on anisotropic structures. We have considered various cases for one- and few-layer phosphorus, including massless Dirac fermions with tunable in-plane anisotropy.
Initial Hamiltonian parameters are renormalized in a largely different way compared to those previously reported for gapped Dirac structures and, most importantly, so is the existing anisotropy, which could be modified in every direction. 78.67.-n, 78.67.Wj, 81.05.Xj, 73.22.-f anisotropic Exploring the Optical States for Black Phosphorus: Anisotropy and Bandgap Tuning Danhong Huang December 30, 2023 ================================================================================= § INTRODUCTION Black phosphorus (BP) is a layered structure of buckled atomic phosphorus for which the layers are connected by weak van der Waals forces. <cit.> This is the most stable phosphorus-based crystal at room temperature and for a wide range of pressure values. Crystalline BP is a semiconductor, for which P atoms are covalently bonded with three adjacent atoms. Such a structure exhibits strong anisotropy for an arbitrary number of layers, as well as hybrid electron and hole states in the vicinity of the band edge in phosphorene, featuring both Dirac cone and conventional Schrödinger electron behavior. While bulk BP is a semiconductor with a small bandgap of 0.3 eV, its monolayer counterpart is a semiconductor with a large direct bandgap (∽ 2 eV); it is referred to as phosphorene, in analogy to graphene, and could be exfoliated in a mechanical way. It is not surprising that reliable information on the band structure of BP with a specified number of layers has been appreciated as being extremely important for device applications, including a field-effect transistor. <cit.> This has stimulated a large number of first-principles calculations based on density functional theory, as well as the tight-binding model<cit.>, group-theoretical calculations<cit.> and the continuum model<cit.> of BP structures, which recently received a significant amount of attention for their success in analyzing experimental data.
<cit.> The quality of all such models<cit.> depends on the accuracy of the exchange-correlation approximation. Over time, we are going to witness numerous attempts to engineer new types of anisotropic electronic band structure in BP-based devices using various mechanisms, such as the electron-photon interaction with a dressing field. The corresponding effect from an imposed electrostatic field was addressed in Ref. [elfield]. The possibility of an electronic topological transition and an electrically-tunable Dirac cone was theoretically predicted for multi-layer BP. <cit.> The fact that the bandgap of BP is mainly determined by the number of layers was confirmed in experiment,<cit.> i.e., the energy gap is strongly decreased for a larger number of layers and could be effectively neglected for N_L > 5. Yet demonstrating a Dirac cone with no effective mass or energy bandgap, such electrons still possess strong anisotropic properties for pristine black phosphorus.<cit.> Consequently, non-symmetric Klein tunneling could be observed<cit.> with shifted transmission peaks, similar to electron tunneling for graphene in the presence of a magnetic field. <cit.> Phosphorene is one of the most recently discovered members of a sizable group of low-dimensional structures with great potential for nanoelectronics applications. The most famous member of this family is graphene, fabricated in 2004. Because of its unique properties, <cit.> graphene has initiated a new direction in electronic devices. A subsequent crucial advance was the discovery of the buckled honeycomb lattices such as silicene and germanene. Their most distinguished feature is the existence and tunability of two non-equivalent spin-orbit and sublattice-asymmetry bandgaps. The latter one, Δ_z, is directly modified by an external electrostatic field. <cit.> This is a result of the out-of-plane buckling, which is due to the larger radius of Si and Ge atoms compared to carbon and sp^3 hybridization.
The most recently fabricated germanene possesses similar buckling features but with different bandgaps and Fermi velocity. <cit.> Another important class of such materials are the transition metal dichalcogenide (TMDC) structures such as MC_2, where M denotes a metal such as Mo or W, and C is a chalcogen atom (S, Se, or Te). Molybdenum disulfide MoS_2, a typical representative<cit.> of TMDCs, has exhibited a large energy bandgap ≃ 1.9 eV, and broken symmetry between the electron and hole subbands, so that for all experimentally accessible electron density values only one hole subband is doped. As a result, all the electronic, collective and transport properties vary significantly for the electron and hole types of doping. However, all these low-dimensional materials exhibit almost complete (with a slight deviation for MoS_2) isotropy in the x-y plane. Consequently, phosphorene is a totally new material with very unusual properties, so that complete and thorough studies of these characteristics open a new important chapter in low-dimensional science and technology. With the newest achievements in laser and microwave science, it has become possible to achieve substantial control and tunability of the electronic properties of low-dimensional condensed-matter materials by subjecting them to strong off-resonant high-frequency periodic fields (so-called "Floquet engineering", schematically shown in Fig. <ref>(a)).<cit.> If the electron-photon coupling strength is high, such a bound system could be considered as a single, holistic object and has been investigated using quantum optics and mechanics. These electrons with substantially modified energy dispersions, referred to as "dressed states", became a commonly used model in present-day low-dimensional physics. <cit.> One of the first significant achievements has been the demonstration of a metal-insulator transition in graphene<cit.>, which drastically affected the electron tunneling and the Klein paradox.
<cit.> Important collective properties such as exchange and correlation energies are also affected by the presence of an energy gap, <cit.> and spin dynamics on the surface of a three-dimensional topological insulator<cit.> is also modified. The rest of our paper is organized in the following way. In Sec. <ref>, we present our model, the low-energy Hamiltonian and the energy dispersions for phosphorene, i.e., single-layer black phosphorus with strong in-plane anisotropy and a large energy bandgap. The electron-photon dressed states for phosphorene are presented and discussed in Sec. <ref>. Section <ref> is devoted to calculating the dressed states for few-layer phosphorus, in which the electrons are anisotropic massless Dirac fermions without a gap. Mathematical generalizations of such a model with both on- and off-diagonal bandgaps are considered, and the corresponding dressed states are also obtained. Concluding remarks are provided in Sec. <ref>. § LOW-ENERGY MODEL FOR PHOSPHORENE Our calculations utilize the model for BP presented in Refs. [GGBpPRL, GGBpPRB]. Being somewhat similar to the truly two-dimensional hexagon structure of graphene, the atomic arrangement of single-layer BP results in a puckered surface due to the sp^3 hybridization composed of the 3s and 3p orbitals. For silicene, such hybridization is responsible for the out-of-plane "buckling" displacement of the Si atoms. The continuum k-dependent Hamiltonian is usually based on the tight-binding model. Close to the Γ point, approximated up to second order in the wave vector components, it is given as H_ph^Δ( k) = (E_i + ∑_i = x,y η_i k_i^2 ) Î_2×2 + (Δ_O + ∑_i = x,y γ_i k_i^2 ) Σ̂_x - χ k_y Σ̂_y, or in the matrix form, H_ph^Δ( k) = [ [ E_i + ∑_i = x,y η_i k_i^2   Δ_O + ∑_i = x,y γ_i k_i^2 - i χ k_y; Δ_O + ∑_i = x,y γ_i k_i^2 + i χ k_y   E_i + ∑_i = x,y η_i k_i^2 ] ]. This Hamiltonian clearly displays a significantly different structure and properties compared to that of graphene. First, there are no linear k-terms, except ± i χ k_y.
Furthermore, there are no linear k_x elements. As one of the most evident consequences of this structure, we note that circularly polarized irradiation, with x- and y-components being equally important, couples to such electrons only at the ∽ A^2 level. Secondly, the energy bandgap is present in an off-diagonal, Σ̂_x form, contributing to the asymmetry between the electron and hole states, in contrast to a Σ̂_z-type gap. These properties, coming directly from the Hamiltonian structure, are new and have not been encountered previously. The energy dispersions are ε_Δ^± ( k) = E_i + ∑_i = x,y η_i k_i^2 ± [ ( Δ_O + ∑_i = x,y γ_i k_i^2 )^2 + χ^2 k_y^2 ]^1/2, where the ± sign corresponds to the electron or hole solution, respectively. For small values of the wave vector, these dispersions are approximated as ε_Δ^± ( k) ⋍ E_i ± Δ_O + ( η_x ± γ_x ) k_x^2 + [ η_y ± ( γ_y + χ^2/(2 Δ_O) ) ] k_y^2. The effective masses, given by<cit.> m^(e,h)_x = ħ^2/[ 2 ( η_x ± γ_x ) ], m^(e,h)_y = ħ^2/[ 2 ( η_y ± ( γ_y + χ^2/(2 Δ_O) ) ) ], are anisotropic, and this anisotropy is different for the electron and hole states as ∽ χ^2/Δ_O. § ELECTRON DRESSED STATES IN A SINGLE LAYER In this Section, we calculate the electron-light dressed states for phosphorene. As far as circularly polarized irradiation is concerned, one must consider second-order coupling in order to see how both components of the wave vector are modified. Such a consideration is critical for the vector potential given by Eq. (<ref>), but is clearly beyond the scope of conventional analytical methods. The mere presence of an off-diagonal energy gap Δ_O means that there is no electron/hole symmetric solution of the kind obtained in Refs. [kibissrep, kibisall]. This situation also leads us to conclude that the Hamiltonian parameters, such as the energy gap, are affected at lower order than the corresponding parameters for Dirac fermions. Consequently, for monolayer BP, we focus on the case of linearly polarized irradiation.
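The dispersion quoted above is just the eigenvalue pair of the 2×2 matrix Hamiltonian; the sketch below checks this numerically and recovers the anisotropic k_y curvature (and hence the effective-mass asymmetry ∝ χ²/Δ_O) from the small-k expansion. All parameter values are placeholders rather than fitted band-structure numbers.

```python
import math

# Placeholder parameters (arbitrary units); not fitted band-structure values.
E_i, eta_x, eta_y = 0.0, 1.0, 0.5
Delta_O, gam_x, gam_y, chi = 1.0, 0.3, 0.2, 0.8

def dispersion_closed(kx, ky):
    """Closed form: eps± = f ± sqrt(g^2 + (chi*ky)^2), f/g diagonal/off-diagonal parts."""
    f = E_i + eta_x * kx**2 + eta_y * ky**2
    g = Delta_O + gam_x * kx**2 + gam_y * ky**2
    root = math.sqrt(g**2 + (chi * ky)**2)
    return f - root, f + root

def dispersion_matrix(kx, ky):
    # Eigenvalues of [[f, g - i*chi*ky], [g + i*chi*ky, f]] are f ± |g - i*chi*ky|.
    f = E_i + eta_x * kx**2 + eta_y * ky**2
    off = complex(Delta_O + gam_x * kx**2 + gam_y * ky**2, -chi * ky)
    return f - abs(off), f + abs(off)

# Small-k curvature along k_y of the upper band should equal
# 2*(eta_y + gam_y + chi^2/(2*Delta_O)) -- the chi^2/(2*Delta_O) term is what
# makes the electron/hole effective masses differently anisotropic.
h = 1e-3
curvature = (dispersion_closed(0.0, h)[1] - 2 * dispersion_closed(0.0, 0.0)[1]
             + dispersion_closed(0.0, -h)[1]) / h**2
expected = 2 * (eta_y + gam_y + chi**2 / (2 * Delta_O))
```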
§.§ Linear polarization of the dressing field and induced anisotropy Since the electron energy dispersion relations and effective masses are intrinsically anisotropic for phosphorene, the direction of the dressing field polarization now plays a substantial role. We define this direction by an arbitrary angle θ_0 from the x-axis and generalize the vector potential used in Ref. [kibisall] so that it now has both x- and y- non-zero components: A^L(t) = { E_0/ω cosω t cosθ_0; E_0/ω cosω t sinθ_0 }. The renormalized Hamiltonian for the dressed states is obtained by the canonical substitution k_x,y ⇒ k_x,y - (e/ħ) A_x,y, where e stands for the electron charge, yielding Ĥ( k) = H_ph^Δ + ĥ_0 + ĥ_int, where H_ph^Δ is the "bare", non-interacting electron Hamiltonian, given by Eq. (<ref>). The zeroth-order, k-independent interaction Hamiltonian may be expressed as ĥ_o = χ E_0 e/(ħω) sinθ_0 cosω t Σ̂_y = c_0( [ 0   -i cosω t; i cosω t   0 ] ), where c_0 = χ sinθ_0 E_0 e/(ħω), e = | e |. We conclude that the vector potential of linearly polarized light (<ref>) must have a non-zero y-component in order to enable ∽ A^L coupling. We now turn our attention to the interaction term, which is linear in k_x,y and given by ĥ_int = (2 e/ħ) ( ∑_i = x,y η_i k_i A_i^L(t) I_2×2 + ∑_i = x,y γ_i k_i A_i^L(t) Σ̂_x ), or, introducing the simplifying notations ϵ_α = √(α_x^2 k_x^2 + α_y^2 k_y^2), ϕ^(α) = tan^-1[ α_y k_y/(α_x k_x) ] for α = η, γ and c^(2) = 2 e E_0/(ħω), we express it as ĥ_int = c^(2) cosω t [ [ ϵ_η cos(ϕ^(η) - θ_0)   ϵ_γ cos(ϕ^(γ) - θ_0); ϵ_γ cos(ϕ^(γ) - θ_0)   ϵ_η cos(ϕ^(η) - θ_0) ] ]. As the first step, we solve the time-dependent Schrödinger equation for k = 0: i ħ ∂Ψ_0(t)/∂ t = ĥ_o Ψ_0(t). We obtain the solution in a straightforward way to be Ψ_0^β = ± 1(t) = 1/√(2) [ [ 1; β i ] ] exp{ - i β c_0/(ħω) sinω t }. It is noteworthy that even if both the energy bandgap Δ_O and the initial energy shift E_i are included, so that the Hamiltonian takes the form h_o = E_i I_2×2 + Δ_O Σ̂_x + c_0 cosω t Σ̂_y = ( [ E_i   Δ_O - i c_0 cosω t; Δ_O + i c_0 cosω t   E_i ] ), the solution could still be determined analytically as Ψ_0^β = ± 1(t) = 1/√(2) [ [ 1; β i ] ] exp{ - β i/ħ [ c_0/ω sinω t - (Δ_O - β E_i) t ] }. We note that such a trivial solution could only be obtained for equal diagonal energies E_i, i.e., for the introduced non-Σ_z type of energy gap. If the diagonal energies are not equivalent, given by E_1,2 = E_i ± Δ_D, complete symmetry between the two components of the wave function no longer exists, and such a solution could not be determined analytically. Consequently, we will use the basis set in Eq. (<ref>) for the rest of the present calculation. Such a situation (a non-zero diagonal gap Δ_D) appears if there is a relatively small non-zero vertical electrostatic field component and the symmetry between the vertically displaced phosphorus atoms is broken, similar to silicene. We will often use this mathematical generalization in the rest of our work. For finite wave vector, we present the eigenfunction as an expansion in terms of a basis set<cit.> Ψ_ k(t) = F^⇑ Ψ_0^+(t) + F^⇓ Ψ_0^-(t), where F^⇑,⇓ = F^⇑,⇓(k_x,k_y|t) are scalar, time-dependent coefficients with anisotropic k-dependence. This equation immediately results in the two following identities: i ħ dF^⇑,⇓/dt = ⟨Ψ_0^±(t) | δ H_ph^γ( k) | Ψ_0^+(t) ⟩ F^⇑ + ⟨Ψ_0^±(t) | δ H_ph^γ( k) | Ψ_0^-(t) ⟩ F^⇓, where δ H_ph^γ( k) = H_ph^γ + ĥ_int is the bandgap- and wave-vector-dependent portion of the total Hamiltonian (<ref>). This system becomes i dF^⇑,⇓/dt = [ E_i + η_x k_x^2 + ( η_y k_y ± χ ) k_y + c^(2) ϵ_η cos(ϕ^(η) - θ_0) cosω t ] F^⇑,⇓ ± i [ ∓ i Δ_D + Δ_O + γ_x k_x^2 + γ_y k_y^2 + c^(2) ϵ_γ cos(ϕ^(γ) - θ_0) cosω t ] F^⇓,⇑ exp[ ± 2i c_0/(ħω) sinω t ]. The quasiparticle energy dispersion relations ε_d( k) are calculated by using the Floquet theorem via the substitution F^⇑,⇓(t) = exp[ - i ε_d( k) t/ħ ] ∑_λ = -∞^∞ f_λ^⇑,⇓ e^{i λω t}, where f^⇑,⇓(t) = f^⇑,⇓(t + 2π/ω) are time-dependent periodic functions.
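The Floquet expansion above is handled through the Jacobi-Anger identity, which ultimately rescales the off-diagonal gap by J_0(2c_0/ħω). The sketch below verifies the identity with a truncated pure-Python Bessel series and evaluates that rescaling; it is an illustration of the algebra, not the paper's derivation.

```python
import cmath, math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind via its power series (truncated)."""
    if n < 0:
        return (-1) ** (-n) * bessel_j(-n, x, terms)
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + n))
               * (x / 2) ** (2 * m + n) for m in range(terms))

def jacobi_anger(z, theta, nmax=20):
    # sum_nu J_nu(z) e^{i nu theta}, which should reproduce exp(i z sin(theta))
    return sum(bessel_j(nu, z) * cmath.exp(1j * nu * theta)
               for nu in range(-nmax, nmax + 1))

def dressed_gap(delta_o, alpha):
    # Keeping only the diagonal Floquet block, the off-diagonal gap Delta_O is
    # rescaled by J_0(alpha), with alpha = 2 c_0 / (hbar omega).
    return delta_o * bessel_j(0, alpha)
```

Since |J_0(α)| < 1 for α ≠ 0, the dressing field shrinks the off-diagonal gap, consistent with the renormalised dispersion that follows.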
The nested exponential dependence is usually simplified using the Jacobi-Anger identity exp[ ±2i c_0/(ħω) sinω t ] = ∑_ν=-∞^∞ e^i νω t J_ν( ±2 c_0/(ħω) ). The orthonormality of the Fourier expansions results in the following system of 2μ, μ ⇒ ∞ equations ε_d(k) f^⇑,⇓_μ = [ μ ħω + E_i + η_x k_x^2 + ( η_y k_y ± χ ) k_y ] f^⇑,⇓_μ + [ c^(2)/2 ϵ_η cos(ϕ^(η) - θ_0) ] ( f^⇑,⇓_μ+1 + f^⇑,⇓_μ-1 ) + ∑_λ=-∞^∞ f^⇓,⇑_λ × { [ Δ_D ± i( Δ_O + γ_x k_x^2 + γ_y k_y^2 ) ] J_μ-λ( ±2 c_0/(ħω) ) + [ c^(2)/2 ϵ_γ cos(ϕ^(γ) - θ_0) ] ∑_α=±1 J_μ+α-λ( ±2 c_0/(ħω) ) }. In our consideration, the frequency of the off-resonant dressing field is high enough that only diagonal elements in the eigenvalue equation (<ref>) are retained. However, if we need to include the first-order electron-field coupling terms ∼ c^(2), we must keep the summations with λ = μ ± 1. In the simplest case, where only diagonal elements are kept, the quasiparticle energy dispersion relations are ε_d(k) = E_i + ∑_i=x,y η_i k_i^2 ± { χ^2 k_y^2 + [ Δ_D^2 + ( Δ_O + ∑_j=x,y γ_j k_j^2 )^2 ] J_0^2( 2 c_0/(ħω) ) }^1/2. This result is valid only if c_0 = χ sinθ_0 E_0 e/(ħω) ≠ 0, i.e., if there is a finite y-component of the polarization direction of the dressing field. Expanding for weak electron-field coupling, we obtain ε_d(k) = E_i ± { (1 - α_c^2) [ Δ_D^2 + Δ_O^2 ] }^1/2 + [ η_x ± γ_x √(1 - α_c^2) Δ_O/√(Δ_D^2 + Δ_O^2) ] k_x^2 + [ η_y ± ( γ_y Δ_O (1 - α_c^2) + χ^2/2 )/√((1 - α_c^2)[ Δ_D^2 + Δ_O^2 ]) ] k_y^2, where α_c = 2 c_0/(ħω) is a dimensionless coupling coefficient. The electron effective masses are now readily obtained. If there is no Σ_z-type energy bandgap Δ_D, the expressions are simplified as m^(e,h)_x = ħ^2/[ 2 ( η_x ± α̃_c γ_x ) ], m^(e,h)_y = (ħ^2/2)/{ η_y ± [ α̃_c γ_y + χ^2/(2 Δ_O α̃_c) ] }, where α̃_c = √(1 - α_c^2) ∼ 1 - α_c^2/2. The obtained energy dispersion relations are presented in Fig.
<ref>. It is interesting to note that both energy bandgaps are renormalized by the electron-photon interaction, showing a substantial decrease, while the diagonal terms for the initial effective masses of a “bare" electron, η_x,y, are unchanged. § ANISOTROPIC MASSLESS FERMIONS IN FEW-LAYER PHOSPHORUS The central property, as well as the research focus of phosphorene, is the electron dispersion relation and its effective mass anisotropy. At the same time, BP-based materials have a band gap which is determined by the number of layers, varying from 0.6 eV for five layers to 1.5 eV for a single layer. Specifically, we consider anisotropic massless Dirac particles, which could be observed in special few-layer (N_L > 5) black phosphorus superlattices for a narrow range of energies. This anisotropic Dirac Hamiltonian H_ml^γ_0 = ħ v_F' ( k_x Σ̂_x + γ_0 k_y Σ̂_y ) leads to the following energy dispersion ε_γ_0^±(k) = ± ħ v_F' √(k_x^2 + (γ_0 k_y)^2). For such fermions interacting with light linearly polarized in an arbitrary direction θ_0, described by Eq. (<ref>), we obtain the following Hamiltonian Ĥ(k) = H_ml^γ_0(k) + ĥ_0, where ĥ_0 = e v_F Σ̂ · A^L(t) = e v_F E_0/ω ( [ 0 e^-iθ_γ cosω t; e^iθ_γ cosω t 0 ] ) is the k = 0 portion of the total Hamiltonian. Here, we also introduced θ_γ = tan^-1[ γ_0 tan(θ_0) ] so that e^±iθ_γ = cosθ_0 ± i γ_0 sinθ_0. We define c_0 = e v_F E_0/ω to be the electron-photon interaction coefficient with the dimension of energy; for the frequency range considered, c_0 ≪ ħω, i.e., the dressing field cannot be absorbed by the electrons. Traditionally, we first need to solve the time-dependent Schrödinger equation for k = 0 and Hamiltonian ĥ_0. The eigenfunction is obtained in a straightforward way as Ψ_0^β=±1(t) = 1/√2 [[ 1; β e^iθ_γ ]] exp{ -iβ c_0/(ħω) sinω t }. In order to determine the solution for a finite wave vector we once again employ the expansion (<ref>) and solve Eq.
(<ref>) for the time-dependent coefficients F^⇑,⇓ = F^⇑,⇓(k_x,k_y|t). In our case, this leads to i dF^⇑,⇓/dt = ± v_F' k_γ cos(ϕ^(γ) - θ_γ) F^⇑,⇓ ± i v_F' k_γ sin(ϕ^(γ) - θ_γ) F^⇓,⇑ exp[ ±2i c_0/(ħω) sinω t ], where ϕ^(γ) = tan^-1[ γ_0 k_y/k_x ], or ϕ^(γ) = tan^-1[ γ_0 tan(ϕ_0) ] with tanϕ_0 = k_y/k_x. We also denote k_γ = √(k_x^2 + (γ_0 k_y)^2). Now we again use the Floquet theorem to extract the quasiparticle energy ε_d(k) and expand the remaining time-periodic functions as a Fourier series. The result is again a system of 2μ, μ ⇒ ∞ equations ε_d(k) f^⇑,⇓_μ = ∑_λ=-∞^∞ { δ_μ,λ [ μ ω ± ħ v_F' k_γ cos(ϕ^(γ) - θ_γ) ] f^⇑,⇓_λ ± i ħ v_F' k_γ sin(ϕ^(γ) - θ_γ) J_μ-λ( ±2 c_0/(ħω) ) f^⇓,⇑_λ }. In the region of interest, i.e., for large frequency ω ≫ v_F' k and ω ≫ ε(k), we approximate f^⇑,⇓_μ≠0 ⋍ 0. Finally, the eigenvalue equation becomes ε_d(k) f^⇑,⇓_0 = K(⇑,⇓|γ_0,k) f^⇑,⇓_0, where K(⇑,⇓|γ_0,k) = [[ ħ v_F' k_γ cos(ϕ^(γ) - θ_γ), i ħ v_F' k_γ sin(ϕ^(γ) - θ_γ) J_0[ 2 c_0/(ħω) ]; -i ħ v_F' k_γ sin(ϕ^(γ) - θ_γ) J_0[ 2 c_0/(ħω) ], -ħ v_F' k_γ cos(ϕ^(γ) - θ_γ) ]]. The energy eigenvalues are given by ε_d(k) = ± ħ v_F' √(k_x^2 + (γ_0 k_y)^2) { cos^2(ϕ^(γ) - θ_γ) + [ sin(ϕ^(γ) - θ_γ) J_0( 2c_0/(ħω) ) ]^2 }^1/2. For small light intensity, c_0 ≪ ħω, the zeroth-order Bessel function of the first kind behaves as J_0[ 2c_0/(ħω) ] ⋍ 1 - c_0^2/(ħω)^2 + c_0^4/[4(ħω)^4] - ..., and we have approximately for the energy dispersion ε_d(k) = ± ħ v_F' { [ 1 - 2c_0^2/(ħω)^2 sin^2θ_γ ]^2 k_x^2 + γ_0^2 [ 1 - 2c_0^2/(ħω)^2 cos^2θ_γ ]^2 k_y^2 + 2 γ_0 c_0^2/(ħω)^2 k_x k_y sin(2θ_γ) }^1/2. If the light polarization is directed along the x-axis, then θ_0 = θ_γ = 0 and ε_d(k) = ± ħ v_F' √( k_x^2 + γ_0^2 [ 1 - 2 c_0^2/(ħω)^2 ]^2 k_y^2 ). The angular dependence of the dressed-state dispersions is shown in Fig. <ref>. We notice that the initially existing in-plane anisotropy is affected for all in-plane angles, depending on the dressing field polarization direction.
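The 2×2 Floquet block above, written in the manifestly Hermitian form consistent with the quoted eigenvalues, can be diagonalized numerically to confirm the dispersion formula. A sketch with purely illustrative parameter values (ħ = v_F' = 1; x stands for c_0/(ħω)):

```python
import numpy as np
from scipy.special import j0

gamma0, x = 0.4, 0.3              # anisotropy and coupling, illustrative values
kx, ky, theta = 0.7, -0.5, 0.6    # wave vector and angle theta_gamma
kg = np.hypot(kx, gamma0 * ky)    # k_gamma
phi = np.arctan2(gamma0 * ky, kx) # phi^(gamma)

a = kg * np.cos(phi - theta)
b = kg * np.sin(phi - theta) * j0(2 * x)
K = np.array([[a, 1j * b], [-1j * b, -a]])   # Hermitian Floquet block

eps = np.linalg.eigvalsh(K)[1]               # positive branch, eigvalsh sorts ascending
eps_formula = kg * np.sqrt(np.cos(phi - theta)**2
                           + (np.sin(phi - theta) * j0(2 * x))**2)
err = abs(eps - eps_formula)
```

The check works because a traceless Hermitian block [[a, ib], [-ib, -a]] has eigenvalues ±√(a² + b²), which is exactly the quoted ε_d(k) once a and b are identified with the cos and sin·J_0 terms.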
For small intensity of the incoming radiation, polarized along the x-axis, the anisotropy coefficient is simply renormalized. §.§ Circular Polarization For circular polarization of the dressing radiation, the vector potential is A^C(t) = { A_0,x, A_0,y } = E_0/ω { cosω t, sinω t }. Being completely isotropic, this type of field is known to induce a metal-insulator transition in graphene, <cit.> resulting in the creation of a non-zero energy bandgap. If such a gap already exists, it can be increased or decreased depending on its initial value. <cit.> At the same time, the slope of the Dirac dispersions, known as the Fermi velocity, and the in-plane isotropy are not changed. The situation cannot be the same for the initially anisotropic Dirac cone of AMF's. The total Hamiltonian for the interacting quasiparticle now becomes Ĥ(k) = H_ml^γ_0(k) + ĥ_0^(c), where the k = 0 interaction term is ĥ_0^(c) = e v_F E_0/ω Σ̂ · A^C(t) = c_0/2 [[ 0, ∑_α=±1 (1 - αγ) e^iαγω t; ∑_α=±1 (1 + αγ) e^iαγω t, 0 ]]. It seems rather surprising, although physically justified, that this problem is mathematically identical to an isotropic Dirac cone interacting with elliptically polarized light, addressed in Refs. [kibisall, goldprx]. The interaction term can also be presented as ĥ_0^(c) = Ŝ_γ e^iω t + Ŝ_γ^† e^-iω t, Ŝ_γ = c_0/2 ∑_α=±1 (1 + αγ) Σ̂_α, where Σ̂_± = 1/2 ( Σ̂_x ± i Σ̂_y ). This Hamiltonian represents an example of a wide class of periodically driven quantum systems. <cit.> Such problems are generally solved perturbatively, in powers of c_0/(ħω), if the electron-field coupling is weak. The effective Hamiltonian for such a problem has been shown to be Ĥ_eff(k) ⋍ H_ml^γ_0(k) + 1/(ħω) [ Ŝ_γ, Ŝ^†_γ ] + 1/[2(ħω)^2] { [ [ Ŝ_γ, H_ml^γ_0(k) ], Ŝ^†_γ ] + h.c.
} + ⋯ Evaluating this expression, we obtain Ĥ_eff(k) = ħ v_F' k_x ( 1 - γ_0 c_0^2/[2(ħω)^2] ) Σ̂_x + ħ v_F' γ_0 k_y ( 1 - c_0^2/[2(γ_0 ħω)^2] ) Σ̂_y - c_0^2/(ħω) γ_0 Σ̂_z. Finally, the energy dispersion is given by ε_d(k) = ± { ( γ_0 c_0^2/(ħω) )^2 + ħ^2 v_F'^2 [ ( 1 - γ_0 c_0^2/[2(ħω)^2] )^2 k_x^2 + γ_0^2 ( 1 - c_0^2/[2(γ_0 ħω)^2] )^2 k_y^2 ] }^1/2. This result is an approximation. As we mentioned, in the case of an isotropic Dirac cone, i.e., electrons in graphene interacting with a circularly polarized dressing field, the energy gap was found to be <cit.> Δ_g/2 = √(ħ^2 ω^2 + 2 c_0^2) - ħω ⋍ c_0^2/(ħω) - c_0^4/[2(ħω)^3] + ..., while the Fermi velocity v_F is unaffected. §.§ Gapped anisotropic fermions We now present a generalization of the previously considered massless Dirac particles to a finite energy bandgap. Two different gaps are added to the on-diagonal (Δ_D) and off-diagonal (Δ_O) terms of the Hamiltonian. Here, an anisotropic Dirac cone is combined with the energy gaps attributed to phosphorene, a single-layer structure. Even though this model does not exactly describe any of the fabricated black phosphorus structures, we consider it an interesting mathematical generalization of the anisotropic Dirac fermion case, which may become relevant from a physical point of view. Apart from that, this is an intermediate case between phosphorene and few-layer gapless materials, which is expected to approximate the electronic properties of a system with a small number of phosphorus layers. We have H_g = E_i Î_2×2 + ( ħ v_F' k_x + Δ_O ) Σ̂_x + γ_0 ħ v_F' k_y Σ̂_y + Δ_D Σ̂_z = [[ E_i + Δ_D, 0; 0, E_i - Δ_D ]] + Δ_O [[ 0, 1; 1, 0 ]] + ħ v_F' [[ 0, k_x - i γ_0 k_y; k_x + i γ_0 k_y, 0 ]]. The corresponding energy dispersion is given by ϵ_γ_0^±(k) = E_i ± { Δ_D^2 + [ Δ_O + ħ v_F' k_x ]^2 + ( γ_0 ħ v_F' k_y )^2 }^1/2. We now address the interaction of these gapped Dirac electrons with the linearly polarized dressing field.
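The dispersion of the gapped Hamiltonian H_g above follows from direct diagonalization; a quick numerical consistency check (ħ v_F' = 1, with hypothetical parameter values chosen only for illustration):

```python
import numpy as np

Ei, DD, DO, gamma0 = 0.1, 0.25, 0.4, 0.7   # hypothetical gap/anisotropy values
kx, ky = 0.3, -0.6

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# H_g = E_i I + (k_x + Delta_O) Sigma_x + gamma_0 k_y Sigma_y + Delta_D Sigma_z
Hg = Ei * np.eye(2) + (kx + DO) * sx + gamma0 * ky * sy + DD * sz
eps = np.linalg.eigvalsh(Hg)               # ascending: lower, then upper branch
eps_formula = Ei + np.array([-1.0, 1.0]) * np.sqrt(
    DD**2 + (DO + kx)**2 + (gamma0 * ky)**2)
err = float(np.max(np.abs(eps - eps_formula)))
```

Shifting by E_i and squaring the remaining traceless part gives (Δ_D² + (Δ_O + k_x)² + (γ_0 k_y)²) times the identity, so the two branches must be E_i ± the square root, as the code confirms.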
The vector potential here is again specified by Eq. (<ref>) and the new Hamiltonian is Ĥ(k) = H_g(k) + ĥ_0, where ĥ_0 is identical to Eq. (<ref>) since both Hamiltonians share similar k-dependent terms. Following this approach, we expand the wave function for a finite wave vector Ψ_k(t) over the basis (<ref>) and obtain the following equations for the expansion coefficients F^⇑,⇓(k_x,k_y|t): i Ḟ^⇑,⇓ = ± { ± E_i + ( ħ v_F' k_x + Δ_O ) cosθ_γ + γ_0 ħ v_F' k_y sinθ_γ } F^⇑,⇓ + i { -iΔ_D ± γ_0 ħ v_F' k_y cosθ_γ ∓ ( ħ v_F' k_x + Δ_O ) sinθ_γ } F^⇓,⇑ exp[ ±2i c_0/(ħω) sinω t ]. Similar to our previous case, we introduce the simplifying notation ϕ^(O) = tan^-1{ γ_0 ħ v_F' k_y/( ħ v_F' k_x + Δ_O ) }, ϵ_O = √( ( ħ v_F' k_x + Δ_O )^2 + ( γ_0 ħ v_F' k_y )^2 ). After performing the Floquet-theorem substitution and the expansions of the previously adopted procedure, we obtain ε_d(k) f^⇑,⇓_μ = [ μ ħω + E_i ± ϵ_O cos( ϕ^(O) - θ_γ ) ] f^⇑,⇓_μ + ∑_λ=-∞^∞ [ Δ_D ± i ϵ_O sin( ϕ^(O) - θ_γ ) ] J_μ-λ( ±2 c_0/(ħω) ) f^⇓,⇑_λ. It is interesting to note that the diagonal bandgap Δ_D now appears together with the Bessel function and is therefore renormalized by the dressing field. The energy dispersions are now given by ε_d(k) = E_i ± { [ ϵ_O cos( ϕ^(O) - θ_γ ) ]^2 + [ Δ_D^2 + ϵ_O^2 sin^2( ϕ^(O) - θ_γ ) ] J_0^2( 2c_0/(ħω) ) }^1/2, with ϵ_O as defined above. If the electron-photon interaction is small, α_c = 2 c_0/(ħω) ≪ 1, the energy dispersion relation is approximated by ( ε_d(k) - E_i )^2 ⋍ (1 - α_c^2) Δ_D^2 + [ 1 - α_c^2 sin^2θ_γ ]^2 ( ħ v_F' k_x + Δ_O )^2 + ( ħ v_F' )^2 γ_0^2 [ 1 - α_c^2 cos^2θ_γ ]^2 k_y^2 + 2 γ_0 α_c^2 ħ v_F' ( ħ v_F' k_x + Δ_O ) k_y sin(2θ_γ). From Eqs. (<ref>) and (<ref>) we note that the diagonal bandgap Δ_D is decreased as ∼ α_c^2, similar to gapped graphene or the transition metal dichalcogenides, <cit.> while the off-diagonal gap modification depends drastically on the direction of the radiation polarization. This behavior has no analogy in the previously considered structures.
At the same time, the Fermi velocity components are modified similarly to those for anisotropic massless fermions. § CONCLUDING REMARKS We have derived closed-form analytic expressions for electron-photon dressed states in one- and few-layer black phosphorus. The energy gap is determined by the number of layers comprising the system, reaching its largest value for a single layer (phosphorene) and effectively vanishing for a few-layer structure. The latter case gives rise to anisotropic massless fermions, which exhibit an anisotropic Dirac cone. Since the anisotropy is the most significant and common property of all cases of BP, we focused on a linearly polarized dressing field in an arbitrary direction. As a result, we demonstrated that the Hamiltonian parameters are modified in an essentially different way compared to all isotropic Dirac systems. The anisotropy of the energy dispersion is modified in all directions, as are all the electron effective masses. If both diagonal and off-diagonal gaps are present, the latter remains unaffected, but only for a specific light polarization direction. For AMF's interacting with circularly polarized light, the problem is mathematically identical to Dirac electrons irradiated by a field with elliptical polarization. In that case we found that an initially absent energy bandgap is created and that the non-equivalent Fermi velocities in the various directions are renormalized. These results are expected to be of high importance for electronic device applications based on recently discovered and fabricated black phosphorus. D.H. would like to thank the Air Force Office of Scientific Research (AFOSR) for support.
[arXiv:1702.08058v1 (cond-mat.mes-hall), 26 February 2017] Andrii Iurov, Liubov Zhemchuzhna, Godfrey Gumbs, and Danhong Huang, "Exploring the Optical States for Black Phosphorus: Anisotropy and Bandgap Tuning".
lingwang@csrc.ac.cn Beijing Computational Science Research Center, 10 East Xibeiwang Rd, Beijing 100193, China sandvik@bu.edu Department of Physics, Boston University, 590 Commonwealth Ave, Boston, Massachusetts 02215, USA Beijing Computational Science Research Center, 10 East Xibeiwang Rd, Beijing 100193, China Beijing National Laboratory of Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China We use the DMRG method to calculate several energy eigenvalues of the frustrated S=1/2 square-lattice J_1-J_2 Heisenberg model on 2L × L cylinders with L ≤ 10. We identify excited-level crossings versus the coupling ratio g=J_2/J_1 and study their drifts with the system size L. The lowest singlet-triplet and singlet-quintuplet crossings converge rapidly (with corrections ∝ L^-2) to different g values, and we argue that these correspond to ground-state transitions between the Néel antiferromagnet and a gapless spin liquid, at g_c1≈ 0.46, and between the spin liquid and a valence-bond-solid at g_c2≈ 0.52. Previous studies of order parameters were not able to positively discriminate between an extended spin liquid phase and a critical point. We expect level-crossing analysis to be a generically powerful tool in DMRG studies of quantum phase transitions. Critical level crossings and gapless spin liquid in the square-lattice spin-1/2 J_1-J_2 Heisenberg antiferromagnet Anders W. Sandvik December 30, 2023 ==================================================================================================================== The spin-1/2 frustrated J_1-J_2 Heisenberg model on the two-dimensional (2D) square lattice (where J_1 and J_2 are the strengths of the first- and second-neighbor couplings S_i· S_j, respectively) has been studied and debated since the early days of the high-T_c cuprate superconductors <cit.>.
The initial interest in the system stemmed from the proposal that frustrated antiferromagnetic (AFM) couplings could lead to a spin liquid (SL) in which preformed pairs (resonating valence bonds <cit.>) become superconducting upon doping <cit.>. Later, with frustrated quantum magnets emerging in their own right as an active research field <cit.>, the J_1-J_2 model became a prototypical 2D system for theoretical and computational studies of quantum phase transitions and nonmagnetic states <cit.>. Of primary interest is the transition from the long-range Néel AFM ground state <cit.> at small g=J_2/J_1 to a nonmagnetic state in a window around g ≈ 0.5 (before a stripe AFM phase at g ≳ 0.6). The nature of this quantum phase transition has remained enigmatic <cit.>, despite a large number of calculations with numerical tools of ever increasing sophistication, e.g., the density matrix renormalization group (DMRG) method <cit.>, tensor-product states <cit.>, and variational Monte Carlo <cit.>. The nonmagnetic state may be one with spontaneously broken lattice symmetries due to formation of a pattern of singlets (a valence-bond solid, VBS) or a SL. Within these two classes of potential ground states there are several different proposals, e.g., a columnar <cit.> versus a plaquette <cit.> VBS, and gapless <cit.> or gapped <cit.> SLs. The quantum phase transition out of the AFM state may possibly be an unconventional 'deconfined' transition <cit.>, which recently has been investigated primarily within other models <cit.> hosting direct AFM–VBS transitions. In the J_1-J_2 model, some studies have indicated that the nonmagnetic phase may actually comprise two different phases, with an entire gapless SL phase—not just a critical point—existing between the AFM and VBS states <cit.>. However, because of the small system sizes accessible, it was not possible to rule out a direct AFM–VBS transition.
We here demonstrate an intervening gapless SL by locating the AFM–SL and SL–VBS transitions using a numerical level-spectroscopy approach, where finite-size transition points are defined using excited-level crossings. These crossing points exhibit smooth size dependence and can be more reliably extrapolated to infinite size than the order parameters and gaps used in past studies. We use a variant of the DMRG method <cit.> to calculate the ground-state energy as well as several of the lowest singlet, triplet, and quintuplet excited energies. In the AFM state, the lowest excitation above the singlet ground state in a finite system with an even number of sites is a triplet—the lowest state in the Anderson tower of 'quantum rotor' states <cit.>. If the nonmagnetic ground state is a degenerate singlet when the system length L→∞, as it should be in both a VBS and a topological (gapped) SL, there must be a crossing of the lowest singlet and triplet excitations at a point g(L) that approaches g_c with increasing L. This is indeed observed at the dimerization transition of the 1D J_1-J_2 chain <cit.> and related systems <cit.>, and size extrapolations give g_c to remarkable precision, even with system sizes only up to L ≈ 30. A level crossing with the same finite-size behavior was observed recently also in the 2D J-Q model <cit.>, which is a Heisenberg model supplemented by four-spin interactions causing an AFM–VBS transition <cit.>, likely a deconfined quantum-critical point with unusual scaling properties <cit.>. It is then natural to investigate level crossings also in the 2D J_1-J_2 model. We will demonstrate a singlet-triplet level crossing in the J_1-J_2 model which for 2L × L cylindrical lattices shifts as g_c2-g_c2(L) ∝ L^-2 and converges to g_c2 ≈ 0.52. We also observe a singlet-quintuplet level crossing, which converges to a different point, g_c1 ≈ 0.46.
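As a concrete illustration of the level-crossing criterion invoked above, the singlet-triplet crossing of the 1D J_1-J_2 chain can be located by exact diagonalization of a small periodic chain. This is an independent sketch (N = 12 sites, a crude grid bracketing of the crossing), not the cylinder calculation of the present work:

```python
import numpy as np
from itertools import combinations

def spectrum(N, g, n_up):
    """Full spectrum of the periodic J1-J2 Heisenberg chain in a fixed-Sz sector."""
    states = [sum(1 << i for i in c) for c in combinations(range(N), n_up)]
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    bonds = [(i, (i + 1) % N, 1.0) for i in range(N)] + \
            [(i, (i + 2) % N, g) for i in range(N)]
    for k, s in enumerate(states):
        for i, j, J in bonds:
            if ((s >> i) ^ (s >> j)) & 1:          # antiparallel pair: flip term
                H[k, k] -= 0.25 * J
                H[index[s ^ (1 << i) ^ (1 << j)], k] += 0.5 * J
            else:                                  # parallel pair: diagonal only
                H[k, k] += 0.25 * J
    return np.linalg.eigvalsh(H)

def gap_diff(N, g, tol=1e-6):
    """(lowest singlet gap) - (lowest triplet gap); changes sign at the crossing."""
    e0 = spectrum(N, g, N // 2)        # Sz = 0 sector
    e1 = spectrum(N, g, N // 2 + 1)    # Sz = 1 sector
    triplet = e1[0] - e0[0]
    # lowest Sz=0 excitation absent from the Sz=1 spectrum is a singlet
    singlet = next(e - e0[0] for e in e0[1:] if np.min(np.abs(e1 - e)) > tol)
    return singlet - triplet

N = 12
gs = np.linspace(0.20, 0.30, 6)
d = [gap_diff(N, g) for g in gs]
k = next(i for i in range(len(d) - 1) if d[i] > 0 and d[i + 1] < 0)
g_star = 0.5 * (gs[k] + gs[k + 1])     # crude bracket of the crossing point
```

For this chain the finite-size crossings are known to extrapolate to the dimerization point g_c ≈ 0.2411, so even the crude N = 12 bracket lands close to that value.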
Given the known transitions associated with singlet-triplet crossings, and that a singlet-quintuplet crossing was found at the transition between the critical and AFM states in a Heisenberg chain with long-range interactions <cit.>, we interpret both g_c1 and g_c2 as quantum-critical points. For g_c1 ≤ g ≤ g_c2 the system appears to be a gapless SL with algebraically decaying correlations, as in one of the scenarios proposed in Refs. ShengJ1J2,Imada15 (and previously discussed also in Ref. sandvik12). Our value of g_c1 is in the middle of the range g=0.4 ∼ 0.5 where most recent studies have put the end of the AFM phase <cit.>, and g_c2 is close to the VBS-ordering point in Refs. ShengJ1J2,Imada15. DMRG calculations.—The DMRG method <cit.> is a powerful tool for computing the ground state |ψ_0⟩ of a many-body Hamiltonian. By solving a Hamiltonian H_eff in a relevant low-entangled subspace of the full Hilbert space, one can obtain an effective wavefunction, through which the most relevant subspace is selected for the next iteration. A series of such subspace projectors produces the ground state as a matrix product state (MPS), i.e., the wavefunction coefficients are traces of products of local matrices of chosen size m <cit.>. The lowest excited state |ψ_1⟩ can also be targeted with DMRG <cit.>, provided that |ψ_0⟩ has been pre-calculated. The only difference from a ground-state DMRG algorithm is that one has to maintain the orthogonality condition ⟨ψ_1|ψ_0⟩=0 at each step. Upon reformulating the Hamiltonian for the lowest excited state as H_1=H-λ_0|ψ_0⟩⟨ψ_0|, where λ_0 is the eigenvalue of H corresponding to |ψ_0⟩, one can write down the effective Hamiltonian equation in the DMRG procedure as [U_1^†(H-λ_0|ψ_0⟩⟨ψ_0|)U_1] U_1^†|ψ_1⟩ = λ_1 U_1^†|ψ_1⟩, where U_1 projects onto the canonical MPS <cit.> for |ψ_1⟩ without the center two sites, as illustrated in Fig. <ref>, and λ_1 is the eigenvalue for |ψ_1⟩.
We can therefore define an effective Hamiltonian H^1_eff ≡ U_1^†(H-λ_0|ψ_0⟩⟨ψ_0|)U_1. Similarly, given that |ψ_i⟩ for all i<j (λ_i<λ_j) have been pre-calculated, we observe that one can compute the next eigenstate j as an MPS with a given number of kept Schmidt states m using the modified Hamiltonian H_j=H-∑_i=0^j-1 λ_i|ψ_i⟩⟨ψ_i|. Here H_eff^j U_j^†|ψ_j⟩ = λ_j U_j^†|ψ_j⟩ as in Eq. (<ref>). In practice such a DMRG scheme will break down (i.e., an unreasonably large m has to be used) when the eigenstates far from the bottom of the spectrum begin to violate the area law. The 2L×L cylinder geometry, with open and periodic boundaries in the x and y directions, respectively, is known to be suitable for 2D DMRG calculations <cit.>, and we use it here for even L up to 10. We employ the DMRG with either U(1) (the total spin z component S^z is conserved) or SU(2) symmetry. With U(1) symmetry, we generate up to ten S^z=0 states and obtain the total spin S by computing the expectation value of S^2. An advantage of focusing on the level spectrum is the well-known fact that the energy converges much faster with the number m of Schmidt states than other physical observables, and also as a function of the number of sweeps in the DMRG procedure. We here apply very stringent convergence criteria and also extrapolate away the remaining finite-m errors based on calculations for several values of m, up to m=12000 with U(1) symmetry and m=5000 with SU(2) symmetry. The DMRG procedures and extrapolations are further discussed in the Supplemental Material (SM) <cit.>. Results.—Figure <ref> shows two singlet gaps and the lowest triplet and quintuplet gaps versus g in and close to the nonmagnetic regime. The main graph shows results for L=10. One of the singlet gaps decreases rapidly with increasing g, crossing the other three levels.
This is the lowest singlet excitation starting from g≈ 0.42, after crossing the other singlet (which has other quantum numbers related to the lattice symmetries) that is lower in what we will argue is the AFM phase. The insets of Fig. <ref> show results also for L=6 and 8 in the region around the level crossings that we will analyze (the higher gaps for L=4 are not shown for clarity). Using polynomial fits to the DMRG data points, we extract crossing points g_c1(L) between the singlet and the quintuplet, as well as g_c2(L) between the singlet and the triplet. The singlet-singlet crossings taking place close to g_c1(L) are discussed in the SM <cit.>; their size dependence is similar to that of g_c1(L). For g > g_c1(L) there are also other levels in the energy range of Fig. <ref>, including singlets, but the S=0,1,2 gaps graphed are the lowest with these spins up to and beyond the largest g shown. As L increases the two sets of crossing points drift toward two different asymptotic values. For the singlet-triplet crossings, we have considered different extrapolation procedures with g_c2(L), all of which deliver g_c2 ≈ 0.52 when L →∞. It is natural to test whether the finite-size correction to g_c2 is consistent with the L^-2 drift in the frustrated Heisenberg chain <cit.>, a behavior also found in the 2D J-Q model in Ref. suwa_prb94.144416. In Fig. <ref>(a) we graph the data versus L^-2, along with a line drawn through the L=8 and L=10 points as well as a fitted curve including a higher-order correction. Although we have only four points and there are three free parameters, it is not guaranteed that the fit should match the data as well as it does. With a leading L^-1 correction the best fit is far from good. Therefore, we take the former fit as evidence that the asymptotic drift is at least very close to L^-2. The fit with the subleading correction in Fig.
<ref>(a) gives g_c2=0.519, a minute change from the straight-line extrapolation. Based on the differences between the two extrapolations and roughly estimated errors on the individual crossing points (which arise from the DMRG extrapolations, as discussed in the SM <cit.>), the final result is g_c2=0.519 ± 0.002. Plotting the singlet-quintuplet crossing points in the same graph in Fig. <ref>(a), the overall behavior is similar to the singlet-triplet points, but it is clear that they do not drift as far as to g_c2. We find that the L^-2 form applies also here; see the SM <cit.> for further analysis of the corrections for both g_c1 and g_c2. A rough extrapolation by a line drawn through the L=8 and L=10 points gives g_c1 ≈ 0.465, and when including a correction of the same form as in the singlet-triplet case, the extrapolated value moves only slightly down to g_c1 ≈ 0.463. Based on this analysis we conclude that g_c1 = 0.463 ± 0.002. In Fig. <ref>(b) we analyze the crossing gaps, multiplied by L in order to make clearly visible the leading behavior and well-behaved corrections. All gaps close as L^-1, i.e., the dynamic exponent z=1 at both critical points. We have also analyzed the gaps in the regime g_c1 < g < g_c2 (not shown), and it appears that the lowest S=0,1,2 gaps all scale as L^-1 throughout. This phase should therefore be a gapless (algebraic) SL, instead of a Z_2 SL with nonzero triplet gap for L→∞ <cit.> and singlet gap vanishing exponentially (due to topological degeneracy). The point g_c2 ≈ 0.52 is higher than almost all previous results reported for the point beyond which the AFM order vanishes, but it is close to where recent works have suggested a transition from a gapless SL into a VBS <cit.>.
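The two-step extrapolation used above (a straight line through the two largest sizes, refined by a subleading correction) can be illustrated on synthetic crossing points with a built-in L^-2 drift; the numbers below are invented for the demonstration and are not the data of this work:

```python
import numpy as np

g_inf, a, b = 0.519, -1.8, 2.5        # synthetic parameters, not fitted data
L = np.array([4.0, 6.0, 8.0, 10.0])
x = 1.0 / L**2
g_L = g_inf + a * x + b * x**2        # crossing points with leading 1/L^2 drift

# straight line through the two largest sizes (L = 8 and L = 10)
slope = (g_L[3] - g_L[2]) / (x[3] - x[2])
g_line = g_L[3] - slope * x[3]

# quadratic fit in 1/L^2 also captures the subleading correction
g_fit = np.polyfit(x, g_L, 2)[-1]     # intercept = g(L -> infinity)

err_line = abs(g_line - g_inf)        # residual ~ b / (L1^2 L2^2), here ~4e-4
err_fit = abs(g_fit - g_inf)
```

For data of this form the straight-line intercept misses g(∞) only by b x_8 x_10, which is why the refined fit shifts the estimate so little; the same smallness underlies the quoted error bars.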
If there indeed is a gapless SL intervening between the AFM and the VBS phases and its lowest excitation is a triplet (as is the case, e.g., in the critical Heisenberg chain), then a singlet-triplet crossing is indeed expected at the SL–VBS transition, since the triplet is gapped and the ground state is degenerate in the VBS phase. To interpret the singlet-quintuplet crossing at g_c1 ≈ 0.46, we again note that the nature of the low-lying gapless excitations reflects the properties of the ground state, and a ground-state transition can be accompanied by rearrangements of levels across sectors or within a sector of fixed total spin. A singlet-quintuplet crossing is indeed present at the transition between a critical Heisenberg state (a 1D algebraic SL) and a long-range AFM state in a spin chain with long-range unfrustrated interactions and either unfrustrated <cit.> or frustrated <cit.> short-range interactions, as we discuss further in the SM <cit.>. This analogy, and the fact that g_c1 is close to where many previous works have located the end of the AFM phase (as we also show below and in the SM <cit.>), provides compelling evidence for the association of the singlet-quintuplet crossing with the AFM–SL transition. Furthermore, the S=2 quantum rotor state in the AFM phase has a gap ∝ L^-2, while at g_c1 it scales as L^-1 according to Fig. <ref>. Thus, at this point (and for higher g) the level spectrum is incompatible with AFM order. We also computed the squared AFM order parameter (sublattice magnetization per spin) ⟨m^2_s⟩ in the putative SL phase, with m_s defined on the central L×L part of the 2L×L system (here with L up to 12). Since we mainly focused on the excited energies, we did not push the ground-state ⟨m^2_s⟩ calculations to as large L as in some past works <cit.>. To complement our own data, we therefore also use L=14 results from Ref. ShengJ1J2. In cases where we have data for the same parameter values, our results agree to within 0.2%.
We fit the data to power laws with a correction, ⟨m_s^2⟩ = bL^-α(1-cL^-ω), where acceptable values of ω span the range ω ≈ 0.2 ∼ 1.5 and the exponent α changes somewhat when varying ω. In Fig. <ref> we show examples of fits with ω=0.5. We find that α increases with g, from α ≈ 1.3 at g=0.46 to α ≈ 1.8 at g=0.52. We have also tried to fix α to a common value for all g, but this does not produce good fits. We therefore agree with previous claims <cit.> that the exponent depends on g. At g=0.5, our result α ≈ 1.7 ± 0.1 is larger than the value 1.44 reported in Ref. ShengJ1J2, with the difference explained by the correction used here. The result agrees well with α = 1.53 ± 0.09 from variational Monte Carlo calculations <cit.>, and a similar value was also reported with a projected entangled-pair state ansatz <cit.>. In the SM <cit.> we provide further analysis showing that the AFM order vanishes at the extrapolated level-crossing point g_c1 ≈ 0.46. Discussion.—Our level-crossing analysis in combination with results for the sublattice magnetization shows consistently that the AFM phase ends at g_c1 ≈ 0.46 and a gapless SL phase exists between this value and g_c2 ≈ 0.52. In the level-crossing approach the finite-size transition points are sharply defined and the convergence with system size is rapid, with corrections vanishing as L^-2 (or possibly L^-a with a ≈ 2). Our results in Fig. <ref>(a) leave little doubt that the singlet-quintuplet and singlet-triplet crossings converge to different points, while we would expect convergence to the same point if there were no SL between the AFM and VBS phases, as we demonstrate explicitly in the SM <cit.> in the case of the J-Q model. The behavior of the spin correlations and the gaps implies a gapless SL with power-law decaying spin correlations.
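The corrected power-law form used above can be fitted with standard nonlinear least squares. A sketch on synthetic data (the parameter values are invented for illustration and are not the DMRG results; ω is fixed at 0.5 as in the example fits in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def m2(L, b, alpha, c):
    # <m_s^2> = b L^{-alpha} (1 - c L^{-omega}) with omega fixed at 0.5
    return b * L**(-alpha) * (1.0 - c * L**(-0.5))

L = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
b_t, a_t, c_t = 0.9, 1.7, 0.4          # synthetic "true" parameters
data = m2(L, b_t, a_t, c_t)

popt, _ = curve_fit(m2, L, data, p0=(1.0, 1.5, 0.1))
alpha_fit = popt[1]                     # recovered decay exponent
```

With real data one would repeat the fit for a range of fixed ω values and monitor how the extracted α shifts, which is the source of the error bar α ≈ 1.7 ± 0.1 quoted in the text.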
In the region 0.52 < g < 0.62, between the SL and the stripe-AFM phases, our calculations of excited states reveal many low-lying singlets, and we have been able to map them <cit.> onto the quasi-degenerate levels expected for a columnar <cit.> VBS state. The AFM–SL and SL–VBS phase boundaries are in rough agreement with two recent works discussing a gapless SL phase followed by a VBS <cit.>, and the lower boundary agrees well with a Lanczos-improved variational Monte Carlo calculation <cit.>. Many other past studies have located the end of the AFM order close to the same value. A recent exception is an infinite-size tensor calculation <cit.> in which the AFM order ends close to our g_c2 point. However, the infinite-size approach is not unbiased but depends on details of how the environment tensors are constructed. The DMRG calculations, here and in Ref. ShengJ1J2, are unbiased for finite size if the convergence is checked carefully, and they completely exclude AFM order beyond our g_c1 value. As far as we are aware, the critical singlet-quintuplet crossing found here (and the singlet-singlet crossing in the SM <cit.>) has not previously been discussed in the 2D context. This level crossing has been considered in 1D <cit.>, and in the SM <cit.> we present additional evidence of its association with the AFM–SL transition. The physical origin of the level crossing deserves further study. The detailed information we have obtained on the evolution of the low-energy levels in 2D should be useful for discriminating between different field-theoretical descriptions of the phase transitions and the SL phase. We expect that level crossings are common at 2D quantum phase transitions, as they are in 1D. Our work suggests that the best way to use 2D DMRG in studies of quantum criticality is to first look for and analyze level crossings to extract critical points, and then study order parameters (conventional or topological) at this point and in the phases.
In principle the DMRG procedures that we have employed here can also be extended to more detailed level-spectroscopy studies <cit.>.

Acknowledgments.—We would like to thank F. Becca, S. Capponi, M. Imada, D. Poilblanc, S. Sachdev, J.-Z. Zhao, and Z.-Y. Zhu for helpful discussions. We are grateful to S. Gong and D. Sheng for providing their numerical results from Ref. ShengJ1J2. L.W. is supported by the National Key Research and Development Program of China (Grant No. 2016YFA0300600), the National Natural Science Foundation of China (Grants No. NSFC-11734002 and No. NSFC-11474016), the National Thousand Young Talents Program of China, and the NSAF Program of China (Grant No. U1530401). She thanks Boston University's Condensed Matter Theory Visitors program for travel support. A.W.S. was supported by the NSF under Grants No. DMR-1410126 and No. DMR-1710170, and by a Simons Investigator Grant. He would also like to thank the Beijing Computational Science Research Center (CSRC) for visitor support. The calculations were partially carried out under a Tianhe-2JK computing award at the CSRC.

[chandra88] P. Chandra and B. Doucot, Possible spin-liquid state at large S for the frustrated square Heisenberg lattice, Phys. Rev. B 38, 9335 (1988).
[ed1] E. Dagotto and A. Moreo, Phase diagram of the frustrated spin-1/2 Heisenberg antiferromagnet in 2 dimensions, Phys. Rev. Lett. 63, 2148 (1989).
[gelfand89] M. P. Gelfand, R. R. P. Singh, and D. A. Huse, Zero-temperature ordering in two-dimensional frustrated quantum Heisenberg antiferromagnets, Phys. Rev. B 40, 10801 (1989).
[sachdev89] S. Sachdev, Large-N limit of the square-lattice t-J model at 1/4 and other filling fractions, Phys. Rev. B 41, 4502 (1990).
[ed2] F. Figueirido, A. Karlhede, S. Kivelson, S. Sondhi, M. Rocek, and D. S. Rokhsar, Exact diagonalization of finite frustrated spin-1/2 Heisenberg models, Phys. Rev. B 41, 4619 (1990).
[singh90] R. R. P. Singh and R. Narayanan, Dimer versus twist order in the J_1-J_2 model, Phys. Rev. Lett. 65, 1072 (1990).
[RS9173] N. Read and S. Sachdev, Large-N expansion for frustrated quantum antiferromagnets, Phys. Rev. Lett. 66, 1773 (1991).
[ed3] H. J. Schulz and T. A. L. Ziman, Finite-Size Scaling for the Two-Dimensional Frustrated Quantum Heisenberg Antiferromagnet, Europhys. Lett. 18, 355 (1992).
[ivanov92] N. E. Ivanov and P. Ch. Ivanov, Frustrated two-dimensional quantum Heisenberg antiferromagnet at low temperatures, Phys. Rev. B 46, 8206 (1992).
[ed5] T. Einarsson and H. J. Schulz, Direct calculation of the spin stiffness in the J_1-J_2 Heisenberg antiferromagnet, Phys. Rev. B 51, 6151 (1995).
[ed4] H. J. Schulz, T. A. L. Ziman, and D. Poilblanc, Magnetic Order and Disorder in the Frustrated Quantum Heisenberg Antiferromagnet in Two Dimensions, J. Phys. I 6, 675 (1996).
[Singh_prb60.7278] R. R. P. Singh, Z. Weihong, C. J. Hamer, and J. Oitmaa, Dimer order with striped correlations in the J_1-J_2 Heisenberg model, Phys. Rev. B 60, 7278 (1999).
[fazekas74] P. Fazekas and P. W. Anderson, On the ground state properties of the anisotropic triangular antiferromagnet, Philos. Mag. 30, 432 (1974).
[RVB] P. W. Anderson, The resonating valence bond state in La_2CuO_4 and superconductivity, Science 235, 1196 (1987).
[PatrickRVB] For a review, see P. A. Lee, N. Nagaosa, and X.-G. Wen, Doping a Mott insulator: Physics of high-temperature superconductivity, Rev. Mod. Phys. 78, 17 (2006).
[diep05] H. T. Diep (Ed.), Frustrated Spin Systems (World Scientific, 2005).
[Capriotti_prl84.3173] L. Capriotti and S. Sorella, Spontaneous Plaquette Dimerization in the J_1-J_2 Heisenberg Model, Phys. Rev. Lett. 84, 3173 (2000).
[sirker_prb73.184420] J. Sirker, Z. Weihong, O. P. Sushkov, and J. Oitmaa, J_1-J_2 model: First-order phase transition versus deconfinement of spinons, Phys. Rev. B 73, 184420 (2006).
[darradi_prb78.214415] R. Darradi, O. Derzhko, R. Zinke, J. Schulenburg, S. E. Krüger, and J. Richter, Ground state phases of the spin-1/2 J_1-J_2 Heisenberg antiferromagnet on the square lattice: A high-order coupled cluster treatment, Phys. Rev. B 78, 214415 (2008).
[WangJ1J216] L. Wang, Z.-C. Gu, F. Verstraete, and X.-G. Wen, Tensor-product state approach to spin-1/2 square J_1-J_2 antiferromagnetic Heisenberg model: Evidence for deconfined quantum criticality, Phys. Rev. B 94, 075143 (2016).
[mambrini17] D. Poilblanc and M. Mambrini, Quantum critical point with infinite projected entangled paired states, Phys. Rev. B 96, 014414 (2017).
[Capriotti_prl87.097201] L. Capriotti, F. Becca, A. Parola, and S. Sorella, Resonating Valence Bond Wave Functions for Strongly Frustrated Spin Systems, Phys. Rev. Lett. 87, 097201 (2001).
[mambrini_prb74.144422] M. Mambrini, A. Läuchli, D. Poilblanc, and F. Mila, Plaquette valence-bond crystal in the frustrated Heisenberg quantum antiferromagnet on the square lattice, Phys. Rev. B 74, 144422 (2006).
[arlego_prb78.224415] M. Arlego and W. Brenig, Plaquette order in the J_1-J_2-J_3 model: Series expansion analysis, Phys. Rev. B 78, 224415 (2008).
[Beach_prb79.224431] K. S. D. Beach, Master equation approach to computing RVB bond amplitudes, Phys. Rev. B 79, 224431 (2009).
[richter_ed] J. Richter and J. Schulenburg, The spin-1/2 J_1-J_2 Heisenberg antiferromagnet on the square lattice: Exact diagonalization for N=40 spins, Eur. Phys. J. B 73, 117 (2010).
[HuJ1J2] W.-J. Hu, F. Becca, A. Parola, and S. Sorella, Direct evidence for a gapless Z_2 spin liquid by frustrating Néel antiferromagnetism, Phys. Rev. B 88, 060402(R) (2013).
[JiangJ1J2] H.-C. Jiang, H. Yao, and L. Balents, Spin Liquid Ground State of the Spin-1/2 Square J_1-J_2 Heisenberg Model, Phys. Rev. B 86, 024424 (2012).
[ShengJ1J2] S.-S. Gong, W. Zhu, D. N. Sheng, O. I. Motrunich, and M. P. A. Fisher, Plaquette Ordered Phase and Quantum Phase Diagram in the Spin-1/2 J_1-J_2 Square Heisenberg Model, Phys. Rev. Lett. 113, 027201 (2014).
[MurgJ1J2] V. Murg, F. Verstraete, and J. I. Cirac, Exploring frustrated spin systems using projected entangled pair states, Phys. Rev. B 79, 195119 (2009).
[KaoJ1J2] J. F. Yu and Y. J. Kao, Spin-1/2 J_1-J_2 Heisenberg antiferromagnet on a square lattice: a plaquette renormalized tensor network study, Phys. Rev. B 85, 094407 (2012).
[WangJ1J2] L. Wang, D. Poilblanc, Z.-C. Gu, X.-G. Wen, and F. Verstraete, Constructing gapless spin liquid state for the spin-1/2 J_1-J_2 Heisenberg model on a square lattice, Phys. Rev. Lett. 111, 037202 (2013).
[HaghshenasJ1J2] R. Haghshenas and D. N. Sheng, U(1)-symmetric infinite projected entangled-pair state study of the spin-1/2 square J_1-J_2 Heisenberg model, Phys. Rev. B 97, 174408 (2018).
[AndersonTower] P. W. Anderson, An Approximate Quantum Theory of the Antiferromagnetic Ground State, Phys. Rev. 86, 694 (1952).
[chakravarty89] S. Chakravarty, B. I. Halperin, and D. R. Nelson, Two-dimensional quantum Heisenberg antiferromagnet at low temperatures, Phys. Rev. B 39, 2344 (1989).
[manousakis91] E. Manousakis, The spin-1/2 Heisenberg antiferromagnet on a square lattice and its application to the cuprous oxides, Rev. Mod. Phys. 63, 1 (1991).
[whitedmrg] S. R. White, Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863 (1992).
[schollwoechreview] U. Schollwöck, The density-matrix renormalization group in the age of matrix product states, Ann. Phys. 326, 96 (2011).
[Imada15] S. Morita, R. Kaneko, and M. Imada, Quantum Spin Liquid in Spin 1/2 J_1-J_2 Heisenberg Model on Square Lattice: Many-Variable Variational Monte Carlo Study Combined with Quantum-Number Projections, J. Phys. Soc. Jpn. 84, 024720 (2015).
[senthil04] T. Senthil, A. Vishwanath, L. Balents, S. Sachdev, and M. Fisher, Deconfined quantum critical points, Science 303, 1490 (2004).
[DQCP2] T. Senthil, L. Balents, S. Sachdev, A. Vishwanath, and M. P. A. Fisher, Quantum criticality beyond the Landau-Ginzburg-Wilson paradigm, Phys. Rev. B 70, 144407 (2004).
[moon12] E. G. Moon and C. Xu, Exotic continuous quantum phase transition between Z_2 topological spin liquid and Néel order, Phys. Rev. B 86, 214414 (2012).
[DQCP3] A. W. Sandvik, Evidence for Deconfined Quantum Criticality in a Two-Dimensional Heisenberg Model with Four-Spin Interactions, Phys. Rev. Lett. 98, 227202 (2007).
[melko08] R. G. Melko and R. K. Kaul, Scaling in the Fan of an Unconventional Quantum Critical Point, Phys. Rev. Lett. 100, 017203 (2008).
[lou09] J. Lou, A. W. Sandvik, and N. Kawashima, Antiferromagnetic to valence-bond-solid transitions in two-dimensional SU(N) Heisenberg models with multispin interactions, Phys. Rev. B 80, 180414(R) (2009).
[banerjee10] A. Banerjee, K. Damle, and F. Alet, Impurity spin texture at a deconfined quantum critical point, Phys. Rev. B 82, 155139 (2010).
[block13] M. S. Block, R. G. Melko, and R. K. Kaul, Fate of CP^N-1 Fixed Points with q Monopoles, Phys. Rev. Lett. 111, 137202 (2013).
[harada13] K. Harada, T. Suzuki, T. Okubo, H. Matsuo, J. Lou, H. Watanabe, S. Todo, and N. Kawashima, Possibility of deconfined criticality in SU(N) Heisenberg models at small N, Phys. Rev. B 88, 220408 (2013).
[chen13] K. Chen, Y. Huang, Y. Deng, A. B. Kuklov, N. V. Prokof'ev, and B. V. Svistunov, Deconfined Criticality Flow in the Heisenberg Model with Ring-Exchange Interactions, Phys. Rev. Lett. 110, 185701 (2013).
[shao16] H. Shao, W. Guo, and A. W. Sandvik, Quantum criticality with two length scales, Science 352, 213 (2016).
[nahum15] A. Nahum, J. T. Chalker, P. Serna, M. Ortuño, and A. M. Somoza, Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models, Phys. Rev. X 5, 041048 (2015).
[whiteparadmrg] E. M. Stoudenmire and S. R. White, Real-space parallel density matrix renormalization group, Phys. Rev. B 87, 155137 (2013).
[McCulloch07] I. P. McCulloch, From density-matrix renormalization group to matrix product states, J. Stat. Mech. (2007) P10014.
[nomura92] K. Nomura and K. Okamoto, Spin-Gap Phase in the One-Dimensional t-J-J' Model, Phys. Lett. A 169, 433 (1992).
[eggert96] S. Eggert, Numerical evidence for multiplicative logarithmic corrections from marginal operators, Phys. Rev. B 54, R9612 (1996).
[sandvik10a] A. W. Sandvik, Ground States of a Frustrated Quantum Spin Chain with Long-Range Interactions, Phys. Rev. Lett. 104, 137204 (2010).
[sandvik10b] A. W. Sandvik, Computational Studies of Quantum Spin Systems, AIP Conf. Proc. 1297, 135 (2010).
[suwa_prl15] H. Suwa and S. Todo, Generalized Moment Method for Gap Estimation and Quantum Monte Carlo Level Spectroscopy, Phys. Rev. Lett. 115, 080601 (2015).
[suwa_prb94.144416] H. Suwa, A. Sen, and A. W. Sandvik, Level spectroscopy in a two-dimensional quantum magnet: Linearly dispersing spinons at the deconfined quantum critical point, Phys. Rev. B 94, 144416 (2016).
[sandvik10anote] In Ref. <cit.>, the crossing S=2 state was misidentified as a singlet, but the results otherwise agree with our DMRG calculations presented in Supplemental Material <cit.>.
[sandvik12] A. W. Sandvik, Finite-size scaling and boundary effects in two-dimensional valence-bond solids, Phys. Rev. B 85, 134407 (2012).
[ostlund95] S. Östlund and S. Rommer, Thermodynamic Limit of Density Matrix Renormalization, Phys. Rev. Lett. 75, 3537 (1995).
[white07] S. R. White and A. L. Chernyshev, Néel Order in Square and Triangular Lattice Heisenberg Models, Phys. Rev. Lett. 99, 127004 (2007).
[sm] See Supplemental Material for discussion of the convergence of the DMRG calculations, level crossings in the 2D J-Q model and the 1D model with long-range interactions, additional analysis of the AFM order of the 2D J_1-J_2 model, as well as the level crossings of its two lowest singlet excitations.
[laflorencie] N. Laflorencie, I. Affleck, and M. Berciu, J. Stat. Mech. (2005) P12001.
[tobeappear] L. Wang, S. Capponi, H. Shao, and A. W. Sandvik (unpublished).
[schuler16] M. Schuler, S. Whitsitt, L. P. Henry, S. Sachdev, and A. M. Läuchli, Universal Signatures of Quantum Critical Points from Finite-Size Torus Spectra: A Window into the Operator Content of Higher-Dimensional Conformal Field Theories, Phys. Rev. Lett. 117, 210401 (2016).

§ SUPPLEMENTAL MATERIAL

§.§ Critical level crossings in the square-lattice spin-1/2 J_1-J_2 Heisenberg antiferromagnet

Ling Wang and Anders W. Sandvik

We have argued that the AFM–SL transition in the 2D J_1-J_2 Heisenberg model is associated with a level crossing between the lowest singlet excitation and the first quintuplet (S=2), while the singlet-triplet crossing is associated with the SL–VBS transition. We here provide further supporting evidence for this scenario.

In Sec. I, we first illustrate our stringent DMRG convergence checks and extrapolations of the low-energy levels. In Sec. II, we contrast the findings for the J_1-J_2 model with results for the J-Q model, where it is known that no SL phase intervenes between the AFM and VBS states. Accordingly, we show that the singlet-triplet and singlet-quintuplet crossing points flow with increasing system size to the same critical point (a deconfined quantum-critical point). We also investigate the critical scaling of the sublattice magnetization of the J-Q model on the cylinders and compare with the J_1-J_2 model. In Sec.
III we present further tests of the scaling behavior of the level crossing points and the sublattice magnetization of the J_1-J_2 model. The singlet-quintuplet crossing in the 2D J_1-J_2 model is analogous to a crossing point previously found in a spin chain with long-range interactions at its transition from a critical SL phase to an AFM phase <cit.>. In Sec. IV we provide further results for the 1D model, using the excited-level DMRG method to go to larger system sizes than in the past Lanczos calculations. In the 2D J_1-J_2 model, in addition to the singlet-quintuplet crossing at the AFM–SL transition, we also find a crossing between the two lowest singlet excitations, and in Sec. V we present the numerical results and analysis of this level crossing.

§.§ I. DMRG convergence procedures

In each DMRG calculation bounded by m Schmidt states, we start from a previously converged MPS with a smaller m and perform a number of DMRG sweeps until the energy converges sufficiently. The convergence criterion for an m-bounded MPS is that the total energy difference (i.e., not the difference in the average energy per site) between two successive full sweeps is less than 2× 10^-6, which we have confirmed to be sufficient by comparing with calculations done with less stringent criteria. We then check the convergence of the energies as a function of the discarded weight ϵ (which depends on m, with ϵ→ 0 as m →∞), defined in the standard way in DMRG calculations as the sum of discarded eigenvalues of the reduced density matrix.

In Fig. <ref> we show the convergence of the first two S=0 energies and the first S=2 level for an L=8 system at two g values close to g_c1 (the AFM–SL transition), using m up to 4000 in calculations with U(1) symmetry and m up to 2000 with SU(2) symmetry. In our analysis of the AFM-SL transition we used a singlet-quintuplet crossing in the main paper, and in Sec.
V we will also investigate the excited singlet-singlet crossing. With SU(2) symmetry implemented, the lowest state in calculations with S=0 fixed is the ground state, and we make sure to converge two additional states in this spin sector. At the AFM–SL transition, we further carry out calculations with S=2 for the lowest quintuplet. With only U(1) symmetry, the lowest state in the S^z=0 sector is the ground state, while the lowest state with S^z=2 is also the lowest excitation with S=2. To compute the two lowest singlet excitations close to the AFM–SL transition for L=8, one has to go to the 6th and 7th excitations in the S^z=0 sector. In Fig. <ref> the SU(2) DMRG eigenvalues nevertheless coincide very well with the corresponding U(1) energies in all cases when ϵ is small. All the states show exponentially fast convergence when ϵ→ 0, and we can obtain stable extrapolated energies.

For L=10, we show the energy convergence at two g values close to g_c2 (the SL–VBS transition) in Fig. <ref>, using m up to 12000 with U(1) symmetry and m up to 5000 with SU(2) symmetry. The SL–VBS phase transition is detected as the level crossing between the lowest singlet excitation and the lowest triplet. With SU(2) symmetry, the lowest state in the S=0 sector is the ground state and the lowest triplet is the ground state in the S=1 sector. To obtain the lowest singlet excitation used in our analysis in the nonmagnetic state, we target the second S=0 state near g_c2. With U(1) symmetry, the lowest S^z=0 state is the ground state, while the lowest state in the S^z=1 sector is the lowest triplet excitation. To compute the first excited singlet for g_c1 < g < g_c2, we need to target the third level with S^z=0 (since one of the triplet states also has S^z=0 and is lower in energy than the targeted singlet) but only need the first excitation when g>g_c2 (since the triplet is higher there). As seen in Fig.
<ref>, for small ϵ the SU(2) and U(1) energies again coincide very well. We regard the essentially perfect agreement between the SU(2) and U(1) calculations for large m (in the L=8 and 10 demonstrations above as well as in other cases studied) as evidence for sufficient convergence in both cases. We have estimated the remaining small systematic errors by comparing the U(1) and SU(2) extrapolations in detail and by varying the functional form used in the extrapolations.

§.§ II. Critical level crossings and order parameter of the J-Q model on a cylinder

In Ref. suwa_prb94.144416, the critical level crossings of the lowest singlet and triplet excitations in the J-Q model were studied using quantum Monte Carlo (QMC) simulations of L× L lattices with fully periodic (torus) boundaries. The decay rates of the spin-spin and dimer-dimer correlation functions in imaginary time were used to extract the gaps in the triplet and singlet channels, respectively. It was found that the finite-size level-crossing points g_c(L) approach a value g_c that is fully consistent with the AFM–VBS quantum critical point previously extracted by finite-size scaling of the order parameters. The scaling correction was found to be g_c(L) - g_c ∝ L^-2. The level crossing in this case is expected, given the known behaviors of the lowest singlet and triplet in the AFM and VBS states.

In the main text, we concluded that the J_1-J_2 model hosts an SL phase between the AFM and VBS states and that the AFM-SL transition is associated with a crossing between S=0 and S=2 excitations. It is then interesting to look for and investigate singlet-quintuplet level crossings also in the J-Q model, as a test that a second, spurious critical point is not found in this case.
In addition, it is also useful to study the singlet-triplet crossings with the same DMRG method that we have used for the J_1-J_2 model, and with the same cylindrical lattices, to check that we can correctly reproduce the AFM-VBS transition point even in this geometry and with much more limited system sizes than in the QMC calculations. A related question is whether the change of lattice geometry will affect the power-law scaling behavior of the finite-size crossing points g_c(L).

We study the lowest singlet-triplet and singlet-quintuplet gap crossings in the standard J-Q model <cit.>, using the DMRG method with U(1) symmetry on 2L× L cylinders with L=4,6,8,10. Before presenting the DMRG results, we recall some of the well-studied ground-state properties of the model from previous QMC simulations in both the torus and cylinder geometries <cit.>. At Q=0, the J-Q model reduces to the standard 2D Heisenberg model with AFM order, while at J=0 the ground state is a columnar VBS with four-fold degeneracy on a torus. When tuning the coupling ratio g ≡ J/Q from +∞ to 0, the system goes through a deconfined quantum phase transition from the AFM phase to the columnar VBS phase at g_c≈ 0.045, where the lowest singlet and triplet gaps cross each other when L →∞, as mentioned above. In addition, it is known that the ground state of the J-Q model on 2L× L cylinders in the VBS phase is a non-degenerate columnar VBS state with x-oriented dimers. In our DMRG calculations presented below, we resolve that, in the VBS phase, the ground state has momentum k_y=0, and above it there is a singlet excited state with momentum k_y=π.
The k_y=π singlet, which is related to the open x-direction boundary condition, lies below the first triplet excitation and retains a non-vanishing gap to the unique ground state in the thermodynamic limit.

Figure <ref> shows the gaps versus g on 2L× L cylinders with L=6,8,10, with the singlets and triplets analyzed in (a), and the singlets and quintuplets in (b). We fit second-order polynomials to the data and interpolate for the crossing points. As L increases, the singlet-triplet crossing points g_c2(L) drift toward g_c from the left, while the singlet-quintuplet crossing points g_c1(L) drift toward g_c from the right. It is again natural to check whether the finite-size corrections to the crossing points are consistent with the same form, L^-2, as in the model on a torus. Fig. <ref>(a) shows g_c1(L) and g_c2(L) versus L^-2 along with a line drawn through the L=8,10 points. These simple extrapolations give g_c2=0.043 (singlet-triplet) and g_c1=0.066 (singlet-quintuplet). Considering the small systems and the extrapolation without any corrections, these results are both in reasonable agreement with the known critical point, g_c ≈ 0.045. The results also support leading L^-2 corrections for the cylindrical lattices and lend further credence to our use of this form of the corrections in the J_1-J_2 model. In contrast, if we assume that the crossing points drift as L^-1, as shown in Fig. <ref>(b), the extrapolated points g_c2 and g_c1 are very different and disagree with the known critical coupling.

We analyze the gaps Δ_c1(L) and Δ_c2(L) of the J-Q model at the L-dependent crossing points in Fig. <ref>. We have multiplied the gaps by L and graph the results versus L^-1. We see clear signs of convergence to constants, confirming that the gaps close as L^-1 at the critical point, as expected since the dynamic critical exponent is z=1.

Next, we consider the AFM order parameter of the J-Q model, the squared staggered magnetization. Fig.
<ref> shows ⟨ m_s^2⟩ computed in the center L× L section of 2L× L cylinders at various coupling ratios J/Q. The results are graphed versus L^-1 on log-log scales, along with the results for the same quantity (defined in the same way on the central parts of the cylinders) for the J_1-J_2 model at J_2/J_1=0.5. Here the results for the J-Q model are obtained from QMC simulations (with the same cylindrical boundary conditions that we use in the DMRG calculations), in order to reach the same system sizes as for the J_1-J_2 model. The red line on the log-log plot corresponds to a power-law form of ⟨ m^2_s⟩ at g_c. Away from g_c, inside the AFM phase, we observe that ⟨ m^2_s⟩ curves upward for the larger sizes relative to the critical power-law behavior, as expected when the order parameter scales to a non-zero value. This is in contrast to the behavior in the case of the J_1-J_2 model at J_2/J_1=0.5, where ⟨ m^2_s⟩ decays almost in the same way as in the critical J-Q model, though on close examination one can see a clear downward trend with increasing size. It therefore appears very unlikely that a non-zero value would survive in the J_1-J_2 model when L →∞; thus the results lend further support to the SL scenario.

§.§ III. Additional tests of scaling in the J_1-J_2 model

In the main text we showed that leading L^-2 corrections also describe well the drifts of crossing points in the case of the J_1-J_2 model. In Fig. <ref>(a) we again show the results for L=6,8,10 (leaving out L=4 for clarity) graphed against L^-2 together with a simple fit based on just the two largest system sizes. Figure <ref>(b) shows the same data plotted versus L^-1, again along with extrapolations using only the two largest system sizes. Since the overall size dependence of the singlet-triplet crossing is weak, its extrapolation only changes marginally from the one based on the L^-2 form.
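The crossing-point analysis used throughout this work (quadratic fits to two gap curves, interpolation of the crossing g_c(L), and a linear-in-L^-2 extrapolation through the two largest sizes) can be sketched as follows. The gap curves and numerical values below are synthetic placeholders constructed to cross at g=0.5, not our DMRG data:

```python
import numpy as np

def crossing_point(g, gap_a, gap_b):
    """Fit second-order polynomials to two gap curves and return the
    coupling where they cross (keeping the root inside the data window)."""
    diff = np.polyfit(g, gap_a, 2) - np.polyfit(g, gap_b, 2)
    roots = np.roots(diff)
    roots = roots[np.isreal(roots)].real
    return roots[(roots >= g.min()) & (roots <= g.max())][0]

def extrapolate_crossings(L, gc):
    """Line through the crossing points of the two largest sizes,
    plotted versus 1/L^2; returns the L -> infinity intercept."""
    x = 1.0 / np.asarray(L, dtype=float) ** 2
    slope, intercept = np.polyfit(x[-2:], np.asarray(gc)[-2:], 1)
    return intercept

# synthetic gap curves that cross at g = 0.5 + O(1/L^2)
g = np.linspace(0.4, 0.6, 5)
sizes = [6, 8, 10]
gc_of_L = []
for L in sizes:
    gap_a = 0.8 - 1.2 * (g - 0.5) + 0.8 * (g - 0.5) ** 2 + 0.5 / L**2
    gap_b = 0.8 + 1.2 * (g - 0.5) + 0.4 * (g - 0.5) ** 2 - 0.5 / L**2
    gc_of_L.append(crossing_point(g, gap_a, gap_b))

gc_inf = extrapolate_crossings(sizes, gc_of_L)  # close to 0.5 by construction
```

In the actual analysis, the gap curves come from the DMRG levels and the reliability of the intercept is assessed by comparing L^-2 and L^-1 forms and by adding higher-order corrections, as described in the text.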
An extrapolation with a higher-order correction (not shown in the figure) shifts the value down even closer to the previous estimate. The L^-1 extrapolated singlet-quintuplet point is significantly higher than previously, but looking at the trend including the smaller sizes makes it clear that higher-order fits here will also reduce the extrapolated value. As mentioned in the main text, such higher-order fits do not match the data as well as in the case of leading L^-2 corrections. These results with different fitting forms lend support to the existence of a gap between the extrapolated g_c1 and g_c2 values in the J_1-J_2 model and the absence of such a gap in the J-Q model. In the main text we have argued that g_c1≠ g_c2 reflects the presence of an SL phase intervening between the AFM and VBS phases in the J_1-J_2 model, while g_c1 = g_c2 reflects the known deconfined quantum-critical AFM–VBS point in the J-Q model. The well-established L^-2 scaling in the latter case, from large-scale QMC simulations <cit.> as well as the results in Sec. II above, allows us to make a further argument against the deconfined quantum-criticality scenario in the J_1-J_2 model: If the two models both host critical AFM–VBS points based on the deconfined universality class, they should also both exhibit leading L^-2 drifts of the crossing points and common extrapolated crossing points g_c1=g_c2. However, the results shown in Fig. <ref>(a) and Fig. 3 in the main paper are inconsistent with a common crossing point, unless the system sizes we have access to here are not yet in the asymptotic regime where scaling with small corrections is applicable. While we cannot in principle exclude that a cross-over to a single point, a direct AFM–VBS transition, occurs on some larger length scale, we see no a priori physical reason for such large finite-size effects (given their absence in the J-Q model) and find this scenario unlikely.
Thus, based on all the present evidence, we conclude that the deconfined critical point most likely is expanded into a stable nonmagnetic phase in the J_1-J_2 model.

In Fig. 4 of the main paper we analyzed the sublattice magnetization of the J_1-J_2 model inside the putative SL phase and found power-law behaviors in the inverse system size. Here we present additional results and analysis both below and above the crossing point g_c1≈ 0.46, demonstrating the existence of long-range AFM order for g < g_c1 and the absence of order for g>g_c1. Fig. <ref>(a) shows results graphed versus L^-1 together with second-order polynomials fitted to the data for L=8,10,12, representing the expected asymptotic L^-1 form in the AFM state and a likewise expected L^-2 next correction. The curves extrapolate to clearly positive values for g=0.40, 0.42, and 0.44, while the value at g=0.46 is almost zero. For larger g the extrapolated values are negative, indicating that the functional form used is incorrect. One should expect the neglected higher-order corrections to also influence the extrapolated values for smaller g, and the deviations between the fitted curve and data at L=6 give some indication of the size of the extrapolation errors for g=0.40-0.46. The results are consistent with the long-range order vanishing at g≈ 0.46, in excellent agreement with the result g_c1≈ 0.46 obtained from the singlet-quintuplet crossing points. Cubic fits (not shown) to the L≥ 6 data result in slightly larger extrapolated values of ⟨ m_s^2⟩, but the g dependence is less smooth than with the quadratic fits (likely reflecting sensitivity to the small numerical errors in the individual data points and neglected corrections of still higher order). For g≥ 0.46 the cubic polynomials produce negative extrapolated values, supporting the conclusion drawn from the quadratic extrapolations that the AFM order vanishes close to g=0.46.

Further support for a critical AFM point at g≈ 0.46 is provided in Fig. <ref>(b).
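The quadratic extrapolations of the order parameter described above amount to fitting ⟨m_s^2⟩(L) = a + b/L + c/L^2 and reading off the intercept a as the L→∞ value. A minimal sketch, with synthetic coefficients chosen for illustration (not fitted values from our results):

```python
import numpy as np

def extrapolate_ms2(L, ms2):
    """Fit <m_s^2>(L) = a + b/L + c/L^2 and return the L -> infinity value a."""
    x = 1.0 / np.asarray(L, dtype=float)
    c, b, a = np.polyfit(x, ms2, 2)  # coefficients of x^2, x^1, x^0
    return a

# synthetic data with a known thermodynamic-limit value a = 0.05
L = np.array([8.0, 10.0, 12.0])
ms2 = 0.05 + 0.3 / L + 0.4 / L**2
a_inf = extrapolate_ms2(L, ms2)  # recovers 0.05 up to round-off
```

With only three sizes the quadratic passes through the data exactly; in the analysis above, the sensitivity of a to the fitting form (quadratic versus cubic) is what sets the extrapolation error bars.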
Here we show the data on log-log scales, with straight lines (corresponding to power laws) drawn through the L=6,8,10 data. For g=0.40, 0.42, and 0.44, the L=12 points fall above the lines, reflecting an upward curvature as L increases and AFM order is established. The behavior is similar to that of the J-Q model in the AFM phase close to the critical point, e.g., at J/Q=0.1 in Fig. <ref>. For g=0.46, all four data points follow the fitted line very closely, while for larger g the L=12 points fall below the fitted lines, reflecting negative curvature. In Fig. 4 of the main paper we fitted the data in the putative SL phase to a power law with an additional correction of higher power, required in order to fit all the available data for g > 0.46. Overall, these results and those in the main paper support a scenario of a critical AFM–SL point at g_c1≈ 0.46 at which the scaling corrections are small, while for larger g inside the SL phase the exponent of the asymptotic power law changes and corrections are needed to explain the data on the relatively small systems accessible in DMRG calculations.

§.§ IV. Spin chains with long-range interactions

The spin-1/2 J_1-J_2 Heisenberg chain is a celebrated example of a system hosting a quantum phase transition between a quasi-long-range ordered (QLRO) phase and an ordered VBS phase. Defining g=J_2/J_1, the transition is located at g_c≈ 0.2411 <cit.> and is accompanied by a critical level crossing of the lowest singlet and triplet excitations. To study a quantum phase transition between 1D long-range AFM ordered and QLRO ground states, Laflorencie et al. proposed <cit.> a Heisenberg chain with long-range interactions, with Hamiltonian

H = ∑_i=1^L [𝐒_i·𝐒_i+1 + λ∑_r=2^L/2 J_r 𝐒_i·𝐒_i+r],

where the couplings are of the form

J_r = (-1)^r-1/r^α,

and α and λ are both adjustable parameters.
Later on, to look for a possible 1D quantum phase transition between AFM and VBS phases, a modification of the model was introduced in which the second-neighbor coupling J_2 changes sign, making it a frustrated term <cit.>:

H = ∑_i=1^L ∑_r=1^L/2 J_r 𝐒_i·𝐒_i+r,

where the couplings are given by

J_2 = g,  J_r≠ 2 = (-1)^r-1/r^α (1+∑_r=3^L/2 1/r^α)^-1,

where the adjustable parameters are α and g, and the normalization of J_r≠2 is chosen such that the sum of the magnitudes |J_r| of all nonfrustrated (r≠2) interactions equals 1.

For the unfrustrated chain, a curve of continuous AFM–QLRO transitions was mapped out in the (α,λ) plane <cit.>. In the frustrated chain, it was found that, by fixing the frustration strength g and tuning the exponent α controlling the long-range interactions, two quantum phase transitions take place along this path <cit.>: a QLRO–VBS transition with a singlet-triplet excitation level crossing, as in the J_1-J_2 chain, and, for smaller α, an AFM–QLRO transition accompanied by another level crossing. This second crossing was claimed to be a singlet-singlet crossing, but it turns out (as found in the course of the work reported here) that the total spin S of one of the levels was misidentified: it is actually an S=2 quintuplet, and the crossing discussed is a singlet-quintuplet crossing. In other respects we fully agree with the previous results. Thus, the behavior of the frustrated long-range interacting chain upon increasing α is very similar to what we have observed in the square-lattice J_1-J_2 model upon increasing g=J_2/J_1.

In the case of the unfrustrated model with J_r given by Eqs. (<ref>), we also expect the AFM–QLRO quantum phase transition to be accompanied by a singlet-quintuplet excitation crossing, though level crossings were not discussed in Ref. laflorencie.
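As a concrete check of the coupling definitions above, the following sketch builds the frustrated couplings J_r for given L, α, and g, and verifies the normalization ∑_r≠2 |J_r| = 1. It is a standalone illustration (function name and parameter values are ours), not part of the DMRG code:

```python
import numpy as np

def frustrated_couplings(L, alpha, g):
    """J_2 = g (the frustrated term); for r != 2,
    J_r = (-1)^(r-1) r^(-alpha) / (1 + sum_{r=3}^{L/2} r^(-alpha))."""
    r = np.arange(1, L // 2 + 1)
    norm = 1.0 + sum(k ** (-alpha) for k in range(3, L // 2 + 1))
    J = (-1.0) ** (r - 1) * r ** (-float(alpha)) / norm
    J[1] = g  # array index 1 corresponds to distance r = 2
    return r, J

r, J = frustrated_couplings(L=48, alpha=2.0, g=0.3)
total = np.abs(J[r != 2]).sum()  # equals 1 by construction of the normalization
```

The signs alternate as (-1)^(r-1), so the non-frustrated couplings at odd r are antiferromagnetic and those at even r (other than the independently set J_2) are ferromagnetic.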
Here we revisit the quantum phase transitions in both the frustrated and unfrustrated chain models, analyzing level crossings obtained by the SU(2) DMRG method to push to larger system sizes than was possible with the previous Lanczos calculations in Ref. <cit.>. This will provide us with further, indisputable evidence that the AFM–QLRO transition in the 1D system is indeed accompanied by a singlet-quintuplet crossing. This in turn gives added credence to our claim of this scenario for the 2D J_1-J_2 model. In the unfrustrated model we set λ=1 in Eq. (<ref>) and study the AFM-QLRO quantum phase transition by tuning the long-range interaction exponent α^-1. In the frustrated model, Eq. (<ref>), we choose a path with fixed g=0.3 in Eq. (<ref>) and vary α^-1 from 1 to 0, thus passing through both the AFM–QLRO and QLRO–VBS transitions. In Fig. <ref> we plot the lowest singlet, triplet and quintuplet gaps of L=48 chains versus α^-1 at (a) fixed λ=1 in the unfrustrated model and (b) fixed g=0.3 in the frustrated chain. In both models, the crossing points of the lowest singlet-quintuplet excitations indicate the AFM-QLRO quantum phase transitions, based on the behaviors previously found for the sublattice magnetization. In the frustrated case, the crossing of the lowest singlet and triplet excitations marks the QLRO-VBS quantum phase transition, in analogy with the case of the conventional J_1-J_2 Heisenberg chain without the J_r>2 terms (which also corresponds to α=∞ in the long-range model). We further examine the drifts of these critical level crossings for different system sizes, L=32,40,48, in the critical regions. In Fig. <ref> the gaps are fitted to second-order polynomials to interpolate the finite-size critical points α^-1_c(L) (singlet-quintuplet in the unfrustrated case), α^-1_c1(L) (singlet-quintuplet in the frustrated case), and α^-1_c2(L) (singlet-triplet in the frustrated case). Fig. 
<ref> shows the size dependence of all these crossing points versus L^-2 along with lines drawn through the data for the largest two sizes, L=40 and 48. We also show fitted curves including a higher-order correction, which give the infinite-size extrapolated values α^-1_c=0.4434, α^-1_c1=0.476, and α^-1_c2=0.316. In the unfrustrated model, the critical value α^-1_c=0.4434, i.e., α_c=2.255, is fully consistent with the quantum critical point α_c=2.225± 0.025 found by analyzing QMC results for the AFM order parameter in Ref. <cit.>. Thus, there is no doubt that the singlet-quintuplet crossing really marks the AFM–QLRO transition in the unfrustrated chain and there is no reason why this should not be the case also in the frustrated model; indeed the behavior of the order parameters (not shown here) also supports the existence of the phase transition. §.§ V. Singlet-singlet level crossing As seen in Fig. 2 in the main paper, there is also a singlet-singlet level crossing in the neighborhood of the singlet-quintuplet point analyzed in the main paper. We call the singlet-singlet crossing point g_c1^'(L) and investigate its behavior here. In Fig. <ref>(a) we demonstrate the singlet-singlet level crossing for different system sizes and study the trend of this crossing point as a function of the inverse system size in Fig. <ref>(b). A plausible L^-2 correction is again assumed here. Then a rough extrapolation to infinite size by a line drawn through the L=8 and L=10 points in the figure gives g^'_c1≈ 0.454. On including a correction with the same fitting form as in the singlet-triplet case, the extrapolated value moves slightly down to g^'_c1=0.453. This value is very close to g_c1=0.463, marking the AFM-SL ground-state phase transition as given by the singlet-quintuplet crossing point. 
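The two-step procedure used above — interpolating each gap with a low-order polynomial in the control parameter to locate a finite-size crossing, then extrapolating the crossing points linearly in L^-2 through the two largest sizes — can be sketched as follows (a minimal implementation of our own, run here on synthetic gap data rather than the actual DMRG gaps):

```python
def quad_through(p, q, r):
    """Coefficients (a, b, c) of the parabola a*x^2 + b*x + c through three
    (x, y) points, via Lagrange interpolation."""
    (x0, y0), (x1, y1), (x2, y2) = p, q, r
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -y0 * (x1 + x2) / d0 - y1 * (x0 + x2) / d1 - y2 * (x0 + x1) / d2
    c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c

def crossing(pts_a, pts_b, lo, hi):
    """Crossing point of two interpolated gap curves, each sampled at three
    parameter values, by bisection on the difference of the fitted parabolas."""
    aa, ba, ca = quad_through(*pts_a)
    ab, bb, cb = quad_through(*pts_b)
    f = lambda x: (aa - ab) * x * x + (ba - bb) * x + (ca - cb)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def extrapolate_Linf(L1, x1, L2, x2):
    """Infinite-size value assuming x_c(L) = x_inf + b / L^2, from the two
    largest system sizes."""
    u1, u2 = L1 ** -2, L2 ** -2
    b = (x1 - x2) / (u1 - u2)
    return x1 - b * u1
```

In the actual analysis the curvature of x_c(L) on small sizes is handled by adding a higher-order correction to the pure L^-2 form, as described in the text.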
Thus, it seems plausible that the AFM-SL transition is associated with both singlet-singlet and singlet-quintuplet excitation crossings, though larger system sizes would be needed to confirm whether the points really flow to the same values. It should be noted that we have not found any singlet-singlet crossing at the AFM–QLRO transition in the case of the 1D chain discussed above in Sec. IV. The singlet-quintuplet crossing point, along with its scaling in energy as L^-1 shown in Fig. 3(b) of the main paper, is also a more clear-cut indicator of a transition out of the AFM state in the sense that we know that the S=2 level is a quantum-rotor state that scales as L^-2 in the AFM state. In principle, the singlet-singlet crossing could be accidental and unrelated to the AFM–SL transition, though the close proximity to the singlet-quintuplet crossing in our extrapolations based on rather small sizes would suggest that it actually is also associated with the transition in the 2D model.
Ling Wang and Anders W. Sandvik, "Critical level crossings and gapless spin liquid in the square-lattice spin-1/2 J_1-J_2 Heisenberg antiferromagnet," arXiv:1702.08197 [cond-mat.str-el].
Stability of the sum of two solitary waves for (gDNLS) in the energy space
	Xingdong Tang (Beijing Computational Science Research Center, No. 10 West Dongbeiwang Road, Haidian District, Beijing, China, 100193, txd@csrc.ac.cn)
	Guixiang Xu (Institute of Applied Physics and Computational Mathematics, P. O. Box 8009, Beijing, China, 100088, xu_guixiang@iapcm.ac.cn)
 December 30, 2023
===================== 2010 Mathematics Subject Classification: Primary 35L70; Secondary 35Q55. In this paper, we continue the study in <cit.>. We use the perturbation argument, modulational analysis and the energy argument in <cit.> to show the stability of the sum of two solitary waves with weak interactions for the generalized derivative Schrödinger equation (gDNLS) in the energy space. Here (gDNLS) does not enjoy the Galilean transformation invariance, the pseudo-conformal invariance or the gauge transformation invariance, and the case σ>1 that we consider corresponds to the L^2-supercritical case. § INTRODUCTION In this paper, we consider the stability of the solitary waves for the generalized derivative Schrödinger equation (gDNLS for short) in H^1(): { ı u_t+u_xx+ı |u|^2σu_x=0, t,x∈×, u(0,x)=u_0(x)∈ H^1(), . where u is a complex-valued function of (t,x)∈×, and σ>0. With σ=1, (<ref>) has appeared as a model for Alfvén waves in plasma physics <cit.>. The equation (<ref>) is Ḣ^σ-1/2σ-critical since the scaling transformation u(t,x)↦ u_λ(t,x)=λ^1/2σu(λ^2t, λ x) leaves both (<ref>) and the Ḣ^σ-1/2σ-norm invariant. 
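For completeness, the value of the critical regularity follows from a one-line computation with the scaling transformation above:

```latex
\|u_\lambda(t)\|_{\dot H^s}
  = \lambda^{\frac{1}{2\sigma}}\,\|u(\lambda^2 t, \lambda\,\cdot\,)\|_{\dot H^s}
  = \lambda^{\frac{1}{2\sigma} + s - \frac{1}{2}}\,\|u(\lambda^2 t)\|_{\dot H^s},
\qquad\text{using}\quad
\|f(\lambda\,\cdot\,)\|_{\dot H^s} = \lambda^{s-\frac{1}{2}}\,\|f\|_{\dot H^s},
```

so the norm is invariant precisely when 1/(2σ) + s - 1/2 = 0, i.e. s=(σ-1)/(2σ); in particular s=0 (the mass-critical case) corresponds to σ=1.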
The mass, momentum and energy of the solution u(t,x) of (<ref>) are defined as follows: M(u)(t)=1/2∫ |u(t,x)|^2 , P(u)(t)=-1/2 Im∫u̅u_x(t,x) , E(u)(t)= 1/2∫|u_x(t,x)|^2+1/(2(σ+1)) Im∫|u|^2σu̅u_x(t,x). They are conserved under the flow (<ref>) according to the phase rotation invariance, spatial translation invariance and time translation invariance respectively. Compared with the nonlinear Schrödinger equation, the equation (<ref>) does not enjoy the Galilean invariance or the pseudo-conformal invariance any more. Local well-posedness for (<ref>) with σ≥ 1 in H^1() has been worked out by Hayashi and Ozawa <cit.>. They combined the compactness method with an L^4_IW^1,∞() estimate to construct the local-in-time solution with arbitrary initial data in the energy space. Since (<ref>) is Ḣ^1-subcritical, the maximal lifespan interval only depends on the H^1 norm of the initial data. More precisely, we have: Let σ≥ 1. For any u_0 ∈ H^1() and t_0 ∈, there exists a unique maximal-lifespan solution u:I×→ to (<ref>) with u(t_0)=u_0, and the map u_0→ u is continuous from H^1() to C(I, H^1())∩ L^4_loc(I, W^1,∞()). Moreover, the solution also has the following properties: * I is an open neighborhood of t_0. * The mass, momentum and energy are conserved, that is, for all t∈ I, M(u)(t)=M(u)(t_0), P(u)(t)=P(u)(t_0), E(u)(t)=E(u)(t_0). * If sup(I)<+∞ (or inf(I)>-∞), then lim_t→sup(I)‖∂_x u(t)‖_L^2=+∞ (lim_t→inf(I)‖∂_x u(t)‖_L^2=+∞, respectively). * If ‖u_0‖_H^1 is sufficiently small, then u is a global solution. The local well-posedness result for (<ref>) with σ≥ 1 in H^1/2 is due to Takaoka <cit.> by the Fourier restriction norm method and gauge transformation, and to Santos <cit.> by the local smoothing effect of the Schrödinger operator. The different features between the case σ=1 and the case σ>1 are that the former is completely integrable and enjoys the gauge transformation invariance. In addition, there are some numerical stability analyses and blow-up results for (<ref>) in the energy space; please refer to <cit.>. 
At the same time, it is well-known in <cit.> that the equation (<ref>) has a two-parameter family of solitary wave solutions of the form u(t,x)=Q_ω,c(x-ct)e^ıω t, where 4ω > c^2, Q_ω,c(x)=Φ_ω,c(x)exp{ı(c/2)x-(ı/(2σ + 2))∫^x_-∞Φ_ω,c^2σ(y)ỵ}, and Φ_ω,c(x) = [ (σ+1)(4ω-c^2)/( 2√(ω)(cosh(σ√(4ω-c^2)x)-c/(2√(ω))) ) ]^1/(2σ) is the unique positive solution of - ∂_x^2Φ_ω,c+ (ω - c^2/4)Φ_ω,c +(c/2)|Φ_ω,c|^2σΦ_ω,c - ((2σ + 1)/(2σ + 2)^2)|Φ_ω,c|^4σΦ_ω,c= 0, up to phase rotation and spatial translation invariance. By the stability criteria in <cit.> <cit.>, it was shown that these solitary waves are orbitally stable when σ∈ (0, 1), and orbitally unstable when σ≥ 2, in <cit.>. For the case σ=1 and 4ω > c^2. On one hand, by the convex analysis in <cit.>, the structure analysis[Since the nonlinearity has a derivative in (<ref>), the structure analysis of the solitary waves before using the variational argument in <cit.> is used to transform the quasilinear problem into a semilinear problem in principle in <cit.>.] and the variational characterization of the solitary waves, Miao, Tang and Xu obtained the global well-posedness result in some invariant subset K^+ of the energy space in <cit.>, where the construction of K^+ is related to the variational characterization of the solitary wave. On the other hand, Colin and Ohta <cit.> made use of the concentration compactness argument and proved that the above solitary waves are orbitally stable in the energy space. Because (<ref>) with σ=1 is an integrable system, Nakamura and Chen obtained the explicit formula of the multi-soliton solutions of (<ref>) in <cit.> by Hirota's bilinear transform method. Recently, Miao, Tang and Xu in <cit.> and Le Coz and Wu in <cit.> independently showed the stability of the sum of the multi-soliton waves with weak interactions in the energy space, where the arguments are both based on the perturbation argument, the modulation stability and the energy argument in <cit.>. For the case σ∈(1,2) and 4ω > c^2. 
Fukaya, Hayashi and Inui showed the variational characterization of the solitary waves of (<ref>) in <cit.>, i.e. Q_ω,c is a minimizer of the following problem: d(ω,c) = inf{ S_ω,c(φ) : φ∈ H^1()∖{0},    K_ω,c(φ)=0 }, where the action functional is defined by S_ω,c(φ)=E(φ)+ ω M(φ) + c P(φ), and the scaling derivative functional is defined by K_ω,c(φ) = (/λ̣) S_ω,c(λφ)|_λ=1. In addition, they also obtained the global well-posedness of the solution to (<ref>) in a similar invariant subset K^+ to that in <cit.>. Next we consider its stability in the energy space. Let z_0=z_0(σ) be the unique solution in (-1,1) of F(z_0;σ) =0, where F(z;σ) is defined by F(z; σ) = (σ-1)^2 ( ∫_0^∞ (cosh y -z)^-1/σỵ)^2- ( ∫_0^∞ (cosh y -z)^-1/σ-1(z cosh y -1)ỵ)^2. By the stability criteria in <cit.>, Liu, Simpson and Sulem numerically showed that the solitary wave is stable for c∈ (-2√(ω), 2z_0√(ω)) and unstable for c∈ (2z_0√(ω), 2√(ω)) in <cit.>. That is, Let σ∈(1,2) and z_0=z_0(σ)∈(-1,1) satisfy F(z_0;σ) =0, where F(z;σ) is defined by (<ref>). Let (ω^0, c^0)∈^2 satisfy c^0∈(-2√(ω^0) ,2 z_0 √(ω^0)); then the solitary wave Q_ω^0,c^0(x-c^0t)e^iω^0 t of (<ref>) is orbitally stable in the energy space. That is, for any ϵ>0, there exists δ>0 such that if u_0∈ H^1() satisfies ‖u_0(·)- Q_ω^0,c^0(·-x^0)e^iγ^0‖_H^1()<δ for some (x^0, γ^0)∈^2, then the solution u(t) of (<ref>) exists globally in time and satisfies sup_t≥0inf_(y, γ)∈^2‖u(t,·)- Q_ω^0,c^0(·-y)e^iγ‖_H^1()<ϵ. For the case σ∈(3/2, 2) and c^0=2z_0√(ω^0), Fukaya showed that the traveling wave is still unstable in <cit.>. We note that it is still an open problem whether the solitary waves are stable or not for general σ>0, as well as in the critical case c^0= 2√(ω^0). In fact, the solitary waves with the critical parameter c^0= 2√(ω^0) have polynomial decay, and the difficulty is that there is no spectral gap for the linearized operator around the solitary wave. 
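The explicit formula for Φ_ω,c and the stationary equation it solves can be cross-checked numerically. The following is our own sanity-check script (sample parameters σ=1.5, ω=1, c=0.5, which satisfy 4ω>c^2), approximating Φ'' by central differences:

```python
import math

def phi(x, sigma, omega, c):
    """Solitary wave profile: Phi^(2 sigma) = (sigma+1)(4 omega - c^2)
    / (2 sqrt(omega) cosh(sigma sqrt(4 omega - c^2) x) - c), for 4 omega > c^2."""
    k = math.sqrt(4.0 * omega - c * c)
    num = (sigma + 1.0) * (4.0 * omega - c * c)
    den = 2.0 * math.sqrt(omega) * math.cosh(sigma * k * x) - c
    return (num / den) ** (1.0 / (2.0 * sigma))

def ode_residual(x, sigma, omega, c, h=1e-3):
    """Residual of -Phi'' + (omega - c^2/4) Phi + (c/2) Phi^(2 sigma + 1)
    - (2 sigma + 1)/(2 sigma + 2)^2 Phi^(4 sigma + 1) at the point x."""
    p = lambda y: phi(y, sigma, omega, c)
    d2 = (p(x + h) - 2.0 * p(x) + p(x - h)) / (h * h)
    return (-d2 + (omega - c * c / 4.0) * p(x)
            + 0.5 * c * p(x) ** (2.0 * sigma + 1.0)
            - (2.0 * sigma + 1.0) / (2.0 * sigma + 2.0) ** 2
            * p(x) ** (4.0 * sigma + 1.0))
```

On a grid x∈[-3,3] the residual stays at the level of the finite-difference error, consistent with Φ_ω,c being an exact solution; the profile also exhibits the exponential decay used repeatedly in the interaction estimates below.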
In this paper, we consider the stability of the sum of two solitary waves for (<ref>) with σ∈ (1,2) and c^0_k∈ (-2√(ω^0_k), 2z_0(σ)√(ω^0_k)), k=1, 2. As far as we know, the integrability (non-integrability) of (<ref>) is not clear, and the existence (nonexistence) of explicit multi-soliton solutions is not obvious. Here we use the argument in <cit.> (see also <cit.>) and the references therein. The main result is the following. Let σ∈(1,2) and z_0=z_0(σ)∈(0,1) satisfy F(z_0;σ) =0, where F(z;σ) is defined by (<ref>). Let (ω^0_k, c^0_k)∈^2, k=1,2 satisfy (a) Nonlinear stability: c^0_k∈(-2√(ω^0_k),2 z_0 √(ω^0_k)) for k=1, 2. (b) Technical assumption: (ω^0_2-ω^0_1)/(c^0_2-c^0_1)>0. (c) Relative speed: c^0_1 < (ω^0_2-ω^0_1)/(c^0_2-c^0_1), and 4(ω^0_2-ω^0_1)/(c^0_2-c^0_1)<c^0_2. Then there exist positive numbers C, δ_0, θ_0 and L_0, such that if 0<δ<δ_0, L>L_0 and ‖u_0(·)-∑^2_k=1Q_ω^0_k, c^0_k(·-x^0_k)e^iγ^0_k‖_H^1()≤δ, with x^0_2-x^0_1>L, then the solution u(t) of (<ref>) exists globally in time and there exist functions x_k(t) and γ_k(t), k=1,2 such that for any t≥ 0, ‖u(t,·)-∑^2_k=1Q_ω^0_k, c^0_k(·-x_k(t))e^iγ_k(t)‖_H^1()≤ C(δ+e^-θ_0 L/2). *The function F(z;σ) and the existence of z_0. In order to use the abstract functional analysis argument in <cit.>, Liu, Simpson and Sulem introduced the function F(z;σ) to obtain the stability (instability) of single-soliton solutions of (<ref>) in <cit.>. The function F(z;σ) is closely related to the determinant of the Hessian d”(ω,c). It numerically turns out that for any fixed σ∈(1,2), the function F(z;σ) is monotonically decreasing with respect to z and has exactly one root z_0 in the interval (-1,1). See Figure <ref>. *The technical assumption: (ω^0_2-ω^0_1)/(c^0_2-c^0_1)>0. 
Because of the fact that the radiation term cannot separate from the solitary waves along the flow (<ref>), this technical assumption allows us to deal with some "bad" terms with a good sign in (<ref>); see the monotonicity formulas in Section <ref> for more details. *The relative speed assumption: In fact, it is sufficient from our proof that c^0_1 < 2(ω^0_2-ω^0_1)/(c^0_2-c^0_1) <c^0_2. However, we suppose c^0_1 < (ω^0_2-ω^0_1)/(c^0_2-c^0_1), and 4(ω^0_2-ω^0_1)/(c^0_2-c^0_1)<c^0_2 for convenience. In addition, combining 4(ω^0_2-ω^0_1)/(c^0_2-c^0_1)<c^0_2 with (ω^0_2-ω^0_1)/(c^0_2-c^0_1)>0, we immediately obtain that c_2^0 and z_0(σ) need to be positive. * The stability of the sum of two solitary waves can easily be extended to that of k solitary waves, k≥ 3. At last, the paper is organized as follows. In Section <ref>, we introduce the linearized operator around the solitary wave, and show the coercivity property of the linearized operator under the geometric constraints. In Section <ref>, we give the modulation analysis of the solution around the sum of two solitary waves with weak interactions. In Section <ref>, we introduce some extra monotonicity formulas and their variation along the flow (<ref>). In Section <ref>, we firstly introduce a localized action functional, which is almost conserved by the monotonicity formula and the conservation laws of mass, momentum and energy, to refine the energy estimate on the radiation term in the modulation analysis of the solution; secondly, we use some monotonicity formulas to refine the estimates of the parameter variations |ω_k(t)-ω_k(0)|+|c_k(t)-c_k(0)|, k=1, 2, in addition to the conservation laws of mass and momentum. These refined estimates improve the energy estimate of the radiation term in the modulation analysis and imply Theorem <ref> together with the bootstrap argument in <cit.> (see also <cit.>). 
In Appendix A, for the solitary waves Q_k^0(x)=Q_ω^0_k, c^0_k(x), k=1, 2, which satisfy the conditions in Theorem <ref>, we verify the fact that 2M(Q_k^0)‖∂_x Q_k^0‖_L^2^2 - 4[P(Q_k^0)]^2≠ 0, which is used to show the non-degenerate condition (<ref>). In Appendix B, we give the expansion of the action functional 𝒮(t) (i.e., Lemma <ref>) in detail. § PRELIMINARY RESULTS In this section, we give some basic facts about the solitary waves for (<ref>). Let (ω, c)∈^2 with 4ω>c^2, and let u(t,x)=φ_ω,c(x-ct)e^ıω t be a solution of (<ref>); it is easy to check that φ_ω,c satisfies ωφ_ω,c-∂^2_xφ_ω,c +ı c ∂_xφ_ω,c - ı|φ_ω,c|^2σ∂_x φ_ω,c=0. Now define the set 𝒢_ω,c of the solitary waves to (<ref>): 𝒢_ω,c = {φ_ω,c∈ H^1()∖{0} : φ_ω,c satisfies (<ref>)}, and let Q_ω,c(x)=Φ_ω,c(x)exp{ı(c/2)x-(ı/(2σ + 2))∫^x_-∞Φ_ω,c^2σ(y)ỵ}, with Φ_ω,c(x) = [ (σ+1)(4ω-c^2)/( 2√(ω)(cosh(σ√(4ω-c^2)x)-c/(2√(ω))) ) ]^1/(2σ). The first result is the variational characterization lemma of the solitary waves. Suppose (ω,c)∈^2 satisfies 4ω>c^2. Let d(ω,c), S_ω,c and K_ω,c be defined by (<ref>), (<ref>) and (<ref>) respectively. Then we have 𝒢_ω,c= {φ∈ H^1() ∖{0}: S_ω,c(φ)=d(ω,c), K_ω,c(φ)=0 }= {Q_ω,c(·-y)e^ıθ : θ∈ [0,2π), y∈}. * By the Lagrange multiplier argument in <cit.>, we have S_ω,c'(Q_ω,c)=0, which implies that d'(ω,c) = (M(Q_ω,c),P(Q_ω,c)) and d”(ω,c) = [ ∂_ωM(Q_ω,c) ∂_cM(Q_ω,c); ∂_ωP(Q_ω,c) ∂_cP(Q_ω,c); ]. * By the explicit formula of the solitary waves, the following non-degenerate condition det d”(ω,c)<0 holds with σ∈(1,2) and c∈ (-2√(ω) ,2z_0√(ω)); see Theorem 4.3 in <cit.>. This non-degenerate condition is important to show the stability result of the solitary waves by the perturbation argument, the modulation stability and the energy method. Let σ, z_0 be as in Theorem <ref>, and (ω, c)∈^2 with c∈ (-2√(ω) ,2z_0√(ω)). 
If ε∈ H^1() satisfies the orthogonality conditions ⟨ε, ı Q_ω,c⟩ = ⟨ε, ∂_xQ_ω,c⟩ = ⟨ε, Q_ω,c⟩ = ⟨ε, ı∂_xQ_ω,c⟩ = 0, then we have ⟨S_ω,c”(Q_ω,c)ε, ε⟩≥ C_abs‖ε‖_H^1^2, where S_ω,c”(Q_ω,c) := T_ω,c + N_ω,c with ⟨T_ω,cε, ε⟩ := ∫ |ε_x|^2+ω |ε|^2-c Im(ε̄ε_x), and N_ω,c :=∫[ Q_ω,c^2σ_x + σQ_ω,c^2σ-2( Q̅_ω,c∂_xQ_ω,c^2 + Q_ω,c∂_xQ_ω,c^2) ]. We follow the argument in <cit.> (see also <cit.>) and the references therein, and divide the proof into several steps. * Spectral distribution of S_ω,c”(Q_ω,c). On one hand, by Hölder's inequality, we have ⟨T_ω,cε, ε⟩ =∫ |(∂_x+ı(c/2))ε|^2 +(ω-c^2/4)∫|ε|^2 ≥ (ω-c^2/4)∫|ε|^2, which means that σ_ess(T_ω,c)⊂[ω-c^2/4 , ∞). On the other hand, by the exponential decay of Q_ω,c and the similar argument of Proposition 2.9 in <cit.>, we know that the operator N_ω,c is relatively compact with respect to T_ω,c. By Weyl's theorem in <cit.>, we have σ_ess(S_ω,c”(Q_ω,c))=σ_ess(T_ω,c)⊂[ω-c^2/4 , ∞). * We claim that for any φ∈ H^1()∖{0} with ⟨K'_ω,c(Q_ω,c),φ⟩=0, we have ⟨S”_ω,c(Q_ω,c)φ,φ⟩≥0. Indeed, notice that K'_ω,c(Q_ω,c)≠ 0, so we can choose ψ such that ⟨K'_ω,c(Q_ω,c),ψ⟩≠ 0. We now define, for any φ∈ H^1()∖{0} with ⟨K'_ω,c(Q_ω,c),φ⟩=0, κ(m,s):=K_ω,c(Q_ω,c +m ψ+sφ). Applying the Implicit Function Theorem to κ(m,s) with κ(0,0)= K_ω,c(Q_ω,c)=0 and ∂_mκ(0,0)≠ 0 yields that there exists δ>0 such that m:(-δ,δ)↦ is of class 𝒞^1 with m(0)=0, and κ(m(s), s)=K_ω,c(Q_ω,c +m(s)ψ+sφ)≡0  for  s∈(-δ,δ). Differentiating in s, we have ṁ(0)∂_mκ(0,0) + ∂_sκ(0,0)=0, which implies that ⟨K'_ω,c(Q_ω,c),φ⟩ + ṁ(0)⟨K'_ω,c(Q_ω,c),ψ⟩=0. Consequently we have ṁ(0)=0. Based on the above argument, we can define the function ι: (-δ,δ)↦ as follows: ι(s):=S_ω,c(Q_ω,c+m(s)ψ+sφ). It follows from Lemma <ref> and (<ref>) that s=0 is a local minimum point of ι, which implies that the function ι is convex around 0, i.e. ι”(0)= ⟨S”_ω,c(Q_ω,c)φ,φ⟩≥0. * S_ω,c”(Q_ω,c) has at least one negative eigenvalue. For this purpose, we only need to show that there exists a function U in H^1() with ⟨S_ω,c”(Q_ω,c) U,U⟩<0. 
Indeed, it follows from K_ω,c(Q_ω,c)=0 and 4ω>c^2 that ⟨S”_ω,c(Q_ω,c)Q_ω,c, Q_ω,c⟩ = - 2σ∫ |∂_xQ_ω,c|^2 + ω|Q_ω,c|^2 + ı c ∂_xQ_ω,cQ̅_ω,c <0. * S_ω,c”(Q_ω,c) has at most a one-dimensional negative eigenspace. We argue by contradiction. Suppose that there exist two linearly independent eigenfunctions χ_1 and χ_2 of S_ω,c”(Q_ω,c) with negative eigenvalues. Since S_ω,c”(Q_ω,c) is a self-adjoint operator, without loss of generality, one may assume that ⟨χ_1,χ_2⟩=0. It is easy to check that ⟨S_ω,c”(Q_ω,c) χ_1, χ_2⟩ =0. Moreover, by the nonnegativity property in <ref> together with ⟨S_ω,c”(Q_ω,c) χ_1, χ_1⟩ <0 and ⟨S_ω,c”(Q_ω,c) χ_2, χ_2⟩ <0, we have ⟨K'_ω,c(Q_ω,c),χ_1⟩≠0, and ⟨K'_ω,c(Q_ω,c),χ_2⟩≠0, which implies that there exists ξ_0∈∖{0} with χ_0=χ_1+ξ_0χ_2 such that ⟨K'_ω,c(Q_ω,c),χ_0⟩ =0. By the nonnegativity property in <ref>, we have ⟨S_ω,c”(Q_ω,c) χ_0, χ_0⟩≥0, which is in contradiction with ⟨S_ω,c”(Q_ω,c) χ_0, χ_0⟩=⟨S_ω,c”(Q_ω,c) χ_1, χ_1⟩+ ξ_0^2⟨S_ω,c”(Q_ω,c) χ_2, χ_2⟩<0. * ker S_ω,c”(Q_ω,c) = span{ı Q_ω,c, ∂_x Q_ω,c}. This follows from Proposition 3.6 in <cit.>. * Positivity of the quadratic form S_ω,c”(Q_ω,c). In fact, we have: For any ε∈ H^1()∖{0} with (<ref>) we have ⟨S_ω,c”(Q_ω,c)ε,ε⟩>0. This is a consequence of <ref>–<ref> and the standard spectral decomposition arguments for the quadratic form ⟨S_ω,c”(Q_ω,c)ε,ε⟩. In this proof, we will omit the subscripts ω and c for convenience and write S_ω,c”(Q_ω,c) and Q_ω,c as S”(Q) and Q respectively. First, we infer, from <ref>–<ref> together with (<ref>), that the space H^1() can be decomposed as a direct sum of three subspaces: H^1()= N⊕ K⊕ P, with K:= span{ı Q , ∂_x Q }= ker S”(Q), P the spectral subspace on which S”(Q) is positive, and N:= span{χ}, where χ is the L^2-normalized negative eigenfunction corresponding to the negative eigenvalue -λ^2. According to (<ref>), we can decompose any function ε∈ H^1() satisfying (<ref>) into ε= κχ + 𝚟 with 𝚟∈ P and κ = ⟨ε, χ⟩. Now, we turn to the decomposition of some special functions related to the non-degenerate condition det d”(ω,c) <0. 
On one hand, by (<ref>), we have S”(Q)∂_ωQ = - Q, and S”(Q)∂_cQ = -ı Q, which implies that ⟨S”(Q)∂_ωQ, ∂_ωQ⟩ = -∂_ω M(Q), ⟨S”(Q)∂_ωQ, ∂_cQ⟩ = -∂_c M(Q), ⟨S”(Q)∂_cQ, ∂_ωQ⟩ = -∂_ω P(Q), ⟨S”(Q)∂_cQ, ∂_cQ⟩ = -∂_c P(Q), and ⟨S”(Q)∂_ωQ, ε⟩ =⟨S”(Q)∂_cQ, ε⟩=0. On the other hand, the non-degenerate condition det d”(ω,c)<0 implies that there exists ξ=(ξ_1, ξ_2)∈^2 such that ⟨d”(ω,c)ξ, ξ⟩<0, which, together with (<ref>), (<ref>)-(<ref>) and setting U = ξ_1∂_ωQ + ξ_2∂_cQ, yields that ⟨S”(Q) U, U⟩<0. Using the decomposition (<ref>), we decompose the function U as follows: U :=αχ + ζ + 𝚢, with α = ⟨U, χ⟩, ζ∈ ker S”(Q) and 𝚢∈ P. From (<ref>), (<ref>)-(<ref>), we have 0> ⟨S”(Q) U, U⟩ = -α^2λ^2 +⟨S”(Q)𝚢, 𝚢⟩, 0= ⟨S”(Q) U, ε⟩ = -ακλ^2 +⟨S”(Q)𝚢, 𝚟⟩. Now inserting (<ref>) into ⟨S”(Q)ε,ε⟩, and taking into account (<ref>)-(<ref>), we obtain by the Cauchy–Schwarz inequality that ⟨S”(Q)ε,ε⟩ = -κ^2λ^2 + ⟨S”(Q)𝚟, 𝚟⟩≥ -κ^2λ^2 + ⟨S”(Q)𝚢, 𝚟⟩^2/⟨S”(Q)𝚢, 𝚢⟩ > -κ^2λ^2 + α^2κ^2λ^4 /(α^2λ^2)= 0. This completes the proof. By <ref> to <ref>, the coercivity property of S_ω,c”(Q_ω,c) can be obtained by the argument in <cit.> and <cit.> (see also <cit.>). This concludes the proof of the proposition. § MODULATION ANALYSIS Following the modulation analysis in <cit.> <cit.> (also <cit.>), we will show the geometrical decomposition of the solutions to (<ref>) close to the sum of two solitary waves with weak interactions. Now let (σ, z_0) be as in Theorem <ref>, and let (ω_j^0, c_j^0)∈^2 be such that -2 √(ω_j^0) <c_j^0 <2z_0√(ω_j^0), j=1, 2; then by Theorem 4.3 in <cit.>, we have the non-degenerate condition det d”(ω^0_j,c^0_j)<0, for j=1, 2. Let α < α_0 be small enough, and L>L_0 be large enough, where α_0, L_0 will be determined later. We first consider the tube of size α in the energy space H^1(): 𝒰_α(ω^0, 𝐜^0, L) := { u∈ H^1()∖{0} : inf_x_2-x_1>L, γ_1,γ_2∈‖ u-∑_j=1^2 Q_ω_j^0,c_j^0(·-x_j)e^ıγ_j‖_H^1<α } with ω^0= (ω_1^0 , ω_2^0) and 𝐜^0= (c_1^0 , c_2^0). 
We denote Q_j^0=Q_ω_j^0,c_j^0, Q_j=Q_ω_j,c_j for convenience, and let ω, 𝐜, 𝐱 and γ be the vectors (ω_1, ω_2), (c_1, c_2), (x_1, x_2) and (γ_1, γ_2) respectively. By the Implicit Function Theorem, we have: There exist L_⋆ large enough and α_⋆ small enough, such that for any L>L_⋆, α<α_⋆, if u∈𝒰_α(ω^0, 𝐜^0, L), then there exist unique 𝒞^1 functions (ω, 𝐜, 𝐱, γ) such that the following decomposition holds: u(x)=∑_j=1^2 Q_j(x-x_j)e^ıγ_j + ε(x), with -2 √(ω_j) <c_j <2z_0√(ω_j), j=1,2 and ⟨ε, R_j⟩ =⟨ε, ı∂_xR_j⟩=⟨ε, ı R_j⟩=⟨ε, ∂_xR_j⟩=0, j=1,2, where R_j(x)=Q_j(x-x_j)e^ıγ_j. Moreover, we have ‖ε‖_H^1+∑^2_j=1(|ω_j-ω_j^0|+|c_j-c_j^0| ) <C_α, and x_2-x_1 >L/2, and 1/2<√( 4ω_j-c_j^2 )/√( 4ω_j^0-(c_j^0)^2 )<2, j=1,2. First of all, by the definition of 𝒰_α(ω^0, 𝐜^0, L), there exist 𝐱^0:=(x_1^0, x_2^0) ∈^2 with x_2^0 -x_1^0≥ L and γ^0:=(γ_1^0,γ_2^0) ∈^2 such that ‖u-∑_j=1^2 Q_ω_j^0,c_j^0(·-x_j^0)e^ıγ_j^0‖_H^1<α. Let Γ= (ω, 𝐜, 𝐱, γ), Γ^0 =(ω^0, 𝐜^0, 𝐱^0, γ^0), and 𝐐^0 (x)=∑_j=1^2 Q_ω_j^0,c_j^0(x-x_j^0)e^ıγ_j^0. For any u with (<ref>) and Γ, we define ε(x; Γ, u) := u(x)-∑_j=1^2 Q_j(x-x_j)e^ıγ_j. It is easy to see that ε(x; Γ^0, 𝐐^0) ≡ 0. Defining P(Γ, u) := (ϱ_1^1, ϱ_1^2, ϱ_1^3, ϱ_1^4, ϱ_2^1, ϱ_2^2, ϱ_2^3, ϱ_2^4)(Γ, u) by ϱ_j^1(Γ,u):= ⟨ε(· ;Γ,u),  Q_j(·-x_j)e^ıγ_j⟩, ϱ_j^2(Γ,u) := ⟨ε(· ;Γ,u),  ı∂_xQ_j(·-x_j)e^ıγ_j⟩, ϱ_j^3(Γ,u) := ⟨ε(· ;Γ,u),  ıQ_j(·-x_j)e^ıγ_j⟩, ϱ_j^4(Γ,u):= ⟨ε(· ;Γ,u),  ∂_x Q_j(·-x_j)e^ıγ_j⟩, where j=1, 2. By simple calculations, we have ∂ε/∂ω_j= -∂_ω_jQ_j(x-x_j)e^ıγ_j, ∂ε/∂ c_j=-∂_c_jQ_j(x-x_j)e^ıγ_j, ∂ε/∂ x_j=  ∂_xQ_j(x-x_j)e^ıγ_j, ∂ε/∂γ_j=  -ıQ_j(x-x_j)e^ıγ_j, and |∫𝒬_1^0 (x-x_1^0) e^ıγ_1^0 𝒬_2^0 (x-x_2^0) e^ıγ_2^0|≤ C_abs e^-2θ_1 L , where θ_1=min{ √(4ω_1^0-(c_1^0)^2)/8 , √(4ω_2^0-(c_2^0)^2)/8 }, and 𝒬_j^0 denotes one of Q_j^0, ∂_xQ_j^0, ∂_ω_jQ_j|_Γ=Γ^0, and ∂_c_jQ_j|_Γ=Γ^0. 
Inserting (<ref>) and (<ref>) into ϱ_j^k, we obtain ∂ϱ_j^1/∂ω_k(Γ^0,𝐐^0) = -∂ M(Q_k^0)/∂ω_k^0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^1/∂ c_k(Γ^0,𝐐^0) = -∂ M(Q_k^0)/∂ c_k^0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^1/∂ x_k(Γ^0,𝐐^0) = 0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^1/∂γ_k(Γ^0,𝐐^0) = 0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^2/∂ω_k(Γ^0,𝐐^0) = -∂ P(Q_k^0)/∂ω_k^0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^2/∂ c_k(Γ^0,𝐐^0) = -∂ P(Q_k^0)/∂ c_k^0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^2/∂ x_k(Γ^0,𝐐^0) = 0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^2/∂γ_k(Γ^0,𝐐^0) = 0 for j=k, and O(e^-2θ_1 L) for j≠ k; ∂ϱ_j^3/∂ω_k(Γ^0,𝐐^0) = -⟨∂_ω_kQ_k^0, ı Q_k^0⟩ for j=k, and O(e^-2θ_2 L) for j≠ k; ∂ϱ_j^3/∂ c_k(Γ^0,𝐐^0) = -⟨∂_c_kQ_k^0, ı Q_k^0⟩ for j=k, and O(e^-2θ_2 L) for j≠ k; ∂ϱ_j^3/∂ x_k(Γ^0,𝐐^0) = -2P(Q_k^0) for j=k, and O(e^-2θ_2 L) for j≠ k; ∂ϱ_j^3/∂γ_k(Γ^0,𝐐^0) = -2M(Q_k^0) for j=k, and O(e^-2θ_2 L) for j≠ k; ∂ϱ_j^4/∂ω_k(Γ^0,𝐐^0) = -⟨∂_ω_kQ_k^0, ∂_x Q_k^0⟩ for j=k, and O(e^-2θ_2 L) for j≠ k; ∂ϱ_j^4/∂ c_k(Γ^0,𝐐^0) = -⟨∂_c_kQ_k^0, ∂_x Q_k^0⟩ for j=k, and O(e^-2θ_2 L) for j≠ k; ∂ϱ_j^4/∂ x_k(Γ^0,𝐐^0) = ‖∂_x Q_k^0‖_L^2^2 for j=k, and O(e^-2θ_2 L) for j≠ k; ∂ϱ_j^4/∂γ_k(Γ^0,𝐐^0) = 2P(Q_k^0) for j=k, and O(e^-2θ_2 L) for j≠ k. Hence we can decompose the Jacobian (D P/DΓ)|_(Γ,u)=(Γ^0,𝐐^0) into four 4× 4 submatrices, (D P/DΓ)(Γ^0,𝐐^0) = [ D P_1,1/DΓ D P_1,2/DΓ; D P_2,1/DΓ D P_2,2/DΓ ]|_(Γ,u)=(Γ^0,𝐐^0), where (D P_k,k/DΓ)|_(Γ,u)=(Γ^0,𝐐^0) = [ -∂ M(Q_k^0)/∂ω_k^0  -∂ M(Q_k^0)/∂ c_k^0  0  0; -∂ P(Q_k^0)/∂ω_k^0  -∂ P(Q_k^0)/∂ c_k^0  0  0; -⟨∂_ω_kQ_k^0, ı Q_k^0⟩  -⟨∂_c_kQ_k^0, ı Q_k^0⟩  -2P(Q_k^0)  -2M(Q_k^0); -⟨∂_ω_kQ_k^0, ∂_x Q_k^0⟩  -⟨∂_c_kQ_k^0, ∂_x Q_k^0⟩  ‖∂_x Q_k^0‖_L^2^2  2P(Q_k^0) ]. By simple calculations, we have det (D P_k,k/DΓ)|_(Γ,u)=(Γ^0,𝐐^0) = det d”( ω_k^0 , c_k^0 ) × ( 2M(Q_k^0)‖∂_x Q_k^0‖_L^2^2 - 4[P(Q_k^0) ]^2 ), and (D P_j,k/DΓ)|_(Γ,u)=(Γ^0,𝐐^0) =O(e^ -2θ_1 L ), for j≠ k. Putting these together, we obtain det (D P/DΓ)(Γ^0,𝐐^0) = ∏_k=1^2{ det d”(ω_k^0,c_k^0) × ( 2M(Q_k^0)‖∂_x Q_k^0‖_L^2^2 - 4[P(Q_k^0) ]^2 ) } + O(e^ -2θ_1 L ). The fact that 2M(Q_k^0)‖∂_x Q_k^0‖_L^2^2 - 4[P(Q_k^0) ]^2>0, proved in Appendix A, together with the non-degenerate condition (<ref>), implies that det (D P/DΓ)(Γ^0,𝐐^0)>0 for sufficiently large L. We can conclude the proof by the Implicit Function Theorem. Let L_⋆ and α_⋆ be given by Lemma <ref>. 
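The "simple calculations" behind the determinant of each diagonal block can be spelled out: since the upper-right 2×2 corner of DP_{k,k}/DΓ vanishes identically, the 4×4 determinant factorizes into the product of the two 2×2 diagonal blocks (our own expansion, with M, P evaluated at Q_k^0):

```latex
\det\frac{DP_{k,k}}{D\Gamma}
= \det\begin{pmatrix} -\partial_{\omega} M & -\partial_{c} M \\
                      -\partial_{\omega} P & -\partial_{c} P \end{pmatrix}
  \cdot
  \det\begin{pmatrix} -2P & -2M \\
       \|\partial_x Q_k^0\|_{L^2}^2 & 2P \end{pmatrix}
= \det d''(\omega_k^0, c_k^0)\,
  \Bigl( 2M(Q_k^0)\,\|\partial_x Q_k^0\|_{L^2}^2 - 4\bigl[P(Q_k^0)\bigr]^2 \Bigr),
```

where the two sign flips in the first block cancel in its 2×2 determinant, reproducing det d''. Since each det d''(ω_k^0,c_k^0)<0 and the second factor is positive, the product over k=1,2 is positive, which is exactly the sign statement used below.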
If u∈𝒞( [0,T^∗],H^1()) is a solution to (<ref>) with u(0)∈𝒰_α(ω^0, 𝐜^0, L), and u(t)∈𝒰_α(ω^0, 𝐜^0, L/2) for any t∈(0,T^∗], where α <α_⋆ and L>2L_⋆, then there exist unique 𝒞^1 functions Γ(t):=(ω(t),𝐜(t),𝐱(t),γ(t)) : [0,T^∗]↦^8 with -2√(ω_j(t))<c_j(t)<2z_0√(ω_j(t)) for all t∈ [0, T^*], j=1,2, such that ⟨ε(t), R_j(t)⟩= ⟨ε(t), ı∂_xR_j(t)⟩= ⟨ε(t), ı R_j(t)⟩= ⟨ε(t), ∂_xR_j(t)⟩ =0, where R_j(t,x)=Q_ω_j(t),c_j(t)(x-x_j(t))e^ıγ_j(t), j=1,2, and ε(t,x)=u(t,x)-∑_j=1^2 R_j(t,x). Moreover, for t∈ [0, T^*], we have ‖ε(t)‖_H^1+∑^2_j=1(|ω_j(t)-ω_j^0|+|c_j(t)-c_j^0| ) <C_α, 1/2<√( 4ω_j(t)-c_j(t)^2 )/√( 4ω_j^0-(c_j^0)^2 )<2, |ω̇_k(t)| + |ċ_k(t)|+ |ẋ_k(t)-c_k(t)| + |γ̇_k(t)-ω_k(t)|≤ C_abs( ‖ε(t)‖_H^1 +e^ -θ_2(L+θ_2 t) ), x_2(t)-x_1(t)>1/2( L + θ_2 t ), where θ_2=min{ √(4ω_1^0-(c_1^0)^2)/8 , √(4ω_2^0-(c_2^0)^2)/8  , c_2^0-c_1^0 }. First, since u(t)∈𝒰_α(ω^0, 𝐜^0, L/2) for any t∈(0, T^∗], there exist 𝐱^0(t) and γ^0(t) such that ‖u(t)-∑_j=1^2 Q_ω_j^0,c_j^0(·-x_j^0(t))e^ıγ_j^0(t)‖_H^1<α, with x_2^0(t)- x_1^0(t)≥L/2. By Lemma <ref>, we have the decomposition (<ref>) with the estimates (<ref>) and (<ref>). Moreover, by the proof of Lemma <ref>, we can obtain the estimate on 𝐱(t), i.e. |x_j(t)-x_j^0(t)| <C_α, which together with x_2^0(t) - x_1^0(t) ≥ L/2 implies that x_2(t)-x_1(t) >L/4 for sufficiently small α and sufficiently large L. Now, we turn to the proof of (<ref>). The rigorous calculations for (<ref>) can be obtained by Lemma 4 in <cit.>. Here, we only give the formal calculations. 
On one hand, by the equation (<ref>) and the decomposition (<ref>), we have 0= ı∂_tε +∂_xxε - ∑_k=1^2ı(ẋ_k-c_k) ∂_xR_k - ∑_k=1^2(γ̇_k-ω_k) R_k + ∑_k=1^2ıω̇_k∂_ω_kR_k+ ∑_k=1^2ıċ_k∂_c_kR_k + ı|∑_k=1^2 R_k + ε|^2σ∂_x(∑_k=1^2 R_k + ε) - ∑_k=1^2ı|R_k|^2σ∂_xR_k = ı∂_tε +∂_xxε - ∑_k=1^2ı(ẋ_k-c_k) ∂_xR_k - ∑_k=1^2(γ̇_k-ω_k) R_k + ∑_k=1^2ıω̇_k∂_ω_kR_k+ ∑_k=1^2ıċ_k∂_c_kR_k + O(ℛ_1ℛ_2 + ε + ∂_xε), where we used ı∂_tR_k + ∂_xxR_k = -ı|R_k|^2σ∂_xR_k + ıω̇_k∂_ω_kR_k+ ıċ_k ∂_c_kR_k -ı(ẋ_k-c_k) ∂_xR_k -(γ̇_k -ω_k)R_k, and |ε| + |ℛ_1| + |ℛ_2| ≲ 1, where ℛ_k denotes one of R_k and ∂_xR_k. Then, by (<ref>) and the orthogonality condition (<ref>), we have |ω̇_k (t)| + |ċ_k(t)|+ |ẋ_k(t)-c_k(t)|+ |γ̇_k(t)-ω_k (t)|≤ C_abs( ‖ε‖_H^1 + e^ -2θ_2|x_1(t)-x_2(t)| ), where we used the fact: |∫ℛ_1ℛ_2|≤ C_abs∫ e^ -(√( 4ω_1(t)-c_1(t)^2 )/2)|x-x_1(t)| e^ -(√( 4ω_2(t)-c_2(t)^2 )/2)|x-x_2(t)|≤ C_abs e^ -2θ_2|x_1(t)-x_2(t)|. Inserting (<ref>) into (<ref>), we obtain the following "rough" estimate: |ω̇_k (t)| + |ċ_k(t)|+ |ẋ_k(t)-c_k(t)|+ |γ̇_k(t)-ω_k (t)|≤ C_abs( ‖ε‖_H^1 + e^ -θ_2 L/2 ). On the other hand, combining (<ref>) with (<ref>), we have ẋ_2(t)-ẋ_1(t) = (ẋ_2(t) - c_2(t))- (ẋ_1(t) - c_1(t))+(c_2(t)-c_2^0)-(c_1(t)-c_1^0)+(c_2^0-c_1^0)≥ (c_2^0-c_1^0)- C_abs α - C_abs e^ -θ_2 L/4 ≥ 1/2 (c_2^0-c_1^0); then, integrating (<ref>), we obtain x_2(t)-x_1(t) ≥ x_2(0)-x_1(0) + 1/2∫_0^t (c_2^0-c_1^0) ṣ≥ L/2+1/2 (c_2^0-c_1^0) t, which implies that |ω̇_k (t)| + |ċ_k(t)|+ |ẋ_k(t)-c_k(t)|+ |γ̇_k(t)-ω_k (t)|≤ C_abs( ‖ε‖_H^1 + e^ -θ_2( L+θ_2 t) ). This concludes the proof. § MONOTONICITY FORMULA In <cit.>, under the non-degenerate condition det d”(ω, c) <0, Miao, Tang and Xu obtained the orbital stability of the single solitary wave of the equation (<ref>) with σ=1 in H^1() by the conservation laws of the energy, mass and momentum; these conservation laws were used to refine the estimates about the radiation term ε(t) and the parameter variation |ω(t)-ω(0)| + |c(t)-c(0)|. 
In this section, because the parameters ω, 𝐜 are multi-dimensional when dealing with the multi-solitary waves, we will introduce analogous monotonicity formulas to those in <cit.>, <cit.>, instead of the conservation laws, to refine the estimates (<ref>) and (<ref>) about the radiation term ε(t) and the parameter variations |ω_k(t)-ω_k(0)| and |c_k(t)-c_k(0)|, k=1, 2. Those monotonicity formulas are related to the localized mass and momentum. We first give a Virial type identity. Let g:↦ be a 𝒞^3 real-valued function such that g', g” and g”' are bounded. If u∈𝒞( [0, T^∗], H^1()) is a solution of (<ref>), then, for all t∈[0, T^*], we have /ṭ∫|u|^2 g =2 Im∫u̅u_x g' +1/(σ+1)∫|u|^2σ+2 g', and -/ṭ Im∫u̅u_x g = -2∫|u_x|^2 g' - Im∫|u|^2σu̅u_x g' +1/2∫|u|^2 g”'. It follows from simple computations. Now suppose ω∈, c∈, x̅^0∈, μ∈, and a>0; then by Lemma <ref>, we have for any t∈ [0, T^*], /ṭ[ (ω/2)∫|u|^2 g((x-x̅^0 -μ t)/√(t+a)) -(c/2) Im∫u̅u_x g((x-x̅^0 -μ t)/√(t+a)) ] = (1/√(t+a))∫( -c|u_x|^2 + (ω + μ c/2) Im(u̅u_x) - (μω/2)|u|^2 ) g'((x-x̅^0 -μ t)/√(t+a)) +(1/(t+a))∫( (c/4) Im(u̅u_x) - (ω/4)|u|^2 ) Λ g((x-x̅^0 -μ t)/√(t+a)) +(c/(4 (t+a)^3/2))∫|u|^2 g”'((x-x̅^0 -μ t)/√(t+a)) -(c/(2 √(t+a))) Im∫|u|^2σu̅u_x g'((x-x̅^0 -μ t)/√(t+a)) + (ω/(2(σ+1)√(t+a)))∫|u|^2σ+2 g'((x-x̅^0 -μ t)/√(t+a)), where Λ g(x):=xg'(x). Now let x̅^0 = (x^0_1 + x^0_2)/2, μ = 2(ω_2(0)-ω_1(0))/(c_2(0)-c_1(0)), and define the following functional: 𝒥_sum(t) = (ω_1(0)/2)∫|u(t,x)|^2 φ(-(x-x̅^0-μ t)/√(t+a)) - (c_1(0)/2) Im∫u̅u_x φ(-(x-x̅^0-μ t)/√(t+a)) +(ω_2(0)/2)∫|u(t,x)|^2 φ((x-x̅^0-μ t)/√(t+a)) - (c_2(0)/2) Im∫u̅u_x φ((x-x̅^0-μ t)/√(t+a)), which is used to capture the localized mass and momentum around each solitary wave. According to the weak interactions between the solitary waves, we have the following monotonicity properties. §.§ Monotone result for the line x̅^0 +μ t Let a=L^2/64 and u∈𝒞([0, T^∗], H^1()) be a solution satisfying the assumption of Lemma <ref>. 
Then, there exists C_abs such that /ṭ𝒥_sum(t)≤ (C_abs/(t+a)^3/2)‖ε(t)‖_H^1^2 + (C_abs/(t+a)^3/2) e^ -θ_3( L+θ_3 t ), where θ_3=min{ √(4ω_1^0-(c_1^0)^2)/64 , √(4ω_2^0-(c_2^0)^2)/64 , 4(μ -c_1^0) , 4(c_2^0- μ) }. We rewrite 𝒥_sum(t) as the following identity: 𝒥_sum(t) = (ω_1(0)/2)∫|u(t,x)|^2- (c_1(0)/2) Im∫u̅u_x(t,x) + 𝒥(t), where 𝒥(t) = ((ω_2(0)-ω_1(0))/2)∫|u(t,x)|^2 φ((x-x̅^0 -μ t)/√(t+a)) - ((c_2(0)-c_1(0))/2) Im∫u̅u_x(t,x) φ((x-x̅^0 -μ t)/√(t+a)). By the conservation of mass and momentum, it suffices to show: Let a, θ_3 and u be as those in Proposition <ref>. Then, there exists C_abs such that /ṭ𝒥(t)≤ (C_abs/(t+a)^3/2)‖ε(t)‖_H^1^2 + (C_abs/(t+a)^3/2) e^ -θ_3( L+θ_3 t ). Moreover, we have |𝒥(t) - 𝒥(0)|⩽ (C_abs/ L) sup_0<s<t‖ε(s)‖_H^1^2 + C_abs e^-θ_3L. Before the proof of Proposition <ref>, we first give the following estimate. Let Ω_w:={x∈: |x-x̅^0-μ t| <√(t+a)}; then for any 2≤ p<∞, we have ∫_Ω_w|u|^p ≤ C_abs e^-θ_3( L+θ_3 t ) + C_abs‖ε(t)‖_H^1^p ≤1/32, where θ_3 is given by Proposition <ref>. By Lemma <ref>, the solution u(t) can be decomposed as u(t)= ∑_j=1^2 Q_j(·-x_j(t))e^ıγ_j(t) + ε(t), and (<ref>)-(<ref>) hold. Then we have ∫_Ω_w|u|^p ≤∑_j=1^2 C_abs∫_Ω_w|Q_j(·-x_j(t))|^p + C_abs∫_Ω_w|ε|^p. We first estimate the contribution from Q_1. If x∈Ω_w, we obtain |x-x̅^0-μ t|<√(t+a)⩽√(t) + L/8, and |x-x_1(t)|=|(x-x̅^0-μ t)- (x_1(t) -x̅^0-μ t)| ⩾ |x_1(t) -x̅^0-μ t|-|x-x̅^0-μ t| ⩾ |x_1(t) -x̅^0-μ t|- √(t) - L/8. By (<ref>) and (<ref>), we have for sufficiently small α and sufficiently large L that d/dt( x̅^0+μ t - x_1(t) )= μ - (ẋ_1(t) - c_1(t)) - c_1(t)⩾μ - C_abs(C_α + e^ -θ_2(L + θ_2 t) ) - c_1^0 - C_α⩾ (μ - c_1^0)/2, and so, x̅^0+μ t - x_1(t)⩾ (x̅^0 - x_1(0)) + ((μ - c_1^0)/2) t ⩾ L/4 + ((μ - c_1^0)/2) t. Now inserting the above estimate into (<ref>), we obtain for sufficiently small α and sufficiently large L that |x-x_1(t)| ⩾ (x̅^0+μ t - x_1(t)) - √(t) -L/8⩾ L/16 + ((μ - c_1^0)/4) t + ( ((μ - c_1^0)/4) t - √(t) + L/16 )⩾ L/16 + ((μ - c_1^0)/4) t. Then, it follows from the explicit expression of Q_ω,c that ∫_Ω_w |Q_1(·-x_1(t))|^p ≤ C_abs e^-θ_3( L+θ_3 t ), where we used the fact that p≥ 2. 
By the similar argument, we have∫_Ω_w Q_2·-x_2t^p ≤ C_abs^-θ_3 L+θ_3 t .By (<ref>) and (<ref>) and the Sobolev inequality, we have∫_Ω_wu^p ≤C_abs^-θ_3 L+θ_3 t+ C_abst_H^1^p.This concludes the proof.Now, let us prove Proposition <ref>.First of all, letvt,x:=ut,x^-ı1 / 2 μ x,then we have-u_x^2 + μu̅u_x - μ^2/4u^2 = 1/4v̅v_x, -1/2u^2σu̅u_x=-1/2v^2σv̅v_x-μ/4 v ^2σ+2 .Simple calculations yield that1/c_2(0)-c_1(0)/ṭ𝒥t=-1/√(t+a)∫v_x^2φ' x-x̅^0-μ t/√(t+a) +1/ 4(t+a)^3/2∫v^2 φ”' x-x̅^0-μ t/√(t+a) + 1/4t+a∫v̅v_x Λφ x-x̅^0-μ t/√(t+a) -1/2√(t+a)∫v^2σv̅v_x φ' x-x̅^0-μ t/√(t+a) -σμ/4√(t+a)∫v^2σ+2 φ' x-x̅^0-μ t/√(t+a). Next, we estimate (<ref>)-(<ref>) separately.Estimate for (<ref>). The definition of Λφ immediately implies that Λφ≤φ' , which together with the Cauchy-Schwarz inequality, yields that1/4t+a∫v̅v_x Λφ x-x̅^0-μ t/√(t+a) ≤ 1/4t+a√(∫v_x^2 φ' x-x̅^0-μ t/√(t+a)·∫v^2 φ' x-x̅^0-μ t/√(t+a)) ≤ 1/4√(t+a)∫v_x^2 φ' x-x̅^0-μ t/√(t+a) + 1/4(t+a)^3/2∫v^2 φ' x-x̅^0-μ t/√(t+a) Estimate for (<ref>). Applying the Cauchy-Schwarz inequality, we have1/2√(t+a)∫v^2σv̅v_x φ' x-x̅^0-μ t/√(t+a) ≤ 1/2√(t+a)√(∫v^4σ+2 φ' x-x̅^0-μ t/√(t+a)·∫v_x^2  φ' x-x̅^0-μ t/√(t+a)) ≤ 1/4√(t+a)∫v_x^2  φ' x-x̅^0-μ t/√(t+a) + 1/√(t+a)∫v^4σ+2 φ' x-x̅^0-μ t/√(t+a).By the Hölder inequality, we have∫v^4σ+2 φ' x-x̅^0-μ t/√(t+a)≤v^2√(φ' x-x̅^0-μ t/√(t+a))_L^∞^2 ∫_φ'v^4σ-2.By theSobolev inequality in Lemma 5.2 in <cit.> and Lemma <ref>, we havev^2√(φ'( x-x̅^0-μ t/√(t+a)) )_L^∞^2 ≤8∫v_x^2 φ'∫_φ'u^2 + 1/2(t+a)∫v^2φ”^2 /φ' ∫_φ'v^2≤8∫_φ'v^2∫v_x^2 φ' + 1/2(t+a)∫_φ'v^2 ∫v^2φ”^2 /φ'≤8∫_φ'v^2∫v_x^2 φ' +1/2(t+a)∫_φ'v^2 ^2≤ 1/4∫v_x^2 φ' + C_abs/t+a,which implies that1/√(t+a)∫v^4σ+2 φ' x-x̅^0-μ t/√(t+a) ≤1/4√(t+a)∫v_x^2 φ' x-x̅^0-μ t/√(t+a) + C_abs/ (t+a)^3/2^-θ_3 L+θ_3 t + t_H^1^4σ-2Now inserting the above estimate into (<ref>), we have1/2√(t+a)∫v^2σv̅v_x φ' x-x̅^0-μ t/√(t+a) ≤1/2√(t+a)∫v_x^2  φ' x-x̅^0-μ t/√(t+a) + C_abs/ (t+a)^3/2^-θ_3 L+θ_3 t + t_H^1^4σ-2.Estimate for (<ref>). 
By μ >0, φ'>0, we have-σμ/4√(t+a)∫v^2σ+2 φ' x-x̅^0-μ t/√(t+a)≤ 0. Inserting (<ref>), (<ref>) and (<ref>) into (<ref>), we obtain the result.§.§ Monotonicity result for different lines Now letθ_4:=min √(4ω_1^0-c_1^0^2)/64, √(4ω_2^0-c_2^0^2)/64, μ-2c_1^0, 4c_2^0-2μ ≤θ_3, μ_+,0= μ_0,-=4ω_2(0)-ω_1(0)/c_2(0)-c_1(0),μ_-,0=μ_0,+=ω_2(0)-ω_1(0)/c_2(0)-c_1(0),ϕ_±,0t,x:=φx-x̅^0 -μ_±,0 t /√(t+a),ϕ_0,±t,x:=φx-x̅^0 -μ_0,± t /√(t+a),and define𝒥_+,0(t)= (ω_2(0)-ω_1(0))∫u(t,x)^2 ϕ_+,0(t,x) - c_2(0)-c_1(0)/2∫u̅u_x ϕ_+,0(t,x),𝒥_-,0(t)= (ω_2(0)-ω_1(0))/4∫u(t,x)^2 ϕ_-,0(t,x) - c_2(0)-c_1(0)/2∫u̅u_x ϕ_-,0(t,x),𝒥_0,+(t)= (ω_2(0)-ω_1(0))/2∫u(t,x)^2 ϕ_0,+(t,x) - (c_2(0)-c_1(0))∫u̅u_x ϕ_0,+(t,x),𝒥_0,-(t)= (ω_2(0)-ω_1(0))/2∫u(t,x)^2 ϕ_0,-(t,x) - c_2(0)-c_1(0)/4∫u̅u_x ϕ_0,-(t,x).By the same argument as in the proof of Proposition <ref>, we have the following. Let u∈𝒞[0,T^*], H^1 be a solution satisfying the assumption of Lemma <ref>. Then, there exists C_abs such that𝒥_±,0t-𝒥_±,00 ⩽C_abs/ L sup_0<s<ts_H^1^2 + C_abs^-θ_4L,𝒥_0,±t-𝒥_0,±0 ⩽C_abs/ L sup_0<s<ts_H^1^2 + C_abs^-θ_4L.§ PROOF OF THEOREM <REF>Let σ∈1,2 and z_0=z_0(σ)∈0,1 satisfy F(z_0;σ) =0, where F(z;σ) is defined by (<ref>). Let ω^0_k and c^0_k satisfy the assumptions in Theorem <ref>. Let α_0 be defined by Lemma <ref>, and let A_0>2, δ_0=δ_0(A_0), L_0=L_0(A_0) be chosen later. Suppose that ut is the solution of (<ref>) with initial data u_0∈αω^0 c^0L, and defineT^∗ := sup t≥ 0 sup_τ∈[0,t]inf_x_2^0-x_1^0 > L/2 γ^0_1, γ^0_2∈ u(τ,·)- ∑_j=1^2 Q_ω_j^0,c_j^0·-x_j^0^ıγ_j^0_H^1≤A_0δ + ^-θ_0/2 L,whereθ_0=min√(4ω_1^0-c_1^0^2)/128,  √(4ω_2^0-c_2^0^2)/128,  c_2^0-c_1^0,  2c_2^0- 2μ,  μ-2c_1^0 .By the continuity of u(t) in H^1, we know that T^*>0. In order to prove Theorem <ref>, it suffices to show that T^*=+∞ for some A_0>2, δ_0>0, and L_0. We argue by contradiction. Suppose that T^*<+∞. Then, for any t∈ [0, T^*], there exist (x^0_k(t), γ^0_k(t))∈^2, k=1, 2, such that x^0_2(t)≥ x^0_1(t)+L/2 andu(t,·)-∑^2_k=1Q_ω^0_k,c^0_k·-x^0_k(t)e^ıγ^0_k(t)≤ A_0(δ +e^-θ_0 L/2).
* Decomposition of ut. Let L_0>0 be determined by Lemma <ref>, and L_2, L_3 be determined by Proposition <ref> and Corollary <ref>, and choose δ_0>0 small enough and L_0 large enough, such that for δ<δ_0 and L>L_0(A_0)>max 2L_0, L_2, L_3,A_0 δ + e^ - θ_6L/2<α_0. By Lemma <ref> and Lemma <ref>, we haveut, x = t, x + ∑_j=1^2R_jt, xwhere R_jt, x = Q_ω_jt,c_jt x-x_jt^ıγ_jt, and the orthogonality(t) R_j = (t)ı∂_x R_j = (t)ı R_j = (t) ∂_x R_j=0,hold for any t∈[ 0 , T^∗ ]. Moreover, we havet_H^1+∑^2_j=1(ω_jt-ω_j^0+c_jt-c_j^0 ) <C_ A_0δ + ^-θ_0 L/2,ω̇_kt + ċ_kt+ ẋ_kt-c_kt + γ̇_kt-ω_kt≤ C_abst_H^1 +^ -θ_0L+θ_0 t x_2t-x_1t>1/2 L + θ_0 t .In particular, we have (0)_H^1+∑^2_j=1(ω_j0-ω_j^0+c_j0-c_j^0 ) <C_δ. Refined estimate on _H^1^2 and 𝒥t - 𝒥0. In order to do so, we first introduce the functionalt:=Eut + _sumut.and expand it as followingt= ∑_k=1^2S_ω_k0,c_k0 R_k0+ℋtt+ ∑_k=1^2(ω_kt-ω_k0^2 + c_kt-c_k0^2) +t_H^1^2t_H^1+^-θ_0L + θ_0 t . where ℋtt= 1/2∫_xt^2+ω_1t/2∫t^2 1-ϕ+ω_2t/2∫t^2 ϕ+c_1t/2∫_x 1-ϕ+c_2t/2∫_x ϕ+1/2Nt,and N =∑_k=1^2∫R_k^2σ_x + σ∑_k=1^2∫R_k^2σ-2R̅_k∂_xR_k^2 + R_k∂_xR_k^2 .Please refer to the proof in Appendix A.ℋ≥κ_H^1^2. Please refer to Lemma 6.2 in <cit.> and Lemma 6.2 in <cit.>.By Lemma <ref>, we have for all t∈[0, T^∗],t =∑_k=1^2S_ω_k0,c_k0 R_k0 + ℋtt +∑_k=1^2(ω_kt-ω_k0^2 + c_kt-c_k0^2)+ t_H^1^2t_H^1 + ^-θ_0L + θ_0 t . 
In particular, we have0 =∑_k=1^2S_ω_k0,c_k0 R_k0 + ℋ00 + 0_H^1^20_H^1 + ^-θ_0 L ,which implies that∑_k=1^2S_ω_k0,c_k0 R_k0 =0 - ℋ00 + 0_H^1^20_H^1 + ^-θ_0 L .Inserting (<ref>) into (<ref>), we obtain byLemma <ref> and the conservation laws of mass, momentum and energy thatκt_H^1^2≤ ℋtt=t - ∑_k=1^2S_ω_k0,c_k0 R_k0 +∑_k=1^2( ω_kt-ω_k0^2 + c_kt-c_k0^2)+ t_H^1^2t_H^1 + ^-θ_0L + θ_0 t =t - 0 + ℋ00 + 0_H^1^20_H^1 + ^-θ_0 L+∑_k=1^2(ω_kt-ω_k0^2 + c_kt-c_k0^2 )+ ^-θ_0L + θ_0 t =_sumt - _sum0 + ℋ00 + 0_H^1^20_H^1 + ^-θ_0 L+ ∑_k=1^2(ω_kt-ω_k0^2 + c_kt-c_k0^2 ) + ^-θ_0L + θ_0 t .By Proposition <ref>, we obtaint_H^1^2 ≤ C_abs/ L sup_0<s<ts_H^1^2 + C_abs^-θ_0 L+ C_abs0_H^1^2 +C_abs∑_k=1^2( ω_kt-ω_k0^2 + c_kt-c_k0^2 ).Moreover, by (<ref>), we have𝒥 0- 𝒥 t =𝒥_sum 0- 𝒥_sum t =ℋ00 - ℋtt + 0_H^1^20_H^1 + ^-θ_0 L+ ∑_k=1^2(ω_kt-ω_k0^2 + c_kt-c_k0^2 ) + ^-θ_0L + θ_0 t≤C_abs0_H^1^2 + C_abs^-θ_0 L+ C_abs∑_k=1^2ω_kt-ω_k0^2 + c_kt-c_k0^2 ,which together with (<ref>) implies that𝒥t - 𝒥0≤ C_abs/ L sup_0≤ s≤ ts_H^1^2 + C_abs^-θ_0 L+ C_abs0_H^1^2+ ∑_k=1^2 C_absω_kt-ω_k0^2 + c_kt-c_k0^2 .* Refined estimates of ω_kt-ω_k0 and c_kt-c_k0. 
Recall thatϕt,x=φx-x̅^0 -μt /√(t+a),ϕ_±,0t,x=φx-x̅^0 -μ_±,0 t /√(t+a), ϕ_0,±t,x=φx-x̅^0 -μ_0,± t /√(t+a),we have∫u(t,x)^2 ϕ(t,x) - ∫R_2t^2 ≤ C_abs^-θ_0 L + θ_0 t+ C_abs_L^2^2.∫u(t,x)^2 1-ϕ(t,x) - ∫R_1t^2 ≤ C_abs^-θ_0 L + θ_0 t+ C_abs_L^2^2.∫u̅u_x ϕ(t,x) - ∫R̅_2∂_xR_2t≤ C_abs^-θ_0 L + θ_0 t+ C_abs_H^1^2.∫u̅u_x 1-ϕ(t,x) - ∫R̅_1∂_xR_1t≤ C_abs^-θ_0 L + θ_0 t+ C_abs_H^1^2.By the definition of ϕ and the exponential decay estimate of R_k, it is easy to check that∫u(t,x)^2 ϕ(t,x) - ∫R_2^2 =∫R_1+R_2+^2 ϕ - ∫R_2^2 =∫R_1^2 ϕ - R_2^2 1-ϕ + ^2 ϕ + 2[ R̅_1R_2ϕ + R̅_1ϕ - 2R̅_21-ϕ]≤2∫R_1^2 ϕ + R_2^2 1-ϕ + R̅_1R_2 + ^2 .First, by inserting the estimate (<ref>) into (<ref>), from the definition of θ_0 in (<ref>), we have∫R̅_1R_2 < C_abs^-θ_0 L + θ_0 t .Now, by (<ref>) and (<ref>), we obtain/ṭ[ x̅^0+μ t -√(t+a) - x_1t]=μ - ẋ_1(t) - 1/2√(t+a)=μ - ẋ_1(t)-c_1(t)-c_1(t)-c_1^0-c_1^0 - 1/2√(t+a) ≥ μ -c_1^0 - C_abst_H^1 - C_abs^-θ_2 L+θ_2 t- C_α ≥ μ -c_1^0 - C_absC_α - C_abs^-θ_2 L- C_α ≥ 1/2μ -c_1^0 , Integrating from 0 to t, we havex̅^0+μ t -√(t+a) - x_1t≥L/4 + 1/2μ -c_1^0t.In the similar way, we havex_2t - x̅^0+μ t + √(t+a)≥L/4 + 1/2 c_2^0 - μ t.By (<ref>), (<ref>), the definition of ϕ andthe explicit expression of R_1 and R_2, we obtain∫[ R_1^2 ϕ + R_2^2 1-ϕ]<C_abs^-θ_0 L + θ_0 t .Inserting (<ref>) and (<ref>) into (<ref>), it is easy to check that (<ref>) holds. The estimates (<ref>)-(<ref>) can be proved in the similar way. ∫u(t,x)^2 ϕ - ϕ_0,-(t,x) + ∫u̅u_xϕ - ϕ_0,-≤ C_abs^-θ_0 L + θ_0 t+ C_abs_H^1^2, ∫u(t,x)^2 ϕ_-,0 - ϕ(t,x) + ∫u̅u_xϕ_-,0 - ϕ≤ C_abs^-θ_0 L + θ_0 t+ C_abs_H^1^2, ∫u(t,x)^2 ϕ - ϕ_+,0(t,x) + ∫u̅u_xϕ - ϕ_+,0≤ C_abs^-θ_0 L + θ_0 t+ C_abs_H^1^2, ∫u(t,x)^2 ϕ_0,+ - ϕ(t,x) + ∫u̅u_xϕ_0,+ - ϕ≤ C_abs^-θ_0 L + θ_0 t+ C_abs_H^1^2. We only give the proof of (<ref>). The estimates (<ref>)-(<ref>) can be shown in the similar way. 
Now by the definition of ϕ and ϕ_0,-, it is easy to check that for any time t>0ϕ - ϕ_0,-(t,·)⊂ x̅^0 + μ/2t - √(t+a) , x̅^0 + μ t + √(t+a) .Then, it follows from (<ref>) that,∫u(t,x)^2 ϕ - ϕ_0,-(t,x) ≤C_abs∫_ϕ - ϕ_0,-[ R_1^2 + R_2^2 + ^2 ].Now, by (<ref>) and (<ref>), we obtain/ṭ[ x̅^0+μ/2t -√(t+a) - x_1t]=μ/2 - ẋ_1(t) - 1/2√(t+a)=μ/2 - ẋ_1(t)-c_1(t)-c_1(t)-c_1^0-c_1^0 - 1/2√(t+a) ≥ μ/2 -c_1^0 - C_abst_H^1 - C_abs^-θ_2 L+θ_2 t- C_α ≥ μ/2 -c_1^0 - C_absC_α - C_abs^-θ_2 L- C_α ≥ 1/4μ -2c_1^0 .Integrating from 0 to t, we havex̅^0+μ/2 t -√(t+a) - x_1t≥L/4 + 1/4μ -c_1^0t.A similar argument implies thatx_2t - [ x̅^0+ μ t + √(t+a)] ≥L/4 + 1/4μ -2c_1^0t.Hence, we have∫u(t,x)^2 ϕ - ϕ_0,-(t,x) ≤C_abs∫_ϕ - ϕ_0,-[ R_1^2 + R_2^2 + ^2 ].≤C_abs^-2θ_2x̅^0 + μ/2t - √(t+a) - x_1 (t) + C_abs^ -2θ_2 x_2(t) - x̅^0 - μ t - √(t+a) + C_abs_L^2^2≤C_abs^- θ_0L + θ_0 t+ C_abs_L^2^2.By a similar argument as above, it is not hard to see that∫u̅u_xϕ - ϕ_0,-≤C_abs^- θ_0L + θ_0 t+ C_abs_H^1^2.This gives (<ref>).By Lemma <ref> and <ref>, we are able to show the following result. 
∑_k=1^2M(R_k(t)) - M(R_k(0)) +∑_k=1^2P(R_k(t)) - P(R_k(0)) ≤ C_abs/ L sup_0<s<ts_H^1^2+C_abs0_H^1^2+C_abs^-θ_0 L + C_abs∑_k=1^2ω_kt-ω_k0^2+c_kt-c_k0^2.On one hand, from the expression of 𝒥_+,0t and 𝒥t, we have 𝒥_+,0t - 𝒥t -ω_20-ω_10/2∫R_2t^2=ω_20-ω_10/2∫u^2 ϕ - ∫R_2t^2+ ω_20-ω_10∫u^2 ϕ_+,0 - ϕ - c_20-c_10/2∫u̅u_xϕ_+,0 - ϕ .Combining (<ref>) with (<ref>), we have for any t∈ [0, T^*]𝒥_+,0t - 𝒥t - ω_20-ω_10/2∫R_2t^2 ≤ C_abst_H^1^2 + C_abs^-θ_0L + θ_0 t .Thus,-𝒥_+,0t - 𝒥t - ω_20-ω_10/2∫R_2t^2≤ C_abst_H^1^2 + C_abs^-θ_0L + θ_0 t ,𝒥_+,00 - 𝒥0 - ω_20-ω_10/2∫R_20^2≤ C_abs0_H^1^2 + C_abs^-θ_0 L,which implies thatω_20-ω_10/2[ ∫R_2t^2 - ∫R_20^2 ] + 𝒥t - 𝒥0 - 𝒥_+,0t -𝒥_+,00 ≤C_abs0_H^1^2 + C_abst_H^1^2 + C_abs^-θ_0 L, which together with (<ref>) and (<ref>) implies thatω_20-ω_10/2[ ∫R_2t^2 - ∫R_20^2 ]≤ 𝒥t - 𝒥0 + [ 𝒥_+,0t -𝒥_+,00] + C_abs0_H^1^2 + C_abst_H^1^2 + C_abs^-θ_0 L ≤ C_abs/ L sup_0<s<ts_H^1^2 + C_abs0_H^1^2 + C_abst_H^1^2 + C_abs^-θ_0 L+C_abs∑_k=1^2[ ω_kt-ω_k0^2 + c_kt-c_k0^2 ]≤ C_abs/ L sup_0<s<ts_H^1^2 + C_abs0_H^1^2 + C_abs^-θ_0 L+C_abs∑_k=1^2[ ω_kt-ω_k0^2 + c_kt-c_k0^2 ].On the other hand, we have𝒥t - 𝒥_-,0t -ω_20-ω_10/4∫R_2t^2=ω_20-ω_10/4∫u^2 ϕ - ∫R_2t^2+ ω_20-ω_10/4∫u^2 ϕ - ϕ_-,0 - c_20-c_10/2∫u̅u_x ϕ - ϕ_-,0.Thus, by (<ref>) and (<ref>), we have for any t∈ [0, T^*]𝒥t - 𝒥_-,0t - ω_20-ω_10/4∫R_2t^2 ≤ C_abst_H^1^2 + C_abs^-θ_0L + θ_0 t ,which implies that,𝒥t - 𝒥_-,0t - ω_20-ω_10/4∫R_2t^2 ≤ C_abst_H^1^2 + C_abs^-θ_0L + θ_0 t , -𝒥0 - 𝒥_-,00 - ω_20-ω_10/4∫R_20^2 ≤ C_abs0_H^1^2 + C_abs^-θ_0 L.Therefore,ω_20-ω_10/4[ ∫R_20^2 - ∫R_2t^2] + 𝒥t - 𝒥0 - 𝒥_-,0t -𝒥_-,00 ≤C_abssup_0≤ s≤ t0_H^1^2 + C_abs^-θ_0 L, which together with (<ref>) implies thatω_20-ω_10/4[ ∫R_20^2 - ∫R_2t^2]≤ 𝒥t - 𝒥0 + 𝒥_-,0t -𝒥_-,00 + C_abs0_H^1^2 + C_abst_H^1^2 + C_abs^-θ_0 L ≤ C_abs/ L sup_0<s<ts_H^1 + C_abs0_H^1^2 + C_abs^-θ_0 L+ C_abs∑_k=1^2[ ω_kt-ω_k0^2 + c_kt-c_k0^2 ].Combining (<ref>) with (<ref>), we have∫R_2t^2 - ∫R_20^2 ≤ C_abs/ L sup_0<s<ts_H^1^2+C_abs0_H^1^2+C_abs^-θ_0 L 
+C_abs∑_k=1^2ω_kt-ω_k0^2+c_kt-c_k0^2.Similar argument implies that∫R̅_2∂_xR_2t - ∫R̅_2∂_xR_20≤ C_abs/ L sup_0<s<ts_H^1^2+C_abs0_H^1^2+C_abs^-θ_0 L + C_abs∑_k=1^2ω_kt-ω_k0^2+c_kt-c_k0^2.This concludes the estimates of the solitary wave R_2. In order to obtain the estimates of the solitary wave R_1, we will make use of the conservation laws of mass and momentum and the orthogonality condition (<ref>). Firstly, by the mass conservation and the orthogonality condition (<ref>),∫R_1t+R_2t+t^2 - ∫R_10+R_20+0^2 =∑_k=1^2∫R_kt^2-∫R_k0^2 + 2∫R̅_1R_2t - R̅_1R_20 + ∫t^2 - 0^2≥ ∑_k=1^2∫R_kt^2-∫R_k0^2 - 2∫R̅_1R_2t + R̅_1R_20 - ∫t^2 + 0^2 ,which together with (<ref>) implies that∑_k=1^2∫R_kt^2-∫R_k0^2≤ C_abs^-θ_0 L + C_abs0_L^2^2 + t_L^2^2.Secondly, by (<ref>) and (<ref>), we have∫R_1t^2-∫R_10^2 ≤ ∫R_2t^2-∫R_20^2+ C_abs^-θ_0 L + C_abssup_0≤ s≤ ts_L^2^2≤ C_abs/ L sup_0<s<ts_H^1^2+C_abs0_H^1^2+C_abs^-θ_0 L + C_abs∑_k=1^2ω_kt-ω_k0^2+c_kt-c_k0^2In a similar way, we have∫R̅_1t∂_xR_1t - ∫R̅_10∂_xR_10 ≤ C_abs/ L sup_0<s<ts_H^1^2+C_abs0_H^1^2+C_abs^-θ_0 L + C_abs∑_k=1^2ω_kt-ω_k0^2+c_kt-c_k0^2.This ends the proof. Next, by the nondegenerate condition [ d”ω_k^0,c_k^0 ]<0, for k=1,2,we have for sufficiently small α and sufficiently large L,[ d”ω_k0,c_k0] < 1/2[ d”ω_k^0,c_k^0]<0.By the smallness of ω_kt-ω_k0 and c_kt-c_k0, we have the following expression,[M(R_k(t))- M(R_k(0)); P(R_k(t)) - P(R_k(0)) ] = d”ω_k0, c_k 0 [ ω_kt-ω_k0; c_kt-c_k0 ] + ω_kt-ω_k0^2 +c_kt-c_k0^2 ,it follows that, for k=1,2,ω_kt-ω_k0 +c_kt-c_k0< C_abs∫R_kt^2-∫R_k0^2+ C_abs∫R̅_k∂_xR_kt - ∫R̅_k∂_xR_k0.By Lemma <ref>, we haveω_kt-ω_k0 +c_kt-c_k0 < C_abs/ L sup_0≤ s≤ ts_H^1^2+C_abs0_H^1^2+C_abs^-θ_0 L +C_abs∑_k=1^2ω_kt-ω_k0^2+c_kt-c_k0^2 ,which together with the smallness ofω_kt-ω_k0 and c_kt-c_k0 impliesω_kt-ω_k0 +c_kt-c_k0 < C_abs/ L sup_0≤ s≤ ts_H^1^2+C_abs0_H^1^2+C_abs^-θ_0 L . 
*Conclusion.By (<ref>) and (<ref>), we obtaint_H^1^2 ≤ C_abs/ L sup_0≤ s≤ ts_H^1^2 + C_abs0_H^1^2 + C_abs^-θ_0 L+C_abs/L^2sup_0≤ s≤ ts_H^1^4 + C_abs ^-2θ_0 L+ 0_H^1^20_H^1 ≤ C_abs/ L sup_0≤ s≤ ts_H^1^2 + C_abs0_H^1^2 + C_abs^-θ_0 L .Taking L is large enough, we havesup_0≤ s≤ ts_H^1^2 ≤ C_abs0_H^1^2 + C_abs^-θ_0 L ,which together with (<ref>) implies thatsup_0≤ s≤ ts^2_H^1 + ∑^2_k=1[ ω_kt-ω_k0 +c_kt-c_k0] ≤ C_abs0^2_H^1 + C_abs^-θ_0L .Therefore, we haveinf_ x_2^0-x_1^0>L/2, γ_1^0, γ_2^0∈ ut,·-∑^2_k=1Q_ω_k^0,c_k^0·-x_k^0^ıγ_k^0 ⩽ut,·-∑^2_k=1Q_ω_k^0,c_k^0·-x_kt^ıγ_kt ⩽ut, ·-∑^2_k=1Q_ω_kt,c_kt·-x_kt^ıγ_kt + C_abs∑^2_k=1(ω_k(t)-ω_k^0+c_k(t)-c_k^0)⩽ εt + C_abs∑^2_k=1(ω_kt-ω_k0 + c_kt-c_k0+ω_k0-ω_k^0+ c_k0-c_k^0)⩽ C_absε0 +∑^2_k=1(ω_k0-ω_k^0+ c_k0-c_k^0) + C_abs e^-θ_0 L/2 ⩽ C_absC_(δ + e^-θ_0 L/2).By choosing A_0>2C_absC_, we obtain a contradiction with the definition of T^∗. Thus, T^∗=∞. This concludes the proof. § APPENDIX A By the explicit expression of the solitary wave, we have∂_xQ_ω,c = ∂_xΦ_ω,c + ıc/2Φ_ω,c - ı1/2σ+2Φ_ω,c^2σ+1expı{c/2x-1/2σ + 2∫^x_-∞Φ_ω,c^2σ(y)ỵ}.Hence we have∫Q_ω,c^2 = ∫Φ_ω,c^2,and∫∂_xQ_ω,c^2 =∫∂_xΦ_ω,c^2 + ∫c/2Φ_ω,c - 1/2σ+2Φ_ω,c^2σ+1^2 =∫∂_xΦ_ω,c^2 + c^2/4∫Φ_ω,c^2 + 1/2σ+2^2∫Φ_ω,c^4σ+2 - c/2σ+2∫Φ_ω,c^2σ+2,and∫Q̅_ω,c∂_xQ_ω,c =∫Φ_ω,c∂_xΦ_ω,c + ıc/2Φ_ω,c - ı1/2σ+2Φ_ω,c^2σ+1=∫c/2Φ_ω,c^2 - 1/2σ+2Φ_ω,c^2σ+2.Therefore,we have2MQ_ω,c∂_x Q_ω,c_L^2^2 - 4[PQ_ω,c]^2 =∫Φ_ω,c^2 ∫∂_xΦ_ω,c^2 + c^2/4∫Φ_ω,c^2 + 1/2σ+2^2∫Φ_ω,c^4σ+2 - c/2σ+2∫Φ_ω,c^2σ+2 - ∫c/2Φ_ω,c^2 - 1/2σ+2Φ_ω,c^2σ+2^2 =∫Φ_ω,c^2∫∂_xΦ_ω,c^2 + 1/2σ+2^2∫Φ_ω,c^2∫Φ_ω,c^4σ+2 - 1/2σ+2^2∫Φ_ω,c^2σ+2^2≥ ∫Φ_ω,c^2∫∂_xΦ_ω,c^2>0.where we use the explicit expression (<ref>) of Φ_ω,c in the last inequality.§ APPENDIX B In this appendix, we prove Lemma <ref>. First note thatt = Eut + _sumut=1/2∫ u_xt^2 + 1/2(σ+1)∫_-∞^∞ |u|^2σu̅u_x+ ω_10/2∫u^2 ϕ- c_10/2∫u̅u_x ϕ + ω_20/2∫u^2 1-ϕ- c_20/2∫u̅u_x 1-ϕ.Now we will expand the right hand side of the above equality. 
Term 1/2∫ u_xt^2 : By the weak interaction (<ref>) between the solitary waves, we have∫ u_xt^2 =∫∂_xR_1 + ∂_xR_2 + ∂_x^2 =∫∂_xR_1 ^2 + ∂_xR_2 ^2 + ∂_x^2+ 2∫∂_xR̅_1∂_xR_2- 2∫∂_x,xR_1 + ∂_x,xR_2=∫∂_xR_1 ^2 + ∂_xR_2 ^2 + ∂_x^2 =∫∂_xR_1 ^2 + ∂_xR_2 ^2 + ∂_x^2- 2∫∂_x,xR_1 + ∂_x,xR_2 + ^ -θ_0 L + θ_0 t Term ω_20/2∫u^2ϕ :By (<ref>) and the definition of ϕ, the simple calculations giveω_20/2∫u^2ϕ =ω_20/2∫R_1+R_2+^2ϕ=ω_20/2∫R_1^2ϕ + R_2^2ϕ + ^2ϕ + ω_20∫ R_1R̅_2ϕ + R_1ϕ + R_2ϕ=ω_20/2∫R_2^2 + 2 R_2 + ^2ϕ +ω_20/2∫R_1^2ϕ + R_2^2ϕ - 1+ ω_20∫ R_1R̅_2ϕ + R_1ϕ + R_2ϕ - 1= ω_20/2∫R_2^2 + 2 R_2 + ^2ϕ + ^ -θ_0 L + θ_0 t+ _L^2^2_L^2 = ω_20/2∫R_2^2 + ω_2t/2∫^2ϕ + ω_20∫ R_2 + ^ -θ_0 L + θ_0 t+ _L^2^2_L^2 + ω_2t-ω_20^2= ω_20/2∫R_2^2 + ω_2t/2∫^2ϕ + ω_2t∫ R_2 + ^ -θ_0 L + θ_0 t+ _L^2^2_L^2 + ω_2t-ω_20^2,where we used the orthogonality condition R_2=0 in the last equality,In the similar way, we can obtainTerm ω_10/2∫u^21-ϕ : ω_10/2∫u^2 1-ϕ= ω_10/2∫R_1^2 + ω_1t/2∫^2 1-ϕ + ω_1t∫ R_1 + ^ -θ_0 L + θ_0 t+ _L^2^2_L^2 + ω_1t-ω_10^2. Term c_20/2∫u̅u_xϕ : c_20/2∫u̅u_x ϕ=c_20/2∫R̅_2∂_xR_2 +c_2t/2∫_x ϕ + c_2t∫ R_2 + ^ -θ_0 L + θ_0 t+ _H^1^2_H^1 +c_2t-c_20^2. Term c_10/2∫u̅u_x1-ϕ : c_10/2∫u̅u_x1-ϕ=c_10/2∫R̅_1∂_xR_1 +c_1t/2∫_x ϕ + c_1t∫ R_1+ ^ -θ_0 L + θ_0 t+ _H^1^2_H^1 +c_1t-c_10^2. Term Nu:=1/2(σ+1)∫ |u|^2σu̅u_x: In order to expand it, we introduce the following cut-off functions around each solitary waves,g_1t,x :=1, x<x_1 + 1/16 L + θ_2 t, 0, x>x_1 + 1/8 L + θ_2 t, g_2t,x :=1, x>x_2 - 1/16 L + θ_2 t, 0, x<x_2 - 1/8 L + θ_2 t,g̃ := 1-g_1-g_2where θ_2 is given by Lemma <ref>. 
Now, we decompose Nu as following,Nu=N_1u + Ñu + N_2u,whereN_1u = 1/2(σ+1)∫ |u|^2σu̅u_x g_1, N_2u = 1/2(σ+1)∫ |u|^2σu̅u_x g_2,Ñu = Nu-N_1u-N_2u.Note that R_2t,x <C_abs^-4θ_0x-x_2(t), we have∫R_2(t,x) g_1(t,x)+ ∫R_2(t,x) 1-g_2(t,x) =^ -θ_0 L+θ_0 t ,which implies thatN_1u =1/2(σ+1)∫ R_1 + R_1 + ^2σR̅_1 + R̅_2 + ∂_xR_1 + ∂_xR_2 + _x g_1 = N_1 R_1+ N_1'R_1 R_2+ + 1/2N_1”R_1R_2+R_2+ + ∫R_2^3g_1+ _H^1^2_H^1,whereN_1' R_1R_2+ =1/2(σ+1)∫σR_1^2σ-2R̅_1^2∂_xR_1 R_2+g_1+ 1/2(σ+1)∫σ+1R_1^2σ∂_xR_1R̅_2+g_1+ 1/2(σ+1)∫R_1^2σR̅_1∂_xR_2+_x g_1,andN_1” R_1R_2+ R_2+ =1/2(σ+1)∫2σR_1^2σ-2R̅_1^2 R_2+∂_xR_2 +_x g_1+ 1/2(σ+1)∫2σ+1R_1^2σR̅_2+∂_xR_2 +_x g_1+ 1/2(σ+1)∫ 2σσ+1R_1^2σ-2R̅_1∂_xR_1 R_2+^2 g_1+ 1/2(σ+1)∫σσ-1R_1^2σ-4R̅_1^3∂_xR_1 R_2 +^2 g_1+ 1/2(σ+1)∫σσ+1R_1^2σ-2R_1∂_xR_1R̅_2 +^2 g_1.Therefore, the decay estimates (<ref>) implies thatN_1' R_1R_2+ = N_1' R_1+ ^ -θ_0 L+θ_0 t N_1” R_1R_2+ R_2+ =N_1” R_1+ ^ -θ_0 L+θ_0 t.Now, note that R_1t,x <C_abs^-2θ_2x-x_1(t), we have∫R_1  g_2+ ∫R_1 1-g_1 =^ -θ_0 L+θ_0 t ,which yields thatN R_1- N_1 R_1= ^ -θ_0 L+θ_0 t ,N_1' R_1 -N' R_1= ^ -θ_0 L+θ_0 t , N_1” R_1-N” R_1= ^ -θ_0 L+θ_0 t .Inserting (<ref>)-(<ref>) and (<ref>)-(<ref>) into (<ref>), we obtain thatN_1u = NR_1 + N' R_1+ 1/2 N” R_1+ ^ -θ_0 L+θ_0 t .In the similar way, we haveN_2u = NR_2 + N' R_2+ 1/2 N” R_2+ ^ -θ_0 L+θ_0 t .As for the term Ñu, we haveÑu =1/2(σ+1)∫ |u|^2σu̅u_x 1-g_1-g_2 ≤C_absu_L^∞^2σ∫u_x^2 1-g_1-g_2 ∫u^2 1-g_1-g_2^1/2.which together with the smallness of _H^1, (<ref>) and (<ref>) implies thatÑu=^ -θ_0 L+θ_0 t +_H^1^2_H^1.Combining (<ref>) with (<ref>) and (<ref>), we haveNu =∑_k=1^2 NR_k + ∑_k=1^2N' R_k+ ∑_k=1^21/2 N” R_k+ ^ -θ_0 L+θ_0 t +_H^1^2_H^1.Summing up (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we finish the proof.10AmbMal:bookA. Ambrosetti and A. Malchiodi, Nonlinear analysis and semilinear elliptic problems, Cambridge Studies in Advanced Mathematics, 104, Cambridge University Press,2007.CherSS:gDNLS:localStruct Y. Cher, G. Simpson and C. 
Sulem, Local Structure of singular profiles for a derivative nonlinear Schrödinger equation. arXiv:1602.02381.ColinOhta-DNLS M. Colin, M. Ohta, Stability of solitary waves for derivative nonlinear Schrödinger equation. Ann. Inst. H. Poincaré Anal. Non Linéaire, 23:5(2006), 753–764.Fukaya:gDNLS:bdline N. Fukaya, Instability of solitary waves for a generalized derivative nonlinear Schrödinger equation in a borderline case. arXiv:1604.07945.FuHaIn:gDNLS:GWP N. Fukaya, M. Hayashi, and T. Inui, Global well-posedness on a generalized derivative nonlinear Schrödinger equation. arXiv:1610.00267.GrillSS:Stable:87 M. Grillakis, J. Shatah, and W. Strauss, Stability theory of solitary waves in the presence of symmetry, I. J. Funct. Anal., 74:1(1987), 160–197. GSS:NLS:Stab2 M. Grillakis, J. Shatah, and W. Strauss, Stability theory of solitary waves in the presence of symmetry, II.J. Funct. Anal., 94(1990), 308–348. HaOz:gDNLS M. Hayashi and T. Ozawa. Well-posedness for a generalized derivative nonlinear Schrödinger equation.J. Differ. Equat., 261:10(2016), 5424–5445. IbrMN:NLKG:Scat S. Ibrahim, N. Masmoudi and K. Nakanishi, Scattering threshold for the focusing nonlinear Klein-Gordon equation, Analysis & PDE., 4:3(2011), 405–460.KaupN:DNLS:soliton D. J. Kaup and A. C. Newell, An exact solution for a derivative nonlinear Schrödinger equation. J. Math. Phys., 19:4(1978), 798–801. LeWu:DNLS S. Le Coz and Y. Wu, Stability of multi-solitons for the derivative nonlinear Schrödinger equation. arXiv:1609.04589.LiuSS:gDNLS:Stab X. Liu, G. Simpson, and C. Sulem, Stability of solitary waves for a generalized derivative nonlinear Schrödinger equation.J. Nonlinear Sci., 23:4(2013), 557–583,.LiuPS:DNLS:ISM J. Liu, P. Perry and C. Sulem, Global existence for the derivative nonlinear Schrödinger equation by the method of inverse scattering. arXiv:1511.01173.MartelM:Instab:gKdVY. Martel and F. Merle,Instability of solitons for the critical generalizedKorteweg-de Vries equation. Geom. Funct. 
Anal., 11:1(2001), 74–123.MartelMT:Stab:gKdV Y. Martel, F. Merle and T. P. Tsai, Stability and asymptotic stability for subcritical gKdV equations. Comm. Math. Phys., 231:2(2002), 347–373.MartelMT:Stab:NLS Y. Martel, F. Merle and T. P. Tsai, Stability in H^1 of the sum of K solitary waves for some nonlinear Schrödinger equations. Duke Math. J., 133:3(2006), 405–466.MiaoTX:DNLS:Exist C. Miao, X. Tang and G. Xu, Solitary waves for nonlinear Schrödinger equation with derivative. Submitted. MiaoTX:DNLS:Stab C. Miao, X. Tang and G. Xu, Stability of the traveling waves for the derivative Schrödinger equation in the energy space. To appear in Calculus of Variations and Partial Differential Equations.M-PHY E. Mjølhus, On the modulational instability of hydromagnetic waves parallel to the magnetic field. J. Plasma Phys., 16(1976), 321–334.NakChen:DNLS:Mulsol A. Nakamuraand H.-H. Chen Multi-soliton solutions of a derivative nonlinear Schrödinger equation.J. Phys. Soc. Japan, 49:2(1980), 813–816. NakanishiSchlag:Book:invariant manifold K. Nakanishi and W. Schlag, Invariant Manifolds and Dispersive Hamiltonian Evolution Equations, Zurich Lectures in Advanced Mathematics, Europ. Math. Soc., 2011.PaySatt:Instab L. E. Payne andD. H. Sattinger, Saddle points and instability of nonlinear hyperbolic equations.Israel J. Math., 22:3-4(1975), 273–561.PassS-PHY T. Passot and P. L. Sulem, Multidimensional modulationof Alfvén waves.Phys. Rev. E., 48(1993), 2966–2974. ReedSimon:book:IV M. Reed and B. Simon,Methods of Modern Mathematical Physics: Analysis of Operators. Vol. IV., 1978. Academic Press.San:gDNLS:LWP G. N. Santos, Existence and uniqueness of solution for a generalized nonlinear derivative Schrödinger equation.J. Diff. Equat., 259:5(2015), 2030–2060.SuSu-book C. Sulem and P. L. Sulem, The Nonlinear Schrödinger Equation: Self-Focusing and Wave Collapse. Applied Mathematical Sciences, Vol. 139,Springer New York, 2007.Takaoka:DNLS:LWP H. 
Takaoka, Well-posedness for the one-dimensional nonlinear Schrödinger equation with the derivative nonlinearity. Adv. Diff. Equat., 4:4(1999), 561–580.Wein:stab:SJMA M. I. Weinstein, Modulational stability of ground states of nonlinear Schrödinger equations. SIAM J. Math. Anal., 16:3(1985), 472–491.Wein:stab:CPAMM. I. Weinstein,Lyapunov stability of ground states of nonlinear dispersive evolution equations. Comm. Pure Appl. Math., 39:1(1986), 51–67.
Source: Xingdong Tang and Guixiang Xu, "Stability of the sum of two solitary waves for (gDNLS) in the energy space", arXiv:1702.07858v1 [math.AP], 25 February 2017. http://arxiv.org/abs/1702.07858v1
Subquadratic Algorithms for the Diameter and the Sum of Pairwise Distances in Planar Graphs

A preliminary version of this work was presented at SODA 2017 <cit.>.

Sergio Cabello
Department of Mathematics, IMFM, and Department of Mathematics, FMF, University of Ljubljana, Slovenia. Supported by the Slovenian Research Agency, program P1-0297. Email address: sergio.cabello@fmf.uni-lj.si

First version: January 17, 2017. This version: February 29, 2020.

We show how to compute for n-vertex planar graphs in O(n^11/6 polylog(n)) expected time the diameter and the sum of the pairwise distances. The algorithms work for directed graphs with real weights and no negative cycles. In O(n^15/8 polylog(n)) expected time we can also compute the number of pairs of vertices at distance smaller than a given threshold. These are the first algorithms for these problems using time O(n^c) for some constant c<2, even when restricted to undirected, unweighted planar graphs.

Keywords: planar graph, diameter, Wiener index, distances in graphs, distance counting, Voronoi diagram.

§ INTRODUCTION

Let G be a directed graph with n vertices and arc-lengths λ: E(G)→ℝ. The length of a walk in G is the sum of the arc-lengths along the walk. We assume that G has no cycle of negative length. The distance between two vertices x and y of G, denoted by d_G(x,y), is the minimum length over all paths in G from x to y. While it is common to use the term distance, this is not necessarily a metric. This scenario is an extension of the more common case where the graph G is undirected and the lengths are positive. In that case d_G(·,·) is indeed a metric. In this paper we are interested in computing basic information about the distances between vertices in the graph G.
The diameter of G is diam(G) := max{ d_G(x,y) | x,y ∈ V(G)}, the sum of the pairwise distances of G is σ(G) := ∑_(x,y)∈ (V(G))^2 d_G(x,y), and, for any δ∈ℝ, the distance counter of G is dc(G,δ) := | {(x,y)∈ (V(G))^2 | d_G(x,y)≤δ}|. For undirected graphs, the value σ(G) is essentially equivalent to the average distance in the graph and the so-called Wiener index. The Wiener index is a basic topological index used in mathematical chemistry with thousands of publications. Computing the diameter, the sum of the pairwise distances, or the distance counter of a graph is a fundamental problem in graph algorithms. The obvious way to compute them is via solving the all-pairs shortest path problem (APSP) explicitly and then extracting the relevant information. A key question is whether one can avoid the explicit computation of all the pairwise distances. Roditty and Vassilevska Williams <cit.> show that, for arbitrary graphs with n vertices and O(n) edges, one cannot compute the diameter in O(n^2-δ_0) time, for some constant δ_0>0, unless the strong exponential time hypothesis (SETH) fails. In fact, their proof shows that for undirected, unweighted graphs we cannot distinguish in O(n^2-δ_0) time between sparse graphs that have diameter 2 or larger, assuming the SETH. This implies the same conditional lower bound for computing the sum of the pairwise distances or the distance counter in sparse graphs. Indeed, an unweighted graph G of n vertices has diameter 2 if and only if σ(G) = ∑_x∈ V(G)( deg_G(x)+2(n-1-deg_G(x)) ) = 2n(n-1) - 2 |E(G)|. Similarly, such a graph G has dc(G,2)=n^2 if and only if G has diameter 2.
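As a quick sanity check of these definitions, the following Python sketch computes the diameter, the sum of the pairwise distances, and the distance counter of a small unweighted, undirected graph using the obvious BFS-based APSP baseline, i.e., the quadratic approach that the rest of the paper improves upon. All function names here are ours, not from the paper; as in the definitions above, the aggregation runs over ordered pairs (x,y), including x=y.

```python
from collections import deque

def bfs_distances(adj, s):
    # Single-source distances in an unweighted graph, via BFS.
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def graph_statistics(adj, delta):
    # Baseline: one BFS per vertex, aggregating over all ordered
    # pairs (x, y), including x = y (which contributes distance 0).
    diam = total = count = 0
    for s in adj:
        for d in bfs_distances(adj, s).values():
            diam = max(diam, d)
            total += d
            count += d <= delta
    return diam, total, count

# Star K_{1,3}: n = 4 vertices, |E| = 3, diameter 2, so the identity
# above gives sum of pairwise distances = 2n(n-1) - 2|E| = 18.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(graph_statistics(star, 1))  # (2, 18, 10)
```

On a connected graph with m edges this takes O(nm) time, i.e., quadratic for sparse graphs, which is exactly the barrier discussed above.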
Thus, if we could compute the sum of pairwise distances or the distance counter for sparse graphs in O(n^2-δ_0) time, we could also distinguish in the same time whether the graph has diameter 2 or larger, and the SETH fails. Given such conditional lower bounds, it is natural to shift the interest towards identifying families of sparse graphs where one can compute the diameter or the sum of pairwise distances in truly subquadratic time. Here we provide subquadratic algorithms for directed, planar graphs with no negative cycles. More precisely, we show that the diameter and the sum of the pairwise distances can be computed in O(n^11/6 polylog(n)) expected time, while the distance counter can be computed in O(n^15/8 polylog(n)) expected time. There are efficient algorithms for computing all the distances in a planar graph <cit.> or a specified subset of the distances <cit.>. However, none of these tools seem fruitful for computing our statistics in subquadratic time. Note that our algorithms are the first algorithms using time O(n^c) for some constant c<2, even when restricted to undirected, unweighted planar graphs. Related work. For graphs of bounded treewidth one can compute the diameter and the sum of pairwise distances in near-linear time <cit.>. The distance counter for graphs of bounded treewidth can be handled using the same techniques. Recently, Husfeldt <cit.> has looked at the problem of computing the diameter for undirected, unweighted graphs parameterized by the treewidth and the diameter. For planar graphs, Wulff-Nilsen <cit.> gives an algorithm to compute the diameter and the sum of pairwise distances in unweighted, undirected planar graphs in O(n^2 loglog n/ log n) time, which is slightly subquadratic. Wulff-Nilsen <cit.> extends the result to weighted directed planar graphs with a time bound of O(n^2 (loglog n)^4/ log n). Note that the running time of these algorithms is not of the type O(n^c) for any constant c < 2. Researchers have also looked into near-optimal approximations.
In particular, Weimann and Yuster <cit.> provide a (1+ε)-approximation to the diameter of undirected planar graphs in O((n/ε^4) polylog(n) + 2^O(1/ε) n) time. As it was mentioned by Goldreich and Ron <cit.>, a near-linear time randomized (1+ε)-approximation for the sum of pairwise distances in undirected planar graphs can be obtained using random sampling and an oracle for (1+ε)-approximate distances <cit.>. See the work by Indyk <cit.> for the average distance in arbitrary discrete metric spaces. Our approach. Let us describe the high-level idea of our approach. The main new ingredient is the use of additively-weighted Voronoi diagrams in pieces of the graph: we make a quite expensive preprocessing step in each piece that permits the efficient computation of such Voronoi diagrams in each piece for several different weights. To be more precise, let G be a planar graph with n vertices. We first compute an r-division: this is a decomposition of G into O(n/r) pieces, each of them with O(r) vertices and O(√(r)) boundary vertices. This means that all the interaction between a piece P and the complement goes through the O(√(r)) boundary vertices of P.
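The random-sampling route to approximating the sum of pairwise distances mentioned above in connection with Goldreich and Ron can be sketched as follows: if a source vertex s is chosen uniformly at random, then n times the sum of distances from s is an unbiased estimator of the total. The sketch below (our names; undirected unweighted input) uses exact BFS where the actual near-linear scheme would use a (1+ε)-approximate distance oracle.

```python
import random
from collections import deque

def row_sum(adj, s):
    # Sum of distances from s to every vertex (BFS, unweighted graph).
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values())

def estimate_sum_of_distances(adj, samples, rng=random):
    # If s is uniform over V, then E[n * row_sum(s)] equals the
    # sum of all pairwise distances; we average over several samples.
    verts = list(adj)
    acc = sum(row_sum(adj, rng.choice(verts)) for _ in range(samples))
    return len(verts) * acc / samples

# On a vertex-transitive graph every row sum is equal, so the estimate
# is exact no matter which sources are drawn: for the 4-cycle it is 16.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(estimate_sum_of_distances(cycle4, 3))  # 16.0
```

Standard concentration arguments bound the number of samples needed for a (1+ε)-approximation; the point of this paper, in contrast, is exact computation.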
Once we have the Voronoi diagram, encoded as a subgraph of the dual graph, we have to extract the information from each Voronoi region. Although this is the general idea, several technical details appear. For example, the technology of abstract Voronoi diagrams can be used only when the sites are cofacial.We remark that our algorithms actually compute information for the distances from each vertex x of G separately. Thus, for each vertex x we compute the furthest vertex from x, the sum of the distances from x to all vertices, and the number of vertices at distance at most δ from x, for a given δ∈.Our main result is the following, whose statement makes this clear. Let G be a planar graph with n vertices, real abstract length on its arcs, and no negative cycle. In O(n^11/6(n)) expected time we can compute (x,V(G),G) and (x,V(G),G) for allvertices x of G. For a given δ∈, in O(n^15/8(n)) expected time we can compute (x,V(G),G,δ) for all vertices x of G. The proof of Theorem <ref> is in Section <ref>. Assumptions.We will assume that the distance between each pair of vertices is distinct and there is a unique shortest path between each pair of vertices. This can be enforced with high probability using infinitesimal perturbations or deterministically using lexicographic comparison; see for example the discussion by Cabello, Chambers and Erickson <cit.>.Since our result is a randomized algorithm with running times that are barely subquadratic, the actual method that is used is not very relevant. Randomization. Our algorithm is randomized and it is good to explain the source of this. Firstly, we use random perturbations of lengths of the edges to ensure unique shortest paths. The author thinks that, with some work, this assumption could be removed.Another source of randomization comes from our black-box use of the paperby Klein, Mehlhorn and Meiser <cit.>. They provide a randomized incremental construction of Voronoi diagrams under very general assumptions. 
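To make the notion of an additively-weighted Voronoi diagram in a graph concrete, here is a small standalone sketch (names and data layout are ours): each site carries an additive weight, playing the role of the distance from x to a boundary vertex, and the Voronoi cells can be read off from a single Dijkstra run seeded from all sites at once. This is only a conceptual illustration; the paper cannot afford one Dijkstra run per diagram, which is precisely why it resorts to the preprocessed dual-graph machinery and the randomized incremental construction described here.

```python
import heapq

def graph_voronoi(adj, sites):
    # Additively-weighted Voronoi partition of a graph.
    #   adj:   {u: [(v, w), ...]} with nonnegative arc lengths w;
    #   sites: {s: additive_weight}.
    # Returns {v: (dist, owning site)}, where
    #   dist = min over sites s of weight(s) + d(s, v).
    heap = [(w, s, s) for s, w in sites.items()]
    heapq.heapify(heap)
    cell = {}
    while heap:
        d, owner, u = heapq.heappop(heap)
        if u in cell:          # already claimed by a closer site
            continue
        cell[u] = (d, owner)
        for v, w in adj[u]:
            if v not in cell:
                heapq.heappush(heap, (d + w, owner, v))
    return cell

# Path 0-1-2-3-4 with unit lengths; sites 0 (weight 0) and 4 (weight 1).
line = {0: [(1, 1)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (3, 1)],
        3: [(2, 1), (4, 1)], 4: [(3, 1)]}
print(graph_voronoi(line, {0: 0, 4: 1}))
```

With the additive weights above, vertex 3 is claimed by site 4 (weighted distance 2 versus 3 from site 0), illustrating how the weights shift the bisector.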
Randomized incremental constructions are a standard tool in computational geometry. At a very high level, we compute a random permutation s_1,…,s_n of the sites that define the diagram, and then iteratively compute the Voronoi diagram for the subsets S_i={ s_1,…,s_i }. To compute the diagram for S_i from the diagram for S_i-1, one has to estimate the amount of change that takes place, and this is a random variable. In the case of Voronoi diagrams, it is related to the expected size of a face of the Voronoi diagram. Additional work is needed to keep pointers that allow us to make the updates fast. In particular, for the new site s_i, we have to find the face of the Voronoi diagram for S_i-1 that contains it. Follow-up work. Since the conference version of our paper there has been important progress using some of the techniques introduced here. Voronoi diagrams in planar graphs have been used to construct distance oracles for planar graphs that have subquadratic space and answer queries in logarithmic time <cit.>. Most importantly, Gawrychowski et al. <cit.> provide a better understanding of the structure of Voronoi diagrams in planar graphs that leads to a deterministic construction with a faster preprocessing time. With this, they obtain faster and deterministic algorithms for all the problems we consider here. While some of the ideas they use come from our work, they also provide several new, key insights. Roadmap. We assume that the reader is familiar with planar graphs. In the next section we explain the notation and some basic background. In Section <ref> we explain how to extract information about the vertices enclosed by a dual cycle. In Section <ref> we explain the concept of abstract Voronoi diagrams. In Section <ref> we deal with different definitions of Voronoi diagrams in plane graphs and show that they are equivalent. In Section <ref> we discuss the algorithmic aspects of computing Voronoi diagrams.
In particular, the algorithm performs an expensive preprocessing to be able to produce Voronoi diagrams faster. In Section <ref> we give the data structure that will be used for each piece of an r-division. In Section <ref> we give the final algorithms for planar graphs. We conclude with a discussion. For some readers, it may be more pleasant to read Section <ref> before Sections <ref>-<ref>. This may help in understanding the high-level approach and how everything fits together before delving into the details. § NOTATION AND PRELIMINARIES For running times, we use the notation Õ(·) when we omit polylogarithmic factors in any of the parameters that appear in the statement. For example, if n appears in the discussion, Õ(mr) means O(mr log^c (mnr)) for some constant c. For each natural number n, we use the notation [n]:={1,…,n}. For each set A⊂ℝ^2, we use Ā for its closure and A^∘ for its interior. Graphs. Graphs considered in this paper are directed. We use V(G) and E(G) for the vertex and the arc set of a graph G, respectively. We use the notation xy or e to denote arcs. The tail of an arc xy is x, and y is its head. We use e^R for the reversal of the arc e. In some cases we may have parallel arcs. It should be clear from the context which arc we are referring to. When the orientation of the arc xy is not relevant, we may use xy and refer to it as an (undirected) edge. A closed walk in G is a sequence e_0,…,e_k-1 of arcs with the property that the tail of e_i is the head of e_i-1 for all i∈[k] (indices modulo k). Sometimes a closed walk is given as a sequence of vertices. This uniquely defines the closed walk if there are no parallel edges. A cycle is a closed walk that does not repeat any vertex. In particular, a cycle cannot repeat any arc. We emphasize that the closed walk xy, yx is a cycle. Planarity. A plane graph is a planar graph together with a fixed embedding. The arcs e and e^R are assumed to be embedded as a single curve with opposite orientations.
In the arguments we will use the geometry of the embedding and the plane quite often. For example, we will talk about the faces enclosed by a cycle of the graph. However, all the computations can be done assuming a combinatorial embedding, described as the circular order of the edges incident to each vertex. Let G^* be the dual graph of a plane graph G. We may consider G^* with oriented arcs or with edges, depending on the context. We keep in G^* any parallel edges that may occur. When G is 2-connected, the graph G^* has no loops. For each vertex v and edge e of G, we use v^* and e^* to denote their dual counterparts, respectively. For any set of edges A⊆ E(G), we use the notation A^* = { e^* | e∈ A}. We assume natural embeddings of G and G^* where each dual edge e^* of G^* crosses G exactly once and does so at e. There are no other types of intersections between G and G^*. See Figure <ref> for an example. If we prefer to work with an actual embedding and coordinates, instead of a combinatorial embedding, we can do so. To achieve this, for each edge e of G, we subdivide e and e^* with a common vertex v_e. Then we obtain a planar graph H that contains a subdivision of G and a subdivision of G^*. We can now embed H with straight-line segments in an O(n)×O(n) regular grid <cit.>. In this way we obtain an embedding of G and an embedding of G^* with the property that each edge and each dual edge is represented by a two-segment polygonal curve, and e and e^* cross as desired. With this embedding we can carry out actual operations using coordinates. Vertices of G are usually denoted by x, y, u, v. Faces of G are usually denoted by symbols like f and g. Dual vertices are usually denoted by early letters of the Latin alphabet, like a and b. We use a_∞ for the dual vertex representing the outer face. We will denote cycles and paths in the dual graph with Greek letters, such as γ and π.
Sets of cycles and paths in the dual graph are denoted with capital Greek letters, like Γ or Π. Quite often we identify a graph object and its geometric representation in the embedding. In particular, (closed) walks in the graph define (closed) curves in the plane. We say that a closed walk γ in G^* is non-crossing if there is an infinitesimal perturbation γ_ε of the curve γ that makes it simple. If γ is simple, we can take γ_ε=γ. For each simple closed curve γ in the plane, let int(γ) be the bounded domain of ℝ^2∖γ, and let ext(γ) be the unbounded one. For each non-crossing closed walk γ in the dual graph G^*, let V_in(γ,G)=int(γ_ε)∩ V(G) and V_out(γ,G)=ext(γ_ε)∩ V(G). Note that, since γ is a walk in G^*, the vertices of V(G) are far away from γ and it does not matter which infinitesimal perturbation γ_ε of γ we use. See Figure <ref> for an example. Distances in graphs. In this paper we allow the arcs to have negative lengths λ. However, the graphs cannot have negative cycles, that is, cycles of negative length. In our approach we need that subpaths of shortest paths are also shortest paths. Note that the existence of a cycle of negative length can be checked in near-linear time for planar graphs using algorithms for the shortest-path problem <cit.>. For a graph G, a shortest-path tree from a vertex r∈ V(G) is a tree T that is a subgraph of G and satisfies d_T(r,y) = d_G(r,y) for all y∈ V(G). A shortest-path tree to a vertex r∈ V(G) is a tree T that is a subgraph of G and satisfies d_T(y,r) = d_G(y,r) for all y∈ V(G). For all graphs considered in this paper we assume that, whenever we have an arc e, we also have its reversed arc e^R. We can ensure this by adding arcs with length large enough that no shortest path uses them.
Similarly, adding edges, we can assume that the graphs that we are considering are connected. For a given graph G with edge lengths λ(·), we use G^R for the reversed graph, that is, the graph G with edge lengths λ^R(e) = λ(e^R). A shortest-path tree from r in G^R is the reversal of a shortest-path tree to r in G. Thus, as far as computation is concerned, there is no difference between computing shortest-path trees from or to a vertex. Potentials for directed graphs. Let G be a (directed) graph with arc lengths λ: E(G)→ℝ. A potential for G is a function ϕ: V(G)→ℝ such that: ∀ uv ∈ E(G): ϕ(v) ≤ ϕ(u) + λ(uv). For a potential function ϕ for G, the reduced length λ̃ is defined by ∀ uv ∈ E(G): λ̃(uv) = λ(uv) + ϕ(u) - ϕ(v). The following properties are easy and standard <cit.>. They have been used in several previous works on planar graphs. * Fix any vertex s of G. If G has no negative cycle, then the function ϕ(v)=d_G(s,v) is a potential function. * For each arc uv ∈ E(G) we have λ̃(uv)≥ 0. * A path in G from s to t is a λ-shortest path if and only if it is a λ̃-shortest path. This means that, if G has no negative cycle with respect to the arc lengths λ, once we have computed a single-source shortest-path tree in G from an arbitrary source s, we can solve all subsequent single-source shortest-path problems in G using the reduced lengths, which are non-negative. Vertex-based information. Consider a graph G. For each vertex x∈ V(G), each subset U⊆ V(G), and each real value δ, we define ecc(x,U,G) := max{ d_G(x,u) | u∈ U }, sum(x,U,G) := ∑_u∈ U d_G(x,u), count(x,U,G,δ) := |{ u ∈ U | d_G(x,u)≤δ}|. Our main results will compute these values for all vertices x∈ V(G) when G is planar and U=V(G). Clearly we have diam(G) = max{ ecc(x,V(G),G) | x∈ V(G) }, sum(G) = ∑_x∈ V(G) sum(x,V(G),G), count(G,δ) = ∑_x∈ V(G) count(x,V(G),G,δ). § HANDLING WEIGHTS WITHIN A NON-CROSSING WALK For the rest of this section, let G be a plane graph with n vertices. In this section we are not concerned with distances.
Instead, we are concerned with vertex-weights. Assume that each vertex x of G has a weight ω(x)∈ℝ. For each subset U of vertices and each value δ∈ℝ we define σ(U) := ∑_x∈ U ω(x), μ(U) := max_x∈ U ω(x), κ_≤(U,δ) := |{x∈ U | ω(x)≤δ}|. Let γ be a non-crossing closed walk in the dual graph G^*. We are interested in a way to compute σ(V_in(γ,G)), μ(V_in(γ,G)), and κ_≤(V_in(γ,G),δ) locally, after some preprocessing of G and G^*. Here, locally means that we would like to look only at the edges of γ. In the following, we assume that any non-crossing closed walk γ in G^* is traversed clockwise. In the next section we concentrate on the computation of σ(·) and then explain how to use it for computing κ_≤(·,δ). In Section <ref> we discuss the computation of μ(·). §.§ Sum of weights and counting weights We start by adapting the approach of Park and Phillips <cit.> and Patel <cit.>, who considered the computation of σ(·) when ω(x)=1 for all x∈ V(G). We summarize the ideas in the next lemma to make the presentation self-contained. While most of the paper is simpler for undirected graphs, in the next lemma we do need the directed edges of the dual graph. We are not aware of a similar statement that would work using the undirected dual graph. Let G be a plane graph, directed or not, and let x_0 be a fixed vertex in G. In linear time we can compute a weight function χ: E(G^*)→ℝ with the following property: for every non-crossing closed walk γ in the dual graph G^* that is oriented clockwise and contains x_0 in its interior, we have σ(V_out(γ,G)) = ∑_ab∈γ χ(ab). Take any spanning tree T of G rooted at x_0, and orient its arcs away from x_0; for example, a BFS tree of G from x_0. For each vertex y∈ V(G), let T_y be the subtree of T rooted at y. See Figure <ref>, left. For each vertex y≠ x_0 we proceed as follows. Let x be the parent of y and let ab be the dual arc that crosses xy from left to right. Then we assign χ(ab) = σ(V(T_y)) and χ(ba) = -χ(ab). For any dual edge ab of E(G)^*∖ E(T)^* we set χ(ab)=χ(ba)=0.
This finishes the description of the function χ. It is easy to see that we can compute χ in linear time. From the definition of χ we have ∑_ab∈γ χ(ab) = ∑_ab∈ E(γ)∩ E(T)^* χ(ab) = ∑_xy∈ T, γ crosses xy left-to-right σ(V(T_y)) - ∑_xy∈ T, γ crosses xy right-to-left σ(V(T_y)). Let γ_ε be an infinitesimal perturbation of γ that is simple. We then have int(γ_ε)∩ V(G) = V_in(γ,G) and ext(γ_ε)∩ V(G) = V_out(γ,G). Consider any vertex z of V(G) and let P_z be the path in T from x_0 to z. Since x_0 is in int(γ_ε) and γ_ε is a simple curve, the crossings between P_z and γ_ε, as we walk along P_z, alternate between left-to-right and right-to-left crossings. See Figure <ref>, right. Since γ_ε defines a simple closed curve, the number of crossings is even if z is in int(γ_ε) and odd otherwise. It follows that ω(z) contributes to the sum on the right side of equation (<ref>) either once, if z is in ext(γ_ε), or zero times, if z is in int(γ_ε). The result follows. Lemma <ref> can also be used to compute σ(V_in(γ,G)) because σ(V_in(γ,G)) + σ(V_out(γ,G)) = σ(V(G)). We would like a data structure to quickly handle non-crossing closed walks in the dual graph that will be described compactly. More precisely, at preprocessing time we are given a family Π={π_1,…,π_ℓ} of walks in G^*, and the non-crossing closed walk will be given as a concatenation of some subwalks from Π. Using the function χ(·) and partial sums over the edges of each prefix of a walk in Π, we get the following result. Let G be a plane graph with n vertices and vertex-weights ω(·). Let x_0 be a vertex of G. Let Π={π_1,…,π_ℓ} be a family of walks in G^* with a total of m edges, counted with multiplicity. After O(n+m) preprocessing time, we can answer the following type of queries: given a non-crossing closed walk γ in G^*, described as a concatenation of k subpaths of paths from Π, and with the property that γ is oriented clockwise and contains x_0 in its interior, return σ(V_in(γ,G)) in O(k) time. We compute for G the function χ of Lemma <ref>.
For each walk π_i of Π we proceed as follows. Let e(i,1),…,e(i,m_i) be the arcs of π_i, as they appear along the walk π_i, and define the partial sums S[i,j]=∑_t=1^j χ(e(i,t)) for j=1,…,m_i. It is also convenient to define S[i,0]=0. The values S[i,1],…,S[i,m_i] can be computed in O(m_i) time using that S[i,j]=S[i,j-1]+χ(e(i,j)) for j=1,…,m_i. Repeating the procedure for each π_i∈Π, we spend a total of O(n+m) time. This finishes the preprocessing. Consider a non-crossing closed walk γ in G^* given as the concatenation of k walks π^1,…,π^k, each of them a subpath of some path in Π. Each π^t in the description of γ is of the form e(i(t),j_1(t)),…,e(i(t),j_2(t)) for some index i(t) (so π^t is a subpath of π_i(t)) and some indices j_1(t),j_2(t) with 1≤ j_1(t)≤ j_2(t)≤ m_i(t). Then we have ∑_e∈π^t χ(e) = S[i(t),j_2(t)] - S[i(t),j_1(t)-1]. Because of the properties of χ in Lemma <ref> we have σ(V_out(γ,G)) = ∑_ab∈γ χ(ab) = ∑_t=1^k ∑_ab∈π^t χ(ab) = ∑_t=1^k S[i(t),j_2(t)] - S[i(t),j_1(t)-1]. It follows that we can compute σ(V_out(γ,G)) in O(k) time, and therefore we can also obtain σ(V_in(γ,G)) = σ(V(G)) - σ(V_out(γ,G)) in the same time bound. We now look into the case of computing κ_≤(·,δ). Using a binary search on W={ω(v) | v∈ V(G)}, we achieve the following result. Note that in the following result the dependency on n increases. Consider the setting of Theorem <ref>. After O(n(n+m)) preprocessing time, we can answer the following type of queries: given a value δ∈ℝ and a non-crossing closed walk γ in G^*, described as a concatenation of k subpaths of paths from Π, and with the property that γ is oriented clockwise and contains x_0 in its interior, return κ_≤(V_in(γ,G),δ) in O(k+log n) time. We sort the n weights W={ω(v) | v∈ V(G)} and store them in an array. Let w_1,…,w_n be the resulting weights, so that w_1≤…≤ w_n. For i=1,…,n, we define the weight function ω_i by ω_i(v) = 1 if ω(v)≤ w_i, and ω_i(v) = 0 otherwise. Then, we apply Theorem <ref> for each of the weight functions ω_1,…,ω_n.
This finishes the preprocessing. To compute κ_≤(V_in(γ,G),δ) for a given δ∈ℝ, we make a binary search in W to find w_i = max{ w∈ W | w≤δ} and then use the data structure for the weight function ω_i to get ∑_v∈ V_in(γ,G) ω_i(v) = |{ v∈ V_in(γ,G) | ω(v)≤ w_i }| = κ_≤(V_in(γ,G),w_i) = κ_≤(V_in(γ,G),δ). Thus, a query boils down to a (standard) binary search followed by a single query to the data structure of Theorem <ref>. Therefore the query time is O(k+log n). §.§ Maximum weight The proof of Lemma <ref> heavily uses that the sum has an inverse operation. We are not aware of any such result for computing the maximum weight, μ(V_in(γ,G)) or μ(V_out(γ,G)). We could do something similar to what we did in the proof of Corollary <ref>, namely, a binary search in W={ω(v) | v∈ V(G)} to find the largest weight inside V_in(γ,G). However, the extra preprocessing time in Corollary <ref>, as compared to the preprocessing time of Theorem <ref>, leads to a worse running time in our target application. We now develop a different approach that works for the special type of closed walks arising in our application. Let x_0 be a vertex of G and let T_0 be a spanning tree of G rooted at x_0. We say that a cycle γ in the dual graph G^* is T_0-star-shaped if the root x_0 is in int(γ) and, for each vertex y in V_in(γ,G), the whole path in T_0 from x_0 to y is contained in int(γ). (Note that the concept is not meaningful for closed walks that repeat some vertex; hence our restriction to cycles for the time being.) We define the following family of dual cycles: Ξ(G,T_0) = {γ | γ is a T_0-star-shaped cycle in G^*}. There is a weight function χ_μ: E(G^*)×E(G^*)→ℝ with the following properties: * For every cycle γ=e^*_0 e^*_1 … e^*_k-1 of Ξ(G,T_0) that is oriented clockwise, μ(V_in(γ,G)) = max{χ_μ(e^*_i,e^*_i+1) | i=0,…,k-1} (indices modulo k). * After linear-time preprocessing, we can compute in constant time the value χ_μ(ab,bc) for any two dual edges ab and bc of G^*.
In this proof, for each vertex v, we use T_0[x_0v] to denote the path in T_0 from x_0 to v. For a dual arc e^*, let p(e^*) be the intersection point of e and e^*, let v(e^*) be the vertex of e to the right of e^*, and let π(e^*) be the curve obtained by concatenating T_0[x_0v(e^*)] and the portion of e from v(e^*) to p(e^*). See Figure <ref>, left. We can now provide a definition of the function χ_μ. Consider any two dual edges e^*_i and e^*_j in G^*. If they have no common vertex or if they are equal, then we set χ_μ(e^*_i,e^*_j)=0. This is not very relevant, because such terms never show up in the desired properties. It remains to consider the case when they have a common vertex. For this case we define Γ(e^*_i,e^*_j) as the region of the plane bounded by π(e^*_i), the portion of the dual path e^*_i e^*_j from p(e^*_i) to p(e^*_j), and the reverse of π(e^*_j). We regard the region Γ(e^*_i,e^*_j) as a closed set, with its boundary and the curves that define it. See Figure <ref>, center. Finally, we set χ_μ(e^*_i,e^*_j) = μ(V(G)∩Γ(e^*_i,e^*_j)). We will discuss the efficient computation and representation of χ_μ(e^*_i,e^*_j) later. We claim that χ_μ satisfies the property in the first item. Consider any dual cycle γ=e^*_0 e^*_1 … e^*_k-1 and let A be the closure of int(γ). For each i, let γ_i be the curve described by γ from p(e^*_i) to p(e^*_i+1), and let us use the shorthand Γ_i=Γ(e^*_i,e^*_i+1), where indices are modulo k. See Figure <ref>, right. Note that γ_i is one of the curves used to define the region Γ_i. If γ is in Ξ(G,T_0), then the three curves that bound Γ_i are contained in A and therefore Γ_i is contained in A (i=0,…,k-1). Moreover, since γ_0,…,γ_k-1 is a decomposition of γ and the regions Γ_i and Γ_i+1 (i=0,…,k-1) share the path π(e^*_i+1) on their boundary, the union ∪_i Γ_i is precisely A.
Since the boundary of A does not contain any vertex of G, we get μ(V_in(γ,G)) = max{ω(v) | v∈ V(G)∩int(γ)} = max{μ(V(G)∩Γ_i) | i=0,…,k-1} = max{μ(V(G)∩Γ(e^*_i,e^*_i+1)) | i=0,…,k-1} = max{χ_μ(e^*_i,e^*_i+1) | i=0,…,k-1} (indices modulo k). We have shown that the property in the first item holds. It remains to discuss the computational part. First we discuss an alternative definition of χ_μ that is more convenient for the computation. For each edge uv of G, let R(uv) be the region of the plane bounded by the paths in T_0 from x_0 to the two endpoints of uv and by the edge uv itself. We include in R(uv) the two paths used to define it and the edge uv. If the paths in T_0 from x_0 to u and to v share a part, then the region R(uv) also contains that common part. If uv is in T_0, then the region R(uv) is actually a path contained in T_0. See Figure <ref> for an example. Finally, for each edge uv of G we define φ(uv) as φ(uv) := μ(V(G)∩ R(uv)) = max{ω(x) | x∈ V(G)∩ R(uv)}. For each vertex v of G, we define φ(v) as the maximum weight on the path T_0[x_0v]. This last case can be interpreted as a degenerate case of the previous one. Indeed, if v' is the parent of v in T_0, then φ(v)=φ(vv'). Let f be a face of G and let e_i and e_j be two edges on the boundary of f. We are going to give an alternative definition of χ_μ(e^*_i,e^*_j). If e_j is not the successor of e_i along the counterclockwise traversal of f, let E(f,e_i,e_j) be the set of edges between e_i and e_j in a counterclockwise traversal of f. We do not include e_i and e_j in E(f,e_i,e_j), but the set E(f,e_i,e_j) is nonempty by assumption. See Figure <ref> for an illustration. In this case we have χ_μ(e^*_i,e^*_j) = max{φ(e) | e∈ E(f,e_i,e_j)}. To see that this equality indeed holds, note that the difference between the region Γ(e^*_i,e^*_j) and ⋃{R(e) | e∈ E(f,e_i,e_j)} is just a portion of the interior of the face f, which cannot contain vertices of G. See Figure <ref>, center and right, for an illustration.
If e_i and e_j are consecutive along the counterclockwise traversal of f, then they have a common vertex v and we have χ_μ(e^*_i,e^*_j)=φ(v). The argument in this case is the same: the difference between Γ(e^*_i,e^*_j) and the path T_0[x_0v] is a portion of the interior of f. The second, alternative definition of χ_μ is more suitable for efficient computation. First, we compute the values φ(·). For this we use the undirected version of G. Let C=E(G)∖ E(T_0) be the set of primal edges not contained in T_0. The duals of those edges, C^*, form a spanning tree of the dual graph G^*. The pair (T_0,C) is a so-called tree-cotree decomposition. We root C^* at the dual vertex representing the outer face of G. Each edge e∈ C defines a region A_e of the plane, namely the closed region bounded by the unique simple closed curve contained in T_0+e. Note that, for each uv∈ C, the region R(uv) is precisely the union of A_uv and the two paths in T_0 from x_0 to the endpoints of uv. Each edge e∈ C defines a dual subtree, denoted by C^*_e, which is the component of C^*-e^* without the root. The region A_e corresponds to the faces of G that dualize to vertices of C^*_e. See Figure <ref> for an example. After computing and storing for each face of G the maximum weight of its incident vertices, we can use a bottom-up traversal of the dual tree C^* and the values stored at each face to compute the values μ(A_e∩ V(G)) in linear time for all edges e∈ C. With a top-down traversal of the primal tree T_0 we can also compute and store for each vertex v of G the value μ(T_0[x_0v]∩ V(G)). From these we can compute φ(·) as follows: ∀ v∈ V(G): φ(v) = μ(T_0[x_0v]∩ V(G)), ∀ uv∈ E(T_0): φ(uv) = max{μ(T_0[x_0v]∩ V(G)), ω(u), ω(v)}, ∀ uv∈ C: φ(uv) = max{μ(A_uv∩ V(G)), μ(T_0[x_0u]∩ V(G)), μ(T_0[x_0v]∩ V(G))}. Since each value on the right side is already computed, we spend linear time to compute the values φ(·).
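The top-down traversal that produces the prefix maxima μ(T_0[x_0v]∩V(G)) can be sketched as follows. This is only an illustrative fragment with hypothetical names (the tree is given by children lists); it is written iteratively to avoid recursion limits on deep trees.

```python
def path_maxima(children, root, weight):
    """For each vertex v, compute the maximum weight on the tree path
    from the root to v, by a single top-down (preorder) traversal.

    children: dict mapping each vertex to the list of its children.
    weight:   dict mapping each vertex to its weight omega(v).
    """
    best = {root: weight[root]}
    stack = [root]
    while stack:
        v = stack.pop()
        for c in children.get(v, []):
            # The maximum on the path to c extends the maximum on the path to v.
            best[c] = max(best[v], weight[c])
            stack.append(c)
    return best
```

Each vertex is pushed and popped once, so the traversal is linear, matching the linear-time claim for this step.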
To represent χ_μ compactly, we will use a data structure for range minimum queries: preprocess an array of numbers A[1…m] such that, at query time, we can report min{A[k] | i≤ k≤ j} for any given query pair of indices i<j. There are data structures that use linear-time preprocessing and O(1) time per query <cit.>. (Range maximum queries are handled symmetrically, for example by negating the entries.) This data structure does exploit the full power of random-access memory (RAM). It is trivial to extend this data structure to circular arrays: each query in a circular array corresponds to at most two queries in a linear array. For each face f of G, we build a circular array A_f[·] indexed by the edges, as they appear along the face f. At the entry A_f[e] we store the value φ(e). For each face we spend time proportional to the number of edges on the boundary of the face. Thus, for the whole graph G this preprocessing takes linear time. For two edges e_1 and e_2 on the boundary of a face f and with no common vertex, the value χ_μ(e^*_1,e^*_2), as described in (<ref>), is precisely a range maximum query in the circular array A_f[·], and thus can be answered in constant time. The case when e_1 and e_2 have a common vertex is easier, because for each edge e_1 there are only two such possible edges e_2, one per face having e_1 on its boundary. For our application we will have to deal with pieces that have holes, and thus a part of T_0 may be missing. Because of this, we also need to extend things to a certain type of non-crossing walks. Like before, let G be a plane graph and let T_0 be a rooted spanning tree. Let P be a subgraph of G, with the embedding inherited from G. Assume that the root x_0 of T_0 is in P. A non-crossing closed walk γ in P^* is T_0-star-shaped if the root x_0 is in int(γ) and, for each vertex y in V_in(γ,P), all the vertices of P on the path T_0[x_0y] are contained in int(γ).
We can define the following family of dual non-crossing walks: Ξ(G,P,T_0) = {γ | γ is a T_0-star-shaped non-crossing closed walk in P^*}. Thus, each non-crossing walk in Ξ(G,P,T_0) comes from some cycle of Ξ(G,T_0) when we transform G into P by deleting the edges of E(G)∖ E(P). Let us provide some intuition for the following statement. Consider a plane graph G and a spanning tree T_0 of G. Now we delete some of the edges of G until we get a subgraph P, without changing the embedding. We may have deleted some edges of T_0 as well. However, the root of T_0 remains in P. Some faces of P may contain some of the deleted edges of the spanning tree, E(T_0). That is, when we draw an edge e∈ E(T_0)∖ E(P) back in its original position, it is contained in the closure of a face f of P. In such a case we say that the interior of f intersects T_0. We use b for the sum, over the faces f of P whose interior intersects T_0, of the number of edges of P on the boundary of f. Thus, for each face f in the sum, we count how many edges of P define the face. Let G be a plane graph with n vertices and vertex-weights ω(·), and let T_0 be a rooted spanning tree of G. Let P be a subgraph of G such that the root of T_0 is a vertex of P. Let b be the number of edges of P on all the faces of P whose interior intersects E(T_0). Let Π={π_1,…,π_ℓ} be a family of walks in P^* with a total of m edges, counted with multiplicity. After O(n+m+b^3) preprocessing time, we can answer the following type of queries: given a closed walk γ in Ξ(G,P,T_0), described as a concatenation of k subpaths of paths from Π and oriented clockwise, return μ(V_in(γ,P)) in O(k) time. Although the dependency on b in the time bound can perhaps be reduced, it is sufficient for our purposes because currently the bottleneck is somewhere else. We may assume that V(G)=V(P)=V(T_0). To see this, first note that we can remove the edges of G that are not in E(T_0)∪ E(P), because they do not play any role.
Then we can replace each maximal subtree of T_0-V(P) by edges that connect vertices of P without changing the set Ξ(G,P,T_0). For this we just need the ancestor-descendant relation between the vertices incident to the face. See Figure <ref> for the transformation. Thus, from now on we restrict ourselves to the case where V(G)=V(P)=V(T_0). Let F be the set of faces of P that contain some edges of E(T_0)∖ E(P), and consider the set A = {(e_1,e_2)∈ E(P)^2 | e_1,e_2∈ E(f) for some f∈ F}. It is clear that A has O(b^2) pairs. For each (e_1,e_2)∈ A, let f be the face of F that has e_1 and e_2 on its boundary, and compute a dual path π_G(e^*_1,e^*_2) in G^* from e^*_1 to e^*_2 whose other edges are contained in the face f. This means that all the edges of π_G(e^*_1,e^*_2), except e^*_1 and e^*_2, are edges of E(T_0)^*∖ E(P)^*. (If e_1 and e_2 are cofacial in G, then π_G(e^*_1,e^*_2)=e^*_1 e^*_2.) See Figure <ref> for an example. In particular, since E(T_0)∖ E(P) is a forest on b vertices, it has at most b edges, and the path π_G(e^*_1,e^*_2) has O(b) edges. Thus, the paths π_G(e^*_1,e^*_2), over all elements (e_1,e_2)∈ A, together have O(b^3) edges. The paths {π_G(e^*_1,e^*_2) | (e_1,e_2)∈ A} are used to naturally transform walks in P^* into walks in G^*. Indeed, if we have a walk α in P^* and we replace each occurrence of e^*_1 e^*_2, where (e_1,e_2)∈ A, by π_G(e^*_1,e^*_2), then we obtain a walk in G^*. We compute for G the function χ_μ of Lemma <ref>. For each element (e_1,e_2)∈ A, we compute and store φ(e^*_1,e^*_2) = max{χ_μ(ab,bc) | ab and bc consecutive dual edges along π_G(e^*_1,e^*_2)}. Using the properties of χ_μ stated in Lemma <ref> and using that the paths {π_G(e^*_1,e^*_2) | (e_1,e_2)∈ A} have O(b^3) edges, we can do this step in O(n+b^3) time. For each walk π_i of Π we proceed as follows. Let e^*_1,…,e^*_m_i be the edges of π_i, as they appear along π_i.
We make an array A_i[1..(m_i-1)] such that A_i[j] = φ(e^*_j,e^*_j+1) if (e_j,e_j+1)∈ A, and A_i[j] = χ_μ(e^*_j,e^*_j+1) if (e_j,e_j+1)∉ A. Finally, we store the array A_i[·] for range maximum queries <cit.>; see the discussion at the end of the proof of Lemma <ref>. We spend O(m_i) preprocessing time for π_i and can then find max A_i[j..j'] in constant time for any given indices 1≤ j<j'<m_i. This step, over all paths π_i∈Π together, takes O(∑_i m_i)=O(m) time. This finishes the preprocessing. Assume that we are given a non-crossing closed walk γ in Ξ(G,P,T_0), given as the concatenation of k walks π^1,…,π^k, each of them a subpath of some path in Π. Let γ_G be the closed walk obtained from γ as follows: for each (e_1,e_2)∈ A and each appearance of e^*_1 e^*_2 in γ, we replace e^*_1 e^*_2 by π_G(e^*_1,e^*_2). Note that γ_G is a closed walk in G^*. In fact, γ_G is a cycle in G^*, because geometrically each single replacement occurs within a single face of F, and the replacements within a face do not introduce crossings because γ was non-crossing. Moreover, because each replacement is a rerouting within a face of F and V(G)=V(P), we have V_in(γ,P)=V_in(γ_G,G). From Lemma <ref> we thus get that μ(V_in(γ,P)) = μ(V_in(γ_G,G)) = max{χ_μ(ab,bc) | ab and bc consecutive dual edges along the cycle γ_G}. As in the proof of Theorem <ref>, we can break the computation of χ_μ(·) for pairs of consecutive edges of γ_G into k parts that occur within some path π_i∈Π (after the replacements) and k parts that use the last edge of π^t and the first edge of π^t+1 (t=1,…,k, indices modulo k). The part within a path π_i∈Π can be retrieved in constant time from the range maximum query structure for A_i[·]. The part combining consecutive subpaths can be computed in constant time, but there are two cases to consider. Let ab be the last dual edge of π^t and let bc be the first dual edge of π^t+1. If the corresponding pair of primal edges is in A, then we have to use φ(ab,bc).
Otherwise we can directly use χ_μ(ab,bc), which can be computed in constant time (second item of Lemma <ref>). Finally, we take the maximum of these 2k values. When G=P, we have Ξ(G,P,T_0)=Ξ(G,T_0) and b=0, and Theorem <ref> simplifies to the following. Let G be a plane graph with n vertices and vertex-weights ω(·), and let T_0 be a rooted spanning tree of G. Let Π={π_1,…,π_ℓ} be a family of paths in G^* with a total of m edges, counted with multiplicity. After O(n+m) preprocessing time, we can answer the following type of queries: given a cycle γ in Ξ(G,T_0), described as a concatenation of k subpaths of paths from Π and oriented clockwise, return μ(V_in(γ,G)) in O(k) time. § ABSTRACT VORONOI DIAGRAMS Abstract Voronoi diagrams were introduced by Klein <cit.> as a way to handle together several of the different types of Voronoi diagrams that were appearing in the literature. The concept is restricted to the plane ℝ^2. Abstract Voronoi diagrams are defined using the concepts of bisectors and dominant regions. We will use the definition by Klein, Langetepe and Nilforoushan <cit.>, as it seems the most recent and general one. For the construction, we use the randomized incremental construction of Klein, Mehlhorn and Meiser <cit.>, also discussed by Klein, Langetepe and Nilforoushan <cit.> for their framework. In our notation, we will introduce an A in front to indicate we are talking about objects of the abstract Voronoi diagram, as in AVR and AVD below. Let S be a finite set, which we refer to as abstract sites. For each ordered pair (p,q)∈ S^2 of distinct sites, we have a simple planar curve J(p,q) and an open domain D(p,q) whose boundary is J(p,q). We refer to the pair (J(p,q),D(p,q)) as an abstract bisector. Define for each p∈ S the abstract Voronoi region AVR(p,S)=⋂_q∈ S∖{p} D(p,q). Then the abstract Voronoi diagram of S, denoted by AVD(S), is defined as AVD(S)=ℝ^2∖⋃_p∈ S AVR(p,S). The intuition is that the set D(p,q) is the set of points that are closer to p than to q and that J(p,q) plays the role of a bisector.
Then, (p,S) stands for the points that are dominated by p, when compared against all q∈ S∖{ p}. Note that (p,S) is an open set because it is the intersection of open sets. The abstract Voronoi diagram (S) would then be the set of points where no site dominates, meaning that at least two sites are "equidistant" from the point. However, the theory does not rely on any such interpretations. This makes it very powerful but less intuitive: some arguments become more cumbersome. While these concepts can be considered in all generality, the theory is developed assuming that certain properties, called axioms, are satisfied. A system of abstract bisectors { ((p,q),(p,q))|p,q∈ S, p≠ q} is admissible if it satisfies the following properties: (A1) For all distinct p,q∈ S, J(p,q)=J(q,p). (A2) For all distinct p,q∈ S, the plane ^2 is the disjoint union of D(p,q), J(p,q) and D(q,p). (A3) There exists a special point in the plane, which we call p_∞, such that, for all distinct p,q∈ S, the curve J(p,q) passes through p_∞. [Usually the axiom states that the stereographic projection to the sphere of the curve J(p,q) can be completed to a closed Jordan curve passing through the north pole. For us it will be more convenient to project from a different point and complete all curves within the plane to make them pass through p_∞.] (A4) For each subset S' of S with 3 elements and each p∈ S', the abstract Voronoi region (p,S') is path connected. (A5) For each subset S' of S with 3 elements we have ^2=⋃_p∈ S'(p,S'). For the rest of the discussion on abstract Voronoi diagrams, we assume that these axioms are satisfied. Note that axioms (A4)-(A5) are not the ones given in the definition of <cit.> but, as they show in their Theorem 15, they are equivalent. In this regard, our definition is closer to the one given by Klein <cit.>.
Since we are going to work with very natural, non-pathological Voronoi diagrams, any of the sets of axioms used in any of the other papers we have encountered also works in our case. Assuming these axioms, one can show that the abstract Voronoi diagram (S) is a plane graph <cit.>. This brings a natural concept of abstract Voronoi vertex and abstract Voronoi edge as those being vertices (of degree ≥ 3) and edges in the plane graph (S). Klein, Mehlhorn and Meiser provide a randomized incremental construction of abstract Voronoi diagrams. One has to be careful about what it means to compute an abstract Voronoi diagram, since it is not even clear how the input is specified. For their construction, they assume as primitive operation that one can compute the abstract Voronoi diagram of any five abstract sites. The output is described by a plane graph H and, for each vertex and each edge of H, a pointer to a vertex or an edge, respectively, in the abstract Voronoi diagram for at most four abstract sites. Thus, we can say that an edge e of H corresponds to some precise abstract edge e' of (S'), where |S'|≤ 4. Whether (S') can be computed explicitly or not depends on how the input bisectors can be manipulated. Klein, Mehlhorn and Meiser consider a special case, which is the one we will be using, where the basic operation requires the abstract Voronoi diagram of only four sites. (This particular case is not discussed by Klein, Langetepe and Nilforoushan <cit.>, but they discuss the general case.) Assume that we have an admissible system of abstract bisectors for a set S of m sites. The abstract Voronoi diagram of S can be computed in O(mlog m) expected time using an expected number of O(mlog m) elementary operations.
If the abstract Voronoi diagram of any three sites contains at most one abstract Voronoi vertex, besides the special point p_∞, then an elementary operation is the computation of an abstract Voronoi diagram for four sites. § VORONOI DIAGRAMS IN PLANAR GRAPHS We will need additively weighted Voronoi diagrams in plane graphs. We first define Voronoi diagrams for arbitrary graphs. Then we discuss a representation using the dual graph that works only for plane graphs and discuss some folklore properties. See for example the papers of Marx and Pilipczuk <cit.> or Colin de Verdière <cit.> for similar intuition. The dual representation is the key to be able to use the machinery of abstract Voronoi diagrams as a black box. §.§ Arbitrary graphs Let G be an arbitrary graph, not necessarily planar, with no negative cycles. A site s is a pair (v_s,w_s), where v_s∈ V(G) is its location, and w_s∈ is its weight, possibly negative. With a slight abuse of notation, we will use s instead of v_s as the vertex. For example, for a site s we will write s∈ V(G) instead of v_s∈ V(G) and d_G(s,x) instead of d_G(v_s,x). Let S be a set of sites in G. For each s∈ S, its graphic Voronoi region, denoted _G(s,S), is defined by _G(s,S)  = { x∈ V(G)|∀ t∈ S∖{s}:  w_s+d_G(s,x) ≤ w_t + d_G(t,x) }. See Figure <ref> for an example. Note that we are using the distance from the sites to the vertices to define the graphic Voronoi cells. For directed graphs, using the reverse distance from the vertex to the sites would define different graphic regions (in general). However, this is equivalent to using the reversed graph G^R of G. Even assuming that all distances in G are distinct, we may have w_s+ d_G(s,x) = w_t + d_G(t,x) for some vertex x. Also, some Voronoi cells may be empty. In our case, we will only deal with cases where these two things cannot happen. We say that the set S of sites is generic when, for each x∈ V(G) and for each distinct s,t∈ S, we have w_s+ d_G(s,x) ≠ w_t + d_G(t,x).
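As an illustration of the definitions just given, the graphic Voronoi cells and the genericity test can be computed directly once the distances d_G(s,x) are available. In the sketch below the distances are stored in a hypothetical dictionary d, and under genericity each vertex is assigned to its unique minimizing site; this is only an unpacking of the definitions, not the algorithm developed later in the paper.

```python
# Sketch of graphic Voronoi regions: a site is a (location, weight) pair,
# and vertex x belongs to the cell of the site minimizing w_s + d_G(s, x).

def graphic_cells(sites, vertices, d):
    """Assign every vertex x to the site s minimizing w_s + d_G(s, x).
    Under genericity the minimizer is unique, so cells partition V(G)."""
    cells = {s: set() for s in sites}
    for x in vertices:
        best = min(sites, key=lambda s: s[1] + d[s[0]][x])
        cells[best].add(x)
    return cells

def is_generic(sites, vertices, d):
    """No vertex is at equal weighted distance from two distinct sites."""
    for x in vertices:
        vals = [s[1] + d[s[0]][x] for s in sites]
        if len(vals) != len(set(vals)):
            return False
    return True

# Toy instance: a path a - b - c with unit-length edges.
vertices = ["a", "b", "c"]
d = {"a": {"a": 0, "b": 1, "c": 2}, "c": {"a": 2, "b": 1, "c": 0}}
sites = [("a", 0), ("c", 1)]        # site at c carries additive weight 1
assert is_generic(sites, vertices, d)
cells = graphic_cells(sites, vertices, d)
assert cells[("a", 0)] == {"a", "b"} and cells[("c", 1)] == {"c"}
```

The additive weight at c pushes the boundary towards c, so b falls into the cell of a, as the assertions check.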
The set S is independent when each Voronoi cell is nonempty. It is easy to see that, if S is a generic, independent set of sites, then s∈_G(s,S) and each vertex x of V(G) belongs to precisely one graphic Voronoi cell _G(s,S) over all s∈ S. The graphic Voronoi diagram of S (in G) is the collection of graphic Voronoi regions: _G(S)  = {_G(s,S)| s∈ S}. The following property is standard. Let S be a generic, independent set of sites. Then for each s∈ S the following hold: * For each x in _G(s,S), the shortest path from s to x is contained in _G(s,S). * _G(s,S) induces a connected subgraph of G. Let x be a vertex of _G(s,S) and let P(s,x) be the shortest path in G from s to x. Assume, for the sake of reaching a contradiction, that some vertex y on P(s,x) is contained in some other Voronoi cell _G(t,S), where t≠ s. Because S is generic and y∈_G(t,S), this means that w_t+d_G(t,y)<w_s+d_G(s,y). However, this implies that w_t+d_G(t,x)≤ w_t+d_G(t,y)+d_G(y,x) < w_s+d_G(s,y)+d_G(y,x) = w_s+d_G(s,x), where in the last equality we have used that y lies in the shortest path P(s,x). The obtained inequality w_t+d_G(t,x)<w_s+d_G(s,x) contradicts the property that x∈_G(s,S). This proves the first item. To show the second item, note that the subgraph of G induced by _G(s,S) contains (shortest) paths from s to all vertices of _G(s,S) because of the previous item. For each two sites s and t, we define the graphic dominance region of s over t as _G(s,t)  = _G(s,{s,t}) = { x∈ V(G)| w_s+d_G(s,x) ≤ w_t+d_G(t,x) }. For each s∈ S we have _G(s,S)=⋂_t∈ S∖{s }_G(s,t). We note that _G(s,S)   = { x∈ V(G)|∀ t∈ S∖{s}: w_s+d_G(s,x) ≤ w_t+d_G(t,x) }= ⋂_t∈ S∖{s}{ x∈ V(G)| w_s+d_G(s,x) ≤ w_t+d_G(t,x)}= ⋂_t∈ S∖{s }_G(s,t).
The aim is to define Voronoi diagrams geometrically using bisectors, where a bisector is just going to be a cycle in the dual graph. Consider two sites s and t in G and define E_G(s,t)  = { xy∈ E(G) | x∈_G(s,t),  y∈_G(t,s)}. Thus, we are taking the edges that have each endpoint in a different graphic Voronoi region of _G({s,t}). We denote by E^*_G(s,t) their dual edges. Let { s,t} be a generic and independent set of sites. Then the edges of E^*_G(s,t) define a cycle γ in G^*. Moreover, if s∈ V_(γ,G), then V_(γ,G)= _G(s,t) and V_(γ,G)=_G(t,s). Let A be an arbitrary set of edges of G and let A^* be the set of their dual edges. It is well known that A^* is the edge set of a cycle in G^* if and only if G-A has precisely two connected components. Moreover, two faces u^* and v^* of G^* are in the same side of the cycle defined by A^* if and only if u and v are in the same connected component of G-A. See for example the proof in <cit.> or <cit.>. When { s,t} is generic and independent, we have _G(s,t)≠∅, _G(t,s)≠∅, and V(G) is the disjoint union of _G(s,t) and _G(t,s). This means that E_G(s,t) is the edge cut between _G(s,t) and its complement, _G(t,s). Moreover, by Lemma <ref>, the subgraphs of G induced by _G(s,t) and by _G(t,s) are connected. Therefore G-E_G(s,t) has precisely two connected components, and thus E^*_G(s,t) is the edge set of a cycle γ in G^*. Assume that s∈ V_(γ,G). Since _G(s,t) is the vertex set of the connected component of G-E_G(s,t) that contains s, the faces of { u^*| u∈_G(s,t)} are in (γ) and the faces { v^*| v∈_G(t,s)} are in (γ). Since a vertex u of G is the unique vertex of G contained in the dual face u^* of G^*, the result follows. When s and t are independent and generic, we define the bisector of s and t, denoted as _G(s,t), as the curve in the plane defined by the cycle of E^*_G(s,t), as guaranteed in the previous lemma. See Figure <ref> for an example.
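As a small illustration, the edge set E_G(s,t) underlying the bisector is just the cut between the two dominance regions; the dual cycle then consists of the duals of exactly these edges. A sketch, assuming the two regions have already been computed as vertex sets and the graph is given as a list of primal edges:

```python
# Sketch: edges of E_G(s,t), i.e. the cut between the dominance regions.

def bisector_cut(edges, cell_s, cell_t):
    """Edges with one endpoint in each dominance region."""
    return [(x, y) for (x, y) in edges
            if (x in cell_s and y in cell_t) or (x in cell_t and y in cell_s)]

# Toy 2x2 grid: vertices 0..3, regions split into left {0,2} / right {1,3}.
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
cut = bisector_cut(edges, cell_s={0, 2}, cell_t={1, 3})
assert cut == [(0, 1), (2, 3)]
```

In the toy grid the cut consists of the two horizontal edges, so the dual cycle crosses exactly those and separates the left faces from the right ones.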
We also define D_G(s,t) as the connected part of ^2∖_G(s,t) that contains s. We then have D_G(s,t) = ( ⋃_v∈_G(s,t)v^*)^∘ . Here we have used the notation mentioned earlier: A and A^∘ denote the closure and the interior of a set A⊂^2, respectively. Note that the pair (_G(s,t),D_G(s,t)) is the type of pair used to define abstract Voronoi diagrams. From now on, whenever we talk about the abstract Voronoi diagram of G, we refer to the abstract Voronoi diagram defined by the system of bisectors { (_G(s,t),D_G(s,t))| s,t∈ S, s≠ t}. We have defined Voronoi regions of plane graphs in two different ways: using distances in the primal graph G, called graphic Voronoi regions, and using bisectors defined as curves in the plane, called abstract Voronoi regions. We next make sure that the definitions match, when restricted to vertices of G. Let G be a plane graph and let S be a generic, independent set of sites. Then, for each s∈ S, we have _G(s,S)= V(G)∩(s,S). Recall the definition (s,S)  = ⋂_t∈ S∖{ s} D_G(s,t). Because of equation (<ref>) we have D_G(s,t)  = ( ⋃_v∈_G(s,t)v^*)^∘ , and we obtain that (s,S)   = ⋂_t∈ S∖{ s}( ⋃_v∈_G(s,t)v^*)^∘ = ( ⋃_v∈⋂_t∈ S∖{ s}_G(s,t)v^*)^∘= ( ⋃_v∈_G(s,S)v^*)^∘, where in the last equality we used Lemma <ref>. Since the only vertex of V(G) contained in the dual face v^* is precisely v, and it lies in the interior of v^*, we get that V(G)∩(s,S)=_G(s,S). We cannot use the machinery of abstract Voronoi diagrams for arbitrary sites because of axiom (A3). In our case bisectors may not pass through a common "infinity point" p_∞. Indeed, for arbitrary planar graphs we could have two bisectors that never intersect. However, we can use it when all the sites are in the outer face of G. We next show this. Let G be a plane graph and let S be a generic, independent set of sites located in the outer face of G. Let a_∞ be the vertex of G^* dual to the outer face of G.
Then the system of abstract bisectors { (_G(s,t),D_G(s,t))| s,t∈ S, s≠ t} is admissible, where a_∞ plays the role of p_∞ in axiom (A3). It is clear that the system of abstract bisectors { (_G(s,t),D_G(s,t))| s,t∈ S, s≠ t} satisfies axioms (A1) and (A2) of the definition. We next show the validity of axiom (A3). Consider any two sites s and t of S. Since _G(s,S) and _G(t,S) are nonempty, also _G(s,t) and _G(t,s) are nonempty. Since s and t are located in the outer face of G, the bisector _G(s,t) passes through a_∞. Indeed, the dual faces s^* and t^* have to be on different sides of the dual cycle _G(s,t) and, since s and t are on the outer face of G, that can happen only if _G(s,t) passes through a_∞. Thus, if we take the geometric position of a_∞ as p_∞, all the bisecting curves pass through p_∞ and axiom (A3) holds. For axiom (A4), consider any three sites r,s,t of S and let S'={r,s,t}. As noted in the proof of Lemma <ref>, we have (s,S') = ( ⋃_v∈_G(s,S')v^*)^∘. Since the vertices of _G(s,S') form a connected subgraph of G (Lemma <ref>), the domains v^*, as v ranges over _G(s,S'), are glued through the primal edges, and (s,S') is path connected. This proves axiom (A4). Axiom (A5) is shown similarly. Following the notation and the observations from the previous paragraph, we use that (s,S') = ⋃_v∈_G(s,S') v^* and that V(G)= _G(r,S')∪_G(s,S')∪_G(t,S'), to conclude that (r,S')∪(s,S')∪(t,S') = ⋃_v∈ V(G) v^* = ^2. The abstract Voronoi diagram (S) is a plane graph, and by construction it is contained in the dual graph G^*. An abstract Voronoi vertex corresponds to a vertex in the dual graph G^*. An abstract Voronoi edge corresponds to a path in the dual graph G^*. More precisely, any abstract Voronoi edge corresponds to a portion of a bisector _G(s,t) whose endpoints are vertices of G^*. We further have the following observation regarding the structure of abstract Voronoi diagrams.
The abstract Voronoi diagram of any 3 sites in the outer face of G has at most one vertex, besides a_∞. Assume that S is the set of 3 sites. Since each site s∈ S is in the outer face, the abstract Voronoi region (s,S) contains the dual face s^*, which is incident to a_∞. It follows that all faces have a common vertex at a_∞. Since a plane graph with 3 faces can have at most 2 vertices of degree at least 3, the result follows. §.§ Dealing with holes Let G be a plane graph and let P be a connected subgraph of G, with the embedding inherited from G. Consider the graphic Voronoi diagram in P using the distances in G. Thus, for a set of weighted sites S in P and a site s∈ S we are interested in the vertex subsets _P,G(s,S)  = _G(s,S)∩ V(P). Strictly speaking, _P,G(s,S) is a graphic Voronoi region in the complete graph with vertex set V(P) and edge lengths defined by the distances in G. We also have the graphic Voronoi diagram {_P,G(s,S)| s∈ S}. However, this interpretation in the complete graph will not be very useful for us because it does not use planarity. We would like to represent these Voronoi diagrams using the dual graph P^*. In particular, we have to define bisectors using the graph P^*. Given two sites s and t, there is a non-crossing closed walk γ in P^* such that _P,G(s,{s,t}) and _P,G(t,{s,t}) are precisely V_(γ,P) and V_(γ,P). Moreover, γ is obtained from _G(s,t) by deleting the edges of (E(G)∖ E(P))^* from the sequence of edges defining _G(s,t). Let e^*_1,…,e^*_k be the sequence of edges of G^* that define _G(s,t). If in this sequence we delete all appearances of e^* for e∈ E(G)∖ E(P), then we obtain a subsequence (e'_1)^*,… ,(e'_ℓ)^* that defines a closed walk γ in P^*. See Figure <ref> for a small example and Figure <ref> for a larger example. The resulting closed walk is non-crossing, as can be seen by induction on the number of deleted edges.
Indeed, if a plane graph H' is obtained from a plane graph H by deleting an edge e, then (H')^* is obtained from H^* by contracting e^*. Any non-crossing walk in H^* remains non-crossing when contracting the edge e^*∈ E(H^*), and the interior of the walk contains exactly the same subset of the vertices of H'. Thus, it also follows by induction that the vertices of V(P) in the interior of _G(s,t) remain in the interior during the contractions of the edges e^* for e∈ E(G)∖ E(P), and therefore _P,G(s,S)  = _G(s,S)∩ V(P) = V_(γ,P)∩ V(P)= V_(γ,P). The same argument works for V_(γ,P). Note that our description of the transformation from _G(s,t) to γ using dual edges is simpler than a description using dual vertices. This is so because the relevant faces may also change with deletions of edges that are not crossed by _G(s,t). The assumption that P is connected is needed. Otherwise P has faces that are not simply-connected, and closed walks of G^* may become empty in P^* because they do not cross any edge of P. Also, when P has multiple components, there are curves that intersect the same edges of P in the same order, but contain a different set of connected components in their interior. Thus, additional information beyond the edges of P^* would be needed to encode the curves. We use _P,G(s,t) for the non-crossing closed walk γ in P^* defined by Lemma <ref>. To use abstract Voronoi diagrams we have the following technical problem: in general, the curve _P,G(s,t) is not simple. We can work around this symbolically, as follows. Combinatorially, we keep encoding the bisector as a closed walk in the dual graph P^*. However, the geometric curve associated with a description goes out of the dual graph to become simple.
For each two consecutive edges aa' and a'a” of each such closed walk, we always make a small shortcut in a small neighborhood of a' that avoids a'. For example, we can reroute the arcs along small concentric circles, where we use a larger radius when the distance along the face is smaller. See Figure <ref> for an example. There are different ways to do this rerouting. In any case, the algorithm of Theorem <ref> to build the abstract Voronoi diagram never uses coordinates. In such a way we obtain true geometric simple curves associated with each such bisector. The transformation is not made for the outer face. Indeed, to use the technology of abstract Voronoi diagrams, we need all the bisectors to pass through a common point p_∞, which is a_∞. Thus, we do not want to make any rerouting at the outer face. This is not a problem if each bisector _P,G(s,t) passes exactly once through the vertex a_∞. If G and P have the same outer face, then _P,G(s,t) only passes once through a_∞. Thus, we will restrict attention to the case when G and P have the same outer face. The rest of the presentation for the case G=P goes through essentially unchanged. However, note that Lemma <ref> does not hold in this case. The reason is that the shortest path from s to x may have edges outside E(P). An easier way to visualize things is to consider the creation of abstract Voronoi diagrams in the graph G^* and then consider the deletion of E(G)∖ E(P) in G (and in G^*). To summarize, we obtain the following. Let G be a plane graph and let P be a connected subgraph of G such that G and P have the same outer face. Let S be a generic, independent set of sites located in the outer face of P. Let a_∞ be the vertex of G^* dual to the outer face of G. Then the system of abstract bisectors { (_P,G(s,t),D_P,G(s,t))| s,t∈ S, s≠ t} is admissible, where a_∞ plays the role of p_∞ in axiom (A3). The abstract Voronoi diagram of any 3 sites in the outer face of P has at most one vertex, besides a_∞.
For each s∈ S, we have _P,G(s,S)= V(P)∩(s,S). Because of Lemma <ref>, the system of abstract bisectors { (_G(s,t),D_G(s,t))| s,t∈ S, s≠ t} is admissible. Because of Lemma <ref>, the system of abstract bisectors { (_P,G(s,t),D_P,G(s,t))| s,t∈ S, s≠ t} is obtained by deleting the edges of (E(G)∖ E(P))^* from the description of the bisectors, which amounts to contracting those edges in the dual graph. Consider a contraction of a dual edge and how it transforms the bisectors. If we keep the bisectors as simple curves (not self-touching), as discussed above, then the transformation of the bisectors during an edge contraction can be done with a homeomorphism of the plane onto itself. Since the properties of being an admissible system of abstract bisectors are topological, we obtain that { (_P,G(s,t),D_P,G(s,t))| s,t∈ S, s≠ t} is admissible and a_∞ plays the role of p_∞. The number of vertices does not change with the homeomorphism. Also, for any s∈ S, the set V(P)∩(s,S) does not change during the homeomorphism, and therefore _P,G(s,S)= V(P)∩(s,S). Remark. Instead of using rerouting in the dual graph, another alternative is to use a variant of the line graph of the dual graph. The variant is designed to ensure that all the bisectors pass through a_∞, so that we can use abstract Voronoi diagrams. Let us spell out an adapted construction of the graph, which we denote by L_∞. The vertex set of L_∞ is E(G)∪{ a_∞}. Thus, the vertices of L_∞ are the edges of G together with the vertex a_∞ corresponding to the outer face of G. For each face f of G that is not the outer face, we put an edge in L_∞ between each pair of edges that appear in f. For each edge e on the outer face of G, we put an edge in L_∞ between e and a_∞. This finishes the description of L_∞. The graph L_∞ has a natural drawing inherited from the embedding of G, which is not necessarily an embedding. (L_∞ has large cliques when G has large faces.) However, we can use a drawing of L_∞ to represent the curves.
See Figure <ref> for an example of (a drawing of) L_∞. Any non-crossing walk in G^* that uses each edge at most once corresponds to a cycle in L_∞ because the portion of the walk inside a face f of G corresponds to a non-crossing matching inside f between some edges on the boundary of f, and this matching is part of L_∞. Intuitively, the edges of L_∞ represent shortcuts connecting edges of G directly without passing through dual vertices. § ALGORITHMIC ASPECTS OF VORONOI DIAGRAMS IN PLANAR GRAPHS For the rest of this section, we assume that G is a connected plane graph, P is a connected subgraph of G, and the outer face of P and G coincide. We use r for the number of vertices in P. Let X be a set of b vertices in the outer face of P. We are interested in placing the sites at the vertices of X. In this section we assume that the distances d_G(·,·) from each vertex of X to each vertex of P are known and available. We remark that the arcs of G may have negative weights, but G should not have negative cycles. We next provide tools to manipulate portions of the bisectors and construct Voronoi diagrams in planar graphs. For any two generic, independent sites {s,t} placed at X we can compute _P,G(s,t) in O(r) time. For each vertex x∈ V(P), we compare w_s+d_G(s,x) and w_t+d_G(t,x) to decide whether x belongs to _P,G(s,{ s,t }) or _P,G(t,{ s,t }). Note that w_s+d_G(s,x)≠ w_t+d_G(t,x) because we assume generic sites. The sets _P,G(s,{ s,t }) and _P,G(t,{ s,t }) are nonempty because we assume independent sites. Now we can mark the edges of P with one endpoint in each of those sets and construct the closed walk _P,G(s,t) using the dual graph. Consider any two vertices {v_s,v_t}⊆ X as placements of sites. Consider the family of bisectors _P,G((v_s,w_s),(v_t,w_t)) as a function of the weights w_s and w_t. There are at most O(r) different bisectors.
We can compute and store all the bisectors in O(r^2) time such that, given two values w_s and w_t, the corresponding representation of _P,G((v_s,w_s),(v_t,w_t)) is accessed in O(log r) time. From the definition it is clear that _P,G((v_s,w_s),(v_t,w_t))= _P,G ((v_s,0),(v_t,w_t-w_s)). Thus, it is enough to consider the bisectors _P,G((v_s,0),(v_t,w)) parameterized by w∈. Each bisector _G((v_s,0),(v_t,w)) is a cycle in the dual graph G^* and the cycles are nested: as w increases, the graphic dominance region _G(s,t) monotonically grows and D_G(s,t) also monotonically increases. The same happens with _P,G((v_s,0),(v_t,w)): as w increases, the bisectors _P,G((v_s,0),(v_t,w)) are nested and the region on one side monotonically grows. Since any two different non-crossing closed walks _P,G((v_s,0),(v_t,w)) are nested and must differ by at least one vertex of P that is enclosed, there are at most O(r) different bisectors. For each vertex x∈ V(P), define the value η_x=d_G(s,x)-d_G(t,x). The vertex x is in _G(s,t) when w>η_x, in _G(t,s) when w<η_x, and we have a degenerate (non-generic) case when w=η_x. Thus, we can compute the values {η_x | x∈ V(P)}, sort them and store them sorted in a table. For each w between two consecutive values of {η_x | x∈ V(P)} we compute the bisector using Lemma <ref> and store it with its predecessor in {η_x | x∈ V(P)}. Given a query with shifts w_s, w_t, we use binary search in O(log r) time for the value w_t-w_s and locate the relevant bisector. As mentioned before, an abstract Voronoi vertex is just a vertex of P^* and an abstract Voronoi edge is encoded in the dual graph P^* by a tuple (s,t,aa',bb'), meaning that the edge is the portion of _P,G(s,t) starting with the dual edge aa' and finishing with the dual edge bb' in some prescribed order, like for example the clockwise order of _P,G(s,t). Consider any three vertices {v_q,v_s,v_t}⊆ X as placements of sites.
Consider the family of abstract Voronoi diagrams for the sites q=(v_q,w_q), s=(v_s,w_s), and t=(v_t,w_t) as a function of the weights w_q, w_s and w_t. We can compute and store all those Voronoi diagrams in O(r^2) time such that, given the values w_q, w_s and w_t, the corresponding representation of the abstract Voronoi diagram of those 3 sites is accessed in O(log r) time. We use Lemma <ref> to compute and store all the possible bisectors of each pair of vertices. This takes O(r^2) time because we have O(1) pairs of placements. Only the difference between weights of the sites is relevant. Thus, we can just assume that the weight w_q is always 0. The relevant abstract Voronoi diagrams can thus be parameterized by the plane ^2. The first coordinate is the weight w_s and the second coordinate is the weight w_t. For each vertex x∈ V(P), we compute η_x^qs =  d_G(q,x) -d_G(s,x),    η_x^qt =  d_G(q,x) -d_G(t,x),    η_x^st =  d_G(s,x) -d_G(t,x). Note that, once we fix the weights w_s,w_t and w_q=0, the vertex x∈ V(P) belongs to _P,G(s,{q,s,t}) if and only if w_s< η_x^qs and w_t-w_s> η_x^st. A similar statement holds for the other sites, q and t. In the plane (w_s,w_t) we consider the set of lines L that contains precisely the following lines ∀ x∈ V(P):   ℓ_x^qs={ (w_s,w_t)∈^2 | w_s=η_x^qs},∀ x∈ V(P):   ℓ_x^qt={ (w_s,w_t)∈^2 | w_t=η_x^qt},∀ x∈ V(P):   ℓ_x^st={ (w_s,w_t)∈^2 | w_t-w_s=η_x^st}. Since L has O(r) lines, it breaks the plane ^2 into O(r^2) cells; the resulting subdivision is usually called the arrangement induced by L and denoted by 𝒜(L). Such an arrangement can be computed in O(r^2) time <cit.>. For each cell c∈𝒜(L), the Voronoi diagram defined by the sites { (q,0), (s,w_s), (t,w_t)} is the same for all (w_s,w_t)∈ c. We can further preprocess 𝒜(L) for standard point location <cit.>. Thus, after O(r^2) preprocessing, given a query point (w_s,w_t), we can identify in O(log r) time the cell of 𝒜(L) that contains it.
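Because L consists of three parallel families of lines (vertical, horizontal, and slope 1), a nonempty cell of 𝒜(L) is determined by the rank of w_s among the values η^qs, of w_t among η^qt, and of w_t-w_s among η^st. The point-location step can therefore be sketched with three binary searches; the function below is an illustration under that assumption, not the general point-location structure cited above.

```python
# Sketch: locate the cell of the arrangement A(L) containing (w_s, w_t)
# as a triple of ranks, one per family of parallel lines.
import bisect

def locate_cell(eta_qs, eta_qt, eta_st, w_s, w_t):
    """Return the cell of A(L) containing (w_s, w_t) as a rank triple."""
    return (bisect.bisect(sorted(eta_qs), w_s),
            bisect.bisect(sorted(eta_qt), w_t),
            bisect.bisect(sorted(eta_st), w_t - w_s))

# Two thresholds per family; the query point lies strictly between them
# in the first two directions and beyond both in the diagonal direction.
cell = locate_cell([0.0, 2.0], [1.0, 3.0], [-1.0, 1.0], w_s=1.0, w_t=2.5)
assert cell == (1, 1, 2)
```

Some rank triples correspond to empty intersections of the three strips, but every nonempty cell has a unique triple, which is all that is needed to index the stored diagrams.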
In each cell c of 𝒜(L) we store a description of the Voronoi diagram defined for weights on that cell. We can compute the relevant Voronoi diagram for each cell in O(1) amortized time using a traversal of 𝒜(L). A simple way is as follows. Consider any line ℓ of L. Let us say that ℓ=ℓ_x^qs∈ L; the other cases are similar. Let ℓ_ε be a right shift of ℓ by an infinitesimal ε>0. The value w_s remains constantly equal to η_x^qs+ε as we walk along ℓ_ε, while the value w_t changes. Consider the bisector _P,G((v_q,0),(v_s,w_s)) and let e_1,…,e_k be the edges of P that it crosses, as we walk from a_∞ to a_∞. Thus, the bisector is actually the non-crossing closed walk e^*_1,…,e^*_k in the dual graph P^*. For each such edge e_i, we can compute a value ζ(e_i) such that e_i is part of the abstract Voronoi edge of { (v_q,0), (v_s,w_s), (v_t,w_t)} that separates the cells of q and s if and only if w_t> ζ(e_i). Indeed, if y_q is the endpoint of e_i closer to q and y_s the other endpoint, then e_i is (part of) an abstract Voronoi edge of { (v_q,0), (v_s,w_s), (v_t,w_t)} that separates the Voronoi cells of q and s if and only if w_q+d_G(q,y_q)< w_t +d_G(t,y_q)    and    w_s+d_G(s,y_s)< w_t +d_G(t,y_s). Using that w_q=0 and w_s=η_x^qs+ε, this is equivalent to the condition w_t  > max{ d_G(q,y_q)-d_G(t,y_q),  d_G(s,y_s)-d_G(t,y_s)+ η_x^qs+ε} =: ζ(e_i). Because of planarity, the values ζ(e_1),…,ζ(e_k) are either monotonically increasing or decreasing. Indeed, the cell for t can only grow when w_t increases and the cell of t always has to take a contiguous part of the bisector _P,G((v_q,0),(v_s,w_s)), as otherwise the Voronoi diagram of { (q,0), (s,w_s), (t,w_t)} would have at least 2 vertices, besides a_∞. Therefore, the values ζ(e_1),…,ζ(e_k) are obtained already sorted. As we walk along ℓ_ε, we can identify the last edge e_i such that ζ(e_i)< w_t and identify the precise portion of _P,G((q,0),(s,w_s)) that is in the Voronoi diagram of { (q,0), (s,w_s), (t,w_t)}.
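The thresholds ζ(e_i) come directly from the stored distances. A sketch, assuming each crossed edge e_i is given by its endpoint y_q closer to q and its endpoint y_s closer to s, that eps stands for the infinitesimal shift, and that the ζ values happen to be increasing along the walk (if they are decreasing, the walk is simply scanned in reverse):

```python
# Sketch of the zeta thresholds and of locating the last edge still on
# the q/s part of the three-site diagram.
import bisect

def zeta(dq_yq, dt_yq, ds_ys, dt_ys, eta_qs, eps):
    """zeta(e_i) = max{ d(q,y_q)-d(t,y_q), d(s,y_s)-d(t,y_s)+eta+eps }."""
    return max(dq_yq - dt_yq, ds_ys - dt_ys + eta_qs + eps)

def last_edge_below(zetas, w_t):
    """Largest index i with zetas[i] < w_t (zetas increasing), or -1."""
    return bisect.bisect_left(zetas, w_t) - 1

assert zeta(5.0, 3.0, 4.0, 1.0, 2.0, 0.001) == 5.001
assert last_edge_below([0.0, 1.0, 2.0, 3.0], 2.5) == 2
assert last_edge_below([0.0, 1.0, 2.0, 3.0], 0.0) == -1
```

The monotonicity established in the proof is what permits the single scan (or binary search) instead of testing every edge against w_t separately.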
Repeating this procedure for each line ℓ^qs_x∈ L, with two infinitesimal shifts per line, one on each side, we can figure out in O(1) amortized time per cell the portion of _P,G(q,s) in the abstract Voronoi diagram for each cell of 𝒜(L) bounded by one of those lines. If a cell is not bounded by a line ℓ^qs_x for some x, we figure out this information from a neighbour cell. A similar approach for the lines ℓ^qt_x∈ L and ℓ^st_x∈ L determines the portions of _P,G(q,t) and _P,G(s,t), respectively. Thus, we obtain the abstract Voronoi diagrams for all cells c∈𝒜(L) in O(1) amortized time per cell. Recall that b is the cardinality of X. There is a data structure with the following properties. The preprocessing time is O(b^3 r^2). For any generic, independent set S of 4 sites placed on X, the abstract Voronoi diagram (S) can be computed in O(log r) time. The output is given combinatorially as a collection of abstract Voronoi vertices and edges encoded in the dual graph P^*. First, we make a table T_X[·] such that, for u∈ X, T_X[u] is the rank of u when walking along the boundary of the outer face of P and, for u∉ X, we have T_X[u] undefined. Thus, given 3 vertices of X we can deduce their circular ordering along the boundary of the outer face of P in O(1) time. We use Lemma <ref> to compute and store all the possible bisectors. Since there are b^2 different possible locations for the sites, for each pair of locations there are O(r) different bisectors, and for each bisector we spend O(r) space and preprocessing time, we have spent a total of O(b^2 r^2) time. For each bisector β, we preprocess it to quickly figure out the circular order of its (dual) edges: given two edges aa' and bb' on β, is the clockwise order along β given by aa',bb',a_∞ or by bb',aa',a_∞? For each bisector β we can make a table T_β[·] indexed by the edges such that T_β[aa'] is the position of aa' along β, when we walk β clockwise starting from a_∞. 
We set T_β[aa'] to undefined when aa' does not appear in β. Thus, given 2 edges of β, we can decide their relative order along β in O(1) time. The time and space for this, over all bisectors, is also O(b^2 r^2). We make a table indexed by triples of vertices of X and, for each triple, we use Lemma <ref> and store in the table a pointer to the resulting data structure. We have O(|X|^3)=O(b^3) choices for the vertices hosting the sites, and thus we spend O(b^3 r^2) time in the preprocessing step. Given any three sites placed at X, we can get the abstract Voronoi diagram of those three sites in O(log r) time. This finishes the preprocessing. Assume that we are given a set S of 4 sites placed at X and we want to compute its abstract Voronoi diagram. We recover the abstract Voronoi diagrams for each 3-element subset of S in O(log r) time, using the stored data. If there are two sites s,t∈ S such that their bisector _P,G(s,t) appears in full in the Voronoi diagram of each subset S' with |S'|=3 and {s,t}⊂ S'⊂ S, then in the abstract Voronoi diagram of S there is a region bounded only by _P,G(s,t). We can then compose that bisector and the abstract Voronoi diagram of the other three sites to obtain the final Voronoi diagram. See the left of Figure <ref>. (It may be that we have more than one such "isolated" abstract Voronoi region.) In the opposite case, in the abstract Voronoi diagram there is no abstract Voronoi region that is bounded by a unique bisector. The abstract Voronoi diagram restricted to the interior faces of G is connected. The shape of such a Voronoi diagram can be only one of two possibilities, depending on which opposite sites share a common edge. Let p,q,s,t be the sites in clockwise order along the boundary of G. We can infer this order in O(1) time through the table T_X[·]. Assume, by renaming the sites if needed, that _P,G(s,p) has s in its interior.
From ({p,q,s}) we obtain the edge aa' of _P,G(s,p) incident to the vertex of ({p,q,s}), and from ({p,s,t}) we obtain the edge bb' of _P,G(s,p) incident to the vertex of ({p,s,t}). If a=b, then ({p,q,s,t}) has a common vertex of degree 4 that is incident to four abstract Voronoi edges. If aa'=bb' or the cyclic order of a_∞,bb',aa' along _P,G(p,s) is clockwise, then the tuple (p,s,aa',bb') defines an abstract edge in the abstract Voronoi diagram of S. Otherwise, there is a tuple (q,t,cc',dd') for some edges cc' and dd' that can be obtained by exchanging the roles of p,s with q,t. From this information and the abstract Voronoi diagrams of each triple of sites, we can construct the abstract Voronoi diagram of S={ p,q,s,t}. Let G be a plane graph and let P be a connected subgraph of G with r vertices such that G and P have the same outer face. Let X be a set of b vertices on the outer face of P. Assume that G has no negative cycles and the distances d_G(·,·) from each vertex of X to each vertex of P are available. There is a data structure with the following properties. The preprocessing time is O(b^3 r^2). For any generic and independent set S of sites placed at X, the abstract Voronoi diagram _P,G(S) can be computed in Õ(b) expected time. The output is given combinatorially as a collection of abstract Voronoi vertices and edges encoded in the dual graph P^*. 
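The case analysis above only needs constant-time circular-order queries on the precomputed tables. The following Python sketch illustrates the two primitives; the function names and the list-based modelling of the outer face and of a bisector are ours, chosen only for illustration.

```python
# Sketch of the O(1) order queries used in the preprocessing above. We model
# the outer face of P and each bisector beta as explicit clockwise lists; a
# rank table then answers circular-order queries in constant time.

def make_rank_table(clockwise_list):
    """T[item] = rank of item along the clockwise list; absent items stay undefined."""
    return {item: rank for rank, item in enumerate(clockwise_list)}

def is_clockwise(T, a, b, c):
    """Do a, b, c appear in clockwise circular order (up to rotation) along the cycle?"""
    ra, rb, rc = T[a], T[b], T[c]
    return (ra < rb < rc) or (rb < rc < ra) or (rc < ra < rb)

def precedes_on_bisector(T_beta, e1, e2):
    """Walking beta clockwise from a_inf, does edge e1 appear before edge e2?"""
    return T_beta[e1] < T_beta[e2]
```

With a rank table built for the boundary of the outer face of P, `is_clockwise` answers the circular-ordering queries on triples of X; with one table per bisector, `precedes_on_bisector` answers the edge-order queries.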
Since each elementary operation takes O(log r) time because of the data structure of Lemma <ref>, the result follows. § DATA STRUCTURE FOR PLANAR GRAPHS In this section we are going to use abstract Voronoi diagrams and the data structures of Section <ref> to compute information about the distances from a fixed vertex in a planar graph when the lengths of the edges incident to the fixed vertex are specified at query time. Let G be a plane graph with n vertices and let P be a connected subgraph of G with r vertices such that G and P have the same outer face. Let X be a set of b vertices on the outer face of P. Let U be a subset of V(P). The graph G may have arcs with negative lengths, but it does not have any negative cycle. For each subset Y⊂ X, let G^+(Y) be the graph obtained from G by adding a new vertex x_0 and arcs E_0(Y)={x_0y| y∈ Y}. See Figure <ref>. We want to preprocess G and P for different types of queries, as follows. At preprocessing time, the lengths of the edges in E_0(X) are undefined and unknown. At query time we are given a subset Y⊆ X and the lengths λ(x_0y) for the arcs x_0y of E_0(Y). Using the notation introduced in Section <ref>, we are interested in the following information about the distances from the new vertex x_0: (x_0,U,G^+(Y)) = max{d_G^+(Y)(x_0,u)| u ∈ U}, (x_0,U,G^+(Y)) = ∑_u∈ U d_G^+(Y)(x_0,u), (x_0,U,G^+(Y),δ) = |{ u ∈ U | d_G^+(Y)(x_0,u)≤δ}|. Note that we are only using the distances to the subset U⊆ V(P). The set of vertices Y and the lengths λ(x_0y), where y∈ Y, will be given so that they satisfy the following condition: ∀ y,y'∈ Y, y≠ y': λ(x_0y) < λ(x_0y') + d_G(y',y). This condition implies that, for each y∈ Y, there is a unique shortest path from x_0 to y, and this shortest path is just the arc x_0y. This condition is important in our scenario to ensure that, when using the vertices of Y as sites with weights λ(x_0y), the sites are generic and independent. 
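The condition on the query weights can be tested directly from its definition. A small Python sketch (the helper name and the dictionary encoding of λ and d_G are ours):

```python
# Sketch of checking the condition above: every arc x0 -> y must be, by itself,
# strictly shorter than any detour x0 -> y' -> y, so that the arc is the unique
# shortest path from x0 to y.

def satisfies_condition(Y, lam, d_G):
    """lam[y]: length of the arc x0->y; d_G[(y1, y2)]: distance from y1 to y2 in G."""
    return all(lam[y] < lam[yp] + d_G[(yp, y)]
               for y in Y for yp in Y if yp != y)
```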
Assume that G is a weighted plane graph with n vertices and no negative cycles. Let P be a subgraph of G with r vertices such that G and P have the same outer face. Let X be a set of b vertices on the outer face of P and let U be a subset of V(P). After Õ(n + b^3 r^2) preprocessing time, we can handle the following queries in Õ(b) expected time: given a subset of vertices Y⊂ X and weights for the darts λ(x_0y), y∈ Y, that satisfy the condition (<ref>), return (x_0,U,G^+(Y)). We compute and store the distances d_G(x,v) for all x∈ X and v∈ V(P). This can be done in Õ(n+br) time, as follows. First we compute a single-source shortest-path tree in Õ(n) time <cit.>. With this we have a potential function in G, and for the subsequent distance computations we can assume non-negative weights. Then we use that all the vertices of X are incident to the outer face of G. Using <cit.> we obtain in Õ(|V(P)|+ |X|· |V(P)|)= Õ(n+br) time the distances d_G(x,v), for all x∈ X and v∈ V(P). We preprocess the pair of graphs G and P as described in Theorem <ref>. This takes O(b^3r^2) time because we already have all the required distances. For each vertex x∈ X we proceed as follows. We compute all the bisectors of the type _P,G((x,·),(x',·)) for all x'∈ X. Let Π_x be the resulting family of curves. Then, we preprocess P with respect to Π_x as explained in Theorem <ref>. More precisely, we use Theorem <ref> for the following two vertex-weight functions: ω_x(v) = d_G(x,v) if v∈ U, and ω_x(v) = 0 otherwise; ω'_x(v) = 1 if v∈ U, and ω'_x(v) = 0 otherwise. We denote by σ_x(·) and σ'_x(·) the corresponding sums of weights. For example, σ'_x(U')=∑_v∈ U'ω'_x(v) for all U'⊆ V(P). This finishes the description of the preprocessing. Let us analyze the running time for the last step of the preprocessing. For each pair of vertices x,x' ∈ X, we spend O(r^2) time to compute the bisectors _P,G((x,·),(x',·)) because of Lemma <ref>. It follows that Π_x is a family of walks in P^* with O(br^2) dual edges, counted with multiplicity. 
The preprocessing of Theorem <ref> is O(r+||Π_x||)=O(r+br^2)=O(br^2) per vertex x∈ X, where ||Π_x|| denotes the number of edges in Π_x. Thus, over all x∈ X, we spend O(b^2r^2) time. Consider now a query specified by a subset Y⊂ X and the edge weights λ( x_0y), y∈ Y, that satisfy the condition (<ref>). For each y∈ Y, define the site s_y=(y,λ( x_0y)). Because of condition (<ref>), the set of sites S_Y={ s_y|y∈ Y} is independent and generic. Using the data structure of Theorem <ref>, we compute the weighted Voronoi diagram for the sites S_Y. Thus, we obtain the abstract Voronoi diagram in Õ(|Y|) = Õ(b) expected time. For each y∈ Y, let γ_y be the closed walk in the dual graph P^* that defines the boundary of (s_y,S). For each vertex v∈ V(P) there is precisely one vertex y(v) ∈ Y such that (s_y(v),S) contains v. Moreover, because of the definition of (graphic) Voronoi diagrams, we have d_G^+(Y)(x_0,v)  = λ(x_0y(v)) + d_G(y(v),v). Note that (x_0,U,G^+(Y))  = ∑_u∈ U d_G^+(Y)(x_0,u) = ∑_u∈ U( λ(x_0y(u)) + d_G(y(u),u) ) = ∑_y∈ Y ∑_u∈ U s.t. y(u)=y ( λ(x_0y) + d_G(y,u) ) = ∑_y∈ Y ∑_u∈(s_y,S)∩ U ( λ(x_0y) + d_G(y,u) ) = ∑_y∈ Y( λ(x_0y) · |(s_y,S)∩ U| + ∑_u∈(s_y,S)∩ U d_G(y,u) ) = ∑_y∈ Y( λ(x_0y) ·σ'_y(V_(γ_y,P)) + σ_y (V_(γ_y,P)) ). For each site y∈ Y, we walk along γ_y, the boundary of the abstract Voronoi region (s_y,Y), and use the data structures of Theorem <ref> for ω_y and ω'_y to collect the data ∀ y∈ Y:    σ_y(V_(γ_y,P)), and σ'_y(V_(γ_y,P)). Here we are using that y∈(s_y,S), and thus y is in the interior of γ_y. For each γ_y we spend Õ(1) times the complexity of its description. Over all y∈ Y, this takes Õ(|Y|)=Õ(b) time. From this information we can compute (x_0,U,G^+(Y)) using  (<ref>), and the result follows. Consider the setting of Theorem <ref>. 
After Õ(nb + b^3 r^2 + b^4) preprocessing time, we can handle the following queries in Õ(b) expected time: given a subset of vertices Y⊂ X and weights for the darts λ(x_0y), y∈ Y, that satisfy the condition (<ref>), return (x_0,U,G^+(Y)). We use the same approach as in the proof of Theorem <ref>. We keep using the notation of that proof. The main difference is that we do not use the data structure of Theorem <ref>, but the data structure of Theorem <ref>. We explain the details of this part. For each vertex x∈ X we proceed as follows. Let T_x be a shortest-path tree in G from x. We do not compute T_x, but use it to argue correctness. Then, we use the data structure of Theorem <ref> for G, P, the tree T_x, and the vertex-weights ω_x(·). Let μ_x be the corresponding maximum function that the data structure returns. Thus, μ_x(U')=max{ω_x(u)| u∈ U'}. This finishes the description of the preprocessing. Let us analyze the running time for this step of the preprocessing. Like before, each Π_x is computed in O(br^2) time and has O(br^2) dual edges, counted with multiplicity. The preprocessing of Theorem <ref> is O(n+||Π_x||+b^3)= O(n+br^2+b^3) time for each x∈ X. Therefore, the total preprocessing used in this step is O(nb+b^2r^2+b^4). Next, we note that each γ_y is in Ξ(G,P,T_y) because of Lemma <ref>. Therefore, we can obtain μ_y(V_(γ_y,P)) in Õ(1) times the complexity of the description of γ_y. Over all y∈ Y, this takes Õ(|Y|)=Õ(b) time. With this data, the desired value is then obtained in O(|Y|)=O(b) time using that (x_0,U,G^+(Y))  = max{ d_G^+(Y)(x_0,u) | u∈ U} = max_y∈ Y max_u∈ U s.t. y(u)=y ( λ(x_0y) + d_G(y,u) ) = max_y∈ Y ( λ(x_0y) + max_u∈ U s.t. y(u)=y d_G(y,u) ) = max_y∈ Y (λ(x_0y) + μ_y(V_(γ_y,P))). Consider the setting of Theorem <ref>. 
After Õ(n + b^3 r^3) preprocessing time, we can handle the following queries in Õ(b) expected time: given a subset of vertices Y⊂ X, weights for the darts λ(x_0y), y∈ Y, that satisfy the condition (<ref>), and a real value δ, return (x_0,U,G^+(Y),δ). We use the same approach as in the proof of Theorem <ref> and keep using its notation. The main difference is that we do not use the data structure of Theorem <ref>, but the data structure of Corollary <ref> for the vertex-weights ω_x(·). Let κ_≤^x be the corresponding function. This means that, for each x∈ X, we spend an extra factor r in the preprocessing. Thus, for each x we spend O(b^2r^3) time, instead of O(b^2r^2). Over all x∈ X, this means that the corresponding preprocessing term becomes O(b^3r^3). The rest of the approach is the same. We just have to use that (x_0,U,G^+(Y),δ)  = |{ u ∈ U | d_G^+(Y)(x_0,u)≤δ}| = ∑_y∈ Y |{ u ∈ V_(γ_y,P) ∩ U | λ(x_0y) + d_G(y,u) ≤δ} | = ∑_y∈ Y κ_≤^y(V_(γ_y,P), δ-λ(x_0y)), and all values κ_≤^y(V_(γ_y,P), δ-λ(x_0y)) are recovered from the data structure of Corollary <ref> in Õ(|Y|)=Õ(b) time.§ DIAMETER AND SUM OF DISTANCES IN PLANAR GRAPHSThe data structures of Theorems <ref>, <ref> and <ref> are going to be used for each piece of an r-division. Then, for each vertex of G we are going to query them. We first explain the precise concept of piece and division that we use, and then explain its use. Divisions. The concept of r-division for planar graphs was introduced by Frederickson <cit.>, and then refined and used by several authors; see for example <cit.> for a sample. For us it is most convenient to use the construction of Klein, Mozes and Sommer <cit.>. We first state the definitions carefully, almost verbatim from the work of Klein, Mozes and Sommer <cit.>. Let G be a plane graph. A piece[They use the term “region", which in our opinion is more suitable. 
However, we are using such a term for so many things in this paper that in our context we prefer to use some other term.] P of G is an edge-induced subgraph of G. In each piece we assume the embedding inherited from G. A boundary vertex of a piece P is a vertex of P that is incident to some edge in E(G)∖ E(P). A hole of a piece P is a face of P that is not a face of G. Note that each boundary vertex of a piece P is incident to some hole of P. An r-division with a few holes of G is a collection { P_1,…, P_k} of pieces of G such that * there are O(n/r) pieces, that is, k=O(n/r); * each edge of G is in at least one piece; * each piece has O(r) vertices; * each piece has O(√(r)) boundary vertices; * each piece has O(1) holes. There is a linear-time algorithm that, for any biconnected triangulated planar embedded graph G, outputs an r-division of G with few holes. In fact, we will only use that all pieces together have O(n/r) holes. Thus, other decompositions proposed by other authors could also be used. Note that we can assume that each piece is connected because we could replace each piece by its connected components, and we would get a new r-division with a few holes. Work per piece. We now describe how to compute the relevant information within a fixed piece and the information between a fixed piece and all vertices outside the piece. The next result is sufficient for our purposes; better results can be obtained using additional tools <cit.>. Let P be a piece of G with r vertices and O(√(r)) boundary vertices. Let U be a subset of vertices in P. In Õ(nr^1/2+r^2) time we can compute for all vertices v∈ V(P) the values (v,U,G), (v,U,G), and (v,U,G,δ) (for a given δ∈ℝ). Let ∂ be the set of boundary vertices of P in G. We compute shortest-path trees from each vertex x∈∂ in G in near-linear time <cit.>. This takes |∂|·Õ(n)= Õ(nr^1/2) time. We build a graph P̂ by adding to P arcs between each pair of vertices of ∂. The length of each new arc xy is set to d_G(x,y). 
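The compression step just described can be checked exhaustively on a toy instance. In the Python sketch below, the example graph, all names (including P̂ for the compressed piece), and the use of Floyd–Warshall for brevity are ours; the compressed piece keeps the arcs of P and adds, between every pair of boundary vertices, an arc weighted by the distance in G.

```python
# Toy check: adding boundary-to-boundary arcs weighted by distances in G
# preserves all distances among the vertices of the piece.
INF = float('inf')

def floyd_warshall(n, arcs):
    """All-pairs distances of a directed graph on vertices 0..n-1."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in arcs.items():
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# G: vertices 0..4; the piece P is induced by {0, 1, 2}; boundary = {0, 2}.
# The path 0-3-4-2 outside the piece is a shortcut between the boundary vertices.
arcs_G = {(0, 1): 2, (1, 0): 2, (1, 2): 2, (2, 1): 2,
          (0, 3): 1, (3, 0): 1, (3, 4): 1, (4, 3): 1,
          (4, 2): 1, (2, 4): 1}
d_G = floyd_warshall(5, arcs_G)

# P-hat: arcs of P plus a clique on the boundary weighted by distances in G.
boundary = [0, 2]
arcs_Phat = {(u, v): w for (u, v), w in arcs_G.items() if u <= 2 and v <= 2}
for u in boundary:
    for v in boundary:
        if u != v:
            arcs_Phat[(u, v)] = min(arcs_Phat.get((u, v), INF), d_G[u][v])
d_Phat = floyd_warshall(3, arcs_Phat)
```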
Standard arguments show that the distance between any two vertices of P is the same in G and in P̂. The graph P̂ has O(|E(P)|+|∂|^2)=O(r) edges and O(r) vertices. We can compute all pairwise distances in Õ(|V(P̂)|· |E(P̂)|)=Õ(r· r)=Õ(r^2) time using standard approaches. (Since P̂ may have negative weights, we may have to use a potential function.) From all the distances in P̂, which are also distances in G, we can compute the desired values directly. Let P be a piece of G with r vertices, O(√(r)) boundary vertices, and h holes. Let U be a subset of vertices in P. * In Õ(nh + r^7/2 + nr^1/2 ) expected time we can compute the values (v,U,G) for all vertices v∈ V(G)∖ V(P). * In Õ(nh + r^7/2 + nr^1/2 ) expected time we can compute the values (v,U,G) for all vertices v∈ V(G)∖ V(P). * In Õ(nh + r^9/2 + nr^1/2 ) expected time we can compute the values (v,U,G,δ) for a given δ and all vertices v∈ V(G)∖ V(P). Let C_1,…, C_h be the facial walks of the holes of P. For i∈ [h], let A_i be the vertices of G contained in the interior of the hole defined by C_i. Since each vertex of V(G)∖ V(P) is strictly contained in exactly one hole of P, the sets A_1,…,A_h form a partition of V(G)∖ V(P). For each i∈ [h], we define the graph G_i=G-A_i and let X_i be the set of boundary vertices that appear in C_i. See Figure <ref> for an example. The sets A_1,…,A_h, X_1,…,X_h, and the graphs G_1,…, G_h can be constructed in O(nh) time. We compute the distances in G and in G_i from and to all boundary vertices X_i. This can be done by computing 4· |X_i|= O(√(r)) different shortest-path trees, each of them in G, G_i, or the reversed graphs G^R,G_i^R. Since each single-source shortest path can be computed in Õ(n) time <cit.>, we spend in total Õ(nr^1/2) time. Consider any fixed index i∈ [h]. For each v∈ A_i, let Y_i^v be the vertices y of X_i such that in the shortest path in G from v to y the last arc is not contained in G_i. 
For each x∈ X_i∖ Y_i^v, there exists some other boundary vertex y∈ Y^v_i such that d_G(v,x)= d_G(v,y)+d_G_i(y,x). Therefore, for each u∈ V(P), we have d_G(v,u)  = min{ d_G(v,x)+d_G_i(x,u)| x∈ X_i} = min{ d_G(v,y)+d_G_i(y,u)| y∈ Y_i^v}. Because of the selection we made for Y_i^v and the uniqueness of shortest paths in G, we have that ∀ y,y'∈ Y_i^v, y≠ y':     d_G(v,y) < d_G(v,y')+ d_G_i(y',y). Using the shortest-path trees to x∈ X_i, we can identify the relevant pairs { (v,y)| v∈ A_i, y ∈ Y_i^v} in O(n|X_i| ) time. Since ∑_i |X_i| = O(√(r)), over all indices i∈ [h] we spend O(nr^1/2) time. For each i∈ [h], fix an embedding of G_i such that C_i defines the outer face and thus X_i lies in the outer face of G_i and P. Now there are slight differences depending on the data we want to compute. The difference lies in which data structure we use. Let us first consider the problem of computing (v,U,G). We apply Theorem <ref> for the graph G_i and the piece P with respect to the set X_i. Since G_i has O(n) vertices and P has O(r) vertices, the preprocessing takes Õ(n+ |X_i|^3 r^2) time. Now, for each vertex v∈ A_i, we consider the graph G_i^+(Y_i^v) with edge weights λ(x_0y)=d_G(v,y) for all y∈ Y_i^v. Note that, with these weights, the property in (<ref>) corresponds to condition (<ref>). Moreover, for each u∈ V(P) we have d_G(v,u)=d_G_i^+(Y_i^v)(x_0,u). Therefore, we can use the data structure of Theorem <ref> to get in Õ(|Y_i^v|)=Õ(|X_i|) time (x_0,U,G_i^+(Y_i^v))  = ∑_u∈ U d_G_i^+(Y_i^v)(x_0,u)  = ∑_u∈ U d_G(v,u) = (v,U,G). Iterating over all i∈ [h] and noting that A_1,…,A_h is a partition of V(G)∖ V(P), we obtain the desired values: (v,U,G) for all v∈ V(G)∖ V(P). The running time for the preprocessing of this last step is ∑_i Õ( n + |X_i|^3 r^2)  = Õ(nh +|X|^3 r^2 )  = Õ(nh + r^7/2), and the running time for the queries is ∑_i Õ(|A_i| · |X_i|) = Õ( (∑_i |A_i|) · (∑_i |X_i|) )  = Õ(n r^1/2). The result in the first item follows. 
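The identities used in the proofs above — splitting a sum or a maximum of additively weighted distances over the Voronoi cells of the sites — can be checked by brute force on a toy instance. In this Python sketch all names are ours, and the "Voronoi cell" of a site is simply the set of vertices it wins.

```python
# Brute-force check of the cell decomposition: with sites y carrying additive
# weights lam[y], each vertex u is assigned to the site minimizing
# lam[y] + dist[y][u]; sums and maxima over U then split over the cells.

def voronoi_cells(sites, lam, dist, U):
    """Assign each u in U to the site minimizing lam[y] + dist[y][u]."""
    cells = {y: [] for y in sites}
    for u in U:
        winner = min(sites, key=lambda y: lam[y] + dist[y][u])
        cells[winner].append(u)
    return cells

def check_identities(sites, lam, dist, U):
    cells = voronoi_cells(sites, lam, dist, U)
    # Sum of distances, directly and split over the cells.
    total = sum(min(lam[y] + dist[y][u] for y in sites) for u in U)
    split = sum(lam[y] * len(cells[y]) + sum(dist[y][u] for u in cells[y])
                for y in sites)
    # Maximum distance, directly and split over the (non-empty) cells.
    top = max(min(lam[y] + dist[y][u] for y in sites) for u in U)
    split_top = max(lam[y] + max(dist[y][u] for u in cells[y])
                    for y in sites if cells[y])
    return total == split and top == split_top
```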
For computing (v,U,G), we use the same approach, but employ Theorem <ref> instead of Theorem <ref>. The preprocessing time for i∈ [h] has an extra term Õ(|X_i|^4). Therefore, the preprocessing time in the last step becomes ∑_i Õ( n + |X_i|^3 r^2 + |X_i|^4)  = Õ(nh +|X|^3 r^2 + |X|^4 )  = Õ(nh + r^7/2). The rest is essentially the same, and we obtain the claim in the second item. For computing (v,U,G,δ), we use the same approach, but employ Theorem <ref> instead of Theorem <ref>. In the preprocessing time for i∈ [h], the term Õ(|X_i|^3 r^2) is replaced by Õ(|X_i|^3 r^3). Therefore, the preprocessing time in the last step becomes ∑_i Õ( n + |X_i|^3 r^3 )  = Õ(nh +|X|^3 r^3)  = Õ(nh + r^9/2). The rest is essentially the same, and we obtain the claim in the third item. Working over all pieces. We can now obtain our main result. Adding edges of sufficiently large lengths, we may assume that G is triangulated. We also embed G. These operations can be done in linear time. With a slight abuse of notation, we keep using G for the resulting embedded, triangulated graph. We compute an r-division 𝒫={ P_1,…, P_k} of G with few holes, for a parameter r to be specified below. According to Theorem <ref>, this takes O(n) time. To avoid double counting we assign each vertex to a unique piece, as follows. For each vertex x of G we select a unique index i(x) such that the piece P_i(x) contains x. For each piece P_j∈𝒫, we define the set U_j={ x∈ V(P_j)| i(x)=j }. The sets U_1,…, U_k are a partition of V(G) and can be easily computed in linear time. Next, we iterate over the pieces and, for each piece P_i∈𝒫, we use Lemma <ref> to compute the values (v,U_i,G), (v,U_i,G)    ∀ v∈ V(G)∖ V(P_i). We also use Lemma <ref> to compute (v,U_i,G), (v,U_i,G)    ∀ v∈ V(P_i). Since the piece P_i has O(1) holes, we spend Õ(n + r^7/2 + nr^1/2)= Õ(r^7/2 + nr^1/2) time per piece. Iterating over the O(n/r) pieces, we get (v,U_i,G), (v,U_i,G),    ∀ v∈ V(G),  i∈ [k] in time O(n/r)·Õ(r^7/2 + nr^1/2) = Õ( nr^5/2 + n^2 r^-1/2). 
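The trade-offs just derived are balanced by a simple exponent computation; the Python sketch below (names ours) uses exact fractions to confirm that r=n^{1/3} equalizes nr^{5/2} and n^2 r^{-1/2} at n^{11/6}, and that r=n^{1/4} equalizes nr^{7/2} and n^2 r^{-1/2} at n^{15/8}.

```python
# Exponent balancing behind the choices of r: for r = n^e, the two terms
# n * r^a and n^2 * r^(-1/2) have exponents 1 + e*a and 2 - e/2 in n.
from fractions import Fraction as F

def term_exponents(e, a):
    """Exponents of n in n*r^a and in n^2*r^(-1/2) when r = n^e."""
    return (1 + e * a, 2 - e * F(1, 2))

# Sum/Max: per-piece cost r^(5/2) per unit of n, balanced at r = n^(1/3).
sum_max_choice = term_exponents(F(1, 3), F(5, 2))
# Count: per-piece cost r^(7/2) per unit of n, balanced at r = n^(1/4).
count_choice = term_exponents(F(1, 4), F(7, 2))
```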
Because U_1,…,U_k is a partition of V(G) we can easily compute the desired values because (v,V(G),G)   = ∑_i∈ [k](v,U_i,G), (v,V(G),G)   = max{(v,U_i,G)| i∈ [k]}. (For the diameter of course we do not need that the sets U_1,…,U_k are disjoint.) Taking r=n^1/3 the running time becomes Õ(n^11/6) in expectation. For (·) we use the third item of Lemma <ref> to compute for each piece P_i (v,U_i,G,δ)   ∀ v∈ V(G)∖ V(P_i). Then, for each piece we spend Õ(n + r^9/2 + nr^1/2)= Õ(r^9/2 + nr^1/2). Over all pieces, the running time thus becomes O(n/r)·Õ(r^9/2 + nr^1/2) = Õ( nr^7/2 + n^2 r^-1/2). Choosing r=n^1/4 we obtain a running time of Õ(n^15/8). Again using that U_1,…,U_k is a partition of V(G), we can compute (v,V(G),G,δ)   = ∑_i∈ [k](v,U_i,G,δ). Let G be a planar graph with n vertices, real lengths on its arcs, and no negative cycle. In O(n^11/6 polylog(n)) expected time we can compute the diameter of G and the sum of the pairwise distances in G. For a given δ∈ℝ, in O(n^15/8 polylog(n)) expected time we can compute the number of pairs of vertices in G at distance at most δ. § DISCUSSION We have decided to explain the construction through the use of abstract Voronoi diagrams, instead of providing an algorithm tailored to our case. It is not clear to the author which option would be better. In any case, for people familiar with randomized incremental constructions, it should be clear that the details can be worked out, once the compact representation of the bisectors using the dual graph is available. Using a direct algorithm perhaps we could get rid of the assumption that the sites have to be in the outer face and perhaps we could actually build a deterministic algorithm. In fact, Gawrychowski et al. <cit.> do follow this path and have obtained a deterministic algorithm. There are also deterministic algorithms to compute abstract Voronoi diagrams <cit.>. 
However, they require additional elementary operations and properties. Also, when the abstract Voronoi diagram has a forest-like shape, it can be computed in linear time <cit.>. It is unclear to the author whether these results are applicable in our case. We think that the algorithm can be extended to graphs on surfaces of small genus, but for this one should take care to extend the construction of abstract Voronoi diagrams to graphs on surfaces or to planar graphs when the sites are in O(g) faces, where g is the genus of the surface. Let us discuss the reduction to the planar case. The first step is to find an r-division where each part is planar. For this we can use the separator theorem of Eppstein <cit.>. It computes a set of curves on the surface that pass through O(√(gn)) vertices of G and do not pass through the interior of any edge. Moreover, cutting along them gives a collection of planar patches, possibly with multiple holes. Taking a maximal subset of the curves that are homologically independent, we will get O(g) curves that pass through O(√(gn)) vertices and cut the surface into planar patches. Now we can compute an r-division in each of the patches. We can also compute the distances from all the boundary vertices because shortest paths in the presence of negative weights can be computed in subquadratic time <cit.>. Now we run into problems. Consider a vertex x and a piece P. The shortest paths from x to different vertices of P can pass through different boundary cycles. In the planar case, we always have a boundary cycle that intersects all the paths from x to all vertices of P. There is no such cycle in the case of surfaces, for example, when one of the planar patches is a cylinder. Computing Voronoi diagrams for sites placed in O(g) cycles would handle this problem. While we think that this should be doable, it requires non-trivial work. In particular, some of the holes may have non-trivial topology. It seems that the new result by Gawrychowski et al. 
<cit.> may be the missing piece for making this work. Gawrychowski et al. <cit.> have managed to reduce our exponent 11/6 for the diameter to 5/3. The author would be surprised if the problems considered in this paper can be solved in near-linear time. Thus, the author conjectures that there should be some conditional lower bounds of the type Ω(n^c) for some constant c>1.§ ACKNOWLEDGMENTSThis work was initiated at the Dagstuhl seminar Algorithms for Optimization Problems in Planar Graphs, 2016. I am very grateful to Kyle Fox, Shay Mozes, Oren Weimann, and Christian Wulff-Nilsen for several discussions on the problems treated here. I am also grateful to the reviewers of the paper for their many useful suggestions.AWW16 A. Abboud, V. Vassilevska Williams, and J. Wang. Approximation and fixed parameter subquadratic algorithms for radius and diameter in sparse graphs. Proc. 27th ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, pp. 377–391, 2016, <http://dl.acm.org/citation.cfm?id=2884435.2884463>.BenderFPSS05 M. A. Bender, M. Farach-Colton, G. Pemmasani, S. Skiena, and P. Sumazin. Lowest common ancestors in trees and directed acyclic graphs. J. Algorithms 57(2):75–94, 2005, <http://dx.doi.org/10.1016/j.jalgor.2005.08.001>.BohlerKL14 C. Bohler, R. Klein, and C. Liu. Forest-like abstract Voronoi diagrams in linear time. Proc. 26th Canadian Conference on Computational Geometry, CCCG 2014, 2014, <http://www.cccg.ca/proceedings/2014>.BondyM08 J. Bondy and U. Murty. Graph theory. Graduate texts in mathematics 244. Springer, 2008.Cabello12 S. Cabello. Many distances in planar graphs. Algorithmica 62(1-2):361–381, 2012, <http://dx.doi.org/10.1007/s00453-010-9459-0>.Cab17 S. Cabello. Subquadratic algorithms for the diameter and the sum of pairwise distances in planar graphs. Proc. 28th ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, pp. 2143–2152, 2017, <http://dx.doi.org/10.1137/1.9781611974782.139>.CabelloCE13 S. Cabello, E. W. Chambers, and J. Erickson. 
Multiple-source shortest paths in embedded graphs. SIAM J. Comput. 42(4):1542–1571, 2013, <http://dx.doi.org/10.1137/120864271>.CabelloK09 S. Cabello and C. Knauer. Algorithms for graphs of bounded treewidth via orthogonal range searching. Comput. Geom. 42(9):815–824, 2009, <http://dx.doi.org/10.1016/j.comgeo.2009.02.001>.ChambersEN12 E. W. Chambers, J. Erickson, and A. Nayyeri. Homology flows, cohomology cuts. SIAM J. Comput. 41(6):1605–1634, 2012, <http://dx.doi.org/10.1137/090766863>.cdw17 V. Cohen-Addad, S. Dahlgaard, and C. Wulff-Nilsen. Fast and compact exact distance oracle for planar graphs. Proc. 58th IEEE Symposium on Foundations of Computer Science, FOCS 2017, pp. 963–973, 2017, <http://ieee-focs.org/FOCS-2017-Papers/3464a962.pdf>.Verdiere10 É. Colin de Verdière. Shortest cut graph of a surface with prescribed vertex set. Algorithms - ESA 2010, 18th Annual European Symposium, Part II, pp. 100–111. Springer, Lecture Notes in Computer Science 6347, 2010, <http://dx.doi.org/10.1007/978-3-642-15781-3_9>.bkos-08 M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars. Computational Geometry: Algorithms and Applications. Springer-Verlag, 3rd ed. edition, 2008, <http://dx.doi.org/10.1007/978-3-540-77974-2>.Diestel05 R. Diestel. Graph Theory, 3rd electronic edition. Graduate texts in mathematics 173. Springer, 2005.Eppstein03 D. Eppstein. Dynamic generators of topologically embedded graphs. Proc. 14th ACM-SIAM Symposium on Discrete Algorithms, SODA 2003, pp. 599–608, 2003, <http://dl.acm.org/citation.cfm?id=644108.644208>.fr-pgnwe-06 J. Fakcharoenphol and S. Rao. Planar graphs, negative weight edges, shortest paths, and near linear time. J. Comput. Syst. Sci. 72(5):868–889, 2006, <http://dx.doi.org/10.1016/j.jcss.2005.05.007>.FischerH07 J. Fischer and V. Heun. A new succinct representation of RMQ-information and improvements in the enhanced suffix array. 
Combinatorics, Algorithms, Probabilistic and Experimental Methodologies, First International Symposium, ESCAPE 2007, pp. 459–470. Springer, Lecture Notes in Computer Science 4614, 2007, <http://dx.doi.org/10.1007/978-3-540-74450-4_41>.f-faspp-87 G. N. Frederickson. Fast algorithms for shortest paths in planar graphs, with applications. SIAM J. Comput. 16:1004–1022, 1987, <http://dx.doi.org/10.1137/0216064>.f-pgdap-91 G. N. Frederickson. Planar graph decomposition and all pairs shortest paths. J. ACM 38(1):162–204, 1991, <http://doi.org/10.1145/102782.102788>.ghmsw18 P. Gawrychowski, H. Kaplan, S. Mozes, M. Sharir, and O. Weimann. Voronoi diagrams on planar graphs, and computing the diameter in deterministic Õ(n^5/3) time. Proc. 29th ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, p. to appear, 2018.gmww18 P. Gawrychowski, S. Mozes, O. Weimann, and C. Wulff-Nilsen. Better tradeoffs for exact distance oracle in planar graphs. Proc. 29th ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, p. to appear, 2018.GoldreichR08 O. Goldreich and D. Ron. Approximating average parameters of graphs. Random Struct. Algorithms 32(4):473–493, 2008, <http://dx.doi.org/10.1002/rsa.20203>.goodrich95 M. T. Goodrich. Planar separators and parallel polygon triangulation. J. Comput. Syst. Sci. 51(3):374–389, 1995, <http://dx.doi.org/10.1006/jcss.1995.1076>.Husfeldt16 T. Husfeldt. Computing graph distances parameterized by treewidth and diameter. Proc. 11th International Symposium on Parameterized and Exact Computation, IPEC 2016, pp. 16:1–16:11. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, LIPIcs 63, 2017, <http://dx.doi.org/10.4230/LIPIcs.IPEC.2016.16>.Indyk99 P. Indyk. Sublinear time algorithms for metric space problems. Proc. 31st ACM Symposium on Theory of Computing, STOC 1999, pp. 428–434. ACM, 1999, <http://doi.acm.org/10.1145/301250.301366>.KKS11 K. Kawarabayashi, P. N. Klein, and C. Sommer. 
Linear-space approximate distance oracles for planar, bounded-genus and minor-free graphs. Proc. 38th International Colloquium on Automata, Languages and Programming, ICALP 2011, pp. 135–146. Springer, Lecture Notes in Computer Science 6755, 2011, <http://dx.doi.org/10.1007/978-3-642-22006-7_12>.k-msspp-05 P. N. Klein. Multiple-source shortest paths in planar graphs. Proc. 16th ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, pp. 146–155, 2005, <http://dl.acm.org/citation.cfm?id=1070432.1070454>.KleinMS13 P. N. Klein, S. Mozes, and C. Sommer. Structured recursive separator decompositions for planar graphs in linear time. Proc. 45th ACM Symposium on Theory of Computing, STOC 2013, pp. 505–514, 2013, <http://doi.acm.org/10.1145/2488608.2488672>. See <http://arxiv.org/abs/1208.2223> for the full version.KleinMW10 P. N. Klein, S. Mozes, and O. Weimann. Shortest paths in directed planar graphs with negative lengths: A linear-space O(nlog^2 n )-time algorithm. ACM Trans. Algorithms 6(2):30:1–30:18, 2010, <http://doi.acm.org/10.1145/1721837.1721846>.KleinS98 P. N. Klein and S. Subramanian. A fully dynamic approximation scheme for shortest paths in planar graphs. Algorithmica 22(3):235–249, 1998, <http://dx.doi.org/10.1007/PL00009223>.Klein89 R. Klein. Concrete and Abstract Voronoi Diagrams. Lecture Notes in Computer Science 400. Springer, 1989, <http://dx.doi.org/10.1007/3-540-52055-4>.Klein2014 R. Klein. Abstract Voronoi diagrams. Encyclopedia of Algorithms, pp. 1–5. Springer Berlin Heidelberg, 2014, <http://dx.doi.org/10.1007/978-3-642-27848-8_603-1>.KleinLN09 R. Klein, E. Langetepe, and Z. Nilforoushan. Abstract Voronoi diagrams revisited. Comput. Geom. 42(9):885–902, 2009, <http://dx.doi.org/10.1016/j.comgeo.2009.03.002>.KleinMM93 R. Klein, K. Mehlhorn, and S. Meiser. Randomized incremental construction of abstract Voronoi diagrams. Comput. Geom. 3:157–184, 1993, <http://dx.doi.org/10.1016/0925-7721(93)90033-3>.LT79 R. Lipton, D. Rose, and R. Tarjan. 
Generalized nested dissection. SIAM J. Numer. Anal. 16(2):346–358, 1979, <http://dx.doi.org/10.1137/0716027>.MarxP15 D. Marx and M. Pilipczuk. Optimal parameterized algorithms for planar facility location problems using Voronoi diagrams. Algorithms - ESA 2015 - 23rd Annual European Symposium, pp. 865–877. Springer, Lecture Notes in Computer Science 9294, 2015, <http://dx.doi.org/10.1007/978-3-662-48350-3_72>. Full version available at <http://arxiv.org/abs/1504.05476>.MozesNW14 S. Mozes, Y. Nussbaum, and O. Weimann. Faster shortest paths in dense distance graphs, with applications. CoRR abs/1404.0977, 2014, <http://arxiv.org/abs/1404.0977>.MozesS12 S. Mozes and C. Sommer. Exact distance oracles for planar graphs. Proc. 23rd ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, pp. 209–222, 2012, <http://dl.acm.org/citation.cfm?id=2095116.2095135>.MozesW10 S. Mozes and C. Wulff-Nilsen. Shortest paths in planar graphs with real lengths in O(n log^2 n/loglog n) time. Algorithms - ESA 2010, 18th Annual European Symposium, Part II, pp. 206–217. Springer, Lecture Notes in Computer Science 6347, 2010, <http://dx.doi.org/10.1007/978-3-642-15781-3_18>.ItalianoNSW11 G. F. Italiano, Y. Nussbaum, P. Sankowski, and C. Wulff-Nilsen. Improved algorithms for min cut and max flow in undirected planar graphs. Proc. 43rd ACM Symposium on Theory of Computing, STOC 2011, pp. 313–322, 2011, <http://doi.acm.org/10.1145/1993636.1993679>.ParkP93 J. K. Park and C. A. Phillips. Finding minimum-quotient cuts in planar graphs. Proc. 25th ACM Symposium on Theory of Computing, STOC 1993, pp. 766–775, 1993, <http://doi.acm.org/10.1145/167088.167284>.Patel13 V. Patel. Determining edge expansion and other connectivity measures of graphs of bounded genus. SIAM J. Comput. 42(3):1113–1131, 2013, <http://dx.doi.org/10.1137/100811416>.RodittyW13 L. Roditty and V. Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. Proc. 
45th ACM Symposium on Theory of Computing, STOC 2013, pp. 515–524, 2013, <http://doi.acm.org/10.1145/2488608.2488673>.Schrijver-book A. Schrijver. Combinatorial Optimization - Polyhedra and Efficiency. Springer, 2003.Snoeyink04 J. Snoeyink. Point location. Handbook of Discrete and Computational Geometry, Second Edition., pp. 767–785. Chapman and Hall/CRC, 2004, <http://dx.doi.org/10.1201/9781420035315.pt4>.TamassiaL04 R. Tamassia and G. Liotta. Graph drawing. Handbook of Discrete and Computational Geometry, Second Edition., pp. 1163–1185. Chapman and Hall/CRC, 2004, <http://dx.doi.org/10.1201/9781420035315.ch52>.Thorup04 M. Thorup. Compact oracles for reachability and approximate distances in planar digraphs. J. ACM 51(6):993–1024, 2004, <http://doi.acm.org/10.1145/1039488.1039493>.WeimannY16 O. Weimann and R. Yuster. Approximating the diameter of planar graphs in near linear time. ACM Trans. Algorithms 12(1):12, 2016, <http://doi.acm.org/10.1145/2764910>.WN08 C. Wulff-Nilsen. Wiener index, diameter, and stretch factor of a weighted planar graph in subquadratic time. Tech. Rep. 08-16, Department of Computer Science, University of Copenhagen, 2008, <http://www.diku.dk/OLD/publikationer/tekniske.rapporter/rapporter/08-16.pdf>. Preliminary version in EurCG 2009.WN10 C. Wulff-Nilsen. Wiener index, diameter, and stretch factor of a weighted planar graph in subquadratic time, 2010. Paper M in the PhD thesis of C. Wulff-Nilsen, available at <http://www.diku.dk/forskning/phd-studiet/phd/ThesisChristian.pdf>.
S. Cabello. Subquadratic Algorithms for the Diameter and the Sum of Pairwise Distances in Planar Graphs. arXiv cs.DS, 2017, <http://arxiv.org/abs/1702.07815v2>.
Institut für Physik, Johannes Gutenberg Universität Mainz, D-55099 Mainz, Germany Institute of Physics, Academy of Sciences of the Czech Republic, Cukrovarnická 10, 162 53 Praha 6, Czech Republic Faculty of Mathematics and Physics, Charles University in Prague, Ke Karlovu 3, 121 16 Prague 2, Czech Republic Institute of Physics, Academy of Sciences of the Czech Republic, Cukrovarnická 10, 162 53 Praha 6, Czech Republic School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, United Kingdom Institut für Physik, Johannes Gutenberg Universität Mainz, D-55099 Mainz, Germany Institute of Physics, Academy of Sciences of the Czech Republic, Cukrovarnická 10, 162 53 Praha 6, Czech Republic

Route Towards Dirac and Weyl Antiferromagnetic Spintronics J. Sinova December 30, 2023 ==========================================================

Topological quantum matter and spintronics research have been developed to a large extent independently. In this Review we discuss a new role that the antiferromagnetic order has taken in combining topological matter and spintronics. This occurs due to the complex microscopic symmetries present in antiferromagnets that allow, e.g., for topological relativistic quasiparticles and the newly discovered Néel spin-orbit torques to coexist. We first introduce the concepts of topological semimetals and spin-orbitronics. Secondly, we explain the antiferromagnetic symmetries on a minimal Dirac semimetal model and the guiding role of ab initio calculations in predictions of examples of Dirac and Weyl antiferromagnets: SrMnBi_2, CuMnAs, and Mn_3Ge. Lastly, we illustrate the interplay of Dirac quasiparticles, topology and antiferromagnetism on: (i) the experimentally observed quantum Hall effect in EuMnBi_2, (ii) the large anomalous Hall effect in Mn_3Ge, and (iii) the theoretically predicted topological metal-insulator transition in CuMnAs.
§ INTRODUCTION: TOPOLOGY MEETS SPIN

In 1905 special relativity revolutionized physics. Almost one century later, the observation of relativistic-like effects and relativistic quasiparticles in solids has created a new revolution in condensed matter physics. In 2004 the discoveries of graphene <cit.> and topological insulators (TIs) <cit.> reignited the exploration of Dirac quasiparticles in solids and the search for novel topological states of matter <cit.>. Remarkably, also in 2004, the observation of the spin Hall effect <cit.> reinvigorated spintronics by shifting its focus from non-relativistic effects towards effects originating from spin-orbit coupling (SOC). A culminating example of this is the recent phenomenon of the spin-orbit torque (SOT), which can be used to efficiently manipulate magnets <cit.>. Although relativistic (Dirac and Weyl) quasiparticles and TIs matured together, the spintronics effects originating from SOC were developed to a large extent independently of them. However, topology and certain spintronics effects are, at least on the theoretical level, deeply entangled. The intrinsic contributions to the spin Hall family of effects can be described in terms of topological properties of the wave functions <cit.>. Recent works have begun to directly combine spintronics with topological and Dirac materials <cit.>. Strong magnetic fields have been used to tune Dirac quasiparticle currents in bulk layered antiferromagnets (AFs) at low temperatures <cit.>. In another example, TIs have been used to enhance the efficiency of SOTs in TI/ferromagnetically (FM) doped TI heterostructures <cit.>. Unfortunately, most of the topological effects are still constrained to low dimensionalities, high external magnetic fields, and very low temperatures <cit.>. In this review we show that antiferromagnetism combined with Dirac quasiparticles might become the missing ingredient on the route towards fully exploiting the potential of topological spintronics.
This promising perspective is provided by (i) the recent demonstration of the manipulation of AFs by electrical currents <cit.>, (ii) the complex AF symmetries compatible with spintronics effects and nontrivial topologies that allow for Dirac and Weyl quasiparticles <cit.>, and (iii) the external magnetic invisibility, in conjunction with the antiferromagnetic order persisting above room temperature, offering novel functionalities <cit.>. We will show that the coupling between the AF order and relativistic quasiparticles not only leads to novel emergent effects but also pushes their limits. For instance, the recently proposed topological anisotropic magnetoresistance (AMR) in the CuMnAs AF <cit.> can be thought of as a limiting case of the crystalline AMR. The present review aims at providing state-of-the-art insight into the most recent developments from the point of view of theory and experiment. As such, it is not meant to be an exhaustive review of this fast developing field.

§.§ Topology and Hall effects

The phases of matter can be classified by the Landau symmetry breaking paradigm <cit.>. For instance, any crystalline phase is distinguished by the rotational, translational or other symmetry breaking by the crystal, while an antiferromagnetic phase is determined by the breaking of the rotational symmetry of the spins by the staggered order orientation. In the 1980s it was discovered that topology adds an additional labeling to the phases <cit.>. For instance, two insulators in the same crystallographic group can be either topologically trivial or nontrivial <cit.>. Two band structures are topologically equivalent if they can be transformed into each other under continuous deformations. Topology considers spatial relationships between objects that survive these continuous transformations. In contrast, symmetry, as an invariance of the system under a given transformation, is described by group theory.
Topology entered solid state physics in the seminal works on 2D phase transitions <cit.>, the quantum Hall effect (QHE) <cit.>, and the quantization of transport <cit.>. The intrinsic Hall conductivity can be calculated according to the linear-response theory <cit.>: σ_xy=e^2/ħ∑_n∫_BZd^3k/(2π)^3f_n(k)Ω_n(k), where the summation is over all occupied bands, f_n(k) is the Fermi-Dirac distribution function, and Ω_n(k)=2Im∑_m≠ n⟨∂_k_x H(k) ⟩_nm⟨∂_k_y H(k) ⟩_mn/(E_n(k)-E_m(k))^2. Here ⟨∂_k_x H(k) ⟩_nm =⟨ u_n(k)|∂_k_x H(k) | u_m(k) ⟩, where H(k) is the Hamiltonian with the corresponding eigenenergies E_n and eigenvectors | u_n(k) ⟩, with n the eigenstate quantum number and k the wave vector. The explicit link between the topology of the wave function and the Hall conductivity can be made by recognizing that Ω(k)≡Ω^z(k) is the z-component of the Berry curvature <cit.>: Ω(k)= ∇_k× i⟨ u(k) |∇_k| u(k) ⟩, where we have dropped the band index for brevity. The Berry curvature transforms under the spatial inversion symmetry 𝒫 and the time-reversal symmetry 𝒯 as: 𝒫 :Ω(-k)=Ω(k), 𝒯 : Ω(-k)=-Ω(k). To obtain a nonzero transverse charge conductivity, it is necessary to break the time reversal symmetry; otherwise the Berry curvature is an odd function of k and, according to Eqs. (<ref>) and (<ref>), the Hall conductivity vanishes. An external magnetic field, for example, does the job. When, additionally, a strong magnetic field is exerted on a two-dimensional (2D) electron gas, Landau levels arise, and in the insulating state, with the Fermi level between fully occupied quantized levels, the Hall conductivity (<ref>) becomes quantized: σ_xy=e^2/ħ∫_BZd^2k/(2π)^2Ω^z(k)=e^2/hC. Here C is the integer Chern number quantifying the underlying nontrivial topology of the edge state wave functions within the Landau level gap, provided the bands are well separated. This is the QHE, schematically illustrated in Fig. <ref>(a).
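As a concrete illustration of the Chern-number quantization above, the Berry-curvature integral can be evaluated on a discrete k-grid with the standard link-variable (Fukui-Hatsugai-Suzuki) method. The sketch below is our own illustration, not taken from the review: it uses a generic two-band Chern-insulator model with a tunable mass parameter m, and both the model and the parameter values are assumptions made for demonstration purposes.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h2(kx, ky, m):
    """Generic two-band Chern insulator (illustrative model, not from the text)."""
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern_number(m, N=40):
    """Lower-band Chern number from the discretized Berry-curvature integral."""
    ks = 2 * np.pi * np.arange(N) / N
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h2(kx, ky, m))
            u[i, j] = v[:, 0]                      # lower-band eigenvector
    F = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = u[i, j], u[(i + 1) % N, j]
            u3, u4 = u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N]
            # gauge-invariant Berry flux through one plaquette
            F += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                          * np.vdot(u3, u4) * np.vdot(u4, u1))
    return F / (2 * np.pi)

print(abs(round(chern_number(1.0))))   # 1: topologically nontrivial phase
print(round(chern_number(3.0)))        # 0: trivial phase
```

Because each plaquette product of eigenvector overlaps is gauge invariant, no smooth gauge has to be constructed; summing the plaquette fluxes over the BZ directly yields the integer C in a gapped phase.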
However, a nonzero Berry curvature (<ref>) and Hall conductivity (<ref>) can also be provided by time reversal symmetry breaking due to an internal magnetic order. This is the anomalous Hall effect (AHE), illustrated in Fig. <ref>(b), originating from the SOC in a time-reversal symmetry broken solid. The time-reversal symmetry breaking is explicitly given by the magnetization M_z in ferromagnets (FMs), where it was empirically established for the Hall resistivity that <cit.>: ρ_xy=R_0H_z+R_SM_z. Here the first term corresponds to the ordinary, Lorentz-force related Hall effect, while the second term is the AHE contribution. Due to the SOC, the rotation of the electron spin during its adiabatic motion in k-space leads to a nonzero Berry phase accumulation. The AHE in real solids often has important extrinsic contributions, which we leave aside here <cit.>. In the late 1980s, Haldane showed the possibility of the QHE without Landau levels in the presence of complex hopping magnitudes on the honeycomb lattice <cit.>. This was the first conceptual step towards the quantum spin-Hall effect (QSHE) <cit.>. The QSHE, or the 2D TI, consists of two copies of the QHE, one for spin-up and one for spin-down <cit.>. The discovery of graphene and of the 2D surface states of 3D TIs brought Dirac quasiparticles explicitly into the game <cit.>. In contrast to the original QHE, the edge states of 3D TIs are topological states protected by the time-reversal symmetry, and can thus in principle be gapped. The effective surface Hamiltonian acquires the form of the Dirac equation <cit.>: ℋ(k)=ħ v_F( σ_xk_y-σ_yk_x), where σ are Pauli matrices and v_F is the Fermi velocity. A surface Dirac cone is shown in Fig. <ref>(a).
Topological transport exhibits robust dissipation-less edge states which are protected from back-scattering and have thus been considered as ideal platforms for applications in spintronics, as we describe in Sec. 1.3 <cit.>. Unfortunately, the effects are typically constrained to high magnetic fields, low temperatures, or reduced dimensionality, and it is challenging to find proper material candidates. For instance, the unintentional bulk doping in TIs, and the challenge of making TIs compatible with magnetism, hinder their full potential for spintronics <cit.>.

§.§ Dirac and Weyl semimetals

Nontrivial topologies and relativistic quasiparticles can also be associated with bulk degeneracies in the band structure <cit.>. In the prototypical Dirac quasiparticle system – graphene – the Dirac points acquire a nonzero mass, i.e., the band crossing is avoided, due to the SOC <cit.>. Unavoided band crossings were investigated from the very early days of quantum theory <cit.>, and the existence of limiting phases of matter between insulators and metals was considered already in the 1970s <cit.>. The recent rejuvenation of interest in bulk degeneracies, triggered by the experimental discoveries of relativistic semimetals <cit.>, was made possible only by the identification of suitable material candidates based on state-of-the-art first-principle electronic structure theory <cit.>. Electrons in conventional crystals typically form Schrödinger bands. However, when the bands cross accidentally close to the Fermi level in crystals with specific symmetries, they might create a relativistic semimetal phase. Fermi states in relativistic semimetals are dominated by the emergent relativistic quasiparticles, similarly to graphene. Adopting the high energy physics terminology, the relativistic particles come in three flavors: Weyl, Dirac and Majorana fermions <cit.>.
In solids, the classification of the effective electronic quasiparticles is much richer, due to the fact that these quasiparticles do not have to obey the relativistic Lorentz symmetry, while they might be constrained by additional crystalline symmetries not present in high energy physics <cit.>. Here we focus on Dirac and Weyl quasiparticles in AFs, as depicted in Fig. <ref>(b-c), and <ref>(b-e). Dirac quasiparticles are allowed in systems with doubly-degenerate bands <cit.>. Within the single particle picture, the eigenvalues E_nσ(k) and the eigenvectors ψ_nσ(k) of the Hamiltonian of the solid, H_0, are labeled by the quantum number n, spin σ, and the crystal momentum k in the first Brillouin zone (BZ). Double-band degeneracy is realized in systems invariant under the combined spatial inversion 𝒫 and time-reversal 𝒯 symmetries. The 𝒯 symmetry acts as E_n,↑(k)=E_n,↓(-k), while 𝒫 acts as E_n,σ(k)=E_n,σ(-k), giving rise to E_n,↑(k)=E_n,↓(k) over the whole BZ. Let us then consider a 𝒫𝒯-invariant solid with two doubly degenerate bands well separated from the rest of the band structure. The corresponding effective single particle Hamiltonian restricts to the four-band Hamiltonian: H_0→ℋ_eff=∑_k,n,m,σ,σ' ψ^†_ n σ(k) ℋ(k) ψ_ mσ'(k), where ψ^†_ nσ(k) creates a particle in a state | u_nσ(k) ⟩, and the matrix elements are given by ℋ_mnσσ'(k)=⟨ u_nσ(k) | H_0| u_mσ'(k)⟩. When we choose 𝒫= τ_x and 𝒯=-iσ_y𝒞 (𝒞 being complex conjugation), and we restrict the Hamiltonian by [ℋ,𝒫𝒯]=0, we obtain: ℋ(k)=∑_j=0^5A_j(k)Γ_j, where the Dirac matrices Γ_j={ 1, τ_x, τ_y, σ_xτ_z, σ_yτ_z, σ_zτ_z} and the A_j(k) are functions of the crystal momentum. The energy spectrum is then given by: E_±(k)=A_0(k)±√(∑_j=1^5A_j^2(k)). To ensure a stable accidental band-crossing (ABC), the expression under the square root must vanish <cit.>. In general, it is not possible to tune simultaneously five functions A_j(k) to zero by varying just three components of the crystal momentum k.
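The twofold degeneracy of E_±(k) and its protection by the antiunitary 𝒫𝒯 operation can be checked numerically. The following sketch is our own construction; the representation of the Γ_j as Kronecker products kron(σ, τ) and the random choice of coefficients A_j are assumptions made only to exercise the algebra.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Γ_j = {1, τx, τy, σxτz, σyτz, σzτz}, represented as kron(spin σ, sublattice τ)
G = [np.kron(s0, s0), np.kron(s0, sx), np.kron(s0, sy),
     np.kron(sx, sz), np.kron(sy, sz), np.kron(sz, sz)]

A = np.random.default_rng(1).normal(size=6)   # arbitrary real A_j(k) at a fixed k
H = sum(a * g for a, g in zip(A, G))

# closed-form spectrum: E_± = A_0 ± sqrt(Σ_{j=1..5} A_j^2), each twofold degenerate
root = np.sqrt(np.sum(A[1:] ** 2))
E = np.linalg.eigvalsh(H)
assert np.allclose(E, [A[0] - root] * 2 + [A[0] + root] * 2)

# antiunitary PT = U K with P = τx and T = -iσy K, i.e. U = kron(-iσy, τx);
# PT invariance of H reads U H* U† = H
U = np.kron(-1j * sy, sx)
assert np.allclose(U @ H.conj() @ U.conj().T, H)
print("E_± twofold degenerate and PT-invariant")
```

Since the five Γ_j with j ≥ 1 mutually anticommute and square to one, the check succeeds for any real coefficients, which is exactly why a stable crossing requires all five A_j to vanish simultaneously.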
We can reduce the number of free functions to three through additional crystalline symmetries, which can further reduce the number of Γ matrices in the Hamiltonian (<ref>). By abandoning the τ_y term due to an additional crystalline symmetry, neglecting σ_0 terms that tilt and shift the bands, assuming isotropic Fermi velocities v_F and keeping the τ_x term constant, we obtain in the vicinity of the ABC the effective Dirac Hamiltonian: ℋ(k) =([ ħ v_F(k-k_0)·σ m; m -ħ v_F(k-k_0)·σ ]). Recently it was demonstrated that the protection mechanism can be provided by nonsymmorphic <cit.> or rotational symmetries <cit.>. A nonsymmorphic symmetry is a combination of a point group symmetry with a nontrivial translation <cit.>. The crystalline symmetry 𝒮 forbids some terms in the Hamiltonian via [ℋ,𝒮]=0. On the k-subspace invariant under 𝒮, the bands can eventually be labeled by the symmetry eigenvalues, preventing hybridization, as we show in Sec. 2. Thus, symmetry-protected Dirac quasiparticles can typically be found at the rotational axes <cit.> or the BZ edges <cit.>, as we illustrate in Fig. <ref>(b). In this model 2D Dirac semimetal, the Dirac crossings at the X points are protected by nonsymmorphic symmetries <cit.>. Magnetic or non-centrosymmetric crystals have non-degenerate bands, violating the 𝒫𝒯 symmetry by breaking 𝒯 <cit.>, or 𝒫 <cit.>, or both symmetries <cit.>. The low energy physics around an ABC comprising two non-degenerate bands can be approximated by a two-band Hamiltonian: ℋ(k)=∑_i=0^3A_i(k)σ_i. Similarly to the four-band case, omitting the A_0 term and expanding the Hamiltonian in the vicinity of the ABC gives: ℋ(k)=ħ v_F(k-k_0)·σ, where we have additionally imposed an isotropic Fermi velocity v_F for the sake of recovering the explicit form of the Weyl equation known from high-energy physics. The topology is now reflected in the fact that the 3D Weyl Hamiltonian (<ref>) uses its complete basis - all three Pauli matrices.
Thus, any sufficiently small perturbation will just shift, but not gap, the ABC in the BZ <cit.>. Weyl fermions always come in pairs with opposite chirality, as can be seen by breaking the 𝒫𝒯 symmetry in Eq. (<ref>) and as was generically proven for fermions by the no-go theorem of Nielsen and Ninomiya <cit.>. Weyl points can be gapped only by annihilation with another Weyl point of the opposite chirality. Weyl fermions, fundamental building blocks of the standard model, were never observed in high energy physics. In 2015, they were observed in the non-centrosymmetric crystal TaAs <cit.>. The Weyl points are typically found inside the BZ, as can be seen in the band structure of the 𝒫𝒯 symmetry breaking Weyl semimetal model in Fig. <ref>(c) <cit.>, but they can also reside at the edges <cit.>. The nontrivial topology of the electronic wave function can be quantified by the topological index of the band-crossing, similarly to Eq. (<ref>). The Berry curvature stream lines for the time-reversal breaking Weyl semimetal model <cit.> are depicted in Fig. <ref>(c). The Chern number is calculated, in contrast to Eq. (<ref>), as the integral of the Berry curvature (<ref>) over a small sphere around the Weyl point <cit.>: C=1/2π∫_SdSΩ(k)=± 1. The Chern number value ±1 is realized for the linear Weyl crossing and corresponds to the opposite sign of the chirality of the two Weyl points in Fig. <ref>(c), with the chirality defined as the projection of the spin onto the momentum axis. The Berry curvature in the vicinity of the Weyl point is approximated as: Ω(k)=±k/2k^3, which explicitly shows that the Weyl points act as a source and a drain of the Berry curvature. As we see in Fig. <ref>(c), the Berry curvature stream lines can be used to track the position of the Weyl points, since the Weyl points act as effective monopoles of the Berry curvature.
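The monopole charge C = ±1 of the Weyl crossing can be reproduced numerically by tiling a small sphere around the node with plaquettes and summing the Berry flux obtained from eigenvector overlaps, in direct analogy to the surface integral above. This is our own sketch; the unit sphere radius and the grid size are arbitrary choices, and the overall sign depends on the orientation convention of the surface.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band(k):
    """Lower-band eigenvector of k·σ at momentum k (measured from the node)."""
    _, v = np.linalg.eigh(k[0] * sx + k[1] * sy + k[2] * sz)
    return v[:, 0]

def weyl_chern(chirality, N=60):
    """Berry flux through a unit sphere around H = chirality * k·σ, in units of 2π."""
    th = np.pi * (np.arange(N + 1) + 0.5) / (N + 1)    # latitudes, poles excluded
    ph = 2 * np.pi * np.arange(N) / N
    u = np.array([[lower_band(chirality * np.array(
            [np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)]))
            for p in ph] for t in th])
    F = 0.0
    for i in range(N):
        for j in range(N):
            u1, u2 = u[i, j], u[i, (j + 1) % N]
            u3, u4 = u[i + 1, (j + 1) % N], u[i + 1, j]
            # gauge-invariant flux through one plaquette of the sphere
            F += np.angle(np.vdot(u1, u2) * np.vdot(u2, u3)
                          * np.vdot(u3, u4) * np.vdot(u4, u1))
    return F / (2 * np.pi)

c_plus, c_minus = round(weyl_chern(+1)), round(weyl_chern(-1))
assert abs(c_plus) == 1 and c_plus == -c_minus
print(c_plus, c_minus)   # opposite unit monopole charges
```

Flipping the chirality of the node reverses the sign of the enclosed charge, the numerical counterpart of the statement that Weyl points come in pairs acting as source and drain of the Berry curvature.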
The streamlines of the Berry curvature connect the two Weyl points in a similar way as the magnetic field lines connect the north and the south pole of a magnet. Thus, a single Weyl point can be thought of as an analog of the magnetic monopole. The Berry flux between the Weyl points in the time-reversal breaking Weyl semimetal gives rise to a non-quantized anomalous Hall conductivity, σ_xy=-w/πe^2/h, where w is the separation of the Weyl points in the BZ <cit.>. Topological Weyl and Dirac semimetals are attractive due to the possibility of robust, symmetry- and topology-protected electronic states, topological surface states (Fermi arcs) <cit.>, presumably high mobilities <cit.>, and exotic magneto-transport phenomena such as the negative longitudinal magnetoresistance <cit.> and other manifestations of the chiral anomaly <cit.>. The most investigated Dirac semimetals are the nonmagnetic Na_3Bi <cit.>, Cd_3As_2 <cit.>, and ZrSiS <cit.>. The Weyl semimetal state was first observed in the TaAs family, as mentioned above <cit.>.

§.§ Spintronics and antiferromagnets

We first recall recent conceptual developments in spintronics on selected prototypical devices and principles. Traditionally, the interaction between the magnetization and the conduction electron spin was modeled by assuming an s-d type of exchange interaction <cit.>. Prominent effects of this Mott approach <cit.> to spintronics include the giant magnetoresistance (GMR) and the spin-transfer torque (STT), which were explored in FM/spacer/FM spin valve structures. The GMR refers to changes in the resistivity induced by rotating the moment in the free FM layer, which can be used to read its state. The GMR can be explained in terms of the Mott two-current model of transport in an exchange split band FM <cit.>.
The switching of the magnetization in the free layer leads to different scattering rates for the spin-up and spin-down channels, and consequently to resistivity changes. The STT can be used for writing the magnetic information. In the STT, the fixed layer is used to spin-polarize the electric current. The spin polarized current then exerts a torque on the free magnetic layer <cit.>. The discoveries of the spin Hall effect (SHE) and TIs shifted the focus of spintronics partly towards relativistic effects. Novel approaches to spintronics include concepts based on the interchange of spin and momentum due to the SOC in heavy metals (the 'Dirac principle' in <cit.>), and concepts employing the low dissipation Dirac quasiparticle surface states of TIs <cit.>, as we explain in Fig. <ref>. The archetypal effects of magnetic spin-orbitronics (Fig. <ref>(a)) are the AMR and the SOT. The AMR and the SOT, in contrast to the GMR and the STT, rely on the relativistic SOC. Due to the SOC, electrons feel different scattering rates for the magnetization oriented parallel or perpendicular to the electrical current direction, leading to the AMR <cit.>. A non-centrosymmetric magnet subjected to an electric current generates a non-equilibrium spin-orbit field, and thus the SOT, which in turn can reorient its magnetization. Recently this has been demonstrated even at room temperature <cit.>.
The spin-orbitronics path of spintronics research has also recently led to the emergence of spintronics based on AFs <cit.>. The applicability of relativistic physics to AF based spintronics was demonstrated in the seminal works on the AF AMR <cit.>. The role of relativistic effects is even more pronounced in AF spintronics, since the GMR in AFs remains elusive <cit.>, while AMR signals comparable to those in FMs were observed in AF semiconductors <cit.>, metals <cit.> and recently also semimetals <cit.>. The lack of practical means for the manipulation of AF moments, together with their microscopic complexity, left AFs for decades as primarily passive elements in spintronics devices, providing magnetic pinning of the reference FM layer. The breakthrough was the demonstration of the electric current manipulation of the AF order in the CuMnAs semimetal by the Néel SOT (NSOT) <cit.>. In topology-based spintronics (Fig. <ref>(b)) the TI is interfaced with a magnet <cit.>. The combination of the perfect spin-momentum locking and the strong SOC of TIs can lead to a large spin accumulation or spin current under an applied electric current <cit.>. The Dirac quasiparticle spin on the surface of the TI is locked perpendicular to the particle momentum according to Eq. (<ref>). When the TI is subjected to a lateral electric field, a spin-polarized charge current is generated at the surface due to the spin-momentum locking in combination with a Fermi surface shift from the Dirac point <cit.>. For example, in Fig. <ref>(b), a negative charge current in the [100] direction would generate a spin-polarization in the [010] direction. This spin polarization can then be used to exert a torque on the magnetization, as was recently demonstrated in a TI/magnetically-doped TI heterostructure <cit.>. Here it was shown that, by reversing the direction of the lateral current, the magnetization of the Cr-doped TI can be switched with the assistance of an external magnetic field of 0.6 T <cit.>.
Very efficient switching, with a critical current of 8.9×10^4 A/cm^2 – three orders of magnitude lower than in heavy-metal/FM bilayers – was achieved <cit.>. However, the enhanced efficiency is constrained to low temperatures. The main obstacle is that the magnetic order in TIs is stable only at very low temperatures. The subfield is looking for suitable high temperature material candidates that could be efficiently coupled with TIs. To this end, the suitability of the AF order was demonstrated recently <cit.>. An efficient proximity effect at room temperature was demonstrated in a CrSb superlattice AF sandwiched between TI heterostructures, as depicted in Fig. <ref>(d) <cit.>. The recently proposed novel approach to spintronics based on topological AFs combines the benefits of both topological states and SOC driven spintronics. Here the antiferromagnetic order would outperform FMs. In Ref. <cit.> it was theoretically suggested that topological effects, Dirac quasiparticles, and SOTs can join forces in a specific class of AFs due to their unique symmetries, which are not present in FMs. In Fig. <ref>(c) we depict the concept based on a topological AF semimetal. The working principle is the control of the Fermi surface topology by manipulating the AF order parameter with the electric current, i.e., by the NSOT. The reading of the magnetic state can be achieved via the predicted effect inherent to topological AFs, the topological AMR (TopoAMR) <cit.>. It originates from controlling the symmetry protection of the Dirac points by the Néel vector direction. In topological AFs, spintronics effects can be pushed to their limits. For example, the presence of relativistic quasiparticles can enhance the strength and efficiency of the SOT <cit.>. Vice versa, the SOT manipulation of the Néel order parameter can be used to tune the masses of the Dirac fermions, which can lead, in principle, to a topological metal-insulator transition (TopoMIT) and the aforementioned TopoAMR.
One possible material realization was suggested recently in the orthorhombic phase of the CuMnAs AF semimetal <cit.>, as we will explain in more detail in Sec. 4.3.

§ THEORY OF DIRAC AND WEYL ANTIFERROMAGNETS

§.§ Antiferromagnetic symmetry

AFs owe many of their unique properties to their external magnetic invisibility in combination with internal magnetic long-range order <cit.>. The external magnetic invisibility is given by the defining feature of AFs, the zero net magnetic moment. The AF symmetries can also lead to the existence of some effects (Dirac quasiparticles, the NSOT), but at the same time they can make other effects vanish (the AHE in simple collinear AFs, the NSOT in centrosymmetric AFs). AFs can yield more physical phenomena than FMs when the effective AF symmetries rescue the topological phases, e.g., in AF TIs <cit.> or AF topological Dirac semimetals <cit.>. AF TIs rely on the effective time reversal symmetry T_1/2𝒯, which combines the time reversal symmetry 𝒯 with a half-magnetic-unit-cell translation T_1/2 <cit.>. This is not possible in FMs. The Dirac semimetal, as we explained in Sec. 1.2, can be realized only in systems with doubly degenerate bands. In any magnetic crystal, the time-reversal symmetry 𝒯 is explicitly broken. In FMs there is no symmetry operation which can rescue the double-band degeneracy once the time-reversal symmetry is broken globally by the net magnetization. Remarkably, in AFs where both 𝒯 and the spatial inversion symmetry 𝒫 are broken, but their combination 𝒫𝒯 is preserved, the double band degeneracy over the whole BZ is reinstated <cit.>.
Consequently, 𝒫𝒯 AFs might host Dirac quasiparticles. The Dirac semimetal state can be stabilized in bands associated with eigenvalues of crystalline symmetry operators at the corresponding BZ invariant subspaces <cit.>. In the recently proposed AFs, the symmetry protection of these relativistic quasiparticles is due to nonsymmorphic symmetries <cit.>. In general, magnetism hugely enlarges the playground for studying the fundamental relationship between symmetry and topology in solids, due to the necessity of considering 1651 magnetic space groups instead of the 230 nonmagnetic ones <cit.>. The general classification of the magnetic and nonsymmorphic symmetries is beyond the scope of this brief review. Instead, we illustrate the physics governed by the AF symmetries on a minimal tight-binding model introduced recently <cit.>.

§.§ Minimal Dirac semimetal antiferromagnet

In a recent work <cit.> it was shown that the interplay between spintronics and topology can be explained on a minimal Dirac AF semimetal model which, additionally, can serve as a parent phase for massive Dirac fermions or a magnetically induced Weyl semimetal. The generic lattice band Hamiltonian with the symmetry of Eq. (<ref>) can be formulated by considering two AF sublattices with one orbital and a spin per atom on the square lattice, as in Fig. <ref>(a): ℋ=∑_⟨ i,j ⟩,⟨⟨ i,j ⟩⟩t_ijĉ_i^†ĉ_j+∑_iJ_iĉ_i^†n·σĉ_i, where ĉ_i is the annihilation operator, t_ij is the hopping amplitude between nearest ⟨ i,j ⟩ and next nearest neighbor ⟨⟨ i,j ⟩⟩ atoms, J_i is the AF exchange, and n is the unit Néel vector. The coexistence of Dirac quasiparticles, nontrivial topologies, and the NSOT can arise by deforming the square layered lattice in the left panel of Fig. <ref>(a) to the nonsymmorphic crystal in the right panel.
The second neighbor, or Kane-Mele <cit.>, k-dependent SOC is introduced by moving the A and B atoms in opposite directions along the [001] axis, ℋ_SOC(r) =i∑_⟨⟨ i,j ⟩⟩,⟨ k ⟩λ_ijĉ_i^†(d_ik^1×d^2_kj) ·σĉ_j, where λ_ij is the SOC strength, and d^1,2_ik are the bonds to the nearest atom interconnecting the next nearest neighbor atoms, as illustrated in Fig. <ref>(a). The SOC term in Eq. (<ref>) is a double Rashba Hamiltonian, since the A (B) atom has the nearest inter-layer B (A) atom. In this sense, the term is staggered analogously to the AF Zeeman term - the second term in Eq. (<ref>). This double staggering results in band inversion and band touching at specific sections of the BZ. The k-space Hamiltonian is obtained by the Fourier transformation and has the structure of Eq. (<ref>), where the coefficients A_j(k) now have a direct physical meaning <cit.>. We will further discuss the quasi-2D case, where the inter-layer hopping is neglected, assuming much larger inter-layer than intra-layer distances. The Hamiltonian reduces to ℋ(k)=∑_j=1,3,4,5A_j(k)Γ_j, with the nearest neighbor hopping term A_1(k) = -2tcosk_x/2cosk_y/2 (the crystal momentum k is in dimensionless units), and the combined SOC/exchange terms A_3(k)=J_x-λsin k_y, A_4(k)=J_y+λsin k_x, and A_5(k)=J_z. By neglecting the next nearest neighbor hopping, the spectrum has an additional particle-hole symmetry <cit.>. The energy spectrum is given by Eq. (<ref>). The relevant aspects of the band touchings can be minimalistically explained on the M'-X-M line, where we have chosen t=1 eV, J=0.6t, and a large SOC parameter λ=1.5t. In Fig. <ref>(a) we plot the energy dispersion for a nonmagnet (J=0), depicted by the black line. The spectrum shows Dirac points at X and M protected by multiple symmetries of the nonmagnetic lattice in the right panel of Fig. <ref>(a) <cit.>. The 3D band structure of the distorted version of this model is plotted in Fig. <ref>(b).
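With the quoted coefficients A_1, A_3, A_4, A_5 one can verify the band-touching structure of the quasi-2D model directly. The sketch below is our own numerical check (the Kronecker-product representation of the Γ matrices is an assumption): for the Néel vector along [100] all coefficients vanish simultaneously at D=(π, arcsin(J/λ)), giving a gapless Dirac point, whereas for the Néel vector along [001] the crossing at X=(π, 0) acquires a mass gap of 2J.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Γ1 = τx, Γ3 = σxτz, Γ4 = σyτz, Γ5 = σzτz, as kron(spin σ, sublattice τ)
G1, G3 = np.kron(s0, sx), np.kron(sx, sz)
G4, G5 = np.kron(sy, sz), np.kron(sz, sz)

t, lam, J = 1.0, 1.5, 0.6        # parameters quoted in the text (J = 0.6t, λ = 1.5t)

def H(kx, ky, Jvec):
    """Quasi-2D model H(k) = A1 Γ1 + A3 Γ3 + A4 Γ4 + A5 Γ5, Néel vector n ∝ Jvec."""
    A1 = -2 * t * np.cos(kx / 2) * np.cos(ky / 2)
    return (A1 * G1 + (Jvec[0] - lam * np.sin(ky)) * G3
            + (Jvec[1] + lam * np.sin(kx)) * G4 + Jvec[2] * G5)

# n ∥ [100]: all A_j vanish at D = (π, arcsin(J/λ)), a gapless Dirac point
E = np.linalg.eigvalsh(H(np.pi, np.arcsin(J / lam), [J, 0, 0]))
assert np.allclose(E, 0)

# n ∥ [001]: the crossing at X = (π, 0) acquires a mass, gap = 2J
E = np.linalg.eigvalsh(H(np.pi, 0.0, [0, 0, J]))
assert np.isclose(E[2] - E[1], 2 * J)
print("n along [100]: Dirac point at D; n along [001]: gap 2J at X")
```

The check makes explicit that the position and the mass of the Dirac quasiparticles are controlled by the Néel vector orientation, the property exploited by the NSOT-driven effects discussed below.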
For the sake of brevity, we plot here the band structure of the model with Dirac points only at X. The Dirac point at M is gapped by adding the term Δ_1sink_x/2sink_y/2 <cit.> to the hopping part A_1(k), which breaks the rotational screw symmetries C_2x/y protecting it. When the AF coupling (Néel vector) is switched on along the [001] axis in the undistorted model, the Dirac points preserve their k-space location but acquire a mass, as illustrated in Fig. <ref>(a) by the solid red line. For the Néel vector along the [100] direction, we have Dirac points in 2D, as depicted in Fig. <ref>(b). By including the inter-layer hopping, the 3D model is recovered <cit.> and Dirac node lines appear in the k_x=π plane (see the schematics in Fig. <ref>(e)). The Dirac points in 2D and the Dirac node lines in 3D are protected by the nonsymmorphic glide mirror plane symmetry <cit.> 𝒢_x ={ℳ_x|(1/2,0, 0 ) }, which combines a nontrivial translation by (1/2, 0, 0) with a mirror plane reflection ℳ_x, as depicted in Fig. <ref>(d). The crystal momentum transforms under 𝒢_x (note that in the reciprocal space only the point group part of 𝒢_x, namely ℳ_x, matters) as: 𝒢_x(k_x,k_y,k_z)=(-k_x,k_y,k_z), making the k_x=0,π BZ sub-spaces invariant under 𝒢_x. The ℳ_x symmetry is represented as ℳ_x=iσ_xτ_z, with the two crystal-momentum independent eigenvalues m_±=± i. The nonsymmorphicity of 𝒢_x is determined by the different centers of symmetry of the 𝒢_x and 𝒫𝒯 operations, since 𝒢_x∘𝒫𝒯=e^ik_x𝒫𝒯∘𝒢_x. The 2D Hamiltonian in the vicinity of the Dirac point (DP) D_1 in Fig. <ref>(b), at the crystal momentum D=(π,arcsinJ/λ), can be expanded as: ℋ_eff(d_1+k)=-2tcosd_y/2 k_xτ_x-2λ( cos d_y k_yσ_x+ k_xσ_y)τ_z. At the 𝒢_x invariant subspace (the X-M line), the effective Hamiltonian in Eq.
(<ref>) has one doubly degenerate eigenvalue E_+1,2(k_y)=2λ cos d_y k_y with the corresponding eigenvectors |u_k,+1⟩ = 1/√(2)(0, 0, 1, 1) and |u_k,+2⟩ = 1/√(2)(-1, 1, 0, 0), and a second doubly degenerate eigenvalue E_-1,2(k_y)=-2λ cos d_y k_y with the corresponding eigenvectors |u_k,-1⟩ = 1/√(2)(0, 0, -1, 1) and |u_k,-2⟩ = 1/√(2)(1, 1, 0, 0). The 𝒢_x symmetry shares its eigenvectors with the Hamiltonian. The ℳ_x expectation values m_± = ⟨u_k,α|ℳ_x|u_k,α⟩ are m_+=-i for the two states E_+1,2, while for E_-1,2 they are m_-=+i. Consequently, each doubly degenerate band belongs to a different symmetry representation, as depicted in Fig. <ref>(f). This correspondence of mirror eigenvalues to bands protects the band crossing, since it prevents hybridization <cit.>. The interplay of Dirac fermions with the NSOT is allowed because the deformation of the sublattices has broken the inversion symmetry of the magnetic atom sites A and B. The NSOT was theoretically predicted in Ref. <cit.> and subsequently observed experimentally in current-induced switching experiments in the tetragonal CuMnAs AF <cit.>. The microscopic physics of the NSOT is discussed in a recent article by Železný <cit.>. In Fig. <ref>(c) we show the effect of an external magnetic field δh=0.25t applied along the [100] direction: δh splits the two Dirac points into four Weyl points at the Fermi energy. With this minimal model we have shown that AFs can generate topologically nontrivial states by themselves. This opens the prospect of spintronic devices with built-in topological states, beyond those that combine TIs and AFs at interfaces. Next we review realistic topological AF (semi)metallic candidates from the perspective of first-principles calculations.
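Returning to the minimal model, the mirror-eigenvalue protection just described can be verified numerically. The sketch below assumes the basis ordering (A↑, A↓, B↑, B↓), in which ℳ_x = iσ_xτ_z is represented by i·τ_z⊗σ_x (an assumption of this illustration), and checks that on the X-M line the two doubly degenerate bands of ℋ_eff carry opposite mirror expectation values ∓i:

```python
import numpy as np

sx, tz = np.array([[0, 1], [1, 0]]), np.diag([1, -1])
lam, J = 1.5, 0.6
dy = np.arcsin(J / lam)                 # y-component of the Dirac point D = (pi, arcsin(J/lam))
Mx = 1j * np.kron(tz, sx)               # M_x = i sigma_x tau_z in the assumed basis ordering

def H_eff(ky):
    # effective Hamiltonian restricted to the G_x-invariant X-M line (k_x = 0 measured from D)
    return -2 * lam * np.cos(dy) * ky * np.kron(tz, sx)

E, V = np.linalg.eigh(H_eff(0.1))
m = np.array([V[:, i].conj() @ Mx @ V[:, i] for i in range(4)])
print(np.round(E, 4))                   # two doubly degenerate bands -+ 2*lam*cos(dy)*ky
print(np.round(m, 4))                   # E_- states carry m = +i, E_+ states carry m = -i
```

Because the two degenerate bands live in different ℳ_x eigenspaces, a perturbation respecting 𝒢_x cannot hybridize them, which is the protection mechanism invoked above.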
§.§ First-principles calculations of the electronic structure and transport effects

Recent advances in the understanding of topological properties of condensed matter and of spin-orbitronics effects were led, to a large extent, by developments in density functional theory (DFT) calculations, which simulate solids from a microscopic quantum-mechanical description <cit.>. The practical implementation of the theory requires a problem-suited choice of the effective potential approximation (in the simplest form the local density approximation (LDA) or the generalized gradient approximation (GGA)) and of the wavefunction basis (typically plane waves or tight-binding). Owing to the variational formulation of DFT, the ground-state wave functions can be determined by a numerical self-consistent iteration controlled, e.g., by the convergence of the electronic density. From the ground-state wave functions, further quantities of interest are calculated. To reveal nontrivial topologies of the surface states, calculations in a slab geometry are required, and in the context of Dirac semimetals one needs to determine the symmetries of the states comprising the band crossing. Plots of the projected local density of states, such as the one in Fig. <ref>(c), then allow one to distinguish the topology and the character of the surface states and can help to interpret experimental data, e.g., from angle-resolved photoemission spectroscopy or scanning tunneling microscopy (see for instance Refs. <cit.>). For spintronics applications, transport effects are simulated within linear response theory. In this formalism, the transport coefficients are obtained from response functions of the ground-state wave functions, typically in a form similar to Eq. (<ref>) (see, e.g., Refs. <cit.>). In the presence of Dirac quasiparticles in the band structure, the computational costs increase due to the typically dominant contribution from the (un)avoided crossings.
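The self-consistent cycle sketched above can be caricatured in a few lines: an effective potential is built from the current density, the resulting Hamiltonian is diagonalized, and the loop repeats until the density converges. The two-site mean-field model below is purely illustrative (not an actual Kohn-Sham implementation; all parameters are invented):

```python
import numpy as np

t_hop, U, n_el = 1.0, 2.0, 1.0        # hopping, interaction, electron number (invented)
n = np.array([0.7, 0.3])              # initial guess for the site-resolved density

for it in range(200):
    # "effective potential" built from the current density, as in a Kohn-Sham cycle
    h = np.array([[U * n[0], -t_hop],
                  [-t_hop, U * n[1]]])
    _, v = np.linalg.eigh(h)
    n_new = n_el * np.abs(v[:, 0])**2 # occupy the lowest orbital
    if np.abs(n_new - n).max() < 1e-10:
        break                         # density-convergence criterion reached
    n = 0.5 * n + 0.5 * n_new         # linear mixing stabilizes the iteration

print(np.round(n, 6))                 # converges to the symmetric density [0.5 0.5]
```

Production DFT codes iterate exactly this kind of fixed-point loop, only with the Kohn-Sham potential, a large basis, and sophisticated mixing schemes in place of the toy ingredients used here.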
Additionally, in the realm of spintronics, it is desirable to simulate transport under realistic conditions, e.g., at finite temperature and in imperfect crystals. Recent advances in this direction include the ab initio inclusion of finite-temperature effects <cit.>, or the multi-scale approach, where the ab initio results parametrize an atomistic model <cit.>. The possibility of simulating strong relativistic and correlation effects on an equal footing in topological magnetic semimetals can be perceived as an exciting future direction in computational physics. Dynamical mean field theory combined with the LDA is the established starting point <cit.>. All the results presented in the next section were obtained within the standard GGA (+SOC) full-potential linearized augmented plane-wave method with the Perdew-Burke-Ernzerhof parametrization <cit.>.

§ ELECTRONIC STRUCTURE OF DIRAC AND WEYL ANTIFERROMAGNETIC CANDIDATES

In this section we give a summary of three key AFs which have been shown to exhibit relativistic quasiparticles or are potential candidates.

§.§ Dirac antiferromagnets XMnBi_2

Layered AFs from the 112 pnictide family, CaMnBi_2 and SrMnBi_2 <cit.>, support the double degeneracy of the electronic bands. In Fig. <ref> we show the crystallographic structure of SrMnBi_2, consisting of alternating Sr, Mn, and Bi layers. The band structure calculated with the SOC (Fig. <ref>(b)) shows massive Dirac fermions <cit.>, similar to the case of the minimal model with the Néel vector along the [001] direction. The XMnBi_2 AF family thus represents a massive quasi-2D Dirac fermion system. Most of the electronic properties are determined by the Bi square sheets, as can be seen from the orbital composition <cit.>. The quasi-2D character is reflected in the almost flat dispersion along the Γ-Z line in the BZ (cf. Fig. <ref>(b)). Additionally, this AF family exhibits quasi-2D quantum transport, namely a highly anisotropic angular-dependent magnetoresistance <cit.>.
Note that this very large magnetoresistance is governed by the projection of the magnetic field on the Fermi surface and is distinct from the spontaneous AMR driven by the SOC in magnets, which is a representative effect of Dirac spintronics. The different structure of the Dirac cones in SrMnBi_2 (Fig. <ref>(c)) and CaMnBi_2 was attributed to the different positions of the Sr and Ca atoms in combination with a different magnitude of the SOC <cit.>. Both AFs have Néel temperatures around room temperature. In contrast, EuMnBi_2 also has an AF-coupled Eu sublattice at very low temperatures <cit.>. The control of the AF order on the Eu sublattice leads to the transport effects discussed in Sec. 4.1. A more detailed review of the pnictides (also regarding superconductivity) can be found in Ref. <cit.>.

§.§ Weyl antiferromagnets Mn_3X

The first realistic Weyl semimetal candidate was predicted in the pyrochlore AF Y_2Ir_2O_7, based on DFT <cit.>. This AF Weyl semimetal has not been confirmed yet despite substantial experimental effort <cit.>. An alternative candidate with properties appealing to spintronics, namely a strong AHE, was predicted recently in the chiral non-collinear AFs Mn_3X (X = Ge, Sn) <cit.>. In Fig. <ref>(a) we show the crystallographic structure built from kagome planes stacked along the [001] axis. The materials are known to have a relatively weak magnetic anisotropy, reaching approximately ∼0.1 meV for Mn_3Sn <cit.>, and a net magnetic moment of 0.005 μ_B per unit cell <cit.>. The predicted triangular magnetic structures in Mn_3Sn, shown in Fig. <ref>(b,c) <cit.>, were also supported by torque measurements of the magnetic anisotropy <cit.>. The crystal in the magnetic texture of Fig. <ref>(b) has a glide mirror plane 𝒢_y={ℳ_y|(0, 0, 1/2)}, and two effective time-reversal symmetries combining mirror symmetries, ℳ_x𝒯 and ℳ_z𝒯. Any of these three symmetries doubles the number of Weyl points, leading to a multiplicity of 8 <cit.>.
Band structure calculations in Mn_3Ge reveal several Weyl points around the Fermi level, together with other trivial states, as illustrated in Fig. <ref>(b,c). The small symmetry breaking due to the net moment slightly shifts the positions of the Weyl points related by the mirror symmetries in the ideal AF structure with zero net moment <cit.>. The precise positions of the Weyl points were located by tracking the Berry curvature over the whole BZ, as explained in Sec. 1.2. The hallmark of Weyl semimetal states, the nontrivial Fermi-arc surface states, was predicted by first-principles calculations of the local density of states and is depicted in Fig. <ref>(c) <cit.>. A study published in this issue of the PSS reveals that, in spite of the weak anisotropy, the inverted chiral structure is relatively stable against thermal fluctuations, and that it is possible to influence the in-plane chiral AF magnetic structure via the spin-filtering effect <cit.>.

§.§ Dirac semimetal antiferromagnets CuMnX

CuMnX (X = As, P) was originally studied in its orthorhombic form as a promising AF semiconductor candidate <cit.>. Electronic structure calculations and transport measurements point towards a semimetallic phase <cit.>. The tetragonal phase was used to experimentally discover the NSOT <cit.>. Very recently, a symmetry-protected Dirac semimetal state was predicted in the orthorhombic phase of the CuMnX AFs <cit.>. In CuMnAs the Dirac points can carry topological charges and are protected by the combined 𝒫𝒯 symmetry together with a certain nonsymmorphic symmetry, in analogy with the minimal model of Sec. 2.2. The 𝒫𝒯 symmetry ensures a double band degeneracy over the whole BZ <cit.>, while the nonsymmorphic symmetry prevents hybridization of the bands at the band crossing <cit.>. The nonsymmorphic pattern in orthorhombic CuMnAs is slightly more involved than in the minimal model: orthorhombic CuMnAs contains four Mn sublattices that are connected in pairs by the 𝒫𝒯 symmetry, as seen in Fig. <ref>(a).
The atom-resolved band structure calculated without the SOC is depicted in Fig. <ref>(b) and shows dominant Mn orbitals at the Fermi level. Three visible Dirac points at the Fermi level along the Γ-X, X-U, and Z-X lines are part of the node line related to the glide mirror plane symmetry 𝒢_y={ℳ_y|(0, 1/2, 0)} <cit.>. The protected Dirac semimetal realized for the Néel order along the [001] axis is shown in Fig. <ref>(c); it appears when the nodal line is gapped everywhere except at the two Dirac points in the U-X-U subspace. These Dirac points are protected by the nonsymmorphic screw axis 𝒮_z={2_z|(1/2, 0, 1/2)} <cit.> and are connected via nontrivial surface states <cit.>. A topological index of the band crossing can be defined for the AF semimetal in analogy to the nonmagnetic Dirac semimetals <cit.>. The topological Dirac semimetal in CuMnAs is appealing since, according to ab initio calculations, only a pair of Dirac points occurs at the Fermi level <cit.>, thus offering an ideal model of a topological Dirac semimetal induced by band inversion <cit.>. In the next section we review the recent prediction of merging spintronics with topology and the novel magnetotransport effects in CuMnAs.

§ INTERPLAY BETWEEN TOPOLOGY AND ANTIFERROMAGNETISM

Topological AFs can host effects that cannot take place in either nonmagnets or FMs. We review here the interplay of Dirac quasiparticles, the QHE, and antiferromagnetism in the ternary pnictides.
We also review the intrinsic contribution of Weyl quasiparticles to the giant anomalous Hall effect in the non-collinear chiral AF Mn_3Ge, and novel effects predicted for CuMnAs.

§.§ Interplay of Dirac quasiparticles, antiferromagnetism and quantum Hall effects in ternary pnictides

The interaction between Dirac quasiparticles and magnetism was demonstrated in the ternary pnictides, although the Dirac quasiparticles and the magnetism arise from different physical origins <cit.>. Recently, an enhancement of the inter-layer exchange coupling mediated by Dirac carriers in CaMnBi_2 and SrMnBi_2 was found with the help of Raman scattering <cit.>. Magnetic field manipulation of the transport was also demonstrated in the sister compound EuMnBi_2 <cit.>. Magnetism can open an energy gap at the Dirac points in CaMnBi_2 and SrMnBi_2, which was attributed to a FM inter-layer coupling of the Mn moments in CaMnBi_2 and to an AF coupling in SrMnBi_2 <cit.>. The different behavior of the two compounds is due to the competition between the AF super-exchange and the FM double exchange mediated by the itinerant Bi electrons <cit.>. The presence of the 40 meV band gap at the Dirac point along the Γ-M line can potentially lead to a large contribution to the spin Hall effect <cit.>, as was discussed for similar paramagnetic situations in the iron-based superconductors <cit.> and in the Weyl semimetal TaAs <cit.>. The Eu-based compound behaves differently from SrMnBi_2 in large magnetic fields, owing to the additional AF ordering of the Eu moments. The suppression of the carrier density was attributed to the AF order of the Eu atoms and demonstrates the influence of the magnetism on the Fermi surface <cit.>. We have shown in the preceding section the Dirac bands close to the Fermi level in SrMnBi_2. Similarly, quasi-2D Dirac fermions are expected in EuMnBi_2. The Dirac fermions presumably give rise to a large positive linear magnetoresistance, as can be seen in Fig. <ref>(e), and to high mobilities of up to 10 000 cm^2/Vs <cit.>.
The influence of the magnetic field on the transport and magnetic properties of EuMnBi_2 is reproduced in Fig. <ref>. In Fig. <ref>(b) we show the phase diagram typical of an external magnetic field applied along the easy axis of an anisotropic AF. From the net magnetization measurement we see that above the spin-flop field the Eu moments reorient perpendicular to the applied field, while above the spin-flip field the moments order ferromagnetically. The AF ordering of the Eu moments has a substantial influence on the inter-layer transport, as can be seen from Fig. <ref>(d-f). Furthermore, a half-integer QHE controllable by the strength of an external magnetic field was reported in EuMnBi_2 <cit.>. The QHE was attributed to the sufficient suppression of the [001]-axis conductivity and to the confinement of the massive Dirac fermions to the quasi-2D Bi square layers by the spin-flop at the Eu sites <cit.>. Nevertheless, the detailed mechanism has not yet been identified.

§.§ Anomalous Hall effect in non-collinear antiferromagnets

Usually the AHE arises from the presence of magnetization and SOC <cit.>. Within this picture one can argue that the AHE is linear in the magnetization, as seen from the empirical Eq. (<ref>), and thus should vanish in bipartite AFs. Indeed, this is true in simple collinear AFs with the combined symmetry 𝒯T_1/2 of time reversal and a half-magnetic-unit-cell translation: replacing 𝒯 with 𝒯T_1/2 in Eq. (<ref>) implies that the Berry curvature is an odd function of crystal momentum, and the AHE vanishes due to Eq. (<ref>). We can ask whether it is possible to observe the AHE in systems with zero net magnetization, or with zero SOC. The answer to both questions is yes <cit.>. We start with the AHE in a system with a zero net magnetic moment. In certain AF textures it is not possible to combine the broken 𝒯 symmetry with another symmetry operation that would recover the symmetry of the AF and make the AHE vanish.
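The statement that the intrinsic AHE is governed by the Berry curvature of the occupied bands can be made concrete on a toy model. The sketch below does not describe any of the AFs discussed here; it evaluates the Chern number (the BZ integral of the Berry curvature, which fixes the quantized intrinsic σ_xy of a 2D insulator) for a generic two-band magnetic model, using the standard Fukui-Hatsugai-Suzuki lattice discretization:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1])

def h(kx, ky, m):
    # generic two-band model with broken time reversal (toy stand-in for a magnet)
    return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

def chern(m, N=60):
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h(kx, ky, m))[1][:, 0]   # occupied (lower) band
    F = 0.0
    for i in range(N):          # Fukui-Hatsugai-Suzuki: gauge-invariant plaquette fluxes
        for j in range(N):
            U1 = u[i, j].conj() @ u[(i + 1) % N, j]
            U2 = u[(i + 1) % N, j].conj() @ u[(i + 1) % N, (j + 1) % N]
            U3 = u[(i + 1) % N, (j + 1) % N].conj() @ u[i, (j + 1) % N]
            U4 = u[i, (j + 1) % N].conj() @ u[i, j]
            F += np.angle(U1 * U2 * U3 * U4)
    return round(F / (2 * np.pi))

print(abs(chern(-1.0)), chern(3.0))   # quantized Hall response vs. trivial phase
```

In a real AF such as Mn_3Ge the Fermi surface is metallic and σ_xy is not quantized, but the same Berry-curvature integrand, resolved in k, produces the hot spots at the avoided crossings discussed below.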
Indeed, a nonzero AHE was predicted <cit.> and later experimentally discovered in certain disordered and ordered AFs <cit.>. Recent theoretical works predicted a strong AHE in the non-collinear AFs Mn_3Ir <cit.> and Mn_3Ge <cit.>, based on ab initio calculations of the intrinsic part of the AHE. The largest contribution to the AHE originates from the avoided crossings near the Fermi surface <cit.>. As an example, we can compare the k-resolved Berry curvature in Fig. <ref>(a) with the band structure along the K-M axis of the BZ <cit.> in Fig. <ref>(b) for Mn_3Ge. The detailed interplay of the Weyl points with the AHE and the SHE in the Mn_3X family of AFs was also recently investigated <cit.>. Since the Berry curvature is an axial vector (see Eq. (<ref>)), it transforms in the same way as a magnetic moment under the (effective) time-reversal symmetry (see Eq. (<ref>)). Indeed, the magnetic space groups of the candidates Mn_3Ir and Mn_3Ge allow for a nonzero magnetic moment, which, however, does not contribute significantly to the AHE <cit.>. The small magnetization allowed for the observation of the AHE in Mn_3Ge by orienting the magnetic domains with magnetic fields <cit.>. The giant magnetic anisotropy energy in Mn_3Ir has prevented the observation of the AHE in this material until now <cit.>. Hence, we focus here on the Weyl metal candidate Mn_3Ge. The chiral magnetic structure depends on the external magnetic field B_FC applied during sample cooling. Independently of the orientation of the chiral magnetic order, Mn_3Ge has the effective time-reversal symmetry 𝒯ℳ_z, where ℳ_z is the (001) mirror plane symmetry, which implies σ_xy=0 (𝒯→𝒯ℳ_z in Eq. (<ref>)). Similarly, for the chiral structure in Fig. <ref>(b), stabilized by B_FC∥[010] <cit.>, there is an effective time-reversal symmetry 𝒯ℳ_x, implying σ_yz=0 and only σ_xz≠0 (blue line in Fig. <ref>(d)). Finally, for the magnetic order in Fig.
<ref>(c), induced by B_FC∥[100] <cit.>, the effective 𝒯𝒢_y symmetry gives σ_xz=0 and only σ_zy≠0 (red line in Fig. <ref>(d)). This explains the recent experimental findings summarized in Fig. <ref>(d). Kiyohara et al. <cit.> reported a nonzero contribution σ^AF_xz to the AHE from the chiral AF texture of Fig. <ref>(b). For the spin structure in Fig. <ref>(c), the authors measured a nonzero response only for σ^AF_zy. In their measurements, the anomalous Hall resistivity was extracted from Eq. (<ref>) by including the contribution ρ_H^AF from the AF texture,

ρ_H = R_0 H_z + R_S M + ρ_H^AF,

where M is the net magnetization. Ab initio calculations for Mn_3Ge give σ_xz^AF ≈ 330 Ω^-1cm^-1 <cit.>, while the experimentally inferred value is σ_xz ≈ 380 Ω^-1cm^-1 <cit.>. A strong AHE was also observed in the GdPtBi AF at very low temperatures, where the authors attributed the effect to the Berry curvature induced by the magnetic-field canting of the AF sublattices <cit.>. The discovery of the AHE in non-collinear AFs illustrates how basic research on AFs can advance the general understanding of spin-dependent transport effects. As mentioned above, the AHE without SOC was also observed in AFs. Several works have identified the so-called topological Hall effect, originating from nontrivial magnetic textures in which the role of the SOC is taken over by the spin chirality <cit.>.

§.§ Topological metal-insulator transition and anisotropic magnetoresistance

Predictions of topological quantum phases in oxide iridates have stimulated research on the TopoMIT <cit.>. The TopoMIT can be controlled by large pressures <cit.>, external magnetic fields <cit.>, strain <cit.>, or doping in X_2Ir_2O_7 <cit.>. Recently, a new concept was theoretically predicted in the orthorhombic CuMnAs AF <cit.>. Here the TopoMIT is controlled by the interplay between the Néel vector and the symmetry protection of the Dirac points.
The mechanism is related directly to the relativistic spin control and is thus very different from the mechanisms predicted for the X_2Ir_2O_7 family. Remarkably, in the nonsymmorphic 𝒫𝒯 AFs, the NSOT can be used to control the Néel vector direction and, in turn, the TopoMIT, as we demonstrated on the simple model in Sec. 2.2. We illustrate the staggered symmetry of the NSOT by the blue arrows in Fig. <ref>(a) for an applied current along the [100] direction. The various phases predicted for the CuMnAs AF and depicted in Fig. <ref>(c) include an AF topological Dirac semimetal for the Néel vector n∥[001] (the 3D electronic dispersion is plotted in the figure of the abstract), an AF semiconductor for n∥[101], and an AF Dirac semimetal for n∥[100] with the Dirac point along the Γ-X line and a small band gap of approximately 1 meV <cit.>. The possibility of controlling the TopoMIT by the NSOT can lead to novel concepts of spin-dependent transport. For instance, the non-equilibrium counterpart of the TopoMIT is the topological anisotropic magnetoresistance (TopoAMR) <cit.>. The origin of this effect lies in the changes of the Fermi-surface topology induced by the reorientation of the AF moments, and in the corresponding changes of the magnetic symmetry.

§ PERSPECTIVES AND CONCLUSION

The Dirac/Weyl AFs with properties appealing for spintronics are summarized in Fig. <ref>. While the topological Weyl and Dirac (semi)metals (Mn_3Ge and CuMnAs, respectively) are already extensively explored, a myriad of other topological AF systems have a large potential for future research. For instance, GdPtBi was predicted to be an AF TI <cit.>, and it was recently reported to host a magnetically induced Weyl semimetal state <cit.>.
Remarkably, several of the guiding AF symmetries important for spintronics are also shared with the magnetic structures proposed for several of the Fe-based superconductors, making a link to topological superconductivity, which is beyond the scope of this review <cit.>. Néel AF order in a monolayer of FeSe on SrTiO_3 was reported to exhibit the AF QSHE <cit.>. Many of the topological semimetal effects have come into focus only very recently and are not fully understood. For example, the role of relativistic quasiparticles in the linear magnetoresistance is controversial <cit.>, as is the topological nature of the Fermi arcs in Dirac semimetals <cit.> and the nature of the band crossings <cit.>. Experimental studies of topological AFs might lead to a deeper understanding of these effects, much as the research on non-collinear AFs has helped our general understanding of transport effects such as the AHE. Exciting challenges for theoretical and computational physics arise from the fact that topological AFs very often live at the intersection of different physical regimes, e.g., the strongly relativistic and strongly correlated regimes <cit.>. In conclusion, we have described how the unique symmetries of AFs allow for combining seemingly incompatible effects, on explicit examples that included the NSOT and Dirac quasiparticles, and the AHE in AFs. This opens novel research directions in topological antiferromagnetic spin-orbitronics.

We acknowledge support from the Grant Agency of the Charles University no. 280815 and of the Czech Republic no. 14-37427G, the Alexander von Humboldt Foundation, EU ERC Synergy Grant No. 610115, and the Transregional Collaborative Research Center (SFB/TRR) 173 SPIN+X.
Access to computing and storage facilities owned by parties and projects contributing to the National Grid Infrastructure MetaCentrum provided under the program "Projects of Large Research, Development, and Innovations Infrastructures" (CESNET LM2015042), is greatly appreciated. [100]Sinova2015J. Sinova,S. O. Valenzuela,J. Wunderlich, C. H. Back,andT. JungwirthSpin Hall effects,Rev. Mod. Phys. 87(4), 1213–1260 (2015). Novoselov2004K. S. Novoselov,A. K. Geim,S. V. Morozov, D. Jiang,Y. Zhang,S. V. Dubonos, I. V. Grigorieva,andA. A. FirsovElectric Field Effect in Atomically Thin Carbon Films,Science 306, 666–669 (2004). Kane2005C. L. Kane andE. J. MeleQuantum Spin Hall Effect in Graphene,Phys. Rev. Lett. 95(22), 226801 (2005). Moore2010aJ. E. MooreThe birth of topological insulators.,Nature 464(7286), 194–8 (2010). Kato2004dY. K. Kato,R. C. Myers,A. C. Gossard,and D. D. AwschalomObservation of the spin Hall effect in semiconductors.,Science 306(dec), 1910–1913 (2004). Wunderlich2004J. Wunderlich,B. Kastner,J. Sinova,and T. JungwirthExperimental discovery of the spin-Hall effect in Rashba spin-orbit coupled semiconductor systems,Cond-Mat (2004). Chernyshov2009A. Chernyshov,M. Overby,X. Liu,J. K. Furdyna,Y. Lyanda-Geller,andL. P. RokhinsonEvidence for reversible control of magnetization in a ferromagnetic material by means of spin–orbit magnetic field,Nat. Phys. 5(9), 656–659 (2009). Miron2011bI. M. Miron,K. Garello,G. Gaudin, P. J. Zermatten,M. V. Costache,S. Auffret, S. Bandiera,B. Rodmacq,A. Schuhl,and P. GambardellaPerpendicular switching of a single ferromagnetic layer induced by in-plane current injection.,Nature 476(7359), 189–193 (2011). Liu2012L. Liu,C. F. Pai,Y. Li,H. W. Tseng,D. C. Ralph,andR. A. BuhrmanSpin-torque switching with the giant spin Hall effect of tantalum.,Science 336(6081), 555–558 (2012). Ciccarelli2016C. Ciccarelli,L. Anderson,V. Tshitoyan, A. J. Ferguson,F. Gerhard,C. Gould, L. W. Molenkamp,J. Gayles, J. Železný,L. Šmejkal, Z. Yuan,J. 
Sinova,F. Freimuth,and T. JungwirthRoom-temperature spin – orbit torque in NiMnSb,Nat. Phys. 12, 855–861 (2016). Hesjedal2016T. Hesjedal andY. ChenTopological insulators: Engineered heterostructures,Nat. Mater. 16(1), 3–4 (2016). Fert2013A. Fert,V. Cros,andJ. SampaioSkyrmions on the track,Nat. Nanotechnol. 8(3), 152–156 (2013). Fan2016bY. Fan andK. L. WangSpintronics Based on Topological Insulators,SPIN 06(02), 1640001 (2016). Masuda2016H. Masuda,H. Sakai,M. Tokunaga, Y. Yamasaki,A. Miyake,J. Shiogai, S. Nakamura,S. Awaji,A. Tsukazaki, H. Nakao,Y. Murakami,T. h. Arima, Y. Tokura,andS. IshiwataQuantum Hall effect in a bulk antiferromagnet EuMnBi2 with magnetically confined two-dimensional Dirac fermions,Sci. Adv. 2(1), e1501117 (2016). Fan2014aY. Fan,P. Upadhyaya,X. Kou,M. Lang, S. Takei,Z. Wang,J. Tang,L. He, L. T. Chang,M. Montazeri,G. Yu, W. Jiang,T. Nie,R. N. Schwartz, Y. Tserkovnyak,andK. L. WangMagnetization switching through giant spin-orbit torque in a magnetically doped topological insulator heterostructure.,Nat. Mater. 13(7), 699–704 (2014). Manchon2014bA. R. Mellnik,J. S. Lee,A. Richardella, J. L. Grab,P. J. Mintun,M. H. Fischer, A. Vaezi,A. Manchon,E. A. Kim, N. Samarth,andD. C. RalphSpin-transfer torque generated by a topological insulator,Nature 511(7510), 449–451 (2014). Fan2016Y. Fan,K. L. Wang,X. Kou, P. Upadhyaya,Q. Shao,L. Pan, M. Lang,X. Che,J. Tang, M. Montazeri,K. Murata,L. T. Chang, M. Akyol,G. Yu,T. Nie,K. L. Wong, J. Liu,Y. Wang,Y. Tserkovnyak,and K. L. WangElectric-field control of spin-orbit torque in a magnetically doped topological insulator,Spin 11(11), 352 (2016). Wadley2016P. Wadley,B. Howells,J. Zelezny, C. Andrews,V. Hills,R. P. Campion, V. Novak,K. Olejník,F. Maccherozzi, S. S. Dhesi,S. Y. Martin,T. Wagner, J. Wunderlich,F. Freimuth,Y. Mokrousov, J. Kunes,J. S. Chauhan,M. J. Grzybowski, A. W. Rushforth,K. W. Edmonds,B. L. Gallagher,andT. JungwirthElectrical switching of an antiferromagnet,Science 351, 587–590 (2016). Smejkal2016L. 
Šmejkal,J. Železný, J. Sinova,andT. JungwirthElectric control of Dirac quasiparticles by spin-orbit torque in an antiferromagnet,arXiv:1610.08107 (2016). Yang2017H. Yang,Y. Sun,Y. Zhang,W. J. Shi, S. S. P. Parkin,andB. YanTopological Weyl semimetals in the chiral antiferromagnetic materials Mn 3 Ge and Mn 3 Sn,New J. Phys. 19(1), 015008 (2017). Jungwirth2016T. Jungwirth,X. Marti,P. Wadley,and J. WunderlichAntiferromagnetic spintronics,Nat. Nanotechnol. 11(3), 231–241 (2016). Hasan2010M. Z. Hasan andC. KaneColloquium: Topological insulators,Rev. Mod. Phys. 82(4), 3045–3067 (2010). Kosterlitz1972J. M. Kosterlitz andD. J. ThoulessLong range order and metastability in two dimensional solids and superfluids. (Application of dislocation theory),J. Phys. C Solid State Phys. 5(11), L124–L126 (1972). Kosterlitz1973J. M. Kosterlitz andD. J. ThoulessOrdering, metastability and phase transitions in two-dimensional systems,J. Phys. C Solid State Phys. 6(7), 1181–1203 (1973). Klitzing1980K. V. Klitzing,G. Dorda,andM. PepperNew method for high-accuracy determination of the fine-structure constant based on quantized hall resistance,Phys. Rev. Lett. 45(6), 494–497 (1980). Thouless1982D. J. Thouless,M. Kohmoto,M. P. Nightingale, andM. den NijsQuantized Hall Conductance in a Two-Dimensional Periodic Potential,Phys. Rev. Lett. 49(6), 405–408 (1982). Thouless1983D. J. ThoulessQuantization of particle transport,Phys. Rev. B 27(10), 6083–6087 (1983). Nagaosa2010N. Nagaosa,J. Sinova,S. Onoda,A. H. MacDonald,andN. P. OngAnomalous Hall effect,Rev. Mod. Phys. 82(2), 1539–1592 (2010). Haldane1988F. D. M. HaldaneModel for a Quantum Hall Effect without Landau Levels: Condensed-Matter Realization of the {`Parity Anomaly'},Phys. Rev. Lett. 61(18), 2015 (1988). Hsieh2008D. Hsieh,D. Qian,L. Wray,Y. Xia, Y. S. Hor,R. J. Cava,andM. Z. HasanA topological Dirac insulator in a quantum spin Hall phase.,Nature 452(7190), 970–4 (2008). Zhang2009aH. Zhang,C. X. Liu,X. L. Qi, X. Dai,Z. Fang,andS. C. 
ZhangTopological insulators in Bi2Se3, Bi2Te3 and Sb2Te3 with a single Dirac cone on the surface,Nat. Phys. 5(6), 438–442 (2009). Sun2016aY. Sun,Y. Zhang,C. Felser,and B. YanStrong Intrinsic Spin Hall Effect in the TaAs Family of Weyl Semimetals,Phys. Rev. Lett. 117(14), 146403 (2016). Vafek2013O. Vafek andA. VishwanathDirac Fermions in Solids: From High-Tc Cuprates and Graphene to Topological Insulators and Weyl Semimetals,Annu. Rev. Condens. Matter Phys. 5(1), 83–112 (2014). Burkov2016A. A. BurkovTopological semimetals,Nat. Mater. 15(11), 1145–1148 (2016). Herring1937C. HerringAccidental Degeneracy in the Energy Bands of Crystals,Phys. Rev. 52(4), 365–373 (1937). Abrikosov1971A. A. Abrikosov andS. D. BeneslavskiiPossible existence of substances intermediate between metals and dielectrics,Sov. Phys. JETP 32(4), 699 (1971). Neupane2013M. Neupane,S. Xu,R. Sankar, N. Alidoust,G. Bian,C. Liu, I. Belopolski,T. R. Chang,H. T. Jeng, H. Lin,A. Bansil,F. Chou,andM. Z. HasanObservation of a three-dimensional topological Dirac semimetal phase in high-mobility Cd3As2,Nat. Commun. 5, 1–7 (2014). Liu2014eZ. K. Liu,B. Zhou,Y. Zhang,Z. J. Wang,H. M. Weng,D. Prabhakaran,S. Mo, Z. X. Shen,Z. Fang,X. Dai, Z. Hussain,andY. L. ChenDiscovery of a Three-Dimensional Topological Dirac Semimetal, Na3Bi,Science 343, 864–867 (2014). Xu2015bS. Y. Xu,I. Belopolski,N. Alidoust, M. Neupane,G. Bian,C. Zhang, R. Sankar,G. Chang,Z. Yuan,C. C. Lee,S. M. Huang,H. Zheng,J. Ma, D. S. Sanchez,B. Wang,A. Bansil, F. Chou,P. P. Shibayev,H. Lin, S. Jia,andM. Z. HasanDiscovery of a Weyl fermion semimetal and topological Fermi arcs,Science 349(6248), 613–617 (2015). Lv2015B. Q. Lv,H. M. Weng,B. B. Fu, X. P. Wang,H. Miao,J. Ma, P. Richard,X. C. Huang,L. X. Zhao, G. F. Chen,Z. Fang,X. Dai,T. Qian,andH. DingExperimental discovery of Weyl semimetal TaAs,Phys. Rev. X 5(3), 031013 (2015). Wang2012fZ. Wang,Y. Sun,X. Q. Chen, C. Franchini,G. Xu,H. Weng,X. Dai, andZ.
FangDirac semimetal and topological phase transitions in A_3Bi (A = Na, K, Rb),Phys. Rev. B 85(19), 195320 (2012). Wang2013gZ. Wang,H. Weng,Q. Wu,X. Dai,and Z. FangThree-dimensional Dirac semimetal and quantum transport in Cd3As2,Phys. Rev. B 88(12), 1–6 (2013). Huang2015S. M. Huang,S. Y. Xu,I. Belopolski, C. C. Lee,G. Chang,B. Wang, N. Alidoust,G. Bian,M. Neupane, C. Zhang,S. Jia,A. Bansil,H. Lin, andM. Z. HasanA Weyl Fermion semimetal with surface Fermi arcs in the transition metal monopnictide TaAs class,Nat. Commun. 6, 7373 (2015). Weng2015H. Weng,C. Fang,Z. Fang,B. A. Bernevig,andX. DaiWeyl Semimetal Phase in Noncentrosymmetric Transition-Metal Monophosphides,Phys. Rev. X 5, 011029 (2015). Wieder2016B. J. Wieder,Y. Kim,A. M. Rappe,and C. L. KaneDouble Dirac Semimetals in Three Dimensions,Phys. Rev. Lett. 116(18), 1–5 (2016). Bradlyn2016B. Bradlyn,J. Cano,Z. Wang,M. G. Vergniory,C. Felser,R. J. Cava,andB. A. BernevigBeyond Dirac and Weyl fermions: Unconventional quasiparticles in conventional crystals,Science 353(6299), aaf5037 (2016). bernevig2013topologicalB. Bernevig andT. Hughes, Topological Insulators and Topological Superconductors (Princeton University Press, 2013). Yang2014aB. J. Yang andN. NagaosaClassification of stable three-dimensional Dirac semimetals with nontrivial topology,Nat. Commun. 5, 4898 (2014). Young2012S. M. Young,S. Zaheer,J. C. Y. Teo, C. L. Kane,E. J. Mele,andA. M. RappeDirac semimetal in three dimensions,Phys. Rev. Lett. 108(14), 1–5 (2012). Fang2015C. Fang,Y. Chen,H. Y. Kee,and L. FuTopological nodal line semimetals with and without spin-orbital coupling,Phys. Rev. B 92(8), 1–5 (2015). Young2015S. M. Young andC. L. KaneDirac Semimetals in Two Dimensions,Phys. Rev. Lett. 115(12), 1–5 (2015). Schoop2016L. M. Schoop,M. N. Ali,C. Straßer, A. Topp,A. Varykhalov,D.
Marchenko, V. Duppel,S. S. P. Parkin,B. V. Lotsch, andC. R. AstDirac cone protected by non-symmorphic symmetry and three-dimensional Dirac line node in ZrSiS,Nat. Commun. 7(May), 11696 (2016). bradley2010mathematicalC. Bradley andA. Cracknell, The Mathematical Theory of Symmetry in Solids: Representation Theory for Point Groups and Space Groups (OUP Oxford, 2010). Wan2011X. Wan,A. M. Turner,A. Vishwanath,and S. Y. SavrasovTopological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates,Phys. Rev. B 83(20), 205101 (2011). Wang2016cZ. Wang,M. G. Vergniory,S. Kushwaha, M. Hirschberger,E. V. Chulkov,A. Ernst, N. P. Ong,R. J. Cava,andB. A. BernevigTime-Reversal-Breaking Weyl Fermions in Magnetic Heusler Alloys,Phys. Rev. Lett. 117(23), 1–12 (2016). Zyuzin2012A. A. Zyuzin,S. Wu,andA. A. BurkovWeyl semimetal with broken time reversal and inversion symmetries,Phys. Rev. B 85(16), 165110 (2012). Chang2016bG. Chang,B. Singh,S. Y. Xu,G. Bian,S. M. Huang,C. H. Hsu,I. Belopolski, N. Alidoust,D. S. Sanchez,H. Zheng, H. Lu,X. Zhang,Y. Bian,T. R. Chang,H. T. Jeng,A. Bansil,H. Hsu, S. Jia,T. Neupert,H. Lin,andM. Z. HasanTheoretical prediction of magnetic and noncentrosymmetric Weyl fermion semimetal states in the R-Al-X family of compounds (R=rare earth, Al=aluminium, X=Si, Ge),arxiv.org/1604.02124(apr) (2016). Nielsen1981H. B. Nielsen andM. NinomiyaA no-go theorem for regularizing chiral fermions,Phys. Lett. B 105(2-3), 219–223 (1981). Witten2015E. WittenThree Lectures On Topological Phases Of Matter,arxiv.org/1510.07698(oct) (2015). Yang2011bK. Y. Yang,Y. M. Lu,andY. RanQuantum Hall effects in a Weyl semimetal: Possible application in pyrochlore iridates,Phys. Rev. B - Condens. Matter Mater. Phys. 84(7), 075129 (2011). Chang2016G. Chang,D. S. Sanchez,B. J. Wieder, S. Y. Xu,F. Schindler,I. Belopolski, S. M. Huang,B. Singh,D. Wu, T. Neupert,T. R. Chang,H. Lin,and M. Z. 
HasanKramers theorem-enforced Weyl fermions: Theory and Materials Predictions (Ag$_3$BO$_3$, TlTe$_2$O$_6$ and Ag$_2$Se related families),arxiv.org/1611.07925(nov), 26 (2016). Gresch2016D. Gresch,G. Autès,O. V. Yazyev, M. Troyer,D. Vanderbilt,B. A. Bernevig,and A. A. SoluyanovZ2Pack: Numerical Implementation of Hybrid Wannier Centers for Identifying Topological Materials,arxiv.org/1610.08983(oct) (2016). Xu2015S. y. Xu,C. Liu,S. K. Kushwaha, R. Sankar,J. W. Krizan,I. Belopolski, M. Neupane,G. Bian,N. Alidoust, T. r. Chang,H. t. Jeng,C. y. Huang, W. f. Tsai,H. Lin,P. P. Shibayev, F. c. Chou,R. J. Cava,andM. Z. HasanObservation of Fermi arc surface states in a topological metal,Science (80-. ). (2015). Yang2015dL. X. Yang,Z. K. Liu,Y. Sun, H. Peng,H. F. Yang,T. Zhang, B. Zhou,Y. Zhang,Y. F. Guo, M. Rahn,D. Prabhakaran,Z. Hussain, S. K. Mo,C. Felser,B. Yan,and Y. L. ChenWeyl semimetal phase in the non-centrosymmetric compound TaAs,Nat. Phys. 11(9), 728–732 (2015). Shekhar2015C. Shekhar,A. K. Nayak,Y. Sun, M. Schmidt,M. Nicklas,I. Leermakers, U. Zeitler,Y. Skourski,J. Wosnitza, Z. Liu,Y. Chen,W. Schnelle, H. Borrmann,Y. Grin,C. Felser,and B. YanExtremely large magnetoresistance and ultrahigh mobility in the topological Weyl semimetal candidate NbP,Nat. Phys. 11(8), 645–649 (2015). Ali2014M. N. Ali,J. Xiong,S. Flynn,J. Tao,Q. D. Gibson,L. M. Schoop,T. Liang, N. Haldolaarachchige,M. Hirschberger,N. P. Ong,andR. J. CavaLarge, non-saturating magnetoresistance in WTe2.,Nature 514(7521), 205–8 (2014). Arnold2015F. Arnold,C. Shekhar,S. C. Wu, Y. Sun,R. D. dos Reis,N. Kumar, M. Naumann,M. O. Ajeesh,M. Schmidt, A. G. Grushin,J. H. Bardarson,M. Baenitz, D. Sokolov,H. Borrmann,M. Nicklas, C. Felser,E. Hassinger,andB. YanNegative magnetoresistance without well-defined chirality in the Weyl semimetal TaP,Nat. Commun. 7(8), 11615 (2016). Liang2014T. Liang,Q. Gibson,M. N. Ali, M. Liu,R. J. Cava,andN. P. 
OngUltrahigh mobility and giant magnetoresistance in the Dirac semimetal Cd3As2,Nat. Mater. 14(3), 280–284 (2014). Hirschberger2016M. Hirschberger,S. Kushwaha,Z. Wang, Q. Gibson,S. Liang,C. A. Belvin, B. A. Bernevig,R. J. Cava,andN. P. OngThe chiral anomaly and thermopower of Weyl fermions in the half-Heusler GdPtBi,Nat. Mater. 15(11), 1161–1165 (2016). Jia2016S. Jia,S. Y. Xu,andM. Z. HasanWeyl semimetals, Fermi arcs and chiral anomalies,Nat. Mater. 15(11), 1140–1144 (2016). Zyuzin2012aA. A. Zyuzin andA. A. BurkovTopological response in Weyl semimetals and the chiral anomaly,Phys. Rev. B 86(11), 115133 (2012). Jungwirth2014T. Jungwirth,J. Wunderlich,V. Novák, K. Olejník,B. L. Gallagher,R. P. Campion,K. W. Edmonds,A. W. Rushforth, A. J. Ferguson,andP. NěmecSpin-dependent phenomena and device concepts explored in (Ga,Mn)As,Rev. Mod. Phys. 86(3), 855–896 (2014). Ralph2008D. Ralph andM. D. StilesSpin transfer torques,J. Magn. Magn. Mater. 320(7), 1190–1216 (2008). MacDonald2011A. H. MacDonald andM. TsoiAntiferromagnetic metal spintronics,Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 369(1948), 3098–3114 (2011). Baltz2016V. Baltz,A. Manchon,M. Tsoi, T. Moriyama,T. Ono,andY. TserkovnyakAntiferromagnetism: the next flagship magnetic order for spintronics ?,arXiv:1606.04284(jun), 1–138 (2016). Park2011bB. G. Park,J. Wunderlich,X. Martí, V. Holý,Y. Kurosaki,M. Yamada, H. Yamamoto,A. Nishide,J. Hayakawa, H. Takahashi,A. B. Shick,and T. JungwirthA spin-valve-like magnetoresistance of an antiferromagnet-based tunnel junction.,Nat. Mater. 10(5), 347–51 (2011). wang2014cC. Wang,H. Seinige,G. Cao,J. S. Zhou,J. B. Goodenough,andM. TsoiAnisotropic Magnetoresistance in Antiferromagnetic Sr2IrO4,Phys. Rev. X 4, 041034 (2014). Marti2014X. Marti,I. Fina,C. Frontera,J. Liu,P. Wadley,Q. He,R. J. Paull,J. D. Clarkson,J. Kudrnovský,I. Turek, J. Kuneš,D. Yi,J. H. Chu, C. T. Nelson,L. You,E. Arenholz, S. Salahuddin,J. Fontcuberta,T. Jungwirth, andR. 
RameshRoom-temperature antiferromagnetic memory resistor.,Nat. Mater. 13(4), 367–374 (2014). Zelezny2014J. Železný,H. Gao, K. Výborný,J. Zemen, J. Mašek,A. Manchon,J. Wunderlich, J. Sinova,andT. JungwirthRelativistic Néel-Order Fields Induced by Electrical Current in Antiferromagnets,Phys. Rev. Lett. 113(15), 157201 (2014). He2016Q. L. He,X. Kou,A. J. Grutter, G. Yin,L. Pan,X. Che,Y. Liu, T. Nie,B. Zhang,S. M. Disseler, B. J. Kirby,W. Ratcliff II,Q. Shao, K. Murata,X. Zhu,G. Yu,Y. Fan, M. Montazeri,X. Han,J. A. Borchers,and K. L. WangTailoring exchange couplings in magnetic topological-insulator/antiferromagnet heterostructures,Nat. Mater. 16(1), 94–100 (2016). Ghosh2016S. Ghosh andA. ManchonSpin Orbit Torque in two dimensional Antiferromagnetic Topological Insulators,arxiv.org/1609.01174(sep) (2016). Hanke2017aJ. P. Hanke,F. Freimuth,C. Niu, S. Blügel,andY. MokrousovMixed Weyl semimetals and dissipationless magnetization control in insulators,arxiv.org/1701.08050(c) (2017). Mong2010R. S. K. Mong,A. M. Essin,andJ. E. MooreAntiferromagnetic topological insulators,Phys. Rev. B 81(24), 1–10 (2010). Fang2013C. Fang,M. J. Gilbert,andB. A. BernevigTopological insulators with commensurate antiferromagnetism,Phys. Rev. B 88(8), 085406 (2013). Tang2016P. Tang,Q. Zhou,G. Xu,andS. C. ZhangDirac fermions in an antiferromagnetic semimetal,Nat. Phys. 12, 1100–1104 (2016). Herring1966C. Herring, Magnetism: Exchange interactions among itinerant electrons,in: Magnetism, edited by G. T. Rado and H. Suhl,(Academic Press, 1966). Chen2014H. Chen,Q. Niu,andA. H. MacDonaldAnomalous Hall Effect Arising from Noncollinear Antiferromagnetism,Phys. Rev. Lett. 112(jan), 017205 (2014). Yang2016bB. J. Yang,T. A. Bojesen,T. Morimoto,and A. FurusakiTopological semimetals protected by type-II nonsymmorphic symmetries,arxiv.org/1604.00843 pp. 1–32 (2016). Zelezny2017J. Zelezny,H. Gao,A. Manchon, F. Freimuth,Y. Mokrousov,J. Zemen, J. Masek,J. Sinova,andT. 
JungwirthSpin-orbit torques in locally and globally noncentrosymmetric crystals : Antiferromagnets and ferromagnets,Phys. Rev. B 95, 014403 (2017). Jones2015R. O. JonesDensity functional theory: Its origins, rise to prominence, and future,Rev. Mod. Phys. 87(3), 897–923 (2015). bluegel2014computingS. Blügel,I. für Festkörperforschung (Jülich). Spring School,andI. for Advanced Simulation, Computing Solids: Models, Ab-initio Methods and Supercomputing ; Lecture Notes of the 45th IFF Spring School 2014, Lectures notes of the ... IFF Spring School (Forschungszentrum, Zentralbibliothek, 2014). Batabyal2016R. Batabyal,N. Morali,N. Avraham, Y. Sun,M. Schmidt,C. Felser, A. Stern,B. Yan,andH. BeidenkopfVisualizing weakly bound surface Fermi arcs and their correspondence to bulk Weyl fermions,Sci. Adv. 2(8), e1600709–e1600709 (2016). Gradhand2012M. Gradhand,D. V. Fedorov,F. Pientka, P. Zahn,I. Mertig,andB. L. GyörffyFirst-principle calculations of the Berry curvature of Bloch states for charge and spin transport of electrons,J. Phys. Condens. Matter 24, 213202 (2012). Freimuth2015F. Freimuth,S. Blügel,and Y. MokrousovDirect and inverse spin-orbit torques,Phys. Rev. B 92(6), 064415 (2015). Liu2011cY. Liu,A. a. Starikov,Z. Yuan,and P. J. KellyFirst-principles calculations of magnetization relaxation in pure Fe, Co, and Ni with frozen thermal lattice disorder,Phys. Rev. B 84(1), 014412 (2011). Kodderitzsch2013D. Ködderitzsch,K. Chadova, J. Minár,andH. EbertImpact of finite temperatures and correlations on the anomalous Hall conductivity from ab initio theory,New J. Phys. 15(002) (2013). Janson2014O. Janson,I. Rousochatzakis,A. A. Tsirlin, M. Belesi,A. A. Leonov,U. K. Rößler,J. van den Brink,and H. RosnerThe quantum nature of skyrmions and half-skyrmions in Cu2OSeO3,Nat. Commun. 5(1), 5376 (2014). Li2017G. Li,B. Yan,Z. Wang,and K. HeldTopological Dirac semimetal phase in Pd and Pt oxides,Phys. Rev. B 95(3), 035102 (2017). Perdew1996J. P. Perdew,K. Burke,and M. 
ErnzerhofGeneralized Gradient Approximation Made Simple,Phys. Rev. Lett. 77(18), 3865–3868 (1996). Park2011S. R. Park,C. H. Kim,J. Yu,J. H. Han,andC. KimOrbital-Angular-Momentum Based Origin of Rashba-Type Surface Band Splitting,Phys. Rev. Lett. 107(15), 156803 (2011). Lee2013bG. Lee,M. A. Farhan,J. S. Kim,and J. H. ShimAnisotropic Dirac electronic structures of A MnBi 2 ( A = Sr ,Ca),Phys. Rev. B 87(24), 245104 (2013). Park2011aJ. Park,G. Lee,F. Wolff-Fabris,Y. Y. Koh,M. J. Eom,Y. K. Kim,M. A. Farhan, Y. J. Jo,C. Kim,J. H. Shim,and J. S. KimAnisotropic dirac fermions in a Bi square net of SrMnBi2,Phys. Rev. Lett. 107(12), 1–5 (2011). Guo2014Y. F. Guo,A. J. Princep,X. Zhang, P. Manuel,D. Khalyavin,I. I. Mazin, Y. G. Shi,andA. T. BoothroydCoupling of magnetic order to planar Bi electrons in the anisotropic Dirac metals AMnBi2 (A=Sr,Ca),Phys. Rev. B 90(7), 075120 (2014). Wang2011eK. Wang,D. Graf,H. Lei,S. W. Tozer,andC. PetrovicQuantum transport of two-dimensional Dirac fermions in SrMnBi 2,Phys. Rev. B - Condens. Matter Mater. Phys. 84(22), 220401(R) (2011). Feng2014Y. Feng,Z. Wang,C. Chen,Y. Shi, Z. Xie,H. Yi,A. Liang,S. He, J. He,Y. Peng,X. Liu,Y. Liu, L. Zhao,G. Liu,X. Dong,J. Zhang, C. Chen,Z. Xu,X. Dai,Z. Fang,and X. J. ZhouStrong anisotropy of Dirac cones in SrMnBi2 and CaMnBi2 revealed by angle-resolved photoemission spectroscopy.,Sci. Rep. 4, 5385 (2014). Ray2016S. J. Ray andL. AlffSuperconductivity and Dirac fermions in 112-phase pnictides,Phys. status solidi 254(1), 1600163 (2017). Sushkov2015A. B. Sushkov,J. B. Hofmann,G. S. Jenkins, J. Ishikawa,S. Nakatsuji,S. Das Sarma,and H. D. DrewOptical evidence for a Weyl semimetal state in pyrochlore Eu2Ir2 O7,Phys. Rev. B - Condens. Matter Mater. Phys. 92(24), 241108(R) (2015). Sandratskii1996L. M. Sandratskii andJ. KüblerRole of Orbital Polarization in Weak Ferromagnetism,Phys. Rev. Lett. 76(26), 4963–4966 (1996). Duan2015T. F. Duan,W. J. Ren,W. L. Liu, S. J. Li,W. Liu,andZ. D. 
ZhangMagnetic anisotropy of single-crystalline Mn3Sn in triangular and helix-phase states,Appl. Phys. Lett. 107(8), 82403 (2015). Tomiyoshi1982S. Tomiyoshi andY. YamaguchiMagnetic Structure and Weak Ferromagnetism of Mn3Sn Studied by Polarized Neutron Diffraction,J. Phys. Soc. Japan (1982). Fujita2016H. FujitaField-free, spin-current control of magnetization in non-collinear chiral antiferromagnets,Phys. status solidi - Rapid Res. Lett.(dec) (2016). Maca2012F. Máca,J. Mašek,O. Stelmakhovych, X. Martí,H. Reichlová, K. Uhlířová,P. Beran,P. Wadley, V. Novák,andT. JungwirthRoom-temperature antiferromagnetism in CuMnAs,J. Magn. Magn. Mater. 324(8), 1606–1612 (2012). Bernevig2015B. A. BernevigIt's been a Weyl coming,Nat. Phys. 11(9), 698–699 (2015). Zhang2016gA. Zhang,C. Liu,C. Yi,G. Zhao, T. l. Xia,J. Ji,Y. Shi,R. Yu, X. Wang,C. Chen,andQ. ZhangInterplay of Dirac electrons and magnetism in CaMnBi2 and SrMnBi2,Nat. Commun. 7, 13833 (2016). Zhang2016dY. Zhang,Y. Sun,H. Yang, J. Železný,S. P. P. Parkin, C. Felser,andB. YanStrong, anisotropic anomalous Hall effect and spin Hall effect in chiral antiferromagnetic compounds Mn_3X (X = Ge, Sn, Ga, Ir, Rh and Pt),arXiv:1610.04034 (2016). Kiyohara2015N. Kiyohara,T. Tomita,andS. NakatsujiGiant Anomalous Hall Effect in the Chiral Antiferromagnet Mn3Ge,Phys. Rev. Appl. 5(jun), 064009 (2016). Shindou2001R. Shindou andN. NagaosaOrbital Ferromagnetism and Anomalous Hall Effect in Antiferromagnets on the Distorted fcc Lattice,Phys. Rev. Lett. 87(11), 116801 (2001). Bruno2004P. Bruno,V. K. Dugaev,and M. TaillefumierTopological Hall Effect and Berry Phase in Magnetic Nanostructures,Phys. Rev. Lett. 93(9), 096806 (2004). Kubler2014J. Kübler andC. FelserNon-collinear antiferromagnets and the anomalous Hall effect,Europhys. Lett. 108(6), 67001 (2014). Machida2010Y. Machida,S. Nakatsuji,S. Onoda, T. Tayama,andT. 
SakakibaraTime-reversal symmetry breaking and spontaneous Hall effect without magnetic dipole order.,Nature 463(7278), 210–213 (2010). Nakatsuji2015S. Nakatsuji,N. Kiyohara,andT. HigoLarge anomalous Hall effect in a non-collinear antiferro- magnet at room temperature,Nature 527, 212 (2015). Nayak2016A. K. Nayak,J. E. Fischer,Y. Sun, B. Yan,J. Karel,A. C. Komarek, C. Shekhar,N. Kumar,W. Schnelle, J. Kübler,C. Felser,S. S. P. Parkin, J. Ku bler,C. Felser,andS. S. P. ParkinLarge anomalous Hall effect driven by a nonvanishing Berry curvature in the noncolinear antiferromagnet Mn3Ge,Sci. Adv. 2(4), e1501870–e1501870 (2016). Zhang2016fW. Zhang,W. Han,S. H. Yang,Y. Sun, Y. Zhang,B. Yan,andS. S. P. ParkinGiant facet-dependent spin-orbit torque and spin Hall conductivity in the triangular antiferromagnet IrMn 3,Sci. Adv. 2, e1600759 (2016). Suzuki2016T. Suzuki,R. Chisnell,A. Devarakonda, Y. T. Liu,W. Feng,D. Xiao,J. W. Lynn,andJ. G. CheckelskyLarge anomalous Hall effect in a half-Heusler antiferromagnet,Nat. Phys. 12(July), 1119 (2016). Surgers2014C. Sürgers,G. Fischer,P. Winkel,and H. V. LöhneysenLarge topological Hall effect in the non-collinear phase of an antiferromagnet.,Nat. Commun. 5, 3400 (2014). Surgers2016C. Sürgers,W. Kittler,T. Wolf,and H. v. LöhneysenAnomalous Hall effect in the noncollinear antiferromagnet Mn5Si3,AIP Adv. 6(5), 055604 (2016). Wadley2015aP. Wadley,V. Hills,M. R. Shahedkhah, K. W. Edmonds,R. P. Campion,V. Novák, B. Ouladdiaf,D. Khalyavin,S. Langridge, V. Saidl,P. Nemec,A. W. Rushforth, B. L. Gallagher,S. S. Dhesi,F. Maccherozzi, J. Železný,andT. JungwirthAntiferromagnetic structure in tetragonal CuMnAs thin films,Sci. Rep. 5, 17079 (2015). Kohn2013a. Kohn,A. Kovács,R. Fan,G. J. McIntyre,R. C. C. Ward,andJ. P. GoffThe antiferromagnetic structures of IrMn3 and their influence on exchange-bias,Sci. Rep. 3(aug), 2412 (2013). Li2011C. Li,J. S. Lian,andQ. JiangAntiferromagnet topological insulators with AB2C Heusler structure,Phys. Rev. B - Condens. 
Matter Mater. Phys. 83(23), 1–5 (2011). Li2015Z. Li,H. Su,X. Yang,and J. ZhangElectronic structure of the antiferromagnetic topological insulator candidate GdBiPt,Phys. Rev. B 91(23), 235128 (2015). Wang2016eZ. F. Wang,H. Zhang,D. Liu,C. Liu, C. Tang,C. Song,Y. Zhong,J. Peng, F. Li,C. Nie,L. Wang,X. J. Zhou, X. Ma,Q. K. Xue,andF. LiuTopological edge states in a high-temperature superconductor FeSe/SrTiO3(001) film,Nat. Mater. 15(September), 968 (2016). Tian2015Z. Tian,Y. Kohama,T. Tomita, H. Ishizuka,T. H. Hsieh,J. J. Ishikawa, K. Kindo,L. Balents,andS. NakatsujiField-induced quantum metal–insulator transition in the pyrochlore iridate Nd2Ir2O7,Nat. Phys. 12(November), 134 (2015). Kondo2015T. Kondo,M. Nakayama,R. Chen,J. J. Ishikawa,E. G. Moon,T. Yamamoto,Y. Ota, W. Malaeb,H. Kanai,Y. Nakashima, Y. Ishida,R. Yoshida,H. Yamamoto, M. Matsunami,S. Kimura,N. Inami, K. Ono,H. Kumigashira,S. Nakatsuji, L. Balents,andS. ShinQuadratic Fermi Node in a 3D Strongly Correlated Semimetal,Nat. Commun. 6, 1–8 (2015). Yang2010aB. J. Yang andY. B. KimTopological insulators and metal-insulator transition in the pyrochlore iridates,Phys. Rev. B 82(8), 085111 (2010). Zhang2017H. Zhang,K. Haule,andD. VanderbiltMetal-Insulator Transition and Topological Properties of Pyrochlore Iridates,Phys. Rev. Lett. 118(2), 026404 (2017). Wang2015gQ. Wang,Y. Shen,B. Pan,X. Zhang, K. Ikeuchi,K. Iida,A. D. Christianson, H. C. Walker,D. T. Adroja,M. Abdel-Hafiez, X. Chen,D. A. Chareev,A. N. Vasiliev,and J. ZhaoMagnetic ground state of FeSe,Nat. Commun. 7, 1–15 (2015). Xu2016G. Xu,B. Lian,P. Tang,X. L. Qi, andS. C. ZhangTopological Superconductivity on the Surface of Fe-Based Superconductors,Phys. Rev. Lett. 117(4), 1–5 (2016). Khouri2016T. Khouri,U. Zeitler,C. Reichl, W. Wegscheider,N. E. Hussey,S. Wiedmann,andJ. C. MaanLinear Magnetoresistance in a Quasifree Two-Dimensional Electron Gas in an Ultrahigh Mobility GaAs Quantum Well,Phys. Rev. Lett. 117(25), 256601 (2016). Kargarian2016M. Kargarian,M. 
Randeria,andY. M. LuAre the surface Fermi arcs in Dirac semimetals topologically protected?,Proc. Natl. Acad. Sci. U. S. A. 113(31), 8648–52 (2016). Akrap2016A. Akrap,M. Hakl,S. Tchoumakov, I. Crassee,J. Kuba,M. O. Goerbig, C. C. Homes,O. Caha,J. Novak, F. Teppe,W. Desrat,S. Koohpayeh, L. Wu,N. P. Armitage,A. Nateprov, E. Arushanov,Q. D. Gibson,R. J. Cava, D. Van Der Marel,B. A. Piot,C. Faugeras, G. Martinez,M. Potemski,andM. OrlitaMagneto-Optical Signature of Massless Kane Electrons in Cd3As2,Phys. Rev. Lett. 117(13), 3–8 (2016). Shinaoka2015H. Shinaoka,S. Hoshino,M. Troyer,and P. WernerPhase Diagram of Pyrochlore Iridates: All-in-All-out Magnetic Ordering and Non-Fermi-Liquid Properties.,Phys. Rev. Lett. 115(15), 156401 (2015).
Department of Physics, University of California, Davis, Davis, CA 95616, USA
Nuclear and Chemical Sciences Division, Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
Department of Physics, University of California, Davis, Davis, CA 95616, USA

We explore polarized heavy quarkonium production using the color evaporation model at leading order. We present the polarized-to-total yield ratio as a function of center-of-mass energy and rapidity in p+p collisions. At energies far above the Q Q production threshold, we find charmonium and bottomonium production to be longitudinally polarized (J_z = 0). The quarkonium states are also longitudinally polarized at central rapidity, becoming transversely polarized (J_z = ± 1) at the most forward rapidities.

14.40.Pq

Polarized Heavy Quarkonium Production in the Color Evaporation Model

Ramona Vogt

December 30, 2023
====================================================================

§ INTRODUCTION

Even more than 40 years after the discovery of the J/ψ, the production mechanism of quarkonium is still not well understood. Most recent studies of quarkonium production employ nonrelativistic QCD (NRQCD) <cit.>, which is based on an expansion of the cross section in the strong coupling constant and the Q Q velocity <cit.>. The cross section is factorized into hard and soft contributions and divided into different color and spin states. Each color state carries a weight, the long-distance matrix elements (LDMEs), which are typically adjusted to the data above some minimum transverse momentum, p_T. The NRQCD cross section has been calculated up to next-to-leading order (NLO). The LDMEs, conjectured to be universal, fail to describe both the yields and the polarization simultaneously for p_T cuts less than twice the mass of the quarkonium state. The polarization is sensitive to the p_T cut: the cut p_T > 10 GeV was chosen to describe both the yield and polarization in Ref.
<cit.> while p_T > 3m was chosen for the excited states ψ(2S) and Υ(3S) in Ref. <cit.> to fit the polarization. The universality of the LDMEs can be tested by using those obtained at high p_T to calculate the p_T-integrated cross section. In Ref. <cit.>, the p_T-integrated NRQCD cross section is calculated with LDMEs obtained with p_T cuts in the range 3 < p_T < 10 GeV. The resulting midrapidity cross sections, dσ/dy|_y=0, systematically overshoot the J/ψ data. The lowest p_T cut is most compatible with dσ/dy|_y=0, while calculations based on higher p_T cuts can be up to an order of magnitude away from the data <cit.>. A more recent analysis has shown that the η_c p_T distributions calculated with LDMEs obtained from J/ψ yields using heavy quark spin symmetry <cit.> overshoot the high-p_T LHCb η_c results <cit.>.

The Color Evaporation Model (CEM) <cit.>, which considers all Q Q (Q = c, b) production regardless of the quarks' color, spin, and momentum, is able to predict both the total yields and the rapidity distributions with only a single normalization parameter <cit.>. The CEM has so far only been used to predict spin-averaged quarkonium production: the polarization has not been considered before. This paper presents a leading order (LO) calculation of quarkonium polarization in the CEM, a p_T-integrated result. Currently, there are no exclusive NLO polarized Q Q calculations on which to impose the HH (H = D, B) mass threshold.
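The CEM mass windows invoked here are quite narrow, which matters for the polarization results that follow. A minimal numerical sketch, using the quark and meson masses quoted later in the text (m_c = 1.27 GeV, m_D^0 = 1.86 GeV, m_b = 4.75 GeV, m_B^0 = 5.28 GeV):

```python
# CEM invariant-mass-squared windows, 4 m_Q^2 <= shat <= 4 m_H^2.
# Masses (GeV) are those quoted later in the text.
m_c, m_D0 = 1.27, 1.86
m_b, m_B0 = 4.75, 5.28

def cem_window(m_Q, m_H):
    """Return the (lower, upper) limits in shat and the upper limit in units of threshold."""
    lo, hi = 4.0 * m_Q**2, 4.0 * m_H**2
    return lo, hi, hi / lo

charm = cem_window(m_c, m_D0)   # shat reaches ~2.14x the c cbar threshold
bottom = cem_window(m_b, m_B0)  # shat reaches only ~1.24x the b bbar threshold
```

Because ŝ never exceeds about twice the threshold for charmonium, and only about 1.24 times the threshold for bottomonium, quarkonium production in the CEM always samples the near-threshold region of the partonic cross sections, in contrast to open heavy flavor, where the ŝ integration extends up to s.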
Our calculation is a first step toward a full CEM polarization result: it provides a general idea of whether there is any appreciable LO polarization that might carry through to the next order, even though the kinematics are different. We will begin to address the p_T dependence in a further publication.

In the CEM, all quarkonium states are treated the same as QQ below the HH threshold, where the invariant mass of the heavy quark pair is restricted to be less than twice the mass of the lowest-mass meson that can be formed with the heavy quark as a constituent. The distributions for all quarkonium family members are assumed to be identical. (See Ref. <cit.> for a new treatment of the CEM p_T distributions based on mass-dependent thresholds.) In a p+p collision, the production cross section for a quarkonium state is given by

σ = F_Q ∑_i,j ∫_4m_Q^2^4m_H^2 dŝ ∫ dx_1 dx_2 f_i/p(x_1,μ^2) f_j/p(x_2,μ^2) σ̂_ij(ŝ) δ(ŝ - x_1 x_2 s),

where i and j are q, q and g such that ij = qq or gg. The square of the heavy quark pair invariant mass is ŝ, while the square of the center-of-mass energy in the p+p collision is s. Here f_i/p(x,μ^2) is the parton distribution function (PDF) of the proton as a function of the fraction of momentum x carried by the colliding parton at factorization scale μ, and σ̂_ij is the parton-level cross section. Finally, F_Q is a universal factor for the quarkonium state, independent of the projectile, target, and energy. At leading order, the rapidity distribution, dσ/dy, is

dσ/dy = F_Q ∫_4m_Q^2^4m_H^2 (dŝ/s) { f_g/p(x_1,μ^2) f_g/p(x_2,μ^2) σ̂_gg(ŝ) + ∑_q=u,d,s [f_q/p(x_1,μ^2) f_q/p(x_2,μ^2) + f_q/p(x_1,μ^2) f_q/p(x_2,μ^2)] σ̂_qq(ŝ) },

where x_1,2 = √(ŝ/s) exp(± y). We take the square of the factorization and renormalization scales to be μ^2 = ŝ.

§ POLARIZED QQ PRODUCTION AT THE PARTON LEVEL

At the parton level, the leading order calculation forces the final-state Q Q pair to be produced back-to-back with zero total transverse momentum.
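The rapidity distribution above can be sketched numerically. The block below is only an illustration, not the calculation used in the paper: it keeps the gg channel alone, sets F_Q = 1, freezes α_s = 0.3, and replaces the CTEQ6L1 gluon density with a toy form (1-x)^5/x; for σ̂_gg it uses the standard LO gg → Q Q total cross section (written out in the next section).

```python
import math

def sigma_gg_tot(shat, M, alpha_s=0.3):
    """Standard LO gg -> Q Qbar total partonic cross section (fixed alpha_s for illustration)."""
    chi = math.sqrt(1.0 - 4.0 * M * M / shat)
    r = M * M / shat
    L = math.log((1.0 + chi) / (1.0 - chi))
    return (math.pi * alpha_s**2 / (3.0 * shat)) * (
        -(7.0 + 31.0 * r) * chi / 4.0 + (1.0 + 4.0 * r + r * r) * L)

def toy_gluon_pdf(x):
    """Illustrative stand-in for f_g/p(x, mu^2); NOT a fitted PDF."""
    return (1.0 - x) ** 5 / x

def dsigma_dy(y, sqrt_s, m_Q=1.27, m_H=1.86, n=400):
    """dsigma/dy for the CEM mass window, gg channel only, F_Q = 1 (midpoint rule in shat)."""
    s = sqrt_s ** 2
    lo, hi = 4.0 * m_Q**2, 4.0 * m_H**2
    ds = (hi - lo) / n
    total = 0.0
    for i in range(n):
        shat = lo + (i + 0.5) * ds
        tau = math.sqrt(shat / s)
        x1, x2 = tau * math.exp(y), tau * math.exp(-y)  # x_{1,2} = sqrt(shat/s) e^{+-y}
        if x1 >= 1.0 or x2 >= 1.0:
            continue  # outside the kinematic limit |y| <= ln(sqrt(s/shat))
        total += toy_gluon_pdf(x1) * toy_gluon_pdf(x2) * sigma_gg_tot(shat, m_Q) * ds / s
    return total
```

The symmetry dσ/dy(y) = dσ/dy(-y) in p+p collisions and the peak at midrapidity follow immediately from the x_1 ↔ x_2 exchange in the integrand.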
We define the polarization of the Q Q pair to be either transversely polarized (J_z = ± 1) or longitudinally polarized (J_z = 0) in the helicity frame, where the z axis points from Q to Q along the beam axis, as shown in Fig. <ref>. Note that we are not distinguishing the S=1 triplet state from the S=0 singlet state. This will be addressed in a future publication, together with the separation into orbital angular momentum, L, states.

At leading order, there are four Feynman diagrams to consider, one for q q annihilation and three for gg fusion. Each diagram includes a color factor C and a scattering amplitude 𝒜. The generic matrix element for each process is <cit.>

ℳ_qq = C_qq 𝒜_qq ,
ℳ_gg = C_gg,ŝ 𝒜_gg,ŝ + C_gg,t̂ 𝒜_gg,t̂ + C_gg,û 𝒜_gg,û .

As previously mentioned, there is only one diagram for q q → Q Q, thus a single amplitude, 𝒜_qq. However, there are three diagrams for gg → Q Q at leading order, the ŝ, t̂ and û channels. In terms of the Dirac spinors u and v, the individual amplitudes are

𝒜_qq = (g_s^2/ŝ) [u(p^') γ_μ v(p)][v(k) γ^μ u(k^')],

𝒜_gg,ŝ = -(g_s^2/ŝ) { -2k^'·ϵ(k) [u(p^') ϵ/(k^') v(p)] + 2 k·ϵ(k^') [u(p^') ϵ/(k) v(p)] + ϵ(k)·ϵ(k^') [u(p^') (k/^' - k/) v(p)] },

𝒜_gg,t̂ = -(g_s^2/(t̂-M^2)) u(p^') ϵ/(k^') (k/ - p/ + M) ϵ/(k) v(p),

𝒜_gg,û = -(g_s^2/(û-M^2)) u(p^') ϵ/(k) (k/^' - p/ + M) ϵ/(k^') v(p).

Here g_s is the gauge coupling, M is the mass of the heavy quark (m_c for charm and m_b for bottom), ϵ represents the gluon polarization vectors, γ^μ are the gamma matrices, k^' (k) is the momentum of the initial-state light quark (antiquark) or gluon, and p^' (p) is the momentum of the final-state heavy quark (antiquark).
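Each amplitude above carries a color factor C that depends only on the SU(3) group structure. As a cross-check, the squared color factors and the t̂-û interference quoted below can be reproduced numerically from the Gell-Mann matrices; the ŝ-channel interferences (±6) also involve the relative phase conventions of the amplitudes and are not reproduced in this sketch.

```python
import numpy as np

# Gell-Mann matrices; the SU(3) generators are T^a = lambda^a / 2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)
T = lam / 2  # Tr(T^a T^b) = delta^{ab}/2

# structure constants from [T^a, T^b] = i f^{abc} T^c
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = T[a] @ T[b] - T[b] @ T[a]
        for c in range(8):
            f[a, b, c] = (-2j * np.trace(comm @ T[c])).real

# |C_qq|^2: color structure T^a_{ij} T^a_{kl}, squared and summed over colors
C_qq2 = sum((np.trace(T[a] @ T[b]) * np.trace(T[b] @ T[a])).real
            for a in range(8) for b in range(8))

# |C_gg,s|^2: color structure f^{abc} T^c_{ij}
C_s2 = sum(f[a, b, c] * f[a, b, d] * np.trace(T[c] @ T[d]).real
           for a in range(8) for b in range(8)
           for c in range(8) for d in range(8))

# |C_gg,t|^2 and the t-u interference: (T^a T^b)_{ij} against (T^b T^a)_{ij}
C_t2 = sum(np.trace(T[a] @ T[b] @ T[b] @ T[a]).real
           for a in range(8) for b in range(8))
C_tu = sum(np.trace(T[a] @ T[b] @ T[a] @ T[b]).real
           for a in range(8) for b in range(8))
```

The computed values reproduce |C_qq|^2 = 2, |C_gg,ŝ|^2 = 12, |C_gg,t̂|^2 = |C_gg,û|^2 = 16/3, and C_gg,t̂^* C_gg,û = -2/3 quoted in the next paragraph.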
The amplitudes are separated according to the J_z of the final state, J_z = 0 or J_z = ± 1. The total amplitudes are calculated for each final-state J_z while averaging over the polarization of the initial gluons or the spin of the light quarks, depending on the process, in the spirit of the CEM. The squared matrix elements, |ℳ|^2, are calculated for each J_z. The color factors, C, are calculated from the SU(3) color algebra and are independent of the polarization <cit.>. They are

|C_qq|^2 = 2, |C_gg,ŝ|^2 = 12, |C_gg,t̂|^2 = 16/3, |C_gg,û|^2 = 16/3,
C_gg,ŝ^*C_gg,t̂ = +6, C_gg,ŝ^*C_gg,û = -6, C_gg,t̂^*C_gg,û = -2/3.

The total squared amplitudes for a given J_z state,

|ℳ_qq^J_z|^2 = |C_qq|^2 |𝒜_qq|^2,
|ℳ_gg^J_z|^2 = |C_gg,ŝ|^2 |𝒜_gg,ŝ|^2 + |C_gg,t̂|^2 |𝒜_gg,t̂|^2 + |C_gg,û|^2 |𝒜_gg,û|^2 + 2 C_gg,ŝ^*C_gg,t̂ 𝒜_gg,ŝ^*𝒜_gg,t̂ + 2 C_gg,ŝ^*C_gg,û 𝒜_gg,ŝ^*𝒜_gg,û + 2 C_gg,t̂^*C_gg,û 𝒜_gg,t̂^*𝒜_gg,û ,

are then used to obtain the partonic cross sections by integrating over solid angle:

σ̂_ij^J_z = ∫ dΩ (1/8π)^2 (|ℳ_ij^J_z|^2/ŝ) √(1-4M^2/ŝ).

The individual partonic cross sections for the longitudinal and transverse polarizations are

σ̂_qq^J_z = 0 (ŝ) = (16πα_s^2/27ŝ^2) M^2 χ ,
σ̂_qq^J_z = ± 1 (ŝ) = (4πα_s^2/27ŝ^2) ŝ χ ,
σ̂_gg^J_z = 0 (ŝ) = (πα_s^2/12ŝ) [ (4 - 31M^2/ŝ + 33M^2/(ŝ-4M^2)) χ + (4M^4/ŝ^2 + 31M^2/2ŝ - 33M^2/2(ŝ-4M^2)) ln((1+χ)/(1-χ)) ],
σ̂_gg^J_z = ± 1 (ŝ) = (πα_s^2/24ŝ) [ -11(1 + 3M^2/(ŝ-4M^2)) χ + (4 + M^2/2ŝ + 33M^2/2(ŝ-4M^2)) ln((1+χ)/(1-χ)) ],

where χ = √(1-4M^2/ŝ). The sum of these results, σ̂_ij^J_z = 0 + σ̂_ij^J_z = +1 + σ̂_ij^J_z = -1, is equal to the total partonic cross section <cit.>:

σ̂_qq^tot. (ŝ) = (8πα_s^2/27ŝ^2) (ŝ + 2M^2) χ ,
σ̂_gg^tot. (ŝ) = (πα_s^2/3ŝ) [ -(7 + 31M^2/ŝ) χ/4 + (1 + 4M^2/ŝ + M^4/ŝ^2) ln((1+χ)/(1-χ)) ].

Having computed the polarized QQ production cross section at the parton level, we then convolute the partonic cross sections with the parton distribution functions (PDFs) to obtain the hadron-level cross section σ as a function of √(s) using Eq. (<ref>), and the rapidity distribution, dσ/dy, using Eq. (<ref>).
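The polarized partonic cross sections above can be transcribed directly and checked against the totals. The sketch below uses an illustrative fixed α_s = 0.3 (the paper runs α_s at one loop) and verifies numerically that σ̂^J_z=0 + 2σ̂^J_z=±1 = σ̂^tot. channel by channel; it also exposes the closed-form q q longitudinal fraction, 2M^2/(ŝ + 2M^2) ≤ 1/3.

```python
import math

def sigma_qq(shat, M, a=0.3):
    """q qbar -> Q Qbar: (J_z = 0, each J_z = +-1, total)."""
    chi = math.sqrt(1.0 - 4.0 * M * M / shat)
    pre = math.pi * a * a / (27.0 * shat * shat)
    jz0 = 16.0 * pre * M * M * chi
    jz1 = 4.0 * pre * shat * chi
    tot = 8.0 * pre * (shat + 2.0 * M * M) * chi
    return jz0, jz1, tot

def sigma_gg(shat, M, a=0.3):
    """gg -> Q Qbar: (J_z = 0, each J_z = +-1, total)."""
    chi = math.sqrt(1.0 - 4.0 * M * M / shat)
    r = M * M / shat                   # M^2/shat
    d = M * M / (shat - 4.0 * M * M)   # M^2/(shat - 4M^2)
    L = math.log((1.0 + chi) / (1.0 - chi))
    jz0 = (math.pi * a * a / (12.0 * shat)) * (
        (4.0 - 31.0 * r + 33.0 * d) * chi
        + (4.0 * r * r + 15.5 * r - 16.5 * d) * L)
    jz1 = (math.pi * a * a / (24.0 * shat)) * (
        -11.0 * (1.0 + 3.0 * d) * chi
        + (4.0 + 0.5 * r + 16.5 * d) * L)
    tot = (math.pi * a * a / (3.0 * shat)) * (
        -(7.0 + 31.0 * r) * chi / 4.0 + (1.0 + 4.0 * r + r * r) * L)
    return jz0, jz1, tot
```

Evaluating sigma_gg with M = m_c = 1.27 GeV over the charmonium window gives a longitudinal fraction of roughly 0.7-0.9, while sigma_qq is always at least two-thirds transverse; this competition drives the energy and rapidity dependence discussed in the Results section.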
We employ the CTEQ6L1 <cit.> PDFs in this calculation, and the running coupling constant α_s = g_s^2/(4π) is calculated at the one-loop level appropriate for these PDFs. We assume that the polarization is unchanged by the transition from the parton level to the hadron level, consistent with the CEM assumption that the linear momentum is unchanged by hadronization. This is similar to the assumption made in NRQCD that once a c c is produced in a given spin state, it retains that spin state when it becomes a J/ψ.

§ RESULTS

Since this is an LO calculation, we can only calculate the CEM polarization as a function of √(s) and y, but not of p_T, which will require going to NLO. However, the charm rapidity distribution at LO is similar to that at NLO <cit.>. The same is true for J/ψ production in the CEM. The only difference would be a rescaling of the parameter F_Q based on the NLO/LO ratio using the NLO scale determined in Nelson et al. <cit.>. The CEM results are in rather good agreement with the data from p+p collisions <cit.>.

We present the results as ratios of the cross section with J_z = 0 to the total cross section. Taking the ratio has the benefit of being independent of F_Q. In the remainder of this section, we discuss the energy dependence of the total cross section ratios for both charmonium and bottomonium (in the general sense of lying in the mass range below the H H threshold) as well as for c c and b b, integrated over all invariant mass. We show the ratios for charmonium and bottomonium production as a function of rapidity for selected energies. Finally, we discuss the sensitivity of our results to the choice of proton parton densities.

§.§ Energy dependence of the longitudinal polarization fraction

In this section, we show the fraction σ^J_z = 0/σ^tot. as a function of center-of-mass energy in p+p collisions in Figs. <ref> and <ref>. In the case of quarkonium, the integration in Eq.
(<ref>) is from twice the quark mass to twice the mass of the lowest-lying open heavy flavor hadron. For open heavy flavor, the upper limit of the integral is extended to √(s).

§.§.§ Charmonium and c c

In Fig. <ref> the charmonium production cross section is calculated by integrating the invariant mass of the cc pair from 2m_c (m_c = 1.27 GeV) to 2m_D^0 (m_D^0 = 1.86 GeV) in Eq. (<ref>). We see that ψ production (solid curve in Fig. <ref>) is more than 50% longitudinally polarized for √(s) > 10 GeV. At √(s) > 100 GeV, the production ratio saturates at a longitudinal polarization fraction of 0.80. The behavior of the total cc production fraction (dashed curve in Fig. <ref>) is quite different. Instead of saturating like the charmonium ratio, it reaches a peak of 0.68 at √(s) = 84 GeV and then begins decreasing. This is a consequence of approximate helicity conservation at the parton level for M/√(ŝ) ≪ 1. The narrow integration range of charmonium production ensures that charmonium never enters this region, keeping it longitudinally polarized.

§.§.§ Bottomonium and b b

The results for bottomonium and b b production are shown in Fig.
<ref>. Here, the integral over the pair invariant mass runs from 2m_b (m_b = 4.75 GeV) to 2m_B^0 (m_B^0 = 5.28 GeV). For the more massive bottom quarks, the pairs start out transversely polarized for √(s) < 40 GeV. Bottomonium production then becomes dominated by longitudinal polarization, with the ratio saturating at 0.90 for √(s) of ∼ 1 TeV, higher than the charmonium ratio at the same energy. The smaller longitudinal fraction at lower √(s) for bottomonium is due to q q dominance of the total cross section at these energies. As the gg contribution rises, the longitudinal fraction increases. We note that the point at which the bottomonium fraction is ∼ 0.50, √(s) = 46.3 GeV, is close to the lowest energy at which Υ polarization has been measured, √(s_NN) = 38.8 GeV. The E866/NuSea Collaboration measured the polarization of bottomonium production in p+Cu and found no polarization at low p_T in the Collins-Soper frame <cit.>. This result is compatible with ours because, at leading order, the polarization axes of the helicity frame, the Collins-Soper frame, and the Gottfried-Jackson frame are coincident <cit.>. Likewise, the turnover in the c c polarization is also observed for b b, but at a much higher energy, √(s) = 550 GeV. Although the energy scale is higher, the peak in the b b polarization ratio, 0.69, is almost the same as that for c c.

§.§ Rapidity dependence of the longitudinal polarization fraction

We now turn to the rapidity dependence of our result, shown in Figs.
<ref> and <ref>. Four representative energies are chosen to illustrate. The lowest values, √(s) = 20 and 38.8 GeV, were the highest available fixed-target energies at the CERN SPS for ion beams and the FNAL Tevatron for proton beams. The higher energies, √(s) = 0.2 and 7 TeV, are energies available at the BNL RHIC and CERN LHC facilities. The results are presented for positive rapidity only because the rapidity distributions are symmetric around y=0 in p+p collisions.

§.§.§ Charmonium

The rapidity dependence for the charmonium longitudinal polarization fraction is shown in Fig. <ref>. The results are given up to the kinematic limits of production. The longitudinal fraction is greatest at y = 0 and decreases as |y| increases. For the highest energies, where the longitudinal polarization has saturated in Fig. <ref>, the ratio is flat over a wide range of rapidity. The ratio remains greater than 0.50 as long as the gg contribution, with a significant J_z = 0 polarization, dominates production. As the phase space for charmonium production is approached, the q q channel, predominantly transversely polarized, begins to dominate, causing the ratio to drop to a minimum of ∼ 0.30.

§.§.§ Bottomonium

The behavior of the bottomonium ratio as a function of rapidity, shown in Fig.
<ref>, is similar to that of charmonium. The higher mass scale, however, reduces the kinematic range of the calculation. It also results in near transverse (J_z = ± 1) polarization of bottomonium at fixed-target energies. The calculation at √(s) = 38.8 GeV shows that, at y=0, the bottomonium ratio is consistent with no polarization, as measured by E866/NuSea <cit.>. At √(s) = 20 GeV, not far from production threshold, bottomonium is transversely polarized in the narrow rapidity range of production.

§.§ Sensitivity to the proton PDFs

We have tested the sensitivity of our results to the choice of PDFs used in the calculation. Since not many new LO proton PDFs are currently being made available, we compare our CTEQ6L1 results with calculations using the older GRV98 LO <cit.> set. We can expect the ratio to be the most sensitive to the choice of proton PDF because the PDFs can change the balance of gg to q q production, especially at lower √(s) where the x values probed by the calculations are large, x ∼ 0.1. In particular, bottomonium production at √(s) = 20 GeV is most likely to be sensitive to the choice of PDF since the q q contribution is large at this energy. The results should, on the other hand, be relatively insensitive to the chosen mass and scale values since these do not strongly affect the relative contributions of gg and q q. This is indeed the case: for bottomonium production at √(s) = 20 GeV, close to the production threshold, the largest difference in the longitudinal ratio for the two PDF sets is 36% at y=0. The sensitivity arises because the gg contribution is predominantly produced with J_z = 0 while the q q contribution is primarily produced with J_z = ± 1. By √(s) = 38.8 GeV, the difference in the results is reduced to 20%. At collider energies, the difference is negligible. Since the gg contribution is dominant for charmonium already at √(s) = 20 GeV, the charmonium production ratio is essentially independent of the choice of proton PDF. Thus, away from
production threshold, the results are robust with respect to the choice of PDF.

§ CONCLUSION

We have presented the energy and rapidity dependence of the polarization of heavy quarkonium production in p+p collisions in the Color Evaporation Model. We find the quarkonium polarization to be longitudinal at most energies and around central rapidity, while the polarization becomes transverse as the kinematic limits of the calculation, where q q production is dominant, are approached. We note that the partonic cross sections, sorted by J_z in this calculation, are still mixtures of total angular momentum J and orbital angular momentum L states, so there is no immediate connection between these ratios and the λ parameter of the data. In future work, we will extract the S=1, L=0 contribution from the partonic cross sections and separate it into the three distinct angular momentum states of J=1 in order to give predictions for the polarization parameter λ_θ <cit.>. Because we have performed a leading order calculation, we cannot yet speak to the p_T dependence of the quarkonium polarization. We will address the p_T dependence in a separate publication.

§ ACKNOWLEDGEMENTS

We thank F. Yuan for valuable discussions throughout this work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics (Nuclear Theory) under contract number DE-SC-0004014.
Live Visualization of GUI Application Code Coverage with GUITracer
Arthur-Jozsef Molnar
Faculty of Mathematics and Computer Science
University of Babes-Bolyai
Cluj-Napoca, Romania
arthur@cs.ubbcluj.ro
December 30, 2023
==============================================================================================================================================

The present paper introduces the initial implementation of a software exploration tool targeting graphical user interface (GUI) driven applications. GUITracer facilitates the comprehension of GUI-driven applications by starting from their most conspicuous artefact - the user interface itself. The current implementation of the tool can be used with any Java-based target application that employs one of the AWT, Swing or SWT toolkits. The tool transparently instruments the target application and provides real time information about the GUI events fired. For each event, call relations within the application are displayed at method, class or package level, together with detailed coverage information. The tool facilitates feature location, program comprehension as well as GUI test creation by revealing the link between the application's GUI and its underlying code. As such, GUITracer is intended for software practitioners developing or maintaining GUI-driven applications. We believe our tool to be especially useful for entry-level practitioners as well as students seeking to understand complex GUI-driven software systems. The present paper details the rationale as well as the technical implementation of the tool. As a proof-of-concept implementation, we also discuss further development that can lead to our tool's integration into a software development workflow.

§ INTRODUCTION

Software tools can help practitioners in virtually all activities undertaken during the life of software, starting from requirements analysis to development, program comprehension as well as test case design and execution.
When studying how complex IDE's such as Eclipse evolve <cit.>, we observe that newer versions ship with increasingly complex tools for aiding professionals build higher quality software faster. Modern IDE's feature tools for working with artefacts such as UML, code generation and navigation as well as supporting many common development tasks. However, we find that in most cases these tools are centred on the executable representation of the program, namely its source code and associated artefacts, limiting provided functionalities to those directly related to source code. Our goal is to leverage the latest results from research and industry in order to build new and useful tools for practitioners working on GUI-driven software. Our choice of this field is motivated by the fact that the GUI is the most pervasive paradigm for human-computer interaction, employed by many desktop and mobile applications. In addition, according to <cit.>, in many cases GUI related code takes up to 50% of application code, making it an even more compelling target. The role of tooling is already established in the literature. The authors of <cit.> conducted a survey covering over 1400 software professionals who were inquired about their strategies, tools and the problems they encountered when comprehending software. The most significant findings show that most developers employ a white-box strategy for program comprehension. They interact with the application GUI to locate corresponding event handlers in code and in many cases use the IDE in combination with more specialized tools. Of particular note is the finding that "industry developers do not use dedicated program comprehension tools developed by the research community" <cit.>. The purpose of our work is to provide innovative open-source tools that practitioners have a real need for and that can be used both within the academia as well as industry. 
After our previously developed JETracer framework <cit.>, the GUITracer tool serves as the next natural step of this strategy. This paper is structured as follows: the next section introduces the theoretical framework GUITracer is based on, while Section 3 discusses the implementation and features of the tool. The following sections discuss related work and present our conclusions as well as future work.

§ PREREQUISITES

We start from the GUI's characterization as a "hierarchical, graphical front-end to a software system that accepts as input user-generated and system-generated events" <cit.>. As provided by Memon in <cit.>, interaction with a GUI application can be modelled as a sequence of events. Industry studies such as <cit.> show that practitioners approach program comprehension and feature localization tasks at the GUI level. Thus, we consider it beneficial to develop software tools that support this approach. Our goal for GUITracer is to provide a navigable relation between the target application's GUI and its underlying code. We achieve this by providing information regarding how the GUI and the underlying code are related as well as the source code that actually runs once a GUI event is fired. The following sections introduce some theoretical notions that are used within our tool as well as our previously developed JETracer framework, on which our tool is based.

§.§ Code Coverage for GUI Events

While the literature abounds with code coverage related techniques and tools, we find far less work focused on metrics tailored for GUI-driven applications. In this section we propose several metrics that measure the relation between GUI events and the application source code that handles them. Our approach combines established coverage criteria with Memon's event-sequence coverage defined in <cit.>. To this we add knowledge gained from call graphs built using static analysis, which improve the code coverage picture.
The first step in our effort is to define a GUI event's call graph:

Event call graph. Given a GUI event e, we define its event call graph as a subgraph of the application's statically computed call graph that consists of event e's application-level event handlers and all application methods reachable from them.

The event call graph provides information regarding which methods might be called when handling the event, as well as call relations between them. In practice, this is computed using static analysis once the event's handlers are known. However, GUI events are not fired in isolation, but are part of an event sequence <cit.>. Each fired event contributes to improved code coverage. We take this into account when we define an event's call graph coverage:

Event call graph coverage. Given a GUI event sequence S = {e_1, e_2, ..., e_n}, we define the call graph coverage of event e_i, with 0<i≤ n, as the ratio between the number of source code statements covered by S to the total number of statements from the methods in e_i's event call subgraph.

For each event, the call graph coverage tells us how many code statements were run out of the maximum possible as determined via static analysis. As defined above, an event's code coverage can be improved if subsequent events run code from its event call graph. However, all computed call graphs are only approximations. When computed dynamically, that is by running the target application, they might miss methods that can be called via different code paths, and are thus incomplete. When computed statically, using tools such as Soot, they might include additional edges that a more precise analysis might determine to be superfluous. This is a well-known problem in pointer analysis <cit.>, one that is not expected to be solved for complex languages such as Java. For GUITracer, our approach was to use one of the algorithms within the Soot framework that computes an accurate call graph statically <cit.>.
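The two definitions above lend themselves to a direct implementation. The following sketch is illustrative only and not taken from GUITracer's code base: the adjacency-dict call graph, the method names and the per-method statement counts are all hypothetical stand-ins for what Soot and JaCoCo would provide in practice.

```python
from collections import deque

def event_call_graph(call_graph, handlers):
    """Event call graph: the event's handlers plus all methods
    reachable from them in the statically computed call graph."""
    seen, queue = set(handlers), deque(handlers)
    while queue:
        method = queue.popleft()
        for callee in call_graph.get(method, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def event_coverage(call_graph, handlers, stmt_count, covered_count):
    """Event call graph coverage: covered statements over total
    statements of the methods in the event call subgraph."""
    subgraph = event_call_graph(call_graph, handlers)
    total = sum(stmt_count[m] for m in subgraph)
    covered = sum(covered_count.get(m, 0) for m in subgraph)
    return covered / total if total else 0.0
```

Note that methods unreachable from the handlers (here, anything outside the BFS closure) do not enter the denominator, which is what makes this metric event-specific rather than application-wide.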
This means that while all methods that may be called are included in the call graph, it is possible that it also contains superfluous entries. In order to build event call graphs, we required information about the event handlers installed by the target application. This was achieved using the JETracer framework that is detailed below.

§.§ The JETracer Framework

JETracer is our open-source framework for real-time tracing of GUI events fired within Java applications built using one of the AWT, Swing or SWT toolkits. Available at <cit.>, the tool consists of two modules: an agent and a host <cit.>. The agent module is GUI toolkit specific and is deployed within the target application, which it instruments during start up. The process is completely transparent to the target application, and as shown within the evaluation section in <cit.>, it does not impact its perceived performance. The JETracer agent gathers GUI event information and transmits it to the host module via network socket. The host module maintains the connection with the agent, and once an event is received it notifies any attached listeners. Adding support for new GUI toolkits or events (e.g. touch interactions) is possible by implementing a new agent component <cit.>.

GUITracer is built on top of our JETracer framework, which it uses to gather event information from the target application. Figure <ref> shows the architecture of our tool, including the JETracer agent and host. Code coverage capabilities are provided via the open-source JaCoCo[JaCoCo - http://www.eclemma.org/jacoco/] library that provides coverage information on-demand, during program execution. We harnessed this feature so that every time JETracer sends event information it also provides updated code coverage via JaCoCo.
This is done once for each GUI event fired, when handling routines have completed and control is returned to the GUI toolkit.

§ THE GUITRACER TOOL

The GUITracer <cit.> tool is open-sourced under the Eclipse Public License and is free to download from our website <cit.>. Fully implemented in Java, the application's only requirement is a Java 6 compatible platform. As a proof of concept implementation, GUITracer comes in the form of a standalone desktop application. A brief video that showcases the tool's main features is available at <cit.>. Setting up an application for tracing by our tool is done by providing the location of the following artefacts using command line parameters:

* Source code - This folder is required in order to enable illustrating source code coverage.
* Binaries - The location of the binaries is required for calculating the application call graph and pre-run instrumentation.
* Libraries - Libraries must be provided separately in order to exclude them from the displayed call graphs.
* Main class - Required to start the target application.
* Call graph - An optional parameter. As call graph calculation is computationally expensive, storing the call graph between runs decreases the time required to start GUITracer. Our website <cit.> details how to save generated call graphs in order to be reused on subsequent runs.

Figure <ref> illustrates GUITracer's architecture. The tool employs our JETracer <cit.> framework for event and coverage tracing information. Once a GUI event is fired, the JETracer agent relays the information to the framework's host component, which is integrated with GUITracer's designated handler.

Figure <ref> illustrates a screenshot of the application. The target application is version 0.7.1 of the open-source FreeMind[FreeMind home - http://freemind.sourceforge.net] mind mapping software. The tool's UI consists of three main panes - the Event trace, the Call graph and the Source code. We describe each of them below.
§.§ Event Trace Pane

GUI events fired within the target application are displayed within this pane, on the left hand side of Figure <ref>, which currently shows three events. The topmost event is shown with a white background as it is currently selected. The event is an ActionEvent fired by the toolbar button having the black outline. For each event, the tool currently displays the following information:

* Application window screenshot. Recorded at the time the event is fired, it allows visually identifying the event's originating widget using its black border outline. It also ensures that any visual styles active within the target application also appear in GUITracer.
* Code coverage information. This provides information regarding the event's code coverage. The colour coding is used consistently with most other coverage tools: green for covered and red for uncovered code, respectively. In addition, as many events might run the same code over and over again, we use a lighter shade of green to illustrate code that was first run during the event's handling.

The application line coverage section provides information regarding how many code lines have coverage up until and including the event. This is the information most other tools also display, with the difference that in the case of GUITracer, it is gathered after each GUI event is handled. The event line coverage section is based on the event coverage concepts introduced within the previous section. It uses the statically computed call graph of the application and shows the event's call graph coverage. Its purpose is to facilitate feature localization as well as provide detail regarding the link between GUI events and the source code that handles them.

As an example, after the topmost event in the trace was handled, target application coverage was of 2082 lines out of a total of 7615. The event call graph comprises 404 code lines, with a coverage of 276 lines.
As shown by the light green shade, most statements handling this ActionEvent were not run before.

The type of GUI events displayed in the trace can be filtered. This functionality was added after observing that in many cases, GUI applications fire a large number of events which clutter the event trace. These include focus events fired when the target application loses/gains focus as well as mouse movement events. In addition to filtering these, users can choose to hide events that do not contribute to the target application's code coverage. This is useful to hide repeated events that always take the same code path. The topmost event in the trace is selected, as shown by the white background. Once this happens, the call graph pane becomes relevant.

§.§ Call Graph Pane

Once an event from the trace is selected, GUITracer calculates its event call graph and displays it in the top pane on the right hand side of Figure <ref>. Each call graph has exactly one start node, with one outgoing edge for each handler. The displayed call graph only contains application code; library as well as Java platform calls are not included, and neither are any callbacks from them. This is due to the difficulty of modelling library callbacks, which is an active topic of research <cit.>. The displayed call graph can be customized using the controls below the call graph panel. First of all, the graph can be displayed with method, class or package granularity. The most detailed call graphs are at method level, where each vertex represents one method. At class and package level, each vertex represents a class or package, respectively. Regardless of call graph granularity, code coverage is shown using colour coding consistent with the trace view. At class or package granularity, method calls are displayed as edges between the classes or packages they belong to. Changing the granularity changes both the level of detail as well as the complexity of the displayed graph.
While this is application specific, method call graphs can easily contain hundreds of vertices, while class and package call graphs usually contain no more than a few dozen. In addition, users can choose to display one collated call graph, which represents all event handlers in the same pane, or have a separate tab for each handler. These features are meant to facilitate exploration and feature location, as users can quickly retrieve information regarding the coverage of each source code entity. If a method or class-level call graph is displayed, nodes present a contextual menu allowing the element's source code to be shown in the corresponding panel.

§.§ Source Code Pane

This pane allows users to consult the target application's source code. Source files can be opened using the combo-box control or from the contextual menu of the call graph nodes. Source files are displayed with syntax as well as coverage highlighting. The employed colour scheme is consistent with the one previously described. The purpose of the source code pane is to illustrate that by following a top-down exploration strategy, users can start from the target application's GUI and reach the covered statements in its source code. Given the technical challenges of implementing the GUITracer tool, this proof of concept was implemented as a standalone application. However, as such tools are more useful when integrated within an IDE, the next version of the tool will be implemented as a plugin for a popular Java IDE, such as Eclipse or NetBeans. As IDE's have advanced source code editing components, the GUITracer plugin will employ them in order to showcase GUI event coverage, similarly to how most unit test plugins currently work.

§ RELATED WORK

An important body of work which GUITracer employs is the Soot framework <cit.>. First detailed in <cit.>, Soot provides static analysis functions for Java programs, among which several algorithms for obtaining the static call graph <cit.>.
Our tool uses the SPARK algorithm detailed in <cit.>, a context insensitive algorithm that provides an adequate speed to accuracy trade-off. One of the first tools to employ Soot was JAnalyzer <cit.>, which leveraged call graph information to provide a simple graphical representation of the call relations in the target application. Our tool builds on JAnalyzer by providing complete call subgraphs of the application code starting from the entry points into event handler code.

Our tool's code coverage functionalities are inspired by efforts such as the one detailed by Duck et al. in <cit.>, where a new approach for software reconnaissance based on differential code coverage is proposed. The approach is then investigated within an evaluation where users had to debug and change code in several complex GUI-driven applications. Our implementation complements the one detailed in <cit.> by providing context to coverage information in the form of the GUI event trace, which facilitates identifying the relation between the source code and the user interface controls. An effort more related to GUI-driven systems is detailed in <cit.>, where the authors describe a navigation mechanism that enables source code localization for GUI elements. A controlled user study is also detailed within <cit.>, showing important speed-ups in feature localization tasks.

One of the key challenges of implementing our tool was accurately capturing GUI event information. As this is a software tracing task, we studied previous efforts targeting Java, such as the JMonitor library developed by Karaorman and Freeman <cit.>. JMonitor provides event monitoring for Java by specifying event patterns and event monitors. Patterns are used to describe interesting events, and monitors act as handlers that are called once the events have taken place. The proposed library provides a generic implementation for lowest-level events such as setting the value of a class field or a method call.
Another notable example is JRapture <cit.>, a tool for capturing and replaying Java program executions by recording interactions between the program itself and the system, using accurately reproduced input sequences. Profiling can then be added to study the application during replay. JETracer differentiates itself from these tools by working on a higher abstraction level and being developed to record GUI events. This allows capturing additional information such as application screenshots as well as event listener information.

§ FUTURE WORK AND CONCLUSION

Our aim for this implementation was to provide the proof-of-concept for a tool that may find many uses during software development and maintenance. We believe the current implementation lays down the foundation for a useful tool to assist in program comprehension and feature localization for GUI-driven applications.

To the best of our knowledge, GUITracer is the first tool that successfully combines static and dynamic analyses for program comprehension of GUI-driven applications. Its creation was guided by findings from studies targeting professional developers such as those in <cit.>, which underline the fact that software exploration and comprehension are most often started at GUI level. We have also taken into account the findings within Storey et al.'s survey of software exploration tools <cit.> that show a lack of tools proposing a top-down approach.

In order to identify possible future improvements to GUITracer, we undertook a preliminary evaluation using various versions of open-source applications such as FreeMind, jEdit and Azureus. This allowed us to discover what features are important to improve the tool's capabilities. At the present time, these include improving the visualization of large call graphs using better filtering and navigation, adding support for multi-thread programs and library callbacks <cit.>.
A more distant goal is to provide support for unit testing by integrating our tool's visualization capabilities with well known frameworks such as JUnit.

Regarding the tool's deployment, the next step is to integrate the tool as a plugin within popular IDE's such as Eclipse and NetBeans, where GUITracer will be available while running GUI applications. At this point we plan to undertake a user-driven evaluation in order to guide further development that will make the tool as useful as possible to practitioners working on large scale GUI applications.
School of Mathematical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS (UK)

The family of visibility algorithms was recently introduced (Lacasa et al, PNAS 105 (2008)) as mappings between time series and graphs. Here we extend this method to characterize spatially extended data structures by mapping scalar fields of arbitrary dimension into graphs. After introducing several possible extensions, we provide analytical results on some topological properties of these graphs associated to some types of real-valued matrices, which can be understood as the high and low disorder limits of real-valued scalar fields. In particular, we find a closed expression for the degree distribution of these graphs associated to uncorrelated random fields of generic dimension, extending a well known result in one-dimensional time series. As this result holds independently of the field's marginal distribution, we show that it directly yields a statistical randomness test, applicable in any dimension. We showcase its usefulness by discriminating spatial snapshots of two-dimensional white noise from snapshots of a two-dimensional lattice of diffusively coupled chaotic maps, a system that generates high dimensional spatio-temporal chaos. We finally discuss the range of potential applications of this combinatorial framework, which include image processing in engineering, the description of surface growth in material science, soft matter or medicine and the characterization of potential energy surfaces in chemistry, disordered systems and high energy physics. An illustration of the applicability of this method for the classification of the different stages involved in carcinogenesis is briefly discussed.
Visibility graphs of random scalar fields and spatial data
Lucas Lacasa and Jacopo Iacovacci
December 30, 2023
==========================================================

§ INTRODUCTION

Visibility and Horizontal Visibility Graphs are a family of mappings between ordered sequences and graphs <cit.>. Consider an ordered sequence { x(t)}_t=1^N, where x(t)∈ℝ^m, m≥ 1. For m=1 a typical example of such a sequence is a time series describing the activity of some system, whereas for m>1 we consider multivariate time series or, in general, high-dimensional dynamical systems. In every case, a univariate time series of N data is mapped into a graph of N nodes such that two nodes are linked in the graph if a particular visibility criterion holds in the sequence (the multivariate setting has been explored recently <cit.>). This mapping makes graph-theoretical time series analysis possible and builds a bridge between the theories of dynamical systems, signal processing and graph theory. In recent years, this mapping has been used to provide a topological characterization of different routes to low dimensional chaos <cit.>, or of different types of stochastic and chaotic dynamics <cit.>. From an applied angle, it is being widely used to extract, in a simple and computationally efficient way, informative features for the description and classification of empirical time series appearing in several areas of physics including optics <cit.>, fluid dynamics <cit.>, geophysics <cit.> or astrophysics <cit.>, and extends beyond physics into areas such as physiology <cit.>, neuroscience <cit.> or finance <cit.>, to cite only a few examples. Whenever each element in a given classification task is naturally encoded as an ordered sequence, one can map such a sequence into a visibility graph and subsequently extract a certain set of topological properties of these graphs as the feature vector with which to train classifiers in supervised learning tasks.
Here we propose to extend this methodology from time series { x(t)}_t=1^N to scalar fields h(x,y):ℝ^d→ℝ. This extension, which has only been scarcely explored <cit.>, is conceptually closer to the original context of visibility graphs <cit.> and enables the construction of the visibility graphs of images, landscapes, and general large-scale spatially-extended surfaces. In what follows we will introduce the concept along with a few definitions and properties. In section III we provide analytical results on some topological properties of these graphs associated to some types of real-valued matrices, which can be understood as the high and low disorder limits of real-valued scalar fields. In particular, we find a closed expression for the degree distribution of these graphs associated to uncorrelated random fields of generic dimension, extending the result known for one-dimensional time series. As this result holds independently of the field's marginal distribution, we show that it directly yields a statistical randomness test, applicable in arbitrary dimensions. In section IV we showcase its usefulness by discriminating two-dimensional white noise from a two-dimensional lattice of diffusively coupled chaotic maps (a system that generates high dimensional spatio-temporal chaos). In section V we discuss the range of potential applications of this combinatorial framework and we further briefly illustrate its usefulness for characterizing the process of oncogenesis.

§ DEFINITIONS AND BASIC PROPERTIES

Definition (VG) Let S={x_1,…,x_N} be an ordered sequence of N real-valued, scalar datapoints. A Visibility Graph (VG) is an undirected graph of N nodes, where each node i∈ [1,N] is labelled by the time order of its corresponding datum x_i. Hence x_1 is mapped into node i=1, x_2 into node i=2, and so on.
Then, two nodes i and j (assume i<j without loss of generality) are connected by a link if and only if one can draw a straight line connecting x_i and x_j that does not intersect any intermediate datum x_k,i<k<j. Equivalently, i and j are connected if the following convexity criterion is fulfilled:x_k< x_i + k-i/j-i[x_j-x_i], ∀ k: i<k<jThe same definition applies to a Horizontal Visibility Graph (HVG) but in this latter graph two nodes i, j (assume i<j without loss of generality) are connected by a link if and only if one can draw a horizontal line connecting x_i and x_j that does not intersect any intermediate datum x_k,i<k<j. Equivalently, i and j are connected if the following ordering criterion is fulfilled: x_k<inf(x_i,x_j), ∀ k: i<k<jFrom a combinatorial point of view, HVGs are outerplanar graphs with a Hamiltonian path <cit.>, i.e. noncrossing graphs as defined in algebraic combinatorics <cit.>. Note that the preceding definitions focus on discrete sequences, with an index labeling such that i+1≡ i+Δ, where Δ is the spacing between data. Interestingly, both VG and HVG are invariant under changes in Δ. In particular, this makes it possible to consider the continuous version of a discrete time series simply as the limit Δ→ 0. This invariance property will allow treating continuous scalar fields as the Δ→ 0 limit of matrices, as we will show later. Extension classes. One can now extend the definition of visibility to handle two-dimensional manifolds, by simply extending the visibility criteria along one-dimensional sections of the manifold. The question is: in how many different ways can one do that? As a matter of fact, there exist several possibilities; here we consider just a few of them. We first consider manifolds of dimension d which have a natural Euclidean embedding and define two extension classes, labelled as canonical and FCC respectively. 
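Before turning to higher dimensions, the two one-dimensional criteria above can be checked directly with a brute-force implementation (an illustrative Python sketch for short sequences; function names are ours):

```python
def vg_edges(x):
    """Visibility graph: i<j are linked iff every intermediate datum lies
    strictly below the straight line joining (i, x[i]) and (j, x[j])."""
    n = len(x)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if all(x[k] < x[i] + (k - i) / (j - i) * (x[j] - x[i])
                   for k in range(i + 1, j))}

def hvg_edges(x):
    """Horizontal visibility graph: i<j are linked iff every intermediate
    datum lies strictly below min(x[i], x[j])."""
    n = len(x)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j))}
```

Since the ordering criterion implies the convexity one, every HVG is a subgraph of the corresponding VG; for instance, for x=(3,1,2,4) the VG contains the extra link (1,3).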
In the canonical extension class, the definition of a visibility graph is extended to a manifold of dimension d by applying the VG/HVG to d orthogonal sections of the manifold (which define n=2d directions). In other words, at each point of the manifold one constructs the VG/HVG in the direction of the (canonical) Cartesian axes. On the other hand, the FCC extension class allows an additional number of sections in the direction of the main diagonals. Accordingly, in this second class the number of directions is n=2d+2^d (see figure <ref> for an illustration in the case d=2). Finally, a third extension class (which in this work will only be studied for d=2 flat surfaces) is defined by taking n directions in such a way that the set of n vectors makes a homogeneous angular partition of the plane with constant angle 2π/n. This class is labelled as the order-n class. Obviously, the order-8 and order-4 classes coincide, when d=2, with the FCC and canonical classes respectively. These classes are of special relevance as they provide the most natural algorithmic implementation for image processing <cit.>. We are now ready to give a more formal definition of visibility graphs in these extension classes. Definition (IVG_n) Let I be an N× N matrix, where I_ij∈ℝ and N>0. For an arbitrary entry ij, make an angular partition of the plane into n directions, such that the direction labelled p makes an angle 2π(p-1)/n with the row axis. The Image Visibility Graph of order n IVG_n is a graph with N^2 nodes, where each node is labelled by a pair ij in association with the indices of the entry I_ij, such that two nodes ij and i'j' are linked if * i'j' belongs to one of the n angular partition lines, and* I_ij and I_i'j' are linked in the VG defined over the ordered sequence which includes ij and i'j'.The Image Horizontal Visibility Graph (IHVG_n) follows equivalently if in the second condition we make use of HVG instead of VG. 
Note that in the preceding definition, I can be understood as a two-dimensional square lattice, which is naturally embedded in ℝ^2 if we associate a certain lattice length Δ_p>0 to the separation between any two neighbors in each direction p. In the limit N→∞, Δ_p→ 0 this matrix I converges in a mathematically well-defined sense to a continuous scalar field h(x,y):ℝ^2→ℝ. Accordingly, the continuous version of these graphs can be obtained for n→∞, and in that case I(H)VG_∞ would be an infinite graph. In this work we keep n finite and from now on only consider finite discretizations of scalar fields; the infinite case is certainly of theoretical interest and is left for future investigations. For a given dimension d, one can define in a similar fashion the Visibility Graphs in the canonical extension class, labelled IVG^c(d), by modifying condition (1): i'j' belongs to one of the d Cartesian axes which span ℝ^d and have origin at ij. Analogously, the Visibility Graphs in the FCC extension class IVG^FCC(d) are obtained by again modifying condition (1) appropriately to allow visibility along the main diagonals. Once more, the Horizontal version follows equivalently if in the second condition we make use of HVG instead of VG. A trivial but important remark is that ∀ I, I(H)VG_4=I(H)VG^c(2) and I(H)VG_8=I(H)VG^FCC(2). Note also that the special class IVG^c(2) has been explored recently under the name row-column visibility graph <cit.>.Once any of these graphs has been extracted from a given matrix I, one can further compute standard topological properties of this graph using classical measures from Graph Theory <cit.> or recent metrics defined in Network Science <cit.>, which in turn might be used to provide a topological characterization of I. For instance, the degree k of a node is the number of links of that node. This allows one to construct the degree matrix K∈ℕ^N× N, where K_ij is the degree of the node labelled by the pair ij. 
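As a sketch of this construction in the simplest setting, the degree matrix of the IHVG_4 (the canonical class in d=2) can be assembled by running a stack-based HVG routine along every row and column (illustrative Python, not a reference implementation; names are ours, and ties between equal values are resolved by the strict criterion):

```python
def hvg_pairs(seq):
    """Linked index pairs of the HVG of seq, via the O(len) stack method."""
    stack, pairs = [], []
    for j, v in enumerate(seq):
        while stack and seq[stack[-1]] < v:
            pairs.append((stack.pop(), j))  # popped node is visible from j
        if stack:
            pairs.append((stack[-1], j))    # first non-smaller value blocks
        stack.append(j)
    return pairs

def ihvg4_degree_matrix(I):
    """Degree matrix K of the IHVG_4: HVG links along rows and columns."""
    N = len(I)
    K = [[0] * N for _ in range(N)]
    lines = [[(i, j) for j in range(N)] for i in range(N)] \
          + [[(i, j) for i in range(N)] for j in range(N)]
    for line in lines:
        vals = [I[i][j] for i, j in line]
        for a, b in hvg_pairs(vals):
            K[line[a][0]][line[a][1]] += 1
            K[line[b][0]][line[b][1]] += 1
    return K
```

On a chessboard pattern this reproduces the interior degrees derived in the next section (8 on the high cells, 4 on the low ones), and a constant matrix gives interior degree 4.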
The degree distribution P(k) determines the probability of finding a node of degree k and can be straightforwardly computed from the degree matrix. In this work, for concreteness, we will only consider these metrics; however, we emphasize that a large toolbox of measures could be used for feature extraction in context-dependent applications. Here we are motivated to use these very simple metrics as it has recently been proved that, in the one-dimensional case, the set of degrees is in bijection with the adjacency matrix and hence is indeed an optimal feature <cit.>. In what follows we depict some exact results on the topology of the graphs associated to simple types of matrices which can be understood as the total order and total disorder limits of real images. From now on we only consider the Horizontal version of the visibility criteria, and we assume N→∞ to avoid border effects.§ SOME EXACT RESULTSPeriodicity: monochromatic images and chess.We start by considering trivial configurations at the extreme of total order. For monochromatic images where I_ij=c, the IHVG_n is such that K_ij=n and thus P(n)=1 and P(k≠ n)=0. We can then consider a chessboard pattern. This is a periodic lattice, where in each row the same periodic sequence is represented (black,white,black,…)≡ (1,-1,1,-1,…), except for a one-step translation in even rows. Accordingly, neglecting boundary effects, I_ij= 1 if i+j is even and -1 otherwise. For IHVG_4 we find K_ij=8 if i+j is even and 4 otherwise. For IHVG_8 we find K_ij=12 if i+j is even and 8 otherwise. From this latter matrix the degree distribution is simply P(k)=1/2 for k=8,12 and zero otherwise. 
For other types of periodic structures it is easy to see that the degree matrix will inherit such periodicity and thus the degree distribution will only be composed of a finite number q of non-null probabilities, where q in turn is typically bounded by a function that depends on the period of the structure.Uncorrelated random fields.We then consider a limit configuration at the opposite extreme of total disorder: a two-dimensional uncorrelated random field, i.e. white noise. The following theorem then holds for the degree distribution of IHVG_n: Theorem.Consider an N × N matrix with entries I_ij=ξ, where ξ is a random variable sampled from a distribution f(x) with continuous real support x∈(a,b).Then, for n>0 and in the limit N→∞ the degree distribution of the associated IHVG_n converges toP(k)= (1/n+1)(n/n+1)^k-n,if k≥ n 0,otherwiseA few comments are in order before the proof is presented. First, note that this equation reduces, for n=2 (d=1), to the well-known result for time series of i.i.d. variables P(k)=(1/3)(2/3)^k-2 <cit.>. Second, in the specific class n=8 (equivalent to the FCC class in d=2, this being the version selected for image processing <cit.>), eq.<ref> yieldsP(k)= (1/9)(8/9)^k-8,if k≥ 8 0,otherwiseThird, note that in the limit of large n we would have a continuous visibility scanning. The extension to any generic n can also be directly interpreted as a generalization to higher-dimensional (discrete) scalar fields, and it is easy to show that eq.<ref> also applies to the degree distribution of (i) the canonical extension for dimension d=n/2 (i.e. only even values of n are allowed in this case), and (ii) the FCC extension for dimension d, where n=2d+2^d (i.e. for n=8,14,24,42,…). We are now ready to provide the proof of the theorem. 
Proof.The proof essentially makes use of the diagrammatic formalism introduced in <cit.> where, in the case of time series, the probability of each degree was expanded in a series of terms, each term associated to a different diagram and contributing with a different amplitude.Let us start by considering the concrete case n=8 (which describes the case implemented in our algorithm for image filtering) and generalize to all n thereafter. Using the jargon developed in <cit.>, a node chosen at random which has horizontal visibility of k others can be modeled as a seed (contributing with probability 𝔖) which has visibility of k-8 inner nodes (contributing with ℑ) distributed along the n=8 directions (such that direction i contributes with k_i inner nodes), and whose visibility is finally bounded by 8 bounding nodes (contributing with probability 𝔅).The probability of this event can thus be formally expressed asP(k)=∑_{k_1,k_2… k_8}𝔖𝔅^8∏_i=1^8ℑ_k_i,where the sum enumerates all admissible combinations of {(k_1,k_2,…,k_8)} such that ∑_i=1^8 k_i=k-8 (by construction, every node always has visibility of its boundary, here formed by n=8 nodes). It is easy to see that a possible enumeration is k_i=0,1,…,k-8-∑_m=1^i-1k_mfor i=1,2,…,7;k_8=k-8-∑_i=1^7 k_i.Making use of the cumulative distributionF(x)=∫_a^x f(x')dx' (with F(a)=0,F(b)=1) and following <cit.>, geometrically it is easy to see that𝔖=∫_a^b f(x_0)dx_0;𝔅=∫_x_0^b f(x)dx=1-F(x_0); To describe the probability ℑ_p of finding p inner nodes, by construction we shall take into account that an arbitrary number r (from zero to infinity) of hidden data can lie in between every pair of aligned inner nodes. Such an arbitrary number of hidden data contributes with the following amplitude∑_r=0^∞∏_j=1^r ∫_a^x f(n_j)dn_j=1/1-F(x),where we have used the properties of the cumulative distribution to find the last identity. 
Accordingly, the concatenation of p inner data which might have an arbitrary number of interspersed hidden data can be expressed as ℑ_p=∫_a^x_0f(x_1)dx_1/1-F(x_1)∏_j=1^p-1∫_x_j^x_0f(x_j+1)dx_j+1/1-F(x_j+1).This latter calculation is easy but quite tedious. One proceeds to integrate equation <ref> step by step and a recurrence quickly becomes evident. One can easily prove by induction thatℑ_p=(-1)^p/p![ln(1-F(x_0))]^p.We are thus ready to tackle eq. <ref>. Taking advantage of the closure ∑_i=1^8 k_i=k-8, we first have∏_i=1^8ℑ_k_i=(-1)^k-8[ln(1-F(x_0))]^k-8/∏_i=1^8(k_i)!,so after some reordering, P(k)=∑_{k_1,k_2… k_8}(-1)^k-8/∏_i=1^8(k_i)!∫_a^b f(x_0)(1-F(x_0))^8[ln(1-F(x_0))]^k-8dx_0.Now, in this latter equation the integral is easy to compute:∫_a^b f(x_0)(1-F(x_0))^8[ln(1-F(x_0))]^k-8dx_0=(-1)^k-8(k-8)!(1/9)^k-7Consider finally the term ∑_{k_1,k_2… k_8}(k-8)!/∏_i=1^8(k_i)!=(k-8)!∑_k_1=0^k-8∑_k_2=0^k-8-k_1…∑_k_7=0^k-8-∑_j=1^6 k_j1/k_1!1/k_2!…1/k_7!1/(k-8-∑_j=1^7 k_j)!=8^k-8 where the last identity was found by iteratively applying the binomial theorem∑_k=0^a (a choose k) r^k=(1+r)^a.Altogether, we can write down explicitly for n=8P(k)=(1/9)(8/9)^k-8for k≥ 8 and zero otherwise. This result is independent of f(x), as expected since the HVG is an order statistic <cit.>, and coincides with eq. <ref> for n=8 (i.e. eq. <ref>).We are now ready to generalize the whole derivation. For a generic n, triviallyP(k)=∑_{k_1,k_2… k_n}(-1)^k-n/∏_i=1^n(k_i)!∫_a^b f(x_0)(1-F(x_0))^n[ln(1-F(x_0))]^k-ndx_0.with∫_a^b f(x_0)(1-F(x_0))^n[ln(1-F(x_0))]^k-ndx_0=(1/n+1)^k-n+1(-1)^k-n(k-n)!such thatP(k)=(1/n+1)^k-n+1∑_{k_1,k_2… k_n}(k-n)!/∏_i=1^n(k_i)!.Finally, since∑_{k_1,k_2… k_n}(k-n)!/∏_i=1^n(k_i)!=n^k-n,we find P(k)=(1/n+1)^k-n+1n^k-n=(1/n+1)(n/n+1)^k-n,which concludes the proof. ▪Note that a similar result can be found much more easily at the expense of using a non-rigorous heuristic argument. 
In the case n=8, the probability that the seed node has visibility of exactly k nodes can be expressed as the probability that there are k-8 nodes that are not bounding, times the probability that after these, the boundary prevents larger visibility. Accordingly, we shall writeP(k)=(1-P(8))^k-8P(8)For k=8, the k_i can only take the value k_i=0∀ i=1… 8, hence this term is straightforward to computeP(8)=𝔖𝔅^8=∫_a^b f(x_0)[∫_x_0^b f(x)dx]^8 dx_0=1/9,∀ fwhich then yields the correct shape for P(k):P(k)=(1-P(8))^k-8P(8)=(1/9)(8/9)^k-8A similar argument can be used for a generic n, yieldingP(k)=(1-P(n))^k-nP(n)=(1/n+1)(n/n+1)^k-nfor k≥ n and zero otherwise, in agreement with eq. <ref>. Finite size effects. To assess the speed of convergence to eq. <ref> for finite N, we have estimated the degree distribution of IHVG_8 associated to N× N random matrices whose entries are i.i.d. uniform random variables U[0,1]. In figure <ref> we plot, in semi-log scale, the resulting (finite size) degree distributions for different N=2^7,2^8,…,2^12. As we can see, the distributions are in excellent agreement with eq. <ref> for k≤ k_0, where the location of the cut-off value k_0 scales logarithmically with the system's size N, as shown in the bottom of the figure. In other words, finite size effects only affect the tail of the distribution, which converges logarithmically fast with N.§ A SIMPLE APPLICATIONThe results for uncorrelated random fields found in the previous section are indeed of practical interest because eq.<ref> holds independently of the noise marginal distribution f. Resorting to the contrapositive, if the degree distribution of IHVG_n deviates from eq.<ref> for some empirical field I, one can conclude that the field is not uncorrelated noise. 
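As a sketch of this test in the simplest case d=1 (n=2), where the prediction reduces to P(k)=(1/3)(2/3)^k-2, one can estimate the empirical degree distribution of the HVG of a white-noise series and compare it with the theory (illustrative Python; names are ours):

```python
import random

def hvg_degrees(x):
    """Node degrees of the 1d HVG via the standard O(N) stack construction
    (assumes no ties, which holds almost surely for continuous noise)."""
    deg = [0] * len(x)
    stack = []
    for j, v in enumerate(x):
        while stack and x[stack[-1]] < v:
            deg[stack.pop()] += 1
            deg[j] += 1
        if stack:
            deg[stack[-1]] += 1
            deg[j] += 1
        stack.append(j)
    return deg

random.seed(1)
deg = hvg_degrees([random.random() for _ in range(50000)])
p2 = deg.count(2) / len(deg)   # i.i.d. theory: P(2) = 1/3
p3 = deg.count(3) / len(deg)   # i.i.d. theory: P(3) = 2/9
```

A marked deviation of such empirical frequencies from the geometric law flags the input as non-random; the same recipe applies in higher dimensions with the corresponding IHVG_n.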
This theorem thereby allows for the straightforward design of a statistical randomness test applicable to data structures of arbitrary dimension d, where n(d)=2d if one uses the canonical extension class, or n(d)=2d+2^d in the case of FCC.Coupled Map Lattices. To illustrate this we consider the simple application of discriminating noise from high-dimensional chaos.Chaotic processes display irregular and unpredictable behavior which is often confounded with randomness; chaos is, however, a deterministic process which in some cases hides patterns that can be extracted by appropriate techniques. The endeavor of distinguishing noise from chaos has been an area of intense research activity in the last decades <cit.> and applications have pervaded nearly every scientific discipline where complex, irregular empirical signals emerge. Here we consider spatially extended structures and thus we will be dealing with spatio-temporal chaos, i.e. chaotic behavior in space and in time, and we will explore whether visibility graphs are able to distinguish such dynamics from simple randomness. Let us define I(t) as a two-dimensional square lattice of N^2 diffusively coupled chaotic maps which evolve in time <cit.>. At each vertex of this coupled map lattice (CML) we allocate a fully chaotic logistic map x_t+1=Q(x_t), Q(x)=4x(1-x), and the system is then spatially coupled as follows:I_ij(t+1)=(1-ϵ)Q[ I_ij(t)] + ϵ/4∑_i',j'Q[ I_i'j'(t)],where the sum extends over the von Neumann neighborhood of ij (four adjacent neighbors). The update is parallel and we use periodic boundary conditions. The coupling strength ϵ∈[0,1]. For ϵ=0 the system is uncoupled and the N^2 logistic maps evolve independently. 
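The update rule above can be sketched in a few lines (illustrative Python on plain nested lists; names are ours):

```python
def cml_step(I, eps):
    """One parallel update of the diffusively coupled logistic-map lattice,
    with periodic boundaries and von Neumann (4-neighbor) coupling."""
    N = len(I)
    Q = [[4.0 * v * (1.0 - v) for v in row] for row in I]  # local chaotic map
    return [[(1.0 - eps) * Q[i][j] + (eps / 4.0) * (
                 Q[(i - 1) % N][j] + Q[(i + 1) % N][j]
                 + Q[i][(j - 1) % N] + Q[i][(j + 1) % N])
             for j in range(N)]
            for i in range(N)]
```

Iterating from a random initial condition in [0,1] keeps the lattice in [0,1], since each site is a convex combination of logistic-map images of values in [0,1].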
For ϵ>0 there is a balance between the internal (chaotic) dynamics, which drives a local tendency towards inhomogeneity, and the diffusion term (on the right-hand side of the equation one can easily recognize the discrete version of the Laplacian), which induces a global tendency towards homogeneity in space. This balance is tuned by ϵ, acting as an effective viscosity constant, and the system evolves into different spatio-temporal dynamics as ϵ varies. For a small yet positive value of the coupling the system displays so-called Fully Developed Turbulence, a phase with incoherent spatiotemporal chaos and a high-dimensional attractor <cit.>. In other words, the system evolves both temporally and spatially in a very irregular way, yet it is not totally uncorrelated. For illustration, in figure <ref> we plot, for N=200, grayscale snapshots of this system for ϵ=0 (uncoupled), ϵ=0.1 (weak coupling) and ϵ=0.7 (strong coupling), along with a 200× 200 matrix of U[0,1] i.i.d. random variables (white noise). Note that the snapshot of the uncoupled case reduces to a collection of independent and identically distributed chaotic variables with a marginal distribution that coincides with the invariant measure of the fully chaotic logistic map: the Beta distribution 𝔹(1/2,1/2), with density π^-1x^-1/2(1-x)^-1/2. In other words, such a snapshot is indistinguishable from white, Beta-distributed noise, which should then be equivalent under the IHVG mapping to any other type of white noise and should therefore fulfill our theorem. When ϵ>0 spatial correlations settle in and the snapshots are in theory statistically different; however, this difference is only evident for large coupling. Distinguishing noise from chaos. To explore such differences we can exploit our theorem as follows: first, we estimate the degree distribution of the IHVG_8 of each snapshot, and compare against the theoretical equation for white noise. 
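This estimate-and-compare step, for instance through the χ² statistic defined next, can be sketched as follows (illustrative Python; p_emp is a dict mapping degree to empirical frequency, names are ours):

```python
def chi2_iid(p_emp, n=8, ks=range(8, 45), N=1.0):
    """Chi-squared deviation of an empirical degree distribution from the
    i.i.d. prediction P_th(k) = (1/(n+1)) * (n/(n+1))**(k-n)."""
    total = 0.0
    for k in ks:
        p_th = (1.0 / (n + 1)) * (n / (n + 1)) ** (k - n)
        total += N * (p_th - p_emp.get(k, 0.0)) ** 2 / p_th
    return total
```

The statistic vanishes for a distribution matching the theory and grows with any systematic deviation, which is the separation exploited in what follows.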
To account for finite size effects, it is necessary to compare the estimation for the chaotic case not just with eq.<ref> but also with a finite i.i.d. sample. We have generated 20 realizations of each process (random uniform noise, ϵ=0 and 0.1) and have extracted the degree distribution of IHVG_8 for each case. Sample results of these distributions are shown in the left panel of figure <ref> along with the theoretical prediction for i.i.d. noise (eq. <ref>). As expected, the distributions are apparently very well approximated by eq. <ref> in every case (there are strong deviations for k>35, but these are due to finite size effects, as similar deviations take place for the i.i.d. white uniform noise case). To quantify potential deviations from the theory (which according to the theorem would imply non-randomness), for each case we have computed the χ^2 statisticχ^2=N∑_k [P_th(k)-P_exp(k)]^2/P_th(k),where we have taken k=8,9,…,44. Results are shown in the bottom panel of figure <ref>, revealing a clear separation between the uncorrelated cases (uncoupled chaotic maps and uniform white noise) and the weakly coupled system. This clear distinction is further confirmed by a principal component analysis (PCA) depicted in the right panel of the same figure, where each degree distribution P(k) has been projected onto a two-dimensional space spanned by the first two principal components (this subspace accounts for 60% of the variability). One does not need to apply any clustering algorithm, as the non-random matrices are very clearly clustered together and apart from the i.i.d. cases. Phase diagram.As mentioned previously, the spatio-temporal dynamics of the coupled map lattice show a rich phase diagram as we increase the coupling constant ϵ. An easy way of encapsulating and visualizing such richness in a single diagram is presented in the left panel of figure <ref>. For each ϵ, we compute the degree distribution of the associated IHVG_8. 
Then we compute the distance D between the degree distribution at ϵ and the corresponding result for ϵ=0 (eq. <ref>) D=∑_k |P(k)-(1/9)(8/9)^k-8|. D acts as a scalar order parameter describing the spatial configuration of the CML and, interestingly, evidences sharp changes across the different phases: for ϵ<0.12, the system develops Fully-Developed Turbulence (FDT) with weak spatial correlations. This regime shifts to a Periodic Structure (PS) for 0.12<ϵ<0.27, which then gradually shifts into a phase with spatially Coherent Structures (CS); these ultimately break down for ϵ>0.88 in favor of periodic patterns. For 0.88<ϵ<1 the spatial structure shows a mix between CS and PS. We conclude that the degree distribution of the IHVG_8 captures this rich spatial structure, something confirmed via principal component analysis in the right panel of figure <ref>.§ DISCUSSIONThis framework opens the possibility of describing discretized scalar fields of arbitrary origin in a combinatorially compact fashion, and enables the use of the tools of graph theory and network science for the practical description and classification of spatially-extended data structures. For the sake of exposition and concreteness, in this work we have only used a couple of graph measures (degree matrix and degree distribution), which have been argued to be optimal in the one-dimensional case <cit.>, but it should be highlighted that this method is much more general and allows one to extract from these graphs any desired property.For d=1 the method was naturally designed for the task of time series analysis, and has been exploited accordingly and extensively in the last years, both from a theoretical point of view and for applications, as acknowledged in the introduction. Here we have presented a natural extension of these algorithms to deal with (discretized) scalar fields of arbitrary dimension, along with a few exact results on simple yet relevant cases. 
From a mathematical point of view, the task of characterizing the graphs in these extension classes provides a wide range of challenging open questions, which could parallel recent advancements in the one-dimensional case <cit.>. Now, what are the potential applications of this framework? For d=2 (either using the canonical or FCC extension classes, or the order-n class), a plethora of applications emerge; here we only enumerate and discuss a few: (i) Image Processing: a (grayscale) image is just a discrete scalar field. Once we extract the visibility graphs of a given image, can we use the topological properties of these graphs to build feature vectors which can feed automatic classifiers for several statistical learning tasks involving images <cit.>? Can we define the distance between two images using graph kernels <cit.> on the associated visibility graphs?(ii) Physics of Interfaces: can we provide a topological characterization of fractal surface growth <cit.>? Can we, for instance, account for spatially self-similar structures much in the same way the Hurst exponent of fractional Brownian motion was estimated with visibility graphs <cit.>? (A preliminary analysis via row-column visibility graphs has partly addressed this issue recently <cit.>.) Furthermore, can we apply this methodology to biologically-relevant problems and beyond, for instance to classify tumoral or calli surfaces? (iii) Urban Planning: can we automatically cluster cities by only resorting to combinatorial properties extracted from their visibility graphs? And can we link such emerging clusters with architectural, historical or cultural properties of cities? (iv) Random Matrix Theory: is there a visibility graph characterization of different random matrix ensembles? 
To illustrate the potential applicability of the method to the case of tumor description, in the left panel of figure <ref> we plot the degree distribution of the IHVG_8 associated to three atomic force microscopy (AFM) images (94× 94 after grayscale preprocessing) of normal, immortal (premalignant) and cancer (malignant) cervical epithelial cells <cit.>. This very preliminary evidence suggests that the carcinogenesis transition normal → premalignant → cancer is paralleled in graph space by a systematic deviation of the degree distribution from the i.i.d. case. In the right panel of the same figure we plot the degree distribution associated to IVG_8, whose tails have been fitted to exponential functions ∼exp(-λ k), finding λ_normal<λ_immortal<λ_cancer <cit.>. These are of course very preliminary results, given simply for illustration, and future research should confirm their accuracy and their potential use for carcinogenesis description and early detection.The most exciting application for higher dimensions d≥ 2 is perhaps the description of the spatial structure of generic energy landscapes <cit.> V: ℝ^d→ℝ, where d is the number of degrees of freedom. Typically, these fields describe an energy function whose minimum is associated with the macroscopic behavior of many-body systems, and they play a major role in physics and chemistry. The structure of these fields is however rather messy. As a matter of fact, in spin glasses and other disordered systems the macroscopic properties do not necessarily relate directly to a configuration of minimal energy, as the system gets trapped in local, metastable minima of this energy surface: in this sense the spatial distribution and overall structure of these minima (stationary points) give valuable information on the system's dynamical evolution. These energy surfaces are also of great interest in chemistry (Kramers' reaction rate theory for the thermally activated escape from metastable states) and high energy physics (e.g. 
local minima of supersymmetric energy landscapes correspond to the field-theory vacuum). The formalism presented here would enable the description of such energetic landscapes, opening a thread of questions such as: Can we classify different types of field theories only using combinatorial criteria on their energy landscapes? What is the spatial distribution of stationary points of different canonical disordered systems in the light of this new method? To conclude, we hope to have made the case that encoding spatially extended structures in a combinatorial fashion opens exciting theoretical questions as well as applications. The approach presented here is promising, there exist several possible avenues for future research, and we hope that these methods spark interest in the corresponding communities. We thank I. Sokolov for granting permission to reproduce the images on normal, immortal and cancer cells. LL acknowledges funding from EPSRC Fellowship EP/P01660X/1.10 GT B. Bollobas, Modern Graph Theory (Springer, 1998). NS M. Newman, Networks: An Introduction (Oxford University Press, 2010). Luque_Theorem B. Luque, L. Lacasa, Canonical horizontal visibility graphs are uniquely determined by their degree sequence, Eur. Phys. J. B Sp. Top. (in press). libro_chaos H. Kantz, T. Schreiber, Nonlinear time series analysis (Cambridge University Press). PNAS L. Lacasa, B. Luque, F.J. Ballesteros, J. Luque, and J.C. Nuno, From time series to complex networks: the visibility graph, Proc. Natl. Acad. Sci. USA 105, 13 (2008). image_processing J. Iacovacci and L. Lacasa, Visibility graphs: a combinatorial framework for image processing (in preparation). PRE B. Luque, L. Lacasa, J. Luque, F.J. Ballesteros, Horizontal visibility graphs: exact results for random time series, Phys. Rev. E 80, 046103 (2009). severini S. Severini, G. Gutin, T. 
Mansour, A characterization of horizontal visibility graphs and combinatorics on words, Physica A 390, 12(2011) 2421-2428. flajo P. Flajolet and M. Noy, Analytic combinatorics of non-crossing configurations, Discrete Math. 204 (1999) 203-229. nonlinearity L. Lacasa, On the degree distribution of horizontal visibility graphs associated to Markov processes and dynamical systems: diagrammatic and variational approaches, Nonlinearity 27, 2063-2093 (2014). nonstationary L.Lacasa and R. Flanagan, Time reversibility from visibility graphs of non-stationary processes, Phys. Rev. E 92, 022817 (2015). EPL L. Lacasa, B. Luque, J. Luque and J.C. Nuno, The Visibility Graph: a new method for estimating the Hurst exponent of fractional Brownian motion, EPL 86, 30001 (2009).rowcolumn X Qin, P Xue, L Xin-Li, M Stephen, Y Hui-Jie, J Yan, W Jian-Yong, Z. Quin-Jung, Row-column visibility graph approach to two-dimensional landscapes, Chinese Physics B 23, 7 (2014). original A. Turner, M. Doxa, D. O'sullivan, and A. Penn, From isovists to visibility graphs: a methodology for the analysis of architectural space. Environment and Planning B: Planning and design, 28(1), 103-121 (2001). CML K. Kaneko, Overview of Coupled Map Lattices, Chaos 2, 3(1992). kernel S.V. N. Vishwanathan, N.N. Schraudolph, R. Kondor and K.M. Borgwardt, Graph kernels, Journal of Machine Learning Research 11 (2010) pp.1201-1242. barabasiA.L. Barabasi and H.E. Stanley, Fractal Concepts in Surface Growth (Cambridge University Press, 1995). PEL1 D. Wales, Energy Landscapes : Applications to Clusters, Biomolecules and Glasses (Cambridge University Press, 2004). jns B. Luque, L. Lacasa, F. Ballesteros, A. Robledo,Analytical properties of horizontal visibility graphs in the Feigenbaum scenario, Chaos 22, 1 (2012) 013109. quasi B. Luque, A. Núñez, F. Ballesteros, A. Robledo, Quasiperiodic Graphs: Structural Design, Scaling and Entropic Properties, Journal of Nonlinear Science 23, 2, (2012) 335-342. pre2013 A.M. Núñez, B. 
Luque, L. Lacasa, J.P. Gómez, A. Robledo, Horizontal Visibility graphs generated by type-I intermittency, Phys. Rev. E, 87 (2013) 052801.physics3 A. Aragoneses, L. Carpi, N. Tarasov, D.V. Churkin, M.C. Torrent, C. Masoller, and S.K. Turitsyn, Unveiling Temporal Correlations Characteristic of a Phase Transition in the Output Intensity of a Fiber Laser, Phys. Rev. Lett. 116, 033902 (2016).fluiddyn0 M. Murugesana and R.I. Sujitha1, Combustion noise is scale-free: transition from scale-free to order at the onset of thermoacoustic instability, J. Fluid Mech. 772 (2015). fluiddyn1A. Charakopoulos, T.E. Karakasidis, P.N. Papanicolaou and A. Liakopoulos, The application of complex network time series analysis in turbulent heated jets, Chaos 24, 024408 (2014). fluiddyn2 P. Manshour, M.R. Rahimi Tabar and J. Peinche, Fully developed turbulence in the view of horizontal visibility graphs, J. Stat. Mech. (2015) P08031. physics2 RV Donner, JF Donges, Visibility graph analysis of geophysical time series: Potentials and possible pitfalls, Acta Geophysica 60, 3 (2012).suyal V. Suyal, A. Prasad, H.P. Singh, Visibility-Graph Analysis of the Solar Wind Velocity, Solar Physics 289, 379-389 (2014) Zou Y. Zou, R.V. Donner, N. Marwan, M. Small, and J. Kurths, Long-term changes in the north-south asymmetry of solar activity: a nonlinear dynamics characterization using visibility graphs, Nonlin. Processes Geophys. 21, 1113-1126 (2014).physio1 J.F. Donges, R.V. Donner and J. Kurths, Testing time series irreversibility using complex network methods, EPL 102, 10004 (2013). meditation_VG S. Jiang, C. Bian, X. Ning and Q.D.Y. Ma, Visibility graph analysis on heartbeat dynamics of meditation training, Appl. Phys. Lett. 102 253702 (2013). neuro M Ahmadlou, H Adeli, A Adeli, New diagnostic EEG markers of the Alzheimer's disease using visibility graph, J. of Neural Transm. 117, 9 (2010). ryan1 R. Flanagan and L. 
Lacasa, Irreversibility of financial time series: a graph-theoretical approach, Physics Letters A 380, 1689-1697 (2016) multivariate L. Lacasa, V. Nicosia, V. Latora, Network Structure of Multivariate Time Series, Sci. Rep. 5, 15508 (2015) motifs J. Iacovacci and L. Lacasa, Sequential visibility-graph motifs, Phys. Rev. E 93, 042309 (2016)cancer_paper M.E. Dokukin, N.V. Guz, C.D. Woodworth, and I. Sokolov, Emerging of fractal geometry on surface of human cervical epithelial cells during progression towards cancer, New journal of physics 17,3 (2015). cancer2 M.E. Dokukin, N.V. Guz, R. M. Gaikwad, C.D. Woodworth, and I. Sokolov, Cell Surface as a Fractal: Normal and Cancerous Cervical Cells Demonstrate Different Fractal Behavior of Surface Adhesion Maps at the Nanoscale Phys. Rev. Lett. 107, 028101 (2011).
aballonb@ift.unesp.br, gkrein@ift.unesp.br, miller@ift.unesp.br Instituto de Física Teórica, Universidade Estadual Paulista, Rua Dr. Bento Teobaldo Ferraz, 271 - Bloco II, 01140-070 São Paulo, SP, Brazil We extend the two-flavor hard-wall holographic model of Erlich, Katz, Son and Stephanov [Phys. Rev. Lett. 95, 261602 (2005)] to four flavors to incorporate strange and charm quarks. The model incorporates chiral and flavor symmetry breaking and provides a reasonable description of masses and weak decay constants of a variety of scalar, pseudoscalar, vector and axial-vector strange and charmed mesons. In particular, we examine flavor symmetry breaking in the strong couplings of the ρ meson to the charmed D and D^* mesons. We also compute electromagnetic form factors of the π, ρ, K, K^*, D and D^* mesons. We compare our results for the D and D^* mesons with lattice QCD data and other nonperturbative approaches. Strong couplings and form factors of charmed mesons in holographic QCD Carlisson Miller ====================================================================== § INTRODUCTION There is considerable current theoretical and experimental interest in the study of the interactions of charmed hadrons with light hadrons and atomic nuclei <cit.>. There is special interest in the properties of D mesons in nuclear matter <cit.>, mainly in connection with D-mesic nuclei <cit.>, J/Ψ and η_c binding to nuclei <cit.>, and ND molecules <cit.>. D-mesons are also of interest in the context of the so-called X,Y,Z exotic hadrons, which have galvanized the field of hadron spectroscopy since the discovery in 2003 of the charmed hadron X(3872) by the Belle collaboration <cit.>. They are exotic because they do not fit the conventional quark-model pattern of either quark-antiquark mesons or three-quark baryons. Most of the X,Y,Z hadrons have masses close to open-flavor thresholds and decay into hadrons containing charm (or bottom) quarks.
Presently there is no clear theoretical understanding of the new hadrons, despite the huge literature that has accumulated over the last decade. In the coming years, existing and forthcoming experiments will produce numerous new, and very likely surprising results—Ref. <cit.> is a very recent review on exotic hadrons, with an extensive list of references on theory and experiment. The PANDA collaboration <cit.> at the forthcoming FAIR facility, in particular, has an extensive program <cit.> aiming at the investigation of charmed hadrons and their interactions with ordinary matter. A major difficulty in the theoretical treatment of in-medium interactions of charmed hadrons is the lack of experimental information on the interactions in free space. For example, almost all knowledge on the DN interaction comes from calculations based on effective Lagrangians that are extensions of light-flavor chiral Lagrangians using SU(4) flavor symmetry <cit.> and heavy quark symmetry <cit.>. The Lagrangians involve coupling constants, like g_ρDD, g_ωDD, g_ρD^∗ D and g_ρD^∗ D^∗, whose values are taken from SU(4) flavor and heavy-quark symmetry relations. For instance, SU(4) symmetry relates the couplings of the ρ to the pseudoscalar mesons π, K and D, namely g_ρDD = g_KKρ = g_ρππ/2. If, in addition to SU(4) flavor symmetry, heavy-quark spin symmetry is invoked, one has g_ρ DD = g_ρ D^∗ D = g_ρ D^∗ D^∗ = g_π D^∗ D to leading order in the charm quark mass <cit.>. The coupling g_ρππ is constrained by experimental data; the studies of the DN interaction in Refs. <cit.> utilized such an SU(4) relation, taking g_ρππ = 6.0, which is the value used in a large body of work conducted within the Jülich model <cit.> for light-flavor hadrons. This value of g_ρππ implies, through SU(4) symmetry, g_ρ DD = 3, which is not very different from predictions based on the vector meson dominance (VMD) model: g_DDρ = 2.52-2.8 <cit.>.
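The bookkeeping of these symmetry relations can be made explicit. In the sketch below only g_ρππ = 6.0 is an input taken from the text; every other coupling follows from the quoted SU(4) and heavy-quark-spin relations (variable names are ours, chosen for illustration):

```python
# SU(4)-symmetry bookkeeping for the couplings quoted in the text.
# Only g_rho_pipi is an input; the rest follow from the symmetry relations.
g_rho_pipi = 6.0

# SU(4) flavor symmetry: g_rhoDD = g_KKrho = g_rho_pipi / 2
g_KK_rho = g_rho_pipi / 2
g_rho_DD = g_rho_pipi / 2

# Heavy-quark spin symmetry on top of SU(4), to leading order in the charm mass:
# g_rhoDD = g_rhoD*D = g_rhoD*D* = g_piD*D
g_rho_DstarD = g_rho_DstarDstar = g_pi_DstarD = g_rho_DD

print(g_rho_DD)  # 3.0, to be compared with the VMD range 2.52-2.8
```

These symmetry values are precisely what the holographic calculation later tests, since the model breaks SU(4) explicitly through the quark masses and condensates.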
Moreover, to maintain unitarity in calculations of scattering phase shifts and cross sections, lowest-order Born diagrams need to be iterated with the use of a scattering equation, like the Lippmann-Schwinger equation, and phenomenological form factors are required to control ultraviolet divergences. Form factors involve cutoff parameters that also are subject to flavor dependence. Again, due to the lack of experimental information, they are also poorly constrained. Flavor symmetry is strongly broken at the level of the QCD Lagrangian due to the widely different values of the quark masses; while in the light quark sector one has good SU(2) symmetry, m_u ≃ m_d, so that e.g. g_ρDD = g_ωDD (up to a phase), in the heavy-flavor sector SU(3) and SU(4) symmetries are badly broken: m_c ≫ m_s ≫ m_u. Given the importance of effective Lagrangians in the study of a great variety of phenomena involving D-mesons, in the present work we examine their properties in a holographic model of QCD. We extend the holographic QCD model of Refs. <cit.> to the case of N_f=4 and investigate the implications of the widely different values of the quark masses on the effective three-meson couplings g_ρ D D and g_ρ D^* D^* and the electromagnetic form factors of the D and D^* mesons. The parameters of the model are the quark masses and condensates as well as the mass gap scale. Using experimental data for a selected set of meson masses to fix the model parameters allows us to predict not only the strong couplings and electromagnetic form factors mentioned above but also many other observables not studied before with a holographic model. The works in Refs. <cit.> pioneered the modeling of low energy QCD by incorporating features of dynamical chiral symmetry breaking in holographic QCD. They correctly identify the five dimensional gauge fields dual to the left and right currents associated with chiral symmetry as well as the five dimensional scalar field dual to the chiral condensate.
The extension proposed in Ref. <cit.> incorporated the strange quark and was able to identify the appearance of scalar modes associated with flavor symmetry breaking. In the present work, by extending the model of Refs. <cit.> to the case N_f=4, we are able to investigate the consequences of the dramatically different values of the quark masses on the phenomenology of charmed mesons. Moreover, by combining the formalism of Kaluza-Klein expansions and the AdS/CFT dictionary, we are able to directly extract the leptonic decay constants of mesons and find an expansion for the flavor currents that relates flavor symmetry breaking to the appearance of scalar modes. That relation bears a strong analogy with the generalized PCAC (partially conserved axial current relation) <cit.> that relates dynamical chiral symmetry breaking to the appearance of the pion and its resonances <cit.>. In the model of Refs. <cit.>, dynamical chiral symmetry breaking becomes manifest when considering fluctuations of the five dimensional gauge fields associated with the axial and vectorial sector. While the kinetic terms of the axial sector acquire a mass, signaling chiral symmetry breaking, the vector sector remains massless. In our framework, it turns out that the vector sector also acquires a mass, signaling the breaking of flavor symmetry. The Kaluza-Klein decomposition of these fields allows us to obtain effective kinetic Lagrangians for the mesons from the five dimensional kinetic terms, with masses and decay constants obtained in terms of the wave functions representing the Kaluza-Klein modes.
Moreover, expanding the five dimensional action to cubic order in the fluctuations and performing again a Kaluza-Klein decomposition allows us to obtain effective Lagrangians describing the three-meson interactions, with strong couplings given in terms of integrals involving the wave functions of the corresponding Kaluza-Klein modes. It turns out that the symmetry breaking pattern in the strong couplings differs somewhat from previous studies in the literature. Calculations employing QCD sum rules found SU(4) symmetry breaking in three-hadron couplings that varies within the range of 7% to 70% <cit.>. In Ref. <cit.>, using a model constrained by the Dyson-Schwinger equations of QCD, it was found that the relation g_ρ DD = g_ρππ/2 is strongly violated at the level of 300% or more. In a recent follow-up of that study within the same framework, Ref. <cit.> finds that couplings between D-, D^∗-mesons and π-, ρ-mesons can differ by almost an order of magnitude, and that the corresponding form factors also exhibit different momentum dependences. Our results are more in line with calculations using the ^3P_0 quark-pair creation model in the nonrelativistic quark model <cit.>. The organization of this paper is as follows. In Sec. <ref> we describe how chiral and flavor symmetry breaking is realized in our model. Then in Sec. <ref> we describe the five dimensional field equations and the AdS/CFT dictionary for the flavor and axial currents. In Sec. <ref> we describe the formalism of Kaluza-Klein expansions and obtain effective kinetic Lagrangians for the mesons. In Sec. <ref> we use the prescription of our previous studies in Ref. <cit.> for the leptonic decay constants and obtain relations describing flavor symmetry breaking and chiral symmetry breaking in terms of scalar and pseudoscalar modes respectively. In Sec. <ref> we obtain effective Lagrangians describing three-meson interactions with the holographic prescription for the strong couplings. Finally, in Sec.
<ref> we fit the model parameters and present our numerical results for many observables, including the strong couplings g_ρ DD and g_ρ D^* D^* as well as the electromagnetic form factors of the D and D^* mesons. We compare the latter against lattice QCD data obtained in Ref. <cit.>. Section <ref> presents our conclusions. § CHIRAL SYMMETRY AND FLAVOR SYMMETRY IN HOLOGRAPHIC QCD Chiral symmetry SU(N_f)_L × SU(N_f)_R for N_f flavors holds in the massless limit of QCD and is described in terms of the left and right currents J_L/R^μ , a = q̅_L/Rγ^μ T^a q_L/R, where T^a, a=1, …, N_f^2-1 are the generators of the SU(N_f) group, and q_L/R = 1/2 (1±γ_5) q, with q being the quark Dirac field. The SU(4) generators T^a are normalized by the trace condition Tr ( T^a T^b ) = 1/2δ^ab, satisfying the Lie algebra [ T^a , T^b] = i f^abc T^c. The generators T^a are related to the Gell-Mann matrices λ^a by T^a = 1/2λ^a. Chiral symmetry is broken by the presence of the operator q̅ q = q̅_R q_L + q̅_L q_R. This breaking can be explicit, when it appears in the QCD Lagrangian associated with the nonzero quark masses, or dynamical, when the operator acquires a vacuum expectation value, giving rise to a condensate ⟨q̅ q⟩ in the limit of zero quark masses. In the case N_f=2, dynamical chiral symmetry breaking goes as SU(2)_L × SU(2)_R → SU(2)_V, where SU(2)_V is an exact vector symmetry and the broken symmetry occurs in the axial sector. This is described in terms of the vector and axial currents J^μ,a_V/A= J^μ,a_R ± J^μ , a_L. The symmetry associated with the vector sector J^μ,a_V is known as isospin symmetry. When N_f >2, both chiral and flavor symmetries are broken by the quark masses. We will describe how chiral and flavor symmetry breaking are implemented in a holographic model for N_f=4. In the pioneering work of Refs. <cit.> and <cit.>, a simple holographic realization of chiral symmetry breaking (CSB) was proposed.
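The normalization and Lie-algebra relations above are easy to verify numerically. The sketch below builds the SU(4) generators via the standard generalized Gell-Mann construction (a textbook construction, not taken from the paper) and checks Tr(T^a T^b) = δ^{ab}/2 together with the closure [T^a, T^b] = i f^{abc} T^c, extracting the structure constants as f^{abc} = -2i Tr([T^a, T^b] T^c):

```python
import numpy as np

def gellmann(n=4):
    """Generalized Gell-Mann matrices lambda^a for SU(n), in the standard
    ordering that reduces to the SU(2)/SU(3) matrices in the upper-left block."""
    def E(j, k):
        m = np.zeros((n, n), dtype=complex); m[j, k] = 1.0; return m
    lam = []
    for k in range(1, n):
        for j in range(k):                       # symmetric / antisymmetric pairs
            lam.append(E(j, k) + E(k, j))
            lam.append(-1j*(E(j, k) - E(k, j)))
        d = np.zeros(n); d[:k] = 1.0; d[k] = -k  # diagonal generator
        lam.append(np.sqrt(2.0/(k*(k+1)))*np.diag(d).astype(complex))
    return lam

T = [0.5*lam for lam in gellmann(4)]             # T^a = lambda^a / 2, a = 1..15

# normalization: Tr(T^a T^b) = delta^{ab}/2
for a, Ta in enumerate(T):
    for b, Tb in enumerate(T):
        assert abs(np.trace(Ta @ Tb) - (0.5 if a == b else 0.0)) < 1e-12

# structure constants f^{abc} = -2i Tr([T^a, T^b] T^c): real, antisymmetric in a<->b
N = len(T)
f = np.zeros((N, N, N))
for a in range(N):
    for b in range(N):
        comm = T[a] @ T[b] - T[b] @ T[a]
        for c in range(N):
            val = -2j*np.trace(comm @ T[c])
            assert abs(val.imag) < 1e-12
            f[a, b, c] = val.real
        # closure of the algebra: [T^a, T^b] = i f^{abc} T^c
        assert np.allclose(comm, 1j*np.einsum('c,cij->ij', f[a, b], np.array(T)))

print(f[0, 1, 2])  # → 1.0, i.e. f^{123} = 1 (the SU(2) subalgebra)
```

With this ordering, a = 1,2,3 span the light SU(2) block, a = 4..7 the strange pairs, a = 9..14 the charm pairs, and a = 8, 15 the diagonals, matching the index assignments used for the mass terms below.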
They considered the simplest background in holographic QCD, known as the hard wall model <cit.>, consisting of a slice of anti-de-Sitter spacetime: ds^2 = 1/z^2 ( η_μν dx^μ dx^ν - dz^2), with 0 < z≤ z_0. The parameter z_0 determines an infrared (IR) scale at which conformal symmetry is broken. The action proposed in Ref. <cit.> includes N_f gauge fields L_m and R_m, corresponding to the left and right flavor currents J_L/R^μ , a, and a bifundamental field X dual to the operator q̅_R q_L. The action can be written as S= ∫ d^5 x √(|g|) Tr{ (D^m X)^† (D_m X)+ 3 |X|^2- 1/4 g_5^2 ( L^mn L_mn + R^mn R_mn ) }, where D_m X = ∂_m X - i L_m X + i X R_m is the covariant derivative of the bifundamental field X, and L_mn = ∂_m L_n - ∂_n L_m - i[ L_m , L_n], R_mn = ∂_m R_n - ∂_n R_m - i[ R_m , R_n], are non-Abelian field strengths. The 5-d squared mass of the field X is fixed to m^2 = -3, to match the conformal dimension Δ=3 of the dual operator q̅_R q_L. The model of Ref. <cit.> focused on N_f=2 and worked in the limit of exact flavor symmetry. In Ref. <cit.>, Abidin and Carlson extended the model to N_f=3, to incorporate the strange-quark sector. In the present paper we further extend that model to N_f=4, with the aim of making predictions for charmed mesons. In our approach we use a Kaluza-Klein expansion for the 5-d fields in order to find a 4-d effective action for the mesons. This approach allows us to find directly the meson weak decay constants, couplings and expansions for the vector and axial currents. We find in particular a relation for the vector current describing flavor symmetry breaking (FSB). We start with the classical background that describes chiral symmetry breaking: L^0_m = R^0_m = 0 , 2 X_0 = ζ M z + Σ/ζ z^3 , where M is the quark-mass matrix, M= diag(m_u , m_u , m_s , m_c), and Σ is the matrix of the quark condensates, Σ= diag(σ_u ,σ_u , σ_s , σ_c).
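For orientation, the classical background can be coded directly. The quark masses and condensates below are placeholder numbers for illustration only (they are not the fitted values, which appear later in the paper), and ζ = √N_c/2π is the large-N_c normalization constant introduced in the text just below:

```python
import numpy as np

N_c = 3
zeta = np.sqrt(N_c)/(2*np.pi)         # large-N_c normalization (defined in the text)

# Placeholder parameter values in GeV units, for illustration only
# (NOT the fitted values used in the paper).
m_u, m_s, m_c = 0.004, 0.10, 1.2      # quark masses
s_u, s_s, s_c = 0.020, 0.025, 0.030   # condensates sigma_q

M     = np.diag([m_u, m_u, m_s, m_c])  # quark-mass matrix
Sigma = np.diag([s_u, s_u, s_s, s_c])  # condensate matrix

def X0(z):
    """Classical background: 2 X_0(z) = zeta M z + (Sigma/zeta) z^3."""
    return 0.5*(zeta*M*z + (Sigma/zeta)*z**3)

def v_q(m_q, sigma_q, z):
    """Diagonal profile v_q(z) = zeta m_q z + sigma_q z^3 / zeta."""
    return zeta*m_q*z + (sigma_q/zeta)*z**3
```

The diagonal entries of 2X_0(z) are exactly the profiles v_u(z), v_s(z), v_c(z) through which all of the symmetry-breaking mass terms below are expressed.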
The parameter ζ=√(N_c)/2π is introduced to have consistency with the counting rules of large-N_c QCD—for details, see Ref. <cit.>. Note that we are assuming SU(2) isospin symmetry in the light-quark sector, i.e. m_d = m_u and σ_d = σ_u, which is a very good approximate symmetry in QCD. For the strange and charm quarks we will fit their masses m_s and m_c to the physical masses of the mesons. Note, however, that the model should not be valid for arbitrarily large quark masses. The reason is that, from the string theory perspective, the action in equation (<ref>) is expected to arise from a small perturbation of N_f coincident space-filling flavor branes. Specifically, the mass term M appearing in Eq. (<ref>) acts as a small source for the operator q̅_R q_L, responsible for the breaking of the chiral and flavor symmetries. A holographic description of quarks with very large masses requires the inclusion of long open strings and two sets of flavor branes distinguishing the heavy quarks from the light quarks (see e.g. <cit.>). In that framework, the string length is proportional to the quark mass and each set of flavor branes will carry a set of fields describing the dynamics of light and heavy mesons respectively. In this work we will show that the model described by the action in equation (<ref>) is still a very good approximation for the dynamics of light and heavy-light charmed mesons, the reason being that the internal structure in both cases is governed by essentially the same nonperturbative physics, which occurs at the scale Λ_ QCD <cit.>. In heavy-heavy mesons, on the other hand, the internal dynamics is governed by short-distance physics. For recent holographic studies of mesons involving heavy quarks see Refs. <cit.>. To investigate the consequences of chiral and flavor symmetry breaking it is convenient to rewrite the fluctuations of left and right gauge fields in terms of vector and axial fields, i.e.
L_m = V_m + A_m and R_m = V_m - A_m. The bifundamental field X can be decomposed as X = e^iπX_0 e^iπ, where X_0 is the classical part and π contains the fluctuations. The fields V_m, A_m and π can be expanded as V_m^a T^a, A_m^a T^a and π^a T^a respectively. It is important to remark that organizing the heavy-light D mesons together with the light pions and kaons in a 15-plet π^a T^a of fluctuation fields does not imply, automatically, that the heavy-light D mesons are being approximated by Nambu-Goldstone bosons. The reason is that the explicit breaking of chiral symmetry, driven by the heavy charm quark, is large and its effects are by no means neglected in the model. In the same way, the fact that the D mesons appear in the same multiplet of the SU(4) flavor group does not mean that flavor symmetry is exact; it is explicitly broken by the widely different values of the quark masses. The main advantage of using such an SU(4) representation with explicit symmetry-breaking terms is that it allows us to make contact with the four dimensional effective field theories describing the interactions of light and heavy-light mesons commonly used in phenomenological applications. This not only extends the work of <cit.> but also leads to quantitative predictions for the strong couplings that can be tested against experiment or lattice QCD data, which is our main objective in the present paper. An alternative approach to describe the heavy-light mesons is to make contact with a particularly interesting class of four dimensional models that treat the light mesons as in the present paper, and treat heavy mesons by invoking heavy-quark symmetry. The Lagrangian in the heavy sector is written as an expansion in inverse powers of the heavy quark mass; Refs. <cit.> are examples of such models. In holography the heavy quarks are realized in terms of long open strings, as described above in this section.
For recent progress in the heavy quark approach to heavy-light mesons within holographic QCD see <cit.>.Expanding the action in Eq. (<ref>) up to cubic order in the fields V_m^a, A_m^a and π^a, we find S = S^(2) + S^(3) + …,where S^(2) = ∫ d^5 x √(|g|){ - 1/4 g_5^2 v^mn_a v_mn^a + 1/2(M^a_V)^2 V^m_a V_m^a - 1/4 g_5^2 a^mn_a a_mn^a +M_A^a b/2 (∂^m π^a - A^m,a) (∂_m π_b - A_m,b) } S^(3) = ∫ d^5 x √(|g|){ - 1/2g_5^2 f^abcv^mn_a( V_m^b V_n^c + A_m^b A_n^c) - 1/g_5^2 f^abc a^mn_a V_m^b A_n^c- (M_V^b)^2/2 f^abc (∂_m π^a - 2 A_m^a) V^m,bπ^c +M_A^ae f^ebc (∂_m π^a - A_m^a) V^m,bπ^c }, and we have defined the Abelian field strengths v_mn^a = ∂_m V_n^a - ∂_n V_m^a and a_mn^a = ∂_m A_n^a - ∂_n A_m^a. In the kinetic term S^(2), the vector and axial symmetry breaking is dictated by the mass terms M_V^a and M̃_A^a b, defined by the traces 2Tr ( [T^a , X_0] [ T^b, X_0] )= - (M_V^a)^2 δ^ab and 2Tr ( { T^a , X_0 }{ T^b , X_0 } ) =M_A^ab. Note, however, that the axial sector in S^(2) is invariant under the gauge transformation A^a_m→ A^a_m-∂_mλ_A^a , π^a→π^a - λ_A^a. Using Eq. (<ref>) we find the following nonzero values for M_V: (M_V^a)^2 = 1/4 (v_s - v_u)^2 for a= (4,5,6,7) ,(M_V^a)^2 = 1/4 (v_c - v_u)^2 for a= (9,10,11,12) , (M_V^a)^2 = 1/4 (v_c - v_s)^2 for a= (13,14) ,and the nonzero values for M_A:M_A^a,a = v_u^2 for a= (1,2,3) ,M_A^a,a = 1/4 (v_s + v_u)^2 for a= (4,5,6,7) ,M_A^a,a =1/4 (v_c + v_u)^2for a= (9,10,11,12) ,M_A^a,a = 1/4 (v_c + v_s)^2for a= (13,14) , M_A^8,8 = 1/3 (v_u^2 + 2 v_s^2 ) ,M_A^15,15 = 1/12 (2 v_u^2 + v_s^2 + 9 v_c^2 ) , M_A^8,15 = M_A^15,8 = 1/3 √(2) (v_u^2-v_s^2) . In Eqs. (<ref>) and (<ref>) we have defined v_q (z) = ζ m_q z + 1/ζσ_q z^3 , q=(u,s,c). In the interesting case where all the masses and condensates are equal we have that (M_V^a)^2=0 and the SU(4) flavor symmetry is preserved. 
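The trace formulas for (M_V^a)^2 and M_A^{ab} can be checked numerically against the explicit values listed above. The sketch below builds SU(4) generators via the standard generalized Gell-Mann construction (not taken from the paper) and evaluates the traces at a fixed z, using arbitrary illustrative numbers for v_u, v_s, v_c:

```python
import numpy as np

def gellmann(n=4):
    """Standard generalized Gell-Mann matrices lambda^a for SU(n)."""
    def E(j, k):
        m = np.zeros((n, n), dtype=complex); m[j, k] = 1.0; return m
    lam = []
    for k in range(1, n):
        for j in range(k):
            lam.append(E(j, k) + E(k, j))
            lam.append(-1j*(E(j, k) - E(k, j)))
        d = np.zeros(n); d[:k] = 1.0; d[k] = -k
        lam.append(np.sqrt(2.0/(k*(k+1)))*np.diag(d).astype(complex))
    return lam

T = [0.5*lam for lam in gellmann(4)]      # T^a = lambda^a / 2

# values of v_q(z) at some fixed z -- arbitrary numbers for illustration
v_u, v_s, v_c = 0.7, 1.9, 12.0
X0 = np.diag([v_u, v_u, v_s, v_c])/2.0    # 2 X_0 = diag(v_u, v_u, v_s, v_c)

def MV2(a):
    """(M_V^a)^2 from -2 Tr([T^a, X_0][T^a, X_0]); a is 1-based as in the text."""
    c = T[a-1] @ X0 - X0 @ T[a-1]
    return float(np.real(-2.0*np.trace(c @ c)))

def MA(a, b):
    """M_A^{ab} from 2 Tr({T^a, X_0}{T^b, X_0}); 1-based indices."""
    A = T[a-1] @ X0 + X0 @ T[a-1]
    B = T[b-1] @ X0 + X0 @ T[b-1]
    return float(np.real(2.0*np.trace(A @ B)))

# vector masses: zero in the SU(2) and diagonal (a = 8, 15) channels,
# and (v_i - v_j)^2 / 4 in the flavor-changing channels
assert all(abs(MV2(a)) < 1e-12 for a in (1, 2, 3, 8, 15))
assert all(abs(MV2(a) - 0.25*(v_s - v_u)**2) < 1e-12 for a in (4, 5, 6, 7))
assert all(abs(MV2(a) - 0.25*(v_c - v_u)**2) < 1e-12 for a in (9, 10, 11, 12))
assert all(abs(MV2(a) - 0.25*(v_c - v_s)**2) < 1e-12 for a in (13, 14))

# axial masses, including the off-diagonal 8-15 mixing
assert all(abs(MA(a, a) - v_u**2) < 1e-12 for a in (1, 2, 3))
assert abs(MA(8, 15) - (v_u**2 - v_s**2)/(3.0*np.sqrt(2.0))) < 1e-12
```

The assertions reproduce every entry of the lists above, in particular the vanishing of (M_V^a)^2 in the a = 8, 15 channels that the text flags as an artifact of the heavy-heavy sector.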
In this paper we consider quark masses and condensates that lead to a realistic spectrum for the mesons so that we could explore the consequences of SU(4)flavor symmetry breaking. The kinetic term in Eq. (<ref>) allows us toextract the meson spectrum and decay constants whereas the action in Eq. (<ref>) leads to nontrivial predictions for three-meson couplings, including the heavy-light charmed mesons D and D^*.Note that the mass term for the vectorial sector (M_V^a)^2 is zero not only for a=(1,2,3), corresponding to the lightSU(2) sector but also for a=(8,15), which implies that flavor symmetry has not been broken in the sector describing the dynamics of the ω' and ψ mesons. This is one clear example of heavy-heavy mesons (mesons composed by a heavy quark-antiquark pair), where we actually expect some corrections to appear in (<ref>) describing flavor symmetry breaking. Those terms would arise from the dynamics of long open strings dual to heavy quarks, as explained above in this section.§ FIELD EQUATIONS AND DUAL CURRENTS Writing the kinetic action in Eq. (<ref>) as S^(2) = ∫ d^5 x L^(2), its variation takes the form δ S^(2) =δ S^(2)_ Bulk + δ S^(2)_ Bdy where δ S^(2)_ Bulk = ∫ d^5 x [( ∂ L^(2)/∂ V_ℓ^a - ∂_m P_V , a ^m ℓ )δ V_ℓ^a +( ∂ L^(2)/∂ A_ℓ^a - ∂_m P_A , a ^m ℓ )δ A_ℓ^a+( ∂ L^(2)/∂π^a - ∂_m P_π , a^m)δπ^a ] , δ S^(2)_ Bdy = ∫ d^5 x∂_m( P_V , a^m ℓδ V_ℓ^a + P_A , a^m ℓ δ A_ℓ^a + P_π , a^mδπ^a) ,andP_V , a ^m ℓ := ∂ L^(2)/∂ (∂_m V_ℓ^a)= - 1/g_5^2√(|g|) v^m ℓ_a,P_A , a ^m ℓ := ∂ L^(2)/∂ (∂_m A_ℓ^a)= - 1/g_5^2√(|g|) a^m ℓ_a ,P_π , a^m:= ∂ L^(2)/∂ ( ∂_m π^a )= M_A^ab√(|g|)(∂^m π_b - A^m_b ) ,are conjugate momenta to the 5-d fields V_m^a, A_m^a and π^a respectively. The bulk term in Eq. 
(<ref>) leads to the field equations ∂_m[ √(|g|) v^mn_a] + g_5^2 (M_V^a)^2 √(|g|) V^n_a = 0 , ∂_m[ √(|g|) a^mn_a] - g_5^2 M_A^ab√(|g|)(∂^n π_b - A^n_b ) = 0 , ∂_m[ M_A^ab√(|g|)(∂^m π_b - A^m_b )] = 0.Imposing the boundary conditions∂_z V_μ^a |_z=z_0 = ∂_z A_μ^a |_z=z_0 = 0 and ∂_z π^a |_z=z_0 = A_z |_z=z_0 = V_z |_z=z_0 =0 the boundary term (<ref>) reduces toδ S^(2)_ Bdy =- ∫ d^4 x[ ⟨ J^μ̂_V,a⟩ ( δ V_μ̂^a )_z = ϵ +⟨ J^μ̂_A,a⟩ (δ A_μ̂^a)_z = ϵ + ⟨ J_π,a⟩ (δπ^a )_z = ϵ ] ,where we find the dual 4-d currents ⟨ J^μ̂_V,a (x) ⟩ = P_V , a^z μ|_z = ϵ = - 1/g_5^2 ( √(|g|) v^z μ_a)_z = ϵ , ⟨ J^μ̂_A,a (x) ⟩ = P_A , a^z μ|_z = ϵ = - 1/g_5^2 ( √(|g|) a^z μ_a)_z = ϵ, ⟨ J_π,a (x) ⟩ = P_π , a^z |_z = ϵ =[ √(|g|) M_A^ab ( ∂^z π_b - A^z_b)]_z = ϵ = ∂_μ̂⟨ J^μ̂_A,a (x) ⟩ .Note that we distinguish the vector Minkowski indices μ̂ from the AdS indices μ. The results in Eqs. (<ref>), (<ref>) and (<ref>) define the holographic prescription for expectation values of the 4-d vector, axial and pion current operators. Note from Eqs. (<ref>) and (<ref>) that ∂_μ̂⟨ J^μ̂_V,a (x) ⟩≠ 0 when (M_V^a)^2 ≠ 0, i.e. the vector current is not conserved for those cases. Similarly, from Eqs. (<ref>) and (<ref>), one sees that∂_μ̂⟨ J^μ̂_A,a (x) ⟩≠ 0 for any a (because M_A^ab≠ 0), i.e. the axial current is never conserved.§ THE 4-D EFFECTIVE ACTION After decomposing the vector and axial fields into their (z,μ) components and evaluating the metric in Eq. (<ref>), the kinetic action in Eq. (<ref>) takes the formS^(2) =S^(2)_V + S^(2)_A,whereS^(2)_V = ∫ d^4 x ∫d z/z{- 1/4 g_5^2 [ (v_μ̂ν̂^a)^2 - 2 (v_z μ̂^a)^2]+ (M_V^a)^2/2 z^2 [ (V_μ̂^a)^2 - (V_z^a)^2] },andS^(2)_A = ∫ d^4 x ∫d z/z{ - 1/4 g_5^2 [ (a_μ̂ν̂^a)^2 - 2 (a_z μ̂^a)^2]+ M_A^ab/2z^2 [ (∂^μ̂π^a - A^μ̂, a ) (∂_μ̂π^b - A_μ̂^b)-(∂_z π^a - A_z^a ) (∂_z π^b - A_z^b)] }.The vector and axial sectors admit a decomposition in irreducible representations of the Lorentz group. 
For the vector sector we findV_μ̂^a=V_μ̂^⊥, a + ∂_μ̂(ϕ̃^a - π̃^a ), V_z^a=- ∂_z π̃^a, where ∂_μ̂ V_μ̂^⊥, a = 0. The 5-d field V_μ̂^⊥, a describes an infinite tower of 4-d massive spin 1 fields, i.e. the vector mesons, whereas the 5-d fields ϕ̃^a and π̃^a describe an infinite tower of massive scalar fields, i.e. scalar mesons associated with flavor symmetry breaking (FSB).On the other hand, the gauge symmetry in Eq. (<ref>) allows us to decompose the axial sector as A_μ̂^a→A_μ̂^⊥ , a ,A_z^a → - ∂_z ϕ^ a , π^a→ π^a - ϕ^ a ,where ∂_μ̂ A_μ̂^⊥, a = 0. This time the 5-d field A_μ̂^⊥ , a will describe an infinite tower of 4-d massive axial spin 1 fields, i.e. the axial vector mesons. The 5-d fields ϕ^a and π^a will describe an infinite tower of 4-d pseudoscalar fields, i.e the pions associated with chiral symmetry breaking (CSB).Using Eqs. (<ref>), (<ref>) and (<ref>)the actions in Eqs. (<ref>) and (<ref>) take the formS^(2)_V = ∫ d^4 x ∫dz/z{ - 1/4g_5^2 [ (v_μ̂ν^⊥,a)^2 - 2 (∂_z V_μ̂^⊥,a)^2 -2 (∂_z ∂_μ̂ϕ̃^a )^2] + (M_V^a)^2/2 z^2 [ ( V_μ̂^⊥, a )^2 + (∂_μ̂π̃^a - ∂_μ̂ϕ̃^a )^2-(∂_z π̃^a)^2]+ ∂_μ̂ (…) } ,S^(2)_A = ∫ d^4 x ∫dz/z - 1/4g_5^2 [ (a_μ̂ν^⊥,a)^2 - 2 (∂_z A_μ̂^⊥,a)^2-2 (∂_z ∂_μ̂ϕ^a )^2] + M_A^ab/2 z^2 [ A_⊥^μ̂,a A_μ̂^⊥, b +( ∂^μ̂π^a - ∂^μ̂ϕ^a) ( ∂_μ̂π^b - ∂_μ̂ϕ^b) - (∂_z π^a) (∂_z π^b)]+ ∂_μ̂ (…) },where the terms in (…) are surface terms that vanish after choosing periodic boundary conditions for the fields.The actions in Eqs. (<ref>) and (<ref>) are in a suitable form to perform a Kaluza-Klein expansion for the 5-d fields. Before performing this expansion note that the nondiagonal mass term M_A^8,15 induces a mixing in the axial sector for meson states with flavor indicesa=(8,15). In this paper we are mainly interested in the axial sector states with a=(9,..,14) and a=(1,2,3), corresponding to the heavy-light charmed mesons and the usual light mesons. Then from now on we will consider for the axial sector only those states where a ≠ (8,15). 
The axial sector states corresponding to a=(8,15) have an interesting physical interpretation, e.g η - η_c mixing for the pseudoscalar sector, and deserve a further study that will be pursued in a future project.The 5-d fields in the vector sector admit a Kaluza-Klein expansion of the formV_μ̂^⊥, a (x,z)=g_5v^a,n(z) V̂_μ̂^a,n (x),π̃^a(x,z)=g_5π̃^a,n(z) π̂_V^a,n(x) ,ϕ̃^a(x,z)=g_5ϕ̃^a,n(z) π̂_V^a,n(x) , ,where a sum from n=0 to n=∞ is implicit.A similar decomposition holds for the 5-d fields in the axial sector: A_μ̂^⊥, a (x,z)=g_5a^a,n(z) Â_μ̂^a,n (x),π^a(x,z)=g_5π^a,n(z) π̂^a,n(x),ϕ^a(x,z)=g_5ϕ^a,n(z) π̂^ a,n(x). Using these expansions the actions in Eqs. (<ref>) and (<ref>) factorize into z integrals and x integrals and we find S^(2)_V = ∫ d^4 xL_V and S^(2)_A = ∫ d^4 xL_A, with the vector and axial 4-d Lagrangians given by L_V = - 1/4Δ_V^a, nmv̂_μ̂ν̂^a,nv̂^μ̂ν̂_a,m +1/2 M_V^a,nmV̂_μ̂^a,nV̂^μ̂_a,m + 1/2Δ_π_V^a,nm (∂_μ̂π̂_V^a,n) (∂^μ̂π_V^a,m) - 1/2 M_π_V^a,nmπ̂_V^a,nπ̂_V^a,m,L_A = - 1/4Δ_A^a, nmâ_μ̂ν̂^a,nâ^μ̂ν̂_a,m +1/2M_A^a,nmÂ_μ̂^a,nÂ^μ̂_a,m + 1/2Δ_π^a,nm(∂_μ̂π̂^ a,n)(∂^μ̂π̂^a,m) -1/2M_π^a,nmπ̂^a,nπ̂^a,m,with coefficients defined by the z integrals Δ_V^a,nm = ∫dz/z v^a,n(z) v^a,m(z) ,M_V^a,nm = ∫dz/z{ [ ∂_z v^a,n(z) ] [ ∂_z v^a,m(z) ]+ β^a_V(z) v^a,n(z) v^a,m(z) } , Δ_π_V^a,nm = ∫dz/z{ [ ∂_z ϕ̃^a,n (z) ][ ∂_z ϕ̃^a,m (z) ]+ β^a_V(z) [ π̃^a,n(z) - ϕ̃^a,n(z) ] [ π̃^a,m(z) - ϕ̃^a,m(z) ] } ,M_π_V^a,nm = ∫dz/zβ_V^a(z) [ ∂_z π̃^a,n ] [ ∂_z π̃^a,m ],for the vector sector andΔ_A^a,nm = ∫dz/z a^a,n(z) a^a,m(z),M_A^a ,nm = ∫dz/z{ [ ∂_z a^a,n(z) ] [ ∂_z a^a ,m(z) ]+ β^a_A(z) a^a,n(z) a^a ,m(z) }, Δ_π^a,nm = ∫dz/z{ [ ∂_zϕ^a,n (z) ][ ∂_zϕ^a,m (z) ] + β^a_A(z) [π^a,n(z) -ϕ^a,n(z) ] [ π^a,m(z) -ϕ^a,m(z) ]},M_π^a,nm = ∫dz/zβ_A^a(z) [ ∂_zπ^a,n ] [ ∂_zπ^ a,m ],for the axial sector. Here we have definedβ_V^a := g_5^2/z^2(M_V^a)^2 , β_A^a := g_5^2/z^2 M_A^aa . In order to obtain standard kinetic terms in Eqs. 
(<ref>) and (<ref>) we impose the following conditions:Δ_V^a, n m = Δ_π_V^a, nm = Δ_A^a , nm = Δ_π^a , n m = δ^nm,M_V^a , nm = m_V^a,n^2 δ^nm ,M_π_V^a, nm = m_π_V^a,n^2 δ^nm,M_A^a , nm = m_A^a,n^2 δ^nm ,M_π^a, nm = m_π^a,n^2 δ^nm.The Lagrangians in Eqs. (<ref>) and (<ref>) then reduce toL_V= - 1/4v̂_μ̂ν̂^a,nv̂^μ̂ν̂_a,n +1/2 m_V^a,n^2 V̂_μ̂^a,nV̂^μ̂_a,n + 1/2(∂_μ̂π̂_V^a,n) (∂^μ̂π_V^a,n) - 1/2 m_π_V^a,n^2 π̂_V^a,nπ̂_V^a,n, L_A= - 1/4â_μ̂ν̂^a,nâ^μ̂ν̂_a,n +1/2m_A^a,n^2 Â_μ̂^a,nÂ^μ̂_a,n + 1/2(∂_μ̂π̂^a,n)(∂^μ̂π̂^a,n) -1/2m_π^a,n^2 π̂^a,nπ̂^a,n.The conditions for the Δ coefficients are normalization rules for the corresponding wavefunctions. The conditions for the masses are equivalent to the conditions for the Δ coefficientsif we impose the following equations :[ - ∂_z( 1/z∂_z) +1/zβ_V^a(z)]v^a,n(z)= m_V^a,n^2/z v^a,n(z) , β_V^a(z)/z[π̃^a,n(z) - ϕ̃^a,n(z) ] = - ∂_z[1/z∂_z ϕ̃^a,n(z)] , β_V^a(z)∂_z π̃^a,n(z) = m_π_V^a,n^2 ∂_z ϕ̃^a,n(z) ,for the vector sector and[ - ∂_z( 1/z∂_z) +1/zβ_A^a(z)]a^a,n(z) = m_A^ a,n^2/z a^a,n(z), β_A^ a(z)/z[π^a,n(z) - ϕ^a,n(z) ] = - ∂_z[1/z∂_z ϕ^a,n(z)] , β_A^a(z)∂_z π^a,n(z) = m_π^a,n^2 ∂_z ϕ^a,n(z) ,for the axial sector.We finish this section writing the SU(4) pseudoscalar and vectormeson matrices π̂ and V̂ in terms of the charged statesπ̂ = π̂^a T^a = 1/√(2) (π^0/√(2)+ η/√(6)+ η_c/√(12) π^+ K^+ D̅^0 π^--π^0/√(2)+ η/√(6)+ η_c/√(12) K^0 D^- K^- K̅^0- √(2/3)η + η_c/√(12) D_s^- D^0D^+ D_s^+- 3/√(12)η_c ) , V̂ = V̂^a T^a= 1/√(2) ( ρ^0/√(2) + ω'/√(6) + ψ/√(12) ρ^+ K^⋆ + D̅^⋆ 0 ρ^--ρ^0/√(2)+ ω'/√(6)+ ψ/√(12) K^⋆ 0 D^⋆ - K^⋆ - K̅^⋆ 0- √(2/3)ω' + ψ/√(12) D_s^⋆ - D^⋆ 0 D^⋆ + D_s^⋆ +- 3/√(12)ψ ),where in the last equation we have omitted the index μ for simplicity. § DECAY CONSTANTS, CSB AND FSB As observed in <cit.>, the simplest method for extracting the leptonic decay constants is to replace the fields in the r.h.s of the dual currents prescription,Eqs. (<ref>)-(<ref>), by theirKaluza-Klein expansions in Eqs. (<ref>)-(<ref>). 
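The Sturm–Liouville problems of the previous section determine the mode functions entering these expansions. For channels with β_V^a = 0 (the ρ, a = 1,2,3) the vector equation has the closed-form solution v ∝ z J_1(m z), and the Neumann IR condition ∂_z v(z_0) = 0 gives J_0(m z_0) = 0. A minimal shooting-method sketch (grid parameters and bisection bracket are our assumptions) reproduces this:

```python
def shoot(m2, z0=1.0, eps=1e-3, n=2000, beta=lambda z: 0.0):
    """RK4-integrate v'' = v'/z - (m2 - beta(z)) v from z = eps to z = z0,
    starting on the normalizable UV branch v ~ z^2; returns v'(z0).
    Eigenvalues m2 are the zeros of v'(z0) (Neumann IR condition)."""
    h = (z0 - eps)/n
    z, v, w = eps, eps**2, 2.0*eps
    def f(z, v, w):
        return w, w/z - (m2 - beta(z))*v
    for _ in range(n):
        k1v, k1w = f(z, v, w)
        k2v, k2w = f(z + h/2, v + h/2*k1v, w + h/2*k1w)
        k3v, k3w = f(z + h/2, v + h/2*k2v, w + h/2*k2w)
        k4v, k4w = f(z + h, v + h*k3v, w + h*k3w)
        v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
        w += h/6*(k1w + 2*k2w + 2*k3w + k4w)
        z += h
    return w

def lowest_mass(z0=1.0, lo=1.0, hi=9.0):
    """Bisect v'(z0) = 0 in m^2 over an assumed bracket for the lowest mode."""
    s_lo = shoot(lo, z0)
    for _ in range(50):
        mid = 0.5*(lo + hi)
        if shoot(mid, z0)*s_lo > 0.0:
            lo = mid
        else:
            hi = mid
    return (0.5*(lo + hi))**0.5

m_rho = lowest_mass()  # in units of 1/z0
print(m_rho)           # ~ 2.4048, the first zero of J_0
```

Fixing z_0 from the physical m_ρ = 775.5 MeV gives z_0^{-1} ≈ 322 MeV, as in the original hard-wall model; the strange and charm channels are obtained by supplying the nonzero β_V^a built from the v_q(z) profiles.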
We find:⟨ J^μ̂_V,a(x) ⟩ =[1/g_5 z∂_z v^a,n(z) ]_z=ϵV̂^μ̂_a,n(x) +[1/g_5 z∂_z ϕ̃^a,n(z) ]_z=ϵ∂^μ̂π̂^a,n_V(x) , ⟨ J^μ̂_A,a(x) ⟩ =[1/g_5 z ∂_z a^ a ,n(z)]_z=ϵÂ^μ̂_ a,n(x)+[ 1/g_5 z∂_z ϕ^ a , n(z)]_z=ϵ∂^μ̂π̂^a ,n(x) , ∂_μ̂⟨ J^μ̂_A, a (x) ⟩ = ⟨ J_Π , a(x) ⟩ =- [β^a_A (z) /g_5 z∂_z π^ a, n(z)]_z=ϵπ̂^a , n(x) ,where a sum from n=0 to n=∞ is implicit. In the expansions (<ref>)-(<ref>) the 4-d fields V̂^μ̂_a,n(x) , A^μ̂_a,n(x) ,π̂^a,n_V(x) and π̂^a̅ , n(x) are on-shell. From these expansions we find the holographic prescription for leptonic decay constants:g_V^a,n =[1/g_5 z∂_z v^a,n(z) ]_z=ϵ, f_π_V^a,n =-[1/g_5 z∂_z ϕ̃^a,n(z) ]_z=ϵ, g_A^a , n =[1/g_5 z ∂_z a^a ,n(z)]_z=ϵ, f_π^ a , n =-[ 1/g_5 z∂_z ϕ^ a , n(z)]_z=ϵ.Taking the divergence of Eqs. (<ref>) and (<ref>) we find ∂_μ̂⟨ J^μ̂_V,a(x) ⟩ = f_π_V^a,n m_π_V^a,n^2 π̂^a,n_V(x),and ∂_μ̂⟨ J^μ̂_A,a(x) ⟩ = f_π^a,n m_π^a,n^2 π̂^a,n(x) ,where a sum from n=0 to n=∞ is implicit. Equation (<ref>) is a generalization of the partially conserved axial current relation (PCAC), which encodes the effect of chiral symmetry breaking (CSB) in the current algebra. Equation (<ref>) encodes the effect of flavor symmetry breaking (FSB) in the vector current. Interestingly, the scalar mesons π̂^a,n_V(x) of FSBand the pionsπ̂^a,n(x) of CSB appear in a similar way in Eqs. (<ref>) and (<ref>), respectively.§ COUPLING CONSTANTS AND FORM FACTORS The three-point interactions are described by the 5-d action in Eq. (<ref>). After decomposing the fields into their (z,μ) components and evaluating the metric in Eq. 
(<ref>), the action takes the form S_3 = S_VVV + S_VAA + S_V Aπ + S_Vππ,whereS_VVV =- 1/2g_5^2 f^abc∫ d^4 x ∫dz/z [ v^μ̂ν̂_a V_μ̂^b V_ν̂^c-2 v_z μ̂^a V_z^b V^μ̂_c ] ,S_VAA = 1/2 g_5^2f^abc∫ d^4 x ∫dz/z [ v^μ̂ν̂_a A_μ̂^b A_ν̂^c - 2 v_z μ̂^a A_z^b A_c^μ̂ -2 a^μ̂ν̂_a V_μ̂^b A_ν̂^c + 2 a_z μ̂^a (V_z^b A^μ̂_c - V^μ̂_b A_z^c ) ] ,S_VAπ = 1/g_5^2 f^abc∫ d^4 x ∫dz/z [ β_V^b(z)- β_A^a (z) ] ×[ - A_z^a V_z^b + A_μ̂^a V^μ̂_b ] π^c ,S_Vππ = 1/2g_5^2 f^abc∫ d^4 x ∫dz/z [ -β_V^b(z) + 2 β_A^a(z) ]×[ - (∂_z π^a) V_z^b + (∂_μ̂π^a ) V^μ̂_b] π^ c .Here we are interested in the following 4-d triple couplingsS_V̂V̂V̂ =g__V̂^a , ℓV̂^b, mV̂^c,n∫ d^4 xV̂^μ̂_a ,ℓ v̂_μ̂ν̂^b, m V^ν̂_c,n,S_V̂π̂π̂ =g__π̂^a,ℓV̂^b,mπ̂^c,n∫ d^4 x (∂_μ̂π̂^a , ℓ ) V̂^μ̂_b,mπ̂^c,n ,where a sum over a,b,c as well as ℓ, m, n is implicit.Using (<ref>)-(<ref>)in (<ref>), (<ref>) and (<ref>) as well as the KK expansions (<ref>), (<ref>) and (<ref>)we find that g__V̂^a , ℓV̂^b, mV̂^c,n = g_5/2 f^abc∫dz/z v^a,ℓ (z) v^b,m(z) v^c,n(z) , g__π̂^ a,ℓV̂^b,mπ̂^c,n = g_5/2 f^a b c∫dz/z{ 2 (∂_z ϕ^ a , ℓ) v^b,m (∂_z ϕ^ c ,n)+[ - β_V^b (z) + 2 β_A^ a (z)] (π^a,ℓ- ϕ^a,ℓ) v^b,m ( π^c,n - ϕ^c,n) } . In order to compare our results with chiral theory models we rewrite the 3-point interactions in Eqs. (<ref>) and (<ref>) as S_V̂V̂V̂ =2 f^abcg̅__V̂^a , ℓV̂^b, mV̂^c,n∫ d^4 xV̂^μ̂_a ,ℓ (∂_μ̂V̂_ν̂^b, m)V^ν̂_c,n,S_V̂π̂π̂ =f^abcg̅__π̂^a,ℓV̂^b,mπ̂^c,n∫ d^4 xV̂^μ̂_b,m [ (∂_μ̂π̂^a , ℓ ) π̂^c,n -(∂_μ̂π̂^c , n ) π̂^a,ℓ ], where g̅__V̂^a , ℓV̂^b, mV̂^c,n = g_5/2∫dz/z v^a,ℓ (z) v^b,m(z) v^c,n(z) ,and g̅__π̂^a,ℓV̂^b,mπ̂^c,n = g_5/8∫dz/zv^b,m{ 4 (∂_z ϕ^a , ℓ)(∂_z ϕ^c ,n)+[ - 2 β_V^b (z) + 2 (β_A^a(z) +β_A^c(z))]×(π^a,ℓ- ϕ^a,ℓ)( π^c,n - ϕ^c,n) } .To arrive at Eqs. (<ref>) and (<ref>), we have integrated by parts the actions in Eqs. (<ref>) and (<ref>) and also used the transversality of the vector mesons (∂_μ̂V̂^μ̂_a ,ℓ=0). Note that the coupling in Eq. 
(<ref>) is symmetric when interchanging the pion flavor indices a and c, as required by crossing symmetry.We are interested in describing strong couplings involvingcharmed mesons D and D^*, strange mesons Kand K^* as well as light mesons π and ρ. Then in (<ref>) and (<ref>)we select only a=(1,..,7) and a=(9,..,12) for the pseudoscalar mesons π^a, whereas for the vector mesons we pick a=(1..,7), a=(9,..,12) and a=(8,15). The other states are taken to zero.The reason we include a=(8,15) in the vectorial sector is because it contributes to the electromagnetic form factors of the D and D^* as shown below in this section.Using also the results in (<ref>)-(<ref>) and evaluating the SU(4) structure constants f^abc we arrive at the effective Lagrangians L_V ππ =L_π D^* D +L_ρ D D +L_ω' D D +L_ψ D D +L_π K^* K+L_ρ K K + L_ω' K K + L_ρππ, L_VVV =L_ρ D^* D^* + L_ω' D^* D^* + L_ψ D^* D^* +L_ρ K^* K^* +L_ω' K^* K^* + L_ρρρ ,where L_π D^* D =i√(2)g_πD^* D[D_μ^*+(D̅^0∂^μπ ^-) + D_μ^*-( π^+∂^μD^0 ) + D_μ^*0(D^-∂^μπ ^+) + D̅_μ^*0( π^-∂^μ D^+)] + i g_π D^* D [ D_μ^*+( π^0 ∂^μ D^-) + D_μ^*-( D^+∂^μπ^0 ) + D_μ^*0(D̅^0 ∂^μπ^0 ) + D̅_μ^*0(π ^0 ∂^μ D^0 )], L_ρ D D =i√(2)g_ρ D D [ ρ_μ^+(D^0 ∂^μD^-) + ρ_μ^-( D^+∂^μD̅^0 ) ] + ig_ρ DD [ ρ_μ^0(D^-∂^μD^+) + ρ_μ^0(D^0∂^μD̅^0 ) ], L_ω' D D = i/√(3)g_ω' D D [ ω'_μ ( D^+∂^μ D^- )+ ω'_μ ( D^0 ∂^μD̅^0) ], L_ψ D D = i √(8/3)g_ψ D D [ ψ_μ ( D^+∂^μ D^- ) + ψ_μ ( D^0 ∂^μD̅^0)], L_ρ D^* D^* =i√(2)g_ρ D^* D^* [ D_μ^*+ (ρ^-_ν∂^μD̅^ν_*0 ) +D_μ^*- ( D_ν^*0∂^μρ_+^ν ) +D_μ^*0 ( ρ^+_ν∂^μ D^ν_*- ) + D̅_μ^*0 ( D_ν^*+∂^μρ_-^ν ) + ρ_μ^+ ( D^*-_ν∂^μD^ν_*0 ) + ρ_μ^- ( D̅_ν^*0∂^μ D_*+^ν ) ] + i g_ρ D^*D^* [D_μ^*+ ( D_ν^*-∂^μρ_0^ν )+ D_μ^*- ( ρ^0_ν∂^μD^ν_*+ ) + D_μ^*0 ( ρ^0_ν∂^μD̅^ν_*0 ) + D̅_μ^*0 ( D_ν^*0∂^μρ _0^ν ) +ρ_μ^0( D_ν^*+∂^μ D_*-^ν ) + ρ_μ^0( D̅_ν^*0∂^μD_*0^ν ) ], L_ω' D^* D^* =- i/√(3) g_ω' D^* D^* [D_μ^*+ ( D_ν^*-∂^μω'^ν ) + D_μ^*- ( ω'^ν∂^μD_ν^*+ ) + D_μ^*0 ( D̅_ν^*0∂^μω'^ν )+ D̅_μ^*0 ( ω'^ν∂^μ D_ν^*0 ) + ω'_μ (D_ν^*+∂^μ D_*-^ν ) + ω'_μ (D_ν^*0∂^μD̅_*0^ν 
)],L_ψ D^* D^* =- i √(8/3) g_ψ D^* D^* [D_μ^*+ ( D_ν^*-∂^μψ^ν ) + D_μ^*- ( ψ^ν∂^μ D_ν^*+ ) +D_μ^*0 ( D̅_ν^*0∂^μψ^ν ) + D̅_μ^*0 ( ψ^ν∂^μ D_ν^*0 ) + ψ_μ (D_ν^*+∂^μ D_*-^ν ) + ψ_μ (D_ν^*0∂^μD̅_*0^ν )],L_π K^* K =i √(2)g_π K^* K [ K_μ^*+ ( π^-∂^μK̅^0) + K_μ^*- ( K^0 ∂^μπ^+ ) + K_μ^*0 ( π^+∂^μ K^- ) + K̅_μ^*0 ( K^+∂^μπ^- )] +i g_π K^* K [ K_μ^*+ ( π^0 ∂^μ K^- ) + K_μ^*- ( K^+∂^μπ^0) + K_μ^*0 ( K̅^0∂^μπ^0) + K̅_μ^*0 ( π^0 ∂^μ K^0)] ,L_ρ K K =i √(2)g_ρ K K [ ρ_μ^+ ( K^-∂^μ K^0) + ρ_μ^- ( K̅^0 ∂^μ K^+ )] + i g_ρ K K [ ρ_μ^0( K^-∂^μ K^+ ) + ρ_μ^0( K^0 ∂^μK̅^0)],L_ω' K K =i √(3)g_ω' K K [ ω'_μ ( K^-∂^μ K^+ ) +ω'_μ ( K̅^0 ∂^μ K^0)] , L_ρ K^* K^* =i √(2) g_ρ K^* K^* [ K_μ^*+ ( K̅_ν^*0∂^μρ_-^ν ) + K_μ^*- ( ρ_ν^+∂^μ K_ν^*0 ) + K_μ^*0 ( K_ν^*-∂^μρ_+^ν ) + K̅_μ^*0 ( ρ_ν^-∂^μK_*+^ν ) + ρ_μ^+ ( K_ν^*0∂^μ K_*-^ν ) + ρ_μ^- ( K_ν^*+∂^μK̅_*0^ν )] + ig_ρ K^* K^* [ K_μ^*+ ( K_ν^*-∂^μρ_0^ν ) + K_μ^*- ( ρ_ν^0 ∂^μ K_*+^ν ) +K_μ^*0 ( ρ_ν^0 ∂^μK̅_*0^ν ) + K̅_μ^*0 ( K_ν^*0∂^μρ_0^ν ) +ρ_μ^0( K_ν^*+∂^μ K_*-^ν ) + ρ_μ^0( K̅_ν^*0∂^μ K_*0^ν )] , L_ω' K^* K^* =i √(3)g_ω' K^* K^* [ K_μ^*+ ( K_ν^*-∂^μω'^ν )+ K_μ^*- ( ω'_ν∂^μ K_*+^ν ) + K_μ^*0 ( K̅_ν^*0∂^μω'^ν ) + K̅_μ^*0 (ω'_ν∂^μ K_*0^ν ) + ω'_μ ( K_ν^*+∂^μ K_*-^ν ) + ω'_μ ( K_ν^*0∂^μK̅_*0^ν )],L_ρππ =i g_ρππ [ρ_μ^+(π^0 ∂^μπ^-) +ρ_μ^-(π^+∂^μπ^0 ) + ρ_μ^0(π^-∂^μπ ^+)],L_ρρρ =i g_ρρρ[ ρ_μ^+ ( ρ_ν^-∂^μρ_0^ν )+ ρ_μ^- ( ρ_ν^0 ∂^μρ_+^ν ) + ρ_μ^0( ρ _ν^+∂^μρ_-^ν ) ] .In the above, the couplings are given by g__π D^* D = g̅__π̂^aV̂^bπ̂^c ,a=(1,2,3),(b,c)=(9,..,12),g__ρ D D = g̅__π̂^aV̂^bπ̂^c ,(a,c)=(9,..,12), b =(1,2,3),g__ω' D D = g̅__π̂^aV̂^bπ̂^c ,(a,c)=(9,..,12), b =8,g__ψ D D = g̅__π̂^aV̂^bπ̂^c ,(a,c)=(9,..,12), b =15,g__ρ D^* D^* = g̅__V̂^a V̂^b V̂^c , a=(1,2,3) , (b,c)=(9,..,12), g__ω' D^* D^* = g̅__V̂^a V̂^b V̂^c , a=8 , (b,c)=(9,..,12),g__ψ D^* D^* = g̅__V̂^a V̂^b V̂^c , a=15 , (b,c)=(9,..,12),g__π K^* K = g̅__π̂^aV̂^bπ̂^c ,a=(1,2,3),(b,c)=(4,..,7),g__ρ K K = g̅__π̂^aV̂^bπ̂^c ,(a,c)=(4,..,7), b =(1,2,3),g__ω' K K = g̅__π̂^aV̂^bπ̂^c ,(a,c)=(4,..,7), b 
=8,g__ρ K^* K^* = g̅__V̂^a V̂^b V̂^c , a=(1,2,3) , (b,c)=(4,..,7), g__ω' K^* K^* = g̅__V̂^a V̂^b V̂^c , a=8 , (b,c)=(4,..,7),g__ρππ =2 g̅__π̂^aV̂^bπ̂^c , (a,b,c) =(1,2,3),g__ρρρ = 2 g̅__V̂^a V̂^bV̂^c , (a,b,c) =(1,2,3). We have used the double arrow derivative f∂^μg := f (∂^μ g) - (∂^μ f) g and, for simplicity, we have omitted the indices ℓ, m, n that distinguish the fundamental states from the corresponding resonances. The Lagrangians in Eqs. (<ref>) and (<ref>) are typically used in the phenomenology of charmed mesons—see e.g. Ref. <cit.>. In the limit where the quark masses and condensates are equal, flavor symmetry is recovered and the couplings satisfy the relations g_π D^* D = g_ρ D D= g_ω' D D = g_ψ D D =g_π K^* K = g_ρ K K = g_ω' K K= 1/2 g_ρππ =: g/4 ,g_ρ D^* D^* = g_ω' D^* D^* =g_ψ D^* D^* =g_ρ K^* K^* = g_ω' K^* K^* = 1/2 g_ρρρ =: g̃/4 .In this case all the couplings can be obtained from the interaction terms i g Tr (∂^μπ [ π, V_μ]) and i g̃ Tr (∂^μ V^ν [ V_μ , V_ν ] )—see e.g. Ref. <cit.>. §.§ Electromagnetic form factors The effective Lagrangian (<ref>) describes the interaction between a vector meson and two pseudoscalar mesons. If the vector meson is off-shell and the pseudoscalar mesons are on-shell, we can use (<ref>) to investigate the electromagnetic (EM) form factors of pseudoscalar mesons. Similarly, using the effective Lagrangian (<ref>) and taking one of the vector mesons off-shell, we can investigate the EM form factors of vector mesons.
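The flavor coefficients appearing in the Lagrangians above (the factors of √2, √3, etc.) descend from the SU(4) structure constants f^abc. The following sketch — hypothetical helper code, not from the paper — builds the generalized Gell-Mann generators in the standard ordering matching the index assignments used here (pions a=1,2,3; kaons a=4,..,7; charmed mesons a=9,..,12) and evaluates f^abc numerically:

```python
import numpy as np

def _offdiag(j, k, kind):
    """Off-diagonal generalized Gell-Mann matrix for the (j,k) pair."""
    m = np.zeros((4, 4), complex)
    if kind == "s":                       # symmetric lambda
        m[j, k] = m[k, j] = 1.0
    else:                                 # antisymmetric lambda
        m[j, k], m[k, j] = -1j, 1j
    return m

# lambda_1 .. lambda_15 in the standard ordering, basis (u, d, s, c)
L = {}
L[1], L[2] = _offdiag(0, 1, "s"), _offdiag(0, 1, "a")
L[3] = np.diag([1, -1, 0, 0]).astype(complex)
L[4], L[5] = _offdiag(0, 2, "s"), _offdiag(0, 2, "a")
L[6], L[7] = _offdiag(1, 2, "s"), _offdiag(1, 2, "a")
L[8] = np.diag([1, 1, -2, 0]).astype(complex) / np.sqrt(3)
L[9], L[10] = _offdiag(0, 3, "s"), _offdiag(0, 3, "a")
L[11], L[12] = _offdiag(1, 3, "s"), _offdiag(1, 3, "a")
L[13], L[14] = _offdiag(2, 3, "s"), _offdiag(2, 3, "a")
L[15] = np.diag([1, 1, 1, -3]).astype(complex) / np.sqrt(6)
T = {a: l / 2 for a, l in L.items()}      # generators, Tr(T_a T_b) = delta_ab / 2

def f_abc(a, b, c):
    """Structure constant via f^abc = -2i Tr([T_a, T_b] T_c)."""
    comm = T[a] @ T[b] - T[b] @ T[a]
    return float((-2j * np.trace(comm @ T[c])).real)
```

For instance, f_abc(1,2,3) = 1 and f_abc(4,5,8) = √3/2 reproduce the familiar SU(3) values, while charm-sector constants such as f^{9,10,3} = 1/2 fix the relative normalizations of the ρDD-type vertices.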
The (elastic) EM form factors of pseudoscalar mesons appear in the decomposition of the EM current as ⟨π^a (p+q) | J^μ_EM (0) |π^a (p) ⟩ =(2p + q)^μ F_π^a(q^2).Similarly, the (elastic) EM form factors of vector mesons appear as the Lorentz scalars in the EM current decomposition <cit.> ⟨ V^a (p+q),ϵ' | J^μ_EM (0) | V^a (p),ϵ⟩ = -(ϵ' ·ϵ) (2p + q)^μ F_V^a^1(q^2)+ [ϵ'^μ(ϵ· q)-ϵ^μ(ϵ' · q)] [F_V^a^1(q^2) + F_V^a^2(q^2)]+ 1/M_V^a^2(q ·ϵ')(q ·ϵ)(2p + q)^μ F_V^a^3(q^2).Linear combinations of the form factors in (<ref>) define the so-called electric, magnetic and quadrupole form factors:F_V^a^E =F_V^a^1 + q^2/6 M_V^a^2[F_V^a^2 - (1-q^2/4M_V^a^2) F_V^a^3 ] ,F_V^a^M =F_V^a^1 + F_V^a^2 ,F_V^a^Q =- F_V^a^2 + (1-q^2/4M_V^a^2) F_V^a^3.In the absence of baryonic number the EM current is obtained from a linear combination of flavor currents, i.e.J^μ_EM (x) = ∑_a=(3,8,15) c_a J^μ_a (x),where the coefficients c_a can vary depending on the quarks that are considered in the EM current. When considering the EM form factors of the heavy-light D and D^* charmed mesons, the strange quark does not participate in the process and we can define the EM current asJ^μ_EM = 2/3u̅γ^μ u - 1/3d̅γ^μ d + 2/3c̅γ^μ c .Then the EM current can be decomposed as in (<ref>) with coefficients c_3=1, c_8=7/(3√(3)) and c_15=- 8/(3√(6)), up to the strangeness current, which does not contribute when evaluating the current at the external states. On the other hand, when evaluating the EM form factors for the strange K and K^* as well as the light mesons π and ρ, we define the EM current asJ^μ_EM = 2/3u̅γ^μ u - 1/3d̅γ^μ d - 1/3s̅γ^μ s ,which admits the decomposition (<ref>) with the coefficients c_3=1, c_8 = 1/√(3) and c_15=0.As explained in the previous section, each flavor current admits a decomposition in terms of vector mesons. This implies from (<ref>) that the photon decays into ρ^0,n, ω'^n and ψ^n mesons.
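The coefficients c_3 = 1, c_8 = 7/(3√3) and c_15 = -8/(3√6) quoted above can be checked by expanding the quark charge matrix in the diagonal SU(4) generators. A small sketch (standard generator normalizations assumed):

```python
import numpy as np

# diagonal SU(4) generators in the (u, d, s, c) basis
T3  = np.diag([1.0, -1.0, 0.0, 0.0]) / 2.0
T8  = np.diag([1.0, 1.0, -2.0, 0.0]) / (2.0 * np.sqrt(3.0))
T15 = np.diag([1.0, 1.0, 1.0, -3.0]) / (2.0 * np.sqrt(6.0))

# charm-sector current: c_3 = 1, c_8 = 7/(3 sqrt 3), c_15 = -8/(3 sqrt 6)
Q_charm = T3 + 7.0 / (3.0 * np.sqrt(3.0)) * T8 - 8.0 / (3.0 * np.sqrt(6.0)) * T15
# the u, d and c entries reproduce the quark charges 2/3, -1/3, 2/3; the s
# entry differs by a multiple of the strangeness current, which decouples here
print(np.diag(Q_charm))

# light/strange current: c_3 = 1, c_8 = 1/sqrt(3), c_15 = 0 reproduces
# the charges of u, d and s exactly (the charm current decouples)
Q_light = T3 + T8 / np.sqrt(3.0)
print(np.diag(Q_light))
```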
This is a holographic realization of generalized vector meson dominance (GVMD) <cit.>, in that the resonances are also included, and not only the fundamental states as in VMD. For the pion and ρ meson, only the states ρ^0,n contribute to the EM form factors. In the case of the strange mesons K and K^*, the states ρ^0,n and ω'^n contribute to the EM form factors, whereas in the case of the charmed D and D^* mesons we have contributions from ρ^0,n, ω'^n and ψ^n. In our model it turns out that the states ω'^n and ψ^n, as well as their couplings to external states, are identical to the states ρ^0,n. Although this implies an unrealistic spectrum for those mesons, their contribution to the EM form factors is not only required by consistency but also leads to reasonable results, consistent with either experimental or lattice QCD data, as we will show below.Using the Feynman rules for the vector meson propagator in (<ref>) and the triple vertex (<ref>) as well as the EM current decomposition (<ref>), with the appropriate coefficients, we extract the (elastic) EM form factors for the pion, kaon and D meson:F_π(Q^2) = ∑_ng_ρ^n g_ρ^n ππ/m_ρ^n^2 + Q^2, F_K(Q^2)= ∑_n [g_ρ^n g_ρ^n K K/m_ρ^n^2 + Q^2 +g_ω'^n g_ω'^n K K/m_ω'^n^2 + Q^2 ] =2 ∑_ng_ρ^n g_ρ^n K K/m_ρ^n^2 + Q^2, F_D(Q^2)= ∑_n [g_ρ^n g_ρ^n D D/m_ρ^n^2 + Q^2 - 7/9g_ω'^n g_ω'^n D D/m_ω'^n^2 + Q^2 + 16/9g_ψ^n g_ψ^n D D/m_ψ^n^2 + Q^2 ] = 2 ∑_ng_ρ^n g_ρ^n D D/m_ρ^n^2 + Q^2,where Q^2 = - q^2. The second equalities in (<ref>) and (<ref>) come from the identification of the states ω'^n and ψ^n with the states ρ^0,n.
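Numerically, each of these form factors is a truncated sum over vector-meson poles. The sketch below — with purely illustrative masses and residues a_n = g_ρ^n g_ρ^n ππ, not the fitted values of this model — shows how such a sum is evaluated, how the sum rule enforces F(0) = 1, and how the charge radius follows from the slope at Q² = 0:

```python
import numpy as np

def pole_sum(Q2, a, m):
    """F(Q^2) = sum_n a_n / (m_n^2 + Q^2), with a_n = g_{rho^n} g_{rho^n X X}."""
    Q2 = np.asarray(Q2, dtype=float)
    return sum(an / (mn**2 + Q2) for an, mn in zip(a, m))

# illustrative 3-state tower (GeV); residues chosen to satisfy the
# sum rule sum_n a_n / m_n^2 = 1, i.e. F(0) = 1
m = np.array([0.776, 1.600, 2.300])
a = np.array([0.70, 0.20, 0.10]) * m**2

F0 = pole_sum(0.0, a, m)          # = 1 by construction
r2 = 6.0 * np.sum(a / m**4)       # <r^2> = -6 dF/dQ^2 at Q^2 = 0, in GeV^-2
r2_fm2 = r2 * 0.1973**2           # convert with hbar*c = 0.1973 GeV fm
```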
Similarly, for the vector sector, we use the Feynman rules associated with the triple vertex (<ref>) and the vector meson propagator in (<ref>) to extract the (elastic) EM form factors for the ρ meson, K^* meson and D^* meson:F_ρ^1 = F_ρ^2 = F_ρ(Q^2)= ∑_ng_ρ^n g_ρ^n ρρ/m_ρ^n^2 + Q^2, F_K^*^1 = F_K^*^2 = F_K^*(Q^2)=2 ∑_ng_ρ^n g_ρ^n K^* K^*/m_ρ^n^2 + Q^2, F_D^*^1 = F_D^*^2 = F_D^*(Q^2)=2 ∑_ng_ρ^n g_ρ^n D^* D^*/m_ρ^n^2 + Q^2, F_ρ^3 =F_K^*^3= F_D^*^3=0.The electric, magnetic and quadrupole form factors are obtained using (<ref>). §.§ Low and high Q^2 At low Q^2 the EM form factor of a pseudoscalar meson can be expanded asF_π^a(Q^2)= 1 -1/6⟨ r_π^a^2⟩ Q^2 + ... ,where the second term defines the charge radius. A similar expression follows for the vector mesons. Notice that we have used the relation F_π^a(0)=1, which is due to charge conservation. In fact, the relation F_π^a(0)=1 follows nicely from the sum rules∑_ng_ρ^n g_ρ^n ππ/m_ρ^n^2 =2 ∑_ng_ρ^n g_ρ^n K K/m_ρ^n^2 =2 ∑_ng_ρ^n g_ρ^n D D/m_ρ^n^2 = 1 .These sum rules can be proven using the equations of motion and the completeness relation of the vector mesons, as well as the normalization of the external states.For the vector mesons the electric radius is obtained from the electric form factor as ⟨ r_V^a^2 ⟩ = -6 . d F_V^a^E(Q^2)/dQ^2| _Q^2=0.The magnetic μ and quadrupole D moments of the vector mesons in our model take the canonical valuesμ =F_V^a^M(0) =2 ,D= 1/M_V^a^2F_V^a^Q(0)= - 1/M_V^a^2 ,where we have used the relation F_V^a(0)=1, which follows from the sum rules∑_ng_ρ^n g_ρ^n ρρ/m_ρ^n^2 =2 ∑_ng_ρ^n g_ρ^n K^* K^*/m_ρ^n^2 =2 ∑_ng_ρ^n g_ρ^n D^* D^*/m_ρ^n^2=1.Again, these sum rules follow from the equations of motion and completeness of the ρ^n states and the normalization of the external states.In fact, the sum rules in Eqs. (<ref>) and (<ref>) are universal in bottom-up and top-down holographic models for QCD. A discussion of these sum rules in the top-down approach can be found in Ref.
<cit.>. In the regime of large Q^2, the EM form factors of pseudoscalar mesons can be expanded asF_π(Q^2) = 1/Q^2∑_n=0^∞ g_ρ^ng_ρ^n ππ [ 1 - m_ρ^n^2/Q^2 + … ] ,F_K(Q^2) = 2/Q^2∑_n=0^∞ g_ρ^ng_ρ^n K K[ 1 - m_ρ^n^2/Q^2 + … ] ,F_D(Q^2) = 2/Q^2∑_n=0^∞ g_ρ^ng_ρ^n D D[ 1 - m_ρ^n^2/Q^2 + … ] . A similar expansion holds for the EM form factors of vector mesons:F_ρ(Q^2) = 1/Q^2∑_n=0^∞ g_ρ^ng_ρ^n ρρ [ 1 - m_ρ^n^2/Q^2 + … ] ,F_K^*(Q^2) = 2/Q^2∑_n=0^∞ g_ρ^ng_ρ^n K^* K^* [ 1 - m_ρ^n^2/Q^2 + … ] ,F_D^*(Q^2) = 2/Q^2∑_n=0^∞ g_ρ^ng_ρ^n D^* D^* [ 1 - m_ρ^n^2/Q^2 + … ] . In the next section we present our predictions for the couplings and form factors involving the pions, kaons, ρ mesons and K^* mesons, as well as the charmed mesons D and D^*. For the D and D^* EM form factors we compare our results against data from lattice QCD. Using (<ref>) and (<ref>) we will also extract the charge radii of all those mesons and compare against experimental or lattice QCD data. Last but not least, the high-Q^2 behavior of the form factors in Eqs. (<ref>)-(<ref>) will be checked and compared with perturbative QCD calculations.§ RESULTS AND COMPARISON WITH LATTICE QCD In this section we present our numerical results for the spectrum, decay constants, coupling constants and EM form factors involving the charmed mesons. It is convenient to define unnormalized wave functions for the scalar mesons (ϕ̃_U^a,n and π̃_U^a,n), pseudoscalar mesons (ϕ_U^a,n and π_U^a,n), vector mesons (v_U^a,n) and axial vector mesons (a_U^a,n) so that the first coefficient in the near boundary expansion is fixed arbitrarily (due to the linearity of the differential equations). Eqs. (<ref>) and (<ref>) dictate the near boundary behavior of the unnormalized wave functions: ϕ̃^a,n_U (z)= -z^2 + …, π̃^a,n_U (z) =-m_π_V^a,n^2/β_V^a(0)z^2 + … , ϕ^a,n_U (z)= -z^2 + …, π^a,n_U (z) =-m_π^a,n^2/β_A^a(0)z^2 + …,v_U^a,n(z)=z^2 + …,a_U^a,n(z) = z^2 + ….In Eq.
(<ref>), the first coefficients were fixed to 1 or -1 to guarantee a positive sign for the decay constants in Eqs. (<ref>)-(<ref>) and positive normalization constants. The normalized wave functions take the formϕ̃^a,n(z)=N_π_V^a,nϕ̃_U^a,n(z) , π̃^a,n(z) = N_π_V^a,nπ̃_U^a,n(z) , ϕ^a,n(z)=N_π^a,nϕ_U^a,n(z) , π^a,n(z) = N_π^a,nπ_U^a,n(z) , v^n,a(z) =N_V^a,n v_U^a,n (z),a^n,a(z) = N_A^a,n a_U^a,n (z),with the normalization constants defined by the integralsN_π_V^a,n^-2 = ∫dz/z{ ( ∂_zϕ̃_U^a,n (z) )^2 + β^a_V(z) (π̃_U^a,n(z) -ϕ̃_U^a,n(z) )^2},N_π^a,n^-2 = ∫dz/z{ ( ∂_z ϕ_U^a,n (z) )^2 + β^a_A(z) ( π_U^a,n(z) - ϕ_U^a,n(z) )^2},N_V^a,n^-2 = ∫dz/z (v_U^a,n(z) )^2,N_A^a,n^-2 = ∫dz/z (a_U^a,n(z) )^2.The spectrum of vector mesons and scalar mesons is obtained by solving Eqs. (<ref>) and imposing Neumann boundary conditions at the hard wall z=z_0. Similarly, the spectrum of axial vector mesons and pseudoscalar mesons is obtained by solving Eqs. (<ref>) and imposing Neumann boundary conditions at the hard wall.On the other hand, using Eqs. (<ref>) and (<ref>) we find that the meson decay constants, defined in Eqs. (<ref>)-(<ref>), are determined by the normalization constants through the relations f_π_V^a,n = 2/g_5N_π_V^a,n ,f_π^a,n = 2/g_5N_π^a,n,g_V^a,n = 2/g_5 N_V^a,n ,g_A^a,n = 2/g_5 N_A^a,n.Having described the procedure for finding the meson spectrum and decay constants, we now describe how we fit the parameters of our model, namely the quark masses m_u, m_s, m_c, the quark condensates σ_u, σ_s, σ_c and the position of the hard wall z_0. We choose to fit the parameter z_0 using only the mass of the ρ meson, since that observable does not depend on any other parameter. We find z_0^-1 = 322.5 MeV. Then we proceed with a global fit for the quark masses and quark condensates using 10 observables, namely the light meson masses (m_π,m_a_1), the strange meson masses (m_K,m_K^∗,m_K_1,m_K_0^*) and the charmed meson masses (m_D,m_D^∗,m_D_s,m_D^*_s).
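The reason m_ρ alone fixes z_0 is a standard hard-wall result: in the flavor-symmetric light sector the vector equation of motion is solved by v(z) ∝ z J_1(m z), and since d/dz[z J_1(mz)] = m z J_0(mz), the Neumann condition at z_0 places the tower at the zeros of J_0. A quick numerical sketch (assuming this flavor-symmetric limit for the ρ channel):

```python
import numpy as np
from scipy.special import jn_zeros

z0_inv = 0.3225                 # GeV: the fitted 1/z_0 = 322.5 MeV
gammas = jn_zeros(0, 4)         # zeros of J_0: 2.4048, 5.5201, 8.6537, ...
m_rho = gammas * z0_inv         # m_{rho^n} = gamma_{0,n+1} / z_0, in GeV
print(1000.0 * m_rho[0])        # recovers the input rho mass, ~775.6 MeV
```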
Note that we have included the scalar meson K_0^*, which is associated with flavor symmetry breaking. Numerically, we find the best global fit for the parameters m_u=9 MeV, m_s=190 MeV and m_c=1560 MeV for the quark masses; σ_u=(198 MeV)^3, σ_s=(205 MeV)^3 and σ_c=(280 MeV)^3 for the quark condensates. In Table <ref> we compare the model fit to the observables with their experimental values. Note that the fit works very well for the isospin and strange sectors, as already known from previous works. The extension of the model to the charm sector also gives a reasonably good fit of the properties of heavy-light mesons, like the D and D^∗ mesons.Once we have fitted the parameters of the model, we are able to make predictions. In Table <ref> we show a set of predictions for masses and decay constants. In the cases where experimental or lattice data are available, the measured values are presented. Regarding the masses m_D^∗_0 and m_D^∗_0s, the large difference between the model prediction and the experimental values <cit.> may be related to the unclear distinction between the ground and excited states.Now we move to the triple meson couplings defined in the effective Lagrangians (<ref>) and (<ref>). Using the dictionary (<ref>, <ref>) and the relations (<ref>) we find the couplings g_ρ^n ππ, g_ρ^n ρρ, g_ρ^n K K, g_ρ^n K^* K^*, g_ρ^n D D and g_ρ^n D^* D^*. The results are shown in Table <ref>, where we notice an interesting feature. The triple coupling g_ρ^n D D, involving the heavy-light pseudoscalar charmed mesons, does not decrease with n in the same way as the triple couplings g_ρ^n ππ and g_ρ^n K K, involving light and strange pseudoscalar mesons respectively. The same behavior appears in the triple coupling g_ρ^n D^∗ D^∗, involving vectorial charmed mesons, when compared to the triple couplings g_ρ^n ρρ and g_ρ^n K^* K^*, involving vectorial light and strange mesons respectively.
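The n-dependence of these couplings originates in overlap integrals of the type g ∝ ∫ (dz/z) v v v introduced in the previous section. As an illustration — using flavor-symmetric hard-wall Bessel modes rather than the flavor-symmetry-broken wave functions actually used in the fit — one can normalize the modes with the AdS measure and evaluate the overlaps numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1, jn_zeros

z0 = 1.0 / 0.3225                            # GeV^-1, from 1/z_0 = 322.5 MeV
m = jn_zeros(0, 4) / z0                      # hard-wall vector tower (GeV)

def v_unnorm(n, z):
    return z * j1(m[n] * z)                  # normalizable mode, ~ z^2 near z = 0

# normalization with the measure dz/z: int_0^{z0} dz/z v_n(z)^2 = 1
N = [quad(lambda z: v_unnorm(n, z)**2 / z, 0.0, z0)[0] ** -0.5 for n in range(4)]

def v(n, z):
    return N[n] * v_unnorm(n, z)

# overlap controlling the coupling of rho^n to two ground-state mesons
overlaps = [quad(lambda z: v(0, z)**2 * v(n, z) / z, 0.0, z0)[0] for n in range(4)]
```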
In the case of g_ρ^n D D we see that actually the first resonance ρ^n=1 couples more strongly than the fundamental ρ^n=0.Due to flavor symmetry breaking, through the different values of the quark masses and condensates, we expect a violation of the SU(4) relations given in (<ref>)-(<ref>). We compare in Table <ref> our results with the expectations from the SU(4) flavor symmetric case. We note that we find the trend g_ρ D D < g_ρ K K < g_ρππ/2 for the pseudoscalar mesons, which is opposite to the trend found with QCD sum rules <cit.> and Dyson-Schwinger calculations in Ref. <cit.>, but agrees with calculations based on the ^3P_0 quark-pair creation model in the nonrelativistic quark model of Refs. <cit.>. For the vectorial mesons we find a similar trend g_ρ D^* D^* < g_ρ K^* K^* < g_ρρρ/2. Note, however, that the coupling g_ρ K^* K^* is very close to g_ρρρ/2. The reason behind this proximity is that the vector meson spectrum, found from the first equation in (<ref>), depends more on the condensate difference than on the mass difference appearing in β_V^a(z). Since the strange and light condensates σ_s and σ_u are very close to each other, the masses and wave functions of the ρ and K^* are very similar. Finally, we show our results for the meson (elastic) EM form factors defined in the previous section. Using the couplings obtained in Table <ref> and evaluating the expressions in Eqs. (<ref>)-(<ref>) we obtain a series expansion for the EM form factors of the pion, kaon and D meson. In a similar fashion, we take the couplings in Table <ref> and evaluate the expressions in Eqs. (<ref>)-(<ref>) to obtain a series expansion for the EM form factors of the ρ, K^* and D^* mesons. In both cases, good convergence is achieved after considering 5 states (4 resonances ρ^n besides the fundamental ρ^n=0). This is explicitly shown in Table <ref> for the case Q^2=0.Table <ref> again reveals a clear distinction between the light mesons and the charmed mesons.
In the former, vector meson dominance (VMD) is a good approximation, whereas in the case of the latter the EM form factors receive a substantial contribution from the first ρ resonance. This is a nice example of generalized vector meson dominance (GVMD) in EM form factors. This is consistent with Ref. <cit.>, where the authors claimed that the radial excitations of the ρ meson are important for the EM form factors of nonzero spin hadrons, such as nucleons.In Figs. <ref>, <ref>, <ref> and <ref> we show our results for the EM form factors of the π, ρ, K and K^* mesons. The pion and kaon EM form factors are compared against experimental data. Previous results for the pion and kaon EM form factors in holographic QCD can be found in <cit.> and <cit.> respectively. Previous results for the ρ meson EM form factor can be found in <cit.>. Finally, we show in Figs. <ref> and <ref> our results for the D meson and D^∗ meson EM form factors, compared with data from lattice QCD. As promised, we find a reasonable agreement between our model and the lattice results.At low Q^2, we use the relations in Eqs. (<ref>) and (<ref>) to extract the charge radii of pseudoscalar and vector mesons. In Table <ref> we compare our results for the light mesons against experimental data and those for the charmed mesons against lattice QCD data.We use the expansions in Eqs. (<ref>) and (<ref>) to obtain the large-Q^2 dependence of the EM form factors; the results are shown in Figs. <ref> and <ref>. For the case of pseudoscalar mesons, we find the behavior F_π^a(Q^2) ∼ Q^-2. For the vector meson EM form factors we find that F_V^a(Q^2) ∼ Q^-4. Both results are consistent with perturbative QCD expectations and conformal symmetry in the UV. § CONCLUSIONS We have extended the two-flavor hard-wall holographic model of Ref. <cit.> to four flavors.
By fitting the seven parameters of the model, which are three quark masses, three condensates and the hard-wall scale z_0, to eleven selected meson masses, the model provides a good description of the weak decay constants of more than a dozen light, strange and charmed mesons. We have also investigated the effects of flavor symmetry breaking on three-meson couplings and form factors. In particular, we have made predictions for the strong couplings g_ρ^nππ, g_ρ^nρρ, g_ρ^n K K, g_ρ^n K^* K^*, g_ρ^n D D and g_ρ^n D^*D^*. Moreover, using our results for those couplings we have been able to evaluate the π, ρ, K, K^*, D and D^* electromagnetic form factors. For the D and D^* electromagnetic form factors we found a reasonable agreement with the lattice QCD results of Ref. <cit.>.Our results for the couplings involving the ground-state ρ meson and the charmed mesons, namely g_ρ DD and g_ρ D^* D^*, are smaller than the SU(4) symmetry values, as shown in Table <ref>. Our result g_ρ DD=1.103 is also smaller than predictions based on the VMD model <cit.>, where g_ρ D D = 2.52 - 2.8. Moreover, we found that g_ρ DD < g_ρππ/2, which is the opposite of the trend predicted by QCD sum rules <cit.> and Dyson-Schwinger equations of QCD <cit.>, but agrees with that obtained with the ^3P_0 pair-creation model in the nonrelativistic quark model of Refs. <cit.>. A possible explanation for the small values of the couplings is that the electromagnetic form factor of the D meson is a dramatic example where the VMD approximation is broken and the contribution from the resonances ρ^n cannot be neglected. It is interesting to notice the relation between the breaking of the VMD approximation and the breaking of the SU(4) symmetry. In a VMD approximation we would find that 2 g_ρ D D = m_ρ^2/g_ρ = g_ρππ .The first equality comes from applying VMD to the D isospin form factor; in our framework, this relation also comes from the EM form factor.
The second equality is the well-known VMD result for the pion EM form factor.The relation in Eq. (<ref>) can be extended to other couplings and it means that a VMD approximation necessarily implies a universality between the couplings. Interestingly, the result in Eq. (<ref>) for the coupling g_ρ DD matches the SU(4) symmetry expectations. It is then reasonable to interpret a dramatic breaking of the SU(4) flavor symmetry in terms of a dramatic breaking of the VMD approximation, which is exactly what we have found for the charmed mesons D and D^*.We finish this paper by reiterating our earlier remarks on the applicability of our model. Our holographic QCD model is based on an extension of a light-flavor chiral Lagrangian, which should be adequate to describe heavy-light mesons, as the internal structure of these mesons is governed by essentially the same nonperturbative physics governing the internal structure of light mesons, which occurs at the scale Λ_ QCD. On the other hand, the internal structure of heavy-heavy mesons, such as the ψ and η_c, is governed by short-distance physics at the scale of the heavy quark mass. An appropriate holographic description of such mesons most likely requires the inclusion of long open strings. In that scenario, it should be possible, in particular, to describe the non-relativistic limit of heavy quarks where a spin-flavor symmetry emerges <cit.>. Although there have been some interesting top-down <cit.> and bottom-up <cit.> proposals, a realistic model for heavy-heavy mesons remains a challenge in holographic QCD.In holographic QCD, it is assumed that the quark mass coefficient m_q in the near boundary expansion of the classical field X_0(z) behaves as the source of the operator q̅(x)q(x). Then the holographic dictionary leads to the conclusion that the parameter σ is also in one-to-one correspondence with the vacuum expectation value (v.e.v.) ⟨q̅(x)q(x) ⟩. This matching, however, is ambiguous, as discussed in Ref.
<cit.>, because ⟨q̅(x)q(x) ⟩ is actually a scale-dependent quantity whereas m_q and σ are obtained from a global fit to the meson spectrum. This issue actually becomes exacerbated as the quark mass increases.In QCD, the quantity ⟨q̅(x)q(x) ⟩ is identified with the trace of the quark propagator S, i.e. ⟨q̅(x)q(x) ⟩ = -Tr S(x-x)= -Tr S(0). It contains a nonperturbative, low-energy contribution from a dynamical component of chiral symmetry breaking, and an essentially perturbative contribution due to the explicit chiral symmetry breaking driven by the quark mass. In the heavy quark mass limit, the perturbative contribution dominates and the nonperturbative contribution goes to zero. For that reason, and to make contact with the traditional definition of the quark condensate in QCD sum rules <cit.>, in lattice simulations the perturbative contribution is subtracted; see e.g. <cit.>. Interestingly, the authors of Ref. <cit.> found, after subtracting the perturbative contribution, that the strange quark condensate at the MS scale of 2 GeV is larger than that of the light quarks. So far there are no such lattice calculations for the charm and bottom quarks, but calculations within the framework of Dyson-Schwinger equations <cit.> find that the nonperturbative component of chiral symmetry breaking decreases with increasing current-quark mass, as expected. Our results, obtained from a global fit to the meson spectrum, indicate that σ increases with m_q. Although, as discussed above, the relation between σ and the QCD v.e.v. ⟨q̅(x)q(x) ⟩ is far from clear, one could assume that relation to be strictly one-to-one and conclude that ⟨q̅(x)q(x) ⟩ increases with the quark mass, unless a perturbative subtraction is also implemented in the holographic model. For the case of the charm quark this means that a large value of σ_c does not necessarily imply a large charm quark condensate. There is an additional issue that requires further study. In QCD, the v.e.v.
of the operator q̅ q, with canonical dimension Δ=3, is expected to acquire a large anomalous dimension in the infrared. In our holographic model, we have made the ad hoc approximation of keeping the same canonical dimension for ⟨q̅ q ⟩. If we take into account anomalous dimension effects, corrections to m_q and σ are expected. We hope to pursue this line of research in the near future.It is also important to bear in mind that results from a holographic QCD approach refer to the leading order of an expansion in 1/N_c and in the large 't Hooft coupling λ=g_YM^2 N_c. As such, loop corrections for the hadronic propagators and vertices are not taken into account. The 1/N_c and/or 1/λ corrections to the effective chiral-flavor Lagrangians in holographic QCD are a fascinating open problem and deserve further study. § ACKNOWLEDGEMENTS Work partially supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Grants No. 2015/17609-3 (A.B.-B.) and 2013/01907-0 (G.K.), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Grant No. 305894/2009-9 (G.K.) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for a doctoral fellowship (C.M.). A.B.-B. also acknowledges partial financial support from the grant CERN/FIS-NUC/0045/2015. The authors also thank K.U. Can for providing them the lattice data for the electromagnetic form factors of Ref. <cit.>. Krein:2016fqh G. Krein, AIP Conf. Proc. 1701, 020012 (2016). Hosaka:2016ypm A. Hosaka, T. Hyodo, K. Sudoh, Y. Yamaguchi and S. Yasui, arXiv:1606.08685 [hep-ph]. Briceno:2015rlt R. A. Briceño et al., Chin. Phys. C40, 042001 (2016). Tolos:2013gta L. Tolos, Int. J. Mod. Phys. E22, 1330027 (2013). Tsushima:1998ru K. Tsushima, D. H. Lu, A. W. Thomas, K. Saito and R. H. Landau, Phys. Rev. C59, 2824 (1999). Yasui:2009bz S. Yasui and K. Sudoh, Phys. Rev. D80, 034008 (2009). GarciaRecio:2010vt C. Garcia-Recio, J. Nieves and L. Tolos, Phys. Lett. B690, 369 (2010). GarciaRecio:2011xt C.
Data61, CSIRO, Sydney, Australia Comput. Sci. and Engineering, University of New South Wales, Sydney, Australia A Branching Time Model of CSP Rob van Glabbeek^1,2 December 30, 2023 ============================= I present a branching time model of CSP that is finer than all other models of CSP proposed thus far. It is obtained by taking a semantic equivalence from the linear time – branching time spectrum, namely divergence-preserving coupled similarity, and showing that it is a congruence for the operators of CSP. This equivalence belongs to the bisimulation family of semantic equivalences, in the sense that on transition systems without internal actions it coincides with strong bisimilarity. Nevertheless, enough of the equational laws of CSP remain to obtain a complete axiomatisation for closed, recursion-free terms. § INTRODUCTION The process algebra CSP—Communicating Sequential Processes—was presented in Brookes, Hoare & Roscoe <cit.>. It is sometimes called theoretical CSP, to distinguish it from the earlier language CSP of Hoare <cit.>. It is equipped with a denotational semantics, mapping each CSP process to an element of the failures-divergences model <cit.>. The same semantics can also be presented operationally, by mapping CSP processes to states in a labelled transition system (LTS), and then mapping LTSs to the failures-divergences model. Olderog & Hoare <cit.> shows that this yields the same result. Hence, the failures-divergences model of CSP can alternatively be seen as a semantic equivalence on LTSs, namely by calling two states in an LTS equivalent iff they map to the same element of the failures-divergences model. Several other models of CSP are presented in the literature, and each can be cast as a semantic equivalence on LTSs, which is a congruence for the operators of CSP. One such model is called finer than another if its associated equivalence relation is finer, i.e., included in the other one, or more discriminating.
The resulting hierarchy of models of CSP has two pillars: the divergence-strict models, most of which refine the standard failures-divergences model, and the stable models, such as the model based on stable failures equivalence from Bergstra, Klop & Olderog <cit.>, or the stable revivals model of Roscoe <cit.>. Here I present a new model, which can be seen as the first branching time model of CSP, and the first that refines all earlier models, i.e. both pillars mentioned above. It is based on the notion of coupled similarity from Parrow & Sjödin <cit.>. What makes it an interesting model of CSP—as opposed to, say, strong or divergence-preserving weak bisimilarity—is that it allows a complete equational axiomatisation for closed recursion-free CSP processes that fits within the existing syntax of that language.

§ CSP

CSP <cit.> is parametrised with a set Σ of communications. In this paper I use the subset of CSP given by the following grammar.

P, Q ::= STOP | ÷ | a→P | P ⊓ Q | P □ Q | P ▷ Q | P ‖_A Q | P \ A | f(P) | P △ Q | P Θ_A Q | p | μp.P

Here P and Q are CSP expressions, a ∈ Σ, A ⊆ Σ and f: Σ→Σ. Furthermore, p ranges over a set of process identifiers. A CSP process is a CSP expression in which each occurrence of a process identifier p lies within a recursion construct μp.P. The operators in the above grammar are inaction, divergence, action prefixing, internal, external and sliding choice, parallel composition, concealment, renaming, interrupt and throw. Compared to <cit.>, this leaves out

* successful termination (SKIP) and sequential composition (;),
* infinitary guarded choice,
* prefixing operators with name binding, conditional choice,
* relational renaming, and
* the version of internal choice that takes a possibly infinite set of arguments.

The operators STOP, a→, ⊓, □, \A, f(·) and recursion stem from <cit.>, and ÷ and ‖_A from <cit.>, whereas ▷, △ and Θ were added to CSP by Roscoe <cit.>. The operational semantics of CSP is given by the binary transition relations →α between CSP processes.
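The transition rules themselves are given in a table not reproduced here, but their shape can be illustrated executably. The sketch below is mine, not the paper's; the tuple encoding of process terms and the name `TAU` are assumptions, and only a small fragment (inaction, divergence, prefixing, internal and external choice) is covered:

```python
# A sketch (not from the paper) of operational rules for a CSP fragment.
# Process terms are nested tuples: ("STOP",), ("div",), ("prefix", a, P),
# ("intc", P, Q) for internal choice, ("extc", P, Q) for external choice.
TAU = "tau"

def transitions(p):
    """Yield the pairs (action, successor) derivable for term p."""
    kind = p[0]
    if kind == "div":                  # divergence: only a tau self-loop
        yield (TAU, p)
    elif kind == "prefix":             # a -> P performs a, then behaves as P
        _, a, cont = p
        yield (a, cont)
    elif kind == "intc":               # internal choice resolves by tau-steps
        yield (TAU, p[1])
        yield (TAU, p[2])
    elif kind == "extc":               # external choice: a visible move of one
        for side in (1, 2):            # argument discards the other; a tau
            other = p[2] if side == 1 else p[1]   # move keeps the choice open
            for a, succ in transitions(p[side]):
                if a == TAU:
                    left, right = (succ, other) if side == 1 else (other, succ)
                    yield (TAU, ("extc", left, right))
                else:
                    yield (a, succ)
    # "STOP" (and anything unknown): no transitions
```

Note the asymmetry: internal choice resolves by a τ-step of its own, whereas external choice is resolved only by a visible action — a τ-step of an argument leaves the choice intact.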
The transitions P →α Q are derived by the structural operational rules of CSP. Here a, b range over Σ and α, β over Σ∪{τ}, and relabelling operators f are extended to Σ∪{τ} by f(τ)=τ. The transition labels α are called actions, and τ is the internal action.

§ THE FAILURES-DIVERGENCES MODEL OF CSP

The process algebra CSP stems from Brookes, Hoare & Roscoe <cit.>. It is also called theoretical CSP, to distinguish it from the language CSP of Hoare <cit.>. Its semantics <cit.> associates to each CSP process a pair ⟨F, D⟩ of failures F ⊆ Σ^* × 𝒫(Σ) and divergences D ⊆ Σ^*, subject to the conditions:

(N1) ⟨ε,∅⟩ ∈ F
(N2) ⟨st,∅⟩ ∈ F ⇒ ⟨s,∅⟩ ∈ F
(N3) ⟨s,X⟩ ∈ F ∧ Y ⊆ X ⇒ ⟨s,Y⟩ ∈ F
(N4) ⟨s,X⟩ ∈ F ∧ ∀c ∈ Y. ⟨sc,∅⟩ ∉ F ⇒ ⟨s,X∪Y⟩ ∈ F
(N5) ∀Y ∈ 𝒫_fin(X). ⟨s,Y⟩ ∈ F ⇒ ⟨s,X⟩ ∈ F
(D1) s ∈ D ⇒ st ∈ D
(D2) s ∈ D ⇒ ⟨st,X⟩ ∈ F

Here ε ∈ Σ^* is the empty sequence of communications and st denotes the concatenation of sequences s and t ∈ Σ^*. If ⟨F, D⟩ is the semantics of a process P, ⟨s,∅⟩ ∈ F, with s ∉ D, tells that P can perform the sequence of communications s, possibly interspersed with internal actions. Such a sequence is called a trace of P, and Conditions N1 and N2 say that the set of traces of any process is non-empty and prefix-closed. A failure ⟨s,X⟩ ∈ F, with s ∉ D, says that after performing the trace s, P may reach a state in which it can perform none of the actions in X, nor the internal action. A communication x ∈ Σ is thought to occur in cooperation between a process and its environment. Thus ⟨s,X⟩ ∈ F indicates that deadlock can occur if after performing s the process runs in an environment that allows the execution of actions in X only. From this perspective, Conditions N3 and N4 are obvious. A divergence s ∈ D is a trace after which an infinite sequence of internal actions is possible. In the failures-divergences model of CSP divergence is regarded as catastrophic: all further information about the process' behaviour past a divergence trace is erased.
This is accomplished by flooding: all conceivable failures ⟨st,X⟩ and divergences st that have s as a prefix are added to the model (regardless of whether P actually has a trace st). A CSP process P from the syntax of CSP has the property that for any trace s of P, with s ∉ D, the set next(s) of actions c such that sc is also a trace of P is finite. By (N3–4), ⟨s,X⟩ ∈ F iff ⟨s, X ∩ next(s)⟩ ∈ F. It follows that if ⟨s,X⟩ ∉ F, then there is a finite subset Y of X, namely X ∩ next(s), such that ⟨s,Y⟩ ∉ F. This explains Condition (N5).

In Brookes & Roscoe <cit.> the semantics of CSP is defined denotationally: for each n-ary CSP operator Op, a function is defined that extracts the failures and divergences of Op(P_1,…,P_n) out of the failures and divergences of the argument processes P_1,…,P_n. The meaning of a recursively defined CSP process μp.P is obtained by means of fixed-point theory. Alternatively, the failures and divergences of a CSP process can be extracted from its operational semantics:

* Write P ⇒ Q if there are processes P_0,…,P_n, with n ≥ 0, such that P = P_0, P_i →τ P_{i+1} for all 0 ≤ i < n, and P_n = Q.
* Write P ⇒α Q if there are processes P', Q' with P ⇒ P' →α Q' ⇒ Q.
* Write P ⇒α̂ Q if either α ∈ Σ and P ⇒α Q, or α = τ and P ⇒ Q.
* Write P ⇒s Q, for s = a_1 a_2 … a_n ∈ Σ^* with n ≥ 0, if there are processes P_0,…,P_n such that P = P_0, P_i ⇒a_{i+1} P_{i+1} for all 0 ≤ i < n, and P_n = Q.
* Let I(P) = {α ∈ Σ∪{τ} | ∃Q. P →α Q}.
* Write P⇑ if there are processes P_i for all i ≥ 0 with P = P_0 →τ P_1 →τ ….
* s ∈ Σ^* is a divergence trace of a process P if there is a Q with P ⇒s Q⇑. The divergence set of P is 𝒟(P) := {st | s a divergence trace of P}.
* A stable failure of a process P is a pair ⟨s,X⟩ ∈ Σ^* × 𝒫(Σ) such that P ⇒s Q for some Q with I(Q) ∩ (X∪{τ}) = ∅. The failure set of a process P is ℱ(P) = {⟨s,X⟩ | s ∈ 𝒟(P) ∨ ⟨s,X⟩ a stable failure of P}.

The semantics ⟦P⟧ of a CSP process P is the pair ⟨ℱ(P), 𝒟(P)⟩. Processes P and Q are failures-divergences equivalent, notation P ≡_FD Q, iff ⟦P⟧ = ⟦Q⟧.
Process P is a failures-divergences refinement of Q, notation P ⊒_FD Q, iff ℱ(P) ⊆ ℱ(Q) ∧ 𝒟(P) ⊆ 𝒟(Q). The operational semantics of CSP (then without the operators ▷, △ and Θ) appears, for instance, in <cit.>, and was created after the denotational semantics. In Olderog & Hoare <cit.> it is shown that the semantics ⟦P⟧ of a CSP process defined operationally as above equals the denotational semantics given in <cit.>. The argument extends smoothly to the new operators ▷, △ and Θ <cit.>. This can be seen as a justification of the operational semantics of CSP.

In Brookes, Hoare & Roscoe <cit.> a denotational semantics of CSP was given involving failures only. Divergences were included only implicitly, namely by thinking of a trace s as a divergence of a process P iff P has all failures ⟨st,X⟩. So the semantics of ÷ or μX.X is simply the set of all failure pairs. As observed in De Nicola <cit.>, this approach invalidates a number of intuitively valid laws, such as P ⊓ ÷ = ÷. The improved semantics of <cit.> solves this problem.

In Hoare <cit.> a slightly different semantics of CSP is given, in which a process is determined by its failures, divergences, as well as its alphabet. The latter is a superset of the set of communications the process can ever perform. Rather than a parallel composition ‖_A for each set of synchronising actions A ⊆ Σ, this approach has an operator ‖ where the set of synchronising actions is taken to be the intersection of the alphabets of its arguments. Additionally, there is an operator |||, corresponding to ‖_∅. This approach is as expressive as the one of <cit.>, in the sense that there are semantics-preserving translations in both directions. The work reported in this paper could just as well have been carried out in this typed version of CSP.

§ A COMPLETE AXIOMATISATION

In <cit.> many algebraic laws P=Q, resp. P ⊑ Q, are stated that are valid w.r.t. the failures-divergences semantics of CSP, meaning that P ≡_FD Q, resp. P ⊑_FD Q.
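As an aside, the operational extraction of failures and divergences described earlier is directly mechanisable on a finite LTS. The sketch below is mine, not the paper's; the encoding of the LTS as a dictionary from states to lists of (action, successor) pairs, and the name `TAU` for the internal action, are assumptions:

```python
# A sketch (not from the paper) of the operational failures-divergences data
# of a finite LTS.
TAU = "tau"

def weak_succs(lts, s):
    """States reachable from s by internal steps only (s => t)."""
    seen, todo = {s}, [s]
    while todo:
        u = todo.pop()
        for a, t in lts[u]:
            if a == TAU and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def can_diverge(lts):
    """States admitting an infinite tau-path: a greatest fixpoint."""
    div = set(lts)
    while True:
        nxt = {s for s in div if any(a == TAU and t in div for a, t in lts[s])}
        if nxt == div:
            return div
        div = nxt

def after(lts, s, trace):
    """All states reachable from s via the given weak trace."""
    states = weak_succs(lts, s)
    for a in trace:
        states = {v for u in states for act, t in lts[u] if act == a
                  for v in weak_succs(lts, t)}
    return states

def max_refusals(lts, s, trace, alphabet):
    """Maximal refusal sets of the stable states reached after `trace`."""
    return {frozenset(alphabet - {a for a, _ in lts[u]})
            for u in after(lts, s, trace)
            if all(a != TAU for a, _ in lts[u])}
```

`can_diverge` realises the divergence predicate as a greatest fixpoint, and `max_refusals` returns the maximal refusal set of each stable state reachable after a weak trace — all other stable failures then follow by subset closure, as in Condition N3.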
If 𝒜 is a collection of equational laws P=Q then 𝒜 ⊢ R=S denotes that the equation R=S is derivable from the equations in 𝒜 using reflexivity, symmetry, transitivity and the rule of congruence, saying that if Op is an n-ary CSP operator and P_i = Q_i for i=1,…,n then Op(P_1,…,P_n) = Op(Q_1,…,Q_n). Likewise, if 𝒜 is a collection of inequational laws P ⊑ Q then 𝒜 ⊢ R ⊑ S denotes that the inequation R ⊑ S is derivable from the inequations in 𝒜 using reflexivity, transitivity and the rule saying that if Op is an n-ary CSP operator and P_i ⊑ Q_i for i=1,…,n then Op(P_1,…,P_n) ⊑ Op(Q_1,…,Q_n).

An equivalence ∼ on process expressions is called a congruence for an n-ary operator Op if P_i ∼ Q_i for i=1,…,n implies Op(P_1,…,P_n) ∼ Op(Q_1,…,Q_n). A preorder ≼ is a precongruence for Op, or Op is monotone for ≼, if P_i ≼ Q_i for i=1,…,n implies Op(P_1,…,P_n) ≼ Op(Q_1,…,Q_n). If ∼ is a congruence for all operators of CSP (resp. ≼ is a precongruence for all operators of CSP) and 𝒜 is a set of (in)equational laws that are valid for ∼ (resp. ≼) then any (in)equation R=S with 𝒜 ⊢ R=S (resp. R ⊑ S with 𝒜 ⊢ R ⊑ S) is valid for ∼ (resp. ≼).

≡_FD is a congruence for all operators of CSP. This follows immediately from the existence of the denotational failures-divergences semantics. Likewise, ⊑_FD is a precongruence for all operators of CSP <cit.>.

A set 𝒜 of (in)equational laws—an axiomatisation—is sound and complete for an equivalence ∼ (or a preorder ≼) if 𝒜 ⊢ R=S iff R ∼ S (resp. 𝒜 ⊢ R ⊑ S iff R ≼ S). Here “⇒” is soundness and “⇐” completeness. In De Nicola <cit.> a sound and complete axiomatisation of ⊑_FD for recursion-free CSP, and no process identifiers or variables, is presented. It is quoted in axsFD. As this axiomatisation consists of a mix of equations and inequations, formally it is an inequational axiomatisation, where an equation P=Q is understood as the conjunction of P ⊑ Q and Q ⊑ P.
This mixed use is justified because ≡_FD is the kernel of ⊑_FD: one has P ≡_FD Q iff P ⊑_FD Q ∧ Q ⊑_FD P.

In <cit.>, following <cit.>, two parallel composition operators ‖ and ||| were considered, instead of the parametrised operator ‖_A. Here ‖ = ‖_Σ and ||| = ‖_∅. In axsFD the axioms for these two operators are unified into an axiomatisation of ‖_A. Additionally, I added axioms for sliding choice, renaming, interrupt and throw—these operators were not considered in <cit.>. The associativity of parallel composition is not included in <cit.> and is not needed for completeness. I added it anyway, because of its importance in equational reasoning.

The soundness of the axiomatisation of axsFD follows from ⊑_FD being a precongruence, and the validity of the axioms—a fairly easy inspection using the denotational characterisation of ⟦·⟧. To obtain completeness, write □_{i∈I} P_i, with I = {i_1,…,i_n} any finite index set, for P_{i_1} □ P_{i_2} □ … □ P_{i_n}, where □_{i∈∅} P_i represents STOP. This notation is justified by Axioms E2–4. Furthermore, ⊓_{j∈J} P_j, with J = {j_1,…,j_m} any finite, nonempty index set, denotes P_{j_1} ⊓ P_{j_2} ⊓ … ⊓ P_{j_m}. This notation is justified by the associativity and commutativity of ⊓. Now a normal form is defined as a CSP expression of the form ÷ or ⊓_{j∈J} R_j, with R_j = (□_{k∈K_j}(a_kj → R_kj)) for j∈J, where the subexpressions R_kj are again in normal form. Here J and the K_j are finite index sets, J nonempty.

The axioms derive P ⊓ ÷ = ÷. Together with Axioms H1–4, R1–5, T1–6 and U1–5 this allows any recursion-free CSP expression to be rewritten into normal form. In <cit.> it is shown that for any two normal forms P and Q with P ⊑_FD Q, Axioms I1–4, among others, derive P = Q. Together, this yields the completeness of the axiomatisation of axsFD.

§ OTHER MODELS OF CSP

Several alternative models of CSP have been proposed in the literature, including the readiness-divergences model of Olderog & Hoare <cit.> and the stable revivals model of Roscoe <cit.>. A hierarchy of such models is surveyed in Roscoe <cit.>.
Each of these models corresponds with a preorder (and associated semantic equivalence) on labelled transition systems. In <cit.> I presented a survey of semantic equivalences and preorders on labelled transition systems, ordered by inclusion in a lattice. Each model occurring in <cit.> corresponds exactly with one of the equivalences of <cit.>, or—like the stable revivals model—arises as the meet or join of two such equivalences. In the other direction, not every semantic equivalence or preorder from <cit.> yields a sensible model of CSP. First of all, one would want to ensure that it is a (pre)congruence for the operators of CSP. Additionally, one might impose sanity requirements on the treatment of recursion.

The hierarchy of models in <cit.> roughly consists of two hierarchies: the stable models, and the divergence-strict ones. The failures-divergences model could be seen as the centrepiece of the divergence-strict hierarchy, and the stable failures model <cit.>, which outside CSP stems from Bergstra, Klop & Olderog <cit.>, plays the same role in the stable hierarchy. Each of these hierarchies has a maximal (least discriminating) element, called FL^⇓ and FL in <cit.>. These correspond to the ready trace models RT^↓ and RT of <cit.>.

The goal of the present paper is to propose a sensible model of CSP that is strictly finer than all models thus far considered, and thus unites the two hierarchies mentioned above. As all models of CSP considered so far have a distinctly linear time flavour, I here propose a branching time model, thereby showing that the syntax of CSP is not predisposed towards linear time models. My model can be given as an equivalence relation on labelled transition systems, provided I show that it is a congruence for the operators of CSP.
I aim for an equivalence that allows a complete axiomatisation in the style of axsFD, obtained by replacing axioms that are no longer valid by weaker ones.

One choice could be to base a model on strong bisimulation equivalence <cit.>. Strong bisimilarity is a congruence for all CSP operators, because their operational semantics fits the tyft/tyxt format of <cit.>. However, this is an unsuitable equivalence for CSP, because it fails to abstract from internal actions. Even a basic law like P ⊓ P = P would not be valid, as the two sides differ by an internal action.

A second proposal could be based on weak bisimilarity <cit.>. This equivalence abstracts from internal activity, and validates such laws. The default incarnation of weak bisimilarity is not finer than failures-divergences equivalence, because it satisfies ÷ = STOP. Therefore, one would take a divergence-preserving variant of this notion: the weak bisimulation with explicit divergence of Bergstra, Klop & Olderog <cit.>. Yet, some crucial CSP laws are invalidated. This destroys any hope of a complete axiomatisation along the lines of axsFD.

My final choice is divergence-preserving coupled similarity <cit.>, based on coupled similarity for divergence-free processes from Parrow & Sjödin <cit.>. This is the finest equivalence in <cit.> that satisfies those crucial laws. In fact, it satisfies all of the axioms of axsFD, except for the ones marked red there. Divergence-preserving coupled similarity belongs to the bisimulation family of semantic equivalences, in the sense that on transition systems without internal actions it coincides with strong bisimilarity.

Below I first present divergence-preserving coupled similarity, then prove that it is a congruence for the operators of CSP, and finally present a complete axiomatisation for recursion-free CSP processes without interrupts.
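One practical remark before the formal development: on a finite labelled transition system, divergence-preserving coupled similarity — as defined in the next section — can be decided by a greatest-fixpoint computation that starts from all divergence-respecting pairs and removes pairs violating either clause of the definition. The sketch below is mine, not the paper's; the LTS encoding (a dict from states to lists of (action, successor) pairs) is an assumption, and a pair (p, q) in the result asserts that p is related to q by the largest divergence-preserving coupled simulation:

```python
# A sketch (not from the paper): the largest divergence-preserving coupled
# simulation on a finite LTS, by greatest-fixpoint iteration.
TAU = "tau"

def tau_closure(lts, s):
    """All states reachable from s by internal steps only."""
    seen, todo = {s}, [s]
    while todo:
        u = todo.pop()
        for a, t in lts[u]:
            if a == TAU and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def weak_step(lts, s, alpha):
    """All t weakly reachable: tau-steps around one alpha-step if alpha is
    visible, tau-steps only if alpha is the internal action."""
    out = set()
    for u in tau_closure(lts, s):
        if alpha == TAU:
            out |= tau_closure(lts, u)
        else:
            for a, t in lts[u]:
                if a == alpha:
                    out |= tau_closure(lts, t)
    return out

def diverges(lts):
    """States with an infinite tau-path (greatest fixpoint)."""
    div = set(lts)
    while True:
        nxt = {s for s in div if any(a == TAU and t in div for a, t in lts[s])}
        if nxt == div:
            return div
        div = nxt

def coupled_sim(lts):
    div = diverges(lts)
    # start from all pairs respecting divergence-preservation
    rel = {(p, q) for p in lts for q in lts if p not in div or q in div}
    while True:
        keep = set()
        for p, q in rel:
            # clause 1: every strong step of p is weakly matched by q
            sim = all(any((p2, q2) in rel for q2 in weak_step(lts, q, a))
                      for a, p2 in lts[p])
            # clause 2 (coupling): q can internally evolve to a state
            # related back to p
            coupled = any((q2, p) in rel for q2 in tau_closure(lts, q))
            if sim and coupled:
                keep.add((p, q))
        if keep == rel:
            return rel
        rel = keep
```

The two conditions checked inside the loop correspond to the simulation clause and the coupling clause of the definition; divergence-preservation is built into the initial relation, and stripping violations until stabilisation yields a greatest fixpoint.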
§ DIVERGENCE-PRESERVING COUPLED SIMILARITY

A coupled simulation is a binary relation ℛ on CSP processes, such that, for all α ∈ Σ∪{τ},

* if P ℛ Q and P →α P' then there exists a Q' with Q ⇒α̂ Q' and P' ℛ Q',
* and if P ℛ Q then there exists a Q' with Q ⇒ Q' and Q' ℛ P.

It is divergence-preserving if P ℛ Q and P ⇑ implies Q ⇑. Write P ⊒_CS^Δ Q if there exists a divergence-preserving coupled simulation ℛ with P ℛ Q. Two processes P and Q are divergence-preserving coupled similar, notation P ≡_CS^Δ Q, if P ⊒_CS^Δ Q and Q ⊒_CS^Δ P.

Note that the union of any collection of divergence-preserving coupled simulations is itself a divergence-preserving coupled simulation. In particular, ⊒_CS^Δ is a divergence-preserving coupled simulation. Also note that in the absence of the internal action τ, coupled simulations are symmetric, and coupled similarity coincides with strong bisimilarity (as defined in <cit.>).

Intuitively, P ⊒_CS^Δ Q says that P is “ahead” of a state matching Q, where P' is ahead of P if P ⇒ P'. The first clause says that if P is ahead of a state matching Q, then any transition performed by P can be matched by Q—possibly after Q “caught up” with P by performing some internal transitions. The second clause says that if P is ahead of Q, then Q can always catch up, so that it is ahead of P. Thus, if P and Q are in stable states—where no internal actions are possible—then P ⊑_CS^Δ Q implies Q ⊑_CS^Δ P. In all other situations, P and Q do not need to be matched exactly, but there do exist under- and overapproximations of a match. The result is that the relation behaves like a weak bisimulation w.r.t. visible actions, but is not so pedantic in matching internal actions.

⊒_CS^Δ is reflexive and transitive, and thus a preorder. The identity relation Id is a divergence-preserving coupled simulation, and if ℛ, ℛ' are divergence-preserving coupled simulations, then so is ℛ;ℛ' ∪ ℛ';ℛ. Here ℛ;ℛ' is defined by P (ℛ;ℛ') R iff there is a Q with P ℛ Q ℛ' R. ℛ;ℛ' is divergence-preserving: if P ℛ Q ℛ' R and P ⇑, then Q ⇑, and thus R ⇑.
The same holds for ℛ';ℛ, and thus for ℛ;ℛ' ∪ ℛ';ℛ. To check that ℛ;ℛ' ∪ ℛ';ℛ satisfies the first clause of coupled, note that if Q ℛ' R and Q ⇒α̂ Q', then, by repeated application of the first clause of coupled, there is an R' with R ⇒α̂ R' and Q' ℛ' R'. Towards the second clause, if P ℛ Q ℛ' R, then, using the second clause for ℛ, there is a Q' with Q ⇒ Q' and Q' ℛ P. Hence, using the first clause for ℛ', there is an R' with R ⇒ R' and Q' ℛ' R'. Thus, using the second clause for ℛ', there is an R″ with R' ⇒ R″ and R″ ℛ' Q', and hence R″ (ℛ';ℛ) P.

If P ⇒ Q then P ⊑_CS^Δ Q. I show that Id ∪ {(Q,P)}, with Id the identity relation, is a coupled simulation. Namely if Q →α Q' then surely P ⇒α̂ Q'. The second clause of coupled is satisfied because P ⇒ Q. Furthermore, if Q ⇑ then certainly P ⇑, so the relation is divergence-preserving.

P ⊒_CS^Δ Q iff P ⊓ Q ≡_CS^Δ Q. “⇒”: Let ℛ be the smallest relation such that, for any P and Q, P ⊒_CS^Δ Q implies P ℛ Q, (P ⊓ Q) ℛ Q and Q ℛ (P ⊓ Q). It suffices to show that ℛ is a divergence-preserving coupled simulation. That ℛ is divergence-preserving is trivial, using that (P ⊓ Q) ⇑ iff P ⇑ ∨ Q ⇑.

Suppose P^* ℛ Q and P^* →α P'. The case that P^* = P with P ⊒_CS^Δ Q is trivial. Now let Q be Q^* ⊓ P^*. Since P^* →α P', surely Q ⇒α̂ P', and P' ℛ P'. Finally, let P^* = (P ⊓ Q) with P ⊒_CS^Δ Q. Then α = τ and P' is either P or Q. Both cases are trivial, taking Q' = Q.

Towards the second clause of coupled, suppose P^* ℛ Q. The case P^* = P with P ⊒_CS^Δ Q is trivial. Now let Q be Q^* ⊓ P^*. Then Q ⇒ P^* and P^* ℛ P^*. Finally, let P^* = (P ⊓ Q) with P ⊒_CS^Δ Q. Then Q ⇒ Q and Q ℛ (P ⊓ Q).

“⇐”: Suppose P ⊓ Q ⊒_CS^Δ Q. Since P ⊓ Q →τ P there exists a Q' with Q ⇒ Q' and P ⊒_CS^Δ Q'. By tau Q' ⊒_CS^Δ Q and by preorder P ⊒_CS^Δ Q.

§ CONGRUENCE PROPERTIES

≡_CS^Δ is a congruence for action prefixing. I have to show that P ≡_CS^Δ Q implies (a→P) ≡_CS^Δ (a→Q). Let ℛ be the smallest relation such that, for any P and Q, P ⊑_CS^Δ Q implies P ℛ Q, and P ≡_CS^Δ Q implies (a→P) ℛ (a→Q).
It suffices to show that ℛ is a divergence-preserving coupled simulation. Checking the conditions of coupled for the case P ℛ Q with P ⊑_CS^Δ Q is trivial. So I examine the case (a→P) ℛ (a→Q) with P ≡_CS^Δ Q. Suppose (a→P) →α P'. Then α = a and P' = P. Now (a→Q) →a Q and P ℛ Q, so the first condition of coupled is satisfied. For the second condition, (a→Q) ⇒ (a→Q), and, since Q ≡_CS^Δ P, (a→Q) ℛ (a→P). Thus, ℛ is a coupled simulation. As a→P does not diverge, ℛ moreover is divergence-preserving.

Since STOP ⊒_CS^Δ (a→STOP) ⊓ STOP but STOP ⋢_CS^Δ (a→STOP) ⊓ STOP, and thus b→STOP ⋣_CS^Δ b→((a→STOP) ⊓ STOP), the relation ⊒_CS^Δ is not a precongruence for action prefixing. It is possible to express action prefixing in terms of the throw operator: a→P is strongly bisimilar with (a→STOP) Θ_{a} P. Consequently, ⊒_CS^Δ is not a precongruence for the throw operator.

≡_CS^Δ is a congruence for the throw operator. Let A ⊆ Σ. Let ℛ be the smallest relation such that, for any P_1, P_2, Q_1, Q_2, P_1 ⊒_CS^Δ Q_1 and P_2 ≡_CS^Δ Q_2 implies P_1 ℛ Q_1 and (P_1 Θ_A P_2) ℛ (Q_1 Θ_A Q_2). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P_1 ⊒_CS^Δ Q_1, P_2 ≡_CS^Δ Q_2 and (P_1 Θ_A P_2) →α P'. Then P_1 →α P_1' for some P_1', and either α ∉ A and P' = P_1' Θ_A P_2, or α ∈ A and P' = P_2. So there is a Q_1' with Q_1 ⇒α̂ Q_1' and P_1' ⊒_CS^Δ Q_1'. If α ∉ A it follows that (Q_1 Θ_A Q_2) ⇒α̂ (Q_1' Θ_A Q_2) and (P_1' Θ_A P_2) ℛ (Q_1' Θ_A Q_2). If α ∈ A it follows that (Q_1 Θ_A Q_2) ⇒α Q_2 and P_2 ℛ Q_2.

Now let P_1 ⊒_CS^Δ Q_1 and P_2 ≡_CS^Δ Q_2. Then there is a Q_1' with Q_1 ⇒ Q_1' and Q_1' ⊒_CS^Δ P_1. Hence Q_1 Θ_A Q_2 ⇒ Q_1' Θ_A Q_2 and (Q_1' Θ_A Q_2) ℛ (P_1 Θ_A P_2).

The same two conditions for the case P ℛ Q because P ⊒_CS^Δ Q are trivial. Thus ℛ is a coupled simulation. That ℛ is divergence-preserving follows because P_1 Θ_A P_2 ⇑ iff P_1 ⇑.

I proceed to show that ⊒_CS^Δ is a precongruence for all the other operators of CSP. This implies that ≡_CS^Δ is a congruence for all the operators of CSP.

⊒_CS^Δ is a precongruence for internal choice.
Let ℛ be the smallest relation such that, for any P_i and Q_i, P_i ⊒_CS^Δ Q_i for i=1,2 implies P_i ℛ Q_i (i=1,2) and (P_1 ⊓ P_2) ℛ (Q_1 ⊓ Q_2). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P_i ⊒_CS^Δ Q_i for i=1,2 and (P_1 ⊓ P_2) →α P'. Then α = τ and P' = P_i for i=1 or 2. Now Q_1 ⊓ Q_2 →τ Q_i and P_i ℛ Q_i.

Now let P_i ⊒_CS^Δ Q_i for i=1,2. Then there is a Q_1' with Q_1 ⇒ Q_1' and Q_1' ⊒_CS^Δ P_1. By tau P_1 ⊒_CS^Δ P_1 ⊓ P_2 and by preorder Q_1' ⊒_CS^Δ P_1 ⊓ P_2. The same two conditions for the case P ℛ Q because P ⊒_CS^Δ Q are trivial. Thus ℛ is a coupled simulation. That ℛ is divergence-preserving follows because P_1 ⊓ P_2 ⇑ iff P_1 ⇑ ∨ P_2 ⇑.

⊒_CS^Δ is a precongruence for external choice. Let ℛ be the smallest relation such that, for any P_i and Q_i, P_i ⊒_CS^Δ Q_i for i=1,2 implies P_i ℛ Q_i (i=1,2) and (P_1 □ P_2) ℛ (Q_1 □ Q_2). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P_i ⊒_CS^Δ Q_i for i=1,2 and (P_1 □ P_2) →α P'. If α ∈ Σ then P_i →α P' for i=1 or 2, and there exists a Q' with Q_i ⇒α Q' and P' ⊒_CS^Δ Q'. Hence Q_1 □ Q_2 ⇒α Q' and P' ℛ Q'. If α = τ then either P_1 →τ P_1' for some P_1' with P' = P_1' □ P_2, or P_2 →τ P_2' for some P_2' with P' = P_1 □ P_2'. I pursue only the first case, as the other follows by symmetry. Here Q_1 ⇒ Q_1' for some Q_1' with P_1' ⊒_CS^Δ Q_1'. Thus Q_1 □ Q_2 ⇒ Q_1' □ Q_2 and (P_1' □ P_2) ℛ (Q_1' □ Q_2).

Now let P_i ⊒_CS^Δ Q_i for i=1,2. Then, for i=1,2, there is a Q_i' with Q_i ⇒ Q_i' and Q_i' ⊒_CS^Δ P_i. Hence Q_1 □ Q_2 ⇒ Q_1' □ Q_2' and (Q_1' □ Q_2') ℛ (P_1 □ P_2). Thus ℛ is a coupled simulation. That ℛ is divergence-preserving follows because P_1 □ P_2 ⇑ iff P_1 ⇑ ∨ P_2 ⇑.

⊒_CS^Δ is a precongruence for sliding choice. Let ℛ be the smallest relation such that, for any P_i and Q_i, P_i ⊒_CS^Δ Q_i for i=1,2 implies P_i ℛ Q_i (i=1,2) and (P_1 ▷ P_2) ℛ (Q_1 ▷ Q_2). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P_i ⊒_CS^Δ Q_i for i=1,2 and (P_1 ▷ P_2) →α P'. If α ∈ Σ then P_1 →α P', and there exists a Q' with Q_1 ⇒α Q' and P' ⊒_CS^Δ Q'. Hence Q_1 ▷ Q_2 ⇒α Q' and P' ℛ Q'.
If α = τ then either P' = P_2 or P_1 →τ P_1' for some P_1' with P' = P_1' ▷ P_2. In the former case Q_1 ▷ Q_2 ⇒ Q_2 and P_2 ℛ Q_2. In the latter case Q_1 ⇒ Q_1' for some Q_1' with P_1' ⊒_CS^Δ Q_1'. Thus Q_1 ▷ Q_2 ⇒ Q_1' ▷ Q_2 and (P_1' ▷ P_2) ℛ (Q_1' ▷ Q_2).

Now let P_i ⊒_CS^Δ Q_i for i=1,2. Then there is a Q_2' with Q_2 ⇒ Q_2' and Q_2' ⊒_CS^Δ P_2. By tau P_2 ⊒_CS^Δ P_1 ▷ P_2 and by preorder Q_2' ⊒_CS^Δ P_1 ▷ P_2. Thus ℛ is a coupled simulation. That ℛ is divergence-preserving follows because P_1 ▷ P_2 ⇑ iff P_1 ⇑ ∨ P_2 ⇑.

⊒_CS^Δ is a precongruence for parallel composition. Let A ⊆ Σ. Let ℛ be the smallest relation such that, for any P_i and Q_i, P_i ⊒_CS^Δ Q_i for i=1,2 implies (P_1 ‖_A P_2) ℛ (Q_1 ‖_A Q_2). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P_i ⊒_CS^Δ Q_i for i=1,2 and (P_1 ‖_A P_2) →α P'. If α ∉ A then P_i →α P_i' for i=1 or 2, and P' = P_1' ‖_A P_2', where P_{3−i}' := P_{3−i}. Hence there exists a Q_i' with Q_i ⇒α̂ Q_i' and P_i' ⊒_CS^Δ Q_i'. Let Q_{3−i}' := Q_{3−i}. Then Q_1 ‖_A Q_2 ⇒α̂ Q_1' ‖_A Q_2' and (P_1' ‖_A P_2') ℛ (Q_1' ‖_A Q_2'). If α ∈ A then P_i →α P_i' for i=1 and 2. Hence, for i=1,2, Q_i ⇒α Q_i' for some Q_i' with P_i' ⊒_CS^Δ Q_i'. Thus Q_1 ‖_A Q_2 ⇒α Q_1' ‖_A Q_2' and (P_1' ‖_A P_2') ℛ (Q_1' ‖_A Q_2').

Now let P_i ⊒_CS^Δ Q_i for i=1,2. Then, for i=1,2, there is a Q_i' with Q_i ⇒ Q_i' and Q_i' ⊒_CS^Δ P_i. Hence Q_1 ‖_A Q_2 ⇒ Q_1' ‖_A Q_2' and (Q_1' ‖_A Q_2') ℛ (P_1 ‖_A P_2). Thus ℛ is a coupled simulation. That ℛ is divergence-preserving follows because P_1 ‖_A P_2 ⇑ iff P_1 ⇑ ∨ P_2 ⇑.

⊒_CS^Δ is a precongruence for concealment. Let A ⊆ Σ. Let ℛ be the smallest relation such that, for any P and Q, P ⊑_CS^Δ Q implies (P \ A) ℛ (Q \ A). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P ⊑_CS^Δ Q and P \ A →α P^*. Then P^* = P' \ A for some P' with P →β P', and either β ∈ A and α = τ, or β = α ∉ A. Hence Q ⇒β̂ Q' for some Q' with P' ⊑_CS^Δ Q'. Therefore Q \ A ⇒α̂ Q' \ A and (P' \ A) ℛ (Q' \ A).

Now let P ⊑_CS^Δ Q. Then there is a Q' with Q ⇒ Q' and Q' ⊒_CS^Δ P. Hence Q \ A ⇒ Q' \ A and (Q' \ A) ℛ (P \ A).

To check that ℛ is divergence-preserving, suppose (P \ A) ⇑.
Then there are P_i and α_i ∈ A∪{τ} for all i>0 such that P →α_1 P_1 →α_2 P_2 →α_3 …. By the first condition of coupled, there are Q_i for all i>0 such that P_i ⊑_CS^Δ Q_i and Q ⇒α̂_1 Q_1 ⇒α̂_2 Q_2 ⇒α̂_3 …. This implies Q \ A ⇒ Q_1 \ A ⇒ Q_2 \ A ⇒ …. In case α_i ∈ Σ for infinitely many i, then for infinitely many i one has Q_{i−1} ⇒α_i Q_i and thus Q_{i−1} \ A ⇒τ Q_i \ A. This implies that (Q \ A) ⇑. Otherwise there is an n>0 such that α_i = τ for all i ≥ n. In that case P_n ⇑ and thus Q_n ⇑. Hence (Q_n \ A) ⇑ and thus (Q \ A) ⇑.

⊒_CS^Δ is a precongruence for renaming. Let f: Σ→Σ. Let ℛ be the smallest relation such that, for any P and Q, P ⊑_CS^Δ Q implies f(P) ℛ f(Q). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P ⊑_CS^Δ Q and f(P) →α P^*. Then P^* = f(P') for some P' with P →β P' and f(β) = α. Hence Q ⇒β̂ Q' for some Q' with P' ⊑_CS^Δ Q'. Therefore f(Q) ⇒α̂ f(Q') and f(P') ℛ f(Q').

Now let P ⊑_CS^Δ Q. Then there is a Q' with Q ⇒ Q' and Q' ⊒_CS^Δ P. Hence f(Q) ⇒ f(Q') and f(Q') ℛ f(P). To check that ℛ is divergence-preserving, suppose f(P) ⇑. Then P ⇑, so Q ⇑ and f(Q) ⇑.

⊒_CS^Δ is a precongruence for the interrupt operator. Let ℛ be the smallest relation such that, for any P_i and Q_i, P_i ⊒_CS^Δ Q_i for i=1,2 implies P_2 ℛ Q_2 and (P_1 △ P_2) ℛ (Q_1 △ Q_2). It suffices to show that ℛ is a divergence-preserving coupled simulation.

So let P_i ⊒_CS^Δ Q_i for i=1,2 and (P_1 △ P_2) →α P'. Then either P' = P_1' △ P_2 for some P_1' with P_1 →α P_1', or α = τ and P' = P_1 △ P_2' for some P_2' with P_2 →τ P_2', or α ∈ Σ and P_2 →α P'.

In the first case there is a Q_1' with Q_1 ⇒α̂ Q_1' and P_1' ⊒_CS^Δ Q_1'. It follows that (Q_1 △ Q_2) ⇒α̂ (Q_1' △ Q_2) and (P_1' △ P_2) ℛ (Q_1' △ Q_2).

In the second case there is a Q_2' with Q_2 ⇒ Q_2' and P_2' ⊒_CS^Δ Q_2'. It follows that (Q_1 △ Q_2) ⇒ (Q_1 △ Q_2') and (P_1 △ P_2') ℛ (Q_1 △ Q_2').

In the last case there is a Q_2' with Q_2 ⇒α Q_2' and P' ⊒_CS^Δ Q_2'. It follows that (Q_1 △ Q_2) ⇒α Q_2' and P' ℛ Q_2'.

Now let P_i ⊒_CS^Δ Q_i for i=1,2. Then, for i=1,2, there is a Q_i' with Q_i ⇒ Q_i' and Q_i' ⊒_CS^Δ P_i.
Hence Q_1 △ Q_2 ⇒ Q_1' △ Q_2' and (Q_1' △ Q_2') ℛ (P_1 △ P_2). Thus ℛ is a coupled simulation. That ℛ is divergence-preserving follows because P_1 △ P_2 ⇑ iff P_1 ⇑ ∨ P_2 ⇑.

§ A COMPLETE AXIOMATISATION OF ≡_CS^Δ

A set of equational laws valid for ≡_CS^Δ is presented in axsCS. It includes the laws from axsFD that are still valid for ≡_CS^Δ. I will show that this axiomatisation is sound and complete for ≡_CS^Δ for recursion-free CSP without the interrupt operator. The axioms for the interrupt operator that are not valid for ≡_CS^Δ played a crucial rôle in reducing CSP expressions with interrupt into normal form. It is not trivial to find valid replacements, and due to lack of space and time I do not tackle this problem here.

A new axiom replaces the fallen one, and is due to <cit.>. Here hiding actions may result in a process that cannot be expressed as a normal form built up from a→, □ and ⊓. For this reason, one needs a richer normal form, involving the sliding choice operator. It is given by the context-free grammar

N → D | D ▷ I
I → D | I ⊓ I
D → STOP | ÷ | E | ÷ □ E
E → (a→N) | (a→N) □ E

A CSP expression is in head normal form if it is of the form ([÷ □] □_{i∈I}(a_i → R_i)) ▷ ⊓_{j∈J} R_j, with R_j = ([÷ □] □_{k∈K_j}(a_kj → R_kj)) for j∈J. Here I, J and the K_j are finite index sets, and the parts between square brackets are optional. Here, although ⊓_{i∈∅} P_i is undefined, I use P ▷ ⊓_{i∈∅} P_i to represent P.
An expression is in normal form if it has this form and also the subexpressions R_i and R_kj are in normal form. A head normal form is saturated if the ÷-summand on the left is present whenever any of the R_j has a ÷-summand, and for any j∈J and any k∈K_j there is an i∈I with a_i = a_kj and R_i = R_kj.

My proof strategy is to ensure that there are enough axioms to transform any CSP process without recursion and interrupt operators into normal form, and to make these forms saturated; then to equate saturated normal forms that are divergence-preserving coupled simulation equivalent.

Due to the optional presence in head normal forms of a ÷-summand and a sliding choice, I need four variants of the corresponding axiom; so far I have not seen a way around this. Likewise, there are 4×4 variants of the expansion axiom for ‖_A from axsFD, of which 6 could be suppressed by symmetry (P4–P13). There are also 3 axioms replacing it (P14–P16).

§ SOUNDNESS

Since divergence-preserving coupled similarity is a congruence for all CSP operators, to establish the soundness of the axiomatisation of axsCS it suffices to show the validity w.r.t. ≡_CS^Δ of all axioms. When possible, I show validity w.r.t. strong bisimilarity, which is a strictly finer equivalence. Two processes are strongly bisimilar <cit.> if they are related by a binary relation ℛ on processes such that, for all α ∈ Σ∪{τ},

* if P ℛ Q and P →α P' then there exists a Q' with Q →α Q' and P' ℛ Q',
* if P ℛ Q and Q →α Q' then there exists a P' with P →α P' and P' ℛ Q'.

The idempotence of ⊓ is valid for ≡_CS^Δ: {(P ⊓ P, P), (P, P ⊓ P) | P a CSP process} ∪ Id is a divergence-preserving coupled simulation. The commutativity of ⊓ is valid even for strong bisimilarity: {(P ⊓ Q, Q ⊓ P) | P, Q CSP processes} ∪ Id is a strong bisimulation. The associativity of ⊓ is valid for ≡_CS^Δ: the relation {(P ⊓ (Q ⊓ R), (P ⊓ Q) ⊓ R), ((P ⊓ Q) ⊓ R, P ⊓ (Q ⊓ R)), (Q ⊓ R, (P ⊓ Q) ⊓ R), (P ⊓ Q, P ⊓ (Q ⊓ R)), (R, Q ⊓ R), (P, P ⊓ Q) | P, Q, R CSP processes} ∪ Id is a divergence-preserving coupled simulation. Axioms E2–4 are valid for strong bisimilarity. The relation {(P □ (Q □ R), (P □ Q) □ R) | P, Q, R CSP processes} ∪ Id is a strong bisimulation.
So is {(P Q, QP)| P,Q   }∪, as well as {(P, P)| P   }∪.Axiomis valid for ≡_CS^Δ. {(P'P, P),(P,P'P) | P' ⊒_CS^Δ P}∪ is a divergence-preserving coupled simulation. This follows from tau.Axiomis valid for ≡_CS^Δ. {( P(QR) , (PQ)R ), ((PQ)R , P(QR) ) | P,Q,R   }∪ is a divergence-preserving coupled simulation. [2]Axiomis valid for ≡_CS^Δ. {( (PQ)R , (PQ)R ), ( (PQ')R , (PQ)R ), ( QR , (PQ)R ),[3] ( R , QR ) |Q' ⊒_CS^Δ Q}∪ is a divergence-preserving coupled simulation.Axiomis valid for ≡_CS^Δ. {( (PQ)R , (PQ)R ), ( (P'Q')R , (PQ)R ), ( PR , (PQ)R ),[3] ( QR , (PQ)R ), ( R , QR ) | P' ⊒_CS^Δ P ∧ Q' ⊒_CS^Δ Q}∪ is a divergence-preserving coupled simulation. Checking this involves tau.Axiomis valid for ≡_CS^Δ. The relation {( P, P), (P,P) | P   }∪ is a divergence-preserving coupled simulation.Axiomis valid for ≡_CS^Δ. {( (PQ)(RS) , (PR)(Q S)),( (P'R')(Q S), (PQ)(RS) ),( PQ , (PR)(Q S)),( RS , (PR)(Q S)),( Q S, (PQ)(RS) ),( S, (P'R')(Q S) ),( S, RS ),( S, QS ) | P' ⊒_CS^Δ P ∧ R' ⊒_CS^Δ R}∪ is a divergence-preserving coupled simulation.Axiomis valid for ≡_CS^Δ. {( (PQ)(RS) , (PR)(Q S)),( (PR)(Q S), (PQ)(RS) ),( Q'(RS) , (PR)(Q S))  ,  ( (PQ)S' , (PR)(Q S)),( Q' S', Q'(RS) ),( Q' S', (PQ)S' ),| QQ' ∧ SS'}∪ is a divergence-preserving coupled simulation.Axiomis valid for ≡_CS^Δ. {( P'(QR) , (PQ)(PR) ),( (PQ)(PR) , P(QR) ),( P'Q , P'(QR) )| PP'}∪ is a divergence-preserving coupled simulation.Axiomis valid for ≡_CS^Δ. {( (a→ P)a→(PQ) , a→(PQ) ),( a→(PQ) , (a→ P)a→(PQ) )}[3] ∪ is a divergence-preserving coupled simulation. [3]Axioms P0–1 and P4–10 are valid for strong bisimilarity.Axioms P11–16 are valid for ≡_CS^Δ.Straightforward.Axioms , , R0–5 and T0–6are valid for strong bisimilarity. Axioms H5–8 are valid for ≡_CS^Δ.Straightforward. Axiomis valid for ≡_CS^Δ. {( (PQ) R' , (P R)(Q R) ),( (P R')(Q R) , (PQ) R ),( Q R' , (PQ) R' ),( Q R , (P R')(Q R)) | RR'}∪ is a divergence-preserving coupled simulation. Axiomis valid for ≡_CS^Δ. 
{( (PQ) R' , (P R)(Q R) ),( (P R)(Q R) , (PQ) R ),( P R' , (PQ) R' ) | RR'}∪ is a divergence-preserving coupled simulation. Axiom ' is valid for strong bisimilarity.{( PQ , (_i∈ I (a_i → (P_i Q) ))Q ), ( (_i∈ I (a_i → (P_i Q) ))Q , PQ )| P = _i ∈ I(a_i→ P_i) ∧Q = _j ∈ J(b_j→ Q_j)}∪ is a strong bisimulation.§ COMPLETENESS Letbe the axiomatisation of axsCS.For each recursion-free CSP process P without interrupt operators there is a CSP process Q in normal form such that ⊢ P=Q.By structural induction on P it suffices to show that for each n-ary CSP operator Op, and all CSP processes P_1,...,P_n in normal form, also Op(P_1,...,P_n) can be converted to normal form. This I do with structural induction on the arguments P_i. * Let P= or ÷. Then P is already in normal form. Take Q:=P.* Let P=a → P'. By assumption P' is in normal form; therefore so is P.* Let P=P_1P_2. By assumption P_1 and P_2 are in normal form. So P=(([÷]_i∈ I (a_i→ R_i))_j∈ JR_j) (([÷]_l∈ L (a_l→ R_l))_j∈ MR_j) with R_j=([÷]_k∈ K_j(a_kj→ R_kj)) for j∈ J∪ M. With AxiomI may assume that J,M≠∅. Now Axiomconverts P to normal form.* Let P=P_1P_2. By assumption P_1 and P_2 are in normal form. So P=(([÷]_i∈ I (a_i→ R_i))_j∈ JR_j) (([÷]_l∈ L (a_l→ R_l))_j∈ MR_j) with R_j=([÷]_k∈ K_j(a_kj→ R_kj)) for j∈ J∪ M. WithI may assume that J,M≠∅. Now Axiomsandconvert P to normal form.* Let P=P_1P_2. Axioms S2–4 andconvert P to normal form.* Let P=P_1 _A P_2. Axiomsand P4–16, together with the induction hypothesis, convert P to normal form.* Let P=P A. Axiomsand H5–8, together with the induction hypothesis, convert P to normal form.* Let P=f(P). Axioms R0–5, together with the induction hypothesis, convert P to normal form.* Let P=P_1P_2. Axioms T0–6, together with the induction hypothesis, convert P to normal form. For any CSP expression P in head normal form there exists a saturated CSP expression Q in head normal form. Let P=([÷]_i∈ I (a_i→ R_i))_j∈ JR_j. Then P has the form SR. By Axioms S1–3 ⊢ P = (S R)R. 
By means of Axiomsandthe subexpression SR can be brought in the form [÷]_l∈ L (a_l→ R_l). The resulting term is saturated. A CSP expression (_i∈ I(b_i→ P_i)) is pruned if, for all i,h∈ I, b_i=b_h ∧ P_i ⊒_CS^Δ P_h⇒ i=h.Let P and Q be recursion-free CSP processes without interrupt operators. Then P ≡_CS^Δ Q iff ⊢ P = Q.“⇐” is an immediate consequence of the soundness of the axioms of , and the fact that ≡_CS^Δ is a congruence for all operators of CSP.“⇒”: Let (P) be the length of the longest trace of P—well-defined for recursion-free processes P. If P ≡_CS^Δ Q then (P)=(Q). Given P ≡_CS^Δ Q, I establish ⊢ P = Q with induction on (P).By nf I may assume, without loss of generality, that P and Q are in normal form. By saturated I furthermore assume that P and Q are saturated. Let P=([÷]_i∈ I (a_i→ R_i))_j∈ JR_j and Q=([÷]_l∈ L (a_l→ R_l))_j∈ MR_jwith R_j=([÷]_k∈ K_j(a_kj→ R_kj)) for j∈ J∪ M, where R_i, R_l and R_kj are again in normal form.Suppose that there are i,h∈ I with i≠ h, a_i=a_h and R_i ⊒_CS^Δ R_h. Then R_iR_h ≡_CS^Δ R_h by CSpreorder. Since (R_iR_h) < (P), the induction hypothesis yields ⊢ R_iR_h = R_h. Hence Axiomallows me to prune the summand a_i → R_i from _i∈ I (a_i→ R_i). Doing this repeatedly makes _i∈ I (a_i→ R_i) pruned. By the same reasoning I may assume that _l∈ L (a_l→ R_l)is pruned.Since P⇑⇔ Q⇑ and P and Q are saturated, P has the ÷-summand iff Q does. I now define a function f:I→ L such that a_f(i)=a_i and R_i ⊒_CS^Δ R_f(i) for all i ∈ I.Let i ∈ I. Since P a_i R_i, by coupled Q a_i Q' for some Q' with R_i ⊒_CS^Δ Q'. Hence either there is an l∈ L such that a_l=a_i and R_lQ', or there is a j∈ M and k∈ K_j such that a_kj=a_i and R_kj Q'. Since P is saturated, the first of these alternatives must apply. By tau Q' ⊒_CS^Δ R_l and by preorder R_i ⊒_CS^Δ R_l. Take f(i):=l.By the same reasoning there is a function g:L→ I such that a_g(l)=a_l and R_l ⊒_CS^Δ R_g(l) for all l ∈ L. 
Since _i∈ I (a_i→ R_i) and _l∈ L (a_l→ R_l)are pruned, there are no different i,h ∈ I (or in L) with a_i=a_h and R_i ⊒_CS^Δ R_h. Hence the functions f and g must be inverses of each other. It follows that Q=([÷]_i∈ I (a_i→ R_f(i)))_j∈ MR_j with R_i ≡_CS^Δ R_f(i) for all i ∈ I. By induction ⊢ R_i = R_f(i) for all i∈ I.So in the special case that I=M=∅ I obtain ⊢ P = Q. (*)Next consider the case J=∅ but M ≠∅. Let j ∈ M. Since QR_j, there is a P' with PP' and R_j ⊒_CS^Δ P'. Moreover, there is a P” with P'P” and P”⊒_CS^Δ R_j. Since J=∅, P”=P'=P, so P ≡_CS^Δ R_j. By (*) above ⊢ P = R_j. This holds for all j ∈ J, so by Axiom⊢ Q=([÷]_i∈ I (a_i→ R_i)) P. By Axiomone obtains ⊢ P=Q.The same reasoning applies when M=∅ but J≠∅. So henceforth I assume J,M ≠∅. I now define a function h:J→ M with ⊢ R_j =R_h(j) for all j ∈ J.Let j ∈ J. Since P τ R_j, by coupled QQ' for some Q' with R_j ⊒_CS^Δ Q', and Q'Q” for some Q” with Q”⊒_CS^Δ R_j. There must be an m∈ M with Q” R_m. By coupled R_jR' for some R' with R_m ⊒_CS^Δ R', and R'R” for some R” with R”⊒_CS^Δ R_m. By the shape of R_j one has R”=R'=R_j, so R_j ≡_CS^Δ R_m. By (*) above ⊢ R_j = R_m. Take h(j):=m.By the same reasoning there is a function e:M→ J with ⊢ R_m =R_e(m) for all m ∈ M. Using Axioms I1–3 one obtains ⊢ P=Q. § CONCLUSION This paper contributed a new model of CSP, presented as a semantic equivalence on labelled transition systems that is a congruence for the operators of CSP. It is the finest I could find that allows a complete equational axiomatisation for closed recursion-free CSP processes that fits within the existing syntax of the language. 
For τ-free systems, my model coincides with strong bisimilarity, but in matching internal transitions it is less pedantic than weak bisimilarity. It is left for future work to show that recursion is treated well in this model, and also to extend my complete axiomatisation with the interrupt operator of Roscoe <cit.>. An annoying feature of my complete axiomatisation is the enormous collection of heavy-duty axioms needed to bring parallel compositions of CSP processes into head normal form. These are based on the expansion law of Milner <cit.>, but a multitude of them is needed due to the optional presence of divergence-summands and sliding choices in head normal forms. In the process algebra ACP the expansion law could be avoided through the addition of two auxiliary operators: the left merge and the communication merge <cit.>. Unfortunately, failures-divergences equivalence fails to be a congruence for the left-merge, and the same problem exists for any other model of CSP <cit.>. In <cit.> an alternative left-merge is proposed, for which failures-divergences equivalence, and also ≡_CS^Δ, is a congruence. It might be used to eliminate the expansion law from the axiomatisation of axsFD. Unfortunately, the axiom that splits a parallel composition between a left-, right- and communication merge (Axiom CM1 in <cit.>), although valid in the failures-divergences model, is not valid for ≡_CS^Δ. This leaves the question of how to better manage the axiomatisation of parallel composition entirely open.
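As a concrete complement to the soundness proofs above, which repeatedly exhibit relations and claim they are strong bisimulations: on a finite labelled transition system, strong bisimilarity can be checked mechanically as a greatest fixpoint, starting from the full relation and deleting pairs that violate either transfer condition. A minimal sketch, with a hypothetical encoding of states and transitions (not taken from the paper):

```python
def strong_bisimilarity(states, trans):
    """trans: set of (p, a, q) triples; returns the largest strong bisimulation."""
    def succs(p, a):
        return {q for (s, b, q) in trans if s == p and b == a}

    actions = {a for (_, a, _) in trans}
    # Start from the full relation and strip violating pairs (greatest fixpoint).
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # Both transfer conditions of the definition of strong bisimilarity.
            ok = all(any((p2, q2) in rel for q2 in succs(q, a))
                     for a in actions for p2 in succs(p, a)) \
             and all(any((p2, q2) in rel for p2 in succs(p, a))
                     for a in actions for q2 in succs(q, a))
            if not ok:
                rel.discard((p, q))
                changed = True
    return rel
```

This naive fixpoint iteration is quadratic in the number of pairs per round; partition refinement would be the efficient alternative, but the version above mirrors the definition most directly.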
Hadronic coupling constants of g_σππ in lattice QCD
Supported by the National Magnetic Confinement Fusion Program of China (2013GB109000).

Lingyun Wang^1, Ziwen Fu^2,3 (fuziwen@scu.edu.cn, corresponding author), Hang Chen^3

Draft of December 30, 2023

^1 International Affair Department, Chengdu Jiaxiang Foreign Language School, Chengdu 610023, China
^2 Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
^3 Center for Theoretical Physics, College of Physical Science and Technology, Sichuan University, Chengdu 610064, China

We investigate the coupling constant g_σππ for the hadronic decay σ→ππ using only the relevant three-point function, which is evaluated by the moving-wall source technique with a quite good noise-to-signal ratio. This simulation is carried out on a 40^3×96 MILC gauge configuration with N_f=2+1 flavors of the “Asqtad” improved staggered dynamical sea quarks at the lattice spacing a ≈ 0.09 fm. Our estimated value for this given MILC fine lattice gauge ensemble is g_σππ=2.71(42) GeV.

Keywords: hadronic coupling constant, scalar meson, staggered fermions. PACS: 12.38.Gc, 11.15.Ha

§ INTRODUCTION In 2016, the Particle Data Group (PDG) listed the f_0(500), often called the σ meson, with I(J^PC)=0(0^++), mass 400–550 MeV and a broad width of 400–700 MeV <cit.>.
Although direct determination of the σ resonance parameters from QCD is difficult, since it is a nonperturbative problem, theoretical efforts have still been made to study the σ meson and its resonance parameters <cit.>. The most feasible way to obtain the σ resonance parameters nonperturbatively from first principles is lattice QCD. So far, there are only a couple of lattice reports on the σ resonance parameters. The first lattice attempt was performed, at a preliminary level, on a MILC “medium” coarse lattice ensemble <cit.>. Nevertheless, the evaluation of the vacuum diagram of I=0 ππ scattering there is not convincing <cit.>. The Hadron Spectrum Collaboration observed that the I=0 ππ scattering amplitude exhibits the characteristics of a σ appearing as a broad resonance at the lighter quark mass, which closely resembles the experimental case <cit.>. The ETM Collaboration studied the s-wave I=0 ππ scattering length with twisted mass lattice QCD <cit.>. M. Doring et al. extracted the σ resonance by analyzing recent results on isoscalar ππ scattering <cit.>. Although it is quite expensive to implement, the moving wall source technique, originally designed for the center-of-mass frame <cit.>, is recognized to compute four-point and three-point functions of two-particle scattering with high quality. Recently we further extended this method to two-particle systems with non-zero momenta, tentatively investigating the scalar mesons κ and σ <cit.>, and the vector meson K^⋆(892) and ρ meson decays <cit.>, along with a few studies of meson-meson scattering <cit.>. In these works, we confirmed that the moving wall source can compute the relevant four-point and three-point correlators with high quality.
Lattice studies using staggered fermions are comparatively cheap in contrast with those using other lattice discretizations; consequently, they enable us to carry out lattice examinations with a larger lattice spatial dimension L or a smaller quark mass for fixed, limited computer resources. For this reason, in this work we study the σ resonance on a 40^3×96 MILC fine gauge configuration with N_f=2+1 flavors of the “Asqtad” improved staggered dynamical sea quarks <cit.>. We found that the noise-to-signal ratio of the three-point function is quite good, compared with that of the four-point function, which suffers from the so-called vacuum diagram. Inspired by lattice calculations of the hadronic coupling constants of vector mesons using only three-point functions <cit.>, we investigate the hadronic coupling constant of the scalar-meson strong decay process σ→π^+ + π^- (i.e., g_σπ^+π^-, which for short we call g_σππ) using only the corresponding three-point function, since, as demonstrated later, it can be calculated with high quality in reasonable amounts of computer time. This requires a function that parameterizes the three-point function and in which the hadronic coupling constant g_σππ appears as a parameter. To this end, the generating functional method is used to connect the three-point function to the hadronic coupling constant. Moreover, we must determine the relevant Clebsch-Gordan (CG) coefficients, since the coupling constants are normally quoted as flavor-independent quantities. § THE PHENOMENOLOGICAL MODEL In this section, the original definitions and notations in Refs.
<cit.> are employed to derive the relationship between the hadronic coupling constant of the scalar-meson strong decay process S → A + π and the three-point function of the scalar-meson field S(x), a pseudoscalar meson field A(x), and a pion field π(x): Γ_3 ≡⟨S̃(𝐪_S,t_S) A(0,t_A)π̃(𝐪_π, t_π) ⟩ , where the tilde implies that the relevant field operator is defined in momentum space, for instance, π̃(𝐪,t) = ∫d^3𝐱/(2π)^3 e^-i𝐪·𝐱π(𝐱,t) . On the lattice, the integral goes over to a sum. This three-point function is related to the three-point function in momentum space: Γ_3 = ∫d^3𝐪_A/(2π)^3⟨S̃(𝐪_S,t_S)Ã(𝐪_A,t_A)π̃(𝐪_π, t_π) ⟩ . These interacting fields can be renormalized to the asymptotic free fields (i.e., at spatial infinity) by a field strength renormalization constant √(Z), so that the single-particle contribution to each propagator has the same behavior near its pole as the propagator of a free field. As a result, Eq. (<ref>) can be expressed as Γ_3= ∫d^3𝐪_A/(2π)^3√(Z_S(𝐪_S)Z_A(𝐪_A)Z_π(𝐪_π)) × ∫ d^3𝐱_S e^i𝐪_S ·𝐱_S∫ d^3𝐱_A e^i𝐪_A ·𝐱_A∫ d^3𝐱_π e^i𝐪_π·𝐱_π × ⟨S_as(𝐱_S,t_S) A_as(𝐱_A,t_A)π_as(𝐱_π, t_π) ⟩ , where the subscripts “as” on the fields indicate the asymptotic free fields. In practice, the renormalization constant Z_S(𝐪_S) in Eq. (<ref>) can be calculated from the scalar-meson amplitude of the operator S, i.e., Z_S(𝐪_S) = | ⟨ S, 𝐪_S | S̃(𝐪_S;0) | 0 ⟩ |^2, where |0⟩ is the vacuum state. The definitions of Z_A(𝐪_A) and Z_π(𝐪_π) are similar. The last term in Eq. (<ref>) is a typical Euclidean three-point function met in Lehmann-Symanzik-Zimmermann (LSZ) reduction theory. One needs a phenomenological model of the strong decay of a scalar meson to measure it.
The process under study is customarily expressed by a phenomenological interaction term parameterized by a coupling constant g_SAπ. The general effective interaction Lagrangian, which parameterizes the decay of a scalar meson S into a pseudoscalar meson A and a pion π, can be cast as 𝐿_int(x) = g_SAπ𝑐_ijk S^i(x) A^j(x) π^k(x), where 𝑐_ijk is a Clebsch-Gordan coefficient on the isospin indices i, j, and k. Our normalization of 𝑐_ijk is chosen so that the Lagrangian transforms as a scalar under flavor transformations, and for the vertices σ→ππ or κ→ Kπ, the kinematic factor K(𝐪) is a constant, i.e., 𝐾(𝐪) = 1 . Note that the kinematic factor K(𝐪) does not depend on the momenta of the participants. Once the phenomenological model is introduced, the generating functional method can be applied to solve our problem. The Euclidean three-point function implied by this interaction can be directly estimated from the Feynman path integral through generating functionals, that is G_3(x_S,x_A,x_π) = g∫ d^4 x_S(x-x_S) _A(x-x_A) _π(x-x_π), where (x-y) is the free Feynman propagator between x and y, and we only consider the tree-level contribution to the generating functionals. Following the pioneering works in Refs. <cit.>, we can readily write the integral in Eq. (<ref>) as the product of three exponentials: Γ_3 ≡ f_SAπ×∫ dt e^-E_S(𝐪_S)|t_S-t| e^-E_A(𝐪_A)|t_A-t| e^-E_π(𝐪_π)|t_π-t| , where we define f_SAπ≡ g_SAπ√(Z_S(𝐪_S) Z_A(𝐪_A) Z_π(𝐪_π))/8E_S(𝐪_S) E_A(𝐪_A) E_π(𝐪_π) . Assuming large time distances between the operators, we can restrict ourselves to the low-lying particle states. For a fixed A-meson source at time slice t_A and a fixed pion source located at time t_π, the sum can be evaluated piecewise for three cases: t_S> t_π > t_A, t_π> t_S > t_A and t_A > t_π > t_S.
The following expression can then be derived for t_S> t_π > t_A: Γ_3=f_SAπ[ P(E_S, E_π+E_A)e^-E_π(t_S-t_π)e^-E_A(t_S-t_A)+ P(E_A, E_S+E_π)e^-E_S(t_S-t_A)e^-E_π(t_π-t_A)+ P(E_π, E_S-E_A)e^-E_S(t_S-t_π)e^-E_A(t_π-t_A)] , where, to ease the notation, the free lattice particle propagator is denoted as P(ω,E) ≡sinh(ω)/[cosh(ω)-cosh(E)]. The other cases can be written down similarly. Since, in this work, we do not measure the corresponding lattice data, their explicit form is of no relevance here. We should remark at this point that, like other staggered hadron operators, the σ operator also has the undesired property of coupling to a state with opposite parity, namely a taste-axial-vector η_A meson <cit.>. This parity-partner state contributes to the three-point correlators, so additional terms should in general be taken into account in Eq. (<ref>), as is done in Ref. <cit.>, resulting in more sophisticated parameterizations. In principle, Eq. (<ref>) should also include terms accounting for the parity partners of the pions. However, the pion operator was chosen to be the lattice Goldstone pion, and the taste of its oscillating parity partner is γ_0. This particle (i.e., π_V) is an exotic state since its J^PC = 0^+-. Therefore, the parity partner of the pion created by this operator is highly suppressed (i.e., Z ∼ 0, as can be seen in the analysis of the pion propagator) and can be neglected in the analysis; consequently, it need not be considered in Eq.
(<ref>) for the current study. To estimate the scalar-meson partial width for the aforementioned three-point interaction, we conveniently begin with the generic two-body decay rate formula in the center-of-mass frame of the decaying particle, dΓ_ijk = |𝑀_ijk|^2 |𝐩_f|/32π^2m_S^2 dΩ , where 𝑀_ijk is the matrix element, m_S is the mass of the scalar meson S, and |𝐩_f| is the magnitude of either outgoing momentum. Performing the angular integration, averaging over the incoming spin states, and summing over the outgoing spin states, we arrive at Γ_ijk = |𝑀_ijk|^2 |𝐩_f| (2J_A+1)/8π m_S^2 (2J_S+1). From the interaction vertex denoted in Eq. (<ref>), we can easily obtain that the matrix element squared is just |𝑀_ijk|^2 =g_SAπ^2𝑐_ijk^2 . The final formula for the total decay width can be expressed as <cit.> Γ = g_SAπ^2 |𝐩_f| /8π m_S^2∑_ijk𝑐_ijk^2 . We should bear in mind that g_SAπ is dimensional. Like the other dimensional parameters calculated in QCD, it is anticipated to display dependence on the lattice spacing. Fortunately, according to the studies in Ref. <cit.>, the scalar coupling constant g_SAπ (i.e., g_σππ or g_κ Kπ) is roughly constant, varying only slowly as the quark mass changes. We note that there is an extra m_σ/2 in Eq. (<ref>) for σ→ππ in Ref. <cit.>, which renders the coupling g_σππ dimensionless <cit.>. § LATTICE CORRELATOR We have described a detailed procedure to measure the σ correlator ⟨ 0 | σ^†(t)σ (0) | 0 ⟩ <cit.>. To simulate the correct number of quark species, we use an interpolator with isospin I=0 and J^P=0^+ at source and sink, O(x)≡∑_a, tu̅^c_t( x ) u^c_t( x ) +d̅^c_t( x ) d^c_t( x ) /√(2n_t) , where t is the taste-replica index, n_t is the number of taste replicas, and c is a color index.
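To make the formulas above concrete, the lattice propagator factor P(ω,E), the t_S > t_π > t_A branch of the three-point model, and the two-body width formula can be sketched as small numerical helpers. The energies, the prefactor f_SAπ, and the isospin factor Σc² below are illustrative placeholders, not values from this work:

```python
import math

def prop(w, E):
    # Free lattice particle propagator P(w, E) = sinh(w) / (cosh(w) - cosh(E)).
    return math.sinh(w) / (math.cosh(w) - math.cosh(E))

def gamma3(t_S, t_pi, t_A, E_S, E_pi, E_A, f_SApi):
    # Three-point model for the time ordering t_S > t_pi > t_A; f_SApi collects
    # the coupling g_{SApi} and the overlap factors Z, divided by 8 E_S E_A E_pi.
    assert t_S > t_pi > t_A
    return f_SApi * (
        prop(E_S, E_pi + E_A) * math.exp(-E_pi * (t_S - t_pi)) * math.exp(-E_A * (t_S - t_A))
      + prop(E_A, E_S + E_pi) * math.exp(-E_S * (t_S - t_A)) * math.exp(-E_pi * (t_pi - t_A))
      + prop(E_pi, E_S - E_A) * math.exp(-E_S * (t_S - t_pi)) * math.exp(-E_A * (t_pi - t_A)))

def decay_width(g, m_S, m_pi, cg_sq=1.0):
    # Gamma = g^2 |p_f| sum(c^2) / (8 pi m_S^2), with |p_f| the rest-frame
    # momentum of either daughter for an equal-mass two-body decay.
    p_f = math.sqrt(m_S**2 / 4.0 - m_pi**2)
    return g**2 * p_f * cg_sq / (8.0 * math.pi * m_S**2)
```

Fitting then amounts to adjusting f_SAπ (and hence g_SAπ) until gamma3 matches the measured correlator, after which decay_width converts the coupling into a partial width.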
After carrying out the Wick contractions of the fermion fields, and summing over the taste and color indices, we obtain the time-slice correlator C(t) with momentum 𝐩: C(𝐩,t)=-1/2∑_𝐱 e^i 𝐩·𝐱⟨ M^-1 (𝐱,t; 𝐱,t)M^-1 ( 0,0; 0,0) ⟩ + ∑_𝐱(-)^x e^i 𝐩·𝐱⟨ [ M^-1 (𝐱,t; 0,0) M^-1†(𝐱,t; 0,0) ] ⟩, where M is the light quark Dirac operator, and the first and second terms are the quark-line disconnected and connected contributions, respectively <cit.>. Like other staggered hadron operators, the σ operator also has an undesired oscillating term with opposite parity, namely a taste-axial-vector η_A meson <cit.>. In practice, we take one mass with each parity <cit.>. Then, the σ correlator was fit to C_σ(t)= Z_σe^-m_σt + Z_η_A(-)^t e^-M_η_At + (t → N_t-t), where Z_σ and Z_η_A are two overlap factors. We should bear in mind that, for the staggered Kogut-Susskind quark action, the σ interpolator couples to various tastes, as we examined in our previous studies for the scalar a_0, σ and κ mesons <cit.>, where we investigated the bubble contribution and found that it dominates the correlator at large time distances. Thus, we should remove this term from the σ correlator. It is well known that four-point and three-point functions are very difficult to calculate, and the so-called stochastic source method, or its variants such as the distillation method, are normally used to compute them <cit.>. Although it is quite expensive to implement, the moving wall source technique is used to compute the three-point functions in this work. To avoid the Fierz rearrangement of the quark lines, we choose t_1 =0, t_2=1, and t_3=t for the σ→ππ three-point function <cit.>. The quark-line diagrams contributing to the σ→ππ three-point correlation function are displayed in Fig. <ref>. The calculation of the σ→ππ three-point function is quite difficult.
In practice, we employ an up-antiquark source with 1 on each lattice site 𝐱 for one pion creation operator, and an up-quark source with e^i𝐩·𝐱 on each lattice site 𝐱 for the other pion creation operator <cit.>.

[Figure] Quark-line contraction diagrams of σ→ππ. Short bars represent the wall sources. Open circles stand for the sinks of the local pion operator.

It is worthwhile to stress that the imaginary part of the second diagram for σ→ππ has the same magnitude as that of the first diagram but with opposite sign. As a consequence, the three-point diagrams are purely real, and only one quark-line diagram needs to be calculated. It is interesting and important to note that the value in Eq. (<ref>) is also purely real, as expected. We then write the first of the σ→ππ quark-line diagrams in Fig. <ref> in terms of the light quark propagators G, C_σ→ππ (𝐩,t_3,t_2,t_1)= ∑_𝐱_3, 𝐱_1 e^ i 𝐩·𝐱_3 ⟨ [G_t_1(𝐱_3, t_3) G_t_2^†(𝐱_3, t_3)G_t_2^†(𝐱_1, t_1) ] ⟩, where the trace is taken over the color index and a Dirac matrix is used as an interpolator for the ith meson: γ_5 for the pseudoscalar meson and 1 for the σ meson. § LATTICE CALCULATION In this work, we used 400 MILC 40^3 × 96 gauge configurations with 2+1 flavors of the Asqtad-improved staggered fermions, with bare quark masses am_ud/am_s = 0.0031/0.031 and bare gauge coupling 10/g^2 = 7.08 <cit.>. The lattice spacing a is about 0.09 fm. The masses of the u and d quarks are degenerate. All the gauge configurations were gauge fixed to the Coulomb gauge before calculating the quark propagators. The standard conjugate gradient method is utilized to acquire the required matrix elements of the inverse Dirac fermion matrices, and the conjugate gradient residual is set to 1.0×10^-5, which is generally smaller than that used in generating the MILC gauge configurations <cit.>. Moreover, all the numerical calculations are performed in double precision to avoid potential roundoff errors.
In the calculation of the σ correlator, σ(t)=1/T∑_t_s⟨σ^†(t+t_s) σ(t_s)⟩ , we average all the possible correlators. One thing we must stress is that we use Z_2 noise estimators based on random color fields to measure the disconnected contribution of the σ correlator <cit.>. Using the criterion discussed in Ref. <cit.>, we determine that 1000 noise Z_2 sources are sufficiently reliable to measure the disconnected part. Fitting the σ correlator with Eq. (<ref>), we can obtain the σ mass m_σ and two overlap amplitudes, Z_η_A and Z_σ, which will subsequently be plugged into Eq. (<ref>) to estimate the three-point functions. We compute the σ→ππ three-point functions on all the time slices, and explicitly combine the results from all the time slices T; namely, the σ→ππ three-point correlator C_σ→ππ(t) is measured through C_σ→ππ(t)= 1/T∑_t_s^T⟨σ(t+t_s)(ππ)^†(t_s)⟩ . After averaging the propagators over all the T values, the statistics are found to be remarkably improved. According to the discussion in the Appendix of Ref. <cit.>, the noise-to-signal ratios of the σ correlator and the σ→ππ correlator improve approximately as ∝1/√(N_ slice L^3), where L is the lattice spatial dimension and N_ slice is the number of time slices on which the propagators are calculated for each gauge configuration. In this work, we use a lattice ensemble with relatively large L and sum both the σ correlator and the σ→ππ correlator over all the time slices; consequently, it is natural that the signals of the correlators should be significantly improved. Admittedly, the most efficient way to improve the relevant noise-to-signal ratio is to use finer gauge configurations or anisotropic gauge configurations <cit.>.
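The fit form used above for the σ correlator — a σ exponential plus an oscillating η_A parity-partner term, together with the backward (t → N_t − t) image — can be written as a small model function; the parameter values in the usage below are placeholders, not fitted values:

```python
import math

def sigma_corr(t, N_t, Z_sigma, m_sigma, Z_eta, M_eta):
    # C(t) = Z_sigma e^{-m_sigma t} + Z_eta (-1)^t e^{-M_eta t} + (t -> N_t - t).
    def half(tt):
        return Z_sigma * math.exp(-m_sigma * tt) \
             + Z_eta * (-1) ** tt * math.exp(-M_eta * tt)
    return half(t) + half(N_t - t)
```

In a fit, (t, C(t)) pairs from the lattice would constrain m_sigma, M_eta and the two overlap factors; the oscillating sign (-1)^t is what distinguishes the parity partner from the smooth σ signal.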
We measure two-point pion correlators with zero and nonzero momenta (0 and 𝐩) as well, C_π(0, t)= 1/T∑_t_s=0^T-1⟨ 0|π^† (0, t+t_s) W_π(0, t_s) |0⟩, C_π(𝐩, t)= 1/T∑_t_s=0^T-1⟨ 0|π^† (𝐩, t+t_s) W_π(𝐩, t_s) |0⟩, where π is the pion point-source operator and W_π is the pion wall-source operator <cit.>. It is worth noting that the summations over all the time slices for the π propagators guarantee the extraction of the π mass with high precision. Disregarding the contributions from the excited states, the pion mass m_π and energy E_π(𝐩) can be robustly extracted at large t from the two-point pion correlators (<ref>) and (<ref>), respectively <cit.>, C_π(0, t)=Z_π(0) [e^-m_π t+e^-m_π(T-t)] +⋯, C_π(𝐩, t)=Z_π(𝐩) [e^-E_π(𝐩) t+e^-E_π(𝐩)(T-t)] + ⋯, where the ellipses denote the oscillating parity partners, and Z_π(0) and Z_π(𝐩) are two overlap amplitudes, which will subsequently be plugged into Eq. (<ref>) to estimate the three-point correlation functions. § SIMULATIONS AND RESULTS The valence u/d quark masses are set to their dynamical values, while the valence strange quark mass is fixed to its physical value, as determined by the MILC Collaboration <cit.>. In the usual manner, we extracted the π, K, and fictitious ss̅ masses <cit.>, which are used to evaluate the bubble contribution to the σ correlators <cit.>, where three low-energy couplings (μ, δ_A, and δ_V) are fixed to MILC-determined values <cit.>. After neatly removing the unwanted bubble terms from the σ propagators, the remaining σ propagators contain clean information, and we then fit them with the physical model in Eq. (<ref>). The extracted meson masses give m_π/m_σ≈ 0.38 < 0.5, ensuring that the physical kinematics for the σ-meson decay is satisfied. In this work, we directly quote the lattice parameters determined by the MILC Collaboration <cit.>. First of all, we adopt the natural choice of having the σ meson at rest and the two pions at rest.
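The cosh-like fit form for the pion correlator above, and the standard effective-mass estimator derived from it, can be sketched as follows (the mass, amplitude, and time extent below are illustrative, not this work's fitted values):

```python
import math

def pion_corr(t, T, Z, m):
    # C(t) = Z (e^{-m t} + e^{-m (T - t)}): forward plus backward propagation
    # from the (anti)periodic temporal boundary; oscillating terms omitted.
    return Z * (math.exp(-m * t) + math.exp(-m * (T - t)))

def effective_mass(C_t, C_t1):
    # Naive estimator m_eff(t) = log(C(t) / C(t+1)); it approaches the true
    # mass for t well away from the source and from T/2.
    return math.log(C_t / C_t1)
```

With lattice data, a plateau of effective_mass against t signals the ground-state dominance assumed in the large-t fits.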
Moreover, we also measure the case with the two pions at nonzero momentum, since we found that the signal of the Goldstone pion propagator with nonzero momentum is also stable with our particular choice of kinematics. The time slice t_1 of one pion has been fixed to t_1 = 1, the other pion was located at t_π = 0, and we evaluated the correlation function for all times t of the σ meson. In this work, the π meson was given the minimal lattice momentum 𝐩_π = (2π/L)𝐞_z. Figure <ref> shows our computed three-point function on the lattice as a function of the temporal location of the scalar meson, with the scalar σ meson at rest. The red octagons indicate the three-point function with the two pions at rest, and the blue squares show the three-point function with the two pions at momenta 𝐩_π = (2π/L)𝐞_z and -(2π/L)𝐞_z, respectively. As expected, we observe a very clear signal. This situation is quite different from that reported for the vector ρ meson, where far from the source Γ_3 is consistent with zero <cit.>. It is interesting and important to note that the oscillating parity partners in the three-point functions are not visible in Fig. <ref>. This is easy to understand, since the parity partner of the lattice Goldstone pion corresponds to an exotic state and is highly suppressed, so it can be neglected in the analysis. As a consequence, the MILC Collaboration usually adopts a fit of type “1,0” in fitting the pion mass, which means that the oscillating parity partner is not included <cit.>. We should remark at this point that the σ operator also has the undesired property of coupling to a state with opposite parity, namely the taste-axial-vector η_A meson <cit.>. However, the σ mass m_σ is much smaller than that of the η_A meson for the MILC lattice ensemble used in the present work, so the latter is highly suppressed and can be neglected in the analysis.

[Figure] Real parts of the σ→ππ three-point function with the σ meson at rest.
Occasional points with negative central values are not displayed. In principle, we can use the above information to obtain a fitted value of the coupling constant g_σππ. To get more information, the three-point functions were also generated by giving the σ meson a momentum 𝐩_σ and varying the σ-meson time-slice location t. In this work, we chose to put the same spatial momentum 𝐩 on the σ meson and one pion, while the other pion was set to zero momentum. Our signal is much more stable with this particular choice of kinematics. The time slice t_1 of one pion has been fixed to t_1 = 1, the other pion was located at t_π = 0, and we evaluated the correlation function for all times t of the σ meson. The σ meson was given the minimal lattice momentum 𝐩_σ = (2π/L)𝐞_z in the z direction, and we also measure at 𝐩_σ =(2π/L)(𝐞_y+𝐞_z), (2π/L)(𝐞_x+𝐞_y+𝐞_z), and (2π/L)(2𝐞_z) (i.e., 𝐩_σ = [0,0,1], [0,1,1], [1,1,1], [0,0,2]). Figure <ref> shows our lattice-measured three-point functions with the σ meson at the above-mentioned four momenta, as a function of the temporal location of the σ meson. As expected, we observe a very clear signal, and it is interesting to note that the oscillating behavior generally contributes relatively more at the higher momenta. This is easy to comprehend, since the σ energy E_σ gets closer and closer to the η_A energy E_η_A at higher momenta. We are now in a position to discuss the flavor quantum numbers of the states we are investigating, since in the lattice calculation the coupling of any u̅u pair to a meson is assumed to be just unity. The σ, π^+ and π^- wave functions are 1/√(2)(uu̅+dd̅), d̅u, u̅d, respectively. As a consequence, there is a factor of 2, since the q̅q pair which pops out of the vacuum can be either a u̅u pair or a d̅d pair when the σ decays <cit.>; therefore g_σππ = √(2)g_lattice .

[Figure] Real parts of the σ→ππ three-point function with the σ meson at momentum 𝐩.
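For reference, the lattice momenta quoted above translate into physical units via |𝐩| = (2π/La)|𝐧|, and the corresponding free-particle energies follow the continuum dispersion relation. A sketch using this ensemble's L = 40 and the approximate a ≈ 0.09 fm (the conversion constant ħc = 0.19733 GeV·fm is standard):

```python
import math

def lattice_momentum(n_vec, L=40, a_fm=0.09):
    # |p| in GeV for integer momentum vector n_vec on an L^3 spatial lattice.
    hbar_c = 0.19733  # GeV * fm
    p_lat = (2.0 * math.pi / L) * math.sqrt(sum(n * n for n in n_vec))
    return p_lat * hbar_c / a_fm

def free_energy(m_GeV, n_vec, L=40, a_fm=0.09):
    # Continuum dispersion relation E = sqrt(m^2 + p^2), in GeV.
    p = lattice_momentum(n_vec, L, a_fm)
    return math.sqrt(m_GeV ** 2 + p ** 2)
```

This makes explicit why the oscillating contamination grows with momentum: as |𝐧| increases, E_σ rises toward E_η_A and the two exponentials become harder to separate.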
Occasional points with negative central values are not displayed. Now we are ready to determine the coupling constant g_σππ from the numerical data shown in Fig. <ref> and Fig. <ref>. All the constants, together with the masses and energies of the different particle states needed as inputs to Eq. (<ref>), were extracted from the analysis of two-point functions and then used to obtain the theoretical form of the three-point function. The coupling constant g_σππ is determined by fitting this function to the lattice-measured three-point functions, discarding various choices of time slices. The three-point functions were measured with the σ meson at five momenta 𝐩 = (0,0,0), (0,0,1), (0,1,1), (1,1,1), and (0,0,2). All six correlators (two at 𝐩 = (0,0,0)) were then simultaneously fitted to the physical model in Eq. (<ref>) with the single fitting parameter g_σππ. We find, for the local σ operator,

g_σππ = 2.71 ± 0.42 GeV.

This is in reasonable agreement with the recent analytic predictions from the residue at the complex pole, which are listed in Table <ref> together with our former lattice result <cit.>. It is also in fair accordance with the Hadron Spectrum Collaboration's lattice result <cit.>. The agreement is fairly reasonable, taking into account that, according to the studies in Ref. <cit.>, the scalar coupling constant g_SAπ (i.e., g_σππ or g_κKπ) is roughly constant, varying quite slowly as the quark mass changes. Moreover, the Hadron Spectrum Collaboration found that the coupling constant g_σππ is approximately independent of the quark mass <cit.>. We should remark at this point that, since g_SAπ is not dimensionless, it is expected to show a dependence on the lattice spacing. Since we only work on one MILC lattice ensemble, our obtained g_SAπ is not the physical one. A more sophisticated evaluation should be carried out on several lattice ensembles, together with a discussion of the mass dependence.
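The simultaneous one-parameter fit described above can be sketched as follows. The single-exponential model, amplitudes A_i and energies E_i below are synthetic stand-ins for the physical model of Eq. (<ref>) and the two-point-function inputs; they are illustrative assumptions, not actual lattice data.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Illustrative stand-in for Eq. (<ref>): each correlator is proportional to
# the single coupling g, with per-momentum prefactors A_i and energies E_i
# (here invented) that would come from two-point-function fits.
A = np.array([1.0, 0.8, 0.6, 0.5, 0.4, 0.3])
E = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
t = np.arange(1, 20)

def model(g, A_i, E_i):
    return g * A_i * np.exp(-E_i * t)

# Synthetic "measured" correlators with 2% multiplicative noise.
g_true = 2.71
data = [model(g_true, a, e) * (1.0 + 0.02 * rng.standard_normal(t.size))
        for a, e in zip(A, E)]

def residuals(params):
    # One shared parameter g: concatenate the residuals of all correlators.
    g = params[0]
    return np.concatenate([model(g, a, e) - d for a, e, d in zip(A, E, data)])

fit = least_squares(residuals, x0=[1.0])
print(f"fitted g = {fit.x[0]:.3f}")
```

Since all correlators share the one parameter g, the residual vector simply concatenates the per-momentum residuals; in a real analysis the measured covariances would enter here as weights.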
Other recent determinations of the σ meson coupling to two pions (g_σππ), using some form of analytic properties or data constrained by Roy equations and chiral symmetry:

Reference                                          |g_σππ| (GeV)
Pelaez <cit.>                                      3.45^+0.25_-0.22
Masjuan, Ruiz de Elvira and Sanz-Cillero <cit.>    3.8 ± 0.4
Oller <cit.>                                       2.97 ± 0.05
Kaminski, Mennessier and Narison <cit.>            2.2
Mennessier, Narison and Wang <cit.>                2.65 ± 0.1
Garcia-Martin et al. <cit.>                        3.59^+0.11_-0.13
Pelaez and Rios <cit.> (fit D)                     3.5
Narison <cit.>                                     5.3 ± 1.8
Nebreda and Pelaez <cit.>                          2.86
Fu <cit.>                                          2.69 ± 0.44
This work                                          2.71 ± 0.42

§ SUMMARY AND OUTLOOK

In this work we have discussed how the hadronic coupling constants for scalar-meson strong decays S → Aπ can be extracted from lattice three-point functions, and we report an exploratory lattice investigation of the hadronic coupling constant g_σππ for the hadronic decay σ→ππ, using only the appropriate three-point functions, evaluated with the moving-wall source technique with a fairly good noise-to-signal ratio. These simulations are carried out on a 40^3×96 MILC fine gauge configuration with N_f = 2+1 flavors of asqtad-improved staggered dynamical sea quarks, at m_π/m_σ ≈ 0.38 and lattice spacing a ≈ 0.09 fm. Our estimated value for this MILC fine lattice gauge ensemble is g_σππ = 2.71(42) GeV, which compares reasonably with the recent analytic predictions from the residue at the complex pole summarized in Table <ref>, along with our former tentative lattice result <cit.>. The most important outcome of this lattice calculation of g_σππ is that it shows that lattice studies of scalar meson decay processes can be carried out in reasonable amounts of time on the presently available computers. We should remark at this point that, since g_σππ is dimensional, it is expected to show a dependence on the lattice spacing. Since we only work on one MILC lattice ensemble, strictly speaking our obtained g_σππ is not the physical one, and we cannot directly compare it with other data.
A more sophisticated evaluation should be carried out on several lattice ensembles, with a discussion of the dependence on the quark mass and the lattice spacing. It will be interesting to see whether this expectation is borne out in numerical QCD simulations, especially at smaller lattice spacing. Moreover, according to the empirical discussion in the Appendix of Ref. <cit.>, to improve the relevant noise-to-signal ratio we should use very fine gauge configurations or lattice ensembles with larger spatial dimension L. For this reason, we are beginning a series of numerical simulations with the super-fine and ultra-fine MILC lattice ensembles. Furthermore, admittedly, the method described in this work cannot obtain another resonance parameter: the σ resonance mass. To obtain the σ resonance mass, we must calculate I = 0 ππ scattering with a careful treatment of the vacuum diagrams, as was done in Ref. <cit.>. Nonetheless, a reliable evaluation of the vacuum diagrams requires more lattices or finer gauge configurations. All of these open questions are beyond the scope of this paper, since they would demand a huge amount of computing allocations. We postpone and reserve these expensive tasks for our future lattice studies, and we will enthusiastically appeal for all the possible computational resources to carry out these challenging tasks.

We deeply appreciate the MILC Collaboration for the use of the MILC gauge configurations and codes. We sincerely thank Carleton DeTar for imparting to us the knowledge necessary for this work. We especially thank Eulogio Oset and Michael Doring for their enlightening comments. The authors express their respect to Han-qing Zheng, Geng Liseng, Liu Chuan, and Chen Ying for reading this manuscript and providing useful comments. We cordially express our boundless gratitude to Hou Qing, He Yan and Fujun Gou for their vigorous support.
We also express gratitude to the Institute of Nuclear Science and Technology, Sichuan University, and the Chengdu Jiaxiang Foreign Language School, which furnished the computer resources. Numerical calculations for this work were carried out at both the PowerLeader Clusters and the AMAX, CENTOS, HP, and ThinkServer workstations.

Olive:2016xmw C. Patrignani et al (Particle Data Group), Chin. Phys. C, 40 (10): 100001 (2016).
Pelaez:2015qba J. R. Pelaez, Phys. Rept., 658: 1 (2016).
Masjuan:2014psa P. Masjuan, J. Ruiz de Elvira and J. J. Sanz-Cillero, Phys. Rev. D, 90 (9): 097901 (2014).
Oller:2003vf J. A. Oller, Nucl. Phys. A, 727: 353 (2003).
Kaminski:2009qg R. Kaminski, G. Mennessier and S. Narison, Phys. Lett. B, 680: 148 (2009).
Mennessier:2010xg G. Mennessier, S. Narison and X. G. Wang, Phys. Lett. B, 688: 59 (2010).
GarciaMartin:2011jx R. Garcia-Martin, R. Kaminski, J. R. Pelaez and J. Ruiz de Elvira, Phys. Rev. Lett., 107: 072001 (2011).
Pelaez:2010fj J. R. Pelaez and G. Rios, Phys. Rev. D, 82: 114002 (2010).
Narison:2005wc S. Narison, Phys. Rev. D, 73: 114024 (2006).
Nebreda:2010wv J. Nebreda and J. R. Pelaez, Phys. Rev. D, 81: 054035 (2010).
Zhou:2004ms Z. Y. Zhou, G. Y. Qin, P. Zhang, Z. Xiao, H. Q. Zheng and N. Wu, JHEP, 0502: 043 (2005).
Oller:1997ti J. A. Oller and E. Oset, Nucl. Phys. A, 620: 438 (1997).
Hyodo:2010jp T. Hyodo, D. Jido and T. Kunihiro, Nucl. Phys. A, 848: 341 (2010).
Caprini:2008fc I. Caprini, Phys. Rev. D, 77: 114019 (2008).
Yndurain:2007qm F. J. Yndurain, R. Garcia-Martin and J. R. Pelaez, Phys. Rev. D, 76: 074034 (2007).
Caprini:2005zr I. Caprini, G. Colangelo and H. Leutwyler, Phys. Rev. Lett., 96: 132001 (2006).
Escribano:2002iv R. Escribano, A. Gallegos, J. L. Lucio M, G. Moreno and J. Pestieau, Eur. Phys. J. C, 28: 107 (2003).
Giacosa:2007bn F. Giacosa and G. Pagliara, Phys. Rev. C, 76: 065204 (2007).
Fu:2012gf Z. Fu, JHEP, 1207: 142 (2012).
Briceno:2016mjc R. A. Briceno, J. J. Dudek, R. G. Edwards and D. J.
Wilson, arXiv:1607.05900 [hep-ph].
Liu:2016cba L. Liu et al, arXiv:1612.02061 [hep-lat].
Doring:2016bdr M. Doring, B. Hu and M. Mai, arXiv:1610.10070 [hep-lat].
Kuramashi:1993ka Y. Kuramashi, M. Fukugita, H. Mino, M. Okawa and A. Ukawa, Phys. Rev. Lett., 71: 2387 (1993).
Fukugita:1994ve M. Fukugita, Y. Kuramashi, M. Okawa, H. Mino and A. Ukawa, Phys. Rev. D, 52: 3003 (1995).
Fu:2011xw Z. Fu, JHEP, 01: 017 (2012); Z. Fu, Phys. Rev. D, 85: 014506 (2012).
Fu:2012tj Z. Fu and K. Fu, Phys. Rev. D, 86: 094507 (2012).
Fu:2016itp Z. Fu and L. Wang, Phys. Rev. D, 94 (3): 034505 (2016).
Fu:2011bz Z. Fu, Commun. Theor. Phys., 57: 78 (2012); Z. Fu, Phys. Rev. D, 87: 074501 (2013); Z. Fu, Phys. Rev. D, 85: 074501 (2012); Z. Fu, Eur. Phys. J. C, 72: 2159 (2012).
Bernard:2010fr C. Bernard et al (Fermilab Lattice and MILC Collaborations), Phys. Rev. D, 83: 034503 (2011).
Bazavov:2009bb A. Bazavov et al, Rev. Mod. Phys., 82: 1349 (2010).
Gottlieb:1985rc S. A. Gottlieb, P. B. MacKenzie, H. B. Thacker and D. Weingarten, Nucl. Phys. B, 263: 704 (1986).
Gottlieb:1983rh S. A. Gottlieb, P. B. Mackenzie, H. B. Thacker and D. Weingarten, Phys. Lett. B, 134: 346 (1984).
Loft:1988sy R. D. Loft and T. A. DeGrand, Phys. Rev. D, 39: 2692 (1989).
Altmeyer:1995qx R. L. Altmeyer, M. Gockeler, R. Horsley, E. Laermann, G. Schierholz and P. M. Zerwas, Z. Phys. C, 68: 443 (1995).
Bernard:2007qf C. Bernard et al, Phys. Rev. D, 76: 094504 (2007).
Kleinert:1972ye H. Kleinert, L. P. Staunton and P. H. Weisz, Nucl. Phys. B, 38: 104 (1972).
Fu:2011zzh Z. W. Fu, Chin. Phys. Lett., 28: 081202 (2011); Z. W. Fu and C. DeTar, Chin. Phys. C, 35: 896 (2011); Z. Fu, Chin. Phys. C, 38: 063102 (2014); Z. Fu and C. DeTar, Chin. Phys. C, 35: 1079 (2011); Z. Fu, Chin. Phys. C, 36: 489 (2012); Z. Fu, Int. J. Mod. Phys. A, 28: 1350059 (2013).
Peardon:2009gh M. Peardon et al (Hadron Spectrum Collaboration), Phys. Rev. D, 80: 054506 (2009).
Bernard:2001av C. Bernard et al, Phys. Rev. D, 64: 054506 (2001).
Aubin:2004wf C. Aubin et al, Phys. Rev.
D, 70: 094505 (2004).
Muroya:2001yp S. Muroya et al (SCALAR Collaboration), Nucl. Phys. Proc. Suppl., 106: 272 (2002).
Wilson:2015dqa D. J. Wilson, R. A. Briceno, J. J. Dudek, R. G. Edwards and C. E. Thomas, Phys. Rev. D, 92 (9): 094502 (2015).
Aubin:2004fs C. Aubin et al (MILC Collaboration), Phys. Rev. D, 70: 114501 (2004).
(L. Wang, Z. Fu and H. Chen, "Hadronic coupling constants of g_σππ in lattice QCD", arXiv:1702.08337 [hep-lat], 2017.)
We propose boundary conditions for the diffusion equation that maintain the initial mean and the total mass of a discrete data sample in the density estimation process. A complete study of this framework, with numerical experiments using the finite element method, is presented for the one-dimensional diffusion equation, and some possible applications of these results are presented as well. We also comment on a similar methodology for the two-dimensional diffusion equation, for future applications in two-dimensional domains.

§ INTRODUCTION

Estimating a density function from a set of initial data points, in order to extract probability information, is a very significant tool in statistics <cit.>. The method of Kernel Density Estimation (KDE) <cit.> is now standard in many analyses and applications, and the idea has been applied in multiple fields (archaeology <cit.>, economics <cit.>, etc.). The author of this article is particularly interested in constructing Perception of Security (PoS) hotspots, using KDE methods to analyze real data registered by security experts in Bogotá <cit.>. Nowadays a wide variety of methods are available to find density functions <cit.>, <cit.>. The method of KDE via diffusion is of particular interest for this document; a recent article <cit.> develops a systematic method for KDE using the diffusion equation, and also proposes a more general equation to correct some biases in the data estimation.
However, in their analysis only the normalization (conservation of mass) of the density function is considered, via Neumann boundary conditions; the mean of the sample data is not considered, thus inducing a change of an important initial parameter of the discrete data sample. In this article, we propose a new set of boundary conditions for the diffusion equation that maintain the initial mean and mass of the discrete data sample in the density estimation process. A complete study of this framework is performed using the finite element method (FEM) to solve the one-dimensional diffusion equation for different boundary conditions. We show the error induced on the final density when the mean is not conserved. We also show how this one-dimensional model can be used to simulate PoS on a busy avenue of a city. Lastly, the new boundary conditions are presented for the two-dimensional diffusion equation, for future applications in two-dimensional domains.

§ DIFFUSION EQUATION WITH DIFFERENT BOUNDARY CONDITIONS

As was first noted in <cit.> and expanded in <cit.>, solving the diffusion equation with a discrete data sample {b_n}_n=1^N as initial condition (<ref>) gives an estimate of a continuous probability density function. Thus, by solving the diffusion equation <cit.>,

∂ u(x,t)/∂ t - ∂^2 u(x,t)/∂ x^2 = 0, a<x<b, t>0,

u(x,0) = 1/N∑_i=1^N δ(x-b_i), x, b_i ∈ [a,b],

with appropriate boundary conditions, and then finding the best t (bandwidth) for the initial data sample, one obtains a continuous estimate of the experimental density. In this article we do not consider algorithms for bandwidth selection; we consider only the conservation of the mean. For more information on bandwidth selection see <cit.>. This one-dimensional toy problem is nevertheless of interest in applications for constructing PoS.
For instance, we can model an avenue as a one-dimensional domain on which predictions of the most dangerous places in a selected zone can be accomplished. In the following sections we present the non-conservation of the mean for the Neumann boundary conditions of Problem (<ref>), and we propose new boundary conditions. For the derivations we assume that the functions are sufficiently smooth, in order for the theorems of vector analysis to hold. Moreover, the following derivations can be carried out for a more general diffusion equation with a variable diffusion coefficient k(x).

§.§ Neumann boundary conditions

If we consider the Neumann, or natural, boundary conditions for Problem (<ref>), we have

∂ u(x,t)/∂ x|_a = 0, ∂ u(x,t)/∂ x|_b = 0.

As is widely known, the total mass is conserved over time (see Section <ref>); however, the mean of the initial condition is, in general, not conserved. Indeed, we have

d/dt( ∫_a^b x u(x,t) dx ) = ∫_a^b x ∂^2 u(x,t)/∂ x^2 dx = [ x ∂ u(x,t)/∂ x ]_a^b - [u(x,t)]_a^b = u(a,t) - u(b,t),

where we used (<ref>), (<ref>) and integration by parts. Hence the mean is generally not conserved; it depends on the values of u(x,t) at the boundary at each time t.

§.§ Boundary conditions that conserve the mean

We propose the following boundary conditions for (<ref>):

∂ u(x,t)/∂ x|_a = ∂ u(x,t)/∂ x|_b, u(b)-u(a)/b-a = ∂ u(x,t)/∂ x|_b.

Note that these boundary conditions are non-local: we need to evaluate at both boundary points at the same time. We now show that both the mean and the mass are conserved over time under these boundary conditions. Consider first the conservation of the total mass. We have

d/dt( ∫_a^b u(x,t) dx ) = ∫_a^b ∂^2 u(x,t)/∂ x^2 dx = [∂ u(x,t)/∂ x]_a^b = ∂ u(x,t)/∂ x|_b - ∂ u(x,t)/∂ x|_a = 0,

where we used (<ref>), (<ref>) and integration by parts. This shows that the total mass is conserved. Consider now the conservation of the mean.
We have

d/dt( ∫_a^b x u(x,t) dx ) = ∫_a^b x ∂^2 u(x,t)/∂ x^2 dx = [ x ∂ u(x,t)/∂ x ]_a^b - [u(x,t)]_a^b = (b-a) ∂ u(x,t)/∂ x|_b - u(b,t) + u(a,t) = 0.

Again, (<ref>), (<ref>) and integration by parts were used to obtain the desired result. This shows that the boundary conditions (<ref>) for Problem (<ref>) conserve both mean and mass. We now proceed to perform numerical simulations using FEM, to show the consequences of applying these boundary conditions in the process of estimating a probability density for a data sample (<ref>).

§ NUMERICAL STUDY OF MEAN CONSERVATION

Problem (<ref>),(<ref>) is now written in weak formulation <cit.> in order to apply the finite element method. For all v(x) ∈ C^∞(a,b) we have

∫_a^b ∂ u(x,t)/∂ t v(x) dx + ∫_a^b ∂ u(x,t)/∂ x d v(x)/d x dx = (v(b)-v(a)) ∂ u(x,t)/∂ x|_b.

We solve this weak formulation using FEM with low-order elements on the interval [a,b]=[0,10], where the number of elements is M. Problem (<ref>),(<ref>),(<ref>) then yields the following problem in the discretised space V^h: find u(x,t) ∈ V^h such that, for all v(x) ∈ V^h,

∫_a^b ∂ u(x,t)/∂ t v(x) dx + ∫_a^b ∂ u(x,t)/∂ x d v(x)/d x dx = (v(b)-v(a)) ∂ u(x,t)/∂ x|_b,

u(x,0) = M/(b-a)N ∑_i=1^N δ(x-b_i), x, b_i ∈ [a,b],

∂ u(x,t)/∂ x|_a = ∂ u(x,t)/∂ x|_b, u(b)-u(a)/b-a = ∂ u(x,t)/∂ x|_b,

where we represent delta measures by the closest basis element of the finite element approximation. Note that (<ref>) contains a normalization factor, since the integrals of the basis elements are not one (they are not delta measures). We use the Galerkin method of mean weighted residuals for the spatial part of the problem, choosing low-order elements ϕ_i; this formulation can be found in <cit.>. For our numerical studies we solve the temporal part of the problem (the element coefficients) using the implicit-Euler Galerkin discretization <cit.>; thus the problem is reduced to solving a linear system at every timestep Δ t.
In order to implement the previous formulation numerically, the code used for all the calculations of the simulation is available publicly in <cit.>. We start by generating a list of {b_n}_n=1^N=500 uniformly distributed points in the interval [0,10]. These points are assigned to the closest interval of the spatial FEM partition {((n-1)/500, n/500)}_n=1^5000. The histogram of these points, Figure <ref>, can be seen, for instance, as the number of times a certain criminal act was reported in a zone of the avenue; see Figure <ref>. If we represent this data as an initial condition (<ref>), we obtain Figure <ref>, where we plotted alternate consecutive FEM basis functions in red and black. We then solve the problem numerically using the implicit-Euler Galerkin discretization of Problem (<ref>),(<ref>),(<ref>), and we evolve the solution until time t=0.1 using either Neumann boundary conditions (see Figure <ref>) or the mean-conserving boundary conditions (see Figure <ref>). The solution under the mean-conserving boundary conditions remains positive in this numerical experiment (see Figure <ref>); this fact is currently being explored for future analytical studies. As Figure <ref> shows, the two solutions are similar, and therefore for this example the new boundary condition does not generate a noticeable change in the estimated continuous density distribution. Nevertheless, we present the plots of the change of mass Δ m(t) = m(0) - m(t) (Figures <ref>, <ref>) and the change of mean Δμ(t) = μ(0) - μ(t) (Figures <ref>, <ref>) for both the Neumann and the mean-conserving boundary conditions. Figures <ref> and <ref> present the real difference in the evolution of the density: we effectively see that the mean-conserving boundary conditions conserve the mean in the density estimation process.
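The construction of the discrete initial condition described above (sample N points, snap each to the nearest node, normalise by M/((b-a)N)) can be sketched as follows; names and the random seed are our own illustrative choices:

```python
import numpy as np

# Build the discrete initial condition of Eq. (<ref>): N sample points on
# [a, b], each represented by the nearest P1 basis function with coefficient
# M / ((b - a) N), as in the text (M = 5000 elements, N = 500 points).
rng = np.random.default_rng(1)
a, b = 0.0, 10.0
N, M = 500, 5000
h = (b - a) / M
nodes = np.linspace(a, b, M + 1)

pts = rng.uniform(a, b, N)                    # the data sample {b_n}
idx = np.rint((pts - a) / h).astype(int)      # snap each point to nearest node
u0 = np.zeros(M + 1)
np.add.at(u0, idx, M / ((b - a) * N))         # the normalisation factor

# With coefficient M/((b-a)N) each sample contributes 1/N to the coefficient
# sum, and interior hat functions integrate to h, so the total mass is 1
# (up to a negligible boundary correction) and the mean is preserved to h/2.
total_mass = u0.sum() * h
fe_mean = (nodes * u0).sum() * h / total_mass
```

Snapping each point moves it by at most h/2, so the mean of the discrete initial condition agrees with the sample mean to within half a mesh width.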
On the other hand, if we were to have an initial condition biased toward one of the boundaries, the densities estimated under the two boundary conditions would differ significantly. However, there is no evidence to suggest that this phenomenon occurs on real avenues. For the numerical experiment presented here, the mean under the Neumann boundary conditions has changed by about 0.4% at t = 0.1. This change is small; in fact, for an avenue of 10 km, the change in the mean would be about 40 m. We conclude that, for this numerical experiment, in the density estimation process (when the data have not changed too much due to the smoothing) the Neumann boundary conditions provide a very fast (since they are easy to implement) and accurate way to estimate a continuous probability density. Nevertheless, the mean of the sample is not preserved exactly. The mean-conserving boundary conditions, on the other hand, apart from also being easy to implement, are accurate and do preserve the mean of the sample.

§ TWO-DIMENSIONAL DENSITIES

We now present the problem for the diffusion equation <cit.> in two dimensions,

∂ u( x,t)/∂ t - ∇^2 u( x,t) = 0, x=(x_1,x_2) ∈ Ω ⊂ ℝ^2, t>0.

Again we want conservation of mass and mean in the time evolution of the density. Consider first the conservation of the total mass. We have

d/dt( ∫_Ω u( x,t) d x ) = ∫_Ω ∇^2 u( x,t) d x = ∫_∂Ω ∂ u( x,t)/∂ν dσ,

where ∇ u ·ν = ∂ u( x,t)/∂ν, and ν denotes the outward unit normal vector to ∂Ω. To deduce this relation we used (<ref>) and the first Green identity <cit.>. Consider now the conservation of the mean. We have

d/dt( ∫_Ω x_i u( x,t) d x ) = ∫_Ω x_i ∇^2 u( x,t) d x = ∫_∂Ω x_i ∂ u( x,t)/∂ν dσ - ∫_Ω ∇_i u( x,t) d x,

where ∇_i u( x,t) = e_i·∇ u( x,t), assuming Cartesian unit vectors.
Again, (<ref>) and the first Green identity were used to obtain the desired result. Then the conditions we have to impose on u( x,t) in order to conserve mean and mass are

∫_∂Ω x_i ∂ u( x,t)/∂ν dσ = ∫_Ω ∇_i u( x,t) d x, i = 1,2,

and

∫_∂Ω ∂ u( x,t)/∂ν dσ = 0.

The advantage of two-dimensional domains is that we are not restricted to imposing only two conditions on the boundary (mean and mass conservation). For these domains we can, in principle, conserve additional higher moments of the density distribution that are meaningful for the particular problem. Applications on two-dimensional domains are of special interest to the author, since a two-dimensional map of a city can generate really robust results in the field of Perception of Security (PoS).

§ CONCLUSIONS

The proposed mean-conserving boundary conditions were shown to effectively maintain the mean of the initial data sample over the continuous density estimation process. This was confirmed by the numerical simulation of the estimation process, where we used a list of uniformly distributed points in the interval [0,10] as initial condition. The numerical experiments presented here show that, even though Neumann boundary conditions do not conserve the mean over time, they are accurate enough to keep the mean within a very restricted interval before the over-smoothing of the density estimation process. We showed the application and some of the consequences of both the idea of KDE and the new boundary conditions for avenues in a city. The consequences of implementing the diffusion equation with the proposed boundary conditions, together with more special initial conditions and in 2D domains, remain to be analyzed.

§ ACKNOWLEDGMENTS

I would like to express my gratitude to Juan Galvis, whose guidance was essential for the completion of this manuscript. I also want to thank Francisco A. Gómez and Zdravko Botev, whose comments were really appreciated in the analysis of the results.

bib1 Silverman, B. W. (1986).
Density Estimation for Statistics and Data Analysis. Chapman and Hall, London.
bib2 Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
bib3 Baxter, M. J., Beardah, C. C. and Westwood, S. (2000). Sample size and related issues in the analysis of lead isotope data. J. Archaeological Science 27, 973–980.
bib4 DiNardo, J., Fortin, N. M. and Lemieux, T. (1996). Labor market institutions and the distribution of wages, 1973–1992: A semiparametric approach. Econometrica 64, 1001–1044.
bib0 Gómez, F., Torres, A., Galvis, J., Camargo, J. and Martínez, O. (2016). Hotspot mapping for perception of security. Smart Cities Conference (ISC2), 2016 IEEE International.
bib5 Botev, Z. I., Grotowski, J. F. and Kroese, D. P. (2010). Kernel density estimation via diffusion. Ann. Statist. 38 (5), 2916–2957.
bib6 Chaudhuri, P. and Marron, J. S. (2000). Scale space view of curve estimation. Ann. Statist. 28, 408–428. MR1790003.
bib7 Olver, P. (2014). Introduction to Partial Differential Equations. Springer-Verlag, 1st ed.
bib8 Thomee, V. (1997). Galerkin Finite Element Methods for Parabolic Problems. Springer-Verlag, 2nd ed.
bib9 Patarroyo, K. 1D Inhomogeneous Diffusion Equation solution with FEM. Available online: <http://nbviewer.jupyter.org/github/MrKeithPatarroyo/Example/blob/master/FEM-Public/1D%20Heat%20Equation%20FEM-Public.ipynb>
bib10 Salsa, S. (2008). Partial Differential Equations in Action: From Modelling to Theory. Springer-Verlag Italia, Milano.
(K. Y. Patarroyo, "Mean conservation for density estimation via diffusion using the finite element method", arXiv:1702.07962 [math.NA], 2017.)
A system of interacting Brownian particles subject to short-range repulsive potentials is considered. A continuum description in the form of a nonlinear diffusion equation is derived systematically in the dilute limit using the method of matched asymptotic expansions. Numerical simulations are performed to compare the results of the model with those of the commonly used mean-field and Kirkwood-superposition approximations, as well as with Monte Carlo simulation of the stochastic particle system, for various interaction potentials. Our approach works best for very repulsive short-range potentials, while the mean-field approximation is suitable for long-range interactions. The Kirkwood superposition approximation provides an accurate description for both short- and long-range potentials, but is considerably more computationally intensive.

Keywords: diffusion, soft spheres, closure approximations, particle systems, matched asymptotic expansions. MSC: 35Q84, 60J70, 82C31.

§ INTRODUCTION

Nonlinear diffusion equations are often used to describe a system of interacting particles at the continuum level. They play a key role in various physical and biological applications, including colloidal systems and granular gases, ion transport, chemotaxis, neural networks, and animal swarms. These continuum models are important as tools to explain how individual-level mechanisms give rise to population-level or collective behavior. Closure approximations such as the mean-field closure are often used to obtain the continuum model, but depending on the type of interactions they can lead to substantial errors. In this paper we present a new approach that is suited to short-range repulsive interactions.
A typical model for a system of N interacting particles is to assume that each particle evolves according to the overdamped Langevin dynamics and interacts with the other particles via a pairwise interaction potential u, so that

dX_i(t) = √(2D) dW_i(t) + f(X_i(t)) dt - ∑_j ≠ i ∇_x_i u(X_i(t) - X_j(t)) dt,

for i = 1, …, N, where X_i(t) ∈ Ω ⊂ ℝ^d is the position of the ith particle at time t, D is the diffusion constant, W_i(t) denotes a d-dimensional Brownian motion, and f: Ω → ℝ^d is an external force. Despite the conceptual simplicity of the stochastic model <ref>, it can be computationally intractable for systems with a large number N of interacting particles, since the interaction term has to be evaluated for all particle pairs. In such cases, a continuum description of the system, based on the evolution of the population-averaged spatial concentration instead of individual particles, becomes attractive. Depending on the nature of the interaction potential, different averaging techniques may be suitable. Interactions can be broadly classified into local and nonlocal, depending on the range of interaction between particles. Nonlocal interactions are associated with a long-range or ultra-soft interaction potential u in <ref>. Then every particle can interact not only with its immediate neighbors but also with particles far away, and one can use a mean-field approximation to obtain a partial differential equation (PDE) for the one-particle probability density p(x,t) of finding a given particle at position x at time t. The standard mean-field approximation (MFA) procedure applied to the microscopic model <ref> gives

∂ p/∂ t = ∇_x · [ D ∇_x p - f(x) p + (N-1) p ∇_x (u ∗ p) ],

where

u ∗ p = ∫ u(x - y) p(y) dy.

The main assumption in writing down <ref> is that, in deriving the interaction term, particles can be treated as though they were uncorrelated.
Since one is often interested in systems with a large number of particles N, it is common to consider the number density (ρ = Np) and take the limit in which the number of particles and the volume tend to infinity while keeping the average number density constant, that is, N, V → ∞ with N/V = ρ_0. This is known as the thermodynamic limit <cit.>, and results in equation (<ref>) for ρ without the factor N-1. While the mean-field approximation <ref> is convenient and leads to an accurate description for long-range interactions, such as Coulomb interactions <cit.>, it fails when considering relatively strong repulsive short-range potentials. Sometimes <ref> does not make sense, since the convolution does not exist; in other cases, it results in a poor model of the system because the underlying assumptions of the method are not satisfied, as we discuss later. In particular, the model <ref> does not make sense for hard-core repulsive interactions, which are commonly used to model excluded-volume effects in biological and social contexts <cit.>. A common way to circumvent this is to assume that particles are restricted to a lattice, giving rise to so-called on-lattice models. The most common of these is the simple exclusion model, in which a particle can only move to a site if it is presently unoccupied <cit.>. One can derive a continuum limit analogous to <ref> using Taylor expansions <cit.>. However, it turns out that, with identical particles, the interaction terms (analogous to the convolution term in <ref>) cancel out unless the external force f is nonzero. Because of the issues that the MFA has in particle systems with short-range interaction potentials, one must often resort to numerical regularisations <cit.> or alternative closure approximations. There are a large number of closure approximations, and choosing the right closure for a given pair potential is “an art in itself” <cit.>, due to the phenomenological nature of the approach.
Each choice of closure relation results in a different approximate equation for the density p, and it is not clear a priori whether it will be a good approximation. As discussed above, one may even obtain an equation that does not make sense or is ill-posed. One class of closures, including the MFA and the Kirkwood superposition approximation (KSA), imposes a relation between the nth and (n+1)th density functions in the BBGKY hierarchy (see <ref>). Whereas the MFA closes at n=1 (writing the two-particle density function in terms of the one-particle density), the KSA closes at n=2, approximating the three-particle density function as a combination of the one- and two-particle density functions <cit.>. While the KSA originated in the field of statistical mechanics, where it has been the basis of a whole theory, it has recently also been used in biological applications to obtain closed equations for a system of biological cells <cit.>. However, the resulting KSA model is quite complicated to solve, and the MFA remains the most commonly used approximation. Another class of closure relations, similar in spirit to the KSA, is based on the Ornstein–Zernike (OZ) integral equation <cit.>. Here the pair correlation function is decomposed into a 'direct' part and an 'indirect' part, the latter being mediated through (and integrated over) a third particle. In addition, the OZ equation requires a further closure assumption providing an additional relation between the direct and indirect correlations. Commonly used closures, for hard spheres and soft spheres respectively, include the Percus–Yevick approximation and the hypernetted-chain approximation. In this paper we are interested in systems such as <ref> with short-range repulsive interactions, for which the MFA fails. We will employ an alternative averaging method to obtain a continuum description of the system, based on matched asymptotic expansions (MAE).
Unlike the mean-field approach, this method is a systematic asymptotic expansion which does not rely on the system size being large. It is valid for low concentrations, exploiting a small parameter ϵ arising from the short-range potential and the typical separation between particles. The result is a nonlinear advection-diffusion equation of the form ∂ p/∂ t = ∇_x· [ D ∇_xp - f(x) p + α_u ϵ^d (N-1)p ∇_xp],where the coefficient α_u depends on the interaction potential u and d is the dimension of the physical space.

The remainder of the paper is organized as follows. In <ref> we introduce the Fokker–Planck PDE for the joint probability density of the particle system; this is another individual-based description, equivalent to the Langevin stochastic differential equation <ref>. In <ref> we discuss three common closure approximations which reduce the Fokker–Planck equation to a population-level PDE. In <ref> we present our alternative approach to closure based on MAE, and derive equation <ref>. In <ref> we test the models obtained from the different methods against each other and against simulations of the stochastic particle system for various interaction potentials. Finally, in <ref>, we present our conclusions.

§ INDIVIDUAL-BASED MODEL We consider a set of N identical particles evolving according to the Langevin stochastic differential equation <ref> in a domain Ω⊂ℝ^d, with d ≤ 3. We nondimensionalise time and space such that the diffusion coefficient D = 1 and the volume of the domain |Ω| = 1. We suppose the interaction potential u(r) is repulsive and short-ranged, with range ϵ≪ 1. The interaction potential of a system of N particles is, assuming pairwise additivity, the sum of isolated pair interactions U(x⃗) =∑_1≤ i<j≤ N u ( x_i -x_j),where x⃗ = ( x_1, …,x_N) is the N-particle position vector.
The interaction force acting on the ith particle due to the other N-1 particles is given by g_i (x⃗) = - ∇_ x_i U (x⃗) = - ∑_j ≠ i∇_ x_i u( x_i -x_j).Here forces are non-dimensionalized with the mobility (the inverse of the drag coefficient) so that we can talk about a force acting on a Brownian particle. Finally, we suppose that the initial positions X_i(0) are random and identically distributed.

The counterpart of <ref> in probability space is the Fokker–Planck equation ∂ P/∂ t (x⃗, t) = ∇⃗_x⃗· [∇⃗_x⃗ P - F⃗ (x⃗) P +∇_x⃗ U (x⃗) P] inΩ^N,where P(x⃗, t) is the joint probability density function of the N particles being at positions x⃗ = ( x_1, …,x_N)∈Ω^N at time t and F⃗ (x⃗) = ( f_1( x_1), …,f_N( x_N) ). Since we want to conserve the number of particles, on the domain boundaries ∂Ω^N we require either no-flux or periodic boundary conditions; throughout this work we use the latter. Accordingly, the potential u will be a periodic function in Ω. The initial condition is P(x⃗, 0) = P_0(x⃗), with P_0 invariant to permutations of the particle labels.

We proceed to reduce the dimensionality of the problem <ref> by looking at the marginal density function of one particle (the first particle, say), given by p( x_1,t) = ∫_Ω^N-1 P(x⃗, t) x_2 ⋯ x_N.The particle choice is unimportant since P is invariant with respect to permutations of particle labels. Integrating <ref> over x_2, …,x_N and applying the divergence theorem gives ∂ p/∂ t ( x_1,t) = ∇_x_1·[ ∇_x_1 p-f( x_1) p+ G( x_1,t)],where G is the d-dimensional vector function G( x_1,t)= ∫_Ω^N-1P(x_1,x_2, …, x_N, t) ∑_j=2^N ∇_ x_1 u ( x_1 -x_j) x_2 ⋯ x_N = (N-1) ∫_Ω P_2(x_1,x_2, t) ∇_ x_1 u( x_1 -x_2 ) x_2,and P_2(x_1,x_2, t) = ∫_Ω^N-2 P(x⃗, t) x_3 ⋯ x_N is the two-particle density function, which gives the joint probability density of particle 1 being at position x_1 and particle 2 being at x_2. An equation for P_2 can be written from <ref>, but this then depends on P_3, the three-particle density function.
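For concreteness, the interaction force g_i above can be evaluated directly from particle positions. The following sketch (in Python with NumPy; an illustration, not part of the paper's code, and the function name is ours) computes g_i for an arbitrary radial pair potential supplied through its derivative u'(r):

```python
import numpy as np

def pair_force(x, du):
    """Interaction force g_i = -sum_{j != i} u'(|x_i - x_j|) (x_i - x_j)/|x_i - x_j|.

    x  : (N, d) array of particle positions
    du : callable, derivative u'(r) of the radial pair potential
    """
    N = x.shape[0]
    g = np.zeros_like(x)
    for i in range(N):
        r_vec = x[i] - np.delete(x, i, axis=0)   # separations x_i - x_j, j != i
        r = np.linalg.norm(r_vec, axis=1)
        # -grad_{x_i} u(|x_i - x_j|) = -u'(r) * (x_i - x_j)/r
        g[i] = -np.sum(du(r)[:, None] * r_vec / r[:, None], axis=0)
    return g
```

For a repulsive potential (u' < 0), the force on each particle of a pair points away from the other, and the forces on the two particles are equal and opposite.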
This results in a hierarchy (the BBGKY hierarchy) of N equations for the set of n-particle density functions (n = 1, …, N), the last of which is <ref> itself (since P is the N-particle density function). In order to obtain a practical model, a common approach is to truncate this hierarchy at a certain level to obtain a closed system. In particular, closure approximations in which the n-particle density function P_n is replaced by an expression involving lower-order density functions P_s, s<n, are commonly used. However, because of their phenomenological nature, they can often lead to errors in the resulting model. In the next section we present three such closure approximations and highlight the issues they encounter when dealing with short-range repulsive potentials. In <ref> we present an alternative approach based on matched asymptotic expansions.

§ CLOSURE APPROXIMATIONS §.§ Mean-field closure The simplest and most common closure approximation is to assume that particles are not correlated at all in evaluating the interaction term G, that is, P_2(x_1,x_2, t) = p(x_1, t) p(x_2, t).Substituting <ref> into <ref> gives G ( x_1,t)= (N-1 ) p(x_1,t) ∫_Ω p(x_2, t) ∇_ x_1 u( x_1 -x_2) x_2.Combining this with the equation for p in <ref> gives equation <ref> presented in the introduction, the mean-field approximation (MFA). However, one should keep in mind that <ref> might not always be valid when using such a model. In particular, when u(r) is a short-range interaction potential, the dominant contribution to the integral <ref> comes from the region where x_1 is close to x_2, and this is exactly the region in which the positions of the particles are correlated. We note that the mean-field closure is often used implicitly, with <ref> written down directly rather than being derived from <ref> <cit.>.
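On a uniform periodic grid in one dimension the mean-field integral above is a circular convolution, so it can be evaluated with the FFT. The following sketch (Python; the exponential potential and the function name are illustrative choices, not the implementation used in this paper) shows one way to do this:

```python
import numpy as np

def mfa_interaction(p, h, eps, N):
    """Mean-field term G(x) = (N-1) p(x) * integral of p(y) d/dx u(|x-y|) dy
    on a uniform periodic grid of spacing h on [0, 1), evaluated as a
    circular convolution via the FFT.  The exponential potential
    u(r) = exp(-r/eps) is hard-wired here purely as an illustration."""
    M = p.size
    z = (np.arange(M) * h + 0.5) % 1.0 - 0.5          # signed periodic offsets
    K = -np.exp(-np.abs(z) / eps) / eps * np.sign(z)  # d/dx u(|z|); K(0) = 0
    conv = np.real(np.fft.ifft(np.fft.fft(p) * np.fft.fft(K))) * h
    return (N - 1) * p * conv
```

Since the kernel K is odd, a uniform density produces (up to a small periodic-image correction) no net interaction term, as expected by symmetry.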
The reasoning goes as follows: if p( x,t) is the probability of finding a particle at x, the force on a particle at x_1 is given by multiplying the force due to another particle at x_2 by the density of particles at x_2 and integrating over all positions x_2.

If we suppose the pair potential is short-ranged, we can approximate the integral in <ref> to remove the convolution term. In particular, we suppose that u = O(r^-(d+δ)) for some δ>0 as r→∞ and rewrite the potential as u(r) = ũ(r/ϵ) with ϵ≪ 1. Introducing the change of variable x_2 =x_1 + ϵx̃ and expanding p( x_2,t) about x_1 gives G (x_1,t) = - ϵ^d-1 (N-1 ) p( x_1,t) ∫_ℝ^d[ p( x_1, t) + ϵx̃·∇_ x_1 p( x_1, t)] ∇_x̃ ũ( x̃ )x̃ + ⋯,where we can extend the integral with respect to the variable x̃ to the whole space since the potential ũ is localized near the origin and decays at infinity. Noting that the potential is a radial function, the leading-order term in the integral vanishes, and, after integrating by parts in the next term, we obtain G (x_1,t) ∼ϵ^d (N-1 ) p( x_1,t) ∇_x_1 p( x_1,t) ∫_ℝ^dũ( x̃)x̃ + O(ϵ^d+δ).Inserting <ref> into <ref>, we find that the marginal density function satisfies the following nonlinear Fokker–Planck equation ∂ p/∂ t = ∇_x_1·{ [1 + α_u (N-1) ϵ^d p ] ∇_x_1p -f(x_1) p },where the nonlinear coefficient is given by α_u =∫_ℝ^d u( ϵ x ) x. We will refer to (<ref>), in which the mean-field closure has been combined with the assumption of a short-range potential, as the localized MFA or LMFA.

As we shall see later, the MFA and LMFA are not defined for many commonly used short-range repulsive potentials. If the integral in <ref> does not exist because of the behavior at infinity, then this is an indication that the potential is too long-ranged for the localization performed above to be valid, and the full MFA integral needs to be retained. However, if the integral in <ref> diverges because of the behavior at the origin, then the MFA itself will diverge, that is, the integral in <ref> will not exist.
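A quick numerical check of the LMFA coefficient (a sketch in Python, with the exponential potential as an illustrative choice; not part of the paper's numerics):

```python
import numpy as np

# LMFA coefficient alpha_u = integral over R of u(eps x) dx in d = 1.
# For the exponential potential u(r) = exp(-r/eps), rescaling gives the
# integrand exp(-|x|), so alpha_u = 2 exactly, independently of eps.
x = np.linspace(-30.0, 30.0, 600001)        # grid contains the kink at 0
alpha_bar = np.trapz(np.exp(-np.abs(x)), x)

# By contrast, for the soft-sphere potential u(r) = (eps/r)^nu the rescaled
# integrand is |x|^(-nu), which is not integrable at the origin for nu >= 1:
# the LMFA coefficient does not exist, as discussed later in the comparison.
```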
Examples of the inappropriate use of the MFA for such short-range potentials exist in the literature <cit.>.

§.§ Closure at the pair correlation function A more elaborate closure was suggested by Felderhof <cit.>. His derivation considers the more general context of interacting Brownian particles suspended in a fluid, including hydrodynamic interactions. His analysis is valid for zero external force, f≡ 0, and is based on the thermodynamic limit (in which the number of particles N and the system volume V tend to infinity, with the number density N/V = ρ_0 fixed). Because of this, instead of working with probability densities, it is convenient to switch to number densities: ρ ( x_1,t) : = ρ_0 p( x_1,t) and Q ( x_1, x_2, t) := ρ_0^2 P_2( x_1, x_2, t). In what follows we outline his derivation for hard spheres, ignoring hydrodynamic interactions. The equation for the one-particle number density ρ( x_1,t) is ∂ρ/∂ t= ∇_x_1·( ∇_x_1 ρ+ ∫ Q( x_1,x_2, t) ∇_x_1 u(r) x_2),where r =x_1-x_2 is the interparticle distance and Q( x_1,x_2, t) (the two-particle number density) satisfies, to lowest order in ρ_0, ∂ Q/∂ t = ∇_x_1·[ ∇_x_1 Q + Q ∇_x_1 u(r) ] + ∇_x_2·[ ∇_x_2 Q + Q ∇_x_2 u(r) ].Equations <ref> and <ref> have the following time-independent equilibrium solutions ρ_s( x_1) = ρ_0,Q_s ( x_1, x_2) = ρ_0^2g_0 (x_1-x_2 ),where ρ_0 is constant and g_0 is the pair correlation function g_0(r) = e^-u(r).Felderhof then looks for a linearized solution around the equilibrium values <ref> by making the ansatz Q( x_1, x_2, t) = ρ( x_1,t) ρ ( x_2,t) g( x_1, x_2, t),and considering the deviations ρ_1 and g_1, ρ( x_1,t)= ρ_0 + ρ_1( x_1,t),g( x_1, x_2, t) = g_0(r )+ g_1( x_1, x_2, t).Then, to terms linear in ρ_1 and g_1, <ref> becomes Q( x_1, x_2, t)≈ρ_0^2 g_0(r) +ρ_1( x_1,t) ρ_0 g_0(r) + ρ_0 ρ_1 ( x_2,t) g_0(r) + ρ_0^2 g_1( x_1, x_2, t).Substituting in <ref> and linearizing gives ∂ρ_1/∂ t = ∇_x_1·[ ∇_x_1ρ_1+ ρ_0 ∫ g_0(r) ρ_1( x_2, t) ∇_x_1 u(r) x_2 + ρ_0^2 ∫ g_1( x_1, x_2, t) ∇_x_1 u(r) x_2 ].At this stage,
Felderhof supposes that perturbations from the equilibrium <ref> are small, so that ∇_x_1ρ_1( x_1,t) ≈∇_x_2ρ_1( x_2,t) and the pair correlation function is at its equilibrium value, that is, g( x_1, x_2, t) ≈ g_0(r). Then <ref> simplifies to ∂ρ_1/∂ t= ∇_x_1· [ ∇_x_1ρ_1+ ρ_0 ∫ g_0(r) ρ_1( x_2, t) ∇_x_1 u(r) x_2].Now, using <ref>, expanding ρ_1( x_2,t) about x_1 and keeping only the first non-vanishing term gives <cit.>∂ρ_1/∂ t= ∇_x_1·[ (1+α_u ϵ^d ρ_0) ∇_x_1ρ_1 ],where α_u = ∫_ℝ^d ( 1 - e^- u(ϵ x)) x. Note that this is the evolution equation for the perturbation ρ_1 from the uniform equilibrium ρ_0 (valid with f= 0).

§.§ Kirkwood closure As we will see later in the results section, the MFA can only provide an adequate approximation for relatively soft interaction potentials and low densities. An alternative closure approximation is based on the Kirkwood superposition approximation (KSA) <cit.>, and consists of truncating the hierarchy at the two-particle density function. To this end, we consider the equation satisfied by P_2(x_1, x_2, t), obtained by integrating the N-particle Fokker–Planck equation <ref> over x_3, …,x_N, applying the divergence theorem and relabelling particles as before: ∂ P_2/∂ t = ∇_x_1·[ ∇_ x_1 P_2 -f(x_1)P_2 + ∇_x_1 u( x_1 - x_2) P_2 + G_2(x_1, x_2, t)] + ∇_x_2·[ ∇_ x_2 P_2 -f(x_2)P_2 + ∇_x_2 u( x_1 - x_2) P_2 + G_2(x_2, x_1, t)],where G_2(x_1, x_2, t)=(N-2) ∫_Ω∇_x_1 u( x_1 - x_3) P_3( x_1,x_2,x_3,t) x_3,and P_3(x_1,x_2, x_3, t) = ∫_Ω^N-3 P(x⃗, t) x_4 ⋯ x_N is the three-particle density function. We note that in writing <ref> we are using that P_3 is invariant to particle relabelling. The KSA then approximates the three-particle density function as P_3( x_1, x_2, x_3,t) =P_2 ( x_1, x_2,t) P_2 ( x_1, x_3,t) P_2 ( x_2, x_3,t) /p ( x_1,t) p ( x_2,t) p ( x_3,t) ,where p and P_2 are the one- and two-particle density functions, respectively. Inserting <ref> into <ref>, one can then solve the coupled system <ref> and <ref> for p and P_2.
The KSA closure has been the basis of many subsequent closure approximations, and it can be derived as the maximum entropy closure in the thermodynamic limit <cit.>. Because it is thought to be superior to the MFA at high densities, it has been used in several biological applications such as on-lattice birth–death–movement processes with size exclusion <cit.> and off-lattice cell motility processes with soft interactions <cit.>. Middleton, Fleck and Grima <cit.> consider a system of Brownian particles evolving according to <ref> in one dimension, interacting via a Morse potential (see <ref>), and compare the KSA closure to the MFA closure and simulations of the stochastic system. Markham et al. <cit.> use the KSA on a more general individual-based model, where the random jumps of particles are not Gaussian but depend on the positions of all particles, resulting in multiplicative noise. Berlyand, Jabin and Potomkin <cit.> use a variant of the KSA closure, approximating either P_2(x_1,x_3, t) or P_2(x_2,x_3, t) in <ref> by its corresponding mean-field approximation P_2(x_i,x_j, t) = p(x_i, t) p(x_j, t), for a system of interacting deterministic particles. It is worth noting that the KSA model is computationally expensive and complicated to solve, especially if the interaction potential u is short-ranged, requiring a fine discretization. For example, in three dimensions one must solve a six-dimensional problem which, once discretized, involves a full discretization matrix because of the convolution terms G and G_2. As we shall see in <ref>, even in one spatial dimension the KSA model is rather complicated to solve.

§ MATCHED ASYMPTOTIC EXPANSIONS In this section we consider an approach based on matched asymptotic expansions (MAE) to obtain a closed equation for the one-particle density p that is valid for short-range interaction potentials, and that is computationally practical to solve even in two or three dimensions.
We go back to the evolution equation <ref> for the one-particle density p. Assuming that the pair potential u is localized near x_1, we can determine G in <ref> using MAE. To do so, we first must obtain an expression for P_2. For low-concentration solutions with short-range interactions, three-particle (and higher) interactions are negligible compared to two-particle interactions: when two particles are close to each other, the probability of a third particle being nearby is so small that it can be ignored. Mathematically, this means that the two-particle probability density P_2(x_1,x_2, t) is governed by the dynamics of particles 1 and 2 only, independently of the remaining N-2 particles. In other words, the terms G_2 in <ref> are negligible and the equation for P_2 reduces to ∂ P_2/∂ t = ∇_x_1·[ ∇_ x_1 P_2 -f(x_1)P_2 + ∇_x_1 u( x_1 - x_2) P_2] + ∇_x_2·[ ∇_ x_2 P_2 -f(x_2)P_2 + ∇_x_2 u( x_1 - x_2) P_2 ],for (x_1, x_2) ∈Ω^2, complemented with periodic boundary conditions on ∂Ω^2. Note that in approximating P_2 in this way it is no longer true that p(x_1,t) = ∫ P_2(x_1, x_2,t) x_2, so that p and P_2 need to be solved for as a coupled system. This equation is basically <ref> with an added external force. Essentially, our MAE approach aims to solve <ref> systematically as an asymptotic expansion, rather than through Felderhof's linearization and approximations (see <ref>).

§.§ Inner and outer regions By assumption, the pair interaction potential u(r) is negligible everywhere except when the interparticle distance r is of order ϵ. Therefore, we suppose that when two particles are far apart (x_1 - x_2≫ϵ) they are independent, whereas when they are close to each other (x_1 - x_2∼ϵ) they are correlated. We designate these two regions of configuration space the outer region and the inner region, respectively. In the outer region we define P_o(x_1, x_2, t) = P_2( x_1, x_2, t).
By independence, we have that[Independence only tells us that P_o(x_1, x_2, t) ∼ q(x_1, t) q(x_2, t) for some function q, but the normalization condition on P implies p = q + O(ϵ).]P_o(x_1, x_2, t) = p(x_1, t) p(x_2, t) + ϵ P_o^(1)(x_1, x_2, t) + ⋯,for some function P_o^(1). In the inner region, we set x_1 = x̃_1 and x_2 = x̃_1 + ϵx̃, and define P̃ (x̃_1, x̃, t) = P_2(x_1, x_2 , t) and ũ (x̃ ) = u( x_1- x_2). Rewriting <ref> in terms of the inner coordinates gives ϵ^2 ∂P̃/∂ t = 2 ∇_x̃·[ ∇_x̃P̃ + ∇_x̃ũ( x̃ ) P̃] + ϵ∇ _x̃·{[ f(x̃_1) - f(x̃_1 + ϵx̃) ] P̃}-ϵ∇_x̃_1·[ 2 ∇_x̃P̃ + ∇_x̃ũ (x̃ ) P̃] + ϵ^2 ∇_x̃_1^2P̃ - ϵ ^2∇ _x̃_1·[ f(x̃_1)P̃ ].The inner solution P̃ must match with the outer solution P_o as x̃→∞. Expanding P_o in terms of the inner variables gives (omitting the time variable for ease of notation) P_o ( x_1,x_2)∼ p(x̃_1) p(x̃_1 + ϵx̃) + ϵ P_o^(1)(x̃_1, x̃_1 + ϵx̃) ∼p^2 (x̃_1) +ϵ p(x̃_1)x̃·∇ _x̃_1 p( x̃_1) + ϵ P_o^(1)(x̃_1, x̃_1) as x̃→∞. We look for a solution of <ref> matching with <ref> as x̃→∞ of the form P̃∼P̃^(0) + ϵP̃^(1) +⋯. The leading-order inner problem is 0 = 2 ∇_x̃·[ ∇_x̃P̃^(0) + ∇_x̃ũ( x̃ ) P̃^(0)],P̃^(0)∼ p^2 (x̃_1) as x̃→∞,with solution P̃^(0) =p^2 (x̃_1) e^-ũ( x̃ ).The O(ϵ) problem reads 0 =2 ∇_x̃·[∇_x̃P̃^(1) + ∇_x̃ũ( x̃) P̃^(1)]-∇_x̃_1·[ 2 ∇_x̃P̃^(0) + ∇_x̃ũ( x̃) P̃^(0)], P̃^(1) ∼p(x̃_1)x̃·∇ _x̃_1 p( x̃_1) + P_o^(1)(x̃_1, x̃_1) as x̃→∞. Using <ref>, we can rearrange <ref> to give ∇_x̃·[ ∇_x̃P̃^(1) + ∇_x̃ũ( x̃) P̃^(1)- 1/2∇_x̃_1P̃^(0)]= 0.Solving <ref> together with <ref> gives P̃^(1) =[ p (x̃_1) x̃·∇ _x̃_1 p( x̃_1) + P_o^(1)(x̃_1, x̃_1) ] e^-ũ( x̃). Thus we find that the inner-region solution is, to O(ϵ), P̃∼[p^2 (x̃_1,t) + ϵp (x̃_1,t)x̃·∇ _x̃_1 p( x̃_1,t) + ϵ P_o^(1)(x̃_1, x̃_1,t) + ⋯] e^-ũ( x̃).

§.§ Interaction integral Now we go back to the interaction integral G( x_1) in <ref>. Because of the short-range nature of the potential u, the main contribution to this integral is from the inner region. Therefore, we will use the inner solution <ref> to evaluate it.
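The key structural fact in the leading-order inner solution is that P̃^(0) ∝ e^-ũ makes the flux ∇P̃ + P̃∇ũ vanish identically, since ∇e^-ũ = -e^-ũ∇ũ. A one-dimensional finite-difference check of this identity (Python; the choice u(x) = e^-x for the scaled potential is arbitrary, made only for illustration):

```python
import numpy as np

# Check that P = exp(-u(x)) gives identically zero flux J = dP/dx + u'(x) P,
# so it solves the leading-order inner problem 0 = d/dx [ dP/dx + u'(x) P ].
# (The amplitude p^2 is set to 1; it is independent of the inner variable.)
x = np.linspace(0.1, 10.0, 2001)   # stay away from the kink of exp(-|x|) at 0
u = np.exp(-x)
P = np.exp(-u)
J = np.gradient(P, x, edge_order=2) + P * np.gradient(u, x, edge_order=2)
```

The residual J is zero up to the O(h^2) truncation error of the central differences.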
First we split the integration volume Ω for x_2 into the inner and outer regions defined in the previous section. Although there is no sharp boundary between the inner and outer regions, it is convenient to introduce an intermediate radius δ, with ϵ≪δ≪ 1, which divides the two regions. Then the inner region is Ω_i( x_1) = { x_2 ∈Ω : x_2-x_1<δ} and the outer region is the complementary set Ω_o( x_1) = Ω∖Ω_i( x_1). The dominant contribution to <ref> is then G ( x_1, t)= (N-1) ∫_Ω_i( x_1) P_2( x_1,x_2, t) ∇_ x_1 u(x_1-x_2 )x_2 = -(N-1) ϵ^d-1∫_x̃ < δ/ϵP̃(x_1, x̃, t)∇_x̃ũ( x̃) x̃∼(N-1) ϵ^d-1∫_x̃ < δ/ϵ{ p^2 (x_1,t) + ϵ p( x_1,t) x̃·∇ _ x_1 p( x_1,t)+ϵ P_o^(1)( x_1, x_1, t) }∇_x̃ e^-ũ( x̃) x̃.The first and third terms of the integral vanish using the divergence theorem and the fact that ũ is a radial function of x̃. Integration by parts on the second term gives ∫_x̃ < δ/ϵ[ x̃·∇ _ x_1 p(x_1,t) ] ∇_x̃ e^-ũ( x̃) x̃ ∼∇ _ x_1 p(x_1,t)[ v_d ( δ/ϵ)^d - ∫_x̃ <δ/ϵ e^-ũ( x̃) x̃ ],where we have used that e^-ũ( x̃)≈ 1 at x̃ = δ/ϵ. Finally, rewriting the first term above as a volume integral,[The volume of a d-dimensional ball of radius δ/ϵ is equal to v_d (δ/ϵ)^d, where v_d = π^d/2/Γ(d/2+1) is the volume of the unit ball.] <ref> becomes G( x_1, t)∼(N-1) ϵ^d p(x_1,t) ∇ _ x_1 p( x_1,t) ∫_x̃ <δ/ϵ ( 1 - e^-ũ( x̃))x̃.Since 1 - e^-ũ( x̃) decays at infinity, we can extend the domain of integration to the entire ℝ^d, introducing only lower-order errors. Therefore we can write G(x_1, t)∼α_u (N-1) ϵ^d p(x_1,t) ∇ _ x_1 p( x_1,t),with α_u =∫_ℝ^d ( 1 - e^-ũ( x̃))x̃ = ∫_ℝ^d ( 1 - e^- u( ϵ x)) x.Note that we obtain the same coefficient α_u that appeared in equation <ref> using the closure at the pair correlation function.

§.§ Reduced Fokker–Planck equation for soft spheres Combining <ref> with <ref> we find that, to O(ϵ^d), ∂ p/∂ t=∇_x_1·{ [1 +α_u (N-1) ϵ^d p ] ∇_x_1p -f(x_1) p },where α_u = ∫_ℝ^d ( 1 - e^- u(ϵ x)) x. Therefore, we find that the MAE method yields the same type of equation as the LMFA (see <ref>), but with a different coefficient in the nonlinear diffusion term.
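The MAE coefficient α_u is a one-dimensional radial quadrature once the angular integration is done. A sketch of its evaluation in d = 1 (Python; the exponential potential is an illustrative choice, and the hard-sphere case serves as a sanity check since its integrand is the indicator of the unit ball):

```python
import numpy as np

# MAE coefficient alpha_u = integral over R of (1 - exp(-u(eps x))) dx, d = 1.
# For the exponential potential, rescaling gives the integrand
# 1 - exp(-exp(-|x|)), which decays at infinity even though 1 - exp(-u)
# stays bounded at the origin for singular potentials.
s = np.linspace(0.0, 60.0, 600001)
alpha_EX = 2.0 * np.trapz(1.0 - np.exp(-np.exp(-s)), s)

# Hard-sphere sanity check: 1 - exp(-u_HS) is the indicator of |x| < 1,
# so alpha_HS = 2 in one dimension.
alpha_HS = 2.0 * np.trapz(np.where(s < 1.0, 1.0, 0.0), s)
```

For the exponential potential this gives α_u ≈ 1.59, smaller than the hard-sphere value, consistent with a softer effective repulsion.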
Expanding the exponential in <ref> for small u gives α_u = ∫_ℝ^d ( 1 - e^- u( ϵ x)) x = ∫_ℝ^du(ϵ x)x - 1/2∫_ℝ^d u^2(ϵ x) x + ⋯,that is, the LMFA coefficient is the leading-order contribution in the potential, provided that u(ϵ x) is small. However, as we will see in <ref>, this is not always true. We also see that the equation obtained by Felderhof <cit.>, <ref>, is the linearized version of <ref> after taking N →∞ and setting the external force to zero. The coefficient α_u can be related to basic concepts from statistical mechanics. Namely, the integrand in <ref> is the negative of the total correlation function h(r) = g(r)-1, where g(r) = exp[-u(r)] is the low-density limit of the pair correlation function, the so-called Boltzmann factor of the pair potential <cit.>.

§.§ Two-particle density function via MAE In contrast to the MFA, using MAE we have obtained an approximation of the two-particle density function P_2(x_1, x_2, t) in two regions of the configuration space, the outer and the inner regions, defined according to the separation between the two particles. It is convenient to have a uniformly valid expansion in the whole space, so that, for example, we can plot it and compare it against simulations of the stochastic particle system. This can be done by constructing a so-called composite expansion, consisting of the inner expansion <ref> plus the outer expansion <ref> minus the common part <cit.>. The result is P_2( x_1,x_2, t) ∼ p( x_1,t) p( x_2,t) e^-u(x_1 -x_2).

§ RESULTS §.§ The hard-sphere potential So far we have assumed that the set of Brownian particles interact via a soft pair potential u(r). However, the resulting reduced Fokker–Planck equation <ref> obtained via MAE can also be used to model a system of hard-core interacting particles of diameter ϵ. In particular, the model <ref> for soft spheres has exactly the same structure as the counterpart model for hard spheres derived in <cit.>.
Inserting the hard-sphere pair potential u_HS(r ) = {∞, r ≤ϵ; 0, r >ϵ} into the nonlinear diffusion coefficient in <ref> gives α_HS =∫_ x<1 x,that is, α_HS = 2 for d=1, α_HS = π for d=2, and α_HS = 4π/3 for d=3. This is in agreement with our previous work, where the reduced model was specifically derived for a system of hard spheres <cit.>. For hard spheres the configuration space has holes due to the illegal configurations (associated with infinite energy) and the derivation is slightly different. We note that the MFA for hard spheres does not work and that, in particular, the coefficient α_HS in <ref> is not defined.

Because the MAE models for soft and hard spheres coincide, for every potential u we can find an effective hard-sphere diameter ϵ_eff such that the continuum models of both systems – the soft-sphere system with pair potential u(r) and the hard-sphere system with diameter ϵ_eff – are equivalent. In other words, a characterization of a given soft potential u is to find ϵ_eff such that α_u ϵ^d = α_HS ϵ_eff^d,where α_u is given in <ref> and α_HS in <ref>. Rearranging, we find that (ϵ_eff/ϵ)^d = d∫_0^∞ ( 1 - e^- u(ϵ r)) r^d-1r. This idea of finding the effective hard-sphere diameter associated to a soft-sphere system, known as the effective hard-sphere diameter method, has been widely used to calculate both equilibrium and transport properties <cit.>. The reason this method is appealing is that it allows us to “translate” a general system of interacting soft spheres (whose properties may not have been studied before) to the widely studied hard-sphere system, on which most theories are based. Moreover, the derived model <ref> implies that, as far as the population-level dynamics are concerned, soft interactions may be incorporated into the effective hard-particle model by adjusting the hard-sphere diameter with <ref>. This is provided α_u in <ref> is well-defined and positive.
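The effective-diameter formula is simple to evaluate numerically. The following sketch (Python; the exponential potential with ϵ = 0.05, the value used in the one-dimensional example of the results section, is an illustrative choice):

```python
import numpy as np

# Effective hard-sphere diameter from
#   (eps_eff / eps)^d = d * integral_0^inf (1 - exp(-u(eps r))) r^(d-1) dr,
# for the exponential potential u(r) = exp(-r/eps) in d = 1 with eps = 0.05.
eps, d = 0.05, 1
r = np.linspace(0.0, 60.0, 600001)
ratio = d * np.trapz((1.0 - np.exp(-np.exp(-r))) * r**(d - 1), r)
eps_eff = eps * ratio**(1.0 / d)

# With N = 16 particles (the later example), the effective volume fraction
# in one dimension is N * eps_eff, roughly 0.64.
phi = 16 * eps_eff
```

This reproduces the value ϵ_eff ≈ 0.04 quoted for the first numerical example.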
In contrast, if α_u does not exist (or is negative), the MAE approach breaks down (or may become unstable) for the given pair potential u. Even this is instructive: it means that the potential does not decay fast enough at infinity to be incorporated into a population-level equation of the form <ref>, and that the MFA is preferable. Another application of the effective hard-sphere diameter is to provide an effective volume fraction for the system of soft spheres: using ϵ_eff, one can define the effective volume fraction of soft spheres and use it to check whether the “low-volume-fraction” condition holds.

§.§ Comparison between MAE and LMFA The simplest repulsive pair potential is the soft-sphere (SS) potential, which assumes the form u_SS (r) =(ϵ/r)^ν,where ϵ is a measure of the range of the interaction and ν is the hardness parameter which characterizes the particles (the softness parameter is defined as its inverse, 1/ν). We note that for ν =1 the SS potential corresponds to the Coulomb interaction <cit.>. Other common purely repulsive potentials include the exponential (EX) potential <cit.> u_EX (r) = e^-r/ϵ,and the repulsive Yukawa (YU) potential u_YU (r) = ϵ/r e^-r/ϵ.This potential, also known as the screened Coulomb potential, is used to describe elementary particles, small charged “dust” grains observed in plasma environments, and suspensions of charge-stabilized colloids <cit.>.

To model some physical systems it is convenient to incorporate an attractive part into the repulsive pair potential. The most common situation is that particles repel each other in the short range and attract each other over a longer range. For example, the SS potential in <ref> may be generalized to power-law repulsive–attractive potentials of the form u(r) = (ϵ/r)^a - C(ϵ/r)^b, with a>b and C>0, known as the Mie potentials.
The most famous example of this class of potentials is the Lennard–Jones potential, u_LJ (r) =( ϵ/r)^12 -( ϵ/r)^6.Another common repulsive–attractive potential is the Morse (MO) potential u_MO(r) = e^-r/ϵ - 1/C e^-lr/ϵ,where C and l are, respectively, the relative strength and relative lengthscale of the repulsion to the attraction. The most relevant situations for biological applications are given for C>1 and l<1, which correspond to short-range repulsion and weak long-range attraction <cit.>. <ref> shows examples of all the interaction potentials above.

A simple way to compare the approaches for short-range interaction potentials is to consider the nonlinear diffusion term using either the MAE or the LMFA. The respective coefficients (α_u as given in <ref> for the MAE and in <ref> for the LMFA) for the potentials above are shown in <ref>. Since the behaviour of the two integrals at infinity is the same, the LMFA fails for any case in which the MAE fails. The LMFA may also fail because of a singularity at the origin for which the MAE remains valid, as seen in some examples of <ref>. This is because the strongly repulsive short-range part of the potential results in correlations which violate the MFA assumption that particles may be considered independent. Moreover, there are considerable discrepancies between the MAE and LMFA coefficients in the cases for which the latter is defined. The LMFA coefficient α_u is undefined for inverse-power potentials such as SS and LJ, since the integral in <ref> is either singular at the origin or at infinity for all possible powers. Therefore, the LMFA is not valid for these potentials. The MAE coefficient α_u exists for the SS potential in <ref> for ν>d, but is undefined for ν≤ d. Therefore, we find that the MAE is not valid for the Coulomb interaction [ν = 1 in <ref>].
The interpretation is that the Coulomb interaction does not decay sufficiently quickly at infinity, and hence the inner region spans the whole configuration space (so there is no outer region where the integral G ( x_1) is negligible, as assumed in the MAE derivation). <ref> shows the variation of α_SS against the hardness parameter ν. As expected, lim_ν→∞α_SS = α_HS, since the HS potential is the limiting case of the SS potential for ν→∞. Also note that the strength of the nonlinear diffusion term, parametrized by α_SS, decreases as the hardness ν increases. The effect of the softness parameter 1/ν on the diffusion and other transport coefficients was studied in <cit.>; they found that for ν≥ 72 the soft-sphere system behaved essentially as a hard-sphere system in molecular-dynamics simulations. We find that the relative error between α_SS and α_HS for ν = 72 in three dimensions is 2.6%.

The Morse potential <ref> has well-defined MAE and LMFA coefficients α_MO for C and l positive. However, they differ substantially depending on the relative strength C and relative lengthscale l of the repulsion to the attraction, see <ref>. We note that both coefficients may be negative for some parameter values, which implies, from <ref>, that the nonlinear component of the diffusion coefficient becomes negative. When this occurs, the system enters a so-called catastrophic regime and the particles collapse to a point as N→∞ <cit.>.

Evaluation of the MAE and LMFA coefficients provides a straightforward way to determine the type of dynamics the interacting particle system has. A priori it is not easy to discern whether the potential is “short-range enough” and, as a result, which method is more suitable to obtain a reduced population-level model.
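The soft-sphere figures above can be verified. The substitution t = r^-ν in the radial integral for α_u gives, after a short calculation (not carried out in the text), α_SS/α_HS = Γ(1-d/ν) for ν > d. A sketch of the check (Python, illustrative):

```python
import math
import numpy as np

# alpha_SS / alpha_HS = Gamma(1 - d/nu) for the soft-sphere potential, nu > d.
d, nu = 3, 72
ratio_exact = math.gamma(1.0 - d / nu)

# Cross-check by direct quadrature.  The integrand 1 - exp(-r^(-nu)) equals 1
# for r <= 0.5 to machine precision, so that part is integrated analytically,
# contributing a^d to d * integral_0^inf (1 - exp(-r^(-nu))) r^(d-1) dr.
a = 0.5
r = np.linspace(a, 10.0, 1000001)
ratio_quad = a**d + d * np.trapz((1.0 - np.exp(-r**(-float(nu)))) * r**(d - 1), r)

rel_error = ratio_exact - 1.0   # relative deviation of alpha_SS from alpha_HS
```

For ν = 72 in d = 3 this gives a relative error of about 2.6%, the value quoted above, and the error shrinks to zero as ν→∞ since Γ(1) = 1.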
The nonlinear part of the diffusion coefficient in <ref>, α_u (N-1) ϵ^d p, gives an idea of the strength of the interaction and of whether the MAE will be the appropriate method (in particular, one can compute the effective volume fraction as described in <ref> and ascertain whether the low-volume-fraction assumption holds). Regarding the MFA closure, one should keep in mind that the LMFA coefficient α_u not being defined does not necessarily mean that the standard MFA fails, as mentioned in <ref>; it could be that the short-range assumption used to obtain the nonlinear diffusion equation <ref> does not hold and that, instead, one should keep the original MFA integro-differential model <ref>. Conversely, if α_u is not defined due to the singular behavior of the potential at the origin (which is the case for all those seen in <ref>), this is an indication that the convolution in the original MFA will not exist.

§.§ Comparison with the particle-level model In this section we compare the macroscopic models obtained via the MFA and KSA closures, and via the MAE method we have introduced, to each other as well as to numerical simulations of the stochastic particle system. We use the open-source C++ library Aboria <cit.> to perform the particle-level simulations. The overdamped Langevin equation <ref> is integrated using the Euler–Maruyama method and a constant timestep Δ t, leading to an explicit update step for each particle given by 𝐗^m+1_i = 𝐗^m_i + √(2DΔ t)Δ W_i^m +f(X_i^m ) Δ t- ∑_j ≠ i∇_ x_i u( X_i^m -X_j^m) Δ t,where the Δ W_i^m are independent d-dimensional normally distributed random variables with zero mean and unit variance. We choose the timestep Δ t so that the results are converged (that is, there is no change in the results for smaller timesteps).
For all the simulations in this paper a timestep of Δ t = (0.1ϵ)^2/2D was sufficient for convergence, leading to an average diffusion step size of 0.1ϵ. A naive implementation of the pairwise interaction force over all particle pairs would lead to a large number (of order N^2) of potential pair interactions to evaluate. To improve the efficiency of the code, we take advantage of the compact nature of the potentials and restrict particle interactions to those pairs that are within a certain cutoff length c < 6ϵ. All particle pairs i,j separated by a distance greater than this cutoff are implicitly given a pair potential of u( X_i -X_j) = 0. In order to compare the particle-level models with the PDE models, we perform R independent realizations and output the positions of all NR particles at a set of output time points. A histogram of the positions is calculated and then scaled to produce a discretized density function that can be compared with the PDE models. To generate the two-particle density function, we create a two-dimensional histogram of the positions of each particle pair (i,j) and scale it accordingly to produce a two-particle density.

First, we consider a one-dimensional problem in Ω = [0, 1] with periodic boundary conditions. We compare estimates of the one-particle density p(x_1,t) as well as the two-particle density P_2(x_1, x_2,t) obtained from simulating <ref> to solutions of the same quantities using the KSA, MFA or MAE models. We start with a set of parameters in which the potential is sufficiently long-ranged that the system is not really in the low-density limit with the number of particles we use, and then consider an example with a strongly repulsive short-ranged potential. In all the examples we set the external drift to zero. We begin by presenting the models for the three approaches for d=1.
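The particle-level update step described above can be sketched in a few lines. The following Python stand-in (not the Aboria implementation; it sums over all pairs instead of using the cutoff, and the exponential potential is an illustrative choice) shows the scheme on the periodic domain [0,1) with D = 1 and zero external force:

```python
import numpy as np

rng = np.random.default_rng(0)

def em_step(X, dt, eps):
    """One Euler-Maruyama step for N particles in 1D with periodic BCs."""
    z = X[:, None] - X[None, :]            # pairwise signed separations
    z = (z + 0.5) % 1.0 - 0.5              # periodic wrap to (-1/2, 1/2]
    np.fill_diagonal(z, np.inf)            # exclude self-interaction
    # -d/dX_i of u(|z|) = exp(-|z|/eps) is +exp(-|z|/eps)/eps * sign(z)
    force = np.sum(np.exp(-np.abs(z) / eps) / eps * np.sign(z), axis=1)
    X = X + force * dt + np.sqrt(2.0 * dt) * rng.standard_normal(X.size)
    return X % 1.0

eps = 0.05
dt = (0.1 * eps) ** 2 / 2.0                # the timestep used in the text
X = rng.uniform(0.0, 1.0, size=16)
for _ in range(100):
    X = em_step(X, dt, eps)
```

Repeating such runs over R realizations and histogramming the positions yields the discretized one- and two-particle densities described above.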
The MFA reads, combining <ref> and <ref>,

∂ p/∂ t = ∂/∂ x_1[ ∂ p/∂ x_1 + (N-1) p ∫_Ω p(x_2, t) f_u(x_1, x_2) d x_2 ],

where p = p(x_1,t) unless explicitly written and f_u(x_1, x_2) := ∂/∂ x_1 u(|x_1 - x_2|). To solve <ref> we use a second-order accurate finite-difference approximation with M grid points in space and the method of lines in time with an inbuilt Matlab ode solver. To evaluate the integral in <ref>, we use the periodic trapezoid rule (which converges exponentially fast for smooth integrands <cit.>). To avoid evaluating the interaction potential at zero, we shift the grid for x_2 by h/2, where h = 1/M is the grid spacing. The density p(x_2,t) is approximated from p(x_1,t) using p(x_2,t) ≈ (p(x_2-h/2,t) + p(x_2+h/2,t))/2. Because of the convolution term, the discretization matrix of <ref> is full, making the numerical solution of the MFA computationally expensive. An alternative would be to use a nonuniform mesh with more points near the diagonal or an adaptive grid scheme such as that presented by Carrillo and Moll <cit.>, which uses a transport map between the uniform density and the unknown density p such that more grid points are placed where the density is higher.

The KSA closure model is the coupled system for p(x_1,t) and P_2(x_1, x_2,t), given by

∂ p/∂ t = ∂/∂ x_1[ ∂ p/∂ x_1 + (N-1) ∫_Ω P_2 f_u(x_1, x_2) d x_2 ],

∂ P_2/∂ t = ∂/∂ x_1[ ∂ P_2/∂ x_1 + f_u(x_1, x_2) P_2 + (N-2) P_2(x_1,x_2,t)/(p(x_1,t) p(x_2,t)) ∫_Ω [ P_2(x_1,x_3,t) P_2(x_2,x_3,t)/p(x_3,t) ] f_u(x_1, x_3) d x_3 ]
+ ∂/∂ x_2[ ∂ P_2/∂ x_2 + f_u(x_2, x_1) P_2 + (N-2) P_2(x_1,x_2,t)/(p(x_1,t) p(x_2,t)) ∫_Ω [ P_2(x_1,x_3,t) P_2(x_2,x_3,t)/p(x_3,t) ] f_u(x_2, x_3) d x_3 ],

where p = p(x_1,t) and P_2 = P_2(x_1,x_2,t) unless explicitly written. We solve in the computational spatial domain Ω^2 and therefore the KSA model requires a two-dimensional grid with full matrices, making it very computationally expensive.
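The quadrature just described for the MFA interaction integral (half-grid shift of x_2 so that f_u is never evaluated at zero separation, centered average for p(x_2,t), periodic trapezoid rule) can be sketched as follows. The exponential pair potential u(r) = e^{-r/ϵ} is an assumed concrete form for illustration; the function name and parameters are ours.

```python
import numpy as np

def mfa_interaction_term(p, eps=0.05, N=16, L=1.0):
    """Evaluate (N-1) p(x_1) * int_Omega p(x_2) f_u(x_1, x_2) dx_2
    on a periodic grid, with f_u(x_1, x_2) = d/dx_1 u(|x_1 - x_2|)
    and the assumed exponential potential u(r) = exp(-r/eps).
    """
    M = p.size
    h = L / M
    x1 = h * np.arange(M)
    x2 = x1 + h / 2.0                      # shifted grid avoids r = 0
    p2 = 0.5 * (p + np.roll(p, -1))        # centered average onto shifted grid
    out = np.empty(M)
    for i in range(M):
        r = x1[i] - x2
        r -= L * np.round(r / L)           # periodic (minimal-image) distance
        # d/dx_1 u(|x_1 - x_2|) = -sign(r)/eps * exp(-|r|/eps)
        f_u = -np.sign(r) / eps * np.exp(-np.abs(r) / eps)
        out[i] = h * np.sum(p2 * f_u)      # periodic trapezoid rule
    return (N - 1) * p * out
```

For a constant density the (odd) kernel samples cancel in symmetric pairs on the shifted grid, so the interaction term vanishes, a quick consistency check on the quadrature.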
As with the MFA, we shift the grid for x_2 by h/2 to avoid any issues with f_u(x_1, x_2) when the interaction potential u is singular at the origin. In the first integral of <ref>, the coordinate x_3 is evaluated on the same grid as x_2, and in the second integral it is evaluated on the same grid as x_1.

An alternative implementation of the KSA system is to solve for P_2 only and evaluate the one-particle density as p(x_1,t) = ∫_Ω P_2(x_1,x_2,t) d x_2 (replacing <ref>) <cit.>. While this identity holds in the infinite BBGKY hierarchy, we note that once the KSA closure <ref> is adopted, the resulting model is not necessarily equal to the KSA system <ref>.

Finally, the reduced model from the MAE method reads

∂ p/∂ t = ∂/∂ x_1{ [ 1 + α_u (N-1) ϵ^d p ] ∂ p/∂ x_1 },

where α_u = 2 ∫_0^∞ (1 - e^{-u(ϵ r)}) d r. Noting that the right-hand side in <ref> can be written as [p + 1/2 α_u (N-1) ϵ^d p^2]_{xx}, the numerical implementation is straightforward and, since the discretization matrix is banded, very efficient.

In the first example, we use the exponential potential u_EX <ref> with ϵ = 0.05 in a system of N=16 particles. The particles are initially distributed according to p(x,0) = 0.5[tanh(β(x-θ)) + tanh(β(1-θ-x))], with β = 30 and θ = 0.2, and we let them diffuse until time T_f = 0.02. The evolution in time of the one-particle density obtained with the various methods is shown in <ref>. We observe very good agreement between the stochastic simulations and the MFA and KSA models, whereas the MAE model slightly underestimates the diffusion strength. This can be explained since, for the chosen values of ϵ and N, the potential is not short-ranged and the assumptions of the MAE model break down. In particular, using <ref> we find that the effective hard-sphere radius is ϵ_eff = 0.04 and hence the effective volume fraction is 0.64.
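A minimal sketch of advancing the MAE model in time, exploiting the form [p + 1/2 α_u (N-1) ϵ^d p^2]_{xx} noted above. The explicit conservative scheme below is our own choice for transparency (the paper uses a finite-difference discretization with the method of lines), and the parameter values in the test are assumptions chosen to resemble the first example.

```python
import numpy as np

def mae_evolve(p0, alpha_u, N, eps, h, dt, steps, d=1):
    """Explicit conservative scheme for the MAE model
    p_t = [p + 0.5 * alpha_u * (N-1) * eps**d * p**2]_xx  (periodic grid).

    Writing the flux variable w = p + c p^2 makes discrete mass
    conservation immediate, since the periodic discrete Laplacian of w
    sums to zero.
    """
    c = 0.5 * alpha_u * (N - 1) * eps**d
    p = p0.copy()
    for _ in range(steps):
        w = p + c * p * p
        p = p + dt * (np.roll(w, -1) - 2.0 * w + np.roll(w, 1)) / h**2
    return p
```

Being explicit, the scheme needs dt below the usual stability bound, roughly dt ≲ h^2 / (2 (1 + 2 c max p)).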
Similarly, the solution obtained via the LMFA (shown at T_f in <ref>(b)) differs noticeably from the MFA solution.

<Ref> shows the corresponding two-particle density function at the final time T_f. For the KSA, P_2 is solved for as discussed above. For the MFA and MAE models, P_2 is given by <ref> and <ref> respectively. As before, x_2 is shifted by half the grid size and p(x_2,t) is approximated from p(x_1,t) using the centered average. The correlation between particles can be seen in the drop in probability at the diagonal x_1 = x_2 in the simulations as well as in the KSA and MAE plots. Conversely, the MFA misses the correlation between particles, as expected from the ansatz P_2(x_1, x_2, t) = p(x_1, t) p(x_2,t). The differences in P_2 between the models can be clearly seen in <ref>(a), which shows a plot of P_2(x_1,x_2,T_f) along x_2 = 0.5.

Next we consider an example using a more repulsive interaction potential. In particular, we consider a smoothed version of the Yukawa potential <ref>, namely u(r) = (ϵ/√(r^2 + δ^2)) e^{-r/ϵ}, with ϵ = 0.01 and δ = 0.002. We choose to smooth the potential so that particles can still swap positions and so that we can use the closure models KSA and MFA (in one dimension the singularity at zero poses problems in the convolution terms, see <ref>). We run a simulation with N=20 particles up to time T_f = 0.02. For these parameters, the effective hard-sphere radius is ϵ_eff = 0.009 and the effective volume fraction is 0.18. Thus we expect the MAE model to perform better than in the previous example, where the potential was not short range. The one-particle density plots are shown in <ref>, and the results for the two-particle density function are shown in <ref>. We see that the MFA closure solution diffuses substantially faster than the KSA closure solution, whereas the MAE model spreads slightly slower than the KSA one (<ref>). The particle-level simulations match the KSA solution closely over all times.
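The coefficient α_u for the two potentials used in these examples can be computed by direct quadrature of α_u = 2 ∫_0^∞ (1 - e^{-u(ϵ r)}) d r. The identification ϵ_eff = α_u ϵ / 2 in one dimension is our own reading of the effective hard-rod matching, stated here as an assumption; it is consistent with the quoted values ϵ_eff ≈ 0.04 (exponential) and ϵ_eff ≈ 0.009 (smoothed Yukawa).

```python
import numpy as np

def alpha_u(u_scaled, r_max=50.0, n=200001):
    """alpha_u = 2 * int_0^inf (1 - exp(-u(eps r))) dr, truncated at r_max.

    u_scaled(r) must return u(eps * r); plain trapezoid quadrature.
    """
    r = np.linspace(0.0, r_max, n)
    g = 1.0 - np.exp(-u_scaled(r))
    dr = r[1] - r[0]
    return 2.0 * dr * (np.sum(g) - 0.5 * (g[0] + g[-1]))

# Assumed concrete forms of the two potentials used in the examples:
def u_exp_scaled(r):
    # u(r) = exp(-r/eps)  =>  u(eps r) = exp(-r)
    return np.exp(-r)

def u_yukawa_scaled(r, delta_over_eps=0.2):
    # u(r) = (eps/sqrt(r^2 + delta^2)) exp(-r/eps), with delta/eps = 0.2
    return np.exp(-r) / np.sqrt(r**2 + delta_over_eps**2)
```

Under the hard-rod matching assumption, `0.5 * alpha_u(u_exp_scaled) * 0.05` gives roughly 0.04 and `0.5 * alpha_u(u_yukawa_scaled) * 0.01` roughly 0.009.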
The two-particle density functions for the particle simulation, the KSA and the MAE solution are indistinguishable at time T_f = 0.02, whereas the MFA misses the drop in probability for closely spaced particles, as expected due to its assumption of no particle correlations (<ref>(b)).

Generally, we see that MAE is good for strongly repulsive interactions (such as in the example in <ref>) while MFA is better for softer or longer-range interactions (<ref>). The KSA provides a good approximation in all cases. When the two-particle density is required in a system with long-range interactions, our results may lead one to think that one should use the KSA, since the MFA does not capture the correlation in P_2. However, we suggest that a slight modification of the MFA can provide a reasonable approximation to P_2 also. Specifically, one could still use the standard MFA closure P_2(x_1, x_2, t) = p(x_1, t) p(x_2,t) to compute the one-particle density, but then use P_2(x_1, x_2, t) ≈ p(x_1, t) p(x_2,t) exp(-u(|x_1-x_2|))/C as an approximation of the two-particle density, where C = ∬ exp(-u(|x_1-x_2|)) d x_1 d x_2 is a normalization constant. This modification of P_2, using either the MFA or the LMFA to compute p, is shown as MFAe and LMFAe respectively in <ref>. We see that MFAe and LMFAe provide a better approximation of P_2 than MFA and MAE in the first example with longer-range interactions (<ref>(a)), whereas in the second example (with strong repulsion) the three approximations MFA, MFAe, and LMFAe are poor (<ref>(b)).

Finally, we consider a two-dimensional example in Ω = [0,1]^2. We choose a system of N = 400 Yukawa-interacting particles, with interaction potential u_YU <ref> and ϵ = 0.01, initially distributed according to a normal distribution in x with mean 0.5 and standard deviation σ = 0.05.
The effective hard-sphere radius is ϵ_eff = 0.0112, giving an effective volume fraction of 0.04. Being in two dimensions, solving the KSA model would require solving a system of M^4 equations, where M is the number of grid points in one direction. Because of the short-range nature of the pair potential u_YU, we require a large number of points M to resolve the interaction near the origin. As a result, solving the KSA model for this system becomes computationally impractical, and we compare the MFA and the MAE methods only. We also solve the interaction-free model (ϵ=0) for reference. We plot the comparison at times T_f/2 and T_f in <ref>. We observe a very good agreement between the stochastic simulations of the particle system and the MAE model, whereas the MFA model overestimates the diffusion strength. Because of the short-range nature of the potential, we find no noticeable difference between the solution of the MFA (<ref>) and the localized version LMFA <ref>, which is computationally much easier to solve (the two curves would lie on top of each other in <ref>; the norm of the relative error is of order 10^-4). As expected, the interaction-free case spreads the slowest of all models since the nonlinear diffusion term is set to zero.

§ DISCUSSION AND CONCLUSIONS

We have studied a system of Brownian particles interacting via a short-range repulsive potential u, and have discussed several ways to obtain a population-level model for the one-particle density. In particular, we have considered two common closure approximations and presented an alternative method based on matched asymptotic expansions (MAE). The MAE method has the advantage that it is systematic and works well for very short-ranged potentials, especially singular potentials for which common closure approximations can lead to ill-posed models.[These models are sometimes used regardless, with a numerical discretisation providing an ad hoc regularisation of the convolution integral <cit.>.]
The MAE result is a nonlinear diffusion equation similar to our previous work for hard spheres <cit.>, with the coefficient of the nonlinear term, α_u, depending on the potential through (<ref>).

We have performed Monte Carlo simulations of the stochastic particle system in one and two dimensions, and compared the results with the solution of the MAE model and two common closure approximations: the mean-field approximation (MFA) and the Kirkwood Superposition Approximation (KSA). While the MFA closes the system at the level of the one-particle density p, the KSA closes it at the level of the two-particle density P_2. We have tested the models in examples with long- and short-range interactions. We found that the KSA agreed well with the stochastic simulations in both scenarios, but we could only use it in the one-dimensional examples due to its high computational cost. This is because the discretisation of the convolution term yields a full matrix, making the method impractical to use in two or three dimensions, especially for strongly repulsive potentials that require a very fine mesh in the region where two particles are in close proximity. This is also true, but to a lesser extent, for the MFA model, which captured well the behavior of the system with a long-range potential but was outperformed by MAE in examples with a short-range potential. The MAE method results in a nonlinear diffusion model (which is thus local, with a banded discretization matrix) for p, making it straightforward to solve.

Recently, <cit.> argued in favour of the KSA because it gave the two-particle density as well as the one-particle density, and could therefore capture correlations in particle positions, which the MFA cannot. Our MAE method also gives an approximation for P_2, and it successfully captures the low likelihood of finding particles close to each other when there are strong short-range repulsions.
We emphasize again that it does so at a fraction of the cost of the KSA, so that the MAE method becomes particularly suited for problems in two or three dimensions where the KSA or similar higher-order closures are impractical. We noted also that the MFA can be extended simply (to what we called MFAe) to produce an approximation for P_2, so that it too becomes a viable alternative if the interactions are longer range.

Stochastic simulations of a system of repulsive soft spheres are generally regarded as simpler than simulations of hard spheres, since one avoids the issue of how to approximate the collision between two particles. Instead, soft-sphere simulations involve summing up the contribution to the interaction force of all the neighbors at each time-step and adding it as a drift term to the Brownian motion. However, for very repulsive potentials this needs to be done very carefully, since if the time-step is not small enough the repulsive part of the potential is not resolved correctly and is easily missed. If this happens, the low-density diagonal in the two-particle density plots (see for example <ref>) would either not be well resolved or not be there at all. It is therefore important to do a proper convergence study for the stochastic simulations in order to decide on the appropriate time-step. We note in this respect that our coefficient α_u provides a natural way to determine the radius of the equivalent hard sphere for any short-range potential.

We have seen that the MAE method works well for repulsive short-range potentials, while the MFA provides a good approximation for long-range interactions. A natural question is to ask what to do for potentials with both characteristics, namely those which are very singular at the origin but have fat tails at infinity. This work provides a possible route to deal with such potentials: to combine the MAE and MFA methods. Specifically, one could break the potential into two parts and deal with each of them separately.
The result would be an equation of the type <ref> with the nonlocal convolution term due to the long-range component of the potential, and a nonlinear diffusion term due to the short-range component.

[Baker:2010is] R. E. Baker and M. J. Simpson, Correcting mean-field approximations for birth-death-movement processes, Phys. Rev. E, 82 (2010), p. 041905.
[Berlyand:2016fu] L. Berlyand, P.-E. Jabin, and M. Potomkin, Complexity reduction in many particle systems with random initial data, SIAM/ASA J. Uncertainty Quantification, 4 (2016), pp. 446-474.
[Binny:2015ei] R. N. Binny, M. J. Plank, and A. James, Spatial moment dynamics for collective cell movement incorporating a neighbour-dependent directional bias, J. R. Soc. Interface, 12 (2015), p. 20150228.
[Bruna:2012cg] M. Bruna and S. J. Chapman, Excluded-volume effects in the diffusion of hard spheres, Phys. Rev. E, 85 (2012), p. 011103.
[Bruna:2013ku] M. Bruna and S. J. Chapman, Diffusion of finite-size particles in confined geometries, Bull. Math. Biol., 76 (2014), pp. 947-982.
[Burger:2010gb] M. Burger, M. Di Francesco, J.-F. Pietschmann, and B. Schlake, Nonlinear cross-diffusion with size exclusion, SIAM J. Math. Anal., 42 (2010), p. 2842.
[Carrillo:2010tx] J. A. Carrillo, M. Fornasier, G. Toscani, and F. Vecil, Particle, kinetic, and hydrodynamic models of swarming, in Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences, G. Naldi, L. Pareschi, and G. Toscani, eds., Birkhäuser Basel, 2010, pp. 297-336.
[Carrillo:2009dz] J. A. Carrillo and J. S. Moll, Numerical simulation of diffusive and aggregation phenomena in nonlinear continuity equations by evolving diffeomorphisms, SIAM J. Sci. Comput., 31 (2009), pp. 4305-4329.
[Dolbeault:2001jc] J. Dolbeault, P. A. Markowich, and A. Unterreiter, On singular limits of mean-field equations, Arch. Ration. Mech. An., 158 (2001), pp. 319-351.
[DOrsogna:2006ci] M. R. D'Orsogna, Y. Chuang, A. L. Bertozzi, and L. S. Chayes, Self-propelled particles with soft-core interactions: patterns, stability, and collapse, Phys. Rev. Lett., 96 (2006), p. 104302.
[Felderhof:1978vn] B. U. Felderhof, Diffusion of interacting Brownian particles, J. Phys. A: Math. Gen., 11 (1978), p. 929.
[Hansen:2006uv] J. P. Hansen and I. R. McDonald, Theory of Simple Liquids, Academic Press, London, 2006.
[Heyes:2005ff] D. M. Heyes and A. C. Brańka, The influence of potential softness on the transport coefficients of simple fluids, J. Chem. Phys., 122 (2005), p. 234504.
[Hinch:1991go] E. J. Hinch, Perturbation Methods, Cambridge University Press, 1991.
[Horng:2012io] T.-L. Horng, T.-C. Lin, C. Liu, and B. Eisenberg, PNP equations with steric effects: a model of ion flow through channels, J. Phys. Chem. B, 116 (2012), pp. 11422-11441.
[Hynninen:2003hr] A.-P. Hynninen and M. Dijkstra, Phase diagrams of hard-core repulsive Yukawa particles, Phys. Rev. E, 68 (2003), p. 021407.
[Israelachvili:2011ug] J. N. Israelachvili, Intermolecular and Surface Forces, revised third edition, Academic Press, 2011.
[Kirkwood:1935is] J. G. Kirkwood, Statistical mechanics of fluid mixtures, J. Chem. Phys., 3 (1935), pp. 300-313.
[liggett1999stochastic] T. M. Liggett, Stochastic Interacting Systems: Contact, Voter, and Exclusion Processes, vol. 324, Springer-Verlag, Berlin, 1999.
[Markham:2013tv] D. C. Markham, M. J. Simpson, P. K. Maini, E. A. Gaffney, and R. E. Baker, Incorporating spatial correlations into multispecies mean-field models, Phys. Rev. E, 88 (2013), p. 052713.
[Middleton:2014fa] A. M. Middleton, C. Fleck, and R. Grima, A continuum approximation to an off-lattice individual-cell based model of cell migration and adhesion, J. Theor. Biol., 359 (2014), pp. 220-232.
[Mogilner:1999iy] A. Mogilner and L. Edelstein-Keshet, A non-local model for a swarm, J. Math. Biol., 38 (1999), pp. 534-570.
[Mulero:2012vc] A. Mulero, Theory and Simulation of Hard-Sphere Fluids and Related Systems, vol. 753 of Lecture Notes in Physics, Springer-Verlag, Berlin, 2008.
[aboria] M. Robinson, Aboria library, 2016, <https://martinjrobins.github.io/Aboria/> (accessed 12/20/2015). Version 0.1.
[Robinson:2017vxa] M. Robinson and M. Bruna, Particle-based and meshless methods with Aboria, SoftwareX, in press (2017), <http://dx.doi.org/10.1016/j.softx.2017.07.002>.
[Singer:2004ec] A. Singer, Maximum entropy formulation of the Kirkwood superposition approximation, J. Chem. Phys., 121 (2004), p. 3657.
[Trefethen:2000jp] L. N. Trefethen, Spectral Methods in MATLAB, Society for Industrial and Applied Mathematics, 2000.
M. Bruna, S. J. Chapman, and M. Robinson, Diffusion of particles with short-range interactions, http://arxiv.org/abs/1703.09768v2.
Image Stitching by Line-guided Local Warping with Global Similarity Constraint

Tianzhu Xiang^1, Gui-Song Xia^1, Xiang Bai^2, Liangpei Zhang^1

^1State Key Lab. LIESMARS, Wuhan University, Wuhan, China.
^2Electronic Information School, Huazhong University of Science and Technology, China.

=========================================================================================================================================================================================================================

Low-textured image stitching remains a challenging problem. It is difficult to achieve good alignment, and it is easy to break image structures, due to insufficient and unreliable point correspondences. Moreover, because of the viewpoint variations between multiple images, the stitched images suffer from projective distortions. To solve these problems, this paper presents a line-guided local warping method with a global similarity constraint for image stitching. Line features, which serve well for geometric descriptions and scene constraints, are employed to guide image stitching accurately. On one hand, the line features are integrated into a local warping model through a designed weight function. On the other hand, line features are adopted to impose strong geometric constraints, including line correspondence and line collinearity, to improve the stitching performance through mesh optimization. To mitigate projective distortions, we adopt a global similarity constraint, which is integrated with the projective warps via a designed weight strategy. This constraint causes the final warp to change slowly from a projective to a similarity transformation across the image. Finally, the images undergo a two-stage alignment scheme that provides accurate alignment and reduces projective distortion. We evaluate our method on a series of images and compare it with several other methods.
The experimental results demonstrate that the proposed method provides a convincing stitching performance and that it outperforms other state-of-the-art methods.

§ INTRODUCTION

Because images are limited by a camera's narrow field of view (FOV), image stitching combines a group of images with overlapping regions to generate a single, but larger, mosaic with a wider FOV. Image stitching has been widely used in many tasks in photogrammetry <cit.>, remote sensing <cit.> and computer vision <cit.>. In the literature <cit.>, two main approaches have typically been attempted to produce image stitching with satisfactory visual results: (1) developing better alignment models and (2) employing image composition algorithms, such as seam cutting <cit.> and blending <cit.>. Image alignment is the first and most crucial step in image stitching. Although advanced image composition methods can reduce stitching artifacts and improve the stitching performance, they cannot address obvious misalignments. When a seam or blending area coincides with misaligned areas, the current image composition schemes will fail to provide a satisfactory stitched image <cit.>. Most previous image stitching methods estimate global geometric transformations (e.g., similarity, affine or projective transformation) to bring the overlapping images into alignment. However, these methods require the camera to rotate about a fixed projection center or the scenes to have limited depth variance <cit.>, which are restrictive assumptions that are often violated in practice, resulting in artifacts in the stitched images, e.g., misalignments or ghosting.

To compensate for these geometric assumptions, some spatially-varying warping methods for image stitching have been proposed in recent years that can be roughly categorized into two groups: multiple homographies and mesh-based warping.
The former estimates multiple homographies that are compatible with local geometries to align the input images, e.g., as-projective-as-possible (APAP) warping <cit.>. Mesh-based warping first pre-warps the image using global homography; then, it adopts some energy functions to optimize the alignment, treating it as a mesh warping problem, e.g., content-preserving warping (CPW) <cit.>. The high degrees of freedom (DoFs) involved in these methods can better handle parallax than can global transformations; thus, they can provide satisfactory stitching results. However, some challenges remain to be addressed:

- The current methods often fail to achieve satisfactory alignment in low-texture images. Due to the high DoFs, these methods inevitably depend heavily on point correspondences <cit.>. However, keypoints are difficult to detect in some low-texture images because the homogeneous regions, such as indoor walls, sky, and artificial structures, are not distinctive enough to provide rich and reliable correspondences. Hence, these methods often erroneously estimate the warping model, which causes misalignments.

- The influence of projective distortions has not been fully considered. Because many methods are based on projective transformations, e.g., CPW <cit.>, APAP <cit.>, the stitched results of images taken under various photographing viewpoints may suffer from projective distortions <cit.> in the non-overlapping regions, including both shape and perspective distortions. For instance, some regions in the stitched image may be stretched or non-uniformly enlarged, and it is difficult to preserve the perspective of each image (Fig. <ref>(b), Fig. <ref>(a)).

- The image structure distortion has not been fully considered. Some local warping models, e.g., CPW <cit.>, APAP <cit.>, may bend line structures, especially when stitching low-texture images.
For instance, insufficient or unreliable keypoints cause APAP to erroneously estimate some local transformations, which results in misalignment of the local regions and distorts the line structures that span multiple local regions, while CPW employs only feature correspondences and content smoothness to optimize the global transformation and does not consider structural constraints.

The challenges to image stitching can be clearly seen in Fig. 1. Fig. 1(a) shows the original images and the detected features (points and lines). In some homogeneous regions, only a few points are detected and matched, making it difficult to estimate an accurate transformation. Fig. 1(b) shows the stitching results from global homography <cit.>, CPW <cit.>, APAP <cit.> and the proposed method. When the restrictive imaging conditions are violated, the global homography model does not fit the data correctly; thus, it results in obvious misalignments (the red boxes). In low-textured areas with insufficient correspondence (red boxes), CPW lacks sufficient data to align the pre-warping result, and APAP cannot estimate accurate local homographies, causing obvious misalignments. The lack of point correspondences also leads to structural deformations in CPW and APAP (blue boxes), where straight lines are deformed into curves. Due to the projective transformation used in these three models and the fact that no measures are taken to eliminate distortions, the stitched image results of these methods suffer from severe projective distortions (the yellow boxes), where the chairs are enlarged non-uniformly.

The above problems provide strong motivation for improving the performance of image stitching. To our knowledge, only a few studies have been conducted to address either of the aforementioned problems; consequently, additional efforts are needed.
Recent studies (<cit.> and <cit.>) have reported that line features can be used to improve the alignment performance, and <cit.> and <cit.> recently showed that similarity transformations are advantageous in reducing distortions. Inspired by these studies, our work is based on the following two assumptions:

- In most man-made environments, line features are relatively abundant, thus they can be regarded as effective supplements that can provide rich correspondences for accurate warping model estimation <cit.>. Furthermore, line features depict the geometrical and structural information of scenes <cit.>; thus, they can also be used to preserve the image structures.

- Similarity transformation <cit.> does not introduce shape distortion because it consists only of translation, rotation and uniform scaling. A similarity transformation can be regarded as a combination of panning, zooming and in-plane camera rotation; therefore, it preserves the viewing direction.

It is thus of great interest to investigate how to integrate line features and global similarity transformation to improve the image stitching performance. To this end, this paper presents a line-guided local warping model for image stitching with a global similarity constraint. More precisely, this method adopts a two-stage scheme to achieve good alignment. First, pre-warping is jointly estimated using both point and line features. Then, extended mesh-based warping is used to further align the pre-warping result. Line features are integrated into the mesh-based warping framework and act as structural constraints to preserve image structures. Finally, to prevent undesirable distortions, the global similarity transformation is adopted as a similarity constraint and used to adjust the estimated warping model. The contributions of our work are as follows:

- We introduce line features to guide image stitching, especially in low-texture cases.
Line features play a significant role mainly in two aspects: 1) they are integrated into the local warping model using a weight function to achieve accurate alignment; 2) they are employed to impose strong geometric constraints (i.e., line correspondence and line collinearity) to refine the stitching performance.

- We present a weight integration strategy to combine the global similarity constraint with models of global homography or multiple homographies. Using this strategy, the resultant warp achieves a smooth transition from a projective to a similarity transformation across the image, which significantly mitigates the projective distortions in non-overlapping regions.

- We propose a robust and effective two-stage stitching framework that combines the local multiple-homographies model and the mesh-based warping model with line and global similarity constraints. The proposed method addresses local variation well to ensure image alignment by local stitching and flexible refinement. The method also preserves image structures and multiple perspectives through strong geometrical and structural constraints. The proposed method achieves state-of-the-art performance.

The remainder of this paper is organized as follows. Section <ref> gives a brief review of the related works. Section <ref> describes the proposed method in detail. The experimental results and analyses are reported in Section <ref>. Finally, we draw some conclusions and provide remarks in Section <ref>.

§ RELATED WORKS

Numerous studies have been devoted to image stitching; a comprehensive survey can be found in <cit.>. The global homography model <cit.> works well for planar scenes or for scenes acquired with parallax-free camera motion, but violation of these assumptions may lead to ghosting artifacts. Recently, spatially-varying warping methods have been proposed that flexibly address parallax. Liu et al. <cit.> proposed the content-preserving warping (CPW) method, which was first used in video stabilization.
CPW adopts registration error and content smoothness to refine the pre-warping result obtained by global homography. A simple extension of the global homography method was presented in <cit.>, called dual-homography warping (DHW), which divides the entire scene into two planes: a distant plane and a ground plane. The final warping is obtained by a linear combination of these two homographies estimated from the point correspondences of each plane. However, this method has difficulties with complex scenes. Lin et al. <cit.> proposed the smoothly varying affine (SVA) warping method for image stitching. SVA can handle local deformations while preserving global affinity. However, because there are insufficient DoFs in the affine model, SVA cannot achieve projective warping. Zaragoza et al. <cit.> extended the previous method and proposed an as-projective-as-possible (APAP) warping method for image stitching. APAP achieves a smoothly varying projective stitching field estimated by a moving direct linear transformation (DLT) <cit.>. It maintains a global projection while allowing local non-projective deviations. Zhang et al. <cit.> proposed a parallax-tolerant image stitching method that seeks the optimal homography evaluated by the seam cost and uses CPW to refine the alignment. However, except for SVA, these methods are based on projective transformations, thus the stitched images often suffer from projective distortions. In addition, the resulting images may suffer from structural deformations because of the nonlinear local transformations in the model.

In recent years, similarity transformation, which is composed of translation, rotation and scaling, was introduced. Similarity transformation constructs a combined warping with projective transformations to constrain the projective distortions. Chang et al.
<cit.> proposed a shape-preserving half-projective (SPHP) warping for image stitching that adopts projective, transition and similarity transformations to achieve a gradual change from a projective to a similarity transformation across the image. SPHP can significantly reduce the distortions and preserve the image shape; however, it may introduce structural deformations, e.g., line distortions, when the scene is dominated by line structures. Lin et al. <cit.> proposed an adaptive as-natural-as-possible (AANAP) warping that linearizes the homography in the non-overlapping regions and combines these homographies with a global similarity transformation using a direct and simple distance-based weight strategy to mitigate perspective distortions. However, some distortions still exist locally when stitching images (Fig. <ref>(b)).

It is worth noting that spatially-varying warping-based image stitching is highly dependent on point correspondences. When there are insufficient reliable keypoints (such as in low-texture images), the accuracy of the estimated models degrades. More recently, Joo et al. <cit.> introduced line correspondences into the local warping model, but this approach requires a user to annotate the straight lines, and setting the parameters for this method is complex. Li et al. <cit.> proposed a dual-feature warping method for motion model estimation that combines line segments and points to estimate the global homography. However, this method still suffers from projective distortions.

§ THE PROPOSED APPROACH

This section introduces the proposed method for image stitching in detail. The main idea is to integrate line constraints and a global similarity constraint into a two-stage alignment framework. The outline of our method is illustrated in Fig. <ref>. The first-stage alignment (presented in Section <ref>) involves estimating an accurate warping model using line guidance.
Linear features are adopted as alignment constraints to jointly estimate both the global and local homographies with point correspondences; they provide rich and reliable correspondences even in low-texture images. To further improve the stitching performance, we adopt mesh optimization based on the extended content-preserving warping framework presented in Section <ref>. The line feature constraints (i.e., line correspondence and line collinearity) are then combined to further refine the alignment and preserve the image structures. Finally, to mitigate projective distortions, a global similarity transformation, estimated from a set of selected points in the approximate image projection plane, is employed to constrain the distortions caused by projective warping via a weighted integration strategy (Section <ref>). Based on the proposed warping model, we are able to achieve accurate and distortion-free image stitching.

§.§ Line-guided warping model

Point features are often adopted for image alignment. Given the target and reference images I, I^': ℝ×ℝ↦ℝ and a pair of matching points 𝐩=[x,y,1]^T and 𝐩^'=[x^',y^',1]^T with x,y∈ℝ, the global homography 𝐇∈ℝ^3 × 3, 𝐩^' = 𝐇𝐩, can be estimated by minimizing the algebraic distance ∑_i ‖𝐩_i^'×𝐇𝐩_i‖^2 over a set of matching points, where i is the index of the matching points.

However, as stated previously, keypoints are rare in some low-texture scenarios, making it difficult to estimate an accurate global homography for image stitching. Hence, line features, which are salient in artificial scenes, are adopted as an additional alignment constraint to guide the global homography estimation.

Let 𝐥=[ a,b,c ]^T and 𝐥^' = [ a^',b^',c^' ]^T, with a,b,c∈ℝ, be a pair of matching lines in the target and reference images, respectively, and let 𝐩^0,1=[x^0,1,y^0,1,1]^T denote the two endpoints of line 𝐥. They satisfy 𝐥^'^T𝐇𝐩^0,1=0, which means that the endpoints of 𝐥 transformed by 𝐇 should lie on the corresponding line 𝐥^'.
Therefore, 𝐇 can be estimated by minimizing the algebraic distance ∑_j ‖𝐥_j^'^T𝐇𝐩_j^0,1‖^2 over a set of matching lines, where j is the index of the matching lines. The homography is then estimated jointly from the point and line correspondences:

ĥ= arg min_𝐡( ∑_i ‖𝐩_i^'×𝐇𝐩_i‖^2+ ∑_j ‖𝐥_j^'^T𝐇𝐩_j^0,1‖^2) = arg min_𝐡( ∑_i ‖𝐀_i𝐡‖^2+ ∑_j ‖𝐁_j𝐡‖^2), s.t. ‖𝐡‖ = 1,

where 𝐡=[h_1,h_2,h_3,h_4,h_5,h_6,h_7,h_8,h_9]^T is the column vector representation of 𝐇, and 𝐀_i, 𝐁_j ∈ℝ^2 × 9 are the coefficient matrices computed from the i–th matching point and the j–th matching line, respectively. Stacking all the coefficient matrices of points (𝐀_i) and lines (𝐁_j) vertically into a unified matrix 𝐂 = [𝐀;𝐁], Eq. (<ref>) can be rewritten as

ĥ = arg min_𝐡 ‖𝐂𝐡‖^2, s.t. ‖𝐡‖ = 1.

The global homography 𝐇 is given by the right singular vector of 𝐂 associated with its smallest singular value. Note that before estimation, all the entries of the stacked matrices [𝐀_i;𝐁_j] should be normalized for numerical stability. In this study, we adopt the point-centric normalization approach proposed in <cit.>.

Local homography can handle parallax better than global homography due to its higher DoFs <cit.>. Therefore, we extend the line-guided global homography to local homographies. The input images are first divided into uniform grid meshes. The local homography 𝐡_k of the k–th mesh, centered at 𝐩_* = [x_*,y_*], is estimated by

𝐡_k = arg min_𝐡 ‖𝐖_k 𝐂𝐡‖^2, s.t. ‖𝐡‖ = 1,

where 𝐖_k=diag( [ 𝐰^p,𝐰^l] ), and 𝐰^p∈ℝ^2N and 𝐰^l∈ℝ^2M denote the weight factors for the point and line correspondences, respectively. Specifically, 𝐰^p=[w^p_1 w^p_1 ... w^p_N w^p_N] and 𝐰^l=[w^l_1 w^l_1 ... w^l_M w^l_M].
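For concreteness, the joint point-and-line DLT above can be sketched in a few lines of numpy. This is an illustrative sketch rather than the authors' implementation: the function names are our own, and the point-centric normalization step is omitted. The key observation is that the line constraint 𝐥^'^T𝐇𝐩 = 0 is linear in the flattened 𝐡 with coefficient row kron(𝐥^', 𝐩).

```python
import numpy as np

def point_rows(p, pp):
    """Two DLT rows (A_i) from a point match p -> pp (homogeneous 3-vectors)."""
    x, y, _ = p
    xp, yp, _ = pp
    return np.array([[0, 0, 0, -x, -y, -1, yp * x, yp * y, yp],
                     [x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp]], float)

def line_rows(p0, p1, lp):
    """Two rows (B_j) enforcing l'^T H p^k = 0 for both endpoints of a line."""
    # l'^T H p = kron(l', p) . h, with h the row-major flattening of H
    return np.vstack([np.kron(lp, p0), np.kron(lp, p1)])

def estimate_H(point_pairs, line_matches):
    """Joint point/line DLT: h is the right singular vector of C = [A; B]
    associated with the smallest singular value."""
    C = np.vstack([point_rows(p, pp) for p, pp in point_pairs]
                  + [line_rows(p0, p1, lp) for p0, p1, lp in line_matches])
    return np.linalg.svd(C)[2][-1].reshape(3, 3)
```

The weighted local variant simply multiplies the stacked rows by the per-mesh weight matrix 𝐖_k before the SVD.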
Therefore, the solution is the right singular vector of 𝐖_k𝐂 associated with its smallest singular value.

The point weight factor 𝐰^p is calculated by a Gaussian weighting of the Euclidean distance:

𝐰^p_i= max( exp( -‖𝐩_*-𝐩_i‖^2/σ^2),η),

where 𝐩_i is the i–th keypoint, σ is the scale parameter, and η∈ [0,1] is used to avoid the numerical issues caused by small weights when the mesh center 𝐩_* is far away from keypoint 𝐩_i, as shown in Fig. <ref>(a).

The line weight factor 𝐰^l is calculated as follows:

𝐰^l_j= max( exp( -d_l( 𝐩_*,𝐥_j)^2/σ^2),η),

where d_l(𝐩_*,𝐥_j) is the shortest distance between the mesh center 𝐩_* and line 𝐥_j=[a_j,b_j,c_j]^T with endpoints 𝐩_j^0 and 𝐩_j^1, calculated as

d_l(𝐩_*,𝐥_j)= { min( ‖𝐩_*-𝐩_j^0‖, ‖𝐩_*-𝐩_j^1‖ )   (a);   | a_jx_*+b_jy_*+c_j |/√(a_j^2+b_j^2)   (b) }.

As shown in Fig. <ref>(b), when 𝐩_* is in the R_1 or R_2 region, d_l is calculated by case (a), and when 𝐩_* is in the R_3 region, d_l is calculated by case (b). From Eqs. (<ref>) and (<ref>), the weight is greater when the keypoint or line is closer to the mesh center 𝐩_*, which makes the local homography fit the local structure around 𝐩_* better.

§.§ Alignment refinement with line constraints

This section describes the adoption of mesh optimization as the second step of the two-stage alignment scheme to further improve the performance of image stitching. Content-preserving warping is a mesh-based warping method that was first used for video stabilization in <cit.> and later successfully applied to image stitching <cit.>. It is well suited for small local adjustments. In our work, the line feature constraints (i.e., the line correspondence constraint and the line collinearity constraint) are integrated into the content-preserving warping framework to both maintain the image structures and refine the alignment.

The target image I is first divided into a regular grid mesh. In our case, the grid mesh is used to guide the image warping.
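The Gaussian weights and the two-case distance d_l of the previous subsection can be implemented directly. The sketch below uses our own function names and the parameter values quoted later in the paper (σ = 8.5, η = 0.01); the case split follows the endpoint-versus-perpendicular rule of the d_l equation above.

```python
import numpy as np

def point_weight(p_star, p_i, sigma=8.5, eta=0.01):
    """Gaussian point weight w^p_i, floored at eta for numerical stability."""
    return max(np.exp(-np.sum((p_star - p_i) ** 2) / sigma ** 2), eta)

def dist_point_segment(p_star, p0, p1):
    """Two-case distance d_l: nearest endpoint in regions R_1/R_2 (case a),
    perpendicular distance to the supporting line in region R_3 (case b)."""
    d = p1 - p0
    t = np.dot(p_star - p0, d) / np.dot(d, d)
    if t <= 0.0 or t >= 1.0:  # case (a): projection falls outside the segment
        return min(np.linalg.norm(p_star - p0), np.linalg.norm(p_star - p1))
    return np.linalg.norm(p_star - (p0 + t * d))  # case (b)

def line_weight(p_star, p0, p1, sigma=8.5, eta=0.01):
    """Gaussian line weight w^l_j based on d_l."""
    return max(np.exp(-dist_point_segment(p_star, p0, p1) ** 2 / sigma ** 2), eta)
```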
Suppose 𝐕 denotes the vertices of the grid mesh in the pre-warping image transformed by the line-guided warping model. Alignment refinement seeks a group of deformed vertices 𝐕̂ via energy optimization. An arbitrary point 𝐩 in the pre-warping image can be represented by a linear combination of the four mesh vertices 𝐕 = [ 𝐕_1,𝐕_2,𝐕_3,𝐕_4]^T of its enclosing quad: 𝐩 = 𝐰^T𝐕, where the weights 𝐰 = [ w_1,w_2,w_3,w_4]^T are calculated by inverse bilinear interpolation <cit.> and sum to 1. Therefore, the image warping problem can be formulated as a mesh warping problem. It is an optimization problem whose objective is to accurately align the pre-warping image to the reference image while avoiding obvious distortions. The energy terms used in this paper are detailed below.

§.§.§ Content-preserving warping

Content-preserving warping <cit.> includes three energy terms: a point alignment term, a global alignment term and a smoothness term. The point alignment term E_p is used to align the feature points in the pre-warping image to the corresponding points in the reference image as closely as possible. It is defined as follows:

E_p = ∑_i ‖𝐰_i^T𝐕̂_i- 𝐩_i^'‖^2,

where 𝐩_i^' is the matching point in the reference image. This term ensures the alignment of the overlapping region.

The global alignment term E_g constrains the image regions without feature correspondences to be as consistent as possible with the pre-warping result:

E_g = ∑_i ‖𝐕̂_i - 𝐕_i‖^2,

where 𝐕_i is the corresponding vertex in the pre-warping result.

The smoothness term E_s encourages each grid in the pre-warping result to undergo a similarity transformation during warping so as to avoid shape distortions as much as possible.
Precisely, given a triangle ▵𝐕_1 𝐕_2 𝐕_3 in the pre-warping result, the vertex 𝐕_1 can be represented by 𝐕_2 and 𝐕_3 as shown below:

𝐕_1 = 𝐕_2 + μ (𝐕_3-𝐕_2) + ν𝐑 (𝐕_3-𝐕_2),   𝐑 = [ 0 1; -1 0 ],

where μ, ν are the coordinates of 𝐕_1 in the local coordinate system defined by the other two vertices. During warping, each triangle should undergo a similarity transformation so as to preserve the relative relationship of its three vertices and avoid local distortions. Denoting the deformed vertices by 𝐕̂, the smoothness term is

E_s(𝐕̂_1) = φ ‖𝐕̂_1 - ( 𝐕̂_2 + μ (𝐕̂_3-𝐕̂_2) + ν𝐑 (𝐕̂_3-𝐕̂_2))‖^2,

where φ is a weight that measures the salience of the triangle, as in <cit.>; it preserves the shapes of high-salience regions more strongly than those of low-salience regions. The full smoothness energy term is formed by summing Eq. (<ref>) over all the vertices.

§.§.§ Line correspondence term

The content-preserving warping terms only ensure point alignment in the overlapping regions; thus, the line correspondences are taken into consideration to further improve the alignment.

A line correspondence term is utilized to ensure that corresponding lines are well aligned. Let 𝐥_j, 𝐥_j^' be a pair of corresponding lines in the target and reference images, respectively. Line 𝐥_j is cut into several short segments by the edges of the meshes it traverses. The endpoints of the short segments of 𝐥_j are denoted by 𝐩_j,k, where k is the index of the endpoints, and 𝐩_j,k^' = 𝐰_j,k^T𝐕̂_j,k denotes the position of 𝐩_j,k after mesh deformation. The line correspondence term expresses the requirement that the distance from 𝐩_j,k^' to the corresponding line 𝐥_j^' should be minimal:

E_l = ∑_j,k ‖(𝐥_j^'^T 𝐰_j,k^T𝐕̂_j,k)/√(a_j^'^2 + b_j^'^2)‖^2.

The line correspondence term not only enhances the image alignment but also, together with the line collinearity term below, preserves the straightness of line structures.
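The similarity coordinates (μ, ν) and the per-triangle smoothness residual above admit a compact sketch. This is an illustrative implementation under our own naming; it assumes the triangle convention in which the first vertex is expressed in the local frame of the other two, and it is zero exactly when the triangle is deformed by a similarity transformation.

```python
import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])  # the 90-degree rotation of the Eq. above

def similarity_coords(V1, V2, V3):
    """(mu, nu) such that V1 = V2 + mu*(V3 - V2) + nu * R @ (V3 - V2)."""
    e = V3 - V2
    return np.linalg.solve(np.column_stack([e, R @ e]), V1 - V2)

def smoothness_residual(V1h, V2h, V3h, mu, nu, phi=1.0):
    """E_s for one triangle of deformed vertices; zero iff the triangle
    moved by a similarity transformation (rotation + uniform scale + shift)."""
    pred = V2h + mu * (V3h - V2h) + nu * (R @ (V3h - V2h))
    return phi * float(np.sum((V1h - pred) ** 2))
```

Because 2D rotations commute with 𝐑 and uniform scaling, any similarity transform applied to the three vertices leaves the residual at zero, while shears or non-uniform stretches are penalized.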
§.§.§ Line collinearity term

The above terms may not reduce distortions (e.g., line structure distortions) in the non-overlapping regions, where there are few point or line correspondences. To capitalize on the line features and preserve the line structure, we adopt a line collinearity constraint.

The line collinearity term is used to preserve the straightness of linear structures in the target image as much as possible. Let 𝐩_i,k denote the endpoints of line 𝐥_i in the non-overlapping regions and its intersection points with the grid, and let 𝐩_i,k^' denote the corresponding points of 𝐩_i,k in the pre-warping result. The line should maintain its straightness after warping; that is, the transformed points 𝐩_i,k^' should lie on the same line. This is expressed by requiring the distance from the points 𝐩_i,k^' to the line 𝐥̂_i to be minimal, where 𝐥̂_i is the line through the head and tail points of 𝐩_i,k^'. With the deformed vertices 𝐕̂, the term is defined as follows:

E_c = ∑_i,k ‖(𝐥̂_i^T 𝐰_i,k^T𝐕̂_i,k) / √(â_i^2 + b̂_i^2)‖^2.

Together, the line collinearity term and the line correspondence term maintain the line structures well.

§.§.§ Objective function

The above five energy terms are combined into an energy optimization problem with the objective function

E = αE_p + βE_g + γE_s + δE_l + ρE_c,

where α, β, γ, δ, ρ are the weight factors for the energy terms. In our implementation, α = 1, β = 0.001, γ = 0.01, δ = 1, and ρ = 0.001. The objective function is quadratic; consequently, it can be minimized by a sparse linear solver. The final result is obtained through texture mapping.

§.§ Distortion reduction by global similarity constraint

To reduce the projective distortions in the non-overlapping regions, a global similarity transformation is adopted to adjust the local warping model. Chang et al. <cit.> have shown that similarity transformation is effective in mitigating distortions.
If we can find a similarity transformation that approximately represents the camera motion of the image projection plane, that transformation can be applied to offset the camera motion <cit.>. RANSAC <cit.> is used to iteratively segment the matching points. Each group of point correspondences can be used to estimate a similarity transformation, and the estimate with the smallest rotation angle is selected as the optimal candidate <cit.>. As shown in Fig. <ref>, the group of points in green is chosen to estimate the global similarity transformation. The plane composed of the green points approximates the image projection plane because the camera is nearly perpendicular to the ground when shooting.

§.§.§ Similarity constraint

An image patch can be transformed by a projective transformation (e.g., a homography), which provides good alignment but may cause distortions, such as stretching. It can also be warped by a similarity transformation, which introduces no distortions but may result in poor alignment due to its limited DoFs. Integrating the two types of transformations using weights can therefore both ensure good alignment and reduce distortions. The similarity constraint procedure is described in Algorithm <ref>. The global similarity transformation is combined with the global or local homographies using weight factors. To create a smooth transition, the whole image should be considered. The weighted integration is calculated as follows:

𝐇_i^' = τ𝐇_i + ξ𝐒,

where 𝐇_i is the homography of the i–th grid mesh, and 𝐇_i^' is the final homography of the i–th grid mesh. Here, 𝐒 is the similarity transformation, and τ and ξ are weight coefficients with τ + ξ = 1. The calculation of these two weights will be described later. In a global homography model, the homography of every grid mesh is the same.

A corresponding warping procedure should also be applied to the reference image because the similarity transformation also adjusts the overlapping regions.
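The weighted integration itself is a simple per-mesh matrix blend. A minimal sketch (the matrices H and S below are hypothetical values chosen only for illustration):

```python
import numpy as np

def blend_homography(H_i, S, xi):
    """Weighted integration H'_i = tau * H_i + xi * S, with tau = 1 - xi."""
    return (1.0 - xi) * H_i + xi * S

# hypothetical per-mesh homography and global similarity transformation
H = np.array([[1.10, 0.02, 5.0],
              [0.01, 1.05, -3.0],
              [1e-4, 2e-4, 1.0]])
S = np.array([[0.98, -0.20, 7.0],
              [0.20, 0.98, -1.0],
              [0.00, 0.00, 1.0]])
```

At ξ = 0 the mesh keeps its projective warp, at ξ = 1 it follows the global similarity exactly, and intermediate values interpolate linearly between the two.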
The warping procedure for the reference image can be formulated as follows:

𝐓_i^' = 𝐇_i^'𝐇_i^-1,

where 𝐓_i^' is the warping applied to the reference image in the i–th grid mesh.

As shown in Fig. <ref>, when a point is far from the overlapping regions (especially in the distorted non-overlapping regions), the procedure assigns a high weight to the similarity transformation to mitigate the distortions as much as possible. In contrast, for points near the overlapping regions, it assigns a high weight to the homography to ensure accurate alignment. Using this weight combination, the final warp changes smoothly from a projective to a similarity transformation across the image, which preserves the image shape and maintains the multiple perspectives.

§.§.§ Weighting strategy

The weight coefficient calculation stems from an analysis of the projective transformation. Following <cit.>, let 𝐑 be a rotation that transforms the image coordinates (x,y) to new coordinates (u,v). Based on 𝐩^' = 𝐇𝐩, a new projective transformation 𝐐 that maps (u,v) to (x^',y^') satisfies 𝐩^' = 𝐐 [u, v, 1]^T = 𝐇𝐑 [u, v, 1]^T, where 𝐇 = [h_1, h_2, h_3; h_4, h_5, h_6; h_7, h_8, 1] and 𝐐 = [q_1, q_2, q_3; q_4, q_5, q_6; q_7, q_8, 1].

Choosing the rotation angle θ= arctan( h_8/h_7), we obtain q_8 = -h_7sinθ + h_8cosθ = 0. Then, 𝐐 can be decomposed as follows:

[ q_1 q_2 q_3; q_4 q_5 q_6; -c 0 1 ] = [ q_1 + cq_3 q_2 q_3; q_4 + cq_6 q_5 q_6; 0 0 1 ]_𝐐_a [ 1 0 0; 0 1 0; -c 0 1 ]_𝐐_p,

where c = √(h_7^2 + h_8^2). Here, 𝐐_a is an affine transformation and 𝐐_p is a projective transformation. Defining the local scale change <cit.> at a point (u,v) under the projective transformation as the determinant of the Jacobian of 𝐐 at (u,v), the local scale change is

𝐉( u,v) = 𝐉_a(u,v) ·𝐉_p( u,v) = λ_a·1/( 1 - cu)^3,

where λ_a is independent of u and v. It can be seen that the local area change induced by 𝐐 depends only on the u direction.
In other words, the distortions of the projective transformation occur only along the u–axis. Therefore, the distortions can be effectively eliminated if the weight coefficients are calculated along the u direction in the (u,v) coordinate system. The weight coefficients are designed based on the distance of grid points in the u direction; the goal is to provide a gradual change from a projective to a similarity transformation across the image so as to preserve the image content in the non-overlapping regions. As shown in Fig. <ref>, the center of the reference image is used as the origin o of the coordinate system, and the unit vector along the u–axis is denoted by 𝐨𝐮 =( 1,0 ). For an arbitrary mesh center 𝐩, d is the projected length of the vector 𝐨𝐩 on 𝐨𝐮. Let 𝐩_max and 𝐩_min denote the projected points with the maximum and minimum values of d, respectively. For the i–th grid, the weight coefficients are calculated as follows:

ξ= ⟨𝐩_min𝐩_i·𝐩_min𝐩_max⟩/| 𝐩_min𝐩_max|,

where ⟨𝐩_min𝐩_i·𝐩_min𝐩_max⟩ denotes the projection length of 𝐩_min𝐩_i on 𝐩_min𝐩_max, and τ= 1 - ξ.

As shown in Fig. <ref>, APAP adopts local homographies for alignment, aiming to be globally projective while allowing local deviations. However, the stitched image suffers from projective distortions; for instance, the buildings are undesirably stretched and not parallel to the temples. In addition, the perspective distortions in the non-overlapping regions are obvious. In contrast, using the global similarity constraint, the proposed warping model preserves the shapes of objects and maintains the perspective of each image.

§ EXPERIMENTAL RESULTS AND ANALYSIS

This section describes several experiments conducted to assess the performance of the proposed method on a series of challenging images.
In our experiments, the test images were acquired casually, using different shooting positions and angles. Given a pair of input images, the keypoints are detected and matched by SIFT <cit.> in the VLFeat library <cit.>. The line features are detected by the line segment detector (LSD) <cit.> and matched by line-point invariants <cit.> or line-junction-line <cit.>. Then, RANSAC is used to remove the mismatches, and the remaining inliers are input to the stitching algorithms. We compared our approach with several other methods. The parameters of the other methods were set as suggested in the respective papers, and we used the source code provided by the authors to obtain the comparison results. For our method, σ is 8.5 and η is 0.01. The experiments were conducted on a PC with an Intel i3-2120 3.3 GHz CPU and 8 GB of RAM. Excluding feature detection and matching, the proposed method takes 20–30 s to stitch two images with a resolution of 800×600.

To better compare the methods and reduce interference, we avoided post-processing methods such as blending or seam cutting as detailed in <cit.>. Instead, the aligned images are simply blended by intensity averaging so that any misalignments remain obvious.

To quantitatively assess the alignment accuracy of image stitching, the metrics of correlation (Cor) <cit.> and mean geometric error (Err_mg) <cit.> are adopted. Cor is defined as one minus the normalized cross correlation (NCC) over the neighborhood of a 3 × 3 window, that is,

Cor( I,I^') = √(1/N∑_π(1 - NCC( 𝐩,𝐩^') )^2 ),

where N is the number of pixels in the overlapping region π, and 𝐩 and 𝐩^' are the pixels in images I and I^', respectively. Cor reflects the similarity of the two images in the overlapping regions.
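A direct (unoptimized) reading of the Cor metric is sketched below; it assumes, for illustration, that the whole image is the overlapping region π and computes the NCC on the 3 × 3 window around each interior pixel.

```python
import numpy as np

def ncc(a, b, eps=1e-12):
    """Normalized cross correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + eps))

def cor(I, Ip):
    """Cor(I, I'): RMS of (1 - NCC) over 3x3 windows of the overlap."""
    h, w = I.shape
    vals = [(1.0 - ncc(I[y-1:y+2, x-1:x+2], Ip[y-1:y+2, x-1:x+2])) ** 2
            for y in range(1, h - 1) for x in range(1, w - 1)]
    return float(np.sqrt(np.mean(vals)))
```

Identical overlaps give Cor ≈ 0, and because the NCC is invariant to a constant intensity offset, a uniformly brightened copy scores the same.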
The smaller the Cor value is, the better the stitching result is. Err_mg is defined as the mean geometric error on points and lines, that is,

Err_mg^(p)=1/M∑_i=1^M ‖f(𝐩_i) - 𝐩_i^'‖,
Err_mg^(l)= 1/2K∑_j=1^K∑_i=0^1 d_l(f(𝐩_l_j^i),𝐥_j^'),
Err_mg=(Err_mg^(p)· M+Err_mg^(l)· 2K)/(M+2K),

where f: ℝ^2 ↦ℝ^2 is the estimated warping, M is the number of point correspondences, 𝐩_i and 𝐩_i^' are a pair of corresponding points, K is the number of line correspondences, and d_l denotes the distance from the warped endpoints of 𝐥_j to its corresponding line 𝐥_j^'. A smaller Err_mg value indicates a better stitching result.

In the following subsections, we first verify the performance of the proposed method on image alignment and distortion reduction. Then, we report the experimental comparison results, including comparisons with the global-based methods and the local-based methods.

§.§ Image alignment

Fig. <ref> illustrates the performance of each constraint in the proposed method, including the line-guided local warping estimation, the line correspondence constraint, and the line collinearity constraint. Fig. <ref>(b) shows the result of line-guided warping combined with APAP (LAPAP), which largely improves the alignment compared to APAP, as can be clearly seen in the closeup. However, LAPAP introduces structural distortions, e.g., the bent lines on the buildings, shown by the red circle in the blue closeup. With CPW optimization, LAPAP+CPW refines the alignment (Fig. <ref>(c)), but some slight misalignments still exist. Combined with the line correspondence (LineCorr) constraint, LAPAP+CPW+LineCorr provides good alignment (Fig. <ref>(d)). However, structural distortions, e.g., line deformations, are not handled well, as can be clearly seen in the blue closeup. By adding the line collinearity constraint to restrain the structural deformation, the proposed method provides a good stitching result with less distortion in this example (Fig. <ref>(e)).
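For reference, the Err_mg metric defined above reduces to a single mean over all M point residuals and 2K endpoint-to-line distances, which the following sketch makes explicit (function names are our own; the homography-based warp f is an illustrative choice):

```python
import numpy as np

def warp(H, p):
    """Apply a homography H to an inhomogeneous 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def point_line_dist(p, l):
    """Perpendicular distance from point p to line l = (a, b, c)."""
    a, b, c = l
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

def err_mg(H, point_pairs, line_pairs):
    """Mean geometric error: (M*Err^p + 2K*Err^l) / (M + 2K) equals the plain
    mean over all M + 2K individual residuals."""
    ep = [np.linalg.norm(warp(H, p) - pp) for p, pp in point_pairs]
    el = [point_line_dist(warp(H, e), lp)
          for ends, lp in line_pairs for e in ends]
    return (sum(ep) + sum(el)) / (len(ep) + len(el))
```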
Quantitative evaluations of Cor and Err_mg are shown in Table <ref>; the conclusions are consistent with the visual comparison.

Fig. <ref> shows a comparison of the original point-based CPW model <cit.> and the proposed CPW model on the Rooftops[The Rooftops images were acquired from the open dataset of <cit.>.] images. Some errors and distortions are highlighted by red circles. The stitching process is based on the proposed two-stage alignment. Fig. <ref>(a) shows the results of the original CPW model, in which misalignments are obvious, especially on the rooftops (red circle); additionally, the roadside trees are stretched. Under the constraints of line features, the result in Fig. <ref>(b) has better alignment and is more accurate. As can be seen, line features provide a better geometric description than point features alone and act as strong constraints for image stitching. Fig. <ref>(c) shows the final stitching result. Due to the global similarity constraint, the distortions around the roadside trees are largely mitigated, and the proposed method achieves a satisfactory stitching result. Table <ref> shows the quantitative comparison. The improved CPW model largely reduces the alignment errors (mainly the line error and the total error), and by using the similarity constraint, the proposed method obtains a lower Cor.

Next, we compared the proposed method with other flexible warping methods to evaluate the alignment performance, namely, global homography (Baseline) <cit.>, CPW (using global warping for the initial alignment) <cit.>, and APAP <cit.>. For completeness, the proposed method is also compared with the Image Composite Editor (ICE) <cit.> (a popular commercial tool for image stitching) by inputting two images at once. For ICE, we used the final post-processed results for the comparison because the original alignment results are not obtainable in the standard version of ICE.
In addition, no quantitative comparison with ICE is provided. Fig. <ref> shows the Desk image pair and the detected features. In most of the low-textured areas, keypoints are difficult to extract, resulting in insufficient matching points for the estimation of the warping model. However, line features can be used as an effective complement for alignment purposes. The comparison results are shown in Fig. <ref>. Because the images violate the assumptions underlying a single global homography, the Baseline warp is unable to align them properly and produces obvious misalignments (see the red boxes in Fig. <ref>(a)). ICE, CPW, and APAP provide relatively better stitching results, but a non-negligible number of ghosting artifacts remain. In Fig. <ref>(b), although ICE uses blending and pixel selection to conceal the misalignments, the post-processing is clearly not completely successful; for instance, there are obvious misalignments on the vertical edge of the desk. Due to an insufficient number of corresponding keypoints along the vertical edge of the desk, CPW and APAP cannot provide an accurate warping model for image alignment; consequently, ghosting occurs in these regions (see the red boxes in Fig. <ref>(c) and (d)). With the help of line correspondences and the two-stage robust alignment scheme, our method produces satisfactory stitching results with accurate alignment and few ghosting artifacts (Fig. <ref>(e)). Note that our method also reduces the need for post-processing.

Table <ref> lists the Cor and Err_mg values of the compared methods on the Desk image pair. CPW's stronger constraint on point correspondences results in a smaller point alignment error Err_mg^(p); however, the line alignment error Err_mg^(l) and the overall error Err_mg remain large.
The proposed method reduces the geometric error and achieves better accuracy than the other tested methods.

§.§ Distortion reduction

To investigate the distortion reduction performance, SPHP <cit.> and AANAP <cit.> were compared with the proposed method on the Railtracks and Temple Square image pairs[The Railtracks and Temple Square images were acquired from the open dataset of <cit.>.].

Fig. <ref> shows the stitching results of four methods: APAP <cit.>, SPHP <cit.>, SPHP with an assumption of no rotation (SPHP1) <cit.>, and our method. Due to its simple extrapolation of the projective transformation to the non-overlapping regions, the APAP result (Fig. <ref>(a)) shows projective distortions in the non-overlapping regions: in the blue box in the closeup, the car is enlarged, and the palm tree is obviously slanted. By introducing the similarity transformation, SPHP can largely mitigate these projective distortions. In Fig. <ref>(b), SPHP preserves the shape of the car, but it suffers from an unnatural rotation; in addition, the construction site (in the red box) is tilted to the left. In contrast, SPHP1 preserves the shape and reduces the perspective distortion, but the construction site is now tilted slightly to the right (Fig. <ref>(c)). Using the global similarity constraint, the proposed method largely eliminates all these distortions, providing a pleasing stitching result, as clearly shown in Fig. <ref>(d).

Fig. <ref> shows a comparison of the proposed method with AANAP <cit.> on distortion reduction. Fig. <ref>(a) shows that APAP achieves good alignment, but it suffers from shape and perspective distortions, for example, in the stretched and tilted buildings at the right of the image. By linearizing the homography and using the similarity transformation, AANAP provides an attractive result in which the projective distortions have been largely mitigated (Fig. <ref>(b)).
However, as shown in the red circle of the enlarged view, the lines on the ground are slightly deformed. Our method yields more appealing stitching results in this example (Fig. <ref>(c)).

§.§ Comparisons with global-based methods

In this section, the proposed method is compared with three global-based methods: global homography (Baseline) <cit.>, ICE <cit.>, and SPHP <cit.>. For our method (here called the global version), global homography is adopted during the first alignment stage and jointly estimated from point and line correspondences to pre-warp the source images.

Fig. <ref> shows the two pairs of original images for stitching: Ceiling and Temple[The Temple images were acquired from the open dataset of <cit.>.]. The low-textured content of Ceiling results in the detection of only a limited number of unevenly distributed keypoints, which may degrade the estimation of the warping model. However, line correspondences are abundant and can improve the image alignment. Temple provides rich point correspondences, but the scene contains multiple distinct planes, which is a challenge for the global-based methods.

Figs. <ref> and <ref> show the results of the global-based methods on the Ceiling and Temple image pairs. As shown, due to the model deficiencies, the Baseline warp cannot provide satisfactory stitching results; there are numerous misalignments and projective distortions. The ICE and SPHP methods improve the stitching performance, especially with respect to the reduction of projective distortions. For instance, the door in Fig. <ref> and the people in Fig. <ref> have few distortions, but the bricks of the ceiling in the non-overlapping area of the ICE result (Fig. <ref>(b)) are slightly stretched. In addition, alignment errors in these two pairs of images (the red circles in Figs. <ref> and <ref>) remain obvious.
In contrast, the proposed method is more flexible and robust in handling the alignment, not only because of the line-guided warping estimation but also because of the alignment constraints in the mesh-based framework. With the similarity constraint, our method provides good stitching results with minimal distortions.

Tables <ref> and <ref> contain quantitative comparisons on Ceiling and Temple, showing that our method produces the fewest errors. On Ceiling, our method performs best because the line features play an important role in scenes without reliable keypoint correspondences. On Temple, which has rich and reliable keypoints, the role of the line features is reduced, but they still help to improve the alignment accuracy.

§.§ Comparisons with local-based methods

The global version works well in preserving the content and perspective, but it is somewhat less robust when aligning images taken from substantially different views. With high-DoF, flexible local homographies, the variant of our method that uses local homography in the pre-warping stage (called the local version) can handle the parallax issue. Therefore, in this section, we compare it with three other local-based methods: CPW <cit.>, APAP <cit.>, and SPHP+APAP <cit.>. Fig. <ref> shows the original Church, Block, and Wall images used for the comparison experiments[The Church and Block images were acquired from the open dataset of <cit.>.]. Some of the images have little texture, which limits the extracted features; moreover, the corresponding views vary greatly.

The stitching results on these three pairs of images are provided in Fig. <ref>. In terms of alignment accuracy, CPW and APAP allow higher DoFs than global homography does, but they also produce misalignments in regions that lack point correspondences (the areas partially highlighted in red boxes). In addition, CPW and APAP may cause local structure deformations in structural regions that lack keypoints.
The red closeups clearly show that straight lines are bent (e.g., the stair railing in Church, the building edge in Block, and the wall edge in Wall). Using the similarity transformation, SPHP+APAP reduces the projective distortions and preserves the shape and perspective, mitigating the building distortion in the non-overlapping regions in both Church and Block. In comparison, our method not only provides accurate alignment, which benefits from the two-stage alignment scheme, but also preserves image structures and perspectives due to the line and similarity constraints.

Table <ref> shows the quantitative results of the compared methods. Our method consistently achieves better accuracy than CPW, APAP and SPHP+APAP, except for Err_mg^(p) in the Church result. CPW adopts feature alignment as a strong constraint; therefore, it provides a good quantitative result for Err_mg^(p). However, its results are unsatisfactory on the other criteria for the Church image. Overall, our method achieves the best quantitative results.

§.§ Stitching of multiple images

Figs. <ref> and <ref> show the stitching results for multiple images on the Apartments and Garden data, respectively[These images were acquired from the open dataset of <cit.>.]. Some distinct errors are highlighted in boxes. As can be seen, Autostitch and ICE produce some obvious misalignments because they use only global homography for alignment, which is unsuitable for images whose views differ by factors other than pure rotation. In contrast, our method largely improves the stitching performance because of the flexible line-guided local homographies and the mesh optimization. Thus, the proposed method produces satisfactory stitching results with few misalignments and distortions.

§ CONCLUSION

This paper proposed a line-guided local warping method for image stitching with an imposed similarity constraint.
Our method integrates multiple constraints, including line features and a global similarity constraint, into a two-stage image stitching framework that achieves accurate alignment and mitigates distortions. The line features are employed as an effective supplement to point features for alignment. The line feature constraints (line matching and line collinearity) are then integrated into the mesh-based warping framework, which further improves the alignment while preserving the image structures. Additionally, the global similarity transformation is combined with the projective warping to maintain the image content and perspective. As shown by the experimental results, the proposed method achieves good image stitching with the fewest alignment errors and distortions compared to the other tested methods. The proposed method depends on line detection and matching; thus, incomplete or broken line segments may influence its structure-preserving performance. In future work, we would like to explore other complex structure constraints, such as contours <cit.>, to improve the image stitching performance, and to explore the possibility of applying our warping model to other applications, such as video stabilization <cit.>.
Runge–Kutta convolution coercivity and its use for time-dependent boundary integral equations
=============================================================================================

Lehel Banjai, The Maxwell Institute for Mathematics in the Sciences; School of Mathematical & Computer Sciences, Heriot-Watt University, EH14 4AS Edinburgh, UK. L.Banjai@hw.ac.uk

Christian Lubich, Mathematisches Institut, Universität Tübingen, Auf der Morgenstelle, D-72076 Tübingen, Germany. Lubich@na.uni-tuebingen.de

A coercivity property of temporal convolution operators is an essential tool in the analysis of time-dependent boundary integral equations and their space and time discretisations. It is known that this coercivity property is inherited by convolution quadrature time discretisation based on A-stable multistep methods, which are of order at most two. Here we study the question as to which Runge–Kutta-based convolution quadrature methods inherit the convolution coercivity property. It is shown that this holds without any restriction for the third-order Radau IIA method, and on permitting a shift in the Laplace domain variable, this holds for all algebraically stable Runge–Kutta methods and hence for methods of arbitrary order. As an illustration, the discrete convolution coercivity is used to analyse the stability and convergence properties of the time discretisation of a non-linear boundary integral equation that originates from a non-linear scattering problem for the linear wave equation. Numerical experiments illustrate the error behaviour of the Runge–Kutta convolution quadrature time discretisation.
65L05 65R20

§ INTRODUCTION

This paper is concerned with a discrete coercivity property that ensures the stability of time discretisations of boundary integral equations for wave equations, also in situations such as
- non-linear boundary integral equations;
- boundary integral equations coupled with a wave equation in an interior domain, with an explicit time discretisation in the domain.

For convolution quadrature based on A-stable multistep methods (which have approximation order at most two), it is known from <cit.> that the coercivity property is preserved under time discretisation, uniformly in the temporal stepsize. Here we study the preservation of convolution coercivity under time discretisation by Runge–Kutta convolution quadrature. Up to a shift in the Laplace variable and a corresponding reformulation of the boundary integral equation for an exponentially scaled solution function, we show that the convolution coercivity property is preserved by all convolution quadratures based on algebraically stable Runge–Kutta methods, which include in particular Radau IIA methods of arbitrary order. Without any such shift and exponential scaling, the convolution coercivity is shown to be preserved by the two-stage Radau IIA method of order three.

We illustrate the use of the discrete convolution coercivity by the stability and convergence analysis of the Runge–Kutta convolution quadrature time discretisation of a non-linear boundary integral equation for a non-linear scattering problem for the acoustic wave equation. This problem has been studied with different numerical methods in <cit.>.

The discrete convolution coercivity is not needed for the corresponding linear scattering problem, because there the convolution quadrature time discretisation of the linear boundary integral equation can be interpreted as a convolution quadrature discretisation of the convolution operator that maps the data to the solution.
Therefore known bounds of the Laplace transform of the solution operator and known error bounds of convolution quadrature yield stability and error bounds <cit.>. The same argument can also be used for the coupling of a linear wave equation in an interior domain with the boundary integral equation that describes transparent boundary conditions, provided that the convolution quadrature for the boundary integral equation is based on the same (implicit) time discretisation method as for the wave equation in the interior domain. This precludes explicit time-stepping in the interior. For the coupling of convolution quadrature on the boundary with an explicit time discretisation in the interior, the discrete convolution coercivity as considered in the present paper is needed; see <cit.> for the coupling of implicit BDF2 convolution quadrature on the boundary with explicit leapfrog time-stepping in the domain for acoustic, elastic and electro-magnetic wave equations, respectively.

The paper is organised as follows: In Section 2 we recall the continuous-time convolution coercivity, which is related to a coercivity property of the Laplace transform of the (distributional) convolution kernel that holds uniformly for all values of the Laplace-domain frequency variable in a (possibly shifted) right half-plane. In Section 3 we study the preservation of the convolution coercivity under time discretisation by Runge–Kutta convolution quadrature. This preservation depends on the numerical range of the Runge–Kutta differentiation symbol, which is shown to lie in the right half-plane for algebraically stable Runge–Kutta methods.
With a matrix-function inequality that is obtained as an extension of a theorem of von Neumann, we then prove our main result, Theorem <ref>, which yields the discrete convolution coercivity. Section 4 recapitulates error bounds of Runge–Kutta convolution quadrature shown in <cit.>. In Section 5 we apply our results to the time discretisation of the wave equation with a non-linear impedance boundary condition. We study only semi-discretisation in time, but note that this could be extended to full discretisation with the techniques of <cit.>. The error behaviour is illustrated by numerical experiments in Section 6. In the numerical experiments it is observed that the convolution quadrature based on the three-stage Radau IIA method performs well even without the shift and exponential scaling, which is more favourable than our theoretical results.

§ COERCIVITY OF TEMPORAL CONVOLUTIONS

The following coercivity result is given in <cit.>, where it is used as a basic result in studying boundary integral operators for the acoustic wave equation; see also <cit.> for Maxwell's equation and <cit.> for elastic wave equations. The result can be viewed as a time-continuous operator-valued extension of a theorem of Herglotz from 1911, which states that an analytic function has positive real part on the unit disk if and only if convolution with its coefficient sequence is a positive semi-definite operation.

Let V be a complex Hilbert space and V' its dual, and let ⟨·,·⟩ denote the anti-duality between V and V'. Let L(s): V → V' and R(s): V → V be analytic families of bounded linear operators for Re s > σ, continuous for Re s ≥ σ. We assume the uniform bounds, with some real exponent μ,

‖L(s)‖_V' ← V ≤ M(1+|s|)^μ, ‖R(s)‖_V ← V ≤ M(1+|s|)^μ, Re s > σ.

This polynomial bound guarantees that L is the Laplace transform of a distribution ℓ.
If we write L(s) = s^k L_k(s) with an integer k > μ+1, then the Laplace inversion formula

ℓ_k(t) = 1/(2πi) ∫_{σ'+iℝ} e^{st} L_k(s) ds, t ≥ 0 (σ' > σ)

defines a continuous and exponentially bounded function ℓ_k, which has ℓ as its kth distributional derivative. We write the convolution with ℓ as

u(t) = L(∂_t)f(t) = (ℓ*f)(t) = (d/dt)^k ∫_0^t ℓ_k(t-τ)f(τ) dτ, t > 0,

for functions f on [0,T] whose extension to t<0 by 0 is k times continuously differentiable. Similarly we consider the convolution R(∂_t)f.

<cit.> Let α ≥ 0. In the above situation, the following statements are equivalent:
* Re ⟨v, L(s)v⟩ ≥ α ‖R(s)v‖^2 for all v ∈ V, Re s > σ.
* ∫_0^t e^{-2σ t'} Re ⟨f(t'), L(∂_t)f(t')⟩ dt' ≥ α ∫_0^t e^{-2σ t'} ‖R(∂_t)f(t')‖^2 dt' for all f ∈ C^k([0,∞), V) with finite support and f^{(j)}(0) = 0 for 0 ≤ j < k, and for all t ≥ 0.

Property 1. is known to be satisfied for the Laplace transforms of various boundary integral operators for wave equations <cit.>, and it is a fundamental property in the study of boundary integral equations for wave equations. We are interested in time discretisations of the convolution operators L(∂_t) and R(∂_t) that preserve this coercivity property. It was shown in <cit.> that this is achieved by convolution quadrature based on A-stable multistep methods such as the first- and second-order backward differentiation formulae. In Theorem <ref> below we will show that the coercivity property is also preserved by convolution quadrature based on certain Runge–Kutta methods such as the third-order, two-stage Radau IIA method.
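To make the equivalence concrete, consider the scalar case V = ℂ with L(s) = s, R(s) = 1 and α = σ: statement 1. holds since Re⟨v, sv⟩ = (Re s)|v|², and statement 2. then holds — in fact with equality, by integration by parts, since L(∂_t)f = f' and f vanishes at 0. The following plain-Python sketch (our own illustration, not from the paper, using a rapidly decaying stand-in for a compactly supported f) checks this equality by quadrature.

```python
import math

# Illustration of the coercivity equivalence for L(s) = s, R(s) = 1, alpha = sigma:
#   int_0^oo e^{-2 sigma t} f(t) f'(t) dt  =  sigma * int_0^oo e^{-2 sigma t} f(t)^2 dt
# (equality by integration by parts, since f(0) = 0 and f decays).
sigma = 0.7
f  = lambda t: t**2 * math.exp(-t)           # smooth, f(0) = f'(0) = 0, fast decay
df = lambda t: (2*t - t**2) * math.exp(-t)   # exact derivative f'

# composite trapezoidal rule on [0, T]; T large enough that the tail is negligible
T, N = 40.0, 200000
h = T / N
lhs = rhs = 0.0
for k in range(N + 1):
    t = k * h
    w = 0.5*h if k in (0, N) else h
    lhs += w * math.exp(-2*sigma*t) * f(t) * df(t)   # left-hand side integral
    rhs += w * math.exp(-2*sigma*t) * f(t)**2        # integral on the right
print(lhs, sigma*rhs)
```

The two printed numbers agree to quadrature accuracy, as the equivalence predicts.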
For the particular case σ=0, it will be shown to be preserved for all algebraically stable Runge–Kutta methods.

§ PRESERVING COERCIVITY BY RUNGE–KUTTA CONVOLUTION QUADRATURE

§.§ Runge–Kutta differentiation symbol and convolution quadrature

An m-stage Runge–Kutta discretisation of the initial value problem y' = f(t,y), y(0) = y_0, is given by

Y_ni = y_n + h ∑_{j=1}^m a_ij f(t_n+c_j h, Y_nj), i = 1,…,m,
y_{n+1} = y_n + h ∑_{j=1}^m b_j f(t_n+c_j h, Y_nj),

where h>0 is the time step, t_n = nh, and the internal stages Y_ni and grid values y_n are approximations to y(t_n+c_i h) and y(t_n), respectively. In the following we use the notation

𝒜 = (a_ij)_{i,j=1}^m, b = (b_1,…,b_m)^T, 𝟙 = (1,1,…,1)^T.

We always assume that the Runge–Kutta matrix 𝒜 is invertible. As has been shown in <cit.>, and in applications to wave propagation problems further in <cit.>, Runge–Kutta methods can be used to construct convolution quadrature methods that enjoy favourable properties. Here one uses the Runge–Kutta differentiation symbol

Δ(ζ) = (𝒜 + ζ/(1-ζ) 𝟙b^T)^{-1}, ζ ∈ ℂ with |ζ|<1.

This is well-defined for |ζ|<1 if R(∞) = 1 - b^T𝒜^{-1}𝟙 satisfies |R(∞)| ≤ 1. In fact, the Sherman–Morrison formula then yields

Δ(ζ) = 𝒜^{-1} - ζ/(1-R(∞)ζ) 𝒜^{-1}𝟙b^T𝒜^{-1}.

To formulate the Runge–Kutta convolution quadrature for L(∂_t)g, we formally replace in L(s) the differentiation symbol s by Δ(ζ)/h and expand the operator-valued matrix function

L(Δ(ζ)/h) = ∑_{n=0}^∞ W_n(L) ζ^n,

where in the case of L(s):V→ V' we have the convolution quadrature matrices W_n(L):V^m → (V')^m. For the discrete convolution with these matrices we use the notation

(L(∂_t^h)f)_n = ∑_{j=0}^n W_{n-j}(L) f_j

for any sequence f=(f_n) in V^m.
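For the two-stage Radau IIA method used later in the paper, the two expressions for the differentiation symbol are easy to check against each other numerically. The sketch below (numpy assumed; our own illustration, not part of the paper) compares the defining formula for Δ(ζ) with its Sherman–Morrison form, evaluates R(∞), and confirms that the eigenvalues of Δ(0) = 𝒜^{-1} lie in the open right half-plane.

```python
import numpy as np

# Coefficients of the two-stage Radau IIA method (given later in the paper)
A = np.array([[5/12, -1/12],
              [3/4,   1/4 ]])
b = np.array([3/4, 1/4])
one = np.ones(2)

Ainv = np.linalg.inv(A)
R_inf = 1 - b @ Ainv @ one            # stability function at infinity; 0 for Radau IIA

zeta = 0.3 + 0.4j                     # an arbitrary point with |zeta| < 1
Delta_def = np.linalg.inv(A + zeta/(1 - zeta) * np.outer(one, b))
Delta_sm = Ainv - zeta/(1 - R_inf*zeta) * (Ainv @ np.outer(one, b) @ Ainv)
err = np.max(np.abs(Delta_def - Delta_sm))   # agreement of the two formulas

eig_re = np.linalg.eigvals(Ainv).real        # eigenvalues of Delta(0) = A^{-1}
print(R_inf, err, eig_re)
```

Since the eigenvalues of Δ(0) have positive real part, the linear systems arising in each convolution quadrature step are well posed; this fact is used again in Section 5.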
For vectors of function values of a function f:[0,T]→ V given as f_n=(f(t_n+c_i h))_{i=1}^m, the ith component of the vector (L(∂_t^h)f)_n is considered as an approximation to (L(∂_t)f)(t_n+c_i h). In particular, if c_m=1, as is the case with Radau IIA methods, then the continuous convolution at t_{n+1} is approximated by the last component of the discrete block convolution:

(L(∂_t)f)(t_{n+1}) ≈ e_m^T (L(∂_t^h)f)_n,

where e_m = (0, …, 0, 1)^T is the mth unit vector. We recall the composition rule L_2(∂_t^h) L_1(∂_t^h)f = (L_2L_1)(∂_t^h)f. For λ ∈ ℂ, the convolution quadrature (∂_t^h-λ)^{-1}f (which is to be interpreted as L(∂_t^h)f for the multiplication operator L(s)=(s-λ)^{-1}) contains the internal stages of the Runge–Kutta approximation to the linear differential equation y'=λy+f with initial value y(0)=0. Results on the order of convergence of this approximation are given in <cit.>. The result of <cit.>, which is relevant for operators L(s) arising in wave propagation, will be restated and extended to the internal stages in Section <ref>.

§.§ Numerical range of the Runge–Kutta differentiation symbol

We now consider methods that are algebraically stable:
* All weights b_i are positive.
* The symmetric matrix with entries b_i a_ij + b_j a_ji - b_i b_j is positive semi-definite.

Gauss methods and Radau IIA methods are widely used classes of methods that satisfy this condition. We refer the reader to <cit.> for background literature on Runge–Kutta methods and their stability notions. We consider the weighted inner product on ℂ^m,

(u,v) = ∑_{i=1}^m b_i ū_i v_i, u,v ∈ ℂ^m.

We have the following characterisation.

For an algebraically stable Runge–Kutta method and for the b-weighted inner product (<ref>),

Re (w, Δ(ζ)w) ≥ 0 for all w ∈ ℂ^m, |ζ|<1.

Conversely, if the differentiation symbol of a Runge–Kutta method with positive weights b_i satisfies this inequality, then the method is algebraically stable.

With a different notation, this is shown in <cit.>.
For the convenience of the reader we include the short proof. Since for v=Δ(ζ)w we have Re (w,Δ(ζ)w) = Re (Δ(ζ)^{-1}v, v), it suffices to show that

Re (v, Δ(ζ)^{-1}v) ≥ 0 for all v ∈ ℂ^m, |ζ|<1.

We rewrite

Δ(ζ)^{-1} = 𝒜 + ζ/(1-ζ) 𝟙b^T = 𝒞 + ½ (1+ζ)/(1-ζ) 𝟙b^T, with 𝒞 = 𝒜 - ½ 𝟙b^T,

and observe that (cf. <cit.>)

(𝟙b^T v, v) = |∑_{i=1}^m b_i v_i|^2, 2 Re (𝒞v,v) = ∑_{i,j=1}^m (b_i a_ij + b_j a_ji - b_i b_j) v̄_i v_j.

Since Re[(1+ζ)/(1-ζ)] > 0 for |ζ|<1, the result follows.

In this paper we will need a stronger positivity property, for which we show the following order barrier and a positive result for the two-stage Radau IIA method, which is of order 3 and has the coefficients

𝒜 = [5/12 -1/12; 3/4 1/4], b^T = (3/4, 1/4).

(a) For the two-stage Radau IIA method and for the b-weighted inner product (<ref>) and corresponding norm |·| we have

Re (w, Δ(ζ)w) ≥ ½(1-|ζ|^2)|w|^2 for all w ∈ ℂ^m, |ζ| ≤ 1.

(b) For none of the Gauss methods with two or more stages and none of the Radau IIA methods with three or more stages, there exists c>0 such that for all sufficiently small δ>0,

Re (w, Δ(ζ)w) ≥ cδ|w|^2 for all w ∈ ℂ^m, |ζ| ≤ e^{-δ}.

Clearly, (<ref>) implies (<ref>) with c arbitrarily close to 1 for small δ. We further note that the implicit Euler method and the implicit midpoint rule (which are the one-stage Radau IIA and Gauss methods, respectively) also satisfy (<ref>).

(a) For the two-stage Radau IIA method we find

Δ(ζ) = 1/2 [3  1-4ζ; -9  5+4ζ].

Denoting the diagonal matrix of the weights by ℬ = diag(3/4, 1/4), we note

Re (w, Δ(ζ)w) = w^* ℬ^{1/2} · ℬ^{-1/2} ½(ℬΔ(ζ) + Δ(ζ)^*ℬ) ℬ^{-1/2} · ℬ^{1/2} w.

We obtain the hermitian matrix

ℬ^{-1/2} ½(ℬΔ(ζ) + Δ(ζ)^*ℬ) ℬ^{-1/2} = 1/2 [3  -√3(1+2ζ); -√3(1+2ζ̄)  5+4 Re ζ],

which has the trace 4 + 2 Re ζ and the determinant 3(1-|ζ|^2). It follows that both eigenvalues are positive and bounded by 6, and hence the smaller eigenvalue is bounded from below by 3(1-|ζ|^2)/6 = (1-|ζ|^2)/2. This yields the inequality (<ref>).

(b) The proof uses the W-transformation of Hairer & Wanner, see <cit.>.
For each of the m-stage Gauss and Radau IIA methods, there exists an invertible real m×m matrix W with first column 𝟙 such that, with the diagonal matrix ℬ of the weights b_i,

W^T ℬ W = I_m,

or in other words, W^T ℬ^{1/2} is an orthogonal matrix (with respect to the Euclidean inner product), and

𝒜 = WXW^{-1},

where X - ½ e_1e_1^T - β_m e_me_m^T is a skew-symmetric matrix with β_m=0 for the Gauss method and β_m>0 for the Radau IIA method. We write

(w, Δ(ζ)w) = w^* ℬΔ(ζ) w = w^*ℬ^{1/2} · ℬ^{1/2}W · W^T ℬ^{1/2} · ℬ^{1/2}Δ(ζ) ℬ^{-1/2} · ℬ^{1/2}W · W^T ℬ^{1/2} · ℬ^{1/2} w = w^*ℬ^{1/2} · ℬ^{1/2}W · W^T ℬΔ(ζ) W · W^T ℬ^{1/2} · ℬ^{1/2} w,

where we note

W^T ℬΔ(ζ) W = W^{-1} Δ(ζ) W = (W^{-1} Δ(ζ)^{-1} W)^{-1}.

Now, by the definition of Δ(ζ) and the above-mentioned property of W^{-1}𝒜W = X together with W^T b = e_1, the matrix W^{-1}Δ(ζ)^{-1}W is the sum of a skew-hermitian matrix plus a rank-1 or rank-2 matrix for Gauss or Radau IIA methods, respectively, and by the Sherman–Morrison–Woodbury formula so is its inverse:

W^T ℬΔ(ζ) W = Y + Z(ζ),

where Y is skew-hermitian and Z(ζ) is of rank 1 or 2 for Gauss or Radau IIA methods, respectively. If w ≠ 0 is in the null-space of Z(ζ)W^Tℬ, which is of codimension 1 or 2 for Gauss or Radau, respectively, then we obtain from the above formulas that

Re (w, Δ(ζ)w) = 0,

in contradiction to (<ref>).

As we will show in Theorem <ref> below, Runge–Kutta convolution quadrature with (<ref>) preserves the coercivity property of Theorem <ref> for arbitrary abscissa σ ≥ 0, while general algebraically stable methods preserve it in the case σ=0. Before we state and prove this theorem in Section <ref>, we need an auxiliary result of independent interest.

§.§ A matrix-function inequality related to a theorem by von Neumann

We consider again a complex Hilbert space V and its dual V', with the anti-duality denoted by ⟨·, ·⟩. On ℂ^m we consider an inner product (·,·) and associated norm |·|.
An inner product on V^m and the anti-duality between V^m and (V')^m are induced in the usual way: for Kronecker products a⊗u and b⊗v with a,b ∈ ℂ^m and u,v ∈ V one defines (a⊗u, b⊗v) = (a,b)(u,v) and extends this to a sesquilinear form on V^m × V^m, and in the same way one proceeds for the anti-duality ⟨·,·⟩ on V^m × (V')^m.

On the Hilbert space V, let L(s): V → V' and R(s): V → V be analytic families of bounded linear operators for Re s > σ, continuous for Re s ≥ σ, such that (<ref>) is satisfied and for some α ≥ 0,

Re ⟨v, L(s)v⟩ ≥ α ‖R(s)v‖^2, for all v ∈ V, Re s ≥ σ.

Let the matrix S ∈ ℂ^{m×m} be such that

Re (w, Sw) ≥ σ|w|^2, for all w ∈ ℂ^m.

Then,

Re ⟨v, L(S)v⟩ ≥ α ‖R(S)v‖^2, for all v ∈ V^m.

This result can be viewed as an extension of a theorem of von Neumann <cit.> (see also <cit.>), which corresponds to the particular case where L(s) is the identity operator on V (when V is identified with V' with the anti-duality given by the inner product on V). The proof adapts Michel Crouzeix's proof of von Neumann's theorem as given in <cit.>.

Without loss of generality we assume here σ = 0. First we note that for a diagonal matrix S the result holds trivially, and so it does for a normal matrix S, which is diagonalised by a similarity transformation with a unitary matrix. For a non-normal matrix S we consider the matrix-valued complex function

S(z) = z/2 (S+S^*) + 1/2 (S-S^*)

and we observe that S = S(1) and

Re (w, S(z)w) = (Re z) · Re (w, Sw).

Together with the condition on S this shows that the numerical range of S(z) is in the right complex half-plane for Re z ≥ 0, and hence all eigenvalues of S(z) have non-negative real part. Therefore, the operator functions L(S(z)) and R(S(z)) are well-defined for Re z ≥ 0. If Re z = 0, then the matrix S(z) is normal, and hence the desired inequality is valid for S(z) with Re z = 0.
The function

φ(z) = α ‖R(S(z))v‖^2 - Re ⟨v, L(S(z))v⟩

is subharmonic, since the last term is harmonic as the real part of an analytic function and the first term is the inner product of an analytic function with itself, which is subharmonic (as is readily seen by computing the Laplacian and noting that the real and imaginary parts of the analytic function are harmonic). Hence, by the maximum principle (or its Phragmén–Lindelöf-type extension to polynomially bounded subharmonic functions on the half-plane),

φ(1) ≤ sup_{Re z = 0} φ(z) ≤ 0,

which is the desired inequality.

There is a slightly weaker variant of Lemma <ref>. We formulate it for σ=0. Let L(s): V → V' and R(s): V → V be analytic families of bounded linear operators for Re s > 0, continuous for Re s = 0, such that (<ref>) is satisfied and for some α ≥ 0,

Re ⟨v, L(s)v⟩ ≥ α ‖R(s)v‖^2, for all v ∈ V, Re s > 0.

Let the matrix S ∈ ℂ^{m×m} be such that all its eigenvalues either have positive real part or are zero, and

Re (w, Sw) ≥ 0, for all w ∈ ℂ^m.

Then,

Re ⟨v, L(S)v⟩ ≥ α ‖R(S)v‖^2, for all v ∈ V^m.

This is proved by continuity, using the previous result for S + εI and letting ε → 0.

§.§ Preserving the convolution coercivity under discretisation

Let the m-stage Runge–Kutta method satisfy (<ref>) for some inner product (·,·), as in particular is the case for the two-stage Radau IIA method. In the situation of Theorem <ref>, condition 1. of that theorem implies, for sufficiently small stepsize h>0 and with σ̃ = σh/c,

∑_{n=0}^∞ e^{-2σ̃n} Re ⟨f_n, (L(∂_t^h)f)_n⟩ ≥ α ∑_{n=0}^∞ e^{-2σ̃n} ‖(R(∂_t^h)f)_n‖^2,

for every sequence f=(f_n)_{n≥0} in V^m with finitely many non-zero entries. Moreover, in the case σ=0 this inequality holds for every algebraically stable Runge–Kutta method, with σ̃=0 and with respect to the b-weighted inner product (<ref>) on ℂ^m.

The proof uses Parseval's formula and combines Lemma <ref> with Lemmas <ref> and <ref>.
By (<ref>), with σ̃ = σh/c and ρ = e^{-σ̃}, we have with respect to the inner product weighted by the b_i that

Re (w, Δ(ρe^{iθ})/h · w) ≥ σ|w|^2 for all w ∈ ℂ^m, θ ∈ ℝ.

We abbreviate

L(θ) = L(Δ(ρe^{iθ})/h)

and similarly R(θ). We denote the Fourier series

f(θ) = ∑_{n=0}^∞ ρ^n e^{inθ} f_n.

By Parseval's formula and the definition of the convolution quadrature weights W_n(L),

∑_{n=0}^∞ ⟨ρ^n f_n, ∑_{j=0}^n ρ^{n-j} W_{n-j}(L) ρ^j f_j⟩ = 1/2π ∫_{-π}^π ⟨f(θ), L(θ)f(θ)⟩ dθ.

Here (<ref>) used in Lemma <ref> yields

Re ⟨f(θ), L(θ)f(θ)⟩ ≥ α ‖R(θ)f(θ)‖^2.

Moreover, again by Parseval's formula,

1/2π ∫_{-π}^π ‖R(θ)f(θ)‖^2 dθ = ∑_{n=0}^∞ ρ^{2n} ‖∑_{j=0}^n W_{n-j}(R) f_j‖^2,

which yields the result.

§ ERROR BOUNDS OF RUNGE–KUTTA CONVOLUTION QUADRATURE

In this section we restate the result of <cit.> and extend it to cover the approximation properties of the internal stages, which will be needed in the next section. To avoid restating the list of properties required for the underlying Runge–Kutta method, we state the results just for the Radau IIA methods, which appear to be the practically most important class of Runge–Kutta methods to be used for convolution quadrature. Let K(s), for Re s > σ > 0, be an analytic family of operators between Hilbert spaces V and W (or just Banach spaces are sufficient here), such that for some real exponent μ and ν ≥ 0 the operator norm is bounded as follows:

‖K(s)‖ ≤ M(σ) |s|^μ/(Re s)^ν for all Re s > σ.

<cit.> Let K satisfy (<ref>) and consider the Runge–Kutta convolution quadrature based on the Radau IIA method with m stages. Let r > max(2m-1+μ, 2m-1, m+1) and f ∈ C^r([0,T],V) satisfy f(0) = f'(0) = … = f^{(r-1)}(0) = 0. Then, there exists h̄ > 0 such that for 0 < h ≤ h̄ and t_n = nh ∈ [0,T],

‖e_m^T (K(∂_t^h)f)_n - (K(∂_t)f)(t_{n+1})‖ ≤ C h^{min(2m-1, m+1-μ+ν)} (‖f^{(r)}(0)‖ + ∫_0^{t_n} ‖f^{(r+1)}(τ)‖ dτ).

The constant C is independent of h and f, but does depend on h̄, T, and the constants in (<ref>).

The proof in <cit.> is readily extended to yield the following error bound for the internal stages. Note that here the full order 2m-1 is replaced by the stage order plus one, m+1.
We give the result for m ≥ 2 stages, so that m+1 ≤ 2m-1. (For m=1, the implicit Euler method, one can use the previous result.)

Let K satisfy (<ref>) and consider the Runge–Kutta convolution quadrature based on the Radau IIA method with m ≥ 2 stages. Let r > max(m+1+μ, m+1) and f ∈ C^r([0,T],V) satisfy f(0) = f'(0) = … = f^{(r-1)}(0) = 0. Then, there exists h̄ > 0 such that for 0 < h ≤ h̄ and t_n = nh ∈ [0,T],

‖(K(∂_t^h)f)_n - (K(∂_t)f(t_n+c_i h))_{i=1}^m‖ ≤ C h^{min(m+1, m+1-μ+ν)} (‖f^{(r)}(0)‖ + ∫_0^{t_n} ‖f^{(r+1)}(τ)‖ dτ).

The constant C is independent of h and f, but does depend on h̄, T, and the constants in (<ref>).

§ APPLICATION TO THE TIME DISCRETISATION OF THE WAVE EQUATION WITH A NON-LINEAR IMPEDANCE BOUNDARY CONDITION

§.§ A non-linear scattering problem

We consider the wave equation on an exterior smooth domain Ω^+ ⊂ ℝ^3. Following <cit.>, we search for a function u(·,t) ∈ H^1(Ω^+) satisfying the weak form of the wave equation

ü = Δu in Ω^+

with zero initial conditions and with the non-linear boundary condition

∂_ν^+ u = g(u̇ + u̇^inc) - ∂_ν^+ u^inc on Γ,

where ∂_ν^+ is the outer normal derivative on the boundary Γ of Ω^+, where g:ℝ→ℝ is a given monotonically increasing function, and where u^inc(x,t) is a given solution of the wave equation. The interpretation is that the total wave u^tot = u + u^inc is composed of the incident wave u^inc and the unknown scattered wave u.

One approach to solve this exterior problem is to determine the Dirichlet and Neumann boundary data from boundary integral equations on Γ and then to compute the solution at points of interest x ∈ Ω^+ from the Kirchhoff representation formula. Here we are interested in the stability and convergence properties of the numerical approximation when the time discretisation in the boundary integral equation and in the representation formula is done by Runge–Kutta convolution quadrature.
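In computations, the convolution quadrature weights W_n(K) are obtained from the generating function by evaluating K(Δ(ζ)/h) on a circle of radius ρ < 1 and applying an FFT. The sketch below (numpy assumed; the kernel K(s) = s^{-1}, the stepsize and all numerical values are our own illustrative choices, not from the paper) does this for the two-stage Radau IIA method, where the weights are known in closed form (W_0 = h𝒜 and W_n = h𝟙b^T for n ≥ 1, since K(Δ(ζ)/h) = h𝒜 + h𝟙b^T(ζ + ζ² + …)), and then checks that the last component of the discrete block convolution approximates ∫_0^{t_{n+1}} f, in line with the remark that for Radau IIA (c_m = 1) the last component approximates the convolution at t_{n+1}.

```python
import numpy as np

A = np.array([[5/12, -1/12], [3/4, 1/4]])   # two-stage Radau IIA
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])
one = np.ones(2)
h = 0.05

def Delta(zeta):
    return np.linalg.inv(A + zeta/(1 - zeta)*np.outer(one, b))

K = lambda S: np.linalg.inv(S)              # K(s) = s^{-1}, applied to the matrix argument

# weights as Taylor coefficients of K(Delta(zeta)/h), computed by a scaled FFT
N = 64
rho = 1e-10**(1/N)                          # radius chosen so that rho^N ~ 1e-10
vals = np.array([K(Delta(rho*np.exp(2j*np.pi*l/N))/h) for l in range(N)])
W = np.fft.fft(vals, axis=0) / N            # coefficient n comes out as W_n * rho^n
W = (W / rho**np.arange(N)[:, None, None]).real

err_W0 = np.abs(W[0] - h*A).max()                  # closed form: W_0 = h*A
err_W3 = np.abs(W[3] - h*np.outer(one, b)).max()   # W_n = h*1 b^T for n >= 1

# discrete block convolution for f(t) = t^2: the last component at step n
# approximates the integral up to t_{n+1} (here to quadrature-weight accuracy,
# since the order-3 Radau quadrature integrates t^2 exactly)
f = lambda t: t**2
n = 19
fvals = [np.array([f((j + ci)*h) for ci in c]) for j in range(n + 1)]
u_n = sum(W[n - j] @ fvals[j] for j in range(n + 1))
err_int = abs(u_n[-1] - ((n + 1)*h)**3/3)
print(err_W0, err_W3, err_int)
```

The radius ρ trades aliasing error (∝ ρ^N) against round-off amplification (∝ ρ^{-n}); the choice ρ^N ≈ 10^{-10} is a common compromise.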
Since our interest in this paper is the aspect of time discretisation, we will not address the space discretisation by boundary elements, though with some effort this could also be included; cf. <cit.>.

§.§ Boundary integral equation and representation formula

Using the standard notation of the boundary integral operators for the Helmholtz equation s^2 u - Δu = 0 (Re s > 0) as used, for example, in <cit.> and <cit.>, we denote by S(s): H^{-1/2}(Γ) → H^1(ℝ^3∖Γ) and D(s): H^{1/2}(Γ) → H^1(ℝ^3∖Γ) the single-layer and double-layer potential operators, respectively, and by V(s), K(s), K^T(s), W(s) the corresponding boundary integral operators that form the Calderón operator

B(s) = [ sV(s)  K(s); -K^T(s)  s^{-1}W(s) ] : H^{-1/2}(Γ) × H^{1/2}(Γ) → H^{1/2}(Γ) × H^{-1/2}(Γ)

and the related operator

B_imp(s) = B(s) + [0  -1/2 I; 1/2 I  0],

where the suffix imp stands for impedance. With these operators, the solution u is determined by first solving, for φ = -∂_ν^+ u and ψ = γ^+ u̇ (where γ^+ is the trace operator on Ω^+), the time-dependent boundary integral equation (see <cit.>)

B_imp(∂_t) [φ; ψ] + [0; g(ψ + u̇^inc)] = [0; ∂_ν^+ u^inc].

The solution u is then obtained from the representation formula

u = S(∂_t)φ + D(∂_t)∂_t^{-1}ψ.

We will address the question as to what are the approximation properties when the temporal convolutions in (<ref>) and (<ref>) are discretised by Runge–Kutta convolution quadrature. Since this will not turn out fully satisfactory, we will further consider time-differentiated versions of (<ref>). The following coercivity property was proved in <cit.>.

<cit.> Let ⟨·,·⟩_Γ denote the anti-duality pairing between H^{-1/2}(Γ) × H^{1/2}(Γ) and H^{1/2}(Γ) × H^{-1/2}(Γ). There exists β > 0 so that the Calderón operator (<ref>) satisfies

Re ⟨[φ; ψ], B(s)[φ; ψ]⟩_Γ ≥ β c_σ (‖s^{-1}φ‖^2_{H^{-1/2}(Γ)} + ‖s^{-1}ψ‖^2_{H^{1/2}(Γ)})

for Re s ≥ σ > 0 and for all φ ∈ H^{-1/2}(Γ) and ψ ∈ H^{1/2}(Γ), with c_σ = min(1,σ^2)σ.

Since B_imp(s) differs from B(s) by a skew-hermitian matrix, the same estimate then also holds for B_imp(s).
Note that Lemma <ref> yields property 1. of Theorem <ref> for the Calderón operator B(s) and for the multiplication operator R(s) = s^{-1}.

§.§ Time discretisation by Runge–Kutta convolution quadrature

Using the notation of Section <ref>, the boundary integral equation (<ref>) is discretised in time with a stepsize τ>0 over a time interval (0,T) with T=Nτ by

B_imp(∂_t^τ) [φ^τ; ψ^τ] + [0; g(ψ^τ + u̇^inc)] = [0; ∂_ν^+ u^inc],

where (φ^τ, ψ^τ) = (φ_n, ψ_n)_{n=0}^{N-1} with (φ_n, ψ_n) = (φ_{n,i}, ψ_{n,i})_{i=1}^m and (φ_{n,i}, ψ_{n,i}) ≈ (φ(t_n+c_iτ), ψ(t_n+c_iτ)) is the numerical approximation that is to be computed, and u̇^inc = (u̇^inc_n)_{n=0}^{N-1} with u̇^inc_n = (u̇^inc(t_n+c_iτ))_{i=1}^m. The function g acts componentwise. At the nth time step, a non-linear system of equations of the following form needs to be solved:

B_imp(Δ(0)/τ) [φ_n; ψ_n] + [0; g(ψ_n + u̇^inc_n)] = …,

where the dots represent known terms. This has a unique solution, because the eigenvalues of Δ(0) = 𝒜^{-1} have positive real part, and Lemma <ref> and the monotonicity of g then yield the unique existence of the solution by the Browder–Minty theorem; cf. <cit.> for the analogous situation for multistep-based convolution quadrature.

As an alternative to (<ref>), we further consider the time discretisation of the time-differentiated boundary integral equation:

B_imp(∂_t^τ) [φ̇^τ; ψ̇^τ] + [0; g'(ψ^τ + u̇^inc)(ψ̇^τ + ü^inc)] = [0; ∂_ν^+ u̇^inc],

which is now solved for the approximations (φ̇^τ, ψ̇^τ) (where the dot is just suggestive notation) to (φ̇, ψ̇) (where the dot means again time derivative). Here we define ψ^τ = (∂_t^τ)^{-1}ψ̇^τ and the same for φ^τ.
Furthermore, ü^inc contains the values ü^inc(t_n+c_iτ). We can go even further and consider the time discretisation of the twice differentiated boundary integral equation:

B_imp(∂_t^τ) [φ̈^τ; ψ̈^τ] + [0; g'(ψ^τ + u̇^inc)(ψ̈^τ + ⃛u^inc) + g”(ψ^τ + u̇^inc)·(ψ̇^τ + ü^inc)^2] = [0; ∂_ν^+ ü^inc],

where again the dots on the approximation (φ̈^τ, ψ̈^τ) are suggestive notation, and we set ψ̇^τ = (∂_t^τ)^{-1}ψ̈^τ and ψ^τ = (∂_t^τ)^{-2}ψ̈^τ, and the same for φ^τ.

Finally, at any point x ∈ Ω^+ of interest we compute the approximation to the solution value u(x, t_n+c_iτ) by using the same Runge–Kutta convolution quadrature for discretizing the representation formula (<ref>):

u^τ = S(∂_t^τ)φ^τ + D(∂_t^τ)(∂_t^τ)^{-1}ψ^τ.

§.§ Error bounds for the linear case

We consider first the case of a linear impedance function g(ξ) = αξ with α ≥ 0. Let u^τ = (u_n)_{n=0}^{N-1} with u_n = (u_{n,i})_{i=1}^m be the solution approximation obtained by the discretised representation formula (<ref>) with either of the discretised boundary integral equations (<ref>) or (<ref>) or (<ref>). The discretisation is done by Runge–Kutta convolution quadrature based on the Radau IIA method with m stages. Here we obtain the following optimal-order pointwise error bounds for x bounded away from Γ.

Suppose that in a neighbourhood of the boundary Γ, the incident wave u^inc together with its extension by 0 to t<0 is sufficiently regular. For x ∈ Ω^+ with dist(x,Γ) ≥ δ > 0, the following optimal-order error bound is satisfied in the linear situation described above: for 0 ≤ t_n = nτ ≤ T,

|u_n(x) - u(x,t_n)| ≤ C(δ,T) τ^{2m-1}.
We denote

B_α(s) = B_imp(s) + [0  0; 0  αI].

By Lemma <ref>, B_α(s) is invertible for α ≥ 0 with the bound, for Re s ≥ σ > 0,

‖B_α(s)^{-1}‖ ≤ C(σ) |s|^2/Re s.

The exact solution u(x,t) is given by the representation formula (<ref>) with

[φ; ψ] = B_α^{-1}(∂_t)[0; ∂_ν^+ u^inc - α u̇^inc].

For x ∈ Ω^+ we define the operators S_x(s): H^{-1/2}(Γ) → ℂ and D_x(s): H^{1/2}(Γ) → ℂ by S_x(s)φ = (S(s)φ)(x) and D_x(s)ψ = (D(s)ψ)(x). These operators are bounded for Re s ≥ σ > 0 and dist(x,Γ) ≥ δ > 0 by

‖S_x(s)‖_{H^{-1/2}(Γ)} ≤ C(σ,δ)|s|e^{-δ Re s}, ‖D_x(s)‖_{H^{1/2}(Γ)} ≤ C(σ,δ)|s|e^{-δ Re s}.

The first bound is proved in <cit.> and the second bound is proved similarly. We thus have

u(x,t) = (M_x(∂_t)f)(t) with M_x(s) = (S_x(s), D_x(s)s^{-1})B_α(s)^{-1} and f = [0; ∂_ν^+ u^inc - α u̇^inc].

With the above operator bounds we obtain for Re s ≥ σ > 0 and dist(x,Γ) ≥ δ > 0

‖M_x(s)‖_{H^{1/2}(Γ) × H^{-1/2}(Γ)} ≤ C(σ,δ) (|s|^3/Re s) e^{-δ Re s}.

By the composition rule, the numerical solution obtained by (<ref>) and (<ref>) is given as

u^τ(x) = M_x(∂_t^τ)f,

where f contains the values of f at the points t_n+c_iτ. If we take instead (<ref>) or (<ref>), then we have

u^τ(x) = M_x(∂_t^τ)(∂_t^τ)^{-1}ḟ or u^τ(x) = M_x(∂_t^τ)(∂_t^τ)^{-2}f̈,

respectively. In view of (<ref>), Theorem <ref> then yields the result.

The situation is different if we consider the H^1(Ω^+) norm of the error.

Suppose that in a neighbourhood of the boundary Γ, the incident wave u^inc together with its extension by 0 to t<0 is sufficiently regular. Then, the following error bounds are satisfied in the linear situation described above: for 0 ≤ t_n = nτ ≤ T,

‖u_n - u(·,t_n)‖_{H^1(Ω^+)} ≤ C(T) τ^k

with

k = m+1/2 if (<ref>) is used,
k = min(2m-1, m+3/2) if (<ref>) is used,
k = min(2m-1, m+5/2) if (<ref>) is used.

Consider the Laplace transformed wave equation (<ref>)

-Δû + s^2 û = 0 in Ω^+, ∂_ν^+ û - αsû = f̂ on Γ,

where û is the Laplace transform of u and f̂ the Laplace transform of f = ∂_ν^+ u^inc - α u̇^inc. We will require the estimate, see <cit.>,

‖∂_ν^+ û‖_{H^{-1/2}(Γ)} ≤ C(σ) |s|^{1/2} ‖û‖_{|s|,Ω^+},

with Re s ≥ σ > 0 and the scaled H^1 norm

‖û‖^2_{|s|,Ω^+} = ‖∇û‖^2_{L^2(Ω^+)} + |s|^2 ‖û‖^2_{L^2(Ω^+)}.
Testing (<ref>) with sû, integrating by parts and taking the real part gives

Re s · ‖û‖^2_{|s|,Ω^+} = -Re ⟨∂_ν^+ û, s γ^+ û⟩_Γ ≤ C(σ) |s|^{1/2} ‖û‖_{|s|,Ω^+} ‖ψ̂‖_{H^{1/2}(Γ)},

where ψ̂ = s γ^+ û is the Laplace transform of ψ. Making use of ‖û‖_{H^1(Ω^+)} ≤ C(σ) ‖û‖_{|s|,Ω^+} and the bound (<ref>) gives

‖û‖_{H^1(Ω^+)} ≤ C(σ) (|s|^{5/2}/(Re s)^2) ‖f̂‖_{H^{-1/2}(Γ)}.

The stated result then follows from Theorem <ref>.

§.§ Convergence for the non-linear problem

There are several aspects which make the error analysis of the non-linear problem more intricate:
* The numerical solution can no longer be interpreted as a mere convolution quadrature for an appropriate operator K(s) acting on the data (i.e., the incident wave).
* We need to impose regularity assumptions on the solution rather than the data.
* Convolution coercivity now plays an important role in ensuring the stability of the time discretisation.

We assume strict monotonicity of the non-linear function g:ℝ→ℝ: there exists β>0 such that

(ξ-η)(g(ξ)-g(η)) ≥ β|ξ-η|^2 for all ξ,η ∈ ℝ.

Furthermore, we assume that the pointwise application of g maps H^{1/2}(Γ) to H^{-1/2}(Γ). As is shown in <cit.> by Sobolev embeddings, this is satisfied if g(ξ) grows at most cubically as |ξ|→∞.

In the following we write for a stepsize τ>0 and a sequence e=(e_n)_{n=0}^{N-1} with e_n=(e_{n,i})_{i=1}^m and e_{n,i} in a Hilbert space V

‖e‖^2_{ℓ_2^τ(0:N;V^m)} = τ ∑_{n=0}^{N-1} ∑_{i=1}^m ‖e_{n,i}‖_V^2.

We denote the numerical solution by u^τ=(u_{n,i}) and the corresponding values of the exact solution by u=(u(t_n+c_iτ)), where in both cases n=0,…,N-1 and i=1,…,m. We have the following error bound for the non-linear problem. Here the restriction to the two-stage Radau IIA method stems from Lemma <ref>.

Let the non-linear function g be continuous, strictly monotone and have at most cubic growth. Suppose that the solution u to the problem (<ref>)–(<ref>) is sufficiently regular. Consider the time discretisation (<ref>) and (<ref>) by the two-stage Radau IIA convolution quadrature method.
Then, there is τ̅>0 such that for stepsizes 0<τ≤τ̅, the error in the boundary values satisfies the bound γ^+u^τ - γ^+u_ℓ_2^τ(0:N;H^1/2(Γ)^2)≤ Cτ^3, and the error in the exterior domain is bounded by u^τ - u_ℓ_2^τ(0:N;H^1(Ω^+)^2)≤ Cτ^3/2. The constants C are independent of τ and N with 0<Nτ≤ T, but depend on T. We eliminate φ in the system of boundary integral equations (<ref>) to arrive at a boundary integral equation for ψ, L(∂_t)ψ + g(ψ + u̇^inc)=∂_ν^+ u^inc, where L(s) = s^-1(W(s) - ((1/2) I - K^T(s)) V(s)^-1((1/2) I - K(s))) = -s^-1DtN^+(s) with the exterior Dirichlet-to-Neumann operator DtN^+(s). It follows from Propositions 17 and 18 (and their proofs) in <cit.> that, for Re s ≥σ >0, there exist C(σ) and α(σ)>0 such that L(s) _H^-1/2(Γ)← H^1/2(Γ)≤ C(σ) |s|/Re s, Re⟨ψ, L(s) ψ⟩≥α(σ) Re s/|s|^2 ψ_H^1/2(Γ)^2 for all ψ∈ H^1/2(Γ), where ⟨·,·⟩ denotes the anti-duality pairing between H^-1/2(Γ) and H^1/2(Γ). Thanks to the composition rule, we can do the same for the numerical discretisation (<ref>) and reduce the numerical system to an equation for ψ^τ, which is just the convolution quadrature time discretisation of (<ref>), L(∂_t^τ)ψ^τ + g(ψ^τ + u̇^inc)=∂_ν^+ u^inc. For the error e = ψ^τ - ψ with ψ = ((ψ(t_n+c_iτ))_i=1^m)_n=0^N-1 we then have the error equation L(∂_t^τ)e + g(ψ^τ + u̇^inc)-g(ψ + u̇^inc)= d with the defect d = L(∂_t^τ)ψ - ((L(∂_t)ψ(t_n+c_iτ))_i=1^m)_n=0^N-1, which is the convolution quadrature error for L(∂_t)ψ. By Theorem <ref> and our assumption of a sufficiently regular ψ=γ^+u̇, this is bounded by d_n _H^-1/2(Γ)≤ C τ^3 for 0≤ nτ≤ T. Since we can apply the same argument also to spatial derivatives of ψ (in the assumed case of a smooth boundary Γ), we even have d_n _H^1/2(Γ)≤ C τ^3. We test (<ref>) with e, multiply with e^-2σ t with σ=1/T, and integrate from 0 to T.
With (<ref>) and the Runge-Kutta convolution coercivity as given by Theorem <ref>, and with the strict monotonicity (<ref>), we conclude that α∑_n = 0^N e^-2σ t_n((∂_t^τ)^-1e)_n_H^1/2(Γ)^2 + β∑_n = 0^N e^-2σ t_ne_n_L_2(Γ)^2 ≤∑_n = 0^N e^-2σ t_n⟨ e_n, d_n ⟩ and estimate further ⟨ e_n, d_n ⟩≤e_n _L_2(Γ)d_n _L_2(Γ)≤β/2 e_n _L_2(Γ)^2 + 1/2β d_n _L_2(Γ)^2. We thus find the stability estimate (∂_t^τ)^-1e_ℓ_2^τ(0:N;H^1/2(Γ)^2) + e_ℓ_2^τ(0:N;L_2(Γ)^2)≤ C d_ℓ_2^τ(0:N;L_2(Γ)^2). Since (∂_t^τ)^-1e = γ^+u^τ - γ^+u, this proves (<ref>). Let us denote by M(s) = S(s)V^-1(s):H^1/2(Γ) → H^1(Ω^+) the operator that maps Dirichlet data in H^1/2(Γ) to the corresponding solution û∈ H^1(Ω^+) of the Helmholtz equation s^2û-Δû=0. By <cit.>, this is bounded for Re s≥σ>0 by M(s) _H^1(Ω^+)← H^1/2(Γ)≤ C(σ) |s|^3/2/Re s. We then have u^τ - u = M(∂_t^τ)γ^+u^τ - (( M(∂_t)γ^+u(t_n+c_iτ))_i=1^2 )_n=0^N-1 = M(∂_t^τ) (γ^+u^τ - γ^+u) + ( M(∂_t^τ)γ^+u - (( M(∂_t)γ^+u(t_n+c_iτ))_i=1^2 )_n=0^N-1). By Theorem <ref> and the bound for M, the last term is bounded by O(τ^5/2) in the H^1(Ω^+) norm. The first term is only O(τ^3/2), since we lose a factor τ^3/2 from the O(τ^3) error bound for γ^+ u because of the O(|s|^3/2) bound of M(s); this follows from Lemma 5.2 in <cit.> and Parseval's identity. In a similar way we obtain the following results for the alternative discretisations (<ref>) and (<ref>): (i) In addition to Proposition <ref>, assume that g has bounded second derivatives. With the discretisation (<ref>) instead of (<ref>), the error bound in the H^1(Ω^+) norm improves to O(τ^5/2), and the ℓ_2^τ error in a point x bounded away from the boundary Γ is at most O(τ). (ii) In addition to Proposition <ref>, assume that g has bounded second and third derivatives. With the discretisation (<ref>) instead of (<ref>), the error bound in the H^1(Ω^+) norm improves to O(τ^3), and the ℓ_2^τ error in a point x bounded away from the boundary Γ is at most O(τ^2).
The proofs of these error bounds are very similar to that of Proposition <ref>, using in addition a discrete Gronwall inequality at the end of the estimation of e, and an O(|s|^3) bound for the norm of the operator from H^1/2(Γ)→ℂ that maps Dirichlet data to the solution of the Helmholtz equation s^2û-Δû=0 at a point x∈Ω^+ bounded away from Γ, for s in a right half-plane. Since our main concern here is to illustrate the use of the convolution coercivity, we omit the details of these extensions. If we set L̃(s)=L(s+σ) and ψ̃(t)=e^-σ tψ(t) for some σ>0, then the boundary integral equation (<ref>) is equivalent to (L̃(∂_t)ψ̃)(t) + e^-σ t g(e^σ tψ̃(t)+ u̇^inc(t))= e^-σ t∂_ν^+ u^inc(t). By (<ref>), we then have the coercivity estimate for L̃(s) for all Re s ≥ 0 (and not just for Re s ≥σ): Re⟨ψ, L̃(s) ψ⟩≥α(σ) (Re s+σ)/|s+σ|^2 ψ_H^1/2(Γ)^2 for all ψ∈ H^1/2(Γ). By Theorem <ref>, the coercivity estimate for the convolution quadrature approximation of L̃(∂_t)ψ̃ is then obtained for every algebraically stable Runge-Kutta method (and not just the two-stage Radau IIA method). Hence, by discretising the shifted boundary integral equation (<ref>) on an interval [0,T] with shift σ=1/T, we obtain Runge–Kutta based convolution quadrature time discretisations of arbitrarily high order of convergence (assuming sufficient regularity of the exact solution). We remark that similar shifts are familiar in the convergence analysis of space-time Galerkin methods for time-dependent boundary integral equations <cit.>. As in that case, numerical experiments indicate that implementing the shift may not be necessary in practical computations, although this is not backed by theory. § NUMERICAL EXPERIMENTS §.§ Scattering by the unit sphere In these experiments we let Ω^+ be the exterior of the unit sphere and the trace of the incident wave u^inc on the sphere be space independent. As constant functions are eigenfunctions of all the integral operators on the sphere <cit.>, the solution will also be constant in space.
The eigenvalue for the combined operator L(s) in (<ref>) is given by L(s)ψ̂ = -s^-1DtN^+(s)ψ̂ = (1+1/s)ψ̂, for any ψ̂ constant in space. This operator will reflect well the behaviour of scattering by a convex obstacle, but not that of a general scatterer. For this reason we concentrate on the corresponding interior problem with L^-(s)ψ̂ = s^-1DtN^-(s)ψ̂ = (-1/s + (1+e^-2s)/(1-e^-2s))ψ̂, again for ψ̂ constant. Treating both these operators as scalar, complex valued functions of s, we see that both have a better behaviour than the general operators, see (<ref>) and (<ref>). Namely |L(s)| ≤ C(σ), Re L(s) ≥ 1 and |L^-(s)| ≤ C(σ), Re L^-(s) ≥α(σ). As the operator L(s) is too simple, in the numerical experiments we only consider the scalar, non-linear equation L^-(∂_t)ψ + g(ψ+u̇^inc) = 0. Even though these operators are of such a simple form, due to the nonlinearity the exact solution is not available. Nevertheless, a highly accurate solution is not expensive to evaluate and can be used to compute the error in the ℓ^τ_2 norm. We have performed the numerical experiments with the following choices of g and u^inc: g_1(ξ) = (1/4)ξ + ξ|ξ|, g_2(ξ) = (1/4)ξ + ξ^3, u^inc(t) = 2e^-10(t-5/2)^2, and with final time T = 6. Note that g_1 is once continuously differentiable whereas g_2 is infinitely differentiable. The data u^inc is not causal, but it is vanishingly small for t < 0 and we have found that this discrepancy has no significant effect on the results. In Figure <ref> we show the convergence of the two-stage Radau IIA convolution quadrature. As expected, for the smooth non-linear condition we obtain full order of convergence. The solution and its first derivative are shown in Figure <ref>. Note that the two solutions have a similar shape, but a closer look at the derivative in Figure <ref> reveals that one is smooth and the other only once continuously differentiable. For the interior problem, as Re L(s) ≥ 1 the theory also applies to higher order Radau IIA methods. This is however not the case with L^-(s).
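The scalar equation above makes the convolution quadrature machinery easy to exhibit concretely. As a minimal, hedged illustration — using first-order backward-Euler CQ rather than the Radau IIA scheme analysed in the paper, and with function names of our own choosing — the weights ω_n of a discrete convolution K(∂_t^τ)ψ ≈ Σ_j ω_{n-j} ψ_j can be recovered from the generating function K(δ(ζ)/τ) by an FFT on a small circle:

```python
import numpy as np

def cq_weights(K, N, tau, lam=0.3, L=256):
    # Backward-Euler CQ: K(delta(z)/tau) = sum_n w_n z^n with delta(z) = 1 - z.
    # The Taylor coefficients are recovered by a scaled FFT on the circle
    # |z| = lam, with L >> N points to suppress aliasing.
    z = lam * np.exp(2j * np.pi * np.arange(L) / L)
    Kz = K((1 - z) / tau)
    coeff = np.fft.fft(Kz) / L          # coeff[n] ~ w_n * lam**n
    return (coeff[:N] * lam ** (-np.arange(N))).real
```

For the exterior symbol K(s) = 1 + 1/s of the sphere example one has K((1-z)/τ) = (1+τ) + τz + τz² + …, so ω_0 = 1+τ and ω_n = τ for n ≥ 1, which the FFT recovers to high accuracy.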
We nevertheless perform experiments with the three-stage Radau IIA method and obtain good results as shown in Figure <ref>. §.§ A full non-scalar example We end the paper with a 2Dexample that requires the full BEM discretisation in space. The domain is an L-shape and the incident wave is a plane wave. Piecewise linear boundary element space is used to approximate the Dirichlet trace ψ and piecewise constant boundary element space to approximate the Neumann trace φ and the time-discretisation is performed using the two-stage Radau IIA method. The images of the solution are shown in Figure <ref>. § ACKNOWLEDGEMENTWe thank Ernst Hairer for helpful discussions. This work was partially supported by DFG, SFB 1173.1BamH A. Bamberger and T. Ha Duong. Formulation variationelle espace-temps pour le calcul par potentiel retardé d'une onde acoustique. Math. Meth. Appl. Sci., 8:405–435, 1986.BamH2 A. Bamberger and T. Ha Duong. Formulation variationnelle pour le calcul de la diffraction d'une onde acoustique par une surface rigide. Math. Methods Appl. Sci., 8(4):598–608, 1986.BanK L. Banjai and M. Kachanovska.Fast convolution quadrature for the wave equation in three dimensions.J. Comp. Phys., 279, 103–126, 2014.BanL L. Banjai and C. Lubich. An error analysis of Runge-Kutta convolution quadrature. BIT 51:483–496, 2011.BanLM L. Banjai, C. Lubich, and J. M. Melenk. Runge-Kutta convolution quadrature for operators arising in wave propagation. Numer. Math., 119(1):1–20, 2011.BanLS L. Banjai, C. Lubich, and F.-J. Sayas. Stable numerical coupling of exterior and interior problems for the wave equation. Numer. Math., 129(4): 611–646, 2015.BanMS L. Banjai,M. Messner, and M. Schanz.Runge-Kutta convolution quadrature for the boundary element method.Computer Methods in Applied Mechanics and Engineering, 245, 90–101, 2012.BanR L. Banjai and A. Rieder.Convolution quadrature for the wave equation with a non-linear impedance boundary condition. arXiv preprint arXiv:1604.05212 (2016).Ebe S. 
Eberle. The elastic wave equation and the stable numerical coupling of its interior and exterior problems. Preprint, Univ. Tuebingen, na.uni-tuebingen.de/preprints.shtml, 2016.HaiL E. Hairer and C. Lubich. On the stability of Volterra Runge–Kutta methods. SIAM J. Numer. Anal. 21:123–135, 1984.HaiW E. Hairer and G. Wanner. Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems. Springer, 1996.KovL B. Kovács and C. Lubich. Stable and convergent fully discrete interior-exterior coupling of Maxwell's equations. Preprint,arXiv:1605.04086 (2016). To appear in Numer. Math.LalS A. R. Laliena and F.-J. Sayas. Theoretical aspects of the application of convolution quadrature to scattering of acoustic waves. Numer. Math., 112(4):637–678, 2009.Lub94 C. Lubich. On the multistep time discretization of linear initial-boundary value problems and their boundary integral equations. Numer. Math., 67:365–389, 1994.LubO87C. Lubich and A. Ostermann. Multi-grid dynamic iteration for parabolic equations. BIT 27:216–234, 1987.LubO C. Lubich and A. Ostermann. Runge-Kutta methods for parabolic equations and convolution quadrature. Math. Comp., 60(201):105–131, 1993.Ned J.-C. Nédélec. Acoustic and electromagnetic equations, volume 144 of Applied Mathematical Sciences. Springer-Verlag, New York, 2001. Neu J. von Neumann. Eine Spektraltheorie für allgemeine Operatoren eines unitären Raumes. Math. Nachrichten, 4:258–281, 1951.Say16F. Sayas.Retarded Potentials and Time Domain Boundary Integral Equations: A Road Map.Springer, 2016.SchLL A. Schädle, M. López-Fernández, and C. Lubich. Fast and oblivious convolution quadrature. SIAM J. Sci. Comput., 28(2):421–438, 2006.WanW X. Wang and D. Weile.Implicit Runge-Kutta methods for the discretisation of time domain integral equations.IEEE Transactions on Antennas and Propagation, 59(12): 4651–4663, 2011.
http://arxiv.org/abs/1702.08385v1
{ "authors": [ "Lehel Banjai", "Christian Lubich" ], "categories": [ "math.NA", "65L05, 65R20" ], "primary_category": "math.NA", "published": "20170227171523", "title": "Runge--Kutta convolution coercivity and its use for time-dependent boundary integral equations" }
An SDP-Based Algorithm for Linear-Sized Spectral Sparsification Yin Tat Lee Microsoft Research Redmond, USA He Sun The University of Bristol Bristol, UK ===================================================================================================== In this paper we consider discrete robot path planning problems on metric graphs. We propose a clustering method, Γ-Clustering, for the planning graph that significantly reduces the number of feasible solutions, yet retains a solution within a constant factor of the optimal. By increasing the input parameter Γ, the constant factor can be decreased, but with less reduction in the search space. We provide a simple polynomial-time algorithm for finding optimal Γ-clusterings, and show that for a given Γ, this optimal clustering is unique. We demonstrate the effectiveness of the clustering method on traveling salesman instances, showing that for many instances we obtain significant reductions in computation time with little to no reduction in solution quality. § INTRODUCTION Discrete path planning is at the root of many robotic applications, from surveillance and monitoring for security, to pickup and delivery problems in automated warehouses. In such problems the environment is described as a graph, and the goal is to find a path in the graph that completes the task and minimizes the cost function. For example, in monitoring, a common problem is to compute a tour of the graph that visits all vertices and has minimum length <cit.>. These discrete planning problems are typically NP-hard <cit.>, and thus there is a fundamental trade-off between solution quality and computation time.
In this paper we propose a graph clustering method, called Γ-Clustering, that can be used to reduce the space of feasible solutions considered during the optimization. The parameter Γ serves to trade off the feasible solution space reduction (and typically computation time) with the quality of the resulting solution. The idea behind Γ-Clustering is to group vertices together that are in close proximity to each other but are also far from all other vertices. Figure <ref> shows an example of a Γ-clustering in an office environment. Given this clustering, we solve the path planning problem by restricting the path to visit vertices within each cluster consecutively (i.e., no path can visit any cluster more than once). This restriction reduces the number of possible solutions exponentially and thus reduces the amount of computational time needed to find good solutions. Unlike other clustering methods, Γ-Clustering does not accept as input a desired number of clusters. This means that some instances will have no clusters, while others will have many. In this way, Γ-clusterings only explore natural structures within the problem instances instead of imposing structures onto the instance. Additionally, when the graph is metric, we establish that for a given Γ-clustering, the optimal path of the clustered planning problem is within a constant factor (dependent on Γ) of the true optimal solution. Related work There are a number of clustering methods for Euclidean <cit.> and discrete <cit.> environments. Typically the objective of these algorithms is to find a set of equal (or roughly equal) non-overlapping clusters that are grouped by similarity (close in proximity, little to no outgoing edges, etc.), where each location in the graph is assigned to one cluster. For these methods, the desired number of clusters is given as an input parameter. In contrast, in Γ-Clustering, the idea is to simply find a specific form of clustering within the environment, if it exists.
These Γ-clusters may be nested within one another. There are other clustering methods that also look for specific structures within the graph, such as community structures <cit.>, which are based on a metric that compares the density of links within communities to that between communities. In contrast, our clustering method is specifically designed to find structures that yield desirable properties for path planning on road maps. The use of clustering to save on processing time is found in a variety of different fields such as data mining <cit.>, parallel computer processing <cit.>, image processing <cit.>, and control <cit.> for path planning. In environments that have regions with a high degree of connectivity, such as electronic circuits, clustering is commonly used to identify these regions and then plan (nearly) independently in each cluster <cit.>. For path planning problems with repetitive tasks, one can cluster a set of popular robot action sequences into macros <cit.>, allowing the solver to quickly discover solutions that benefit from these action sequences. In applications such as sensor sweeping for coverage problems <cit.> or in the routing of multiple agents <cit.>, clustering has been used to partition the environment into regions that can again be treated in a nearly independent manner, reducing computation time. There is some prior work on partitioning in discrete path planning. Multilevel refinement <cit.> is the process of recursively coarsening the graph by aggregating the vertices together to create smaller and smaller instances, for which a plan can be found more easily. The plan is then recursively refined to obtain a solution to the original problem. The idea in coarsening a graph is that the new coarse edges should approximate the transition costs in the original graph. This differs from Γ-Clustering, which preserves the edges within the graph. There are a number of clustering approaches that aim to reduce the complexity of Euclidean and/or planar problems <cit.>.
Γ-Clustering is more general, in that it works on any graph, while the solution quality guarantees only hold for metric graphs. Contributions The main contribution of this paper is the introduction of Γ-Clustering, a clustering method for a class of discrete path planning problems. We establish that the solution to the corresponding clustered problem provides a min(2, 1 + 3/2Γ)-factor approximation to the optimal solution. We give some insight into the reduction of the search space as a function of the amount of clustering, and we provide an efficient algorithm for computing the optimal Γ-clustering. We then use an integer programming formulation of the problem to demonstrate that for many problem instances the clustering method reduces the computation time while still finding near-optimal solutions. § PRELIMINARIES IN DISCRETE PATH PLANNING In this section we define the class of problems considered in this paper, review some semantics of clusters, review the traveling salesman problem (TSP) <cit.> and define its clustered variant, the general clustered traveling salesman problem (General-CTSP). §.§ Discrete Path Planning Given a graph G = V,E,w, we define a path as a non-repeating sequence of vertices in V, connected by edges in E. A cycle is a path in which the first and last vertex are equal, and for simplicity we will also refer to cycles as paths. Let 𝒫 represent the set of all possible paths in G. Then, abstractly, a path planning constraint defines a subset 𝒫_1 ⊆𝒫 of feasible paths. Given a set of constraints 𝒫_1, 𝒫_2, …, 𝒫_m, the set of all feasible paths is 𝒫_1 ∩𝒫_2 ∩⋯∩𝒫_m. In this paper we restrict our attention to the following class of constraints and planning problems. A constraint 𝒫_1 is order-free if, given any p∈𝒫_1, all paths obtained by permuting the vertices of p are also in 𝒫_1. [Discrete Path Planning Problem] Given a complete weighted graph G = V,E,w and a set of order-free constraints {𝒫_1, 𝒫_2, …, 𝒫_m }, find the minimum length feasible path.
Many discrete path planning problems for single and multiple robots fall into this class, so long as they do not restrict the ordering of vertex visits (i.e., no constraints of the form “visit A before B”). Some examples include single and multi-robot traveling salesman problems, point-to-point planning, and patrolling. As a specific example, the GTSP is a problem where a robot is required to visit exactly one location in each non-overlapping set of locations <cit.>. This is naturally expressed in the above framework by having one constraint for each set: for each cluster V_i we have a constraint stating that exactly one vertex in V_i must be visited in the path. A metric discrete path planning problem is one where the edge weights in the graph G satisfy the triangle inequality: for v_a,v_b,v_c∈ V, we have w(v_a,v_c) ≤ w(v_a, v_b) + w(v_b, v_c). To describe the number of feasible paths for a given planning problem, we use the phrase search space size. For example, a problem where we must choose an ordering of n locations has a search space size of n!, since there are n! combinations that a path may take. Note that as more constraints are added to the problem, the search space size can only be reduced, since a feasible path must lie in the intersection of all constraints. §.§ Clusters A cluster is a subset of the graph's vertices, V_i ⊂ V. Given the clusters V_1 and V_2, we say V_1 is nested in V_2 if V_1 ⊆ V_2. The clusters V_1 and V_2 are overlapping if V_1 ∩ V_2 ≠∅, V_1 ⊈ V_2, and V_2 ⊈ V_1. A set of clusters (or clustering) is denoted by C = {V_1,…,V_m}. A clustering C = {V_1, V_2, …, V_m } is nested if there exists some V_i ⊆ V_j for V_i, V_j ∈ C. In this paper we seek to add clustering constraints to a discrete planning problem that reduce its search space size, but also retain low-cost feasible paths. The clustering constraints we consider are of the following form.
Given a graph G= V,E,w and a cluster V_i ⊆ V, a feasible path p must visit the vertices within the cluster V_i consecutively. Formally, the vertices visited by p, denoted V[p], are visited consecutively if there exists a path segment p' of p that visits every vertex in V_i ∩ V[p] and is of length |V_i ∩ V[p]|. Note that in the above definition, it is not necessary for all of the vertices in V_i to be visited. It is just necessary that the vertices of V_i that are visited be visited consecutively. §.§ Traveling Salesman Problems The traveling salesman problem (TSP) is defined as follows: given a set of cities and distances between each pair of cities, find the shortest path that the salesman can take to visit each city exactly once and return to the first city (i.e., the shortest tour). A tour in a graph that visits each vertex exactly once is called a Hamiltonian cycle (regardless of path cost). The general clustered version of the TSP is the extension that requires the solution to visit the vertices within the clusters consecutively. The definition of these problems is as follows: [Traveling Salesman Problem (TSP)] Given a complete graph G = V,E,w with edge weights w: E →ℝ_≥ 0, find a Hamiltonian cycle of G with minimum cost. [General-CTSP] Given a complete and weighted graph G = V,E,w along with a clustering C = {V_1,…, V_m}, find a Hamiltonian cycle of G with minimum cost such that the vertices within each cluster V_i are visited consecutively. The traditional version of the CTSP restricts the clusters to be non-overlapping (and non-nested). For this paper we use the syntax General-CTSP to emphasize when we are solving the general problem, and CTSP to refer to the traditional problem. § Γ-CLUSTERING In this section, we define Γ-Clustering and show that the Γ-clustered path planning problem provides a min(2, 1 + 3/2Γ) approximation of the original path planning problem. We then describe an algorithm for finding the optimal Γ-clustering, and characterize the search space reductions.
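The consecutive-visit condition defined above is easy to state operationally: the positions at which a path meets the cluster must form one contiguous block. A small sketch (naming is ours; open paths only — for a cycle the same test would be applied to every rotation of the path):

```python
def visits_consecutively(path, cluster):
    # Indices of path positions that lie in the cluster; the visited cluster
    # vertices form a segment of length |V_i ∩ V[p]| iff these indices are
    # contiguous.
    idx = [i for i, v in enumerate(path) if v in cluster]
    if not idx:
        return True  # no cluster vertex is visited: vacuously consecutive
    return idx[-1] - idx[0] + 1 == len(idx)
```

Note that, as in the definition, a path that skips some cluster vertices entirely can still pass the test, as long as the cluster vertices it does visit are grouped together.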
§.§ Definition of Γ-Clustering Below we define the notion of Γ-clusters and Γ-clusterings, and the clustered discrete path planning problem. Then we pose the clustering problem as one of maximizing the search space reduction. Given a graph G = V,E,w and a cluster V_i ⊂ V, we define the following quantities for V_i relative to G: α_i≡min_v_a ∈ V_i, v_b ∈ V ∖ V_i min(w(v_a, v_b), w(v_b, v_a)), β_i≡max_v_a,v_b ∈ V_i, v_a ≠ v_b w(v_a, v_b), Γ_i≡α_i/β_i, where α_i represents the minimum weight edge entering or exiting the cluster V_i, and β_i represents the maximum weight edge within V_i. The ratio Γ_i is a measure of how separated the vertices in V_i are from the remaining vertices in G. Given an input parameter Γ≥ 0 and a graph G = V,E,w, a clustering C = {V_1, V_2,…,V_m} is said to be a Γ-clustering if and only if V is covered by V_1 ∪ V_2 ∪⋯∪ V_m; each V_i ∈ C has a separation Γ_i ≥Γ; and the clusters are either nested (V_i ⊆ V_j or V_j ⊆ V_i) or non-overlapping (V_i ∩ V_j = ∅) for all V_i, V_j ∈ C. The search space reduction for path planning problems comes from restricting paths to visit the clusters consecutively, and our goal is to maximize that reduction. Thus we are interested in the following two problems. Given a discrete path planning problem P and a clustering C, the clustered version of the problem P' has the constraint that the path must visit the vertices within each cluster consecutively, in addition to all the constraints of P. Given a graph G = V,E,w and a parameter Γ > 0, find a Γ-clustering C^* such that the search space reduction is maximized. Note that in Definition <ref> overlapping clusters are not permitted. This is necessary for the problem in Definition <ref> to be well defined. In addition, we will see in the following section that clusters that have a separation of Γ_i > 1 cannot overlap. §.§ Solution Quality Bounds In this section, we show that when the graph G is metric and Γ > 1, the solution to the Γ-clustered path planning problem provides a min(2, 1 + 3/2Γ)-factor approximation to the optimal.
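Before turning to the bounds, note that the separation quantities α_i, β_i and Γ_i translate directly into code. A sketch for a possibly asymmetric weight function w, with names of our own choosing; a clustering then meets the separation requirement exactly when every cluster reports Γ_i ≥ Γ:

```python
def cluster_separation(w, V, Vi):
    # alpha_i: cheapest edge crossing the boundary of Vi, in either direction;
    # beta_i: most expensive edge inside Vi; Gamma_i = alpha_i / beta_i.
    out = V - Vi
    alpha = min(min(w(a, b), w(b, a)) for a in Vi for b in out)
    beta = max(w(a, b) for a in Vi for b in Vi if a != b)
    return alpha, beta, alpha / beta
```

For example, for three points 0, 1, 10 on a line with w(a, b) = |a - b| and the cluster {0, 1}, the lightest boundary edge has weight 9 and the heaviest internal edge weight 1, giving Γ_i = 9.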
Given a metric discrete path planning problem P with optimal solution p^* and cost c^*, and a Γ-clustering C = {V_1, V_2, …, V_m } where Γ > 1, the optimal solution (p')^* to the clustered problem P' over the same set of vertices is a solution to P with cost (c')^* ≤min(2, 1 + 3/2Γ) c^*. To prove the main result, we begin by proving the first half of the bound, (c')^* ≤ 2c^*. We do this by using a minimum spanning tree (MST) to construct a feasible solution for P'. Given a metric graph G and a Γ-clustering C with Γ > 1, every MST will have exactly one inter-set edge for each cluster V_i ∈ C. We prove the above result by contradiction. Suppose the MST has at least two inter-set edges connected to V_i. Then there are at least two sets of vertices in V_i that are not connected to each other using intra-set edges. We can then lower the cost of the MST by removing one of these inter-set edges of weight ≥α_i and replacing it with an intra-set edge of weight ≤β_i < α_i. This highlights the contradiction, and thus every MST will have exactly one inter-set edge for each cluster V_i ∈ C. Consider a metric discrete path planning problem P with an optimal solution path p^* and cost c^*. Then given a Γ-clustering C = {V_1, V_2, …, V_m } with Γ > 1, the optimal solution (p')^* for the clustered problem P' over the same set of vertices V[p^*] is a solution to P with cost (c')^* ≤ 2 c^*. To prove the above result, we will use the MST approach described below and in <cit.> to construct a path p' over the set of vertices in V[p^*]. This approach yields a solution p' for P' that has our desired cost bound, c' ≤ 2c^* <cit.>.
The MST procedure is described below. * Find a minimum spanning tree for the vertices V[p^*]. * Duplicate each edge in the tree to create an Eulerian graph. * Find an Eulerian tour of the Eulerian graph. * Convert the tour to a Hamiltonian cycle: if a vertex is visited more than once, after the first visit, create a shortcut from the vertex before to the one after, i.e., create a tour that visits the vertices in the order they first appeared in the tour. What remains is to prove that the above tour is a feasible solution for P'. First we note that the above approach yields a single tour of all the vertices in V[p^*], i.e., there are no disconnected tours. Next we note that Lemma <ref> states that every MST uses exactly one inter-set edge for each cluster V_i ∈ C. Thus when the edges are duplicated and an Eulerian tour is found, there are only two inter-set edges used for each V_i ∈ C. Furthermore, short-cutting the path does not change the number of inter-set edges used by the tour. Thus the final solution p' has only one incoming and one outgoing edge for each cluster V_i ∈ C, and so it is a clustered solution for P' that satisfies the bound, since the approach also yields a solution with cost c' ≤ 2c^*. Next we prove the second half of the bound, (c')^* ≤ (1 + 3/2Γ)c^*, by using Algorithm <ref> to construct a feasible solution p' for P'. Additionally we use a modified graph Ĝ, defined in Definition <ref>, to show that the cost of this solution satisfies our desired bound. Given a feasible path p for P and a cluster V_i ∈ C that is not visited consecutively, then p_i ←deform(p,V_i) does visit V_i consecutively, and any subsequent deformed paths p_i+n+1←deform(p_i+n,V_i+n+1) also visit V_i consecutively, for n ∈ℤ_> 0. By construction V_i is visited consecutively in p_i ←deform(p,V_i). The remaining claim that V_i continues to be visited consecutively is proven by showing that the number of inter-set edges for V_i in the subsequent paths remains the same. We prove this result by contradiction.
Suppose there is a sequencev_a, v_b∈ p_j-1 for some j > i such that v_a, v_b ∈ V_i and v_a is no longer connected to v_b in p_j←deform(p_j-1,V_j); that is, v_a, v_b∉p_j.This would mean that one or more vertices were inserted in between v_a and v_b, thus creating a new inter-set edge. In Algorithm <ref>, this cannot happen in lines 5-7 since the path for the 1^st vertex to the k^th is unchanged. This also cannot happen in lines 8-9 since this part of the algorithm is only connecting vertices within the cluster V_j together. Finally, this cannot happen in lines 11-13, since the path is not changing the order of the appearance of v_a and v_b (no insertions, just deletions).Thus there are no additional inter-set edges created by Algorithm <ref> for cluster V_i, which highlights our contradiction. Therefore all subsequent paths must also visit V_i consecutively. When Algorithm <ref> is applied to p^* iteratively for each V_i ∈ C to generate the solution p', the solution is unique despite the order that deform(p,V_i) was called for all V_i ∈ C. Furthermore the order of the clusters is determined by their first appearance in p.This follows from the method in which Algorithm <ref> reorders the vertices within the tour. Specifically, the order of vertices within each cluster V_i is preserved as deform(p,V_i) is called as is the ordering of the remaining vertices.Consider a feasible path p for P that has 2(n+1) ≥ 4 inter-set edges for V_i such that Γ_i > 1 and n ∈_≥ 0. Then the cost to deform p into p' ←deform(p,V_i) is c' - c ≤ (2n+1)β_i.In this proof we analyze the cost to deform p into p', which is a result of calling p' ←deform(p,V_i) (Algorithm <ref>). There are three types of deformations that result from the algorithm: 1) there are short cuts created within the cluster; 2) there are short cuts created outside of the cluster; and 3) there is a new outgoing edge for the cluster. 
These deformations are illustrated in Figure <ref> and a classification of the edges in the figure are as follows: 1) edges v_3, v_4 and v_6, v_7 are short cuts within the cluster; 2) edges v_10, v_11 and v_12, v_13 are short cuts outside of the cluster; and 3) edge v_8, v_9 is the new outgoing the edge for the cluster.We start by examining the incurred cost to short cut paths within the cluster. Consider a path segment v_a, v_b, v_c, …, v_x, v_y, v_z of p such that v_a is directly connected to v_z in p' with the edge v_a, v_z and v_a, v_z ∈ V_i. The incurred cost of each of these edges is ≤β_i due to the fact that the cost of any intra-set edge has weight ≤β_i. There are n such shortcuts of this nature incurred from performing deform(p,V_i) (n captures the number of extra visits to the cluster), and so the total incurred cost for this type of shortcut is ≤ nβ_i.Next we examine the incurred cost to short cut paths outside of the cluster. Consider a path segment v_a, v_b, v_c, …, v_x, v_y, v_z of p such that v_a is directly connected to v_z in p' with the edge v_a, v_z, v_a, v_z ∉V_i and v_b, v_c, …, v_y ∈ V_i. The incurred cost for each of these short cuts is again ≤β_i. This is due to the metric property of G: The cost of the direct path from v_a to v_z is less than or equal to any path from v_a to v_z, specifically c(v_a, v_z) ≤ c(v_a, v_b) + c(v_b, v_y) + c(v_y, v_z) ≤ c(v_a, v_b) + β_i + c(v_y, v_z). Thus the incurred cost Δ, of this shortcut is bounded by the difference between the cost of the new edges in p' and the removed edges in p^*, namely: Δ = c(v_a, v_z) - c(v_a,v_b) - c(v_y, v_z) ≤ c(v_a, v_b) + β_i + c(v_y, v_z) - c(v_a,v_b) - c(v_y, v_z) ≤β_i There are n such shortcuts of this nature incurred by deform(p,V_i), and so the total incurred cost for this type of shortcut is also ≤ nβ_i.Lastly we examine the incurred cost of the new outgoing edge. 
Consider the path v_a, v_b, v_c, …, v_x, v_y, v_z of p such that v_b, v_c is the first outgoing edge of V_i and v_x, v_y is the last outgoing edge of V_i; thus v_x, v_c is the new outgoing edge. Then due to the metric property we know that c(v_x, v_c) ≤ c(v_x, v_b) + c(v_b, v_c) ≤ β_i + c(v_b, v_c). The incurred cost of this deformation is the difference between the cost of the new edge v_x, v_c and the removed edge v_b, v_c (this edge has not been considered in any previous incurred cost calculation): Δ = c(v_x, v_c) - c(v_b,v_c) ≤ β_i + c(v_b,v_c) - c(v_b,v_c) = β_i. This accounts for all of the incurred costs, and so the total cost to deform p into p' via deform(p,V_i) is c' - c ≤ (2n+1)β_i.

We introduce the modified graph Ĝ in the following definition to aid with our ongoing proof of the bound.

Given a graph G and a clustering C, the modified graph Ĝ is a copy of G with the following modifications: if v_a, v_b is an inter-set edge with v_a ∈ V_i and v_b ∈ V_j, then ĉ(v_a, v_b) = c(v_a, v_b) + (3/2) max(β_i, β_j); otherwise ĉ(v_a, v_b) = c(v_a, v_b), where β_i and β_j are as defined in Definition <ref>.

Consider a feasible path p for P and a cluster V_i such that Γ_i > 1. Then the cost to deform p into p' ← deform(p,V_i) in Ĝ is ĉ' - ĉ ≤ 0.

In this proof we analyze the cost to deform p into p' in Ĝ, which results from a single call p' ← deform(p,V_i) (Algorithm <ref>). From Lemma <ref> we see that the cost to deform p into p' with respect to G is c' - c ≤ (2n+1)β_i, where there are 2(n+1) ≥ 4 inter-set edges for V_i in p. The cost of p in Ĝ is ĉ ≥ c + 2(n+1)(3/2)β_i = c + 3(n+1)β_i, and the cost of p' in Ĝ is ĉ' ≤ c' + 2(3/2)β_i ≤ c + (2n+1)β_i + 3β_i. Thus ĉ' - ĉ ≤ (2n+1)β_i + 3β_i - 3(n+1)β_i = 2nβ_i + β_i + 3β_i - 3nβ_i - 3β_i = β_i - nβ_i, and since n ≥ 1 (otherwise we would not need to deform the path), ĉ' - ĉ ≤ 0.
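To make the deformation concrete, the following is a minimal Python sketch of the reordering performed by deform(p, V_i) as described above; the tour is represented as a vertex list, and the function and variable names are ours, not the paper's. It anchors the cluster at its first visit, preserves the internal order of both the cluster vertices and the remaining vertices, and removes later visits:

```python
def deform(p, cluster):
    """Return p' in which all vertices of `cluster` appear consecutively,
    anchored at the cluster's first appearance in p; the relative order of
    cluster vertices and of non-cluster vertices is preserved."""
    cluster = set(cluster)
    in_c = [v for v in p if v in cluster]            # cluster vertices, in order
    first = next(i for i, v in enumerate(p) if v in cluster)
    prefix = p[:first]                               # unchanged up to the first visit
    suffix = [v for v in p[first:] if v not in cluster]
    return prefix + in_c + suffix

# Toy example on a line metric: cluster {a, b} has beta = 1, alpha = 9.
pos = {'a': 0, 'b': 1, 'x': 10, 'y': 20}
cost = lambda path: sum(abs(pos[u] - pos[v]) for u, v in zip(path, path[1:]))
p = ['a', 'x', 'b', 'y']            # visits the cluster twice; cost 38
p2 = deform(p, {'a', 'b'})          # -> ['a', 'b', 'x', 'y']; cost 20
```

Note that this sketch mirrors only the reordering step; the algorithm in the paper is stated over the graph's edges, and the cost bookkeeping of the lemmas above applies to that formulation.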
Given a metric discrete path planning problem P with optimal solution cost c^*, Γ > 1, and a clustering C = {V_1, V_2, …, V_m}, the optimal solution (p')^* to the clustered problem P' over the same set of vertices is a solution to P with cost (c')^* ≤ (1 + 3/(2Γ)) c^*.

To prove the above result we work with the modified graph Ĝ (defined in Definition <ref>) and use the result from Lemma <ref> to show that there exists a clustered solution in Ĝ whose cost is at most ĉ(p^*). We then relate (c')^* to c^*.

First we show that there exists a clustered solution p' that satisfies ĉ(p') ≤ ĉ(p^*). To find such a solution p' we use Algorithm <ref> to deform p^* into p'. The deform algorithm is called for each V_i ∈ C, in any order, as p_i+1 = deform(p_i, V_i) to form a solution for P' (see Lemma <ref>). For each call of p_i+1 = deform(p_i, V_i) the incurred cost is ĉ(p_i+1) - ĉ(p_i) ≤ 0 (see Lemma <ref>). Thus after the series of calls we have a clustered solution p' satisfying ĉ(p') ≤ ĉ(p^*).

Next we relate c(p^*) to ĉ(p^*) by observing the following for an inter-set edge v_a, v_b ∈ p^*: ĉ(v_a, v_b) = c(v_a, v_b) + (3/2) max(β_i, β_j) = c(v_a, v_b) + (3/2) max(α_i/Γ_i, α_j/Γ_j) ≤ (1 + (3/2) max(1/Γ_i, 1/Γ_j)) c(v_a, v_b) ≤ (1 + 3/(2Γ)) c(v_a, v_b). This inequality holds for each edge v_a, v_b ∈ p^*, inter-set edge or not. Therefore ĉ(p^*) ≤ (1 + 3/(2Γ)) c(p^*). Due to the construction of Ĝ and the fact that ĉ(p') ≤ ĉ(p^*), we deduce that (c')^* ≤ ĉ(p^*), since (c')^* ≤ c(p') ≤ ĉ(p') ≤ ĉ(p^*). Therefore (c')^* ≤ (1 + 3/(2Γ)) c^*.

The proof of the main result in Theorem <ref> follows directly from Lemma <ref> and Lemma <ref>.

We have not fully characterized the tightness of the bounds from Lemma <ref> and Lemma <ref>, but we have a lower bound, which is illustrated in Figure <ref>. In this example, the graph is scalable (we can add vertices three at a time).
The clustered and non-clustered solutions for this example have a cost relation of lim_|V| → ∞ (c')^* = (1 + 2/(2Γ + 1)) c^*. We also provide the graph in Figure <ref> to show how the gap changes as Γ varies.

The bound for this graph is obtained as follows. Let n = |V|/3 - 1 for |V|/3 ∈ ℤ_>0. Then the optimal non-clustered solution cost is (recall that Γ = α/β):

p^* = v_1, v_2, v_3, v_4, v_6, v_5, …
c^* = α + β + α + n(2α + β) = (n+1)(2α + β) = (n+1)(2 + 1/Γ)α

The optimal clustered solution is:

(p')^* = v_1, v_2, v_3, v_5, v_6, …, v_4, v_7, …
(c')^* = α + β + 2nβ + α + n(2α + β) = 2α + 3β - 2β + n(2α + 3β) = (n+1)(2α + 3β) - 2β = ((n+1)(2 + 3/Γ) - 2/Γ)α

As the instance grows (|V| → ∞, which implies n → ∞), we have the following:

lim_n → ∞ (c')^* = [(n+1)(2 + 3/Γ) - 2/Γ] / [(n+1)(2 + 1/Γ)] c^* = [(2 + 3/Γ)/(2 + 1/Γ)] c^* = (1 + 2/(2Γ + 1)) c^*

§.§ Finding Clusters

Before we describe our method for finding optimal clusterings, we describe a few properties of clusters.

§.§.§ Overlap

A special property of clusters is that when Γ > 1, there are no overlapping clusters. Specifically, there are no two clusters V_i and V_j with Γ_i > 1 and Γ_j > 1 that have a non-empty intersection unless one cluster is a subset of the other.

Given a graph G and clusters V_i and V_j with separations Γ_i > 1 and Γ_j > 1 (defined in Definition <ref>), V_i ∩ V_j is non-empty if and only if V_i ⊆ V_j or V_j ⊆ V_i.

We prove the above result by contradiction. Before we begin, recall that edges cut by the cluster V_i have edge weights ≥ α_i and edges within the cluster have edge weights ≤ β_i. Let us assume that V_i and V_j overlap and are not nested. Without loss of generality let β_j ≤ β_i. Then there exists an edge v_a, v_b with weight w(v_a, v_b) ≤ β_j for v_a ∈ V_i ∩ V_j and v_b ∈ V_j ∖ (V_i ∩ V_j). Since this edge does not exist within the cluster V_i but is cut by V_i, it must be the case that w(v_a, v_b) ≥ α_i. However, this edge does exist within the cluster V_j, and so α_i ≤ β_j, since w(v_a, v_b) ≤ β_j.
This result highlights the contradiction: Γ_i = α_i/β_i ≤ β_j/β_i ≤ β_i/β_i = 1.

§.§.§ Uniqueness

The non-overlapping property of clusters with Γ > 1 implies that there exists a unique maximal clustering C^* (more clusters equals more reduction in the search space size). This result follows from the simple property that if a cluster V_i exists and is not in our clustering C^*, then we must be able to add it to C^* to get additional search space reductions.

Given a graph G and a parameter Γ > 1, the problem of finding a clustering C^* that maximizes the search space reduction has a unique solution C^*. Furthermore, C^* contains all clusters V_i with separation Γ_i ≥ Γ.

We use contradiction to prove that C^* is unique. Suppose there exist two different clusterings C_1 and C_2 that maximize the search space reductions for a given Γ > 1. Then there is a cluster V_1 that is in C_1 and not in C_2 (or vice versa). This implies that either V_1 is somehow incompatible with C_2, which we know is not the case due to Lemma <ref> (i.e., clusters do not overlap unless one is a subset of the other), or this cluster can be added to C_2. This is a contradiction, which proves the first result. The second result follows directly, since adding a cluster can only further reduce the search space size of the problem.

Given a graph G and two clustering parameters Γ_i > Γ_j > 1, the optimal clustering C^*_j for Γ_j is a superset of any clustering for Γ_i.

We prove this result by contradiction. Suppose we have a clustering C_i for Γ_i and the optimal clustering C_j^* for Γ_j, for which there exists a cluster V_1 in C_i that is not in C_j^*. Then by the definition of clusterings, V_1 must satisfy Γ_1 ≥ Γ_i, and since Γ_i > Γ_j, we have Γ_1 ≥ Γ_j, which makes V_1 a cluster that should be added to C_j^*.
This contradicts Proposition <ref> and thus proves the result.

There exists a minimum Γ^* > 1 whose optimal clustering C^* is a superset of all other clusterings for Γ > 1.

This result follows directly from Corollary <ref> when we consider Γ^* to be the smallest ratio of edge weights greater than one in the graph G (find the two edges that give us the smallest such ratio). Then every cluster with separation parameter Γ_i > 1 must also satisfy Γ_i ≥ Γ^*, since Γ_i is itself a ratio of existing edge weights in the graph G. Thus by Corollary <ref>, C^* must be a superset of any other clustering for some Γ > 1.

§.§.§ An Approach For Finding Clusters

Given an input Γ > 1, Algorithm <ref> computes the optimal clustering, i.e., the clustering with maximum search space reduction. Informally, the algorithm deletes edges in the graph from largest to smallest (lines 6-7) to look for clusters. It uses a minimum spanning tree (MST) to keep track of when the graph becomes disconnected, and when it does, the disconnected components are tested to see if they qualify as clusters (lines 9-11). Regardless, any non-trivially sized disconnected component (cluster or not) is added back to the queue (lines 13-14), so that it can be broken and tested again in order to find all nested clusters.

Given G and a Γ > 1, Algorithm <ref> finds the optimal clustering C^* in O(|V|^3) time.

The proof is twofold: first we show that Algorithm <ref> runs in polynomial time, and second we show that it finds the optimal clustering. For this proof let n = |V|.

Minimum spanning trees can be found in O(n^2) time <cit.> (line 3). The rest of the algorithm modifies the MST from line 3, which originally has n - 1 edges, and so the while loop for the rest of the algorithm can run at most n times (we cannot remove more than n - 1 edges). Finding the largest edge(s) and removing them in lines 6 and 7 takes O(n) time. Creating the induced subgraph and finding the maximum edge cost in lines 9 and 10 takes O(n^2) time. Testing if the subgraph is a clique (line 11) takes O(n^2) time.
Thus lines 1 to 3 run in O(n^2) time and lines 4 to 14 run in O(n · n^2) time, which means the entire algorithm runs in O(n^3), i.e., O(|V|^3), time.

Next we show that Algorithm <ref> finds the optimal clustering. To do so, we show that V[m'] does indeed represent a cluster with separation Γ' ≡ α/β (line 11) as defined by Definition <ref>, and we finish by showing that the algorithm does not omit any candidate clusters, thus proving the theorem by leveraging Proposition <ref>.

Let us start by understanding how to find clusters and how MSTs are used in the algorithm. A cluster with separation Γ_i > 1 is a connected subgraph of G whose intra-edge weights are less than the inter-edge weights connecting it to the rest of the graph. Thus one method of searching for clusters is to delete all edges in the graph of weight ≥ α; if there is then a disconnected subgraph in G, then and only then is it a possible cluster. We can use MSTs to keep track of these deleted edges more efficiently. By definition, an MST is a tree that connects the vertices of the graph with minimum total edge weight. If we remove edges of weight ≥ α from the graph to search for disconnected subgraphs, then the graph is disconnected if and only if the MST is disconnected. This follows by considering the cut needed to disconnect two such subgraphs (the minimum-weight edge crossing the cut has the same weight as the corresponding edge cut in the MST). Thus we can instead search for clusters by disconnecting the MST. Furthermore, if we disconnect the MST by incrementally removing the largest edge(s), of weight α, from the tree, then we know that the induced subgraphs of the newly disconnected trees have at least one inter-set edge of weight α. Line 9 of the algorithm creates the induced subgraph G' of a disconnected tree (m' ∈ M'), line 10 measures its β, and if G' is a clique and meets the separation criterion (Γ' ≥ Γ), then V[m'] is indeed a cluster (by definition) and is added to the clustering C in line 12.

Next we show that the algorithm does not omit any candidate clusters.
We have already argued that only disconnected trees need to be considered when searching for clusters; thus what is left to show is that the algorithm tests every possible value of α to disconnect the tree. This is true since it considers every edge in the original MST: every disconnected tree of size two or more is added back to M in line 14 until every edge originally in the MST has been removed in line 7. Therefore the algorithm finds all of the candidate clusters, and Proposition <ref> tells us that the result is the unique optimal clustering for the given Γ > 1.

§.§ Search Space Reduction

The last remaining question is to determine how much the clustering approach reduces the search space. In general, this is difficult to answer since it depends on the particular constraints of the path planning problem. However, to get an understanding of the search space reduction, consider the example of the TSP with a non-nested clustering (nesting would result in further search space reductions). Let r be the ratio of the non-clustered search space size N_0 to the clustered search space size N_1 for a graph with vertices V and a clustering C = { V_1, V_2, …, V_m }. Then the ratio (derived by counting the number of solutions) is as follows:

r ≡ N_0/N_1 = |V|! / (m! ∏_i=1^m |V_i|!)

To further simplify the ratio, consider the case where all clusters are equally sized (|V_i| = |V_j| for all V_i, V_j ∈ C):

Given a graph G and a clustering C = {V_1, V_2, …, V_m} such that |V_i| = |V_j| for all i, j ∈ [1, m], the ratio r of the search space size of the original problem to that of the clustered problem is [The big-Ω notation states that for large enough |V| the ratio r is at least k · (m!)^(x-1) for some constant k.] r = Ω((m!)^(x-1)), where x = |V|/m.

The number of solutions for the (directed) TSP is N_0 = |V|!, and the number of solutions for the clustered problem is N_1 = m! ((|V|/m)!)^m (these results come from counting the number of possible solutions). First let us bound |V|! = (mx)!:

|V|!
= ∏_i=m^1 ∏_j=0^x-1 (ix - j) = ∏_i=m^1 ∏_j=0^x-1 i (x - j/i) ≥ ∏_i=m^1 ∏_j=0^x-1 i (x - j) = (∏_i=m^1 i^x)(∏_j=0^x-1 (x - j))^m = (m!)^x (x!)^m

We now use the fact that |V|! ≥ (m!)^x (x!)^m to prove the main result:

r ≡ N_0/N_1 = |V|! / (m! (x!)^m) ≥ (m!)^x (x!)^m / (m! (x!)^m) = (m!)^(x-1)

Thus r = Ω((m!)^(x-1)). To get an idea of the magnitude of r, consider an instance of size |V| = 100 divided into four equal clusters. The clustered problem has a feasible solution space of size N_1 ≈ 1.49 × 10^-56 N_0, where N_0 is the feasible solution space size of the non-clustered problem. However, N_1 is still extremely large, at about 1.39 × 10^102.

§ EXPERIMENTS

In this section we present experimental results that demonstrate the effectiveness of clustering for solving discrete path planning problems. We focus on metric TSP instances drawn from the established TSPLIB library <cit.>, which contains a variety of problem types (the first portion of the instance name indicates the type and the number indicates the size). The tests were conducted with Γ = 1.000001, for which Theorem <ref> implies that the solution to the clustered problem gives a min(2, 1 + 3/(2Γ))-factor approximation to the TSP instance. However, we will see that the observed gap in performance is considerably smaller.

To test the effectiveness of the clustering method, we perform clustering on each instance and record both the runtime and the number of clusters found. This gives us an idea of whether or not instances from TSPLIB have a structure that can be exploited by clustering. Then we use standard integer programming formulations for both the original TSP instance and the general clustered version of the instance. We solve each instance three times using the solver Gurobi <cit.> and record the average solver time and solution quality. All instances were given a time budget of 900 seconds, after which they were terminated and the best solution found in that run was output.
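The magnitudes quoted in the search space discussion above are straightforward to verify numerically; the following quick check (variable names are ours) computes the exact counts for |V| = 100 with m = 4 equally sized clusters and confirms the Ω-bound:

```python
import math

V, m = 100, 4
x = V // m                                           # 25 vertices per cluster
N0 = math.factorial(V)                               # non-clustered solution count, |V|!
N1 = math.factorial(m) * math.factorial(x) ** m      # clustered solution count, m!(x!)^m

ratio = N1 / N0                                      # ~1.49e-56, as quoted above
r = N0 // N1                                         # exact: N1 divides N0 here
assert r >= math.factorial(m) ** (x - 1)             # r = Omega((m!)^(x-1))
```

The exact integer division is valid in this instance because |V|! / (m! (x!)^m) counts the unordered partitions of the vertices into equal-size blocks, which is an integer.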
The instances reported in this paper are the 37 instances that were solved to within 50% of optimal and that contain clusters. The remaining instances are not reported, as they would have required more than 900 seconds to provide a meaningful comparison. Additionally, we demonstrate clustering on an office environment to gain insight into the structure of clusters.

§.§ Integer Programming Formulation

The clustering algorithm was implemented in Python and run on an Intel Core i7-6700, 3.40GHz with 16GB of RAM. The integer programming (IP) expressions of the problem and the clustered problem are solved on the same computer with Gurobi, also accessed through Python. The results of both of these approaches are found in Table <ref> and summarized in Figure <ref>.

The following is the IP expression used for the clustered and non-clustered path planning problems, where each variable e_a,b ∈ {0,1}:

minimize ∑_a=1^|V| ∑_b=1^|V| e_a,b w(v_a, v_b)
subject to ∑_b=1^|V| e_a,b = 1, for each a ∈ {1, 2, …, |V|}
∑_a=1^|V| e_a,b = 1, for each b ∈ {1, 2, …, |V|}
∑_∀ v_a ∈ V_i, v_b ∉ V_i e_a,b = 1, for each i ∈ {1, 2, …, m}
∑_∀ v_a ∉ V_i, v_b ∈ V_i e_a,b = 1, for each i ∈ {1, 2, …, m}
∑_e_a,b ∈ E' e_a,b ≤ |E'| - 1, for each subtour E'

The formulation was adapted from an IP formulation found in <cit.>, where the Boolean variables e_a,b represent the inclusion/exclusion of the edge v_a, v_b from the solution. Constraints <ref> and <ref> restrict the incoming and outgoing degree of each vertex to be exactly one (each vertex is visited exactly once). Similarly, constraints <ref> and <ref> restrict the incoming and outgoing degree of each cluster to be exactly one (these constraints are only present in the clustered version of the problem). Constraint <ref> is the subtour elimination constraint, which is added to the formulation lazily, as conflicts occur, due to the exponential number of these constraints. For each instance, we seed the solver with a random initial feasible solution.
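While the experiments use Gurobi on the IP above, the clustered-versus-unclustered gap can also be illustrated by brute force on a toy instance. The sketch below (the instance and helper names are ours) enumerates all open paths over four points on a line and checks the Theorem's (1 + 3/(2Γ)) bound on this one example only, not in general:

```python
from itertools import permutations

pos = {'a': 0.0, 'b': 1.0, 'u': 10.0, 'v': 12.0}   # points on a line (metric)
cluster = {'a', 'b'}                               # beta = 1, alpha = 9, Gamma = 9

def cost(path):
    return sum(abs(pos[p] - pos[q]) for p, q in zip(path, path[1:]))

def consecutive(path):
    """True if the cluster's vertices appear contiguously in the path."""
    idx = [i for i, w in enumerate(path) if w in cluster]
    return max(idx) - min(idx) == len(idx) - 1

c_star = min(cost(p) for p in permutations(pos))                      # unconstrained optimum
c_clustered = min(cost(p) for p in permutations(pos) if consecutive(p))

gamma = 9.0 / 1.0
assert c_clustered <= (1 + 1.5 / gamma) * c_star   # Theorem's bound holds here
```

On this instance the clustered and unconstrained optima coincide; the scalable family in Figure <ref> is what shows the gap can approach the bound.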
§.§ Results

Figure <ref> shows the ratio of time spent finding clusters with respect to total solver time. In all instances this time is less than 6%, and in most it is less than 1%. Additionally, as the total solver time grows, the ratio gets smaller (Time02 is for instances that use the full 900 seconds). The approach is able to find clusters on 63 out of 70 instances. In total it found 3700 non-trivial clusters (i.e., clusters with |V_i| ≥ 2), which is promising since the TSPLIB library contains a variety of different applications.

Figure <ref> and Table <ref> show that, for instances that do not time out, the solution path costs found by the clustering approach are close to optimal (Cost01 is close to 1 in the figure, and the instances from burma14 to pr107 are almost all within 1% error, as shown in the table). Furthermore, when the solver starts to time out (exceeds its 900 second time budget), the solution quality of the clustered approach starts to surpass that of the non-clustered approach (as shown by Cost02 in the figure). We attribute this trend to the fact that the clustered approach needs less time to search its feasible solution space and is thus able to find better quality solutions faster than its counterpart. In instances gr202, kroB200, pr107, tsp225, and gr229 the clustering approach does very well compared to the non-clustering approach, enabling the solver to find solutions within 1% of optimal for the first three instances and within 6% of optimal for the latter two, while the non-clustered approach exceeds 8% of optimal on the first three instances and 28% of optimal on the latter two.

On average the clustered approach is more efficient than the non-clustered approach, which is highlighted in Figure <ref> and Table <ref>. For the results that do not time out (Time01 in the figure) we often save more than 50% of the computational time while maintaining near optimal solution quality (Cost01 in the figure).
For instances that require most or all of the 900 seconds (harder instances) the time savings can be quite large. This is particularly clear from the table when we compare the easy instances, burma14 up to st70, which have an average time savings of around 60%, to the harder instances kroA100, gr96, bier127, and ch130, which all have time savings of more than 95%. For the instances where both approaches time out (Time02 in the figure), there are no time savings, since both solvers use the full 900 seconds.

From these results we can see that when the clustered approach does not time out we usually save time, and when it does time out we often find better quality solutions than the non-clustered approach. It is worth emphasizing that we are not necessarily recommending solving TSP instances in this manner. We are simply using the TSP as an illustrative example to show how clustering can be used to reduce computation time in a given solver. Many discrete path planning problems are solved with IP solvers, and as such we hope our results provide some insight as to how clustering would work on other path planning problems. In general, unless the solver approach (IP or not) takes advantage of the clustering, there is no guarantee that a computational saving will be achieved.

§.§ Clustering Real-World Environments

As shown in Figure <ref>, we have also performed clustering on real-world environments. The figure shows the floor plan for a portion of one floor of the Engineering 5 building at the University of Waterloo. Red dots denote the locations of desks within the environment. We encoded the environment as a graph, where there is a vertex for each red dot and edge weights between vertices are given by the length of the shortest axis-aligned obstacle-free path between the locations (obstacle-free Manhattan distances). The figure shows the results of clustering for Γ = 1.000001.
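The obstacle-free Manhattan distances used as edge weights above can be computed with a breadth-first search over an occupancy grid; a minimal sketch follows (the grid layout and function name are hypothetical, not from the paper):

```python
from collections import deque

def manhattan_bfs(grid, start, goal):
    """Shortest 4-connected path length between two free cells;
    grid is a list of strings where '#' marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    q = deque([(start, 0)])
    seen = {start}
    while q:
        (r, c), d = q.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append(((nr, nc), d + 1))
    return None  # goal unreachable

# A wall in the middle row forces a detour around it.
grid = ["....",
        ".##.",
        "...."]
d = manhattan_bfs(grid, (1, 0), (1, 3))   # detour length 5, not the direct 3
```

In an unobstructed grid this BFS distance coincides with the plain Manhattan distance, which is why the paper refers to these weights as obstacle-free Manhattan distances.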
We see that locations that are close together are formed into clusters unless there are other vertices in close proximity. A path planning problem in this environment could be robotic mail delivery, where a subset of locations must be visited each day. The clusters could then be visited together (or visited using the same robot).

§ CONCLUSION

In this paper we presented a new clustering approach. We have shown how it can be used to approximate discrete path planning problems to within a constant factor of min(2, 1 + 3/(2Γ)), more efficiently than solving the original problem. We verify these findings with a set of experiments that show, on average, a time savings and a solution quality that is closer to the optimal solution than it is to the bound. For future directions we will investigate other path planning applications, including online path planning with dynamic environments.

smith2011optimal S. L. Smith, J. Tůmová, C. Belta, and D. Rus, “Optimal path planning for surveillance with temporal-logic constraints,” The International Journal of Robotics Research, vol. 30, no. 14, pp. 1695–1708, 2011.

gouveia2015load L. Gouveia and M. Ruthmair, “Load-dependent and precedence-based models for pickup and delivery problems,” Computers & Operations Research, vol. 63, pp. 56–71, 2015.

jain1988algorithms A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. Prentice-Hall, Inc., 1988.

karypis1999chameleon G. Karypis, E.-H. Han, and V. Kumar, “Chameleon: Hierarchical clustering using dynamic modeling,” Computer, vol. 32, no. 8, pp. 68–75, 1999.

kernighan1970efficient B. W. Kernighan and S. Lin, “An efficient heuristic procedure for partitioning graphs,” Bell System Technical Journal, vol. 49, no. 2, pp. 291–307, 1970.

guha2000rock S. Guha, R. Rastogi, and K. Shim, “Rock: A robust clustering algorithm for categorical attributes,” Information Systems, vol. 25, no. 5, pp. 345–366, 2000.

katselis2016clustering D.
Katselis and C. L. Beck, “Clustering fully and partially observable graphs via nonconvex optimization,” in American Control Conference, 2016, pp. 4930–4935.

blondel2008fast V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, “Fast unfolding of communities in large networks,” Journal of Statistical Mechanics: Theory and Experiment, vol. 2008, no. 10, p. P10008, 2008.

berkhin2006survey P. Berkhin, “A survey of clustering data mining techniques,” in Grouping Multidimensional Data. Springer, 2006, pp. 25–71.

meyerhenke2009new H. Meyerhenke, B. Monien, and T. Sauerwald, “A new diffusion-based multilevel algorithm for computing graph partitions,” Journal of Parallel and Distributed Computing, vol. 69, no. 9, pp. 750–761, 2009.

gao2010kernel S. Gao, I. W.-H. Tsang, and L.-T. Chia, “Kernel sparse representation for image classification and face recognition,” in European Conference on Computer Vision, 2010, pp. 1–14.

jin2012multi X. Jin, S. Gupta, J. M. Luff, and A. Ray, “Multi-resolution navigation of mobile robots with complete coverage of unknown and complex environments,” in American Control Conference, 2012, pp. 4867–4872.

karypis1998fast G. Karypis and V. Kumar, “A fast and high quality multilevel scheme for partitioning irregular graphs,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 359–392, 1998.

backstrom2014automaton C. Bäckström, A. Jonsson, and P. Jonsson, “Automaton plans,” Journal of Artificial Intelligence Research, vol. 51, no. 1, pp. 255–291, 2014.

levihn2013foresight M. Levihn, L. P. Kaelbling, T. Lozano-Perez, and M. Stilman, “Foresight and reconsideration in hierarchical planning and execution,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 224–231.

das2014mapping A. Das, M. Diu, N. Mathew, C. Scharfenberger, J. Servos, A. Wong, J. S. Zelek, D. A. Clausi, and S. L.
Waslander, “Mapping, planning, and sample detection strategies for autonomous exploration,” Journal of Field Robotics, vol. 31, no. 1, pp. 75–106, 2014.

ryan2008exploiting M. R. K. Ryan, “Exploiting subgraph structure in multi-robot path planning,” Journal of Artificial Intelligence Research, pp. 497–542, 2008.

chevalier2009comparison C. Chevalier and I. Safro, “Comparison of coarsening schemes for multilevel graph partitioning,” in International Conference on Learning and Intelligent Optimization, 2009, pp. 191–205.

karp1977probabilistic R. M. Karp, “Probabilistic analysis of partitioning algorithms for the traveling-salesman problem in the plane,” Mathematics of Operations Research, vol. 2, no. 3, pp. 209–224, 1977.

haxhimusa2009approximative Y. Haxhimusa, W. G. Kropatsch, Z. Pizlo, and A. Ion, “Approximative graph pyramid solution of the E-TSP,” Image and Vision Computing, vol. 27, no. 7, pp. 887–896, 2009.

applegate2006traveling D. L. Applegate, R. E. Bixby, V. Chvatal, and W. J. Cook, The Traveling Salesman Problem: A Computational Study. Princeton University Press, 2006.

noon1993efficient C. E. Noon and J. C. Bean, “An efficient transformation of the generalized traveling salesman problem,” Information Systems and Operational Research (INFOR), vol. 31, no. 1, pp. 39–44, 1993.

korte2012combinatorial B. Korte and J. Vygen, Combinatorial Optimization. Springer, 2012, vol. 2.

prim1957shortest R. C. Prim, “Shortest connection networks and some generalizations,” Bell System Technical Journal, vol. 36, no. 6, pp. 1389–1401, 1957.

reinelt1991tsplib G. Reinelt, “TSPLIB–a traveling salesman problem library,” ORSA Journal on Computing, vol. 3, no. 4, pp. 376–384, 1991.

optimization2012gurobi Gurobi Optimization et al., “Gurobi optimizer reference manual,” 2012. [Online]. Available: <http://www.gurobi.com>

gouveia1999asymmetric L. Gouveia and J. M.
Pires, “The asymmetric travelling salesman problem and a reformulation of the Miller–Tucker–Zemlin constraints,” European Journal of Operational Research, vol. 112, no. 1, pp. 134–146, 1999.
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult. To date, the vast majority of successful RNN architectures alleviate this problem using nearly-additive connections between states, as introduced by long short-term memory (LSTM). We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past. We show that MIST RNNs 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and 3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies.

§ INTRODUCTION

Recurrent neural networks <cit.> are a powerful class of neural networks that are naturally suited to modeling sequential data. For example, in recent years alone, RNNs have achieved state-of-the-art performance on tasks as diverse as machine translation <cit.>, speech recognition <cit.>, generative image modeling <cit.>, and surgical activity recognition <cit.>.

These successes, and the vast majority of other RNN successes, rely on a mechanism introduced by long short-term memory <cit.>, which was designed to alleviate the so-called vanishing gradient problem <cit.>. The problem is that gradient contributions from events at time t - τ to a loss at time t diminish exponentially fast with τ, thus making it extremely difficult to learn from distant events (see Figures <ref> and <ref>).
LSTM alleviates the problem using nearly-additive connections between adjacent states, which help push the base of the exponential decay toward 1. However, LSTM in no way solves the problem, and in many cases still fails to learn long-term dependencies (see, e.g., <cit.>).

NARX[The acronym NARX stems from Nonlinear AutoRegressive models with eXogeneous inputs.] RNNs <cit.> offer an orthogonal mechanism for dealing with the vanishing gradient problem, by allowing direct connections, or delays, from the distant past. However, NARX RNNs have received much less attention in literature than LSTM, which we believe is for two reasons. First, as previously introduced, NARX RNNs have only a small effect on vanishing gradients, as they reduce the exponent of the decay by only a factor of n_d, the number of delays. Second, as previously introduced, NARX RNNs are extremely inefficient, as both parameter counts and computation counts grow by the same factor n_d.

In this paper, we introduce MIxed hiSTory RNNs (MIST RNNs), a novel NARX RNN architecture which 1) exhibits superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) improves performance substantially over LSTM on tasks requiring very long-term dependencies; and 3) remains efficient in parameters and computation, requiring even fewer of both than LSTM for a fixed number of hidden units.
Importantly, MIST RNNs reduce the decay's exponent by a factor of 2^n_d - 1; see Figure <ref>.

§ BACKGROUND AND RELATED WORK

Recurrent neural networks, as commonly described in literature, take on the general form

h_t = f(h_t-1, x_t, θ)

which computes a new state h_t in terms of the previous state h_t-1, the current input x_t, and some parameters θ (which are shared over time). One of the earliest variants, now known to be especially vulnerable to the vanishing gradient problem, is that of simple RNNs <cit.>, described by

h_t = tanh(W_h h_t-1 + W_x x_t + b)

In this equation and elsewhere in this paper, all weight matrices W and biases b collectively form the parameters θ to be learned, and tanh is always written explicitly.

Long short-term memory <cit.>, the most widely-used RNN architecture to date, was specifically introduced to address the vanishing gradient problem. The term LSTM is often overloaded; we refer to the variant with forget gates and without peephole connections, which performs similarly to more complex variants <cit.>:

f_t = σ(W_fh h_t-1 + W_fx x_t + b_f)
i_t = σ(W_ih h_t-1 + W_ix x_t + b_i)
o_t = σ(W_oh h_t-1 + W_ox x_t + b_o)
c̃_t = tanh(W_ch h_t-1 + W_cx x_t + b_c)
c_t = f_t ⊙ c_t-1 + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)

Here σ(·) denotes the element-wise sigmoid function and ⊙ denotes element-wise multiplication. f_t, i_t, and o_t are referred to as the forget, input, and output gates, which can be interpreted as controlling how much we reset, write to, and read from the memory cell c_t. LSTM has better gradient properties than simple RNNs (see Figure <ref>) because of the mechanism in Equation <ref>, which introduces a path between c_t-1 and c_t which is modulated only by the forget gate. We also remark that gated recurrent units <cit.> alleviate the vanishing gradient problem using this exact same idea.

NARX RNNs <cit.> also address the vanishing gradient problem, but using a mechanism that is orthogonal to (and possibly complementary to) that of LSTM. This is done by allowing delays, or direct connections from the past.
NARX RNNs in their general form are described by

h_t = f(h_t-1, h_t-2, …, x_t, x_t-1, …, θ)

but the literature typically assumes the specific variant explored in <cit.>,

h_t = tanh( [ ∑_d=1^n_d W_d h_t-d ] + W_x x_t + b )

which we refer to as simple NARX RNNs.

Note that simple NARX RNNs require approximately n_d times as much computation and n_d times as many parameters as their simple-RNN counterpart (with n_d = 1), which greatly hinders their applicability in practice. To our knowledge, this drawback holds for all NARX RNN variants before MIST RNNs. For example, in <cit.>, higher-order recurrent neural networks (HORNNs) are defined precisely as simple NARX RNNs, and every variant in the paper suffers from this exact same problem. And in <cit.>, a simple NARX RNN architecture is defined that is limited to having precisely two delays with non-zero weights. This way, at the expense of having fewer, longer paths to the past, parameter and computation counts are only doubled.

The previous work that is most similar to ours is that of Clockwork RNNs <cit.>, which split weights and hidden units into partitions, each with a distinct period. When it is not a partition's time to tick, its hidden units are passed through unchanged, and so Clockwork RNNs in some ways mimic NARX RNNs. However, Clockwork RNNs differ in two key ways. First, Clockwork RNNs sever high-frequency-to-low-frequency paths, thus making it difficult to learn long-term behavior that must be detected at high frequency (for example, learning to depend on quick motions from the past for activity recognition). Second, Clockwork RNNs require hidden units to be partitioned a priori, which in practice is difficult to do in any meaningful way. NARX RNNs (and in particular MIST RNNs) suffer from neither of these drawbacks.

Many other approaches have also been proposed to capture long-term dependencies.
Notable approaches include maintaining a generative model over inputs and learning to process only unexpected inputs <cit.>, operating explicitly at multiple time scales <cit.>, Hessian-free optimization <cit.>, using associative or explicit memory <cit.>, and initializing or restricting weight matrices to be orthogonal <cit.>.

§ THE VANISHING GRADIENT PROBLEM IN THE CONTEXT OF NARX RNNS

In <cit.>, gradient decompositions and sufficient conditions for vanishing gradients are presented for simple RNNs, which contain one path between times t - τ and t. Here, we use the chain rule for ordered derivatives <cit.> to connect gradient components to paths and edges, which in turn provides a simple extension of the results from <cit.> to general NARX RNNs. We remark that we rely on slightly overloaded notation for clarity, as otherwise notation becomes cumbersome (see <cit.>).

We begin by disambiguating notation, as the symbol ∂ is routinely overloaded in the literature. Consider the Jacobian of f(x, g(x)) with respect to x. We let df/dx denote the Jacobian of the composite map x ↦ f(x, g(x)), a collection of full derivatives, and we let ∂f/∂x denote the Jacobian of f with its second argument g held fixed, a collection of partial derivatives. This lets us write the ordinary chain rule as df/dx = ∂f/∂x + (∂f/∂g)(dg/dx). Note that this notation is consistent with <cit.>, but is the exact opposite of the convention used in <cit.>.

§.§ The Chain Rule for Ordered Derivatives

Consider an ordered system of n vectors v_1, v_2, …, v_n, where each is a function of all previous:

v_i ≡ v_i(v_i-1, v_i-2, …, v_1),  1 ≤ i ≤ n

The chain rule for ordered derivatives expresses the full derivatives dv_i/dv_j for any j < i in terms of the full derivatives that relate v_i to all previous v_k:

dv_i/dv_j = ∑_i ≥ k > j (dv_i/dv_k)(∂v_k/∂v_j),  j < i

§.§ Gradient Decomposition for General NARX RNNs

Consider NARX RNNs in their general form (Equation <ref>), which we remark encompasses other RNNs such as LSTM as special cases.
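The chain rule for ordered derivatives stated above can be checked numerically on a tiny ordered system. This is an illustrative sketch, not from the paper; the scalar functions v_2 = v_1^2 and v_3 = sin(v_1) + 3 v_2 are arbitrary choices:

```python
import numpy as np

# Ordered system: v1 = x, v2 = v1**2, v3 = sin(v1) + 3 * v2.
def v3_of_x(x):
    v1 = x
    v2 = v1 ** 2
    return np.sin(v1) + 3.0 * v2

def ordered_chain_rule(x):
    """d v3 / d v1 assembled as
    sum over i >= k > j of (d v_i / d v_k)(partial v_k / partial v_j)."""
    v1 = x
    dv3_dv3 = 1.0  # base case: identity
    dv3_dv2 = 3.0  # v2 enters v3 only directly, so full = partial here
    # k = 3 term: (dv3/dv3) * (partial v3 / partial v1) = cos(v1)
    # k = 2 term: (dv3/dv2) * (partial v2 / partial v1) = 3 * 2*v1
    return dv3_dv3 * np.cos(v1) + dv3_dv2 * (2.0 * v1)
```

A central-difference approximation of d v3 / d x agrees with the assembled derivative cos(x) + 6x, confirming that summing (full derivative) × (partial derivative) over intermediate vectors recovers the full derivative.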
Also, for simplicity, consider the situation that is most often encountered in practice, where the loss at time t is defined in terms of the current state h_t and its own parameters θ_l (which are independent of θ):

l_t = f_l(h_t, θ_l)

(This is not necessary, but we proceed this way to make the connection with RNNs in practice evident. For example, f_l may be a linear transformation with parameters θ_l followed by squared-error loss.) Then the Jacobian (or transposed gradient) with respect to θ can be written as

dl_t/dθ = (∂f_l/∂h_t)(dh_t/dθ)

because the additional term (∂f_l/∂θ_l)(dθ_l/dθ) is 0. Now, by letting v_1 = θ, v_2 = h_1, v_3 = h_2, and so on in Equations <ref> and <ref>, we immediately obtain

dh_t/dθ = ∑_τ=0^t-1 (dh_t/dh_t-τ)(∂h_t-τ/∂θ)

because all of the partials ∂x_t-τ/∂θ are 0.

Equations <ref> and <ref> extend Equations 3 and 4 of <cit.> to general NARX RNNs, which encompass simple RNNs, LSTM, etc., as special cases. This decomposition breaks dh_t/dθ into its temporal components, making it clear that the spectral norm of dh_t/dh_t-τ plays a major role in how h_t-τ affects the final gradient (dl_t/dθ)^T. In particular, if the norm of dh_t/dh_t-τ is extremely small, then h_t-τ has only a negligible effect on the final gradient, which in turn makes it extremely difficult to learn from events that occurred at t - τ.

§.§ Connecting Gradient Components to Paths and Edges

Equations <ref> and <ref>, along with the chain rule for ordered derivatives, let us connect gradient components to paths and edges, which is useful for a) gaining insights into various architectures and b) solidifying intuitions from backpropagation through time which suggest that short paths between t - τ and t facilitate gradient flow. Here we provide an overview of the main idea; please see the appendix for a full derivation.

By applying the chain rule for ordered derivatives to expand dh_t/dh_t-τ in Equation <ref>, we obtain a sum over τ terms.
However, each term involves a partial derivative between h_t and a prior hidden state, and thus all of these terms are 0 with the exception of those states that share an edge with h_t. Now, for each term, we can repeat this process. This then yields non-zero terms only for hidden states which can be connected to h_t through two edges. We can then continue to apply the chain rule for ordered derivatives repeatedly, until only partial derivatives remain.

Upon completion, we have a sum over gradient components, with each component corresponding to exactly one path from t - τ to t and being a product over its path's edges. The spectral norm corresponding to any particular path (t - τ → t' → t” → ⋯ → t) can then be bounded as

‖ (∂h_t/∂h_t”') ⋯ (∂h_t”/∂h_t')(∂h_t'/∂h_t-τ) ‖ ≤ ‖ ∂h_t/∂h_t”' ‖ ⋯ ‖ ∂h_t'/∂h_t-τ ‖ ≤ λ^n_e

where λ is the maximum spectral norm of any factor and n_e is the number of edges on the path. Terms with λ < 1 diminish exponentially fast, and when all λ < 1, shortest paths dominate[We remark that it is also possible for gradient contributions to explode exponentially fast; however, this problem can be remedied in practice with gradient clipping. None of the architectures discussed in this work, including LSTM, address the exploding gradient problem.].

§ MIXED HISTORY RECURRENT NEURAL NETWORKS

Viewing gradient components as paths, with each component being a product with one factor per edge along the path, gives us useful insight into various RNN architectures. When relating a loss at time t to events at time t - τ, simple RNNs and LSTM contain shortest paths of length τ, while simple NARX RNNs contain shortest paths of length τ / n_d, where n_d is the number of delays.

One can envision many NARX RNN architectures with non-contiguous delays that reduce these shortest paths further. In this section we introduce one such architecture using base-2 exponential delays.
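The effect of the delay set on shortest-path lengths can be illustrated with a small breadth-first search over the delay graph. This is an illustrative sketch, not from the paper; the specific delay sets used below (a single delay for simple RNNs, contiguous delays for simple NARX RNNs, base-2 exponential delays for the proposed architecture) are assumptions matching the architectures discussed:

```python
from collections import deque

def shortest_path_edges(tau, delays):
    """Fewest edges connecting h_{t-tau} to h_t when each edge jumps
    forward in time by some d in `delays` (breadth-first search over
    the remaining temporal offset)."""
    seen = {tau}
    frontier = deque([(tau, 0)])
    while frontier:
        remaining, n_edges = frontier.popleft()
        if remaining == 0:
            return n_edges  # BFS guarantees this is minimal
        for d in delays:
            nxt = remaining - d
            if nxt >= 0 and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, n_edges + 1))
    return None  # unreachable for the given delay set
```

For τ = 100 this gives 100 edges with delays {1}, 13 edges with contiguous delays {1, …, 8}, and only 3 edges with base-2 delays {1, 2, 4, …, 128}, matching the intuition that non-contiguous delays shorten gradient paths dramatically.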
In this case, for all τ ≤ 2^n_d - 1, shortest paths exist with only log_2 τ edges; and for τ > 2^n_d - 1, shortest paths exist with only τ / 2^n_d - 1 edges (see Figure <ref>). Finally, we must avoid the parameter and computation growth of simple NARX RNNs. We achieve this by sharing weights over delays, instead using an attention-like mechanism <cit.> over delays and a reset mechanism from gated recurrent units <cit.>.

The proposed architecture, which we call mixed history RNNs (MIST RNNs), is described by

a_t = softmax( W_ah h_t-1 + W_ax x_t + b_a )
r_t = σ( W_rh h_t-1 + W_rx x_t + b_r )
h_t = tanh( W_h [ r_t ⊙ ∑_i=0^n_d-1 a_ti h_t-2^i ] + W_x x_t + b )

Here, a_t is a learned vector of n_d convex-combination coefficients and r_t is a reset gate. At each time step, a convex combination of delayed states is formed according to a_t; units of this combination are reset according to r_t; and finally the typical linear layer and nonlinearity are applied.

§ EXPERIMENTS

Here we compare MIST RNNs to simple RNNs, LSTM, and Clockwork RNNs. We begin with the sequential permuted MNIST task and the copy problem, synthetic tasks that were introduced to explicitly test RNNs for their ability to learn long-term dependencies <cit.>. Next we move on to 3 tasks for which it is plausible that very long-term dependencies play a role: recognizing surgical maneuvers from robot kinematics, recognizing phonemes from speech, and classifying activities from smartphone motion data. We note that for all architectures involved, many variations can be applied (variational dropout, layer normalization, zoneout, etc.). We keep experiments manageable by comparing architectures without such variations.

§.§ Sequential pMNIST Classification

The sequential MNIST task <cit.> consists of classifying 28x28 MNIST images <cit.> as one of 10 digits, by scanning pixel by pixel – left to right, top to bottom – and emitting a label upon completion.
Sequential pMNIST <cit.> is a challenging variant where a random permutation of pixels is chosen and applied to all images before classification. LSTM with 100 hidden units is used as a baseline, with hidden unit counts for other architectures chosen to match the number of parameters. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set), with random learning rates and initializations. Additional experimental details can be found in the appendix.

Test error rates are shown in Table <ref>. Here, MIST RNNs outperform simple RNNs, LSTM, and Clockwork RNNs by a large margin. We remark that our LSTM error rates are consistent with the best previously-reported values, such as the error rates of 9.8% in <cit.> and 12% in <cit.>, which also use 100 hidden units. One may also wonder if the difference in performance is due to hidden-unit counts. To test this, we also increased the LSTM hidden unit count to 139 (to match MIST RNNs), and continued to increase the capacity of each model further. MIST RNNs significantly outperform LSTM in all cases.

We also used this task to visualize gradient magnitudes as a function of τ (the distance from the loss, which occurs at time t = 784). Gradient norms for all methods were averaged over a batch of 100 random examples early in training; see Figure <ref>. Here we can see that simple RNNs and LSTM capture essentially no learning signal from steps that are far from the loss. To validate this claim further, we repeated the 512-unit LSTM and MIST RNN experiments, but using only the last 200 permuted pixels (rather than all 784). LSTM performance remains the same (7.4% error, within 1 standard deviation), whereas MIST RNN performance drops by 15 standard deviations (6.0% error).

§.§ The Copy Problem

The copy problem is a synthetic task that explicitly challenges a network to store and reproduce information from the past.
Our setup follows <cit.>, which is in turn based on <cit.>. An input sequence begins with L relevant symbols to be copied, is followed by a delay of D - 1 special blank symbols and 1 special go symbol, and ends with L additional blank symbols. The corresponding target sequence begins with L + D blank symbols and ends with a copy of the relevant symbols from the inputs (in the same order). We run experiments with copy delays of D = 50, 100, 200, and 400. LSTM with 100 hidden units is used as a baseline, with hidden unit counts for other architectures chosen to match the number of parameters. Additional experimental details can be found in the appendix.Results are shown in Figure <ref>, showing validation curves of the top 5 randomized trials out of 50, with random learning rates and initializations. With a short copy delay of D = 50, we can see that all methods other than Clockwork RNNs can solve the task in a reasonable amount of time. However, as the copy delay D is increased, we can see that simple RNNs and LSTM become unable to learn a solution, whereas MIST RNNs are relatively unaffected. We also note that our LSTM results are consistent with those in <cit.>.Note that Clockwork RNNs are expected to fail for large delays (for example, the second symbol can only be seen by the highest-frequency partition, so learning to copy this symbol will fail for precisely the same reason that simple RNNs fail). However, here they also fail for short delays, which is surprising because the high-speed partition resembles a simple RNN. We hypothesized that this failure is due to hidden unit counts / parameter counts: here, the high-frequency partition is allocated only 256 / 8 = 32 hidden units. To test this hypothesis, we reran the Clockwork RNN experiments with 1024 hidden units, so that 128 are allocated to the high-frequency partition. 
Indeed, under this configuration (with 10x as many parameters), Clockwork RNNs do solve the task for a delay of D = 50 and fail to solve the task for all higher delays, thus behaving like simple RNNs.

§.§ Surgical Maneuver Recognition

Here we consider the task of online surgical maneuver recognition using the MISTIC-SL dataset <cit.>. Maneuvers are fairly long, high-level activities; examples include suture throw and knot tying. The dataset was collected using a da Vinci, and the goal is to map robot kinematics over time (e.g., x, y, z) to gestures over time (which are densely labeled as 1 of 4 maneuvers on a per-frame basis). We follow <cit.>, which achieves state-of-the-art performance on this task, as closely as possible, using the same kinematic inputs, test setup, and hyperparameters; details can be found in the original work or in the appendix. The primary difference is that we replace their LSTM layer with our layers. Results are shown in Table <ref>. Here MIST RNNs match LSTM performance (with half the number of parameters).

§.§ Phoneme Recognition

Here we consider the task of online framewise phoneme recognition using the TIMIT corpus <cit.>. Each frame is originally labeled as 1 of 61 phonemes. We follow common practice and collapse these into a smaller set of 39 phonemes <cit.>, and we include glottal stops to yield 40 classes in total. We follow <cit.> for data preprocessing and <cit.> for training, validation, and test splits. LSTM with 100 hidden units is used as a baseline, with hidden unit counts for other architectures chosen to match the number of parameters. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set), with random learning rates and initializations. Other experimental details can be found in the appendix. Table <ref> shows that LSTM and MIST RNNs perform nearly identically; both outperform simple RNNs and Clockwork RNNs.
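A preprocessing step shared by several of these experiments is per-sequence standardization of the inputs, with each sequence individually shifted and scaled to zero mean and unit variance per dimension. A minimal sketch (the epsilon guard against constant dimensions is an added assumption, not from the paper):

```python
import numpy as np

def standardize_sequence(x):
    """Shift and scale one (time x features) sequence so each feature
    dimension has mean 0 and variance 1 over the sequence."""
    mu = x.mean(axis=0, keepdims=True)
    sigma = x.std(axis=0, keepdims=True)
    return (x - mu) / np.maximum(sigma, 1e-8)  # guard against constant dims
```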
§.§ Activity Recognition from SmartphonesHere we consider the task of sequence classification from smartphones using the MobiAct (v2.0) dataset <cit.>. The goal is to classify each sequence as jogging, running, sitting down, etc., using smartphone motion data over time. Approximately 3,200 sequences were collected from 67 different subjects. We use the first 47 subjects for training, the next 10 for validation, and the final 10 for testing. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set), with random learning rates and initializations. Other experimental details can be found in the appendix. Results are shown in Table <ref>. Here, MIST RNNs outperform all other methods, including LSTM and LSTM^+, a variant with the same number of hidden units and twice as many parameters.§ CONCLUSIONS AND FUTURE WORKIn this work we analyzed NARX RNNs and introduced a variant which we call MIST RNNs, which 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) improve performance substantially over LSTM on tasks requiring very long-term dependencies; and 3) require even fewer parameters and computation than LSTM. One obvious direction for future work is the exploration of other NARX RNN architectures with non-contiguous delays. In addition, many recent techniques that have focused on LSTM are immediately transferable to NARX RNNs, such as variational dropout <cit.>, layer normalization <cit.>, and zoneout <cit.>, and it will be interesting to see if such enhancements can improve MIST RNN performance further.§.§.§ Acknowledgments This work was supported by the Technische Universität München – Institute for Advanced Study, funded by the German Excellence Initiative and the European Union Seventh Framework Programme under grant agreement 291763, and by the National Institutes of Health, grant R01-DE025265. 
§ APPENDIX: GRADIENT COMPONENTS AS PATHS

Here we will apply Equation <ref> repeatedly to associate gradient components with paths connecting t - τ to t, beginning with Equation <ref> and handling simple RNNs and simple NARX RNNs in order. Applying Equation <ref> to expand dh_t/dh_t-τ, we obtain

dh_t/dh_t-τ = ∑_t ≥ t' > t - τ (dh_t/dh_t')(∂h_t'/∂h_t-τ)

§.§.§ Simple RNNs

For simple RNNs, by examining Equation <ref>, we can immediately see that all partials ∂h_t'/∂h_t-τ are 0 except for the one satisfying t' = t - τ + 1. This yields

dh_t/dh_t-τ = (dh_t/dh_t-τ+1)(∂h_t-τ+1/∂h_t-τ)

Now, by applying Equation <ref> again to dh_t/dh_t-τ+1, and then to dh_t/dh_t-τ+2, and so on, we trace out a path from t - τ to t, as shown in Figure <ref>, finally resulting in the single term

(∂h_t/∂h_t-1) ⋯ (∂h_t-τ+2/∂h_t-τ+1)(∂h_t-τ+1/∂h_t-τ)

which is associated with the only path from t - τ to t, with one factor for each edge that is encountered along the path.

§.§.§ Simple NARX RNNs and General NARX RNNs

Next we consider simple NARX RNNs, again by expanding Equation <ref>. From Equation <ref>, we can see that up to n_d partials are now nonzero, and that any particular partial ∂h_t'/∂h_t-τ is nonzero if and only if t' > t - τ and t' and t - τ share an edge. Collecting these t' as the set V_t-τ = {t' : t' > t - τ and (t-τ, t') ∈ E}, we can write

dh_t/dh_t-τ = ∑_t' ∈ V_t-τ (dh_t/dh_t')(∂h_t'/∂h_t-τ)

We can then apply this exact same process to each dh_t/dh_t'; by defining V_t' = {t” : t” > t' and (t', t”) ∈ E} for all t', we can write

dh_t/dh_t-τ = ∑_t' ∈ V_t-τ ∑_t” ∈ V_t' (dh_t/dh_t”)(∂h_t”/∂h_t')(∂h_t'/∂h_t-τ)

By continuing this process until only partials remain, we obtain a summation over all possible paths from t - τ to t.
Each term in the sum is a product over factors, one per edge:

(∂h_t/∂h_t”') ⋯ (∂h_t”/∂h_t')(∂h_t'/∂h_t-τ)

The analysis is nearly identical for general NARX RNNs, with the only difference being the specific sets of edges that are considered.

§ APPENDIX: EXPERIMENTAL DETAILS

§.§ General Experimental Setup

Everything in this section holds for all experiments except surgical maneuver recognition, as in that case we mimicked <cit.> as closely as possible, as described above.

All weight matrices are initialized using a normal distribution with a mean of 0 and a standard deviation of 1 / √(n_h), where n_h is the number of hidden units. All initial hidden states (for t < 1) are initialized to 0. For optimization, gradients are computed using full backpropagation through time, and we use stochastic gradient descent with a momentum of 0.9, with gradient clipping as described by <cit.> at 1, and with a minibatch size of 100. Biases are generally initialized to 0, but we follow best practice for LSTM by initializing the forget-gate bias to 1 <cit.>. For Clockwork RNNs, 8 exponential periods are used, as in the original paper. For MIST RNNs, 8 delays are used. We avoid manual learning-rate tuning in its entirety. Instead we run 50 trials for each experimental configuration. In each trial, the learning rate is drawn uniformly at random in log space between 10^-4 and 10^1, and initial weight matrices are also redrawn at random. We report results over the top 10% of trials according to validation-set error. (An alternative option is to report results over all trials. However, because the majority of trials yields bad performance for all methods, this simply blurs comparisons. See for example Figure 3 of <cit.>, which compares these two options.)

§.§ Sequential pMNIST Classification

Data preprocessing is kept minimal, with each input image individually shifted and scaled to have mean 0 and variance 1.
We split the official training set into two parts, the first 58,000 used for training and the last 2,000 used for validation. Our test set is the same as the official test set, consisting of 10,000 images. Training is carried out by minimizing cross-entropy loss.

§.§ Copy Problem: Experimental Details

In our experiments, the L relevant symbols are drawn at random (with replacement) from the set {0, 1, …, 9}; D is always a multiple of 10; and L is chosen to be D / 10. This way the simplest baseline of always predicting the blank symbol yields a constant error rate for all experiments. No input preprocessing of any kind is performed. In each case, we generate 100,000 examples for training and 1,000 examples for validation. Training is carried out by minimizing cross-entropy loss.

§.§ Surgical Activity Recognition: Experimental Details

We use the same experimental setup as <cit.>, which currently holds state-of-the-art performance on these tasks. For kinematic inputs we use positions, velocities, and gripper angles for both hands. We also use their leave-one-user-out test setup, with 8 users in the case of JIGSAWS and 15 users in the case of MISTIC-SL. Finally, we use the same hyperparameters: 1 hidden layer of 1024 units; dropout with p = 0.5; 80 epochs of training with a learning rate of 1.0 for the first 40 epochs and halving the learning rate every 5 epochs for the rest of training. As mentioned in the main paper, the primary difference is that we replaced their LSTM layer with our simple RNN, LSTM, or MIST RNN layer. Training is carried out by minimizing cross-entropy loss.

§.§ Phoneme Recognition: Experimental Details

We follow <cit.> and extract 12 mel frequency cepstral coefficients plus energy every 10ms using 25ms Hamming windows and a pre-emphasis coefficient of 0.97. However, we do not use derivatives, resulting in 13 inputs per frame. Each input sequence is individually shifted and scaled to have mean 0 and variance 1 over each dimension.
We form our splits according to <cit.>, resulting in 3696 sequences for training, 400 sequences for validation, and 192 sequences for testing. Training is carried out by minimizing cross-entropy loss. Means and standard deviations are computed using the top 5 randomized trials out of 50 (ranked according to performance on the validation set). §.§ Activity Recognition from Smartphones In <cit.>, emphasis was placed on hand-crafted features, and each subject was included during both training and testing (with no official test set defined). We instead operate on the raw sequence data, with no preprocessing other than sequence-wise centering and scaling of inputs, and we define train, val, test splits so that subjects are disjoint among the three groups.
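The copy-problem construction described in the appendix (L relevant symbols, a delay of D - 1 blanks plus one go symbol, then L blanks, with targets blank everywhere except the final copy) can be sketched as a generator. The integer encodings chosen for the blank and go symbols are illustrative assumptions:

```python
import numpy as np

# Symbols 0-9 are the relevant symbols; the encodings for the special
# blank and go symbols below are illustrative assumptions.
BLANK, GO = 10, 11

def make_copy_example(D, rng):
    """One copy-problem example with delay D (a multiple of 10), L = D // 10."""
    L = D // 10
    relevant = rng.integers(0, 10, size=L)
    inputs = np.concatenate([relevant,
                             np.full(D - 1, BLANK), [GO],
                             np.full(L, BLANK)])
    targets = np.concatenate([np.full(L + D, BLANK), relevant])
    return inputs, targets
```

Each sequence has length 2L + D; the network must hold the L relevant symbols across the delay and reproduce them, in order, after the go symbol.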
Strong couplings and form factors of charmed mesons in holographic QCD
Carlisson Miller
======================================================================

Plane Poiseuille flow, the pressure driven flow between parallel plates, shows a route to turbulence connected with a linear instability to Tollmien-Schlichting (TS) waves, and another one, the bypass transition, that is triggered with finite amplitude perturbations. We use direct numerical simulations to explore the arrangement of the different routes to turbulence among the set of initial conditions. For plates that are a distance 2H apart and in a domain of width 2π H and length 2π H, the subcritical instability to TS waves sets in at Re_c=5815 and extends down to Re_TS≈4884. The bypass route becomes available above Re_E=459 with the appearance of three-dimensional finite-amplitude traveling waves. The bypass transition covers a large set of finite amplitude perturbations. Below Re_c, TS waves appear for a tiny set of initial conditions that grows with increasing Reynolds number. Above Re_c the previously stable region becomes unstable via TS waves, but a sharp transition to the bypass route can still be identified. Both routes lead to the same turbulent state in the final stage of the transition, but on different time scales. Similar phenomena can be expected in other flows where two or more routes to turbulence compete.

§ INTRODUCTION

The application of ideas from dynamical systems theory to the turbulence transition in flows without a linear instability of the laminar profile, such as pipe flow or plane Couette flow, has provided a framework in which many of the observed phenomena can be rationalized. This includes the sensitive dependence on initial conditions <cit.>, the appearance of exact coherent states around which the turbulent state can form <cit.>, the transience of the turbulent state <cit.>, or the complex spatio-temporal dynamics in large systems <cit.>.
Methods to identify the critical thresholds that have to be crossed before the turbulent state can be reached have been developed <cit.>, and the bifurcation and manifold structures that explain this behavior in the state space of the system have been identified <cit.>. Extensions to open external flows, like asymptotic suction boundary layers <cit.> and developing boundary layers <cit.>, have been proposed.

Plane Poiseuille flow (PPF), the pressure driven flow between parallel plates, shows a transition to turbulence near a Reynolds number of about 1000 <cit.>. In the subcritical range the flow shows much of the transition phenomenology observed in other subcritical flows, such as plane Couette flow or pipe flow, but it also has a linear instability of the laminar profile at a Reynolds number of 5772 <cit.>. This raises the question about the relation between the transition via an instability to the formation of Tollmien-Schlichting (TS) waves and the transition triggered by large amplitude perturbations that bypass the linear instability (henceforth referred to as the "bypass" transition) <cit.>. For instance, one could imagine that the exact coherent structures related to the bypass transition are connected to the TS waves in some kind of subcritical bifurcation. However, the flow structures are very different, with the exact coherent structures being dominated by downstream vortices <cit.>, and the TS waves dominated by spanwise vortices. In order to explore the arrangement of the different transition pathways we will use direct numerical simulations to map out the regions of initial conditions that follow one or the other path. Such explorations of the state space of a flow have been useful in the identification of the sensitive dependence on initial conditions for the transition <cit.>, and in the exploration of the bifurcations <cit.>. We start with a description of the system and the bifurcations of the relevant coherent states in section <ref>.
Afterwards, in section <ref> we describe the exploration of the state space of the system. Conclusions are summarized in section <ref>.

§ PLANE POISEUILLE FLOW AND ITS COHERENT STRUCTURES

To fix the geometry, let x, y, and z be the downstream, normal and spanwise directions, and let the flow be bounded by parallel plates at y=± H. The flow is driven by a pressure gradient, giving a parabolic profile for the laminar flow. Dimensionless units are formed with the height H and the center line velocity U_0 so that the unit of time is H/U_0 and the Reynolds number becomes Re=U_0 H / ν, with ν the fluid viscosity. In these units the laminar profile becomes u⃗_0=(1-y^2) e⃗_x.

The equations of motion, the incompressible Navier-Stokes equations, are solved using Channelflow <cit.>, with a spatial resolution of N_x=N_z=32 and N_y=65 for a domain of length 2π and width 2π and at fixed mass flux. The chosen resolution is sufficient to resolve the exact solutions and the transition process, but is underresolved in the turbulent case. In the studied domain, the linear instability occurs at Re_c=5815, slightly higher than the value found by <cit.> on account of the slightly different domain size.

The full velocity field U⃗=u⃗_0+u⃗ can be written as a sum of the laminar flow u⃗_0 and deviations u⃗=(u,v,w). In the following we always mean u⃗ when we refer to the velocity field.

Tollmien-Schlichting (TS) waves are travelling waves formed by spanwise vortices. They appear in a subcritical bifurcation that extends down to Re≈ 2610 for a streamwise wavenumber of 1.36. The TS wave is independent of spanwise position z and consists of two spanwise vortices, as shown in figure <ref>(a). The Reynolds number range over which the transition to TS waves is subcritical depends on the domain size.
For our domain (streamwise wave number of 1.0) the turning point is at Re≈ 4685. A bifurcation diagram of this exact solution, referred to as TW_TS in the remainder of the paper, is shown in figure <ref>(a). The ordinate in the bifurcation diagram is the amplitude of the flow field

a(u⃗)=||u⃗||=√(1/(L_x L_y L_z) ∫ u⃗^2 dx dy dz).

A study of the stability of the state in the full three-dimensional space shows that this lower branch state has only one unstable direction in the used computational domain for 5727<Re<5815=Re_c. Thus, for these Reynolds numbers the state is an edge state whose stable manifold can divide the state space into two parts <cit.>. For lower Re, there are secondary bifurcations that add more unstable directions to the state. Specifically, near the turning point at Re=4690, the lower branch has acquired about 350 unstable directions. Because of the high critical Reynolds numbers this state cannot explain the transition to turbulence observed in experiments at Reynolds numbers around 1000 <cit.> or even lower <cit.>.

The states that are relevant to the bypass transition can be found using the method of edge tracking <cit.>.
Starting from an arbitrary turbulent initial condition, trajectories in the laminar-turbulent boundary that are followed with the edge-tracking algorithm converge to a travelling wave <cit.>, which we refer to as TW_E in the following. The visualization in figure <ref>(b) shows that this state has a strong narrow upstream streak, a weaker but more extended downstream streak and streamwise vortices. Moreover, TW_E has a wall-normal reflection symmetry

s_y: [u,v,w](x,y,z)=[u,-v,w](x,-y,z),

a shift-and-reflect symmetry

s_zτ_x: [u,v,w](x,y,z)=[u,v,-w](x+0.5 · L_x,y,-z),

and exists for a wide range in Reynolds numbers. It is created in a saddle-node bifurcation near Re ≈ 459 (see the bifurcation diagram in figure <ref>(a)); for other combinations of spanwise and streamwise wavelengths the state appears at an even lower Reynolds number of 319 <cit.>. The corresponding lower branch state can be continued to Reynolds numbers far above 3·10^5, and its amplitude decreases with increasing Reynolds number as shown in figure <ref>(b). A fit to the amplitude for large Reynolds numbers gives a scaling like Re^-0.52, similar to that of the solution embedded in the edge of plane Couette flow <cit.>. A stability analysis of the lower branch of TW_E shows that the travelling wave has one unstable eigenvalue for 510<Re<5850. Therefore, TW_E is a second travelling wave with a stable manifold that can divide the state space into two disconnected parts. How the two edge states interact and divide up the state space will be discussed in section <ref>.

At Re=510 the lower branch undergoes a supercritical pitchfork bifurcation that breaks the s_y symmetry and adds a second unstable eigenvalue for Re<510. The upper branch of the travelling wave has three unstable eigenvalues for Re<1000.
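The two symmetries of TW_E defined above can be sketched as operations on a discretized field u of shape (3, N_x, N_y, N_z), with components (u, v, w). The grid conventions assumed here (index reversal for reflections, a half-period roll for the streamwise shift) are simplifying assumptions, not the actual spectral representation used by the solver:

```python
import numpy as np

def s_y(u):
    """Wall-normal reflection: [u,v,w](x,y,z) -> [u,-v,w](x,-y,z)."""
    r = u[:, :, ::-1, :].copy()  # reverse the y index (symmetric y grid assumed)
    r[1] *= -1.0                 # flip the sign of the wall-normal component v
    return r

def shift_reflect(u):
    """Shift-and-reflect: [u,v,w](x,y,z) -> [u,v,-w](x + 0.5*L_x, y, -z)."""
    nx = u.shape[1]
    r = np.roll(u, nx // 2, axis=1)  # half-period shift in x (nx even assumed)
    r = r[:, :, :, ::-1].copy()      # reverse the z index
    r[2] *= -1.0                     # flip the sign of the spanwise component w
    return r
```

Applying either operation twice returns the original field: the reflections are involutions, and two half-period shifts amount to one full streamwise period.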
Investigations of different systems showing subcritical turbulence revealed that bifurcations of exact solutions connected to the edge state of the system lead to the formation of a chaotic saddle that shows transient turbulence with exponentially distributed lifetimes <cit.>. In the present system the formation of the chaotic saddle cannot be studied in detail since it takes place in an unstable subspace. However, previous investigations in a symmetry-restricted system did show that the states follow such a sequence of bifurcations to the formation of a chaotic saddle <cit.>, so that we expect the states in the unstable subspace to follow this phenomenology as well. The two travelling waves described above are clearly related to the two different transition mechanisms that exist in the flow. For Reynolds numbers below the onset of TS waves (here: Re_c=5815), initial conditions that start close to TW_E in the state space will either decay or become turbulent without showing any approach to a TS wave: they will follow the bypass transition to turbulence. Initial conditions that start close to TW_TS can also either decay or swing up to turbulence, but they will first form TS waves. Above Re_c all initial conditions will show a transition to turbulence, but it will still be possible to distinguish whether they follow the bypass or TS route to turbulence, as we will see.

§ STATE SPACE STRUCTURE

In order to explore the arrangement of the different routes to turbulence in the space of initial conditions, we pick initial conditions and integrate them until the flow either becomes turbulent or returns to the laminar profile. The initial conditions are taken in a two-dimensional slice of the high-dimensional state space, spanned by two flow fields u⃗_1 and u⃗_2. The choice of the flow fields makes it possible to explore different cross sections of state space.
For the most part, we will use u⃗_1 and u⃗_2 to be the travelling waves TW_E and TW_TS, so that both states are part of the cross section. The initial conditions are then parametrized by a mixing parameter α and an amplitude A, i.e., u⃗(α,A) = A ((1-α) u⃗_1 + α u⃗_2)/||(1-α) u⃗_1 + α u⃗_2||. For α=0 one explores the state space along the velocity field u⃗_1 and for α=1 along the velocity field u⃗_2. If the upper and lower branch of TW_E are used to create such a slice, one recognizes that the turbulence in PPF appears in similar chaotic bubbles as in plane Couette flow <cit.>. Lower branch states are relevant for the transition to turbulence, so we begin by exploring the slice spanned by the lower branches of TW_E and TW_TS. We assign to each initial condition the time it takes to become turbulent, with an upper cut-off for initial conditions that either take longer or never become turbulent because they return to the laminar profile. Color-coded transition-time plots are shown in figure <ref>(a)-(e) for different Reynolds numbers below Re_c. The boundary between initial conditions that relaminarize and those that become turbulent stands out clearly. It is formed by the stable manifolds of the states and their crossings with the cross section. Parts of the stable manifolds are indicated by the dashed white lines for better visibility. The part of the laminar-turbulent boundary connected with TW_E can be distinguished from that connected to TW_TS by the huge differences in transition times: for TW_TS transition times are significantly longer and even exceed 2 · 10^4 time units. The interaction between the two domains is rather intricate. For Reynolds number 5780, shown in figure <ref>(d), it seems that the borders do not cross but rather wind around each other in a spiral shape down to very small scales.
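The slice parametrization u⃗(α,A) can be transcribed directly. The sketch below is illustrative only: the Euclidean norm is a stand-in for the volume-averaged energy norm used in the paper, and the two-component vectors merely exercise the formula.

```python
import numpy as np

def slice_initial_condition(u1, u2, alpha, amplitude):
    """Initial condition u(alpha, A) on the two-dimensional state-space
    slice spanned by the flow fields u1 and u2: the mixture is
    renormalized before being scaled to amplitude A."""
    mix = (1.0 - alpha) * u1 + alpha * u2
    return amplitude * mix / np.linalg.norm(mix)

# alpha = 0 recovers the direction of u1, alpha = 1 that of u2.
u1 = np.array([1.0, 0.0])
u2 = np.array([0.0, 1.0])
u = slice_initial_condition(u1, u2, 0.5, 2.0)
```

By construction, every initial condition produced this way has amplitude exactly A, so scanning over (α, A) covers the slice systematically.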
Although the wave TW_TS still has only one unstable eigenvalue, the size of the structure that is directly connected to TW_TS shrinks with decreasing Re and is not visible in this kind of projection for Re<5727, where TW_TS has more than one unstable eigenvalue. In figure <ref> the evolution of the amplitude for the different initial conditions marked in figure <ref>(d) is shown. The green and blue lines are typical representatives of the slow TS transition. Starting with a three-dimensional initial condition, their amplitude decays and the two-dimensional TS wave TW_TS, whose amplitude is marked by the black line, is approached. Afterwards, they depart from TW_TS again, which is a slow process because of the small growth rate. Ultimately, the transition is caused by secondary instabilities of the TS waves <cit.>. The solid yellow line in figure <ref> is an initial condition that undergoes bypass transition. It quickly swings up to higher amplitudes and does not approach the TS wave on its way to turbulence. The dashed yellow line is of an intermediate type: it takes a long time to become turbulent, but it does not come very close to the TS wave. The relation between time evolution, transient amplification, and final state is complicated and non-intuitive. For instance, the dashed red and green trajectories share a transient increase near t≈4000, but differ in their final state: the red curve, with the higher maximum, eventually returns to the laminar profile, but the green curve, with the smaller maximum, approaches the TS level and eventually becomes turbulent following the TS route. Similarly, the red, blue and green continuous lines start with high amplitude slightly below the threshold for the bypass route.
They all decay, but while the red initial condition ends up on the decaying side of the TS wave, the green and blue ones eventually become turbulent via the TS route. For plane Couette flow it was found that a small chaotic saddle can appear inside an existing larger one <cit.>. There, trajectories that escape from the inner saddle are still captured by the outer one. The appearance of TS transition in PPF follows a comparable mechanism. With increasing Reynolds number, the chaotic saddle of subcritical bypass turbulence is surrounded by the stable manifold of the TS wave, which above Re=5727 can separate two parts of the state space and therefore prevent trajectories in the interior from becoming laminar. With increasing Reynolds number, the number of initial conditions becoming turbulent increases. Finally, for Re>Re_c, no initial conditions that return to the laminar state exist anymore. Nevertheless, also in this supercritical regime a sudden change in the type of transition can be identified: when the amplitude increases and crosses the stable manifold of TW_E, the transition time drops dramatically and turbulence is reached via the bypass route. In the state space visualization for Re=5855 that is shown in figure <ref>(c), this change of transition type presents itself in the rapid drop of the transition time with increasing amplitude for α values between 0 and 0.6. In the supercritical range, the stable manifold of the bypass edge state TW_E separates initial conditions undergoing the quick bypass transition from initial conditions that become turbulent by TS transition.
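The colour-coded transition-time maps discussed above amount to a scan over the (α, A) slice with a capped lifetime for each initial condition. The sketch below only illustrates the bookkeeping: `evolve_time` is a placeholder for a full DNS run returning the transition time (infinite for relaminarizing trajectories), and the toy closure at the end merely exercises the cut-off logic.

```python
import numpy as np

def transition_time_map(u1, u2, alphas, amps, evolve_time, t_max=2.0e4):
    """Scan the (alpha, A) slice and record the time each initial
    condition needs to become turbulent; times are capped at t_max,
    which also marks relaminarizing runs (cf. the colour-coded plots)."""
    times = np.empty((len(amps), len(alphas)))
    for i, A in enumerate(amps):
        for j, a in enumerate(alphas):
            mix = (1.0 - a) * u1 + a * u2
            u0 = A * mix / np.linalg.norm(mix)
            times[i, j] = min(evolve_time(u0), t_max)
    return times

# Toy stand-in: small amplitudes relaminarize, larger ones transition
# after a time inversely proportional to their amplitude.
u1 = np.array([1.0, 0.0])
u2 = np.array([0.0, 1.0])
toy = lambda u: 1.0e3 / np.linalg.norm(u) if np.linalg.norm(u) > 0.1 else np.inf
times = transition_time_map(u1, u2, np.linspace(0.0, 1.0, 3),
                            np.array([0.05, 1.0]), toy)
```

The resulting array can be rendered directly as a colour-coded image, with the cap value marking both long-lived and relaminarizing runs, as in the figures.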
The state space picture at a higher Reynolds number of 6000 looks qualitatively similar to the one shown in figure <ref>(c), including the switch from TS to bypass transition when the stable manifold of TW_E is crossed.

§ CONCLUSIONS

We have explored the coexistence of two types of transition in subcritical plane Poiseuille flow, connected with the existence of states dominated by streamwise and spanwise vortices (bypass and TS transition). Probing the state space by scanning initial conditions in two-dimensional cross sections gave information on the sets of initial conditions that follow one or the other route to turbulence. The results show that the transition via TS waves initially occupies a tiny region of state space. As this region expands it approaches the bypass-dominated regions, but a boundary between the two remains visible because of the very different times needed to reach turbulence. This extends into the parameter range where the laminar profile is unstable to the formation of TS waves. The results shown here are obtained for small domains, where the extensive numerical computations for very many initial conditions are feasible. For larger domains, the corresponding exact coherent structures are localized, as shown by <cit.> and <cit.> for TS waves and by <cit.> for the bypass transition. Since the bifurcation diagrams for the localized states are similar to those of the extended states, we anticipate a similar phenomenology also for localized perturbations in spatially extended systems. The methods presented here can also be used to explore the relation between bypass transition and TS waves in boundary layers <cit.>.
More generally, they can be applied to any kind of transition where two different paths compete: examples include shear-driven or convection-driven instabilities in thermal convection <cit.>, the interaction between transitions driven by different symmetries <cit.>, or the interaction between the established subcritical scenario and the recently discovered linear instability in Taylor-Couette flow with rotating outer cylinder <cit.>.

This work was supported in part by the German Research Foundation (DFG) within Forschergruppe 1182.

[Avila et al.(2011)Avila, Moxey, de Lozar, Avila, Barkley & Hof]Avila2011a Avila, K., Moxey, D., de Lozar, A., Avila, M., Barkley, D. & Hof, B. 2011 The onset of turbulence in pipe flow. Science 333 (6039), 192–6.
[Avila et al.(2013)Avila, Mellibovsky, Roland & Hof]Avila2013 Avila, M., Mellibovsky, F., Roland, N. & Hof, B. 2013 Streamwise-localized solutions at the onset of turbulence in pipe flow. Phys. Rev. Lett. 110, 224502.
[Barkley & Tuckerman(2005)]Barkley2005 Barkley, D. & Tuckerman, L. 2005 Computational study of turbulent laminar patterns in Couette flow. Phys. Rev. Lett. 94, 014502.
[Bottin et al.(1998)Bottin, Daviaud, Manneville & Dauchot]Bottin1998a Bottin, S., Daviaud, F., Manneville, P. & Dauchot, O. 1998 Discontinuous transition to spatiotemporal intermittency in plane Couette flow. Europhys. Lett. 43 (2), 171–176.
[Carlson et al.(1982)Carlson, Widnall & Peeters]Carlson1982 Carlson, D. R., Widnall, S. E. & Peeters, M. F. 1982 A flow-visualization study of transition in plane Poiseuille flow. J. Fluid Mech. 121, 487–505.
[Cherubini et al.(2011a)Cherubini, De Palma, Robinet & Bottaro]Cherubini2011 Cherubini, S., De Palma, P., Robinet, J.-Ch. & Bottaro, A. 2011a Edge states in a boundary layer. Phys. Fluids 23 (5), 051705.
[Cherubini et al.(2011b)Cherubini, De Palma, Robinet & Bottaro]Cherubini2011a Cherubini, S., De Palma, P., Robinet, J.-C. & Bottaro, A.
2011b The minimal seed of turbulent transition in the boundary layer. J. Fluid Mech. 689, 221–253.
[Clever & Busse(1992)]Clever1992 Clever, R. M. & Busse, F. H. 1992 Three-dimensional convection in a horizontal fluid layer subjected to a constant shear. J. Fluid Mech. 234, 511–527.
[Clever & Busse(1997)]Clever1997 Clever, R. M. & Busse, F. H. 1997 Tertiary and quaternary solutions for plane Couette flow. J. Fluid Mech. 344, 137–153.
[Darbyshire & Mullin(1995)]Darbyshire95 Darbyshire, A. G. & Mullin, T. 1995 Transition to turbulence in constant-mass-flux pipe flow. J. Fluid Mech. 289, 83–114.
[Deguchi(2017)]Deguchi2017 Deguchi, K. 2017 Linear instability in Rayleigh-stable Taylor-Couette flow. Phys. Rev. E, in press.
[Duguet et al.(2012)Duguet, Schlatter, Henningson & Eckhardt]Duguet2012 Duguet, Y., Schlatter, P., Henningson, D. S. & Eckhardt, B. 2012 Self-sustained localized structures in a boundary-layer flow. Phys. Rev. Lett. 108, 044501.
[Faisst & Eckhardt(2003)]Faisst2003 Faisst, H. & Eckhardt, B. 2003 Traveling waves in pipe flow. Phys. Rev. Lett. 91, 224502.
[Faisst & Eckhardt(2004)]Faisst2004 Faisst, H. & Eckhardt, B. 2004 Sensitive dependence on initial conditions in transition to turbulence in pipe flow. J. Fluid Mech. 504, 343–352.
[Gibson(2012)]Gibson2009b Gibson, J. F. 2012 Channelflow: a spectral Navier-Stokes simulator in C++. Tech. Rep., U. New Hampshire.
[Gibson et al.(2009)Gibson, Halcrow & Cvitanović]Gibson2009 Gibson, J. F., Halcrow, J. & Cvitanović, P. 2009 Equilibrium and travelling-wave solutions of plane Couette flow. J. Fluid Mech. 638, 243–266.
[Halcrow et al.(2009)Halcrow, Gibson, Cvitanović & Viswanath]Halcrow2009 Halcrow, J., Gibson, J. F., Cvitanović, P. & Viswanath, D. 2009 Heteroclinic connections in plane Couette flow. J. Fluid Mech. 621, 365–376.
[Herbert(1988)]Herbert1988 Herbert, T. 1988 Secondary instability of boundary layers. Annu. Rev. Fluid Mech. 20, 487–526.
[Hof et al.(2006)Hof, Westerweel, Schneider & Eckhardt]Hof2006 Hof, B., Westerweel, J., Schneider, T. M.
& Eckhardt, B. 2006 Finite lifetime of turbulence in shear flows. Nature 443 (7107), 59–62.
[Itano et al.(2013)Itano, Akinaga, Generalis & Sugihara-Seki]Itano2013a Itano, T., Akinaga, T., Generalis, S. C. & Sugihara-Seki, M. 2013 Transition of planar Couette flow at infinite Reynolds numbers. Phys. Rev. Lett. 111, 184502.
[Jiménez(1990)]Jimenez1990a Jiménez, J. 1990 Transition to turbulence in two-dimensional Poiseuille flow. J. Fluid Mech. 218, 265–297.
[Khapko et al.(2014)Khapko, Duguet, Kreilos, Schlatter, Eckhardt & Henningson]Khapko2013a Khapko, T., Duguet, Y., Kreilos, T., Schlatter, P., Eckhardt, B. & Henningson, D. S. 2014 Complexity of localised coherent structures in a boundary-layer flow. Eur. Phys. J. E 37 (32), 1–12.
[Khapko et al.(2013)Khapko, Kreilos, Schlatter, Duguet, Eckhardt & Henningson]Khapko2013 Khapko, T., Kreilos, T., Schlatter, P., Duguet, Y., Eckhardt, B. & Henningson, D. S. 2013 Localized edge states in the asymptotic suction boundary layer. J. Fluid Mech. 717, R6.
[Khapko et al.(2016)Khapko, Kreilos, Schlatter, Duguet, Eckhardt & Henningson]Khapko2016 Khapko, T., Kreilos, T., Schlatter, P., Duguet, Y., Eckhardt, B. & Henningson, D. S. 2016 Edge states as mediators of bypass transition in boundary-layer flows. J. Fluid Mech. 801, R2, arXiv: 1605.03002.
[Kreilos & Eckhardt(2012)]Kreilos2012 Kreilos, T. & Eckhardt, B. 2012 Periodic orbits near onset of chaos in plane Couette flow. Chaos 22 (4), 047505.
[Kreilos et al.(2014)Kreilos, Eckhardt & Schneider]KreilosPRL2014 Kreilos, T., Eckhardt, B. & Schneider, T. M. 2014 Increasing lifetimes and the growing saddles of shear flow turbulence. Phys. Rev. Lett. 112, 044503.
[Kreilos et al.(2016)Kreilos, Khapko, Schlatter, Duguet, Henningson & Eckhardt]Kreilos2016 Kreilos, T., Khapko, T., Schlatter, P., Duguet, Y., Henningson, D. S. & Eckhardt, B. 2016 Bypass transition and spot nucleation in boundary layers. Phys. Rev.
Fluids 1, 043602, arXiv: 1604.07235.
[Kreilos et al.(2013)Kreilos, Veble, Schneider & Eckhardt]Kreilos2013 Kreilos, T., Veble, G., Schneider, T. M. & Eckhardt, B. 2013 Edge states for the turbulence transition in the asymptotic suction boundary layer. J. Fluid Mech. 726, 100–122.
[Lemoult et al.(2012)Lemoult, Aider & Wesfreid]Lemoult2012 Lemoult, G., Aider, J.-L. & Wesfreid, J. E. 2012 Experimental scaling law for the subcritical transition to turbulence in plane Poiseuille flow. Phys. Rev. E 85 (2), 025303(R).
[Lemoult et al.(2013)Lemoult, Aider & Wesfreid]Lemoult2013 Lemoult, G., Aider, J.-L. & Wesfreid, J. E. 2013 Turbulent spots in a channel: large-scale flow and self-sustainability. J. Fluid Mech. 731, R1.
[Manneville(2009)]Manneville2009 Manneville, P. 2009 Spatiotemporal perspective on the decay of turbulence in wall-bounded flows. Phys. Rev. E 79, 025301.
[Mellibovsky & Meseguer(2015)]Mellibovsky2015 Mellibovsky, F. & Meseguer, A. 2015 A mechanism for streamwise localisation of nonlinear waves in shear flows. J. Fluid Mech. 779, R1.
[Moxey & Barkley(2010)]Moxey2010 Moxey, D. & Barkley, D. 2010 Distinct large-scale turbulent-laminar states in transitional pipe flow. Proc. Natl. Acad. Sci. U. S. A. 107 (18), 8091–8096.
[Nagata(1990)]Nagata1990 Nagata, M. 1990 Three-dimensional finite-amplitude solutions in plane Couette flow: bifurcation from infinity. J. Fluid Mech. 217, 519–527.
[Nishioka & Asai(1985)]Nishioka1985 Nishioka, M. & Asai, M. 1985 Some observations of the subcritical transition in plane Poiseuille flow. J. Fluid Mech. 150, 441–450.
[Orszag(1971)]Orszag1971 Orszag, S. A. 1971 Accurate solution of the Orr-Sommerfeld stability equation. J. Fluid Mech. 50, 689–703.
[Sano & Tamai(2016)]Sano2016 Sano, M. & Tamai, K. 2016 A universal transition to turbulence in channel flow. Nat. Phys. 12, 249–253.
[Schmid & Henningson(2001)]Henningson Schmid, P. & Henningson, D. S. 2001 Stability and Transition in Shear Flows. Springer Berlin / Heidelberg.
[Schmiegel & Eckhardt(1997)]Schmiegel1997 Schmiegel, A.
& Eckhardt, B. 1997 Fractal stability border in plane Couette flow. Phys. Rev. Lett. 79, 5250.
[Schneider & Eckhardt(2008)]Schneider2008a Schneider, T. M. & Eckhardt, B. 2008 Lifetime statistics in transitional pipe flow. Phys. Rev. E 78, 046310.
[Schneider et al.(2007)Schneider, Eckhardt & Yorke]Schneider2007 Schneider, T. M., Eckhardt, B. & Yorke, J. 2007 Turbulence transition and the edge of chaos in pipe flow. Phys. Rev. Lett. 99, 034502.
[Schneider et al.(2008)Schneider, Gibson, Lagha, De Lillo & Eckhardt]Schneider2008 Schneider, T. M., Gibson, J. F., Lagha, M., De Lillo, F. & Eckhardt, B. 2008 Laminar-turbulent boundary in plane Couette flow. Phys. Rev. E 78, 037301.
[Skufca et al.(2006)Skufca, Yorke & Eckhardt]Skufca2006 Skufca, J., Yorke, J. A. & Eckhardt, B. 2006 Edge of chaos in a parallel shear flow. Phys. Rev. Lett. 96, 174101.
[Toh & Itano(2003)]Toh2003 Toh, S. & Itano, T. 2003 A periodic-like solution in channel flow. J. Fluid Mech. 481, 67–76.
[Tuckerman et al.(2014)Tuckerman, Kreilos, Schrobsdorff, Schneider & Gibson]Tuckerman2014 Tuckerman, L. S., Kreilos, T., Schrobsdorff, H., Schneider, T. M. & Gibson, J. F. 2014 Turbulent-laminar patterns in plane Poiseuille flow. Phys. Fluids 26, 114103.
[Vollmer et al.(2009)Vollmer, Schneider & Eckhardt]Vollmer2009 Vollmer, J., Schneider, T. M. & Eckhardt, B. 2009 Basin boundary, edge of chaos and edge state in a two-dimensional model. New J. Phys. 11, 013040.
[Waleffe(1998)]Waleffe1998 Waleffe, F. 1998 Three-dimensional coherent states in plane shear flows. Phys. Rev. Lett. 81 (19), 4140.
[Wedin et al.(2014)Wedin, Bottaro, Hanifi & Zampogna]Wedin2014a Wedin, H., Bottaro, A., Hanifi, A. & Zampogna, G. 2014 Unstable flow structures in the Blasius boundary layer. Eur. Phys. J. E 37 (34), 1–20.
[Wedin & Kerswell(2004)]Wedin2004 Wedin, H. & Kerswell, R. R. 2004 Exact coherent structures in pipe flow: travelling wave solutions. J. Fluid Mech. 508, 333–371.
[Willis & Kerswell(2007)]Willis2007 Willis, A. & Kerswell, R. R.
2007 Critical behavior in the relaminarization of localized turbulence in pipe flow. Phys. Rev. Lett. 98 (1), 014501.
[Zammert & Eckhardt(2014)]Zammert2014b Zammert, S. & Eckhardt, B. 2014 Streamwise and doubly-localised periodic orbits in plane Poiseuille flow. J. Fluid Mech. 761, 348–359.
[Zammert & Eckhardt(2015)]Zammert2015 Zammert, S. & Eckhardt, B. 2015 Crisis bifurcations in plane Poiseuille flow. Phys. Rev. E 91, 041003(R).
[Zammert & Eckhardt(2017)]Zammert2017 Zammert, S. & Eckhardt, B. 2017 Harbingers and latecomers - the order of appearance of exact coherent structures in plane Poiseuille flow. J. Turbul. 18 (2), 103–114.
[Zammert & Eckhardt(2015)]Zammert2016x Zammert, S. & Eckhardt, B. 2015 Bypass transition and subcritical turbulence in plane Poiseuille flow. Proceedings of TSFP-9, www.tsfp-conference.org, arXiv: 1506.04370.
[Zammert et al.(2016)Zammert, Fischer & Eckhardt]Zammert2016b Zammert, S., Fischer, N. & Eckhardt, B. 2016 Transition in the asymptotic suction boundary layer over a heated plate. J. Fluid Mech. 803, 175–199.
This paper proposes Monte Carlo Action Programming, a programming language framework for autonomous systems that act in large probabilistic state spaces with high branching factors. It comprises formal syntax and semantics of a nondeterministic action programming language. The language is interpreted stochastically via Monte Carlo Tree Search. The effectiveness of the approach is shown empirically.

§ INTRODUCTION

We consider the problem of sequential decision making in highly complex and changing domains. These domains are characterized by large probabilistic state spaces and high branching factors. Additional challenges for system design are the occurrence of unexpected events and/or changing goals at runtime. A state-of-the-art candidate for responding to this challenge is behavior synthesis with online planning <cit.>. Here, a planning agent evaluates possible behavioral choices w.r.t. the current situation and background knowledge at runtime. At some point, it acts according to this evaluation and observes the actual outcome of the action. Planning continues, incorporating the observed result. Planning performance directly correlates with search space cardinality. This paper introduces Monte Carlo Action Programming (MCAP) to reduce search space cardinality through the specification of heuristic knowledge in the form of procedural nondeterministic programs. MCAP is based on stochastic interpretation of nondeterministic action programs by Monte Carlo Tree Search (MCTS) <cit.>.
Combining search space constraints and stochastic interpretation enables program evaluation in large probabilistic domains with high branching factors. From the perspective of online planning, MCAP provides a formal nondeterministic action programming language that allows plan sketches for autonomous systems to be specified. From the perspective of action programming, MCAP introduces stochastic interpretation with MCTS. This enables effective program interpretation in very large, complex domains. We discuss MCTS and action programming in Section <ref>. Section <ref> introduces MCAP. In Section <ref> we empirically compare MCTS and MCAP specifications for online planning. We conclude and sketch avenues for further research in Section <ref>.

§ RELATED WORK

We briefly review Monte Carlo Tree Search in Section <ref> and action programming in Section <ref>.

§.§ Monte Carlo Tree Search

Monte Carlo Tree Search (MCTS) is a framework for statistical search in very large state spaces with high branching factors, based on a generative model of the domain (i.e. a simulation). It yields good performance even without heuristic assessment of intermediate states in the search space. The MCTS framework originated from research in computer Go <cit.>. The game of Go exhibits the characteristics mentioned above. Also, not many good heuristics are known for Go. Nevertheless, specialized Go programs based on the MCTS algorithm are able to play at the level of a human professional player <cit.>. MCTS is also commonly used in autonomous planning <cit.> and has been applied successfully to a large number of other search tasks <cit.>. MCTS adds nodes to the tree iteratively. Nodes represent states and store metadata about search paths that lead through them. The gathered metadata comprises the mean reward (i.e. node value) and the number of searches that passed through the node. It enables assessment of exploration vs.
exploitation: Should search be directed to already explored, promising parts of the search space? Or should it gather information about previously unexplored areas? Figure <ref> shows the basic principle of MCTS. Based on node information, MCTS selects an action w.r.t. a given tree policy. The successor state is determined by simulating action execution. Selection is repeated as long as the simulation leads to a state that is represented by a node in the tree. Otherwise, a new node representing the simulated outcome state is added to the tree (expansion). Then, a default policy is executed (e.g. uniform random action execution). The gathered reward is stored in the new node (simulation or rollout). This gives a first estimate of the new node's value. Finally, the rollout's value is backpropagated through the tree and the corresponding node values are updated. MCTS repeats this procedure iteratively. Algorithm <ref> shows the general MCTS approach in pseudocode. Here, v_0 is the root node of the search tree, v_l denotes the last node visited by the tree policy, and Δ is the value of the rollout from v_l according to the default policy. MCTS can be interrupted at any time and yields an estimate of quality for all actions in the root state. The best action (w.r.t. node information) is executed and its real outcome is observed. MCTS continues, reusing the tree built so far. Eventually, nodes representing past states are pruned from the tree.

§.§ Action Programming

Nondeterministic action programs define sketches for system behavior that are interpreted at runtime, leaving well-defined choices to be made by the system. Interpreting an action program typically provides a measure of quality for particular instantiations of these sketches. Concrete traces are then executed w.r.t. this quality metric. Well-established action programming languages are Golog <cit.> and Flux <cit.>. Each is interpreted w.r.t.
a particular formal specification of domain dynamics: the situation calculus and the fluent calculus are concerned with the specification of action effects and domain dynamics in first-order logic <cit.>. For both Golog and Flux, Prolog interpreters have been implemented. The MCAP framework differs from these formalisms and their respective languages: (a) MCAP neither provides nor requires a specific formal representation of domain dynamics; rather, any form of domain simulation suffices. (b) MCAP does not explore the search space exhaustively; rather, programs are interpreted stochastically by MCTS. The search space is explored iteratively. Program interpretation is directed to promising areas of the search space based on previous interpretations. Search can be interrupted at any time, yielding an action recommendation accounting for the current situation and a given program. Recommendation quality depends on the number of simulations used for search <cit.>.

§ MONTE CARLO ACTION PROGRAMMING

This Section introduces Monte Carlo Action Programming (MCAP), a nondeterministic procedural programming framework for autonomous systems. The main idea of the MCAP framework is to allow the specification of behavioral blueprints that leave choices to an agent. An MCAP is a nondeterministic program. MCAP programs are interpreted probabilistically by MCTS. MCAPs constrain the MCTS search space w.r.t. a procedural nondeterministic program.

§.§ Framework Parameters

The MCAP framework requires the following specification, where 𝒮 denotes the state space and 𝒜 the action space.

* A generative domain model that captures the probability distribution of successor states w.r.t. the current state and executed action (Equation <ref>). The model does not have to be explicit: the framework only requires a simulation that allows one particular successor state to be queried. simulate : P(𝒮 | 𝒮 × 𝒜)

* A reward function R that encodes the quality of a state w.r.t. system goals (Equation <ref>).
ℛ : 𝒮 → ℝ

* A discount factor γ ∈ [0;1] that weights the impact of potential future decisions on the current situation. A discount factor of zero means that only immediate consequences of action are considered. A discount factor of one means that all future consequences influence the current decision equally, regardless of their temporal distance.

* A maximum search depth h_max ∈ ℕ.

§.§ Syntax

Equation <ref> defines the syntax of the MCAP language 𝒫. ϵ is the empty program, 𝒜 denotes the specified action space, ; is the sequential operator, + is nondeterministic choice, and ∥ denotes interleaving concurrency. Q denotes the query space for conditional evaluation (see Equation <ref>). ? denotes querying the current execution context, ¬? its negation, and ∘ denotes a conditional loop.

𝒫 := ϵ | 𝒜 | 𝒫 ; 𝒫 | 𝒫 + 𝒫 | 𝒫 ∥ 𝒫 | ?{Q}{𝒫} | ¬?{Q}{𝒫} | ∘{Q}{𝒫}

Normal Form  We define a normal form 𝒫_norm for MCAPs. Each program in normal form is a choice of programs with an action prefix and any tail program.

𝒫_norm := ∑ (𝒜 ; 𝒫)

Equations <ref> to <ref> define a term reduction system that ensures transformation of programs to their normal form.

ϵ ; p = p
p + p = p
(p_1 + p_2) ; p = (p_1 ; p) + (p_2 ; p)
p ; (p_1 + p_2) = (p ; p_1) + (p ; p_2)
p_1 ∥ (p_2 + p_3) = (p_1 ∥ p_2) + (p_1 ∥ p_3)
(a_1 ; p_1) ∥ (a_2 ; p_2) = (a_1 ; (p_1 ∥ (a_2 ; p_2))) + (a_2 ; ((a_1 ; p_1) ∥ p_2))
a_1 ∥ (a_2 ; p) = (a_1 ; a_2 ; p) + (a_2 ; (a_1 ∥ p))
a_1 ∥ a_2 = (a_1 ; a_2) + (a_2 ; a_1)

§.§ Semantics

This Section formalizes MCAP semantics in the context of MCTS interpretation.

Search Tree  We introduce a formal representation of the search tree. Its purpose is to accumulate information about computation traces w.r.t. simulation and system action choices. Tree nodes represent states s ∈ 𝒮 and actions a ∈ 𝒜. State nodes V_𝒮 and action nodes V_𝒜 alternate (Equations <ref> and <ref>). Nodes contain an aggregation of metadata 𝒟 that guides further search.
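The reduction to normal form can be sketched for the query-free fragment of the language as a small recursive rewriter. The tuple encoding of programs below is purely illustrative, and duplicate elimination (p + p = p) is omitted for brevity.

```python
# Programs: ('eps',), ('act', a), ('seq', p, q), ('choice', p, q), ('par', p, q)
def normal_form(p):
    """Return the normal form of a query-free MCAP program as a list of
    (action, tail-program) pairs, i.e. a choice of action-prefixed
    programs, following the reduction rules above."""
    kind = p[0]
    if kind == 'eps':
        return []
    if kind == 'act':
        return [(p[1], ('eps',))]
    if kind == 'choice':                    # choices are flattened
        return normal_form(p[1]) + normal_form(p[2])
    if kind == 'seq':
        head, tail = p[1], p[2]
        if head[0] == 'eps':                # eps ; p = p
            return normal_form(tail)
        return [(a, ('seq', rest, tail)) for a, rest in normal_form(head)]
    if kind == 'par':
        left, right = p[1], p[2]
        out = []
        # interleaving: either side may act first, the rest stays parallel
        for a, rest in normal_form(left):
            out.append((a, ('par', rest, right)))
        for a, rest in normal_form(right):
            out.append((a, ('par', left, rest)))
        return out
    raise ValueError(kind)

# a || (b ; c): either a acts first, or b does.
p = ('par', ('act', 'a'), ('seq', ('act', 'b'), ('act', 'c')))
nf = normal_form(p)
```

Repeatedly normalizing the tail programs unfolds all interleavings lazily, which is exactly what the expansion step needs: only the first action of each alternative, together with its residual program.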
Aggregated data are the visitation count and the node value (Equation <ref>).

V_𝒮 ⊆ 𝒮 × 𝒟 × 2^{V_𝒜}
V_𝒜 ⊆ 𝒜 × 𝒟 × 2^{V_𝒮} × 𝒫
𝒟 ⊆ ℕ × ℝ

While it is possible to use a DAG instead of a tree <cit.>, we will concentrate on the tree setting in this paper for the sake of simplicity.

Framework Operations  Equations <ref> to <ref> show the functional signatures of the MCAP framework operations. We define each one in the rest of this Section.

select : V_𝒮 → V_𝒜
expand : 𝒮 × 𝒫 → V_𝒮
rollout : 𝒮 × 𝒫 × ℕ → ℝ
update : V_𝒜 → V_𝒜
update : V_𝒮 → V_𝒮

Selection  Equation <ref> shows UCB1 action selection. It is a popular instantiation of the MCTS tree policy based on regret minimization <cit.>. q(v_a) denotes the current value aggregated in the metadata of action node v_a. n(v_s) and n(v_a) denote the number of searches that visited the corresponding node, stored in its metadata (see also Algorithm <ref>, lines 2 and 10). UCB1 favors actions that expose high value (first term of the sum), and adds a bias towards actions that have not been well explored (second term of the sum). The parameter c is a constant that controls the tendency towards exploration.

select(v_s) = argmax_{v_a ∈ v⃗_a(v_s)} ( q(v_a) + c · √(2 ln n(v_s) / n(v_a)) )

Queries  Our framework requires specification of a query representation and a satisfaction function of queries and states to enable conditional computation. Queries q ∈ Q are evaluated w.r.t. a given state s ∈ 𝒮 and yield a set of substitutions for query variables (Equation <ref>). It returns the set of substitutions for variables in the query for which the query holds in the state. In case the query is ground and holds, the set containing the empty substitution {∅} is returned. If the query does not hold, the empty set ∅ is returned. We write ⊢ in infix notation and s ⊬ q ⇔ s ⊢ q = ∅.

⊢ : 𝒮 × Q → 2^Θ

Interpretation of MCAPs  Expansion of the tree is constrained by a given MCAP through interpreting it w.r.t. a given state. The potential program function constrains the search space w.r.t. the given action program and current system state.
It maps an MCAP and a given state to the set of normalized MCAPs that result from (a) nondeterministic choices and (b) interpretations of queries.

pot : 𝒮 × 𝒫 → 2^{𝒫_norm}

Equations <ref> to <ref> define MCAP interpretation by the potential program function inductively on the structure of 𝒫.

pot(s, ϵ) = ∅
pot(s, a) = { a ; ϵ }
pot(s, p ; p') = ⋃_{p'' ∈ pot(s, p)} (p'' ; p')
pot(s, ∑_i p_i) = ⋃_i pot(s, p_i)
pot(s, ?{q}{p}) = ⋃_{θ ∈ s ⊢ q} pot(s, θ(p))
pot(s, ¬?{q}{p}) = pot(s, p) if s ⊬ q, ∅ otherwise
pot(s, ∘{q}{p}) = pot(s, ?{q}{p ; ∘{q}{p}})

Expansion  Equation <ref> shows the MCAP expansion mechanism. s ∈ 𝒮 denotes the state for which a new node is added. p is the MCAP to be executed in state s. Potential programs pot(s,p) in normal form define the set of action node children for actions a that contain the corresponding tail programs p'. Thus, an MCAP effectively constrains the search space. d_0 ∈ 𝒟, d_0 = (0,0) defines the initial node metadata.

expand(s,p) = (s, d_0, v⃗_a) where v⃗_a = ⋃_{(a,p') ∈ pot(s,p)} (a, d_0, ∅, p')

Rollout  After expansion, a rollout is performed. A number of simulation steps is performed (i.e. until the maximum search depth h_max is reached) and the reward for the resulting states is aggregated. An MCAP p defines the rollout's default policy. Actions and corresponding tail programs are selected uniformly at random from the set of potential programs in each state s encountered in the rollout.

rollout(s, p, h) = R(s) if h = h_max, else R(s) + γ · rollout(s', p', h + 1)
where (a,p') ∼ pot(s,p) ∧ s' ∼ simulate(s' | s, a)

Value Update  After a node is expanded, its value is determined by a rollout. The newly created value is then incorporated into the search tree by value backpropagation along the search path. In general any kind of value update mechanism is feasible, e.g. a mean update as used by many MCTS variants. MCAP uses dynamic programming (i.e. a Bellman update) for updating node values <cit.>. An action's value is the weighted sum of its successor states' values (Equation <ref>).
A state's value is the currently obtained reward plus the value of the currently optimal action (Equation <ref>).

update(v_a) = ∑_{v_s ∈ v⃗_s(v_a)} (n(v_s) / n(v_a)) · v(v_s)
update(v_s) = R(s(v_s)) + max_{v_a ∈ v⃗_a(v_s)} q(v_a)

Algorithm <ref> shows the interplay of selection, aggregation of metadata, simulation, expansion, rollout and value update for Monte Carlo Action Programming.

Algorithm <ref> shows the integration of MCAP with online planning. While the system is running, a given MCAP is repeatedly evaluated and executed until termination (lines 2 – 4). Evaluation is performed by MCTS until a certain budget is reached (lines 6 – 8). The currently best action w.r.t. MCAP interpretation is determined (line 9). If there is no such action, the program terminates (line 10). Otherwise, the best action is executed and the outcome observed (lines 13 and 14). In case the new state is already represented in the search tree, the corresponding state node is used as the new root for further search (lines 15 and 16). Otherwise, a new root node is created (line 18).

§ STATISTICAL MODEL CHECKING OF MCAPS

Metasimulation allows assessing system performance when the simulation used for generating trajectories in MCTS differs from the simulation against which online MCTS is run.

§ EXPERIMENTAL EVALUATION

§.§ Example Domain

We introduce the rescue domain as an illustrative example. Robots can move around a connected graph of positions and lift or drop victims. The number of victims a robot can carry is limited by its capacity. A position may be on fire, in which case a robot cannot move there. At every time step the fire attribute of a position may change depending on how many of the position's neighbors are on fire. A safe position never catches fire. The class diagram of the rescue domain is shown in Figure <ref>.
A particular state of the domain is an instantiation of this class diagram. Possible system actions are:

* Move(R,P): Robot R moves to target position P if it is connected to the robot's current position and is not on fire.
* Extinguish(R,P): Robot R extinguishes fire at a neighboring position P.
* Lift(R,V): Robot R lifts victim V (at the same location) if it has capacity left.
* Drop(R,V): Robot R drops lifted victim V at the current location.
* Noop: Does nothing.

§.§ Setup & Results

Effectiveness of the MCAP framework was evaluated empirically for the rescue domain. A simulation of the domain was used as the generative model. Reward R(s) was defined as the number of victims located at safe positions in state s. Also, each victim not burning provided a reward of 0.1. Maximum search depth was set to h_max = 40 and the discount factor was set to γ = 0.9.

Experiments were conducted with randomized initial states, each consisting of twenty positions with 30% connectivity. Three positions were safe; ten victims and ten fires were located randomly on unsafe positions. Robot capacity was set to two. This setup yields a state space containing more than 10^19 possible states. Fires ignited or ceased probabilistically at unsafe positions. Actions succeeded or failed probabilistically (p = 0.05). This yields a branching factor of 2 · 2^17 for each action.

In the experiments using plain MCTS, all actions ∈ 𝒜 were evaluated at each step. Algorithm <ref> shows pseudocode for the program used to determine the action to evaluate in the experiments with MCAP. Both MCTS and MCAP used 1000 playouts at each step for action evaluation. System performance was measured with the statistical model checker Multivesta <cit.>. Two metrics of system behavior with and without MCAP search space constraints were assessed: the ratios of safe victims and of burning victims. Figure <ref> compares the average results for behavior synthesis with plain MCTS and with MCAP within a 0.1 confidence interval.
The effect of MCAP search space reduction on system performance can clearly be seen. The configuration making use of online MCAP interpretation achieves larger ratios of safe victims and manages the reduction of burning victim ratios better than the configuration not making use of MCAP. With plain MCTS, search is distracted by low reward regions due to avoiding burning victims. MCAP search identifies high reward regions where victims are saved within the given budget.

A similar experiment with unexpected events illustrates the robustness of the approach. Here, every twenty steps all currently carried victims fell to the ground (i.e., were located at their carrier's position). Also, fires ignited such that overall at least ten fires were burning immediately after these events. Note that the simulation of the domain used for plain MCTS and MCAP did not simulate these events. The planning system managed to recover from the unexpected situations autonomously (Figure <ref>). As for the basic experiment, the configuration with MCAP performed significantly better than the configuration using plain MCTS.

In a third experiment the reward function was changed unexpectedly for the system. Before step 25, a reward was provided exclusively for avoiding burning victims. From step 25 on, the reward function from the previous experiments was used, providing reward for safe victims. The planner did not simulate the change of reward when evaluating action traces. MCAP outperformed plain MCTS by reacting more effectively to the change of reward function. Figure <ref> shows the results of this experiment.

§ CONCLUSION

This paper proposed Monte Carlo Action Programming, a programming language framework for autonomous systems that act in large probabilistic state spaces. It comprises formal syntax and semantics of a nondeterministic action programming language.
The language is interpreted stochastically via Monte Carlo Tree Search. The effectiveness of search space constraint specification in the MCAP framework was shown empirically. Online interpretation of MCAP provides system performance and robustness in the face of unexpected events.

A possible avenue for further research is the extension of MCAP to domains with continuous time and hybrid systems. Here, discrete programs are interpreted w.r.t. continuously evolving domain values <cit.>. It would also be interesting to evaluate to what extent manual specification techniques such as MCAP could be combined with online representation learning (e.g., statistical relational learning <cit.> and deep learning <cit.>): How to constrain system behavior if perceptual abstraction is unknown at design time or changes at runtime?

§ EXAMPLE DOMAIN STATE SPACE CARDINALITY

Given a single robot, the state space cardinality takes into account the following possible configurations.

* The possible distributions of victims carried by a robot and the others located at arbitrary positions (first term of the product in Equation <ref>).
* The possible positions of the robot (second term).
* Each position is either on fire or not (third term).

∑_{i=0}^{capacity} [ C(victims, i) · positions^(victims − i) ] · positions · 2^(positions − safe)
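The cardinality formula above can be checked numerically. Reading its second factor as positions^(victims − i), i.e., each uncarried victim may occupy any position independently (the "arbitrary positions" in the first bullet), reproduces the "more than 10^19 states" figure quoted in the experimental setup; the function name below is an assumption made for illustration.

```python
from math import comb

def state_space_size(positions, safe, victims, capacity):
    # Sum over the number i of victims carried by the single robot:
    #   comb(victims, i)         -- choice of which victims are carried
    #   positions**(victims - i) -- each remaining victim at an arbitrary position
    # multiplied by the robot's own position and the fire status
    # of every position that is not safe.
    carried = sum(comb(victims, i) * positions ** (victims - i)
                  for i in range(capacity + 1))
    return carried * positions * 2 ** (positions - safe)

# Setup from the experiments: 20 positions (3 safe), 10 victims, capacity 2.
size = state_space_size(positions=20, safe=3, victims=10, capacity=2)
print(size > 10 ** 19)  # True
```

With these values the sum evaluates to roughly 4.3 × 10^19, consistent with the claim in the experimental section.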
Tumor is heterogeneous – a tumor sample usually consists of a set of subclones with distinct transcriptional profiles and potentially different degrees of aggressiveness and responses to drugs. Understanding tumor heterogeneity is therefore critical to precise cancer prognosis and treatment. In this paper, we introduce BayCount, a Bayesian decomposition method to infer tumor heterogeneity with highly over-dispersed RNA sequencing count data. Using negative binomial factor analysis, BayCount takes into account both the between-sample and gene-specific random effects on raw counts of sequencing reads mapped to each gene. For posterior inference, we develop an efficient compound Poisson based blocked Gibbs sampler. Through extensive simulation studies and analysis of The Cancer Genome Atlas lung cancer and kidney cancer RNA sequencing count data, we show that BayCount is able to accurately estimate the number of subclones, the proportions of these subclones in each tumor sample, and the gene expression profiles in each subclone. Our method represents the first effort in characterizing tumor heterogeneity using RNA sequencing count data that simultaneously removes the need of normalizing the counts, achieves statistical robustness, and obtains biologically/clinically meaningful insights.

KEY WORDS: Cancer genomics, compound Poisson, Markov chain Monte Carlo, negative binomial, over-dispersion

§ INTRODUCTION

Tumor heterogeneity (TH) is a phenomenon that describes distinct molecular profiles of different cells in one or more tumor samples. TH arises during the formation of a tumor as a fraction of cells acquire and accumulate different somatic events (e.g., mutations in different cancer genes), resulting in heterogeneity within the same biological tissue sample and between different ones, spatially and temporally <cit.>.
As a result, tumor cell populations are composed of different subclones (subpopulations) of cells, characterized by distinct genomes, transcriptional profiles <cit.>, as well as other molecular profiles, such as copy number alterations. Understanding TH is critical to precise cancer prognosis and treatment. Heterogeneous tumors may exhibit different degrees of aggressiveness and responses to drugs among different samples due to genetic or gene expression differences. The level of heterogeneity itself can be used as a biomarker to predict treatment response or prognosis, since more heterogeneous tumors are more likely to contain treatment-resistant subclones <cit.>. This will ultimately facilitate the rational design of combination treatments, with each distinct compound targeting a specific tumor subclone based on its transcriptional profile.

Large-scale sequencing techniques provide valuable information for understanding tumor complexity and open a door for the desired statistical inference on TH. Previous studies have focused on reconstructing the subclonal composition by quantifying the structural subclonal copy number variations <cit.>, somatic mutations <cit.>, or both <cit.>. In this paper, we aim to learn tumor transcriptional heterogeneity using RNA sequencing (RNA-Seq) data. In the analysis of gene expression data, matrix decomposition models have been extensively studied in the context of microarray and normalized RNA-Seq data <cit.>. Generally, given a gene expression data matrix X=(x_ij)_G× S, where the (i,j)th element records the expression value of the ith gene in the jth sample, they decompose X by modeling x_ij with ∑_k=1^Kϕ_ikθ_kj, where ϕ_ik encodes the expression level of the ith gene in the kth subclone, θ_kj represents the mixing weight of the kth subclone in the jth sample, and K is the number of subclones. The decomposition can be solved by either optimization algorithms <cit.> or statistical inference by assuming a normal distribution on x_ij.
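As a minimal numerical illustration of the generic decomposition just described, the reconstruction ∑_k ϕ_ik θ_kj is simply a matrix product; the sizes and Dirichlet draws below are arbitrary choices made for the sketch, not taken from any cited method.

```python
import numpy as np

rng = np.random.default_rng(0)
G, S, K = 100, 20, 3  # genes, samples, subclones (illustrative sizes)

# Columns of Phi (G x K) and Theta (K x S) drawn on the simplex,
# so each column sums to one.
Phi = rng.dirichlet(np.full(G, 0.05), size=K).T
Theta = rng.dirichlet(np.full(K, 0.5), size=S).T

X = Phi @ Theta  # the (i, j) entry equals sum_k Phi[i, k] * Theta[k, j]
print(X.shape)   # (100, 20)
```

Because both factors are column-normalized here, each column of X also sums to one; the methods reviewed above differ mainly in the constraints placed on the factors and in how the factorization is fit.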
While it is reasonable to assume normality for microarray gene expression data, it is often inappropriate to adopt such an assumption for directly modeling RNA-Seq data, which involve nonnegative integer observations. If a model based on the normal distribution is used, one often needs to first normalize RNA-Seq data before performing any downstream analysis. See <cit.> for a review on normalization methods. Although normalization often destroys the nonnegative and discrete nature of the RNA-Seq data, it remains the predominant way for data preprocessing due to not only the computational convenience in modeling normalized data, but also the lack of appropriate count data models. Distinct from previously proposed methods, in this paper, we propose an attractive class of count data models for decomposing RNA-Seq count matrices.

There are, nevertheless, statistical challenges with RNA-Seq count data. First, the distributions of the RNA-Seq count data are typically over-dispersed and sparse. Second, the scales of the read counts in sequencing data across samples can be enormously different due to the mechanism of the sequencing experiment, such as the variations in technical lane capacities. The larger the library sizes (i.e., sequencing depth) are, the larger the read counts tend to be. In addition, the differences in gene lengths or GC-content <cit.> can bias gene differential expression analysis, particularly for lowly expressed genes <cit.>.

A number of count data models have been developed for RNA-Seq data <cit.>. For example, <cit.> proposed a Poisson factor model on microRNA to reduce the dimension of count data and identify low-dimensional features, followed by a clustering procedure over tumor samples. <cit.> developed a method using a mixture of negative binomial and Poisson distributions to model single cell RNA-Seq data for gene differential expression analysis.
None of these methods, however, addresses the problem of TH. To this end, we propose BayCount, a Bayesian matrix decomposition model built upon the negative binomial model <cit.>, to infer tumor transcriptional heterogeneity using RNA-Seq count data. BayCount accounts for both the between-sample and gene-specific random effects and infers the number of latent subclones, the proportions of these subclones in each sample, and the subclonal expression simultaneously.

The remainder of the paper is organized as follows. In Section <ref>, we introduce BayCount, a hierarchical Bayesian model for RNA-Seq count data, and develop an efficient compound Poisson based blocked Gibbs sampler. We investigate the performance of posterior inference and robustness of the BayCount model through extensive simulation studies in Section 3, and apply our proposed BayCount model to analyze two real-world RNA-Seq datasets from The Cancer Genome Atlas (TCGA) <cit.> in Section 4. We conclude the paper in Section <ref>.

§ HIERARCHICAL BAYESIAN MODEL AND INFERENCE

In this section we present the proposed hierarchical model for RNA-Seq count data, develop the corresponding posterior inference, and discuss how to determine the number of subclones.

§.§ BayCount Model

We assume that S tumor samples are available from the same or different patients. Consider a G × S count matrix Y=(y_ij)_G× S, where each row represents a gene, each column represents a tumor sample, and the element y_ij records the read count of the ith gene from the jth tumor sample. The Poisson distribution Pois(λ) with mean λ>0 is commonly used for modeling count data. Poisson factor analysis (PFA) <cit.> factorizes the count matrix Y as y_ij ∼ Pois(∑_k=1^K ϕ_ik θ_kj), where Φ=(ϕ_ik)_G× K∈ℝ_+^G× K is the factor loading matrix and Θ=(θ_kj)_K× S∈ℝ_+^K× S is the factor score matrix. Here K is an integer indicating the number of latent factors, and each column of Φ is subject to the constraints that ∑_i=1^Gϕ_ik=1 and ϕ_ik ≥ 0.
However, the restrictive equidispersion property of the Poisson distribution, whose variance equals its mean, limits the application of PFA in modeling sequencing data, which are often highly over-dispersed. For this reason, one may consider the negative binomial factor analysis (NBFA) of <cit.> that factorizes Y as y_ij ∼ NB(∑_k=1^K ϕ_ik θ_kj, p_j), where p_j∈(0, 1). We denote by y ∼ NB(r, p) a negative binomial distribution with shape parameter r>0 and success probability p∈(0,1), whose mean and variance are rp/(1-p) and rp/(1-p)^2, respectively, with the variance-to-mean ratio 1/(1-p). Denote the jth column of Y as y_j=(y_1j, y_2j, …, y_Gj)^T, the count profile of the jth tumor sample.

To account for both the between-sample and gene-specific random effects when modeling RNA-Seq count data, we propose

y_ij | λ, α_i, ζ_j, p_j, Φ, Θ ∼ NB(λα_i + ∑_k=1^K ϕ_ik θ_kj ζ_j, p_j),

where α_i accounts for the gene-specific random effect of the ith gene, λ and p_j control the scales of the gene-specific effect and the between-sample effect of the jth sample, respectively, and ∑_k=1^K ϕ_ik θ_kj ζ_j represents the average effect of the K subclones on the expression of the ith gene in the jth sample. To see this, recall that the mean of y_ij based on (<ref>) is

𝔼[y_ij] = (λα_i + ∑_k=1^K ϕ_ik θ_kj ζ_j) p_j/(1-p_j).

Since p_j is sample-specific, the term p_j/(1-p_j) describes the effect of sample j on read counts due to technical or biological reasons (e.g., different library sizes, biopsy sites, etc). We assume the relative expression of the ith gene in the kth subclone is described by ϕ_ik, where ϕ_ik ≥ 0. Since the sample-specific effect has already been captured by p_j, for modeling convenience, we normalize the gene expression so that the expression levels sum to one for each subclone. Namely, ∑_i=1^G ϕ_ik=1 for all k=1,⋯,K. Furthermore, we assume that θ_kj represents the proportion of the kth subclone in the jth sample, where θ_kj ≥ 0 and ∑_k=1^K θ_kj=1.
We can interpret θ_kjζ_j as the population frequency of the kth subclone in the jth sample, where the parameter ζ_j controls the scale. Together, the summation ∑_k=1^K ϕ_ik θ_kj ζ_j represents the aggregated expression level of the ith gene across all K subclones for the jth sample. To further account for the gene-specific random effects that are independent of the samples and subclones, we introduce an additional term λα_i to describe the random effect of the ith gene on the read counts, such as GC-content and gene length. We assume ∑_i=1^G α_i=1, so that α_i represents the relative gene-specific random effect of the ith gene with respect to all the genes, and λ controls the overall scale of the gene-specific random effects.

Following <cit.>, the model in (<ref>) has an augmented representation as

y_ij = x_ij + z_ij, x_ij = ∑_k=1^K x_ijk,
z_ij | λ, α_i, p_j ∼ NB(λα_i, p_j), x_ijk | ϕ_k, θ_j, ζ_j, p_j ∼ NB(ϕ_ik θ_kj ζ_j, p_j).

From (<ref>), the raw count y_ij of the ith gene in the jth sample can be interpreted as coming from multiple sources: x_ijk represents the count of the ith gene contributed by the kth subclone in the jth sample, where k=1, …, K, while z_ij is the count contributed by the gene-specific random effect of the ith gene in the jth sample.

Denote y_·j = ∑_i=1^G y_ij. Since ∑_i=1^G ϕ_ik=1 and ∑_k=1^K θ_kj = 1 by construction, under (<ref>), by the additive property of independent negative binomial random variables with the same success probability, we have

y_·j | λ, α_i, ζ_j, p_j, Φ, Θ ∼ NB(λ+ζ_j, p_j),

and, in particular, the mean 𝔼[y_·j] = (λ + ζ_j) p_j/(1-p_j) and the variance Var(y_·j) = 𝔼[y_·j] + 𝔼^2[y_·j]/(λ+ζ_j). It is clear that p_j, the between-sample random effect of the jth sample, governs the variance-to-mean ratio of y_·j, whereas λ+ζ_j, the sum of the scale λ of the gene-specific random effects and the scale ζ_j for the jth sample, controls the quadratic relationship between Var(y_·j) and 𝔼[y_·j].
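A short simulation from the model just described illustrates its construction. The hyperparameter values below are illustrative assumptions, and NB(r, p) is drawn through its gamma-Poisson mixture, Poisson(Gamma(r, scale = p/(1−p))), which has mean rp/(1−p) as above.

```python
import numpy as np

rng = np.random.default_rng(1)
G, S, K = 200, 10, 3

Phi = rng.dirichlet(np.full(G, 0.05), size=K).T   # G x K subclonal expression
Theta = rng.dirichlet(np.full(K, 0.5), size=S).T  # K x S subclonal proportions
alpha = rng.dirichlet(np.full(G, 0.5))            # gene-specific random effects
lam = 1.0                                         # scale of gene effects
zeta = rng.gamma(0.5 * K, 1.0, size=S)            # per-sample scales (assumed)
p = rng.uniform(0.4, 0.6, size=S)                 # sample-specific probabilities

# Shape parameters r_ij = lam * alpha_i + sum_k Phi_ik Theta_kj zeta_j
r = lam * alpha[:, None] + Phi @ (Theta * zeta)
# NB(r, p) as a gamma-Poisson mixture
Y = rng.poisson(rng.gamma(r, (p / (1 - p))[None, :]))
print(Y.shape)  # (200, 10)
```

Each column sum of Y is then a draw whose mean is approximately (λ + ζ_j) p_j/(1 − p_j), matching the aggregated representation above.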
We complete the model by setting the following priors that will be shown to be amenable to posterior inference:

ϕ_k ∼ Dirichlet(η,⋯,η), α ∼ Dirichlet(δ,⋯,δ), θ_j | r_1,⋯,r_K ∼ Dirichlet(r_1,⋯,r_K),
p_j ∼ Beta(a_0, b_0), ζ_j | r_1,⋯,r_K, c_j ∼ Gamma(∑_k=1^K r_k, c_j^-1), λ ∼ Gamma(u_0, v_0^-1),

where ϕ_k=(ϕ_1k,⋯,ϕ_Gk)^T, θ_j=(θ_1j,⋯,θ_Kj)^T, α=(α_1,⋯,α_G)^T, Gamma(a, b) denotes a gamma distribution with mean ab and variance ab^2, and Dirichlet(η_1,⋯,η_d) denotes a d-dimensional Dirichlet distribution with parameter vector (η_1,⋯,η_d). We further impose the hyperpriors, expressed as r_k|γ_0, c_0 ∼ Gamma(γ_0/K, c_0^-1), c_j ∼ Gamma(e_0, f_0^-1), γ_0 ∼ Gamma(g_0, h_0^-1), and c_0 ∼ Gamma(e_0, f_0^-1), to construct a more flexible model. Shown in Figure <ref> is the graphical representation of our BayCount model.

§.§ Gibbs Sampling via Data Augmentation

For the proposed BayCount model, while the conditional posteriors of p_j, c_j and c_0 are straightforward to derive due to conjugacy, a variety of data augmentation techniques are used to derive the closed-form Gibbs sampling update equations for all the other model parameters. Rather than going into the details here, let us first assume that we have already sampled the latent counts x_ijk given the observations y_ij and model parameters, which, according to Theorem 1 of <cit.>, can be realized by sampling from the Dirichlet-multinomial distribution; given x_ijk, we show how to derive the Gibbs sampling update equations for Φ and Θ via data augmentation; and we will describe in the Supplementary Material a compound Poisson based blocked Gibbs sampler that completely removes the need of sampling x_ijk.
§.§.§ Sampling Φ and Θ

We introduce an auxiliary variable ℓ_ijk that follows a Chinese restaurant table (CRT) distribution, denoted by ℓ_ijk | x_ijk, ϕ_ikθ_kjζ_j ∼ CRT(x_ijk, ϕ_ikθ_kjζ_j), with probability mass function

p(ℓ_ijk | x_ijk, ϕ_ikθ_kjζ_j) = Γ(ϕ_ikθ_kjζ_j)/Γ(x_ijk+ϕ_ikθ_kjζ_j) |s(x_ijk, ℓ_ijk)| (ϕ_ikθ_kjζ_j)^ℓ_ijk,

supported on {0,1,2,⋯,x_ijk}, where s(x_ijk, ℓ_ijk) are Stirling numbers of the first kind <cit.>. Sampling ℓ ∼ CRT(x, r) can be realized by taking the summation of x independent Bernoulli random variables: ℓ=∑_t=1^x b_t, where b_t ∼ Bernoulli(r/(r+t-1)) independently. Following <cit.>, the joint distribution of ℓ_ijk and x_ijk described by

ℓ_ijk | x_ijk, ϕ_ik, θ_kj, ζ_j ∼ CRT(x_ijk, ϕ_ikθ_kjζ_j), x_ijk | ϕ_ik, θ_kj, ζ_j, p_j ∼ NB(ϕ_ikθ_kjζ_j, p_j),

can be equivalently characterized under the compound Poisson representation

x_ijk | ℓ_ijk, p_j ∼ SumLog(ℓ_ijk, p_j), ℓ_ijk | ϕ_ik, θ_kj, ζ_j, p_j ∼ Pois(-ϕ_ikθ_kjζ_j log(1-p_j)),

where x ∼ SumLog(ℓ, p) denotes the sum-logarithmic distribution generated as x = ∑_t=1^ℓ u_t, where (u_t)_t=1^ℓ are independent and identically distributed (i.i.d.) according to the logarithmic distribution <cit.> with probability mass function p(u) = -p^u/[u log(1-p)], supported on {1,2,⋯}. Under this augmentation, the likelihood of ϕ_ik, θ_kj and ζ_j becomes ℒ(ϕ_ik,θ_kj,ζ_j) ∝ Pois(ℓ_ijk | -ϕ_ikθ_kjζ_j log(1-p_j)), where Pois(·|λ) denotes the probability mass function of the Poisson distribution with mean λ. It follows immediately that the full conditional posterior distributions for ϕ_k and θ_j are

(ϕ_k|-) ∼ Dirichlet(η + ∑_j=1^S ℓ_1jk, ⋯, η + ∑_j=1^S ℓ_Gjk),
(θ_j|-) ∼ Dirichlet(r_1 + ∑_i=1^G ℓ_ij1, ⋯, r_K + ∑_i=1^G ℓ_ijK).

Using data augmentation, we can similarly derive the full conditional posterior distributions for ζ_j, α, r_k and γ_0, as described in detail in the Supplementary Material.

§.§ Determining the Number of Subclones K

We have so far assumed a priori that K is fixed. Determining the number of factors in factor analysis is, in general, challenging.
<cit.> suggested adaptively truncating K during Gibbs sampling iterations. This adaptive truncation procedure, which is designed to fit the data well, may tend to choose a large number of factors, some of which may be highly correlated to each other and hence appear to be redundant. To facilitate the interpretation of the model output, we seek a model selection procedure that estimates K in a more conservative manner. To select a moderate K that is large enough to fit the data reasonably well, but at the same time is small enough for the sake of interpretation, we generalize the deviance reduction-based approach in <cit.> and calculate the estimated log-likelihood of the model under different numbers of subclones using post-burn-in MCMC samples. These samples are obtained by running the compound Poisson based blocked Gibbs sampler for different K's. The estimate of K can be identified by an apparent decrease in the slopes of segments that connect the log-likelihood values of two consecutive K values. Formally, we denote the log-likelihood logℒ(K) as a function of K, and define the second-order finite difference Δ^2 logℒ(K) of the log-likelihood function by

Δ^2 logℒ(K) := 2 logℒ(K) - logℒ(K-1) - logℒ(K+1),

for K=K_min+1,⋯,K_max-1, where K_min and K_max are the lower and upper limits of K, respectively. Then an estimate of K is given by K̂ = argmax_K Δ^2 logℒ(K).

§ SIMULATION STUDY

In this section, we evaluate the proposed BayCount model through simulation studies. Two different scenarios are considered.

* Scenario I: We simulate the data according to the BayCount model itself in (<ref>). In particular, we generate the subclone-specific gene expression data matrix Φ = (ϕ_ik)_G× K^o∈ℝ_+^G× K^o by i.i.d. draws of ϕ_k ∼ Dirichlet(0.05,⋯,0.05), the proportion matrix Θ = (θ_kj)_K^o× S by i.i.d. draws of θ_j ∼ Dirichlet(0.5,⋯,0.5), and ζ_j by i.i.d.
draws of ζ_j ∼ Gamma(0.5K^o, 1), where i=1, ⋯, G, j=1, ⋯, S, and k=1, ⋯, K^o. Here G is the number of genes, S is the number of samples, and K^o is the simulated number of subclones. We set λ = 1, draw α from Dirichlet(0.5,⋯,0.5), and generate p_j from a uniform distribution such that the variance-to-mean ratio p_j/(1-p_j) of y_·j ranges from 100 to 10^6, encouraging the simulated data to be over-dispersed.

* Scenario II: To evaluate the robustness of the BayCount model, under scenario II we consider simulating the data from a model that is different from BayCount. We generate the subclone-specific gene expression data matrix W = (w_ik)_G× K^o∈ℝ_+^G× K^o by i.i.d. draws of w_ik ∼ Gamma(0.05, 10), and the proportion matrix Θ = (θ_kj)_K^o× S by i.i.d. draws of θ_j ∼ Dirichlet(0.5,⋯,0.5). We set λ=1, draw α from Dirichlet(0.5,⋯,0.5), and generate p_j from a uniform distribution such that the variance-to-mean ratio p_j/(1-p_j) of y_·j ranges from 100 to 10^6. The count matrix Y=(y_ij)_G× S is generated from y_ij ∼ NB(λα_i+∑_k=1^K^o w_ik θ_kj, p_j). Note that in scenario II the scales of W=(w_ik)_G× K^o are not subject to the constraint ∑_i=1^G w_ik=1.

We will show that BayCount can accurately recover both the subclone-specific gene expression patterns and the subclonal proportions. The hyperparameters are set to be η = 0.1, a_0=b_0=0.01, e_0=f_0=1, g_0=h_0=1, and u_0=v_0=100. We consider K∈{2,3,⋯,10}. The compound Poisson based blocked Gibbs sampler is implemented with an initial burn-in of B=1000 iterations and a total of n=2000 iterations. The posterior means and 95% credible intervals for all parameters are computed using the 1000 post-burn-in MCMC samples.

§.§ Synthetic data with K^o=3

We first simulate two datasets with G=100, S=20, and K^o=3 under both scenario I and scenario II. Under scenario I, the data generation scheme is the same as the BayCount model. Figure S1 in the Supplementary Material plots Δ^2 logℒ(K) versus K, indicating K̂=3, which is the same as the simulation truth.
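The elbow rule behind this estimate, maximizing the second-order finite difference of the log-likelihood over interior values of K, can be sketched as follows; the log-likelihood values below are made up for illustration.

```python
import numpy as np

def select_K(K_values, loglik):
    """Return the K maximizing 2*logL(K) - logL(K-1) - logL(K+1)."""
    ll = np.asarray(loglik, dtype=float)
    d2 = 2 * ll[1:-1] - ll[:-2] - ll[2:]   # defined at interior K only
    return K_values[1 + int(np.argmax(d2))]

# Hypothetical log-likelihoods with an apparent elbow at K = 3
print(select_K([2, 3, 4, 5, 6], [-500.0, -320.0, -310.0, -305.0, -302.0]))  # 3
```

The rule picks the K at which the gain in log-likelihood from adding one more subclone drops most sharply, which is how the plots of Δ² logℒ(K) versus K are read throughout the paper.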
The estimated subclone-specific gene expression matrix Φ̂ and subclonal proportions Θ̂ are computed as the posterior means of the post-burn-in MCMC samples. Figures S2 and S3 compare the simulated true Φ and Θ with the estimated Φ̂ and Θ̂, respectively. We can see that both the subclone-specific gene expression patterns and the subclonal proportions are successfully recovered.

The analysis under scenario II is of greater interest, since the focus is to evaluate the robustness of BayCount. BayCount yields an estimate of K̂=3, as shown in Figure S4. We then focus on the posterior inference based on K̂=3. Figure <ref> compares the estimated subclonal proportions Θ̂ with the simulated true subclonal proportions across samples, along with the posterior 95% credible intervals. The results show that the estimated Θ̂ approximates the simulated true Θ well.

We then report the posterior inference on the subclone-specific gene expression Φ. Under the BayCount model, ∑_i=1^G ϕ_ik=1; hence the estimated Φ̂ by BayCount and the unnormalized gene expression profile matrix W used in generating the simulated data are not directly comparable. To see whether the gene expression pattern is recovered, we first normalize W by its column sums as Ŵ = WΛ^-1, where Λ=diag(∑_i=1^G w_i1,⋯,∑_i=1^G w_iK), so that ŵ_ik represents the relative expression level of the ith gene in the kth subclone, and then compare Φ̂ with Ŵ. For visualization, the genes with small standard deviations (less than 0.01) are filtered out due to their indistinguishable expressions across different subclones.
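The normalization and filtering just described can be sketched in a few lines; the gamma draw standing in for W mirrors the scenario II simulation (w_ik ∼ Gamma(0.05, 10)), and the 0.01 threshold matches the one used here.

```python
import numpy as np

rng = np.random.default_rng(2)
G, K = 100, 3
W = rng.gamma(0.05, 10.0, size=(G, K))  # unnormalized subclonal expression

# W_hat = W @ Lambda^{-1} with Lambda = diag of column sums,
# so each column of W_hat sums to one.
W_hat = W / W.sum(axis=0, keepdims=True)

# Keep genes whose across-subclone standard deviation exceeds 0.01.
keep = W_hat.std(axis=1) > 0.01
W_sel = W_hat[keep]
print(W_sel.shape[1])  # 3
```

Only the filtered rows W_sel are plotted against the corresponding rows of Φ̂ in the heatmap comparison.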
Figure <ref> compares the heatmap of Φ̂ with the heatmap of the simulated true (normalized) subclone-specific gene expression Ŵ on selected differentially expressed genes. It is clear that the pattern of subclone-specific gene expression estimated by BayCount closely matches the simulation truth.

§.§ Synthetic data with K^o=5

Similarly as in Section <ref>, we simulate two datasets with G=1000, S=40, and K^o=5 under scenarios I and II, respectively. Under scenario I, BayCount yields an estimate of K̂=5 (Figure S5), and from Figures S6 and S7, both the subclone-specific gene expression pattern and the subclonal proportions are successfully captured. Under scenario II, BayCount yields an estimate of K̂=5 (Figure S8). For the subclonal proportions Θ=(θ_kj)_K× S, Figure <ref> shows that the estimated Θ̂ successfully recovers the simulated true proportions. Notice that the credible bands are narrower than those in Figure <ref>, implying relatively smaller variability in estimating subclonal proportions for the larger dataset. Figure S9 presents the autocorrelation plots of the posterior samples of some randomly selected proportions by the compound Poisson based blocked Gibbs sampler, indicating that the Markov chains mix well.
Figure <ref> compares the simulated true (normalized) subclone-specific gene expression Ŵ with the estimated Φ̂ under the inference of BayCount. For this dataset we pre-screen Ŵ with a threshold of 0.008 on the across-subclone standard deviation for all genes for visualization. The high concordance between the heatmaps of the estimated and true expression patterns of the differentially expressed genes indicates that the subclone-specific gene expression patterns have been successfully recovered as well.

In summary, the BayCount model can accurately identify the number of subclones, estimate the subclonal proportions in each sample, and recover the subclone-specific gene expression pattern of the differentially expressed genes.

§ REAL-WORLD DATA ANALYSIS

We implement and evaluate the proposed BayCount model on the RNA-Seq data from The Cancer Genome Atlas (TCGA) <cit.> to study tumor heterogeneity (TH) in both lung squamous cell carcinoma (LUSC) and kidney renal clear cell carcinoma (KIRC). We first run the proposed Gibbs sampler for each fixed K∈{2,3,⋯,10}, compute both the posterior mean and 95% credible interval of the log-likelihood for each fixed K, and estimate K by maximizing Δ^2 logℒ(K) over K. Next, based on the estimated K̂ and the posterior samples generated by the proposed Gibbs sampler, we estimate the proportions of the identified subclones in each tumor sample and the subclone-specific gene expression, which in turn can be used for a variety of downstream analyses.

§.§ TCGA LUSC Data Analysis

We apply the proposed BayCount model to the TCGA RNA-Seq data in lung squamous cell carcinoma (LUSC), which is a common type of lung cancer that causes nearly one million deaths worldwide every year. The raw RNA-Seq data for 200 LUSC tumor samples were processed using the Subread algorithm in the Rsubread package <cit.> to obtain gene level read counts <cit.>.
We select 382 previously reported important lung cancer genes <cit.> for analysis, such as KRAS, STK11, BRAF, and RIT1. BayCount yields an estimate of five subclones (Figure S10), and their proportions in each tumor sample are shown in Figure <ref>. To identify the dominant subclone for each sample, we compare the estimated Θ̂ of the five subclones in each tumor sample, and use them to cluster the patients. Formally, for each patient j=1,⋯,S, we compute the dominant subclone k_j = argmax_{k=1,⋯,K} θ̂_kj, and then cluster patients according to {j: k_j=k}, k=1, …, K̂. That is to say, the patients with the same dominant subclone belong to the same cluster.

We next check if the identified subclones have any clinical utility, e.g., stratification of patients in terms of overall survival. Figure <ref>a shows the Kaplan-Meier plots of the overall survival of the patients in the five clusters identified by their dominant subclones. Indeed, patients stratified by these five BayCount-identified groups exhibit very distinct survival patterns (log-rank test p = 0.0194). Figure <ref>b shows the expression levels of the top 30 differentially expressed genes (ranked by the standard deviations of the subclone-specific gene expression levels ϕ_ik's in decreasing order) in these five subclones. Distinct expression patterns are observed among different subclones. For example, the FTL level is elevated in subclone 1; the expression levels of several genes encoding keratins (KRT5, KRT6A, etc.) are elevated in subclone 3; and the COL1A1 and COL1A2 expression levels are elevated in subclone 4.

Interestingly, the patients with these dominant subclones also show the expected survival patterns. The subclone-1 dominated patients have better overall survival. Previous studies show that the expression of FTL is decreased in lung tumors compared to normal tissues <cit.>, and one plausible explanation is that subclone 1 may descend from less malignant cells and therefore resemble (or consist of) normal cells.
Keratins and collagen I (encoded by COL1A1 and COL1A2) are known to play key roles in epithelial-to-mesenchymal transition (EMT), which subsequently initiates metastasis and promotes tumor progression <cit.>. This agrees with our observation of worse prognosis in patients who have either subclone 3 (with elevated keratin-coding genes) or subclone 4 (with elevated collagen I coding genes) as their dominant subclone.

§.§ Kidney Cancer (KIRC) Data Analysis

Similarly, we obtain gene-level read counts <cit.> for 200 TCGA kidney renal clear cell carcinoma (KIRC) tumor RNA-Seq samples and analyze them with BayCount. Among a total of 23,368 genes, 966 significantly mutated genes <cit.> in KIRC patients are selected, including VHL, PTEN, MTOR, etc. BayCount yields an estimate of five subclones in KIRC (Figure S11). Figure <ref> shows the Kaplan-Meier curves of the overall survival of the patients grouped by their dominant subclones (panel a) and the heatmap of the top 30 differentially expressed genes (panel b). Since we have a large number of genes to begin with, whereas ∑_i=1^Gϕ_ik=1 for all k=1,⋯,K, the subclone-specific gene expression estimates Φ̂ will be small. For better visualization, we plot Φ̂ on a logarithmic scale. The subclonal proportions across the 200 KIRC tumor samples are shown in Figure S12. As shown in Figure <ref>, the patients with these dominant subclones again show distinct survival patterns. One of the poor-survival groups (dominated by subclone 5) is characterized by elevated expression of TGFBI, which is known to be associated with poor prognosis <cit.> and matches our observation here. One distinction of our method from conventional subgroup analysis methods is that we focus on characterizing the underlying subclones (i.e., biologically meaningful subpopulations) by not only their individual molecular profiles but also their proportions.
Instead of grouping the patients by their dominant subclones, we can also examine the proportions themselves in terms of clinical utility. Interestingly, as shown in Figure <ref>a, the proportion of subclone 2 increases with tumor stage: as subclone 2 expands and eventually outgrows other subclones, the tumor becomes more aggressive. In contrast, the proportion of subclone 3 decreases with tumor stage (Figure <ref>b). Subclone 3 might be characterized by less malignant (or normal-like) cells and account for a larger proportion early in the tumor life cycle. As the tumor progresses to more advanced stages, subclone 3 could be suppressed by more aggressive subclones (e.g., subclone 2), and its proportion decreases. Unsurprisingly, the survival patterns agree with our speculations about subclones 2 and 3, with the patients dominated by subclone 2 (the more aggressive subclone) and subclone 3 (the less aggressive subclone) showing the worst and best survivals, respectively. More excitingly, we find that the proportions of these two subclones can complement clinical variables in further stratifying patients. For patients at an early stage, where the event rate is low and clinical information is relatively limited, the proportions of subclones 2 and 3 serve as a potent factor in further stratifying patients (Figure S13) when dichotomized at a natural cutoff. Combining our observations above, subclone proportions may provide additional insights into the progression course of tumors, assistance in biological interpretation, and potentially more accurate clinical prognosis.

§ CONCLUSION

The emerging high-throughput sequencing technology provides us with massive information for understanding tumors' complex microenvironment and allows us to develop novel statistical models for inferring tumor heterogeneity. Instead of normalizing RNA-Seq data, which may bias downstream analysis, we propose BayCount to directly analyze the raw RNA-Seq count data.
Overcoming the natural challenges of analyzing raw RNA-Seq count data, BayCount is able to factorize them while adjusting for both the between-sample and gene-specific random effects. Simulation studies show that BayCount can accurately recover the subclonal structure used to generate the simulated data. We apply BayCount to the TCGA LUSC and KIRC datasets, and correlate the subclonal inferences with clinical outcomes. In particular, by grouping patients according to their dominant subclones, we observe distinct and biologically sensible overall survival patterns for both LUSC and KIRC patients. Moreover, the proportions of the subclones may complement clinical variables in further stratifying patients. In addition to prognostic value, tumor heterogeneity may be used as a biomarker to predict treatment response. For example, tumor samples with large proportions of cells bearing elevated expression of clinically actionable genes should be treated differently from those that have no or a small proportion of such cells. In addition, metastatic or recurrent tumors may possess very different compositions of subclones and should be treated differently. BayCount provides a general framework for inference on latent structures arising naturally in many other biomedical applications involving count data. For example, analyzing single-cell data is a potential further application of BayCount, given the sparsity and over-dispersion of such data. <cit.> describe Drop-Seq, a technology for profiling more than 40,000 single cells at a time. The unique characteristic of dropout events <cit.> in single-cell sequencing limits the applicability of normalization methods developed for bulk RNA-Seq data. Also, the sheer number of single cells and high levels of sparsity pose difficulties for dimensionality reduction methods such as principal component analysis.
Inferring distinct cell populations in single-cell RNA count data will be an interesting extension of BayCount.

§ ACKNOWLEDGEMENT

Yanxun Xu's research is partly supported by Johns Hopkins inHealth and Booz Allen Hamilton.
Liam Connor (ORCID 0000-0002-7587-6352; liam.dean.connor@gmail.com)

Author affiliations: the Canadian Hydrogen Intensity Mapping Experiment, DRAO, Kaleden, B.C.; the Department of Physics and Astronomy, the University of British Columbia; LCSEE and the Center for Gravitational Waves and Cosmology, West Virginia University; the Canadian Institute for Theoretical Astrophysics; the Department of Physics, Department of Astronomy, and Dunlap Institute for Astronomy & Astrophysics, University of Toronto; the Canadian Institute for Advanced Research; the Department of Physics, McGill University; ASTRON, Netherlands Institute for Radio Astronomy; the Anton Pannekoek Institute for Astronomy, University of Amsterdam; the Department of Physics, University of Rome "La Sapienza"; the Department of Radiation Oncology, University of Texas Southwestern Medical Center; the Dominion Radio Astrophysical Observatory, National Research Council Canada; the Department of Physics, Yale University; the Perimeter Institute for Theoretical Physics; and the McWilliams Center for Cosmology, Carnegie Mellon University.

We present results from a new incoherent-beam Fast Radio Burst (FRB) search on the Canadian Hydrogen Intensity Mapping Experiment (CHIME) Pathfinder. Its large instantaneous field of view (FoV) and relative thermal insensitivity allow us to probe the ultra-bright tail of the FRB distribution, and to test a recent claim that this distribution's slope, α≡-∂log N/∂log S, is quite small. A 256-input incoherent beamformer was deployed on the CHIME Pathfinder for this purpose.
If the FRB distribution were described by a single power-law with α=0.7, we would expect an FRB detection every few days, making this the fastest survey on sky at present. We collected 1268 hours of data, amounting to one of the largest exposures of any FRB survey, with over 2.4 × 10^5 deg^2 hrs. Having seen no bursts, we have constrained the rate of extremely bright events to <13 sky^-1 day^-1 above ∼220 √(τ/ms) Jy ms for τ between 1.3 and 100 ms, at 400–800 MHz. The non-detection also allows us to rule out α≲0.9 with 95% confidence, after marginalizing over uncertainties in the GBT rate at 700–900 MHz, though we show that for a cosmological population and a large dynamic range in flux density, α is brightness-dependent. Since FRBs now extend to large enough distances that non-Euclidean effects are significant, there is still expected to be a dearth of faint events and a relative excess of bright events. Nevertheless we have constrained the allowed number of ultra-intense FRBs. While this does not have significant implications for deeper, large-FoV surveys like full CHIME and APERTIF, it does have important consequences for other wide-field, small-dish experiments.

§ INTRODUCTION

Fast radio bursts (FRBs) are extragalactic, millisecond radio transients, of which roughly two dozen have been reported <cit.>. Though the exact origin of FRBs remains elusive, great progress has been made in the last few years alone. Uncertainty in their distance scale has decreased by twenty orders of magnitude, and the error circle for angular position has shrunk by a factor of ∼ a billion. A large swath of progenitor theories have also been tentatively ruled out <cit.>, leaving behind a minority of non-cataclysmic models. This came from work establishing their extraterrestrial <cit.> and later extragalactic <cit.> nature, as well as the discovery that FRB 121102 repeats <cit.>. More recently, <cit.> were able to localize the repeating burst using the VLA, leading to the first unambiguous host galaxy identification.
The host was found by <cit.> to be a low-metallicity, star-forming dwarf galaxy at z≈0.19. <cit.> used the European VLBI Network to study the radio counterpart, and favor either a low-luminosity AGN (discussed in ) or a young neutron star in a supernova remnant (proposed by <cit.>; developed by ) as the progenitor of FRB 121102. In the absence of multiple host-galaxy identifications, the logN–logS test is a useful method of indirectly determining the radial distribution of FRBs. The volume of the Universe is greater at larger distances, so there tend to be more faint events than bright ones. Because of this, the sensitive single-dish telescopes that have, to date, discovered all FRBs have detected mostly moderate-brightness events due to their limited FoV. Therefore, the high-S tail of the FRB distribution has not yet been thoroughly explored. We parametrize the brightness distribution as a simple power-law, such that dN ∝ S^-(α+1) dS, where S is flux density and N is number of events. When integrated, this gives N(>S) ∝ S^-α, which we refer to as the brightness distribution. This one-parameter class of models has a single special value, α=3/2, corresponding to a non-evolving population of sources in a Euclidean spacetime. It is worth pointing out that this value is not limited to standard candles, so long as there is no statistical relationship between distance and luminosity or volume density. The 3/2 case also holds for anything proportional to flux density, so fluence or signal-to-noise can be used in place of S. Several groups have tackled the logN–logS problem. <cit.> argued that a surplus of multi-beam detections at Parkes implied a comparatively flat fluence distribution, with 0.52<α<1.0. <cit.> used the ratio of observed signal-to-noise, s, to the search threshold, s_min, to test the Euclidean hypothesis, motivated by the fact that it is model independent and does not suffer from survey incompleteness.
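The Euclidean hypothesis behind this signal-to-noise test can be illustrated with a quick Monte Carlo sketch. Under the assumed setup below (standard candles distributed uniformly in a Euclidean sphere), V/V_max = (s/s_min)^-3/2 is uniform on [0, 1] with mean 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard candles distributed uniformly in a Euclidean sphere of radius 1.
n = 200_000
r = rng.random(n) ** (1.0 / 3.0)     # radius drawn with dN proportional to r^2 dr
s = 1.0 / r**2                       # observed flux density (arbitrary units)
s_min = 1.0                          # threshold: a source at the sphere's edge

v_over_vmax = (s / s_min) ** (-1.5)  # equals (r / r_max)^3
print(v_over_vmax.mean())            # close to 0.5 for a non-evolving population
```

A population evolving with distance (as for quasars or GRBs) would push the mean away from 1/2.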
This essentially reinstituted the classic <V/V_max> test that was used to show the cosmological nature of quasars <cit.> and gamma-ray bursts (GRBs; ). They found consistency with a Euclidean distribution, but neither the Lorimer burst <cit.> nor FRB 150807 <cit.> was included in their analysis, whereas <cit.> used both. <cit.> carried out a wide-field FRB search in "Fly's Eye" mode on the Allen Telescope Array (ATA) with roughly one quarter the exposure of our Pathfinder survey, though too few FRBs had been observed at the time to put limits on logN–logS. <cit.> concluded separately that ATA's non-detection rules out α≲0.6. <cit.> argued that the apparent deficit of events at low Galactic latitudes may be explained by a steep brightness distribution, with α>2.5. However, such steep logN–logS are now disfavored by the data. If <cit.> are correct and the brightness distribution is much flatter than expected, then the implications for survey design are striking. They point out that for α<1, small dishes are actually preferred to large dishes, because the high number of bright events favors sky coverage over sensitivity. Survey speed, Γ, which we take to be the rate at which a given experiment detects FRBs, is given by the product of field of view and a thermal sensitivity term raised to the power of α. Sensitivity increases with collecting area, which scales quadratically with dish diameter, D, and beamsize goes as 1/D^2. Therefore, Γ ∝ FoV × sensitivity^α ∝ D^2(α-1), and survey speed decreases with increasing dish size for flat distributions (α < 1). Using an incoherent-beam search on the pre-existing Canadian Hydrogen Intensity Mapping Experiment (CHIME) Pathfinder, we are able to test the low-α hypothesis with limited time on sky, based on similar arguments. The incoherent beam is generated by adding up the signals from all antennas after squaring their voltage time streams, erasing relative phase information.
This produces a less sensitive beam than the coherent case, for which phase is preserved, but one that is the size of the full primary beam. We expect a coherent beam from N_a dual-polarization antennas to be √(N_a) times more sensitive than an incoherent beam from the same set of inputs, assuming noise is mostly uncorrelated between receivers. The factor of N_a in incoherent beam solid angle ultimately wins, though, and dramatically so for small α, as can be seen by comparing the dark grey and orange regions in Fig. <ref>. If we take the ratio of the incoherent survey speed to the coherent survey speed, assuming equal bandwidth and signal-to-noise cut-off, we get

Γ_inc/Γ_coh = (N_a Ω_i / Ω_i) × ((√(N_a) G_i/T_sys) / (N_a G_i/T_sys))^α = N_a^(1-α/2),

where G_i and Ω_i are the gain and beam solid angle of a single feed. The Pathfinder, for which N_a=128, should benefit from a factor of about 23 in speed-up for α=0.7 when going from a coherent to an incoherent beam. The "full" CHIME FRB project is expected to see multiple events per day, making it the fastest survey on sky <cit.>. This is mainly due to its ability to search all ∼10^3 coherently-formed beams, with near 100% duty cycle, filling its full ∼200 deg^2 primary-beam FoV at all times <cit.>. Because full CHIME also has appreciable collecting area (8000 m^2), it is relatively α-independent, which is also the case for fast upcoming surveys like APERTIF <cit.> and UTMOST <cit.>. This is not true for the CHIME Pathfinder, which has a similar design to full CHIME but less collecting area, and its beam-forming backend is presently capable of processing only one synthesized full-polarization beam. This can be seen in Fig. <ref>, where we plot the expected number of detected FRBs per week as a function of α, both for existing experiments and for those in the commissioning stage. Large-FoV, highly sensitive instruments like CHIME (light blue solid region) and APERTIF (dashed red curve; ) are able to see faint events, as well as the rarer bright events.
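The N_a^(1-α/2) speed-up derived above is easy to evaluate directly; a one-function sketch (the function name is ours):

```python
def survey_speedup(n_antennas, alpha):
    """Incoherent-to-coherent survey-speed ratio, Gamma_inc / Gamma_coh.

    The field of view gains a factor N_a, while sensitivity loses sqrt(N_a)
    and enters the detection rate raised to the power alpha.
    """
    return n_antennas ** (1.0 - alpha / 2.0)

print(round(survey_speedup(128, 0.7)))  # 23: the Pathfinder speed-up quoted above
print(survey_speedup(128, 2.0))         # 1.0: incoherent and coherent break even
```

The break-even point at α = 2 makes the design trade-off explicit: incoherent beamforming only pays off for distributions flatter than that.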
However, specialized instruments like the incoherent-beam CHIME Pathfinder (dark grey solid region) and the Deep Synoptic Array[www.astro.caltech.edu/∼srk/Workshop/BnE2016_NB.pdf] (dashed black curve) are only competitive if α is small. Moderate-FoV instruments like the Parkes Multibeam Receiver and Arecibo's ALFA are orders of magnitude faster than the incoherent Pathfinder search for the Euclidean case, but several times slower if α < 0.8. In this paper we discuss the new incoherent-beam FRB survey on the CHIME Pathfinder. Its development was motivated by two points. Given its large instantaneous FoV but poor sensitivity, we could very quickly test the low-α hypothesis. And if α really were significantly smaller than 3/2, we would have set up, with little cost, a survey faster than the Parkes Multibeam. We outline this experiment in Sect. <ref>, including the development of its beamforming and tree-dedispersion pipelines. In Sect. <ref> we discuss our non-detection in ∼53 days of data and the constraints on α. We then go over the implications for other similar surveys, and discuss various astrophysical reasons for our non-detection in Sect. <ref>.

§ CHIME PATHFINDER

A pathfinder instrument for CHIME was constructed at the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, British Columbia, and was brought online in late 2013 <cit.>. Its purpose is to act as both a proof-of-concept instrument and a debugging tool for the full CHIME, whose highly ambitious primary science goal of 21 cm intensity mapping requires considerable precision in calibration. The Pathfinder consists of two north-south, 37 m-long, 20 m-wide cylindrical mesh reflectors, whose focal lines are each instrumented with 64 linear dual-polarization antennas for a total of 256 inputs. This is roughly an order of magnitude smaller in scale than full CHIME, which has a total of 2048 inputs on four 100 m-long, 20 m-wide reflectors.
More information about CHIME's Pathfinder instrument can be found in <cit.>. The stationary, cylindrical reflector design makes CHIME a wide-field transit telescope. Since its dishes are aligned north-south, it only focuses light in the east-west direction, resulting in a primary beam that spans ∼150^∘ in declination and 1–2^∘ in hour angle. North-south spatial resolution is recovered either by beamforming or by computing the full N^2-correlation matrix, both of which are done in the Pathfinder's correlator (for more details, see ).

§.§ Beamformer

Since late 2015, we have had a working beamforming back-end in the Pathfinder. The beamformer is an OpenCL kernel run on a 16-node GPU cluster, which is the X-engine of the Pathfinder's hybrid FX-correlator <cit.>. It is run in a commensal mode with the more computationally intensive N^2-correlation that is used for the cosmology experiment. Initially, the beamformer produced a single coherent tracking beam that was used for pulsar observations and a preliminary FRB search. Once it was realized that an incoherent beam could potentially provide an enormous increase in search speed, the coherent beamforming kernel was modified to first square, then sum, incoming voltages. Channelized data arrive at each of the 16 GPU nodes from the custom F-engine electronics as 4-bit real, 4-bit imaginary offset-encoded integers. Once these voltages are squared and summed across the array, they are reduced to 8-bit unsigned integers. Signals from all 256 inputs, but only one sixteenth of the frequencies, are processed on each node. The beamformed data are sent to a separate acquisition node over 10 Gigabit Ethernet at 6.4 Gbps in the VDIF specification[www.vlbi.org/vdif/docs/VDIF_specification_Release_1.1.1.pdf].
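The √(N_a) sensitivity gap between the coherent (sum voltages, then square) and incoherent (square, then sum) beams can be checked with a toy simulation. The setup below, a single phase-aligned pulse in uncorrelated complex receiver noise, is illustrative only and is not the actual GPU kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_samp, peak = 128, 4096, 2048

# Complex voltages: independent unit-variance receiver noise per antenna.
v = (rng.normal(size=(n_ant, n_samp))
     + 1j * rng.normal(size=(n_ant, n_samp))) / np.sqrt(2)
v[:, peak] += 2.0                              # a common, phase-aligned pulse

coherent = np.abs(v.sum(axis=0)) ** 2          # sum voltages, then square
incoherent = (np.abs(v) ** 2).sum(axis=0)      # square voltages, then sum

def snr(ts):
    """Pulse significance against the off-pulse mean and RMS."""
    off = np.delete(ts, peak)
    return (ts[peak] - off.mean()) / off.std()

ratio = snr(coherent) / snr(incoherent)
print(ratio)                                   # close to sqrt(128) ~ 11.3 in expectation
```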
For the incoherent beamforming kernel, the intensities arrive at the acquisition node at full time and frequency resolution, 2.56 μs and 390.625 kHz respectively. The squared and summed signals from our two orthogonal polarizations arrive separately, and are not summed until further down the pipeline. Since different frequencies arrive from different nodes, packets arrive out of order and must be unscrambled. A real-time, multi-threaded acquisition code was developed to handle this[https://github.com/kmsmith137/ch_vdif_assembler]. It writes to disk either assembled voltages (in the coherent beamforming case) or intensities integrated to 1.3 ms (in the incoherent case). The latter are written to HDF5 files for offline processing. A diagram of the incoherent back-end and search pipeline is shown in Fig. <ref>. The custom data-processing pipeline was built specifically for this experiment.

§.§ FRB Search

We run a modified tree dedispersion algorithm on the data to search for FRBs with dispersion measures (DMs) between 20 and 2000 pc cm^-3 and widths between 1.3 and 100 ms. The package burst_search[https://github.com/kiyo-masui/burst_search] was first developed to search GUPPI data from the Green Bank Telescope, and successfully found FRB 110523 <cit.>. We modified this code to search both real-time Pathfinder data streams and offline, integrated intensity data. The data were broken up into 80-second total-intensity (Stokes I) arrays, each overlapping the previous block by 15 s. We do not search over spectral index. If the largest S/N value in a given block exceeded our threshold of 10, a trigger was written to disk, along with plots of the event. The large number of low-DM triggers jeopardized our seeing extragalactic events, so we searched the ranges 20–200 pc cm^-3, 200–525 pc cm^-3, and 525–2000 pc cm^-3 separately.

§.§.§ RFI

Due to the incoherent beam's sensitivity to the horizon, radio frequency interference (RFI) was a significant concern.
When we searched raw data without any masking or de-trending, an event above the S/N threshold would occur in every processed block of data. A large fraction of these were due to the recently-introduced Long Term Evolution (LTE) wireless communication band around 700 MHz, which fluctuates on millisecond time-scales. By masking these and other persistent RFI frequency channels, most false positives could be avoided. But the data still needed preprocessing, so a series of filters was applied to each block of intensities before the tree-dedispersion search was run. This included a 6σ outlier cut in frequency, which removes bad channels. Bandpass calibration is done by dividing our data by the time-averaged DC power within an 80-s block. We then apply a highpass filter that uses a 100 ms Blackman window function, which sets our maximum search width. The effectiveness of this RFI preprocessing was verified using transits of pulsar B0329+54, as well as simulations of FRB events; after preprocessing these events would be detected, but without the filtering, B0329+54 and injected events would go undetected due to strong RFI occurring during the dispersed pulse. After preprocessing we reduced our false-positive rate to roughly one per 30 minutes.

§.§ Survey parameters

In order to estimate an expected FRB rate we need to know the telescope's sensitivity, beamsize, and the effects of dispersion smearing, which can be difficult to determine. For example, the reduction in search speed from dispersion smearing is calculable only if the DM and width distributions of ultra-bright FRBs are known. Below we provide estimates of each of these quantities and their associated uncertainty.

§.§.§ Beamsize

Extensive work has been done both to simulate and map the primary beams of each
Using the east-west holographic measurementsmade by <cit.>, and simulations of thenorth-south beam using the reflector antenna softwareGRASP[http://www.ticra.com/products/software/grasp], we adopt half-power beam solid angles of 270, 225,155, and 110 square degrees at 430, 525, 625, and 750 MHz,respectively. For most of our analysiswe take their mean, ∼ 190 deg^2, to be our beamsize.However, it seems likely that this is a conservative estimate; early holographymeasurements indicate that the true north-south beam on-sky is larger than the beam produced in simulation.§.§.§ Sensitivity We were able to test our expected sensitivity using twodistinct methods. The first method uses the fractional power increasefrom the transits of bright point-sources such as Cassiopeia A, Cygnus A,and Taurus A (Crab nebula), to estimate the baseline T_ sys. This “Y-factor method" measures the average system temperature of the individual antennas,but does not measure the effective T_ sys of our incoherent beam.The point-source transits give what we expect,namely an average system temperature per antenna of ∼60 K assuming anaperture efficiency of 50% <cit.>.The second method, which is more relevant toour search, comes from using the radiometer equationwith measurements of single pulses of B0329+54, and indicateshigher-than-expected noise. The difference between the two methods is thatthe latter measures an actual RMS of the final incoherent beam,so it probes the way the noise averages down after we sum across the array.B0329+54 is the brightest visible pulsar in our band,and fortuitously is only ∼ 5 degrees off-zenith at ourlatitude. According tothe Australia Telescope National Facility (ATNF) Pulsar Database,B0329+54 has flux densities at 400 MHz and 1.4 GHz of S_400^ν = 1500 mJy and S_1400^ν = 200 mJy, respectively <cit.>. 
With a pulse width of 6.6 ms and a 714 ms period, its interpolated flux density at 600 MHz when it is "on" is 84 Jy, assuming a power-law index γ=1.61. To calculate the expected S/N from a dedispersed, frequency-averaged time series, the average flux, <S>_ν, can be estimated by summing in quadrature. This gives an effective flux density of ∼96 Jy. If we then measure the average S/N of single B0329+54 pulses, we can use <S>_ν to constrain the system's sensitivity. This is done with the radiometer equation,

S/N = <S>_ν √(2 B_eff τ) / S_sys,

where B_eff is the effective bandwidth, τ is the pulse duration, and the factor of 2 is the number of polarizations. S_sys is the system-equivalent flux density (SEFD), which is simply the ratio of system temperature to forward gain, T_sys/G. We opt for the SEFD since T_sys and G are often degenerate, and we do not need to distinguish between the two for our purposes. From the several dozen B0329+54 transits in our dataset, over 50,000 individual pulses were observed. Analyzing the stored data, we find that at beam center the mean S/N was ∼10, so S_sys = 2 × 10^4 Jy, using B_eff and τ from Table <ref>. This is a few times larger than expected, which seems to be caused by excess noise on time-scales ≲20 ms, leading to larger RMS on all time-scales. This excess is also seen in the noise power spectrum at high temporal frequencies, and may be caused by intermittent RFI. The discrepancy between the Y-factor method and the S/N of B0329+54 pulses would then come from correlated RFI-induced noise not beating down as √(N_a) as we sum all antennas in the beamformer. We also collected roughly 10^3 Crab giant pulses (GPs). Due to their steep brightness distribution and uncertainty in absolute flux density, they were not directly used as a calibrator.
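The interpolated B0329+54 numbers above follow directly from the quoted catalog values; a quick reproduction (variable names and rounding conventions are ours):

```python
import numpy as np

# ATNF catalog values quoted above for B0329+54.
s400, s1400 = 1500.0, 200.0                 # mJy at 400 MHz and 1.4 GHz
gamma = np.log(s400 / s1400) / np.log(1400.0 / 400.0)
print(gamma)                                # ~1.61, the power-law index quoted

# Pulse-averaged flux interpolated to 600 MHz, then scaled up by the duty cycle.
s600_mean_mjy = s400 * (600.0 / 400.0) ** (-gamma)
period_ms, width_ms = 714.0, 6.6
s_on_jy = s600_mean_mjy * (period_ms / width_ms) / 1000.0
print(s_on_jy)                              # ~84-85 Jy when the pulsar is "on"
```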
However, by comparing to a large set of GPs observed in our band with the Algonquin Radio Telescope (Main et al., in prep), we find our rate of 2–8 GPs per minute with s>10 σ to be consistent with the brightness distribution they found.

§.§.§ DM Smearing

Though full CHIME will "upchannelize" its data (increase the frequency resolution after the initial channelization step) <cit.>, the incoherent Pathfinder search was carried out with the nominal 1024 channels at 390-kHz resolution. This leads to "DM smearing" for highly dispersed events, which broadens the pulse and reduces S/N. If the FRB's intrinsic width is t_i, it is scattered to a width τ, and it is sampled at t_samp, the minimum flux density to which we are sensitive is increased as

S'_min → S_min × (t_I / √(t_samp^2 + τ^2 + t_i^2))^1/2,

where t_I is the final pulse width <cit.>. Using Δν as the frequency resolution and ν_c as the central frequency, the effective pulse width can be calculated by adding the other broadening elements in quadrature:

t_I^2 = τ^2 + t_i^2 + t_samp^2 + t_DM^2,

where

t_DM = 8.3 (DM / pc cm^-3)(Δν / 1 MHz)(ν_c / 1 GHz)^-3 μs.

In this survey the smearing term will dominate the sampling time and, probably, the intrinsic width for high DMs. Scattering is less constrained. For example, a burst with DM=776 pc cm^-3 (the median DM on FRBcat ) that was intrinsically 1 ms, sampled at 1.3 ms, and scattered to 5 ms (roughly the case for GBT FRB 110523 if it were observed at 600 MHz) would be ∼12 ms in duration if observed on the Pathfinder. Therefore, if all FRBs had the parameters of that hypothetical burst, Eq. <ref> tells us that the current Pathfinder search would be ∼6^α/2 times slower than a sufficiently upchannelized Pathfinder search. But not all FRBs will have those exact parameters, and the number of degrees of freedom in Eq. <ref> makes predicting the effects of smearing for high-DM events difficult.
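The worked example can be checked numerically; the helper below simply implements the quadrature sum and intra-channel smearing formula from the text:

```python
import numpy as np

def effective_width_ms(dm, tau_ms, t_i_ms=1.0, t_samp_ms=1.3,
                       dnu_mhz=0.390625, nu_ghz=0.6):
    """Final pulse width t_I from the quadrature sum of broadening terms."""
    t_dm_ms = 8.3e-3 * dm * dnu_mhz * nu_ghz ** -3  # intra-channel smearing, in ms
    return np.sqrt(tau_ms**2 + t_i_ms**2 + t_samp_ms**2 + t_dm_ms**2)

# DM = 776 pc cm^-3, intrinsically 1 ms, sampled at 1.3 ms, scattered to 5 ms:
w = effective_width_ms(776.0, tau_ms=5.0)
print(w)  # ~12.8 ms, i.e. the ~12 ms quoted above
```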
Fortunately, there is reason to think this would not be a major issue.The only way the incoherent-beam Pathfinder search will see anythingis if α really is small (see the low detection ratefor α>1.2 in Fig. <ref>). That would mean the IGM is doing a significant fraction ofthe dispersion,in which case brightness anti-correlates with DM, asnearby sources have less intervening plasma. In other words, our surveyonly probes the ultra-bright, nearby subset of the FRB population,and their low DMs will not greatly reduce the survey's sensitivity. Indeed, the twosources whose inferred flux density was orders-of-magnitude greater thanthe FRB median (the Lorimer burst and FRB 150807) both had extragalactic DMs less than 350 pc cm^-3 <cit.>. § RESULTSIn 1268 hours of data several thousand triggers were producedwith signal-to-noise greater than 10. Each was inspected by eye, and almost every “event” was discernibly non-astronomical. For example, the incoherent beam's susceptibility to RFI means that most triggers were narrow-band or had unusual discontinuitiesin their frequency-collapsed profile. As discussed in Sect. <ref>,some of these false-positives were caused by strong interferenceflickering on time-scales of tens of milliseconds. The handful of marginal events were analyzed further, but no FRBs were found. Havingseen zero events, we can ask how unlikely that outcome wasand therefore put a lower limit on α. But first,we must verify that if there were an FRB in our beam,we would have detected it. §.§ Completeness There are many ways for a transient searchto not see something, so care must be taken inverifying a survey's completeness and reliability.Since the incoherent beamformer does not do any spatial filtering, we see the whole northern skyeach day with CHIME's large north-south primary beam.Giant pulses (GPs) from the Crab and single pulses from B0329+54 were usedto ensure the search pipeline was working andthat each day's data were good. 
All other pulsars—includingGP-emitting sources like B1937+21—are too faint to see individual pulses.This is also true for known RRATs.For the two sources we could see, wefound that, even when the sources were entering the beam andthe maximum S/N in a given 80-s block was around oursearch's threshold, s_ min,individual pulses still triggered andwere easily recognizable as pulses.We detected B0329+54 pulses and Crab GPs in allof their respective transits during our observing campaign. However, getting a fractional completeness—the ratio of the number of detectionsto number of events—is difficult for these sources. This is due to their pulse-to-pulse intensity fluctuations, which cause theirS/N to fall below our threshold, and the fact that we triggeronly on the brightest event above 10 σ in each 80-s block of data. In order to quantify our fractional completeness, we injectedsignals into our data with DM=400 pc cm^-3 at a range of brightnesses. We find that for the injected signals whose expected resultant S/N≳15, effectively all pulses are recovered and our completeness is above 99%. For events whose recovered S/Nis between 10–12, we detect ∼90% of the simulated bursts. Fortunately, as weshow in Fig. <ref>, for N(>S) power-laws with α<1.5, most FRBs are expected to be detected above 15 σ.Two examples of the outputof our search are seen in Fig. <ref>, which showfour different visualizations of the data for each event. The top panel ofeach trigger plot shows the burst's amplitude in DM / arrival time space.The second panel from the top shows a frequency / timeintensity array after dedispersing the pulse tothe maximum likelihood DM. The next panel showsa frequency-averaged pulse profile, and the finalpanel shows fluence plotted against frequency forthree different binnings. The B0329+54trigger illustrates that pulses near the cut-off are stilleasily identifiable. 
The Crab trigger shows that high-S/N eventsare not excised by our RFI-preprocessing.The clarity of B0329+54 pulses or Crab GPs close to 10 σis in stark contrast to the vast majority of unexpected triggers.Several thousand events were inspectedby eye, almost all of which were unequivocally false positives. They would have power only in a few frequency channels, orwould look like step-functions in time. The borderline events were followed up by analyzing directlythe data around the event, but ultimately therewere no triggers that looked like broad-band, single-DM pulses.The triggers produced tended to be very low DM, which is whywe partitioned the full DM range into three groups. RFI triggerscluster around the minimum search DM as well as the signal-to-noisethreshold. The latter effect is shown in Fig. <ref>. Thelight purple histogram is the S/N distribution of 3470 triggers.Almost half of them had 10≤ s≤11, whereas only 7% and 13%of FRBs would be within 1 σ of the threshold, assumingα=0.75 and α=1.5 respectively. §.§ Constraints on α If we treat the arrival times of detectable FRBs as Poissonian,we can calculate the probability of seeing M eventsgiven some expected number of events μ. The expectednumber of events will depend on α, so this likelihoodcan be written as, P(M | α, μ) = μ^M (α) e^-μ(α)/M!. A suitable model for μ must now be chosen. Assuming ahomogeneous Poisson process, the expected number ofevents in a given interval is proportional to the duration of that interval and the area of sky covered. This can be written as μ = r_0 Ω T_ obs where T_ obs is the total searchable observing time and r_0 is the true rate on the sky per unit time andsolid angle. 
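As a sketch of this counting argument: for a non-detection (M = 0) the Poisson likelihood collapses to e^{-μ}, so before marginalizing over rate uncertainty, any α that predicts μ ≳ 3 expected events is disfavoured at the 95% level:

```python
import math

def poisson_likelihood(m, mu):
    """P(M = m | mu) = mu^m exp(-mu) / m!"""
    return mu**m * math.exp(-mu) / math.factorial(m)

# A null result with an expected count of mu ~ 3 happens only ~5% of the time.
p0 = poisson_likelihood(0, 3.0)
print(f"P(M=0 | mu=3) = {p0:.3f}")
```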
We follow <cit.> and tether our expected rate to the empirical rate of a similar survey with detections. This is more direct than the standard method of rate estimation, which quotes an all-sky rate above some fixed fluence threshold and then scales accordingly with α. This also eschews the need to choose a single fluence completeness value or make assumptions about the distribution of pulse widths, and relaxes the need to account for non-uniform sensitivity over the FoV. The Green Bank Telescope Intensity Mapping (GBT IM) survey is a natural reference point, since it found the only published FRB below 1.4 GHz and it overlaps with the CHIME band at 700–800 MHz. While the GBT rate is consistent with the rate at 1.4 GHz, it is more uncertain. To account for this, we can marginalize over the low-frequency event-rate uncertainty from GBT. One could also use the more precisely determined rate from Parkes <cit.>, but then uncertainties about scattering and spectral index are introduced. We discuss this point further in Sect. <ref>. We can now write down a relationship between the rate inferred from GBT and the number of events we expect to see at the Pathfinder. The GBT rate is scaled in the following way, μ_PF = μ_GBT N^PF_days/N^GBT_days×Ω_PF/Ω_GBT× ( H_GBT/H_PF )^α, where H is a thermal sensitivity term given by the survey's bandwidth, B, its SEFD, S, and its signal-to-noise cut-off, s_min. Given that GBT saw one event in 27.5 days with a beam size of 0.055 deg^2, 200 MHz of bandwidth, and a signal-to-noise threshold of 8, we can write this relationship more explicitly. We expect the following number of events, μ_PF = μ_GBT N^PF_days/27.5×Ω/0.055 deg^2( 13.25 Jy/S_ sys )^α( B/200.0 MHz )^α/2×( √(τ^2 + t_i^2)/t_I )^α/2( 8/s_ min )^α. In Eq. <ref>, μ_PF is the expected number of events for the incoherent Pathfinder, and μ_GBT is the expected number of events in 27.5 days of observing with GBT.
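The rate scaling from GBT to the Pathfinder can be sketched numerically. The sky coverage (∼190 deg², inferred from the quoted total exposure), the unit width term, and the 2×10⁴ Jy SEFD are assumed illustrative values; the point of the sketch is that the expected count falls very steeply with α because the Pathfinder is far less sensitive than GBT:

```python
def mu_pathfinder(alpha, mu_gbt=1.0, n_days=52.85, omega_pf=190.0,
                  sefd_jy=2.0e4, b_mhz=400.0, s_min=10.0, width_term=1.0):
    """Expected Pathfinder event count, scaled from the GBT IM reference
    survey (one event in 27.5 days, 0.055 deg^2 beam, 13.25 Jy SEFD,
    200 MHz bandwidth, S/N threshold of 8). width_term stands in for the
    pulse-width correction and is set to 1 here (an assumption)."""
    return (mu_gbt * (n_days / 27.5) * (omega_pf / 0.055)
            * (13.25 / sefd_jy) ** alpha
            * (b_mhz / 200.0) ** (alpha / 2)
            * width_term ** (alpha / 2)
            * (8.0 / s_min) ** alpha)

for a in (0.5, 0.9, 1.5):
    print(f"alpha = {a}: mu_PF = {mu_pathfinder(a):.2f}")
```

With these assumptions the expected count drops by orders of magnitude between α = 0.5 and α = 1.5, which is why only small α is constrained by a non-detection.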
The latter has a maximum-likelihood value of 1 and a 95% confidence interval of 0.25–5.57 <cit.>. We assume GBT's SEFD to be26.5 K / 2.0 K Jy^-1, and we have included a DM smearing term, which we take to be negligible in the case of Green Bank.We use DM=500 pc cm^-3, based on the argument in Sect. <ref>that we are only sensitive to nearby, and therefore relatively low-DM FRBs.One advantage of directly extrapolating from the empirical rate of another survey is that we do not need to compute an integral under the beam to account for direction-dependentsensitivity; the correction factors from the two surveys roughly divide out. However, the effectmust be accounted for when quoting “all-sky” rates, especially if the telescope hassignificant sidelobes.Using the values in Table <ref>, we can calculate the expected numberof events, μ, foreach value of α and compute the probability of non-detection with the likelihood function in Eq. <ref>.If we were to ignore uncertainty in the rate, we would simply apply a p-testusing the maximum-likelihood value in Eq. <ref>, and ask what values of α can be ruled out with, say, 95% certainty. But in general, r_0 and α are degenerate <cit.>. In the case of our non-detection,we cannot strictly differentiate between small-α with a low rate, and large-α with a high rate. Therefore, we marginalize overthe uncertainties in the true sky rate, similar to what is done by<cit.>. Mathematically, this is just thesum of likelihood curves for all rates, r_0>0, weighted by theprobability density at that rate, 𝒫(r_0).We use the GBT rate posterior as 𝒫(r_0), and compute the following integral, P(M=0| α) = ∫_0^∞P(0|α, r_0) 𝒫(r_0) dr_0. This procedure produces the black curve shown in Fig. <ref>.The curve is equal to 0.05 at α≈0.9, meaning if α weresmaller than 0.9, we would have expected to see one or more FRBs in53 days of Pathfinder data >95% of the time. The figure also shows the non-detectionlikelihoods for a range of event rates. 
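The marginalization integral can be sketched with a toy posterior. Here 𝒫(r₀) is taken to be a Gamma(2, 1) density, r e^{-r} (the posterior for a single detection under a flat prior — an assumption standing in for the published GBT posterior), for which the integral has the closed form 1/(1+μ)²:

```python
import math

def marginalized_p0(mu, n_grid=20000, r_max=40.0):
    """P(M=0 | alpha) = int_0^inf exp(-mu * r) P(r) dr via a midpoint rule,
    with the toy posterior P(r) = r * exp(-r)."""
    dr = r_max / n_grid
    return sum(math.exp(-mu * r) * r * math.exp(-r) * dr
               for r in ((k + 0.5) * dr for k in range(n_grid)))

mu = 3.5
p = marginalized_p0(mu)
print(p, 1.0 / (1.0 + mu) ** 2)  # numeric integral vs closed form
```

Note that marginalizing inflates the non-detection probability relative to the fixed-rate e^{-μ}, so a larger μ (smaller α) is needed before exclusion at 95%.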
The green region showsthe likelihood values for rates between 0.34-4.68 times the maximum-likelihoodrate. 0.34 is the value above which 95% of the GBT rate posterior lies,and 4.68 is the upper-bound on 95% of the posterior.§ DISCUSSION §.§ Brightness-dependent α The most model-independent statement we can make about ourresults is not about α, but about theevent rate above our sensitivity threshold, between 400–800 MHz.Turning that rate upper-limit into a lower-limiton α requires some assumption about the functional formof the brightness distribution, and its scaling (i.e. thetrue rate on the sky). For example, we have assumedthe distribution's shape is described by a single power-law. But for a large enough range of brightnesses and an underlying cosmological population, the one-parameterpower-law assumption breaks down (see the light blue curve in Fig. <ref>). Ignoring, for a moment, the Universe's star-formation historyand considering only non-Euclidean effects, we generically expect arelative deficitof faint events. This is because FRBs at large distances will bediminished in energy and rate due to cosmological redshift and time dilation.Therefore, α is brightness-dependent, flattening out for high-z sources and asymptoting to3/2 as z_ src approaches 0, with a simple mapping between brightnessand redshift in the idealized standard-candle case.This phenomenon is seen in logN–logS of long GRBs, which exhibit such a continuously varyingα parameter, and is nearly flat at the fluences of the faintestbursts. In the bright tail, the curve approaches a power-law withindex 3/2 <cit.>. One consequenceof this is that surveys with different sensitivities will measuredifferent logN–logS slopes. For example,the distribution of FRB signal-to-noise within a low-sensitivity surveymay be Euclidean, even though a flatter distributionmight be required when extrapolating to the rates oflarger telescopes. 
<cit.> provide a frameworkfor constraining α based on S/N distributions as well as detection counts between surveys. We justified our constant-α assumptionfor the CHIME Pathfinder by pointing out that we were probingflux densities that are only a couple of orders-of-magnitude larger thanwhere most FRBs have been seen (∼ten times closer, on average),and so the effects of brightness-dependent α might be negligible.However, it is possible that N(>S) turns over at a flux density that is less than our threshold, and approaches α≈1.5 the way the brightest GRBs do.§.§ Consistency with the ultra-bright rateThe brightest event at the time of this publication, FRB 150807,would probably not have been detectable in our survey,in part because it was narrower than our sampling time and its S/N would be reduced <cit.>. Despite not having seen anything,we can ask if the rate of ultra-bright events implied by 150807 and the Lorimer burst is in agreement with our results, for a given value of α. <cit.> predict that the rate at 1.3 GHz of FRBs above 50 Jy ms is 190±60 sky^-1 day^-1. Using α=3/2 and a minimumburst energy of 220 Jy ms with width 5 ms, the quoted rate predictsone every couple of months in our current configuration. In other words,our non-detection could be consistent with 190±60 sky^-1 day^-1if those ultra-bright events are, on average, from nearby sources, and the required extrapolationis near-Euclidean. On the other hand, an α=0.7 extrapolation predicts roughly one per week,which is not consistent with our data.Of course, our non-detection is alsoconsistent with the rate of <cit.> having been overestimated. §.§ 600 MHz vs. 1.4 GHz Our results are the first constraints at 600 MHz,but several surveys have searched, to no avail, at lower frequencies <cit.>.<cit.> searched around 140 MHz with LOFARand saw nothing, though inter-channel smearing meant they had a maximum DM of just 320 pc cm^-3. 
A GBT survey at 350 MHzplaced a 95% confidence upper-limit of a few thousand detectable FRBsper sky per day after searching ∼ 80 days of data <cit.>. This result is still roughly consistent with the rate at 1.4 GHz,which has a lower bound around 10^3 sky^-1 day^-1<cit.>. Nevertheless, the uncertaintyin FRB rate as a function of frequency is a concern for us. Wehave tried to mitigate its effects by tying the Pathfinder results to the only published FRB survey in our band, and marginalizingover the rate distribution. Still, the GBT IM survey only overlaps with ours at the top of the CHIME band, between 700–800 MHz. If the effects ofscattering, free-free absorption, and smearing are significantly moredestructive in the bottom of our band, then we would have overestimatedthe effective rate and, consequently, our lower-limit on α. In spite of the spectral uncertainty in rate, the brightnessdistribution's logarithmic slope, α, should be fairly robustagainst frequency variation. The spectral behaviour of FRBs does affect the shape of N(>S) in the cosmological case,but at a given value of S, there are only special cases whereα is frequency-dependent.If the intrinsic source luminosity is given by a power-law,ℒ∝ν^γ, then we will observe thepart of the spectrum that has been redshifted down to our instrument's band. Negative spectral index lowers the observed energiesof distant FRBs as, ℒ→ℒ(1+z)^γ, therebydecreasing the number of visible distant events and flatteninglog N–log S. Conversely, positive γ steepens it. These effects areshown in a toy-model plotted in Fig. <ref>. While the curves all approach the Euclidean value of 3/2,there is significant spectral index dependence in α forFRBs when cosmological volumes are probed.If the source has non-power-law frequency behaviour (e.g. 
∼ GHz scintillation),then the source-to-source (or even pulse-to-pulse)variance in brightness will increase, but the ensemble distributionshould not be affected unless there is an average tiltin FRB spectra. §.§ Implications for other surveysThe CHIME Pathfinder's incoherent-beam survey is searchinga limited region of FRB parameter space, namely the ultra-brighttail between 400–800 MHz. Because of this large brightness threshold, our results have few implicationsfor full CHIME, which will have a flux density limit thatis several hundred times lower than the current search,thanks to its coherent beams and larger collecting area.Therefore, the primary uncertainty in full CHIME'srate of detection—the deleterious effects of scatteringand/or free-free absorption at low frequencies—remains. But as <cit.> showed, even if the rate between 400–700 MHz is zero,CHIME's overlap with GBT IM between 700–800 MHz indicatesthat it will see multiple bursts per day, assuming current design parameters.<cit.> also found a large event rate,accounting for scattering and spectral index.Upcoming surveys like UTMOST <cit.>and APERTIF <cit.> will also unite sensitivity with FoV. Therefore, theirspeed is largely α-independent, unless they are not operating at full capacity, e.g. during commissioning.Detections made in the commissioning phase, before designsensitivity is reached, could address our claims about brightness-dependent α, since those early, bright bursts, may have a Euclidean distribution.The non-detection does, however, have implications for other lower-sensitivity surveys. The Deep Synoptic Array (DSA) initially will consistof ten 5-m dishes combined incoherently, in the hopes of detectingultra-bright FRBs. Saving to disk buffered voltage data couldachieve ∼ arcsecond localization, allowing for a very high-impact surveyon a moderate budget. However, if α is not significantly smaller than 1.5,then that survey may not detect an event for many months. 
Extrapolating from the Parkes Multi Beam rate of one event every couple of weeks, we estimate that the DSA would have to wait of order a year per FRB if α≈1.1. However, given the importance of localization, a scaled-up DSA with more dishes could prove highly valuable. In a similar vein, the Australian Square Kilometre Array Pathfinder's (ASKAP) small dishes and phased-array feeds will effect large sky coverage with long baselines, potentially providing regular localization <cit.>.

§ CONCLUSIONS

We have performed a shallow, wide-field FRB survey using the CHIME Pathfinder. This was motivated by recent assertions about the flatness of the brightness distribution of FRBs by <cit.>, who showed that α may be less than 1. If this were the case, the incoherent-beam Pathfinder search would be a highly competitive survey, potentially detecting multiple events per week. And if α were not quite so low, our search could demonstrate this with relatively little time on sky. We took 52.85 days of data, amassing an enormous exposure of ∼ 2.4 × 10^5 deg^2 hrs. These data were searched using the tree-dedispersion software that was used to discover FRB 110523 <cit.>. Thousands of triggers above our S/N threshold of 10 were produced, including daily Crab GPs and B0329+54 pulses, but no FRBs were found. By not detecting any FRB signatures, we are able to rule out α<0.9 with 95% confidence, using the GBT 700–900 MHz rate and assuming the single-index power-law approximation holds into our flux sensitivity. This constrains the number of events brighter than ∼ 220√((τ/ ms)) Jy ms for τ between 1.3 and 100 ms to fewer than ∼ 13 sky^-1 day^-1. We quote our upper-limit in this way because surveys have a single signal-to-noise threshold, but in fluence space this cut-off is a curve that depends on pulse width. The sub-arcsecond localization of FRB 121102 has shown that FRBs are distant enough that non-Euclidean effects ought to be significant.
Still, its considerable local dispersion means that the IGM contribution is only about half of 121102's extragalactic DM <cit.>. If local dispersion were a generic property of FRBs, then the volumes that modern surveys are sensitive to would shrink, and the deviation from α=3/2 should be decreased, other things being held equal. As the lower-limit on α increases, the incoherent-beam Pathfinder search experiences diminishing returns in its ability to constrain. For example, with just 5 days on sky, α≲0.6 can be ruled out by a non-detection with 95% confidence. As we have shown, ∼ 53 days sets a lower-bound of 0.9, but zero events in an entire year on sky can only rule out α≲1.15. For this reason, if we choose to run the incoherent-beam Pathfinder search indefinitely, the best strategy is to increase its sensitivity. This would mean investigating further the larger-than-expected noise fluctuations on short time-scales, perhaps mitigating them with baseband RFI removal. The null result suggests similar wide-field low-sensitivity surveys may not be highly competitive, but has little implication for wide-field deep surveys like full CHIME, APERTIF, and UTMOST.

We are very grateful for the warm reception and skillful help we have received from the staff of the Dominion Radio Astrophysical Observatory, which is operated by the National Research Council of Canada. The CHIME Pathfinder is funded by grants from the Natural Sciences and Engineering Research Council (NSERC), and by the Canada Foundation for Innovation (CFI). LC acknowledges that the research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617199. We also thank Manisha Caleb and Casey Law for useful discussions.
In deep learning, depth, as well as nonlinearity, creates non-convex loss surfaces. Then, does depth alone create bad local minima? In this paper, we prove that without nonlinearity, depth alone does not create bad local minima, although it induces a non-convex loss surface. Using this insight, we greatly simplify a recently proposed proof to show that all of the local minima of feedforward deep linear neural networks are global minima. Our theoretical results generalize previous results with fewer assumptions, and this analysis provides a method to show similar results beyond the square loss in deep linear models.

§ INTRODUCTION

Deep learning has recently had a profound impact on the machine learning, computer vision, and artificial intelligence communities. In addition to its practical successes, previous studies have revealed several reasons why deep learning has been successful from the viewpoint of its model classes. An (over-)simplified explanation is the harmony of its great expressivity and big data: because of its great expressivity, deep learning can have less bias, while a large training dataset leads to less variance. The great expressivity can be seen from an aspect of representation learning as well: whereas traditional machine learning makes use of features designed by human users or experts as a type of prior, deep learning tries to learn features from the data as well. More accurately, a key aspect of the model classes in deep learning is the generalization property; despite their great expressivity, deep learning model classes can maintain great generalization properties <cit.>. This would distinguish deep learning from other possibly too-flexible methods, such as shallow neural networks with too many hidden units, and traditional kernel methods with a too-powerful kernel.
Therefore, the practical success of deep learning seems to be supported by the great quality of its model classes. However, having a great model class is not so useful if we cannot find a good model in the model class via training. Training a deep model is typically framed as non-convex optimization. Because of its non-convexity and high dimensionality, it has been unclear whether we can efficiently train a deep model.Note that the difficulty comes from the combination of non-convexity and high dimensionality in weight parameters. If we can reformulate the training problem into several decoupled training problems, with each having a small number of weight parameters, we can effectively train a model via non-convex optimization as theoretically shown in Bayesian optimization and global optimization literatures <cit.>. As a result of non-convexity and high-dimensionality, it was shown that training a general neural network model is NP-hard <cit.>. However, such a hardness-result in a worst case analysiswould not tightly capture what is going on in practice, as we seem to be able to efficiently train deep models in practice. To understand its practical success beyond worst case analysis, theoretical and practical investigations on the training of deep models have recently become an active research area <cit.>. An important property of a deep model is that the non-convexity comes from depth, as well as nonlinearity: indeed, depth by itself creates highly non-convex optimization problems. 
One way to see a property of the non-convexity induced by depth is the non-uniqueness owing to weight–space symmetries <cit.>: the model represents the same function mapping from the input to the output with different distinct settings in the weight space. Accordingly, there are many distinct globally optimal points and many distinct points with the same loss value due to weight–space symmetries, which would result in a non-convex epigraph (i.e., a non-convex function) as well as non-convex sublevel sets (i.e., a non-quasiconvex function). Thus, it has been unclear whether depth by itself can create a difficult non-convex loss surface. The recent work <cit.> indirectly showed, as a consequence of its main theoretical results, that depth does not create bad local minima in deep linear models with the Frobenius norm, although it creates potentially bad saddle points. In this paper, we directly prove that every local minimum of the deep linear model corresponds to a local minimum of the shallow model. Building upon this new theoretical insight, we propose a simpler proof for one of the main results in the recent work <cit.>: all of the local minima of feedforward deep linear neural networks with the Frobenius norm are global minima. The power of this proof can go beyond the Frobenius norm: as long as the loss function satisfies Theorem <ref>, every local minimum of the deep linear model corresponds to a local minimum of the shallow model.

§ MAIN RESULT

To examine the effect of depth alone, we consider the following optimization problem of feedforward deep linear neural networks with the square error loss: min_W L(W)=1/2‖ W_H W_H-1⋯ W_1X-Y‖_F^2, where W_i∈ℝ^d_i× d_i-1 is the weight matrix, X∈ℝ^d_0× m is the input training data, and Y∈ℝ^d_H× m is the target training data. Let p=argmin_0≤ i≤ H d_i be the index corresponding to the smallest width, so that d_p=min_0≤ i≤ H d_i. Note that for any W, we have rank(W_H W_H-1⋯ W_1) ≤ d_p.
To analyze optimization problem (<ref>), we also consider the following optimization problem with a “shallow” linear model, which is equivalent to problem (<ref>) in terms of the global minimum value: min_R F(R)= 1/2‖ RX-Y‖_F^2 s.t. rank(R)≤ d_p, where R∈ℝ^d_H× d_0. Note that problem (<ref>) is non-convex unless d_p = min(d_H,d_0), whereas problem (<ref>) is non-convex even when d_p ≥min(d_H,d_0) with H>1. In other words, deep parameterization creates a non-convex loss surface even without nonlinearity. Though we only consider the Frobenius loss here, the proof holds for general cases: as long as the loss function satisfies Theorem <ref>, every local minimum of the deep linear model corresponds to a local minimum of the shallow model. Our first main result states that even though deep parameterization creates a non-convex loss surface, it does not create new bad local minima. In other words, every local minimum in problem (<ref>) corresponds to a local minimum in problem (<ref>). (Depth creates no new bad local minima) Assume that X and Y have full row rank. If W̅={W̅_1,…,W̅_H} is a local minimum of problem (<ref>), then R̅ = W̅_HW̅_H-1⋯W̅_1 achieves the value of a local minimum of problem (<ref>). Therefore, we can deduce the properties of the local minima in problem (<ref>) from those in problem (<ref>). Accordingly, we first analyze the local minima in problem (<ref>), and obtain the following statement. (No bad local minima for rank-restricted shallow model) If X has full row rank, all local minima of optimization problem (<ref>) are global minima. By combining Theorems <ref> and <ref>, we conclude that every local minimum is a global minimum for feedforward deep linear networks with a square error loss. (No bad local minima for deep linear neural networks) If X and Y have full row rank, then all local minima of problem (<ref>) are global minima. Theorem <ref> generalizes one of the main results in <cit.> with fewer assumptions.
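The relationship between the deep training problem and the rank-constrained shallow problem can be checked numerically. In this sketch X is taken to be the identity (a special case with full row rank), so the global minimum of the shallow problem is given by the Eckart–Young truncated SVD of Y; any deep factorization attains exactly the shallow loss of its product and can never beat that bound:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [5, 4, 2, 4]                      # widths d_0..d_3; bottleneck d_p = 2
X = np.eye(dims[0])                      # special case X = I (full row rank)
Y = rng.standard_normal((dims[-1], dims[0]))
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]

R = Ws[2] @ Ws[1] @ Ws[0]                # product of the deep factors
assert np.linalg.matrix_rank(R) <= min(dims)   # rank(R) <= d_p

deep_loss = 0.5 * np.linalg.norm(R @ X - Y, 'fro') ** 2

# Global minimum of the shallow rank-constrained problem via Eckart-Young:
# keep only the top d_p singular values of Y.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
s[min(dims):] = 0.0
global_min = 0.5 * np.linalg.norm(U @ np.diag(s) @ Vt - Y, 'fro') ** 2

assert deep_loss >= global_min - 1e-9    # no deep W does better than the bound
print(deep_loss, global_min)
```

By the theorems above, a deep local minimum would attain `global_min` exactly; a random W, as here, generally will not.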
Following the theoretical work with a random matrix theory <cit.>, the recent work <cit.>showed that under some strong assumptions, all of the local minima are global minima for a class of nonlinear deep networks. Furthermore, the recent work <cit.> proved the following properties for a class of general deep linear networks with arbitrary depth and width: 1) the objective function is non-convex and non-concave;2) all of the local minima are global minima; 3) every other critical point is a saddle point; and 4) there is no saddle point with the Hessian having no negative eigenvalue for shallow networks with one hidden layer, whereas such saddle points exist for deeper networks. Theorem <ref> generalizes the second statementwith fewer assumptions; theprevious papers <cit.> assume that the data matrix YX^T(XX^T)^-1XY^T has distinct eigenvalues, whereas we do not assume that.§ PROOFIn this section, we provide the proofs of Theorems <ref>, <ref>, and <ref>.§.§ Proof of Theorem <ref>In order to deduce the proof of Theorem <ref>, we need some fundamental facts in linear algebra. The next two lemmas recall some basic facts of perturbation theory for singular value decomposition (SVD). Let M and M̅ be two m× n (m≥ n) matrices with SVDsB=UΣ V^T=(U_1,U_2)([ Σ_1; Σ_2;; ])([ V_1^T; V_2^T ])B̅=U̅Σ̅V̅^T=(U̅_1,U̅_̅2̅)([ Σ̅_1;Σ̅_2; ;])([ V̅_1^T; V̅_2^T ]),where Σ_1=(σ_1,⋯,σ_k), Σ_2=(σ_k+1,⋯,σ_n), Σ_1=(σ̅_1,⋯,σ̅_k), Σ_2=(σ̅_k+1,⋯,σ̅_n), U, V, U̅ and V̅ are orthogonal matrices. 
Continuity of Singular Value The singular value σ_i of a matrix is a continuous map of the entries of the matrix.<cit.> Continuity of Singular Space If ρ:=min{min_1≤ i≤ k,1≤ j≤ n-k|σ_i-σ̅_k+j|,min_1≤ i≤ kσ_i}>0, then: √(‖sinΘ(U_1,U̅_1)‖_F^2+‖sinΘ(V_1,V̅_1)‖_F^2)≤√(‖(M̅-M)V_1‖_F^2+‖(M̅^*-M^*)U_1‖_F^2)/ρ. For a fixed matrix B, we say “matrix A is a perturbation of matrix B” if ‖ A - B‖_∞ is o(1), which means that the difference between A and B is much smaller than any non-zero entry of matrix B. Lemma <ref> implies that any SVD of a perturbed matrix is a perturbation of some SVD of the original matrix under the full-rank condition. More formally: Let M̅ be a full-rank matrix with singular value decomposition M̅=U̅Σ̅V̅^T, and let M be a perturbation of M̅. Then, there exists an SVD of M, M=UΣ V^T, such that U is a perturbation of U̅, Σ is a perturbation of Σ̅, and V is a perturbation of V̅. (Notice that the SVD of a matrix may not be unique, owing to rotation of the singular space corresponding to a repeated singular value.) Proof: Under a small perturbation of the matrix M̅, Lemma <ref> shows that the singular values do not change much. Thus, if ‖M̅-M‖_∞ is small enough, |σ_i-σ̅_i| is also small for all i. Remember that all singular values of M̅ are positive. By letting Σ_1 contain only the singular value σ_i (which may be repeated, in which case U_1 and V_1 are the singular spaces corresponding to the singular value σ_i), we have ρ > 0 in Lemma <ref>; thus Lemma <ref> implies that the singular space of the perturbed matrix corresponding to the singular value σ_i of the initial matrix does not change much. The statement of the lemma follows by combining this result for the different singular values (i.e., consider each index i for different σ_i in the above argument). We say that W satisfies the rank condition if rank(W_H⋯W_1) = d_p. Any perturbation of the product of matrices is the product of perturbed matrices, when the original product satisfies the rank condition.
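The singular-value continuity used here can be checked numerically via Weyl's inequality, |σ_i(M) − σ_i(M̅)| ≤ ‖M − M̅‖₂, which bounds how much each singular value can move under a perturbation:

```python
import numpy as np

rng = np.random.default_rng(1)
M_bar = rng.standard_normal((4, 3))       # full rank almost surely
E = 1e-6 * rng.standard_normal((4, 3))    # small perturbation
s_bar = np.linalg.svd(M_bar, compute_uv=False)
s = np.linalg.svd(M_bar + E, compute_uv=False)

# Weyl's inequality for singular values: each sigma_i moves by at most ||E||_2,
# so for a full-rank M_bar all singular values of the perturbed matrix
# stay strictly positive, as the proof above requires.
assert np.all(np.abs(s - s_bar) <= np.linalg.norm(E, 2) + 1e-12)
assert np.all(s > 0)
print(np.max(np.abs(s - s_bar)))
```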
More formally:

Let R̅=W̅_HW̅_H-1⋯W̅_1 with rank(R̅)=d_p. Then, for any R such that R is a perturbation of R̅ and rank(R)≤ d_p, there exists {W_1,W_2,…,W_H} such that W_i is a perturbation of W̅_i for all i ∈{1,…,H} and R=W_H W_H-1⋯ W_1.

We will prove the lemma by induction. When H=2, we can easily show that a perturbation of the product of two matrices is the product of one matrix and a perturbation of the other. When H=k≥3, we let M be the product of two specific adjacent matrices; by induction, the perturbation R of the product is the product of a perturbation of M and perturbations of the other H-2 matrices. A perturbation of M is in turn the product of perturbations of those two specific matrices, which proves the statement for H=k.

Proof: The case H=1 holds by setting W_1=R. We prove the lemma for H≥ 2 by induction. We first consider the base case H=2 with R̅=W̅_2W̅_1. Let R̅=U̅Σ̅V̅^T be the SVD of R̅. It follows from Lemma <ref> that there exists an SVD of R, R = U Σ V^T, such that U is a perturbation of U̅, Σ is a perturbation of Σ̅, and V is a perturbation of V̅. Because rank(R̅)=d_p, under a small perturbation the positive singular values remain strictly positive, whereby rank(R)≥ d_p. Together with the assumption rank(R)≤ d_p, we have rank(R)=d_p. Let S̅_2 = U̅^T W̅_2 and S̅_1 = W̅_1V̅. Note that U̅Σ̅V̅^T= R̅= W̅_2W̅_1. Hence, S̅_2 S̅_1 = Σ̅ is a diagonal matrix. Remember that Σ is a perturbation of Σ̅; thus there is an S_2, which is a perturbation of S̅_2 (each row of S_2 is a scaling of the corresponding row of S̅_2), such that S_2S̅_1 = Σ. Let W_2=US_2 and W_1=S̅_1 V^T. Then, W_1 is a perturbation of W̅_1, W_2 is a perturbation of W̅_2, and W_2W_1=R, which proves the case H=2.

For the inductive step, given that the lemma holds for the case H= k≥ 2, let us consider the case H=k+1≥ 3 with R̅=W̅_k+1W̅_k⋯W̅_1. Let ℐ be an index set defined as ℐ = {p,p-1} if p≥ 2, and ℐ = {p+2,p+1} if p=0 or p=1. We denote the i-th element of the set ℐ by ℐ_i.
Then, M̅ =W̅_ℐ_1W̅_ℐ_2 exists, as k+1≥ 3. Note that R̅ can be written as a product of k matrices including M̅ (for example, R̅=W̅_H⋯W̅_ℐ_1+1M̅W̅_ℐ_2-1⋯W̅_1). Thus, from the inductive hypothesis, for any R such that R is a perturbation of R̅ and rank(R)≤ d_p, there exists a set of the desired k matrices, M and W_i for i∈{1,…,k+1}∖ℐ, such that W_i is a perturbation of W̅_i for all i∈{1,…,k+1}∖ℐ, M is a perturbation of M̅, and the product is equal to R. Meanwhile, because M̅ is either a d_p by d_p-2 matrix or a d_p+2 by d_p matrix, we have rank(M̅)≤ d_p and rank(M) ≤ d_p, and it follows from rank(R̅)=d_p that rank(M̅)=d_p. Thus, by setting R̅←M̅ and R ← M (note that d_p in R̅ = W̅_k+1W̅_k⋯W̅_1 is equal to d_p in M̅ =W̅_ℐ_1W̅_ℐ_2), we can apply the proof for the case H=2 to conclude: there exists { W_ℐ_1, W_ℐ_2} such that W_i is a perturbation of W̅_i for all i ∈ℐ and M= W_ℐ_1 W_ℐ_2. Combined with the above statement from the inductive hypothesis, this implies the lemma for H=k+1, which finishes the proof by induction.

The next two theorems show that, for any local minimum of L(·), there is another local minimum of L(·) whose function value is the same as the original and which satisfies the rank condition.

Let W={W_1,⋯,W_H} be a local minimum of problem (<ref>) and R≜ W_H W_H-1⋯ W_1. If W_i is not of full rank for some i, then there exists a W̅_i such that W̅_i is of full rank, W̅_i is a perturbation of W_i, W̅={W_1,⋯,W_i-1,W̅_i,W_i+1,⋯,W_H} is a local minimum of problem (<ref>), and L(W)=L(W̅).

The idea of the proof is that if we change only the single weight W_i and keep all other weights fixed, the problem becomes a convex least-squares problem. We are then able to perturb W_i so as to maintain the objective value while making the perturbed matrix full rank.

Proof of Theorem <ref>: For notational convenience, let A=W_i-1⋯ W_1X and B^T=W_H W_H-1⋯ W_i+1, and let L_i(W_i)=1/2B^TW_iA-Y_F^2. Because W is a local minimum of L, W_i is a local minimum of L_i.
Let A=U_1D_1V_1^T and B=U_2D_2V_2^T be the SVDs of A and B, respectively, where D_i is a diagonal matrix with the first s_i terms being strictly positive, i=1,2. Minimizing L_i over W_i is a least-squares problem, and the normal equation is BB^TW_iAA^T=BYA^T, hence

W_i ∈(BB^T)^+BYA^T(AA^T)^++{ M|BB^TMAA^T=0} =U_2D_2^+V_2^TYV_1D_1^+U_1^T+{ U_2KU_1^T|K_1:s_2,1:s_1=0},

where (·)^+ is the Moore–Penrose pseudo-inverse and K is a matrix of suitable dimensions whose entries in the top-left s_2× s_1 block are 0. Since V_2^TYV_1 is of full rank,

rank(D_2^+V_2^TYV_1D_1^+) ≥max{ 0,s_2+s_1-max{d_i,d_i-1}}.

Thus, we can choose a proper K (which contains d_i+d_i-1-s_2-s_1 ones at proper positions, with all other entries being zero) such that D_2^+V_2^TYV_1D_1^++K is of full rank, whereby U_2(D_2^+V_2^TYV_1D_1^++K)U_1^T is of full rank. Therefore, there is a full-rank Ŵ_i that satisfies the normal equation (<ref>).

Let W̅_i(μ)=W_i+ μ(Ŵ_i-W_i). Then, W̅_i(μ) also satisfies the normal equation, and L(W̅(μ))=L_i(W̅_i(μ))=L_i(W_i)=L(W) for any μ>0.

Note that W is a local minimum of L(W). Thus, there exists δ>0 such that for any W^0 satisfying W^0-W_∞≤δ, we have L(W^0)≥ L(W). It follows from Ŵ_i being full rank that there exists a small enough μ such that W̅_i(μ) is full rank and W̅_i(μ)-W_i_∞ is arbitrarily small (in particular, W̅_i(μ)-W_i_∞≤δ/2), because the non-full-rank matrices are discrete on the line W̅_i(μ) with parameter μ>0, as can be seen by considering the determinant of W̅_i(μ)^TW̅_i(μ) or W̅_i(μ)W̅_i(μ)^T as a polynomial in μ. Therefore, for any W^0 such that W^0-W̅(μ)_∞≤δ/2, we have

W^0-W_∞≤W^0-W̅(μ)_∞+W̅_i(μ)-W_i_∞≤δ ,

whereby L(W^0) ≥ L(W) = L(W̅(μ)). This shows that W̅(μ)={W_1,⋯,W_i-1,W̅_i(μ),W_i+1,⋯,W_H} is also a local minimum of problem (<ref>) for some small enough μ.

Let R=AB for two given matrices A∈ R^d_1× d_2 and B∈ R^d_2× d_3. If d_1≤ d_2, d_1≤ d_3, and rank(A)=d_1, then any perturbation of R is the product of A and a perturbation of B.
Proof: Let A=UDV^T be the SVD of A; then R=UDV^TB. Let R̅ be a perturbation of R and let B̅=B+VD^+U^T(R̅-R). Then, B̅ is a perturbation of B and AB̅=R̅, by noticing that DD^+=I, as A has full row rank.

If W̅={W̅_1,⋯,W̅_H} is a local minimum with each W̅_i being full rank, then there exists Ŵ={Ŵ_1,⋯,Ŵ_H} such that Ŵ_i is a perturbation of W̅_i for all i∈{ 1,…,H}, Ŵ is a local minimum, L(Ŵ)=L(W̅), and rank(Ŵ_HŴ_H-1⋯Ŵ_1)=d_p.

In the proof of Theorem <ref>, we will use Theorem <ref> and Lemma <ref> to show that we can perturb W̅_p-1,W̅_p-2,…, W̅_1 in sequence so that the perturbed weights still form a local minimum and rank(Ŵ_pŴ_p-1⋯Ŵ_1)=d_p. A similar strategy ensures rank(Ŵ_HŴ_H-1⋯Ŵ_p+1)=d_p, which then proves the whole theorem.

Proof of Theorem <ref>: If p≠1, consider

L_1(T):=W̅_H⋯W̅_p+1T W̅_p-2⋯W̅_1X-Y_F^2.

Then, it follows from Lemma <ref> and the fact that W̅ is a local minimum of L(W) that T̅ is a local minimum of L_1, where T̅=W̅_pW̅_p-1. It follows from Theorem <ref> that there exists T̂ such that T̂ is close enough to T̅, T̂ is a local minimum of L_1(T), L_1(T̂)=L_1(T̅), and rank(T̂)=d_p. Note that T̂ is a perturbation of T̅, whereby, from Lemma <ref>, there exist Ŵ_p, Ŵ_p-1, which are perturbations of W̅_p and W̅_p-1, respectively, such that Ŵ_pŴ_p-1 = T̂. Thus, Ŵ^0=(W̅_H,⋯,W̅_p+1,Ŵ_p,Ŵ_p-1,W̅_p-2,⋯,W̅_1) is a local minimum of L(W), L(Ŵ^0)=L(W̅), and rank(Ŵ_pŴ_p-1)=d_p. Continuing in this way, we can find Ŵ_p,⋯,Ŵ_1 such that Ŵ^1=(W̅_H,⋯,W̅_p+1,Ŵ_p,Ŵ_p-1,⋯,Ŵ_1) is a local minimum of L(W), Ŵ_i is a perturbation of W̅_i for i=1,⋯,p, L(Ŵ^1)=L(W̅), and rank(Ŵ_pŴ_p-1⋯Ŵ_1)=d_p. Similarly, we can find Ŵ_H,⋯,Ŵ_p+1 such that Ŵ^2=(Ŵ_H,⋯,Ŵ_p+1,Ŵ_p,Ŵ_p-1,⋯,Ŵ_1) is a local minimum of L(W), Ŵ_i is a perturbation of W̅_i for i=p+1,⋯,H, L(Ŵ^2)=L(Ŵ^1)=L(W̅), and rank(Ŵ_HŴ_H-1⋯Ŵ_p+1)=d_p. Noticing that

rank(Ŵ_H⋯Ŵ_1) ≥ rank(Ŵ_HŴ_H-1⋯Ŵ_p+1) + rank(Ŵ_pŴ_p-1⋯Ŵ_1)-d_p = d_p

and rank(Ŵ_H⋯Ŵ_1)≤min_i=0,…,Hd_i=d_p, we have rank(Ŵ_H⋯Ŵ_1)=d_p, which completes the proof.
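The factorization lemma above (a perturbation of R = AB factors through a perturbation of B when A has full row rank) can be checked numerically via the explicit construction B̅ = B + VD^+U^T(R̅-R) from its proof. A minimal sketch with random matrices; the shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, d3 = 3, 5, 4                   # d1 <= d2 and d1 <= d3
A = rng.standard_normal((d1, d2))      # full row rank with probability 1
B = rng.standard_normal((d2, d3))
R = A @ B
R_bar = R + 1e-4 * rng.standard_normal((d1, d3))   # perturb the product

# Construction from the proof: with A = U D V^T (reduced SVD),
# set B_bar = B + V D^+ U^T (R_bar - R); then A B_bar = R_bar.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
B_bar = B + Vt.T @ np.diag(1.0 / s) @ U.T @ (R_bar - R)

assert np.allclose(A @ B_bar, R_bar)
# B_bar is a perturbation of B: its distance from B is controlled
# by |R_bar - R| / sigma_min(A).
assert np.linalg.norm(B_bar - B) <= np.linalg.norm(R_bar - R) / s.min() + 1e-12
```

Note that A A^+ = UDV^T (VD^+U^T) = I here precisely because A has full row rank, which is what makes A B̅ = R̅ exact.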
Proof of Theorem <ref>: It follows from Theorems <ref> and <ref> that there exists another local minimum Ŵ={Ŵ_1,⋯,Ŵ_H} such that L(Ŵ) = L(W̅) and rank(Ŵ_HŴ_H-1⋯Ŵ_1)=d_p. Recall that R̂=Ŵ_HŴ_H-1⋯Ŵ_1. It then follows from Theorem <ref> that for any R such that R is a perturbation of R̂ and rank(R)≤ d_p, we have R = W_HW_H-1⋯ W_1, where W_i is a perturbation of Ŵ_i. Therefore, by noticing that Ŵ is a local minimum of (<ref>), we have

F(R)= L(W)≥ L(Ŵ) = F(R̂) ,

which shows that R̂ is a local minimum of (<ref>).

In the proof of Theorem <ref>, we first show that we need only consider the case where X is an identity matrix and Y is a diagonal matrix, by noticing that the Frobenius norm is invariant under rotations. Then we show that a local minimum must be a block-diagonal, symmetric matrix, each of whose blocks is a multiple of a projection matrix associated with a single eigenvalue of the diagonal matrix Y. Finally, we show that these projections must be onto the eigenspaces of Y corresponding to the largest possible eigenvalues, which shows that all local minima share the same function value.

§.§ Proof of Theorem <ref>

Let X=U_1Σ_1 V_1^T be the SVD of X, where Σ_1 is a diagonal matrix with full row rank. Then,

F(R)= RU_1 Σ_1 V_1^T -Y_F^2= RU_1Σ_1 - YV_1_F^2= (RU_1)(Σ_1)_1:d_1, 1:d_1 - (YV_1)_1:d_2,1:d_1_F^2 + Const,

where Const is a constant independent of R and (·)_t_1:t_2,t_3:t_4 is the submatrix of (·) containing rows t_1 to t_2 and columns t_3 to t_4 of (·). If R is a local minimum of (<ref>), then S=RU_1 is a local minimum of

[min_S G(S) = SΣ̂_1 - Ŷ_F^2; s.t. rank(S)≤ d_p, ]

where Σ̂_1 := (Σ_1)_1:d_1, 1:d_1, Ŷ := (YV_1)_1:d_2,1:d_1, and the difference between the objective function values of (<ref>) and (<ref>) is a constant.
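The claim that the reduction from F to G only shifts the objective by a constant independent of R can be spot-checked numerically. The sketch below uses small random instances with illustrative shapes; the constant is the part of Y outside the row space of X:

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, n = 4, 3, 7
X = rng.standard_normal((d_in, n))     # full row rank with probability 1
Y = rng.standard_normal((d_out, n))

U1, s1, V1t = np.linalg.svd(X, full_matrices=True)   # X = U1 [diag(s1) 0] V1^T
Sigma1_hat = np.diag(s1)                             # (Sigma_1)_{1:d_in, 1:d_in}
Y_hat = (Y @ V1t.T)[:, :d_in]                        # leading d_in columns of Y V_1

def F(R):
    return np.linalg.norm(R @ X - Y, 'fro') ** 2

def G(S):
    return np.linalg.norm(S @ Sigma1_hat - Y_hat, 'fro') ** 2

R1 = rng.standard_normal((d_out, d_in))
R2 = rng.standard_normal((d_out, d_in))
# F(R) - G(R U_1) is the same constant for every R ...
assert np.isclose(F(R1) - G(R1 @ U1), F(R2) - G(R2 @ U1))
# ... namely the squared norm of the part of Y outside the row space of X.
const = np.linalg.norm((Y @ V1t.T)[:, d_in:], 'fro') ** 2
assert np.isclose(F(R1) - G(R1 @ U1), const)
```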
Let Ŷ := U_2Σ_2 V_2^T be the SVD of Ŷ; then

G(S)= SΣ̂_1 - U_2Σ_2 V_2^T_F^2 = U_2^TSΣ̂_1V_2 - Σ_2_F^2 ,

and if S is a local minimum of G(S), then T:=U_2^TSΣ̂_1V_2 is a local minimum of

[min_T H(T) = T - Σ_2_F^2; s.t. rank(T)≤ d_p, ]

and the objective function values of (<ref>) and (<ref>) are the same at corresponding points. Let Σ_2 have r distinct positive diagonal terms λ_1>⋯>λ_r>0 with multiplicities m_1,⋯,m_r. Let T^* be a local minimum of (<ref>), and let T^*=U^*Σ^*V^*T=[U_S^*U_N^*][[ Σ_S^* 0; 0 0 ]][[ V_S^*T; V_N^*T ]] be the SVD of T^*, where Σ^*_S contains the positive singular values. Let P_L:=U_S^*(U_S^*TU_S^*)^-1U_S^*T and P_R:=V_S^*(V_S^*TV_S^*)^-1V_S^*T be the projection matrices onto the spaces spanned by U_S^* and V_S^*, respectively. Note that { T|P_LT=T}⊆{ T|rank(T)≤ d_p}; thus, T^* is also a local minimum of

min T-Σ_2_F^2 s.t. P_LT=T,

which is a convex problem, and it can be shown from the first-order optimality condition that the only local minimum of (<ref>) is T^*=P_LΣ_2. Similarly, we have T^*=Σ_2P_R. Then, D:=Σ_2Σ_2^T is a diagonal matrix with r distinct non-zero diagonal terms λ_1^2>⋯>λ_r^2>0 with multiplicities m_1,⋯,m_r. Therefore,

P_LDP_L =P_LΣ_2Σ_2^TP_L^T=T^*T^*T=Σ_2P_RP_R^TΣ_2^T=Σ_2P_RΣ_2^T=Σ_2T^*T=Σ_2Σ_2^TP_L^T=DP_L.

Note that the left-hand side is a symmetric matrix; thus, DP_L is also a symmetric matrix. Meanwhile, P_L is a symmetric matrix, whereby P_L is an r-block diagonal matrix with each block corresponding to equal diagonal terms of D. Therefore, T^*=P_LΣ_2 is also an r-block diagonal matrix. Let

T^*=[[ T_1^*; ⋱; T_r^*; 0 ]],

where T_i^* is an m_i× m_i matrix; then T^*T^*T=Σ_2T^*T implies T_i^*T_i^*T=λ_iT_i^*T. Thus, T_i^* is a symmetric matrix and T_i^*/λ_i is a projection matrix. Let rank(T_i^*)=d_p_i; then ∑_i=1^rd_p_i≤ d_p and tr(T_i^*)=λ_id_p_i, whereby

H(T^*) =∑_i=1^rT_i^*-λ_iI_m_i_F^2=∑_i=1^r tr(T_i^*2)-2λ_itr(T_i^*)+m_iλ_i^2=∑_i=1^r(m_i-d_p_i)λ_i^2.

Let j be the largest number such that ∑_i=1^jm_i < d_p.
Then, it is easy to see that the global minima of (<ref>) satisfy d_p_i=m_i for i≤ j, d_p_j+1=d_p-∑_i=1^jm_i, and d_p_i = 0 for i>j+1; this characterizes all of the global minima.

Now, let us show that all local minima must be global minima. Since a local minimum T^* is a block-diagonal matrix, we can assume without loss of generality that both Σ_2 and T^* are square matrices, because the all-zero rows and columns in Σ_2 and T^* do not change anything. Thus, it follows from each T_i^* being symmetric that T^* is a symmetric matrix. Remember that T_i^*/λ_i is a projection matrix; thus the eigenvalues of T_i^* are either 0 or λ_i, whereby

T^*=∑_i=1^r∑_j=1^d_p_iλ_iu_iju_ij^T,

where u_ij is the j-th normalized orthogonal eigenvector of T^* corresponding to the eigenvalue λ_i. It is easy to see that, at a local minimum, we have ∑_i=1^rd_p_i=d_p; otherwise, there is a descent direction obtained by adding to T^* a rank-1 matrix corresponding to one positive eigenvalue. If there exist i_1,i_2 such that i_1<i_2, d_p_i_1<m_i_1, and d_p_i_2≥1, then there exists u̅_i_1 such that u̅_i_1⊥ u_i_1j for j=1,⋯, d_p_i_1. Let

T(θ) := T^*-λ_i_2u_i_21u_i_21^T +(λ_i_1sin^2θ +λ_i_2cos^2θ) (u_i_21cosθ + u̅_i_1sinθ)(u_i_21cosθ +u̅_i_1sinθ )^T.

Then, rank(T(θ))=rank(T^*)=d_p, T(0)=T^*, and

H(T(θ))=H(T^*)+λ_i_2^2-(λ_i_1sin^2θ+λ_i_2cos^2θ)^2 .

It is easy to check that H(T(θ)) is monotonically decreasing in θ on [0,π/2], which gives a descent direction at T^*, contradicting the assumption that T^* is a local minimum. Therefore, no such i_1 and i_2 exist, which shows that T^* is a global minimum.

§.§ Proof of Theorem <ref>

The statement follows from Theorems <ref> and <ref>.

§ CONCLUSION

We have proven that, even though depth creates a non-convex loss surface, it does not create new bad local minima. Based on this new insight, we have provided a new, simple proof of the fact that all of the local minima of feedforward deep linear neural networks are global minima, as a corollary.
The benefits of this new result are not limited to simplifying the previous proof. For example, our results apply to problems beyond the square loss. Consider the shallow problem (S): min L(R) s.t. rank(R) ≤ d_p, and the deep-parameterization counterpart (D): min L(W_H W_H-1⋯ W_1). Our analysis shows that, for any function L satisfying the assumptions of Theorem 3.2, any local minimum of (D) corresponds to a local minimum of (S). This is not limited to L being the least-squares loss, and this is why we say that depth creates no bad local minima.

In addition, our analysis applies directly to matrix completion, unlike previous results. <cit.> show that local minima of the symmetric matrix completion problem are global with high probability. This should extend to the asymmetric case: denoting f(W) := ∑_i,j ∈Ω (Y - W_2 W_1)_i,j^2, where Ω is the set of observed entries, any local minimum of f(W) is global with high probability. Our analysis then directly shows that this result extends to deep linear parameterization: for h(W) := ∑_i,j ∈Ω (Y - W_H W_H-1⋯ W_1)_i,j^2, any local minimum of h(W) is global with high probability.

§ ACKNOWLEDGEMENTS

The authors would like to thank Professor Robert M. Freund and Professor Leslie Pack Kaelbling for their generous support. We also want to thank Cheng Mao for helpful discussions.
Department of Physics, Stanford University, Stanford, California 94305-4045, USA

Kavli Institute for Theoretical Physics, University of California, Santa Barbara, California 93106, USA

Station Q, Microsoft Research, Santa Barbara, California 93106-6105, USA

Department of Physics, Stanford University, Stanford, California 94305-4045, USA

We investigate theoretically an interacting metallic wire with a strong magnetic field directed along its length and show that it is a new and highly tunable one-dimensional system. By considering a suitable change in spatial geometry, we map the problem in the zeroth Landau level with Landau level degeneracy N to one-dimensional fermions with an N-component pseudospin degree of freedom and SU(2)-symmetric interactions. This mapping allows us to establish the phase diagram as a function of the interactions for small N (and make conjectures for large N) using renormalization group and bosonization techniques. We find pseudospin-charge separation with a gapless U(1) charge sector and several possible strong-coupling phases in the pseudospin sector. For odd N, we find a fluctuating pseudospin-singlet charge density wave phase and a fluctuating pseudospin-singlet superconducting phase which are topologically distinct. For even N>2, similar phases exist, although they are not topologically distinct, and an additional, novel pseudospin-gapless phase appears. We discuss experimental conditions for observing our proposals.

Strongly Interacting Phases of Metallic Wires in Strong Magnetic Field

Xiao-Liang Qi

December 30, 2023
======================================================================

§ INTRODUCTION

Interacting quantum systems in one spatial dimension exhibit many exotic behaviors, such as Luttinger liquid phases and other phases with quasi-long-range order<cit.>.
Remarkably, these behaviors are often tractable theoretically thanks to powerful tools special to one dimension (1D), such as bosonization<cit.> and 1+1D conformal field theory (CFT) techniques<cit.>. There are a wide range of systems which can be treated with such tools, including spin chains<cit.>, 1D metals<cit.>, and coupled wires<cit.>, but the underlying degrees of freedom in the 1D problem are typically not possible to tune, in the sense that spin chains are always (after fermionization) built from a fixed number of colors of spin-1/2 fermions and 1D metals are always built from spin-1/2 fermions.

In this paper, we consider a spinless, interacting metallic wire with strong magnetic field directed along its length and relate it to a new class of 1D systems: interacting metals whose electrons have a large (pseudo)spin. This is particularly interesting because the fact that the magnetic field changes the Landau level degeneracy in the first problem will map onto a tunable number of (degenerate) spin states in the second problem.

For the simplest intuition about how to treat the problem of the wire in field, consider semiclassical electrons traveling in three dimensions in a magnetic field B. They move freely along the direction of the field, but in the plane perpendicular to the field, they move in cyclotron orbits whose radius goes as 1/B. At strong field, the motion thus becomes increasingly one-dimensional, similar to the plasma physics concept of magnetic confinement, and the number of non-overlapping orbits that fit into a wire scales as B. In more quantum language, consider a metal in a magnetic field strong enough that only the zeroth Landau level (ZLL) is occupied at every momentum along the field.
Kinetic energy is quenched in directions perpendicular to the field, so naively the degenerate Landau level states are like one-dimensional wires which are coupled only by electronic interactions, and the degeneracy scales with B.

However, in the quantum case there is a key difference between the ZLL problem and coupled wires. As a consequence of the nontrivial topological invariant of the Landau level<cit.>, no orthogonal basis for the ZLL can have wavefunctions which are local in both directions perpendicular to the field. Since electron-electron interactions are local in real space, this means that there is no natural choice of basis in which the interaction between basis states is local. Another problem is that the choice of basis makes magnetic translation symmetry implicit, making it difficult to make approximations while preserving the symmetry.

Motivated by the problems of the coupled wire picture, in this paper we propose an alternative approach to this problem which explicitly preserves symmetry. We map a metallic wire in the quantum limit with an N-fold degenerate ZLL to a large-pseudospin one-dimensional wire with N degenerate spin states. Magnetic translation symmetry is mapped to an SU(2) symmetry of the pseudospin. (The boundary of the wire, which breaks magnetic translation symmetry, is mapped to an SU(2)-breaking external field.) Although this mapping is a small modification of one already known<cit.> at the level of non-interacting electrons, our main insight is that the resulting one-dimensionality and symmetry make the interacting problem tractable. We are able to apply the powerful machinery of both Abelian and non-Abelian bosonization, along with conformal field theory techniques, to elucidate the phase diagram as a function of generic interaction parameters.

There has been considerable previous work on interacting bulk metals in the zeroth Landau level.
On the theory side, many approaches of varying sophistication have been used, resulting in predictions of density waves<cit.>, exciton insulators<cit.>, superconductors<cit.> (SC), and marginal Fermi liquids<cit.>. Experimentally, there is evidence for field-induced transitions to an insulating state in bulk bismuth<cit.> and graphite<cit.>, which have been understood as charge density wave (CDW) transitions<cit.> but are still being studied. In contrast, our interest is in using a wire geometry in order to more clearly bring out the quasi-one-dimensionality induced by the magnetic field, and to more easily apply 1D tools.

A major technical strength of our approach is that the mapping to pseudospins accounts for interactions with range longer than the magnetic length, in contrast to previous work and any naive coupled-wire treatment.

Before proceeding, we summarize our phase diagram, which depends strongly on the parity of the Landau level degeneracy N. For odd N, we have identified three phases. One is a Luttinger liquid, having a gapless charge sector and a free pseudospin sector. The other two have a gapless charge sector and fully gapped pseudospin sector, and we argue that they are separated by a first-order transition. One has power-law correlations of the CDW order parameter and the other has power-law correlations of p-wave SC order; these phases are unusual because the power is tuned by N (that is, by the magnetic field). For even N>2, we have identified four different phases, all of which have a gapless charge sector. One is again a Luttinger liquid. Two have a fully gapped pseudospin sector, with either power-law correlations of CDW order or s-wave SC order, and the transition between them can be second-order. Again the power laws can be tuned by N.
The final phase is, to our knowledge, new: it has a gapless pseudospin sector, and we provide evidence that it has coexisting power-law correlations of pseudospin-density wave order and p-wave, pseudospin-triplet SC order.

The structure of this paper is as follows. In Section <ref>, we discuss the non-interacting part of the model and construct the analogy between fermions in a wire and fermions on the spatial manifold ℝ× S^2. In Section <ref>, we write down the interacting Hamiltonian and cast it into a convenient form which makes its symmetry explicit. Sections <ref> through <ref> contain our main results. In Section <ref>, for small N, we explicitly analyze our model through a perturbative renormalization group (RG) procedure and establish a phase diagram using non-Abelian bosonization. We identify the nature of the phases more explicitly using Abelian bosonization in Section <ref>. In Section <ref>, we generalize the results of the previous two sections to conjectures about the phase diagram for all N. In Section <ref>, we discuss the effect of symmetry-breaking perturbations in order to bring our results to bear on the experimentally relevant geometry. Section <ref> relates our results to previously known ones in the bulk (large-N) limit. Finally, Section <ref> consists of prospects for experimentally realizing these phases, open questions, and further discussion.

§ NON-INTERACTING MODEL

In this section, we review the Landau level problem of spinless fermions on the wire ℝ× D^2, where D^2 is the two-dimensional disk of radius R, and on the manifold ℝ× S^2. We will build a connection between the two problems, and review the mapping from the lowest Landau level of the latter onto itinerant spinful 1D electrons. We will then use the latter model as the basis for much of the rest of the paper.

To establish conventions, we call the direction along the length of the wire x. The geometries are pictured in Table <ref>, along with a summary of the results of this section.
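To get a feeling for the scales involved before setting up the model, the magnetic length and the number of flux quanta through a wire cross-section are easily estimated. The parameter values below (B = 10 T, R = 100 nm) are purely illustrative and are not taken from the text:

```python
import math

hbar = 1.054571817e-34   # J s
h = 6.62607015e-34       # J s
e = 1.602176634e-19      # C

def magnetic_length(B):
    """l_B = sqrt(hbar / (e B)) in meters, for B in tesla."""
    return math.sqrt(hbar / (e * B))

def flux_quanta(B, R):
    """n_phi = B * (pi R^2) / (h/e): flux quanta through a disk of radius R."""
    return B * math.pi * R**2 / (h / e)

B, R = 10.0, 100e-9
# Familiar benchmark: l_B ~ 25.7 nm at 1 T, shrinking as 1/sqrt(B).
assert abs(magnetic_length(1.0) * 1e9 - 25.66) < 0.05
print(magnetic_length(B) * 1e9)   # ~8.1 nm
print(flux_quanta(B, R))          # ~76 nearly degenerate ZLL states per k_x
```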
§.§ Landau Levels on the Disk and Sphere

We start by considering Schrodinger particles in a strong magnetic field along the x direction, i.e. with Hamiltonian

H = (p-eA)^2/2m^∗

where m^∗ is the effective mass and A is the electromagnetic vector potential. We can always choose a gauge such that the eigenvalue k_x of p_x is a good quantum number. In the limit R →∞, this problem is simple; the spectrum forms Landau levels of energy

E_n(k_x) = ω_c(n+1/2) + k_x^2/2m^∗

where n is a non-negative integer and ω_c = eB/m^∗ is the cyclotron frequency. At fixed k_x, each Landau level has degeneracy approximately equal to the number of flux quanta n_ϕ penetrating a fixed-x cross-section of the system. Working in symmetric gauge, as appropriate for the ℝ× D^2 geometry, these degenerate states are localized in the radial direction and labeled by the integer eigenvalue m of the angular momentum operator L_x. In the zeroth Landau level, the states have a spatial width of order l_B = √(1/eB). At finite R, the degeneracy is broken due to the presence of the potential V_edge associated with the boundary; those states which are radially localized close to the boundary have higher energy. The spectrum is shown schematically in Fig. <ref>.

This broken degeneracy arises from the boundary-induced loss of magnetic translation symmetry in the radial direction. The remaining symmetries are translations along x and an O(2) rotation symmetry. We would like more symmetry in order to better constrain the interacting problem. The reason, as discussed in the introduction, is that the nontrivial topological invariant<cit.> of a Landau level makes it impossible to form an orthogonal basis for the ZLL with wavefunctions local in both directions perpendicular to x.
Therefore, interactions, projected to the ZLL, cannot be well-constrained by locality in any basis; with no locality and not much symmetry, there is no reason to expect the interacting problem to be tractable.

In order to enrich the symmetry, we change the spatial manifold to ℝ× S^2. In this case, the wire has the spherical version of magnetic translation symmetry, which is an SU(2) rotation symmetry. To see this, consider now Schrodinger electrons on a wire with a spherical cross-section, and suppose that every cross-section has a uniform, fixed flux piercing it. This requires a monopole inside the sphere, so the flux will be quantized to n_ϕ∈ℤ flux quanta. The Hamiltonian is

H = Λ^2/2m^∗R^2 + p_x^2/2m^∗

where Λ = r×( p-eA) is the canonical momentum on the sphere and A is a monopole vector potential. The radial component of r is not related to x; it arises because writing Λ in this form requires embedding the S^2 in a fictitious extra spatial dimension. If the wire had finite length, then this geometry would indeed be analogous to a solid ball with a monopole placed in the center; the long direction of the wire would correspond to the radial direction on the ball. In this picture, though, the infinite radius limit would correspond to a semi-infinite wire, where r=0 corresponds to the single end of the wire, so this analogy is somewhat limited in the case we are considering.

Again, p_x commutes with H, so we fix its eigenvalue k_x to reduce to the Landau level problem in a spherical geometry. We briefly review standard facts about this problem<cit.>. The operator L = Λ + n_ϕr̂/2 commutes with the Hamiltonian and obeys the angular momentum algebra [L_i,L_j] = i ε_ijkL_k, where i,j,k run over the three dimensions in which the S^2 is embedded and ε is the Levi-Civita symbol. The good quantum numbers in the problem are the eigenvalues k_x, l(l+1), and m of the operators p_x, L^2, and L_3 respectively, with m=-l, -l+1, ... ,l.
Single-valuedness of the wavefunction only requires 2m-n_ϕ to be an integer; hence m can be a half-integer if n_ϕ is odd. The energy spectrum, shown in Fig. <ref>, is

E(l,m,k_x) = [l(l+1)-(n_ϕ/2)^2]ω_c/n_ϕ + k_x^2/2m^∗

where ω_c = eB/m^∗ is the cyclotron frequency. There is also the restriction l(l+1) ≥ (n_ϕ/2)^2; therefore the lowest Landau level has l=n_ϕ/2 and degeneracy N = n_ϕ+1.

Given that the angular momentum quantum numbers can be half-integers, the symmetry group corresponding to rotations of the spherical cross-section of the wire is SU(2). Projecting to the lowest Landau level reduces all of the degrees of freedom on the S^2 to N degenerate levels which transform as a pseudospin-S_0 representation of the SU(2) symmetry, where S_0 = (N-1)/2. This projected problem is therefore equivalent to purely one-dimensional itinerant fermions with a (possibly very large) pseudospin.

We expect that for large N, the lowest Landau level of the sphere problem and the disk (wire) problem should behave very similarly. In both cases, there is free propagation along the wire, and the finite-size directions are characterized by a large Landau level degeneracy. On the disk, at every k_x, all the states far from the edge of the disk are nearly degenerate. The presence of the boundary breaks this degeneracy, but that effect is only strong near the edge. In the spherical case, the way to lift the Landau level degeneracy is by breaking SU(2) symmetry.

The main idea of this paper is therefore to exploit the SU(2) symmetry to understand the ℝ× S^2 problem, and then add SU(2)-breaking perturbations to understand the physics of a wire.

§.§ Low-Energy Non-Interacting Theory

The rest of this paper will be devoted to finding instabilities of the non-interacting theory to interactions that are much weaker than the Landau level splitting and the bandwidth in k_x.
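As a consistency check on the spherical spectrum above, the lowest level l = n_ϕ/2 sits exactly at ω_c/2, matching the planar zeroth Landau level, with degeneracy 2l+1 = n_ϕ+1, and the next level approaches 3ω_c/2 as n_ϕ →∞. A short exact-arithmetic sketch:

```python
from fractions import Fraction

def E(l, n_phi):
    """Field-dependent part of E(l, m, k_x) on the sphere, in units of omega_c."""
    return (l * (l + 1) - Fraction(n_phi, 2) ** 2) / n_phi

for n_phi in range(1, 50):
    l0 = Fraction(n_phi, 2)                    # lowest allowed l
    assert E(l0, n_phi) == Fraction(1, 2)      # lowest level sits at omega_c / 2
    assert 2 * l0 + 1 == n_phi + 1             # degeneracy N = n_phi + 1
    # Next level: 3/2 + 2/n_phi, approaching the planar first Landau level.
    assert E(l0 + 1, n_phi) == Fraction(3, 2) + Fraction(2, n_phi)
```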
To do the analysis, we need only consider the low-energy part of the non-interacting theory in the ℝ× S^2 geometry, obtained by linearizing the dispersion of Fig. <ref> about the Fermi level. Define left- and right-moving fermions in the standard way

ψ_m(x)∼∑_±∫_-Λ^Λdk/2π e^i(k± k_F)xψ_m(k± k_F)≡ e^ik_Fxψ_m,R(x) + e^-ik_Fxψ_m,L(x)

where Λ≪ k_F is a momentum cutoff. The low-energy Hamiltonian is then

H_0 =∑_m=-S_0^S_0∫ dx i v_F (ψ^†_m,L∂_x ψ_m,L - ψ^†_m,R∂_x ψ_m,R)

where v_F is the Fermi velocity, which we set to 1. This Hamiltonian has an enormous U(N)⊗ U(N) symmetry; the left- and right-movers may be transformed separately at the level of the low-energy theory. Interactions will break this symmetry down to the nonchiral SU(2) magnetic translation symmetry.

§.§ Schrodinger vs. Weyl

In order to reach the zeroth Landau level, the carrier density needs to be low. In a standard metal or semiconductor, zero carrier density means the system is an insulator, and the above physics is not an appropriate description. In (type-I) Weyl semimetals<cit.>, the Landau level at the Fermi energy still disperses linearly even at zero density. Such materials may be a promising system for realizing our proposal. To evaluate their suitability, we briefly compare and contrast Schrodinger and Weyl fermions as they pertain to our construction.

In either geometry, Schrodinger and Weyl fermions look very similar at low energies. The dispersion along x is linear, and there is Landau level degeneracy; the Landau levels either have SU(2) symmetry in the spherical case or magnetic translation symmetry in the bulk of the wire. There are three main differences. First, in the spherical case, the Landau level degeneracy N for a given n_ϕ is n_ϕ+1 for Schrodinger fermions and n_ϕ for Weyl fermions.
Second, at fixed electron number, k_F is strongly dependent on the magnetic field in the Schrodinger case (since the Landau level degeneracy changes with field) but is set primarily by the Weyl point splitting in the Weyl case, with weak field-dependent corrections at finite doping above the Weyl points. Finally, the Landau level spacing is slightly different (at small momentum, it is proportional to B for Schrodinger electrons and √(B) for Weyl electrons).

These differences are inessential for the rest of our analysis; we abstract them away by fixing N and k_F. Of course, these differences matter in a real experiment, as the difficulty of reaching the quantum limit with a given N will depend on such factors; we will discuss this further in Section <ref>.

For the rest of this paper, we use the ℝ× S^2 geometry. We assume SU(2) symmetry until Section <ref>, when we will make more contact with the wire geometry by investigating SU(2)-breaking perturbations.

§ STRUCTURE OF THE INTERACTIONS

Starting from the free fermion fixed point, we now wish to write down the most relevant (in the RG sense) symmetry-respecting interaction terms. Four-fermion contact interactions are marginal at tree level; all other momentum-conserving interactions are irrelevant. Moreover, in the absence of fine-tuning to k_F = π, Umklapp scattering is forbidden. Finally, the interactions that we care about are non-chiral ones; fully chiral terms are exactly marginal and only renormalize velocities. As such, the most relevant operators are left-right products of fermion bilinears, i.e. ψ_L,m^†A_mm'ψ_L,m'ψ_R,n^†B_nn'ψ_R,n', where A and B are Hermitian N × N matrices. We now need to constrain A and B by symmetry.

The interactions we want are shown in Fig. <ref>. The interaction can be decomposed according to the angular momentum (S,p) transferred from the left-mover to the right-mover, where S(S+1) and p are the eigenvalues of L^2 and L_3 respectively. Here S can range from 0 to N-1.
The SU(2) symmetry completely fixes the p dependence of the coupling constants for each S, that is, there should only be N independent coupling constants.An explicit decomposition of the interaction in this form, where p labels a component of the angular momentum transfer, is given in Appendix <ref>, but it is slightly inconvenient for our purposes. The most convenient way to implement the symmetry is to use a special basis { M^S,α} (we suppress the label N) for the set of Hermitian N × N matrices which has the following properties: * S takes integer values from 0 to N-1 and α takes values from -S to S.* For fixed S, under the action M^S,α→ U^†M^S,αU for U valued in the spin-S_0 representation of SU(2), the M^S,α transform as a spin-S representation of SU(2).* The matrices are orthogonal under the trace norm, that is, Tr(M^S,αM^S',β) = kδ_S,S'δ_αβ for an S-independent constant k.For some intuition about the M^S,α basis, we see that properties (2) and (3) imply that M^0,0 is √(k/N) times the N × N identity matrix and that M^1,α can be chosen proportional to the usual spin-S_0 spin matrices with α = x,y,z. The decomposition in Fig. <ref> is inconvenient because it violates property (3); in this decomposition, the S=1 basis matrices would be S^z and S^±, which have less convenient orthogonality properties. We choose an unusual normalization convention where the commutation relations of SU(2) are [M^1,α,M^1,β] = √(2)iε^αβγM^1,γ with ε the Levi-Civita symbol; this implies that k=N(N^2-1)/6. See Appendix <ref> for an explicit construction of this basis; the matrices M^S,α are particular linear combinations of Clebsch-Gordan coefficients for fusing two spin-S_0 objects into a spin-S object. This normalization convention is chosen because the currents ψ^†_χ M^1,αψ_χ (χ=L,R and we suppress pseudospin indices) form a representation of 𝔰𝔲(2)_k, giving k a physical meaning. See Sec.
<ref> for a justification of this fact.With this basis choice, the most general marginal interaction which is symmetric under nonchiral SU(2) transformations is of the form H_int = ∫ dx ∑_S=0^N-1 g_S ∑_α = -S^S :ψ_L,m^†M^S,α_mm'ψ_L,m':(x):ψ_R,n^†M^S,α_nn'ψ_R,n':(x) where we have suppressed the sums over the fermion pseudospin states. The S=0 and S=1 terms have simple physical interpretations stemming from the aforementioned explicit forms of M^0,0 and M^1,α. The S=0 term is just a contact density-density interaction n_L n_R, where n_L/R are the chiral fermion densities, while the S=1 term is a contact Heisenberg-type interaction S_L ·S_R where S_L/R are the chiral SU(2) pseudospin densities. See Appendix <ref> for the explicit construction and proof of SU(2) invariance. The Hamiltonian for the full system is thenH = H_0 + H_intwith H_0 the non-interacting Hamiltonian defined in Eq. (<ref>).§ PHASE DIAGRAM FOR SMALL N §.§ RG Procedure We assume that all of the |g_S| are small and perform perturbative RG to second order (one loop). In the free theory, all fermion bilinears have scaling dimension 1, so all of the interactions are marginal at tree level. Using standard machinery, the perturbative RG equations for many marginal operators are known to be<cit.>dg_S/dl = -π∑_S',S”β_S',S”^Sg_S'g_S”where the cutoff in real space is a_0e^l (here a_0 is the lattice-scale cutoff of the low-energy theory at which the bare couplings are defined) and β_S',S”^S is the operator product expansion (OPE) coefficient given by the short-distance identification (written in complex coordinates)𝒪_i(z,z̅) 𝒪_j(w,w̅) ∼∑_k β^k_ij𝒪_k(w,w̅)/|z-w|^2within correlation functions. Here, we are using a specific form of OPE where all the operators {𝒪_i } involved are marginal, which is immediately applicable to our discussion. 
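For concreteness, the S = 1 part of these conventions is easy to verify numerically. The sketch below (ours, not from the appendices) takes M^1,a = √2 S_a, with S_a the standard spin-S_0 matrices, and checks that this choice reproduces both the commutator convention [M^1,x, M^1,y] = √2 i M^1,z and the normalization Tr(M^1,a M^1,b) = k δ_ab with k = N(N^2-1)/6:

```python
import numpy as np

def spin_matrices(N):
    """Standard spin matrices S_x, S_y, S_z for spin S_0 = (N - 1)/2."""
    S0 = (N - 1) / 2
    m = S0 - np.arange(N)                       # m = S0, S0-1, ..., -S0
    Sz = np.diag(m)
    # <m+1| S_+ |m> = sqrt(S0(S0+1) - m(m+1)); superdiagonal in this basis
    Sp = np.diag(np.sqrt(S0 * (S0 + 1) - m[1:] * (m[1:] + 1)), k=1)
    Sx = (Sp + Sp.conj().T) / 2
    Sy = (Sp - Sp.conj().T) / 2j
    return Sx, Sy, Sz

for N in range(2, 8):
    Sx, Sy, Sz = spin_matrices(N)
    # convention of the text: M^{1,a} = sqrt(2) S_a
    Mx, My, Mz = (np.sqrt(2) * S for S in (Sx, Sy, Sz))
    k = N * (N**2 - 1) / 6
    # [M^{1,x}, M^{1,y}] = sqrt(2) i M^{1,z}
    assert np.allclose(Mx @ My - My @ Mx, np.sqrt(2) * 1j * Mz)
    # Tr(M^{1,a} M^{1,b}) = k delta_ab with k = N(N^2-1)/6
    assert np.isclose(np.trace(Mx @ Mx).real, k)
    assert np.isclose(np.trace(My @ My).real, k)
    assert np.isclose(np.trace(Mx @ Mz).real, 0.0)
```

For N = 2 this reduces to M^1,a = σ_a/√2 and k = 1, consistent with the 𝔰𝔲(2)_1 current algebra of a single spin-1/2 fermion.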
For our interactions, the OPE coefficients can be computed by Wick's theorem to beβ_S',S”^S = 1/k^2∑_α, βTr([M^S',α,M^S”,β]M^S,γ)^2 which is independent of the choice of γ by SU(2) invariance. A tedious calculation, outlined in Appendix <ref>, using the explicit forms of the M matrices and sum-of-product identities for the Clebsch-Gordan coefficients<cit.> shows thatβ_S',S”^S = -k(2S'+1)(2S”+1){ S S' S”; S_0 S_0 S_0 }^2(1-(-1)^S+S'+S”)^2 where { S S' S”; S_0 S_0 S_0 } is the Wigner 6j-symbol. This form makes explicit a selection rule resulting from the symmetry properties of products of the Ms: β_S',S”^S is zero if S+S'+S” is even. See Appendix <ref> for an explanation of this selection rule in terms of Young tableaux.Since the identity matrix commutes with all the other Ms, β^0_S',S” and β^S'_0,S” = β^S'_S”,0 all vanish. As such, to this order in perturbation theory, the U(1) charge sector of the theory decouples from the pseudospin sector and, since Umklapp scattering is generally forbidden thanks to the incommensurate filling, the charge sector remains a gapless Luttinger liquid. The coupling constant g_0 simply changes the Luttinger parameter. We will therefore ignore the U(1) sector and g_0 unless otherwise stated. §.§ Non-Abelian Bosonization§.§.§ Basics of Non-Abelian Bosonization We will use non-Abelian bosonization<cit.> to find the strong-coupling fixed points and to determine the low-energy theories. A full review of non-Abelian bosonization is beyond the scope of this paper; we will simply define notation and briefly review the basics.The main result of non-Abelian bosonization is that a theory of N free fermions with the same Fermi velocity is equivalent to the Wess-Zumino-Witten (WZW) model 𝔲(N)_1 = 𝔲(1) ⊗𝔰𝔲(N)_1. The chiral SU(N) symmetry currents J^a_χ, where a labels a generator t^a of SU(N) and χ = L,R labels left- and right-movers, correspond to chiral fermion bilinearsJ^a_χ(x) ∼ :ψ^†_m,χt^a_mnψ_n,χ:(x)The colons indicate normal ordering and the t^a generate the fundamental representation of 𝔰𝔲(N).
The conserved chiral currents of the U(1) part of the theory are identified with the chiral total fermion density. A heuristic way to understand this identification from the CFT point of view follows from comparing operator product expansions (OPEs). Suppressing matrix indices, Wick's theorem implies that if A and B are matrices, then the corresponding fermion bilinears have the OPE (in complex coordinates) :ψ^†_L A ψ_L:(z) :ψ^†_L B ψ_L:(w) ∼:ψ^†_L [A,B] ψ_L:(w)/(z-w) + Tr(AB)/(z-w)^2 with an analogous equation for the right-movers. With the normalization f^ab_cf^ab_d=2Nδ_cd, where f^ab_c are the structure constants of 𝔲(N), plugging in A=t^a and B=t^b yields the correct 𝔲(N)_1 OPEsJ^a_L(z) J^b_L(w) ∼i f^ab_cJ^c_L(w)/(z-w) + δ_ab/(z-w)^2More generally, Eq. (<ref>) means that for any Lie subgroup G ⊂ U(N) with generators t̃^a, the fermion bilinearsψ^†_Lt̃^aψ_L will have the same OPEs as the symmetry currents of a WZW theory with Lie group G and level k equal to the embedding index x_e of G in U(N). We will frequently make use of such embeddings. In order to explicitly distinguish between the currents in different subgroups G, define the dim(G)-component object J^G_χ whose ath component is the current J^a_χ, where a labels a generator of G. In this notation the Sugawara Hamiltonian for a level-k WZW theory with symmetry group G isH = 1/2(k+g)(:J^G_L ·J^G_L:+:J^G_R ·J^G_R:)where g is the dual Coxeter number of G.§.§.§ Coset construction Embeddings of the previously mentioned sort naturally lead to consideration of coset models; we briefly review the construction<cit.>.Consider a unitary WZW theory at level k over a Lie group G with a subgroup H, with corresponding Lie algebras 𝔥⊂𝔤. Then the generators of 𝔥 can be written as linear combinations of generators of 𝔤, so there exist currents J^H_χ which are linear combinations of the currents J^G_χ of the same chirality.
These currents also satisfy a Kac-Moody algebra for 𝔥 at the level k' = x_e k where x_e is the embedding index of H in G. We define the energy-momentum tensor for the coset theory 𝔤_k/𝔥_k' byT_coset =T_𝔤_k-T_𝔥_k'where T_𝔤_k and T_𝔥_k' are the energy-momentum tensors for the 𝔤_k and 𝔥_k' WZW theories respectively. The coset theory is another unitary CFT with central chargec_coset = c_𝔤_k-c_𝔥_k'Importantly, the Hilbert space for the 𝔤_k theory decomposes into a tensor product of the Hilbert space of the 𝔥_k' theory and the coset theory, that is, any operator 𝒪 in the 𝔤_k theory can be written as a linear combination 𝒪 = ∑_ij𝒪^𝔥_i ⊗𝒪^(coset)_jwhere 𝒪^𝔥_i and 𝒪^(coset)_j are operators in the 𝔥_k' and coset theories respectively. If 𝒪 is a scaling operator, then its scaling dimension (conformal spin) is the sum of the dimensions (spins) of 𝒪^𝔥_i and 𝒪^(coset)_j.A special case will be helpful later. Suppose that 𝔤_k = 𝔰𝔲(N)_1, 𝔥_k' = 𝔰𝔲(2)_k with k defined in Eq. (<ref>), and 𝒪_L,m is a chiral spin-S fermion bilinear (m=-S,...,S labels a component), which has scaling dimension 1. Then if we decompose 𝒪_L,m as in Eq. (<ref>), its 𝒪^𝔰𝔲(2)_i part must have a scaling dimension less than 1 and furthermore has to transform as a spin-S field under the 𝔰𝔲(2)_k algebra. This means that the 𝒪^𝔰𝔲(2)_i part of 𝒪_L,m can only be the left-moving spin-S primary ϕ^S_L,m in 𝔰𝔲(2)_k, i.e. 𝒪_L,m𝒪_R,m = ϕ^S_L,mϕ^S_R,m⊗𝒪^(coset)for some coset operator 𝒪^(coset) with scaling dimensionΔ_𝒪 = 2-2S(S+1)/(k+2)because S(S+1)/(k+2) is the scaling dimension of ϕ^S_L,m.Before determining the phase diagram, one more notational convention is needed. A symplectic group will sometimes appear as an emergent symmetry, but the term “symplectic group" and the notation Sp(N) are used in multiple incompatible ways in the literature. In this paper, the term “symplectic group" will always refer to the group USp(2M), which is the set of 2M × 2M matrices which are both unitary and preserve the symplectic form.
Our notation for the Lie algebra of USp(2M) is 𝔰𝔭(2M). For example, in this notation 𝔰𝔭(4) ≈𝔰𝔬(5).Before discussing general N, we analyze the cases of N=2,3, and 4 in detail. Each case will add new structure and features to the problem, but N is small enough to demonstrate all of our reasoning very explicitly. §.§ N=2: Luther-Emery phase diagram The N=2 interaction Hamiltonian is simply H_int = ∫ dx (g_0/2 n_L(x) n_R(x) + g_1 J_L^SU(2)(x)·J_R^SU(2)(x)) with g_0 exactly marginal and RG equationdg_1/dl = 4π g_1^2for g_1. Its flow is shown in Fig. <ref>. When g_1 <0 this coupling is marginally irrelevant and provides logarithmic corrections to the free-pseudospin fixed point. When g_1 > 0 it is marginally relevant and J^SU(2)_L ·J^SU(2)_R flows to strong coupling. The latter phase is the well-known Luther-Emery phase<cit.> of the 1D spin-1/2 fermion chain (note that under our sign conventions, g_0 < 0 and g_1>0 when on-site interactions are attractive); strong backscattering causes the pseudospin sector to become gapped while the charge sector remains gapless. Both pseudospin-singlet CDW order at wavevector 2k_F and pseudospin-singlet SC have power-law correlations in this phase.A comment on terminology: since we are studying one-dimensional physics, there is no true long-range order, only power-law correlations. We will use the terminology “fluctuating order parameter" to describe objects which acquire such correlations since such objects can be thought of as mean-field order whose long-range order has been destroyed by quantum fluctuations.One way to qualitatively understand this phase is as follows. Since the pseudospin sector becomes gapped, any possible fluctuating order parameters must be SU(2) singlets. There are two ways to make a two-particle SU(2) singlet order parameter: one in the particle-hole channel and one in the particle-particle channel. 
The fact that this is possible is special to SU(2); particles and holes transform in conjugate representations, but representations of SU(2) are self-conjugate. This means that both the singlet CDW and the singlet SC order parameters can fluctuate, and it is known that they do both fluctuate. Such arguments will be useful sanity checks in higher-N cases. §.§ N=3: Two Nontrivial PhasesThe N=3 RG equations aredg_1/dl = 4π(g_1^2 + 5g_2^2) dg_2/dl = 24π g_1 g_2 The flows in Fig. <ref> show that g_1 flows to strong coupling unless g_1<0 and |g_2|<|g_1|; if the latter occurs, both g_1 and g_2 flow to zero, and the free pseudospin fixed point is stable. In the strong-coupling case, it will be useful to define g̃_2 = g_2/g_1 to obtain the equation1/g_1dg̃_2/dl = 5g̃_2(1-g̃_2^2)Clearly g̃_2 = ± 1 and g_2 = 0 are “fixed rays" of the RG flow, in the sense that the ratio of the coupling constants remains fixed but g_1 flows to strong coupling. It is easy to check by linearizing Eq. (<ref>) that the fixed rays g_2 = ± g_1 are stable to small changes in g̃_2 and the g_2 = 0 fixed ray is unstable; flow of this ratio is shown in Fig. <ref> for g_1>0. The properties of the fixed points are summarized in Table <ref>.What is the nature of the strong-coupling phases? By non-Abelian bosonization, the free theory is the 𝔲(3)_1 = 𝔲(1) ⊗𝔰𝔲(3)_1 WZW theory. Since the 𝔲(1) charge sector has decoupled, the pseudospin sector of the free theory is just 𝔰𝔲(3)_1. When g_1 = g_2, the interaction is actually fully SU(3)-symmetric; in the language of non-Abelian bosonization, the interaction is backscattering of the form g J^SU(3)_L ·J^SU(3)_R. That is, there is an emergent SU(3) symmetry. When g flows to strong coupling, we expect the 𝔰𝔲(3) sector to be gapped; the pseudospin sector drops out of the low-energy theory entirely. Physically, since there is a pseudospin gap, we expect any fluctuating order parameter to be a singlet under the emergent SU(3) symmetry. 
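Incidentally, these N=3 flow equations provide a nontrivial check on the closed 6j-symbol form for β^S_S',S” quoted earlier. A short sketch of our own (assuming sympy is available for exact Wigner 6j symbols):

```python
from sympy import Rational, simplify
from sympy.physics.wigner import wigner_6j

def beta(S, Sp, Spp, N):
    """OPE coefficient beta^S_{S',S''} from the closed 6j-symbol form in the text."""
    S0 = Rational(N - 1, 2)            # fermion pseudospin S_0 = (N-1)/2
    k = Rational(N * (N**2 - 1), 6)    # level k = N(N^2-1)/6
    parity = (1 - (-1)**(S + Sp + Spp))**2   # selection-rule factor
    return simplify(-k * (2*Sp + 1) * (2*Spp + 1)
                    * wigner_6j(S, Sp, Spp, S0, S0, S0)**2 * parity)

# selection rule: beta vanishes whenever S + S' + S'' is even
assert beta(2, 1, 1, 3) == 0
# N=3 (S_0 = 1, k = 4): via dg_S/dl = -pi sum_{S',S''} beta g_{S'} g_{S''},
# these reproduce dg_1/dl = 4 pi (g_1^2 + 5 g_2^2) ...
assert beta(1, 1, 1, 3) == -4
assert beta(1, 2, 2, 3) == -20
# ... and dg_2/dl = 24 pi g_1 g_2
assert beta(2, 1, 2, 3) + beta(2, 2, 1, 3) == -24
```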
Since ψ_m transforms under the fundamental representation of SU(3), which is not self-conjugate, no particle-particle order parameter can be such a singlet. However, there is a particle-hole singlet ψ^†_L,mψ_R,m, which is, physically, the CDW order parameter. We therefore expect this phase to have fluctuating pseudospin-singlet CDW order.Let us next consider the g_2 = -g_1 fixed ray, which for future purposes we will refer to as the SO(3)-invariant fixed ray. (The spin-1 representation of SU(2) is, of course, also a representation of SO(3), hence the name. Although SO(3) is not an emergent symmetry, we will see that at larger odd N there will be an emergent SO(N) symmetry, so we choose this name to agree with the generalization.) To understand this phase, define the second-quantized operator Ĉ, which is unitary at the level of the low-energy theory and acts asĈψ_R,mĈ^-1 = (-1)^m-S_0ψ^†_R,-m Ĉψ^†_R,mĈ^-1 = (-1)^m-S_0ψ_R,-mwhere m = -S_0, -S_0+1,...,S_0 and acts as the identity on the left-moving sector. Using Clebsch-Gordan coefficient identities detailed in Appendix <ref>, it can be checked thatĈψ^†_R,mM^S,α_mnψ_R,nĈ^-1 = (-1)^S+1ψ^†_R,mM^S,α_mnψ_R,nThat is, Ĉ transforms the Hamiltonian at the SU(3)-invariant fixed ray to the Hamiltonian at the SO(3)-invariant fixed ray. Naively, Ĉ looks unitary, which would mean that there is an energy gap and a full SU(3) symmetry at the SO(3)-invariant fixed ray. However, Ĉ is chiral, so this SU(3) symmetry may be anomalous. As the low-energy theory suffers from the chiral anomaly, we expect any chiral symmetry to be broken in the UV, but there is no reason to expect a large perturbation to the low-energy theory. Therefore, the conclusion that there is a pseudospin gap should be robust, but the SU(3) symmetry need not be.To see what symmetry could remain in the UV, note that the M^S,α are N × N Hermitian matrices and therefore generate the chiral action of the SU(3) symmetry. 
At the SU(3) fixed ray, the nonchiral symmetry is generated by acting with the same M^S,α on both the left- and right-moving fermions. Therefore, the action of any nonchiral symmetry at the SU(3) fixed ray becomes chiral at the SO(3) fixed point if and only if it is generated by an M^S,α which transforms nontrivially under Ĉ. Eq. (<ref>) thus shows that the transformations generated by the odd-S generators remain exact symmetries but those generated by the even-S generators are broken by the quantum anomaly. For N=3 this leaves only the S=1 generators, which generate SO(3); therefore, the true symmetry at the fixed point should be SO(3).To get a physical understanding of the SO(3) fixed point, note that Ĉ transforms density-wave order parameters into superconducting ones and vice-versa. In particular, it is easy to check that it turns the SU(2)-singlet CDW order parameter into the SU(2)-singlet SC order parameter and vice-versa. Since the CDW order parameter fluctuates in the SU(3)-invariant phase, the SC order parameter must fluctuate in this SO(3)-invariant phase while the CDW order parameter should have exponentially decaying correlations.Our analysis so far has yielded the phase diagram of Fig. <ref>.We next turn to the unstable g_2 = 0 fixed ray, which represents a phase transition between the CDW and the SC phases. We analyze this in a way which is slightly laborious for this particular case but will be extremely useful in more general cases.We know that the generators of the SU(2) symmetry form a representation of 𝔰𝔲(2)_4. Moreover, the interaction g_1 is exactly a product of those generators. As such, it is useful to decompose 𝔰𝔲(3)_1 = 𝔰𝔲(2)_4 ⊗ (𝔰𝔲(3)_1/𝔰𝔲(2)_4) where 𝔰𝔲(3)_1/𝔰𝔲(2)_4 is a coset theory. It so happens that there is a conformal embedding of 𝔰𝔲(2)_4 into 𝔰𝔲(3)_1<cit.>; this means that this coset theory has zero central charge and is thus trivial. 
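The central-charge bookkeeping behind this statement (and behind the analogous cosets used for N=4 below) follows from c = k·dim(𝔤)/(k + h^∨); a small sketch:

```python
from fractions import Fraction

def c_su(N, k):
    """Central charge of su(N)_k: c = k (N^2 - 1) / (k + N)."""
    return Fraction(k * (N * N - 1), k + N)

def c_sp(twoM, k):
    """Central charge of sp(2M)_k (C_M): dim = M(2M+1), dual Coxeter number M+1."""
    M = twoM // 2
    return Fraction(k * M * (2 * M + 1), k + M + 1)

# N=3: conformal embedding su(2)_4 in su(3)_1 -- the coset carries no central charge
assert c_su(3, 1) - c_su(2, 4) == 0
# N=4 cosets appearing later in the text: both have c = 1/2, i.e. the Ising CFT
assert c_su(4, 1) - c_sp(4, 1) == Fraction(1, 2)
assert c_su(4, 1) - c_su(2, 10) == Fraction(1, 2)
```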
But we have added a term g_1 J_L^SU(2)·J_R^SU(2) which is flowing to strong coupling; we thus expect the 𝔰𝔲(2)_4 theory to be gapped out, and hence the strongly coupled fixed point should also have a pseudospin gap.The fact that the phase transition appears to be gapped leaves two possibilities: either there is a first-order transition, or there is some reason that the 𝔰𝔲(2)_4 theory is not gapped out. In Section <ref>, we will see that our simple arguments identifying the physical character of these phases can be put on more solid ground using Abelian bosonization, and we will use those techniques to argue why one should expect a first-order transition. We defer further discussion of this phase transition to that section.Before moving to N=4, a comment on the interpretation of the superconducting order parameter is in order. For N=3 (pseudospin-1), the two-particle singlet has a symmetric pseudospin wavefunction. Therefore, no pseudospin-singlet s-wave superconducting order parameter can exist by Pauli statistics. However, a p-wave order parameter can exist and fluctuate. §.§ N=4: Three Nontrivial PhasesSo far we have seen quasi-one-dimensional physics appear, although the main difference between N=2 and N=3 was whether or not the singlet CDW and SC order parameters fluctuated simultaneously. However, new structure will clearly appear at N=4, where the non-interacting pseudospin sector is 𝔰𝔲(4)_1, and the level of the 𝔰𝔲(2) subalgebra is k=10.The RG equations aredg_1/dl = 4π(g_1^2+5g_2^2+14g_3^2) dg_2/dl = 4π(6g_1g_2+14g_2g_3) dg_3/dl = 4π(12g_1g_3+5g_2^2+3g_3^2)Cuts of the flow diagram as a function of g_1 and the g_2,3/g_1 are shown in Fig. <ref>, in analogy to Fig. <ref> for N=3. Focusing first on g_1 < 0, we see that there is a region with g_3 small where the free pseudospin fixed point is stable. (It is easy to check numerically that this region is stable to adding a small nonzero g_2.) Otherwise, g_1 passes through zero.
Although this causes g_3/g_1 to blow up in finite RG time, g_3 can still remain small and our perturbative expansion remains valid as g_1 changes sign; we are then reduced to studying the g_1>0 case. When g_1>0, it is again useful to re-analyze the equations in terms of g̃_S = g_S/g_1:1/g_1dg̃_2/dl = 5g̃_2 + 14 g̃_2g̃_3 - g̃_2(5g̃_2^2+14g̃_3^2) 1/g_1dg̃_3/dl = 11g̃_3 +5 g̃_2^2 + 3 g̃_3^2 - g̃_3(5g̃_2^2+14g̃_3^2)The flow diagram for the g̃_S with g_1>0 is shown both in Fig. <ref> and, schematically, in the g_1 → +∞ plane of Fig. <ref>. The “fixed points" in this diagram are, just like in Fig. <ref>, actually “fixed rays" on which the couplings grow large but have a fixed ratio. We see clearly from the flows that there are three stable fixed rays and four unstable ones, resulting in the phase diagram in Fig. <ref>. It is possible to find the fixed ray couplings explicitly. The fixed rays and their properties are summarized for g_1>0 in Table <ref>.§.§.§ SU(4)-invariant phase The simplest stable fixed ray is at g̃_2 = 1 and g̃_3 = 1 (point B in Fig. <ref>). As in the N=3 case, such a fixed ray with g_1=g_2=g_3 has an emergent nonchiral version of the SU(4) symmetry of the non-interacting problem. As such, under bosonization, the interaction Hamiltonian is of the form g J_L^SU(4)·J_R^SU(4). Hence the pseudospin sector will gap out completely upon flowing to strong coupling. As in the N=3 case, any fluctuating order parameter should be an SU(4) singlet, which means that it should be pseudospin-singlet CDW order.§.§.§ USp(4)-invariant phase The (stable) g_S/g_1 = (-1)^S+1 fixed ray (point C in Fig. <ref>) also has emergent symmetry beyond SU(2). In Appendix <ref>, we prove that the ten matrices M^1,a and M^3,a, taken together, generate USp(4) ≈ SO(5) (it will turn out that the language USp(4) is the correct generalization), and that M^2,a transform as a 5-dimensional representation of USp(4), which is the fundamental representation of SO(5).
Therefore the Hamiltonian is USp(4)-symmetric on this fixed ray, but the coupling is not simple in this language.We can, however, understand this USp(4)-symmetric phase via the same chiral particle-hole transformation that we used for N=3. In fact, the transformation Ĉ defined in Eq. (<ref>) behaves exactly the same in the N=4 case (with S_0 = 3/2) as it does for N=3 (S_0 = 1); it switches the signs of even-pseudospin couplings, thus transforming the Hamiltonian at the SU(4)-invariant fixed ray to that of the USp(4)-invariant fixed ray. Again, Eq. (<ref>) tells us that the even-S generators of SU(4) become anomalous, so the SU(4) symmetry is broken to USp(4) in the UV. We therefore expect that, like the SU(4)-invariant phase, the USp(4)-invariant fixed point is fully gapped, but has power-law singlet SC correlations rather than power-law CDW correlations.§.§.§ CDW/SC phase transition The above two phases appeared at N=3, but the transition between them seemed to be first-order. However, we will now show that a second-order transition is allowed (though, of course, not required) for N=4 by analyzing the nontrivial saddle point fixed ray with g_2=0 and g̃_3=1 (point D in Fig. <ref>). As mentioned previously, M^1,a and M^3,a, taken together, generate USp(4); in Appendix <ref>, we show that the fermion bilinears that they define generate a representation of 𝔰𝔭(4)_1. As such, the fixed ray coupling is actually of the form g J_L^USp(4)·J_R^USp(4). In the coset construction, the free theory decomposes as 𝔰𝔲(4)_1 = 𝔰𝔭(4)_1 ⊗ (𝔰𝔲(4)_1/𝔰𝔭(4)_1), and the fixed point interaction should cause the 𝔰𝔭(4)_1 sector to gap out. This time, however, the remaining coset theory 𝔰𝔲(4)_1/𝔰𝔭(4)_1 has central charge 1/2, that is, it is the Ising CFT. 
Hence the strong coupling fixed point describes a second-order, Ising-type phase transition between two pseudospin-gapped phases, one with power-law correlations of pseudospin-singlet CDW order and the other with power-law correlations of pseudospin-singlet s-wave SC order.§.§.§ SU(2)-invariant phase The g_2=0, g̃_3 = -11/14 fixed ray (point E in Fig. <ref>) is much more difficult to analyze because the fixed ray Hamiltonian has no additional symmetry. We can make some progress as follows.The free spin-1 fermion currents form a representation of 𝔰𝔲(2)_10. The pseudospin sector of the free theory can be decomposed as 𝔰𝔲(4)_1 = 𝔰𝔲(2)_10⊗ (𝔰𝔲(4)_1/𝔰𝔲(2)_10) and the spin-1 currents have strictly zero correlation functions with any operator in the coset theory. In fact 𝔰𝔲(2)_10 has central charge c=5/2, so the coset 𝔰𝔲(4)_1/𝔰𝔲(2)_10 has central charge c=1/2 and is thus the Ising CFT. If g_3 were zero, as at point A in Fig. <ref>, the interaction would be of the form g J_L^SU(2)·J_R^SU(2) and would flow to strong coupling. We would expect that the 𝔰𝔲(2)_10 sector would fully gap out and we would be left with a gapless Ising theory. However, g_3 is not zero at the fixed point. The corresponding operator can be decomposed into a product of the pseudospin-3 primaries in the 𝔰𝔲(2)_10 theory and an Ising primary, as in the discussion following Eq. (<ref>). However, the chiral pseudospin-3 primary ϕ^3_L,m happens to have scaling dimension h=1 in 𝔰𝔲(2)_10. Therefore, by Eq. (<ref>), the Ising primary has dimension 0 and is trivial, so the fixed ray Hamiltonian isH_int = g( J_L^SU(2)·J_R^SU(2) - 11/14ϕ^3_L,mϕ^3_R,m) In particular, the Hamiltonian does not couple to the Ising coset theory. We therefore conclude that the low-energy theory of this phase contains the Ising CFT and is thus gapless. However, we cannot draw conclusions about the fate of the 𝔰𝔲(2)_10 sector using any tools familiar to us. 
Since there is RG flow, its central charge should decrease, but it is unclear if it should gap out or, for example, flow to 𝔰𝔲(2)_k' for some k'<k.§ IDENTIFYING THE PHASESOur RG and non-Abelian bosonization pictures were very useful for understanding what fixed points are available, the spectrum, and symmetry. However, they only provided heuristic descriptions of, for example, correlation functions within each phase. To improve on that, we first build intuition using mean field theory, which is inaccurate in 1D but will prove helpful. We will then use Abelian bosonization on the fixed rays in order to extract accurate physical interpretations and calculate some correlation functions. In this section, we first explain our general techniques and conventions, then explicitly apply them to the cases N=2, 3, and 4.§.§ Mean-Field Theory In this subsection we outline our mean-field procedure; see Appendix <ref> for the details and a more careful explanation of our heuristic use of mean-field theory. To do mean-field theory, we can convert the coupling constants g_S in the direct channel to coupling constants g^E_S in the exchange and g^C_S in the Cooper channels, defined asH_int = ∑_S g^E_S ∑_αψ^†_L,mM^S,α_mm'ψ_R,m'ψ^†_R,nM^S,α_nn'ψ_L,n'= ∑_S g^C_S ∑_αψ^†_L,mM^(p),S,α_mm'ψ^†_R,m'ψ_R,nM^(h),S,α_nn'ψ_L,n' Here M^(p) and M^(h) are defined using the same conditions as the M matrices but with the appropriate transformation rules under SU(2) for particle-particle and hole-hole bilinears respectively.We will show shortly that the transformations are always linear; that is, there exist N × N matrices K^E and K^C for each N such thatg^E_S= K^E_SS'g_S'with a similar equation for g^C.Next, we perform a Hubbard-Stratonovich transformation in either the exchange or Cooper channels, integrate out the fermions, and expand in the set of mean-field order parameters ψ^†_L M^S,αψ_R or ψ^†_LM^(p),S,αψ^†_R.
At second order, all of the order parameter fields are decoupled thanks to our orthogonalization convention Tr(M^S,αM^S',β) = kδ_SS'δ_αβ. The expansion shows that if one of the g is negative, then there is a divergent susceptibility to the corresponding order, with larger |g| implying a stronger instability. The details of this calculation can be found in Appendix <ref>.We now provide an explicit formula for the matrices K defined in Eqs. (<ref>) and (<ref>). This is done by matching the fermion operators appearing in those equations term by term, that is,∑_S',β g_S' M^S',β_mm'M^S',β_nn' = -∑_S',β g_S'^E M^S',β_mn' M^S',β_nm'Multiplying both sides by M^S,α_n'mM^S,α_m'n for fixed S,α and summing on m,m',n,n', the orthogonality of the M^S,α results in g_S^E = K^E_SS'g_S' = -1/k^2∑_S',β g_S'Tr(M^S,αM^S',βM^S,αM^S',β)By SU(2) invariance this result is independent of α. A nearly identical computation shows thatK^C_SS' = 1/k^2∑_βTr(M^(p),S,αM^S',βM^(h),S,α(M^S',β)^T) It is also easy to show that the operator Ĉ defined in Eq. (<ref>) transforms Ĉ∑_S,α g_S^E ψ^†_L,mM^S,α_mm'ψ_R,m'ψ^†_R,nM^S,α_nn'ψ_L,n'Ĉ^-1 = ∑_S,α g_S^E ψ^†_L,mM^(p),S,α_mm'ψ^†_R,m'ψ_R,nM^(h),S,α_nn'ψ_L,n' that is, it converts an operator in the exchange channel to one in the superconducting channel. But the transformation also changes the direct channel coupling constants g_S → (-1)^S+1g_S. We conclude, then, thatK^E_SS' = K^C_SS'(-1)^S'+1and will therefore only explicitly list K^E. §.§ Abelian Bosonization We introduce one free chiral boson field ϕ_m,χ (χ=L,R) for each component ψ^†_m,χ of chiral fermion. Our convention is⟨ϕ_m,χ(x) ϕ_n,χ(0) ⟩ = -δ_m,nlog |x|We defineϕ_m= ϕ_m,L + ϕ_m,R θ_m= ϕ_m,L - ϕ_m,Rwhich obey the commutation relations[ϕ_m(x),∂_y θ_m(y)] = iδ(x-y) The corresponding bosonization identities areψ_m,L^† →η_m e^i ϕ_m,L ψ_m,R^† →η̅_m e^-iϕ_m,R ∑_χ:ψ^†_m,χψ_m,χ:→∂_x ϕ_mwhere η_m and η̅_n are mutually anticommuting Klein factors which square to 1. We have dropped normalization factors.
Note that the fermion operators are left unchanged under ϕ_m →ϕ_m + 2π l for l ∈ℤ, so we should think of ϕ_m as compact bosons with ϕ_m ∼ϕ_m+2π. §.§ N=2 We analyze the Luther-Emery phase at N=2 as a familiar example before moving to the less familiar larger-N cases.The mean-field coupling constants in the exchange channel are computed using Eq. (<ref>) to beK^E= -1/2[ 1  3; 1  -1 ]That is, for g_0 = 0, we have g^E_0 = g^C_0 = -3g_1/2 and g^E_1 = g^C_1 = g_1/2. At mean field level, there is, as expected, an instability to a singlet CDW with order parameter ⟨ψ^†_L(x)ψ_R(x) + h.c.⟩ and to singlet SC with order parameter ⟨ψ^†_L(x)M^(p),0ψ^†_R(x)⟩ = ⟨ψ^†_R(x)M^(p),0ψ^†_L(x)⟩; these two orders happen to be degenerate, which is closely related to the fact that both order parameters have power-law correlations in the Luther-Emery phase. At this level of approximation, g_0 > 0 will break the degeneracy in favor of CDW order and g_0<0 will favor superconductivity, but we know from the more accurate bosonization study that this degeneracy remains, illustrating the limitations of the mean field formalism.In Abelian bosonization, since we expect spin-charge separation it is convenient to define charge and pseudospin bosonsϕ_c= (ϕ_1/2 + ϕ_-1/2)/√(2) ϕ_s= (ϕ_1/2 - ϕ_-1/2)/√(2)which obey the same canonical commutation relations as the ϕ_m. The compactness of ϕ_± 1/2 implies that ϕ_c,s are not simply compact bosons; instead, ϕ_c,s∼ϕ_c,s + √(2)π l_c,s where l_c and l_s are integers of the same parity. The g_0 interaction term simply renormalizes the Luttinger parameter K of the charge sector. The g_1 interaction term bosonizes toH_int = -g_1 ∫ dx cos√(2)ϕ_s(x)where we have made a gauge choice to project the Klein factors to the subspace η_1/2η_-1/2η̅_-1/2η̅_1/2 = -1. The pseudospin sector thus becomes the sine-Gordon model, and since g_1 flows to strong coupling, ϕ_s gets pinned to √(2)π l with l ∈ℤ.
All values of l lead to physically equivalent configurations.Now we just need to bosonize the possible order parameters. They areΔ_CDW(x)= ∑_m e^2ik_F x:ψ^†_m,Lψ_m,R: = η_1/2η̅_1/2e^2ik_F x e^iϕ_c/√(2)cos(ϕ_s/√(2)) Δ_SC(x)= ψ^†_1/2,Lψ^†_-1/2,R-ψ^†_-1/2,Lψ^†_1/2,R= η_1/2η̅_-1/2e^iθ_c/√(2)cos(ϕ_s/√(2))where all fields are evaluated at x. The pseudospin-density wave and triplet SC order parameters involve θ_s but not ϕ_s. Since ϕ_s is pinned and ∂_x θ_s is its conjugate variable, the pseudospin-density wave and triplet SC order parameters have exponentially decaying correlations. On the other hand, the CDW and singlet SC order parameters fluctuate; at long distances,⟨Δ_CDW(x)Δ_CDW^∗(0) ⟩ ∼1/|x|^1/K ⟨Δ_SC(x)Δ_SC^∗(0) ⟩ ∼1/|x|^K These simultaneously fluctuating order parameters, together with the spin gap, are a hallmark of the Luther-Emery phase. §.§ N=3 For the mean-field analysis, we findK^E = -1/3[ 1  3  5; 1  3/2  -5/2; 1  -3/2  1/2 ] We first consider g_0=0. At the SU(3)-symmetric flow, the most negative coupling constant is that of pseudospin-singlet CDW order. At the SO(3)-symmetric flow, pseudospin-singlet superconductivity ⟨ψ^†_L(x) M^(p),0ψ^†_R(x)⟩ has the most negative coupling constant. Both are degenerate at the g_2=0 fixed ray. We thus expect a phase transition between fluctuating CDW and fluctuating singlet SC orders.Now let us add g_0 ≠ 0. At mean-field level, g_0 changes the location of the transition. In the bosonization language, there is spin-charge separation; the naive effect of a nonzero g_0 is simply to change the Luttinger parameter of the charge sector. Deep in a phase this merely distinguishes the power laws of correlation functions of the two order parameters.
However, this distinction suggests that g_0 modifies the energies of the two phases relative to one another, and since the phase transition seems to be first order this may indeed modify the location of the phase transition. To check this in Abelian bosonization, we define one charge and two pseudospin bosons ϕ_c = (∑_m ϕ_m)/√(3), ϕ_s1 = (ϕ_1 - ϕ_-1)/√(2), ϕ_s2 = (ϕ_1 + ϕ_-1 - 2ϕ_0)/2. These fields mutually commute. Again there is pseudospin-charge separation and the only effect of g_0 is to renormalize the Luttinger parameter K of the charge sector. Compactness of the ϕ_m results in compactifications of ϕ_c, ϕ_s1 and ϕ_s2 generated by the identifications (ϕ_c, ϕ_s1, ϕ_s2) ∼ (ϕ_c + 2√(3)π, ϕ_s1, ϕ_s2) ∼ (ϕ_c, ϕ_s1 + 2√(2)π, ϕ_s2) ∼ (ϕ_c + 2π/√(3), ϕ_s1 + √(2)π, ϕ_s2 + π). Analyzing the interaction for general values of g_1 and g_2 is challenging, but it is straightforward on the stable fixed rays, which, as before, we refer to as the SU(3) (g_2 = g_1) and SO(3) (g_2 = -g_1) fixed rays. The pseudospin Hamiltonians are H_int,SU(3) = -g ∫ dx (cos√(2)ϕ_s1 + 2cosϕ_s2 cos(ϕ_s1/√(2))), H_int,SO(3) = g ∫ dx (cos√(2)ϕ_s1 - 2sinθ_s2 sin(ϕ_s1/√(2))), where we have chosen three independent Klein factor projections and all fields inside the integrals are evaluated at x. The appearance of sines instead of cosines in the SO(3) Hamiltonian results from the Klein factors and the odd number of fermion flavors and, as we will see, it is very important. These Hamiltonians are unfrustrated. In the SU(3) phase, ϕ_s1 and ϕ_s2 are pinned to √(2)π l_1 and π l_2 respectively, where l_1 and l_2 are integers of the same parity. All such configurations are physically identical. In the SO(3) phase, ϕ_s1 and θ_s2 are pinned to √(2)π(l_1+1/2) and π(l_2+1/2) where l_1 and l_2 again have the same parity.
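As a sanity check on the quoted pinning values, the fixed-ray potentials can be minimized numerically by treating the fields as classical variables. The sketch below is ours (the function names and the brute-force grid are not part of the text); it confirms that the SU(3) potential is minimized at integer pinnings and the SO(3) potential at the half-odd-integer pinnings quoted above.

```python
import numpy as np

# Classical potential densities on the two fixed rays (g > 0, fields treated
# as c-numbers, Klein-factor projections fixed as in the text).
def V_su3(phi_s1, phi_s2):
    # from H_int,SU(3) = -g (cos(sqrt(2) phi_s1) + 2 cos(phi_s2) cos(phi_s1/sqrt(2)))
    return -(np.cos(np.sqrt(2) * phi_s1)
             + 2 * np.cos(phi_s2) * np.cos(phi_s1 / np.sqrt(2)))

def V_so3(phi_s1, th_s2):
    # from H_int,SO(3) = +g (cos(sqrt(2) phi_s1) - 2 sin(th_s2) sin(phi_s1/sqrt(2)))
    return (np.cos(np.sqrt(2) * phi_s1)
            - 2 * np.sin(th_s2) * np.sin(phi_s1 / np.sqrt(2)))

# Brute-force grid search over one full period of each field.
grid = np.linspace(0.0, 2 * np.sqrt(2) * np.pi, 2001)
a, b = np.meshgrid(grid, grid, indexing="ij")

# SU(3) ray: global minimum at (phi_s1, phi_s2) = (0, 0), depth -3.
assert abs(V_su3(a, b).min() - V_su3(0.0, 0.0)) < 1e-3

# SO(3) ray: global minimum at (phi_s1, th_s2) = (sqrt(2)pi/2, pi/2), also depth -3.
assert abs(V_so3(a, b).min() - V_so3(np.pi / np.sqrt(2), np.pi / 2)) < 1e-3

print("pinned minima:", V_su3(0.0, 0.0), V_so3(np.pi / np.sqrt(2), np.pi / 2))
```

Both minima have the same depth, consistent with the two fixed-ray Hamiltonians being related by the chiral particle-hole transformation discussed later.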
To understand what the phases do physically, we bosonize the pseudospin-singlet order parameters: Δ_CDW^S=0 = e^2ik_F x e^iϕ_c/√(3)+iϕ_s2/3 η_1η̅_1 (2cos(ϕ_s1/√(2)) + e^-iϕ_s2), Δ_SC^S=0 = e^iθ_c/√(3)+iθ_s2/3 η_1η̅_1 (-2isin(ϕ_s1/√(2)) + e^-iθ_s2). Since ϕ_s1 is always pinned and ϕ_s2 (θ_s2) is pinned in the SU(3) (SO(3)) phase, we see that singlet CDW (SC) order has power-law decay and SC (CDW) order has exponential decay. The long-distance power laws are ⟨Δ_CDW^S=0(x)Δ_CDW^S=0 ∗(0)⟩ SU(3)∼ 1/|x|^2/(3K), ⟨Δ_SC^S=0(x)Δ_SC^S=0 ∗(0)⟩ SO(3)∼ 1/|x|^2K/3. For higher-spin channels, SU(2) invariance allows us to check only the m=0 component of the higher-spin order parameters. The spin-density wave (SDW) order parameters bosonize as follows: Δ_SDW^S=1 ∝ e^iϕ_c/√(3)+iϕ_s2/3 η_1η̅_1 sin(ϕ_s1/√(2)), Δ_SDW^S=2 ∝ e^iϕ_c/√(3)+iϕ_s2/3 η_1η̅_1 (cos(ϕ_s1/√(2)) - e^iϕ_s2). In the SU(3) phase, ϕ_s1 and ϕ_s2 are both pinned to zero, so both order parameters are also pinned to zero. In the SO(3) phase, θ_s2 is pinned, causing both of these order parameters to have exponentially decaying correlations. The higher-spin SC order parameters are also either pinned to zero or decay similarly. The conclusion is that, as expected, only the pseudospin-singlet CDW (SC) order parameter has power-law correlations in the SU(3) (SO(3)) phase. Remarkably, these results are in accordance with the intuition gained from mean field theory. The channel with the most negative coupling constant has power-law fluctuations, while all others have exponentially decaying correlations. §.§.§ Comparison to non-Abelian results Notice that ϕ_s1 is pinned to physically inequivalent values in the two phases. In particular, if there is an externally-enforced boundary between these two phases, ϕ_s1 must change by a half-integer multiple of its compactification length √(2)π. The interpretation can be understood as follows. Clearly ∂_x ϕ_s1 is proportional to the density of S_z.
In particular, locally adding a fermion with S_z = +1 corresponds to adding a 2π kink in ϕ_1; this means that there is a √(2)π kink of ϕ_s1. Hence a √(2)π kink in ϕ_s1 corresponds to a localized change in spin by 1 unit. We instead have a π/√(2) kink, so there must be a half-integer spin trapped at the boundary despite the system being built out of integer pseudospins. We conclude that the two phases are topologically distinct. However, non-Abelian bosonization (see Sec. <ref>) indicated that at the phase transition (g_2 = 0), the low-energy theory should have central charge 0 and thus be gapped. There are therefore two possibilities: * The transition at g_2=0 is first order. * The transition at g_2=0 is continuous, and there is a topological obstruction to gapping out 𝔰𝔲(2)_4 using a 𝐉_L ·𝐉_R interaction. We cannot rule out the second possibility except to say that we have found no evidence supporting it. In the absence of numerical evidence, we suggest that the transition is first order. §.§ N=4 Starting with mean field again, we find K^E = -1/4 [1 3 5 7; 1 11/5 1 -21/5; 1 3/5 -3 7/5; 1 -9/5 1 -1/5]. At mean field level, the leading instabilities are as follows when g_0=0. At the SU(4)- and USp(4)-invariant fixed points, CDW and singlet SC orders respectively have the most negative coupling constants, so we expect physics similar to N=3. The fixed point without emergent symmetry (g_2 = 0, g_3 = -11/14 g_1) has degenerate pseudospin-triplet SDW order and pseudospin-triplet p-wave superconductivity. The physical picture of this phase should then be of fluctuations of both of these order parameters. Both order parameters would spontaneously break SU(2) symmetry if they developed; therefore it makes sense that the pseudospin sector could remain gapless due to fluctuating Goldstone modes. The effect of a nonzero g_0 is similar to that of N=3; again at mean-field level it modifies the location of the phase transition.
However, if the transition between the SU(4)- and USp(4)-invariant phases is second-order (which is allowed for N even), we expect that g_0 will not significantly modify the phase transition. For the SU(4)- and USp(4)-invariant phases, the Abelian bosonization analysis is very similar to that for N=3. We use the fields ϕ_c = (∑_m ϕ_m)/2, ϕ_s1 = (ϕ_1/2 - ϕ_-1/2)/√(2), ϕ_s2 = (ϕ_1/2 + ϕ_-1/2 - ϕ_3/2 - ϕ_-3/2)/2, ϕ_s3 = (ϕ_3/2 - ϕ_-3/2)/√(2). Bosonizing the fixed point Hamiltonians produces, after setting Klein factor conventions, H_int,SU(4) = -g ∫ dx (cos(√(2)ϕ_s1) + cos(√(2)ϕ_s3) + 4cos(ϕ_s1/√(2))cos(ϕ_s3/√(2))cos(ϕ_s2)), H_int,USp(4) = -g ∫ dx (cos(√(2)ϕ_s1) + cos(√(2)ϕ_s3) + 4cos(ϕ_s1/√(2))cos(ϕ_s3/√(2))cos(θ_s2)). Again both Hamiltonians are unfrustrated, and the difference between the two phases is whether ϕ_s2 or θ_s2 is pinned. It is easy to check by bosonizing the order parameters that when ϕ_s2 (θ_s2) is pinned, the CDW (singlet SC) order parameter acquires power-law correlations ⟨Δ_CDW^S=0(x)Δ_CDW^S=0 ∗(0)⟩ SU(4)∼ 1/|x|^1/(2K), ⟨Δ_SC^S=0(x)Δ_SC^S=0 ∗(0)⟩ USp(4)∼ 1/|x|^K/2. There is a crucial qualitative difference between N=3 and N=4: for N=4, both ϕ_s1 and ϕ_s3 are pinned to the same set of (physically equivalent) values in both phases. This means that, unlike for N=3, there are no topologically protected, fractionalized edge states between these two phases. This is expected; since the onsite fermion number is not fixed, the fermions should be thought of as transforming in the fundamental representation of USp(4) ⊂ SU(4), a symmetry which is preserved at both the CDW and SC fixed points. Being simply connected, USp(4) ≅ Spin(5) has no projective representations and thus there can be no fractionalization of the full symmetry. By contrast, for N=3, the fermions carry the fundamental of SO(3), which can fractionalize into spinor representations. The phase without emergent symmetry is unfortunately very difficult to analyze using Abelian bosonization.
Even assuming that perturbative RG yielded the correct value for the ratios of couplings on the fixed ray, which need not be the case since the flow is to strong coupling, the cosine terms that appear do not all commute, so there is no simple “pinning" picture at strong coupling. We therefore cannot confirm our mean field intuition about this peculiar phase and leave further investigation to future work.§ PHASE DIAGRAM FOR GENERAL NUnfortunately, the fixed ray structure is hard to visualize for N>4 due to the large parameter space. We can make some exact statements for general N; together with example calculations and numerics done at small N, this is enough to guess the key features of the phase diagram at all N.Before discussing the results, we briefly explain the nature of our numerical work. We evaluated Eq. (<ref>) numerically in order to obtain the RG equations, which were then rewritten as a function of the g̃_S and solved numerically in order to obtain the full set of fixed rays. The stability of the fixed rays was evaluated by numerically linearizing the RG equations for g̃_S about the fixed ray, writing dδg̃_S/dl ≈ A_SS'δg̃_S', where δg̃_S is the difference between g̃_S and its fixed ray value. The fixed ray is stable if and only if all of the eigenvalues of A are negative; we diagonalized A numerically. The fixed point structure was obtained numerically in this way for all N ≤ 8. We also calculated K^E from Eq. (<ref>) by numerically generating the M^S,α using the relation to Clebsch-Gordan coefficients (which can be generated algorithmically by standard techniques) detailed in Appendix <ref>.As a first general statement, using Eq. 
(<ref>), it is straightforward to show that the RG equation for g_1 is always of the form dg_1/dl = (2π/3) ∑_S S(S+1)(2S+1) g_S^2. We conjecture that, as in N=3 and N=4, there is a region where the pseudospin sector can still flow to the free fixed point when g_1 < 0, occurring when the |g̃_S| are sufficiently small; in this regime, all the |g_S| for S>1 decrease more rapidly than |g_1| does. Otherwise, unless there is fine-tuning, the system will generically flow to large positive g_1, and the system should be analyzed using fixed rays in the same way as at small N. §.§ SU(N)-Invariant Phase Using the completeness of the Clebsch-Gordan coefficients, it can be shown that ∑_S',S” β^S_S',S” = -2Nk for all S. Hence, there is a fixed ray with g_S = g for all S > 0, and the flow is to strong coupling if g > 0. The existence of this fixed ray is rigorous; we conjecture based on the numerical evidence discussed above that it is stable. On this fixed ray, the system has a nonchiral SU(N) symmetry and the corresponding interaction, when bosonized, is of the form g J_L^SU(N)·J_R^SU(N). Hence we expect the interaction to gap out the 𝔰𝔲(N)_1 sector. To understand the nature of this phase, we use similar arguments to before. Since the 𝔰𝔲(N) sector is gapped out, we expect the fluctuating order parameter to be an SU(N) singlet. This can only happen (for fermion bilinears) in the particle-hole channel because the fundamental representation of SU(N) is not self-conjugate for N>2. We therefore expect the leading mean-field instability to be the pseudospin-singlet density wave (exchange) channel, which is confirmed by our numerical calculations of K^E. This phase should thus have power-law correlations of the CDW order parameter (where the power depends on N, see Sec. <ref>). These correlations were checked explicitly in Abelian bosonization for N ≤ 6 by generalizing the method in Section <ref>.
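The fixed-ray procedure used throughout this section (rescale the couplings, find the rays, linearize, and check eigenvalue signs) can be illustrated on a deliberately simple toy flow. The two-coupling beta functions below are invented for illustration and are not the actual β^S_S',S” of the text, which require the 6j-symbol input.

```python
# Toy two-coupling flow (invented for illustration; NOT the actual beta
# functions of the text):
#   dg1/dl = g1^2 + g2^2,   dg2/dl = 2 g1 g2.
# In the rescaled variable x = g2/g1 this becomes dx/dl = g1 (x - x^3),
# so the candidate fixed rays are x* = 0, +1, -1.
def beta_x(x, g1=1.0):
    return g1 * (x - x ** 3)

def linearized_coefficient(xstar, eps=1e-6):
    """Finite-difference linearization A = d(beta_x)/dx at the fixed ray;
    for g1 > 0 the ray is stable iff A < 0 (here A = 1 - 3 x*^2)."""
    return (beta_x(xstar + eps) - beta_x(xstar - eps)) / (2 * eps)

for xstar in (0.0, 1.0, -1.0):
    A = linearized_coefficient(xstar)
    print(f"x* = {xstar:+.0f}: A = {A:+.3f} ({'stable' if A < 0 else 'unstable'})")
```

In the multi-coupling problem the single number A becomes the matrix A_SS' of the text, and stability requires all of its eigenvalues to be negative; the finite-difference linearization generalizes column by column.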
§.§ Odd N In addition to the SU(N)-invariant fixed ray, there is always additional structure in the phase diagram. By the selection rule present in the OPE coefficients in Eq. (<ref>), the number of g_S with even S has the same parity on both sides of the RG equation. Hence the RG equation is symmetric under g_S → (-1)^(S+1) g_S, so the existence of the SU(N)-invariant fixed ray implies the existence of a fixed ray at g_S/g_1 = (-1)^(S+1). Moreover, the chiral particle-hole transformation Eq. (<ref>) relates these two fixed rays at the level of the low-energy theory. This transformation causes the even-S generators of the SU(N) symmetry to become anomalous, and it interchanges particle-hole and particle-particle order parameters. This is true for all N. Apart from the existence of these two fixed rays, however, the behavior of the phase diagram depends strongly on the parity of N, with odd N being simpler. We first focus on this simpler case. In fact, N=3 contains almost all the physics of the general case for odd N. Our numerical solution of the RG equations finds that g̃_S = 1 and g̃_S = (-1)^(S+1) are the only stable fixed rays. This latter fixed ray has SO(N) symmetry; in fact, we prove in Appendix <ref> that for odd N, the M^S,α for odd S form the fundamental representation of 𝔰𝔬(N), and that the corresponding chiral fermion currents form a representation of 𝔰𝔬(N)_2 for N>3. (N=3 is exceptional, forming 𝔰𝔬(3)_4 due to the isomorphism of the Lie algebras 𝔰𝔬(3) and 𝔰𝔲(2).) We also conjecture that the SO(N)-invariant fixed ray has power-law correlations of spin-singlet SC order (where the power again depends on N, see Sec. <ref>). This was checked in Abelian bosonization for N=5; the treatment is completely analogous to N=3 and N=4. Moreover, our numerical solution of the RG equations always shows that there is an unstable fixed ray with g_S = 0 for even S and g_S/g_1 = 1 for odd S, analogous to the g_2 = 0 fixed point at N=3.
Naively this might mark a continuous transition between an SU(N)-invariant phase and an SO(N)-invariant phase. But since the only couplings which appear involve currents in the fundamental representation of 𝔰𝔬(N), the interaction is the marginally relevant coupling g J_L^SO(N)·J_R^SO(N). The fixed point at strong coupling should be described by 𝔰𝔲(N)_1/𝔰𝔬(N)_2, which can be checked to have central charge 0; this is a known conformal embedding <cit.>. For the same reasons as at N=3, we conjecture that the transition between these phases is first-order. §.§ Even N: USp(N)-Invariant Phase and Parafermions When N is even, we conjecture based on the numerical solution of the RG equations for N ≤ 8 that the phase structure is similar to that of N=4. That is, in addition to the SU(N)-invariant phase, there is a USp(N)-invariant phase and a phase which has no symmetry beyond the SU(2) symmetry we imposed. We focus on the former in this section. As in the odd-N case, the RG equations are symmetric under g_S → (-1)^(S+1) g_S, so there is (rigorously) always a fixed ray at g_S/g_1 = (-1)^(S+1), which we conjecture to be stable. We prove in Appendix <ref> that the M^S,α for odd S generate the fundamental representation of USp(N). By the selection rules resulting from Eq. (<ref>), we also see that the OPE of an odd-pseudospin fermion current with an even-pseudospin current produces only even-pseudospin currents. Therefore, this phase is fully USp(N)-invariant. To understand this phase, we can again use the operator Ĉ appearing in Eq. (<ref>) with the appropriate value of S_0. Eq. (<ref>) holds for any N, so as in the N=4 case, the USp(N)-invariant phase should have a full pseudospin gap and power-law singlet s-wave superconducting correlations.
This was checked by numerical mean field calculations using K^E for N ≤ 8, which show that singlet s-wave superconductivity is the leading instability, and Abelian bosonization for N ≤ 6. We can also consider the phase transition between the SU(N)-invariant phase and the USp(N)-invariant phase; since at g_S/g_1 = 1 for odd S and g_S = 0 for even S the system is invariant under Ĉ, such a fixed point always exists. We know that the odd-pseudospin matrices generate USp(N), and we have computed in Appendix <ref> that the odd-pseudospin fermion bilinears generate a representation of 𝔰𝔭(N)_1. We can then conjecture that if the transition is second order, then it is described by 𝔰𝔲(N)_1/𝔰𝔭(N)_1. To understand this theory, we simply note that 𝔰𝔲(N)_1 = 𝔲(N)_1/𝔲(1), so the pseudospin sector is described by (𝔲(N)_1/𝔲(1))/𝔰𝔭(N)_1. Switching the order of the coset procedure (which is valid because the generator of the 𝔲(1) subalgebra commutes with the generators of 𝔰𝔭(N)_1), we obtain (𝔲(N)_1/𝔰𝔭(N)_1)/𝔲(1). But 𝔲(N)_1/𝔰𝔭(N)_1 = 𝔰𝔲(2)_N/2 for even N, so our phase transition is described by the 𝔰𝔲(2)_N/2/𝔲(1) theory, which describes ℤ_N/2 parafermions. Although this second-order phase transition is consistent with our results, we cannot rule out the possibility of first-order phase transitions appearing instead. In fact, it is quite possible that, much like the quantum rotor model, for some values of N this fixed point is actually the multicritical end of a line of first-order transitions. §.§ Even N: SU(2)-Invariant Phase Again based on the numerical solution to the RG equations for all N ≤ 8, we conjecture that there is always another stable fixed ray for even N at g_S = 0 for S even and some particular but non-generic (and not all positive) values of g̃_S for S odd. Unfortunately, the analysis of this fixed point is even more challenging than for N=4 for two reasons.
First, the coset theory 𝔰𝔲(N)_1/𝔰𝔲(2)_k has central charge c_coset = (N-3)(N-2)(N-1)(N+2)/(N^3-N+12), which is not an easily identifiable theory for N>4. Second, it is merely a coincidence that for N=4, the field ϕ^3_L ϕ^3_R has scaling dimension 2 in 𝔰𝔲(2)_10. This coincidence allowed us, using Eq. (<ref>), to say that the coset theory did not flow as the fixed ray couplings grow large. In general, the interaction at the fixed point will not live only in the 𝔰𝔲(2)_k theory; although some spin-S term may happen to have scaling dimension 2, other operators are typically present, so the coset theory flows as well. However, based on our mean-field procedure and numerical calculations of K^E, we conjecture that this phase has, as at N=4, fluctuating pseudospin-triplet CDW and p-wave SC orders, and should, correspondingly, be gapless in the pseudospin sector. As an additional piece of evidence, if the fixed ray indeed always has g_S = 0 for S even (as it does at our level of approximation for N ≤ 8), then at the level of the low-energy theory the pseudospin part of the theory is invariant under Ĉ. This means that the triplet CDW order parameter has power-law correlations if and only if the triplet SC order parameter does as well. § SU(2)-BREAKING PERTURBATIONS Recall that the whole point of our mapping from a wire to ℝ× S^2 geometry was to restore magnetic translation symmetry, which is broken in a wire, while also changing the group structure of magnetic translation symmetry to SU(2). In order for our results to relate to real wires, we therefore need to add SU(2)-breaking perturbations. In this section, we give some qualitative arguments about what happens when SU(2) symmetry is broken. Recall that in the ℝ× D^2 geometry in symmetric gauge, single-particle states are localized in the radial direction.
Suppose the potential at the edge of the disk decays on a length scale ξ; then only the states localized within a strip of width ξ near the edge will be significantly affected by the edge potential. In the ℝ× S^2 geometry, single-particle states are localized in the azimuthal direction, with m = S_0 corresponding to a state near the north pole and m = -S_0 localized near the south pole. Adding a perturbation ψ^†(S_z/S_0 - 1)^γψ for some large power γ therefore corresponds to sharply increasing the energy of the states near the south pole without affecting the rest very much; such a perturbation is analogous to adding an edge potential to the disk geometry if we associate the north (south) pole of the sphere with r=0 (r=R) on the disk. In the spin language, this perturbation behaves similarly to a magnetic field. To estimate the strength of this perturbation, we note that the electron density in the disk is N/π R^2. Since the symmetry-breaking field only acts within the strip of width ξ near the edge, approximately (2π R ξ)(N/π R^2) = 2Nξ/R of the N degenerate states will be affected. That is, a fraction of order ξ/R of the degenerate states will have a marginal perturbation applied to them (roughly speaking, k_F changes for these states because their k_z dispersion is shifted upward in energy); we thus expect the strength of the “Zeeman field” to be proportional to ξ/R. For a thick enough wire, ξ/R should be small, so most of the single-particle states remain degenerate. After this analysis it is straightforward to understand the fate of the phase diagram upon moving to the disk geometry. Since charge is obviously still conserved, SU(2)-breaking marginal perturbations affect only the pseudospin sector. Therefore, despite the fact that the whole system is gapless, a gap in the pseudospin sector is enough to guarantee perturbative stability of a phase.
This immediately implies that the singlet CDW and singlet SC phases are stable at both odd and even N. These phases should also remain distinct. Breaking SU(2) symmetry does not mix CDW and SC order parameters; in fact, in the Abelian bosonization picture the pseudospin bosons ϕ_s1, ϕ_s2, ... remain well-defined even when the external “field” is applied, which is sufficient to maintain the distinctness of these phases. The triplet CDW/SC phase at even N, on the other hand, is probably not strictly speaking stable to SU(2)-breaking perturbations. Its gaplessness originates from fluctuations of a putative spontaneous breaking of SU(2) symmetry, so explicit symmetry breaking should induce a gap of order ξ/R. As a result, this need not be a distinct phase, but the smallness of the gap may allow a crossover to a regime where signatures of this phase remain. § THE THREE-DIMENSIONAL LIMIT Considerable work has already been done <cit.> on bulk 3D crystals in the zeroth Landau level; to compare with those results, we wish to take the bulk limit in our treatment. In the disk geometry, this means taking the radius of the wire to infinity at fixed magnetic field and carrier density. Since the Landau level degeneracy goes as the total flux penetrating the wire, the bulk limit is that of large N, a limit we can also take in the sphere geometry. One key expectation is that as the system becomes less one-dimensional, true long-range order appears instead of quasi-long-range order. In this section, we compare to previous work and to this expectation. The simplest way to see the bulk limit emerge is by examining what power laws appear in correlation functions of various order parameters.
Looking at our Abelian bosonization results in Section <ref> and generalizing the pattern of basis changes, we expect that the singlet order parameters obey Δ_CDW = ∑_m e^iϕ_m ∝ e^iϕ_c/√(N), Δ_SC = ∑_m e^iθ_m ∝ e^iθ_c/√(N), where ϕ_c = (∑_m ϕ_m)/√(N) and we have dropped the spin sector pieces of the order parameters. We saw at small N that at the fixed point, the power-law correlations come entirely from the U(1) charge sector; the spin sector delivers constant factors. Assuming this trend continues, for a given Luttinger parameter K of the charge sector, we compute that ⟨Δ_CDW(x)Δ_CDW^∗(0)⟩ SU(N)∼ 1/|x|^2/(NK), ⟨Δ_SC(x)Δ_SC^∗(0)⟩ SO(N),USp(N)∼ 1/|x|^2K/N. Suppose that the interactions before projecting to the zeroth Landau level are fixed and weak. As N grows, none of the projected interaction strengths should diverge; that is, g_0 should not grow with N. This means that corrections to the free value K = 1, which are controlled by the small parameter g_0, do not diverge with N. Hence as N →∞, the power law falls off slower and slower, eventually becoming a distance-independent contribution to the correlation function. This is how true long-range order appears in the bulk limit. It would be nice to check our conjectures about the general-N phase diagram at large N. The starting point would be to expand the RG coefficients β^S_S',S” at large N by expanding the Wigner 6j-symbols appearing in Eq. (<ref>) at large S_0. Unfortunately, the leading-order term in the expansion <cit.> is proportional to the Clebsch-Gordan coefficient ⟨S,m=0|S',m=0;S”,m=0⟩, which is precisely zero when S+S'+S” is odd. If S+S'+S” is even, then β^S_S',S” is instead zero due to the selection rules in Eq. (<ref>). To get any nontrivial flow, then, 1/N corrections must be considered, which considerably complicates the analysis. Although it is difficult to analyze the large-N limit in more detail, we can make some simple comparisons with the results of Ref.
Yakovenko, where fully three-dimensional spinless fermions were considered in the parquet approximation. Ref. Yakovenko finds two zero-temperature phases in the bulk limit depending on whether the contact interactions are repulsive or attractive. In the former case, there is a transition to a CDW state, and in the latter the system is a marginal Fermi liquid. We do find two phases much like those above. Our CDW state exists at all N and becomes long-range order in the N →∞ limit; this should be analogous to the CDW phase in Ref. Yakovenko. The marginal Fermi liquid phase is harder to compare because we have focused on T=0 while Ref. Yakovenko finds susceptibilities at T>0 which diverge only as T→ 0. However, the marginal Fermi liquid phase has a divergent SC susceptibility and a finite CDW susceptibility as T→ 0, which is qualitatively similar to our SO(N) (USp(N)) phase. We do find more phases than Ref. Yakovenko, in that we find a Luttinger liquid phase at all N and a phase with fluctuating triplet order parameters at even N. A likely reason for this inconsistency is that although we require short-range interactions, we do not constrain the range of the interactions compared to the magnetic length. Ref. Yakovenko does make this assumption in order to argue that considering a projected contact interaction is sufficient, and is therefore a special case of our results. Another possibility is that as N gets large, our additional phases occupy a fraction of the phase diagram which approaches zero; we cannot rule this out because we do not know how the basin of attraction of these fixed points behaves as a function of N. Beyond these considerations, it is possible for our model to break down entirely in the bulk limit due to disorder. As the wire gets thicker, it is more likely to be disordered, which would broaden the Landau levels.
In fact, this would be like analyzing our pseudospin model with a random SU(2)-breaking field. § DISCUSSION We first briefly summarize our main results. We mapped an interacting metallic wire with a strong magnetic field along its length to one-dimensional fermions of pseudospin S_0 = (N-1)/2, where N is the degeneracy of the zeroth Landau level at fixed k_x. We then computed the phase diagram. For all N and any interactions, there is spin-charge separation with a gapless charge sector (so long as the filling is incommensurate). For all N, there is a Luttinger liquid phase where the interactions only provide logarithmic corrections to correlations in the pseudospin part of the free theory. For N>2, there are also two pseudospin-gapped phases where an order parameter has power-law correlations with a power that depends on N: a fluctuating pseudospin-singlet CDW phase and a fluctuating pseudospin-singlet SC phase. For N odd, the transition between these phases is first-order, but for N even, the transition is permitted to be second-order and governed by the 𝔰𝔲(2)_N/2/𝔲(1) parafermion CFT. Even N>2 has an additional phase which has no pseudospin gap and has power-law correlations of both the pseudospin-triplet CDW and SC order parameters. Recalling that tuning N is like tuning the magnetic field, our main predictions which are interesting to search for in experiments are: power-law correlation functions whose exponents are tuned by the magnetic field; the use of the magnetic field to tune between a Luttinger liquid, fluctuating SC order, and CDW order (although the extent to which this is possible depends on the details of how the interactions project at different N); and signatures of the phase with fluctuating pseudospin-triplet orders. One important consideration for any such experimental search is how practical the limits we are considering are for real experimental systems.
The main constraint is that the carrier density must be low enough that all carriers are in the zeroth Landau level. For electrons with a quadratic dispersion, this means that the chemical potential in field must be below the energy of the first Landau level, i.e. ħ^2 k_F^2/(2m) ≤ ħ eB/m, where m is the effective mass and k_F is the Fermi wavevector. For a Weyl semimetal with Weyl points at k = ± k_W x̂ (with k_W > 0), the corresponding estimate is ħ|k_F - k_W| v_F ≤ v_F √(ħ eB), with v_F the Fermi velocity. The Landau level degeneracy N in both cases is of order π R^2 B/Φ_0, where R is the wire radius and Φ_0 = h/e is the flux quantum. The LL degeneracy can be used to relate k_F to the carrier density, which can then be plugged into Eqs. (<ref>) and (<ref>) to estimate B ≳ (ħ/e)(2π^4 n^2)^(1/3) for Schrödinger electrons and B ≳ (ħ/e)(4π^4 n^2)^(1/3) for Weyl electrons. Assuming n ∼ 10^17 cm^-3, this is about 8 T for Schrödinger electrons and 10 T for Weyl electrons. However, in both cases N ∼ 60 (B/8 T)(R/100 nm)^2. In the previous section, we saw that the power-law correlation functions are most one-dimensional when N is small; large N quickly starts to look like long-range order. Given these estimations, the large-N limit should be experimentally achievable, but the small-N limit may require extremely narrow wires or extremely low carrier density (to reduce the magnetic field required). On the theoretical side, this work raises a number of open questions. Analyzing the pseudospin-gapless phase at even N and its stability to SU(2)-breaking perturbations is an interesting and nontrivial CFT problem. Studying the various phase transitions in this model and distinguishing first-order and second-order transitions more clearly is also an interesting technical challenge in both the Abelian and non-Abelian bosonization languages. Another interesting possibility is to see if there is a deep connection with the Haldane conjecture.
In particular, changing from even to odd N corresponds to moving between half-integer and integer pseudospin, and the appearance of a pseudospin-gapless phase for half-integer spin is reminiscent of the Haldane conjecture. The connection is not obvious because our results are at incommensurate filling and because the set of allowed operators looks different. We would like to thank Ian Affleck, Yingfei Gu, Pavan Hosur, Steve Kivelson, and Sri Raghu for illuminating discussions. DB is supported by the National Science Foundation under grant No. DGE-114747. CMJ's research was in part completed in Stanford University under the support of the David and Lucile Packard Foundation. CMJ's research at KITP is supported by a fellowship from the Gordon and Betty Moore Foundation (Grant 4304). XLQ is supported by the National Science Foundation under grant No. DMR-1151786 and by the David and Lucile Packard Foundation.§ THE BASIS OF FERMION BILINEARSIn this appendix, we construct the matrices M^S,α with the properties discussed in Section <ref> and use them to write down the SU(2)-invariant Hamiltonian Eq. (<ref>), prove that the odd-S matrices form a 𝔲𝔰𝔭(N) (𝔰𝔬(N)) subalgebra for N even (odd), and prove that the corresponding affine subalgebra of fermion bilinears has level 1 (2).We start with some intuition. Fermion bilinears are objects ψ^†_mM_mn^S,αψ_n (suppressing the L/R indices) which transform under SU(2) as ψ^†_m'U^†_m'mM_mn^S,αU_nn'ψ_n'. We are thus taking two objects, one which transforms as pseudospin S_0 = (N-1)/2 and one which transforms as its complex conjugate, and producing an object which transforms in a pseudospin-S representation. In SU(2), moving from a representation to the complex conjugate is the same as time reversal. Therefore, we expect a relationship between M_mn^S,α and the Clebsch-Gordan coefficient ⟨S_0,m;S_0,-n|S,p⟩ for some appropriate relationship between p and α. Let us make this precise. 
Define a compact notation C^S,p_mn = ⟨S_0,m;S_0,n|S,p⟩ for the Clebsch-Gordan coefficients fusing two spin-S_0 objects with S_z quantum numbers m and n to a spin-S object with S_z quantum number p. Here m,n = -S_0, -S_0+1, ..., S_0 and p = -S, -S+1, ..., S; note that p and S are always integers. Treating m and n as matrix indices, the Clebsch-Gordan coefficients are not Hermitian. Before building Hermitian matrices from them, we need to establish some preliminary properties. Using a convention where all Clebsch-Gordan coefficients are real, elementary symmetry and completeness properties of the Clebsch-Gordan coefficients lead to the identities C^S,p_mn ∝ δ_m+n,p, (C^S,p)^† = (-1)^(S+2S_0) C^S,p, tr[(C^S,p)^† C^S',p'] = δ^S,S'δ^p,p'. In taking the Hermitian conjugate and the trace, we are treating m and n as the matrix indices and S, S', p, p' as labels. Eq. (<ref>) relies on the fact that S is an integer, or else there could be an extra negative sign. Next, in the convention where the spin-S_0 matrices S^x and S^z are purely real and S^y is purely imaginary, the time reversal operator is T ≡ Ω𝒦, with 𝒦 the antiunitary complex conjugation operator and Ω the unitary matrix Ω = exp(iπ S^y/√(2)) (the factor of √(2) is due to our normalization convention for the structure constants of 𝔰𝔲(2)). The matrix elements of Ω are Ω_mn = (-1)^(S_0-m) δ_m,-n; note that Ω^† = (-1)^(2S_0) Ω and Ω^2 = (-1)^(2S_0). Next, define for each p the matrices A^S,p = C^S,p Ω. By inspection A is related to the Clebsch-Gordan coefficient C^S,p_m,-n, as expected intuitively. Moreover, time-reversal symmetry of the Cs implies C^S,p Ω = (-1)^(S-p) Ω C^S,-p, which can be combined with Eqs. (<ref>) and (<ref>) to find (A^S,p)^† = (-1)^p A^S,-p, tr(A^S,p A^S',p') = (-1)^p δ_S,S' δ_p,-p'. Finally, we can define our desired matrices. For α = -S, -S+1, ..., S, define (suppressing matrix indices) M^S,α = √(k/2)(A^S,α + (-1)^α A^S,-α) for α > 0, M^S,0 = √(k) A^S,0 for α = 0, and M^S,α = i√(k/2)[A^S,α - (-1)^α A^S,-α] for α < 0. Hermiticity follows immediately.
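These identities are straightforward to verify symbolically for a small case. The sketch below is our construction (it assumes SymPy's real, Condon-Shortley-convention Clebsch-Gordan coefficients, matching the convention stated above); it checks the trace orthonormality, the conjugation property, and the A-matrix trace identity for S_0 = 1, i.e. N = 3.

```python
from sympy import Matrix, simplify
from sympy.physics.quantum.cg import CG

S0 = 1                      # pseudospin for N = 3
ms = range(-S0, S0 + 1)     # m, n = -S0 ... S0

def C(S, p):
    """C^{S,p}_{mn} = <S0,m; S0,n | S,p>, with m, n as matrix indices."""
    return Matrix([[CG(S0, m, S0, n, S, p).doit() for n in ms] for m in ms])

# Omega_{mn} = (-1)^{S0-m} delta_{m,-n}
Om = Matrix([[(-1) ** (S0 - m) if m == -n else 0 for n in ms] for m in ms])

labels = [(S, p) for S in range(2 * S0 + 1) for p in range(-S, S + 1)]
Cs = {lab: C(*lab) for lab in labels}

for (S, p) in labels:
    for (S2, p2) in labels:
        # tr[(C^{S,p})^dagger C^{S',p'}] = delta_{SS'} delta_{pp'}  (Cs are real)
        t = simplify((Cs[(S, p)].T * Cs[(S2, p2)]).trace())
        assert t == (1 if (S, p) == (S2, p2) else 0)
        # tr(A^{S,p} A^{S',p'}) = (-1)^p delta_{SS'} delta_{p,-p'},  A^{S,p} = C^{S,p} Om
        t = simplify((Cs[(S, p)] * Om * Cs[(S2, p2)] * Om).trace())
        assert t == ((-1) ** (p % 2) if (S == S2 and p == -p2) else 0)

# (C^{S,p})^dagger = (-1)^{S + 2 S0} C^{S,p}
for (S, p) in labels:
    assert simplify(Cs[(S, p)].T - (-1) ** (S + 2 * S0) * Cs[(S, p)]).is_zero_matrix

print("Clebsch-Gordan identities verified for S0 = 1")
```

The same loop runs for larger S_0 at the cost of more Clebsch-Gordan evaluations; S_0 = 1 already exercises the sign structure, since the S = 1 matrices are antisymmetric while S = 0, 2 are symmetric.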
Property (1) of Section <ref> is satisfied by definition. Additionally using Eqs. (<ref>) and (<ref>) shows that these matrices are orthogonal and normalized according to Property (3) (all of the factors of (-1)^p work out properly). To check the transformation properties under SU(2), note first that S^z anticommutes with Ω; this immediately proves [S_z, A^S,p] = √(2) p A^S,p (where again the √(2) is due to normalization). Likewise, S_x and S_y anticommute and commute, respectively, with Ω. Moreover, transforming the lower indices of a Clebsch-Gordan coefficient is the same as transforming the upper index, that is, S^± C^S,p + C^S,p(S^±)^† = √(S(S+1)-p(p±1)) C^S,p±1. These two facts imply [S^±, A^S,p] = √(S(S+1)-p(p±1)) A^S,p±1, as desired. From the transformation properties, it is straightforward to show that SU(2) invariance requires that the interaction Hamiltonian has the form H_int = ∑_S,p g_S (-1)^p ψ^†_L A^S,pψ_L ψ^†_R A^S,-pψ_R. Substituting the definition Eq. (<ref>) of the Ms into Eq. (<ref>) proves that Eq. (<ref>) is the same as Eq. (<ref>). That is, the Ms are just a basis rearrangement of the As used to ensure Hermiticity. This is particularly clear for S=1; it is easy to check that A^1,±1 ∝ S^±, so M^1,±1 ∝ S^x, S^y respectively. We use Eq. (<ref>) rather than Eq. (<ref>) because the orthogonality and normalization of the M^S,α is slightly simpler than that of the A^S,p. Having discussed the SU(2) properties of the M^S,α, we now demonstrate that the M^S,α for odd S generate 𝔰𝔭(N) and 𝔰𝔬(N) when N is even and odd respectively. It is easy to count that when N is even and odd respectively, there are N(N+1)/2 and N(N-1)/2 (mutually orthogonal in the trace norm) matrices M^S,α with odd S; these are the dimensions of 𝔰𝔭(N) and 𝔰𝔬(N) respectively.
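Continuing the same toy check for N = 3 (S_0 = 1, taking k = 1), one can build the M^S,α explicitly and confirm Hermiticity, the trace orthonormality Tr[M^S,α M^S',α'] = k δ_SS'δ_αα', and the count of odd-S matrices, N(N-1)/2 = 3 for N = 3. This is an illustrative sketch, not part of the proof:

```python
from functools import lru_cache
from sympy import Matrix, I, sqrt, Rational, simplify
from sympy.physics.quantum.cg import CG

S0, k = 1, 1                 # N = 3 (odd); k = 1 sets the normalization Tr[M^2] = k
ms = [-1, 0, 1]
dim = len(ms)
sgn = lambda n: (-1) ** (n % 2)   # safe (-1)^n for possibly negative integers

@lru_cache(maxsize=None)
def C(S, p):
    return Matrix(dim, dim, lambda i, j: CG(S0, ms[i], S0, ms[j], S, p).doit())

Omega = Matrix(dim, dim, lambda i, j: sgn(S0 - ms[i]) if ms[i] == -ms[j] else 0)

@lru_cache(maxsize=None)
def A(S, p):
    return C(S, p) * Omega

def M(S, a):
    # the Hermitian basis rearrangement of the A^{S,p} defined in the text
    if a > 0:
        return sqrt(Rational(k, 2)) * (A(S, a) + sgn(a) * A(S, -a))
    if a == 0:
        return sqrt(k) * A(S, 0)
    return I * sqrt(Rational(k, 2)) * (A(S, a) - sgn(a) * A(S, -a))

basis = [(S, a) for S in range(2 * S0 + 1) for a in range(-S, S + 1)]
herm_ok = all(M(S, a).H == M(S, a) for S, a in basis)
orth_ok = all(
    simplify((M(S, a) * M(S2, a2)).trace()) == (k if (S, a) == (S2, a2) else 0)
    for S, a in basis for S2, a2 in basis)
# for N odd, the odd-S matrices should number N(N-1)/2 (the dimension of so(N))
n_odd = sum(2 * S + 1 for S in range(1, 2 * S0 + 1, 2))
```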
Next, note that Ω is always real and is antisymmetric (symmetric) for N even (odd); therefore, we can use it as a symplectic (symmetric) form, and the fundamental representation of the Lie group USp(N) (SO(N)) consists of unitary N × N matrices B which obey B^T Ω B = Ω. Passing to the Lie algebra and using Ω^2 = (-1)^N+1, this means that if Ω(M^S,α)^TΩ = (-1)^N+1 M^S,α for all odd S and each α, then the M^S,α generate 𝔰𝔭(N) and 𝔰𝔬(N) respectively. Using Eq. (<ref>) we find Ω A^S,pΩ = (-1)^N+1Ω C^S,p = (-1)^S-p+N+1 A^S,-p. This immediately implies Ω (M^S,α)^T Ω = (-1)^S+N+1 M^S,α, which is the desired identity. Finally, we determine the level of the 𝔰𝔬(N) and 𝔲𝔰𝔭(N) affine algebras generated by the corresponding fermion bilinears. According to Eq. (<ref>), if M^S,α is any generator in the subalgebra, then the level of the corresponding affine subalgebra is Tr[(M^S,α)^2] provided that the normalization of the subalgebra structure factors f^ab_c is such that ∑_ab f^ab_c f^ab_d = 2g δ_cd, where a,b,c,d label generators of the subalgebra and g is the dual Coxeter number of the subalgebra. In our current normalization, Tr[(M^S,α)^2] = k; we still need to check the normalization of the structure factors. Since the normalization Eq. (<ref>) is independent of the index c, we can choose the generator c to be M^1,0 = S_z for convenience. From now on we will use S',S” as dummy indices taking only odd values from 1 to N-1 if N is even and from 1 to N-2 if N is odd. From the definition of the structure factors it is easy to see that f^S',α;S”,β_1,0 = (1/ik) Tr([M^S',α,M^S”,β]M^1,0). Plugging this into Eq. (<ref>) and comparing to Eq.
(<ref>), we see that ∑_S',S”,α,β (f^S',α;S”,β_1,0)^2 = -∑_S',S”β_S',S”^1. Plugging in the definitions of the Ms in terms of As, expanding carefully and doing some reindexing turns this into ∑_S',S”,α,β (f^S',α;S”,β_1,0)^2 = -∑_S',S”,α,β (-1)^α+β Tr([A^S',α,A^S”,β]M^1,0) Tr([A^S',-α,A^S”,-β]M^1,0) = -∑_S',S”,α,β (-1)^α+β Tr([M^1,0,A^S',α]A^S”,β) Tr([M^1,0,A^S',-α]A^S”,-β) = ∑_S',S”,α,β 2α^2 (-1)^α+β δ_α,-β δ_S',S” = (2/3)∑_S' odd S'(S'+1)(2S'+1) = 2k(1+N/2) for N even and k(N-2) for N odd, i.e., 2k g_𝔰𝔭(N) for N even and k g_𝔰𝔬(N) for N odd, where we used Eqs. (<ref>) and (<ref>) to evaluate the commutators and traces. Since Tr(M^2) = k, we immediately read off that the level of the 𝔰𝔭(N) (𝔰𝔬(N)) affine algebra is 1 (2) for N even (odd).
§ DERIVATION OF THE RG COEFFICIENTS
In this Appendix, we outline the derivation of Eq. (<ref>) starting from Eqs. (<ref>) and (<ref>). The left-hand side of Eq. (<ref>) is SU(2) invariant, so the right-hand side must be independent of γ. For convenience we sum over γ: β^S_S',S” = 1/(k^2(2S+1)) ∑_αβγ (Tr([M^S',α,M^S”,β]M^S,γ))^2. Next we plug in the explicit expression Eq. (<ref>) of the M matrices. A careful expansion of the squares and some reindexing leads to β^S_S',S” = k/(2S+1) ∑_αβγ (-1)^α+β+γ Tr([A^S',α,A^S”,β]A^S,γ) Tr([A^S',-α,A^S”,-β]A^S,-γ) = k/(2S+1) ∑_αβγ (-1)^α+β+γ Tr([C^S',αΩ,C^S”,βΩ]C^S,γΩ) Tr([C^S',-αΩ,C^S”,-βΩ]C^S,-γΩ). For the moment we ignore the sums on Greek indices and the commutators in order to evaluate traces of products of three Clebsch-Gordan (C-G) coefficients. Using Eq. (<ref>), we have Tr(C^S',αΩ C^S”,βΩ C^S,γΩ) = (-1)^S”-β+2S_0 Tr(C^S',α C^S”,-β C^S,γΩ) = (-1)^S”-β+2S_0 ∑_mnl (-1)^S_0+m C^S',α_mn C^S”,-β_nl C^S,γ_l,-m. Note that this is only nonzero when m+n=α, n+l=-β, and l-m = γ, which means α+β+γ=0. This removes a phase factor in Eq. (<ref>). Transposing the first term using Eq. (<ref>) manipulates this equation into a form for which there is a known <cit.> identity relating such a product of three C-G coefficients to a product of a 6j symbol and another C-G coefficient.
Applying the identity, we get Tr(C^S',αΩ C^S”,βΩ C^S,γΩ) = (-1)^S'+S-β √((2S+1)(2S'+1)) C̃^S”,-β_S',α;S,γ {S_0 S_0 S'; S” S S_0}, where C̃ is a C-G coefficient for combining spin S and S' into S”, and {⋯} denotes a 6j symbol. Substituting this relationship into Eq. (<ref>) and using the symmetry properties of the 6j symbols converts it to β^S_S',S” = k {S S' S”; S_0 S_0 S_0}^2 ∑_αβγ ((-1)^S'-β √(2S'+1) C̃^S”,-β_S',α;S,γ - (-1)^S”-α √(2S”+1) C̃^S',-α_S”,β;S,γ) × ((-1)^S'+β √(2S'+1) C̃^S”,β_S',-α;S,-γ - (-1)^S”+α √(2S”+1) C̃^S',α_S”,-β;S,-γ). Using elementary symmetry properties of the C-G coefficients, all the αs and γs can be placed on the bottom and given the same sign up to some phase factors and factors of √(2S'+1) or √(2S”+1). This allows the use of the completeness relations of the C-G coefficients in order to perform the sums over α and γ and to remove all the C-G coefficients. The remaining β dependence disappears, allowing the sum over β to be replaced by a factor of (2S”+1). These manipulations are simple but tedious; tracking all the factors carefully (and remembering that α, β, γ, S, S', and S” are integers) produces Eq. (<ref>).
§ SELECTION RULES FOR OPES
We found in Eq. (<ref>) that β^S_S'S” = 0 if S+S'+S” is even. In this section, we will use Young tableaux to demonstrate how this selection rule results from the symmetry properties of the fermion bilinears. Consider the products of three Ms as they appear in Eq. (<ref>). The object Tr(M^S',α M^S”,γ M^S,δ) intuitively takes a spin-S' and spin-S” object, fuses them, and finds its overlap with the spin-S channel. There are of course constraints on α, γ, and δ, but for the moment we only care about whether β_S'S”^S is zero. The symmetry of such fusions can be encoded in Young tableaux. For example, consider S'=2, S”=1.
Then the two terms in the commutator Tr([M^2,α, M^1,γ] M^S,δ) correspond to the two orderings of the tableau fusion (schematically, with a row of 4 white boxes for the spin-2 object and 2 gray boxes for the spin-1 object): (4 white) ⊗ (2 gray) = (6) ⊕ (5,1) ⊕ (4,2), and (2 gray) ⊗ (4 white) = (6) ⊕ (5,1) ⊕ (4,2), where (r_1,r_2) denotes a tableau with r_1 boxes in the first row and r_2 boxes in the second, and the two orderings differ in how the white and gray boxes fill the tableaux. The shading tracks whether the box came from the spin-2 or the spin-1 representation. It is implied that all boxes with the same shading are symmetrized, regardless of the row, because they are symmetrized on the left-hand side of Eq. (<ref>). The three terms correspond to S = 3,2,1 respectively. It is now clear from the symmetry properties of the Young tableaux (that is, rows are symmetrized and columns are antisymmetrized) that in subtracting Eq. (<ref>) from Eq. (<ref>) the spin-3 and spin-1 tableaux will cancel out, while the spin-2 tableau will not. The commutator in Eq. (<ref>) is exactly such a difference, so the commutator must produce zero if S ≠ 2. More generally, there will be a fully symmetric tableau with 2S' boxes (the white boxes in Eq. (<ref>)) fused with a fully symmetric tableau with 2S” boxes (the shaded boxes in Eq. (<ref>)). Consider the fusion to spin S. There are 2(S'+S”) boxes total, 2S of which must be “dangling” in the first row. Hence there are S'+S”-S columns which have two boxes in them (this must be nonnegative for that fusion channel to be allowed at all), one of which must come from S' and the other of which must come from S”. Therefore, under exchange of the S' and S” tableaux, the wavefunction picks up a factor of (-1)^S'+S”-S = (-1)^S+S'+S” (since S is an integer). If S+S'+S” is even, then the wavefunction is symmetric under this exchange and the commutator produces zero, so β_S'S”^S = 0.
§ MEAN FIELD THEORY
In this section, we explain our mean-field procedure that is used for intuition about the phase diagram.
In particular, we will compute the susceptibility for each possible CDW or SC order parameter to show that at mean-field level, the most negative coupling constant produces the strongest tendency towards order (the strongest divergence in the susceptibility). The action is S = ∫ dx dτ ∑_m ψ^†_m ∂_τψ_m + H_0 + H_int, with H_0 defined in Eq. (<ref>). We choose to write H_int in the exchange channel as in Eq. (<ref>). Next, consider the fat unity 1 ∝ ∫ DG^S,α exp(-1/(4|g_S^ex|) ∫ dx dτ (G^S,α + 2 g^E_S ψ^†_L M^S,α ψ_R)((G^S,α)^∗ + 2 g_S^E ψ^†_R M^S,α ψ_L)), where G^S,α is a complex bosonic field and spacetime dependences have been suppressed. Then it is easy to check by expanding that when g_S < 0, the quartic term produces the correct sign to cancel off the interaction. We expect no low-energy instabilities when g_S^ex > 0, so the mean field does not need to make sense. Defining the object Ψ^†(x) = [ ψ^†_L(x) ψ^†_R(x) ] (a 2N-component object) and inserting the fat unity into the path integral, the effective action is then S_eff = ∫ dz dτ [1/(4|g_S^ex|)|G^S,α|^2 + Ψ^†[ G_0,L^-1 -1/2 G^S,α M^S,α; -1/2 (G^S,α)^∗ M^S,α G_0,R^-1 ]Ψ], where G_0,L(R) is the noninteracting Green's function for the left (right) movers (and is independent of m). We now integrate out the fermions and expand to second order in G^S,α. The expansion produces terms in the free energy proportional to Tr(M^S,α M^S',β) G^S,α(G^S',β)^∗; thanks to our convenient choice of the Ms, the trace collapses the sum to only the diagonal terms. Hence at second order, all the order parameters decouple, yielding the free energy F ≈ ∫ dq dω |G^S,α(q,ω)|^2/(4|g_S^ex|) [1 + χ_CDW(q,ω)]. The linear term vanishes by the trace in L/R space, and we have dropped the zeroth-order (free fermion) contribution. We have defined the CDW susceptibility χ_CDW(q,ω) = k|g_S^ex| ∑_p,ω' G_0,L(p,ω') G_0,R(p-q,ω'-ω). The trace over the flavor index produces the factor of k. Here ω and ω' are bosonic Matsubara frequencies.
We have assumed that all g_S^E < 0, and there are implicit sums over all S,α. Evaluating the sum of noninteracting fermionic Green functions by standard techniques produces, at zero temperature and zero frequency, the static susceptibility χ_CDW(q,ω = 0) = (k|g^E_S|/π) log| (δq)^2/(4Λ^2 - (δq)^2) · (4k_F - 2Λ - δq)/(4k_F + 2Λ - δq) |. Here δq = q + 2k_F and Λ is the momentum cutoff of the low-energy non-interacting theory. There is a divergence at δq = 0 (i.e. q = 2k_F) which scales as (2k|g^E_S|/π) log δq. A completely analogous computation in the Cooper channel yields a static susceptibility χ_SC(q,ω = 0) = (k|g_S^C|/π) log| q^2(4k_F + q - 2Λ)/((4Λ^2 - q^2)(4k_F + q + 2Λ)) |. This has a q = 0 divergence scaling as (2k|g^C_S|/π) log q. The conclusion of all of this is that at mean-field level, any negative coupling constant produces a logarithmically divergent susceptibility in its corresponding channel. Moreover, the strength of the divergence is the coupling constant times a channel- and S-independent factor. Therefore, all of the coupling constants are directly comparable, and the most negative coupling constant should produce the strongest tendency towards order. Since there is no spontaneous symmetry breaking of a continuous symmetry in one dimension, we expect that there are significant corrections to the mean field picture. First, decoupling of the order parameters should not persist past second order. Second, we expect long-range, mean-field order to be corrected to quasi-long-range order. As a heuristic guide, then, we expect that the channel with the most negative coupling constant will have quasi-long-range order and that other channels will not.
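The coefficient of the logarithmic divergence can be checked numerically by reading the displayed expression as χ_CDW = (k|g|/π) log| (δq)²/(4Λ² - (δq)²) · (4k_F - 2Λ - δq)/(4k_F + 2Λ - δq) | and similarly for χ_SC; the parameter values below (k = |g| = Λ = k_F = 1) are illustrative only and not from the text:

```python
import math

k, g, Lam, kF = 1.0, 1.0, 1.0, 1.0   # illustrative values; g stands for |g_S^E| (or |g_S^C|)

def chi_cdw(dq):
    # chi_CDW(q, w=0) with dq = q + 2 k_F, as read off the displayed formula
    arg = (dq ** 2 / (4 * Lam ** 2 - dq ** 2)) * \
          ((4 * kF - 2 * Lam - dq) / (4 * kF + 2 * Lam - dq))
    return (k * g / math.pi) * math.log(abs(arg))

def chi_sc(q):
    # chi_SC(q, w=0), Cooper channel
    arg = q ** 2 * (4 * kF + q - 2 * Lam) / ((4 * Lam ** 2 - q ** 2) * (4 * kF + q + 2 * Lam))
    return (k * g / math.pi) * math.log(abs(arg))

# near the divergence, the slope of chi against log(momentum) approaches 2 k g / pi
q1, q2 = 1e-6, 1e-3
slope_cdw = (chi_cdw(q1) - chi_cdw(q2)) / (math.log(q1) - math.log(q2))
slope_sc = (chi_sc(q1) - chi_sc(q2)) / (math.log(q1) - math.log(q2))
```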
Constraining stellar mass black hole mergers in AGN disks detectable with LIGO D.J.Rosen December 30, 2023 ============================================================================== We consider learning a sequence classifier without labeled data by using sequential output statistics. The problem is highly valuable since obtaining labels in training data is often costly, while the sequential output statistics (e.g., language models) could be obtained independently of input data and thus with low or no cost. To address the problem, we propose an unsupervised learning cost function and study its properties. We show that, compared to earlier works, it is less inclined to be stuck in trivial solutions and avoids the need for a strong generative model. Although it is harder to optimize in its functional form, a stochastic primal-dual gradient method is developed to effectively solve the problem. Experiment results on real-world datasets demonstrate that the new unsupervised learning method gives drastically lower errors than other baseline methods. Specifically, it reaches test errors about twice those obtained by fully supervised learning. § INTRODUCTION Unsupervised learning is one of the most challenging problems in machine learning. It is often formulated as the modeling of how the world works without requiring a huge amount of human labeling effort, e.g. <cit.>. To reach this grand goal, it is necessary to first solve a sub-goal of unsupervised learning with high practical value; that is, learning to predict output labels from input data without requiring costly labeled data. Toward this end, we study in this paper the learning of a sequence classifier without labels by using sequential output statistics.
The problem is highly valuable since the sequential output statistics, such as language models, could be obtained independently of the input data and thus with no labeling cost. The problem we consider here is different from most studies on unsupervised learning, which concern automatic discovery of inherent regularities of the input data to learn their representations <cit.>. When these methods are applied in prediction tasks, either the learned representations are used as feature vectors <cit.> or the learned unsupervised models are used to initialize a supervised learning algorithm <cit.>. In both ways, the above unsupervised methods played an auxiliary role in helping supervised learning when applied to prediction tasks. Recently, various solutions have been proposed to address the input-to-output prediction problem without using labeled training data, though without demonstrated success <cit.>. Similar to this work, the authors in <cit.> proposed an unsupervised cost that also exploits the sequence prior of the output samples to train classifiers. The power of such a strong prior in the form of language models in unsupervised learning was also demonstrated in earlier studies in <cit.>. However, these earlier methods did not perform well in practical prediction tasks with real-world data without using additional strong generative models. Possible reasons are inappropriately formulated cost functions and inappropriate choices of optimization methods. For example, it was shown in <cit.> that optimizing the highly non-convex unsupervised cost function could easily get stuck in trivial solutions, although adding a special regularization mitigated the problem somewhat. The solution provided in this paper fundamentally improves these prior works in <cit.> in the following aspects.
First, we propose a novel cost function for unsupervised learning, and find that it has a desired coverage-seeking property that makes the learning algorithm less inclined to be stuck in trivial solutions than the cost function in <cit.>. Second, we develop a special empirical formulation of this cost function that avoids the need for a strong generative model as in <cit.>. Third, although the proposed cost function is more difficult to optimize in its functional form, we develop a stochastic primal-dual gradient (SPDG) algorithm to effectively solve the problem. Our analysis of SPDG demonstrates how it is able to reduce the high barriers in the cost function by transforming it into a primal-dual domain. Finally and most importantly, we demonstrate that the new cost function and the associated SPDG optimization algorithm work well in two real-world classification tasks. In the rest of the paper, we proceed to demonstrate these points and discuss related works along the way. § EMPIRICAL-ODM: AN UNSUPERVISED LEARNING COST FOR SEQUENCE CLASSIFIERS
In this section, we extend the earlier work of <cit.> and propose an unsupervised learning cost named Empirical Output Distribution Match (Empirical-ODM) for training classifiers without labeled data. We first formulate the unsupervised learning problem with sequential output structures. Then, we introduce the Empirical-ODM cost and discuss its important properties that are closely related to unsupervised learning. §.§ Problem formulation
We consider the problem of learning a sequence classifier that predicts an output sequence (y_1,…,y_T_0) from an input sequence (x_1,…,x_T_0) without using labeled data, where T_0 denotes the length of the sequence. Specifically, the learning algorithm does not have access to a labeled training set D_XY ≜ {(x_1^n,…,x_T_n^n), (y_1^n,…,y_T_n^n): n=1,…,M}, where T_n denotes the length of the n-th sequence.
Instead, what is available is a collection of input sequences, denoted as D_X ≜ {(x_1^n,…,x_T_n^n): n=1,…,M}. In addition, we assume that the sequential output statistics (or sequence prior), in the form of an N-gram probability, are available: p_(i_1,…,i_N) ≜ p_(y^n_t-N+1=i_1,…,y^n_t=i_N), where i_1,…,i_N ∈{1,…,C} and the subscript “LM” stands for language model. Our objective is to train the sequence classifier by just using D_X and p_(·). Note that the sequence prior p_(·), in the form of language models, is a type of structure commonly found in natural language data, which can be learned from a large amount of text data freely available without labeling cost. For example, in optical character recognition (OCR) tasks, y_t^n could be an English character and x_t^n is the input image containing this character. We can estimate an N-gram character-level language model p_(·) from a separate text corpus. Therefore, our learning algorithm will work in a fully unsupervised manner, without any human labeling cost. In our experiment section, we will demonstrate the effectiveness of our method on such a real OCR task. Other potential applications include speech recognition, machine translation, and image/video captioning. In this paper, we focus on the sequence classifier in the form of p_θ(y_t^n|x_t^n); that is, it computes the posterior probability p_θ(y_t^n|x_t^n) only based on the current input sample x_t^n in the sequence. Furthermore, we restrict our choice of p_θ(y_t^n|x_t^n) to be linear classifiers[p_θ(y_t^n = i | x_t^n) = e^γ w_i^T x_t^n /∑_j=1^C e^γ w_j^T x_t^n, where the model parameter is θ ≜ { w_i ∈R^d, i=1,…, C}.] and focus our attention on designing and understanding unsupervised learning costs and methods for label-free prediction. In fact, as we will show in later sections, even with linear models, the unsupervised learning problem is still highly nontrivial and the cost function is also highly non-convex.
And we emphasize that developing a successful unsupervised learning approach for linear classifiers, as we do in this paper, provides key insights and is an important first step towards more advanced nonlinear models (e.g., deep neural networks). We expect that, in future work, the insights obtained here could help us generalize our techniques to nonlinear models. A recent work that shares the same motivations as our work is <cit.>, which also recognizes the high cost of obtaining labeled data and seeks label-free prediction. Different from our setting, they exploit domain knowledge from laws of physics in computer vision applications, whereas our approach exploits sequential statistics in the natural language outputs. Finally, our problem is fundamentally different from the sequence transduction method in <cit.>, although it also exploits language models for sequence prediction. Specifically, the method in <cit.> is a fully supervised method in that it requires supervision at the sequence level; that is, for each input sequence, a corresponding output sequence (of possibly different length) is provided as a label. The use of a language model in <cit.> only serves the purpose of regularization in the sequence-level supervised learning. In stark contrast, the unsupervised learning we propose does not require supervision at any level, including specifically the sequence level; we do not need the sequence labels but only the prior distribution p_(·) of the output sequences. §.§ The Empirical-ODM
We now introduce an unsupervised learning cost that exploits the sequence structure in p_(·). It is mainly inspired by the approach to breaking the Caesar cipher, one of the simplest forms of encryption <cit.>. The Caesar cipher is a substitution cipher where each letter in the original message is replaced with a letter corresponding to a certain number of letters up or down in the alphabet.
For example, the letter “D” is replaced by the letter “A”, the letter “E” is replaced by the letter “B”, and so on. In this way, the original message that was readable ends up being less understandable. The amount of this shifting is also known to the intended receiver of the message, who can decode the message by shifting back each letter in the encrypted message. However, the Caesar cipher can also be broken by an unintended receiver (who does not know the shift) by analyzing the frequencies of the letters in the encrypted messages and matching them up with the letter distribution of the original text <cit.>. More formally, let y_t=f(x_t) denote a function that maps each encrypted letter x_t into an original letter y_t. And let p_(i) ≜ p_(y_t=i) denote the prior letter distribution of the original message, estimated from a regular text corpus. When f(·) is constructed in a way that all the mapped letters {y_t: y_t = f(x_t), t=1,…,T} have the same distribution as the prior p_(i), it breaks the Caesar cipher and recovers the original letters at the mapping outputs. Inspired by the above approach, the posterior probability p_θ(y_t^n|x_t^n) in our classification problem can be interpreted as a stochastic mapping, which maps each input vector x_t^n (the “encrypted letter”) into an output vector y_t^n (the “original letter”) with probability p_θ(y_t^n|x_t^n). Then, in a samplewise manner, each input sequence (x_1^n, …, x_T_n^n) is stochastically mapped into an output sequence (y_1^n, …, y_T_n^n). We move a step further than the above approach by requiring that the distribution of the N-grams among all the mapped output sequences is close to the prior N-gram distribution p_(i_1,…,i_N).
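The frequency-matching attack sketched above is easy to make concrete: estimate letter frequencies from a separate plain-text corpus (playing the role of the language-model prior) and pick the shift whose decryption best matches them under the cross entropy. The snippet below is a toy illustration with made-up text, not the paper's experimental setup:

```python
from collections import Counter
import math
import string

ALPHA = string.ascii_lowercase

def shift(text, s):
    # Caesar-shift every lowercase letter by s positions (other characters pass through)
    return "".join(ALPHA[(ALPHA.index(c) + s) % 26] if c in ALPHA else c for c in text)

def freqs(text):
    counts = Counter(c for c in text if c in ALPHA)
    total = sum(counts.values())
    return {c: counts[c] / total for c in ALPHA}

# the "prior": letter frequencies estimated from a separate plain-text corpus
corpus = ("the quick brown fox jumps over the lazy dog and then the dog "
          "sleeps while the fox runs over the hill to find more food")
p_lm = freqs(corpus)

def cross_entropy(p, q, eps=1e-9):
    # -sum_i p(i) log q(i); a small eps avoids log(0) for unseen letters
    return -sum(p[c] * math.log(q[c] + eps) for c in ALPHA)

message = "we attack the castle at dawn and retreat before the sun rises"
cipher = shift(message, 3)        # Caesar-encrypt with shift 3

# try every shift; keep the one whose decrypted letter distribution best matches p_lm
best = min(range(26), key=lambda s: cross_entropy(p_lm, freqs(shift(cipher, -s))))
```

Matching the decrypted letter distribution to the prior is exactly the unigram version of the output-distribution-matching idea developed next.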
With this motivation, we propose to learn the classifier p_θ(y_t|x_t) by minimizing the negative cross entropy between the prior distribution and the expected N-gram frequency of the output sequences: min_θ {J(θ) ≜ -∑_i_1,…,i_N p_(i_1,…,i_N) lnp_θ(i_1,…,i_N) }, where p_θ(i_1,…,i_N) denotes the expected N-gram frequency of all the output sequences. In Appendix <ref> of the supplementary material, we derive its expression as p_θ(i_1,…,i_N) ≜ 1/T∑_n=1^M ∑_t=1^T_n∏_k=0^N-1 p_θ(y_t-k^n = i_N-k| x_t-k^n), where T ≜ T_1 + ⋯ + T_M is the total number of samples in all sequences. Note that minimizing the negative cross entropy in (<ref>) is also equivalent to minimizing the Kullback-Leibler (KL) divergence between the two distributions, since they only differ by a constant term, ∑ p_ ln p_. Therefore, the cost function (<ref>) seeks to estimate θ by matching the two output distributions, where the expected N-gram distribution in (<ref>) is an empirical average over all the samples in the training set. For this reason, we name the cost (<ref>) the Empirical Output Distribution Match (Empirical-ODM) cost. In <cit.>, the authors proposed to minimize an output distribution match (ODM) cost, defined as the KL-divergence between the prior output distribution and the marginalized output distribution, D(p_(y) || p_θ(y)), where p_θ(y) ≜ ∫ p_θ(y|x) p(x) dx. However, evaluating p_θ(y) requires integrating over the input space using a generative model p(x). Due to the lack of such a generative model, they were not able to optimize this proposed ODM cost. Instead, alternative approaches such as dual autoencoders and GANs were proposed as heuristics. Their results were not successful without using a few labeled data. Our proposed Empirical-ODM cost is different from the ODM cost in <cit.> in three key aspects. (i) We do not need any labeled data for training.
(ii) We exploit the sequence structure of the output statistics, i.e., in our case y = (y_1,…,y_N) (N-gram), whereas in <cit.> y = y_t (unigram, i.e., no sequence structure). This is crucial in developing a working unsupervised learning algorithm. The change from unigram to N-gram allows us to explicitly exploit the sequence structures at the output, which turns the technique from non-working into working (see Table <ref> in Section <ref>). It might also explain why the method in <cit.> failed, as it does not exploit the sequence structure. (iii) We replace the marginalized distribution p_θ(y) by the expected N-gram frequency in (<ref>). This is critical in that it allows us to directly minimize the divergence between two output distributions without the need for a generative model, which <cit.> could not do. In fact, we can further show that p_θ(i_1,…,i_N) is an empirical approximation of p_θ(y) with y=(y_1,…,y_N) (see Appendix <ref> of the supplementary material). In this way, our cost (<ref>) can be understood as an N-gram and empirical version of the ODM cost except for an additive constant, i.e., y is replaced by y=(y_1,…,y_N) and p_θ(y) is replaced by its empirical approximation. §.§ Coverage-seeking versus mode-seeking
We now discuss an important property of the proposed Empirical-ODM cost (<ref>) by comparing it with the cost proposed in <cit.>. We show that the Empirical-ODM cost has a coverage-seeking property, which makes it more suitable for unsupervised learning than the mode-seeking cost in <cit.>. In <cit.>, the authors proposed the expected negative log-likelihood as the unsupervised learning cost function that exploits the output sequential statistics. The intuition was to maximize the aggregated log-likelihood of all the output sequences assumed to be generated by the stochastic mapping p_θ(y_t^n|x_t^n).
We show in Appendix <ref> of the supplementary material that their cost is equivalent to -∑_i_1,…, i_N-1∑_i_N p_θ(i_1,…,i_N) ln p_(i_N | i_N-1, …, i_1)where p_(i_N | i_N-1, …, i_1)p(y_t^n=i_N | y_t-1^n=i_N-1,…, y_t-N+1^n=i_1), and the summations are over all possible values of i_1,…,i_N ∈{1,…,C}. In contrast, we can rewrite our cost (<ref>) as - ∑_i_1,…, i_N-1 p_(i_1, …, i_N-1) ·∑_i_N p_(i_N|i_N-1,…,i_1) lnp_θ(i_1,…,i_N)where we used the chain rule of conditional probabilities. Note that both costs (<ref>) and (<ref>) are in a cross entropy form. However, a key difference is that the positions of the distributions p_θ(·) and p_(·) are swapped.We show that the cost in the form of (<ref>) proposed in <cit.> is a mode-seeking divergence between two distributions, while by swapping p_θ(·) and p_(·), our cost in (<ref>) becomes a coverage-seeking divergence (see <cit.> for a detailed discussion on divergences with these two different behaviors). To understand this, we consider the following two situations: * If p_(i_N | i_N-1, …, i_1) → 0 and p_θ(i_1,…,i_N)>0 for a certain (i_1,…, i_N), the cross entropy in (<ref>) goes to +∞ and the cross entropy in (<ref>) approaches zero. * If p_(i_N | i_N-1, …, i_1) >0 and p_θ(i_1,…,i_N) → 0 for a certain (i_1,…,i_N), the cross entropy in (<ref>) approaches zero and the cross entropy in (<ref>) goes to +∞. Therefore, the cost function (<ref>) will heavily penalize the classifier if it predicts an output that is believed to be less probable by the prior distribution p_(·), and it will not penalize the classifier when it does not predict an output that p_(·) believes to be probable. That is, the classifier is encouraged to predict a single output mode with high probability in p_(·), a behavior called “mode-seeking” in <cit.>. This probably explains the phenomena observed in <cit.>: the training process easily converges to a trivial solution of predicting the same output that has the largest probability in p_(·). 
In contrast, the cost (<ref>) will heavily penalize the classifier if it does not predict outputs for which p_(·) is positive, and will penalize it less if it predicts outputs for which p_(·) is zero. That is, this cost will encourage p_θ(y|x) to cover as much of p_(·) as possible, a behavior called “coverage-seeking” in <cit.>. Therefore, training the classifier using (<ref>) will make it less inclined to learn trivial solutions than the cost in <cit.>, since such solutions are heavily penalized. We will verify this fact in our experiment section <ref>. In summary, our proposed cost (<ref>) is more suitable for unsupervised learning than that in <cit.>. §.§ The difficulties of optimizing J(θ)
However, there are two main challenges in optimizing the Empirical-ODM cost J(θ) in (<ref>). The first one is that the sample average (over the entire training data set) in the expression of p_θ(·) (see (<ref>)) is inside the logarithmic loss, which is different from traditional machine learning problems, where the average is outside the loss functions (e.g., ∑_t f_t(θ)). This functional form prevents us from applying stochastic gradient descent (SGD) to minimize (<ref>), as the stochastic gradients would be intrinsically biased (see Appendix <ref> for a detailed discussion and Section <ref> for the experiment results). The second challenge is that the cost function J(θ) is highly non-convex even with linear classifiers. To see this, we visualize the profile of the cost function J(θ) (restricted to a two-dimensional sub-space) around the supervised solution in Figure <ref>.[The approach to visualizing the profile is explained in more detail in Appendix <ref>. More slices and a video of the profiles from many angles can be found in the supplementary material.] We observe that there are local optimal solutions and there are high barriers between the local and global optimal solutions.
Therefore, besides the difficulty of having the sample average inside the logarithmic loss, minimizing this cost function directly will be difficult, since crossing the high barriers to reach the global optimal solution would be hard if not properly initialized. § THE STOCHASTIC PRIMAL-DUAL GRADIENT (SPDG) ALGORITHM
To address the first difficulty in Section <ref>, we transform the original cost (<ref>) into an equivalent min-max problem in order to bring the sample average out of the logarithmic loss. Then, we could obtain unbiased stochastic gradients to solve the problem. To this end, we first introduce the concept of convex conjugate functions. For a given convex function f(u), its convex conjugate function f^⋆(ν) is defined as f^⋆(ν) ≜ sup_u (ν^T u - f(u)) <cit.>, where u and ν are called primal and dual variables, respectively. For a scalar function f(u) = -ln u, its conjugate function can be calculated as f^⋆(ν) = -1 - ln(-ν) with ν<0. Furthermore, it holds that f(u) = sup_ν( u^T ν - f^⋆(ν) ), by which we have -ln u = max_ν ( u ν + 1 + ln(-ν) ).[The supremum is attainable and is thus replaced by maximum.] Substituting it into (<ref>), the original minimization problem becomes the following equivalent min-max problem: min_θ max_{ν_i_1,…,i_N<0}{L(θ, V) ≜ 1/T∑_n=1^M∑_t=1^T_n L_t^n(θ, V) + ∑_i_1,…,i_N p_(i_1,…,i_N) ln(-ν_i_1,…,i_N) }, where V ≜ {ν_i_1,…,i_N} is a collection of all the dual variables ν_i_1,…,i_N, and L_t^n(θ, V) is the t-th component function in the n-th sequence, defined as L_t^n(θ,V) ≜ ∑_i_1,…,i_N p_(i_1,…,i_N) ν_i_1,…,i_N∏_k=0^N-1 p_θ(y_t-k^n = i_N-k | x_t-k^n). In the equivalent min-max problem (<ref>), we find the optimal solution (θ^⋆, V^⋆) by minimizing L with respect to the primal variable θ and maximizing L with respect to the dual variable V. The obtained optimal solution to (<ref>), (θ^⋆, V^⋆), is called the saddle point of L <cit.>. Once it is obtained, we only keep θ^⋆, which is also the optimal solution to (<ref>) and thus the model parameter.
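The scalar identity -ln u = max_{ν<0}(uν + 1 + ln(-ν)) used above is easy to verify directly: setting the derivative u + 1/ν to zero gives the maximizer ν^⋆ = -1/u, at which the bracket evaluates to -ln u. A minimal numeric check:

```python
import math

def inner(u, nu):
    # the bracket u*nu + 1 + ln(-nu), defined for nu < 0
    return u * nu + 1 + math.log(-nu)

u = 2.5
nu_star = -1.0 / u          # stationary point of the concave bracket in nu

# at nu_star the bracket equals -ln(u); nearby nu give strictly smaller values
val = inner(u, nu_star)
```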
We further note that the equivalent min-max problem (<ref>) is now in a form that sums over T=T_1+⋯+T_M component functions L_t^n(θ, V). Therefore, the empirical average has been brought out of the logarithmic loss and we are ready to apply stochastic gradient methods. Specifically, we minimize L with respect to the primal variable θ by stochastic gradient descent and maximize L with respect to the dual variable V by stochastic gradient ascent. Accordingly, we name the algorithm the stochastic primal-dual gradient (SPDG) method (see its details in Algorithm <ref>). We implement the SPDG algorithm in TensorFlow, which automatically computes the stochastic gradients.[The code will be released soon.] Finally, the constraint on the dual variables ν_i_1,…,i_N is automatically enforced by the inherent log-barrier, ln(-ν_i_1,…,i_N), in (<ref>) <cit.>, so we do not need a separate method to enforce it.

We now show that the above min-max (primal-dual) reformulation also alleviates the second difficulty discussed in Section <ref>. As in the case of J(θ), we examine the profile of L(θ,V) in (<ref>) (restricted to a two-dimensional sub-space) around the optimal (supervised) solution in Figure <ref> (see Appendix <ref> for the visualization details). Comparing Figure <ref> to Figure <ref>, we observe that the profile of L(θ,V) is smoother than that of J(θ) and the barrier is significantly lower. To further compare J(θ) and L(θ,V), we plot in Figure <ref> the values of J(θ) and L(θ, V) along the same line θ^⋆ + λ_p (θ_1-θ^⋆) for different λ_p. It shows that the barrier of L(θ, V) along the primal direction is lower than that of J(θ). These observations imply that the reformulated min-max problem (<ref>) is better conditioned than the original problem (<ref>), which further justifies the use of the SPDG method.
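The primal-descent/dual-ascent iteration at the heart of SPDG can be sketched on a toy saddle problem. Note this is not the Empirical-ODM objective; L below is a made-up strongly convex-concave function chosen so the saddle point sits at the origin.

```python
def spdg_step(theta, nu, lr_p, lr_d):
    # L(theta, nu) = theta**2 + 2*theta*nu - nu**2: convex in theta, concave in nu
    g_theta = 2 * theta + 2 * nu   # dL/dtheta
    g_nu = 2 * theta - 2 * nu      # dL/dnu
    # descend on the primal variable, ascend on the dual variable
    return theta - lr_p * g_theta, nu + lr_d * g_nu

theta, nu = 1.0, -1.0
for _ in range(2000):
    theta, nu = spdg_step(theta, nu, lr_p=0.05, lr_d=0.05)
print(theta, nu)  # converges to the saddle point (0, 0)
```

In the actual algorithm the gradients are stochastic (one component L_t^n per update) and are computed automatically by TensorFlow.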
§ EXPERIMENTS

§.§ Experimental setup

We evaluate our unsupervised learning scheme described in the earlier sections on two classification tasks: unsupervised character-level OCR and unsupervised English spelling correction (Spell-Corr). In both tasks, no labels are provided during training; hence, both are unsupervised.

For the OCR task, we obtain our dataset from the public UWIII English Document Image Database <cit.>, which contains images of each line of text with the corresponding ground truth. We first use Tesseract <cit.> to segment the image of each line of text into character tiles and assign one character to each tile. We verify the segmentation result by training a simple neural network classifier on the segmented results, achieving a 0.9% error rate on the test set. Then, we select sentence segments that are longer than 100 characters and contain only lowercase English characters and common punctuation (space, comma, and period). As a result, we have a vocabulary of size 29, and we obtain 1,175 sentence segments comprising 153,221 characters for our OCR task. To represent images, we extract VGG19 features with dim=4096 and project them into 200-dimensional vectors using Principal Component Analysis. We train the language models (LM) p_(·), which provide the required output sequence statistics, from both in-domain and out-of-domain data sources. The out-of-domain data sources are completely different databases, namely three different language partitions (CNA, NYT, XIN) of the English Gigaword database <cit.>.

In the Spell-Corr task, we learn to correct the spelling of a mis-spelled text. From the AFP partition of the Gigaword database, we select 500 sentence segments for our Spell-Corr dataset. We select sentences that are longer than 100 characters and contain only English characters and common punctuation, resulting in a total of 83,567 characters. The mis-spelled texts are generated by simulated character substitutions and are treated as our inputs.
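In both tasks, the sequential output statistics p_(·) are N-gram frequencies estimated from a text corpus by simple counting. A minimal character-bigram sketch (the toy string below stands in for the Gigaword/UWIII text; this is not the actual LM-training code):

```python
from collections import Counter

def bigram_lm(text):
    # empirical bigram distribution over adjacent character pairs
    pairs = Counter(zip(text, text[1:]))
    total = sum(pairs.values())
    return {pair: count / total for pair, count in pairs.items()}

p_lm = bigram_lm("the cat sat on the mat.")
print(p_lm[("t", "h")])  # "th" occurs 2 times out of 22 bigrams
```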
The objective of this task is to recover the original text.[We gratefully acknowledge private discussions with Prof. Hermann Ney on this task and his work on using likelihood as the objective function for unsupervised training.]

§.§ Results: Comparing optimization algorithms

In the first set of experiments, we aim to evaluate the effectiveness of the SPDG method described in Section <ref>, which is designed to optimize the Empirical-ODM cost of Section <ref>. The analysis provided in Sections <ref> and <ref> sheds light on why SPDG is superior to the method in <cit.> and to the standard stochastic gradient descent (SGD) method. The coverage-seeking behavior of the proposed Empirical-ODM cost helps avoid trivial solutions, and the simultaneous optimization of primal and dual variables reduces the barriers in the highly non-convex profile of the cost function. Furthermore, we do not include the methods from <cit.> because their approaches could not achieve satisfactory results without at least a small amount of labeled data, while we consider only the fully unsupervised learning setting. In addition, the methods in <cit.> do not optimize the ODM cost and do not exploit the output sequential statistics.

Table 1 provides strong experimental evidence for the substantially greater effectiveness of the primal-dual method over SGD and the method in <cit.> on both tasks. All these results are obtained by training the models until convergence. Let us examine the results on the OCR task in detail. First, SPDG on the unsupervised cost function achieves a 9.21% error rate, much lower than the error rate of any of the mini-batch SGD runs, where the size of the mini-batches ranges from 10 to 10,000. Note that larger mini-batch sizes produce lower errors here because the stochastic gradient becomes closer to the full-batch gradient and thus has lower bias.
On the other hand, when the mini-batch size is as small as 10, the high error rate of 83.09% is close to that of a majority-rule guess, i.e., always predicting the most frequent character (space), which appears in 25,499 of the 153,221 positions of the training set, giving an error rate of about 83.37%. Furthermore, the method from <cit.> does not perform well no matter how we tune the hyperparameters of the generative regularization. Finally, and perhaps most interestingly, with no labels provided during training, the classification errors produced by our method are only about twice those of supervised learning (4.63%, shown in Table 1). This clearly demonstrates that the unsupervised learning scheme proposed in this paper is an effective one. For the Spelling Correction data set (see the third column in Table <ref>), we observe results consistent with those on the OCR data set.

§.§ Results: Comparing orders of language modeling

In the second set of experiments, we examine to what extent the use of sequential statistics (e.g., 2-gram and 3-gram LMs) improves over the uni-gram LM (no sequential information) in unsupervised learning. The unsupervised prediction results are shown in Table <ref>, using different data sources to estimate the N-gram LM parameters. Consistently across all four ways of estimating reliable N-gram LMs, we observe significantly lower error rates when the unsupervised learning exploits a 2-gram or 3-gram LM as sequential statistics, compared with exploiting a prior with no sequential statistics (i.e., 1-gram). In three of the four cases, exploiting a 3-gram LM gives better results than a 2-gram LM.
Furthermore, the error rate obtained with a 3-gram LM estimated from out-of-domain output character data (10.17% in Table <ref>) is comparable to that obtained with in-domain output character data (9.59% in Table <ref>), indicating that the effectiveness of the unsupervised learning paradigm presented in this paper is robust to the quality of the LM acting as the sequential prior.

§ CONCLUSIONS AND FUTURE WORK

In this paper, we study the problem of learning a sequence classifier without the need for labeled training data. The practical benefit of such unsupervised learning is tremendous. For example, in large-scale speech recognition systems, the currently dominant supervised learning methods typically require a few thousand hours of training data, where each utterance in acoustic form needs to be labeled by humans. Although there are millions of hours of natural speech data available for training, labeling all of it for supervised learning is infeasible. To make effective use of such huge amounts of acoustic data, the practical unsupervised learning approach discussed in this paper would be called for. Other potential applications, such as machine translation and image and video captioning, could also benefit from our paradigm. This is mainly because of their common natural-language output structure, from which we can exploit the sequential statistics to learn the classifier without labels. Furthermore, our proposed Empirical-ODM cost function significantly improves over the one in <cit.> by emphasizing the coverage-seeking behavior. Although the new cost function has a functional form that is more difficult to optimize, a novel SPDG algorithm is developed to effectively address the problem. An analysis of the profiles of the cost functions sheds light on why SPDG works well and why previous methods failed.
Finally, we demonstrate on two datasets that our unsupervised learning method is highly effective, producing error rates only about twice those of fully supervised learning, which no previous unsupervised learning method could achieve without additional steps of supervised learning. While the current work is restricted to linear classifiers, we intend to generalize the approach to nonlinear models (e.g., deep neural nets <cit.>) in future work. We also plan to extend our current method from exploiting N-gram LMs to exploiting state-of-the-art neural LMs.

§ SUPPLEMENTARY MATERIAL FOR “UNSUPERVISED SEQUENCE CLASSIFICATION USING SEQUENTIAL OUTPUT STATISTICS”

§ DERIVATION OF THE EQUIVALENT FORM OF THE COST IN <CIT.>

The cost function in <cit.> can be expressed as: 𝔼[- ∑_n=1^M ln p_(y_1^n,…,y_T_n^n) | x_1^n, …, x_T_n^n ]. We now show how to derive (<ref>) from the above expression. In the N-gram case, the language model can be written as p_(y_1^n,…,y_T_n^n) = ∏_t=1^T_n p_(y_t^n | y_t-1^n,…, y_t-N+1^n). Substituting this expression into the cost (<ref>), we obtain 𝔼[- ∑_n=1^M ln p_(y_1^n,…,y_T_n^n) | x_1^n, …, x_T_n^n ] = -∑_n=1^M ∑_(y_1^n,…,y_T_n^n)∏_t=1^T_n p_θ(y_t^n | x_t^n) ln p_(y_1^n,…,y_T_n^n) = -∑_n=1^M ∑_(y_1^n,…,y_T_n^n) p_θ(y_1^n | x_1^n) ⋯ p_θ(y_T_n^n | x_T_n^n) ×∑_t=1^T_nln p_ (y_t^n | y_t-1^n, …, y_t-N+1^n) = - ∑_n=1^M ∑_t=1^T_n∑_(y_1^n,…,y_T_n^n) p_θ(y_1^n | x_1^n) ⋯ p_θ(y_T_n^n | x_T_n^n) ×ln p_ (y_t^n | y_t-1^n, …, y_t-N+1^n) = - ∑_n=1^M ∑_t=1^T_n∑_(y_t^n,…,y_t-N+1^n) p_θ(y_t^n | x_t^n) ⋯ p_θ(y_t-N+1^n|x_t-N+1^n) ×ln p_ (y_t^n | y_t-1^n,…,y_t-N+1^n) ×∑_y_1^n,…,y_t-N^n p_θ(y_1^n|x_1^n) ⋯ p_θ(y_t-N^n | x_t-N^n) ×∑_y_t+1^n,…,y_T_n^n p_θ(y_t+1^n | x_t+1^n) ⋯ p_θ(y_T_n^n | x_T_n^n) = - ∑_n=1^M ∑_t=1^T_n∑_(y_t^n,…,y_t-N+1^n) p_θ(y_t^n | x_t^n) ⋯ p_θ(y_t-N+1^n|x_t-N+1^n) ×ln p_ (y_t^n | y_t-1^n,…,y_t-N+1^n) = - ∑_n=1^M ∑_t=1^T_n∑_i_1,…, i_N p_θ(y_t^n = i_N | x_t^n) ⋯ p_θ(y_t-N+1^n=i_1|x_t-N+1^n) ×ln p_ (y_t^n = i_N | y_t-1^n =
i_N-1,…,y_t-N+1^n=i_1) = - ∑_n=1^M ∑_t=1^T_n∑_i_1,…, i_N p_θ(y_t^n = i_N | x_t^n) ⋯ p_θ(y_t-N+1^n=i_1|x_t-N+1^n) ×ln p_ (i_N | i_N-1,…,i_1) = -∑_i_1,…, i_Nln p_ (i_N | i_N-1,…,i_1) ×∑_n=1^M ∑_t=1^T_n p_θ(y_t^n = i_N | x_t^n) ⋯ p_θ(y_t-N+1^n=i_1|x_t-N+1^n) = -T ∑_i_1,…, i_Nln p_ (i_N | i_N-1,…,i_1) ×1/T∑_n=1^M ∑_t=1^T_n p_θ(y_t^n = i_N | x_t^n) ⋯ p_θ(y_t-N+1^n=i_1|x_t-N+1^n).

§ PROPERTIES OF P_Θ(I_1,…,I_N)

§.§ p_θ(i_1,…,i_N) is the expected N-gram frequency of all the output sequences

In this section, we formally derive the following relation, which interprets p_θ(i_1,…,i_N) as the expected frequency of (i_1,…,i_N) in the output sequences: 𝔼_∏_n=1^M ∏_t=1^T_n p_θ(y_t^n|x_t^n)[ n(i_1,…,i_N)/T] = p_θ(i_N,…,i_1), where T ≜ T_1 + ⋯ + T_M. Let (x_1^n,…,x_T_n^n) be the n-th input training sequence, and let (y_1^n,…,y_T_n^n) be a sequence generated according to the posterior ∏_t=1^T_n p_θ(y_t^n|x_t^n) (i.e., the classifier). Furthermore, let I_t^n(i_1,…,i_N) denote the indicator function of the event {y_t-N+1^n=i_1,…,y_t^n=i_N}, and let n(i_1,…,i_N) denote the number of occurrences of the N-gram (i_1,…,i_N) in all the output sequences {(y_1^n,…,y_T_n^n): n=1,…,M}. Then we have the relation n(i_1,…,i_N) = ∑_n=1^M ∑_t=1^T_n I_t^n(i_1,…,i_N). Clearly, n(i_1,…,i_N) is a function of {(y_1^n,…,y_T_n^n): n=1,…,M} and is thus a random variable. Taking the conditional expectation of the above expression with respect to ∏_n=1^M∏_t=1^T_n p_θ(y_t^n|x_t^n), we obtain 𝔼_∏_n=1^M∏_t=1^T_n p_θ(y_t^n|x_t^n) [n(i_1,…,i_N)] = ∑_n=1^M ∑_t=1^T_n 𝔼_∏_n=1^M∏_t=1^T_n p_θ(y_t^n|x_t^n)[ I_t^n(i_1,…,i_N) ] = ∑_n=1^M ∑_t=1^T_n 𝔼_∏_t=1^T_n p_θ(y_t^n|x_t^n)[ I_t^n(i_1,…,i_N) ] (a)= ∑_n=1^M∑_t=1^T_n∏_k=0^N-1 p_θ(y_t-k^n = i_N-k| x_t-k^n), where step (a) uses the fact that the expectation of the indicator function of an event equals the probability of the event. Dividing both sides by T, the right-hand side of the above expression becomes p_θ(i_1,…,i_N), and we conclude the proof.
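The identity above can be verified by brute-force enumeration on a toy instance. The sketch below uses a single length-3 sequence with binary outputs and N=2, counts bigrams at interior positions, and divides by the number of bigram positions (a simplified boundary convention; the posterior numbers are made up):

```python
from itertools import product

# toy posterior p_theta(y_t | x_t) for one length-3 input sequence
post = [{0: 0.7, 1: 0.3}, {0: 0.2, 1: 0.8}, {0: 0.5, 1: 0.5}]

def expected_bigram_freq(i1, i2):
    # E[ n(i1,i2) / (T-1) ]: enumerate all 2^3 output sequences
    total = 0.0
    for ys in product([0, 1], repeat=3):
        prob = post[0][ys[0]] * post[1][ys[1]] * post[2][ys[2]]
        count = sum(1 for t in (1, 2) if (ys[t - 1], ys[t]) == (i1, i2))
        total += prob * count / 2.0
    return total

def marginal_bigram(i1, i2):
    # (1/(T-1)) * sum_t p(y_{t-1}=i1 | x_{t-1}) * p(y_t=i2 | x_t)
    return sum(post[t - 1][i1] * post[t][i2] for t in (1, 2)) / 2.0

for i1, i2 in product([0, 1], repeat=2):
    assert abs(expected_bigram_freq(i1, i2) - marginal_bigram(i1, i2)) < 1e-12
print("expected frequency identity verified")
```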
§.§ p_θ(i_N,…,i_1) is an empirical approximation of the marginal output N-gram probability

First, define the marginal N-gram probability p_θ(i_1,…, i_N) as p_θ(i_1,…, i_N) ≜ p_θ(y_1=i_1,…,y_N=i_N). For simplicity, we consider the case where the input random variables are discrete, taking finitely many values in a set X; then p_θ(i_1,…,i_N) can be written as p_θ(i_1,…,i_N) = ∑_(x_1,…,x_N) ∈X^N∏_k=1^N p_θ(y_k = i_k| x_k) p(x_1,…,x_N). To show that p_θ(i_N,…,i_1) is an empirical approximation of p_θ(i_1,…,i_N), it suffices to show that p_θ(i_1,…,i_N) = ∑_(x_1,…,x_N) ∈X^N∏_k=1^N p_θ(y_k = i_k| x_k) p̂(x_1,…,x_N), where p̂(x_1,…,x_N) is the empirical frequency of the N-tuple (x_1,…,x_N) in the dataset {(x_1^n,…,x_T_n^n): n=1,…,M}. The result follows in a straightforward manner from the definition of p_θ(i_1,…,i_N): p_θ(i_1,…,i_N) = 1/T∑_n=1^M∑_t=1^T_n∏_k=0^N-1 p_θ(y_t-k^n = i_N-k| x_t-k^n) = 1/T∑_(x_1,…,x_N) ∈X^N∏_k=0^N-1 p_θ(y_t-k^n = i_N-k| x_t-k^n=x_N-k) × n(x_1,…,x_N) = ∑_(x_1,…,x_N) ∈X^N∏_k=0^N-1 p_θ(y_t-k^n = i_N-k| x_t-k^n=x_N-k) ×n(x_1,…,x_N)/T, where n(x_1,…,x_N) denotes the number of occurrences of the N-tuple (x_1,…,x_N) in the dataset {(x_1^n,…,x_T_n^n): n=1,…,M}. The second equality simply re-organizes the summation in the first expression according to the value of (x_1,…,x_N), i.e., it accumulates together all the terms inside the double summation with the same value of (x_1,…,x_N). Further note that p_θ(y_t-k^n = i_N-k| x_t-k^n=x_N-k) is independent of t and n for any given values of i_N-k and x_N-k, so that ∏_k=0^N-1 p_θ(y_t-k^n = i_N-k| x_t-k^n=x_N-k) = ∏_k=1^N p_θ(y_k = i_k| x_k). We can then conclude the proof by recognizing that p̂(x_1,…,x_N) = n(x_1,…,x_N)/T.

§ OPTIMIZING EMPIRICAL-ODM BY SGD IS INTRINSICALLY BIASED

In this section, we show that the stochastic gradient of Empirical-ODM is intrinsically biased.
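Before the formal derivation, the root cause can be seen in a generic fact: an average of per-sample ratios is a biased estimate of a ratio of averages, i.e., 𝔼[a/b] ≠ 𝔼[a]/𝔼[b]. A toy Monte Carlo sketch (the distributions below are arbitrary stand-ins for the per-sample numerator and denominator terms of the gradient):

```python
import random

random.seed(0)
samples = [(random.uniform(0.5, 1.5), random.uniform(0.1, 2.0))
           for _ in range(200000)]

mean_a = sum(a for a, _ in samples) / len(samples)
mean_b = sum(b for _, b in samples) / len(samples)
ratio_of_means = mean_a / mean_b                                # the quantity we want
mean_of_ratios = sum(a / b for a, b in samples) / len(samples)  # what naive sampling estimates

print(ratio_of_means)   # about 0.95
print(mean_of_ratios)   # about 1.58: systematically off
```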
To see this, we can express the (full-batch) gradient of J(θ) as ∇_θ J(θ) = -∑_i_1,…,i_N p_(i_1,…,i_N) [ 1/T∑_n=1^M ∑_t=1^T_n∇_θ( ∏_k=0^N-1 p_θ(y_t-k^n = i_N-k | x_t-k^n) ) ] / [ 1/T∑_n=1^M ∑_t=1^T_n∏_k=0^N-1 p_θ(y_t-k^n = i_N-k | x_t-k^n) ]. Note that the gradient expression has sample averages in both the numerator and the denominator. Therefore, the full-batch gradient method is less scalable, as it needs to pass over the entire training set to compute ∇_θ J(θ) at each update. To apply SGD, we may obtain an unbiased estimate by sampling the numerator with a single component while keeping the denominator the same: - ∑_i_1,…,i_N p_(i_1,…,i_N) [ ∇_θ(∏_k=0^N-1 p_θ(y_t-k^n = i_N-k | x_t-k^n) ) ] / [ 1/T∑_n=1^M ∑_t=1^T_n∏_k=0^N-1 p_θ(y_t-k^n = i_N-k | x_t-k^n) ]. However, this implementation is still not scalable, as it needs to average over the entire training set at each update to compute the denominator. On the other hand, if we sample both the numerator and the denominator, i.e., - ∑_i_1,…,i_N p_(i_1,…,i_N) [ ∇_θ(∏_k=0^N-1 p_θ(y_t-k^n = i_N-k | x_t-k^n) ) ] / [ ∏_k=0^N-1 p_θ(y_t-k^n = i_N-k | x_t-k^n) ], then we obtain a biased estimate of the gradient (<ref>). Our experiments in Section <ref> show that this biased SGD does not perform well on the unsupervised learning problem.

§ EXPERIMENT DETAILS

We implement the model with Python 2.7 and TensorFlow 0.12. When training the models on both the OCR and Spell-Corr tasks, we initialize the linear model's parameters (the primal variables) with w_init=1/dim(x) and γ=10, where dim(x) is the dimension of the input. We initialize the dual parameters V_init with uniformly distributed random variables v ∼ U(-1,0). We set the learning rate for the primal parameters to μ_θ=10^-6 and for the dual parameters to μ_v=10^-4. We use the Adam optimizer to train our model. The OCR test set is also generated from the UWIII database, avoiding overlap with the training set; its size is 15,000.
Furthermore, the test set of Spell-Corr also has size 15,000 and does not overlap with the training set.

§ THE DETAILS OF VISUALIZING THE HIGH-DIMENSIONAL COST FUNCTIONS

Since J(θ) is a high-dimensional function, it is hard to visualize its full profile. Instead, we use the following procedure to partially visualize J(θ). First, the supervised learning of a linear classifier is a convex optimization problem, from which we obtain the global optimal solution θ^⋆.[Note that we solve the supervised learning problem only for the purpose of understanding our proposed unsupervised learning cost J(θ). In our implementation of the unsupervised learning algorithm, we use neither the training label information nor supervised learning algorithms.] Then, we randomly generate two parameter vectors θ_1 and θ_2 and plot the two-dimensional function J(θ^⋆ + λ_1(θ_1-θ^⋆) + λ_2 (θ_2-θ^⋆)) with respect to λ_1,λ_2 ∈R, which is a slice of the cost function on a two-dimensional plane.

To visualize the profile of L(θ,V) in (<ref>), similarly to the case of J(θ), we first solve the supervised learning problem to get θ^⋆. Then we substitute θ^⋆ into (<ref>) and maximize L(θ^⋆, V) over V to obtain V^⋆={ν_i_1,…,i_N^⋆}, where ν_i_1,…,i_N^⋆ = -1/( 1/T∑_n=1^M ∑_t=1^T_n∏_k=0^N-1 p_θ^⋆(y_t-k^n = i_N-k | x_t-k^n) ). We also randomly generate a pair (θ_1,V_1) (with the elements of V_1 being negative) and plot in Figure <ref> the values of L(θ^⋆ + λ_p (θ_1 - θ^⋆), V^⋆ + λ_d (V_1 - V^⋆)) for different λ_p, λ_d ∈R. Clearly, the optimal solution (red dot) is at the saddle point of the profile.

§ ADDITIONAL VISUALIZATION OF J(Θ)

In Figures <ref>, <ref> and <ref>, we show three visualization examples of J(θ) for the OCR dataset on three different affine spaces; part of the first example was included in Figure <ref>. The six sub-figures in each example show the same profile from six different angles, spinning clockwise from (a) to (f).
The red dots indicate the global minimum. In Figure <ref>, we show the same type of profiles as above, except using synthetic data for a binary classification problem. First, we sequentially generate a sequence of states from {0,1} using a hidden Markov model. Then we sample the corresponding data points from two separate 2-dimensional Gaussian models accordingly.

§ ADDITIONAL VISUALIZATION OF L(Θ,V)

Figure <ref> shows the profile of L(θ,V) for the OCR data set on a two-dimensional affine space, viewed from nine different angles. The red dots show the saddle point of the profile, one for each angle.
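The slicing procedure used in these visualizations (evaluating the cost on a 2-D affine plane through θ^⋆) is easy to sketch; the helper below is illustrative and uses a toy quadratic in place of the actual cost J(θ):

```python
import numpy as np

def slice_profile(f, theta_star, theta_1, theta_2, lam_max, res=50):
    # evaluate f on the plane theta_star + l1*(theta_1 - theta_star) + l2*(theta_2 - theta_star)
    lams = np.linspace(-lam_max, lam_max, res)
    d1, d2 = theta_1 - theta_star, theta_2 - theta_star
    return np.array([[f(theta_star + l1 * d1 + l2 * d2) for l2 in lams]
                     for l1 in lams])

f = lambda th: float(np.sum(th ** 2))  # toy stand-in for the cost function
grid = slice_profile(f, np.zeros(10), np.ones(10), np.arange(10.0), 1.0)
print(grid.shape)  # (50, 50), ready to pass to a surface plot
```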
The Local Limit of Random Sorting Networks

Omer Angel, Duncan Dauvergne, Alexander E. Holroyd, Bálint Virág

December 30, 2023
A sorting network is a path of minimal possible length, namely N = n(n-1)/2, in the Cayley graph of the symmetric group S_n generated by the adjacent transpositions π_i = (i, i+1), going from the identity to the reverse permutation. Equivalently, a sorting network is a representation of the reverse permutation as a product π_k_1π_k_2⋯π_k_N, with the path being the sequence σ_t = π_k_1⋯π_k_t, so that σ_0 = 𝕀_n and σ_N is the reverse permutation. For this reason, sorting networks are also known as reduced decompositions of the reverse permutation. Under this name, the combinatorics of sorting networks have been studied in detail, and there are connections between sorting networks and Schubert calculus, quasisymmetric functions, zonotopal tilings of polygons, and aspects of representation theory. We refer the reader to <cit.> and <cit.> for more background in this direction.

[Figure: A “wiring diagram” for a sorting network with n = 4. In this diagram, trajectories are drawn as continuous curves for clarity, whereas our definition specifies that trajectories make jumps at swap times.]

Sorting networks also arise in computer science, as a sorting network can be viewed as an algorithm for sorting a list. Consider an array with n elements, and let π_k_1, π_k_2, …, π_k_N be the sequence of adjacent transpositions in a sorting network. At each step i, instead of swapping the elements at positions k_i and k_i+1, rearrange these elements in increasing order. After all N steps, this process will sort the entire array from any initial order. If we start with the reverse permutation, then every comparison results in a swap. It is helpful to think of the elements of {1,…,n} as labeled particles. Each step in the sorting network has the effect of swapping the locations of two adjacent particles. In this way, we can speak of the particles as having integer-valued trajectories, with jumps in {0,± 1} at integer times. Exactly two particles make a non-zero jump at each time. We denote by H_k(·) the trajectory of particle k. Specifically, for t ≤ N we have σ_⌊ t ⌋(H_k(t)) = k (here and later ⌊ t ⌋ denotes the integer part).
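Checking whether a given word of adjacent swaps is a sorting network is straightforward: apply the swaps to the identity arrangement and verify that the result is the reverse permutation after exactly n(n-1)/2 steps. A minimal sketch (the example words below are ad hoc):

```python
def is_sorting_network(n, word):
    # word: swap indices k, each meaning "swap positions k and k+1" (1-indexed)
    if len(word) != n * (n - 1) // 2:   # a geodesic uses exactly n(n-1)/2 swaps
        return False
    arr = list(range(1, n + 1))
    for k in word:
        arr[k - 1], arr[k] = arr[k], arr[k - 1]
    return arr == list(range(n, 0, -1))

print(is_sorting_network(4, [1, 2, 3, 1, 2, 1]))  # True: a reduced word for 4321
print(is_sorting_network(4, [1, 2, 1, 2, 1, 2]))  # False: right length, wrong endpoint
```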
The number of sorting networks of order n has been computed by <cit.>.Stanley observed that the number of sorting networks equals the number of standard Young tableaux of a certain staircase shape.A bijective proof of this was provided by <cit.>.Later, another bijective proof was found by <cit.>, and recently <cit.> proved that the two bijections coincide.modify paragraph The study of random sorting networks was initiated by <cit.>. That paper considered the possible scaling limits of sorting networks, namely weak limits of the scaled processlim_n→∞1/n H_⌊ an ⌋(t/N).Here, space is rescaled by a factor of n and time by a factor of N=n2.With this scaling, H_⌊ an ⌋ becomes a function from [0,1] to [0,1], starting at a and terminating at 1-a.It is not a priori clear that the limit exists (in distribution) or even that the limit is continuous.While existence of the above limit is still an open problem, it is shown in <cit.> that the scaled trajectories are equicontinuous in probability, and that subsequential limits are Hölder(α) for any α<1/2.It is also conjectured – based on strong numerical evidence – that particle trajectories converge to sine curves as n →∞.We refer the reader to <cit.> and <cit.>for further results and conjectures in this direction.See also <cit.> for the scaling limit of certain non-uniform random sorting networks under this scaling.Different local properties of random sorting networks have also been studied in <cit.>. §.§ Limits of Sorting Networks In this paper we are interested in local limits of sorting networks. These limits are local in the sense that space is not scaled at all. However, time still needs to be scaled by a factor of 1/n to observe a non-constant process.Thus instead of the sorting process finishing at time N, it will finish at time N/n = (n-1)/2.consistently use space-time, not time-space! Should look through -done require no simultaneous jumps? do we prove this? Yes (injectivity). 
A swap function is a function U: ×_+ → with the following properties: * For each x, we have that U(x,·) is cadlag.* For each t we have that U(·,t) is a permutation of .* Define the trajectory H_x(t) by U(H_x(t),t)=x.Then H_x is a cadlag path with nearest neighbour jumps for each x (i.e. the inverse permutation U^-1 is pointwise cadlag).* For any time t ∈ (0, ∞) and any x ∈, lim_s → t^- U(x, s) = U(x + 1, t) if and only iflim_s → t^- U(x +1, s) = U(x, t).We think of a swap function as a collection of particle trajectories {H_x(·) : x ∈}. Condition (iv) guarantees that the only way that a particle at position x can move up at time t is if the particle at position x+1 moves down. That is, particles move by swapping with their neighbours.We letbe the space of swap functions endowed with the following topology. A sequence of swap functions U_n → U if each of the cadlag paths U_n(x, ·) → U(x, ·) and H_n, x(·) → H_x(·). Convergence of cadlag paths is convergence in the Skorokhod topology. We refer to a random swap function as a swap process.Our main result is the following limit theorem.1 There exists a swap process U so that the following holds.Let u∈(-1,1), and let {k_n : n ∈} be any sequence such that k_n/n→ (1+u)/2.Consider the shifted, and time scaled swap processU_n(x,t) = σ^n_⌊ nt/√(1-u^2)⌋(k_n + x) - k_n,where σ^n is a uniformly random n-element sorting network. ThenU_nU.Moreover, U is stationary and mixing of all orders with respect to the spatial shift, and has stationary increments in time: the permutation (U(·,s)^-1U(·,s+t))_t≥ 0 has the same law as (U(·,t))_t≥ 0. The scaling in Theorem <ref> can be thought of in the following way. We first choose a spatial location u ∈ (-1 , 1) and look at a finite window around the position (1 + u)n/2. That is, we are concerned with particles whose labels are in a window [(1+u)n/2 - K, (1 + u)n/2 + K]. 
We want to know what the start of the sorting network looks like in this local window, at a scale where we see each of the individual swaps in the limit. To do this, we need to rescale time by a factor of 1/n. Note that the semicircle factor of √(1 - u^2) accounts for the fact that the swap rate is slower outside of the center of a random sorting network. On the global scale, this was proven in <cit.>, so the slow-down does not come as a surprise. To precisely define each U_n, for x such that k_n+x∉{1,…,n}, we use the convention that U_n(x,t)=x. For t>N/n we use the convention that U_n(x,t) = U_n(x,N/n).By doing this, any sorting network corresponds to a swap function. Convergence in the above theorem is weak convergence in the topology on . Recall also that a process is spatially mixing of order m if translations by k_1,…,k_m are asymptotically independent as min |k_i-k_j| →∞.Spatial mixing (even of order 2) of the system implies ergodicity. As a by-product of the proof, we also show that for any t, there is a bi-infinite sequence of particles in the limit process U that have not moved by time t.Consequently,can be split into finite intervals that are preserved by the permutation U(·, t). Furthermore, we prove convergence in expectation of the number of swaps between positions x and x+1 by some time t.Specifically, if s(x, t, U) is the number of swaps between positions x and x+1 up to time t in the process U, then s(x, t, U_n) → s(x, t, U) = 4/πt as n→∞.The expected number of swaps here agrees with corresponding global result obtained in <cit.>.Theorem <ref> is proven in the k=0 case as Theorem <ref>.The general case is a consequence of Theorem <ref>. §.§ Limits of Young tableaux To prove Theorem <ref>, we will first prove a limit theorem for staircase Young tableaux, and then use the Edelman-Greene bijection to translate this into a theorem about sorting networks. 
This theorem is of interest in its own right.Recall that for an integer N, a partition λ of N is a non-increasing sequence (λ_1,…,λ_n) of positive integers adding up to N.The size of λ is N = |λ| = ∑λ_i.We shall use the convention ={1,2,…}.The Young diagram associated with λ is the set A⊂× given by A = {(i,j) : j ≤λ_i}.A Young diagram is traditionally drawn with a square for each element, and elements of A are referred to as squares.The lattice × is usually oriented so that the square (1,1) is in the top left corner of the lattice, but a different orientation will be convenient for us as discussed below. The staircase Young diagram of order n is the diagram of the partition (n-1,n-2,…,1), of size N=n2.A standard Young tableau of shape λ is an order-preserving bijection f: A→{1,…,N}, i.e., f is increasing in both i and j.For both the statement of our results and their proofs, it will be more convenient to work with reverse standard Young tableaux, where the bijection is order-reversing.Clearly f ↦ N+1-f is a bijection between standard and reverse standard Young tableaux.Our second main result is a limit theorem for the entries near the diagonal of a uniformly random staircase shaped Young tableau of order n.To introduce this theorem, we must first change the coordinate system for staircase Young tableaux.< g r a p h i c s > The staircase Young diagram of order 5, i.e., ofλ = (4,3,2,1), with squares labelled by a reverse standardYoung tableau, shown in both the usual and in our coordinate systemin .One can think of the Young diagram λ = (4,3,2,1)as ten blocks in a triangular pile.The entries in a tableau ofshape λ give a possible order in which to place theseblocks in the pile while respecting gravity. Define = {(x,y) ∈× : x+y ∈ 2}.We introduce a partial order ongiven by (x,y) ≤ (x',y') if x+y ≤ x'+y' and y-x ≤ y'-x' (i.e. 
(x,y)≤ (x',y') if there is a path in the lattice from the (x,y) to (x',y'), increasing in the y-coordinate.For (c,n-1) ∈, defineT(c,n) = {z ∈ : z ≤ (c,n-1)}. The set T(c,n) is the image of a staircase shaped Young diagram of order n by the mapping (i,j)↦ (c-i+j, n+1-i-j). We extend the definition of a staircase diagram of order n and use that term for T(c,n).We call the value c the center of the diagram.The order on T(c,n) induced by the order oncorresponds to reversing the order on the Young diagram induced by the order on ×.Therefore any order-preserving bijection G: T(c,n) →{1,…,N} is a reverse standard Young tableau.We extend G to a function from → [0, ∞] by setting G(z) = ∞ for all z ∉ T(c,n).In the topology of pointwise convergence in this function space, we then have the following theorem about convergence of uniformly random reverse standard Young tableaux.2There exists a random function F:→[0,∞) such that the following holds.Fix u∈(-1,1), and a sequence k_n with k_n/n→ u.Let G_n be a uniformly random staircase Young tableau on T(k_n,n). ThenG_n/n1/√(1 - u^2) F.Moreover F is stationary and mixing of all orders with respect to translations by (2m,0) for m ∈. The different components of Theorem <ref> are proved in Theorem <ref>, Proposition <ref>, and Theorem <ref> below.§.§ Overview The structure of the paper is as follows.Section <ref> contains the necessary background about Young tableaux and the Edelman–Greene bijection, as well as some basic domination lemmas about Young tableaux.This will allow us to conclude Theorem <ref> from the limit theorem for staircase Young tableaux, Theorem <ref>. Section <ref> contains the proof of Theorem <ref> for the case u=0. reformulate is not quite right. 
In order to translate Theorem <ref> using the Edelman–Greene bijection to a theorem about sorting networks, we require certain regularity properties of the Young tableau limit. These are proved in Sections <ref> and <ref>. Finally, we deduce Theorem <ref> in the case u = 0 in Section <ref>. In Section <ref>, we extend Theorem <ref> and consequently Theorem <ref> to arbitrary u ∈ (−1, 1) by exploiting a monotonicity property of random Young tableaux.

Remark. We note that <cit.> have results that overlap some of ours. Our proof of the local limit is probabilistic, and is based on the Edelman–Greene bijection, the hook formula and an associated growth process, and a monotonicity property for random Young tableaux. Gorin and Rahman take a very different approach, using a contour integral formula for Gelfand–Tsetlin patterns discovered by <cit.>. This allows them to get determinantal formulas for the limiting process. While for many models exact formulas are the only known approach to limit theorems, we show that for random Young tableaux the local limit and its properties can also be established from first principles.

§ THE HOOK FORMULA, THE EDELMAN–GREENE BIJECTION AND TABLEAU PROCESSES

In this section, we introduce some preliminary information regarding Young tableaux and the Edelman–Greene bijection. We then use the hook formula to prove some basic domination lemmas about pairs of growing tableau processes.

The hook formula. Let d(λ) be the number of reverse standard Young tableaux of shape λ. <cit.> proved a remarkable formula for d(λ). To state it, we first need some definitions. Let A(λ) ⊂ ℕ×ℕ be the Young diagram of shape λ. For a square z = (i,j) ∈ A(λ), define the hook of z by

H_z = {(i,j') ∈ A : j' ≥ j} ∪ {(i',j) ∈ A : i' ≥ i}.

Define the hook length of z by h_z = |H_z|. We also define the reverse hook of z by

R_z = {w ∈ A ∖ {z} : z ∈ H_w}.

The reverse hook will be of use later when manipulating the hook formula.
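As a concrete check of these definitions, here is a short Python sketch (function names are our own) that computes hook lengths and counts reverse standard tableaux by the hook formula d(λ) = |λ|!/∏ h_z stated next:

```python
from math import factorial

def hook_lengths(A):
    """Hook lengths h_z for a Young diagram A, given as a set of squares (i, j):
    H_z collects the squares weakly to the right in row i and weakly below in
    column j (within A), so h_z = |H_z|."""
    return {(i, j): sum(1 for (a, b) in A
                        if (a == i and b >= j) or (b == j and a >= i))
            for (i, j) in A}

def num_tableaux(A):
    """Hook formula: d(A) = |A|! / prod over z in A of h_z."""
    prod = 1
    for h in hook_lengths(A).values():
        prod *= h
    return factorial(len(A)) // prod

def diagram(shape):
    """Squares of the Young diagram of a partition, e.g. (3, 2, 1)."""
    return {(i + 1, j + 1) for i, lam in enumerate(shape) for j in range(lam)}
```

For the staircase of order 4, λ = (3, 2, 1), the hook lengths are 5, 3, 3, 1, 1, 1, so d = 6!/45 = 16, matching the count of sorting networks of order 4 discussed below.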
We note here for future use the hook lengths and reverse hook lengths in the rotated coordinates. For a point z = (x, y) in a diagram T(c, n), we have that h_z = 2y − 1, and that |R_z| = n − 1 − y.

With the above notations, we have

d(λ) = |λ|! / ∏_{z ∈ A(λ)} h_z.

The Edelman–Greene bijection. For the staircase Young diagram of order n, the hook formula gives

d(λ_n) = \binom{n}{2}! / (1^{n−1} 3^{n−2} 5^{n−3} ⋯ (2n−3)^1).

As noted, this is also the formula for the number of sorting networks of order n given in <cit.>. We now describe the bijection between these two sets given by <cit.>. We recount here a version of the Edelman–Greene bijection for rotated reverse standard Young tableaux (defined on subsets of 𝕃). More precisely, the map as we describe it gives a bijection between Young tableaux on the diagram T(c,n) and sorting networks of size n with particles located at positions {c−(n−1), c−(n−1)+2, …, c+(n−1)}. Note that here particles are located at positions in 2ℤ, and not in ℤ as in the statement of Theorem <ref>. This is done to optimize the description of the bijection. To accommodate this, for odd k we use π_k to denote the swap of the particles at positions k−1 and k+1.

Given a reverse standard Young tableau G: T(c,n) → {1,…,N}, we generate a sorting network π_{k_1}, π_{k_2}, …, π_{k_N} and a sequence of Young tableaux (G_t)_{t ≤ N}, starting with G_0 = G. Recall that by convention G(z) = ∞ for z ∉ T(c,n). We repeat the following for t ∈ {1,…,N}, computing k_t and G_t from G_{t−1}. (See <ref> for an example.)

Step 1: Find the point z_* ∈ 𝕃 such that the value of G_{t−1}(z_*) is minimal. Clearly z_* = (k, 1) for some odd k. Set k_t = k.

Step 2: Recursively compute the “sliding path” z_1, z_2, … as follows. Set z_1 = z_*. If z_i = (x, i), then z_{i+1} ∈ {(x−1, i+1), (x+1, i+1)} is chosen to be the point with the smaller value of G_{t−1}. If both are infinite then the choice is immaterial.
Step 3: Perform sliding to update G: if z is in the sliding path, so that z = z_i for some i, then let G_t(z) = G_{t−1}(z_{i+1}). Otherwise, let G_t(z) = G_{t−1}(z).

[Figure: The first three iterations in the Edelman–Greene bijection. Squares not shown have G_t(z) = ∞. In each iteration, (the start of) the sliding path is in bold.]

[Figure: The Edelman–Greene bijection applied to a tableau of order n = 4. The particles are labelled A–D to distinguish them from the entries in the tableau. The sorting network corresponds to the wiring diagram shown in <ref>.]

The output of the Edelman–Greene bijection is the swap sequence (k_i) of length N = \binom{n}{2}, taking odd values k_i ∈ [c−(n−2), c+(n−2)]. Edelman and Greene proved that applying the given sequence of swaps will reverse the elements of the interval [c−(n−1), c+(n−1)] ⊂ 2ℤ, and moreover, that any sorting network on this interval results from a unique reverse standard Young tableau on T(c,n).

§.§ Uniform Young tableaux

The Edelman–Greene bijection allows us to sample a uniformly random sorting network of size n given a uniformly random reverse standard Young tableau of shape T(c,n). We say a set A ⊂ 𝕃 is downward closed if whenever z ∈ A and w ≤ z, then w ∈ A. In the language of Young diagrams, such an A is a special case of a skew Young diagram. Given a reverse standard Young tableau G on T(c,n), let A_i = {z : G(z) ≤ i}. Monotonicity of G implies that A_i is downward closed. Moreover, |A_i| = i for each i ∈ {0,…,N}, and we have A_i ⊂ A_{i+1}. Thus a Young tableau on T(c,n) can be viewed as a maximal sequence of downward closed subsets A_0 = ∅ ⊂ A_1 ⊂ A_2 ⊂ … ⊂ A_N = T(c,n). The complementary sets B_i = T(c,n) ∖ A_i are rotated Young diagrams, and G|_{B_i} is a reverse standard Young tableau on that diagram (with entries shifted by i).

If B is a Young diagram, and G is a reverse standard Young tableau on B, then G(z) = 1 for some square z ∈ B, and this square must have hook H_z = {z}. We call such squares corners of B.
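The three steps can be sketched directly in Python, storing a tableau as a dict in the rotated coordinates (a sketch with names of our own choosing; G maps squares of T(c, n) to 1, …, N, with G = ∞ off the diagram):

```python
import math

def edelman_greene(G):
    """Apply the Edelman-Greene bijection to a reverse standard Young tableau G,
    given as a dict {(x, y): value} with the bottom row at y = 1.
    Returns the swap sequence (k_1, ..., k_N)."""
    G = dict(G)          # work on a copy
    swaps = []
    for _ in range(len(G)):
        # Step 1: the minimal entry sits in the bottom row at some (k, 1), k odd.
        (k, _), _ = min(G.items(), key=lambda kv: kv[1])
        swaps.append(k)
        # Step 2: compute the sliding path upward, always toward the smaller value.
        path = [(k, 1)]
        while True:
            x, y = path[-1]
            left, right = (x - 1, y + 1), (x + 1, y + 1)
            vl, vr = G.get(left, math.inf), G.get(right, math.inf)
            if vl == math.inf and vr == math.inf:
                break
            path.append(left if vl <= vr else right)
        # Step 3: slide each path square's value down; the last path square empties.
        for z, znext in zip(path, path[1:]):
            G[z] = G[znext]
        del G[path[-1]]
    return swaps
```

On the tableau {(−1,1): 1, (1,1): 2, (0,2): 3} on T(0, 3), this produces the swap sequence (−1, 1, −1), which indeed reverses the three particles at positions {−2, 0, 2}.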
The restriction of G to B ∖ {z} is a reverse standard Young tableau with all values increased by 1. This observation allows us to use the hook formula to find the probability that in a uniformly random reverse standard Young tableau of shape λ, the square containing 1 is a given corner z. We call this the hook probability, denoted ℙ(B, z). A simple calculation shows that

ℙ(B, z) = d(B ∖ {z}) / d(B) = (1/|B|) ∏_{y ∈ R_z} h_y/(h_y − 1).

This gives a simple procedure for sampling a uniformly random reverse standard Young tableau on any diagram B: pick a random corner z_1 of B with probability mass function ℙ(B, ·) and set G(z_1) = 1. Recursively pick a corner z_2 of B ∖ {z_1} and set G(z_2) = 2, and repeat until all elements of B have been chosen. In terms of the corresponding growing sequence of sub-diagrams, this takes the following form: set A_0 = ∅. Having chosen {A_0, …, A_{i−1}}, pick a corner z_i of B ∖ A_{i−1} with probability mass function ℙ(B ∖ A_{i−1}, ·), and let G(z_i) = i and A_i = A_{i−1} ∪ {z_i}. We will primarily be interested in this process when B is a staircase diagram T(c, n).

While the hook probabilities have an explicit formula, which we use directly, one can sample a corner of a diagram with this distribution very efficiently using the hook walk, a process described in <cit.>. We omit the mechanism of the walk since we do not need it, but remark that it can be used to provide alternate proofs of some of the stochastic domination lemmas that follow.
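The sampling procedure can be sketched as follows (a minimal illustration, computing the hook probability directly as d(B∖{z})/d(B) via the hook formula, rather than through the product over the reverse hook; names are ours):

```python
import random
from math import factorial

def hooks(A):
    """Hook lengths within the diagram A (a set of squares (i, j))."""
    return {(i, j): sum(1 for (a, b) in A
                        if (a == i and b >= j) or (b == j and a >= i))
            for (i, j) in A}

def d(A):
    """Number of reverse standard Young tableaux of shape A (hook formula)."""
    prod = 1
    for h in hooks(A).values():
        prod *= h
    return factorial(len(A)) // prod

def sample_tableau(B, rng=random):
    """Uniform reverse standard Young tableau on B: repeatedly pick a corner z
    (a square of hook length 1 in the remaining diagram) with the hook
    probability d(rest - {z}) / d(rest), assign it the next value, remove it."""
    rest, G, t = set(B), {}, 0
    while rest:
        t += 1
        corners = [z for z, h in hooks(rest).items() if h == 1]
        weights = [d(rest - {z}) / d(rest) for z in corners]
        z = rng.choices(corners, weights=weights)[0]
        G[z] = t
        rest.remove(z)
    return G
```

On the shape (2, 1), the two corners each have hook probability 1/2, matching the fact that the two reverse standard tableaux of that shape are equally likely.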
§.§ Continuous time growth

A significant simplification of our analysis is achieved by Poissonizing time. Instead of generating a sequence of growing diagrams A_i, we shall define a continuous time process with the same jump distribution but moving at the times of a Poisson process. The staircase tableau process (or simply tableau process) is a Markov process X(t) = X(c, n, r)(t). Its law is determined by the parameters c, n and r, and it is related to the uniform reverse standard Young tableau of T(c, n). The state space of this process comprises all downward closed subsets A ⊂ T(c, n). The initial state is X(0) = ∅. If A and A ∪ {z} are two states, then the rate of jump from A to A ∪ {z} is

v_X(z, A) = r · ℙ(T(c, n) ∖ A, z).

When the process X is clear from context we omit the subscript on the rate v. No other jumps are possible. Note that the parameter r simply multiplies all jump rates, so that the process X(c, n, r)(t) has the same law as X(c, n, 1)(rt). Running these processes at different rates will be useful, hence the inclusion of r in the notations. The state T(c, n) is absorbing. The total rate of jumps from any other state is r, so the first \binom{n}{2} jump times of the process coincide with the points of a rate-r Poisson process.

Given the process X, let the inclusion time of a square z be defined by

F(z) = inf{t : z ∈ X(t)}.

These determine the process X, since we have X(t) = {z : F(z) ≤ t}. Note that F is naturally defined on all of 𝕃, with F(z) = ∞ for z ∉ T(c, n), so that F ∈ [0, ∞]^𝕃. We refer to F as the inclusion function for X. The first convergence theorem we prove can now be stated.

Let X_n = X_n(c_n, n, n) be a sequence of tableau processes with c_n = o(n), and let F_n be the corresponding sequence of inclusion functions. Then F_n ⟶ F in distribution for some random F: 𝕃 → [0, ∞). Moreover, the limit F is translation invariant, in the sense that F and F ∘ τ are equal in distribution, where τ(x, y) = (x+2, y).
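A direct simulation of the Poissonized process can be sketched as follows (in Young-diagram coordinates for brevity, with helper names of our own; `expovariate` supplies the inter-jump times of a rate-r Poisson process, and corners of the remaining diagram are chosen with the hook probabilities of the previous subsection):

```python
import random
from math import factorial

def hooks(A):
    return {(i, j): sum(1 for (a, b) in A
                        if (a == i and b >= j) or (b == j and a >= i))
            for (i, j) in A}

def d(A):
    prod = 1
    for h in hooks(A).values():
        prod *= h
    return factorial(len(A)) // prod

def inclusion_times(B, rate, rng=random):
    """Run the continuous-time tableau process on the diagram B at the given
    rate, and return the inclusion time F(z) of every square z."""
    rest, t, F = set(B), 0.0, {}
    while rest:
        t += rng.expovariate(rate)  # next point of the rate-`rate` Poisson process
        corners = [z for z, h in hooks(rest).items() if h == 1]
        weights = [d(rest - {z}) / d(rest) for z in corners]
        z = rng.choices(corners, weights=weights)[0]
        F[z] = t
        rest.remove(z)
    return F
```

The inclusion times respect the order structure deterministically: a square can only be added after every square in its hook, so for instance the inner corner (1, 1) of any shape is always the last square filled.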
We will use the notation τ throughout the paper to signify horizontal translation on 𝕃.

By the law of large numbers for the Poisson process, the limit of the inclusion functions F_n for the processes X_n is the same as the limit of a uniformly random reverse standard Young tableau on T(c_n, n) with entries scaled by 1/n. Thus Theorem <ref> immediately implies the convergence and translation invariance in Theorem <ref> in the case u = 0. We will similarly prove Theorem <ref> for u ≠ 0 in Section <ref> by again Poissonizing time, noting that this does not change the limit.

Note also that T(c, n) is only defined when c and n have opposite parity, so when taking tableau limits for constant c, we may need to change the value of c by 1, depending on whether n is odd or even. In all of our proofs, shifting the position that the tableaux are centered at by 1 does not affect any of the arguments, as all of the domination lemmas we use are unaffected by distance changes of size o(n). Therefore from now on, we will ignore issues of the parity of c and n.

§.§ Stochastic domination

A central tool in our proof of existence of certain limits is stochastic domination of growth processes. Subsets of 𝕃 are naturally ordered by inclusion. For coupled tableau processes X and X', we say that X is dominated by X' up to time T if for all t ≤ T we have X(t) ⊂ X'(t). In terms of the inclusion functions, this can be stated equivalently as F ≥ F' ∧ T in the pointwise order on inclusion functions (note the order reversal: a smaller process X corresponds to larger inclusion times F). In light of Strassen's theorem (see <cit.>), we say that X is stochastically dominated by X' if there is a coupling of the two so that domination holds, and write X ⪯ X' up to time T.
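In terms of inclusion functions, the domination statement is a pointwise inequality, which is easy to express (a trivial helper of our own, for concreteness; inclusion functions are dicts with missing squares understood as F = ∞):

```python
def dominated_up_to(F_small, F_big, T):
    """Check X <= X' up to time T via inclusion functions: X(t) is contained in
    X'(t) for all t <= T exactly when F(z) >= min(F'(z), T) for every square z.
    Note the order reversal: a smaller process has larger inclusion times."""
    inf = float('inf')
    squares = set(F_small) | set(F_big)
    return all(F_small.get(z, inf) >= min(F_big.get(z, inf), T)
               for z in squares)
```

The truncation by T matters: a square the bigger process never adds forces the smaller process not to add it before time T either.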
The next lemma gives a sufficient condition for stochastic domination of one tableau process by another, in terms of their rates.

Let X_1 and X_2 be two tableau processes on the diagrams T(c_1, n_1) and T(c_2, n_2) respectively. Let 𝒜 be some subset of the state space of X_1, and let the stopping time T be the first time t that X_1(t) ∉ 𝒜. Suppose that for any A_1 ∈ 𝒜, any state A_2 with A_1 ⊂ A_2, and for any lattice point z we have

v_X_1(z, A_1) ≤ v_X_2(z, A_2),

provided both are non-zero. Then X_1 ⪯ X_2 up to time T.

Suppose first that 𝒜 is the entire state space of X_1. The proof when 𝒜 is not the whole state space goes through in the same way. We define a Markov process Y whose state space is all pairs (A_1, A_2) with A_1 ⊂ A_2, such that Y has marginals X_1 and X_2. We define the transition rates of Y out of a state (A_1, A_2) as follows. Let C_1 be the set of all corners of T(c_1, n_1) ∖ A_1, let C_2 be the set of all corners of T(c_2, n_2) ∖ A_2, and let C_1' be the set of all corners belonging to both T(c_1, n_1) ∖ A_1 and T(c_2, n_2) ∖ A_1. For z ∈ C_1', Y transitions to the state (A_1 ∪ {z}, A_2 ∪ {z}) with rate v_X_1(z, A_1), and to the state (A_1, A_2 ∪ {z}) with rate v_X_2(z, A_2) − v_X_1(z, A_1). For z ∈ C_2 ∖ C_1', Y transitions to the state (A_1, A_2 ∪ {z}) with rate v_X_2(z, A_2). For z ∈ C_1 ∖ C_1', Y transitions to the state (A_1 ∪ {z}, A_2) with rate v_X_1(z, A_1). It is easy to check that Y has the correct marginals and provides a coupling of X_1 and X_2 with X_1 ≤ X_2.

We can further simplify which rates we need to compare to prove stochastic domination with the following observation.

Suppose X(t) = X(c, n, r)(t) is a tableau process, and A_1 ⊂ A_2. Then v(z, A_1) ≤ v(z, A_2) for any point z ∉ A_2.

The only interesting case here is when z is a corner for both T(c, n) ∖ A_1 and T(c, n) ∖ A_2. Then by Equation (<ref>) and the hook probability formula,

v(z, A_i) = (r/|T(c, n) ∖ A_i|) ∏_{y ∈ R^i_z} (1 + 1/(h^i_y − 1)).
Here R^i_z refers to the reverse hook of z in the diagram T(c, n) ∖ A_i, and h^i_y refers to the cardinality of the hook of y in the same diagram. We have that R^1_z = R^2_z, and each of these is simply the reverse hook of z in T(c, n). Also, h^2_y ≤ h^1_y for all y since T(c, n) ∖ A_2 ⊂ T(c, n) ∖ A_1, and |T(c, n) ∖ A_2| ≤ |T(c, n) ∖ A_1|. Putting this together, we get that v(z, A_1) ≤ v(z, A_2), as desired.

To prove the more general stochastic domination result, we need the following lemma to help bound hook probabilities.

Let a, b be either two integers greater than 1 or two half-integers greater than 1, and define

y = ∏_{i=a}^{b} (1 + 1/(2i − 1)) = ∏_{i=a}^{b} 2i/(2i − 1),

where the product runs over integers between a and b if both are integers, and over half-integers between a and b if both are half-integers. Then

√((2b − 1)/(2a − 1)) < y < √(2b/(2a − 2)).

We have x < y < z where

x = ∏_{i=a}^{b} (2i + 1)/2i,  z = ∏_{i=a}^{b} (2i − 1)/(2i − 2).

Then xy and yz are telescoping products given by

xy = (2b + 1)/(2a − 1),  yz = 2b/(2a − 2),

so we get √(xy) < y < √(yz), and the claimed bounds follow.

Now we can prove the following more general lemma about stochastic domination.

Let T(c_1, n_1) ⊂ T(c_2, n_2), and consider two tableau processes X_1 = X_1(c_1, n_1, n_1) and X_2 = X_2(c_2, n_2, θn_2). Fix ε ∈ (0, 1), and let T be the stopping time when ⌊ε\binom{n_1}{2}⌋ lattice points have been added to X_1. Let the difference between the horizontal centers of the two tableau processes be d = |c_1 − c_2|. Then X_1 ⪯ X_2 up to time T, provided that

θ > (n_2 − 1)n_1/((n_1 − 1)(n_2 − 2)) · ((1 − ε)√(1 − ((n_1 + d)/(n_2 − 2))²))^{−1}.

We may assume that c_1 = d and c_2 = 0. By Lemmas <ref> and <ref> it suffices to show that for any state A with |A| ≤ ⌊ε\binom{n_1}{2}⌋, and any corner z of both T(d, n_1) ∖ A and T(0, n_2) ∖ A, we have

v_X_1(z, A) ≤ v_X_2(z, A).

Let R_z^1 be the reverse hook of z in T(d, n_1), let h_y^1 be the hook length of y in T(d, n_1) ∖ A, and similarly define R_z^2 and h_y^2 for T(0, n_2).
To get a simple expression for v_X_2(z, A)/v_X_1(z, A), observe that if y ∈ R_z^1, then y ∈ R_z^2 and h_y^1 = h_y^2 for such y. Thus

v_X_2(z, A)/v_X_1(z, A) = (θn_2/n_1) · (|T(d, n_1) ∖ A|/|T(0, n_2) ∖ A|) · ∏_{y ∈ R_z^2 ∖ R_z^1} (1 + 1/(h_y^2 − 1)).

We will show that this is always greater than 1. For y = (y_1, y_2) to be in R_z^2 ∖ R_z^1, where z = (z_1, z_2), one of two possibilities must occur. Either

(y_1, y_2) ∈ E_1 = {(z_1 − i, z_2 + i) : z_2 − z_1 + 2i ∈ (n_1 − 1 − d, n_2 − 1]}, or
(y_1, y_2) ∈ E_2 = {(z_1 + i, z_2 + i) : z_1 + z_2 + 2i ∈ (n_1 − 1 + d, n_2 − 1]}.

For y = (z_1 − i, z_2 + i) ∈ E_1 and for y = (z_1 + i, z_2 + i) ∈ E_2, the hook of y is of length 1 + (y_2 − z_2) + (y_2 − 1) = 2i + z_2. Thus using Lemma <ref>, we find that

∏_{y ∈ R_z^2 ∖ R_z^1} (1 + 1/(h_y^2 − 1)) = ∏_{i = (n_1 − 1 − d − z_2 + z_1)/2 + 1}^{(n_2 − 1 − z_2 + z_1)/2} (1 + 1/(2i + z_2 − 1)) · ∏_{i = (n_1 − 1 + d − z_2 − z_1)/2 + 1}^{(n_2 − 1 − z_2 − z_1)/2} (1 + 1/(2i + z_2 − 1)) > √(((n_2 − 2)² − z_1²)/(n_1² − (z_1 − d)²)).

Thus for the quantity (<ref>) to be greater than 1, we need

θ ≥ (n_1/n_2) · (|T(0, n_2) ∖ A|/|T(d, n_1) ∖ A|) · √((n_1² − (z_1 − d)²)/((n_2 − 2)² − z_1²)),

for all values of z_1 and A with |A| ≤ ⌊ε\binom{n_1}{2}⌋. We then have the following chain of inequalities for the right hand side of (<ref>), which shows that inequality (<ref>) holds for the values of θ specified in the Lemma:

(n_1/n_2) · (|T(0, n_2) ∖ A|/|T(d, n_1) ∖ A|) · √((n_1² − (z_1 − d)²)/((n_2 − 2)² − z_1²))
< (n_1/n_2) · (\binom{n_2}{2}/((1 − ε)\binom{n_1}{2})) · (n_1/(n_2 − 2)) · (1 − (z_1/(n_2 − 2))²)^{−1/2}
≤ (n_2 − 1)n_1/((n_1 − 1)(n_2 − 2)) · ((1 − ε)√(1 − ((n_1 + d)/(n_2 − 2))²))^{−1}.

We will use this lemma when n_1 is much smaller than n_2, the value ε small, and the distance d growing linearly with n_2. In this case we have the following asymptotic version of the stochastic domination.

Let X(un + a_n, n, n/√(1 − u²))(t) be a sequence of tableau processes for n ∈ ℕ, where u ∈ (−1, 1) and a_n = o(n) is a sequence of integers.
Then for any ε_1 > 2ε_2 ∈ (0, 1), for all sufficiently large m there exists some N(m) such that

X(un + a_n, n, (1 + ε_1)n/√(1 − u²)) ⪰ X(0, m, m)

up to time T, for all values of n ≥ N(m). Here T is the stopping time when ⌊ε_2\binom{m}{2}⌋ lattice points have been added to the process X(0, m, m).

Finally, we also state Lemma <ref> for domination of a tableau process over two independently coupled tableau processes, as this will be necessary for the proof that the tableau limit is mixing. The proof goes through analogously.

Let T(b_1, n_1) and T(c_1, n_1) be disjoint sets with T(b_1, n_1) ∪ T(c_1, n_1) ⊂ T(c_2, n_2), and consider three tableau processes X_1(t) = X_1(c_1, n_1, n_1), X_1'(t) = X_1'(b_1, n_1, n_1) and X_2(t) = X_2(c_2, n_2, θn_2). Let Y be the process given by the union of independent copies of X_1 and X_1'. Let d = max(|c_2 − c_1|, |c_2 − b_1|). Then if ε and θ are as in the statement of Lemma <ref> (with the new definition of d), we have that Y ⪯ X_2 up to a stopping time T. In this case T is the stopping time when either ⌊ε\binom{n_1}{2}⌋ lattice points have been added to X_1 or to X_1'.

§ INCLUSION FUNCTIONS AND CONVERGENCE

We want to show that for a sequence of tableau processes X_n(t) = X_n(0, n, n)(t), the corresponding inclusion functions converge in the weak topology on the space of probability measures on [0, ∞]^𝕃. To do this, we use the monotonicity established by Corollary <ref>, which will be exploited using the following lemmas.

Let G_n be a tight sequence of random variables taking values in [0, ∞)^m. Suppose that for every ε > 0, there exists a sequence of random variables G^ε_n such that ℙ(G^ε_n ≠ G_n) → 0 as n → ∞ and such that the following holds. For all sufficiently large M there is some N ∈ ℕ such that

G_n ⪯ (1 + ε)G^ε_M for all n ≥ N.

Then the sequence G_n has a distributional limit G.
We leave the proof of this lemma for the appendix (Section <ref>), as it is fairly standard but somewhat lengthy.

Let X_n = X_n(a_n, n, n) be a sequence of tableau processes with a_n = o(n) and let F_n be the corresponding sequence of inclusion functions. Then for any z ∈ 𝕃,

{F_n(z) : n large enough so that z ∈ T(0, n)}

is tight as a sequence taking values in [0, ∞).

[Figure: The set A_n in the proof of Lemma <ref>. As the size of the Young diagram goes to infinity, the proportion of T(0, n) taken up by A_n increases to 1/2, as the point z does not grow with n.]

Let z = (x, y) and consider the set A_n = {z' ∈ T(a_n, n) : z' ≥ z}. Then A_n is a rectangle, and as n → ∞ the relative size |A_n|/|T(a_n, n)| → 1/2. Moreover, no square in A_n is added before z. Now let n be large enough so that |A_n| > n²/8, and let m, θ be such that

θ > n(m − 1)/((n − 1)(m − 2)) · ((1/4)√(1 − ((n + |a_n − a_m|)/(m − 2))²))^{−1}.

By Lemma <ref>, θX_m dominates X_n until the time when (3/4)\binom{n}{2} squares have been added to X_n. By this time at least one square from A_n must have been added to X_n, so z must have been added to X_n. Therefore

θF_n(z) ⪰ F_m(z).

As the right hand side of (<ref>) is bounded uniformly for large m for a fixed value of n, there is some K > 0 such that K F_n(z) ⪰ F_m(z) for all large m, so the sequence {F_m(z)} is tight.

Now we can prove Theorem <ref>, which, as mentioned previously, corresponds precisely to Theorem <ref> in the case u = 0, and proves all parts of that theorem in this case except for the mixing property with respect to the spatial shift.

First assume that a_n = 0 for all n. Since the product topology on [0, ∞]^𝕃 is compact, F_n has subsequential limits. Suppose that there are two subsequential limits F^a ≠ F^b. Then for some finite set K, the restrictions F^a|_K and F^b|_K are not equal. Define T_n^ε to be the stopping time when ⌊ε\binom{n}{2}⌋ lattice points have been added to X_n. Then T_n^ε → ∞ as n → ∞, so for any z, ℙ(F_n(z) ≤ T_n^ε) → 1 as n → ∞, since

{F_n(z) : n large enough so that z ∈ T(0, n)}

is tight by Lemma <ref>.
Defining F^ε_n by F^ε_n(z) = F_n(z) for F_n(z) ≤ T_n^ε and F^ε_n(z) = ∞ otherwise, we get ℙ(F^ε_n|_K ≠ F_n|_K) → 0 as n → ∞. Now by Corollary <ref>, for large enough m there exists N(m) such that X(0, n, (1 + 3ε)n) ⪰ X(0, m, m) up to time T_m^ε, for all n ≥ N. This implies that (1 + 3ε)F^ε_m(z) ⪰ F_n(z). Under these conditions we can appeal to Lemma <ref>, which gives that F_n|_K does indeed have a distributional limit, contradicting that F^a|_K ≠ F^b|_K. Thus F_n itself has some distributional limit F. Note that F ∈ [0, ∞)^𝕃 almost surely, since each {F_n(z)} is a tight sequence on [0, ∞).

The same proof works in the case when X_n is centred at a_n for a sequence a_n = o(n), since all the domination lemmas can be used in exactly the same way. Moreover, translation invariance follows by comparing the sequences X_n(a_n, n, n) and X_n(a_n + 2, n, n), since the difference between the center points is d_n = 2 = o(n).

§ BOUNDING RATES OF ADDING LATTICE POINTS

The goal of this section and the next one is to establish regularity properties of the limit F of random Young tableaux in order to apply the Edelman–Greene bijection. To do this, we will show that at every time t, the points in the limit tableau that are added before time t form a set of disjoint downward closed subsets of 𝕃, and that the limit F is still an order-preserving injection. The key to both of these proofs is the following proposition about bounding the rates of adding points in the finite tableau processes. Throughout this section we let X_n be the tableau process X_n(0, n, n).

There exist constants K_1 and K_2 such that for any z ∈ 𝕃 and for any t,

𝔼[sup_{s ≤ t} v(z, X_n(s))] ≤ K_1 t + K_2,

for all large enough n (how large we need to take n depends on the square z).

The cylindrical tableau process. To prove this proposition we introduce cylindrical Young diagrams and the cylindrical tableau process.
Define 𝒞(n), the discrete cylinder of size n, to be the set of equivalence classes of points (x, y) in {(x, y) ∈ 𝕃 : 1 ≤ y ≤ n − 1}, where (x, y) ∼ (x', y') if y = y' and x ≡ x' mod 2(n − 1). This cylinder has the following partial order inherited from the partial order on 𝕃. For (x, y), (x', y') ∈ 𝒞(n), (x, y) ≤ (x', y') if (x', y') ∼ (x'', y'') for some (x'', y'') ∈ 𝕃 with (x, y) ≤ (x'', y''). Thus we have a notion of downward closed sets in 𝒞(n), and notions of corners, hooks, and reverse hooks in 𝒞(n) ∖ A for any downward closed set A ⊂ 𝒞(n), by thinking of 𝒞(n) as a cylindrical Young diagram. As in a usual Young diagram, for any corner z ∈ 𝒞(n) ∖ A we can define the “hook probability” for z by

ℙ(𝒞(n) ∖ A, z) = (1/|𝒞(n) ∖ A|) ∏_{y ∈ R_z} (1 + 1/(h_y − 1)).

Now we define the cylindrical tableau process C(t) = C(n, r)(t) on 𝒞(n) with rate r as the continuous time Markov process C(t) where a square z is added to the configuration A at rate

v_C(z, A) = r · ℙ(𝒞(n) ∖ A, z).

Note that the hook probabilities in cylindrical tableaux do not sum to 1 as they do with staircase tableaux. This is not an issue, as we are only using the hook probabilities to define rates, not as actual probabilities.

The symmetry in the cylindrical process makes it easier to bound the expectation of the rate v_C(z, C(t)). We can then use the fact that the staircase tableau process can be coupled with an appropriately sped up cylindrical process in a way that allows rates in the staircase process to be controlled by the rates in the cylindrical process. This will prove Proposition <ref>.

The modified rate. Instead of working with v(z, A), we will replace it with a monotone increasing function w(z, A) called the modified rate. The modified rate w_C(z, A) is the rate of adding z to the configuration A with the cone S_z = {z' : z' ≥ z} above z removed.
More precisely,

w_C(z, A) = v_C(z, A ∖ S_z).

By the definition of v, the modified rate satisfies

w_C(z, A) = (r/|𝒞(n) ∖ (A ∪ S_z)|) ∏_{y ∈ R_z} (1 + 1/(f^z_y − 1)).

Here f^z_y is the hook length of y in the residual tableau corresponding to the state A ∖ S_z. We also define w_X(z, A) for a staircase tableau process X in the analogous way. Since v_C is monotone in A as long as z has not been added, we get that w_C is monotone in A (even if z has been added). Therefore to prove Proposition <ref> it suffices to prove the following.

For all large enough n, we have

𝔼[sup_{s ≤ t} w_X(z, X_n(s))] = 𝔼 w_X(z, X_n(t)) ≤ K_1 t + K_2.

We first need a lemma bounding the products in the hook probability formula.

Let A be a downward closed subset of 𝒞(n), and let β be the maximal second coordinate of squares in A. Then we have

∏_{y ∈ R_z} (1 + 1/(f^z_y(A) − 1)) < 2n(β + 1).

If we order the squares in the reverse hook of z by their second coordinate (s + 1 below), we get upper bounds on the individual factors. This gives an overall upper bound

∏_{s = 1}^{n−1} (1 + 1/(s + (s − β)^+))² = (β + 1)² ∏_{s = (β + 3)/2}^{n − 1 − β} (2s/(2s − 1))² < 2n(β + 1).

The last inequality is from Lemma <ref>. The same bound holds in the staircase tableau case.

Next, we bound w(z, A) for z at the bottom of the cylinder.

Let ℬ denote the bottom row of 𝒞(n). Then we have that

∑_{z ∈ ℬ} w(z, A) ≤ 48(|A| + n)

in the rate-n cylindrical tableau process.

We have only included the explicit constant 48 in the above proposition to streamline the proof. It is far from optimal for large n.

For z ∈ ℬ, define

D^z = ∏_{y ∈ R_z} (1 + 1/(f^z_y(A) − 1)).

It suffices to show that

∑_{z ∈ ℬ} D^z ≤ 12(n|A| + n²),

since |𝒞(n) ∖ (A ∪ S_z)| ≥ |S_z| ≥ \binom{n}{2}. To establish this bound, we will build the set A in |A| steps by starting with A_0 = ∅ and repeatedly adding a single square (α_i, β_i) to A_{i−1} to get A_i. We do this in a way such that A_i stays downward closed and the β_i are non-decreasing. Define the quantities D^z_i for A_i analogously to D^z.
By simple algebra,

∑_{z ∈ ℬ} D^z ≤ n · max_{z ∈ ℬ} D_0^z + ∑_{i=1}^{|A|} [max_{z ∈ ℬ} D^z_{i−1}] ∑_{z ∈ ℬ} |D^z_i/D^z_{i−1} − 1|.

By defining β_0 = 0, we have that β_i is the maximal y-coordinate of a square in A_i. The first term on the right is bounded above by 2n² by Lemma <ref>. By the same lemma,

max_{z ∈ ℬ} D^z_{i−1} ≤ 2n(β_{i−1} + 1) ≤ 2n(β_i + 1) ≤ 4nβ_i,

since β_i ≥ 1 for i ≥ 1. So it suffices to show that for any i ≥ 1, we have

∑_{z ∈ ℬ} |D_i^z/D_{i−1}^z − 1| ≤ 3/β_i.

To do this, recall that A_i = A_{i−1} ∪ {(α_i, β_i)}. Note that if for z ∈ ℬ we have z ≤ (α_i, β_i), then D_i^z/D_{i−1}^z = 1. Let ℬ' = ℬ ∖ {z : z ≤ (α_i, β_i)}. Then |ℬ'| = n − 1 − β_i. For any z ∈ ℬ', the reverse hooks R_z and R_{(α_i, β_i)} intersect at exactly two points, one on the right leg of R_z and one on the left. Call the y-coordinates of these points s_z and s'_z, respectively. As we move z, these intersection points exhaust the set R_{(α_i, β_i)}. More precisely, s and s' are both bijections from ℬ' to {β_i + 1, …, n − 1}. For z ∈ ℬ' we have

D_i^z/D_{i−1}^z = Q(s_z)Q(s'_z),

where

Q(s) = (1 + 1/(2s − β_i − 2))/(1 + 1/(2s − β_i − 1)) = (2s − β_i − 1)²/((2s − β_i − 2)(2s − β_i)).

Since s and s' are bijections, Cauchy–Schwarz gives

∑_{z ∈ ℬ'} Q(s_z)Q(s'_z) ≤ ∑_{s = β_i + 1}^{n−1} Q(s)².

By simple algebra, Q(s) ≥ 1 and

Q(s)² − 1 ≤ 3/(2s − β_i − 1)².

So the left hand side of (<ref>) is bounded above by

∑_{s = β_i + 1}^{n−1} 3/(2s − β_i − 1)² < 3/β_i.

Now we can embed the staircase tableau of size n into the cylinder 𝒞(n) by identifying the subset T(0, n) with its equivalence class in 𝒞(n). Thus we can talk about stochastic domination of a cylindrical tableau process over a staircase tableau process, and we can talk about domination of modified rates.

Let T be the time at which n²/4 particles have been added to the tableau process X_n(t). Let C_n(t) be a cylinder process on 𝒞(n) with rate 8n.
Then there exists a coupling so that for n ≥ 3,

X_n(t) ≤ C_n(t)

for all t ≤ T. Moreover, for any z in the bottom row of T(0, n), w_C(z, C_n(t)) ≥ w_X(z, X_n(t)) for all t ≤ T in this coupling.

To prove the existence of a coupling, it suffices to show that for any A, and any lattice point z that is both a corner of 𝒞(n) ∖ A and of T(0, n) ∖ A, we have v_X_n(z, A) ≤ v_C_n(z, A). From here we can appeal to Lemmas <ref> and <ref>, which can be proven in the exact same way if one of the processes is a cylinder process. Reverse hooks in 𝒞(n) are larger than reverse hooks in T(0, n), and for y ∈ 𝒞(n) ∩ T(0, n) we have h_y^X = h_y^C, so

∏_{y ∈ R^z_X} (1 + 1/(h^X_y − 1)) ≤ ∏_{y ∈ R^z_C} (1 + 1/(h^C_y − 1)).

Also,

|𝒞(n) ∖ A|/|T(0, n) ∖ A| ≤ 8

for all n ≥ 3. Combining the inequalities (<ref>) and (<ref>) proves the lemma. The relation among modified rates follows in the same way.

Now we can prove Proposition <ref> for z in the bottom row of T(0, n).

Let T' be the stopping time when (2t + 1)n squares have been added to the tableau process X_n. Since the times of adding squares are the points of a rate-n Poisson process, it is easy to check that

ℙ(T' < t) ≤ e^{−Ln}

for some universal constant L. Observe the naive bound w_X(z, X_n(t)) ≤ n for all n. We can now use Lemma <ref> together with the monotonicity of modified rates to get

𝔼[sup_{s ≤ t} v_X(z, X(s))] ≤ 𝔼 w_X(z, X_n(t)) ≤ 𝔼[w_C(z, C_n(t)) 1_{t < T'}] + ne^{−Ln} ≤ 𝔼 w_C(z, C_n(T')) + ne^{−Ln}.

Finally, using Proposition <ref> and the rotational symmetry of the cylinder process, we get that

𝔼 w_C(z, C_n(T')) + ne^{−Ln} ≤ 48((2t + 1) + 1) + ne^{−Ln} ≤ K_1 t + K_2,

completing the proof.

Finally, we show that for any fixed z' ≥ z ∈ 𝕃, for large enough n the modified rate for adding z' to X_n is always bounded by twice the modified rate for adding z. This extends Proposition <ref> to encompass all z ∈ 𝕃, and therefore completes the proof of Proposition <ref>.

Let z' ≥ z = (z_1, z_2), and for a downward closed subset A ⊂ T(0, n) let w(z, A) and w(z', A) be the modified rates in X_n.
Then

lim_{n → ∞} (sup_{A ⊂ T(0, n)} w(z', A)/w(z, A)) = 1.

Specifically, for all large enough n, we have that w(z', X(t)) ≤ 2w(z, X(t)) for all t.

We only prove this in the case z' = (z_1 + 1, z_2 + 1), as the general case follows by symmetry and induction. Observe first that the supremum in (<ref>) is at least 1 for every n, since w(z', T(0, n)) = w(z, T(0, n)) for all n. Also, it is easy to see that

|T(0, n) ∖ (A ∪ S_z)|/|T(0, n) ∖ (A ∪ S_{z'})| → 1

as n → ∞, since |S_z|/n² → 1/4 as n → ∞, but |S_z ∖ S_{z'}|/n → 1/2. Therefore to complete the proof it suffices to show that for any configuration A ⊂ T(0, n),

∏_{y ∈ R_{z'}} (1 + 1/(f^{z'}_y − 1)) ≤ ∏_{y ∈ R_z} (1 + 1/(f^z_y − 1)).

To prove this, let y' = (y_1 + 1, y_2 + 1) ∈ R_{z'}. It is clear that y = (y_1, y_2) must be in R_z. Moreover, if (x_1, x_2) ∈ H_y in the configuration T(0, n) ∖ (A ∪ S_z), then (x_1 + 1, x_2 + 1) ∈ H_{y'} in the configuration T(0, n) ∖ (A ∪ S_{z'}). This gives an injective mapping of R_{z'} into R_z that does not decrease hook length, proving (<ref>).

§ REGULARITY AND MIXING OF THE LIMIT F

In Theorem <ref> we showed that the inclusion functions of random staircase Young tableaux have a limit F. In this section we establish regularity properties and mixing of F using the results of Section <ref>.

F is almost surely injective.

Suppose not. Since there are only countably many pairs of points in 𝕃, there exists a pair (z_1, z_2) ∈ 𝕃² with ℙ(F(z_1) = F(z_2)) = δ > 0. Then for any ε > 0, there is some N such that ℙ(|F_n(z_2) − F_n(z_1)| < ε) ≥ δ/2 for all n ≥ N. Without loss of generality, we can remove the absolute values at the expense of a factor of 1/2 to get

ℙ(0 ≤ F_n(z_2) − F_n(z_1) < ε) ≥ δ/4.

Let T be the stopping time when z_1 is added to the process X_n. The probability of adding z_2 in the interval [T, T + ε] is bounded by the integral of the rate in that interval.
This gives that

ℙ(0 ≤ F_n(z_2) − F_n(z_1) < ε) ≤ εKt + ℙ(sup_{s ≤ t} v(z_2, X_n(s)) ≥ Kt) + ℙ(T > t − ε).

By Proposition <ref> we can choose K and t large enough, independently of ε, to make the last two terms on the right hand side arbitrarily small for all large enough n. Taking ε close to 0 then contradicts (<ref>).

For each z, the distribution of F(z) has no atoms.

The proof that F(z) has no atoms is the same as the proof that F is almost surely injective, except that instead of conducting the analysis at the stopping time T when the square z_1 is added, we conduct it at a (deterministic) time t.

The limit F is mixing. Recall that a measure μ is k-mixing with respect to a measure-preserving transformation τ if for any measurable sets A_1, …, A_k,

lim_{m_1, …, m_k → ∞} μ(A_1 ∩ τ^{−m_1}A_2 ∩ … ∩ τ^{−m_1 − m_2 − ⋯ − m_k}A_k) = ∏_{i=1}^{k} μ(A_i).

Note that the following proposition completes the proof of Theorem <ref> in the case u = 0.

The limit F is mixing of all orders with respect to the spatial shift τ.

We first present an outline of the proof that F is 2-mixing. Fix m, and consider two sets A^r and B^r of the form

A^r = ∏_{i ∈ T(0, m)} [0, a_i],  B^r = ∏_{i ∈ T(0, m)} [0, b_i],

and let

A = A^r × ∏_{i ∉ T(0, m)} [0, ∞),  B = B^r × ∏_{i ∉ T(0, m)} [0, ∞).

By Dynkin's π–λ theorem, it suffices to show that

ℙ(F ∈ A ∩ τ^{−K}B) → ℙ(F ∈ A)ℙ(F ∈ B) as K → ∞,

for any such A and B. To show this, we will approximate the value of F on A ∩ τ^{−K}B in two different ways. Figure <ref> illustrates the two approximations used. For the first approximation, take two disjoint tableaux T(0, ⌊K/2⌋) and T(−K, ⌊K/2⌋) and run independent, rate-⌊K/2⌋ tableau processes Y_1 and Y_2 on each of these tableaux. Let G_{K,1} and G_{K,2} be the inclusion functions for Y_1 and Y_2. For K ≫ m, convergence of G_{K,1} and G_{K,2} to F implies that ℙ(G_{K,1} ∈ A) is very close to ℙ(F ∈ A), and similarly for G_{K,2} and τ^{−K}B. For the second approximation, take n ≫ K, and let X_n be the rate-n tableau process on T(0, n) with inclusion function F_n.
Since n ≫ K, the convergence of F_n to F implies that ℙ(F_n ∈ A ∩ τ^{-K} B) is close to ℙ(F ∈ A ∩ τ^{-K} B), and that ℙ(F_n ∈ A) and ℙ(F_n ∈ τ^{-K} B) are close to ℙ(F ∈ A) and ℙ(F ∈ B) respectively. Finally, we can use the domination Lemma <ref> to show that a small speed-up of X_n dominates the union of the independent processes Y_1 and Y_2 up to a large stopping time. This in turn implies that up to a small error,

ℙ(F_n ∈ A ∩ τ^{-K} B) < ℙ(G_{K,1} ∈ β A) ℙ(G_{K,2} ∈ β τ^{-K} B),

where β is the value of the speed-up. Combining this with our previous relationships between probabilities implies that ℙ(G_{K,1} ∈ A) ℙ(G_{K,2} ∈ τ^{-K} B) must be very close to ℙ(F_n ∈ A ∩ τ^{-K} B). Passing to the limit in n and then K then proves that F is 2-mixing, noting that ℙ(G_{K,2} ∈ τ^{-K} B) → ℙ(F ∈ B) as K →∞ by spatial stationarity. The general case can be proven using the same method, with the main difference being that in that case, we approximate the limit F with k disjoint independent tableau processes instead of 2. For simplicity, we only prove 2-mixing below.

[Figure: The two approximating processes used in the proof of Proposition <ref>. The first approximation pairs two disjoint processes on T(0, ⌊K/2⌋) and T(-K, ⌊K/2⌋) for K ≫ m, and the second approximation takes a tableau process on T(0, n) for n ≫ K.]

The proof exactly follows the outline of what is stated above, but with precise bookkeeping regarding the error terms. With notation as in the outline, first note that it suffices to show that for large enough n,

|ℙ(F_n ∈ A ∩ τ^{-K} B) - ℙ(G_{K,1} ∈ A) ℙ(G_{K,2} ∈ τ^{-K} B)| < ε_K,

where ε_K → 0 as K →∞. To see that (<ref>) implies (<ref>), let |ℙ(F ∈ A) ℙ(F ∈ B) - ℙ(G_{K,1} ∈ A) ℙ(G_{K,2} ∈ τ^{-K} B)| = δ_K. Taking n →∞ in (<ref>) and replacing G_{K,1} and G_{K,2} with F, we get that

|ℙ(F ∈ A ∩ τ^{-K} B) - ℙ(F ∈ A) ℙ(F ∈ B)| ≤ ε_K + δ_K.

We can pass to the limit in F_n since A ∩ τ^{-K} B is a set of continuity of F by Corollary <ref>.
Moreover, δ_K → 0 as K →∞ since A and B are sets of continuity of F by the same corollary and using the spatial stationarity of F. Now let γ > 0, define

β = (K + 1)/((1 - γ)(K - 4)),

and let

α_{K,γ} = max{ ℙ(F(i) ∈ [c, βc]) : c ∈ {a_i, b_i} }.

We have chosen β in a way so that if n is large enough, then the tableau process X^β_n on the tableau T(0, n) with speed βn stochastically dominates the independent coupling of the tableau processes Y_1 and Y_2 up to time T_γ. Here T_γ is the time when either ⌊K/2⌋(⌊K/2⌋ - 1)/2 squares have been added to Y_1 or ⌊K/2⌋(⌊K/2⌋ - 1)/2 squares have been added to Y_2. This can be seen by comparing with the condition in Lemma <ref>. Therefore letting M = max{a_i, b_i}, we have

ℙ(G_{K,1} ∈ A) ℙ(G_{K,2} ∈ τ^{-K} B) < ℙ(F_n/β ∈ A ∩ τ^{-K} B) + ℙ(T_γ < M).

Moreover, we have that for all large enough n,

ℙ(F_n/β ∈ A ∩ τ^{-K} B) = ℙ(F_n ∈ β A ∩ τ^{-K} β B) < ℙ(F_n ∈ A ∩ τ^{-K} B) + 2m(m-1) α_{K,γ}.

Here β A = {β x : x ∈ A}, and similarly for B. For the above inequality to hold, n just needs to be large enough so that max{ ℙ(F_n(i) ∈ [c, βc]) : c ∈ {a_i, b_i} } < 2 α_{K,γ}. Combining the above two inequalities, we get that

ℙ(G_{K,1} ∈ A) ℙ(G_{K,2} ∈ τ^{-K} B) < ℙ(F_n ∈ A ∩ τ^{-K} B) + 2m(m-1) α_{K,γ} + ℙ(T_γ < M).

We can similarly get that

ℙ(G_{K,1} ∈ A^c) ℙ(G_{K,2} ∈ τ^{-K} B^c) > ℙ(F_n ∈ A^c ∩ τ^{-K} B^c) - 2m(m-1) α_{K,γ} - ℙ(T_γ < M).

Finally, let η_K = max{ |ℙ(G_{K,1} ∈ A) - ℙ(F ∈ A)|, |ℙ(G_{K,2} ∈ τ^{-K} B) - ℙ(F ∈ τ^{-K} B)| }. For large enough n, we have that

|ℙ(F_n ∈ A) - ℙ(G_{K,1} ∈ A)| < 2η_K,  |ℙ(F_n ∈ τ^{-K} B) - ℙ(G_{K,2} ∈ τ^{-K} B)| < 2η_K,

since A and B are sets of continuity of F. This similarly holds for A^c and B^c. Therefore

|(ℙ(G_{K,1} ∈ A) ℙ(G_{K,2} ∈ τ^{-K} B) - ℙ(G_{K,1} ∈ A^c) ℙ(G_{K,2} ∈ τ^{-K} B^c)) - (ℙ(F_n ∈ A ∩ τ^{-K} B) - ℙ(F_n ∈ A^c ∩ τ^{-K} B^c))| < 4η_K.

Combining this bound with (<ref>) and (<ref>) gives that for all large enough n,

|ℙ(G_{K,1} ∈ A) ℙ(G_{K,2} ∈ τ^{-K} B) - ℙ(F_n ∈ A ∩ τ^{-K} B)| < 4m(m-1) α_{K,γ} + 2ℙ(T_γ < M) + 4η_K.

Now note that for any fixed value of ε > 0, as K →∞, we can choose a sequence γ_K → 0 such that ℙ(T_{γ_K} > M) < ε. With this sequence of γ_Ks, α_{K,γ_K} → 0 as K →∞.
Noting also that η_K → 0 as K →∞, this shows that the left hand side of (<ref>) tends to 0 as n →∞.

Inclusion times for squares in the bottom row

We can also use the rate bound to get a lower bound on the probability that it takes a long time to add any given square in the bottom row. Note that by spatial stationarity of the limit F, it suffices to prove this for the square z_0 = (1, 1). We can then combine this with the mixing property of F to show that at any time infinitely many squares have not been added. The idea here is to modify the process X_n to create a new process Y_n. Y_n will be X_n, but with the hook probabilities modified so that Y_n never adds z_0. We will then show that X_n and Y_n can always be coupled so that at any time t they are equal with positive probability P independent of n.

The construction of Y_n. Y_n is a Markov process with the same state space as the tableau process X_n, namely:

{ A ⊆ T(0, n) : A is downward closed }.

If A = Y_n(t), and z is a corner of T(0, n) ∖ A with z ≠ z_0, then we add the point z to Y_n with rate

n (T(0, n) ∖ A, z) / (1 - (T(0, n) ∖ A, z_0)).

In words, the rates in Y_n for squares that can be added are given by the rates in X_n times

1/(1 - (T(0, n) ∖ A, z_0)).

Note that this only makes sense as long as there are squares other than z_0 that can be added to Y_n. Once z_0 is the only square left that can be added, we can define Y_n so that nothing happens past that point. We first show that Y_n is dominated by a sped-up version of X_n. Note that the total rate of jumps from any non-terminal state in Y_n is exactly n.

Let M < n(n-1)/128, and let T_M be the stopping time when M squares have been added to Y_n. Then letting X^2_n = X(0, n, 2n) be a tableau process on T(0, n) with speed 2n, we have that Y_n ⪯ X^2_n up to time T_M.

Suppose that A is some configuration with fewer than n(n-1)/128 points added. The maximum height of A is bounded by n/8 - 1, since any square of height n/8 - 1 lies above a triangle with n(n-8)/128 squares.
By the remark following Lemma <ref> this implies abound on the hook probabilities, namely(T(0, n) ∖ A, z_0) ≤1/n2 - M 2 n n/8 < 1/2for n ≥ 3. Then by (<ref>) we have domination of the rates of Y_n by those in X^2_n. Lemmas <ref> and <ref> (more precisely, the proofs of those lemmas,) then imply stochastic domination.Now we couple X_n and Y_n to bound the probability of adding z_0. There exist constants K and L such that for any t>0{F(z_0) > t }≥ e^-Kt -L t^2.Couple X_n and Y_n so that they add squares at the same times (we can do this since the total rate of exiting non-absorbing states in X_n and Y_n is n), and add the same squares until the time when X_n adds square 0. Now let M ∈, and for m ∈ let T_m be the stopping time when the mth square is added to X_n. Letbe the set of maximal sequences {A_0 = ∅ A_1A_M} of downward closed subsets of T(0,n)such that z_0 ∉ A_M.Then we have(X_n(t) = Y_n(t)) ≥(T_M ≥ t)∑_{A_m}∈(X(T_m) = A_mm ≤ M)Using the transition probabilities for Y_n the sum above can be written as∑_{A_m}∈(Y(T_m) = A_mm ≤ M) ∏_m=1^M (1 - ( T(0, n)A_m-1, z_0 )).We may write this as an expectation∏_m=1^M(1 - ( T(0, n)Y_n(T_m-1) ,z_0 ))≥(1 - ( T(0, n)Y_n(T_M), z_0 ))^M.Theinequality follows since the probabilities are monotone.By Jensen's inequality we get the lower bound(1 - ( T(0, n)Y_n(T_M), z_0 ))^MWe use Lemma <ref> to bound the expectation above. AssumeM ≤n(n-1)/128, then X^2_n(T_M) stochastically dominates Y_n(T_M), that is in some coupling X^2_n(T_M)≥ Y_n(T_M), and since z_0∉ Y_n(T_M), we have X^2_n(T_M) S_z_0≥ Y_n(T_M), where S_z_0 is the set of squares that are greater than z_0 in the partial order. By monotonicity of the rates we have( T(0, n)Y_n(T_M), z_0 ) ≤( T(0, n)(X^2_n(T_M)∖ S_z_0), z_0 ).We can bound the rates in X^2_nS_z_0 at some fixed time s=2M/n by Proposition <ref> from the previous section. 
Here note that the rate of adding z_0 to X^2_nS_z_0 is the modified rate of adding z_0 to X_n.We get the upper bound( T(0, n)(X^2_n(s)∖ S_z_0), z_0 ) + (T_M>s) ≤K_1/n+K_2 M/n^2 + e^-K_3n,where bound on (T_M > s) follows from the tail probabilities of the Poisson distribution. Monotonicity of the rates implies that this is also an upper bound for ( T(0, n)(X^2_n(T_M)∖ S_z_0), z_0). Putting everything together and setting M=2tn, we get for large enough n(X_n(t) = Y_n(t)) ≥(T_M > t)(1 - K_1+2 K_2 t/n-e^-K_3n) ^2tn.Letting n→∞ gives that lim_n →∞(X_n(t) = Y_n(t)) ≥ e^-Kt - Lt^2,for some constants L and K. Using that F(z_0) has a continuous distribution (Corollary <ref>) then finishes the proof. The mixing of F combined with Lemma <ref> implies that at any time, a bi-infinite sequence of squares has not been added. This is a direct consequence of the fact that mixing implies ergodicity.For any time t, there are almost surely infinitely many values of x > 0 and infinitely many values of x < 0 such that F(x, 1) > t.Sorting Networks at the CenterNow we are finally in a position to prove the existence of the local limit of random sorting networks at the center. Letbe the space of swap functions. Defineto be the set of all functions G: → [0, ∞] such that the following two conditions hold. i) Let B = {z ∈ : G(z) ≠∞}. Then G|_B is order-preserving and injective.ii) For any t, we have thatG(x, 1) > t for infinitely many x> 0 and x < 0.We will define a map EG: → which will generalize the Edelman-Greene bijection. To do this we first define swap functions EG_t(G) for every t > 0. These swap functions will be EG(G) defined up to time t > 0. Consider the set of pointsA = {z : G(z) ≤ t}.Since G(x, 1) > t for infinitely many x> 0 and x < 0 and G is order-preserving on , A breaks down into infinitely many finite downward closed sets A_i such that each A_i lies in some T(ℓ_i, k_i) and the sets T(ℓ_i, k_i) are disjoint. 
We can then define the swap function on each T(ℓ_i, k_i) individually up to time t using the regular Edelman-Greene bijection on that diagram, since these swap functions don't interact before time t and G|_B is order-preserving and injective.Now define the process EG(G) by letting EG(G)(x, r) = EG_t(G)(x, r),where t is any time greater than r. This is well-defined since for r < s < t, EG_t(G)(x, r) = EG_s(G)(x, r). It is easy to see that EG is continuous on , by checking that EG_t is continuous for all t. This is clear since if G_n → G in , for any subset T(ℓ_i, k_i), eventually G_n will be identically ordered to G on T(ℓ_i, k_i) and so the ordering of the swaps given by the Edelman-Greene bijection will be the same for G_n and G on T(ℓ_i, k_i). Moreover, the times at which these swaps occur converge in the limit. This implies convergence of both the cadlag paths of the permutation EG(G_n)(·, x) and the cadlag paths of the inverse permutation, thus showing that EG is continuous.Finally, by Corollary <ref> and Proposition <ref>, we know that our tableau process limit F ∈ almost surely, so U_n = EG(F_n) → U = EG (F) in distribution as well by the continuity of the map EG. This proves convergence of random sorting networks at the center to a swap process U. The only thing left to do to prove Theorem <ref> when u=0 is to show that the limit EG(F) has time-stationary increments, as the spatial stationarity and mixing follow from the spatial stationarity and mixing of F.U has time-stationary increments. Namely, the distribution of the process(U(·,s)^-1U(·,s+t)), t≥ 0) does not depend on s.The sequence of transpositions {π_i_1, π_i_k} in a random sorting network is equal in law to the sequence {π_i_ℓ, π_i_ℓ + k -1}.To prove this time stationarity, note that if we remove the first swap π_i_1 from a sorting network, we can get another sorting network by adding the swap π_n - i_1 to the end of the sorting network. 
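For small n this rotation property can be verified exhaustively; the toy check below (illustrative only, not the notation of this paper) enumerates the sorting networks on n = 3 elements by brute force and confirms that dropping the first swap π_{i_1} and appending π_{n - i_1} yields another sorting network:

```python
from itertools import product

def is_sorting_network(swaps, n):
    # Apply adjacent transpositions (k swaps positions k, k+1) to the identity;
    # a sorting network must produce the reverse permutation.
    perm = list(range(1, n + 1))
    for k in swaps:
        perm[k - 1], perm[k] = perm[k], perm[k - 1]
    return perm == list(range(n, 0, -1))

n = 3
N = n * (n - 1) // 2  # number of swaps in a sorting network
networks = [s for s in product(range(1, n), repeat=N) if is_sorting_network(s, n)]

for s in networks:
    rotated = s[1:] + (n - s[0],)  # drop first swap, append pi_{n - i_1}
    assert is_sorting_network(rotated, n)
```

For n = 3 the two networks, (1, 2, 1) and (2, 1, 2), are exchanged by this rotation.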
This result was first proved in <cit.>. We use this idea to extend the process U_n, which only completes n(n-1)/2 swaps at the first n(n-1)/2 times of a rate-n Poisson process Π_n, to a process U_n^*, which completes swaps at every time in Π_n. Let the first N = n(n-1)/2 swaps in U_n^* be as in U_n, and then recursively define the kth swap in U_n^* to be equal to π_{n-j}, where π_j is the (k - n(n-1)/2)th swap in U_n^*, for k > N. Then U_n^* is a time-stationary process, and U_n^*(x, t) = U_n(x, t) for all t ≤ T_n, where T_n is the n(n-1)/2th point in Π_n. Since T_n →∞ as n →∞, and U_n ⇒ U, we get U_n^* ⇒ U as well. Finally, since each U_n^* has stationary increments, U must have stationary increments as well. Putting this all together, we obtain Theorem <ref> in the u = 0 case.

Let a_n be a sequence of integers with a_n = o(n). Let U_n be the swap process defined by

U_n(x, t) = σ^n_{⌊ nt ⌋}(a_n + x) - a_n,

where σ^n is an n-element random sorting network. Then U_n ⇒ U, where U is a swap process that is stationary and mixing of all orders with respect to the spatial shift, and has time-stationary increments.

The Local Limit Outside the Center

In this section, we prove that the local limit of random reverse standard staircase Young tableaux exists at distance ⌊ un ⌋ + o(n) outside the center. This will immediately imply the existence of the local limit outside the center for sorting networks via the Edelman-Greene map EG in Section <ref>.

Let u ∈ (-1, 1), a_n = o(n), and let G_n be the inclusion functions for the sequence of tableau processes X_n(⌊ un ⌋ + a_n, n, n). Then G_n ⇒ F^u = (1/√(1 - u^2)) F, where F is the limit when u = 0.

We will assume that a_n = 0 throughout, as it is easy to use domination lemmas to conclude Theorem <ref> for general a_n from this case.
The basic idea of the proof is as follows.By using the domination lemmas in Section <ref>, it is easy to see that any subsequential limit G at a distance ⌊ un ⌋ outside the center must be stochastically dominated by F^u, so we just need to show domination in the opposite direction. For this, we show that the expected heights in the tableau process corresponding to F^u are greater than expected heights in the tableau process corresponding to G at every location and every time. Note that it is possible to get domination in the opposite direction for almost every value of u by comparing the number of squares in a tableau process at time t with the expected number of squares in each of processes shifted by u, integrated over all u ∈ (-1, 1). However, this approach only proves Theorem <ref> for almost every u. To prove the theorem for any u, we take the following approach. By considering the inclusion functions G_n of the shifted tableau processes as elements of = [0, ∞]^ we have a setof subsequential limits of G_n by compactness. Consider largest and smallest elements inin the stochastic ordering on inclusion functions. Such elements exist sinceis closed and the space of probability measures onis compact. Call G ∈ a limsup if for any G' ∈, G'G if and only if G' = G. Similarly, we define a liminf into be any G ∈ such that for G' ∈, G'G if and only if G' = G. We show that these elements are translation invariant, and that any translation invariant element ofhas expected heights less than those of F^u. Therefore any limsup or liminf inmust be F^u. As any element inmust lie between a liminf and a limsup, this allows us to conclude that = { F^u }.Shifted tableau processes. We introduce new notation for the tableau processes used in this section, using Y instead of X to distinguish from centered tableau processes. For a fixed value of u ∈ (-1, 1), define Y^K_n(t) to be the rate Kn tableau process on the diagram T(⌊ un ⌋, n).When K = 1, we omit the superscript. 
To establish the translation invariance of liminfs and limsups, we need a basic domination lemma involving these processes.Fix u ∈ (-1, 1), and choose ℓ so that for every n,T(⌊ un ⌋, n)T(⌊ u(n +ℓ) ⌋+ 2, n + ℓ),T(⌊ un ⌋ + 2, n)T(⌊ u(n+ℓ) ⌋, n + ℓ).Let T_n be the time when n2/2 squares have been added to Y_n, and let θ_n = n+4ℓ/n - 1. Then for all large enough n,Y_n+2ℓ^θ_n^2τ Y_n+ℓ^θ_n Y_n Y_n+2ℓ^θ_n^2 Y_n+ℓ^θ_n Y_n, where all stochastic domination holds up to time T_n.As before, τ is the spatial shift. Thus τ Y^K_n(t) is exactly Y^K_n shifted by 2 units to the right so that it lives on the diagram T(⌊ un ⌋ + 2, n). The essence of this lemma is that we can get domination of the shifted process τ Y_n_1 over Y_n by letting n_1 be slightly larger than n, and slightly speeding up τ Y_n_1. The precise value of the speed-up θ_n is not important here, only that θ_n → 1 as n →∞.We just prove that τ Y_n+ℓ^θ_n Y_n up to time T_n, as the rest of the inequalities follow using the same argument. By Lemmas <ref> and <ref>, we just need to show that if Y_n and τ Y_n+ℓ^θ_n are in the same configuration A, and z is a corner of both T(⌊ un ⌋, n)A and T(⌊ u(n + ℓ) ⌋ + 2, n+ℓ)A, that v_n(z, A) < v_n+ℓ(z, A), where v_n and v_n+ℓ refer to rates in Y_n and Y_n+ℓ^θ_n, respectively. To see this, observe that for any set A of cardinality at most n2/2,v_n+ℓ(z, A)/v_n(z, A) = θ_n n+ℓ/nT(⌊ un ⌋, n) ∖ A/T(⌊ u(n + ℓ) ⌋ + 2, n + ℓ) ∖ A∏_y ∈ R_z^n+ℓ∖ R_z^n(1 + 1/h_y^n+ℓ -1) ≥(n+ 4ℓ)(n+ℓ)/n(n-1)n(n-1)/2(n+ℓ)(n+ℓ-1) - n(n-1) > 1. Now we can characterize liminfs and limsups in .Suppose G ∈ is a limsup (or a liminf). Then G is translation invariant.Throughout this proof, we let G^K_n be the inclusion function of Y^K_n(t). Let G_n(i)→ G for some liminf G ∈ (the case for G a limsup is similar). Note that by Lemma <ref>, G'G for any subsequential limit G' of G^θ_n(i)^2_n(i) + 2ℓ.By passing to the limit, we remove any issues with the stopping time T_n from Lemma <ref> since T_n ∞ as n →∞. 
Such limits exist by compactness of .However, since θ_n^2 → 1 as n →∞, G' is also a subsequential limit of G_n(i) + 2ℓ, so since G is a liminf, G' = G. Therefore G^θ_n(i)^2_n(i) + 2ℓ G. Now again by Lemma <ref>, we have that G^θ_n(i)^2_n(i) + 2ℓG^θ_n(i)_n(i) + ℓG_n(i)G^θ_n(i)^2_n(i) + 2ℓG^θ_n(i)_n(i) + ℓ∘τG_n(i),where G_* = G ∧ T_n for each of the inclusion functions G_* corresponding to the tableau processes in (<ref>). Note here that if G is the inclusion function for the process Y, then G ∘τ is the inclusion function for the shifted process τ Y. By the squeeze theorem, and the facts that θ_n → 1 and T_n ∞, this implies that both G_n(i) + ℓ G and G_n(i) + ℓ∘τ G, allowing us to conclude that G∘τ G. We now aim to show that every translation-invariant element G ∈ is the rescaled central limit F^u by comparing heights. For any J ∈, x ∈ 2 + 1 and t ∈ [0, ∞), define the height functionh(J, x, t) = {z = (z_1, z_2): z_1 = xx + 1J(z) < t }.We first prove the following lemmas about the expected heights in F. h(F, x, t) is finite for all t ∈ [0, ∞) and x ∈ 2.Note that F_nF, and thath(F_n, x, t)h(F, x, t)for all t and x since F has no atoms. Recall also that the tableau processes X_n are dominated by a sped-up cylinder process C(n, 8n) up to the stopping time T_n when n^2/4 squares have been to X_n.Since T_n→∞ in probability as n→∞, we also have h(F_n ,x, t∧ T_n)h(F, x, t).By the symmetry of the cylinder, the expected height at x at time t in C(n, 8 n) is 8t, so by Fatou's lemma, h(F, x, t) ≤lim inf_n →∞ h(F_n, x, t∧ T_n) ≤ 8 t.Let T^n_t to be the stopping time when ⌊ nt ⌋ squares have been added to the centered tableau process X_n = X(0, n, n). There exists a subsequence {n_i : i ∈} such thath(F_n_i, x, T^n_i_t) → h(F, x, t). We find a dominating “infinite tableau process" for the sequence of tableau processes X_n. 
We can find an increasing sequence {n_i : i ∈} and a decreasing sequence {δ_i : i ∈} such that for all i, the tableau process Z_i = X(0, n_i, (1 + δ_i)n_i) stochastically dominates the process Z_i-1 up to time T^n_i-1_t, and such that∏_i=1^∞(1 + δ_i) < ∞.Finding such sequences can easily be done by iteratively choosing n_1, n_2 andappropriately in Lemma <ref> (noting that that domination in that lemma is up to the time when n_1^2 squares have been added, so we can letbecome arbitrarily small for large n_1 and still have domination up to time T^n_1_t). Then letting J_i be the inclusion function for Z_i, we haveJ_i = ∏_j=1^i(1 + δ_j)^-1F_n_i J = ∏_j=1^∞ (1 + δ_j)^-1 F.J_i is a monotone decreasing sequence in the stochastic ordering. Moreover, F_n_i J_iJso h(F_n_i, x, t) h(J, x, t), for every x and t. Finally, heights in J have finite expectation by Lemma <ref> as J is a sped-up version of F. Therefore the dominated convergence theorem, h(F_n_i, x, t) → h(F, x, t).As |h(F_n_i, x, t) -h(F_n_i, x, T^n_i_t)| → 0 as n →∞, this completes the proof.In order to compare the heights in F^u and G we will need to translate the tableau processes to swap processes on the integers. The reason for doing this is that we can relate the expected height at position x to the expected number of swaps at position x, and the expected number of swaps at any position in a sorting network is given by the following theorem from <cit.>. Letbe a random sorting network on n particles given by a sequence of adjacent transpositions {π_k_1, π_k_N}, and let a_n be a sequence of positive integers with 2a_n/n - 1→ u ∈ (-1, 1). Thenn(k_1 = a_n) →4/π√(1 - u^2)({i ≤ Cn : k_i = a_n}) →4C/π√(1 - u^2).We use this theorem to prove the following lemma about expected height in F. lim_t → 0 h(F, 0, t)/t≥4/π. By Lemma <ref>, we can first replace h(F, 0, t) by lim_n →∞ h(F_n_i, 0, T^n_i_t). 
Now we replace h(F_n_i, 0, t) by the strictly smaller quantity (F_n_i(z_0) < T^n_i_t) where z_0 = (1,1), and note that(F_n_i(z_0) < T^n_i_t) ≥1 - (1 - p_i/n_i)^⌊ n_it ⌋where p_i = v_X_n_i (z_0, ∅). We can make this replacement since the rate of adding the square z_0 is monotone increasing in time. Now by Theorem <ref>, p_i →4/π as i →∞, so we havelim_t → 0 h(F, 0, t)/t≥lim_t → 01 - e^-4t/π/t = 4/π,as desired.For x ∈ 2 + 1, we now define s(J, x, t) to be the number of swaps at location x before time t in the swap process EG(J), where the map EG is as in Section <ref>.We then have the following relationships between heights and swaps. Let x ∈ 2, t ∈ [0, ∞), and let G ∈ be translation invariant. Then G ∈ and h(G, x, t) =s(G, x, t) ( is defined at the beginning of Section <ref>). We also have that h(F, x, t) =s(F, x, t). We can use the bound in Lemma <ref> to conclude that GF, thus implying that at any time t, there is a bi-infinite sequence of squares in the bottom row that have not been added to G. Moreover, there exists a constant C such that for all large enough n the modified rates in each G_n are bounded up to the stopping time T when n^2/4 squares have been added by C times the modified rate in F_n. This allows us to conclude that G is injective, by the proof of Proposition <ref>. Therefore G ∈.Thus we can apply the Edelman-Greene map EG from Section <ref> to G, giving a translation-invariant swap process EG(G) and allowing us to define s(G, x, t) for all t. We now show that h(G, x, t) =s(G, x, t). By translation invariance, it suffices to consider the case x = 1. For each square z ∈, let π(z) ∈ 2+ 1 be the location of the swap in EG(G) corresponding to the square z. Since only squares z' ≥ z_0 can have π(z) = 1, we have s(G, 1, t)= ∑_z' ≥ z_0(F(z') < t π(z') = 1) = ∑_i=1^∞∑_j ∈ [1 - (i-1),1 + (i-1)](F(j, i) < t π(j, i) = 1) = ∑_i=1^∞∑_j ∈ [1 - (i-1),1 + (i-1)](F(q_i, i) < t π(q_i, i) = 1+ q_i - j).Here q_i is either 1 or 2 depending on the parity of i. 
The second equality is just rearranging terms in the sum and the final equality comes from the translation invariance of the swap process. Sinceπ(q_i, i) ∈ [q_i - (i-1),q_i + (i-1)],we have∑_j ∈ [- i , i ](F(q_i, i) < t π(q_i, i) = 1 - j) = (F(q_i, i) < t),and so the final line of (<ref>) is equal to h(G, 1, t). The exact same proof works for F.Suppose G ∈ is translation invariant. Then G 1/√(1 - u^2)F. First define= {f: 2 + 1 ×_+ →},and defineH: → by H(J) = h(J, ·, ·). Note that H a strictly decreasing function with respect to the pointwise orders on and . As every G ∈ satisfies GF^u, to show that F^uG it suffices to show that h(G, x, t) ≤ h(F^u, x, t) for all x and t. By Theorem <ref>, s(G_n, 0, t)→(4/π√(1 - u^2))t.Then by Fatou's Lemma and Lemma <ref> we have that h(G, 0, t) =s(G, 0, t) ≤lim_n →∞ s(G_n, 0, t)=(4/π√(1 - u^2)) t. Now by the time-stationarity of the increments in the limit EG(F) (Proposition <ref>), we have that s(F, 0, t) is linear in time. Therefore h(F, 0, t) must be linear in time as well since it is equal tos(F, 0, t) by Lemma <ref>. Combining this with Lemma <ref> gives that h(F, 0, t) = K t for some K ≥4/π, so h( F^u, 0, t) ≥(4/π√(1 - u^2))t,which combined with (<ref>) gives the desired result. We can finally combine Propositions <ref> and <ref> to conclude the convergence of the processes G_n to F^u, which completes the proof of Theorem <ref>. This in turn completes the proofs of Theorems <ref> and <ref>. Proposition <ref> also allows us to conclude the following proposition about expected heights in F, and therefore swaps in EG(F). For any x and t, we haveh(F, x, t) =s(F, x, t) = 4/πt.Appendix G_n is tight, so it has subsequential limits in distribution. Suppose that G^1 and G^2 are two different subsequential limits of G. Then there are subsequences G_(i) G^1 and G_β(i) G^2. 
Without loss of generality, we can assume that there are some numbers a_1,a_m> 0 such that (G^1 ∈∏_k=1^m[0, a_k]) - (G^2 ∈∏_k=1^m[0, a_k]) > 0.Then there is some δ > 0 such that (G^1 ∈∏_k=1^m[0, a_k+ δ)) - (G^2 ∈∏_i=1^m[0, a_k + 2 δ]) > 0,By weak convergence, we get the following chain of inequalities.lim sup_i →∞(G_β(i)∈∏_k=1^m[0, a_k + 2 δ])≤(G^2 ∈∏_k=1^m[0, a_k + 2 δ]) < (G^1 ∈∏_k=1^m[0, a_k + δ)) ≤lim inf_i →∞(G_(i)∈∏_k=1^m[0, a_k + δ)). However, letting = δ/a + δ where a = max_k a_k, for any large enough i there exists some J such that for all j ≥ J,( G^_(i)∈∏_k=1^m[0, a_k + δ))≤((1 + ) G^_(i)∈∏_k=1^m[0, a_k + 2δ]) ≤(G_β(j)∈∏_k=1^m[0, a_k + 2δ] ),since (1+)G^_(i) G_β(j) for all large enough j by assumption. Thuslim sup_i →∞( G_β(i)∈∏_k=1^m[0, a_k + 2δ] ) ≥lim inf_i →∞(G_(i)^∈∏_k=1^m[0, a_k + δ)),which contradicts (<ref>), since lim inf_i →∞(G^_(i)∈∏_k=1^m[0, a_k + δ))= lim inf_i →∞(G_(i)∈∏_k=1^m[0, a_k + δ)).Thus G^1 = G^2 for any two subsequential limits of G_n, so G_n has a distributional limit. Acknowledgements. Omer Angel was supported in part by NSERC. Duncan Dauvergne was supported by an NSERC CGS D scholarship. Bálint Virág was supported by theCanada Research Chair program, the NSERC Discovery Accelerator grant, the MTA Momentum Random Spectra research group, and the ERCconsolidator grant 648017 (Abert). We would also like to thank the Banff International Research Station for hosting a focussed research group that initiated this research.
http://arxiv.org/abs/1702.08368v4
{ "authors": [ "Omer Angel", "Duncan Dauvergne", "Alexander E. Holroyd", "Bálint Virág" ], "categories": [ "math.PR", "math.CO", "60C05, 05E10, 68P10" ], "primary_category": "math.PR", "published": "20170227164446", "title": "The Local Limit of Random Sorting Networks" }
http://arxiv.org/abs/1702.08442v3
{ "authors": [ "Robert Connelly", "Steven J. Gortler", "Evan Solomonides", "Maria Yampolskaya" ], "categories": [ "math.MG", "51Kxx, 51Fxx, 51N20, 52Cxx, 52C26" ], "primary_category": "math.MG", "published": "20170225173718", "title": "The Isostatic Conjecture" }
Dipartimento di Fisica e Astronomia, Università di Firenze, via G. Sansone 1, 50019 Sesto Fiorentino, Firenze, Italy
INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125, Firenze
APC, Astroparticules et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10 rue Alice Domon et Léonie Duquet, 75205 Paris Cedex 13, France
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China
Department of Astronomy, School of Physics, Peking University, Beijing 100871, China
Max-Planck Institut für extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching bei München, Germany

We studied the spectra of six z ∼ 2.2 quasars obtained with the X-shooter spectrograph at the Very Large Telescope. The redshift of these sources and X-shooter's spectral coverage allow us to cover the rest-frame spectral range ∼1200-7000 Å for the simultaneous detection of optical and ultraviolet lines emitted by the Broad Line Region. Simultaneous measurements, avoiding issues related to quasar variability, help us understand the connection between different Broad Line Region line profiles generally used as virial estimators of black hole masses in quasars. The goal of this work is to compare the emission lines from the same object to check the reliability of Hα, Mg II and C IV with respect to Hβ. Hα and Mg II linewidths correlate well with Hβ, while C IV shows a poorer correlation, due to the presence of strong blueshifts and asymmetries in the profile. We compare our sample with the only other two whose spectra were taken with the same instrument, and for all examined lines our results are in agreement with the ones obtained with X-shooter at z ∼ 1.5-1.7.
We finally evaluate C III] as a possible substitute of C IV in the same spectral range and find that its behaviour is more coherent with those of the other lines: we believe that, when a high-quality spectrum such as the ones we present is available and a proper modelization with the Al III and Si III] emissions is performed, the use of this line is more appropriate than that of C IV if the latter is not corrected for the contamination by non-virialized components.

Simultaneous detection and analysis of optical and ultraviolet broad emission lines in Quasars at z∼2.2
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, under program 086.B-0320(A).

S. Bisogni<ref>,<ref> (E-mail: susanna@arcetri.astro.it), S. di Serego Alighieri<ref>, P. Goldoni<ref>, L. C. Ho<ref>,<ref>, A. Marconi<ref>,<ref>, G. Ponti<ref>, G. Risaliti<ref>,<ref>
Given the spatial dimensions involved and the distances of these luminous sources, the study of the BLR emissions is one of the few ways to have an insight on the nuclear region, otherwise inaccessible to observations. This is why the technique known as Reverberation Mapping (or Echo Mapping) <cit.> has received so much attention. Since its development, originally established with the purpose of a better knowledge of the geometrical properties of the BLR <cit.>, the opportunity it gives to measure the mass of the central body has been realized <cit.>. As long as the gas in the central regions is forced to rotate under the gravitational influence exerted by the black hole, the width of the emission lines coming from these regions is a measurement of the BH mass, according to the equationv^2 = f G M_BH/R_BLR ,where f, the virial factor, accounts for the geometry of the BLR <cit.>, still mostly unknown, R_BLR is the dimension of the Broad Line Region as deduced by Reverberation Mapping, v is the velocity of the emitting gas, and G is the gravitational constant. In addition, the presence of a relation between the size of BLR and the continuum luminosity <cit.> enables us to give a measurement of the central mass with a single spectroscopic observation in all the sources for which we can see the emissions coming from the BLR (type 1 AGN), avoiding the limits imposed by spatially resolved kynematic measurements and by the long observational times required by Reverberation Mapping. The measurement of BH masses through the analysis of spectroscopic features then opens the possibility of a statistical study of these peculiar sources.The BLR emission line most used for virial mass determination is by far Hβ λ4861, mostly because this line is known to be emitted by gas in virialized conditions and also because of its prominence in the optical spectral window. 
For very distant sources the optical range is no longer available and we have to find a replacement for Hβ. The most promising candidates for this role are CIV λ1549 and MgII λ2800 <cit.>, both in the rest-frame UV spectral range of the sources; in using these lines we assume that they are emitted by approximately the same region as Hβ, by gas in virialized conditions, and therefore that they have widths comparable with that of this line. However, it is well known that the BLR is stratified in terms of ionization potential <cit.>. CIV has a much higher ionization potential than Hβ, and even more so than MgII. Moreover, CIV emission behaves very differently depending on the source, very often exhibiting a blueshift and a general asymmetry. That does not fit well in an ordered Keplerian motion scenario and is believed to be associated with outflows or winds in the gas <cit.>. It is then clear why the use of Balmer lines is more advisable in general and why MgII can be considered a more reasonable replacement. When the redshift of the source only allows the use of CIV, however, we are compelled to find a different solution. The most important point, therefore, consists in identifying which part of the line can be associated with the gas of the BLR in Keplerian motion and which part we should instead consider in a non-virial state <cit.>. Furthermore, the emission variability of AGN is stronger at shorter wavelengths, and a comparison of CIV with the optical lines is not truly reliable if these lines are not detected simultaneously. Simultaneous detection avoids the problems connected with the very fast variability of these lines, a typical signature of AGN spectroscopic emission, and helps us in the search for possible connections between line properties, to obtain rules to be used when the optical virial estimator (Hβ) is not available.
§ SAMPLE SELECTION, OBSERVATIONS AND DATA REDUCTION

X-shooter <cit.> is a three-arm, single-object echelle spectrograph which started operations in October 2009. The instrument simultaneously covers the wavelength range from 300 to 2400 nm in the three arms: UVB (Δλ = 300 - 550 nm), VIS (Δλ = 550 - 1020 nm) and NIR (Δλ = 1020 - 2400 nm). For our observations we used slit widths of 1.3, 1.2 and 1.2 arcsec for the three arms respectively, resulting in resolving powers R = λ/Δλ = 4000, 6700 and 4300. The sample was selected with the goal of extending the work presented in <cit.> to higher redshift. In that paper a sample of relatively bright (r∼ 18-19) quasars from the SDSS DR7 release <cit.> with redshift around ∼ 1.5 was analyzed. The redshift choice ensures a simultaneous coverage from CIV to Hα with X-shooter. For this effort we selected, again from the SDSS DR7 release, QSOs with redshift around ∼ 2.3, ensuring again that X-shooter would detect CIV to Hα shifted to higher wavelengths with respect to the previous sample. In order to obtain higher S/N spectra, especially in the NIR where the spectra are noisier, we selected slightly brighter (r∼ 17.5-18.5) QSOs observable in a single night at the VLT. The resulting sample contained eight QSOs. After selection we also checked from the SDSS spectra that the selected objects have broad emission lines suitable for BH mass estimation and that they have no obvious broad absorption features. The average broad-line FWHM of the sources in the sample is consistent with the average at these redshifts. However, the average bolometric luminosity is <logL_bol>=47.25, higher than the average bolometric luminosity of QSOs at this redshift, log L_bol=46.8 ± 0.3 <cit.>, but compatible within 1.5 σ. This ensures that our sample is not strongly biased. Observations were performed in the framework of the French Guaranteed Time and took place on 10 March 2011.
For all our sources we report in Table <ref> the properties and the characteristics of the observations. The night was not photometric and the observing conditions were changing. Therefore, during the night we monitored the spectra reduced on-line and increased or decreased the observing time of the targets depending on their quality. The night was also hampered by strong winds whose speed was near (and sometimes over) the 12 m/s limit[http://archive.eso.org/asm/ambient-server?site=paranal] which prevents pointing towards Northern targets such as ours. These strong winds caused a loss of about two and a half hours of observing time on our program, forcing us to drop two targets. The six observed targets are listed in Table <ref> with exposure times, average airmass and seeing. Each observation consisted of 4 different exposures of 450 sec to 750 sec each, for a total of 1800 to 3000 sec. The exposures were taken using the nodding-along-the-slit technique with an offset of 5 arcsec between exposures in a standard ABBA sequence. The slit was put at the parallactic angle. Every observation was preceded by an observation of a telluric A0V standard at similar airmass. We processed the spectra using version 1.3.0 of the X-shooter data reduction pipeline <cit.>. The pipeline performed the following actions. The raw frames were first subtracted and cosmic ray hits were detected and corrected using the method developed by <cit.>. The frames were then divided by a master flat field obtained by using day-time flat field exposures with halogen lamps. The orders were extracted and rectified in wavelength space using a wavelength solution previously obtained from calibration frames. The resulting rectified orders were then shifted and added to superpose them, thus obtaining the final 2D spectrum. The orders were then merged and, in the overlapping regions, the merging was weighted by the errors propagated during the process.
From the resulting 2D merged spectrum a one-dimensional spectrum was extracted at the source's position. The one-dimensional spectrum with the corresponding error file and bad pixel map is the final product of the reduction. To perform flux calibration we used different procedures for the UVB data and for the VIS-NIR data. In the UVB band we extracted a spectrum from a staring observation of the flux standard LTT3218 <cit.> taken at the beginning of the night. We then reduced the data using the same steps as above, but in this case we subtracted the sky emission lines using the <cit.> method. This spectrum was divided by the flux table of the same star delivered with the pipeline to produce the response function. The response was then applied to the spectra of the sources. For the VIS and NIR arms, we used the A0V stars as flux and telluric standards. We extracted the A0V spectra with the same procedure used for the flux standard. We then used these spectra to apply telluric corrections and flux calibrations simultaneously using the Spextool software <cit.>. We then verified whether the final spectra of the three arms were compatible in the common wavelength regions and, where needed, performed a correction using the UVB spectra as reference. The spectral shapes are compatible with those of the SDSS spectra, while the fluxes are on average ∼ 50 % weaker.
Relying on the accurate SDSS flux calibration, we finally scaled our spectra in order to match the SDSS spectra in common wavelength regions reasonably free from emission lines.

§ SPECTRAL FITTING

As a preliminary step we de-redshifted the spectra according to their SDSS redshifts as reported in <cit.> and corrected them for Galactic extinction using the E(B-V) values from <cit.> as listed in the NASA/IPAC Extragalactic Database[The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.] and the reddening law of <cit.> with R_V=3.1. We then fitted the spectra with a procedure that uses the IDL MPFIT package <cit.>, written with the purpose of simultaneously fitting the continuum, FeII and the other emission lines. Broad lines are fitted with a broken power law, convolved with a Gaussian function to avoid the presence of a cusp at the peak <cit.>; the expression for the broken power law is

f(λ) ∝ (λ/λ_0)^β   for λ < λ_0 ,
       (λ/λ_0)^α   for λ > λ_0 ,

where λ_0 is the central wavelength and α and β are the slopes of the red and blue tails, respectively. The choice of such a function allows us to reproduce with only five parameters (flux, λ_0, α, β and the σ of the Gaussian function with which the double power law is convolved) the profiles which are commonly fitted with at least two Gaussian functions, involving 6 parameters. This is particularly useful when dealing with emission line complexes, in which the use of a single component for every line helps in limiting the degeneracy in the fits. Moreover, when we fit several lines together rather than separately, the use of a single fitting function allows us to set the same profile for all the lines with similar excitation conditions (high or low ionisation). The results obtained by fitting with the function in Eq.
<ref> are consistent with those obtained by fitting with multiple Gaussians, as long as the total spectrum is well reproduced by the fit. Narrow lines are instead fitted with a simple Gaussian, because their emission is generally very well reproduced by this function. Where a blue asymmetry is present, as in the case of [OIII] λ5007Å, a second Gaussian takes this feature into account. We first obtained the slope of the continuum from fits over the entire UV window (wavelength range ∼ 1400-3500Å, containing CIV, CIII] and MgII) and over the optical window (wavelength range ∼ 4000-7300Å, containing Hβ and Hα). We then used these slopes also in the fits for the four narrower windows, pertaining to the CIV-CIII], MgII, Hβ and Hα emissions. When necessary we applied a mask to the spectral regions contaminated by sky emission (this was particularly required in the case of the Hβ spectral window). For the UV spectral window we took into account several emission lines, following the prescriptions proposed in <cit.>: the emissions are separated into two groups, high and low ionization lines (HIL: λ1402.06, NIV] λ1486.496, CIV λ1549.06, HeII λ1640.42; LIL: λ1396.76, OIII] λ1663.48, AlIII λ1857.40, SiIII] λ1892.03, CIII] λ1908.73, λ2669.95, λ2672.04), whose velocity profiles are known to have systematically different behaviours, so that some parameters pertaining to one group, such as the central velocity and the two power-law indexes for the blue and red tails, can be tied for each line. We determined these fitting parameters using CIV for the HIL group and CIII] for the LIL group. The MgII doublet was instead fitted independently of the other lines. This line should in theory be included in the LIL and therefore be tied to the CIII] parameters, but since MgII is one of our investigation targets it did not make sense to tie it to another line (considering also that the CIII] complex, with three emission lines present, can be degenerate).
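The broad-line model described above (a broken power law smoothed by a Gaussian) can be sketched in a few lines. This is a hedged illustration, not the paper's actual IDL/MPFIT code; the wavelength grid, slopes and kernel width below are invented for demonstration:

```python
import numpy as np

def broken_power_law(lam, lam0, alpha, beta, amp=1.0):
    """Broken power law: slope beta blueward of lam0, alpha redward."""
    lam = np.asarray(lam, dtype=float)
    return amp * np.where(lam < lam0, (lam / lam0) ** beta, (lam / lam0) ** alpha)

def smoothed_profile(lam, lam0, alpha, beta, sigma_aa, amp=1.0):
    """Convolve the broken power law with a Gaussian (sigma in Angstrom)
    to remove the cusp at the peak, as described in the text."""
    step = lam[1] - lam[0]
    kx = np.arange(-5 * sigma_aa, 5 * sigma_aa + step, step)
    kernel = np.exp(-0.5 * (kx / sigma_aa) ** 2)
    kernel /= kernel.sum()  # normalized smoothing kernel
    return np.convolve(broken_power_law(lam, lam0, alpha, beta, amp),
                       kernel, mode="same")

# Illustrative Hbeta-like profile on an arbitrary grid (all parameters made up)
lam = np.linspace(4700.0, 5000.0, 1201)
model = smoothed_profile(lam, lam0=4861.3, alpha=-40.0, beta=60.0, sigma_aa=5.0)
```

The five parameters (amplitude, λ_0, α, β, smoothing σ) match the count quoted in the text, which is what makes this form economical compared with multi-Gaussian fits.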
The optical window includes the Balmer lines, Hβ and Hα, and a few other lines not always present in the spectra (Hδ λ4103, Hγ λ4342, HeI λ4472, [OIII] λ4959,5007, [NII] λ6550,6585 and HeI λ7067). Hβ and Hα were fitted independently. The fitting procedure also includes the FeII emission, reproduced by convolving FeII emission templates with a Gaussian that accounts for the velocity of the emitting gas. We used two kinds of templates: the first one is the I Zw 1 FeII template by <cit.>, valid only for the visible band, and the second one is a series of model templates obtained with the photoionization code Cloudy <cit.>. The Cloudy templates were computed with the following setup:

* we used the 371-level FeII model <cit.> instead of the simplified model <cit.>;
* we considered a continuum emission similar to that examined in <cit.>, resembling the spectrum of a typical radio-quiet AGN;
* we assumed a plane-parallel geometry with a maximum cloud column density of 10^23 cm^-2;
* we used 10 combinations of ionizing photon flux and column density to cover the possible physical conditions of the BLR;
* for all models we also considered the possibility of a 100 km s^-1 microturbulence velocity.

The above assumptions result in 20 different templates, which are listed in Tab. <ref>. For the CIV-CIII] spectral window, the FeII and FeIII emissions were also taken into account. Specifically, we used the <cit.> empirical templates for FeII and FeIII as deduced from the I Zwicky 1 spectrum. During the fitting procedure the templates are combined with positive weights (free parameters of the fit) and convolved with a Gaussian function accounting for the velocity of the emitting gas (whose central value and σ are free parameters of the fit as well). None of the examined lines exhibits an evident narrow component. We therefore decided to fit all of the permitted lines with a single double power-law function. Also, the [NII] doublet in the Hα window is not recognisable at all, and we did not consider it among the emission lines in the fitting procedure.
Only for Hα in J093147 does the shape of the line profile reveal the presence of [NII], and as a consequence we considered the doublet in the fitting process. A special case is instead represented by J123120, for which we originally considered only broad lines, but the fit improved considerably when taking into account an emission from the NLR too (see Section <ref> and Fig. <ref>). In Fig. <ref> an example of the UV and optical windows for one of the sources is presented, along with the individual windows for the CIV, CIII] and MgII, Hβ and Hα lines (fits for the rest of the sources are shown in Appendix A). In the large UV window some wide regions were masked during the fitting process for the following reasons:
- the presence of strong emission blending,
- the FeII templates not being representative,
- a lack of knowledge about what kind of emission is able to reproduce certain features (this is for example the case of the red shelf of CIV),
- the presence of noise.
The regions usually excluded from the fit are the red shelf of CIV, the bump of emission between OIII] λ1663.48Å and the CIII] complex, when present, and the wide spectral region between 2000 and ∼ 2450Å <cit.>. This choice affects neither the continuum slope determination nor the line analysis, since the proper examination of the lines is performed on the individual spectral line windows. For all the sources the FWHM and the σ were estimated on the best-fit profile for each line. They are connected to the shape of the line: while the FWHM is more representative of the core, the σ depends more on the tails of the line <cit.>. Choosing one or the other leads to different results in BH virial mass estimations, especially if we are dealing with poor-quality data <cit.>. To give an estimate of the errors on these quantities we used a Monte Carlo approach, extracting 1000 independent values for every parameter pertaining to the line.
On these synthetic profiles we evaluated 1000 values of the FWHM and σ; from the distributions of these 1000 values we were able to infer the relative error to be associated with a given measurement. As a consistency check, we also performed the fit of 100 mock realizations of the original spectra, obtained by randomly extracting, within its 1σ uncertainty, the flux in each channel. We measured the properties of the profile for all the 100 best fits and determined the standard deviation of the distribution. This is then the error associated with the property for the best fit of the true spectrum. This check was repeated for the lines in every spectral window. The errors on the measured quantities obtained in this way are consistent with those determined with the Monte Carlo approach. The errors on the UV and optical slopes and on the luminosities at 1350, 1450, 3000 and 5000Å are instead computed by performing, for every source, fits of various regions of the spectra and then evaluating the differences between these results and those obtained in the original fit <cit.>. All the measured quantities are listed in Tab. <ref>. We noticed that the central wavelength of the [OIII] λ5007Å line, from which the redshift was estimated <cit.>, was in almost every case not close to the nominal wavelength. We therefore corrected the redshift using only the principal component of [OIII] (excluding the blueshifted component from the whole profile, which is instead included in the estimate of σ). A more accurate estimate of the redshift of the sources can improve the analysis of the shifts of the lines with respect to their nominal wavelengths. The corrected redshifts are listed in Tab. <ref> and have been used in the following analysis.

§ RESULTS

§.§ Line comparison

For a visual comparison we show in Fig. <ref> the best-fit profiles for every line in all spectra.
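The Monte Carlo error estimate described above can be sketched with a toy model: draw parameter sets from their 1σ uncertainties, rebuild the profile each time, measure the width numerically, and take the spread of the resulting distribution as the error bar. The Gaussian line and the uncertainties below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fwhm_of_profile(x, y):
    """Numerical FWHM: extent of the region where the profile exceeds half its peak."""
    half = y.max() / 2.0
    above = x[y >= half]
    return above[-1] - above[0]

# Toy best fit: a Gaussian line with sigma = 10 A (FWHM = 2.3548*sigma ~ 23.5 A)
# and an assumed 1-sigma uncertainty of 0.5 A on sigma.
x = np.linspace(-100.0, 100.0, 4001)
sigma_best, sigma_err = 10.0, 0.5

fwhms = []
for _ in range(1000):                      # 1000 synthetic profiles, as in the text
    s = rng.normal(sigma_best, sigma_err)  # draw a parameter set
    y = np.exp(-0.5 * (x / s) ** 2)        # rebuild the profile
    fwhms.append(fwhm_of_profile(x, y))
fwhms = np.array(fwhms)

print(f"FWHM = {fwhms.mean():.2f} +/- {fwhms.std():.2f} A")
```

For a Gaussian the spread of the recovered FWHMs should be close to 2.3548 times the input σ uncertainty, which provides a quick sanity check of the machinery.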
The profiles are normalized to their peak values and presented on a velocity scale. As a general trend, Hα, Hβ and MgII behave similarly, as expected if the three lines are all emitted from regions in a virialized condition <cit.>. All of them show symmetric profiles and small shifts of the central wavelengths, generally below 300 km/s (Tab. <ref>). Surprisingly, the most asymmetric line among these (usually in the red wing) is Hβ. We suspect this is the result of a possible degeneracy within the Hβ-[OIII] complex, especially when FeII and HeII, whose emissions are difficult to disentangle by the fitting procedure, are present. The most asymmetric line of all is CIV. This line frequently shows a significant shift of the central wavelength, about -730 km s^-1 on average, and all cases show the presence of a prominent blueshift. We notice that, in contrast with what was found in Paper I, CIII] does not seem to behave so differently with respect to the other lines. None of the sources presents a large shift (∼ 90 km s^-1 on average). Furthermore, in no source does CIII] show a prominent blueshift, as CIV does instead.

§.§ Line widths comparison

Fig. <ref> (second, third and fourth columns) shows a comparison of the FWHM for every pair of lines commonly examined for virial estimates: Hβ, MgII, Hα and CIV. The red points represent the measurements used in the final analysis. For J123120 we decided to perform two fits, one using only the broad components (black points) and one including also the narrow ones. The latter fit was done in two ways: the first leaving the width of the narrow component as a free parameter, and the other fixing it to that of CIII] in the UV range and to that of [OIII] and Hβ in the optical range. The magenta points were adopted for the following analysis and correspond to the case in which we tie the narrow component in the UV spectrum and to the case in which we leave it free in the optical range.
While in the UV we could use the obvious technique of linking together the widths of the narrow components, we could not do the same in the optical, because the narrow Hα component is much broader than the [OIII] one and they cannot reasonably be linked together. The presence of a narrow component in J123120 is particularly evident for the CIII] line, although it also improved the fits for the other lines. Even for our small sample, a correlation between the FWHM of Hβ, Hα and MgII is present. Instead, CIV has a weaker correlation with the other lines: this is not surprising given the blueshift of the line in almost every source of the sample. We report the results of the fits, assuming a linear relation between the logarithms of the linewidths, in Tab. <ref>. Concerning the parametrization of the line width, although the σ is in principle a more reliable estimator, especially when data quality is poor <cit.>, most of the recent works use the FWHM instead (among others), justifying this with a smaller scatter between different lines <cit.>. In our measurements we do not observe such a larger scatter in the relationships involving the line dispersion σ with respect to the FWHM (see Fig. <ref> and Tab. <ref>). The only exceptions are the relationships involving CIII], for which essentially the J123120 point is an outlier. Of course the smallness of our sample plays an important role in this respect, stressing the presence of outliers that would probably not be such in a larger sample. The analysis presented by <cit.> highlights the difference between measurements performed under a global approach (considering a continuum fitted on the whole SED of accretion disk, BLR and NLR emissions) and a local approach (the more common case, in which the fit is performed only on a smaller spectral window including the line).
The line measurements they report are those obtained under the local approach, for which they recognize the presence of a large scatter in the line dispersion, ascribable to the subtraction of an improperly fitted continuum. We notice, however, that their local approach takes into account rather narrow spectral windows, while our measurements are performed on wider wavelength ranges and, moreover, rely on a preliminary continuum evaluation performed on even wider windows. Given the data quality and the spectral range that our fits cover, we are confident that our line dispersions could be considered for a virial estimate. Nonetheless, since we are especially interested in the comparison with some of the works mentioned before and our data quality allows the use of the FWHM, we will focus our analysis on this quantity.

§.§ M_BH and Eddington ratios

Although we took into account several previous works <cit.>, we focus our analysis on the comparison of our data with the only two other samples with the same characteristics, i.e. whose spectra were taken with the X-shooter spectrograph and therefore cover a spectral range including all the broad lines of interest: Paper I and <cit.>. We notice that, while both these samples cover the same redshift range z∼1.5, ours goes to higher redshift (z∼2.2) and it is therefore interesting to make a comparison in terms of mass and Eddington ratio. Since the quasars in our sample are selected to be slightly brighter than those selected in Paper I, we expect them to be characterized by higher values of at least one of these two quantities. In Fig. <ref> (second, third and fourth columns) we report the measurements of M_BH obtained with the new prescriptions of <cit.> for our sample (see Tab.
<ref>) and for the Paper I sample, for which only the measurements pertaining to three lines out of four are present (in that work results for Hβ are not included, given the poor signal-to-noise in this spectral range, and all the comparisons are made with Hα). In the same figure we also show the <cit.> sample. As for the Hα-based masses, we use their Hα prescription with the luminosity at 5100Å, the same we used for the other samples. For all the lines we used the third column of <cit.>, Tab. 7, i.e. the local-approach M_BH calibrations, corrected for the small systematics with respect to the global-approach M_BH calibrations. The CIV-based M_BH for the <cit.> sources are computed using L_1350 instead of L_1450, because this is the closest continuum luminosity available for this sample. Our sample (red data points) fits very well in all cases and, on average, is located in the upper part of the global distribution. We then compute the Eddington ratios for our sources and for the <cit.> and Paper I samples, with the same prescription used in Paper I (bolometric luminosity from <cit.> and Eddington luminosity L_Edd = 1.26 × 10^38 (M_BH/M_⊙) erg s^-1), to verify whether our sample is composed of more highly accreting black holes. We evaluate the Eddington ratio for our objects both with Hβ and with Hα as virial estimators, while for <cit.> we recompute the Hα, L_5100-based values. In this way we can compare these values with those found in Paper I, for which only Hα measurements are available. We find that our Eddington ratios (Tab. <ref>, Fig. <ref>) are on average much higher than those of both Paper I and <cit.>. The higher luminosity of our sample is therefore due both to the presence of more massive BHs and to the fact that they are accreting, on average, at higher rates.

§.§ Can CIII] be used as a virial estimator?

Unlike what was found in Paper I, when looking at the CIII] profile we do not recognize a different behaviour of the line with respect to the others (see Fig. <ref>).
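The Eddington-ratio computation described above is a one-liner once the bolometric luminosity and virial mass are in hand. A minimal sketch (the input values are illustrative, not measurements from this paper):

```python
# Eddington ratio from a bolometric luminosity and a virial black-hole mass,
# using L_Edd = 1.26e38 (M_BH / M_sun) erg/s as quoted in the text.
def eddington_ratio(log_lbol, log_mbh):
    """log_lbol: log10 of L_bol in erg/s; log_mbh: log10 of M_BH in solar masses."""
    l_edd = 1.26e38 * 10.0 ** log_mbh
    return 10.0 ** log_lbol / l_edd

# e.g. log L_bol = 47.25 (the sample-average value quoted earlier) and a
# hypothetical M_BH = 10^9.5 M_sun:
print(f"L/L_Edd ~ {eddington_ratio(47.25, 9.5):.2f}")
```

Working in log space avoids overflow and matches the way both luminosities and masses are tabulated in this kind of analysis.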
We therefore decided to examine the relationships between the FWHM of CIII] and those of the other lines. Although CIII] is not commonly used, some works analyse this line <cit.>. <cit.> find only a slight correlation of CIII] with the FWHM of the other lines, while <cit.> state that the CIII] linewidth correlates with that of CIV, and that therefore these lines could be emitted by the same region, then being characterized by the same issues (i.e. non-virialization of the emitting region). We find that this correlation (log(FWHM_CIII])-log(FWHM_CIV)) has a larger scatter with respect to those with Hβ and Hα, and comparable with that with MgII (see Tab. <ref> and Fig. <ref>, first column). However, the sample of <cit.> has much higher luminosity than ours and is composed of lower-redshift sources. Moreover, they fit the line profiles on much narrower wavelength ranges than ours. Additionally, for the line profile model they use two Gaussians tied to give a symmetric broad component for CIII]. All these differences could then contribute to the discrepancy with their results. In Fig. <ref> (first column) we notice the presence of only one outlier, J121911, for which the CIII] complex appears to have a more “boxy” shape with respect to those of the other sources, resulting in a more asymmetric CIII] profile with an extended red wing (see the figures in the Appendix for the results on the complete sample and Fig. <ref> for a comparison of the line profiles). We have checked that including or excluding this point does not affect the fit and we decided to leave it in the sample. The reason for this is that this point has larger errors, since the IDL routine we use to fit the linear relation (MPFITEXY, based on the MPFIT package <cit.>) considers the errors in both the x and y variables. J121911 does not stand out evidently in the case of the line dispersion (Fig. <ref>, first column), but for the same reason we do not consider it a reliable measurement. The results of the fits assuming a linear correlation between the quantities are listed in Tab. <ref>.
CIII] is not usually mentioned among the possible virial estimators. This is mostly due to (1) its smaller intensity and (2) its blend with other emission lines in the same complex. The use of only one component to fit the broad lines, instead of two or more Gaussians, is more robust against the degeneracy in the profile fitting, also thanks to the good quality of our data. The CIV line should not be used in virial estimates because, although it is more intense, it is contaminated by non-virial components. On the contrary, the preliminary line comparison (Fig. <ref>) shows that CIII] behaves very similarly to the lines that are mostly virialized (Hβ, Hα and MgII). Therefore, we attempt to find a virial relationship for CIII] by comparing measurements for this line with virial masses based on the other lines. Given the smallness of our sample, in addition to a fixed dependence of M_BH on the velocity of the emitting gas according to the virial assumption, we also fix the dependence on the luminosity as M_BH ∝ L^0.5. This choice is perfectly consistent with the hypothesis of photoionization in the BLR and with what was found in previous works <cit.>. The BH mass is then given by the equation

M_BH = C FWHM^2 L^0.5 ,

where the only free parameter is the scaling factor C. We chose to use the luminosity at 1450Å, as the closest to the CIII] line that we can measure in a continuum window reasonably free from other emissions. In Tab. <ref> we report our results (scaling factor C and scatter Δ) for the comparison of the CIII]-based M_BH with those from all the other lines except CIV (Fig. <ref>, first column). Since CIV, if not corrected, does not share the property of virialization of the emitting region, we do not consider this line for this comparison. Tab. <ref> also shows the M_BH estimates for all the sources derived from the CIII] line.
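Calibrating the single free parameter of the one-parameter prescription above is simple in log space: with the velocity and luminosity exponents fixed, log C is just the mean offset between the reference masses and the fixed-slope term, and the scatter is the standard deviation of the residuals. The sketch below uses synthetic numbers (including a hypothetical `true_logc`) purely to illustrate the procedure:

```python
import numpy as np

# In log space the model M_BH = C * FWHM^2 * L^0.5 reads
#   log M = log C + 2 log FWHM + 0.5 log L,
# so the best-fit log C is the mean of log M_ref - (2 log FWHM + 0.5 log L).
# All values below are synthetic; true_logc is a made-up placeholder.
rng = np.random.default_rng(1)

n = 6                                      # six sources, as in the sample
log_fwhm = rng.uniform(3.4, 3.8, n)        # log10(FWHM / km s^-1)
log_l = rng.uniform(46.0, 47.0, n)         # log10(L_1450 / erg s^-1)
true_logc = -15.0                          # hypothetical calibration constant
log_mref = true_logc + 2 * log_fwhm + 0.5 * log_l + rng.normal(0.0, 0.1, n)

fixed_term = 2 * log_fwhm + 0.5 * log_l
log_c = np.mean(log_mref - fixed_term)     # best-fit scaling factor
scatter = np.std(log_mref - (log_c + fixed_term))  # residual scatter in dex

print(f"log C = {log_c:.2f}, scatter = {scatter:.2f} dex")
```

With only one free parameter, the fit reduces to an average, which is exactly why fixing the slopes is the sensible choice for a six-object sample.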
The scatter in these relations is comparable with those of the mass relations involving CIV (0.14, 0.15 and 0.16 dex with only the six objects in our sample, and 0.23, 0.21 and 0.20 dex for the whole sample, for Hβ, Hα and MgII, respectively); only in the case of M_BH(CIII])-M_BH(Hβ) is it larger. However, the strong similarity of the CIII] profile with the lines emitted by virialized gas (Hβ and MgII) suggests that, at least for this sample, we can use this line as a virial estimator. The use of the double power-law function as a model to fit the broad components helps in removing the degeneracy in the AlIII, SiIII] and CIII] complex and, therefore, in retrieving the CIII] profile more accurately. Although the scatter in the CIII] virial relationships is as large as that of CIV, this line does not seem to be affected by contamination by non-reverberating components. Of course this sample is composed of only six sources, one of which (J121911) represents an outlier as far as the CIII] behaviour is concerned; therefore a significantly enlarged sample is needed to confirm the reliability of this line as a virial estimator. Moreover, due to the strong blending of the 1900Å complex, spectra of very high quality and S/N are required in order to disentangle the different emission components. This fact can therefore limit the use of this line but, in principle, CIII] seems to represent a better choice than CIV as it is. Since we are comparing our CIII]-based masses with those obtained using the <cit.> prescription, and since they do not provide virial relationships for σ, we limit our analysis to the FWHM.

§ SUMMARY

We examined a sample of six quasars at redshift z∼2.2, whose spectra were taken with the X-shooter spectrograph. This instrument covers a very large spectral range, allowing the simultaneous comparison of the most used virial estimators. We compare our results with those of the only two other samples observed with X-shooter (Paper I and <cit.>). The analysis gives the following results:

1.
The comparison of the line profiles shows that Hβ, Hα and MgII behave in a similar way, as expected for virialized gas.

2. CIV is by far the line that deviates most from this condition, because of its strong blueshifts and asymmetry.

3. We find CIII] to behave consistently with the other lines, in contrast to CIV.

4. Comparisons of the linewidths obtained for every line give a similar scatter for the FWHM and the line dispersion σ. However, we chose to focus our analysis on the FWHM, to be consistent with the works we compare our sample to.

5. We compute virial masses for our sample and for the sources in Paper I, using the prescriptions by <cit.>. All the sources follow the relations. A comparison with Paper I and <cit.> shows that our higher-redshift sample has larger M_BH and higher accretion rates.

6. Notwithstanding the smallness of our sample, we suggest a new virial mass prescription based on the FWHM of CIII], which can be considered a valid substitute of CIV for sources in which only this spectral window is present, if a high-quality spectrum is available and a proper modelling of the FeII and FeIII emissions is included in the analysis. Unlike CIV, in fact, this line seems to share the behaviour of the lines emitted by virialized gas.

The authors would like to thank the anonymous referee for helpful comments and suggestions that considerably improved the work. They also thank Alvaro Alvarez for support during the observations and Marianne Vestergaard for kindly providing the I Zw 1 iron templates from <cit.>. LCH acknowledges support from the Chinese Academy of Science (grant No. XDB09030102), National Natural Science Foundation of China (grant No. 11473002), and Ministry of Science and Technology of China (grant No. 2016YFA0400702). GP acknowledges the Bundesministerium für Wirtschaft und Technologie/Deutsches Zentrum für Luft- und Raumfahrt (BMWI/DLR, FKZ 50 OR 1408 and FKZ 50 OR 1604) and the Max Planck Society.

§ COMPLETE SAMPLE FITS
In unpublished notes, Pila discussed some theory surrounding the modular function j and its derivatives. A focal point of these notes was the statement of two conjectures regarding j, j' and j'': a Zilber-Pink type statement incorporating j, j' and j'', which was an extension of an apparently weaker conjecture of André-Oort type. In this paper, I first cover some background regarding j, j' and j'', mostly covering the work already done by Pila. Then I use a seemingly novel adaptation of the o-minimal Pila-Zannier strategy to prove a weakened version of Pila's “Modular André-Oort with Derivatives” conjecture. Under the assumption of a Schanuel-type conjecture, the central theorem of the paper implies Pila's conjecture in full generality, as well as a more precise statement on the same lines. § INTRODUCTION The modular André-Oort Conjecture is a statement about the arithmetic and algebraic properties of the classical modular function j:ℍ→ℂ. The statement is the following. [Pila, Modular André-Oort] Let V ⊆ ℂ^n be an algebraic variety. Then V contains only finitely many maximal special subvarieties. While various partial <cit.> and conditional <cit.> results were known, this was first proven unconditionally and in full generality by Pila, in his 2011 paper <cit.>, using what is now a fairly standard strategy employing ideas of o-minimality and point-counting. The connection with the j function is obscured behind the definition of “special subvariety”, which goes as follows. It is well known that there are modular polynomials Φ_N ∈ ℤ[X,Y] with the property that Φ_N(j(gτ),j(τ))=0 whenever g is a primitive 2×2 integer matrix of determinant N.
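Since the paper leans repeatedly on these classical facts, here is a quick numerical illustration (not part of the original notes): j is evaluated from truncated q-expansions of the Eisenstein series E_4 and E_6, and the classical modular polynomial Φ_2 is checked on the pair (j(2τ), j(τ)).

```python
import math, cmath

def eisenstein(k, coeff, tau, terms=60):
    # E_k(tau) = 1 + coeff * sum_{n >= 1} sigma_{k-1}(n) q^n,  q = e^{2 pi i tau}
    q = cmath.exp(2j * math.pi * tau)
    s = 1 + 0j
    for n in range(1, terms):
        sigma = sum(d ** (k - 1) for d in range(1, n + 1) if n % d == 0)
        s += coeff * sigma * q ** n
    return s

def j(tau):
    e4 = eisenstein(4, 240, tau)
    e6 = eisenstein(6, -504, tau)
    return 1728 * e4 ** 3 / (e4 ** 3 - e6 ** 2)

def phi2(x, y):
    # the classical modular polynomial Phi_2(X, Y)
    return (x ** 3 + y ** 3 - x ** 2 * y ** 2
            + 1488 * (x ** 2 * y + x * y ** 2)
            - 162000 * (x ** 2 + y ** 2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000)

tau = 1.3j                      # any point of the upper half plane works
x, y = j(2 * tau), j(tau)       # g = [[2,0],[0,1]], determinant 2
assert abs(phi2(x, y)) / (abs(x) ** 2 * abs(y) ** 2) < 1e-8
assert abs(j(1j) - 1728) < 1e-6      # the CM value j(i) = 1728
```

The relative (rather than absolute) tolerance in the first assertion accounts for the enormous size of the individual terms of Φ_2 at these arguments.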
So, although j is a transcendental function, it is very well-behaved under the action of GL_2^+(ℚ). A fairly direct consequence of the existence of the polynomials Φ_N is the (also well known) fact that j(τ) is an algebraic integer whenever τ ∈ ℍ is quadratic. Loosely, a special subvariety of ℂ^n is a variety induced by these relations. To be more precise, we have the following definition. Let n ∈ ℕ. Let S_0 ∪ S_1 ∪ … ∪ S_k be a partition of {1,…,n}, where k ≥ 0 and S_i ≠ ∅ for i > 0. For each s ∈ S_0, choose any point q_s ∈ ℍ. For each i > 0, let s_i be the least element of S_i and for each s ∈ S_i with s ≠ s_i choose a geodesic matrix g_i,s ∈ GL_2^+(ℚ). A weakly ℍ-special subvariety of ℍ^n is a set of the form {(τ_1,…,τ_n) ∈ ℍ^n : τ_s = q_s for s ∈ S_0, τ_s = g_i,sτ_s_i for s ∈ S_i, s ≠ s_i, i=1,…,k}, for some given data S_i, q_s, g_i,s. A weakly ℍ-special subvariety is ℍ-special if the constant factors q_s are imaginary quadratic numbers for all s ∈ S_0. One can define special (henceforth to be known as j-special) subvarieties in a similar way, as varieties in ℂ^n cut out by the modular polynomials Φ_N. For our purposes, however, there is a simpler definition that suits better. By abuse of notation, we will write j for the function (τ_1,…,τ_n) ↦ (j(τ_1),…,j(τ_n)). A j-special variety in ℂ^n is then the image under j of an ℍ-special variety; similarly a weakly j-special variety is the image under j of a weakly ℍ-special variety.
In this paper, I will be investigating what happens when we consider not only j and its corresponding special varieties, but also the derivatives of j. In this setup, the situation is rather more complicated. To begin with, recall that j satisfies a certain third-order differential equation. Hence it is enough to consider only j and its first two derivatives. With this in mind, let us define a function which will be our central object of study for the paper: J=(j,j',j''):ℍ→ℂ^3, J(τ)=(j(τ),j'(τ),j''(τ)). Again we will abuse notation and use J also to refer to the obvious map from ℍ^n to ℂ^3n defined as J on each coordinate. Much of the setup here is due to Pila; in unpublished notes, he compiled various properties of J and gave definitions of what a J-special variety should be. Pila also made a conjecture analogous to <ref>, with respect to J. For the rest of this section, I will give some of the setup covered by Pila in his notes. Towards the end of the section I state Pila's conjecture and a weakened version of the conjecture which is the central theorem of the paper.
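The third-order equation alluded to here is the classical Schwarzian equation for j: writing Sj = j'''/j' − (3/2)(j''/j')² for the Schwarzian derivative, one has Sj(τ) + R(j(τ))·j'(τ)² = 0, where R(j) = (j² − 1968j + 2654208)/(2j²(j−1728)²). A numerical sanity check of this identity (not from the paper; j' is computed in closed form from Eisenstein series as j' = −2πi·E_4²E_6/Δ, while j'' and j''' are obtained by finite differences):

```python
import math, cmath

def E(k, coeff, tau, terms=80):
    q = cmath.exp(2j * math.pi * tau)
    return 1 + coeff * sum(
        sum(d ** (k - 1) for d in range(1, n + 1) if n % d == 0) * q ** n
        for n in range(1, terms))

def jfun(tau):
    e4, e6 = E(4, 240, tau), E(6, -504, tau)
    return 1728 * e4 ** 3 / (e4 ** 3 - e6 ** 2)

def jp(tau):
    # closed form j' = -2*pi*i * E4^2 * E6 / Delta, with Delta = (E4^3 - E6^2)/1728
    e4, e6 = E(4, 240, tau), E(6, -504, tau)
    return -2j * math.pi * 1728 * e4 ** 2 * e6 / (e4 ** 3 - e6 ** 2)

tau, h = 2j, 1e-3
jpp = (jp(tau + h) - jp(tau - h)) / (2 * h)                  # j'' numerically
jppp = (jp(tau + h) - 2 * jp(tau) + jp(tau - h)) / h ** 2    # j''' numerically

schwarzian = jppp / jp(tau) - 1.5 * (jpp / jp(tau)) ** 2
J = jfun(tau)
R = (J ** 2 - 1968 * J + 2654208) / (2 * J ** 2 * (J - 1728) ** 2)
assert abs(schwarzian + R * jp(tau) ** 2) < 1e-2
```

The double poles of R at j = 0 and j = 1728 reflect the elliptic points ρ and i of orders 3 and 2.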
§.§ Properties of J The functions j' and j'' are not fully modular functions. Instead, they satisfy j'(γτ)=(cτ+d)^2 j'(τ) and j''(γτ)=(cτ+d)^4 j''(τ)+2c(cτ+d)^3 j'(τ), where γ=[ a b; c d ] ∈ SL_2(ℤ). This says that j' is a meromorphic modular form of weight 2, while j'' is a so-called quasimodular form of weight 4 and depth 1. The behaviour of j' and j'' at quadratic points is also worse than that of j; while j(τ) is algebraic at quadratic τ, j'(τ) and j''(τ) almost never are. In fact, such points are always transcendental unless τ is in the SL_2(ℤ)-orbit of i or ρ=e^{2π i/3} (see <cit.>). However, thanks to a result of Masser, j''(τ) is always algebraic over j'(τ). To be precise, we have a rational function p_c, in 3 variables (with c a positive real), defined as follows: p_c(W,X,Z) = Z^2(7W−X−6912)/(6W(W−1728)) + iZ/c. A brief calculation shows that p_{Im τ}(j(τ),χ^*(τ),j'(τ))=j''(τ) for all τ. Here and throughout, χ^* is an almost holomorphic modular (AHM) function defined by χ^* = 1728·E_2^*E_4E_6/(E_4^3−E_6^2), where the E_k are standard Eisenstein series and E_2^* is the weight 2 almost holomorphic modular form defined by E_2^*(τ)=E_2(τ)−3/(π Im τ). Writing y = Im τ, we can decompose χ^* as χ^* = χ − (3/(π y))·f, where χ and f are, here and throughout, holomorphic functions defined by χ = 1728·E_2E_4E_6/(E_4^3−E_6^2), f = 1728·E_4E_6/(E_4^3−E_6^2). The function χ^* and related AHM functions have been studied in several places, by Masser <cit.>, Mertens and Rolen <cit.> and Zagier <cit.> among others. I have studied χ^* in the context of an André-Oort result <cit.>; we will be making use of some results from that paper occasionally. For the present, the most relevant fact is that χ^*(τ) is algebraic whenever τ is quadratic[This was essentially proven by Masser in the Appendix of <cit.>, which seems to have been the first investigation of the algebraic properties of AHM functions.]. As a consequence, we see that p_{Im τ} has algebraic coefficients whenever τ is quadratic. Hence, as claimed, tr.deg._ℚ ℚ(j(τ),j'(τ),j''(τ))=1 for quadratic τ ∉ SL_2(ℤ)·{i,ρ}. By differentiating the modular
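Masser's relation j''(τ) = p_{Im τ}(j(τ), χ^*(τ), j'(τ)), with p_c(W,X,Z) = Z²(7W−X−6912)/(6W(W−1728)) + iZ/c, can be sanity-checked numerically; the sketch below (illustrative, not part of the original text) computes j, j' and χ^* from Eisenstein q-expansions, while j'' is obtained independently by differencing j'.

```python
import math, cmath

def E(k, coeff, tau, terms=80):
    q = cmath.exp(2j * math.pi * tau)
    return 1 + coeff * sum(
        sum(d ** (k - 1) for d in range(1, n + 1) if n % d == 0) * q ** n
        for n in range(1, terms))

def jfun(tau):
    e4, e6 = E(4, 240, tau), E(6, -504, tau)
    return 1728 * e4 ** 3 / (e4 ** 3 - e6 ** 2)

def jp(tau):
    # j' = -2*pi*i * E4^2 * E6 / Delta,  Delta = (E4^3 - E6^2)/1728
    e4, e6 = E(4, 240, tau), E(6, -504, tau)
    return -2j * math.pi * 1728 * e4 ** 2 * e6 / (e4 ** 3 - e6 ** 2)

def chistar(tau):
    e2, e4, e6 = E(2, -24, tau), E(4, 240, tau), E(6, -504, tau)
    e2star = e2 - 3 / (math.pi * tau.imag)     # E_2^* = E_2 - 3/(pi * Im tau)
    return 1728 * e2star * e4 * e6 / (e4 ** 3 - e6 ** 2)

def p(c, W, X, Z):
    # Masser's rational function p_c, with c = Im(tau)
    return Z ** 2 * (7 * W - X - 6912) / (6 * W * (W - 1728)) + 1j * Z / c

tau, h = 2j, 1e-4
jpp = (jp(tau + h) - jp(tau - h)) / (2 * h)   # independent numerical j''
val = p(tau.imag, jfun(tau), chistar(tau), jp(tau))
assert abs(val - jpp) / abs(jpp) < 1e-6
```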
polynomials, we see that J also has nice behaviour with respect to g ∈ GL_2^+(ℚ). We will briefly work out the details of this in the case of j'. Let g ∈ GL_2^+(ℚ) be a primitive integer matrix of determinant N. Since Φ_N(j(τ),j(gτ))=0, we see that ∂_X(Φ_N)(j(τ),j(gτ))·j'(τ) + ∂_Y(Φ_N)(j(τ),j(gτ))·j'(gτ)·d(gτ)/dτ = 0, and hence j'(gτ) = −λ_N(j(τ),j(gτ))·j'(τ)·m_g(τ), where λ_N = ∂_X(Φ_N)/∂_Y(Φ_N) and m_g(τ) = (cτ+d)^2/det g = (d(gτ)/dτ)^{-1}. For j'', we have a similar relation, though a bit more complex. In both cases, the nature of the relation differs depending on whether g is upper triangular. If g fails to be upper triangular (i.e. c ≠ 0) then the relation between the functions j'(gτ) and j'(τ) (respectively for j'') only exists over ℂ(τ). Otherwise the relationship is over ℂ. Hence tr.deg._ℂ ℂ(J(τ),J(gτ))=3 (considering τ as a variable in ℍ) if g is upper triangular and tr.deg._ℂ ℂ(J(τ),J(gτ))=4 otherwise. Since the behaviour of J is affected by whether or not a matrix is upper triangular, it makes sense to make the following definition. A weakly ℍ-special variety G is called a geodesic upper-triangular (GUT) variety if all of the g_i,s ∈ GL_2^+(ℚ) arising in its definition (see <ref>) are upper triangular matrices. We will be making use of GUT varieties quite often, later on in this paper. §.§ Special sets for J In the classical case, the special sets were just the images, under j, of ℍ-special sets. An important feature is that they are bi-algebraic; for ℍ-special sets G, both G and j(G) are algebraic sets defined over ℚ. In the J case, as noted earlier, J(G) is not necessarily an algebraic set, let alone defined over ℚ. The solution to this is fairly simple; just take Zariski closures over ℚ. We are still following Pila, and will use the following notation, also of his design.
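The differentiated relation j'(gτ) = −λ_N(j(τ),j(gτ))·j'(τ)·m_g(τ) can likewise be checked numerically for the upper triangular matrix g = [[2,0],[0,1]] (so det g = 2 and m_g(τ) = 1/2), using Φ_2 and its partial derivatives; again this sketch is illustrative rather than part of the original notes.

```python
import math, cmath

def E(k, coeff, tau, terms=80):
    q = cmath.exp(2j * math.pi * tau)
    return 1 + coeff * sum(
        sum(d ** (k - 1) for d in range(1, n + 1) if n % d == 0) * q ** n
        for n in range(1, terms))

def jfun(tau):
    e4, e6 = E(4, 240, tau), E(6, -504, tau)
    return 1728 * e4 ** 3 / (e4 ** 3 - e6 ** 2)

def jp(tau):
    e4, e6 = E(4, 240, tau), E(6, -504, tau)
    return -2j * math.pi * 1728 * e4 ** 2 * e6 / (e4 ** 3 - e6 ** 2)

def phi2_x(x, y):
    # partial derivative of the classical Phi_2 with respect to its first slot
    return (3 * x ** 2 - 2 * x * y ** 2 + 1488 * (2 * x * y + y ** 2)
            - 324000 * x + 40773375 * y + 8748000000)

tau = 1.1j
x, y = jfun(tau), jfun(2 * tau)         # Phi_2(j(tau), j(2 tau)) = 0
lam = phi2_x(x, y) / phi2_x(y, x)       # Phi_2 is symmetric, so d/dY (x,y) = d/dX (y,x)
m_g = 0.5                               # (c*tau + d)^2 / det g = 1/2 for g = diag(2,1)
assert abs(jp(2 * tau) + lam * jp(tau) * m_g) / abs(jp(2 * tau)) < 1e-8
```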
For any subset S ⊆ ℍ^n, define ⟨S⟩ to be the ℚ-Zariski closure of J(S). That is, ⟨S⟩ is the smallest algebraic variety, defined over ℚ, which contains J(S). A J-special subvariety of ℂ^3n is an irreducible component of any set of the form ⟨G⟩, where G is an ℍ-special set. With this definition made, we might conjecture a direct analogue of <ref>, with j-special varieties replaced by J-special ones. This fails for a fairly obvious reason. Consider the variety V ⊆ ℂ^3 defined just by X_1 = j(τ), for some fixed quadratic τ ∈ ℍ ∖ SL_2(ℤ)·{i,ρ}. Then by modularity of j, V contains all the points J(γτ), γ ∈ SL_2(ℤ). In particular, taking Zariski closures, we have ⟨γτ⟩ ⊆ V for all γ ∈ SL_2(ℤ). Since ⟨γτ⟩ = {(j(τ), w, p_{Im(γτ)}(j(τ),χ^*(τ),w)) : w ∈ ℂ}, one sees that the various ⟨γτ⟩ are in fact distinct. So V contains infinitely many distinct J-special sets. They are maximal since ⟨ℍ⟩ = ℂ^3 ⊄ V. With this in mind, we need a version of <ref> which takes into account the action of SL_2(ℤ). The aforementioned conjecture of Pila is one such version. To state it, we will need a definition. Let 𝒮 be a collection of subsets of ℍ^n. We say 𝒮 is SL_2(ℤ)-finite if there is some finite subcollection 𝒯 ⊆ 𝒮, such that every S ∈ 𝒮 takes the form S = γ·T, for some γ ∈ SL_2(ℤ)^n, T ∈ 𝒯. Otherwise, 𝒮 is SL_2(ℤ)-infinite. Abusing notation slightly, we also use this terminology to apply to collections of points, equating points τ with singleton sets {τ} in the obvious way.
Now we can state Pila's conjecture. Let V ⊆ ℂ^3n be a proper algebraic variety defined over ℚ. There exists an SL_2(ℤ)-finite collection σ(V), consisting of proper ℍ-special varieties of ℍ^n, with the following property. Every J-special subvariety of V is contained in ⟨G⟩ for some G ∈ σ(V). We will be approaching this conjecture using a variant of the Pila-Zannier strategy and o-minimality. With a seemingly novel adaptation of the usual strategy, we are able to make some good progress. The o-minimal methods are sufficient to give us good control over quadratic points of the form (g_1σ,…,g_nσ). In this case, the methods yield a bound on the size of the determinants of the g_i ∈ GL_2^+(ℚ) and on the discriminant of σ. This is a good step towards Conjecture <ref>. There is significant difficulty, however, in dealing with points having a more complex GL_2^+(ℚ)-structure. The difficulty lies in the possibility that, for two quadratic points τ, σ ∈ ℍ, it might happen that j'(τ) and j'(σ) are algebraically dependent even when τ and σ lie in distinct GL_2^+(ℚ)-orbits. As we will discuss shortly, we do not expect this to happen, but it does not seem to be possible to exclude the possibility using o-minimal methods. So we need the following definition. Let τ=(τ_1,…,τ_n) ∈ ℍ^n be a quadratic point. Then τ may be written as τ=(σ_1, g_1,1σ_1, …, g_1,r_1σ_1, …, σ_k, g_k,1σ_k, …, g_k,r_kσ_k), with g_i,j ∈ GL_2^+(ℚ) and the σ_i lying in distinct GL_2^+(ℚ)-orbits. We say τ is j'-generic if the numbers j'(σ_1),…,j'(σ_k) are algebraically independent over ℚ. Now we can state the central theorem of the paper. Let V ⊆ ℂ^3n be a proper algebraic variety defined over ℚ. There exists an SL_2(ℤ)-finite collection σ(V), consisting of proper ℍ-special varieties of ℍ^n, with the following property. Every j'-generic ℍ-special point in J^{-1}(V) is contained in some G ∈ σ(V). Note in particular that any special point of the form (g_1σ_1,…,g_nσ_1), with g_i ∈ GL_2^+(ℚ), is automatically j'-generic, unless g_1σ_1 is in the SL_2(ℤ)-orbit of i or ρ. Hence we have the following easy corollaries.
Let V ⊆ ℂ^3n be a proper algebraic variety defined over ℚ. There exists an SL_2(ℤ)-finite collection σ(V), consisting of proper ℍ-special varieties of ℍ^n, with the following property. Every ℍ-special point of the form (g_1σ,…,g_nσ) ∈ J^{-1}(V), g_i ∈ GL_2^+(ℚ), is contained in some G ∈ σ(V). This follows directly from <ref>, except for the possible existence of points with a coordinate lying in SL_2(ℤ)·{i,ρ}. But any such points are automatically contained within SL_2(ℤ)-finitely many proper ℍ-special varieties. Let V ⊆ ℂ^3 be a proper algebraic variety defined over ℚ. Then J^{-1}(V) contains only SL_2(ℤ)-finitely many ℍ-special points. This last, of course, is simply Conjecture <ref> for n=1. To get <ref> in full generality, we would need something like the following. Let τ_1,…,τ_n ∈ ℍ be quadratic points, lying in distinct GL_2^+(ℚ)-orbits, none of which lies in the SL_2(ℤ)-orbit of i or ρ. Then j'(τ_1),…,j'(τ_n) are algebraically independent over ℚ. Otherwise put: “All quadratic points in ℍ^n are j'-generic, except those with a coordinate in the SL_2(ℤ)-orbit of i or ρ.” In this form, it is easy to see that the conjecture, together with Theorem <ref>, implies <ref>. Assume Conjecture <ref>. Then Conjecture <ref> holds. Immediate from Theorem <ref>. Should we believe Conjecture <ref>? It is quite strong, having a similar flavour to existing modular Schanuel statements; but it does fit into the existing body of conjectures. See, for instance, Bertolin's elliptico-toric conjecture (CET) <cit.>, from which Conjecture <ref> follows immediately. In turn, CET is a special case of the Grothendieck-André period conjecture. So <ref> fits well with what we might expect. It is also much stronger than we need; to get Conjecture <ref>, various weaker transcendence statements would suffice. The weaker statements are much less clean to state and fit less obviously into the existing literature, so we stick with Conjecture <ref> for this paper.
The paper is broken down as follows. Section 2 is dedicated to proving some Ax-Lindemann type results. In this section lies the primary novelty of the paper; the Ax-Lindemann results we prove here are necessarily of a very new and unusual shape, in order to account for some problems that arise with point-counting. This section is completely independent of Conjecture <ref> and does not involve considerations of j'-genericity. It is in Section 3, where we discuss the point-counting aspects, that the issue of j'-genericity arises. Section 4 brings everything together to conclude the proof of <ref>. The penultimate section of the paper, Section 5, is dedicated to proving a more precise version of Conjecture <ref>, under the assumption of Conjecture <ref>. In the final section we apply this more precise result, together with a slight adaptation of work of Scanlon <cit.>, to produce versions of our results which are uniform in algebraic families. Acknowledgements. I would like to take the opportunity to thank Jonathan Pila for his invaluable guidance and supervision, as well as for many excellent suggestions on the subject of this paper. I would also like to thank Sebastian Eterović for several useful conversations on these topics. Last but certainly not least, thanks go to my father Derek for our many discussions and his keen proof-reading eyes! § AX-LINDEMANN §.§ Technicalities We begin with a few crucial definitions. A subset of ℍ^n is a linear variety if, up to reordering of coordinates, it takes the form G = {(g_1,1τ_1, …, g_1,r_1τ_1, …, g_k,1τ_k, …, g_k,r_kτ_k, t_0,1, …, t_0,r_0) : τ_i ∈ ℍ}, where t_0,j ∈ ℍ and g_i,j ∈ GL_2^+(ℝ). A linear variety is called basic if r_0 = 0. So a basic linear variety is defined by finitely many matrices g_i,j ∈ GL_2^+(ℝ), together with a permutation of the coordinates. We will often suppress the permutation of coordinates, assuming for simplicity that the variety takes exactly the form in (<ref>). Any linear variety G ⊆ ℍ^n has an underlying basic variety attached to it, namely the
variety B ⊆ ℍ^k attained by ignoring the constant coordinates t_0,j. We say that G is a translate of B (by the t_0,j), or that B is the basic variety underlying G. As we will see in later sections, the usual counting methods don't work out perfectly when applied to the derivatives problem. One would like to be able to count linear subvarieties contained in J^{-1}(V), but the methods are slightly too coarse for this. One can, however, count those varieties which are, in some sense, approximately in J^{-1}(V). This motivates the following. Let V ⊆ ℂ^3n be an algebraic variety and B ⊆ ℍ^k a basic linear variety, given by data g_i,j as in (<ref>). For each g_i,j, take a complex number z_i,j and a real c_i,j > 0. Also take n−k triples of complex numbers (w_i,x_i,y_i). We say B is adjacent to V via z_i,j, c_i,j, w_i, x_i, y_i if for all τ_i, we have, up to permutation of coordinates, […, j(g_i,jτ_i), j'(g_i,jτ_i)z_i,j/m_g_i,j(τ_i), p_c_i,j(j(g_i,jτ_i), χ^*(g_i,jτ_i), j'(g_i,jτ_i)z_i,j/m_g_i,j(τ_i)), …, w_i, x_i, y_i, …] ∈ V. If this holds for some choice of the possible data z_i,j, c_i,j, w_i, x_i, y_i, we simply say B is adjacent to V, and write B↪V. Finally, if G is a translate of a basic linear variety B by (σ_1,…,σ_d), we say G is adjacent to V if B is adjacent to V via any z_i,j, any c_i,j and (w_i,x_i,y_i) = J(σ_i). In this case we again write G↪V. This rather intricate definition turns out to be crucial in carrying out a suitable variant of the usual o-minimal strategy used for André-Oort problems. This is the main new idea of the paper: we follow the typical Pila-Zannier strategy for diophantine problems of this type, but rather than counting those ℍ-special varieties which are contained in J^{-1}(V) directly, we instead count the ℍ-special varieties which are adjacent to V. The notion of adjacency is constructed so as to be definable, invariant under the action of SL_2(ℤ) (on the g_i,j) and invariant under Galois action (on j and χ^*); those three conditions are precisely what we need to make the strategy work.
The first step of the strategy and the primary goal of this section is to prove an “Ax-Lindemann-style” result pertaining to the notion of adjacency. The idea is that the only linear varieties (or more generally, real algebraic arcs) that can be adjacent to some variety V should be accounted for by weakly ℍ-special varieties. Given our overall goal - to count those linear varieties which are adjacent to V - this problem is directly analogous to the usual Ax-Lindemann theorems needed in the classical case. Unsurprisingly, we will need to use some existing Ax-Lindemann results. Let S ⊆ ℍ^n be an arc of a real algebraic curve and let G be the smallest weakly ℍ-special variety containing S. Suppose that G is a GUT variety. (Recall: this simply means that the matrices defining G are upper triangular.) Let V ⊆ ℂ^3n+1 be an algebraic variety such that (τ_1, J(τ_1), …, J(τ_n)) ∈ V for all (τ_1,…,τ_n) ∈ S. Then in fact this holds for all (τ_1,…,τ_n) ∈ G. Let S ⊆ ℍ^n be an arc of a real algebraic curve and let G be the smallest weakly ℍ-special variety containing S. Let V ⊆ ℂ^2n be an algebraic variety such that (…, j(τ_j), χ^*(τ_j), …) ∈ V for all (τ_1,…,τ_n) ∈ S. Then in fact this holds for all (τ_1,…,τ_n) ∈ G. Both of the above were proven in <cit.>, though the majority of the work towards <ref> was done by Pila in <cit.>. Before we can apply these to prove our central Ax-Lindemann result, we will need some technical lemmas, the first of which is simply a strengthening of <ref>. Let S ⊆ ℍ^n be an arc of a real algebraic curve and let G be the smallest weakly ℍ-special variety containing S. Suppose that G is a GUT variety. Let V ⊆ ℂ^4n+1 be an algebraic variety such that (τ_1, j(τ_1), j'(τ_1), χ(τ_1), f(τ_1), …, j(τ_n), j'(τ_n), χ(τ_n), f(τ_n)) ∈ V for all (τ_1,…,τ_n) ∈ S. Then in fact this holds for all (τ_1,…,τ_n) ∈ G. Let F be a defining polynomial of V. We will represent the various coordinates as follows. * The τ_1 coordinate will be represented by a variable T. * The j-coordinates (i.e. the 2nd, 6th, etc. coordinates)
will be represented by variables J_1,…,J_n. * The j'-coordinates will be represented by variables K_1,…,K_n. * The χ-coordinates will be represented by variables X_1,…,X_n. * The f-coordinates will be represented by variables F_1,…,F_n. Since j, j', χ and f are algebraically dependent, there is an irreducible polynomial p with the property that p(j(τ),j'(τ),χ(τ),f(τ))=0 for all τ. Consider the variety W ⊆ ℂ^4n+1 defined by p(J_i,K_i,X_i,F_i)=0 for each 1 ≤ i ≤ n. Clearly dim W = 3n+1. If we further impose the condition F(X_1,…,X_4n+1)=0, there are two possibilities. Either the resulting variety W_F still has dimension 3n+1 or it has dimension 3n. If dim W_F = 3n+1, it is automatically the case that F(τ_1,…,j(τ_k),j'(τ_k),χ(τ_k),f(τ_k),…)=0 for all (τ_1,…,τ_n) ∈ ℍ^n. On the other hand, if dim W_F = 3n, then W_F amounts to the imposition of a relation between 3 of the 4 functions. That is, there are distinct A,B,C ∈ {J,K,X,F} and a polynomial H, in 3n+1 variables, such that W_F is defined by: (T,J_1,…,F_n) ∈ W and H(T,A_1,B_1,C_1,…,A_n,B_n,C_n)=0. We then have, for the corresponding f_A,f_B,f_C ∈ {j,j',χ,f}, that H(τ_1,…,f_A(τ_i),f_B(τ_i),f_C(τ_i),…)=0 for all (τ_1,…,τ_n) ∈ S. By Theorem <ref>, this must then hold for all (τ_1,…,τ_n) ∈ G. In particular, F(τ_1,…,j(τ_i),j'(τ_i),χ(τ_i),f(τ_i),…)=0 for all (τ_1,…,τ_n) ∈ G. This holds for each defining polynomial F of V, so we're done. Let S ⊆ ℍ^n be an arc of a real algebraic curve and let ϕ be an algebraic function in 4n+1 variables. Let G be the smallest weakly ℍ-special variety containing S, and suppose that G is a GUT variety. Writing π̃(τ_1,…,τ_n)=(j(τ_1),j'(τ_1),χ(τ_1),f(τ_1),…,j(τ_n),j'(τ_n),χ(τ_n),f(τ_n)), we suppose that, on some branch of ϕ, ϕ(τ_1,π̃(τ))=0 for all τ=(τ_1,…,τ_n) ∈ S. Then this holds for all τ ∈ G, excluding perhaps some exceptional set corresponding to branch points of ϕ.
There exists an irreducible polynomial p such thatp(ϕ(𝐗),𝐗)=0for all 𝐗.Then in particular we havep(0,τ_1,π̃(τ))=0for all τ=(τ_1,…,τ_n)∈ S.By Theorem <ref>, we have this relation for all τ∈ G.We can pick a point 𝐪=(τ_1,…,τ_n)∈ S, a G-open neighbourhood U of 𝐪 and a ℂ-open neighbourhood V of 0 with the following property.Whenever τ∈ U, the only root of p(X,τ_1,π̃(τ))lying in V is root 0 (which is a root by the earlier discussion).Now, for all τ∈^n, we havep(ϕ(τ_1,π̃(τ)),τ_1,π̃(τ))=0by definition of p.In other words, ϕ(τ_1,π̃(τ)) is a root of (<ref>).However, as τ∈ U gets arbitrarily close to 𝐪, the value of ϕ(τ_1,π̃(τ)) gets arbitrarily close to 0.Hence it eventually lies in V.The only root of (<ref>) within V is 0, whence for some G-open neighbourhood of 𝐪, we haveϕ(τ_1,π̃(τ))=0.By analytic continuation, this holds for all τ∈ G, excluding some exceptional set corresponding to the branch points and branch cuts of ϕ. We conclude this section with one more technical lemma.While it may appear entirely unmotivated, hopefully the need for such a lemma will become clear during the proof of Theorem <ref>. Let S^n be an arc of a real algebraic curve and let ϕ be an algebraic function in 4n variables.Suppose that τ_1=ϕ(…,j(τ_k),j'(τ_k),χ(τ_k),f(τ_k),…) for all (τ_1,…,τ_n)∈ S.Let G be the smallest weakly -special variety containing S, and suppose that G is a GUT variety.Then τ_1 is constant on S. 
Write y=τ_1 and suppose that y is nonconstant.Let us retain the abbreviation π̃(τ)=(j(τ_1),j'(τ_1),χ(τ_1),f(τ_1),…,j(τ_n),j'(τ_n),χ(τ_n),f(τ_n)).Then S can be parametrised asS={(x_1(y)+iy,x_2(y)+iy_2(y),…,x_n(y)+iy_n(y)):y∈ U}for some interval Uℝ and algebraic functions x_i, y_i.Now, take one of the polynomials p(x_1,y_1,…,x_n,y_n) defining S.We can writep[τ_1-iϕ(π̃(τ)),ϕ(π̃(τ)),x_2(ϕ(π̃(τ))),y_2(ϕ(π̃(τ))),…,x_n(ϕ(π̃(τ))),y_n(ϕ(π̃(τ)))]=0for all τ=(τ_1,…,τ_n)∈ S.Otherwise put, we have an algebraic function ψ such thatψ(τ_1,π̃(τ))=0for all τ∈ S.By Corollary <ref>, this holds for all τ∈ G, whence <ref> holds for all τ∈ G. By assumption, G is a GUT variety.Since y is assumed to be nonconstant on S, the variable τ_1 cannot be constant on G.So up to permutation of coordinates G looks like{(τ_1,g_2τ_1,…,g_kτ_1):τ_1∈}× Hfor some upper triangular matrices g_i∈ and some GUT variety H.For any τ_1∈, τ'∈ H and any t∈ℤ, we then haveτ_t:=(τ_1+t,g_2(τ_1+t),…,g_k(τ_1+t),τ')∈ G.Since the g_i are upper triangular, we can find an integer N with the following property.For every t∈ℤ, there exists k∈ℤ withg_i(τ_1+tN)=k+g_i(τ_1),for all i.By the periodicity of j, j', χ and f, it follows thatπ̃(τ_tN)=π̃(τ_0)for all t∈ℤ.So for all τ=(τ_1,…,τ_n)∈ G and all t∈ℤ we havep(τ_1+tN-iϕ(π̃(τ)),ϕ(π̃(τ)),x_2(ϕ(π̃(τ))),y_2(ϕ(π̃(τ))),…,x_n(ϕ(π̃(τ))),y_n(ϕ(π̃(τ))))=0.In particular, whenever τ=(x_1(y)+iy,x_2(y)+iy_2(y),…,x_n(y)+iy_n(y))∈ S, we havep(x_1(y)+tN,y,x_2(y),y_2(y),…,x_n(y),y_n(y))=0.This holds for every polynomial p defining S.Since S has only one real dimension, it must therefore be a horizontal line in the τ_1 coordinate.That is, y is constant.Contradiction!§.§ Ax-Lindemann for AdjacencyWe can now prove our main Ax-Lindemann theorem.The idea is to show that a real algebraic arc in ^n which is `adjacent' to a variety V (in a suitable sense) must be contained in a weakly -special variety which is itself adjacent to V. 
Let V ⊆ ℂ^3n be an algebraic variety. Let S be an arc of a real algebraic curve lying in (ℍ × GL_2^+(ℝ))^n. Define Ŝ = {(g_1τ_1,…,g_nτ_n) : (τ_1,g_1,…,τ_n,g_n) ∈ S}, and suppose that Ŝ is positive-dimensional; that is, not all of the g_jτ_j are constant on S. Further suppose that, for some c_i ∈ ℝ, […, j(g_iτ_i), j'(g_iτ_i)/m_g_i(τ_i), p_c_i(j(g_iτ_i), χ^*(g_iτ_i), j'(g_iτ_i)/m_g_i(τ_i)), …] ∈ V for all (τ_1,g_1,…,τ_n,g_n) ∈ S. Then there exists a weakly ℍ-special variety G with Ŝ ⊆ G ↪ V. Note. The functions j(g_jτ_j), χ^*(g_jτ_j) and j'(g_jτ_j)/m_g_j(τ_j) are unaffected if we replace g_j by γg_j for any γ ∈ SL_2(ℤ). Hence we may assume that G, the smallest weakly ℍ-special variety containing Ŝ, is a GUT variety. This will be useful several times throughout. Idea of Proof. First, we attempt to parametrise the relevant algebraic arcs in terms of the imaginary part y_j of one of the variables. With suitable manipulations, we reach one of two outcomes: either a particular complex analytic relation involving just j, χ, f and j' holds, or y_j is equal to an algebraic function of j, χ, f and j'. In the first case, the result comes fairly easily. In the second case, we apply Lemma <ref> to see that y_j is constant; this situation can be dealt with easily.
Given (…,τ_j,g_j,…)∈ S, let us write g_jτ_j=σ_j=x_j+iy_jand m_g_j(τ_j)=ρ_j=u_j+iv_j.We wish to parametrise the real algebraic arcS̃={(…,g_jτ_j,m_g_j(τ_j),…):(…,τ_j,g_j,…)∈ S} (×ℂ)^nin terms of one of the y_j.Thus we first have to deal with the possibility that all of the y_j are in fact constant on S̃.In this situation, let us first assume that the ρ_j are also constant.Then we have[…,j(σ_j),j'(σ_j)ρ_j,p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π y_j,j'(σ_j)ρ_j),…]∈ Vfor some constants y_j, ρ_j and all σ=(σ_1,…,σ_n)∈S.Recall that G, the weakly -special closure of S, can be assumed to be a GUT variety.So we can apply Theorem <ref> to see that (<ref>) holds for all σ∈ G.Taking Zariski closures (over ℂ) in (<ref>), we get[…,j(σ_j),w_j,p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π y_j,w_j),…]∈ Vfor all 𝐪∈ G and (w_1,…,w_n)∈ V_j(𝐪).Here V_j(𝐪) is a variety depending only on the j(σ_j), which contains the point (j'(σ_1)ρ_1,…,j'(σ_n)ρ_n).As in <cit.>, it is easy to find a sequence of matrices γ_t∈, t∈ℕ, such that (χ(γ_tσ_1),…,χ(γ_tσ_n))→ (χ(σ_1),…,χ(σ_n))and(f(γ_tσ_1),…,f(γ_tσ_n))→ 0as t→∞.So by continuity (and the invariance of j), we get[…,j(σ_j),w_j,p_c_j(j(σ_j),χ(σ_j),w_j),…]∈ Vfor all 𝐪∈ G and 𝐰∈ W_j(𝐪).By an isomorphism theorem from <cit.>, we get[…,j(σ_j),w_j,p_c_j(j(σ_j),χ^*(σ_j),w_j),…]∈ Vand hence[…,j(σ_j),j'(σ_j)ρ_j,p_c_j(j(σ_j),χ^*(σ_j),j'(σ_j)ρ_j),…]∈ V,for all σ∈ G.This says precisely that G↪ V. 
Next we deal with the situation where one of the y_j is nonconstant on S̃.Without loss of generality, suppose it is y_1 and write y=y_1.We can parametriseS̃={(x_1(y)+iy,u_1(y)+iv_1(y),…,x_n(y)+iy_n(y),u_n(y)+iv_n(y)):y∈ I},for some interval Iℝ and algebraic functions x_i,y_i,u_i,v_i.Letting F be a defining polynomial of V, we haveF[…,j(σ_j),j'(σ_j)u_j(y)+iv_j(y),p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π y_j(y),j'(σ_j)u_j(y)+iv_j(y)),…]=0,where we set y_1(y)=y.We can rewrite this; there is an algebraic function s such that the above holds if and only ifs(y,π̃(σ))=0for all σ=(σ_1,…,σ_n)∈S.Here we are writing y=σ_1 and, as before:π̃(σ_1,…,σ_n)=(j(σ_1),j'(σ_1),χ(σ_1),f(σ_1),…,j(σ_n),j'(σ_n),χ(σ_n),f(σ_n)). Since s is an algebraic function, there is a nontrivial irreducible polynomial p_s such that p_s(s(𝐗),𝐗)=0for all 𝐗.In particular,p_s(0,y,π̃(σ))=0for all σ∈S and y=σ_1.Since p_s is irreducible and nontrivial, we get a nontrivial q_s(𝐗)=p_s(0,𝐗).(It is clear that p_s(t,𝐗) t.)Now we apply the following iterative procedure to q_s.* Inspect separately each coefficient r_k of T^k in q(T,…).If r_k(π̃(σ))=0for all σ∈S and all k, then terminate.Otherwise, let q' be the polynomial produced by removing from q all coefficients r_k which have the above property. * If q' is irreducible, terminate.Otherwise, there is a factor q” of q' withq”(y,π̃(σ))=0for all σ∈S and y=σ_1. * We have a polynomial q”, which retains the property that q”(y,π̃(σ))=0for all σ∈S and y=σ_1.Repeat from step 1, with q” instead of q. This must eventually terminate, since step 2 will always reduce the degree of the polynomial in question.So we have two possibilities. 
If we terminated at step 1, then working backwards we see that every coefficient r_k of T^k in q_s has the property thatr_k(π̃(σ))=0for all σ∈S.Using the fact that G is a GUT variety, we can apply Theorem <ref> to see that this holds for all σ∈ G.In particular, q_s(y,π̃(σ))=0for all y∈ℂ and all σ∈ G.Otherwise put, 0 is a root ofp_s(X,y,π̃(σ))for all y∈ℂ and all σ∈ G.Now we proceed much as we did in the proof of Corollary <ref>.Choose a point 𝐚=(a_1,…,a_n)∈S,a G-open neighbourhood U of 𝐚 and a ℂ-open neighbourhood W of 0 with the following property.Whenever σ=(σ_1,…,σ_n)∈ U, the only root of (<ref>) lying in W is 0 itself.Now recall that s(y,π̃(σ)) is a root of (<ref>) for all y and all σ.This is just the definition of p_s.Since s vanishes at 𝐚∈S, there is a G-open neighbourhood 𝐚∈ U' Usuch thats( a_1,π̃(σ))∈ Wfor all σ∈ U'.But the only root of (<ref>) lying in W is the root 0.So it must be the case thats( a_1,π̃(σ))=0for all σ∈ U' and hence for all σ∈ G.Recalling the definition of S, we see thatF[…,j(σ_j),j'(σ_j)u_j(y)+iv_j(y),p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π y_j(y),j'(σ_j)u_j(y)+iv_j(y)),…]=0,for all σ=(σ_1,…,σ_n)∈ G and for constants y_j=y_j( a_1) and ρ_j=u_j( a_1)+iv_j( a_1).If we repeat this whole procedure for each defining polynomial of V, we get[…,j(σ_j),j'(σ_j)ρ_j,p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π y_j,j'(σ_j)ρ_j),…]∈ Vfor all σ∈ G and constants y_j=y_j( a_1) and ρ_j=u_j( a_1)+iv_j( a_1).Now we are in exactly the same position as we were for (<ref>), so we conclude as we did earlier. If we terminated at step 2, then we have an irreducible polynomial q such thatq(y,π̃(σ))=0for all σ=(σ_1,…,σ_n)∈S.Moreover, for every k, the coefficient r_k of T^k in q(T,…) has the property thatr_k(π̃(σ))does not vanish identically for σ∈S.Thus we can extract an algebraic function ϕ such thaty=ϕ(π̃(σ))for all σ∈S and y=σ_1.Now we may apply Lemma <ref> (once again using the fact that the weakly -special closure of S is a GUT variety) and see that y is constant on S, a contradiction. 
Finally we deal with the case where the y_j are all constant on S̃, but perhaps the ρ_j vary.Say y_j=a_j.Note, by hypothesis, that at least one of the σ_j is nonconstant on S̃.Without loss of generality, let us say it is σ_1.We have the relation[…,j(σ_j),j'(σ_j)ρ_j,p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π a_j,j'(σ_j)ρ_j),…]∈ V,holding for all (…,σ_j,ρ_j,…)∈S̃.This is a complex analytic relation.Consider the intersection of an irreducible algebraic variety Wℂ^2n with (×ℂ)^n.A connected component of this intersection is called a complex algebraiccomponent.Let A be the smallest complex algebraic component containing S̃.Then by analytic continuation, (<ref>) holds for all (…,σ_j,ρ_j,…)∈ A.Since the weakly -special closure of S is a GUT variety G, the projection of A onto the σ_j coordinates also has G as its weakly -special closure.Hence we can find a real algebraic arcT (×ℂ)^nwith the following properties:* On T, the imaginary part of σ_1 is nonconstant. * Whenever (…,σ_j,r_j,…)∈ T, we have […,j(σ_j),j'(σ_j)ρ_j,p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π a_j,j'(σ_j)ρ_j),…]∈ V.* The projection of T onto the σ_j coordinates has G as its weakly -special closure.We can then parametrise T in terms of y=σ_1.Exactly as before, we then rewrite the polynomial relation as an algebraic function s, yieldings(y,π̃(σ_1,…,σ_n))=0whenever (…,σ_j,r_j,…)∈ T and y=σ_1.By the same analysis as earlier, we end up with a constant a such thats(a,π̃(σ_1,…,σ_n))=0for all (σ_1,…,σ_n)∈ G.This in turn yields[…,j(σ_j),j'(σ_j)ρ_j(a),p_c_j(j(σ_j),χ(σ_j)-3f(σ_j)π a_j,j'(σ_j)ρ_j(a)),…]∈ Vfor all (σ_1,…,σ_n)∈ G.Since ρ_j(a) and a_j are constants, we can then conclude exactly as we did for (<ref>).The above theorem will be used in our point-counting arguments in the next section, in order to take a real algebraic arc and produce from it a weakly -special variety with certain adjacency properties.We will also need the following corollary, which we will eventually use to ensure that there are only finitely many basic linear 
varieties adjacent to our fixed variety V.

Let V ⊆ ℂ^3k be a variety and let B be a basic linear variety adjacent to V. Suppose B is maximal with this property. Then B is -special, i.e. all of the g_i,j defining B lie in .

Immediate from <ref>.

§ POINT-COUNTING

In this section, we will discuss the necessary o-minimality and point-counting considerations. For readers familiar with the area, the results here should fit well with expectations, though they are necessarily rather less neat than their equivalents in more classical settings. We are assuming some basic familiarity with o-minimality and the Pila-Wilkie theorems; see <cit.>, <cit.> and <cit.>.

The first crucial fact we need is that all of the relevant functions are definable in an appropriate sense. Namely, the restrictions of j, j', j'', χ^*, χ and f to any standard fundamental domain for the action of  on  are definable in (the o-minimal structure) . We will use the “most standard” fundamental domain 𝔽={τ∈ : -1/2≤Re τ≤ 1/2, |τ|≥ 1}. The definability can be seen in various ways; either as a consequence of the theory of elliptic curves and a result of Peterzil and Starchenko on definability of the Weierstrass ℘-function <cit.>, or via q-expansions.
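The q-expansion route can be illustrated as follows (a sketch only; the cited sources carry the actual argument). On 𝔽 one has Im τ ≥ √3/2, so q = e^{2πiτ} lies in the closed disc |q| ≤ e^{-π√3} < 1, and on that disc j is the sum of 1/q and a convergent power series:

```latex
% On the fundamental domain, |q| \le e^{-\pi\sqrt{3}} < 1 with q = e^{2\pi i\tau},
% and j is 1/q plus a power series convergent on that closed disc:
j(\tau) \;=\; \frac{1}{q} + 744 + 196884\,q + 21493760\,q^2 + 864299970\,q^3 + \cdots,
\qquad q = e^{2\pi i \tau}.
```

Since τ ↦ q on 𝔽 is built from the real exponential together with restricted sine and cosine, and restricted convergent power series are definable in any o-minimal structure containing the restricted analytic functions, this is one way to see the definability claim.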
Given this fact, the idea is a fairly standard one: first assume that a variety contains an infinite set of special points. Then the preimage of that variety (under J, for instance), intersected with 𝔽^n, is a definable set in . By taking Galois conjugates, one can force this definable set to contain many quadratic points of bounded height, in the sense of the Pila-Wilkie theorem. So it will contain a real algebraic arc, allowing us to apply the Ax-Lindemann results of the previous section.

The missing ingredient so far is the Galois aspect. Fortunately, much of this is already done for us; the points about which we need Galois information are just the j-special and χ^*-special points. For control over these, we have the following.

Let τ∈ be a quadratic point and consider the algebraic numbers j(τ) and χ^*(τ). Let σ be a Galois conjugation acting on ℚ(j(τ))⊇ℚ(χ^*(τ)). Let τ' be a quadratic point such that j(τ')=σ(j(τ)). Then χ^*(τ')=σ(χ^*(τ)).

For a proof, see <cit.>. This tells us, essentially, that to keep track of the Galois conjugates of χ^*(τ), we need only keep track of the Galois conjugates of j(τ). We already have sufficiently good control of the Galois conjugates of j(τ); it is a consequence of the Siegel bound for class numbers of quadratic fields <cit.> that [ℚ(j(τ)):ℚ]≫ D^1/4, where D is the discriminant of τ. (In fact we can do much better, but this is sufficient for our purposes.) Hence in particular, there will be ≫ D^1/4 Galois conjugates of a point (j(τ),χ^*(τ)), over any fixed number field. This fact is central to our main “point-counting theorem”. In this theorem we use the assumption of j'-genericity for the first, and only, time in the paper.

Let V ⊆ ℂ^3n be an algebraic variety defined over ℚ. Then there is a number D=D(V) with the following property. Let τ∈ J^-1(V) be a j'-generic quadratic point with discriminant greater than D, and suppose none of the coordinates of τ lies in ·{i,ρ}. Then there is an -special variety G with τ∈ G ↪ V.

Let K be a number field containing
a field of definition for V.Suppose we have a partition of {1,…, n},S_1∪…∪ S_k,with each S_i∅.For each i, let s_i=min S_i and r_i=#S_i-1.Given σ=(σ_1,…,σ_k)∈^kand g=(g_1,1,…,g_1,r_1,…,g_k,1,…,g_k,r_k)∈^n-k,define the following set:Z_σ,g={(τ,h)∈^k×^n-k:h_i,j= g_i,j, […, j(τ_i),j'(τ_i),p_σ_i(j(τ_i),χ^*(τ_i),j'(τ_i)),… …, j(h_i,jτ_i),j'(h_i,jτ_i)m_g_i,j(σ_i)m_h_i,j(τ_i),p_ g_i,jσ_i(j(h_i,jτ_i),χ^*(h_i,jτ_i),j'(h_i,jτ_i)m_g_i,j(σ_i)m_h_i,j(τ_i)),…]∈ V}Consider this as a family of sets, fibred over ^k×^n-k.There is one such family for each of the finitely many partitions of {1,…, n}, and we consider them all together.They are certainly not definable families.However, for a given partition, the family𝒵_σ,g={(τ,h)∈ Z_σ,g:τ_i, h_i,jτ_i∈𝔽, i≤ k, j≤ r_k}is definable in . Now let us consider a j'-generic special point τ∈ J^-1(V), of large discriminant D(τ).Up to permutation of coordinates, τ looks likeτ=(σ_1,g_1,1σ_1,…,g_1,r_1σ_1,…,σ_k,g_k,1σ_k,…,g_k,r_kσ_k),with the σ_i lying in distinct -orbits, and g_i,j∈.So τ corresponds to a partition of {1,…, n} in the obvious way.Writing g_i,j as primitive integer matrices, let N_i,j= g_i,j.Recall that the j'-genericity of τ means thatj'(σ_1),…,j'(σ_k)are algebraically independent over ℚ.So we see that the ℚ-Zariski closure τ of J(τ) is the set of points of the form[…,j(σ_i),w_i,p_σ_i(j(σ_i),χ^*(σ_i),w_i),…,j(g_i,jσ_i),-w_iλ_N_i,j(j(σ_i),j(g_i,jσ_i))m_g_i,j(σ_i), p_ g_i,jσ_i(j(g_i,jσ_i),χ^*(g_i,jσ_i),-w_iλ_N_i,j(j(σ_i),j(g_i,jσ_i))m_g_i,j(σ_i)),…],for some w_1,…,w_k∈ℂ.We will show that the existence of this τ implies that 𝒵_σ,g contains ≫ D(τ)^1/4 quadratic points of bounded height.As is typical in the Pila-Zannier strategy, these new points will arise from Galois conjugates of j(τ).To begin, we need to define a variety which keeps track of τ and its Galois conjugates.Said variety will be a subvariety of ℂ^2n; we will write a general element of ℂ^2n as (…,X_i,Y_i,…,X_i,j,Y_i,j,…),with i≤ k and j≤ r_i, matching the structure of the 
underlying partition of {1,…,n}. Let V_σ,g={(𝐗,𝐘)∈ℂ^2n:∀ w_1,…,w_k∈ℂ, […,X_i,w_i,p_σ_i(X_i,Y_i,w_i),…,X_i,j,-w_iλ_N_i,j(X_i,X_i,j)m_g_i,j(σ_i), p_ g_i,jσ_i(X_i,j,Y_i,j,-w_iλ_N_i,j(X_i,X_i,j)m_g_i,j(σ_i)),…]∈ V}. Then V_σ,g is a subvariety of ℂ^2n, defined over K(σ,σ). This definition is set up to mirror the shape of τ. Thus, since τ⊆ V, we see that V_σ,g must contain the point (j,χ^*)(τ). Hence V_σ,g also contains every Galois conjugate (over K(σ,σ)) of (j,χ^*)(τ). By Proposition <ref>, such a Galois conjugate must take the form (j,χ^*)(τ'), for some quadratic τ' with D(τ')=D(τ). Moreover, by the existence of the modular polynomial Φ_N, τ' must have the same structure as τ. That is: τ'=(σ_1',g_1,1'σ_1',…,g_1,r_1'σ_1',…,σ_k',g_k,1'σ_k',…,g_k,r_k'σ_k'), where the σ_i' are quadratic points and g_i,j'= g_i,jγ_i,j, for some γ_i,j∈. Further, by the modularity of j and χ^*, we can ensure that σ_i'∈𝔽. For each τ' arising this way, let us take w_i=j'(σ_i') in the definition of V_σ,g. Noting that -j'(σ_i')λ_N_i,j(j(σ_i'),j(g_i,j'τ))m_g_i,j(σ_i)=j'(g_i,j'σ_i')m_g_i,j(σ_i)m_g_i,j'(σ_i'), we see that (σ',g')∈ Z_σ,g. Further, there is γ_i,j'∈ such that γ_i,j'g_i,j'σ_i'∈𝔽. This yields (σ',γ'g')∈𝒵_σ,g.

By (<ref>), there are ≫ D(τ)^1/4 Galois conjugates of (j,χ^*)(τ) over ℚ. Since K(σ,σ) is an extension of K of degree at most 4n, we have [ℚ(j(τ)):K(σ,σ)]=[ℚ(j(τ)):ℚ]/c, where c is an absolute constant. Hence there are ≫ D(τ)^1/4 points (σ',γ'g') lying in 𝒵_σ,g. Moreover, it is a consequence of Proposition 5.2 in <cit.> that the corresponding γ_i,j,γ_i,j' can be chosen to have height polynomial in D, whence (σ',γ'g') has height polynomial in D. So the existence of τ∈ J^-1(V), with discriminant D(τ), ensures that 𝒵_σ,g contains ≫ D(τ)^1/4 points of bounded height (and degree at most 2). At this point, we can apply the uniform Pila-Wilkie Theorem. Playing the upper bound from uniform Pila-Wilkie against the lower bound found above, we find a number D such that whenever τ∈ J^-1(V) has discriminant greater than D,
the corresponding 𝒵_σ,g contains an arc T of a real algebraic curve. Further, we can ensure that T contains the (σ',γ'g') corresponding to one of the τ' arising from the Galois conjugates of (j,χ^*)(τ). We would like to apply Theorem <ref> to some algebraic arc constructed from T. Indeed, let S={(…,τ_i,[ 1 0; 0 1 ],…,τ_i,h_i,j,…):(τ,h)∈ T}, where h is the element of  corresponding to the image of h as an element of PGL_2(ℝ). Also let S={(…,τ_i,…,h_i,jτ_i,…):(τ,h)∈ T}. Before we can apply Theorem <ref>, it only remains to check that S is indeed an arc, rather than just a point. This is easy to see; if τ_i and h_i,jτ_i are all constant on S, then h must be constant, up to determinant, on T. Since the determinant of h_i,j is fixed in the definition of Z_σ,g, it follows that τ_i and h_i,j are both constant on T, whence T itself is just a point, which is a contradiction.

So <ref> yields an -special set H with S ⊆ H ↪ V. Since (σ',γ'g')∈ T, we have γτ'∈S for some γ∈^n. Hence, some -translate H' of H contains τ', and remains adjacent to V. Suppose that H'=B×{τ_k+1',…,τ_n'} for some basic -special variety B. Since (j,χ^*)(τ') was a Galois conjugate of (j,χ^*)(τ), we can now apply the inverse Galois conjugation to see that B×{τ_k+1,…,τ_n}↪ V. For suitable γ, the -special variety G=γ B×{τ_k+1,…,τ_n} will contain τ and is still adjacent to V.

The above result is one of two crucial pieces of “counting” we need in order to prove <ref>. The other half is the following proposition. The idea is that <ref> will be used to count the number of isolated points that can arise in J^-1(V) (the “zero-dimensional pieces”), and this next proposition will count the “positive-dimensional pieces”; namely the basic -special varieties. Together, <ref> and <ref> will act as the engine driving the inductive argument at the heart of the proof of <ref>.
Let V ⊆ ℂ^3n be a variety. Consider the definable subset consisting of those proper basic linear varieties B ⊆ ^k (for any k) such that:
* B meets 𝔽^k in its full dimension.
* B is adjacent to V.
* B is maximal with the above properties.
Then there are only finitely many such B, and each is -special.

By Corollary <ref>, the only basic linear B which are maximally adjacent to V are necessarily -special. For such B, all the defining g_i,j are in , so the collection of such B is parametrised by a countable set. The conditions specified are definable (compare with, for instance, <cit.>), so we have a countable definable set, which is therefore finite.

§ BRINGING THE PROOF TOGETHER

As we bring everything together to prove our central theorem, let us recall the statement.

Let V ⊆ ℂ^3n be a proper algebraic variety defined over ℚ. There exists an -finite collection σ(V), consisting of proper -special varieties of ^n, with the following property. Every j'-generic -special point in J^-1(V) is contained in some G∈σ(V).

First, let us note: we can safely ignore any τ∈ J^-1(V) which have a coordinate lying in ·{i,ρ}, since these all clearly lie in -finitely many proper -special subvarieties. We work by induction on n. For n=1, we argue as follows. Suppose that V ⊆ ℂ^3 is an algebraic variety defined over ℚ and let D=D(V) be the number given to us by <ref>. If J^-1(V) contains -infinitely many quadratic points, then in particular it contains one of discriminant greater than D. Theorem <ref> then tells us that  is adjacent to V. This implies that ∀τ∈, (j(τ),j'(τ)z,p_c(j(τ),χ^*(τ),j'(τ)z))∈ V for some c>0 and z∈ℂ. Taking Zariski closures (over ℂ) this says that ∀ w∈ℂ, τ∈, (j(τ),w,p_c(j(τ),χ^*(τ),w))∈ V. In particular, for any τ with τ=c, we have (j(τ),j'(τ),p_c(j(τ),χ^*(τ),j'(τ)))∈ V. For such τ, p_c(j(τ),χ^*(τ),j'(τ))=j''(τ) by definition. Hence J(τ)∈ V for all τ with τ=c. By analytic continuation, this says that J()⊆ V, whence V=ℂ^3.
So by induction we may assume the result holds for all Vℂ^3k, k<n.The first stage is to construct a variety V^* which is designed to account for all possible positive-dimensional special subvarieties of V.Let 𝒢 be the finite collection of proper basic -special subvarieties (of some ^k) afforded by applying Proposition <ref> to V. Then let𝒢_1^*={ω(γ·(B×^n-k)):B∈𝒢, B^k, γ∈^n,ω a permutation of the coordinates}.Since 𝒢 was finite, 𝒢_1^* is -finite.Next, consider the variety V_kℂ^3(n-k), defined over ℚ by V_k={𝐗∈ℂ^3(n-k):The translate of ℂ^3k by 𝐗(for some choice of ordering of coordinates) is contained in V}.Clearly V_k is a proper subvariety of ℂ^3(n-k).By our inductive assumption, there is some -finite collection ℱ_k of proper -special subvarieties of ^n-k.Every j'-generic -special point in J^-1(V_k) is contained in some F∈ℱ_k.Let𝒢_2^*={ω(^k× F):F∈ℱ_k,1≤ k< n,ω a permutation of coordinates}.Then 𝒢_2^* is -finite.Let 𝒢^*=𝒢_1^*∪𝒢_2^*,and let V^*=⋃𝒢^*.Since 𝒢^* consists of -finitely many proper -special subvarieties, V^* is a proper subvariety of ℂ^3n.Suppose now that J^-1(V∖ V^*) contains -infinitely many j'-generic quadratic points.In particular, there is some j'-generic quadratic τ∈ V∖ V^* with D(τ)>D=D(V).By <ref>, there is some -special set H withτ∈ H ↪ V.Now, H is a translate of some basic -special variety B^k.If B is a proper subvariety of ^k, then H should have been accounted for by 𝒢_1^*, whence J(H) V^*, which contradicts τ∉V^*.So we must have B=^k.So, up to permutation of coordinates, we haveH=^k×{τ_k+1,…,τ_n}.As in the case n=1 above, the fact that H ↪ V (via some data including some positive real numbers c_i) tells us thatJ(σ_1,…,σ_k,τ_k+1,…,τ_n)∈ Vwhenever σ_i=c_i.By analytic continuation we then see thatJ(^k×{τ_k+1,…,τ_n}) V,whenceℂ^3k×{J(τ_k+1,…,τ_n)} V.So (τ_k+1,…,τ_n) is an -special point of J^-1(V_k).Moreover, it is j'-generic since τ was.Hence (τ_k+1,…,τ_n) should have been accounted for by 𝒢_2^*.Hence τ∈ V^*, which is a contradiction.So J^-1(V∖ 
V^*) can only contain -finitely many j'-generic quadratic points, whence the j'-generic quadratic points in J^-1(V) are accounted for by the -finite collection 𝒢^*, together with -finitely many additional points.

In the next section, we will state a more precise version of Conjecture <ref>, and prove it under the assumption of <ref>. Before we can do so, we will take the time now to note the following. Proposition <ref> clearly holds uniformly, and by using the uniform Pila-Wilkie Theorem, we can also get a uniform version of Theorem <ref>. Using this uniformity, it is easy to get the following uniform version of <ref>.

Let V ⊆ ℂ^3n+k be an algebraic variety defined over ℚ, considered as an algebraic family of varieties, V_𝐚 ⊆ ℂ^3n, 𝐚∈ℂ^k. For each positive integer r, there is an -finite collection σ_r(V), consisting of proper -special subvarieties of ^n, with the following property. Whenever 𝐚∈ℚ^k satisfies max [ℚ(a_i):ℚ]≤ r and V_𝐚 is a proper subvariety of ℂ^3n, every j'-generic quadratic point in J^-1(V_𝐚) is contained in some G∈σ_r(V).

§ A MORE PRECISE STATEMENT

Readers familiar with the normal shape of André-Oort statements may have noticed that <ref> is rather weaker than one might expect. Even taking into account the action of , a more natural analogue of the classical case might look like the following.

Let V ⊆ ℂ^3n be an algebraic variety defined over ℚ. Then the collection of maximal J-special subvarieties of V is -finite.

This turns out to be false. The reason for its failure is fairly simple; the modular relations that relate j'(gτ) (for some g∈) with j'(τ) include not just j(τ), j'(τ), j(gτ) and j'(gτ), but also include instances of m_g(τ)=(cτ+d)^2/N. With the right polynomial, one can therefore enforce arbitrary relations between m_g(τ) and other m_h(σ) arising in other coordinates. Similarly, the modular relation for j'' introduces new variables to the equation. Indeed, by differentiating the modular polynomials one can find a rational function μ_N (in 7 variables) such
thatj”(gτ)=μ_N(j(τ),j(gτ),j'(τ),j'(gτ),j”(τ),c,(cτ+d)),where g=[ a b; c d ] is a primitive integer matrix of determinant N.Moreover, μ_N is linear in the c-coordinate.Hence we are able to enforce relations on the c that can arise, as well as the m_g(τ).Let us see some examples to illustrate this issue. * Let Wℂ^2 be an algebraic variety defined over ℚ.Suppose that W has at least one solution (x,y) where x and y are both squares of quadratic points in .Fix two positive integers M and N. Writing a general element of ℂ^12 as (X_1,Y_1,Z_1,…,X_4,Y_4,Z_4), consider the variety Vℂ^12 defined (over ℚ) by Φ_M(X_1,X_2)=0, Φ_N(X_3,X_4)=0,(-Y_2Y_1λ_M(X_1,X_2),-Y_4Y_3λ_N(X_3,X_4))∈ W. Then the special points of J^-1(V) are precisely the points (τ,gτ,σ,hσ), where g and h are (arbitrary) primitive integer matrices with determinant M and N respectively, and τ, σ are quadratic points satisfying (m_g(τ),m_h(σ))∈ W. Since W has at least one solution which is a square of a quadratic point, we can certainly find τ and σ to solve this equation.Indeed, by modifying τ and σ we can solve this equation for any g and h of the right determinant.The resulting collection of special points is certainly -infinite, but no positive-dimensional -special variety is contained in J^-1(V).This example therefore serves as a counterexample to the hypothetical statement <ref>.Since the points all lie within the -translates of the -special set {(τ_1,gτ_1,τ_2,hτ_2):τ_i∈}, our main theorem <ref> is still fine!Let us see one more example.* Fix a quadratic point σ∈ and γ=[ a b; c d ]∈.Then(cσ+d)^2=m_γ(σ)=j'(γσ)j'(σ)and c=j”(γσ)-j”(σ)(cσ+d)^42(cσ+d)^3j'(τ),whence c^2 =(j”(γσ)-j”(σ)(cσ+d)^4)^24(cσ+d)^6j'(σ)^2=(j”(γσ)-j”(σ)(j'(γσ)j'(σ))^2)^24(j'(γσ)j'(σ))^3j'(σ)^2.So for the appropriate rational function q we havec^2=q(j'(σ),j'(γσ), j”(σ),j”(γσ)).Given a variety Wℂ^2, defined over ℚ, we can then define Vℂ^9 byΦ_N(X_1,X_2),X_3=j(σ), ∀ w∈ℂ, (-Y_2Y_1λ_N(X_1,X_2),q[w,Y_3, p_σ(j(σ),χ^*(σ),w),Z_3])∈ W.Then the -special 
points of J^-1(V) are exactly those points(τ,gτ,γσ),where g=[ a b; c d ] is any primitive integer matrix of determinant N, γ=[ A B; C D ]∈ and ((cτ+d)^2,C^2)∈ W.Once again, if W is suitable then this is an -infinite collection, but no positive dimensional -special set lies in J^-1(V). With variants of the examples above, one can produce varieties whose special points satisfy almost any arbitrary relation, provided the relation is written in terms of variables c, (cτ+d) corresponding to matrices g∈, and C,D corresponding to some -translate of a fixed σ.The idea of our stronger result is that these relations should be the only obstruction to a result like <ref>.In order to state this precisely, we will need to go through some technicalities.Given a proper -special set G^n, there is an underlying partition of {1,…,n} which can be written asS_0∪ S_1∪…∪ S_h∪ T_1∪…∪ T_k,with only S_0 allowed to be empty and only the T_i allowed to be singletons.(The condition that G is proper is equivalent to requiring that k < n.)For i>0, let r_i=# S_i-1 and let s_i be the smallest element of s_i.Also associated to G are some matrices g_i,1,…,g_i,r_i∈, so that each coordinate in S_i (except the s_i coordinate) is defined by τ=g_i,jτ_s_i.Given such a G and given a tuple of matricesγ=(γ_1,…,γ_#S_0,γ_1,1,…,γ_1,r_1,…,γ_h,1,…,γ_h,r_h)∈^#S_0+∑ r_i,let c_i,d_i be the bottom row of γ_i, and c_i,j,d_i,j be the bottom row of the matrix γ_i,jg_i,j. This is all building towards the following definitions.A variety Wℂ^2# S_0+2∑ r_i, defined over ℚ, is called a G-variety.For a G-variety W and a given γ∈^#S_0+∑ r_i (as above), we define W^γ^{s_1,…,s_h} to be the set of (τ_s_1,…,τ_s_h) such that(…,c_i,d_i,…,c_i,j, c_i,jτ_s_i+d_i,j,…)∈ W. 
If γ is an element of the full group ^n, then it consists of γ'∈^#S_0+∑ r_i, as above, together with some more matricesα_s_1,…,α_s_h∈,corresponding to the s_i-coordinates, andβ_1,…β_k∈corresponding to the singleton coordinates in the T_i.For such a γ∈^n, we will abuse notation and writeW^γ=(α_s_1,…,α_s_h)· W^γ'.(The β_i have no meaningful effect.)We will write (γ,W)={(τ_1,…,τ_n)∈^n: (τ_s_1,…,τ_s_h)∈ W^γ and every τ_i is quadratic}.In the case where h=0, so that we have no τ-coordinates to work with, the variety W^γ only enforces conditions on the γ corresponding to the S_0-coordinates.In this case, we will use the convention that(γ,W)=^nif (…,c_i,d_i,…)∈ W, and(γ,W)=∅otherwise.Before we can state our more precise version of <ref>, we need one more definition.A pair (G,W), with G a proper -special set and W a G-variety is said to be geodesically minimal if ⋃_γ∈^n(γ,W)is not contained in any -finite collection of proper -special varieties. [Precise Modular André-Oort with Derivatives] Assume Conjecture <ref>.Let Vℂ^3n be an algebraic variety defined over ℚ.Then there is a finite collection σ(V) of -special subvarieties of ^n, and for each G∈σ(V) an associated G-variety W_G, with the following properties.* For every G∈σ(V), (G,W_G) is geodesically minimal. * The set of quadratic points in J^-1(V) is precisely ⋃_G∈σ(V) γ∈^nγ· G∩(γ,W).Under the assumption of Conjecture <ref>, Theorem <ref> yields a finite collection σ(V) of proper -special subvarieties, such that the special points of J^-1(V) are contained in ⋃_G∈σ(V) γ∈^nγ· G. Let us look first at a single G∈σ(V), and associate some data to it, as in the definitions above.Associated to G is a partition of {1,…,n}, S_0∪ S_1∪…∪ S_h∪ T_1∪…∪ T_k, with T_i singletons, #S_i>1.As above, we have some associated data s_i=min S_i, r_i=# S_i-1for i>0,σ_1,…,σ_#S_0∈ quadratic and g_i,j∈, a primitive integer matrix with determinant N_i,j, for 1≤ i ≤ h, 1≤ j≤ r_i. 
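For orientation, it may help to record the first-derivative relation lying behind the functions λ_N and the quantities m_g(τ)=(cτ+d)^2/N used throughout. The following is a sketch (the precise normalisation of λ_N here is our assumption; the paper's own definition is authoritative): for a primitive integer matrix g of determinant N and bottom row (c,d), one has (gτ)' = N/(cτ+d)^2 = 1/m_g(τ), so differentiating the modular polynomial relation Φ_N(j(τ), j(gτ)) = 0 in τ gives

```latex
% Sketch; \lambda_N is assumed here to be the ratio of partial derivatives of \Phi_N.
j'(g\tau)
  \;=\; -\,\frac{\partial_X \Phi_N\bigl(j(\tau), j(g\tau)\bigr)}
               {\partial_Y \Phi_N\bigl(j(\tau), j(g\tau)\bigr)}\;
        m_g(\tau)\, j'(\tau)
  \;=\; -\,\lambda_N\bigl(j(\tau), j(g\tau)\bigr)\, m_g(\tau)\, j'(\tau),
\qquad
m_g(\tau) = \frac{(c\tau+d)^2}{N}.
```

This shape is consistent with Example (1) earlier, where the displayed expression in Y_1, Y_2 and λ_M recovers m_g(τ) from j'(τ) and j'(gτ).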
For ease of notation, we will assume that the coordinates are ordered nicely, with the first few coordinates in S_0, the next few in S_1, and so on.Recall: for each N, there is a rational function μ_N with the property that j”(gτ)=μ_N(j(τ),j(gτ),j'(τ),j'(gτ),j”(τ),c,(cτ+d)) whenever g=[ a b; c d ] is a primitive integer matrix of determinant N.Moreover, it will be useful later to know that μ_N(j(τ),j(gτ),j'(τ),j'(gτ)(c_0τ_0+d_0)^2(cτ+d)^2,j”(τ),c_0,(c_0τ_0+d_0)) =j”(gτ)(c_0τ_0+d_0)^4(cτ+d)^4+j'(gτ)2(c_0τ_0+d_0)^3(c_0d-cd_0)(cτ+d)^3. This follows by a straightforward, if tedious, calculation.Given γ∈^n, and a point τ=(τ_1,…,τ_h)∈^h, define a variety V_γ,τℂ^3(h+k) as follows.First, write γ as in (<ref>).That is, γ consists ofγ'=(γ_1,…,γ_#S_0,γ_1,1,…,γ_1,s_1,…,γ_h,1,…,γ_h,s_h)∈^# S_0+∑ r_i,α_s_1,…,α_s_h∈corresponding to the s_i coordinates, andβ_1,…,β_k∈ corresponding to the singleton coordinates in the T_i.Let c_i, d_i be the bottom row of γ_i and c_i,j,d_i,j the bottom row of γ_i,jg_i,j.Define V_γ,τ' by (X_1,Y_1,Z_1,…,X_h+k,Y_h+k,Z_h+k)∈ V_γ,τ'⟺∀ w_i,j∈ℂ with Φ_N_i,j(X_i,w_i,j), i≤ h,j≤ r_i, […,j(σ_i),j'(γ_iσ_i),j”(γ_iσ_i),…,X_i,Y_i,Z_i,…,w_i,j,-Y_i(c_i,jτ_s_i+d_i,j)^2λ_N_i,j(X_i,w_i,j), μ_N_i,j(X_i,w_i,j,Y_i,-Y_i(c_i,jτ_s_i+d_i,j)^2λ_N_i,j(X_i,w_i,j),Z_i,c_i,j,c_i,jτ_s_i+d_i,j),… …,X_h+i,Y_h+i,Z_h+i,…]∈ V. 
Taking ℚ-Zariski closures replaces the j'(γ_iσ_i) and j”(γ_iσ_i) by suitable rational functions involving σ_i, σ_i, χ^*(σ_i), c_i, d_i, and some complex numbers w which are allowed to be arbitrary.Making these replacements we get a variety V_γ,τ, defined over ℚ, depending polynomially on c_i,d_i,c_i,j,(c_i,jτ_i+d_i,j) (and nothing else).Thus each V_γ,τ is a fibre of an algebraic family of varieties V, defined over ℚ.We now apply Theorem <ref> to V.We get an -finite collection σ_2(V), consisting of -special subvarieties of ^(h+k), such that for all γ∈^n and all quadratic τ∈^h, eitherV_γ,τ=ℂ^3(h+k)or the J-special subvarieties of V_γ,τ are accounted for by σ_2(V).The H∈σ_2(V) correspond in the obvious way to an -finite collection 𝒢 of proper -special subvarieties of G.We add all these H∈𝒢 to the overarching collection σ(V). Now, a quadratic point lying in⋃_γ∈^nγ· G∩ J^-1(V)corresponds to a quadratic point τ=(τ_1,…,τ_n)∈ G together with γ∈^n such that γτ∈ J^-1(V).Such a pair (τ, γ) necessarily satisfies J(τ')∈ V_γ,τ, where τ'=(α_s_1τ_s_1,…,α_s_hτ_s_h,β_1τ_n-k+1,…,β_kτ_n). By the properties of σ_2(V), either τ'∈ H, for some H∈σ_2(V), or V_γ,τ=ℂ^3(h+k).In the first case, we have τ∈ H, for some H∈𝒢.Define a set R={(τ,γ)∈^n×^n:τ is quadratic, τ∈ G, J(γτ)∈ V, ∀ H∈𝒢,τ∉H}. If R is empty, we can stop here, removing G from σ(V) entirely; it contributes no special points other than those already accounted for by 𝒢.If R∅, we continue.By the properties of 𝒢, every (τ,γ)∈ R must satisfy V_γ,τ=ℂ^3(h+k). Hence, in the definition of V_γ,τ, we can replace (X_1,…,Z_h+k) with J(z_1,…,z_h+k), for arbitrary z_i∈.By (<ref>) and an easy calculation involving λ_N, we see that […,J(γ_iσ_i),…,J(z_i),… …,j(h_i,jz_i),j'(h_i,jz_i)(c_i,jτ_s_i+d_i,j)^2δ_i,j^2,j”(h_i,jz_i)(c_i,jτ_s_i+d_i,j)^4δ_i,j^4+2j'(h_i,jz_i)c_i,j(c_i,jτ_s_i+d_i,j)^3δ_i,j^2,… …,J(z_h+i),…]∈ V for all (z_1,…,z_h+k)∈^h+k.Here h_i,j is an upper triangular matrix in the -orbit of g_i,j and δ_i,j is its lower-right entry. 
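The manipulations of j' and j'' in displays like the one above rest on their standard transformation behaviour under the modular group, obtained by differentiating the invariance j(γτ)=j(τ) once and twice, using (γτ)' = (cτ+d)^{-2}. For convenience:

```latex
% For \gamma = \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in \mathrm{SL}_2(\mathbb{Z}):
j'(\gamma\tau) = (c\tau+d)^2\, j'(\tau),
\qquad
j''(\gamma\tau) = (c\tau+d)^4\, j''(\tau) + 2c\,(c\tau+d)^3\, j'(\tau).
```

Solving the second law for c is exactly what produces the expression for c^2 in Example (2) earlier.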
As before, by taking ℚ-Zariski closures we can replace the J(γ_iσ_i) by some ℚ-rational functions of c_i and d_i.Thus the above equation, for each choice of 𝐳=(z_1,…,z_h+k), defines a G-variety which we will call W_𝐳.The intersectionW=⋂_𝐳∈^h+kW_𝐳 is still a G-variety.(It is nonempty since R is nonempty.)We have seen that every (τ,γ)∈ R satisfies(τ_s_1,…,τ_s_h)∈ W^γ. Conversely, any quadratic 𝐳=(z_1,…,z_h), no matter its height, which is a solution of some W^γ, must come from a member of γ· G∩ J^-1(V).Indeed, in this situation we have[…,J(γ_iσ_i),…,J(z_i),… …,j(h_i,jz_i),j'(h_i,jz_i)(c_i,jz_i+d_i,j)^2δ_i,j^2,j”(h_i,jz_i)(c_i,jz_i+d_i,j)^4δ_i,j^4+2j'(h_i,jz_i)c_i,j(c_i,jz_i+d_i,j)^3δ_i,j^2,… …,J(z_h+i),…]∈ V, for all (z_h+1,…,z_h+k)∈^k.Brief calculations involving the transformation laws for j' and j” show thatj'(h_i,jz_i)(c_i,jz_i+d_i,j)^2δ_i,j^2=j'(γ_i,jg_i,jz_i)(c_i,jz_i+d_i,j)^2(c_i,jz_i+d_i,j)^2=j'(γ_i,jg_i,jz_i) and that j”(h_i,jz_i)(c_i,jz_i+d_i,j)^4δ_i,j^4+2j'(h_i,jz_i)c_i,j(c_i,jz_i+d_i,j)^3δ_i,j^2=j”(γ_i,jg_i,jz_i)(c_i,jz_i+d_i,j)^4(c_i,jz_i+d_i,j)^4=j”(γ_i,jg_i,jz_i). Thus we get […,J(γ_iσ_i),…,J(z_i),…,j(γ_i,jg_i,jz_i),j'(γ_i,jg_i,jz_i),j”(γ_i,jg_i,jz_i),…,J(z_h+i),…]∈ V, for every (z_h+1,…,z_h+k)∈^k.Hence 𝐳 corresponds to the point (…,γ_iσ_i,…,z_i,…,γ_ig_i,jz_i,…,z_h+i,…)∈γ· G∩ J^-1(V), for any choice of z_h+i. To sum up, we have seen that the quadratic points in γ· G∩ J^-1(V) consist precisely of:* Those quadratic points that lie in some H∈𝒢.* Those points that corresponds to quadratic solutions of W^γ, that isγ· G∩(γ, W). 
We are not claiming that these possibilities are mutually exclusive! Before we are done with G, we must check whether (G,W) is geodesically minimal. If it does happen to be geodesically minimal, we are done. Otherwise, by definition, ⋃_γ∈^n(γ, W) is contained in some -finite collection of proper -special subvarieties of ^h+k. This is not a problem; we simply remove G from σ(V) entirely and replace it by the appropriate finite collection of proper subvarieties of G.

This is as much as we can do with a given G∈σ(V). It may be helpful to have a brief summary here. For the given G we have done two things:
* Either removed G from σ(V) entirely or associated to G a geodesically minimal G-variety W_G.
* Added to σ(V) some finite collection 𝒢 of proper subvarieties of G.
Moreover, the union of the -special subvarieties of γ· G∩ J^-1(V) is precisely γ· G∩(γ,W_G), together with the -special subvarieties of ⋃_H∈𝒢γ· H∩ J^-1(V). This is enough to conclude the theorem. Simply perform the above process on each G∈σ(V) in turn, taking the G in descending order of dimension. Since each G can add to σ(V) only finitely many varieties of strictly smaller dimension, the process will eventually terminate.

§ UNIFORMITY

In this final section, I will discuss uniform versions of the two main results of the document, Theorems <ref> and <ref>. For this, we are closely following work of Scanlon, who gives in <cit.> a very general approach to uniformising results of this type. Unfortunately, our setting does not fit perfectly into Scanlon's framework; the full strength of his result is therefore not available to us. For our purposes it is enough that the central ideas in Scanlon's work do apply. There are two main lemmas of Scanlon's that we will use. The first is Lemma 3.1 from <cit.>, which applies directly. We write it out here for completeness.

Let k be a field and K an algebraically closed field extension of k. Let X be a variety over k and X_K its base change to K. Let A ⊆ X(K). Suppose that Y ⊆ X_K is
constructible.Then there is a natural number n, some constructible set Z X× X^n, defined over k, and some a∈ A^n such that Z_a(K)∩ A=Y(K)∩ A. See Lemma 3.1 from <cit.>The next lemma is the analogue of Lemma 3.2 from <cit.>.The statement needs some very slight modification before it applies in our setting, but the proof of the modified lemma is essentially identical to the original.Let k K be algebraically closed fields, X=𝔸^s, B=𝔸^t and Y X× B a constructible subset.Let A X(K).Suppose that there exist α,β∈ A with the following property:For m∈ℕ and i≤ m, let p^(m)_i be the m-tuple (α,…,α,β,α,…,α), with the β arising in the ith place.Let P^(m)_i be the k-Zariski closure of {p^(m)_i}.Thenp^(m)_i∉P^(m)_j for any i j.Then there is a natural number n and a k-constructible set Z X× X^n such that for any b∈ B(K), there is a∈ A^n for which Y_b(K)∩ A=Z_a∩ A. Throughout this proof we will suppress notation, writing W=W(K) whenever W is a constructible set over K.Consider K as a structure in the language ℒ=(+,·,P_A,{a}_a∈ K), where each constant symbol a is to be interpreted as the corresponding element a∈ K and P_A is an s-ary predicate to be interpreted as the set A.In this language, we can express the condition “x∈ A∩ W” for any affine variety W over K.Let T be the ℒ-theory of K, and then let ℒ' be ℒ together with some new constant symbols b_1,…, b_t.Write b=(b_1,…,b_t).Let 𝒞(k) be the set of k-constructible subsets of X× X^N (for some N) and consider the setΓ = T∪{∀ c∈ A^N∃ x∈ A (x∈ Y_b∖ Z_c∨ x∈ Z_c∖ Y_b): Z∈𝒞(k)}.Suppose that Γ is not finitely satisfiable.Let Γ_0 be a finite subset witnessing this.Since T is the theory of K, Γ_0 cannot be contained in T, so it mentions some finitely many k-constructible sets Z_1,…,Z_l, with Z_i X× X^N_i.SinceΓ_0 is not satisfiable, we have:∀ b∈ B ∃ i≤ l∃ c∈ A^N_i∀ x∈ A( x∉Y_b∖ Z_c∧ x∉(Z_i)_c∖ Y_b).In other words, for every b∈ B, there is some Z_i and some c∈ A^N_i such thatA∩ (Z_i)_c=A∩ Y_b.Now let N=max{N_i:i≤ l} and let n=N+l.Define 
Z=⋃_i=1^lZ_i× X^N-N_i× P^(l)_i,with P^(l)_i as in the hypotheses of the lemma.This Z is a constructible set defined over k, and satisfies the conclusion of the lemma: if b∈ B, then for some i, c, we have A∩ (Z_i)_c=A∩ Y_b. Letting c'=(c,α,…,α, p^(l)_i)∈ A^N+l, we getA∩ Z_c'=A∩ (Z_i)_c=A∩ Y_b.So suppose on the other hand that Γ is finitely satisfiable.By the Compactness Theorem, it is satisfiable.So we have an algebraically closed extension L⊇ K of K, a point b∈ B(L) and a set A^* X(L) such that, for every k-constructible Z X× X^N and c∈ (A^*)^N, we haveY_b(L)∩ A^* Z_c(L)∩ A^*.This contradicts Lemma <ref> (applied to k, L, A^*, X and Y_b). Using this, we would like to get a uniform version of <ref>. This can indeed be done, though perhaps not in exactly the manner we might like.Given an algebraic family Vℂ^3n+k, we can apply <ref>, with A={J(τ):τ∈^nquadratic}.For the resulting constructible set Z, we can writeZ=⋃_i=1^rX_i∖ Y_ifor some varieties X_i, Y_i defined over ℚ. If we apply <ref> (under the assumption of Conjecture <ref>) to each of the X_i and Y_i, we get a finite collection σ(V), consisting of pairs (G,H) of -special varieties, with corresponding G-varieties (W_G,W_H), such that the union of the -special subvarieties of J^-1(V_b), for any fibre b, is precisely the fibre of the set⋃_(G,H)∈σ(V)[⋃_γ∈γ G∩Sp(γ,W_G)\⋃_γ∈γ H∩Sp(γ,W_H)]at some quadratic τ=τ(b).This is not quite as good as we might like; one would prefer not to have the -special sets H in the picture.These arise thanks to the fact that Z is only constructible, rather than Zariski closed.It does not seem possible to get around this; the reason is essentially the same as the reason why the full strategy outlined in Scanlon's paper <cit.> does not work. 
Let G and H be -special varieties and take a corresponding G-variety W_G and H-variety W_H. If it were the case that (G∩ H, W_G∩ W_H) were geodesically minimal whenever (G,W_G) and (H,W_H) were, then we would be able to apply the rest of Scanlon's work. This is not necessarily the case, so the rest of Scanlon's work cannot be applied. Hence the above seems likely to be the best possible uniform version of <ref>. Theorem <ref>, however, can be uniformised more cleanly, since it is less precise.

Assume Conjecture <ref>. Let V ⊆ ℂ^3n+k be an algebraic variety (with arbitrary field of definition), considered as an algebraic family of fibres V_b ⊆ ℂ^3n. There is a natural number N, and an -finite collection σ(V) of -special subvarieties of ^n+N, with the following property. For every b∈ℂ^k with V_b≠ℂ^3n, the -special points of J^-1(V_b) are contained in ⋃_G∈σ(V) G_τ, where G_τ is the fibre of G at some fixed quadratic τ=τ(b)∈^N. Moreover, all of the G_τ are proper subvarieties of ^n.

Let V ⊆ ℂ^3n+k be a variety, considered as an algebraic family of fibres V_b ⊆ ℂ^3n. We will apply Lemma <ref>, with X=ℂ^3n, Y=V, K=ℂ, k=ℚ and A={J(τ):τ∈^n quadratic}. To apply the lemma, we need to find two suitable points α,β∈ A. This is easy; we only need the j-coordinates of α and β to be distinct. So Lemma <ref> does apply; we get a ℚ-constructible set Z ⊆ ℂ^3(n+dn) such that, for every b∈ℂ^k, there is some a∈ A^d such that Z_a∩ A = V_b∩ A. Write Z=⋃_i=1^rX_i∖ Y_i, for some ℚ-varieties X_i and Y_i. We will apply Theorem <ref> to each X_i and Y_i separately. We get some finite sets σ(X_i) and σ(Y_i) of -special varieties, with associated G-varieties, exactly describing the special subvarieties of the Z_i in the manner described in the statement of Theorem <ref>.

Now, given b∈ℂ^3d, let a∈ A^d be such that Z_a∩ A=V_b∩ A. Let τ be a preimage of a under J. First suppose that no σ(X_i) contains any G such that, for some γ∈^N, G_γτ=^n.
Then we are done; the special points of J^-1(V_b) are contained in the -finite collection of proper -special varieties{γ'· G_γτ:G∈σ(X_i), i≤ r,γ∈^N,γ'∈^n}.On the other hand, suppose that some σ(X_i) contains G such that, for some γ∈^N, G_γτ=^n.Then the G-variety associated to G cannot impose any condition on the coordinates corresponding to ^n.Hence by the properties laid out in <ref>, we must have (X_i)_a=ℂ^3n.Now apply the same argument to Y_i.There are 2 possibilities.Either: * The special points of (Y_i)_a are contained in an -finite collection{γ'· G_γτ:G∈σ(Y_i),i≤ r,γ∈^N,γ'∈^n}, with each G_γτ being a proper -special subvariety of ^n, or * (Y_i)_a=ℂ^3n.In the first case, since the special points of (Y_i)_a are contained in a lower-dimensional set, it follows that the special points of Z_a are Zariski dense, whence V_b=ℂ^3n.In the second case, (X_i)_a∖ (Y_i)_a contributes no new special points, so we can ignore this i and move on.Finally, note that if G_τ=^n for some τ, then it must be the case for every τ' that either G_τ'=^n or G_τ'=∅.Thus, if some G∈σ(X_i) has the property that (for some τ), G_τ=^n, we can safely remove it.By the previous arguments, all the special points will still be covered by the rest of the G∈⋃σ(X_i), except in the case where V_b=ℂ^3n. Hence the (-finite) collectionσ(V)={γ· G: γ∈^n, G∈σ(X_i) for some i, and for every τ∈^N, G_τ^n}satisfies the conclusion of the proposition.To conclude, we will state one final corollary of the above result, which simply says that Theorem <ref> holds for arbitrary varieties, rather than just those defined over ℚ.Assume Conjecture <ref>. Let Vℂ^3n be a proper algebraic variety (with arbitrary field of definition).There exists an -finite collection σ(V), consisting of proper -special varieties of ^n, such that every -special point in J^-1(V) is contained in some G∈σ(V). Immediate.
http://arxiv.org/abs/1702.08403v6
{ "authors": [ "Haden Spence" ], "categories": [ "math.NT", "math.LO" ], "primary_category": "math.NT", "published": "20170227175039", "title": "A Modular Andre-Oort Statement with Derivatives" }
maths@jbboyer.fr

Let μ be a borelian probability measure on 𝐆:=SL_d(ℤ) ⋉𝕋^d. Define, for x∈𝕋^d, a random walk starting at x by setting, for n∈ℕ,{[ X_0 = x; X_n+1 = a_n+1 X_n + b_n+1 ].where ((a_n,b_n))∈𝐆^ℕ is an iid sequence of law μ. Then, we denote by ℙ_x the measure on (𝕋^d)^ℕ that is the image of μ^⊗ℕ under the map ((g_n) ↦ (x,g_1 x, g_2 g_1 x, … , g_n … g_1 x, …)) and for any φ∈L^1((𝕋^d)^ℕ, ℙ_x), we set 𝔼_x φ((X_n)) = ∫φ((X_n)) dℙ_x((X_n)). Bourgain, Furman, Lindenstrauss and Mozes studied this random walk when μ is concentrated on SL_d(ℤ) ⋉{0} and this allowed us to study, for any Hölder-continuous function f on the torus, the sequence (f(X_n)) when x is not too well approximable by rational points. In this article, we are interested in the case where μ is not concentrated on SL_d(ℤ) ⋉ℚ^d/ℤ^d and we prove that, under assumptions on the group spanned by the support of μ, the Lebesgue measure ν on the torus is the only stationary probability measure and that for any Hölder-continuous function f on the torus, 𝔼_x f(X_n) converges exponentially fast to ∫ f dν. Then, we use this to prove the law of large numbers, a non-concentration inequality, the functional central limit theorem and its almost-sure version for the sequence (f(X_n)). In the appendix, we state a non-concentration inequality for products of random matrices without any irreducibility assumption.

On the affine random walk on the torus
Jean-Baptiste Boyer
December 30, 2023
======================================

§ INTRODUCTION AND MAIN RESULTS

Let d∈, d⩾ 2 and ^d:= ^d/^d be the torus of dimension d. Let μ be a borelian probability measure on 𝐆:=SL_d() ⋉^d.
Define, for any x∈^d, a random walk starting at x by setting, for any n∈,{[ X_0 = x; X_n+1 = a_n+1 X_n + b_n+1 ].where ((a_n,b_n))∈𝐆^ is an iid sequence of law μ. Then, we denote by _x the measure on ^ that is the image of the measure μ^⊗ on 𝐆^ under the map ((g_n) ↦ (x,g_1 x, g_2 g_1 x, … , g_n … g_1 x, …)) and by _x the operator of integration against the measure _x.

We denote by P the Markov operator associated to μ. This is the operator defined for any borelian non-negative function f on ^d and any x∈^d by Pf(x) = ∫_𝐆 f(gx) μ(g) Thus, for any n∈, we have that P^n f(x) = ∫_𝐆 f(gx) μ^∗ n(g) = ∫_^d f(y) μ^∗ n∗δ_x(y) = ∫_(^d)^ f(X_n) _x((X_n)) = _x f(X_n) where μ^∗ n denotes the n-th convolution power of the measure μ (μ^∗ 0 is by convention the Dirac measure at (I_d,0)).

Bourgain, Furman, Lindenstrauss and Mozes studied in <cit.> the case where μ is concentrated on SL_d() ⋉{0} and they proved that, under assumptions on the support of μ, the only P-invariant probability measures on the torus were the Lebesgue measure ν and the uniform measures on unions of rational orbits (which are finite). Their result is even more precise since they give the rate of convergence of _x f(X_n) to ∫ fν in terms of diophantine properties of x, and this allowed us to study in <cit.> the sequence (f(X_n)) for starting points x that are not too well approximable by rational points.

In this article, we are interested in the case where μ is not concentrated on SL_d()⋉^d/^d. A result by Benoist-Quint (see <cit.>) shows that in this case, under assumptions on the projection on SL_d() of the subgroup spanned by the support of μ, the only P-invariant probability measure on the torus is the Lebesgue measure and this proves that for any continuous function f on ^d and any x∈^d, 1/n∑_k=0^n-1 f(X_k) →∫ fν _x-a.e. The aim of this article is to refine the previous convergence by proving a Central Limit Theorem, a Law of the Iterated Logarithm, etc.
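The random walk and the identity P^n f(x) = _x f(X_n) above are easy to explore numerically. The following minimal Monte Carlo sketch works in the case d=2; the two matrices, the translation and the test function are our own illustrative choices, not objects studied in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical two-atom measure mu on SL_2(Z) x T^2 (illustrative choice).
A1 = np.array([[2, 1], [1, 1]])
A2 = np.array([[1, 1], [1, 2]])
b1 = np.array([np.sqrt(2) - 1, 0.0])   # an irrational translation
b2 = np.array([0.0, 0.0])
atoms = [(A1, b1), (A2, b2)]

def step(x):
    """One step X_{n+1} = a X_n + b (mod 1) with (a, b) drawn from mu."""
    a, b = atoms[rng.integers(len(atoms))]
    return (a @ x + b) % 1.0

def Pn_f(f, x, n, samples=2000):
    """Monte Carlo estimate of P^n f(x) = E_x f(X_n)."""
    total = 0.0
    for _ in range(samples):
        y = np.array(x, dtype=float)
        for _ in range(n):
            y = step(y)
        total += f(y)
    return total / samples

f = lambda y: np.cos(2 * np.pi * y[0])   # a smooth test function with integral 0
est = Pn_f(f, (0.0, 0.0), n=10)
print(est)
```

For a choice of atoms satisfying the assumptions below, such an estimate is expected to approach ∫ f dν = 0 as n grows, in line with the equidistribution results of this paper.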
To do so, we are going to make a few assumptions on the subgroup spanned by {a | (a,b)∈supp μ}. In the sequel, we will say that a closed subgroup Γ of SL_d() is strongly irreducible if it doesn't fix any finite union of non-trivial subspaces of ^d. Moreover, we will say that Γ is proximal if it contains an element g such that there are v_g^+ ∈^d∖{0}, λ∈ℝ and a g-invariant hyperplane V_g^< in ^d such that ^d = ℝ v_g^+ ⊕ V_g^<, gv_g^+ = λ v_g^+ and the spectral radius of g in V_g^< is strictly smaller than |λ|. Finally, we will say that a probability measure μ on SL_d() is strongly irreducible and proximal if the closure of the subgroup spanned by the support of μ has these properties. These two assumptions are actually assumptions on the Zariski-closure of Γ and so, as an example, they are satisfied if Γ is Zariski-dense in SL_d(). Finally, we will say that a measure μ on SL_d() has an exponential moment if there is some ε∈_+^∗ such that ∫_SL_d()g^εμ(g)<+∞

We will see in the sequel that our study of the random walk on the torus requires arguments about orbit closures. This is why we give a name to the property that we will use, and we will give examples of measures satisfying it right after.

Let μ be a borelian probability measure on 𝐆. We say that μ satisfies an effective shadowing lemma if for any C',t' ∈_+^∗, there are C_1,C_2,M,t,L ∈_+^∗ such that for any x,y∈^d, any r∈_+^∗ and any n∈ with r⩽ C_1e^-Ln, if μ^∗ n({ g∈𝐆 | d(gx,y) ⩽ r }) ⩾ C_2 e^-t n then there are x',y'∈^d such that d(x,x'),d(y,y')⩽ re^Mn and μ^∗ n({ g∈𝐆 | gx'=y'})⩾ C'e^-t'n

For a measure to satisfy this property means that if a set of elements g of large μ^∗ n-measure sends x close to y, it is only because x and y are close to points of the same orbit.
The name comes from the theory of hyperbolic diffeomorphisms since, when μ=δ_g_0, saying that μ satisfies an effective shadowing lemma means that there is some constant M such that for any large enough K, any n∈ and any x,y∈^d with d(g^n_0 x,y) ⩽ e^-K n, there are x',y' ∈^d such that d(x,x'),d(y,y')⩽ e^-(K-M)n and g^n_0 x' = y'.

This is a technical definition but we will see in section <ref> a criterion (proposition <ref>) that allows one to tell whether a measure satisfies an effective shadowing lemma, and we will deduce examples from it. In particular, we will see in example <ref> that if b_0∈^1 is such that there are C,L ∈_+^∗ such that for any q∈^∗, d(qb_0,0) ⩾ C/q^L, then any borelian probability measure μ on 𝐆 whose projection on SL_d() is strongly irreducible, proximal and has an exponential moment and such that F(μ):={coefficients of b | (a,b)∈supp μ}⊂{0,b_0} satisfies an effective shadowing lemma. Moreover, in example <ref> we will prove that if a_1, … , a_N ∈SL_d() generate a strongly irreducible and proximal group, then for a.e. b_1, … , b_N∈^d, the measure μ = 1/N∑_i=1^N δ_(a_i,b_i) satisfies an effective shadowing lemma.

For α∈ ]0,1], we denote by 𝒞^0,α(^d) the space of α-Hölder-continuous functions f on ^d endowed with the norm f_α := f_∞ + m_α(f) where f_∞ := sup_x |f(x)| and m_α(f) := sup_x≠ y |f(x)-f(y)|/d(x,y)^α where d is the distance induced by some norm on ^d. Moreover, for any two borelian probability measures ϑ_1, ϑ_2 on ^d, we denote by 𝒲_α(ϑ_1, ϑ_2) the Kantorovich-Rubinstein distance between ϑ_1 and ϑ_2, defined by 𝒲_α(ϑ_1, ϑ_2) := sup_f∈𝒞^0,α(^d), f_α⩽ 1 |∫ fϑ_1 - ∫ fϑ_2 |

This will allow us to prove the following theorem.

Let μ be a borelian probability measure on 𝐆:=SL_d()⋉^d that is not concentrated on SL_d() ⋉^d/^d and satisfies an effective shadowing lemma. Denote by μ_0 the projection of μ on SL_d() and assume that μ_0 is strongly irreducible, proximal and has an exponential moment.
Denote by P the Markov operator associated to μ. Then, the Lebesgue measure ν on ^d is the only P-invariant borelian probability measure on the torus. Moreover, for any α∈]0,1], there are C,t∈_+^∗ such that for any n∈, sup_x∈^d𝒲_α(μ^∗ n∗δ_x, ν) ⩽ Ce^-tn In particular, for any f ∈𝒞^0,α(^d) and any n∈, sup_x∈^d |P^n f(x) - ∫ fν| ⩽ Ce^-tnf_α and, for any α-Hölder-continuous function f on the torus, there is a continuous function g such that f-∫ f ν = g-Pg and g_∞⩽ C f_α

We don't know whether the function g that we construct in this theorem is Hölder-continuous.

This theorem will allow us to prove a few of the classical results in probability theory for the sequence (f(X_n)) in the following theorem.

Under the same assumptions as in theorem <ref>. Denote, for any continuous function f on the torus and any sequence x=(X_n) ∈ (^d)^, S_n f(x) = ∑_k=0^n-1 f(X_k) - n ∫ fν Moreover, for any t∈ [0,1], set ξ_n(t) = 1/√(n)(S_i f(x) + n(t-i/n)(f(X_i) - ∫ fν)) for i/n⩽ t⩽ (i+1)/n and 0 ⩽ i ⩽ n-1. Then, for any continuous function f on the torus and any x∈^d, S_n f(x)/n → 0 _x-a.e. Moreover, for any α∈]0,1] there is t∈_+^∗ such that for any ε∈ ]0,1] there is a constant C such that for any α-Hölder-continuous function f on the torus, any x∈^d and any n∈, _x ( {x∈ (^d)^ | |S_n f(x)|>nεf_α}) ⩽ Ce^-t ε^2 n Finally, set σ^2(f) := ∫_^d g^2 - (Pg)^2 ν and then,
* If σ^2(f) ≠ 0 then for any bounded continuous F: 𝒞^0([0,1]) →ℝ and any x∈^d, _x F(ξ_n) → F(W_σ^2) and 1/ln n ∑_k=1^n 1/k F(ξ_k) → F(W_σ^2) _x-a.e. Where W_σ^2 denotes Wiener's measure of variance σ^2. And for any continuous function φ on ℝ such that t^2 φ(t) is bounded and for any x∈^d, 1/ln n ∑_k=1^n 1/kφ( S_k f(x)/√(k)) →φ(W_σ^2(1)) _x-a.e.
* If σ^2(f) = 0 then for any x∈^d and any n∈, S_nf ∈L^∞(_x) and S_nf_L^∞(_x)⩽ 2Cf_α

The two convergences of (F(ξ_n)) in point (<ref>) are respectively called the functional central limit theorem (FCLT) and the almost-sure functional central limit theorem (ASFCLT). There is no obvious link between the convergence in law of (F(ξ_n)) and the a.e.
convergence of its logarithmic average (see <cit.> for a criterion). However, note that we have to take a logarithmic mean because of the arcsine law.

The FCLT and the ASFCLT have many corollaries such as the central limit theorem and the almost sure central limit theorem (taking F_φ(ξ) = φ(ξ(1)) for any continuous and bounded function φ on ℝ), the law of the iterated logarithm (see theorem 2.4 in <cit.>), a control of max_k ∈ [0,n] S_kf(x)/√(n) (taking F(ξ) := sup_t∈ [0,1]ξ(t)), or an estimation of σ^2(f) (taking φ(x) = x^2).

Before we continue, we give an example where there is a non-constant function f such that σ^2(f)=0. Let A=([ 2 1; 1 1 ]) and B =([ 0 1; -1 0 ]) Then, the subgroup spanned by A and B is strongly irreducible and proximal. Let b_0∈^1∖/ be a diophantine number[There are C,L∈_+^∗ such that for any q∈^∗, d(qb_0,0)⩾ Cq^-L.], b=(b_0,0) and μ = 1/2δ_(A,b) + 1/2δ_(BA,Bb). Then, according to proposition <ref>, the measure μ satisfies the assumptions of theorem <ref>. Let g be the function defined for any x∈^2 by g(x) = d(x,0). The matrix B was chosen so that for any x∈^2, g(Bx) = g(x). Then, for any x∈^2, Pg(x) = 1/2 g(Ax+b) + 1/2 g(BA x+Bb) = g(Ax+b) And, ∫_ |Pg(x)|^2 ν(x) = ∫_ |g(Ax+b)|^2 ν(x) = ∫_ |g(x)|^2 ν(x) Moreover, if we set f=g-Pg, then we just saw that σ^2(f) = ∫ g^2 -(Pg)^2 ν=0 and for any x∈^2, n∈ and any (g_1, … , g_n+1) ∈{(A,b), (BA,Bb)}^n+1, we have that g(g_n+1… g_1 x) = g(A g_n … g_1 x+b) And so, ∑_k=0^n-1 f(g_k …g_1 x) = g(x) - g(g_n …g_1 x) + ∑_k=0^n-1 (g(g_k+1 …g_1 x) - g(A g_k …g_1 x+b)) = g(x) - g(g_n …g_1 x) This proves that for any x∈^2, the sequence (∑_k=0^n-1 f(g_k … g_1 x)) is bounded in L^∞(_x). The results in section 3 of <cit.> actually prove that this example is quite general.
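The two facts driving this example — the invariance g(Bx) = g(x) and the resulting telescoping identity ∑_k<n f(g_k…g_1x) = g(x) − g(g_n…g_1x) — can be checked numerically. In the sketch below, b_0 = √2 − 1 stands in for the diophantine number of the example (an assumption of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[2, 1], [1, 1]])
B = np.array([[0, 1], [-1, 0]])
b = np.array([np.sqrt(2) - 1, 0.0])   # b_0 = sqrt(2)-1 plays the diophantine number

def g(x):
    """g(x) = d(x, 0): euclidean distance to 0 on the torus T^2."""
    fr = np.asarray(x) % 1.0
    return np.linalg.norm(np.minimum(fr, 1.0 - fr))

Pg = lambda x: g(A @ x + b)           # Pg(x) = g(Ax+b), since g(By) = g(y)
f = lambda x: g(x) - Pg(x)

x0 = np.array([0.2, 0.6])
x, S = x0, 0.0
for _ in range(50):                   # a random word in (A, b) and (BA, Bb)
    S += f(x)
    a, t = ((A, b) if rng.random() < 0.5 else (B @ A, B @ b))
    x = (a @ x + t) % 1.0

invariance_err = abs(g(B @ x0) - g(x0))
telescoping_err = abs(S - (g(x0) - g(x)))
print(invariance_err, telescoping_err)
```

Both printed errors are at the level of floating-point round-off, matching the computation above.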
We will see in sub-section <ref> that theorem <ref> is a quite general corollary of theorem <ref> since we can easily study functions f on the torus that can be written f=g-Pg+∫ fν with g continuous, and theorem <ref> precisely says that any Hölder-continuous function can be written in this way. Therefore, the main point of this article is the proof of theorem <ref>. To do so, we use the same method as Bourgain, Furman, Lindenstrauss and Mozes. In section <ref>, we prove that the only obstruction to the equidistribution of the measure μ^∗ n∗ϑ is the lower regularity of ϑ, i.e. the existence of points x such that for some r depending on n, ϑ(B(x,r))⩾ r^ε In particular, if μ^∗ n+m∗ϑ is far from Lebesgue measure then there have to be points x such that μ^∗ m∗ϑ(B(x,r)) ⩾ r^ε Then, our assumptions that μ satisfies an effective shadowing lemma and that the support of μ is not contained in SL_d() ⋉^d/^d will allow us to prove in section <ref> that this cannot happen when r≪ e^-m≪ r^ε. The precise proof of the theorem is in subsection <ref>. Finally, in section <ref>, we prove proposition <ref>, a criterion showing that, under some diophantine conditions on the translations in its support, a measure satisfies an effective shadowing lemma, and we use this criterion to produce examples of such measures. In the appendix, we state results, used in section <ref>, on products of random matrices in the case where the action is not irreducible.

§.§ Some kind of diophantine assumption is necessary

We already said (and we will prove in section <ref>) that a way to guarantee that a measure satisfies an effective shadowing lemma is to require diophantine conditions on the coefficients of the translations of its support. In this sub-section, we prove that this kind of assumption is indeed necessary to get theorem <ref>. Let a,b∈SL_d() and v∈^d.
Set μ= 1/2δ_(a,0) + 1/2δ_(b,v).Assume that for some α∈ ]0,1], there are C,t∈_+^∗ such that for any α-hölder-continuous function f on the torus and any n∈,sup_x∈^d|P^n f(x) - ∫ fν| ⩽ Ce^-tnf_αThen, there are constants C_0,L∈_+^∗ such that for any rational point p/q∈^d/^d,d(v,p/q)⩾C_0/q^L For q∈^∗ and x∈^d, we set X_q = 1/q^d/^d andf_q(x) = 1- min(1, q^2αd(x,X_q)^α)This function is chosen so that it takes the value 1 on 1/q^d/^d, it vanishes on the complementary of the 1/q^2-neighborhood of 1/q^d/^d and it is hölder-continuous with f_q_α⩽ q^2α.In particular, we have that, for some constant C depending only on d (and on the distance on ^d),∫ f_qν⩽∑_p/q∈1/q^d/^dν(B(p/q, 1/q^2) ) ⩽C/q^dMoreover, for μ^⊗-a.e. ((a_n,b_n)), we have that|f_q(∑_k=1^n a_n … a_k+1 b_k) - 1 | ⩽f_q_α d(∑_k=1^n a_n … a_k+1 b_k, X_q )^α⩽e^α Mn/(e^M-1)^α d(v,X_q)^α q^2αwhere we noted e^M = max(A,B). Indeed, for any p/q∈ X_q, we have that f_q(p/q) = 1 and ∑_k=1^n a_n … a_k+1 b_k can be written Dv where D is a matrix with integer coefficients and D⩽∑_k=1^n a_n … a_k+1⩽∑_k=1^n e^M(n-k). This proves that for any n∈,| P^n f_q(0) - 1 | = |∫_ f_q(b) μ^∗ n(a,b) - 1 |⩽e^α Mn/(e^M-1)^α d(v,X_q)^α q^2αBut, by assumption, we also have that|P^n f_q(0) - ∫ f_q ν| ⩽ Ce^-tnf_q_α⩽ Ce^-tnq^2αSo, this proves that for any n,q∈^∗,1 - C/q^d -Ce^-tn q^2α⩽e^α Mn/(e^M-1)^α d(v,X_q)^α q^2αThus, for any p∈^d, any q∈^∗ such that q^d ⩾ 4C and any n such that Ce^-tn⩽ q^-2α/4, we have thatd(v,p/q) ⩾e^M - 1/2^1/αe^Mn q^2In particular, for n = ⌊1/tln(4Cq^2α) ⌋+1, we find that, for some constant C' depending only on M,α,t,C,d( v, p/q) ⩾C'/q^2+2α M/tAnd this is what we intended to prove.We can prove the same kind of results for rates more general than Ce^-tn and this shows that even convergences slower than exponential require some kind of diophantine assumption. 
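Conversely, quadratic irrationals do satisfy a lower bound of the type d(v,p/q) ⩾ C_0/q^L (with L=1, by the theory of continued fractions). A quick numerical check for b_0 = √2 − 1 — the choice of b_0, the range of q and the empirical constant below are ours, not the paper's:

```python
import math

b0 = math.sqrt(2) - 1   # a quadratic irrational, hence badly approximable

def dist_to_Z(t):
    """Distance from the real number t to the nearest integer."""
    fr = t % 1.0
    return min(fr, 1.0 - fr)

# q * d(q*b0, Z) should stay bounded away from 0, i.e. d(q*b0, 0) >= C/q on T^1.
worst = min(q * dist_to_Z(q * b0) for q in range(1, 2001))
print(worst)
```

Empirically the minimum is about 0.34 (attained at q=2), consistent with the continued-fraction expansion of √2 having bounded partial quotients.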
§.§ Proof of theorem <ref> given the results of sections <ref> and <ref>

Let α∈ ]0,1] and ε∈_+^∗. According to theorem <ref>, there are constants c_0,ε' ∈_+^∗ with ε'<ε such that for any ϑ∈ℳ^1(^d), any t∈ ]0,1] and any n∈ with n⩾ c_0(1+|ln t|), 𝒲_α(μ^∗ n∗ϑ, ν) ⩾ t ⇒ϑ( { x ∈^d | ϑ(B(x,r))⩾ r^ε}) ⩾ t^c_0 where r=e^-(λ_1 +ε')n( t/16)^1/α In particular, for any m,n∈, any ϑ∈ℳ^1(^d), any t∈_+^∗ small enough and any C large enough, 𝒲_α(μ^∗ n+m∗ϑ, ν) ⩾ Ce^-t n⇒μ^∗ m∗ϑ( { x ∈^d | μ^∗ m∗ϑ(B(x,r))⩾ r^ε}) ⩾(Ce^-t n)^c_0 for r=e^-(λ_1 +ε'- δ /α)n/16 ^1/α But, since the measure satisfies an effective shadowing lemma, according to proposition <ref>, there are C_1,C_2,t_0,L ∈_+^∗ such that for any x,y∈^d, any m∈ and any r∈_+^∗ with r⩽ C_1e^-Lm, μ^∗ m({ g∈𝐆 | d(gx,y) ⩽ r }) ⩽ C_2 e^-t_0 m And so, to get a contradiction, we only need to assume that r=e^-(λ_1 +ε'- δ /α)n/16 ^1/α⩽ C_1 e^-Lm and r^ε = e^-ε(λ_1 +ε'- δ /α)n/16 ^ε/α⩾ C_2 e^-t_0 m And this is always possible for n=Km with K∈ large enough and ε small enough. We just proved that there are C,t∈_+^∗ and K∈^∗ such that for any borelian probability measure ϑ on ^d and any m∈, 𝒲_α( μ^∗ (K+1)m∗ϑ,ν) ⩽ Ce^-tm Let n∈ and let m,L∈ be such that n=(K+1)m+L and 0⩽ L< K+1. Then, 𝒲_α( μ^∗ n∗ϑ,ν)⩽𝒲_α( μ^∗ (K+1)m∗μ^∗ L∗ϑ,ν) ⩽ C e^-tm = Ce^-t/K+1 (n-L)⩽ C e^t e^-tn/(K+1) And this finishes the proof of the first part of the theorem. In particular, with ϑ= δ_x for some x∈^d, we get that for any α-Hölder-continuous function f on ^d and any x∈^d, |P^n f(x) - ∫ fν| ⩽ Ce^-tnf_α

Let f be an α-Hölder-continuous function on ^d. Set, for any n∈, g_n=∑_k=0^n-1 P^k( f-∫ fν) Then, (I-P)g_n = f-∫ fν - (P^n f -∫ fν) And so, lim_n g_n - Pg_n = f-∫ fν Moreover, the series is normally convergent since ∑_n P^nf - ∫ fν_∞⩽C/1-e^-tf_α And so, the function g=lim_n g_n exists, is continuous and satisfies g-Pg=f-∫ fν and g_∞⩽C/1-e^-tf_α Now, let ϑ be a P-invariant borelian probability measure on ^d.
Then, for any Hölder-continuous function f, ∫ fϑ = ∫ P^n f ϑ→∫ fν Where we first used the P-invariance of ϑ and then the dominated convergence theorem since for any x∈^d, lim_n P^n f(x) = ∫ fν according to the first part of the proof. And, finally, as the Hölder-continuous functions are dense in the space of continuous functions on the torus, this proves that ϑ = ν and so ν is the unique P-invariant borelian probability measure on ^d.

§.§ Proof of theorem <ref>

This result is a consequence of the uniqueness of the P-invariant borelian probability measure seen in theorem <ref>. Indeed, if we manage to prove that for any x∈^d and _x-a.e. x=(X_n) ∈ (^d)^, the accumulation points of ν_n,x:=1/n∑_k=0^n-1δ_X_k are P-invariant, we will get that they have to be the Lebesgue measure and so, for any continuous function f on the torus, 1/n∑_k=0^n-1 f(X_k) →∫ fν _x-a.e. For any continuous function f on the torus, we can compute ∫ fν_n,x - ∫ Pfν_n,x = 1/n∑_k=0^n-1 f(X_k) - 1/n∑_k=0^n-1 Pf(X_k) = 1/n∑_k=0^n-1 (f(X_k+1) - Pf(X_k)) + 1/n (f(X_0) - f(X_n)) But M_n = ∑_k=0^n-1 (f(X_k+1) - Pf(X_k)) is a martingale with bounded increments so 1/n M_n → 0 a.e. and, as f is bounded, we also have that 1/n (f(X_0) - f(X_n)) → 0 in L^∞(_x). Thus, we just proved that for any x∈^d and any continuous function f on ^d, there is X_f ⊂ (^d)^ such that _x(X_f) = 1 and for any x∈ X_f, lim_n ∫ fν_n,x - ∫ Pfν_n,x=0 Let (f_i) be a dense sequence in 𝒞^0(^d) and X_∞ = ∩_i X_f_i. Then, _x(X_∞)=1 and for any x∈ X_∞ and any i∈, lim_n ∫ f_iν_n,x - ∫ Pf_iν_n,x=0 So, as the sequence (f_i) is dense, we get that for any continuous function f on ^d and any x∈ X_∞, lim_n ∫ fν_n,x - ∫ Pfν_n,x = 0 This proves that for any x∈ X_∞, the accumulation points of (ν_n,x) are P-invariant, so they are equal to ν, and this proves the law of large numbers.

To prove the remaining part of the theorem, we are going to use Gordin's method and deduce the non-concentration inequality, the FCLT and the ASFCLT from these results for martingales.
Indeed, according to theorem <ref>, for any α∈ ]0,1], there is a constant C such that for any α-Hölder-continuous function f on the torus there is a continuous function g such that f-∫ fν = g-Pg and g_∞⩽ Cf_α Set, for x= (X_n) ∈ (^d)^, S_n f(x) = ∑_k=0^n-1 f(X_k) - n∫ fν and M_n = ∑_k=0^n-1 (g(X_k+1) - Pg(X_k)) Then, S_n f(x) = M_n + g(X_0) - g(X_n) And M_n is a martingale with bounded increments. For any n∈, we have that |M_n| ⩾ |S_n f(x)| - 2 g_∞⩾ |S_n f(x)| - 2Cf_α So, using the Azuma-Hoeffding inequality, if nε>2C, we get that I_n(x) := _x ( |S_nf(x) | > nεf_α)⩽_x( | M_n | ⩾(nε-2C) f_α) ⩽ 2exp( - (n ε- 2 C)^2f_α^2/2n ( 2Cf_α)^2 ) = 2 exp(- n ε^2/8 C^2 +ε/2 C- 1/2n) And this finishes the proof of this point.

As the function g is bounded, the sequence (S_nf(x) - M_n) is bounded in L^∞(_x) and so it is clear that, to prove the FCLT and the ASFCLT, it is enough to study the martingale M_n (that has bounded increments). But, according to the functional central limit theorem for martingales (see corollary 4.1 in <cit.>) and its almost sure extension (see <cit.>), it is enough to prove the a.e. convergence of the variance (when the limit doesn't vanish). But, for any n∈^∗, 1/n∑_k=0^n-1_x[| M_k+1 - M_k|^2 |X_0, … ,X_k ] = 1/n∑_k=0^n-1_x[| g(X_k+1) - Pg(X_k)|^2 |X_0, … ,X_k ] = 1/n∑_k=0^n-1 P(g^2)(X_k) - (Pg(X_k))^2 So, according to the law of large numbers that we already proved, applied to the continuous function P(g^2) - (Pg)^2, 1/n∑_k=0^n-1_x[| M_k+1 - M_k|^2 |X_0, … ,X_k ] →σ^2(f) = ∫ g^2 - (Pg)^2 ν _x-a.e. (We used the P-invariance of ν to get that ∫ P(g^2) ν = ∫ g^2 ν.) And this proves point <ref> since we suppose in it that σ^2(f) ≠ 0.

To conclude, remark that, using the 𝐆-invariance of ν, we can compute ∫_𝐆∫_^d | g(γ x) - Pg(x)|^2 ν(x)μ(γ) = ∫_𝐆∫_^d (g(γ x)^2 + (Pg(x))^2 - 2Pg(x) g(γ x)) ν(x) μ(γ) = ∫_^d g^2 - (Pg)^2 ν = σ^2(f) And so, if σ^2(f) = 0, then, as g is continuous, we get that for any γ∈supp μ and any x∈^d, g(γ x) = Pg(x). This proves that for any n∈, M_n=0 _x-a.e. and so, S_n f(x) = g(X_0) - g(X_n).
Thus, for any x∈^d, S_nf ∈L^∞(_x) and sup_x∈^dsup_n∈S_nf_L^∞(_x)⩽ 2 g_∞⩽ 2C f_α This inequality finishes the proof of point <ref>.

§ THE NON-EQUIDISTRIBUTION COMES FROM THE LOWER REGULARITY OF THE MEASURE

Like Bourgain, Furman, Lindenstrauss and Mozes did for the linear random walk on the torus, we are going to prove in this section that if the measure μ^∗ n∗ϑ is far from being equidistributed, it is only because of atoms, i.e. of points x∈^d such that ϑ(B(x,r))⩾ r^ε for some r∈_+^∗ depending on n. More specifically, the aim of this section is to prove the following theorem.

Let μ be a borelian probability measure on SL_d() ⋉^d. Denote by μ_0 the projection of μ on SL_d() and assume that μ_0 is strongly irreducible, proximal and has an exponential moment. Let λ_1 ∈_+^∗ be the largest Lyapunov exponent of μ_0 (see appendix <ref>). Then, for any α∈]0,1] and any ε∈_+^∗, there are c_0,ε'∈_+^∗ with ε'< ε such that for any ϑ∈ℳ^1(^d), any t∈]0,1] and any n∈ with n⩾ c_0(1+|ln t|), 𝒲_α(μ^∗ n∗ϑ, ν) ⩾ t ⇒ϑ( { x ∈^d | ϑ(B(x,r))⩾ r^ε}) ⩾ t^c_0 Where we set r=e^-(λ_1 +ε')n( t/16)^1/α

In the case of the linear random walk, this statement is a reformulation of an intermediate result (propositions 7.1 and 7.2) of <cit.>. We could prove it for the affine random walk just like they do for the linear one, that is to say, by studying, for any borelian probability measure ϑ on ^d, the set of Fourier coefficients of μ^∗ n∗ϑ and by remarking that for any c∈^d, μ^∗ n∗ϑ(c) = ∫_^d∫_SL_d() ⋉^d e^2iπ⟨ c,ax+b ⟩μ^∗ n(a,b) ϑ(x) = ∫_SL_d() ⋉^d e^2iπ⟨ c,b⟩ϑ(^t ac) μ^∗ n(a,b) And so,
Instead, we are going to see that this result can also be obtained as a corollary of the one of BFLM for the linear walk : at first, we are going to prove that their result gives informations on the spectral radius of the operator P in L^p(^d,ν) (even for the affine random walk) and then, that this implies the theorem. §.§ Spectral gap in LpTd Letbe a second countable locally compact group acting measurably on a standard borelian spaceendowed with a -invariant probability measure ν.Let μ be a borelian probability measure onand P the Markov operator associated to μ. This is the operator defined for any non-negative borelian function f onand any x∈ byPf(x) = ∫_ f(gx) μ(g)As ν is a -invariant probability measure, it is clear that for any p∈[1,+∞], 1∈L^p(,ν) and P1=1. Moreover, we can prove, using Jensen's inequality that P_p= 1. So, we note, for any p∈ ]1,+∞],L^p_0(,ν):= { f∈L^p(,ν)| ∫ fν=0 }and ρ_p the spectral radius of P in L^p_0(,ν). We say that P has a spectral gap in L^p_0(,ν) (or, by abuse of notations in L^p(,ν)) if ρ_p<1.In the sequel, we will need a more flexible tool than the spectral gap. This is why, for any P-invariant subspaceof L^p(,ν) endowed with a norm . _ such that P is continuous on (, . _) and the injection of (, . _) into (L^p,. _p) is also continuous, we setκ(μ,, L^p(,ν) ) := -lnlim sup_n→ +∞sup_f∈∖{0}( P^n f_p/f_)^1/n The sequence ( sup_f∈∖{0}P^n f_p/f_) is not sub-multiplicative in general so it may converge to 0 only at polynomial rate and in this case, we would have that κ(μ,, L^p(,ν) )=0. This is impossible if = L^p() because in this case, if it converges to 0, it has to be at exponential rate.With this definition, if (, . _)= (L^p(,ν),. _p), then e^-κ(μ,, L^p(,ν) ) = ρ_p and, for any (, . _), we have, since the inclusion ofinto L^p(,ν) is supposed to be continuous,κ(μ,, L^p(,ν) ) ⩾ -lnρ_pIn particular, whenis a subset of L^∞(,ν), we can define and study the function (p↦κ(μ,, L^p(,ν) )). 
Recall that, according to Hölder's inequality, for any 1⩽ p⩽ p' and any function f∈L^∞(X,ν) with f_∞⩽ 1, f_p'^p' = ∫_X |f|^p'ν⩽∫_X |f|^p ν = f_p^p and f_p ⩽f_p' So, we get that the function ( p↦κ(μ, E, L^p(X,ν) )) is decreasing whereas the function ( p↦ pκ(μ, E, L^p(X,ν) )) is non-decreasing. In the same way that we defined L^p_0(^d), we set 𝒞^0,α_0(^d) := {f∈𝒞^0,α(^d) | ∫ fν=0} The definition of the function κ is made to get the following proposition.

Let μ be a strongly irreducible and proximal probability measure on SL_d() having an exponential moment. Then, for any α∈ ]0,1] small enough, lim_p→+∞ p κ(μ,𝒞^0,α_0(^d), L^p(^d,ν) ) = λ_1 d

This proposition implies in particular that for any ε∈_+^∗, there are p∈ and C∈_+ such that for any n∈ and any f∈𝒞^0,α_0(^d), P^n f_L^p(^d)⩽ C e^-(λ_1 d - ε) n/pf_α

First of all, since μ has an exponential moment, for any α∈ ]0,1] small enough, any f∈𝒞^0,α(^d) and any x∈^d, |Pf(x)| ⩽f_∞ and for any y∈^d, |Pf(x) - Pf(y)| ⩽∫_SL_d() |f(gx) - f(gy)| μ(g) ⩽f_α∫_SL_d() d(gx,gy)^αμ(g) ⩽f_α d(x,y)^α∫_SL_d()g^αμ(g) So, Pf is α-Hölder-continuous and P is a continuous operator on 𝒞^0,α_0(^d). According to the result of Bourgain, Furman, Lindenstrauss and Mozes in <cit.> (that we use as stated in proposition 4.5 in <cit.>), we have that for any α∈ ]0,1] and any ε∈_+^∗, there is a constant C such that for any n∈, any t∈ ]0,1] with n⩾ -Cln t and any f∈𝒞^0,α(^d) with ∫ fν=0, {x | |P^n f(x)|⩾ t f_α}⊂⋃_p/q∈^d/^d, q⩽ Ct^-C B(p/q, e^-(λ_1 - ε)n) In particular, for any L∈^∗, ∫ |P^n f|^L ν ⩽ (tf_α)^L + ν( {x | |P^n f(x)|⩾ t f_α}) f_∞^L ⩽(t^L +(Ct^-C)^d e^-(λ_1 - ε)dn)f_α^L And so, taking t=e^-δ n with δ∈_+^∗ small enough and L∈ large enough, we find that for some constant C, ∫ |P^n f|^L ν⩽ Ce^-(λ_1 - 2ε)dnf_α^L And this proves (recalling that the limit exists according to remark <ref>) that lim_p p κ(μ,𝒞^0,α_0(^d), L^p(^d,ν) ) ⩾λ_1 d We are now going to prove the other inequality.
Let δ∈ ]0,1/4] and f∈𝒞^∞(^d) be such that f=1 on B(0,δ), f_∞⩽ 1 and ∫ f = 0. Then, for any ε∈_+^∗ and any x∈ B(0, e^-(λ_1 +ε)nδ), we have that P^n f(x) = ∫_g⩽ e^(λ_1 + ε)n f(gx) μ^∗ n(g) + ∫_g⩾ e^(λ_1 + ε)n f(gx) μ^∗ n(g) = μ^∗ n({ g | g⩽ e^(λ_1 + ε)n}) + ∫_g⩾ e^(λ_1 + ε)n f(gx) μ^∗ n(g) ⩾ 1 - 2 μ^∗ n({ g | g⩾ e^(λ_1 + ε)n}) But, according to theorem <ref>, there are C,t∈_+^∗ such that μ^∗ n({ g | g⩾ e^(λ_1 + ε)n}) ⩽ Ce^-tn And so, for n∈ large enough, we have that for any x∈^d, |P^n f(x)| ⩾(1-2Ce^-tn) _B(0,e^-(λ_1 + ε)nδ)(x) In particular, for any L∈, ∫ |P^n f(x)|^L ν⩾( 1-2Ce^-tn)^L ν(B(0, e^-(λ_1 + ε)nδ)) = ( 1-2Ce^-tn)^L e^-(λ_1 + ε)dnδ^d And this proves that lim_p p κ(μ,𝒞^0,α_0(^d), L^p(^d,ν) ) ⩽λ_1 d And this finishes the proof of the proposition.

We are now going to extend the previous result to measures on SL_d()⋉^d by proving the following theorem.

Let μ be a borelian probability measure on 𝐆:=SL_d() ⋉^d. Let μ_0 be the projection of μ onto 𝐆_0:=SL_d() and assume that μ_0 is strongly irreducible, proximal and has an exponential moment. Then, for any α∈ ]0,1] small enough, lim_p→+∞ p κ(μ,𝒞^0,α_0(^d), L^p(^d,ν) ) ⩾λ_1 d

Theorem <ref> actually proves that for any measure μ satisfying its assumptions we have that for any f∈𝒞^0,α(^d) with ∫ fν=0, any n∈ and any p∈^∗, ∫ |P^n f|^p ν⩽ C^p e^-tpnf_α^p And so, for any p∈ [1,+∞[, κ( μ,𝒞^0,α_0(^d), L^p(^d,ν) ) ⩾ t And, in particular, lim_p→ +∞ p κ(μ,𝒞^0,α_0(^d), L^p(^d,ν) ) = +∞

We are going to prove this result in three steps. First, we are going to prove it for trigonometric functions, then for regular ones and lastly for Hölder-continuous functions. Let μ be a borelian probability measure on 𝐆 and μ_0 its projection on 𝐆_0.
Denote by P the Markov operator associated to μ and by P_0 the one associated to μ_0. Then, for any c ∈^d, any n∈ and any L∈, ∫_^d |P^n e_c|^2Lν⩽∫_^d |P_0^n e_c|^2Lν Where, for c∈^d, e_c is the function defined for x∈^d by e_c(x) := e^2iπ⟨ c,x⟩

Using Fubini's theorem and the fact that the complex conjugate of e_c is e_-c, we can make the following computation: ∫_^d |P^n_0 e_c(x)|^2Lν(x) = ∫_^d (P^n_0 e_c(x))^L (P^n_0 e_-c(x))^Lν(x) = ∫_^d∫_𝐆_0^2L e_c(a_1x) … e_c(a_L x) e_-c(a_L+1x) … e_-c(a_2Lx) μ_0^∗ n(a_1) …μ_0^∗ n(a_2L) ν(x) = ∫_𝐆_0^2L∫_^d e^2iπ⟨ c,(a_1+ … + a_L - (a_L+1 + … + a_2L))x⟩ν(x) μ_0^∗ n(a_1) …μ_0^∗ n(a_2L) = ∫_𝐆_0^2L_{^t(a_1+ … + a_L - (a_L+1 + … + a_2L))c=0}μ_0^∗ n(a_1) …μ_0^∗ n(a_2L) Doing the same kind of computations for the measure μ, and noting, to simplify notations, for (a_1,b_1), …, (a_2L, b_2L) ∈𝐆, a_i^j = a_i +… + a_j and b_i^j = b_i + … + b_j, we find that ∫_^d |P^n e_c|^2Lν = ∫_𝐆^2L e^2iπ⟨ c,b_1^L - b_L+1^2L⟩_{^t(a_1^L - a_L+1^2L)c=0}μ^∗ n(a_1,b_1) …μ^∗ n(a_2L,b_2L) ⩽∫_𝐆_0^2L_{^t(a_1^L - a_L+1^2L)c=0}μ_0^∗ n(a_1) …μ_0^∗ n(a_2L) = ∫_^d |P^n_0 e_c|^2Lν Where the last equality comes from the first part of the proof; this precisely gives what we intended to prove.

For s∈_+^∗, we denote by ℋ^s(^d) the Sobolev space of exponent s. With the same assumptions as in corollary <ref>, for any s∈_+^∗ large enough and any ε∈_+^∗, there are L∈ and C∈_+ such that for any f∈ℋ^s(^d) and any n∈, ∫ | P^n f-∫ fν|^2Lν⩽ C e^-(λ_1 d - ε)nf_ℋ^s(^d)^2L

Let f∈ℋ^s(^d).
Then, by definition, we can expand f in Fourier series: f = ∑_c∈^d f(c) e_c with f_ℋ^s := (∑_c∈^d (1+c^2)^s/2 |f(c)|^2)^1/2 <+∞ and so, for any L∈^∗, (∫_^d |P^n f|^2Lν)^1/2L⩽∑_c∈^d |f(c)| (∫_^d |P^n e_c|^2Lν)^1/2L Using the previous lemma and proposition <ref>, we get that for any ε∈_+^∗, there are L∈ and C∈_+ such that for any c∈^d∖{0}, ∫_^d |P^n e_c(x)|^2Lν(x) ⩽ C e^-(λ_1d- ε)nc^2L Combining this inequality with the previous one, we get that for any f ∈ℋ^s(^d) with f(0) = ∫ fν=0, (∫_^d |P^n f|^2Lν)^1/2L⩽ C^1/2L e^-(λ_1 d-ε)n/2L∑_c∈^d∖{0} |f(c)| c⩽ C' e^-(λ_1 d-ε)n/2Lf_ℋ^s for some constant C' depending on d, s, μ, L but not on f.

According to Jackson-Bernstein's lemma, for any α∈ ]0,1] and any s∈_+ large enough, there is a constant C such that for any f∈𝒞^0,α(^d), there is a sequence (f_m)∈ℋ^s(^d)^ such that for any m∈^∗, ∫ fν = ∫ f_mν, f-f_m_∞⩽C/m^αf_α and f_m_ℋ^s⩽ Cmf_α This implies that for any x∈^d and any m,n∈^∗, |P^n f_m(x)| ⩾ |P^n f(x)| - C/m^αf_α Let ε∈_+^∗, m∈^∗ and t=2C/m^α. Then, using the equality t-C/m^α = t/2 and lemma <ref>, we get that ∫ |P^n f|^2Lν ⩽ (tf_α)^2L + ν({ |P^n f|⩾ tf_α}) f_∞^2L⩽(t^2L + ν({ |P^n f_m| ⩾(t-C/m^α) f_α}) ) f_α^2L⩽(t^2L + ( 2/tf_α)^2M∫ |P^n f_m|^2Mν)f_α^2L⩽(t^2L + ( 2/tf_α)^2M C^2M m^2M e^-(λ_1 d -ε)nf_α^2M)f_α^2L So, for m=e^δ n, we get that for some constant C', ∫ |P^n f|^2Lν⩽ C' (e^-δα 2L n + e^δ (1+α)2M n -(λ_1 d - ε) n)f_α^2L And so, for δ small enough and L large enough, we get that ∫ |P^n f|^2Lν⩽ C e^-(λ_1 d-2ε)nf_α^2L And this is what we intended to prove.

§.§ Equidistribution, lower regularity and spectral gap.
In this subsection, we finish the proof of theorem <ref> by studying the link between equidistribution and the lower regularity of the measure when the spectral gap is large. Under the same assumptions as in theorem <ref>, for any ε∈_+^∗, there are C,t ∈_+^∗ such that for any x,y∈^d, any f∈𝒞^0,α(^d) and any n∈,|P^n f(x) - P^n f(y)| ⩽( e^α(λ_1 + ε) n d(x,y)^α + Ce^-tn) f_α Let us compute, for any x,y∈^d, n∈, f∈𝒞^0,α(^d) and ε∈_+^∗,|P^n f(x) - P^n f(y) |= |∫_ f(gx) - f(gy) μ^∗ n(g) | ⩽∫__g⩽ e^(λ_1 + ε)n |f(gx) - f(gy)| μ^∗ n(g) + ∫__g⩾ e^(λ_1 + ε)n |f(gx) - f(gy)| μ^∗ n(g) ⩽ m_α(f) ∫__g⩽ e^(λ_1 + ε)n d(gx,gy)^αμ^∗ n(g) + 2f_∞μ^∗ n( { g∈| g⩾ e^(λ_1 + ε)n}) ⩽( d(x,y)^α e^α(λ_1 + ε)n + 2μ^∗ n( { g∈| g⩾ e^(λ_1 + ε)n}) ) f_α where we used the fact that for any x,y∈^d and any g∈,d(gx,gy) ⩽g d(x,y) To conclude, we use theorem <ref> and we get that there are C,t∈_+^∗ such thatμ^∗ n( { g∈| g⩾ e^(λ_1 + ε)n}) ⩽ Ce^-tn We are now ready to prove theorem <ref>. The idea of the proof is that if we have some point x_0 of the torus such that |P^nf(x_0)| ⩾ t, then on a neighborhood B(x_0,r) for r≈ e^-λ_1 n we also have that |P^n f(x)| ≈ t. But the control on κ(μ,𝒞^0,α, L^p) implies that ν({x | |P^n f(x)| ⩾ t}) ≈ e^-λ_1 dn≈ν(B(x_0,r)), and this proves that { x ||P^n f(x)| ⩾ t} cannot be much bigger than B(x_0,r). Let α,t∈ ]0,1], ϑ∈ℳ^1(^d) and n∈. As, for any 0<α'<α, the inclusion of 𝒞^0,α(^d) into 𝒞^0,α'(^d) is continuous, we may assume without any loss of generality that α is small enough so that corollary <ref> holds. Assume that 𝒲_α(μ^∗ n∗ϑ,ν)⩾ t. By definition, there is f∈𝒞^0,α(^d) with f_α⩽ 1 and such that|∫ P^n fϑ - ∫ f ν| ⩾t/2 We can assume without any loss of generality that ∫ fν=0 and f_α⩽ 2.
And this proves that∫_^d|P^n f(x)| ϑ(x) ⩾|∫_^dP^n f(x) ϑ(x)| ⩾t/2 We set, for any n∈ and t∈ ]0,1],X_n,t := {x∈^d | |P^n f(x)| ⩾ t} Then, using that P^n f_∞⩽f_∞⩽ 2, we find thatt/2⩽∫_ |P^n f(x)| ϑ(x) ⩽t/4 + 2 ϑ(X_n,t/4) and so,ϑ( X_n,t/4) ⩾t/8 Moreover, according to lemma <ref>, for any ε_2∈_+^∗, there are C,t_0∈_+^∗ such that for any x∈ X_n,t/4 and any y∈^d, we have that|P^n f(y)| ⩾t/4-e^α(λ_1 + ε_2)n d(x,y)^α - Ce^-t_0 n⩾t/8 -e^α(λ_1 + ε_2)n d(x,y)^α since we can take c_0 so large that Ce^-t_0 n⩽t/8 for n⩾ c_0(1+|ln t|). In particular, noting r= e^-(λ_1 + ε_2 )n(t/16)^1/α, we have that for any x∈ X_n,t/4, B(x, r)⊂ X_n,t/16. Moreover, according to classical covering results, there are a constant C(d), depending only on d, and points x_1, …, x_N ∈^d such thatX_n,t/4⊂⋃_i=1^N B(x_i,r) ⊂ X_n,t/16 and the union has multiplicity at most C(d). This implies in particular that∑_i=1^N _B(x_i,r)⩽ C(d) _X_n,t/16 and so, taking the integral against the measure ν and using the equality ν(B(x,r)) = r^d, we getNr^d ⩽ C(d) ν(X_n,t/16) To sum up, we found points x_1,…, x_N with N⩽C(d) ν(X_n,t/16)/r^d, such thatϑ(⋃_i=1^N B(x_i,r)) ⩾t/8 So, noting ℐ:={i∈ [1,N] | ϑ(B(x_i,r)) ⩾t/16N}, we get thatϑ(⋃_i∈ℐ B(x_i,r)) ⩾t/16 Finally, for any x∈⋃_i∈ℐ B(x_i,r), there is, by definition of ℐ, some i ∈ℐ such thatB(x,2r) ⊃ B(x_i, r) and so,ϑ(B(x,2r)) ⩾t/16N In conclusion, we proved thatϑ({x∈^d | ϑ(B(x,2r)) ⩾t/16 N}) ⩾t/16 To finish, note that we have thatN ⩽ C(d) ν(X_n,t/16)/r^d = C(d) (16/t)^d/α e^(λ_1 + ε_2)dnν(X_n,t/16) and that, according to Markov's inequality and corollary <ref>, for any ε_1 ∈_+^∗, there are C,L∈_+ such thatν(X_n,t/16)⩽(16/t)^2L∫_ |P^n f(x)|^2Lν(x)⩽(16/t)^2L Ce^-(λ_1d-ε_1)nf_α^2L⩽(32/t)^2L Ce^-(λ_1d-ε_1)n This proves that, for some constant C depending on ε_1, d and μ,N ⩽ C/t^C e^(ε_1 + ε_2 d ) n And so, taking ε_1,ε_2 small enough and c_0 large enough, we get thatt/16 N⩾ t^C+1/C e^-(ε_1+ ε_2 d) n⩾ (2r)^ε and this is what we intended to prove.§ MEASURE OF POINT STABILIZERS The aim of this
section is to prove the following proposition. With the same assumptions as in theorem <ref>, there are C,t ∈_+^∗ such that for any x∈^d and any n∈,μ^∗ n⊗μ^∗ n({(g_1,g_2) ∈^2| g_1 x = g_2 x}) ⩽ Ce^-tn This proposition will be a direct corollary of lemmas <ref> and <ref> since if μ is not concentrated on SL_d()⋉^d, then the measure μ_1 of lemma <ref> is not concentrated on SL_d() ⋉{0}. To evaluate the measure of the stabilizer of a point, we are going to lift the situation from ^d to ^d since the products of random elements of SL_d() ⋉^d are better understood (through the theory of products of random matrices, as this group can be identified with a subgroup of SL_d+1()) than those of elements of SL_d() ⋉^d. To understand what we are going to do, remark that if μ = 1/2δ_(g_1,v_1) + 1/2δ_(g_2,v_2) with g_1,g_2∈SL_d(), v_1∈^d/^d and v_2∈^d such that the coefficients of v_2 and 1 are -linearly independent, then, for μ^∗ n-a.e. (g,v)∈SL_d()⋉^d, we can write v=M_1 v_1 + M_2 v_2 with M_i ∈ℳ_d(). So, in particular, noting v_1, v_2 some representatives of v_1,v_2 in ^d, we get that if v=0 in ^d, then there is p∈^d such thatM_1 v_1 + M_2 v_2 = p As we assumed that the coefficients of v_2 and 1 are -linearly independent and that v_1∈^d, we get that M_2 v_2 = 0 and M_1 v_1 = p. So, we setμ_1 = 1/2δ_(g_1,0) + 1/2δ_(g_2, v_2)∈ℳ^1 (SL_d() ⋉^d ) and what we just proved is thatμ^∗ n( SL_d() ⋉{0}) ⩽μ_1^∗ n( SL_d() ⋉{0}) So we are left with a problem on probability measures on SL_d() ⋉^d. Thus, to lift the situation from ^d to ^d, we are going to project the translation part of elements in the support of μ onto a complementary subspace of ^d in the -vector space ^d. To do so, we fix some -linear projection π_:^d →^d onto ^d and we remark that for any v∈^d/^d, v-π_ v ∈^d is well defined and the map SL_d() ⋉^d ∋ (g,v) ↦ (g,v-π_ v) ∈SL_d() ⋉^d is a group morphism.
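The fact that this map is a group morphism can be checked mechanically on a small example. The sketch below is an illustration of our own, not taken from the text: it works with d=2, translation parts whose coordinates lie in ℚ + ℚ√2 (stored exactly as pairs of rationals), and the ℚ-linear projection π_ℚ that keeps the rational part of each coordinate. It then verifies φ((g,v)(h,w)) = φ(g,v)φ(h,w) on random elements, using exact arithmetic.

```python
import random
from fractions import Fraction

rng = random.Random(0)

# A coordinate a + b*sqrt(2) (a, b rational) is stored exactly as (a, b);
# pi_Q keeps the rational part a, so v - pi_Q(v) keeps only the sqrt(2)-part.

def mat_mul(g, h):
    return tuple(
        tuple(sum(g[i][k] * h[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def mat_vec(g, v):
    # integer matrix acting on a vector of (rational part, sqrt(2) part) pairs
    return tuple(
        (sum(g[i][j] * v[j][0] for j in range(2)),
         sum(g[i][j] * v[j][1] for j in range(2)))
        for i in range(2)
    )

def vec_add(u, v):
    return tuple((u[i][0] + v[i][0], u[i][1] + v[i][1]) for i in range(2))

def compose(p, q):
    # group law of the semidirect product: (g, v)(h, w) = (gh, gw + v)
    (g, v), (h, w) = p, q
    return (mat_mul(g, h), vec_add(mat_vec(g, w), v))

def phi(p):
    # (g, v) -> (g, v - pi_Q(v)): drop the rational part of each coordinate
    g, v = p
    return (g, tuple((Fraction(0), coord[1]) for coord in v))

def random_element(word_length=5):
    A, B = ((1, 1), (0, 1)), ((1, 0), (1, 1))
    g = ((1, 0), (0, 1))
    for _ in range(word_length):
        g = mat_mul(g, A if rng.random() < 0.5 else B)
    v = tuple(
        (Fraction(rng.randint(-9, 9), rng.randint(1, 9)),
         Fraction(rng.randint(-9, 9), rng.randint(1, 9)))
        for _ in range(2)
    )
    return (g, v)
```

The check goes through precisely because π_ℚ is ℚ-linear and the matrices have integer entries, so taking the ℚ-part commutes with the action — which is the computation carried out in the proof below.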
Now, we can prove the following lemma. Let μ be a Borel probability measure on SL_d() ⋉^d. Then, for any n∈ and any y∈^d,μ^∗ n⊗μ^∗ n( {(g_1,g_2) | g_2^-1 g_1 ∈Stab(y)}) ⩽μ_1^∗ n⊗μ_1^∗ n( {(g_1,g_2) | g_2^-1 g_1 ∈Stab(y-π_ y)}) where μ_1 is the measure on SL_d() ⋉^d defined by μ_1 (A) = μ(φ^-1(A)), where φ: SL_d() ⋉^d →SL_d() ⋉^d is the function defined byφ(g,v) = (g,v-π_ v) and π_ is some -linear projection onto ^d. As φ is a morphism, we only need to prove that for any g∈SL_d() ⋉^d and any y∈^d, if gy=y then φ(g) (y-π_ y) = (y-π_ y). Write g=(a,b) with a∈SL_d() and b∈^d. Then, φ(g) = (a,b-π_ b). So,φ(g) (y-π_ y) = a (y-π_ y) + b-π_ b But gy=y, so, noting b, y some representatives of b,y, we get that there is p∈^d such thata y + b = y+p Projecting onto ^d, we also get thata π_y + π_b = π_y+p And this proves thata (I_d-π_)y+ (I_d-π_)b = (I_d-π_)y Finally, as (I_d-π_)y and (I_d-π_)b don't depend on the choices of the representatives of y and b, this proves thata(y-π_ y) + b-π_ b = y-π_ y and this finishes the proof of the lemma. Let μ be a Borel probability measure on SL_d() ⋉^d that is not concentrated on SL_d() ⋉{0} and has an exponential moment. Let μ_0 be the projection of μ onto SL_d() and assume that μ_0 is strongly irreducible and proximal. Then, there are C,t ∈_+^∗ such that for any x∈^d and any n∈,μ^∗ n⊗μ^∗ n( {(g_1,g_2) | g_1x= g_2 x}) ⩽ Ce^-tn We denote by μ̃ the measure on SL_d()⋉^d defined by μ̃(A) = μ(A^-1) for any Borel subset A of SL_d()⋉^d and where A^-1 := {g^-1| g∈ A}. Let λ_1 ⩾…⩾λ_d be the Lyapunov exponents of μ_0 (see appendix <ref>). Then, the largest Lyapunov exponent of μ̃ is -λ_d. Moreover, as λ_1 ⩾…⩾λ_d, λ_1>0 and λ_1 + … + λ_d = 0, we have that λ_d <0 and so, -λ_d>0. Let n∈, ε∈_+^∗ and x∈^d.
We can computeI_n(x): =μ^∗ n⊗μ^∗ n({(g_1,g_2) ∈^2| g_1 x=g_2x }) = μ^∗ n⊗μ̃^∗ n({(g_1,g_2) ∈^2 | g_2g_1 x=x }) ⩽μ^∗ n⊗μ̃^∗ n({(g_1,g_2)| 1+g_2 g_1x/1+x⩽ e^(λ_1 - λ_d- ε)n}) But1+g_2 g_1x/1+x = 1+g_2 g_1x/1+g_1x1+g_1x/1+x So, we obtain thatI_n(x)⩽μ^∗ n({ g_1| 1+g_1x/1+x⩽ e^(λ_1 - ε/2)n}) + ∫_μ̃^∗ n({ g_2| 1+g_2g_1x/1+g_1x⩽ e^(-λ_d - ε/2)n}) μ^∗ n(g_1) and we can conclude with corollary <ref> applied to the measures μ and μ̃. § EFFECTIVE SHADOWING LEMMAS The aim of this section is to prove a criterion to produce measures satisfying an effective shadowing lemma. First of all, we recall the following definition. Let μ be a Borel probability measure on . We say that μ satisfies an effective shadowing lemma if for any C',t' ∈_+^∗, there are C_1,C_2,M,t,L ∈_+^∗ such that for any x,y∈^d, any r∈_+^∗ and any n∈ with r⩽ C_1e^-Ln, ifμ^∗ n({ g∈| d(gx,y) ⩽ r }) ⩾ C_2 e^-t n then there are x',y'∈^d such that d(x,x'),d(y,y')⩽ re^Mn andμ^∗ n({ g∈| gx'=y'})⩾ C'e^-t'n The criterion that we are going to prove will use the diophantine properties of the translation parts of the elements in the support of μ. More specifically, we want a condition that ensures that if (g_1,v_1), (g_2,v_2)∈ supp μ^∗ n are such that v_1 and v_2 are close (in some sense) then v_1=v_2. This is why we make the following definition. Let d∈^∗ and let B⊂^d be a finite subset. We say that B is (C,L)-diophantine if for any non-zero (M_b)∈^B,d(∑_b∈ B M_b b,0)⩽ C/max_b |M_b|^L⇒∑_b M_b b=0 More generally, we say that B is diophantine if it is (C,L)-diophantine for some C,L∈_+^∗. With this definition, a diophantine subset can contain rational points. Let b∈^1 ∖/. Asking {b} to be diophantine is asking for C,L∈_+^∗ such that for any q∈^∗,d(qb,0) > C/q^L The name comes from this property. Let d⩾ 2 and B a subset of ^d. It is not the same thing to say that B is diophantine and that {coefficients of b| b∈ B} is a diophantine subset of ^1 (consider B={(b_1,b_2)} with b_1 diophantine and b_2 not).
This last property is stronger, but it is the one that we will need in the sequel, and we refer to lemma <ref> for more details. Let N∈^∗. Then, for a.e. b_1, …, b_N∈^1, the set {b_1, …, b_N} is diophantine. We are now ready to state the main result of this section. Let μ be a Borel probability measure on SL_d() ⋉^d and let μ_0 be its projection on SL_d(). Assume that μ_0 is strongly irreducible and proximal, that it has an exponential moment and that {coefficients of b| (a,b)∈ supp μ} is a diophantine subset of ^1. Then, μ satisfies an effective shadowing lemma. To prove this proposition, we first come back to the difference, for a subset B of ^d, between being diophantine and having elements whose coefficients form a diophantine subset of ^1. Let B be a finite subset of ^d and F:= {coefficients of B}⊂^1 Then, F is (C,L)-diophantine if and only if there is C'∈_+^∗ such that for any non-zero (M_b) ∈ℳ_d()^B,d(∑_b∈ B M_b b,0) ⩽C'/(max_b M_b)^L⇒∑_b M_bb=0 First, assume that F is (C,L)-diophantine and set C'= C/(d|B|)^L. Let 0≠(M_b) ∈ℳ_d()^B be such thatd(∑_b∈ B M_b b,0) ⩽C'/(max_b M_b)^L Each coefficient of ∑_b M_bb is a sum of elements of F multiplied by integers that are smaller than d |B| max_b M_b. In other words, for any coefficient of ∑_b M_bb, we get a sum ∑_f∈ F L_ff with |L_f| ⩽ d|B| max_b M_b andd( ∑_f L_ff,0) ⩽C'/max_b M_b^L⩽C'(d|B|)^L/max_f|L_f|^L = C/(max_f |L_f|)^L And as F is (C,L)-diophantine, this implies that ∑_f L_ff=0 and so, as this is true for any coefficient of ∑_b M_bb, we get that ∑_b M_bb=0.
Conversely, if there is C' ∈_+^∗ such that for any 0≠(M_b) ∈ℳ_d()^B d(∑_b∈ B M_bb,0) ⩽C'/(max_b M_b)^L⇒∑_b M_bb=0 then we set C= C'/d^L and let 0≠(L_f) ∈^F be such thatd(∑_f∈ FL_ff,0) ⩽ C/(max_f |L_f|)^L For any element f of F, choose an element (b(f),i(f)) in {(b,i) ∈ B× [1,d] | f is the i-th coefficient of b } Now, for any b∈ B, we denote by M_b the matrix where we set L_f in the i-th column if there is f∈ F such that (b(f),i(f)) = (b,i), and 0 otherwise. Thus, by definition,∑_b∈ B M_bb = ([ ∑_f L_ff;⋮; ∑_f L_ff ]) and max_b M_b⩽ d max_f |L_f|, sod(∑_b M_bb,0) ⩽ C/(max_f|L_f|)^L⩽ C'/(max_bM_b)^L And this proves, by our assumption, that ∑_b M_bb=0; so we also get that ∑_f L_f f=0 and this finishes the proof of the lemma. From now on, we setB(μ):={b| (a,b)∈ supp μ} and F(μ):={coefficients of b|b∈ B(μ)} To prove proposition <ref>, we are going to use some control of the translation parts of elements of supp μ^∗ n. To do so, for any Q∈^∗ and any finite subset B of ^d, we setX_Q(B) = {p + ∑_b∈ B M_bb/q| p∈^d, q∈, |q|⩽ Q, (M_b)∈ℳ_d()^B, max_b∈ BM_b⩽ Q} Thus, some x∈^d belongs to X_Q(B) if there is q∈^∗ with q⩽ Q such that each coefficient of qx can be obtained from translations by coefficients of elements of B with multiplicities smaller than Q. This definition is made so that, for any M large enough and any n∈^∗, with large probability, a μ^∗ n-generic element (g,v) is such that v∈ X_e^Mn(B(μ)). To make this idea more precise, we set, for any M∈_+^∗ and n∈,_n^M := {( a_i) ∈_0^n |max_k∈ [1,n]max_1⩽ i_1<…<i_k ⩽ na_i_k… a_ i_1⩽ e^Mn} The aim of the following lemma is to prove that elements of _n^M are generic. Let μ be a Borel probability measure on and let μ_0 be the image of μ on SL_d().
Assume that μ_0 has an exponential moment. Then, for any M∈_+^∗ large enough, there is t∈_+^∗ such that for any n∈,μ^⊗ n_0(_n^M) ⩾ 1 - e^-tn Moreover, if B(μ) is finite (see equation (<ref>)), then, for any (a_1,b_1), … ,(a_n,b_n) ∈ supp μ such that (a_i) ∈_n^M,∑_k=1^n a_n … a_k+1 b_k ∈ X_e^(M+1)n(B(μ)) First, remark that as μ_0 is concentrated on SL_d(), for μ_0-a.e. g∈_0, g⩾ 1 and so, noting δ∈_+^∗ such that ∫__0g^δμ_0(g)<+∞, we have that for any M∈_+^∗ and any n∈,μ_0^⊗ n ((_n^M)^c) =μ_0^⊗ n({ (g_i) ∈_0^n| max_k∈ [1,n]max_1⩽ i_1<…<i_k ⩽ ng_i_k… g_ i_1> e^Mn}) ⩽μ_0^⊗ n( { (g_i) ∈_0^n|∏_i=1^ng_i⩾ e^Mn}) ⩽ e^-δ Mn( ∫__0g^δμ_0(g))^n And this finishes the proof of the first part of the lemma. To prove the second one, take (a_1,b_1), … ,(a_n,b_n) ∈ supp μ such that (a_i) ∈_n^M. Then, we can write∑_k=1^n a_n … a_k+1 b_k = ∑_b'∈ B(μ)(∑_k=1^n _b_k=b' a_n … a_k+1 )b' and, for any b'∈ B(μ),∑_k=1^n _b_k=b' a_n … a_k+1 ⩽ ne^Mn This proves that ∑_k=1^n a_n … a_k+1 b_k ∈ X_ne^Mn(B(μ)) and, as ne^Mn⩽ e^(M+1)n, the lemma follows. In the sequel, we will have to control sums of the form ∑ c_i a_i where the a_i are μ^∗ n-generic and c_i ∈. To do so, we will use the following lemma. Let μ_0 be a strongly irreducible and proximal Borel probability measure on SL_d() having an exponential moment. Set_0^1 := { a∈_0 ||1/nlna- λ_1|⩽ε} and_0^d := { (a_i)∈_0^d | ∀ i∈ [1,d], a_i∈_0^1 and det(∑_i c_i a_i)≠0 } where we put c_1 = 1-d and c_i=1 for i∈ [2,d]. Then, for any ε∈_+^∗, there are C,t∈_+^∗ such that for any n∈,μ_0^∗ n(_0^1) ⩾ 1-Ce^-tn and μ_0^∗ n⊗…⊗μ_0^∗ n(_0^d) ⩾1 - Ce^-tn In <cit.>, this lemma is not stated in this form, but it corresponds to lemmas 4.3, 4.6 and 7.9 and to part of the proof of proposition 7.3. We are now ready to prove proposition <ref>. The proof consists of two lemmas. In lemma <ref>, we study points that are far from points of X_e^n(B(μ)) and in lemma <ref> we study points that are close to it.
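Before turning to these lemmas, the diophantine condition itself is easy to probe numerically in the simplest case. The sketch below is an illustration under our own choice of example, not taken from the text: since |q√2 − p| = |2q² − p²|/(q√2 + p) ⩾ 1/((2√2+1)q), the set {√2 mod 1} is (C,1)-diophantine with C = 1/(2√2+1) ≈ 0.26, and the margin q·d(qb,0) stays bounded away from 0.

```python
import math

def approximation_margin(b, Q):
    """Return min over 1 <= q <= Q of q * d(q*b, 0),
    where d(x, 0) is the distance from x to the nearest integer."""
    margin = float("inf")
    for q in range(1, Q + 1):
        x = q * b
        margin = min(margin, q * abs(x - round(x)))
    return margin
```

For b = √2 the margin is attained at q = 2 and equals 2(3 − 2√2) ≈ 0.343. For a rational b the margin would reach 0 exactly, and for a Liouville-type number it would decay to 0, so {b} would not be (C,1)-diophantine for any C.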
With the assumptions of proposition <ref>, for any M∈_+^∗ large enough there are C,t∈_+^∗ such that for any r∈ ]0,1], any n∈ and any x,y∈^d, ifμ^∗ n({g∈| d(gx,y)⩽ r}) ⩾ Ce^-tn then there is x'∈ X_e^Mn(B(μ)) such thatd(x,x') ⩽ re^Mn The idea that makes the proof work is that if a_1 x + b_1, … , a_d x+b_d are close to each other, then, setting c_1=1-d and c_i=1 for i∈ [2,d], we get that ∑_i c_i a_i x is close to -∑_i c_i b_i. But, according to lemma <ref>, with large probability, the matrix ∑_i c_i a_i is invertible and, according to lemma <ref>, the b_i belong to X_e^Mn(B(μ)), and so x∈ X_e^Mn(B(μ)). We keep the notations _0^1 and _0^d from lemma <ref> and the notation _n^M from lemma <ref>. Then, for any ε∈_+^∗ and any large enough M, we denote by C,t the constants given by lemma <ref> and by t_0 the one given by lemma <ref>. Then, we set_∗ := { ((a_1,b_1), …, (a_d,b_d))∈^d |(a_i)∈_0^d and for any i, b_i ∈ X_e^(M+1)n(B(μ)) } By definition, we have thatμ^∗ n⊗…⊗μ^∗ n((_∗)^c) ⩽ Ce^-tn + d e^-t_0n So, we can compute, for any x,y∈^d,I_n(x,y): =(μ^∗ n({g∈| d(gx,y)⩽ r}))^d = ∫_^d_B(y,r)(a_1 x+b_1) …_B(y,r) (a_dx+b_d) μ^∗ n(a_1,b_1) …μ^∗ n(a_d,b_d)⩽ Ce^-tn + de^-t_0n + ∫__∗∏_i=1^d_B(y,r)(a_i x+b_i) μ^∗ n(a_1,b_1) …μ^∗ n(a_d,b_d) Thus, if M is large enough, we can find C,t ∈_+^∗ such that for any r∈ ]0,1] and any n∈, ifμ^∗ n({g∈| d(gx,y)⩽ r}) ⩾ Ce^-tn then∫__∗∏_i=1^d_B(y,r)(a_i x+b_i) μ^∗ n(a_1,b_1) …μ^∗ n(a_d,b_d) >0 In particular, there is ((a_1,b_1), …, (a_d,b_d))∈_∗ such that for any i,d(a_i x+b_i, y) ⩽ r Now, we let x, y, b_i be representatives of x,y,b_i in ^d. We have that for any i∈ [1,d] there is p_i ∈^d such thata_i x + b_i - y - p_i⩽ r So, noting c_1 = 1-d and c_i = 1 for i≠1, we get that(∑_i c_i a_i)x + ∑_i c_i b_i - ∑_i c_ip_i⩽ 2d r But, by definition of _0^d, det( ∑_i c_i a_i )∈^∗. So,1 ⩽|det( ∑_i c_i a_i)| ⩽∑_i c_i a_i^d ⩽ d^2d e^(λ_1 + ε)dn Let U = ∑_i c_i a_i.
Then U is invertible and we can writeU^-1 = 1/det(U) V with V ∈ℳ_d() and, for some constant C(d) depending only on d, V⩽ C(d)U^d-1. Thus, we get thatx+U^-1∑_i c_i b_i - U^-1∑_i c_ip_i ⩽ 2drU^-1 To conclude, we only need to remark that we can writeb_i = ∑_b ∈ B(μ) M_b^i b with max_b M_b^i⩽ e^(M+1)n This proves that-U^-1∑_i c_i b_i +U^-1∑_i c_i p_i =∑_i c_i V p_i/det(U) - ∑_b∈ B( μ)(∑_i c_i VM_b^i)b /det(U) and thatmax_b∈ B(μ)∑_i c_i V M_b^i ⩽ 2d Vmax_bM_b^i⩽ 2d C(d) e^(M+1)n e^(λ_1 + ε)(d-1)n So, for some possibly larger M'⩾ M, we get thatx':=-U^-1∑_i c_i b_i +U^-1∑_i c_i p_i∈ X_e^M'n(B(μ)) and d(x,x') ⩽ re^M'n. With the same assumptions as in proposition <ref>, there are C_0,L such that for any M∈_+ large enough, there are C,t∈_+^∗ such that for any n∈^∗ and any r∈ ]0,1] withr ⩽C_0/e^MLn we have that for any y∈^d, any x'∈ X_e^(M+1)n(B(μ)) and any x∈^d with d(x,x')⩽ r,(μ^∗ n({ g∈| d(gx,y) ⩽ r}) )^2⩽ Ce^-tn + μ^∗ n⊗μ^∗ n({(g_1,g_2) | g_1x'=g_2 x'}) To simplify our notations, we set B= B(μ) and F=F(μ). Let x,x',y∈^d be as in the statement.
By definition of X_Q(B) (here with Q = e^(M+1)n), there are (M_b)∈ℳ_d() ^B with max_b M_b⩽ Q, p∈^d and q∈^∗ with |q|⩽ Q such thatx' = p+ ∑_b∈ B M_b b/q Let, for any ε∈_+^∗ and M∈,_∗:= { (a,b) ∈| a⩽ e^(λ_1 + ε)n and b∈ X_e^(M+1)n(B)} Lemmas <ref> and <ref> prove that for any ε∈_+^∗ and any M∈_+ large enough, there are C,t∈_+^∗ such that for any n∈,μ^∗ n(_∗) ⩾ 1-Ce^-tn Let us computeI_n(x,y): =(μ^∗ n({ g∈| d(gx,y) ⩽ r}) )^2 = ∫_^2_B(y,r)(g_1 x) _B(y,r) (g_2 x) μ^∗ n(g_1) μ^∗ n(g_2) ⩽ 2μ^∗ n((_∗)^c) + ∫_^2__∗(g_1)__∗(g_2) _d(g_1x,g_2x)⩽ 2rμ^∗ n(g_1) μ^∗ n(g_2) ⩽ 2Ce^-tn + ∫_^2__∗(g_1)__∗(g_2) _d(g_1x,g_2x)⩽ 2rμ^∗ n(g_1) μ^∗ n(g_2) Moreover, for g_1 = (a_1,b_1) and g_2=(a_2,b_2), we have thatd(g_1 x',g_2 x')⩽ d(g_1 x',g_1 x) + d(g_1x,g_2 x) + d(g_2 x,g_2 x') ⩽a_1 r + d(g_1 x,g_2 x) + a_2r And this proves thatI_n(x,y) ⩽ 2 Ce^-tn + ∫_^2__∗(g_1)__∗(g_2) _d(g_1x',g_2x')⩽ 3re^(λ_1 + ε)nμ^∗ n(g_1) μ^∗ n(g_2) To conclude, we only need to prove that, under the diophantine condition, if x'∈ X_e^(M+1)n(B) and g_1,g_2∈_∗ are such that d(g_1 x',g_2 x') ⩽ 3re^(λ_1+ ε)n, then g_1 x'=g_2 x'. To do so, remark that if x'∈ X_e^(M+1)n(B) and g=(a,b)∈_∗, then a x'+b ∈ X_2 e^(M+1 +λ_1 + ε)n(B) Thus, we have that (g_1 - g_2) x' ∈ X_4e^(M+1+λ_1 +ε)n(B) and that d((g_1 - g_2)x',0) ⩽ 3re^(λ_1 + ε)n. In particular, there is q∈^∗ with q⩽ 4e^(M+1+λ_1 +ε)n such that q(g_1-g_2)x' is a sum of elements of B multiplied on the left by matrices of norm smaller than (4e^(M+1+λ_1 +ε)n)^2 andd(q(g_1-g_2)x',0) ⩽ |q|3r e^(λ_1 + ε)n⩽ 12re^(M+1+2λ_1 +2ε)n So, as F is (C,L)-diophantine, according to lemma <ref>, there are constants C',L such that if12re^(M+1+2λ_1 +2ε)n⩽C'/(4e^(M+1+λ_1 +ε)n)^2L then we have that qg_1 x'=qg_2 x'. But this proves that g_1 x' = g_2 x' + p/q for some p∈^d. Andd(p/q,0)=d(g_1x',g_2 x') ⩽ 3r e^(λ_1 + ε)n So, if 1/|q| > 3r e^(λ_1 +ε)n, we have that p/q=0 and so, g_1 x'=g_2 x'.
According to lemma <ref>, we have that for any M∈_+^∗ large enough, there are C,t∈_+^∗ such that for any r∈ ]0,1], any n∈ and any x,y∈^d, ifμ^∗ n({g∈| d(gx,y)⩽ r}) ⩾ Ce^-tn then there is x'∈ X_e^Mn(B(μ)) such thatd(x,x') ⩽ re^Mn But, in this case, we have, according to lemma <ref>, that ifre^Mn⩽C_0/e^LMn, then(μ^∗ n({ g∈| d(gx,y) ⩽ r}) )^2⩽ Ce^-tn + μ^∗ n⊗μ^∗ n({(g_1,g_2) | g_1x'=g_2 x'}) And this is what we intended to prove sinceμ^∗ n⊗μ^∗ n({(g_1,g_2) | g_1x'=g_2 x'}) = ∫_μ^∗ n( { g_1 | g_1 x' = g_2 x' }) μ^∗ n(g_2) § PRODUCTS OF RANDOM MATRICES In this section, we are going to recall some of the properties of products of random matrices. To do so, we fix some finite dimensional -vector space that we endow with a Euclidean norm. Let μ be a Borel probability measure on :=GL(). We set, for any g∈, N(g) = max(g,g^-1) and we say that μ has a moment of order 1 if∫_ln N(g) μ(g)<+∞ and that it has an exponential moment if there is some ε∈_+^∗ such that∫_ N(g)^εμ(g) <+∞ Remark that there is some constant C depending only on dim() such that for any g∈SL_d(), g^-1⩽ C g^d-1 and so, if μ is a measure on SL(), it is enough to ask that for some ε∈_+^∗,∫_g^εμ(g)<+∞ We would like to study the product g_n … g_1 where (g_i) is an iid sequence of law μ. The first result in this direction is Oseledec's theorem: Let μ be a Borel probability measure on :=GL() having a moment of order 1. Then, there are m_1 , … , m_r ∈^∗ with ∑_i m_i=dim, there are Λ_1>…>Λ_r ∈ and some measurable function from ^ into the space of flags :=_1 ⊃…⊃_r+1:={0} of such that dim_i =∑_j=i^r m_j and such that for μ^⊗-a.e. ω =(g_n) ∈^, * g_1 _i^ω = _i^ϑω where ϑ is the shift on ^. * For any x∈_i^ω∖_i+1^ω,lim_n 1/nlng_n … g_1 x=Λ_i The parameters Λ_1, … ,Λ_r are called the Lyapunov exponents, and we writeλ_1 = … = λ_m_1 = Λ_1, λ_m_1+1 = … = λ_m_1+m_2 = Λ_2,etc. However, if x∈, this theorem doesn't say anything about the behavior of g_n … g_1 x for a generic (g_n)∈^ because we have no information on the sequences (g_n) such that x∈_i^(g_n).
To avoid this problem, we usually assume that the subgroup of spanned by the support of μ acts irreducibly on (i.e., it doesn't fix any non-trivial subspace of ). In this case, Furstenberg proved the following theorem. Let μ be a Borel probability measure on having a moment of order 1 and whose support generates a group acting irreducibly on . Then, for any x∈∖{0},1/nlng_n … g_1 xλ_1a.e. This irreducibility assumption is not good enough for us since we will identify SL_d() ⋉^d with the subgroup([ SL_d() ^d;01 ]) of SL_d+1() whose action on ^d+1 is not irreducible. A first important case of reducible actions on is when the support of μ generates a group{( [ Γ_1 ∗; 0 Γ_2 ])} where Γ_i < SL(_i) with _1 ⊕_2 = and Γ_i acts irreducibly on _i. Indeed, in this case, we can study the action on _1 and on _2 to get the one on . This motivates the following definition. Let μ be a Borel probability measure on GL(). We say that some subspace of is adapted to μ if it is proper, invariant under the subgroup of GL() spanned by the support of μ and if there are Δ_1>Δ_2 ∈ such that * For any x∈/∖{0},1/nlng_n … g_1 xΔ_1 a.e. * For any x∈∖{0},lim sup1/nlng_n … g_1 x⩽Δ_2 a.e. This definition is useful only because there always is an adapted subspace. Let μ be a Borel probability measure on GL() having a moment of order 1. Then there is some subspace of that is adapted to μ. This theorem proves that we can always block-triangularize the group _μ spanned by the support of μ. Indeed, we can find by induction Δ_1>… >Δ_s ∈ and a flag =_1 ⊃…⊃_s+1:=0 adapted to μ: _i is _μ-invariant and for any i and any x∈_i / _i+1∖{0},1/nlng_n … g_1 xΔ_ia.e. The flag that we obtain in this way is included in the flag given by Oseledec's theorem (meaning that {_i}⊂{_i} and {Δ_i}⊂{Λ_i} with Δ_1 = Λ_1) but it has the advantage of being invariant, and the convergence in equation (<ref>) gives us Furstenberg's law of large numbers without any irreducibility assumption.
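Furstenberg's law of large numbers is easy to observe numerically. The sketch below is an illustrative choice of measure, not taken from the text: it estimates λ_1 for μ uniform on the two matrices ((1,1),(0,1)) and ((1,0),(1,1)), whose support generates SL_2(ℤ), a non-compact group acting strongly irreducibly on ℝ², so that λ_1 > 0.

```python
import math
import random

def lyapunov_estimate(n, trials, seed=0):
    """Monte Carlo estimate of lambda_1 = lim (1/n) log ||g_n ... g_1 x||
    for mu uniform on {A, B}.  The vector is renormalized at each step to
    avoid overflow; the accumulated log-norms recover the growth rate."""
    rng = random.Random(seed)
    A = ((1, 1), (0, 1))
    B = ((1, 0), (1, 1))
    total = 0.0
    for _ in range(trials):
        x, y = 1.0, 0.0  # any fixed non-zero starting vector
        log_norm = 0.0
        for _ in range(n):
            a = A if rng.random() < 0.5 else B
            x, y = a[0][0] * x + a[0][1] * y, a[1][0] * x + a[1][1] * y
            r = math.hypot(x, y)
            log_norm += math.log(r)
            x, y = x / r, y / r
        total += log_norm / n
    return total / trials
```

With a few hundred trials of moderate length the estimate is clearly bounded away from 0, in line with the positivity criterion recalled at the end of this section.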
We are going to make the convergence in equation (<ref>) more precise through a non-concentration inequality, stated in the next theorem. Let μ be a Borel probability measure on GL() having an exponential moment. Then, for any ε∈_+^∗, there are C,t∈_+^∗ such that for any n∈,μ^∗ n( { g ∈GL()| | 1/nlng - Λ_1 | ⩾ε}) ⩽ Ce^-tn Moreover, if is adapted to μ then, for any x∈ and any n∈,μ^∗ n({ g∈GL()|e^(Λ_1 - ε)n d(x,) ⩽gx⩽ e^(Λ_1 + ε)nx}) ⩾ 1- Ce^-tn where we noted d(x,) = inf_y∈x-y To prove this theorem, we first prove the following lemma. Let μ be a Borel probability measure on GL() having an exponential moment. Let Δ_1>…>Δ_s∈ and :=_1 ⊃…⊃_s+1:={0} be the flag adapted to μ given by induction from theorem <ref>. Then, for any ε∈_+^∗, there are C,t∈_+^∗ such that for any n∈ and any x∈∖{0},μ^∗ n( { g ∈GL()| Δ_s - ε⩽1/nlngx/x⩽Δ_1 + ε}) ⩾ 1 - Ce^-tn The proof of this lemma is an adaptation of the proof of proposition 3.2 in <cit.>, where Benoist and Quint only assume polynomial moments. First, for g∈ and X= x∈(^d), we setσ(g,X) = lngx/x, φ(X) = ∫_σ(g,X) μ(g) and σ'(g,X) = σ(g,X) - φ(X) Then, for any X= x ∈(^d) and any sequence (g_n) ∈GL()^, noting X_k = g_k … g_1 X, we have thatlng_n … g_1 x/x = ∑_k=0^n-1σ(g_k+1, X_k) = ∑_k=0^n-1σ'(g_k+1, X_k) + ∑_k=0^n-1φ(X_k) Now, let M_n = ∑_k=0^n-1σ'(g_k+1,X_k). We can compute[ M_n+1 - M_n | X_0, …,X_n ] = [σ'(g_n+1, X_n) | X_0, … ,X_n ] = ∫_σ'(g,X_n) μ(g) = 0 This proves that M_n is a martingale.
Moreover, for any ε∈_+^∗,[ e^ε|M_n+1 - M_n|| X_0, …, X_n ] = ∫_ e^ε|σ'(g,X_n)|μ(g) and the inequality-lng^-1⩽σ(g,X) = lngx/x⩽lng shows that[ e^ε|M_n+1 - M_n|| X_0, …, X_n ] ⩽∫_ e^2εlnmax(g, g^-1)μ(g) And so, as μ has an exponential moment, there are ε∈_+^∗ and some constant C_0 such that for any X∈() and any n∈,_X [ e^ε|M_n+1 - M_n|| X_0, …, X_n ] ⩽ C_0a.e. Thus, according to the non-concentration inequality for martingales (see theorem 1.1 in <cit.>), for any ε∈_+^∗, there are C,t∈_+^∗ such that for any X∈() and any n∈,_X( |M_n| ⩾ε n)⩽ Ce^-tn (it is important to remark that the constants C,t don't depend on X but only on ε and C_0). To conclude, we only need to study ∑_k=0^n-1φ(X_k). First, we just proved (using the Borel-Cantelli lemma) that1/n M_n0a.e. Moreover, by definition of Δ_1, … , Δ_s and _1, … ,_s, we have that for any X∈(),Δ_s ⩽lim inf_n 1/n∑_k=0^n-1σ(g_k+1, X_k) ⩽lim sup_n 1/n∑_k=0^n-1σ(g_k+1, X_k) ⩽Δ_1a.e. This proves thatΔ_s ⩽lim inf_n 1/n∑_k=0^n-1φ(X_k) ⩽lim sup_n 1/n∑_k=0^n-1φ(X_k) ⩽Δ_1a.e. And so, for any stationary probability measure ν on (),Δ_s ⩽∫φν = lim_n ∫_x 1/n∑_k=0^n-1φ(X_k) ν(x) ⩽Δ_1 Finally, using proposition 3.1 of <cit.>, we get that for any ε∈_+^∗, there are C,t∈_+^∗ such that for any n∈ and any X∈(^d),_X( Δ_s - ε⩽1/n∑_k=0^n-1φ(X_k) ⩽Δ_1+ ε)⩾ 1 - Ce^-tn And this finishes the proof of the lemma. We refer to the proof of proposition 4.1 in <cit.>.
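The martingale decomposition above can also be checked by simulation. In the sketch below (again for an illustrative measure μ uniform on two matrices, not taken from the text), φ(X) = (σ(A,X)+σ(B,X))/2 is computed exactly, so the increments σ' are exactly centered and M_n is a true martingale; its normalized size |M_n|/n should then be small, in line with M_n = o(n).

```python
import math
import random

A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))

def apply_mat(a, v):
    return (a[0][0] * v[0] + a[0][1] * v[1], a[1][0] * v[0] + a[1][1] * v[1])

def sigma(a, v):
    # sigma(a, X) = log(||a v|| / ||v||) for X the direction of v
    w = apply_mat(a, v)
    return 0.5 * math.log((w[0] ** 2 + w[1] ** 2) / (v[0] ** 2 + v[1] ** 2))

def martingale_path(n, seed=0):
    """One trajectory of M_n = sum_{k<n} sigma'(g_{k+1}, X_k),
    with sigma' = sigma - phi and phi computed exactly as the average
    over the two atoms of mu."""
    rng = random.Random(seed)
    v = (1.0, 1.0)
    m = 0.0
    for _ in range(n):
        phi = 0.5 * (sigma(A, v) + sigma(B, v))
        a = A if rng.random() < 0.5 else B
        m += sigma(a, v) - phi
        w = apply_mat(a, v)
        r = math.hypot(*w)
        v = (w[0] / r, w[1] / r)  # renormalize; only the direction X_k matters
    return m
```

Since the increments here are uniformly bounded, the azuma-type deviation inequality quoted above applies and predicts that |M_n|/n is exponentially unlikely to exceed any fixed ε, which a run of the simulation confirms.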
First, remark that for any basis (v_1 , … , v_d) of there is a constant C such that for any g∈GL(),1/Cgv_1⩽g⩽ C max_i∈ [1,d]gv_i and so, the non-concentration inequality for lngv/v for any v∈∖{0} implies the one for lng. Then, remark that according to lemma <ref>, for any ε∈_+^∗, there are C,t∈_+^∗ such that for any x∈ and any n∈,μ^∗ n({ g∈GL() | gx > e^(Λ_1 + ε)nx}) ⩽ Ce^-tn We endow / with the normx_/ := inf_y∈x-y Then, we have that for any x∈,x⩾ d(x,) = π x_/ where π is the projection onto / and, by definition, since is adapted to μ, for any x∈/∖{0},1/nlng_n … g_1 x_/Λ_1a.e. This proves that {0} is adapted to the image of μ in GL(/). And so, according to lemma <ref>, for any ε∈_+^∗, there are C,t∈_+^∗ such that for any n∈ and any x∈/∖{0},μ^∗ n ({ g| | 1/nlngx/x - Λ_1| ⩾ε}) ⩽ Ce^-tn And this proves that for any n∈ and any x∈,μ^∗ n({ g| gx⩽ e^(Λ_1 - ε)n d(x,) }) ⩽ Ce^-tn And this finishes the proof of the theorem. We end this section with the study, for a Borel probability measure μ on SL_d() ⋉^d, of the translation part b of the μ^∗ n-generic elements g=(a,b). The aim is to prove that, under some assumptions on μ, for any n, with large μ^∗ n-probability, an element g=(a,b)∈SL_d() ⋉^d is such that b≈ e^Λ_1 n. To do so, we first compare the Lyapunov exponents of μ and of its projection on SL_d() in the next lemma. Let μ be a Borel probability measure on SL_d() ⋉^d having an exponential moment and let μ_0 be the projection of μ onto SL_d(). See μ as a probability measure on SL_d+1() and define Λ_1(μ) this way. Then,Λ_1(μ) = Λ_1(μ_0) First, for any n∈^∗ and any ε∈_+^∗,μ^⊗({| 1/nlng_2n… g_n+1 - Λ_1 (μ)| ⩾ε}) = μ^⊗({|1/nlng_n … g_1 - Λ_1 (μ)| ⩾ε}) So, according to lemma <ref>,1/nln g_2n… g_n+1Λ_1(μ)μ^⊗-a.e.
We can write, for any g∈SL_d() ⋉^d,g =( [ a b; 0 1 ]) and if g has law μ, then a has law μ_0. Thus,g_2n… g_1 = ([ a_2n… a_1 a_2n… a_n+1∑_k=1^n a_n … a_k+1 b_k + ∑_k=n+1^2n a_2n… a_k+1 b_k; 0 1 ]) And so,g_n … g_1 ⩾a_n … a_1 This proves that Λ_1(μ) ⩾Λ_1(μ_0). Let now Ω⊂^ be such that μ^⊗ (Ω)=1 and for any (g_n) ∈Ω,lim_n 1/nlng_n… g_1=lim_n 1/nlng_2n… g_n+1 = Λ_1(μ) andlim_n1/nlna_n… a_1= lim_n 1/nlna_2n… a_n+1 = Λ_1(μ_0) This way, for any ε∈_+^∗, there is N_ε such that for any n∈ with n⩾ N_ε,e^(Λ_1(μ) - ε)n⩽g_n … g_1, g_2n… g_n+1⩽ e^(Λ_1(μ) + ε)n anda_n … a_1 , a_2n… a_n+1⩽ e^(Λ_1(μ_0) + ε)n Thus, we have that for any ε∈_+^∗ and any large enough n,e^2 (Λ_1(μ)-ε)n⩽g_2n… g_1 ⩽max( e^2(Λ_1(μ_0) + ε)n, e^(Λ_1(μ) + Λ_1(μ_0) + 2ε)n + e^(Λ_1(μ) + ε)n) This proves that for any ε∈_+^∗,2(Λ_1(μ) - ε) ⩽max (2Λ_1(μ_0) + ε, Λ_1(μ) + Λ_1(μ_0) + 2ε) And so, we get that2Λ_1(μ) ⩽max( 2Λ_1(μ_0), Λ_1(μ) + Λ_1(μ_0)) Finally, as we already proved that Λ_1(μ_0) ⩽Λ_1(μ), the previous inequality actually is an equality and we get the expected result. Until now, we didn't say anything about the positivity of Λ_1. If μ is a measure on SL(), then, as det(g)=1 for μ-a.e. g∈, we have that λ_1 + … + λ_dim() = 0 and so, λ_1=0 if and only if for any i, λ_i=0. To get conditions that ensure that λ_1>0, we will say that a subgroup of SL_d() is strongly irreducible if it doesn't fix any non-trivial finite union of subspaces of ^d. We recall the following result. Let μ be a Borel probability measure on SL() having a moment of order 1, i.e.∫_SL_d()lngμ(g) <+∞ and such that the subgroup spanned by the support of μ is strongly irreducible and non-compact. Then, Λ_1 >0. Let μ be a Borel probability measure on SL_d() ⋉^d having an exponential moment and that is not concentrated on SL_d() ⋉{0}. Assume that the projection onto SL_d() of the subgroup spanned by the support of μ is strongly irreducible and non-compact.
Then, for any ε∈_+^∗, there are C,t∈_+^∗ such that for any n∈ and any x∈^d,μ^∗ n({ g ∈SL_d() ⋉^d || 1/nln1+gx/1+x - Λ_1 | ⩾ε}) ⩽ Ce^-tn We see μ as a probability measure on SL_d+1() and we are going to prove that the subspace adapted to μ and given by theorem <ref> is {0}. First, since the support of the projection of μ on SL_d() spans a group _μ that acts strongly irreducibly and non-compactly on ^d, the only subspaces of ^d+1 that can be invariant under μ are ^d+1, Vect(e_1, …, e_d), Vect(e_d+1) and {0}. But the assumption that μ is not concentrated on SL_d() ⋉{0} implies that Vect(e_d+1) is not invariant under the group spanned by the support of μ. Moreover, according to theorem <ref>, there is Λ_1∈_+^∗ such that for any x∈Vect(e_1,…, e_d) ∖{0},1/nlng_n … g_1 xΛ_1a.e. Then, for any x∈^d+1∖Vect(e_1, …, e_d), we have, according to lemma <ref>, thatlim sup_n 1/nlng_n … g_1 x⩽Λ_1 So we also have that ≠Vect(e_1, … ,e_d) and, as is proper, we have that ={0}. Thus, according to lemma <ref>, for any ε∈_+^∗, there are C,t∈_+^∗ such that for any x∈^d+1∖{0},μ^∗ n( {g ∈SL_d() ⋉^d | | 1/nlngx/x- Λ_1 | ⩾ε}) ⩽ Ce^-tn And in particular, with x=x_0+e_d+1 for x_0∈Vect(e_1, …, e_d), we get the expected result.
Jean-Baptiste Boyer, On the affine random walk on the torus, arXiv:1702.08387 [math.PR], 2017.
ZARM, University of Bremen, 28359 Bremen, Germany We present a definition of the geoid that is based on the formalism of general relativity without approximations; i.e. it allows for arbitrarily strong gravitational fields. For this reason, it applies not only to the Earth and other planets but also to compact objects such as neutron stars. We define the geoid as a level surface of a time-independent redshift potential. Such a redshift potential exists in any stationary spacetime. Therefore, our geoid is well defined for any rigidly rotating object with constant angular velocity and a fixed rotation axis that is not subject to external forces. Our definition is operational because the level surfaces of a redshift potential can be realized with the help of standard clocks, which may be connected by optical fibers. Therefore, these surfaces are also called “isochronometric surfaces.” We deliberately base our definition of a relativistic geoid on the use of clocks since we believe that clock geodesy offers the best methods for probing gravitational fields with highest precision in the future. However, we also point out that our definition of the geoid is mathematically equivalent to a definition in terms of an acceleration potential, i.e. that our geoid may also be viewed as a level surface orthogonal to plumb lines. Moreover, we demonstrate that our definition reduces to the known Newtonian and post-Newtonian notions in the appropriate limits. As an illustration, we determine the isochronometric surfaces for rotating observers in axisymmetric static and axisymmetric stationary solutions to Einstein's vacuum field equation, with the Schwarzschild metric, the Erez-Rosen metric, the q-metric and the Kerr metric as particular examples. 
91.10.-v, 04.20.-q, 91.10.By
Definition of the relativistic geoid in terms of isochronometric surfaces Dennis Philipp, Volker Perlick, Dirk Puetzfeld, Eva Hackmann, and Claus Lämmerzahl
======================================================================================
§ INTRODUCTION One of the fundamental tasks of geodesy is to determine the Earth's geoid from gravity field measurements. Within a Newtonian framework, the definition of the geoid combines the Newtonian gravitational potential and the potential related to centrifugal forces that act on the rotating Earth. Therefore, the gradient of the total potential describes the free fall of particles in the corotating frame. From acceleration measurements, and the knowledge of the Earth's state of rotation, one can deduce the pure Newtonian potential. Afterward, via geodetic modeling schemes, information about the change of mass distributions and mass transport can be obtained. These temporal variations and long time trends are usually translated into water height equivalent mass changes on the Earth's surface for visualization. The geoid itself is also commonly used as a reference surface for height measurements <cit.>.In recent years, the accuracy of measurements of the gravitational field has improved considerably, and it is expected to improve even more in the near future. For example, such an improvement is expected from the upcoming geodetic space mission GRACE-FO, which consists of two spacecraft in a polar orbit around the Earth. The influence of the varying gravitational field along the orbit causes a variation in the separation of the two satellites. With the onboard Laser Ranging Interferometer (LRI), it is expected that such variations can be measured to within an accuracy of 10 nm <cit.>. Another important improvement is expected from the use of clocks in the context of chronometric geodesy. 
The basic idea is to surround the Earth with a network of clocks and to measure their mutual redshifts (or their redshifts with respect to a master clock). As clocks now approach a stability of 10^-18 <cit.>, it will soon be possible to measure gravitational redshifts that correspond to height differences of about 1cm. Both examples show that for a correct evaluation of present or near-future measurements of the gravitational field of the Earth it is mandatory to take general relativity into account. Of course, the geodetic community is well aware of this fact. The usual way to consider relativistic effects is by starting with the Newtonian theory and applying post-Newtonian (PN) corrections. In particular, the notion of the geoid was already discussed in such a PN setting in 1988 by Soffel et al. <cit.>. They defined a so-called a-geoid, which is based on acceleration measurements, and a so-called u-geoid which is based on using clocks. The authors showed that, within their setting, the two definitions are equivalent. For a more recent discussion of the Earth's geoid in terms of PN calculations, we refer to the work by Kopeikin et al. <cit.>. Although the PN approach is certainly sufficient for calculating all relevant effects with the desired accuracy in the vicinity of the Earth, from a methodological point of view, it is more satisfactory to start out from a fully relativistic setting and then to apply approximations where appropriate. This makes it necessary to provide fully relativistic definitions of all the basic concepts, in particular of the Earth's geoid.It is the purpose of this paper to present and discuss such a fully relativistic definition of the geoid. As we allow the gravitational field to be arbitrarily strong, our definition applies not only to the Earth and to other planets but also to compact objects such as neutron stars. For lack of a better word, we always speak of the “geoid,” for all kinds of gravitating bodies. 
Our definition is operational, using clocks as measuring devices. That is to say, in the terminology of the above-mentioned paper by Soffel et al., we define a fully relativistic u-geoid. However, we also discuss the notion of an a-geoid and we show that, also in the relativistic theory without approximations, the two notions are equivalent. We believe that high-precision geodesy will be mainly based on the use of clocks in the future; therefore, we consider the u-geoid as the primary notion and the fact that it coincides with the a-geoid as convenient but of secondary importance only. Our definition assumes a central body that rotates rigidly with constant angular velocity, where we have to recall that in general relativity a “rigid motion” is defined by vanishing shear and vanishing expansion for a timelike congruence of worldlines. (This is often called “Born rigidity.”) Of course, the motion of the Earth (or of neutron stars) is not perfectly rigid. However, rigidity may be viewed as a reasonable first approximation, and the effect of deformations may be considered in terms of small perturbations afterward. Our definition is based on the mathematical fact that the gravitational field of a body that rotates rigidly with constant angular velocity admits a time-independent redshift potential. We define the geoid as a surface of constant redshift potential, which is also called an isochronometric surface. The equivalence of our (u-)geoid with an appropriately defined a-geoid follows from the fact that the redshift potential is also an acceleration potential.As we will outline below, our definition of a relativistic geoid may be viewed as a translation into mathematical language of a definition that was given, just in words, already in 1985 by Bjerhammar <cit.>. More recently, inspired by Bjerhammar's wording, Kopeikin et al. <cit.> discussed a relativistic notion of the u-geoid assuming a particular fluid model for the Earth. Also, Oltean et al. 
<cit.> gave another fully relativistic definition of the geoid, which is mathematically quite satisfactory. However, we believe that our definition is more operational. A major difference is in the fact that, in the above-mentioned terminology, Oltean et al. defined an a-geoid. In contrast to our work, Bjerhammar's, and Kopeikin's, they do not make any reference to the use of clocks. We see the advantage of our framework in the exploration of the use of clocks and their description in terms of an isometric timelike congruence. We ask for the redshift of any pair of clocks within such a congruence and use the redshift potential as the basis for the definition of the relativistic geoid.For a general review of relativistic geodesy and related problems, see, e.g. Refs. <cit.> and <cit.>. Reference <cit.> contains a comprehensive summary of theoretical methods in relativistic gravimetry, chronometric geodesy, and related fields as well as applications to a parametrized post-Newtonian metric. Our notational conventions and a list of symbols can be found in Appendix <ref>.§ NONRELATIVISTIC GEOIDThe field equation that Newtonian gravity is based upon is the Poisson equationΔ U = 4π G ρ,where U is the Newtonian gravitational potential, G is Newton's gravitational constant, and ρ is the mass density of the gravitating source. In the region outside the source, i.e. in vacuum, the field equation reduces to the Laplace equation Δ U = 0. On the rotating Earth, the centrifugal effects give an additional contribution to the acceleration of a freely falling particle that is dropped from rest. This total acceleration can be derived from the potentialW = U + V = U - (1/2) Ω^2 d_z^2 .Here, V is the centrifugal potential, Ω is the angular velocity of the Earth, and d_z is the distance to the rotation axis, which is defined as the z-axis. 
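A minimal numerical sketch of this level-surface construction (a point-mass potential U = -GM/r with Earth-like constants, not a realistic geoid model): fixing the level surface W = W_0 through a chosen polar radius and solving for its equatorial radius exhibits the centrifugal bulge.

```python
import numpy as np
from scipy.optimize import brentq

# Toy model: point-mass potential U = -GM/r plus the centrifugal term.
GM = 3.986004418e14      # m^3/s^2, Earth's gravitational parameter
Omega = 7.2921150e-5     # rad/s, Earth's angular velocity
R_pole = 6.3567523e6     # m, polar radius fixing the level surface W = W_0

def W(r, theta):
    """Total potential W = U + V at radius r and colatitude theta."""
    return -GM / r - 0.5 * Omega**2 * (r * np.sin(theta))**2

W0 = W(R_pole, 0.0)      # on the rotation axis the centrifugal term vanishes

# Radius of the same level surface at the equator (theta = pi/2)
R_eq = brentq(lambda r: W(r, np.pi / 2) - W0, R_pole, 1.1 * R_pole)
bulge = R_eq - R_pole
# About 11 km here; the real bulge (about 21 km) is larger because the
# point-mass model ignores the oblate mass redistribution of the Earth itself.
print(f"equatorial bulge ≈ {bulge / 1e3:.1f} km")
```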
Whereas the attractive gravitational potential is a harmonic function in empty space, the centrifugal part is not.The shape of the Earth as well as its gravity field shows an enormous complexity. The idea of using an equipotential surface for defining an idealized “mathematical figure of the Earth” was brought forward by C. F. Gauss in 1828. The name geoid was coined by J. F. Listing in 1873. In modern terminology, here quoted from the U.S. National Geodetic Survey <cit.>, the geoid is defined as “the equipotential surface of the Earth's gravity field which best fits, in a least squares sense, global mean sea level.” Here, the term “equipotential surface” refers to the potential W in Eq. (<ref>). The question of which equipotential surface is chosen as the geoid is largely a matter of convention; for the Earth, it is convenient to choose a best fit to the sea level, while for celestial bodies without a water surface, such as Mars or the Moon, one could choose a best fit to the surface. In a strict sense, the geoid is not time independent because the Earth undergoes various kinds of deformations and its angular velocity is not strictly constant. However, all temporal variabilities may be treated as perturbations of a time-independent geoid. For having such a time-independent geoid, one makes the following idealizing assumptions: (A1) The Earth is in rigid motion. (A2) The Earth rotates with constant angular velocity about a fixed rotation axis. (A3) There are no external forces acting on the Earth.Note that assumption (A3) also excludes time-independent deformations caused by other gravitating bodies such as the so-called “permanent tides;” see, e.g. Ref. <cit.>. Just as the time-dependent variations mentioned above, they may be considered as perturbations at a later stage. 
Physical effects that must be treated in that way include, among others, the intrinsic time dependence of the mass multipoles, tidal effects, anelastic deformations, friction, ocean loading, atmospheric effects, mass variations in the hydrosphere and cryosphere, and postglacial mass variations.In geodesy, different notions of the geoid are commonly used. See, e.g. the standard textbook on geodesy <cit.> for the definitions of the mean geoid, the non-tidal geoid, and the zero-geoid. In this work, since we exclude the influence of external forces by assumption (A3), we refer to the concept of the non-tidal geoid. The assumptions (A1), (A2), and (A3) guarantee the existence of the time-independent potential W as given in Eq. (<ref>); the geoid is then defined as the time-independent surfaceW = W_0 ,with the constant W_0 chosen by an appropriate convention, as indicated above. By definition, the geoid is perpendicular to the acceleration ∇ W = ∇ U + ∇ V .The magnitude |∇ W| is called gravity in the geodetic community. The gravitational part of the potential is usually expanded into spherical harmonics, cf., e.g. Refs. <cit.>,U = -(GM/r) ∑_l=0^∞ ∑_m=0^l (R_E/r)^l P_lm(cosϑ) [ C_lm cos(m φ) + S_lm sin(m φ) ] .An additional assumption of axial symmetry reduces the decomposition (<ref>) toU = -G ∑_l=0^∞ N_l P_l(cosϑ)/r^(l+1) .Here, M is the mass of the Earth, R_E is some reference radius (e.g. the equatorial radius of the Earth), (r, ϑ, φ) are geocentric spherical coordinates, P_l (P_lm) are the (associated) Legendre polynomials, and C_lm, S_lm, N_l are the multipole coefficients. In geodesy, Eq. (<ref>) is often rewritten asU = -(GM/r) ∑_l=0^∞ (R_E/r)^l J_l P_l(cosϑ) ,where the relation between the dimensionless quantities J_l and the multipole moments N_l is given by N_l = J_l R_E^l M.The multipole coefficients C_lm, S_lm (or N_l in an axisymmetric model) can be determined by different measurements. 
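The truncated axisymmetric expansion is straightforward to evaluate numerically. A sketch (sign conventions for J_l differ across the literature; here the convention of the displayed equation is assumed, and the J_2 value is merely an illustrative Earth-like oblateness coefficient):

```python
import numpy as np
from scipy.special import eval_legendre

GM = 3.986004418e14   # m^3/s^2
R_E = 6.378137e6      # m, equatorial reference radius
# Coefficients in the convention U = -(GM/r) Σ_l (R_E/r)^l J_l P_l(cos ϑ);
# J_0 = 1 is the monopole, the J_2 value is illustrative (Earth-like oblateness).
J = {0: 1.0, 2: -1.0826e-3}

def U(r, theta):
    """Truncated axisymmetric multipole expansion of the potential."""
    x = np.cos(theta)
    return -(GM / r) * sum((R_E / r)**l * J_l * eval_legendre(l, x)
                           for l, J_l in J.items())

# On the sphere r = R_E the J_2 term makes the potential slightly less
# negative at the pole (ϑ = 0) than at the equator (ϑ = π/2)
U_pole, U_eq = U(R_E, 0.0), U(R_E, np.pi / 2)
print(U_pole, U_eq)
```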
Among others, satellite missions such as GOCE and GRACE as well as ground-based gravimetry and leveling observations on the surface of the Earth contribute to the knowledge of the gravitational field and the derivation of precise models of the geoid <cit.>. Modern space missions use laser ranging (LAGEOS), laser interferometry (GRACE-FO), and GPS tracking for providing such precise models.We end this section by rewriting the three assumptions (A1), (A2), and (A3), which guarantee the existence of a time-independent geoid, in a way that facilitates comparison with the relativistic version to be discussed below. We start out from the well-known transformation formula from an inertial system Σ to a reference system Σ ' attached to a rigidly moving body,x⃗ = x⃗_0 (t) + R (t)x⃗ '.Here, x⃗_0 (t) is the position vector in Σ of the center of mass of the central body and R (t) is an orthogonal matrix that describes the momentary rotation of the central body about an axis through its center of mass. The orthogonality condition R (t)^-1 = R (t)^T implies that the matrixω (t) = Ṙ(t)R(t)^-1is antisymmetric. From Eq. (<ref>), we find thatv⃗ = ẋ⃗̇ = ẋ⃗̇_0 + ω(x⃗ - x⃗_0) ,where the dot means a derivative with respect to t, keeping x⃗' fixed. Successive differentiation results ina⃗ = v̇⃗̇ = ẍ⃗̈_0 + ω̇(x⃗ - x⃗_0) + ω(v⃗ - ẋ⃗̇_0) ,ȧ⃗̇ = ⃛x⃗_0 + ω̈( x⃗ - x⃗_0 ) + 2ω̇( v⃗ - ẋ⃗̇_0 ) + ω( a⃗ - ẍ⃗̈_0 ) .We will now verify that the three assumptions (A1), (A2), and (A3) imply the following: (A1')The velocity gradient ∇⊗v⃗ is antisymmetric.(A2')ω̇ = 0.(A3')ȧ⃗̇ = ω a⃗.Clearly, from Eq. (<ref>), we read that the assumption of rigid motion implies (A1'). Moreover, (A2) obviously requires (A2'). Finally, (A3) implies that ẍ⃗̈_0 (t) = 0⃗ (which means that we may choose the inertial system such that x⃗_0= 0⃗); this result inserted into (<ref>), together with (A2'), gives indeed (A3'). 
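The antisymmetry of ω = Ṙ R^{-1}, which underlies the conditions (A1')-(A3'), is easy to verify numerically. A sketch for a hypothetical rigid rotation about the z-axis (the rate Ω = 0.3 rad/s is arbitrary), with Ṙ obtained by a central finite difference:

```python
import numpy as np

def Rz(t, Omega=0.3):
    """Rotation about the z-axis by the angle Ω t (hypothetical rigid rotation)."""
    c, s = np.cos(Omega * t), np.sin(Omega * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

t, h = 1.7, 1e-6
Rdot = (Rz(t + h) - Rz(t - h)) / (2 * h)    # central finite difference for Ṙ
omega = Rdot @ Rz(t).T                       # ω = Ṙ R^{-1} = Ṙ R^T

# Antisymmetry ω^T = -ω follows from differentiating R R^T = I
assert np.allclose(omega + omega.T, 0.0, atol=1e-8)
print(omega)
```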
The three conditions (A1'), (A2'), and (A3'), which are necessary for defining a time-independent geoid in the Newtonian theory, have natural analogs in the relativistic theory as we will demonstrate below.§ RELATIVISTIC GEOID Since clocks are the most precise measurement devices that modern technology offers, a relativistic definition of the geoid that is based on time and frequency measurements might be most convenient and operationally realizable with high accuracy. In one of the first articles on a relativistic treatment of geodetic concepts Bjerhammar <cit.>, see also Ref. <cit.>, proposed the following definition: The relativistic geoid is the surface nearest to mean sea level on which precise clocks run with the same speed.§.§ Redshift potential If one wants to translate Bjerhammar's definition into the language of mathematics, one has to specify what “precise clocks” are and what is meant by saying that clocks “run at the same speed”. Presupposing the formalism of general relativity, without approximations, we suggest the following: “precise clocks” are standard clocks, i.e. clocks that measure proper time along their respective worldlines. The notion of standard clocks is mathematically well defined in the formalism of general relativity by the condition that for a worldline parametrized by proper time the tangent vector is normalized; moreover, standard clocks can be equivalently characterized by an operational definition with the help of light rays and freely falling particles, using the notions of radar time and radar distance; see Perlick <cit.>. When comparing predictions from general relativity with observations one always assumes that atomic clocks are standard clocks. This hypothesis is in agreement with all experiments to date.Knowing what is meant by “precise clocks,” we still have to explain what we mean by saying that two clocks “run at the same speed”. 
For comparing two clocks, it is obviously necessary to send signals from one clock to the other. In a general relativistic setting, it is natural to use light signals which, in the mathematical formalism, are given by lightlike geodesics. This gives rise to the following well-known definition of the general-relativistic redshift: let γ and γ̃ be the worldlines of two standard clocks that measure proper times τ and τ̃, respectively. Assume that a light ray λ is emitted at γ(τ) and received at γ̃(τ̃) while a second light ray is emitted at γ(τ + Δτ) and received at γ̃(τ̃ + Δτ̃), see Fig. <ref>. One defines the redshift z by z+1 = ν/ν̃ = dτ̃/dτ = lim_{Δτ → 0} Δτ̃/Δτ ,where ν and ν̃ are the frequencies measured by the emitter γ and by the receiver γ̃, respectively. In general relativity there is a universal formula for the redshift of standard clocks <cit.>,z+1 = ν/ν̃ = ( g_μν (dλ^μ/ds)(dγ^ν/dτ) )|_γ(τ) / ( g_ρσ (dλ^ρ/ds)(dγ̃^σ/dτ̃) )|_γ̃(τ̃) .Here, s is an affine parameter for the lightlike geodesic λ. A simple derivation of the redshift formula was given by Brill <cit.>; this derivation can also be found in the book by Straumann <cit.>. We are now ready to explain how we interpret the statement that γ and γ̃ run at the same speed: it is supposed to mean that z=0.In this interpretation, Bjerhammar's definition requires pairwise vanishing redshift for an entire family of clocks. Therefore, we now consider a congruence of worldlines and we ask for the redshift of any pair of worldlines in this congruence. The congruence is defined by a four-velocity field u, which is normalized according to g_μνu^μ u^ν=-c^2, i.e. such that its integral curves are parametrized by proper time. We say that ϕ is a redshift potential for u if log (z+1) = ϕ( γ̃ (τ̃ ) ) - ϕ( γ ( τ ) )for any two integral curves γ and γ̃ of u. According to Ref. <cit.>, ϕ is a redshift potential if and only if exp(ϕ)u =: ξ is a conformal Killing vector field of the spacetime. The redshift potential is time independent (i.e. 
constant along the integral curves of ξ) if and only if ξ is a Killing vector field. The integral curves of u are then called Killing observers. The existence of a time-independent redshift potential is, thus, guaranteed if and only if the spacetime is stationary. In this case, we may introduce coordinates (t, x^1,x^2,x^3) with ξ = ∂_t such that the metric readsg = e^2 ϕ (x) [ -(c dt + α_a(x) dx^a)^2 + α_ab(x) dx^a dx^b ] ,where the metric functions ϕ, α _a, and α _ab depend on x=(x^1,x^2,x^3) but not on t.The redshift potential ϕ (x) foliates the three-dimensional space into surfaces which we call isochronometric surfaces. According to Eq. (<ref>), any two standard clocks, mathematically described by integral curves of the vector field u = exp (- ϕ ) ξ, that are on the same isochronometric surface ϕ = ϕ_0 = constant show zero redshift with respect to each other. We are thus led to the conclusion that Bjerhammar's definition (with our interpretation of his wording) makes sense in any stationary spacetime, and that the geoid is an isochronometric surface.One might ask if the assumption of stationarity is really necessary for this definition to make sense. As a matter of fact, it can be shown that a four-velocity field u must be proportional to a Killing vector field if any two clocks on integral curves of u see each other with temporally constant redshift and if these integral curves are complete; see Theorem 10 in Ref. <cit.>. This demonstrates that, based on redshift measurements, a time-independent geoid can be defined only in the case of stationarity.We end this subsection by briefly discussing the notion of a redshift potential in the Newtonian limit. Given a stationary spacetime with a metric in the form above, the redshift potential ϕ is given by the equation c^2 e^2ϕ = -g_μνξ^μξ^ν = -g_tt.Clearly, the redshift between any two stationary standard clocks (i.e. 
standard clocks of which the worldlines are integral curves of the vector field u = exp(- ϕ ) ξ) isz+1 = ν/ν̃ = e^(ϕ|_γ̃ - ϕ|_γ) = e^(ϕ|_γ̃) / e^(ϕ|_γ) = √(-g_tt)|_γ̃ / √(-g_tt)|_γ .For the Newtonian limit of general relativity, we know that in a suitable coordinate system -g_tt→ c^2(1+2U/c^2); hence,e^ϕ≈ 1+U/c^2 .This demonstrates that in the Newtonian approximation the level sets of the redshift potential ϕ correspond to equipotential surfaces of the Newtonian gravitational potential U. In the same approximation, the redshift is determined by the potential difference between the emitter and receiver,ν/ν̃ ≈ 1 + (U_2 - U_1)/c^2 =: 1 + Δ U/c^2 .Near the surface of the Earth, such a potential difference corresponds to a height difference. From Eq. (<ref>), one concludes that the relative frequency change, i.e. the redshift, is about 10^-16 per meter near the Earth's surface. Hence, modern clocks with a stability in the 10^-18 regime can be used to measure height differences at the centimeter level.Fig. 2 shows a sketch of the level sets of the redshift potential and fibers connecting these surfaces. The redshifts measured using fibers I and II are identical, whereas the redshift measured using fiber III vanishes. §.§ Clock comparison through optical fibers The general redshift formula (<ref>) is valid only if the comparison between the two clocks is made with the help of freely propagating light rays, i.e. with the help of lightlike geodesics. We will now show that, by contrast, in the case of a stationary spacetime, the formula (<ref>) is valid whenever the comparison between the two clocks is made with signals that move at the speed of light, even if they are not freely propagating (i.e. nongeodesic). This has the important consequence that this formula may be used if the signals are transmitted through an optical fiber. We have to assume that the fiber is at rest with respect to the Killing observers, i.e. 
that it establishes a time-independent path in the coordinate representation (<ref>) of the metric. A signal that propagates along this fiber with the speed of light has to satisfy the conditiong_μνẋ^μẋ^ν = 0 ,where the dot denotes the derivative with respect to a curve parameter s. As the signal is future oriented, this is equivalent toc dt + α_a dx^a = √(α_ab dx^a dx^b) .As a consequence, the coordinate travel time Δ t := t_2 - t_1 = ∫_t_1^t_2 dt = (1/c) ∫_s_1^s_2 ( √(α_ab (dx^a/ds)(dx^b/ds)) - α_c (dx^c/ds) ) dsof the signal through the fiber is independent of the emission time since ∂_t α_a = 0 and ∂_t α_ab = 0. This implies that two signals that are emitted with a time difference Δ t will be received with the same time difference Δ t. Together with the fact that, for observers with four-velocity u = exp(-ϕ) ∂_t, proper time and coordinate time are related bydτ/dt = e^ϕ ;this shows that the redshift of signals sent through the fiber is z + 1 = ν/ν̃ = dτ̃/dτ = (dτ̃/dt)(dt/dτ) = e^(ϕ|_γ̃) / e^(ϕ|_γ) .Hence, the redshift potential also gives the correct frequency ratio ν / ν̃ for clock comparison by signal transmission through an arbitrarily shaped optical fiber, provided that the fiber is at rest with respect to the Killing observers.Using the framework of optical metrics, see for instance Ref. <cit.>, we can also consider fiber links with an index of refraction n in which the signal does not propagate with the vacuum speed of light as assumed above. Instead of Eq. (<ref>), the metric now readsg = e^2 ϕ (x) [ -n(x)^-2(c dt + α_a(x) dx^a)^2 + α_ab(x) dx^a dx^b ] .We again assume that the fiber is at rest with respect to the Killing observers, i.e. with respect to the emitter and the receiver of the signal. The redshift between the two ends of the fiber now results inz + 1 = ν/ν̃ = ( e^(ϕ|_γ̃) / e^(ϕ|_γ) ) ( n|_γ / n|_γ̃ ) ,such that, again, the redshift potential ϕ gives the correct result for frequency comparison if the index of refraction is constant. 
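As a concrete check of the coordinate-travel-time formula, consider a radial fiber in the Schwarzschild spacetime, one of the examples mentioned in the abstract. There α_a = 0 and the light-signal condition reduces to c dt = dr/(1 - r_s/r), which integrates to the well-known tortoise coordinate. A sketch with illustrative endpoints (Earth's Schwarzschild radius, surface to 400 km altitude):

```python
import numpy as np
from scipy.integrate import quad

# Coordinate travel time of a light signal along a radial "fiber" at rest
# w.r.t. the Killing observers in the Schwarzschild spacetime.
c = 299792458.0              # m/s
r_s = 8.87e-3                # m, Schwarzschild radius of the Earth (2GM/c^2)
r1, r2 = 6.371e6, 6.771e6    # m, illustrative fiber endpoints

# Numerical integral of dt = dr / (c (1 - r_s/r))
dt_num, _ = quad(lambda r: 1.0 / (c * (1.0 - r_s / r)), r1, r2)

# Closed form: c Δt = r + r_s ln((r - r_s)/r_s) between the endpoints
def tortoise(r):
    return r + r_s * np.log((r - r_s) / r_s)

dt_exact = (tortoise(r2) - tortoise(r1)) / c
assert np.isclose(dt_num, dt_exact, rtol=1e-9)
print(dt_num)  # slightly longer than the flat-spacetime value (r2 - r1)/c
```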
As can be seen from Eq. (<ref>), the vacuum redshift potential ϕ can also be deduced from redshift measurements using optical fibers when the position-dependent index of refraction of the fiber is known. §.§ Definition of the relativistic geoid Based on our deliberations in Sec. <ref>, we suggest the following definition of the relativistic geoid: The relativistic geoid is the level surface of the redshift potential ϕ that is closest to mean sea level.In the case of celestial bodies without a water surface, one has to single out one particular level surface of the redshift potential by some other convention. This definition of the relativistic geoid makes sense for any celestial body that is associated with a stationary spacetime, i.e. with a family of Killing observers. In the next section, we will show that the assumption of stationarity is tantamount to three conditions that are analogous to the three conditions (A1'), (A2'), and (A3'), which are necessary for defining a time-independent geoid in the Newtonian theory; recall Sec. <ref>.Our definition is operational in the sense that standard clocks and fiber links can be used to determine the relativistic geoid. A clock network may be built such that all clocks show pairwise zero redshift, and one of them is positioned at mean sea level. The spatial grid of clocks then determines the shape of the Earth's geoid. We emphasize that our definition of the geoid allows for arbitrarily strong gravitational fields. For weak fields, we may use the Newtonian limit for which the redshift potential can be expressed in terms of the Newtonian potential; see Sec. <ref>. In this limit, our definition of the geoid becomes the usual Newtonian one. At the PN level, our geoid reduces to the u-geoid of Soffel et al. <cit.>. Our definition of the geoid should be compared with the one by Oltean et al. <cit.>, which is also fully relativistic. 
A major difference is in the fact that we give an operational definition in terms of clocks that are connected by fiber links while their mathematical construction is not immediately related with an operational prescription. In particular, they do not make any reference to clocks. § GENERAL RELATIVISTIC MODEL OF THE SOLID EARTH Our definition of the geoid requires stationarity, i.e. the existence of a timelike Killing vector field. In this section, we will recall some known facts about timelike congruences. They will demonstrate that the stationarity assumption is equivalent to a relativistic version of the three conditions (A1'), (A2'), and (A3') we have discussed in Sec. <ref>. §.§ Rigid and isometric congruences We consider a timelike congruence of worldlines (see, e.g. Refs. <cit.>), i.e. a family of timelike curves which do not intersect and fill a certain region of the four-dimensional spacetime. The tangents to the worldlines are given by a timelike vector field u=u^μ∂_μ, which we assume to be normalized, g_μνu^μ u^ν=-c^2. We interpret u as the four-velocity field of a gravitating body. On the surface of the body, u may be interpreted as the four-velocity of observers with standard clocks that are attached to the surface. Moreover, we may extend u into the exterior region where it may be interpreted as the four-velocity of observers hovering above the surface, e.g. in satellites. We will characterize the case that u is proportional to a Killing vector field; in this case, the congruence is called isometric. 
The projection onto the local rest space of the congruence is given by the projection operatorP^μ_ν = δ^μ_ν + (1/c^2) u^μ u_ν .The acceleration a = a^μ∂_μ of the congruence is defined bya^μ := u̇^μ = u^ν D_ν u^μ .The acceleration vanishes along a particular integral curve of u if and only if this curve is a geodesic.As in nonrelativistic physics, a congruence can be characterized by the kinematic quantities rotation ω_μν, shear σ_μν, and expansion θ, ω_μν := P^ρ_μ P^σ_ν D_[σ u_ρ] = D_[ν u_μ] + (1/c^2) u̇_[μ u_ν] ,σ_μν := P^ρ_μ P^σ_ν D_(σ u_ρ) - (1/3) θ P_μν = D_(ν u_μ) + (1/c^2) u̇_(μ u_ν) - (1/3) θ P_μν ,θ := D_μ u^μ . The rotation is antisymmetric, while the shear is symmetric and traceless. The motion of neighboring worldlines with respect to a chosen worldline with tangent u is determined byD_ν u_μ = ω_μν + σ_μν + (1/3) θ P_μν - (1/c^2) u_ν a_μ . A congruence with vanishing expansion, θ = 0, is isochoric, i.e. the volume of a comoving spatial region does not change over time <cit.>. If the shear vanishes as well, σ_μν = 0, the congruence is called Born rigid. This is true if and only if the spatial distance between any two infinitesimally neighboring integral curves of u remains constant over time. In this case, Eq. (<ref>) reduces toD_ν u_μ = ω_μν - (1/c^2) u_ν a_μ .In analogy to the Newtonian condition (A1'), we require the congruence to be Born rigid, i.e.: (A1”)P^ρ_μ P^σ_ν D_(σ u_ρ) = 0. For defining the analogs of the Newtonian conditions (A2') and (A3'), we introduce the rotation four-vector ω^μ byω^μ := (1/2c) η^μνσλ u_ν ω_σλ = (1/c) η^μνσλ u_ν ∂_λ u_σ .As ω^μ u_μ = 0, the vector ω^μ is spacelike. If we write it in the form ω^μ = ω e^μ with e^μ e_μ = 1, the unit vector e^μ gives the direction of the momentary rotation axis, and the scalar ω gives the modulus of the momentary angular velocity. The Newtonian requirements (A2') and (A3') now translate into the following conditions:(A2”) P^μ_ν ω̇^ν = 0. (A3”) P^μ_ν ȧ^ν = ω^μ_ν a^ν. 
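The algebra behind this decomposition can be mirrored in a flat-space sketch: any velocity-gradient matrix splits uniquely into an antisymmetric rotation part, a symmetric trace-free shear part, and an expansion trace, in direct analogy to ω_μν, σ_μν, and θ (hypothetical numbers; no covariant derivatives involved).

```python
import numpy as np

# Flat-space analogue of the kinematic decomposition: split a velocity
# gradient L into rotation (antisymmetric), shear (symmetric, trace-free)
# and expansion (trace) parts.
rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))           # arbitrary velocity gradient ∇⊗v

theta = np.trace(L)                   # expansion
omega = 0.5 * (L - L.T)               # rotation
sigma = 0.5 * (L + L.T) - (theta / 3) * np.eye(3)   # shear

# The pieces have the stated symmetries and reassemble L exactly
assert np.allclose(omega, -omega.T)
assert np.allclose(sigma, sigma.T) and abs(np.trace(sigma)) < 1e-12
assert np.allclose(omega + sigma + (theta / 3) * np.eye(3), L)
print(theta)
```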
Condition (A2”) states that the unit vector e^μ is Fermi-Walker transported and that the scalar ω is constant along each worldline of the congruence; in other words, it states that the rotation axis and the angular velocity are time independent. Condition (A3”) states that the change of the acceleration along the congruence is only due to the rotation and that the acceleration vector always points to the same neighboring worldline.§.§ Acceleration potentialEhlers <cit.> has shown that for a rigid congruence the two requirements (A2”) and (A3”) together are equivalent toD_[ν a_μ] = 0 .The latter condition means that there exists a potential ϕ for the acceleration,a_μ = c^2 ∂_μϕ.This, in turn, is true for a rigid congruence if and only if u is proportional to a timelike Killing vector field ξ <cit.>, where the proportionality is given byξ = e^ϕ u .Clearly, ϕ is equal to the redshift potential considered above. We have now seen that at the same time it plays the role of an acceleration potential. Moreover, we have seen that stationarity is equivalent to the three conditions (A1”), (A2”), and (A3”). A congruence with these properties is called isometric. The existence of a time-independent redshift potential is thus based on assumptions that are quite analogous to the assumptions (A1'), (A2'), and (A3') we have discussed in the Newtonian theory. The Killing vector field ξ corresponds to a corotating family of observers. Note that ξ is defined and timelike on a cylindrical neighborhood of the body. This neighborhood extends to infinity for a nonrotating (isolated) body but for a rotating body it is finite. If extended outside of this neighborhood, the Killing vector field becomes spacelike. §.§ General relativistic geoid revisited We summarize our observations in the following way. We have seen that a natural generalization of the classical assumptions (A1'), (A2'), and (A3') requires the congruence associated with the Earth to be isometric, i.e. 
the spacetime to be stationary. The assumption of stationarity gives rise to a time-independent potential ϕ with two properties. First, ϕ is a redshift potential, which means that the surfaces ϕ = constant in 3-space are isochronometric. Second, ϕ is an acceleration potential, which means that the acceleration a^μ (which is a spatial vector field) is the gradient of the surfaces ϕ = constant in 3-space. Note that freely falling particles undergo the acceleration - a^μ relative to comoving observers. Therefore, the acceleration of freely falling bodies on the Earth, e.g. in falling corner-cube devices, is governed by the potential ϕ. By the same token, plumb lines are perpendicular to the surfaces ϕ = constant.As a consequence, we could rewrite our definition of the relativistic geoid, as it is given in Sec.<ref>, by replacing the words “redshift potential” with the words “acceleration potential.” The geoid may be determined by a family of Killing observers with standard clocks. Once a reference point defining the mean sea level has been chosen, the geoid may be realized either by clock comparison or by measuring the gravitational acceleration in falling corner cubes as shown by Eqs. (<ref>) and (<ref>). In this sense, one may say that also in the full relativistic theory the notions of the u-geoid and a-geoid are equivalent; it was already mentioned that a similar result was proven by Soffel et al. <cit.> in a PN setting. This fact is very convenient because it implies that the geoid may be determined with two independent types of measurements that complement each other. As the notions of redshift potential and acceleration potential coincide, we will speak just of the relativistic potential in the following.Our definition of the geoid is based on the assumption of stationarity. Of course, this is only an approximation. 
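Both roles of ϕ can be made concrete for static observers in the Schwarzschild spacetime (a numerical sketch with approximate Earth values): ϕ = (1/2) ln(1 - r_s/r) yields the redshift between two clocks via z + 1 = e^{Δϕ}, and its gradient reproduces the known proper acceleration GM/(r² √(1 - r_s/r)) of a static observer.

```python
import numpy as np

G, M, c = 6.67430e-11, 5.972e24, 299792458.0   # SI units, approximate Earth values
r_s = 2 * G * M / c**2                         # Schwarzschild radius, ≈ 8.9 mm

def phi(r):
    """Redshift/acceleration potential of static Schwarzschild observers."""
    return 0.5 * np.log1p(-r_s / r)            # log1p keeps precision for tiny r_s/r

r = 6.371e6                                    # m, roughly the Earth's surface

# Redshift between two static clocks one meter apart: z + 1 = e^{Δϕ}
z_per_meter = np.expm1(phi(r + 1.0) - phi(r))

# Acceleration potential: a_r = c^2 ∂_r ϕ (central difference), proper
# magnitude √(g^rr) a_r, compared with the closed form
h = 1.0e4
a_r = c**2 * (phi(r + h) - phi(r - h)) / (2 * h)
a_proper = np.sqrt(1.0 - r_s / r) * a_r
a_exact = G * M / (r**2 * np.sqrt(1.0 - r_s / r))

print(z_per_meter)   # ≈ 1.1e-16, the "redshift per meter" quoted earlier
print(a_proper)      # ≈ 9.8 m/s^2
assert abs(a_proper / a_exact - 1.0) < 1e-5
```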
Just as in the Newtonian theory, temporal variations may be taken into account by modifying the time-independent (rigid) geoid by time-dependent perturbations, i.e. by considering a nonstationary metric Σ_μν of the form Σ_μν = g_μν + h_μν where g_μν is stationary. In practical geodesy, the stationary part is defined as the mean value over a sufficiently long time interval. Thus, this part also contains the permanent tide effects from the external gravitational field of celestial bodies like the Moon or the Sun. For the stationary part g_μν, we may still use our definition of the geoid in terms of a relativistic potential ϕ. In this paper, we will not work out a theory for such time-dependent perturbations of the relativistic geoid. For examples of such effects, we refer to the list given in Sec. <ref>.

However, as our formalism also applies, e.g. to rapidly rotating neutron stars with “mountains” and other non-axisymmetric stationary objects, we should mention that our assumption of stationarity ignores the fact that an irregularly shaped rotating body emits gravitational radiation, so its angular velocity will actually not be constant over time. Of course, this is a small effect; for the Earth and other planets, it is completely negligible.

For rigid motion inside the gravitating body, the four-velocity field u and, consequently, the Killing vector field ξ are defined within the interior as well. The extension of equipotential surfaces (i.e. of the geoid) to regions inside the body is also well defined. An interior solution should be considered, and the corresponding isochronometric surfaces need to be calculated. The particular interior solution must be matched, at the surface, to the vacuum solution.
The level surface that defines the geoid by the condition of pairwise vanishing redshift for any two clocks on this particular surface will then be continuous but in general not differentiable.

In the following two sections, we consider axisymmetric static and axisymmetric stationary spacetimes, respectively, and we determine the isochronometric surfaces for various examples of such spacetimes. Of course, axisymmetric models are highly overidealized in view of applications to the Earth; see e.g. the analysis in Ref. <cit.>. However, we believe that these examples are instructive because they illustrate the general idea behind our definition and its applicability to compact objects. We emphasize that our general definition of the geoid does of course not assume axisymmetry or any other kind of spatial symmetry. However, the axisymmetric stationary case is mathematically distinguished by the fact that then we have two linearly independent Killing vector fields; one of them is timelike and hypersurface orthogonal near spatial infinity. This allows the use of asymptotically defined time-independent multipole moments; see below. The only other case where a Killing vector field exists that is timelike up to spatial infinity and hypersurface orthogonal (near spatial infinity) is the case of a static (i.e. nonrotating) gravitating body. In the exterior of an irregularly shaped rotating body, we have only one Killing vector field, which becomes spacelike at a certain distance from the rotation axis; in this case, the asymptotic definition of time-independent multipole moments is not applicable.

All our examples are vacuum solutions of Einstein's field equation. For modeling a gravitating body they have to be matched to an interior matter solution.
Correspondingly, the isochronometric surfaces we are calculating are valid only outside of the gravitating body.

§ AXISYMMETRIC STATIC SPACETIMES

§.§ Axisymmetric static solutions to Einstein's vacuum field equation

Any axisymmetric and static spacetime that satisfies Einstein's vacuum field equation is given by the Weyl metric <cit.>g_μν dx^μ dx^ν = - e^{2ψ} c^2 dt^2 + e^{-2ψ} ρ^2 dφ^2 + e^{-2ψ} e^{2γ} (dρ^2 + dz^2),where (t,ρ,z,φ) are Weyl's canonical coordinates. The metric functions ψ and γ depend only on the coordinates ρ and z. The coordinates t and φ are associated with the two Killing vector fields ∂_t and ∂_φ. Some important examples are the Schwarzschild metric, the Erez-Rosen metric <cit.>, and the q-metric <cit.> (Zipoy-Voorhees metric <cit.>). Using the metric (<ref>), the vacuum field equations reduce to, see e.g. Ref. <cit.>, Δψ = 0 , ∂_ργ - ρ (∂_ρψ + ∂_z ψ)(∂_ρψ - ∂_z ψ) = 0 , ∂_z γ - 2ρ ∂_ρψ ∂_z ψ = 0 . The metric function γ can be obtained by integration once the Laplace equation (<ref>) for ψ has been solved. The general solution for all static, axisymmetric, and asymptotically flat spacetimes is given by <cit.> ψ = ∑_{l=0}^∞ c_l P_l(cosΘ)/R^{l+1} , γ = ∑_{l,i=0}^∞ [ (i+1)(l+1)/(i+l+2) ] c_i c_l [ P_{l+1}(cosΘ) P_{i+1}(cosΘ) - P_l(cosΘ) P_i(cosΘ) ]/R^{l+i+2} , where R^2 = ρ^2 + z^2 and cosΘ = z/R. The P_l(cosΘ) are Legendre polynomials of degree l, and the c_l are constant expansion coefficients, sometimes called Weyl multipoles. The relativistic geoid is defined by the level sets of the time-independent redshift potential for observers that form an isometric congruence. Hence, their four-velocity field u is proportional to a timelike Killing vector field ξ as given by Eq. (<ref>). The relativistic potential ϕ is related to this Killing vector field by Eq. (<ref>). For the spacetime with line element (<ref>), we have two linearly independent Killing vector fields, ∂_t and ∂_φ.
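The harmonicity of ψ can be spot-checked numerically. The sketch below (not from the paper; the coefficients c_l are chosen arbitrarily) evaluates the axisymmetric flat-space Laplacian of a truncated Weyl expansion by finite differences and compares it with a deliberately non-harmonic control function.

```python
import math

def P(l, t):
    # Legendre polynomials via Bonnet's recursion
    p0, p1 = 1.0, t
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2*n + 1)*t*p1 - n*p0) / (n + 1)
    return p1

def psi(rho, z, c=(0.7, -0.3, 0.5, 0.2)):
    # truncated Weyl solution  psi = sum_l c_l P_l(z/R) / R^(l+1)
    R = math.hypot(rho, z)
    return sum(cl * P(l, z/R) / R**(l + 1) for l, cl in enumerate(c))

def cyl_laplacian(f, rho, z, h=1e-3):
    # axisymmetric Laplacian  d2f/drho2 + (1/rho) df/drho + d2f/dz2
    d2r = (f(rho + h, z) - 2*f(rho, z) + f(rho - h, z)) / h**2
    dr = (f(rho + h, z) - f(rho - h, z)) / (2*h)
    d2z = (f(rho, z + h) - 2*f(rho, z) + f(rho, z - h)) / h**2
    return d2r + dr/rho + d2z

res_harmonic = cyl_laplacian(psi, 2.0, 1.5)
# control: 1/R^2 is not harmonic (its 3D Laplacian is 2/R^4)
res_control = cyl_laplacian(lambda rho, z: 1.0/(rho**2 + z**2), 2.0, 1.5)
```

The residual for the truncated expansion is at the level of the finite-difference error, while the control function fails by many orders of magnitude.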
Note that any linear combination of these two Killing vector fields with constant coefficients is again a Killing vector field. We consider I) the nonrotating congruence with worldlines that are integral curves of the timelike Killing vector field ∂_t and II) a rotating congruence with worldlines that are integral curves of ∂_t + Ω ∂_φ, with some Ω∈ℝ. Note that the latter congruence is timelike only on a cylindrical domain about the symmetry axis; on the boundary of this domain, it becomes lightlike, and farther away from the axis, it is spacelike. The larger Ω is, the smaller the domain on which the congruence is timelike. Here, Ω has the dimension of an inverse time, i.e. the dimension of a frequency.

The first congruence, (I), is associated with observers whose spatial Weyl coordinates (ρ,φ,z) remain fixed; we can think of them as being attached to the surface of a “nonrotating Earth”. The second congruence, (II), can be associated with observers attached to the surface of a “rotating Earth” where Ω is the angular velocity. As the metric is static, the gravitomagnetic field of the Earth is not taken into account. In the following, all quantities related to the first congruence, (I), will be denoted by the subscript (·)_stat, while all quantities related to the second congruence, (II), will be denoted by the subscript (·)_rot. We obtain, respectively, c^2 e^{2ϕ_stat} = -g(∂_t, ∂_t) = c^2 e^{2ψ} , c^2 e^{2ϕ_rot} = -g(∂_t + Ω ∂_φ, ∂_t + Ω ∂_φ) = c^2 e^{2ψ} - Ω^2 ρ^2 e^{-2ψ} . The isochronometric surfaces for the respective congruence are defined by the level sets of ϕ. Therefore, we obtain e^{2ϕ_stat} = constant ⇔ e^{2ψ} = constant , e^{2ϕ_rot} = constant ⇔ e^{2ψ} - (Ω^2/c^2) ρ^2 e^{-2ψ} = constant . The relativistic geoid is one of these isochronometric surfaces, where the constant has to be chosen by a convention.

Inserting the expansion (<ref>) gives the geoid in terms of the expansion coefficients c_l.
However, this representation gives little insight into the geometry and the physical situation at hand: already for the simplest member of the Weyl class, the Schwarzschild spacetime, the coefficients must be chosen in a complicated way, such that the series (<ref>) converges toψ = (1/2) log( (r_+ + r_- - 2m)/(r_+ + r_- + 2m) ) , r_±^2 := ρ^2 + (z ± m)^2 .The Schwarzschild metric in its usual form follows after the coordinate transformationr/m - 1 := (r_+ + r_-)/(2m) , cosϑ := (r_+ - r_-)/(2m) .To obtain more physical insight, we introduce spheroidal coordinates (x,y) by the coordinate transformation <cit.>ρ^2 =: m^2 (x^2-1)(1-y^2) , z =: m x y ,which is equivalent tox := r/m - 1 , y := cosϑ.This yields the Weyl metric (<ref>) in spheroidal coordinates,g_μν dx^μ dx^ν = -e^{2ψ} c^2 dt^2 + m^2 e^{-2ψ} (x^2-1)(1-y^2) dφ^2 + m^2 e^{-2ψ} e^{2γ} (x^2-y^2) ( dx^2/(x^2-1) + dy^2/(1-y^2) ) .In these coordinates the relativistic potentials are, respectively,e^{2ϕ_stat} = e^{2ψ} , e^{2ϕ_rot} = e^{2ψ} - (Ω^2/c^2) m^2 e^{-2ψ} (x^2-1)(1-y^2) . The isochronometric surfaces and, thus, the geoid in these coordinates are, again, described by the respective level sets.

The vacuum field equation in the new coordinates can be found, e.g., in Refs. <cit.>. In Ref. <cit.>, Quevedo has shown that the general asymptotically flat solution, with elementary flatness on the axis, in these coordinates is given by ψ = ∑_{l=0}^∞ (-1)^{l+1} q_l Q_l(x) P_l(y) ,where the Q_l are the Legendre functions of the second kind as given in Ref. <cit.>. The coefficients q_l can be related to the c_l in Eq. (<ref>). Moreover, we will discuss in the next section how the q_l are related to the relativistic multipole moments of the spacetime and, at the same time, to multipole moments of the Newtonian potential in the weak field limit. For the relativistic moments, we use those defined by Geroch and Hansen <cit.>.

In the representation (<ref>), the Schwarzschild solution is obtained by simply choosing q_0 = 1 and q_l = 0 for all l > 0; see Section <ref> below.
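The two coordinate transformations above can be checked against each other numerically. The following round-trip sketch (illustrative only, with an arbitrarily chosen mass parameter) maps spheroidal coordinates to Weyl coordinates via ρ^2 = m^2(x^2-1)(1-y^2), z = mxy, and back via x = (r_+ + r_-)/(2m), y = (r_+ - r_-)/(2m).

```python
import math

def to_weyl(x, y, m=1.7):
    # spheroidal -> Weyl canonical coordinates
    rho = m * math.sqrt((x**2 - 1.0) * (1.0 - y**2))
    z = m * x * y
    return rho, z

def from_weyl(rho, z, m=1.7):
    # inverse map via  x = (r_+ + r_-)/(2m),  y = (r_+ - r_-)/(2m),
    # with r_+- = sqrt(rho^2 + (z +- m)^2)
    rp = math.hypot(rho, z + m)
    rm = math.hypot(rho, z - m)
    return (rp + rm) / (2*m), (rp - rm) / (2*m)

x0, y0 = 2.3, 0.4
x1, y1 = from_weyl(*to_weyl(x0, y0))
```

The round trip reproduces (x,y) to machine precision, confirming that the two transformation formulas are mutually consistent.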
For this choice of q_0, the parameter m in (<ref>) is the usual mass parameter of the Schwarzschild solution, related to the Schwarzschild radius r_s = 2m.

§.§ Newtonian limit

Ehlers <cit.> gave a definition of the Newtonian limit that also yields a definition of the Newtonian multipole moments. For a Weyl spacetime, one has to assume that the potential ψ depends on the parameter λ = 1/c^2. The Newtonian potential is then given by the limitU(ρ,z) = lim_{λ→0} (1/λ) ψ(ρ,z,λ) .Keeping the canonical coordinates ρ and z fixed during the limit procedure is motivated by the fact that, with respect to these cylindrical coordinates, ψ satisfies the Laplace equation, which is supposed to hold also in the limit for the Newtonian potential U.

It is then inevitable to assume that the coordinates (x,y) depend on λ. This becomes clear if we consider the Schwarzschild case by choosing q_0 = 1 and q_l = 0 for all l > 0. We see that the Newtonian limit leads to the potentialU = -GM/R , R^2 = ρ^2 + z^2 ,if the parameter m depends on λ according to m = GM/c^2 = GMλ,where G and M are, of course, independent of λ. Inserting Eq. (<ref>) into Eq. (<ref>) clarifies how x and y depend on λ. Performing the limit (<ref>) of the expansion (<ref>) as was done in Ref. <cit.>,[We perform the calculation here again, because in Ref. <cit.>, there are some minor errors in the limit procedure.] we have to calculateU = lim_{λ→0} (1/λ) ∑_{l=0}^∞ (-1)^{l+1} q_l Q_l( (r_+ + r_-)/(2λGM) ) P_l( (r_+ - r_-)/(2λGM) ) .For the coordinates x and y, expressed in terms of ρ and z, we calculate the limits lim_{λ→0} x = lim_{λ→0} (r_+ + r_-)/(2λGM) = ∞ , lim_{λ→0} y = lim_{λ→0} (r_+ - r_-)/(2λGM) = z/√(ρ^2+z^2) .
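Ehlers' limit can be probed numerically for the Schwarzschild case: with m = GMλ and the Weyl coordinates held fixed, ψ/λ should approach -GM/R as λ → 0. The following sketch (not from the paper; GM is set to 1) does exactly that.

```python
import math

def psi_schw(rho, z, lam, GM=1.0):
    # Schwarzschild psi in Weyl coordinates, with m = GM*lambda
    m = GM * lam
    rp = math.hypot(rho, z + m)
    rm = math.hypot(rho, z - m)
    return 0.5 * math.log((rp + rm - 2*m) / (rp + rm + 2*m))

rho, z = 3.0, 4.0                 # so R = 5
U_exact = -1.0 / 5.0              # -GM/R with GM = 1
U_6 = psi_schw(rho, z, 1e-6) / 1e-6
U_8 = psi_schw(rho, z, 1e-8) / 1e-8
```

For small λ the quotient ψ/λ agrees with the Newtonian point-mass potential to high accuracy.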
Using the fact that the Legendre polynomials are continuous, we obtainlim_{λ→0} P_l(y) = P_l( lim_{λ→0} y ) = P_l( z/√(ρ^2+z^2) ) .As the limit λ → 0 is equivalent to x → ∞, we expand Q_l(x) in powers of 1/x <cit.>,Q_l(x) = Q_l( (r_+ + r_-)/(2λGM) ) = ∑_{k=0}^∞ b^l_{l+2k+1} ( 2λGM/(r_+ + r_-) )^{l+2k+1} ,where b^l_{l+2k+1} = [ (l+2k-1)(l+2k) / (2k(2l+2k+1)) ] b^l_{l+2k-1} , b^l_{l+1} = l!/(2l+1)!! .The limit of each summand of Eq. (<ref>) exists and is finite. Absolute convergence allows us to interchange the sum and the limit <cit.>. We insert the series expansion for Q_l(x) and calculate the remaining limitU = ∑_{l=0}^∞ (-1)^{l+1} P_l( z/√(ρ^2+z^2) ) lim_{λ→0} (1/λ) q_l Q_l( (r_+ + r_-)/(2λGM) ) = ∑_{l=0}^∞ (-1)^{l+1} P_l( z/√(ρ^2+z^2) ) × lim_{λ→0} (1/λ) q_l ∑_{k=0}^∞ b^l_{l+2k+1} ( 2λGM/(r_+ + r_-) )^{l+2k+1} .This limit exists and is nonzero if the dimensionless coefficients q_l are of the form <cit.>q_l = (G/c^2)^{-l} q̅_lwith new coefficients q̅_l that are independent of λ and have dimension [q̅_l] = (m/kg)^l. Then, only the k=0 term in (<ref>) gives a nonzero limit. We finally obtain the Newtonian potentialU = ∑_{l=0}^∞ (-1)^{l+1} b^l_{l+1} P_l( z/√(ρ^2+z^2) ) × lim_{λ→0} q_l λ^l ( 2GM/(r_+ + r_-) )^{l+1} = G ∑_{l=0}^∞ (-1)^{l+1} b^l_{l+1} q̅_l M^{l+1} P_l( z/√(ρ^2+z^2) ) × lim_{λ→0} ( 2/(r_+ + r_-) )^{l+1} = -G ∑_{l=0}^∞ (-1)^l [ l!/(2l+1)!! ] q̅_l M^{l+1} P_l(cosΘ)/R^{l+1} ,where cosΘ = z/√(ρ^2+z^2) , R^2 = ρ^2 + z^2 .

§.§ Multipole moments

If we compare Eq. (<ref>) with Eq. (<ref>) for the Newtonian multipole moments N_l in the axisymmetric case, we see that N_l = (-1)^l [ l!/(2l+1)!! ] q̅_l M^{l+1} .Choosing q_0 = q̅_0 = 1, we identify M as the total mass of the source (in kg) that gives the monopole moment N_0 = M. A dipole moment can always be made to vanish by transforming the origin of the coordinate system into the center of mass. The quadrupole moment is given by N_2 = (2/15) q̅_2 M^3.
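The recursion for the expansion coefficients b^l_{l+2k+1} can be validated against the closed-form expressions for the lowest Legendre functions of the second kind, Q_0(x) = (1/2)log((x+1)/(x-1)) and Q_1(x) = (x/2)log((x+1)/(x-1)) - 1 (these two closed forms are standard results and are not quoted in the text). A minimal sketch:

```python
import math

def Q_closed(l, x):
    # closed-form Legendre functions of the second kind, l = 0, 1
    t = 0.5 * math.log((x + 1.0) / (x - 1.0))
    return t if l == 0 else x * t - 1.0

def Q_series(l, x, kmax=40):
    # Q_l(x) = sum_k b^l_{l+2k+1} x^{-(l+2k+1)} with the recursion from the text
    b = math.factorial(l) / math.prod(range(1, 2*l + 2, 2))  # l!/(2l+1)!!
    s = b * x**(-(l + 1))
    for k in range(1, kmax):
        b *= (l + 2*k - 1) * (l + 2*k) / (2*k * (2*l + 2*k + 1))
        s += b * x**(-(l + 2*k + 1))
    return s

x = 2.0
err0 = abs(Q_series(0, x) - Q_closed(0, x))
err1 = abs(Q_series(1, x) - Q_closed(1, x))
```

For x = 2 the series converges geometrically (like x^{-2k}), and 40 terms already reproduce the closed forms to machine precision.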
The lth-order multipole moment has the dimension [N_l] = kg m^l such that for each moment N_l we get [N_l/N_0] = m^l.

From this identification, we deduce that the parameters q̅_l, which are independent of λ, determine the Newtonian moments of the gravitating source whose exterior we describe by the metric (<ref>). On the other hand, the parameters q̅_l also determine the relativistic Geroch-Hansen moments R_l uniquely. The latter, which depend of course on λ = c^{-2}, can be written in the formR_l = N_l + C_l ,as a sum of the Newtonian moments and relativistic corrections C_l, where the C_l can be calculated exactly, i.e. with no approximation involved. Following Quevedo <cit.>, we obtain C_0 = C_1 = C_2 = 0 , C_3 = -(2/5) m^2 N_1 , C_4 = -(2/7) m^2 N_2 - (6/7) m (G/c^2) N_1^2 . In general, the correction terms C_l are of the form C_l = C_l(N_{l-2}, N_{l-3}, …, N_0). The octupole correction C_3 can be made to vanish by transforming away the Newtonian dipole. Then, a difference between the relativistic and the Newtonian multipole moments occurs for the first time at the 16-pole moment R_4, which is a surprising result that was first derived in Ref. <cit.>.

§.§ Examples

In this section, we apply our definition of the relativistic geoid to particular axisymmetric and static vacuum solutions to Einstein's field equation. We choose three examples, all of which are asymptotically flat: the Schwarzschild metric, the Erez-Rosen metric, and the q-metric (Zipoy-Voorhees metric).

§.§.§ Monopole: Schwarzschild metric

Choosing q_0 = 1, q_l = 0 for all l > 0 in the expansion (<ref>), we obtain a spacetime which possesses only a monopole moment R_0 = M, and the metric functions becomeψ = (1/2) log( (x-1)/(x+1) ) , γ = (1/2) log( (x^2-1)/(x^2-y^2) ) .The relativistic potential ϕ in this spacetime is given by Eqs. (<ref>) and (<ref>) for the two different congruences, respectively. We obtain e^{2ϕ_stat} = (x-1)/(x+1) , e^{2ϕ_rot} = (x-1)/(x+1) - (Ω^2/c^2) m^2 (x+1)^2 (1-y^2) .
The metric (<ref>) then yields the well-known Schwarzschild metric after the coordinate transformation x = r/m - 1 and y = cosϑ:g = -(1 - 2m/r) c^2 dt^2 + (1 - 2m/r)^{-1} dr^2 + r^2 dϑ^2 + r^2 sin^2ϑ dφ^2 .Hence, the relativistic potential for static and rotating observers becomes, respectively, e^{2ϕ_stat} = 1 - 2m/r , e^{2ϕ_rot} = (1 - 2m/r) - (Ω^2/c^2) r^2 sin^2ϑ . Their equipotential surfaces determine the isochronometric surfaces e^{2ϕ_stat} = constant ⇔ r = constant , e^{2ϕ_rot} = constant ⇔ (1 - 2m/r) - (Ω^2/c^2) r^2 sin^2ϑ = constant , one of which is the relativistic geoid in this spacetime. Figure <ref> shows the level sets of the relativistic potential for both cases in a coordinate contour plot.

We now compare the relativistic geoid defined by Eq. (<ref>) with its Newtonian analog. For the Newtonian potential U = -GM/R of a spherically symmetric mass distribution, the geoid is defined by an equipotential surface, see Eq. (<ref>),W = -GM/R - (1/2) Ω^2 R^2 sin^2ϑ = W_0 = constant.Using the relation m = GM/c^2, we get from (<ref>) the condition for the relativistic geoid,1 + (2/c^2) ( -GM/r - (1/2) Ω^2 r^2 sin^2ϑ ) = constant.Hence, the term in brackets must be constant. This is, formally, the same result as for the nonrelativistic geoid (<ref>). Of course, the Newtonian geoid is defined in a flat geometry, while the spatial part of the Schwarzschild metric is not flat. Therefore, the intrinsic geometry of a surface in the Schwarzschild geometry is in general different from that of a surface with the same coordinate representation in flat space. However, as the spheres r = r_0 in the Schwarzschild geometry have area 4π r_0^2, the intrinsic geometry of the Schwarzschild geoid for the nonrotating observers is the same as that of the corresponding Newtonian geoid. In Figs. <ref> and <ref> in the bottom row on the right, we show an isometric embedding into Euclidean space ℝ^3 of the isochronometric surfaces as seen by the rotating observers.
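For the rotating congruence, the level condition can be solved for the equatorial radius of the surface passing through a prescribed polar radius. The sketch below (not part of the paper; all numerical values are rough Earth-like choices) does this by bisection. Since the 1s cancel, the condition is rewritten as 2m/r + (Ω/c)^2 r^2 sin^2ϑ = 2m/r_pole, which avoids losing the tiny terms to double rounding.

```python
import math

# Earth-like parameters, SI units (illustrative values)
m = 0.0044                  # GM/c^2 in meters
c = 299792458.0
Om = 2.0 * math.pi / 86400.0
k = (Om / c)**2             # Omega^2 / c^2

def g(r):
    # equatorial level condition with the constant 1s cancelled:
    # 2m/r + k r^2 = 2m/r_pole
    return 2.0*m/r + k*r*r

r_pole = 6356752.0
target = 2.0*m/r_pole       # constant fixed at the pole (sin theta = 0)

lo, hi = 6.0e6, 7.0e6       # g is monotonically decreasing on this bracket
for _ in range(200):
    mid = 0.5*(lo + hi)
    if g(mid) > target:
        lo = mid
    else:
        hi = mid
r_eq = 0.5*(lo + hi)
bulge = r_eq - r_pole       # ~11 km
```

The resulting equatorial bulge of roughly 11 km is about half of the Earth's actual value, as expected for a monopole field: the contribution of the Earth's mass quadrupole is neglected here.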
This isometric embedding reveals the intrinsic geometry of these surfaces; close to the source the surfaces are “squashed spheres,” whereas farther away, they deform into cylinders due to the increasing influence of the rotation term that is proportional to r^2; see Eq. (<ref>). For details on the embedding procedure, we refer to Appendix <ref>.

§.§.§ Quadrupole I: Erez-Rosen metric

Choosing q_0 = 1, q_1 = 0, q_2 ≠ 0, and q_l = 0 for all l > 2, we obtain a metric that possesses a monopole moment R_0 = M and, additionally, an independent quadrupole moment R_2 = (2/15) q̅_2 M^3.The metric functions ψ and γ in Eq. (<ref>) become2ψ = log( (x-1)/(x+1) ) + q_2 (3y^2-1) [ ((3x^2-1)/4) log( (x-1)/(x+1) ) + (3/2) x ] ,and γ = (1/2)(1+q_2)^2 log( (x^2-1)/(x^2-y^2) ) - (3/2) q_2 (1-y^2) [ x log( (x-1)/(x+1) ) + 2 ] + (9/16) q_2^2 (1-y^2) × [ x^2 + 4y^2 - 9x^2y^2 - 4/3 + x ( x^2 + 7y^2 - 9x^2y^2 - 5/3 ) log( (x-1)/(x+1) ) + (1/4)(x^2-1)(x^2 + y^2 - 9x^2y^2 - 1) ( log( (x-1)/(x+1) ) )^2 ] . This metric is the vacuum solution found by Erez and Rosen <cit.>[As pointed out in Ref. <cit.>, the original work by Erez and Rosen contains some mistakes concerning numerical factors within the expression for the metric functions. A corrected version can be found, for example, in Ref. <cit.>.]. If the quadrupole moment vanishes, q_2 → 0, we recover the Schwarzschild metric. The relativistic potential for static and rotating observers is, respectively,e^{2ϕ_stat} = e^{2ψ} = ( (x-1)/(x+1) ) exp{ q_2 (3y^2-1) [ ((3x^2-1)/4) log( (x-1)/(x+1) ) + (3/2) x ] } , e^{2ϕ_rot} = e^{2ϕ_stat} - (Ω^2/c^2) m^2 (x^2-1)(1-y^2) e^{-2ϕ_stat} . The isochronometric surfaces are shown in Fig. <ref>. We also show the effect of the quadrupole term alone by subtracting the monopole contribution, i.e. subtracting the Schwarzschild term. Using the coordinate transformation (<ref>), we can switch to the coordinates (r,ϑ) and obtaine^{2ϕ_stat} = (1 - 2m/r) exp{ q_2 (3cos^2ϑ - 1) [ ( (3/4)(r/m - 1)^2 - 1/4 ) log(1 - 2m/r) + (3/2)(r/m - 1) ] } , e^{2ϕ_rot} = e^{2ϕ_stat} - (Ω^2/c^2) r^2 sin^2ϑ e^{-2ϕ_stat} .
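The weak-field behavior of this potential can be verified numerically. The sketch below (illustrative only; m = 1 and q_2 = 0.3 are chosen arbitrarily) compares the exact e^{2ϕ_stat} in (r,ϑ) with its cubic-order expansion 1 - 2m/r - (2/15) q_2 m^3 (3cos^2ϑ - 1)/r^3 and checks that the remainder is of quartic order.

```python
import math

def e2phi_stat(r, th, m=1.0, q2=0.3):
    # exact Erez-Rosen potential in (r, theta)
    X = r/m - 1.0
    L = math.log1p(-2.0*m/r)                       # log(1 - 2m/r), accurately
    bracket = (0.75*X**2 - 0.25)*L + 1.5*X
    return (1.0 - 2.0*m/r) * math.exp(q2*(3.0*math.cos(th)**2 - 1.0)*bracket)

def e2phi_approx(r, th, m=1.0, q2=0.3):
    # expansion up to cubic order in m/r
    return 1.0 - 2.0*m/r - (2.0/15.0)*q2*m**3*(3.0*math.cos(th)**2 - 1.0)/r**3

r, th = 100.0, 0.0
res = abs(e2phi_stat(r, th) - e2phi_approx(r, th))   # remainder, O(m^4/r^4)
quad = abs(e2phi_stat(r, th) - (1.0 - 2.0/r))        # size of the quadrupole term
```

At r = 100 m the remainder is two orders of magnitude below the quadrupole term itself, consistent with a quartic-order error.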
Thereupon, the geoid can also be determined in terms of the coordinates (r,ϑ).

We expand exp(2ϕ_stat) up to cubic order in m/r because this is where quadrupole corrections appear. We obtaine^{2ϕ_stat} = 1 - 2m/r - (2/15) q_2 m^3 (3cos^2ϑ - 1)/r^3 + 𝒪(m^4/r^4) = 1 - (2/c^2) ( GM/r + (2/15) G M m^2 q_2 (3cos^2ϑ - 1)/(2r^3) ) + 𝒪(m^4/r^4) = 1 - (2/c^2) ( GM/r + G N_2 (3cos^2ϑ - 1)/(2r^3) ) + 𝒪(m^4/r^4) .ForN_2 = (2/15) M m^2 q_2 = (2/15) q̅_2 M^3 ,the term in brackets is the Newtonian potential of a quadrupolar gravitational source; see Eq. (<ref>) for comparison. This result shows that, indeed, the Newtonian limit of the Erez-Rosen spacetime yields the Newtonian gravitational potential of a source that possesses only a monopole and a quadrupole moment. Hence, the relativistic geoid for the Erez-Rosen spacetime in terms of the level sets of Eq. (<ref>) reproduces the Newtonian expression in lowest order. Higher orders are, however, different. Moreover, one has to keep in mind that in the Erez-Rosen spacetime the coordinates do not have the same geometric meaning as in the Newtonian theory. The metric on a surface t = constant and r = constant is not the usual metric on the 2-sphere S^2, and r is not an area coordinate as it was in the Schwarzschild spacetime. We can visualize the intrinsic geometry of isochronometric surfaces by isometrically embedding them into the Euclidean space ℝ^3. These surfaces are defined by an equation of the forme^{2ϕ(r,ϑ)} = f_0 = constant.The value f_0 > 0 labels these surfaces. For f_0 → 0, the surface of infinite redshift for observers on integral curves of ∂_t is approached. For static spacetimes, this surface is a horizon. The relevant equations for constructing the embeddings are given in Appendix <ref>. For the Schwarzschild spacetime, the embedding yields standard spheres in ℝ^3 for the congruence on integral curves of ∂_t, and for the congruence on integral curves of ∂_t + Ω∂_φ, the embedding yields deformed spheres close to the horizon and deformed cylinders farther away, cf.
Figs. <ref> and <ref> on the right in the bottom row.

For the Erez-Rosen spacetime, we have to consider two different signs of the quadrupole parameter. Hence, the embedded surfaces are either prolate or oblate; see the middle rows of Figs. <ref> – <ref>. We see that the isochronometric surfaces in the Erez-Rosen spacetime for negative quadrupole parameter develop “bulges” around the poles close to the horizon. Farther away, the embedded surfaces become oblate or prolate squashed spheres. With nonzero rotation, the embedded surfaces deform into cylinders farther away from the source, analogously to the rotating Schwarzschild case.

§.§.§ Quadrupole II: q-metric

Another example of a two-parameter family of metrics that is actually the simplest generalization of the Schwarzschild metric is the q-metric <cit.>. The q-metric, as constructed by Quevedo, is obtained by a Zipoy-Voorhees transformation of the Schwarzschild solution. Zipoy <cit.> and Voorhees <cit.> considered such solutions of the vacuum field equation in their papers. A similar transformation was also used before in the work of Bach (and Weyl) <cit.>. For a discussion of the Zipoy-Voorhees (q-)metric, we refer the reader to, e.g. the book by Griffiths and Podolský <cit.>.

The q-metric possesses independent monopole and quadrupole moments, and all higher multipole moments are determined by these two. The metric functions reade^{2ψ} = ( (x-1)/(x+1) )^{1+q} , e^{2γ} = ( (x^2-1)/(x^2-y^2) )^{(1+q)^2} .The relativistic monopole and quadrupole moments of this spacetime are given by R_0 = (1+q)M and R_2 = -M m^2 q(1+q)(2+q)/3 <cit.>. The limit q → 0 yields the Schwarzschild metric. The relativistic potential for static and rotating observers is, respectively, e^{2ϕ_stat} = ( (x-1)/(x+1) )^{1+q} , e^{2ϕ_rot} = ( (x-1)/(x+1) )^{1+q} - (Ω^2/c^2) m^2 ( (x-1)/(x+1) )^{-(1+q)} (x^2-1)(1-y^2) . With the coordinate transformation (<ref>), the equations that define the isochronometric surfaces read e^{2ϕ_stat} = (1 - 2m/r)^{1+q} , e^{2ϕ_rot} = (1 - 2m/r)^{1+q} - (Ω^2/c^2) (1 - 2m/r)^{-q} r^2 sin^2ϑ .
Even though the level sets of the redshift potential ϕ_stat coincide with the surfaces x = constant and thus with the surfaces r = constant, this does not mean that the geoid is spherically symmetric. The metric on the surfaces t = constant and r = constant is not the usual metric on the S^2, and r is not an area coordinate as it was in the Schwarzschild spacetime. To put this into geometrical terms, one can use the relativistic flattening <cit.> that measures the deviation from spherical symmetry,f := 1 - C_ϑ/C_φ,where C_ϑ and C_φ are the circumferences, measured with the metric, of circles at r = r_0 in the ϑ-direction (polar circles) and φ-direction (azimuthal circles), respectively. The circumference C_φ is measured in the equatorial plane ϑ = π/2, whereas for C_ϑ, the azimuthal angle φ is arbitrary due to the symmetry. For the Schwarzschild spacetime, this flattening is zero, whereas for the q-metric, we obtainf = 1 - (x^2-1)^{q(2+q)/2} x^{-q(2+q)} _2F_1( 1/2, q(2+q)/2, 1, 1/x^2 ) .Here, _2F_1 is one of the hypergeometric functions. In the limits r → ∞ and q → 0, the flattening becomes zero. For a positive q, the flattening is positive, and the surfaces x = constant are oblate, because circles in the φ-direction are larger. For a negative value of q, these surfaces are prolate.

As for the Erez-Rosen metric, we may also visualize the isochronometric surfaces of the q-metric by isometrically embedding them into the Euclidean space ℝ^3. The result is shown in the top rows of Figs. <ref> – <ref>. Again, we refer to Appendix <ref> for details about the construction of the embeddings. As for the Erez-Rosen metric, we have two different signs of the quadrupole parameter. Hence, the embedded surfaces are either oblate or prolate as can be seen in the plots. However, in contrast to the Erez-Rosen metric, the isochronometric surfaces do not develop bulges near the poles in the oblate case; see Fig. <ref> in the top row on the left.
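The flattening can also be computed directly from its definition by integrating the circumferences, which gives an independent check of the sign statements above. The sketch below (not from the paper) uses Simpson's rule; the common factor m e^{-ψ} drops out of the ratio because ψ is independent of y for the q-metric.

```python
import math

def e_gamma(x, y, q):
    # exp(gamma) for the q-metric: ((x^2-1)/(x^2-y^2))^((1+q)^2/2)
    return ((x*x - 1.0)/(x*x - y*y))**(0.5*(1.0 + q)**2)

def flattening(x, q, n=400):
    # f = 1 - C_theta/C_phi at fixed x; the factor m e^{-psi} cancels
    h = math.pi/n
    def integrand(th):
        y = math.cos(th)
        return e_gamma(x, y, q)*math.sqrt(x*x - y*y)
    # composite Simpson rule for C_theta proportional to
    # int_0^pi e^gamma sqrt(x^2 - cos^2 th) dth
    s = integrand(0.0) + integrand(math.pi)
    for i in range(1, n):
        s += (4 if i % 2 else 2)*integrand(i*h)
    C_theta = s*h/3.0
    C_phi = math.pi*math.sqrt(x*x - 1.0)      # equator, y = 0
    return 1.0 - C_theta/C_phi

x = 3.0
f_schw = flattening(x, 0.0)    # Schwarzschild limit: no flattening
f_obl = flattening(x, 0.5)     # q > 0: oblate
f_pro = flattening(x, -0.5)    # q < 0: prolate
```

The Schwarzschild limit q = 0 gives zero flattening (the integrand becomes the constant √(x²−1)), while positive and negative q give oblate and prolate surfaces, respectively, as stated in the text.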
For the rotating case, the embedding yields cylinders farther away from the source, and the results are qualitatively similar to those obtained for the Schwarzschild and Erez-Rosen cases.

§ AXISYMMETRIC STATIONARY SPACETIMES

§.§ Axisymmetric stationary solutions to Einstein's vacuum field equation

All axisymmetric and stationary solutions to Einstein's vacuum field equation can be transformed into the Weyl-Lewis-Papapetrou form. Here, we use spheroidal coordinates since they have proven to be useful in the last section. The metric in these coordinates reads g = -e^{2ψ} (c dt + ω dφ)^2 + e^{-2ψ} σ^2 [ e^{2γ} (x^2-y^2) ( dx^2/(x^2-1) + dy^2/(1-y^2) ) + (x^2-1)(1-y^2) dφ^2 ] ,where ψ, γ, and ω are functions of x and y while σ is a constant. Defining the complex Ernst potentialE := e^{2ψ} + iΣ , ϵ := (1-E)/(1+E),where Σ is given by σ (x^2-1) ∂_x Σ = -e^{4ψ} ∂_y ω , σ (1-y^2) ∂_y Σ = e^{4ψ} ∂_x ω , reduces the vacuum field equation to a complex equation for the Ernst potential, which can be found, for example, in Ref. <cit.>. For static spacetimes, the Ernst potential becomes real, and the formalism of Sec. <ref> may be used for constructing solutions.

We again construct the relativistic potentials e^{2ϕ_stat} = e^{2ψ} , e^{2ϕ_rot} = e^{2ψ} + (2Ω/c) ω e^{2ψ} - (Ω^2/c^2) [ e^{-2ψ} σ^2 (x^2-1)(1-y^2) - ω^2 e^{2ψ} ] , for the Killing vector fields ∂_t and ∂_t + Ω ∂_φ. The relativistic potential ϕ_rot is now defined by the metric function ψ and the twist potential ω, leading to gravitomagnetic contributions.

A simple solution to the Ernst equation for ω = 0 is ϵ = 1/x. This yields the Schwarzschild solution in spheroidal coordinates, which we considered in the last section.

§.§ Example: Kerr spacetime

The best known and most important stationary and axisymmetric solution to Einstein's vacuum field equation is the Kerr metric.
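That the static solution with Ernst potential 1/x is indeed Schwarzschild can be seen by inverting the definition of ϵ: for a real potential (Σ = 0) one gets e^{2ψ} = (1-ϵ)/(1+ϵ) = (x-1)/(x+1), the Schwarzschild potential from the previous section. A minimal numeric sanity check (not from the paper):

```python
# invert eps = (1 - E)/(1 + E) with E = e^{2psi} real (Sigma = 0)
for x in (1.5, 2.0, 7.3):
    eps = 1.0 / x
    e2psi = (1.0 - eps) / (1.0 + eps)
    # compare with the Schwarzschild potential (x-1)/(x+1)
    assert abs(e2psi - (x - 1.0) / (x + 1.0)) < 1e-14
```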
In this case, the Ernst potential depends on the mass parameter m and the spin parameter a,ϵ^{-1} = (σ/m) x + i (a/m) y , σ = √(m^2-a^2),and the metric functions in the Weyl-Lewis-Papapetrou representation become e^{2ψ} = ( σ^2 x^2 + a^2 y^2 - m^2 ) / ( (σx+m)^2 + a^2 y^2 ) , ω = 2am (σx+m)(1-y^2) / ( σ^2 x^2 + a^2 y^2 - m^2 ) , γ = (1/2) log( ( σ^2 x^2 + a^2 y^2 - m^2 ) / ( σ^2 (x^2-y^2) ) ) . After the coordinate transformation σx = r - m , y = cosϑ,we obtain the Kerr metric in its well-known form given in Boyer-Lindquist coordinates (t,r,ϑ,φ),g = - ( 1 - 2mr/ρ^2 ) c^2 dt^2 + (ρ^2/Δ) dr^2 + ρ^2 dϑ^2 + sin^2ϑ ( r^2 + a^2 + 2mr a^2 sin^2ϑ/ρ^2 ) dφ^2 - (4mra sin^2ϑ/ρ^2) c dt dφ ,whereρ^2 = r^2 + a^2 cos^2ϑ , Δ = r^2 + a^2 - 2mr .The relativistic potential for the congruence of Killing observers on integral curves of ∂_t is now given bye^{2ϕ_stat} = 1 - 2mr/ρ^2 = 1 - 2mr/(r^2 + a^2 cos^2ϑ).For Killing observers on a rotating congruence, i.e. on integral curves of ∂_t + Ω∂_φ with Ω ≠ 0, the relativistic potential ϕ satisfiese^{2ϕ_rot} = 1 - 2mr/(r^2 + a^2 cos^2ϑ) + (4Ω/c) amr sin^2ϑ/(r^2 + a^2 cos^2ϑ) - (Ω^2/c^2) sin^2ϑ ( r^2 + a^2 + 2mr a^2 sin^2ϑ/(r^2 + a^2 cos^2ϑ) ) .In either case, for any two observers within such a congruence at positions (r,ϑ) and (r̃,ϑ̃), respectively, the redshift is 1+z = ν/ν̃ = e^{ϕ(r̃,ϑ̃)} / e^{ϕ(r,ϑ)}.Figure <ref> shows a contour plot of the functions exp(2ϕ_stat) and exp(2ϕ_rot) in pseudo-Cartesian coordinates. To infer more about the intrinsic geometry of the isochronometric surfaces, Figs. <ref> – <ref> show their isometric embeddings into Euclidean 3-space. The embedding of the surface exp(2ϕ_stat) = f_0 exists for all 0 < f_0 < 1 and all values of a/m. In the limit f_0 → 0, the isochronometric surfaces approach the ergosurface, i.e. the boundary of the ergoregion. An isometric embedding of the ergosurface was first discussed by Sharp <cit.>. It is known that the ergosurface starts to develop bulges around the poles if a^2 approaches its extremal value m^2; for a picture, see Pelavas <cit.>.
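Two numerical spot checks of these potentials (code not from the paper): first, e^{2ϕ_stat} vanishes on the ergosurface r = m + √(m^2 - a^2 cos^2ϑ); second, evaluating the pole-versus-equator redshift with rough Earth-like Kerr parameters (values assumed here) yields a gravitomagnetic contribution of order 10^{-21}. Since that effect sits some 21 digits below unity, the second check uses Decimal arithmetic rather than floats.

```python
import math
from decimal import Decimal as D, getcontext

# 1) e^{2 phi_stat} = 1 - 2mr/(r^2 + a^2 cos^2 th) on the outer ergosurface
m, a, th = 1.0, 0.8, 0.9
r_ergo = m + math.sqrt(m**2 - a**2 * math.cos(th)**2)
ergo_val = 1.0 - 2.0*m*r_ergo / (r_ergo**2 + a**2 * math.cos(th)**2)

# 2) pole-vs-equator redshift for an Earth-like Kerr field (assumed values)
getcontext().prec = 60                 # effect ~1e-21, far below float precision
PI = D("3.14159265358979323846264338327950288")
c = D("299792458")
mE = D("0.0044")                       # GM/c^2 in meters
aE = D("3.3")                          # spin parameter in meters
Om = 2 * PI / D(86400)
r_eq, r_pol = D("6378137"), D("6356752")

def one_plus_z(a):
    pole = 1 - 2*mE*r_pol / (r_pol**2 + a**2)               # e^{2phi_stat}, pole
    eq = (1 - 2*mE/r_eq + 4*Om*a*mE/(c*r_eq)
          - (Om/c)**2 * (r_eq**2 + a**2 + 2*mE*a**2/r_eq))  # e^{2phi_rot}, equator
    return (pole/eq).sqrt()

z_gm = one_plus_z(aE) - one_plus_z(D(0))   # gravitoelectric part subtracted
```

The gravitomagnetic remainder comes out at roughly 10^{-21} in magnitude, in line with the estimate discussed in the text.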
Our plots show a similar behavior of the isochronometric surfaces near the ergosurface.

As an aside, we mention that our formalism may also be used for calculating the gravitomagnetic redshift on the surface of the Earth if the spacetime geometry outside of the Earth is approximated by the Kerr metric. For satellite orbits, the gravitomagnetic redshift (or gravitomagnetic clock effect) has been studied before; see Ref. <cit.> for the case of arbitrary orbits. For clocks on the surface of the Earth, we may use the redshift potential (<ref>). If one clock rotates on the equator, (r, ϑ = π/2), and the other one is situated at the north pole, (r̃, ϑ̃ = 0), the redshift becomes1+z = ν/ν̃ = √( 1 - 2mr̃/(r̃^2+a^2) ) / √( 1 - 2m/r + (4Ω/c)(am/r) - (Ω^2/c^2)( r^2 + a^2 + 2ma^2/r ) ).Subtracting the gravitoelectric part, i.e. the same expression for a = 0, the remainder gives the gravitomagnetic redshift between these two clocks. Inserting the values for all parameters leads to a gravitomagnetic redshift of[For the calculation we used the following values for the Earth: mass parameter m = GM/c^2 = 0.0044 m, spin parameter a = 743 m ≈ 3.3 m (i.e. a/m ≈ 743), Ω = 2π/(86400 s), equatorial radius r = 6378.137 km and polar radius r̃ = 6356.752 km.]z_grav.magn. ∼ 10^{-21},which is about 3 orders of magnitude away from contemporary precision but might be measured in the foreseeable future with further improved clocks.

§ POST-NEWTONIAN APPROXIMATION OF THE GEOID

In this section, we consider the PN approximation of the relativistic geoid, and we demonstrate that, indeed, the familiar expression is reproduced at the 1PN level.

According to the most recent resolution of the International Astronomical Union (IAU), see, e.g. Refs. <cit.>, the PN approximation of the metric of the Earth in geocentric coordinates (cT, X^i) and under the assumption of stationarity reads g_00 = - ( 1 - 2U/c^2 + 2U^2/c^4 ) + 𝒪(c^{-6}) , g_0i = -4U^i/c^3 + 𝒪(c^{-5}) , g_ij = δ_ij ( 1 + 2U/c^2 ) + 𝒪(c^{-4}) , where the potentials U, U^i fulfill the equations ΔU(X) = -4πG ρ(X) , ΔU^i(X) = -4πG ρ^i(X) .
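To put the 1PN terms in this metric into perspective, the following back-of-the-envelope sketch (Earth values assumed, not taken from the text) compares the Newtonian term 2U/c^2 in g_00 with the 1PN term 2U^2/c^4 at the Earth's surface for a point-mass potential.

```python
GM = 3.986004418e14        # m^3/s^2, Earth (assumed value)
R = 6.378137e6             # m, equatorial radius
c = 299792458.0

U = GM / R                         # point-mass potential at the surface
newtonian_term = 2 * U / c**2      # ~1.4e-9 contribution to g_00
pn_term = 2 * U**2 / c**4          # 1PN correction to g_00
ratio = pn_term / newtonian_term   # = U/c^2 ~ 7e-10
```

The 1PN correction is suppressed by the factor U/c^2 ≈ 7 × 10^{-10} relative to the Newtonian term, which is why the familiar Newtonian geoid is recovered in lowest order.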
The quantities ρ, ρ^i are related to the energy-momentum tensor of the Earth by ρ = (T^{00} + T^{ii})/c^2 and ρ^i = T^{0i}/c, evaluated in the Geocentric Celestial Reference System (GCRS). For the scalar and vector potentials, one obtains U(X) = G ∫ d^3X' ρ(𝐗')/|𝐗-𝐗'| , U^i(X) = G ∫ d^3X' ρ^i(𝐗')/|𝐗-𝐗'| . Changing to corotating geocentric coordinates (cT̅, X̅^i), the metric becomes <cit.>g_00 = - ( 1 - 2U/c^2 + 2U^2/c^4 ) + Ω^2 (X̅^2+Y̅^2)/c^2 , g_0i = ( 𝐋 - 𝐗̅×Ω/c )_i , g_ij = δ_ij ( 1 + 2U/c^2 ) , where 𝐋 = -2G 𝐉×𝐗̅/(c^3 R^3),and Ω, 𝐉 are the angular velocity and angular momentum of the Earth. We use the usual three-vector notation only as a shorthand notation. The vector field ∂_T̅ is a Killing vector field of the spacetime (<ref>). Observers on the Earth's surface move on its integral curves since for them dX̅^i = 0. These observers form an isometric congruence. The corresponding relativistic potential ϕ_PN is given bye^{2ϕ_PN} = -g_00 = 1 - 2U/c^2 + 2U^2/c^4 - Ω^2 (X̅^2+Y̅^2)/c^2 .The defining condition for the relativistic geoid as a level set of the relativistic potential ϕ_PN yieldsU + (1/2) Ω^2 (X̅^2+Y̅^2) - U^2/c^2 = constant,which is exactly the expression given by Soffel et al. in Ref. <cit.>; see their Eq. (4). The first two terms reproduce the classical definition of the Newtonian geoid, whereas the last term adds a relativistic correction at the 1PN level.

§ CONCLUSION

In this work, we have generalized the Newtonian and post-Newtonian definitions of the geoid to a fully general relativistic setting. As this definition is not restricted to weak gravitational fields, it makes sense not only for the Earth and other planets but also for compact objects such as neutron stars. Just as the former definitions of the geoid, our definition is based on the assumption that the Earth rotates rigidly with constant angular velocity about a fixed axis. Under this assumption, the Earth is associated with an isometric congruence of worldlines, i.e. with a family of Killing observers.
We have defined the geoid in terms of isochronometric surfaces that are the level sets of the redshift potential for this isometric observer congruence. As the isochronometric surfaces may be realized with networks of standard clocks that are connected by fiber links, this is an operational definition of the geoid.While we consider the definition of the geoid in terms of clocks as primary, we have also emphasized that the redshift potential associated with an isometric congruence is, at the same time, an acceleration potential. This observation generalizes the equality of the u- and a-geoid, which was known to hold in a PN setting, into the full formalism of general relativity.In practical geodesy, our stationary gravitational field is the time average of the real gravitational field of the Earth. The real gravitational field of the Earth contains time-dependent parts which have to be treated through, e.g., an appropriate reduction. Here, we focus on the correct and fully relativistic definition of the geoid without time dependence.We have illustrated our definition of the geoid by calculating the isochronometric surfaces of axisymmetric and static spacetimes, with the Schwarzschild metric, the Erez-Rosen metric, and the q-metric as particular examples. We have then considered the case of axisymmetric and stationary spacetimes, with the Kerr metric as a particular example. As the shape of the isochronometric surfaces in a chosen coordinate system has no invariant meaning, we have isometrically embedded these surfaces into Euclidean 3-space to show their intrinsic geometry. As an aside, we have mentioned that the redshift potential for rotating observers in the Kerr metric may be used for estimating the gravitomagnetic redshift for clocks on the surface of the Earth. 
Finally, we have derived the redshift potential and the relativistic geoid in a 1PN spacetime and recovered the previously known result.An important task for the future is to express the geoid of a rotating and non-axisymmetric body in terms of multipole moments. This is conceptually challenging because in this case the spacetime is not stationary near infinity; the Killing vector field associated with the rotating body becomes spacelike outside of a cylindrical region about the rotation axis. For this reason, the time-independent asymptotically defined Geroch-Hansen multipole moments do not exist. In future work, we are planning to tackle the question of how local measurements in the neighborhood of a gravitating body are to be related to appropriately defined multipole moments in a relativistic formalism without approximations.We emphasize again that our formalism is valid for stationary non-axisymmetric objects as well, as long as the backreaction from gravitational radiation and the resulting slowdown of the rotation can be ignored. In this sense, our geoid can be constructed for any irregularly shaped rotating body. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center (SFB) 1128 “geo-Q” and the Research Training Group 1620 “Models of Gravity.” We also acknowledge support by the German Space Agency DLR with funds provided by the Federal Ministry of Economics and Technology (BMWi) under Grant No. DLR 50WM1547.The authors would like to thank Pacôme Delva, Heiner Denker, Domenico Giulini, Norman Gürlebeck, Sergei Kopeikin, Jürgen Müller, and Michael Soffel for helpful discussions and for reading the manuscript. The first author acknowledges insightful discussions with Vojtěch Witzany and Michael Fennen. 
§ ISOMETRIC EMBEDDING OF ISOCHRONOMETRIC SURFACES

As the coordinate representation of the geoid has no invariant geometric meaning, it is advisable to isometrically embed the isochronometric surfaces into Euclidean 3-space. If such an embedding is possible, it represents the intrinsic geometry of the geoid. In all examples that we considered in this paper, the geoid was defined by the level sets of a function

f(x,y) = f_0 = constant,

where x and y are spheroidal coordinates. As an alternative, we may use the coordinates (r,ϑ), which are related to (x,y) by the coordinate transformation x = r/m - 1, y = cos ϑ; see Eq. (<ref>). On the two-dimensional surface defined by (<ref>), we must have

0 = df = ∂_x f(x,y) dx + ∂_y f(x,y) dy ;

hence,

dx^2 = ( ∂_y f(x,y)/∂_x f(x,y) )^2 dy^2 .

As a consequence, the two-dimensional Riemannian metric on the surface f = f_0 is

g^(2) = [ g_xx(x,y) ( ∂_y f(x,y)/∂_x f(x,y) )^2 + g_yy ] dy^2 + g_φφ(x,y) dφ^2 .

We want to isometrically embed this surface into Euclidean 3-space with cylindrical coordinates (ζ,φ,h),

g_E^(3) = dh^2 + dζ^2 + ζ^2 dφ^2 .

The embedding functions h(y) and ζ(y) are to be determined from the equation

[ g_xx(x,y) ( ∂_y f(x,y)/∂_x f(x,y) )^2 + g_yy ] dy^2 + g_φφ(x,y) dφ^2 = ( h'(y)^2 + ζ'(y)^2 ) dy^2 + ζ(y)^2 dφ^2 .

If Eq. (<ref>) can be explicitly solved for x = x(y), we may insert this expression into (<ref>). Comparing coefficients results in

ζ(y) = √(g_φφ(x,y)) |_{x=x(y)} ,
h(y) = ± ∫_0^y dy ( g_xx(x,y) ( ∂_y f(x,y)/∂_x f(x,y) )^2 + g_yy(x,y) - g'_φφ(x,y)^2/(4 g_φφ(x,y)) )^{1/2} |_{x=x(y)} .

In Eq. (<ref>), the expression g'_φφ, by abuse of notation, is understood to mean that first x(y) is to be inserted and then the derivative with respect to y is to be taken. The integral in Eq.
(<ref>) has to be calculated either analytically, if this is possible, or numerically. Equations (<ref>) and (<ref>) give us the cylindrical radius coordinate ζ and the cylindrical height coordinate h in Euclidean 3-space as functions of the parameter y, of which the allowed range is y ∈ [-1,1], corresponding to ϑ ∈ [0,π]. In this way, we get a meridional section of the embedded surface in parametrized form; by letting this figure rotate about the axis ζ = 0, we get the entire embedded surface. The embedding is possible near all y values for which

g_xx(x,y) ( ∂_y f(x,y)/∂_x f(x,y) )^2 + g_yy(x,y) > g'_φφ(x,y)^2/(4 g_φφ(x,y)) .

If this condition is violated, the surface cannot be isometrically embedded into Euclidean 3-space, which means that its intrinsic geometry is hard to visualize. This direct construction of the embedded surface in parametrized form is possible if Eq. (<ref>) can be explicitly solved for x = x(y). If this cannot be done, we have at least an expression for the derivative of this function, as Eq. (<ref>) implies that

x'(y) = dx/dy = - ∂_y f(x,y)/∂_x f(x,y) .

Using Eq. (<ref>), we obtain a coupled system of ordinary differential equations,

x'(y) = - ∂_y f(x,y)/∂_x f(x,y) |_{x=x(y)} ,
h'(y) = ( g_xx(x,y) ( ∂_y f(x,y)/∂_x f(x,y) )^2 + g_yy(x,y) - g'_φφ(x,y)^2/(4 g_φφ(x,y)) )^{1/2} |_{x=x(y)} ,

for the functions x(y) and h(y), which is to be solved numerically with initial conditions x(0) = x_0, h(0) = 0. Of course, this is possible only if an embedding exists. If x(y) and h(y) have been determined, the function ζ(y) is given by Eq. (<ref>).

§ CONVENTIONS AND SYMBOLS

In the following, we summarize our conventions and collect some frequently used formulas. A directory of symbols used throughout the text can be found in Table <ref>.
For an arbitrary k-tensor T_μ_1 …μ_k, the symmetrization and antisymmetrization are defined byT_(μ_1…μ_k) := 1/k!∑_I=1^k!T_π_I{μ_1…μ_k},T_[μ_1…μ_k] := 1/k!∑_I=1^k!(-1)^|π_I|T_π_I{μ_1…μ_k},where the sum is taken over all possible permutations (symbolically denoted by π_I{μ_1…μ_k}) of its k indices. The signature of the spacetime metric is assumed to be (-,+,+,+). Greek indices μ, ν ,λ , … are spacetime indices and take values 0 … 3. Latin indices i,j,k are spatial indices and take values 1… 3.
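The (anti)symmetrization convention above can be made concrete with a short sketch (ours, for illustration, not part of the paper): sum over all k! index permutations, including the permutation sign in the antisymmetric case.

```python
import math
from itertools import permutations

def sign(p):
    # parity of a permutation, via its inversion count
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def sym_component(T, idx, anti=False):
    # T_(idx) or T_[idx]: average T over all permutations of its k indices
    k = len(idx)
    total = 0.0
    for p in permutations(range(k)):
        s = sign(p) if anti else 1
        total += s * T(*(idx[i] for i in p))
    return total / math.factorial(k)

# rank-2 example: T_(01) = (T_01 + T_10)/2 and T_[01] = (T_01 - T_10)/2
M = [[1.0, 2.0], [5.0, 3.0]]
T = lambda i, j: M[i][j]
print(sym_component(T, (0, 1)))              # (2 + 5)/2 = 3.5
print(sym_component(T, (0, 1), anti=True))   # (2 - 5)/2 = -1.5
```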
arXiv:1702.08412v2 [gr-qc]: Dennis Philipp, Volker Perlick, Dirk Puetzfeld, Eva Hackmann, Claus Lämmerzahl, "Definition of the relativistic geoid in terms of isochronometric surfaces," 27 February 2017.
Institute of Theoretical Physics, Lanzhou University, Lanzhou 730000, China
tanlei@lzu.edu.cn
Institute of Theoretical Physics, Lanzhou University, Lanzhou 730000, China
Institute of Theoretical Physics, Lanzhou University, Lanzhou 730000, China
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China

The impacts that the environment has on the quantum phase transition of light in the Dicke-Bose-Hubbard model are investigated. Based on the quasibosonic approach, mean-field theory and perturbation theory, the formulation of the Hamiltonian, the eigenenergies and the superfluid order parameter are obtained analytically. Compared with the ideal cases, the order parameter of the system evolves with time as the photons naturally decay in their environment. When the system starts in the superfluid state, the dissipation makes the photons tend to localize, and a greater photon hopping energy is required to restore the long-range phase coherence of the localized state of the system. Furthermore, the Mott lobes disappear and the system tends to become classical as the number of atoms increases; however, the atomic number is far lower than that expected under ideal circumstances. Therefore, our theoretical results offer valuable insight into the quantum phase transition of a dissipative system.

42.50.Nn, 42.50.Pq, 05.70.Fh

Quantum phase transitions of light in a dissipative Dicke-Bose-Hubbard model
Wu-Ming Liu
December 30, 2023
============================================================================

§ INTRODUCTION

Quantum simulation has become a research frontier and an indispensable tool in quantum information science <cit.>; its remarkable experimental development has led to incredible advances in the fields of quantum optics and atomic physics <cit.>.
Among the recent developments, the system of coupled cavity arrays embedded with cold atoms has been intensively investigated as a platform to realize and simulate quantum many-body phenomena because of its extremely high tunability, individual addressability and flexibility in its geometric design <cit.>. A wide range of condensed-matter systems has been theoretically investigated, and many proposals for probing them have been put forward, including the quantum phase transition <cit.>, spin glasses <cit.>, photon crystals <cit.>, the emergence of gauge fields <cit.>, the quantum Hall effects <cit.>, the Pfaffian-like topological state <cit.> and the supersolid <cit.>.

The simplest physical model of light-matter coupling in a coupled cavity array system is the Jaynes-Cummings-Hubbard model, which describes an array of optical cavities that each contain a single two-level atom (TLA) in the photon-blockade regime <cit.>. A modified Jaynes-Cummings-Hubbard model, in which each cavity contains a three-level atom, has been proposed recently; this model circumvents the drawbacks of excited-state spontaneous emission and provides a tunable extension of the two-polariton bound states of the standard Jaynes-Cummings-Hubbard model <cit.>. As the number of atoms in each cavity increases, collective effects due to atomic interactions give rise to intriguing many-body phenomena. In quantum optics, the Dicke model is a paradigm of collective behavior <cit.> that describes the interaction of ensembles of TLAs collectively coupled to a single radiation mode of a cavity <cit.>. Numerous interesting physical effects and their experimental realization <cit.>, such as the superradiant phase <cit.>, the superradiant Mott insulator <cit.> and the dynamical phase transition <cit.>, have been discussed.
Thus, a Dicke-Bose-Hubbard (DBH) model, consisting of an array of identical coupled cavities with N identical TLAs in each cavity, has been employed to study the quantum phase transitions of light without considering the counter-rotating terms <cit.>. The transfer of excitations under a large range of operating conditions has also been demonstrated and explored by tuning the controlling parameters in the DBH model <cit.>. Both the emergence of a polaritonic glassy phase <cit.> and the quantum phase transitions from the superfluid to the Bose-glass and the Mott-insulator states <cit.> have also been studied. Most recently, the localization-delocalization quantum phase transition of photons in the DBH model including counter-rotating terms has been presented <cit.>. That model shows that, under the influence of the counter-rotating terms, the Mott lobes are fully suppressed.

As is well known, a realistic quantum optical system can rarely be isolated from its surroundings completely, particularly in an experiment; rather, it is usually coupled to an external environment with an infinite number of degrees of freedom. To date, an investigation of the quantum phase transition of photons in a dissipative DBH model is still lacking. To treat the interplay between the coupled cavity arrays and the environment in a more general setting, we developed a quasibosonic approach to describe the quantum phase transition and photon transport in open quantum optical systems <cit.>. Because it avoids treating the environment's infinite number of degrees of freedom explicitly, the quasibosonic method offers a clear computational advantage. In the present paper, we use the quasibosonic approach to obtain an effective Hamiltonian of the dissipative DBH model. The coordinates of the bath can be eliminated, and the system can be treated as an ensemble of quasibosons on time scales shorter than the decay time.
Next, the eigenenergies and the superfluid order parameter of the system are derived analytically for two TLAs on resonance, and we numerically demonstrate the phase diagram for an arbitrary number of TLAs. The theoretical analysis presented here will be an essential reference for future experiments exploring quantum effects in multi-atom systems.

The paper is organized as follows. In Section 2, the dissipative DBH model is introduced based on the quasibosonic approach. Section 3 is devoted to deriving the eigenvalues and eigenstates for two atoms in each cavity. The analytical solutions of the superfluid order parameter for the dressed states are given, and the properties of the quantum phase transition are discussed in Section 4; the extension to an arbitrary number of TLAs is also given in that section. Section 5 gives the conclusion.

§ THE DISSIPATIVE DICKE-BOSE-HUBBARD MODEL

The system considered is depicted in Fig. 1. The Hamiltonian of the DBH model, including the coupling to its environment, is given by (with ħ=1) <cit.>

H = ∑_i H^DM_i - κ∑_ij a^†_i a_j - μ∑_i N_i + H_R
H^DM_i = ω_a J^+_i J^-_i + ω_c a^†_i a_i + β(a_i J^+_i + a^†_i J^-_i)
H_R = ∑_i ∑_ω_k ω_k r_k^† r_k + H_CR + H_AR
H_CR = ∑_i ∑_ω_k [η_c(ω_k) r_k^† a + h.c.]
H_AR = ∑_i ∑_ω_k [η_a(ω_k) r_k^† J^- + h.c.]

where the indices i and j label individual cavities and range over all sites. a^†_i and a_i are the photon creation and annihilation operators, respectively. J^±_i = ∑_j σ^±_j are the collective atomic raising and lowering angular momentum operators, and the total number of excitations is N_i = a^†_i a_i + J^+_i J^-_i. The transition energy of the TLAs is ω_a, and ω_c is the frequency of the cavity field. All the atoms couple to the cavities with the same coupling strength β <cit.>. We assume that the hopping energy of photons κ_ij = κ between sites i and j and the chemical potential in the grand canonical ensemble μ_i = μ are the same for all cavities. The coupling Hamiltonian of the system with the environment and the Hamiltonian of the environment are described by H_R.
ω_k is the frequency of the kth bath mode, and r_k^† (r_k) are the creation (annihilation) operators of the environment in the kth mode. H_CR is the interaction of the cavity with the environment. The interaction of the atoms with the environment is denoted by H_AR.

Considering the influence of the environment, the decoherence of every cavity and two-level atom results in incoherent or dissipative propagation of the incident photon; thus, nonequilibrium dynamics arise for the open quantum many-body system. In general, simulations of nonequilibrium many-body effects for a system with a finite number of degrees of freedom can be performed using the master equation and the mean-field decoupling approximation <cit.>. However, it is a formidable task to cover a fairly large parameter space because of the environment's infinite number of degrees of freedom. To address this problem, our group proposed a quasibosonic approach that eliminates the infinite number of degrees of freedom of the environment, in which the operators of the environment can be treated as c-numbers, so that the dissipative system can be solved easily <cit.>. One can obtain an effective Hamiltonian for the system based on the quasibosonic approach:

H = ∑_i H^DM_i - κ∑_ij ã^†_i ã_j - μ∑_i Ñ_i
H^DM_i = ω̃_a J̃^+_i J̃^-_i + ω̃_c ã^†_i ã_i + β(ã_i J̃^+_i + ã^†_i J̃^-_i)

where ω̃_a = ω_a - iγ_a and ω̃_c = ω_c - iγ_c. γ_a and γ_c are the decay rates of the atoms and cavities, respectively. ã_i^† (ã_i) is a quasiboson creation (annihilation) operator. J̃^+_i (J̃^-_i) is the dressed atomic raising (lowering) angular momentum operator. Dissipation thus becomes an inherent property of the DBH model considered here.

A superfluid order parameter ψ, with the mean-field assumption ψ ≡ ⟨ã_i⟩, is usually introduced to gain insight into the role of dissipation in the quantum phase transition. For ψ≠0, the system is in the superfluid phase; when ψ=0, the system is in the Mott-insulator phase. In the present case, the expectation value of ã_i is in general complex, of the form ⟨ã_i⟩ = ψ - iψ_γ.
ψ_γ is a small quantity that can be solved for as a function of the decay rates of the system and vanishes in the ideal (dissipationless) limit. Using the decoupling approximation ã^†_i ã_j = ⟨ã^†_i⟩ ã_j + ⟨ã_j⟩ ã^†_i - ⟨ã^†_i⟩⟨ã_j⟩, the mean-field Hamiltonian of Eq. (2.2) can be written as

H^MF = ∑_i H_i^MF
H^MF_i = H_i^DM - κψ(ã^†_i + ã_i) + κ|ψ|^2 - μ∑_i Ñ_i

This mean-field Hamiltonian is assumed to be the same for every site.

§ EIGENVALUES AND EIGENSTATES OF THE DISSIPATIVE DICKE-BOSE-HUBBARD MODEL

In the following, the case of two TLAs in each cavity is investigated as an example to provide a detailed illustration. The extension to an arbitrary number of two-level atoms, which can be calculated by the same approach, is given in Sec. 4. The bare states of the system are |0,e^⊗2⟩|n-2⟩, |g,e⟩|n-1⟩, and |g^⊗2,0⟩|n⟩, with the photon number n running from 0, 1, 2, 3 to ∞ <cit.>. For the two-TLA system, the case in which both atoms are in the excited state is denoted by |0,e^⊗2⟩, the case with only one atom in the excited state by |g,e⟩, and |g^⊗2,0⟩ denotes both atoms in the ground state. Together, these 3n bare-state bases span the whole Hilbert space. In this basis, the matrix elements of H^MF_n are

H^MF_n = ( [ 2ω̃_a+(n-2)ω̃_c-nμ √(2(n-1)) β 0; √(2(n-1)) β 2ω̃_a+(n-1)ω̃_c-(n+1)μ √(2n) β; 0 √(2n) β nω̃_c-nμ ] ) + κ|ψ|^2

with ω̃_c = ω̃_a = ω̃ (ω̃ = ω - iγ), γ = γ_a + γ_c. The eigenvalues can be obtained by diagonalizing the matrix in Eq. (3.1), and the corresponding eigenstates can be found:

E^(0)_|0,n⟩ = nω̃
E^(0)_|±,n⟩ = [(2n+1)ω̃ ± β R(n,ω̃/β)]/2
|0,n⟩ = [-√(n-1)|0,e^⊗2⟩|n-2⟩ + √(n)|g^⊗2,0⟩|n⟩]/√(2n-1)
|±,n⟩ = { √(n)|0,e^⊗2⟩|n-2⟩ + (1/(2√2))[ω̃/β ± R(n,ω̃/β)]|g,e⟩|n-1⟩ + √(n-1)|g^⊗2,0⟩|n⟩ } / √( 2n-1 + {(1/(2√2))[ω̃/β ± R(n,ω̃/β)]}^2 )

Here R(n,ω̃/β) = √(8(2n-1) + (ω̃/β)^2) is the effective Rabi frequency. The energy levels split into three branches corresponding to the upper branch E^(0)_|+,n⟩, the center branch E^(0)_|0,n⟩ and the lower branch E^(0)_|-,n⟩, as shown in Fig. 1(b).
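The eigenvalues of Eq. (3.2) can be checked directly against the matrix of Eq. (3.1). The sketch below is our own (not part of the paper); it takes γ = μ = 0 and ω_a = ω_c = ω for simplicity, reads the off-diagonal couplings as √(2(n-1)) β and √(2n) β, and verifies that E = nω and E = [(2n+1)ω ± βR]/2 are roots of the characteristic determinant.

```python
import math

def det3(M):
    # determinant of a 3x3 matrix by cofactor expansion
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def H(n, w, beta):
    # the 3x3 block of Eq. (3.1) with gamma = mu = 0 and w_a = w_c = w
    return [[2*w + (n - 2)*w, math.sqrt(2*(n - 1))*beta, 0.0],
            [math.sqrt(2*(n - 1))*beta, 2*w + (n - 1)*w, math.sqrt(2*n)*beta],
            [0.0, math.sqrt(2*n)*beta, n*w]]

n, w, beta = 3, 1.0, 0.2
R = math.sqrt(8*(2*n - 1) + (w/beta)**2)      # effective Rabi frequency
eigs = (n*w, ((2*n + 1)*w + beta*R)/2, ((2*n + 1)*w - beta*R)/2)

residuals = []
for E in eigs:
    M = H(n, w, beta)
    for k in range(3):
        M[k][k] -= E
    residuals.append(abs(det3(M)))
print(residuals)   # all three residuals vanish to machine precision
```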
§ THE QUANTUM PHASE TRANSITION

In this section, we use perturbation theory to obtain the superfluid order parameter and study the quantum phase transition by changing the controlling parameters. We have assumed that the cavities are weakly coupled to each other; thus, the interaction term between cavities can be treated as a perturbation when the two-level atoms are coupled strongly to the cavity field. The effective Hamiltonian Eq. (2.3) thus reads

H^MF_i = H^DM_i + H^'_i
H^'_i = -κψ(ã^†_i + ã_i) + κ|ψ|^2 - μÑ_i

which is valid on each site; we therefore drop the subscript i in the following. Considering the analogy between the Jaynes-Cummings model and the Bose-Hubbard model for the transition from the Mott-insulator to the superfluid state, and the fact that analytical results obtained by second- and fourth-order perturbation theory are in good agreement with exact-diagonalization numerical calculations <cit.>, we derive the analytical solution of the system in terms of second-order perturbation theory for simplicity.

Eq. (3.2) and Fig. 1(b) show that a center energy level E_|0,n⟩ is required for the transition; thus, the on-site repulsion U based on the center-branch state |0,n⟩ is independent of the atom-cavity coupling β, which is different from the one defined by the states |±,n⟩. To study the quantum phase transition in detail, the superfluid order parameter must be calculated separately for the different cases.

Preparing in the center branch of the dressed state: According to the definition of the superfluid order parameter ψ = ⟨Φ_n(t)|ã_i|Φ_n(t)⟩, |Φ_n(t)⟩ can be obtained based on second-order perturbation theory. We first obtain the second-order corrections to the energy eigenvalues E^(2)_|0,n⟩ and (normalized) eigenstates ϕ̃^(2)_|0,n⟩ with respect to the dressed basis Eq. (3.4):

E^(2)_|0,n⟩ = (n-1)(2n-2)^2 κ^2ψ^2 / [(2n-1)(2n-3)(ϵ-iγ)] + 4n^3 κ^2ψ^2 / [(2n-1)(2n+1)(-ϵ+iγ)]
ϕ̃^(2)_|0,n⟩ = √(n-1)(2n-2)(-κψ) / [√((2n-1)(2n-3))(ϵ-iγ)] |n-1⟩ + 2n√(n)(-κψ) / [√((2n-1)(2n+1))(ϵ-iγ)] |n+1⟩

where ϵ = ω - μ.
Therefore, the eigenvalue of the dissipative system based on second-order perturbation theory is

E_|0,n⟩ ≡ E_s + iE_γ

with

E_s = nϵ + κ|ψ|^2 + (-8n^3+12n^2+4n-4)κ^2ψ^2 ϵ / [(2n-1)(2n-3)(2n+1)(ϵ^2+γ^2)]
E_γ = nγ + (-8n^3+12n^2+4n-4)κ^2ψ^2 γ / [(2n-1)(2n-3)(2n+1)(ϵ^2+γ^2)]

When the system is in the Mott-insulator state, ψ=0, we have E_γ = nγ. When ψ≠0, one can still take E_γ ≈ nγ because the coupling strength κ between cavities is assumed to be weak.

Up to second order, the expression for the (normalized) eigenstate is

ϕ_|0,n⟩ = (1/√(Ñ)) ϕ̃_|0,n⟩
ϕ̃_|0,n⟩ = √(n-1)(2n-2)(-κψ) / [√((2n-1)(2n-3))(ϵ-iγ)] |n-1⟩ + |n⟩ + 2n√(n)(-κψ) / [√((2n-1)(2n+1))(ϵ-iγ)] |n+1⟩
Ñ = 1 + (n-1)(2n-2)^2 κ^2ψ^2 / [(2n-1)(2n-3)(ϵ^2+γ^2)] + 4n^3 κ^2ψ^2 / [(2n-1)(2n+1)(ϵ^2+γ^2)]

Ñ is the normalization constant. For the open system considered here, the superfluid order parameter ψ is time dependent. According to Eq. (4.5), the (normalized) eigenstate is a function of time; however, its time derivative can be ignored because it enters only through the second-order correction. Thus, the approximate time-dependent wave function of the system can be written as

Φ_n(t) = f(t) ϕ_|0,n⟩

Using the Schrödinger equation, one finds

Φ_n(t) = ϕ_|0,n⟩ e^{-iE_|0,n⟩ t}

Therefore, the superfluid order parameter ψ for the state |0,n⟩ can be obtained:

ψ_1 = e^{-nγt} √( (8n^3-12n^2-4n+4)ϵ / [(16n^4-32n^3+12n^2+4n-4)κ] - (2n-1)(2n+1)(2n-3)(ϵ^2+γ^2) e^{-2nγt} / [(16n^4-32n^3+12n^2+4n-4)κ^2] )

Eq. (4.7) shows that ψ_1 is a function of the parameters κ, γ, t and μ (in the present case, μ is a constant). The superfluid order parameter evolves and decays with time, with a decay rate proportional to the photon number n.

Preparing in the negative branch of the dressed state: Assume that each site is prepared in the negative branch of the dressed state |-,n⟩. We can find the second-order corrections using a similar procedure, although the calculations become quite tedious in our current formulation. The superfluid order parameter ψ_2 can be obtained by solving the following equation.
ψ_2=Re{e^-2γnt/Ñ'[2[2√(n(n-1)(n-2))+√(n-1)/8(ω+iγ/β-R_n-1^†)(ω-iγ/β-R_n)]^2(-κψ_2)/[2ϵ+2iγ-β(R^†_n-R_n-1^†)][2n-1+1/8(ω-iγ/β-R_n-1)^2][2n-3+1/8(ω+iγ/β-R^†_n-1)^2]+ 2[2√(n(n-1)(n+1))+√(n)/8(ω+iγ/β-R_n^†)(ω-iγ/β-R_n+1)]^2(-κψ_2)/[-2ϵ+2iγ-β(R_n-R_n+1)][2n-1+1/8(ω+iγ/β-R^†_n)^2][2n+1+1/8(ω-iγ/β-R_n+1)^2]]}

Ñ' (see the Appendix) is the normalization constant. In what follows, we use Eqs. (4.7) and (4.8) to numerically investigate the features of the quantum phase transition arising from the competition between the on-site repulsion U_n and the hopping rate under the influence of the environment.

Analyses. As illustrated in Fig. 2, we first start from a superfluid phase. The time evolution of the superfluid order parameter ψ is shown for different hopping rates κ and decay rates γ. Comparing Figs. 2a(b) with 2c(d), a clear quantum phase transition is found for the different initial states. The ideal cases are also given in Fig. 2 for comparison; there, the system remains in the initially prepared coherent state.

The evolution of the dissipative system clearly reflects the expected decay of the coherence, which is the most obvious difference from the ideal cases. For small t, although ψ decreases slightly, the system remains in a superfluid state. At sufficiently large t, the effects of the environment become significant: the coherence of the system is first destroyed in a pronounced manner and is then gradually reduced further. Thus, ψ decays rapidly and the system undergoes a phase transition into a Mott-insulating phase. As the photon number n increases, the long-range order parameter decreases rapidly, as shown in Fig. 3(b) and (d), because the decay rate is proportional to n.

The critical point t_c is a function of the controlling parameters and can be found by setting ψ=0 in Eqs. (4.7) and (4.8), which yields t_c = [1/(2nγ)] ln{ (4n^2-4n-4)κ / [(2n+1)(2n-3)ϵ] }.
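The decay of the order parameter discussed here can be seen by evaluating the closed form of Eq. (4.7) directly. The parameter values in this sketch are illustrative choices of ours, not taken from the paper.

```python
import math

def psi1(t, n, eps, gamma, kappa):
    # closed-form order parameter of Eq. (4.7) for the center-branch state |0,n>
    c1 = 8*n**3 - 12*n**2 - 4*n + 4
    c2 = 16*n**4 - 32*n**3 + 12*n**2 + 4*n - 4
    c3 = (2*n - 1)*(2*n + 1)*(2*n - 3)
    bracket = (c1*eps/(c2*kappa)
               - c3*(eps**2 + gamma**2)*math.exp(-2*n*gamma*t)/(c2*kappa**2))
    # a non-positive bracket means no superfluid solution (Mott insulator)
    return math.exp(-n*gamma*t) * math.sqrt(bracket) if bracket > 0 else 0.0

n, eps, gamma, kappa = 2, 0.1, 0.01, 0.5
for t in (0, 50, 200):
    print(t, psi1(t, n, eps, gamma, kappa))   # psi decays monotonically here
```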
It follows that, for given decay rates, one may tune the other controlling parameters according to t_c so that the dissipative system maintains coherence for a relatively long time.

Obviously, when the external environment is taken into account, the decoherence of every resonator and of the TLAs results in the decay of the superfluid order parameter. In experiments, dynamical decoupling <cit.> and feedback control <cit.> have been proposed to hamper the decay of the cavity field and the TLAs and thus improve the coherence time. In contrast, as shown in Fig. 3, we seek to determine how a system that starts in a Mott-insulator state can restore its coherence in the presence of dissipation by changing the intercavity hopping rate κ. For small κ, there are not enough excitations hopping between cavities. By raising the hopping rate to a certain value κ_c = (8n^3-12n^2-2n+3)ϵ e^{2nγt} / (8n^3-12n^2-4n+4), the system restores its long-range coherence and a phase transition from the Mott insulator to the superfluid phase appears. According to Eqs. (4.7) and (4.8), the photon hopping rate is also found to effectively decrease because of the environment; thus, long-range coherence can occur only if the photon hopping rate increases faster than it decays. Figs. 3(a) and 3(c) also demonstrate that the influence of the environment accumulates over time: as time increases, a larger hopping rate is required to restore the coherence. Because the effective Hamiltonian treats the system and the environment as a whole, dissipation is an inherent property of the system. Therefore, even at t=0 the system is dissipative, and the hopping rate required for the phase transition to occur is higher than that expected in the ideal case.

In addition, as the photon number n increases, the dissipation of the system is correspondingly enhanced, and a higher hopping energy is thus required to induce a phase transition, as shown in Figs.
3(b) and 3(d).

In what follows, we extend the model to the case of an arbitrary number of TLAs. The dressed-state basis can be written down by the general method of diagonalizing the effective DBH Hamiltonian (1.1) numerically. The phase diagrams of the dissipative DBH model are plotted in Fig. 4. For comparison, we also show the ideal cases. In the dissipative cases, we choose t=0, which implies that the dissipative system is nearly in equilibrium. As shown in Fig. 4, because the interaction with the environment destroys the coherence of the system, the Mott lobes become smaller and the area of the coherent phase decreases. Moreover, the realization of the superfluid state requires a larger hopping rate to delocalize the photons in each cavity. It can also be found that, in the regime of small hopping rate κ, fewer TLAs suffice to drive the system into a localized phase compared with the ideal cases. As the number of TLAs increases, the coherent state may disappear rapidly for the dissipative system.

§ CONCLUSION

Based on the quasibosonic approach, a realistic situation of a DBH model coupled to its environment was considered. The analytical solution of the superfluid order parameter for two TLAs per cavity was derived. The transition from the superfluid to the Mott-insulator phase and the restoration of coherence were discussed. The phase diagram for an arbitrary number of TLAs was also investigated. As the number of TLAs increases, the Mott lobes may disappear and such a system tends to become classical. Most importantly, the atomic number required is far lower than that under ideal circumstances. This work provides reference parameters for simulating strongly correlated many-body systems in actual experiments.

This work was supported by the National Natural Science Foundation of China under Grant No. 11274148. M M. J. Hartmann, J. Opt. 18, 104005 (2016).Hur K. L. Hur, L. Henriet, A. Petrescu, K. Plrkhanov, G. Roux, and M. Schiró, C. R. Physique 17, 808-835 (2016).Noh C.
Noh, and D. G. Angelakis, Rep. Prog. Phys. 80, 016401 (2017).Michael M. Knap, E. Arrigoni, and W. V. D. Linden, Phys. Rev. B 82, 045126 (2010).Srinivasan K. Srinivasan, and O. Painter, Nature(London) 450, 862 (2007).Aoki T. Aoki, B. Dayan, E. Wilcut, W. P. Bowen, A. S. Parkins, T. J. Kippenberg, K. J. Vahala, and H. J. Kimbe, Nature(London) 443, 671 (2006).Birnbaum K. M. Birnbaum, A. Boca, R. Miller, A. D. Boozer, T. E. Northup, and H. J. Kimble, Nature(London) 436, 87 (2005).Mabuchi H. Mabuchi, and A. C. Doherty, Science 298, 1372 (2002).Raimond J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73,565 (2001).Toyoda K. Toyoda, Y. Matsuno, A. Noguchi, S. Haze, and S. Urabe, Phys. Rev. Lett. 111, 160501 (2013).Underwood D. L. Underwood, W. E. Shanks, J. Koch and, A. A. Houch, Phys. Rev. A 86, 023837 (2012).Hartmann M. J. Hartmann, F. G. S. L. Brandão, and M. B. Plenio, Nat. Phys. 2, 849 (2006).Greentree A. D. Greentree, C. Tahan, J. H. Cole, and L. C. L. Hollenberg, Nat. Phys. 2, 856 (2006).Angelakis D. G. Angelakis, M. F. Santos, and S. Bose, Phys. Rev. A 76, 031805(R) (2007).Koch J. Koch, and K. L. Hur, Phys. Rev. A 80, 023811 (2009).Brandao M. J. Hartmann, F. G. S. I. Brandão, and M. B. Plenio, Laser Photon. Rev. 2, 527 (2008).Strack P. Strack, and S. Sachdev, Phys. Rev. Lett. 107, 277202 (2011).Li X. P. Li, and W. V. Liu, Phys. Rev. A 87, 063622 (2013).Jin J. Jin, D. Rossini, and R. Fazio, Phys. Rev. Lett. 110, 163605 (2013).Umucal1lar R. O. Umucal1lar, and Z. Carusotto, Phys. Rev. A 84, 043804 (2011).Carusotto R. O. Umucal1lar, and Z. Carusotto, Phys. Rev. Lett. 108, 206809 (2012).Andrew A. L. C. Hayward, and A. M. Martin, Phys. Rev. A 93, 053614 (2016).Bujnowski B. Bujnowski, J. K. Corso, A. L. C. Hayward, J. H. Cole, and A. M. Martin, Phys. Rev. A 90, 043801 (2014).Guo L. J. Guo, S. Greschner, S. Y. Zhu, and W. Z. Zhang, arXiv: 1611.06404.Minar J. Minář, S. Günes, Söyler, and Z. Lesanovsky, New J. Phys. 18, 053035 (2016).Maggitti A. Maggitti, M. 
Radonjié, and B. M. Jelenkovié, Phys. Rev. A 93, 013835 (2016).Dicke R. H. Dicke, Phys. Rev. 93, 99 (1954).Badshah F. Badshah, S. Qamar, and M. Paternostro, Phys. Rev. A 90, 033813 (2014).Chitra R. Chitra and O. Zilberberg, Phys. Rev. A 92, 023815 (2015).Mlynek J. A. Mlynek, A. A. Abdumalikov, C. Eichler, and A. Wallraff, Nature Commun. 5, 5186 (2014).Baumann K. Baumann, C. Guerlin, F. Brennecke, and T. Esslinger, Nature(London) 464, 1301 (2010).Hepp K. Hepp and E. H. Lieb, Phys. Rev. A 8, 2517 (1973).Klinder J. Klinder, H. Keßler, M. R. Bakhtiari, M. Thorwart, and A. Hemmerich, Phys. Rev. Lett. 115, 230403 (2015).Kebler J. Klinder, H. Keßler, M. R. Bakhtiari, M. Thorwart, and A. Hemmerich, PNAS 112, 11 (2015).Lei S. C. Lei and R. K. Lee, Phys. Rev. A 77, 033827 (2008).Rossini D. Rossini, and R. Fazio, phys. Rev. Lett. 99, 186401 (2007).Na N. Na, S. Utsunomiya, L. Tian, Y. Yamamoto, Phys. Rev. A 77, 031803(R) (2008).Lu Y. C. Lu, and C. Wang, Quantum Inf. Process 15, 4347-4359 (2016).Tan L. Tan and L. Hai, J. Phy. B 45, 035504 (2012).Liu K. L. Liu, L. Tan, C. H. Lv, and W. M. Liu, Phys. Rev. A 83, 063840 (2011).Buzek V. Bužek, M. Orszag, and M. Roško, phys. Rev. Lett. 94, 163601 (2005).Nissen F. Nissen, S. Schmidt, M. Biondi, G. Blatter, H. E. Türeci, and J. Keeling, Phys. Rev. Lett. 108, 233603 (2012).JM. J. Hartmann, Phys. Rev. Lett. 104, 113601 (2010).Gerace I. Carusotto, D. Gerace, H. E. Tureci, S. D. Liberato, C. Ciuti, and A. Imamoǧly, Phys. Rev. Lett. 103, 033601 (2009).Aron C. Aron, M. Kulkarni, H. E. Türeci, Phys. Rev. X 6, 011032 (2016).Oosten D. V. Oosten, P. V. D. Straten, and H. T. C. Stoof, Phys. Rev. A 63, 053601 (2001).Bylander J. Bylander, S. Gustausson, F. Yan, F. Yoshihara, K. Harrabi, G. Fitch, D. G. Cory, Y. Nakamura, T. S. Tsai, and W. D. Oliver, Nat. Phys. 7, 565 (2011).Xue S. B. Xue, R. B. Wu, W. M. Zhang, J. Zhang, C. W. Li, and T. J. Tarn, Phys. Rev. A 86, 052304 (2012).§ APPENDIXThe normalized constant of superfluid order parameter ψ_2. 
Ñ' = 4κ^2 ψ^2 AA^† + 4κ^2 ψ^2 BB^†,

A = [2√(n(n-1)(n-2)) + (√(n-1)/8)(ω+iγ/β - R_n^†)(ω-iγ/β - R_{n-1})] / {[2ϵ - 2iγ - β(R_n - R_{n-1})] √([2n-1 + (1/8)(ω+iγ/β - R_n^†)^2][2n-3 + (1/8)(ω-iγ/β - R_{n-1})^2])},

B = [2√(n(n-1)(n+1)) + (√n/8)(ω+iγ/β - R_n^†)(ω-iγ/β - R_{n+1})] / {[-2ϵ + 2iγ - β(R_n - R_{n+1})] √([2n-1 + (1/8)(ω+iγ/β - R_n^†)^2][2n+1 + (1/8)(ω-iγ/β - R_{n+1})^2])},

where A^† and B^† are the Hermitian conjugates of A and B.
http://arxiv.org/abs/1702.08131v1
{ "authors": [ "Ren-Cun Wu", "Lei Tan", "Wen-Xuan Zhang", "Wu-Ming Liu" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170227032114", "title": "Quantum phase transitions of light in a dissipative Dicke-Bose-Hubbard model" }
Department of Physics, Yale University, New Haven, CT 06511, USA
Institute for Complex Quantum Systems and IQST, Ulm University, 89069 Ulm, Germany
Department of Physics, Yale University, New Haven, CT 06511, USA
Institute for Complex Quantum Systems and IQST, Ulm University, 89069 Ulm, Germany
Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan 48824, USA

We study the breaking of the discrete time-translation symmetry in small periodically driven quantum systems. Such systems are intermediate between large closed systems and small dissipative systems, which both display the symmetry breaking, but have qualitatively different dynamics. As a nontrivial example we consider period tripling in a quantum nonlinear oscillator. We show that, for moderately strong driving, the period tripling is robust on an exponentially long time scale, which is further extended by even weak decoherence.

Multiple-period Floquet states and time-translation symmetry breaking in quantum oscillators
M. Dykman
December 30, 2023
=============================================================================================

The breaking of translation symmetry in time, first proposed by Wilczek <cit.>, has been attracting much attention recently. Such symmetry breaking can occur only away from thermal equilibrium <cit.>. It is of particular interest for periodically driven systems, which have a discrete time-translation symmetry imposed by the driving. Here, the time symmetry breaking is manifested in the onset of oscillations with a period that is a multiple of the driving period t_F. Oscillations with period 2t_F due to simultaneously initialized protected boundary states were studied in photonic quantum walks <cit.>; period-two oscillations can also be expected from the coexistence of Floquet Majorana fermions with quasienergies 0 and ħπ/t_F in a cold-atom system <cit.>.
The onset of period-two phases was predicted and analyzed <cit.> in Floquet many-body localized systems, and the first observations of oscillations at multiples of the driving period in disordered systems were reported <cit.>.

In systems coupled to a thermal bath, on the other hand, the effect of period doubling has been well known. A textbook example is a classical oscillator modulated close to twice its eigenfrequency and displaying vibrations with period 2t_F <cit.>. The oscillator has two states of such vibrations; they have opposite phases, reminiscent of a ferromagnet with two orientations of the magnetization. Several aspects of the dynamics of a parametric oscillator in the quantum regime have been studied theoretically, cf. <cit.>, and in experiments, cf. <cit.>. For a sufficiently strong driving field, a quantum dissipative oscillator, like a classical oscillator, mostly performs vibrations with period 2t_F. The interplay of quantum fluctuations and dissipation leads to transitions between the period-two vibrational states, but the rate of these transitions is exponentially small <cit.>.

The goal of this paper is to study time symmetry breaking in isolated or almost isolated driven quantum systems with a few degrees of freedom. They are intermediate between large closed systems and dissipative systems, where the nature of the symmetry breaking is very different. To this end, we analyze a driven nonlinear quantum oscillator. Time symmetry breaking in this system should not be limited to period doubling. As an illustration of a behavior qualitatively different from period doubling, we consider period tripling and find the conditions where it occurs. We also address the role of decoherence and the connection between the time symmetry breaking in the coherent and incoherent regimes.

Floquet (quasienergy) states ψ_ε(t) are eigenstates of the operator T_{t_F} of time translation by t_F, T_{t_F}ψ_ε(t) ≡ ψ_ε(t+t_F) = exp(-iε t_F/ħ)ψ_ε(t).
For a broken-symmetry state ψ_{K,ε_K} with K>1, time translation by t_F is not described by the factor exp(-iε_K t_F/ħ). Instead, ψ_{K,ε_K}(t+Kt_F) = exp(-iKε_K t_F/ħ)ψ_{K,ε_K}(t). We call ψ_{K,ε_K} a period-K Floquet state. It is an eigenstate of T_{Kt_F} = (T_{t_F})^K, but not of T_{t_F}. Multiple-period states naturally occur if the number of states of the system N → ∞, as in the case of an oscillator. For such systems the quasienergy spectrum is generally dense, cf. <cit.>. Then we can find states ψ_ε and ψ_{ε'} with the difference of the quasienergies |ε - ε'| infinitesimally close to ħω_F/K with integer K>1 (or to ħω_F k/K with k<K); ω_F = 2π/t_F is the driving frequency. A linear combination αψ_ε(t) + α'ψ_{ε'}(t) is a period-K state. The expectation value of dynamical variables in such a state oscillates with period Kt_F. However, the oscillation amplitude will be very small as, generally, the functions ψ_ε and ψ_{ε'} will be of a very different form.

The situation is different for an oscillator driven close to an overtone of its eigenfrequency ω_0, i.e., for ω_F ≈ Kω_0. Such an oscillator has several sets of quasienergy states where the quasienergy differences within a set are very close to ħω_F/K in a broad parameter range, and are exactly equal to ħω_F/K for some interrelations between the parameters, whereas off-diagonal matrix elements of the dynamical variables are large, see Fig. <ref>. Such states result from tunnel splitting of the states localized at the minima of the oscillator Hamiltonian in the rotating frame shown in Fig. <ref>(c). These localized states correspond to period-K vibrations in the laboratory frame, see below. In a way, for a parametric oscillator (K=2) the occurrence of a period-2 state could be inferred from the results of <cit.>. However, this state was not identified there and the time symmetry breaking was not addressed.
In different terms, sets of states separated by ≈ ħω_F/K were found numerically for K ≫ 1 for a special model of an oscillator in the interesting paper <cit.>; the considered states did not break time symmetry.

The period tripling (K=3) considered here for a driven oscillator is particularly interesting. It differs from the continuous Landau-type symmetry-breaking transition that occurs for period doubling, cf. <cit.>. In the presence of dissipation, the fully symmetric (zero-amplitude) state does not lose stability. Also, in the quantum regime, there emerges a geometric phase between the broken-symmetry states localized at the minima of the effective Hamiltonian function in phase space, cf. Fig. <ref>(c). Thus, period tripling in an oscillator allows one to reveal, using a simple and physically relevant model, the generic conditions for the onset of strongly overlapping multiple-period states and to relate them to the underlying nontrivial symmetry. It also provides a platform for studying quantum tunneling between localized states in phase space. This problem is considerably different from the classical problem of tunneling in a symmetric double-well potential <cit.> (see also <cit.>).

We study a most commonly used model of a nonlinear oscillator, the Duffing model, which describes parametric resonance and, as we will see, can describe period tripling; this model refers to a broad range of systems, including trapped relativistic electrons, cold atomic clouds, Josephson junction based systems, and nanomechanical systems <cit.>. Its Hamiltonian reads

H = H_0 + H_F, H_0 = (1/2)p^2 + (1/2)ω_0^2 q^2 + (1/4)γ q^4,

where q and p are the oscillator coordinate and momentum. The term H_F ≡ H_F(t) describes the driving. In the analysis of parametric resonance, one chooses H_F = -(1/2)q^2 F cos ω_F t with ω_F ≈ 2ω_0. Here we consider H_F = -(1/3)q^3 F cos ω_F t with ω_F ≈ 3ω_0; the results also describe a drive H_F' = -qF' cos ω_F t with F → 3γ F'/8ω_0^2.
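The quasienergy spectrum defined above can be obtained numerically from the one-period time-evolution operator T_{t_F} for this Hamiltonian: its eigenvalues are exp(-iε t_F/ħ). A minimal sketch in a truncated Fock basis (the parameter values, the truncation, and the time step are illustrative assumptions, not taken from the paper; ħ = 1):

```python
import numpy as np
from scipy.linalg import expm

# Truncated Fock-space matrices for the oscillator; hbar = 1.
N = 25
a = np.diag(np.sqrt(np.arange(1, N)), 1)            # lowering operator
q = (a + a.conj().T) / np.sqrt(2)
p = (a - a.conj().T) / (1j * np.sqrt(2))

# Illustrative parameters: omega_0 = 1, weak quartic nonlinearity,
# weak cubic drive close to three times the eigenfrequency.
w0, gam, F, wF = 1.0, 0.02, 0.05, 3.02
tF = 2 * np.pi / wF
q3 = np.linalg.matrix_power(q, 3)
q4 = np.linalg.matrix_power(q, 4)
H0 = 0.5 * p @ p + 0.5 * w0**2 * q @ q + 0.25 * gam * q4

# One-period propagator T_tF, assembled from short time steps.
nsteps = 400
dt = tF / nsteps
U = np.eye(N, dtype=complex)
for j in range(nsteps):
    t = (j + 0.5) * dt
    H = H0 - (F / 3.0) * q3 * np.cos(wF * t)
    U = expm(-1j * H * dt) @ U

# Floquet states are the eigenvectors of U; the eigenphases give the
# quasienergies, defined modulo omega_F.
lam_U, V = np.linalg.eig(U)
eps = np.mod(-np.angle(lam_U) / tF, wF)
```

Since T_{t_F} is unitary, its eigenvalues lie on the unit circle, and the quasienergies are defined only modulo ħω_F, in agreement with the discussion above.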
If the driving is not too strong, so that for the states of interest the expectation values of H_F and the nonlinear term ∝ q^4 are small compared to the harmonic part of H_0, the resonant oscillator dynamics can be described in the rotating wave approximation (RWA) <cit.>. For an oscillator driven close to the Kth overtone of its eigenfrequency, one makes a canonical transformation U(t) = exp(-i a^† a ω_F t/K), where a and a^† are the ladder operators. The RWA Hamiltonian H_RWA is obtained by time-averaging the transformed Hamiltonian H_K(t) = U^†(t)H(t)U(t) - iħ U^†(t)U̇(t),

H_RWA = (Kt_F)^-1 ∫_0^{Kt_F} dt H_K(t).

Clearly, H_RWA is independent of time. We now establish the relation between the eigenvalues of H_RWA and the quasienergies. If ϕ_E(t) is an eigenfunction of H_RWA, i.e., H_RWA ϕ_E = Eϕ_E, then the corresponding wave function in the lab frame is ψ(t) = U(t)ϕ_E(t), and

ψ(t+t_F) = e^-iEt_F/ħ U(t+t_F)ϕ_E(t) = e^-iEt_F/ħ N_K ψ(t).

We call E the RWA energy. In Eq. (<ref>), N_K = exp(-2π i a^† a/K), [N_K, H_RWA] = 0. The above commutation relation follows from the relation H_K(t+t_F) = N_K^† H_K(t)N_K and Eq. (<ref>). Using the explicit form of H_RWA, the commutation relation (<ref>) was found in Ref. <cit.> for the same operator as N_K.

Operators N_K^k with k = 0, 1, ..., K-1 form a cyclic group. Since eigenfunctions of H_RWA are also eigenfunctions of N_K, one can label them by a superscript k,

N_K ϕ^(k) = exp(-2π i k/K)ϕ^(k), 0 ≤ k ≤ K-1.

Note that H_RWA has eigenfunctions with the same k, but different E. By comparing Eqs. (<ref>) and (<ref>) one finds that a wave function ϕ^(k) with RWA energy E^(k) corresponds to a usual Floquet state with quasienergy ε^(k) = (E^(k) + ħω_F k/K) mod (ħω_F). As we will see, for sufficiently strong drive the eigenstates of H_RWA form multiplets with close eigenvalues E^(k) but different k.
The quasienergies of different states in the multiplets differ by ≈ ħω_F/K.

Equation (<ref>) allows one to write the functions ϕ^(k) in terms of the Fock states of the oscillator |n⟩ defined by the condition a^† a|n⟩ = n|n⟩. Only one out of each K Fock states contributes to ϕ^(k),

ϕ^(k) = ∑_n C_n^(k)|Kn+k⟩.

This relation significantly simplifies numerical diagonalization of H_RWA, as the coefficients C_n^(k) with different k are uncoupled. More importantly, it shows that the RWA energy levels of states with different k can cross when the parameters of the system vary. This crossing is seen in Fig. <ref>. In contrast, the RWA levels of states with the same k avoid crossing.

The motion in the rotating frame is conveniently described by the coordinate Q and momentum P, which are related to q and p as U^†(t)[q + i(K/ω_F)p]U(t) = C(Q+iP)e^-iω_F t/K. The parameter C is the scaling factor that makes Q and P dimensionless, [Q,P] = iλ, λ = ħK/ω_F C^2. The dimensionless Planck constant λ and the parameter C in the case of a parametric oscillator, K=2, are given in <cit.>. For the case of period tripling, C = (8ω_F δω/9γ)^1/2, where δω = ω_F/3 - ω_0 is the frequency detuning from the resonance, |δω| ≪ ω_F. In this case H_RWA = [8ω_F^2(δω)^2/27γ] ĝ(Q,-iλ∂_Q) with

g(Q,P) = (1/4)(Q^2+P^2-1)^2 - (1/3)f(Q^3 - 3PQP),

where f = F/(8ω_F γδω)^1/2 is the scaled amplitude of the driving. Of interest is the region γδω > 0, and we choose γ > 0 and δω > 0. The function g(Q,P) is the dimensionless Hamiltonian function in the rotating frame. It is plotted in Fig. <ref>. It has a three-fold rotational symmetry in the (Q,P) plane. This symmetry follows from Eqs. (<ref>) and (<ref>), since N_K is an operator of rotation by the angle 2π/K in the phase plane; the K-fold symmetry of H_RWA was also seen in <cit.>.
For moderately strong fields, g(Q,P) has three well-separated minima positioned at the vertices of an equilateral triangle (Q_m,P_m); we count m=0,1,2 counterclockwise and set m=0 for the vertex with P_0=0. The eigenstates of the operator ĝ ≡ g(Q,-iλ∂_Q) with the lowest RWA energies are localized near (Q_m,P_m). In the absence of tunneling, ĝ has three degenerate eigenstates Ψ_m. Near their maxima, the functions Ψ_m have the form of squeezed ground states of a harmonic oscillator centered at (Q_m,P_m) [see Supplemental Material for the details of the calculation]. The oscillator in a state Ψ_m has a broken time symmetry. The expectation values of dynamical variables oscillate at frequency ω_F/3. Indeed, from Eq. (<ref>), time translation by t_F transforms Ψ_m → N_3Ψ_m = Ψ_{m-1} ≡ Ψ_{m+2}. To come back to the state Ψ_m, one has to increment time by 3t_F. The relation Ψ_{m+1} = N_3^†Ψ_m gives the phase shift between the functions Ψ_{m+1} and Ψ_m. Since N_3 is a rotation operator, this phase shift is geometric in nature [34].

Tunneling between the minima lifts the degeneracy of the ground state of the operator ĝ. In contrast to the problem of tunneling in a symmetric double-well potential <cit.>, g(Q,P) is not even in Q, it has three extrema, and two of them lie at nonzero momenta P. To find the tunnel splitting, we write the wave functions in the coordinate representation, Ψ_m ≡ Ψ_m(Q). The three normalized eigenstates ϕ^(k) of ĝ with the smallest eigenvalues g^(k) (k=0,1,2) have the form

ϕ^(k)(Q) = 1/√(3(1+δ^(k))) ∑_{m=0,1,2} Ψ_m(Q)e^-2mkπ i/3,

where δ^(k) = 2 Re[⟨Ψ_0|Ψ_1⟩ exp(-2π i k/3)] ≪ 1. We choose Ψ_0(Q) to be real and normalized. Since Ψ_{m+1} = N_3^†Ψ_m, we have Ψ_2(Q) = Ψ_1^*(Q). Due to the symmetry, the functions ϕ^(k) can be shown to be orthogonal.

In the spirit of <cit.>, we calculate g^(k) using the relation

∫_∞^{Q_*} dQ[ϕ^(k)(Q)(ĝ - g_0)Ψ_0(Q) - Ψ_0(Q)(ĝ - g^(k))ϕ^(k)(Q)] = 0,

with g_0 being the eigenvalue of ĝ in the state Ψ_0, g_0 ≈ min g(Q,P) [34].
The difference g^(k) - g_0 is exponentially small for a small dimensionless Planck constant λ.

To choose the upper limit Q_* of the integral (<ref>), we note that the functions Ψ_m(Q) fall off exponentially away from the respective Q_m, with Ψ_0 and Ψ_{1,2} falling off in the opposite directions in the interval (Q_1,Q_0). We choose Q_* within this interval and in such a way that Ψ_{0,1,2}(Q_*) are all of the same order of magnitude and thus can be kept in Eq. (<ref>) for ϕ^(k)(Q). The result of integration (<ref>) should be independent of Q_*.

The WKB wave functions Ψ_{0,1}(Q) in the classically forbidden region between Q_1 and Q_0 have the form

Ψ_m(Q) = C_m(i∂_P g)^-1/2 e^iS_m(Q)/λ (m=0,1), ∂_Q S_m = (-1)^m P̅(Q), g(Q,P̅) = g_0,

where S_{0,1}(Q) is the classical action and the constants C_{0,1} are found from the matching to the corresponding intrawell wave functions. It is critical for understanding the tunneling that, because the effective Hamiltonian function g(Q,P) is quartic in the momentum P, P̅(Q) has a branch point Q_B in the interval (Q_1,Q_0). For Q_1 < Q < Q_B, P̅(Q) has both imaginary and real parts. So does the action S_m(Q). This leads to oscillations of the wave functions in the classically forbidden region. In S_m(Q) one should keep the root with the smallest |Im P̅|. To describe Ψ_0, Eq. (<ref>) has to be modified by allowing for a complex conjugate term [34]. Calculating the integrals in Eq. (<ref>) by parts, we find

g^(k) - g_0 = C_tun e^-S_tun/λ cos(λ^-1 Φ_tun - 2π k/3),

where Φ_tun + iS_tun = ∫_{Q_0}^{Q_1} dQ' P̅(Q') + P_1Q_1/2 + λG, with P̅ given by the equation g(Q,P̅) = min g(Q,P), G being independent of λ and having a contribution from the geometric phase, and C_tun ∝ λ^1/2 [34].

Equation (<ref>) shows that the splitting of the eigenvalues of H_RWA oscillates as the system parameters vary. Two eigenvalues cross each time λ^-1 Φ_tun = (n + n'/3)π with integer n, n'. Such crossings are seen in Fig. <ref>.
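The near degeneracy of the lowest triplet and the smallness of its splitting can be seen directly in a sector-by-sector diagonalization of ĝ. A sketch (λ, f, and the basis size are illustrative assumptions; the formulas for Q_0, g_min, and ω_min are derived in the Supplemental Material):

```python
import numpy as np

# Lowest RWA level in each symmetry sector k = 0, 1, 2 of g-hat.
N, lam, f = 120, 0.05, 1.5
a = np.diag(np.sqrt(np.arange(1, N)), 1)
Q = np.sqrt(lam / 2) * (a + a.conj().T)
P = -1j * np.sqrt(lam / 2) * (a - a.conj().T)
I = np.eye(N)
g = 0.25 * np.linalg.matrix_power(Q @ Q + P @ P - I, 2) \
    - (f / 3.0) * (np.linalg.matrix_power(Q, 3) - 3 * P @ Q @ P)

gk = np.array([np.linalg.eigvalsh(
        g[np.ix_(np.arange(k, N, 3), np.arange(k, N, 3))])[0]
    for k in range(3)])

# Intrawell estimate g_0 ~ min g + lam*omega_min/2 for comparison.
Q0 = 0.5 * (f + np.sqrt(f**2 + 4))
g_min = -f * Q0 * (Q0**2 + 3) / 12.0
w_min = np.sqrt(3 * f * Q0 * (Q0**2 + 1))
splitting = gk.max() - gk.min()
```

For these parameters the triplet splitting is far smaller than the intrawell level spacing ~λω_min: the three states are degenerate on the scale of the level diagram, and only the exponentially small tunneling term lifts the degeneracy.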
Where the eigenvalues do not cross, they stay exponentially close to each other.

If the oscillator is in a superposition of two states ϕ^(k) and ϕ^(k'), the expectation values of its variables have period 3t_F provided the observation time is smaller than the exponentially long time |Ω_{kk'}|^-1, where the frequency Ω_{kk'} = λ^-1[g^(k) - g^(k')]δω is determined by the tunnel splitting. The Fourier spectra of the expectation values generally have components at frequencies ω_F/3 ± Ω_{kk'}; in particular, the coordinate and momentum have just one of these components. This behavior is characteristic also of the oscillator in the intrawell states Ψ_m, which are superpositions of ϕ^(0,1,2). The oscillator fluorescence spectrum will display peaks at ω_F/3 ± Ω_{kk'} as well.

It is instructive to compare these results with the period doubling associated with the topologically protected Floquet boundary states in extended systems <cit.>. To some extent, such states are analogous to the symmetry-protected states ϕ^(k). If tunneling between the Floquet boundary states can be disregarded, similar to disregarding oscillator tunneling, their combination becomes a multiple-period state. However, their overlap is exponentially small, in contrast to the functions ϕ^(k).

The intrawell states Ψ_m are particularly important in the presence of dissipation. Even if the dissipation rate Γ is extremely small but exceeds the exponentially small frequencies Ω_{kk'}, instead of coherent tunneling between the wells of g(Q,P) the oscillator performs incoherent interwell hopping with a typical rate W < |Ω_{kk'}| [34]. This hopping corresponds to flips of the vibration phase. On times small compared to W^-1 the oscillator stays in the multiple-period state inside a well.
This is the exact analog of the classical behavior of a dissipative oscillator, including a parametric oscillator, where the multiple-period state is seen on times short compared to the reciprocal rate of interstate switching.

A promising type of oscillator for observing period tripling is provided by modes of microwave cavities coupled to Josephson junctions. Recently, systems have been studied in which inelastic Cooper pair tunneling leads to an effective driving of a cavity mode that depends nonlinearly on the mode coordinate and has a tunable frequency 2eV/ħ determined by the voltage V across the Josephson junction <cit.>. There are also other possibilities to resonantly excite multiple-period modes in microwave cavities [P. Delsing, D. Esteve, and F. Portier, private communications].

In conclusion, we studied a quantum oscillator driven close to an overtone of its eigenfrequency and showed that a small quantum system can display coherent multiple-period dynamics. We explicitly described this dynamics for the previously unexplored nontrivial case of period tripling and established the relation to protected boundary Floquet states in extended systems and to multiple-period states in dissipative systems.

We are grateful to G. Refael, M. Rudner, and S. Sondhi for the discussions and correspondence. YZ and SMG were supported by the U.S. Army Research Office (W911NF1410011) and by the National Science Foundation (DMR-1609326); JG and JA were supported in part by the German Science Foundation through SFB/TRR 21 and the Center for Integrated Quantum Science and Technology (IQST); MID was supported in part by the National Science Foundation (Grant No. DMR-1514591).

[1] F.
Wilczek, Phys. Rev. Lett. 109, 160401 (2012).
[2] H. Watanabe and M. Oshikawa, Phys. Rev. Lett. 114, 251603 (2015).
[3] T. Kitagawa, M. A. Broome, A. Fedrizzi, M. S. Rudner, E. Berg, I. Kassal, A. Aspuru-Guzik, E. Demler, and A. G. White, Nat. Commun. 3, 882 (2012).
[4] L. Jiang, T. Kitagawa, J. Alicea, A. R. Akhmerov, D. Pekker, G. Refael, J. I. Cirac, E. Demler, M. D. Lukin, and P. Zoller, Phys. Rev. Lett. 106, 220402 (2011).
[5] V. Khemani, A. Lazarides, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. 116, 250401 (2016).
[6] C. W. von Keyserlingk and S. L. Sondhi, Phys. Rev. B 93, 245146 (2016).
[7] D. V. Else, B. Bauer, and C. Nayak, Phys. Rev. Lett. 117, 090402 (2016).
[8] N. Y. Yao, A. C. Potter, I.-D. Potirniche, and A. Vishwanath, Phys. Rev.
Lett. 118, 030401 (2017).
[9] V. Khemani, C. W. von Keyserlingk, and S. L. Sondhi, arXiv:1612.08758.
[10] E. Bairey, G. Refael, and N. H. Lindner, arXiv:1702.06208.
[11] J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, and C. Monroe, arXiv:1609.08684.
[12] S. Choi, J. Choi, R. Landig, G. Kucsko, H. Zhou, J. Isoya, F. Jelezko, S. Onoda, H. Sumiya, V. Khemani, C. von Keyserlingk, N. Y. Yao, E. Demler, and M. D. Lukin, arXiv:1610.08057.
[13] L. D. Landau and E. M. Lifshitz, Mechanics, 3rd ed. (Elsevier, Amsterdam, 2004).
[14] M. Wolinsky and H. J. Carmichael, Phys. Rev. Lett. 60, 1836 (1988).
[15] P. D. Drummond and P.
Kinsler, Phys. Rev. A 40, 4813 (1989).
[16] B. Wielinga and G. J. Milburn, Phys. Rev. A 48, 2494 (1993).
[17] G. Y. Kryuchkyan and K. V. Kheruntsyan, Opt. Commun. 127, 230 (1996).
[18] M. Marthaler and M. I. Dykman, Phys. Rev. A 73, 042108 (2006).
[19] W. Wustmann and V. Shumeiko, Phys. Rev. B 87, 184501 (2013).
[20] H. Goto, Sci. Rep. 6, 21686 (2016).
[21] S. Puri and A. Blais, arXiv:1605.09408.
[22] C. D. Nabors, S. T. Yang, T. Day, and R. L. Byer, J. Opt. Soc. Am. B 7, 815 (1990).
[23] C. M. Wilson, T. Duty, M. Sandberg, F. Persson, V. Shumeiko, and P. Delsing, Phys. Rev. Lett. 105, 233907 (2010).
[24] Z. Lin, K. Inomata, K. Koshino, W. Oliver, Y. Nakamura, J. Tsai, and T. Yamamoto, Nat. Commun. 5, 4480 (2014).
[25] D. W.
Hone, R. Ketzmerick, and W. Kohn, Phys. Rev. A 56, 4045 (1997).
[26] M. Marthaler and M. I. Dykman, Phys. Rev. A 76, 010102(R) (2007).
[27] L. Guo, M. Marthaler, and G. Schön, Phys. Rev. Lett. 111, 205303 (2013).
[28] Z. R. Lin, Y. Nakamura, and M. I. Dykman, Phys. Rev. E 92, 022105 (2015).
[29] L. D. Landau and E. M. Lifshitz, Quantum Mechanics: Non-relativistic Theory, 3rd ed. (Butterworth-Heinemann, Oxford, 1997).
[30] A. Garg, Am. J. Phys. 68, 430 (2000).
[31] J. Tan and G. Gabrielse, Phys. Rev. Lett. 67, 3090 (1991).
[32] M. I. Dykman, ed., Fluctuating Nonlinear Oscillators: from Nanomechanics to Quantum Superconducting Circuits (OUP, Oxford, 2012).
[33] D. F. Walls and G. J. Milburn, Quantum Optics (Springer, Berlin, 2008).
[34] See Supplemental Material for the details of the calculation.
[35] M. Hofheinz, F. Portier, Q. Baudouin, P. Joyez, D. Vion, P. Bertet, P. Roche, and D.
Esteve, Phys. Rev. Lett. 106, 217005 (2011).
[36] A. D. Armour, M. P. Blencowe, E. Brahimi, and A. J. Rimberg, Phys. Rev. Lett. 111, 247001 (2013).
[37] V. Gramich, B. Kubala, S. Rohrer, and J. Ankerhold, Phys. Rev. Lett. 111, 247002 (2013).
[38] P. Delsing, D. Esteve, and F. Portier, private communications.

Supplemental Material

§ THE INTRAWELL WAVE FUNCTIONS OF THE RWA HAMILTONIAN

We consider the dynamics of the oscillator driven close to three times its eigenfrequency in the rotating wave approximation (RWA). The scaled RWA Hamiltonian function g(Q,P), which is given by Eq. (10) of the main text and is plotted there in Fig. 1, has three symmetrically located minima at the points (Q_m,P_m) with m=0,1,2,

Q_0 = (1/2)[f + (f^2+4)^1/2], Q_1 = Q_2 = -Q_0/2, P_0 = 0, P_1 = -P_2 = √3 Q_0/2.

The minimal value g_min of g(Q,P) and the dimensionless frequency of classical vibrations about a minimum, ω_min = (det[∂^2_{x_i x_j} g(x_1,x_2)])^1/2 (the derivatives are calculated at a minimum of g), are

g_min = -(1/12)fQ_0(Q_0^2+3), ω_min = [3fQ_0(Q_0^2+1)]^1/2.

The frequency ω_min is the same for all minima. So is also the lowest eigenvalue g_0 of the Hamiltonian ĝ(Q,-iλ∂_Q) in the neglect of tunneling. To the lowest order in the dimensionless Planck constant λ it corresponds to the lowest eigenvalue of a harmonic oscillator, g_0 = g_min + (1/2)λω_min.
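These closed-form expressions can be checked against finite differences of the classical function g(Q,P) = (1/4)(Q^2+P^2-1)^2 - (f/3)(Q^3 - 3QP^2); a short sketch with an illustrative value of f:

```python
import numpy as np

# Classical scaled Hamiltonian function for period tripling (Q and P commute
# in the classical limit).
f = 0.7
g = lambda Q, P: 0.25 * (Q**2 + P**2 - 1)**2 - (f / 3.0) * (Q**3 - 3 * Q * P**2)

Q0 = 0.5 * (f + np.sqrt(f**2 + 4))
g_min = -f * Q0 * (Q0**2 + 3) / 12.0
w_min = np.sqrt(3 * f * Q0 * (Q0**2 + 1))

# Finite-difference gradient and Hessian at (Q0, 0); the cross derivative
# g_QP vanishes there by symmetry, so det(Hessian) = g_QQ * g_PP.
h, h2 = 1e-6, 1e-4
gQ = (g(Q0 + h, 0) - g(Q0 - h, 0)) / (2 * h)
gP = (g(Q0, h) - g(Q0, -h)) / (2 * h)
gQQ = (g(Q0 + h2, 0) - 2 * g(Q0, 0) + g(Q0 - h2, 0)) / h2**2
gPP = (g(Q0, h2) - 2 * g(Q0, 0) + g(Q0, -h2)) / h2**2
```

The gradient vanishes at (Q_0, 0), the value of g there reproduces g_min, and the square root of the Hessian determinant reproduces ω_min.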
The calculation of the tunnel splitting is done below by first finding the intrawell wave functions Ψ_m(Q) near their maxima inside the wells, then finding the geometric phase shift between different Ψ_m, and then explicitly writing down the WKB tails of the functions Ψ_m in the classically forbidden regions, which are given by Eq. (13) of the main text. Since Ψ_2(Q) = Ψ_1^*(Q), we only need to find Ψ_0(Q) and Ψ_1(Q).

§.§ The wave function Ψ_0(Q)

Near the minimum (Q_0,P_0) we have g(Q,P) ≈ g_min + (1/2)(Q_0^2+1)(Q-Q_0)^2 + (3/2)fQ_0 P^2. The wave function Ψ_0(Q) is Gaussian for |Q-Q_0| ≪ |Q_1-Q_0| and can be chosen to be real,

Ψ_0(Q) = (√π l_q)^-1/2 exp[-(Q-Q_0)^2/2l_q^2],

with l_q = [λω_min/(Q_0^2+1)]^1/2 being the localization length.

We are interested in the tail of Ψ_0 for Q between the minima of g(Q,P), i.e., for Q_1 < Q < Q_0-l_q. The WKB form of Ψ_0(Q) is given by Eq. (13) of the main text, which we here write explicitly,

Ψ_0(Q) = C_0(i∂_P g)^-1/2 exp[iS_0(Q)/λ], S_0(Q) = ∫_{Q_0-l_q}^{Q} dQ' P̅(Q'),

where P̅(Q) is given by the equation g(Q,P̅) = g_0 and ∂_P g is calculated for P = P̅(Q). For the branch of P̅ that we are interested in,

P̅(Q)^2 = A(Q) + B^1/2(Q), A(Q) = 1 - Q^2 - 2fQ, B(Q) = A^2(Q) - 4[g(Q,0) - g_0],

with Im P̅ < 0 for Q < Q_0; we keep the correction ∝ λ to secure matching to Eq. (<ref>).

For Q close to Q_0 and Q < Q_0-l_q, we have A(Q) < 0, B(Q) > 0, and A(Q) + B^1/2(Q) < 0. Therefore P̅(Q) is purely imaginary, and the same is true for the function ∂_P g = P̅(Q)B^1/2(Q), with i∂_P g > 0. Accordingly, Ψ_0(Q) exponentially decays with increasing Q_0-Q. The prefactor C_0 is determined by matching Eqs. (<ref>) and (<ref>) for Q close to Q_0 but Q_0-Q ≫ l_q, C_0 = (ω_min/2√(πe))^1/2. As Q decreases, first B(Q) becomes equal to zero at the point Q_B. To the leading order in λ ≪ 1, Q_B ≈ Q_0 - 3/(4f). For still smaller Q, A(Q) changes sign to positive. This happens for Q_B > Q > Q_1 ≡ -Q_0/2. Importantly, A(Q_1) = P_1^2 > 0, B(Q_1) = 2λω_min.
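The boundary values A(Q_1) = P_1^2 and B(Q_1) = 2λω_min, as well as the existence of the branch point Q_B ≈ Q_0 - 3/(4f), can be verified numerically from these definitions. A sketch with illustrative f and λ (the bracket passed to the root finder is chosen by inspection for these values):

```python
import numpy as np
from scipy.optimize import brentq

# Under-barrier momentum data: A(Q) = 1 - Q^2 - 2 f Q,
# B(Q) = A(Q)^2 - 4 [g(Q, 0) - g_0], with g_0 = g_min + lam*w_min/2.
f, lam = 1.0, 0.05
Q0 = 0.5 * (f + np.sqrt(f**2 + 4))
Q1, P1 = -Q0 / 2, np.sqrt(3) * Q0 / 2
g_min = -f * Q0 * (Q0**2 + 3) / 12.0
w_min = np.sqrt(3 * f * Q0 * (Q0**2 + 1))
g0 = g_min + 0.5 * lam * w_min

g_ax = lambda Q: 0.25 * (Q**2 - 1)**2 - (f / 3.0) * Q**3     # g(Q, P = 0)
A = lambda Q: 1 - Q**2 - 2 * f * Q
B = lambda Q: A(Q)**2 - 4 * (g_ax(Q) - g0)

# B < 0 in the middle of the barrier and B > 0 near Q0: the sign change
# locates the branch point Q_B.
QB = brentq(B, 0.0, Q0 - 0.1)
```

The root Q_B indeed falls close to Q_0 - 3/(4f), and A(Q_B) < 0 there, as used in the expansion of P̅ near the branch point.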
In the explicit form, the imaginary part of the momentum in the classically forbidden region is

Im P̅(Q) = -[-A(Q) - B^1/2(Q)]^1/2 (Q_B < Q < Q_0),
Im P̅(Q) = -[(A^2+|B|)^1/2 - A]^1/2/√2 (Q < Q_B).

As discussed in the main text, the level splitting crucially depends on the oscillations of the wave function under the barrier. These oscillations start, with decreasing Q, at Q = Q_B. Near Q_B we have B(Q) ≈ ∂_Q B(Q_B)(Q-Q_B), whereas A(Q_B) < 0. Therefore

P̅ ≈ -i|A(Q_B)|^1/2 + (i/2)|∂_Q B(Q_B)/A(Q_B)|^1/2 (Q-Q_B)^1/2

for small Q-Q_B > 0, i.e., Q_B is a branching point of P̅(Q). We have to go around above and below this point in the complex plane to obtain the wave function for Q < Q_B, following the standard procedure <cit.>. As a result, we find for Q < Q_B

Ψ_0(Q) ≈ 2C_0|∂_P g|^-1/2 exp[-Im S_0(Q)/λ] cos Φ_0(Q), Φ_0(Q) = Φ_0'(Q) + Φ_0''(Q).

Here, the phase Φ_0'(Q) comes from the real part of the action,

Φ_0'(Q) = λ^-1 ∫_{Q_B}^{Q} dQ' Re P̅(Q'), Re P̅(Q) = -[(A^2+|B|)^1/2 + A]^1/2/√2,

whereas Φ_0''(Q) comes from the prefactor, with account taken of going around Q_B in the complex plane,

Φ_0''(Q) = -(1/2) arcsin[Re P̅(Q)/|P̅(Q)|] - π/4.

The choice of Re P̅ and Im P̅ in Eqs. (<ref>) and (<ref>) corresponds to writing B^1/2 = i|B|^1/2 in Eq. (<ref>) for P̅^2 in the region where B(Q) < 0.

The WKB approximation (<ref>) breaks down near Q_1, as B(Q) becomes ∼ λ and |∂_P g| becomes small. However, we do not need to calculate the wave function Ψ_0(Q) in this region, as seen from Eq. (12) of the main text.

§.§ The wave function Ψ_1(Q)

The minimum of g(Q,P) at (Q_1,P_1) corresponds to a nonzero momentum P_1 > 0. Therefore the wave function Ψ_1 centered at Q_1 is complex valued even near its maximum.
Calculating Ψ_1 involves three steps: finding it inside the well of g(Q,P) near (Q_1,P_1); finding the geometric phase that relates Ψ_1 to Ψ_0, given that Ψ_0 is chosen in the form (<ref>); and then finding the tail of Ψ_1 in the classically forbidden range. §.§.§ The intra-well wave function and the geometric phase Using the explicit form (<ref>) of Q_1,P_1, to second order in δ Q = Q-Q_1, δ P=P-P_1 we write the Hamiltonian near (Q_1,P_1) as g(Q,P) ≈ g_min + 3/4(1+fQ_0)δ P^2 + 1/4(1+5fQ_0)δ Q^2 + (√(3)/4)(fQ_0-1)[δ Q δ P + h.c.]. The expression for Ψ_1 for |δ Q| ≪ Q_0-Q_1 then reads Ψ_1(Q)=C_1, intra exp[(iP_1δ Q - 1/2βδ Q^2)/λ], β =[2ω_min+i √(3)(fQ_0-1)]/3Q_0^2. The Gaussian-width parameter β is now complex valued. So is the prefactor C_1, intra, which contains a phase factor exp(iθ_1). The phase θ_1 has a geometric nature. It is determined by the fact that, as indicated in the main text, Ψ_1 and Ψ_0 are related by a rotation in the phase plane, Ψ_1=N_3^† Ψ_0. Here, N_3 = exp(-2π ia^† a/3) with a = (2λ)^-1/2(Q+ iP) ≡ (2λ)^-1/2(Q+ λ∂_Q). To calculate θ_1, we consider a coherent state in the coordinate representation |α⟩ = 1/(πλ)^1/4 exp{-1/2(|α|^2-α^2) - [Q-(2λ)^1/2α]^2/2λ} and set α = Q_0/√(2λ), so that the wave function |α⟩ is centered at Q_0 and thus strongly overlaps with Ψ_0. Since the function Ψ_1 is obtained from Ψ_0 by applying the operator N_3^†, we can write the overlap integral as ⟨α|Ψ_0⟩ = ⟨α|N_3Ψ_1⟩ = ⟨α exp(2π i/3)|Ψ_1⟩. The “rotated" state |α exp(2π i/3)⟩ strongly overlaps with Ψ_1. Therefore the above overlap integrals can be calculated using the explicit Gaussian form of Ψ_0 and Ψ_1 near their maxima. With account taken of the normalization of Ψ_1, this gives C_1, intra=[Reβ/πλ]^1/4 exp(iθ_1), θ_1 = 1/2 (β +1)+P_1Q_1/2λ. §.§.§ The wave function Ψ_1 in the classically forbidden region It is clear from Eq. (12) of the main text that we need to find the tail of the wave function Ψ_1 in the classically forbidden region only for Q > Q_1.
It is given by Eq. (13) of the main text. In a more explicit form, Ψ_1(Q)=C_1(i∂_P g)^-1/2 exp[iS_1(Q)/λ], S_1(Q)=-∫_Q_1 + l_q'^Q dQ' 𝒫(Q'), where 𝒫(Q) is given by Eqs. (<ref>) and (<ref>), and l_q' =[λ/Re β]^1/2. Equation (<ref>) corresponds to choosing B^1/2(Q) = i|B(Q)|^1/2 for B(Q) < 0 and to ∂_Pg calculated for P(Q)=𝒫(Q), i.e., ∂_Pg = 𝒫B^1/2. For Q_B-Q ≫ Q-Q_1 ≫ l'_q we have -𝒫(Q) ≈ P_1+ iβ(Q-Q_1), as expected from Eq. (<ref>). By matching Eqs. (<ref>) and (<ref>), we find C_1=(ω_min/2√(π e))^1/2 exp(iθ_1'), θ_1' = θ_1 -λ^-1[(l_q^' 2/2) Imβ - P_1l_q']. Because we count the action S_1 off from Q_1 + l_q', there emerges an extra phase factor in C_1 due to the oscillations of the wave function inside the “potential well" centered at (Q_1,P_1). § TUNNEL SPLITTING OF THE SCALED RWA ENERGY LEVELS The scaled RWA energies g^(k) give the values of the quasienergies ε^(k) of the driven oscillator, ε^(k) = (Ξ g^(k) + ħω_F k/3) mod (ħω_F), where Ξ = |8ω_F^2(δω)^2/27γ|; see Eqs. (6) and the text above Eq. (9) of the main text. The explicit expressions for the wave functions (<ref>) and (<ref>) allow us to calculate the scaled energies g^(k) using Eq. (12) of the main text, which we reproduce here for convenience, ∫_∞^Q_* dQ[ϕ^(k)(Q)(ĝ - g_0)Ψ_0(Q) - Ψ_0(Q)(ĝ - g^(k)) ϕ^(k)(Q)]=0. The functions ϕ^(k) are sums of the functions Ψ_m(Q) weighted with factors exp(-2π i mk/3)/√(3). For Q_* well inside the interval (Q_1,Q_0), we have ∫_∞^Q_* Ψ_0^2(Q)dQ = -1. Taking into account that the overlap of the functions Ψ_1,2(Q) with Ψ_0(Q) is exponentially small, we rewrite Eq. (<ref>) as g^(k)-g_0 ≈ [∫_∞^Q_* dQ Ψ_1(Q)ĝΨ_0 - ∫_∞^Q_* dQ Ψ_0ĝΨ_1(Q)] × exp(-2kπ i/3) + c.c. It is important that the product Ψ_0(Q)Ψ_1(Q) has two terms. One of them is ∝ exp{i[S_0(Q) + S_1(Q)]}. It depends smoothly on Q, because S_0(Q) + S_1(Q) = const for Q_1 < Q < Q_0. The other term is ∝ exp{-i[S_0^*(Q) - S_1(Q)]}; it is a fast-oscillating function of Q.
The contribution of this term to the integrals (<ref>) is exponentially small and exponentially sensitive to a change of Q_* on a scale ∝λ. Therefore this term should be disregarded. Using the explicit form of the operator g(Q, -iλ∂_Q) and integrating by parts, from Eq. (<ref>) we obtain g^(k) - g_0 = -2λ C_0|C_1| exp(-S_λ/λ) cos(Φ_λ/λ-2kπ/3), S_λ = -∫_Q_1+l_q'^Q_0-l_q dQ Im 𝒫(Q), Φ^(k)_λ = -∫_Q_1 + l_q'^Q_B dQ Re 𝒫(Q) + λθ_1'. This expression is somewhat inconvenient, as 𝒫 is calculated with account taken of the term ∝λ. It is easy to see that 𝒫(Q) ≈ 𝒫_cl(Q) + (1/2)λω_min/∂_Pg, where 𝒫_cl is the value of 𝒫 calculated for λ=0. This approximation breaks down near Q_0, Q_B and Q_1, where ∂_Pg goes to zero. Similar to Ref. Garg2000_1, for Q_0>Q>Q_B one can write ∫_Q_0-l_q^Q dQ' 𝒫(Q') ≈ ∫_Q_0^Q dQ'[𝒫_cl(Q') + λ Y(Q',Q_0)] - (iλ/2)log(|Q-Q_0|/l_q) - iλ/4 - (iλ/2)log 2, Y(Q,Q_m) = ω_min/[2𝒫_cl(Q)B_cl^1/2(Q)] - i/2|Q-Q_m|. Here, B_cl(Q) = (16f/3)(Q-Q_1)^2(Q- Q_B) is the value of B(Q) calculated for λ=0. A similar transformation can be made for ∫_Q_1+ l_q'^Q dQ' 𝒫(Q') in the region Q_1<Q<Q_B. We now have to consider the vicinity of Q_B. Formally, the quantum correction to 𝒫 diverges at Q_B. However, the divergence is integrable. Therefore Eq. (<ref>) applies all the way to Q=Q_B, and one can use the value of Q_B given by Eq. (<ref>). The final result for the difference of the scaled RWA energies is Eq. (14) of the main text, g^(k) - g_0 = C_tun e^-S_tun/λ cos(λ^-1Φ_tun - 2π k/3), with real S_tun and Φ_tun, S_tun = ∫_Q_0^Q_1 dQ Im 𝒫_cl(Q) + λ Im K_tun, Φ_tun = ∫_Q_B^Q_1 dQ Re 𝒫_cl(Q) + λ Re K_tun + λθ_1, C_tun = -(3/2)√(λ) ω_min[2(Q_0^2 +1)/3π^2 Q_0^2]^1/4[f(2Q_0-f)]^1/2. Here, K_tun = ∫_Q_0^Q_B dQ Y(Q,Q_0) + ∫_Q_B^Q_1 dQ Y(Q,Q_1) and θ_1 is given in (<ref>). The explicit expression (<ref>) is in extremely good agreement with the numerical calculations. This can be seen from Fig. 1 in the main text. A more detailed comparison is shown in Fig. <ref>. Equation (<ref>) simplifies in the limit of a comparatively strong drive, f ≫ 1.
The leading-order terms in S_tun and in Φ_tun are quadratic in f. Numerically, the asymptotic regime is reached for comparatively large f, where the tunneling amplitude becomes very small. § QUANTUM DIFFUSION OVER THE BROKEN-SYMMETRY STATES The dynamics of the driven oscillator system can be strongly changed by even very weak dissipation. Two types of dissipative processes can be distinguished. One of them causes transitions between the states that belong to the same multiplet formed by the tunnel splitting of a quantized state of motion inside a well of g(Q,P). In particular, in this paper we considered such a multiplet, ϕ^(k), formed by the tunnel splitting of the lowest quantized intrawell state. The other dissipative process leads to transitions between the intrawell states. In terms of the dissipation mechanisms, an important type of physical dissipative process is transitions between the Fock states of the oscillator with emission or absorption of excitations of the thermal reservoir to which the oscillator is coupled. Another mechanism is fluctuations of the oscillator eigenfrequency due to the coupling to a reservoir or due to an external noise. It leads to dephasing of the vibrations, but not to an appreciable energy exchange with the reservoir. There may also be dissipation channels that are induced by the driving field; however, for the considered comparatively weak resonant field they are not important. We note first that the dephasing does not mix the states within the tunnel-split multiplets. Indeed, as indicated in the main text, the wave functions ϕ^(k) can be written in terms of the Fock states of the oscillator |n⟩ as ϕ^(k) = ∑_nC_n^(k)|3n+k⟩. The coupling to a thermal bath which leads to dephasing has the form H^(ph) = a^† a H_b^(ph), where a,a^† are the oscillator ladder operators and H_b^(ph) is an operator that depends on the dynamical variables of the bath only.
Clearly, such coupling is diagonal in the ϕ^(k) basis. The simplest coupling that leads to oscillator energy relaxation is linear in a,a^†. To the lowest order of perturbation theory, under well-understood conditions, it is described by the term ρ̇_d in the equation for the oscillator density matrix ρ in slow time compared to ω_F^-1, ρ̇_d = -Γ[(n̅ +1) (a^† aρ -2 a ρ a^† + ρ a^† a) + n̅ (a a^†ρ -2 a^†ρ a + ρ aa^†)], where 2Γ is the energy decay rate of the oscillator and n̅=[exp(ħω_0/k_BT) -1]^-1 is the oscillator Planck number. In what follows we assume that n̅=0; an extension to a nonzero Planck number is straightforward and does not affect the result. The goal of this section is to show that, even when Γ is extremely small but exceeds the tunneling frequency Ω_kk' = λ^-1δω (g^(k) - g^(k')), the oscillator dynamics changes qualitatively compared to the coherent dynamics. Instead of coherent tunneling between the intrawell states Ψ_m, which have broken time-translation symmetry, the oscillator performs random hopping between the wells. We first discuss the effect of the dissipation (<ref>) by disregarding the dissipation-induced transitions between the intrawell states. In this approximation, one can describe the evolution of the oscillator in terms of the kinetic equation for the matrix elements ρ_mm' ≡ ⟨Ψ_m|ρ|Ψ_m'⟩. The interwell tunneling can be mapped onto the tight-binding model with Hamiltonian H_tun = t_tun ∑_m=0,1,2 |Ψ_m⟩⟨Ψ_m+1| + H.c., where we use the convention |Ψ_3⟩ ≡ |Ψ_0⟩. The hopping integral is t_tun = (ħδω/2λ)C_tun exp[(-S_tun+iΦ_tun)/λ] with C_tun, S_tun, and Φ_tun given by Eq. (<ref>). To the leading order in λ, we have ⟨Ψ_m|a|Ψ_m'⟩ = (2λ)^-1/2(Q_m+iP_m)δ_mm'. Therefore, from Eq. (<ref>), the off-diagonal matrix elements ρ_mm' decay with rate ∝Γ/λ. If this rate exceeds |Ω_kk'| ∼ |t_tun|/ħ, then over time ∼λ/Γ the off-diagonal matrix elements decay to their quasi-stationary values, which are determined by the diagonal matrix elements ρ_mm.
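As a sanity check on the structure of the dissipator ρ̇_d above, it can be written out numerically for a truncated oscillator Hilbert space. The sketch below is only an illustration (the truncation size and Γ are arbitrary, not values used in the text): it verifies that the dissipator is trace-free and Hermiticity-preserving, so the normalization and Hermiticity of ρ are maintained.

```python
import numpy as np

def lowering(n):
    # truncated oscillator lowering operator: a|k> = sqrt(k)|k-1>
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

def dissipator(rho, gamma, nbar=0.0):
    # rho_dot_d for the linear coupling, with ladder operators a, a^dagger
    a = lowering(rho.shape[0])
    ad = a.conj().T
    d = (nbar + 1) * (ad @ a @ rho - 2 * a @ rho @ ad + rho @ ad @ a)
    d += nbar * (a @ ad @ rho - 2 * ad @ rho @ a + rho @ a @ ad)
    return -gamma * d

# a random Hermitian, unit-trace density matrix on an 8-level truncation
rng = np.random.default_rng(0)
m = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho = m @ m.conj().T
rho /= np.trace(rho).real

rdot = dissipator(rho, gamma=0.1)
print(abs(np.trace(rdot)))               # ~0: the trace of rho is conserved
print(np.allclose(rdot, rdot.conj().T))  # True: Hermiticity is preserved
```

By the cyclic property of the trace, tr(aρa^†) = tr(a^†aρ), so the trace of ρ̇_d vanishes identically even in the truncated space, consistent with ρ̇_d describing decay rather than loss of normalization.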
The latter vary much more slowly, ρ̇_mm = W∑_m'≠ mρ_m'm' - 2Wρ_mm, W = λ|t_tun|^2/ħ^2 Γ Q_0^2. The parameter W is the rate of hopping between the wells of g(Q,P); it is much smaller than the tunneling frequency |Ω_kk'|. The hopping is a Poisson process in the slow time; it is incoherent and is a discrete analog of diffusion. The above analysis is in the spirit of the theory of quantum diffusion in solids <cit.> and its analog in systems with a small number of potential wells <cit.>. The role of the dissipation-induced intrawell transitions is more subtle. Even for T=0, these transitions lead to an occupation of excited intrawell states, cf. <cit.>. On the time scale determined by 1/Γ, near the minimum of a well a Boltzmann-type distribution over the states is progressively formed. The stationary ratio of the populations of neighboring states can be shown to be (1+2fQ_0-ω_min)/(1+2fQ_0+ω_min). The tunnel splitting increases for higher-lying intrawell states. However, near the minimum of g(Q,P) this increase is slow. The tunneling action S_tun(n) varies with the intrawell level number n as |∂ S_tun(n)/∂ n| = λω_minτ_n. Here, τ_n is the dimensionless imaginary time of interwell tunneling given by Im ∫ dQ/∂_Pg, where the classical momentum is calculated for g(Q,P) = g_min + λω_min(n+1/2). This time is logarithmically large for small n. Therefore, for small Γ but still Γ ≫ |t_tun|/ħ, tunneling via excited intrawell states weakly renormalizes the rate W in Eq. (<ref>). Even if for highly excited intrawell states, with n exceeding some critical n_cr ∝ 1/λ, the hopping integral exceeds Γ/ħ, interwell switching via these states will be very slow, as the occupation of these states will be small. We note that, if ħΓ exceeds the hopping integral for almost all intrawell states, interwell switching may occur via dissipation-induced transitions over the interwell barrier of g(Q,P), i.e., over the saddle point of g(Q,P) seen in Fig. 1 of the main text.
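The rate equation for ρ_mm above has a simple closed-form solution, since the symmetric 3×3 rate matrix has eigenvalues 0 and -3W (doubly degenerate). The sketch below (with an arbitrarily chosen W, purely for illustration) checks that equal well populations are stationary and that a state initially localized in one well relaxes to them at rate 3W.

```python
import numpy as np

W = 0.05  # hypothetical hopping rate between the three wells
L = W * (np.ones((3, 3)) - 3 * np.eye(3))   # rate matrix: rho_dot = L @ rho

uniform = np.full(3, 1.0 / 3.0)
print(L @ uniform)        # ~0: equal populations are stationary

# relaxation of a state initially localized in well m = 0
t = 50.0
rho0 = np.array([1.0, 0.0, 0.0])
rho_t = uniform + (rho0 - uniform) * np.exp(-3 * W * t)
print(rho_t)              # close to [1/3, 1/3, 1/3]
```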
This is the dominating switching mechanism for a parametric oscillator <cit.>.

References

[1] L. D. Landau and E. M. Lifshitz, Quantum Mechanics: Non-Relativistic Theory, 3rd ed. (Butterworth-Heinemann, Oxford, 1997).
[2] A. Garg, Am. J. Phys. 68, 430 (2000).
[3] Y. Kagan, J. Low Temp. Phys. 87, 525 (1992).
[4] M. I. Dykman and G. G. Tarasov, Zh. Eksp. Teor. Fiz. 74, 1061 (1978).
[5] M. Marthaler and M. I. Dykman, Phys. Rev. A 73, 042108 (2006).
Maximum-Likelihood Augmented Discrete Generative Adversarial Networks

Tong Che (Montreal Institute for Learning Algorithms, Université de Montréal, Montréal, QC H3T 1J4, Canada), Yanran Li (Department of Computing, The Hong Kong Polytechnic University, Hong Kong), Ruixiang Zhang (The Hong Kong University of Science and Technology), R Devon Hjelm (Montreal Institute for Learning Algorithms, Université de Montréal; IVADO), Wenjie Li (The Hong Kong Polytechnic University), Yangqiu Song (The Hong Kong University of Science and Technology), and Yoshua Bengio (Montreal Institute for Learning Algorithms, Université de Montréal). Tong Che, Yanran Li, and Ruixiang Zhang contributed equally. Correspondence: tong.che@umontreal.ca.

Keywords: generative adversarial networks, text generation, reinforcement learning

Despite the successes in capturing continuous distributions, the application of generative adversarial networks (GANs) to discrete settings, like natural language tasks, is rather restricted. The fundamental reason is the difficulty of back-propagation through discrete random variables, combined with the inherent instability of the GAN training objective. To address these problems, we propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks. Instead of directly optimizing the GAN objective, we derive a novel and low-variance objective using the discriminator's output that corresponds to the log-likelihood. Compared with the original, the new objective is proved to be consistent in theory and beneficial in practice. The experimental results on various discrete datasets demonstrate the effectiveness of the proposed approach.

§ INTRODUCTION Generative models are appealing because they provide ways to obtain insights on the underlying data distribution and statistics. In particular, these models play a pivotal role in many natural language processing tasks such as language modeling, machine translation, and dialogue generation. However, the generated sentences are often unsatisfactory <cit.>.
For example, they often lack consistency in long-term semantics and coherence in high-level topics and syntax <cit.>. This is largely attributable to a defect in the dominant training approach for existing discrete generative models. To generate discrete sequences, it is popular to adopt auto-regressive models trained through teacher forcing <cit.>, which, nevertheless, causes the exposure bias problem <cit.>. The existing approach trains auto-regressive models to maximize the conditional probabilities of next tokens based on the ground-truth histories. In other words, during training, auto-regressive generative models are only exposed to the ground truths from the data distribution rather than to samples from the model distribution, i.e., its own predictions. This prevents the trained model from learning to make the next prediction in the context of its previously generated words, resulting in a bias and difficulty in approaching the true underlying distribution <cit.>. Another limitation of teacher forcing is that it is inapplicable to auto-regressive models with latent random variables, which have performed better than autoregressive (deterministic state) recurrent neural networks (i.e., usual RNNs, LSTMs or GRUs) on multiple tasks <cit.>. An alternative and attractive solution for training autoregressive models is using generative adversarial networks (GANs) <cit.>. The problem discussed above could be prevented if the generative model were able to visit its own predictions during training and had an overall view of the generated sequences. We suggest facilitating the training of autoregressive models with an additional discriminator under the GAN setting. With a discriminator trained to separate real versus generated sequences, the generative model is able to make use of the knowledge of the discriminator to improve itself.
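The contrast between teacher forcing and free-running generation can be made concrete with a toy sketch. The `step` function below is a hypothetical stand-in for an auto-regressive model's next-token distribution p_θ(x_t | x_<t), not an actual language model; the point is only the difference in what the model is conditioned on in the two modes.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5  # toy vocabulary size

def step(prefix):
    # stand-in for p_theta(x_t | x_<t): returns a next-token distribution
    logits = rng.normal(size=V)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def teacher_forced_nll(x):
    # training mode: every step is conditioned on the ground-truth history
    return -sum(np.log(step(x[:t])[x[t]]) for t in range(1, len(x)))

def free_run(length):
    # generation mode: each step is conditioned on the model's OWN samples,
    # a regime never visited under teacher forcing (exposure bias)
    seq = [0]
    for _ in range(length - 1):
        seq.append(int(rng.choice(V, p=step(seq))))
    return seq

print(teacher_forced_nll([0, 2, 1, 3]))
print(free_run(4))
```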
Since the discriminator is trained on the entire sequence, it can in principle provide the training signal needed to avoid the problem of exposure bias. However, it is nontrivial to apply GANs to discrete data, as it is difficult to optimize the generator using the signal provided by the discriminator. In fact, it is usually very hard, if not impossible, to push the generated distribution towards the real data distribution by moving a generated sequence (e.g., a faulty sentence) towards a “true” one (e.g., a correct sentence) in a high-dimensional discrete state space. As standard back-propagation fails in discrete settings, the generator can be optimized using the discriminator's output as a reward via reinforcement learning. Unfortunately, even with careful pre-training, we found that the policy has difficulty getting positive and stable reward signals from the discriminator. To tackle these limitations, we propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks (MaliGAN). At the core of this model is a novel GAN training objective which sidesteps the stability issue that arises when using the discriminator output as a direct reinforcement learning reward. Instead, we develop a normalized maximum likelihood optimization target inspired by <cit.>. We use importance sampling and several variance reduction techniques in order to successfully optimize this objective. The procedure was discovered independently of us by <cit.> in the context of image generation. The new target brings several attractive properties to the proposed MaliGAN. First, it is theoretically consistent and easier to optimize (Section <ref>).
Second, it allows the model not only to maximize the likelihood of good behaviors, but also to minimize the likelihood of bad behaviors, with the help of a GAN discriminator.Equipped with these strengths, the model focuses more on improving itself by gaining beneficial knowledge that is not yet well acquired, and excluding the most probable and harmful behaviors. Combined with several proposed variance reduction techniques, the proposed MaliGAN successfully and stably models discrete data sequences (Section <ref>). § PRELIMINARIES AND OVERVIEWThe basic framework for discrete sequence generation is to fit a set of data {𝐱_i}_i=1^N coming from an underlying generating distribution p_d by training a parameterized auto-regressive probabilistic model p_θ. In this work, we aim to generate discrete data, especially discrete sequential data, under the GAN setting <cit.>. GAN defines a framework for training generative models by posing it as a minimax game against a discriminative model. The goal of the generator G is to match its distribution p_g to the real data distribution p_d. To achieve this, the generator transforms noise z sampled from p(z) to a data sample G(z). Following this, the discriminator D is trained to distinguish between the samples coming from p_d and p_g, and can be used to provide a training signal to the generator.When applying the GAN framework to discrete data, the discontinuity prohibits the update of the generator parameters via standard back-propagation. To tackle this, one way is to employ a typical reinforcement learning (RL) strategy that directly uses the GAN discriminator's output, D or log D as a reward. In practice, the problem is usually solved by REINFORCE-like algorithms <cit.>, perhaps with some variance reduction techniques. Formally, we train a generator G(𝐱) together with a discriminator D(𝐱). In its original form, the discriminator is trained to distinguish between the generating distribution p_θ and the real data distribution p_d. 
The generator is then trained to maximize 𝔼_𝐱∼ p_θ[log D(𝐱)]. Namely, the objective for the generator to optimize is as follows: ℒ_GAN(θ) = -𝔼_𝐱∼ p_θ[log D(𝐱)] ≈ -1/n∑_i=1^n log D(𝐱_i), 𝐱_i ∼ p_θ. Our work is related to the viewpoint of casting GAN training as a reinforcement learning problem with a moving reward signal monotone in D(𝐱). Define the normalized probability distribution q'(𝐱) = 1/Z(D) D(𝐱)^1/τ in some bounded region to guarantee integrability (note that D is an approximation to p_d/(p_g+p_d) if D is well trained), and also add a maximum-entropy regularizer ℍ(p_θ) to encourage diversity, yielding the regularized loss: ℒ_GAN(θ) = -𝔼_𝐱∼ p_θ[log D(𝐱)] - τℍ(p_θ) = τKL(p_θ||q') + c(D), where c(D) is a constant depending only on D. Hence, optimizing the traditional GAN is basically equivalent to optimizing the KL divergence KL(p_θ||q'). One major problem with this approach is that q' always moves with D, which is undesirable for both stability and convergence. When we have some samples 𝐱_i ∼ p_θ, we want to change θ a bit in order to adjust the likelihood of the samples 𝐱_i so as to improve the quality of the generator. However, since p_θ initially generates very bad sequences, it has little chance of generating good sequences and hence of receiving positive rewards. Though dedicated pre-training and variance reduction mechanisms help <cit.>, RL algorithms based on the moving reward signal still seem very unstable and do not work on large-scale datasets. We therefore propose to utilize the information of the discriminator as an additional source of training signals, on top of the maximum-likelihood objective. We employ importance sampling to make the objective trainable. The novel training objective has much less variance than that of vanilla reinforcement learning approaches that directly adopt D or log D as reward signals.
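The instability described above is easy to see numerically. In the sketch below the discriminator scores are made up (uniformly tiny, mimicking early training on bad samples; they are not the outputs of a real discriminator): the REINFORCE weights log D(𝐱_i) on ∇log p_θ(𝐱_i) are all negative and large in magnitude, so every path is pushed down and almost no relative preference among paths is expressed.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical discriminator scores D(x_i): early in training, almost all
# generated samples receive a tiny score
d = rng.uniform(1e-4, 0.05, size=64)

# REINFORCE weight on grad log p_theta(x_i) for the loss -E[log D(x)]
w = np.log(d)
print(w.min(), w.max())  # all weights negative, large in magnitude
```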
The analysis and discussions will be presented in more detail in Section <ref>. § MAXIMUM-LIKELIHOOD AUGMENTED DISCRETE GENERATIVE ADVERSARIAL NETWORKS In this section, we present the details of the proposed model. At the heart of this model is a novel training objective that significantly reduces the variance during training; we give theoretical and practical analysis of the objective's equivalence and attractive properties. We also show how this core algorithm can be combined with several variance reduction techniques to form the full MaliGAN algorithm for discrete sequence generation. §.§ Basic Model of MaliGAN We propose Maximum-Likelihood Augmented Discrete Generative Adversarial Networks (MaliGAN) to generate discrete data. With MaliGAN, we train a discriminator D(𝐱) with the standard objective that GAN employs. What is different from GANs is a novel objective for the generator to optimize, using importance sampling, which makes the training procedure closer to maximum likelihood (MLE) training of auto-regressive models, and thus more stable and with less variance in the gradients. To do so, we keep a delayed copy p'(𝐱) of the generator whose parameters are updated less often in order to stabilize training. From the basic property of GANs, we know that an optimal D has the property D(𝐱) = p_d/(p_d+p'). So in this case, we have p_d = [D/(1-D)]p'. Therefore, we set the target distribution q for maximum likelihood training to be [D/(1-D)]p'. Letting r_D(𝐱) = D(𝐱)/(1-D(𝐱)), we define the augmented target distribution as: q(𝐱) = 1/Z(θ') · D(𝐱)/(1-D(𝐱)) · p'(𝐱) = r_D(𝐱)/Z(θ') · p'(𝐱). Regarding q as a fixed probability distribution, the target to optimize is: L_G(θ) = KL(q(𝐱)||p_θ(𝐱)). This objective has an attractive property: q is a “fixed” distribution during training, i.e., if D is sufficiently trained, then q is always approximately the data generating distribution p_d.
By defining the gradient as ∇ L_G = 𝔼_q[∇_θlog p_θ(𝐱)], we have the following importance sampling formula: ∇ L_G = 𝔼_p'[q(𝐱)/p'(𝐱) · ∇_θlog p_θ(𝐱)] = 1/Z 𝔼_p_θ[r_D(𝐱)∇_θlog p_θ(𝐱)], where we assume that p' = p_θ; in the experiments the delayed generator is only one step behind the current update. This importance sampling procedure was discovered independently of us by <cit.>. We propose to optimize the generator using the following novel gradient estimator: ∇ L_G(θ) ≈ ∑_i=1^m (r_D(𝐱_i)/∑_i r_D(𝐱_i) - b)∇log p_θ(𝐱_i) = E({𝐱_i}_1^m), where b is a baseline from reinforcement learning, introduced in order to reduce variance. In practice, we let b increase very slowly from 0 to 1. Combined with the objective of the discriminator in an ordinary GAN, we get the proposed MaliGAN algorithm as shown in Algorithm <ref>. §.§ Analysis The proposed objective in Eq. <ref> is also theoretically guaranteed to be sound. In the following theorem, we show that our training objective approximately optimizes the KL divergence KL(q(𝐱)||p_θ(𝐱)) when D is close to optimal. Moreover, the objective still makes sense when D is well trained but far from optimal. We have the following two theoretical guarantees for our new training objective. (i) If the discriminator D(𝐱) is optimal between the delayed generator p' and the real data distribution p_d, we have the following equation: 𝔼_p_d[log p_θ(𝐱)] = 1/Z(θ')𝔼_p'[r_D(𝐱)log p_θ(𝐱)], where Z(θ') = 𝔼_p'[r_D(𝐱)] = 1. (ii) If D(x) is trained well but not sufficiently, namely, for all x, D(x) lies between 0.5 and p_d/(p_d+p'), we have the property that for m → ∞, almost surely E({𝐱_i}_1^m)·∇_θKL(p_d||p_θ) > 0. The above gives us a condition for our objective to still push the generator in a descent direction even when the discriminator is not trained to optimality. In addition to its attractiveness in theory, we now demonstrate why the gradient estimator of ∇ L_G(θ) in Eq. <ref> can, in practice, produce a better training signal for the generator than the original GAN objective.
Similar discussions can be found in <cit.>. In the original GAN setting viewed from a reinforcement learning perspective, e.g. the inclusive KL in Eq. <ref>, the free-running auto-regressive model can be viewed as an RL agent exploring the state space and receiving a reward, D or log D, at the end of the exploration. The model then tries to adjust the probability of each of its exploration paths according to this reward. However, this gradient estimator is drastically inefficient when almost all generated paths have a very small discriminator output. Unfortunately, this is very common in GAN training and cannot be solved even with a carefully selected baseline. In the MaliGAN objective, however, the partition function Z is estimated using the samples in the minibatch, which helps deal with the above dilemma. When we choose, for example, the baseline b=1, we can see that the sum of the weights on the generated paths is zero, and the probability of each path is adjusted not according to the absolute value of the discriminator output, but according to its relative quality in that minibatch. This ensures that the model can always learn something as long as some generations are better than others in that mini-batch. Furthermore, the previous theorem ensures the consistency of this mini-batch-level normalization procedure. From a theoretical point of view, this normalization procedure also helps. At first glance, when D is optimal, one can prove that Z=1, so estimating Z seems to only introduce additional variance into the model. However, using this estimator in fact reduces the variance for the following reason: r_D(𝐱) is a function with a singularity when 𝐱 is in a region Ω of the data space on which D(𝐱) ≈ 1. Even with very careful pre-training, there typically exists such a region Ω with r_D ≫ 0 on Ω and p'(Ω) ≈ 0, making the ratio blow up.
In our target 1/Z(θ')𝔼_p'[r_D(𝐱)log p_θ(𝐱)], since it is almost impossible to obtain samples from Ω under p' in a reasonably sized mini-batch, the actual distribution we are sampling from is a “regularized” distribution p_\Ω, where p_\Ω(Ω)=0 and p_\Ω ≈ p'. So when doing importance sampling to estimate our training objective ∇ L_G = 𝔼_p_d[∇_θlog p_θ(𝐱)] with small mini-batches, we are actually doing normalized-weights importance sampling based on p_\Ω: ∇ L_G ≈ 𝔼_p_\Ω[r_D(𝐱)∇_θlog p_θ(𝐱)]/𝔼_p_\Ω[r_D(𝐱)]. Since the Monte Carlo estimator has much more variance when estimating 𝔼_p'[r_D(𝐱)∇_θlog p_θ(𝐱)] than when estimating 𝔼_p_\Ω[r_D(𝐱)∇_θlog p_θ(𝐱)], in practical mini-batch training settings we can view ourselves as doing importance sampling with the distribution p_\Ω, and this objective has much less variance than importance sampling with p' of r_D, which has an infinite singularity. This is why estimating Z = 𝔼_p_\Ω[r_D(𝐱)] is important for reducing the variance in the mini-batch training setting. When training auto-regressive models with teacher forcing, a serious problem is exposure bias <cit.>. Namely, the model is only trained on demonstrated behaviors (real data samples), but we also want it to be trained on free-running behaviors. When we set a positive baseline b>0, the model first generates m samples, and then tries to adjust the probability of each generated sample, trying to reinforce the best behaviors and exclude the worst behaviors relative to those in the mini-batch.
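The mini-batch weighting just described can be sketched numerically. The discriminator scores below are made up for illustration; with b chosen as 1/m (the per-sample share of a total baseline of 1), the weights sum to zero, so each sample is scored only by its quality relative to the rest of the mini-batch.

```python
import numpy as np

def maligan_weights(d_out, b=0.0):
    # d_out: discriminator scores D(x_i) in (0, 1) for samples x_i ~ p_theta
    r = d_out / (1.0 - d_out)          # r_D = D / (1 - D)
    return r / r.sum() - b             # self-normalized over the mini-batch

d = np.array([0.9, 0.5, 0.2, 0.05])    # hypothetical scores for m = 4 samples
w = maligan_weights(d, b=1.0 / len(d))
print(w)  # sums to zero; the best sample is weighted up, the worst down
# the surrogate loss whose gradient matches the estimator:
#   loss = -(w * log p_theta(x_i)).sum(), with w treated as constant
```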
Nevertheless, when the long sequence generation procedure consists of multiple steps of random sampling, we find it is better to further integrate the following advanced variance reduction techniques.§.§.§ Monte Carlo Tree SearchInstead of using the same weight for all time steps in one sample, we use the following formula which is well known in the RL literature:𝔼_p_θ[r_D(𝐱)∇ p(𝐱)]=𝔼_p_θ[∑_t=1^L Q(a_t,𝐬_t)∇ p_θ(a_t|𝐬_t)]where Q(a,𝐬) stands for the “expected total reward” given by r_D=D/1-D of generating token a given previous generation 𝐬, which can be estimated with, e.g., Monte Carlo tree search (MCTS,  <cit.>).Thus, following the gradient estimator presented in Theorem <ref>, we derive another gradient estimator:∇ L_G(θ) ≈∑_i L_i/m∑ Q(a^i_t,𝐬^i_t)∑_i,t^m,L_i Q(a^i_t,𝐬^i_t)∇log p_θ(a^i_t|𝐬^i_t)where m is the size of the mini-batch. Using Monte Carlo tree search brings in several benefits. First, it allows different steps of the generated sample to be adjusted with different weights. Second, it gives us a more stable estimator of the partition function Z. Both of these two properties can dramatically reduce the variance of our proposed estimator. §.§.§ Mixed MLE-Mali TrainingWhen dealing with long sequences, the above model may result in accumulated variance. To alleviate the issue, we significantly reduce the variance by clamping the input using the training data for N time steps, and switch to free running mode for the remaining T-N time steps. 
Then, during our training procedure, inspired by <cit.>, we slowly move N from T towards 0. The training objective is equivalent to setting q in the last section to: q(x_0,x_1,⋯ x_L) = p_d(x_0,⋯ x_N) q(x_N+1,⋯ x_L|x_0,⋯ x_N). We also assume D is trained on the real samples and on fake samples generated by p_f(x_0,⋯ x_L) = p_d(x_0,⋯ x_N) p_θ(x_N+1,⋯ x_L|x_0,⋯ x_N). Letting 𝐱_≤ N=(x_0,x_1,⋯ x_N), 𝐱_>N=(x_N+1,⋯ x_L), we have: ∇ L_G = 𝔼_q[∇log p_θ(𝐱)] = 𝔼_p_d[∇log p_θ(𝐱_≤ N)] + 𝔼_q[∇log p_θ(𝐱_>N|𝐱_≤ N)] = 𝔼_p_d[∇log p_θ(x_0,x_1,⋯ x_N)] + 1/Z 𝔼_p_θ[∑_t=N+1^L r_D(𝐱)∇log p_θ(a_t|𝐬_t)]. For each sample 𝐱_i from the real data batch, if it has length larger than N, we fix the first N words of 𝐱_i, then sample n times from our model until the end of the sequence, obtaining n samples {𝐱_i,j}_j=1^n. We then have the following series of mini-batch estimators, one for each 0≤ N≤ T: ∇ L^N_G ≈ ∑_i=1,j=1^m,n (r_D(𝐱_i,j)/∑_j r_D(𝐱_i,j) - b)∇log p_θ(𝐱^>N_i,j|𝐱^≤ N_i) + 1/m∑_i=1^m∑_t=0^N ∇log p_θ(a^i_t|𝐬^i_t) = E_N(𝐱_i,j). One difference is that in this model, we normalize the coefficients r_D(𝐱_i,j) based only on the samples generated from a single real data sample 𝐱_i. The reason for using this trick will be explained in the next subsection. We have the following theorem, which guarantees the theoretical soundness of this estimator. When D is correctly trained but not optimal in the sense of Theorem <ref>, for m→∞ we almost surely have, for all 0≤ N≤ T, E_N(𝐱_i,j)·∇_θKL(p_d||p_θ) > 0. §.§.§ Single real data based renormalization Many generative models have multiple layers of randomness. For example, in auto-regressive models, the samples are generated via multiple sampling steps. Other examples include hierarchical generative models like deep Boltzmann machines and deep belief networks <cit.>. In these models, high-level random variables are usually responsible for modeling high-level decisions or “modes” of the probability distribution. Changing them can have much larger effects than changing low-level variables.
Motivated by this observation, in each mini-batch we first draw a number of samples (e.g. 32) of high-level latent variables, and then for each high-level value we draw a number of low-level data samples (e.g. 32). We then re-estimate the partition function Z from the low-level samples generated by each high-level sample. Because lower-level sampling has a much smaller variance, the model can receive better gradient signals from the weights provided by the discriminator. This sampling principle corresponds to applying the mixed MLE-Mali training discussed above in the auto-regressive setting. In this case we first draw a few data samples, then fix the first N words and let the network generate many samples after position N as our next mini-batch. We refer to this full algorithm as sequential MaliGAN with Mixed MLE Training, which is summarized in Algorithm <ref>. The benefit of this single-real-sample renormalization is twofold. First, consider a sample S from the training set with first N words S_≤N, whose remainder is to be completed by our model. The conditional distribution p_d(S'_>N|S_≤N) should be much simpler than the full distribution p_d. Namely, p_d(S'_>N|S_≤N) consists of only one or a few “modes”. So this renormalization technique can be viewed as training the model on these simpler conditional distributions, which gives more stable gradients. Second, this normalization scheme makes our model robust to missing modes, a common failure pattern when training GANs <cit.>. Single-sample-based renormalization ensures that for every real sample S, the model receives a moderately strong training signal for how to perform better on generating S_>N conditioned on S_≤N.
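The contrast between per-sample and batch-wise normalization of the weights r_D = D/(1-D) can be sketched in a few lines of Python. This is illustrative only — the function names are mine and the discriminator outputs are made-up numbers — but it shows how a single completion the discriminator loves can starve the rest of a mini-batch under batch-wise normalization.

```python
def rd(d):
    # Importance weight induced by discriminator output D: r_D = D / (1 - D).
    return d / (1.0 - d)

def per_sample_weights(d_outputs):
    """Normalize r_D over the n completions of one real prefix S_<=N,
    so every real sample contributes a full unit of gradient signal."""
    r = [rd(d) for d in d_outputs]
    z = sum(r)                      # per-prefix estimate of the partition function
    return [ri / z for ri in r]

def batch_weights(d_by_prefix):
    """Batch-wise normalization as in basic MaliGAN: one dominating
    completion can absorb nearly all of the weight mass."""
    flat = [rd(d) for ds in d_by_prefix for d in ds]
    z = sum(flat)
    return [f / z for f in flat]

# Prefix 1 has an outlier completion the discriminator loves (D close to 1).
d_by_prefix = [[0.99, 0.3], [0.4, 0.5]]
per = [per_sample_weights(ds) for ds in d_by_prefix]
flat = batch_weights(d_by_prefix)
# Per-sample: each prefix's weights sum to 1, so prefix 2 keeps its signal.
# Batch-wise: the outlier (rd(0.99) = 99) absorbs nearly all of the mass.
```

Here `per[1]` still sums to 1 for the second prefix, while in `flat` the outlier's weight dwarfs everything else — exactly the batch-wise starvation described next.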
However, in batch-wise renormalization as in the basic MaliGAN, this is not possible: there may be some completions S' with r_D(S') very large, so the other training samples in that mini-batch receive very little gradient signal.

§ EXPERIMENTS

To examine the effectiveness of the proposed algorithms, we conduct experiments on three discrete sequence generation tasks. We achieve promising results on all three tasks, including a standard and challenging language modeling task. From the empirical results and the following analysis, we demonstrate the soundness of MaliGAN and show its robustness to overfitting.

§.§ Discrete MNIST

We first evaluate MaliGAN on the binarized image generation task for the MNIST hand-written digits dataset, similar to <cit.>. The original dataset has 60,000 and 10,000 samples in the training and testing sets, respectively. We split the training set and randomly selected 10,000 samples for validation. We adopted as the generator a deep convolutional neural network based on the DCGAN architecture <cit.>. To generate the discrete samples, we sample from the generator's output binomial distribution. We adopt Algorithm <ref> of MaliGAN for training and use the single latent variable renormalization technique for variance reduction. To compare our proposed MaliGAN with models trained using the discriminator's output as a direct reward, we also train a generator with the same network architecture, but use the output of the discriminator as the weight of generated samples. We denote it as the REINFORCE-like model. The comparison results are shown in Figure <ref> and Figure <ref>. The two plots in the first row are the training losses of the generator and discriminator from the proposed MaliGAN. We can see that the training process of MaliGAN with variance reduction techniques is stable and the loss curves are meaningful. The bottom two plots in Figure <ref> are samples generated by the REINFORCE-like model and by MaliGAN.
Clearly, the samples generated by MaliGAN have much better visual quality and closely resemble the training data.

§.§ Poem Generation

We examine the effectiveness of our model on a Chinese poem generation task. Typically, there are two genres of Chinese poems; we refer to those whose lines consist of 5 or 7 Chinese characters as Poem-5 and Poem-7, respectively. We use the dataset provided in <cit.>, and split it in the standard way [<http://homepages.inf.ed.ac.uk/mlap/Data/EMNLP14/>]. The generator is a one-layer LSTM <cit.> with 32 hidden units for Poem-5 and 100 for Poem-7. Our discriminators are two-layer Bi-LSTMs with 32 hidden neurons. We denote our models trained with Algorithm <ref> and Algorithm <ref> as MaliGAN-basic and MaliGAN-full. We compare against two models: the auto-regressive model with the same architecture but trained with maximum likelihood (MLE), and SeqGAN <cit.>. Following <cit.>, we report the BLEU-2 scores in Table <ref> <cit.>. MaliGAN-full obtained the best BLEU-2 scores on both tasks, and MaliGAN-basic was the next best. Clearly, MLE lagged far behind despite the same architecture, which should be attributed to the inherent defect of the MLE teacher-forcing training framework. As pointed out by previous researchers <cit.>, BLEU might not be a proper evaluation metric, so we also calculate the perplexity of these four models, obtaining qualitatively similar results. The best scores are reported in Table <ref> and the perplexity curves are illustrated in Figure <ref>. From these figures, we can see how our models perform during the training procedure. Although with some oscillations, both MaliGAN-basic and MaliGAN-full achieved lower perplexity. Especially on Poem-7 in Figure <ref>, both of our proposed models avoid the overfitting that MLE ends up with. Comparing the training curves of MaliGAN-basic and MaliGAN-full, we find that the latter has less variance.
This demonstrates the effectiveness of the advanced variance reduction techniques in our full model. The peak in the MLE curve on Poem-5 in Figure <ref> is, however, unlikely to be a result of overfitting, because MLE “recovered” from it quickly and continued to converge till the end. In fact, we find it harder to train a stable MLE model on Poem-5 than on Poem-7. We conjecture this results from the intricate mutual influence between the improper evaluation metric and the small training data size.

§.§ Sentence-Level Language Modeling

We also examine the proposed algorithm on a more challenging task, sentence-level language modeling, which can be considered a fundamental task with applications to various discrete sequence generation tasks. To explore the possibilities and limitations of our algorithm, we conduct extensive experiments on the standard Penn Treebank (PTB) dataset <cit.> through parameter searching and model ablations. For evaluation we report sentence-level perplexity, i.e., the average perplexity over all sentences in the test set. For simplicity and efficiency, we adopt a 1-layer GRU <cit.> as our generator, and use the same setting for the baseline model trained with standard teacher forcing <cit.>. We use a bi-directional GRU network as our discriminator. To stabilize training and provide good initialization for the generator, we first pre-train the generator on the training set using teacher forcing; we then train two models, MaliGAN-basic and MaliGAN-full. MaliGAN-basic is trained with Algorithm <ref> without MCTS. MaliGAN-full is trained with Algorithm <ref> with all the variance reduction techniques included. Note that the computational cost of MCTS is very large, so we remove all sentences longer than 35 words from the training set. We set N = 30 and K = 5 at the beginning of training and pre-train our discriminator to make it reliable enough to provide informative and correct signals for the generator.
The perplexity shown in Table <ref> is achieved by our best performing model, which has 200 hidden neurons and 200 dimensions for word embeddings. From Table <ref> we can see that even the simplest model trained by MaliGAN effectively reduces the perplexity of the baseline. Both the basic and the full model, i.e., MaliGAN-basic and MaliGAN-full, obtained a notably lower perplexity than the MLE model. Although the PTB dataset is much more difficult, we obtain results consistent with Table <ref>. It is encouraging to see that our model is more robust to overfitting, considering the relatively small size of the PTB data. These results strengthen our confidence in applying our algorithm to even larger datasets, which we leave as future work. The positive result again demonstrates the effectiveness of MaliGAN, whose primary component is the novel optimization objective we propose in Eq. <ref>. Besides, we also gain insights from the model ablation tests about the advanced variance reduction techniques provided in Section <ref>. Combined with the perplexity curve in Figure <ref>, we can see that with the advanced techniques, MaliGAN-full trains more stably and achieves somewhat lower perplexity scores than MaliGAN-basic. We believe these techniques will be beneficial in other similar problem settings.

§ RELATED WORK

To improve the performance of discrete auto-regressive models, some researchers aim to tackle the exposure bias problem, which is discussed in detail in <cit.>. The problem occurs when the training algorithm prevents models from being exposed to their own predictions during training. The second issue is the discrepancy between the objective during training and the evaluation metric during testing, which is analyzed in <cit.> and then summarized as Loss-Evaluation Mismatch by <cit.>.
Typically, the objectives in training auto-regressive models are to maximize word-level probabilities, while at test time we often evaluate the models using sequence-level metrics such as BLEU <cit.>. To alleviate these two issues, the most straightforward way is to add the evaluation metric to the objective in the training phase. Because these metrics are often discrete and cannot be optimized through standard back-propagation, researchers generally seek help from reinforcement learning. <cit.> exploits the REINFORCE algorithm <cit.> and proposes several model variants to situate the algorithm well in text generation applications. <cit.> shares a similar idea and directly optimizes image captioning metrics through policy gradient methods <cit.>. There exists a third issue, namely Label Bias, especially in the sequence-to-sequence learning framework, which prevents MLE-trained models from being optimized globally <cit.>. To address the aforementioned issues in training auto-regressive models, we propose to formulate the problem in the setting of generative adversarial networks. Initially proposed by <cit.>, the generative adversarial network (GAN) has attracted a lot of attention because it provides a powerful framework to generate promising samples through a min-max game. Researchers have successfully applied GANs to generate promising images conditionally <cit.> and unconditionally <cit.>, to realize image manipulation and super-resolution <cit.>, and to produce video sequences <cit.>. Despite these successes, the feasibility and advantages of applying GANs to text generation remain only sparsely explored, yet noteworthy. As discussed above, it is appealing to generate discrete sequences using GANs: the generative model can utilize the discriminator's output to recover information about its own distribution that is inaccessible when trained by teacher forcing <cit.>. However, it is nontrivial to train a GAN on discrete data due to its discrete, non-differentiable nature.
The instability inherent in GAN training makes things even worse <cit.>. <cit.> exploits adversarial domain adaptation to regularize the training of recurrent neural networks. <cit.> applies GANs to discrete sequence generation by directly optimizing the discrete discriminator's rewards, adopting the Monte Carlo tree search technique <cit.>. A similar technique has been employed in <cit.>, which improves response generation by using adversarial learning. In <cit.>, which inspired us, the authors propose a way of doing mini-batch reweighting when training latent variable models with discrete variables. However, they make use of an inference network, which is infeasible in the GAN setting. Our work is also closely related to <cit.>. In <cit.>, they propose to work with the objective KL(p_d||p_θ) in a conditional generation setting. In that case, the situation is similar to ours because rewards such as BLEU scores are available. However, conditional generation metrics such as BLEU scores are decomposable across time steps, which makes it possible to sample directly from the augmented distributions; this is not possible for sequence-level GANs, e.g., in language modeling. We therefore have to use importance sampling to train the model.

§ DISCUSSIONS AND FUTURE WORK

In spite of their great popularity on continuous datasets such as images, GANs haven't yet achieved an equivalent success in discrete domains such as natural language processing. We observed that the main cause of this gap is that while the discriminator can almost perfectly discriminate the good samples from the bad ones, it is notoriously difficult to pass this information to the generator due to the difficulty of credit assignment through discrete computation and the inherent instability of RL algorithms applied to dynamic environments with sparse rewards. In this work, we take a different approach.
We start from the maximum likelihood training objective KL(p_d||p_θ), and then use importance sampling combined with the discriminator output to derive a novel training objective. We argue that although this objective looks similar to the objective used in reinforcement learning, the normalization in fact reduces the variance of the estimator by ignoring the region Ω in the data space around the singularity of r_D, from which the generator p_θ has almost zero probability of drawing samples. Namely, by estimating the partition function Z using samples, we are approximately doing normalized importance sampling with another distribution p_{∖Ω}, which has much lower variance; cf. Section <ref>. Practically, this single-real-sample normalization process combined with mixed training <cit.> successfully avoids the missing modes problem by providing an equivalent training signal for each mode. Besides successfully reducing the variance of standard reinforcement learning algorithms, our algorithm is surprisingly robust to overfitting. Teacher forcing is prone to overfitting, because by maximizing the likelihood of the training data, the model can easily fit not only the regularities but also the noise in the data. In our model, however, if the generator tries to fit too much noise in the data, the generated samples will not look good, and the discriminator should be able to capture the differences between the generated and the real samples very easily. As for future work, we are going to train the model on large datasets such as Google's one billion words corpus <cit.> and on conditional generation cases such as dialogue generation.
Elementary Yet Precise Worst-case Analysis of MergeSort (SV)

Marek A. Suchenek, California State University Dominguez Hills, Department of Computer Science, 1000 E.
Victoria St., Carson, CA 90747, USA. The full version of this paper offers two elementary yet precise derivations of an exact formula

W(n) = ∑_{i=1}^{n} ⌈lg i⌉ = n⌈lg n⌉ - 2^{⌈lg n⌉} + 1

for the maximum number W(n) of comparisons of keys performed by MergeSort on an n-element array. The first of the two, due to its structural regularity, is well worth carefully studying in its own right. Close smooth bounds on W(n) are derived. It seems interesting that W(n) is linear between the points n = 2^{⌊lg n⌋} and that it linearly interpolates its own lower bound n lg n - n + 1 between these points. The manuscript (MS) of the full version of this paper, dated January 20, 2017, can be found at:

Keywords: MergeSort, sorting, worst case. MSC [2010]: 68W40 Analysis of algorithms. ACM Computing Classification: Theory of computation: Design and analysis of algorithms: Data structures design and analysis: Sorting and searching; Mathematics of computing: Discrete mathematics: Graph theory: Trees; Mathematics of computing: Continuous mathematics: Calculus. ACM classes: F.2.2; G.2.0; G.2.1; G.2.2

§ INTRODUCTION

MergeSort is one of the fundamental sorting algorithms that is being taught in undergraduate Computer Science curricula across the U.S. and elsewhere. Its worst-case performance, measured by the number of comparisons of keys performed while sorting them, is optimal for the class of algorithms that sort inductively[Inductive sorting of n keys sorts a set of n-1 of those keys first, and then “sorts-in” the remaining n-th key.] by comparisons of keys.[In its standard form analyzed in this paper, MergeSort is not an inductive sorting algorithm. However, its worst-case performance, measured by the number of comparisons of keys performed while sorting them, is equal to the worst-case performance of the binary insertion sort first described by Steinhaus in <cit.> that is worst-case optimal in the class of inductive sorting algorithms that sort by comparisons of keys; see <cit.> page 186.]
Historically, it[A bottom-up version of it, invented by John von Neumann.] was the first sorting algorithm to run in O(n lg n) time[In the worst case.]. So it seems only fitting to provide an exact formula for MergeSort's worst-case performance and derive it precisely. Unfortunately, many otherwise decent texts offer unnecessarily imprecise[Notable exceptions in this category are <cit.> and <cit.> that derive almost exact formulas, but see Section <ref> page sec:oth for a brief critique of the results and their proofs offered there.] variants of it, and some with quite convoluted, incomplete, or incorrect proofs. Due to these imperfections, the fact that the worst-case performance of MergeSort is the same as that of another benchmark sorting algorithm, the binary insertion sort of <cit.>, has remained unnoticed[Even in <cit.>.]. In this paper, I present two outlines[The detailed derivations can be found in <cit.>.] of elementary yet precise and complete derivations of an exact formula

W(n) = ∑_{i=1}^{n} ⌈lg i⌉ = n⌈lg n⌉ - 2^{⌈lg n⌉} + 1

for the maximum number W(n) [Elementary derivation of an exact formula for the best-case performance B(n) of MergeSort, measured by the number of comparisons of keys performed while sorting them, has been done in <cit.>; see Section <ref> page sec:best of this paper.] of comparisons of keys performed by MergeSort on an n-element array. The first of the two, due to its structural regularity, is well worth carefully studying in its own right. Unlike some other basic sorting algorithms[For instance, Heapsort; see <cit.> for a complete analysis of its worst-case behavior.] that run in O(n lg n) time, MergeSort exhibits a remarkably regular[As revealed by Theorem <ref>, page thm:mersorbounds.] worst-case behavior, the elegant simplicity of which has been mostly lost on its rough analyses. In particular, W(n) is linear[See Figure <ref> page fig:boundsMergeSort.]
between the points n = 2^{⌊lg n⌋} and it linearly interpolates its own lower bound n lg n - n + 1 [Given by the left-hand side of the inequality (<ref>) page eq:mersorbounds.] between these points. What follows is a short version (SV) of a manuscript dated January 20, 2017, of the full version <cit.> of this paper that has been posted at:

The derivation of the worst case of MergeSort presented here is roughly the same[Except for the present proof of Lemma <ref> which I haven't been using in my class.] as the one I have been doing in my undergraduate Analysis of Algorithms class. <ref> shows sample class notes from one of my lectures.

§ SOME MATH PREREQUISITES

A manuscript of the full version <cit.> of this paper contains a clever derivation of a well-known[See <cit.>.] closed-form formula for ∑_{i=1}^{n} ⌈lg i⌉. It proves insightful in my worst-case analysis of MergeSort as its right-hand side will occur on page eq:result in the fundamental equality (<ref>) and serve as an instrument to derive the respective exact formula for MergeSort's worst-case behavior.

For every integer n ≥ 1,

∑_{i=1}^{n} ⌈lg i⌉ = ∑_{y=0}^{⌈lg n⌉-1} (n - 2^y).

Proof in <cit.>. From this one can easily conclude that: For every integer n ≥ 1,

∑_{i=1}^{n} ⌈lg i⌉ = n⌈lg n⌉ - 2^{⌈lg n⌉} + 1.
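Both the lemma and its corollary are easy to check numerically. The following short Python script is not part of the paper (whose own listings are in Java); it simply verifies that the sum of ceilings of binary logarithms, the level-wise sum, and the closed form agree for a range of n.

```python
from math import ceil, log2

def lhs(n):
    # sum_{i=1}^{n} ceil(lg i); note ceil(lg 1) = 0
    return sum(ceil(log2(i)) for i in range(1, n + 1))

def lemma_rhs(n):
    # sum_{y=0}^{ceil(lg n)-1} (n - 2^y); empty sum for n = 1
    return sum(n - 2 ** y for y in range(ceil(log2(n))))

def closed_form(n):
    # n*ceil(lg n) - 2^(ceil(lg n)) + 1
    c = ceil(log2(n))
    return n * c - 2 ** c + 1

# All three expressions coincide for every n checked.
for n in range(1, 2000):
    assert lhs(n) == lemma_rhs(n) == closed_form(n)
```

For example, n = 16 gives 0 + 1 + 2·2 + 3·4 + 4·8 = 49 = 16·4 - 16 + 1.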
§ MERGESORT AND ITS WORST-CASE BEHAVIOR W(N)

A call to MergeSort inherits an n-element array A of integers and sorts it non-decreasingly, following the steps described below.

To sort an n-element array A do:
* If n ≤ 1 then return A to the caller,
* If n ≥ 2 then
  * pass the first ⌊n/2⌋ elements of A to a recursive call to MergeSort,
  * pass the last ⌈n/2⌉ elements of A to another recursive call to MergeSort,
  * linearly merge, by means of a call to Merge, the non-decreasingly sorted arrays that were returned from those calls onto one non-decreasingly sorted array A',
  * return A' to the caller.

A Java code of Merge is shown on Figure <ref>.[A Java code of MergeSort is shown in <ref> Figure <ref> page fig:Sort.] A typical measure of the running time of MergeSort is the number of comparisons of keys, which for brevity I call comps, that it performs while sorting array A. The worst-case running time W(n) of MergeSort is defined as the maximum number of comps it performs while sorting an array of n distinct[This assumption is superfluous for the purpose of worst-case analysis as the mere presence of duplicates does not force MergeSort to perform more comps.] elements. Clearly, if n = 0 then W(n) = 0. From this point on, I am going to assume that n ≥ 1.[This assumption turns out handy while using expression lg n.] Since no comps are performed outside Merge, W(n) can be computed as the sum of the numbers of comps performed by all calls to Merge during the execution of MergeSort. The following classic results will be useful in my analysis. The maximum number of comps performed by Merge on two sorted lists of total length n is n-1. Proof (constructive, with Java code that generates worst cases shown in the <ref>) in <cit.>. Moreover, if the difference between the lengths of the merged lists is not larger than 1 then no algorithm that merges sorted lists by means of comps beats Merge in the worst case, that is, has a lower than n-1 maximum number of comps.[Proof in <cit.>, Sec.
5.3.2 page 198; the worst-case optimality of Merge (n-1 comps) was generalized in <cit.> over lists of lengths k and m, with k ≤ m, that satisfy 3k ≥ 2m-2.] This fact makes MergeSort optimal in the intersection of the class of sorting algorithms that sort by merging two sorted lists of lengths' difference not larger than 1 [Or, by virtue of the above-quoted result from <cit.>, with the difference not larger than the half of the length of the shorter list plus 1.] with the class of sorting algorithms that sort by comps.

§ AN EASY YET PRECISE DERIVATION OF W(N)

MergeSort is a recursive algorithm. If n ≥ 2 then it spurs a cascade of two or more recursive calls to itself. A rudimentary analysis of the respective recursion tree T_n, shown on Figure <ref>, yields a neat derivation of the exact formula for the maximum number W(n) of comps that MergeSort performs on an n-element array. The idea behind the derivation is strikingly simple. It is based on the observation[Which I prove in <cit.> as Theorem 4.6, page 14.] that for every k ∈ ℕ, the maximum number C_k of comps performed at each level[Empty or not.] k of T_n is given by this neat formula:[It is a simplification of formulas used in the derivation presented in <cit.> and discussed in Section <ref> page sec:oth; in particular, it does not refer to the depth h of the decision tree T_n.]

C_k = max{n - 2^k, 0}.

Since

n - 2^k > 0 if and only if ⌈lg n⌉ - 1 ≥ k,

Corollary <ref> will allow me to conclude from (<ref>) and (<ref>) the main result of this paper[This is how I have been deriving it in my undergraduate Analysis of Algorithms class for some 15 years or so, now.]:

W(n) = ∑_{k ∈ ℕ} C_k = ∑_{k=0}^{⌈lg n⌉-1} (n - 2^k) = n⌈lg n⌉ - 2^{⌈lg n⌉} + 1 = ∑_{i=1}^{n} ⌈lg i⌉.

The missing details[Which I did not show in my Analysis of Algorithms class.] in the above sketch are in <cit.>.
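The level-sum derivation can be cross-checked against the standard worst-case recurrence W(n) = W(⌊n/2⌋) + W(⌈n/2⌉) + n - 1 with W(1) = 0, which follows from the algorithm's structure and the n-1 worst case of Merge. The Python check below is illustrative only (the paper's own code is Java); it confirms that the per-level sums C_k = max{n - 2^k, 0} add up to the recurrence's value.

```python
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def W(n):
    # Worst-case recurrence: both halves hit their worst case,
    # plus n - 1 comps for a worst-case final Merge.
    if n <= 1:
        return 0
    return W(n // 2) + W(n - n // 2) + n - 1

def level_sum(n):
    # Sum over levels k of C_k = max(n - 2^k, 0); terms vanish
    # once 2^k >= n, so ceil(lg n) + 1 levels suffice.
    return sum(max(n - 2 ** k, 0) for k in range(ceil(log2(n)) + 1))

for n in range(1, 1500):
    assert W(n) == level_sum(n)
```

For instance, n = 5 gives levels 4 + 3 + 1 = 8, matching W(5) = W(2) + W(3) + 4 = 1 + 3 + 4 = 8.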
Naturally, their only purpose is to prove the equality (<ref>) for all k ∈ ℕ, as the rest, shown in (<ref>), easily follows from it. In particular, we get: The number W(n) of comparisons of keys that MergeSort performs in the worst case while sorting an n-element array is

W(n) = ∑_{i=1}^{n} ⌈lg i⌉ = n⌈lg n⌉ - 2^{⌈lg n⌉} + 1.

Proof in <cit.>. From that we can conclude a usual rough characterization of W(n):

W(n) ≤ n(lg n + 1) - 2^{lg n} + 1 = n lg n + n - n + 1 = n lg n + 1

and

W(n) ≥ n lg n - 2^{lg n + 1} + 1 = n lg n - 2n + 1.

Therefore, W(n) ∈ Θ(n log n). The occurrence of ∑_{i=1}^{n} ⌈lg i⌉ in (<ref>) allows one to conclude that W(n) is exactly equal[<cit.> contains no mention of that fact.] to the number of comparisons of keys that the binary insertion sort, considered by H. Steinhaus in <cit.> and analyzed in <cit.>, performs in the worst case. Since the binary insertion sort is known to be worst-case optimal[With respect to the number of comparisons of keys performed.] in the class of algorithms that perform incremental sorting, MergeSort is worst-case optimal in that class[Although it is not a member of that class.], too. From this and from the observation at the end of Section <ref>, page pag:mergopt, I conclude that no algorithm that sorts by merging two sorted lists and only by means of comps is worst-case optimal in the class of algorithms that sort by means of comps, as it must perform 8 comps in the worst case while sorting 5 elements[They can be split in two: 1 plus 4, and follow the binary insertion sort, or 2 plus 3, and follow MergeSort.], while one can sort 5 elements by means of comps with no more than 7 comps.

§ CLOSE SMOOTH BOUNDS ON W(N)

Our formula for W(n) contains the ceiling function, which is harder to analyze than arithmetic functions and their inverses. In this Section, I outline a derivation of close lower and upper bounds on W(n) that are expressible by simple arithmetic formulas.
I show that these bounds are the closest to W(n) in the class of functions of the form n lg n + c n + 1, where c is a real constant. The detailed derivation and missing proofs can be found in <cit.>. Using the function ε (analyzed briefly in <cit.> and <cit.>), a form of which is shown on Figure <ref>, given by:

ε = 1 + θ - 2^θ, where θ = ⌈lg n⌉ - lg n,

one can conclude[See <cit.>, Thm. 12.2 p. 94 for a proof.] that, for every n > 0,

n⌈lg n⌉ - 2^{⌈lg n⌉} = n(lg n + ε - 1),

which yields

W(n) = n(lg n + ε - 1) + 1 = n lg n + (ε - 1) n + 1.

Function ε given by (<ref>) is a continuous function of n on the set of reals > 0. It assumes the minimum 0 for every n = 2^{⌊lg n⌋} and the maximum

δ = 1 - lg e + lg lg e ≈ 0.0860713320559342, [The constant 1 - lg e + lg lg e has been known as the Erdös constant δ. Erdös used it around 1955 in order to establish an asymptotic upper bound for the number M(k) of different numbers in a multiplication table of size k × k by means of the following limit: lim_{k →∞} ln(k × k / M(k)) / ln ln(k × k) = δ.]

for every n = 2^{⌊lg n + lg lg e⌋} ln 2 and only such n. The function ε restricted to integers never reaches the value δ. However, δ is the supremum of ε restricted to integers. Proof in <cit.>. Characterization (<ref>) and Property <ref> yield close smooth bounds of W(n). They are both of the form n lg n + c n + 1 and they sandwich W(n) tightly between each other. If one sees W(n) as an infinite polygon[Which it is.], its lower bound circumscribes it and its upper bound inscribes it. W(n) is a continuous concave function, linear between the points n = 2^{⌊lg n⌋}, that for every n > 0 satisfies this inequality:

n lg n - n + 1 ≤ W(n) ≤ n lg n - (1-δ) n + 1 < n lg n - 0.913 n + 1,

with the left ≤ becoming = for every n = 2^{⌊lg n⌋} and the right ≤ becoming = for every n = 2^{⌊lg n + lg lg e⌋} ln 2, and only for such n. Moreover, the graph of W(n) is tangent to the graph of n lg n - (1-δ) n + 1 at the points n = 2^{⌊lg n + lg lg e⌋} ln 2, and only at such points. Proof in <cit.>.
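The sandwich bounds, with the constant δ = 1 - lg e + lg lg e ≈ 0.08607, can be verified numerically. The following Python sketch is not from the paper; it checks the lower bound n lg n - n + 1 and the upper bound n lg n - (1-δ)n + 1 for all n up to 50,000, with a small float tolerance since the upper bound comes extremely close to W(n) near its tangent points.

```python
from math import ceil, e, log2

def W(n):
    # Exact worst case: n*ceil(lg n) - 2^(ceil(lg n)) + 1.
    c = ceil(log2(n))
    return n * c - 2 ** c + 1

lg = log2
delta = 1 - lg(e) + lg(lg(e))   # Erdos constant, ~0.0860713

for n in range(1, 50000):
    lower = n * lg(n) - n + 1
    upper = n * lg(n) - (1 - delta) * n + 1
    # Tolerance absorbs float round-off; the true gaps are nonnegative.
    assert lower - 1e-6 <= W(n) <= upper + 1e-6
```

At powers of two the lower bound is attained exactly (e.g. W(8) = 17 = 8·3 - 8 + 1), while for integers the upper bound is never attained, though at n = 11 it misses by only about 0.001.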
The bounds given by (<ref>) are really close[The distance between them is less than δn ≈ 0.0860713320559342 n for any positive integer n.] to the exact value of W(n), as shown on Figure <ref> page fig:boundsMergeSort. The exact value n⌈lg n⌉ - 2^{⌈lg n⌉} + 1 is a continuous function (if n is interpreted as a real variable) despite the fact that it incorporates the discontinuous ceiling function. It seems interesting that W(n) = n⌈lg n⌉ - 2^{⌈lg n⌉} + 1 (whether n is interpreted as a real variable or an integer variable) is linear between the points n = 2^{⌊lg n⌋} and linearly interpolates its own lower bound n lg n - n + 1 between these points. For n restricted to positive integers, the inequality (<ref>) can be slightly enhanced by replacing the ≤ symbol with <, with the following result. 1-δ is the greatest constant c such that for every integer n ≥ 1, W(n) < n lg n - c n + 1. Proof in <cit.>. Theorem <ref> can be reformulated as follows.

inf{c ∈ ℝ | ∀ n ∈ ℕ∖{0}, W(n) < n lg n - c n + 1} = 1 - δ.

Proof in <cit.>. No upper bound of W(n) that has the form n lg n - c n + 1 can coincide with W(n) at any integer n, as the following fact ascertains. There is no constant c such that for every integer n ≥ 1, W(n) ≤ n lg n - c n + 1 and for some integer n ≥ 1, W(n) = n lg n - c n + 1. Proof in <cit.>. In particular[Note the ≤ symbol in (<ref>).],

inf{c ∈ ℝ | ∀ n ∈ ℕ∖{0}, W(n) ≤ n lg n - c n + 1} = 1 - δ.

Moreover, we can conclude from Theorem <ref> the following fact. 1-δ is the greatest constant c such that for every integer n ≥ 1, W(n) ≤ ⌈n lg n - c n⌉. Proof in <cit.>. Since for any integer n ≥ 1, W(n) is an integer, the lower bound given by (<ref>) yields

⌈n lg n⌉ - n + 1 ≤ W(n) ≤ ⌈n lg n - 0.913 n⌉.

By virtue of Corollary <ref>, for some integers n ≥ 1,[For instance, for n=11.]

W(n) > ⌈n lg n - 0.914 n⌉.

Although the bounds given by (<ref>) [Almost the same bounds were given in <cit.>; see Section <ref> for more details on this.]
are tighter than those given by (<ref>), they nevertheless involve the discontinuous ceiling function, so that they may not be as easy to visualize or analyze as some differentiable functions, thus losing their advantage over the precise formula W(n) = n⌈lg n⌉ - 2^{⌈lg n⌉} + 1. Therefore, the bounds given by (<ref>) appear to have an analytic advantage over those given by (<ref>).

§ OTHER PROPERTIES OF THE RECURSION TREE T_N

This section contains some well-known auxiliary facts that I didn't need for the derivation of the exact formula for W(n) but am going to derive from the Main Lemma 4.1 of <cit.> for the sake of thoroughness of my analysis of the decision tree T_n. The depth h of the recursion tree T_n is

h = ⌈lg n⌉.

Proof in <cit.>. Theorem <ref> allows for quick derivation of a fairly close upper bound on the number of comps performed by MergeSort on an n-element array. Since at each level of T_n fewer than n comparisons are performed by Merge and at level h no comps are performed, and there are h = ⌈lg n⌉ levels below level h, the total number of comps is not larger than

(n-1)h = (n-1)⌈lg n⌉ < (n-1)(lg n + 1) ∈ O(n log n).

A cut of a tree T_n is a set Γ of nodes of T_n such that every branch[A maximal path.] in T_n has exactly one element in Γ. The sum of values shown at the elements of any cut of T_n is n. Proof in <cit.>. The number of leaves in the recursion tree T_n is n. Proof in <cit.>. The following corollary provides some statistics about recursive calls to MergeSort. For every integer n > 0,
* T_n has 2n - 1 nodes.
* The number of recursive calls spurred by MergeSort on any n-element array is 2(n - 1).
* The sum S_n of all values shown in the recursion tree T_n on Figure <ref> is equal to:

S_n = n⌈lg n⌉ - 2^{⌈lg n⌉} + 2n = n(lg n + ε + 1).

* The average size A_n of array passed to any recursive call to MergeSort while sorting an n-element array is:

A_n = (1/2)(1 + 1/(n-1))(lg n + ε) ≈ (1/2)(lg n + ε).

Proof in <cit.>. Here is a very insightful property.
It states that MergeSort splits its input array fairly evenly[The sizes of the sub-arrays passed to recursive calls at any non-empty level k of the decision tree T_n above the last non-empty level h are the same as the sizes of the elements of the maximally even partition of an n-element set into 2^k subsets.] so that at any level of the recursion tree, the difference between the lengths of the longest sub-array and the shortest sub-array is ≤ 1. This fact is the root cause of the good worst-case performance of MergeSort. The difference between values shown by any two nodes in the same level of the recursion tree T_n is ≤ 1. Proof in <cit.>. Property <ref> has this important consequence that Merge is, by virtue of the observation after Theorem <ref>, worst-case comparison-optimal while merging any two sub-arrays of the same level of the recursion tree. Thus the worst case of MergeSort cannot be improved just by replacing Merge with some tricky merging X as long as X merges by means of comparisons of keys. Replacing Merge with any other method that merges sorted arrays by means of comps will not improve the worst-case performance of MergeSort measured with the number of comps while sorting an array. Proof follows from the above observation. Since a parent must show a larger value than any of its children, Property <ref> has also the following consequence. The leaves in the recursion tree T_n can only reside at the last two non-empty levels of T_n. Proof follows from Property <ref> as the above observation indicates. As a result, one can conclude[Cf. <cit.>, Sec. 5.3.1 Ex. 20 page 195.] that the recursion tree T_n has the minimum internal and external path lengths among all binary trees on 2n - 1 nodes. Since all nodes at the level h of the recursion tree T_n are leaves and show value 1, no node at level h-1 can show a value >2.
Indeed, level h-1 may only contain leaves, which show value 1, and parents of nodes of level h, which show value 1+1=2. This observation and the previous result allow for an easy characterization of the contents of the last two non-empty levels of tree T_n. For every n ≥ 2: *there are 2^h - n leaves, all showing value 1, at the level h-1, *there are n - 2^(h-1) non-leaves, all showing value 2, at the level h-1, and *there are 2n - 2^h [This value shows in the lower right corner of Figure <ref> of a sketch of the recursion tree T_n; it was not needed for the derivation of the main result (<ref>), and is included for the sake of completeness only.] nodes, all leaves showing value 1, at the level h of the recursion tree T_n, where h is the depth[The level number of the last non-empty level of T_n.] of T_n. Proof in <cit.>. § A DERIVATION OF W(N) WITHOUT REFERENCES TO THE RECURSION TREE In order to formally prove Theorem <ref> without any reference to the recursion tree, I use here the well-known[For instance, derived in <cit.> and <cit.>.] recurrence relation W(n) = W(⌊n/2⌋) + W(⌈n/2⌉) + n - 1 for n ≥ 2, W(1) = 0, that easily follows from the description (Algorithm <ref>) of MergeSort, steps <ref>, <ref> and Theorem <ref>. I am going to prove, by direct inspection, that the function W(n) defined by (<ref>) satisfies equations (<ref>) and (<ref>). The details of the proof are in <cit.>. § OTHER WORK Although some variants of parts of the formula (<ref>) appear to have been known for quite some time now, even otherwise precise texts offer derivations that leave room for improvement.
For instance, the recurrence relation for MergeSort analyzed in <cit.> asserts that the least number of comparisons of keys performed outside the recursive calls, if any, that suffice to sort an array of size n is n rather than n-1. This seemingly inconsequential variation results in a solution W(n) = ∑_i=1^n-1 (⌊lg i⌋ + 2) [I saw W(n) = ∑_i=1^n-1 ⌊lg i⌋ on slides that accompany <cit.>.] on page 2, Exercise 1.4, rather than the correct formula (<ref>) W(n) = ∑_i=1^n ⌈lg i⌉ derived in this paper. (Also, the relevant derivations presented in <cit.>, although quite clever, are not nearly as precise and elementary as those presented in this paper.) As a result, the fact that MergeSort performs exactly the same number of comparisons of keys as does another classic, binary insertion sort, considered by H. Steinhaus and analyzed in <cit.>, remains unnoticed. Pages 176 – 177 of <cit.> contain an early sketch of a proof of W(n) = nh - 2^h + 1, where h is the depth of the recursion tree T_n, with remarkably close[Although not 100 percent correct.] bounds given by (<ref>). It is similar[The idea behind the sketch of the derivation in <cit.> was based on an observation that W(n) = ∑_i=0^h-2 (n - 2^i) + (n - B)/2, where B was the number of leaves at the level h-1 of the decision tree T_n; it was sketchily derived from the recursion tree shown on Figure <ref> and properties stated in Corollary <ref> (with only a sketch of proof in <cit.>) not needed for the derivation presented in Section <ref>.] to a simpler derivation based on the equality (<ref>), presented in this paper in Section <ref> and outlined in (<ref>) (except for the ∑_i=1^n ⌈lg i⌉ part), which it predates by several years.
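The recurrence, the closed form, and the sum ∑_i=1^n ⌈lg i⌉ discussed above can be cross-checked numerically. A short Python sketch (mine, not part of the paper; ⌈lg n⌉ is computed exactly via bit lengths rather than floating-point logarithms):

```python
from functools import lru_cache

def ceil_lg(n):
    """Exact ceiling of lg n for n >= 1: (n-1).bit_length() equals ceil(log2(n))."""
    return (n - 1).bit_length()

@lru_cache(maxsize=None)
def W_rec(n):
    """W(n) from the recurrence W(1) = 0, W(n) = W(floor(n/2)) + W(ceil(n/2)) + n - 1."""
    if n == 1:
        return 0
    return W_rec(n // 2) + W_rec((n + 1) // 2) + n - 1

def W_closed(n):
    """The closed form n * ceil(lg n) - 2^ceil(lg n) + 1."""
    h = ceil_lg(n)
    return n * h - 2 ** h + 1

def W_sum(n):
    """The binary-insertion-sort characterization: sum of ceil(lg i) for i = 1..n."""
    return sum(ceil_lg(i) for i in range(1, n + 1))
```

The three functions agree for every positive n; in particular W_sum(11) = 29 and W_closed(500) = 3,989, matching the counts quoted elsewhere in the paper.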
The <cit.>'s version of the decision tree T_n (Figure 4.14 page 177 of <cit.>, shown here on Figure <ref>) was a re-use of a decision tree for the special case of n = 2^⌊lg n⌋, with an ambiguous, if at all correct[It may be interpreted as to imply that for any level k, all the left-child sizes at level k are the same and all the right-child sizes at level k are the same, neither of which is a valid statement.], comment in the caption that “[w]henever a node size parameter is odd, the left child size parameter is rounded up[Should be: down, according to (<ref>).] and the right child size is rounded down[Should be: up, according to (<ref>).].” The proof of the fact, needed for the derivation in <cit.>, that T_n had no leaves outside its last two levels (Corollary <ref>, not needed for the derivation presented in Section <ref>) was waved away with the claim “[w]e can[This I do not doubt.] determine that [...]” Although h was claimed in <cit.> to be equal to ⌈lg(n+1)⌉ [Which claim must have produced an incorrect formula n⌈lg(n+1)⌉ - 2^⌈lg(n+1)⌉ + 1 for W(n) and precluded concluding the neat characterization W(n) = ∑_i=1^n ⌈lg i⌉.] (and not to the correct ⌈lg n⌉ given by the equality (<ref>), a fact not needed for the derivation presented in Section <ref>), somehow the mostly correct conclusion[Almost identical with (<ref>), except for the constant 0.914.] was inferred from it, however, with no details offered, except for a mention that a function α that satisfies h = lg n + α, similar to the function ε shown on Figure <ref>, was used. It stated that (Theorem 4.6, page 177, in <cit.>): ⌈n lg n - n + 1⌉ ≤ W(n) ≤ ⌈n lg n - 0.914 n⌉. It follows from (<ref>) that the constant 0.914 that appears in (<ref>) is incorrect.
It was a rounding error[Of 1 - δ, where δ is given by (<ref>).], I suppose, that produced a false upper bound[For instance, if n = 11 then MergeSort performs 29 comparisons of keys while the value of the upper bound ⌈n lg n - .914 n⌉ given in <cit.>, Theorem 4.6, p. 177, is 28; this is a significant error as 28 or fewer comps while sorting any 11-element array would beat the binary insertion sort that requires ∑_i=1^11 ⌈lg i⌉ = 29 comps in the worst case.]. § BEST-CASE ANALYSIS OF MERGESORT It turns out that the derivation of the minimum number B(n) of comps performed by MergeSort on an n-element array is a bit more tricky. A formula n/2 (⌊lg n⌋ + 1) - ∑_k=0^⌊lg n⌋ 2^k ⟨n/2^(k+1)⟩, where ⟨x⟩ = min(x - ⌊x⌋, ⌈x⌉ - x), has been derived and thoroughly analyzed in <cit.>. It has also been demonstrated in <cit.> that there is no closed-form formula for B(n). Incidentally, as was pointed out in <cit.>, B(n) is equal to the sum A(n,2) of bits in the binary representations of all integers < n. § A JAVA CODE OF MERGESORT Figure <ref> shows a Java code of MergeSort. § GENERATING WORST-CASE ARRAYS FOR MERGESORT Figure <ref> shows a self-explanatory Java code of the recursive method unSort that, given a sorted array A, reshuffles it, in a way resembling InsertionSort[Although not with InsertionSort's sluggishness; the number of moves of keys it performs is only slightly more than the minimum number (<ref>) of comps performed by MergeSort on any n-element array.], into a worst-case array for MergeSort.
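The effect of such a reshuffling can be sketched in a few lines of Python (my own reconstruction, not the paper's Java unSort, and not necessarily producing the same permutation): sending odd-ranked keys to the left half and even-ranked keys to the right half, recursively, makes every Merge performed by a standard top-down MergeSort (left part of size ⌊n/2⌋) interleave maximally and thus cost n-1 comps.

```python
def un_sort(vals):
    """Permute the sorted list vals so that every merge performed by a
    standard top-down MergeSort attains its (n-1)-comparison worst case."""
    n = len(vals)
    if n <= 1:
        return list(vals)
    # odd-ranked keys form the left half (size n//2),
    # even-ranked keys the right half (size n - n//2)
    return un_sort(vals[1::2]) + un_sort(vals[0::2])

def mergesort_count(a):
    """Return (sorted copy of a, number of key comparisons performed)."""
    comps = [0]
    def ms(x):
        if len(x) <= 1:
            return x
        mid = len(x) // 2
        left, right = ms(x[:mid]), ms(x[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comps[0] += 1          # one key comparison per loop iteration
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]
    return ms(a), comps[0]
```

Since each recursive half again receives a worst-case instance, the comparison count satisfies exactly the recurrence for W(n); on 500 keys the count is 3,989, as in the paper's example.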
For instance, it produced this array of integers between 1 and 500: 1, 500, 2, 3, 4, 7, 5, 6, 8, 15, 9, 10, 11, 14, 12, 13, 16, 31, 17, 18, 19, 22, 20, 21, 23, 30, 24, 25, 26, 29, 27, 28, 32, 62, 33, 34, 35, 38, 36, 37, 39, 46, 40, 41, 42, 45, 43, 44, 47, 61, 48, 49, 50, 53, 51, 52, 54, 60, 55, 56, 57, 59, 58, 63, 124, 64, 65, 66, 69, 67, 68, 70, 77, 71, 72, 73, 76, 74, 75, 78, 92, 79, 80, 81, 84, 82, 83, 85, 91, 86, 87, 88, 90, 89, 93, 123, 94, 95, 96, 99, 97, 98, 100, 107, 101, 102, 103, 106, 104, 105, 108, 122, 109, 110, 111, 114, 112, 113, 115, 121, 116, 117, 118, 120, 119, 125, 249, 126, 127, 128, 131, 129, 130, 132, 139, 133, 134, 135, 138, 136, 137, 140, 155, 141, 142, 143, 146, 144, 145, 147, 154, 148, 149, 150, 153, 151, 152, 156, 186, 157, 158, 159, 162, 160, 161, 163, 170, 164, 165, 166, 169, 167, 168, 171, 185, 172, 173, 174, 177, 175, 176, 178, 184, 179, 180, 181, 183, 182, 187, 248, 188, 189, 190, 193, 191, 192, 194, 201, 195, 196, 197, 200, 198, 199, 202, 216, 203, 204, 205, 208, 206, 207, 209, 215, 210, 211, 212, 214, 213, 217, 247, 218, 219, 220, 223, 221, 222, 224, 231, 225, 226, 227, 230, 228, 229, 232, 246, 233, 234, 235, 238, 236, 237, 239, 245, 240, 241, 242, 244, 243, 250, 499, 251, 252, 253, 256, 254, 255, 257, 264, 258, 259, 260, 263, 261, 262, 265, 280, 266, 267, 268, 271, 269, 270, 272, 279, 273, 274, 275, 278, 276, 277, 281, 311, 282, 283, 284, 287, 285, 286, 288, 295, 289, 290, 291, 294, 292, 293, 296, 310, 297, 298, 299, 302, 300, 301, 303, 309, 304, 305, 306, 308, 307, 312, 373, 313, 314, 315, 318, 316, 317, 319, 326, 320, 321, 322, 325, 323, 324, 327, 341, 328, 329, 330, 333, 331, 332, 334, 340, 335, 336, 337, 339, 338, 342, 372, 343, 344, 345, 348, 346, 347, 349, 356, 350, 351, 352, 355, 353, 354, 357, 371, 358, 359, 360, 363, 361, 362, 364, 370, 365, 366, 367, 369, 368, 374, 498, 375, 376, 377, 380, 378, 379, 381, 388, 382, 383, 384, 387, 385, 386, 389, 404, 390, 391, 392, 395, 393, 394, 396, 403, 397, 398, 399, 402, 400, 401, 405, 
435, 406, 407, 408, 411, 409, 410, 412, 419, 413, 414, 415, 418, 416, 417, 420, 434, 421, 422, 423, 426, 424, 425, 427, 433, 428, 429, 430, 432, 431, 436, 497, 437, 438, 439, 442, 440, 441, 443, 450, 444, 445, 446, 449, 447, 448, 451, 465, 452, 453, 454, 457, 455, 456, 458, 464, 459, 460, 461, 463, 462, 466, 496, 467, 468, 469, 472, 470, 471, 473, 480, 474, 475, 476, 479, 477, 478, 481, 495, 482, 483, 484, 487, 485, 486, 488, 494, 489, 490, 491, 493, 492.It took my MergeSort 3,989 comps to sort it. Of course, 500⌈lg 500⌉ - 2^⌈lg 500⌉ + 1 = 4,500 - 512 + 1 = 3,989. § NOTES FROM MY ANALYSIS OF ALGORITHMS LECTURE Below are some of the class digital notes I wrote while lecturing Analysis of Algorithms in Spring 2012, with some comments added after class. Figure 4.14 (decision tree) is from the course textbook <cit.>, page 177, showing a decision tree for MergeSort. Note: This figure is copyrighted by Addison Wesley Longman (2000). I used it transformatively in my class for nonprofit education, criticism, and comment purposes only, and not for any other purpose, as prescribed by U.S. Code Title 17 Chapter 1 para 107 that established the “fair use” exception of copyrighted material. (graphics omitted) Below is the improved recursion tree (of the Figure 4.14, page 177, of <cit.>) that I used in class in Spring 2012. (graphics omitted) In Spring 2010 and before, I was deriving the equality (<ref>) during my lectures directly from the recurrence relation (<ref>), (<ref>). © 2017 Marek A. Suchenek. All rights reserved by the author. A non-exclusive license to distribute this article is granted to arXiv.org.
http://arxiv.org/abs/1702.08443v1
{ "authors": [ "Marek A. Suchenek" ], "categories": [ "cs.DS", "cs.CC", "cs.DM", "68W40 Analysis of algorithms", "F.2.2; G.2.0; G.2.1; G.2.2" ], "primary_category": "cs.DS", "published": "20170226071246", "title": "Elementary Yet Precise Worst-case Analysis of MergeSort, A short version (SV)" }
1 Max-Planck Institute of Microstructure Physics, Weinberg 2, D-06120 Halle, Germany 2 Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan 3 Department of Physical Chemistry, Faculty of Chemistry and Pharmacy, Sofia University, 1126 Sofia, Bulgaria. 4 Lehrstuhl für Theoretische Festkörperphysik, Staudtstr. 7-B2, 91058 Erlangen, Germany. 5 Max Planck Institute for Chemical Physics of Solids, Nöthnitzer Strasse 40, 01187 Dresden, Germany 6 Department of Physics, Indian Institute of Technology, Roorkee, 247667 Uttarkhand, IndiaWith a view to the design of hard magnets without rare earths we explore the possibility of large magnetocrystalline anisotropy energies in Heusler compounds that are unstable with respect to a tetragonal distortion. We consider the Heusler compounds Fe_2YZ with Y = (Ni, Co, Pt), and Co_2YZ with Y = (Ni, Fe, Pt) where, in both cases, Z = (Al, Ga, Ge, In, Sn). We find that for the Co_2NiZ, Co_2PtZ, and Fe_2PtZ families the cubic phase is always, at T=0, unstable with respect to a tetragonal distortion, while, in contrast, for the Fe_2NiZ and Fe_2CoZ families this is the case for only 2 compounds – Fe_2CoGe and Fe_2CoSn. For all compounds in which a tetragonal distortion occurs we calculate the MAE finding remarkably large values for the Pt containing Heuslers, but also large values for a number of the other compounds (e.g. Co_2NiGa has an MAE of -2.11 MJ/m^3). The tendency to a tetragonal distortion we find to be strongly correlated with a high density of states at the Fermi level in the cubic phase. As a corollary to this fact we observe that upon doping compounds for which the cubic structure is stable such that the Fermi level enters a region of high DOS, a tetragonal distortion is induced and a correspondingly large value of the MAE is then observed. Large magnetocrystalline anisotropy in tetragonally distorted Heuslers: a systematic study. E. K. U. 
Gross^1 December 30, 2023 ===========================================================================================§ INTRODUCTION Underpinning a diverse range of modern technologies, from computer hard drives to wind turbines, are hard magnets. These are magnets in which the local moments all preferentially align along a certain crystallographic direction, and may be characterized by the energy difference with an unfavorable spatial direction, known as the magnetocrystalline anisotropy energy (MAE). Evidently the local moments of such magnets are very stable and so make excellent permanent magnets, hence their central role in various technologies<cit.>. Current production of hard magnets relies on alloys of rare earth elements, in particular neodymium and dysprosium, e.g. the “neodymium magnet” Nd_2Fe_14B. Such alloys, due to the localized nature of the open shell f-electrons of the rare earths, possess a very large spin orbit coupling and, as a consequence, very high MAE values. However, as the rare earths are both costly and highly polluting to extract from ore there is a current focus on the design of hard magnets without rare earths<cit.>. Magnetic materials having a low crystal symmetry evidently possess a natural spatial anisotropy, and this in turn can lead to very large values of the MAE. Such low symmetry magnets therefore offer a promising design route towards the next generation of hard magnets. Accurate calculation of the MAE requires sophisticated and computationally expensive first principles calculations, making difficult the kind of high throughput search that might be expected to yield interesting high MAE materials. In this paper we show that for a promising materials class - the Heusler alloys - the density of the states at the Fermi level provides a very good indicator of the propensity to distortion, and therefore of the likelihood of finding a high MAE material within this class. 
The use of such material markers for high MAE can, we believe, significantly alleviate the computation bottleneck preventing high throughput search. The Heusler materials have attracted sustained attention due both to their exceptional magnetic properties and to the huge variety of possible compounds that may be experimentally realized<cit.>; reviews may be found in Refs. Felser, Graf1, Graf2, Kreiner. These materials, which consist of 4 inter-penetrating face centred cubic lattices, often exhibit a symmetry lowering structural transition to a tetragonal or hexagonal phase<cit.>, raising the possibility of a crystal symmetry lowering induced large MAE. For Mn rich Heusler alloys this has previously been explored<cit.>; here we consider this possibility in the Heusler families Fe_2YZ with Y = (Ni, Co, Pt), and Co_2YZ with Y = (Ni, Fe, Pt) where, in both cases, Z = (Al, Ga, Ge, In, Sn). Our principal findings are that (i) the Co_2NiZ and Co_2PtZ classes naturally distort to a tetragonal structure with c/a values in the range 1.3-1.5; (ii) the Fe rich Heuslers generally do not distort, with the exceptions of Fe_2CoZ where Z = Ge or Sn and the Fe_2PtZ family; (iii) this distortion can induce a very high MAE - of up to 5 MJ/m^3 for the Pt containing Heuslers, comparable to the best known transition metal magnet L1_0-FePt, and of up to 1 MJ/m^3 for the Co rich but Pt free Heuslers. In each case where a distortion occurs the volume change is found to be very small (a few percent at most), with the exception of Fe_2PtGe in which a 6% increase of volume occurs upon distortion. We furthermore find that this tendency to tetragonal distortion strongly correlates with a rather simple material descriptor, namely the density of states (DOS) at the Fermi level.
A high DOS favours tetragonal distortion and, on this basis, we consider the possibility of inducing a tetragonal distortion by moderate doping (via a virtual crystal approximation) that shifts the Fermi energy from a low to a high DOS position. Consistent with the validity of this material descriptor, we find that the Heusler alloys Co_2FeAl and Co_2FeSn - in which the Fermi energy lies far from and close to a high DOS region respectively - both spontaneously suffer tetragonal distortion upon doping. § CALCULATION DETAILS For structural relaxation we use the Vienna ab initio simulation package (VASP)<cit.> with projector augmented wave (PAW) pseudopotentials <cit.>, a plane-wave-basis set energy cutoff of 400 eV, and the Perdew-Burke-Ernzerhof (PBE) functional <cit.>. Reciprocal space integration has been performed with a Γ-centered Monkhorst-Pack 10×10×10 mesh. The structural optimization has been converged to a tolerance of 10^-5 eV, whereas the MAE values were obtained with a tolerance of 10^-7 eV. All calculations are performed in the presence of the spin-orbit coupling term. For the calculation of the MAE we have also deployed the all-electron full-potential linearized augmented-plane wave (FP-LAPW) code Elk<cit.>. The definition of MAE adopted in this study is the following: MAE = E^ tot_[100] - E^ tot_[001], where E^ tot_[100] (E^ tot_[001]) represents the total energy with spin orientation in the [100] ([001]) direction. A positive value of the MAE therefore indicates that the out-of-plane spin configuration is energetically favourable, whereas a negative one that the in-plane direction is favourable. The Heusler structure is described by the X_2YZ general formula, in which the species X and Y are transition metal elements whereas the Z atom is a p-orbital element with metal character (from III or IV main groups).
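The MAE defined above is an energy difference per cell; converting it, and the cell magnetic moment, to the volume densities (MJ/m^3 and kA/m) quoted later is a plain unit conversion. A minimal Python sketch (helper names and the numerical inputs are illustrative, not taken from the paper):

```python
# Physical constants (SI)
EV = 1.602176634e-19      # J per eV
MU_B = 9.2740100783e-24   # Bohr magneton, A*m^2
ANG3 = 1e-30              # m^3 per cubic angstrom

def mae_mj_per_m3(delta_e_ev, volume_ang3):
    """MAE = E[100] - E[001] per cell (eV), cell volume in Angstrom^3 -> MJ/m^3."""
    return delta_e_ev * EV / (volume_ang3 * ANG3) / 1e6

def ms_ka_per_m(moment_mu_b, volume_ang3):
    """Cell moment (Bohr magnetons), cell volume in Angstrom^3 -> kA/m."""
    return moment_mu_b * MU_B / (volume_ang3 * ANG3) / 1e3
```

For a hypothetical cell of 50 Å^3 per formula unit, an anisotropy energy of 1 meV per cell corresponds to roughly 3.2 MJ/m^3, and a moment of 4 μB to roughly 740 kA/m, i.e. the scale of the values reported below.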
The crystal structure consists of four inter-penetrating face centred cubic lattices and belongs to the 225 (Fm-3m) symmetry group for the regular Heusler structure, and 216 (F-43m) for the inverse Heusler; Wyckoff positions of the atoms are presented in Table. I. § STRUCTURAL DISTORTION We first consider the stability with respect to tetragonal distortion of the Heusler alloys X_2YZ in which the X sub-lattices are occupied by either Fe or Co, the Y sub-lattice by Fe, Co, Ni, or Pt, and Z sub-lattice by Al, Ge, Ga, Sn, or In. This represents 30 materials in total, of which 10 have been previously experimentally synthesized; for details we refer the reader to Table I and II of Appendix A. In Fig. <ref> we present the DOS at the Fermi energy of each of these Heusler materials for both the high symmetry cubic phase and, where it exists, the tetragonal structure. For the Co rich Heuslers Co_2NiZ and Co_2PtZ the high symmetry phase is always unstable with respect to tetragonal distortion while, in contrast, in the case of the Fe containing Heuslers the cubic phase is generally stable. There are two exceptions to this latter rule: Fe_2NiGe and Fe_2NiSn, and the Fe_2PtZ family. For all cases where the tetragonal phase is stable we find the c/a ratios in the range 1.3-1.5 with the high end c/a ratios found for the Fe_2YZ Heuslers in which Z is either Ge or Sn (curiously, as we will see, these are also the Heusler compounds that have the desired positive MAE). Of the Heuslers in Fig. 1 that have been experimentally synthesized only one, Co_2NiGa, is observed in the tetragonal structure, in agreement with our calculations; all others are found to be cubic, also in agreement with our calculations (with the exception of Fe_2NiGe for which we predict a tetragonal structure, this case will be discussed in detail below). For each structure we have also determined whether the material takes on a regular or inverse occupation of the sub-lattices. As may be seen in Fig. 
<ref> most of the structures are inverse Heusler except for the Co_2FeZ family where the regular cubic structure has a lower ground state energy. This finding is in a good agreement with an empirical rule first stated in Ref. Graf2: when the electronegativity of the Y element is larger than that of the X element the system prefers the inverse Heusler structure, with otherwise the regular Heusler structure realized. There are, however, two deviations from this rule in our results. We find that Fe_2NiGe and Fe_2NiSn adopt a tetragonally distorted regular structure, in agreement with previous theoretical work<cit.>, but in contrast to the inverse structure expected on the basis of the empirical rule (the electronegativity of Ni is higher than that of Fe).In Ref. Gasi experiment reports, in agreement with the semi-empirical rule, a cubic inverse structure for Fe_2NiGe. Accompanying theoretical calculations<cit.>, however, find that the energy change due to antisite disorder is always much smaller than the thermal energy available due to annealing (which takes place at 650 K in the experiment). The authors of Ref. Gasi therefore conclude that annealing will control the state of order for the Fe_2NiZ family. The mismatch between experiment and our results, calculated for fully ordered structures, therefore likely has its origin in thermal induced substitutional disorder. It is worth pointing out that the energy difference between the tetragonally distorted regular structure (our lowest energy structure) and the inverse cubic structure is 120 meV, i.e. about double the thermal energy due to annealing. 
This indicates that the presence of antisite disorder has a significant impact on the propensity of this material towards tetragonal disorder and that, at least for the Fe_2NiZ family, the coupling between disorder and tetragonal distortion is a subject worthy of further investigation.We now consider the electronic origin of the instability of the cubic phase with respect to a tetragonal distortion. Such instability of the high symmetry phase has been observed in many Heusler compounds, in particular the Mn rich Heuslers<cit.>, and has been attributed to a number of different mechanisms: a Jahn-Teller effect<cit.>, a “band” JT effect<cit.>, a nesting induced Fermi surface instability<cit.>, and anomalous phonon modes<cit.>.In Fig. <ref> we present the total density of states for four representative examples of the set of Heusler compounds we investigate. For all four cases (and for all Heuslers we study in this work) the minority spin channel is not significantly involved in the mechanism of distortion, having a very low DOS near the Fermi energy. For the cases (Co_2NiAl, Fe_2NiGe) in which the cubic phase is unstable we see a clear redistribution of spectral weight near the Fermi energy, such that a high DOS near E_F is lowered by the opening up of a“valley” near E_F in the tetragonal phase. On the other hand, for the materials in which the cubic phase is stable the DOS at E_F is already very low (see the right hand panels of Fig. <ref> for the representative cases of Co_2FeAl and Fe_2CoGe). As may be seen in Fig. <ref> for the case of Co_2NiAl this redistribution of weight occurs in all species and momentum channels, but with states of Co character being the more important. Interestingly, it is seen that the redistribution occurs particularly in states of e_g character.§ MAGNETIC MOMENTS AND MAGNETOCRYSTALLINE ANISOTROPY In Fig. <ref> we present the total magnetic moment, saturation magnetization M_s and MAE for the Co_2YZ and Fe_2YZ Heusler families. 
In all systems the magnetic order is found to be ferromagnetic. To a good approximation the values of the saturation magnetization M_s fall into four distinct bands: (i) M_s close to 500 kA/m for Co_2NiZ; (ii) M_s close to 900 kA/m for Co_2FeZ, Fe_2NiZ, and Fe_2CoZ; (iii) M_s close to 500 kA/m for Co_2PtZ; and (iv) M_s close to 800 kA/m for Fe_2PtZ. From the viewpoint of hard magnetic applications a high value of the saturation magnetization is desired, and from this point of view the Co_2FeZ, Fe_2NiZ, Fe_2CoZ, and Fe_2PtZ compounds are most interesting. For comparison we recall that the two “standard” hard magnets have saturation magnetizations of 970 kA/m for SmCo_5 and 1280 kA/m for Nd_2Fe_14B.We now turn to a discussion of the MAE values realized in the cases for which a tetragonal distortion occurs (see also Fig. <ref>). We first note that a positive value of the MAE indicates that the magnetic moments all align with the symmetry axis of the tetragonally distorted material: this is essential for hard magnetic applications. When the MAE takes on a negative value this indicates that the moments are in the plane perpendicular to this symmetry axis. We have checked the energy required to rotate spins in-plane finding, as expected, a very soft energy dependence. This freedom to rotate the spin structure obviously renders such cases entirely unsuitable for hard magnetic application. We will therefore focus on those cases for which the MAE is positive.Of the 15 compounds that suffer a tetragonal distortion only 6 have E_MAE > 0. Curiously, these are the compounds for which the Z element is either Ge or Sn: Co_2NiGe, Fe_2NiGe, Fe_2NiSn, Co_2PtSn, Fe_2PtGe, and Fe_2PtSn. The values of the MAE for the Pt free compounds are all, as expected, modest as compared to the Pt containing compounds. 
The maximum positive MAE values for Pt free compounds are found in Fe_2NiGe and Fe_2NiSn, with an MAE of ≈ 1 MJ/m^3, while for the Pt containing compounds we find a much higher MAE of 5.19 MJ/m^3 for Fe_2PtGe. This value is close to the currently highest value observed for an MAE in a rare earth free material (a value of 7 MJ/m^3 for L1_0-FePt<cit.>). The rather high M_s value of 516 kA/m suggests this material might be interesting to further explore in the context of specialist application as a hard magnet. § DISTORTION CONTROL The previous two sections lead us to conclude that (i) the propensity to tetragonal distortion strongly correlates with a high DOS at the Fermi energy in the cubic phase and (ii) that if a tetragonal distortion occurs, high values of the MAE are possible. This raises the possibility of, with a view to engineering a high MAE, inducing such a distortion by doping. To this end we consider the two materials presented in Fig. <ref> in which the Fermi energy lies in the valley between the two high DOS regions, and dope the cubic phase within the virtual crystal approximation (VCA). In the case of Co_2FeAl a doping of 1.5 electrons is required to shift the Fermi energy into the high DOS region, with a more modest 0.3 electrons required in the case of Fe_2CoGe. In both cases we find that upon such doping, the cubic phase becomes unstable with respect to a tetragonal distortion; structural details may be found in Table II. A subsequent calculation of the MAE finds values comparable to those obtained for the naturally tetragonally distorting Heusler compounds. It is also interesting to note that the mechanism of the distortion appears to be somewhat different from the “natural” cases: while in Fig. <ref> it is clearly seen that the distortion results in a significant redistribution of spectral weight away from the Fermi energy via the opening of a “repulsion valley”, in Fig. <ref> this effect is seen to be much weaker.
This, of course, may be an artifact of the VCA. § CONCLUSION We have addressed the question of whether we may obtain large magnetocrystalline anisotropy energies in Heusler compounds that adopt a low symmetry tetragonal structure. To this end we have investigated the Heusler compounds Fe_2YZ with Y = (Ni, Co, Pt), and Co_2YZ with Y = (Ni, Fe, Pt) where, in both cases, Z = (Al, Ga, Ge, In, Sn). We find that the cubic phase of 15 of these 30 Heusler compounds is unstable with respect to tetragonal distortion; in particular, for the Co_2NiZ, Co_2PtZ, and Fe_2PtZ families the cubic phase is always, at T=0, unstable. In contrast, for the Fe_2NiZ and Fe_2CoZ families this is the case for only 2 compounds – Fe_2CoGe and Fe_2CoSn. The mechanism behind this distortion involves a significant redistribution of spectral weight near the Fermi energy, such that a “valley” in the DOS at the Fermi energy is opened up in the tetragonal phase leading to a reduction in the number of states near the Fermi energy. Curiously, we find that for the compounds we investigate a good rule of thumb exists: if the DOS at the Fermi level is greater than 4.5 states/eV, the cubic phase is unstable. Of the 15 compounds that suffer tetragonal distortion the magnetocrystalline anisotropy energies are found to range in values from -12 MJ/m^3 (Co_2PtAl) to +5.19 MJ/m^3 (Fe_2PtGe). As expected, the values of the MAE for the Pt free Heuslers are more modest in magnitude, and range in value from -2.38 MJ/m^3 (Co_2NiGa) to 1.09 MJ/m^3 (Fe_2NiSn). For hard magnet application only positive values of the magnetocrystalline anisotropy energies, which correspond to moments aligned with the tetragonal symmetry axis, are interesting. Interestingly, we find that the MAE takes on a positive value for all cases in which the Z element is either Ge or Sn. Finally, we have considered the possibility of doping the Heusler compounds in which the cubic phase is stable in order to induce a tetragonal distortion.
Using the virtual crystal approximation we find that this is indeed possible, and the doping-induced distortion results in magnetocrystalline anisotropy energies comparable to those obtained in the naturally distorting Heusler compounds.

§ ACKNOWLEDGMENTS

YM, GM and S. Sharma would like to thank the Heusler project funded by the MPG.

§ DETAILS OF THE STRUCTURAL AND MAGNETIC PROPERTIES OF THE HEUSLERS INVESTIGATED IN THIS WORK

In this Appendix we present structural details of the Heusler compounds calculated in the manuscript, along with experimental structure data where this exists. In Table III we present the Heusler compounds Co_2NiZ and Co_2FeZ, and in Table IV the compounds Fe_2NiZ and Fe_2CoZ, where in each case Z = Al, Ge, Ga, Sn, or In.

§ REFERENCES

[1] J. Winterlik et al., Adv. Mater. 24, 6283 (2012).
[2] R. W. McCallum, L. H. Lewis, R. Skomski, M. J. Kramer, and I. E. Anderson, Annu. Rev. Mater. Res. 44, 451 (2014).
[3] J. M. D. Coey, IEEE Trans. Magn. 47, 4671 (2011).
[4] J. M. D. Coey, Scripta Mater. 67, 524 (2012).
[5] M. J. Kramer, R. W. McCallum, I. A. Anderson, and S. Constantinides, JOM 64, 752 (2012).
[6] C. S. Lue and Y.-K. Kuo, Phys. Rev. B 66, 085121 (2002).
[7] V. Alijani et al., Phys. Rev. B 84, 224416 (2011).
[8] C. Felser, G. H. Fecher, and B. Balke, Angew. Chem. Int. Ed. 46, 668 (2007).
[9] T. Graf, C. Felser, and S. S. P. Parkin, Prog. Solid State Chem. 39, 1 (2011).
[10] T. Graf, J. Winterlik, L. Muchler, G. H. Fecher, C. Felser, and S. S. P. Parkin, Handbook of Magnetic Materials 21, 1 (2013).
[11] G. Kreiner et al., Z. Anorg. Allg. Chem. 640, 738 (2014).
[12] T. Roy and A. Chakrabarti, arXiv:1603.08350 (2016).
[13] L. Wollmann, S. Chadov, J. Kübler, and C. Felser, Phys. Rev. B 92, 064417 (2015).
[14] A. Talapatra, R. Arróyave, P. Entel, I. Valencia-Jaime, and A. H. Romero, Phys. Rev. B 92, 054107 (2015).
[15] L. Hongzhi et al., Solid State Commun. 170, 44 (2013).
[16] J. Winterlik et al., Adv. Mater. 24, 6283 (2012).
[17] L. Wollmann, S. Chadov, J. Kübler, and C. Felser, Phys. Rev. B 90, 214420 (2014).
[18] G. Kresse and J. Furthmüller, Comput. Mater. Sci. 6, 15 (1996).
[19] P. Blöchl, Phys. Rev. B 50, 17953 (1994).
[20] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[21] K. Dewhurst, S. Sharma, et al., http://elk.sourceforge.net/ (2016).
[22] M. Gillessen and R. Dronskowski, J. Comput. Chem. 31, 612 (2010).
[23] T. Gasi et al., Phys. Rev. B 87, 064411 (2013).
[24] C. Felser and A. E. Hirohata (eds.), Springer Series in Materials Science 222 (2016).
[25] J. Suits, Solid State Commun. 18, 423 (1976).
[26] S. R. Barman, S. Banik, A. K. Shukla, C. Kamal, and A. Chakrabarti, EPL 80, 57002 (2007).
[27] A. T. Zayak, P. Entel, K. M. Rabe, W. A. Adeagbo, and M. Acet, Phys. Rev. B 72, 054113 (2005).
[28] S. Paul, B. Sanyal, and S. Ghosh, J. Phys.: Condens. Matter 27, 035401 (2015).
[29] O. A. Ivanov, L. V. Solina, V. A. Demshira, and L. M. Magat, Phys. Met. Metallov. 35, 92 (1973).
[30] T. Fichtner et al., Metals 5, 484 (2015).
[31] M. S. Gabor, T. Petrisor, C. Tiusan, M. Hehn, and T. Petrisor, Phys. Rev. B 84, 134413 (2011).
[32] S. Husain, S. Akansel, A. Kumar, P. Svedlindh, and S. Chaudhary, Sci. Rep. 6, 28692 (2016).
[33] M. Zhang, E. Brück, F. R. de Boer, Z. Li, and G. Wu, J. Phys. D: Appl. Phys. 37, 2049 (2004).
[34] N. Uvarov, Y. Kudryavtsev, A. Kravets, Y. Vovk, R. Borges, M. Godinho, and V. Korenivski, J. Appl. Phys. 112, 063909 (2012).
[35] Y. J. Zhang, W. H. Wang, H. G. Zhang, E. K. Liu, R. S. Ma, and G. H. Wu, Physica B: Condens. Matter 87, 86 (2013).
[36] K. Buschow, P. van Engen, and R. Jongebreur, J. Magn. Magn. Mater. 38, 1 (1983).
[37] M. Csanad, T. Csorgo, and B. Lorstad, Nukleonika 49, S49 (2004).
[38] Z. Ren, S. T. Li, and H. Z. Luo, Physica B 405, 2840 (2010).
[Source: arXiv:1702.08150v1 [cond-mat.mtrl-sci], Y.-i. Matsushita, G. Madjarova, J. K. Dewhurst, S. Shallcross, C. Felser, S. Sharma, and E. K. U. Gross, "Large magnetocrystalline anisotropy in tetragonally distorted Heuslers: a systematic study" (27 Feb 2017).]
Department of Physics, University of Washington, Seattle, WA 98195-1560, USA ^*E-mail: bulgac@uw.edu
Faculty of Physics, Warsaw University of Technology, ulica Koszykowa 75, 00-662 Warsaw, POLAND
Pacific Northwest National Laboratory, Richland, WA 99352, USA
Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA

We study the fission dynamics of ^240Pu within an implementation of the Density Functional Theory (DFT) extended to superfluid systems and real-time dynamics. We demonstrate the critical role played by the pairing correlations. The evolution is found to be much slower than previously expected in this fully non-adiabatic treatment of nuclear dynamics, where there are no symmetry restrictions and all collective degrees of freedom (CDOF) are allowed to participate in the dynamics.

Nuclear fission, discovered in 1939 <cit.>, is fast approaching the venerable age of 80 years and remains one of the most challenging problems in quantum many-body theory. Nuclear fission is an extremely complex physical phenomenon, starting with the formation of the compound nucleus, the shape evolution up to the outer saddle point, and the eventual slide towards scission, where the fission fragments are formed, accompanied or rather followed by neutron and gamma emissions and later by beta decay, with time scales of these processes ranging over more than twenty orders of magnitude, see Fig. <ref> and Ref. <cit.>. Likely the most difficult part of this entire process is the descent from the saddle to the scission configuration, which over the years has proved quite difficult to define, and the formation of the fission fragments.
How does a large nucleus with more than 200 nucleons separate into two fragments? How are the mass and charge distributed, how much excitation energy and angular momentum does each fragment acquire in this process, how many neutrons and gammas are emitted and at what stage of the fission dynamics, and why and how are sometimes even more than two fission fragments formed? Even though an enormous body of experimental results exists and a large number of phenomenological models have been developed, the present-day microscopic results are far from satisfactory <cit.>. Nuclear fission is thus unlike another remarkable quantum many-body problem, namely superconductivity, which since its discovery in 1911 <cit.> was successfully described microscopically in less than 50 years <cit.>.

[Figure: Qualitative potential energy of a fissioning nucleus versus deformation, and characteristic times of various accompanying processes <cit.>.]

[Figure: The qualitative evolution of the single-particle levels (upper panel) and of the total nuclear energy (lower panel) as a function of nuclear deformation <cit.>.]

Qualitatively it was understood a long time ago that fission can be described using a hydrodynamic picture of the nucleus as a charged liquid drop <cit.>, and that the evolving shape of the fissioning nucleus can be described within a collective model with a deformation potential, see Fig. <ref>. The independent particle model <cit.> forced us to realize that this smooth deformation potential should actually have quite a lot of roughness, due to the single-particle level crossings as a function of the nuclear deformation, see Fig. <ref> and Ref. <cit.>. Only the lowest A levels remain mostly occupied while the nuclear shape evolves, as the nucleus does not heat up significantly and the Fermi surface should retain its overall spherical shape.
While the nucleus elongates, the Fermi surface becomes oblate, and it can recover its sphericity only if, at a level crossing, nucleons jump from the occupied upward-going levels to unoccupied downward-going levels, see the upper panel of Fig. <ref>. The total energy of the nucleus, which is to a large extent the sum of the occupied single-particle energies, develops cusps at the level crossings. It was assumed that the residual interactions between the independent particles provide the mechanism for jumping from one level to another at the crossing, and it was also expected that as a result the deformation potential would become smoother. Due to Kramers degeneracy each single-particle level is, however, doubly occupied, and the only residual interaction capable of providing an effective mechanism to promote two particles simultaneously from one level to another is the pairing interaction <cit.>. This requirement can be understood from a different point of view as well <cit.>. During fission the axial symmetry is typically conserved, and one can expect that the probability distribution of the projections of the angular momenta along the fission axis is also conserved. The initial nucleus has a wider waist than the final fragments, and the maximum angular momentum is roughly p_F R_A, where p_F is the Fermi momentum and R_A the waist radius. In the final fission products, in the case of symmetric fission, R_A → R_{A/2} = R_A/2^{1/3}, and thus the maximum angular momentum projection is smaller by a factor of 2^{1/3} than in the initial nucleus. A dynamics which conserves the axial symmetry is not capable of allowing such a dramatic redistribution of the angular momenta of the occupied states. This is the main reason why many attempts to describe fission within a time-dependent Hartree-Fock approach have failed so far <cit.>. The pairing interaction is the most effective at coupling nucleon pairs in time-reversed quantum states, (m,-m)→(m',-m'), and it preserves the axial symmetry as well.
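The 2^{1/3} reduction argued above can be made concrete with a back-of-the-envelope estimate. The sketch below uses generic textbook values for the nuclear radius parameter r_0 and the Fermi momentum p_F; both are illustrative assumptions, not values taken from this text.

```python
# Rough estimate of the maximum angular-momentum projection ~ p_F * R_A
# discussed above. R0 and PF are generic textbook values (assumptions).
R0 = 1.2         # fm, nuclear radius parameter, R = R0 * A^(1/3)
PF = 265.0       # MeV/c, typical nuclear Fermi momentum
HBARC = 197.327  # MeV fm

def l_max(a):
    """Max angular-momentum projection (units of hbar) at the waist R = R0*A^(1/3)."""
    return PF * R0 * a ** (1.0 / 3.0) / HBARC

l_parent = l_max(240)          # compound nucleus 240Pu, ~10 hbar
l_fragment = l_max(120)        # one symmetric fragment
ratio = l_parent / l_fragment  # equals 2^(1/3) ~ 1.26, independent of R0 and PF
```

The ratio is independent of the assumed r_0 and p_F, which is the point of the argument: the accessible angular-momentum projections shrink by 2^{1/3} regardless of the detailed numbers.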
Thus a full microscopic treatment of the pairing interactions in a dynamic approach is crucial for describing fission dynamics, see Fig. <ref> and Ref. <cit.>. The approach adopted in Ref. <cit.> is based on an extension of the density functional theory to superfluid systems and time-dependent phenomena <cit.>. This approach satisfies all expected symmetries of a nuclear Hamiltonian: translational and rotational invariance, Galilean invariance, isospin invariance up to Coulomb effects and the proton/neutron mass difference, gauge symmetry, and renormalizability of the theory. The static and the time-dependent formalism has been confronted with a multitude of theoretical tests and with various experimental data in cold-atom physics, nuclear physics, and neutron-star crust problems.

[Figure: Induced fission of ^240Pu with normal pairing strength lasts about 14,000 fm/c from saddle to scission. The columns show sequential frames of the density (first column), the magnitude of the pairing field (second column), and the phase of the pairing field (third column). In each frame the upper/lower part shows the neutron/proton density, the magnitude of the neutron/proton pairing field, and the phase of the pairing field, respectively <cit.>.]

Even though the nuclear energy density functional is not yet known with high enough accuracy and its origin is mostly phenomenological, its basic properties (volume energy, surface energy, symmetry energy, Coulomb energy, spin-orbit interaction) are such that a very large body of observables (masses, charge radii, collective states) is described rather well. For the normal part of the energy density functional we have chosen a well-studied parametrization, SLy4 <cit.>, and for the pairing part we used <cit.>.
This type of parametrization of the nuclear energy density functional has met with difficulties when describing spontaneous fission lifetimes, since for an under-the-barrier process the lifetimes depend exponentially on the energy density functional parameters. In the case of induced fission, where the entire dynamics occurs in classically allowed regions, inaccuracies of the order of O(1) MeV in the total deformation potential energy have a relatively small impact on the observables, such as the masses and charges of the fission fragments, the total kinetic energy, and the excitation energies of the fragments. The nucleus ^240Pu was prepared in a state close in deformation to the outer fission barrier, with an equivalent neutron energy in the reaction ^239Pu(n,f) of about 1.5 MeV. Our goal was not to describe correctly the various fission fragment properties; for many decades the main difficulty of the theory was its inability to produce fission, in a real-time approach, starting above but near the top of the fission barrier. The widely used approximate approaches are based on first constructing a potential energy surface in a collective space of typically arbitrary dimension between 2 and 5, combined with a recipe to calculate an appropriate inertia tensor in this collective space. Even though they might lead to some reasonable predictions, they do not really prove that a truly microscopic theory is at hand. First of all, the choice of collective variables is not rigorous; it is often based on the ability of a specific researcher or group of researchers to solve the problem numerically in the chosen space. It is computationally extremely expensive to construct potential energy surfaces and related inertia tensors in large-dimensional spaces.
The choice of collective variables is not dictated by a rigorous theory but rather by "intuition." There are also technical difficulties with defining a potential energy surface in a multidimensional space, which is basically a reduction from an infinite-dimensional space to a finite-dimensional one, a fact well known in mathematics from catastrophe theory even in the case of finite-dimensional spaces. Apart from these rather technical difficulties, there are physics problems, as the introduction of collective degrees of freedom implies an almost exact separation of the degrees of freedom into collective and intrinsic, with no coupling between them. This implies that during the evolution the intrinsic degrees of freedom are assumed to remain "unexcited," which is never the case unless one deals with fully integrable models. There is always a coupling between collective and intrinsic degrees of freedom; this is why fragments emerge excited. This aspect is easy to demonstrate: one can start with a small number of collective degrees of freedom excited, such as the quadrupole and octupole deformations, and let the nucleus evolve freely, only to discover that in an unrestricted dynamics other degrees of freedom are immediately excited with significant amplitudes. This is one of the main reasons why the present "microscopic" approaches, based on a limited and arbitrarily chosen number of collective degrees of freedom, cannot be recognized as a solution of the large-amplitude collective nuclear many-body problem. The only viable alternative is to allow all degrees of freedom to be active. Even though this might appear to be an insurmountable numerical problem, in fact it can be solved with current computers.
In a high-accuracy simulation of induced fission of ^240Pu we numerically integrated in time 256,000 3D time-dependent coupled nonlinear complex partial differential equations on a 25^2×50 fm^3 spatial lattice for about 320,000 time steps, using 512 GPUs in about 47 hours or 1602 GPUs in about 24 hours. The lattice constant in this calculation corresponds to a cutoff momentum of ≈ 500 MeV/c, which is very high and of the same magnitude as the cutoff momenta used in chiral perturbation effective theories of nucleon-nucleon interactions.

[Figure: Induced fission of ^240Pu with enhanced pairing strength lasts about 1,400 fm/c from saddle to scission, thus about ten times faster than in the case of normal pairing strength.]

The outcome of allowing all collective degrees of freedom to be active and of including time- and space-dependent pairing fields has been remarkable in several ways. The first surprise was that, for the first time, an actinide could fission while the dynamics was described with a realistic energy density functional. This could not have happened had the pairing correlations not been included dynamically, or had they been treated only approximately at the BCS level, either statically or in a time-dependent approach as in Refs. <cit.>. The second surprise was that the properties of the fission fragments came out very close to the observed ones, even though no effort had been put into trying to obtain them. The physics embodied in the nuclear energy density functional is to a large extent accurate, and we attribute this agreement to that fact. The third surprise was that the evolution time from saddle to scission was almost an order of magnitude longer than previously predicted <cit.>, namely of the order of 10,000 fm/c.
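The cutoff momentum quoted above follows from the lattice representation: a spatial lattice with spacing dx resolves momenta up to p_cut = πħ/dx. The sketch below illustrates the arithmetic; the spacing dx ≈ 1.25 fm is an assumption for the purpose of the estimate, since the text quotes the lattice volume but not the spacing explicitly.

```python
# Sketch of the lattice-cutoff estimate quoted above: a lattice with
# spacing dx represents momenta up to p_cut = pi*hbar/dx.
# dx ~ 1.25 fm is an assumed value, not stated explicitly in the text.
import math

HBARC = 197.327  # MeV fm

def cutoff_momentum(dx_fm):
    """Maximum momentum (MeV/c) representable on a lattice with spacing dx (fm)."""
    return math.pi * HBARC / dx_fm

p_cut = cutoff_momentum(1.25)  # ~ 500 MeV/c, consistent with the value in the text
```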
It has already been established that in the absence of pairing, or when pairing has a normal strength but is treated in the BCS approximation or with frozen initial occupation probabilities, a nucleus will not fission starting at the outer saddle <cit.>, as there is no mechanism to allow for a redistribution of the single-particle occupation probabilities. In order to prove convincingly the crucial role played by the pairing correlations in the fission dynamics, we have artificially increased the strength of the pairing interaction. The saddle-to-scission time is then reduced dramatically, to about 1,400 fm/c, see Fig. <ref>. The pairing field fluctuates both in magnitude and in phase at normal pairing strength, Fig. <ref>, while these fluctuations are basically absent in the case of strong pairing, Fig. <ref>, when the dynamics is, as expected, similar to ideal hydrodynamics <cit.>. The potential energy surface has a lot of "roughness" for normal pairing strength, and the slide down of the nucleus is similar to the motion of an electron in the Drude model of electric conductivity: the electron is kicked out of the direction of the electric field by elastic collisions with the ions, the length of the trajectory is longer, and the average velocity along the direction of the field is significantly reduced, even though there is no friction. Similarly, the nucleus from saddle to scission remains rather cold, and only collective degrees of freedom are significantly excited. Pairing, while not the engine, provides the essential "lubricant," without which fission is brought to a "screeching halt."

§ REFERENCES

[1] O. Hahn and F. Strassmann, Über den Nachweis und das Verhalten der bei der Bestrahlung des Urans mittels Neutronen entstehenden Erdalkalimetalle, Naturwissenschaften 27, 11 (1939).
[2] L. Meitner and O. R. Frisch, Disintegration of Uranium by Neutrons: a New Type of Nuclear Reaction, Nature (London) 143, 239 (1939).
[3] F. Gönnenwein, lectures presented at the LANL FIESTA Fission School & Workshop, Sep. 8-12, 2014, Santa Fe, New Mexico, USA, http://t2.lanl.gov/fiesta2014/
[4] N. Schunck and L. M. Robledo, Microscopic Theory of Nuclear Fission: A Review, Rep. Prog. Phys. 79, 116301 (2016).
[5] H. Kamerlingh Onnes, Further Experiments with Liquid Helium, KNAW Proceedings 13 II, 1093-1113 (1911).
[6] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Theory of Superconductivity, Phys. Rev. 108, 1175 (1957).
[7] O. Haxel, J. H. D. Jensen, and H. E. Suess, On the "magic numbers" in nuclear structure, Phys. Rev. 75, 1766 (1949); M. Goeppert Mayer, On closed shells in nuclei. II, Phys. Rev. 75, 1969 (1949).
[8] D. L. Hill and J. A. Wheeler, Nuclear Constitution and the Interpretation of Fission Phenomena, Phys. Rev. 89, 1102 (1953).
[9] G. F. Bertsch, The nuclear density of states in the space of nuclear shapes, Phys. Lett. B 95, 157 (1980); F. Barranco, G. F. Bertsch, R. A. Broglia, and E. Vigezzi, Large-Amplitude Motion in Superfluid Fermi Droplets, Nucl. Phys. A 512, 253 (1990); G. F. Bertsch, Large Amplitude Collective Motion, Nucl. Phys. A 574, 169c (1994).
[10] G. F. Bertsch and A. Bulgac, Comment on "Spontaneous Fission: A Kinetic Approach," Phys. Rev. Lett. 79, 3539 (1997).
[11] C. Simenel and A. S. Umar, Formation and Dynamics of Fission Fragments, Phys. Rev. C 89, 031601(R) (2014); G. Scamps, C. Simenel, and D. Lacroix, Superfluid Dynamics of ^258Fm Fission, Phys. Rev. C 92, 011602(R) (2015).
[12] Y. Tanimura, D. Lacroix, and G. Scamps, Collective Aspects Deduced from the Time-Dependent Microscopic Mean-Field with Pairing: Application to the Fission Process, Phys. Rev. C 92, 034601 (2015); P. Goddard, P. Stevenson, and A. Rios, Fission Dynamics within Time-Dependent Hartree-Fock: Deformation-Induced Fission, Phys. Rev. C 92, 054610 (2015); P. Goddard, P. Stevenson, and A. Rios, Fission Dynamics within Time-Dependent Hartree-Fock: Boost-Induced Fission, Phys. Rev. C 93, 014620 (2016).
[13] A. Bulgac, P. Magierski, K. J. Roche, and I. Stetcu, Induced Fission of ^240Pu within a Real-Time Microscopic Framework, Phys. Rev. Lett. 116, 122504 (2016).
[14] A. Bulgac, Time-Dependent Density Functional Theory and Real-Time Dynamics of Fermi Superfluids, Annu. Rev. Nucl. Part. Sci. 63, 97 (2013).
[15] E. Chabanat, P. Bonche, P. Haensel, J. Meyer, and R. Schaeffer, A Skyrme parametrization from subnuclear to neutron star densities. Part II: Nuclei far from stabilities, Nucl. Phys. A 635, 231 (1998).
[16] Y. Yu and A. Bulgac, Energy Density Functional Approach to Superfluid Nuclei, Phys. Rev. Lett. 90, 222501 (2003).
[17] K. T. R. Davies, A. J. Sierk, and J. R. Nix, Effect of Viscosity on the Dynamics of Fission, Phys. Rev. C 13, 2385 (1976); J. Blocki et al., One-body dissipation and the super-viscidity of nuclei, Ann. Phys. 113, 330 (1978); J. Randrup and W. J. Swiatecki, One-body dissipation and nuclear dynamics, Ann. Phys. 125, 193 (1980); J. W. Negele, S. E. Koonin, P. Möller, J. R. Nix, and A. J. Sierk, Dynamics of Induced Fission, Phys. Rev. C 17, 1098 (1978).
[18] A. Bulgac, M. M. Forbes, and S. Jin, Nuclear energy density functionals: what do we really know?, arXiv:1506.09195.
[Source: arXiv:1702.08490v1 [nucl-th], A. Bulgac, S. Jin, P. Magierski, K. J. Roche, and I. Stetcu, "Induced fission of 240Pu" (27 Feb 2017).]
EVALUATION OF THE NON-ELEMENTARY INTEGRAL ∫ e^λ x^α dx, α≥2, AND OTHER RELATED INTEGRALS

Victor Nijimbere
School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, Canada
victornijimbere@gmail.com

Abstract: A formula for the non-elementary integral ∫ e^λ x^α dx, where α is real and greater than or equal to two, is obtained in terms of the confluent hypergeometric function _1F_1 by expanding the integrand as a Taylor series. This result is verified by directly evaluating the area under the Gaussian bell curve, corresponding to α=2, using the asymptotic expression for the confluent hypergeometric function and the Fundamental Theorem of Calculus (FTC). Two different but equivalent expressions, one in terms of the confluent hypergeometric function _1F_1 and another in terms of the hypergeometric function _1F_2, are obtained for each of the integrals ∫cosh(λ x^α)dx, ∫sinh(λ x^α)dx, ∫cos(λ x^α)dx and ∫sin(λ x^α)dx, λ∈ℂ, α≥2. And the hypergeometric function _1F_2 is expressed in terms of the confluent hypergeometric function _1F_1. Some applications of the non-elementary integral ∫ e^λ x^α dx, α≥ 2, such as the Gaussian distribution and the Maxwell-Boltzmann distribution, are given.

Key words: Non-elementary integral, Hypergeometric function, Confluent hypergeometric function, Asymptotic evaluation, Fundamental theorem of calculus, Gaussian, Maxwell-Boltzmann distribution.

§ INTRODUCTION

An elementary function is a function of one variable built up using that variable and constants, together with a finite number of repeated algebraic operations and the taking of exponentials and logarithms <cit.>. In 1835, Joseph Liouville established conditions in his theorem, known as Liouville's 1835 theorem <cit.>, which can be used to determine whether an indefinite integral is elementary or non-elementary.
Using Liouville's 1835 theorem, one can show that the indefinite integral ∫ e^λ x^α dx, α≥ 2, is non-elementary <cit.>, and to my knowledge, no one has evaluated this non-elementary integral before. For instance, if α = 2 and λ = -β^2 < 0, where β is a real constant, the area under the Gaussian bell curve can be calculated using double integration and then polar coordinates to obtain

∫_-∞^+∞ e^{-β^2 x^2}dx =√(π)/β.

Is it possible to evaluate (<ref>) by directly using the Fundamental Theorem of Calculus (FTC) as in equation (<ref>)?

∫_-∞^+∞ e^{-β^2 x^2}dx= lim_t→ -∞∫_t^0 e^{-β^2 x^2}dx+lim_t→ +∞∫_0^t e^{-β^2 x^2}dx.

The Central Limit Theorem (CLT) in probability theory <cit.> involves the probability that a random variable X does not exceed some observed value z,

P(X<z)=1/√(2π)∫_-∞^z e^{-x^2/2}dx.

So if we know the antiderivative of the function g(x) = e^{λ x^2}, we may choose to use the FTC to calculate the cumulative probability P(X < z) in (<ref>) when the value of z is given or known, rather than using numerical integration. The Maxwell-Boltzmann distribution in gas dynamics,

F(v)=θ∫_0^v x^2 e^{-γ x^2}dx,

where θ and γ are positive constants that depend on the properties of the gas and v is the gas speed, is another application. There are many other examples where the antiderivative of g(x) = e^{λ x^α}, α≥ 2, can be useful. For example, using the FTC, formulas for integrals such as

∫_x^∞ e^{t^{2n+1}}dt, x<∞; ∫_x^∞ e^{-t^{2n+1}}dt, x > -∞; ∫_x^∞ t^{2n}e^{-t^2}dt, x≤∞,

where n is a positive integer, can be obtained if the antiderivative of g(x)=e^{λ x^α}, α≥ 2, is known. In this paper, the antiderivative of g(x)=e^{λ x^α}, α≥ 2, is expressed in terms of a special function, the confluent hypergeometric function _1F_1 <cit.>, which is an entire function <cit.> whose properties are well known <cit.>.
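As a quick numerical illustration of the Maxwell-Boltzmann integral above, the sketch below compares a closed form of ∫_0^v x^2 e^{-γ x^2} dx, obtained by integration by parts and the error function (equivalent to the _1F_1 form derived later in the paper), against plain Simpson quadrature. The values of γ and v are arbitrary test values; the overall factor θ is omitted.

```python
# Sketch: the Maxwell-Boltzmann integral F(v)/theta = int_0^v x^2 exp(-g x^2) dx
# in closed form via erf, cross-checked against composite Simpson's rule.
# g (gamma) and v are arbitrary illustrative values.
import math

def f_closed(v, g):
    """int_0^v x^2 exp(-g x^2) dx = sqrt(pi)*erf(sqrt(g)*v)/(4*g^1.5) - v*exp(-g*v^2)/(2*g)."""
    return (math.sqrt(math.pi) * math.erf(math.sqrt(g) * v) / (4 * g ** 1.5)
            - v * math.exp(-g * v * v) / (2 * g))

def f_simpson(v, g, n=2000):
    """Same integral by composite Simpson's rule with n (even) panels."""
    h = v / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * x * x * math.exp(-g * x * x)
    return s * h / 3

diff = abs(f_closed(2.0, 0.7) - f_simpson(2.0, 0.7))
```

The closed form can be verified by differentiation: the derivative of each term with respect to v recombines into v^2 e^{-γ v^2}.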
The main goal here is to consider the most general case with λ complex (λ∈ℂ), to evaluate the non-elementary integral ∫ e^{λ x^α} dx, α≥2, and thus to make possible the use of the FTC to compute the definite integral

∫_A^B e^{λ x^α}dx,

for any A and B. And once (<ref>) is evaluated, integrals such as (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) can also be evaluated using the FTC. Using the hyperbolic and Euler identities,

cosh(λ x^α) = (e^{λ x^α}+ e^{-λ x^α})/2, sinh(λ x^α) = (e^{λ x^α}-e^{-λ x^α})/2, cos(λ x^α) = (e^{iλ x^α}+ e^{-iλ x^α})/2, sin(λ x^α) = (e^{iλ x^α}- e^{-iλ x^α})/(2i),

the integrals

∫cosh(λ x^α) dx, ∫sinh(λ x^α) dx, ∫cos(λ x^α) dx and ∫sin(λ x^α) dx, α≥ 2,

are evaluated in terms of _1F_1 for any constant λ. They are also expressed in terms of the hypergeometric function _1F_2, and some expressions of the hypergeometric function _1F_2 in terms of the confluent hypergeometric function _1F_1 are thereby obtained. For reference, we shall first define the confluent hypergeometric function _1F_1 and the hypergeometric function _1F_2 before we proceed to the main aims of this paper (see sections <ref> and <ref>).

Definition 1. The confluent hypergeometric function, denoted _1F_1, is a special function given by the series <cit.>

_1 F_1(a;b; x)=∑_n=0^∞ [(a)_n/(b)_n] x^n/n!,

where a and b are arbitrary constants, (ϑ)_n=Γ(ϑ+n)/Γ(ϑ) (Pochhammer's notation <cit.>) for any complex ϑ, with (ϑ)_0=1, and Γ is the standard gamma function <cit.>.

Definition 2. The hypergeometric function _1F_2 is a special function given by the series <cit.>

_1 F_2(a;b,c; x)=∑_n=0^∞ [(a)_n/((b)_n (c)_n)] x^n/n!,

where a, b and c are arbitrary constants, and (ϑ)_n=Γ(ϑ+n)/Γ(ϑ) (Pochhammer's notation <cit.>) as in Definition <ref>.

§ EVALUATION OF ∫_A^B e^{λ x^α} dx

Proposition. The function G(x) = x _1F_1(1/α; 1/α+1; λ x^α), where _1F_1 is a confluent hypergeometric function <cit.>, λ is an arbitrary constant and α≥ 2, is the antiderivative of the function g(x) = e^{λ x^α}.
Thus, ∫ e^λ x^αdx=x _1F_1(1/α; 1/α+1; λ x^α)+C. We expand g(x) = e^λ x^α as a Taylor series and integrate the series term by term. We also use Pochhammer's notation <cit.> for the gamma function, Γ(a + n) = Γ(a)(a)_n, where (a)_n = a(a + 1) ⋯ (a + n - 1), and the property of the gamma function Γ(a + 1) = aΓ(a) <cit.>. For example, Γ(n + a + 1) = (n + a)Γ(n + a). We then obtain ∫ g(x) dx=∫ e^λ x^αdx=∑_n=0^∞λ^n/n!∫ x^α ndx =∑_n=0^∞λ^n/n!x^α n+1/α n+1+C =x/α∑_n=0^∞(λ x^α)^n/(n+1/α)n!+C =x/α∑_n=0^∞Γ(n+1/α)/Γ(n+1/α+1)(λ x^α)^n/n!+C =x∑_n=0^∞(1/α)_n/(1/α+1)_n(λ x^α)^n/n!+C= x _1F_1(1/α; 1/α+1; λ x^α)+C=G(x) + C. □ Example 1. We can now evaluate ∫ x^2ne^λ x^2dx in terms of the confluent hypergeometric function. Using integration by parts, ∫ x^2ne^λ x^2dx= x^2n-1/2λe^λ x^2-2n-1/2λ∫ x^2n-2e^λ x^2dx. * For instance, for n=1, ∫ x^2e^λ x^2dx= x/2λe^λ x^2-1/2λ∫ e^λ x^2dx= x/2λe^λ x^2-x/2λ _1F_1(1/2; 3/2; λ x^2)+C. * For n=2, ∫ x^4e^λ x^2dx= x^3/2λe^λ x^2-3/2λ∫ x^2e^λ x^2dx= x^3/2λe^λ x^2- 3x/4λ^2e^λ x^2+3x/4λ^2 _1F_1(1/2; 3/2; λ x^2)+C. Example 2. Using the method of the integrating factor, the first-order ordinary differential equation y^'+2xy=1 has the solution y(x)=e^-x^2(∫ e^x^2dx+C)=xe^-x^2 _1F_1(1/2; 3/2;x^2)+Ce^-x^2. Assuming that the function G(x) (see Proposition <ref>) is unknown, in the following lemma we use the properties of the function g(x) to establish properties of G(x) such as its inflection points and its behavior as x→±∞. Let the function G(x) be an antiderivative of g(x)= e^λ x^α, λ∈ℂ, with α≥2. * If the real part of λ is negative (λ_r<0) and α is even, then the limits lim_x→-∞G(x) and lim_x→+∞G(x) are finite (constants), and thus the Lebesgue integral ∫_-∞^∞ |e^λ x^α|dx<∞. * If λ is real (λ∈ℝ), then the point (0, G(0)) = (0, 0) is an inflection point of the curve Y = G(x), x∈ℝ. * If λ∈ℝ, λ<0 and α is even, then the limits lim_x→-∞G(x) and lim_x→+∞G(x) are finite.
Moreover, there exists a real constant θ>0 such that lim_x→-∞G(x)=-θ and lim_x→+∞G(x)=θ. * For complex λ=λ_r+iλ_i, where the subscripts r and i stand for the real and imaginary parts respectively, the function g(x)=g(z)=e^z^α, where z=(λ_r+iλ_i)^1/αx and α≥2, is an entire function on ℂ. If λ_r<0 and α is even, then ℜ(z^α) is always negative regardless of the value of x. And so, if |z|→∞ (or x→±∞), then g(z)→ 0 (or g(x)→ 0 as x→±∞). Therefore, by Liouville's theorem, G(z) has to be constant as |z|→∞, and so is G(x) as x→±∞. Hence, the Lebesgue integral ∫_-∞^∞ |e^λ x^α|dx= ∫_-∞^∞ e^λ_r x^α |e^iλ_i x^α|dx=∫_-∞^∞ e^λ_r x^αdx<∞ since G(x) is constant as x→±∞. For λ_r<0 and α odd, the limit lim_x→-∞ e^λ_r x^α diverges and so does the integral ∫_-∞^∞ e^λ_r x^αdx. Therefore, the Lebesgue integral ∫_-∞^∞ |e^λ x^α|dx has to diverge too. On the other hand, for λ_r>0, the limit lim_x→+∞ e^λ_r x^α diverges, and so does the integral ∫_-∞^∞ e^λ_r x^αdx regardless of the value of α. Therefore, the Lebesgue integral ∫_-∞^∞ |e^λ x^α|dx has to diverge too. * At x = 0, g(0) = 1. And so, around x = 0, the antiderivative satisfies G(x) ∼ x because G^'(0) = g(0) = 1, and so (0, G(0)) = (0, 0). Moreover, G^''(x) =g^'(x) =λα x^α-1 e^λ x^α, α≥ 2, gives G^''(0) = 0. Hence, by the second derivative test, if λ is real (λ=λ_r), the point (0, G(0)) = (0, 0) is an inflection point of the curve Y = G(x), x ∈ ℝ. * For λ=λ_r (λ∈ℝ), both g(x) and G(x) are analytic on ℝ. Using this fact and the fact that, for even α and λ_r<0, ∫_-∞^∞ |e^λ x^α|dx<∞, we obtain that for even α and λ_r<0, G(x) has to be constant as x→±∞. In addition, the fact that G^''(x) > 0 if x < 0 and G^''(x) < 0 if x > 0 implies that G(x) is concave upward on the interval (-∞, 0) and concave downward on the interval (0, +∞). Moreover, the fact that g(x) = G^'(x) is symmetric about the y-axis (even) implies that G(x) has to be antisymmetric about the y-axis (odd).
Hence there exists a real positive constant θ>0 such that lim_x→-∞G(x)=-θ and lim_x→+∞G(x)=θ. □ Example 3. If λ=-1 and α=2, then ∫ e^-x^2dx=x _1F_1(1/2; 3/2;-x^2)+C. According to (<ref>), the antiderivative of g(x) = e^-x^2 is G(x)=x _1F_1(1/2; 3/2;-x^2). Its graph as a function of x, sketched using MATLAB, is shown in Figure <ref>. It is in agreement with Lemma <ref>: it is indeed seen in Figure <ref> that (0, 0) is an inflection point and that G(x) approaches constant values as x→±∞, as predicted by Lemma <ref>. In the following lemma, we obtain the values of G(x), the antiderivative of the function g(x) =e^λ x^α, as x→±∞ using the asymptotic expansion of the confluent hypergeometric function _1F_1. Consider G(x) in Proposition <ref>. * Then for |x|≫1, G(x)= x _1F_1(1/α; 1/α+1; λ x^α)∼{[ Γ( 1/α+1)e^iπ/α/λ^1/αx/|x|+e^λ x^α/αλ x^α-1, if α is even,; Γ( 1/α+1)e^iπ/α/λ^1/α+e^λ x^α/αλ x^α-1, if α is odd.; ]. * Let α≥2 be even, and let λ=-β^2, where β is a real number, preferably positive. Then G(-∞)=lim_x→-∞G(x)=lim_x→-∞ x _1F_1(1/α; 1/α+1; -β^2 x^α)=-1/β^2/αΓ( 1/α+1) and G(+∞)=lim_x→+∞G(x)=lim_x→+∞ x _1F_1(1/α; 1/α+1; -β^2 x^α)=1/β^2/αΓ( 1/α+1). * And by the FTC, ∫_-∞^∞ e^-β^2x^α dx=G(+∞)-G(-∞) =1/β^2/αΓ( 1/α+1)-( -1/β^2/αΓ( 1/α+1))=2/β^2/αΓ( 1/α+1). * To prove (<ref>), we use the asymptotic series for the confluent hypergeometric function that is valid for |z|≫ 1 (<cit.>, formula 13.5.1), _1F_1(a;b;z)/Γ(b)=e^± iπ az^-a/Γ(b-a){∑_n=0^R-1(a)_n (1+a-b)_n/n!(-z)^-n+O(|z|^-R)}+e^zz^a-b/Γ(a){∑_n=0^S-1(b-a)_n (1-a)_n/n!(z)^-n+O(|z|^-S)}, where a and b are constants, the upper sign being taken if -π/2<arg(z)<3π/2 and the lower sign if -3π/2<arg(z)≤ -π/2.
We set z=λ x^α, a=1/α and b=1/α+1, and obtain _1F_1(1/α;1/α+1;λ x^α)/Γ(1/α+1)=e^iπ/α/(λ x^α)^1/α{∑_n=0^R-1(1/α)_n /n!(λ x^α)^-n+O((λ x^α)^-R)}+e^λ x^α(λ x^α)^-1/Γ(1/α){∑_n=0^S-1(1-1/α)_n(λ x^α)^-n+O((λ x^α)^-S)}. Then, for |x|≫1, e^iπ/α/(λ x^α)^1/α{∑_n=0^R-1(1/α)_n /n!(λ x^α)^-n+O((λ x^α)^-R)}∼{[ e^iπ/α/λ^1/α1/|x|, if α is even,; e^iπ/α/λ^1/α1/x, if α is odd, ]. while e^λ x^α(λ x^α)^-1/Γ(1/α){∑_n=0^S-1(1-1/α)_n(λ x^α)^-n+O((λ x^α)^-S)}∼e^λ x^α/Γ(1/α)λ x^α. And so, for |x|≫1, _1F_1(1/α;1/α+1;λ x^α)/Γ( 1/α+1)∼{[ e^iπ/α/λ^1/α1/|x|+e^λ x^α/Γ(1/α)λ x^α, if α is even,; e^iπ/α/λ^1/α1/x+e^λ x^α/Γ(1/α)λ x^α, if α is odd.; ]. Hence, G(x)= x _1F_1(1/α; 1/α+1; λ x^α)∼{[ Γ( 1/α+1)e^iπ/α/λ^1/αx/|x|+e^λ x^α/αλ x^α-1, if α is even,; Γ( 1/α+1)e^iπ/α/λ^1/α+e^λ x^α/αλ x^α-1, if α is odd.; ]. * Setting λ=-β^2, where β is real and positive, and using (<ref>), then for α even, G(x)= x _1F_1(1/α; 1/α+1; -β^2 x^α)∼1/β^2/αΓ( 1/α+1)x/|x|-e^-β^2 x^α/αβ^2 x^α-1. Therefore, G(-∞)=lim_x→-∞G(x)=lim_x→-∞ x _1F_1(1/α; 1/α+1; -β^2 x^α)=-1/β^2/αΓ( 1/α+1) and G(+∞)=lim_x→+∞G(x)=lim_x→+∞ x _1F_1(1/α; 1/α+1; -β^2 x^α)=1/β^2/αΓ( 1/α+1). * By the Fundamental Theorem of Calculus, we have ∫_-∞^+∞ e^-β^2x^α dx=lim_y→-∞∫_y^0 e^-β^2x^α dx +lim_y→+∞∫_0^y e^-β^2x^αdx =lim_y→+∞y _1F_1(1/α;1/α+1;-β^2y^α) -lim_y→-∞y _1F_1(1/α;1/α+1;-β^2y^α)=G(+∞)-G(-∞) =1/β^2/αΓ( 1/α+1)-( -1/β^2/αΓ( 1/α+1))=2/β^2/αΓ( 1/α+1). We now verify whether (<ref>) is correct by double integration. We first observe that (<ref>) is valid for all even α≥2, and so, if (<ref>) is verified for α=2, we are done. For α=2, we have ∫_-∞^+∞ e^-β^2x^2 dx=lim_y→-∞∫_y^0 e^-β^2x^2 dx +lim_y→+∞∫_0^y e^-β^2x^2dx =lim_y→+∞y _1F_1(1/2;3/2;-β^2y^2) -lim_y→-∞y _1F_1(1/2;3/2;-β^2y^2) =G(+∞)-G(-∞)=2/βΓ(3/2)=2/β√(π)/2=√(π)/β.
On the other hand, (∫_-∞^∞ e^-β^2 x^2 dx)^2=(∫_-∞^∞ e^-β^2 x^2 dx)(∫_-∞^∞ e^-β^2 y^2 dy) =∫_-∞^∞∫_-∞^∞e^-β^2(x^2+y^2)dydx. In polar coordinates, ∫_-∞^∞∫_-∞^∞e^-β^2(x^2+y^2)dydx=∫_0^2π∫_0^∞e^-β^2 r^2rdrdθ =1/2β^2∫_0^2πdθ=π/β^2. This gives ∫_-∞^∞ e^-β^2x^2 dx=√(∫_-∞^∞∫_-∞^∞e^-β^2(x^2+y^2)dydx)=√(π)/β as before. □ Example 4. Setting λ=-β^2=-1, β=1 and α=2 in Lemma <ref> gives G(-∞)=lim_x→-∞G(x)=lim_x→-∞x _1F_1(1/2;3/2;-x^2) =-√(π)/2 and G(+∞)=lim_x→+∞G(x)=lim_x→+∞x _1F_1(1/2;3/2;-x^2) =√(π)/2. This implies θ=√(π)/2 in Lemma <ref>, and this is exactly the value of G(x) as x→+∞ in Figure <ref>. We also have lim_x→-∞G(x)=-θ=-√(π)/2 as in Figure <ref>. Using the FTC, we readily obtain ∫_-∞^0 e^-x^2 dx =G(0)-G(-∞)=0-(-√(π)/2)=√(π)/2, ∫_0^+∞ e^-x^2 dx =G(+∞)-G(0)=√(π)/2-0=√(π)/2 and ∫_-∞^+∞ e^-x^2 dx =G(+∞)-G(-∞)=√(π)/2-(-√(π)/2)=√(π). Example 5. In this example, the integral ∫_-∞^x e^t^2n+1 dt, x<∞, where n is a positive integer, is evaluated using Proposition <ref> and the asymptotic expression (<ref>). Setting λ=1 and α = 2n+1 in Proposition <ref>, and using (<ref>), gives ∫_-∞^x e^t^2n+1 dt=lim_y→-∞∫_y^x e^t^2n+1 dt =x _1F_1(1/2n+1;2n+2/2n+1;x^2n+1) -lim_y→-∞y _1F_1(1/2n+1;2n+2/2n+1;y^2n+1) =x _1F_1(1/2n+1;2n+2/2n+1;x^2n+1) +Γ(2n+2/2n+1), x<∞. One can also obtain ∫_x^+∞ e^-t^2n+1 dt=lim_y→+∞∫_x^y e^-t^2n+1 dt =lim_y→+∞y _1F_1(1/2n+1;2n+2/2n+1;-y^2n+1) -x _1F_1(1/2n+1;2n+2/2n+1;-x^2n+1) =Γ(2n+2/2n+1)-x _1F_1(1/2n+1;2n+2/2n+1;-x^2n+1), x>-∞. For any A and B, the FTC gives ∫_A^B e^λ x^αdx = G(B)-G(A), where G is the antiderivative of the function g(x) = e^λ x^α and is given in Proposition <ref>, λ is any complex or real constant, and α≥2. G(x) = x _1F_1(1/α; 1/α+1; λ x^α), where λ is any constant, is the antiderivative of g(x) = e^λ x^α, α≥2, by Proposition <ref>, Lemma <ref> and Lemma <ref>.
And since the FTC works for A =-∞ and B = 0 in (<ref>), for A = 0 and B = +∞ in (<ref>) and for A =-∞ and B = +∞ in (<ref>) by Lemma <ref> if λ = -1 and α = 2, and for all λ < 0 and all even α≥ 2, then it has to work for other values of A, B ∈ℝ and for any λ∈ℂ and α≥ 2. This completes the proof. □ Example 6. In this example, we apply Theorem <ref> to the Central Limit Theorem in probability theory <cit.>. The normal zero-one distribution of a random variable X is the measure μ(dx) = g_X(x)dx, where dx is the Lebesgue measure and the function g_X(x) is the probability density function (p.d.f.) of the normal zero-one distribution <cit.>, g_X(x) =1/√(2π)e^-x^2/2, -∞<x<+∞. A comparison with the function g(x) in Proposition <ref> and Lemma <ref> gives λ = -β^2=-1/2 and α = 2. By Theorem <ref>, the cumulative probability, P(X < z), is then given by P(X<z)=μ{(-∞,z)}=∫_-∞^z g_X(x) dx=1/√(2π)∫_-∞^z e^-x^2/2dx=1/2+z/√(2π) _1F_1(1/2;3/2;-z^2/2). For example, we can also use Theorem <ref> to obtain P(-2 < X < 2) = μ(-2, 2) = 0.4772 - (-0.4772) = 0.9544, P(-1 < X < 2) = μ(-1, 2) =0.4772 - (-0.3413) = 0.8185 and so on. Example 7. Using integration by parts and applying Theorem <ref>, the Maxwell-Boltzmann distribution is written in terms of the confluent hypergeometric function _1F_1 as F(v)=θ∫_0^vx^2e^-γ x^2dx=-θ v/2γe^-γ v^2+θ v/2γ _1F_1(1/2; 3/2; -γ v^2)=θ v/2γ[_1F_1(1/2; 3/2; -γ v^2)-e^-γ v^2]. § OTHER RELATED NON-ELEMENTARY INTEGRALS The function G(x) = x _1F_2(1/2α; 1/2,1/2α+1; λ^2 x^2α/4), where _1F_2 is a hypergeometric function <cit.>, λ is an arbitrary constant and α≥ 2, is the antiderivative of the function g(x) = cosh(λ x^α). Thus, ∫cosh(λ x^α)dx=x _1F_2(1/2α; 1/2,1/2α+1; λ^2 x^2α/4)+C. We proceed as before. We expand g(x) = cosh(λ x^α) as a Taylor series and integrate the series term by term, use Pochhammer's notation <cit.> for the gamma function, Γ(a + n) = Γ(a)(a)_n, where (a)_n = a(a + 1) ⋯ (a + n - 1), and the property of the gamma function Γ(a + 1) = aΓ(a) <cit.>.
We also use the Gamma duplication formula <cit.>. We then obtain ∫ g(x) dx=∫cosh(λ x^α)dx=∑_n=0^∞λ^2n/(2n)!∫ x^2α ndx =∑_n=0^∞λ^2n/(2n)!x^2α n+1/2α n+1+C=x/2α∑_n=0^∞(λ^2 x^2α)^n/(2n)!(n+1/2α)+C =x/2α∑_n=0^∞Γ(n+1/2α)/Γ(2n+1)Γ(n+1/2α+1)(λ^2 x^2α)^n+C =x∑_n=0^∞(1/2α)_n/(1/2)_n(1/2α+1)_n(λ^2 x^2α/4)^n/n!+C =x _1F_2(1/2α; 1/2,1/2α+1; λ^2 x^2α/4)+C=G(x) + C. □ The function G(x) =λ x^α+1/α+1 _1F_2(1/2α+1/2; 3/2,1/2α+ 3/2; λ^2 x^2α/4), where _1F_2 is a hypergeometric function <cit.>, λ is an arbitrary constant and α≥ 2, is the antiderivative of the function g(x) = sinh(λ x^α). Thus, ∫sinh(λ x^α)dx=λ x^α+1/α+1 _1F_2(1/2α+1/2; 3/2,1/2α+ 3/2; λ^2 x^2α/4)+C. As above, we expand g(x) = sinh(λ x^α) as a Taylor series and integrate the series term by term, use Pochhammer's notation <cit.> for the gamma function, Γ(a + n) = Γ(a)(a)_n, where (a)_n = a(a + 1) ⋯ (a + n - 1), and the property of the gamma function Γ(a + 1) = aΓ(a) <cit.>. We also use the Gamma duplication formula <cit.>. We then obtain ∫ g(x) dx=∫sinh(λ x^α)dx=∑_n=0^∞λ^2n+1/(2n+1)!∫ x^2α n+αdx=∑_n=0^∞λ^2n+1/(2n+1)!x^2α n+α+1/2α n+α+1+C=λ x^α+1/2α∑_n=0^∞(λ^2 x^2α)^n/(2n+1)!(n+1/2α+1/2)+C =λ x^α+1/2α∑_n=0^∞Γ(n+1/2α+1/2)/Γ(2n+2)Γ(n+1/2α+3/2)(λ^2 x^2α)^n+C =λ x^α+1/α+1∑_n=0^∞(1/2α+ 1/2)_n/(3/2)_n(1/2α+3/2)_n(λ^2 x^2α/4)^n/n!+C=λ x^α+1/α+1 _1F_2(1/2α+1/2; 3/2,1/2α+ 3/2; λ^2 x^2α/4)+C=G(x) + C. □ We can also show as above that ∫cos(λ x^α)dx=x _1F_2(1/2α; 1/2,1/2α+1; -λ^2 x^2α/4)+C and ∫sin(λ x^α)dx=λ x^α+1/α+1 _1F_2(1/2α+1/2; 3/2,1/2α+ 3/2; -λ^2 x^2α/4)+C. For any constants α and λ, _1F_2(1/2α; 1/2,1/2α+1; λ^2 x^2α/4) =1/2[ _1F_1(1/α; 1/α+1; λ x^α)+ _1F_1(1/α; 1/α+1; -λ x^α)] and _1F_2(1/2α; 1/2,1/2α+1; -λ^2 x^2α/4) =1/2[ _1F_1(1/α; 1/α+1; iλ x^α)+ _1F_1(1/α; 1/α+1; -iλ x^α)]. Using Proposition <ref>, we obtain ∫cosh(λ x^α)dx=∫e^λ x^α+ e^-λ x^α/2dx =x/2[ _1F_1(1/α; 1/α+1; λ x^α)+ _1F_1(1/α; 1/α+1; -λ x^α)]+ C. Hence, comparing (<ref>) with (<ref>) gives (<ref>).
Using Proposition <ref>, on the other hand, we obtain ∫cos(λ x^α)dx=∫e^iλ x^α+ e^-iλ x^α/2dx =x/2[ _1F_1(1/α; 1/α+1; iλ x^α)+ _1F_1(1/α; 1/α+1; -iλ x^α)]+ C. Hence, comparing (<ref>) with (<ref>) gives (<ref>). □ For any constants α and λ, λ x^α/α+1 _1F_2(1/2α+1/2; 3/2,1/2α+ 3/2; λ^2 x^2α/4) =1/2[ _1F_1(1/α; 1/α+1; λ x^α)- _1F_1(1/α; 1/α+1; -λ x^α)] and λ x^α/α+1 _1F_2(1/2α+1/2; 3/2,1/2α+ 3/2; -λ^2 x^2α/4) =1/2i[ _1F_1(1/α; 1/α+1; iλ x^α)- _1F_1(1/α; 1/α+1; -iλ x^α)]. Using Proposition <ref>, we obtain ∫sinh(λ x^α)dx=∫e^λ x^α- e^-λ x^α/2dx =x/2[ _1F_1(1/α; 1/α+1; λ x^α)- _1F_1(1/α; 1/α+1; -λ x^α)]+ C. Hence, comparing (<ref>) with (<ref>) gives (<ref>). Using Proposition <ref>, on the other hand, we obtain ∫sin(λ x^α)dx=∫e^iλ x^α- e^-iλ x^α/2idx =x/2i[ _1F_1(1/α; 1/α+1; iλ x^α)- _1F_1(1/α; 1/α+1; -iλ x^α)]+ C. Hence, comparing (<ref>) with (<ref>) gives (<ref>). □ § CONCLUSION The non-elementary integral ∫ e^λ x^αdx, where λ is an arbitrary constant and α≥ 2, was expressed in terms of the confluent hypergeometric function _1F_1. And using the properties of the confluent hypergeometric function _1F_1, the asymptotic expression of this integral for |x|≫ 1 was derived too. As established in Theorem <ref>, the definite integral (<ref>) can now be computed using the FTC. For example, one can evaluate the area under the Gaussian bell curve using the FTC rather than using double integration and then polar coordinates. One can also choose to use Theorem <ref> to compute the cumulative probability for the normal distribution or that for the Maxwell-Boltzmann distribution as shown in Examples <ref> and <ref>. On the one hand, the integrals ∫cosh(λ x^α) dx, ∫sinh(λ x^α) dx, ∫cos(λ x^α) dx and ∫sin(λ x^α) dx, α≥ 2, were evaluated in terms of the confluent hypergeometric function _1F_1; on the other hand, they were expressed in terms of the hypergeometric function _1F_2.
This allowed us to express the hypergeometric function _1F_2 in terms of the confluent hypergeometric function _1F_1 (Theorems <ref> and <ref>).1Abramowitz M., Stegun I.A. Handbook of mathematical functions with formulas, graphs and mathematical tables. National Bureau of Standards, 1964. 1046 p. 2Billingsley P. Probability and measure. Wiley Series in Probability and Mathematical Statistics, 3rd Edition, 1995. 608 p. 3Krantz S.G. Handbook of complex variables. Boston: MA Birkhäuser, 1999. 290 p. DOI: 10.1007/978-1-4612-1588-2 4Marchisotto E.A., Zakeri G.-A. An invitation to integration in finite terms // College Math. J., 1994. Vol. 25, no. 4. P. 295–308. DOI: 10.2307/2687614 5 NIST Digital Library of Mathematical Functions. <http://dlmf.nist.gov/> 6Rosenlicht M. Integration in finite terms // Amer. Math. Monthly, 1972. Vol. 79, no. 9. P. 963–972. DOI: 10.2307/2318066
http://arxiv.org/abs/1702.08438v2
{ "authors": [ "Victor Nijimbere" ], "categories": [ "math.CA", "26A36, 33C15, 30E15" ], "primary_category": "math.CA", "published": "20170225010108", "title": "Evaluation of the non-elementary integral $\\int e^{λx^α} dx, α\\ge2$, and other related integrals" }
Interplay of dust alignment, grain growth and magnetic fields in polarization: lessons from the emission-to-extinction ratio L. Fanciullo, 1 V. Guillet, 1 F. Boulanger, 1 A. P. Jones 1 ============================================================================================================================ The variety of Brouwerian semilattices is amalgamable and locally finite, hence by well-known results <cit.>, it has a model completion (whose models are the existentially closed structures). In this paper, we supply for such a model completion a finite and rather simple axiomatization.
The main technical problem we must face for this result (making the formulation of the axioms slightly more complex and the proofs much more involved) is the lack of joins in the language of Brouwerian semilattices. §.§ Statement of the main result The first researcher to consider Brouwerian semilattices as algebraic objects in their own right was W. C. Nemitz in <cit.>. A Brouwerian semilattice is a poset (P, ≤) having a greatest element (which we denote by 1), infima of pairs (the inf of { a,b } is called the `meet' of a and b and denoted by a ∧ b) and relative pseudo-complements (the relative pseudo-complement of a and b is denoted by a → b). We recall that a→ b is characterized by the following property: for every c ∈ P we have c ≤ a → b iff c ∧ a ≤ b. Brouwerian semilattices can also be defined in an alternative way as algebras over the signature 1, ∧, →, subject to the following equations [ a ∧ a=a    a ∧ (a → b)= a ∧ b; a ∧ b = b ∧ a    b ∧ (a → b) = b; a ∧ (b ∧ c) = (a ∧ b) ∧ c    a → (b ∧ c) = (a → b) ∧ (a → c); a ∧ 1= a    a → a=1 ] In case this equational axiomatization is adopted, the partial order ≤ is recovered via the definition a ≤ b iff a ∧ b= a. By a result due to Diego and McKay <cit.>, Brouwerian semilattices are locally finite (meaning that all finitely generated Brouwerian semilattices are finite); since they are also amalgamable, it follows <cit.> that the theory of Brouwerian semilattices has a model completion. We prove that such a model completion is given by the above set of axioms for the theory of Brouwerian semilattices together with the three additional axioms (Density 1, Density 2, Splitting) below. We use the shorthand a ≪ b to mean that a ≤ b and b → a=a.
[Density 1] For every c there exists an element b different from 1 such that b ≪ c. [Density 2] For every c,a_1,a_2,d such that a_1,a_2 ≠ 1, a_1 ≪ c, a_2 ≪ c and d → a_1=a_1, d → a_2=a_2 there exists an element b different from 1 such that: a_1 ≪ b    a_2 ≪ b    b ≪ c    d → b=b [Splitting] For every a,b_1,b_2 such that 1 ≠ a ≪ b_1 ∧ b_2 there exist elements a_1 and a_2 different from 1 such that: b_1 ≥ a_1, b_2 ≥ a_2    a_2 → a = a_1    a_1 → a = a_2    a_2 → b_1 = b_2 → b_1    a_1 → b_2 = b_1 → b_2 As a testimony to the usefulness of this result, the following proposition shows some properties of the existentially closed Brouwerian semilattices that can be deduced from our investigation as an easy exercise. Let L be an existentially closed Brouwerian semilattice. Then: * L has no bottom element. * If a,b ∈ L are incomparable, i.e. a ≰ b and b ≰ a, then the join of a and b in L does not exist. * There are no meet-irreducible elements in L. The paper is structured as follows: Section <ref> gives the basic notions and definitions; in particular, it describes the finite duality and characterizes the existentially closed structures by means of embeddings of finite extensions of finite sub-structures. In Section <ref> we investigate the minimal extensions and use them to give an intermediate characterization of the existentially closed structures. Section <ref> focuses on the axiomatization; it is split into two subsections: the first about the Splitting axiom and the second about the Density axioms. Finally, in Section <ref> we present and prove some properties of the existentially closed structures whose validity follows from this investigation.
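The equational presentation of Brouwerian semilattices recalled above is easy to check mechanically on small examples. The following Python sketch (purely illustrative; the three-element chain is our own choice of test case, not taken from the text) verifies all eight equations, together with the recovery of the order from the meet, by brute force:

```python
from itertools import product

# The three-element chain 0 < 1 < 2, with top element 2, meet = min, and
# relative pseudo-complement a -> b = top if a <= b, else b (a Heyting chain).
ELEMS, TOP = (0, 1, 2), 2
meet = min
imp = lambda a, b: TOP if a <= b else b

for a, b, c in product(ELEMS, repeat=3):
    assert meet(a, a) == a                                   # a /\ a = a
    assert meet(a, b) == meet(b, a)                          # commutativity
    assert meet(a, meet(b, c)) == meet(meet(a, b), c)        # associativity
    assert meet(a, TOP) == a                                 # a /\ 1 = a
    assert meet(a, imp(a, b)) == meet(a, b)                  # a /\ (a -> b) = a /\ b
    assert meet(b, imp(a, b)) == b                           # b /\ (a -> b) = b
    assert imp(a, meet(b, c)) == meet(imp(a, b), imp(a, c))  # -> distributes over /\
    assert imp(a, a) == TOP                                  # a -> a = 1
    # the order is recovered from the meet: a <= b  iff  a /\ b = a
    assert (a <= b) == (meet(a, b) == a)
```

The same brute-force loop works for any finite candidate structure, which makes it a convenient sanity check when constructing finite extensions by hand.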
§ PRELIMINARY BACKGROUND A co-Brouwerian semilattice, CBS for short, is a structure obtained by reversing the order of a Brouwerian semilattice. We will work with CBSes instead of Brouwerian semilattices. A poset (P, ≤) is said to be a co-Brouwerian semilattice if it has a least element, which we denote by 0, and for every a,b ∈ P there exist the sup of { a,b }, which we call the join of a and b and denote by a ∨ b, and the difference a - b satisfying, for every c ∈ P, a-b ≤ c iff a ≤ b ∨ c. a ≪ b will mean that a ≤ b and b-a=b. Clearly, there is also an alternative equational definition for co-Brouwerian semilattices (which we leave to the reader, because it is dual to the equational definition for Brouwerian semilattices given above). Moreover, we will call co-Heyting algebras the structures obtained by reversing the order of Heyting algebras. Obviously any co-Heyting algebra is a CBS. Let A,B be co-Brouwerian semilattices. A map f:A → B is a morphism of co-Brouwerian semilattices if it preserves 0 and the join and difference of any two elements of A. Notice that such a morphism f is an order preserving map because, for any elements a,b of a co-Brouwerian semilattice, we have a ≤ b iff a ∨ b= b. Let L be a CBS. We say that g ∈ L is join-irreducible iff for every n ≥ 0 and b_1, …, b_n ∈ L, we have that g ≤ b_1 ∨…∨ b_n implies g ≤ b_i for some i=1, …, n. Notice that, taking n=0, we obtain that join-irreducibles are different from 0. Let L be a CBS and g ∈ L. Then the following conditions are equivalent: * g is join-irreducible. * g ≠ 0 and for any b_1,b_2 ∈ L we have that g ≤ b_1 ∨ b_2 implies g ≤ b_1 or g ≤ b_2. * For every n ≥ 0 and b_1, …, b_n ∈ L we have that g = b_1 ∨…∨ b_n implies g = b_i for some i=1, …, n. * g ≠ 0 and for any b_1,b_2 ∈ L we have that g = b_1 ∨ b_2 implies g = b_1 or g = b_2. * g ≠ 0 and for any a ∈ L we have that g-a=0 or g-a=g. The implications <ref> ⇔ <ref>, <ref> ⇔ <ref> and <ref> ⇒ <ref> are straightforward.
For the remaining ones see Lemma 2.1 in <cit.>. Let L be a CBS and a ∈ L. A join-irreducible component of a is a maximal element among the join-irreducibles of L that are smaller than or equal to a. The following is a list of facts that might be used without explicit mention. These identities hold in any CBS: 0-a=0    a-0=a    (a-b) ∨ b=a ∨ b    (a-b) ∨ a= a    (a-b) ∨ (a-(a-b))=a    a-(a-(a-b))=a-b    (a_1 ∨⋯∨ a_n)-b= (a_1-b)∨⋯∨ (a_n-b)    a-(b_1 ∨⋯∨ b_n)= ((a-b_1)- ⋯ )-b_n In particular (a-b)-c=(a-c)-b. Furthermore, in any CBS: a ≤ b iff a-b=0; if b ≤ c then b-a ≤ c-a and a-c ≤ a-b. The following facts are true in any finite CBS: a = ⋁{join-irreducible components of a}    a-b=⋁{ g | g is a join-irreducible component of a such that g ≰ b} Moreover, in a finite CBS, g is join-irreducible iff it has a unique predecessor, i.e. a maximal element among the elements strictly smaller than g; in that case we denote it by g^- and it is equal to ⋁_a < g a. Recall that a ≪ b means a ≤ b and b-a=b. Thus, in any finite CBS, a ≪ b if and only if a ≤ b and there are no join-irreducible components of b that are less than or equal to a. Finally, if g is join-irreducible then g^- ≪ g. Let (P, ≤) be a poset. For any a ∈ P we define ↓a = { p ∈ P | p ≤ a } and for any A ⊆ P we define ↓A= ⋃_a ∈ A↓a. A subset D ⊆ P such that ↓D=D is called a downset, i.e. a downward closed subset, of P. The downsets ↓a and ↓A are called the downsets generated by a and A. Given a poset P, the set of downsets of P, denoted by 𝒟(P), has naturally a structure of CBS given by the usual inclusion of subsets. Joins coincide with the union of subsets and the zero element with the empty subset. It turns out that the difference of two downsets A,B ∈𝒟(P) is A-B=↓(A ∖ B). Note that if P is finite then so is 𝒟(P). In that case any downset A ∈𝒟(P) is generated by the set of its maximal elements, and for any A,B ∈𝒟(P) we have that A-B is the downset generated by the maximal elements of A that are not in B.
Moreover the join-irreducibles of 𝒟(P) are the downsets of the form ↓p for p ∈ P, and the downsets generated by the maximal elements of a given downset are its join-irreducible components. Notice that this is not always the case when P is infinite. Finally, when P is finite, for A,B ∈𝒟(P), A ≪ B means that A ⊆ B and A does not contain any maximal element of B. §.§ Local finiteness The variety of CBSes is locally finite. We just sketch the proof, first presented in <cit.>. A CBS L is subdirectly irreducible iff L ∖{ 0 } has a least element, or equivalently L has a single atom, i.e. a minimal element different from 0. Let L be subdirectly irreducible and u the least element of L ∖{ 0 }. Then L ∖{ u } is a sub-CBS of L. This implies that any generating set of L must contain u. Moreover, if L is generated by n elements then L ∖{ u } can be generated by n-1 elements. It follows that the cardinality of subdirectly irreducible CBSes generated by n elements is bounded by # F_n-1 +1, where F_m is the free CBS on m generators. Since # F_0=1, by induction we obtain that F_m is finite for any m, because it is a subdirect product of a finite family of subdirectly irreducibles which are generated by m elements. Computing the cardinality of F_m is a hard task. It is known that # F_0=1, # F_1=2, # F_2=18 and # F_3=623,662,965,552,330. The size of F_4 is still unknown. In <cit.> it is proved that the number of join-irreducible elements of F_4 is 2,494,651,862,209,437. This shows that although the cardinality of the free CBS on a finite number of generators is always finite, it grows very rapidly. §.§ Finite duality Any finite CBS is a distributive lattice. A finite CBS is complete, hence also co-complete, so it is a lattice. The map a ∨ (-) preserves infima because it has a left adjoint given by (-) - a. Thus the distributive laws hold.
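The downset construction 𝒟(P) described above can likewise be experimented with directly. The following Python sketch (illustrative; the three-element "V"-shaped poset is an assumed example, not taken from the text) builds all downsets of P, checks the residuation law characterizing the difference A − B as the downset generated by A ∖ B, and confirms that the join-irreducibles are exactly the principal downsets:

```python
from itertools import combinations

# Poset P = {a, b, c} with a < c and b < c (a "V" shape), encoded as a relation
# leq containing the pairs (x, y) with x <= y.
P = ['a', 'b', 'c']
leq = {(x, x) for x in P} | {('a', 'c'), ('b', 'c')}

def down(S):
    """Downward closure of a subset S of P."""
    return frozenset(x for x in P if any((x, s) in leq for s in S))

# All downsets of P, obtained as closures of arbitrary subsets.
downsets = {down(S) for r in range(len(P) + 1) for S in combinations(P, r)}

def diff(A, B):
    """Difference in D(P): the downset generated by the set difference A \\ B."""
    return down(A - B)

# Residuation law: A - B <= C  iff  A <= B v C, for all downsets A, B, C
# (join is union, order is inclusion).
for A in downsets:
    for B in downsets:
        for C in downsets:
            assert (diff(A, B) <= C) == (A <= B | C)

# Join-irreducibles of D(P) are exactly the principal downsets down({p}).
def join_irred(G):
    return G != frozenset() and all(
        not (G == A | B and G != A and G != B)
        for A in downsets for B in downsets)

principal = {down({p}) for p in P}
joins = {G for G in downsets if join_irred(G)}
assert joins == principal
```

For this P the five downsets are ∅, the two principal downsets of a and b, their union, and the whole poset; the whole poset is join-irreducible because it is the principal downset of c.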
Every finite Brouwerian semilattice is a Heyting algebra, but it is not true that every Brouwerian semilattice morphism among finite Brouwerian semilattices is a Heyting algebra morphism. The following theorem presents the finite duality result due to Köhler: The category 𝐂𝐁𝐒_fin of finite CBSes is dual to the category 𝐏 whose objects are finite posets and whose morphisms are partial mappings α:P → Q satisfying: * ∀ p,q ∈dom α if p < q then α(p) < α(q). * ∀ p ∈dom α and ∀ q ∈ Q if α(p) < q then ∃ r ∈dom α such that p < r and α(r)=q. The proof can be found in <cit.>. We just recall how the equivalence works. To a finite poset P is associated the CBS 𝒟(P) of downsets of P. To a 𝐏-morphism among finite posets is associated the morphism of CBSes that maps a downset to the downset generated by its preimage. More explicitly, to a 𝐏-morphism f:P → Q is associated the morphism that maps a downset D of Q to ↓f^-1(D)={ p∈ P|∃ p'≥ p (p'∈dom f  & f(p')∈ D)}. On the other hand, to a finite CBS L is associated the poset of its join-irreducible elements. The following proposition is easily checked: Let P,Q be finite posets and f : P → Q a 𝐏-morphism. Let α be the associated morphism of CBSes. Then * α is injective if and only if f is surjective. * α is surjective if and only if dom f = P and f is injective. Duality results involving all Brouwerian semilattices can be found in the recent paper <cit.> due to G. Bezhanishvili and R. Jansana. Other dualities are described in <cit.> and <cit.>. Using finite duality we can show that the variety of CBSes has the amalgamation property. The amalgamation property for Brouwerian semilattices is the algebraic counterpart of a syntactic fact about the implication-conjunction fragment of intuitionistic propositional logic: the interpolation property.
The proof that such a fragment satisfies this property can be found in <cit.>. The theory of CBSes has the amalgamation property. First, we show that the pushout of given monomorphisms (= injective maps) m:L_0 → L_1 and n:L_0 → L_2 among finite CBSes is still formed by monomorphisms. Then we extend the result to the general case. To do this, by finite duality, it is sufficient to show that the category 𝐏 has the coamalgamation property. This means that, given two surjective 𝐏-morphisms among finite posets f:P → Q and g:R → Q, there exist a finite poset S and two surjective 𝐏-morphisms f':S → R and g':S → P making the following diagram commute:

  S --f'--> R
  |         |
  g'        g
  |         |
  v         v
  P --f---> Q

For any p ∈ P let q_1, …, q_n be the minimal elements of { f(a) | a ∈dom f and a ≥ p }⊆ Q (it could be that n=0 when such a set is empty). Define: S_p= { ({ p }, { r_1, …, r_n }) | r_i ∈dom g and g(r_i)=q_i for i=1, …,n } Analogously, for any r ∈ R let q_1, …, q_n be the minimal elements of { g(a) | a ∈dom g and a ≥ r }⊆ Q and define S_r= { ({ p_1, …, p_n }, { r }) | p_i ∈dom f and f(p_i)=q_i for i=1, …,n } Let S_P= ⋃_p ∈ P S_p and S_R= ⋃_r ∈ R S_r, and take S= S_P ∪ S_R. We can immediately observe that if p ∈dom f then S_p= { ({ p }, { r }) | r ∈dom g and f(p)=g(r) }, and that S_P ∩ S_R= { ({ p }, { r }) | p ∈dom f, r ∈dom g and f(p)=g(r) }. Finally, if ({ p }, { r_1, …, r_n }) ∈ S_p then the r_i's are two-by-two incomparable; indeed g is order preserving and the g(r_i)'s are incomparable since they are the minimal elements of a subset of Q.
Thus the elements of the two components of any element of S are two-by-two incomparable. We define an order on S in the following way: let (A_1,A_2), (B_1,B_2) ∈ S, where A_1,B_1 ⊆ P and A_2,B_2 ⊆ R; we define (A_1,A_2) ≤ (B_1,B_2) iff ∀ y ∈ B_1 ∃ x ∈ A_1 such that x ≤ y, and ∀ y ∈ B_2 ∃ x ∈ A_2 such that x ≤ y. This order relation is clearly reflexive. It is antisymmetric: indeed, let (A_1,A_2) ≤ (B_1,B_2) and (B_1,B_2) ≤ (A_1,A_2); then for any y ∈ B_1 there exists x ∈ A_1 such that x ≤ y, and there exists z ∈ B_1 such that z ≤ x. Since the elements of B_1 are incomparable we get z=y and thus x=y. Therefore B_1 ⊆ A_1. Symmetrically we get A_1 ⊆ B_1 and then A_1=B_1. Reasoning similarly we get A_2=B_2 and then (A_1,A_2) = (B_1,B_2). It is transitive: indeed, let (A_1,A_2) ≤ (B_1,B_2) and (B_1,B_2) ≤ (C_1,C_2); then for any z ∈ C_1 there exists y ∈ B_1 such that y ≤ z, and there exists x ∈ A_1 such that x ≤ y, hence also x ≤ z. Analogously for the second components. Therefore (A_1,A_2) ≤ (C_1,C_2). Thus we have defined a partial order on S. Take g':S → P and f':S → R as: dom g' = S_P, dom f' = S_R, g'({ p }, A_2)=p, f'(A_1, { r })=r. Then dom f ∘ g' = (g')^-1 (dom f)= { ({ p }, A_2)∈ S_P | p ∈dom f } = { ({ p }, { r }) | p ∈dom f, r ∈dom g and f(p)=g(r) } = (f')^-1 (dom g)= { (A_1, { r })∈ S_R | r ∈dom g } = dom g ∘ f', and if p ∈dom f, r ∈dom g and f(p)=g(r) then (f ∘ g')(({ p }, { r }))=f(p)=g(r)=(g ∘ f')(({ p }, { r })). g' is surjective: indeed, let p ∈ P and let q_1, …, q_n be the minimal elements of { f(a) | a ∈dom f and a ≥ p }; by surjectivity of g there exist r_1, …, r_n ∈dom g such that g(r_i)=q_i, then ({ p }, { r_1, …, r_n }) ∈ S_p ⊆dom g' and g'(({ p }, { r_1, …, r_n }))=p. Analogously for the surjectivity of f'. It remains to show that g',f' are 𝐏-morphisms. Let ({ p }, A), ({ p' }, B) ∈ S_P= dom g' be such that ({ p }, A) < ({ p' }, B); we show that p < p'. Clearly p ≤ p' by the definition of the order on S. Suppose that p=p', and let q_1, …, q_n be the minimal elements of { f(a) | a ∈dom f and a ≥ p }.
Let A={ r_1, …, r_n } and B={ r_1', …, r_n' } be such that g(r_i)=g(r_i')=q_i for i=1, …,n. Then for any r_i' ∈ B there exists r_j ∈ A such that r_j ≤ r_i'. If r_j ≤ r_i' with j ≠ i then q_j=g(r_j) ≤ g(r_i')=q_i and this is absurd because the q_i's are incomparable. Therefore r_i ≤ r_i' for any i=1, …, n, if r_i < r_i' then q_i=g(r_i) < g(r_i')=q_i which is absurd. Thus r_i=r_i' and A=B, we have obtained a contradiction. Analogous for f'.Therefore g',f' preserve the strict order.Let ({ p }, A) ∈ S_P and p < p'.Let q_1, …, q_n be the minimal elements of { f(a) | a ∈domfanda ≥ p } and q_1', …, q_m' be the minimal elements of { f(a) | a ∈domfanda ≥ p' }; since the latter set is included in the former and they are both finite we have that for any q_j' there exist q_i such that q_i ≤ q_j'.Let A= { r_1, …, r_n } with g(r_i)=q_i. Since g is a 𝐏-morphism and for any q_j' there exists i such that g(r_i)=q_i ≤ q_j', there exists r_j' ∈domg such that r_i ≤ r_j' and g(r_j')=q_j'. Take B= { r_1', …, r_m' } then for any r_j' there exists r_i such that r_i ≤ r_j', therefore ({ p }, A) <({ p' }, B) ∈ S_P. Analogous for f'.Thus f',g' are surjective 𝐏-morphisms and they coamalgamate f,g.We now want to prove the general case: pushouts of monos along monos in the category of CBSes are monos.Suppose m:L_0 → L_1 and n:L_0 → L_2 are monos and L_0,L_1,L_2 are CBSes. Since the variety is locally finite by Theorem <ref>, we can consider L_0,L_1,L_2 as filtered colimits of families of finite CBSes. Assume without loss of generality that L_1 ∩ L_2 =L_0 and m,n are inclusions, then we can consider the families indexed by 𝒫_fin(L_1 ∪ L_2) given for any finite subset S ⊆ L_1 ∪ L_2 by the sub-CBSes respectively of L_1,L_2 and L_0 generated respectively by S ∩ L_1, S ∩ L_2 and m^-1(S ∩ L_1) ∩ n^-1(S ∩ L_2)=S ∩ L_0. Then we can compute the pushouts of the restrictions of the monos for any index, the colimit of all these pushouts is a mono because each of them is a mono. 
Thus we have obtained that the pushout of m along n and the pushout of n along m are monomorphisms.§.§ Existentially closed CBSes
In this subsection we want to characterize the existentially closed CBSes using the finite extensions of their finite sub-CBSes. Let T be a first order theory and 𝒜 a model of T. 𝒜 is said to be existentially closed for T if, for every model ℬ of T such that 𝒜⊆ℬ, every existential sentence in the language extended with names for the elements of 𝒜 which holds in ℬ also holds in 𝒜. The following proposition is well known from textbooks <cit.>: Let T be a universal theory. If T has a model completion T^*, then the class of models of T^* is the class of models of T which are existentially closed for T. Thanks to local finiteness and the amalgamation property, an easy model-theoretic argument yields the following characterization of the existentially closed CBSes: Let L be a CBS. L is existentially closed iff for any finite sub-CBS L_0 ⊆ L and for any finite extension C ⊇ L_0 there exists an embedding C → L fixing L_0 pointwise. First, we prove that if for any finite sub-CBS L_0 ⊆ L and for any finite extension C ⊇ L_0 there exists an embedding C → L fixing L_0 pointwise, then L is existentially closed. Let D be an extension of L and ∃ x_1, …, x_m φ (x_1, …, x_m, a_1, …,a_n) an existential ℒ_L-sentence, where φ (x_1, …, x_m, a_1, …,a_n) is quantifier free and a_1, …, a_n ∈ L. Suppose D ∃ x_1, …, x_m φ (x_1, …, x_m, a_1, …,a_n). Let d_1, …, d_m be elements of D such that D φ (d_1, …, d_m, a_1, …,a_n). Consider the sub-CBS L_0 ⊆ L generated by a_1, …, a_n and the sub-CBS C ⊆ D generated by d_1, …, d_m, a_1, …,a_n. They are both finite because they are finitely generated and the CBSes form a locally finite variety. By hypothesis there exists an embedding C → L fixing L_0 pointwise. Let d_1', …, d_m' be the images of d_1, …, d_m under this embedding.
Thus L φ (d_1', …, d_m', a_1, …,a_n) because φ is quantifier free. Therefore L ∃ x_1, …, x_m φ (x_1, …, x_m, a_1, …,a_n). It follows that L is existentially closed. To prove the other implication, suppose L is existentially closed. By the amalgamation property there exists a CBS D amalgamating L and C over L_0. [row sep=tiny] L [rd] L_0 [rd, hook] [ru, hook]D C [ru] Let Σ be the set of quantifier free ℒ_C-sentences of the form c * c'=c” true in C, where c,c',c”∈ C and * is either ∨ or -. Hence (C,Σ) is a finite presentation of C. Now let c_1,…,c_r,a_1, …, a_n be an enumeration of the elements of C, where the a_i's are the elements lying in L_0. We obtain the quantifier free ℒ_C-sentence σ(c_1,…,c_r,a_1, …,a_n) by taking the conjunction of all the sentences in Σ together with all the sentences of the form ¬(c = c') for every c,c' ∈ C such that c ≠ c'. Clearly ∃ x_1, …, x_r σ(x_1,…,x_r,a_1, …,a_n) is an existential ℒ_L-sentence true in D. Since L is existentially closed, L ∃ x_1, …, x_r σ(x_1,…,x_r,a_1, …,a_n). Let c_1', …, c_r' ∈ L be such that L σ(c_1',…,c_r',a_1, …,a_n). The map C → L fixing L_0 pointwise and mapping c_i to c_i' is an embedding. Indeed it is injective and a homomorphism by definition of the sentence σ. § MINIMAL FINITE EXTENSIONS
In this section we focus on the finite extensions of CBSes. We are interested in particular in the minimal ones, since any finite extension can be decomposed into a finite chain of minimal extensions. We will study minimal finite extensions by describing the properties of some elements which generate them. This investigation will lead us to another characterization of the existentially closed CBSes. Let P be a poset, P_0 ⊆ P and ℱ a partition of P_0, and let A,B ∈ℱ. We say that A ≤ B iff there exist a ∈ A, b ∈ B such that a ≤ b.
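The relation just defined can be sketched computationally. The following toy Python (the encoding and names are mine, not from the paper: a finite poset is given as a comparison function) implements the block relation A ≤ B; as the next proposition shows, on the fiber partition of a 𝐏-morphism it is a genuine partial order, but in general it need not be.

```python
# Illustrative sketch (assumed encoding): a finite poset is a set of elements
# together with a comparison function le(x, y) meaning x <= y.

def block_le(le, A, B):
    """A <= B iff some a in A lies below some b in B (the definition above)."""
    return any(le(a, b) for a in A for b in B)

# The "V" poset 0 < 1, 0 < 2 (with 1 and 2 incomparable):
le = lambda x, y: x == y or (x == 0 and y in (1, 2))

A, B = frozenset({0}), frozenset({1, 2})
assert block_le(le, A, B)        # 0 <= 1, hence {0} <= {1, 2}
assert not block_le(le, B, A)    # neither 1 nor 2 lies below 0
```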
Let P be a finite poset. To give a surjective 𝐏-morphism f from P to any finite poset is equivalent, up to isomorphism, to give a partition ℱ of a subset of P such that: * for all A,B ∈ℱ we have that if A ≤ B and B ≤ A then A=B, * for all A,B ∈ℱ and a ∈ A if A ≤ B then there exists b ∈ B such that a ≤ b,* for all A ∈ℱ we have that all the elements of P in A are two-by-two incomparable. Given a surjective 𝐏-morphism f : P → Q, the partition ℱ of domf ⊆ P is obtained by taking the collection of the fibers of f. ℱ satisfies <ref> because f is order preserving and the order on Q is antisymmetric. Furthermore ℱ satisfies <ref> as a consequence of condition <ref> in the definition of 𝐏-morphism. Finally, ℱ satisfies <ref> because 𝐏-morphisms are strictly order preserving. On the other hand, given a partition ℱ of a subset P_0 of P satisfying the conditions <ref>, <ref> and <ref>, we obtain a poset Q by taking the quotient set of P_0 given by ℱ with the order defined in Definition <ref>. The partial map f : P → Q is just the projection onto the quotient. Q is a poset: the order of Q is clearly reflexive, and it is antisymmetric because ℱ satisfies <ref>. It is also transitive because if A ≤ B and B ≤ C then there exist a ∈ A, b, b' ∈ B, c ∈ C such that a ≤ b, b' ≤ c; since <ref> holds, there exists c' ∈ C such that b ≤ c', hence a ≤ c' and A ≤ C. The projection f is order preserving, it is a 𝐏-morphism because <ref> holds and it is obviously surjective. It remains to show that a surjective 𝐏-morphism f:P → Q differs by an isomorphism from the projection onto the quotient defined by the partition given by the fibers of f. This follows from the fact that for any a,b ∈domf we have f(a) ≤ f(b) iff f^-1(f(a)) ≤ f^-1(f(b)) (notice that f^-1(f(a)) is the element of ℱ containing a). Indeed if f(a) ≤ f(b), since f is a 𝐏-morphism, there exists b' such that a ≤ b' and f(b')=f(b); therefore since a ≤ b' we have f^-1(f(a)) ≤ f^-1(f(b)).
The other direction of the implication holds because f is order preserving. Let P,Q be finite posets andf : P → Q a surjective 𝐏-morphism (or equivalently: let ℱ satisfy conditions <ref>, <ref> and <ref> of Proposition <ref>). We say that f (or ℱ) is minimal if #P=#Q+1.If ℱ is minimal, then at most one element of ℱ is not a singleton. Let f:P → Q be a surjective 𝐏-morphism between finite posets. Let n = #P-#Q. Then there exist Q_0, … , Q_n with Q_0=P, Q_n=Q and f_i:Q_i-1→ Q_i which are minimal surjective 𝐏-morphisms for i=1,…,n such that f= f_n ∘⋯∘ f_1.Let R = domf, we can decompose f=f”∘ f' where f”:R → Q is just the restriction of f on its domain and f':P → R is the partial morphism with domain R that acts as the identity on R.The morphism f”:R → Q is a total morphism[Since it is a total map, its dual preserves the maximum downset and intersections of downsets. Therefore it is dual to a co-Heyting algebras morphism.], we prove by induction on # R-#Q that it can be decomposed in a chain of minimal surjective 𝐏-morphisms.Suppose # R-#Q>1 and let us consider the partition ℱ of R given by the fibers of f”. Let x ∈ P be minimal among the elements of R that are not in a singleton of ℱ. Let G be the element of ℱ containing x, then #G > 1 and all the elements of R inside G are incomparable to each other.Let Q_n-1 be the quotient of R defined by the refining of ℱ in which G is substituted by { x } and G \{ x }, we name this new partition ℱ'.The projection onto the quotient π:R → Q_n-1 is a 𝐏-morphism because ℱ' satisfies the conditions <ref>, <ref> and <ref> of Proposition <ref>. Indeed, it satisfies <ref> and <ref> because ℱ satisfies them and the elements in G are incomparable. 
To show that <ref> holds it is sufficient to show that for the pairs of sets in ℱ' in which exactly one of the two is { x } or G \{ x } because ℱ satisfies <ref> and { x } and G \{ x } are incomparable.Let A ∈ℱ be different from { x } and G \{ x }.If { x }≤ A then <ref> holds because { x } is a singleton.If A ≤{ x } then there exists a ∈ A such that a ≤ x, hence A is a singleton by minimality of x, therefore <ref> holds.If G \{ x }≤ A then we have that G ≤ A, thus for any y ∈ G \{ x } there exists y' ≥ y such that y' ∈ A.If A ≤ G \{ x } it is A ≤ G thus for any y ∈ A there exists y' ≥ y such that y'=x or y' ∈ G \{ x }. Suppose there exists y ∈ A such that there is no y' ≥ y such that y' ∈ G \{ x }: then x ≥ y, by minimality of x it has to be A= { y } then A ≰ G \{ x }, this is absurd.Therefore π:R → Q_n-1 is a surjective total 𝐏-morphism and we can apply the inductive hypothesis on π.Then it suffices to show that the order-preserving map f_n:Q_n-1→ Q induced by f” is a 𝐏-morphism, because in that case it is obviously surjective and minimal. But this is easy to show because the fibers of f_n are all singletons except one and because f” is a 𝐏-morphism.It remains to decompose f', to do that just enumerate the elements of P ∖ R= { p_1, …, p_k } and let f_1 ' :R ∪{ p_1 }→ R be the partial morphism with domain R that acts as the identity on R. Then construct f_2 ' : R ∪{ p_1,p_2 }→ R ∪{ p_1 } in the same way and so on until p_k.We say that a proper extension L_0 ⊆ L of finite CBSes is minimal if there is no intermediate proper extension L_0 ⊊ L_1 ⊊ L.An extension L_0 ⊆ L of finite CBSes is minimal iff the surjective 𝐏-morphism that is dual to the inclusion is minimal.Let f : P → Q be a surjective 𝐏-morphism with #P=#Q+1. And suppose there exist two surjective 𝐏-morphisms g_1 :P → R and g_2 : R → Q such that f=g_2 ∘ g_1, being g_1 and g_2 surjective #R must be equal to #P or #Q. 
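The bookkeeping in the decomposition theorem can be illustrated by a small sketch (Python, with my own names; it tracks only cardinalities, not the morphisms themselves): the points outside dom f are peeled off one at a time by first-kind steps, and each non-singleton fiber is split one element at a time by second-kind steps, giving exactly #P − #Q minimal steps in total.

```python
def decomposition_lengths(P, dom, fibers):
    """Sizes #Q_0, ..., #Q_n of the intermediate posets in the decomposition
    of a surjective P-morphism with domain dom and the given fibers."""
    sizes = [len(P)]
    for _ in P - dom:                 # first-kind steps: drop one point each
        sizes.append(sizes[-1] - 1)
    for F in fibers:                  # second-kind steps: split a fiber
        for _ in range(len(F) - 1):
            sizes.append(sizes[-1] - 1)
    return sizes

# P has 5 points, one lies outside dom f, and there is one two-element fiber:
P, dom = {0, 1, 2, 3, 4}, {0, 1, 2, 3}
fibers = [{0}, {1}, {2, 3}]           # so #Q = 3
sizes = decomposition_lengths(P, dom, fibers)
assert sizes == [5, 4, 3]             # n = #P - #Q = 2 minimal steps
```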
In the former case the domain of g_1 must be all P and the relative fiber partition could only be the one formed exclusively by singletons because of cardinality, in the latter case the same holds for g_2. So either g_1 or g_2 has to be an isomorphism of posets.Hence if we have two consecutive extensions that form an inclusion whose dual is minimal, then the dual of one of the two extensions is an isomorphism and so the relative extension is the identity.The other implication follows easily from Theorem <ref>. By Definition <ref> it follows immediately that there are two different kinds of minimal surjective 𝐏-morphisms between finite posets.We call a minimal surjective 𝐏-morphism of the first kind when there is exactly one element outside its domain and thus the restriction of such map on its domain is bijective and therefore an isomorphism of posets (any bijective 𝐏-morphism is an isomorphism of posets). Some of these maps are dual to co-Heyting algebras embeddings but some are not.We call a minimal surjective 𝐏-morphism of the second kind when it is total, i.e. there are no elements outside its domain, and thus there is exactly a single fiber which is not a singleton and it contains exactly two elements. The maps of the second kind are dual to co-Heyting algebras embeddings.Figures <ref> and <ref> show some examples of minimal surjective 𝐏-morphisms and relative extensions of CBSes.We call a finite minimal extension of CBSes either of the first or of the second kind if the corresponding minimal surjective 𝐏-morphism is respectively of the first or of the second kind.Therefore, a finite minimal extension of CBSes of the first kind preserves the join-irreducibility of all the join-irreducibles in the domain. 
Indeed, since the corresponding 𝐏-morphism is an isomorphism when restricted to its domain, we have that the downset generated by the preimage of a principal downset is still principal. A finite minimal extension of CBSes of the second kind preserves the join-irreducibility of all the join-irreducibles in the domain except one, which becomes the join of the two new join-irreducible elements in the codomain. Indeed, the corresponding 𝐏-morphism is total and all its fibers are singletons except one; this implies that the preimage of any principal downset is principal except for one, whose preimage is a downset generated by two elements. It turns out that we can characterize the finite minimal extensions of CBSes by means of their generators. Let L_0 be a finite CBS and L an extension of L_0. We call an element x ∈ L primitive of the first kind over L_0 if the following conditions are satisfied: * x ∉ L_0 and for any a join-irreducible of L_0: * a-x ∈ L_0, * x-a = x or x-a = 0. Let L_0 be a finite CBS and L an extension of L_0.[Notice that we do not require L to be a finite CBS.] If x ∈ L is primitive of the first kind over L_0 then the sub-CBS L_0 ⟨ x ⟩ of L generated by x over L_0 is a finite minimal extension of L_0 of the first kind. Before proving Theorem <ref> we need the following lemma: Let L_0 be a finite CBS, L an extension of L_0 and x ∈ L primitive of the first kind over L_0; then the two following properties hold: * ∀ a ∈ L_0, a-x ∈ L_0, * ∀ a ∈ L_0, x-a = x or x-a = 0. Let a ∈ L_0 and a_1, …, a_n be its join-irreducible components in L_0; since L_0 is finite we have a=a_1 ∨⋯∨ a_n. To prove <ref> observe that a-x=(a_1-x) ∨⋯∨ (a_n-x), which is an element of L_0 because it is a join of elements of L_0, as a consequence of <ref> of Definition <ref>. Furthermore, to prove <ref> notice that x-a=x-(a_1 ∨⋯∨ a_n)=((x-a_1)- ⋯ )-a_n and that <ref> of Definition <ref> implies that there are two possibilities: x-a_i=x for any i=1, …,n or x-a_i=0 for some i.
In the former case we have x-a=x, in the latter suppose that i is the smallest index such thatx-a_i=0 then x-a=((x-a_i)- ⋯ )-a_n=(0- ⋯ )-a_n=0. Let L' be the sub ∨-semilattice of L generated by x over L_0, we show that L' actually coincides with L_0 ⟨ x ⟩.L' is clearly finite, its elements are the elements of L_0 and the elements of the form a ∨ x with a ∈ L_0. It follows from <ref> and <ref> of Lemma <ref> that if a,b,c,d ∈ L_0 ∪{ x } then (a ∨ b) -(c ∨ d) = (a-(c ∨ d) ) ∨ (b-(c ∨ d) )= ( (a-c )-d) ∨ ((b-c)-d) belong to L'. Therefore L'=L_0 ⟨ x ⟩.We want to show that the join-irreducibles of L_0 ⟨ x ⟩ are exactly the join-irreducibles of L_0 and x.x is a join-irreducible element of L_0 ⟨ x ⟩, indeed x ≠ 0 since by hypothesis x ∉ L_0 and suppose that x ≤ a ∨ b with a,b ∈ L_0 ⟨ x ⟩ and a,b ≱ x; therefore a and b must be elements of L_0 because they cannot be of the form c ∨ x with c ∈ L_0. It follows from <ref> of Lemma <ref> and a,b ≱ x that x-a = x-b = x and so 0=x-(a ∨ b )=( x-b)-a= x-a = x, this is absurd because x ≠ 0.The join-irreducible elements of L_0 are still join-irreducible in L_0 ⟨ x ⟩. It is sufficient to show that for any g join-irreducible in L_0 if g ≤ a ∨ x with a ∈ L_0 then g ≤ a or g ≤ x. Notice that being L a CBS it is g= (g-x ) ∨ ( g-(g-x)) (see Remark <ref>), we also have by <ref> of Definition <ref> that g-x and g-(g-x) are in L_0. Then being g join-irreducible in L_0 we get g=g-x or g = g-(g-x). In the latter case g-x= g-(g-(g-x))= g-g = 0 so g ≤ x. In the former case 0=g- (a ∨ x )= (g-x)-a= g-a so g ≤ a.Clearly if an element of the form x ∨ a with a ∈ L_0 is different from a and x it cannot be join-irreducible in L_0 ⟨ x ⟩. Also if an element of L_0 is not join-irreducible in L_0 it cannot be join-irreducible in L_0 ⟨ x ⟩. 
Hence the join-irreducible elements of L_0 ⟨ x ⟩ are exactly the join-irreducible elements of L_0 and x. Therefore the extension L_0 ↪ L_0 ⟨ x ⟩ is minimal, since L_0 ⟨ x ⟩ contains exactly one join-irreducible element more than L_0. Notice that L_0 ⟨ x ⟩ is a minimal extension of L_0 of the first kind because the join-irreducibility of all the join-irreducibles of L_0 is preserved. Let L_0 be a finite CBS and L an extension of L_0.[Again we do not require L to be a finite CBS.] We call a couple of elements (x_1,x_2) ∈ L^2 primitive of the second kind over L_0 if the following conditions are satisfied: * x_1,x_2 ∉ L_0 and x_1 ≠ x_2 and there exists g join-irreducible element of L_0 such that: * g-x_1=x_2 and g-x_2=x_1, * for any join-irreducible element a of L_0 such that a < g we have a-x_i ∈ L_0 for i=1,2. The element g in Definition <ref> is uniquely determined by (x_1,x_2), since g= x_1 ∨ x_2. Indeed, by property <ref> of Definition <ref> we have x_1 ≤ g, x_2 ≤ g and also g-(x_1 ∨ x_2)=(g-x_1)-x_2=x_2-x_2=0, which implies g ≤ x_1 ∨ x_2. Let L_0 be a finite CBS and L an extension of L_0. If (x_1,x_2) ∈ L^2 is primitive of the second kind over L_0 then the sub-CBS L_0 ⟨ x_1, x_2 ⟩ of L generated by { x_1,x_2 } over L_0 is a finite minimal extension of L_0 of the second kind. Before proving Theorem <ref> we need the following lemma: Let L_0 be a finite CBS, L an extension of L_0 and (x_1,x_2) ∈ L^2 primitive of the second kind over L_0; then the two following properties hold: * ∀ a ∈ L_0, a-x_i ∈ L_0 or a-x_i = b ∨ x_j with b ∈ L_0, for { i,j } = { 1,2 }. * ∀ a ∈ L_0, x_i-a = x_i or x_i-a = 0 for i=1,2. To show <ref> we first prove that if a ≠ g is join-irreducible in L_0, then a-x_i ∈ L_0. If a < g this is covered by hypothesis <ref> of Definition <ref>. Now suppose that a is a join-irreducible element of L_0 such that a ≰ g; then a-g=a because a is join-irreducible.
Thus a=a-g ≤ a-x_i ≤ a since x_i ≤ g (because x_i=g-x_j ≤ g with i ≠ j) and thus a-x_i=a ∈ L_0 for i=1,2.We now prove <ref> for all a ∈ L_0.Let a ∈ L_0 and a_1, …, a_n be its join-irreducible components in L_0, since L_0 is finite we have a=a_1 ∨⋯∨ a_n. To prove <ref> we consider two cases: g is a join-irreducible component of a or g is not a join-irreducible component of a. In the former case, when g is a join-irreducible component of a, suppose a_1=g, thena-x_i=(g-x_i) ∨⋯∨ (a_n-x_i)=x_j ∨ (a_2-x_i) ∨⋯∨ (a_n-x_i)with { i,j } = { 1,2 }, notice that (a_2-x_i) ∨⋯∨ (a_n-x_i) ∈ L_0 because it is join of elements of L_0 by what we have just proved. In the latter case, g is not a join-irreducible component of a, we have a-x=(a_1-x) ∨⋯∨ (a_n-x)which is an element of L_0 because it is join of elements of L_0 as a consequence of what we have just proved.Furthermore, to prove <ref> notice that since g is join-irreducible in L_0 we have that for any a ∈ L_0 there are two cases to consider: g ≤ a or g-a=g. In the former case we have, since x_i ≤ g by <ref> of Definition <ref>, that x_i-a=0 for i=1,2 because x_i ≤ g ≤ a. In the latter case, since g-a=g, we havex_i-a=(g-x_j)-a=(g-a)-x_j=g-x_j=x_ifor { i,j } = { 1,2 }. Let L' be the sub ∨-semilattice of L generated by { x_1,x_2 } over L_0.As shown in Remark <ref> we have g= x_1 ∨ x_2. Also x_2 - x_1 = x_2 and x_1 - x_2 = x_1. Indeed x_2 - x_1= (g - x_1)-x_1 =g-x_1= x_2, the other case is symmetrical.Hence by reasoning in a similar way as in the proof of Theorem <ref>, using properties <ref> and <ref> of Lemma <ref>, we get that L'=L_0 ⟨ x_1, x_2 ⟩.We now want to show that the join-irreducibles of L_0 ⟨ x_1, x_2 ⟩ are exactly x_1,x_2 and the join-irreducibles of L_0 different from g.First, notice that if an element of L_0 is not join-irreducible in L_0 it cannot be join-irreducible in L_0 ⟨ x_1, x_2 ⟩. 
Furthermore, the only elements of L_0 ⟨ x_1, x_2 ⟩ not in L_0 that could be join-irreducible in L_0 ⟨ x_1, x_2 ⟩ are x_1,x_2, because L_0 ⟨ x_1, x_2 ⟩ is the ∨-semilattice generated by { x_1,x_2 } over L_0. We now show that x_1,x_2 are join-irreducible in L_0 ⟨ x_1, x_2 ⟩. Suppose x_1 is not join-irreducible in L_0 ⟨ x_1,x_2 ⟩ and let y_1, …, y_r be its join-irreducible components. One of them must be x_2 because x_1 ∉ L_0 and we observed that all the join-irreducible elements of L_0 ⟨ x_1,x_2 ⟩ are in L_0 ∪{ x_1,x_2 }. But then x_2 ≤ x_1 and therefore, by what was shown above, 0=x_2-x_1=x_2, which is absurd because x_2 ∉ L_0. The same reasoning holds for the join-irreducibility of x_2. It remains to show that the only join-irreducible element of L_0 which is not join-irreducible in L_0 ⟨ x_1, x_2 ⟩ is g. Observe that g is not join-irreducible in L_0 ⟨ x_1, x_2 ⟩ because g=x_1 ∨ x_2 and x_1,x_2 ≠ g since x_1,x_2 ∉ L_0. Let b ∈ L_0 be join-irreducible in L_0 but not in L_0 ⟨ x_1, x_2 ⟩, and let y_1, …, y_r be the join-irreducible components of b in L_0 ⟨ x_1, x_2 ⟩. From what we observed above it follows that the y_i's are in L_0 ∪{ x_1,x_2 } and, since b is join-irreducible in L_0, at least one of them is not in L_0. We can suppose y_1=x_1, so x_1 ≤ b. This implies that g ≤ b: indeed one among y_2, …, y_r has to be x_2, because otherwise y_2 ∨⋯∨ y_r ∈ L_0 and, being the y_i's the join-irreducible components of b, we would have that x_1= b-(y_2 ∨⋯∨ y_r ) must be in L_0, which is absurd. If g < b then b-g= b because b is join-irreducible in L_0, but in this case x_1=y_1 ≤ b=b-g ≤ b-x_1 = y_2 ∨⋯∨ y_r, and this is not possible because the y_i's are the join-irreducible components of b.
This implies b=g.Therefore the extension L_0 ↪ L_0 ⟨ x_1,x_2 ⟩ is minimal since the number of join-irreducibles of L_0 ⟨ x_1,x_2 ⟩ is greater by one than the number of the join-irreducibles of L_0.Notice that L_0 ⟨ x_1,x_2 ⟩ is a minimal extension of L_0 of the second kind because the join-irreducibility of all but one of the join-irreducibles of L_0 is preserved.Let L_0 be a finite CBS and L a finite minimal extension of L_0, then L is generated over L_0 either by a primitive element x ∈ L of the first kind over L_0 or by x_1,x_2 ∈ L forming a primitive couple (x_1,x_2) of the second kind over L_0.Let f : P → Q be the surjective minimal 𝐏-morphism dual to the inclusion of L_0 into L. Recall that P and Q are respectively the posets of the join-irreducible elements of L and L_0.We consider two cases:The first case is when f is of the first kind, i.e. domf ≠ P and there exists only one element p ∈ P ∖domf. In this case, by minimality of f, the restriction of f on its domain is an isomorphism of posets. We want to prove that x=p is a primitive element of L of the first kind over L_0.We observe that the downset p cannot be generated by the preimage of any downset in Q because p is not in the domain of f, therefore x ∉ L_0.For any q ∈ Q let q' be the unique element of P in the preimage of q by f, then f^-1 ( q)= q' because f is a 𝐏-morphism. Hence if q' ≤ p then q'- p= ∅ and if q' ≰ p then q' - p= q'. This translates to the fact that for any a join-irreducible of L_0 we have a-x ∈ L_0 because both ∅ and q' are generated by the preimage of a downset of Q. Furthermore, for any q ∈ Q if p ≤ q' then p -q'=∅ and if p ≰ q' then p- q'= p. Thus for any a join-irreducible of L_0 either x-a=0 or x-a=x.The second case is when f is of the second kind, i.e. domf = P and only two elements p_1,p_2 have the same image by f, recall that p_1,p_2 are incomparable. 
We want to prove that x_1=p_1 and x_2=p_2 form a primitive couple of elements of L of the second kind over L_0. We have x_1 ≠ x_2 and x_1,x_2 ∉ L_0 because the downsets ↓p_1 and ↓p_2 are distinct and neither of them is generated by the preimage of a downset in Q. Indeed, since f is a total map the preimages of downsets of Q are already downsets of P, and any such preimage contains p_1 iff it contains p_2. Let f(p_1)=f(p_2)=g ∈ Q; then f^-1(g)= { p_1, p_2 }, and since f is total f^-1(↓g) is a downset, thus f^-1(↓g)=↓f^-1(g). We have that f^-1(↓g)= ↓p_1 ∪ ↓p_2 because f is a 𝐏-morphism, and therefore f^-1(↓g)-↓p_1= ↓p_2 and f^-1(↓g)-↓p_2= ↓p_1 because p_1 and p_2 are incomparable. Therefore g-x_1=x_2 and g-x_2=x_1. Let q ∈ Q be such that q < g and let q' be the unique element of P in the preimage of q by f; then f^-1(↓q)= ↓q' because f is a 𝐏-morphism. Hence, for p ∈{ p_1,p_2 }, if q' ≤ p then ↓q'-↓p= ∅ and if q' ≰ p then ↓q'-↓p= ↓q'. Both ∅ and ↓q' are generated by the preimage of a downset of Q. This means that for any a join-irreducible of L_0 such that a < g we have a-x_i ∈ L_0 for i=1,2. Let L_0 be a finite CBS. We call signature of the first kind in L_0 a couple (h,G) where h ∈ L_0 and G is a set of two-by-two incomparable join-irreducible elements of L_0 such that h < g for all g ∈ G. We allow G to be empty. We call signature of the second kind in L_0 a triple (h_1,h_2,g) where h_1,h_2 ∈ L_0 and g is a join-irreducible element of L_0 such that h_1 ∨ h_2 =g^-, the unique predecessor of g in L_0. Let L_0 be a finite CBS. To give a minimal finite extension either of the first or of the second kind of L_0 (up to isomorphism over L_0) is equivalent to give respectively: * A signature (h,G) of the first kind in L_0.* A signature (h_1,h_2,g) of the second kind in L_0. Once again finite duality shows its usefulness. Indeed, we have the following lemma: Let Q be a finite poset.
To give a minimal surjective 𝐏-morphism f with codomain Q either of the first or of the second kind (up to isomorphism) is equivalent to give respectively: * D,U respectively a downset and an upset of Q such that D ∩ U= ∅ and for any d ∈ D, u ∈ U we have d ≤ u. * g ∈ Q and D_1,D_2 downsets of Q such that D_1 ∪ D_2=↓g ∖{ g }. The definitions of an upset and of the upset ↑a generated by an element a are analogous to the definitions for downsets, replacing ≤ with ≥. Let f:P → Q be a minimal surjective 𝐏-morphism. If f is of the first kind and domf = P ∖{ x }, take D=f(↓x ∖{ x }) and U=f(↑x ∖{ x }). If f is of the second kind, i.e. domf = P, then there is exactly one g ∈ Q such that f^-1(g)={ x_1,x_2 } consists of two elements of P. Take D_i= f(↓x_i ∖{ x_i }) for i=1,2. On the other hand, given D,U as in <ref>, we obtain a minimal surjective 𝐏-morphism f:P → Q by taking P= Q ⊔{ x } and extending the order of Q by setting q < x iff q ∈ D and x < q iff q ∈ U for any q ∈ Q. Take domf= Q ⊂ P and f as the identity on its domain. Given g ∈ Q and D_1,D_2 as in <ref>, we obtain a minimal surjective 𝐏-morphism f:P → Q by taking P= (Q ∖{ g }) ⊔{ x_1,x_2 } and extending the order of Q ∖{ g } by setting q < x_i iff q ∈ D_i and x_i < q iff g < q for any q ∈ Q. Take domf= P and let f map x_1,x_2 to g and act as the identity on Q ∖{ g }. Now let f_1 : P_1 → Q and f_2 : P_2 → Q be two surjective 𝐏-morphisms to which the same (D,U) or (D_1,D_2,g) is associated; we show that there exists an isomorphism of posets φ :P_1 → P_2 such that f_2 ∘φ= f_1. Suppose f_1,f_2 are of the first kind and the same (D,U) is associated to both of them. Then domf_1 = P_1 ∖{ p_1 } and domf_2 = P_2 ∖{ p_2 }.
Being f_1,f_2 two 𝐏-morphisms which are isomorphisms when restricted on their domains, we can invert the restriction of f_2 and compose it with the restriction of f_1 to obtain an isomorphism of posets φ' : domf_1 →domf_2.It remains to extend φ' to an isomorphism φ :P_1 → P_2, just set φ (p_1)=p_2; φ so defined is an isomorphism of posets, we need to show that it reflects and preserves the order of P_1. f_1 and f_2 map respectively the elements smaller than p_1 and p_2 into the same elements of Q and the elements greater than p_1 and p_2 into the same elements of Q by hypothesis. Hence φ' maps the elements smaller than p_1 into the elements smaller than p_2 and the elements greater than p_1 into the elements greater than p_2 and so does its inverse. It follows that φ is an isomorphism of posets.Suppose f_1,f_2 are of the second kind and the same (D_1,D_2,g) is associated to both of them. Then f_1,f_2 are total, i.e. domf_1=P_1 and domf_2=P_2. The orders restricted on P_1 \ f_1^-1(g) and P_2 \ f_2^-1(g) are both isomorphic to Q \{ g } with isomorphisms given by the restrictions of f_1,f_2, indeed given two elements a,b ∈ P_1 \ f_i^-1(g) it is f_i(a) ≤ f_i(b) iff a ≤ b because f_i is a 𝐏-morphism for i=1,2. Composing these two isomorphisms we obtain an isomorphism φ' : P_1 \ f_1^-1(g) → P_2 \ f_2^-1(g).We now extend it to φ:P_1 → P_2.Let f_i^-1(g)= { x_1,i, x_2,i} for i=1,2, we can suppose to have ordered the indices in such a way that f_i( x_j,i\{ x_j,i})=D_j for i,j=1,2. Clearly we extend φ' to φ defining φ(x_j,1)=x_j,2. 
It remains to show that φ is order preserving and reflecting.Let p ∈ P_1 be such that p ∉{ x_1,1, x_2,1}.Since f_1,f_2 are 𝐏-morphisms and f_i(x_j,i)=g we get x_j,2=φ(x_j,1) ≤φ(p) iff g ≤ f_2(φ(p))=f_1(p) iff x_j,1≤ p for j=1,2.Furthermore it is p ≤ x_1,1 iff f_1(p) ∈ D_1 and p ≤ x_2,1 iff f_1(p) ∈ D_2, similarly it is φ(p) ≤ x_1,2 iff f_1(p)=f_2(φ(p)) ∈ D_1 and φ(p) ≤ x_2,2 iff f_1(p)=f_2(φ(p)) ∈ D_2.Therefore φ is order preserving and reflecting.We just need to translate Lemma <ref> in the language of CBSes using the finite duality:A signature of the first kind (h,G) in L_0 corresponds to a couple (D,U) in P as in <ref> of Lemma <ref>. Indeed, by Köhler duality, downsets of P correspond to elements of L_0 and upsets of P correspond to the sets of their minimal elements, i.e. sets of two-by-two incomparable join-irreducible elements of L_0. The conditions D ∩ U = ∅ and ∀ d ∈ D, u ∈ U d ≤ u translate in the condition h < g for any g ∈ G.A signature of the second kind (h_1,h_2,g) in L_0 corresponds to a triple (D_1,D_2,g) in P as in <ref> of Lemma <ref>. Indeed, h_1,h_2 ∈ L_0 correspond to the downsets D_1,D_2 and g join-irreducible of L_0 is an element of P (recall that P is the poset of the join-irreducibles of L_0). The condition that D_1 ∪ D_2 =g ∖{ g } translates into h_1 ∨ h_2 = g^- since the predecessor g^- of g in L_0 corresponds to the downset g ∖{ g } of P. Therefore signatures inside a finite CBS L_0 are like `footprints' left by the minimal finite extensions of L_0: any minimal finite extension of L_0 leaves a `footprint' inside L_0 given by the corresponding signature. On the other hand, given a signature inside L_0 we can reconstruct a unique (up to isomorphism over L_0) minimal extension of L_0 corresponding to that signature. 
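For the first kind, the reconstruction on the dual side can be sketched concretely (an illustrative Python toy under my own naming, following the recipe in the proof of the lemma: glue a fresh point x strictly above the downset D and strictly below the upset U):

```python
def extend_first_kind(Q, le, D, U, x="x"):
    """Return P = Q ⊔ {x} with q < x for q in D and x < q for q in U,
    assuming D ∩ U = ∅ and every element of D lies below every element of U."""
    assert not (D & U) and all(le(d, u) for d in D for u in U)
    P = set(Q) | {x}
    def leP(a, b):
        if a == x and b == x:
            return True
        if a == x:
            return b in U
        if b == x:
            return a in D
        return le(a, b)
    return P, leP

# Q = {0 < 1} with D = {0}, U = {1}: x is squeezed strictly between 0 and 1.
P, leP = extend_first_kind({0, 1}, lambda a, b: a <= b, {0}, {1})
assert leP(0, "x") and leP("x", 1) and not leP(1, "x")
```

Forgetting the new point recovers Q, matching the minimal first-kind 𝐏-morphism that acts as the identity on its domain Q ⊂ P.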
Since, by Theorems <ref>, <ref> and <ref>, the minimal finite extensions of a finite CBS L_0 are exactly the ones generated over L_0 either by a primitive element or by a primitive couple, to any element or couple primitive over L_0 there is associated a unique signature in L_0. This is exactly what the next definition and theorem are about. Let L_0 be a finite CBS and L an extension of L_0. We say that a primitive element x ∈ L of the first kind over L_0 induces a signature of the first kind (h,G) in L_0 if for any a join-irreducible of L_0 we have that a < x iff a ≤ h and x < a iff g ≤ a for some g ∈ G. We say that a primitive couple (x_1,x_2) ∈ L^2 of the second kind over L_0 induces a signature of the second kind (h_1,h_2,g) in L_0 if g=x_1 ∨ x_2 and for any a join-irreducible of L_0 we have that a < x_i iff a ≤ h_i for i=1,2. Let L_0 be a finite CBS and L an extension of L_0. A primitive element x ∈ L induces a signature (h,G) iff the extension L_0 ⊆ L_0 ⟨ x ⟩ corresponds to that signature. A primitive couple (x_1,x_2) ∈ L^2 induces a signature (h_1,h_2,g) iff the extension L_0 ⊆ L_0 ⟨ x_1,x_2 ⟩ corresponds to that signature. For a primitive element x of the first kind over L_0, to induce a signature (h,G) means that h is the predecessor of x in L_0 ⟨ x ⟩ and G is the set of the join-irreducibles of L_0 which are minimal among the ones that are strictly greater than x in L_0 ⟨ x ⟩. This is the same as saying that the signature (h,G) is associated to the extension L_0 ⊆ L_0 ⟨ x ⟩. For a primitive couple (x_1,x_2) of the second kind over L_0, to induce a signature (h_1,h_2,g) means that h_i is the predecessor of x_i in L_0 ⟨ x_1,x_2 ⟩ for i=1,2. This is the same as saying that the signature (h_1,h_2,g) is associated to the extension L_0 ⊆ L_0 ⟨ x_1,x_2 ⟩. It is sufficient to translate Lemma <ref> into the language of CBSes. Notice that conditions <ref>, <ref>, <ref> of Definition <ref> already imply that g ≤ a iff x_i < a for i=1,2.
Indeed, if g ≤ a then x_i ≤ a, since x_i ≤ g by condition <ref> of Definition <ref>; thus x_i < a, since x_i ∉ L_0 by <ref> of Definition <ref>. On the other hand, if x_i < a then g ≤ a, because otherwise g-a=g, since g is a join-irreducible of L_0, and this is impossible because x_i is a join-irreducible component of g in the extension (in the proof of Theorem <ref> it is shown that g=x_1 ∨ x_2 and x_1,x_2 are distinct, incomparable join-irreducibles in the extension). We have thus finally obtained an intermediate characterization of existentially closed CBSes: A CBS L is existentially closed iff for any finite sub-CBS L_0 ⊆ L we have: * Any signature of the first kind in L_0 is induced by a primitive element x ∈ L of the first kind over L_0. * Any signature of the second kind in L_0 is induced by a primitive couple (x_1,x_2) ∈ L^2 of the second kind over L_0. By the characterization of the existentially closed CBSes given in Theorem <ref> we have that a CBS L is existentially closed iff for any finite sub-CBS L_0 and for any finite extension L_0' of L_0 we have that L_0' embeds into L fixing L_0 pointwise. Since any finite extension of L_0 can be decomposed into a chain of minimal extensions, we can restrict to the case in which L_0' is a minimal finite extension of L_0. Then the claim follows from Theorem <ref> and Theorem <ref>. Thanks to Theorem <ref> we already get an axiomatization for the class of the existentially closed CBSes; indeed, the quantification over the finite sub-CBS L_0 can be expressed elementarily using an infinite number of axioms. But this axiomatization is clearly unsatisfactory: besides being infinite, it is not conceptually clear.
§ AXIOMS
In this section we will prove that the existentially closed CBSes are exactly the ones satisfying the Splitting, Density 1 and Density 2 axioms. Each subsection focuses on one axiom. We will use extensively the characterization of existentially closed CBSes given by Theorem <ref>.
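The finite combinatorics used throughout this section can be sanity-checked mechanically. The sketch below is our own illustration (the three-point poset and all names are our choices, not from the text): it models a finite CBS as the lattice of downsets of a finite poset, with join given by union and the difference a-b given by the down-closure of the set difference, and verifies the identities c-(a∨b)=(c-a)-b, (a∨b)-c=(a-c)∨(b-c), c-0=c and 0-c=0 on all downsets of a small poset.

```python
from itertools import chain, combinations

# A small poset on {0, 1, 2} with 0 < 1 and 0 < 2, given as an order relation.
ELEMS = range(3)
leq = {(x, x) for x in ELEMS} | {(0, 1), (0, 2)}

def down(s):
    # Down-closure of a subset: everything below some element of s.
    return frozenset(x for x in ELEMS if any((x, y) in leq for y in s))

def downsets():
    subsets = chain.from_iterable(combinations(list(ELEMS), r) for r in range(4))
    return [frozenset(s) for s in subsets if down(frozenset(s)) == frozenset(s)]

def diff(a, b):
    # Co-Heyting difference a - b: the least downset c with a <= b | c,
    # i.e. the down-closure of the set difference.
    return down(a - b)

ds = downsets()
zero = frozenset()
for a in ds:
    for b in ds:
        assert diff(a, zero) == a and diff(zero, a) == zero
        for c in ds:
            assert diff(c, a | b) == diff(diff(c, a), b)
            assert diff(a | b, c) == diff(a, c) | diff(b, c)
print(len(ds), "downsets; all identities verified")
```

Only ∨, − and 0 are used here, matching the operations appearing in the identities of the text; join-irreducible elements of this model are exactly the principal downsets ↓p.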
To show the validity of the axioms in any existentially closed CBS we will use the following lemma. Let θ(x) and ϕ(x,y) be quantifier-free formulas in the language of CBSes. Assume that for every finite CBS L_0 and every tuple a of elements of L_0 such that L_0 ⊨ θ(a), there exists an extension L_1 of L_0 which satisfies ∃yϕ(a,y). Then every existentially closed CBS satisfies the following sentence: ∀x ( θ (x) ⟶∃yϕ(x,y)). Let L be an existentially closed CBS. Let a=(a_1, …,a_n) ∈ L^n be such that L ⊨ θ(a). Let L_0 be the sub-CBS of L generated by a_1, …, a_n; by local finiteness L_0 is finite. By hypothesis there exist an extension L_1 of L_0 and b=(b_1, …,b_m) ∈ L_1^m such that L_1 ⊨ ϕ(a,b). Denote by L_0' the sub-CBS of L_1 generated by b_1, …,b_m over L_0; it is a finite extension of L_0. By Theorem <ref> L_0' embeds into L fixing L_0 pointwise. We thus get L ⊨ ϕ(a,b'), where b'=(b_1', …,b_m') ∈ L^m are the images of b_1, …,b_m by the embedding. Therefore we have proved that: L ⊨ ∀x ( θ (x) ⟶∃yϕ(x,y)).
§.§ Splitting axiom
[Splitting Axiom] For every a,b_1,b_2 such that b_1 ∨ b_2 ≪ a ≠ 0 there exist elements a_1 and a_2 different from 0 such that: a-a_1= a_2 ≥ b_2, a-a_2= a_1 ≥ b_1, b_2-a_1=b_2-b_1, b_1-a_2=b_1-b_2. Any existentially closed CBS satisfies the Splitting Axiom. It is sufficient to show, by Lemma <ref>, that for any finite CBS L_0 and a,b_1,b_2 ∈ L_0 such that b_1 ∨ b_2 ≪ a ≠ 0 there exists a finite extension L_0 ⊆ L with a_1,a_2 ∈ L different from 0 such that: a-a_1= a_2 ≥ b_2, a-a_2= a_1 ≥ b_1, b_2-a_1=b_2-b_1, b_1-a_2=b_1-b_2. Let Q be the poset dual to L_0 and A, B_1,B_2 its downsets corresponding to a, b_1,b_2. We obtain a surjective 𝐏-morphism π :P → Q in the following way. For any x ∈ Q such that x ∉ B_2 (respectively x ∉ B_1) let ξ_x,1 (respectively ξ_x,2) be a new symbol. For any x ∈ Q such that x ∈ B_1 ∩ B_2 let ξ_x,0 be a new symbol. Let P be the set of all these symbols; we define an order on P by setting: ξ_y,j≤ξ_x,i ⇔ y ≤ x and { i,j }≠{ 1,2 }. Intuitively P is composed of a copy of B_1
∪ B_2 and two copies of Q \ (B_1 ∪ B_2); one of the two copies is placed over B_1 and the other over B_2. We define π :P → Q setting dom π =P and π(ξ_x,i)=x. Let a_1, …,a_r be the join-irreducible components of A; for any i we have a_i ∉ B_1 ∪ B_2, because by hypothesis B_1 ∪ B_2 ≪ A. Therefore π^-1(↓a_i)= ↓ξ_a_i,1 ∪ ↓ξ_a_i,2. We take: A_1= ⋃_i=1^r ↓ξ_a_i,1 and A_2= ⋃_i=1^r ↓ξ_a_i,2. We obtain π^-1(A)-A_1=A_2 and π^-1(A)-A_2=A_1, and they are both non-empty, because r ≥ 1 and A is non-empty. Furthermore, for any x ∈ B_1 ∪ B_2 we have that x ≤ a_i for some i. Therefore if x ∈ B_1 \ B_2 we have ξ_x,1≤ξ_a_i,1, if x ∈ B_2 \ B_1 we have ξ_x,2≤ξ_a_i,2, and finally if x ∈ B_1 ∩ B_2 then ξ_x,0≤ξ_a_i,1 and ξ_x,0≤ξ_a_i,2. This implies that π^-1(B_1) ⊆ A_1 and π^-1(B_2) ⊆ A_2. We now show that A_1 ∩ A_2 = π^-1 (B_1) ∩π^-1(B_2). Let ξ∈ P; we show that ξ∈ A_1 ∩ A_2 iff ξ∈π^-1 (B_1) ∩π^-1(B_2). If ξ∈π^-1 (B_1) ∩π^-1(B_2) then π(ξ) ∈ B_1 ∩ B_2, therefore ξ=ξ_x,0 and x ≤ a_i for some i. It implies that ξ_x,0≤ξ_a_i,1, thus ξ_x,0∈ A_1, and ξ_x,0≤ξ_a_i,2, therefore ξ_x,0∈ A_2 and ξ∈ A_1 ∩ A_2. On the other hand, if ξ∈ A_1 ∩ A_2 then there exist i,j such that ξ≤ξ_a_i,1 and ξ≤ξ_a_j,2.
By definition of the order on P it must be that ξ=ξ_x,0 with x ∈ B_1 ∩ B_2, therefore ξ∈π^-1 (B_1) ∩π^-1(B_2). Then π^-1 (B_1) ∩π^-1(B_2) ⊆ A_1 ∩π^-1 (B_2) ⊆ A_1 ∩ A_2 = π^-1 (B_1) ∩π^-1(B_2). Therefore π^-1 (B_2)-A_1= π^-1 (B_2)-(A_1 ∩π^-1 (B_2))=π^-1 (B_2)-(π^-1 (B_1) ∩π^-1 (B_2))=π^-1 (B_2)-π^-1 (B_1). Analogously we can show π^-1 (B_1)-A_2=π^-1 (B_1)-π^-1 (B_2). Thus, taking the embedding L_0 ↪ L dual to π and a_1,a_2 ∈ L corresponding to A_1,A_2, we have obtained what we were looking for. If L is a CBS generated by a finite subset X then any join-irreducible element of L is a join-irreducible component in L of some element of X. In any CBS the following identities hold: c-(a ∨ b)= (c-a)-b, (a ∨ b)-c= (a-c) ∨ (b-c), c-0=c, 0-c=0. It follows by an easy induction that any term in the language of CBSes is equivalent to a term of the form x_1 ∨⋯∨ x_n with x_1, …, x_n containing only the difference symbol and variables. Notice that if an element x_1 ∨⋯∨ x_m with x_1, …, x_m ∈ L is join-irreducible then it coincides with x_i for some i=1, …,m; thus any join-irreducible element g of L is the interpretation of a term t over the variables X containing only the difference symbol. This implies that g is the join of some join-irreducible components of the leftmost variable in t. Indeed, this can be proved by induction on the complexity of the term, observing that if c_1, …, c_m are the join-irreducible components of an element c ∈ L then for any b ∈ L: c-b=(c_1 ∨⋯∨ c_m)-b=(c_1-b)∨⋯∨ (c_m-b)= ⋁_c_i ≰ b c_i, because c_i-b=0 or c_i-b=c_i, respectively, when c_i ≤ b or c_i ≰ b, since the c_i's are join-irreducibles. Thus g is the join of the join-irreducible components of some x ∈ X; so, since it is join-irreducible, it is a join-irreducible component of x. Lemma <ref> is not true for co-Heyting algebras. Indeed, consider the inclusion L_0 ↪ L_1 of co-Heyting algebras described by Figure <ref>.
L_1 is generated by L_0 and a, but b=a ∧ (1-a) is join-irreducible in L_1 and it is not a join-irreducible component of any element of L_0 or of a. Notice that the sub-CBS of L_1 generated by L_0 and a does not contain b, as shown in Figure <ref>. Let L_0 be a finite sub-CBS of L and let L be generated by L_0 and a_1, …, a_n ∈ L. If a_1, …, a_n are joins of join-irreducible components in L of elements of L_0, then the surjective 𝐏-morphism φ: P → Q dual to the inclusion L_0 ↪ L is such that dom φ =P. In particular, the inclusion is also a co-Heyting algebra morphism, i.e. it preserves meets and 1. For instance, this happens when all the a_i's are of the kind a_i=b_i-c_i with b_i ∈ L_0 and c_i ∈ L. Indeed, in such a case the join-irreducible components in L of the a_i's are among those of the b_i's. By Lemma <ref> all the join-irreducible elements of L are join-irreducible components in L of elements of L_0 or of a_1, …, a_n. Since, by hypothesis, a_1, …, a_n are joins of join-irreducible components of elements of L_0, any join-irreducible component of a_i is a join-irreducible component of a join-irreducible component of an element of L_0, and thus it is a join-irreducible component of an element of L_0. Therefore any join-irreducible element of L is a join-irreducible component in L of an element of L_0. Suppose that there is x ∈ P such that x ∉dom φ; then x corresponds to a join-irreducible element of L which is not a join-irreducible component of any element of L_0. Indeed, if it were a join-irreducible component of a ∈ L_0 then x ∈ P would be a maximal element of the downset ↓φ^-1 (A), where A ⊆ Q is the downset relative to a; but this is not possible, since if x ∈ ↓φ^-1 (A) then x would be less than or equal to an element in φ^-1 (A) ⊆dom φ different from x, which is absurd because x is maximal in ↓φ^-1 (A).
Therefore dom φ=P, because the existence of an element x ∉dom φ would imply the existence of a join-irreducible element of L which is not a join-irreducible component in L of any element of L_0, and this contradicts what we have proven in the first part of this proof. Let L be a CBS and L_0 a finite sub-CBS of L, let g be join-irreducible in L_0 and let y_1,y_2 ∈ L be nonzero elements such that g-y_1= y_2 and g-y_2= y_1. Let also L_0 ⟨ y_1,y_2 ⟩ be the sub-CBS of L generated by L_0 and { y_1,y_2 }. We have that: * g=y_1 ∨ y_2, * any join-irreducible a of L_0 such that a ≰ g is still join-irreducible in L_0 ⟨ y_1,y_2 ⟩, * y_1,y_2 are distinct, not in L_0, and they are the join-irreducible components of g in L_0 ⟨ y_1,y_2 ⟩. Notice that y_1 ∨ y_2=g, because y_1 ≤ g and y_2 ≤ g and g-(y_1 ∨ y_2)=(g-y_1)-y_2=y_2-y_2=0. Furthermore y_1,y_2 ∉ L_0. Indeed, suppose that y_1 ∈ L_0; then y_2=g-y_1 ∈ L_0 and, since g is join-irreducible in L_0 and g=y_1 ∨ y_2, we have that g=y_1 or g=y_2, from which it follows respectively that y_2=0 or y_1=0; in both cases we have a contradiction, because y_1,y_2 ≠ 0. Similarly, we obtain that y_2 ∉ L_0. We also have that y_1 ≠ y_2. Indeed, suppose y_1=y_2; then g-y_1=y_1 implies that g=y_1=0, and this is absurd. We now show that any join-irreducible a of L_0 such that a ≰ g is still join-irreducible in L_0 ⟨ y_1,y_2 ⟩. Any element of L_0 ⟨ y_1,y_2 ⟩ is the join of repeated differences of y_1,y_2 and of join-irreducibles of L_0 different from g; this is implied by the identities (<ref>), as noted in the proof of Lemma <ref>. It is sufficient to show that for any x obtained as repeated differences of y_1,y_2 and join-irreducibles of L_0 different from g we have a-x=a or a-x=0. This will ensure that a is join-irreducible in L_0 ⟨ y_1,y_2 ⟩. Since x is obtained as repeated differences of y_1,y_2 and join-irreducibles a_1, …, a_n of L_0 different from g, there is a term t in the language of CBSes containing only - and variables expressing x as t(y_1,y_2,a_1, …,a_n).
We prove that a-x=a or a-x=0 by induction on the length of t. If the length is 1, then x ∈{ y_1, y_2, a_1, …, a_n }. If x ∈ L_0 then a-x=a or a-x=0, since a is a join-irreducible of L_0. Moreover a-y_i=a for i=1,2. Indeed, a ≥ a-y_i ≥ a-g=a, because a is a join-irreducible of L_0 such that a ≰ g. Suppose the length of t is greater than 1. If the leftmost element among the ones whose differences give x is y_i for i=1,2, then a-x=a, because a=a-y_i ≤ a-x ≤ a.[Notice the following fact: if a term t in the language of CBSes only contains - (and variables) and z is the leftmost variable in t, then the inequality t ≤ z is valid in every CBS (this is established by an easy induction on the length of t).] Suppose now that the leftmost element among the ones whose differences give x is b, a join-irreducible of L_0 different from g. If b < g then, since a ≰ g yields a ≰ b and thus a=a-b by join-irreducibility of a, we obtain that a=a-b ≤ a-x ≤ a. If b ≰ g we can obtain x with a smaller number of differences, because we can apply the induction hypothesis to replace its subterm of the kind b-c with b or 0 (with b and c playing respectively the role previously played by a and x) and then apply the inductive hypothesis again, because x can be expressed by a term shorter than t. Now we prove that y_1,y_2 are join-irreducibles of L_0 ⟨ y_1,y_2 ⟩. We show it for y_1; for y_2 it is analogous. Again, we show that for all x ∈ L_0 ⟨ y_1,y_2 ⟩ we have y_1-x=y_1 or y_1-x=0.
Let x be obtained as repeated differences of y_1,y_2 and join-irreducibles of L_0 different from g, as above, and let us proceed by induction on the number of such differences. First of all, y_1-y_2=(g-y_2)-y_2=g-y_2=y_1. Let x ∈ L_0. If g ≤ x then y_1-x ≤ y_1-g=0; if g ≰ x, namely g-x=g (recall that g is join-irreducible in L_0), then y_1-x=(g-y_2)-x=(g-x)-y_2=g-y_2=y_1. If the leftmost element among the ones whose differences give x is y_i for i=1,2 then, applying the inductive hypothesis, possibly many times, we obtain that x=0 or x=y_i, and in either case y_1-x=0 or y_1-x=y_1. Suppose the leftmost element among the ones whose differences give x is b, a join-irreducible of L_0 different from g. If b < g then y_1=g-y_2=(g-b)-y_2=y_1-b ≤ y_1 and y_1=y_1-b ≤ y_1-x ≤ y_1.[Recall footnote <ref>.] If b ≰ g (and we cannot have b=g, because g is not join-irreducible in the extension), then using what we have proved above (that any join-irreducible b of L_0 such that b ≰ g is still join-irreducible in L_0 ⟨ y_1,y_2 ⟩) we obtain x=b or x=0, and in either case y_1-x=0 or y_1-x=y_1. Finally, to prove that y_1,y_2 are the join-irreducible components of g in L_0 ⟨ y_1,y_2 ⟩, we simply have to notice that y_1 ≰ y_2 and y_2 ≰ y_1. Just observe that if y_1 ≤ y_2 then g=y_1 ∨ y_2 = y_2 ∉ L_0, which is absurd. Analogously it cannot be y_2 ≤ y_1. Let L be a CBS satisfying the Splitting Axiom. Then for any finite sub-CBS L_0 ⊆ L and for any signature (h_1,h_2,g) of the second kind in L_0 there exists a primitive couple (x_1,x_2) ∈ L^2 of the second kind over L_0 inducing such signature. We follow this strategy: we use the Splitting Axiom to `split' g, obtaining two elements, one over h_1 and another over h_2. When h_1=h_2 these two elements form a primitive couple that induces the signature (h_1,h_2,g), but unfortunately this is not true in general, because these two elements may be too big. So we may have to `split' these two new elements too, in order to obtain other elements which we may have to `split' again and again.
This process has to stop after a finite number of steps: intuitively, the smaller the element h_1 ∧ h_2 (the meet is taken inside L_0) is compared with h_1 and h_2, the longer the process lasts. Then we carefully partition the set of all these `shards' into two disjoint subsets and we take the joins of these two subsets. In this way we obtain two elements that form a primitive couple inducing the signature (h_1,h_2,g), and we are done. The statement of the Theorem requires that, according to the definition of primitive couple inducing a given signature, we do the following. Given h_1,h_2 ∈ L_0 and g join-irreducible of L_0 such that h_1 ∨ h_2 = g^-, we have to find x_1, x_2 ∈ L such that: * x_1 ≠ x_2 and x_1,x_2 ∉ L_0, * g-x_1=x_2 and g-x_2=x_1, and for any a join-irreducible of L_0: * if a <g then a-x_i ∈ L_0 for i=1,2, * a < x_i iff a ≤ h_i for i=1,2. We recall that L_0 is a co-Heyting algebra, because it is finite. In particular we can consider meets inside L_0 and they distribute over the joins. Let n_i, for i=1,2, be the maximum length of chains of join-irreducible elements of L_0 k_1 < k_2 < ⋯ <k_n_i such that k_n_i≤ h_i and k_1 ≰ h_j with i ≠ j, or equivalently k_1 ≰ h_1 ∧ h_2, where the meet is taken inside L_0. Let n=n_1+n_2. Intuitively, the natural number n measures how much h_1 ∧ h_2 is smaller than h_1 and h_2. We prove the claim by induction on n. Case 1: n=0. Then h_1 ∧ h_2=h_1=h_2=g^-.
We denote h_1=h_2 by h. Since h ≪ g, we can apply the Splitting Axiom to g,h,h; hence there exist elements x_1,x_2 ∈ L different from 0 such that: g-x_1= x_2 ≥ h and g-x_2= x_1 ≥ h. We now show that (x_1,x_2) is a primitive couple of the second kind and induces the signature (h_1,h_2,g): * As shown in Lemma <ref>, we have that x_1 ≠ x_2 and x_1,x_2 ∉ L_0. * g-x_1=x_2 and g-x_2=x_1 follow directly from the Splitting Axiom, see (<ref>). Let a be a join-irreducible element of L_0; then for i=1,2: * If a < g then a ≤ g^-= h, thus a-x_i=0, because h ≤ x_i as a consequence of the Splitting Axiom, see (<ref>). * If a < x_i then a < g, because x_i < g by (<ref>), and therefore a ≤ g^- =h. If a ≤ h then, since x_i ∉ L_0 and h ≤ x_i, we have a < x_i. Case 2: n>0. Suppose that the claim is true for any m<n. Since h_1 ∨ h_2 =g^- ≪ g, we can apply the Splitting Axiom to g,h_1,h_2; hence there exist elements y_1,y_2 ∈ L different from 0 such that: g-y_1= y_2 ≥ h_2, g-y_2= y_1 ≥ h_1, h_2-y_1=h_2-h_1, h_1-y_2=h_1-h_2. Let L_0 ⟨ y_1,y_2 ⟩ be the sub-CBS of L generated by L_0 and { y_1,y_2 }. By local finiteness L_0 ⟨ y_1,y_2 ⟩ is finite and thus a co-Heyting algebra. Before continuing the proof we show a series of claims. The two following triples are signatures of the second kind in L_0 ⟨ y_1,y_2 ⟩: (h_1, h_2 ∧ y_1, y_1) and (h_1 ∧ y_2, h_2, y_2), where the meets are taken inside L_0 ⟨ y_1,y_2 ⟩. By Lemma <ref> y_1,y_2 ∉ L_0 are join-irreducibles in L_0 ⟨ y_1,y_2 ⟩. Moreover, in L_0 ⟨ y_1,y_2 ⟩ we have that: h_1 ∨ (h_2 ∧ y_1)=y_1^- and (h_1 ∧ y_2) ∨ h_2=y_2^-. Indeed, h_1 ∨ (h_2 ∧ y_1)=(h_1 ∨ h_2) ∧ (h_1 ∨ y_1)=(h_1 ∨ h_2) ∧ y_1, and we have that this coincides with y_1^-, the predecessor of y_1 in L_0 ⟨ y_1,y_2 ⟩. To show this, observe that, as a consequence of Lemma <ref>, the inclusion L_0 ↪ L_0 ⟨ y_1,y_2 ⟩ is dual to a surjective 𝐏-morphism φ:P → Q with dom φ= P. Recall that Q and P are the posets of the join-irreducibles respectively of L_0 and L_0 ⟨ y_1,y_2 ⟩. Notice that the preimage of an element q of Q, i.e.
a join-irreducible element of L_0, consists of the join-irreducible components of such an element inside L_0 ⟨ y_1,y_2 ⟩. This is a consequence of the fact that ↓φ^-1(q) ⊆ P corresponds to q as an element of L_0 ⟨ y_1,y_2 ⟩ and the set of maximal elements of ↓φ^-1(q) is exactly the preimage of q, because φ preserves the strict order. Then φ^-1(g)= { y_1, y_2 }, because y_1,y_2 are the join-irreducible components of g in L_0 ⟨ y_1,y_2 ⟩ by Lemma <ref>. Notice that for any downset D ⊆ Q we have ↓φ^-1(D)=φ^-1(D), because dom φ= P and φ^-1(D) is a downset in the domain of φ. Since φ^-1(↓g)=↓φ^-1(g)= ↓y_1 ∪ ↓y_2, we have: φ^-1(↓g ∖{ g })=φ^-1(↓g) ∖ φ^-1({ g })=(↓y_1 ∪ ↓y_2) ∖{ y_1,y_2 }. Thus (h_1 ∨ h_2)∧ y_1=g^- ∧ y_1 is equal to y_1^-, because of the following equations: ((↓y_1 ∪ ↓y_2) ∖{ y_1,y_2 }) ∩ ↓y_1 =↓y_1 ∖{ y_1,y_2 }=↓y_1 ∖{ y_1 }. To prove that (h_1 ∧ y_2) ∨ h_2 =y_2^- the reasoning is analogous. Notice that h_1 ∧ h_2 is the same taken in L_0 and in L_0 ⟨ y_1,y_2 ⟩, because by Lemma <ref> the inclusion L_0 ↪ L_0 ⟨ y_1,y_2 ⟩ preserves meets. For i=1,2 the maximum length of chains of join-irreducibles of L_0 ⟨ y_1,y_2 ⟩ less than or equal to h_i that are not less than or equal to h_1 ∧ h_2 is the same as n_i defined above (recall that n_i is defined taking the join-irreducibles of L_0). Indeed, suppose there exists a chain of join-irreducibles in L_0 ⟨ y_1,y_2 ⟩ k_1 < k_2 < ⋯ <k_r such that k_r ≤ h_i and k_1 ≰ h_1 ∧ h_2. By Lemma <ref> there exist b_1, …, b_r join-irreducibles in L_0 such that k_t is a join-irreducible component of b_t in L_0 ⟨ y_1,y_2 ⟩ for t=1, …,r. So b_r -h_i ≠ b_r, because k_r ≤ b_r but k_r ≰ b_r -h_i. Indeed, b_r -h_i is the join of the join-irreducible components of b_r that are not less than or equal to h_i, but k_r is a join-irreducible component of b_r such that k_r ≤ h_i. Since b_r is join-irreducible of L_0 and b_r-h_i ≠ b_r, it follows that b_r ≤ h_i.
Similarly b_t ≤ b_t+1, because b_t-b_t+1≠ b_t, and b_1 ≰ h_1 ∧ h_2, since k_1 ≰ h_1 ∧ h_2; we would thus obtain a chain of join-irreducibles in L_0 b_1 < b_2 < ⋯ <b_r. On the other hand, suppose given a chain of join-irreducibles in L_0 b_1 < b_2 < ⋯ <b_r such that b_r ≤ h_i and b_1 ≰ h_1 ∧ h_2. We can take k_t a join-irreducible component of b_t inside L_0 ⟨ y_1,y_2 ⟩ such that k_1 ≰ h_1 ∧ h_2 and k_t < k_t+1, because k_t is join-irreducible and b_t+1 is the join of its join-irreducible components. Clearly we obtain a chain of join-irreducibles in L_0 ⟨ y_1,y_2 ⟩ k_1 < k_2 < ⋯ <k_r such that k_r ≤ h_i and k_1 ≰ h_1 ∧ h_2. Suppose there exists a chain of join-irreducibles in L_0 ⟨ y_1,y_2 ⟩ k_1 < k_2 < ⋯ <k_r such that k_r ≤ h_i and k_1 ≰ h_1 ∧ h_2. Let, as above, φ:P → Q be the surjective total 𝐏-morphism dual to the inclusion L_0 ↪ L_0 ⟨ y_1,y_2 ⟩; then φ(k_1) < φ(k_2) < ⋯ < φ(k_r) is a chain of join-irreducibles in L_0 such that φ(k_r) ≤ h_i and φ(k_1) ≰ h_1 ∧ h_2.[We recall that 𝐏-morphisms preserve the strict order.] On the other hand, a chain of join-irreducibles in L_0 b_1 < b_2 < ⋯ <b_r such that b_r ≤ h_i and b_1 ≰ h_1 ∧ h_2 can be lifted to a chain of join-irreducibles of L_0 ⟨ y_1,y_2 ⟩ k_1 < k_2 < ⋯ <k_r such that φ(k_s)=b_s for s=1, …, r, using the fact that φ is a surjective 𝐏-morphism; we obtain that k_r ≤ h_i and k_1 ≰ h_1 ∧ h_2. If h_2 ≰ h_1 then the maximum length of chains of join-irreducibles of L_0 ⟨ y_1,y_2 ⟩ less than or equal to h_2 ∧ y_1 that are not less than or equal to h_1 ∧ h_2= h_1 ∧ (h_2 ∧ y_1) is strictly smaller than n_2 (notice that n_2 ≠ 0, because h_2 ≰ h_1). When h_1 ≰ h_2, in the same way we obtain the analogous result, switching y_1 with y_2 and h_1 with h_2. Indeed, suppose there exists a chain of join-irreducibles in L_0 ⟨ y_1,y_2 ⟩ k_1 < k_2 < ⋯ <k_n_2 such that k_n_2≤ h_2 ∧ y_1 and k_1 ≰ h_1 ∧ h_2.
By Lemma <ref> there exist b_1, …, b_n_2 join-irreducibles in L_0 such that k_t is a join-irreducible component of b_t in L_0 ⟨ y_1,y_2 ⟩; this would imply, as before, that b_n_2≤ h_2 (because b_n_2 -h_2 ≠ b_n_2 and b_n_2 is join-irreducible of L_0) and, similarly, b_t ≤ b_t+1, because b_t-b_t+1≠ b_t, and b_1 ≰ h_1 ∧ h_2, since k_1 ≰ h_1 ∧ h_2; we would obtain a chain of join-irreducibles in L_0 b_1 < b_2 < ⋯ <b_n_2 with b_n_2 not a join-irreducible component of h_2. Indeed, suppose b_n_2 is a join-irreducible component of h_2. Since k_n_2 is a join-irreducible component of b_n_2 in L_0 ⟨ y_1,y_2 ⟩, we would obtain that k_n_2 is a join-irreducible component of h_2 in L_0 ⟨ y_1,y_2 ⟩; but k_n_2≤ y_1, and y_1 is not greater than or equal to any join-irreducible component of h_2 which is not less than or equal to h_1 ∧ h_2, because h_2 - y_1=h_2-h_1=h_2-(h_1 ∧ h_2). We have used the fact that if L_0 ⊆ L_1 is an extension of finite CBSes, a,b ∈ L_0 are such that b is a join-irreducible component of a in L_0, and c ∈ L_1 is a join-irreducible component of b in L_1, then c is also a join-irreducible component of a in L_1. We have taken a=h_2, b=b_n_2 and c=k_n_2. Since b_n_2 is join-irreducible in L_0 and b_n_2≤ h_2, there would exist a continuation of the chain given by b_n_2+1, a join-irreducible component of h_2 in L_0; but this is absurd, because n_2 is the maximum length of such chains. Suppose there exists a chain of join-irreducibles in L_0 ⟨ y_1,y_2 ⟩ k_1 < k_2 < ⋯ <k_n_2 such that k_n_2≤ h_2 ∧ y_1 and k_1 ≰ h_1 ∧ h_2. Notice that this chain is not empty, because n_2 ≠ 0. We have that k_n_2 is not a join-irreducible component of h_2 in L_0 ⟨ y_1,y_2 ⟩. Indeed, k_n_2≤ y_1, and y_1 is not greater than or equal to any join-irreducible component of h_2 which is not less than or equal to h_1 ∧ h_2, because h_2 - y_1=h_2-h_1=h_2-(h_1 ∧ h_2).
Thus there would exist a continuation of such a chain given by k_n_2+1, a join-irreducible component of h_2 in L_0 ⟨ y_1,y_2 ⟩; but this is absurd, because we have proved in Claim 2 that n_2 is the maximum length of such chains. We can now apply the inductive hypothesis; to do so we shall consider different cases. Subcase 2.1: h_1,h_2 incomparable. First, we consider the case in which h_1 ≰ h_2 and h_2 ≰ h_1, i.e. h_1,h_2 are incomparable. What we have proved in Claim 3 implies that the sum of the lengths of the chains considered above for either of the two signatures (<ref>) is strictly smaller than n. Therefore we can apply the inductive hypothesis to both signatures (<ref>) considered inside L_0 ⟨ y_1,y_2 ⟩, obtaining two primitive couples (y_11, y_12) ∈ L^2 and (y_21, y_22) ∈ L^2 of the second kind over L_0 ⟨ y_1,y_2 ⟩ inducing respectively the signatures (h_1, h_2 ∧ y_1, y_1) and (h_1 ∧ y_2,h_2, y_2). This means that: * y_11≠ y_12 and y_11,y_12∉ L_0 ⟨ y_1,y_2 ⟩, * y_1- y_11=y_12 and y_1- y_12=y_11, and for any a join-irreducible of L_0 ⟨ y_1,y_2 ⟩: * if a<y_1 then a-y_1i∈ L_0 ⟨ y_1,y_2 ⟩ for i=1,2, * a < y_11 iff a ≤ h_1 and a < y_12 iff a ≤ (h_2 ∧ y_1); furthermore * y_21≠ y_22 and y_21,y_22∉ L_0 ⟨ y_1,y_2 ⟩, * y_2- y_21=y_22 and y_2- y_22=y_21, and for any a join-irreducible of L_0 ⟨ y_1,y_2 ⟩: * if a<y_2 then a-y_2i∈ L_0 ⟨ y_1,y_2 ⟩ for i=1,2, * a < y_21 iff a ≤ (h_1 ∧ y_2) and a < y_22 iff a ≤ h_2. Notice that properties <ref> of y_11, y_12 and <ref> of y_21, y_22 actually hold for any a ∈ L_0 ⟨ y_1,y_2 ⟩, since any element in a finite CBS is the join of join-irreducible elements. Observe also that for a ∈ L_0 we have a ≤ y_ij iff a < y_ij, because y_ij∉ L_0. We want to prove that x_1=y_11∨ y_21 and x_2=y_12∨ y_22 are the two elements of L we are looking for, i.e.
(x_1,x_2) is a primitive couple of the second kind over L_0 inducing the signature (h_1,h_2,g). First of all, we observe that y_1-y_2i=y_1 and y_2-y_1i=y_2 for i=1,2. Indeed, y_1=g-y_2=(g-y_2)-y_2=y_1-y_2 ≤ y_1-y_2i≤ y_1; the second equation is shown analogously. (<ref>) also shows that y_1-y_2=y_1 and y_2-y_1=y_2. Moreover y_1i-y_2j=y_1i and y_2i-y_1j=y_2i for i,j=1,2. Indeed, y_11=y_1-y_12=(y_1-y_2)-y_12=(y_1-y_12)-y_2=y_11-y_2 ≤ y_11 - y_21≤ y_11, and thus y_11 - y_21=y_11; the remaining cases are analogous. Notice the following fact about the extensions generated by the y_ij's: The two extensions of finite CBSes given by L_0 ⟨ y_1,y_2 ⟩⊆ L_0 ⟨ y_1,y_2,y_11,y_12⟩⊆ L_0 ⟨ y_ij| i,j=1,2 ⟩ are both minimal of the second kind. This implies that any b join-irreducible of L_0 ⟨ y_1,y_2 ⟩ different from y_1,y_2 is still join-irreducible in L_0 ⟨ y_ij| i,j=1,2 ⟩. It suffices to prove that (y_21, y_22) is a primitive couple of the second kind over L_0 ⟨ y_1,y_2, y_11, y_12⟩. First of all, as a consequence of Lemma <ref>, y_1,y_2 are join-irreducible in L_0 ⟨ y_1,y_2 ⟩, thus y_2 is join-irreducible in L_0 ⟨ y_1,y_2, y_11, y_12⟩. * y_21≠ y_22 by property <ref> of y_21, y_22. Moreover y_21,y_22∈ L ∖ L_0 ⟨ y_1,y_2, y_11, y_12⟩. Indeed, if y_21∈ L_0 ⟨ y_1,y_2, y_11, y_12⟩ then y_22=y_2-y_21∈ L_0 ⟨ y_1,y_2, y_11, y_12⟩, and vice versa. In that case y_2=y_21∨ y_22∈ L_0 ⟨ y_1,y_2, y_11, y_12⟩ with y_21, y_22≠ y_2, because they are not in L_0 ⟨ y_1,y_2 ⟩; but this is absurd, because y_2 is join-irreducible. * y_2-y_21=y_22 and y_2-y_22=y_21 by property <ref> of y_21, y_22. * Since L_0 ⟨ y_1,y_2, y_11, y_12⟩ is a minimal finite extension of L_0 ⟨ y_1,y_2 ⟩, its join-irreducibles are y_11, y_12 and the join-irreducibles of L_0 ⟨ y_1,y_2 ⟩ except y_1.
If a is a join-irreducible of L_0 ⟨ y_1,y_2, y_11, y_12⟩ such that a < y_2, then a is join-irreducible in L_0 ⟨ y_1,y_2 ⟩, because a ≠ y_11, y_12 since y_11, y_12≮ y_2: indeed y_1i-y_2=y_1i-(y_21∨ y_22)=(y_1i-y_21)-y_22=y_1i≠ 0 by (<ref>). Thus a-y_2i∈ L_0 ⟨ y_1,y_2 ⟩ by property <ref> of y_21, y_22. Moreover, we observe that g-x_1=x_2 and g-x_2=x_1, because thanks to equations (<ref>) we have: g-x_1=(y_1 ∨ y_2)-(y_11∨ y_21)=((y_1-y_21)-y_11) ∨ ((y_2 - y_11)-y_21) =(y_1-y_11) ∨ (y_2-y_21)=y_12∨ y_22=x_2; showing the second equation of (<ref>) is analogous. We are now ready to show that (x_1,x_2) is a primitive couple of the second kind over L_0 inducing the signature (h_1,h_2,g). * Equations (<ref>) imply that g=x_1 ∨ x_2; thus if x_1=x_2 then x_1=g=0, but this is absurd, because x_1,x_2 ≠ 0, since y_11, y_12, y_21, y_22≠ 0, as they are not in L_0. Furthermore x_1,x_2 ∉ L_0; this is because g is join-irreducible in L_0 and g-x_1=x_2 and g-x_2=x_1 are different from 0 and g. * See equations (<ref>). Let now a be a join-irreducible element of L_0 and i=1,2: * If a < g then a ≤ g^-=h_1 ∨ h_2 and hence, since a is join-irreducible, a ≤ h_1 or a ≤ h_2. * If a ≤ h_1 then a-x_1=0, because h_1 ≤ y_11≤ x_1 by property <ref> of y_11. * If a ≤ h_2 and a ≰ h_1, we want to prove that a-x_1=a. The join-irreducible components of a in L_0 ⟨ y_1,y_2 ⟩ coincide with the join-irreducible components of a in L_0 ⟨ y_ij| i,j=1,2 ⟩. Since a is the join of its join-irreducible components in L_0 ⟨ y_1,y_2 ⟩, it is sufficient to prove that any join-irreducible component b of a in L_0 ⟨ y_1,y_2 ⟩ is join-irreducible in L_0 ⟨ y_ij| i,j=1,2 ⟩. We have b ≠ y_1,y_2, because b ≤ h_2 and y_1,y_2 ≰ h_2: if y_i ≤ h_2 then 0=y_i -h_2=(g-y_j)-h_2=(g-h_2)-y_j=g-y_j=y_i with i ≠ j, which is absurd. Thus by Claim 4 we have that b is also join-irreducible in L_0 ⟨ y_ij| i,j=1,2 ⟩. Since a is join-irreducible of L_0 and a ≰ h_1, it is a-h_1=a.
For any b join-irreducible component of a in L_0 ⟨ y_1,y_2 ⟩ we have b ≰ h_1, because a-h_1=a means that h_1 is not greater than or equal to any join-irreducible component of a. Since b ≰ h_1, and in particular b ≰ h_1 ∧ y_2, property <ref> of y_11 and property <ref> of y_21 imply that b ≰ y_11, y_21. Therefore b ≰ y_11∨ y_21=x_1, because b is join-irreducible in L_0 ⟨ y_ij| i,j=1,2 ⟩. This implies that a-x_1=a, because x_1 is not greater than or equal to any join-irreducible component of a in L_0 ⟨ y_ij| i,j=1,2 ⟩. For a-x_2 the property is checked in an analogous way. * If a ≤ h_i then a < y_ii≤ x_i by property <ref> of y_11 and property <ref> of y_22. If a < x_1 then a < g by (<ref>) and a ≤ h_1 ∨ h_2=g^-. Let b be a join-irreducible component of a in L_0 ⟨ y_1,y_2 ⟩. We claim that b ≤ h_1. We have that b is join-irreducible in L_0 ⟨ y_1,y_2 ⟩ and b ≠ y_1,y_2, because b < x_1 and y_1,y_2 ≮ x_1. Indeed, by equations (<ref>), we have: y_2-x_1=(y_2-y_21)-y_11=y_22-y_11=y_22≠ 0 and y_1-x_1=(y_1-y_11)-y_21=y_12-y_21=y_12≠ 0. Suppose b ≰ h_1; then by property <ref> of y_11 we would get b ≮ y_11; furthermore b ≰ h_1 ∧ y_2, and by property <ref> of y_21 we would get b ≮ y_21. Then b would also be join-irreducible in L_0 ⟨ y_ij| i,j=1,2 ⟩ (see Claim 4). Therefore b ≮ y_11∨ y_21=x_1, but this is absurd. Thus for any b join-irreducible component of a we have b ≤ h_1 and hence a ≤ h_1. For x_2 the reasoning is analogous. Subcase 2.2: h_1,h_2 comparable. The remaining cases are when h_1 < h_2 or h_2 < h_1, since h_1=h_2 only occurs when n=0. We now consider the case h_1 < h_2; for h_2 < h_1 the reasoning is analogous. In this case, by equations (<ref>), we have y_2^-=(h_1 ∧ y_2) ∨ h_2=h_2 in L_0 ⟨ y_1,y_2 ⟩. Notice that we can apply the inductive hypothesis only to the first signature in (<ref>), because h_2 ≰ h_1, but it is not true that h_1 ≰ h_2. Then we obtain the existence of y_11,y_12 with the same properties <ref>, <ref>, <ref>, <ref> as in the previous subcase.
We define x_1 = y_11 and x_2=y_12∨ y_2. We want to prove that x_1,x_2 form a primitive couple (x_1,x_2) of the second kind over L_0 inducing the signature (h_1,h_2,g). We have g-x_1=x_2 and g-x_2=x_1, because g-x_1=(y_1 ∨ y_2)-y_11=(y_1-y_11) ∨ (y_2-y_11)=y_12∨ y_2=x_2 and g-x_2=(y_1 ∨ y_2)-(y_12∨ y_2)=((y_1-y_2)-y_12) ∨ ((y_2-y_2)-y_12)=y_1-y_12=y_11=x_1. We have used that y_2 -y_11=y_2; it is proven in the same way as (<ref>) above. * Equations (<ref>) imply g=x_1 ∨ x_2; thus if x_1=x_2 then x_1=g=0, but this is absurd, because x_1,x_2 ≠ 0, since y_11, y_12, y_2 ≠ 0, as they are not in L_0. Furthermore x_1,x_2 ∉ L_0, since g is join-irreducible in L_0 and g-x_1=x_2 and g-x_2=x_1 are different from 0 and g. * See equations (<ref>). Let now a be a join-irreducible element of L_0: * If a < g then a ≤ g^-= h_1 ∨ h_2=h_2. * If a ≤ h_1 then a-x_1=0, because h_1 ≤ y_11=x_1; moreover a ≤ h_1 < h_2 ≤ y_2 < x_2 implies a-x_2=0. * If a ≤ h_2 and a ≰ h_1, clearly a-x_2=0, because a ≤ h_2 ≤ y_2 ≤ x_2. We want to prove that a-x_1=a. The join-irreducible components of a in L_0 ⟨ y_1,y_2 ⟩ coincide with the join-irreducible components of a in L_0 ⟨ y_1,y_2, y_11, y_12⟩. Since a is the join of its join-irreducible components in L_0 ⟨ y_1,y_2 ⟩, it is sufficient to prove that any join-irreducible component b of a in L_0 ⟨ y_1,y_2 ⟩ is join-irreducible in L_0 ⟨ y_1,y_2, y_11, y_12⟩. Pick such a b. We have b ≠ y_1, because b ≤ h_2 and y_1 ≰ h_2. Notice that y_1 ≰ h_2, since y_1 ≤ h_2 would imply 0=y_1-h_2=(g-y_2)-h_2=(g-h_2)-y_2=g-y_2=y_1, which is absurd. Then we have that b is also join-irreducible in L_0 ⟨ y_1,y_2, y_11, y_12⟩; this follows from the fact that L_0 ⟨ y_1,y_2, y_11, y_12⟩ is a minimal extension of L_0 ⟨ y_1,y_2 ⟩ of the second kind, which implies that the join-irreducibles in L_0 ⟨ y_1,y_2 ⟩ different from y_1 are still join-irreducible in L_0 ⟨ y_1,y_2, y_11, y_12⟩. Since a is join-irreducible of L_0 and a ≰ h_1, it is a-h_1=a.
For any b join-irreducible component of a in L_0 ⟨ y_1,y_2 ⟩ we have b ≰ h_1 because a-h_1=a means that h_1 is not greater than or equal to any join-irreducible component of a. Since b ≰ h_1, by property <ref> of y_11 we have that b ≮ y_11=x_1; therefore b ≰ x_1 because b ≠ y_11, since y_11∉ L_0 ⟨ y_1,y_2 ⟩. This implies that a-x_1=a because x_1 is not greater than or equal to any join-irreducible component of a in L_0 ⟨ y_1,y_2, y_11, y_12⟩. * By property <ref> of y_11 we have that a ≤ h_1 iff a < y_11=x_1. If a ≤ h_2 then a ≤ h_2 < y_2 ≤ x_2 since h_2=y_2^-. If a < x_2, since x_2 = y_12∨ y_2 ≤ y_1 ∨ y_2=g then a < g and a ≤ g^-=h_1 ∨ h_2=h_2. §.§ Density axioms [Density 1 Axiom] For every c there exists b ≠ 0 such that c ≪ b. Any existentially closed CBS satisfies the Density 1 Axiom. It is sufficient to show, by Lemma <ref>, that for any finite CBS L_0 and c ∈ L_0 there exists a finite extension L_0 ⊆ L with b ∈ L different from 0 such that c ≪ b. Let P_0 be the finite poset dual to L_0 and C its downset corresponding to c. Let P be the poset obtained from P_0 by adding a new maximum element m ∈ P such that m ≥ p for any p ∈ P_0, and let φ:P → P_0 be the surjective 𝐏-morphism such that dom φ=P_0 and it is the identity on its domain. Then C ≪ m; take as L the CBS dual to P and as b ∈ L the element corresponding to m.
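The Density 1 construction above can be checked mechanically on a small example. The sketch below is our own illustration, not part of the paper: it models a finite CBS by the downsets of its dual poset, with join given by union, the difference A-B given by the downward closure of A∖B, and c ≪ b read, as the proofs here use it, as c ≤ b with b-c=b. All Python names are ours.

```python
# Downset model of a finite CBS (our illustrative encoding): elements are
# downsets of the dual poset, join is union, A - B is the downward closure
# of A \ B, and c << b is read as: c <= b and b - c = b.

def down(S, leq, P):
    """Downward closure of S inside the poset (P, leq)."""
    return frozenset(x for x in P if any(leq(x, s) for s in S))

def diff(A, B, leq, P):
    """The difference A - B in the downset algebra."""
    return down(A - B, leq, P)

def ll(C, B, leq, P):
    """The relation C << B."""
    return C <= B and diff(B, C, leq, P) == B

# Toy dual poset P0 = {p, q} with p <= q; its downsets form the finite CBS L0.
P0 = {"p", "q"}
leq0 = lambda x, y: x == y or (x, y) == ("p", "q")

# Density 1 construction: adjoin a new maximum m above every element of P0.
P = P0 | {"m"}
leq = lambda x, y: leq0(x, y) if {x, y} <= P0 else (y == "m")

M = down({"m"}, leq, P)  # the element b dual to m: the whole poset
for S in [set(), {"p"}, {"p", "q"}]:
    C = down(S, leq, P)
    assert ll(C, M, leq, P)  # every c in L0 satisfies c << m, as the proof claims
```

Adjoining the top m makes every element of the original algebra ≪ m, which is exactly what the proof of the Density 1 Axiom needs.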
[Density 2 Axiom] For every c,a_1,a_2,d such that a_1,a_2 ≠ 0, c ≪ a_1, c ≪ a_2 and a_1-d=a_1, a_2-d=a_2 there exists an element b different from 0 such that: c ≪ b, b ≪ a_1, b ≪ a_2, b-d=b. Any existentially closed CBS satisfies the Density 2 Axiom. It is sufficient to show, by Lemma <ref>, that for any finite CBS L_0 and c,a_1,a_2,d such that a_1,a_2 ≠ 0, c ≪ a_1, c ≪ a_2 and a_1-d=a_1, a_2-d=a_2 there exists a finite extension L_0 ⊆ L with b ∈ L different from 0 such that c ≪ b, b ≪ a_1, b ≪ a_2 and b-d=b. Let P_0 be the poset dual to L_0 and C,A_1,A_2,D its downsets corresponding to c,a_1,a_2,d. If C= ∅ choose two maximal elements α^1, α^2 respectively of A_1 and A_2 and obtain a poset P by adding a new element β to P_0 and setting for any x ∈ P: * β≤ x iff x=β or α^1 ≤ x or α^2 ≤ x. If α^1, α^2 are incomparable they become the only two successors of β in P; otherwise, if e.g. α^1 ≤α^2, then α^1 is the only successor of β. * x ≤β iff x=β, i.e. β is minimal in P. Define a surjective 𝐏-morphism φ:P → P_0 taking dom φ=P_0 and φ acting as the identity on its domain. Take B= β; we have: * φ^-1(C)=∅≪ B, * B ≪φ^-1(A_1)=A_1 ∪{β}, * B ≪φ^-1(A_2)=A_2 ∪{β}, * B- φ^-1(D)=B. Indeed, since a_1-d=a_1 and a_2-d=a_2, D does not contain any maximal element of A_1 or A_2; in particular it does not contain α^1 or α^2. Take L to be the CBS dual to P and b ∈ L the element corresponding to B. If C ≠∅ let γ_1, …, γ_n be the maximal elements of C. Choose for any i=1, …,n two maximal elements α_i^1, α_i^2 respectively of A_1 and A_2 such that γ_i ≤α_i^1 and γ_i ≤α_i^2. Notice that they exist and γ_i ≠α_i^1, γ_i ≠α_i^2 because C ≪ A_1 and C ≪ A_2. Obtain a poset P by adding new elements β_1, …, β_n to P_0 and setting for any x ∈ P: * β_i ≤ x iff x=β_i or α_i^1 ≤ x or α_i^2 ≤ x. If α_i^1, α_i^2 are incomparable they become the only two successors of β_i in P; otherwise, if e.g. α_i^1 ≤α_i^2, then α_i^1 is the only successor of β_i. * x ≤β_i iff x=β_i or x ≤γ_i, i.e.
γ_i is the unique predecessor of β_i in P. Define a surjective 𝐏-morphism φ:P → P_0 taking dom φ=P_0 and φ acting as the identity on its domain. Take B= β_1 ∪⋯∪β_n; we have: * φ^-1(C) ≪ B, * B ≪φ^-1(A_1)=A_1 ∪{β_1, …, β_n }, * B ≪φ^-1(A_2)=A_2 ∪{β_1, …, β_n }, * B- φ^-1(D)=B. Indeed D does not contain any maximal element of A_1 or A_2; in particular it does not contain α_i^1 or α_i^2 for any i=1, …,n. Take L to be the CBS dual to P and b ∈ L the element corresponding to B. Let L be a CBS satisfying the Splitting, Density 1 and Density 2 Axioms. Then for any finite sub-CBS L_0 ⊆ L and for any signature (h,G) of the first kind in L_0 there exists a primitive element x ∈ L of the first kind over L_0 inducing such signature. We follow this strategy: if G= ∅ we use the Density 1 Axiom to take an element m ∈ L greater than any element of L_0; then, thanks to the Splitting Axiom, using Theorem <ref> we `split' m into two elements, one over 1_L_0, the top element of L_0, and another over h. It turns out that this second element is primitive of the first kind and induces the signature (h, ∅). If G is nonempty and G={ g_1, …, g_k } we suppose to have already found a primitive element y inducing the signature (h,{ g_1, …, g_k-1}). Then, using Theorem <ref> again, we `split' g_k into two elements g_k',g_k”, the first over h and the second over the predecessor of g_k. Finally, applying the Density 2 Axiom, we obtain an element of L between h, g_k' and y. It turns out that this element is primitive of the first kind and induces the signature (h, G). The statement of the Theorem requires that, according to the definition of a primitive element inducing a given signature, we do the following. Given h ∈ L_0 and G a set of join-irreducibles of L_0 such that h < g for any g ∈ G, we have to find x ∈ L such that: * x ∉ L_0 and for any a join-irreducible of L_0: * a-x ∈ L_0, * either x-a= x or x-a= 0, * a < x iff a ≤ h and x < a iff g_i ≤ a for some i=1, …, k.
The proof is by induction on k= # G. Case k=0. Let 1_L_0 be the maximum element of L_0; by the Density 1 Axiom there exists 0 ≠ m ∈ L such that 1_L_0≪ m. Then L_1=L_0 ∪{ m } is a sub-CBS of L. Indeed it is closed under taking joins and differences, since for any a ∈ L_0 we have m > a and thus a-m=0 and m = m-1_L_0≤ m-a ≤ m, therefore m-a=m. Hence m is a join-irreducible of L_1. Furthermore it is clear that the join-irreducibles of L_1 are the join-irreducibles of L_0 and m. (h,1_L_0,m) is a signature of the second kind in L_1, indeed h ∨ 1_L_0=1_L_0=m^-. Thanks to the Splitting Axiom we can apply Theorem <ref> to the signature (h,1_L_0,m) in L_1 and obtain the existence of a primitive couple of the second kind (x_1,x_2) ∈ L^2 inducing such signature. Thus we have that there exist x_1,x_2 ∈ L such that: * x_1 ≠ x_2 and x_1,x_2 ∉ L_1, * m-x_1=x_2 and m-x_2=x_1 and for any c join-irreducible of L_1: * if c < m then c-x_i ∈ L_1 for i=1,2, * c < x_1 iff c ≤ h and c < x_2 iff c ≤ 1_L_0. Recall that Lemma <ref> implies that for any c ∈ L_1: * c-x_i ∈ L_1 or c-x_i= b ∨ x_j for some b ∈ L_1 with { i,j } = { 1,2 }. * x_i-c= x_i or x_i-c= 0 for i=1,2. Let x=x_1; it is the element we were looking for. Indeed, we now show that x is a primitive element of the first kind over L_0 inducing the signature (h, ∅): * x ∉ L_0 since x=x_1 ∉ L_1 by property <ref> of x_1. Let a be a join-irreducible of L_0. Then * a-x_1 ∈ L_0. Indeed, from a ≤ 1_L_0 it follows (by property <ref> of x_2) a < x_2; then by <ref> either a-x_1 ∈ L_1 or a-x_1=b ∨ x_2 with b ∈ L_0. The latter is absurd because (since a < x_2) we would get x_2 > a ≥ a-x_1=b ∨ x_2 ≥ x_2. Then a-x_1 ∈ L_1, i.e. a-x_1 ∈ L_0 because m > a ≥ a-x_1. * x_1-a=x_1 or x_1-a=0 by property <ref>. * a < x_1 if and only if a ≤ h by property <ref> of x_1. x_1 ≮ a, because if x_1 < a then x_1 < 1_L_0 and so 0=x_1-1_L_0=(m-x_2)-1_L_0=(m-1_L_0)-x_2=m-x_2=x_1, which is absurd because x_1 ∉ L_1 by property <ref> of x_1. Case k ≥ 1. Suppose that G= { g_1, …, g_k }.
By inductive hypothesis there exists a primitive element y ∈ L of the first kind over L_0 which induces the signature (h, { g_1, …, g_k-1}). This means that for any a join-irreducible of L_0: * y ∉ L_0, * a-y ∈ L_0, * either y-a= y or y-a= 0, * a < y iff a ≤ h and y < a iff g_i ≤ a for some i=1, …, k-1. Recall that Lemma <ref> shows that the properties <ref> and <ref> actually hold for any a ∈ L_0. Notice that g_k is still join-irreducible in the sub-CBS L_0 ⟨ y ⟩⊆ L generated by L_0 and y, since L_0 ⊆ L_0 ⟨ y ⟩ is a minimal finite extension of the first kind by Theorem <ref>. Since L satisfies the Splitting Axiom, we can apply Theorem <ref> to the signature (h,g_k^-, g_k) in L_0 ⟨ y ⟩. Notice that it is a signature of the second kind because h ∨ g_k^-= g_k^- ≪ g_k. Therefore, there exists a primitive couple of the second kind (g_k',g_k”) ∈ L^2 inducing such signature. Thus we have that there exist g_k',g_k”∈ L such that: * g_k',g_k”∉ L_0 ⟨ y ⟩ and g_k' ≠ g_k”, * g_k-g_k'=g_k” and g_k-g_k”=g_k' and for any a join-irreducible of L_0 ⟨ y ⟩: * if a<g_k then a-g_k' ∈ L_0 ⟨ y ⟩ and a-g_k”∈ L_0 ⟨ y ⟩, * a < g_k' iff a ≤ h and a < g_k” iff a ≤ g_k^-. Observe that property <ref> actually holds for any a ∈ L_0 ⟨ y ⟩ since any element in a finite CBS is the join of join-irreducible elements. Apply the Density 2 Axiom on h,y,g_k',d where d= ⋁{ b join-irreducible of L_0 s.t. b ≱ g_1, …, b ≱ g_k }. We can apply it because: h ≪ y since by property <ref> of y we have y-h=y because h ∈ L_0 and h < y. h ≪ g_k' since h < g_k' and g_k'-h=(g_k-g_k”)-h=(g_k-h)-g_k”=g_k-g_k”=g_k'. Notice that g_k-h=g_k because g_k is join-irreducible in L_0.
y-d=y since for any b join-irreducible in L_0 such that b ≱ g_1, …, b ≱ g_k we have y-b=y: otherwise, since y is join-irreducible in L_0 ⟨ y ⟩, it would be y-b=0, so b > y, and then by property <ref> of y we would have b ≥ g_i for some i<k, which is absurd. g_k'-d=g_k' since g_k = g_k-⋁{ b join-irreducible of L_0 s.t. b ≱ g_k }≤ g_k-d ≤ g_k and g_k'-d=(g_k-g_k”)-d=(g_k-d)-g_k”=g_k-g_k”=g_k'. Then by the Density 2 Axiom there exists 0 ≠ x ∈ L such that h ≪ x, x ≪ y, x ≪ g_k' and x-d=x. Then x is the element we were looking for. Indeed, it is primitive of the first kind over L_0 and induces the signature (h,G): * We have x ∉ L_0 because if x ∈ L_0 then, since x < y, it would be x ≤ h by property <ref> of y, but this is absurd because x ≠ 0 and h ≪ x. Let a be a join-irreducible of L_0: * If a ≤ h then a-x=0 since h ≤ x by (<ref>). If a ≰ h then by property <ref> of g_k' we have a ≮ g_k'. * If a ≰ h and a ≠ g_k then a is still join-irreducible in L_0 ⟨ y,g_k',g_k”⟩ (since L_0 ⟨ y ⟩⊆ L_0 ⟨ y,g_k',g_k”⟩ is a minimal finite extension by Theorem <ref>), thus a-g_k'=a. Therefore a-x=a because a=a-g_k' ≤ a-x ≤ a since x ≤ g_k'. * If a=g_k then by x ≪ g_k' (see (<ref>)) g_k-x=(g_k' ∨ g_k”)-x=(g_k'-x) ∨ (g_k”-x )=g_k' ∨ ((g_k-g_k')-x) =g_k' ∨ (g_k-(g_k' ∨ x)) =g_k' ∨ (g_k-g_k')=g_k' ∨ g_k”=g_k. * If a ≥ g_i for some i=1, …,k then: * If i ≠ k then a ≥ y ≥ x and x-a=0 by property <ref> of y and (<ref>). * If i=k then a ≥ g_k ≥ g_k' ≥ x and x-a=0. If a ≱ g_i for any i=1,…,k then a ≤ d and x-a=x since x=x-d ≤ x-a ≤ x. * If a < x then a < g_k' and thus a ≤ h by property <ref> of g_k'. If a ≤ h then a < x because h <x by (<ref>). If x <a and a ≱ g_1, …, a ≱ g_k, then a ≤ d and x=x-d ≤ x-a =0, which is absurd; thus g_i ≤ a for some i=1, …,k. If g_i ≤ a for some i=1, …,k then: * If i ≠ k then, since y < g_i by property <ref> of y, we have x <y < g_i ≤ a and thus x < a. * If i=k then x < g_k' <g_k ≤ a and thus x<a.
§ PROPERTIES OF EXISTENTIALLY CLOSED CBSES From our investigation we can easily obtain some properties of the existentially closed CBSes: If L is an existentially closed CBS, then L does not have a maximum element. Since L satisfies the Density 1 Axiom, for any c ∈ L there exists an element b ≠ 0 such that c ≪ b and therefore c < b. This implies that there cannot be a maximum element of L. Let L be an existentially closed CBS and a,b ∈ L. If a and b are incomparable, i.e. a ≰ b and b ≰ a, then the meet of a and b does not exist in L. Notice that if a ≤ b then the meet exists and it is a. Suppose that c is the meet of a and b. Consider L_0 ⊆ L the sub-CBS generated by a,b,c. It is finite by local finiteness. c is the meet of a and b also in L_0. Since a,b are incomparable there exist g_1,g_2 join-irreducible components in L_0 respectively of a and b such that g_1 ≰ b and g_2 ≰ a. By Theorem <ref>, taking h=0 ∈ L_0, we have that there exists x ∈ L ∖ L_0 such that for any d ∈ L_0: * d < x iff d=0, * x < d iff g_i ≤ d for i=1 or i=2. We have that x ≰ c since x ∉ L_0, g_1 ≰ c and g_2 ≰ c; therefore c < c ∨ x. Notice that x < g_1 ≤ a and x < g_2 ≤ b, thus c ∨ x ≤ a and c ∨ x ≤ b. This implies that c cannot be the meet of a and b in L. If L is an existentially closed CBS, then there are no join-irreducible elements of L. Let g be a nonzero element of L. We can apply the splitting axiom on the triple g,0,0; then there exist g_1,g_2 ∈ L such that g-g_1=g_2, g-g_2=g_1 and g_1,g_2 ≠ 0. Since g_1,g_2 ≤ g and g-(g_1 ∨ g_2)=(g-g_1)-g_2=0 we have that g=g_1∨ g_2. Moreover g_1,g_2 ≠ g because g_1,g_2 ≠ 0. Therefore g cannot be join-irreducible, because g=g_1 ∨ g_2 with g_1,g_2 ≠ g; recall that 0 is never join-irreducible.
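The splitting computation in the last proof can be replayed on the smallest possible instance, the downset algebra of a two-element antichain. The encoding below is our own illustrative model, not notation from the paper.

```python
# Smallest instance of the splitting computation (our illustrative downset
# model): in the downsets of the two-element antichain {u, v}, the top
# element g splits as g - g1 = g2 and g - g2 = g1, forcing g = g1 v g2.

def down(S):
    return frozenset(S)  # in an antichain, every subset is already a downset

def diff(A, B):
    """The difference A - B: downward closure of A \\ B."""
    return down(A - B)

g  = frozenset({"u", "v"})
g1 = frozenset({"u"})
g2 = frozenset({"v"})

assert diff(g, g1) == g2 and diff(g, g2) == g1  # g - g1 = g2 and g - g2 = g1
assert diff(diff(g, g1), g2) == frozenset()     # (g - g1) - g2 = 0
assert g == g1 | g2                             # hence g is not join-irreducible
```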
JARA Institute for Quantum Information, RWTH Aachen University, 52056 Aachen, Germany JARA Institute for Quantum Information, RWTH Aachen University, 52056 Aachen, Germany Max-Planck-Institute of Quantum Optics, Hans-Kopfermann-Str.1, 85748 Garching, Germany Ghent University, Department of Physics and Astronomy, Krijgslaan 281-S9, 9000 Gent, Belgium Ghent University, Department of Physics and Astronomy, Krijgslaan 281-S9, 9000 Gent, Belgium Vienna Center for Quantum Science, Universität Wien, Boltzmanngasse 5, 1090 Wien, Austria Max-Planck-Institute of Quantum Optics, Hans-Kopfermann-Str.1, 85748 Garching, Germany Anyon condensation forms a mechanism which allows one to relate different topological phases. We study anyon condensation in the framework of Projected Entangled Pair States (PEPS), where topological order is characterized through local symmetries of the entanglement. We show that anyon condensation is in one-to-one correspondence to the behavior of the virtual entanglement state at the boundary (i.e., the entanglement spectrum) under those symmetries, which encompasses both symmetry breaking and symmetry-protected (SPT) order, and we use this to characterize all anyon condensations for abelian double models through the structure of their entanglement spectrum. We illustrate our findings with the ℤ_4 double model, which can give rise to both Toric Code and Doubled Semion order through condensation, distinguished by the SPT structure of their entanglement.
Using the ability of our framework to directly measure order parameters for condensation and deconfinement, we numerically study the phase diagram of the model, including direct phase transitions between the Doubled Semion and the Toric Code phase which are not described by anyon condensation. Entanglement phases as holographic duals of anyon condensates Norbert Schuch ============================================================= § INTRODUCTION The study of topologically ordered phases, their relation, and the transitions between them has received steadily growing attention in the last decade. Their lack of local order parameters, the dependence of the ground space structure on their topology, and the exotic nature of their anyonic excitations put them outside the Landau framework of symmetry breaking and local order parameters, and thus call for novel ways of characterizing and relating different phases, for instance through the structure of their ground space or the nature of their non-trivial excitations (anyons), and the way in which those are related throughout different phases. Anyon condensation has been proposed as a mechanism for relating topological phases <cit.>. The main idea is that some mechanism drives a species a of bosonic anyons to condense into the vacuum. This, in turn, forces any anyon b which has non-trivial statistics with a to become confined, as a deconfined b anyon would have non-trivial statistics with the new vacuum, and moreover leads to the identification of anyons which differ by fusion with a. At the same time, the relation between anyon types and ground space of a theory suggests that this condensation is accompanied by a change in the ground space structure. The formalism of anyon condensation allows one to construct “simpler” anyon models from richer ones, and suggests thinking of the “condensate fraction” of the condensed anyon as an order parameter for a Landau-like description of the phase transition.
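The condensation rules just described (confine whatever braids non-trivially with the condensed boson a; identify anyons that differ by fusion with a) can be made concrete for an abelian double. The sketch below is our own illustration, not code from this paper: it uses the standard D(ℤ_N) anyon labels (g,k) and their mutual braiding phases, condenses the ℤ_2 charge of the Toric Code, and recovers the trivial theory.

```python
import cmath

N = 2  # illustrative choice: the double of Z_2, i.e. the Toric Code

def mutual(a, b):
    """Full-braid phase of D(Z_N) anyons a=(g,k), b=(h,l): exp(2*pi*i*(k*h + l*g)/N)."""
    (g, k), (h, l) = a, b
    return cmath.exp(2j * cmath.pi * (k * h + l * g) / N)

def fuse(a, b):
    """Fusion of abelian anyons: add fluxes and charges mod N."""
    return ((a[0] + b[0]) % N, (a[1] + b[1]) % N)

def condense(boson):
    """Deconfined sectors after condensing `boson`, identified up to fusion with it."""
    anyons = [(g, k) for g in range(N) for k in range(N)]
    deconfined = [b for b in anyons if abs(mutual(b, boson) - 1) < 1e-9]
    return {frozenset({b, fuse(b, boson)}) for b in deconfined}

# Condensing the Z_2 charge e=(0,1): the flux m=(1,0) and the dyon em=(1,1)
# braid with e by a phase -1 and become confined, while 1 and e are
# identified -- only the trivial sector survives.
assert len(condense((0, 1))) == 1
```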
Yet, it is a priori not clear how such an order parameter should be measured, and existing approaches describe anyon condensation as a breaking of the global symmetry of the quantum group or tensor category underlying the model <cit.>. Projected Entangled Pair States (PEPS) <cit.> form a natural framework for the local modelling of topologically ordered phases <cit.>. They associate to any lattice site a tensor which describes both the physical system at that site, and the way in which it is correlated to the adjacent sites through entanglement degrees of freedom. It has been shown that in PEPS, topological order emerges from a local symmetry constraint on the entanglement degrees of freedom, characterized by a group action (for so-called double models of groups) <cit.> or more generally by Matrix Product Operators for twisted doubles <cit.> and string-net models <cit.>. In all cases, both ground states and excitations can be modelled from the very same symmetries which characterize the local tensors: group actions and irreducible representations (irreps) in the former case and Matrix Product Operators with suitable endpoints in the latter <cit.>. Yet, it has been observed that the entanglement symmetry of the tensors is not in one-to-one correspondence with the topological order in the system: by adding a suitable deformation to the fixed point wavefunction, the system can be driven into a phase transition which is consistent with a description in terms of anyon condensation <cit.>. This raises the question: What is the exact relation between topological phase transitions in tensor networks and anyon condensation, and can we explain this transition “microscopically” using the local symmetries in the tensor network description? In this paper, we derive a comprehensive framework for the explanation, classification, and study of anyon condensation in PEPS.
Our framework explains and classifies anyon condensation in terms of the different “entanglement phases” emerging at the boundary under the action of the local entanglement symmetry of the tensor, and provides us with the tools to explicitly study the behavior of order parameters measuring condensation and confinement of anyons. More specifically, we show that the symmetry constraint in the entanglement degrees of freedom of the tensor gives rise to a corresponding “doubled” symmetry in the fixed point of the transfer operator, this is, in the entanglement spectrum at the boundary. Anyon condensation can then be understood in terms of the different phases at the boundary, this is, the symmetry breaking pattern together with a possibly symmetry-protected phase of the residual unbroken symmetry. We give necessary and sufficient conditions for the condensation of anyons in abelian double models in terms of the symmetry at the boundary, and show that this completely classifies all condensation patterns in double models of cyclic groups, giving rise to all twisted ℤ_N double models. We also show that these conditions allow us to independently derive the anyon condensation rules described above, providing a tensor network derivation of these conditions. The central idea is to relate anyon condensation and confinement to the behavior of string order parameters, which in turn can be related to symmetry breaking and symmetry-protected order, and to combine this with the constraints arising from the positivity of the boundary state. We illustrate our framework by discussing all possible phases which can be obtained by condensation from a ℤ_4 double model, which can give rise to Toric Code, Doubled Semion, and trivial phases.
Specifically, we show that the Toric Code and Double Semion can exhibit the same symmetry breaking pattern at the boundary, yet are distinguished by different SPT orders, corresponding to the condensation of a charge or a dyon (a combined charge-flux particle), respectively, and thus a different string order parameter. Finally, we apply our framework to numerically study topological phases and the transitions between them along a range of different interpolations. Specifically, the interpretation of condensation and confinement in terms of string order parameters allows us to directly measure order parameters for the different topological phases, namely condensate fractions and order parameters for deconfinement, which allow us to study the nature and order of the phase transitions. Our framework also allows us to set up interpolations between the Toric Code and Double Semion phase, which are a priori not related by anyon condensation, and we find that depending on the nature of the interpolation, we can either find a second-order simultaneous confinement-deconfinement transition, or a first-order transition not characterized by anyon condensation. The paper is structured as follows: In Sec. <ref>, we introduce PEPS, explain how topological order and topological excitations are modelled within this framework, and define condensation and confinement in PEPS models. Sec. <ref> contains the classification of anyon condensation and confinement through the behavior of the boundary: we start by giving the intuition and the main technical assumption, then derive the conditions imposed by the symmetry structure and positivity of the boundary state, and finally show that this classification gives rise to the well-known anyon condensation rules. In Sec. <ref>, we apply this classification to the case of ℤ_N quantum doubles and show that it precisely gives rise to all twisted ℤ_M double models. Finally, in Sec.
<ref>, we illustrate our framework with a detailed discussion of the condensation from a ℤ_4 double, and study the corresponding family of models and the transitions between them numerically. § SYMMETRIES IN PEPS AND ANYONS In this section, we will first introduce the general PEPS framework. We will then explain how certain symmetries in PEPS naturally lead to objects defined on the entanglement degrees of freedom which behave like anyonic excitations. The natural question is then to understand the conditions under which these objects describe observable anyons, or whether they fail to do so by either leaving the state invariant (condensation) or by evaluating to zero (confinement) in the thermodynamic limit. We will focus our discussion on the case of abelian groups; however, several of our arguments in fact apply to general groups, and even beyond that to so-called MPO-injective PEPS; we will discuss these aspects in Sec. <ref>. §.§ PEPS, parent Hamiltonians, and excitations Let us start by introducing Projected Entangled Pair States (PEPS). We focus on a translationally invariant system on a square lattice with periodic boundary conditions, where we take the system size to infinity. PEPS are constructed from a local tensor A^i_αβγδ, where i=1,…,d is the physical index and α,β,γ,δ=1,…,D are the virtual indices, and D is called the bond dimension. Graphically, they are depicted as a sphere with five legs, one for each index, cf. Fig. <ref>a; equivalently, we can consider A=∑ A^i_αβγδ|i⟩⟨α,β,γ,δ| as a linear map from the virtual to the physical system. The tensor A is then arranged on a square lattice, Fig.
<ref>b, and adjacent virtual indices are contracted (i.e., identified and summed over), which is graphically depicted by connecting the corresponding legs. We thus finally obtain a tensor c_i_1… i_N which only has physical indices, and thus describes a quantum many-body state |Ψ⟩= ∑ c_i_1… i_N|i_1,…,i_N⟩. A useful property of PEPS is the possibility of blocking sites – we can take the tensors on some k_1× k_2 patch and define them as a new tensor A' with correspondingly larger D. This allows us to restrict statements about properties of localized regions to fixed-size (e.g., single-site or overlapping 2× 2) patches. To any PEPS, one can naturally associate a family of parent Hamiltonians which have this PEPS as their exact zero-energy ground state <cit.>. Such a Hamiltonian is a sum of local terms h, each of which ensures that the state “looks locally correct” on a small patch, i.e., as if it had been built from the tensor A on that patch. This is accomplished by choosing h≥0 such that h is zero on the physical subspace spanned by the tensors on that patch (for arbitrary virtual boundary conditions) and positive otherwise; note that by choosing a sufficiently large patch, it is always possible to find a non-trivial such Hamiltonian (the dimension of the allowed physical subspace scales with the boundary, while the available degrees of freedom scale with the volume). Clearly, the global PEPS wavefunction is a zero-energy state and thus a ground state of the parent Hamiltonian H=∑ h≥0. At the same time, conditions on A are known under which this ground state is unique (in a finite volume) <cit.>: specifically, it is sufficient if the map from the virtual to the physical system described by A (possibly after blocking) is injective; equivalently, this means that the full auxiliary space can be accessed by acting on the physical space only, i.e., that one can apply a linear map which “cuts out” a tensor and gives direct access to the auxiliary indices. Parent Hamiltonians
naturally give rise to the notion of localized excitations, this is, states whose energy differs from the ground state only in some local regions. To this end, one replaces some tensors by “excitation tensors” B, while keeping the original tensor A everywhere else, cf. Fig. <ref>a. For injective PEPS, these are in fact the only possible localized excitations, since due to the one-to-one correspondence between the virtual and physical system, any tensor B not proportional to A will yield an increased energy w.r.t. the parent Hamiltonian <cit.>. A key question in the context of this work is when an excitation is topologically non-trivial. We will use the following definition: An excitation is topologically trivial exactly if it can be created (with some non-zero probability) by acting locally on the system, i.e., if there exists a linear (not necessarily unitary) map L on the physical system which will create that excitation on top of the ground state, this is, which transforms A to B. It is now straightforward to see that for an injective PEPS, all localized excitations (Fig. <ref>a) are topologically trivial: injectivity implies that A (as a map from the virtual to the physical system) has a left-inverse A^-1, and thus L:=BA^-1 will act as LA=B, i.e., create the desired excitation locally, as shown in Fig. <ref>b. §.§ G-injective PEPS and anyonic excitations Let us now turn towards PEPS which can support topologically non-trivial excitations. To this end, we consider PEPS which are no longer injective, but enjoy a virtual symmetry under some group action, A=A(U̅_g⊗U̅_g⊗ U_g⊗ U_g) with U_g a unitary representation of some finite group G∋ g; we will denote such tensors as G-invariant. Graphically, this is expressed as < g r a p h i c s > , where we use the convention that matrices act on the indices from left to right and down to up, such that U̅_g in Eq.
(<ref>) turns into U_g^†. An important property of G-invariance is its stability under concatenation: when grouping together several G-invariant tensors, the resulting block is still G-invariant, as the U_g and U_g^† on the contracted indices exactly cancel out. In the following, we will focus on abelian groups (though various parts of the discussion generalize to the non-abelian case), and denote the neutral element by e∈ G. If G-invariance is the only symmetry of the tensor A, i.e., if A is injective on the subspace left invariant by the symmetry, we call A G-injective. The parent Hamiltonians of G-injective PEPS have a topological ground space degeneracy and can support anyonic excitations <cit.>, as we will also discuss in the following. We will generally assume that the tensors are G-injective, since otherwise we might be missing a symmetry, likely rendering the discussion incomplete. §.§.§ Electric excitations In order to understand what these excitations look like, let us consider again the possible localized excitations w.r.t. the parent Hamiltonian. As we have seen earlier, any state where one tensor has been replaced by a different tensor B is by construction a localized excitation. In the injective case, any such B could be obtained by acting locally on the physical degrees of freedom, rendering the excitation topologically trivial. However, it is easy to see that this is no longer the case for G-invariant tensors: local operations (Fig.
<ref>b) can only produce tensors B which are again G-invariant, i.e., transform trivially under the action of the symmetry group, since it is exactly the invariant virtual subspace which is accessible by acting on the physical indices. In contrast, B's which transform non-trivially can no longer be created locally, and thus are topologically non-trivial excitations. It is natural to label these excitations by irreducible representations α(g)∈ℂ of the abelian symmetry group G, this is, we can write B=∑_α B_α , where < g r a p h i c s > This is, any such excitation can be understood as a superposition of excitations with fixed α, and we will focus on excitations with a fixed α in the following. These excitations will be denoted as electric excitations with charge α. (For non-abelian groups, we would require instead that each B_α is supported on the irrep α of the group action.) It is straightforward to see that for G-injective PEPS, the topological part of the excitation is fully characterized by α: in case B_α itself is injective on the irrep α, this is immediate since it can be transformed into any other B_α' by locally acting on the physical index; in case B_α is not injective, the same can be done by acting on a 3× 3 block centered around B_α (due to G-injectivity, this allows one to access all degrees of freedom at the boundary in the irrep α). In the following, we will focus our attention on electric excitations of the form < g r a p h i c s > where R_α (the yellow diamond) transforms as R_α U_g = α(g) U_g R_α; the general case will be discussed in Appendix <ref>. An important point to note about electric excitations is that for any system with periodic boundaries, they must come in pairs (or groups) which together transform trivially under the symmetry action, i.e., have total trivial charge, since otherwise the state would vanish. §.§.§ Magnetic excitations For injective PEPS, locally changing tensors was the only way to obtain localized excitations, due to the one-to-one
correspondence of physical and virtual system <cit.>. For G-injective PEPS, however, there exist ways to non-locally change the tensor network without creating an excitation, or only creating a localized excitation <cit.>. To this end, note that Eq. (<ref>) can be reformulated as < g r a p h i c s > < g r a p h i c s > and rotated versions thereof. This has the natural interpretation of the U_g and U_g^† forming strings (symbolized by the dashed blue lines above), which can be freely moved through the lattice (“pulling-through condition”). (Whether U_g or U_g^† has to be used depends on the orientation of the string relative to the lattice <cit.>.) Thus, any string of U_g's is naturally invisible to the parent Hamiltonian, as it can be moved away from any patch the parent Hamiltonian acts on. Indeed, if G-injectivity holds, one can use the equivalence of physical and virtual system on the invariant subspace to prove that such strings are the only non-local objects which cannot be detected by the parent Hamiltonian <cit.>. This yields a natural way to build localized excitations by placing a string of U_g's with open ends on the lattice, as illustrated in Fig. <ref>: any such string can only be detected at its endpoints, thereby forming a localized excitation. These excitations are topological by construction, since by acting on the endpoints alone, we are not able to create such a string. At the same time, using G-injectivity one can prove that the endpoints can always be detected in a finite system. Thus, we arrive at a second type of topologically non-trivial excitations, namely strings of U_g's with an endpoint, < g r a p h i c s >. (We have followed the notation introduced in Fig. <ref>, where blue dots denote U_g or U_g^†, and the dashed blue line highlights the string formed.)
Again, C is an arbitrary G-invariant tensor which can be used to dress the endpoint with an arbitrary topologically trivial excitation; under blocking, it can always be assumed to only sit on a single site as shown. Again, given periodic boundaries any such string must end in a second anyon (or more generally the strings emerging from several anyons can fuse as long as the corresponding group elements multiply to the identity). We will denote these excitations as magnetic excitations with flux g. §.§.§ Dyonic excitations Beyond electric and magnetic excitations, it is also possible to combine the two into a so-called dyon which is of the form < g r a p h i c s > Note that we have made the choice that the irrep R_α sits on the same leg at which the U_g-string ends. While this choice is arbitrary, it is related to any other endpoint, e.g. one where the string ends on the leg before R_α, by a local U_g-string, i.e., a pair of magnetic excitations, which can be created locally and can thus be accounted for by an appropriate choice of C, or even incorporated in R_α. A general anyonic excitation is thus, up to local modifications, labeled by a tuple (g, α); we denote the anyon by gα, and an anyon string with the two conjugate anyons gα and gα̅ at its endpoints by gα. §.§.§ Braiding statistics Let us briefly comment on the braiding statistics of these excitations; we refer to Ref. <cit.> for details. Any physical procedure for moving anyons will result in the U_g-string being pulled along the path. Thus, a half-exchange of two identical anyons transforms < g r a p h i c s > (where for simplicity we have set C=A, as it transforms trivially). Straightening the string by pulling it through the right excitation requires to commute g with R_α, which gives rise to a phase α(g); since the resulting two crossing strings are identical to two non-crossing strings, we thus obtain an overall phase of α(g) due to the half exchange.
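For abelian G, these exchange phases are just irrep evaluations, so the bookkeeping can be made concrete in a few lines. A sketch for G = ℤ_N (illustrative helper names, not from the paper; the mutual-statistics function anticipates the full exchange of two different anyons discussed next):

```python
import cmath

def irrep(p, N):
    """Irrep alpha_p of Z_N: alpha_p(g) = exp(2*pi*i*p*g/N)."""
    return lambda g: cmath.exp(2j * cmath.pi * p * g / N)

def half_exchange_phase(g, p, N):
    """Phase alpha_p(g) picked up by a half exchange of two identical dyons (g, alpha_p)."""
    return irrep(p, N)(g)

def mutual_statistics(g, p, h, q, N):
    """Phase alpha_p(h) * alpha_q(g) for a full exchange of dyons (g, alpha_p) and (h, alpha_q)."""
    return irrep(p, N)(h) * irrep(q, N)(g)
```

For N = 2 this reproduces the toric code pattern: the pure charge and pure flux are mutual semions, and the charge-flux dyon is a fermion.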
Similarly, full exchange of two different anyons gα and hβ gives rise to two such half exchanges, and thus to mutual statistics α(h)β(g) for a full exchange. We therefore see that the strings defined this way indeed exhibit the same statistics as D(G), the quantum double model of G <cit.>. §.§ Virtual level vs. observable excitations: Condensation and confinement §.§.§ Anyon condensation and confinement It is suggestive to assume that this is the complete picture, and G-injective PEPS always exhibit an anyon theory given by the quantum double D(G). However, it has by now been understood that this is not the case <cit.>: By adding a physical deformation Λ to the tensor, A→Λ A, one can drive the system towards a product state, eventually crossing a phase transition. E.g., in the toric code this induces string tension (or more precisely loop fugacity), which eventually leads to the breakdown of topological order <cit.>. This is directly related to the question as to whether the objects which we have just identified as anyonic excitations on the virtual level actually describe observable anyons in the thermodynamic limit, and in the limit of large separation between the individual anyons. While, as we have argued, one can prove <cit.> that the endpoints of a virtual string gα correspond to observable excitations, this only applies in a finite volume. However, it is perfectly possible that—depending on the choice of A—new behavior emerges in the thermodynamic limit, which is reflected in a non-trivial environment imposed on a virtual anyon string gαℓ (with ℓ the separation between the endpoints) which can prevent it from describing an observable anyonic excitation as ℓ→∞. This can happen in at least two distinct ways: Either the environment transforms trivially under gαℓ, in which case the PEPS with gαℓ still describes the ground state, or the environment is orthogonal to gαℓ, in which case the state has norm zero and is thus unphysical.
We will thus distinguish two different ways in which non-trivial virtual excitations gα might fail to describe observable anyonic excitations: 1. Confinement: The state |ψ[gαℓ]⟩ of the system with an anyon string does not describe a properly normalizable quantum state, i.e., ⟨ψ[gαℓ]|ψ[gαℓ]⟩→ 0 as N,ℓ→∞, where first the system size N and then the separation ℓ is taken to infinity. The expectation value in Eq. (<ref>) corresponds to the tensor network in Fig. <ref>a, this is, the expectation value of the string operator gαℓ⊗gαℓ in the double layer ket+bra tensor network. 2. Condensation: |ψ[gαℓ]⟩ is not orthogonal to the ground state |ψ⟩ in the thermodynamic limit, ⟨ψ|ψ[gαℓ]⟩↛ 0 as N,ℓ→∞, i.e., the individual endpoints are not distinguished any more from the ground state by a topological symmetry, and thus differ from it at most in local properties. The corresponding tensor network is shown in Fig. <ref>b and corresponds to the expectation value of the string operator 𝟙⊗gαℓ. In the remainder of this paper, we will explore the conditions under which condensation and confinement occurs in PEPS models, and provide a classification of the possible ways in which this can happen. §.§.§ Condensation, confinement, and string order parameters In order to understand condensation and confinement of anyons in PEPS models, we need to assess the behavior of overlaps ⟨ψ[g'α'α̅'ℓ]|ψ[gαℓ]⟩, corresponding to string operators gαℓ⊗g'α'α̅'ℓ on the virtual level, cf. Fig. <ref>, in the thermodynamic limit and as ℓ→∞.
In what follows, we will assume C=C'=A for simplicity; we discuss how to adapt the arguments to the general case in Appendix <ref>. It is instrumental to introduce the transfer operator 𝕋:= < g r a p h i c s > which is a completely positive map (from left to right) acting on a one-dimensional chain of D-level systems; if we disregard complete positivity, we can equally think of 𝕋 as a map on a 1D chain of ℂ^D⊗ℂ^D systems. In the following, we will restrict to the case of hermitian 𝕋 (corresponding e.g. to a system with combined reflection and time-reversal symmetry), which in particular implies that the left and right fixed points of 𝕋 are equal. Let us now see how the symmetry of the tensor A is reflected in the transfer operator. G-invariance of A is inherited by 𝕋, which thus enjoys the symmetries [𝕋,U^⊗ N⊗𝟙]=[𝕋,𝟙⊗U̅^⊗ N]=0 (with N→∞ the system size); this is, 𝕋 carries an on-site G:=G× G symmetry with representation U_ g = U_g⊗U̅_g', with g≡ (g,g') ∈ G. The irreps of G are given by α((h,h'))= α((h,e)) α((e,h'))≡α(h)α̅'(h'), where α(·):=α((·,e)) and α'(·):=α̅((e,·)) are irreps of G; there is thus a correspondence between irreps of G and pairs of irreps of G, and we will write α = (α,α'). The trivial irrep will be denoted by 1. Finally, we define gαℓ:= gαℓ⊗g'α'α̅'ℓ. Generally, we will stick to the convention that we use boldface letters for objects living on ket+bra. In terms of the transfer operator, we can now re-express our quantities of interest for condensation and confinement as expectation values of gαℓ in some left and right fixed points (ρ_L| and |ρ_R) of 𝕋: ⟨ψ[g'α'α̅'ℓ]|ψ[gαℓ]⟩ = (ρ_L| gαℓ |ρ_R) , where we assume (ρ_L|ρ_R)=1. [We use round brackets |·) to denote vectors on the joint ket+bra virtual level.]
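Concretely, the single-site object of 𝕋 is obtained by contracting the physical index between ket and bra layer and fusing the paired virtual indices. A minimal sketch, assuming a hypothetical index layout A[p, l, r, u, d] (physical, left, right, up, down):

```python
import numpy as np

def transfer_site(A):
    """Single site of the transfer operator: contract the physical index of
    A[p, l, r, u, d] against its complex conjugate, then fuse each ket+bra
    pair of virtual indices into one doubled index of dimension D*D."""
    E = np.einsum('plrud,pLRUD->lLrRuUdD', A, A.conj())
    D = A.shape[1]
    return E.reshape(D * D, D * D, D * D, D * D)
```

Contracting a row of such objects and iterating gives the action of 𝕋 on the doubled virtual chain; hermiticity of 𝕋, as assumed in the text, then makes left and right fixed points coincide.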
The |ρ_∙) can also be understood as operators acting between ket and bra level, in which case we will denote them by ρ_∙. Specifically, ρ_Lρ_R has been shown to exactly reproduce the entanglement spectrum of a bipartition of the system <cit.>, and thus any statement about the ρ_∙ translates into a property of the entanglement spectrum. Note that gαℓ is formed exactly by a string of symmetry operations and terminated by irreps of the doubled symmetry group G≡ G× G, i.e., a string order parameter, and it is thus suggestive to understand the condensation and confinement of anyons by studying the possible behavior of string order parameters for the group G. § CLASSIFICATION OF STRING ORDER PARAMETERS AND CONDENSATION The following section presents the core result of the paper: We classify all different behaviors which the string operators gαℓ in a G-invariant PEPS can exhibit by relating them to the classification of symmetry-protected (SPT) phases in one dimension, as given by the fixed point of the transfer operator. We start in Sec. <ref> by explaining the intuition why the classification of anyon behaviors should be related to the classification of 1D phases. In Sec. <ref> we explicitly state the technical assumptions made (specifically, the form of the fixed point space). Secs. <ref>–<ref> contain the classification: In Sec. <ref>, we study the structure of symmetry breaking of the fixed point space and show that the endpoints gαℓ decouple as ℓ→∞, allowing us to restrict to semi-infinite strings in the following; in Sec. <ref>, we derive the constraints imposed by the symmetry breaking on the anyons and show how it allows to decouple anyon pairs; in Sec. <ref>, we make the connection between the behavior of anyons and the SPT structure of the fixed points, and in Sec. <ref>, we show that there exists an additional non-trivial restriction on the SPTs which can appear as fixed points of 𝕋, and thus on the possible anyon behavior, arising from the (complete) positivity of 𝕋.
Finally, in Sec. <ref>, we show that the conditions derived in the preceding sections precisely give rise to the known anyon condensation rules. §.§ Intuition Let us first present the intuition behind this classification. To this end, we use that we are interested in gapped phases and thus the system is short-range correlated: This suggests that the fixed point of the transfer operator 𝕋 is short-range correlated as well, and thus has the same structure as the ground state of a local Hamiltonian with the identical symmetry [𝕋, U_ g^⊗ N]=0. Let us now consider the different phases of such a Hamiltonian. We first restrict to the regime of Landau theory, where phases are classified by order parameters, i.e., irreps of the symmetry group. Depending on the phase, different irreps will have zero or non-zero expectation values, which implies condensation [for a non-zero expectation value of an irrep (α,e) with α≠ e] and confinement [for a vanishing expectation value of an irrep (α,α)] of charges, corresponding to broken diagonal or unbroken non-diagonal symmetries, respectively. On the other hand, assuming a mean-field ansatz (which is exact in a long-wavelength limit), we find that strings of group actions either create a domain wall (for a broken symmetry) or act trivially (for an unbroken symmetry), relating the symmetry breaking patterns also to the condensation and confinement of magnons. We thus see that the condensation and confinement of electric and magnetic excitations corresponds to Landau-type symmetry breaking in the fixed point of the transfer operator, as observed in Ref. <cit.>. As we will see in the following, this picture becomes more rich when we go beyond Landau theory and allow for SPT phases: These phases are not captured by mean-field theory and are rather characterized by the behavior of string order parameters, i.e., strings of group actions terminated by order parameters, which give rise to condensation and confinement of dyonic excitations.
§.§ The assumption: Matrix Product fixed points We start by stating our main technical assumption: The fixed point space of 𝕋 (possibly after blocking) is spanned by a set of injective Matrix Product States (MPS), which are related by the action of the symmetry group. Let us be more specific. Let i=(i,i') denote a joint ket+bra index of the blocked transfer operator. Then, we assume there exists a set of matrices M^ i, c which describe distinct MPS ρ_ c= < g r a p h i c s > on a finite chain with periodic boundary conditions. We require that these MPS fulfill the following conditions: 1. The ρ_ c span the full fixed point space of 𝕋. (This is, evaluating any quantity of interest either in the fixed point space of 𝕋 or in span{ρ_ c} yields the same result in the thermodynamic limit.) 2. The ρ_ c are injective, i.e., 𝔼_ c^ c has a unique eigenvalue with maximal magnitude, where 𝔼_ c^ c':= ∑_ i M^ i, cM̅^ i, c' is the mixed transfer operator of the MPS. W.l.o.g., we choose to normalize M^ i, c such that λ_max(𝔼_ c^ c)=1. 3. For each c and g, there is a c' such that U_g|ρ_ c⟩=|ρ_ c'⟩, and for each pair c, c', there is a corresponding g. (Here and in the following, we use U_g as a shorthand for the global symmetry action U_ g^⊗ N whenever the meaning is clear from the context.) Note that we make no assumption that the ρ_ c are positive, and in fact in many cases the fixed point space cannot be spanned by positive and injective MPS. Assumption <ref> is the main technical assumption here. Note that to some extent a similar assumption underlies the classification of phases of 1D Hamiltonians <cit.>, where the ground space is approximated by MPS as well: While this is motivated by the known result that MPS can approximate ground states of finite systems efficiently <cit.>, also in that scenario it is yet unproven whether this rigorously implies that MPS are sufficiently general to classify phases in the thermodynamic limit.
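Assumption 2 can be probed numerically for a concrete MPS tensor: injectivity (after blocking) is stated there precisely as uniqueness of the largest-magnitude eigenvalue of the MPS transfer operator 𝔼. A sketch of such a check (illustrative helper names; the GHZ tensor below is the standard non-injective counterexample):

```python
import numpy as np

def mps_transfer_matrix(M):
    """E = sum_i M^i (x) conj(M^i) for an array M of shape (d, D, D)."""
    return sum(np.kron(m, m.conj()) for m in M)

def has_unique_top_eigenvalue(M, tol=1e-8):
    """Proxy for Assumption 2: the largest-magnitude eigenvalue of E is unique."""
    ev = np.sort(np.abs(np.linalg.eigvals(mps_transfer_matrix(M))))[::-1]
    return ev[0] - ev[1] > tol
```

For the GHZ tensor the two diagonal blocks each contribute a unit eigenvalue, so the check fails, consistent with the need to decompose such states into the injective sectors ρ_c.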
Assumptions 2 and 3 can be replaced by the weaker assumption that the fixed point space is spanned by some MPS, together with the assumption that we are not missing any symmetries. Specifically, given an MPS with periodic boundary conditions, it can be brought into a standard form (possibly involving blocking of sites) where it can be understood as a superposition of distinct injective MPS ρ_ c (possibly with size-dependent amplitudes) <cit.>. While the ρ_ c are not necessarily fixed points of the transfer operator themselves, such as in the case of an antiferromagnet where the transfer operator acts by permuting the ρ_ c, they will be fixed points of the transfer operator obtained after suitable blocking. Since, as we will see in a moment, cross-terms between different ρ_ c vanish when computing physical quantities of interest, we can instead work with a fixed point space spanned by the ρ_ c [Note that this does not imply that the fixed point space is actually spanned by the ρ_ c. In fact, it is easy to see that this would require extra conditions such as rotational invariance, since e.g. a transfer operator projecting onto a GHZ-type state would have a unique fixed point (the GHZ state) which is not an injective MPS.], corresponding to Assumption 2. Assumption 3 can be justified by requiring that any degeneracy is due to some symmetry of the transfer operator—otherwise, it would be an accidental degeneracy and thus not stable against perturbations. Since the transfer operator has itself a Matrix Product structure, any symmetry of the transfer operator must be encoded locally, i.e., it will show up as a symmetry of the single-site ket+bra object shown in Fig. <ref>a <cit.>. There can be two distinct types of such symmetries: Those which act identically on ket and bra layer, shown in Fig. <ref>b for on-site symmetries, and those which only act on one layer, shown in Fig.
<ref>c. (Symmetries which act on the two layers in distinct ways can be split into a product of the former two symmetries, cf. the argument at the beginning of Sec. <ref>.) Symmetries which only act on one layer correspond to topological symmetries of the PEPS tensor, such as those of Eq. (<ref>), and thus need to be incorporated into the description from the very beginning. Symmetries acting identically on ket and bra layer, on the other hand, give rise to a non-trivial physical symmetry action through the identity in Fig. <ref>d and thus correspond to a global physical symmetry of the system; since their corresponding symmetry sectors are degenerate in the transfer operator, they are susceptible to physical perturbations which lead to symmetry breaking <cit.>, and we can therefore assume that the system is in one of the symmetry-broken sectors, in which all fixed points are related by the action of the topological symmetry. This in particular includes breaking of translational symmetry, which warrants that we can obtain injective tensors by blocking sites. Note that it is conceivable that different symmetry-broken sectors are described by a different condensation scheme (a simple example can be obtained by coupling different deformations of the system to an Ising model). §.§ Symmetry breaking structure In this section, we clarify the symmetry breaking structure of the fixed point space, and show that the relevant expectation values do not depend on which vector in the fixed point space we choose. To this end, consider the set of ρ_ c satisfying the three assumptions just laid out. For each c, let H_ c:={ h∈ G : U_ hρ_ c=ρ_ c}. It is clear that H_ c⊂ G is a subgroup of G; furthermore, for G abelian, H_ c is independent of c, since for any h∈ H_ c and g∈ G s.th. U_ g|ρ_ c⟩=|ρ_ c'⟩, |ρ_ c'⟩ =U_ g|ρ_ c⟩ = U_ g U_ h|ρ_ c⟩ = U_ h U_ g|ρ_ c⟩ =U_ h|ρ_ c'⟩ , and we write H≡ H_ c. What is the structure of H? To this end, consider arbitrary γ_ c s.th.
ρ:=∑γ_ cρ_ c≥0. For any h=(h,h') ∈ H, we have that ρ = U_hρ U_h'^†, and thus ρ^2 = ρρ^† = (U_hρ U_h'^†)(U_h'ρ U_h^†) = U_hρ^2 U_h^† , and thus [ρ^2,U_h]=0. Since ρ≥0, this implies that [ρ,U_h]=0 as well, or ρ = U_hρ U_h^† , and similarly ρ = U_h'ρ U_h'^†. Now choose γ_ c s.th. ρ=∑γ_ cρ_ c=𝕋^∞(𝟙), the fixed point of 𝕋 obtained when starting from 𝟙, and pick some c_0. Then, for sufficiently small ϵ>0, σ':=𝟙+ϵ(ρ_ c_0+ρ_ c_0^†)≥0 and σ”:=𝟙+iϵ(ρ_ c_0-ρ_ c_0^†)≥0, and thus ρ':=𝕋^∞(σ') and ρ”:=𝕋^∞(σ”) are both positive fixed points and therefore satisfy Eq. (<ref>), which implies that also ρ_ c_0=1/(2ϵ)[(ρ'-ρ)-i(ρ”-ρ)] satisfies ρ_ c_0 = U_hρ_ c_0 U_h^†. We thus find that whenever (h,h')∈ H, we must also have that (h,h)∈ H and (h',h')∈ H. Now consider a general element (kℓ,k)∈ H. Then, (k,k)∈ H, and thus (ℓ,e)=(kℓ,k)·(k,k)^-1∈ H. It follows that K∋ k and L∋ℓ form groups, and since (ℓ,e)∈ H ⇒ (ℓ,ℓ)∈ H, L⊂ K. The conserved symmetry H is isomorphic to a direct product K× L with L⊂ K, where K labels the diagonal and L the off-diagonal symmetry, i.e., H∋ h = (kℓ,k) with k∈ K and ℓ∈ L. To distinguish it from the ket/bra product, we will denote the diagonal/off-diagonal product by H = K ⊠ L. Let us now consider the evaluation of an anyonic string order parameter gαℓ inside general left and right boundary conditions ∑λ_ c^l/r|ρ_ c). This results in a sum over terms of the form O_ c^ c':= < g r a p h i c s > , where we suppress the dependency of O_ c^ c' on α and g. In case c≠ c', the largest eigenvalue of the mixed transfer operator 𝔼_ c^ c' is strictly smaller than one (a straightforward application of Cauchy-Schwarz, see e.g. Lemma 8 of Ref.
<cit.>), and thus O_ c^ c'→ 0 exponentially as N→∞, i.e., only terms with c= c' survive in the thermodynamic limit. In case c= c', we use that ρ_ c =U_ hρ_ c_0 for some fiducial c_0 with h∈ G, and thus O_ c^ c= < g r a p h i c s > and since U_ g and U_ h commute, and the phases from commuting U_ h R_α=α( h) R_α U_ h and U_ h R_α̅= α̅( h)R_α̅ U_ h cancel out, we find that O_ c^ c=O_ c_0^ c_0. We thus find that the expectation value for any string is the same regardless of the boundary conditions, and we will therefore omit the subscript c_0 from now on and write ρ≡ρ_ c_0 and M≡ M^c_0 (in fact, we will most of the time also omit the label M of the tensor). After these considerations, we are left with the following question: Given a symmetry H⊂ G, H=K⊠ L, and an invariant fixed point ρ given by an injective MPS with tensor M, what are the different possible ways in which the strings describing the behavior of anyons can behave regarding condensation and confinement? §.§ Behavior of string order parameters I: Symmetry breaking and decoupling Let us now consider what happens when we separate the two ends of a string gαℓ. Evaluated in the fixed point MPS ρ≡ρ_ c_0, this corresponds to (ρ|gαℓ|ρ) = < g r a p h i c s > . We now distinguish two cases: If g∉ H, then U_g^⊗ N|ρ_ c_0⟩ = |ρ_ c'⟩ with c'≠ c_0, and since different representations of an injective MPS are related by a local gauge transformation <cit.>, it holds that < g r a p h i c s > , and thus < g r a p h i c s > , and since λ_max(𝔼^ c'_ c_0)<1, (ρ|gαℓ|ρ)→0 as ℓ→∞. We thus obtain (ρ|gαℓ|ρ)→0 unless g∈ H. In particular, all anyons gα with g∉ K are confined.
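For abelian groups, the decomposition H = K ⊠ L of Condition 1 and the resulting confinement criterion can be checked mechanically. A sketch (hypothetical helpers for G = ℤ_N, with H given as a set of pairs):

```python
def split_K_L(H, N):
    """Decompose an unbroken subgroup H of Z_N x Z_N as H = K ⊠ L:
    K is the diagonal part, L the off-diagonal (ket-only) part."""
    K = {k for k in range(N) if (k, k) in H}
    L = {l for l in range(N) if (l, 0) in H}
    # sanity check of Condition 1: H = {((k + l) % N, k)}
    assert H == {((k + l) % N, k) for k in K for l in L}
    return K, L

def confined_fluxes(H, N):
    """Fluxes g for which every anyon g-alpha is confined: g not in K."""
    K, _ = split_K_L(H, N)
    return [g for g in range(N) if g not in K]
```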
On the other hand, if g∈ H, U_g^⊗ N|ρ_ c_0⟩ = |ρ_ c_0⟩ and thus there exist V_ g such that < g r a p h i c s > , where V_ g forms a projective representation of H which can be chosen unitary by a suitable gauge of the MPS <cit.>. Injectivity of the MPS further implies that its transfer operator 𝔼≡𝔼_ c_0^ c_0 has a unique fixed point, 𝔼^N-ℓ-2= < g r a p h i c s > (w.l.o.g., we choose σ_R,σ_L≥0, and normalization implies tr[σ_Lσ_R]=1), and using Eq. (<ref>), this implies that < g r a p h i c s > . Also, since [𝔼, V_ g⊗V̅_ g]=0, uniqueness of the fixed point of 𝔼 implies that V_gσ_∙V_g^† = σ_∙, where ∙=L,R, and the ordering of the indices of σ_∙ is chosen accordingly. With this, we can rewrite Eq. (<ref>) as (ρ|gαℓ|ρ)→⟨gα̅^*⟩⟨gα⟩ , where ⟨gα⟩ := < g r a p h i c s > , and correspondingly ⟨gα̅^*⟩. This implies that the expectation value of any string order parameter decouples into a product of two expectation values corresponding to semi-infinite strings, and in order to study condensation and confinement, it is thus sufficient to consider the behavior of ⟨gα⟩, Eq. (<ref>). In order to highlight the role played by the two layers, we will sometimes also write ⟨gα⊗g'α'⟩ :=⟨gα⟩, with g=(g,g'), α = (α,α'). §.§ Behavior of string order parameters II: Symmetry protected phases and group cohomology We will now study the behavior of string order parameters ⟨gα⟩, Eq. (<ref>), with g∈ H more closely and show that they are directly related to the classification of symmetry-protected phases through group cohomology. The crucial point here is that, following Eq.
(<ref>), a physical symmetry action U_ g can be replaced by a virtual symmetry action V_ g, where the V_ g form a projective representation of the symmetry group, i.e., V_ g V_ h=ω( g, h) V_gh, where ω: H× H→U(1) is a so-called 2-cocycle – i.e., it satisfies ω( g, hk)ω( h, k)=ω( g, h)ω(gh, k) due to associativity – which, up to gauge choices V_ g∼ e^iϕ_ gV_ g, is classified by the second cohomology group H^2( H,U(1)); this discrete classification of the V_ g is what is underlying the classification of symmetry-protected phases in one dimension <cit.>. The 2-cocycle ω also encodes what happens when we commute V_ g and V_ h: V_ g V_ h = ω( g,h)V_gh =ω( g,h)V_hg= ω( g,h)/ω( h,g) V_ h V_ g . Here, ω( g, h)/ω( h, g)=:ν_ h( g) is called the slant product <cit.> of ω with h; for abelian groups, it forms a one-dimensional representation of H, ν_ h( g_1)ν_ h( g_2)=ν_ h( g_1 g_2) [This can be seen using the cocycle conditions and the fact that the group is abelian as follows: ν_ h( g_1) ν_ h( g_2)/ν_ h( g_1 g_2) =ω( g_1, h)ω( g_2, h) ω( h, g_1 g_2)/ω( h, g_1)ω( h, g_2) ω( g_1 g_2, h) ω( hg_1, g_2)/ω( hg_1, g_2)=ω( g_1, h)ω( g_2, h) ω( h, g_1 g_2) ω( hg_1, g_2) /ω( h,g_1 g_2) ω( g_1, g_2) ω( h, g_2) ω( g_1 g_2, h)=ω( g_1, h)ω( g_2, h) ω( h, g_1 g_2) ω( hg_1, g_2) /ω( h,g_1 g_2) ω( h, g_2) ω( g_1, g_2 h) ω( g_2, h) =ω( g_1, h) ω( hg_1, g_2) /ω( h, g_2) ω( g_1, g_2 h) =ω( g_1, h) ω( g_1h, g_2) /ω( h, g_2) ω( g_1, h g_2)=1.]. Note that we can always construct (non-unique) representations γ and γ' of G such that γ(g)γ'(g')=ν_ h((g,g')) for (g,g')∈ H: To this end, let γ(g):=ν_ h((g,e)) for g∈ L, extend γ to a representation of g∈ K (formally, this corresponds to an induced representation), and define γ'(g):=ν_ h((g,g))/γ(g); finally, both γ and γ' can be extended independently to representations of G.
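The slant product ν_h(g) can be made concrete with clock-and-shift matrices, which realize the non-trivial projective representations of ℤ_q×ℤ_q: commuting Z past X produces exactly the gauge-invariant phase μ, and commuting powers multiplies the phases, illustrating that ν is a one-dimensional representation. A minimal sketch:

```python
import numpy as np

def clock_shift(q):
    """Shift X and clock Z on C^q, obeying Z X = mu X Z with mu = exp(2*pi*i/q)."""
    X = np.roll(np.eye(q), 1, axis=0)                   # cyclic shift
    Z = np.diag(np.exp(2j * np.pi * np.arange(q) / q))  # diagonal clock matrix
    return X, Z
```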
We will now derive conditions on g and α under which ⟨gα⟩ must be zero and demonstrate how in the remaining cases, it can be made non-zero by an appropriate choice of R_α, and we find that this is in one-to-one correspondence to the inequivalent 2-cocycles, i.e., elements of H^2( H,U(1)); the no-go part of this discussion has been first given in Ref. <cit.> in the context of string order parameters for SPT phases. To this end, let us consider an MPS with a specific projective representation V_ g with corresponding ω( g, h), and consider a string order parameter gα evaluated in that MPS, ⟨gα⟩ = < g r a p h i c s > . We now insert a resolution of the identity U_hU_h^† before R_α and use R_αU_h = α( h) U_hR_α, which gives < g r a p h i c s > We thus find that for ⟨gα⟩ to be non-zero, it must hold that α = ν_ g, the irreducible representation obtained as the slant product of the 2-cocycle ω. Conversely, by choosing R_α such that < g r a p h i c s > – which is always possible due to the injectivity of M – we have that < g r a p h i c s > i.e., R_α transforms as α≡ν_ g on H as required, and ⟨gα⟩ = < g r a p h i c s > = 1 . It remains to see how R_α transforms under the action of the full symmetry group G, and more specifically that the construction can be generalized to any irrep α of G with restriction α|_ H≡ν_ g; this, together with how to separate R_α into independent ket and bra actions, is discussed in Appendix <ref>. We thus see that the behavior of string order parameters is in one-to-one correspondence with the different SPT phases appearing in the fixed point of the transfer matrix: For a given SPT phase, a string order parameter ⟨gα⟩ can only be non-zero if α=ν_g, and at the same time, it is always possible to set up the endpoint R_α of the string order parameter such that ⟨gα⟩ actually is non-zero.
A string operator gα with ⟨gα⟩≠ 0 exists if and only if α( h)=ν_ g( h) for all h∈ H, with α( (h,h'))=α(h)α̅'(h'), g=(g,g'), and ν_ g( h)=ω( h, g)/ω( g, h), where ω is the 2-cocycle classifying the fixed point of the transfer operator. Clearly, the same derivation for the other endpoint of the string, ⟨gα̅^*⟩, yields exactly the same condition. Note that Conditions <ref> and <ref> together show that the “amount of topological order” – this is, the number of anyons – is related to the “symmetry breaking gap” between ket and bra, |K|/|L|, where H=K⊠ L: Deconfined anyons gα satisfy (g,g)∈ H, i.e., g∈ K, and (α,α)=ν_(g,g), which fixes α on L and thus leaves |G|/|L| possibilities to extend it to G, yielding a total of |K||G|/|L| deconfined anyons. Out of those, pairs gα and gkαβ are indistinguishable if (gk,g)∈ H, i.e., k∈ L, and (αβ,α)=ν_(gk,g), which fixes β on K, leaving |G|/|K| possible extensions; the size of each set of indistinguishable anyons is thus |L||G|/|K|. The total number of anyons – the ratio of these numbers – is thus (|K|/|L|)^2, and the total quantum dimension is |K|/|L|, the “symmetry breaking gap” between ket and bra. §.§ Constraints from positivity The condition that ⟨gα⟩≠ 0 iff α= ν_ g (Condition <ref>) has been derived for a general fixed point of MPO form. However, as we have seen in Sec. <ref>, we can w.l.o.g. take the fixed point to be positive semidefinite, which gave rise to the structure of the unbroken symmetry subgroup H (Condition <ref>). As we will see now, positivity induces yet another constraint, namely on the 2-cocycles realizable in the fixed point. To this end, consider a positive fixed point ρ≥0 with an SPT characterized by some 2-cocycle ω: H× H→ℂ, and consider some g', α, and α' such that ⟨(e,g')(α,α')⟩≠ 0. Then, also for the other endpoint ⟨(e,g')(α,α')^*⟩≠ 0, and thus [following Eq.
(<ref>)] 0 < |(ρ|(e,g')(α,α')ℓ|ρ)|^2 = |tr[eαℓρg'α'α̅'ℓρ]|^2 = |tr[(√(ρ)eαℓ√(ρ))(√(ρ)g'α'α̅'ℓ√(ρ)) ]|^2 (*)≤ tr[(√(ρ)eαℓ√(ρ))(…)^†] × tr[(√(ρ)g'α'α̅'ℓ√(ρ))(…)^† ] = (ρ|(e,e)(α,α)ℓ|ρ) × (ρ|(g',g')(α',α')ℓ|ρ) , where we have used Cauchy-Schwarz in (*) [here, (…) denotes the preceding term]. Following Eq. (<ref>), this implies ⟨(e,e)(α,α)⟩≠ 0, and thus (from Condition <ref>) α(h)=α(h)α̅(e)=ν_(e,e)((h,e)) ≡ 1 for (h,e)∈ H. At the same time, ⟨(e,g')(α,α')⟩≠ 0 implies that α(h)=α(h)α̅'(e)=ν_(e,g')((h,e)), and thus 1=ν_(e,g')((h,e)) =ω((h,e),(e,g'))/ω((e,g'),(h,e)) , i.e.: The projective representations of ket and bra symmetry actions must commute, V_(g,e) V_(e,g') = V_(e,g') V_(g,e) , or ν_(g,e)( (e,g'))=1, where (g,e),(e,g')∈ H. §.§ Anyon condensation rules Let us now show that the Conditions <ref>–<ref> exactly give rise to the anyon condensation rules mentioned in the introduction: 1. Only self-bosons can condense. 2. Anyons become confined if and only if they have mutual non-bosonic statistics with some condensed anyon. 3. Non-confined anyons which differ by a condensed anyon become indistinguishable. §.§.§ Only self-bosons can condense. Consider a condensed anyon gα, ⟨(g,e)(α,1)⟩≠ 0. This requires (g,e)∈ H=K⊠ L, i.e., g∈ L, and moreover α(h)=ν_(g,e)((h,h')) ∀ (h,h')∈ H, and thus α(g)=ν_(g,e)((g,e))=1, i.e., gα is a self-boson. §.§.§ Anyons become confined if and only if they have mutual non-bosonic statistics with some condensed anyon. Let us first show that an unconfined anyon kβ, ⟨(k,k)(β,β)⟩≠ 0, must have mutual bosonic statistics with all condensed anyons gα, ⟨(g,e)(α,1)⟩≠ 0. ⟨(k,k)(β,β)⟩≠ 0 implies k∈ K and β(h)β̅(h')=ν_(k,k)((h,h')) for (h,h')∈ H, i.e., β(h)=ν_(k,k)((h,e)) for h∈ L. On the other hand, ⟨(g,e)(α,1)⟩≠ 0 implies (g,e)∈ H, i.e., g∈ L, and α(h)=ν_(g,e)((h,h)) for h∈ K.
We thus have that α(k)β(g) = ν_(g,e)((k,k)) ν_(k,k)((g,e)) = 1 , since ν_ g( h)ν_ h( g)=1. Conversely, consider a confined anyon kβ, ⟨(k,k)(β,β)⟩=0: we will show that this implies the existence of a condensed anyon gα, ⟨(g,e)(α,1)⟩≠ 0, which has mutual non-bosonic statistics, α(k)β(g)≠ 1, by explicitly constructing such an anyon gα. ⟨(k,k)(β,β)⟩=0 implies that either (i) k∉ K or (ii) there exists (h,h')∈ H s.th. β(h)β̅(h')≠ν_(k,k)((h,h')). Let us first consider case (i), k∉ K. Let g=e [thus (g,e)∈ H], and choose an irrep α of G s.th. α(h):=ν_(g,e)((h,h'))= ν_(e,e)((h,h'))≡ 1 for (h,h')∈ H – this is, gα is condensed. On the other hand, since k∉ K we can always choose α s.th. α(k)≠ 1 (as the extension of the irrep from K to G is non-unique), and thus, α(k)β(g)≠ 1, i.e., the anyons have mutual non-bosonic statistics. Now consider case (ii): k∈ K, but there exists some (h_0,h_0')∈ H s.th. β(h_0)β̅(h_0')≠ν_(k,k)((h_0,h_0')). Define g:=h_0h_0'^-1∈ L. Since hermiticity implies ν_(k,k)((h,e))=ν̅_(k,k)((e,h)) (as can be shown by relating ν_(k,k) to the behavior of string order parameters) and thus ν_(k,k)((h,h))=ν_(k,k)((h,e))ν_(k,k)((e,h))=1, we have that β(g) = β(g)β̅(h_0')β(h_0')=β(h_0)β̅(h_0')≠ν_(k,k)((h_0,h_0'))= ν_(k,k)((g,e)) ν_(k,k)((h_0',h_0')) = ν_(k,k)((g,e)) . Further, let α'(h'):=ν_(g,e)((e,h'))≡ 1 for h'∈ L, and extend it to the trivial irrep α'≡ 1 of G. Then, α(h):=ν_(g,e)((h,h))/α'(h), h∈ K, can be extended to an irrep α of G s.th. α(h)α̅'(h') = ν_(g,e)((h,h')) for all (h,h')∈ H, i.e., gα is condensed. Finally, α(k)β(g)≠ν_(g,e)((k,k)) ν_(k,k)((g,e)) = 1, i.e., gα and kβ have mutual non-bosonic statistics. §.§.§ Non-confined anyons which differ by a condensed anyon become indistinguishable. Let gα be condensed, i.e., g∈ L and α(h)=ν_(g,e)((h,h')) ∀(h,h')∈ H, and kβ unconfined, k∈ K and β(h)β̅(h')=ν_(k,k)((h,h')) ∀(h,h')∈ H.
Then, (gk,k)∈ H, and α(h)β(h)β̅(h') = ν_(g,e)((h,h'))ν_(k,k)((h,h'))= [ν_(h,h')((g,e))ν_(h,h')((k,k))]^-1= ν_(h,h')((gk,k))^-1= ν_(gk,k)((h,h')) , i.e., the anyons kβ and gkαβ become indistinguishable. § ANYON CONDENSATION IN D(ℤ_N) AND TWISTED DOUBLE MODELS We will now show that in the case of cyclic groups, G=ℤ_N, this allows for a full classification of all condensation patterns, and that these condensation patterns give rise exactly to all twisted quantum doubles D^ω_3(ℤ_M), where the twist ω_3 is given by a 3-cocycle of ℤ_M [Note that the same cannot hold for all abelian groups: Condensing from an abelian group gives another abelian model, while twisting an abelian model can give rise to non-abelian models <cit.>.]. In what follows, we will write the groups additively with neutral element zero, and addition is understood modulo the order of the group. §.§ Allowed phases at the boundary Let us first study the effect of the above conditions on the possible SPT phases at the boundary, and thus the possible condensation patterns. As we have seen, the symmetry G=ℤ_N×ℤ_N of the transfer operator is broken down to a symmetry H=ℤ_qt⊠ℤ_q={(g,g+th) : g=0,…,qt-1, h=0,…,q-1}. Let us now consider the restriction imposed by Eq. (<ref>) on the second cohomology classification of the projective representations of H, H^2( H,U(1))=ℤ_q <cit.>. To this end, given an element n∈ℤ_q, n=0,…,q-1, we choose a projective representation V_(g,g) = X^g , V_(ht,0) = Z^hn , of H, g=0,…,qt-1, h=0,…,q-1, where X and Z are such that ZX=μ XZ, μ=exp(2π i/q) (e.g. X a cyclic shift and Z a diagonal q× q matrix), and where V_(g+ht,g):=V_(g,g)V_(ht,0). It is straightforward to check that these yield q inequivalent (and thus all) projective representations, e.g. by comparing the gauge-invariant commutator ω( (t,0),(1,1))/ω( (1,1),(t,0)) = μ^n. We now have that V_(0,h't)=V_(h't,h't)V_((q-h')t,0) and thus Eq. (<ref>) reads V_(ht,0) V_(h't,h't)V_((q-h')t,0) =V_(h't,h't)V_((q-h')t,0) V_(ht,0) which using Eq.
(<ref>) is equivalent to μ^hh'nt=1, or (since h and h' are arbitrary) μ^nt = 1. This is the case whenever nt is a multiple of q, i.e., n=kq/gcd(t,q). Since at the same time, 0≤ n<q, we find that k=0,1,…,gcd(t,q)-1. That is, out of the q different SPT phases under the symmetry group H, only gcd(t,q) are allowed due to positivity constraints. §.§ Explicit construction of all twisted ℤ_t doubles and completeness of classification for ℤ_N We will now show that for cyclic groups, this classification is complete. To this end, we will first show how to obtain all twisted quantum doubles of ℤ_t by anyon condensation from a ℤ_N double, and subsequently use this construction to derive explicit PEPS models for all cases consistent with Conditions <ref>–<ref>. §.§.§ Twisted doubles of ℤ_t from anyon condensation In the following, we will describe how to construct all so-called twisted quantum doubles D^ω_r(ℤ_t) of ℤ_t by anyon condensation from a quantum double of some ℤ_N, and derive the structure of the SPT at the boundary. Here, ω_r≡ω_r(g,h,ℓ) is a so-called 3-cocycle, labelled by an element r of the third cohomology group H^3(ℤ_t,U(1))=ℤ_t, r=0,…,t-1. We will just state the corresponding results in the main text and postpone the proofs to Appendix <ref>; for more details on twisted double models and 3-cocycles, we refer the reader to the appendix or Ref. <cit.>. Let r=0,…,t-1 label an element of H^3(ℤ_t,U(1))=ℤ_t, set q=t/gcd(t,r), and let N:=qt. We now define tensors with non-zero elements M(a) := √(q/t)< g r a p h i c s >ω(a,g,h-g_0), N(a) := √(q/t)< g r a p h i c s > ω(a,g,h-g_0) for a=0,…,t-1. Here, the range of the thick vertical indices is 0,…,t-1 and that of the thin (horizontal and vertical) indices is 0,…,q-1, and h=0,…,q-1, g=0,…,t-1, and g_0=g mod q. Depending on the index, variables are understood modulo t or q. The ω≡ω_r is defined by ω(a,g,d)= exp[ (2π i r d/t^2)(a+g-((a+g) mod t)) ] , where there is no modular arithmetic in the exponential except for the mod t. Eqs.
(<ref>,<ref>) determine the amplitude of all non-zero elements of M(a) and N(a), respectively, while all tensor elements inconsistent with the labels of the indices are zero. The PEPS tensor for the model is now defined as A = t/q^2 ∑_a < g r a p h i c s > , where the inner legs correspond to the physical and the outer legs to the virtual indices. As we show in Appendix <ref>, the PEPS defined by this model describes a twisted ℤ_t double model with twist ω. As also shown in the appendix, A satisfies A^†=A=A^2, which implies that AA^†=A and thus the transfer operator is of the form < g r a p h i c s > , where the second equality holds since < g r a p h i c s > δ_ab. We thus find that the left and right fixed points of the transfer operator are again described by the same tensor. As it turns out, for any fixed a they describe an injective MPS, and thus, the boundary exhibits t symmetry broken sectors labelled by a=0,…,t-1. The tensor A has a ℤ_qt symmetry with generator S_(i_1,i_2),(j_1,j_2) = δ_i_1+1,j_1δ_i_2+1,j_2ω(1,i_1,i_2-i_1) (with i_1, j_1 mod t and i_2, j_2 mod q), which follows from the local condition < g r a p h i c s > , where U_ij≡ U(a)_ij = δ_ijω(1,a,i). Together with its “twin” equation < g r a p h i c s > , V_ij≡ V(a)_ij = e^iϕ_aω(1,a-1,i)δ_i,j-1, Eq. (<ref>) allows us to verify that the symmetry ℤ_qt×ℤ_qt in the fixed point [Eq. (<ref>)] is broken to ℤ_qt⊠ℤ_q, with generators G_1=S⊗ S^† and G_2=S^t⊗𝟙. The element n∈ H^2(ℤ_qt⊠ℤ_q,U(1))=ℤ_q labelling the virtual symmetry action – determined by the commutation relation of the virtual representations P_1(a)=U(a)V(a+1) and P_2(a)=∏_i=a^a+t-1U(i) of G_1 and G_2 – is given by n=rq/t. Overall, given t and r=0,…,t-1, and q=t/gcd(t,r), we thus have constructed a PEPS with bond dimension qt and virtual ℤ_qt symmetry which describes a twisted ℤ_t quantum double with twist r∈ H^3(ℤ_t,U(1))=ℤ_t.
In the fixed point of the transfer operator, the symmetry is broken down to ℤ_qt⊠ℤ_q, and the cocycle n∈ H^2(ℤ_qt⊠ℤ_q,U(1))=ℤ_q characterizing the virtual symmetry action in the fixed point is given by n=rq/t. §.§.§ Completeness of the Conditions <ref>–<ref> Let us now show that this construction allows us to obtain PEPS models for cyclic G for any case compatible with Conditions <ref>–<ref>. Concretely, those conditions imply that given a virtual symmetry ℤ_N in the tensor, it can be broken down to any ℤ_qt⊠ℤ_q symmetry where qt|N (“|” denotes “divides”), and furthermore, the label n of the cocycle characterizing the fixed points must be a multiple of q/gcd(q,t), n=kq/gcd(q,t). To this end, define q':=gcd(q,t), and let α=q/q', n'=n/α=kq/(α gcd(q,t))=k=0,…,q'-1. Next, let β=gcd(n',q'), and define n”=n'/β and q”=q'/β. With γ=αβ, we then have that q=γ q” and n=γ n”. Since q”|q' and q'|t, x:=t/q” is an integer. Then, the construction for the twisted double with t̃=t=xq” and twist r̃=xn” described in the preceding section yields q̃ = t/gcd(r̃,t)= xq”/gcd(x n”,xq”) = q”/gcd(n”,q”)= q” , and the 2-cocycle of the fixed point is characterized by ñ = r̃q̃/t = xn”q”/(xq”) = n”. We thus know how to create a model with parameters t, q”=q/γ, and n”=n/γ; let us denote its tensor by A^i_k_1,k_2,k_3,k_4, with k_s=0,…,q”t-1, and the generator of the ℤ_q”t symmetry by S; w.l.o.g., we choose a basis |k_s) such that S=∑ |k+1)(k|. We will now show how from this model, we can create a PEPS with parameters t, q, and n, and overall symmetry ℤ_M with M=qt. (In a second step, we will then generalize this to any ℤ_N with qt|N.)
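The parameter arithmetic of this reduction is easy to sanity-check numerically. The sketch below (function and variable names are ours) scans all admissible labels n = kq/gcd(t,q) and verifies that the reduction indeed reproduces q̃ = q” and ñ = n”:

```python
from math import gcd

def check_reduction(t, q, n):
    """Reproduce the reduction (t, q, n) -> (t, q'', n'') from the text and
    verify that the twisted-double construction with t~ = t, r~ = x n''
    yields q~ = q'' and n~ = n''."""
    qp = gcd(q, t)                          # q'
    alpha = q // qp                         # alpha = q/q'
    npr = n // alpha                        # n' = n/alpha = k
    beta = gcd(npr, qp)                     # gcd(0, q') = q' covers the k = 0 case
    npp, qpp = npr // beta, qp // beta      # n'', q''
    gamma = alpha * beta
    assert q == gamma * qpp and n == gamma * npp   # q = gamma q'', n = gamma n''
    x = t // qpp                            # q'' | q' | t, so x is an integer
    r_twist = x * npp                       # twist r~ = x n'' of the Z_t double
    q_twist = t // gcd(r_twist, t)          # q~ = t/gcd(r~, t); gcd(0, t) = t
    assert q_twist == qpp                   # q~ = q''
    assert r_twist * q_twist == npp * t     # n~ = r~ q~/t = n''
    return q_twist, r_twist

# scan all admissible (t, q, n) with n = k q/gcd(t,q), k = 0, ..., gcd(t,q)-1
for t in range(1, 13):
    for q in range(1, 13):
        for k in range(gcd(t, q)):
            check_reduction(t, q, k * (q // gcd(t, q)))
```

The scan passes for all admissible triples, confirming that the positivity-allowed cocycle labels are exactly those reachable from twisted ℤ_t doubles.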
To this end, we extend the bond space to an M=γ (q”t)-dimensional space, k_s↦(ℓ_s,k_s), ℓ_s=0,…,γ-1, and construct the new tensor à by tensoring each virtual index of A independently with an equal weight superposition of all |ℓ_s), i.e., Ã^i_(ℓ_1,k_1),(ℓ_2,k_2),(ℓ_3,k_3),(ℓ_4,k_4) = A^i_k_1,k_2,k_3,k_4 As the generator of the ℤ_M symmetry we choose the regular representation in ℤ_M with basis |ℓ q”t+k), i.e., S̃:|ℓ,k)↦|ℓ+⌊ k/(q”t)⌋,k+1); since each ℓ_s index is in a uniform superposition ∑ |ℓ_s), S̃ acts exactly as S on the non-trivial degrees of freedom k_s of Ã, while leaving ∑ |ℓ_s) invariant. The resulting tensor has thus a ℤ_M symmetry which is broken to ℤ_γ q”t⊠ℤ_γ q”=ℤ_qt⊠ℤ_q in the fixed point, with q:=γ q”. The element n∈ H^2(ℤ_γ q”t⊠ℤ_γ q”,U(1))=ℤ_γ q” is determined by the commutation phase of the virtual representations of the two generators S̃⊗S̅̃̅ and S̃⊗𝟙 in the fixed point MPS, which equal those of S⊗S̅ and S⊗𝟙, and which is thus exp[2π i n/(γ q”)] = exp[2π i n”/q”]; we therefore have n≡γ n”, as claimed. To obtain the most general case, we still need to show how to go from a ℤ_M to a ℤ_N symmetry (with M|N) which in the fixed point is broken down at least to ℤ_M×ℤ_M, and possibly further. To this end, let σ:=N/M, denote the original tensor again by A^i_k_1,k_2,k_3,k_4 with k_s=0,…,M-1, extend the indices as (k_s,ℓ_s) with ℓ_s=0,…,σ-1, and define the new tensor Ã^i_(k_1,ℓ_1),(k_2,ℓ_2),(k_3,ℓ_3),(k_4,ℓ_4) = A^i_k_1,k_2,k_3,k_4δ_ℓ_1=ℓ_2=ℓ_3=ℓ_4 , where δ_ℓ_1=ℓ_2=ℓ_3=ℓ_4=1 if all ℓ_s are equal, and zero otherwise. Further, define S̃ = S^1/σ⊗∑_ℓ=0^σ-1|ℓ+1⟩⟨ℓ| (with addition modulo σ). Clearly, S̃ generates a representation of ℤ_N (which is faithful if S was faithful).
Further, the additional degrees of freedom labelled by ℓ yield two independent GHZ states (i.e., correlated block-diagonal structures) in ket and bra level in the fixed point, which are cyclically permuted by the action of S̃: The ℤ_N×ℤ_N symmetry is thus at least broken to ℤ_M×ℤ_M, with the model in each symmetry broken sector described by the original PEPS, and the ℤ_M symmetry action generated by S̃^σ = S⊗𝟙. Together, this concludes the construction of an explicit example for all cases consistent with Conditions <ref>–<ref>. § EXAMPLE: CONDENSATION OF D(ℤ_4) AND THE DOUBLE SEMION MODEL In the following, we will discuss some examples for anyon condensation in doubles D(ℤ_N). As a warm-up, we will start with the Toric Code model D(ℤ_2), and then discuss in detail the possible condensations in D(ℤ_4), where we will see how condensing a dyon – corresponding to a non-trivial SPT at the boundary – can give rise to the doubled semion model which cannot be described as a double model of a group. Given the double D(ℤ_N), its excitations gα are labelled by group elements g=0,…, N-1 and irreps α=exp(2π i k/N), k=0,…,N-1, where α(g)≡α^g. (We will again write the group additively with neutral element 0.) The self-statistics for a half-exchange of two gα particles is α^g, and the phase acquired through the full exchange of gα and hβ is given by α^hβ^g. Fusing particles gα and hβ results in g+hαβ. As derived in Sec. <ref>, in order for a particle gα to condense, it must have bosonic self-statistics, i.e., α^g=1. This leads to the identification of gα with the vacuum 01, and subsequently to the identification of all pairs hβ and h+gβα. Moreover, all particles hβ which braid non-trivially with gα, α^hβ^g≠1, become confined. §.§ Warm-up: Condensation of the Toric Code Let us start by considering the Toric Code model D(ℤ_2).
It has four particles: The vacuum ∅ = 01, the magnetic particle m=11, the electric particle e=0-1, and the fermion f≡e×m = 1-1; they can be visualized in a two-dimensional grid with g and α as row and column labels, respectively, Fig. <ref>a. e and m (marked red) have bosonic self-statistics α^g=1 and can therefore condense. Fig. <ref>b illustrates the condensation of the e particle: e is identified with the vacuum (indicated by connected dots), and since both m and f have non-trivial mutual statistics α^hβ^g with e (as α=-1, g=0 for e and h=1 for m, f), they become confined (indicated by grayed out boxes). Let us now study the condensation in terms of the symmetry of the transfer operator. We have G=ℤ_2×ℤ_2. The possible symmetry breaking patterns H=K⊠ L are given by H=ℤ_2⊠ℤ_2, H=ℤ_2⊠ℤ_1, and H=ℤ_1⊠ℤ_1, respectively. This is shown in Fig. <ref>c, where the horizontal layers are arranged according to their “ket-bra symmetry breaking gap” |K|/|L| corresponding to the number of anyons in the model, and the arrows point in the direction of decreased symmetry. Let us now consider the three possibilities case by case. * H=ℤ_2⊠ℤ_1: This is the topological case. On the one hand, we have, following Cond. <ref>, that ⟨gα⊗gα⟩≠0 for all g and α, since (g,g)∈ H and the restriction of (α,α̅) to H is (α,α̅)((h,h))=α(h)α̅(h)=1=ν_(h,h) [as H^2( H,U(1)) is trivial]; that is, all particles gα are unconfined. On the other hand, ⟨gα⊗g'α'⟩=0 whenever either g≠g' [as (g,g')∉ H] or α≠α' [as then (α,α̅')((h,h))≢1]; that is, no particles are condensed. * H=ℤ_1⊠ℤ_1: This is the trivial phase in which e=0-1 is condensed (and thus m=11 is confined). Firstly, since all symmetries are broken, ⟨gα⊗g'α'⟩=0 whenever g≠0 or g'≠0, which implies that m and f=e×m=1-1 are confined. On the other hand, ⟨0α⊗0α'⟩≠0, since (0,0)∈ H and (α,α̅') restricted to H is trivially the identity, and thus equals ν_(0,0). * H=ℤ_2⊠ℤ_2: This is another trivial phase, in which m=11 is condensed and e=0-1 is confined.
Firstly, note that while H^2(ℤ_2⊠ℤ_2,U(1))=ℤ_2, we have that qt=q=2 and gcd(t,q)=1, i.e., only the trivial cocycle is allowed due to positivity. We have that ⟨g1⊗g'1⟩≠0 since (g,g')∈ H and (1,1)=ν_(g,g'), implying that m is condensed. On the other hand, ⟨gα⊗g'α'⟩=0 whenever α≠1 or α'≠1, since (α,α̅')((g,g'))=α(g)α̅'(g')≢1 for some (g,g')∈ H, i.e., e and f are confined. §.§ Condensation of D(ℤ_4) Let us now turn to our second example, the double D(ℤ_4). The anyon table is given in Fig. <ref>a; here, we find three bosons (marked red), namely 2e, 2m, and the dyon d=2e×2m. It is straightforward to work out the particle tables obtained by condensation: While condensing 2e or 2m leads to two inequivalent toric codes, condensing d – as shown in Fig. <ref>b – leads to the so-called double semion model, with particles s and s̅, which have (anti-)semionic self-statistics α^g=± i, and which fuse with themselves to the vacuum and with each other to the non-trivial boson b=0-1≡21. The double semion model is not a regular double model but can be obtained by twisting D(ℤ_2) with a non-trivial 3-cocycle of ℤ_2, and is thus the simplest example of a twisted model obtained by condensing a regular double. Let us now study the possible symmetry breaking patterns H of D(ℤ_4), shown in Fig. <ref>c. We find six possibilities. * H = ℤ_4⊠ℤ_1. This is the D(ℤ_4) phase; the discussion is analogous to case 1 for the Toric Code in Sec. <ref> above. * H = ℤ_2⊠ℤ_1. This is a toric code phase in which the 2e particle has been condensed. We have that ⟨gα⊗g'α'⟩=0 unless (g,g')∈ H, i.e., g=g'=0 or g=g'=2, which implies that 1* and 3* are all confined, and 2* is uncondensed. On the other hand, ⟨gα⊗gα'⟩≠0 iff α(g)α̅'(g)=1 (as there is only a trivial cocycle), and thus 0-1 is condensed, and 0i≡0-i and 21≡2-1 form the electric and magnetic particle of the Toric Code, respectively. * H = ℤ_4⊠ℤ_2.
This is the first case with non-trivial H^2( H,U(1))=ℤ_2, and thus exhibits two distinct condensed phases with identical symmetry breaking pattern. The phase with trivial cocycle corresponds to a Toric Code phase in which the 2m particle has been condensed. First, ⟨gα⊗g'α'⟩=0 whenever α^hα̅'^h'≢1 for some (h,h')∈ H, i.e. unless α=α'=±1, and thus *± i are confined, while *-1 is not condensed. On the other hand, ⟨g±1⊗g'±1⟩≠0 iff (g,g')∈ H: Thus, 21 condenses, and 11≡31 and 0-1≡2-1 form the new magnetic and electric particles, respectively. Let us now turn towards the phase with non-trivial cocycle. As we will see, it corresponds to a double semion model with the condensation pattern indicated in Fig. <ref>. It is straightforward to check that for the non-trivial cocycle of H^2( H,U(1))=ℤ_2, ν_(g,g')((h,h'))=i^gh(-i)^g'h' (e.g., by checking it on the generators). Then, ⟨gα⊗g'α'⟩=0 whenever α^hα̅'^h'≢ν_(g,g')((h,h')) for some (h,h')∈ H, i.e. unless α=± i^g and α'=± i^g' (with the identical choice of ±). This implies that all g±i^g+1 are confined, and only anyons gi^g can condense. Since ⟨gα⊗01⟩≠0 in addition requires (g,0)∈ H, we find that it is 2-1 which condenses. * H = ℤ_1⊠ℤ_1. This is a trivial phase where all e particles have been condensed; it is fully analogous to case 2 for the Toric Code in Sec. <ref>. * H = ℤ_2⊠ℤ_2. This is a trivial phase where 2e and 2m have been condensed. We have that ⟨gα⊗g'α'⟩=0 unless g,g'∈{0,2}, i.e., 1* and 3* have been confined. It is also zero unless α^gα̅'^g'=1 for all g,g'∈{0,2} (there is only the trivial cocycle), and thus, *± i is confined as well. For all remaining cases, ⟨gα⊗g'α'⟩≠0, and thus, all other particles are condensed. * H = ℤ_4⊠ℤ_4. This is a trivial phase where all m particles have been condensed; it is fully analogous to case 3 for the Toric Code in Sec. <ref>. Note that there is again only the trivial cocycle.
§.§ Numerical study In the following, we provide numerical results on different topological phases which can be obtained through condensation from a D(ℤ_4) double model, and the transitions between them. To this end, we have constructed a three-parameter family interpolating between different fixed point models, including the D(ℤ_4) phase, both Toric Code phases, the double semion phase, and a trivial phase. Here, we will limit ourselves to a brief overview of the results; an in-depth discussion of the specific wavefunction family considered as well as the numerical methods used, together with additional results, will be presented elsewhere <cit.>. Let us start by introducing the family of tensors used: A(θ_DS, θ_TC,ℤ_2, θ_TC)=< g r a p h i c s > . Here, the four outside legs correspond to the virtual indices, while the four inside legs are the physical indices. The rings (and the green dots) describe MPOs all of which mutually commute: * The outermost black ring is the MPO of the D(ℤ_4) quantum double, ∑_g U_g^⊗ 4, U_g=X^g, with X the generator of the regular representation of ℤ_4. * The red ring describes a deformation towards the MPO projector for the ℤ_4⊠ℤ_2 double semion model, where < g r a p h i c s > =(X^2)^i(Z^2)^i+j, i,j=0,1 , with Z the generator of the diagonal representation of ℤ_4, and < g r a p h i c s > = diag(cosh(θ_DS/2), sinh(θ_DS/2)). For θ_DS=∞ (and θ_TC,ℤ_2=θ_TC=0), this gives the double semion MPO, while for θ_DS=0, it acts trivially. * The blue ring describes a deformation towards the H=ℤ_2⊠ℤ_1 Toric Code, where < g r a p h i c s > = δ_ijexp((-1)^iθ_TC,ℤ_2Z^2), i,j=0,1 . For θ_TC,ℤ_2=∞, this projects the D(ℤ_4) MPO onto a ℤ_2 subgroup and thus yields the Toric Code, while for θ_TC,ℤ_2=0, it acts trivially.
* Green circles < g r a p h i c s > =exp(θ_TCX^2) describe a deformation towards an H=ℤ_4⊠ℤ_2 Toric Code phase: For θ_TC=∞, this enhances the symmetry of the D(ℤ_4) MPO to H=ℤ_4⊠ℤ_2, while for θ_TC=0, it once again acts trivially. The two Toric Code constructions correspond to the two ways of embedding a “normal” ℤ_2⊠ℤ_1⊂ℤ_2×ℤ_2 Toric Code into a ℤ_4×ℤ_4 symmetry described in Sec. <ref>. Note that since all projectors commute with each other, their order does not matter. We have studied the phase diagram of this family using infinite Matrix Product States (iMPS) to approximate the fixed point of the transfer operator, by iteratively applying the transfer operator and truncating the bond dimension to some given χ, keeping translational symmetry. From the resulting fixed point iMPS, we can then [using Eq. (<ref>)] immediately compute the order parameters for condensation, ⟨gα⊗01⟩, and deconfinement, ⟨gα⊗gα⟩, respectively, allowing us to distinguish the different topological phases and map out the phase diagram. The condensation and deconfinement order parameters also allow us to study the nature of the phase transitions. Notably, this gives us non-zero order parameters, and thus critical exponents β, for both sides of a condensation-driven phase transition: in the uncondensed phase, the deconfinement order parameter is non-zero, while in the condensed phase, the condensate fraction is non-zero.
Note that we use the string operator corresponding to excitations in the fixed point wavefunction to measure the order parameter throughout the phase diagram; this is in exact analogy to the use of order parameters in conventional phase transitions. In addition to that, we can further characterize the phase transition by looking at the scaling of the correlation length ξ, which we can extract either from the fixed point iMPS, or from the finite-size transfer operator and a finite size scaling (note though that this length need not be equal to the physical correlation length, as it includes e.g. certain anyon-anyon correlation functions). In order to understand the structure of our three-parameter family, Eq. (<ref>), we have computed the different condensation and deconfinement order parameters along the three hyperplanes for which one θ_∙=0; the resulting phase diagram is shown in Fig. <ref>. We find that the system exhibits all phases encoded by the three MPOs in Eq. (<ref>), as well as a trivial phase with H=ℤ_2⊠ℤ_2, which can be understood analytically in the limit where two of the θ_∙ are taken to infinity. As expected, the family thus exhibits phase transitions related to the condensation of anyons from D(ℤ_4) to Toric Code and Double Semion, and from either to the trivial phase; more notably, though, the family also exhibits direct phase transitions between the Toric Code and the Double Semion model, which are not related by anyon condensation. We have studied a number of these phase transitions in more detail; in the following, we illustrate our findings through a few examples, and refer the reader for a more detailed analysis to Ref. <cit.>. First, we have studied the phase transitions in the θ_TC,ℤ_2=0 plane. Fig. <ref> shows the order parameters along line (I) in Fig. <ref>, which describes a D(ℤ_4) to Toric Code transition.
Since we have an analytical mapping to the 2D Ising model for this line, it can serve as a benchmark, and we find indeed very good agreement with the analytic predictions. Further study suggests the existence of an analytical mapping for the entire θ_TC,ℤ_2=0 plane; for the θ_DS=0 plane, the critical exponents still match those of the 2D Ising model, though the existence of an exact mapping is unclear. On the other hand, the transitions in the θ_TC=0 plane seem to belong to a different universality class. As an example, Fig. <ref> shows the transition along the line (II) in Fig. <ref>, for which we find critical exponents ν=1.05(7) for the correlations in the fixed point of the transfer operator, and β_+=0.04(1) and β_-=0.23(4) for the anyon condensation and deconfinement order parameters, respectively; notably, the critical exponent β is different on the two sides of the transition. We observe that the critical exponents β_± change continuously as we move along the transition line in the θ_DS=1 plane towards the θ_TC,ℤ_2 plane, ultimately reaching β_±=1/8; a detailed discussion will be given elsewhere <cit.>. Finally, let us turn towards the direct Toric Code – Double Semion transition, previously only studied with exact diagonalization and on quasi-1D systems <cit.>, whose nature is yet to be resolved. As one would assume that interactions generally give rise to condensation of excitations, one expects that an interpolation between the two models would typically drive the Toric Code through some condensation transition, either into a trivial or a more complex phase [such as the D(ℤ_4) model], and from there through another condensation-driven transition to the Double Semion model, and a direct transition would at least require some fine-tuning of interactions. We can identify one such fine-tuned transition between the (H=ℤ_4⊠ℤ_2) Toric Code and Double Semion phase in our phase diagram in the θ_TC,ℤ_2=0 plane at (θ_DS^c,θ^c_TC)= ((1/2)ln(1+√(2)),(1/2)ln(1+√(2))); this is a
multi-critical point adjacent to all four phases, which goes away as one perturbs away from θ_TC,ℤ_2=0, separating the Toric Code from the Double Semion phase. Fig. <ref>a shows the transition through this point along line (III) in Fig. <ref>, and we find that it is a second order phase transition, driven by two “counterpropagating” condensation and de-condensation transitions, thus preserving the total number of anyons; like all transitions in that plane, it is again in the 2D Ising universality class. Note however that this is a phase transition between two phases with an identical H=ℤ_4⊠ℤ_2 symmetry at the boundary, and therefore corresponds to an SPT phase transition at the boundary in the absence of symmetry breaking; it can therefore only be detected by string order parameters rather than conventional local order parameters. Note however that it has been shown that in certain cases string order parameters can be mapped to local order parameters through a duality mapping <cit.>. As it turns out, there is another way of obtaining a direct phase transition between the H=ℤ_4⊠ℤ_2 Toric Code and Double Semion phase, namely by interpolating between the on-site transfer operators A^† A of the two fixed point models, rather than the tensors A themselves. Since such an interpolation 𝔼(θ)=θ A_0^† A_0 + (1-θ) A_1^† A_1 yields a positive semidefinite 𝔼(θ)≥0, we can construct a continuous path A(θ) of PEPS tensors by decomposing 𝔼(θ)=A(θ)^† A(θ). This interpolation yields again a direct transition between the two phases, and a thorough analysis of the order parameters, shown in Fig. <ref>b, gives compelling evidence that the phase transition is first order. Thus, understanding the nature of a generic Toric Code – Double Semion phase transition (given that it can even be realized in a robust way) requires further study.
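The decomposition step of the 𝔼(θ) interpolation is straightforward; a minimal sketch (function name ours, with the tensors flattened to matrices mapping virtual to physical-times-virtual indices) is:

```python
import numpy as np

def interpolated_tensor(A0, A1, theta):
    """Interpolate the on-site objects E(theta) = theta A0^+ A0 + (1-theta) A1^+ A1
    and return a tensor A(theta) with A(theta)^+ A(theta) = E(theta).
    A0, A1 are PEPS tensors flattened to matrices."""
    E = theta * A0.conj().T @ A0 + (1 - theta) * A1.conj().T @ A1
    # E is Hermitian and positive semidefinite by construction, so a
    # decomposition E = A^+ A exists; one choice is the Hermitian square root.
    w, V = np.linalg.eigh(E)
    w = np.clip(w, 0.0, None)             # clip tiny negative rounding errors
    A = (V * np.sqrt(w)) @ V.conj().T     # A = sqrt(E), Hermitian
    assert np.allclose(A.conj().T @ A, E)
    return A
```

Any other decomposition (e.g. a Cholesky factor) differs from this choice only by an isometry on the physical index, which does not affect the state's transfer operator.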
In this context, it is an interesting question whether imposing specific symmetries on the system allows one to generically obtain a direct transition between these two phases, rather than requiring fine-tuning of the interactions. § CONCLUSIONS AND OUTLOOK In this paper, we have studied anyon condensation in Projected Entangled Pair State models, and have derived conditions governing the condensation and confinement of anyons. In order to do so, we have related the behavior of anyons to string order parameters and thus symmetry protected order in the fixed point of the transfer operator, that is, the entanglement spectrum of the system. We have derived four conditions: Two characterize the possible symmetry breaking and SPT phases consistent with positivity of the entanglement spectrum, while the other two relate these symmetry breaking and SPT patterns to the condensation and confinement of anyons. Specifically, we found that there are topological phases which cannot be distinguished through their symmetry breaking pattern, but solely through the SPT structure of their entanglement spectrum, and which describe phases not related by anyon condensation. For the case of cyclic groups, this classification allowed us to construct all twisted doubles by condensing non-twisted double models. We have exemplified our discussion with the ℤ_4 quantum double, which can give rise to both Toric Code and Double Semion phases, which form an example of phases with identical symmetry breaking pattern but inequivalent SPT order in the entanglement spectrum. We have also provided numerical results for the phase diagram and the phase transitions of the model. To this end, we have used that the concepts developed in this paper allow us to measure order parameters for condensation and deconfinement and thus extract critical exponents for the order parameter.
In particular, we found that this model can realize direct phase transitions between the Toric Code and Doubled Semion models which are not related by anyon condensation, and for which we found both first and second order transitions. A natural question is the interpretation of symmetry broken and SPT phases in the fixed point of the transfer operator in terms of physical properties of the entanglement spectrum and/or edge physics <cit.>: Symmetries U_gρ U_g^†=ρ imply that the entanglement spectrum ρ is block-diagonal, i.e., it originates from a symmetric Hamiltonian. An additional single-layer symmetry U_gρ=ρ implies that the density operator must live in the trivial irrep sector, while a broken symmetry and the resulting dependence on distant boundary conditions implies the existence of a non-local anomalous term in the entanglement Hamiltonian which depends on distant boundaries and encodes a topological superselection rule <cit.>. The implications of SPT order on the entanglement spectrum, on the other hand, are much less clear, and it would be very interesting to identify the features of the entanglement spectrum which would allow one to distinguish e.g. Toric Code and Doubled Semion order. It is likely that our results generalize to the case of non-abelian groups, and beyond that to general Matrix Product Operator symmetries <cit.>. An obstacle is that the one-to-one correspondence between string order parameters and SPTs breaks down <cit.>: While it is known that non-abelian SPTs are still characterized by group cohomology, we have used SPT phases to classify the behavior of string order parameters rather than the other way around, and are thus looking for a classification of the behavior of non-abelian string order parameters instead.
Let us note, however, that a major simplification might come from the fact that for non-abelian double models, the irrep at the end of a string must be an irrep of its normalizer, so it might well be possible that the problem can be abelianized to an extent which allows one to yet again relate it to SPT order. A related question is the generalization of our results to the case of non-hermitian transfer operators, or even PEPS which encode a corresponding global symmetry in a non-trivial way. In that case, string order parameters are evaluated between non-identical left and right fixed points, and the analogy to expectation values in physical states, and thus the correspondence of string order parameters with SPT phases, breaks down; for instance, it is not even clear whether the projective symmetry representation for pairs of left and right fixed points must be equal. Finally, maybe the most important question, which goes far beyond the scope of this work, is a rigorous justification of our main technical assumption, namely that the structure of the fixed point space of a transfer operator for a PEPS in a gapped phase is well described by Matrix Product Operators. While this is well motivated by the short-range nature of the correlations in the system, and is well-tested numerically through numerous PEPS simulations using contraction schemes which model the boundary as an MPO, a rigorous proof has so far remained elusive. A better understanding of this question would lead to a number of important insights regarding the structure of gapped phases, the nature of the entanglement spectrum, or the convergence of numerical methods, just to name a few. We acknowledge helpful conversations with M. Barkeshli, N. Bultinck, M. Marien, B. Sahinoglu, C. Xu, and B. Yoshida. This work has received support by the EU through the FET-Open project QALGO and the ERC Starting Grant No.
636201 (WASCOSYS), the DFG through Graduiertenkolleg 1995, and the Jülich Aachen Research Alliance (JARA) through JARA HPC grants jara0092 and jara0111. § CONSTRUCTION OF EXPLICIT ENDPOINTS In this appendix, we provide an explicit construction for all anyons gα which are either condensed or deconfined following Condition <ref>, i.e. α|_ H=ν_ g. To this end, we proceed in two steps: First, we generalize the construction of Eq. (<ref>) to obtain R_α which transform as irreps α of G rather than only H. Second, we show that for the case of condensation, g=(g,e) and α=(α,1), and for the case of deconfinement, g =(g,g) and α=(α,α̅), these R_α allow us to construct actual anyons, i.e., single-layer endpoints, for which ⟨gα⟩≠0; this is exactly what is also required in Section <ref>, where we derive the anyon condensation rules from Conditions <ref>–<ref>. §.§ Construction of R_α for irreps of G Let g∈ H, and α an irrep of G such that α|_ H=ν_ g. The idea of Eq. (<ref>) was to use injectivity of the MPS tensor M to define R_α such that < g r a p h i c s > . The tensor M describes one symmetry-broken sector (with residual symmetry group H) only. In order to construct some R_α which transforms as an irrep of G, we therefore first need to construct an MPS which does not break the symmetry. To this end, choose representatives f_𝔞∈ G of every symmetry-broken sector 𝔞∈ G/ H, such that G=⊕_𝔞∈ G/ H f_𝔞 H ; by starting from the generators of the quotient group G/ H, it is possible to pick f_𝔞 such that f_𝔞𝔟= f_𝔞 f_𝔟. Now define < g r a p h i c s > and ℳ^i = ⊕_𝔞 M^i_𝔞; clearly, ℳ^i is block-injective (i.e., injective on the space of block-diagonal matrices). Given k∈ G, there is a unique decomposition k= f_𝔞 h, h∈ H, and thus < g r a p h i c s > ; that is, the virtual action of U_ k is 𝒱_ k:=(⊕ V_ h)Π_𝔞 , where Π_𝔞 permutes the blocks by virtue of 𝔟↦𝔞^-1𝔟; note that 𝒱_ k forms a projective representation of G (the projective representation induced by V_ h).
Now define 𝒲 := ⊕_𝔟α( f_𝔟) V_ g , and choose R_α such that < g r a p h i c s > – this is always possible since 𝒲 is block-diagonal and ℳ is block-injective. (We use a thick line to indicate the larger “direct sum” virtual space.) We now have that 𝒱_ k𝒲𝒱_ k^† = [(⊕ V_ h)Π_𝔞] [⊕_𝔟α( f_𝔟) V_ g] [Π_𝔞^†(⊕ V^†_ h)] = [(⊕ V_ h)] [⊕_𝔟'α( f_𝔞𝔟') V_ g] [(⊕ V_ h^†)] = ν_ g( h)[⊕_𝔟'α( f_𝔞𝔟') V_ g] = ν_ g( h)α( f_𝔞) 𝒲 = α( k) 𝒲 , where we have used ν_ g( h)=α( h). It immediately follows that < g r a p h i c s > i.e., R_α indeed transforms as the irrep α of G. §.§ Explicit construction of condensed anyons Let us now show that we can explicitly construct condensed anyons gα: Given an R_(α,1) for which ⟨(g,e)(α,1)⟩≠0, we show how to construct a single-layer anyon (i.e., an endpoint to a string of g's transforming like α) with non-zero expectation value ⟨gα⊗e1⟩≠0, where the endpoint in the bra layer is trivial. To this end, we start by decomposing R_(α,1) = ∑ X^s_α⊗Y̅^s_1 , where X_α and Y_1 transform like α and trivially, respectively. Since R_(α,1) gives a non-zero expectation value ⟨(g,e)(α,1)⟩≠0, there must be at least one s_0 for which this also holds; we thus obtain a separable endpoint X^s_0_α⊗Y̅^s_0_1≡ X_α⊗Y̅_1 with ⟨(g,e)(α,1)⟩≠0; however, Y_1 can still be different from the identity. In order to make the endpoint in the bra layer entirely trivial, we use that < g r a p h i c s > since A is G-injective (and C is G-invariant), and thus, < g r a p h i c s > with the endpoint < g r a p h i c s > for the condensed anyon gα. Note that a simple application of Cauchy-Schwarz yields that any condensed anyon is also deconfined. §.§ Explicit construction of deconfined anyons Similar to the preceding section, in this scenario we start from some R_(α,α̅) s.th. ⟨(g,g)(α,α̅)⟩≠0, corresponding to a deconfined anyon gα, and want to construct identical endpoints Z_α for the ket and bra layer such that ⟨gα⊗gα⟩≠0.
We again start by decomposing R_(α,α) = ∑ X^s_α⊗Y̅^s_α . Let us define the shorthand ⟨ X_α⊗ Y_α⟩:= ⟨gα⊗gα⟩, where S⊗S̅ has endpoints X⊗ Y. Now pick s_0 such that ⟨ X_α⊗Y̅_α⟩≡⟨ X^s_0_α⊗Y̅^s_0_α⟩≠ 0. If ⟨ X_α⊗X̅_α⟩≠ 0 or ⟨ Y_α⊗Y̅_α⟩≠ 0, we can choose Z_α:=X_α (or Z_α:=Y_α), and have found the desired non-vanishing identical ket and bra endpoint ⟨ Z_α⊗ Z_α⟩≠ 0. Let us now consider the case where both are zero. Let ϕ be such that ⟨ X_α⊗ e^-iϕY̅_α⟩>0, and define Z_α := X_α+e^iϕY_α. Then, ⟨ Z_α⊗Z̅_α⟩ = ⟨ X_α⊗X̅_α⟩ +⟨ Y_α⊗Y̅_α⟩+⟨ X_α⊗ e^-iϕY̅_α⟩ +⟨ e^iϕ Y_α⊗X̅_α⟩= 2Re ⟨ X_α⊗ e^-iϕY̅_α⟩ > 0 , thus again yielding identical endpoints Z_α for ket and bra with non-vanishing expectation value. § GENERALIZATION TO DRESSED ENDPOINTS Let us now show that the no-go results of Conditions <ref>–<ref> derived in Sec. <ref> equally hold for general endpoints; the explicit construction for any endpoint compatible with all the conditions has already been provided in Appendix <ref>. Let us recall that a general anyon is of the form < g r a p h i c s >. For deriving the no-go results, we generally need to consider joint ket-bra objects; we thus define < g r a p h i c s > < g r a p h i c s >. The generalization of Eq. (<ref>), describing a general string order parameter for a ket and bra anyon pair, evaluated in a pair of fixed points ρ_ c and ρ_ c', is thus of the form O_ c^ c':= < g r a p h i c s > . Just as in Sec. <ref>, a central role will be played by the (mixed) transfer operator 𝔼̃_ c^ c'; we will therefore analyze its structure in detail in the following.
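The phase-alignment step used above for the deconfined endpoints (choosing ϕ so that the cross term becomes real and positive, and setting Z_α = X_α + e^iϕ Y_α) only relies on the expectation value behaving like a Hermitian sesquilinear form in the two endpoints. The following is a minimal numerical sketch of that argument; the 2×2 Hermitian form and the vectors x, y are toy stand-ins (our assumption), not actual PEPS endpoints:

```python
import numpy as np

# Hermitian "overlap form": E(X, Y) models the string-order expectation value
# with ket endpoint X and bra endpoint Y-bar; it is linear in X, antilinear
# in Y, and satisfies E(Y, X) = conj(E(X, Y)).
w = np.exp(0.7j)
H = np.array([[0, w], [np.conj(w), 0]])        # a Hermitian 2x2 matrix

def E(x, y):
    return np.conj(y) @ H @ x

x = np.array([1.0, 0.0])   # plays the role of X_alpha
y = np.array([0.0, 1.0])   # plays the role of Y_alpha

assert abs(E(x, x)) < 1e-12 and abs(E(y, y)) < 1e-12   # both diagonal overlaps vanish
assert abs(E(x, y)) > 0                                # ... but the cross term does not

# phase alignment: pick phi with e^{-i phi} E(x, y) > 0, set z = x + e^{i phi} y
phi = np.angle(E(x, y))
z = x + np.exp(1j * phi) * y

val = E(z, z)
assert abs(val.imag) < 1e-12 and val.real > 0          # strictly positive expectation value
assert np.isclose(val.real, 2 * abs(E(x, y)))          # equals 2 Re <X (x) e^{-i phi} Y-bar>
```

The assertions confirm that even when both "diagonal" overlaps vanish, the phase-aligned combination has a strictly positive expectation value.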
§.§ Structure of 𝔼̃_ c^ c' The major complication as compared to the discussion in Section <ref> is that for an M≡ M^ c describing an injective MPS ρ which is a fixed point of the transfer operator, the tensor D:= < g r a p h i c s > describing the MPS obtained after applying 𝕋 only needs to be proportional to ρ, with a possibly size-dependent proportionality constant. This has two consequences <cit.>: First, D can consist of several diagonal blocks D_s,s, s=1,…,S, each of which describes a copy of the original MPS, i.e., D_s,s=M̃^i_s, where each M̃^i_s is equal to M^i up to a block-dependent gauge transform, M̃^i_s = γ_s X_s M^i X_s^-1 with some left-invertible X_s. Second, there can in addition be off-diagonal blocks D_s,t coupling blocks s and t, which, however, must be upper triangular (up to reordering of blocks), i.e., D_s,t≡ 0 if s>t. This implies that any product D^i_1D^i_2⋯ D^i_L can contain each off-diagonal block D^i_s,t at most once, and in particular contains only a finite number of off-diagonal blocks. W.l.o.g., we will assume that D is normalized such that the largest |γ_s|=1, with the normalization of the M^i as before. Let us now consider what this implies when taking large powers (𝔼̃^ c'_ c)^K, K→∞. In that case, there will be large contiguous blocks of the form F_s:=∑ D_s^i, c'⊗M̅^i, c= γ_s∑ X_s M^i, c'X_s^-1⊗M̅^i, c (specifically, there will be at least one block with length at least K/S), which will therefore converge to the fixed point of the corresponding original transfer operator 𝔼^ c'_ c, up to normalization and a gauge transform. In particular, this implies for c≠ c' that (𝔼̃_ c^ c')^K decays exponentially in K. For c= c', pick the largest contiguous block F_s within (𝔼̃_ c^ c)^K, and notice that it converges to a rank-1 projector onto its non-degenerate leading eigenvectors, which therefore transform trivially under the group action.
Since 𝔼̃_ c^ c commutes with the symmetry action, further applications of 𝔼̃_ c^ c to this rank-1 projector do not change the irrep label of the fixed point. (The symmetry actions on the different blocks are related by the corresponding gauge transform X_s and label irreducible representations in the same way; note that we only care about the symmetry action on the bond degree of freedom of the MPS to the extent they are related to order parameters, i.e., the symmetry action on the “physical” degrees of freedom.) (𝔼̃^ c_ c)^K will generally be a sum over terms in which different blocks F_s with |γ_s|=1 converge to their fixed point, and thus, (𝔼̃^ c_ c)^K→∑_iσ_R^iσ_L^i, where all σ_∙^i transform trivially under the symmetry. (Though it is not relevant in what follows, it is worth noting that terms containing more than one block F_s which converges to the fixed point cannot appear in any expectation value, since their weight grows linearly with the system size, whereas in the normalization only single blocks can show up.) §.§ Application to dressed endpoints Let us verify that the modified expectation value Eq. (<ref>) satisfies the same Conditions as before. Condition <ref> is only about the symmetry breaking pattern (and does not involve anyon strings), and is thus entirely unaffected. The off-diagonal terms in the expectation value Eq. (<ref>) again vanish, since the corresponding large power of the off-diagonal transfer operator will decay as the largest eigenvalue of the mixed transfer operator 𝔼^ c_ c', and thus faster than the diagonal terms, as we will see. Also, from Eq. (<ref>) one can immediately infer that the expectation value is independent of the fixed point chosen, using again the same argument as in Eq. (<ref>). Next, in analogy to Eq. (<ref>), let us consider what happens when we separate a pair of anyons. If g∉ H, we again obtain a mixed transfer operator and thus the corresponding expectation value vanishes, yielding Condition <ref>.
If g∈ H, we can again move the symmetry action to the bond degree of freedom of the MPS, and are thus left with < g r a p h i c s > . As discussed above, we have that the fixed point space of the transfer operator is of the form < g r a p h i c s > where the σ_∙^i transform trivially under the symmetry action. While the endpoints no longer decouple, we still have that the expectation value of Eq. (<ref>) converges to an average over products of expectation values < g r a p h i c s >. We can now follow the same reasoning as before: Using that < g r a p h i c s > , we have that < g r a p h i c s > , which shows that Eq. (<ref>) can only be non-vanishing if α( h)=ν_ g( h), yielding Condition <ref>. Note that the converse – that there exists a suitable R_α whenever the condition is satisfied – has already been shown in Appendix <ref>. Finally, the proof of Condition <ref> does not make explicit reference to the form of the anyons, but just uses the restrictions on ⟨gαℓ⟩, Eq. (<ref>), obtained in Condition <ref>. § REALIZATION OF ALL TWISTED ℤ_t DOUBLE MODELS THROUGH CONDENSATION FROM D(ℤ_N) In this Appendix we will discuss PEPS tensors which describe ℤ_t twisted quantum doubles <cit.> with twist r ∈ [0,t-1]. We will show that the PEPS tensors have symmetry ℤ_N, where N=qt and q=t/gcd(t,r). We will explicitly construct the fixed points of the transfer matrix and show that their residual symmetry is given by ℤ_qt×ℤ_q, and that their entanglement structure corresponds to the second cohomology class rq/t. We start off by defining right- and left-handed building blocks for a matrix product operator, referred to as MPO tensors, M(a)_ij and N(a)_ij; i.e., for each a∈[0,t-1] and i,j∈[0,q-1] we define a qt× qt dimensional matrix. The non-zero matrix elements of M(a)_g_0h_0 are given by [M(a)_g_0h]_{g,h},{g+a,h+a}= √(q/t)ω(a,g,h-g_0)δ_g_0≡ g, with g∈[0,t-1] and g_0,h∈[0,q-1].
Here ω is a 3-cocycle which we define below, and δ_g_0≡ g is unity if g_0 = g mod q, zero otherwise. Note that the subscripts g+a and h+a can be greater than t and q, respectively. Here and in the following we will implicitly use modulo t or q when calculating indices which can only take values smaller than t or q, respectively. We will use the subscript 0 to distinguish between a variable modulo t and modulo q if both are used in the same equation, i.e. as in the definition of [M(a)_g_0h]_{g,h},{g+a,h+a} for g. The left- and right-handed MPO tensors are related by N(a)_h_0g_0 = M(a)_g_0h_0 (bar denotes complex conjugation). The non-zero values of M(a)_g_0h and N(a)_hg_0 can also be depicted graphically by: M(a)_g_0h =√(q/t)< g r a p h i c s >ω(a,g,h-g_0), N(a)_hg_0 = √(q/t)< g r a p h i c s >ω(a,g,h-g_0), where at the r.h.s. g can take any value satisfying g_0 = g mod q. The horizontal (red, dotted) legs correspond to the indices of M(a) and N(a); the vertical legs correspond to the indices of the matrices M(a)_g_0h_0 and N(a)_g_0h_0. The thick leg (t-dimensional) and the thin leg (q-dimensional) together form a qt-dimensional space. The thick edges of the box indicate its orientation and are also used to distinguish M from N. The 3-cocycle ω is defined by: ω(a,g,d)= exp[2π i r d/t^2(a+g-⌊ a+g ⌋)], where ⌊·⌋ denotes modulo t and r∈ [0,t-1] specifies the class of the 3-cocycle. Note that this gauge differs (by a co-boundary) from the one defined in Ref. propitius:phd-thesis. This cocycle has the invariance ω(a,g,d) = ω(a,g,d+q), and satisfies the following cocycle condition: ω (g_1,g_2,g_3)ω(g_1,g_2+g_3,g_4)ω(g_2,g_3,g_4) = ω(g_1+g_2,g_3,g_4)ω(g_1,g_2,g_3+g_4) for any set of g_i's. We use two copies of N(a) and M(a) to construct the map A(a) = ∑_ijkl M(a)_ij⊗ M(a)_jk⊗ N(a)_kl⊗ N(a)_li.
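Since the cocycle condition and the invariance ω(a,g,d) = ω(a,g,d+q) are used repeatedly below, both can be checked numerically. The sketch below assumes the stated conventions: all arguments are reduced modulo t, and since ⌊·⌋ denotes modulo t, the factor a+g-⌊a+g⌋ equals t·floor((a+g)/t):

```python
import numpy as np
from itertools import product
from math import gcd

def make_omega(t, r):
    """omega(a,g,d) = exp[2*pi*i*r*d/t^2 * (a+g - (a+g mod t))], arguments mod t."""
    def omega(a, g, d):
        a, g, d = a % t, g % t, d % t
        return np.exp(2j * np.pi * r * d * ((a + g) - (a + g) % t) / t**2)
    return omega

for t, r in [(2, 1), (3, 1), (4, 2), (6, 4)]:
    omega = make_omega(t, r)
    q = t // gcd(t, r)
    # 3-cocycle condition
    for g1, g2, g3, g4 in product(range(t), repeat=4):
        lhs = omega(g1, g2, g3) * omega(g1, g2 + g3, g4) * omega(g2, g3, g4)
        rhs = omega(g1 + g2, g3, g4) * omega(g1, g2, g3 + g4)
        assert np.isclose(lhs, rhs)
    # invariance omega(a,g,d) = omega(a,g,d+q) with q = t/gcd(t,r)
    for a, g, d in product(range(t), repeat=3):
        assert np.isclose(omega(a, g, d), omega(a, g, d + q))
```

The brute-force check passes for several (t, r) pairs, including ones with gcd(t, r) > 1.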
The PEPS tensor is a linear combination of these maps: A =t/q^2∑_a A(a), which can be graphically represented by ∑_a < g r a p h i c s > . The inner legs correspond to a physical site and the outer four groups of two legs correspond to the four auxiliary sites. §.§ Twisted quantum double The above-defined PEPS tensor can also be obtained by starting from MPO tensors for the twisted double defined in Ref. bultinck:mpo-anyons: M(a)_gh =< g r a p h i c s >ω(a,g,h-g), N(a)_hg =< g r a p h i c s >ω(a,g,h-g). All legs (thick) correspond to a t-dimensional space. As indicated by expression (<ref>), these MPO tensors can also be used to create a PEPS tensor. We will now discuss two unitaries which can be used to relate the state described by the above MPO tensors to the state described by the MPO tensors given by Eqs. (<ref>) and (<ref>). First consider U acting on a t^5 dimensional space with non-zero matrix entries U_ijpqm,ijklm = δ_p,k-(x-x_0)δ_q,l-(m-m_0), where x = i+l-j. Acting with this unitary on the following physical sites U < g r a p h i c s > , gives a state whose reduced density matrix for the physical legs indicated by a thin leg has support on only a q-dimensional space, i.e. the space spanned by the first q vectors of the computational basis. Acting with multiple copies of this unitary (one for each PEPS link) one can effectively reduce the dimension of the on-site Hilbert space from t^8 to (qt)^4. Note that the order does not matter since U acts diagonally on overlapping sites. In the following three steps we successively reduce the entanglement space indicated by the green arrow: < g r a p h i c s > . The first reduction is valid since these indices are also coupled through the remaining 6 MPO tensors (we could actually have removed this link completely). In the second and third step we make use of the invariance of the 3-cocycle ω(a,g,d) = ω(a,g,d+q). The MPO tensor labeled by * only depends on the indicated index modulo q.
Applying this reduction to all plaquettes almost gives the model arising from the PEPS defined in Eq. (<ref>), except for the entanglement space between the upper and left MPO tensors of the PEPS tensor still being t-dimensional. A second unitary Ũ will reduce this entanglement. It acts on a t^3q dimensional space as: < g r a p h i c s > , and has non-zero matrix entries Ũ_{g_1+kq,g_2+kq,g_3,g_4},{g_1,g_2,g_3,g_4} = F(⌊ g_2+kq-g_3⌋/q,⌊ g_2-g_3⌋/q) ω(g_3,kq,g_4-g_3) ω(g_1,kq,g_4-g_3), where again ⌊·⌋ denotes modulo t and F(a,b) = √(q/t)exp[2abqπ i /t]. Ũ is unitary since F is a Fourier transform in the difference between the second and third index (mod q). Before applying this unitary, the reduced density matrix ρ_23 of the state, corresponding to the sites labeled by 2 and 3 in the above equation, is a maximally mixed state, whereas after applying this unitary, ρ_23 has Schmidt rank q. The corresponding 3-cocycles in the definition of Ũ ensure that after disentangling, the MPO tensors labeled by * in the above equation still have the right phase factor. Indeed we have that Ũω(a+h-h',h',g-h)ω(a,h,g-h)|h',h,h,g⟩ = ∑_kα_k|h'+kq,h+kq,h,g⟩ where α_k is given by: α_k = ω(h',kq,g-h)ω(a+h-h',h',kq+g-h)·ω(h,kq,g-h)ω(a,h,kq+g-h) = ω(a+h,kq,g-h)ω(a+h-h',h'+kq,g-h)·ω(a+h-h',h',kq)·ω(a+h,kq,g-h)ω(a,h+kq,g-h)ω(a,h,kq) = ω(a,h+kq,g-h)ω(a+h-h',h'+kq,g-h), for any integer a. The second equation follows after applying the cocycle condition, Eq. (<ref>), twice: once with g_1 = a+h-h', g_2= h', g_3=kq and g_4=g-h, and once with g_1 = a, g_2= h, g_3=kq and g_4=g-h. The first and third equality follow from the invariance ω(a,g,d) = ω(a,g,d+q). §.§ Properties We will show that the constructed tensor is a projector: A^† = A^2 = A. Both properties can be studied on the level of the MPO tensors M(a) and N(a). First, we have that M(-a)_ij and M(a)_ij^† are related by a gauge transformation: M(a)_ij^† = ∑_kl Q(a)_ikM(-a)_klQ(a)_jl. The matrix entries of Q(a) are given by the cocycle defined in Eq.
(<ref>): Q(a)_ij = ω(-a,a,i)δ_i+a_0,j. The above relation follows from the cocycle condition: non-zero entries on the l.h.s. are ω(a,g,j-i) (for matrix indices [{g+a,j+a},{g,j}] with g_0=i), the corresponding matrix entries on the r.h.s. are ω(-a,a,i)ω(-a,a+g,j-i)ω(-a,a,j). These are equal by Eq. (<ref>) using g_1=-a, g_2=a, g_3=g and g_4=j-i, and from the fact that in the chosen gauge for ω we have that ω(0,g_3,g_4) = 1. The same equation can also be derived for N(a). Since Q obeys ∑_jQ(a)_ijQ(a)_kj = δ_ik it follows that the tensor A is Hermitian.The product of two MPO tensors M(a) and M(b) is related to the MPO tensor M(a+b) by a gauge transformation:∑_mn Z(a,b)_i,mn (M(b)_mk· M(a)_nl) = √(q/t)∑_j M(a+b)_ij Z(a,b)_j,kl where Z(a,b) is a q× q^2 matrix whose non-zero entries are given by Z(a,b)_i,kl = ω(a,b,i)δ_i,kδ_i+b_0,l. This equation can also be represented graphically as:< g r a p h i c s > . This relation follows again from the cocycle condition, Eq. (<ref>): i.e. one can verify that ω(a,b,i)ω(b,g,k-i)ω(a,b+g,k-i) = ω(a+b,g,k-i)ω(a,b,k), being the entry-wise equation for the above relation for matrix entries [{g,k}, {g+a+b,k+a+b}] with g_0=i. The zipper Z obeys∑_kl Z(a,b)_i,klZ(a,b)_j,kl =δ_ij and ∑_iZ(a,b)_i,kl Z(a,b)_i,mn = δ_mkδ_nlδ_m+b_0,n which can be represented graphically as:< g r a p h i c s > ,< g r a p h i c s > .Note that the product of two zippers, given by Eq. (<ref>), is not equal to identity but rather equal to a projector. These equations are used in showing that A^2=A. To see this one first uses zippers Z(a,b) to simplify the product A(b)A(a) to q^2/t^2 A(a+b) which can best be graphically explained:< g r a p h i c s >In the first equation one used Eq. <ref> to insert two zippers. Although this product of zippers is a projector rather than identity, this equation is still valid since the support of this projector contains the image of the product of two MPO tensors. Eq. 
<ref> is used to move one of the two zippers along the string of MPO tensors, after which Eq. <ref> is used to remove the two zippers. Using this equation one can show that A^2= t^2/q^4∑_abA(b)A(a) = 1/q^2∑_ab A(a+b) = t/q^2∑_c A(c) = A. This motivates the pre-factor of t/q^2 in the definition of A. The last property of the PEPS tensor we will discuss in this section is that the corresponding transfer matrix can be constructed from the MPO tensors: T = ∑|r_a)(l_a|, where |r_a) and |l_a) are given by |r_a)= ∑_{i_n} M(a)_i_1i_2⊗ M(a)_i_2i_3⊗…⊗ M(a)_i_Li_1, |l_a)= ∑_{i_n} N(a)_i_1i_2⊗ N(a)_i_2i_3⊗…⊗ N(a)_i_Li_1. The crucial step in deriving this statement is that the trace of the product of two MPO building blocks reduces to a delta function: Tr M(a)_ijN(b)_nm^T= δ_abδ_inδ_jm. Or graphically: < g r a p h i c s >δ_ab. This motivates the factor of √(q/t) in the definition of the MPO tensors. Moreover, it can be used to show that the left and right eigenvectors are orthogonal: (l_a|r_b) ∝δ_ab. Using the above equation one can graphically derive the fixed points of T as follows: < g r a p h i c s > . §.§ Symmetries In this section we show that the PEPS tensor A has a ℤ_N symmetry, where N=qt (hence the transfer matrix T has a ℤ_N×ℤ_N symmetry), and that the fixed points of the transfer matrix break this symmetry down to ℤ_qt×ℤ_q. Moreover, the remaining symmetry acts projectively on the auxiliary space of the fixed points (being the MPO-string space). To derive these statements we introduce a unitary S which relates M(a) and M(a+1) up to a gauge transformation: M(a)_ij S = ∑_klU(a)_ikM(a+1)_klU(a)_jl. Similarly, by combining this with Eq. (<ref>) it follows that S^† M(a) is related to M(a-1) as S^† M(a)_ij = ∑_klV(a)_ikM(a-1)_klV(a)_jl where V(a) = Q(a)U(-a)Q(-a+1).
Both equations can be represented graphically: < g r a p h i c s > and < g r a p h i c s > . The symmetry S is defined by S_(i_1,i_2),(j_1,j_2) = δ_i_1+1,j_1δ_i_2+1,j_2ω(1,i_1,i_2-i_1) and the gauge transformation U is defined by U(a)_ij = δ_ijω(1,a,i). Note that S is independent of a. These equations follow from the cocycle condition ω(1,a,g)ω(1,a+g,d)ω(a,g,d) = ω(1+a,g,d)ω(1,a,g+d). To see that S is a generator of ℤ_N we evaluate S^t. It is a diagonal matrix, (S^t)_(i,i+d),(i,i+d) = ∏_j=1^t ω(1,j,d) = exp[2π i dr/t] = exp[2π i (d/q)· r/gcd(t,r)], whose matrix entries are q-th roots of unity, which are moreover primitive if gcd(d,q)=1 (for example, d=1). Thus S^N = 1, and N is the smallest exponent for which this is the case. Both S and S^† are symmetries of the tensor A and of the transfer matrix. They are not symmetries of the fixed points |r_a) and |l_a). Only the global action of S⊗ S^† and the global action of S^t⊗𝕀 are symmetries of the fixed points, and they generate the group ℤ_qt×ℤ_q. Their action on an MPO tensor is given by S^† M(a)_ijS = ∑_klP_1_ikM(a)_klP_1_jl and M(a)_ijS^t = ∑_klP_2_ikM(a)_klP_2_jl. The corresponding gauge transformations are P_1(a) = U(a)V(a+1) and P_2(a) = ∏_i=a^a+t-1 U(i). The latter of these two gauge transformations is most easily analyzed since U(a) is diagonal: P_2(a)_nn = ∏_m=a^a+t-1ω(1,m,n) = exp[-2π i (n/q)· r/gcd(t,r)]. Hence P_2(a) is independent of a and is (up to a permutation) the generalized Pauli Z matrix in ℤ_q. The non-zero matrix entries of the other gauge transformation, being P_1(a)_n,n+1, are all equal, independent of n, due to: ω(1,a,n)ω(-a-1,a+1,n) ·ω(1,-a-1,1+a+n)ω(a,-a,a+1+n)= ω(1,a,n)ω(-a,a+1,n) ·ω(1,-a-1,a+1)ω(a,-a,a+1+n)= ω(1,a,n)ω(a,-a,a+1)ω(1,-a-1,a+1)ω(a,1,n)= ω(a,-a,a+1)ω(1,-a-1,a+1). Here we have used the cocycle condition twice, and in the last step we use that in our choice of gauge for ω we have that ω(a,1,n) = ω(1,a,n).
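The claim that the diagonal entries of S^t are q-th roots of unity (so that S^N = S^{tq} = 1) can be checked directly from the definition of ω. The small script below assumes the same modulo-t conventions as before, with the product over j taken over one full period:

```python
import numpy as np
from math import gcd

def omega(a, g, d, t, r):
    # omega(a,g,d) = exp[2*pi*i*r*d/t^2 * (a+g - (a+g mod t))], arguments mod t
    a, g, d = a % t, g % t, d % t
    return np.exp(2j * np.pi * r * d * ((a + g) - (a + g) % t) / t**2)

for t, r in [(2, 1), (4, 2), (6, 4)]:
    q = t // gcd(t, r)
    for d in range(t):
        # diagonal entry of S^t: product of omega(1, j, d) over one period of j
        val = np.prod([omega(1, j, d, t, r) for j in range(t)])
        assert np.isclose(val, np.exp(2j * np.pi * d * r / t))   # equals exp[2 pi i d r / t]
        assert np.isclose(val**q, 1.0)   # a q-th root of unity, hence S^(qt) = S^N = 1
```

Only the single factor with 1 + j wrapping around t contributes a non-trivial phase, which is why the product collapses to exp[2πi d r/t].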
Hence, up to a phase, P_1 is a shift operator which upon conjugation by P_2 gives rise to a phase P_1P_2P_1^† P_2^†= exp[-(2 π i/q)· r/gcd(t,r)] . Thus together P_1 and P_2 generate a projective representation of ℤ_qt×ℤ_q, and r/gcd(t,r)=rq/t specifies the corresponding second cohomology class. This family of examples saturates all possible boundary theories of ℤ_N invariant PEPS models satisfying the conditions stated in the main text, in which the diagonal symmetry is maximal. In the general case (Condition 1) the residual symmetry is ℤ_qt×ℤ_q, where qt is merely a divisor of N, instead of qt=N. However, by increasing the dimension of the auxiliary space by a factor of x=N/(qt) one could simply add extra trivial symmetry to the PEPS tensor, which would imply extra symmetry of the transfer matrix. The fixed points will break this extra symmetry because they do not have support on the added auxiliary space, and hence the residual symmetry is still ℤ_qt×ℤ_q. bais:anyon-condensation F. Bais and J. Slingerland, Phys. Rev. B 79, 045316 (2009), arXiv:0808.0627. bais:quantum-sym-breaking F. A. Bais, B. J. Schroers, and J. K. Slingerland, Phys. Rev. Lett. 89, 181601 (2002), hep-th/0205117. bais:hopf-symmetry-breaking-jhep F. Bais, B. Schroers, and J. Slingerland, JHEP 305, 068 (2003), hep-th/0205114. kitaev:gapped-boundaries A. Kitaev and L. Kong, Commun. Math. Phys. 313, 351 (2012), arXiv:1104.5047. kong:anyon-condensation-tensor-categories L. Kong, Nucl. Phys. B 886, 436 (2014), arXiv:1307.8244. verstraete:mbc-peps F. Verstraete and J. I. Cirac, Phys. Rev. A 70, 060302 (2004), quant-ph/0311130. buerschaper:stringnet-peps O. Buerschaper, M. Aguado, and G. Vidal, Phys. Rev. B 79, 085119 (2009), arXiv:0809.2393. gu:stringnet-peps Z.-C. Gu, M. Levin, B. Swingle, and X.-G. Wen, Phys. Rev. B 79, 085118 (2009), arXiv:0809.2821. schuch:peps-sym N. Schuch, I. Cirac, and D. Pérez-García, Ann. Phys. 325, 2153 (2010), arXiv:1001.3807. buerschaper:twisted-injectivity O. Buerschaper, Ann. Phys.
351, 447 (2014), arXiv:1307.7763. sahinoglu:mpo-injectivity M. B. Sahinoglu et al., (2014), arXiv:1409.2150. bultinck:mpo-anyons N. Bultinck et al., (2015), arXiv:1511.08090. schuch:topo-top N. Schuch, D. Poilblanc, J. I. Cirac, and D. Perez-Garcia, Phys. Rev. Lett. 111, 090501 (2013), arXiv:1210.5601. haegeman:shadows J. Haegeman, V. Zauner, N. Schuch, and F. Verstraete, Nature Comm. 6, 8284 (2015), arXiv:1410.5443. marien:fibonacci-condensation M. Marien, J. Haegeman, P. Fendley, and F. Verstraete, arXiv:1607.05296v1. fernandez:symmetrized-tcode C. Fernandez-Gonzalez, R. S. K. Mong, O. Landon-Cardinal, D. Perez-Garcia, and N. Schuch, Phys. Rev. B 94, 155106 (2016), arXiv:1608.00594. perez-garcia:parent-ham-2d D. Perez-Garcia, F. Verstraete, J. I. Cirac, and M. M. Wolf, Quantum Inf. Comput. 8, 0650 (2008), arXiv:0707.2260. kitaev:toriccode A. Kitaev, Ann. Phys. 303, 2 (2003), quant-ph/9707021. castelnovo:tc-tension-topoentropy C. Castelnovo and C. Chamon, Phys. Rev. B 77, 054433 (2008), arXiv:0707.2084. cirac:peps-boundaries J. I. Cirac, D. Poilblanc, N. Schuch, and F. Verstraete, Phys. Rev. B 83, 245134 (2011), arXiv:1103.3427. chen:1d-phases-rg X. Chen, Z. Gu, and X. Wen, Phys. Rev. B 83, 035107 (2011), arXiv:1008.3745. schuch:mps-phases N. Schuch, D. Perez-Garcia, and I. Cirac, Phys. Rev. B 84, 165139 (2011), arXiv:1010.3732. hastings:arealaw M. Hastings, J. Stat. Mech., P08024 (2007), arXiv:0705.2024. verstraete:faithfully F. Verstraete and J. I. Cirac, Phys. Rev. B 73, 094423 (2006), cond-mat/0505140. schuch:mps-entropies N. Schuch, M. M. Wolf, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. 100, 30504 (2008), arXiv:0705.0292. perez-garcia:mps-reps D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, Quant. Inf. Comput. 7, 401 (2007), quant-ph/0608197. cirac:mpdo-rgfp J. Cirac, D. Perez-Garcia, N. Schuch, and F. Verstraete, (2016), arXiv:1606.00608. Note1 Note that this does not imply that the fixed point space is actually spanned by the |ρ_c).
In fact, it is easy to see that this would require extra conditions such as rotational invariance, since e.g. a transfer operator projecting onto a GHZ-type state would have a unique fixed point (the GHZ state) which is not an injective MPS. rispler:peps-symmetrybreaking M. Rispler, K. Duivenvoorden, and N. Schuch, Phys. Rev. B 92, 155133 (2015), arXiv:1505.04217. perez-garcia:inj-peps-syms D. Perez-Garcia, M. Sanz, C. E. Gonzalez-Guillen, M. M. Wolf, and J. I. Cirac, New J. Phys. 12, 025010 (2010), arXiv:0908.1674. fernandez-gonzalez:uncle-long C. Fernandez-Gonzalez, N. Schuch, M. M. Wolf, J. I. Cirac, and D. Perez-Garcia, Commun. Math. Phys. 333, 299 (2015), arXiv:1210.6613. sanz:mps-syms M. Sanz, M. M. Wolf, D. Perez-Garcia, and J. I. Cirac, Phys. Rev. A 79, 042308 (2009), arXiv:0901.2223. pollmann:symprot-1d F. Pollmann, E. Berg, A. M. Turner, and M. Oshikawa, (2009), arXiv:0909.4059. chen:spt-order-and-cohomology X. Chen, Z.-C. Gu, Z.-X. Liu, and X.-G. Wen, Phys. Rev. B 87, 155114 (2013), arXiv:1106.4772. propitius:phd-thesis M. de Wild Propitius, Topological interactions in broken gauge theories, PhD thesis, 1995, arXiv:hep-th/9511195. Note2 This can be seen using the cocycle conditions and the fact that the group is abelian, as follows: ν _h(g_1) ν _h(g_2)/ν _h(g_1g_2) =ω (g_1,h)ω (g_2,h) ω (h,g_1g_2)/ω (h,g_1)ω (h,g_2) ω (g_1g_2,h) ·ω (hg_1,g_2)/ω (hg_1,g_2)=ω (g_1,h)ω (g_2,h) ω (h,g_1g_2) ω (hg_1,g_2) /ω (h, g_1g_2) ω (g_1,g_2) ω (h,g_2) ω (g_1g_2,h)=ω (g_1,h)ω (g_2,h) ω (h,g_1g_2) ω (hg_1,g_2) /ω (h, g_1g_2) ω (h,g_2) ω (g_1,g_2h) ω (g_2,h) =ω (g_1,h) ω (hg_1,g_2) /ω (h,g_2) ω (g_1,g_2h) =ω (g_1,h) ω (g_1 h,g_2) /ω (h,g_2) ω (g_1,hg_2)=1. pollmann:spt-detection-1d F. Pollmann and A. M. Turner, Phys. Rev.
B 86, 125441 (2012), arXiv:1204.0704.Note3 Note that the same cannot hold for all abelian groups: Condensing from an abelian group gives another abelian model, while twisting an abelian model can give rise to non-abelian models <cit.>.iqbal:preparation M. Iqbal et al., in preparation .zanardi:overlap-curvature P. Zanardi, P. Giorda, and M. Cozzini, Phys. Rev. Lett. 99, 100603 (2007), quant-ph/0701061.gu:fidelity-phasetransitions S.-J. Gu, Int. J. Mod. Phys. B 24, 4371 (2010), arXiv:0811.3127.morampudi:z2-phase-transition S. C. Morampudi, C. von Keyserlingk, and F. Pollmann, Phys. Rev. B 90, 035117 (2014), arXiv:1403.0768.duivenvoorden:stringorder-symbreaking K. Duivenvoorden and T. Quella, Phys. Rev. B 88, 125115 (2013), arXiv:1304.7234.yang:peps-edgetheories S. Yang et al., Phys. Rev. Lett. 112, 036402 (2013), arXiv:1309.4596.
arXiv:1702.08469v1 (cond-mat.str-el, quant-ph), 27 Feb 2017: “Entanglement phases as holographic duals of anyon condensates”, by Kasper Duivenvoorden, Mohsin Iqbal, Jutho Haegeman, Frank Verstraete, and Norbert Schuch.
Multi-task learning (MTL) in deep neural networks for NLP has recently received increasing interest due to some compelling benefits, including its potential to efficiently regularize models and to reduce the need for labeled data. While it has brought significant improvements in a number of NLP tasks, mixed results have been reported, and little is known about the conditions under which MTL leads to gains in NLP. This paper sheds light on the specific task relations that can lead to gains from MTL models over single-task setups. § INTRODUCTION Multi-task learning is receiving increasing interest in both academia and industry, with the potential to reduce the need for labeled data, and to enable the induction of more robust models. The main driver has been empirical results pushing state of the art in various tasks, but preliminary theoretical findings guarantee that multi-task learning works under various conditions. Some approaches to multi-task learning are, for example, known to work when the tasks share optimal hypothesis classes <cit.> or are drawn from related sample generating distributions <cit.>. In NLP, multi-task learning typically involves very heterogeneous tasks. However, while great improvements have been reported <cit.>, results are also often mixed <cit.>, and theoretical guarantees no longer apply. The question of what task relations guarantee gains or make gains likely in NLP remains open. Contributions: This paper presents a systematic study of when and why MTL works in the context of sequence labeling with deep recurrent neural networks. We follow previous work in studying the set-up where hyperparameters from the single-task architectures are reused in the multi-task set-up (no additional tuning), which makes predicting gains feasible.
Running MTL experiments on 90 task configurations and comparing their performance to single-task setups, we identify data characteristics and patterns in single-task learning that predict task synergies in deep neural networks. Both the LSTM code used for our single-task and multi-task models and the script we used for the analysis of these are available at <github.com/jbingel/eacl2017_mtl>. § RELATED WORK In the context of structured prediction in NLP, there has been very little work on the conditions under which MTL works. Luong:ea:16 suggest that it is important that the auxiliary data does not outsize the target data, while Hovy:ea:17 suggest that multi-task learning is particularly effective when we only have access to small amounts of target data. Alonso:Plank:17 present a study on different task combinations with dedicated main and auxiliary tasks. Their findings suggest, among other things, that success depends on how uniformly the auxiliary task labels are distributed. Mou:2016:transferable investigate multi-task learning and its relation to transfer learning, and the conditions under which these work for a set of sentence classification tasks. Their main finding with respect to multi-task learning is that success depends largely on “how similar in semantics the source and target datasets are”, and that it generally bears close resemblance to transfer learning in the effect it has on model performance. § MULTI-TASK LEARNING While there are many approaches to multi-task learning, hard parameter sharing in deep neural networks <cit.> has become extremely popular in recent years. Its greatest advantages over other methods include (i) that it is known to be an efficient regularizer, theoretically <cit.>, as well as in practice <cit.>; and (ii) that it is easy to implement. The basic idea in hard parameter sharing in deep neural networks is that the different tasks share some of the hidden layers, such that these learn a joint representation for multiple tasks.
Another conceptualization is to think of this as regularizing our target model by doing model interpolation with auxiliary models in a dynamic fashion. Multi-task linear models have typically been presented as matrix regularizers. The parameters of each task-specific model makes up a row in a matrix, and multi-task learning is enforced by defining a joint regularization term over this matrix. One such approach would be to define the joint loss as the sum of losses and the sum of the singular values of the matrix. The most common approach is to regularize learning by the sum of the distances of the task-specific models to the model mean. This is called mean-constrained learning. Hard parameter sharing can be seen as a very crude form of mean-constrained learning, in which parts of all models (typically the hidden layers) are enforced to be identical to the mean. Since we are only forcing parts of the models to be identical, each task-specific model is still left with wiggle room to model heterogeneous tasks, but the expressivity is very limited, as evidenced by the inability of such networks to fit random noise <cit.>.§.§ ModelsRecent work on multi-task learning of NLP models has focused on sequence labeling with recurrent neural networks , although sequence-to-sequence models have been shown to profit from MTL as well <cit.>. Our multi-task learning architecture is similar to the former, with a bi-directional LSTM as a single hidden layer of 100 dimensions that is shared across all tasks. The inputs to this hidden layer are 100-dimensional word vectors that are initialized with pretrained GloVe embeddings, but updated during training. The embedding parameters are also shared. The model then generates predictions from the bi-LSTM through task-specific dense projections. 
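As an illustration of hard parameter sharing, the following sketch implements the forward pass of such an architecture in plain numpy: a shared tanh layer stands in for the shared bi-LSTM, and softmax heads stand in for the task-specific dense projections. The dimensions (100-dimensional embeddings and hidden layer) follow the paper, but the task names and label-set sizes below are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 100, 100                      # dimensions as in the paper
n_labels = {"pos": 17, "chunk": 23}      # illustrative per-task label set sizes

# hard parameter sharing: the hidden layer is shared across all tasks,
# only the final dense projection is task-specific
shared_W = rng.normal(0, 0.1, (HID, EMB))
task_W = {task: rng.normal(0, 0.1, (k, HID)) for task, k in n_labels.items()}

def predict(task, word_vecs):
    # word_vecs: (seq_len, EMB); a shared tanh layer is a simplification of
    # the shared bi-LSTM used in the actual model
    h = np.tanh(word_vecs @ shared_W.T)              # shared representation
    logits = h @ task_W[task].T                      # task-specific head
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)          # per-token label distribution

sent = rng.normal(0, 1, (5, EMB))                    # a 5-token "sentence"
for task, k in n_labels.items():
    probs = predict(task, sent)
    assert probs.shape == (5, k)
    assert np.allclose(probs.sum(axis=1), 1.0)
```

Updating `shared_W` during training of either task is what couples the tasks: gradients from both objectives flow into the same shared parameters.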
Our model is symmetric in the sense that it does not distinguish between main and auxiliary tasks. In our MTL setup, a training step consists of uniformly drawing a training task, then sampling a random batch of 32 examples from the task's training data. Every training step thus works on exactly one task, and optimizes the task-specific projection and the shared parameters using Adadelta. As already mentioned, we keep hyper-parameters fixed across single-task and multi-task settings, making our results only applicable to the scenario where one wants to know whether MTL works in the current parameter setting <cit.>. §.§ Tasks In our experiments below, we consider the following ten NLP tasks, with one dataset for each task. Characteristics of the datasets that we use are summarized in Table <ref>. * CCG Tagging (ccg) is a sequence tagging problem that assigns a logical type to every token. We use the standard splits for CCG super-tagging from the CCGBank <cit.>. * Chunking (chu) identifies continuous spans of tokens that form syntactic units such as noun phrases or verb phrases. We use the standard splits for syntactic chunking from the English Penn Treebank <cit.>. * Sentence Compression (com) We use the publicly available subset of the Google Compression dataset <cit.>, which has token-level annotations of word deletions. * Semantic frames (fnt) We use FrameNet 1.5 for jointly predicting target words that trigger frames, and deciding on the correct frame in context. * POS tagging (pos) We use a dataset of tweets annotated for Universal part-of-speech tags <cit.>. * Hyperlink Prediction (hyp) We use the hypertext corpus from spitkovsky2010 and predict what sequences of words have been bracketed with hyperlinks. * Keyphrase Detection (key) This task amounts to detecting keyphrases in scientific publications. We use the SemEval 2017 Task 10 dataset.
* MWE Detection (mwe) We use the Streusle corpus <cit.> to learn to identify multi-word expressions (on my own, cope with).* Super-sense tagging (sem) We use the standard splits for the Semcor dataset, predicting coarse-grained semantic types of nouns and verbs (super-senses). * Super-sense tagging (str) As for the MWE task, we use the Streusle corpus, jointly predicting brackets and coarse-grained semantic types of the multi-word expressions. § EXPERIMENTS We train single-task bi-LSTMs for each of the ten tasks, as well as one multi-task model for each pair of tasks, yielding 90 directed pairs of the form ⟨𝒯_main, {𝒯_main, 𝒯_aux}⟩. The single-task models are trained for 25,000 batches, while multi-task models are trained for 50,000 batches to account for the uniform drawing of the two tasks at every iteration in the multi-task setup.The relative gains and losses from MTL over the single-task models (see Table <ref>) are presented in Figure <ref>, showing improvements in 40 out of 90 cases. We see that chunking and high-level semantic tagging generally contribute most to other tasks, while hyperlinks do not significantly improve any other task. On the receiving end, we see that multiword and hyperlink detection seem to profit most from several auxiliary tasks. Symbiotic relationships are formed, e.g., by POS and CCG-tagging, or MWE and compression.We now investigate whether we can predict gains from MTL given features of the tasks and single-task learning characteristics. We will use the induced meta-learning model to analyze which such characteristics are predictive of gains. Specifically, for each task considered, we extract a number of dataset-inherent features (see Table <ref>) as well as features that we derive from the learning curve of the respective single-task model. For the curve gradients, we compute the gradients of the loss curve at 10, 20, 30, 50 and 70 percent of the 25,000 batches.
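The curve-gradient features can be extracted with a finite difference at each checkpoint. A minimal sketch of this step; the one-step difference window is our assumption, not specified in the text:

```python
# Sketch: gradient of a loss curve at fixed fractions of training,
# approximated by a one-step finite difference at each checkpoint.

def curve_gradients(loss_curve, fractions=(0.1, 0.2, 0.3, 0.5, 0.7)):
    """Return {fraction: approximate gradient} for a list of loss values."""
    n = len(loss_curve)
    grads = {}
    for f in fractions:
        i = min(int(f * n), n - 2)  # clamp so i + 1 stays in range
        grads[f] = loss_curve[i + 1] - loss_curve[i]
    return grads
```

A curve that is flattening yields small negative gradients at the later checkpoints, which is the signal the analysis below relies on.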
For the fitted log-curve parameters, we fit a logarithmic function to the loss curve values, where the function is of the form: L(i) = a · ln(c · i + d) + b.We include the fitted parameters a and c as features that describe the steepness of the learning curve. In total, both the main and the auxiliary task are described by 14 features. Since we also compute the main/auxiliary ratios of these values, each of our 90 data points is described by 42 features that we normalize to the [0, 1] interval.We binarize the results presented in Figure <ref> and use logistic regression to predict benefits or detriments of MTL setups based on the features computed above.[An experiment in which we tried to predict the magnitude of the losses and gains with linear regression yielded inconclusive results.] §.§ Results The mean performance of 100 runs of randomized five-fold cross-validation of our logistic regression model for different feature combinations is listed in Table <ref>. The first observation is that there is a strong signal in our meta-learning features. In almost four in five cases, we can predict the outcome of the MTL experiment from the data and the single task experiments, which gives validity to our feature analysis. We also see that the features derived from the single task inductions are the most important. In fact, using only data-inherent features, the F_1 score of the positive class is worse than the majority baseline. §.§ Analysis Table <ref> lists the coefficients for all 42 features. We find that features describing the learning curves for the main and auxiliary tasks are the best predictors of MTL gains. The ratios of the learning curve features seem less predictive, and the gradients around 20-30% seem most important, after the area where the curve typically flattens a bit (around 10%). Interestingly, however, these gradients correlate in opposite ways for the main and auxiliary tasks.
The pattern is that if the main tasks have flattening learning curves (small negative gradients) in the 20-30% percentile, but the auxiliary task curves are still relatively steep, MTL is more likely to work. In other words, multi-task gains are more likely for target tasks that quickly plateau with non-plateauing auxiliary tasks. We speculate the reason for this is that multi-task learning can help target tasks that get stuck early in local minima, especially if the auxiliary task does not always get stuck fast. Other features that are predictive include the number of labels in the main task, as well as the label entropy of the auxiliary task. The latter supports the hypothesis put forward by Alonso:Plank:17 (see Related work). Note, however, that this may be a side effect of tasks with more uniform label distributions being easier to learn. The out-of-vocabulary rate for the target task was also predictive, which makes sense as the embedding parameters are also updated when learning from the auxiliary data. Less predictive features include Jensen-Shannon divergences, which is surprising, since multi-task learning is often treated as a transfer learning algorithm <cit.>. It is also surprising to see that size differences between the datasets are not very predictive. § CONCLUSION AND FUTURE WORK We present the first systematic study of when MTL works in the context of common NLP tasks, when single task parameter settings are also applied for multi-task learning. Key findings include that MTL gains are predictable from dataset characteristics and features extracted from the single-task inductions. We also show that the most predictive features relate to the single-task learning curves, suggesting that MTL, when successful, often helps target tasks out of local minima.
We also observed that label entropy in the auxiliary task was a good predictor, lending some support to the hypothesis in Alonso:Plank:17; but there was little evidence that dataset balance is a reliable predictor, unlike what previous work has suggested. In future work, we aim to extend our experiments to a setting where we optimize hyperparameters for the single- and multi-task models individually, which will give us a more reliable picture of the effect to be expected from multi-task learning in the wild. Generally, further conclusions could be drawn from settings where the joint models do not treat the two tasks as equals, but instead give more importance to the main task, for instance through a non-uniform drawing of the task considered at each training iteration, or through an adaptation of the learning rates.We are also interested in extending this work to additional NLP tasks, including tasks that go beyond sequence labeling such as language modeling or sequence-to-sequence problems.§ ACKNOWLEDGMENTS For valuable comments, we would like to thank Dirk Hovy, Yoav Goldberg, the attendants at the second author's invited talk at the Danish Society for Statistics, as well as the anonymous reviewers. This research was partially funded by the ERC Starting Grant LOWLANDS No. 313695, as well as by Trygfonden.
Reinterpreting Maximum Entropy in Ecology: a null hypothesis constrained by ecological mechanism James P. O'Dwyer^1, Andrew Rominger^2, Xiao Xiao^3 1 Department of Plant Biology, University of Illinois, Urbana IL USA 2 Department of Environmental Science, Policy and Management, University of California, Berkeley, USA 3 School of Biology and Ecology, and Senator George J. Mitchell Center for Sustainability Solutions, University of Maine, Orono ME USA Correspondence to be sent to: Dr James P. O'Dwyer, Department of Plant Biology, University of Illinois, Urbana IL 61801, jodwyer@illinois.edu § ABSTRACT Simplified mechanistic models in ecology have been criticized for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrize a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. § INTRODUCTION Macroecology is the study of patterns of biodiversity aggregated across many species and individuals. These patterns encompass the distributions of organisms across space and time <cit.>, as well as multiple ways to quantify and measure biodiversity <cit.>.
Macroecological patterns take surprisingly consistent, simple forms across many different taxonomic groups and distinct habitats <cit.>—for example, the distribution of rare and abundant species can be fitted using one of a handful of common distributions <cit.>. This apparent universality, alongside the sense that it is driven by a combination of high diversity and large numbers <cit.>, has led many ecologists to draw from statistical physics to understand and predict patterns of biodiversity <cit.>. Yet despite promising hints <cit.>, we still currently lack the quantitative, overarching theoretical principles to explain how and why the forms of macroecological patterns are constrained. The Maximum Entropy Theory of Ecology <cit.>, known as METE, has sought to fill this gap. The goal of this approach is to identify a probability distribution that can then be used to make predictions and tested against existing data. The principle of maximum entropy tells us that we can find a unique probability distribution that maximizes entropy, while constraining expectation values using the data we choose to feed into the algorithm. METE is very specific in terms of its data requirements: the theory prescribes a set of constraints based on intuitively important quantities such as the total number of individuals in a system, the total richness (of species or higher taxa <cit.>), and the total energy flux <cit.>. While the specific values of these constraints will differ across ecological communities that are more or less diverse, productive, or populous, the theory posits that the appropriate constraints are identical for all systems.
METE has been successful in a number of studies <cit.>, and the cases where this prescription does work suggest that a large amount of the variance in macroecological patterns may indeed stem from statistical constraints <cit.>. However, the question arises, given that any data could be used to constrain the maximum entropy algorithm, why focus exclusively on the constraints proposed by METE? This is particularly relevant given that METE does not universally succeed in predicting macroecological patterns <cit.>, potentially due to system-specific biological constraints being ignored, an issue raised in earlier METE papers <cit.>. Indeed we could constrain the maximum entropy algorithm with whatever data we know about a given ecological community, whether that is as specific as the number and spatial location of individuals of your favorite species, or as obscure as the skewness of the distribution of rare and abundant species. Whatever we think we know, the maximum entropy principle will then fill in the gaps, adding the least possible additional information. For any mechanistic model with free, undetermined parameters, we always need to use some subset of our data to fit those parameters, before we can evaluate the performance of the model. Our proposal is that we should use precisely the data necessary to fit mechanistic parameters as a constraint for a maximum entropy algorithm, and then use the corresponding MaxEnt predictions as a null model against which to compare mechanistic predictions. If the mechanistic model outperforms the corresponding MaxEnt distribution, then our choice of mechanism as modelers was successful. If the model is outperformed by MaxEnt, we may as well not have modeled the mechanism at all—just using the subset of the data necessary to fit parameters and then maximizing entropy is a better approach. § MECHANISTIC MODELS AND EXPONENTIAL FAMILIES Ecological theories often incorporate various forms of stochasticity, and hence make
predictions for probability distributions rather than deterministic quantities. These distributions can range over many questions and systems, from distributions of species abundance, to distributions of trait values. For many (though by no means all) such theoretical predictions, these probability distributions turn out to belong to an exponential family, which means that the distribution is of the form: P(n) = A(α) h(n) e^-α F(n) for some functions F(n), h(n), A(α) and parameter α. In this language, the function F(n) defines the “family”, the base measure h(n) and parameter α distinguish between different members of a given family, and the function A(α) ensures an appropriate normalization (i.e. probabilities sum or integrate to 1). We note that the support of the distribution depends on the ecological question and context, and we use the notation n simply because several of our examples will involve discrete, positive species abundances. But this variable might equally represent the value of a continuous trait defined over a specified range, or most generally multiple variables of various types. In general, exponential family distributions can take a diverse range of functional forms. These depend on the sufficient statistics (defined below), the base measure, and the number and type of variables, and include such common cases as the normal distribution, the gamma distribution, and the Pareto distribution, but also many other more general functions. In the mechanistic models we will consider in this paper, the functions F(n), h(n), and A(α) will be essentially fixed by the theory, while α will be a parameter of the model. In some situations, it might be possible to estimate this parameter from independently-gathered data. But in many cases, α will be a `free' parameter, something that ecologists must estimate using the data available.
To be more specific, suppose this data takes the form of a series of S independent observations of abundance, {n_i}. Exponential families then have the property that measuring F̄ = 1/S ∑_i=1^S F(n_i) is sufficient to compute a maximum likelihood estimate of the parameter α. We can see this explicitly by writing down the log likelihood of the parameter α, given the S independent observations, taking its derivative, and finding where this log likelihood is maximized: d/dα log ℒ = S d log A(α)/dα - ∑_i=1^S F(n_i) ⇒ d log A(α)/dα |_α=α_ML = 1/S ∑_i=1^S F(n_i) = F̄. This yields an equation relating the maximum likelihood estimate of α to F̄. In other words, in the case of a prediction belonging to an exponential family we need only a very precisely specified part of the data we've collected in order to fit the free parameter α. F(n) is therefore known as the sufficient statistic for this family of distributions, regardless of the form of the base measure, h(n). On the other hand, while the data necessary to estimate α does not change with h(n), different h(n) do affect the value of the estimate of α through the normalization, A(α). §.§ Example: Species Abundances in a Neutral Model We now give an example: ecological neutral theory with no dispersal limitation <cit.>. In a neutral model, multiple species compete for a single resource, and interact completely symmetrically—no one species has a selective advantage over any other. From this assumption, it is possible to derive a neutral prediction for the distribution of species abundances. To demonstrate this, we focus on a particular formulation of neutral theory known as the `non-zero-sum' model <cit.>, and consider the probability P_NT(n) that a species chosen at random has abundance n (in a neutral world). This distribution is the solution of a linear master equation, with effective birth and death rates, b and d, that are the same for all species.
The result is the well-known log series distribution: P_NT(n) = 1/(n log(1/ν)) (1-ν)^n where ν = 1-b/d, the difference between death and birth rates in units of the death rate, and is constrained (by an assumption of constant community size) to be equal to the per capita speciation rate. The log series itself had been introduced much earlier by Fisher <cit.>, and has been fitted to many data sets as a phenomenological candidate for empirical species abundance distributions <cit.>, and so the appearance of the same distribution purely arising from drift was a promising early result for the neutral hypothesis (with the caveat that species abundance distributions had been successfully reproduced by numerous alternative models). How do we determine the best choice of parameter ν for a given data set? If we had an appropriate independent data set with information about the speciation process, this could allow an independent estimate of ν for a given system. We could then compare the form of Eq. (<ref>) with the corresponding observed species abundance distribution. In practice, ecologists testing neutral theory have interpreted ν as a free parameter to be fitted using the species abundance data. We use the notation n_i to denote the abundance of species i, and S, the total number of species in a community, thus the total abundance is ∑_i=1^S n_i. We can straightforwardly recast Eq. (<ref>) in the canonical form of an exponential family, by defining the parameter α = -log(1-ν). Using this parametrization, P_NT(n) = -1/log(1-e^-α) · 1/n · e^-α n. This is now in the same form as Eq. (<ref>), with F(n)=n, h(n)=1/n, and the normalization A(α) = (-log(1-e^-α))^-1, which ensures that ∑_n=1^∞ P(n) = 1. For this distribution, the sufficient statistic is clearly n, so that the data we need to make a maximum likelihood estimate of the free parameter α is just the mean abundance per species, ∑_i n_i/S. Applying the solution for maximum likelihood estimates in Eq.
(<ref>) to the case F(n)=n, and translating back explicitly to the speciation rate, ν, the maximum likelihood estimate for ν satisfies the following equation: (1-ν)/(ν log(1/ν)) = ∑_i=1^S n_i/S = n̄. We can then use Eq. (<ref>), with parametrization determined by Eq. (<ref>), to compute any measure of goodness of fit, or likelihood, or comparison with alternative models, or whatever we wish—all using this point estimate of ν, which in turn requires n̄. Before going on, let's summarize what our definition of mechanistic model does and does not do, and how the example above can be generalized. First, we are assuming that the mechanistic model specifies a set of degrees of freedom (for example, species abundances, n, above), and that the model also leads to a solution for a probability distribution over these degrees of freedom, and moreover that this distribution belongs to an exponential family. We are also considering distributions where there remain one or more 'free' parameters, that encode some or other aspect of the ecological mechanism, but are not fixed to a particular value by the model itself. In the neutral example, the only free parameter is speciation rate, ν. In principle, we might be able to estimate this parameter independently of the species abundance data, or at least have some prior distribution on parameter values based on our knowledge of speciation processes. In this paper though we are considering models and contexts where all parameters that can be fixed independently have been, and the remaining free parameters must be estimated using a given data set. It is this perspective that leads us to the sufficient statistics for this particular model. This neutral model assumes that there are no selective forces, and that species abundances change due to ecological drift alone. We might therefore think of the dominant driver as being demographic stochasticity. What happens if we change the neutral assumption?
Alternative mechanistic hypotheses for the species abundance distribution can also result in predicted distributions belonging to an exponential family, but with different sufficient statistics than the neutral model. For example, if species abundances are driven by a large number of successive multiplicative factors (for example due to environmental stochasticity), the central limit theorem leads to a log normal distribution of species abundances, which belongs to an exponential family with quite different sufficient statistics: log(n) and log(n)^2 <cit.>. On the other hand, there is a key caveat here—it is by no means true that all ecological probability distributions will belong to an exponential family. A classic example of a distribution which does not is the Cauchy distribution, which has appeared in a variety of ecological contexts, including predictions of animal movement <cit.>. In summary, our approach can be easily applied to a range of ecological predictions, so long as the relevant probability distribution falls into an exponential family. § THE MAXIMUM ENTROPY PRINCIPLE In many cases, we can also make a prediction for a probability distribution using the principle of maximum entropy <cit.>, which we abbreviate as MaxEnt. MaxEnt here is defined as an inference principle, where the idea is to find the probability distribution that has the maximum possible entropy consistent with a given set of constraints from observed data. In this context, entropy is defined as: H = - ∑_n=1^∞ P(n) log P(n), and is such that larger values of H correspond (on average) to smaller amounts of information about the distribution P(n) in any single observation. In addition to finding the distribution P(n) that maximizes entropy, MaxEnt also allows for input from a given data set, in terms of constraints of the form: ∑_n=1^∞ P(n) F(n) = ∑_i F(n_i)/S = F̄. In other words, the MaxEnt distribution can be constrained so that its prediction for the theoretical average value of F(n) (evaluated
using the distribution P(n)) is equal to the observed average, F̄, for a given quantity, F(n), calculated using values F(n_i) drawn from empirical data. For a single constraint, the MaxEnt distribution for P(n) is <cit.> P_ME(n) = B(λ) e^-λ F(n) where the parameter λ is known as a Lagrange multiplier, and B(λ) is a normalization that depends on the value of this Lagrange multiplier for a particular data set. The value of λ is then determined using the form of the MaxEnt distribution and the measured values of F(n_i), which reduces to the expression: d log B(λ)/dλ = F̄, where F̄ = ∑_i=1^S F(n_i)/S is an estimator for the expectation value of the observable quantity F(n). There are then straightforward generalizations to multiple constraints and continuous variables <cit.>. For example, given J variables x_j and a set of K constraints, F_k({x_j}), where j runs from 1 to J and k runs from 1 to K, the MaxEnt prediction for P({x_j}) is: P_ME({x_j}) = B({λ_k}) e^-∑_k=1^K λ_k F_k({x_j}) where the value of each Lagrange multiplier λ_j is determined by ∂ log B({λ_k})/∂λ_j = F̄_j. § REINTERPRETING MAXENT AS A NULL MODEL Exactly how and which constraints should be chosen in existing MaxEnt approaches in ecology <cit.> is an open question. We propose that, when constrained by the sufficient statistics of any given mechanistic model, MaxEnt can be used as a null hypothesis with which to test the value of the model. We show this proposal graphically in Figure 1. For every mechanistic model with predicted distributions belonging to an exponential family, we can identify its sufficient statistics as the constraints in what we call the `corresponding' MaxEnt theory. If our mechanistic model can outperform its corresponding MaxEnt theory on a given data set, then specifying the details of the model and calculating its solution has been worthwhile.
If not, whatever we have contributed to the construction of the model is only useful in so much as it fixes the constraints to measure using the data—beyond that, our efforts as modelers have been futile. We propose the MaxEnt distribution as an appropriate null because the maximum entropy principle specifies as little as possible about the distribution beyond what is fixed by the sufficient statistics measured in a given data set. To perform this comparison quantitatively, we propose an `entropically-corrected' likelihood, where for a given data set we take the likelihood of the mechanistic model, and subtract the likelihood of the corresponding MaxEnt distribution. In the case of S observations of a discrete variable n, and a mechanistic model with distribution P_model(n) of the form given in Eq. (<ref>), our proposed measure of performance takes the form: log ℒ_corr(model | {n_i}) = ∑_i=1^S log P_model(n_i | α(F̄)) - ∑_i=1^S log P_ME(n_i | λ(F̄)) = S[ (1/S)∑_i=1^S log h(n_i) + (λ(F̄) - α(F̄)) F̄ + log A(α(F̄))/B(λ(F̄)) ], where P_ME(n|λ) is the MaxEnt distribution obtained by constraining the mean value of sufficient statistic F(n). This is essentially applying the analysis of Sections 1 and 2, and so α(F̄) and λ(F̄) are given by Eqs. (<ref>) and (<ref>), respectively, while A(α) and B(λ) are the corresponding normalizations of the mechanistic and MaxEnt distributions. Drawing from the classic literature on exponential families <cit.>, we note that the only possible difference between a mechanistic model distribution and its corresponding MaxEnt distribution arises in the form of the base measure, characterized as h(n) above. We think of this as a model-implied base measure, and it leads to a reduction in entropy (relative to the uniform base measure) arising from our specification of the mechanism. We note that the difference between α and λ (with the same sufficient statistic, F), comes only from the fact that they have been estimated using different choices of h(n).
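To make this concrete, the corrected likelihood can be evaluated numerically for the simplest case F(n) = n, where the mechanistic model is the log series (base measure h(n) = 1/n) and its MaxEnt counterpart is the geometric distribution (uniform base measure). A sketch under those assumptions; the abundance values passed in are illustrative only:

```python
import math

# Corrected log-likelihood sketch for F(n) = n: log series (mechanistic)
# versus geometric (MaxEnt), both fitted by maximum likelihood to the
# same mean abundance per species.

def logseries_alpha(nbar, tol=1e-12):
    """Solve the mean-abundance relation for the log-series parameter
    alpha, by bisection on x = exp(-alpha) in (0, 1)."""
    def mean(x):  # mean of the log series with x = 1 - nu
        return -x / ((1.0 - x) * math.log(1.0 - x))
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < nbar:
            lo = mid
        else:
            hi = mid
    return -math.log(0.5 * (lo + hi))

def corrected_loglik(ns):
    """Sum over observations of log P_logseries(n_i) - log P_geometric(n_i)."""
    nbar = sum(ns) / len(ns)
    alpha = logseries_alpha(nbar)
    lam = math.log(nbar / (nbar - 1.0))  # geometric Lagrange multiplier
    log_a = -math.log(-math.log(1.0 - math.exp(-alpha)))  # log A(alpha)
    log_b = math.log(math.exp(lam) - 1.0)                 # log B(lambda)
    ll_logseries = sum(log_a - math.log(n) - alpha * n for n in ns)
    ll_geometric = sum(log_b - lam * n for n in ns)
    return ll_logseries - ll_geometric
```

A positive value indicates that the model-implied base measure h(n) = 1/n explains the data better than the uniform measure, which is exactly the likelihood-ratio reading developed below.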
Our proposal is therefore a kind of likelihood ratio test for whether the model-implied base measure h(n) provides a better explanation of our data than the uniform measure. Moreover, if we already have strong evidence for a particular base measure over the uniform measure <cit.>, then we could also consider this as a new, more stringent null model for any new mechanistic prediction. In other words, our approach can be extended to compare different sets of mechanisms, with the same sufficient statistics but different base measures h(n). § APPLICATIONS TO EMPIRICAL DATA To provide a non-trivial mechanistic model, we turn to size-structured neutral theory (SSNT), and draw results below from <cit.>, and the Supporting Information for this manuscript. This is an extension of the neutral ecological model introduced above, but with the addition of a new variable representing the size, mass, or energy flux of an individual. Speciation is defined in the same way as in the standard neutral theory, but now birth and death rates b(m) and d(m) can depend on the size or mass of an organism, m. Also, there is a new process: ontogenetic growth. Each individual grows through time with a rate g(m), which may also depend explicitly on its current mass. To fully specify this theory, we need to determine the functions b(m), d(m) and g(m). For this analysis, we parametrize these functions in the simplest way, by setting all three to be independent of mass, m, and we use the notation b, d and g for these three, constant rates. Even in this case, the combination of birth, death and growth still introduces variation in individual masses, as well as variation in the average size and total biomass across different species.
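A quick Monte Carlo sketch (our illustration, not from the paper) shows the kind of mass variation this constant-rate parametrization produces: with constant death rate d and constant growth rate g, an individual alive at a random observation time has an approximately exponential age distribution with rate d, so its mass m = g × age is exponential with mean m_0 = g/d, matching the mass scale that appears in the size-structured predictions below.

```python
import random

# Sketch: sample steady-state individual masses under constant death
# rate d and constant ontogenetic growth rate g. Ages of surviving
# individuals are drawn as exponential(d); mass = g * age, so the
# sample mean should approach m_0 = g / d.

def sample_masses(g, d, n_individuals, seed=0):
    rng = random.Random(seed)
    return [g * rng.expovariate(d) for _ in range(n_individuals)]

masses = sample_masses(g=2.0, d=0.5, n_individuals=200_000)
m0_hat = sum(masses) / len(masses)  # should be close to g/d = 4.0
```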
The analysis of this section will provide an application of our approach using MaxEnt as a null model. It also raises a new question. For any given mechanistic model, there may be multiple possible distributions predicted, for example by marginalizing over some of the variables, which we could think of as unobserved. Each of these different ways of formulating a predicted distribution then has its own corresponding MaxEnt. In the case of these size-structured neutral models, we highlight this by focusing on two cases, which we term coarse-grained and fine-grained. In the coarse-grained prediction, we imagine we are only able to measure total biomass for each species, while in the fine-grained prediction we specify the biomasses of each individual. Each of these has a different corresponding MaxEnt distribution, even though the constraints are the same, and below we explore the consequences of these differences. §.§ Size-Structured Neutral Theory: Coarse-grained Description First, we consider the joint distribution that a species chosen at random will have abundance n and total biomass (summed across all n individuals) M. Under the rules of SSNT, this distribution is <cit.>: P_SSNT(n,M) = 1/(m_0 log(1/ν)) · (1-ν)^n/n! · (M/m_0)^(n-1) e^-M/m_0, where n takes values in the positive integers and M is a continuous variable > 0. (The latter definition is straightforward to generalize to account for a finite initial mass of new individuals). ν is the speciation rate in units of the generation time, while m_0 is a mass scale and is equal to the ratio of rates g/d. Finally, we note that marginalizing over total biomass M returns us to the simpler result for the log series species abundance distribution given in Eq. (<ref>). If one chooses not to measure species biomass, the predictions recapitulate the standard neutral theory. The two sufficient statistics of the joint distribution P_SSNT(n,M) given by Eq.
(<ref>) are mean biomass per species ∑_i M_i/S = M̄ and mean abundance per species, ∑_i n_i/S = n̄. More explicitly, the maximum likelihood estimates of parameters ν and m_0 are given by: (1-ν)/(ν log(1/ν)) = n̄, m_0 = M̄/n̄. We next carry out our strategy of constructing a MaxEnt distribution with uniform base measure to provide a baseline for the performance of P_SSNT(n,M). Constraining n̄ and M̄, we arrive at the following MaxEnt distribution for M and n in a size-structured community: P_SSME(n,M) = (e^λ_1 - 1) λ_2 e^-λ_1 n e^-λ_2 M, where the Lagrange multipliers impose the constraints on n̄ and M̄ and take the values: λ_1 = log(n̄/(n̄-1)), λ_2 = 1/M̄. We now have an explicit, multivariate example of our proposed entropic correction, which takes the form: log ℒ_corr(model | {n_i}, {M_i}) = ∑_i=1^S ( log[ 1/(m_0 log(1/ν)) · (1-ν)^n_i/n_i! · (M_i/m_0)^(n_i-1) e^-M_i/m_0 ] - log[ (e^λ_1 - 1) λ_2 e^-λ_1 n_i e^-λ_2 M_i ] ), where n_i is the abundance of species i, M_i is the total biomass of species i, and the sum is over all S observed species. In Figure <ref> we use Eq. (<ref>) to evaluate the performance of the size-structured neutral theory (with parameters set by Eq. (<ref>) and Lagrange multipliers also fixed using the data). For demonstration, we specifically examine two taxonomic groups with very different traits: trees and birds. We adopted forest plot data used in <cit.>, all except for one to which we did not have access. These include 75 plots from four continents (Asia, Australia, North America, and South America), with 2189 species and morpho-species, and 380590 individuals in total <cit.>. All individuals have been identified to species or morpho-species, with measurement of diameter at breast height (DBH). We converted DBH to biomass using a metabolic scaling ansatz <cit.>. (For a detailed description of the forest plot data and their manipulations, see <cit.>.
For a cleaned subset of these data, see the Dryad data package <cit.>.) For our second data set, we compiled all 2958 routes from the North American Breeding Bird Survey <cit.> that were sampled during 2009. These data are available from US Geological Survey (<https://www.pwrc.usgs.gov/bbs/rawdata>). Survey routes consist of 50 observation points, each separated by 0.5 mi. At each point all birds within 0.25 mi are identified and recorded by an expert observer. Body size data were taken from <cit.> and matched by taxonomy to records in the BBS data. Both route data and body mass data are available at <https://github.com/ajrominger/MaxEntSuffStat>. Across these 75 forest plots and 2958 locations from the Breeding Bird Survey, we find a consistent result: in all locations, SSNT is outperformed by its MaxEnt baseline. In other words, if all we know about a forest plot or a bird community is its mean abundance per species n̄ and mean biomass per species M̄, we should almost always reject size-structured neutral dynamics as an explanation for its species abundance and biomass distributions. In the case of the bird data, this is perhaps unsurprising—a model with ontogenetic growth continuing throughout an individual's lifetime will generate a broader range of intraspecific variation than we might expect in these species. In the case of the forest data, it would have been less surprising for the neutral model to perform well, but we still find that SSNT performs worse than its corresponding MaxEnt distribution. We note that while we interpret this as telling us that SSNT is a poor description of these data, it doesn't tell us that the corresponding MaxEnt distribution SSME is a good alternative. In particular, since by design the constraints of SSME are identical to those of SSNT, the poor performance of SSNT may suggest that neither distribution (in absolute terms) is likely to be a good description of these data.
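For reference, the point estimates entering this comparison follow directly from the two sufficient statistics. A sketch of the estimation step only (bisection for ν on the mean-abundance relation, the rest in closed form; the numerical values used are hypothetical):

```python
import math

# Estimate the SSNT parameters (nu, m_0) and the corresponding MaxEnt
# Lagrange multipliers (lambda_1, lambda_2) from the two sufficient
# statistics: mean abundance per species nbar and mean biomass per
# species Mbar.

def fit_ssnt(nbar, Mbar, tol=1e-12):
    # Solve (1 - nu) / (nu * log(1/nu)) = nbar by bisection; the left
    # side decreases from very large values (nu -> 0) toward 1 (nu -> 1).
    def mean_abund(nu):
        return (1.0 - nu) / (nu * math.log(1.0 / nu))
    lo, hi = 1e-15, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_abund(mid) > nbar:
            lo = mid
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    m0 = Mbar / nbar
    lam1 = math.log(nbar / (nbar - 1.0))
    lam2 = 1.0 / Mbar
    return nu, m0, lam1, lam2
```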
§.§ Size-Structured Neutral Theory: Fine-grained Description

We next consider a more fine-grained way to test the size-structured neutral theory. In addition to measuring each species' abundance and its total biomass, we also measure the mass of each of its individuals. Replacing the joint distribution above for n and M, we can make a neutral prediction for the precise distribution of masses within a species <cit.>:

P_SSNTI(n, m_1, …, m_n) = 1/log(1/ν) · (1-ν)^n/n · 1/m_0^n ∏_(j=1)^n e^(-m_j/m_0).

We have labeled this distribution 'SSNTI', where the I stands for individual-level. The sufficient statistics for the parameters ν and m_0 are again given by mean abundance per species and mean total biomass per species:

(1-ν)/(ν log(1/ν)) = n̄,    m_0 = M̄/n̄,

where M̄ is the mean over species of the total biomass ∑_(j=1)^n m_j. Using these as constraints, we can in parallel construct the corresponding individual-level maximum entropy distribution to use as a baseline for the performance of P_SSNTI(n, m_1, …, m_n):

P_SSMEI(n, m_1, …, m_n) = (e^λ_1 - 1) λ_2^n e^(-λ_1 n) ∏_(j=1)^n e^(-λ_2 m_j),

where the Lagrange multipliers impose the constraints on M̄ and n̄ and take the values:

λ_1 = log[n̄/(n̄-1)],    λ_2 = n̄/M̄.

The corrected log likelihood for this case is then

log ℒ_corr(model | {n_i}, {m_ij}) = ∑_i ( -log[(e^λ_1 - 1) log(1/ν)] + n_i log[e^λ_1 (1-ν)] - log[n_i] ),

where m_ij is the mass of the j-th individual from species i. In fact, in this expression all of the mass dependence cancels between the two terms, leaving us with a comparison of a log series and a geometric series. The mathematical independence of this quantity from individual masses allows us to calculate it even for the breeding bird data, for which no individual mass estimates are available.

In Figure <ref> we evaluate the performance of the individual-based size-structured neutral theory by computing its log likelihood (with parameters set by Eq. (<ref>)), with a maximum entropy baseline given by P_SSMEI(n, m_1, …, m_n), with Lagrange multipliers fixed using the data.
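The cancellation of the mass terms can be checked numerically. The sketch below (our own illustration, with arbitrary parameter values) compares the full per-species log-likelihood difference, masses included, against the mass-free expression above, using the maximum likelihood relation λ_2 = 1/m_0 under which the masses drop out:

```python
import math
import random

def full_diff(n, masses, nu, m0, lam1, lam2):
    """log P_SSNTI - log P_SSMEI for one species, keeping every mass term."""
    log_ssnti = (-math.log(math.log(1.0 / nu)) + n * math.log(1.0 - nu)
                 - math.log(n) - n * math.log(m0) - sum(masses) / m0)
    log_ssmei = (math.log(math.expm1(lam1)) + n * math.log(lam2)
                 - lam1 * n - lam2 * sum(masses))
    return log_ssnti - log_ssmei

def mass_free_diff(n, nu, lam1):
    """The same quantity after the mass terms cancel (valid when lam2 = 1/m0)."""
    return (-math.log(math.expm1(lam1) * math.log(1.0 / nu))
            + n * math.log(math.exp(lam1) * (1.0 - nu)) - math.log(n))

random.seed(0)
nu, m0, lam1 = 0.1, 2.0, 0.3   # arbitrary illustrative values
lam2 = 1.0 / m0                # the relation that makes masses drop out
```

Because the two expressions agree for every draw of individual masses, the comparison reduces to a log series versus a geometric series, exactly as stated above.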
Across the same forest and breeding bird plots as shown in Figure <ref>, we see that SSNTI is almost universally a better explanation of the data than the corresponding maximum entropy distribution, for the forest plots and the breeding bird data alike.

What changed? The individual-based SSNTI neutral model has a larger number of independent variables than its aggregated counterpart SSNT, but when conditioned on a fixed total biomass for a species, ∑_(j=1)^n m_j = M, P_SSNTI(n, m_1, …, m_n) becomes equal to P_SSNT(n, M): if you blur your eyes and only pick up on total biomass, the two neutral predictions are identical, as they should be. The same is not true of the two MaxEnt distributions, labeled SSME and SSMEI. What changed is that we implicitly told P_SSMEI that total biomass M is comprised of a set of individuals with masses {m_j}. The result is that the SSMEI model is identical to the SSNTI distribution in terms of the biomass factor, but differs in its prediction of the species abundance distribution. So all we are seeing in Figure <ref> is that the classic log-series SAD is generally a better description of these data than the geometric distribution.

It is not clear to us whether the similarity between the biomass terms in the MaxEnt and mechanistic models here is a general consequence of the increase in the degrees of freedom (here, going from SSNT to the more fine-grained SSNTI) or whether it is special to this case. More systematic investigation of the relationship between MaxEnt and mechanistic distributions as a function of aggregating degrees of freedom may be the most appropriate strategy to clarify this issue.

§ DISCUSSION

In this manuscript we related biological mechanism to the constraints used in the Maximum Entropy (MaxEnt) approach to predicting macroecological patterns.
We achieved this by proposing that the sufficient statistics of a mechanistic model should be used as MaxEnt constraints, though the procedure we introduced is incapable on its own of identifying a unique set of constraints for MaxEnt. Instead, we have (potentially) a different MaxEnt prediction corresponding to each different set of mechanisms, and we proposed that the natural way to use this prediction is as a null hypothesis. This null hypothesis has the properties of specifying unambiguously which quantities should be constrained, and it does not require or invoke any alternative mechanism for comparison <cit.>. In a sense, our null hypothesis is obtained by removing from the mechanistic distribution all mechanisms that create a bias over the support of the distribution, while retaining the aspects of the mechanisms that defined the support in the first place. We propose that if a mechanistic model performs worse on a given data set than its corresponding MaxEnt distribution, then this provides evidence against the mechanisms and assumptions of the model. We demonstrated this by testing size-structured neutral models against their corresponding MaxEnt baselines, using empirical data drawn from multiple forest plots and the Breeding Bird Survey.

This test raised another question: how fine-grained is our description of the data, and consequently how many degrees of freedom are there in our model's predicted probability distribution? For example, in the case of the forest data, we may be able to estimate just total species biomass, or we may measure each individual stem. In this analysis, we found that whether mechanistic distributions were favored over MaxEnt or vice versa depended not only on the mechanism, but also on the number of degrees of freedom used to describe the data. MaxEnt was generally favored when describing the data in terms of total species biomasses, while the size-structured neutral theory was favored when describing the data in terms of individual masses.
However, our analysis does not clarify whether in general there will be a systematic relationship between mechanistic model success or failure in these terms as we aggregate more or fewer degrees of freedom.

Where does this leave the Maximum Entropy Theory of Ecology <cit.> (METE), which prescribes a particular set of constraints, and makes predictions of the same types of distributions as the above? In previous work the results of METE have been compared with, e.g., neutral models <cit.>. But the MaxEnt distributions derived in this paper were specifically chosen to match the sufficient statistics of a given mechanistic model, and do not precisely match the distributions predicted by METE. In fact, we do not know of any flavor of mechanistic theory whose independent variables and sufficient statistics precisely match the standard METE degrees of freedom and state variables, but we note that the sufficient statistics of the size-structured neutral models, namely average species abundance and biomass, are extremely close to the METE state variables. Clarifying exactly what range of ecological mechanisms leads to these sufficient statistics might help us to understand why the METE state variables seem to perform well in the cases that they do, and might also give us insight into where METE might be expected to break down. Moreover, if we were able to show that certain sets of sufficient statistics are more likely than any others when looking across a range of ecological and evolutionary mechanisms, this would open the door to establishing preferred sets of state variables in a principled way.

Several important caveats in our approach are worth emphasizing. First, our example mechanistic models have (i) a finite set of sufficient statistics, (ii) a dimension of this set that does not increase with sample size, and (iii) a support of the predicted distribution that does not vary with parameter values.
These features meant that both the maximum entropy distributions and the mechanistic model distributions belonged to an exponential family, as defined in Section 1. Not all interesting mechanistic models in ecology will share these features, as many commonly-predicted probability distributions do not belong to exponential families. Second, we have assumed that model parameter values are either known and fixed independently of a dataset, or are free parameters to be estimated using the current data; we have not tackled intermediate cases where we have partial knowledge of these parameters. Third, our approach does not tell us whether either a mechanistic model or its MaxEnt counterpart is a good description of the data in absolute terms. For example, if there are too many constraints, apparently good fits of a given mechanistic model or its corresponding MaxEnt distribution may still be uninformative <cit.>. That is, our approach does not evaluate whether either of these distributions is overfitting a given data set. Finally, we focused on steady-state predictions. On the other hand, the prediction of fluctuation sizes on various timescales is precisely where simplified mechanistic models seem to break down <cit.>. At this point, we do not have a corresponding maximum entropy baseline for these models.

§ ACKNOWLEDGMENTS

We thank three reviewers for an excellent and constructive set of reviews, which helped to shape and convey the main messages of this manuscript. We also acknowledge extensive and helpful feedback from Cosma Shalizi and Ethan White on earlier drafts of the manuscript. JOD acknowledges the Simons Foundation Grant #376199, McDonnell Foundation Grant #220020439, and Templeton World Charity Foundation Grant #TWCF0079/AB47. AJR acknowledges funding from NSF grant DEB #1241253. R. K. Peet provided data for the North Carolina forest plots. T. Kohyama provided the Serimbu dataset through the PlotNet Forest Database. The eno-2plot (by N. Pitman) and DeWalt Bolivia (by S.
DeWalt) datasets were obtained from SALVIAS. The BCI forest dynamics research project was made possible by NSF grants to S. P. Hubbell: DEB #0640386, DEB #0425651, DEB #0346488, DEB #0129874, DEB #00753102, DEB #9909347, DEB #9615226, DEB #9405933, DEB #9221033, DEB #9100058, DEB #8906869, DEB #8605042, DEB #8206992, DEB #7922197, support from CTFS, the Smithsonian Tropical Research Institute, the John D. and Catherine T. MacArthur Foundation, the Mellon Foundation, the Small World Institute Fund, and numerous private individuals, and through the hard work of over 100 people from 10 countries over the past two decades. The UCSC Forest Ecology Research Plot was made possible by NSF grants to G. S. Gilbert (DEB #0515520 and DEB #084259), by the Pepper-Giberson Chair Fund, the University of California, and the hard work of dozens of UCSC students. These two projects are part of CTFS, a global network of large-scale demographic tree plots. The Luquillo Experimental Forest Long-Term Ecological Research Program was supported by grants BSR #8811902, DEB #9411973, DEB #0080538, DEB #0218039, DEB #0620910 and DEB #0963447 from NSF to the Institute for Tropical Ecosystem Studies, University of Puerto Rico, and to the International Institute of Tropical Forestry, USDA Forest Service, as part of the Luquillo Long-Term Ecological Research Program. Funds were contributed for the 2000 census by the Andrew Mellon Foundation and by CTFS. The U.S. Forest Service and the University of Puerto Rico gave additional support. We also thank the many technicians, volunteers and interns who have contributed to data collection in the field.

Supplementary Material

James P. O'Dwyer^1, Andrew Rominger^2, Xiao Xiao^3

1 Department of Plant Biology, University of Illinois, Urbana IL USA
2 Department of Environmental Science, Policy and Management, University of California, Berkeley, USA
3 School of Biology and Ecology, and Senator George J.
Mitchell Center for Sustainability Solutions, University of Maine, Orono ME USA

§ DERIVATION OF SIZE-STRUCTURED NEUTRAL THEORY RESULTS

In <cit.>, we derived an exact solution for a population undergoing birth at a rate b, mortality at a rate d(m), growth at a rate g(m), and immigration at a rate ν. We expressed the solution in terms of a generating functional, Z[H(m)], which is formulated as the limiting case of

𝒵[{h_i}] = ∑_{n_i} P({n_i}) e^(∑_i h_i n_i)

as a set of discrete size classes labeled by i becomes a continuum. In this discrete case, P({n_i}) is the probability that the population has n_i individuals in size class i. The community-level interpretation of this is as the probability that a given species, chosen at random from a neutral community, has a set of individuals with different sizes n_i. For simplicity, we consider the case where the mass of the smallest individuals is infinitesimally small, though this can be generalized. This leads to the following solution for the size-structured neutral theory generating functional:

𝒵[H(m)] = 1 - log[1 - log(1/ν)∫_0^∞ dm f(m)(e^(H(m)) - 1)]/log[1 + log(1/ν)∫_0^∞ dm f(m)].

Note that the form of this result differs slightly from <cit.>. In keeping with our other versions of neutral models in this paper, we have defined the speciation rate here to be a dimensionless per capita speciation rate, in units of the birth rate, b. We are also conditioning on abundances n > 0 (in <cit.> we considered a formulation which kept track of a class of extinct species with n = 0, and we have removed this here). Growth and mortality rates are then encoded in the function f(m), which satisfies:

d/dm(g(m)f(m)) + d(m)f(m) = 0,    f(0)g(0) = -b/log ν + b∫_0^∞ dm f(m).

From this generating functional, we can obtain the generating functions for various joint probability distributions using particular functional forms for the auxiliary function, H(m).
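As a sanity check of the discrete definition of 𝒵[{h_i}] above, the generating function evaluates to 1 at h = 0 (normalization), and its first derivatives at h = 0 return the mean abundances per size class. A toy two-size-class illustration (our own construction, with arbitrary weights):

```python
import math

# A toy two-size-class distribution P(n1, n2) on {0,...,3}^2 with arbitrary weights.
weights = {(n1, n2): math.exp(-0.7 * n1 - 0.4 * n2)
           for n1 in range(4) for n2 in range(4)}
norm = sum(weights.values())
P = {state: w / norm for state, w in weights.items()}

def gen_fun(h1, h2):
    """Z[{h_i}] = sum over {n_i} of P({n_i}) exp(sum_i h_i n_i)."""
    return sum(p * math.exp(h1 * n1 + h2 * n2) for (n1, n2), p in P.items())

# dZ/dh1 at h = 0 recovers the mean abundance in size class 1,
# estimated here by a central finite difference.
eps = 1e-6
mean_n1_numeric = (gen_fun(eps, 0.0) - gen_fun(-eps, 0.0)) / (2.0 * eps)
mean_n1_direct = sum(p * n1 for (n1, _), p in P.items())
```

The continuum generating functional plays the same role, with the sum over size classes replaced by an integral over m.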
In <cit.>, we solved for the species abundance distribution by setting H(m) = h_0, and for the species biomass distribution by setting H(m) = h_1 m. These relationships follow from the limit of the definition Eq. (<ref>).

§.§ Coarse-grained case

To obtain the generating function for the joint distribution of total abundance and total biomass, we correspondingly need to set H(m) = h_0 + h_1 m in Eq. (<ref>). This gives:

z(h_0, h_1) = 1 - log[1 - log(1/ν)∫_0^∞ dm f(m)(e^(h_0 + h_1 m) - 1)]/log[1 + log(1/ν)∫_0^∞ dm f(m)].

This generating function can then be transformed back into the following probability distribution P(n,M):

P(n,M) = 1/(n log[1 + log(1/ν)∫_0^∞ dm f(m)]) (log(1/ν)/(1 + log(1/ν)∫_0^∞ dm f(m)))^n [f⋆…⋆f](M)   (n-fold convolution),

where the biomass dependence takes the form of an n-fold convolution of f with itself. This can be checked by direct substitution:

∫_0^∞ dM ∑_(n=1)^∞ P(n,M) e^(h_0 n + h_1 M)
= ∑_(n=1)^∞ e^(h_0 n) · 1/(n log[1 + log(1/ν)∫_0^∞ dm f(m)]) (log(1/ν)/(1 + log(1/ν)∫_0^∞ dm f(m)))^n ∫_0^∞ dM e^(h_1 M) [f⋆…⋆f](M)
= ∑_(n=1)^∞ e^(h_0 n) · 1/(n log[1 + log(1/ν)∫_0^∞ dm f(m)]) (log(1/ν)/(1 + log(1/ν)∫_0^∞ dm f(m)))^n [∫_0^∞ dM f(M) e^(h_1 M)]^n
= -log[1 - log(1/ν)∫_0^∞ dM f(M) e^(h_0 + h_1 M)/(1 + log(1/ν)∫_0^∞ dM f(M))] / log[1 + log(1/ν)∫_0^∞ dm f(m)]
= 1 - log[1 - log(1/ν)∫_0^∞ dm f(m)(e^(h_0 + h_1 m) - 1)]/log[1 + log(1/ν)∫_0^∞ dm f(m)].

This result is general, and can be applied in cases where g(m) and d(m) depend on individual body size. In this paper, we focused instead on the 'completely neutral' limit, where individuals have identical rates b, d and g, independent of their size/mass. In this case, we had shown earlier <cit.> that solving Eq.
(<ref>) (again adapting to the per capita definition of ν = 1 - b/d that we use throughout this current paper) results in:

f(m) = (1-ν)/(ν log(1/ν)) · (d/g) e^(-(d/g)m).

Note that when we integrate over all sizes, we find:

∫_0^∞ dm f(m) = (1-ν)/(ν log(1/ν)),

which is the standard non-zero-sum neutral theory result for the total number of individuals divided by the expected number of species. Hence we have (when integrated over all size classes) the correct expression for the total number of individuals per species.

The exponential distribution belongs to the larger class of Gamma distributions, which in turn are a particular case of Tweedie distributions. Tweedie distributions have the nice property that we can convolve them with themselves as many times as we like, and the result takes the same functional form but with rescaled parameters. This makes computing the convolution product straightforward, and for this case we have:

[f⋆…⋆f](M)   (n times) = 1/(n-1)! ((1-ν)/(ν log(1/ν)))^n (d/g)(dM/g)^(n-1) e^(-(d/g)M).

Putting this together with the general result above, we have for the 'coarse-grained' size-structured neutral theory:

P(n,M) = 1/(n! log(1/ν)) (log(1/ν)/(1 + (1-ν)/ν))^n ((1-ν)/(ν log(1/ν)))^n (d/g)(dM/g)^(n-1) e^(-(d/g)M) = 1/(n! log(1/ν)) (1-ν)^n (d/g)(dM/g)^(n-1) e^(-(d/g)M).

This is what we reported in the main text, where we defined a size/mass scale m_0 = g/d for notational convenience (note that this is distinct from the notation used in <cit.>, where m_0 was used to denote the minimum mass of an individual).
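The convolution closure used above (the n-fold self-convolution of an exponential density is a Gamma density with shape n and scale m_0 = g/d) is easy to verify by Monte Carlo. A quick sketch, with parameter values of our own choosing:

```python
import math
import random

def nfold_exp_conv(M, n, m0):
    """n-fold self-convolution of an Exp(mean m0) density:
    a Gamma density with shape n and scale m0."""
    return M ** (n - 1) * math.exp(-M / m0) / (math.gamma(n) * m0 ** n)

random.seed(1)
n, m0 = 4, 1.5
# Total biomass of n individuals with i.i.d. exponential masses.
totals = [sum(random.expovariate(1.0 / m0) for _ in range(n))
          for _ in range(200_000)]

# Empirical probability of total biomass landing in [4, 5),
# versus the Gamma prediction integrated by the midpoint rule.
lo, hi = 4.0, 5.0
empirical = sum(lo <= t < hi for t in totals) / len(totals)
steps = 1000
width = (hi - lo) / steps
predicted = sum(nfold_exp_conv(lo + (k + 0.5) * width, n, m0) * width
                for k in range(steps))
```

The agreement between `empirical` and `predicted` illustrates why the biomass factor of P(n,M) takes the Gamma form stated above.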
§.§ Fine-grained case

To obtain the generating function for the joint distribution of total abundance and all individual biomasses for the completely neutral size-structured theory, we first consider the distribution of individual biomasses conditioned on total abundance being n. The generating function of this distribution can be identified by treating the auxiliary function as a constant term plus an additional function, H(m) = h_0 + h(m), expanding 𝒵[H(m)] in powers of e^(h_0), and extracting the term proportional to e^(h_0 n), to obtain:

Z[h(m)] = (∫_0^∞ dm f(m)e^(h(m))/∫_0^∞ dm f(m))^n.

We also note that when conditioned on ∫_0^∞ dm n(m) = n, the only allowable size spectra must take the form

n(m) = ∑_(i=1)^n δ(m - m_i),

where m_i is the mass of individual i, and we have used the Dirac delta function. That is, the spectrum of a species with exactly n individuals must at any one point in time consist of a set of infinitely sharp spikes located at the masses of its constituent individuals. Hence we can write:

Z[h(m)] = ∫[dn] 𝒫[n(m)|n] e^(∫_0^∞ dm h(m)n(m)) = ∫∏_(i=1)^n dm_i P_SSNTI({m_i}|n) e^(∑_(i=1)^n h(m_i)),

where 𝒫[n(m)|n] is a functional giving the probability of a species consisting of a size/mass spectrum n(m) when conditioned on total abundance n, while P_SSNTI({m_i}|n) is an equivalent description in terms of the probability that the same species consists of n individuals with the specific set of n biomasses {m_i}. From the form of Eq. (<ref>) we then have

P_SSNTI({m_i}|n) = ∏_(i=1)^n f(m_i)/∫_0^∞ dm f(m).

In the completely neutral case, f(m_i)/∫_0^∞ dm f(m) = (d/g)e^(-(d/g)m_i), and also P_NT(n) = (1-ν)^n/(n log(1/ν)); putting these results together gives us:

P_SSNTI(n, {m_i}) = P_SSNTI({m_i}|n) P_NT(n) = 1/log(1/ν) · (1-ν)^n/n · 1/m_0^n ∏_(j=1)^n e^(-m_j/m_0),

where m_0 = g/d as in the main text.

[Baribault et al.(2011a)Baribault, Kobe & Finley]dryad_r9p70 Baribault, T. W., Kobe, R. K. & Finley, A. O. (2011a).
Data from: Tropical tree growth is correlated with soil phosphorus, potassium, and calcium, though not for legumes. <http://dx.doi.org/10.5061/dryad.r9p70>.[Baribault et al.(2011b)Baribault, Kobe & Finley]Baribault2011 Baribault, T. W., Kobe, R. K. & Finley, A. O. (2011b). Tropical tree growth is correlated with soil phosphorus, potassium, and calcium, though not for legumes. Ecological Monographs, 82, 189–203.[Benhamou(2007)]benhamou2007many Benhamou, S. (2007). How many animals really do the levy walk? Ecology, 88, 1962–1969.[Bradford et al.(2014)Bradford, Murphy, Ford, Hogan & Metcalfe]Bradford2014 Bradford, M. G., Murphy, H. T., Ford, A. J., Hogan, D. & Metcalfe, D. J. (2014). Long-term stem inventory data from tropical rain forest plots in Australia. Ecology, 95, 2362.[Chisholm & O'Dwyer(2014)]chisholm2014ages Chisholm, R. & O'Dwyer, J. (2014). Species ages in neutral biodiversity models. Theoretical Population Biology, 93, 85–94.[Chisholm et al.(2014)Chisholm, Condit, Rahman, Baker, Bunyavejchewin, Chen, Chuyong, Dattaraja, Davies, Ewango, Gunatilleke, Gunatilleke, Hubbell, Kenfack, Kiratiprayoon, Lin, Makana, Pongpattananurak, Pulla, Punchi-Manage, Sukumar, Su, Sun, Suresh, Tan, Thomas & Yap]Chisholm2014b Chisholm, R. A., Condit, R., Rahman, K. A., Baker, P. J., Bunyavejchewin, S., Chen, Y.-Y., Chuyong, G., Dattaraja, H. S., Davies, S., Ewango, C. E. N., Gunatilleke, C. V. S., Gunatilleke, I. A. U. N., Hubbell, S., Kenfack, D., Kiratiprayoon, S., Lin, Y., Makana, J.-R., Pongpattananurak, N., Pulla, S., Punchi-Manage, R., Sukumar, R., Su, S.-H., Sun, I.-F., Suresh, H. S., Tan, S., Thomas, D. & Yap, S. (2014). Temporal variability of forest communities: empirical estimates of population change in 4000 tree species. Ecology Letters, 17, 855–865.[Condit(1998a)]Condit1998a Condit, R. (1998a). Ecological implications of changes in drought patterns: shifts in forest composition in Panama. Climatic Change, 39, 413–427.[Condit(1998b)]Condit1998 Condit, R. (1998b). 
Tropical forest census plots. Springer-Verlag and R. G. Landes Company, Berlin, Germany, and Georgetown, Texas.[Condit et al.(2004)Condit, Aguilar, Hernández, Pérez, Lao, Angehr, Hubbell & Foster]Condit2004 Condit, R., Aguilar, S., Hernández, A., Pérez, R., Lao, S., Angehr, G., Hubbell, S. P. & Foster, R. B. (2004). Tropical forest dynamics across a rainfall gradient and the impact of an El Niño dry season. Journal of Tropical Ecology, 20, 51–72.[Connor & Simberloff(1979)]connor1979assembly Connor, E. F. & Simberloff, D. (1979). The assembly of species communities: chance or competition? Ecology, 60, 1132–1140.[Darmois(1945)]darmois1945limites Darmois, G. (1945). Sur les limites de la dispersion de certaines estimations. Revue de l'Institut International de Statistique, 9–15.[DeWalt et al.(1999)DeWalt, Bourdy, ChÁvez de Michel & Quenevo]DeWalt1999 DeWalt, S. J., Bourdy, G., ChÁvez de Michel, L. R. & Quenevo, C. (1999). Ethnobotany of the Tacana: Quantitative inventories of two permanent plots of Northwestern Bolivia. Economic Botany, 53, 237–260.[Dunning(2007)]dunning2007 Dunning, J. (2007). Handbook of Avian Body Masses. CRC, Boca Raton, FL.[Enquist & Niklas(2002)]enquist2002global Enquist, B. J. & Niklas, K. J. (2002). Global allocation rules for patterns of biomass partitioning in seed plants. Science, 295, 1517–1520.[Etienne & Alonso(2007)]etienne2007neutral Etienne, R. S. & Alonso, D. (2007). Neutral community theory: how stochasticity and dispersal-limitation can explain species coexistence. Journal of Statistical Physics, 128, 485–510.[Fisher et al.(1943)Fisher, Corbet & Williams]fisher1943relation Fisher, R. A., Corbet, A. S. & Williams, C. B. (1943). The relation between the number of species and the number of individuals in a random sample of an animal population. The Journal of Animal Ecology, 42–58.[Fung et al.(2016)Fung, O'Dwyer, Rahman, Fletcher & Chisholm]fung2016reproducing Fung, T., O'Dwyer, J. P., Rahman, K. A., Fletcher, C. D. & Chisholm, R. A. 
(2016). Reproducing static and dynamic biodiversity patterns in tropical forests: the critical role of environmental variance. Ecology, 97, 1207–1217.[Gilbert et al.(2010)Gilbert, Howard, Ayala-Orozco, Bonilla-Moheno, Cummings, Langridge, Parker, Pasari, Schweizer & Swope]Gilbert2010 Gilbert, G. S., Howard, E., Ayala-Orozco, B., Bonilla-Moheno, M., Cummings, J., Langridge, S., Parker, I. M., Pasari, J., Schweizer, D. & Swope, S. (2010). Beyond the tropics: forest structure in a temperate forest mapped plot. Journal of Vegetation Science, 21, 388–405.[Gotelli & Graves(2006)]gotelli2006null Gotelli, N. J. & Graves, G. R. (2006). Null models in ecology.[Gotelli & McGill(2006)]gotelli2006neutral Gotelli, N. J. & McGill, B. J. (2006). Null versus neutral models: what's the difference? Ecography, 29, 793–800.[Haegeman & Etienne(2008)]haegeman2008relaxing Haegeman, B. & Etienne, R. S. (2008). Relaxing the zero-sum assumption in neutral biodiversity theory. Journal of Theoretical Biology, 252, 288–294.[Haegeman & Loreau(2008)]haegeman2008limitations Haegeman, B. & Loreau, M. (2008). Limitations of entropy maximization in ecology. Oikos, 117, 1700–1710.[Harte(2011)]harte2011maximum Harte, J. (2011). Maximum entropy and ecology: a theory of abundance, distribution, and energetics. Oxford University Press.[Harte et al.(1999)Harte, Kinzig & Green]Harte1999 Harte, J., Kinzig, A. & Green, J. L. (1999). Self-similarity in the distribution and abundance of species. Science, 284, 334–336.[Harte & Newman(2014)]harte2014maximum Harte, J. & Newman, E. A. (2014). Maximum information entropy: a foundation for ecological theory. Trends in ecology & evolution, 29, 384–389.[Harte et al.(2015)Harte, Rominger & Zhang]harte2015 Harte, J., Rominger, A. & Zhang, W. (2015). Integrating macroecological metrics and community taxonomic structure. Ecology letters, 18, 1068–1077.[Harte et al.(2009)Harte, Smith & Storch]harte2009biodiversity Harte, J., Smith, A. B. & Storch, D. (2009). 
Biodiversity scales from plots to biomes with a universal species–area curve. Ecology letters, 12, 789–797.[Harte et al.(2008)Harte, Zillio, Conlisk & Smith]harte2008maximum Harte, J., Zillio, T., Conlisk, E. & Smith, A. (2008). Maximum entropy and the state-variable approach to macroecology. Ecology, 89, 2700–2711.[Harvey et al.(1983)Harvey, Colwell, Silvertown & May]harvey1983null Harvey, P. H., Colwell, R. K., Silvertown, J. W. & May, R. M. (1983). Null models in ecology. Annual Review of Ecology and Systematics, 14, 189–211.[Hubbell(2001)]Hubbell2001 Hubbell, S. P. (2001). The Unified Neutral Theory of Biodiversity and Biogeography. Princeton Univ. Press, Princeton.[Hubbell et al.(2005)Hubbell, Condit & Foster]Hubbell2005 Hubbell, S. P., Condit, R. & Foster, R. B. (2005). Barro Colorado forest census plot data. <https://ctfs.arnarb.harvard.edu/webatlas/datasets/bci>.[Hubbell et al.(1999)Hubbell, Foster, O'Brien, Harms, Condit, Wechsler, Wright & Loo de Lao]Hubbell1999 Hubbell, S. P., Foster, R. B., O'Brien, S. T., Harms, K. E., Condit, R., Wechsler, B., Wright, S. J. & Loo de Lao, S. (1999). Light-gap disturbances, recruitment limitation, and tree diversity in a neotropical forest. Science, 283, 554–557.[Jaynes(1957)]Jaynes1957information Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical review, 106, 620.[Jeffreys(1960)]jeffreys1960extension Jeffreys, H. (1960). An extension of the pitman–koopman theorem. In: Mathematical Proceedings of the Cambridge Philosophical Society, vol. 56. Cambridge Univ Press.[Kohyama et al.(2001)Kohyama, Suzuki, Partomihardjo & Yamada]Kohyama2001 Kohyama, T., Suzuki, E., Partomihardjo, T. & Yamada, T. (2001). Dynamic steady state of patch-mosaic tree size structure of a mixed dipterocarp forest regulated by local crowding. Ecological Research, 16, 85–98.[Kohyama et al.(2003)Kohyama, Suzuki, Partomihardjo, Yamada & Kubo]Kohyama2003 Kohyama, T., Suzuki, E., Partomihardjo, T., Yamada, T. & Kubo, T. (2003). 
Tree species differentiation in growth, recruitment and allometry in relation to maximum height in a Bornean mixed dipterocarp forest. Journal of Ecology, 91, 797–806.[Koopman(1936)]koopman1936distributions Koopman, B. O. (1936). On distributions admitting a sufficient statistic. Transactions of the American Mathematical Society, 39, 399–409.[Lopez-Gonzalez et al.(2009)Lopez-Gonzalez, Lewis, Burkitt, Baker & Phillips]Lopez-Gonzalez2009 Lopez-Gonzalez, G., Lewis, S. L., Burkitt, M., Baker, T. R. & Phillips, O. L. (2009). ForestPlots.net Database. www.forestplots.net. Date of extraction [06, 07, 2012].[Lopez-Gonzalez et al.(2011)Lopez-Gonzalez, Lewis, Burkitt & Phillips]Lopez-Gonzalez2011 Lopez-Gonzalez, G., Lewis, S. L., Burkitt, M. & Phillips, O. L. (2011). ForestPlots.net: a web application and research tool to manage and analyse tropical forest plot data. Journal of Vegetation Science, 22, 610–613.[Marquet et al.(2014)Marquet, Allen, Brown, Dunne, Enquist, Gillooly, Gowaty, Green, Harte, Hubbell et al.]marquet2014theory Marquet, P. A., Allen, A. P., Brown, J. H., Dunne, J. A., Enquist, B. J., Gillooly, J. F., Gowaty, P. A., Green, J. L., Harte, J., Hubbell, S. P. et al. (2014). On theory in ecology. BioScience, 64, 701–710.[May(1975)]May1975 May, R. (1975). Patterns of species abundance and diversity. In: Ecology and Evolution of Communities. Belknap Press.[McDonald et al.(2002)McDonald, Peet & Urban]McDonald2002 McDonald, R. I., Peet, R. K. & Urban, D. L. (2002). Environmental correlates of oak decline and red maple increase in the North Carolina piedmont. Castanea, 67, 84–95.[McGill(2010)]mcgill2010towards McGill, B. J. (2010). Towards a unification of unified theories of biodiversity. Ecology Letters, 13, 627–642.[McGill et al.(2007)McGill, Etienne, Gray, Alonso, Anderson, Benecha, Dornelas, Enquist, Green, He et al.]mcGill2007species McGill, B. J., Etienne, R. S., Gray, J. S., Alonso, D., Anderson, M. J., Benecha, H. K., Dornelas, M., Enquist, B.
Oakland University, Department of Mathematics and Statistics, 146 Library Drive, Rochester, MI 48309 cesmelio@oakland.edu. We analyze a weak formulation of the coupled problem defining the interaction between a free fluid and a poroelastic structure. The problem is fully dynamic and is governed by the time-dependent incompressible Navier-Stokes equations and the Biot equations. Under a small data assumption, existence and uniqueness results are proved and a priori estimates are provided.

Keywords: Navier-Stokes, Darcy, Biot, poroelasticity, weak formulation, existence. MSC: 35Q30, 35Q35.

§ INTRODUCTION.
We consider a fully dynamic model for the interaction of an incompressible Newtonian fluid with a poroelastic material where the boundary is assumed to be fixed. The fluid flow is governed by the time-dependent incompressible Navier-Stokes equations. For the poroelastic material we use the Biot system with appropriate flow and stress couplings on the interface between the fluid and the poroelastic regions. This problem is a fully dynamic coupled system of mixed hyperbolic-parabolic type and inherits all of the mathematical and numerical difficulties involved in the standard fluid-structure interaction and Stokes/Navier-Stokes-Darcy couplings. The literature is rich in works on related coupled problems, and here we provide only a partial list of relevant publications. One related problem deals with the interaction of an incompressible fluid with a porous material, modeled by the coupling of the Stokes/Navier-Stokes equations to the Darcy equations. The steady-state case of this problem is analyzed mathematically in <cit.> and the time-dependent case in <cit.>.
Another related problem is that of fluid-structure interaction. The analysis of a weak solution for the time-dependent coupling of the Stokes and the linear elasticity equations is discussed in <cit.>, and the coupling of the time-dependent 2D incompressible Navier-Stokes equations with a linearly viscoelastic or linearly elastic Koiter shell is analyzed in <cit.>. The two-layered structure version of this problem is discussed in <cit.>. In geosciences, aquifers and oil/gas reservoirs are porous and deformable, affecting groundwater and oil/gas flow, respectively <cit.>. In biomedical sciences, blood flow is influenced by the porous and deformable nature of the arterial wall <cit.>. Therefore, mathematical models that are used to simulate these flow problems must account for both the effects of porosity and elasticity. The Navier-Stokes/Biot system is investigated numerically in <cit.> using a monolithic and a domain decomposition technique, and the Stokes/Biot system is investigated in <cit.> using an operator splitting approach and in <cit.> using an optimization-based decoupling strategy. In <cit.>, variational formulations for the Stokes/Biot system are developed using semi-group methods. A two-layered version was also studied in <cit.>. In this paper, we focus on the coupling of the fully dynamic incompressible Navier-Stokes equations (for the free fluid) with the Biot system (for the poroelastic structure completely saturated with fluid). This coupled problem is the nonlinear version of the problem presented in <cit.>. We construct a weak formulation and show local existence and uniqueness of its solution under a small data assumption. We note that this small data assumption is not needed if the fluid is represented by the linear Stokes equations rather than the Navier-Stokes equations, in which case the result is also global. We assume that the boundaries and the interface between the fluid and the poroelastic material are fixed.
The proof proceeds by constructing semi-discrete Galerkin approximations, obtaining the necessary a priori estimates, and passing to the limit. To the author's knowledge, there is no such analysis for this fully dynamic nonlinear coupled system. The outline of this paper is as follows: In Section 2, we introduce the equations governing the problem and appropriate interface, boundary and initial conditions. The next section is devoted to notation and some well-known results that are used in the forthcoming sections. Section 4 sets the assumptions on data, presents the weak formulation and shows that it is equivalent to the problem. Section 5 summarizes the main result of the paper. Section 6 contains the proof of the existence and uniqueness results and a priori estimates for the weak solution.

§ FLUID-POROELASTIC MODEL EQUATIONS
Let Ω⊂ℝ^d, d=2,3 be an open bounded domain with Lipschitz continuous boundary ∂Ω. The domain Ω is made up of two regions Ω_f, the fluid region, and Ω_p, the poroelastic region, separated by a common interface Γ_I = ∂Ω_f ∩∂Ω_p. Both Ω_f and Ω_p are assumed to be Lipschitz. See Figure <ref>. The first region Ω_f is occupied by a free fluid and has boundary Γ_f such that Γ_f=Γ^in_f ∪Γ^out_f ∪Γ^ext_f ∪Γ_I, where Γ^in_f and Γ^out_f represent the inlet and outlet boundary, respectively. The second region Ω_p is occupied by a saturated poroelastic structure with boundary Γ_p such that Γ_p=Γ_p^s∪Γ^ext_p ∪Γ_I, where Γ^s_p∪Γ^ext_p represents the outer structure boundary. Fluid flow is governed by the time-dependent incompressible Navier-Stokes equations: ρ_f_f-2μ_f ∇·(_f) + ρ_f_f·∇_f+ ∇ p_f= _f in Ω_f × (0,T), ∇·_f = 0 in Ω_f× (0,T). Here _f denotes the velocity vector of the fluid, p_f the pressure of the fluid, ρ_f the density of the fluid, μ_f the constant fluid viscosity, and _f the body force acting on the fluid. A dot above a symbol denotes the time derivative. The strain rate tensor (_f) is defined by: (_f) = 1/2( ∇_f + (∇_f)^T).
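Concretely, the strain rate tensor is nothing but the symmetric part of the velocity gradient. A minimal numerical sketch of this definition (the gradient entries below are made-up illustration values, not data from the model):

```python
import numpy as np

def strain_rate(grad_v):
    """Strain rate tensor D(v): the symmetric part of the velocity gradient."""
    return 0.5 * (grad_v + grad_v.T)

# Hypothetical 2D velocity gradient at a single point.
grad_v = np.array([[1.0, 3.0],
                   [1.0, -1.0]])
D = strain_rate(grad_v)
# D = [[1, 2], [2, -1]] is symmetric; its trace equals div v,
# which vanishes here, as the incompressibility condition requires.
```

Note that trace(D(v)) = ∇·v always, so for a divergence-free field the strain rate tensor is trace-free at every point.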
The Cauchy stress tensor is given by: _f =2μ_f D(_f)- p_f . So (<ref>) can also be written as ρ_f_f-∇·_f + ρ_f_f·∇_f = _f. Equation (<ref>) represents the conservation of linear momentum, while equation (<ref>) is the incompressibility condition that represents the conservation of mass. The poroelastic system is a fully dynamic coupled system of mixed hyperbolic-parabolic type represented by the Biot model <cit.>: ρ_p-2 μ_s∇·()- λ_s ∇ (∇·)+ α∇ p_p= _s in Ω_p× (0,T), (s_0 _p+ α∇· ) -∇·∇ p_p=f_p in Ω_p× (0,T), where is the displacement of the structure, p_p is the pore pressure of the fluid, and _p is the fluid velocity in the pores. Here, f_p is the source/sink term and _s is the body force. The parameters μ_s and λ_s denote the Lamé constants for the solid skeleton. The density of the saturated medium is denoted by ρ_p, and the hydraulic conductivity by K. In the Biot model, the first equation, (<ref>), is the momentum equation for the balance of forces and the second equation, (<ref>), is the diffusion equation of fluid mass. The total stress tensor for the poroelastic structure is given by: _p = _p^E - αp_p , where _p^E is the elasticity stress tensor defined by _p^E = 2 μ_s () + λ_s (∇· ). Therefore, (<ref>) can also be written as ρ_p -∇·_p= _s. The constrained specific storage coefficient is denoted by s_0 and the Biot-Willis constant by α; the latter is usually close to unity. In the subsequent discussion, we assume that the motion of the structure is small enough so that the domain is fixed at its reference position. All the physical parameters are assumed to be constant in space and time. Next, we prescribe boundary, interface and initial conditions, where _f and _p denote the outward unit normal vectors of Ω_f and Ω_p, respectively, and _Γ is the unit normal vector of the interface Γ_I pointing from Ω_f to Ω_p. Hence, _Γ_I=_f|_Γ_I=-_p|_Γ_I.
Furthermore, _Γ^l, l=1,…,d-1, denotes an orthonormal set of unit vectors on the tangent plane to Γ_I.

§.§.§ Boundary conditions: Since the boundary conditions have no significant effect on the fluid-poroelastic interaction, for simplicity they are chosen such that the normal fluid stress is prescribed on the inlet and outlet boundaries, and the poroelastic structure is assumed to be fixed at the inlet and outlet boundaries and to have zero tangential displacement on the external structure boundary, that is, on Γ_f^in×(0,T), _f _f = - P_in(t)_f; on Γ_f^out×(0,T), _f_f = ; on Γ_f^ext×(0,T), _f= ; on Γ_p^s×(0,T), ∇p_p·_p=0, = ; on Γ_p^ext×(0,T), {[ p_p=0, _p·_p^E_p = ,; _p^l·=0, 1≤l≤d-1. ].

§.§.§ Initial conditions: As initial conditions, we assume that the system is initially at rest. At t=0: _f =, p_p = 0, =, = .

§.§.§ Interface conditions on Γ_I× (0,T): The interface conditions are given by _f ·_Γ=( -∇ p_p) ·_Γ , _f_Γ= _p_Γ , _Γ·_f_Γ=- p_p , _Γ·_f_Γ^l =- β (_f - )·_Γ^l, 1≤ l≤ d-1, where β denotes the resistance parameter in the tangential direction. Condition (<ref>) is the continuity of normal flux, which enforces mass conservation, and condition (<ref>) is the balance of stresses, that is, the total stresses of the fluid and the poroelastic medium must match at the interface. Condition (<ref>) guarantees the balance of normal components of the stress in the fluid phase across the interface. Finally, condition (<ref>) is the Beavers-Joseph-Saffman condition <cit.>, which assumes that the tangential stress of the fluid is proportional to the slip rate. More details on the interface conditions can be found in <cit.>.

§ NOTATION AND USEFUL RESULTS.
Let Ω⊂^d, d=2, 3 be a bounded, open connected domain with a Lipschitz continuous boundary ∂Ω. Let (Ω) be the space of all infinitely differentiable functions with compact support in Ω. For s∈ℝ, H^s(Ω) denotes the standard Sobolev space of order s equipped with its standard seminorm |·|_s,Ω and norm ·_s,Ω.
We denote the vector- and matrix-valued Sobolev spaces as follows: ^s(Ω):=[H^s(Ω)]^d, ^s(Ω):=[H^s(Ω)]^d× d, and still write |·|_s,Ω and ·_s,Ω for the corresponding seminorms and norms. When s=0, instead of H^0(Ω), ^0(Ω) and ^0(Ω) we write L^2(Ω), ^2(Ω) and ^2(Ω), and instead of |·|_0,Ω and ·_0,Ω we write |·|_Ω and ·_Ω. The scalar product of L^2(Ω) is denoted by (·, ·)_Ω. The spaces (;Ω)={∈^2(Ω): ∇·∈ L^2(Ω)} and ^3/2(;Ω)={∈^2(Ω): ∇·∈ L^3/2(Ω)} are equipped with the graph norm. If Γ⊂∂Ω and v∈ H^1/2(Γ), we define the extension ṽ of v as ṽ=v on Γ, ṽ=0 on ∂Ω\Γ, and define the space of traces of all functions of H^1(Ω) that vanish on ∂Ω\Γ as follows: H_00^1/2(Γ)={v∈ L^2(Γ): ṽ∈ H^1/2(∂Ω)}. We also define, for any 1≤ r≤∞, L^r(a, b;X)={f : f_L^r(a,b;X)<∞} equipped with the norm f_L^r(a,b;X)=(∫_a^bf(t)^r_X dt)^1/r for 1≤ r<∞ and f_L^∞(a,b;X)= esssup_t∈ [a,b]f(t)_X. Furthermore, (0,T;X) denotes the set of all functions that are continuous into X, and finally we define H^1(a,b;X)={f∈ L^2(a,b;X): ḟ∈ L^2(a,b;X)}.

§.§.§ Useful results. Here we state inequalities and results to be used throughout the paper. More details can be found in <cit.>. We define the following spaces for the weak solution: _f ={∈^1(Ω_f): =Γ_f^ext}, _f ={∈_f: ∇·=0 }, _p ={∈^1(Ω_p):=Γ_p^s,_p^l·=0, 1≤ l≤ d-1 Γ_p^ext}, Q_f =L^2(Ω_f), Q_p ={r∈ H^1(Ω_p): r=0 Γ_p^ext}. On these spaces, we have the following trace inequalities: ∀∈_f, _Γ_I ≤ T_1||_1,Ω_f, _Γ_f^in≤ T_2||_1,Ω_f; ∀q∈Q_p, q_H^1/2(Γ_I) ≤ T_3|q|_1,Ω_p, q_Γ_p^s≤ T_4|q|_1,Ω_p; ∀∈_p, _Γ_I ≤ T_5||_1, Ω_p; the Poincaré inequalities: ∀∈_f, _Ω_f ≤ P_1||_1, Ω_f; ∀∈_p, _Ω_p ≤ P_2||_1, Ω_p; ∀q∈Q_p, q_Ω_p ≤ P_3|q|_1, Ω_p; a Sobolev inequality: ∀∈_f, _^4(Ω_f)≤ S_f||_^1(Ω_f); and finally Korn's inequality: ∀∈_f, ||_1,Ω_f≤ K_f()_Ω_f, where T_1-T_5, P_1-P_3, S_f and K_f are positive constants depending only on their corresponding domain. (Gronwall's inequality <cit.>) Let ζ(t)≤ B+C∫_0^tζ(s)ds, t ∈ (0,T), where ζ is a continuous nonnegative function and B, C≥ 0 are constants.
Then ζ(t)≤ B e^Ct. The next theorem is a compactness result that provides a strong convergence result which is used to pass to the limit in the nonlinear terms of the Galerkin solution. <cit.> Let X, B and Y be Banach spaces such that X⊂ B⊂ Y, where the imbedding of X into B is compact. Let F be a bounded set in L^p(0,T;X) where 1≤ p< ∞ and let the set {∂ f/∂ t}_f∈ F be bounded in L^1(0,T;Y). Then F is relatively compact in L^p(0,T;B).

§ WEAK FORMULATION.
In this section we derive the weak formulation of the problem. But first, we introduce additional notation and present assumptions on the problem data. We assume that K∈^∞(Ω_p) is independent of time, uniformly bounded and positive definite. There exists K_min, K_max>0 such that ∀∈Ω_p, K_min·≤·≤ K_max·. Further, we assume that _f∈^2(0,T;^2(Ω_f)), _s∈^2(0,T;^2(Ω_p)), f_p∈ L^2(0,T;L^2(Ω_p)) and P_in∈ L^2(0,T;H^1/2(Γ_f^in)). The weak formulation we propose for the problem is the following:

(WF1) Find _f∈ L^∞(0,T;^2(Ω_f))∩ L^2(0,T;_f), p_f∈ L^1(0,T;Q_f), ∈ W^1,∞(0,T;^2(Ω_p))∩ H^1(0,T;_p) and p_p∈ L^∞(0,T;L^2(Ω_p))∩ L^2(0,T;Q_p), where _f∈ L^1(0,T;^3/2(Ω_f)), such that for all ∈_f, q∈ Q_f, ∈_p and r∈ Q_p, (ρ_f_f,)_Ω_f+(2μ_f(_f),())_Ω_f+ρ_f(_f·∇_f,)_Ω_f-(p_f,∇·)_Ω_f +(ρ_s,)_Ω_p+(2μ_s(),())_Ω_p+(λ_s∇·,∇·)_Ω_p-(α p_p,∇·)_Ω_p+(s_0ṗ_p+α∇·,r)_Ω_p+(∇ p_p,∇ r)_Ω_p+⟨ p_p_Γ,-⟩_Γ_I+Σ_l=1^d-1⟨β(_f-)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨(-_f)·_Γ,r⟩_Γ_I=-⟨ P_in(t)_Γ,⟩_Γ_f^in+(_f,)_Ω_f+(_s,)_Ω_p+(f_p,r)_Ω_p, (∇·_f,q)_Ω_f=0, a.e. in (0,T), and _f(0)=0 in Ω_f, (0)=0, (0)=0, p_p(0)=0 in Ω_p. The reason for looking for a solution in L^∞(0,T;L^2(Ω_i)), i=f, p, may not seem obvious at this point, since typically the solutions are sought in L^2(0,T;X) where X is an appropriate Sobolev space, but we will prove that such a solution exists.

§.§ Equivalence of the weak formulation (WF1).
The following proposition establishes the equivalence between the coupled problem and the weak formulation (WF1) proposed in the previous section. Let the data satisfy the assumptions listed in the previous section. Then each solution _f∈ L^∞(0,T;^2(Ω_f))∩ L^2(0,T;_f) such that _f∈ L^1(0,T;^3/2(Ω_f)), p_f∈ L^1(0,T;Q_f), ∈ W^1,∞(0,T;^2(Ω_p))∩ H^1(0,T;_p) and p_p∈L^∞(0,T;L^2(Ω_p))∩ L^2(0,T;Q_p) of the problem defined by (<ref>)-(<ref>), (<ref>)-(<ref>) and (<ref>)-(<ref>) is also a solution of the variational problem (WF1), and conversely. We first show sufficiency. To simplify the presentation, we include in <ref> the justification of using Green's formula in the following proof. Let (_f, p_f, , p_p) be a solution of the coupled problem defined by (<ref>)-(<ref>), (<ref>)-(<ref>), (<ref>)-(<ref>) satisfying the regularity stated in the proposition. We multiply (<ref>) by ∈_f. After integration by parts: (ρ_f_f, )_Ω_f +(2μ_fD(_f),D())_Ω_f+ρ_f( _f·∇_f,)_Ω_f- ( p_f,∇·)_Ω_f -⟨_f_f, ⟩_∂Ω_f = (_f, )_Ω_f. Using (<ref>), (<ref>), = on Γ_f^ext and _f=_Γ on Γ_I, we have (ρ_f_f, )_Ω_f +(2μ_fD(_f),D())_Ω_f+ρ_f( _f·∇_f,)_Ω_f- ( p_f,∇·)_Ω_f -⟨_f_Γ, ⟩_Γ_I = (_f, )_Ω_f-⟨ P_in(t), ⟩_Γ_f^in. Next we multiply (<ref>) by q∈ Q_f and integrate to get (∇·_f,q)_Ω_f=0. Multiplying (<ref>) by ∈_p and integrating by parts yields: (ρ_p ,)_Ω_p +(2 μ_sD(), D())_Ω_p + (λ_s (∇·), ∇·)_Ω_p-(αp_p,∇·)_Ω_p -⟨_p_p,⟩_∂Ω_p = (_s,)_Ω_p. Observe that (<ref>) implies _p_p=_p^E_p=Σ_l=1^d-1(_p^l·_p^E_p)·_p^l on Γ_p^ext. Then since _p^l·=0 on Γ_p^ext, =0 on Γ_p^s and _p=-_Γ on Γ_I, we have (ρ_p,)_Ω_p +(2 μ_sD(), D())_Ω_p + (λ_s (∇·), ∇·)_Ω_p - (αp_p,∇·)_Ω_p+⟨_p_Γ,⟩_Γ_I= (_s,)_Ω_p. Multiplying (<ref>) by r∈ Q_p and integrating over Ω_p, we obtain (s_0 _p+ α∇·,r)_Ω_p +(∇ p_p, ∇ r)_Ω_p -⟨∇ p_p·_p, r⟩_∂Ω_p= (f_p,r)_Ω_p. Using (<ref>), r=0 on Γ_p^ext, and _p=-_Γ on Γ_I, we have (s_0 _p+ α∇·,r)_Ω_p +(∇ p_p, ∇ r)_Ω_p +⟨∇ p_p·_Γ, r⟩_Γ_I= (f_p,r)_Ω_p.
Next, we rewrite the interface integrals using the interface conditions (<ref>)-(<ref>). On Γ_I, by (<ref>) and (<ref>), we have _f_Γ=(_Γ·_f_Γ)_Γ+Σ_l=1^d-1(_Γ^l·_f_Γ)_Γ^l= -p_p_Γ-Σ_l=1^d-1(β(_f-)·_Γ^l)_Γ^l. (Note here that since σ_f is symmetric, _Γ^l·_f_Γ=_Γ·_f_Γ^l.) Adding (<ref>), (<ref>), (<ref>) and (<ref>) while using (<ref>) for ⟨_f_Γ,⟩_Γ_I, (<ref>) for ⟨∇ p_p·_Γ, r⟩_Γ_I, and (<ref>) and (<ref>) for ⟨_p_Γ,⟩_Γ_I gives the weak formulation (WF1). For the converse, let (_f, p_f, , p_p) be a solution of (WF1). We pick first ∈(Ω_f), r=0 and =; second ∈, r∈ D(Ω_p) and =; and last =, r=0 and ∈(Ω_p). This gives (<ref>) on Ω_f and (<ref>) and (<ref>) on Ω_p in the sense of distributions. Next we multiply (<ref>) with ∈_f, (<ref>) with ∈_p and (<ref>) with r∈ Q_p, apply Green's formulas, and add the outcomes to get (ρ_f_f ,)_Ω_f+(2μ_f(_f),())_Ω_f+ρ_f(_f·∇_f,)_Ω_f-(p_f,∇·)_Ω_f+(ρ_s,)_Ω_p+(2μ_s(),())_Ω_p+(λ_s∇·,∇·)_Ω_p-(α p_p,∇·)_Ω_p+(s_0ṗ_p+α∇·,r)_Ω_p+(∇ p_p,∇ r)_Ω_p-⟨_f _f,⟩_Γ_f^in∪Γ_f^out∪Γ_I-⟨_p _p, ⟩_Γ_p^ext∪Γ_I-⟨∇ p_p·_p,r ⟩_Γ_p^s∪Γ_I=(_f,)_Ω_f+(_s,)_Ω_p+(f_p,r)_Ω_p. Comparing this with (WF1) gives ⟨ p_p_Γ,-⟩_Γ_I +Σ_l=1^d-1⟨β(_f-)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨(-_f)·_Γ,r⟩_Γ_I +⟨ P_in(t)_f,⟩_Γ_f^in=- ⟨_f _f,⟩_Γ_f^in∪Γ_f^out∪Γ_I-⟨_p _p, ⟩_Γ_p^ext∪Γ_I-⟨∇ p_p·_p,r ⟩_Γ_p^s∪Γ_I for all (, , r)∈_f×_p× Q_p. If we let =, = in (<ref>), we get -⟨∇ p_p·_p,r ⟩_Γ_p^s∪Γ_I=⟨(-_f)·_Γ,r⟩_Γ_I, ∀ r∈ Q_p. The choice r∈ Q_p such that r|_Γ_I=0 yields ⟨∇ p_p·_p,r ⟩_Γ_p^s=0, which implies the first condition of (<ref>). Using this in (<ref>), we get -⟨∇ p_p·_p,r⟩_Γ_I=⟨(-_f)·_Γ,r⟩_Γ_I, ∀ r∈ Q_p, which yields (<ref>). This reduces (<ref>) to ⟨ p_p_Γ,-⟩_Γ_I+∑_l=1^d-1⟨β(_f-)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨ P_in(t)_f,⟩_Γ_f^in=-⟨_f _f,⟩_Γ_f^in∪Γ_f^out∪Γ_I-⟨_p _p, ⟩_Γ_p^ext∪Γ_I, ∀∈_f, ∀∈_p. Now we let = in (<ref>).
Then -⟨ p_p_Γ,⟩_Γ_I-∑_l=1^d-1⟨β(_f-)·_Γ^l,·_Γ^l⟩_Γ_I=-⟨_p_p, ⟩_Γ_p^ext∪Γ_I, ∀∈_p. Since p_p=0 on Γ_p^ext, the choice ∈_p such that = on Γ_I implies 0 =⟨_p_p, ⟩_Γ_ext^p=⟨_p·_p_p, ·_p⟩_Γ_ext^p=⟨_p·_p^E_p, ·_p⟩_Γ_ext^p. Therefore we recover, in the sense of distributions, the second condition in (<ref>). This also yields ⟨ -p_p_Γ-∑_l=1^d-1β((_f-)·_Γ^l)_Γ^l,⟩_Γ_I=-⟨_p_p, ⟩_Γ_I, ∀∈_p. Therefore -p_p_Γ-∑_l=1^d-1β((_f-)·_Γ^l)_Γ^l=_p_Γ holds in the sense of distributions. This reduces (<ref>) to ⟨ p_p_Γ,⟩_Γ_I+∑_l=1^d-1⟨β(_f-)·_Γ^l,·_Γ^l⟩_Γ_I+⟨ P_in(t)_f,⟩_Γ_f^in=-⟨_f_f,⟩_∂Ω_f, ∀∈_f. Letting ∈_f such that = on Γ_I gives ⟨ P_in(t)_f,⟩_Γ_f^in=-⟨_f_f,⟩_Γ_f^in∪Γ_f^out. Picking such that = on Γ_f^out implies (<ref>). If we plug this in the above equation we get 0=-⟨_f_f,⟩_Γ_f^out, concluding that _f_f= on Γ_f^out× (0,T) in the distributional sense. This gives (<ref>) and also implies that ⟨ p_p_Γ,⟩_Γ_I+∑_l=1^d-1⟨β(_f-)·_Γ^l,·_Γ^l⟩_Γ_I=-⟨_f_f,⟩_Γ_I, ∀∈_f. Therefore p_p_Γ+∑_l=1^d-1(β(_f-)·_Γ^l)_Γ^l=-_f_f. This compared to (<ref>) implies (<ref>) and also gives (<ref>) and (<ref>) after taking the dot product with _Γ and _Γ^l, 1≤ l≤ d-1.

§ MAIN RESULTS.
This section summarizes the main results of this paper. First, for the sake of simplicity, we define the following functions of time: _1(t) =(3T_2^2K_f^2/(4μ_f)P_in(t)^2_Γ_f^in+ 3P_1^2K_f^2/(4μ_f)_f(t)^2_Ω_f+P_3^2/(2K_min)f_p(t)_Ω_p^2+(1/2)_s(t)^2_Ω_p)^1/2, _2(t) =(3T_2^2K_f^2/(2μ_f)Ṗ_in(t)_Γ_f^in^2+3P_1^2K_f^2/(2μ_f)ḟ_f(t)^2_Ω_f+P_3^2/(2K_min)ḟ_p(t)_Ω_p^2+(1/2)_s(t)^2_Ω_p)^1/2, a.e.
in (0,T), and the following constant: _3= (C_j^2/ρ_fP_in(0)^2_H_00^1/2(Γ_f^in)+(1/ρ_f)_f(0)_Ω_f^2 +1/(2s_0)f_p(0)^2_Ω_p+1/(2ρ_s)_s(0)^2_Ω_p)^1/2, where the constants T_2, K_f, P_1, P_3 are defined in Section <ref> and C_j is the continuity constant of the continuous lifting operator from H^1/2(∂Ω_f)→ H^1(Ω_f). Observe that _1, _2 and _3 depend only on the data of the problem. We now present our main existence and uniqueness result. Assume that _f∈ H^1(0,T;^2(Ω_f)), _s∈ H^1(0,T;^2(Ω_p)), f_p∈ H^1(0,T;L^2(Ω_p)) and P_in∈ H^1(0,T;H^1/2(Γ_f^in)), and that the following small data condition holds: (1+(T/ρ_s)e^T/ρ_s)_2_L^2(0,T)^2+(T/ρ_s^2)e^T/ρ_s_s(0)^2_Ω_p +(1+(1/ρ_s)e^T/ρ_s+(T/ρ_s^2)e^T/ρ_s)_1_L^2(0,T)^2+_1^2_L^∞(0,T)<μ_f^3/(9ρ_f^2S_f^4K_f^6). Then, problem (WF1) has a unique solution (_f, p_f, , p_p) such that (ρ_f/2)_f_L^∞(0,T;^2(Ω_f))^2 +(ρ_s/2)_L^∞(0,T;^2(Ω_p))^2+μ_s()^2_L^∞(0,T;^2(Ω_p))+(s_0/2)p_p_L^∞(0,T;L^2(Ω_p))^2 +μ_f(_f)^2_L^2(0,T;^2(Ω_f))+(1/2)^1/2∇ p_p^2_L^2(0,T;^2(Ω_p))≤(1+(T/ρ_s)e^T/ρ_s)_1^2_L^2(0,T). Furthermore, (_f)_L^∞(0,T;^2(Ω_f)) <μ_f/(3ρ_fS_f^2K_f^3), (ρ_f/2)_f^2_L^∞(0,T;^2(Ω_f)) +(ρ_s/2)^2_L^∞(0,T;^2(Ω_p))+μ_s()^2_L^∞(0,T;^2(Ω_p))+(s_0/2)ṗ_p^2_L^∞(0,T;L^2(Ω_p))+μ_f(_f)^2_L^2(0,T;L^2(Ω_f))+(1/2)^1/2∇ṗ_p^2_L^2(0,T;^2(Ω_p))≤ (1+Tρ_se^2T/ρ_s^2)_2^2_L^2(0,T)+Te^2T/ρ_s^22_3, and p_f_L^∞(0,T;L^2(Ω_f))≤(1/κ)(ρ_f_f_L^∞(0,T;^2(Ω_f))+2μ_f(_f)_L^∞(0,T;^2(Ω_f))+S_f^2_f_L^∞(0,T;^1(Ω_f))^2 +T_1T_3p_p_L^∞(0,T;H^1(Ω_p))+β T_1^2_f_L^∞(0,T;^1(Ω_f))+β T_1T_5_L^∞(0,T;^1(Ω_p))+T_2P_in_L^∞(0,T;L^2(Γ_f^in))+_f_L^∞(0,T;^2(Ω_f))). The constants T_1-T_5, K_f, S_f, P_1, P_3 used in the above estimates are defined in Section <ref>.

§ PROOF OF THEOREM <REF>.
The proof consists of multiple steps. The main idea is to use Galerkin's method on the divergence-free version of the weak problem (WF1), in which the fluid pressure p_f is eliminated. We will first present the divergence-free formulation (WF2) and introduce its Galerkin approximation (GF).
Next we prove that there exists a unique maximal Galerkin solution by writing (GF) as a system of first-order equations and applying the theory of ordinary differential equations. However, this existence result holds only on a finite subinterval of [0,T]. Demonstrating a priori bounds for the Galerkin solution guarantees the validity of this existence result on the entire interval [0,T] and also allows us to pass to the limit. At the end of this process, we obtain a solution _f, and p_p of the divergence-free weak formulation. We conclude the proof using an inf-sup condition to recover the fluid pressure p_f that was eliminated from the weak formulation, proving the equivalence of (WF2) and (WF1). This last step also provides a priori estimates for p_f.

§.§ A divergence-free weak formulation.
For the analysis of the problem, we will focus on the following divergence-free version of the formulation (WF1). (WF2) Find _f∈ L^∞(0,T;^2(Ω_f))∩ L^2(0,T;_f), ∈ W^1,∞(0,T;^2(Ω_p))∩ H^1(0,T;_p) and p_p∈ L^∞(0,T;L^2(Ω_p))∩ L^2(0,T;Q_p) with _f∈ L^1(0,T;^3/2(Ω_f)) such that for all ∈_f, ∈_p and r∈ Q_p, (ρ_f_f,)_Ω_f+(2μ_f(_f),())_Ω_f+ρ_f(_f·∇_f,)_Ω_f+(ρ_s,)_Ω_p+(2μ_s(),())_Ω_p+(λ_s∇·,∇·)_Ω_p-(α p_p,∇·)_Ω_p+(s_0ṗ_p+α∇·,r)_Ω_p+(∇ p_p,∇ r)_Ω_p+⟨ p_p_Γ,-⟩_Γ_I+Σ_l=1^d-1⟨β(_f-)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨(-_f)·_Γ,r⟩_Γ_I=-⟨ P_in(t)_Γ,⟩_Γ_f^in+(_f,)_Ω_f+(_s,)_Ω_p+(f_p,r)_Ω_p a.e. in (0,T), and _f(0)=0 in Ω_f, (0)=0, (0)=0, p_p(0)=0 in Ω_p. Note that the unknown pressure p_f is no longer in the weak formulation. Furthermore, it is obvious that any solution of (WF1) is a solution of (WF2).
The converse will be proved in Section <ref> using an inf-sup condition.

§.§ A semi-discrete Galerkin formulation of (WF2).
The existence result is proved by constructing a sequence of approximate problems and then passing to the limit, that is, using the Galerkin method. Separability of _f×_p× Q_p implies the existence of a basis {(_i, _i, r_i)}_i≥ 0 consisting of smooth functions. We define _f^m = span{_i: i=1,…,m}, _p^m = span{_i: i=1,…,m}, Q_p^m = span{r_i: i=1,…,m}, and use the following Galerkin approximations for the unknowns _f, and p_p: _m(,t)=∑_j=1^m α_j(t)_j(), _m(,t)=∑_j=1^m β_j(t)_j(), p_m(,t)=∑_j=1^m γ_j(t)r_j(). Then, we can write the Galerkin approximation of the problem (WF2) as follows: (GF) Find _m∈^1(0,T;_f^m), _m∈^2(0,T;_p^m), p_m∈^1(0,T;Q_p^m) such that (ρ_f_m,)_Ω_f +(2μ_f(_m),())_Ω_f+ρ_f(_m·∇_m,)_Ω_f+(ρ_s_m,)_Ω_p +(2μ_s(_m),())_Ω_p +(λ_s∇·_m,∇·)_Ω_p-(α p_m,∇·)_Ω_p+(s_0ṗ_m+α∇·_m,r)_Ω_p+(∇ p_m,∇ r)_Ω_p+⟨ p_m_Γ,-⟩_Γ_I+Σ_l=1^d-1⟨β(_m-_m)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨(_m-_m)·_Γ,r⟩_Γ_I=-⟨ P_in(t)_Γ,⟩_Γ_f^in+(_f,)_Ω_f+(_s,)_Ω_p+(f_p,r)_Ω_p, for all (, , r)∈_f^m×_p^m× Q_p^m, a.e. t ∈ (0,T), and _m(0)=0, _m(0)=0, _m(0)=0, p_m(0)=0. For each positive integer m, the formulation (GF) has a unique maximal solution (_m,_m,p_m)∈^1(0,T_m;_f^m)×^2(0,T_m;_p^m)×^1(0,T_m;Q_p^m) for some time T_m where 0<T_m≤ T. Using the Galerkin expansions given in (<ref>), the problem (GF) can be represented in matrix form. The following is a standard finite-dimensional argument, which amounts to defining the problem as a square first-order system of ordinary differential equations (ODEs) with an initial condition.
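Before the detailed matrix definitions, the overall structure of this reduction can be sketched numerically: a system M ẏ + A y = F(y), with M symmetric positive definite, is rewritten as ẏ = g(y) := M⁻¹(F(y) − A y) and integrated from the given initial data. The block below is only a schematic; the small matrices M, A and the nonlinearity F are hypothetical stand-ins, not the actual Galerkin matrices assembled from the basis functions:

```python
import numpy as np

# Hypothetical stand-ins for the Galerkin system M y' + A y = F(y).
M = np.array([[2.0, 0.0], [0.0, 1.0]])      # SPD "mass" block, hence invertible
A = np.array([[1.0, -0.5], [0.5, 1.0]])     # "stiffness/coupling" block
F = lambda y: np.array([np.sin(y[0]) * y[1], 0.0])  # locally Lipschitz nonlinearity

Minv = np.linalg.inv(M)
def g(y):
    # Autonomous right-hand side y' = g(y) = M^{-1} F(y) - M^{-1} A y.
    return Minv @ (F(y) - A @ y)

# Explicit Euler time stepping from the rest state y(0) = 0.
y, dt = np.zeros(2), 1e-3
for _ in range(1000):
    y = y + dt * g(y)
# With zero initial data and F(0) = 0, the discrete solution stays at rest,
# mirroring the zero initial conditions of (GF).
```

Local Lipschitz continuity of g is exactly what the Picard-Lindelöf argument cited in the text needs to produce a unique maximal solution.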
For the integrals on the left-hand side, for 1≤ i, j≤ m, we define ^f_ij=ρ_f(_j,_i)_Ω_f, _ij^f=2μ_f((_j),(_i))_Ω_f+Σ_l=1^dβ⟨_j·_Γ^l, _i·_Γ^l ⟩_Γ_I, _i=(ρ_f(_j·∇_k, _i)_Ω_f)_1≤ j,k≤ m, ^s_ij=ρ_s(_j,_i)_Ω_p, _ij^s=2μ_s((_j),(_i))_Ω_p+λ_s(∇·_j,∇·_i)_Ω_p, ^p_ij=s_0(r_j,r_i)_Ω_p, _ij^p=(∇ r_j,∇ r_i)_Ω_p, _ij=α(r_j,∇·_i)_Ω_p+⟨ r_j_Γ,_i⟩_Γ_I, _ij=⟨ r_j_Γ,_i⟩_Γ_I, _ij=Σ_l=1^dβ⟨_j·_Γ^l, _i·_Γ^l ⟩_Γ_I, _ij=Σ_l=1^dβ⟨_j·_Γ^l, _i·_Γ^l ⟩_Γ_I. Finally, for the right-hand side integrals we define _i=-⟨ P_in(t)_f,_i⟩_Γ_f^in+(f_f,_i)_Ω_f, _i=(_s,_i)_Ω_p, _i=(f_p,r_i)_Ω_p. The unknowns are _i=α_i(t), _i=β_i(t) and _i=γ_i(t), i=1,…,m, and we define a vector that holds these unknowns and = as follows: (t)=[ [ (t); (t); (t); (t) ]] and set (())_i=_i·. With these definitions, (GF) is equivalent to finding , , such that ^f+^f+()+-= ^s+^s--^T+= ^p+^p+^T-^T= where (0), (0) and (0) are given. We can rewrite this as a system of first-order equations as follows: +=(), where =[ [ ^f ;; ^p ;^s;]], =[ [^f -; -; -^T^p^T; -^T^s -; ]] and ()=[[ -();;; ]]. Since ρ_f, ρ_s and s_0 are positive, ^f, ^p and ^s are symmetric positive definite, implying that the block matrix is invertible. This defines an autonomous ODE in (t) such that =^-1()-^-1=:g(), with (0) given. The matrices are 4m× 4m and the vectors have length 4m. It is obvious that the function g is continuous in time and locally Lipschitz continuous in . Then, it follows from the theory of ordinary differential equations <cit.> that there is a unique maximal solution in the interval [0,T_m] for some T_m with 0<T_m≤ T, such that each component of , i.e., each component of , , and =, belongs to ^1(0,T_m). We need a priori bounds on the Galerkin solution to conclude that T_m=T. We discuss this next in Section <ref>. Note that if we consider the Stokes problem for the fluid part, that is, if there is no nonlinearity, the existence and uniqueness result is global on [0,T].

§.§ A priori estimates for the Galerkin solution.
We begin by stating the main result of this section. Suppose that _f∈ H^1(0,T;^2(Ω_f)), _s∈ H^1(0,T;^2(Ω_p)), f_p∈ H^1(0,T;L^2(Ω_p)) and P_in∈ H^1(0,T;H^1/2_00(Γ_f^in)). In addition, assume that the small data condition (<ref>) holds. Then, problem (GF) has a unique solution (_m, _m, p_m) in the interval [0,T]. Furthermore, it satisfies the following bounds: (ρ_f/2)_m_Ω_f^2 +(ρ_s/2)_m_Ω_p^2+μ_s(_m)^2_Ω_p+(λ_s/2)∇·_m^2_Ω_p+(s_0/2)p_m_Ω_p^2+(1/2)^1/2∇ p_m^2_L^2(0,T;L^2(Ω_p)) +μ_f(_m)^2_L^2(0,T;L^2(Ω_f)) ≤(1+(T/ρ_s)e^T/ρ_s)_1^2_L^2(0,T) for all t∈ [0,T], (_m)_Ω_f <μ_f/(3ρ_fS_f^2K_f^3), and (ρ_f/2)_m^2_Ω_f +(ρ_s/2)_m^2_Ω_p+μ_s(_m)^2_Ω_p+(λ_s/2)∇·_m^2_Ω_p +(s_0/2)ṗ_m^2_Ω_p+μ_f(_m)^2_L^2(0,T;L^2(Ω_f))+(1/2)^1/2∇ṗ_m^2_L^2(0,T;L^2(Ω_p))≤ (1+Tρ_se^2T/ρ_s^2)_2^2_L^2(0,T)+Te^2T/ρ_s^22_3. Here _1, _2 and _3 are defined in (<ref>). These bounds imply that {_m} is bounded in H^1(0,T;^1(Ω_f)), {_m} is bounded in H^1(0,T;^1(Ω_p)), _m is bounded in L^∞(0,T;^2(Ω_p)) and {p_m} is bounded in H^1(0,T;H^1(Ω_p)). In the next few sections, we verify the bounds (<ref>), (<ref>) and (<ref>) in the interval [0,T_m], which will then imply the global existence of the maximal solution (_m,_m,p_m) in the interval [0,T] as stated in the theorem.

§.§.§ Proof of (<ref>). We let =_m, =_m, r=p_m in the Galerkin formulation (GF). Then the Cauchy-Schwarz inequality, inequalities (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and assumption (<ref>) on the hydraulic conductivity imply (after neglecting the β term, which is nonnegative) ρ_f(_m,_m)_Ω_f+2μ_f(_m)^2_Ω_f +ρ_s(_m, _m)_Ω_p+2μ_s((_m), (_m))_Ω_p+λ_s(∇·_m, ∇·_m)_Ω_p+s_0(ṗ_m,p_m)_Ω_p+^1/2∇ p_m^2_Ω_p≤ρ_fS_f^2K_f^3(_m)_Ω_f^3+T_2K_fP_in(t)_Γ_f^in(_m)_Ω_f+P_1K_f_f_Ω_f(_m)_Ω_f+_s_Ω_p_m_Ω_p+(P_3/K_min^1/2)f_p_Ω_p^1/2∇ p_m_Ω_p. Here the only problematic terms on the right-hand side are (_m)_Ω_f^3 and _m_Ω_p. The rest can easily be hidden in the left-hand side.
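The hiding arguments below are combined at several points with Gronwall's inequality from Section 3. As a quick sanity check (illustration only, with arbitrary constants B and C), a discrete version of the extremal case ζ(t)=B+C∫₀ᵗζ(s)ds stays below the bound B e^{Ct}:

```python
import numpy as np

# Gronwall: if z(t) <= B + C * int_0^t z(s) ds, then z(t) <= B * exp(C*t).
# Take the extremal case (equality), whose exact solution is B * exp(C*t),
# and discretize the integral with a left-endpoint quadrature.
B, C, T = 2.0, 1.5, 1.0        # arbitrary illustration constants
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]
z = np.empty_like(t)
z[0] = B
for n in range(1, len(t)):
    z[n] = B + C * dt * z[:n].sum()   # z[n] = B(1 + C*dt)^n
bound = B * np.exp(C * t)
# The discrete solution never exceeds the Gronwall bound.
assert np.all(z <= bound + 1e-9)
```

The discrete recursion gives z[n] = B(1+C dt)^n ≤ B e^{C n dt}, so the check holds exactly, and refining dt drives z toward the continuous extremal solution B e^{Ct}.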
Observe that since _m(0)=0 and _m is continuous, there exists a time T̃_m, 0<T̃_m≤ T_m, such that (_m)_Ω_f<μ_f/(3ρ_fS_f^2K_f^3) for all t∈ [0,T̃_m]. In fact, this condition holds true on [0,T_m]. For the sake of presentation, we postpone this proof to Section <ref>. Using this condition together with Young's inequality with ϵ>0, we obtain ρ_f(_m,_m)_Ω_f+2μ_f(_m)^2_Ω_f +ρ_s(_m, _m)_Ω_p+2μ_s((_m), (_m))_Ω_p+λ_s(∇·_m, ∇·_m)_Ω_p+s_0(ṗ_m,p_m)_Ω_p+^1/2∇ p_m^2_Ω_p ≤(μ_f/3)(_m)_Ω_f^2+(ϵ T_2^2K_f^2/2)P_in(t)^2_Γ_f^in+(1/(2ϵ))(_m)^2_Ω_f+(ϵ P_1^2K_f^2/2)_f^2_Ω_f+(1/(2ϵ))(_m)^2_Ω_f+(1/2)_s^2_Ω_p+(1/2)_m^2_Ω_p+P_3^2/(2K_min)f_p_Ω_p^2+(1/2)^1/2∇ p_m_Ω_p^2 for all t∈ [0,T̃_m]. Picking ϵ=3/(2μ_f), we get ρ_f(_m,_m)_Ω_f+2μ_f(_m)^2_Ω_f +ρ_s(_m, _m)_Ω_p+2μ_s((_m), (_m))_Ω_p+λ_s(∇·_m, ∇·_m)_Ω_p+s_0(ṗ_m,p_m)_Ω_p+^1/2∇ p_m^2_Ω_p ≤μ_f(_m)_Ω_f^2+(1/2)^1/2∇ p_m_Ω_p^2+3T_2^2K_f^2/(4μ_f)P_in(t)^2_Γ_f^in+ 3P_1^2K_f^2/(4μ_f)_f^2_Ω_f+P_3^2/(2K_min)f_p_Ω_p^2+(1/2)_s^2_Ω_p+(1/2)_m^2_Ω_p=_1^2(t)+(1/2)_m^2_Ω_p, where _1 is defined in (<ref>). Integrating with respect to t, due to (<ref>), we get: (ρ_f/2)_m_Ω_f^2 +(ρ_s/2)_m_Ω_p^2+μ_s(_m)^2_Ω_p+(λ_s/2)∇·_m^2_Ω_p+(s_0/2)p_m_Ω_p^2+μ_f∫_0^t(_m)^2_Ω_f dt+(1/2)∫_0^t^1/2∇ p_m^2_Ω_p dt ≤_1^2_L^2(0,T)+(1/2)∫_0^t_m(s)^2_Ω_p ds. So _m(t)_Ω_p^2 ≤(2/ρ_s)_1^2_L^2(0,T)+(1/ρ_s)∫_0^t_m(s)^2_Ω_p ds. Therefore, since _m∈𝒞^2(0,T;_p^m), we have _m(t)_Ω_p^2∈^1(0,T), and applying Gronwall's inequality (<ref>) with ζ(t)=_m(t)_Ω_p^2, C=1/ρ_s and B=(2/ρ_s)_1^2_L^2(0,T) yields _m(t)_Ω_p^2≤(2/ρ_s)e^T/ρ_s_1_L^2(0,T)^2. Plugging this in (<ref>), we have ρ_f(_m,_m)_Ω_f+μ_f(_m)^2_Ω_f +ρ_s(_m, _m)_Ω_p+2μ_s((_m), (_m))_Ω_p+λ_s(∇·_m, ∇·_m)_Ω_p+s_0(ṗ_m,p_m)_Ω_p+(1/2)^1/2∇ p_m^2_Ω_p≤_1^2(t)+(1/ρ_s)e^T/ρ_s_1^2_L^2(0,T), or (ρ_f/2)d/dt_m_Ω_f^2+μ_f(_m)^2_Ω_f +(ρ_s/2)d/dt_m_Ω_p^2+μ_s d/dt(_m)^2_Ω_p+(λ_s/2)d/dt∇·_m^2_Ω_p+(s_0/2)d/dtp_m_Ω_p^2+(1/2)^1/2∇ p_m^2_Ω_p ≤_1^2(t)+(1/ρ_s)e^T/ρ_s_1^2_L^2(0,T) for all t∈ [0,T̃_m].
After integrating with respect to t and using (<ref>) implies the bound (<ref>).§.§.§ Proof of (<ref>) and (<ref>).Recalling (<ref>), assume for a contradiction that there exists T^*∈ (0,T_m] such that∀ t∈ [0,T^*), (_m)(t)_Ω_f<μ_f3ρ_fS_f^2K_f^3,(_m)(T^*)_Ω_f=μ_f3ρ_fS_f^2K_f^3. Similar arguments leading to (<ref>) and using Young's inequality yield2μ_f (_m)^2_Ω_f ≤ ρ_f2_m_Ω_f^2+ρ_s2_m^2_Ω_p+μ_s(_m)^2_Ω_p+ λ_s2∇·_m^2_Ω_p+s_02p_m_Ω_p^2+ρ_f2_m_Ω_f^2+ρ_s2_m_Ω_p^2+μ_s(_m)^2_Ω_p+λ_s2∇·_m^2_Ω_p+s_02ṗ_m^2_Ω_p+_1^2(t)+1ρ_s e^T/ρ_s_1(t)_L^2(0,T)for all t∈ [0,T^*]. To bound the first five terms we differentiate (GF) with respect to time. The specifics of this technique can be found in detail in <cit.>. (ρ_f_m,)_Ω_f+(2μ_f(_m),())_Ω_f+ρ_f(_m·∇_m,)_Ω_f+ρ_f(_m·∇_m,)_Ω_f+(ρ_s⃛_m,)_Ω_p+(2μ_s(_m),())_Ω_p+(λ_s∇·_m,∇·)_Ω_p-(αṗ_m,∇·)_Ω_p+(s_0p̈_m+α∇·_m,r)_Ω_p+(∇ṗ_m,∇ r)_Ω_p +⟨ṗ_m_Γ,-⟩_Γ_I+Σ_l=1^d-1⟨β(_m-_m)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨(_m-_m)·_Γ,r⟩_Γ_I=-⟨Ṗ_̇i̇ṅ_Γ,⟩_Γ_f^in+(_f,)_Ω_f+(_s,)_Ω_p+(ḟ_p,r)_Ω_p. We let =_m, r=ṗ_m and =_m in the above, use the Cauch-Schwarz inequality, assumption (<ref>), inequalities (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and assumption (<ref>) onto get (neglecting the nonnegative β term) ρ_f2ddt_m^2_Ω_f+2μ_f(_m)^2_Ω_f +ρ_s2ddt_m^2_Ω_p+μ_sddt(_m)^2_Ω_p+λ_s2ddt∇·_m^2_Ω_p+s_02ddtṗ_m^2_Ω_p+^1/2∇ṗ_m^2_Ω_p ≤ -ρ_f(_m·∇_m,_m)_Ω_f-ρ_f(_m·∇_m,_m)_Ω_f-⟨Ṗ_̇i̇ṅ_Γ,_m⟩_Γ_f^in+(_f,_m)_Ω_f+(_s,_m)_Ω_p+(ḟ_p,ṗ_m)_Ω_p ≤ 2ρ_fS_f^2K_f^3(_m)^2_Ω_f(_m)_Ω_f+T_2K_fṖ_in_Γ_f^in(_m)_Ω_f+P_1K_f_̇ḟ_Ω_f(_m)_Ω_f+_s_Ω_p_m_Ω_p+P_3K_min^1/2ḟ_̇ṗ_Ω_p^1/2∇ṗ_m_Ω_p ≤ 2μ_f3(_m)^2_Ω_f+12ϵ(_m)^2_Ω_f+ϵ T_2^2K_f^22Ṗ_in_Γ_f^in^2+12ϵ(_m)^2_Ω_f+ϵ P_1^2K_f^22_̇ḟ^2_Ω_f+_s_Ω_p_m_Ω_p+12^1/2∇ṗ_m_Ω_p^2+P_3^22K_minḟ_̇ṗ_Ω_p^2. 
Picking ϵ=3μ_f, we obtain ρ_f2ddt_m^2_Ω_f+μ_f(_m)^2_Ω_f +ρ_s2ddt_m^2_Ω_p+μ_sddt(_m)^2_Ω_p+λ_s2ddt∇·_m^2_Ω_p+s_02ddtṗ_m^2_Ω_p+12^1/2∇ṗ_m^2_Ω_p≤3 T_2^2K_f^22μ_fṖ_in_Γ_f^in^2+3 P_1^2K_f^22μ_f_̇ḟ^2_Ω_f+12_s^2_Ω_p+12_m^2_Ω_p+P_3^22K_minḟ_̇ṗ_Ω_p^2=_2^2(t)+12_m^2_Ω_p, where _2∈ L^2(0,T) is defined in (<ref>). Now we need a bound for _m. Since by assumptionis continuous, we can use Gronwall's inequality (<ref>) again.Integrating from 0 to t for any t∈ [0,T^*] and recalling (<ref>) we obtain ρ_f2_m^2_Ω_f +ρ_s2_m^2_Ω_p+μ_s(_m)^2_Ω_p +λ_s2∇·_m^2_Ω_p+s_02ṗ_m^2_Ω_p≤ρ_f2_m(0)^2_Ω_f +ρ_s2_m(0)^2_Ω_p+s_02ṗ_m(0)^2_Ω_p+_2^2_L^2(0,T)+1ρ_s∫_0^t_m(s)^2_Ω_pds. Now we need bounds for _m(0)^2_Ω_f, _m(0)^2_Ω_p and ṗ_m(0)^2_Ω_p. For that purpose we let =, r=0 and =_m(0) in (GF) at t=0. Due to (<ref>) this yields: ρ_s_m(0)_Ω_p^2=(_s(0), _m(0))_Ω_p≤_s(0)_Ω_p_m(0)_Ω_p. Therefore,_m(0)_Ω_p≤1ρ_s_s(0)_Ω_p. Now let == and r=ṗ_m(0) and t=0 in (GF). Then by (<ref>) we haveṗ_m(0)_Ω_p≤1s_0f_p(0)_Ω_p. Furthermore if we let =, r=0, =_m(0) and t=0 in (GF), we get ρ_f_m(0)^2_Ω_f=-⟨ P_in(0)_Γ_f^in, _m(0)⟩_Γ_f^in+(_f(0), _m(0))_Ω_f.We need a good bound for ⟨ P_in_Γ, _m(0)⟩_Γ_f^in because otherwise _m(0)_Γ_f^in cannot be hidden in the left hand side.Assuming P_in∈ L^2(0,T;H_00^1/2(Γ_f^in)), its extensionP̃_in by zeroto ∂Ω_f is in L^2(0,T;H^1/2(∂Ω_f)). Then we can use the continuous lifting operator j:L^2(0,T;H^1/2(∂Ω_f))→ L^2(0,T;H^1( Ω_f)) with continuity constant C_j. Since ∇·_m=0 ⟨ P_in(0)_Γ_f^in , _m(0)⟩_Γ_f^in=⟨ j(P̃_in(0))_∂Ω_f, _m(0)⟩_∂Ω_f=(∇ (j(P̃_in(0))), _m(0) )_Ω_f≤∇ (j(P̃_in(0))_Ω_f_m(0) _Ω_f≤j(P̃_in(0))_H^1(Ω_f)_m(0) _Ω_f≤ C_jP̃_in(0)_H^1/2(∂Ω_f)_m(0) _Ω_f=C_jP_in(0)_H_00^1/2(Γ_f^in)_m(0) _Ω_f. 
Then_m(0)_Ω_f≤C_jρ_fP_in(0)_H_00^1/2(Γ_f^in)+1ρ_f_f(0)_Ω_f.Plugging these in (<ref>),ρ_f2_m^2_Ω_f +ρ_s2_m^2_Ω_p+μ_s(_m)^2_Ω_p +λ_s2∇·_m^2_Ω_p+s_02ṗ_m^2_Ω_p ≤C_f^2ρ_fP_in(0)^2_H_00^1/2(Γ_f^in)+1ρ_f_f(0)_Ω_f^2 +12ρ_s_s(0)^2_Ω_p+12s_0f_p(0)^2_Ω_p+_2^2_L^2(0,T) +1ρ_s∫_0^t_m(s)^2_Ω_pds= _3+_2^2_L^2(0,T) +1ρ_s∫_0^t_m(s)^2_Ω_pds where_3 is defined in (<ref>).If we neglect all terms other than the second one on the left hand side of the above equation we get _m^2_Ω_p≤2ρ_s_3+2ρ_s_2^2_L^2(0,T) +2ρ_s^2∫_0^t_m(s)^2_Ω_pds.Using Gronwall's inequality (<ref>) with B= 2ρ_s_3+2ρ_s_2^2_L^2(0,T) , C=2ρ_s^2 and ζ(t)=_m(t)^2_Ω_p, _m(t)^2_Ω_p≤e^2T/ρ_s^2(2ρ_s_2^2_L^2(0,T)+2ρ_s_3). Therefore, putting this in (<ref>) we have ρ_f2ddt_m^2_Ω_f+μ_f(_m)^2_Ω_f +ρ_s2ddt_m^2_Ω_p+μ_sddt(_m)^2_Ω_p+λ_s2ddt∇·_m^2_Ω_p +s_02ddtṗ_m^2_Ω_p+12^1/2∇ṗ_m^2_Ω_p≤_2^2(t)+e^2T/ρ_s^2ρ_s(_2^2_L^2(0,T)+_3). Integrating from 0 to t in [0,T^*] and recalling the bounds (<ref>), (<ref>) and (<ref>) we obtain ρ_f2_m^2_Ω_f +ρ_s2_m^2_Ω_p+μ_s(_m)^2_Ω_p+λ_s2∇·_m^2_Ω_p +s_02ṗ_m^2_Ω_p+∫_0^tμ_f(_m)^2_Ω_fdt+∫_0^t12^1/2∇ṗ_m^2_Ω_pdt≤_2^2_L^2(0,T)+Tρ_se^2T/ρ_s^2_2^2_L^2(0,T)+Te^2T/ρ_s^22_3=(1+Tρ_se^2T/ρ_s^2)_2^2_L^2(0,T)+Te^2T/ρ_s^22_3, which implies (<ref>). Neglecting all terms on the left hand side other than the first we find_m^2_Ω_f≤2ρ_f((1+Tρ_se^2T/ρ_s^2)_2^2_L^2(0,T)+Te^2T/ρ_s^22_3). To find a bound for _m_Ω_f in (<ref>), we recall (<ref>), use assumption (<ref>) to get (<ref>) for all t∈ [0,T^*]. Neglecting the terms other than the ones with d/dt on the left hand side, we integrate (<ref>) from 0 to t where t∈[0,T^*]. Since the initial conditions are zero, we have ρ_f2_m_Ω_f^2 +ρ_s/2_m_Ω_p^2+μ_s(_m)^2_Ω_p +λ_s/2∇·_m^2_Ω_p+s_0/2p_m_Ω_p^2 ≤ (1+Tρ_se^T/ρ_s)_1^2_L^2(0,T). 
Therefore using (<ref>) and (<ref>) in (<ref>) we get (_m)_Ω_f^2≤12μ_f(_1^2(t) +(1+1ρ_s e^T/ρ_s+Tρ_se^T/ρ_s)_1^2_L^2(0,T)+(1+Tρ_se^2T/ρ_s^2)_2^2_L^2(0,T)+Te^2T/ρ_s^22_3) for all t∈ [0,T^*].So if the data satisfy 12μ_f(_1^2_L^∞(0,T) +(1+1ρ_s e^T/ρ_s+Tρ_se^T/ρ_s)_1^2_L^2(0,T)+(1+Tρ_se^2T/ρ_s^2)_2^2_L^2(0,T)+Te^2T/ρ_s^22_3)<μ_f^39ρ_f^2S_f^4K_f^6,then we have(_m)_Ω_f<μ_f^23ρ_fS_f^2K_f^3for all t∈ [0,T^*] which contradicts with the assumption.Note that if P_in(0)=0,we immediately get _m(0)_Ω_f≤1ρ_f_f(0)_Ω_f rather than (<ref>). In that case, it is not necessary to assume that P_in∈ L^2(0,T;H_00^1/2(Γ_f^in)). §.§ Passing to the limit. Since we established the boundedness of the Galerkin solution (_m, _m, p_m) that we stated in Remark <ref>, we can pass to the limit in (GF) to obtain a solution to (WF2). Reflexivity of _f, _p and Q_p and therefore of H^1(0,T;_f), H^1(0,T;_p) and H^1(0,T;Q_p) imply that there exists a subsequence of the Galerkin solution, still denoted by {(_m, _m, p_m)}_m such that _m ⇀_f H^1(0,T;^1(Ω_f)),_m ⇀H^1(0,T;^1(Ω_p)), _m ⇀ L^2(0,T;^2(Ω_p)),p_m ⇀ p_p H^1(0,T;H^1(Ω_p))where ⇀ stands for weak convergence.Note that due to the continuity of the trace operator we also have_m ⇀_f L^2(0,T;H^1/2(∂Ω_f)),_m ⇀L^2(0,T;H^1/2(∂Ω_p)),p_m ⇀ p_p L^2(0,T;H^1/2(∂Ω_p)).We can pass to the limit in the linear terms with the convergence results above. However, the nonlinear term needs a stronger convergence result. To obtain such a result we recall that ^1(Ω_f)⊂^4(Ω_f)⊂^2(Ω_f) and that ^1(Ω_f) is compactly embedded in ^4(Ω_f) due to the Sobolev embedding theorem. Furthermore, Remark <ref> implies that the subsequence {_m}_m is bounded in L^4(0,T;^1(Ω_f)) and {∂_m/∂ t}_m isbounded in L^1(0,T;^2(Ω_f)). Therefore, Theorem <ref> implies that {_m}_m has a subsequence, still denoted the same, such that _m→_f L^4(0,T;^4(Ω_f)). Fix an integer k≥ 1. 
We multiply (GF) by ψ(t)∈ L^2(0,T) and integrate in time to obtain ∫_0^T(ρ_f_m ,ψ(t))_Ω_fdt+∫_0^T(2μ_f(_m),ψ(t)())_Ω_fdt+∫_0^Tρ_f(_m·∇_m,ψ(t))_Ω_fdt +∫_0^T(ρ_s_m,ψ(t))_Ω_pdt+∫_0^T(2μ_s(_m),ψ(t)())_Ω_pdt +∫_0^T(λ_s∇·_m,ψ(t)∇·)_Ω_pdt-∫_0^T(α p_m,ψ(t)∇·)_Ω_pdt+∫_0^T(s_0ṗ_m+α∇·_m,ψ(t)r)_Ω_pdt+∫_0^T(∇ p_m,ψ(t)∇ r)_Ω_pdt +∫_0^T⟨ p_m_Γ,ψ(t)(-)⟩_Γ_Idt+Σ_l=1^d-1∫_0^T⟨β(_m-_m)·_Γ^l,ψ(t)(-)·_Γ^l⟩_Γ_Idt+∫_0^T⟨(_m-_m)·_Γ,ψ(t)r⟩_Γ_Idt=-∫_0^T⟨ P_in(t)_Γ,ψ(t)⟩_Γ_f^indt+∫_0^T(_f,ψ(t))_Ω_fdt+∫_0^T(_s,ψ(t))_Ω_pdt+∫_0^T(f_p,ψ(t)r)_Ω_pdt, for all (, , r)∈_f^k×_p^k× Q_p^k, m≥ k. Letting m→∞ we obtain ∫_0^T(ρ_f_f, ψ(t))_Ω_fdt+∫_0^T(2μ_f(_f),ψ(t)())_Ω_fdt+∫_0^Tρ_f(_f·∇_f,ψ(t))_Ω_fdt +∫_0^T(ρ_s,ψ(t))_Ω_pdt+∫_0^T(2μ_s(),ψ(t)())_Ω_pdt +∫_0^T(λ_s∇·,ψ(t)∇·)_Ω_pdt-∫_0^T(α p_p,ψ(t)∇·)_Ω_pdt+∫_0^T(s_0ṗ_p+α∇·,ψ(t)r)_Ω_pdt+∫_0^T(∇ p_p,ψ(t)∇ r)_Ω_pdt +∫_0^T⟨ p_p_Γ,ψ(t)(-)⟩_Γ_Idt+Σ_l=1^d-1∫_0^T⟨β(_f-)·_Γ^l,ψ(t)(-)·_Γ^l⟩_Γ_Idt+∫_0^T⟨(-_f)·_Γ,ψ(t)r⟩_Γ_Idt=-∫_0^T⟨ P_in(t)_Γ,ψ(t)⟩_Γ_f^indt+∫_0^T(_f,ψ(t))_Ω_fdt+∫_0^T(_s,ψ(t))_Ω_pdt+∫_0^T(f_p,ψ(t)r)_Ω_pdt, for all (, , r)∈_f^k×_p^k× Q_p^k. Since any element of , _p, Q_p can be approximated by elements of _f^k, _p^k, Q_p^k and ψ∈ L^2(0,T) is arbitrary, this also holds for all (, , r)∈_f×_p× Q_p a.e. in (0,T). Therefore we recover the equation in (WF2).Last, we checkwhether the initial conditions are satisfied.In (<ref>) we let ψ∈^2([0,T]) such that ψ(T)=0 and ψ̇(T)=0 and integrate the first and the eighth terms once and the fourth term twice with respect to time. Then we do the same in (<ref>) and also take the limit as m→∞ using the convergence properties established in the beginning of the proof and using (<ref>). Comparing the resulting equations yield: -(ρ_f(0),ψ(0))_Ω_f-(ρ_s(0),ψ(0))_Ω_p+(ρ_s(0),ψ̇(0))_Ω_p-(s_0 p(0),ψ(0)r)_Ω_p=0, for all ∈, ∈_p, r∈ Q_p. Since ψ(0) and ψ̇(0) are arbitrary,-(ρ_f(0),)_Ω_f-(ρ_s(0),)_Ω_p+(ρ_s(0),)_Ω_p -(s_0 p(0),r)_Ω_p=0, for all ∈, ∈_p, r∈ Q_p. 
This yields the initial conditions stated in (WF2).Finally passing to the limit in (<ref>), (<ref>) and (<ref>), we obtain (<ref>), (<ref>) and (<ref>).§.§ Uniqueness.For the Stokes flow, there is no issue of uniqueness. For the Navier-Stokes problem, we can only proveuniqueness for restricted solution. Let (_f, , p_p) be a solution of (WF2).Then if ()_Ω_f≤μS_f^2K_f^3,then (_f, , p_p) is unique.Let (_1, _1, p_1) and (_2, _2, p_2) be two solutions of (WF2). Then =_1-_2, =_1-_2 and ϕ=p_1-p_2 satisfy: (ρ_f,)_Ω_f+(2μ_f(),())_Ω_f+ρ_f(_1·∇_1-_2·∇_2,)_Ω_f-(ϕ,∇·)_Ω_f +(ρ_s,)_Ω_p+(2μ_s(),())_Ω_p+(λ_s∇·,∇·)_Ω_p-(αϕ,∇·)_Ω_p+(s_0ϕ̇+α∇·,r)_Ω_p+(∇ϕ,∇ r)_Ω_p+⟨ϕ_Γ,-⟩_Γ_I+Σ_l=1^d-1⟨β(-)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨(-)·_Γ,r⟩_Γ_I=0 for all , , r. Let =, =, r=ϕ where (0)=, (0)=, (0)=, ϕ(0)=0. ρ_f2ddt^2_Ω_f+2μ_f()^2_Ω_f +ρ_s2ddt^2_Ω_p+μ_sddt()^2_Ω_p+λ_s2ddt∇·^2_Ω_p+s_02ddtϕ^2_Ω_p+^1/2∇ϕ^2_Ω_p +Σ_l=1^d-1β-)·_Γ^l^2_Γ_I =-ρ_f(·∇_1+_2·∇,)_Ω_f ≤ S_f^2K_f^3()^2_Ω_f(_1)_Ω_f+S_f^2K_f^3(_2)_Ω_f()_Ω_f^2 ≤ 2μ_f()^2_Ω_f. Therefore, ρ_f2ddt^2_Ω_f +ρ_s2ddt^2_Ω_p+μ_sddt()^2_Ω_p+λ_s2ddt∇·^2_Ω_p+s_02ddtϕ^2_Ω_p≤ 0which implies using the initial conditions:ρ_f2(t)^2_Ω_f +ρ_s2(t)^2_Ω_p+μ_s((t))^2_Ω_p +λ_s2∇·^2_Ω_p+s_02ϕ^2_Ω_p≤ 0This implies=, =, ϕ=0. §.§ Existence of a Navier-Stokes pressure.Next we prove that from any solution of (WF2) we can recover a solution of (WF1). In fact, an inf-sup condition is sufficient to show the existence of a p_f which was eliminated when we restricted the search space L^2(0,T;_f) to L^2(0,T;_f). The following inf-sup condition holds <cit.>: There exists a constant κ>0 such that inf_q∈ Q_fsup_∈_f(∇·,q)_Ω_f_H^1(Ω_f)q_L^2(Ω_f)≥κ.Note that we can replace the supremum above with the supremum over all (,,r)∈_f×_p× Q_p. In fact, the supremum is attained when = and r=0. 
If (_f, , p_p) is a solution of problem (WF2), then there exists a unique p_f∈ L^∞(0,T;Q_f) such that (_f,p_f,, p_p) is a solution of problem (WF1).Let's define a mappingsuch that for all (,,r)∈_f×_p× Q_p(, , r)=(ρ_f_f,)_Ω_f+(2μ_f(_f),())_Ω_f+ρ_f(_f·∇_f,)_Ω_f+(ρ_s,)_Ω_p+(2μ_s(),())_Ω_p+(λ_s∇·,∇·)_Ω_p-(α p_p(t),∇·)_Ω_p+(s_0ṗ_̇ṗ+α∇·,r)_Ω_p+(∇ p_p,∇ r)_Ω_p+⟨ p_p_Γ,-⟩_Γ_I+Σ_l=1^d-1⟨β(_f-)·_Γ^l,(-)·_Γ^l⟩_Γ_I+⟨(-_f)·_Γ,r⟩_Γ_I+⟨ P_in(t)_f,⟩_Γ_f^in-(_f,)_Ω_f-(_s,)_Ω_p-(f_p,r)_Ω_p, (0,T). It is straightforward to see thatis linear and continuous on _f×_p× Q_p for a.e. t∈ (0,T). Furthermore, (,,r)=0 for any (,,r)∈_f×_f× Q_p for a.e. t∈ (0,T). Therefore, the theory of Babuska-Brezzi imply for a.e. t∈ (0,T) that there exists a unique function p_f∈ Q_f such that(p_f,∇·)_Ω_f=(, , r), ∀ (,,r)∈_f×_f× Q_p, that is, there is a unique p_f∈ L^∞(0,T;Q_f) such that (_f,p_f,, p_p) is a solution of problem (WF1). Furthermore, letting r=0, = in (<ref>) and using the inf-sup condition againwe havep_f_Ω_f≤1κ(ρ_f_f_Ω_f+2μ_f(_f)_Ω_f+S_f^2_f_1, Ω_f^2+T_1T_3p_p_1,Ω_p +β T_1^2_f_1,Ω_f+β T_1T_5_1,Ω_p+T_2P_in_Γ_f^in+_f_Ω_f), a.e. in (0,T) which implies the bound (<ref>) on p_f in Theorem <ref>. This concludes the proof of Theorem <ref>. § THE INTERFACE CONDITIONS.In this section we prove that the interface conditions are meaningful for a solution of (<ref>), (<ref>), (<ref>) and (<ref>)and therefore justify the derivation of the weak formulationin the proof of Proposition <ref>.Let us consider a solution of (<ref>), (<ref>), (<ref>) and (<ref>) such that _f∈ L^2(0,T;_f), p_f∈ L^1(0,T;Q_f), ∈ H^1(0,T;_p) andp_p ∈ L^2(0,T;Q_p) with _f∈ L^1(0,T;^3/2(Ω_f)). §.§.§ Interface conditions (<ref>) and (<ref>). Since _f∈ L^2(0,T;_f) and ^1(Ω_f) isembedded continuously in ^6(Ω_f), for d=2,3, _f∈ L^2(0,T;^6(Ω_f)). Then Hölder's inequality implies_f·∇_f_L^1(0,T;^3/2(Ω_f)≤_f_L^2(0,T;^6(Ω_f))_f_L^2(0,T;_f). 
Therefore, _f·∇_f∈ L^1(0,T;^3/2(Ω_f)).Then from (<ref>)∇·_f=_f-ρ_f_f-_f·∇_f∈ L^1(0,T;^3/2(Ω_f)).Also _f = 2μ_fD(_f)-p_fI ∈ L^1(0,T;^2(Ω_f)).Hence, each row of _f belongs to L^1(0,T;^3/2(;Ω_f)).Since ^1(Ω_f) is dense in ^3/2(;Ω_f), the following Green's formula hold:∀∈^3/2(;Ω_f), ∀ϕ∈ H^1(Ω_f), ⟨·_∂Ω_f, ϕ⟩_∂Ω_f=(∇·, ϕ)_Ω_f+(, ∇ϕ)_Ω_f.This allows _f_Γ to be defined in a weak sense on Γ_I. See also <cit.>. Nextsince ∈ L^2(0,T;_f), ∈ L^2(0,T;_p) and p_p∈ L^2(0,T;Q_p) we have (-p_p_Γ-∑_l=1^d-1(β(_f-)·_Γ^l)_Γ^l)|_Γ_I∈ L^2(0,T;^4(Γ_I)) where _Γ^1, _Γ^2 are the unit tangential vectors on Γ_I. With these the interface conditions (<ref>) and (<ref>) make sense. For the rest of the interface conditions we use a global space-time argument in ℝ^d+1, d=2, 3. We define new variables in ℝ^d+1 by =(t, x_1, , x_d), d=2, 3 and also define the cylindrical region Ω̃_p=[0,T]×Ω_p. §.§.§ Interface condition (<ref>).Defining Θ=(-(s_0p_p+α∇·), ∇ p_p),the equation (<ref>) can be written as-∇_·Θ=f_pwhere ∇_=(ddt,∇·)^T.Since f_p∈ L^2(Ω̃_p) and Θ∈^2(Ω̃_p), we have Θ∈(;Ω̃_p). So Θ·_Ω̃_p|_∂Ω̃_p is well-defined in H^-1/2(∂Ω̃_p). Therefore the following Green's formula holds <cit.>:∀∈^1(Ω̃_p),(∇_·Θ, )_Ω̃_p=-(Θ,∇_)_Ω̃_p+⟨ (Θ·_Ω̃_p, ⟩_∂Ω̃_pwhere _Ω̃_p outward unit normal to Ω̃_p. This allows the well-definition of ∇ p_p·_∂Ω_p in the weak sense on Γ_I.§.§.§ Interface condition (<ref>). From the above discussion it is enough to check whether _p_Γ is meaningful and we use a similar space-time argument. Defining Σ=(-ρ_s , _p), the equation (<ref>) can be written as-∇_·Σ=_s.Since _s∈ L^2(Ω̃) and ∈^2(Ω̃_p) we have Σ∈(;Ω̃_p). Therefore the following Green's formula holds <cit.>:∀∈^1(Ω̃_p),(∇_·Σ, )_Ω̃_p=-(Σ,∇_)_Ω̃_p+⟨Σ·_Ω̃_p, ⟩_∂Ω̃_p.This allows the well-definition of _p_Γ in the weak sense on Γ_I. § REFERENCESelsarticle-harv
http://arxiv.org/abs/1702.08095v1
{ "authors": [ "Aycil Cesmelioglu" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20170226220319", "title": "Analysis of the coupled Navier-Stokes/Biot problem" }
Diagrammar in an Extended Theory of Gravity David C. Dunbar, John H. Godwin, Guy R. Jehu and Warren B. Perkins December 30, 2023 ======================================================================We present a new public dataset with a focus on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes. We train a fast object category detector for instance detection on our data. Using the dataset we show that, although increasingly accurate and fast, the state of the art for object detection is still severely impacted by object scale, occlusion, and viewing direction, all of which matter for robotics applications. We next validate the dataset for simulating active vision, and use the dataset to develop and evaluate a deep-network-based system for next best move prediction for object classification using reinforcement learning. Our dataset is available for download at <cs.unc.edu/~ammirato/active_vision_dataset_website/>. § INTRODUCTION The ability to recognize objects is a core functionality for robots operating in everyday human environments. While there has been amazing recent progress in computer vision on object classification and detection, especially with deep models, these lines of work do not address some of the core needs of vision for robotics. Partly this is due to biases in the imagery considered and the fact that these recognition challenges are performed in isolation for each image. In robotic applications, the biases are different and recognition is performed over multiple images, often with active control of the sensing platform (active vision). This paper attempts to address part of this disconnect by introducing a new approach to studying active vision for robotics: collecting very dense imagery of scenes in order to allow simulating a robot moving through an environment by sampling appropriate imagery. The goals are two-fold: to provide a research and development resource for computer vision without requiring access to robots for experiments, and to provide a way to benchmark and compare different approaches to active vision without the difficulty and expense of evaluating the algorithms on the same physical robotics testbed. We begin by collecting a large dataset of dense RGB-D imagery of common everyday rooms: kitchens, living rooms, dining rooms, offices, etc. This imagery is registered and used to form a 3D reconstruction of each scene. This reconstruction is used to simplify labeling of objects in the collection in 3D as opposed to individually in the thousands of images of those objects. The geometric relationship between images is also used to define connectivity for determining what image would be seen next when moving in a given direction from a given camera position (e.g. what would I see if I turned right?
§ INTRODUCTION The ability to recognize objects is a core functionality forrobots operating in everyday human environments.While there has been amazing recent progress in computer vision on object classification and detection, especially with deep models, these lines of work do not address some of the core needs of vision for robotics.Partly this is due to biases in the imagery considered and the fact that these recognition challenges are performed in isolation for each image.In robotic applications, the biases are different and recognition is performed over multiple images, often with active control of the sensing platform (active vision).This paper attempts to address part of this disconnect by introducing a new approach to studying active vision for robotics by collecting very dense imagery of scenes in order to allow simulating a robot moving through an environment by sampling appropriate imagery.The goals are two-fold, to provide a research and development resource for computer vision without requiring access to robots for experiments, and to provide a way to benchmark and compare different approaches to active vision without the difficulty and expense of evaluating the algorithms on the same physical robotics testbed.We begin by collecting a large dataset of dense RGB-D imagery of common everyday rooms: kitchens, living rooms, dining rooms, offices, etc. This imagery is registered and used to form a 3D reconstruction of each scene.This reconstruction is used to simplify labeling of objects in the collection in 3D as opposed to individually in the thousands of images of those objects.The geometric relationship between images is also used to define connectivity for determining what image would be seen next when moving in a given direction from a given camera position (e.g. what would I see if I turned right? 
went backwards?).Given this labeled data we adapt a state-of-the-art fast object category detector <cit.> based on deep convolutional networks to the task of recognizing specific object instances in the dataset. While most deep-learning approaches have focused on category detection, instance detection can be practically useful for robotics.This distinction between recognizing a category of object, such as chair, versus a specific object, such as a particular 8.4oz Red Bull can is important.Our results show that the category detection framework can be adapted to instance detection well, with some caveats.Where the detection framework has difficulty is in the range of scales, viewing directions, and occlusions present in everyday scenes (e.g. our data) that is different from the biases present in Internet collected datasets.While the detector performs well for large frontal views of objects its performance falls for other views.This is quantified in Sec. <ref>. This view-dependent variation in recognition performance motivates active-vision for object recognition, controlling the sensing platform to acquire imagery that improves recognition accuracy. Our high-level goals are based on using the pre-collected dense imagery to develop and test active-vision algorithms.To validate this approach we begin by demonstrating that the imagery is sampled densely enough. In particular we care that the results and accuracy of recognition algorithms on samples of the densely collected imagery are close to the results that would be achieved if the robot moved continuously through the environment. This is explored in Sec. <ref>.Given this validation, we proceed to use the densely sampled dataset to train and evaluate a deep-network for determining the next best move to improve object classification. 
The recognition component for this is pre-trained with external data and then a combined network that performs recognition and selects a direction to move in to improve accuracy is trained on a subset of the densely sampled data using reinforcement learning.To illustrate one way to use the dataset, we employ multiple train/test splits to determine the expected increase in accuracy with multiple moves using our next best move network.See Sec. <ref>. The collected dataset and labels are available at<http://cs.unc.edu/ ammirato/active_vision_dataset_website/>, as well as a small toolbox for visualizations and loading.We hope to also provide the functionality to allow groups to submit algorithms for evaluation on completely private test data in the future.Before collection of imagery, release forms were signed and collected allowing free and legal access to the collected data. § RELATED WORK This paper proposes an approach to collecting and using datasets to train and benchmark object detection and recognition, especially for active recognition. We briefly discuss some of the most related work in each area. The datasets that have been a driving force in pushing the deeplearning revolution in object recognition, Pascal VOC <cit.>,the ImageNet Challenge <cit.>, and MS COCO <cit.>are all collected from web images (usually from Flickr) using websearch based on keywords.These image collections introduce biases from thehuman photographer, the human tagging, and the web search engine. 
As a result, objects are usually of medium to large size in images and are usually frontal views with small amounts of occlusion. In addition, these datasets focus on object category recognition. The state of the art for object classification and recognition in these datasets is based on either object proposals and feature pooling following <cit.> with advanced deep networks <cit.>, or on fully convolutional networks implementing a modern take on sliding windows <cit.> that provide frame-rate or faster performance on high-end hardware for some reduction in accuracy. Instance recognition (as opposed to object category recognition) has generally been approached using local features or template matching techniques. A recent relevant example using these types of models is <cit.>, which trains on objects in a room and is tested on the same objects in the room after rearrangement. In our experiments we are interested in generalization to new environments in order to avoid training in each new room. More recently, <cit.> shows how deep-learning for comparing instances can be applied to instance classification and outperform classic matching methods. For our data, we are also interested in instance detection, including localization in a large image. We use the system from <cit.> to build a much faster detector for object instances than would be possible with explicit matching. There are many RGB-D datasets available today, but none with a focus on simulating robot motion through an environment. <cit.> gives a list of various RGB-D datasets, some of which focus on single objects <cit.>, in what we call “table-top” style data. This type of data, especially the data in BigBIRD <cit.>, is similar to what manufacturers may provide for robots in the future. While not capturing real-world scenes, the number of views and detail for each instance in this data can provide valuable training data for instance recognition systems.
We include over 30 object instances similar to those in the BigBIRD dataset in our scenes. Scene datasets <cit.>, <cit.>, <cit.>, and <cit.> do explore environments more than “table-top” data but do not have a dense set of views to simulate robot motion. These datasets often have only one or two paths through the scene. An actual robot in the real world has many choices of where to move, and the controller has to be able to pick a good path. See Figure <ref> for a comparison of the available paths through scenes in previous datasets and our data. Active vision has a long history in robotics. Early work largely centered around view selection <cit.>. Others <cit.> have worked on the problem from a more theoretical perspective, but under many simplified settings for possible motions, or assumptions about known object models. In recent years, next best view prediction has been one of the more popular active vision problems. However, most of these approaches use CAD models of the objects of interest <cit.>, with some small sets of real-world images <cit.>. CAD models produce encouraging results, but leave out some real-world challenges in perception. <cit.> gives a system for object detection, pose estimation, and next best view prediction. They are able to test their detection and pose estimation system on existing real image datasets, but need to collect their own data to test their active vision framework. They collect a small scale dataset of only “table-top” style scenes with about 30-60 images each. This shows the need for a dataset for active vision, while also showing how difficult it can be to collect such data at a large scale. § DATA COLLECTION Our dataset covers a variety of scenes from office buildings and homes, often capturing more than one room. For example, a kitchen, living room, and dining room may all be present in one scene. We capture a total of 9 unique scenes, but have a total of 17 scans since some scenes are scanned twice.
Each scene has from 696 to 2,412 images, for a total of 20,916 images and 54,247 bounding boxes. We use the Kinect v2 sensor and code from <cit.> for collection. As stated, we aim to be able to simulate robotic motion through each scene with our scans. At first it may seem the best way to do this is to capture video as the camera moves around the scene. However, in order to get more than one view at any given point the camera must be rotated at that point. It is not possible to visit the infinite number of points in each scene, so a discrete set of points must be chosen. In a video, even if a consistent frame rate and rotation speed are maintained, there will be images in between the points of rotation that still represent only a single view of a position in the scene. This is unnatural for movement. Imagine a robot arriving at a location and being unable to turn in place. We choose to have the camera visit a set of discrete points throughout the scene in order to provide some consistency among the images and camera positions. A video could still be collected at each point of rotation, but this would increase the dataset size unnecessarily. We choose to sample every 30 degrees at each point of rotation, providing substantial overlap between images while keeping the number of images in each scene manageable. The set of points our robot visits in each scene is essentially a rectangular grid over the scene. We make our points 30 centimeters apart, and justify this in later experiments. Our scenes have between 58 and 201 points, which allow many choices of how to move. Two scans of a scene will have different placements of objects. Only objects that would be naturally moved in daily life are relocated. For example chairs, books, and BigBIRD objects may be moved, but sofas and refrigerators will stay put. There are two advantages to scanning each scene twice.
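As a rough illustration of this sampling scheme, the discrete capture poses for one scan can be enumerated as below. The function name, the scene extents, and the tuple format are our own invention for the sketch; only the 30 cm grid spacing and 30 degree rotation step come from the text.

```python
import numpy as np

def capture_poses(x_extent, y_extent, spacing=0.30, angle_step=30):
    """Enumerate the discrete capture poses for one scene scan.

    Positions lie on a rectangular grid with `spacing` metres between
    neighbouring points; at every point the camera records one image per
    `angle_step`-degree heading (12 headings for a 30-degree step).
    """
    xs = np.arange(0.0, x_extent + 1e-9, spacing)
    ys = np.arange(0.0, y_extent + 1e-9, spacing)
    headings = np.arange(0, 360, angle_step)
    return [(x, y, h) for x in xs for y in ys for h in headings]

# A small 1.2m x 0.9m region gives a 5 x 4 grid of points,
# each with 12 headings.
poses = capture_poses(1.2, 0.9)
```

A full scene with 58-201 grid points would then yield roughly 700-2,400 images, consistent with the per-scene counts above.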
First, we are able to get more data from each scene, which is important given the limited availability of scenes. Second, we can test a system that learns about objects and a scene from an initial scan, and then is tested on the same scene with moved or new objects, e.g. <cit.>. §.§ Labels We aim to collect 2D bounding boxes of our 33 common instances across all scenes. In addition, we need to provide movement pointers from each image to allow movement through the scene. We provide pointers for rotation clock-wise and counter clock-wise, as well as translation forward, backward, left, and right. For each scan of each scene, we create a sparse reconstruction of the scene using the RGB structure from motion tool COLMAP from Schönberger et al. <cit.>. From the reconstruction we get the camera position and orientation for each image. We don't use depth information for the reconstruction because our sampling is so dense that we are rarely testing the limits of the RGB system. See Figure <ref> for example reconstructed camera positions. Using the camera positions and orientations we are able to calculate the movement pointers that allow navigation through each scene using natural robotic movements. To label every object instance in each scan, we feed the output of COLMAP into the dense reconstruction system CMVS/PMVS <cit.>. This gives us a denser point cloud of the scene that makes it easy for humans to recognize objects. We then extract the point cloud of each instance from this dense reconstruction, and are able to get 2D bounding boxes in every image by projecting the point clouds for each object into each image. See Figure <ref>. Given that most of our scans include multiple rooms and lots of clutter, we must account for occlusion or the point clouds will project through walls and occluding objects and give low quality 2D bounding boxes. We are able to use the Kinect depth maps with the reconstructed point clouds and camera poses to account for some occlusion, but not all.
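One way to compute a movement pointer from the reconstructed poses is to pick, among nearby cameras, the one whose displacement best aligns with the current viewing direction. The function below is a sketch of that idea; the thresholds and the -1 "no move" convention are illustrative choices of ours, not the dataset's.

```python
import numpy as np

def forward_pointer(idx, positions, forwards, max_dist=0.45, min_cos=0.9):
    """Pick the image a robot would see after moving forward from image idx.

    `positions` is an (N, 3) array of reconstructed camera centres and
    `forwards` an (N, 3) array of unit viewing directions, both obtained
    from the structure-from-motion step.  The candidate whose displacement
    from camera idx best aligns with the viewing direction, within
    `max_dist` metres, becomes the 'forward' pointer; -1 means no such
    move exists.  Pointers for the other five moves follow the same
    pattern with a different reference direction.
    """
    best, best_cos = -1, min_cos
    for j in range(len(positions)):
        if j == idx:
            continue
        d = positions[j] - positions[idx]
        dist = np.linalg.norm(d)
        if dist == 0 or dist > max_dist:
            continue
        c = float(np.dot(d / dist, forwards[idx]))
        if c > best_cos:
            best, best_cos = j, c
    return best
```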
Some occlusion is missed by the raw depth maps because they are sometimes noisy, giving wrong or no values for reflective, shiny surfaces, and are not at the same resolution as the RGB images. To improve a given depth map D, we build a dense reconstruction by back projecting the depth maps of cameras that see similar areas of the scene. This solves the difference in resolution problem, as the other depth maps cover the areas missed by D. We are also able to fill in many of the missing or wrong values on specular surfaces by taking advantage of the fact that these values are either zero, or much greater than the true depth. Each depth image has a slightly different view of the specular surface, and so has various correct and incorrect values on that surface. By projecting the point clouds of many depth images into D and keeping the smallest value for each pixel, we are able to remove most of the wrong values that are too large, and fill in a lot of the missing values. As a last step we perform some simple interpolation to attempt to fill in any holes of missing values that are left. See Figure <ref> for a comparison of original to improved depth maps. Though the improved depths are much better, they are still not perfect. There is also noise in the dense reconstruction and noise in the labeled point clouds. Knowing this, we inspect every bounding box ourselves to make sure it contains the correct object, and is not of poor quality (too large or small for the object). We have labeled our scans for BigBIRD objects, yielding an average of over 3000 2D bounding boxes per scan. We provide some measure of difficulty for each bounding box based on its size, leaving a measure of occlusion to future work. For our experiments we only consider boxes with a size of at least 50x30 pixels.
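The min-depth fusion heuristic described above can be sketched as follows. The reprojection of neighbouring depth maps into D's frame is assumed to have been done elsewhere; this sketch only shows the per-pixel rule (wrong specular values are either zero or too large, so keep the smallest positive candidate).

```python
import numpy as np

def fuse_depths(warped_depths):
    """Fuse several depth maps already warped into one camera's frame.

    At every pixel, keep the smallest positive depth among the candidate
    maps: zero readings are missing, and spurious specular readings are
    larger than the true depth, so the minimum positive value is the best
    guess.  Pixels with no valid candidate in any map stay 0.
    """
    stack = np.stack(warped_depths).astype(float)
    stack[stack <= 0] = np.inf          # ignore missing readings
    fused = stack.min(axis=0)
    fused[np.isinf(fused)] = 0.0        # still-missing pixels stay 0
    return fused
```

A final interpolation pass over the remaining zeros, as the text describes, would fill whatever holes survive the fusion.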
§ EXPERIMENTS We aim to show four things: a baseline for instance detection on our data, why it is important to design systems specifically for robot motion, how our dataset can be used to simulate motion, and a system demonstrating an active vision task on our dataset. §.§ Instance Detection We use a state-of-the-art class level object detector as a baseline for instance detection on our dataset. We choose the Single Shot Detection (SSD) network from <cit.> because it offers real time detection performance (72 FPS) while maintaining a high level of accuracy. This is exciting for robotics applications for which real time performance is crucial. The SSD network consists of a base network, in our case VGG <cit.>, with additional feature maps added on top of the base network through a series of 1x1 and 3x3 convolutions. We separate our dataset into three training and testing splits. Each split consists of eleven scans from seven scenes as training and three scans from two scenes for testing. Since small objects present a particularly difficult challenge for our detector, we first only consider boxes of size at least 100x75 pixels for training and testing. We then include all boxes of size at least 50x30, adding more training data but also a more difficult test scenario. We use 500x500 images for training SSD. We train the network with an initial learning rate of 0.001 for 20,000 iterations with a stepsize of 6,000. We choose to use the same hyperparameter settings across all splits of the data. The Mean Average Precision results for each split are shown in Table <ref>. From this table we can see that the network's performance can vary depending upon the training and testing split used.
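Mean Average Precision scoring matches detections to ground truth by intersection-over-union. A minimal IoU helper of the kind such an evaluation needs is shown below; the [x1, y1, x2, y2] box format is our assumption for the sketch.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```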
In the next section we explore how detection performance is affected by numerous factors in our dataset.

§.§ Qualitative Results

As our data has a wide variety of views of each object, varying in pose and scale, we wanted to see how the detector fared with respect to different views. Figure <ref> shows how the detection score changed as the camera position changed relative to an object instance. We can see there is a clear pattern showing the detector is more reliable in some camera positions than in others. Figure <ref> shows how occlusion and object pose can greatly impact the detector even though there are training examples for both cases. We observed similar performance for many objects in all of our test scenes. This behavior motivates an active system that can move from a position with poor detection outputs to one with improved performance.

§.§ Ability to Simulate Motion

There are many parts of a robotic system that may be impacted by movement, but we are focused on the vision system, in particular object recognition. To find an appropriate sampling resolution for object recognition, we examine how a vision system's output changes as a function of camera movement. We need a sampling resolution that can simulate motion but is also practical for data collection purposes. We first drive our robot around some scenes, capturing video as if the robot were naturally moving through the environment. We then label all BigBIRD instances in the videos, and run our instance detector on each image. For each video, we calculate the difference in detection score for each instance in all pairs of images. For example, we take the fourth and tenth frames and plot the difference in score for an instance against the distance the camera moved between the frames. We plot the results from four videos in Figure <ref>. For all instances that were detected in at least one image (score greater than 0), even the smallest movement of the camera results in some change in detection score.
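The pairwise comparison just described can be sketched in a few lines; the function name and the simplified 2D camera-position representation are assumptions for illustration:

```python
import itertools

def score_change_vs_distance(camera_positions, scores):
    """For every pair of frames in a video, pair the camera displacement
    with the change in detection score for a single labeled instance.

    camera_positions: one (x, y) camera position per frame
    scores: detection score of the instance in each frame (0 if missed)
    Returns (distance, score change) pairs sorted by distance, ready to plot.
    """
    pairs = []
    for (p1, s1), (p2, s2) in itertools.combinations(zip(camera_positions, scores), 2):
        dist = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
        pairs.append((dist, abs(s1 - s2)))
    return sorted(pairs)
```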
As the distance between cameras increases, there is a greater change in detection score. We considered the trade-off between having lower variation in our vision system and the practicality of data collection. The vertical blue line in each plot in Figure <ref> shows our chosen resolution, 30cm. We found that for most instances, the change in score at 30cm is not much different than the changes at smaller resolutions like 10 or 20cm.

§.§ Active Vision

In this section we propose a baseline for an active instance classification task on our dataset. We envision a scenario where a robotic system is given an area of interest, and the system must classify the object instance at that location. We assume that, given an initial area, localizing the same area in subsequent images is straightforward. Based on these assumptions, we propose the following problem setting. As input our agent receives an initial image with a bounding box for the target object. The agent can then choose an action at each timestep and will receive a new image and bounding box corresponding to the action. The goal is for the agent to learn an action policy that increases the accuracy of the instance classifier. A straightforward way of training an active vision system for object recognition would be to train the system to acquire new views of an object when there is occlusion. However, it is not easy to label and quantify the level of occlusion of a target object. Furthermore, even if these labels were readily available, our intuition about which views are difficult for a classifier would not necessarily be correct. For example, a classifier may be able to easily recognize some heavily occluded objects by only looking at a small discriminative part of the object. In addition, our dataset contains numerous factors beyond occlusion that make the classification task difficult, such as varying object scale and lighting conditions.
Therefore, we choose to use the classification score as the training signal for our active vision system. A new view of an instance can increase both the confidence and the accuracy of our classifier. This leads our model to learn a policy which attempts to move the agent to views that improve recognition performance. As a feature extractor, we used the first 9 convolutional layers of a ResNet-18 model <cit.>, which recently showed compelling results on the 1000-way ImageNet classification task. We used pre-trained models written in the Torch framework <cit.>. The weights for the network are fixed for all experiments, although our overall system is end-to-end trainable. The instance classifier and action network share the feature extractor. See Figure <ref>.

We first train an instance classifier for BigBIRD <cit.> instances, which appear in our dataset. One natural choice might be to train the classifier and action network simultaneously on our dataset. However, deep neural networks can easily achieve almost 100% classification accuracy on our training dataset. This type of over-fitting would prevent our action network from learning a meaningful policy, and the resulting classifier does not perform well on the test set. Thus, we use images from the BigBIRD <cit.> dataset for training our instance classifier. Even though the BigBIRD dataset provides many viewpoints of each instance, it cannot be used directly for training since it consists of objects against a plain white background. We instead use the provided object masks to crop each object and overlay it on a random background sampled from the SUN397 dataset <cit.>. To prevent our network from overfitting, we aggressively applied various data augmentations, including randomly cropping part of the image, color jittering, and sampling different lighting.
Additionally, since our dataset consists of many small object instances, we randomly scaled each object by a factor ranging from 0.02 to 1. Our baseline action network is inspired by a recent active vision approach <cit.>. We use the REINFORCE algorithm to train a network to predict an action at each time step. At each time step our action network receives as input an image and a bounding box for the current position. Our network then outputs a score for each action: forward, backward, left, right, clockwise rotation, and counter-clockwise rotation. We fix the maximum number of timesteps during training to T=5, stopping early if the classifier achieves a confidence score above 0.9. If the instance classifier correctly classifies the instance at the final timestep, or reaches a 0.9 score at any timestep, we consider the actions taken by the action network to be correct. We then give the network a positive reward signal to adjust the weights of the action network to encourage the chosen moves. More formally, we want to maximize the expected reward with respect to the policy distribution represented by our action network:

J(θ) = 𝔼_p(a_1:T|ϕ(I_1:T),bb_1:T;θ)[R]

where ϕ(I_1:T) are the CNN features for the images and bb_1:T are the bounding boxes of the target object. If the classification is correct, R is the score of the classifier; otherwise R=0. For simplicity, we assumed the policy distributions to be independent at each timestep:

p(a_1:T|ϕ(I_1:T),bb_1:T;θ) = ∏_t^T p(a_t|ϕ(I_t-1),bb_t-1;θ).

In order to compute gradients with respect to the parameters of our action network, we use the REINFORCE algorithm, a sample approximation to the gradient introduced by <cit.> and recently popularized by <cit.>:

∇_θJ ≈ 1/M ∑_i=1^M ∑_t=1^T ∇_θ log p(a_t^i|ϕ(I_t-1^i),bb_t-1^i;θ) R^i

We evaluate our action network by comparing the accuracy of our classifier at different timesteps.
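The sample-approximated gradient above can be sketched for a toy linear softmax policy; the names and the linear parameterization are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_grad(theta, features, actions, rewards):
    """Sample approximation of the policy gradient for a linear softmax
    policy p(a | x) = softmax(theta @ x).

    features: list of episodes, each a list of per-step feature vectors x_t
    actions:  matching list of chosen action indices a_t per episode
    rewards:  one scalar reward R^i per episode (classifier score, or 0)
    """
    grad = np.zeros_like(theta)
    M = len(features)
    for xs, acts, R in zip(features, actions, rewards):
        for x, a in zip(xs, acts):
            p = softmax(theta @ x)
            # gradient of log p(a | x) w.r.t. theta for a softmax policy
            dlogp = -np.outer(p, x)
            dlogp[a] += x
            grad += dlogp * R
    return grad / M
```

A gradient-ascent step with this estimate raises the probability of actions taken in rewarded episodes and leaves unrewarded episodes untouched.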
The action network is used to choose an action at each image location at each time step, moving to a new image location for the next timestep. We consider how the classification accuracy changes as the maximum timestep, T, increases. Since many of the instances in our dataset are small and far away in the image, a natural baseline policy is one that always chooses the move-forward action. We additionally compare against a policy of choosing a random action. Figure <ref> shows how our system is able to greatly improve classification accuracy by moving to new image locations. We are also able to outperform the two obvious baselines. Figure <ref> shows some qualitative examples of our system moving through a scene.

One potential improvement to our active classification model is a method for aggregating the views at each time step in order to choose the next action and perform multi-view classification <cit.>. We would also like to explore recurrent models that consider the history of actions taken. Additionally, the active vision task can be made more difficult by not providing the bounding box. This would require a policy that considers several hypotheses for both the location and the class of the object. We expect that our dataset will provide a challenging test bed for further active vision research.

§ CONCLUSIONS

We introduce a new labeled dataset for developing and benchmarking object recognition methods in challenging indoor environments, as well as active vision strategies for these tasks. We establish a baseline for object instance detection and show that the data is suitable for training a modern deep-learning-based system for next-best-view selection using reinforcement learning, something that usually requires a robot in the loop or synthetic computer graphics models.
Using our densely sampled RGB-D imagery allows systems to see and be evaluated on real-world visual perception challenges, which include large variations in scale and viewpoint as well as real imaging conditions that may not be present in CG. We validate experimentally that current state-of-the-art detection systems benefit from active vision on this real-world data. The dataset and toolbox for processing are now public.
http://arxiv.org/abs/1702.08272v2
{ "authors": [ "Phil Ammirato", "Patrick Poirson", "Eunbyung Park", "Jana Kosecka", "Alexander C. Berg" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170227132335", "title": "A Dataset for Developing and Benchmarking Active Vision" }
http://arxiv.org/abs/1702.08268v1
{ "authors": [ "Nabarun Chakrabarty", "Biswarup Mukhopadhyaya" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170227132134", "title": "High-scale validity of a two Higgs doublet scenario: predicting collider signals" }
Analytical Representations of Divisors of Integers

Krzysztof Maślanka
e-mail: krzysiek2357@gmail.com
Institute for the History of Science, Polish Academy of Sciences

Certain analytical expressions which "feel" the divisors of natural numbers are investigated. We show that these expressions encode to some extent the well-known algorithm of the sieve of Eratosthenes. Most of the text is written in a pedagogical style; however, some formulas are new.

MSC: Primary 11A51; Secondary 26A06

§ NOTATION AND CONVENTIONS

Throughout this paper we shall adopt the following notation and conventions: n is a given natural number and k is a possible divisor of n. If k actually divides n then j=n/k. Let f(x) denote any real analytic function defined in the neighborhood of the origin by a power series

f(x) = ∑_j=0^∞ c_j x^j

with all c_j ≠ 0 (j=1,2,3,...). It will be shown that j is also the exponent of x in the expansion (<ref>) around zero, and that j labels half-lines or rays of divisors (see below).

§ MOTIVATION

The theory of divisors of integers is the cornerstone of elementary number theory. It is convenient to introduce the characteristic function for divisors:

Definition. For any n, k ∈ ℕ

α̂_nk := { 1  if k | n
         { 0  if k ∤ n.

Another pretty obvious (and rather useless in numerical calculations) representation of (<ref>) is:

α̂_nk = 1/Γ(1 - mod(n,k))

where Γ(s) denotes the Euler gamma function and mod(n,k) gives the remainder on division of n by k. In fact (<ref>) is more general than (<ref>) since it may be calculated also for non-integer or even complex values of n and k, but this leads to some interpretation difficulties which we shall not discuss here.

Consider the following expression for some natural numbers n and k:

α_nk = d^n/dx^n e^(x^k) |_x=0

We will prove the following

Theorem. Apart from a trivial normalization factor, α̂_nk defined in formula (<ref>) is equal to α_nk defined in (<ref>).

Proof.
Expanding the exponential function in (<ref>) in a power series and performing term-by-term differentiation we get:

α_nk = d^n/dx^n ∑_j=0^∞ (x^k)^j/j! |_x=0 = ∑_j=0^∞ (1/j!) d^n/dx^n x^(jk) |_x=0

Recall the general formulas for the n-th derivative of x^p with respect to x:

d^n/dx^n x^p = [Γ(p+1)/Γ(p+1-n)] x^(p-n) = n! (p choose n) x^(p-n)

d^n/dx^n x^p = (-1)^n [Γ(n-p)/Γ(-p)] x^(p-n)

where the second formula stems from properties of the gamma function and is suitable also for negative integer p (see e.g. <cit.>). Note that the order of the derivative n does not have to be an integer, but for integer n both (<ref>) and (<ref>) reduce to the well-known elementary differentiation rule. Using (<ref>) we get:

α_nk = ∑_j=0^∞ (1/j!) [Γ(jk+1)/Γ(jk+1-n)] x^(jk-n) |_x=0 = n! ∑_j=0^∞ (1/j!) (jk choose n) x^(jk-n) |_x=0

By simple inspection of (<ref>) we see why this expression "feels" the divisors of the integer n. Indeed, when taking the limit x→0 the only non-zero term in the series appears when jk=n for some integer j, and this occurs if and only if k divides n. All terms with jk>n disappear in the limit x→0, whereas those with jk<n, although singular at x=0, vanish since the binomial coefficient is zero. Therefore, in the summation (<ref>) at most one term can survive in the limit process. ▪

The above reasoning might appear far too elaborate. However, it guarantees that no divisors have been omitted. It should also be stressed that it may be used as a starting point for various generalizations, since n need not be an integer. It is easy to guess the normalizing factor:

α_nk = (n/k)! (1/n!) d^n/dx^n e^(x^k) |_x=0

Using the same reasoning we can derive a similar expression for α_nk:

α_nk = (k!)^(n/k) (n/k)! (1/n!) d^n/dx^n e^(x^k/k!) |_x=0

§ SIMPLE EXAMPLE

In a natural way the coefficients α_nk may be regarded as a square matrix of arbitrarily large dimension, where the running integer n labels rows and the potential divisor k labels columns. The entries of this matrix are either one or zero depending on whether k divides n or not.
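The theorem is easy to check numerically. A minimal sketch, assuming SymPy is available, evaluates the normalized derivative formula symbolically and compares it with the divisibility indicator:

```python
import sympy as sp

def alpha(n, k):
    """Evaluate alpha_{nk} = (n/k)! (1/n!) d^n/dx^n e^{x^k} |_{x=0},
    reading (n/k)! as Gamma(n/k + 1) so the formula is defined for every k
    (when k does not divide n the derivative at 0 vanishes anyway)."""
    x = sp.symbols('x')
    deriv = sp.diff(sp.exp(x**k), x, n).subs(x, 0)
    return sp.simplify(sp.gamma(sp.Rational(n, k) + 1) / sp.factorial(n) * deriv)
```

For every pair with k | n the result is 1, and 0 otherwise, exactly reproducing the characteristic function of divisors.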
This matrix is always triangular, since of course no divisor can exceed a given number, and its determinant (for any dimension) is 1.

n\k   1  2  3  4  5  6  7  8  9 10 ...
 1    1  0  0  0  0  0  0  0  0  0 ...
 2    1  1  0  0  0  0  0  0  0  0 ...
 3    1  0  1  0  0  0  0  0  0  0 ...
 4    1  1  0  1  0  0  0  0  0  0 ...
 5    1  0  0  0  1  0  0  0  0  0 ...
 6    1  1  1  0  0  1  0  0  0  0 ...
 7    1  0  0  0  0  0  1  0  0  0 ...
 8    1  1  0  1  0  0  0  1  0  0 ...
 9    1  0  1  0  0  0  0  0  1  0 ...
10    1  1  0  0  1  0  0  0  0  1 ...
...

(This matrix is closely related to the Redheffer matrix, see e.g. <cit.>, <cit.>.) Introducing

σ_0(n) := ∑_k=1^n α_nk

we see that σ_0(n) just counts the number of all divisors of a given n, including both unity and n itself. It is known (see e.g. <cit.>) that the inverse of matrix (<ref>) is:

β_nk = { μ(n/k)  if k | n
       { 0       if k ∤ n

where μ denotes the Möbius function:

μ(n) = {  0  if n has a squared prime factor
       { +1  if n is a square-free positive integer with an even number of prime factors
       { -1  if n is a square-free positive integer with an odd number of prime factors

n\k   1  2  3  4  5  6  7  8  9 10 ...
 1    1  0  0  0  0  0  0  0  0  0 ...
 2   -1  1  0  0  0  0  0  0  0  0 ...
 3   -1  0  1  0  0  0  0  0  0  0 ...
 4    0 -1  0  1  0  0  0  0  0  0 ...
 5   -1  0  0  0  1  0  0  0  0  0 ...
 6    1 -1 -1  0  0  1  0  0  0  0 ...
 7   -1  0  0  0  0  0  1  0  0  0 ...
 8    0  0  0 -1  0  0  0  1  0  0 ...
 9    0  0 -1  0  0  0  0  0  1  0 ...
10    1 -1  0  0 -1  0  0  0  0  1 ...
...

Note that the numbers in (<ref>), when summed in rows, give zero except for the first row, which stems from the following identity:

∑_d|n μ(d) = δ_n,1

Matrices (<ref>) and (<ref>) are visualized in Figure 1. Somewhat similar but purely qualitative results have been published in <cit.>.

Figure 1. Graphic distribution of divisors (<ref>) for n=1,2,...,50 as a square matrix (left panel). Each blue square denotes +1.
In the inverse matrix (<ref>) a blue square denotes +1 and a red square denotes -1 (right panel).

§ GENERAL CASE

The particular choice of the exponential function in (<ref>) is not crucial to our reasoning. Indeed, instead of this function we can take any regular function f(x), provided that it has all non-zero coefficients in its power series expansion

f(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + ...,   c_i ≠ 0 for i=1,2,3,...

Thus in general we have (up to an appropriate normalizing factor)

α_nk = d^n/dx^n f(x^k) |_x=0

For example, taking

f(x) = x/(1-x) = x + x^2 + x^3 + ...

we get:

α_nk = ∑_j=1^∞ (jk choose n) x^(jk-n) |_x=0

or simply

α_nk = ∑_j=1^∞ x^(jk-n)/(jk-n)! |_x=0

Taking

f(x) = log(1-x) = -x - x^2/2 - x^3/3 - ...

we get:

α_nk = (-1)^(n/k) (n/k) ∑_j=1^∞ [(-1)^j/j] (jk choose n) x^(jk-n) |_x=0

The general explicit formula for α_nk using an arbitrary function f satisfying (<ref>) is:

α_nk = (n/k)! [1/f^(n/k)(0)] ∑_j=0^∞ [f^(j)(0)/j!] (jk choose n) x^(jk-n) |_x=0

where f^(j)(0) denotes the j-th derivative of f with respect to x taken at x=0. (If n/k in (<ref>) is non-integer then the value of the fractional derivative f^(n/k)(0) is unimportant, since in this case the sum vanishes.)

The table below contains normalizing factors for α_nk, for several different choices of the function f(x), obtained using (<ref>).

f(x) = e^x:            α_nk = (n/k)! (1/n!) d^n/dx^n f(x^k) |_x=0
f(x) = ln(1-x):        α_nk = -(n/k) (1/n!) d^n/dx^n f(x^k) |_x=0
f(x) = x/(1-x):        α_nk = (1/n!) d^n/dx^n f(x^k) |_x=0
f(x) = √(1+x):         α_nk = [-(-2)^(n/k)/(2n/k-3)!!] (n/k)! (1/n!) d^n/dx^n f(x^k) |_x=0
f(x) = 1/√(1+x):       α_nk = [(-1)^(n/k) Γ(1/2)/Γ(n/k+1/2)] (n/k)! (1/n!) d^n/dx^n f(x^k) |_x=0
f(x) = (1+x)^(-3/2):   α_nk = [(-2)^(n/k)/(2n/k+1)!!] (n/k)! (1/n!) d^n/dx^n f(x^k) |_x=0
f(x) = W(x):           α_nk = [(-1)^(n/k-1) (n/k)!/(n/k)^(n/k-1)] (1/n!) d^n/dx^n f(x^k) |_x=0
f(x) = 1/(1-x-x^2):    α_nk = [1/F_(n/k+1)] (1/n!) d^n/dx^n f(x^k) |_x=0

(W(x) is the Lambert W-function and F_n in the last row denotes the n-th Fibonacci number.)

§ INTERPRETATION

Let us now explain in more detail how it all works.
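Before doing so, the matrix picture above is easy to verify numerically. A sketch (assuming NumPy; the trial-factorization Möbius routine is illustrative) builds the divisor matrix and its Möbius inverse and checks the row-sum count σ_0(n):

```python
import numpy as np

def divisor_matrix(N):
    # alpha_{nk} = 1 iff k divides n (lower triangular, determinant 1)
    return np.array([[1 if n % k == 0 else 0
                      for k in range(1, N + 1)]
                     for n in range(1, N + 1)])

def mobius(n):
    # Möbius function by trial factorization
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0            # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def mobius_matrix(N):
    # beta_{nk} = mu(n/k) if k | n, else 0
    return np.array([[mobius(n // k) if n % k == 0 else 0
                      for k in range(1, N + 1)]
                     for n in range(1, N + 1)])
```

Multiplying the two matrices yields the identity, which is just Möbius inversion in matrix form, and the row sums of the divisor matrix reproduce σ_0(n).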
The point is that all the formulas for α_nk presented so far encode, at least to some extent, the ancient algorithm known as the sieve of Eratosthenes. Indeed, consider as f(x) the function f(x) = x/(1-x) and let us temporarily restrict ourselves to the linear case f(x) ≈ x. According to the general formula (<ref>) we have

α_nk = (1/n!) d^n/dx^n f(x^k) |_x=0 = (1/n!) d^n/dx^n x^k |_x=0 = (k choose n) x^(k-n) |_x=0 = δ_k,n

and this produces a single line of ones on the diagonal n=k in the divisor matrix (<ref>) (cf. Figure 2 below). This is equivalent to the trivial statement that all integers are divisible both by one and by themselves. Let us further consider the more precise approximation f(x) ≈ x + x^2. We get from (<ref>) another sequence of ones on the line n=2k. This is equivalent to selecting all even integers n and adding to the divisor matrix (<ref>) their divisors n/2. Taking into account higher powers of x, we select all numbers n which are multiples of 3,4,5,... and this adds to the matrix further lines of divisors: n/3, n/4, n/5, respectively. Proceeding in the same way we finally arrive at the full expansion of f(x):

f(x) = x/(1-x) = ∑_j=1^∞ x^j

which produces the entire sequence of lines n=jk labelled by the parameter j=1,2,3,.... In this way we have selected and visualized all divisors of all integers. It is clear that there are certain well-defined numbers n (marked in bold in Figure 2) which have exactly two divisors, unity and themselves, i.e. the prime numbers 2,3,5,7,11,13,... At the same time we see the importance of the condition c_i ≠ 0 in (<ref>), since even a single coefficient c_i = 0 would cause a skipping of certain divisors. In view of this, the characteristic function for divisors may also be written in a very natural form as a sum over Kronecker deltas:

α_nk = ∑_j=1^n δ_jk,n

Note that combining (<ref>), (<ref>) and (<ref>) gives:

σ_0(n) := ∑_k=1^n α_nk = (1/n!) d^n/dx^n ∑_k=1^∞ x^k/(1-x^k) |_x=0

Hence

∑_k=1^∞ x^k/(1-x^k) = ∑_n=1^∞ σ_0(n) x^n

which is consistent with the theory of Lambert series (see e.g.
<cit.>), which is the generating function for the sequence σ_0(n), where σ_0(n) is the total number of divisors of a given integer n.

Figure 2. Distribution of divisors of integers computed from α_nk. This figure illustrates how various terms in the sum (<ref>) contribute to the whole pattern of divisors. Each term corresponds to a ray of divisors. Rows are labelled by consecutive integers n and columns are labelled by potential divisors k. Each colored disc means that the given k actually divides n; otherwise there is a small black circle. To better visualize the whole pattern, the discs are drawn in 3 different colors and lines connecting them are drawn. Of course, above the diagonal (k>n) there can't be any divisors.

§ CONCLUDING REMARKS

A few elementary comments at the end of this note. As we have seen, all divisors k of integers n lie on rays passing through the origin of the coordinate system in the (n,k) plane and are labelled by an integer parameter j=1,2,3,...:

n = jk

We have also seen that this simple condition has a natural interpretation, since j may be identified with the exponent of x^j in the expansion (<ref>). The key point is that these rays must pass through certain points of an integer lattice, and only then can a potential divisor be an actual divisor. For large n these rays typically get closer and closer to one another. Therefore we see qualitatively why it is so difficult to factorize large integers. Moreover, numerical experiments suggest that all divisors lie on countable families of parabolas passing through the origin (see Figures 3, 4 and 5 below). These parabolas are "quantized" in the sense that each family is characterized by two discrete parameters μ=1,2,3,... and ν=1,2,3,...
and inside any family the parabolas are labelled by another integer parameter i:

g_i^(μν)(k) = -(μ/ν) k^2 + (i/ν) k

Careful simulations using Mathematica revealed that the parameter i assumes equidistant values with a constant integer step

δ = (μ,ν)

starting from i = μ+ν, where (μ,ν) denotes the greatest common divisor, i.e. i = μ+ν, μ+ν+δ, μ+ν+2δ, ...

Figure 3. Various families of parabolas (<ref>) for μ=1,2,3 and ν=1,2,3. The step δ (<ref>) described in the main text is also indicated.

Figure 4. Family of parabolas (<ref>) for μ=1 and ν=1 (red), 2 (orange), 3 (green), and 4 (cyan) for n<100. For clarity of the plot the parameter i assumes only 50 consecutive values. Prime numbers among the n's are indicated by vertical lines.

Figure 5. Family of parabolas (<ref>) for μ=1 and ν=1 (red), 2 (yellow) and 3 (green) around n=740. For clarity of the plot the parameter i assumes only 5 consecutive values. Prime numbers among the n's are indicated by vertical lines.

As far as I am aware, the unexpected parabolas in the distribution of divisors have been independently noticed by Jeffrey Ventrella (see his popular book <cit.>, page 33), but with no quantitative considerations. Finally, it should be stressed that, unfortunately, the expressions presented in this note do not tell us much about the distribution of primes. They are not even very suitable for numerical calculations for large n, and therefore may be treated merely as a curiosity. Nevertheless, we have shown some unexpected relationships between number theory and calculus.

Acknowledgments. The author would like to thank Prof. Jeffrey Lagarias for his encouragement and several suggestions, and Prof. Andrzej Schinzel for several remarks. The results presented in this paper were inspired by experimenting with Wolfram Mathematica. All calculations were also checked using this powerful software.

References

[Inverse sequence] On-Line Encyclopedia of Integer Sequences, sequence number A054525.
[Apostol] Tom M.
Apostol, Modular Functions and Dirichlet Series in Number Theory, Springer-Verlag, 1976.
[Cox] David N. Cox, Visualizing the Sieve of Eratosthenes, Notices of the AMS, vol. 55, no. 5, May 2008, pp. 579-582.
[MillerRoss] Kenneth S. Miller and Bertram Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, John Wiley and Sons, 1993.
[Redheffer] Raymond M. Redheffer, Eine explizit lösbare Optimierungsaufgabe, Internat. Schriftenreihe Numer. Math., vol. 36, 1977.
[Trott] Michael Trott, The Mathematica GuideBook for Programming, Springer-Verlag, New York, 2004.
[Ventrella] Jeffrey J. Ventrella, Divisor Drips and Square Root Waves: Prime Numbers are the Holes in Complex Composite Number Patterns, Eyebrain Books, 2010 (book web site: www.divisorplot.com).
http://arxiv.org/abs/1702.07876v2
{ "authors": [ "Krzysztof Maślanka" ], "categories": [ "math.GM", "Primary 11A51, Secondary 26A06" ], "primary_category": "math.GM", "published": "20170225110024", "title": "Analytical Representations of Divisors of Integers" }
Comb-like Turing patterns embedded in Hopf oscillations: Spatially localized states outside the 2:1 frequency locked region

Arik Yochelis

December 30, 2023

To date, the tightest upper and lower bounds for the active learning of general concept classes have been in terms of a parameter of the learning problem called the splitting index. We provide, for the first time, an efficient algorithm that is able to realize this upper bound, and we empirically demonstrate its good performance.

§ INTRODUCTION

In many situations where a classifier is to be learned, it is easy to collect unlabeled data but costly to obtain labels. This has motivated the pool-based active learning model, in which a learner has access to a collection of unlabeled data points and is allowed to ask for individual labels in an adaptive manner. The hope is that choosing these queries intelligently will rapidly yield a low-error classifier, much more quickly than with random querying. A central focus of active learning is developing efficient querying strategies and understanding their label complexity.

Over the past decade or two, there has been substantial progress in developing such rigorously justified active learning schemes for general concept classes. For the most part, these schemes can be described as mellow: rather than focusing upon maximally informative points, they query any point whose label cannot reasonably be inferred from the information received so far. It is of interest to develop more aggressive strategies with better label complexity. An exception to this general trend is the aggressive strategy of <cit.>, whose label complexity is known to be optimal in its dependence on a key parameter called the splitting index. However, this strategy has been primarily of theoretical interest because it is difficult to implement algorithmically.
In this paper, we introduce a variant of the methodology that yields efficient algorithms. We show that it admits roughly the same label complexity bounds while also having promising experimental performance. As with the original splitting index result, we operate in the realizable setting, where data can be perfectly classified by some function h^* in the hypothesis class ℋ. At any given time during the active learning process, the remaining candidates, that is, the elements of ℋ consistent with the data so far, are called the version space. The goal of aggressive active learners is typically to pick queries that are likely to shrink this version space rapidly. But what is the right notion of size? Dasgupta <cit.> pointed out that the diameter of the version space is what matters, where the distance between two classifiers is taken to be the fraction of points on which they make different predictions. Unfortunately, the diameter is a difficult measure to work with because it cannot, in general, be decreased at a steady rate. Thus the earlier work used a procedure that has quantifiable label complexity but is not conducive to implementation. We take a fresh perspective on this earlier result. We start by suggesting an alternative, but closely related, notion of the size of a version space: the average pairwise distance between hypotheses in the version space, with respect to some underlying probability distribution π on ℋ. This distribution π can be arbitrary (that is, there is no requirement that the target h^* be chosen from it), but should be chosen so that it is easy to sample from. When ℋ consists of linear separators, for instance, a good choice would be a log-concave density, such as a Gaussian. At any given time, the next query x is chosen roughly as follows:

* Sample a collection of classifiers h_1, h_2, …, h_m from π restricted to the current version space V.
* Compute the distances between them; this can be done using just the unlabeled points.
* Any candidate query x partitions the classifiers {h_i} into two groups: those that assign it a + label (call these V_x^+) and those that assign it a - label (call these V_x^-). Estimate the average diameter after labeling x by the sum of the distances between classifiers h_i within V_x^+, or those within V_x^-, whichever is larger.
* Out of the pool of unlabeled data, pick the x for which this diameter estimate is smallest.

This is repeated until the version space has small enough average diameter that a random sample from it is very likely to have error less than a user-specified threshold ϵ. We show how all these steps can be achieved efficiently, as long as there is a sampler for π.

Dasgupta <cit.> pointed out that the label complexity of active learning depends on the underlying distribution, the amount of unlabeled data (since more data means greater potential for highly informative points), and also the target classifier h^*. That paper identifies a parameter called the splitting index ρ that captures the relevant geometry, and gives upper bounds on label complexity that are proportional to 1/ρ, as well as showing that this dependence is inevitable. For our modified notion of diameter, a different averaged splitting index is needed. However, we show that it can be bounded by the original splitting index, with an extra multiplicative factor of log(1/ϵ); thus all previously obtained label complexity results translate immediately to our new algorithm.

§ RELATED WORK

The theory of active learning has developed along several fronts. One of these is nonparametric active learning, where the learner starts with a pool of unlabeled points, adaptively queries a few of them, and then fills in the remaining labels. The goal is to do this with as few errors as possible. (In particular, the learner does not return a classifier from some predefined parametrized class.)
One scheme begins by building a neighborhood graph on the unlabeled data and propagating queried labels along the edges of this graph <cit.>. Another starts with a hierarchical clustering of the data and moves down the tree, sampling at random until it finds clusters that are relatively pure in their labels <cit.>. The label complexity of such methods has typically been given in terms of smoothness properties of the underlying data distribution <cit.>. Another line of work has focused on active learning of linear separators, by querying points close to the current guess at the decision boundary <cit.>. Such algorithms are close in spirit to those used in practice, but their analysis to date has required fairly strong assumptions to the effect that the underlying distribution on the unlabeled points is log-concave. Interestingly, regret guarantees for online algorithms of this sort can be shown under far weaker conditions <cit.>. The third category of results, to which the present paper belongs, considers active learning strategies for general concept classes ℋ. Some of these schemes <cit.> are fairly mellow in the sense described earlier, using generalization bounds to gauge which labels can be inferred from those obtained so far. The label complexity of these methods can be bounded in terms of a quantity known as the disagreement coefficient <cit.>. In the realizable case, the canonical such algorithm is that of <cit.>, henceforth referred to as CAL. Other methods use a prior distribution π over the hypothesis class, sometimes assuming that the target classifier is a random draw from this prior. These methods typically aim to shrink the mass of the version space under π, either greedily and explicitly <cit.> or implicitly <cit.>. Perhaps the most widely used of these methods is the latter, query-by-committee, henceforth QBC. As mentioned earlier, shrinking π-mass is not an optimal strategy if low misclassification error is the ultimate goal.
In particular, what matters is not the prior mass of the remaining version space, but rather how different these candidate classifiers are from each other. This motivates using the diameter of the version space as a yardstick, which was first proposed in <cit.> and is taken up again here.

§ PRELIMINARIES

Consider a binary hypothesis class, a data space, and a data distribution. For mathematical convenience, we will restrict ourselves to finite hypothesis classes. (We can do this without loss of generality when the hypothesis class has finite VC dimension, since we only use the predictions of hypotheses on a pool of unlabeled points; however, we do not spell out the details of this reduction here.) The hypothesis distance induced by the data distribution is the pseudometric d(h, h') := Pr_x(h(x) ≠ h'(x)), where x is drawn from the data distribution. Given a point x and a subset V of the hypothesis class, denote V_x^+ = { h ∈ V : h(x) = 1 } and V_x^- = V ∖ V_x^+. Given a sequence of data points x_1, …, x_n and a target hypothesis h^*, the induced version space is the set of hypotheses that are consistent with the target hypothesis on the sequence, i.e. { h : h(x_i) = h^*(x_i) for all i = 1, …, n }.

§.§ Diameter and the Splitting Index

The diameter of a set of hypotheses V is the maximal distance between any two hypotheses in V, i.e. diam(V) := max_h, h' ∈ V d(h, h'). Without any prior information, any hypothesis in the version space could be the target. Thus the worst-case error of any hypothesis in the version space is the diameter of the version space. The splitting index roughly characterizes the number of queries required for an active learning algorithm to reduce the diameter of the version space below ϵ.

While reducing the diameter of a version space V, we will sometimes identify pairs of hypotheses h, h' ∈ V that are far apart and therefore need to be separated. We will refer to {h, h'} as an edge.
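For concreteness, the empirical analogues of the pseudometric d and the diameter can be sketched as follows (an illustrative Python sketch, not part of the original algorithm specification; hypotheses are modeled as callables returning ±1 labels, and the distance is estimated on a pool of unlabeled points):

```python
def distance(h, hp, pool):
    """Empirical hypothesis distance: the fraction of pool points
    on which the two hypotheses disagree."""
    return sum(h(x) != hp(x) for x in pool) / len(pool)

def diameter(V, pool):
    """Worst-case diameter: the maximal pairwise distance in V."""
    return max(distance(h, hp, pool) for h in V for hp in V)
```

Note that computing the exact diameter already costs a number of distance evaluations quadratic in |V|, which is one motivation for the sampling-based estimators developed below.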
Given a set of edges E = {{h_1, h'_1}, …, {h_n, h'_n}}, we say a data point x ρ-splits E if querying x separates at least a ρ fraction of the pairs, that is, if

max{ |E_x^+|, |E_x^-| } ≤ (1-ρ)|E|,

where E_x^+ consists of the edges of E with both endpoints in V_x^+, and similarly for E_x^-. When attempting to get accuracy ϵ > 0, we only need to eliminate edges of length greater than ϵ. Define

E_ϵ = {{ h, h' } ∈ E : d(h, h') > ϵ}.

The splitting index of a set V is a tuple (ρ, ϵ, τ) such that for all finite edge-sets E over V,

Pr_x( x ρ-splits E_ϵ ) ≥ τ.

The following theorem, due to Dasgupta <cit.>, bounds the sample complexity of active learning in terms of the splitting index. The Õ notation hides polylogarithmic factors in d, ρ, τ, log(1/ϵ), and the failure probability δ.

Theorem. Suppose we have a hypothesis class with splitting index (ρ, ϵ, τ). Then to learn a hypothesis with error ϵ, (a) any active learning algorithm with ≤ 1/τ unlabeled samples must request at least 1/ρ labels, and (b) if the hypothesis class has VC dimension d, there is an active learning algorithm that draws Õ(d/(ρτ) log^2(1/ϵ)) unlabeled data points and requests Õ((d/ρ) log^2(1/ϵ)) labels.

Unfortunately, the only known algorithm satisfying (b) above is intractable for all but the simplest hypothesis classes: it constructs an ϵ-covering of the hypothesis space and queries points which whittle away at the diameter of this covering. To overcome this intractability, we consider a slightly more benign setting in which we have a samplable prior distribution π over our hypothesis space.

§.§ An Average Notion of Diameter

With a prior distribution, it makes sense to shift away from the worst case to the average case. We define the average diameter of a subset V as the expected distance between two hypotheses in V randomly drawn from π, i.e.

Φ(V) := E_h, h' ∼π|_V[d(h, h')],

where π|_V is the conditional distribution induced by restricting π to V, that is, π|_V(h) = π(h)/π(V) for h ∈ V.
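For a finite hypothesis class, Φ(V) can be computed exactly from the prior weights; the following Python sketch (with π represented as a hypothetical dict of weights and d as any pairwise distance function) mirrors the definition:

```python
def average_diameter(V, pi, d):
    """Phi(V): expected distance between two independent draws from
    the prior pi restricted to V and renormalized by pi(V)."""
    mass = sum(pi[h] for h in V)          # pi(V)
    return sum(pi[h] * pi[hp] * d(h, hp)
               for h in V for hp in V) / mass ** 2
```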
Intuitively, a version space with very small average diameter ought to put high weight on hypotheses that are close to the true hypothesis. Indeed, given a version space V with h^* ∈ V, the following lemma shows that if Φ(V) is small enough, then a low-error hypothesis can be found by two popular heuristics: random sampling and MAP estimation.

Lemma. Suppose V contains h^*. Pick ϵ > 0. (a) (Random sampling) If Φ(V) ≤ ϵ π|_V(h^*) then E_h ∼π|_V[d(h^*, h)] ≤ ϵ. (b) (MAP estimation) Write p_map = max_h ∈ V π|_V(h). Pick 0 < α < p_map. If Φ(V) ≤ 2ϵ( min{π|_V(h^*), p_map - α} )^2, then d(h^*, h) ≤ ϵ for any h with π|_V(h) ≥ p_map - α.

Proof. Part (a) follows from

Φ(V) = E_h,h' ∼π|_V[d(h,h')] ≥ π|_V(h^*) E_h ∼π|_V[d(h^*, h)].

For (b), take δ = min(π|_V(h^*), p_map - α) and define V_π, δ = { h ∈ V : π|_V(h) ≥ δ}. Note that V_π,δ contains h^* as well as any h ∈ V with π|_V(h) ≥ p_map - α. We claim diam(V_π, δ) is at most ϵ. Suppose not. Then there exist h_1, h_2 ∈ V_π, δ satisfying d(h_1, h_2) > ϵ, implying

Φ(V) = E_h,h' ∼π|_V[d(h,h')] ≥ 2 · π|_V(h_1) · π|_V(h_2) · d(h_1, h_2) > 2δ^2 ϵ.

But this contradicts our assumption on Φ(V). Since both h, h^* ∈ V_π, δ, we have (b).

§.§ An Average Notion of Splitting

We now turn to defining an average notion of splitting. A data point x ρ-average splits V if

max{ π(V_x^+)^2/π(V)^2 Φ(V_x^+), π(V_x^-)^2/π(V)^2 Φ(V_x^-) } ≤ (1-ρ) Φ(V).

And we say a set S has average splitting index (ρ, ϵ, τ) if for any subset V ⊂ S such that Φ(V) > ϵ,

Pr_x( x ρ-average splits V ) ≥ τ.

Intuitively, average splitting refers to the ability to significantly decrease the potential function

π(V)^2 Φ(V) = E_h,h' ∼π[1(h, h' ∈ V) d(h,h')]

with a single query. While this potential function may seem strange at first glance, it is closely related to the original splitting index.
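For a finite class, whether a given x ρ-average splits V can be checked directly from this definition; a Python sketch (hypotheses as callables with ±1 labels, prior weights in a dict — both conventions introduced here for illustration):

```python
def avg_diam(V, pi, d):
    """Exact average diameter of V under the prior pi."""
    m = sum(pi[h] for h in V)
    if m == 0:
        return 0.0
    return sum(pi[h] * pi[hp] * d(h, hp) for h in V for hp in V) / m ** 2

def average_splits(x, V, pi, d, rho):
    """True iff x satisfies the rho-average-split inequality:
    the larger of pi(V_x^+/-)^2 / pi(V)^2 * Phi(V_x^+/-)
    is at most (1 - rho) * Phi(V)."""
    mass = sum(pi[h] for h in V)
    Vp = [h for h in V if h(x) == 1]
    Vm = [h for h in V if h(x) != 1]

    def side(W):
        w = sum(pi[h] for h in W)
        return (w / mass) ** 2 * avg_diam(W, pi, d)

    return max(side(Vp), side(Vm)) <= (1 - rho) * avg_diam(V, pi, d)
```

Whether a point qualifies depends on ρ: a query that removes substantial potential may ρ-average split V for a small ρ yet fail for one close to 1.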
The following lemma, whose proof is deferred to Section <ref>, shows that the splitting index bounds the average splitting index for any hypothesis class.

Lemma. Let π be a probability measure over a hypothesis class. If the class has splitting index (ρ, ϵ, τ), then it has average splitting index (ρ/(4⌈log(1/ϵ)⌉), 2ϵ, τ).

Dasgupta <cit.> derived the splitting indices for several hypothesis classes, including intervals and homogeneous linear separators. Lemma <ref> implies average splitting indices within a log(1/ϵ) factor in these settings.

Moreover, given access to samples from π|_V, we can easily estimate the quantities appearing in the definition of average splitting. For an edge sequence E = ({h_1, h'_1}, …, {h_n, h'_n}), define

ψ(E) := ∑_i=1^n d(h_i, h'_i).

When h_i, h'_i are i.i.d. draws from π|_V for all i = 1, …, n, which we denote E ∼ (π|_V)^2 × n, the random variables (1/n)ψ(E), (1/n)ψ(E_x^-), and (1/n)ψ(E_x^+) are unbiased estimators of the quantities appearing in the definition of average splitting.

Given E ∼ (π|_V)^2 × n, we have
* E[ (1/n)ψ(E) ] = Φ(V) and
* E[ (1/n)ψ(E_x^+) ] = (π(V_x^+)^2/π(V)^2) Φ(V_x^+) for any x. Similarly for E_x^- and V_x^-.

Proof. From definitions and linearity of expectation, it is easy to observe that E[ψ(E)] = nΦ(V). By the independence of h_i, h'_i, we additionally have

E[ (1/n)ψ(E_x^+) ] = (1/n) E[ ∑_{h_i, h_i'}∈ E_x^+ d(h_i, h'_i) ]
= (1/n) E[ ∑_{h_i, h_i'}∈ E 1[h_i ∈ V_x^+] 1[h'_i ∈ V_x^+] d(h_i, h'_i) ]
= (1/n) ∑_{h_i, h_i'}∈ E (π(V_x^+)/π(V))^2 E[ d(h_i, h'_i) | h_i, h'_i ∈ V_x^+ ]
= (π(V_x^+)/π(V))^2 Φ(V_x^+).

Remark: It is tempting to define average splitting in terms of the average diameter as

max{Φ(V_x^+), Φ(V_x^-)} ≤ (1-ρ) Φ(V).

However, this definition does not satisfy a nice relationship with the splitting index. Indeed, there exist hypothesis classes V for which there are many points which 1/4-split E for any edge-set E over V but for which every x satisfies max{Φ(V_x^+), Φ(V_x^-)} ≈ Φ(V).
This observation is formally proven in the appendix.

§ AN AVERAGE SPLITTING INDEX ALGORITHM

Suppose we are given a version space V with average splitting index (ρ, ϵ, τ). If we draw Õ(1/τ) points from the data distribution then, with high probability, one of these will ρ-average split V. Querying that point will result in a version space V' with significantly smaller potential π(V')^2 Φ(V'). If we knew the value ρ a priori, then Lemma <ref> combined with standard concentration bounds <cit.> would give us a relatively straightforward procedure to find a good query point:

* Draw E' ∼ (π|_V)^2 × M and compute the empirical estimate Φ(V) = (1/M)ψ(E').
* Draw E ∼ (π|_V)^2 × N for N depending on ρ and Φ.
* For suitable M and N, it will be the case that with high probability, for some x, (1/N) max{ψ(E_x^+), ψ(E_x^-)} ≈ (1-ρ)Φ.

Querying that point will decrease the potential. However, we typically would not know the average splitting index ahead of time. Moreover, it is possible that the average splitting index may change from one version space to the next. In the next section, we describe a query selection procedure that adapts to the splittability of the current version space.

§.§ Finding a Good Query Point

Algorithm <ref>, which we term select, is our query selection procedure. It takes as input a sequence of data points x_1, …, x_m, at least one of which ρ-average splits the current version space, and with high probability finds a data point that ρ/8-average splits the version space. select proceeds by positing an optimistic estimate of ρ, which we denote ρ_t, and successively halving it until we are confident that we have found a point that ρ_t-average splits the version space. In order for this algorithm to succeed, we need to choose n_t and m_t such that with high probability (1) Φ_t is an accurate estimate of Φ(V) and (2) our halting condition will be true if ρ_t is within a constant factor of ρ and false otherwise.
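In outline, the procedure can be sketched as follows (an illustrative Python sketch: the sample-size schedule n_t is a placeholder, not the exact m_t, n_t prescribed by the lemma below, and `sample_pair` is a hypothetical stand-in for a sampler of hypothesis pairs from π|_V):

```python
def select(xs, sample_pair, d, rounds=8, cap=20000):
    """Posit an optimistic splitting estimate rho_t and halve it until
    some candidate x passes the halting check
    (1/n) * max{psi(E_x^+), psi(E_x^-)} <= (1 - rho_t) * Phi_t.
    Hypotheses are callables returning +1 or -1."""
    rho_t = 1.0
    for _ in range(rounds):
        rho_t /= 2.0
        n_t = min(int(64 / rho_t ** 2), cap)   # placeholder schedule
        # Empirical estimate Phi_t of the average diameter.
        Ep = [sample_pair() for _ in range(n_t)]
        phi_t = sum(d(h, hp) for h, hp in Ep) / n_t
        # Edge sample used to score the candidate queries.
        E = [sample_pair() for _ in range(n_t)]
        for x in xs:
            psi_p = sum(d(h, hp) for h, hp in E
                        if h(x) == 1 and hp(x) == 1)
            psi_m = sum(d(h, hp) for h, hp in E
                        if h(x) == -1 and hp(x) == -1)
            if max(psi_p, psi_m) / n_t <= (1 - rho_t) * phi_t:
                return x
    return None
```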
The following lemma, whose proof is in the appendix, provides such choices for n_t and m_t.

Lemma. Let ρ, ϵ, δ_0 > 0 be given. Suppose that version space V satisfies Φ(V) > ϵ. In select, fix a round t and a data point x that exactly ρ-average splits V (that is, max{π|_V(V_x^+)^2 Φ(V_x^+), π|_V(V_x^-)^2 Φ(V_x^-)} = (1-ρ)Φ(V)). If

m_t ≥ (48/(ρ_t^2 ϵ)) log(4/δ_0) and n_t ≥ max{32/(ρ_t^2 Φ_t), 40/Φ_t^2} log(4/δ_0),

then with probability 1-δ_0, (a) Φ_t ≥ (1-ρ_t/4)Φ(V); (b) if ρ ≤ ρ_t/2, then (1/n_t) max{ψ(E_x^+), ψ(E_x^-)} > (1-ρ_t)Φ_t; and (c) if ρ ≥ 2ρ_t, then (1/n_t) max{ψ(E_x^+), ψ(E_x^-)} ≤ (1-ρ_t)Φ_t.

Given the above lemma, we can establish a bound on the number of rounds and the total number of hypotheses select needs to find a data point that ρ/8-average splits the version space.

Theorem. Suppose that select is called with a version space V with Φ(V) ≥ ϵ and a collection of points x_1, …, x_m such that at least one of the x_i ρ-average splits V. If δ_0 ≤ δ/(2m(2 + log(1/ρ))), then with probability at least 1-δ, select returns a point x_i that (ρ/8)-average splits V, finishing in less than ⌈log(1/ρ)⌉ + 1 rounds and sampling O( (1/(ϵρ^2) + log(1/ρ)/Φ(V)^2) log(1/δ_0) ) hypotheses in total.

Remark 1: It is possible to modify select to find a point x_i that (cρ)-average splits V for any constant c < 1 while only having to draw a constant factor more hypotheses in total. First note that by halving ρ_t at each step, we immediately give up a factor of two in our approximation. This can be made smaller by taking narrower steps. Additionally, with a constant factor increase in m_t and n_t, the approximation ratios in Lemma <ref> can be set to any constant.

Remark 2: At first glance, it appears that select requires us to know ρ in order to calculate δ_0. However, a crude lower bound on ρ suffices. Such a bound can always be found in terms of ϵ. This is because any version space is (ϵ/2, ϵ, ϵ/2)-splittable <cit.>.
By Lemma <ref>, so long as τ is less than ϵ/4, we can substitute ϵ/(8⌈log(2/ϵ)⌉) for ρ when we compute δ_0.

Proof. Let T := ⌈log(1/ρ)⌉ + 1. By Lemma <ref>, we know that with probability 1-δ/2, in rounds t = 1, …, T we never return a point which does worse than ρ_t/2-average splitting V. Moreover, in the T-th round, it will be the case that ρ/4 ≤ ρ_T ≤ ρ/2, and therefore, with probability 1-δ/2, we will select a point which does no worse than ρ_T/2-average split V, which in turn does no worse than ρ/8-average split V.

Note that we draw m_t + n_t hypotheses at each round. By Lemma <ref>, for each round Φ_t ≥ 3Φ(V)/4 ≥ 3ϵ/4. Thus

# of hypotheses drawn = ∑_t=1^T (48/(ρ_t^2 ϵ) + 32/(ρ_t^2 Φ_t) + 40/Φ_t^2) log(4/δ_0) ≤ ∑_t=1^T (96/(ϵρ_t^2) + 72/Φ(V)^2) log(4/δ_0).

Given ρ_t = 1/2^t and T ≤ 2 + log(1/ρ), we have

∑_t=1^T 1/ρ_t^2 = ∑_t=1^T 2^2t ≤ (∑_t=1^T 2^t)^2 ≤ (2^(2 + log(1/ρ)))^2 = 16/ρ^2.

Plugging in δ_0 ≤ δ/(2m(2 + log(1/ρ))), we recover the theorem statement.

§.§ Active Learning Strategy

Using the select procedure as a subroutine, Algorithm <ref>, henceforth DBAL for Diameter-based Active Learning, is our active learning strategy. Given a hypothesis class with average splitting index (ρ, ϵ/2, τ), DBAL queries data points provided by select until it is confident that Φ(V) < ϵ.

Denote by V_t the version space in the t-th round of DBAL. The following lemma, which is proven in the appendix, demonstrates that the halting condition (that is, ψ(E) < 3ϵn/4, where E consists of n pairs sampled from (π|_V)^2) guarantees that with high probability DBAL stops when Φ(V_t) is small.

Lemma. The following holds for DBAL: (a) Suppose that Φ(V_t) > ϵ for all t = 1, 2, …, K. Then the probability that the termination condition is ever true for any of those rounds is bounded above by K exp(-ϵn/32). (b) Suppose that Φ(V_t) ≤ ϵ/2 for some t = 1, 2, …, K.
Then the probability that the termination condition is not true in that round is bounded above by K exp(-ϵn/48).

Given the guarantees on the select procedure in Theorem <ref> and on the termination condition provided by Lemma <ref>, we get the following theorem.

Theorem. Suppose that the hypothesis class has average splitting index (ρ, ϵ/2, τ). Then DBAL returns a version space V satisfying Φ(V) ≤ ϵ with probability at least 1-δ while using the following resources: (a) K ≤ (8/ρ)(log(2/ϵ) + 2log(1/π(h^*))) rounds, with one label per round, (b) m ≤ (1/τ) log(2K/δ) unlabeled data points sampled per round, and (c) n ≤ O( (1/(ϵρ^2) + log(1/ρ)/ϵ^2)(log(mK/δ) + log log(1/ϵ)) ) hypotheses sampled per round.

Proof. From the definition of the average splitting index, if we draw m = (1/τ) log(2K/δ) unlabeled points per round, then with probability 1-δ/2, each of the first K rounds will have at least one data point that ρ-average splits the current version space. In each such round, if the version space has average diameter at least ϵ/2, then with probability 1-δ/4, select will return a data point that ρ/8-average splits the current version space while sampling no more than n = O( (1/(ϵρ^2) + (1/ϵ^2)log(1/ρ)) log(mK log(1/ϵ)/δ) ) hypotheses per round, by Theorem <ref>. By Lemma <ref>, if the termination check uses n' = O( (1/ϵ) log(1/δ) ) hypotheses per round, then with probability 1-δ/4, in the first K rounds the termination condition will never be true when the current version space has average diameter greater than ϵ and will certainly be true if the current version space has average diameter less than ϵ/2.

Thus it suffices to bound the number of rounds in which we can ρ/8-average split the version space before encountering a version space with average diameter less than ϵ/2. Since the version space is always consistent with the true hypothesis h^*, we will always have π(V_t) ≥ π(h^*).
After K = (8/ρ)(log(2/ϵ) + 2log(1/π(h^*))) rounds of ρ/8-average splitting, we have

π(h^*)^2 Φ(V_K) ≤ π(V_K)^2 Φ(V_K) ≤ (1-ρ/8)^K π(V_0)^2 Φ(V_0) ≤ π(h^*)^2 ϵ/2,

where we have used the fact that π(V)^2 Φ(V) ≤ 1 for any set V. Thus in the first K rounds, we must terminate with a version space with average diameter less than ϵ.

§ PROOF OF LEMMA <REF>

In this section, we give the proof of the following relationship between the original splitting index and our average splitting index.

The first step in proving Lemma <ref> is to relate the splitting index to our estimator ψ(·). Intuitively, splittability says that for any set of large edges there are many data points which remove a significant fraction of them. One may suspect this should imply that if a set of edges is large on average, then there should be many data points which remove a significant fraction of their weight. The following lemma confirms this suspicion.

Lemma. Suppose that V has splitting index (ρ, ϵ, τ), and say E = ({h_1, h_1'}, …, {h_n, h_n'}) is a sequence of hypothesis pairs from V satisfying (1/n)ψ(E) > 2ϵ. Then for x drawn from the data distribution, we have with probability at least τ,

max{ψ(E_x^+), ψ(E_x^-)} ≤ (1 - ρ/(4⌈log(1/ϵ)⌉)) ψ(E).

Proof. Consider partitioning E as

E_0 = {{ h, h' } ∈ E : d(h,h') < ϵ} and
E_k = {{ h, h' } ∈ E : d(h,h') ∈ [2^(k-1)ϵ, 2^k ϵ)}

for k = 1, …, K with K = ⌈log(1/ϵ)⌉. Then E_0, …, E_K are all disjoint and their union is E. Define E_1:K = ∪_k=1^K E_k. We first claim that ψ(E_1:K) > ψ(E_0). This follows from the observation that because ψ(E) ≥ 2nϵ and each edge in E_0 has length less than ϵ, we must have

ψ(E_1:K) = ψ(E) - ψ(E_0) > 2nϵ - nϵ > ψ(E_0).

Next, observe that because each edge {h, h'} ∈ E_k with k ≥ 1 satisfies d(h,h') ∈ [2^(k-1)ϵ, 2^k ϵ), we have

ψ(E_1:K) = ∑_k=1^K ∑_{h, h'}∈ E_k d(h,h') ≤ ∑_k=1^K 2^k ϵ |E_k|.

Since there are only K summands on the right, at least one of these must be larger than ψ(E_1:K)/K. Let k denote that index and let x be a point which ρ-splits E_k.
Then we have

ψ((E_1:K)^+_x) ≤ ψ(E_1:K) - ψ(E_k ∖ (E_k)_x^+) ≤ ψ(E_1:K) - ρ 2^(k-1) ϵ |E_k| ≤ (1 - ρ/2K) ψ(E_1:K).

Since ψ(E_1:K) ≥ ψ(E_0), we have

ψ(E^+_x) ≤ ψ(E_0) + (1 - ρ/2K) ψ(E_1:K) ≤ (1 - ρ/4K) ψ(E).

Symmetric arguments show the same holds for E^-_x. Finally, by the definition of splitting, the probability of drawing a point x which ρ-splits E_k is at least τ, giving us the lemma.

With Lemma <ref> in hand, we are now ready to prove Lemma <ref>.

Proof. Let V be such that Φ(V) > 2ϵ. Suppose that we draw n edges E i.i.d. from π|_V and draw a data point x from the data distribution. Then Hoeffding's inequality <cit.>, combined with Lemma <ref>, tells us that there exist sequences ϵ_n, δ_n ↘ 0 such that with probability at least 1-3δ_n, the following hold simultaneously:

* Φ(V) - ϵ_n ≤ (1/n)ψ(E) ≤ Φ(V) + ϵ_n,
* (1/n)ψ(E_x^+) ≥ (π(V_x^+)^2/π(V)^2) Φ(V_x^+) - ϵ_n, and
* (1/n)ψ(E_x^-) ≥ (π(V_x^-)^2/π(V)^2) Φ(V_x^-) - ϵ_n.

For ϵ_n small enough, we have that Φ(V) - ϵ_n > 2ϵ. Combining the above with Lemma <ref>, we have with probability at least τ - 3δ_n,

max{ (π(V_x^+)^2/π(V)^2) Φ(V_x^+), (π(V_x^-)^2/π(V)^2) Φ(V_x^-) } - ϵ_n ≤ (1/n) max{ψ(E_x^+), ψ(E_x^-)} ≤ (1 - ρ/(4⌈log(1/ϵ)⌉)) ψ(E)/n ≤ (1 - ρ/(4⌈log(1/ϵ)⌉)) (Φ(V) + ϵ_n).

By taking n → ∞, we have ϵ_n, δ_n ↘ 0, giving us the lemma.

§ SIMULATIONS

We compared DBAL against the baseline passive learner as well as two other generic active learning strategies: CAL and QBC. CAL proceeds by randomly sampling a data point and querying it if its label cannot be inferred from previously queried data points. QBC uses a prior distribution π and maintains a version space V. Given a randomly sampled data point x, QBC samples two hypotheses h, h' ∼ π|_V and queries x if h(x) ≠ h'(x).

We tested on two hypothesis classes: homogeneous, or through-the-origin, linear separators and k-sparse monotone disjunctions. In each of our simulations, we drew our target h^* from the prior distribution. After each query, we estimated the average diameter of the version space.
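The two generic baseline query rules can be sketched as follows (an illustrative Python sketch; the version-space sampler is abstracted into a `sample_two` callable, a name introduced here for illustration):

```python
def cal_queries(x, version_space):
    """CAL rule: query x iff its label cannot be inferred, i.e. iff
    hypotheses in the version space disagree on x."""
    labels = {h(x) for h in version_space}
    return len(labels) > 1

def qbc_queries(x, sample_two):
    """QBC rule: draw two hypotheses from the prior restricted to the
    version space and query x iff they disagree."""
    h, hp = sample_two()
    return h(x) != hp(x)
```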
We repeated each simulation several times and plotted the average performance of each algorithm.

Homogeneous linear separators. The class of d-dimensional homogeneous linear separators can be identified with elements of the d-dimensional unit sphere. That is, a hypothesis h ∈ 𝒮^d-1 acts on a data point x ∈ ℝ^d via the sign of their inner product: h(x) := sign(⟨ h, x ⟩). In our simulations, both the prior distribution and the data distribution are uniform over the unit sphere. Although there is no known method to exactly sample uniformly from the version space, Gilad-Bachrach et al. <cit.> demonstrated that using samples generated by the hit-and-run Markov chain works well in practice. We adopted this approach for our sampling tasks. Figure <ref> shows the results of our simulations on homogeneous linear separators.

Sparse monotone disjunctions. A k-sparse monotone disjunction is a disjunction of k positive literals. Given a Boolean vector x ∈ {0, 1}^n, a monotone disjunction h classifies x as positive if and only if x_i = 1 for some positive literal i in h. In our simulations, each data point is a vector whose coordinates are i.i.d. Bernoulli random variables with parameter p. The prior distribution is uniform over all k-sparse monotone disjunctions. When k is constant, it is possible to sample from the prior restricted to the version space in expected polynomial time using rejection sampling. The results of our simulations on k-sparse monotone disjunctions are in Figure <ref>.

§ ACKNOWLEDGMENTS

The authors are grateful to the NSF for support under grants IIS-1162581 and DGE-1144086. Part of this work was done at the Simons Institute for Theoretical Computer Science, Berkeley, as part of a program on the foundations of machine learning. CT additionally thanks Daniel Hsu and Stefanos Poulis for helpful discussions.
§ APPENDIX: PROOF DETAILS

§.§ Remark from Section <ref>

In Section <ref>, the remark after the definition of average splitting stated that there exist hypothesis classes V for which there are many points which 1/4-split E for any edge-set E over V, but for which every x satisfies max{Φ(V_x^+), Φ(V_x^-)} ≈ Φ(V). Here we formally prove this statement.

Consider the hypothesis class of homogeneous linear separators and let V = {e_1, …, e_n}, where e_k is the k-th unit coordinate vector. Let the data distribution be uniform over the n-sphere and the prior distribution π be uniform over V. As a subset of the homogeneous linear separators, V has splitting index (1/4, ϵ, Θ(ϵ)) <cit.>. On the other hand, for any i ≠ j, d(h_i, h_j) = 1/2. This implies that

Φ(V) = Pr(h ≠ h') E_h,h'[d(h,h') | h ≠ h'] = (n-1)/(2n).

Moreover, any query x eliminates at most half the hypotheses in V in the worst case. Therefore, for all x,

max{Φ(V_x^+), Φ(V_x^-)} ≥ (n/2 - 1)/(2(n/2)) = ((n-2)/(n-1)) Φ(V).

§.§ Proofs of Lemma <ref> and Lemma <ref>

The proofs in this section rely crucially on two concentration inequalities. The first is due to Hoeffding <cit.>.

Let X_1, …, X_n be i.i.d. random variables taking values in [0,1], and let X = ∑ X_i and μ = E[X]. Then for t > 0,

Pr(X - μ ≥ t) ≤ exp(-2t^2/n).

Our other tool will be the following multiplicative Chernoff-Hoeffding bound due to Angluin and Valiant <cit.>. Let X_1, …, X_n be i.i.d. random variables taking values in [0,1], and let X = ∑ X_i and μ = E[X]. Then for 0 < β < 1, (i) Pr(X ≤ (1-β)μ) ≤ exp(-β^2 μ/2) and (ii) Pr(X ≥ (1+β)μ) ≤ exp(-β^2 μ/3).

We now turn to the proof of Lemma <ref>.

Proof. In round t, let ρ := ρ_t, Φ := Φ_t, m := m_t, and n := n_t. For (a), recall Φ = (1/m)ψ(E') for E' ∼ (π|_V)^2 × m. By Lemma <ref>, we have for β_0 > 0

Pr( (1-β_0)Φ(V) ≤ Φ ≤ (1+β_0)Φ(V) ) ≥ 1 - 2exp(-mβ_0^2 ϵ/3).

Taking m ≥ (3/(β_0^2 ϵ)) log(4/δ_0), we have that the above probability is at least 1 - δ_0/2. Let us condition on this event occurring. To see (b), say w.l.o.g. (π(V_x^+)/π(V))^2 Φ(V_x^+) = (1-ρ)Φ(V).
Then, we have

Pr( (1/n)ψ(E_x^+) ≤ (1-ρ)Φ ) ≤ Pr( (1/n)ψ(E_x^+) ≤ (1-ρ)(1+β_0)Φ(V) ).

Taking β such that (1-β)(1-ρ) = (1-ρ)(1+β_0), we have by Lemma <ref> (i),

Pr( (1/n)ψ(E_x^+) ≤ (1-ρ)Φ ) ≤ Pr( (1/n)ψ(E_x^+) ≤ (1-β)(1-ρ)Φ(V) )
≤ exp( -nβ^2(1-ρ)Φ(V)/2 )
≤ exp( -(n(1-ρ)Φ/(2(1+β_0))) · [1 - (1-ρ)(1+β_0)/(1-ρ)]^2 )
≤ exp( -(n(1-ρ/2)Φ/(2(1+β_0))) · [1 - (1-ρ)(1+β_0)/(1-ρ/2)]^2 ).

Taking β_0 ≤ ρ/4, the above is less than exp(-nΦρ^2/32). With n as in the lemma statement and combined with our results on the concentration of Φ, we have that with probability 1-δ_0,

(1/n) max{ψ(E_x^+), ψ(E_x^-)} > (1-ρ)Φ.

To see (c), suppose now that w.l.o.g.

(π(V_x^-)/π(V))^2 Φ(V_x^-) ≤ (π(V_x^+)/π(V))^2 Φ(V_x^+) = (1-ρ)Φ(V).

We need to consider two cases.

Case 1: ρ ≤ 1/2. Taking β such that (1+β)(1-ρ) = (1-ρ)(1-β_0), we have by Lemma <ref> (ii),

Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ ) ≤ Pr( (1/n)ψ(E_x^+) > (1-ρ)(1-β_0)Φ(V) )
= Pr( (1/n)ψ(E_x^+) > (1+β)(1-ρ)Φ(V) )
≤ exp( -nβ^2(1-ρ)Φ(V)/3 )
≤ exp( -(n(1-ρ)Φ/(3(1+β_0))) · [(1-ρ)(1-β_0)/(1-ρ) - 1]^2 )
≤ exp( -(nΦ/(6(1+β_0))) · [(1-ρ)(1-β_0)/(1-2ρ) - 1]^2 ).

Taking β_0 ≤ ρ/4, the above is less than exp(-nΦρ^2/12). Because (π(V_x^-)/π(V))^2 Φ(V_x^-) ≤ (π(V_x^+)/π(V))^2 Φ(V_x^+), we can also say

Pr( (1/n)ψ(E_x^-) > (1-ρ)Φ ) ≤ exp(-nΦρ^2/12).

Case 2: ρ > 1/2. Taking β_0 ≤ 1/16, we have

Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ ) ≤ Pr( (1/n)ψ(E_x^+) > (1-ρ)(1-β_0)Φ(V) )
= Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ(V) + ((1-ρ)(1-β_0) - (1-ρ))Φ(V) )
≤ Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ(V) + (ρ - ρ - β_0)Φ(V) )
≤ Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ(V) + (ρ/2 - β_0)Φ(V) )
≤ Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ(V) + (1/4 - β_0)Φ(V) )
≤ Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ(V) + ((1/4 - β_0)/(1 + β_0))Φ )
≤ Pr( (1/n)ψ(E_x^+) > (1-ρ)Φ(V) + (3/17)Φ ).

By Lemma <ref>, the above is less than exp(-nΦ^2/40). Because (π(V_x^-)/π(V))^2 Φ(V_x^-) ≤ (π(V_x^+)/π(V))^2 Φ(V_x^+), we can also say Pr( (1/n)ψ(E_x^-) > (1-ρ)Φ ) ≤ exp(-nΦ^2/40).

Regardless of which case we are in, we have for n as in the lemma statement, with probability 1-δ_0,

(1/n) max{ψ(E_x^+), ψ(E_x^-)} ≤ (1-ρ)Φ.

We next provide the proof of Lemma <ref>.
Proof. Recall that the termination condition from DBAL is (1/n)ψ(E) < 3ϵ/4 for E ∼ (π|_V)^2 × n. Part (a) follows from plugging β = 1/4 into Lemma <ref> (i) and taking a union bound over rounds 1, …, K. Similarly, part (b) follows from plugging β = 1/4 into Lemma <ref> (ii) and taking a union bound over rounds 1, …, K.
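As a quick numerical illustration (not part of the proofs), the multiplicative lower-tail bound used above can be checked empirically for Bernoulli sums, with all parameter values below chosen purely for illustration:

```python
import math
import random

def chernoff_lower_check(n=200, p=0.5, beta=0.3, trials=1000, seed=1):
    """Empirically compare Pr(X <= (1 - beta) * mu) against the bound
    exp(-beta^2 * mu / 2), where X is a sum of n Bernoulli(p) draws."""
    rng = random.Random(seed)
    mu = n * p
    hits = 0
    for _ in range(trials):
        x = sum(rng.random() < p for _ in range(n))
        hits += x <= (1 - beta) * mu
    return hits / trials, math.exp(-beta ** 2 * mu / 2)
```

With these illustrative parameters, the empirical frequency stays below the stated bound.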
http://arxiv.org/abs/1702.08553v2
{ "authors": [ "Christopher Tosh", "Sanjoy Dasgupta" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170227215924", "title": "Diameter-Based Active Learning" }
Strong rainbow connection numbers of toroidal meshes
Yulong Wei[Corresponding author. E-mail address: yulong.wei@mail.bnu.edu.cn (Y. Wei), xum@bnu.edu.cn (M. Xu), wangks@bnu.edu.cn (K. Wang).], Min Xu, Kaishun Wang
Sch. Math. Sci. & Lab. Math. Com. Sys., Beijing Normal University, Beijing, 100875, China
==================================================================================================================================================================================================================================================================

The increasing integration of distributed energy resources (DERs) calls for new monitoring and operational planning tools to ensure stability and sustainability in distribution grids. One idea is to use existing monitoring tools from transmission grids and some primary distribution grids. However, these usually depend on knowledge of the system model, e.g., the topology and line parameters, which may be unavailable in primary and secondary distribution grids. Furthermore, a utility usually has limited ability to model active controllers for solar panels, as these may belong to a third party such as residential customers. To solve the modeling problem in traditional power flow analysis, we propose a support vector regression (SVR) approach to reveal the mapping rules between different variables and recover useful variables based on physical understanding and data mining. We illustrate the advantages of using the SVR model over the traditional regression method, which finds line parameters in distribution grids. Specifically, the SVR model is robust enough to recover the mapping rules, while the regression method fails, when 1) there are measurement outliers and missing data, 2) there are active controllers, or 3) measurements are only available in some parts of a distribution grid. We demonstrate the superior performance of our method through extensive numerical validation on different scales of distribution grids.
§ INTRODUCTION

Electric grids are undergoing a profound change. Renewables and other distributed energy resources (DERs) are expected to supply more than 50% of electricity demand by 2050 in various parts of the world <cit.>. Deep penetration of DERs adds new capabilities and significantly affects the operations of distribution grids. In such distribution networks, proper monitoring will be needed for detecting outages <cit.>, cyber attacks <cit.>, and system failures <cit.>. In addition to monitoring, operational planning is needed for predicting over-voltage, calculating economic dispatch <cit.>, and conducting short-term grid controls <cit.>.

The power flow equations are the basis for monitoring and planning in distribution grids <cit.>. However, the power flow equations are built from knowledge of the system topology and network parameters. Such knowledge is only available in well-maintained primary distribution grids and in a limited number of secondary distribution grids. In many primary and secondary distribution grids, the assumption of complete information does not hold. In some secondary distribution grids, only the planned topology and switch locations are known, but real-time changes to the topology can be hard to track. Line parameter profiles are inaccurate or even missing. Even reconstructing the admittance matrix can be difficult when using distribution management systems (DMS) such as CYME <cit.>. For example, Southern California Edison (SCE) uses the CYME software to model its distribution grids. However, the CYME model is only available in a few primary distribution grids. Since the CYME model requires all the topology information and line parameter information, as well as the modeling of controllers and loads, it is incapable of modeling many secondary-level distribution grids where most of the required information is missing.
Currently, a secondary distribution grid is treated as a single node even if it hosts DERs such as solar panels. Future distribution networks will host a variety of active control devices, ranging from voltage regulators to inverters for rooftop solar, EV charging, and storage. These assets are usually independently owned and operated outside of the domain of the DMS. The control rules implemented by these devices are unavailable or can be hard to model, making the direct application of power flow analysis difficult and inaccurate <cit.>, even when topology and line parameters are perfectly known.

Incomplete system information and limited measurements make the system identification problem hard in practice. The availability of measurements from active devices, line sensors, smart meters, and μPMUs <cit.> is an opportunity to overcome this challenge by designing scalable approaches for system monitoring and analysis relying on new types of data. Recent research augments traditional power flow equations by using historical data to initialize state estimators and solvers <cit.>, modifying the current system models <cit.>, and proposing novel multi-objective optimization formulations <cit.>.

In this paper, we focus on building the mapping rules equivalent to the power flow equations in distribution grids. In particular, we discuss how to design data-driven methods to recover the key relationships in power flow equations: the mapping rules between power injections and voltage phasors. A distribution grid's mapping rules are governed by the elements of the admittance matrix when there are no unmodeled active controllers. When accurate measurements of all the historical data, including the set of voltage phasors, real power injections, and reactive power injections, are available at all buses, the admittance matrix can be learned via linear regression.
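As a toy sketch of this parameter-learning baseline (assuming, purely for illustration, a linearized DC-power-flow-style model p ≈ b·θ for a single line, with hypothetical measurement arrays), the line parameter can be recovered by ordinary least squares over historical samples:

```python
def estimate_susceptance(thetas, powers):
    """One-parameter least squares: b = sum(theta * p) / sum(theta^2),
    fitting the linear model p = b * theta to historical samples."""
    num = sum(t * p for t, p in zip(thetas, powers))
    den = sum(t * t for t in thetas)
    return num / den

# Hypothetical clean measurements generated from a true b = 4.0:
thetas = [0.01, 0.02, 0.03, 0.04]
powers = [4.0 * t for t in thetas]
```

With clean, complete measurements this recovers the parameter exactly; the discussion below explains why such estimates degrade under outliers, partial observability, and active controllers.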
The challenge of using the linear regression approach in distribution grids is that these ideal conditions are usually not satisfied. For example, the parameter estimation approach is not robust against measurement outliers, which are common in distribution grids. Moreover, the linear regression requires measurements at all buses. In distribution grids, usually only the measurements at the root (substation or feeder transformers) and at the leaves (end users) are available. Other parts of the network have limited measurements for observability. In this case, a parameter estimation regression model is impossible to build without the measurements at intermediate buses. Furthermore, since the linear regression model explicitly learns the line parameters, it cannot represent any model beyond the linear relationship. The limited flexibility of a linear regression model therefore prevents it from capturing the dynamics of any third party-owned controllers.

Finally, the problem of “inverse mapping”, which recovers the voltage phasor information from the measurements of real and reactive power, is not guaranteed to have a unique solution. To solve the inverse mapping problem, the information of the topology and line parameters is still a prerequisite. With partial measurements and the existence of active controllers, it is hard to recover the full topology and all line parameters through the linear regression model. Even if we have the information of the topology and line parameters, the “inverse mapping” problem can sometimes be ill-conditioned and may not have a feasible solution.

Therefore, we propose to use a kernel-based support vector regression (SVR) <cit.> model to train and represent the mapping rules. The insensitive zone of the SVR model and the linear asymptotic behavior of the SVR loss function provide better tolerance of outliers <cit.>.
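The effect of the ε-insensitive zone can be illustrated with a minimal sketch (illustrative values only; the penalty is zero inside the tube and grows only linearly outside it, unlike the squared loss):

```python
def eps_insensitive(residual, eps=0.1):
    """SVR loss: zero inside the tube |r| <= eps, linear outside."""
    return max(0.0, abs(residual) - eps)

def squared(residual):
    """Least-squares loss, for comparison."""
    return residual ** 2

# Residuals: mostly small measurement noise, plus one gross outlier.
residuals = [0.05, -0.08, 0.02, 5.0]
svr_penalties = [eps_insensitive(r) for r in residuals]
ls_penalties = [squared(r) for r in residuals]
```

Here the outlier contributes a penalty of 4.9 under the ε-insensitive loss but 25 under the squared loss, so a least-squares fit is pulled much harder toward the outlier.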
The kernel trick provides the needed flexibility so that the SVR model can incorporate the power flow equations, capture the dynamics of active controllers, and handle incomplete measurements. Many data-driven models behave like black boxes that ignore physical law-based models. We design the SVR model to represent the physical law-based power flow equations exactly when all measurements are perfect. This is achieved by choosing an appropriate kernel in our SVR model. The “inverse mapping” can be treated as a differentiable mapping: it is a function of real and reactive power whose output is the voltage magnitudes. Thus, locally, it can be approximated by a polynomial function of real and reactive power through a Taylor expansion, and the flexibility of the kernel-based SVR model provides an accurate approximation of this locally expanded polynomial function. Furthermore, SVR can be computed very efficiently using interior point methods and distributed computing <cit.>, and many different kernels can be utilized depending on the application <cit.>. Other black box-like data-driven models, such as neural networks (NNs), could also be used for this purpose. However, an NN does not guarantee an exact representation of the traditional physical law-based model. NNs usually require more data than SVR and are prone to overfitting; they work best for highly nonlinear systems such as image recognition and natural language processing. In our situation, the system is still governed by physical laws, and the SVR model can identify the physical structure behind the data, whereas an NN cannot. We test the proposed SVR model for estimating both the forward and inverse mapping rules between voltage phasors and power injections on distribution grids of different scales, including the IEEE 8-bus and 123-bus systems and systems with sizes between 8 and 123 buses. We also compare the SVR model with traditional parameter learning-based regression.
The results reveal that the SVR model outperforms traditional models, especially in the cases of partial measurements, systems with active controllers, and measurements with outliers. These satisfactory results show that the SVR-based mapping rule estimation can serve as an equivalent of the traditional physical law-based power flow equations in distribution grids with renewables. The rest of the paper is organized as follows: Section <ref> reviews power flow analysis and defines the problem of learning mapping rules for distribution networks. Section <ref> shows how mapping rule learning can be represented as an SVR problem and how to embed power system physical understanding into the SVR model. Section <ref> illustrates the advantages of the SVR model over the traditional parameter learning-based regression model. Section <ref> analyzes experimental results on different distribution grids and compares them with the traditional physical law-based power flow equations. Section <ref> concludes the paper. §.§ Notation We use lower case English and Greek letters, such as p and β, to denote scalars and scalar functions, and lower case bold English and Greek letters, such as 𝐚, to denote vectors and vector functions. We use upper case English letters, such as G, to denote matrices. We use a comma (,) to denote horizontal concatenation of vectors and a semicolon (;) to denote vertical concatenation of vectors.
For example, [x_1, x_2] ∈ℝ^1× 2 is a row vector, and [x_1; x_2] ∈ℝ^2× 1 is a column vector. § PROBLEM MOTIVATION AND FORMULATION For traditional grid monitoring and planning tools, the physical power flow mappings serve as the basis <cit.>: p_i = ∑_k=1^n |v_i||v_k| (g_ikcosθ_ik + b_iksinθ_ik), q_i = ∑_k=1^n |v_i||v_k| (g_iksinθ_ik - b_ikcosθ_ik), where i = 1, ⋯, n. p_i and q_i are the real and reactive power injections at bus i, and (g_ik + j · b_ik) is the (i, k)-th element of the admittance matrix Y=G+j · B, where j is the imaginary unit. |v_i| is the voltage magnitude at bus i, and θ_ik is the phase angle difference between bus i and bus k. To enable the kernel-based analysis that follows, we represent the voltage phasors in rectangular coordinates, because the rectangular representation turns the trigonometric functions into polynomial functions. By defining u_i = |v_i| cosθ_i, w_i = |v_i| sinθ_i, where u_i and w_i are the real and imaginary components of the voltage phasor, the physical law-based power flow mappings (<ref>) can also be expressed as functions of u_i and w_i: p_i = ∑_k=1^n (u_iu_kg_ik + w_iw_kg_ik + w_iu_kb_ik - u_iw_kb_ik), q_i = ∑_k=1^n (w_iu_kg_ik - u_iw_kg_ik - u_iu_kb_ik - w_iw_kb_ik). Furthermore, we denote = [u_1; ⋯; u_n], = [w_1; ⋯; w_n], = [; ]. Then, the inherent power flow mappings (<ref>) can be abstractly represented as p_i = f_p_i(), q_i = f_q_i(). Traditionally, the power flow mappings f_p_i and f_q_i are determined by the system topology and line parameters. However, in distribution grids, the physical law-based representation of the power flow mappings may be unavailable because of inaccurate topology information and missing line parameters. To address this, we exploit the increasing data availability in distribution grids and propose to represent the power flow mappings from one set of measurements to another based solely on historical measurements of _t and p_i_t (or q_i_t), t=1, ⋯, T <cit.>.
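As a sanity check of the rectangular-coordinate form above, the mappings can be sketched in a few lines of NumPy; the array names and toy dimensions are illustrative, but the vectorized complex-power product reproduces the polynomial sums term by term.

```python
import numpy as np

def power_injections(u, w, G, B):
    """Real and reactive power injections from rectangular voltage components.

    u, w: real and imaginary parts of the bus voltages (length n).
    G, B: real and imaginary parts of the bus admittance matrix (n x n).
    Vectorized form of p_i = sum_k (u_i u_k + w_i w_k) g_ik + (w_i u_k - u_i w_k) b_ik
    and the matching expression for q_i.
    """
    v = u + 1j * w                      # complex bus voltages
    s = v * np.conj((G + 1j * B) @ v)   # S_i = V_i * conj(sum_k Y_ik V_k)
    return s.real, s.imag               # (p, q)
```

Expanding V_i conj(Y_ik V_k) term by term recovers exactly the four products appearing in the rectangular-coordinate equations for p_i and q_i.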
§.§ Representing the Power Flow Mappings using Inner-Products When estimating the power flow mappings, f can be expressed in a different form than (<ref>) to emphasize the unknown coefficients g_ik and b_ik: p_i = ∑_k=1^n g_ik(u_iu_k + w_iw_k) + b_ik(w_iu_k - u_iw_k), q_i = ∑_k=1^n g_ik(w_iu_k - u_iw_k) - b_ik(u_iu_k + w_iw_k). Subsequently, the power flow mappings (<ref>) can be treated as the inner product between the vector [_i; _i] and a feature mapping (·) of the state vector [; ]: p_i = <[_i; _i], _p_i([; ])>, q_i = <[_i; _i], _q_i([; ])>, where <·, ·> represents the inner product of two vectors, _i = [g_i1; ⋯; g_in], and _i = [b_i1; ⋯; b_in]. In other words, if we map the state vector = [; ] to a higher dimensional space, the power flow mapping becomes a linear function between p_i and _p_i() with parameters [_i; _i]. After compactly denoting 1) the output as y, 2) the system model parameter as , and 3) the state vector as , the power flow mapping can be expressed as: y = <_y, _y ()>, where y = p_i (or y = q_i) and _p_i = _q_i = [_i; _i]. § SUPPORT VECTOR REGRESSION FOR POWER FLOW §.§ Estimating the Model Parameter via Linear Regression The power flow mapping (<ref>) is linear with respect to the system parameters _y. A straightforward approach to find the mapping is to estimate the physical model parameter _y directly through linear regression based on historical data points (_t, y_t), t=1, ⋯, T. By defining Φ_y := [[ _y(_1)^T;⋮; _y(_T)^T ]], the least-squares estimate of _y is: _y, LS = (Φ_y^T Φ_y)^-1Φ_y^T . §.§ Formulating the Power Flow Mapping Estimation Problem using the SVR Model Besides the linear regression approach, the inner-product representation of the power flow mappings naturally forms the basis of a support vector regression (SVR) model <cit.> to estimate the mappings in (<ref>): , , ^⋆, bminimize1/2^2 + C∑_t=1^T (ξ_t + ξ_t^⋆) subject to y_t - <, _y_t> - b≤ϵ + ξ_t, <, _y_t> + b - y_t ≤ϵ + ξ_t^⋆, ξ_t, ξ_t^⋆≥ 0, where the constraints are imposed for the T historical data samples t = 1, ⋯, T.
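When every bus is metered, the least-squares estimator above can be sketched directly; the regression features are arranged so that the fitted coefficients are exactly the admittance rows g_i and b_i. All array names are illustrative, and note that b_ii multiplies an identically zero feature (w_i u_i - u_i w_i = 0), so it cannot be identified from p_i alone.

```python
import numpy as np

def features_bus_i(u, w, i):
    """Regression features so that p_i = <[g_i; b_i], phi>:
    fg[k] multiplies g_ik and fb[k] multiplies b_ik."""
    fg = u[i] * u + w[i] * w       # coefficient of g_ik
    fb = w[i] * u - u[i] * w       # coefficient of b_ik (zero at k == i)
    return np.concatenate([fg, fb])

def least_squares_line_params(U, W, p, i):
    """beta_LS = argmin ||Phi beta - p||^2 over T snapshots (rows of U, W)."""
    Phi = np.stack([features_bus_i(U[t], W[t], i) for t in range(U.shape[0])])
    theta, *_ = np.linalg.lstsq(Phi, p, rcond=None)
    n = U.shape[1]
    return theta[:n], theta[n:]    # estimated rows g_i and b_i
```

With noiseless measurements at all buses and enough snapshots, this recovers the identifiable line parameters exactly, which is the baseline the SVR model is compared against.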
In particular, the inequality constraints (<ref>) and (<ref>) assign zero penalty to training data samples located in the ϵ-insensitive zone, in which the data samples contribute no error to the regression fit, i.e., ξ_t = 0 and ξ_t^⋆ = 0. Only the training data samples outside the ϵ-insensitive zone determine the optimal fit. These data samples are called support vectors. An illustration of a typical SVR fit is shown in Fig. <ref>. §.§ SVR Power Flow The SVR optimization in (<ref>) is in general difficult to solve due to the large number of constraints and the dimension of the feature mapping (·). However, special choices of feature mappings lead to a simple representation of the solutions of the SVR regression. These feature mappings satisfy the kernel trick property: K(_1, _2) := <(_1), (_2)> = h(<_1, _2>), where the inner product between (_1) and (_2) is a scalar function of the inner product between _1 and _2, and h(·) is a scalar function <cit.>. The space of feature mappings satisfying this property is the reproducing kernel Hilbert space (RKHS). By choosing a feature mapping (·) in the RKHS, we can avoid directly calculating the feature mapping and estimating the topology and line parameters explicitly in the intermediate step. Instead, the kernel automatically maps the data to a proper higher dimensional space. To solve the optimization problem (<ref>), we only need to calculate the inner products between different training data samples: K(_t_1, _t_2) = h(<_t_1, _t_2>). Furthermore, the solution of (<ref>) does not directly provide the optimal model parameter ^⋆. Instead, it is given by an optimal set of parameters α_t^⋆, t=1, ⋯, T. Therefore, the power flow mappings (<ref>) can be represented as a linear combination of the kernel products between a state and the historical data _1, ⋯, _T, parameterized by ^⋆: y = f_y^⋆() = ∑_t=1^T α_t^⋆ K(, _t). α_t^⋆ is nonzero only when _t is a support vector.
This fact makes the SVR-based representation of the power flow mapping sparse and easy to compute. As an illustration, Fig. <ref> summarizes the transformation from the physical law-based representation to the historical data-driven, SVR-based representation of the power flow mappings. The physical law-based representation (<ref>) and the SVR-based representation (<ref>) of the power flow mappings are both defined using inner products. However, these two representations have fundamental differences. The parameters of the physical law-based representation are the line parameters [; ], whose dimension is proportional to the size of the distribution grid. Moreover, to apply that representation, we must explicitly map the state to a higher dimensional space via (·) and compute inner products. In contrast, the parameters of the SVR-based representation are solely the historical data samples X and the associated Lagrangian multipliers ^⋆, whose dimension is the number of support vectors, independent of the size of the distribution grid. Furthermore, to apply the SVR-based representation, we only need to compute the kernel inner products between the state and the historical data samples, without explicitly mapping the state to the higher dimensional space. This is especially useful in distribution grids, where data is abundant but a complete physical model is missing. §.§ Choosing Tuning Parameters for SVR Power Flow Cross-validation is typically utilized in SVR to choose the tuning parameters C and ϵ in (<ref>) <cit.>. This increases the method's robustness towards noise and outliers in the data, and it ensures that the SVR has good predictive performance.
The suggested approach for SVR-based power flow is to utilize k-fold cross-validation <cit.> (typically k=5) with the training data to select the optimal choices of C and ϵ, with k-1 blocks of data used to train the model and one block used to assess validation performance and select the tuning parameters. The SVR performance is then assessed on a separate data set. The choice of parameters determines the sparsity of α_t in the kernel representation. §.§ Generalizing Power Flow SVR: Inverse Mappings In many applications of power flow analysis, we are interested in recovering voltage magnitude and phase angle information from the measurements of real and reactive power. Typically, a calibrated power flow model is utilized and solved. Power flow solutions are not guaranteed to be unique, and in some instances the problem can be ill-conditioned. Additionally, the system might not be fully observed, preventing the learning of an accurate model in the absence of topology information and relatively accurate line parameter data. (<ref>) instead enables learning an inverse mapping of voltage magnitude as a function of power from historical data.
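The k-fold tuning loop described above is straightforward to write by hand. As a dependency-free sketch, we tune the regularizer of a kernel ridge model with the quadratic kernel; with an actual SVR one would grid-search C and ε in the same loop (e.g., via scikit-learn's `GridSearchCV`). All grids, data, and function names are illustrative.

```python
import numpy as np

def kr_fit(X, y, lam, c=1.0):
    """Kernel ridge with quadratic kernel (x.z + c)^2 as a stand-in model."""
    K = (X @ X.T + c) ** 2
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kr_predict(alpha, X_train, X_new, c=1.0):
    return ((X_new @ X_train.T + c) ** 2) @ alpha

def kfold_select(X, y, lam_grid, k=5, seed=0):
    """k-fold CV: train on k-1 blocks, validate on the held-out block,
    and return the tuning value with the lowest mean validation MSE."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(X)), k)
    def cv_mse(lam):
        errs = []
        for j in range(k):
            val = folds[j]
            tr = np.concatenate(folds[:j] + folds[j + 1:])
            alpha = kr_fit(X[tr], y[tr], lam)
            errs.append(np.mean((kr_predict(alpha, X[tr], X[val]) - y[val]) ** 2))
        return float(np.mean(errs))
    return min(lam_grid, key=cv_mse)
```

The same skeleton applies unchanged when the inner `fit`/`predict` calls are replaced by an SVR trained with candidate (C, ε) pairs.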
The inverse power flow is a differentiable mapping as a function of real and reactive power. Thus, locally, |v_i|, the voltage magnitude at bus i, can be approximated by a polynomial function of [;]. Setting = [;] and utilizing the polynomial kernel produces approximations that can achieve arbitrary accuracy with respect to the Taylor expansion of the inverse mapping. § ADVANTAGES OF USING SVR REPRESENTATION OVER REGRESSION §.§ Connection between SVR Model and Physical Model When the historical measurements at all buses are fully observable and there are no measurement errors, we have the following theorem, which proves that the SVR-based representation of the power flow mappings can exactly recover the physical law-based representation: The physical law-based power flow mappings (<ref>) can be exactly represented by choosing the quadratic kernel K(_1, _2) = (<_1, _2> + c)^2 = (_1^T _2 + c)^2. First, the quadratic kernel is in the reproducing kernel Hilbert space (RKHS). The feature mapping corresponding to the quadratic kernel (<ref>) is () = [x_1^2, ⋯, x_m^2, √(2) x_1x_2, ⋯, √(2) x_1x_m, √(2) x_2x_3, ⋯, √(2)x_m-1x_m, √(2c)x_1, ⋯, √(2c)x_m, c]. Second, we can constructively build a ^⋆ such that the inner product between ^⋆ and the quadratic feature mapping () exactly recovers the power flow mapping for p_i. Given = [; ] and the feature mapping (·) in (<ref>), we define ^⋆ as follows: β^⋆_j = g_ii, if ()_j = u_i^2 or ()_j = w_i^2, 1/√(2) g_ik, if ()_j = √(2)u_iu_k or √(2)w_iw_k, i ≠ k, 1/√(2) b_ik, if ()_j = √(2)w_iu_k, -1/√(2)b_ik, if ()_j = √(2)u_iw_k, 0, otherwise. With the definition of () in (<ref>) and ^⋆ in (<ref>), the inner product between ^⋆ and () is exactly the physical law-based mapping from the state to p_i: p_i = ∑_k=1^n g_ik(u_iu_k + w_iw_k) + b_ik(w_iu_k - u_iw_k) = <^⋆, ()>. §.§ Robustness of SVR Model against Outliers The parameter learning-based regression model only works well if the data is outlier-free. This is because the loss function of linear regression is quadratic.
On finite samples, the squared-error loss places much more emphasis on observations with large absolute residuals |y_t - f(_t)| during the fitting process. It is thus far less robust, and its performance severely degrades for grossly mis-measured y-values (“outliers”) <cit.>. The least absolute deviation (LAD) estimation <cit.> replaces the quadratic loss function by the absolute value loss function, which provides a more robust criterion. However, the LAD model cannot guarantee a unique solution <cit.>: there may be multiple solutions achieving the minimal loss function value. The SVR model resolves the drawbacks of both traditional regression and least absolute deviation estimation. First, the asymptotic behavior of the ϵ-insensitive loss function is linear, which makes it less sensitive to large absolute residuals. Furthermore, the regularization of the parameter in the objective eliminates the possibility of multiple optimal solutions, which makes the SVR model much more stable than the LAD model. We compare the loss functions of the linear regression model, the LAD model, and the SVR model in Fig. <ref>. §.§ Flexibility of SVR Model for Active Controller Modeling Increasing penetration of DERs adds a variety of active controllers whose control algorithms and device models might be unavailable to the power monitoring systems at the utility. One common kind of active controller is a capacitor bank for volt/var control to maintain a stable voltage. Fig. <ref> illustrates a droop control function where the additional reactive power injection is determined by the voltage magnitude. These active controllers affect the physical law-based power flow by adding unmeasured power injections to distribution grids.
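The three criteria compared above can be written side by side; the ε-insensitive loss is zero inside the tube and grows only linearly outside, which is what blunts the pull of gross outliers on the fit (the ε value below is illustrative).

```python
import numpy as np

def squared_loss(r):
    """Linear regression criterion: quadratic in the residual."""
    return r ** 2

def absolute_loss(r):
    """LAD criterion: linear, but may admit multiple minimizers."""
    return np.abs(r)

def eps_insensitive_loss(r, eps=0.1):
    """SVR criterion: zero inside the eps-tube, linear outside."""
    return np.maximum(0.0, np.abs(r) - eps)
```

A residual of 5 (a gross outlier) contributes 25 to the squared loss but only 4.9 to the ε-insensitive loss, so its influence on the fit grows linearly rather than quadratically.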
For instance, if bus i in an n-bus distribution grid is equipped with a reactive power bank whose injected reactive power follows the voltage variation, q_i' = h(, ), the modified power flow equation at bus i changes to: q_i = ∑_k=1^n |v_i||v_k| (g_iksinθ_ik - b_ikcosθ_ik) - q_i', where q_i' is the additional power injection from the reactive power bank and h(·) is the control policy. If q_i' is omitted from the model, an incorrect mapping will be obtained. As an illustration, we add a reactive power controller at bus 4 of the IEEE 8-bus distribution grid and assume the topology and line parameters are known. Fig. <ref> shows the significant mean absolute error (MAE) appearing in the traditional power flow analysis when an active device is added but not modeled. The flexibility of the SVR model also provides a practical approach to representing third-party-owned distributed controllers, provided the control algorithm is a differentiable mapping of real and reactive power. §.§ Flexibility of SVR Model for Partially Observable Distribution Grids In many distribution grids, measurements are only available at the root level, where the substation/feeder transformers are located, and at the leaf level, where the residential loads and distributed energy resources are located. At the intermediate-level buses, no measurements are available. In this case, the system is neither fully observable, nor can the regression model provide the correct line parameters. However, due to the flexibility of the kernel-based SVR model, we can still use the measurements at the available buses as inputs to obtain an accurate power flow representation between the partially measured voltages and the power injections at the root/leaves. Fig. <ref> shows an example of a partially-observed distribution grid, where the measurements on red nodes are unavailable.
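The droop modification above can be sketched as follows: the measured net reactive injection is the physical injection minus the controller output q_i' = h(·), whose policy is unknown to the utility. The piecewise-linear droop curve, its coefficient, and the saturation limit below are illustrative constants, not values prescribed by the model.

```python
import numpy as np

def droop_q(v_mag, v_ref=1.0, coeff=10.0, q_max=0.5):
    """Illustrative volt/var droop policy h(.): reactive output proportional
    to the voltage deviation, saturated at the bank's capacity."""
    return np.clip(coeff * (v_ref - v_mag), -q_max, q_max)

def net_reactive_injection(q_physical, v_mag):
    """Measured net injection q_i = (physical-law q_i) - q_i', q_i' = h(v).
    Omitting q_i' from the model biases the learned mapping."""
    return q_physical - droop_q(v_mag)
```

Because the controller output is a smooth (piecewise-differentiable) function of the grid state, it can be absorbed into the kernel-based mapping, whereas a linear parameter-estimation model cannot represent it.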
Since the traditional regression model requires all measurements to calculate v_i v_j cos (θ_i - θ_j) and v_i v_j sin (θ_i - θ_j), in this case the regression model can never obtain the correct line parameters. If the hidden nodes have no sources, we can prove that there exists an equivalent admittance matrix. If there are power injections at the hidden buses, the regression-based model fails. However, since the power injections are still determined by the voltages of the available nodes (buses 1, 4, 5, 7, and 8), the flexibility of the SVR model guarantees that it can still capture the mapping rules. While the physical law-based model does not have a meaningful interpretation for partially observable grids, the SVR representation still captures the temporal relationship between the mapping rules and the historical data. § EXPERIMENTAL RESULTS §.§ Experiment Setup We test our data-driven power flow approach on a variety of settings and real-world data sets. We use 8-, 16-, 32-, 64-, 96-, and 123-bus test feeders, as well as two Southern California Edison (SCE) distribution networks with different shapes. Here, the 16-, 32-, 64-, and 96-bus systems are extracted from the IEEE 123-bus system. The bus power injection data is from primary distribution grids of Southern California Edison (SCE) and secondary distribution grids of Pacific Gas and Electric (PG&E). The real and reactive power injection data are from small and medium businesses or aggregates of several residential homes. The sampling frequency is one hour. The SCE data set's period is from January 1, 2015, to December 31, 2015, and the PG&E data set's period is from August 1, 2010, to July 1, 2011. For the IEEE standard test feeders, we run power flow using the Matpower package <cit.> to obtain the associated voltage magnitudes and phase angles at each bus. We use the MATPOWER software only to implement the Newton-Raphson iteration.
All the parameters are built from either IEEE standard distribution grid models or Southern California Edison real data. For the SCE distribution networks, the voltage phase angle information is available at some buses. In our experiments, the topology and line parameter information from the IEEE standard case files is only used for data preparation, to build the relationship between power injections and voltage phasors. In all evaluation steps, we assume that the topology information and the line parameters are unavailable. Finally, noise is added to check the robustness of the proposed approach. In particular, we compare the regression-based approach and our proposed SVR approach for three common scenarios in distribution grids: 1) the existence of outliers, 2) unknown volt/var controllers, and 3) partially observable distribution grids. All these scenarios are tested on different scales of distribution grids, including radial networks and mesh networks. §.§ Data Collection Currently, Southern California Edison (SCE) is regularly collecting different types of time series data from their distribution grids, including the voltage and power at the root nodes as well as at the nodes behind the root. We are building the VADER data infrastructure, which has the capability of handling heterogeneous data and post-processing them. We have different data extractors to acquire data, and a unified data distributor cleans and sorts the acquired data into a Cassandra database, which serves as the input source for the proposed mapping rule estimation. §.§ Effectiveness of SVR Model for Forward Mapping As proved above, the power flow equations can be represented exactly by the proposed SVR model with the 2nd-order polynomial kernel in (<ref>) when we choose the rectangular coordinate representation of the state vector.
For the forward mapping, we build the proposed SVR model with the 2nd-order polynomial kernel and other RKHS kernels, as well as the parameter learning-based regression model and an averaging model. In particular, the input of all these models is the voltage phasors at all buses, and the output is the power injection at one particular bus. The raw input and output variables are the same for the different models. Then, for the parameter learning-based regression model, the raw inputs are reformulated following (<ref>). For the SVR models, the raw inputs are only reformulated into rectangular coordinates with a reference phase angle of 45^∘. For the performance comparison, we use six weeks' hourly sampled historical data of voltage phasors at all buses and power injections at some buses to train all these models. Random Gaussian measurement errors with 1% relative standard deviation are added to the training data, and 2% of the training data samples are modified into outliers. Another three weeks' data is used for testing the performance. No measurement errors or outliers are added to the test dataset. Then the root-mean-square error (RMSE) between the estimated power injection and the true power injection is calculated. The result for the 123-bus case is shown in Fig. <ref>. It is clear that the performance of the SVR models is better than that of the regression model and the averaging model. Among the SVR models with different kernels, the 2nd-order polynomial kernel provides the smallest RMSE, supporting the theoretical deduction in Section <ref>. We also visualize the selection of support vectors among all training data points in Fig. <ref>. The x-axis is the training data point index, and the y-axis is the magnitude of the associated dual Lagrangian multiplier. If a training data point is not a support vector, the dual Lagrangian multiplier is zero; if a training data point is a support vector, the dual Lagrangian multiplier is nonzero. In addition, we mark the outlier data points with a black cross. Fig.
<ref> shows that the coefficient is nonzero only when the associated data point is an outlier. Besides the 123-bus result shown in Fig. <ref>, we also compare the performances on distribution grids of various scales, from the 8-bus to the 123-bus distribution grid. We also test the performance on the 123-bus distribution grid with a mesh network, created by manually connecting several buses to mimic some urban systems' weakly meshed structure. For all of these evaluations, the settings are the same as the 123-bus settings above. The results are shown in Table <ref>. For all test cases, the RMSEs of the SVR model are better than those of the regression model and the averaging model. As for computational time, training the SVR models for all cases takes only seconds, which is fast enough for real-time updating. §.§ Effectiveness of SVR Model for Inverse Mapping We also test the performance of the SVR model on the inverse mapping, from , to |v_i|, introduced in Section <ref>. For the inverse mapping, the input of all these models is the active and reactive power injections at all buses, while the output is the voltage magnitude at a particular bus. Other settings for data preparation and performance evaluation are the same as in the forward mapping. The result for the 123-bus case is shown in Fig. <ref>. Differing from the forward mapping, the SVR models with the 1st- and 2nd-order polynomial kernels have the best performance. We evaluate the performances of the different models for the inverse mapping (from power injections to voltage magnitudes) on distribution grids of various scales, from the 8-bus to the 123-bus distribution grid. We also test the performance on the 123-bus distribution grid with loops. The results are in Table <ref>. Similar to the results of the forward mapping estimation, the proposed SVR model outperforms the other models on all test cases.
The computational times for the different models are also similar to those of the forward mapping estimation. §.§ Robustness of the Extrapolation Capability A power system is a dynamic system, and the load values change significantly over time, especially under different DER penetration levels. Therefore, we train the proposed SVR model and the regression model using a fixed training set, in which all the real power injections are within the range [-1 p.u., 0], to obtain the mapping rule from voltage phasors to power injections. Then, we test the performance of the learned mapping rules at different power injection levels. We build the same forward mapping model as in the previous section, with 1% relative measurement error and 2% outliers added to the training set. Fig. <ref> demonstrates the performance of the two models, where the mean absolute error (MAE) is used as the evaluation metric. Fig. <ref> shows the MAEs of estimating the real power injection when the actual power injection range in the test set is the same as in the training set. When the actual power injection is between -0.6 p.u. and -0.4 p.u. (same range for training and test), the performances of the regression model and the SVR model are both good. When the actual power injection is around -1 p.u. or 0 (slightly different ranges for training and test), the performance of the SVR model is much better than that of the regression model; the regression model is worse but still acceptable (error smaller than 0.05 p.u.). However, when the range of the testing set differs from that of the training set, the linear model performs poorly, as shown in Fig. <ref>, while the performance of the SVR model remains much better. In addition to the variation of PV generation in Fig. <ref>, we consider the load variation in Fig. <ref>.
It shows that the SVR model is very robust in such cases, and the associated absolute error is always less than 0.1 p.u. across the various test cases. On the other hand, the absolute error of the regression model can be as high as 0.3 p.u. The data for Fig. <ref> are in Table <ref>. We also test the extrapolation ability of the SVR model for the inverse mapping from power injections to the voltage magnitude at a certain bus of the distribution grid. Similar to the first use case, we investigate the performance of the proposed model at different power injection levels. Table <ref> presents the detailed results for the inverse mapping estimation. When the training data and testing data are in the same range, the performance of the SVR model is better than that of the linear regression model, while the error of the regression model is still relatively small, e.g., the MAE is less than 0.002 p.u. However, when the power injection range of the testing set differs from that of the training set, the performance of the regression model degrades significantly, while the SVR model retains good performance. §.§ Robustness to Outliers We test the robustness of the proposed SVR model to outliers in the training data. In this test, no random measurement errors are added to the training data. We vary the percentage of outliers from 0 to 8% in the training set for all direct measurements. Six weeks' data is used for training and validation, while another three weeks' data is used for testing. We use the mean squared error (MSE) to evaluate the performance. For the forward mapping, the performance for the 123-bus case is illustrated in Fig. <ref>. When there are no outliers, both the regression model and the proposed SVR model work well.
However, the MSE of the regression method increases rapidly even with only 2% outliers in the training data, while the SVR method remains robust even with 8% outliers. §.§ Flexibility for Active Controllers We test the performance of the proposed SVR model when there exist active controllers in the system. In particular, we add a droop controller at bus 7 of an 8-bus distribution grid, as shown in Fig. <ref>. The control signal is either the voltage magnitude at bus 7 or the average voltage magnitude at buses 4, 5, 7, and 8, which stabilizes the voltage of a single bus or the voltages at the leaves of the network, respectively. The voltage droop coefficient is 10, which means a 0.05 p.u. voltage change introduces a 0.5 p.u. change in the controlled reactive power bank output. In this case, we build the forward mapping from the voltages at all buses to the reactive power injection at bus 7, which is affected by the droop controller. No random measurement errors or outliers are added to the training data. Other training and testing settings for the regression model and the SVR model are similar to the previous settings. Fig. <ref> shows the performance of the SVR method when there are unmodeled active controllers in distribution grids. Given the smaller network size, and without measurement errors and outliers, the performance difference depends solely on the existence of the unmodeled controllers. When there is no active controller, the learned mapping rule for both regression and SVR is accurate. However, with the unmodeled active controller, the performance of the regression is much worse than that of the SVR, especially for controller II, which adjusts the reactive power injection at bus 7 based on the mean of the voltage magnitudes at buses 4, 5, 7, and 8. We further evaluate the performance of both SVR and regression with different voltage droop coefficients. The results are shown in Fig. <ref>.
In this evaluation, we use the droop controller to adjust the reactive power injection at bus 7, with the average voltage magnitude at buses 4, 5, 7, and 8 as its input. The performance of the SVR is very robust against increases of the droop coefficient, which shows that the SVR model can learn the controller's mechanism while the parameter-learning approach cannot. §.§ Robustness to Partial Observations We test the performance of the proposed SVR method when only partial measurements are available in a distribution grid. In this section, we choose a 10-bus test distribution grid with a mesh network, where bus 1 is the slack bus, as shown in Fig. <ref>. We only have the measurements of voltages and power injections at buses 1, 4, 5, 7, and 8, which are colored green in Fig. <ref>. For this test, no random measurement errors or outliers are added to the training set. We consider two setups. In the first setup, there are no net power injections at the buses without measurements, i.e., buses 2, 3, 6, 9, and 10. In this case, we may introduce an equivalent admittance matrix, obtained by Kron reduction of the admittance matrix <cit.>, which represents a fully connected graph among the active buses with non-zero power injections. In the second setup, there are net power injections at the hidden buses. The net power injections at the hidden buses could come from private controllers such as reactive power banks, or from energy losses. We model the net power injections at the hidden buses as energy losses proportional to the energy consumption at the leaf nodes. Other training and testing settings are similar to the previous settings. We further add random measurement errors with different levels of standard deviation to the training set and test the robustness of the SVR model in the presence of measurement errors. The result is shown in Fig. <ref>.
It is shown that regardless of the relative measurement error, the SVR's performance with partially available data is better than that of the regression model, implying better modeling flexibility and robustness.

§ CONCLUSION

With deep DER penetration in distribution grids, proper monitoring with current sensor capability is needed. As topology, parameter, and active controller information is usually insufficient in some primary distribution grids and many secondary distribution grids, it is hard to apply transmission grid monitoring tools directly. In this paper, we propose a data-driven approach to recover the mapping rules of the power flow equations in distribution grids, exploiting the flexibility of kernel design in support vector regression. We prove that the data-driven SVR method matches the traditional physical law-based regression method exactly when measurements are perfect. This property resolves overfitting, a typical drawback of data-fitting methods. Numerical results show that our method is robust when there are outliers in the historical measurements and when only partial measurements are available. These advantages make our proposed method promising for application in distribution grids, and it can serve as the basis for other power flow-based applications, such as state estimation and optimal power flow.

IEEEtran
http://arxiv.org/abs/1702.07948v2
{ "authors": [ "Jiafan Yu", "Yang Weng", "Ram Rajagopal" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170225211354", "title": "Mapping Rule Estimation for Power Flow Analysis in Distribution Grids" }
^1Dipartimento di Fisica "E. Pancini", Università di Napoli “Federico II”, Complesso Universitario di Monte Sant'Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy ^2Istituto Nazionale di Fisica Nucleare (INFN) Sezione di Napoli, Complesso Universitario di Monte Sant'Angelo, Edificio G, Via Cinthia, I-80126, Napoli, Italy ^3 Gran Sasso Science Institute, Viale F. Crispi, 7, I-67100, L'Aquila, Italy ^4Dipartimento di Fisica E.R. Cainaiello, University of Salerno, Via Giovanni Paolo II, I 84084-Fisciano (SA), Italy ^5INFN, Gruppo Collegato di Salerno, Sezione di Napoli, Via Giovanni Paolo II, I 84084-Fisciano (SA), Italy ^6Department of Physics, National Technical University of Athens, Zografou Campus GR 157 73, Athens, Greece ^7CASPER, Physics Department, Baylor University, Waco, TX 76798-7310, USA

We use BBN observational data on the primordial abundance of ^4He to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those acquired using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of the parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.

04.50.Kd, 98.80.-k, 98.80.Es, 26.35.+c

Constraining f(T) teleparallel gravity by Big Bang Nucleosynthesis

S. Capozziello^1,2,3, G. Lambiase^4,5, and E.N. Saridakis^6,7

December 30, 2023
=====================================================================

§ INTRODUCTION

Cosmological observations coming from Type Ia Supernovae <cit.>, cosmic microwave background radiation <cit.> and the large scale structure <cit.> provide evidence that the Universe is currently in an accelerating phase.
This result is, in general, ascribed to the existence of a sort of dark energy (DE) sector in the Universe, an exotic energy source characterized by a negative pressure. At late times, the dark-energy sector eventually dominates over the cold dark matter (CDM), and drives the Universe to the observed accelerating expansion. The simplest candidate for DE is the cosmological constant Λ, which has an equation-of-state parameter w=-1. Although this model is in agreement with current observations, it is plagued by some difficulties related to the small observational value of the DE density with respect to the expected one arising from quantum field theories (the well-known cosmological constant problem <cit.>). Moreover, the ΛCDM paradigm, where cold dark matter (CDM) is brought into the game, may also suffer from the age problem, as was shown in <cit.>, while present data seem to slightly favor an evolving DE with the equation-of-state parameter crossing w=-1 from above to below in the near cosmological past <cit.>. Over the past decade several DE models have been proposed, such as quintessence <cit.>, phantom <cit.>, k-essence <cit.>, tachyon <cit.>, quintom <cit.>, Chaplygin gas <cit.>, generalized Chaplygin gas (GCG) <cit.>, holographic DE <cit.>, new agegraphic DE <cit.>, Ricci DE <cit.>, etc.
On the other hand, there are also numerous models that induce an effective dark energy arising from modifications of the gravitational sector itself, such as f(R) gravity <cit.> (this class is very efficient in verifying observational and theoretical constraints and in explaining the Universe acceleration and phantom crossing <cit.>), gravity with higher curvature invariants <cit.>, coupling the Ricci scalar to a scalar field <cit.>, introducing a vector field contribution <cit.>, or using properties of gravity in higher-dimensional spacetimes <cit.> (for a review see <cit.>). A possibility that can be explored to explain the accelerated phase of the Universe is to consider a theory of gravity based on the Weitzenböck connection instead of the Levi-Civita one, so that the gravitational field is described by the torsion tensor instead of the curvature tensor. In such theories, the torsion tensor is built from products of first derivatives of tetrad fields, and hence no second derivatives appear. This Teleparallel approach <cit.> is closely related to General Relativity, except for “boundary terms” <cit.> that involve total derivatives in the action, and thus one can construct the Teleparallel Equivalent of General Relativity (TEGR), which is completely equivalent to General Relativity at the level of equations but is based on torsion instead of curvature. Teleparallel gravity possesses a number of attractive features related to geometrical and physical aspects <cit.>. Hence, one can start from TEGR and construct various gravitational modifications based on torsion, with f(T) gravity being the most studied one <cit.>. In particular, it may represent an alternative to inflationary models without the use of the inflaton, as well as to effective DE models, in which the Universe acceleration is driven by the extra torsion terms <cit.> (for a detailed review, see <cit.>).
The main advantage of f(T) gravity is that the field equations are second-order, a property that makes these theories simpler compared to the dynamical equations of other extended theories of gravity, such as f(R) gravity. The aim of this paper is to explore the implications of f(T) gravity for the formation of light elements in the early Universe, i.e., for Big Bang Nucleosynthesis (BBN). In other words, we want to explore the possibility of constraining f(T) gravity by BBN observational data. BBN occurred between the first fractions of a second after the Big Bang, around ∼ 0.01 sec, and a few hundred seconds after it, when the Universe was hot and dense (indeed BBN, together with the cosmic microwave background radiation, provides strong evidence of the high temperatures characterizing the primordial Universe). It describes the sequence of nuclear reactions that yielded the synthesis of light elements <cit.>, and therefore shaped the observed Universe. In general, from BBN physics one may infer stringent constraints on a given cosmological model. Hence, in this work we shall confront various f(T) gravity models with BBN calculations based on current observational data on the primordial abundance of ^4He, and we shall extract constraints on their free parameters. The layout of the paper is as follows. In Section <ref> we review f(T) gravity and the related cosmological models. In Section <ref> we use BBN calculations in order to impose constraints on the free parameters of specific f(T) gravity models. Conclusions are reported in Section <ref>. Finally, in the Appendix we summarize the main notions of BBN physics.

§ F(T) GRAVITY AND COSMOLOGY

Let us briefly review f(T) gravity, and apply it in a cosmological framework. In this formulation, the dynamical variable is the vierbein field e_i(x^μ), i = 0, 1, 2, 3, which forms an orthonormal basis in the tangent space at each point x^μ of the manifold, i.e.
e_i · e_j=η_ij, with η_ij the Minkowski metric with signature -2: η_ij=diag(1,-1,-1,-1). Denoting by e^μ_i, μ=0,1,2,3 the components of the vectors e_i in a coordinate basis ∂_μ, one can write e_i=e^μ_i∂_μ. As a convention, here we use Latin indices for the tangent space, and Greek indices for the coordinates on the manifold. The dual vierbein allows one to obtain the metric tensor of the manifold, namely g_μν(x)=η_ij e^i_μ(x)e^j_ν(x). In teleparallel gravity, one adopts the curvatureless Weitzenböck connection (contrary to General Relativity, which is based on the torsion-less Levi-Civita connection), which gives rise to the non-null torsion tensor: T^λ_μν=Γ̂^λ_νμ-Γ̂^λ_μν =e^λ_i(∂_μ e^i_ν - ∂_ν e^i_μ). Remarkably, the torsion tensor (<ref>) encompasses all the information about the gravitational field. The Lagrangian density is built using its contractions, and hence the teleparallel action is given by I = 1/16π G∫ d^4x e T , with e=det(e^i_μ)=√(-g), and where the torsion scalar T reads as T=S_ρ^μνT^ρ_μν . Here, it is S_ρ^μν = 1/2(K^μν_ρ+δ^μ_ρT^θν_θ-δ^ν_ρT^θμ_θ) K^μν_ρ =-1/2(T^μν_ρ-T^νμ_ρ-T_ρ^μν) , with K^μν_ρ the contorsion tensor, which gives the difference between the Weitzenböck and Levi-Civita connections. Finally, the variation of action (<ref>) with respect to the vierbeins gives rise to the field equations, which coincide with those of General Relativity. That is why the above theory is called the Teleparallel Equivalent of General Relativity (TEGR). One can now start from TEGR, and generalize action (<ref>) in order to construct gravitational modifications based on torsion. The simplest scenario is to consider a Lagrangian density that is a function of T, namely I = 1/16π G∫d^4x e[T+f(T)], which reduces to TEGR for f(T)=0.
Considering additionally a matter Lagrangian L_m, variation with respect to the vierbein gives the field equations <cit.> e^-1∂_μ(e e_i^ρS_ρ^μν)[1+f']-e_i^λT^ρ_μλS_ρ^νμ[1+f']+e^ρ_i S_ρ^μν(∂_μ T)f” + 1/4e^ν_i [T+f]=4π G e_i^ρ Θ_ρ^ν , where f'≡ df/dT, S_i^μν=e_i^ρS_ρ^μν and Θ_μν is the energy-momentum tensor of the matter sector. In order to explore the cosmological implications of f(T) gravity, we focus on a homogeneous and isotropic geometry, considering the usual choice for the vierbeins, namely e_μ^A= diag(1,a,a,a), which corresponds to a flat Friedmann-Robertson-Walker (FRW) background metric of the form ds^2= dt^2-a^2(t) δ_ij dx^i dx^j, where a(t) is the scale factor. Equations (<ref>), (<ref>), (<ref>) and (<ref>) allow one to derive a relation between the torsion T and the Hubble parameter H=ȧ/a, namely T=-6H^2. Hence, in the case of FRW geometry, and assuming that the matter sector corresponds to a perfect fluid with energy density ρ and pressure p, the i=0=ν component of (<ref>) yields 12H^2[1+f']+[T+f]=16π Gρ, while the i=1=ν component gives 48H^2f”Ḣ-(1+f')[12H^2+4Ḣ]-(T-f)=16π Gp. The system is closed by the continuity equation for the matter sector, namely ρ̇+3H(ρ+p)=0. One can rewrite (<ref>) and (<ref>) in the usual form H^2=8π G/3(ρ+ρ_T), 2Ḣ+3H^2=-8π G/3(p+p_T) , where ρ_T= 3/8π G[Tf'/3-f/6], p_T = 1/16π G [f-T f'+ 2T^2f”]/[1+f'+ 2Tf”] , are the effective energy density and pressure arising from torsional contributions.
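The key kinematic relation T=-6H^2 can be checked symbolically from the definitions above (torsion tensor, contorsion, superpotential) for the diagonal FRW tetrad. The sketch below uses sympy and exploits the fact that the metric is diagonal when raising indices; variable names are ours:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
a = sp.Function('a', positive=True)(t)

E = sp.diag(1, a, a, a)     # tetrad e^i_mu (rows: tangent i, cols: spacetime mu)
Einv = E.inv()              # inverse tetrad, Einv[mu, i] = e_i^mu
g = sp.diag(1, -a**2, -a**2, -a**2)   # FRW metric, signature (+,-,-,-)
ginv = g.inv()
rng = range(4)

# Torsion tensor T^lam_{mu nu} = e_i^lam (d_mu e^i_nu - d_nu e^i_mu)
Tor = [[[sum(Einv[lam, i]*(sp.diff(E[i, nu], coords[mu])
                           - sp.diff(E[i, mu], coords[nu])) for i in rng)
         for nu in rng] for mu in rng] for lam in rng]

# Fully covariant torsion T_{lam mu nu}; with a diagonal metric an index
# is raised by multiplying with the matching entry of ginv.
Td = [[[sum(g[lam, s]*Tor[s][mu][nu] for s in rng)
        for nu in rng] for mu in rng] for lam in rng]

T_uud = lambda m, n, r: ginv[m, m]*ginv[n, n]*Td[m][n][r]   # T^{mu nu}_rho
T_duu = lambda r, m, n: ginv[m, m]*ginv[n, n]*Td[r][m][n]   # T_rho^{mu nu}

# Contorsion K^{mu nu}_rho and superpotential S_rho^{mu nu} as in the text
K = lambda m, n, r: -sp.Rational(1, 2)*(T_uud(m, n, r) - T_uud(n, m, r)
                                        - T_duu(r, m, n))
trace = lambda n: sum(ginv[th, th]*ginv[n, n]*Td[th][n][th] for th in rng)
S = lambda r, m, n: sp.Rational(1, 2)*(K(m, n, r)
                                       + (trace(n) if m == r else 0)
                                       - (trace(m) if n == r else 0))

# Torsion scalar T = S_rho^{mu nu} T^rho_{mu nu}
Tscalar = sp.simplify(sum(S(r, m, n)*Tor[r][m][n]
                          for r in rng for m in rng for n in rng))
H = sp.diff(a, t)/a
```

Running this yields Tscalar = -6 (ȧ/a)^2, i.e., T = -6H^2, as stated.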
One can therefore define the effective torsional equation-of-state parameter as ω_T≡p_T/ρ_T= -(f-T f'+2T^2f”)/[(1+f'+ 2 T f”)(f-2Tf')] . In these classes of theories, the above effective torsional terms are responsible for the accelerated phases of the early or/and late Universe <cit.>. Let us now present three specific f(T) forms, which are the viable ones amongst the variety of f(T) models with two parameters, out of which one is independent, i.e., which pass the basic observational tests <cit.>.

* The power-law model by Bengochea and Ferraro (hereafter f_1CDM) <cit.> is characterized by the form f(T) = β |T|^n, where β and n are the two model parameters. Inserting this f(T) form into Friedmann equation (<ref>) at present, we acquire β=(6H_0^2)^1-nΩ_m0/(2n-1), where Ω_m0=8π G ρ_m/3H_0^2 is the matter density parameter at present, and H_0=73.02± 1.79∼2.1 × 10^-42 is the current value of the Hubble parameter. The best fit on the parameter n is obtained using the CC+H_0+SNeIa+BAO observational data, and it reads <cit.> n=0.05536 . Clearly, for n=0 the present scenario reduces to ΛCDM cosmology, namely T+f(T)=T-2Λ, with Λ=-β/2.

* The Linder model (hereafter f_2CDM) <cit.> arises from f(T)=α T_0(1-e^-p√(T/T_0)), p=1/b , with α and p (b) the two model parameters. In this case (<ref>) gives that α=Ω_m0/[1-(1+p)e^-p] . The CC+H_0+SNeIa+BAO observational data imply that the best fit of b is <cit.> b=0.04095 . As we can see, for p → +∞ the present scenario reduces to ΛCDM cosmology.

* Motivated by exponential f(R) gravity <cit.>, Bamba et al. introduced the following f(T) model (hereafter f_3CDM) <cit.>: f(T)=α T_0(1-e^-pT/T_0), p=1/b , with α and p (b) the two model parameters. In this case we acquire α=Ω_m0/[1-(1+2p)e^-p]. For this model, and using CC+H_0+SNeIa+BAO observational data, the best fit is found to be <cit.> b=0.03207 . Similarly to the previous case, we can immediately see that the f_3CDM model tends to ΛCDM cosmology for p → +∞.
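As a quick consistency check of the effective equation of state above, a constant f (the ΛCDM limit, f = -2Λ) must give ω_T = -1. A sympy sketch (symbol names are ours):

```python
import sympy as sp

T = sp.symbols('T')

def omega_T(fT):
    """Effective torsional equation of state
    w_T = -(f - T f' + 2 T^2 f'') / [(1 + f' + 2 T f'')(f - 2 T f')]."""
    fp = sp.diff(fT, T)
    fpp = sp.diff(fT, T, 2)
    return -(fT - T*fp + 2*T**2*fpp) / ((1 + fp + 2*T*fpp)*(fT - 2*T*fp))

# Constant f(T) = -2*Lambda acts as a cosmological constant: w_T = -1
Lam = sp.symbols('Lambda', positive=True)
w_const = sp.simplify(omega_T(-2*Lam))
```

This recovers w_const = -1, confirming that the n = 0 (or p → +∞) limits of the three models above reduce to ΛCDM.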
The above f(T) models are considered viable in the literature because they pass the basic observational tests <cit.>. They are characterized by two free parameters. Actually, there are two more models with two free parameters, namely the logarithmic model <cit.>, f(T)=α T_0√(T/cT_0)ln(cT_0/T ) , and the hyperbolic-tangent model <cit.>, f(T)=α(-T)^ntanh(T_0/T) . Nevertheless, since these two models do not possess ΛCDM cosmology as a limiting case, and since they are in tension with observational data <cit.>, in this work we do not consider them. Finally, let us note that one could also construct f(T) models with more than two parameters, for example by combining the above scenarios. However, considering many free parameters would be a significant disadvantage concerning the corresponding values of the information criteria.

§ BIG BANG NUCLEOSYNTHESIS IN F(T) COSMOLOGY

In this Section, we examine BBN in the framework of f(T) cosmology. As is well known, BBN occurs during the radiation-dominated era. The energy density of the relativistic particles filling up the Universe is given by ρ=π^2/30g_*T^4, where g_*∼ 10 is the effective number of degrees of freedom and T the temperature (in the Appendix we review the main features of BBN physics). The neutron abundance is computed via the conversion rate of neutrons into protons, namely λ_np( T)=λ_n+ν_e→ p+e^-+λ_n+e^+→ p+ν̅_e+λ_n→ p+e^- + ν̅_e , and its inverse λ_pn( T). The relevant quantity is the total rate, given by Λ( T)=λ_np( T)+λ_pn( T) . Explicit calculations of Eq. (<ref>) lead to (see (<ref>) in the Appendix) Λ( T) =4 AT^3(4!T^2+2× 3!Q T+2!Q^2) , where Q=m_n-m_p is the neutron-proton mass difference, and A=1.02 × 10^-11GeV^-4.
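The total rate above is easy to evaluate numerically. The small sketch below checks the high-temperature behavior Λ ≈ 4·4!·A T^5 = 96 A T^5, which is the scaling used later through the freeze-out condition Λ ∝ T^5 (the value Q ≈ 1.293 MeV is the standard neutron-proton mass difference; its exact value does not affect the check, only the ratio T/Q matters):

```python
def total_rate(T, Q=1.293e-3, A=1.02e-11):
    """Total n <-> p conversion rate
    Lambda(T) = 4 A T^3 (4! T^2 + 2*3! Q T + 2! Q^2),
    with T and Q in GeV and A in GeV^-4, as in the text."""
    return 4*A*T**3*(24*T**2 + 12*Q*T + 2*Q**2)

# At temperatures well above Q the rate approaches 96*A*T^5
T_hot = 100 * 1.293e-3          # 100 Q, in GeV
ratio = total_rate(T_hot) / (96 * 1.02e-11 * T_hot**5)
```

At T = 100 Q the full expression already agrees with the 96 A T^5 scaling to better than 1%.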
The primordial mass fraction of ^4He can be estimated by making use of the relation <cit.> Y_p≡ 2λ x(t_f)/[1+x(t_f)] . Here λ=e^-(t_n-t_f)/τ, with t_f the time of the freeze-out of the weak interactions, t_n the time of the freeze-out of the nucleosynthesis, τ the neutron mean lifetime given in (<ref>), and x(t_f)=e^- Q/ T(t_f) the neutron-to-proton equilibrium ratio. The function λ(t_f) is interpreted as the fraction of neutrons that decay into protons during the interval t∈ [t_f, t_n]. Deviations from the fractional mass Y_p due to the variation of the freeze-out temperature T_f are given by δ Y_p=Y_p[(1-Y_p/2λ)ln(2λ/Y_p -1)-2t_f/τ] δ T_f/ T_f , where we have set δ T(t_n)=0, since T_n is fixed by the deuterium binding energy <cit.>. The experimental estimations of the mass fraction Y_p of baryons converted to ^4He during Big Bang Nucleosynthesis are <cit.> Y_p=0.2476 , |δ Y_p| < 10^-4 . Inserting these into (<ref>), one infers the upper bound on δ T_f/ T_f, namely |δ T_f/ T_f| < 4.7 × 10^-4 . During BBN, in the radiation-dominated era, the scale factor evolves as a∼ t^1/2, where t is cosmic time. The torsional energy density ρ_T is treated as a perturbation to the radiation energy density ρ. The relation between cosmic time and temperature is given by 1/t≃ (32π^3 g_*/90)^1/2 T^2/M_P (or T(t)≃ (t/sec)^-1/2 MeV). Furthermore, we use entropy conservation, S∼ a^3T^3=constant. The expansion rate of the Universe is derived from (<ref>), and can be rewritten in the form H = H_GR√(1+ρ_T/ρ)=H_GR+δ H , δ H = (√(1+ρ_T/ρ)-1)H_GR , where H_GR=√(8π Gρ/3) is the expansion rate of the Universe in General Relativity. Thus, from the relation Λ= H, one derives the freeze-out temperature T= T_f(1+δ T_f/ T_f), with T_f∼ 0.6 MeV (which follows from H_GR≃ qT^5) and (√(1+ρ_T/ρ)-1)H_GR = 5qT_f^4 δ T_f , from which, in the regime ρ_T≪ρ, one obtains: δ T_f/ T_f≃ (ρ_T/ρ) H_GR/(10 qT_f^5) , with q=4! A≃ 9.6× 10^-36GeV^-4.
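The last first-order expression can be recovered symbolically from the linearized freeze-out condition. A sympy sketch (symbol names are ours; r stands for ρ_T/ρ):

```python
import sympy as sp

# r = rho_T/rho (treated as small), H = H_GR, q and T_f as in the text
r, q, Tf, H, d = sp.symbols('r q T_f H_GR delta', positive=True)

# Linearized freeze-out condition: 5 q T_f^4 (T_f * d) = (sqrt(1+r) - 1) H_GR,
# with d = delta T_f / T_f
delta = sp.solve(sp.Eq(5*q*Tf**4*(Tf*d), (sp.sqrt(1 + r) - 1)*H), d)[0]

# Expand to first order in r = rho_T/rho << 1
delta_lin = sp.series(delta, r, 0, 2).removeO()
```

The expansion yields delta_lin = r·H_GR/(10 q T_f^5), i.e., exactly δT_f/T_f ≃ (ρ_T/ρ) H_GR/(10 q T_f^5).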
In what follows we shall investigate the bounds on the free parameters of the three f(T) models presented in the previous Section that arise from the BBN constraints. These constraints will be determined using Eqs. (<ref>) and (<ref>). Moreover, we shall use the numerical values Ω_m0=0.25 , T_0=2.6× 10^-13 , where T_0 is the present value of the CMB temperature.

* f_1CDM model. For the f_1CDM model of (<ref>), relation (<ref>) gives ρ_T= 1/16π G[β (2n-1)(|6H^2|)^n] = 3H_0^2/8π G Ω_m0( T/ T_0)^4n , and then (<ref>) yields δ T_f/ T_f=π/15√(π g_*/5) Ω_m0( T_f/ T_0)^4(n-1)1/qM_Pl T_f^3 . In Fig. <ref> we depict δ T_f/ T_f from (<ref>) vs n, as well as the upper bound from (<ref>). As we can see, the constraints from BBN require n≲ 0.94. Remarkably, this bound is in agreement with the best fit for n of (<ref>), namely n=0.05536, that was obtained using CC+H_0+SNeIa+BAO observational data in <cit.>.

* f_2,3CDM models. In the case of the f_2CDM model of (<ref>) and the f_3CDM model of (<ref>), and for the purpose of this analysis, we can unify their investigation by parameterizing them as f(T)= α T_0 [1-e^-p(T/T_0)^m] , with α = Ω_m0/[1-(1+2mp)e^-p] , where m=1/2 for model f_2CDM and m=1 for model f_3CDM. Inserting (<ref>) into (<ref>) we acquire δ T_f/ T_f=2πα/15√(π g_*/5)( T_0/ T_f)^41/qM_PT_f^3·{[mp( T_0/ T_f)^4m+1/2]e^-p( T_f/ T_0)^4m -1/2}. Hence, using this relation we can calculate the value of |δ T_f/ T_f| for various values of p=1/b that span the order of magnitude of the best-fit values (<ref>) and (<ref>) obtained using CC+H_0+SNeIa+BAO observational data in <cit.>, and we present our results in Table <ref>. As we can see, in all cases the value of |δ T_f/ T_f| is well below the BBN bound (<ref>). Hence, BBN cannot impose constraints on the parameter values of the f_2CDM and f_3CDM models.

§ CONCLUSIONS

In this work we have investigated the implications of f(T) gravity for the formation of light elements in the early Universe, i.e., for BBN.
In particular, we have examined the three most used and well-studied viable f(T) models, namely the power law, the exponential and the square-root exponential, and we have confronted them with BBN calculations based on current observational data on the primordial abundance of ^4He. Hence, we were able to extract constraints on their free parameters. Concerning the power-law f(T) model, the obtained constraint on the exponent n is n≲ 0.94. Remarkably, this bound is in agreement with the constraints obtained using CC+H_0+SNeIa+BAO observational data <cit.>. Concerning the exponential and the square-root exponential models, we showed that, for realistic regions of the free parameters, they always satisfy the BBN bounds. This means that, in these cases, BBN cannot impose strict constraints on the values of the free parameters. In summary, we showed that viable f(T) models, namely those that pass the basic observational tests, can also satisfy the BBN constraints. This feature acts as an additional advantage of f(T) gravity, which might be a successful candidate for describing the gravitational interaction. As discussed in <cit.>, this kind of constraint could contribute to the debate on fixing the most realistic picture, based on either curvature or torsion. This article is based upon work from COST Action CA15117 “Cosmology and Astrophysics Network for Theoretical Advances and Training Actions” (CANTATA), supported by COST (European Cooperation in Science and Technology).

*§ BIG BANG NUCLEOSYNTHESIS

In this Appendix we briefly review the main features of Big Bang Nucleosynthesis, following <cit.>. In the early Universe, the primordial ^4He was formed at temperature T∼ 100 MeV. The energy and number densities were dominated by relativistic leptons (electrons, positrons and neutrinos) and photons. Rapid collisions maintained all these particles in thermal equilibrium.
Protons and neutrons were kept in thermal equilibrium by means of their interactions with leptons ν_e+n⟷p+e^- e^++n⟷p + ν̅_e n ⟷p+e^- + ν̅_e . The neutron abundance is estimated by computing the conversion rate of protons into neutrons, i.e. λ_pn( T), and its inverse λ_np( T). Thus, the total weak interaction rate (at suitably high temperature) is given by Λ( T)=λ_np( T)+λ_pn( T) . The rate λ_np is the sum of the rates associated with the processes (<ref>)-(<ref>), namely λ_np=λ_n+ν_e→ p+e^-+λ_n+e^+→ p+ν̅_e+λ_n→ p+e^- + ν̅_e . Finally, the rate λ_pn is related to the rate λ_np as λ_pn( T)=e^- Q/ Tλ_np( T), with Q=m_n-m_p the mass difference of neutron and proton. During the freeze-out stage, one can use the following approximations <cit.>: (i) The temperatures of the particles are the same, i.e. T_ν= T_e= T_γ= T. (ii) The temperature T is lower than the typical energies E that contribute to the integrals entering the definition of the rates (one can therefore replace the Fermi-Dirac distribution with the Boltzmann one, namely n≃ e^-E/ T). (iii) The electron mass m_e can be neglected with respect to the electron and neutrino energies (m_e≪ E_e, E_ν). Having these in mind, the interaction rate corresponding to the process (<ref>) is given by dλ_n+ν_e→ p+e^-= dμ (2π)^4 ⟨|M|^2⟩ W, where dμ ≡ d^3p_e/(2π)^3 2E_ed^3p_ν_e/(2π)^3 2E_ν_ed^3p_ p/(2π)^3 2E_p , W≡ δ^(4)( P)n(E_ν_e)[1-n(E_e)] , P ≡ p_n+p_ν_e-p_p-p_e ,M =(g_w/8M_W)^2 [u̅_pΩ^μ u_n][u̅_eΣ_μ v_ν_e] ,Ω^μ ≡ γ^μ(c_V-c_A γ^5) ,Σ^μ ≡ γ^μ(1-γ^5). In (<ref>) we have used the condition q^2 ≪ M_W^2, where M_W is the mass of the vector gauge boson W, with q^μ=p_n^μ-p_p^μ the transferred momentum. From Eq.
(<ref>) it follows that λ_n+ν_e→ p+e^-=A T^5 I_y , where A≡g_V+3g_A/2π^3 , and where I_y=∫_y^∞ϵ(ϵ- Q')^2√(ϵ^2-y^2)n(ϵ- Q')[1-n(ϵ)]dϵ, with y≡m_e/ T ,Q'= Q/ T . A similar calculation for the process (<ref>) gives λ_e^+ + n → p+ ν̅_e=A T^5 J_y , with J_y=∫_y^∞ϵ(ϵ+ Q')^2√(ϵ^2-y^2)n(ϵ)[1-n(ϵ+ Q')]dϵ , which finally results in λ_e^+ + n→ p+ν̅_e=AT^3(4!T^2+2× 3!Q T+2!Q^2) . Lastly, for the neutron decay (<ref>) one obtains τ=λ_n→ p+e^- +ν̅_e^-1≃ 887 sec . Hence, in the calculation of (<ref>) we can safely neglect the above interaction rate of the neutron decay, i.e. during BBN the neutron can be considered a stable particle. The above approximations (i)-(iii) lead to <cit.> λ_e^+ +n→ p+ν̅_e=λ_n+ν_e→ p+e^- . Thus, inserting (<ref>) into (<ref>), and then into (<ref>), allows one to derive the expression for Λ( T), namely Λ( T)≃ 2λ_np=4λ_e^+ +n→ p+ν̅_e , which using (<ref>) leads to Λ( T) =4 AT^3(4!T^2+2× 3!Q T+2!Q^2) . 99Riess A. G. Riess, et al., Astron. J.116, 1009 (1998). S. Perlmutter, et al., Astrophys. J.517, 565 (1999). Spergel D. N. Spergel, et al., ApJS.148, 175 (2003). D. N. Spergel, et al., ApJS.170, 377 (2007). Tegmark M. Tegmark, et al., Phys. Rev. D69, 103501 (2004). Eisenstein D. J. Eisenstein, et al., Astrophys. J.633, 560 (2005). Carroll S. M. Carroll, Living Rev. Rel.4, 1 (2001). E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D.15, 1753 (2006). Yang R. J. Yang and S. N. Zhang, Mon. Not. R. Astron. Soc.407, 1835 (2010). FengB. Feng, X. L. Wang andX. M. Zhang, Phys. Lett. B.607, 35 (2005). CaldwellR. R. Caldwell,R. Dave andR. J. Steinhardt, Phys. Rev. Lett.80, 1582 (1998). Caldwell2R. R. Caldwell, Phys. Lett. B.545, 23 (2002). ArmendarizC. Armendariz-Picon,V. Mukhanov andP. J. Steinhardt, Phys. Rev. D63 , 103510 (2001). Padmanabhan T. Padmanabhan, Phys. Rev. D.66, 021301 (2002). A. Sen, Phys. Scripta. T.117, 70 (2005). Cai:2009zp Y. F. Cai, E. N. Saridakis, M. R. Setare and J. Q. Xia,Phys. Rept.493, 1 (2010).eliE. Elizalde, S. Nojiri andS. D.
Odintsov, Phys. Rev. D.70, 043539 (2004). KamenshchikA. Kamenshchik, U. Moschella and V. Pasquier, Phys. Lett. B.511, 265 (2001). BentoM. C. Bento, O. Bertolami and A. A. Sen, Phys. Rev. D.66, 043507 (2002).CohenA. G. Cohen, D. B. Kaplan and A. E. Nelson, Phys. Rev. Lett.82, 4971 (1999).Li2004 M. Li, Phys. Lett. B.603, 1 (2004). Wei H. Wei and R. G. Cai, Phys. Lett. B.660, 113 (2008).Gao C. Gao, F. Wu, X. Chen and Y. G. Shen, Phys. Rev. D.79, 043511 (2009).capfra S. Capozziello, M. Francaviglia , Gen. Rel. Grav. 40, 357 (2008).NojiriS. Nojiri and S.D. Odintsov, Phys. Rept.505, 59 (2011).book S. CapozzielloandV. Faraoni,Beyond Einstein Gravity, Fundamental Theoriesof Physics, Vol. 170, Springer Ed., Dordrecht(2011).fRrev A. De Felice and S. Tsujikawa, Living Rev. Rel.13, 3 (2010). noj1 S. Nojiri and S. D. Odintsov, Phys. Rev. D.68, 123512 (2003).noj2 S. Nojiri and S. D. Odintsov, Phys. Rev. D.74, 086005 (2006).noj4 M. C. B. Abdalla, S. Nojiri and S. D. Odintsov,Class. Quant. Grav.22, L35 (2005).noj5 S. Nojiri and S. D. Odintsov, Phys. Rev. D.77, 026007 (2008).Mota A. De Felice, D. F. Mota and S. Tsujikawa, Phys. Rev. D.81, 023532 (2010).Farajollahi H. Farajollahi, M. Farhoudi and H. Shojaie, Int. J. Theor. Phy.49, 2558 (2010).Zuntz J. Zuntz, T. G. Zlosnik, F. Bourliot, P. G. Ferreira and G. D. Starkman, Phys. Rev.D81, 104015 (2010). Camera M. La Camera, Mod. Phys. Lett. A.25, 781 (2010). noj3 S. Nojiri, and S.D. Odintsov, Int. J. Geom. Meth. Mod. Phys.4, 115 (2007).Capozziello:2011et S. Capozziello and M. De Laurentis, Phys. Rept.509, 167 (2011).Einstein A. Einstein, Sitz. Preuss. Akad. Wiss. p. 217; ibid p. 224(1928). Einstein2 A. Einstein (2005), translations of Einstein papers by A. Unzicker and T. Case, (arXiv:physics/0503046). seb1 S. Bahamonde andC. G. Böhmer,Eur.Phys.J. C76, 578 (2016).seb2 S. Bahamonde andS. Capozziello,Eur.Phys.J. C77,107 (2017). Hayashi K. Hayashi and T. Shirafuji, Phys. Rev. D19, 3524 (1979); Addendum-ibid. D24,3312 (1982). Pereira.bookR. 
Aldrovandi, J.G. Pereira, Teleparallel Gravity: An Introduction, Springer, Dordrecht, 2013.Maluf:2013gaa J. W. Maluf,Annalen Phys.525, 339 (2013).krssak M. Krsaak, Emmanuel N. Saridakis,Class.Quant.Grav.33, 115009 (2016) Ferraro2 R. Ferraro and F. Fiorini, Phys. Rev. D.75, 084031 (2007).Ferraro3 R. Ferraro and F. Fiorini, Phys. Rev. D78, 124019 (2008). Linder:2010py E. V. Linder, Phys. Rev.D81, 127301 (2010).Wu1 P. Wu and H. Yu, Phys. Lett. B.693, 415 (2010).Chen:2010va S. H. Chen, J. B. Dent, S. Dutta and E. N. Saridakis, Phys. Rev. D83, 023508 (2011).pertftR. Zheng and Q. G. Huang, JCAP1103, 002 (2011).Dent:2011zz J. B. Dent, S. Dutta, E. N. Saridakis, JCAP1101, 009 (2011). Bamba:2010wb K. Bamba, C. Q. Geng, C. C. Lee and L. W. Luo, JCAP1101, 021 (2011).Wu P. Wu and H. Yu, Eur. Phys. J. C.71, 1552 (2011). Zhang:2011qp Y. Zhang, H. Li, Y. Gong and Z. H. Zhu,JCAP1107, 015 (2011).Bengochea G. R. Bengochea, Phys. Lett. B695 (2011).Yang2 R. J. Yang,Eur. Phys. J. C71, 1797 (2011).Cai11 Y. F. Cai, S. H. Chen, J. B. Dent, S. Dutta and E. N. Saridakis,Class. Quant. Grav. 28, 215011 (2011).Li:2011rn M. Li, R. X. Miao and Y. G. Miao, JHEP1107, 108 (2011). Bamba:2011pz K. Bamba and C. Q. Geng,JCAP1111, 008 (2011).Daouda:2011rtM. H. Daouda, M. E. Rodrigues and M. J. S. Houndjo, Eur. Phys. J. C72, 1890 (2012). Atazadeh:2011aa K. Atazadeh and F. Darabi,Eur.Phys.J. C72, 2016 (2012). Karami:2012fu K. Karami and A. Abdolmaleki,JCAP 1204 (2012) 007.Cardone:2012xq V. F. Cardone, N. Radicella and S. Camera,Phys. Rev. D85, 124007 (2012).Otalora:2013tba G. Otalora,JCAP1307, 044 (2013). Ong:2013qjaY. C. Ong, K. Izumi, J. M. Nester and P. Chen, Phys. Rev. D88 (2013) 2,024019. Haro:2014wha J. Haro and J. Amoros,JCAP1412 (2014) 12, 031. Harko:2014sja T. Harko, F. S. N. Lobo, G. Otalora and E. N. Saridakis, Phys. Rev. D89, 124036 (2014). Hanafy:2014ica W. El Hanafy and G. G. L. Nashed, Eur. Phys. J. C75, 279 (2015).Capozziello:2015rda S. Capozziello, O. Luongo and E. N. Saridakis, Phys. 
Rev. D91 (2015) 12,124037. Bahamonde:2015zma S. Bahamonde, C. G. Böhmer and M. Wright,Phys. Rev. D92, 104042 (2015). Carloni:2015lsa S. Carloni, F. S. N. Lobo, G. Otalora and E. N. Saridakis, Phys. Rev. D93, 024034 (2016). Fazlpour:2016bxo B. Fazlpour, Gen. Rel. Grav.48, no. 12, 159 (2016).Cai:2015emx Y. F. Cai, S. Capozziello, M. De Laurentis and E. N. Saridakis,Rept. Prog. Phys.79, no. 10, 106901 (2016).kolb E.W. Kolb, M.S. Turner,The Early Universe , Addison Wesley Publishing Company, (1989).bernstein J. Bernstein, L.S. Brown, G. Feinberg, Rev. Mod. Phys.61, 25 (1989). Nesseris:2013jea S. Nesseris, S. Basilakos, E. N. Saridakis and L. Perivolaropoulos, Phys. Rev. D88, 103010 (2013).TJCAP16R. C. Nunes, S. Pan and E. N. Saridakis, JCAP1608, no. 08, 011 (2016). Linder:2009jz E. V. Linder,Phys. Rev. D80, 123528 (2009). torres D.F. Torres, H. Vucetich, A. Plastino, Phys. Rev. Lett.79, 1588 (1997).Lambiase1 G. Lambiase, Phys. Rev. D72, 087702 (2005). Lambiase2 G. Lambiase, JCAP 1210 (2012) 028. Lambiase3 G. Lambiase, Phys. Rev. D83 (2011) 107501. coc A. Cocet al., Astrophys. J.600, 544 (2004).altriBBN1K.A. Olive, E. Stillman, G. Steigman, Astrophys. J.483, 788 (1997).altriBBN2 Y. I. Izotov and T. X. Thuan, Astrophys. J.500, 188 (1998).altriBBN3B.D. Fields, K.A. Olive, Astrophys. J.506, 177 (1998).altriBBN4Y. I. Izotov, F. H. Chaffee, C. B. Foltz, R. F. Green, N. G. Guseva and T. X. Thuan, Astrophys. J.527, 757 (1999).altriBBN5 D. Kirkmanet al., Astrophys. J. Suppl. Ser.149, 1 (2003).altriBBN6Y. I. Izotov and T. X. Thuan, Astrophys. J.602, 200 (2004).Capozziello:2008qc S. Capozziello, V. F. Cardone and V. Salzano, Phys. Rev.D78,063504 (2008).blcc10 M. Bouhmadi - López, S. Capozziello, V.F. Cardone, Phys. Rev. D82, 103526 (2010).Stern D. Stern, R. Jimenez, L. Verde, S.A. Stanford, M. Kamionkowski, ApJS.188, 280 (2010).Cabre E. Gaztanaga, A. Cabré, L. Hui, MNRAS.399, 1663, (2009). Union2 R. Amanullah, C. Lidman, D. Rubin, G. Aldering, P. 
Astier et al., ApJ.716, 712 (2010).P10 W.J. Percival, B.A. Reid, D.J. Eisenstein, N.A. Bahcall, T. Budavari, et al., MNRAS401, 2148, (2010). shoes A.G. Riess, L. Macri, W. Li, H. Lampeitl, S. Casertano et al., ApJ.699, 539 (2009).Vitagliano:2009et V. Vitagliano, J. Q. Xia, S. Liberati and M. Viel,JCAP1003 (2010) 005 [arXiv:0911.1249 [astro-ph.CO]].XW10 L. Xu, Y. Wang, preprint arXiv :1009.0963, 2010 Enzo2011 S. Capozziello, R. Lazkoz, V. Salzano, Phys. Rev. D84, 124061 (2011).
http://arxiv.org/abs/1702.07952v1
{ "authors": [ "S. Capozziello", "G. Lambiase", "E. N. Saridakis" ], "categories": [ "astro-ph.CO", "gr-qc" ], "primary_category": "astro-ph.CO", "published": "20170225215036", "title": "Constraining f(T) teleparallel gravity by Big Bang Nucleosynthesis" }
espaulino@gmail.com Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Cuernavaca 62210, Mexico yochelis@bgu.ac.il Department of Solar Energy and Environmental Physics, Swiss Institute for Dryland Environmental and Energy Research, Blaustein Institutes for Desert Research (BIDR), Ben-Gurion University of the Negev, Sede Boqer Campus, 8499000 Midreshet Ben-Gurion, Israel

A generic, distinct mechanism for the emergence of spatially localized states embedded in an oscillatory background is demonstrated using a 2:1 frequency-locking oscillatory system. The localization is of Turing type and appears in two space dimensions as a comb-like state, either in π phase-shifted Hopf oscillations or inside a spiral core. Specifically, the localized states appear in the absence of the well-known flip-flop dynamics (associated with collapsed homoclinic snaking) that is known to arise in the vicinity of the Hopf-Turing bifurcation in one space dimension. Derivation and analysis of three Hopf-Turing amplitude equations in two space dimensions reveals a local-dynamics pinning mechanism for Hopf fronts, which in turn allows the emergence of Turing states perpendicular to the Hopf front. The results are shown to agree well with the comb-like core size that forms inside spiral waves. In the context of 2:1 resonance, these localized states form outside the 2:1 resonance region and thus extend the frequency-locking domain for spatially extended media, such as periodically driven Belousov-Zhabotinsky chemical reactions. Implications for chlorite-iodide-malonic-acid and shaken granular media are also addressed.
Comb-like Turing patterns embedded in Hopf oscillations: Spatially localized states outside the 2:1 frequency locked region Arik Yochelis December 30, 2023 ===========================================================================================================================Experiments with the oscillatory chlorite-iodide-malonic-acid (CIMA) chemical reaction have demonstrated that spiral waves can exhibit a finite size stationary core, a.k.a. dual-mode spiral waves. The behavior has been attributed to a competition between the coexisting oscillatory (Hopf) and stationary periodic (Turing) instabilities through analysis in one space dimension (1D). Specifically, localized stationary solutions have been shown to emerge between π-shifted oscillations and were thus assumed to explain the spiral core, where the amplitude of oscillations vanishes. Yet, numerical simulations indicate that spatially localized comb-like states in 2D form outside the coexistence region that is obtained in 1D. Consequently, a distinct mechanism is derived via a weakly nonlinear analysis near the Hopf-Turing bifurcation in 2D and shown to agree well with numerical simulations. Moreover, the results are discussed in the context of 2:1 frequency locking and show that resonant localized patterns extend the standard frequency locking region. As such, the study suggests distinct control and design features for spatially extended oscillatory systems.§ INTRODUCTION Chemical reactions are frequently being used as case models to elucidate generic and rich mechanisms of spatiotemporal dynamics, such as the Turing instability, spiral wave dynamics, bistability, and spot replication <cit.> (and the references therein), by providing insights into the mathematical mechanisms (e.g., linear, nonlinear, absolute, and convective instabilities) that give rise to pattern selection <cit.>. Among the more popular and exploited reactions are Belousov-Zhabotinsky and chlorite-iodide-malonic-acid (CIMA) <cit.>.
Besides interests in chemical controls <cit.>, these reactions are also used as phenomenological models for biological and ecological systems, examples of which include morphogenesis, cardiac arrhythmia, and vegetation in semi-arid regions <cit.>. An intriguing type of pattern formation phenomenon, demonstrating stationary spatial localization embedded in an oscillatory background, has been found experimentally in the CIMA reaction <cit.>. Such localized states have been observed in one- and two-space dimensions (1D and 2D, respectively) <cit.>, and attributed to a Turing core emerging in a Hopf background oscillating with a phase shift of π, a behavior that is typical in the vicinity of a codimension-2 bifurcation <cit.>, a.k.a. a flip-flop behavior or “1D-spiral” <cit.>. The 2D localization was attributed to the phase singularity that forces a vanishing Hopf amplitude and thus in turn emergence of a Turing state <cit.>. In the mathematical context, it was shown that the spatial localization in the 1D Hopf-Turing bifurcation <cit.> bears a similarity to the spatial localization mechanism in systems with a Turing-type (finite wavenumber) instability due to the homoclinic snaking structure <cit.>. In this study, we focus on spatially localized comb-like structures in 2D, see for example Fig. <ref>(a). We show that these localized states emerge via an alternative pinning mechanism over a much wider range of parameters and specifically, in a region where 1D homoclinic snaking is absent. We exploit the context of frequency locking and show that the Hopf-Turing localization in 2D further extends the resonant behavior outside of the resonance tongue <cit.>. 
The paper is organized as follows: in the rest of the Introduction section, we briefly discuss the phenomenology of spatially extended Hopf-Turing patterns and their impact on the 2:1 resonance outside the classical locking region; in section <ref>, we overview the 1D flip-flop localization and show numerically that planar 2D comb-like localized states exist in a wide parameter range where flip-flop behavior is not present; in section <ref>, we provide an alternative mechanism for localized comb-like states in 2D by deriving and analyzing three amplitude equations that represent a Hopf mode and two perpendicular Turing modes; then in section <ref>, we exploit these insights to explain the comb-like spiral core; finally, we conclude in section <ref>. §.§ Coexistence of periodic stationary and temporal patterns: Hopf-Turing bifurcation Consider a general reaction-diffusion type system: ∂u⃗/∂t = f(u⃗) + 𝐃∇²u⃗, where u⃗ ≡ (u_1,…,u_N) are the chemical species (with N an integer), f(u⃗) contains the linear and nonlinear terms that correspond to chemical reactions or interactions, and 𝐃 is a matrix of diffusion and cross-diffusion coefficients <cit.>. In the vicinity of the codimension-2 Hopf-Turing instability, the solution u⃗(x,t) can be approximated by: u⃗ ≈ u⃗_∗ + e⃗_H H(√(ε)x, t)e^{iω_c t} + e⃗_T T(√(ε)x, t)e^{ik_c x} + c.c., where u⃗_∗ is a spatially uniform state that goes through an instability, c.c. denotes the complex conjugate, H and T are slowly varying Hopf and Turing amplitudes in space and time, and e⃗_H and e⃗_T are the eigenvectors associated with the critical Hopf frequency (ω_c) and Turing wavenumber (k_c) at the codimension-2 onset, respectively. Multiple time scale analysis using the above ansatz leads to a generic set of Hopf and Turing amplitude equations <cit.>: ∂H/∂t = m_1 H − m_2|H|²H − m_3|T|²H + m_4 ∂²H/∂x², ∂T/∂t = n_1 T − n_2|T|²T − n_3|H|²T + n_4 ∂²T/∂x², where m_{1,2,3,4} ∈ ℂ and n_{1,2,3,4} ∈ ℝ.
Notably, system (<ref>) is a 1D reduction of (<ref>) and reproduces well the flip-flop behavior <cit.>, i.e., a spatially localized Turing state embedded in π shifted Hopf oscillations. §.§ Frequency locking outside the resonant region The intriguing and rich dynamics of the Hopf-Turing bifurcation has been demonstrated not only in the CIMA reaction <cit.> but has also been found to be fundamental in broadening the 2:1 frequency locking behavior in the periodically forced Belousov-Zhabotinsky chemical reaction <cit.>. Consequently, we focus here on the framework of frequency locking in spatially extended oscillatory media to study both the 2D Hopf-Turing localization and its relation to the further increase of the 2:1 resonance region, in general.Let us assume that system (<ref>) goes through a primary oscillatory Hopf instability and is also externally forced at a certain frequency <cit.>. Near the onset, and depending on the forcing amplitude and frequency, the medium will exhibit either unlocked or locked oscillations, which will obey the forced complex Ginzburg-Landau (FCGL) equation <cit.>: ∂A/∂t = (μ+iν)A − (1+iβ)|A|²A + γA̅^{n−1} + (1+iα)∇²A, where A is the complex amplitude of the primary Hopf mode, varying weakly in space and time, A̅ is its complex conjugate, n is an integer associated with the n:1 resonance, μ is the distance from the Hopf onset, ν is the difference between the natural and forcing frequencies, and γ is the forcing amplitude. In this context, frequency locking corresponds to asymptotically stationary solutions of (<ref>). Since the Hopf-Turing bifurcation in (<ref>) arises only for n=2, the study of Hopf-Turing spatially localized states applies only to the 2:1 resonant case.In the 2:1 resonance case, Eq.
<ref> admits two uniform non-trivial (π shifted) solutions that exist for <cit.> γ > γ_b = |ν − μβ|/√(1+β²). This bistability region is commonly called the classical (Arnol'd) 2:1 resonance tongue, even in the context of spatially extended media; see region I in Figs. <ref>(b,c). Moreover, bistability of uniform π shifted states can also lead to the formation of inhomogeneous solutions <cit.>, such as labyrinthine patterns, spiral waves, and spatially localized states (a.k.a. oscillons). However, it has been shown that nonuniform 2:1 resonant patterns may in fact exist also outside the 2:1 resonance <cit.>, γ < γ_b, i.e., in a region where stationary non-trivial uniform solutions are absent. The resonant condition is obtained through stripe (Turing state) nucleation due to the propagation of a Hopf-Turing front, i.e., an interface that bi-asymptotically connects Hopf and Turing states, as shown in Fig. <ref>(a). The codimension-2 Hopf-Turing bifurcation is an instability of the trivial uniform state A=0 at μ=0 and γ=γ_c <cit.>, with ω_c = να/ρ, k_c² = να/ρ², γ_c = ν/ρ, and ρ = √(1+α²). Notably, the Hopf-Turing bifurcation occurs outside the resonance region, as γ_c < γ_b. Multiple time scale analysis yields the coefficients <cit.> of the Hopf-Turing amplitude equations (<ref>): m_1 = μ − i(γ−γ_c)/α, m_2 = 4 + 2iβ(2ρ²+1)/(αρ), m_3 = 8ρ(α+ρ) + i{4β[2αρ(α+ρ) + 3ρ + α]/α − 4(α+ρ)}, m_4 = 1 + iρ, n_1 = μ + ρ(γ−γ_c)/α, n_2 = 6ρ(α+ρ)(1−β/α), n_3 = 4(2−3β/α), n_4 = 2ρ². Specifically, stability analysis of the pure Hopf and Turing modes shows that for β < β_c = 5α/9 these two uniform states coexist, and thus it is possible to form a heteroclinic connection between them <cit.>, i.e., a front solution. The Hopf-Turing front is stationary (Fig. <ref>(b)) at γ_N = γ_c + (μ/ρ)[√((3/4)(2α−3β)(α−β)) − α], and propagates otherwise <cit.>: for γ > γ_N the Turing state invades the Hopf state (Fig. <ref>(a)) and vice versa (Fig. <ref>(c)).
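The threshold hierarchy above can be made concrete numerically. The sketch below evaluates γ_c, γ_b, and γ_N; the parameter values (α=2, β=0.5, μ=0.1, ν=2) are illustrative assumptions, not values taken from the figures, and the locking boundary is taken as γ_b = |ν−μβ|/√(1+β²), the standard result for the 2:1 FCGL equation:

```python
import numpy as np

def fcgl_thresholds(alpha, beta, mu, nu):
    """Onset and front-pinning thresholds quoted in the text for the 2:1 FCGL."""
    rho = np.sqrt(1.0 + alpha**2)
    gamma_c = nu / rho                        # codimension-2 onset (at mu = 0)
    omega_c = nu * alpha / rho                # critical Hopf frequency
    k_c = np.sqrt(nu * alpha) / rho           # critical Turing wavenumber
    gamma_b = abs(nu - mu * beta) / np.sqrt(1.0 + beta**2)  # Arnol'd tongue boundary
    gamma_N = gamma_c + (mu / rho) * (
        np.sqrt(0.75 * (2 * alpha - 3 * beta) * (alpha - beta)) - alpha)
    return dict(rho=rho, gamma_c=gamma_c, omega_c=omega_c, k_c=k_c,
                gamma_b=gamma_b, gamma_N=gamma_N)

# Illustrative (assumed) parameters satisfying beta < 5*alpha/9:
th = fcgl_thresholds(alpha=2.0, beta=0.5, mu=0.1, nu=2.0)
print(th)
```

Both the codimension-2 onset γ_c and the stationary-front value γ_N lie below the classical locking boundary γ_b, consistent with resonant patterns existing outside the tongue.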
Moreover, the presence of the stationary front serves as an organizing center for the homoclinic snaking phenomena <cit.> that will be discussed in the next section.The dominance of the asymptotically stationary Turing mode in region II, γ_N < γ < γ_b, thus extends the classical frequency locking domain (region I) once spatially extended patterns are formed <cit.>. Notably, Turing type solutions are in fact standing waves in the context of the original system (<ref>). Figures <ref>(b,c) show the classical resonance region for a single oscillator (region I) and the extended frequency locked region due to the dominance of a spatially extended Turing mode (region II). Our interest is thus in the unlocked region III (γ_T < γ < γ_N in Figures <ref>(b,c)) where, despite the Hopf mode dominance (i.e., the Hopf state is favorable over the Turing state), 2D resonant localized comb-like states may still form (see Figure <ref>(a)), with γ_T = γ_c − μ(α+3β)/(4ρ), which is the stability onset of the Turing mode <cit.>. For γ < γ_T only Hopf oscillations persist, i.e., region IV.In what follows, we use γ as a control parameter while keeping all other parameters constant. Notably, we limit the scope to the coexistence region between the Hopf and Turing modes <cit.>, with β < 5α/9 and γ_T < γ < min{γ_H, γ_b}, where γ_H = γ_c + μ(α−3β)/ρ. As such, the localized 2D solutions in region III are resonant states and thus further extend the frequency locking boundary, as portrayed in Figure <ref>.§ FLIP-FLOP DYNAMICS AND DEPINNINGHopf-Turing spatial localization, pinning, and depinning in 1D have been studied in detail by Tzou et al. (2013), who have shown the relation to the homoclinic snaking phenomenon <cit.>. Specifically, two snaking behaviors were outlined: standard (vertical) snaking, if the Turing mode is embedded in a Hopf background that oscillates in phase; collapsed snaking, if the Turing mode is embedded in a Hopf background that oscillates with a phase shift of π.
This case is also known as the "flip-flop" behavior.Both cases form in the vicinity of a stationary front (a Maxwell-type heteroclinic connection) between the Hopf (oscillatory) and the Turing (periodic) states, i.e., around γ=γ_N in the context of the FCGL equation [see Eq. <ref>]. The width of the snaking regime is, however, rather narrow, and depinning effects become dominant at small deviations from γ_N. Indeed, numerical integrations of (<ref>) confirm this result also in the context of FCGL (see Fig. <ref>): for γ>γ_N (γ<γ_N) the Turing (Hopf) state invades the Hopf (Turing) state <cit.>, and thus the localized Turing state (centered at x=0) expands (collapses), respectively.On the other hand, the robustness of comb-like structures (e.g., in spiral waves), as compared to the narrow existence range of the flip-flop in parameter space, suggests that the emergence mechanism is distinct. Indeed, direct numerical simulations in 2D show that comb-like localized patterns emerge in a parameter range in which the flip-flop does not exist, γ_T < γ < γ̃_N, where γ̃_N ⪅ γ_N is the left limit of the depinning region and is computed here numerically. Notably, since the snaking region is very narrow compared to the rest of the domain, we define in what follows region III to lie within γ_T < γ < γ̃_N. Moreover, the width (Γ) of the localized comb-like region increases with γ, as shown in Fig. <ref>. To quantify Γ, we employed a discrete Fourier transform (DFT) at every grid point. The dark shading marks the frequency with the highest contribution, as shown in the rightmost panels of Fig. <ref>. These results confirm that comb-like states (see Fig. <ref>) are related to a distinct 2D pinning mechanism and are not just a spatial extension of the flip-flop dynamics <cit.>.§ COMB-LIKE LOCALIZED STATES In this section, we show that localized comb-like states are formed due to pinning of a Turing mode that is perpendicular to the π phase shifted Hopf oscillations.
Namely, we look for planar localized states, as shown in Figs. <ref>(d,e). At first we derive the respective amplitude equations, then we obtain uniform solutions along with their stability properties, and finally we confirm the results by direct numerical integrations. §.§ Weakly nonlinear analysis The derivation of amplitude equations in 2D follows the same steps as in the 1D case, but with two Turing complex amplitudes (both varying slowly in space and time): (Re A, Im A)^T ≈ ((1+iα)/ρ, 1)^T H(√(ε)x, √(ε)y, t)e^{iω_c t} + (α+ρ, 1)^T T_∥(√(ε)x, √(ε)y, t)e^{ik_c x} + (α+ρ, 1)^T T_⊥(√(ε)x, √(ε)y, t)e^{ik_c y} + c.c. + h.o.t., where h.o.t. stands for higher order terms. The two Turing modes T_∥ and T_⊥ are defined as parallel and perpendicular to the considered π phase shifted Hopf oscillations (hereafter, the Hopf front), respectively. Following the multiple time scale method (see <cit.> for details), we obtain (after some algebra that is not shown here) ∂H/∂t = m_1 H − m_2|H|²H − m_3(|T_⊥|² + |T_∥|²)H + m_4∇²H, ∂T_∥/∂t = n_1 T_∥ − n_2(|T_∥|² + 2|T_⊥|²)T_∥ − n_3|H|²T_∥ + n_4 ∂²T_∥/∂x², ∂T_⊥/∂t = n_1 T_⊥ − n_2(2|T_∥|² + |T_⊥|²)T_⊥ − n_3|H|²T_⊥ + n_4 ∂²T_⊥/∂y².Besides the standard Hopf-Turing solutions <cit.>, the uniform solutions to (<ref>) that involve non-vanishing T_⊥ contributions are obtained through the amplitudes (|H|, |T_∥|, |T_⊥|) ≡ (R_H, R_∥, R_⊥): * Pure Turing mode (stripes): R_⊥ = √[(μα + ρ(γ−γ_c))/(6ρ(α+ρ)(α−β))], R_H = R_∥ = 0; * Unstable mixed Turing mode (stationary squares): R_⊥ = R_∥ = √[(μα + ρ(γ−γ_c))/(18ρ(α+ρ)(α−β))], R_H = 0; * Unstable mixed Hopf-Turing mode (oscillating squares): R_H = (1/2)√[((18β−2α)μ + 16ρ(γ−γ_c))/(14α−30β)], R_⊥ = R_∥ = √[((α−3β)μ − ρ(γ−γ_c))/(ρ(α+ρ)(14α−30β))]. §.§ Stability and Hopf fronts Having obtained the uniform solutions, we proceed to the selection mechanism by focusing on the spatial symmetry breaking that is induced by the Hopf front. Consequently, we associate (<ref>) with only one spatial dependence (here we use x), which corresponds to the direction of a Hopf front.
In addition, for convenience, we use the polar form H = R_H exp(iΦ) and consider only the amplitudes of the Turing fields (due to the spatial dependence, H cannot be decoupled as for T_∥,⊥). Hence, system (<ref>) becomes ∂R_H/∂t = μR_H − 4R_H³ − 8ρ(α+ρ)(R_∥² + R_⊥²)R_H − [(∂Φ/∂x)² + ρ(∂²Φ/∂x²)]R_H − 2ρ(∂Φ/∂x)(∂R_H/∂x) + ∂²R_H/∂x², ∂Φ/∂t = −(γ−γ_c)/α − ν_1R_H² − ν_2(R_∥² + R_⊥²) − ρ(∂Φ/∂x)² + ∂²Φ/∂x² + (2/R_H)(∂Φ/∂x)(∂R_H/∂x) + (ρ/R_H)(∂²R_H/∂x²), ∂R_∥/∂t = [μ + ρ(γ−γ_c)/α]R_∥ − 4(2−3β/α)R_H²R_∥ − 6ρ(α+ρ)(1−β/α)(R_∥² + 2R_⊥²)R_∥ + 2ρ²(∂²R_∥/∂x²), ∂R_⊥/∂t = [μ + ρ(γ−γ_c)/α]R_⊥ − 4(2−3β/α)R_H²R_⊥ − 6ρ(α+ρ)(1−β/α)(2R_∥² + R_⊥²)R_⊥, where ν_1 = 2β(2ρ²+1)/(αρ) and ν_2 = 4β[2αρ(α+ρ) + 3ρ + α]/α − 4(α+ρ). Notably, the spatial symmetry breaking is reflected in (<ref>) through the absence of a diffusive term.The fixed point analysis (linear stability to spatially uniform perturbations) of (<ref>) is identical to the 1D case <cit.>, and thus the pure Hopf and pure Turing coexistence regime remains the same, β < 5α/9. However, to gain insight into the emergence of the T_⊥ mode at the Hopf front region, we examine the linear stability of the trivial solution (R_H, R_∥, R_⊥) = (0,0,0) to non-uniform perturbations, for which the Hopf amplitude and phase can be decoupled: δ(R_H, R_∥, R_⊥)^T ∝ e^{σt + ikx} + c.c., where σ is the growth rate of the respective wavenumber k. Substituting (<ref>) in (<ref>) and solving to leading order yields three dispersion relations: σ_H = μ − k², σ_0 = μ + ρ(γ−γ_c)/α, σ_k = μ + ρ(γ−γ_c)/α − 2ρ²k². As expected, the three growth rates show instability for k=0, where the vector flow is rather isotropic (Fig. <ref>). For completeness, we have computed the trajectories after linearizing (<ref>) about all the fixed points involving R_⊥, and show the projection on the (R_∥, R_⊥) plane. The results are consistent with the stability of the pure Turing modes (<ref>) and the saddle nature of the mixed Turing mode (<ref>). As k is increased, we observe a symmetry breaking between σ_0 and σ_k, which occurs for σ_k = 0, or equivalently for k_f² = (μα + ργ − ν)/(2αρ²).
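The symmetry breaking between the two Turing growth rates can be checked directly. The following sketch uses assumed illustrative parameters (α=2, μ=0.1, ν=2, γ=0.87, not taken from the figures; β enters only the nonlinear terms and is not needed here) to evaluate the three dispersion relations and the crossing wavenumber k_f:

```python
import numpy as np

# Assumed illustrative parameters (linear growth rates only).
alpha, mu, nu, gamma = 2.0, 0.1, 2.0, 0.87
rho = np.sqrt(1.0 + alpha**2)
gamma_c = nu / rho

def sigma_H(k):  # Hopf amplitude perturbations (diffusion coefficient 1)
    return mu - k**2

def sigma_0(k):  # T_perp perturbations: no spatial coupling along x
    return mu + rho * (gamma - gamma_c) / alpha + 0.0 * k

def sigma_k(k):  # T_par perturbations (diffusion coefficient 2*rho**2)
    return mu + rho * (gamma - gamma_c) / alpha - 2.0 * rho**2 * k**2

# Wavenumber at which sigma_k crosses zero and splits off from sigma_0:
k_f = np.sqrt((mu * alpha + rho * gamma - nu) / (2.0 * alpha * rho**2))
print(k_f, sigma_0(k_f), sigma_k(k_f))
```

For any k > 0 the mode without spatial coupling has the larger growth rate, σ_0 > σ_k, and for k > k_f it is the only growing Turing mode.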
Figure <ref> shows that perturbations about (R_∥,R_⊥)=(0,0) favor the attractor (R_∥,R_⊥)=(0,R_⊥). The increasing value of the wavenumber is also consistent with the basin of attraction, which corresponds to a rather narrow spatial region at the Hopf front location; i.e., already for k=1 the flow indicates an ultimate preference for (R_∥,R_⊥)=(0,R_⊥). §.§ Numerical results Next, we check the results obtained above against direct numerical integrations of (<ref>). We set a sharp Hopf front by using the Hopf amplitude as an initial condition: R_H(x) = ∓R_H for 0 < ±x ≤ L (a sign flip of the pure Hopf amplitude at x=0), with R_∥ = R_⊥ = 0. Due to diffusion in the Hopf field, a front solution is indeed formed at first, and after an additional transient an asymptotic localized T_⊥ Turing state emerges inside the Hopf front region, as shown in Fig. <ref>. The results are in accord with the linear stability analysis of the trivial state, showing that perturbations at the mid-front location, x=0, are in the basin of attraction of the fixed point (R_H, R_∥, R_⊥)=(0,0,R_⊥), which corresponds to the comb-like structure in Fig. <ref>. The width of the T_⊥ Turing localized state increases with γ, which agrees well with numerical integration of (<ref>), as shown in Fig. <ref>. This could be an important feature that explains the different sizes of the Turing core embedded in spiral waves. We note that in the narrow vicinity of γ_N both Turing modes, T_⊥ and T_∥, coexist and can emerge depending on the initial perturbations within the Hopf front. The Turing mode that is parallel to the Hopf front (T_∥) is described by a partial differential equation (<ref>) and thus, in the absence of a pinning mechanism such as homoclinic snaking, is directly subjected to diffusive fluxes, which are overtaken by the oscillatory Hopf mode. On the other hand, the perpendicular Turing mode (T_⊥) obeys only local dynamics via an ordinary differential equation (<ref>).
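A minimal sketch of this pinning mechanism: the three real amplitude equations are integrated along x with the phase Φ held uniform, so all phase-gradient terms drop (an assumed simplification); the parameters, grid, and initial sign-flip front are illustrative, with γ chosen inside region III. The perpendicular mode, which carries no diffusion term, grows only where the Hopf amplitude is small and saturates near √(n_1/n_2):

```python
import numpy as np

# Illustrative (assumed) parameters; gamma lies in region III (gamma_T < gamma < gamma_N).
alpha, beta, mu, nu, gamma = 2.0, 0.5, 0.1, 2.0, 0.87
rho = np.sqrt(1.0 + alpha**2)
gamma_c = nu / rho
n1 = mu + rho * (gamma - gamma_c) / alpha
n2 = 6 * rho * (alpha + rho) * (1 - beta / alpha)
n3 = 4 * (2 - 3 * beta / alpha)
m3r = 8 * rho * (alpha + rho)                 # Re(m_3)

L, N, dt, steps = 50.0, 201, 0.002, 50_000
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
RH = -(np.sqrt(mu) / 2) * np.tanh(x / 3)      # pi-shifted Hopf front (sign flip at x=0)
Rpar = np.zeros(N)
Rperp = np.full(N, 1e-3)                      # small seed for the perpendicular mode

def lap(u):                                   # no-flux (edge-padded) Laplacian
    v = np.pad(u, 1, mode="edge")
    return (v[:-2] - 2 * u + v[2:]) / dx**2

for _ in range(steps):                        # explicit Euler time stepping
    RH += dt * (mu * RH - 4 * RH**3 - m3r * (Rpar**2 + Rperp**2) * RH + lap(RH))
    Rpar += dt * (n1 * Rpar - n2 * (Rpar**2 + 2 * Rperp**2) * Rpar
                  - n3 * RH**2 * Rpar + 2 * rho**2 * lap(Rpar))
    Rperp += dt * (n1 * Rperp - n2 * (2 * Rpar**2 + Rperp**2) * Rperp
                   - n3 * RH**2 * Rperp)      # note: no diffusion term for R_perp
```

At the end of the run, R_⊥ is pinned as a localized patch around x=0 (where R_H vanishes by symmetry) and decays to zero in the far field, where the pure Hopf amplitude √μ/2 dominates.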
The spatial decoupling in (<ref>) thus allows, under certain initial conditions, pinning of the Hopf amplitude (local selection of the (R_H, R_∥, R_⊥) = (0,0,R_⊥) fixed point), which in turn effectively results in a π phase shifted front. This, however, is highly sensitive to the domain size and initial conditions due to the secondary zigzag and Eckhaus instabilities <cit.>. For example, the length of the y dimension should be an integer multiple of the typical wavelength (which is close to 2π/k_c) and lie within the Eckhaus stable regime; details of the Busse balloon for this problem are given in <cit.>. If this condition is not fulfilled, the nonlinear terms become dominant, the comb-like structures are destroyed, and instead a spiral wave with a comb-like core is formed; see Fig. <ref>. § SPIRAL WAVES WITH COMB-LIKE CORE To capture the emergence of the Turing core embedded inside a Hopf spiral, we start with a pure Hopf spiral wave, obtained for γ=1.9, as an initial condition. As in the planar front case (Fig. <ref>), direct numerical integration of (<ref>) for γ_T < γ < γ_N shows the formation of a Turing spot inside the core due to the vanishing Hopf amplitude within that region; see Fig. <ref>. As expected, here too the Turing spot size increases with γ. To quantify the size of the Turing core, we again use a DFT at each grid point within a window of 128 time steps, where each time step comprises 100 discrete time iterations. Consequently, each grid point corresponds to a 128-dimensional vector of amplitudes calculated from the DFT, from which only the elements with the highest amplitude are selected, as shown in the bottom panel of Fig. <ref>. The frequency contrast allows us to define a criterion for the width (Γ) of the Turing spot. The spiral core width is found to agree with the results obtained via the planar front initial condition, as shown in Fig. <ref>.
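The per-grid-point DFT criterion can be illustrated on synthetic data (the fields below are constructed purely for demonstration and are not output of the simulations): points belonging to the stationary Turing region have their dominant temporal frequency at zero, and counting them yields the width Γ:

```python
import numpy as np

# Synthetic 1D cut through a comb-like state: a frozen (Turing-like) stripe
# segment for |x| < 5 embedded in homogeneous Hopf oscillations of frequency omega.
nx, nt, omega = 161, 256, 1.0
x = np.linspace(-20.0, 20.0, nx)
t = np.linspace(0.0, 2 * np.pi * 16, nt, endpoint=False)   # 16 oscillation periods

u = np.where(np.abs(x)[None, :] < 5.0,
             np.cos(3.0 * x)[None, :],          # stationary spatial pattern
             np.cos(omega * t)[:, None])        # uniform oscillation

# Dominant temporal frequency at each grid point (bin 0 = stationary).
amps = np.abs(np.fft.rfft(u, axis=0))
dominant = np.argmax(amps, axis=0)

# Width of the stationary (Turing) region from the frequency contrast:
dx = x[1] - x[0]
Gamma = dx * np.count_nonzero(dominant == 0)
print(Gamma)
```

The oscillatory points peak at the bin matching ω (here bin 16, since the window holds 16 periods), while the stationary stripe points peak at bin 0, so thresholding the dominant-frequency map recovers Γ ≈ 10 for this construction.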
In the depinning region above the stationary Hopf-Turing front condition, γ_N < γ < γ_b, the spiral core expands by invading the Hopf oscillations due to the dominance of the Turing mode, and the domain becomes filled with a periodic pattern <cit.> (not shown here). § CONCLUSIONS In summary, we have presented a distinct pinning mechanism for 2D spatial localization that is associated with the emergence of comb-like structures embedded in a temporally oscillatory background. These spatially localized states emerge in a planar form inside π phase shifted oscillations (Fig. <ref>) or as a spiral wave core (Fig. <ref>). The mechanism requires coexistence of periodic stripes in both the x and y directions and uniform oscillations, a behavior that is typical in the vicinity of a codimension-2 Hopf-Turing bifurcation. Unlike the homoclinic snaking mechanism, which gives rise to localized states over a narrow range of parameters about a stationary Hopf-Turing front in 1D (a.k.a. flip-flop dynamics) <cit.>, the comb-like states are robust (i.e., they do not require any Maxwell-type construction) and exist over the entire coexistence range as long as the Hopf state is dominant over the Turing state (γ_T < γ < γ_N), as shown in Figs. <ref> and <ref>. In the context of 2:1 frequency locking, the comb-like states correspond to spatially localized resonances that further extend the frequency locking regime outside the resonance tongue. Notably, localized comb-like states have also been observed in vibrating granular media, where they are referred to as "decorated fronts" <cit.>. However, in those experiments they seem to form near resonant domain patterns, and thus their formation mechanism may be unrelated to the Hopf-Turing bifurcation. To this end, using the generic amplitude equation framework, we have presented a selection mechanism that allows us to understand and robustly design spatially localized reaction-diffusion patterns in two-dimensional geometries <cit.>.
Specifically, these results indicate the origin of the intriguing spiral waves with stationary cores that have been observed in CIMA <cit.>, and suggest the formation of localized resonant patterns outside the classical 2:1 frequency locking region, such as in the case of the periodically driven Belousov-Zhabotinsky chemical reaction <cit.>. A detailed analysis and comparison with reaction-diffusion models for chemical reactions is, however, beyond the scope of this work and should be addressed in future studies. We thank Ehud Meron and Francois A. Leyvraz for fruitful discussions, and P.M.C. also acknowledges the use of the Miztli supercomputer of UNAM under project number LANCAD-UNAM-DGTIC-016. This work was supported by the Adelis Foundation, CONACyT under project number 219993, and UNAM under projects DGAPA-PAPIIT IN100616 and IN103017.

P. Maini, K. Painter, and H. N. P. Chau, Spatial pattern formation in chemical and biological systems, J. Chem. Soc., Faraday Trans. 93, 3601–3610 (1997).
S. Kondo and T. Miura, Reaction-diffusion model as a framework for understanding biological pattern formation, Science 329, 1616–1620 (2010).
J. D. Murray, Mathematical Biology (Springer, New York, 2002).
J. Keener and J. Sneyd, Mathematical Physiology (Springer-Verlag, New York, 1998).
M. C. Cross and P. C. Hohenberg, Pattern formation outside of equilibrium, Rev. Mod. Phys. 65, 851 (1993).
P. Borckmans, G. Dewel, A. De Wit, E. Dulos, J. Boissonade, F. Gauffre, and P. De Kepper, Diffusive instabilities and chemical reactions, Int. J. Bifurcation Chaos 12, 2307–2332 (2002).
L. M. Pismen, Patterns and Interfaces in Dissipative Dynamics (Springer-Verlag, Berlin, 2006).
I. R. Epstein and K. Showalter, Nonlinear chemical dynamics: oscillations, patterns, and chaos, J. Phys. Chem. 100, 13132–13147 (1996).
P. De Kepper, J. Boissonade, and I. R. Epstein, Chlorite-iodide reaction: a versatile system for the study of nonlinear dynamical behavior, J. Phys. Chem. 94, 6525–6536 (1990).
V. K. Vanag and I. R. Epstein, Design and control of patterns in reaction-diffusion systems, Chaos 18, 026107 (2008).
I. Szalai, D. Cuiñas, N. Takács, J. Horváth, and P. De Kepper, Chemical morphogenesis: recent experimental advances in reaction-diffusion system design and control, Interface Focus 2, 417–432 (2012).
V. Volpert and S. Petrovskii, Reaction-diffusion waves in biology, Phys. Life Rev. 6, 267–310 (2009).
E. Meron, Nonlinear Physics of Ecosystems (CRC Press, 2015).
P. De Kepper, J.-J. Perraud, B. Rudovics, and E. Dulos, Experimental study of stationary Turing patterns and their interaction with traveling waves in a chemical system, Int. J. Bifurcation Chaos 4, 1215–1231 (1994).
P. Borckmans, O. Jensen, V. Pannbacker, E. Mosekilde, G. Dewel, and A. De Wit, Localized Turing and Turing-Hopf patterns, in Modelling the Dynamics of Biological Systems (Springer, 1995), pp. 48–73.
G. Dewel, P. Borckmans, A. De Wit, B. Rudovics, J.-J. Perraud, E. Dulos, J. Boissonade, and P. De Kepper, Pattern selection and localized structures in reaction-diffusion systems, Physica A 213, 181–198 (1995).
O. Jensen, V. O. Pannbacker, E. Mosekilde, G. Dewel, and P. Borckmans, Localized structures and front propagation in the Lengyel-Epstein model, Phys. Rev. E 50, 736 (1994).
Y. Mau, A. Hagberg, and E. Meron, Dual-mode spiral vortices, Phys. Rev. E 80, 065203 (2009).
J.-J. Perraud, A. De Wit, E. Dulos, P. De Kepper, G. Dewel, and P. Borckmans, One-dimensional "spirals": novel asynchronous chemical wave sources, Phys. Rev. Lett. 71, 1272 (1993).
A. Bhattacharyay, A theory for one-dimensional asynchronous chemical waves, J. Phys. A: Math. Theor. 40, 3721–3728 (2007).
A. De Wit, D. Lima, G. Dewel, and P. Borckmans, Spatiotemporal dynamics near a codimension-two point, Phys. Rev. E 54, 261–271 (1996).
J. C. Tzou, Y. P. Ma, A. Bayliss, B. J. Matkowsky, and V. A. Volpert, Homoclinic snaking near a codimension-two Turing-Hopf bifurcation point in the Brusselator model, Phys. Rev. E 87, 022908 (2013).
A. Yochelis, A. Hagberg, E. Meron, A. L. Lin, and H. L. Swinney, Development of standing-wave labyrinthine patterns, SIAM J. Appl. Dyn. Syst. 1, 236–247 (2002).
A. Yochelis, C. Elphick, A. Hagberg, and E. Meron, Two-phase resonant patterns in forced oscillatory systems: boundaries, mechanisms and forms, Physica D 199, 201–222 (2004).
V. K. Vanag and I. R. Epstein, Cross-diffusion and pattern formation in reaction-diffusion systems, Phys. Chem. Chem. Phys. 11, 897–912 (2009).
J. P. Keener, Secondary bifurcation in nonlinear diffusion reaction equations, Stud. Appl. Math. 55, 187–211 (1976).
H. Kidachi, On mode interactions in reaction diffusion equation with nearly degenerate bifurcations, Prog. Theor. Phys. 63, 1152–1169 (1980).
W. Just, M. Bose, S. Bose, H. Engel, and E. Schöll, Spatiotemporal dynamics near a supercritical Turing-Hopf bifurcation in a two-dimensional reaction-diffusion system, Phys. Rev. E 64, 026219 (2001).
A. De Wit, Spatial patterns and spatiotemporal dynamics in chemical systems, Adv. Chem. Phys. 109, 435–514 (1999).
A. L. Lin, A. Hagberg, E. Meron, and H. L. Swinney, Resonance tongues and patterns in periodically forced reaction-diffusion systems, Phys. Rev. E 69, 066217 (2004).
J.-M. Gambaudo, Perturbation of a Hopf bifurcation by an external time-periodic forcing, J. Differ. Equ. 57, 172–199 (1985).
C. Elphick, G. Iooss, and E. Tirapegui, Normal form reduction for time-periodically driven differential equations, Phys. Lett. A 120, 459–463 (1987).
P. Coullet and K. Emilsson, Strong resonances of spatially distributed oscillators: a laboratory to study patterns and defects, Physica D 61, 119–131 (1992).
C. Elphick, A. Hagberg, and E. Meron, Multiphase patterns in periodically forced oscillatory systems, Phys. Rev. E 59, 5285 (1999).
P. Coullet, J. Lega, B. Houchmandzadeh, and J. Lajzerowicz, Breaking chirality in nonequilibrium systems, Phys. Rev. Lett. 65, 1352 (1990).
D. Gomila, P. Colet, G.-L. Oppo, and M. San Miguel, Stable droplets and growth laws close to the modulational instability of a domain wall, Phys. Rev. Lett. 87, 194101 (2001).
J. Burke, A. Yochelis, and E. Knobloch, Classification of spatially localized oscillations in periodically forced dissipative systems, SIAM J. Appl. Dyn. Syst. 7, 651–711 (2008).
M. Or-Guil and M. Bode, Propagation of Turing-Hopf fronts, Physica A 249, 174–178 (1998).
A. Yochelis, C. Elphick, A. Hagberg, and E. Meron, Frequency locking in extended systems: the impact of a Turing mode, EPL 69, 170 (2004).
D. Blair, I. S. Aranson, G. W. Crabtree, and V. Vinokur, Patterns in thin vibrated granular layers: interfaces, hexagons, and superoscillons, Phys. Rev. E 61, 5600–5610 (2000).
Cerro Tololo Inter-American Observatory, Casilla 603, La Serena, Chile atokovinin@ctio.noao.edu
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA dlatham@cfa.harvard.edu
This work extends the still modest number of multiple stars with known relative orbit orientation. Accurate astrometry and radial velocities are used jointly to compute or update outer and inner orbits in three nearby triple systems HIP 101955 (orbital periods 38.68 and 2.51 years), HIP 103987 (19.20 and 1.035 years), HIP 111805 (30.13 and 1.50 years) and in one quadruple system HIP 2643 (periods 70.3, 4.85, and 0.276 years), all composed of solar-type stars. The masses are estimated from the absolute magnitudes and checked using the orbits. The ratios of outer to inner periods (from 14 to 20) and the eccentricities of the outer orbits are moderate. These systems are dynamically stable, but not very far from the stability limit. In three systems all orbits are approximately coplanar and have small eccentricity, while in HIP 101955 the inner orbit with e=0.6 is highly inclined.
§ INTRODUCTION
Orbits of planets in the Solar system, as well as in many exoplanet systems <cit.>, are located in one plane, presumably the plane of the protoplanetary disk. Some multiple stellar systems <cit.> have a similar "planetary" architecture and could also be formed in a disk. However, this is not the universal rule. There are triple stars with nearly perpendicular orbits, like Algol, or even counter-rotating triple systems like ζ Aqr <cit.>. Similarly, there exist non-coplanar exoplanetary systems such as ν And <cit.> and close binaries with misaligned stellar spins <cit.>. Dynamical interactions with other stars or planets are often evoked to explain the misalignment. In very tight stellar systems such interactions must be internal (between members) rather than external (with other stars in the cluster).
Accretion of gas with random angular momentum during star formation is another promising, but poorly explored, mechanism of misalignment. Orbit orientation in triple stars provides observational constraints on the angular momentum history relevant to the formation of stellar systems, stars, and planets. However, measurement of relative orbit orientation in triple stars is challenging. Both orbits must be resolved (either directly or astrometrically), the sense of rotation must be inferred from the radial velocities (RVs), and the outer period must be not too long for a reasonable orbit coverage. These conditions are met only for a small number of nearby multiple systems. The Sixth Catalog of Visual Binary Orbits <cit.> contains 62 candidates of varying orbit quality, mostly without RV data. Without thorough re-assessment and filtering, the VB6 sample is not suitable for statistical study of relative orbit orientation. Long-baseline stellar interferometers help in resolving closer and faster subsystems, but require substantial efforts, contributing so far only a handful of cases <cit.>. Motion in a triple system can be described by two Keplerian orbits only approximately, because the dynamical interaction between the inner and outer subsystems constantly changes their orbits. The time scale of this evolution is normally much longer than the time span of the observations, so the orbits represent the "instantaneous" osculating elements in the three-body problem. Knowing these orbits and the masses, the secular dynamical evolution can be studied numerically <cit.>. In this work, we study four multiple systems to determine their relative orbit orientation and period ratio as accurately as possible. We selected candidates with modest period ratios and moderate outer eccentricity, resembling in this sense HD 91962. Integer period ratios would suggest potential mean motion resonances.
Such resonances are commonly found in multi-planet systems <cit.>, but are not documented in stellar multiples; the case of HD 91962 with a period ratio of 18.97±0.06 remains, so far, unique <cit.>. Basic data on the four multiple systems are presented in Table <ref>; the mobile diagrams in Figure <ref> illustrate their hierarchical structure and periods. The range of periods is similar to that in the solar system. The last two columns of the Table give the parallax from <cit.> and the dynamical parallax computed here from the orbital elements and estimated masses. HIP 2643 is a known visual binary containing two spectroscopic subsystems (hence it is quadruple); we detect here astrometric perturbations from the 5 year subsystem. The remaining three triple stars have both inner and outer pairs directly resolved, with their orbits already listed in the VB6. HIP 103987 and 111805 were recently studied by <cit.>. We use the available astrometry together with the new speckle observations and the RVs to compute combined orbits, accounting also for the "wobble" in the motion of the outer binary caused by the subsystem.

Basic parameters of multiple systems:
HIP    | HD     | WDS (J2000) | Sp. type | V (mag) | B-V (mag) | K (mag) | π_HIP2 (mas) | π_dyn (mas)
2643   | 2993   | 00334+4006  | F8       | 7.75    | 0.52      | 6.35    | 17.53±0.95   | 16.1
101955 | 196795 | 20396+0458  | K5V      | 7.84    | 1.24      | 4.74    | 59.80±3.42   | 59.0
103987 | 200580 | 21041+0300  | F9V      | 7.31    | 0.54      | 5.79    | 19.27±0.99   | 23.2
111805 | 214608 | 22388+4419  | F9V      | 6.83    | 0.55      | 5.32    | 26.18±0.60   | 24.1

Section <ref> presents the data used in this work and the methods common to all objects. Then each multiple system is discussed individually in Sections 3 to 6. The work is summarized in Section 7.
§ OBSERVATIONS AND THEIR ANALYSIS
§.§ Astrometry
The outer subsystems are classical visual binaries. Historic micrometric measurements and modern speckle interferometric data have been obtained from the WDS database on our request. Additionally, we secured new speckle astrometry and relative photometry of two systems at the 4.1-m SOAR telescope <cit.>.
Accurate modern astrometry reveals "wobble" in the motion of the outer pairs caused by the subsystems, even when those are not directly resolved. The 180° ambiguity of position angle in the standard speckle method is avoided in the case of triple systems, where the orientation of the outer pair is known from micrometer measures and Hipparcos and defines the orientation of the inner pair as well. The observations presented in H15 use the image reconstruction technique that does not have the 180° ambiguity.
§.§ Radial velocities
Published RVs are used here together with the new data. The RVs were measured with the CfA Digital Speedometers <cit.>, initially using the 1.5-m Wyeth Reflector at the Oak Ridge Observatory in the town of Harvard, Massachusetts, and subsequently with the 1.5-m Tillinghast Reflector at the Whipple Observatory on Mount Hopkins, Arizona. Starting in 2009, the new fiber-fed Tillinghast Reflector Echelle Spectrograph <cit.> was used. The spectral resolution was 44,000 for all three spectrographs, but the typical signal-to-noise ratio (SNR) per resolution element of 100 for the TRES observations was a few times higher than for the CfA Digital Speedometer observations. The light of all systems except HIP 111805 is dominated by the bright primary component. Therefore we followed our standard procedure of using one-dimensional correlations of each observed spectrum against a synthetic template drawn from our library of calculated spectra. The RV zero point for each spectrograph was monitored using observations of standard stars, of daytime sky, and of minor planets, and the velocities were all adjusted to the native system of the CfA Digital Speedometers. To get onto the absolute velocity system defined by our observations of minor planets, about 0.14 km s^-1 should be added to the RVs. These velocities are all based on correlations of just a single echelle order centered on the Mg b triplet near 519 nm, with a wavelength window of 4.5 nm for the CfA Digital Speedometers and 10.0 nm for TRES.
Two objects, HIP 101955 and 103987, were observed in 2015 with the CHIRON echelle spectrograph <cit.> at the 1.5-m telescope at CTIO with a spectral resolution of 80,000. The RVs were measured by cross-correlation of these spectra with the digital mask; see <cit.> for further details.
§.§ Orbit calculation
The orbital elements and their errors were determined with the IDL code orbit3.pro[The code is posted at <http://dx.doi.org/10.5281/zenodo.321854>] that fits simultaneously the inner and outer orbits using both the resolved measures and the RVs. It describes the triple system "from inside out", as the first inner pair Aa,Ab and the second outer pair A,B, where A denotes the center of mass of Aa,Ab. The motion of the inner pair depends on the 10 inner elements. As the center of mass A moves in the outer orbit, the RVs of Aa and Ab are sums of the inner and outer orbital velocities, while the RV of B depends only on the outer elements. For the positional measurements, the situation is reversed: the position of the inner pair depends only on the inner elements, while the position of the outer pair includes the wobble term. Figure <ref> explains the wobble. The outer elements describe the motion of B around the center of mass A. However, the center of mass is not directly observed. Instead, the measurements of the outer pair give the vector Aa,B if the subsystem is resolved, or refer to the photo-center A*,B otherwise. The primary Aa moves around the center of mass with an amplitude reduced by the wobble factor f = q_1/(1+q_1) compared to the inner separation Aa,Ab, where q_1 is the inner mass ratio. For the photo-center, the appropriate wobble factor becomes f* = f - r_1/(1+r_1), where r_1 is the light ratio in the inner pair. The apparent trajectory of Aa,B or A*,B includes the wobble; it is not a closed ellipse. The 20 orbital elements (10 inner and 10 outer) are given as input to the program and then corrected iteratively to reach the χ² minimum. Errors of the positional measures are assumed to be isotropic (transverse equals radial).
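The two wobble factors defined above can be sketched in a few lines. This is a minimal illustration of the relations f = q/(1+q) and f* = f - r/(1+r), not the orbit3.pro code itself; the numerical check uses the inner mass ratio q ≈ 0.84 quoted later for HIP 101955.

```python
def wobble_factor(q):
    """Wobble factor f = q/(1+q): amplitude of the primary's motion
    about the inner center of mass, relative to the inner separation
    (q = inner mass ratio M2/M1)."""
    return q / (1.0 + q)

def photocenter_wobble_factor(q, delta_mag):
    """Photo-center wobble factor f* = f - r/(1+r), where the light
    ratio r of the inner pair follows from its magnitude difference."""
    r = 10.0 ** (-0.4 * delta_mag)
    return wobble_factor(q) - r / (1.0 + r)

# q = 0.84 gives f ~ 0.457, the value fitted for the HIP 101955
# inner pair (BAG 14 Aa,Ab) in the orbital elements table.
print(round(wobble_factor(0.84), 3))        # 0.457
# Equal components (q = 1, delta_mag = 0): the photo-center does not
# move at all, f* = 0.5 - 0.5 = 0.
print(photocenter_wobble_factor(1.0, 0.0))  # 0.0
```

A negative f* (photo-center dominated by the secondary's light deficit relative to its mass share) corresponds to the situation described for HIP 111805 below.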
The errors of the position measurements and RVs are balanced when the condition χ²/M ∼ 1 is reached for each data set, where M is the number of degrees of freedom. Errors of the outliers are increased to reach this balance. The common systemic velocity V_0 is ascribed to the outer system (element 20), while the wobble factor f is stored as element number 10. Currently the code uses only one common wobble factor for all measures of the outer pair. In two objects, HIP 2643 and HIP 103987, the inner subsystem is either unresolved or has questionable measures. The orientation of the inner orbit is then found only by modeling the wobble. In such cases, the inner semimajor axis a_1 and the wobble factor f cannot be determined separately. We have chosen to fix a_1 to its estimated value, while the wobble amplitude is still fitted freely through f. When the tertiary component is brighter than the inner subsystem (it is usually denoted then as A), it is still considered as a "tertiary" by the code. In such a case, the wobble factor f is negative and the outer elements Ω_2 and ω_2 are flipped by 180°. The orbital elements and their errors are listed in Table <ref>. Its first column identifies each subsystem by the Hipparcos number and, in the following line, by the "discoverer code" and component designations joined by the comma. The following columns give the period P, the epoch of periastron T_0, the eccentricity e, the semimajor axis a, the position angle of the ascending node Ω_A (for the epoch J2000) and the argument of periastron ω_A (both angles refer to the primary component), the orbital inclination i, and the RV amplitudes K_1 and K_2. The last column contains the systemic velocity V_0 for the outer orbit and the wobble factor f for the inner orbit. Table <ref>, available in full electronically, lists the positional measures and their residuals. Its first two columns identify the pair by its Hipparcos number and the system designation.
The following columns contain (3) the date of observation in Besselian years, (4) the position angle θ, (5) the separation ρ, (6) the assumed error σ, (7) the residual to the orbit in angle and (8) in separation. The last column (9) indicates the measurement technique, as described in the notes to the Table. Table <ref>, also available in full electronically, contains the RVs. Its first two columns specify the Hipparcos number and the component. Then follow (3) the Julian date, (4) the RV, (5) its error, and (6) the residual. The last column (7) gives the source of the RV, as explained in the notes.

Orbital elements (value ± error; P and T_0 in years, a in arcsec, angles in degrees, K_1, K_2, and V_0 in km s^-1):
2643/outer (HO 3 A,B):       P=70.34±1.36, T_0=1983.62±0.60, e=0.331±0.032, a=0.393±0.020, Ω_A=118.9±0.5, ω_A=137.8±2.7, i=112.3±0.9, K_1=(3.2), K_2=(6.9), V_0=-1.37±0.19
2643/middle (Aa,Ab):         P=4.849±0.020, T_0=1994.927±0.010, e=0.138±0.028, a=(0.058) fixed, Ω_A=100.7±7.9, ω_A=132.3±7.67, i=94.0±8.2, K_1=4.857±0.12, f=0.217±0.034
2643/inner (Aa1,Aa2):        P=0.27595±0.00002, T_0=1997.1620±0.0012, e=0.1986±0.0073, ω_A=113.4±1.7, K_1=12.493±0.109, f=(0.0)
101955/outer (KUI 99 A,B):   P=38.6790±0.031, T_0=2016.110±1.32, e=0.118±0.016, a=0.855±0.110, Ω_A=127.6±0.08, ω_A=233.4±0.5, i=87.40±0.05, K_1=2.66±0.40, V_0=-41.11±0.08
101955/inner (BAG 14 Aa,Ab): P=2.51013±0.00052, T_0=2000.518±0.004, e=0.6170±0.0047, a=0.1242±0.0011, Ω_A=147.1±1.8, ω_A=109.7±1.8, i=24.1±1.7, K_1=3.27±0.12, K_2=6.93±0.71, f=0.457±0.005
103987/outer (WSI 6 A,B):    P=19.205±0.080, T_0=2006.259±3.60, e=0.1743±0.0083, a=0.2195±0.0013, Ω_A=102.8±0.5, ω_A=17.6±2.6, i=65.1±1.0, K_1=4.005±0.082, K_2=9.58±0.22, V_0=-1.97±0.05
103987/inner (DSG 6 Aa,Ab):  P=1.03483±0.00008, T_0=2014.6223±0.0089, e=0.0934±0.0040, a=0.0284 fixed, Ω_A=97.3±12.5, ω_A=124.9±3.1, i=68.6±13.7, K_1=9.528±0.058, f=0.350±0.062
111805/outer (HDO 295 B,A):  P=30.127±0.031, T_0=2010.179±0.073, e=0.324±0.004, a=0.3361±0.0015, Ω_A=154.25±0.09, ω_A=84.92±0.18, i=88.28±0.10, K_1=6.06±0.14, K_2=8.60±0.23, V_0=-22.58±0.08
111805/inner (BAG 15 Ba,Bb): P=1.5012±0.0004, T_0=1986.093±0.093, e=0.022±0.011, a=0.0385±0.0010, Ω_A=334.5±1.0, ω_A=232.9±22.3, i=85.80±1.6, K_1=13.13±0.25, K_2=19.21±3.1, f=-0.330±0.015

Relative positions and residuals (fragment):
HIP  | Sys | Date (year) | θ (°) | ρ (″)  | σ (″)  | O-C_θ (°) | O-C_ρ (″) | Ref
2643 | A,B | 1885.8100   | 121.2 | 0.5000 | 0.1000 | -6.4      | 0.0055    | M
2643 | A,B | 1948.7900   | 137.8 | 0.3400 | 0.1000 | 0.6       | -0.0823   | M
2643 | A,B | 1954.9800   | 130.1 | 0.4900 | 0.1000 | 1.5       | 0.0036    | M
Notes: G: DSSI at Gemini-N; H: Hipparcos; M: micrometer measures; S: speckle interferometry at SOAR; s: other speckle interferometry.

Radial velocity and residuals (fragment):
HIP  | Comp | JD +2400000 | RV (km s^-1) | σ_RV (km s^-1) | O-C (km s^-1) | Ref
2643 | Aa1  | 48851.5080  | 6.940        | 0.490          | -0.826        | T
2643 | Aa1  | 48947.3570  | 5.330        | 0.500          | -0.884        | L
2643 | Aa1  | 49952.5990  | -4.870       | 0.590          | -0.703        | L
2643 | Aa1  | 48913.5800  | -12.890      | 1.700          | -1.497        | L
Notes: C: CHIRON; D: D87; L: CfA; L-: CfA -1 km s^-1; T: <cit.>.

§.§ Photometry and masses

Magnitudes and masses:
HIP    | Comp. | V (mag) | M (M_⊙)
2643   | Aa1   | 7.97    | 1.22
2643   | Aa2   | (14.9)  | (0.36)
2643   | Ab    | (14.5)  | 0.42
2643   | B     | 9.60    | 0.91
101955 | Aa    | 8.39    | 0.74
101955 | Ab    | 9.74    | 0.62
101955 | B     | 9.47    | 0.65
103987 | Aa    | 7.46    | 1.15
103987 | Ab    | (12.5)  | 0.56
103987 | B     | 9.62    | 0.67
111805 | A     | 7.48    | 1.14
111805 | Ba    | 7.98    | 1.03
111805 | Bb    | 9.25    | 0.85

The relative photometry of the resolved pairs is available from Hipparcos and speckle interferometry. This defines the individual magnitudes of the components and, knowing the distance, their absolute magnitudes. All components are normal main sequence stars, allowing us to estimate their masses from the standard relations. We use here the polynomial approximation of the absolute magnitude dependence on mass and wavelength from <cit.>. The magnitudes, distance, and masses constitute the model of each object (Table <ref>). Magnitudes not measured directly are given in brackets. The sum of the estimated masses does not always match the mass sum computed from the orbital elements and the Hipparcos parallax. The latter can be biased by the complex orbital motion in multiple systems that has not been accounted for in the original Hipparcos data reduction <cit.>.
Therefore, the distances used here are derived from the mass sum and the orbits (so-called dynamical parallaxes π_dyn, see Table <ref>). The RV amplitudes are used as a check, with the caveat of a potential RV bias due to blending with other components. The wobble amplitude and the combined color of the system are additional ways to check the consistency of the system models. For all multiple systems studied here, the minor discrepancies between various estimates of masses and magnitudes can be explained by the errors and biases.
§ HIP 2643
HIP 2643 (HD 2993) is known as a visual binary, HO 3 or ADS 463. It was first resolved in 1887 by Holden, so the measures cover 1.8 outer periods. The visual orbits of A,B were computed by several authors; the latest orbit by <cit.> has P=69.15 years. Independently, D. L. discovered RV variations with periods of 100 and 1485 days, meaning that the primary component is a spectroscopic triple, although only one star is seen in the spectra. The whole system is therefore a 3+1 quadruple with a 3-tier hierarchy, like HD 91962. The innermost 101-day pair is Aa1,Aa2, the middle pair is Aa,Ab, and the outer visual pair is A,B (Figure <ref>). Accurate speckle measures of A,B available since 1985 allow us to detect the "wobble" caused by the middle subsystem Aa,Ab and to determine the elements of its astrometric orbit (Figure <ref>). As our code cannot deal with quadruple stars, we first fitted the two inner spectroscopic orbits (the latest RVs were reduced by 1 km s^-1 to account for the outer orbit). Then the RV variations caused by the 101-day inner orbit were subtracted, and the corrected RVs were used jointly with the positional measurements to fit the middle and outer orbits.
As there are no resolved measures of the middle subsystem, its semimajor axis was estimated from the masses and period and fixed to 58 mas, while the wobble factor f was fitted. Although only two stars are directly observed, we can evaluate the masses of all four components. To begin with, we assume that the innermost orbit has a large inclination (this is justified in the following paragraph).[We do not determine the orientation of the inner orbit in this paper.] Then the inner RV amplitude and the estimated mass of Aa1 lead to the mass of Aa2, 0.36 M_⊙. The inclination of the middle orbit is known, hence the mass of Ab is 0.42 M_⊙, while the mass of B is estimated from its absolute magnitude. The outer mass sum of 2.91 M_⊙ leads to the dynamical parallax of 16.1 mas; the Hipparcos parallax of 17.5 mas is likely biased. Taking the masses listed in Table <ref>, we convert them back to the V and K magnitudes using the same standard relations. It turns out that the spectroscopic secondaries Aa2 and Ab are indeed faint and do not influence the combined photometry. The modeled and measured combined K magnitudes are 6.26 and 6.35 mag, respectively, so the model reproduces the actual V-K color reasonably well. The wobble factor f=0.22 leads to the mass ratio of 0.28 in the middle orbit, while the adopted masses imply q=0.27, in good agreement.
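The dynamical parallax quoted above follows from Kepler's third law, a_AU³ = M P², with a_AU = a″/π. A minimal sketch, using the HIP 2643 outer elements (a = 0.393″, P = 70.34 yr) and the mass sum of 2.91 M_⊙ from the text; the small offset from the quoted 16.1 mas reflects rounding of the inputs:

```python
def dynamical_parallax_mas(a_arcsec, period_yr, mass_sum_msun):
    """Kepler's third law in solar units: a_AU^3 = M_sum * P^2,
    and a_AU = a_arcsec / parallax_arcsec, so
    parallax = a_arcsec / (M_sum * P^2)^(1/3)."""
    return 1000.0 * a_arcsec / (mass_sum_msun * period_yr**2) ** (1.0 / 3.0)

pi_dyn = dynamical_parallax_mas(0.393, 70.34, 2.91)
print(round(pi_dyn, 1))  # ~16.2 mas, vs 16.1 mas quoted in the text
```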
If the innermost orbit had a small inclination, the mass of Aa2 would be larger, and the wobble factor would be smaller than measured. Given the estimated masses, we evaluate the RV amplitudes in the outer orbit, K_1 = 3.2 km s^-1 and K_2 = 6.9 km s^-1. The free adjustment leads to the much smaller K_1 = 1.4 km s^-1. The RV of B in Figure <ref> is a fake point added to show the expected RV curve for the visual secondary that is not actually measured. Blending with the lines of B likely explains the too-small RV amplitude derived by the free fit to the RVs and increases the RV errors of Aa1, compared to a truly single star (rms residuals 0.56 km s^-1). The RV residuals indeed correlate positively with the RV, as expected from the blending effect. The sign of the RV trend in the outer orbit establishes its correct node. New RV measurements would be helpful for a better definition of the outer orbit and of all the periods. The spectrum of B should be detectable, as it contributes a 0.21 fraction of the combined light in the V band. The inner period ratio is 17.57±0.07, the outer period ratio is 14.43±0.28. The angle Φ between the middle and outer orbits is 25.4°±8.5°. With such relative inclination, the orbit of Aa,Ab precesses, but does not experience the Kozai-Lidov cycles. The small eccentricity of all orbits supports indirectly the absence of such cycles and the approximate coplanarity of all orbits.
§ HIP 101955
This is a nearby (17 pc) triple system, HD 196795 or GJ 795. It has an extensive literature. The inner subsystem Aa,Ab was discovered by <cit.> (hereafter D87) using CORAVEL and later resolved for the first time by <cit.>; <cit.> presented a detailed study of this triple system. Unlike the other systems featured here, the inner orbit has a substantial eccentricity and the two orbits have a large mutual inclination.
Figure <ref> displays the inner orbit, while Figure <ref> shows the trajectory of A,B with the wobble. Only the speckle measures of A,B since 1981 are used to fit the two orbits jointly, with the outer period fixed to its value found by using all data. The semimajor axis of the wobble is 55.1±0.6 mas. The weighted rms residuals for both A,B and Aa,Ab are ∼3 mas in position, and 0.48 and 0.58 km s^-1 for the RVs of Aa and Ab, respectively. One spectrum of HIP 101955 was taken with CHIRON in 2015 on JD 2457261. Its cross-correlation function (CCF) with the binary mask is an asymmetric blend that can be fitted by two Gaussians. The fainter component is in fact a blend of Ab and B, as their RVs were close at that time. The ratio of the dip areas of Ab+B and Aa in the CCF is 0.50, or Δm=0.75 mag, matching roughly the resolved photometry (the system model predicts 0.45 mag). The rms widths of the CCF dips of Aa and Ab+B are 4.12 and 5.11 km s^-1, respectively. The RVs measured by Duquennoy are likely affected by blending (CORAVEL did not resolve the blends, except on two occasions). Owing to this, the "spectroscopic" masses of Aa and Ab derived from the combined inner orbit are too small (0.7 and 0.4 M_⊙). The system model starts with the combined V=7.84, the Hipparcos parallax, and the magnitude differences ΔV_Aa,Ab = 1.35 mag <cit.>, ΔV_A,B = 1.35 mag (Hipparcos). Individual magnitudes of the components and their estimated masses are listed in Table <ref>, leading to the mass sum of 2.02 M_⊙ for AB. The orbit of A,B matches this mass sum for a parallax of 59.0 mas, in excellent agreement with 58.8 mas determined by <cit.>. The mass sum derived from the inner orbit is then 1.49 M_⊙, while the model gives 1.36 M_⊙. The model predicts the combined K magnitude of 4.62 mag; the observed one is 4.74 mag. The adopted masses imply q_Aa,Ab=0.84 and match the wobble amplitude, which corresponds to q_Aa,Ab = f/(1-f) = 0.84.
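The consistency checks of this kind can also be run on the RV amplitudes: for an edge-on outer orbit, the sum K_1 + K_2 follows directly from the mass sum and the orbital elements. A minimal sketch (not the fitting code), using the HIP 101955 outer elements from the orbital elements table and the model mass sum of 2.02 M_⊙:

```python
import math

AU_PER_YR_IN_KMS = 4.7406  # 1 AU/yr expressed in km/s

def outer_rv_amplitude_sum(mass_sum_msun, period_yr, incl_deg, ecc):
    """K1 + K2 of a binary: relative semimajor axis from Kepler's
    third law (a_AU^3 = M P^2), mean orbital speed 2*pi*a/P,
    projected with sin(i) and corrected for eccentricity."""
    a_au = (mass_sum_msun * period_yr**2) ** (1.0 / 3.0)
    v_kms = 2.0 * math.pi * a_au / period_yr * AU_PER_YR_IN_KMS
    return v_kms * math.sin(math.radians(incl_deg)) / math.sqrt(1.0 - ecc**2)

# HIP 101955 A,B: M_sum = 2.02 Msun, P = 38.679 yr, i = 87.40 deg, e = 0.118
k_tot = outer_rv_amplitude_sum(2.02, 38.679, 87.40, 0.118)
print(round(k_tot, 1))  # 11.2 km/s
```

The individual amplitudes then split in inverse proportion to the component masses, K_1/K_2 = M_B/M_A.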
However, the spectroscopic inner mass ratio is 0.45; it is biased by the reduced RV amplitude of Aa and strongly contradicts the relative photometry of Aa,Ab. Even if the RV amplitudes in the inner orbit were measured reliably, its small inclination prevents a good independent measurement of the stellar masses and distance. The RVs of Aa also do not fit the outer orbit well, due to blending with the other components Ab and B. The mass sum in the outer system corresponds to K_1 + K_2 = 11.2 km s^-1 (this is a robust estimate, given the high inclination), and the mass ratio q_A,B leads to the estimated amplitudes in the outer orbit K_1 = 3.0 and K_2 = 8.2 km s^-1. The fitted K_1 in the outer orbit converges to 2.66 km s^-1. The period ratio is 15.41±0.13, the angle between the orbital angular momenta is Φ = 64.8°±1.4°. Strong interaction between the orbits and Kozai-Lidov cycles are expected <cit.>. <cit.> estimated the period of these cycles as 560 years and noted that they may be observable.
§ HIP 103987
HD 200580, alias G25-15, is a metal-poor ([Fe/H] ∼ -0.6) multiple system with a fast proper motion of 0″.46 per year. The single-lined spectroscopic orbit with a one-year period was published by <cit.>, while the outer system A,B was first resolved by <cit.> in 1999 and is known as WSI 6. The astrometric orbit of the inner subsystem was published by <cit.>. The inner pair Aa,Ab was resolved at Gemini, and its first visual orbit was published in H15. The available RVs now cover 1.5 outer periods and lead to the spectroscopic orbits of both the inner and outer subsystems. The 19-year outer period found from the RVs matches well the visual orbit that covers almost the full ellipse; the preliminary 21-year orbit of A,B was published by <cit.>. The pair A,B was observed at SOAR several times, but the inner subsystem has never been resolved. In 2015, the star was observed twice with CHIRON in order to get fresh RVs and to detect the lines of other components. Indeed, the CCF of the spectrum and mask is double (Figure <ref>).
Its components correspond to the visual primary Aa and the visual secondary B. There is no trace of Ab, which should have an RV of +15 km s^-1 at the moment of observation; the non-detection implies that Ab is at least ∼4 mag fainter than B and contradicts the speckle photometry in H15. Both CCF dips are very narrow and correspond to the axial rotation V sin i of 2.2 and 1.5 km s^-1 for Aa and B, respectively. The ratio of the CCF areas corresponds to Δm_Aa,B = 1.43 mag. At SOAR we measured Δy_A,B = 2.17 mag with the rms scatter of 0.05 mag. The spectroscopic Δm is underestimated because B has a lower effective temperature and stronger lines than Aa. The speckle measures of A,B are accurate enough to detect the wobble caused by the subsystem Aa,Ab and to determine all its orbital elements except a_1. Figures <ref> and <ref> show the inner and outer orbits. The weighted rms residuals to the measures of A,B in both coordinates are 1.3 and 2.5 mas; the wobble amplitude is 9.9±1.8 mas. The astrometry has adequate phase coverage of the inner period mainly because the pair was extensively monitored at SOAR during 2015 with the goal to resolve Aa,Ab at quadratures, where the predicted separation reaches 20 mas. Most other measures of A,B are from Gemini and have an excellent accuracy of ∼1 mas. They were obtained at nearly the same phase of the inner orbit, as dictated by the Gemini time allocations. Joint analysis of the RVs and astrometry leads to the reliable inner orbit. The largest correlations are +0.5 between i_1 and f (which defines the astrometric amplitude) and -0.5 between i_1 and Ω_1. The wobble is detected at the 5.6σ significance level. To test the robustness of the relative orbit orientation, we fixed f to the values 0.29 and 0.41 (within ±σ of the nominal) and repeated the fits. In both cases the angle Φ increased by 2°, less than its error. The subsystem Aa,Ab was resolved at Gemini in four seasons (including the preliminary data of 2015 communicated by E. Horch).
The separation was close to the diffraction limit of Gemini, hence the estimates of the separation and Δm are correlated. Probably for this reason, the Δm_Aa,Ab = 1.54 mag at 692 nm published in H15 appears strongly under-estimated (E. Horch, 2016, private communication) and contradicts both the lack of the spectroscopic detection of Ab and its mass evaluated below. According to Figure 1 in H15 and Δm_Aa,Ab = 4.1 mag at 692 nm estimated here, the subsystem should be undetectable. In all Gemini runs, the pair Aa,Ab was oriented in the North-South direction, where the atmospheric dispersion, which is not physically compensated in the DSSI speckle camera, could distort the measures. The prograde orbit of Aa,Ab computed in H15 from the resolved measures has a near-zero inclination, which contradicts the non-zero RV amplitude of Aa. This said, the positions of the inner pair measured at Gemini (Figure <ref>) roughly match its orbit, except the 2014 measures with nearly double separation. We question the resolved measures of the inner pair and use them in the combined orbital fit with very small weights. The astrometric orbit of Aa,Ab by <cit.> has the expected semimajor axis of 10.4 mas, but corresponds to retrograde motion (i=162°) and has a very different Ω_A = 15.6° compared to our orbit. These authors have not revised the parallax, which is necessary for a one-year binary. Moreover, their results could be biased by the visual component B, unresolved by Hipparcos. Therefore, this astrometric orbit should be ignored. Owing to the RV(B) measured with CHIRON, the combined orbit of A,B yields the orbital parallax of 23.4±0.3 mas and the components' masses 1.62±0.09 M_⊙ for A and 0.67±0.06 M_⊙ for B. However, the orbital masses are proportional to the cube of the RV amplitudes, and if the amplitudes are slightly reduced by the line blending with other components, the orbital masses are under-estimated.
The Hipparcos parallax of 19.3 mas is evidently biased by the 1-year wobble. We adopt the masses of 1.15, 0.56, and 0.67 M_⊙ for Aa, Ab, and B, respectively, based on the system model. They correspond to the mass sum of 2.38 M_⊙, slightly larger than measured, and the dynamical parallax of 23.2 mas, matching the orbital parallax within its error. The inner semimajor axis a_Aa,Ab = 28.4 mas is then computed and fixed (remember that the resolved measures of Aa,Ab are questionable, while the wobble amplitude is determined by the fitted factor f). The inner mass ratio q_Aa,Ab = 0.49 matches the measured wobble amplitude. The spectroscopic mass of Ab calculated from the inner RV amplitude and inclination is 0.49 M_⊙. Agreement with the adopted mass of Ab would be reached by a 4% increase of K_1 or for the inner inclination of 55° instead of the measured 69°±14°. If Ab is a normal M-type dwarf, we expect Δm_Aa,Ab = 4.1 mag at 692 nm, much larger than the Δm_Aa,Ab = 1.5 mag measured at Gemini and in agreement with the spectroscopic non-detection of Ab with CHIRON. Alternatively, Ab could be a white dwarf. The components' magnitudes listed in Table <ref> are computed by using ΔV_A,B = 2.13 mag measured by speckle interferometry at SOAR and by further assuming that ΔV_Aa,Ab = 5 mag to match the adopted mass of Ab. The model reproduces the combined K magnitude and predicts V-K = 1.63 mag; the actual V-K = 1.52 mag is slightly bluer, as it should be for a low-metallicity star. The outer and inner orbits are inclined by Φ = 6.2°±9.0°, i.e. are almost coplanar. The period ratio is 18.55±0.08.
§ HIP 111805
This system, HD 214608, is also metal-poor; it was studied in H15. The subsystem Ba,Bb, first resolved by <cit.>, belongs to the secondary component of the well-studied visual pair HDO 295 (ADS 16138), known since 1887. Both orbits are seen nearly edge-on and, as shown below, are well aligned, Φ = 2.5°±1.5°.
For this reason, the wobble is not obvious in the outer orbit plot. Most speckle observations did not resolve the inner pair Ba,Bb, which has fewer speckle measures compared to A,B. The spectroscopic orbits of both subsystems were determined by <cit.>. Additional RVs obtained by D. L. are used here. They were derived by TODCOR and refer to the two brightest components A and Ba. The components often blend, therefore several highly deviant RVs were discarded in the orbit calculation. In H15, the authors adopted the SB elements by Duquennoy and fitted only the remaining elements to the outer orbit. However, Duquennoy fixed the period of A,B using its older visual orbit, not spectroscopy. The combined orbit computed here uses all the data and removes this inconsistency. The “primary” components are Ba and Bb, the “tertiary” is A (to get the orbit of B around A, change the outer elements Ω_A and ω_A by 180^∘), and the wobble coefficient f = -0.33 is negative. In the last iteration, we fixed the outer period and used only the speckle data for A,B in fitting the remaining 19 free parameters. As can be seen in Figures <ref> and <ref>, the RV curves are rather noisy owing to the line blending. The weighted rms RV residuals are 0.97, 3.5, and 0.78 km s^-1 for Ba, Bb, and A, respectively (with some outliers removed or given low weight). Only two uncertain measures of RV(Bb) by Duquennoy define the inner amplitude K_2. The rms residuals of the positional measures are from 3 to 5 mas in X and Y, for both orbits. The Hipparcos parallax of 26.2±0.6 mas gives a mass sum of only 2.3 M_⊙ for the well-defined outer orbit, which is too small. We adopt the orbital parallax of 24.1 mas derived from the combined orbit of A,B and the mass sum of 3.0 M_⊙, in agreement with the model masses.
The mass sum in the inner orbit is 1.82 M_⊙ and, by subtraction, the mass of A is 1.18 M_⊙. It matches the inner RV amplitudes, but the large error of K_2 makes the M sin^3 i estimate in the inner orbit quite uncertain. H15 derived the masses of A, Ba, and Bb as 1.12, 0.92, and 0.77 M_⊙ using standard relations and disregarding the low metallicity. The corresponding spectral types are F9V, G5V, and K1V. We repeated the modeling assuming the orbital parallax of 24.1 mas. The relative photometry is Δ V_A,B = 0.50 mag, based on the Hipparcos datum and some measurements by Horch, while Δ V_Ba,Bb = 1.27 mag is measured by Balega et al. The derived masses are 1.14, 1.03, and 0.85 M_⊙, or the mass sum of 3.02 M_⊙. The combined K magnitude of the model is 5.25, the observed one is 5.31 mag. The model matches both orbits quite well. The model implies q_Ba,Bb = 0.83. The measured wobble factor f^* = -0.33 (most measures of A,B refer to the photo-center of B) corresponds to q_Ba,Bb ≈ 0.60, and the uncertain inner spectroscopic mass ratio is q_Ba,Bb ≈ 0.68. It is possible that the component Bb is less massive and fainter than deduced from the photometric model.

§ SUMMARY AND DISCUSSION

Relative orbit orientation and period ratio

HIP      P_out (yr)  e_out  P_out/P_in   Φ (degrees)
2643     70.34       0.33   14.43±0.28   25.4±8.5
2643      4.85       0.14   17.57±0.07   …
101955   38.68       0.12   15.41±0.13   64.8±1.4
103987   19.20       0.17   18.55±0.08   6.2±9.0
111805   30.13       0.32   20.07±0.02   2.5±1.5

We determined inner and outer orbits in four multiple systems using both resolved measures and RVs. The ascending nodes are therefore identified without ambiguity, allowing us to calculate the angle Φ between the orbital angular momentum vectors. These angles, period ratios, outer periods P_out, and outer eccentricities e_out are listed in Table <ref>. In three systems the orbits are approximately co-aligned, and both inner and outer eccentricities are small.
In such a case, the inner orbits precess around the total angular momentum, with Φ being approximately constant. Only in HIP 101955 are the orbits closer to perpendicularity than to alignment, Φ = 65^∘. In this configuration, the angle Φ and the inner eccentricity oscillate in the so-called Kozai-Lidov cycles. Indeed, the inner eccentricity in HIP 101955 is large, e = 0.61. None of the four close multiple systems is counter-rotating (all have Φ < 90^∘), in line with the general trend of orbit co-alignment noted by <cit.>. The massive counter-rotating close triple σ Ori with Φ ∼ 120^∘ <cit.> could have been formed by a different process. Figure <ref> compares the period ratios and outer eccentricities of the multiple systems studied here with the dynamical stability criterion of <cit.>. The outer orbit in HIP 2643, as well as ζ Aqr <cit.>, does not satisfy the stricter empirical criterion of <cit.>, which therefore is not valid. The quadruple system HIP 2643 with a 3+1 architecture resembles the “planetary” quadruple HD 91962 <cit.> in several ways. In both multiple systems, all three orbits have moderate eccentricities, the outer and middle orbits are not far from coplanarity, and the period ratios between the hierarchical levels are small. This suggests that there was some interaction between the orbits, at least during the formation of these systems. However, the ratio of the two inner periods in HD 91962 is 19.0, suggesting a mean motion resonance, while it is not an integer in HIP 2643. Some data used in this work were obtained at the Southern Astrophysical Research (SOAR) telescope. We thank E.
Horch for critical re-evaluation of the Gemini speckle data and communication of his unpublished observations of HIP 103987. We also thank both Referees for a careful and inquisitive check of the manuscript. This work used the SIMBAD service operated by Centre des Données Stellaires (Strasbourg, France), bibliographic references from the Astrophysics Data System maintained by SAO/NASA, data products of the Two Micron All-Sky Survey (2MASS), and the Washington Double Star Catalog maintained at USNO.

[Albrecht et al.(2014)]Albrecht2014 Albrecht, S., Winn, J. N., Torres, G. et al. 2014, ApJ, 785, 83
[Balega et al.(2002)]Balega2002 Balega, I. I., Balega, Y. Y., Hofmann, K.-H. et al. 2002, A&A, 385, 87
[Duquennoy(1987)]D87 Duquennoy, A. 1987, A&A, 178, 114 (D87)
[Fabrycky et al.(2014)]Fabrycky2014 Fabrycky, D. C., Lissauer, J. J., Ragozzine, D. et al. 2014, ApJ, 790, 146
[Jancart et al.(2005)]Jancart2005 Jancart, S., Jorissen, A., Babusiaux, C. & Pourbaix, D. 2005, A&A, 442, 365
[Hartkopf et al.(2001)]VB6 Hartkopf, W. I., Mason, B. D. & Worley, C. E. 2001, AJ, 122, 3472 (VB6)
[Horch et al.(2015)]H15 Horch, E. P., van Altena, W. F., Demarque, P. et al. 2015, AJ, 149, 151 (H15)
[Kervella et al.(2013)]Kervella2013 Kervella, P., Mérand, A., Petr-Gotzens, M. G. et al. 2013, A&A, 552, 18
[Latham(1992)]Latham1992 Latham, D. W. 1992, in ASP Conf. Ser. 32, Complementary Approaches to Binary and Multiple Star Research, ed. H. McAlister & W. Hartkopf (IAU Colloq. 135) (San Francisco: ASP), 110
[Latham(1985)]Latham1985 Latham, D. W. 1985, in IAU Colloq. 88, Stellar Radial Velocities, ed. A. G. D. Philip & D. W. Latham (Schenectady: L. Davis), 21
[Latham et al.(2002)]Latham2002 Latham, D. W., Stefanik, R. P., Torres, G. et al. 2002, AJ, 124, 1144
[Malogolovets et al.(2007)]Malogolovets2007 Malogolovets, E. V., Balega, Yu. Yu., & Rastegaev, D. A. 2007, AstBu, 62, 111
[Mason et al.(2001)]Mason2001 Mason, B., Hartkopf, W. I., Holdenried, E. R. & Rafferty, T.
J. 2001, AJ, 121, 3224
[Mason & Hartkopf(2014)]Msn2014a Mason, B. D. & Hartkopf, W. I. 2014, IAUDS, 183, 1
[Mardling & Aarseth(2001)]MA2001 Mardling, R. A. & Aarseth, S. J. 2001, MNRAS, 321, 398
[McArthur et al.(2010)]McArthur2010 McArthur, B. E., Benedict, G. F., Barnes, R. et al. 2010, ApJ, 715, 1203
[Riddle et al.(2015)]RAO Riddle, R. L., Tokovinin, A., Mason, B. D. et al. 2015, ApJ, 799, 4
[Schaefer et al.(2016)]Schaefer2016 Schaefer, G. H., Hummel, C. A., Gies, D. R. et al. 2016, AJ, 252, 213
[Söderhjelm(1999)]Soderhjelm1999 Söderhjelm, S. 1999, A&A, 341, 121
[Sterzik & Tokovinin(2002)]ST02 Sterzik, M. & Tokovinin, A. 2002, A&A, 384, 1030
[Szentgyorgyi & Furész(2007)]TRES Szentgyorgyi, A. H., & Furész, G. 2007, in The 3rd Mexico-Korea Conference on Astrophysics: Telescopes of the Future and San Pedro Mártir, ed. S. Kurtz, RMxAC, 28, 129
[Tokovinin & Smekhov(2002)]TS02 Tokovinin, A. A. & Smekhov, M. G. 2002, A&A, 382, 118
[Tokovinin(2004)]Tok2004 Tokovinin, A. 2004, RMxAC, 21, 7
[Tokovinin et al.(2013)]CHIRON Tokovinin, A., Fischer, D. A., Bonati, M. et al. 2013, PASP, 125, 1336
[Tokovinin(2014)]FG14 Tokovinin, A. 2014, AJ, 147, 86
[Tokovinin et al.(2015)]Planetary Tokovinin, A., Latham, D. W., & Mason, B. D. 2015, AJ, 149, 195
[Tokovinin et al.(2016)]SOAR15 Tokovinin, A., Mason, B. D., Hartkopf, W. I. et al. 2016, AJ, 151, 153
[Tokovinin(2016a)]CHIRON-1 Tokovinin, A. 2016a, AJ, 152, 11
[Tokovinin(2016b)]ZetaAqr Tokovinin, A. 2016b, ApJ, 831, 151
[van Leeuwen(2007)]HIP2 van Leeuwen, F. 2007, A&A, 474, 653
[Xu et al.(2015)]Xu2015 Xu, X.-B., Xia, F., & Fu, Y.-N. 2015, RAA, 15, 1857
[Zhu et al.(2016)]Zhu2016 Zhu, L.-Y., Zhou, X., Hu, J.-Y. et al. 2016, AJ, 151, 107

Facilities: ORO:Wyeth (CfA Digital Speedometer), FLWO:1.5m (CfA Digital Speedometer, TRES), SOAR (HRcam), CTIO:1.5m (CHIRON)
(arXiv:1702.07905) “Relative orbit orientation in several resolved multiple systems,” Andrei Tokovinin & David W. Latham, astro-ph.SR, 2017 February 25.
1 Center for Computational Astrophysics, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan
2 Division of Theoretical Astronomy, National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan
3 Institute for Global Prominent Research, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan
4 Department of Physics, Graduate School of Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan
kawashima.tomohisa@nao.ac.jp

accretion, accretion disks — black hole physics — magnetohydrodynamics (MHD)

A Possible Time-Delayed Brightening of the Sgr A* Accretion Flow after the Pericenter Passage of G2 Cloud

Tomohisa Kawashima 1,2,*, Yosuke Matsumoto 3,4, and Ryoji Matsumoto 4

Accepted 2017 May 2. Received 2017 May 2; in original form 2017 February 25

A possibility of time-delayed radio brightening of Sgr A* triggered by the pericenter passage of the G2 cloud is studied by carrying out global three-dimensional magnetohydrodynamic simulations taking into account the radiative cooling of the tidal debris of the G2 cloud. Magnetic fields in the accretion flow are strongly perturbed and re-organized after the passage of G2. We have found that the magnetic energy in the accretion flow increases by a factor of 3-4 in 5-10 years after the pericenter passage of G2 through a dynamo mechanism driven by the magneto-rotational instability. Since this B-field amplification enhances the synchrotron emission from the disk and the outflow, the radio and infrared luminosity of Sgr A* is expected to increase around A.D. 2020. The time delay of the radio brightening enables us to determine the rotation axis of the preexisting disk.

§ INTRODUCTION

The pericenter passage of an object named G2 close to the Galactic center black hole (BH) Sgr A* <cit.> was expected to enhance the activity of Sgr A* by supplying tidally stripped gas to the BH.
The distance of its pericenter from the Galactic center BH is only ∼ 2×10^3 r_s <cit.>, where r_s is the Schwarzschild radius. Br-γ observations indicate that the size of G2 is ∼ 15 mas, i.e. ∼ 10^3 r_s, which is as large as its pericenter distance. The estimated mass of the gas component of G2 is ∼ 3 M_⊕, which is comparable to that of the Sgr A* accretion flow, i.e., a hot accretion flow onto the Galactic center BH <cit.>. G2 was, therefore, expected to affect the dynamics of the Sgr A* accretion flow and trigger a flare. However, no brightening event induced by the pericenter approach of G2 has been observed in Sgr A* <cit.>. After the discovery of G2, a number of numerical simulations have been performed <cit.>. Most of these simulations did not take into account magnetic fields despite their important roles in accretion processes <cit.>. <cit.> carried out three-dimensional (3D) general relativistic magnetohydrodynamic (MHD) simulations in order to study the shock structure, at which electrons may be accelerated, during the pericenter passage of the G2 cloud. Long-term MHD simulations of the G2 cloud, however, have not yet been carried out, although they are necessary to explore the future brightening of Sgr A*. In this paper, we present the possibility of time-delayed amplification of magnetic fields and subsequent accretion by carrying out longer-time-scale 3D MHD simulations of a hot accretion flow impacted by a cloud. Since the gas supplied by the cloud needs several rotation periods at the pericenter to settle into a rotating torus, and typically 10 rotation periods for the re-organization of magnetic fields, it takes 5-10 years until the magnetic fields are re-amplified by the dynamo action driven by the magneto-rotational instability (MRI). Therefore, in order to study the possibility of the time-delayed brightening, we need to carry out MHD simulations for time scales of a decade.
This is the motivation of this paper. It is still controversial whether or not G2 harbors a star at its center: <cit.> observed G2 in the infrared band using SINFONI and proposed that G2 is a pure gas clump, possibly formed from a gas stream around Sgr A*, while <cit.> observed G2 in the L' band using the Keck observatory and proposed that G2 harbors a star, because the size of the dust component of G2 does not change during its pericenter passage. For simplicity, we assume a pure gas cloud in this paper. This assumption should be reasonable, partly because a tidally disrupted gas feature is evident in the p-v diagram <cit.>, and partly because the size of the region where the gravity of the central star would dominate that of the Galactic center BH is only ∼ 1% of the estimated cloud size <cit.>, so that we can neglect the effect of the central star even if it exists.

§ SIMULATION MODEL

We solve a set of MHD equations taking into account the anomalous resistivity <cit.> and the effects of radiative cooling in cylindrical coordinates (ϖ, φ, z). In this paper, ϖ and r denote the cylindrical and the spherical radius, respectively. The equations are the same as those in <cit.>, except that we incorporate the effects of radiative cooling due to recombination with the approximate cooling function <cit.>:

Λ_rec = 10^-21 (ρ/m_p)^2 exp[-(9/2)(log T - 4)^2]   erg cm^-3 s^-1,

as well as the bremsstrahlung emission Λ_brem = 6.2×10^20 ρ^2 T^1/2 erg cm^-3 s^-1, since the structure of the G2 cloud should be affected by the radiative cooling <cit.>. Here, m_p, ρ, and T are the proton mass, mass density, and gas temperature, respectively.
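As a sanity check on where each term matters, the two cooling rates can be evaluated directly; the sketch below is ours, not code from the paper (we read log as log10, since the recombination term is then peaked at T = 10^4 K, as appropriate for recombination cooling):

```python
import math

M_P = 1.6726e-24   # proton mass [g]

def cooling_rates(rho, T):
    """Return (recombination, bremsstrahlung) rates [erg cm^-3 s^-1]
    from the two expressions above; rho in g cm^-3, T in K."""
    n = rho / M_P
    lam_rec = 1.0e-21 * n * n * math.exp(-4.5 * (math.log10(T) - 4.0) ** 2)
    lam_brem = 6.2e20 * rho * rho * math.sqrt(T)
    return lam_rec, lam_brem

rho = 1.0e-21                            # of order the RIAF density used later
rec4, br4 = cooling_rates(rho, 1.0e4)    # cool cloud material
rec8, br8 = cooling_rates(rho, 1.0e8)    # hot flow
```

At 10^4 K the recombination term dominates by several orders of magnitude, while in the hot flow it is exponentially suppressed and bremsstrahlung takes over, which is why both terms are needed for the cloud-flow system.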
In this work, we do not incorporate the effects of synchrotron cooling. It is well known that, in optically-thin hot accretion disks, bremsstrahlung cooling dominates synchrotron cooling in the outer disk, which we simulate in this work, while synchrotron cooling can be dominant in the inner disk. This is because the synchrotron cooling rate is roughly proportional to T if we assume energy equipartition, while the bremsstrahlung cooling rate is proportional to T^1/2. Since the gas temperature decreases with radius, bremsstrahlung emission dominates synchrotron radiation in the outer disk. In this work, as mentioned below, we calculate the outer accretion flow, in which bremsstrahlung emission is the dominant radiative process. In addition, the G2 cloud is so cool that synchrotron emission cannot dominate the Br-γ and bremsstrahlung emission, since synchrotron photons are radiated by relativistic electrons. These are the reasons why we ignore synchrotron cooling in this work. The anomalous resistivity is incorporated by using the same formula as that employed in <cit.>, i.e., η = η_0 [max((j/ρ)/v_c - 1, 0)]^2, where η is the magnetic diffusivity and j is the current density. There are two free parameters: the coefficient for the anomalous resistivity η_0 and the threshold value v_c. We set η_0 = 1.25×10^-3 and v_c ∼ 69.5 in our simulation units (i.e., v_c = 0.9c). We carry out global 3D simulations by using the newly-developed MHD code CANS+ <cit.>. The code is based on the HLLD approximate Riemann solver proposed by <cit.>. In order to preserve monotonicity and to achieve high-order accuracy in space, we employ a monotonicity-preserving, fifth-order accurate interface value reconstruction method, MP5 <cit.>. A third-order TVD Runge-Kutta method is used for the time integration. We adopt the hyperbolic divergence cleaning method <cit.> in order to minimize numerical errors in the divergence-free condition of the magnetic field.
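The anomalous-resistivity switch η = η_0 [max((j/ρ)/v_c − 1, 0)]^2 is simple enough to sketch directly; the function below is an illustration in code units, not the CANS+ implementation:

```python
def anomalous_eta(j, rho, eta0=1.25e-3, v_c=69.5):
    """Anomalous magnetic diffusivity in code units: identically zero
    where the drift speed j/rho stays below the threshold v_c (ideal
    MHD), growing quadratically above it."""
    return eta0 * max((j / rho) / v_c - 1.0, 0.0) ** 2

# Below threshold the flow is ideal; at j/rho = 2*v_c the diffusivity
# equals eta0 itself.
eta_low = anomalous_eta(50.0, 1.0)
eta_high = anomalous_eta(139.0, 1.0)
```

Localizing the resistivity to strong current sheets in this way allows reconnection there without diffusing the field in the bulk of the flow.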
The number of computational cells is (N_ϖ, N_φ, N_z) = (256, 128, 320). The cell size is constant in the inner region: Δϖ = Δz = 30 r_s for 0 < ϖ < ϖ_0 and |z| < ϖ_0. Here ϖ_0 is the location of the pressure maximum of the initial torus, which we set to ϖ_0 ≡ 3×10^3 r_s. Outside this region, Δϖ and Δz increase with ϖ and z, respectively: Δϖ_n = min(1.05 Δϖ_n-1, Δϖ_max), Δz_n = min(1.05 Δz_n-1, Δz_max) for z>0, and Δz_n = min(1.05 Δz_n+1, Δz_max) for z<0. Here, n denotes the sequential cell number, and Δϖ_max and Δz_max are set to 200 r_s. In the azimuthal direction, the cell size is constant, i.e., Δφ = 2π/N_φ. The computational domain is, thus, 0 ≤ ϖ ≤ 4.323×10^4 r_s, 0 ≤ φ ≤ 2π, and |z| ≤ 2.778×10^4 r_s. A spherical absorbing inner boundary is imposed at r_in = 450 r_s, i.e., the physical quantity q is approximated by q = q' - c'(q' - q_0) if the cell center lies inside r_in, where c' is the damping coefficient, q' is the numerically obtained quantity at the next timestep, and q_0 is the initial value [for details, see equations (7) and (8) in <cit.>]. The shape of this absorbing boundary approximately matches the exact spherical one, since the cell size is ∼ 5% of the inner boundary radius. The outer boundaries are free boundaries through which waves can be transmitted. First, we perform simulations of hot accretion flows without introducing a gas cloud until the accretion flow attains a quasi-steady state. After the quasi-steady state is realized, a gas cloud in pressure equilibrium with the ambient gas is placed in the simulation box.

§.§ A Hot Accretion Flow Model for Sgr A*

We assume a rotating, equilibrium torus with its pressure maximum at ϖ = ϖ_0. The torus is assumed to be threaded by a weak, purely toroidal initial magnetic field, using the equilibrium solution for magnetized tori proposed by <cit.>. The plasma β at the pressure maximum of the torus is initially 100.
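The stretched-grid rule quoted earlier (uniform 30 r_s cells out to ϖ_0, then 5% growth per cell capped at 200 r_s) can be sketched as follows; the boundary details are not specified in the text, so the resulting cell count is illustrative only and need not reproduce the paper's N_ϖ = 256:

```python
def cell_widths(d0=30.0, d_max=200.0, growth=1.05,
                uniform_to=3.0e3, edge=4.323e4):
    """Radial cell widths in units of r_s: uniform d0 out to uniform_to,
    then each width 1.05x the previous, capped at d_max, until the
    domain edge is covered."""
    widths, pos, d = [], 0.0, d0
    while pos < edge:
        if pos >= uniform_to:
            d = min(growth * d, d_max)
        widths.append(d)
        pos += d
    return widths

w = cell_widths()
```

The uniform inner block contains 3×10^3 / 30 = 100 cells, and the geometric stretching reaches the 200 r_s cap after roughly forty more, concentrating resolution where the torus and the cloud pericenter sit.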
We set the direction of the rotation axis of the initial torus to be the same as the z-axis. The torus is embedded in a hot, isothermal, non-rotating, static coronal atmosphere with gas temperature ∼ 1.3×10^10 K. For more details of the simulation set-up, see, e.g., <cit.>. After the growth of the non-axisymmetric mode of the MRI (∼ 10 orbital periods at the pressure maximum of the initial torus), the angular momentum is efficiently transported by the Maxwell stress and an accretion flow is formed. At 30 orbital times at the pressure maximum, we place a gas cloud in the computational domain to simulate the interaction of the G2 cloud with the Sgr A* accretion flow. The normalization factor of the mass density ρ_0 is chosen to be consistent with the observationally implied mass density distribution of the Sgr A* accretion flow, ρ_RIAF(r) = 1.3×10^-21 (r/10^4 r_s)^-1.125 g cm^-3 <cit.>, such that ρ_0 ρ̂(ϖ_0) ∼ ρ_RIAF(ϖ_0), where ρ̂(ϖ_0) ∼ 0.3 is the azimuthally averaged, normalized mass density of the quasi-steady accretion flow.

§.§ Models of the G2 Cloud

There are six parameters describing the orbit of a point mass: the orbital inclination i, the longitude of the ascending node Ω, the argument of the pericenter ω, the eccentricity e, the semi-major axis a, and the time of the pericenter passage t_0. The three parameters e, a, and t_0 can be constrained from the observations of G2. According to the observation of the Br-γ emission line and the data analysis by <cit.>, we set t_0 = A.D. 2014.25, e = 0.9762, and the pericenter radius r_p = a(1-e) = 2.4×10^3 r_s. The semi-major axis a is obtained from e and r_p.
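With the quoted e and r_p, the remaining geometry follows from elementary two-body relations; a minimal sketch (units of r_s):

```python
def kepler_from_pericenter(e, r_p):
    """Semi-major axis and apocenter distance of a two-body Kepler
    orbit from its eccentricity and pericenter distance."""
    a = r_p / (1.0 - e)
    return a, a * (1.0 + e)

# G2 values from the text: e = 0.9762, r_p = 2.4e3 r_s
a, r_apo = kepler_from_pericenter(0.9762, 2.4e3)
# a comes out near 1.0e5 r_s; the initial cloud position used below,
# 2.4e4 r_s, lies well inside the apocenter, as it must.
```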
For convenience, we replace t - t_0 by t, so that G2 passes the pericenter at t = 0 yr. The other three parameters i, Ω, and ω cannot be constrained from the observations for our simulations, because of the uncertainty in the angle between the rotation axis of the Sgr A* accretion flow and that of the Galactic plane. For simplicity, we assume Ω = 0 and ω = 0, where ω = 0 means that the pericenter is assumed to be on the equatorial plane of the accretion flow. We expect that the parameter Ω does not significantly affect the results, because the global structure of the accretion flow is not highly non-axisymmetric. In this paper, we present results for i = 0 and π/3 rad. In this work, we assume a Schwarzschild BH and employ a pseudo-Newtonian potential <cit.>, so that we do not take into account the relation between the direction of the BH spin and the orbital parameters of the gas cloud described above. We set the initial position of the center of G2 at r = 2.4×10^4 r_s. For the initial cloud density, we assume a Gaussian distribution with FWHM = 3×10^15 cm <cit.>, such that the total mass of the gas cloud is 3 M_⊕. The initial velocity inside the cloud is assumed to be equal to the Kepler orbital velocity at its center of mass. For the sake of simplicity, we do not assume an initial magnetic field in the cloud. We note that ∇·B = 0 is assured when we place the cloud, because the magnetic field is neither artificially added to nor removed from the computational domain. The cloud satisfying the assumptions above is placed in the computational domain after a quasi-steady accretion flow has formed.

§ RESULTS

The time evolution of the gas cloud and the hot accretion flow for i = 0 is shown in figure <ref>.
At t ≃ -4 yr (i.e., 30 Kepler orbital times at the initial pressure maximum of the torus), the nonlinear growth of the non-axisymmetric MRI has already saturated, so that the accretion flow has attained a quasi-steady state. At this time, the spherical gas cloud is located at 2.4×10^4 r_s, far outside the zoomed region of figure <ref>. At t ≃ 0 yr, the gas cloud, stretched by the tidal force of the Galactic center BH, penetrates the accretion flow. This stage is qualitatively the same as in <cit.>, except that the tidally stretched gas becomes slimmer due to the effects of radiative cooling in this work. At t ≃ 5 yr, the accretion flow has returned to the quasi-steady state, with its magnetic field amplified by the MRI-driven dynamo compared to that before the passage of the gas cloud. The mass density and pressure at r ≳ 3×10^3 r_s are roughly half of those before the G2 encounter because the disk mass is swept up by the G2 impact. However, the variation of mass density and gas pressure inside 2×10^3 r_s (i.e., inside the pericenter radius of the G2 cloud) is about several tens of percent, while the magnetic energy increases by a factor of 3-4 after the pericenter passage. Therefore, we expect a radio brightening of Sgr A* by synchrotron emission when the gas with the B-field amplified via the MRI-driven dynamo triggered by the G2 encounter accretes onto the innermost region of the accretion flow with a time delay, i.e., the dynamo timescale plus the accretion timescale at ∼ 10^3 r_s. It takes not only the dynamo timescale but also the accretion timescale to show the brightening, because the synchrotron emission from the innermost region of the pre-existing accretion flow is bright enough to mask the radio brightening due to the enhanced magnetic field at ∼ 10^3 r_s. When the gas with the amplified B-field at ∼ 10^3 r_s accretes to the innermost region, the synchrotron emission is expected to increase. In figure <ref>, we present the time evolution of the magnetic energy.
The peak magnetic energy, which is 3-4 times larger than that before the encounter with G2, appears about 5 and 13 yrs after the pericenter passage of the G2 cloud for the models with inclination angles i = 0 and π/3, respectively. In the case i = 0, the magnetic energy increases after the impact of the G2 cloud because the cloud collision increases the radial component of the magnetic field, which is subsequently amplified by the differential rotation of the disk. In the case i = π/3, the magnetic energy slightly decreases after the cloud impact because the enhanced turbulent motion dissipates the magnetic energy. Once the rotation axis of the disrupted disk settles, the MRI-driven dynamo amplifies the magnetic field of the tilted disk. Here, let us discuss in a little more detail the competition between magnetic dissipation and amplification. Since the magnetic turbulence enhanced by the passage of the gas cloud increases the magnetic dissipation, the accretion flow approaches a Taylor state <cit.>, in which the system relaxes to a state of minimum magnetic energy. Figure <ref> indicates that the relaxation of the magnetic energy is more significant in the model with i = π/3, in which the system is highly perturbed by the impact of the cloud. Subsequently, the magnetic field is amplified by the MRI through the generation of B_z by turbulence. The numerical results for i = π/3 indicate that the magnetic field amplification by the latter mechanism becomes dominant 5 yrs after the pericenter passage of G2. This timescale is consistent with that of the disk dynamo at ϖ ∼ 10^3 r_s. As mentioned above, the B-field amplification is delayed in the model with i = π/3 because it takes time for the strong perturbation caused by the impact of G2 to decay. Figure <ref> shows the time evolution of the angular momentum of the flow, integrated as L = ∫_ϖ_in^750 r_s ∫_0^2π ∫_-ϖ_0^ϖ_0 ρ (r × v) ϖ dϖ dφ dz.
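A discrete version of this volume integral, summing ρ (r × v) dV over grid cells (with dV = ϖ Δϖ Δφ Δz on a cylindrical grid), might look like the following sketch; it is ours, not the simulation code:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def total_angular_momentum(cells):
    """cells: iterable of (rho, r, v, dV) with r, v as (x, y, z)
    tuples; returns sum of rho * (r x v) * dV over all cells."""
    L = [0.0, 0.0, 0.0]
    for rho, r, v, dV in cells:
        c = cross(r, v)
        for k in range(3):
            L[k] += rho * c[k] * dV
    return tuple(L)

# Sanity check: one parcel on a circular x-y orbit gives L along +z.
L = total_angular_momentum([(1.0, (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0)])
```

Tracking the three components of L separately is what allows the tilt of the rotation axis, and not just the magnitude of the spin, to be followed in time.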
For the model with i = π/3, the direction of the angular momentum is remarkably modified after the G2 passage: at t ∼ 0 yr, the accretion flow is strongly disturbed by the cloud impact (see also figure <ref>) and the direction of the angular momentum changes drastically. After t ∼ 5 yr, the fluctuation of L_y and L_z is less than ≃ 10%, so that the flow can be regarded as having settled into a quasi-steady state with a tilted rotation axis. In this quasi-steady disk, the B-field amplification by the MRI again dominates the decay of the magnetic field. Thus, the amplification of the B-field in the model with i = π/3 is delayed by 5 yrs. Figure <ref> shows that the accretion flow is tilted after the passage of the gas cloud for the model with i = π/3. The tilt of the accretion flow is caused by the angular momentum transport from the gas cloud to the accretion flow, because the angular momentum of the accretion flow is misaligned with that of the initial orbital motion of the gas cloud. The angular momentum of the gas cloud is sufficient to change the direction of the angular momentum vector of the accretion flow. Since it takes several years for the rotation axis of the accretion flow to settle into the new direction, the B-field amplification via the disk dynamo is delayed when the inclination is large.

§ SUMMARY AND DISCUSSION

We carried out 3D MHD simulations of the interaction of the Sgr A* accretion flow with the gas cloud G2, taking into account the effects of radiative cooling. We found that the magnetic energy increases by a factor of 3-4 in 5-10 years after the pericenter passage of the G2 cloud. The delay time of the B-field amplification depends on the orbital inclination of the gas cloud: the maximum magnetic energy appears ≃ 5 and ≃ 13 yrs after the pericenter passage for the models with i = 0 and π/3, respectively. The B-field amplification can increase the radio and infrared luminosity with a time delay after the G2 passage.
We expect that the gradual increase of the synchrotron emission, with a peak around A.D. 2020, can be observed in the radio and infrared bands. This significant radio brightening should occur when the amplified magnetic field accretes to the innermost region. Furthermore, an X-ray flare may occur when the amplified magnetic energy is released via magnetic reconnection in the vicinity of the BH. Here, we discuss the consistency of our scenario with the non-detection of an increased radiative flux from Sgr A* up to today. In this paper, we have found that the magnetic field in the Sgr A* accretion flow is amplified 5-10 yrs after the pericenter passage of the G2 cloud. Our scenario would reasonably explain the non-detection of radio and infrared brightening to date, because the radio and infrared emission of Sgr A* may be dominated by the synchrotron emission, which should be enhanced only once the B-field is amplified. By contrast, the X-ray emission of Sgr A* may be dominated by the bremsstrahlung emission from the outer part of the accretion flow at ∼ 10^5 r_s <cit.>. This radius is far outside the pericenter distance of G2. Thus, the change in the dynamics of the inner accretion flow induced by the G2 impact will not affect the X-ray luminosity of Sgr A*, except for the X-ray flares induced by magnetic reconnection in the vicinity of the BH. It might be thought that G2 should affect the X-ray luminosity when the cloud starts to interact with the accretion flow at 10^5 r_s. However, since the size of G2 is only ∼ 10^3 r_s, it would be too small to affect the dynamics of the accretion flow at 10^5 r_s.
The non-detection of increased X-ray emission up to today is, therefore, consistent with our scenario. If the brightening is not detected during the next ∼ 10 yrs, there can be two possible reasons: the outer edge of the accretion flow is located closer to the Galactic center BH than the pericenter of G2, or the gas component of G2 is less massive than expected. The expectation of the brightening discussed above should be confirmed by calculating time-dependent multi-wavelength radiative spectra by post-processing the MHD simulation data. In subsequent papers, we would like to carry out the spectral calculations, as well as a parameter survey of the MHD simulations of the disk-cloud interaction. At the beginning of our simulations, the Br-γ luminosity obtained by volume integration of the cooling function expressed in equation (1) attains ∼ 10^-3 L_⊙, which is consistent with the observed luminosity of G2 <cit.>. During the pericenter passage, however, the Br-γ luminosity decreases down to ∼ 10^-4 L_⊙, which is one order of magnitude lower than the observed one <cit.>. This inconsistency may be caused by overheating of the G2 cloud due to the numerical mixing of the G2 cloud and the hot accretion flow during the pericenter passage. This problem would be solved by performing simulations with higher spatial resolution, as shown in simulations with an adaptive mesh refinement (AMR) code that focused only on the G2 cloud rather than on the system including both the Sgr A* accretion flow and the G2 cloud <cit.>. However, the B-field amplification shown in this work should occur also in simulations with finer spatial resolution, since it is driven by the disk dynamo (especially by the MRI), and higher-resolution simulations show even more efficient B-field amplification <cit.>.
Simulations with higher spatial resolution reproducing a consistent luminosity of the Br-γ emission remain as future work. It should be noted that, after the pericenter passage of the G2 cloud, the mass accretion rate at the inner boundary (450 r_s) increases to 2-4 times that before the G2 impact in our simulations. However, since the magnetic energy inside the disk does not increase until the MRI-driven dynamo grows again (i.e., 5-10 years after the G2 impact), the synchrotron luminosity would not increase significantly until the strongly magnetized region begins to infall. When the gas with the amplified B-field accretes onto the inner disk, the synchrotron emission from the inner disk will increase. Although the observational images would not be exactly the same as those predicted by <cit.>, because of the amplification of the magnetic field after the G2 encounter in our study, the synchrotron-brightened region may be similar to that studied by <cit.>. The brightening may be detected by East Asia mm/submm VLBI observations and the Event Horizon Telescope (EHT) submillimeter very long baseline interferometry experiment. As discussed below, the angle between the rotation axis of the accretion disk and the orbital axis of G2 may be constrained by these observations. Let us discuss whether or not the direction of the rotation axis of the preexisting accretion flow can be constrained by the timing of the brightening. The radio brightening in the vicinity of the BH is expected to follow the amplification of the B-field at ∼ 10^3 r_s, i.e., the increased B-field will be advected inward and subsequently amplified further near the BH. If we assume that the amplified B-field is advected to the innermost region of the accretion flow on the viscous accretion timescale, the time lag due to the advection is ≲ 1 yr, where we have assumed a viscosity parameter <cit.> α ≃ 0.1, since our simulations indicate this value in the B-field re-amplification stage.
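The ≲ 1 yr estimate can be reproduced with the standard α-disk scaling t_visc ∼ α^-1 (H/r)^-2 Ω_K^-1; in the sketch below only α ≃ 0.1 comes from the text, while the black-hole mass (4.3×10^6 M_⊙) and the thick-flow aspect ratio H/r ∼ 1 are our own assumptions:

```python
import math

# alpha = 0.1 is taken from the text; the BH mass and H/r = 1 (a
# geometrically thick hot flow) are assumptions for this estimate.
G, c, M_sun, yr = 6.674e-8, 2.998e10, 1.989e33, 3.156e7   # cgs units
GM = G * 4.3e6 * M_sun
r_s = 2.0 * GM / c ** 2            # Schwarzschild radius [cm]
r = 1.0e3 * r_s                    # radius where the B-field is amplified
omega_K = math.sqrt(GM / r ** 3)   # Keplerian angular frequency [1/s]
alpha, H_over_r = 0.1, 1.0
t_visc_yr = (1.0 / alpha) / (H_over_r ** 2 * omega_K * yr)
```

With these numbers the viscous lag comes out at a fraction of a year, consistent with the ≲ 1 yr advection lag quoted in the text; a thinner flow (smaller H/r) would lengthen it as (H/r)^-2.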
This accretion timescale is shorter than the timescale of the B-field amplification (∼5 or ∼10 yrs), so that we can distinguish between the different orbital inclinations of G2 with respect to the preexisting accretion flow. A comparison of the timing of the brightening in the simulations with future observations would enable us to constrain the direction of the rotation axis of the preexisting Sgr A* accretion flow, since the orbital plane of G2 is known. Furthermore, the tilt of the accretion flow in the i=π/3 case (figure 4) can be spatially resolved by East Asian mm/submm VLBI observations and the EHT. If the direction of the rotation axis of the accretion disk significantly differs from the angular momentum axis of G2, we would be able to observe the change of the rotation axis of the accretion flow with time. The tilt of the accretion flow caused by the disk-cloud interaction can occur not only in Sgr A* but also in other low-luminosity active galactic nuclei (LLAGNs). The tilt may induce quasi-periodic oscillations in LLAGNs and/or a change of the direction of LLAGN jets. These possible behaviors would be important for exploring the accretion and ejection histories of LLAGNs. In this work, we set the inclination i=0 and π/3, i.e., the gas cloud is assumed to be co-rotating with the accretion disk. If the gas cloud is counter-rotating with the accretion disk, we expect that the retrograde gas cloud will lose more angular momentum than the prograde one does, because of the ram pressure of the accretion flow. Especially in the perfectly retrograde case (i.e., i = π), most of the gas cloud may not be able to keep the Keplerian orbit, which is inferred from observations of the Br-γ emission line, until G2 reaches the pericenter. It may also excite a strong disturbance of the accretion flow due to the mixing of gas with opposite angular momentum, which would result in a drastic increase of the mass accretion rate onto the black hole.
We leave the parameter study of the gas cloud, including the counter-rotating case, for future work.

We thank Y. Feng, K. Ohsuga, H. R. Takahashi, M. Kino, and M. Akiyama for useful discussions. The numerical simulations were mainly carried out on the XC30 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. This research also used computational resources of the HPCI system provided by the Information Technology Center, the University of Tokyo, and the Research Institute for Information Technology, Kyushu University, through the HPCI System Research Project (Project ID: hp120193, hp140170). This work was supported in part by the MEXT HPCI STRATEGIC PROGRAM and the Center for the Promotion of Integrated Sciences (CPIS) of Sokendai, and by MEXT as a priority issue (Elucidation of the fundamental laws and evolution of the universe) to be tackled by using the post-K Computer and JICFuS. This work was also supported by JSPS KAKENHI Grant Number 16H03954, and the NINS project of Formation of International Scientific Base and Network.

References

Abarca, D., Sa̧dowski, A., & Sironi, L. 2014, MNRAS, 440, 1125
Anninos, P., Fragile, P. C., Wilson, J., & Murray, S. D. 2012, ApJ, 759, 132
Balbus, S. A., & Hawley, J. F. 1991, ApJ, 376, 214
Bower, G. C., Markoff, S., Dexter, J., et al. 2015, ApJ, 802, 69
Brandenburg, A., Nordlund, A., Stein, R. F., & Torkelsson, U. 1995, ApJ, 446, 741
Burkert, A., Schartmann, M., Alig, C., et al. 2012, ApJ, 750, 58
Dedner, A., Kemm, F., Kröner, D., et al. 2002, J. Comput. Phys., 175, 645
Gillessen, S., Genzel, R., Fritz, T. K., et al. 2012, Nature, 481, 51
Gillessen, S., Genzel, R., Fritz, T. K., et al. 2013a, ApJ, 763, 78
Gillessen, S., Genzel, R., Fritz, T. K., et al. 2013b, ApJ, 774, 44
Guillochon, J., Loeb, A., MacLeod, M., & Ramirez-Ruiz, E. 2014, ApJ, 786, L12
Hawley, J. F. 2000, ApJ, 528, 462
Hotta, H., Rempel, M., & Yokoyama, T. 2016, Science, 351, 1427
Kato, S., Fukue, J., & Mineshige, S. 2008, Black-Hole Accretion Disks — Towards a New Paradigm — (Kyoto: Kyoto University Press)
Machida, M., Nakamura, K. E., & Matsumoto, R. 2006, PASJ, 58, 193
Matsumoto, R. 1999, in Astrophysics and Space Science Library, Vol. 240, Numerical Astrophysics, ed. S. M. Miyama, K. Tomisaka, & T. Hanawa, 195
Matsumoto, Y., Asahina, Y., Kudoh, Y., et al. 2016, arXiv:1611.01775
McKinney, J. C., Tchekhovskoy, A., & Blandford, R. D. 2012, MNRAS, 423, 3083
Miyoshi, T., & Kusano, K. 2005, J. Comput. Phys., 208, 315
Mościbrodzka, M., Shiokawa, H., Gammie, C. F., & Dolence, J. C. 2012, ApJ, 752, L1
Okada, R., Fukue, J., & Matsumoto, R. 1989, PASJ, 41, 133
Paczyńsky, B., & Wiita, P. J. 1980, A&A, 88, 23
Pfuhl, O., Gillessen, S., Eisenhauer, F., et al. 2015, ApJ, 798, 111
Quataert, E. 2002, ApJ, 575, 855
Saitoh, T. R., Makino, J., Asaki, Y., et al. 2014, PASJ, 66, 1
Sa̧dowski, A., Narayan, R., Sironi, L., & Özel, F. 2013, MNRAS, 433, 2165
Schartmann, M., Burkert, A., Alig, C., et al. 2012, ApJ, 755, 155
Schartmann, M., Ballone, A., Burkert, A., et al. 2015, ApJ, 811, 155
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
Suresh, A., & Huynh, H. T. 1997, J. Comput. Phys., 136, 83
Taylor, J. B. 1974, Phys. Rev. Lett., 33, 1139
Tsuboi, M., Asaki, Y., Kameya, O., et al. 2015, ApJ, 798, L6
Witzel, G., Ghez, A. M., Morris, M. R., et al. 2014, ApJ, 796, L8
Yokoyama, T., & Shibata, K. 1994, ApJ, 436, L197
Yuan, F., & Narayan, R. 2014, ARA&A, 52, 529
Yuan, F., Quataert, E., & Narayan, R. 2003, ApJ, 598, 301
arXiv:1702.07903v2 [astro-ph.HE]. T. Kawashima, Y. Matsumoto, & R. Matsumoto, "A Possible Time-Delayed Brightening of the Sgr A* Accretion Flow after the Pericenter Passage of G2 Cloud" (2017).
YITP-SB-17-7

Crossing Symmetry in Alpha Space

Matthijs Hogervorst^a, Balt C. van Rees^b

^a C.N. Yang Institute for Theoretical Physics, Stony Brook University, USA
^b Centre for Particle Theory & Department of Mathematical Sciences, Durham University, Durham, UK

We initiate the study of the conformal bootstrap using Sturm-Liouville theory, specializing to four-point functions in one-dimensional CFTs. We do so by decomposing conformal correlators using a basis of eigenfunctions of the Casimir which are labeled by a complex number α. This leads to a systematic method for computing conformal block decompositions. Analyzing bootstrap equations in alpha space turns crossing symmetry into an eigenvalue problem for an integral operator K. The operator K is closely related to the Wilson transform, and some of its eigenfunctions can be found in closed form.

§ INTRODUCTION Symmetries and consistency conditions play an important role in quantum field theory. This is especially true in the realm of Conformal Field Theories (CFTs), which can be analyzed by combining constraints from conformal invariance, unitarity and crossing symmetry. This set of ideas is known as the conformal bootstrap <cit.>. It was revived in <cit.> and has led to a wealth of numerical and analytical results about CFTs, see for instance <cit.>.[See <cit.> for an introductory discussion of the conformal bootstrap.] Since the bootstrap uses constraints coming from correlation functions, it is natural to express crossing symmetry as a sum rule in position space. This is not strictly necessary: for instance, some properties of CFT correlators are more transparent in Mellin space <cit.>.
In the present paper we introduce alpha space, an integral transform for CFT correlators based on the Sturm-Liouville theory of the conformal Casimir operator. As we will explain, alpha space can be used to rephrase crossing symmetry as an eigenvalue problem. To illustrate this idea, consider the toy crossing equation ∑_{n=0}^5 c_n p_n(z) = ∑_{n=0}^5 c_n p_n(1-z) involving the following polynomials:[Up to a choice of normalization, these are the Kravchuk polynomials with N=5 and p = 1/2 <cit.>.] p_n(z) = (-1)^n ∑_{j=0}^n 2^j \binom{5-z}{n-j} \binom{z}{j}, n = 0,…,5. How can we determine the set of all c_n that satisfy eq:cross0? Since the p_n(z) are polynomials, various brute-force methods can be used. More elegantly, we can realize that the p_n form a complete basis for the space of polynomials of degree ≤ 5, orthogonal with respect to the inner product ∫dμ f(z) g(z), ∫dμ = ∑_{k=0}^5 (-1)^k/2^k \binom{5}{k} ∫dz δ(z-k). This implies that the p_n(1-z) appearing in the RHS of the crossing equation can be decomposed as follows: p_n(1-z) = ∑_{m=0}^5 Q_{mn} p_m(z) for some 6 × 6 matrix Q. The latter can be easily computed using eq:ipk. Since z ↦ 1-z is an involution, we must have Q² = 𝟙_{6×6}, as can be checked easily. Eq. eq:cross0 can now be recast as c_n = (Q · c)_n, hence our problem reduces to finding all eigenvectors of Q with eigenvalue +1. There are three such eigenvectors: f_1 = p_0 (≡ 1), f_2 = 2p_1 - p_3 - p_4, f_3 = p_2 + 2p_3 + 2p_4, so the most general solution to eq:cross0 is ∑_{n=0}^5 c_n p_n(z) = ∑_{i=1}^3 t_i f_i(z), t_i ∈ ℝ. In this paper we consider one-dimensional (defect) CFTs which are governed by crossing equations similar to eq:cross0. For definiteness, let us consider a four-point function F(z) of identical operators of dimension h_ϕ, admitting a conformal block decomposition F(z) = ∫_0^∞ dh ρ(h) k_h(z), ρ(h) = ∑_n c_n δ(h-h_n), where the k_h(z) are SL(2,ℝ) conformal blocks: k_h(z) = z^h ₂F₁(h,h;2h;z). The spectrum {h_n} and the OPE coefficients c_n ≥ 0 are typically unknown.
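As an aside, the toy crossing problem above is small enough to verify with exact rational arithmetic. The following sketch (helper names are ours) assumes the reading of the binomials given above, i.e. p_n(z) = (-1)^n ∑_j 2^j \binom{5-z}{n-j}\binom{z}{j} with the signed measure carrying weight (-1)^k 2^{-k} \binom{5}{k} at z=k, and computes Q from value tables on the six points z = 0,…,5, which fix a degree-5 polynomial completely:

```python
from fractions import Fraction
from math import factorial

def binom(x, j):
    # Generalized binomial coefficient C(x, j) for integer x (possibly negative).
    out = Fraction(1)
    for i in range(j):
        out *= x - i
    return out / factorial(j)

def p(n, z):
    # p_n(z) = (-1)^n sum_j 2^j C(5-z, n-j) C(z, j)
    return (-1) ** n * sum(
        Fraction(2) ** j * binom(5 - z, n - j) * binom(z, j) for j in range(n + 1)
    )

# Value tables of p_n(z) and p_n(1-z) on z = 0,...,5; solving P Q = Pt gives Q.
P  = [[p(n, k) for n in range(6)] for k in range(6)]
Pt = [[p(n, 1 - k) for n in range(6)] for k in range(6)]

def solve(A, B):
    # Exact Gaussian elimination over the rationals: returns X with A X = B.
    n = len(A)
    M = [rowA[:] + rowB[:] for rowA, rowB in zip(A, B)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

Q = solve(P, Pt)  # p_n(1-z) = sum_m Q[m][n] p_m(z)
```

Exact arithmetic then confirms Q² = 𝟙 and tr Q = 0 (so the +1 eigenspace is three-dimensional, matching the count of crossing-symmetric polynomials 1, u, u² with u = z(1-z)), and that the coefficient vectors of f_1, f_2, f_3 above are indeed +1 eigenvectors.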
Bootstrapping entails computing or constraining these CFT data using the crossing relation F(z) = (z/(1-z))^{2h_ϕ} F(1-z). There are various technical differences between this d=1 bootstrap problem and the previous toy example. For one, h takes its values in the continuum ℝ_{≥0}, whereas the toy example had a finite and discrete spectrum. Nevertheless, it is tantalizing to apply the logic from the toy example to the bootstrap. For instance, one could hope to constrain the density ρ(h) from eq:Fdef through a relation of the form ρ(h) ?= ∫_0^∞ dh' Q(h,h'|h_ϕ) ρ(h') for some continuous kernel Q(h,h'|h_ϕ) which plays the role of Q. Sadly Eq. eq:wrong cannot quite be true. The reason is that the conformal blocks k_h(z) do not form an orthogonal basis of functions on (0,1). The principal aim of this paper is to demonstrate that it is nevertheless possible to write down a qualitatively very similar relation. In order to do so we use a new basis of functions to transform our four-point function to a space that we denote as alpha space. In this space we can properly define (<ref>) in terms of a crossing symmetry kernel K which we will explicitly compute. We will discuss its main features and explain how the ordinary conformal block decomposition is recovered from an analytic continuation in alpha. We stress that the philosophy of studying CFTs using crossing kernels — à la eq:wrong — is not new. An early avatar of this idea can be found in Eq. (2.66) of Ref. <cit.>. Nonetheless, we are not aware of earlier work where the relevant SO(d,2) or SL(2,ℝ) crossing kernels have been worked out in detail. An exception is the 2d Liouville CFT, for which the crossing kernels have been computed <cit.> as the 6-j symbol of a class of representations of U_q(𝔰𝔩(2,ℝ)), leading to a formal proof of consistency of the theory.[See also <cit.>.] The case of rational 2d CFTs (i.e.
Virasoro minimal models) is also of interest, since in such theories the crossing kernel is realized as a finite matrix <cit.>. We will comment on the group-theoretic interpretation of our crossing symmetry kernel in Sec. <ref>. The outline of this paper is as follows. In Sec. <ref> we review the one-dimensional bootstrap problem and solve the Sturm-Liouville problem for the SL(2,ℝ) Casimir operator. This allows us to construct a complete, orthogonal basis of eigenfunctions on the interval (0,1). In Sec. <ref> we use these basis functions to derive a crossing equation similar to eq:wrong, and we study the properties of the relevant kernel K. Sec. <ref> describes several possible applications of crossing kernels to the conformal bootstrap. Note added: while preparing this manuscript we learned about Ref. <cit.>, which discusses a crossing kernel approach to both SU(2) and conformal crossing symmetry equations and is tangentially related to this paper. § ONE-DIMENSIONAL BOOTSTRAP AND ALPHA SPACE This section is devoted to the Sturm-Liouville theory of the conformal Casimir of SL(2,ℝ), the conformal group in one spacetime dimension. One-dimensional CFTs arise in the description of line defects in higher-dimensional theories <cit.>. Although 1d CFTs are in many ways more tractable than d-dimensional systems, we also note that many salient features of the d-dimensional bootstrap already appear at the level of d=1. In addition the 1d conformal blocks appear naturally in the light-cone limit of the higher-dimensional crossing symmetry equations, where it becomes possible to obtain non-trivial analytic results <cit.>. §.§ Sturm-Liouville theory of the SL(2,ℝ) Casimir We will start by analyzing the four-point function of a single primary (or lowest-weight) operator ϕ(x) in a 1d CFT. The general case will be addressed in Sec.
<ref>. The only quantum number of ϕ is its scaling dimension h_ϕ, and conformal symmetry dictates that ⟨ϕϕϕϕ⟩ has the following form: ⟨ϕ(x_1) ϕ(x_2) ϕ(x_3) ϕ(x_4)⟩ = F(z)/(|x_1 - x_2|^{2h_ϕ} |x_3 - x_4|^{2h_ϕ}), where the points x_i ∈ ℝ lie on a line and z is the following cross ratio: z ≡ |x_12||x_34|/(|x_13||x_24|) ∈ (0,1), writing x_ij = x_i - x_j.[Although a priori the variable z is not restricted to the unit interval, we require z ∈ (0,1) to guarantee OPE convergence on both sides of the bootstrap equation.] The function F(z) admits the following conformal block (CB) decomposition: F(z) = ∑_𝒪 c_{ϕϕ𝒪}² k_{h_𝒪}(z), where the functions k_h(z) are the 1d conformal blocks defined in Eq. eq:CB. The sum runs over all operators 𝒪 in the ϕ×ϕ OPE, of dimension h_𝒪, and c_{ϕϕ𝒪} is the 𝒪 ∈ ϕ×ϕ OPE coefficient. Finally, crossing symmetry (invariance under the exchange x_i ↔ x_j) of the ⟨ϕϕϕϕ⟩ correlator leads to the bootstrap constraint F(z) = (z/(1-z))^{2h_ϕ} F(1-z), which must hold for all 0 ≤ z ≤ 1. We will not assume unitarity (i.e. reflection positivity) in this paper. Just for completeness, we recall that if the CFT in question is unitary, the decomposition eq:CBdec is constrained as follows: * the c_{ϕϕ𝒪} must be real-valued, hence c_{ϕϕ𝒪}² > 0; * there must be a contribution of the unit operator 𝟙 with h_𝟙 = 0 and c_{ϕϕ𝟙} = 1; * all other operators (including ϕ) have h_𝒪 > 0. As noted in the introduction, it is conventional in the CFT literature to investigate the bootstrap equation eq:1dBS in position space. Here we will take a different approach. We start by remarking that the conformal blocks k_h(z) are eigenfunctions of a second-order differential operator D, the quadratic Casimir operator of SL(2,ℝ): D · k_h(z) = h(h-1) k_h(z), D = z²(1-z) ∂² - z² ∂. In what follows, we will develop the Sturm-Liouville theory of the operator D on the interval (0,1).[Note added: although we have not attempted to do so, it is in principle possible to change the boundary conditions at z=1 <cit.>.
We thank Miguel Paulos for pointing out this reference.] As a first step, we notice that D can be written in the following suggestive form: D · f(z) = z² ∂_z[(1-z) f'(z)]. This implies that D is self-adjoint with respect to the inner product ⟨f,g⟩ = ∫_0^1 dz/z² \overline{f(z)} g(z), where f,g are functions (0,1) → ℂ that are well-behaved near z=0 and z=1. Indeed, we have ⟨f, D·g⟩ - ⟨D·f, g⟩ = ∫_0^1 dz ∂_z[(1-z)(\overline{f} g' - \overline{f}' g)], which is a boundary term. Of course, not all functions have a finite norm with respect to the inner product eq:innerProd. Requiring that a function f is square integrable leads to the following constraints on its asymptotics near z=0 and z=1: f(z) ∼_{z→0} z^{1/2+ε} and f(z) ∼_{z→1} (1-z)^{-1/2+ε'} for constants ε, ε' > 0. In particular, this implies that in a unitary CFT all four-point functions F(z) have a divergent norm with respect to eq:innerProd. Our next order of business is to construct an orthogonal basis of eigenfunctions of D. We start by solving the eigenvalue equation D · f = λ f. After writing λ = α² - 1/4 for convenience, we find that the general solution (for α ≠ 0) is given by f(z) = A_1(α) k_{1/2+α}(z) + A_2(α) k_{1/2-α}(z) for two constants A_{1,2}(α) that are to be determined. In order to fix them, let's analyze the z → 0,1 asymptotics of f(z). First, we notice that the blocks themselves are logarithmically divergent near z = 1. To be precise, we have k_{1/2+α}(z) ∼_{z→1} -Γ(1+2α)/Γ²(1/2+α) · ln(1-z) + regular, and likewise for k_{1/2-α}(z). Requiring that eq:ans1 has a finite limit as z → 1 therefore determines the relative coefficient A_1(α)/A_2(α). Fixing the overall normalization by imposing f(1) = 1, we arrive at the following eigenfunctions:[Remarkably, these are not the usual `shadow-symmetric' blocks obtained by integrating one-dimensional three-point functions over the real axis <cit.>. Indeed, in one dimension this integral is easily performed using the techniques of <cit.> and diverges logarithmically as z → 1, in contrast with our Ψ_α(z).]
Ψ_α(z) = (1/2)[Q(α) k_{1/2+α}(z) + Q(-α) k_{1/2-α}(z)], Q(α) = 2Γ(-2α)/Γ²(1/2-α). In what follows, it will be useful to rewrite Ψ_α(z) as Ψ_α(z) = ₂F₁(1/2+α, 1/2-α; 1; (z-1)/z) using a hypergeometric identity. In particular, this makes it manifest that Ψ_α(1) = 1. However, we have not yet inspected the asymptotics near z=0. Assuming that α is real, we find that Ψ_α(z) ∼_{z→0} z^{1/2-|α|}, which means that the functions Ψ_α have infinite norm. The only way to avoid this problem is to assume that α is imaginary. In that case, we find that Ψ_α has the following asymptotics: Ψ_α(z) ∼_{z→0} |Q(α)| √z cos(Im(α) ln z + const.) [α ∈ iℝ], implying that Ψ_α is rapidly oscillating near z=0. Notice that even for imaginary α, the function Ψ_α(z) is real-valued, since it is symmetric under α → -α. A plot of two different functions Ψ_α(z) is shown in Fig. <ref>. Since Ψ_α(z) oscillates near z=0 at a rate that depends on α, it is at least plausible that ⟨Ψ_α, Ψ_β⟩ = 0 for α ≠ ±β, cf. the Fourier transform on ℝ. This is confirmed by an explicit computation, performed in Appendix <ref>. There it is shown that the inner product ⟨Ψ_α, Ψ_β⟩ behaves as a delta function on the imaginary axis. To be precise: if f(β) is defined on iℝ and has compact support, we have (1/2πi) ∫_{-i∞}^{i∞} dβ ⟨Ψ_α, Ψ_β⟩ f(β) = N(α) · [f(α) + f(-α)]/2, where N(α) = Γ(α)Γ(-α)/(2π Γ(1/2+α) Γ(1/2-α)) = |Q(α)|²/2 ≥ 0. Informally, Eq. eq:testInt shows that the functions Ψ_α(z) are plane-wave normalized, having norm N(α). The fact that the RHS of eq:testInt contains the sum [f(α) + f(-α)] reflects that Ψ_α is even in α, which carries over to the inner product ⟨Ψ_α, Ψ_β⟩. Summarizing, we have constructed a set of orthogonal eigenfunctions Ψ_α(z) with respect to the inner product eq:innerProd. Naively, we can appeal to familiar arguments of Sturm-Liouville theory to argue that these eigenfunctions form a complete set. In other words, we can decompose a given function f : (0,1) → ℝ as follows: f(z) = (1/2πi) ∫_{-i∞}^{i∞} dα/N(α) f(α) Ψ_α(z) ⇔ f(α) = ∫_0^1 dz/z² f(z) Ψ_α(z). This formula describes how f(z) is encoded by its “spectral density” f(α), and vice versa.
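As a consistency check of these formulas (not part of the original derivation), one can verify numerically that the block combination defining Ψ_α agrees with its hypergeometric rewriting, that Ψ_α(1) = 1, and that Ψ_α and the blocks are Casimir eigenfunctions. The sketch below (helper names are ours) uses real α in the range where all series converge, and assumes the normalization Q(α) = 2Γ(-2α)/Γ²(1/2-α) as reconstructed above:

```python
from math import gamma

def hyp2f1(a, b, c, z, tol=1e-15, max_terms=100_000):
    # Gauss series for 2F1(a, b; c; z), adequate for |z| < 1.
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def block(h, z):
    # SL(2,R) conformal block k_h(z) = z^h 2F1(h, h; 2h; z).
    return z ** h * hyp2f1(h, h, 2 * h, z)

def Qcoef(alpha):
    # Q(a) = 2 Gamma(-2a) / Gamma(1/2 - a)^2, for real non-half-integer a.
    return 2 * gamma(-2 * alpha) / gamma(0.5 - alpha) ** 2

def psi_blocks(alpha, z):
    # Psi_a(z) = (1/2)[Q(a) k_{1/2+a}(z) + Q(-a) k_{1/2-a}(z)]
    return 0.5 * (Qcoef(alpha) * block(0.5 + alpha, z)
                  + Qcoef(-alpha) * block(0.5 - alpha, z))

def psi_hyp(alpha, z):
    # Psi_a(z) = 2F1(1/2+a, 1/2-a; 1; (z-1)/z), series convergent for z > 1/2.
    return hyp2f1(0.5 + alpha, 0.5 - alpha, 1.0, (z - 1) / z)

def casimir(f, z, eps=1e-4):
    # D f = z^2 (1-z) f'' - z^2 f', via central finite differences.
    d1 = (f(z + eps) - f(z - eps)) / (2 * eps)
    d2 = (f(z + eps) - 2 * f(z) + f(z - eps)) / eps ** 2
    return z ** 2 * (1 - z) * d2 - z ** 2 * d1
```

With these definitions, psi_blocks and psi_hyp agree to machine precision on (1/2, 1), and both k_h and Ψ_α satisfy the stated eigenvalue equations with eigenvalues h(h-1) and α² - 1/4 respectively.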
A mathematically rigorous way to obtain this identity will be described in the next section. A sufficient condition for Eq. eq:sl2decomp to make sense is that f be square integrable: ⟨f,f⟩ = ∫_0^1 dz/z² |f(z)|² < ∞. An equivalent condition (see the next section) is that (1/2πi) ∫_{-i∞}^{i∞} dα/N(α) |f(α)|² is finite. In Secs. <ref> and <ref> we discuss how these constraints can be loosened. Eq. eq:testInt shows that the Ψ_α(z) form a complete basis in α space. For reference, we remark that the Ψ_α(z) also obey a completeness relation in position space, namely (1/2πi) ∫_{-i∞}^{i∞} dα/N(α) Ψ_α(z) Ψ_α(w) = z² δ(z-w), as can be deduced from eq:sl2decomp. §.§ Alpha space as a Jacobi transform The alpha space transform f(z) ↦ f(α) is closely related to a known integral transform, known as the Jacobi transform. We will briefly describe this transform in the rest of this section, pointing to Refs. <cit.> as a point of entry in the mathematics literature. The Jacobi transform is an integral transform that makes use of the Jacobi functions: ϑ_α^{(p,q)}(x) ≡ ₂F₁((1+p+q)/2 + α, (1+p+q)/2 - α; p+1; -x), x ≥ 0. The parameters p,q ≥ 0 are fixed, whereas the label α ∈ iℝ is allowed to vary continuously. Notice that ϑ_α^{(p,q)}(x) is even in α, and therefore real-valued. Consider now a complex function f(x), defined for x ≥ 0, decaying sufficiently fast as x → ∞. We assign to it its Jacobi transform Jf as follows: f(x) ↦ (Jf)(α) ≡ ∫_0^∞ dx ω_{p,q}(x) f(x) ϑ_α^{(p,q)}(x), ω_{p,q}(x) = x^p (1+x)^q. ω_{p,q}(x) plays the role of a weight function in position space. A standard result — see Theorem 2.3 of Ref.
<cit.> — is that f can be restored from its Jacobi transform: f(x) = (1/2πi) ∫_{-i∞}^{i∞} dα/N_{p,q}(α) (Jf)(α) ϑ_α^{(p,q)}(x) + …, where N_{p,q}(α) = 2Γ²(1+p) Γ(±2α)/[Γ((1+p+q)/2 ± α) Γ((1+p-q)/2 ± α)], Γ(x±y) ≡ Γ(x+y)Γ(x-y). The dots in eq:inverseJ indicate that depending on the values of p and q a finite number of terms must be added; equivalently, the integration contour in α can be deformed to pick up poles coming from 1/N_{p,q}(α).[A sufficient condition for such terms to be absent is p + q + 1 > 0 and p - q + 1 > 0.] Properly speaking, J furnishes a map from the Hilbert space L²(ℝ_+, ω_{p,q}(x) dx) to the space of functions on iℝ which are normalizable with respect to the measure dα/N_{p,q}(α). This map is an isometry: given two complex functions f,g, the following Parseval formula holds: ∫_0^∞ dx ω_{p,q}(x) \overline{f(x)} g(x) = (1/2πi) ∫_{-i∞}^{i∞} dα/N_{p,q}(α) \overline{(Jf)(α)} (Jg)(α). Specializing to the case f=g, this shows in which sense the Jacobi transform is unitary. It is now straightforward to see that the alpha space transform for the SL(2,ℝ) Casimir is a special case of the Jacobi transform with p = q = 0, after the change of variable x → (1-z)/z. The precise dictionary is given by Ψ_α(z) = ϑ_α^{(0,0)}((1-z)/z), ∫_0^∞ dx ω_{0,0}(x) = ∫_0^1 dz/z², N(α) = N_{0,0}(α). A direct consequence is the identity ⟨f,g⟩ = (1/2πi) ∫_{-i∞}^{i∞} dα/N(α) \overline{f(α)} g(α). It would be interesting to see if further theorems concerning the Jacobi transform can be recycled to prove results about alpha space densities in CFTs. Our discussion has been quite abstract so far and at this stage the reader may want to experiment with some explicit alpha space computations. To do so, it is useful to know that the Jacobi transform essentially maps rational functions to polynomials. A precise statement is the following. Let P_n^{(p,q)}(x) = (p+1)_n/n! · ₂F₁(-n, n+p+q+1; p+1; (1-x)/2) be a Jacobi polynomial of degree n.
Then for any r,s ≥ 0 we have <cit.> ∫_0^∞ dx ω_{p,q}(x) (1+x)^{-[(p+q+r+s)/2+1]} P_n^{(p,r)}((1-x)/(1+x)) ϑ_α^{(p,q)}(x) = (-1)^n/n! · Γ(p+1) Γ((r+s+1)/2 ± α)/[Γ((p+q+r+s)/2+1+n) Γ((p-q+r+s)/2+1+n)] × W_n(α; (p+q+1)/2, (p-q+1)/2, (r+s+1)/2, (r-s+1)/2). The object on the last line is a Wilson polynomial <cit.>: W_n(α; a,b,c,d) = (a+b)_n (a+c)_n (a+d)_n · ₄F₃(-n, a+α, a-α, n+a+b+c+d-1; a+b, a+c, a+d; 1). Evidently W_n(α; a,b,c,d) is a polynomial of degree n in α², and it can be shown that W_n depends symmetrically on its parameters a,b,c,d. Specializing to alpha space (p=q=0) whilst setting r → 0, s → 2ρ-2, the identity eq:jnaarw becomes ∫_0^1 dz/z² z^ρ P_n(2z-1) Ψ_α(z) = (-1)^n/n! · Γ(ρ-1/2 ± α)/Γ²(ρ+n) · W_n(α; 1/2, 1/2, ρ-1/2, 3/2-ρ), where P_n is a Legendre polynomial of degree n. This formula can be used to find the alpha space counterpart of rather general functions in position space. As a simple example, we can set n=0 to find the alpha space version of the function z ↦ z^ρ: ∫_0^1 dz/z² z^ρ Ψ_α(z) = Γ(ρ-1/2 ± α)/Γ²(ρ). An additional example will be discussed in Sec. <ref>. §.§ Convergence of the alpha space transform Before we turn to the application of alpha space to CFTs, let us comment on the convergence of the alpha space transform f(z) ↦ f(α). We have in mind a function f(z) that has power-law growth at z=0 and z=1, i.e. f(z) ∼_{z→0} z^p and f(z) ∼_{z→1} 1/(1-z)^q. Moreover, we assume that f(z) admits an expansion in powers of z^h around z=0, meaning that it is possible to write f(z) = ∑_{n=1}^∞ c_n z^{h_n}. All of these conditions are certainly satisfied when f(z) describes a CFT correlation function. Let us first consider the case where p > 1/2 and q < 1. In that case, the integral defining its alpha space density f(α) ≡ ∫_0^1 dz/z² f(z) Ψ_α(z) converges whenever |Re(α)| < p - 1/2, meaning that f(α) is holomorphic on a finite strip. Moreover, using the alpha space transform of a single power law eq:powerlaw, it is possible to show that f(α) extends to a meromorphic function on the entire complex plane, with poles at α = h_n - 1/2 + ℕ for n=1,2,… (plus mirror poles on the left half plane). Next, consider the case p < 1/2, q < 1.
In this case, it is convenient to decompose f(z) as f(z) = f_sing(z) + f_reg(z), where f_sing(z) = ∑_{h_n < 1/2} c_n z^{h_n} and f_reg(z) = ∑_{h_n > 1/2} c_n z^{h_n}. By construction, the regular piece f_reg(z) has a well-defined alpha space transform that extends to a meromorphic function on ℂ. We can define the density f_sing(α) termwise, by analytically continuing Eq. eq:powerlaw to arbitrary values of ρ.[Such analytic continuations may require a deformation of the alpha space integration contour away from the imaginary axis. Below we explain how to deal with such cases.] Concretely, we take the alpha space transform of f(z) to be f(α) = ∑_{h_n < 1/2} c_n Γ(h_n - 1/2 ± α)/Γ²(h_n) + f_reg(α). If the leading term of f_sing(z) is a constant, the above argument breaks down, since 1/Γ²(h) vanishes when h → 0. This is an order-of-limits issue, which can be avoided by writing 1 as the limit of z^ε as ε → 0. Finally, we consider the case q > 1. For simplicity we consider p > 1/2, but the case of general p is straightforward to treat using the above discussion. Given that q > 1, the integral defining f(α) diverges for all values of α. We thus regulate this integral by cutting it off at z = 1-ε, writing f_ε(α) ≡ ∫_0^{1-ε} dz/z² f(z) Ψ_α(z). Notice that this regulator does not affect the analytic structure of f(α): all poles originate from the region of integration near z=0. Now, to isolate divergent pieces in ε we notice that Ψ_α(z) admits an expansion in powers of (1-z) of the following form: Ψ_α(z) = ∑_{k=0}^∞ s_k(α) (1-z)^k, where s_k(α) is a polynomial of degree k in α². This implies that f_ε(α) has the following structure of divergences:[To derive this formula, we are assuming that f(z) admits an expansion of the form f(z) = 1/(1-z)^q [const. + ∑_{n ≥ 1} a_n (1-z)^n] around z=1. If f(z) rather behaves as a more general sum of power laws f(z) = c_1/(1-z)^{q_1} + c_2/(1-z)^{q_2} + …, Eq. eq:ct1 is modified in a straightforward fashion.] f_ε(α) = [finite as ε → 0] + ∑_j t_j(α)/ε^{q-1-j}, where t_j(α) is a polynomial in α and the sum runs over the integers 0 ≤ j < q-1.
Consequently, we take f(α) to be the finite piece of f_ε(α) obtained by subtracting the divergent terms in eq:ct1. §.§ Conformal block decomposition As a first application of the alpha space formalism of the previous sections, we will show that it can be used to compute conformal block decompositions for CFT correlators. As a starting point, we have in mind a meromorphic spectral density f(α), even in α, written in the following form: f(α) = ∑_n [-R_n/(α - α_n)] + (α → -α) + entire. The minus sign in front of R_n is a choice of convention. We will assume that all poles α_n lie on the positive real axis; in particular, we see that every pole has a corresponding mirror pole -α_n on the negative real axis. Our goal is to compute the position space counterpart of f(α): f(z) = ∫_C [dα]/N(α) f(α) Ψ_α(z), where C is a contour parallel to the imaginary axis. Here and in what follows we write contour integrals as ∫[dα] = ∫_{-i∞}^{i∞} dα/2πi to avoid notational clutter. Notice that both f(α) and the measure N(α) are even in α, which means that we can replace Ψ_α(z) by any linear combination of the conformal block Q(α) k_{1/2+α}(z) and its shadow Q(-α) k_{1/2-α}(z). Without loss of generality, let us attempt to close the contour C to the right, picking up all poles α_n on the right half plane. This means that we have to drop the shadow part ∼ Q(-α)/N(α) × k_{1/2-α}(z), as it grows exponentially on the right half plane, whereas the conformal block part decreases as Re(α) → ∞. Consequently, we find that the position space version of f(α) is given by f(z) = ∫_C [dα] f(α) Q(α)/N(α) k_{1/2+α}(z) = ∫_C [dα] f(α) · 2/Q(-α) · k_{1/2+α}(z), using the second equality in eq:Ndef. In that case, we can rewrite f(z) as f(z) = ∑_n 2R_n/Q(-α_n) · k_{1/2+α_n}(z). To pass from Eq. eq:tpint to eq:CBdec2, we used that 1/Q(-α) is analytic on the right half plane. But the sum appearing in the RHS is precisely a CB decomposition — cf. Eq.
eq:CBdec — where the n-th term corresponds to an exchanged operator 𝒪_n of dimension h_{𝒪_n} = 1/2 + α_n, having OPE coefficient c_{ϕϕ𝒪_n}² = 2R_n/Q(-α_n). Since Q(-α) > 0 for all α > 0, we conclude that c_{ϕϕ𝒪_n}² is positive iff R_n is positive. Above, we assumed that all α_n were positive. This means that only operators of dimension h_{𝒪_n} > 1/2 appear in the CB decomposition eq:CBdec2. This condition can be loosened: an operator of dimension h < 1/2 would simply correspond to a pole α_* lying on the left half plane. We must in this case deform the contour to circle α_* in the positive direction. Moreover, α_* will have a mirror pole -α_* on the right half plane, which must be circled in the negative direction, such that it does not give an anomalous contribution to f(z) — see Fig. <ref>. We will revisit this point in Sec. <ref>. The cases h=0 (corresponding to the unit operator) and h=1/2 require special attention. As for h=0, notice that 1/Q(-α) has a pole at α = -1/2, namely 1/Q(-α) ∼_{α → -1/2} -1/(α + 1/2). Consequently, it suffices for f(α) to be finite at α = -1/2 in order to generate a unit operator term. To be precise, if f(α) ∼_{α → -1/2} c + O(α + 1/2) and the contour is such that it wraps around α = -1/2 in the sense described above, then f(z) = 2c + other conformal blocks. A similar issue arises if h=1/2, because 1/Q(-α) vanishes as α → 0. More precisely 1/Q(-α) ∼_{α → 0} πα + O(α²), hence in order to obtain a contribution f(z) ∼ c k_{1/2}(z) ∼ c√z + … in position space, we must have f(α) = -c/(2πα²) + O(α^{-1}).
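The positivity of Q(-α) for α > 0 and the two limits of 1/Q(-α) quoted above can be confirmed numerically. The small sketch below (helper names are ours) assumes the reconstructed normalization Q(α) = 2Γ(-2α)/Γ²(1/2-α):

```python
from math import gamma, pi

def Qcoef(alpha):
    # Q(a) = 2 Gamma(-2a)/Gamma(1/2 - a)^2, so Q(-a) = 2 Gamma(2a)/Gamma(1/2 + a)^2.
    return 2 * gamma(-2 * alpha) / gamma(0.5 - alpha) ** 2

# 1/Q(-a) ~ -1/(a + 1/2) as a -> -1/2: the ratio below should approach -1.
near_half = -0.5 + 1e-6
ratio_at_half = (1 / Qcoef(-near_half)) * (near_half + 0.5)

# 1/Q(-a) ~ pi * a as a -> 0: the ratio below should approach +1.
near_zero = 1e-6
ratio_at_zero = (1 / Qcoef(-near_zero)) / (pi * near_zero)
```

Evaluating Q(-α) at a few positive α also confirms that it is positive there, so positive residues R_n do translate into positive squared OPE coefficients.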
§.§.§ Examples

To develop some familiarity with the alpha space representation of correlation functions, we will compute the alpha space transform of some simple functions in z-space, and we use these results to compute the resulting conformal block decompositions.

* Let's compute the alpha space transform of a single conformal block k_h(z) with h > 1/2:

k_h(α) = ∫_0^1 dz/z² k_h(z) Ψ_α(z) = C(h)/(α² − (h − 1/2)²), C(h) = −Γ(2h)/Γ²(h).

In order to derive this result, it's convenient to use the Mellin-Barnes formula

₂F₁(a,b;c;z) = Γ(c)/(Γ(a)Γ(b)) ∫_{−i∞}^{i∞} ds Γ(−s)Γ(a+s)Γ(b+s)/Γ(c+s) (−z)^s

in order to expand both k_h(z) and Ψ_α(z). Alternatively, Eq. eq:cbt is easy to check numerically inside the strip |Re(α)| < h − 1/2. Let us make two comments about the formula eq:cbt. First, although the integral in eq:cbt converges only in a finite strip, the RHS defines an analytic continuation to any value of α. Moreover, the same formula defines an analytic continuation to values of h < 1/2. Second, k_h(α) has precisely one pole on the right half plane, at α = h − 1/2, in accordance with our discussion from the previous section.

* Let f_p(z) = z^p with p > 1/2. We have already encountered this function in Eq. eq:powerlaw, finding that in alpha space it becomes

f_p(α) = Γ(p − 1/2 ± α)/Γ²(p).

Let's use this to obtain the CB decomposition of f_p(z). First, we note that f_p(α) has poles at

α_n = p − 1/2 + n and α̃_n = 1/2 − p − n with n ∈ ℕ.

Closing the contour to the right, we only pick up the α_n poles. The residue of the n-th pole is

R_n = −Res f(α)|_{α = α_n} = (−1)^n/n! Γ(2p−1+n)/Γ²(p)

and this pole corresponds to an operator of dimension h = 1/2 + α_n = p + n. Using the argument from the previous section, we conclude that

f_p(z) = ∑_{n=0}^∞ 2R_n/Q(−α_n) k_{p+n}(z) = ∑_{n=0}^∞ (−1)^n/n! (p)_n²/(2p−1+n)_n k_{p+n}(z).

This confirms a known result, see for instance Eq. (4.15) from Ref. <cit.>.

* Let f_{p,q}(z) = z^p (1−z)^{−q}. It will be instructive to spend some time on the computation of the alpha space density f_{p,q}(α).
As a first step, we rewrite Ψ_α(z) using the Mellin-Barnes representation eq:mbform. This means that we can write

f_{p,q}(α) = 1/Γ(1/2 ± α) ∫[ds] Γ(−s)Γ(1/2 ± α + s)/Γ(1+s) ∫_0^1 dz/z² ((1−z)/z)^s z^p/(1−z)^q
= 1/(Γ(1/2 ± α)Γ(p−q)) ∫[ds] Γ(−s)Γ(1/2 ± α + s)/Γ(1+s) Γ(1−q+s)Γ(p−1−s)

where in the first line we have interchanged the z and s integrals. What remains is a standard Mellin-Barnes integral, which evaluates to

f_{p,q}(α) = Γ(p−1)Γ(1−q)/Γ(p−q) ₃F₂(1/2+α, 1/2−α, 1−q; 1, 2−p; 1) + Γ(1−p)Γ(p−1/2±α)/(Γ(1/2±α)Γ(p)) ₃F₂(p−1/2+α, p−1/2−α, p−q; p, p; 1),

which provides an analytic continuation to all α, provided that q > p−1.[Interestingly, the above expression can be analytically continued to other values of p and q using hypergeometric identities, in particular Thm. (2.4.4) and Corollary (3.3.5) from <cit.>. We can for instance write f_{p,q}(α) = Γ(1−q)Γ(p−1/2±α)/(Γ(p)Γ(p−q)) ₃F₂(1/2+α, 1/2−α, q; p, 1; 1) = Γ(p−1/2±α)/Γ²(p) ₃F₂(p−1/2+α, p−1/2−α, q; p, p; 1). The ₃F₂(1) hypergeometrics in these expressions converge when p > q resp. q > 1.]

Notice that the first term above is analytic in α, hence it does not contain any poles in α. However, it does influence the behaviour of f_{p,q}(α) at large α. The second term contributes two series of poles, at ±α = p − 1/2 + ℕ. Closing the α-contour to the right and computing residues, we arrive at the following conformal block decomposition:

f_{p,q}(z) = ∑_{n=0}^∞ (p)_n²/(n! (2p−1+n)_n) ₃F₂(−n, 2p−1+n, p−q; p, p; 1) k_{p+n}(z).

This is a new result which would have been rather difficult to guess. For p = q, it reduces to Eq. (4.14) from <cit.>.

§.§ Convergence and asymptotics

In Sec. <ref> we discussed the convergence of the alpha space transform in a general setting. In the present section, we will specialize to CFT correlation functions; more particularly, we will relate the large α behaviour of f(α) to the growth of f(z) as z → 1.
Recall that at the extreme points z = 0 and z = 1 a crossing-symmetric four-point function in a unitary CFT behaves as

z → 0: 𝒢(z) → 1 + …, z → 1: 𝒢(z) → (z/(1−z))^{2h_ϕ} (1 + …).

Clearly such a function is not square integrable with respect to the inner product (<ref>). As we will now proceed to explain, an alpha space transform can nevertheless be defined for such functions as well. We will show that divergences near the two endpoints z = 0 and z = 1 translate very differently into alpha space, and bear resemblance to the usual IR and UV divergences in Fourier space.

Let us first focus on z → 0, which is the OPE limit, and suppose we try to transform a function f(z) behaving like z^p (1 + …) for small z to alpha space. For our inner product, square integrability is lost as we dial p to a value less than or equal to 1/2. In alpha space this is reflected by a pair of poles crossing the imaginary axis, as follows from the correspondence between conformal blocks of dimension h and poles at α = ±(h − 1/2). This forces the integration contour in the inverse alpha transform off the imaginary axis, since the correct position-space expression is recovered only if it wraps around the poles as indicated in Fig. <ref>. This is however the only modification necessary, and we conclude that z → 0 singularities of power-law form can be dealt with entirely by augmenting the inverse alpha space transform (<ref>) with a contour prescription around the poles. This prescription works without issues for any 0 < p < 1/2; the special cases p = 0 and p = 1/2 were discussed above in Sec. <ref>.

Now let us consider the limit z → 1. For simplicity we will restrict ourselves to the (physically relevant) case of functions f(z) analytic in 0 < z < 1. First of all, since Ψ_α(1) = 1 we find that

f(1) = ∫[dα] f(α)/N(α),

and similarly it follows from D·Ψ_α(z) = (α² − 1/4) Ψ_α(z) that

D^n·f(1) = ∫[dα] (α² − 1/4)^n f(α)/N(α),

which holds as long as D^n·f(z) remains square integrable.
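The eigenvalue equation D·Ψ_α(z) = (α² − 1/4)Ψ_α(z) used above can itself be checked numerically. The sketch below takes for D the a = b = 0 case of the Casimir operator D_{a,b} written out in the mixed-dimension section further on, i.e. D f = z²(1−z) f″(z) − z² f′(z):

```python
# Numerical check of D . Psi_alpha(z) = (alpha^2 - 1/4) Psi_alpha(z), with
# D f = z^2 (1-z) f''(z) - z^2 f'(z)  (equal external dimensions, a = b = 0).
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 30

def Psi(alpha, z):
    # Psi_alpha(z) = 2F1(1/2+alpha, 1/2-alpha; 1; (z-1)/z)
    return hyp2f1(mpf(1)/2 + alpha, mpf(1)/2 - alpha, 1, (z - 1)/z)

alpha, z = mpf('0.2'), mpf('0.4')
f = lambda u: Psi(alpha, u)
casimir = z**2 * (1 - z) * diff(f, z, 2) - z**2 * diff(f, z, 1)
eigen = (alpha**2 - mpf(1)/4) * Psi(alpha, z)
print(casimir, eigen)
```

The high working precision makes the numerical derivatives accurate to far better than the comparison tolerance.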
Supposing f(z) behaves as a power law near z = 1, we see from

D·((1−z)^ρ (1 + …)) = ρ² (1−z)^{ρ−1} (1 + …)

that acting with the Casimir operator D worsens the behavior near z = 1. For generic positive ρ there exists an n such that D^n·f(1) ceases to be well-defined, and therefore the integral in (<ref>) should somehow suffer the same fate. Since we only modify the integrand with a polynomial factor, this can only happen if the integral stops converging. We conclude that the large α behavior reflects the `short-distance' behavior of f(z) as z → 1.[We can also offer a physical explanation. For fixed α the Ψ_α(z) oscillate very slowly near z = 1, and to probe this region we need to consider very short `wavelengths', corresponding to very large values of the `momentum' α.]

The above discussion also offers a way to make sense of power-law divergent densities in alpha space: we just divide f(α) by sufficiently many powers of α² − 1/4, perform the now-convergent integral over α, and act just as many times with D on the resulting position-space expression. This is in fact entirely analogous to the usual trick in Fourier space, where we habitually make sense of UV-divergent expressions like p^{2α} with α > 0 by replacing powers of p² with a Laplacian operator,

∫dx e^{ipx} p^{2α} (1 + …) → (−□)^n (∫dx e^{ipx} p^{2α−2n} (1 + …)),

with n chosen such that the integral becomes convergent at large p.

The relation between large α and z close to 1 can be made more quantitative. Firstly, if a function f(z) is infinitely differentiable at z = 1, then the preceding logic demonstrates that f(α)/N(α) must fall off faster than any power for large imaginary α. This is exemplified by the alpha space transform of z^ρ given above, which falls off exponentially fast. Secondly, for the generic power-law behavior we find that if f(z) = (1−z)^{−ρ} (1 + O(1−z)) then

f(α) = (−α²)^{ρ−1} Γ(1−ρ)/Γ(ρ) (1 + O(α^{−2})),

which can be found by subtracting the leading power using the alpha space transform of a known function.
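The super-polynomial falloff claim can be probed with the transform of z^p quoted in the examples above, f_p(α) = Γ(p−1/2+α)Γ(p−1/2−α)/Γ²(p): since z^p is smooth at z = 1, this density should decay faster than any power along the imaginary axis (in fact exponentially). A minimal sketch:

```python
# f_p(alpha) = Gamma(p-1/2+alpha) Gamma(p-1/2-alpha) / Gamma(p)^2 should fall
# off faster than any power of alpha on the imaginary axis, since z^p is
# smooth at z = 1.
from mpmath import mp, mpf, mpc, gamma

mp.dps = 30

def f_p(p, alpha):
    return gamma(p - mpf(1)/2 + alpha) * gamma(p - mpf(1)/2 - alpha) / gamma(p)**2

p = mpf('1.2')
vals = [abs(f_p(p, mpc(0, t))) for t in (5, 10, 20)]
# Each doubling of t suppresses f_p far more than any fixed power t^(-k) would.
print([vals[i+1] / vals[i] for i in range(2)])
```

The successive ratios shrink much faster than 2^{−k} for any fixed k, as expected from the e^{−π|α|} falloff of the gamma functions.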
For example, for small enough ρ one can use

∫_0^1 dz/z² [z^ρ (1−z)^{−ρ} − z^ρ] Ψ_α(z) = Γ(ρ−1/2±α)/Γ²(ρ) [Γ(1−ρ)Γ(ρ)/Γ(1/2±α) − 1] = (−α²)^{ρ−1} Γ(1−ρ)/Γ(ρ) (1 + O(α^{−2}))

which can be computed as a limit from the above examples.

§.§.§ Application: OPE convergence

We can use the preceding result to discuss the asymptotic behavior of OPE coefficients in one-dimensional CFTs, i.e., to provide a one-dimensional analogue of the results of <cit.>. Such a result has been discussed previously in the context of the light-cone limit for higher-dimensional CFTs <cit.>. Here we offer an explanation based on the assumption of suitably nice asymptotic behavior in alpha space.

Consider once more a unitary CFT correlation function 𝒢(z) with a corresponding alpha space expression F(α) which is meromorphic with simple poles. Our preceding discussion leads us to conclude that F(α) ∼ (−α²)^{2h_ϕ−1} for large imaginary α, since 𝒢(z) ∼ (1−z)^{−2h_ϕ} as z → 1. We will assume that this asymptotic behavior holds for all non-real α, and so the `subtracted' function F^{(s)}(α) ≡ (α²)^{−2h_ϕ+1−ε} F(α) vanishes asymptotically away from the real axis for any ε > 0. This means we can write a dispersion relation for it: we write

F^{(s)}(α) = ∮[dβ] F^{(s)}(β)/(β − α)

and push the contour away from the point β = α. With the arcs of the contour at infinity vanishing, we find contributions only from the cuts created by the power-law prefactor and the real axis where F(α) has poles. The contributions from the cuts can be made manifestly finite by aligning them along the imaginary axis and keeping the contour some distance away from α = 0. It follows that the contribution from the poles, which after picking up the residues can be written as

∑_n (α_n²)^{−2h_ϕ+1−ε} R_n (1/(α − α_n) + (α ↔ −α)),

is necessarily finite as well.
In a distributional sense, then, we expect the residue series to behave as

∑_n δ(h − h_n) R_n ∼ c(h_ϕ) h^{4h_ϕ−2}.

By working out the example given previously we also find the prefactor:

c(h_ϕ) = 1/Γ²(2h_ϕ).

We observe that the prefactor vanishes when 2h_ϕ is a negative integer, which is precisely when the z = 1 singularity in 𝒢(z) also disappears. Finally we can use equation (<ref>) to relate this result to the asymptotic behavior of the squared primary OPE coefficients themselves as

c_{ϕϕ𝒪_h}² ∼ 4^{1−h} √π/Γ²(2h_ϕ) h^{4h_ϕ−3/2},

agreeing with the lightcone bootstrap result, see e.g. <cit.>.[Strictly speaking there is a factor 2 mismatch between eq:ourres and formula (3.8) in <cit.>, due to the fact that in the d-dimensional lightcone results only even spins are allowed to contribute.] It is interesting to see that the leading exponential falloff arises from the prefactor Q(1/2 − h), and that the falloff speed is independent of the external dimension.

§.§ Alpha space for different external dimensions

So far we considered the case of a four-point function of identical operators. However, the Sturm-Liouville theory for the SL(2) Casimir operator applies just as well to four-point functions of different operators. In this section, we will briefly discuss this generalization.

Concretely, we have in mind a four-point function of primaries ϕ_i of dimension h_i, i = 1,…,4. Conformal symmetry restricts this correlator to have the following form:

⟨ϕ_1(x_1)ϕ_2(x_2)ϕ_3(x_3)ϕ_4(x_4)⟩ = (|x_24|/|x_14|)^{h_12} (|x_14|/|x_13|)^{h_34} z^{h_12} 𝒢(z) / (|x_12|^{h_1+h_2} |x_34|^{h_3+h_4})

for some function 𝒢(z), using the shorthand h_ij ≡ h_i − h_j. The stripped correlator admits a conformal block decomposition of the following form:

𝒢(z) = ∑_𝒪 c_{ϕ_1ϕ_2𝒪} c_{ϕ_3ϕ_4𝒪} k^s_{h_𝒪}(z)

involving the mixed SL(2) conformal blocks

k^s_h(z) = z^{h+a} ₂F₁(h+a, h+b; 2h; z), a = −h_12, b = h_34.

The sum in Eq.
eq:mixedCB now runs over all operators that appear in both the ϕ_1 × ϕ_2 and ϕ_3 × ϕ_4 OPEs; the label `s' refers to this s-channel.

The blocks k^s_h(z) are eigenfunctions of a mixed Casimir differential operator D_{a,b}:

D_{a,b}·f(z) = w_s(z)^{−1} d/dz [w_s(z)(1−z)z² f′(z)] + a(a+1) f(z), w_s(z) = (1−z)^{a+b}/z^{2+2a},

which means that D_{a,b} is self-adjoint with respect to the inner product

⟨f,g⟩_s = ∫_0^1 dz w_s(z) f(z) g(z).

Analyzing the relevant Sturm-Liouville problem leads to the following basis of eigenfunctions:[The ODE D_{a,b} f(z) = (α² − 1/4) f(z) has a second solution, namely z^{2a}/(1−z)^{a+b} ₂F₁(1/2−a+α, 1/2−a−α; 1−a−b; (z−1)/z). This second solution ceases to be regular at z = 1 when a + b > 0.]

Ψ^s_α(z) = ₂F₁(1/2+a+α, 1/2+a−α; 1+a+b; (z−1)/z) = ϑ_α^{(a+b, a−b)}((1−z)/z).

In the second equality, we have rewritten Ψ^s_α(z) as a Jacobi function, to make contact with the integral transform introduced previously. To connect the eigenfunctions Ψ^s_α(z) to the conformal blocks, we compute

Ψ^s_α(z) = 1/2 [Q_s(α) k^s_{1/2+α}(z) + (α → −α)], Q_s(α) = 2Γ(−2α)Γ(1+a+b)/(Γ(1/2+a−α)Γ(1/2+b−α)).

As in the case of equal external dimensions, we can decompose any function f(z) — normalizable with respect to eq:mixedNorm — in terms of the functions Ψ^s_α(z), to wit:

f(z) = ∫[dα]/N_s(α) f(α) Ψ^s_α(z) ⟺ f(α) = ∫_0^1 dz w_s(z) f(z) Ψ^s_α(z)

where

N_s(α) = 2Γ(±2α)Γ²(1+a+b)/(Γ(1/2+a±α)Γ(1/2+b±α)) = |Q_s(α)|²/2.

Some care must be taken when considering the α contour in Eq. eq:sturmS: when either a, b ≤ −1/2, the contour must be deformed in the Mellin-Barnes sense because of poles in the factor 1/N_s(α).
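The decomposition of Ψ^s_α into a block and its shadow can be verified numerically for generic a and b. The sketch below checks Ψ^s_α(z) = (1/2)[Q_s(α) k^s_{1/2+α}(z) + (α → −α)] at a sample point, with conventions as in the text:

```python
# Numerical check of Psi^s_alpha(z) = (1/2) [ Q_s(alpha) k^s_{1/2+alpha}(z)
#                                             + (alpha -> -alpha) ].
from mpmath import mp, mpf, gamma, hyp2f1

mp.dps = 30
half = mpf(1)/2

def Psi_s(alpha, z, a, b):
    return hyp2f1(half + a + alpha, half + a - alpha, 1 + a + b, (z - 1)/z)

def k_s(h, z, a, b):
    return z**(h + a) * hyp2f1(h + a, h + b, 2*h, z)

def Q_s(alpha, a, b):
    return 2 * gamma(-2*alpha) * gamma(1 + a + b) / (
        gamma(half + a - alpha) * gamma(half + b - alpha))

a, b, alpha, z = mpf('0.3'), mpf('-0.1'), mpf('0.15'), mpf('0.35')
lhs = Psi_s(alpha, z, a, b)
rhs = (Q_s(alpha, a, b) * k_s(half + alpha, z, a, b)
       + Q_s(-alpha, a, b) * k_s(half - alpha, z, a, b)) / 2
print(lhs, rhs)
```

Analytically, the relation follows from a Pfaff transformation of Ψ^s_α followed by the standard ₂F₁ connection formula between the points z = 0 and z = 1.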
Such mixed crossing equations have been used intensively in computing scaling dimensions and OPE coefficients for the 3d Ising and O(N) models <cit.>.

Like before, the correlator F_{ϕ_3ϕ_2ϕ_1ϕ_4}(z) appearing in the RHS of eq:mixcross admits a decomposition in conformal blocks and in plane-wave normalizable eigenfunctions of the conformal Casimir. However, care must be taken to use conformal blocks with dimensions h_1 ↔ h_3 exchanged, and likewise for the eigenfunctions Ψ^s_α(z). To be completely explicit, this new conformal block decomposition reads:

F_{ϕ_3ϕ_2ϕ_1ϕ_4}(z) = ∑_𝒪 c_{ϕ_2ϕ_3𝒪} c_{ϕ_1ϕ_4𝒪} k^t_{h_𝒪}(z)

with

k^t_h(z) = z^{h+a′} ₂F₁(h+a′, h+b′; 2h; z), a′ = h_23, b′ = h_14.

Here and in what follows we use the `t' label for blocks and eigenfunctions in the ϕ_2 × ϕ_3 → ϕ_1 × ϕ_4 channel. The appropriate eigenfunctions in the t-channel are

Ψ^t_α(z) ≡ Ψ^s_α(z)|_{h_1 ↔ h_3} = ₂F₁(1/2+a′+α, 1/2+a′−α; 1+a′+b′; (z−1)/z) = ϑ_α^{(a′+b′, a′−b′)}((1−z)/z)

which satisfy

Ψ^t_α(z) = 1/2 [Q_t(α) k^t_{1/2+α}(z) + (α → −α)], Q_t(α) = 2Γ(−2α)Γ(1+a′+b′)/(Γ(1/2+a′−α)Γ(1/2+b′−α)).

Finally, the decomposition of a function f(z) in terms of the functions Ψ^t_α reads

f(z) = ∫[dα]/N_t(α) f(α) Ψ^t_α(z) ⟺ f(α) = ∫_0^1 dz w_t(z) f(z) Ψ^t_α(z)

where w_t(z) = (1−z)^{a′+b′}/z^{2+2a′} and N_t(α) = |Q_t(α)|²/2.

§ CROSSING KERNEL

So far, we have used Sturm-Liouville theory as a tool to represent conformal correlators as integrals over a set of basis functions Ψ_α. In this section, we will use these integral representations to analyze crossing symmetry. In particular, we will compute the d = 1 crossing kernel and exhibit its properties.

§.§ General case

Let us start by considering a mixed four-point function ⟨ϕ_1ϕ_2ϕ_3ϕ_4⟩. For such a correlator, we can write down two inequivalent integral representations:

⟨ϕ_1ϕ_2ϕ_3ϕ_4⟩ ∼ F_{ϕ_1ϕ_2ϕ_3ϕ_4}(z) = ∫[dα]/N_s(α) F_s(α) Ψ^s_α(z), ⟨ϕ_3ϕ_2ϕ_1ϕ_4⟩ ∼ F_{ϕ_3ϕ_2ϕ_1ϕ_4}(z) = ∫[dα]/N_t(α) F_t(α) Ψ^t_α(z).

The ∼ above denotes that we have omitted various unimportant scaling factors.
The spectral density F_s(α) encodes information about the CB decomposition in the s-channel ϕ_1 × ϕ_2 → ϕ_3 × ϕ_4, whereas F_t(α) describes the t-channel ϕ_1 × ϕ_4 → ϕ_2 × ϕ_3.

The two alpha space densities F_{s,t}(α) are related — at least implicitly — via the crossing equation eq:mixcross. Plugging Eq. eq:genrep into that equation, we find that

∫[dα]/N_s(α) F_s(α) Ψ^s_α(z) = (z/(1−z))^{2h_2} ∫[dβ]/N_t(β) F_t(β) Ψ^t_β(1−z).

In order to make the constraints on F_{s,t}(α) manifest, we can manipulate this alpha space bootstrap equation in various ways. For instance, it is possible to express t-channel eigenfunctions in terms of the s-channel ones:

(z/(1−z))^{2h_2} Ψ^t_β(1−z) = ∫[dα]/N_s(α) K(α,β|h_1,h_2,h_3,h_4) Ψ^s_α(z).

The distribution K(α,β|h_1,h_2,h_3,h_4) introduced here relates eigenfunctions in the s- and t-channels, and we will refer to it as a crossing kernel. A schematic interpretation of Eq. eq:kerndef is given in Fig. <ref>.

Using eq:kerndef, we can recast the crossing equation eq:alphaxing as

∫[dα]/N_s(α) [F_s(α) − (K·F_t)(α)] Ψ^s_α(z) = 0

where we have introduced an integral operator K which depends on the h_i:

(K·f)(α) ≡ ∫[dβ]/N_t(β) K(α,β|h_1,h_2,h_3,h_4) f(β).

Recalling that the Ψ^s_α(z) form a complete basis in z-space, Eq. eq:interm1 can only be satisfied if

F_s(α) = (K·F_t)(α).

The point of this identity is that it directly relates the two densities F_{s,t}(α); once we compute the kernel K(α,β|h_i), Eq. eq:xint will be completely explicit.

In the previous computation, we made an arbitrary choice by expressing Ψ^t_β(1−z) in terms of the s-channel functions Ψ^s_α(z). It will be useful to go in the opposite direction as well, by writing

(z/(1−z))^{2h_2} Ψ^s_β(1−z) = ∫[dα]/N_t(α) K̃(α,β|h_1,h_2,h_3,h_4) Ψ^t_α(z)

which involves a second crossing kernel K̃(α,β|h_1,…,h_4).
Using the same logic as before, we arrive at an alternate alpha space crossing equation:

F_t(α) = (K̃·F_s)(α), where (K̃·f)(α) ≡ ∫[dβ]/N_s(β) K̃(α,β|h_1,h_2,h_3,h_4) f(β).

Bringing everything together, we have recast crossing symmetry as a system of integral equations in alpha space:

F_s(α) = (K·F_t)(α), F_t(α) = (K̃·F_s)(α).

§.§ Identical operators

Let us briefly consider the case of the four-point function ⟨ϕϕϕϕ⟩ of four identical primaries. In that case, there is only one spectral density F(α) of interest, namely

⟨ϕϕϕϕ⟩ ∼ 𝒢(z) = ∫[dα]/N(α) F(α) Ψ_α(z).

Rather than a system of coupled integral equations, one now finds an eigenvalue equation for the density F(α):

F(α) = (K_0·F)(α)

where the integral operator K_0 is defined as

(K_0·f)(α) ≡ ∫[dβ]/N(β) K_0(α,β|h_ϕ) f(β), K_0(α,β|h_ϕ) ≡ K(α,β|h_ϕ,h_ϕ,h_ϕ,h_ϕ).

§.§ Functional properties of the crossing kernels

In what follows, we will compute the crossing kernels K(α,β|h_i), K̃(α,β|h_i) and K_0(α,β|h_ϕ). Since this computation is somewhat technical, we will first derive several properties of these kernels.

Evidently, all of the kernels are even in their arguments α and β. Less trivially, we see that the kernels K and K̃ are identical after exchanging the external dimensions h_1 and h_3:

K̃(α,β|h_1,h_2,h_3,h_4) = K(α,β|h_3,h_2,h_1,h_4)

as follows from Eqs. eq:kerndef, eq:kerndef2. Next, from the structure of Eq. eq:mixsystem, we can surmise that

K·K̃ = K̃·K = 𝕀.

We have derived this with input from the bootstrap, but later we will rederive Eq. eq:inverses formally. For the case of identical operators, Eq. eq:inverses becomes

K_0² = 𝕀.

Notice that Eqs. eq:inverses and eq:inverses2 only hold when restricted to some space of even functions, as the images of the integral operators K, K̃ and K_0 are even by construction.

Both identities eq:inverses and eq:inverses2 are statements about integral operators. By acting with these operators on test functions — say, having compact support — we can turn them into orthogonality/completeness relations for the crossing kernels themselves.
To make this concrete, let's define the distributions

D_s(α,β|h_1,h_2,h_3,h_4) ≡ N_s(α)^{−1} ∫[dy]/N_t(y) K(α,y|h_i) K̃(y,β|h_i),
D_t(α,β|h_1,h_2,h_3,h_4) ≡ N_t(α)^{−1} ∫[dy]/N_s(y) K̃(α,y|h_i) K(y,β|h_i) = D_s(α,β|h_3,h_2,h_1,h_4).

Our claim is that D_{s,t}(α,β|h_i) behave as delta functions on the imaginary axis. Indeed, Eq. eq:inverses implies that

∫[dβ] D_s(α,β|h_i) f(β) = ∫[dβ] D_t(α,β|h_i) f(β) = (f(α) + f(−α))/2

where f(α) is arbitrary. This can be thought of as the “local” version of eq:inverses. In the case of identical operators, we simply have

∫[dβ] D_0(α,β|h_ϕ) f(β) = (f(α) + f(−α))/2

where

D_0(α,β|h_ϕ) = N(α)^{−1} ∫[dy]/N(y) K_0(α,y|h_ϕ) K_0(y,β|h_ϕ).

Eqs. eq:id11 and eq:id12 can be obtained as a limiting case of eq:KasDirac. Interestingly, Eqs. eq:KasDirac and eq:id11 imply that the distributions D_{s,t}(α,β|h_i) and D_0(α,β|h_ϕ) are identical and independent of the external dimensions h_i resp. h_ϕ. As with the Fourier transform, the above identities mean that well-behaved functions f(α) can be decomposed in terms of the “basis functions” K, K̃ and K_0, with computable coefficients.

§.§ Computation of the crossing kernel

Let us now turn to the computation of the crossing kernel K(α,β|h_i). To do so, we can use the alpha space technology from Sec. <ref> to write down a position-space integral representation for K, namely

K(α,β|h_1,h_2,h_3,h_4) = ∫_0^1 dz w_s(z) (z/(1−z))^{2h_2} Ψ^s_α(z) Ψ^t_β(1−z).

It will be convenient to employ standard Mellin representations for the functions Ψ^{s,t}_α(z):

Ψ^s_α(z) = Γ(1+a+b)/Γ(1/2+a±α) ∫[ds] Γ(−s)Γ(1/2+a+s±α)/Γ(1+a+b+s) ((1−z)/z)^s,
Ψ^t_β(1−z) = Γ(1+a′+b′)/Γ(1/2+a′±β) ∫[dt] Γ(−t)Γ(1/2+a′+t±β)/Γ(1+a′+b′+t) (z/(1−z))^t.

Plugging these into eq:kintrep, one obtains an integral representation of the form K(α,β|h_i) = ∫_0^1 dz ∫[ds] ∫[dt] (…). Exchanging the order of the integrals, the z-integral yields a beta function, whereas the resulting t-integral can be performed using the second Barnes lemma.
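Contour-integral lemmas of this kind are straightforward to validate numerically. As an illustration of the Mellin-Barnes toolkit, the sketch below checks Barnes' first lemma (the second lemma, used in the text, is the analogue with an extra gamma function in the denominator of the integrand):

```python
# Barnes' first lemma, checked by direct integration along the imaginary axis:
#   int [ds] Gamma(a+s) Gamma(b+s) Gamma(c-s) Gamma(d-s)
#     = Gamma(a+c) Gamma(a+d) Gamma(b+c) Gamma(b+d) / Gamma(a+b+c+d),
# with [ds] = ds/(2 pi i) and the contour separating the two pole families
# (valid here since a, b, c, d are all positive).
from mpmath import mp, mpf, mpc, gamma, quad, pi, inf

mp.dps = 25

a, b, c, d = mpf('0.3'), mpf('0.8'), mpf('0.4'), mpf('0.9')

def integrand(t):
    s = mpc(0, t)   # s = i t runs along the imaginary axis
    return gamma(a + s) * gamma(b + s) * gamma(c - s) * gamma(d - s)

lhs = quad(integrand, [-inf, inf]) / (2 * pi)   # ds/(2 pi i) with ds = i dt
rhs = gamma(a + c) * gamma(a + d) * gamma(b + c) * gamma(b + d) / gamma(a + b + c + d)
print(lhs, rhs)
```

The integrand decays like e^{−2π|t|}, so the doubly infinite integral converges very quickly.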
What remains is the following Mellin representation:[A different-looking representation can be found by doing the s-integral first.]

K(α,β|h_1,h_2,h_3,h_4) = Γ(1+a+b)Γ(1+a′+b′)/(Γ(1/2+a±α)Γ(1/2+b′±β)) × ∫[ds] Γ(−s)Γ(1/2+a+s±α)/Γ(1+a+b+s) Γ(2h_1−1−s)Γ(3/2−h_1−h_4+s±β)/Γ(2−h_1+h_2−h_3−h_4+s).

This integral can be performed by closing the contour and picking up poles on the right half plane, at s = ℕ and s = 2h_1−1+ℕ. The result is a sum of two hypergeometric ₄F₃(1) functions, and it can be cast into a standard form by introducing the Wilson functions of Ref. <cit.>:[Our conventions differ from those of <cit.> as follows: W_α(β|a,b,c,d) = ϕ_{iα}(iβ; a,b,c,1−d).]

W_α(β; a,b,c,d) = Γ(d−a)/(Γ(a+b)Γ(a+c)Γ(d±β)Γ(d̃±β)) ₄F₃(a+β, a−β, ã+α, ã−α; a+b, a+c, 1+a−d; 1) + (a ↔ d)

writing ã = 1/2(a+b+c−d) and d̃ = 1/2(−a+b+c+d). It is useful to know that W_α(β;a,b,c,d) is even in its arguments α and β, and it depends symmetrically on its parameters {a,b,c,d}. A closed-form expression for the crossing kernel is then given by

K(α,β|h_1,h_2,h_3,h_4) = Γ(1−h_12+h_34)Γ(1+h_14+h_23) × Γ(h_1+h_2−1/2±α) Γ(3/2−h_1−h_4±β) W_α(β;P)

with parameters P = P(h_1,h_2,h_3,h_4) specified by

P = {1/2+h_14, 1/2+h_23, h_2+h_3−1/2, 3/2−h_1−h_4}.

The kernel K̃(α,β|h_1,h_2,h_3,h_4) admits an expression similar to eq:Kformula, the only difference being that h_1 ↔ h_3 are swapped. For completeness, we print the formula for the identical-operator kernel K_0(α,β|h_ϕ) here as well:

K_0(α,β|h_ϕ) = Γ(2h_ϕ−1/2±α)Γ(3/2−2h_ϕ±β) W_α(β;P_0), P_0 = {1/2, 1/2, 2h_ϕ−1/2, 3/2−2h_ϕ}.

§.§ K and K̃ as intertwiners

Having computed the crossing kernels K and K̃, let us now revisit the alpha space crossing equation eq:mixsystem. Informally, it encodes that K maps a “t-channel” alpha space density to an “s-channel” one, and vice versa for K̃. In this section we will formalize this idea, making precise in which sense K and K̃ intertwine between two different Hilbert spaces.
First, let's introduce a Hilbert space H_s = H_s(h_1,h_2,h_3,h_4) for s-channel functions, consisting of all functions f(α) that are even in α and L² with respect to the following inner product:

(f,g)_s ≡ ∫[dα]/M_s(α;h_1,h_2,h_3,h_4) f(α) g(α),
M_s(α;h_1,h_2,h_3,h_4) = 2Γ²(1−h_12+h_34)Γ(±2α)Γ(h_1+h_2−1/2±α)/(Γ(1/2−h_12±α)Γ(1/2+h_34±α)Γ(3/2−h_3−h_4±α)).

We have introduced an α-independent factor in the measure M_s(α;h_i) to simplify some formulas later on. The integration contour in eq:sHilb is to be understood in the Mellin-Barnes sense, which means that it may be deformed depending on the values of the h_i. Likewise, we introduce a t-channel Hilbert space H_t(h_1,h_2,h_3,h_4) of even functions that are square-integrable with respect to

(f,g)_t ≡ ∫[dα]/M_t(α;h_1,h_2,h_3,h_4) f(α) g(α), M_t(α;h_1,h_2,h_3,h_4) = M_s(α;h_3,h_2,h_1,h_4).

We now claim that the following holds:

Theorem 1.1: K is a unitary map H_t → H_s, and K̃: H_s → H_t is its inverse.

Unitarity here means that K and K̃ preserve the inner products defined in Eqs. eq:sHilb and eq:tHilb, namely

(f,g)_t = (K·f, K·g)_s and (f,g)_s = (K̃·f, K̃·g)_t.

The proof of this result follows from the properties of the Wilson transform, introduced in Ref. <cit.>. This integral transform uses the Wilson functions W_α(β;a,b,c,d) as a basis. The above result can straightforwardly be deduced from Theorem 4.12 of Ref. <cit.>. Consequently, we will not provide many details. However, it will be instructive to provide a sketch of a (constructive) proof. First, one establishes that H_s is spanned by the following functions:

ξ^s_n(α|h_i) = Γ(1−h_12+h_34)Γ(h_1+h_2−1/2±α) p_n(α;P̃), n ∈ ℕ.

The Wilson polynomials p_n were defined in Eq. eq:wdef, and the set of parameters P̃ is given by

P̃(h_1,h_2,h_3,h_4) = {1/2−h_12, 1/2+h_34, h_1+h_2−1/2, 3/2−h_3−h_4} = P(h_3,h_2,h_1,h_4).

Likewise, H_t is spanned by the functions

ξ^t_n(α|h_i) = Γ(1+h_14+h_23)Γ(h_2+h_3−1/2±α) p_n(α;P).

By linearity, it suffices to establish that K and K̃ act appropriately on these basis functions.
To establish this, one proves first that

(ξ^s_m, ξ^s_n)_s = (ξ^t_m, ξ^t_n)_t ∝ δ_mn

as well as

(K·ξ^t_n)(α) = (−1)^n ξ^s_n(α), (K̃·ξ^s_n)(α) = (−1)^n ξ^t_n(α).

Eq. eq:dualIP is a property of the Wilson polynomials p_n <cit.>, and Eq. eq:eigX is a consequence of Theorem 6.7 of <cit.>.

A similar result holds for the case of identical operators. There one defines a Hilbert space H_0 = H_0(h_ϕ) of even functions that are finite with respect to

(f,g)_0 ≡ ∫[dα]/M_0(α;h_ϕ) f(α) g(α), M_0(α;h_ϕ) = M_s(α;h_ϕ,h_ϕ,h_ϕ,h_ϕ).

Then the counterpart of the above theorem reads:

Theorem 1.2: K_0 is a unitary map H_0 → H_0 obeying K_0² = id.

Here unitarity means that

(f,g)_0 = (K_0·f, K_0·g)_0.

The proof goes along the same lines as the general case discussed before. A basis for H_0 is now spanned by the functions

ξ^0_n(α|h_ϕ) = Γ(2h_ϕ−1/2±α) p_n(α;P_0)

where P_0 was defined in eq:podef. The operator K_0 maps the ξ^0_n to themselves, up to a sign (−1)^n:

(K_0·ξ^0_n)(α) = (−1)^n ξ^0_n(α).

Of course, the only permissible eigenvalues that could have appeared were ±1, given that K_0² = 𝕀.

§.§ Analytic structure of the crossing kernel

Since we have rephrased bootstrap equations as integral equations in alpha space, it will be instructive to analyze the analytic structure of the crossing kernel K(α,β|h_1,h_2,h_3,h_4). Let's first fix β and investigate the properties of K as a function of α, using Eq. eq:Kformula. Since the Wilson functions W_α(β;a,b,c,d) are analytic in α and β, the only poles in α are due to the factor Γ(h_1+h_2−1/2±α). Consequently K(α,β|h_i) is a meromorphic function, with its only poles on the right half plane at α = h_1+h_2−1/2+ℕ. The relevant residues are polynomials of degree n in β², namely

R_n(β;h_1,h_2,h_3,h_4) ≡ −Res K(α,β|h_1,h_2,h_3,h_4)|_{α = h_1+h_2−1/2+n}
= Γ(1−h_12+h_34)/(n!(1+h_14+h_23)_n) Γ(2h_1+2h_2−1+n)/(Γ(2h_2+n)Γ(h_1+h_2+h_34+n)) × p_n(β; 1/2+h_14, 1/2+h_23, h_1+h_4−1/2, h_2+h_3−1/2).

Next, remark that for generic values of α, K(α,β|h_i) is a rather complicated function of β.
Upon closer inspection it appears that at certain values α_* the kernel K(α_*,β|h_i) becomes polynomial in β, up to a number of gamma functions. The relevant values α = α_* are organized in three families:

α^I_n = 3/2−h_3−h_4+n, α^II_n = 1/2−h_12+n, α^III_n = 1/2+h_34+n, n ∈ ℕ.

For the first family, we find for instance

K(α^I_n, β) = k^I_n Γ(3/2−h_1−h_4±β)/Γ(h_2+h_3−1/2±β) p_n(β; 1/2+h_14, 1/2+h_23, 3/2−h_1−h_4, 3/2−h_2−h_3)

where k^I_n is a constant that does not depend on β. For the second and third families, we find

K(α^II_n, β) = k^II_n Γ(3/2−h_1−h_4±β)/Γ(1/2+h_14±β) p_n(β; 1/2−h_14, 1/2+h_23, h_2+h_3−1/2, 3/2−h_1−h_4),
K(α^III_n, β) = k^III_n Γ(3/2−h_1−h_4±β)/Γ(1/2+h_23±β) p_n(β; 1/2+h_14, 1/2−h_23, h_2+h_3−1/2, 3/2−h_1−h_4).

We can also consider the analytic structure of K(α,β|h_i) as a function of β for fixed α. This is a simple exercise, given the relation eq:KandKw. We therefore refrain from printing explicit formulas.

§.§ Symmetries of the crossing kernel

The crossing kernel obeys various identities which we will exhibit here. Since none of these results are used in the rest of this paper, this section can be skipped on a first reading.

It will be convenient to strip off the gamma functions in Eq. eq:Kformula and to relabel the external dimensions as h_i → 1/2 + γ_i.
What remains is a single Wilson function, namely

K̂(α,β|γ_1,γ_2,γ_3,γ_4) = W_α(β|1/2+γ_1−γ_4, 1/2+γ_2−γ_3, 1/2−γ_1−γ_4, 1/2+γ_2+γ_3).

First, we recall that W_α(β;a,b,c,d) depends symmetrically on its parameters {a,b,c,d}, which implies that K̂(α,β|γ_i) obeys

K̂(α,β|γ_1,γ_2,γ_3,γ_4) = K̂(α,β|−γ_1,γ_2,γ_3,γ_4) = K̂(α,β|γ_1,γ_2,−γ_3,γ_4) = K̂(α,β|γ_3,−γ_4,γ_1,−γ_2) = K̂(α,β|−γ_1^♮,−γ_2^♮,−γ_3^♮,−γ_4^♮), γ_i^♮ = −γ_i + 1/2 ∑_{j=1}^4 γ_j.

A second type of symmetry can be found using the identity (see Lemma 5.3 of <cit.>)

W_α(β; A+ω, A−ω, B+ρ, B−ρ) = W_ω(ρ; A+α, A−α, B+β, B−β)

which descends to

K̂(α,β|γ_1,γ_2,γ_3,γ_4) = K̂(γ_1,γ_3|α,γ_2,β,γ_4).

A final relation follows from the “duality” property of the Wilson functions:

W_α(β;a,b,c,d) = W_β(α; ã,b̃,c̃,d̃), [ã; b̃; c̃; d̃] = 1/2(a+b+c+d) − [d; c; b; a]

which implies that

K̂(α,β|γ_1,γ_2,γ_3,γ_4) = K̂(β,α|γ_3,γ_2,γ_1,γ_4).

The reader may notice that the above symmetries are reminiscent of those corresponding to the SU(2) 6-j symbol <cit.>. In the SU(2) context, the transformations γ_{1,3} ↦ −γ_{1,3} are known as mirror symmetries and γ_i ↦ γ_i^♮ is a Regge transformation; Eqs. eq:row1, eq:col1 and eq:row2 are related to transformations that exchange rows and columns of the 6-j symbol. A subset of the above symmetries lifts to the full crossing kernel K(α,β|h_i):

K(α,β|h_1,h_2,h_3,h_4) = K(α,β|h_3^♮,h_4^♮,h_1^♮,h_2^♮), h_i^♮ = −h_i + 1/2 ∑_{j=1}^4 h_j,
= K(β,α|1−h_3^♮,1−h_2^♮,1−h_1^♮,1−h_4^♮),
= K(β,α|1−h_1,1−h_4,1−h_3,1−h_2).

Any two of these identities imply the third one. In conclusion, it appears that the automorphism group of the K(α,β|h_i) is isomorphic to the Klein four-group. In passing, we note that Eq. eq:fullK can also be derived by inspecting the integral representation eq:kerndef.

§.§.§ Limit cases

For bootstrap applications, one is often interested in four-point functions where some of the operators are identical. In that case, the discussion of the symmetries of the crossing kernel simplifies drastically.
For a mixed four-point function of the form ⟨σσεε⟩, there are two relevant crossing kernels:

K_{m,1}(α,β|h_σ,h_ε) ≡ K(α,β|h_ε,h_σ,h_σ,h_ε), K_{m,2}(α,β|h_σ,h_ε) ≡ K(α,β|h_σ,h_σ,h_ε,h_ε).

In this case, the content of Eq. eq:fullK reduces to

K_{m,1}(α,β|h_σ,h_ε) = K_{m,2}(β,α|1−h_ε,1−h_σ).

Finally, when all external dimensions are identical, the relevant kernel is K_0(α,β|h_ϕ), which obeys

K_0(α,β|h_ϕ) = K_0(β,α|1−h_ϕ).

§ APPLICATIONS TO THE CONFORMAL BOOTSTRAP

In Section <ref>, we reformulated crossing symmetry in the form of integral equations in alpha space, making use of the crossing kernel K(α,β|h_i). For definiteness, let us consider the identical-operator alpha space equation Eq. eq:idx:

F(α) = ∫[dβ]/N(β) K_0(α,β|h_ϕ) F(β).

In the bootstrap context, we can ask whether Eq. eq:eq (combined with unitarity) can be used to find useful constraints on F(α). In this section we will sketch some ideas in this direction, making use of the properties of the crossing kernel as discussed in Sec. <ref>.

§.§ (Dis)proving a false theorem

We will start by outlining a simple idea for analyzing the alpha space crossing equation eq:eq. One can think of the RHS of eq:eq as a function of α,

α ↦ ∫[dβ]/N(β) K_0(α,β|h_ϕ) F(β),

and require that eq:afunc has exactly the same analytic structure as F(α), appearing on the LHS of eq:eq. Taken at face value, this should lead to constraints on the poles and residues of F(α), which correspond to CFT data.

The function eq:afunc only depends on α through the crossing kernel K_0(α,β|h_ϕ). Using the results of Sec. <ref>, we see that the identical-operator kernel K_0(α,β|h_ϕ) has poles at α_n = 2h_ϕ−1/2+n, n ∈ ℕ, with residues

R_n(β|h_ϕ) ≡ R_n(β|h_ϕ,h_ϕ,h_ϕ,h_ϕ) = Γ(4h_ϕ−1+n)/(n!² Γ²(2h_ϕ+n)) p_n(β; 1/2, 1/2, 2h_ϕ−1/2, 2h_ϕ−1/2).

Plugging this result into eq:eq, we naively conclude that F(α) can only have poles at α = α_n, with their residues constrained as follows:

−Res F(α)|_{α = α_n} ?= ∫[dβ]/N(β) R_n(β|h_ϕ) F(β).
Obviously, this conclusion is wrong: it says that any solution to crossing consists of a single tower of exchanged operators with dimensions 2h_ϕ + ℕ. Although solutions of this form exist (e.g. in mean field theory), any interacting CFT correlator furnishes a counterexample to eq:wrongthm. From a mathematical point of view, we arrived at eq:wrongthm using a doubtful manipulation:

Res[∫[dβ]/N(β) K_0(α,β|h_ϕ) F(β)]_{α = α_n} ?= ∫[dβ]/N(β) [Res K_0(α,β|h_ϕ)]_{α = α_n} F(β).

This fails to hold in general, as the function eq:afunc is defined for real α only by analytic continuation. It would be interesting to see if this wrong argument can be refined to give useful bootstrap constraints, likely by deforming the contour in Eq. eq:eq, as discussed in Sec. <ref>.

§.§ Split kernel

A second idea is to close the β contour in Eq. eq:eq to the right, picking up poles in β. Since the integrand appearing in the RHS of eq:eq equals

K_0(α,β|h_ϕ) F(β)/N(β),

poles in β can come from three different factors. As mentioned, the poles in F(β) — and their residues — are unknown, but of physical interest. Next, 1/N(β) has poles at β = 1/2 + ℕ, and K_0(α,β|h_ϕ) has poles at β = 3/2−2h_ϕ+ℕ.[Note that the poles of K_0(α,β|h_ϕ) in β are related to the poles in α through Eq. eq:KandKw. In particular, the β residues are Wilson polynomials in α.] Closing the contour means that we have to keep track of all of these different poles.

We propose to modify Eq. eq:eq in a straightforward way, bypassing this bookkeeping exercise. The key point is that both N(β) and F(β) are even in β; in the definition eq:kerndef of the crossing kernel, it is therefore possible to replace Ψ^t_β(1−z) by Q_t(β) k^t_{1/2+β}(1−z), where Q_t and k^t_h(z) were defined in Sec. <ref>.
Concretely, we recast the crossing equation as F(α) = ∫[dβ] K_split(α,β|h_ϕ,h_ϕ,h_ϕ,h_ϕ) F(β), with K_split(α,β|h_1,h_2,h_3,h_4) ≡ Q_t(β)/N_t(β) ∫_0^1 dz w_s(z) (z/(1-z))^2h_2 Ψ_α^s(z) k^t_{β+1/2}(1-z). We will from now on consider this “split” kernel K_split(α,β|h_i) with arbitrary external dimensions, although only the case h_1 = … = h_4 ≡ h_ϕ is of interest in the analysis of Eq. eq:eq. We claim that the split kernel K_split does not have any poles on the right half plane Re(β) > 0. That is to say, by closing the contour of eq:spliteq to the right, we only pick up poles coming from F(β), as desired. The proof of this claim follows from a direct computation, very similar to the one from Sec. <ref>. The only difference is that we use a Mellin-Barnes representation for the cross-channel block k^t_{β+1/2}(1-z), namely k^t_{β+1/2}(1-z) = Γ(1+2β)/(Γ(1/2+a'+β)Γ(1/2-b'+β)) × ∫[dt] Γ(-t)Γ(1/2+a'+β+t)Γ(1/2-b'+β+t)/Γ(1+2β+t) (z/(1-z))^{1/2+β+a'+t}. As an intermediate step, we rewrite K_split as a Mellin-Barnes integral: K_split(α,β|h_i) = Γ(1-h_12+h_34)/Γ(1+h_14+h_23) · 2/Γ(1/2-h_12±α) · Γ(1/2+h_23+β)/Γ(1/2-h_23+β) ∫[ds] Γ(-s)Γ(1/2-h_12+s±α)/Γ(1-h_12+h_34+s) × Γ(2h_1-1-s)Γ(h_12+h_3+h_4-1-s)Γ(3/2-h_1-h_4+β+s)/Γ(h_1+h_4-1/2+β-s). Closing the contour to the left[Closing the contour to the right would mean picking up poles at s = 2h_1 - 1 + N and s = h_12 + h_3 + h_4 - 1 + N. In the case of equal external dimensions, these two series of poles collide to form a single series of double poles.
] and picking up poles at s = -N and s = ±α - 1/2 + h_12 - N, we obtain the following closed-form formula for K_split: K_split(α,β|h_1,h_2,h_3,h_4) = I_1(α,β|h_i) + I_2(α,β|h_i) + I_2(-α,β|h_i), where I_1(α,β|h_i) = Γ(1-h_12+h_34)/Γ(1+h_14+h_23) · 2/(S(h_2+h_4+α-β)S(h_2+h_4-α-β)) × Γ(1/2+h_14+β)Γ(1/2+h_23+β)Γ(3/2-h_1-h_4+β)/(Γ(1/2-h_12±α)Γ(h_2+h_3-1/2-β)) × _4F̃_3[1/2+h_14+β, 1/2-h_23+β, 3/2-h_1-h_4+β, 3/2-h_2-h_3+β; 1+2β, 2-h_2-h_4+α+β, 2-h_2-h_4-α+β; 1], I_2(α,β|h_i) = -Γ(1-h_12+h_34)/Γ(1+h_14+h_23) · 2/S(h_2+h_4+α-β) × Γ(h_1+h_2-1/2+α)Γ(h_3+h_4-1/2+α)/(S(2α)Γ(1/2-h_12-α)Γ(1/2+h_34-α)) · Γ(1/2+h_23+β)/Γ(1/2-h_23+β) × _4F̃_3[1/2-h_12+α, 1/2-h_34+α, h_1+h_2-1/2+α, h_3+h_4-1/2+α; 1+2α, h_2+h_4+α+β, h_2+h_4+α-β; 1]. Here we used the notation C(x) = cos(π x)/π, S(x) = sin(π x)/π, and the _4F̃_3(1) are regularized hypergeometric functions. Above we claimed that K_split(α,β|h_i) was analytic in β on the right half plane. This is not completely manifest from the expressions in Eq. eq:Ksplitexp; in fact, it appears that both I_1 and I_2 have singularities at β = h_2+h_4±α+N. However, it can be shown (using hypergeometric identities, see e.g. <cit.>) that the residues in I_1(α,β) and I_2(±α,β) at these points exactly cancel. Equivalently, analyticity follows from a contour pinching argument applied to the Mellin-Barnes integral in Eq. eq:splitMB. In passing, we claim that K_split has the following symmetry: K_split(α,β|h_1,h_2,h_3,h_4) = K_split(α,β|h_3^♮,h_4^♮,h_1^♮,h_2^♮), cf. Eq. eq:k1r for the normal kernel.[We also note the existence of a rather mysterious relation between I_1(α,β|h_i) and I_2(α,β|h_i), namely I_2(α,β|h_1,h_2,h_3,h_4) = N_s(α)/N_t(β) · C(β-h_23)S(h_2+h_4+α+β)/(C(α+h_3+h_4)S(2β)) · I_1(β,α|1-h_1,1-h_4,1-h_3,1-h_2).
] To establish eq:kss, one develops an alternate Mellin-Barnes representation for K_split, by changing the order of integration: K_split(α,β|h_i) = Γ(1-h_12+h_34)/Γ(1+h_14+h_23) · 2/Γ(1/2+h_34±α) · Γ(1/2+h_14+β)/Γ(1/2-h_14+β) ∫[dt] Γ(-t)Γ(1/2+a'+β+t)Γ(1/2-b'+β+t)/Γ(1+2β+t) · Γ(h_1+h_3-1-β-t±α)Γ(3/2-h_1-h_4+β+t)/Γ(h_2+h_3-1/2-β-t). Closing the contour to the right, we find a representation of K_split of the schematic form eq:K3t, with I_1,2(α,β|h_i) replaced by functions J_1,2(α,β|h_i) obeying I_k(α,β|h_1,h_2,h_3,h_4) = J_k(α,β|h_3^♮,h_4^♮,h_1^♮,h_2^♮), k = 1,2. This proves Eq. eq:kss. Let us finally return to Eq. eq:spliteq. The modified falloff of the split kernel allows one to close the contour in the right β plane and pick up the poles, which we have just demonstrated can only come from F(β). Therefore, up to a simple numerical factor, the split kernel considered as a function of α for a fixed β is precisely the s-channel alpha space transform of a single t-channel conformal block. It is therefore of interest to consider the analytic properties of K_split(α,β|h_i) in α as well. For example, for identical external dimensions h_i, a contour pinching argument applied to Eq. eq:splitMB shows that K_split(α,β|h_ϕ) has double rather than single poles at the double-trace values α = ±(2h_ϕ - 1/2 + N), reflecting the logarithmic behavior of k^s_{β+1/2}(z) as z → 1 in position space. This most clearly demonstrates that physical conformal blocks in one channel cannot be expressed as proper sums of blocks in the crossed channel, and consequently that a different basis of functions, like our Ψ_α(z), is needed to arrive at a meaningful crossing symmetry kernel.

§.§ Using the ξ_n as a basis

It appears that a special role is played by the alpha space functions ξ_n^s(α|h_i), ξ_n^t(α|h_i) and ξ_n^0(α|h_ϕ), defined in Eqs. eq:xisdef, eq:xitdef, eq:xi0def. In fact, these basis functions furnish infinitely many solutions to crossing symmetry.
To make this concrete, consider the mixed-correlator bootstrap equation eq:mixsystem, which is automatically solved if F_s,t(α) are chosen as follows: F_s(α) = ∑_{n even} c_n ξ_n^s(α|h_i), F_t(α) = ∑_{n even} c_n ξ_n^t(α|h_i). It is crucial that the same coefficients c_n appear both in F_s(α) and F_t(α), and that only ξ_n with even n appear. The reason is that the ξ_n^{s,t}(α) with odd n are antisymmetric under crossing. To understand this more intuitively, it is instructive to analyze the ξ_n in position space. Using Eq. eq:jnaarw, we find that the z-space versions of ξ_n^s(α|h_i) and ξ_n^t(α|h_i) are given by ξ_n^s(z|h_i) = n! Γ(2h_2+n) Γ(h_1+h_3+h_24+n) z^2h_2 P_n^(a+b,a'+b')(1-2z), ξ_n^t(z|h_i) = n! Γ(2h_2+n) Γ(h_1+h_3+h_24+n) z^2h_2 P_n^(a'+b',a+b)(1-2z). Given Eq. eq:xipos, it follows directly that ξ_n^s(z|h_i) = (-1)^n (z/(1-z))^2h_2 ξ_n^t(1-z|h_i), where we use that P_n^(p,q)(-x) = (-1)^n P_n^(q,p)(x). Comparing to the crossing equation eq:mixcross, one confirms that the ξ_n with even (resp. odd) n are symmetric (resp. antisymmetric) under crossing symmetry. Next, we will consider the CB decomposition of the functions ξ_n, at least schematically. Notice that ξ_n^s(α|h_i) only has poles at α = h_1+h_2-1/2+N, as well as mirror poles on the left half plane. Given our discussion in Sec. <ref>, this implies that ξ_n^s(α|h_i) has a CB decomposition consisting of operators of dimensions h_1+h_2+N. Such a conformal block decomposition looks similar to a mean-field solution, where only double-twist primaries [ϕ_1 ϕ_2]_n ∼ ϕ_1 ∂^n ϕ_2 contribute. Similarly, ξ_n^t has a CB decomposition with a spectrum given by h_2+h_3+N. For definiteness, we will compute the CB decomposition of ξ^0_n(α|h_ϕ) explicitly. The position-space version of ξ^0_n(α|h_ϕ) is a limiting case of eq:xipos, namely ξ^0_n(z|h_ϕ) = n! Γ^2(2h_ϕ+n) z^2h_ϕ P_n(1-2z), where P_n denotes a Legendre polynomial.
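The reflection identity P_n^(p,q)(-x) = (-1)^n P_n^(q,p)(x) and the resulting crossing relation between ξ_n^s and ξ_n^t are easy to verify numerically. Below is a minimal sketch in Python that evaluates Jacobi polynomials through their explicit sum formula; the chosen values of p, q, h_2 and z are arbitrary, and the common n-dependent prefactor of the two z-space expressions is dropped since it cancels between the two sides:

```python
from math import gamma

def gbinom(a, k):
    # generalized binomial coefficient C(a, k) for real a and integer k >= 0
    return gamma(a + 1) / (gamma(a - k + 1) * gamma(k + 1))

def jacobi(n, p, q, x):
    # Jacobi polynomial P_n^{(p,q)}(x) from its explicit sum formula
    return sum(gbinom(n + p, n - s) * gbinom(n + q, s)
               * ((x - 1) / 2) ** s * ((x + 1) / 2) ** (n - s)
               for s in range(n + 1))

# reflection identity P_n^{(p,q)}(-x) = (-1)^n P_n^{(q,p)}(x)
p, q, x = 0.7, 1.3, 0.4          # arbitrary test values
for n in range(8):
    assert abs(jacobi(n, p, q, -x) - (-1) ** n * jacobi(n, q, p, x)) < 1e-10

# resulting crossing relation xi_n^s(z) = (-1)^n (z/(1-z))^{2 h2} xi_n^t(1-z),
# dropping the common prefactor n! Gamma(2 h2 + n) Gamma(h1 + h3 + h24 + n)
h2, z = 0.6, 0.3                 # arbitrary test values
for n in range(8):
    xi_s = z ** (2 * h2) * jacobi(n, p, q, 1 - 2 * z)
    xi_t = (1 - z) ** (2 * h2) * jacobi(n, q, p, 1 - 2 * (1 - z))
    assert abs(xi_s - (-1) ** n * (z / (1 - z)) ** (2 * h2) * xi_t) < 1e-10
```

The second loop is just the first identity dressed with the prefactors: since P_n^(p,q)(1-2z) = (-1)^n P_n^(q,p)(2z-1), the powers of z and 1-z combine exactly as stated in the text.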
As above, these functions are crossing (anti)symmetric for even (odd) n, as follows from ξ^0_n(z|h_ϕ) = (-1)^n (z/(1-z))^2h_ϕ ξ^0_n(1-z|h_ϕ). The CB decomposition of ξ_n^0(z|h_ϕ) can be found using alpha space technology; in particular, its residues in alpha space are equal to Wilson polynomials evaluated at certain values of α. The precise result is ξ^0_n(z|h_ϕ) = ∑_m=0^∞ A_m^(n) k_{2h_ϕ+m}(z), where A_m^(n) = Γ^2(2h_ϕ+n) n! (-1)^m/m! · (2h_ϕ)_m^2/(4h_ϕ-1+m)_m · _4F_3(-n,-m,n+1,4h_ϕ-1+m; 2h_ϕ,2h_ϕ,1; 1). Notice that the coefficients A_m^(n) are sign-alternating: sgn(A_m^(n)) = (-1)^m, provided that h_ϕ > 0. This implies that the ξ_n do not correspond to unitary solutions of crossing. At least formally, it is possible to derive selection rules for alpha space densities using the functions ξ_n. We will focus on the identical-operator case for simplicity. Recall that the ξ_n^0 form a basis of the Hilbert space H_0 introduced in Sec. <ref>. This implies that if a density F(α) ∈ H_0 is crossing symmetric, it must obey ∫[dα]/M_0(α;h_ϕ) ξ_n^0(α|h_ϕ) F(α) = 0 for n = 1,3,5,… This selection rule manifestly holds if F(α) is of the following form: F(α) = ∑_{n even} c_n ξ_n^0(α|h_ϕ), cf. Eq. eq:mixy. Of course, requiring that F(α) is normalizable imposes constraints on the growth of the coefficients c_n as n → ∞. Unfortunately, an alpha space density of the form eq:evendec cannot belong to an interacting CFT: it would have a CB decomposition with exchanged operators of dimensions 2h_ϕ + N and nothing else — in particular, requiring that F(α) ∈ H_0 rules out an identity operator contribution. These unphysical constraints on the spectrum of F are very similar to the issue encountered in Sec. <ref>. We also stress that eq:evendec generically corresponds to a non-unitary CB decomposition, in line with our remarks below Eq.
eq:xicb. Imposing unitarity leads to additional constraints on the coefficients c_n, and in future work it would certainly be interesting to examine these in detail. To better understand the role played by the ξ_n, we will briefly consider how these ideas apply to a mean-field correlator: F_MFT(z) = t_1 F_1(z) + t_2 F_2(z) with F_1(z) = z^2h_ϕ and F_2(z) = 1 + (z/(1-z))^2h_ϕ. Both pieces F_1,2 are crossing symmetric by themselves, but only their combination with t_2 ± t_1 ≥ 0 is unitary. This follows from the CB decompositions eq:powerCB and eq:pqCBdec1.[Here we are interested in the case p=q of Eq. eq:pqCBdec1, which reads (z/(1-z))^p = ∑_n=0^∞ (p)_n^2/(n! (2p-1+n)_n) k_{p+n}(z).] Separately, F_1 and F_2 contain contributions from an infinite tower of operators of dimension 2h_ϕ + n, but the contributions for odd (resp. even) n cancel out when t_1 = t_2 (resp. t_1 = -t_2). The combinations with t_1 = ±t_2 correspond to generalized free fields with bosonic (resp. fermionic) statistics. Can we decompose F_1 and F_2 à la Eq. eq:evendec? As for F_1, we see by inspection that F_1(z) = 1/Γ^2(2h_ϕ) ξ_0^0(z|h_ϕ), consistent with the fact that F_1 is crossing symmetric and non-unitary. In particular, this shows that F_1(α) ∈ H. Notice that this is only possible because F_1(z) has no unit operator contribution. Since F_2(z) does have a unit operator contribution, it follows that F_2(z) cannot be decomposed as in Eq. eq:evendec. Nevertheless, we compute (z/(1-z))^2h_ϕ = ∑_n=0^∞ f_n ξ_n^0(z|h_ϕ), f_n = 1/Γ^2(2h_ϕ) · (1+2n)/(n!(1-2h_ϕ+n)) · 1/(2h_ϕ-n)_{2n}. Strictly speaking this holds only for h_ϕ < 1/2; for generic h_ϕ, eq:decLeg makes sense only after analytic continuation. Notice that eq:decLeg contains terms with both even and odd n. This is consistent with the fact that [z/(1-z)]^2h_ϕ by itself has no definite crossing behaviour. Another interesting feature is that the f_n are not sign-definite; in fact, sgn(f_n) = (-1)^n provided that h_ϕ < 1/2. However, we know from Eq.
eq:pqCBdec1 that [z/(1-z)]^2h_ϕ has a CB decomposition with positive coefficients. We conclude that there is a conspiracy between the coefficients f_n from Eq. eq:decLeg and the A_m^(n) from eq:xicb that guarantees that the full CB decomposition is unitary.The above example shows how the idea to draw selection rules from the ξ_n runs into problems when naively applied to CFT correlators. Nonetheless, it may be true that a modified version of Eq. eq:selrule holds after carefully regulating the identity operator contribution. We leave this question for future work.§ DISCUSSION This paper has outlined how Sturm-Liouville theory provides a framework to study CFTs. Inspired by classic results <cit.>, we discussed the decomposition of a CFT four-point correlator in terms of a new basis of functions Ψ_α(z) and explained how the familiar conformal block decomposition can be obtained by analytic continuation in α. The alpha space decomposition allowed us to formulate crossing symmetry in terms of an eigenfunction problem for some integral kernels: in particular equation (<ref>) is a mathematically precise version of the abstract idea expressed by equation (<ref>) in the introduction. It features an explicitly known crossing symmetry kernel K_0(α,β|h_ϕ) whose properties we analyzed in some detail.In this paper we did not touch on the profound connection between the alpha space construction and the representation theory of the conformal group. Roughly speaking the dictionary is well-known: three-point functions map to Clebsch-Gordan kernels, conformal blocks are their square — as used in three-fold tensor products — and the crossing symmetry kernel is equal to a 6-j symbol for the conformal group. Moreover, the alpha space decomposition ought to correspond to tensor product decomposition into a direct integral over the principal unitary series of representations. 
We can however only make all these relations precise if we have a detailed knowledge of the groups, the representations under consideration, and the Hilbert space of functions on which they act.[In this context it is important to note that the representations are only unitary in Lorentzian signature. In that case the conformal group is actually the universal cover of SL(2,R) <cit.>, which has a richer class of inequivalent unitary representations <cit.> (see also <cit.> for a detailed discussion of the 4d case).] For the case at hand the question appears to be partially solved in <cit.>, which showed that the Wilson functions W_α(β;a,b,c,d) indeed appear as 6-j symbols for representations of the 𝔰𝔩(2,R) conformal algebra. Surprisingly, this connection works provided three of the four external dimensions transform in the discrete unitary series, in contrast with the older discussion of <cit.>, which is based entirely on the principal unitary series.[This is related to our basis functions being different from the usual shadow-symmetric blocks of <cit.>, which are in fact the correct squared Clebsch-Gordan coefficients for three unitary principal series.] It would be interesting to build on the results of <cit.> to explicitly connect all the dots between alpha space, one-dimensional unitary CFTs and representation theory. We hope to return to this problem in the near future. It is of clear interest to generalize our analysis to d ≥ 2 dimensions. This requires solving the Sturm-Liouville problem for the d-dimensional Casimir <cit.> on the square (0,1) × (0,1), or alternatively one could relate this kernel to a suitable set of 6-j symbols of the universal cover of SO(d,2). The higher-d alpha space picture will necessarily be more complicated, because both external and exchanged operators in higher-d CFTs can carry a nontrivial Lorentz spin. An obvious generalization pertains to superconformal field theories in various d <cit.>.
Sturm-Liouville theory should also apply beyond four-point correlators in CFTs on R^d; for instance, one can consider its application to CFTs in the presence of boundaries or defects. Most of these problems are rather formal and group-theoretical in nature. In the framework of the conformal bootstrap, it is more exciting to investigate whether alpha space crossing equations can be leveraged to constrain CFT data, or — more ambitiously — to solve bootstrap equations analytically.[See <cit.> for a connection between the conformal Casimir and integrability, which may be helpful in this context.] In Sec. <ref> we discussed some tentative ideas in this direction. Together with recent developments in the realm of Mellin space and the lightcone bootstrap, we are optimistic that alpha space can become part of the analytic bootstrap toolkit.

§ ACKNOWLEDGMENTS

This work originated from discussions with Leonardo Rastelli and Pedro Liendo in Stony Brook in 2011, and we would like to thank them for their valuable contributions in these early stages. Moreover, we gratefully acknowledge discussions with the participants of the `Back to the Bootstrap II' meeting in 2012, where an initial version of this work was first presented. We would like to thank Christopher Beem, Jyoti Bhattacharya, Liam Fitzpatrick, Simon Caron-Huot, Abhijit Gadde, Leszek Hadasz, Christoph Keller, Zohar Komargodski, Hugh Osborn, Slava Rychkov, Volker Schomerus, David Simmons-Duffin and Sasha Zhiboedov for more recent discussions and/or comments. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. This work was additionally supported by a grant from the Simons Foundation (#488659).

§ COMPUTING THE INNER PRODUCT ⟨Ψ_α, Ψ_β⟩

In this section, we will prove Eq.
eq:testInt by computing the inner product Ψ_,Ψ_, as defined in Eq. eq:innerProd. Concretely, we must perform the following integral:Ψ_,Ψ_ = ∫_0^1 d z/z^2 Ψ_(z) Ψ_(̱z)where we used that Ψ_(z) = Ψ_(z) for imaginary . As a first step, we write Ψ_(z) and Ψ_(̱z) using a Mellin-Barnes representation: Ψ_(z) = 1/Γ(±)∫[d s] Γ(-s) Γ(+s±)/Γ(1+s)(1-z/z)^s .Naively, the z-integral eq:inttodo is logarithmically divergent, the divergence coming from the region near z = 0. To resolve this divergence, we regulate Ψ_(̱z) by writing it as follows: Ψ_(̱z)→ z^ _2F_1(+,̱ -  1+;-1-z/z) =z^ Γ(1+)/Γ(±)̱∫[d t] Γ(-t) Γ(+t±)̱/Γ(1++t)(1-z/z)^t for > 0. This behaves as O(z^1/2+) at small z. Evidently, in the limit → 0, the above function reduces to Ψ_(̱z).At this point, the inner product Eq. eq:inttodo is given by triple integral, schematically Ψ_,Ψ_ = ∫_0^1 d z ∫[d s] ∫[d t] ( … ) .Since we have regulated the integrand, this integral converges and we can exchange the order of the different integrals. We do the z-integral first, which is a simple beta function integral. The result is… = Γ(1+)/Γ(±)Γ(±)̱∫[d s]Γ(-s) Γ(± s + )/Γ(1+s) × ∫[d t]Γ(-t) Γ(+t±)̱/Γ(1++t)Γ(1+s+t)Γ(-1-s-t+)/Γ() .We now do the t-integral, using the second Barnes lemma. This yields Ψ_,Ψ_ = lim_→ 0 Γ(1+)/Γ(±)Γ( +±)̱∫[d s] Γ(-s) Γ(± s + )Γ(- -s + ±)̱/Γ(-s+) .At this stage we can take the limit → 0 everywhere, except in the two factors Γ(- -s + ±)̱:… = 1/Γ(±)Γ(±)̱∫[d s]Γ( + s ±)Γ(- - s + ±)̱ .This integral can be computed using the first Barnes lemma, yielding Ψ_, Ψ_ = 1/Γ(±)Γ(±)̱ lim_→ 0Z_(,)̱ , Z_(,)̱ =1/Γ(2)Γ(++̱)Γ(-+̱)Γ(-++̱)Γ(--+̱) . To conclude, we need to analyze the limit → 0 of Z_(,)̱, which we claim is the sum of two Dirac delta functions:lim_→ 0 ∫[d ]̱Z_(,)̱ f()̱ = Γ(± 2) [ f()+f(-) ] ,where f() is a test function.Notice that Eq. eq:toProveZ is sufficient to establish Eq. eq:testInt, after remarking that 2Γ(± 2)/Γ(±)^2 = N() . The proof of eq:toProveZ goes as follows. 
We start by noticing that lim_{ε→0} Z_ε(α,β) vanishes unless β = ±α ± n for some integer n. If n ≠ 0, the limit ε → 0 is finite, hence such points do not contribute to the integral in Eq. eq:toProveZ. Hence it suffices to consider the cases β = α and β = -α. For concreteness, let's consider β = α, in which case we can approximate Z_ε(α,β) by Z_ε(α,β) → Γ(±2α) ω_ε(α-β), ω_ε(α) = Γ(ε±α)/Γ(2ε). It is straightforward to see that ω_ε(α) behaves as a delta function along the imaginary axis, i.e. lim_{ε→0} ∫[dα] ω_ε(α) f(α) = f(0). This follows from the fact that ω_ε(α) is peaked around α = 0 with width ε (taking α to be imaginary), together with the fact that ∫[dα] ω_ε(α) = 1/4^ε → 1. The same argument holds for the region where β = -α. This allows us to conclude.
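Two ingredients of this appendix, Barnes' first lemma and the normalization ∫[dα] ω_ε(α) = 1/4^ε, can be checked by direct numerical integration. A sketch using mpmath, with [dα] = dα/(2πi) and the contour running along the imaginary axis; the parameter values below are arbitrary:

```python
import mpmath as mp

# Barnes' first lemma: with s = i*t on the imaginary axis,
#   (1/(2*pi*i)) * Int Gamma(a+s)Gamma(b+s)Gamma(c-s)Gamma(d-s) ds
#       = Gamma(a+c)Gamma(a+d)Gamma(b+c)Gamma(b+d)/Gamma(a+b+c+d)
a, b, c, d = 0.3, 0.5, 0.4, 0.6   # arbitrary positive values
lhs = mp.quad(lambda t: mp.gamma(a + 1j*t) * mp.gamma(b + 1j*t)
                        * mp.gamma(c - 1j*t) * mp.gamma(d - 1j*t),
              [-mp.inf, mp.inf]) / (2 * mp.pi)
rhs = (mp.gamma(a + c) * mp.gamma(a + d) * mp.gamma(b + c) * mp.gamma(b + d)
       / mp.gamma(a + b + c + d))
assert abs(lhs - rhs) < 1e-8

# Normalization of omega_eps: with alpha = i*t,
#   Int [d alpha] omega_eps(alpha) = 4^(-eps),
#   omega_eps(alpha) = Gamma(eps + alpha) Gamma(eps - alpha) / Gamma(2*eps)
eps = 0.3
integral = mp.quad(lambda t: abs(mp.gamma(eps + 1j*t))**2,
                   [-mp.inf, mp.inf]) / (2 * mp.pi * mp.gamma(2*eps))
assert abs(integral - mp.mpf(4)**(-eps)) < 1e-8
```

Both integrands decay like e^{-2π|t|} at large |t|, so the infinite-range quadrature converges quickly.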
http://arxiv.org/abs/1702.08471v2
{ "authors": [ "Matthijs Hogervorst", "Balt C. van Rees" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170227190457", "title": "Crossing Symmetry in Alpha Space" }
http://arxiv.org/abs/1702.07954v2
{ "authors": [ "Xinyi Chen-Lin", "Daniel Medina-Rincon", "Konstantin Zarembo" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170225223444", "title": "Quantum String Test of Nonconformal Holography" }
Recursions associated to trapezoid, symmetric and rotation symmetric functions over Galois fields

Department of Mathematics, University of Puerto Rico, San Juan, PR 00931 franciscastr@gmail.com
Department of Mathematics, University of Exeter, Exeter, EX4 4QF, UK r.j.chapman@exeter.ac.uk
Department of Mathematics, University of Puerto Rico, San Juan, PR 00931 luis.medina17@upr.edu
Department of Mathematics, University of Puerto Rico, San Juan, PR 00931 leonid.sepulveda1@upr.edu

Rotation symmetric Boolean functions are invariant under circular translation of indices. These functions have very rich cryptographic properties and have been used in different cryptosystems. Recently, Thomas Cusick proved that exponential sums of rotation symmetric Boolean functions satisfy homogeneous linear recurrences with integer coefficients. In this work, a generalization of this result is proved over any Galois field. That is, exponential sums over Galois fields of rotation symmetric polynomials satisfy linear recurrences with integer coefficients. In the particular case of 𝔽_2, an elementary method is used to obtain explicit recurrences for exponential sums of some of these functions. The concept of trapezoid Boolean function is also introduced and it is shown that the linear recurrences that exponential sums of trapezoid Boolean functions satisfy are the same as the ones satisfied by exponential sums of the corresponding rotation symmetric Boolean functions. Finally, it is proved that exponential sums of trapezoid and symmetric polynomials also satisfy linear recurrences with integer coefficients over any Galois field 𝔽_q. Moreover, the Discrete Fourier Transform matrix and some Complex Hadamard matrices appear as examples in some of our explicit formulas of these recurrences.

2010 Mathematics Subject Classification: 05E05, 11T23, 11B50

L. Brehsner Sepúlveda
December 30, 2023
=========================

§ INTRODUCTION

A Boolean function is a function from the vector space 𝔽_2^n to 𝔽_2. Boolean functions are part of a beautiful branch of combinatorics with applications to many scientific areas. Some particular examples are the areas of theory of error-correcting codes and cryptography. Efficient cryptographic implementation of Boolean functions with many variables is a challenging problem due to memory restrictions of current technology. Because of this, symmetric Boolean functions are good candidates for efficient implementations. However, symmetry is a too special property and may imply that these implementations are vulnerable to attacks. In <cit.>, Pieprzyk and Qu introduced rotation symmetric Boolean functions. A rotation symmetric Boolean function in n variables is a function which is invariant under the action of the cyclic group C_n on the set 𝔽_2^n. For example, let X_i ∈ 𝔽_2 for 1 ≤ i ≤ n. Define, for 1 ≤ k ≤ n, the shift function E_n^k(X_i) = X_i+k if i+k ≤ n, and E_n^k(X_i) = X_i+k-n if i+k > n. Extend this definition to 𝔽_2^n by defining E_n^k(X_1,X_2,⋯, X_n) = (E_n^k(X_1),E_n^k(X_2),⋯, E_n^k(X_n)). The shift function E_n^k can also be extended to monomials via E_n^k(X_i_1X_i_2⋯ X_i_t) = E_n^k(X_i_1)E_n^k(X_i_2)⋯ E_n^k(X_i_t). A Boolean function F(X) in n variables is a rotation symmetric Boolean function if and only if for any (X_1,⋯, X_n) ∈ 𝔽_2^n, F(E_n^k(X_1,⋯, X_n)) = F(X_1,⋯, X_n) for every 1 ≤ k ≤ n.
Pieprzyk and Qu showed that these functions are useful in the design of fast hashing algorithms with strong cryptographic properties. This work sparked interest in these functions and today their study is an active area of research <cit.>. Every Boolean function in n variables can be identified with a multi-variable Boolean polynomial. This polynomial is known as the algebraic normal form (ANF for short) of the Boolean function. The degree of a Boolean function F(X) is the degree of its ANF. The ANF of a rotation symmetric Boolean function is very well-structured. For example, suppose we have a rotation symmetric Boolean function in 5 variables. Suppose that X_1X_2X_3 is part of the ANF of the function. Then, the terms E_5^1(X_1X_2X_3) = X_2X_3X_4, E_5^2(X_1X_2X_3) = X_3X_4X_5, E_5^3(X_1X_2X_3) = X_4X_5X_1 and E_5^4(X_1X_2X_3) = X_5X_1X_2 are also part of its ANF. Similarly, suppose that X_1X_3 is also a term of the ANF. Then, X_2X_4, X_3X_5, X_4X_1, X_5X_2 are also part of the ANF. An example of a rotation symmetric Boolean function with this property is given by R(X) = X_1X_2X_3+X_2X_3X_4+X_3X_4X_5+X_4X_5X_1+X_5X_1X_2+X_1X_3+X_2X_4+X_3X_5+X_4X_1+X_5X_2. Therefore, once a monomial X_i_1⋯ X_i_t is part of the ANF of a rotation symmetric Boolean function, so is E_n^k(X_i_1⋯ X_i_t) for all 1 ≤ k ≤ n. This implies that the information encoded in the ANF of a rotation symmetric Boolean function can be obtained with minimal information. Define the set RSet_n(X_i_1⋯ X_i_t) = {E_n^k(X_i_1⋯ X_i_t) | 1 ≤ k ≤ n}. For example, RSet_5(X_1X_2X_3) = {X_2X_3X_4, X_3X_4X_5, X_4X_5X_1, X_5X_1X_2, X_1X_2X_3}. Select as a representative for the set RSet_n(X_i_1⋯ X_i_t) the first element in the lexicographic order. For example, the representative for {X_2X_3X_4, X_3X_4X_5, X_4X_5X_1, X_5X_1X_2, X_1X_2X_3} is X_1X_2X_3. Observe that if the rotation symmetric Boolean function is not constant, then X_1 always appears in the lexicographically first element of RSet_n(X_i_1⋯ X_i_t).
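The orbit construction just described is simple enough to script. A minimal sketch, where a monomial is represented as a tuple of variable indices and the helper names shift and rset are our own:

```python
def shift(monomial, k, n):
    # E_n^k applied to a monomial given as a tuple of variable indices in {1,...,n}
    return tuple(sorted((i + k - 1) % n + 1 for i in monomial))

def rset(monomial, n):
    # RSet_n: the orbit of the monomial under all cyclic shifts E_n^1, ..., E_n^n
    return {shift(monomial, k, n) for k in range(1, n + 1)}

orbit = rset((1, 2, 3), 5)
# matches RSet_5(X_1 X_2 X_3) from the text
assert orbit == {(1, 2, 3), (2, 3, 4), (3, 4, 5), (1, 4, 5), (1, 2, 5)}
# the lexicographically first element is the representative of the class
assert min(orbit) == (1, 2, 3)
```

As expected, the representative always contains the index 1 when the function is not constant.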
The short algebraic normal form (or SANF) of a rotation symmetric Boolean function is a function of the form a_0 + a_1X_1 + ∑ a_1,jX_1X_j + ⋯ + a_1,2,⋯,nX_1X_2⋯ X_n, where a_0, a_1, a_1,j, ⋯, a_1,2,⋯,n ∈ 𝔽_2 and the existence of the term X_1X_i_2⋯ X_i_t implies the existence of every term in RSet_n(X_1X_i_2⋯ X_i_t) in the ANF. For example, the SANF of the rotation symmetric Boolean function (<ref>) is given by X_1X_3 + X_1X_2X_3. Let 1 < j_1 < ⋯ < j_s be integers. A rotation symmetric Boolean function of the form R_j_1,⋯,j_s(n) = X_1X_j_1⋯ X_j_s + X_2X_j_1+1⋯ X_j_s+1 + ⋯ + X_nX_j_1-1⋯ X_j_s-1, where the indices are taken modulo n and the complete system of residues is {1,2,⋯, n}, is called a monomial rotation symmetric Boolean function. For example, the rotation symmetric Boolean function (<ref>) is given by R(X) = R_2,3(5) + R_3(5). Sometimes the notation (1,j_1,⋯, j_s)_n is used to represent the monomial rotation Boolean function (<ref>), see <cit.>. In some applications related to cryptography it is important for Boolean functions to be balanced. A balanced Boolean function is one for which the number of zeros and the number of ones are equal in its truth table. Balancedness of Boolean functions can be studied from the point of view of exponential sums. The exponential sum of an n-variable Boolean function F(X) is defined as S(F) = ∑_{x ∈ 𝔽_2^n} (-1)^F(x). Observe that a Boolean function F(X) is balanced if and only if S(F) = 0. This gives importance to the study of exponential sums. This point of view is also a very active area of research.
For some examples, please refer to <cit.>. Let F(X) be a Boolean function. List the elements of 𝔽_2^n in lexicographic order and label them as x_0=(0,0,⋯, 0), x_1=(0,0,⋯, 1) and so on. The vector (F(x_0),F(x_1),⋯, F(x_2^n-1)) is called the truth table of F. The Hamming weight of F, denoted by wt(F), is the number of 1's in the truth table of F. Observe that a Boolean function in n variables is balanced if and only if its Hamming weight is 2^n-1. The Hamming weight of a Boolean function F and its exponential sum are related by the equation wt(F) = (2^n - S(F))/2. The study of weights of rotation symmetric Boolean functions has received some attention lately <cit.>. In particular, it has been observed that weights of cubic rotation symmetric Boolean functions are linear recursive with constant coefficients <cit.>. Recently, Cusick <cit.> showed that weights of any rotation symmetric Boolean function satisfy linear recurrences with integer coefficients. Since the exponential sum and the weight function of a Boolean function are related by (<ref>), it is also true that exponential sums of rotation symmetric Boolean functions satisfy linear recurrences with integer coefficients. One of the most important results in this work is a generalization of Cusick's Theorem over any Galois field. To be specific, let q=p^r with p prime and r≥1. Exponential sums over 𝔽_q of monomial rotation symmetric polynomials (and linear combinations of them) satisfy homogeneous linear recurrences with integer coefficients. Remarkably, this can be proved by elementary means. Another important result included in this work is that exponential sums over 𝔽_q of elementary symmetric polynomials and linear combinations of them also satisfy linear recurrences with integer coefficients. Surprisingly, the Discrete Fourier Transform matrix, some Complex Hadamard matrices and the quadratic Gauss sum mod p appear in the study of the recurrences considered in this work. This article is divided as follows.
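The relation wt(F) = (2^n - S(F))/2 follows from S(F) = 2^n - 2·wt(F), and it is easy to sanity-check on random truth tables; a small sketch:

```python
import random

random.seed(1)
n = 6
for _ in range(20):
    # a random Boolean function in n variables, given by its truth table
    table = [random.randint(0, 1) for _ in range(2 ** n)]
    wt = sum(table)                        # Hamming weight
    S = sum((-1) ** v for v in table)      # exponential sum
    assert wt == (2 ** n - S) // 2
    # balanced  <=>  S(F) = 0  <=>  wt(F) = 2^(n-1)
    assert (S == 0) == (wt == 2 ** (n - 1))
```

Since S(F) counts zeros minus ones of the truth table, 2^n - S(F) is always even, so the division above is exact.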
The next section is an introduction to the elementary method used to obtain the recurrences. This introduction is done over 𝔽_2 in order to solidify the intuition. The reader interested in the generalization is invited to skip this section; however, the reader is encouraged to read the definition of trapezoid functions, as they are used throughout the article. In section <ref> linear recurrences with integer coefficients are obtained for exponential sums of trapezoid functions over Galois fields. Moreover, it is in this section where it is proved that exponential sums over 𝔽_q of monomial rotation symmetric polynomials and linear combinations of them satisfy linear recurrences with integer coefficients. The same technique is used in section <ref> to prove that exponential sums over 𝔽_q of elementary symmetric polynomials and linear combinations of them also satisfy linear recurrences with integer coefficients. Finally, in the last section, some conjectures about the initial conditions of some of the sequences considered in this work are presented.

§ LINEAR RECURRENCES OVER 𝔽_2

As mentioned in the introduction, Cusick <cit.> recently showed that exponential sums of rotation symmetric Boolean functions satisfy homogeneous linear recurrences with integer coefficients. This fact was suggested by some previous works on the subject. For example, in <cit.>, Cusick and Stănică provided a linear recursion for the sequence of weights for the monomial rotation function (1,2,3)_n. This recursion, however, was not homogeneous, but it could be transformed into a homogeneous one, see <cit.>.
Later, Cusick and Johns <cit.> provided recursions for weights of cubic rotation symmetric Boolean functions. In this section we use elementary machinery to provide explicit homogeneous linear recurrences with integer coefficients for exponential sums of some rotation symmetric Boolean functions. The idea is to show that exponential sums of rotation symmetric Boolean functions satisfy the same linear recurrences as exponential sums of trapezoid Boolean functions (see definition below). We prove this fact using elementary machinery and, at this early stage, without the use of linear algebra. In the next section we show that exponential sums of rotation symmetric functions over any Galois field satisfy linear recurrences. The reader interested in this generalization may skip this section, but not before reading the definition of trapezoid functions. Define the trapezoid Boolean function in n variables of degree k as τ_n,k = ∑_j=1^n-k+1 X_jX_j+1⋯ X_j+k-1. For example, τ_7,3 = X_1X_2X_3+X_2X_3X_4+X_3X_4X_5+X_4X_5X_6+X_5X_6X_7 and τ_6,4 = X_1X_2X_3X_4+X_2X_3X_4X_5+X_3X_4X_5X_6. The name trapezoid comes from counting the number of times each variable appears in the function τ_n,k. For example, consider τ_7,3. Observe that X_1 appears 1 time in τ_7,3, X_2 appears 2 times, X_3, X_4 and X_5 appear 3 times each, X_6 appears twice, and X_7 appears once. Plotting these values and connecting the dots produces the shape of an isosceles trapezoid. Figure <ref> is a graphical representation of this. The Boolean variable X_i is represented by i in the x-axis.
The y-axis corresponds to the number of times the variable appears in τ_7,3. The opposite is also true, that is, for every isosceles trapezoid that can be constructed by steps of length at most 1, one can construct a trapezoid Boolean function. It turns out that sequences of exponential sums of trapezoid Boolean functions of fixed degree satisfy homogeneous linear recurrences with integer coefficients. These linear recurrences are the same as the ones satisfied by sequences of exponential sums of (1,2,⋯,k)-rotation symmetric Boolean functions. Remarkably, this fact can be proved by elementary means by “playing" a simple game of turning ON and OFF some of the variables. Given a Boolean variable X_i, we say that it is turned OFF if X_i assumes the value 0 and turned ON if the variable assumes the value 1. In other words, each Boolean variable represents a “switch" with two options: 0 (OFF) and 1 (ON). We start the discussion with the recurrence for exponential sums of trapezoid Boolean functions. The sequence {S(τ_n,k)}_n=k^∞ satisfies a homogeneous linear recurrence with integer coefficients whose characteristic polynomial is given by p_k(X) = X^k - 2(X^k-2+X^k-3+⋯+X+1). For the sake of simplicity, we present, in detail, the proof for the cases k=3 and k=4. The general case becomes clear after that. Moreover, the complete proof of a generalization of this theorem over any Galois field is presented in section <ref>. Start with the case k=3. Observe that by turning X_n OFF and ON we get the identity S(τ_n,3) = S(τ_n-1,3)+S(τ_n-1,3+X_n-2X_n-1). Consider now S(τ_n-1,3+X_n-2X_n-1). Turn X_n-1 OFF and ON to get S(τ_n-1,3+X_n-2X_n-1) = S(τ_n-2,3)+S(τ_n-2,3+X_n-2+X_n-3X_n-2). Finally, turn X_n-2 OFF and ON to get S(τ_n-2,3+X_n-2+X_n-3X_n-2) = S(τ_n-3,3)-S(τ_n-3,3+X_n-3+X_n-4X_n-3). The last equation is equivalent (after relabeling) to S(τ_n,3) = S(τ_n+1,3+X_n+1+X_nX_n+1)+S(τ_n,3+X_n+X_n-1X_n).
Observe that equations (<ref>) and (<ref>) can be combined to obtainS(τ_n,3)=S(τ_n-1,3)+S(τ_n-2,3)+ S(τ_n-2,3+X_n-2+X_n-3X_n-2).Let a_n,3 =S(τ_n,3+X_n+X_n-1X_n).Note that (<ref>) implies that S(τ_n,3)=a_n+1,3+a_n,3.Therefore, (<ref>) can be re-written as(a_n+1,3+a_n,3) = (a_n,3+a_n-1,3)+(a_n-1,3+a_n-2,3)+a_n-2,3,which is equivalent to a_n+1,3 = 2a_n-1,3+2a_n-2,3.This implies that {a_n,3} satisfies the linear recurrence whose characteristic polynomial is given by p_3(X).Since S(τ_n,3)=a_n+1,3+a_n,3, then {S(τ_n,3)} also satisfies such recurrence and the result holds for k=3.Consider now the case when k=4.As it was done in the case when k=3,turning OFF and ON several variables leads to S(τ_n,4) = S(τ_n-1,4)+S(τ_n-2,4)+S(τ_n-3,4) +S(τ_n-3,4+X_n-3+X_n-4X_n-3+X_n-5X_n-4X_n-3)andS(τ_n,4) =S(τ_n+1,4+X_n+1+X_nX_n+1+X_n-1X_nX_n+1) +S(τ_n,4+X_n+X_n-1X_n+X_n-2X_n-1X_n).Now let a_n,4=S(τ_n,4+X_n+X_n-1X_n+X_n-2X_n-1X_n) and observe that (<ref>) can be re-written as(a_n+1,4+a_n,4)=(a_n,4+a_n-1,4)+(a_n-1,4+a_n-2,4)+(a_n-2,4+a_n-3,4)+a_n-3,4,which is equivalent toa_n+1,4 = 2a_n-1,4+2a_n-2,4+2a_n-3,4.Therefore, {a_n,4} satisfies the linear recurrence whose characteristic polynomial is given by p_4(X).Since S(τ_n,4)=a_n+1,4+a_n,4, then {S(τ_n,4)} also satisfies suchrecurrence and the result also holds for k=4.In general, S(τ_n,k) can be expressed asS(τ_n,k) =∑_i=1^k-1 S(τ_n-i,k)+S(τ_n-k+1,k+∑_j=0^k-2∏_i=0^j X_n-k+1-i)and asS(τ_n,k) =S(τ_n+1,k+∑_j=0^k-2∏_i=0^j X_n+1-i)+S(τ_n,k+∑_j=0^k-2∏_i=0^j X_n-i).Combine these equations and proceed as before to obtain the result.This concludes the proof. It turns out that the sequence of exponential sums of (1,2,⋯,k)-rotation symmetric Boolean functions, that is, of R_2,3,⋯, k(n), also satisfies the linear recurrence whose characteristicpolynomial is given p_k(X). 
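The k=3 case is easy to check numerically. The sketch below (the helper name `trapezoid_exp_sum` is ours, not from the paper) brute-forces S(τ_n,3) over 𝔽_2 and verifies the recurrence S(τ_n,3) = 2S(τ_n-2,3) + 2S(τ_n-3,3), whose characteristic polynomial is p_3(X) = X^3 - 2X - 2:

```python
from itertools import product

def trapezoid_exp_sum(n, k):
    """Brute-force exponential sum S(tau_{n,k}) = sum over {0,1}^n of (-1)^{tau_{n,k}(x)}."""
    total = 0
    for x in product((0, 1), repeat=n):
        # tau_{n,k}(x) = sum_{j=1}^{n-k+1} x_j x_{j+1} ... x_{j+k-1}  (mod 2)
        val = sum(all(x[j:j + k]) for j in range(n - k + 1)) % 2
        total += -1 if val else 1
    return total

vals = {n: trapezoid_exp_sum(n, 3) for n in range(3, 12)}
# p_3(X) = X^3 - 2X - 2  <=>  S(tau_{n,3}) = 2 S(tau_{n-2,3}) + 2 S(tau_{n-3,3})
ok = all(vals[n] == 2 * vals[n - 2] + 2 * vals[n - 3] for n in range(6, 12))
```

For instance, S(τ_3,3) = 6 and S(τ_4,3) = 12, consistent with the recurrence started from the values 1, 2, 4.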
This is a well-known result for the case k=3 (<cit.>), but, to the knowledge of the authors, the closed formula for the general case is new. Before proving that {S(R_2,3,⋯, k(n))} satisfies the linear recurrence with characteristic polynomial p_k(X), we show an auxiliary result which can be proved using the same arguments as in the proof of Theorem <ref>. Let τ_n,k be the trapezoid Boolean function of degree k in n variables. Suppose that F( X) is a Boolean polynomial in the first j variables with j<k. Then, the sequences {S(τ_n,k+F( X))} and {S(τ_n,k+F( X)+X_n+X_nX_n-1+X_nX_n-1X_n-2+⋯+X_nX_n-1⋯ X_n-k+2)} satisfy the linear recurrence whose characteristic polynomial is given by p_k(X). The proof of this result follows the same argument as the proof of Theorem <ref>. Theorem <ref> and Lemma <ref> are all that is needed to show that the sequence of exponential sums of (1,2,⋯,k)-rotation symmetric Boolean functions satisfies the linear recurrence with characteristic polynomial p_k(X). The sequence {S(R_2,3,⋯, k(n))} satisfies the homogeneous linear recurrence whose characteristic polynomial is given by p_k(X). This result can also be proved by turning OFF and ON several variables. As before, we provide the proof for the case k=4. The general case follows the same argument. To start the argument, turn OFF and ON the variable X_n to get S(R_2,3,4(n)) = S(τ_n-1,4)+S(τ_n-1,4+X_1X_2X_3+X_1X_2X_n-1+X_1X_n-2X_n-1). Consider the second term of the right hand side of this equation. Turn X_n-1 OFF and ON to get S(τ_n-1,4 +X_1X_2X_3+X_1X_2X_n-1+X_1X_n-2X_n-1)=S(τ_n-2,4+X_1X_2X_3) +S(τ_n-2,4+X_1X_2+X_1X_2X_3+X_1X_n-2+X_n-3X_n-2+X_n-4X_n-3X_n-2). Again, consider the second term of the right hand side of equation (<ref>).
Turn X_n-2 OFF and ON to getS(τ_n-2,4 +X_1X_2+X_1X_2X_3+X_1X_n-2+X_n-3X_n-2+X_n-4X_n-3X_n-2)=S(τ_n-3,4+X_1X_2+X_1X_2X_3) +S(τ_n-3,4+X_1+X_1X_2+X_1X_2X_3+X_n-3+X_n-4X_n-3+X_n-5X_n-4X_n-3).Equations (<ref>), (<ref>) and (<ref>) lead to the equation S(R_2,3,4(n))= S(τ_n-1,4)+S(τ_n-2,4+X_1X_2X_3)+S(τ_n-3,4+X_1X_2+X_1X_2X_3) +S(τ_n-3,4+X_1+X_1X_2+X_1X_2X_3+X_n-3+X_n-4X_n-3+X_n-5X_n-4X_n-3).Theorem <ref> and Lemma <ref> imply that {S(τ_n-1,4)},{S(τ_n-2,4+X_1X_2X_3)}, {S(τ_n-3,4+X_1X_2+X_1X_2X_3)} and{S(τ_n-3,4+X_1+X_1X_2+X_1X_2X_3+X_n-3+X_n-4X_n-3+X_n-5X_n-4X_n-3)}satisfy the linear recurrence whose characteristic polynomial p_4(X).Since {S(R_2,3,4(n))} is a linear combination of them, then the result holds when k=4.In general, S(R_2,3,⋯, k(n)) can be expressed asS(R_2,3,⋯, k(n)) =S(τ_n-1,k)+∑_m=0^k-3 S(τ_n-2-m,k+∑_j=0^m ∏_i=1^k-1-j X_i) +S(τ_n-k+1,k+∑_j=1^k-1(∏_i=1^j X_i+ ∏_i=0^j-1 X_n-k+1-i))Invoke Theorem <ref> and Lemma <ref> to get the result.This concludes the proof. The same technique can be applied to find linear recurrences of exponential sums other rotations.Recall that R_j_1,⋯, j_s(n)=X_1X_j_1⋯ X_j_s+X_2X_j_1+1⋯ X_j_s+1+⋯+X_nX_j_1-1⋯ X_j_s-1,where the indices are taken modulo n and the complete system of residues is {1,2,⋯, n}.We define the equivalent of the trapezoid Boolean function for R_j_1,⋯, j_s(n) asT_j_1,⋯,j_s(n)=X_1X_j_1⋯ X_j_s+X_2X_j_1+1⋯ X_j_s+1+⋯+X_n+1-j_sX_j_1+n-j_s⋯ X_j_s-1+n-j_sX_n.For instance, under this notation one hasτ_n,k=T_2,3,⋯, k(n).It turns out that for k≥ 4, the sequences {S(R_2,3,⋯,k-2,k(n))} and {S(R_2,3,⋯,k-2,k+1(n))} both satisfy the linear recurrence whose characteristic polynomial isq_k(X)=X^k+1-2X^k-1-2X^k-2-⋯-2X^3-4.As just mentioned, this can be proved by playing a game of turning ON and OFF some variables.However, the process becomes somewhat tedious at a very early stage.For example, recall that Theorem <ref> is an auxiliary result that was used to show that {S(R_2,3,⋯, k(n))} satisfies the linear recurrence with 
characteristic polynomialp_k(X).Let us show the equivalent of Theorem <ref> for {S(R_2,4(n))}.The idea is to show the reader how tedious the process can get.Recall that the equivalent of the trapezoid Boolean function for this problem is T_2,4(n)=X_1X_2X_4+X_2X_3X_5+⋯+X_n-3X_n-2X_n.Start with the equationS(T_2,4(n)+X_n-1X_n) = S(T_2,4(n+1)+X_n-1X_n+X_n+1+X_nX_n+1)+S(T_2,4(n)+X_n-2X_n-1+X_n+X_n-1X_n),which is a consequence of turning OFF and ON the variable X_n+1. On the other hand, by turning X_n OFF and ON one getsS(T_2,4(n)+X_n-1X_n) = S(T_2,4(n-1)+ S(T_2,4(n-1)+X_n-1+X_n-2X_n-3).This gave us two equations for S(T_2,4(n)+X_n-1X_n).Consider now the right hand side of (<ref>).Turn X_n-1 OFF and ON to get S(T_2,4(n-1)+X_n-1+X_n-2X_n-3) =S(T_2,4(n-2)+X_n-2X_n-3)- S(T_2,4(n-2)+X_n-4X_n-3+X_n-3X_n-2)Now turn X_n-2 OFF and ON to get the equationS(T_2,4(n-2)+X_n-4X_n-3+X_n-3X_n-2) = S(T_2,4(n-3)+X_n-4X_n-3)+ S(T_2,4(n-3)+X_n-5X_n-4+X_n-3+X_n-4X_n-3).Combine equations (<ref>), (<ref>), (<ref>), and (<ref>) to getS(T_2,4(n) = S(T_2,4(n+1)+X_nX_n+1)-S(T_2,4(n-1)+X_n-2X_n-1)+S(T_2,4(n-2)+X_n-3X_n-2) +S(T_2,4(n-2)+X_n-4X_n-3+X_n-2+X_n-3X_n-2). 
Now let a_n=S(T_2,4(n)+X_n-2X_n-1+X_n+X_n-1X_n). Observe that equation (<ref>) can be re-written as S(T_2,4(n)+X_n-1X_n)=a_n+1+a_n. This and equation (<ref>) imply S(T_2,4(n)) = (a_n+2+a_n+1)-(a_n+a_n-1)+(a_n-1+a_n-2)+a_n-2= a_n+2+a_n+1-a_n+2a_n-2. On the other hand, by switching OFF and ON several variables one obtains S(T_2,4(n)) =S(T_2,4(n-1))+S(T_2,4(n-2)+X_n-3X_n-2)+S(T_2,4(n-3)+X_n-4X_n-3) + S(T_2,4(n-3)+X_n-5X_n-4+X_n-3+X_n-4X_n-3). Writing this last equation in terms of a_n one gets (a_n+2+a_n+1-a_n+2a_n-2) = (a_n+1+a_n-a_n-1+2a_n-3) +(a_n-1+a_n-2)+(a_n-2+a_n-3)+a_n-3, which simplifies to a_n+2=2a_n+4a_n-3. The characteristic polynomial for this recurrence is q_4(X). Other examples on which this elementary method can be used to find explicit formulas for linear recurrences include the sequence {S(R_2,3,⋯,k(n)+R_2,3,⋯,k-1(n))}, which satisfies the linear recurrence with characteristic polynomial X^k-2X^k-1+2, the sequence {S(R_2,3,⋯, k-1,k(n)+R_2,3,⋯,k-2,k(n))}, which satisfies the linear recurrence with characteristic polynomial X^k-2X^k-1+2X-2, and the sequence {S(R_2, 3,⋯, k-2,k(n)+ R_2, 3, ⋯, k-1(n)+R_2, 3, ⋯,k(n))}, which satisfies the linear recurrence with characteristic polynomial X^k-2(X^k-2+X^k-3+⋯+X^2+1). However, the process is too tedious to be done by hand. Automation seems to be the way to go. The reader is invited to read Cusick's work <cit.>, which includes Mathematica code that calculates linear recurrences for the weights of a given rotation.
§ LINEAR RECURRENCES OVER 𝔽_Q
In this section we show that exponential sums of rotation functions over Galois fields satisfy linear recurrences. This is a generalization of Cusick's result. Consider the Galois field 𝔽_q = {0,α_1,⋯,α_q-1} where q=p^r with p prime and r≥ 1. Recall that the exponential sum of a function F:𝔽_q^n →𝔽_q is given by S_𝔽_q(F)=∑_ x∈𝔽^n_q e^2π i/pTr_𝔽_q/𝔽_p(F( x)), where Tr_𝔽_q/𝔽_p represents the field trace function from 𝔽_q to 𝔽_p.
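For a prime field (r = 1, where the trace is the identity) this definition can be implemented directly; the helper name `exp_sum_fp` below is ours. As a one-variable illustration, S_𝔽_p(X^2) is the classical quadratic Gauss sum g(1;p), equal to √p when p ≡ 1 (mod 4) and to i√p when p ≡ 3 (mod 4):

```python
from itertools import product
from cmath import exp, pi

def exp_sum_fp(f, n, p):
    """S_{F_p}(f): the exponential sum above specialised to q = p (r = 1),
    where the field trace Tr_{F_p/F_p} is the identity map."""
    zeta = exp(2j * pi / p)  # primitive p-th root of unity
    return sum(zeta ** (f(x) % p) for x in product(range(p), repeat=n))

# S_{F_p}(X^2) is the quadratic Gauss sum g(1;p)
g5 = exp_sum_fp(lambda x: x[0] ** 2, 1, 5)  # p = 5 = 1 mod 4, expect sqrt(5)
g3 = exp_sum_fp(lambda x: x[0] ** 2, 1, 3)  # p = 3 = 3 mod 4, expect i*sqrt(3)
```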
The same technique used for exponential sums of Boolean functions can be used in general.However, instead of having two options for the “switch", we now have q of them.Let X be a variable which takes values on 𝔽_q.As before, we say that the variable X can be turned OFF or ON, however, this time the term “turn OFF" means that X assumes the value 0, while the term “turn ON" means that X assumes all values in 𝔽_q that are different from zero.Think of this situation as a light switch on which you have the option to turnOFF the light and the option to turn it ON to one of q-1 colors. We consider first sequences exponential sums of trapezoid functions.As in the case over 𝔽_2, they satisfy linear recurrences with integer coefficients over any Galois field𝔽_q. We start with the following lemma, which is interesting in its own right.Let k, n and j be integers with k>2, 1≤ j<k and n≥ k. Then,S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^jβ_s∏_l=0^k-s-1X_n-l)=S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^j∏_l=0^k-s-1X_n-l)for any choice of β_s∈𝔽_q^×.The proof is by induction on n.Suppose first that n=k.Observe thatT_2,3,⋯,k(k)+∑_s=1^jβ_s∏_l=0^k-s-1X_k-l =X_1X_2⋯ X_k+β_j X_j+1X_j+2⋯ X_k+β_j-1 X_jX_j+1⋯ X_k +⋯+ β_2 X_3 X_4⋯ X_k+β_1 X_2 X_3⋯ X_k.Consider the right hand side of (<ref>).If 1≤ j≤ k-2, then make the changes of variablesX_t=Y_t, for j+2≤ t ≤ kX_j+1 = β_j^-1Y_j+1 X_t = β_t-1^-1β_t Y_t, for 2≤ t ≤ jX_1 = β_1 Y_1.On the other hand, if j=k-1, then make the change of variablesX_k = β_k-1^-1Y_k X_t = β_t-1^-1β_t Y_t, for 2≤ t ≤ k-1X_1 = β_1 Y_1.This transforms (<ref>) into Y_1Y_2⋯ Y_k +∑_s=1^j∏_l=0^k-s-1Y_k-l.Therefore, S_𝔽_q(T_2,3,⋯,k(k)+∑_s=1^jβ_s∏_l=0^k-s-1X_k-l)=S_𝔽_q(T_2,3,⋯,k(k)+∑_s=1^j∏_l=0^k-s-1X_k-l).This concludes the base case.Suppose now that for some n≥ k we have S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^jβ_s∏_l=0^k-s-1X_n-l)=S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^j∏_l=0^k-s-1X_n-l).Consider S_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^jβ_s∏_l=0^k-s-1X_n+1-l).Suppose first that 1≤ j ≤ k-2.Letting X_n+1 run over every element of the field leads to 
S_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^jβ_s∏_l=0^k-s-1X_n+1-l) =S_𝔽_q(T_2,3,⋯,k(n)) +∑_α∈𝔽_q^×S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^j+1γ_s(α)∏_l=0^k-s-1X_n-l),where γ_1(α)=α and γ_s(α) = αβ_s-1.By inductionS_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^j+1γ_s(α)∏_l=0^k-s-1X_n-l)=S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^j+1∏_l=0^k-s-1X_n-l).Therefore,S_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^jβ_s∏_l=0^k-s-1X_n+1-l) =S_𝔽_q(T_2,3,⋯,k(n)) +∑_α∈𝔽_q^×S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^j+1∏_l=0^k-s-1X_n-l).However, (<ref>) does not depend on the choice of the β_t's. It follows thatS_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^jβ_s∏_l=0^k-s-1X_n+1-l) =S_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^j∏_l=0^k-s-1X_n+1-l)is true for 1≤ j≤ k-2.Consider now the case j=k-1.Again,letting X_n+1 run over every element of the field leads to S_𝔽_q(T_2,3,⋯,k(n+1) +∑_s=1^k-1β_s∏_l=0^k-s-1X_n+1-l)= S_𝔽_q(T_2,3,⋯,k(n))+∑_α∈𝔽_q^× e^2π i/pTr_𝔽_q/𝔽_p(αβ_k-1) S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^k-1γ_s(α)∏_l=0^k-s-1X_n-l),where γ_1(α)=α and γ_s(α) = αβ_s-1.However, by inductionS_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^k-1γ_s(α)∏_l=0^k-s-1X_n-l)=S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^j+1∏_l=0^k-s-1X_n-l).Since∑_α∈𝔽_q^×e^2π i/pTr_𝔽_q/𝔽_p(αβ_k-1)=-1,then it follows thatS_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^k-1β_s∏_l=0^k-s-1X_n+1-l) =S_𝔽_q(T_2,3,⋯,k(n)) -∑_α∈𝔽_q^×S_𝔽_q(T_2,3,⋯,k(n)+∑_s=1^k-1∏_l=0^k-s-1X_n-l).Since (<ref>) does not depend on the choice of the β_t's, then it follows thatS_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^k-1β_s∏_l=0^k-s-1X_n+1-l) =S_𝔽_q(T_2,3,⋯,k(n+1)+∑_s=1^k-1∏_l=0^k-s-1X_n+1-l)is true.This completes the induction and the proof. Next is the recurrence for exponential sums of trapezoid functions over any Galois field.Let k≥ 2 be an integer and q=p^r with p prime. The sequence {S_𝔽_q(T_2,3,⋯,k(n))}_n=k^∞ satisfies a homogeneous linear recurrence with integer coefficients whose characteristic polynomial is given byQ_T,k,𝔽_q(X)=X^k-q ∑ _l=0^k-2 (q-1)^l X^k-2-l. 
In particular, when q=2 we recover Theorem <ref>.We present the proof for k>2.The case k=2 can be proved using similar techniques.Start by turning X_n OFF and ON, that is, by letting X_n assume all its possible values.This produces the identity S_𝔽_q(T_2,3,⋯,k(n))=S_𝔽_q(T_2,3,⋯,k(n-1))+ ∑_β∈𝔽_q^×S_𝔽_q(T_2,3,⋯,k(n-1)+β∏_j=1^k-1 X_n-j)However, Lemma <ref> impliesS_𝔽_q(T_2,3,⋯,k(n-1)+β∏_j=1^k-1 X_n-j)=S_𝔽_q(T_2,3,⋯,k(n-1)+ ∏_j=1^k-1 X_n-j)for every β∈𝔽_q^×.Therefore, (<ref>) reduces toS_𝔽_q(T_2,3,⋯,k(n))=S_𝔽_q(T_2,3,⋯,k(n-1))+ (q-1)S_𝔽_q(T_2,3,⋯,k(n-1)+ ∏_j=1^k-1 X_n-j)Consider now S_𝔽_q(T_2,3,⋯,k(n-1)+ ∏_j=1^k-1 X_n-j). Let X_n-1 assume all its possible values and use the same argument as before to getS_𝔽_q(T_2,3,⋯,k(n-1)+ ∏_j=1^k-1 X_n-j) =S_𝔽_p(T_2,3,⋯,k(n-2)) + (q-1) S_𝔽_q(T_2,3,⋯,k(n-2)+∏_j=1^k-2 X_n-1-j+∏_j=1^k-1 X_n-1-j)Thus, (<ref>) reduces toS_𝔽_q(T_2,3,⋯,k(n)) = S_𝔽_q(T_2,3,⋯,k(n-1))+(q-1)S_𝔽_q(T_2,3,⋯,k(n-2)) + (q-1)^2 S_𝔽_q(T_2,3,⋯,k(n-2)+∏_j=1^k-2 X_n-1-j+∏_j=1^k-1 X_n-1-j).Continue in this manner to get the following equationS_𝔽_q(T_2,3,⋯,k(n)) = ∑_l=1^k-1(q-1)^l-1S_𝔽_qT_2,3,⋯,k(n-l)) +(q-1)^k-1 S_𝔽_q(T_2,3,⋯,k(n-k+1)+∑_j=0^k-2∏_l=0^j X_n-k+1-l). On the other hand, let X_n+1 assume all its possible values and use Lemma <ref> to get the equationS_𝔽_q(T_2,3,⋯,k(n+1)+∑_j=0^k-2∏_i=0^j X_n+1-l) =S_𝔽_q(T_2,3,⋯,k(n)) +e^2π i/pTr_𝔽_q/𝔽_p(1)S_𝔽_q(T_2,3,⋯,k(n)+∑_j=0^k-2∏_i=0^j X_n-l) +e^2π i/pTr_𝔽_q/𝔽_p(2)S_𝔽_q(T_2,3,⋯,k(n)+∑_j=0^k-2∏_i=0^j X_n-l) +e^2π i/pTr_𝔽_q/𝔽_p(3)S_𝔽_q(T_2,3,⋯,k(n)+∑_j=0^k-2∏_i=0^j X_n-l) ⋮ +e^2π i/pTr_𝔽_q/𝔽_p(α_p-1)S_𝔽_q(T_2,3,⋯,k(n)+∑_j=0^k-2∏_i=0^j X_n-l).Use the well-known formula∑_β∈𝔽_q^× e^2π i/pTr_𝔽_q/𝔽_p(β)=-1.to reduce (<ref>) to S_𝔽_q(T_2,3,⋯,k(n+1)+∑_j=0^k-2∏_i=0^j X_n+1-l) =S_𝔽_q(T_2,3,⋯,k(n))=- S_𝔽_q(T_2,3,⋯,k(n)+∑_j=0^k-2∏_i=0^j X_n-l).This last equation is equivalent to S_𝔽_q(T_2,3,⋯,k(n)) =S_𝔽_q(T_2,3,⋯,k(n+1)+∑_j=0^k-2∏_i=0^j X_n+1-l)+S_𝔽_q(T_2,3,⋯,k(n)+∑_j=0^k-2∏_i=0^j X_n-l). 
Let a_n = S_𝔽_q(T_2,3,⋯,k(n)+∑_j=0^k-2∏_l=0^j X_n-l). Then, S_𝔽_q(T_2,3,⋯,k(n))=a_n+1+a_n and equation (<ref>) is now (a_n+1+a_n) = ∑_l=1^k-1(q-1)^l-1(a_n+1-l+a_n-l)+(q-1)^k-1 a_n-k+1. The last equation reduces to a_n+1=∑_l=0^k-2 q(q-1)^l a_n-1-l. This concludes the proof. The polynomial Q_T,k,𝔽_q(X) is quite interesting. In particular, it seems to be irreducible for k > 2 and every q=p^r with p prime. The irreducibility of Q_T,k,𝔽_q(X) when (k,r)=1 is a consequence of the Eisenstein-Dumas criterion. Let f(x)=a_n x^n+a_n-1x^n-1+⋯+a_1 x+a_0 ∈ℤ[x] be a polynomial. Let p be a prime. Denote the p-adic valuation of an integer m by ν_p(m) (with ν_p(0)=+∞). Suppose that * ν_p(a_n)=0, * ν_p(a_n-i)/i>ν_p(a_0)/n for 1≤ i ≤ n-1, and * (ν_p(a_0),n)=1. Then, f(x) is irreducible over ℚ. Let q=p^r with p prime. Suppose that (k,r)=1. Then, the polynomial Q_T,k,𝔽_q(X)=X^k-q ∑ _l=0^k-2 (q-1)^l X^k-2-l is irreducible over ℚ. This is a direct consequence of the Eisenstein-Dumas criterion. Exponential sums over 𝔽_q of rotation functions also satisfy homogeneous linear recurrences. However, in general, these linear recurrences have higher order than the homogeneous linear recurrences satisfied by exponential sums of trapezoid functions. In other words, the identity observed over 𝔽_2 between the linear recurrences of exponential sums of trapezoid Boolean functions and rotation symmetric Boolean functions is lost over 𝔽_q. For example, if we consider the monomial rotation R_2(n)=X_1X_2+X_2X_3+⋯+X_n-1X_n+X_nX_1, then we have the following result. This is the first result that relies on linear algebra. Suppose that p>2 is prime. Then, {S_𝔽_p(R_2(n))} satisfies the homogeneous linear recurrence with characteristic polynomial Q_R,2,𝔽_p(X)=X^4-p^2.
Turn X_n and X_n-1 OFF and ON, that is, let them assume all values in 𝔽_p, and use the identityS_𝔽_p(T_2(n)+β X_n) = S_𝔽_p(T_2(n)+ X_n),for β∈𝔽_p^×to get the equation S_𝔽_p(R_2(n)) =S_𝔽_p(T_2(n-2))+(p-1)S_𝔽_p(T_2(n-2)+X_n-2) +∑_α∈𝔽_p^×∑_β∈𝔽_pe^2π i/pαβS_𝔽_p(T_2(n-2)+α X_1+β X_n-2),Let a_0(n) = S_𝔽_p(T_2(n)) a_1(n) = S_𝔽_p(T_2(n)+X_n) b_α,β(n) = S_𝔽_p(T_2(n)+α X_1+β X_n) for α∈𝔽_p^×, β∈𝔽_p.Then,S_𝔽_p(R_2(n)) =a_0(n-2)+(p-1)a_1(n-2)+∑_α∈𝔽_p^×∑_β∈𝔽_pe^2π i/pαβb_α,β(n-2).Observe thata_0(n) =a_0(n-1)+(p-1)a_1(n-1) a_1(n) =a_0(n-1)-a_1(n-1) b_α,β(n) = ∑_γ∈𝔽_p e^2π i/p(βγ) b_α,γ(n-1),which can be written in matrix form as([ a_0(n); a_1(n); b_1,0(n); b_1,1(n);⋮; b_p-1,p-1(n) ])=A(p)([ a_0(n-1); a_1(n-1); b_1,0(n-1); b_1,1(n-1);⋮; b_p-1,p-1(n-1) ])whereA(p) =( [ A_0(p)OO⋯O;O A_1(p)O⋯O;OO A_2(p)⋯O;⋮⋮⋮⋱⋮;OOO⋯ A_p-1(p);]),andA_0(p)=( [ 1 p-1; 1-1; ]) and A_j(p) = ( [ 1 1 1 ⋯ 1; 1e^2π i/pe^4π i/p ⋯ e^2(p-1)π i/p; 1 e^4 π i/p e^8 π i/p ⋯e^2× 2(p-1)π i/p; ⋮ ⋮ ⋮ ⋱ ⋮; 1e^2(p-1) π i/pe^4(p-1) π i/p ⋯ e^2× (p-1)^2π i/p; ]),for 1≤ j ≤ p-1. It is clear that the first block A_0(p) satisfies X^2-p.All other blocks A_j(p)'s, for 1≤ j ≤ p-1, are √(p)· W_p, where W_p is the p× p square Discrete FourierTransform matrix. Observe thatA_j(p)^2 = ( [ p 0 ⋯ 0 0; 0 0 ⋯ 0 p; 0 0 ⋯ p 0; ⋮ ⋮ ⋮ ⋮; 0 p ⋯ 0 0; ]).Therefore,A_j(p)^4 = ( [ p^2 0 0 ⋯ 0; 0 p^2 0 ⋯ 0; 0 0 p^2 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ p^2; ]).In other words, the big blocks A_j(p)'s satisfiy X^4-p^2.Since X^2-p |X^4-p^2, then we conclude that the matrix A(p) satisfies the polynomial Q_R,2,𝔽_p(X)=X^4-p^2.This means that the sequences {a_0(n)}, {a_1(n)} and {b_α,β(n)}, for α∈𝔽_p^×,β∈𝔽_p, all satisfy the linear recurrence with characteristic polynomial given by Q_R,2,𝔽_p(X).Since {S_𝔽_p(R_2(n))} is a combination of these sequences, then it also satisfies such recurrence. This concludes the proof. 
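The recurrence with characteristic polynomial X^4 − p^2, i.e. S_𝔽_p(R_2(n)) = p^2 · S_𝔽_p(R_2(n−4)), can be confirmed by brute force for small p and n. The sketch below (a sanity check with our own helper name, not a proof) does this for p = 3; note that these sums can be genuinely complex:

```python
from itertools import product
from cmath import exp, pi

def rotation_r2_exp_sum(n, p):
    """Brute-force S_{F_p}(R_2(n)), R_2(n) = X_1X_2 + X_2X_3 + ... + X_nX_1 (cyclic)."""
    zeta = exp(2j * pi / p)
    total = 0
    for x in product(range(p), repeat=n):
        f = sum(x[j] * x[(j + 1) % n] for j in range(n)) % p
        total += zeta ** f
    return total

p = 3
S = {n: rotation_r2_exp_sum(n, p) for n in range(3, 11)}
# characteristic polynomial X^4 - p^2  <=>  S(n) = p^2 * S(n - 4)
ok = all(abs(S[n] - p ** 2 * S[n - 4]) < 1e-6 for n in range(8, 11))
```

A small closed-form cross-check: conditioning on X_3 shows S_𝔽_3(R_2(3)) = 3·(1 + 2ζ^2) = −3√3·i, with ζ = e^{2πi/3}.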
We are now ready to prove one of the main results of this article. That is, exponential sums of rotation polynomials satisfy linear recurrences with integer coefficients. Let k≥ 2 be an integer and q=p^r with p prime and r≥ 1. The sequence {S_𝔽_q(R_2,3,⋯,k(n))}_n≥ k satisfies a linear recurrence with integer coefficients. Let ζ_p=e^2π i/p. Consider the expression S_𝔽_q(R_2,3,⋯, k(n+k)). Let X_n+k, X_n+k-1, ⋯, X_n assume all values in 𝔽_q and observe that S_𝔽_q(R_2,3,⋯, k(n+k)) can be written as a linear combination of expressions of the form a_α;β(n)=S_𝔽_q(T_2,3,⋯, k(n)+∑ _j=1^k-1(α _j ∏ _l=1^j X_n+1-l+β _j ∏ _l=1^j X_l)), where α=(α_1,⋯,α_k-1) ∈𝔽_q^k-1 and β=(β_1,⋯,β_k-1) ∈𝔽_q^k-1. However, note that for each α,β∈𝔽_q^k-1, we have a_α;β(n) = ∑_γ,λ∈𝔽_q^k-1 c_γ,λ· a_γ,λ(n-1), where c_γ,λ∈ℤ[ζ_p] is a cyclotomic integer. Let A_2,3,⋯, k(q) be the corresponding matrix for the linear equations in (<ref>) and F(X) be any annihilating polynomial for A_2,3,⋯, k(q). We can assume that F(X) has integer coefficients. This is because the minimal polynomial of A_2,3,⋯,k(q) is monic, has algebraic integer coefficients, and integrality is transitive. Then each {a_α;β(n)}_n satisfies the linear recurrence with characteristic polynomial given by F(X). Since {S_𝔽_q(R_2,3,⋯, k(n+k))} is a linear combination of these sequences, then {S_𝔽_q(R_2,3,⋯, k(n+k))} also satisfies such a recurrence. This concludes the proof. We know that the identity between linear recurrences of exponential sums of trapezoid Boolean functions and rotation symmetric Boolean functions is lost over 𝔽_q. However, the proof of Theorem <ref> suggests that a relation can be recovered.
Let q=p^r with p prime and r≥ 1.Let μ_T,k,𝔽_q(X) and μ_R,k,𝔽_q(X) be the characteristic polynomials associated to the minimal homogeneous linear recurrences with integer coefficientssatisfied by {S_𝔽_q(T_2,3,⋯, k(n))} and {S_𝔽_q(R_2,3,⋯, k(n))} (resp.).Then,μ_T,k,𝔽_q(X) | μ_R,k,𝔽_q(X).In particular, if (k,r)=1,then Q_T,k,𝔽_q(X) | μ_R,k,𝔽_q(X) In the proof of Theorem <ref>.Observe that {a_0;0(n)}={S_q(T_2,3,⋯,k(n))}, this implies (<ref>).Now, if (k,r)=1, then Q_T,k,𝔽_q(X) is irreducible and therefore μ_T,k,𝔽_q(X)=Q_T,k,𝔽_q(X).This concludes the proof. Let {b(n)} be a sequence on an integral domain D.A set of sequences{{a_1(n)}, {a_2(n)},⋯, {a_s(n)}},where s is some natural number, is called arecursive generating set for {b(n)} if * there is an integer l such that for every n, b(n) can be written as a linear combination of the formb(n)=∑_j=1^s c_j· a_j(n-l), where c_j's are constants that belong to D, and * for each 1≤ j_0 ≤ s and every n, a_j_0(n) can be written as a linear combination of the forma_j_0(n)=∑_j=1^s d_j· a_j(n-1), where d_j's are also constants that belong to D.The sequences {a_j(n)}'s are called recursive generating sequences for {b(n)}. It is a well-known result in the theory of recursive sequences that a sequence that has a recursive generating set satisfies a linear recurrence with constant coefficients.In fact, this techniquehas been used in Theorems <ref> and <ref>. 
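For q = 3 and k = 3, the trapezoid polynomial Q_T,k,𝔽_q(X) specialises to X^3 − 3X − 6, so S_𝔽_3(T_2,3(n)) should satisfy S(n) = 3S(n−2) + 6S(n−3). A quick brute-force check over the prime field (the helper name is ours):

```python
from itertools import product
from math import prod
from cmath import exp, pi

def trapezoid_exp_sum_fp(n, k, p):
    """Brute-force S_{F_p}(T_{2,...,k}(n)) for a prime p (field trace = identity)."""
    zeta = exp(2j * pi / p)
    return sum(zeta ** (sum(prod(x[j:j + k]) for j in range(n - k + 1)) % p)
               for x in product(range(p), repeat=n))

S = {n: trapezoid_exp_sum_fp(n, 3, 3) for n in range(3, 10)}
# Q_{T,3,F_3}(X) = X^3 - 3X - 6  <=>  S(n) = 3 S(n-2) + 6 S(n-3)
ok = all(abs(S[n] - 3 * S[n - 2] - 6 * S[n - 3]) < 1e-6 for n in range(6, 10))
```

The first values are S(3) = 15 and S(4) = 45, which is the factor Q_T,3,𝔽_3(X) = X^3 − 3X − 6 appearing in the 𝔽_3 example discussed next.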
Theorem <ref> generalizes to monomial rotation functions and linear combinations of them, that is, exponential sums over any Galois field of linear combinations of monomial rotation polynomials satisfy linear recurrences. Of course, in general, we might need to turn OFF and ON more than k variables, even if the rotation is of degree k. Also, even though the sequences (<ref>) always exist, their number might be too big to be handled by hand. For example, consider the sequence of exponential sums {S_𝔽_3(R_2,3(n))}. After some identifications, the authors needed 24 different recursive generating sequences (not claiming that this is optimal) of the form (<ref>) and their corresponding 24× 24 matrix in order to find that {S_𝔽_3(R_2,3(n))} satisfies the linear recurrence whose characteristic polynomial is given by X^6-3 X^4-9 X^3+9 X+18 = (X^3-3) (X^3-3 X-6)= (X^3-3)Q_T,3,𝔽_3(X). Also, in general, finding the minimal polynomial of a matrix is not an easy task, therefore explicit formulas like the ones in Theorem <ref> and Theorem <ref> are much harder to get. In the next section, this technique is used to prove that exponential sums over Galois fields of elementary symmetric polynomials (and linear combinations of them) satisfy homogeneous linear recurrences with integer coefficients.
§ LINEAR RECURRENCES OVER 𝔽_Q: SYMMETRIC POLYNOMIALS CASE
It is a well-established result that exponential sums of symmetric Boolean functions are linear recurrent. This was first established by Cai, Green and Thierauf <cit.>. In <cit.>, Castro and Medina use this result to show that a conjecture of Cusick, Li, Stănică <cit.> is true asymptotically. In <cit.>, some of the results of <cit.> were extended to some perturbations of symmetric Boolean functions. This recursive structure was also used in <cit.> to study the periodicity mod p (p prime) of exponential sums of symmetric Boolean functions.
In this section we show that exponential sums of some symmetric polynomials are linear recurrent over any Galois field. Remarkably, the proof uses the same argument as in the proof of Theorem <ref>. We include the proof for completeness; the reader is welcome to skip it. Let σ_n,k be the elementary symmetric polynomial in n variables of degree k. For example, σ_4,3 = X_1 X_2 X_3+X_1 X_4 X_3+X_2 X_4 X_3+X_1 X_2 X_4. We have the following result. Let k≥ 2 be an integer and q=p^r with p prime and r≥ 1. The sequence {S_𝔽_q(σ_n,k)} satisfies a linear recurrence with constant coefficients. Consider the expression S_𝔽_q(σ_n+k,k). Define a_β(n)=S_𝔽_q(σ_n,k+∑ _j=1^k-1β_j σ_n,k-j). The set {a_β(n)}_β∈𝔽_q^k-1 is a recursive generating set for S_𝔽_q(σ_n+k,k). Therefore, the sequence {S_𝔽_q(σ_n+k,k)}_n≥ 0 satisfies a linear recurrence with constant coefficients. As in the proof of Theorem <ref>, it can be argued that a linear recurrence with integer coefficients is guaranteed to exist. This concludes the proof. This result can be generalized to any polynomial of the form ∑ _j=0^k-1β_j σ_n,k-j, with β_j ∈𝔽_q. We present the result without proof, as it follows almost verbatim from the proof of Theorem <ref>. Let k≥ 2 be an integer and q=p^r with p prime and r≥ 1. The sequence S_𝔽_q(∑ _j=0^k-1β_j σ_n,k-j) satisfies a linear recurrence with constant coefficients, regardless of the choice of the β_j's. Consider the sequence {S_𝔽_3(σ_n,3)}. Recall that in this case the generating sequences are given by a_(s,t)(n)={S_𝔽_3(σ_n,3+sσ_n,2+tσ_n,1)}, where s,t∈𝔽_3. Establish the order (0,0), (1,0), (2,0), (0,1), (1,1), (2,1), (0,2), (1,2), (2,2).
Then,( [ a_(0,0)(n); a_(1,0)(n);⋮; a_(2,2)(n) ]) = A ( [ a_(0,0)(n-1); a_(1,0)(n-1);⋮; a_(2,2)(n-1) ]),where the matrix A is given byA=( [111000000;010001100;001010100;0001e^2 i π/3 e^-2 i π/3000; e^-2 i π/30001000e^2 i π/3;e^2 i π/3000010 e^-2 i π/30;0000001 e^-2 i π/3e^2 i π/3;00 e^-2 i π/3e^2 i π/300010;0e^2 i π/30 e^-2 i π/300001;]).The minimal polynomial of A is given by μ_A(X)=X^9-9 X^8+36 X^7-81 X^6+108 X^5-81 X^4+81 X^2-81 X+27= (X^3-3 X^2+3) (X^6-6 X^5+18 X^4-30 X^3+36 X^2-27 X+9).Therefore, {S_𝔽_3(σ_n,3)} satisfies the linear recurrence with characteristic polynomial given by μ_A(X).§.§ Quadratic case The case of the elementary symmetric polynomial of degree 2 is fascinating.Observe that a_s(n) = S_𝔽_p(σ_n,2+s σ_n,1),where s∈𝔽_p, are the generating sequences of {S_𝔽_p(σ_n,2)}.Also,( [ a_0(n); a_1(n);⋮; a_p-1(n) ]) = M(p) ( [ a_0(n-1); a_1(n-1);⋮; a_p-1(n-1) ]),where the matrix M(p) is given byM(p)=([ 1 1 1 1 1 ⋯ 1; e^2(p-1)π i/p 1 e^2 π i/p e^4 π i/p e^6 π i/p ⋯ e^2(p-2)π i/p;e^2× 2(p-2)π i/pe^2× 2(p-1)π i/p 1e^4π i/p e^8 π i/p ⋯e^2× 2(p-3)π i/p;e^2× 3(p-3)π i/pe^2× 3(p-2)π i/pe^2× 3(p-1)π i/p 1e^6π i/p ⋯e^2× 2(p-3)π i/p;e^2× 4(p-4)π i/pe^2× 4(p-3)π i/pe^2× 4(p-2)π i/pe^2× 4(p-2)π i/p 1 ⋯e^2× 2(p-3)π i/p; ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ⋮;e^2(p-1) π i/p e^2× 2(p-1) π i/p e^2× 3(p-1) π i/p e^2× 4(p-1) π i/p e^2× 5 (p-1)π i/p ⋯ 1; ]).The matrix M(p) can be obtained from the p× p Fourier Discrete Transform Matrix by replacing its j-row r_j by RTC^j-1( r_j), where RTC is the rotate through carry functionRTC(a_1,a_2,a_3,⋯, a_n) = (a_n,a_1,a_2,⋯, a_n-1)and RTC^m represents m iterations of RTC. It is not hard to prove that M(p) is a Complex Hadamard Matrix.In particular,M(p) M(p)^T = M(p)^T M(p) = ( [ p 0 0 ⋯ 0; 0 p 0 ⋯ 0; 0 0 p ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ p; ]).This implies that M(p) is diagonalizable and that all its eigenvalues satisfy |λ|=√(p).Moreover, its eigenvalues are related to the number-theoretical quadratic Gauss sum mod p. 
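The Hadamard claim is easy to test numerically. The sketch below (our own code, using the closed form for the entries, (j,k)-entry ζ^{j(k−j)} with ζ = e^{2πi/p}) checks the complex Hadamard identity in its conjugate-transpose form, M(p)M(p)^* = pI, for p = 5:

```python
from cmath import exp, pi

def build_M(p):
    """M(p) with (j,k)-entry zeta^{j(k-j)}, zeta = e^{2*pi*i/p}, 0 <= j,k < p."""
    zeta = exp(2j * pi / p)
    return [[zeta ** ((j * (k - j)) % p) for k in range(p)] for j in range(p)]

p = 5
M = build_M(p)
# complex Hadamard property: M(p) M(p)^* = p I  (conjugate transpose)
MMstar = [[sum(M[i][t] * M[j][t].conjugate() for t in range(p)) for j in range(p)]
          for i in range(p)]
hadamard_ok = all(abs(MMstar[i][j] - (p if i == j else 0)) < 1e-9
                  for i in range(p) for j in range(p))
```

Since all entries are unimodular and M(p)M(p)^* = pI, the matrix M(p)/√p is unitary, which is exactly why every eigenvalue has modulus √p.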
The quadratic Gauss sum mod p is defined byg(a;p)=∑_k=0^p-1e^2π i ak^2/pIt is well-established thatg(a;p) = (a/p)g(1;p),where (a/p) denotes the Legendre's symbol, and thatg(1;p) = √(p)p≡ 14i√(p)p≡ 34.Let C(p) be the set of eigenvalues of M(p). Let ζ_p =e^2π i/p.Then, λ∈ C(p) if and only ifλ = -2p g(1;p)ζ^-s a^2.In particular, |C(p)|=(p+1)/2.Let p be an odd prime number and ζ=exp(2π i/p). The matrix M(p) has (j,k)-entry ζ^j(k-j) where j and k run from 0 to p-1 inclusive.We compute the eigenvalues of M(p) simply by writing down its eigenvectors.Set s=1/2(p-1). Then 1≡-2s (mod p) For 0≤ a≤ p-1, let v_a be the column vector with k-entry ζ^s(k-a)^2 where 0≤ k≤ p-1. Then the v_a are the cyclic shifts of v_0. The entry in row j of M(p)v_a is∑_k=0^p-1ζ^j(k-j)+s(k-a)^2 = ∑_k=0^p-1ζ^-2sjk+2sj^2+sk^2-2sak+sa^2= ∑_k=0^p-1ζ^s(k-a-j)^2+sj^2-2saj= g(s;p)ζ^s(j-a)^2-sa^2.This is g(s,p)ζ^-sa^2 times the entry in row j of v_a. Therefore each v_a is an eigenvector with eigenvalueg(s;p)ζ^-sa^2= sp g(1;p)ζ^-sa^2 =-2p g(1;p)ζ^-sa^2. As these eigenvalues are not all distinct, there remains the possibility that some of these eigenvectors v_a are not linearly independent. That can only happen with eigenvectors in the same eigenspace, so for v_a and v_p-a where 0<a<p. But it is clear that none of the v_a are multiples of any of the others; simply consider the quotients of corresponding entries. So we have a dimension-two eigenspace for each eigenvalue -2p g(1,p)ζ^-sa^2 for 1≤ a≤1/2(p-1). 
This completes the proof.Note that if λ is defined as in (<ref>), then equation (<ref>) impliesλ^p = (-i)^p-1/2√(p^p)for every odd prime p.Therefore, Theorem <ref> leads toM(p)^p =( [ (-i)^p-1/2√(p^p)00⋯0;0 (-i)^p-1/2√(p^p)0⋯0;00 (-i)^p-1/2√(p^p)⋯0;⋮⋮⋮⋱⋮;000⋯ (-i)^p-1/2√(p^p);]),and soM(p)^2p =( [ (-1/p)p^p 0 0 ⋯ 0; 0 (-1/p)p^p 0 ⋯ 0; 0 0 (-1/p)p^p ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ (-1/p)p^p; ]).Thus,X^2p-(-1/p)p^pis an annihilating polynomial for the matrix M(p), which in turns implies that {S_𝔽_p(σ_n,2)} satisfies the linear recurrence with characteristic polynomial (<ref>). § SOME OBSERVATIONS AND CONCLUDING REMARKS We had shown that exponential sums over Galois fields of trapezoid polynomials and rotation polynomials satisfy linear recurrences with integer coefficients.This means that they can be calculated efficiently if we know a priori some initial values. We predict the initial conditions for two families of these type of polynomials.Consider the trapezoid polynomial T_2,3,⋯,k(n). Recall that {S_𝔽_q(T_2,3,⋯,k(n))} satisfies the linear recurrence with integer coefficients with characteristic polynomialgiven by Q_T,k,𝔽_q(X), which is of degree k.This implies that we need to know k initial values in order to calculate the whole sequence.Of course, {S_𝔽_q(T_2,3,⋯,k(n))} makes sense only for values of n≥ k, however, since it satisfies a linear recurrence with integer coefficients, it can be extended to values of n<k.We conjecture the following.Let {t_k,q(n)} be defined byt_k,q(j)=q^j,for 0≤ j ≤ k-1t_k,q(n)==q ∑ _l=0^k-2 (q-1)^l t_k,q(n-(l+2)), for n≥ k.Then, S_𝔽_q(T_2,3,⋯,k(n)) = t_k,q(n) for all values of n≥ k.We were able to prove that this conjecture is true for k=2,3,4, but the general statement remains open.We were also able to predict the initial conditions for {S(R_2,3,⋯, k(n))} (Boolean case). 
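Small cases of the conjecture can be checked by brute force. The sketch below (our own helper names) builds the conjectured sequence t_{k,q} and compares it with directly computed exponential sums for k = 3 over the prime fields 𝔽_2 and 𝔽_3 (for q = p the field trace is the identity):

```python
from itertools import product
from math import prod
from cmath import exp, pi

def t_seq(k, q, N):
    """Conjectured sequence: t(j) = q^j for 0 <= j <= k-1, then the
    Q_{T,k,F_q} recurrence t(n) = q * sum_{l=0}^{k-2} (q-1)^l * t(n-l-2)."""
    t = [q ** j for j in range(k)]
    while len(t) <= N:
        n = len(t)
        t.append(q * sum((q - 1) ** l * t[n - l - 2] for l in range(k - 1)))
    return t

def S_trapezoid(n, k, p):
    """Brute-force S_{F_p}(T_{2,...,k}(n)) for a prime p (trace = identity)."""
    zeta = exp(2j * pi / p)
    return sum(zeta ** (sum(prod(x[j:j + k]) for j in range(n - k + 1)) % p)
               for x in product(range(p), repeat=n))

# conjecture check for k = 3 over F_2 and F_3, small n
checks = [all(abs(S_trapezoid(n, 3, p) - t_seq(3, p, 8)[n]) < 1e-6
              for n in range(3, 9)) for p in (2, 3)]
```

For q = 3, this reproduces the values 15, 45, 99, ... starting from the conjectured seed 1, 3, 9.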
Recall that this sequence satisfies the linear recurrence whose characteristic polynomial is given byp_k(X)=X^k -2 (X^k-2+X^k-3+⋯+X+1).Therefore, as in the case of trapezoid polynomial T_2,3,⋯,k(n), we need to know k initial values in order to calculate the whole sequence. Letδ_o(j) =0 if jis even 1 if jis odd.Define {r_k(n)} byr_k(0) =kr_k(j) =2^j-δ_o(j)· 2,for 1≤ j≤ k-1r_k(n) =2∑_l=0^k-2 r_k(n-(l+2)), for n≥ k.Then, S(R_2,3,⋯,k(n)) = r_k(n) for all values of n≥ k. The problem of finding suitable initial conditions for this type of sequences is a nice problem, but also an important one.For example, ifConjecture <ref> is true, then {S(R_2,3,⋯,15(n))}_n≥ 15 =32766, 65504, 131036, 262036, 524096, 104813,2096268, 4192412⋯ {S(R_2,3,⋯,30(n))}_n≥ 30 = 1073086444, 2146129256, 4292171136, 8584167576, 17167985776,34335272736, 68669148016, 137335500952,⋯ {S(R_2,3,⋯,100(n))}_n≥ 100 =1267650600228229401496703205376, 2535301200456458802993406410548, 5070602400912917605986812821300, 10141204801825835211973625642388,20282409603651670423947251284976, 40564819207303340847894502569720,⋯On the other hand, we know that Conjecture <ref> is true for k=3, which means, for example, that{S_𝔽_9(T_2,3(n))}_n≥ 3 =153, 1377, 7209, 23409, 164025, 729729, 3161673, 18377361,⋯ {S_𝔽_7^3(T_2,3(n))}_n≥ 3 =234955, 80589565, 13881523159, 55203852025, 14215001955427, 1647320876934229, 11351488736356111, 2232536080171760209,⋯ {S_𝔽_71^2(T_2,3(n))}_n≥ 3 =50818321, 256175156161, 645881606118001, 2582501749259041, 9764439145967152081, 16422699840579863752321,114835229977615135072561, 330868420079857977922668001,⋯.Also, if Conjecture <ref> is true in general, then we have, for example, {S_𝔽_5(T_2,3,4,5(n))}_n≥ 5 =1845, 9225, 39725, 173025, 730725, 2988025, 13244125, 56108625,⋯ {S_𝔽_11^2(T_2,3,⋯, 7(n))}_n≥ 7 =18445769583241, 2231938119572161, 226346720724231481, 22141818198352009201, 2044333948085969113321,170550498912524502711841, 11342127359186464124132761,⋯ {S_𝔽_7919(T_2,3,⋯,8(n))}_n≥ 8 
= 13665512318276822315545157633, 108217192048434155916802103295727, 734609211013142008709051078210604961, 4848502223556916452901817857822360556623, 30722822355930196223839440343843855453844801, 182535766024343164334384388453936618605681619887, ⋯.

All these values were calculated almost instantaneously. Another nice problem is to automate the process presented in this work.
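The recurrence above is cheap to evaluate once the k initial values are in hand. A minimal sketch (the function name is ours) that reproduces the conjectured values of S(R_{2,3,⋯,k}(n)):

```python
def r_k(k, n_max):
    """Return [r_k(0), ..., r_k(n_max)] from the initial values and the
    linear recurrence r_k(n) = 2 * sum_{l=0}^{k-2} r_k(n - (l + 2))."""
    r = [k]  # r_k(0) = k
    for j in range(1, k):
        # r_k(j) = 2^j - delta_o(j) * 2, with delta_o(j) = 1 for odd j
        r.append(2**j - (2 if j % 2 == 1 else 0))
    for n in range(k, n_max + 1):
        r.append(2 * sum(r[n - l - 2] for l in range(k - 1)))
    return r
```

For k = 15 this reproduces the first conjectured values listed above: r_k(15, 16)[15:] gives [32766, 65504].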
Common envelope light-curve – I.

Pablo.Galaviz@me.com
^1 Department of Physics and Astronomy, Macquarie University, Sydney, NSW, Australia
^2 Astronomy, Astrophysics and Astrophotonics Research Centre, Macquarie University, Sydney, NSW, Australia
^3 Argelander-Institut für Astronomie, Auf dem Hügel 71, D-53121 Bonn, Germany
^* Alexander-von-Humboldt fellow

The common envelope binary interaction occurs when a star transfers mass onto a companion that cannot fully accrete it. The interaction can lead to a merger of the two objects or to a close binary. The common envelope interaction is the gateway of all evolved compact binaries, all stellar mergers and likely many of the stellar transients witnessed to date. Common envelope simulations are needed to understand this interaction and to interpret stars and binaries thought to be the byproduct of this stage. At this time, simulations are unable to reproduce the few observational data available and several ideas have been put forward to address their shortcomings. The need for more definitive simulation validation is pressing, and is already being fulfilled by observations from time-domain surveys. In this article, we present an initial method and its implementation for post-processing grid-based common envelope simulations to produce the light-curve so as to compare simulations with upcoming observations. Here we implemented a zeroth order method to calculate the light emitted from common envelope hydrodynamic simulations carried out with the 3D hydrodynamic code Enzo used in unigrid mode. The code implements an approach for the computation of luminosity in both optically thick and optically thin regimes and is tested using the first 135 days of the common envelope simulation of <cit.>, where a 0.8 M_⊙ red giant branch star interacts with a 0.6 M_⊙ companion. This code is used to highlight two large obstacles that need to be overcome before realistic light curves can be calculated.
We explain the nature of these problems and the attempted solutions and approximations in full detail to enable the next step to be identified and implemented. We also discuss our simulation in relation to recent data of transients identified as common envelope interactions.

Common envelope light-curves – I. grid-code module calibration

Pablo Galaviz^1,2, Orsola De Marco^1,2, Jean-Claude Passy^3,*, Jan E. Staff^1,2, Roberto Iaconi^1,2

Received: date / Accepted: date

§ INTRODUCTION

The common envelope (CE) interaction between two stars has become the standard explanation for the existence of close evolved binaries such as cataclysmic variables or the progenitors of Type Ia supernovae <cit.>. Yet, this interaction continues to elude a reasonable physical description. Without it, it becomes difficult to carry out meaningful population synthesis studies <cit.>, including those allowing us to reconcile predicted and observed rates of gravitational-wave producing events <cit.>. Hydrodynamic models have been carried out with a range of codes <cit.>, but it appears that even the basics of the interaction, such as the final separation or how much and when the CE is ejected, are poorly reproduced by these models. Comparing model outputs with observations has mainly been limited to post-CE systems <cit.>. The separations of post-CE systems tend to be larger in some simulations than in observations <cit.>, although it is clear that simulated primaries with more massive and/or more compact envelopes result in smaller orbital separations <cit.>. The assumption is that post-CE systems are generated by the entire removal of the stellar envelope over one dynamical event. Most CE simulations do not succeed in ejecting the entire envelope <cit.>. Recently, some simulations have successfully achieved envelope ejection by assuming that the entire recombination energy budget is available for the ejection <cit.>.
Even so, successful ejection only takes place for certain parameters <cit.>. Moreover, some recombination energy may escape, as the neutral medium becomes optically thin <cit.>. As a result of the discrepancies between simulations and observations it is non-trivial to use the observations as code validation (for a review of the CE problem see ), as one may suspect additional physics or phases, not modelled in the simulations, may play a role <cit.>. Recently, time-resolved observations have detected a range of new outbursts, which have been named intermediate luminosity optical transients (ILOT; ), so called because they have intermediate luminosities between those of novae and supernovae and lead to extremely red outburst products. Some of these ILOTs appear to have been due to the merger of two stars. In particular the V1309 Sco ILOT <cit.> has almost certainly been caused by the merger of a subgiant and a low mass companion, because the contact binary was actually observed before the outburst, the period was reducing, and the post-merger object, a large giant, shows no sign of binarity. Other such objects may have been V838 Mon <cit.> and V4332 Sgr <cit.>, as well as other, extragalactic ones such as M31 RV <cit.>, M85 OT2007 <cit.> or M31 LRN2015 <cit.>. These outbursts may give us an early glimpse into the light properties of CEs and hence provide us with additional model constraints and code validation. As data accumulate we are already glimpsing the complexity of these phenomena. It is clear that there are various phases characterising these presumed mergers: a phase preceding the dynamical merger, the dynamical merger itself, and a phase following it, all of which have distinct light properties that contribute to the overall light behaviour <cit.>. The possible processes that change a slowly evolving binary to a fast merging one are several, including the Darwin instability <cit.> or a slower merger driven by mass loss through the outer Lagrangian point <cit.>.
As observations and models multiply, the role of CE simulation becomes a less and less isolated one, and different codes and methods will have to be merged, or at least laid alongside one another <cit.>. In so doing CE codes will have to evolve to their next generation, with higher resolution <cit.> and the addition of extra physics, such as a more refined equation of state <cit.> or the addition of magnetic fields <cit.>. In particular, radiation hydrodynamics will be a fundamental component in understanding the light expected from the CE fast in-spiral phase. This step will allow us to understand when a CE takes place and when other emission systems are dominating the light. In this paper we attempt the calculation of the light properties of one simulation of the CE early fast in-spiral phase by post-processing one of the hydrodynamic simulations of <cit.>, hereafter P12. The challenges presented by the CE binary interaction when attempting to extract the light properties from simulations are even greater than those encountered when trying to determine the gas dynamics. However, these challenges need to be quantified in order to improve the calculation to the point of being useful. Quantifying the challenges to the accurate calculation of a CE light curve can also focus future hydrodynamic efforts towards aspects of the computation that can aid the post-processing of the light. The paper is organised as follows: in Section <ref> we describe the physical situation of the early in-spiral of the CE interaction between a giant and a less massive companion. In so doing we set the stage and introduce some of the challenges. In Section <ref> we summarise the luminosity calculation approach, with details left to the appendix, where we emphasize the challenge of knowing the photospheric temperature. This is followed by the calculation of the light curve for the CE simulations presented by P12 in Section <ref>. We then discuss available observational constraints in Section <ref>.
Conclusions and discussions are presented in Section <ref>.

§ THE PHYSICAL SITUATION

Before attempting the calculation of the light, it is important to define the physical regime and the parameters of the calculation. The simulation we base this work on is Enzo2 of P12, carried out between a 0.88 M_⊙, 89 R_⊙ RGB star and a 0.6 M_⊙ point mass companion. Their simulation was carried out using a domain size of 2 AU and 128 cells on a side[Enzo2 was not the most resolved simulation of P12, but the outcomes of this simulation were not too different from those of their Enzo7 simulation, which had twice the resolution.]. Here we have repeated the simulation using the same code and setup, but with a domain four times as large and 512 cells on a side so as to keep the resolution identical (the cell size is 3.4 R_⊙). The reason for this was to prolong the time during which the CE gas remains in the computational domain. In Fig. <ref> we show a slice along the equatorial plane at three times during the simulation: at the beginning, right after the one-dimensional (1D) stellar structure has been mapped and stabilised, at 75 days and at 135 days. We display density, temperature, velocity modulus, the ratio of gas to radiation pressure and the Mach number. In Fig. <ref> we show a zoomed-in detail of some of these quantities. Here we can also see the discrete nature of the grid and its relatively low resolution. In Fig. <ref> we show 1D cuts at time zero where we can see the problem of mapping a higher resolution, 1D stellar structure onto a three-dimensional (3D) computational domain with far inferior resolution. The photosphere, as defined by the 1D model, has values of pressure, temperature, density, etc. which are vastly different from those encountered near the centre of the star. These changes are well captured by the 1D model, but lost as soon as it is interpolated onto the 3D domain. This is why in Fig.
<ref> even the 1D model (red curves) is missing the points associated with the cooler, low density, photospheric layers: they are all contained within one cell of the 3D domain. In Fig. <ref> we show an important aspect of the convective giant star that will become important at a later time, when we face the problem of the photospheric temperature. The luminosity of each layer in a radiative, spherical star is always equal to:

L(r) = - [4 π r^2 c / (3 κ(r) ρ(r))] ∂ u_rad(r)/∂ r,

where r is the radius, c the speed of light, κ the opacity, ρ the density and u_rad is the radiative energy density. This is not so for the convective layers, where it is the bulk motion of the convective eddies or plumes that transports the energy outwards. Hence, applying the above expression to our 1D model, we see how in the deep convective envelope the radiative luminosity is small, while it increases to the total value in the outer thin radiative layer. In the 3D simulation the gas is adiabatic, and this must be a reasonable approximation because of the short duration of the expansion. The only heating is at the hand of compression and some shock heating early in the simulation (Fig. <ref>). Some CE interactions do result in stronger shocks, but not all. In the interaction simulated by P12, the 0.6 M_⊙ companion moves subsonically (Iaconi et al. 2017), although we do see locally mildly supersonic gas before 135 days. The high temperature of the gas around the star (see the temperature panels in Fig. <ref>) is an artificial expedient commonly used in this type of grid computation <cit.>. It ensures that the stellar surface does not expand into the vacuum by providing a pressure that balances the atmospheric pressure by way of a very high temperature, but very low density “vacuum gas". While this expedient has no consequence for the hydrodynamics, it is very problematic when extracting the light properties of the CE.
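The diffusion expression above is straightforward to evaluate; a sketch in cgs units with illustrative values (not taken from the simulation) for κ, ρ and the energy-density gradient:

```python
import math

C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def radiative_luminosity(r, kappa, rho, du_dr):
    """L(r) = -(4 pi r^2 c) / (3 kappa rho) * du_rad/dr, in erg s^-1.
    An outward-decreasing u_rad (du_dr < 0) gives a positive luminosity."""
    return -4.0 * math.pi * r**2 * C_LIGHT / (3.0 * kappa * rho) * du_dr

# Illustrative values: r ~ 1e13 cm, a high envelope opacity,
# a low envelope density, and du_rad/dr ~ -1e-9 erg cm^-4.
L = radiative_luminosity(r=1e13, kappa=100.0, rho=4e-8, du_dr=-1e-9)
```

Note that this expression only returns the *radiative* luminosity; as discussed above, it underestimates the total flux wherever convection carries the energy.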
As the simulation progresses, the outermost layers of the CE, which have lower densities, acquire a relatively large temperature as they “mix" with vacuum gas. These layers are dynamically unimportant, but they have an artificially high temperature and high opacity. These thin, hot layers can be seen clearly in the temperature panels in Fig. <ref> and Fig. <ref>, even at time zero, where a yellow “skin" surrounds the gas distribution. Like these artificially hot layers, the low density “vacuum" is completely opaque. Any model that attempts to calculate the light from these simulations will have to devise a way to avoid the low density “vacuum" as well as any low density gas which has an unrealistically high temperature. As we explain below, we do this by imposing a “density floor": gas with density lower than this floor is completely ignored in the calculation of the optical depth. In the detail in Fig. <ref>, top row, we see a small, low density, high temperature plume that is eliminated by the density floor. In certain physical situations, such as supernova explosions, the relationship between gas thermal and radiation energies and expansion is such that photons can leak out from behind the photosphere. This is not the case here. A CE expansion during the dynamical in-spiral is a relatively slow process, more akin to the expansion of a Mira giant during its radial pulsation cycle than to the expansion of an envelope during a supernova eruption. The early expansion phase which we are trying to characterise here is extremely optically thick, with almost none of the expanding gas becoming transparent. The speed at which the expansion takes place is of the order of tens of kilometres per second. In Fig. <ref>, third row, we see that the expansion early in the interaction is below 50 km s^-1 with only a few pockets of material moving faster (the maximum velocity witnessed in the first 135 days of the simulation is 200 km s^-1).
The Mach number of the gas is just over unity by 135 days and decreasing, with the exception of a pocket of gas at 75 days, which eventually disappears. Over the entire simulation, gas pressure dominates over radiation pressure, except in the “vacuum" and within the thin skin of gas bounding the envelope, which is heated by the external medium. The timescale of the in-spiral is of the order of a year, with the expansion continuing beyond this time frame with a decreasing velocity. Additionally, none of the energy associated with the in-spiral escapes during the short time of the in-spiral. As the companion in-spirals, the gravitational energy is deposited into the gas primarily in the form of thermal energy and, to a much lesser degree, of kinetic energy of the orbit as well as of the gas itself. The thermal energy will escape the star, but on timescales longer than 135 days. In Fig. <ref> we show the time photons would take to travel from a certain depth in the CE to the photosphere. This calculation is carried out by using a simple random walk theory with unequal step sizes <cit.>, and is demonstrated for different times during the CE interaction (0, 75 and 135 days). As we can see, the time that the photons would take to travel out is always longer than the time over which we want to calculate the light-curve, namely 135 days. Hence in the first 135 days we do not expect any of the energy deposited into the CE by the in-spiralling companion to escape. In Fig. <ref> we display the values of temperature and density along a ray from the centre of the domain to the domain boundary along the positive x direction, at the usual three times during the simulation. The data points in the lower left corner of the plot, at low density and temperature, are those located outside the photosphere, within the hot vacuum that is excluded in our simulation.
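An estimate in the spirit of the random-walk argument (a sketch of the general idea, not the exact scheme of the cited work): through stratified shells the escape time from a depth point scales as t ≈ (3/c) ∫ τ(r) dr, where τ(r) is the optical depth from r to the surface.

```python
C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def diffusion_time(dr, kappas, rhos):
    """Approximate photon escape time [s] from the innermost point of a
    stack of shells of width dr [cm], ordered from that point outwards.
    Implements t ~ (3/c) * sum over shells of tau(shell->surface) * dr."""
    dtau = [k * r * dr for k, r in zip(kappas, rhos)]
    t, tau_out = 0.0, 0.0
    for d in reversed(dtau):      # walk from the surface inwards
        tau_out += d              # optical depth from this shell to surface
        t += 3.0 * tau_out * dr / C_LIGHT
    return t

# Uniform-medium example: 100 shells of 1e10 cm, electron-scattering-like
# opacity; total tau = 340, so t >> light-crossing time L/c ~ 33 s.
t_escape = diffusion_time(1e10, [0.34] * 100, [1e-9] * 100)
```

For a uniform medium this reduces to t ≈ (3/2) τ L / c, consistent with the familiar τ L / c scaling up to a factor of order unity.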
Here we can see that all cells considered contain ionised gas (T > 10 000 K), so the dominating opacity source is electron scattering.

§ LUMINOSITY CALCULATION AND THE TEMPERATURE PROBLEM

The CE Light MOdule (CELMO) reads the density and internal energy of each volume element in the 3D computational domain for each timestep for which the hydrodynamic code has created an output. From the internal energy, a temperature is determined and, using the density and temperature, an opacity is interpolated using opacity tables. In opacity tables the opacities are expressed as a function of log T and log R, where log R = log ρ - 3 log T + 18. We have used opacity tables for Z=0.02 and X=0.7 from <cit.> and <cit.> with metals from <cit.>. The latter table extends to temperatures as low as 1000 K (in the hydrodynamic simulations used in this paper, the temperature is never below this value). Using the density and the opacity, the optical depth is integrated for each volume element along parallel rays that are perpendicular to each face of the numerical domain. In this way the location of the surface where the optical depth is 2/3 is found, and the temperature of that location is used (but see Section <ref>), assuming blackbody radiation, to determine the brightness of each volume element. In Appendix <ref> we describe these steps in detail. In Appendix <ref> we describe the convolution with filter bandpasses, while in Appendix <ref> we perform numerical tests to verify our implementation.

§.§ The calculation of temperature

When the 1D model is mapped into the computational domain, the temperature quantity is not required, since the specific internal energy of each cell, u_int, is calculated from the pressure and density.
The Enzo equation of state is that of an ideal gas with an adiabatic index γ=5/3, while the 1D star was calculated with a more sophisticated, depth-dependent equation of state. When the 1D star is mapped into the 3D domain it is not in perfect hydrostatic equilibrium. This is why, after the initial mapping, the star needs to be stabilised in Enzo, as described in P12. The new equilibrium model tends to be slightly larger than the original 1D model. This slightly larger star constitutes the initial model in CELMO, which we use for the initial calculation of the luminosity. Fig. <ref> shows a typical comparison between the 1D model and the one used in the 3D model after relaxation, where we are zooming in onto the outer part of the star. Here the 1D model is missing the data points characterising the photosphere. These data points are all at almost the same radius, have very low density and contain almost no mass, but can create a problem when the star is immersed in the hot vacuum. The photosphere is therefore usually eliminated from the 1D model at the time of mapping it into 3D. Even if we had retained the 1D photosphere, the contact with the hot vacuum would, by the second time step, have heated these layers to an unrealistically high temperature, generating the problem which we discuss further below. The temperature at each cell centre in the 3D code is given by:

T = (M/R)(γ-1) u_int,

where R is the universal gas constant, M = μ m_H N_A is the molar mass, μ is the mean molecular weight, m_H is the hydrogen atomic mass and N_A is Avogadro's number. The temperature changes depending on the composition.
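Since M = μ m_H N_A and R = k_B N_A, the expression above reduces to T = μ m_H (γ-1) u_int / k_B. A sketch in cgs units, using the μ = 1.26 adopted below:

```python
K_B = 1.380649e-16  # Boltzmann constant [erg K^-1]
M_H = 1.6726e-24    # hydrogen atom mass [g]

def temperature(u_int, mu=1.26, gamma=5.0 / 3.0):
    """T = (M/R)(gamma - 1) u_int, rewritten per particle as
    T = mu * m_H * (gamma - 1) * u_int / k_B, with the specific
    internal energy u_int in erg g^-1."""
    return mu * M_H * (gamma - 1.0) * u_int / K_B

T = temperature(1e12)  # u_int = 1e12 erg g^-1 gives T of order 1e4 K
```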
The photospheric temperature for the simulation presented in Section <ref> is smaller than 10 000 K, which is the approximate limit for a neutral gas. We therefore choose to use a mean molecular weight of 1.26, corresponding to neutral mass fractions of X=0.73, Y=0.25 and Z=0.02.

§.§ The problem of spatial resolution in defining the effective temperature

The optical depth, τ, determines the amount of radiation which is visible to an observer. The photosphere is located at a surface where τ=2/3 (approximately half of the radiation is visible). The volume of fluid above the photosphere defines the optical thickness of the fluid. We define two regimes in our simulation. We call optically thick every ray for which the back of the first cell occupied by gas with density above the density floor (first discussed in Sec. <ref> and better described in Sec. <ref>) has an optical depth larger than 2/3. For the optically thick fluid that characterises the early expansion of the CE, the τ=2/3 surface is located to the precision of the hydrodynamic code resolution. The greater problem is that the gradient of temperature within the cell that contains the photosphere is very steep. Consequently, while the physical location of the photosphere is affected by a modest uncertainty, the estimation of the effective temperature, and hence of the luminosity, is far more inaccurate. Within the cell that contains the photosphere, the optical depth ranges between zero at the “front" side of the cell, to a value much larger than unity at the “back" side. The temperature at the centre of the same cell can have an arbitrarily high value because of the steep temperature gradient in the proximity of the stellar photosphere. A straight interpolation between the two cells in front of and behind the cell containing the photosphere is meaningless. The cell in front usually has a temperature value related to the “vacuum" temperature discussed in Sec. <ref>, hence unrealistically high.
The temperature at the centre of the cell “behind" the cell containing the photosphere has the high value characteristic of a location farther inside the star. Hence finding the correct photospheric temperature by interpolation is impossible, because we have no knowledge of the external value. Using a value of zero, or a similarly low value, for the cell just outside that containing the photosphere, and using one or even multiple points to interpolate the temperature at the centre of the cell containing the photosphere, gives a range of possible values, which are dependent on arbitrary fit parameters. These fitting methods give an answer for the temperature to within a factor of 2, but the T^4 dependence of the luminosity makes these uncertainties unacceptable. Below we discuss a second problem inherent to grid-based CE simulations that has an even worse impact on our attempt to calculate the light from the interaction.

§.§ The “vacuum" temperature problem

As explained in Sec. <ref>, in grid simulations the vacuum outside the star cannot be empty, because otherwise the star diffuses rapidly out <cit.>. To obviate this problem the vacuum is replaced with a very low density medium. The density is a factor of 10^-4 smaller than the lowest stellar density. In the case of the simulations presented in Section <ref> the density floor is 7 × 10^-12 g cm^-3. The low density medium is kept in pressure equilibrium with the star surface by having a high temperature (∼ 10^8 K). Such a high “vacuum" temperature would be optically thick, so CELMO has a minimum density below which the medium is considered completely optically thin.
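Per ray, the photosphere search described above reduces to accumulating Δτ = κ ρ Δx cell by cell while skipping cells below the density floor. A minimal sketch (function name ours, constant opacity in place of the table interpolation):

```python
def photosphere_index(rhos, kappas, dx, rho_floor=5e-10, tau_ph=2.0 / 3.0):
    """March along a ray (ordered from the observer inwards) and return the
    index of the cell whose far side first reaches tau >= 2/3, or None if
    the ray stays optically thin. Cells below the density floor are treated
    as perfectly transparent."""
    tau = 0.0
    for i, (rho, kappa) in enumerate(zip(rhos, kappas)):
        if rho < rho_floor:
            continue  # hot "vacuum": excluded from the optical depth
        tau += kappa * rho * dx
        if tau >= tau_ph:
            return i
    return None

# Example: 5 vacuum cells, then stellar gas contributing dtau = 0.1 per cell,
# so tau = 2/3 is first exceeded at the 7th stellar cell (index 11).
rhos = [1e-12] * 5 + [4e-8] * 20
kappas = [0.34] * 25
dx = 0.1 / (0.34 * 4e-8)
idx = photosphere_index(rhos, kappas, dx)
```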
This “density floor" has to be larger than the density floor in the Enzo simulation for the following reason. At the beginning of the CE simulation, low density, hydrodynamically unimportant “fingers" of stellar gas expand into the vacuum and their temperature is affected by the high “vacuum" temperature, so these low density features have unrealistically high temperatures and are therefore optically thick, artificially extending the photosphere. This problem disappears rapidly as more mass expands and dynamically overwhelms the tenuous vacuum medium. To circumvent this problem we keep the CELMO density floor at 5 × 10^-10 g cm^-3. This density floor would affect the computation of the light in later phases of the expansion, when the medium has expanded sufficiently to decrease in density. However, for the optically thick, early part of the interaction considered here, the exact value of the density floor has no effect on the determination of the photospheric location. A far greater problem is that the hot vacuum warms up any outer stellar layers that are adjacent to it and which have lower density. This can clearly be seen by comparing the density and temperature panels in both Figs. <ref> and <ref>. These outer layers are those where we seek to extract the value of the temperature, and we see that even if the resolution were higher, their temperature would be compromised by the hot vacuum.

§.§ Effective temperature determination from flux conservation

An alternative approach is to calculate the luminosity using the radiative flux across a thin layer located behind the overheated photosphere. In Fig. <ref>, left panel, we show a density cut at 75 days, along a line marked in Fig. <ref> (top row, middle panel). Along this line we read values of the density and opacity, and we calculated values of ∂ u_rad/∂ r. Values of ∂ u_rad/∂ r across the photosphere on the left hand side of the gas distribution are plotted in Fig. <ref>, right panel. The red symbols in Fig.
<ref>, right panel, are values of the gradient characterising the hot vacuum just to the left of the photosphere, while the blue symbols are values just inside the photosphere. The symbols are 3.4 R_⊙ apart, the resolution of the grid. As can be seen, the cells straddling the photosphere have quite a range of values of ∂ u_rad/∂ r. Just inside the photosphere the first ∼8-10 cells are affected by artificial heating, as can be seen in the temperature panel in Fig. <ref>. Values of the gradient in the cells immediately behind those are ∼10^-9 erg cm^-4. The value of the opacity at the corresponding depth is ∼100 cm^2 g^-1, while the density is ∼4 × 10^-8 g cm^-3. In order to check on the viability of this scheme we assume that the distribution of gas is spherical and adopt Eq. <ref> with a mean radius value calculated as the radius of the sphere that has the same volume as that contained by the photosphere at that time. This is 180 R_⊙ (see Fig. <ref>). With this radius we calculate an approximate luminosity of ∼1 L_⊙, much lower than even the initial stellar luminosity of 648 L_⊙, while we expect a value similar or larger. Aside from the uncertainty affecting the choice of the right values for gradient and opacity, the reason for this discrepancy must be related to the thermodynamic properties of the gas. At the location we sampled, the quantities reflect the convective envelope, where the flux is not transported by radiation, similar to what is shown in Fig. <ref> for time zero. This exercise is unlikely to provide us with a meaningful value of the overall luminosity of the gas distribution even if, instead of assuming spherical symmetry, we calculated the flux at that pixel, assumed that it characterises the photosphere and integrated to obtain the luminosity using the actual shape of the gas distribution.
§.§ Effective temperature determination from a stratified temperature distribution

A final attempt to resolve the problem of the determination of the effective temperature was made by calculating a stratified, one-dimensional atmosphere. This atmosphere could be effectively overlaid on the gas distribution and normalised at a location inside the photosphere, where we have confidence that the values of temperature and density are not affected by the vacuum temperature. By carrying out this exercise, however, we see that the choices are arbitrary and that the eventual value of the effective temperature has a severe uncertainty. This method assumes that the density in the outer parts of the star follows a stratification structure with the following decaying power law:

ρ(r) = ρ_0 (r_ph/r)^2,

where ρ_0 = ρ(r_ph) and r_ph is a grid point at the photosphere. Assuming that the exterior of the star behaves like an isentropic ideal gas with

P ρ^-γ = K,
P ρ^-1 = (R/M) T,

where P, ρ and T are the pressure, density and temperature of the fluid, respectively; M and R are the molar mass and universal gas constant, respectively, and K is a constant to be defined. Once again the adiabatic index is γ=5/3. From Eqs. <ref>, <ref> and <ref> we can derive an expression for the temperature within the outer stellar gaseous layers:

T = T_0 (r_ph/r)^{4/3},    T_0 = M K ρ_0^{2/3}/R.

In Fig. <ref>, left column, we plot the density profile from the simulations alongside the optical depth that is calculated using that density, the opacity tables and values of the temperature that are calculated using the two following methods. In the first method, for every ray and every time, we used the simulation data and selected a cell just inside the photosphere, where we estimated that the value of the temperature was not affected by the hot vacuum. For this cell, we then read the temperature and position values and used these values as {T_0, r_ph} in Eq.
<ref>, thereby calculating values of T for every value of r outside the location of r_ph. Alternatively, a second method was to use a set of values {T^*_0, r^*_ph, ρ^*_0} from the simulation at time zero to calculate K in Eq. <ref> and then use that value of K to calculate the temperature at other points and other times, anchoring Eq. <ref> at a point inside the photosphere at which we know the density, ρ_0, and the coordinates, r_ph. The first method (red line in Fig. <ref>, right column) assumes that the temperature of the simulation is accurate inside the star. However, given the high temperature vacuum, we selected only cells with a negative gradient of the temperature profile (in Fig. <ref> these points are at 0.37, 0.54 and 0.79 AU). On the other hand, the second method uses the density and the value of K, which is calculated with initial data only. Therefore, in this case we assume that the density is accurate inside the star and that the temperature of the atmosphere follows Eq. (<ref>). Finally, the values of density needed to calculate the optical depth can be taken from the data directly or, more self-consistently with the stratification method, using Eq. <ref>. We tested both cases. The results are similar. Therefore, in Fig. <ref> we present results obtained using the numerical values of the density only. As can be seen, the optical depth reaches a value of 2/3 at two different locations for the two methods. At those locations, the values of the temperature can be read from the right hand side column of Fig. <ref> and they are listed in Table <ref>.
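The stratification above in code form (a sketch; function name ours): given anchor values {T_0, ρ_0, r_ph}, the power laws fix ρ and T everywhere outside the anchor point.

```python
def stratified_atmosphere(r, r_ph, rho0, T0):
    """rho(r) = rho0 (r_ph/r)^2 and, for an isentropic ideal gas with
    gamma = 5/3, T(r) = T0 (r_ph/r)^(4/3); valid for r >= r_ph."""
    x = r_ph / r
    return rho0 * x**2, T0 * x**(4.0 / 3.0)

# At r = 2 r_ph the density drops by a factor 4 and the
# temperature by a factor 2^(4/3) ~ 2.52.
rho, T = stratified_atmosphere(2.0, 1.0, 4e-8, 8000.0)
```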
As is clear, these values vary greatly depending on the method followed, demonstrating that the values are arbitrary.

§.§ A provisional solution to determine the effective temperature

During our hydrodynamic simulations, the fluid starts in the optically thick regime, but as the gas expands it may become optically thin, at which point the photosphere and its temperature can be more easily determined (although in the part of the simulation presented in this paper, the fluid remains optically thick). Normally, the simulation begins with a single star model with known effective temperature and luminosity from the original 1D model. As the companion starts its infall through the primary star's gaseous layers the optically thick photosphere expands. As long as the gas distribution remains fully optically thick and the temperature of the photosphere is ill-defined (as explained in Sec. <ref>) we make the following approximation: the first opaque grid point is reassigned to be the τ=2/3 surface point with temperature T = T_eff, unless the temperature at the centre of the cell is lower than the effective temperature of the initial model, in which case we use the actual value: T = { T_eff if T ≥ T_eff; T if T < T_eff }. This implicitly assumes that at t>0 the temperature of the expanding photosphere decreases, something that, as we will see in Section <ref>, is not always the case. We also ensure that the combination of temperatures used does not lead to a total luminosity smaller than the initial stellar luminosity, since the stellar luminosity is provided by a thin shell resting on the core of the primary and the CE interactions we are modelling are not thought to alter the nuclear burning rate on short timescales.

For the short time over which the photosphere remains optically thick, using a constant value of the effective temperature is likely correct to better than a factor of two, since the temperature is regulated by the opacity and there is no time for radiative cooling.
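The provisional prescription above amounts to a simple per-cell rule at the τ=2/3 surface. A minimal sketch (the helper name is a hypothetical, not the published implementation):

```python
def photospheric_temperature(T_cell, T_eff):
    """Provisional rule for the optically thick photosphere: reassign the
    first opaque cell to the tau = 2/3 surface at the initial effective
    temperature T_eff, unless the local temperature has already dropped
    below that value, in which case the local value is kept."""
    return T_eff if T_cell >= T_eff else T_cell
```

In other words, the photospheric temperature is capped at T_eff from above but allowed to fall below it, consistent with the assumption that the expanding photosphere cools.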
However, this is still a problematic assumption and the single largest challenge in determining the light from this type of simulation. We discuss this further in Section <ref>.

§ RESULTS: TOWARDS CALCULATING THE LIGHT CURVE OF A COMMON ENVELOPE SIMULATION

In this section we show the light curve for a well studied CE evolution simulation: Enzo2 from P12, which we have carried out with a computational domain four times as large and the same resolution, as explained in Sec. 2. In Fig. <ref> we show the bolometric light as seen by an observer located along three orthogonal directions, parallel to the x, y and z axes, while in Fig. <ref> we show the volume-equivalent radius of the photosphere. We emphasise that the values of the temperature are almost always those of the initial T_eff of the star (3200 K, see Table 1 in the Appendix) because the values of the photospheric temperature almost never drop below this value during the early, optically-thick photospheric expansion. This effectively means that we are assuming a constant temperature photosphere. In Fig. <ref> we show density slices both on the orbital and perpendicular planes. During the entire CE evolution, the model remains effectively optically thick. In addition, as soon as stellar material leaves the domain the photosphere is effectively lost. Despite our calculation with a larger computational domain, this happens at approximately 135 days, which is short of the ∼200 days taken by the fast in-fall phase and even shorter than the ∼1000 days of the entire simulation run of P12.

Throughout this entire CE simulation, the photospheric density coincides with the density floor. However, lowering this floor further does not change the light output because of the steep density gradient at the photosphere. A very small difference might be found toward the end of the simulation time. As can be seen in Fig. <ref>, at 75 and 135 days, some material extends past the photosphere.
This gas has a density intermediate between the and density floors and could be optically thick, thereby extending the photospheric area slightly. However, in our simulation this very low density stellar gas has a temperature that is greatly increased by the low density, high temperature vacuum medium and therefore cannot be studied.

In Fig. <ref> we show the I band luminosity of one of the two perpendicular views, setting the object at 1 kpc and including no reddening. The calibration values to derive the magnitudes from the luminosities are those found in Appendix <ref>. The V-I colour of the initial model is 1.98 and would mostly not change over time since the photospheric temperature is effectively constant. The initial rise of the I band luminosity is almost 3 magnitudes in 135 days. The total radiated outburst energies between the beginning of the simulations and 135 days are 3.7 × 10^43, 3.5 × 10^43 and 9.6 × 10^43 erg for the x, y and z directions, respectively. We compare these energies with the total energy in the AGB star at the start of the simulation (namely its thermal, kinetic and potential energies) which is ∼ -2 × 10^46 erg. This validates the adiabatic approach for the short timescale we have simulated here.

§ GUIDANCE FROM OBSERVATIONS

§.§ Comparisons with transients

At least three transients have been credibly identified as CE interactions, primarily because their progenitors were observed: V1309 Sco <cit.>, M101-OT <cit.>, and M31-2015 LRN <cit.>. Due to similar light and spectral characteristics after the outburst, other transients, such as V 838 Mon <cit.> or NGC4490-OT <cit.>, have been suggested as having a similar origin. Here we carry out a comparative discussion of those aspects of the observations that today or in the near future will be the most useful to constrain CE simulations. We also highlight those aspects of the simulations that will be best constrained by observations.
We concentrate on M31-2015LRN for which <cit.> have extracted system parameters from their observations. V 1309 Sco is in a way the system that is the closest to our simulations, in light of its low mass, a ∼1.5 subgiant interacting and merging with a ∼0.15 companion <cit.>. M31-2015LRN (and likely V 838 Mon) is possibly next, in terms of the system's mass: a 3-5.5 primary interacted with a 0.1-0.6 companion and is thought to have undergone a CE merger event. M101-OT was instead thought to come from the interaction between an 18- primary and a 1- companion, with NGC4490-OT being even more massive, though an actual value of the mass could not be derived <cit.>. The peak absolute luminosities of these outbursts are listed in Table <ref>, alongside their lightcurve behaviour.

From observations of transients, quantities such as the evolution of the photospheric radius, temperature and luminosity, as well as ejected masses, velocities and timescales of the various phases can be determined, subject to some uncertainties such as on distance and reddening. <cit.> used photometry of the M31-2015LRN outburst to deduce that the photospheric radius increased between 200 and 400 before peak brightness and then to 2000 in the next 30 days. The photospheric expansion velocity was measured to be 360 km s^-1 from spectroscopy. They also determined that the photospheric temperature increased between ∼5000 K and ∼7000 K during the rise to peak (or ∼6500 K to ∼11500 K for the highest possible reddening value), followed by a steady decrease to ∼3000 K in the next 50 days. Overall, the bolometric luminosity increased between 10^38 and 2 × 10^39 erg s^-1 during the rise (2 × 10^38 and 7 × 10^39 erg s^-1 for the highest reddening value) and declined to ∼5 × 10^38 erg s^-1 in the next 50 days. From our simulations the most reliable quantity is the photospheric radius evolution over 135 days: the volume-equivalent radius (Fig.
<ref>) increased between 85 and 250, recalling that this simulated radius may not have reached its maximum extension. Subject to the caveat of the uncertain temperature (Section <ref>), the simulated bolometric luminosity goes from 648 to 14 000 (x-direction) in 135 days (10^36 to 3 × 10^37 erg s^-1). Our progenitor's absolute magnitudes are M_I,prog = -2.9 and M_V,prog = -0.9 (using the average V-I value calculated above), while at 135 days we measure M_I,135d = -6.8 and, using the same colour correction, we obtain a value of M_V,135d = -4.8. Lacking at present the ability for a direct comparison, we can however still place the simulated values of M_V and M_I for the progenitor and for the expanded star on the mass vs. absolute magnitude plot of <cit.>, who showed that the more massive systems have brighter outbursts. As pointed out by <cit.>, this, and the fact that more massive progenitors have longer outbursts, could be explained by the fact that more massive progenitors have more kinetic energy and angular momentum and longer radiation diffusion times. Using the fits of <cit.>, we would expect a 0.9 progenitor to have M_V,prog = 4.7 and M_I,prog = 3.5. Our progenitor is brighter and redder than predicted by the fit of <cit.>, likely because our star is evolved, while the fitted data are for unevolved stars. In fact OGLE-2002-BLG-360, also plotted by them, but not fitted, is a more evolved star and is indeed brighter. Their fits would predict that a 0.9 unevolved star would have an outburst with peak brightness M_I,peak = -5.3 and M_V,peak = -3.4. Comparing their predicted I band with ours (Table 1) shows that our magnitude is at least 1.5 mag brighter, though the V-I colours are similar. This is at this stage acceptable in view of the many uncertainties.
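For reference, the absolute bolometric magnitudes quoted here follow from luminosities through the standard relation anchored to the solar calibration of Appendix <ref> (M_⊙,bol = 4.75). A minimal sketch; the numerical value of the solar luminosity is an extra assumption, not taken from the paper:

```python
import math

M_BOL_SUN = 4.75    # solar bolometric magnitude, as in the appendix
L_SUN = 3.828e33    # erg/s; IAU nominal solar luminosity (assumption)

def bolometric_magnitude(L):
    """Absolute bolometric magnitude for a luminosity L in erg/s."""
    return M_BOL_SUN - 2.5 * math.log10(L / L_SUN)
```

A factor of 100 in luminosity corresponds to exactly 5 magnitudes, so the ∼3 magnitude rise of the I band quoted in Section <ref> corresponds to a brightening by a factor of ∼15.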
§.§ Constraints from Mira giants

The temperature of the photosphere and the luminosities are not well constrained; as explained, the temperature is effectively kept constant at the value of the progenitor's effective temperature. Mira variables are AGB giants with characteristics similar to the stars we have considered in our simulation. They expand due to pulsations on timescales similar to the expansion timescales considered here and in so doing their radius changes similarly to the radial expansion considered here. In models of o Ceti <cit.>, the effective temperature of the photosphere changes between 3800 K and 2200 K during half a pulsation cycle of 330 days (i.e., 165 days). During this time the radius expands by a factor of 2.3. This is similar to what was found for other Mira stars. Our calculation over 135 days sees an approximate radial expansion by a similar factor. Such a decrease in temperature would give a reduction in luminosity by a factor of ∼10 compared to the values we have estimated.

On the other hand, the expanding photosphere may not initially cool. In the case of M31-2015LRN, the expanding photosphere was initially heated by shocks, instead of cooling adiabatically by expansion, and only later cooled. Therefore, observations caution us that assuming that the photosphere initially cools by expansion may be misguided.

§.§ Recombination energy as an agent in the common envelope ejection

Another related issue of fundamental importance is that of the energy released upon recombination of hydrogen and helium. The release of recombination energy as light may explain the plateau in certain types of Type II supernovae <cit.>. <cit.> argue that the plateau in the light curve of the transient M31-2015LRN after the maximum is due to such energy release.
However, <cit.> and <cit.> argued that during the common envelope expansion, recombination energy is released at such high physical depth that the optical depth should also be large, making the energy released there entirely available to generate pressure that results in the expulsion of the CE. At such depth, they argued, even the dramatic decrease in opacity of recombined gas may not be sufficient to liberate the energy as light on short timescales, and it is therefore available to do work. This is a very important point that needs a resolution: if recombination energy does not escape, then it must be included in CE simulations, but if part or all of it escapes, then CE simulations that include recombination energy and that are run in the adiabatic approximation will overestimate the ejected mass and ejecta speeds, and produce unphysical in-spiral behaviours <cit.>. Observations such as those listed here, and particularly the presence or absence of a plateau in the lightcurve, may point to an observational constraint on how recombination energy is transformed in the star.

§.§ What happens just before the common envelope in-spiral?

Another aspect of the interaction where observations will provide us with a quantitative constraint concerns what precedes the fast in-spiral. <cit.> suggested that there can be two pre-in-spiral scenarios with distinct observational characteristics. The first, which they apply to M31-2015LRN, is a fast (days) pre-in-spiral phase where a secularly stable orbit is destabilised by the Darwin instability <cit.>. This leads to Roche lobe overflow and the CE in-spiral in quick succession. This phase is characterised by an early ejection of a low-mass, but optically thick, shell that is observed as an expanding photosphere. At the same time the photospheric temperature increases as this gas is shock-heated by the early in-spiral. This shell is ejected with speeds above escape velocity.
Right after this ejection the full photosphere expands and cools, driven by orbital energy deposited during the in-spiral. The second scenario is a slower one: after Roche lobe overflow the mass transfer remains stable and leads to an outflow from the second Lagrangian point at lower ejection speed (25% of escape speed). The expanding gas creates a wall of material with which the subsequent expansion phase, driven by the in-spiral, will collide. The difference between these two scenarios is key to understanding when a CE is avoided.

The simulation presented in P12 and this paper cannot help choose between the two scenarios because the companion is placed on the surface of the giant at the start of the simulations and the primary therefore already well exceeds its own Roche lobe radius. However, one of the SPH simulations presented by <cit.> may afford a better comparison. That simulation is identical to that of P12 analysed here, but it started with a wider orbital separation, with the primary at Roche lobe contact. The stable mass transfer phase, preceding the fast CE in-spiral, lasts a decade, but this is likely a lower limit <cit.>. During this Roche lobe overflow phase, mass is ejected from the second Lagrangian point (L2, on the side of the companion) with speeds of 100-150 km s^-1, which is above the local escape speed of ∼60 km s^-1, while at the third Lagrangian point (L3, on the side of the primary), gas is being ejected with speeds of ∼40 km s^-1, which is similar to the local escape speed of ∼50 km s^-1. The expansion of the photosphere measured here is between 85 and 250 and lasts 135 days, implying an expansion speed of ∼9 km s^-1, lower than the escape speed and lower than the speed of the ejecta seen emerging from L2 and L3.
Once again it is difficult to make quantitative comparisons at this time, but as these values become refined, they will be those that allow us to discriminate what happens before the CE in-spiral and how this phase affects the CE proper and the post-CE parameters.

CE interaction and merger observations assume that the expanding primary was caught in a CE right after the main sequence as the star commenced its journey towards the red giant branch. However, statistically, the rate of interactions involving more evolved giants should be larger, because there are more companions at larger separations <cit.>. As more transients and their progenitors are observed, we will know whether the apparent overabundance of sub-giant branch mergers is due to their being particularly bright or simply to our current uncertainty on the nature of the progenitors. If the former, an explanation could be that the CEs we observe are those with a more bound envelope (more massive and compact) where the companion penetrates deeper and tends to merge more readily, releasing more energy.

§.§ Dust formation during common envelope expansion

Finally, observations tell us that dust will have to be considered in the calculation of the light, as it may deeply affect the lightcurve. As early as during the first few days of the expansion, dust could be produced, as seen in V 1309 Sco <cit.>, which brightened over a period of approximately 5 years and then suddenly dimmed by about a magnitude (in the I band) just before the outburst. <cit.> obtained an IR spectrum approximately a year after the optical outburst peak and determined that a substantial amount of dust must have condensed in this object, though a determination of the dust mass was impossible without knowledge of the dust geometry. This system was known to be a close-to-edge-on contact binary. We conjecture that the dust forms in a disk along the equator. This is in line with the large dust grain size measured by <cit.> that necessitates a disk environment.
Additionally, <cit.> measured an elongated dusty environment, interpreted as a disk, in V838 Mon. We could therefore conjecture that while dust formation during the CE in-spiral is possible, indeed likely, it may primarily influence the light as seen along the orbital plane, leaving perpendicular viewpoints relatively unobstructed.

§ SUMMARY AND FUTURE WORK

In this article, we presented a computational code to post-process the luminosity for hydrodynamic simulations. It is currently designed to compute the light-curve for CE simulations performed with the code in unigrid mode. We presented a first attempt at calculating the light-curve for one of the CE simulations presented by P12, comprising an ∼80-, 0.88- RGB star with a 0.6- companion.

The computation of the light from CE interactions is paramount if we are to use current and upcoming observations to constrain simulations. Our attempt cannot at this time be considered satisfactory, because we have maintained the photospheric temperature constant over the 135 days of the simulation. With this effort we have, however, elucidated and quantified the main issues with the computation. This is a fundamental step before a solution can be determined. Below we summarise these shortcomings of the current attempt and discuss possible avenues towards a solution, some of which we will explore in future papers in this series.

The limitations of our approach can be divided into two groups. The first group includes limitations inherited from the 3D hydrodynamic computation itself, while the second group comprises physical limitations due to simplifications in the light calculation. The main computational limitations are the impossibility of resolving the value of the photospheric temperature during the optically thick phase of the expansion, and, even more importantly, the heating of the outer layers by the hot “vacuum". These make it impossible to read a photospheric temperature value directly from the simulation.
According to the values of opacity and density of our models, we would be able to properly resolve the photospheric temperature only with a cell size smaller than 0.004 (cf. with 3.4 for our simulation reproducing Enzo2 of P12). The best resolution attained to date by the simulations of Ohlmann et al. (2016) is 0.01 at the centre of the domain. Hence, this is at the edge of our capabilities even with an adaptive mesh refinement (AMR) code, as it would require between 8 and 9 levels of AMR refinement over the large volume occupied by the photosphere. Even with higher resolution, however, the problem of the external artificial heating remains.

None of the techniques we have tried to reduce the uncertainty on the effective temperature has proven satisfactory. Aside from a massive improvement in the resolution of the original calculation, which is not within immediate reach, we are exploring a way to interface with rage <cit.> in order to exploit the latter code's radiation capabilities, while using information from the hydrodynamic computation.

Another computational limitation, one of the largest issues with hydrodynamic computations of stars using grid codes, is the finite size of the domain, which allowed us to compute the light for only 135 days of the CE simulation of P12. This problem will be greatly alleviated by using the AMR version of <cit.>, which will allow us to maintain resolution with a much larger box. Our smoothed particle hydrodynamics simulations <cit.> could also be used.
They do not have a hot vacuum and have no computational domain limits, but at present the resolution near the photosphere is even lower than for the grid simulations due to the low density in those regions and to the computational times that at present limit the resolution to approximately a million particles.

Once the technical issues listed above have been resolved, we will have to contend with physical ones. During the initial infall the timescale is dynamical. This is the reason why simulations of this phase can avoid the inclusion of much of the physics that would dominate over longer timescales. Radiative cooling is not expected on such short timescales (as also confirmed by the fact that the total energy radiated over the initial 75 days of the interaction is much less than the initial energy of the envelope) and the thermodynamic properties of the gas should be well represented by our adiabatic calculation. Yet, later on, we can expect cooling, gas recombination and the formation of molecules and of dust. This will alter the opacities and necessitate the treatment of radiation transport. However, we note that predicting the initial lightcurve rise may not necessitate the treatment of these processes.

§ ACKNOWLEDGMENTS

It is a pleasure to thank Juhan Frank and Mark Wardle for valuable discussions and comments on the manuscript. Peter Wood is thanked for discussions regarding the properties of Mira stars. An anonymous referee is thanked for extensive comments that significantly improved the maturity of this manuscript. Thomas Reichardt is thanked for sharing some details of his upcoming publications regarding ejection speed of mass from CE interactions. This work was supported by the Australian Research Council Future Fellowship (FT120100452) and Discovery Project (DP12013337) programmes. This research was undertaken on the NCI National Facility in Canberra, Australia, which is supported by the Australian Commonwealth Government.
This work was performed on the swinSTAR supercomputer at Swinburne University of Technology. Computations described in this work were performed using the Enzo code[http://enzo-project.org], which is the product of a collaborative effort of scientists at many universities and national laboratories. JCP thanks the Alexander von Humboldt-Stiftung and Norbert Langer for their support.

§ THE FORMULATION

Each numerical volume element, or cell, in our simulation has a temperature T(x,y,z) and density ρ(x,y,z). If we observe the computational domain from the positive direction of the ẑ axis, with the coordinate system origin in the centre of the numerical domain (see Fig. <ref>), we can then divide the cube into columns of area a = Δx Δy and infinitesimal depth of width dz. The surface brightness of each slice is: B(ν,r⃗;T) = N(ν,z;T) hν/(a ω) dz/Δl(x,y), in units of erg s^-1 cm^-2 Hz^-1 sr^-1, where ω represents the solid angle subtended by the area a at the observer, and N denotes the number of photons of frequency ν emitted from the surface of area a. Each photon has energy hν and an associated temperature T. The fraction dz/Δl(x,y) denotes the proportion dz of the total length of the column Δl(x,y), which is equal to l(x,y)_max - l(x,y)_min (see Fig. <ref>). The specific intensity: I(ν;T) = N'(ν;T) hν/(A Ω), (with the same units as Equation <ref>), is defined by the number of photons N' with energy hν detected at a surface of area A arriving from solid angle Ω. The number of photons arriving at the detection surface is related to the number of photons emitted by: N'(ν) = ∫_l_min^l_max N(ν,z) e^-τ dz, where: τ := ∫_z^l_max κ(x,y,ξ) ρ(x,y,ξ) dξ, where κ and ρ are the opacity and density of the medium, respectively, and τ is the optical depth. The exponential factor takes into account the dispersion of photons between z and the surface of the volume l_max.
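The discrete counterpart of the optical-depth integral above is a suffix sum along each column. A minimal sketch (the helper name, and the convention of excluding the emitting cell from its own optical depth, are assumptions rather than the published implementation):

```python
import numpy as np

def extinction_along_column(kappa, rho, dz):
    """Discrete optical depth tau_k = sum_{j > k} kappa_j rho_j dz from
    each cell to the top of the column (the observer side, at l_max),
    together with the corresponding extinction factor exp(-tau_k)."""
    dtau = np.asarray(kappa) * np.asarray(rho) * dz   # per-cell contribution
    suffix = np.cumsum(dtau[::-1])[::-1]              # inclusive suffix sums
    tau = np.append(suffix[1:], 0.0)                  # exclude the cell itself
    return tau, np.exp(-tau)
```

For a uniform column the topmost cell sees τ = 0 (no extinction), while deeper cells are attenuated by the material above them, as required by the integral definition of τ.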
The integration in z gives us the total contribution of the column. Assuming blackbody radiation, the surface brightness of one slice in our domain is: B^*(ν;T)_ijk = (2hν^3/c^2) 1/(e^{hν/kT_ijk} - 1), where i,j,k are the x,y,z discrete indices of the numerical domain. The specific intensity in the respective column is: I(ν;T)_ij = ∑_k B^*(ν;T)_ijk e^{-∫_{z_ijk}^{l_max} κ(ξ)ρ(ξ) dξ}. The radiation flux crossing the detection surface is given by: S(ν;T)_ij = ∫_0^{2π} ∫_0^{π/2} I(ν;T)_ij cosθ sinθ dθ dϕ = π I(ν;T)_ij, where we have assumed that the observation distance z_obs >> max(Δl(x,y)) and isotropic emission of each column. The flux density ℱ is given by: ℱ_ij = ∫ f(ν) S(ν)_ij dν, in units of erg s^-1 cm^-2, where f(ν) represents a filter (see Sec. <ref>). The luminosity is therefore: L = ∑_i ∑_j ℱ_ij Δx Δy.

§.§ Convolution with filter band passes

Lightcurve observations are performed in specific spectral bands. Assuming homogeneous emission at all frequencies, each volume element should radiate in the full spectrum. Therefore, we can convolve the brightness of each slice with the filter function. The energy flux density is: ℱ_filt(T) = (2π k^4 T^4/(c^3 h^3)) ∫_0^∞ f(χ) χ^3/(e^χ - 1) dχ = (15σ/π^5) T^4 ∫_0^∞ f(χ) χ^3/(e^χ - 1) dχ, where χ = hν/kT, and h, k, c are the Planck constant, the Boltzmann constant and the speed of light, respectively. The bolometric flux density is recovered if f(χ) = 1, or: ℱ_bol(T) = σ T^4. For a star with effective temperature T_eff, the bolometric luminosity is L_bol = 4π R_star^2 σ T_eff^4. The effect of the filter is encoded in the factor f_filt(T) = (15/π^5) ∫_0^∞ f(χ;T) χ^3/(e^χ - 1) dχ. Therefore, the energy flux density ℱ_filt(T) takes the form ℱ_filt(T) = ℱ_bol(T) f_filt(T). We integrate Equation (<ref>) numerically using the SciPy[www.scipy.org] routine of Simpson's rule. To compute the magnitude, we use the bolometric magnitude, V-band magnitude and V-I colour of the Sun, M_⊙,bol = 4.75, V_⊙ = -26.74[nssdc.gsfc.nasa.gov/planetary/factsheet/sunfact.html] and (V-I)_⊙ = 0.701 <cit.>, respectively.
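The filter factor above reduces, up to its normalisation, to a one-dimensional dimensionless integral, which the text evaluates with SciPy's Simpson's rule. A minimal sketch of that quadrature (function name, truncation point and grid size are assumptions); for f(χ) = 1 it recovers the known value ∫_0^∞ χ^3/(e^χ - 1) dχ = π^4/15:

```python
import numpy as np
from scipy.integrate import simpson

def band_integral(f, chi_max=60.0, n=20001):
    """Evaluate int_0^inf f(chi) chi**3 / (exp(chi) - 1) dchi with
    Simpson's rule on a truncated uniform grid. Truncating at chi_max
    is safe because the integrand decays like chi**3 * exp(-chi)."""
    chi = np.linspace(1e-9, chi_max, n)   # avoid the removable point chi = 0
    integrand = f(chi) * chi**3 / np.expm1(chi)
    return simpson(integrand, x=chi)
```

Multiplying by the 15/π^5 prefactor and by ℱ_bol(T) = σT^4 then gives the in-band flux density for any tabulated filter function f(χ).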
Additionally, we use our filters to compute the solar luminosity in the I and V bands using a blackbody curve with T_⊙ = 5778 K.

§.§ Numerical implementation

 is written as a set of Python classes. We use the SciPy library for integration and interpolation and NumPy for general mathematical operations. The purposes of the classes are the interpolation of the opacity, the convolution of the luminosity with filter functions and the main calculation of the luminosity described in the previous section. Additionally, we use a set of Python scripts to post-process the data.  uses the <cit.> interface to read data from . We use the vtk <cit.> data format for 3D output and ASCII format for 1D and some of the 2D quantities.  requires the density field and either the temperature or the internal energy (see <ref>). Additionally, it is necessary to specify the effective temperature of the initial model T_eff^0. The output fields are: the opacity κ, the optical depth in six directions (±x, ±y, ±z), the specific intensity, the energy flux density in each direction, the luminosity in each direction and the volume inside the τ=2/3 surface. We use MPI4py <cit.> to take advantage of multi-core processors. It is straightforward to calculate the luminosity in a distributed computation scheme since the necessary quantities are located independently in the fluid columns.

§ CODE VERIFICATION

In this appendix we use two simple models which are suitable for comparison with analytic expressions. The easiest case is for a fluid with constant opacity and density.

§.§.§ Test I. Rectangular cuboid under optically thin conditions

Fig. <ref> shows a test case for an optically thin fluid. The location of the τ=2/3 surface is well resolved for the test combination of opacity, density and grid size. Similarly, the extinction factor has a smooth transition between the transparent region (e^-τ = 1) and the opaque one (e^-τ = 0). On the other hand, Fig.
<ref> shows the numerical solution for an optically thick fluid (see Section <ref>). The only difference between these two tests is the scale of the y-axis. In the latter case the τ=2/3 surface is not properly resolved (although the plot looks similar, the y-scale is six orders of magnitude larger). The exponential factor for the optically thick case is a step function: the fluid goes from transparent to opaque from one grid point to the next.

Let us consider a cuboid Δx × Δy × Δz with a fluid at constant temperature T_0, density ρ_0 and opacity κ_0. The specific intensity in the z direction is: I(ν;T_0) = (1/Δl(x,y)) ∫_{-Δz/2}^{Δz/2} B^* e^{-κ_0ρ_0(Δz/2 - z)} dz = B^*/(ρ_0 κ_0 Δz) (1 - e^{-κ_0ρ_0Δz}), where Δl is the total length of the domain and B^* is the surface brightness. The flux density then is: ℱ(x,y) = σT_0^4/(ρ_0 κ_0 Δz) (1 - e^{-κ_0ρ_0Δz}), and the luminosity in the z direction is: L_z = Δx Δy σT_0^4/(ρ_0 κ_0 Δz) (1 - e^{-κ_0ρ_0Δz}). Similarly, we obtain the luminosity for the x and y directions.

Using the following parameters: κ_0 = 0.3341 cm^2/gr, ρ_0 = 10^-4 gr/cm^3, T_0 = 10^7 K, Δx = 3 × 10^5 cm, Δy = 4 × 10^5 cm, Δz = 5 × 10^5 cm, we compare the analytic and numerical results. We have already shown in Fig. <ref> the optical depth and the extinction factor together with the analytical curves. Fig. <ref> shows the relative error in the luminosity as a function of resolution for the three observation directions. Note that, since the size of the cuboid is different in each direction, the corresponding luminosity changes depending on the direction. We employed 1/N as the convergence parameter, which is proportional to the characteristic grid resolution h. The plot shows a linear, 𝒪(h), convergence. The relative error in the luminosity is smaller than 1.6% for each grid size.

§.§.§ Test II. A sphere under optically thin conditions

In the case of the cuboid the flux density does not depend on the normal coordinates. However, that is not the case for a sphere. Let us consider a static sphere of radius R_0, temperature T_0, constant opacity κ_0 and constant density ρ_0.
Consider a volume element in cylindrical coordinates. The parametric equation of a sphere in cylindrical coordinates {r,θ,z} is: r⃗ = r [cos(θ) x̂ + sin(θ) ŷ] + z ẑ, 0 ≤ r ≤ √(R^2-z^2), |z| ≤ R, where R is the radius of the sphere. The specific intensity in the z direction is I(ν,r;T_0) = B^*/(2√(R^2-r^2)) ∫_{-√(R^2-r^2)}^{√(R^2-r^2)} e^{-κ_0ρ_0(√(R^2-r^2) - z)} dz = B^*/(2ρ_0κ_0√(R^2-r^2)) (1 - e^{-2κ_0ρ_0√(R^2-r^2)}), the flux density is ℱ(x,y) = σT_0^4/(2ρ_0κ_0√(R^2-r^2)) (1 - e^{-2κ_0ρ_0√(R^2-r^2)}), and the luminosity is L = (2πσT_0^4/(ρ_0κ_0)) ∫_0^R (1 - e^{-2κ_0ρ_0√(R^2-r^2)})/(2√(R^2-r^2)) r dr = πσT_0^4 (e^{-2κ_0ρ_0 R} + 2κ_0ρ_0 R - 1)/(2κ_0^2ρ_0^2).

Using the same parameters as above and R_0 = 1.25 × 10^5 cm, we compare the analytic and numerical results (Fig. <ref>). We observe a good correspondence between the numerical result and the analytic one. In this case, the relative error is smaller than 4.5%.

§.§.§ Test III. Rectangular cuboid under optically thick conditions

The difference between the test presented here and those in Secs. <ref> and <ref> is the physical scale. A change in the scale induces a situation where the photosphere is not resolved and requires an approximation in order to overcome this lack of resolution. This test is similar to the one presented in <ref>. The only change is the size of the box, which is now Δx = 2, Δy = 1 and Δz = 0.5. For an optically thick cuboid the luminosity in the z direction is L_z = Δx Δy σ T_0^4. Similarly, we obtain the luminosity for the x and y directions. Fig. <ref> shows the optical depth and the extinction factor together with the analytical curves. The convergence is presented in Fig. <ref>. Note that in this case the error depends exclusively on the grid size, independently of the direction. However, the relative errors are larger when comparing to Test I. In particular, the relative error is in the range [0.5%, 3.5%].

§.§.§ Test IV. A sphere under optically thick conditions

This is the equivalent of Test II (Sec. <ref>), but with a much larger size of the sphere: R_0 = 0.25.
The luminosity radiated by half a solid sphere of constant temperature T_0 and radius R_0 is L = 2π R_0^2 σ T_0^4. Fig. <ref> shows the corresponding convergence test. The convergence in this case deviates from linear convergence. The relative errors are the smallest when comparing with the previous tests. In Section <ref> we will show that for this case the error depends quadratically on the grid resolution.

§.§.§ Stellar models

Although it is possible to compute the luminosity for an arbitrary fluid distribution,  is designed to work with the hydrodynamic evolution of stars. Here we compute the luminosity of two stars calculated by the 1D code and mapped onto the grid using 128^3 and 256^3 cell resolutions. We use two profiles in order to test the computation of the initial luminosity, a smaller red-giant-branch (RGB) star and a larger asymptotic-giant-branch (AGB) star whose parameters are listed in Table <ref>. Both of these stars are in the optically thick regime. Therefore, the results are similar to the optically thick solid sphere (Sec. <ref>). The stars cover most of the numerical domain. In Table <ref> we see how the error is reduced with increasing resolution. This serves as an additional verification test and gives a measure of the uncertainty on the luminosity when there is no uncertainty in temperature. This is effectively dominated by locating the photosphere, which in turn is uncertain by at most a resolution element.

The approximate error in the calculation of the luminosity is given by the errors on the temperature (ΔT) and on the radius (ΔR_*): ΔL = (∂L/∂R) ΔR + (∂L/∂T) ΔT. For a star of radius R_*, effective temperature T_eff and luminosity L_* = 4πσ R_*^2 T_eff^4, the relative error is: ΔL_*/L_* = 2ΔR/R_* + 4ΔT/T_eff. For a numerical grid with Δx, Δy and Δz, the radius R_phot of our initial model is located in a cell defined by (x_i, y_j, z_k) and (x_{i+1}, y_{j+1}, z_{k+1}). Therefore, the difference between the radius of the star and the numerical one is ΔR = |R_* - R_ijk| and satisfies the inequality ΔR ≤ √((Δx)^2 + (Δy)^2 + (Δz)^2).
On the other hand, ΔT = 0, since we initially assign the effective temperature T_eff to the grid points in the photosphere. The estimates of the errors in Table <ref> are calculated under these assumptions.
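The error budget above reduces to a one-line bound once ΔR is replaced by the cell diagonal. A minimal sketch, with purely illustrative numbers (the stellar parameters are assumptions, not the models in Table <ref>):

```python
import math

def rel_lum_error(R_star, T_eff, dx, dy, dz, dT=0.0):
    # Delta L / L = 2*DeltaR/R + 4*DeltaT/T, with DeltaR bounded
    # by the grid-cell diagonal sqrt(dx^2 + dy^2 + dz^2)
    dR = math.sqrt(dx * dx + dy * dy + dz * dz)
    return 2 * dR / R_star + 4 * dT / T_eff

# illustrative only: star of radius 100 (grid units), T_eff = 4000 K, unit cells
print(rel_lum_error(R_star=100.0, T_eff=4000.0, dx=1.0, dy=1.0, dz=1.0))
```

With ΔT = 0, halving the grid spacing halves the bound, consistent with the roughly linear convergence of the resolved-photosphere tests.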
Pablo Galaviz, Orsola De Marco, Jean-Claude Passy, Jan E. Staff, Roberto Iaconi, "Common envelope light-curves - I. grid-code module calibration", arXiv:1702.07872 [astro-ph.SR] (2017).
A Selfie is Worth a Thousand Words: Mining Personal Patterns behind User Selfie-posting Behaviours Tianlang Chen, Computer Science, University of Rochester, Rochester, NY 14627, tchen45@cs.rochester.edu; Yuxiao Chen, Computer Science, University of Rochester, Rochester, NY 14627, ychen211@cs.rochester.edu; Jiebo Luo, Computer Science, University of Rochester, Rochester, NY 14627, jluo@cs.rochester.edu

December 30, 2023
=======================================================================================================================================================================================================================================================================================================================================================================================================================

Selfies have become increasingly fashionable in the social media era. People are willing to share their selfies on various social media platforms such as Facebook, Instagram and Flickr. The popularity of selfies has caught researchers' attention, especially psychologists'. In the computer vision and machine learning communities, however, little attention has been paid to this phenomenon as a valuable data source. In this paper, we focus on exploring the deeper personal patterns behind people's different kinds of selfie-posting behaviours. We develop this work based on a dataset from WeChat, one of the most extensively used instant messaging platforms in China. In particular, we first propose an unsupervised approach to classify the images posted by users. Based on the classification result, we construct three types of user-level features that reflect user preference, activity and posting habit. Using these features, for a series of selfie-related tasks, we build classifiers that accurately separate two sets of users with opposite selfie-posting behaviours. We find that people's interests, activities and posting habits have a great influence on their selfie-posting behaviours.
Taking selfie frequency as an example, the classification accuracy between selfie-posting addicts and non-addicts reaches 89.36%. We also show that using users' image information to predict these behaviours achieves better performance than using text information. More importantly, for each set of users with a specific selfie-posting behaviour, we extract and visualize significant personal patterns. In addition, to construct a concise relation between personal patterns and selfie-posting behaviour, we cluster users and extract their high-level attributes, revealing the correlation between these attributes and users' selfie-posting behaviours. Finally, we demonstrate that users' selfie-posting behaviour is itself a good predictor of their preferences toward these high-level attributes.§ INTRODUCTION Popular social media platforms, such as Twitter, Facebook and Instagram, provide their users with a high degree of freedom to express opinions, post interesting images and share immediate news. With the popularity of these platforms among a wide range of users, large amounts of useful information can be mined. Therefore, social media data mining, as a developing research field, has received much attention, and a wide spectrum of topics has been studied. For example, in the area of user trait prediction, advanced models have been built to predict user information such as age <cit.><cit.>, gender <cit.><cit.>, personality <cit.>, interest <cit.> and occupation <cit.>. Social media data mining has also been applied to a number of meaningful applications, such as disaster management and relief <cit.><cit.>, infectious disease diffusion analysis <cit.>, election prediction <cit.><cit.> and street walkability measurement <cit.>. Selfie-posting behaviour analysis is a less studied topic in computer science.
A selfie is a self-portrait photograph, typically taken with a digital camera or camera phone held in the hand or supported by a selfie stick[https://en.wikipedia.org/wiki/Selfie]. With the popularity of all kinds of social networking services, more and more people choose to post their selfies to their Facebook, Instagram, Twitter and Flickr homepages. They post this type of photo for different reasons, such as recording important events or expressing their moods. Different users have different selfie-posting behaviours, which cover comprehensive aspects of selfie-posting information, such as selfie-posting frequency, background selection, posing with or without other people, and special selfie gestures and preferences. Uncovering the personal patterns behind such diverse information is the motivation of our work. We explore the possibility of predicting whether a person has a specific selfie-posting behaviour, from which we attempt to extract user interest, preference, activity and posting-habit patterns. The overview of our work is shown in Figure <ref>. Our work is based on WeChat Moment. WeChat, developed by Tencent, a leading Internet company in China, is currently the most extensively used instant messaging platform in China. According to the Tencent 2016 Interim Report, the MAU (monthly active users) of WeChat has reached 806 million. Besides instant messaging, WeChat Moment is one of the most widely used functions provided by WeChat. Like Twitter, WeChat Moment is a platform where users can post text and pictures (up to 9 pictures) in a Moment, which can be accessed and commented on by their WeChat friends. However, different from Twitter followers, users' WeChat friends must be approved and are therefore people who have close relationships with them in real life, such as family members, friends, colleagues and clients.
Because of this attribute, compared with Twitter, Moment enjoys the following advantages: 1) Moments posted by users can reflect users' interests and emotions in a very intimate way, as users do not need to worry whether a Moment is appropriate to be seen by unfamiliar people; 2) Moments can directly reflect small social community features. In this paper, we investigate user selfie behaviours based on their posted images in WeChat Moment. We collect 109,545 images shared by 570 VIP users of a cosmetics brand. The motivation of our work is not only to demonstrate that people's preferences and activities can predict their selfie-posting behaviours, but also to establish an explicit relation between people's selfie-posting behaviours and personal patterns. By the end of 2012, Time magazine had listed "selfie" as one of the "top 10 buzzwords" of that year; the popularity of selfies creates another opportunity to understand people's lifestyles and inner worlds, and this potential makes our work valuable. Our contributions are fourfold:

* We propose a method to accurately classify unlabeled images of WeChat Moment and demonstrate that this approach performs well.

* We construct significant user-level features and build classifiers for different selfie-posting behaviour tasks with high accuracy. We uncover diverse interests and activities among users with different selfie-posting behaviours.

* We extract users' high-level attributes after clustering them. For each selfie-posting behaviour task, we determine the rank of attributes based on the correlation between the attribute and the corresponding selfie-posting behaviour, which establishes a concise and direct relation between selfie-posting behaviours and user preference.

* We demonstrate that users' selfie-posting behaviour has the potential to indicate whether a user has a preference toward a specific high-level attribute.

§ RELATED WORK Selfie-posting Behaviour Analysis.
Sharing selfies on social media platforms has become a fashionable behaviour among people. It has attracted interest from researchers, especially psychologists. For instance, Dhir et al. show that some apparent selfie-posting behaviour differences, such as selfie-taking frequency and posting frequency, exist between different ages and genders; they demonstrate this by analyzing questionnaire results from 3763 social media users <cit.>. Qiu et al. notice that some facial expression cues in selfies correlate with human personality traits, including agreeableness, conscientiousness, neuroticism, and openness, and that openness can be accurately predicted from these cues <cit.>. Kim et al. take advantage of Ajzen's Theory of Planned Behavior to analyze the antecedents of selfie-posting behaviour, finding that users' attitude toward selfie-posting, subjective norm, perceived behavioural control, and narcissism are significant determinants of an individual's intention to post selfies on SNSs <cit.>. However, selfie-posting behaviour analysis based on computer vision and machine learning technology is still missing. To our knowledge, our work is the first to apply computer vision and machine learning to analyze and predict human selfie-posting behaviours. Moreover, compared with previous research, our study can obtain more general and objective patterns of users behind their selfie-posting behaviours, because it does not depend on questionnaires, which are mainly dominated by users' subjective feelings. Deep Residual Networks. Because of their superior performance in image classification, recognition and segmentation problems, deep convolutional neural networks have attracted intense attention, and various advanced network architectures have been proposed <cit.><cit.>. Deep Residual Networks <cit.> is one of the recently proposed high-performance CNN architectures (winner of the 2015 ILSVRC & COCO competitions).
It improves performance mainly by letting a few stacked layers fit a residual mapping, which exploits the potential of deeper networks. Although the network is very deep, it still has lower complexity than VGG nets <cit.>. Mining information from WeChat. Although most recent social media data mining research focuses on western social media services, such as Twitter, Facebook and Instagram, researchers have started to pay attention to WeChat due to its high popularity in China. For example, Qiu et al. analyse the growth, evolution and diffusion patterns of WeChat messaging groups <cit.>, and Li et al. analyse the diffusion patterns of information in Moments by tracking a large number of pictures in Moments <cit.>.§ RESEARCH TASKS In this paper, we aim to investigate the deeper personal patterns behind people's diverse selfie-posting behaviours. To this end, we design several research tasks as follows:

R1: What kinds of people prefer to post selfies? What kinds of people do not?

R2: What kinds of people prefer to post a series of selfies in a Moment? What kinds of people prefer to post only one selfie in a Moment?

R3: What kinds of people prefer to post only selfies in a Moment? What kinds of people prefer to post selfies together with images of other categories?

R4: What kinds of people prefer to take selfies with others? What kinds of people prefer to take selfies on their own?

R5: What kinds of people prefer to take selfies outdoors? What kinds of people prefer to take selfies indoors?

R6: What kinds of people prefer to take selfies with some peculiar behaviours, such as holding a gizmo or wearing a facial mask?

R7: Is there a more concise and direct way to indicate the correlation between people's preferences and their selfie-posting behaviour?
Could we predict people's preferences from their selfie-posting behaviour? For each task, we propose a specific selfie-posting measure to explore it quantitatively. All selfie-posting measures corresponding to these tasks are defined in Section <ref>.§ METHODOLOGY §.§ Moment Image Classification To construct users' profiles based on their Moment images, we first classify each image into categories, so that a standardized approach can be used to characterize users and the dimension of the feature vector stays in a reasonable range. Since we do not have labels for the Moment images, we classify each image by extracting and clustering its deep features. The whole process is as follows. First, we extract deep features from the Deep Residual Network model proposed by He et al. <cit.>; in particular, we extract a 2048-dimensional feature vector for each image from the last "pool5" layer of ResNet-50. We then cluster these feature vectors with k-means. We determine the value of k based on the Silhouette Coefficient. When computing it, to reduce time complexity, we replace the mean distance of a sample to all samples of a cluster with the distance between the sample and the centroid of that cluster. We vary k from 10 to 100 and find a marked decline in the Silhouette Coefficient when k is larger than 60. We therefore set k = 60 initially and obtain 60 categories with their corresponding Moment images. Next, we manually merge several categories that we judge to be the same, yielding 47 categories, and label them according to their corresponding images. The names of the 47 categories are shown in Table <ref>, and the classification performance evaluation is reported in Section <ref>. Some well-known social networking services have official image categories, such as Pinterest[https://www.pinterest.com/]. Pinterest offers a total of 34 categories for users to choose from.
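The clustering stage of the pipeline above (k-means over deep features, scored with the centroid-based simplification of the Silhouette Coefficient) can be sketched in a self-contained way. In this sketch, random well-separated Gaussian blobs stand in for the ResNet-50 "pool5" features, which are not reproduced here; the restart count and seeds are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=30, seed=0):
    # Lloyd's algorithm with k-means++-style seeding
    rng = np.random.default_rng(seed)
    C = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(1) for c in C], axis=0)
        C.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    C = np.array(C)
    for _ in range(iters):
        lab = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j] for j in range(k)])
    return lab, C

def simplified_silhouette(X, lab, C):
    # the paper's shortcut: centroid distances replace mean pairwise distances
    d = np.linalg.norm(X[:, None] - C[None], axis=2)
    a = d[np.arange(len(X)), lab]      # distance to own centroid
    d[np.arange(len(X)), lab] = np.inf
    b = d.min(1)                       # distance to nearest other centroid
    return float(np.mean((b - a) / np.maximum(a, b)))

# stand-in features: 3 separated Gaussian blobs in 2048-d, 40 "images" each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.1, (40, 2048)) for m in (0.0, 1.0, 2.0)])
runs = [kmeans(X, 3, seed=s) for s in range(5)]        # a few restarts
lab, C = max(runs, key=lambda r: simplified_silhouette(X, r[0], r[1]))
print(simplified_silhouette(X, lab, C))
```

Sweeping k and picking the value where this score starts to decline mirrors the k-selection step described above.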
Therefore, we compare our categories with the categories of Pinterest and discuss some differences. First of all, considering that the users in our dataset are all VIPs of a cosmetics brand, some of our categories reflect these users' special attributes, such as Cosmetic, Cosmetics Ad and Bracelet & Necklace. On the other hand, a considerable number of Moment images are related to WeChat itself, such as Chat Screenshot, WeChat Moment and WeChat Expression. In addition, beyond content, some posted images have special styles, such as Special Effects Photos and Very Long Pictures (WeChat Moment allows users to post images of unrestricted height), so we also place them in separate categories. Most importantly, given the different scopes of the two services, the categories of Pinterest mainly focus on people's interests, while the categories of WeChat Moment have a wider coverage involving users' interests, preferences, activities and occupations. §.§ User Characterization After classifying the images in users' Moments, we can characterize each user according to their posted images. We first give several definitions based on categories; these definitions will be used to characterize users. Definition (Occurrence of a category). An occurrence of a category is the presence in a Moment of at least one image belonging to this category; regardless of how many images of this category the Moment contains, it is counted as one occurrence. Definition (Frequency of a category for a user). Generally, a user has a number of Moments. For a user, his/her posting frequency of a category is defined as the total occurrence number of this category over these Moments divided by the total occurrence number of all categories over these Moments. It reflects whether a user has a preference for posting a specific category in a Moment. Definition (Inertia of a category for a user).
For a user, his/her posting inertia of a category is defined as the total number of images of this category in all his/her Moments divided by the total occurrence number of this category in all his/her Moments. It reflects whether a user tends to post relatively many images of a specific category in a Moment. Definition (Singleness of a category for a user). For a user, his/her posting singleness of a category is defined as the number of Moments containing only this category divided by the number of Moments in which this category occurs. It reflects whether a user tends to include only a single specific category in a Moment. We intend to exploit the relationship between users' image-posting behaviour for other categories and their selfie-posting behaviours, as we believe people's interests, activities and posting habits have a large effect on their selfie-posting behaviours. We make sure that the user features do not include any selfie information. As a result, we extract three kinds of features:

* User's frequency feature (F-feature). For a user, it is the combination of the frequencies of all categories except selfie. We compute the frequency of each category after filtering out all selfie images of this user. As there are 46 categories besides selfie, this feature is a 46-dimensional vector, related to the user's interests and activities.

* User's inertia feature (I-feature). For a user, we compute it as the total number of images of all other categories in all his/her Moments divided by the total occurrence number of all other categories in all his/her Moments. This feature is a scalar related to the user's posting habit.

* User's singleness feature (S-feature). For a user, we compute it as the number of Moments containing only one category divided by the total number of Moments, after filtering out all Moments that include selfie images. This feature is a scalar related to the user's posting habit.
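The frequency/inertia/singleness definitions above can be made concrete with a small sketch. The toy Moments and category names below are invented for illustration; each Moment is simply the list of category labels of its images.

```python
from collections import Counter

# toy user: each Moment is a list of per-image category labels
moments = [["meal", "meal", "pet"],        # two meal images + one pet image
           ["selfie", "meal"],             # contains a selfie -> excluded from S-feature
           ["pet"],
           ["meal", "flower", "flower"]]

def user_features(moments, exclude="selfie"):
    # drop images of the excluded category (selfies) before computing F/I/S
    kept = [[c for c in m if c != exclude] for m in moments]
    occ, imgs = Counter(), Counter()
    for m in kept:
        occ.update(set(m))   # occurrence: a category counts once per Moment
        imgs.update(m)       # raw image counts
    total_occ = sum(occ.values())
    freq = {c: occ[c] / total_occ for c in occ}        # F-feature entries
    inertia = sum(imgs.values()) / total_occ           # I-feature
    no_selfie = [m for m, orig in zip(kept, moments) if exclude not in orig and m]
    singleness = sum(len(set(m)) == 1 for m in no_selfie) / len(no_selfie)  # S-feature
    return freq, inertia, singleness

freq, inertia, singleness = user_features(moments)
print(freq, inertia, singleness)
```

Here "meal" occurs in 3 of 6 category-occurrences, so its frequency is 0.5; 8 kept images over 6 occurrences give an inertia of 4/3; and 1 of the 3 selfie-free Moments is single-category, giving a singleness of 1/3.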
Eventually, each user is characterized by a 48-dimensional feature vector (46 + 2); we illustrate below how these feature vectors are used for the selfie-related research tasks of Section <ref>.§ EXPERIMENTS §.§ Dataset We collected a dataset from WeChat Moment consisting of 570 users with 37,359 Moments and 109,545 Moment images, from Mar 21, 2016 to July 21, 2016. All of these users are VIPs of a cosmetics brand. Such users' Moments include a considerable number of selfie images, as well as a sufficient number of images for each subcategory of selfies. We checked the dataset and found that almost all users are female and between roughly 20 and 40 years old. This fairly targeted dataset helps reduce the influence of irrelevant variables, so that the extracted information about the relation between people's selfies and their life details is more credible. §.§ Experiment Results Table <ref> shows the total number of images in each category after classification. Selfie is the most dominant category, with a proportion above 10%. To evaluate the performance of the image classification approach, for each category we randomly sample 500 images and ask 2 volunteers to judge whether each image is accurately classified into that category. The average accuracy over all categories is 88.5% with a standard deviation of 9.12, and 39 of the 47 categories exceed 80%. We show the classification results for several typical categories in Figure <ref>; our unsupervised image classification method produces very coherent categories. We calculate each user's frequency distribution over all categories and compute the Pearson correlation coefficient between each pair of categories, as shown in Figure <ref>. The result is consistent with our expectation.
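For reference, the coefficients in that figure are plain Pearson correlations between per-user frequency vectors of two categories. A minimal sketch, with made-up frequencies for five hypothetical users:

```python
import math

def pearson(x, y):
    # sample Pearson correlation between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# invented per-user frequencies of two related categories across 5 toy users
cosmetic     = [0.30, 0.25, 0.05, 0.40, 0.10]
cosmetics_ad = [0.20, 0.15, 0.02, 0.30, 0.05]
print(pearson(cosmetic, cosmetics_ad))   # close to 1 for related categories
```

Categories whose per-user frequencies move together (e.g. Cosmetic and Cosmetics Ad) yield coefficients near 1, while categories favoured by disjoint user groups yield negative values.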
High Pearson correlation coefficients exist between some related or similar categories, such as Cosmetic and Cosmetics Ad, Building and Tourist Photo, Meal and Fruit & Cake, and Clothes and Sunglass & Bag. On the other hand, negative Pearson correlation coefficients exist between some categories, such as Building and Cosmetic, Landscape Photo and Cosmetics Ad, and Tourist Photo and WeChat Moment. The Selfie category has relatively high correlation coefficients with Child, Baby, Star, Beauty Ad, Tourist Photo, and Meal. Before performing the research tasks outlined in Section <ref>, we first filter out the users whose total occurrence number of all categories is lower than 50. This pre-processing avoids inaccurate frequency distributions due to sparse data. After this process, our dataset includes 283 users, 33,877 Moments and 100,677 images.§.§.§ Tasks R1 - R3 To perform R1 - R3, we first design a prediction task to investigate whether people's interests, activities and posting habits can determine these basic selfie measures. For research task R1, we sort the users by their selfie frequency, labeling the top 25% of users as positive and the bottom 25% as negative; the task can then be regarded as binary classification. For research tasks R2 and R3, we first filter out the bottom 1/3 of users sorted by their total occurrence number of selfies, since selfie inertia and selfie singleness are unreliable if the total selfie occurrence number is too low. We then sort the users by their selfie inertia and selfie singleness, respectively, and label them in the same way as task R1. We select different features and fusion strategies for each user according to Section <ref>, and for each task we perform 10-fold cross-validation using an SVM. The 10-fold cross-validation results are shown in Table <ref>.
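The top/bottom-quartile labeling used throughout these tasks can be sketched in a few lines; fitting the SVM on the resulting labels would follow, but is omitted here. The toy users and selfie frequencies are invented for illustration.

```python
def quartile_labels(users, measure):
    # sort by the selfie measure; top 25% -> +1, bottom 25% -> -1, middle half dropped
    ranked = sorted(users, key=measure, reverse=True)
    q = len(ranked) // 4
    return [(u, +1) for u in ranked[:q]] + [(u, -1) for u in ranked[-q:]]

# toy users: (name, selfie_frequency)
users = [("u%d" % i, f) for i, f in enumerate([0.9, 0.1, 0.5, 0.8, 0.2, 0.4, 0.7, 0.3])]
labeled = quartile_labels(users, measure=lambda u: u[1])
print(labeled)
```

Dropping the middle half keeps only users with clearly opposite behaviours, which is what makes the binary classification setup well defined.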
From Table <ref>, we can see that for the classification of two sets of users (those labeled positive and those labeled negative) with different selfie frequency, the 10-fold cross-validation accuracy reaches 89.36%, which shows that a user's posting frequency of other categories can accurately predict whether this user is a selfie addict. For the classification of two sets of users with different selfie inertia, the 10-fold cross-validation accuracy reaches 83.52% using the fusion of the user's frequency feature and inertia feature. This shows that a user's selfie inertia is related to his/her posting frequency of other categories, but the most important feature is the inertia feature. A similar conclusion can be drawn for the classification of two sets of users with different selfie singleness: the accuracy is 77.67% using the fusion of the user's frequency feature and singleness feature. To demonstrate that users' image information is more effective for predicting their selfie-posting behaviour, we also extract their text information as a comparison. We implement the binary classification task for selfie frequency in the same way as task R1, but based on users' text information. Each user has corresponding Moments, and each Moment includes a text, so we design two approaches. For the first, we train a Long Short-Term Memory (LSTM) network with word embeddings to predict whether a text belongs to a Moment that contains a selfie. We label a text as positive if its corresponding Moment contains image(s) belonging to the Selfie category. For each text, the network output has two nodes recording the probabilities of positive and negative. We then compute each user's mean positive probability over all his/her texts. In the end, a threshold is learned from users in the training set to determine whether a mean probability is classified as positive or negative.
All texts belonging to users whose selfie frequency rank is between 25% and 75% are used to train and evaluate the LSTM network. Of the remaining users, we randomly select 70% as training samples to learn the threshold and 30% as test samples, labeling users in the same way as task R1. The accuracy of the LSTM in classifying a single text is only 59.32%, and the classification accuracy at the user level is 67.86%. The second approach applies doc2vec to each text and computes the mean vector over all of a user's texts to represent his/her text feature; for this, the accuracy is below 55%. These two approaches demonstrate that using users' image information is more effective. For each task, to further investigate the difference between the two sets of users and reveal the personal patterns behind a specific selfie-posting behaviour, we compute the mean value of each feature over all users in each set. Figure <ref> - Figure <ref> show the comparison results between the two sets of users for each task. Users' posting preferences reflect their interests and activities. From Figure <ref>, we can see that selfie-posting addicts have a clear preference for posting images about Child (a picture taken with children is also included in this category). They are also more likely to post images about Cosmetic, Cosmetics Ad, Pink Goods, Landscape Photo, Tourist Photo, Hand & Leg, Star, Beauty Ad, and Meal. In contrast, non-addicts are more interested in posting images about Shoes, Clothes, Chart, Other Ad, Fruit & Cake, and Essay. As shown in Figure <ref>, whether a user prefers to post a large number of selfies in a Moment is related to his/her posting frequency of other categories. "A series of selfies" lovers are more likely to post images about Pink Goods, Cosmetic, Hand & Leg, Display Rack, Special Effects Photo, and Child, while the opposite users are more likely to post images about Building, Tourist Photo, Meal, Pet, Flower and Fruit & Cake.
However, the main factor is a user's posting habit. As the sub-figure of Figure <ref> shows, "a series of selfies" lovers tend to have a higher inertia feature, which reveals that, beyond selfies, they are also accustomed to posting a large number of images belonging to the same category in a Moment. In addition, their singleness feature is lower, which means that they like to include more than one category in a Moment. From Figure <ref>, it can be seen that whether a user prefers to post only selfies in a Moment is related to this user's posting frequency of other categories. "It's all selfies" lovers are more likely to post images about Child, Star, Chat Screenshot, and WeChat Expression. The opposite users are more likely to post images about Building, Snack, Clothes and Sunglass & Bag. Still, the posting habit is the paramount factor: "it's all selfies" lovers usually have a higher singleness feature and a lower inertia feature.§.§.§ Tasks R4 - R6 To investigate the advanced selfie-posting tasks R4 - R6, we classify selfie images along two different directions. On one hand, we again apply k-means clustering to the deep 2048-dimensional features to obtain 4 subcategories: Indoor Ordinary Selfie, Outdoor Selfie, Holding Something Selfie and Face Mask Selfie; the total number of images in each subcategory is shown in Table <ref>. We show the clustering result of each subcategory in Figure <ref>.
On the other hand, we detect the number of faces in each image using Face++[http://www.faceplusplus.com] and classify each image as One-face Selfie or Multi-face Selfie; the total number of images in each subcategory is also shown in Table <ref>. We define four advanced selfie-posting measures for tasks R4 - R6 as follows: Definition (Group selfie tendency for a user). For a user, his/her group selfie tendency is defined as the total occurrence number of the Multi-face Selfie subcategory divided by the sum of the occurrence numbers of the One-face Selfie and Multi-face Selfie subcategories. Definition (Outdoor selfie tendency for a user). For a user, his/her outdoor selfie tendency is defined as the total occurrence number of the Outdoor Selfie subcategory divided by the sum of the occurrence numbers of the Outdoor Selfie and Indoor Ordinary Selfie subcategories. Definition (Holding something selfie tendency for a user). For a user, his/her holding something selfie tendency is defined as the total occurrence number of the Holding Something Selfie subcategory divided by the sum of the occurrence numbers of the Holding Something Selfie and Indoor Ordinary Selfie subcategories. Definition (Face mask selfie tendency for a user). For a user, his/her face mask selfie tendency is defined as the total occurrence number of the Face Mask Selfie subcategory divided by the sum of the occurrence numbers of the Face Mask Selfie and Indoor Ordinary Selfie subcategories. Definitions <ref>, <ref> correspond to tasks R4 and R5, and Definitions <ref>, <ref> correspond to task R6. For each task, to avoid inaccuracy due to sparse data, we first filter out the bottom 25% of users with the lowest sum of occurrence numbers of the two corresponding subcategories. We then sort the users by the tendency of each task, labeling the top 25% as positive and the bottom 25% as negative. We only use the user frequency feature, since the other features are uninformative here and may adversely affect the results.
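The four tendency measures above are simple occurrence ratios, which a short sketch makes explicit. The occurrence counts below are invented for illustration.

```python
def tendency(count_sub, count_ref):
    # e.g. outdoor tendency = occ(Outdoor) / (occ(Outdoor) + occ(Indoor Ordinary))
    return count_sub / (count_sub + count_ref)

# toy per-user occurrence counts of the selfie subcategories
occ = {"one_face": 30, "multi_face": 10, "outdoor": 12, "indoor": 28,
       "holding": 7, "face_mask": 14}

group   = tendency(occ["multi_face"], occ["one_face"])   # R4: group selfie tendency
outdoor = tendency(occ["outdoor"],   occ["indoor"])      # R5: outdoor selfie tendency
holding = tendency(occ["holding"],   occ["indoor"])      # R6: holding something tendency
mask    = tendency(occ["face_mask"], occ["indoor"])      # R6: face mask tendency
print(group, outdoor, holding, mask)
```

Note that each tendency compares a subcategory against its own reference (One-face Selfie or Indoor Ordinary Selfie), so the four values need not sum to anything in particular.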
The 10-fold cross-validation accuracy for each task is shown in Table <ref>. Table <ref> shows that users' posting frequency of other categories can accurately predict whether they have these typical tendencies. To further reveal the features of users with these tendencies, for each tendency we compute the mean feature values of the top 25% of users with the highest tendency, in the same way as Section <ref>, and compare them with the mean values over all users. Figure <ref> shows the comparison for the different sets of users with the highest values of each tendency; we select only 24 typical categories to keep the figure readable. From Figure <ref>, we can see that "group selfie" and "outdoor selfie" lovers prefer to share images belonging to Pet, Bed, Child, Activity, Large Group Photo, Building, Landscape Photo, Tourist Photo, Fruit & Cake and Meal. On the other hand, "face mask selfie" and "holding something selfie" lovers are more likely to share images belonging to Cosmetic, Cosmetics Ad, Beauty Ad, Cosmetic Tips, Chat Screenshot, Special Effects Photos, Pink Goods, Display Rack and Poster. In addition, all of them are uninterested in sharing images of Clothes. Small differences also exist between "group selfie" and "outdoor selfie" lovers; for example, "outdoor selfie" lovers like to upload food images. §.§.§ Task R7 For this task, we first cluster the users as the basis of the subsequent experiments and as a guide for extracting users' high-level attributes. To achieve user clustering, we apply Non-negative Matrix Factorization (NMF) to users' 46-dimensional frequency features. After factorization, the two non-negative matrices W and H represent the category-type distribution and the user-type distribution, respectively. We cluster users into five types, and for each user we extract six high-level attributes based on the clustering result. The six attributes are Travel, Cosmetic, Children, Living Goods, WeChat and Food.
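The NMF-based user clustering described above can be sketched with classical multiplicative updates; a user's type is the factor with the largest loading. The rank-2 toy frequency matrix below (two obvious user groups over six categories) and the update rule are illustrative assumptions, not the paper's exact solver or data.

```python
import numpy as np

def nmf(V, k, iters=1000, seed=0):
    # Lee-Seung multiplicative updates for V ~ W @ H with non-negative factors
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy users-by-categories frequency matrix: users 0-1 and users 2-3 form two groups
V = np.array([[0.8, 0.2, 0.0, 0.0, 0.0, 0.0],
              [0.4, 0.1, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.2, 0.3, 0.5],
              [0.0, 0.0, 0.0, 0.4, 0.6, 1.0]])
W, H = nmf(V, 2)
types = W.argmax(1)   # assign each user to its dominant factor (user type)
print(types, np.linalg.norm(V - W @ H))
```

Each row of H then plays the role of a category-type distribution, from which the high-level attributes (sums of category frequencies) can be read off.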
Table <ref> shows the categories each attribute includes. For a user, the value of an attribute is defined as the sum of the frequencies of all categories the attribute includes. For a user type, the value of an attribute is defined as the mean attribute value over all users the type contains. Finally, we normalize each attribute into the range between 0 and 1 over the five user types. Figure <ref> shows radar plots of the attributes for the five user types. For each user type, we compute the mean value of selfie frequency, selfie inertia, selfie singleness and the four advanced selfie tendencies defined in Section <ref> over all users the type contains; the results are shown in Figure <ref>. It can be seen from Figure <ref> that the five user types reflect five groups of users with different preferences. The first type of user has a preference for Travel and Food; the other four types respectively have a preference for Cosmetic, Living Goods, WeChat and Children (because of the high correlation between Children and Travel, the fifth type also has a relatively high value of Travel). We can thus determine the attribute rank based on the correlation between each attribute and each selfie-posting behaviour. From Figure <ref>, for selfie frequency, the correlation ranking of attributes, from high to low (also from positive to negative), is Children, Cosmetic, Travel, Food, Living Goods and WeChat. For selfie inertia, the ranking is Cosmetic, WeChat, Children, Travel, Food and Living Goods. For selfie singleness, WeChat is highest and the others are similar. For group selfie tendency, the ranking is Children, Travel, Food, WeChat, Living Goods and Cosmetic. For outdoor selfie tendency, it is Children, Travel, Food, Living Goods, WeChat and Cosmetic.
Finally, for both holding something selfie tendency and face mask selfie tendency, the ranking is Cosmetic, WeChat, Children, Living Goods, Travel and Food. The ranking of the high-level attributes establishes the relation between users' selfie-posting behaviours and personal patterns in a more concise and direct way. For a set of users with a specific selfie-posting behaviour, we can easily estimate whether they have a preference for an attribute. The results are quite reasonable. First, they are consistent with common sense. For example, from Figure <ref>, we can see that users with a preference for WeChat (user type 4) have the highest average selfie singleness: they usually post selfies without other images, which is consistent with the general knowledge that "WeChat" images are seldom posted together with selfies compared with other types of images. Furthermore, the results are consistent with reality. For instance, users who love Travel, Food and Children (user types 1 and 5) have high outdoor selfie tendency and group selfie tendency. This is consistent with the fact that most travel photos are taken outdoors and that people love taking photos with those who travel with them, such as their children and friends. Finally, the results are consistent with the purposes of some posted selfies. For example, users who have a preference for Cosmetic (user type 2) possess the highest average holding something selfie tendency and face mask selfie tendency; this is consistent with some cosmetic lovers' intention to advertise their cosmetic products by sharing a selfie with the products in their hands. In the end, based on the above conclusion that users with preferences for different high-level attributes have different selfie-posting behaviours, we further show that users' selfie-posting behaviours can predict these preferences.
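For concreteness, the attribute computation used above (summing a user's category frequencies per attribute, then min-max normalizing across the user types) can be sketched as follows; the category lists here are abbreviated, hypothetical stand-ins for the mapping in Table <ref>:

```python
# Abbreviated, hypothetical stand-in for the attribute-to-category mapping of Table <ref>.
ATTRIBUTES = {
    "Travel": ["Landscape Photo", "Tourists Photo", "Building"],
    "Food": ["Meal", "Fruit & Cake"],
}

def attribute_value(user_freq, categories):
    """Attribute value of one user: sum of that user's posting frequencies
    over the attribute's categories."""
    return sum(user_freq.get(c, 0.0) for c in categories)

def normalize(values):
    """Min-max normalize per-user-type attribute values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

# one user's posting-frequency profile (illustrative numbers)
user = {"Meal": 0.20, "Fruit & Cake": 0.10, "Building": 0.05}
food = attribute_value(user, ATTRIBUTES["Food"])      # ~0.30
travel = attribute_value(user, ATTRIBUTES["Travel"])  # ~0.05
```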
For each high-level attribute, we label the top 25% of users with the highest value of the attribute as positive and the bottom 25% with the lowest value as negative. Predicting each attribute can thus be regarded as a binary classification task. We use the seven selfie-posting measures as features and train an SVM for each attribute. The 10-fold cross-validation accuracy for each task is shown in Table <ref>. The result shows that users' selfie-posting behaviours have the potential to indicate whether a user has a preference toward certain attributes, which could be applied to predict significant user patterns in future research and applications. § CONCLUSIONS AND FUTURE WORK In this paper, we investigate the deep personal patterns behind people's diverse selfie-posting behaviours. To reduce the influence of irrelevant variables and strengthen the credibility of the results, we collect the posted images of a specific group of users from WeChat Moments. Based on Deep Residual Network-derived features and a clustering algorithm, we reliably classify images into different categories in an unsupervised fashion. We then characterize users by extracting three types of user-level features that reflect their interests, activities and posting habits. Furthermore, we define seven measures to comprehensively model users' selfie-posting behaviours. We predict users' selfie-posting behaviours based on the three types of features. The result confirms that people's interests, activities and posting habits can help determine their selfie-posting behaviours. Moreover, a comparison of typical users with specific selfie-posting behaviours clearly explains the significant personal patterns behind special selfie-posting behaviours. High-level attributes of users are established after we cluster the users by NMF.
We compute the correlation ranking of attributes for each selfie-posting behaviour to measure the relation between users' selfie-posting behaviours and personal patterns in a concise and direct fashion. Finally, we design classification tasks that demonstrate the value of users' selfie-posting behaviours in predicting their preferences toward high-level attributes. In the future, we intend to build comprehensive user profiles by fusing multimodal sensor information, including images, text, and emoji usage. § ACKNOWLEDGEMENTS We thank the support of New York State through the Goergen Institute for Data Science, and the dataset provider.
http://arxiv.org/abs/1702.08097v1
{ "authors": [ "Tianlang Chen", "Yuxiao Chen", "Jiebo Luo" ], "categories": [ "cs.SI" ], "primary_category": "cs.SI", "published": "20170226221209", "title": "A Selfie is Worth a Thousand Words: Mining Personal Patterns behind User Selfie-posting Behaviours" }
http://arxiv.org/abs/1702.08227v2
{ "authors": [ "Naoto Shiraishi", "Takashi Mori" ], "categories": [ "cond-mat.stat-mech", "quant-ph" ], "primary_category": "cond-mat.stat-mech", "published": "20170227105554", "title": "Systematic Construction of Counterexamples to the Eigenstate Thermalization Hypothesis" }
Department of Physics, East China Normal University, Shanghai, 200062, P. R. China Department of Physics, East China Normal University, Shanghai, 200062, P. R. China Department of Physics, East China Normal University, Shanghai, 200062, P. R. China State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai 200062, China zhliu@phy.ecnu.edu.cn Department of Physics, East China Normal University, Shanghai, 200062, P. R. China State Key Laboratory of Precision Spectroscopy, East China Normal University, Shanghai 200062, China Global and partial synchronization are the two distinctive forms of synchronization in coupled oscillators and have been well studied in the past decades. Recent attention on synchronization is focused on the chimera state (CS) and explosive synchronization (ES), but little attention has been paid to their relationship. We here study this topic by presenting a model to bridge these two phenomena, which consists of two groups of coupled oscillators whose coupling strength is adaptively controlled by a local order parameter. We find that this model displays either CS or ES in two limits. In between the two limits, the model exhibits both CS and ES, where CS can be observed for a fixed coupling strength and ES appears when the coupling is increased adiabatically. Moreover, we show both theoretically and numerically that there is a variety of CS basin patterns for the case of identical oscillators, depending on the distributions of both the initial order parameters and the initial average phases.
This model suggests a way to easily observe CS, in contrast to other models, which have some (weak or strong) dependence on initial conditions. 89.75.-k, 05.45.Xt A model bridging chimera state and explosive synchronization Zonghua Liu December 30, 2023 ============================================================= § INTRODUCTION Synchronization in coupled oscillators has been well studied in the past decades, and attention is now focused on the influence of network structures <cit.>. In this field, the two hot topics are the chimera state (CS) and explosive synchronization (ES). CS was first found by Kuramoto and Battogtokh in 2002 <cit.>. After the discovery, CS attracted a lot of attention in the past decade <cit.>. Generally speaking, CS is the coexistence of coherent and incoherent behaviors in coupled identical oscillators. Because of different initial conditions, the nonlocally coupled oscillators naturally evolve into distinct coherent and incoherent groups. This counterintuitive coexistence of coherent and incoherent oscillations in populations of identical oscillators, each with an equivalent coupling structure, can be considered a symmetry breaking of the collective behavior by nonsymmetric initial conditions. The phenomenon reminded people of the two-headed monster of Greek mythology and was thus named the Chimera State by Abrams and Strogatz in 2004 <cit.>. The study of CS was originally motivated by the phenomenon of unihemispheric sleep of many creatures in the real world <cit.>, which was first found in dolphins and then revealed in birds, some aquatic mammals, reptiles, etc. So far, CS has been confirmed in many experiments <cit.>. For example, Tinsley et al. reported experimental studies of CS in populations of coupled chemical oscillators <cit.>. Hagerstrom et al. showed experimental observation of CS in coupled-map lattices <cit.>.
Viktorov et al. demonstrated the coexistence of coherent and incoherent modes in the optical comb generated by a passively mode-locked quantum dot laser <cit.>. Wickramasinghe et al. presented an experiment on CS in a network of electrochemical reactions <cit.>. Martens et al. devised a simple experiment with mechanical oscillators to show CS <cit.>. And Schoenleber et al. reported CS in the oxide layer during the oscillatory photoelectrodissolution of n-type doped silicon electrodes under limited illumination <cit.>. ES represents a first-order synchronization transition in networked oscillators. When we increase the coupling strength adiabatically, the system stays unsynchronized until a critical forward coupling strength λ_cF, where it suddenly becomes synchronized. That is, its order parameter R has a jump at λ_cF. However, when we decrease the coupling strength adiabatically from a synchronized state, the system does not return by the same route as the forward process but jumps at a different critical backward coupling strength λ_cB. As λ_cF>λ_cB, the forward and backward routes of R form a hysteresis loop. This first-order transition was in fact found before the advent of complex networks <cit.> and became a hot topic only when it was rediscovered through the positive correlation between the natural frequency of a networked oscillator and its degree by Gómez-Gardeñes et al. and named Explosive Synchronization in 2011 <cit.>. Before the work <cit.>, synchronization on complex networks was generally analyzed by the approach of the master stability function <cit.>, which always predicts a second-order phase transition. However, the work <cit.> showed that it is also possible for synchronization on complex networks to be first-order, thus attracting great attention to ES <cit.>.
It was revealed that, besides the way in <cit.>, ES can also be observed in many other ways, provided that the growth of synchronized clusters is under a suppressive rule <cit.>. Currently, CS and ES are studied separately as two distinct topics. In general, we do not have CS in the systems of ES, and vice versa. Thus, it is interesting to ask whether it is possible to observe both of them in a single system. To figure out the answer, we here study this topic by presenting a novel model to bridge these two phenomena. The model consists of two groups of coupled nonidentical oscillators with a natural frequency distribution. Specifically, its coupling strength is adaptively controlled by a parameter β. This model goes back to the standard CS model <cit.> when all the natural frequencies are the same and β=0, and returns to the adaptive model of ES <cit.> when there is only one group of oscillators and β=1. Very interestingly, we find that this model displays both CS and ES, where CS can be observed for a fixed coupling strength and ES appears when the coupling is increased adiabatically. Thus, this model sets up a bridge between CS and ES. Moreover, we focus on the case of identical oscillators and show both theoretically and numerically that there is a variety of CS basin patterns, depending on the distributions of both the initial order parameters and the initial average phases. That is, this model shows a way to easily observe CS, in contrast to the sensitive dependence on initial conditions in many previous models <cit.>. The paper is organized as follows. In Sec. II, we introduce the model and study its collective behaviors. In Sec. III, we pay attention to the case of identical oscillators and study it by dimensional reduction analysis. In Sec. IV, we show the corresponding numerical simulations and the stability analysis. Finally, in Sec. V, we give conclusions and discussions.
§ MODEL DESCRIPTION We consider a model of two groups of coupled oscillators, defined as θ̇_i,j = ω_i,j+R_j^βλ/N∑_k=1^Nsin(θ_k,j-θ_i,j+α) +R_j^βλ'/N∑_k=1^Nsin(θ_k,j'-θ_i,j+α), where the index j=1,2 represents the two groups and i=1,⋯,N represents the N oscillators in each group. ω_i,j is the natural frequency, satisfying a uniform distribution in (-δ,δ). The oscillators are globally coupled, with coupling strength λ inside each group and coupling strength λ' between the two groups. j' represents the other group, defined as j'=2 when j=1 and j'=1 when j=2. α is a phase lag parameter and is set as α=π/2-0.1, a value chosen in many CS papers <cit.>. The coupling is attractive when α<π/2 and repulsive when α>π/2. β is a parameter located in [0,1]. R_1 and R_2 in Eq. (<ref>) are the order parameters of groups 1 and 2, respectively, defined as R_1e^iΨ_1=1/N∑_k=1^N e^iθ_k,1, R_2e^iΨ_2=1/N∑_k=1^N e^iθ_k,2. In the framework of Eq. (<ref>), the population is divided into two groups, and the coupling strengths R_j^βλ and R_j^βλ' are closely correlated with the local coherence when β is not 0. The model (<ref>) returns to the case of one population in Ref. <cit.> when λ'=0 and β=1. To show the influence of β, Fig. <ref> shows the synchronization transition of model (<ref>) for different β, with λ'=0. It is easy to see that R has a continuous transition for β=0, a discontinuous transition for β=1, and a transition gradually changing from continuous to discontinuous as β increases, indicating a transition from traditional synchronization to explosive synchronization. When β is in the range of the hysteresis loop, there is bistability, where the final state of the system depends sensitively on the initial conditions. The model (<ref>) is more sensitive to the local coherence if there are two or more groups in the system. Once the initial conditions are asymmetric, the two groups may easily go to different final states, i.e.
one group with high coherence and the other group with low coherence. Eq. (<ref>) has two limiting behaviors. The first one is the limiting behavior of λ'=0 and β=1, which goes back to the adaptive model of ES in Ref. <cit.>. In this situation, ES can be observed if we increase (decrease) the coupling adiabatically in the forward (backward) continuation diagram. Fig. <ref>(a) shows the dependence of R_1 on λ for δ=1.0. It is easy to see that there is a hysteresis loop, indicating the existence of ES. The inset of Fig. <ref>(a) shows the evolution of two different initial conditions for λ=8.5. We see that one gradually approaches a higher value (R_1≈ 0.58) and the other goes to zero, confirming the sensitivity to initial conditions in the bistable region. We have the same results for R_2 of the other group (not shown in Fig. <ref>), as the system exhibits the symmetry 1↔ 2. The second one is the limiting behavior of identical ω_i,j (δ=0) in Eq. (<ref>) for all the oscillators and β=0, which returns to the typical model of CS in Ref. <cit.>. In this case, our numerical simulations confirm that one group is synchronized with R_1=1 while the other is unsynchronized with R_2<1. Furthermore, we were surprised to find that there is still a chimera-like behavior when we keep β=0 but let ω_i,j satisfy the uniform distribution in (-δ,δ). Figures <ref>(a)-(d) show the results for δ=1.0, 0.5, 0.2 and 0.15, respectively. We see that the oscillation periods of R_1 and R_2 increase with the decrease of δ until δ=0.15. After that, the oscillatory behaviors of R_1 and R_2 disappear and are replaced by one group synchronized and the other unsynchronized, i.e., a chimera state. We now go back to the current model of Eq. (<ref>) with β=1. We find that it can also show the hysteresis loop. Fig. <ref>(b) shows the results of R_1 for λ'=2. Comparing Fig. <ref>(b) with Fig. <ref>(a), we see that their forward jumping positions are slightly different, i.e. λ_cF<9.0 in Fig.
<ref>(a) while λ_cF>9.0 in Fig. <ref>(b). We have observed the same results for R_2 (not shown here), due to the symmetry 1↔ 2 between the two groups of the system. The inset of Fig. <ref>(b) shows the evolution of two typical initial conditions from the two groups, respectively, for λ=8.5. We see that one (R_1) goes to a higher value (R_1≈ 0.58) and the other (R_2) goes to zero, indicating a chimera-like behavior. Therefore, we have observed both ES and CS in the model of Eq. (<ref>) when the parameters are taken in the range of the hysteresis loop. Then, we change the range of the frequency distribution δ. We find that the hysteresis loop depends on the parameter δ and can be observed only when δ>0.31. With the decrease of δ, the size of the loop decreases to zero at about δ=0.31 and the transition points of R_1 or R_2 also approach zero. Fig. <ref>(a) shows the results for λ'=2, where the "squares" and "circles" represent R_1 of the forward and backward continuation diagrams for δ=1, respectively; the "up triangles" and "down triangles" represent the case of δ=0.7; and the "diamonds" and "left triangles" represent the case of δ=0.4. To check the coexistence of CS, we study the evolution of R_1 and R_2 for two different initial conditions. Figs. <ref>(b)-(d) show the results for λ=8.0 and δ=1, 0.7 and 0.4, respectively. We see that Fig. <ref>(b) is a chimera-like state while Figs. <ref>(c) and (d) are breather-like states. In sum, the range of the frequency distribution δ plays a key role in the coexistence of ES and CS. § DIMENSIONAL REDUCTION ANALYSIS In the following parts, we study CS in model (<ref>) with β=1. In order to satisfy the definition of CS, we switch to identical oscillators (δ=0). To make a theoretical analysis of Eq. (<ref>), it is better to reduce its dimension. Fortunately, such an approach of dimensional reduction has been proposed by Watanabe and Strogatz <cit.> and then generalized by Pikovsky and Rosenblum <cit.>.
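Before turning to the reduction, note that the two-group model itself can be integrated directly. A minimal Euler-stepping sketch in Python (all parameter values here are illustrative, not those of the figures); it uses the mean-field identity (1/N)∑_k sin(θ_k−θ_i+α) = Im[e^{iα} Z e^{−iθ_i}] with Z the complex order parameter to vectorize the sums:

```python
import numpy as np

rng = np.random.default_rng(0)
N, beta, alpha = 50, 1.0, np.pi / 2 - 0.1   # illustrative parameters
lam, lam_p, dt = 8.0, 2.0, 0.01             # lam_p plays the role of lambda'
theta = rng.uniform(-np.pi, np.pi, (2, N))  # phases of the two groups
omega = np.zeros((2, N))                    # identical oscillators (delta = 0)

for _ in range(5000):                       # simple Euler stepping
    Z = np.mean(np.exp(1j * theta), axis=1) # complex order parameters Z_j = R_j exp(i Psi_j)
    R = np.abs(Z)
    for j in (0, 1):
        # intra- and inter-group mean fields, both scaled by R_j^beta as in the model
        field = (lam * Z[j] + lam_p * Z[1 - j]) * np.exp(1j * alpha)
        theta[j] += dt * (omega[j]
                          + R[j] ** beta * np.imag(field * np.exp(-1j * theta[j])))

R1, R2 = np.abs(np.mean(np.exp(1j * theta), axis=1))
```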
We here adopt it to analyze the model (<ref>). In a mean-field framework, the coupling terms in Eq. (<ref>) can be rewritten as R_a^2λsin(Ψ_a-θ_j^a+α)+R_aR_a'λ'sin(Ψ_a'-θ_j^a+α). Thus, Eq. (<ref>) can be rewritten as θ̇_j^a = Im(Z_ae^-iθ_j^a) Z_a = R_a^2λ e^i(Ψ_a+α)+R_aR_a'λ' e^i(Ψ_a'+α), where Z_a is the mean-field coupling for oscillator j, and a and a' are the indices of the two populations. The average frequency ⟨ω⟩ has been dropped, as it is zero for a symmetric distribution. By introducing three variables ρ_a(t), Θ_a(t), Φ_a(t) and constants ψ_j^a via the transformation tan[θ_j^a-Φ_a/2]=1-ρ_a/1+ρ_atan[ψ_j^a-Θ_a/2], we get the WS equations of Eq. (<ref>) <cit.> ρ̇_a = 1-ρ_a^2/2Re(Z_ae^-iΦ_a) Θ̇_a = 1-ρ_a^2/2ρ_aIm(Z_ae^-iΦ_a) Φ̇_a = 1+ρ_a^2/2ρ_aIm(Z_ae^-iΦ_a). Generally, the parameter ρ characterizes the degree of synchronization: ρ=0 if the oscillators are incoherent, and ρ=1 if the oscillators are completely synchronized. ρ_a is roughly proportional to the order parameter R_a. The phase variable Θ describes the shift of the individual oscillators with respect to the mean phase, and Φ describes the average of the phases. It is convenient to introduce new variables ξ_a=Φ_a-Θ_a and z_a=ρ_ae^iΦ_a; then Eq. (<ref>) can be rewritten as ż_a = 1/2Z_a-z_a^2/2Z_a^* ξ̇_a = Im(z_a^*Z_a). If the constants ψ_j^a are uniformly distributed, Eqs. (<ref>) and (<ref>) decouple. Eq. (<ref>) describes the low-dimensional behavior of Eq. (<ref>). In the thermodynamic limit, we have ρ_a=R_a and thus from Eq. (<ref>) we obtain Ṙ_a = 1/2R_a(1-R_a^2)[λ R_acosα+λ' R_a'cos(Φ_a'-Φ_a+α)] Φ̇_a = 1/2(1+R_a^2)[λ R_asinα+λ' R_a'sin(Φ_a'-Φ_a+α)]. Eq. (<ref>) gives the theoretical prediction of the collective behaviors of Eq. (<ref>). However, it is not easy to get a precise analytical solution of Eq. (<ref>). Hence, we here calculate Eq. (<ref>) numerically.
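Such a numerical integration of the reduced equations for (R_1, R_2, Φ_1, Φ_2) can be sketched with simple Euler stepping; the values λ=λ'=1, R_1(0)=0.275, R_2(0)=0.569 and ΔΦ(0)=2π/3 are taken from the cases discussed in the text, while the step size and run length are illustrative:

```python
import math

def step(R, Phi, lam, lam_p, alpha, dt):
    """One Euler step of the reduced equations for (R_1, R_2, Phi_1, Phi_2)."""
    dR, dPhi = [0.0, 0.0], [0.0, 0.0]
    for a in (0, 1):
        b = 1 - a
        dR[a] = 0.5 * R[a] * (1.0 - R[a] ** 2) * (
            lam * R[a] * math.cos(alpha)
            + lam_p * R[b] * math.cos(Phi[b] - Phi[a] + alpha))
        dPhi[a] = 0.5 * (1.0 + R[a] ** 2) * (
            lam * R[a] * math.sin(alpha)
            + lam_p * R[b] * math.sin(Phi[b] - Phi[a] + alpha))
    return ([R[a] + dt * dR[a] for a in (0, 1)],
            [Phi[a] + dt * dPhi[a] for a in (0, 1)])

# asymmetric initial order parameters and DeltaPhi(0) = 2*pi/3
R, Phi = [0.275, 0.569], [0.0, 2.0 * math.pi / 3.0]
alpha = math.pi / 2 - 0.1
for _ in range(50000):
    R, Phi = step(R, Phi, 1.0, 1.0, alpha, 1e-3)
```

Both R_a = 0 and R_a = 1 are fixed points of the radial equation, so with a small enough step the integrated R_a stay inside [0, 1].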
In this way, the initial order parameters R_1(0) and R_2(0) and the initial phases Φ_1(0) and Φ_2(0) are the key factors that influence the final states R_1 and R_2. § RESULTS AND ANALYSIS In the numerical simulations, we take the system size as 2N=100, i.e., N=50 for each group. For the convenience of comparison with the above theoretical predictions, we let all the natural frequencies ω_i,j in Eq. (<ref>) be zero. The initial phases are drawn from the circular Cauchy distribution <cit.> g(θ(0))=1-|γ|^2/2π|e^iθ-γ|^2, which can be easily generated from a Lorentzian distribution g(x)=1/π[η/((x-x_0)^2+η^2)], with η being the half width at half maximum and x_0 being the center. Making the transformation X=(x+i)/(x-i), we get a new complex variable X distributed on the unit circle in the complex plane. The phases of X follow the circular Cauchy distribution. By changing x_0 and η, we can easily change the mean and deviation of the circular Cauchy distribution and thus the initial order parameter of the oscillators. In this way, we have observed a variety of CS patterns in the two groups. Figs. <ref>(a) and (b) show two typical CS patterns after the transient process, where (a) denotes the case of coupling strengths λ=λ'=1 and initial order parameters R_1(0)=0.275 and R_2(0)=0.569, and (b) denotes the case of coupling strengths λ=1.5 and λ'=1 and initial order parameters R_1(0)=0.1 and R_2(0)=0.569. We see that in each case, one group is synchronized with R_2=1 and the other has a different R_1<1, implying a breathing CS. For comparison, we numerically calculate the theoretical Eq. (<ref>) and show the results in Figs. <ref>(c) and (d), where the difference between the initial phases of the two groups is taken as ΔΦ=Φ_2(0)-Φ_1(0)=2π/3. In fact, Figs. <ref>(c) and (d) can be considered the corresponding theoretical results of Figs. <ref>(a) and (b). Comparing Fig. <ref>(a) with (c) and Fig.
<ref>(b) with (d), respectively, we see that the theoretical results are qualitatively consistent with the numerical simulations. To show the dependence of CS on the initial conditions in detail, we first fix the initial average phases as ΔΦ=Φ_2(0)-Φ_1(0)=2π/3 and let the initial order parameters R_1(0) and R_2(0) gradually increase from 0 to 1 by changing x_0 and η. Figs. <ref>(a) and (b) show how the stabilized R_1 and R_2 depend on the initial R_1(0) and R_2(0). Comparing Fig. <ref>(a) with (b), we see that R_1 is low when R_2 is high, and vice versa, i.e., they are complementary, indicating that the whole system is always in CS. This is an interesting finding, which tells us that no matter what the initial conditions are, we can always find one group in high coherence while the other is in low coherence; that is, the basin of CS in the model of Eq. (<ref>) is the whole initial-condition space, or CS is robust to initial conditions. This feature is very different from some previous models of CS, where CS is typically observed only for carefully chosen initial conditions. We also show the corresponding theoretical results from Eq. (<ref>) in Figs. <ref>(c) and (d). Comparing Fig. <ref>(a) with (c) and Fig. <ref>(b) with (d), respectively, we see that they are almost the same, indicating the consistency between the numerical simulations and the theoretical results. Then, we study the influence of the initial average phases. For this purpose, we consider a variety of differences ΔΦ=Φ_2(0)-Φ_1(0). As ΔΦ is not neglected in Eq. (<ref>) of the dimensional reduction, the low-dimensional analysis shows the same effect as the numerical simulations with circular Cauchy distributed initial conditions. For this reason, we here only calculate the theoretical solution of Eq. (<ref>). We find that the stabilized CS does depend on the specific value of the initial average phases. As R_1 and R_2 are complementary, we here only calculate the stabilized R_1.
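The circular Cauchy initial phases described above can be sampled directly: draw x from a Lorentzian and push it through the map X=(x+i)/(x-i). A minimal sketch (the values of x_0, η and the sample size are illustrative):

```python
import cmath
import math
import random

def circular_cauchy_phases(n, x0=0.0, eta=0.5, seed=1):
    """Draw n phases: sample x from a Lorentzian with center x0 and half
    width eta (inverse-CDF sampling), then map X = (x + i)/(x - i) onto
    the unit circle and take its argument."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = x0 + eta * math.tan(math.pi * (rng.random() - 0.5))
        out.append(cmath.phase((x + 1j) / (x - 1j)))
    return out

theta0 = circular_cauchy_phases(50)
# initial order parameter of this sample
R0 = abs(sum(cmath.exp(1j * t) for t in theta0)) / len(theta0)
```

Shrinking η concentrates the phases and drives the initial order parameter toward 1, which is how R_1(0) and R_2(0) are tuned.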
Figure <ref> shows four typical cases, where (a)-(d) represent the cases of ΔΦ=π/3, π, π/2 and 0, respectively. It is easy to see that the four patterns in Fig. <ref> are different, indicating the diversity of the CS basin patterns for different initial conditions. The robustness of CS to initial conditions is very interesting. To understand it better, we follow Ref. <cit.> to make a further analysis of Eq. (<ref>). Firstly, we introduce a new parameter A=λ-λ'. As all the frequencies of the oscillators are zero, we rescale the coupling as 1=λ+λ' and thus obtain λ=(1+A)/2 and λ'=(1-A)/2. Therefore, A=0 represents the case of λ=λ', while A=1 represents the case of λ'=0, i.e., only one population. Then, we introduce ΔΦ=Φ_2-Φ_1. For a typical CS, one population is synchronized with R=1; thus we can set its order parameter to unity, i.e., R_1=1 and Ṙ_̇1̇=0. In this way, Eq. (<ref>) becomes Ṙ_2= 1/2R_2(1-R_2^2)[1+A/2 R_2cosα+1-A/2cos(-ΔΦ+α)] ΔΦ̇ = 1/2(1+R_2^2)[1+A/2 R_2sinα+1-A/2sin(-ΔΦ+α)] -[1+A/2sinα+1-A/2R_2sin(ΔΦ+α)]. By letting Ṙ_̇2̇=0, we get three solutions: R_2=1, R_2=0 and R_2=-(1-A)cos(α-ΔΦ)/[(1+A)cosα]. The first solution means a completely synchronized state and the other two mean CS. By checking the values of R_1 and R_2 in both Fig. <ref> and Fig. <ref>, we find that all the blue areas are in between 0.1 and 0.2, indicating that they correspond to the third solution. Therefore, we here focus only on the first two solutions, i.e., R_2=1 and R_2=0. The Jacobian matrix of Eq. (<ref>) is M=[[ a b; c d; ]] with a = 1+A/2R_2cosα+1-A/4cos(α-ΔΦ)-(1+A)R_2^3cosα-3(1-A)/4R_2^2cos(α-ΔΦ) b = 1/2R_2(1-R_2^2)sin(α-ΔΦ) c = 1+A/4sinα+3(1+A)/4R_2^2sinα+1-A/2R_2sin(α-ΔΦ)-1-A/2sin(ΔΦ+α) d = -1-A/4(1+R_2^2)cos(α-ΔΦ)-1-A/2R_2cos(ΔΦ+α). Using linear stability analysis, we find that the solution R_2=0 is unstable while R_2=1 is stable for the same parameters as in Figs. <ref> and <ref>.
This means that it is possible to observe complete synchronization in the initial-condition space. With further linear stability analysis, we find that the invariant manifold with R_1=R_2 found in <cit.> still exists. In order to check this point, we fix the average initial order ⟨ R(0)⟩=(R_1(0)+R_2(0))/2 and look for the basins of the states in the plane of Δ R(0)=R_1(0)-R_2(0) versus ΔΦ(0), i.e., in the same way as Ref. <cit.> did. Fig. <ref>(a) shows the distribution of the stabilized states for the case of ⟨ R(0)⟩=0.75, where DS means the first group is synchronized while the second one is desynchronized, SD means the second group is synchronized while the first one is desynchronized, and SS means both groups are synchronized. Thus, DS and SD are CS while SS is a completely synchronized state. From this figure, we see that the basin of the completely synchronized state (SS) is very narrow and it occurs only when the system changes from the DS state to the SD state or vice versa, which is the same as in Ref. <cit.>. To see this more clearly, Fig. <ref>(b) shows the basin of the synchronized state only, with the basin of CS hidden. These basins of the completely synchronized state are so narrow that the SS state is not easy to observe, which is the reason why we miss the completely synchronized state in Figs. <ref> and <ref>. On the other hand, we notice that in Fig. <ref>, the basins of the states are spiral shaped around the point Δ R(0)=0, ΔΦ(0)=π, indicating the influence of the initial phases. This is very similar to the result of Ref. <cit.>. In order to show how the basins of CS change with the parameters, we calculate the probability of the chimera state for different initial conditions in the parameter plane of A versus π/2-α. Fig. <ref> shows the results. It is easy to see that the probability of CS decreases with decreasing α. When A is large, the probability of CS becomes zero. § DISCUSSION To connect CS and ES, Eq.
(<ref>) has three key aspects. The first one is the asymmetric couplings λ and λ'. When λ>λ', the coupling within each group is greater than that between the two groups. Thus, the oscillators may be synchronized within their own groups but remain unsynchronized with those in the other group. The second one is the control parameter β, which guarantees the appearance of ES. And the third one is the range parameter of the natural frequencies δ. When δ is relatively large, we have both ES and CS-like behaviors. When δ is relatively small, we have only CS. In this sense, we may also consider δ as a parameter connecting CS and ES. One advantage of Eq. (<ref>) is that its CS can be easily observed. The underlying mechanism may be the bistability. It is known that CS is a kind of symmetry breaking of coherence, due to the symmetry breaking in the initial conditions. If a system shows CS, its oscillators should have multi- or bistability so that the sensitivity to initial conditions can evolve into the final coexisting behaviors of coherence and incoherence in the different population groups. Thus, multi- or bistability is a necessary condition for CS. On the other hand, a characteristic feature of ES is the existence of a hysteresis loop in the order parameter. When the coupling strength is located in this hysteresis region, the system has two stable states, one of high coherence and the other of low coherence, separated by an unstable state. When the coupling is increased adiabatically in the bistable region, the feature of low (high) coherence is retained, which results in the hysteresis loop, indicating that bistability is also a necessary condition for ES. Therefore, bistability is the common basis of CS and ES. Because of the correlation between the local order parameter and the coupling strength, our model is more sensitive to the symmetry breaking of initial conditions, which makes CS easier to observe.
On the other hand, we find that in our model, the basin of CS can be very large, which is similar to the large basins of CS in Refs. <cit.>. The spiral-shaped basins of the states are also similar to the basin structure in <cit.>, and remind us of the spiral wave chimeras in <cit.>, although they are different phenomena. In conclusion, we have presented a model to describe both CS and ES. We reveal that in two limits, the system goes to CS or ES, respectively, while in between the two limits, the model can show both CS and ES at the same coupling strength. The frequency distribution parameter δ may seriously influence the final state. When all the natural frequencies are zero, CS is robust to initial conditions, and thus a diversity of CS basin patterns can be observed. These findings have been confirmed by both numerical simulations and theoretical analysis, which improves our understanding of both CS and ES, especially of their connection. X.Z. thanks Prof. Arkady Pikovsky for many useful discussions. The authors thank the reviewers for their valuable comments. This work was partially supported by the NNSF of China under Grant Nos. 11135001 and 11375066, 973 Program under Grant No. 2013CB834100, and the Open Fund from the SKLPS of ECNU. Pikvosky:2003 A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences (Cambridge University Press, Cambridge, 2003). Boccaletti:2006 S. Boccaletti, V. Latora, and Y. Moreno, Phys. Rep. 424, 175-308 (2006). Arenas:2008 A. Arenas, A. Diaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou, Phys. Rep. 469, 93 (2008). Kuramoto:2002 Y. Kuramoto and D. Battogtokh, Nonlin. Phenom. Complex Syst. 5, 380 (2002). Abrams:2004 D. M. Abrams and S. H. Strogatz, Phys. Rev. Lett. 93, 174102 (2004). Omel:2008 O. E. Omelchenko, Y. L. Maistrenko, and P. A. Tass, Phys. Rev. Lett. 100, 044105 (2008). Sethia:2008 G. C. Sethia, A. Sen, and F. M. Atay, Phys. Rev. Lett. 100, 144102 (2008). Bordyugov:2010 G. Bordyugov, A.
Pikovsky, and M. Rosenblum, Phys. Rev. E 82, 035205 (2010).Laing:2009 C. R. Laing, Physica (Amsterdam) D 238, 1569 (2009); Chaos 19, 013113(2009).Martens:2009 E. A. Martens, C. R. Laing, and S. H. Strogatz, Phys. Rev. Lett. 104, 044101 (2010).Wolfrum:2011 M. Wolfrum, O. E. Omelchenko, S. Yanchuk, and Y. L. Maistrenko, Chaos 21, 013112 (2011).Laing:2012 C. R. Laing, K. Rajendran, and I. G. Kevrekidis, Chaos 22, 013132 (2012).Zhu:2012 Y. Zhu, Y. Li, M. Zhang, and J. Yang, Europhys. Lett. 97, 10009 (2012).Panaggio:2013 M. J. Panaggio and D. M. Abrams, Phys. Rev. Lett. 110, 094102 (2013).Dudkowski:2014 D. Dudkowski, Y. Maistrenko, and T. Kapitaniak, Phys. Rev. E 90, 032920 (2014).Omelchenko:2015 I. Omelchenko, A. Provata, J. Hizanidis, E. Schöll, and P. Hövel, Phys. Rev. E 91, 022917 (2015).Jaros:2015 P. Jaros, Y. Maistrenko, and T. Kapitaniak, Phys. Rev. E 91, 022907 (2015).Bohm:2015 F. Böhm, A. Zakharova, E. Schöll, and K. Lüdge, Phys. Rev. E 91, 040901 (2015).Rattenborg:2000 N. C. Rattenborg, C. J. Amlaner, and S. L. Lima, Neurosci. Biobehav. Rev. 24, 817 (2000).Mathews: 2006 C. G. Mathews, J. A. Lesku, S. L. Lima, and C. J. Amlaner, Ethology 112, 286 (2006).Abrams:2008 D. M. Abrams, R. Mirollo, S. H. Strogatz, and D. A. Wiley, Phys. Rev. Lett. 101, 084103 (2008).Pikvosky:2008 A. Pikovsky and M. Rosenblum, Phys. Rev. Lett. 101, 264103 (2008).Ma:2010 R. Ma, J. Wang, and Z. Liu, Europhys. Lett.91, 40006 (2010).Tinsley:2012 M. R. Tinsley, S. Nkomo, and K. Showalter, Nat. Phys. 8, 662 (2012).Hagerstrom:2012 A. M. Hagerstrom et al., Nat. Phys. 8, 658 (2012).Viktorov:2014 E. A. Viktorov, T. Habruseva, S. P. Hegarty, et al., Phys. Rev. Lett. 112, 224101 (2014).Wickramasinghe:2014 M. Wickramasinghe and I. Z. Kiss, Phys. Chem. Chem. Phys.16, 18360 (2014).Martens:2013 E. A. Martens, S. Thutupalli, A. Fourrire, and O. Hallatschek, Proc. Natl. Acad. Sci. 110, 10563(2013).Larger:2013 L. Larger, B. Penkovsky, and Y. Maistrenko, Phys. Rev. Lett. 111, 1(2013).Schoenleber:2014 K. 
Schoenleber, C. Zensen, A. Heinrich, and K. Krischer, New J. Phys. 16, 63024 (2014).Strogatz:1989 S. H. Strogatz, C. M. Marcus, R. M. Westervelt, and R. E. Mirollo, Physica D 36, 23 (1989).Tanaka:1997 H.-A. Tanaka, A. J. Lichtenberg, and S. Oishi, Physica D 100, 279 (1997).Pazo:2005 D. Pazó, Phys. Rev. E 72, 046211 (2005).Gomez:2011 J. Gómez-Gardeñes, S. Gómez, A. Arenas, and Y. Moreno, Phys. Rev. Lett. 106, 128701 (2011).Pecora:1998 L. M. Pecora and T. L. Carroll, Phys. Rev. Lett. 80, 2109 (1998).Leyva:2012 I. Leyva, R. Sevilla-Escoboza, J. M. Buldú, I. Sendiña-Nadal, J. Gómez-Gardeñes, A. Arenas, Y. Moreno, S. Gómez, R. Jaimes-Reategui, and S. Boccaletti, Phys. Rev. Lett. 108, 168702 (2012).Peron:2012 T. K. D. M. Peron and F. A. Rodrigues, Phys. Rev. E 86, 056108 (2012).Coutinho:2013 B. C. Coutinho, A. V. Goltsev, S. N. Dorogovtsev, and J. F. F. Mendes, Phys. Rev. E 87, 032106 (2013).Liu:2013 W. Liu, Y. Wu, J. Xiao, and M. Zhan, Europhys. Lett. 101, 38002 (2013).Ji:2013 P. Ji, T. K. D. M. Peron, P. J. Menck, F. A. Rodrigues, and J. Kurths, Phys. Rev. Lett. 110, 218701 (2013).Zhang:2013 X. Zhang, X. Hu, J. Kurths, and Z. Liu, Phys. Rev. E 88, 010802(R) (2013).Leyva:2013 I. Leyva, A. Navas, I. Sendiña-Nadal, J. A. Almendral, J. M. Buldú, M. Zanin, D. Papo, and S. Boccaletti, Sci. Rep. 3, 1281 (2013).Leyva:2014 X. Zhang, Y. Zou, S. Boccaletti, and Z. Liu, Sci. Rep. 4, 5200 (2014).Su:2013 G. Su, Z. Ruan, S. Guan, and Z. Liu, Europhys. Lett. 103, 48004 (2013).Hu:2014 X. Hu, S. Boccaletti, W. Huang, X. Zhang, Z. Liu, S. Guan, and C. Lai, Sci. Rep. 4, 7262 (2014).Zou:2014 Y. Zou, T. Pereira, M. Small, Z. Liu, and J. Kurths, Phys. Rev. Lett. 112, 114102 (2014).Zhou:2015 W. Zhou, L. Chen, H. Bi, X. Hu, Z. Liu, and S. Guan, Phys. Rev. E 92, 012812 (2015).Zhang:2015 X. Zhang, S. Boccaletti, S. Guan, and Z. Liu, Phys. Rev. Lett. 114, 038701 (2015).Zhang:2014 X. Zhang, Y. Zou, S. Boccaletti, and Z. Liu, Sci. Rep. 4, 5200 (2014).Martens:2016 E. A. Martens, M. J. 
Panaggio and D. M. Abrams, New Journal of Physics 18, 022002 (2016).Feng:2015 Y. Feng and H. Hong, Chinese Physics Letters 32, 060502 (2015).Watanabe:1994 S. Watanabe and S. H. Strogatz, Physica D 74, 197 (1994).Pikovsky:2008 A. Pikovsky and M. Rosenblum, Phys. Rev. Lett. 101, 264103 (2008).Mccullagh:1996 P. Mccullagh, The Annals of Statistics 24, 787 (1996).
X. Zhang, H. Bi, S. Guan, J. Liu, and Z. Liu, "A model bridging chimera state and explosive synchronization," arXiv:1702.07897v1 [physics.soc-ph] (2017).
Local Short Term Electricity Load Forecasting: Automatic Approaches

The-Hien Dang-Ha1, Filippo Maria Bianchi2, Roland Olsson3
1 Department of Informatics, University of Oslo, Norway, Email: hthdang@student.matnat.uio.no
2 Machine Learning Group, University of Tromsø, Norway, Email: filippo.m.bianchi@uit.no
3 Faculty of Computer Sciences, Østfold University College, Østfold, Norway, Email: roland.olsson@hiof.no

Short-Term Load Forecasting (STLF) is a fundamental component of the efficient management of power systems and has been studied intensively over the past 50 years. The emerging development of smart grid technologies poses new challenges as well as opportunities to STLF. Load data, collected at higher geographical granularity and frequency through thousands of smart meters, allows us to build more accurate local load forecasting models, which are essential for the local optimization of power load through demand side management. In this paper, we show that several existing approaches to STLF are not applicable to local load forecasting, either because of long training times, unstable optimization processes, or sensitivity to hyper-parameters. Accordingly, we select five models suitable for local STLF that can be trained on different time-series with limited intervention from the user.
The experiment, which involves 40 time-series collected at different locations and aggregation levels, revealed that the yearly pattern and temperature information are only useful for STLF at high aggregation levels. On the local STLF task, the modified version of double seasonal Holt-Winters proposed in this paper performs relatively well with only 3 months of training data, compared to more complex methods. § INTRODUCTION Load forecasting is an integral part of electric power system operations, such as generation, transmission, distribution, and retail of electricity <cit.>. According to the forecast horizon and resolution, load forecast problems can be grouped into four classes: long-term, mid-term, short-term, and very short-term. In this paper, we focus on Short-Term Load Forecasting (STLF) of hourly electricity load for one day ahead. This model is required as an essential input for the Demand Response (DR) strategy. DR can be defined as "changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to induce lower electricity use at times of high wholesale market prices or when system reliability is jeopardized" <cit.>. The DR model is made possible by the flexibility in generating, storing, and distributing resources <cit.>.
One of the most important applications of the DR model is to limit the peak demand through different Demand Side Management programs, such as real-time pricing or direct load control programs <cit.>, whose effectiveness depends on an accurate and reliable load forecast at different aggregation levels. Due to its fundamental role, STLF has been studied intensively over the past 50 years. However, the deployment of smart grid technologies brings new opportunities as well as challenges to the field. On a smart grid, load data can be collected at a much higher geographical granularity and frequency than before, by means of thousands of smart meters <cit.>. This greater availability of data allows the synthesis of more local load forecasting models, which are essential to optimize power load locally in a demand-response paradigm <cit.>. We refer to a load time-series as "local" when it contains measurements relative to a small geographical region, whose average hourly load goes from several hundred up to several hundred thousand kWh. Despite the great variety of STLF methods proposed in the literature, most of them focus on load time-series relative to high aggregation levels (big towns, cities, or entire countries), whose average goes from several to hundreds of MWh <cit.>. We found that these methods are not applicable to the local STLF task for the following reasons:

* Long training time: unlike STLF at high aggregation levels, local STLF requires training and updating thousands of models at the same time. Predictions are made on an hourly basis for many local regions, and the forecasting models must often be retrained (e.g., each month). This requirement rules out approaches relying on slow derivative-free optimizers, such as evolutionary algorithms or particle swarm optimization <cit.>.
* Unstable optimization process: since thousands of models need to be trained at the same time, a local STLF model needs to be robust to discrepancies in time-series characteristics. For example, including long-term seasonal dependencies in state space models makes their optimization process unstable.
* Sensitivity to hyperparameters: nonparametric techniques such as artificial neural networks and kernel estimation are characterized by a high sensitivity to the hyperparameters of the model. For example, Feed-Forward Neural Networks (FNN) have been proposed and extensively used for STLF since the 1990s <cit.>. However, their prediction performance highly depends on the number of layers, the number of nodes per layer, the regularization coefficients, and the learning rate. Such hyperparameters must be tuned through cross-validation, which is time-consuming, due to the slow gradient descent training procedure, and does not guarantee convergence. Additionally, the FNN approach requires carefully designed preprocessing, such as outlier removal, to work effectively <cit.>. These problems make FNNs unsuitable for the local STLF task. On the other hand, recurrent neural networks such as echo state networks and long short-term memory networks are widely adopted in STLF <cit.>. However, these architectures are not considered in this work, as we focus on model-based approaches. After conducting a comprehensive survey of different STLF approaches, we selected five models, two of which are original variations of existing architectures, proposed in this paper for the first time. The models were chosen (or modified) to overcome the three aforementioned limitations while being characterized by a high degree of automation, both in the training and in the prediction phase.
In our experiments, we process 40 time-series collected from separate locations and characterized by different aggregation levels. To the best of our knowledge, this is the first time a local STLF experiment has been done with such a large number of time-series with different characteristics. As expected, our experiments show that yearly patterns and temperature information are only useful for STLF at high aggregation levels. On very local load time-series (less than several hundred kWh), the modified version of double seasonal Holt-Winters (modifiedDSHW) proposed in this paper performs relatively well with only 3 months of training data, compared to other more complex methods that require years of training data. The remainder of the paper is organized as follows. In Section <ref>, the datasets under consideration are described, and the main characteristics of the load time-series are analyzed. The five proposed models and their origins are presented in Section <ref>. Section <ref> explains the experiment setup and discusses the obtained results. Section <ref> concludes the paper and suggests some future work. § DATA DESCRIPTION The dataset under analysis consists of 40 load time-series collected from two countries, the US and Norway, at different levels of aggregation. Such diversity in the data allows us to benchmark the generalization capability of various forecast methods. Among these 40 time-series, 20 come from the Global Energy Forecasting Competition 2012 (GEFCom2012). This dataset consists of 4 years of hourly load collected from a US utility, with 20 zonal-level series whose average hourly load varies from 10,000 kWh up to 200,000 kWh. The dataset is also accompanied by 11 temperature time-series collected in the area, which can be used to improve forecasting performance. The other 20 load time-series come from Hvaler, a small island in Norway with around 6000 households.
The island has been used as a smart grid pilot for many years, with over 8000 smart meters installed since 2012. The island power grid includes around 100 small distribution substations (including pole-mounted transformers) organized hierarchically. Since there are currently no smart meters installed at these small substations, their loads are estimated by aggregating the readings of the corresponding smart meters installed at households, street lights, or other end consumers. The 20 time-series are relative to small distribution substations. They cover two years (2012-2013) and were selected based on data quality (e.g., the number of missing entries). As opposed to the GEFCom2012 dataset, the Hvaler load time-series are much more local, with an average hourly load below 200 kWh. This allows us to test how different predictive models perform at various aggregation levels. Before delving into the details of each model, we first examine some characteristics of the load signals. §.§ Seasonal Patterns Fig. <ref> shows the hourly load at Hvaler's substation 1 over 2 years. Through simple inspection, we can observe a strong seasonal pattern characterized by high demand for electricity in winter and low demand in summer. This pattern indicates a dependency between weather conditions and power consumption. However, such a relationship also depends on geographic location and type of consumers. Fig. <ref> shows the hourly load at zone 1 of the GEFCom2012 dataset, indicating high demand during summer and winter, and low demand in the other seasons. By analyzing the data in more detail, we can notice intraweek seasonal cycles (the load demand on the weekend is usually lower than on weekdays) and intraday seasonal cycles, which arise from human routines (e.g., peaks at breakfast time and before dinner). Although these yearly, intraweek, and intraday seasonality effects are common in load time-series, their relative importance has not been studied for local STLF.
In our experiment, we observed that the yearly pattern is useful only if the load time-series is highly aggregated. §.§ Weather Effects In load forecasting, weather conditions have always been an important factor. Although many meteorological elements such as humidity, wind, rainfall, cloud cover, and thunderstorms could be accounted for, the most influential and widely used is temperature, whose measurements are also easier to retrieve. In fact, temperature variables can explain more than 70% of the load variance in the GEFCom2012 dataset <cit.>. The scatter plot in Fig. <ref> shows the relationship between load and temperature in both Hvaler and the US. While the "V" shape is consistent across the two cases, there are still obvious differences in the relationship, which could be explained by differences in geographical location, temperature comfort levels, heating/cooling technology, or type of consumers (e.g., industrial or residential units). §.§ Calendar Effects People change their daily routines on calendar events, such as holidays, festivities, and special events (e.g., football matches, transportation strikes), with a corresponding change in electricity demand. Such situations represent outliers and could be treated separately to improve model accuracy. In our study, only national holidays are taken into account, due to the lack of information on other events. §.§ Long-Term Trend The scale, variation, and other properties of the load signal can change over time due to changes in population, technology, or economic conditions. In Fig. <ref>, we can see a tendency of increasing consumption over the years. In our experiment, two methods explicitly model and detrend the load time-series in the first step. The other methods either model the trend implicitly or ignore it. This is because the long-term trend can be considered constant in the short term and may not contribute much to one-day-ahead load forecasting.
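The intraday and intraweek cycles discussed in this section can be checked numerically by inspecting the autocorrelation of the load at lags of 24 and 168 hours. The sketch below (Python rather than R; the synthetic hourly series is a stand-in for the real load data, not taken from the datasets above) illustrates the idea:

```python
import numpy as np

def autocorr(y, lag):
    """Correlation between the series and a lagged copy of itself."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    a, b = y[:-lag], y[lag:]
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Synthetic hourly load: a daily cycle, a weaker weekly cycle, and noise.
rng = np.random.default_rng(0)
hours = np.arange(24 * 7 * 8)  # eight weeks of hourly data
load = (10 + 3 * np.sin(2 * np.pi * hours / 24)
        + 1 * np.sin(2 * np.pi * hours / 168)
        + 0.2 * rng.standard_normal(hours.size))

print(autocorr(load, 24), autocorr(load, 168))  # both close to 1
```

For a real local load series the peaks at these two lags would be less pronounced but still clearly visible in a correlogram.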
§ METHODOLOGY In this section, we review several important STLF approaches. We divided the STLF methodologies of interest into four main categories: Averaging, Linear State Space, Decomposition, and Data-Driven approaches. This classification is particularly meaningful for our analysis, but other taxonomies have been proposed in the literature <cit.>. For each category we provide a short overview, pointing out the main advantages and disadvantages encountered in our application scenario. Based on this analysis, one baseline and five potential solutions were implemented and tested on the Hvaler and GEFCom2012 data. §.§ Averaging Approach Despite being very basic, this is still a popular method due to its simple implementation and straightforward interpretation <cit.>. The averaging model makes predictions based on linear combinations of consumption values from "similar" days and was used as a benchmark in <cit.>. The forecast is computed as ŷ_t(k) = (y_{t+k-s_2} + y_{t+k-2s_2} + y_{t+k-3s_2} + y_{t+k-4s_2})/4, where y_t is the demand in period t, k is the forecast lead time, and s_2 is the so-called second seasonal cycle, which is the intraweek cycle s_2 = 24·7 = 168. The model predicts future load by averaging the corresponding observations in each of the previous four weeks. As previously discussed, there are three typical seasonal cycles in load time-series: the intraday, intraweek, and yearly cycles. In this paper, their cycle lengths (in hours) are denoted respectively by s_1 = 24, s_2 = 24·7 = 168, and s_3 = 24·365.25 = 8766. Note that we fixed the length of each seasonal cycle and did not re-estimate it for different time-series, because the estimation requires analyzing a periodogram manually and is hard to automate. At first, we intended to use the averaging model as the baseline to estimate the forecast difficulty.
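The averaging predictor above translates directly into code. A minimal sketch (Python rather than R; the array `y` holding the hourly load history is a hypothetical placeholder for illustration):

```python
import numpy as np

def averaging_forecast(y, k, s2=168):
    """Predict the load k hours ahead as the mean of the observations
    at the same hour of the week in each of the previous four weeks.
    Assumes at least four full weeks of hourly history in y."""
    t = len(y) - 1                          # index of the latest observation y_t
    lags = [t + k - j * s2 for j in range(1, 5)]
    return float(np.mean([y[i] for i in lags]))
```

For a perfectly week-periodic series this reproduces the true value exactly; on real data it leaves strong residual autocorrelation, as noted next.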
However, the model produced highly autocorrelated errors in the residual series and could not serve as a good baseline. Therefore, we decided to train an additional ARIMA model on the residual series produced by the averaging model. We call this the avgARIMA model and use it as the baseline in our experiment. §.§ Linear State Space Approach The state space approach refers to models that can be written in linear state space form, consisting of a set of states with an initial distribution (usually Gaussian), a measurement equation, and a Markovian transition equation. Although state space models can be extended to include exogenous variables, such as temperature, the univariate setting is still the most popular in STLF. Recent studies have shown that, although in the long run the load is strongly influenced by meteorological conditions and special events, a univariate model is sufficient at shorter lead times <cit.>. The two most common and accurate state space models reported for the STLF task in the literature are the Auto-Regressive Integrated Moving-Average (ARIMA) and Holt-Winters exponential smoothing.
§.§.§ ARIMA The ARIMA model was adopted in STLF back in 1987 <cit.>, where a double seasonal ARIMA model was tested (intraday and intraweek cycles). This approach remains popular, with extensions to include exogenous variables or intrayear seasonal cycles <cit.>. One big disadvantage of ARIMA is that the model hyperparameters (such as the AR, I, and MA orders, as well as the orders of the seasonal AR, I, and MA terms) are usually derived from the Box-Jenkins test, which is hard to automate and still requires human expertise to examine the partial correlogram of the time-series <cit.>. These hyperparameters can be heuristically fixed a priori <cit.>. However, during our experiments we noted that in an ARIMA model with (double) seasonality, the optimization process becomes highly unstable when the AR, I, MA orders and all the seasonal AR, I, and MA orders are fixed to arbitrary values. The Akaike Information Criterion can also be used to set the ARIMA hyperparameters. However, a complete search over all possible models is time-consuming, especially for seasonal ARIMA. Therefore, in this paper, we only use ARIMA to correct the autocorrelation in the residual series produced by other models. This is common practice in time-series forecasting to improve accuracy when the main model has autocorrelated errors. §.§.§ Holt-Winters Holt-Winters is another popular state space model that accommodates the intraday and intraweek seasonal cycles that commonly appear in load time-series. Taylor et al.
(2003) <cit.> introduced the double seasonal Holt-Winters method (DSHW); its important advantage, which makes it suitable for our local load forecasting problem, is that it only requires the lengths of the two seasonal cycles to be specified. Indeed, we did not encounter any optimization problems when fixing these two cycles to the intraday s_1 = 24 hours and intraweek s_2 = 168 hours. Implementation details are provided in Section <ref>, where we also introduce a modified version of the original DSHW that yields significantly better performance in our experiment. In 2010, Taylor et al. <cit.> proposed the triple seasonal Holt-Winters (TSHW), which enables the model to accommodate the intrayear seasonal cycle. However, it turned out that the extra seasonality makes the training process much slower and unstable. In fact, the success of the optimization process depends on the choice of initial values for the states. To address this issue, Taylor et al. generated 10^4 initial vectors as possible initializations for the variables of the model. Since this process requires significant computational power, TSHW is unsuitable for the local STLF task. §.§ Data-Driven Approach Instead of modeling the underlying physical processes, data-driven methods try to discover consistent patterns from historical data, following a machine learning approach. A mapping between the input variables and the load is learned and then used for prediction. The input and output variables are designed according to the forecasting task. In the following, we review two of the main approaches in STLF. §.§.§ Nonlinear Non Auto-Regressive Regression This approach models the load as a non-linear function of exogenous input variables only, without using autoregressive terms.
According to the nature of the data discussed in Section <ref>, potential exogenous variables of interest are: time of day, time within week, time of year, linear trend, temperatures, or smoothed temperatures. Frameworks such as random forests and gradient boosted models can be used to map inputs to the desired output. However, since these approaches do not explicitly model the autocorrelation that almost always exists in load time-series, they must be used in combination with other techniques to be effective, such as a state space model and a long-term decomposition model <cit.>. This requires extra effort in model deployment and management. For this reason, we did not include this approach in our study. §.§.§ Nonlinear Auto-Regressive with Exogenous Inputs (NARX) A NARX model computes the next value of a variable from its own previous values and from current and past values of exogenous series. The basic formulation reads y_t = F(y_{t-1}, y_{t-2}, …, x_t, x_{t-1}, x_{t-2}, …), where F(·) is a non-linear function, which can be modeled by any general-purpose machine learning model such as an artificial neural network (ANN) or a support vector machine (SVM). As discussed in Section <ref>, an ANN depends on several hyper-parameters and its training can be cumbersome. An ANN is also prone to overfitting and sensitive to outliers <cit.>. The problem can be addressed by replacing the ANN with a lower-variance model such as the SVM, which has been adopted in several studies <cit.>. A different approach is to use the random forest (RF), as in <cit.>. Thanks to its bagging, data sub-sampling, and random feature selection, the RF model is capable of capturing complex patterns while maintaining a low variance <cit.>. This approach is called NARX-RF and is specified in Section <ref>. §.§ Time-Series Decomposition Approach The time-series decomposition approach deconstructs a time-series into several components, each representing a different kind of pattern.
According to the nature of load time-series discussed in Section <ref>, potential components are the long-term trend, the intrayear, intraweek, and intraday cycles, the relationship between temperature and load, holiday events, and so on. In GEFCom2012, Lloyd and James (2014) <cit.> used a Gaussian Process to decompose the load time-series. Their Gaussian Process contains a set of different kernels, each designed to capture a different component of the time-series. The long-term trend is captured by a squared exponential kernel, the intrayear cycle by a periodic kernel over time, and the relationship between temperature and load by a squared exponential kernel over temperature. Although this hybrid approach works relatively well on the GEFCom2012 data, a GP model needs to be carefully designed by hand and requires special treatment for different load signals <cit.>. Therefore, we found the Gaussian Process unsuitable for the local STLF task and did not include it in the experiment. Another popular way to decompose a load time-series is to use linear additive models, where the load is modeled as a linear combination of various independent features. The learned model is interpretable, easy to implement and automate, and able to achieve high accuracy. Many different features have been suggested in the literature to capture the different load components. For example, the yearly cycle can be modeled by 8 Fourier series <cit.> or spline functions <cit.>; the relationship between temperature and consumption can be modeled by piecewise linear <cit.>, quadratic <cit.>, or spline functions <cit.>; the monthly change in this relationship can be modeled by interaction terms between temperature and a month-of-year variable <cit.>. Among the linear additive models proposed in the literature, we found that the TBATS model suggested by De Livera et al. (2011) <cit.> and the semi-parametric additive model suggested by Goude et al.
(2014) <cit.> are the two most suitable models for the local STLF task. The name TBATS is an acronym for the key features of the model: Box-Cox transforms, ARMA errors, Trend, and Trigonometric Seasonal components. The semi-parametric additive approach, on the other hand, is based on the Generalized Additive Model (GAM). Precise specifications of these two models are given in Section <ref>. § EXPERIMENTS AND DISCUSSION §.§ Experiment Setting In this section, we show the experimental results of the five chosen models on the 40 time-series described in Section <ref>. Each time-series is marked with four testing periods, corresponding to the following weeks of the final year of data: 16, 28, 40, and 52-53 (note that week 53 contains at most one day). These weeks were chosen to provide a fair estimate of model performance across different seasons, including the holidays during the final week of the year. The testing periods are shown in Fig. <ref>. Within each testing period, we performed multi-step rolling forecasts without re-estimation; the models were re-estimated only between testing periods. We did not re-estimate the models within a testing period because doing so is impractical: retraining and updating thousands of models every hour, or even every day, is infeasible, as it requires a tremendous amount of computing resources. We used the Mean Absolute Percentage Error (MAPE) to compare model performance, as is common in the energy forecasting community. The MAPE is calculated separately at each of the 24 prediction horizons, up to 24 hours ahead. Besides accuracy, we also report the training time of each model, which is an important factor in deciding which model to use in practice but is rarely mentioned in the literature. Precise model specifications of the baseline and all five chosen methods are given in the following section.
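For reference, the per-horizon MAPE used in our evaluation can be computed as follows (a minimal Python sketch; the array shapes and names are assumptions for illustration, not part of our implementation):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * float(np.mean(np.abs((a - f) / a)))

def mape_per_horizon(actual, forecast):
    """MAPE computed separately at each lead time.

    actual, forecast: arrays of shape (n_days, 24), one row per forecast
    day and one column per hourly horizon. Returns 24 MAPE values."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((a - f) / a), axis=0)
```

Note that the MAPE is undefined when the actual load is zero, which can occur in very local series; such entries must be excluded or the metric replaced before evaluation.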
The whole experiment can be easily reproduced from the data and code publicly available at: §.§ Model Specifications §.§.§ avgARIMA The avgARIMA model was used to give a baseline performance for our experiment. First, an averaging model is used to predict the future load by averaging the corresponding observations in each of the previous four weeks, as specified in (<ref>). Its 3-month residual time-series is then used to train an ARIMA model. A stepwise search was used to optimize the AR, I, and MA orders. The procedure is implemented by the auto.arima() function in the R forecast package. §.§.§ originalDSHW and modifiedDSHW The multiplicative formulation of the original double seasonal Holt-Winters model (DSHW) is given by <cit.>: l_t = α(y_t/(d_{t-s_1} w_{t-s_2})) + (1-α)l_{t-1}, d_t = θ(y_t/(l_t w_{t-s_2})) + (1-θ)d_{t-s_1}, w_t = ω(y_t/(l_t d_{t-s_1})) + (1-ω)w_{t-s_2}, ŷ_t(k) = l_t d_{t-s_1+k} w_{t-s_2+k} + ϕ^k(y_t - (l_{t-1}d_{t-s_1}w_{t-s_2})), where l_t is the smoothed level; d_t and w_t are the seasonal indices for the intraday and intraweek seasonal cycles, respectively; and α, θ, and ω are the smoothing parameters. The term involving the parameter ϕ in the forecast equation (<ref>) is a simple adjustment for first-order autocorrelation. This model is implemented in the R forecast package as dshw(), and is referred to as origDSHW in this paper. During the experiment, we found that the performance of the origDSHW model can be improved significantly by employing a different objective function. Instead of the sum of squared errors of the in-sample 1-hour-ahead forecasts, we use the sum of squared errors of the in-sample forecasts over all 24 horizons. Moreover, we also increased the upper limit of the ϕ parameter from 0.9 to 0.99. This model is called modDSHW in this experiment. Both the origDSHW and modDSHW models were trained on 3 months of data.
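The DSHW recursions above translate directly into code. A minimal sketch (Python rather than the R dshw() implementation; the crude seasonal-index initialization and the fixed smoothing parameters are simplifications for illustration, not the optimized values used in the experiment):

```python
import numpy as np

def dshw_forecast(y, k, alpha=0.1, theta=0.1, omega=0.1, phi=0.9, s1=24, s2=168):
    """k-step-ahead forecast from the multiplicative double seasonal
    Holt-Winters recursions, run over the hourly history y."""
    y = np.asarray(y, dtype=float)
    # Crude initialization from the first week (dshw() estimates these properly).
    level = y[:s2].mean()
    d = y[:s1] / level                           # intraday seasonal indices
    w = y[:s2] / (level * np.tile(d, s2 // s1))  # intraweek seasonal indices
    err = 0.0
    for t in range(s2, len(y)):
        # d_old = d_{t-s1}, w_old = w_{t-s2}: the indices before this update.
        d_old, w_old, l_old = d[t % s1], w[t % s2], level
        level = alpha * y[t] / (d_old * w_old) + (1 - alpha) * l_old
        d[t % s1] = theta * y[t] / (level * w_old) + (1 - theta) * d_old
        w[t % s2] = omega * y[t] / (level * d_old) + (1 - omega) * w_old
        err = y[t] - l_old * d_old * w_old       # one-step error for the AR(1) term
    t = len(y) - 1
    return level * d[(t + k) % s1] * w[(t + k) % s2] + phi ** k * err
```

The modDSHW variant differs only in the objective used to fit α, θ, ω, and ϕ (squared errors over all 24 horizons rather than 1 hour ahead) and in the wider admissible range for ϕ.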
§.§.§ NARX-RF Although a NARX model with SVM yielded good performance in other studies, we found it hard to automate, since its performance depends heavily on the choice of hyper-parameters: the cost of errors C and the width of the ϵ-insensitive tube. Moreover, the optimal values of these hyper-parameters vary considerably across load signals. Therefore, instead of the SVM, a random forest was used; this is referred to as the NARX-RF model in this paper. To build the random forest, the ranger package in R was used. To avoid multi-step-ahead predictions, a separate random forest was trained for each lead time. For lead time h, the set of inputs consists of the load values at lags 1, 2, 3, s_1-h, 2s_1-h, 3s_1-h, s_2-h, and 2s_2-h; two temperature-related exogenous variables: the temperature and the exponentially smoothed temperature; and calendar variables: time of day and day of week. The smoothed temperature is often used in STLF to take into account the physical inertia of buildings and the delayed effect of temperature on consumption <cit.>. The coefficient of the temperature exponential smoothing process was set to 0.85. We kept all the default settings of the ranger function, which sets the number of trees ntree = 500 and the number of candidate features at each split mtry = 3. The subsampling ratio was set so that each tree receives 5000 data points to train on. The model makes use of all the available data up to the testing point. §.§.§ TBATS The TBATS model was introduced by De Livera et al. in 2011 <cit.> to solve the forecasting problem in time-series with complex seasonal patterns, such as multiple seasonal periods or high-frequency seasonality. The model incorporates Box-Cox transformations, a linear trend, Fourier representations with time-varying coefficients, and ARMA error correction. The method involves a simple yet efficient estimation procedure, which makes it suitable for the local STLF problem.
In this experiment, the exact TBATS model described in <cit.> was used without any modification. The TBATS implementation provided in the forecast package was used with all the default settings unchanged. §.§.§ SemiParametric The semi-parametric additive model was first introduced by Goude et al. in the GEFCom2012 competition <cit.>. In 2014, Goude et al. tested the method's generalization ability by using it for automatic short- and medium-term load forecasting on 2206 large-scale substations <cit.>. Here we present a short explanation of the method, together with some small adaptations we made to make it more appropriate for the local STLF task. For brevity, this method is called SemiPar in this paper. The SemiPar method splits the load into three parts: Z_t = Z_t^lt + Z_t^mt + Z_t^st, where Z_t is the electrical load at time t, and Z_t^lt is the long-term part of the load, corresponding to low-frequency variations such as long-term trends or economic effects. Z_t^mt is the medium-term part, incorporating daily to weekly effects, meteorological effects, and calendar effects. The short-term part, Z_t^st, contains everything that cannot be captured on a large temporal scale but can be obtained locally in time. We implemented Z_t^lt and Z_t^mt exactly as described in <cit.>.
However, for the short-term part Z_t^st, we use an ARIMA model (optimized by the auto.arima() function) to capture the auto-correlation in the residual time series after removing the long-term and medium-term parts. The long-term forecast uses a combination of generalized additive models (GAMs) and kernel regression, while the medium-term forecast uses GAMs. The GAM model with the generalized cross-validation criterion is implemented in the R package mgcv, while the kernel regression is the Nadaraya-Watson model, which is available through the bbemkr package. For the long-term model, we aggregate the consumption and temperature by month, denoted by Z_t^monthly and T_t^monthly. Then we estimate the monthly consumption using the following semi-parametric additive model <cit.>:

Ẑ_t^monthly = ∑_q=1^12 c_q I_Month_t = q + f(T_t^monthly) + ϵ_t

where: * I_Month_t = q is an indicator variable which is equal to 1 when the month of observation t is q (from 1 to 12), and 0 otherwise. * f is the effect of the monthly temperature, estimated by thin plate regression splines (the default setting in the mgcv package).

The monthly estimated residuals are then obtained as follows:

ϵ̂_t^monthly = Z_t^monthly - Ẑ_t^monthly

The residuals are then smoothed and interpolated to hourly frequency using Nadaraya-Watson kernel regressors, with Gaussian kernels and a bandwidth of 12. These smoothed residuals are a good estimate of the low-frequency effects, containing neither annual seasonality nor weather effects. They are taken as Z_t^lt and are smooth by construction; they are therefore extrapolated as constants over the one-day horizon. By removing Z_t^lt from the original load, we get the signal Z_t^det, which contains Z_t^mt and Z_t^st. We fit one mid-term model for each hour of the day, so that we have 24 mid-term models. These mid-term models are GAMs of the following form <cit.>:

Z_t^det = ∑ m_q I_DayType_t = q + g_1(θ_t) + g_2(T_t) + h(toy_t) + ϵ_t

where: * Z_t^det is the de-trended electrical demand at time t.
* DayType_t is the type of day of observation t: 1 for Sunday, 2 for Monday, 3 for Tuesday, 4 for Wednesday, 5 for Thursday, 6 for Friday, 7 for Saturday, 8 for Christmas and New Year’s Day, 9 for Christmas Eve, 10 for Independence Day, and 11 for Thanksgiving. * θ_t is the smoothed temperature, obtained via exponential smoothing of the real temperature T_t: θ_t = (1-0.85)T_t + 0.85θ_{t-1}. * toy_t is the time of year, i.e. the position of observation t within the year; h(toy_t) corresponds to the smooth yearly cycle of the load. * All the g(.) functions are modeled by thin plate regression splines, while the h(.) function is modeled by cyclic cubic regression splines.

An ARIMA short-term model is then built to capture patterns in the residuals after removing Z_t^lt and Z_t^mt from Z_t.

§.§ Experiment Results

The whole experiment was run on an Intel Core i7-6700K 4.0 GHz machine with 8 cores. The training time of each method is reported in Fig. <ref>, where the CPU time was measured for one core. The whole experiment took about 12 hours to complete. Fig. <ref> shows a comparison between the median MAPE of each method at different prediction horizons on the two datasets, GEFCom2012 and Hvaler. On the GEFCom2012 dataset, the SemiPar method is clearly the best model. This is expected, since it is the only model that had already been tested, and performed well, in a large-scale experiment with thousands of time series. It is also the only model that explicitly captures all the patterns discussed in section <ref>, including the long-term, mid-term, and short-term patterns together with the temperature effect. However, on the Hvaler dataset, where all the loads are collected at a much lower aggregation level, the SemiPar method exhibits its limitations. It performs only slightly better than the NARX-RF approach at the first ten horizons and then becomes worse as the prediction horizon increases.
This can be explained by the fact that the load time series in Hvaler are much noisier than in the GEFCom2012 dataset, since they aggregate only a small number of consumers. Their long-term and mid-term trends are therefore less consistent in the long run, which makes the decomposition approach less effective. The load in Hvaler was also collected over a shorter period (2 years), which makes the estimation of the long-term and mid-term components less accurate. On the other hand, short-term processes such as the intra-week and intra-day cycles and innovations become more influential in the Hvaler dataset. This explains why the modDSHW method, which uses only 3 months of training data and models only the intra-week and intra-day seasonality, can slightly outperform SemiPar at horizons beyond 10. The second-best model on both the GEFCom2012 and Hvaler datasets is NARX-RF. It performs consistently well on the two datasets at all prediction horizons, and is significantly worse than SemiPar only at early horizons. This consistency in performance is an important advantage if the system contains load time series collected at many different scales and we want to use only one forecasting method to simplify the deployment process. However, one has to consider its running time, since NARX-RF is more than one order of magnitude slower than the other methods. Our modifications of the DSHW model turn out to be very effective: the modDSHW method significantly outperforms the orgiDSHW method in every case. This suggests that one should always apply these modifications when the DSHW model is of interest. The modDSHW method performs surprisingly well on the Hvaler dataset, even though it does not use temperature information and was trained on only 3 months of data. We therefore believe that temperature information contributes only marginally to the forecasting accuracy on very local load time series like Hvaler.
The TBATS method performs badly on both datasets, at some points even worse than the avgARIMA baseline. This may be because the way it decomposes the time series is not suitable for the load signal.

§ CONCLUSIONS AND FUTURE WORK

In this paper, we looked for solutions to the local one-day-ahead load forecasting problem, which requires modelling thousands of load time series automatically, without human intervention. One baseline and five models have been proposed: avgARIMA, orgiDSHW, modDSHW, NARX-RF, TBATS, and SemiPar. These models were tested on 40 different load time series, collected in the US and Norway at different aggregation levels and with different characteristics. The experimental results show that SemiPar has superior performance on highly aggregated load, at the cost of a long historical data requirement. NARX-RF, on the other hand, performs consistently well in many cases, at the expense of a long training time. On load time series at low aggregation levels, our modified version of the DSHW model works surprisingly well with only 3 months of training data and without using temperature information. If the historical data is limited, which is the case when a new smart grid is installed, the modDSHW model is highly recommended. The experiment also suggests that at low aggregation levels, long-term underlying processes (e.g., trend or intra-year cycle) and temperature information do not contribute much to the forecasting accuracy. One could likely develop a better and more general model for the task by automatically combining or selecting among the methods proposed in this paper. However, one must acknowledge that this would complicate the deployment and maintenance process, in which thousands of models are involved.
http://arxiv.org/abs/1702.08025v1
{ "authors": [ "The-Hien Dang-Ha", "Filippo Maria Bianchi", "Roland Olsson" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170226123929", "title": "Local Short Term Electricity Load Forecasting: Automatic Approaches" }
Inferring processes of cultural transmission: the critical role of rare variants in distinguishing neutrality from novelty biases

James P. O'Dwyer^1 & Anne Kandler^2

^1 Department of Plant Biology, University of Illinois, Urbana IL 61801 USA
^2 Max Planck Institute for Evolutionary Anthropology, Department of Human Behavior, Ecology and Culture, Leipzig, Germany

Abstract. Neutral evolution assumes that there are no selective forces distinguishing different variants in a population. Despite this striking assumption, many recent studies have sought to assess whether neutrality can provide a good description of different episodes of cultural change. One approach has been to test whether neutral predictions are consistent with observed progeny distributions, which record the number of variants that have produced a given number of new instances within a specified time interval: a classic example is the distribution of baby names. Using an overlapping generations model we show that these distributions consist of two phases: a power law phase with a constant exponent of -3/2, followed by an exponential cut-off for variants with very large numbers of progeny. Maximum likelihood estimation of the model parameters provides a direct way to establish whether observed empirical patterns are consistent with neutral evolution. We apply our approach to a complete data set of baby names from Australia. Crucially, we show that analyses based on only the most popular variants, as is often the case in studies of cultural evolution, can provide misleading evidence for underlying transmission hypotheses. While neutrality provides a plausible description of the progeny distributions of abundant variants, rare variants deviate from neutrality. Further, we develop a simulation framework that allows for the detection of alternative cultural transmission processes.
We show that anti-novelty bias is able to replicate the complete progeny distribution of the Australian data set.

Keywords: Cultural transmission, neutral evolution, pro-novelty bias, anti-novelty bias, progeny distribution, power law

§ INTRODUCTION

Most theoretical modelling frameworks of cultural evolution make the simplifying assumption that innovations are the product of erroneous cultural transmission, resulting in the introduction of cultural variants not previously seen in the population at low abundances <cit.>. But regardless of the mechanisms underlying the occurrence of any particular innovation, its subsequent fate (i.e. whether it goes extinct immediately or is able to spread through the population and reach a certain degree of visibility) provides a window into the processes of cultural transmission present in the population. For example, the `persistence' of a large number of innovations might point to population-level preferences for novel or rare variants. As a large number of such cultural transmission hypotheses have been proposed in the literature <cit.>, the question of whether we can develop systematic approaches to distinguish between different transmission hypotheses using aggregated population-level data has gained importance. Seminal work by Bentley and colleagues <cit.> on this topic has focused on distinguishing broadly between neutral and non-neutral cultural transmission processes. Neutral models of cultural transmission make the assumption that there are no selective differences between variants, so that the dynamics of a new variant are biased neither towards proliferation nor towards extinction. This hypothesis results in a particular kind of stochastic dynamics, known as drift.
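Drift of this kind is easy to illustrate in a few lines. The sketch below uses a Wright-Fisher-style copying scheme with innovation (the models analysed later in this paper use overlapping generations instead, and all parameter values here are arbitrary):

```python
import random

def neutral_generation(pop, mu, next_label):
    """One generation of neutral copying: each offspring copies a uniformly
    chosen parent, except that with probability mu it is an innovation,
    i.e. a brand-new variant label."""
    offspring = []
    for _ in range(len(pop)):
        if random.random() < mu:
            offspring.append(next_label)
            next_label += 1
        else:
            offspring.append(random.choice(pop))
    return offspring, next_label

random.seed(1)
pop, next_label = [0] * 100, 1   # start from a single variant type
for _ in range(50):
    pop, next_label = neutral_generation(pop, 0.01, next_label)
```

No variant is favoured: fluctuations in variant frequencies arise purely from the random sampling of parents.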
In balancing the utility and availability of cultural data, the studies mentioned above identified the progeny distribution as a way to distinguish the neutral hypothesis from others. The progeny distribution logs the abundances of cultural variant types which produce k new individuals over a fixed period of time. Bentley and colleagues have estimated the form of the neutral progeny distribution through simulation techniques <cit.>, concluding that the progeny distribution takes the form of a power law. The exponent of this power law has been fitted as a function of the innovation rate and the total population size. These theoretical predictions have been compared against empirical data on the choice of baby names, US patents and their citations, and pottery motifs, and the analyses provided support for the neutral hypothesis <cit.>. Despite this progress, an analytical expression for the neutral progeny distribution has so far been lacking, which has limited further developments in understanding whether observed distributions are consistent with neutrality or demand non-neutral explanations. In this manuscript we derive the first analytical representation of the neutral progeny distribution for large time intervals, using a neutral model in which variants are not constrained to produce new instances at discrete time points, known as an overlapping generations model. We show that the neutral progeny distribution consists of two phases. For small numbers of progeny there is a power law phase. This is broadly consistent with the fits to earlier numerical simulations, but here we find that this power law has a fixed, universally-applicable exponent of -3/2. Following this power law phase, for large enough numbers of progeny there is eventually an exponential drop-off in this distribution.
The onset of the exponential decline depends on the innovation rate: the larger the rate, the earlier the onset. The analytical representation of the progeny distribution allows for maximum likelihood estimation of the model parameter and therefore provides a direct way of parametrizing neutral models using cultural data, and of subsequently evaluating the consistency between observed data and the neutral hypothesis. Importantly, we establish that analyses based on only the most popular variants, as is often the case in studies of cultural evolution, can provide misleading evidence for neutral evolution. Further, we show that the progeny distribution is a statistic that is able to detect alternative cultural transmission hypotheses, in particular bias for or against novelty, and is therefore potentially capable of distinguishing between different processes of cultural transmission based on population-level data. To that end we develop a simulation procedure which includes pro- and anti-novelty bias. Anti-novelty bias is characterized as a preference for variants which have been present in the population for a long time (i.e. innovations possess an intrinsic disadvantage), while pro-novelty bias describes a preference for `young' variant types that have only recently been introduced into the cultural system (i.e. innovations possess an intrinsic advantage). In general we find that the progeny distribution reacts sensitively to these changes in the transmission process. Related results have been found by <cit.>, who concluded that strong frequency-dependent biases alter the shape of the progeny distribution. They also note that some transmission biases will generate population-level predictions indistinguishable from neutral predictions.
Following <cit.>, we apply our framework to an Australian data set recording the first names of newborns (the code of the simulation framework can be downloaded from <https://github.com/odwyer-lab/neutral_progeny_distribution>). We demonstrate the importance of rare variants for reliable inference of processes of cultural evolution from aggregated population-level data in the form of progeny distributions. While the temporal dynamics of abundant names are consistent with neutrality, the analysis based on the complete distribution, including popular and rare names, provides evidence against neutral evolution. This means that progeny distributions generate reliable inferences only in situations where the complete data set is available. We find that anti-novelty bias is able to replicate the complete progeny distribution of the considered Australian baby name data.

§ NEUTRAL THEORY AND INNOVATION

Neutral models have provided basic null models in fields stretching from population genetics <cit.> and ecology <cit.> to cultural evolution and the social sciences <cit.>. At the core of all varieties of neutral theory is a group of competing variants, and the assumption that selective differences between these variants are absent. In addition, most neutral models contain the possibility of innovation, i.e. the introduction of entirely new variants into the system. The most common approach to modeling an innovation event is to assume that at some rate a parent individual will produce an offspring of a new type instead of an offspring of the same parental type. This new variant then undergoes the same dynamics as all extant variants. The assumptions of neutrality are often at odds with the vast stores of knowledge biologists and anthropologists have accumulated for natural and social systems.
For example, we know that even closely related biological species differ in their phenotypes, and we might expect that these differences are important for predicting and understanding the properties of ecological communities. And yet, despite this obvious roadblock, neutral models in ecology have had considerable success in predicting patterns of biodiversity observed at a single snapshot in time <cit.>. The same is true for cultural evolution, where humans are generally not thought of as making decisions at random. Neutrality would imply that individuals do not possess any preferences for existing cultural variants, nor does the adoption of a particular cultural variant provide an evolutionary advantage over the adoption of a different variant. While these inherent assumptions are likely to be violated in the cultural context (for detailed discussions see <cit.>), population-level patterns of various observed episodes of cultural change nevertheless resemble those expected under neutrality <cit.>. Statistical tests of neutral theory often focus on static patterns of diversity, observed at one moment in time, such as the balance of rare and dominant species in a population. It has been shown that neutral steady-state predictions for the distribution of species abundances often closely match observed distributions.
In contrast, neutral theories in ecology have had less success in predicting the dynamics of diversity, from decadal-scale species abundance fluctuations to the geological ages of species <cit.>. Similarly, recent work in cultural evolution has pointed to the importance of analyzing temporal patterns of change as opposed to static measures of cultural diversity <cit.>, and to the influence of aggregation processes, particularly in archaeological case studies <cit.>, when testing for departures from neutrality. At the very least, these discrepancies bring to light the importance of which statistics are chosen to test a hypothesis like neutral evolution. In this light, a recent study <cit.> analysed the patterns of frequency change, in particular the kurtosis of the distribution of changes over time, of stable words in the Google Ngram data base. Interestingly, this approach identified words under selection: kurtosis values close to zero signaled neutrality, while deviations from zero were indicative of selection. In this paper we apply ecological neutral theory to cultural data. We use a model that allows for overlapping generations, an appropriate assumption when analyzing distributions of cultural variants, and that admits an analytical representation of the progeny distribution. In the following we provide a brief review of the characteristics of this model.

§.§ Neutral Theory in Ecology

It is assumed that the temporal dynamics of species are governed by reproduction and competition, occurring in continuous time with a given set of rates. The full, interacting version of this model can be described by stochastic Lotka-Volterra systems (either with symmetric, pairwise competition between species, where the strength of the competition is controlled by the constant α, or with any related constraint on population size). Solving for the dynamics of these systems is, however, analytically intractable, but a solvable mean field approximation has been found.
This approximation is based on treating each species as interacting with the average state of all other species, rather than with the specific configuration of abundances at any given moment in time <cit.>. In the limit of a large number of species, the correlation between the abundances of any two species is assumed to be small. In other words, the abundances of extant species are assumed to evolve independently of each other. Importantly, the resulting mean field description collapses non-linear rates of competitive interaction into an increased, linear mortality rate for each species. This approximation of the overlapping generations neutral model is also known as the `non-zero-sum' or NZS approximation, referring to the fact that the total population size may fluctuate over time, i.e. births and deaths do not sum to zero. It has been shown that this approach provides a good approximation only in populations with a large number of species; in a less diverse population, where a handful of species are dominant, the mean field approximation is no longer a meaningful description. In the mean field approximation, each species takes an independent random walk, based on a linear stochastic process. Mathematically, this is described by a linear master equation for the probability P(n|t) that a species has abundance n conditioned on its age (i.e. the time since its introduction into the system):

dP(n|t)/dt = b(n-1)P(n-1|t) - bnP(n|t) - dnP(n|t) + d(n+1)P(n+1|t).

Here, t is the species age, and for so-called `point' speciation (where new species always appear with an abundance of 1) the initial condition is P(n|0) = δ_n,1 (see Fig. <ref> for a schematic representation of the model dynamics). The value d, which is always strictly larger than the birth rate b, is a combination of intrinsic mortality and the effect of competition arising from all other species.
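The master equation above can be checked numerically. Multiplying it by n and summing gives d⟨n⟩/dt = (b-d)⟨n⟩, so a lineage started at n = 1 has mean abundance ⟨n⟩(t) = e^(b-d)t; truncating the abundance at some maximum N and integrating forward in time should conserve total probability (n = 0 is absorbing) and reproduce this mean. A forward-Euler sketch with illustrative parameter values:

```python
import math

b, d = 0.9, 1.0              # per-capita birth and death rates (d > b)
N, dt, T = 200, 0.0005, 2.0  # truncation, step size, final time (illustrative)

P = [0.0] * (N + 1)
P[1] = 1.0                   # point speciation: the variant starts at abundance 1

for _ in range(int(T / dt)):
    dP = [0.0] * (N + 1)
    for n in range(N + 1):
        gain_birth = b * (n - 1) * P[n - 1] if n >= 1 else 0.0
        gain_death = d * (n + 1) * P[n + 1] if n < N else 0.0
        dP[n] = gain_birth + gain_death - (b + d) * n * P[n]
    P = [p + dt * q for p, q in zip(P, dP)]

total = sum(P)                              # stays 1 up to truncation error
mean = sum(n * p for n, p in enumerate(P))  # should match exp((b-d)*T)
```

With d > b the probability mass drifts towards the absorbing state n = 0, i.e. every variant eventually goes extinct, exactly as described in the text.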
For the point speciation process, this linear master equation has the time-dependent solution

P(n|t) = e^(b-d)t [b(1-e^(b-d)t)/(d-b)]^(n-1) / [1 + b(1-e^(b-d)t)/(d-b)]^(n+1).

For a more general initial condition, there is a correspondingly more general solution (see Section S2 in the supplementary material for detailed mathematical derivations of these results). Eq. (<ref>) describes the temporal dynamics of a single species, from its introduction into the system to its (guaranteed) eventual extinction. Under the additional assumption that in steady state the rate of appearance of new species in a population of size J is given by ν J, it can be shown that the expected species abundance distribution (i.e. the number of species with abundance k) takes the form of a log series distribution

⟨S(k)⟩ ≃ ν J ∫_0^∞ P(k|t) dt ≃ (θ/k)(1-θ/J)^k

where θ = (1-b/d)J stands for the `fundamental biodiversity number'. Finally, there is a constraint relating the speciation rate ν to b and d, rooted in the mean field approximation. The parameter d is an effective parameter arising from the influence of the rest of the population, and the per capita speciation rate ν is therefore constrained to be related to these rates as ν = d-b. In summary, Eq. (<ref>) gives a complete description of the non-spatial NZS model, which provides a good approximation to various neutral predictions in ecology when diversity is high <cit.>. To ensure consistent notation across different scientific disciplines we will in the following refer to species as variants, to individuals as instances and to speciation as innovation. Further, the birth and death rates describe the rates at which a cultural variant generates or loses an instance, respectively (see Fig. <ref>).

§.§ Neutral Theory in Cultural Evolution

Neutral theory in cultural evolution has mainly been modelled using the Wright Fisher infinitely-many-alleles model (see e.g.
<cit.> for a review of the mathematical properties, <cit.> for its introduction to cultural evolution, as well as <cit.> for further applications to cultural case studies). In general, this framework assumes that the composition of the population of instances of cultural variants at time t is derived by sampling with replacement from the population of instances at time t-1, resulting in non-overlapping generations. We provide in Section S1 of the supplementary material a brief review of the mathematical characteristics of this model.

§ THE NEUTRAL PROGENY DISTRIBUTION

Data sets describing the accumulated appearances of cultural variants within a specific time interval, like the choice of baby names in human populations, have typically been summarized by the progeny distribution. This distribution logs the frequency of cultural variants with a total of k progeny, taken over a given, fixed duration T. In part, this choice of distribution is pragmatic: data for baby names registered at birth are often more complete and more readily available than full censuses of names in a population, which would provide the analogue of the species abundance distribution given in Eq. (<ref>). Additionally, the progeny distribution contains a temporal element, as in general the distribution will change with the duration T that the progeny counts are taken over. Finally, the progeny distribution is particularly useful for populations where the effective population size of reproducing individuals may be much smaller than the total population. The distribution directly probes the dynamics of transmission of cultural variants, whereas the species abundance distribution may be much more sensitive to the details of the age structure in the population. In this section we derive an analytical representation of the progeny distribution based on the overlapping generations NZS model for large, well-mixed populations.
We show, in agreement with earlier work, that neutral theory generates a power law progeny distribution, but with a constant exponent of -3/2 (i.e. the power law exponent does not depend on the innovation rate or the population size). The power law is followed by an exponential cut-off, whereby the onset of this cut-off depends on the innovation rate. Further, we provide a method for identifying maximum likelihood neutral parameters.

§.§ Analytical results

Using the NZS approximation, the progeny distribution at late times T, i.e. under the assumption that sufficient time has passed for the distribution to reach stationarity, can be derived as

q(k) = (-1)^(k-1) binom(1/2, k) (2d/(b+d)) (4bd/(b+d)^2)^(k-1)

where b and d stand for the birth and death rates of the variants (see Section S3 in the supplementary material for a detailed derivation) and the binomial coefficient binom(1/2, k) is defined by

binom(1/2, k) = binom(2k, k) (-1)^(k+1) / (2^(2k) (2k-1)).

The function q(k) describes the frequency of cultural variants which generated exactly k instances, including the innovation event, within a time interval of length T. Eq. (<ref>) is valid only in the large-T limit, but in Section S3 of the supplementary material we also provide additional results for the moments and generating functions of this distribution for arbitrary durations T. The corresponding cumulative distribution (i.e. the fraction of variants with greater than or equal to k cultural variants generated within a time interval of length T) is given by

P(K ≥ k) = (-1)^(k-1) ((b+d)/(2b)) (4bd/(b+d)^2)^k binom(1/2, k) 2F1(1, k-1/2; 1+k; 4bd/(b+d)^2)

with 2F1(·,·;·;·) representing the Gaussian hypergeometric function (see Section S3 in the supplementary material for a detailed derivation). Interestingly, the distribution q(k) fragments into two parts: one describes a power law and the other an exponential decay (see dotted and dashed lines in Fig. <ref>). For large enough values of k the first terms of Eq.
(<ref>) can be approximated by

(-1)^(k-1) binom(1/2, k) ≃ (-1)^(k-1) (-1)^k / (Γ(-1/2) k^(3/2)) = 1/(2√π k^(3/2)),

which determines a power law with exponent -3/2. However, at approximately k ∼ (b/(d-b))^2 = (b/ν)^2 the exponential decay starts to dominate the distribution (see red line in Fig. <ref>). In summary, the neutral progeny distribution tends towards a power law with a universally-applicable exponent of -3/2 (i.e. the exponent does not, as previously suggested, depend on the parameters of the neutral model) but shows an exponential cut-off at approximately k ∼ (b/(d-b))^2 = (b/ν)^2. The larger the innovation rate ν/d, the smaller the values of k for which exponential decay dominates the progeny distribution.

§.§ Maximum Likelihood Parameters

To fit the progeny distribution given in Eq. (<ref>) to empirical data we derive the maximum likelihood estimate of the ratio η = d/b (as we show below, the shape of the progeny distribution depends only on the ratio of the death and birth rates). The log likelihood of observing a given set of S cultural variants with progeny counts {k_i} at late times is given by

L = ∑_i=1^S log[ (2d/(b+d)) (4bd/(b+d)^2)^(k_i-1) (-1)^(k_i-1) binom(1/2, k_i) ]

which can be rewritten as

L = ∑_i=1^S log[ (2η/(1+η)) (4/(1/η+2+η))^(k_i-1) (-1)^(k_i-1) binom(1/2, k_i) ]

by using the relation η = d/b. Maximizing this log likelihood with respect to the parameter η provides the point estimate

η = K_total/(K_total - S)

where K_total is the total number of instances observed in the data and S is the total number of variants (a detailed derivation can be found in Section S4 of the supplementary material).

§.§ Comparison of Analytical Approximations with Simulations

In this section we verify the validity of our approximations (in particular Eqs. (<ref>) and (<ref>)) by comparing analytical and numerical results. To do so we simulate the full, non-linear model with overlapping generations.
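Before turning to the simulations, the closed-form results above can be checked numerically in an illustrative Python sketch: q(k) should sum to one over k ≥ 1, and its mean — the expected number of progeny per variant — should equal d/(d-b), which is exactly what the point estimate encodes, since η = K_total/(K_total - S) rearranges to K_total/S = η/(η-1) = d/(d-b). The binomial prefactor is computed via a simple recurrence to avoid huge factorials.

```python
def progeny_distribution(b, d, kmax):
    """Late-time neutral progeny distribution q(k), k = 1..kmax.
    The prefactor a_k = (-1)**(k-1) * binom(1/2, k) = C(2k,k)/(2**(2k)*(2k-1))
    satisfies a_1 = 1/2 and a_{k+1} = a_k * (2k-1)/(2*(k+1))."""
    z = 4 * b * d / (b + d) ** 2
    prefac = 2 * d / (b + d)
    qs, a, zk = [], 0.5, 1.0      # a = a_k starting at k = 1; zk = z**(k-1)
    for k in range(1, kmax + 1):
        qs.append(prefac * a * zk)
        a *= (2 * k - 1) / (2 * (k + 1))
        zk *= z
    return qs

b, d = 0.95, 1.0
qs = progeny_distribution(b, d, 20000)
total = sum(qs)                                       # normalization: ~1
mean = sum(k * q for k, q in enumerate(qs, start=1))  # ~ d/(d-b) = 20
eta_hat = mean / (mean - 1.0)                         # ~ eta = d/b
```

The parameter values here are illustrative; with b = 0.95 and d = 1 the exponential cut-off sets in around k ∼ (b/ν)^2 = 361, well inside the summation range.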
In detail, we generate the temporal frequency behavior of a group of competing variants via the Gillespie algorithm and compute the resulting progeny distribution after a long time interval. We use stochastic Lotka-Volterra systems, where variant i with current abundance n_i undergoes birth and death processes as well as competitive interactions with other variants. New variants are introduced at a rate ν J (where J describes the total population size) with initial abundance 1, and are considered as an error in the birth process; the effective per capita birth rate is therefore b_0-ν. The rates of these processes for variant i are as follows:

* n_i → n_i + 1 (birth), at rate (b_0-ν)n_i
* n_i → n_i - 1 (intrinsic mortality), at rate d_0 n_i
* n_i → n_i - 1 (competition), at rate α n_i ∑_∀ j n_j
* 0 → 1 (speciation), at rate ν ∑_∀ j n_j

where the labels i and j refer to the extant variants in the system at any given point in time, and the sums are taken over all variants, including variant type i. The simulation of this population is based on the well-known Gillespie algorithm <cit.>. We provide a detailed description of the simulation procedure in Section S5 of the supplementary material. The code used is available under <https://github.com/odwyer-lab/neutral_progeny_distribution>. Fig. <ref> illustrates that the simulated cumulative progeny distributions based on competitive Lotka-Volterra interactions (black dots) coincide with their analytical counterparts given by Eq. (<ref>) (red lines) for long time intervals and various values of ν and J. In summary, Eq. (<ref>) (and consequently Eq.
(<ref>)) provides an accurate description of the neutral predictions for a model with symmetric, competitive interactions and overlapping generations.

§ NOVELTY BIASES

So far we have assumed that there are no selective differences between the extant variants in the population. In this section we generalise our framework to include selection for and against novel cultural variants (denoted pro-novelty bias and anti-novelty bias, respectively) and explore the consequences of these selection biases for the shape of the progeny distribution. In general, pro-novelty selection favours `young' variants, i.e. variants that have been invented recently. In contrast, anti-novelty selection disadvantages `young' variants and therefore favours the persistence of established cultural variants over a long time period. In cultural evolution, pro-novelty selection has been associated with fashion trends <cit.>, i.e. the phenomenon where some cultural variants rapidly increase in frequency but also quickly fade away again after other variants have become fashionable. An ecological analogue of pro-novelty bias is the red queen effect, which is well explored in the literature (e.g. <cit.>). While the red queen effect is typically thought to arise from the accumulation of selectively advantageous traits over time, the emergent effect is an advantage for new species.

§.§ Pro-novelty bias

We model pro-novelty bias following earlier ecological theory developed in the context of the red queen hypothesis <cit.>. The only change relative to the simulation described in Section <ref> is the form of the competition between older and younger variants. The rate α_ij now encodes the competitive effect of species j on species i, and depends on the innovation times (i.e.
the ages of the variants) τ_j and τ_i: α_ij = α(1-_0) for τ_j>τ_i, α_ii = α, α_ij = α(1+_0) for τ_i>τ_j. This means we assume that new variants have the same competitive advantage over all extant variants and each variant interacts with three groups: newer, more advantageous variants, conspecifics and older, less advantageous variants <cit.>. The coefficient α characterizes the strength of competition, while _0 is a constant between zero and one that introduces asymmetry into the competitive interactions. Fig. <ref> shows the progeny distributions generated by neutral theory (grey line) and pro-novelty selection (green line) for the parameter constellation J=300, ν=0.01 and _0=1. Pro-novelty bias leads to a higher number of variants with small and intermediate abundances and a lower number of variants with very high abundances. As expected, pro-novelty bias reduces the number of singletons, i.e. innovations that have never been transmitted and therefore remained at abundance one.§.§ Anti-novelty bias Modelling anti-novelty bias in a plausible way is not as straightforward as pro-novelty bias. If we were to take the competition coefficients given in (<ref>) and simply flip the signs, it is highly likely that, for realistic population sizes, we would end up with one eternal, old variant, while all other variants that enter the system are driven to extinction over a relatively short time frame. While we would expect anti-novelty bias to promote the persistence of older variants, a strict competitive advantage of all older variants over all newer variants makes these outcomes too extreme. We therefore introduce the following rates α_ij for the competitive effect of variant j on variant i, which again depend on the innovation times τ_j and τ_i but contain an additional exponential decay factor: α_ij = α(1-_0 e^{-λτ_j}) for τ_j<τ_i, α_ii = α, α_ij = α(1+_0 e^{-λτ_i}) for τ_i<τ_j, where now we consider _0<0 and λ>0.
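For concreteness, the piecewise rates just given can be transcribed directly into code. The sketch below is ours, not the authors' implementation: `eps0` stands in for the asymmetry constant written `_0` in the text, and the function simply mirrors the anti-novelty equation as stated, without interpreting it further.

```python
import math

def alpha_ij(tau_i, tau_j, alpha=1.0, eps0=-1.0, lam=0.3):
    """Rate coefficient for the competitive effect of variant j on variant i,
    transcribed from the anti-novelty equation above; tau_i and tau_j are the
    two variants' innovation times, with eps0 < 0 and lam > 0."""
    if tau_j < tau_i:
        return alpha * (1.0 - eps0 * math.exp(-lam * tau_j))
    if tau_j > tau_i:
        return alpha * (1.0 + eps0 * math.exp(-lam * tau_i))
    return alpha  # alpha_ii: conspecific interactions stay symmetric
```

Because every asymmetric term carries a factor e^{-λτ}, all pairwise rates approach the symmetric value α once the relevant innovation times exceed a few multiples of 1/λ; this is the mechanism that allows several established variants to coexist.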
The effect of λ is that as a variant ages, competitive differences decrease and variants begin to interact more and more symmetrically. This approach allows for the persistence of multiple older variants, because once a type has survived for a time larger than 1/λ, it interacts almost neutrally with all other established variants. Fig. <ref> shows the progeny distributions generated by neutral theory (grey line) and anti-novelty selection (light red and dark red lines) for the parameter constellation J=300, ν=0.01, _0=-1, λ=0.3 (dark red line) and λ=3 (light red line). Anti-novelty bias leads to a lower number of variants with small and intermediate abundances and a higher number of variants with very high abundances. As expected, anti-novelty bias generates a large number of singletons. Further, the slower the decay of the bias, i.e. the smaller λ, the more pronounced are the differences between neutral evolution and anti-novelty selection.§ EMPIRICAL ANALYSIS FOR BABY NAMES Starting with the work by <cit.>, data on the choice of baby names have been widely analyzed in the literature using a variety of frameworks. For example, <cit.> analysed the spatial clustering patterns in the choice of baby names across US states <cit.>, and <cit.> used turnover rates to detect transmission biases in US baby names. Further, <cit.> aimed at disentangling stochastic and deterministic influences on the choice of first names.
They suggested that the individual trajectories of name frequencies can be replicated by a deterministic dynamic governed by memory and delay processes. Here we apply our methodology to two data sets drawn from the state of South Australia, consisting of all boys' and girls' names registered from 1944 to 2013, respectively, and explore the conclusions about the evolutionary process that can be drawn from them. These data sets are included in Section S6 of the supplementary material, together with a general description and a justification of the application of the mean-field approach. §.§ South Australia Baby Names, Neutrality, and Novelty Disadvantage First, we calculate the maximum likelihood estimate (<ref>) of the neutral innovation rate, i.e. the rate that most closely explains the observed progeny distributions computed over the full time span of the data sets. We obtain ν/d|_girls = 0.05 and ν/d|_boys = 0.03, indicating a higher tendency to choose a unique name for newborn girls than for newborn boys. For both groups of names, we then plot the neutral progeny distribution with maximum likelihood parameters alongside the empirical progeny distribution in Fig. <ref>. The neutral distribution (grey lines) produces too many names with intermediate numbers of progeny relative to singletons (i.e. names that have never been transmitted and therefore have an abundance of 1), and too few variants with very large numbers of progeny. Given this discrepancy, we ask whether novelty bias can provide a better explanation. Any form of pro-novelty bias, however, will only increase the differences (cf. Fig. <ref>) and therefore we focus on anti-novelty bias. Fig. <ref> (red lines) shows the best fit to the data over a discrete set of parameter values. In order to replicate the fact that only a relatively small number of innovations (at least compared to the neutral predictions) are transmitted at least once, we needed to choose _0=-1 in Eq.
(<ref>), so that new variants (initially) have zero competitive effect on any extant variant. We also chose λ >> b, so that if a variant survives (i.e. it is transmitted at least once), it quickly begins to interact neutrally with the rest of the population. We note that we are not seeking to rigorously fit the anti-novelty bias model, but it is apparent that with these choices anti-novelty bias provides a potential explanation for the phenomena we see in these data.§.§ Restricting to Popular Names Our example data set above contains every baby name registered over a 70-year period in a single region, leading to the potential conclusion that new, rare variants have a disadvantage. However, many available data sets of registered baby names in other regions are incomplete, providing only the most popular names due to privacy considerations. Previous studies have often tested hypotheses about cultural evolution based on similarly incomplete data, and in this section we explore how this incompleteness may alter conclusions about the existence of selection biases in the population. In the following we consider two common ways of preprocessing cultural frequency data, both of which amount to removing some subset of the data. First, we only keep the most popular names over a given time span, removing any names with fewer appearances (in total, throughout the time interval) than a given threshold. Second, we remove any names with fewer instances than a given threshold in any given year.
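The two filtering rules can be stated precisely as operations on a name-by-year count table. This sketch is ours (plain dictionaries mapping name -> {year: count}); the default threshold of 5 matches the cutoff used in the analysis that follows.

```python
def total_threshold(counts, k=5):
    """Keep only names with at least k registrations summed over all years."""
    return {name: by_year for name, by_year in counts.items()
            if sum(by_year.values()) >= k}

def year_by_year_threshold(counts, k=5):
    """Within each name, drop every year with fewer than k registrations;
    drop the name entirely if no year survives."""
    out = {}
    for name, by_year in counts.items():
        kept = {yr: c for yr, c in by_year.items() if c >= k}
        if kept:
            out[name] = kept
    return out

# A toy example showing how differently the two rules treat rare names:
data = {"Alice": {1950: 3, 1951: 4}, "Bob": {1950: 6}, "Cleo": {1950: 2, 1951: 9}}
```

Under the total threshold all three names survive (each has at least 5 registrations in total), whereas the year-by-year threshold removes Alice entirely and truncates Cleo's record to 1951, illustrating that the second rule discards exactly the low-abundance phase of a name's history.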
In Figure <ref> we show the results of three analyses of the South Australia baby name data set (top row: boys' names, bottom row: girls' names). Alongside our analysis using the full data set (left column), we (i) remove names with fewer than 5 instances over the 70-year time span (middle column) and (ii) remove names from a given year that have fewer than 5 instances in that year (right column). We call these a total threshold and a year-by-year threshold, respectively. The differences between the three approaches are stark. We have seen in Section <ref>(<ref>) that the full progeny distribution can be replicated by assuming that innovations are strongly selected against but that this disadvantage fades away quickly, as soon as those novel names are transmitted. They then interact neutrally with the population, and therefore we might expect that imposing the total threshold (i.e. in this case innovations are names whose progeny count exceeds this threshold) generates a distribution which is consistent with neutrality. However, if we impose the year-by-year threshold, the resulting progeny distribution changes substantially: if we treat these data as if all names were present, the distribution would look consistent with a novelty advantage, rather than neutrality or a novelty disadvantage. The effects of this pre-processing of name data, and the qualitative differences it makes, demonstrate the need to be cautious about any conclusions drawn using incomplete data. Our results here mirror a long-standing debate in ecology on snapshots of species abundances, where a lack of sampling of rare species introduces what has been termed a `veil line', and can alter the shape of the species abundance distribution <cit.>.
In our case, the progeny distribution veil line can lead us to infer a purely neutral explanation, where in reality there is a strong bias against new names.§ DISCUSSION Innovation is ubiquitous across biological and social domains, but in many cases we lack a direct way to characterize the mechanisms of the innovation process. This is particularly true in the realm of cultural evolution, where it is often not obvious what to look for or measure in a new variant to describe the mechanism that gave rise to it. For example, the baby names considered in this paper have no direct analogue of beak size, body plan, or carbon fixation pathways. Nevertheless, we know that in these domains new variants are `different' from extant variants. In this paper we assumed that variants are functionally equivalent but differ in their ages and abundances in the population, and we aimed to understand how these differences can affect the spread of innovations. To this end we analyzed the characteristics of the progeny distribution, which aggregates the temporal dynamics of new variants across the population over a fixed time interval, under different assumptions about cultural transmission. Using a mean-field model drawn from ecology, we derived the first analytical representation of the progeny distribution under the hypothesis of neutrality. We showed that the neutral progeny distribution consists of two phases: a power-law phase for intermediate numbers of progeny with a universally applicable exponent of -3/2, followed by an exponential decay phase for large numbers of progeny. The onset of the exponential phase is modulated by the innovation rate: the higher the rate, the earlier the exponential cut-off. The analytical representation further allowed us to derive maximum likelihood estimates of the neutral model parameters, and therefore to establish whether observed empirical patterns are consistent with the hypothesis of neutrality.
In order to allow for selective differences between the cultural variants, we developed a simulation framework and analyzed the effects of pro- and anti-novelty biases on the shape of the progeny distribution. These biases alter the shape of the progeny distribution, with pro-novelty biases increasing the occurrence of variants with low or intermediate numbers of progeny and decreasing the occurrence of variants with high numbers of progeny. These results go hand in hand with a decrease in the average lifetime of the individual variants. The reverse is true for anti-novelty bias. In applying our methodology to baby names from South Australia, we found that the data showed at least two different regimes. First, we see the generation of a large amount of variation: the data sets contain a large number of innovations with abundance one, i.e. innovations that have never been transmitted. Second, we see the persistence of some names over a very long time. Our analysis showed that neutrality alone is not able to replicate these patterns, as it produces too many variants with intermediate numbers of progeny relative to singletons (i.e. names that have never been transmitted), and too few variants with very large numbers of progeny. The empirical progeny distribution of baby names is much more closely reflected by assuming an anti-novelty bias whereby the bias decays as soon as a variant survives long enough to become established. Importantly, we concluded that most new names do not proliferate, but if they are transmitted, their interactions with the other variants in the population quickly resemble those under neutrality (the code used for this analysis is available at <https://github.com/odwyer-lab/neutral_progeny_distribution>). This result points to the crucial importance of rare variants for reliable inference of processes of cultural evolution from aggregated population-level data in the form of progeny distributions.
Analyses based on incomplete data sets including only popular variants, selected according to different threshold rules, revealed consistency between the observed (incomplete) data and neutral evolution as well as pro-novelty bias. This is a powerful reminder that we need to be cautious with conclusions about underlying evolutionary processes drawn from incomplete data. Lastly, we note that this study does not conclude that the choice of baby names is guided by anti-novelty bias, but rather that anti-novelty bias is a potential cultural transmission process which could explain the observed, complete data set of baby names, whereas neutral evolution and pro-novelty biases cannot. There may be other, potentially more complex processes of cultural transmission which are able to replicate the observed progeny distribution equally well. For example, content bias might produce a disadvantage for most new variants, leading to their early extinction and leaving behind only those new variants which did not have this disadvantage. But the implication of this explanation is that content bias is fairly restrictive, with either a large negative or a neutral effect, but rarely (or never) a positive one, a distribution which would itself require explanation. The extension of our analytical approach to incorporate these processes, alongside the inherent variability of real systems over time, will help shed more light on this issue and will be the focus of future research. Acknowledgement. JOD acknowledges the Simons Foundation Grant #376199, McDonnell Foundation Grant #220020439 and Templeton World Charity Foundation Grant #TWCF0079/AB47. We thank the members of the Department of Human Behavior, Ecology and Culture at the Max Planck Institute for Evolutionary Anthropology for helpful comments on an earlier version of this manuscript. Further, we thank three anonymous reviewers for their helpful and encouraging comments.
Supplementary material § NEUTRAL THEORY IN CULTURAL EVOLUTION Neutral theory in cultural evolution has mainly been modelled using the Wright-Fisher infinitely-many-alleles model (see e.g. <cit.> for a review of the mathematical properties, <cit.> for its introduction to cultural evolution, as well as <cit.> for further applications to cultural case studies). The theory assumes that in finite populations cultural variants are chosen to be copied according to their relative frequency, and new variants not previously seen in the population are introduced by a process resembling random mutation. Changes in frequency therefore occur only as a result of drift. While these inherent assumptions are likely to be violated in the cultural context (for detailed discussions see <cit.>), population-level patterns of various observed episodes of cultural change nevertheless resemble the ones expected under neutrality <cit.>. Importantly, these studies do not conclude that neutral evolution is the underlying evolutionary force shaping the observed empirical patterns. They rather suggest that if each act of choosing one cultural variant rather than another has a different motivation, the emerging population-level patterns will look as if no directional selective forces affect what is copied <cit.>. However, it still has to be shown that neutral predictions are distinguishable from predictions generated by alternative cultural selection scenarios <cit.>. If a (potentially unknown) number of cultural scenarios result in very similar predictions, then the meaning of a rejection of the neutral hypothesis becomes hard to interpret. In the following we provide a brief overview of the characteristics of the Wright-Fisher infinitely-many-alleles model. This model assumes that the composition of the population of instances of cultural variants at time t is derived by sampling with replacement from the population of instances at time t-1, resulting in non-overlapping generations.
The population size J is temporally constant, and the variables m_i and n_i stand for the abundances of variant i in the population at times t-1 and t, respectively. Then p_i = (m_i/J)(1-μ), i=1,2,…, describes the probability that a specific instance is of variant i. Further, μ denotes the innovation rate, which describes the probability that a novel variant, not currently or previously seen in the population, is introduced. In general, the probability that the configuration of abundances [m_1,m_2,…] at time t-1 is transformed into [n_0,n_1,n_2,…] at time t is given by P(X_0(t)=n_0, X_1(t)=n_1, … | X_1(t-1)=m_1, …) = (J!/∏_i n_i!) ∏_i p_i^{n_i} with p_0=μ and ∑_i m_i = ∑_i n_i = J. The state space of the Markov process defined by these transition probabilities is extremely large, making the derivation of population-level properties of this stochastic process almost intractable. But Eq. (<ref>) implies that the extinction of any variant is inevitable over time, and the time evolution of a single variant can be described by a two-variant formulation P(X_i(t)=n_i | X_i(t-1)=m_i) = \binom{J}{n_i} p_i^{n_i} (1-p_i)^{J-n_i}. We note that under neutrality all variants are considered identical and therefore we can drop the index i from Eq. (<ref>). It follows from Eq. (<ref>) that the probability that a newly introduced variant with abundance 1 goes immediately extinct is given by P(X(t)=0 | X(t-1)=1) = (1-(1-μ)/J)^J → e^{μ-1} for large J. Further, the diffusion approximation of Eq. (<ref>) allows us to determine the transition probability density f(x,p,τ) as the solution of the diffusion equation ∂f(x,p,τ)/∂τ = a(p) ∂f(x,p,τ)/∂p + (1/2) b(p) ∂^2 f(x,p,τ)/∂p^2 with a(p) = -Jμp, b(p) = p(1-p) and appropriately scaled space and time dimensions p=m/J, x=n/J and τ=t/J <cit.>. In general, an explicit solution of this equation can only be achieved under relatively restrictive assumptions <cit.>.
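The large-J limit of the immediate-extinction probability is easy to verify numerically. The following check is ours, with illustrative parameter values; it uses only the definition p_i = (m_i/J)(1-μ) with m_i = 1.

```python
import math

def p_immediate_extinction(J, mu):
    """Probability that a variant at abundance 1 leaves no copies in the
    next generation: each of the J independent draws misses it."""
    p_copy = (1.0 - mu) / J      # probability that a single draw copies it
    return (1.0 - p_copy) ** J

J, mu = 10_000, 0.01
print(p_immediate_extinction(J, mu), math.exp(mu - 1.0))
```

Already at J = 10,000 the exact value and the limit e^{μ-1} agree to roughly four decimal places.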
Nevertheless, it has been shown that some steady-state properties of the population of instances of cultural variants can be determined. The variant abundance distribution describing the expected number of variants with relative frequencies in the interval (x, x+δx) at steady state can be approximated by ϕ(x) = θ_c x^{-1} (1-x)^{θ_c-1} with θ_c = 2Jμ <cit.>. Additionally, the average number of different variants, S, in the population can be described by E{S} = θ_c + ∫_{1/J}^1 θ_c x^{-1} (1-x)^{θ_c-1} dx (e.g. <cit.>). We note that the variant abundance distributions given by Eq. (<ref>) in the main text and Eq. (<ref>) generate similar results for sufficiently large J and sufficiently small ν.§.§ Simulation of the Wright-Fisher model Simulations of the infinitely-many-alleles Wright-Fisher model are relatively easily obtained through random sampling from previous generations. In detail, in each time step t a new set of J instances is generated through random copying from the population of instances of cultural variants at time step t-1, which possesses the abundance configuration [m_1, m_2, …, m_{S_{t-1}}] with ∑_{i=1}^{S_{t-1}} m_i = J. The variable S_{t-1} stands for the number of different variants at time step t-1 and m_i records their abundances. The probability that variant i is copied in each of the J production events is given by p_i = (m_i/J)(1-μ), where μ stands for the innovation rate. If an innovation occurs, then a variant not currently or previously seen in the population is introduced. After a burn-in period, which ensures that the system has reached an approximate steady state, we determine the progeny distribution after T=200,000 generations for J=1,000 and various values of μ (see dotted lines in Fig. <ref>).
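The sampling scheme just described condenses into a few lines of code. This is our minimal sketch (not the authors' simulation code): each generation is drawn with replacement from the previous one, and each draw is replaced by a brand-new variant with probability μ.

```python
import random

def wright_fisher_step(population, mu, rng=random):
    """One non-overlapping generation of the infinitely-many-alleles model."""
    J = len(population)
    new_pop = []
    for _ in range(J):
        if rng.random() < mu:
            new_pop.append(object())          # a never-before-seen variant
        else:
            new_pop.append(rng.choice(population))
    return new_pop

rng = random.Random(1)
pop = [object()] * 100                         # monomorphic start, J = 100
for _ in range(200):
    pop = wright_fisher_step(pop, mu=0.05, rng=rng)
print(len(set(pop)))                           # number of distinct variants
```

To accumulate the progeny distribution one would additionally record, for every copy event, which variant was copied; the sketch above only tracks the standing composition of the population.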
We lack an analytical result for the cumulative Wright-Fisher progeny distribution, but drawing on our results for the overlapping-generations neutral model we plot a power law with exponent -1/2 (red line); we showed in the main text that for intermediate values of k the progeny distribution resembles a power law with exponent -3/2, so its cumulative counterpart follows a power law with exponent -1/2. As μ becomes small, we can see that this power law with fixed exponent becomes an increasingly accurate explanation of the first phase of this distribution, just as in the case of overlapping generations. It is likely that fitting a single power law to the whole distribution, including the exponential decline, would explain the apparent variation in power-law exponent with μ and J identified in earlier studies.§ NZS SOLUTIONS FOR SPECIES DYNAMICS AND SPECIES ABUNDANCE DISTRIBUTION The non-zero sum (NZS) formulation of neutral theory is an approximation to a neutral, overlapping-generations model where all variants compete for a single resource, and the strength of competitive interactions is equal across all pairs of variants. The defining master equation focuses on the dynamics of one focal variant and characterizes its change in abundance through time, from an initial condition (usually taken to be n=1, and known as point speciation in the ecology literature): dP/dt = b(n-1)P(n-1|t) - bnP(n|t) - dnP(n|t) + d(n+1)P(n+1|t). This master equation is linear because the interactions between the focal variant and the rest of the population are treated in a mean-field approximation. In effect, this equation assumes that the remainder of the population is of constant size, so that the pairwise competitive interactions amount to a constant addition to the focal variant's mortality rate. To solve Eq.
(<ref>) for P(n|t), we use the generating function G(z,t) defined by G(z,t) = ∑_n P(n|t) z^n, which in turn is the solution of ∂G/∂t = (z-1)(b(z-1)-(d-b)) ∂G/∂z. Using the method of characteristics, it can be shown that the equation above is solved by G(z,t) = 1 + e^{-νt}(z-1) / [1 - (b/ν)(1-e^{-νt})(z-1)]. Consistent with the main text, the speciation rate is defined by ν=d-b. To obtain the solution (<ref>) we imposed G(1,t)=1, ensuring the normalization of the probability distribution P(n|t) (i.e. the sum over all values of n is equal to one), and G(z,0)=z, corresponding to the point speciation initial condition n=1. Eq. (<ref>) is the generating function of an exponential distribution with time-varying coefficients, and the explicit solution of Eq. (<ref>) is therefore obtained by transforming back from this generating function to P(n|t). For n≥1, it holds P(n|t) = (ν^2 e^{-νt} / (ν+b(1-e^{-νt}))^2) [b(1-e^{-νt}) / (ν+b(1-e^{-νt}))]^{n-1}, while for n=0, P(0|t) = 1 - e^{-νt} / [1 + (b/ν)(1-e^{-νt})]. The expected species richness in this model is given by S = νJ ∑_{n=1}^∞ ∫_0^∞ dt P(n|t) = νJ ∫_0^∞ dt e^{-νt} / [1 + (b/ν)(1-e^{-νt})] = (νJ/b) log(b/ν), i.e. we sum over all speciation events in the history of the population (of total size J), and compute the probability of those variants being in the population at the present time. Similarly, the expected distribution of variant abundances (known as the species abundance distribution in the ecology literature) in this model is given by S(n) = νJ ∫_0^∞ dt P(n|t) = νJ ∫_0^∞ dt (ν^2 e^{-νt} / (ν+b(1-e^{-νt}))^2) [b(1-e^{-νt}) / (ν+b(1-e^{-νt}))]^{n-1} = (νJ/(bn)) [b/(b+ν)]^n.§ NZS SOLUTION FOR THE PROGENY DISTRIBUTION We now derive the joint probability distribution Q(n,k|T,n_0) that after time T, a variant has n extant individuals and has had a total of k birth events during the time interval from 0 to T, conditioned on the initial abundance n_0 at time 0.
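Before moving on to the joint distribution Q, a quick consistency check on the single-time solution is useful: one can integrate the mean-field master equation numerically and compare with the closed form for P(n|t). The sketch below is ours; note that we write the n ≥ 1 prefactor as ν²e^{-νt}/(ν+b(1-e^{-νt}))², the normalization under which the expression reduces to δ_{n,1} at t = 0 and is consistent with P(0|t).

```python
import math

def p_closed(n, t, b, d):
    """Closed-form P(n|t) for the linear birth-death process with point
    speciation (n0 = 1) and nu = d - b > 0."""
    nu = d - b
    u = math.exp(-nu * t)
    denom = nu + b * (1.0 - u)
    if n == 0:
        return 1.0 - nu * u / denom
    return nu ** 2 * u / denom ** 2 * (b * (1.0 - u) / denom) ** (n - 1)

def p_numeric(t, b, d, nmax=120, steps=10000):
    """Euler integration of dP/dt = b(n-1)P(n-1|t) - (b+d)nP(n|t)
    + d(n+1)P(n+1|t), truncated at abundance nmax."""
    P = [0.0] * (nmax + 1)
    P[1] = 1.0                     # point speciation initial condition
    dt = t / steps
    for _ in range(steps):
        new = P[:]
        for n in range(nmax + 1):
            rate = -(b + d) * n * P[n]
            if n > 0:
                rate += b * (n - 1) * P[n - 1]
            if n < nmax:
                rate += d * (n + 1) * P[n + 1]
            new[n] = P[n] + dt * rate
        P = new
    return P

b, d, t = 0.5, 0.6, 3.0
P = p_numeric(t, b, d)
for n in (0, 1, 5, 10):
    print(n, P[n], p_closed(n, t, b, d))
```

The two agree to within the Euler discretization error; summing nP(n|t) over the closed form also reproduces the expected mean abundance e^{-νt}.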
Marginalizing Q(n,k|T,n_0) will lead to a prediction for the neutral progeny distribution, a quantity rarely considered in ecological contexts, but used as a test of neutrality in cultural evolution. Note that we are not necessarily starting this time interval at the speciation time, and so the variant could have some arbitrary abundance n_0 at the start of our time interval. Initially, though, we will drop the n_0-dependence and work with the initial condition n_0=1. For the birth-death process described in the last section it holds dQ/dT = b(n-1)Q(n-1,k-1|T) - bnQ(n,k|T) + d(n+1)Q(n+1,k|T) - dnQ(n,k|T). Note that k does not affect any of the rates. We now consider a new generating function, G(z,y,T), defined as G(z,y,T) = ∑_{n=0}^∞ ∑_{k=0}^∞ Q(n,k|T) z^n y^k, which then satisfies ∂G/∂T = [bz(yz-1) - d(z-1)] ∂G/∂z with initial and boundary conditions G(1,1,T)=1 and G(z,y,0)=z. For a more general initial condition n_0≠1 the latter condition changes to z^{n_0}. Eq. (<ref>) has a solution of the form G(z,y,T) = [A(y) - C(y)((A(y)-B(y)z)/(C(y)+B(y)z)) e^{T/F(y)}] / (B(y)[((A(y)-B(y)z)/(C(y)+B(y)z)) e^{T/F(y)} + 1]) with F(y) = [(b+d)^2-4bdy]^{-1/2}, A(y) = 1+F(y)(b+d), B(y) = 2byF(y), C(y) = 1-F(y)(b+d). Due to the linear nature of the problem, the solution for a more general initial condition n_0 is given by G(z,y,T,n_0) = G(z,y,T)^{n_0}. Finally, we can marginalize over the unobserved n (assuming we have knowledge about the progeny, and not about total abundances/census counts) by setting z=1: H(y,T,n_0) = G(1,y,T)^{n_0}, with G(1,y,T) = [A(y) - C(y)((A(y)-B(y))/(C(y)+B(y))) e^{T/F(y)}] / (B(y)[((A(y)-B(y))/(C(y)+B(y))) e^{T/F(y)} + 1]). Weighting H(y,T,n_0) by the steady-state species abundance distribution leads to H_extant(y,T) = ∑_{n_0} S(n_0) H(y,T,n_0) = -(νJ/b) log[1 - (b/d) G(1,y,T)]. We do not yet account for new variants that can appear during the interval T, and
themselves contribute to this birth event count. To include these instances we change the initial condition (<ref>) to G(z,y,0)=yz, i.e. there is one instance in both the variant population and its progeny count immediately at speciation. Therefore, this contribution takes the form H_new(y,T) = νJ ∫_0^T dτ y H(y,τ,1), with an extra factor of y relative to the results above. This means new variants arise at a rate νJ per unit time, begin by definition with a single instance and a single contribution to the progeny distribution, and persist from their innovation time up until T. So in total H_total(y,T) = H_extant(y,T) + H_new(y,T) = -(νJ/b) log[1 - (b/d) G(1,y,T)] + νJ ((yA(y)/B(y)) T - (2yF(y)/B(y)) log[(C(y)+B(y)+(A(y)-B(y)) e^{T/F(y)})/2]). This is the generating function of the neutral progeny distribution, under the non-zero sum formulation of the neutral theory.§.§ Approximations for large T For large T, it holds H(y,T,n_0) ≃ (-C(y)/B(y))^{n_0} = (-(1-F(y)(b+d))/(2byF(y)))^{n_0}. Keeping only the leading term of this expansion and considering the special case n_0=1, H(y,T,n_0) can be inverted analytically to give q(k|T,1) ≃ (2d/(b+d)) (4bd/(b+d)^2)^k (-1)^k \binom{1/2}{1+k}. There is a power-law phase ∝ k^{-3/2} resulting from the asymptotics of the binomial coefficient, and for sufficiently large k there is an exponential drop-off. Eq. (<ref>) can be written in terms of the per capita speciation rate ν as q(k|T,1) ≃ (2/(2-ν/d)) (4(1-ν/d)/(4(1-ν/d)+(ν/d)^2))^k (-1)^k \binom{1/2}{1+k} = (2/(2-ν/d)) (1+(ν/d)^2/(4(1-ν/d)))^{-k} (-1)^k \binom{1/2}{1+k}. For small enough ν the exponential phase kicks in only for relatively large cumulative abundances, i.e. for small ν it holds q(k|T,1) ≃ (1+ν/(2d)) e^{-(ν/2d)^2 k} (-1)^k \binom{1/2}{1+k}, which can be compared to the ν dependence of the species abundance distribution S(n).
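Numerically, the late-time marginal is most conveniently generated from the ratio of successive terms, q(k+1)/q(k) = x(k+1/2)/(k+2) with x = 4bd/(b+d)², which follows from the recurrence of the generalized binomial coefficients. The sketch below (ours) builds the series this way and checks that it is properly normalized, ∑_{k≥0} q(k|T,1) = 1.

```python
def q_series(b, d, kmax):
    """Terms q(k|T,1), k = 0..kmax, of the late-time marginal
    q(k) = (2d/(b+d)) x^k (-1)^k binom(1/2, 1+k), with x = 4bd/(b+d)^2,
    generated from the k = 0 term d/(b+d) via the term ratio."""
    x = 4.0 * b * d / (b + d) ** 2
    q = [d / (b + d)]
    for k in range(kmax):
        q.append(q[-1] * x * (k + 0.5) / (k + 2.0))
    return q

qs = q_series(b=0.9, d=1.0, kmax=20000)
print(sum(qs))   # normalization check: should be very close to 1
```

Because the term ratio is always below one and tends to x, the terms decay through the k^{-3/2} power-law regime until the exponential cutoff at k ~ (ν/2d)^{-2} sets in.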
This concludes the consideration of a single variant with n_0=1 instances initially. Because each variant is guaranteed to go extinct (d>b in the NZS neutral model), there is a finite solution for the cumulative birth distribution at late times. If we now turn to the whole population, represented by H_total(y,T), we encounter a problem. The first term H_extant(y,T) is finite, as all of the variants summed over will go extinct and produce a finite number of birth counts. However, the second term H_new(y,T) will produce an infinite number of birth counts, and eventually will dwarf the contribution from the steady-state variants contained in H_extant(y,T), i.e. it will dwarf contributions from variants that were already present at T=0. Consequently, if the population persists indefinitely, all those initial variants will produce their contribution to the birth counts and eventually die out. The population, however, will continue to exist via new variants, and the total number of births will tend to infinity. We start by examining the limit of large T for H_extant(y,T): lim_{T→∞} H_extant(y,T) = lim_{T→∞} ∑_{n_0} S(n_0) H(y,T,n_0) = ∑_{n_0} S(n_0) (-(1-F(y)(b+d))/(2byF(y)))^{n_0} = -(νJ/b) log[(-(b+d)+2dy+√((b+d)^2-4bdy))/(2dy)]. There is no analytical expression for the distribution corresponding to this generating function, i.e. Eq. (<ref>) cannot be inverted analytically. Using numerical techniques, however, we confirm that the generating function produces a distribution characterized by a power law with exponent -3/2 followed by an exponential decline. Further, for large T, it holds for the new variants H_new(y,T) ≃ -νJT yC(y)/B(y) = -νJT (1-F(y)(b+d))/(2bF(y)). As pointed out above, there is an unbounded number of birth events from new variants introduced during the interval T, and expression (<ref>) (valid for large T) will eventually dominate the finite numbers coming from the term H_extant(y,T). The total numbers of births from new and extant variants are roughly equal when T ∼ 1/ν.
Beyond this point there are very few instances from the extant variants at T=0, and an ever-increasing number from novel variants introduced during the considered interval. Note also that this is not yet a normalized distribution, and therefore it is not problematic that its coefficients diverge for large T: the coefficients of this generating function are the actual numbers of variants producing a given cumulative number of births, not the probability that a single variant will produce a given number of births. However, normalization leads to H_total(y,T)/H_total(1,T) ≃ (yC(y)/B(y)) / (C(1)/B(1)) = (yC(y)/B(y)) (2b/(d-b)) / (1-(b+d)/(d-b)) = -yC(y)/B(y) for late times T. This normalized distribution at very late times is given exactly by the same distribution we found above, but with k → k-1, reflecting the fact that the initial single instance already counts as a birth event. So it always holds k>0 and we obtain q(k) = (-1)^{k-1} \binom{1/2}{k} (2d/(b+d)) (4bd/(b+d)^2)^{k-1}. This of course comes from the fact that at large enough T, we are just summing the total numbers of births of many variants, each starting with n_0=1. The cumulative distribution corresponding to (<ref>) is straightforward to compute analytically in terms of a hypergeometric function. Putting it together leads to P(K≥k) = (-1)^{k-1} ((b+d)/(2b)) (4bd/(b+d)^2)^k \binom{1/2}{k} {}_2F_1(1, k-1/2; 1+k; 4bd/(b+d)^2).§ MAXIMUM LIKELIHOOD ESTIMATION In this section we derive the maximum likelihood estimate of the ratio η=d/b. The log-likelihood of observing a given set of S cultural variants with progeny counts {k_i} at late times is given by L = ∑_{i=1}^S log(q(k_i)) = ∑_{i=1}^S log[(2d/(b+d)) (4bd/(b+d)^2)^{k_i-1} (-1)^{k_i-1} \binom{1/2}{k_i}], which can be rewritten as L = ∑_{i=1}^S log[(2η/(1+η)) (4/(1/η+2+η))^{k_i-1} (-1)^{k_i-1} \binom{1/2}{k_i}] by using the relation η = d/b = ν/b+1.
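The stationarity condition solved in the next step has the closed form η̂ = K_total/(K_total-S); as an independent check, the η-dependent part of L can be maximized directly on a grid (the binomial factor carries no η dependence and can be dropped). The code and the sample counts below are ours, purely for illustration.

```python
import math

def eta_closed_form(ks):
    """Closed-form MLE eta = K_total/(K_total - S) for progeny counts ks."""
    K, S = sum(ks), len(ks)
    return K / (K - S)

def log_lik(eta, ks):
    """eta-dependent part of the late-time log-likelihood; the
    binom(1/2, k_i) factor is omitted as it does not move the maximum."""
    return sum(math.log(2.0 * eta / (1.0 + eta))
               + (k - 1) * math.log(4.0 * eta / (1.0 + eta) ** 2)
               for k in ks)

ks = [1, 1, 1, 2, 3, 5, 12, 40]                      # illustrative sample
grid = [1.0 + i / 10000.0 for i in range(1, 30000)]  # eta > 1 since d > b
eta_grid = max(grid, key=lambda e: log_lik(e, ks))
print(eta_closed_form(ks), eta_grid)
```

For this sample K_total = 65 and S = 8, so the closed form gives η̂ = 65/57 ≈ 1.14; the grid maximizer lands on the same value up to the grid spacing.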
It holds ∂L/∂η = ∑_{i=1}^S (k_i-1) (1-η)/(η(1+η)) + S/(η(1+η)). Setting K_total = ∑_{i=1}^S k_i and solving ∂L/∂η = 0 leads to η = K_total/(K_total-S).§ SIMULATION OF THE OVERLAPPING GENERATIONS MODEL VIA GILLESPIE ALGORITHM The NZS approximation described in Section 2(<ref>) in the main text has been extensively compared with both simulations and analytical results for ecological populations with symmetric, competitive interactions. In general, it has been demonstrated that the predictions of the NZS approximation for the distribution of variant abundances at a single point in time are valid when the innovation rate satisfies νJ >> 1, and begin to break down when νJ is small. To test the validity of the approximation (<ref>) given in the main text, we take the same approach and simulate a group of competing variants, but compute the resulting progeny distribution after a long time interval, rather than the species abundance distribution at a single point in time. The simulated populations are described by stochastic Lotka-Volterra systems, where variant i with current abundance n_i will increase its abundance by one instance at a rate b_0 n_i and undergo intrinsic mortality, decreasing its abundance by one, at a rate d_0 n_i. Further, competitive interactions involve the focal variant of abundance n_i in a population of current size J and occur at a rate α n_i J. The strength of competition is controlled by the parameter α, and the outcome of each interaction is the loss of one instance either from the focal variant or from the rest of the population. New variants are introduced at a rate νJ with initial abundance 1, and are considered as an error in the birth process. Therefore the effective per capita birth rate (i.e.
the rate of production of instances of the same variant) is b_0-ν. In summary, the rates of these processes for variant i are as follows:

transition | description | rate
n_i → n_i + 1 | birth | (b_0-ν) n_i
n_i → n_i - 1 | intrinsic mortality | d_0 n_i
n_i → n_i - 1 | competition | α n_i ∑_{∀ j} n_j
0 → 1 | speciation | ν ∑_{∀ j} n_j

where the labels i and j refer to the extant variants in the system at any given point in time, and the sums are taken over all variants, including variant type i. The simulation of this population is based on the well-known Gillespie algorithm <cit.>. This approach involves a sequence of transitions drawn from the possibilities given in (<ref>), with a waiting time in between each of these events. For example, for a system with three variants, a birth event for one of the three types could be followed by a competitive interaction between the other two with the outcome that type three loses an instance, and so on. For a given configuration of instances, the waiting time between two events is distributed according to an exponential distribution with a mean time equal to the inverse of the sum of all rates. When an event occurs, the kind of transition is randomly chosen with weights proportional to the rates. Consequently, events are more likely to involve an abundant variant, because all processes are weighted by variant abundance (see (<ref>)). Finally, the intrinsic rates given in (<ref>) represent the exact description of the population dynamics. In order to evaluate the accuracy of the NZS approximation we need to map those intrinsic rates onto the parameters of the NZS approximation. This mapping is such that the effective birth rate of each variant is given by b = b_0-ν, while the effective mortality rate (incorporating both intrinsic mortality and competition) is given by d = b_0.
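A minimal sketch of this event loop (our own implementation; the paper states only that a competitive event removes one instance from either the focal variant or the rest of the population, so here the lost instance is simply drawn uniformly from the whole population):

```python
import random

def gillespie_step(abund, next_id, b0, d0, nu, alpha, rng):
    """Execute one event of the birth/mortality/competition/speciation process.

    abund maps variant id -> abundance n_i; returns (elapsed_time, next_id)."""
    J = sum(abund.values())
    if J == 0:
        return 0.0, next_id
    rates, actions = [], []
    for i, n in abund.items():
        rates += [(b0 - nu) * n, d0 * n, alpha * n * J]
        actions += [("birth", i), ("death", i), ("comp", i)]
    rates.append(nu * J)
    actions.append(("spec", None))
    # waiting time is exponential with mean 1 / (sum of all rates)
    dt = rng.expovariate(sum(rates))
    kind, i = rng.choices(actions, weights=rates)[0]
    if kind == "birth":
        abund[i] += 1
    elif kind == "spec":                     # new variant with abundance 1
        abund[next_id] = 1
        next_id += 1
    else:
        # intrinsic death of the focal variant, or loss of a uniformly
        # chosen instance in a competitive event
        j = i if kind == "death" else rng.choices(
            list(abund), weights=list(abund.values()))[0]
        abund[j] -= 1
        if abund[j] == 0:
            del abund[j]
    return dt, next_id

# start from the expected steady-state size b0/alpha with a single variant
rng = random.Random(42)
b0, d0, nu, alpha = 1.0, 0.0, 0.02, 0.01
abund, next_id, t = {0: int(b0 / alpha)}, 1, 0.0
for _ in range(2000):
    dt, next_id = gillespie_step(abund, next_id, b0, d0, nu, alpha, rng)
    t += dt
```

Recording every "birth" and "spec" event during such a run yields the sampled progeny distribution.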
As d_0 does not directly enter the NZS prediction for the progeny distribution, we simulated these populations with d_0=0. The NZS expectation for the steady-state population size was derived in <cit.> as J_steady = (b_0 - d_0)/α, which simplifies to b_0/α when the intrinsic mortality vanishes. We therefore set an initial condition for the simulated population of b_0/α instances of only one cultural variant. To ensure that the system has reached an approximate steady state before we begin sampling the progeny distribution, we allow the system to burn in by waiting until the first monodominant variant has reached extinction. At this point every extant variant has experienced entirely neutral dynamics, starting from a single instance, and therefore no deviation from the average steady-state neutral population is expected. From this point onwards, we record all birth events and begin accumulating the progeny distribution. In order to provide a valid comparison with the late-time limit of the progeny distribution given by Eq. (<ref>) in the main text, we stop sampling after T = 100 b_0/ν time steps (see section <ref> for a derivation of this stopping time). Additionally, we verified that the first two moments of the progeny distribution were asymptotic to constant values by this time, and therefore ensured that we indeed sampled the asymptotic progeny distribution for large T. § DATA SET The South Australian Attorney-General's department provides two data sets consisting of all boys' and girls' names registered from 1944 to 2013, respectively, in South Australia. These data sets can be found at https://data.sa.gov.au/data/dataset/popular-baby-names (last accessed 27.02.2017). Between 1944-2013, the total number of girls' names registered each year varied from a low of 6748 (in 1944) to a peak of 11754 (in 1971), subsequently declining slightly to between 9000-10000 in the last three decades. The total number of distinct names registered each year varied between 741 and 2923.
For boys, the total number of names registered per year varied between 7069 and 12464, following a similar pattern to the girls' names, while the total number of distinct names registered each year varied between 477 and 2450. Clearly, there is systematic variation here in both the numbers of names (reflecting a changing population size) and in the diversity of names (potentially reflecting a non-stationarity in the innovation rate). However, this variation may be as small as we can reasonably expect in cultural data. We also note that this 70 year span is not a priori long enough to apply our asymptotic results. But we have also explored the change in the progeny distribution over time by considering the change in its first two moments as the time interval T, over which the progeny distribution is computed, is varied from one year up to 70 years. If these moments asymptote to a constant, this would indicate that the distribution is approaching its asymptotic form. We find that these moments are still changing in time as T approaches 70 years, but that this change is relatively slow, indicating that this value of T is close to the asymptotic regime. Therefore we propose that it is reasonable to apply our methodological approach, which assumes that the system is in a steady state with a temporally constant innovation rate ν, and compare the Australian baby name data to the asymptotic form of the progeny distribution for large time intervals that we derived in Eq. (<ref>) in the main text. We stress that, in general, we have to be careful not to draw conclusions from observed data too firmly.
This is in part because the data likely does not reflect a population in steady state, or with a constant innovation rate over time, and may only barely span a long enough time frame for our asymptotic results to be applicable. But at the very least, our approach might lead to ways to incorporate this variation, which is inevitably present in real data and has been underexplored in studies of cultural evolution so far. Additionally, we note that different geographical regions will differ in their legislation towards the use of novel baby names (e.g., administrative approval processes might be more or less stringent), which naturally influences the rate of innovation. But our analysis is focused on the spread behavior of innovations, i.e., variants that have been introduced into the system with abundance one. Our results indicate that, for example, the ratio between singletons and variants with abundance two is sensitive to the underlying process of cultural transmission. External processes affecting the rate of innovation might not influence this ratio strongly. Further, the size of the `name space' (meaning the space of all feasible names given the conventions of the particular language) is usually not known. This leads to the question whether the name space could become exhausted over time, resulting in a decline of the innovation rate. While this is a valid concern, we did not see a strong indication of such a phenomenon in the considered data set: the innovation rates did not show a strong decline over time.
http://arxiv.org/abs/1702.08506v2
{ "authors": [ "James P. O'Dwyer", "Anne Kandler" ], "categories": [ "q-bio.PE" ], "primary_category": "q-bio.PE", "published": "20170227200848", "title": "Inferring processes of cultural transmission: the critical role of rare variants in distinguishing neutrality from novelty biases" }
The 2.4 μm Galaxy Luminosity Function as Measured Using . II. Sample Selection S. E. Lake1, E. L. Wright1, R. J. Assef2, T. H. Jarrett3, S. Petty4, S. A. Stanford5,6, D. Stern7, C.-W. Tsai1,7 December 30, 2023 ===================================================================================================================== We consider how to connect a set of disjoint networks to optimize the performance of the resulting composite network. We quantify this performance by the coherence of the composite network, which is defined by an H_2 norm of the system. Two dynamics are considered: noisy consensus dynamics with and without stubborn agents. For noisy consensus dynamics without stubborn agents, we derive analytical expressions for the coherence of composite networks in terms of the coherence of the individual networks and the structure of their interconnections. We also identify optimal interconnection topologies and give bounds on coherence for general composite graphs. For noisy consensus dynamics with stubborn agents, we develop a non-combinatorial algorithm that identifies connecting edges such that the composite network coherence closely approximates the performance of the optimal composite graph. § INTRODUCTION Networked systems are becoming ever more important in today's highly connected world. We find such systems in power grids, vehicle networks, sensor networks, and so on. A problem of particular interest is how to coordinate or synchronize these networks, and in addition, how robust this coordination or synchronization is to external disturbances. With an understanding of the relationship between the network topology and this robustness, it becomes possible to modify a network's topology to optimize performance. In this paper, we study topology design in networks that take the form of composite graphs. A composite graph is one that is formed from a set of disjoint subgraphs and a designed set of edges between them.
We analyze these networks under two dynamics: noisy consensus dynamics and noisy consensus dynamics with stubborn agents. In both cases, we investigate how to choose edges to connect subgraphs to optimize the network coherence, a performance measure defined by the H_2 norm of the system. For networks with noisy consensus dynamics and no stubborn agents, we derive analytical expressions for the coherence of composite networks in terms of the coherence of the individual sub-networks and the structure of their interconnections. We then derive upper and lower bounds for the coherence of general composite graphs. For systems with noisy consensus dynamics with stubborn agents, we prove that coherence is a submodular function of the edges added to a set of initially disjoint networks, and we use this result to create a greedy algorithm for choosing the connecting edge set for the network. This greedy algorithm yields an edge set that is within a provable bound of the performance of the optimal edge set. Coherence has been used as a measure of network performance in several previous works, for example, <cit.>. <cit.>, <cit.>, and <cit.> describe algorithms and analysis for adding edges to an arbitrary graph to improve its coherence. Modifying the edge weights within a graph is another approach for optimizing network coherence, which is used in <cit.>. In all of these works, however, the authors consider only edge additions or modifications to a single graph. Our focus, in contrast, is on how best to connect a set of disjoint subgraphs. A system of interacting networks is considered in the works <cit.>, where the performance is based on robustness to cascading and random failures. In <cit.>, the authors study the performance of a composite network in terms of the H_2 norm of the system, but they consider different dynamics than those presented in this paper. The remainder of this paper is structured as follows. Section <ref> describes our system model.
Section III gives analysis and formulas regarding the coherence of composite networks with noisy consensus dynamics. In Section IV, we consider noisy consensus dynamics with stubborn agents and present our greedy algorithm for connecting edge selection. Section V demonstrates the performance of our algorithm through a pair of numerical examples, followed by our conclusion in Section <ref>. § SYSTEM MODEL We consider a graph consisting of a set of n disjoint subgraphs {G_1, …, G_n}. Each subgraph G_i = (V_i, E_i) is connected and undirected, with n_i nodes. The objective is to connect these n subgraphs to form a connected composite graph G=(V,E), |V|=N, where V = V_1 ∪ … ∪ V_n, E = E_1 ∪ … ∪ E_n ∪ E_con, where E_con is a set of undirected edges connecting the subgraphs, i.e., E_con ⊆ {(u,v) | u ∈ V_i, v ∈ V_j, j ≠ i}. The edge set E_con is to be selected so as to optimize a desired performance objective. The dynamics of each node j ∈ V is given by ẋ_j = u_j + ν_j, where u_j is the control input and ν_j is a zero-mean, unit variance, white stochastic disturbance. We consider two types of dynamics, described below. §.§ Noisy Consensus Dynamics We first consider noisy consensus dynamics, where each node updates its state based on the relative states of its neighbors. The control input is given by u_j = - ∑_{k ∈ 𝒩_j} (x_j - x_k), where 𝒩_j denotes the set of neighbors of node j. The dynamics of the network G can be written as ẋ = -Lx + ν, where L is the Laplacian matrix of the composite graph G, i.e., L = [ [ L_1 0; ⋮ ⋱ ⋮; 0 L_n ]] + L_E_con, where L_i is the Laplacian matrix of G_i, i=1 … n, and L_E_con is the Laplacian matrix of the graph consisting of all nodes in V and only those edges in E_con. We quantify the performance of the network by the network coherence, which is defined as follows: H_C(G) := lim_{t →∞} ∑_{j=1}^{N} var(x_j - (1/N) ∑_{k=1}^{N} x_k). The network coherence is the total steady-state variance of the deviations from the average of the current node states.
It has been shown that <cit.> H_C(G) = (1/2) tr(L^†), where L^† is the pseudo-inverse of L. §.§ Stubborn Agent Dynamics We also consider noisy consensus dynamics with stubborn agents. The nodes execute a consensus law, each with some degree of stubbornness, as defined by the scalar d_j ≥ 0. We assume that, in each subgraph, at least one d_j is strictly greater than 0. The control input is given by u_j = - ∑_{k ∈ 𝒩_j} (x_j - x_k) - d_j x_j. The dynamics of the composite network can be written as ẋ = -Qx + ν, where Q = [ [ Q_1 0; ⋮ ⋱ ⋮; 0 Q_n ]] + L_E_con. Here, Q_i = L_i + D_i, where D_i is the diagonal matrix of degrees of stubbornness for graph G_i, and L_E_con is as defined for the noisy consensus dynamics. We again quantify the performance of a graph G by an H_2 norm, H_S(G) = ∫_0^∞ tr(e^{-2Qt}) dt = (1/2) tr(Q^{-1}). Note that if G is connected and at least one d_j > 0, then Q is positive definite <cit.>. The dynamics in (<ref>) are a variation of the dynamics for noise-corrupted leaders presented in <cit.>, where each node j with d_j > 0 plays the role of a leader. In our system, we allow that any number of agents may be leaders, including every node in the network. These dynamics can also be given a different interpretation as leader-follower consensus dynamics with noise-free leaders, similar to those presented in <cit.>. Let G' be the graph formed from G by adding a single node s and creating an edge from each node j in V to s with edge weight d_j. All other edge weights are equal to 1. Let node s be the single leader node, with noise-free dynamics, i.e., ẋ_s = 0, and let all other nodes be follower nodes, governed by the dynamics in (<ref>) with the control input in (<ref>). Let L' be the weighted Laplacian matrix of G', and let L'_f be the sub-matrix of L' with the row and column corresponding to node s removed. It has been shown that the total steady-state variance of the deviation from the leader's state is given by <cit.> H_f(G) = (1/2) tr(L'^{-1}_f). Observing that L'_f = Q, it holds that H_S(G) = H_f(G).
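Both H_2 measures are easy to evaluate numerically. The sketch below (ours, using a 3-node path graph as an example) computes H_C from the Laplacian pseudo-inverse, cross-checks it against the pairwise effective resistances obtained from the same pseudo-inverse, and evaluates H_S for one stubborn node:

```python
import numpy as np

# Laplacian of the 3-node path graph 1-2-3
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
N = L.shape[0]

Ldag = np.linalg.pinv(L)
Hc = 0.5 * np.trace(Ldag)                 # H_C(G) = (1/2) tr(L^dagger)

# pairwise effective resistances from the same pseudo-inverse
r = lambda u, v: Ldag[u, u] + Ldag[v, v] - 2 * Ldag[u, v]
omega = r(0, 1) + r(0, 2) + r(1, 2)       # = 4 for the 3-node path

# stubborn-agent coherence with node 1 stubborn (d_1 = 1)
Q = L + np.diag([1., 0., 0.])
Hs = 0.5 * np.trace(np.linalg.inv(Q))

print(Hc, omega / (2 * N), Hs)            # 2/3, 2/3, 3.0
```

For this small example, Hc = 2/3 agrees with the effective-resistance value Ω_G/(2N).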
§ COMPOSITE GRAPHS WITH NOISY CONSENSUS DYNAMICS We first consider systems with noisy consensus dynamics and no stubborn agents. For such networks, we refine the definition of a composite network in the following way. For each of the n disjoint subgraphs G_i, a single node l_i ∈ V_i is used to connect G_i to the other subgraphs. We call these nodes bridge nodes. An example of a bridge node connecting two subgraphs is shown in Figure <ref>. Each composite network G of n subgraphs will accordingly have n bridge nodes, where each l_i is connected to at least one other bridge node l_j. The backbone graph is the graph defined by the bridge nodes and the edges between them, B=(V_B, E_B), V_B={l_1, …, l_n}, E_B={(l_i, l_j) | l_i, l_j ∈ V_B}. The edge set E_B corresponds to E_con in the composite graph. Our goal is to analyze the coherence of the composite graph in terms of its subgraphs, bridge nodes, and backbone graph topology. To do this, we exploit the connection between coherence and effective resistance in electrical networks. §.§ Resistance in Electrical Networks Consider a connected graph G=(V,E) with N nodes that represents an electrical network, where each edge is a unit resistor. The resistance distance r(u,v) between nodes u and v is the potential difference between u and v when a 1-A current source runs between them <cit.>. The effective resistance of G is the sum of the resistance distances between each pair of nodes <cit.>: Ω_G = (1/2) ∑_{u,v ∈ V_G} r(u,v) = ∑_{u<v ∈ V_G} r(u,v). Coherence is related to effective resistance as follows <cit.>: H_C(G) = Ω_G/(2N). We use the following lemmas in our analysis. For any graph G=(V,E), let A=(V_A, E_A) and B=(V_B, E_B) be two subgraphs such that V_A ∪ V_B = V, V_A ∩ V_B = {x}, E_A ∪ E_B = E, and E_A ∩ E_B = ∅. In other words, G is partitioned into two components A and B that share only a single vertex {x}.
The resistance distance between any two vertices u, v with u ∈ V_A and v ∈ V_B is r(u,v) = r(u,x) + r(x,v). For all vertex pairs u,v ∈ G, the graph distance d(u,v) satisfies d(u,v) ≥ r(u,v), with equality if and only if there is exactly one path between u and v. In the case of tree graphs, we note that d(u,v) = r(u,v). §.§ Coherence in General Composite Graphs We now analyze coherence for a general composite graph with an arbitrary backbone graph topology. To do so, we make use of the following definition. Consider a graph G = (V, E). The resistance centrality of a node v ∈ V is C(v) = ∑_{u ∈ V, u ≠ v} r(u,v). We observe that resistance centrality is inversely proportional to the information centrality measure defined in <cit.>. We use C_i(v_i) to denote the resistance centrality of a node v_i in subgraph G_i, computed only over the subgraph G_i. Using this definition, we derive a formula for H_C(G) in terms of the coherence of the subgraphs, the choice of bridge nodes, and the topology of B. Consider a composite graph G with backbone graph B (|V_B|=n): * The coherence of G is H_C(G) = (1/(2N)) ( ∑_{i=1}^{n} 2 n_i H_C(G_i) + ∑_{i=1}^{n} ∑_{j=i+1}^{n} |V_i||V_j| r(l_i,l_j) + ∑_{i=1}^{n} |V - V_i| C_i(l_i) ). * To minimize the coherence of G, B should be defined such that l_i = \arg\min_{v ∈ V_i} C_i(v). To prove this theorem, we use the following proposition, which immediately follows from Lemma <ref>. The resistance distance between two nodes u_i, v_j ∈ V, where u_i ∈ V_i, v_j ∈ V_j, i ≠ j, is r(u_i,v_j) = r(u_i,l_i) + r(l_i,l_j) + r(l_j,v_j). We now prove the theorem. We find the effective resistance of G and then use this to find H_C(G). Let G = (V,E) be a composite graph, i.e., V = V_1 ∪ … ∪ V_n and E = E_1 ∪ … ∪ E_n ∪ E_B.
G is constructed such that every pair of subgraphs G_i, G_j, i ≠ j, is connected only through their respective bridge nodes l_i and l_j. By applying Lemma <ref>, we can define Ω_G in terms of Ω_i, the effective resistance of G_i, for i = 1 … n, and the resistance distances of each edge e ∈ E_B: Ω_G = (1/2) ∑_{i=1}^{n} ∑_{j=1}^{n} ( ∑_{u ∈ V_i} ∑_{v ∈ V_j} r(u,v) ) = ∑_{i=1}^{n} Ω_i + ∑_{i=1}^{n} ∑_{j>i}^{n} ( ∑_{u ∈ V_i} ∑_{v ∈ V_j} r(u,v) ). To obtain (<ref>) from (<ref>), note that when i=j, (1/2) ∑_{u ∈ V_i} ∑_{v ∈ V_i} r(u,v) = Ω_i. For i ≠ j, each term ∑_{u ∈ V_i} ∑_{v ∈ V_j} r(u,v) in (<ref>) can be rewritten as ∑_{u ∈ V_i} ∑_{v ∈ V_j} [r(u,l_i) + r(l_i,l_j) + r(l_j,v)], as noted in Proposition <ref>. In turn, this double sum in (<ref>) can be simplified to |V_j| C_i(l_i) + |V_i||V_j| r(l_i,l_j) + |V_i| C_j(l_j). The formula for Ω_G now becomes: Ω_G = ∑_{i=1}^{n} Ω_i + ∑_{i=1}^{n} ∑_{j=i+1}^{n} ( |V_j| C_i(l_i) + |V_i||V_j| r(l_i,l_j) + |V_i| C_j(l_j) ) = ∑_{i=1}^{n} Ω_i + ∑_{i=1}^{n} ∑_{j=i+1}^{n} ( |V_j| C_i(l_i) + |V_i| C_j(l_j) ) + ∑_{i=1}^{n} ∑_{j=i+1}^{n} |V_i||V_j| r(l_i,l_j). We can see that ∑_{i=1}^{n} ∑_{j=i+1}^{n} ( |V_j| C_i(l_i) + |V_i| C_j(l_j) ) collects, for each i, the term C_i(l_i) multiplied by the total number of vertices in G - G_i; therefore Ω_G is: Ω_G = ∑_{i=1}^{n} Ω_i + ∑_{i=1}^{n} ∑_{j=i+1}^{n} |V_i||V_j| r(l_i,l_j) + ∑_{i=1}^{n} |V - V_i| C_i(l_i). Then, we use (<ref>) to replace Ω_i with 2 n_i H_C(G_i) and divide (<ref>) by 2N to finally obtain (<ref>). All terms in H_C(G) are constant for any fixed set of graphs G_i and backbone graph B, except for C_i(l_i). Thus, we minimize H_C(G) by selecting each bridge node l_i to be the vertex with the minimum resistance centrality in G_i. §.§ Analysis of Coherence in Backbone Graph Structures We now explore specific backbone graph topologies. In addition to deriving formulae for coherence, we are interested in identifying backbone graphs that minimize the coherence of the composite graph. First, consider a composite graph with a tree backbone graph with |V_B|=n.
The coherence of such a composite graph can be derived from Theorem <ref> by using Lemma <ref> to replace r(l_i,l_j) with the graph distance d(l_i,l_j) between bridge nodes, to obtain: H_C(G) = (1/(2N)) ( ∑_{i=1}^{n} 2 n_i H_C(G_i) + ∑_{i=1}^{n} ∑_{j=i+1}^{n} d(l_i,l_j) |V_i||V_j| + ∑_{i=1}^{n} |V - V_i| C_i(l_i) ). With this expression we can show that the star backbone graph is the optimal tree backbone graph topology. The optimal composite graph with a tree backbone graph, B=(V_B,E_B), |V_B|=n, has a star graph for B, and the bridge node l_c in the center of the star is such that l_c ∈ V_c, where V_c ∈ \arg\max_i |V_i|. In a star graph, the resistance distance between the bridge node of the central graph G_c and all other subgraphs G_i, c ≠ i, is 1, and the resistance distance between the bridge nodes of all other subgraphs G_i and G_j, i,j ≠ c, is 2. When calculating H_C(G), d(l_i,l_j) |V_i||V_j| is computed for all combinations of i and j ≤ n. Therefore, to minimize ∑_{i=1}^{n} ∑_{j=i+1}^{n} d(l_i,l_j) |V_i||V_j|, and thus also minimize (<ref>), we choose a subgraph V_c ∈ \arg\max_i |V_i| to be the center of the star graph. Then, ∑_{i=1}^{n} ∑_{j=i+1}^{n} d(l_i,l_j) |V_i||V_j| becomes ∑_{i ≠ c} |V_c||V_i| + ∑_{i ≠ c} ∑_{j=i+1, j ≠ c} 2 |V_i||V_j|. No other arrangement of subgraphs in a tree can reduce H_C(G) more. Now consider a composite graph with a line backbone graph of size n. The coherence of such a composite graph is again derived from (<ref>) and (<ref>), but here d(l_i,l_j) = j-i for all j > i. Therefore: H_C(G) = (1/(2N)) ( ∑_{i=1}^{n} 2 n_i H_C(G_i) + ∑_{i=1}^{n} ∑_{j=i+1}^{n} (j-i) |V_i||V_j| + ∑_{i=1}^{n} |V - V_i| C_i(l_i) ). The optimal composite graph G with a line backbone graph B=(V_B, E_B) is obtained as follows. Let the nodes of B, V_B={l_{s_1}, l_{s_2}, …, l_{s_n}}, be ordered from left to right in the path graph, and let the subgraphs G_1, G_2, …, G_n be ordered by decreasing vertex set size. Then, with c=⌊n/2⌋, we can assign the vertices of V_B as follows: l_{s_c}=l_1, l_{s_{c+1}}=l_2, l_{s_{c-1}}=l_3, l_{s_{c+2}}=l_4, l_{s_{c-2}}=l_5, etc., where l_i is the bridge node of G_i.
As in Corollary <ref>, to optimize (<ref>) we need only to optimize ∑_i=1^n∑_j=i+1^n (j-i)|V_i||V_j|. Therefore we need to find the optimal arrangement of subgraphs in order to minimize this sum, which can be done by finding the ordering such that as |V_i||V_j| increases, j-i decreases. Clearly, this is done by placing the subgraphs along the line backbone in a way that minimizes the distance of the largest subgraphs to all other subgraphs and maximizes the distance of the smallest subgraphs to all other subgraphs. This requirement is fulfilled by placing the largest subgraph at the center of the line, placing the smallest subgraphs on the endpoints of B, and arranging the subgraphs in between closer to the center or to the ends according to their size.The coherence of a composite graph with a ring backbone graph |V_B|=n is derived from (<ref>) by noting that in a ring graph with n nodes, r(l_i,l_j) = (j-i)(n-(j-i))/n. We then get the following formula:H_C(G)= 1/2N( ∑_i=1^n 2 n_i H_C(G_i)+ ∑_i=1^n |V - V_i| C_i(l_i)+ ∑_i=1^n∑_j=i+1^n (j-i)(n-(j-i))/n |V_i||V_j| ).Finally we consider the upper and lower bounds for H_C(G) over all subgraph topologies and all backbone graph topologies. The proof of these results is given in the appendix. The lower bound for the coherence of any composite graph with |V_B|=n and |V_i| = |V_j|=m for all i,j ≤ n is: H_C(G) ≥1/2N( n(m-1) + 2m^2(n-1) + 2n(n-1)(m-1)). Now, we consider the upper bound of H_C(G) for a graph G. We first note that with all else held equal, the backbone graph which maximizes H_C(G) is the line graph, since by Lemma <ref> only tree backbones have d(l_i,l_j) ≥ r(l_i,l_j), and the line graph has the largest diameter of all tree graphs. Since all |V_i|=m, the ordering of the subgraphs along the line has no effect on the coherence. We previously derived the formula for the coherence of a line composite graph (<ref>). 
The upper bound for the coherence of any composite graph with |V_B|=n and |V_i| = |V_j| = m for all i,j ≤ n is: H_C(G) ≤ nm(m^2-1)/(12N) + nm^2(n^2-1)/(12N) + nm^2(m-1)(n-1)/(4N). § COMPOSITE GRAPHS WITH STUBBORN AGENT DYNAMICS We consider the problem of how to select E_con so as to minimize the coherence H_S(G). In particular, we assume that only a fixed number of edges k can be chosen. The optimal edge set can be found by an exhaustive search over all subsets of k edges; however, this approach is computationally intractable for large subgraphs and values of k. Instead, we define a greedy polynomial-time algorithm for selecting the edge set E_con. The pseudocode is given in Algorithm <ref>. In each iteration, an edge e is chosen whose addition to E_con minimizes the coherence of G. This is repeated until |E_con| has reached the desired size k. The input to Algorithm <ref> is the graph G=(V,E), where V=V_1 ∪ … ∪ V_n and E=E_1 ∪ … ∪ E_n, the matrix Q, and the desired size of E_con, k. § NUMERICAL EXAMPLES We illustrate the performance of our greedy algorithm for adding edges to initially disjoint networks. In all examples, we consider stubborn agent dynamics. The following two numerical examples were produced in Matlab. Both examples were run with two different types of D matrices: D_I=I and D_R, where D_R is diagonalized from d=[d_1, …, d_n]^⊤, where for each i, with probability 0.2, we set d_i = 0, and with probability 0.8, we set d_i to a value chosen uniformly at random from (0,1]. §.§ Adding Edges Between Versus Within Subgraphs We first demonstrate through numerical examples that the coherence of a network with stubborn agent dynamics will always see more improvement by adding a new edge between the subgraphs rather than within a given subgraph. The networks are generated as follows: two disjoint Erdős-Rényi graphs G_1 and G_2 with sizes between 8 and 15 are randomly generated to form a network G=(V_1 ∪ V_2, E_1 ∪ E_2)=(V,E).
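A non-optimized sketch of the greedy rule in Algorithm <ref> (our own implementation, assuming unit edge weights; a practical version would use rank-one inverse updates instead of refactorizing at every candidate):

```python
import itertools
import numpy as np

def greedy_edges(Q, candidates, k):
    """Greedily pick k connecting edges minimizing H_S = (1/2) tr(Q^{-1})."""
    Q = Q.copy()
    chosen, remaining = [], list(candidates)
    for _ in range(k):
        best_h, best_e = None, None
        for (u, v) in remaining:
            Qt = Q.copy()
            # add the Laplacian of a unit-weight edge (u, v)
            Qt[u, u] += 1; Qt[v, v] += 1
            Qt[u, v] -= 1; Qt[v, u] -= 1
            h = 0.5 * np.trace(np.linalg.inv(Qt))
            if best_h is None or h < best_h:
                best_h, best_e = h, (u, v)
        chosen.append(best_e)
        remaining.remove(best_e)
        u, v = best_e
        Q[u, u] += 1; Q[v, v] += 1
        Q[u, v] -= 1; Q[v, u] -= 1
    return chosen, 0.5 * np.trace(np.linalg.inv(Q))

# tiny example: two single-edge subgraphs {0,1} and {2,3} with D = I
L1 = np.array([[1., -1.], [-1., 1.]])
Q0 = np.block([[L1 + np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), L1 + np.eye(2)]])
cands = list(itertools.product([0, 1], [2, 3]))
edges, Hs = greedy_edges(Q0, cands, k=1)
print(edges, Hs)   # one connecting edge; H_S drops from 4/3 to 23/21
```

In this symmetric example every candidate edge gives the same improvement, so the first one examined is returned.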
In Figure <ref> we use D_I and in Figure <ref> we use D_R for our calculations. For each numerical example, we begin with two candidate edge sets: the set of edges between the two subgraphs and the set of edges within the two subgraphs. We run Algorithm <ref> to choose the set of edges between the graphs, and then we repeat the algorithm to choose the set of edges within the subgraphs. We run a series of 20 trials; we calculate the resulting coherence of each edge added in every trial and take the average results across all runs. These average coherence values from adding edges between the subgraphs and within the subgraphs are plotted in the accompanying figures. As we can see in Figure <ref> and Figure <ref>, the coherence of the network with edges added between the subgraphs is always better than the coherence of the network with edges added only within G_1 or G_2. Note also that the initial improvement from adding a single edge is always greater when adding an edge between the subgraphs, rather than adding an edge within a subgraph. We have also observed that when Algorithm <ref> is performed on sets of n>2 graphs, so long as more than n-1 edges are added, the improvement in coherence from adding a k-th edge, up to the (n-1)-th, will always be greater when adding an edge between subgraphs than within subgraphs. That is, the best choice will always be to connect another pair of subgraphs, rather than add an edge within an already connected subgraph of G. §.§ Greedy Versus Optimal E_con We now demonstrate that the coherence of G composed of two subgraphs with E_con of size k chosen by Algorithm <ref> is very close to the coherence of G with an optimal edge set E^* selected. We run simulations with two randomly generated Erdős-Rényi graphs with size between 4 and 8 (the small graph sizes are due to the combinatorial nature of optimal set selection). As in the previous numerical example, in Figure <ref> we use D_I and in Figure <ref>, we use D_R.
For this example, we make no distinction between edges within or between the original subgraphs. We first use Algorithm <ref> to choose an edge set of size k to add to the network. Then the optimal set is selected by forming all possible edge sets of size k and calculating the decrease in coherence from adding each of them in turn. The edge set with the greatest decrease is then chosen. The results of both methods are then summed and averaged over a series of 15 trials. We can see that the performance of the edge sets chosen by Algorithm <ref> is quite close to the performance of the optimal sets, justifying the use of the greedy algorithm. § EXAMPLE Consider the graphs G_1 and G_2 shown in Figure <ref>. Let their respective bridge nodes be l_1=2 and l_2=4, and let the edge set of the backbone graph B be E_B={(2,4)}. We then form the graph G=(V_1 ∪ V_2, E_1 ∪ E_2 ∪ {(2,4)}). For the given graphs, the effective resistances and resistance centralities are Ω_1 = 4, Ω_2 = 19/3, C_1(l_1) = 2, C_2(l_2) = 7/3. We then use (<ref>) to calculate the coherence of G to be H_C(G) = 8/3. We also use the above pair of graphs to illustrate the results of Algorithm <ref>. In Table <ref>, we list the edge sets E_con of size k=1, …, 3 generated by the algorithm and their corresponding values H_S(E_con), and compare them to the optimal edge set E^⋆ and H_S^⋆. § CONCLUSION We have considered the problem of how to best connect disjoint subgraphs to optimize the coherence of the composite graph. For systems with noisy consensus dynamics, we have derived several expressions and bounds for the coherence of composite graphs. For systems with stubborn agent dynamics, we presented a non-combinatorial algorithm for choosing edges which, when added to the network, closely approximate the performance of the optimal edge set of the same size.
Finally, we have demonstrated the performance of this algorithm in numerical examples.In future work, we plan to investigate analytical expressions for the coherence of composite networks with stubborn agent dynamics, similar to those we derived for composite networks with noisy consensus dynamics.We also plan to explore the design of composite graphs under additional dynamics and performance measures.§ APPENDIX The lower bound for the coherence of any composite graph with |V_B|=n and |V_i| = |V_j|=m for all i,j ≤ n is: H_C(G) ≥1/2N( n(m-1) + 2m^2(n-1) + 2n(n-1)(m-1)).To find the lower bound of H_C(G), we need to find the backbone graph structure that minimizes the coherence of G.The coherence of a composite graph with a complete graph backbone B, |V_B|=n, is derived from (<ref>), using the fact that in a complete graph, r(l_i,l_j) is the same for all 1 ≤ i,j ≤ n, and r(l_i,l_j) = 2n/n^2. The coherence of a complete graph is then:H_C(G)=1/2N( ∑_i=1^n 2n_iH_C(G_i) + 2n/n^2∑_i=1^n∑_j=i+1^n|V_i||V_j| + ∑_i=1^n |V - V_i| C_i(l_i) ). Clearly the best way to minimize the resistance distance across all edges of the backbone graph is to connect every vertex to every other vertex, resulting in the lowest resistance distance for each edge in the backbone graph. Therefore the composite graph with |V_B|=n and |V_i|=m for all subgraphs G_i will have the highest connectivity when B is a complete graph of size n and each G_i is a complete subgraph of size m.To calculate the effective resistance of this graph G, beginning from (<ref>), we substitute m-1 for 2n_iH_C(G_i), m for |V_i|, |V-V_i|=m(n-1), and (m-1)2m/m^2 for C_i(l_i) to get: H_C(G)=1/2N( ∑_i=1^n m-1 +2n/n^2∑_i=1^n∑_j=i+1^n m^2 + ∑_i=1^n m(n-1) (m-1)2m/m^2) = 1/2N( n(m-1) + 2n/n^2m^2 n(n-1)/2+ nm(n-1)(m-1)2m/m^2)= 1/2N( n(m-1) + 2m^2(n-1) + 2n(n-1)(m-1)).Any other composite graph G' with |V_i| = m and |V_B|=n will therefore have H_C(G') ≥1/2N( n(m-1) + 2m^2(n-1) + 2n(n-1)(m-1)). 
The upper bound for the coherence of any composite graph with |V_B|=n and |V_i| = |V_j|=m for all i,j ≤ n is: H_C(G) ≤ nm(m^2-1) /12N + nm^2(n^2-1)/12N+ nm^2 (m-1)(n-1)/4N. Since the line graph has the highest coherence among connected graphs on a given number of nodes, in order to maximize H_C(G) we set B to be a line graph, each G_i to be a line graph of size m, and each bridge node l_i ∈ V_i to be an endpoint of its line subgraph G_i. We use the structural properties of a line graph to find that C_i(l_i)= ∑_k=1^m-1 k = m(m-1)/2 and H_C(G_i)= 1/2n_i∑_i=1^m ∑_j=i+1^m (j-i) = 1/12n_i m(m^2-1). We now substitute H_C(G_i)=1/12 n_im(m^2-1), |V-V_i|=m(n-1), |V_i|=m, and C_i(l_i)=m(m-1)/2 into (<ref>) to get: H_C(G) = 1/2N( ∑_i=1^n 2n_i1/12n_im(m^2-1) + ∑_i=1^n ∑_j=i+1^n (j-i) m^2 + ∑_i=1^n m(n-1)·m(m-1)/2). Simplifying further, we obtain: H_C(G) =1/2N( 1/6m(m^2-1)n + 1/6n(n^2-1)m^2 +nm(n-1)m(m-1)/2) = nm(m^2-1) /12N + nm^2(n^2-1)/12N + nm^2 (m-1)(n-1)/4N. Any other composite graph G' with |V_i| = m and |V_B|=n which does not have both a line backbone graph and line subgraphs G_i will therefore have H_C(G') ≤ nm(m^2-1) /12N + nm^2(n^2-1)/12N+ nm^2 (m-1)(n-1)/4N.
http://arxiv.org/abs/1702.07823v2
{ "authors": [ "Erika Mackin", "Stacy Patterson" ], "categories": [ "math.OC", "cs.SY" ], "primary_category": "math.OC", "published": "20170225025051", "title": "Optimizing the Coherence of Composite Networks" }
Unravelling the Dodecahedral Spaces Jonathan Spreer and Stephan Tillmann December 30, 2023 ========================================We consider the task of learning control policies for a robotic mechanism striking a puck in an air hockey game. The control signal is a direct command to the robot's motors. We employ a model-free deep reinforcement learning framework to learn the motoric skills of striking the puck accurately in order to score. We propose certain improvements to the standard learning scheme which make the deep Q-learning algorithm feasible when it might otherwise fail. Our improvements include integrating prior knowledge into the learning scheme, and accounting for the changing distribution of samples in the experience replay buffer. Finally, we present our simulation results for aimed striking, which demonstrate the successful learning of this task and the improvement in algorithm stability due to the proposed modifications. § INTRODUCTION The problem of learning a skill, a mapping between states and actions to reach a goal in a continuous world, lies at the heart of every interaction of an autonomous system with its environment. In this paper, we consider the problem of a robot learning how to strike the puck effectively in a game of air hockey. Air hockey is a fast competitive game where two players play against each other on a low-friction table. Players are required to develop and perfect skills such as blocking and striking in order to play and win. Classical approaches for striking the puck involve a multi-stage process of planning and execution: first, planning a strategy based on the goal and skill, e.g., calculating the best point of collision to achieve the goal; then planning a path and trajectory; and finally executing the low-level motor control <cit.>. Each part requires full knowledge of the mechanical and physical models, which might be complex.
We propose doing the planning and the control simultaneously with learning, which offers a model-free way to learn from the final result. The result is given in the form of a reward at the end of each trial, and directs the learning toward the correct policy. Approaches to such problems include policy gradients <cit.>, where a mapping between states and actions is learned by gradient ascent on the accumulated reward, with or without keeping track of the value function. Another popular approach is Learning from Demonstration (LfD) <cit.>, sometimes referred to as imitation learning <cit.> or apprenticeship learning <cit.>. In LfD a human expert (or a programmed agent) is recorded and the learning agent learns from the recorded data in a supervised fashion. Sometimes this process is used as an initialization for a second reinforcement learning stage for improvement. The work in <cit.> used imitation learning to learn primitive behaviors for a humanoid robot in air hockey. Exploration in such an environment is also an interesting issue. ϵ-greedy exploration, which is the most common choice, is not highly efficient in such systems, since a dynamical system functions as a low-pass filter <cit.>, and an occasional random action might have little effect on the output of the system. We combined several types of exploration, including ϵ-greedy and local exploration, along with the prior-knowledge-based exploration that we propose. We propose an algorithm suitable for learning complex policies in dynamic physical environments. The algorithm combines ϵ-greedy exploration with temporally correlated noise <cit.> for local exploration, which proved to be essential for effective learning. We further propose two novel contributions. We suggest a more relaxed approach to LfD which does not have the same limitations as standard LfD and can be learned from experience as in regular RL.
We also manage to overcome the instability of the learning due to the non-stationarity of the observed data by expanding the target update period. We compare our results with other deep reinforcement learning algorithms and achieve significant improvements: we are able to reach near-optimal results and keep them, without suffering from a drop in the score function or in the policies obtained. § RELATED WORK Research on learning in autonomous systems has been conducted in several directions. Our work has been influenced mainly by the recent work on deep Q-networks <cit.>, and its adaptation for continuous domains, deep deterministic policy gradients <cit.>. Since the groundbreaking results shown by deep Q-learning for learning to play games in the Atari 2600 arcade environment, there has been extensive research on deep reinforcement learning. Deep Q-learning in particular seeks to approximate the Q-values <cit.> using deep networks, such as deep convolutional neural networks. There has also been work on better target estimation <cit.>, on improving the learning by prioritizing the experience replay buffer <cit.>, and on performing better gradient updates with parallel batch approaches <cit.>. Some work on adaptation to the continuous control domain has also been done by <cit.>. Policy gradient methods were traditionally used <cit.>, but struggled as the number of parameters increased. Adaptation to the deep neural network framework has also been carried out in recent years <cit.>. Several benchmarks such as <cit.> have made comparisons between continuous control algorithms. In this paper we focus on the online DQN-based approach, and extend it in the domain of continuous-state optimal control for striking in air hockey. § DEEP Q-NETWORKS We consider a standard reinforcement learning setup consisting of an agent interacting with the environment in discrete time steps.
At each step the agent receives an observation s_t ∈ℝ^n which represents the current physical state of the system, takes an action a_t ∈ A which it applies to the environment, receives a scalar reward r_t=r(s_t,a_t), and observes the new state s_t+1 to which the environment transitions. It is assumed that the next state is generated according to a stochastic transition model P(s_t+1|s_t,a_t). The action set A is assumed to be discrete. The goal of the agent is to maximize the sum of rewards gained from interaction with the environment. Our problem is a finite-horizon problem in which the game terminates when the agent reaches some predefined time T. We define the future return at time t as R_t = ∑_t'=t^T r_t', where T is the time at which the game terminates. The goal is to learn a policy which maximizes the expected return 𝔼[R_0] from the initial state. The action-value function Q^*(s,a) is used in many reinforcement learning algorithms. It describes the expected return after taking an action a in state s and thereafter following an optimal policy. The optimal state-action value function Q^* obeys the equality known as the Bellman equation: Q^∗(s_t,a_t) = 𝔼_s_t+1[r_t + max_a'Q^*(s_t+1,a') | s_t,a_t ]. For learning purposes it is common to approximate the value of Q^∗(s,a) by using a function approximator, such as a neural network. We refer to the neural network function approximator with weights θ as a Q-network. A neural network representing the Q-function can be trained by considering the loss function: L(θ) = 𝔼_s_t,a_t,r_t,s_t+1∼ D[(y(θ) - Q(s_t,a_t ; θ) )^2 ], where y(θ) = r(s_t,a_t) if s_t+1 is terminal, and y(θ) = r(s_t,a_t) + max_a Q(s_t+1,a ; θ) if s_t+1 is not terminal. During training, each transition of state, action, reward and next state <s_t,a_t,r_t,s_t+1> is stored in an experience replay buffer D, from which samples are drawn uniformly to train the network in order to reduce time correlations. y(θ) is called the target and is typically also a function of θ.
The max{·} operator in the target makes it hard to calculate derivatives with respect to the weights, so the target is kept constant and the derivatives are calculated only with respect to Q(s_t,a_t;θ). This loss function has the tendency to oscillate and diverge. In order to keep the target stationary and prevent oscillations, the DQN algorithm makes use of another network, called a target network, with parameters θ̂^-. The target network is the same as the on-line network except that its parameters are copied every C updates from the on-line network, so that θ̂^- is kept fixed during all other updates. The training of the network in this case is according to the following sequence of loss functions: L_i(θ_i) = 𝔼_s_t,a_t,r_t,s_t+1∼ D[(y_i(θ̂^-_i) - Q(s_t,a_t ; θ_i) )^2 ]. The target used by DQN is then y_i(θ̂^-_i) = r(s_t,a_t) + max_a Q(s_t+1,a ; θ̂^-_i ), and the on-line network weights can be trained by stochastic gradient descent (SGD) and back-propagation: θ_i+1 = θ_i - α∇_θ_i L_i(θ_i), where α is the learning rate. An improvement has been proposed in the double DQN algorithm: decoupling the selection of the action from the estimation of the next value, which decreases the problem of value overestimation. The following target is used: y_i(θ̂^-_i) = r(s_t,a_t) + Q(s_t+1 ,a_t+1 ; θ̂_i^- ), where a_t+1 = argmax_a Q(s_t+1,a ; θ_i ). In our work, unless specified otherwise, all learning updates have been done according to the double DQN learning rule. To explore the environment, such systems typically use the ϵ-greedy heuristic. Given a state, a deep Q-network (DQN) predicts a value for each action. The agent chooses the action with the highest value with probability 1-ϵ and a random action with probability ϵ. § STRIKING IN AIR HOCKEY We next introduce the striking problem and our learning approach. §.§ The Striking Problem The striking problem deals in general with the interception of a moving puck and striking it in a controlled manner.
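The DQN and double-DQN targets of the previous section can be sketched for a minibatch as follows. The arrays stand in for the outputs of the on-line and target networks on the next states, the values are toy numbers, and, following the finite-horizon setting of the text, no discount factor is used.

```python
import numpy as np

def dqn_targets(rewards, q_next_target, terminal):
    """Standard DQN target: r + max_a Q_target(s', a), cut off at terminal states."""
    return rewards + (1.0 - terminal) * q_next_target.max(axis=1)

def double_dqn_targets(rewards, q_next_online, q_next_target, terminal):
    """Double DQN: the on-line network selects a' = argmax_a Q_online(s', a),
    the target network evaluates it, decoupling selection from evaluation."""
    a_next = q_next_online.argmax(axis=1)
    q_eval = q_next_target[np.arange(len(a_next)), a_next]
    return rewards + (1.0 - terminal) * q_eval

# toy minibatch of 2 transitions over 3 actions
rewards = np.array([1.0, -0.5])
terminal = np.array([0.0, 1.0])  # the second transition ends the episode
q_next_online = np.array([[0.2, 0.9, 0.1], [0.0, 0.0, 0.0]])
q_next_target = np.array([[0.3, 0.5, 0.8], [0.0, 0.0, 0.0]])

y_dqn = dqn_targets(rewards, q_next_target, terminal)
y_ddqn = double_dqn_targets(rewards, q_next_online, q_next_target, terminal)
print(y_dqn, y_ddqn)  # [1.8 -0.5] [1.5 -0.5]
```

Note how the double-DQN target for the first transition (1.5) is smaller than the plain DQN target (1.8): the target network's maximum no longer drives the estimate, which is the mechanism behind reduced overestimation.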
We specialize here to the case where the puck is stationary. We wish to learn the control policy for striking the puck such that after the impact, the puck trajectory will have some desired properties. We focus on learning to strike the puck directly at the opponent's goal. We also considered some other modes of striking the puck, such as hitting the wall first. These are not presented here, but the same learning scheme fits them as well. We refer to these modes as skills, from which a high-level agent can choose in a full air hockey game. The learning goal is to be able to learn these skills with the following desired properties: * the puck's velocity should be maximal after the impact with the agent. * the puck's end position at the opponent's side should be the center of the goal. * the puck's direction should be according to the selected skill. The agent is a planar robot with 2 degrees of freedom, X and Y (a gantry-like robot). We used second-order kinematics for the agent and puck. The state vector of the problem is s_t ∈ℝ^8, which includes all the positions and velocities of the agent and the puck in both axes, i.e., s_t = [m_x, m_Vx, m_y, m_Vy, p_x, p_Vx, p_y, p_Vy]^T. Here m_* stands for the agent's state variables and p_* stands for the puck's state variables. The actions are a_t ∈ℝ^2, and include the accelerations in both axes for the agent. The striking problem can be described as the following discrete-time optimal planning problem: minimize_a_k ϕ(s_T, T) subject to s_k+1 = f(s_k,a_k), s^(i)_k ∈[S^(i)_min, S^(i)_max], i=1,…,8, a^(j)_k ∈[A^(j)_min, A^(j)_max], j=1,2, s_0 = s(0). Here the objective function ϕ(s_T, T) represents the value of the final state s_T (in terms of velocity and accuracy) and the final time T, which we desire to be small. The function f(·) is the physical model dynamics. S^(i)_min, S^(i)_max and A^(j)_min, A^(j)_max are the constraints on the state (table boundaries and velocities) and action (accelerations/torques) spaces, respectively.
s_0 is the initial state. We assume that f(·), the collision models and the table state constraints are hidden from the learning algorithm. The best known collision model is non-linear and hard to work with <cit.>. Solving such a problem analytically when these functions are known is challenging; when they are unknown it is practically impossible with analytic tools. In the simulations, specific models were used as explained in Section <ref>. In order to fit the problem as stated in <ref> to the DQN learning scheme, where the outputs are discrete Q-values associated with discrete actions, we discretized the action space by sampling a 2D grid with n actions in each dimension (each dimension represents an axis in the joint frame). Thus, we have n^2 actions. We make sure to include the marginal and the zero actions, so that the class of policies we search in includes the Bang-Zero-Bang profile associated with time-optimal problems. Each action is associated with an output of the neural network, where each output represents the Q-value of that action under the input state supplied to the network; e.g., if state s is supplied to the network, output i is the Q-value Q(s,a_i;θ). Thus, for every given state we have n^2 Q-values from the network, associated with the n^2 actions. §.§ Reward Definition The learning is episodic, and in this problem the agent receives a success indication only upon reaching a terminal state and finishing an episode. The terminal states are states in which the mallet collides with one of the walls (table boundary violation), and the states in which the mallet strikes the puck (the agent does not perform any actions beyond this point). Any other state, including the states in which an episode terminates due to reaching the maximal allowed number of steps, is not defined as a terminal state. At the terminal state of each episode the agent receives the following reward: R_terminal = r_c + r_v + r_d. R_terminal consists of three components.
The first is r_c, which is a fixed reward indicating a puck strike. The second component is a reward which encourages the agent to strike the puck with maximum velocity, given by r_v = sgn(V)· V^2, where V is the projection of the velocity on the direction of a desired location x_g on the goal line of the opponent. The last component is a reward for striking accuracy, which indicates how close the puck reached x_g: r_d = c if |x-x_g| ≤ w, and r_d = c · e^-d · (|x-x_g|-w) if |x-x_g| > w, where x is the actual point the puck reaches on the opponent's goal line, c is a scaling factor for the reward, w is the width of the window around the target point which receives the highest reward, and d is a decay rate around the desired target location. Naturally, if the episode terminates without striking the puck, R_terminal is zero. In order to encourage the agent to reach a terminal state in minimum time, the agent receives a small negative reward -r_time for each time step of the simulation until termination. The accumulated reward for the entire episode is then R_total = R_terminal - n · r_time, where n is the number of time steps of that episode. §.§ Exploration The problem of exploration is a major one, especially in the continuous domain. We address the issue from two angles: completely random exploration and local exploration. §.§.§ Completely Random Exploration We use ϵ-greedy exploration (see Section <ref>) in order to allow experimenting with arbitrary actions. In physical systems with inertia it is not efficient, since the system acts as a low-pass filter, but it does give the agent some sampling of actions it would not try under normal conditions. §.§.§ Local Exploration The main type of exploration is what we refer to as local exploration.
Similarly to what was done in <cit.>, we add noise sampled from a noise process 𝒩 to our currently learned policy. Since the agent can apply only actions from a discrete set 𝒜, we project the outcome onto the agent's action set: a_t = 𝒫_𝒜{argmax_a Q(s_t,a ; θ) + 𝒩_t}. For 𝒩 we used an Ornstein–Uhlenbeck process <cit.> to generate temporally correlated exploration noise for exploring efficiently. The noise parameters should be chosen in such a way that the exploration remains effective after the projection. Small noise might not change the action after the projection, but large noise might result in straying too far from the greedy policy. Thus, the parameters of the noise should be in proportion to the range and spacing of the actions. §.§ Prior Knowledge from Experience In a complex environment, learning from scratch has been shown to be a hard task. Searching in continuous high-dimensional spaces with local exploration alone might prove futile. In many learning problems, prior knowledge and understanding are present and can be used to improve the learning performance. A common way of inserting priors into the learning process uses LfD. For that purpose, multiple samples of expert performance should be collected, which is not always feasible or applicable. In many cases the prior knowledge can be translated into some reasonable actions, although usually not into an optimal policy. Examples of this can be seen in almost every planning problem. In games, the rules give us some guidance as to what to do; e.g., in soccer, kick the ball toward the goal, so for an agent to spend time learning the fact that it has to kick the ball is a waste. In skydiving, skydivers are told to move their hands to the sides in order to rotate; they are not required to search every possible pose to learn how to rotate.
Furthermore, the basic rotating procedure taught to new skydivers is not the correct way to do it; it is taught as a basic technique, an initialization for them to modify in order to find the correct way. We propose showing the agent a translation of the prior knowledge in the form of a teacher policy. In some episodes, instead of letting the agent act according to the greedy policy, it does what the teacher policy suggests. The samples collected in those episodes are stored in the experience replay buffer like any other samples, allowing the learning algorithm to call upon that experience from the replay buffer and learn from it within the standard framework. For the problem of air hockey, we used a policy encapsulating some crude knowledge we have of the problem. We simply instruct the agent to move in the direction of the puck, regardless of the task at hand (aiming to the right\left\middle), since this knowledge is simple and robust enough. The guidance policy we constructed has the following form: V_next = (P_puck - P_agent)/‖ P_puck - P_agent‖· MaxVelocity, a = ((V_next - V_agent)/Δ t)/‖ (V_next - V_agent)/Δ t ‖· MaxForce, where P_object is the x,y position vector of the object, and MaxVelocity, MaxForce are physical constraints of the mechanics. The agent acts by the projection of this policy onto its action space, 𝒫_𝒜{·}. This policy will result in an impact between the agent and the puck, but by no account will it be considered a good strike, since there is no reason the puck will move in the goal's direction (except in the special case when the puck lies on the line between the agent and the goal). The guidance policy is shown (the agent acts by it), and its samples are stored in the replay buffer, with probability ϵ_p. §.§ Non-Uniform Target Network Update Periods The deep reinforcement learning problem differs from the supervised learning problem in a fundamental way: the data which the network uses during the learning changes over time.
At the beginning, the experience replay buffer is empty; the agent starts to act and fills the buffer, and when the buffer reaches its maximal capacity, new transitions overwrite the older ones. The data is thus changing over time, first in size and then in distribution. As the agent learns and gets better, the data in the buffer reflects that, and the samples are more and more of good states which maximize the reward. Recall that the value the neural network tries to minimize is the loss function stated in (<ref>). In order to stabilize the oscillations, a target network with weights fixed over constant update periods was introduced. This led to the stationarity of the target value. Choosing the length of the update period became one of the parameters that had to be set. A small update period results in instability, since the target network changes too fast and oscillates; a large update period may be too stationary, and the bootstrap process might not work properly. Thus, a period that is somewhere in the middle is chosen so that the updates are stable. In many domains, such as in air hockey and also in some of the Atari games, DQN still suffers from a drop in the score as the learning process progresses (see, e.g., Fig. <ref>). We argue that this drop is not only due to value overestimation (it happens for Double DQN updates as well), but also due to issues with the target value.
Choosing a middle value for the update period may result in slow learning at the beginning and a drop in the score later in the learning due to oscillations. We show that by adjusting the update period over time, we manage to stabilize the learning and completely prevent the drop in the score. We start with a small update period, since the replay buffer D is empty and we want to learn quickly; we then keep expanding the period as the buffer gets larger and more sampling is needed to cover it. As the agent gets better and the distribution stabilizes, we also expand the update period in order to filter oscillations and keep the agent in the vicinity of the good learned policy. The expansion of the update period is done at each target-weights update according to C = C· C_r, C_r ≥ 1, where C_r is the expansion rate. When C_r=1 the updates are uniform, as in the standard DQN. At the beginning, every sample contains new information that should affect the learning. As the learning progresses and the optimal policy has not been obtained yet, the samples in the replay buffer are diverse, allowing the agent to learn from good samples as well as bad ones. At later stages, when the agent has already learned a good policy, the distribution of samples in the replay buffer resembles that policy. If learning continues at this point, the network might suffer from what is known as catastrophic forgetting <cit.> in neural networks. Freezing the target network before that stage stabilizes the learning and allows the network to fine-tune its performance, even though the distribution in the replay buffer is not diverse. The target network then contains the knowledge gained in the past from bad examples. For that purpose, at that stage of the learning the update period should be large. This is achieved by gradually increasing the update period during the learning, starting from a small initial period.
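The expanding schedule C = C·C_r can be sketched in a few lines. The values of C, C_r and the number of updates below are illustrative, not the ones used in the experiments; the point is only that target copies are frequent early on and become increasingly rare.

```python
def target_sync_schedule(c0, c_rate, total_updates):
    """Return the on-line update indices at which the target weights are copied,
    with the period C expanded by C_r after every copy (C_r = 1 recovers DQN)."""
    syncs, period, next_sync = [], float(c0), float(c0)
    for t in range(1, total_updates + 1):
        if t >= next_sync:
            syncs.append(t)
            period *= c_rate       # C <- C * C_r
            next_sync = t + period
    return syncs

syncs = target_sync_schedule(c0=200, c_rate=1.5, total_updates=10_000)
gaps = [b - a for a, b in zip(syncs, syncs[1:])]
print(syncs)
assert syncs[:3] == [200, 500, 950]                      # early copies are frequent
assert all(g2 >= g1 for g1, g2 in zip(gaps, gaps[1:]))   # periods only grow
```

With c_rate=1.0 the schedule degenerates to the usual fixed-period target updates, which makes the standard algorithm a special case of this scheme.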
§.§ Guided-DQNPutting the above-discussed features together produces the Guided-DQN algorithm we used for the air hockey problem. The algorithm is given in Algorithm <ref>. As input, the algorithm receives the guidance policy, which encapsulates the prior knowledge we have of the problem, and the expansion rate C_r. Each episode is executed either entirely with the guidance policy π(s), with probability ϵ_p, or according to the greedy policy with the addition of temporally correlated exploration noise, with probability 1-ϵ_p. In either case, a guided episode or a greedy episode, at each step the algorithm stores the transitions in the replay buffer and performs a learning step on the Q-network. Samples from the replay buffer are selected randomly with uniform probability. The projection operator 𝒫_A projects the continuous actions onto the agent's discrete set by choosing the action with the lowest Euclidean distance. Every C updates the target Q-network is updated with the weights of the on-line Q-network, and C is expanded by a factor of C_r, so the next target update occurs after a longer period than the previous one. The learning rule is the Double DQN learning rule. Note that if the algorithm is not provided with a guidance policy (equivalent to setting ϵ_p to zero), C_r = 1, and the temporally correlated process is 𝒩≡ 0, the GDQN algorithm reduces to the standard Double DQN algorithm. § EXPERIMENTS The simulation was fashioned after the robotic system in Fig. <ref>. In the robotic system the algorithm would learn on the real, unknown physical models, but for the purpose of simulation we used simulated models for the agent dynamics and the collisions. The simulation models are hidden from the learning algorithm and exist solely for the purpose of simulating the system for learning.
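A minimal sketch of the action-selection ingredients of Guided-DQN — the greedy action plus Ornstein-Uhlenbeck noise projected onto the discrete grid, and the guidance (teacher) policy — is given below. All numerical parameters are illustrative, and the random Q-values stand in for the network.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_step(x, theta=0.15, mu=0.0, sigma=0.3, dt=1.0):
    """One step of an Ornstein-Uhlenbeck process: temporally correlated noise."""
    return x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)

def project(a, action_set):
    """P_A{.}: project a continuous 2-D action onto the nearest discrete action."""
    return action_set[np.argmin(np.linalg.norm(action_set - a, axis=1))]

def guidance_action(p_agent, v_agent, p_puck, v_max, f_max, dt):
    """Teacher policy: accelerate so as to move straight toward the puck."""
    v_next = (p_puck - p_agent) / np.linalg.norm(p_puck - p_agent) * v_max
    a = (v_next - v_agent) / dt
    return a / np.linalg.norm(a) * f_max

# 5x5 grid of accelerations; the marginal and zero actions are included
grid = np.linspace(-1.0, 1.0, 5)
actions = np.array([[ax, ay] for ax in grid for ay in grid])

q_values = rng.normal(size=len(actions))   # stand-in for the Q network output
greedy = actions[np.argmax(q_values)]
noise = ou_step(np.zeros(2))
a_t = project(greedy + noise, actions)     # exploratory action actually taken
a_guided = project(
    guidance_action(np.zeros(2), np.zeros(2), np.array([0.3, 0.4]),
                    v_max=1.0, f_max=1.0, dt=0.05),
    actions)
print(a_t, a_guided)
```

For a puck at (0.3, 0.4) relative to the agent, the guidance acceleration points along (0.6, 0.8) and is projected onto the grid point (0.5, 1.0); an episode run with such actions drives the mallet straight at the puck, as the text describes.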
For the agent dynamics we used the discrete-time second-order dynamics [ X_m; V_x,m; Y_m; V_y,m ]_k+1 =[ 1 T 0 0; 0 1 0 0; 0 0 1 T; 0 0 0 1 ][ X_m; V_x,m; Y_m; V_y,m ]_k +[ 0 0; T 0; 0 0; 0 T ][ a_x; a_y ]_k under the following constraints: |a_x,y| < Maximum force, |V_x,y,m| < Maximum velocity, |X_m,Y_m| < Table boundaries. These constraints represent the physical constraints present in the mechanical system: the velocity has a maximum value, the torques are bounded, and the mallet is not allowed to move outside of the table boundaries. We used in the simulations an ideal impact model between the mallet and the puck, in the sense that we neglected the friction between the bodies during the impact and assumed the impact is instantaneous, with energy loss according to a restitution coefficient e. The forces, accelerations, velocities and space (the field's boundaries) are constrained to reflect the physical constraints in the robotic system. The list of parameters (learning rate, probabilities, etc.) used throughout the simulations is given in Table <ref>. The learning environment is a custom-built simulation based on OpenAI gym <cit.>. The simulation is modeled after an air hockey robotic system with two motors and tracks, one for each axis. The simulation visually includes the table, the mallet and the puck. We simulated each attempt to strike the puck as an independent episode comprised of discrete time steps. At the beginning of each episode the puck is placed at a random position on the agent's half of the table with zero velocity, and the agent starts from a home position (a fixed position near the middle of the goal) with zero velocity. Each episode terminates upon reaching a terminal state or upon passing the maximum number of steps defined for an episode. The maximum number of steps is 150, and the terminal states are the states where the agent collides with the puck (good states) or with one of the walls (bad states).
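One step of the agent dynamics above can be sketched as a double integrator with saturation; the limits and table half-extents below are illustrative placeholders for the real physical constraints.

```python
import numpy as np

T = 0.05                       # sampling time [sec], as in the simulation
V_MAX, A_MAX = 1.0, 1.0        # illustrative velocity/force limits
TABLE = np.array([1.0, 0.5])   # illustrative half-extents of the table

A = np.array([[1, T, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, T],
              [0, 0, 0, 1]], dtype=float)
B = np.array([[0, 0],
              [T, 0],
              [0, 0],
              [0, T]], dtype=float)

def step(s, a):
    """One step of s_{k+1} = A s_k + B a_k with saturated acceleration/velocity."""
    a = np.clip(a, -A_MAX, A_MAX)
    s = A @ s + B @ a
    s[[1, 3]] = np.clip(s[[1, 3]], -V_MAX, V_MAX)   # velocity limits
    return s

def hit_wall(s):
    """Terminal 'bad' state: the mallet leaves the table boundaries."""
    return bool(np.any(np.abs(s[[0, 2]]) > TABLE))

s = np.zeros(4)                # state [X_m, V_x,m, Y_m, V_y,m], at rest
for _ in range(10):
    s = step(s, np.array([1.0, 0.0]))   # saturated push along X (bang profile)
print(s, hit_wall(s))
```

After ten saturated pushes the velocity along X is 10·T·a = 0.5 and the position is T²·(0+1+…+9) = 0.1125, still inside the illustrative table, which matches the constant-acceleration kinematics the matrices encode.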
The environment returns a reward as described in Section <ref>. No reward is given upon hitting a wall, beyond the time penalty. The dynamic model of the puck and agent is a second-order model as described in Section <ref>. T is the sampling time of the system and was set to 0.05 [sec] in the simulation. The puck's rotation was neglected; thus the collision models (puck-agent, puck-wall) are ideal, with equal inbound and outbound angles. Energy loss in the collisions was modeled with a restitution coefficient of e=0.99. The controller is a non-linear neural controller, a fully connected multi-layer perceptron with 4 layers (3 hidden layers and an output layer); the first two hidden layers have 100 units each, the third hidden layer has 40 units, and the output layer has 25 units. All activations are the linear rectifier f(x)=max(0,x). The controller is a map between states s_t (the inputs to the controller) and discretized Q-values. We choose 5 actions in each axis, yielding 25 output actions/Q-values (see Section <ref>). We used the RMSProp algorithm with mini-batches of size 64. In all the simulation experiments we measured the score for random initial positions of the puck; this is always shown in graphs with the caption "random". In addition, we measured the performance for three additional fixed representative states of the puck: fixed positions on the left side, in the middle, and on the right side of the table. We also estimated the average value over all states and present it as well. The graphs matching these measures are shown with appropriate captions. We present in this paper the results for the direct hit. §.§ Results First we show the performance of the standard Double DQN in Fig. <ref> for different target network update periods.
We chose a fast period, an intermediate period, and a slow period, calculated such that each state in the buffer is visited 8 times on average before being discarded from the buffer. It can be seen that the DDQN with fast updates (DDQN_200) rises the fastest but also drops quickly; the same behavior can be observed for the intermediate updates (DDQN_1000), but the rise is slower and the drop happens less sharply. The score value the network drops to, -150, is exactly the value of the time penalty for a complete episode, i.e., the agent does not reach a terminal state. When investigating the policies obtained, it can be seen that the agent's actions oscillate between two opposite actions, which effectively causes it to stand still. For the slow updates (DDQN_5000) the case is different: the network seems mostly indifferent to the updates, and at the end it manages to rise a little. The average value for all three runs oscillates and in general suffers from severe underestimation. In Fig. <ref> we compare the results of three algorithms: the DDQN algorithm with the intermediate update period (the best of the three shown before), DeepMind's Deep Deterministic Policy Gradients (DDPG) algorithm, and our Guided-DQN algorithm. The DDPG algorithm manages to learn a suboptimal policy, but oscillates strongly around it. This can be seen in the fixed-position graphs of the puck, although in the random graphs it looks fairly stable around the suboptimal policy. DDQN was discussed before, and our GDQN, as can be seen clearly, learns the optimal policy and reaches the maximum score possible in each case. For random puck positions the score also reaches an optimal policy in a very stable manner. Note that the score does not drop at all, and even the rise at the beginning is faster than for the other two algorithms; it is even faster than the rise of the DDQN with the fast updates shown in Fig. <ref>, due to the fast updates at the beginning and the guidance of the teacher policy.
The average values of DDQN and DDPG oscillate and suffer from underestimation and overestimation, respectively, whereas GDQN's average value is extremely stable and suffers from neither. The learned control policies and the trajectories are shown in Fig. <ref> for a puck stationed on the left side of the table. The profile in the X-Y plane of the table is shown in Fig. <ref>. The agent makes a curve in order to hit the puck from the left, so that it will go to the middle of the goal. The motion is visually very similar to an S-curve; in the X axis the agent performs a saturated action, compatible with a Bang-Bang profile, and in the Y axis something that is effectively a Bang-Zero-Bang profile.§ CONCLUSIONS We addressed the application of striking a stationary puck with a physical mechanism. We showed that the standard DQN algorithm did not lead to satisfactory results. Therefore we proposed two novel improvements to this algorithm: * using prior knowledge during the learning to direct the algorithm to interesting regions of the state and action spaces. * using non-uniform target update periods with an expanding rate in order to stabilize the learning. We also augmented the commonly used ϵ-greedy exploration mechanism with local exploration based on a temporally correlated random process, to better suit the physical environment. The modified algorithm is shown to learn near-optimal performance in the motion planning and control problem of air hockey striking. In particular, it completely solves the problem of score drop that was observed in Double DQN.
http://arxiv.org/abs/1702.08074v2
{ "authors": [ "Ayal Taitler", "Nahum Shimkin" ], "categories": [ "cs.LG", "cs.RO" ], "primary_category": "cs.LG", "published": "20170226195959", "title": "Learning Control for Air Hockey Striking using Deep Reinforcement Learning" }
http://arxiv.org/abs/1702.08461v4
{ "authors": [ "Eve J. Lee", "Eugene Chiang" ], "categories": [ "astro-ph.EP", "astro-ph.SR" ], "primary_category": "astro-ph.EP", "published": "20170227190005", "title": "Magnetospheric Truncation, Tidal Inspiral, and the Creation of Short and Ultra-Short Period Planets" }
^1Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou 225009, China ^2School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China

In this work, we utilize the quasinormal modes (QNMs) of a massless scalar perturbation to probe the Van der Waals-like small and large black hole (SBH/LBH) phase transition of charged topological Anti-de Sitter (AdS) black holes in four-dimensional massive gravity. We find that the signature of this SBH/LBH phase transition is detected in the isobaric as well as in the isothermal process. This further supports the idea that the QNMs can be an efficient tool to investigate the thermodynamical phase transition. 04.50.Kd, 04.70.-s, 04.25.D- Behavior of quasinormal modes and Van der Waals-like phase transition of charged AdS black holes in massive gravity De-Cheng Zou^1[Email:dczou@yzu.edu.cn], Yunqi Liu^2[Email:liuyunqi@hust.edu.cn] and Ruihong Yue^1 [Email:rhyue@yzu.edu.cn] December 30, 2023 ==============================================================================================================================§ INTRODUCTION Einstein's general relativity introduces gravitons as massless spin-2 particles <cit.>. However, understanding the quantum behavior of gravity could be related to the possible mass of the graviton. Einstein's theory, modified at large distances in massive gravity, provides a possible explanation for the accelerated expansion of the Universe that does not require any dark energy. Indeed, massive gravity and its extensions, such as bimetric gravity, can yield cosmological solutions which do display late-time acceleration in agreement with observations <cit.>.
Very recently, the LIGO collaboration, reporting the discovery of gravitational waves, asserted that <cit.> “assuming a modified dispersion relation for gravitational waves, our observations constrain the Compton wavelength of the graviton to be λ_g>10^13 km, which could be interpreted as a bound on the graviton mass m_g<1.2×10^-22eV/c^2”. The first attempt at constructing a theory with a massive graviton was the work of Fierz and Pauli <cit.>, carried out in the context of linear theory. Unfortunately, this theory suffers from the so-called van Dam-Veltman-Zakharov discontinuity problem. The resolution to this problem was Vainshtein's mechanism, which requires the system to be considered in a nonlinear framework. As is now well known, adding generic mass terms for the graviton at the nonlinear level usually brings about the Boulware-Deser ghost <cit.>. Subsequently, a nonlinear massive gravity theory was proposed by de Rham, Gabadadze and Tolley (dRGT) <cit.>, where the mass terms are added in a specific way to ensure that the corresponding equations of motion are at most second order differential equations, so that the Boulware-Deser ghost is eliminated. Later, spherically symmetric black hole solutions were constructed in the dRGT massive gravity <cit.>, including extensions with electric charge <cit.>, black strings <cit.>, BTZ-like black holes <cit.> and some other solutions with higher curvature correction terms <cit.>. This program goes beyond solution construction in the dRGT massive gravity and focuses on the investigation of holographic implications <cit.>, the discussion of thermodynamical properties <cit.> and the calculation of QNMs under massless scalar perturbations for the BTZ-like black hole <cit.>. The thermodynamical phase transition of a black hole has always been a hot topic in black hole physics. It may shed light on the understanding of the relation between gravity and thermodynamics.
Recently, the thermodynamics of AdS black holes has been generalized to the extended phase space, where the cosmological constant is treated as the pressure of the black hole <cit.>. A particular emphasis has been put on the study of black hole phase transitions in AdS spacetime in Ref. <cit.>, which asserted the analogy between the behavior of the Van der Waals liquid-gas system and the charged AdS black hole. Subsequently, a broad range of thermodynamic behaviors has been discovered, including reentrant phase transitions and more general Van der Waals behavior <cit.>. Recently, some investigations of the thermodynamics of AdS black holes in massive gravity have been extended to this phase space <cit.>, including higher curvature terms <cit.>. For a long time, it has been hoped that thermodynamical phase transitions of black holes could be detected through observational signatures. Considering that QNMs of dynamical perturbations are characteristic features of black holes <cit.>, it is expected that black hole phase transitions can be reflected in the dynamical perturbations in the surrounding geometries of black holes through the frequencies and damping times of the oscillations. Moreover, the QNM frequencies of AdS black holes have a direct interpretation in terms of the dual conformal field theory (CFT) <cit.>. A lot of discussion has been focused on this topic, and more and more evidence has been found relating thermodynamical phase transitions to dynamical perturbations; see, for example, <cit.>. In the extended phase space, we have recovered the deep relation between the dynamical perturbation and the Van der Waals-like SBH/LBH phase transition in four-dimensional Reissner-Nordström-Anti de Sitter (RN-AdS) black holes with spherical horizon (k=1) <cit.>.
Later, this was generalized to higher-dimensional RN-AdS black holes <cit.>, including time-domain profiles <cit.>, and to higher-dimensional charged black holes in the presence of a Weyl coupling <cit.>. It is necessary to point out that in four-dimensional dRGT massive gravity there always exists a so-called Van der Waals-like SBH/LBH phase transition for charged AdS black holes when the horizon topology is spherical (k=1), Ricci flat (k=0) or hyperbolic (k=-1) <cit.>. This is a rare phenomenon, since the Van der Waals-like SBH/LBH phase transition has usually been recovered only in spherical horizon black hole backgrounds. Motivated by these results, in this paper we find it crucial and well justified to reconsider the charged topological AdS black hole in four-dimensional dRGT massive gravity. We further use the QNM frequencies of a massless scalar perturbation to probe the Van der Waals-like SBH/LBH phase transitions of charged topological black holes (k=0, k=±1), respectively. This paper is organized as follows. In Sect. <ref>, we will review the Van der Waals-like SBH/LBH phase transition of charged topological AdS black holes in four-dimensional massive gravity. In Sect. <ref>, we will disclose numerically that the phase transition can be reflected by the QNM frequencies of dynamical perturbations. We end the paper with conclusions and discussions in Sect. <ref>.§ PHASE TRANSITION OF CHARGED TOPOLOGICAL ADS BLACK HOLE IN MASSIVE GRAVITY We start with the action of four-dimensional massive gravity in the presence of a negative cosmological constant <cit.>I=1/16π∫d^4x√(-g)[R-2Λ -1/4F_μνF^μν+m^2∑_i^4c_i U_i(g,f)],where f is a fixed symmetric tensor usually called the reference metric, c_i are constants, m is the mass parameter related to the graviton mass, and F_μν is the Maxwell field strength defined as F_μν=∂_μA_ν-∂_νA_μ with vector potential A_μ.
Moreover, U_i are symmetric polynomials of the eigenvalues of the 4× 4 matrix K^μ_ν≡√(g^μαf_αν)U_1 = [ K],U_2 = [ K]^2-[ K^2],U_3 = [ K]^3-3[ K][ K^2]+2[ K^3],U_4 = [ K]^4-6[ K^2][ K]^2 +8[ K^3][ K]+3[ K^2]^2-6[ K^4].The square root in K is understood as the matrix square root, i.e., (√(A))^μ_ ν(√(A))^ ν_λ=A^μ_ λ, and the rectangular brackets denote traces [ K]= K^μ_ μ.The action admits a static black hole solution with metricds^2=-f(r)dt^2+1/f(r)dr^2+r^2h_ijdx^idx^j,where the coordinates are labeled x^μ=(t, r, x^1, x^2) and h_ij describes the two-dimensional hypersurface with constant scalar curvature 2k. The constant k characterizes the geometric property of the hypersurface, taking the values k=0 for the flat case, k=-1 for negative curvature and k=1 for positive curvature, respectively. In the four-dimensional situation, we have U_3= U_4=0. Then the solution for the charged topological AdS black hole is given by <cit.>f(r)=k+8π P r^2/3-m_0/r+q^2/4r^2 +c_0c_1m^2r/2+c_0^2c_2m^2,where P=-Λ/8π. Moreover, the parameters m_0 and q are related to the mass and charge of the black hole:M=V_2/8πm_0, Q=V_2/16πq.Here V_2 is the volume of the space spanned by the coordinates x^i. When m→0, the solution (<ref>) reduces to the RN-AdS black hole.The reference metric can now be given the special formf_μν=diag(0,0,c_0^2h_ij).Without loss of generality, we set c_0=1 in the following discussions.In terms of the horizon radius r_+, the mass M, Hawking temperature T, entropy S and electromagnetic potential Φ of the black hole can be written asM = V_2 r_+/8π(k+c_2m^2+q^2/4r_+^2+8π P r_+^2/3 +c_1m^2 r_+/2),T = -q^2/16π r_+^3+(k+c_2m^2)/4π r_++2 Pr_+ +c_1m^2/4π,S = V_2/4 r_+^2,Φ=q/r_+.In the extended phase space, the black hole mass M is considered as the enthalpy rather than the internal energy of the gravitational system <cit.>. From Eq.
(<ref>), the equation of state of the black hole can be obtained:P=T/2r_+-c_1m^2/8π r_+-(k+c_2m^2)/8π r_+^2 +q^2/32π r_+^4.To compare with the Van der Waals fluid equation in four dimensions, we can translate the “geometric” equation of state into a physical one by identifying the specific volume v of the fluid with the horizon radius of the black hole as v=2r_+.As usual, a critical point occurs when P has an inflection point∂ P/∂ r_+|_T=T_c, r_+=r_c =∂^2 P/∂ r_+^2|_T=T_c, r_+=r_c=0,which leads to r_c=√(6)q/2√(k+c_2m^2),T_c=2(k+c_2m^2)^3/2/3√(6)π q +c_1m^2/4π, P_c=(k+c_2m^2)^2/24π q^2.Evidently, the critical behavior occurs when k+c_2m^2>0, which is a joint effect of the horizon topology k and c_2m^2. Previous thermodynamical discussions of RN-AdS black holes show that the Van der Waals-like SBH/LBH phase transition only occurs for spherical horizon topology k=1. The graviton mass significantly modifies this behavior, and a non-zero m admits the possibility of critical behavior for k≠1. In addition, it has been shown <cit.> that when k+c_2m^2>0, the small and large black hole phases are both locally thermodynamically stable because the corresponding heat capacities are always positive[We thank Hai-Qing Zhang for pointing this out.].The equilibrium thermodynamics is governed by the Gibbs free energy, G=G(T, P, q), which obeys the thermodynamic relation G=M-TS. For later discussions, it is convenient to rescale the Gibbs free energy in the following way: g=4π/V_2G. Then g readsg=3q^2/16r_++(k+c_2m^2)r_+/4 -2π P r_+^3/3.Here r_+ is understood as a function of pressure and temperature, r_+=r_+(P,T), via the equation of state (<ref>).
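Both the Hawking temperature and the critical point above lend themselves to a quick numerical cross-check. The sketch below uses the parameter choices of the paper's numerics (q=2, m=1, c_1=0.05, c_2=2) with k=1; the horizon radius and pressure used for the temperature check are arbitrary illustrative values. It verifies that T=f'(r_+)/4π reproduces the closed-form temperature once the horizon condition f(r_+)=0 fixes m_0, and that ∂P/∂r_+ and ∂²P/∂r_+² vanish at (r_c, T_c).

```python
import math

# Paper's parameter choices: q=2, m=1, c1=0.05, c2=2 (and c0=1); take k=1.
q, m, c1, c2, k = 2.0, 1.0, 0.05, 2.0, 1
A = k + c2 * m**2                     # recurring combination k + c2*m^2

# --- Hawking temperature as surface gravity, T = f'(r_+)/(4*pi) -----------
P0, rp = 0.003, 1.0                   # illustrative pressure and horizon radius
m0 = rp * (k + c2*m**2 + q**2/(4*rp**2) + 8*math.pi*P0*rp**2/3 + c1*m**2*rp/2)

def f(r):                             # metric function; f(r_+) = 0 by construction
    return (k + 8*math.pi*P0*r**2/3 - m0/r + q**2/(4*r**2)
            + c1*m**2*r/2 + c2*m**2)

h = 1e-6
T_surface = (f(rp + h) - f(rp - h)) / (2*h) / (4*math.pi)
T_closed = (-q**2/(16*math.pi*rp**3) + (k + c2*m**2)/(4*math.pi*rp)
            + 2*P0*rp + c1*m**2/(4*math.pi))

# --- Critical point of the equation of state ------------------------------
def P(r, T):
    return (T/(2*r) - c1*m**2/(8*math.pi*r)
            - A/(8*math.pi*r**2) + q**2/(32*math.pi*r**4))

r_c = math.sqrt(6) * q / (2 * math.sqrt(A))
T_c = 2 * A**1.5 / (3 * math.sqrt(6) * math.pi * q) + c1*m**2 / (4*math.pi)
P_c = A**2 / (24 * math.pi * q**2)

h2 = 1e-4
dP  = (P(r_c + h, T_c) - P(r_c - h, T_c)) / (2*h)                    # ~ 0
d2P = (P(r_c + h2, T_c) - 2*P(r_c, T_c) + P(r_c - h2, T_c)) / h2**2  # ~ 0
print(T_surface, T_closed, P_c)       # P_c ~ 0.0298416 for these parameters
```

The value of P_c recovered here matches the k=1 critical pressure quoted later in the isobaric analysis.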
§ PERTURBATIONS OF CHARGED TOPOLOGICAL ADS BLACK HOLE IN MASSIVE GRAVITY Now we study the evolution of a massless scalar field perturbation in the surrounding geometry of these charged topological AdS black holes. A massless scalar field Ψ(r,t,Ω)=ϕ(r)e^-iω tY_lm(Ω) obeys the Klein-Gordon equation∇_μ^2Ψ(r,t,Ω)=1/√(-g)∂_μ(√( -g)g^μν∂_νΨ(r,t,Ω))=0,where Y_lm(Ω) is a normalizable harmonic function on the two-dimensional hypersurface. In particular, the Laplace operator on Ω yields∇^2_ΩY_lm(Ω)=-κ^2Y_lm(Ω).It is necessary to point out that the eigenvalue κ^2 usually takes different values for different horizon topologies. For the spherical (k=1) and flat (k=0) topologies, the eigenvalue κ^2 can be zero. Then the radial function ϕ(r) obeysϕ”(r)+(f'(r)/f(r)+2/r)ϕ'(r) +ω^2ϕ(r)/f(r)^2=0,where the complex numbers ω=ω_r + iω_im correspond to the QNM frequencies of the oscillations describing the perturbation. For the hyperbolic horizon topology (k=-1), the eigenvalue κ^2 of the Laplace operator on Ω cannot be zero <cit.>, and is given by 1/4+ξ^2, where ξ=L_Ω(L_Ω+1), L_Ω=0,1,2,... <cit.>. Then the radial function ϕ(r) obeys the following differential equation:ϕ”(r)+(f'(r)/f(r)+2/r)ϕ'(r) +(ω^2/f(r)-κ^2/r^2)ϕ(r)/f(r)=0. Here we define ϕ(r) as φ(r)exp[-i∫ω/f(r)dr], where exp[-i∫ω/f(r)dr] asymptotically approaches an ingoing wave near the horizon; then Eqs. (<ref>) and (<ref>) becomeφ”(r)+φ'(r)(f'(r)/f(r)-2iω/f(r)+2/r) -2iω/r f(r)φ(r)=0, k=0,1, andφ”(r)+φ'(r)(f'(r)/f(r)-2iω/f(r)+2/r) -(2iω+κ^2/r)φ(r)/r f(r)=0, k=-1.In this paper, we only consider ξ=0, namely κ^2=1/4, for k=-1.We are going to study whether the signature of the Van der Waals-like SBH/LBH phase transition of charged topological AdS black holes can be reflected in the dynamical QNM behavior of the massless scalar perturbation. For Eqs. (<ref>) and (<ref>), we have φ(r)=1 in the limit r→ r_+. At the AdS boundary (r→∞), we require φ(r)=0. Under these boundary conditions, we will numerically solve Eqs.
(<ref>) and (<ref>) separately to find the QNM frequencies by adopting the shooting method. In the context of the Van der Waals phase transition picture, the dynamical perturbations in the isobaric and isothermal processes will be discussed. In the following numerical computations we set q=2, m=1, c_1=0.05 and c_2=2.§.§ Isobaric phase transition Since the pressure P (or l) is fixed in this case, the black hole horizon r_+ is the only variable in the system. The behavior of an isobar for different horizon topologies is plotted in Fig. <ref>. For P<P_c, the oscillating part displays the occurrence of an SBH/LBH phase transition in the system, and the Gibbs free energy depicts a swallowtail behavior, also signaling a first-order SBH/LBH phase transition. Here the intersection point indicates the coexistence of the two phases in equilibrium. The critical pressure P_c is obtained from ∂ T/∂ r_+=∂^2 T/∂ r_+^2=0.In Table <ref> (see appendix), we further list the QNM frequencies of the massless scalar perturbation around small and large black holes for a first order SBH/LBH phase transition. Fixing the pressure at P=0.003, we obtain the phase transition temperatures T_*≃0.04567, T_*≃0.06945 and T_*≃0.08664 for k=-1, 0 and 1, respectively, where the small and large black hole phases can coexist. For the small black hole phase, the radius of the black hole becomes smaller and smaller as the temperature decreases from the phase transition temperature T_*. In this process the absolute values of the imaginary parts of the QNM frequencies decrease, while the real parts change very little. On the other hand, when the temperature of the large black hole phase increases from the phase transition temperature T_*, the black hole gets bigger. The QNM frequencies increase in the real part and in the absolute value of the imaginary part. Consequently, the massless scalar perturbation outside the black hole oscillates more but decays faster.
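The shooting method adopted here can be illustrated on a toy boundary-value problem. This is a pedagogical stand-in, not the paper's QNM code: in the QNM case the tuned quantity is the complex frequency ω and the integration runs from the horizon to the AdS boundary, but the logic is the same — integrate from one boundary and adjust the eigenvalue until the condition at the other boundary is met.

```python
import math

# Toy shooting method: find the lowest eigenvalue E of -phi'' = E*phi on
# [0, 1] with phi(0) = phi(1) = 0 (exact answer: pi^2 ~ 9.8696).
def endpoint(E, n=2000):
    """RK4-integrate phi'' = -E*phi from x=0 to x=1; return phi(1)."""
    h = 1.0 / n
    phi, dphi = 0.0, 1.0            # phi(0)=0; slope normalization is arbitrary
    for _ in range(n):
        k1p, k1d = dphi, -E * phi
        k2p, k2d = dphi + h/2*k1d, -E * (phi + h/2*k1p)
        k3p, k3d = dphi + h/2*k2d, -E * (phi + h/2*k2p)
        k4p, k4d = dphi + h*k3d,   -E * (phi + h*k3p)
        phi  += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        dphi += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
    return phi

# Bisect on E until the shooting condition phi(1) = 0 is satisfied.
lo, hi = 5.0, 15.0                  # brackets pi^2 with a sign change of phi(1)
f_lo = endpoint(lo)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    f_mid = endpoint(mid)
    if f_lo * f_mid <= 0.0:
        hi = mid
    else:
        lo, f_lo = mid, f_mid
E_est = 0.5 * (lo + hi)
print(E_est)                        # ~ pi**2
```

For a complex ω the scalar bisection is replaced by a two-dimensional root search on the real and imaginary parts, but the boundary-matching idea carries over unchanged.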
These results are consistent with the overall discussions reported in <cit.>. Figure <ref> illustrates the QNM frequencies for the small and large black hole phases. The increase in black hole size is indicated by the arrows. In addition, at the critical pressure P=P_c, with P_c≃0.0033157 for k=-1, P_c≃0.0132629 for k=0 and P_c≃0.0298416 for k=1, a second-order phase transition occurs. The QNM frequencies of the small and large black hole phases are plotted in Fig. <ref>. We see that the QNM frequencies of the two black hole phases show the same behavior as the black hole horizon increases at the critical point. §.§ Isothermal phase transition Fixing the black hole temperature T, the associated P-r_+ diagram of charged topological AdS black holes is displayed in the right panel of Fig. <ref>. For T<T_c there is an inflection point and the behavior is reminiscent of the Van der Waals liquid-gas system. Moreover, the behavior of the Gibbs free energy is plotted in the left panel of Fig. <ref>. Similarly to Fig. <ref>, the characteristic first order SBH/LBH phase transition behavior shows up.Table <ref> (see appendix) displays the QNM frequencies of the small and large black hole phases at temperature T=0.79T_c for different horizon topologies (k=0,±1) in the isothermal process. The first order SBH/LBH phase transition happens at P_*≈0.001728 for k=-1, at P_*≈0.007189 for k=0 and at P_*≈0.016215 for k=1, where the small and large black holes possess the same Gibbs free energy and the same pressure. The data above (below) the horizontal line are for the small (large) black hole phase, respectively. The drastically different QNM frequencies of the small and large black hole phases are plotted in Fig. <ref>.
From the figure we see different slopes of the QNM frequencies in the massless scalar perturbations, revealing that small and large black holes are in different phases.In the isothermal transition, the QNMs can be affected by the value of the pressure P (or l) and the horizon radius r_+, which are related at fixed temperature. To illustrate the effects of the two parameters, we list the influence of r_+ on the frequencies for small and large black holes at fixed P (or l) in Table <ref> (see appendix) and the QNM frequencies at fixed black hole size r_+ in Table <ref> (see appendix). From Tables <ref> and <ref> one can see that there is competition between the pressure P and the horizon radius r_+: each of these parameters aims to overwhelm the other in affecting the decay rate of the field. In order to further discuss how these two factors affect the QNM frequencies, we perform a double-series expansion of the frequency ω(r_++Δ r_+,P+Δ P)ω(r_++Δ r_+,P+Δ P) = ω(r_+,P)+∂ω/∂ r_+Δ r_+ +∂ω/∂ PΔ P +𝒪(Δ r_+^2,Δ P^2,Δ r_+·Δ P).Obviously, the changes of the QNM frequency are under two influences: one from the change of the black hole size r_+ and the other from the change of the pressure P (or AdS radius l). For simplicity of discussion, we define Δ_1≡∂ω/∂ r_+Δ r_+ and Δ_2≡∂ω/∂ PΔ P.Note that the choice of the pressure step Δ P in the linear approximation is related to Δ r_+ throughdP=(-T/2r_+^2+c_1m^2/8π r_+^2+(k+c_2m^2)/4π r_+^3 -q^2/8π r_+^5)dr_+from the equation of state (<ref>). In Table <ref> (see appendix), we list the QNM frequencies from the linear approximation for the small and large black hole phases. One can see that the behavior of ω̃ is in good agreement with the numerical computation results listed in Table <ref>. Comparing Δ_1 and Δ_2 in Table <ref>, in the small black hole phase the change of P (or l) clearly wins over the change of the black hole size and contributes dominantly to the behavior of the QNM frequencies.
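The pressure step ΔP accompanying a horizon step Δr_+ at fixed T follows from differentiating the equation of state along the isotherm. A quick consistency check of that derivative (the values of T and r_+ below are illustrative; q, m, c_1, c_2 match the paper's numerics, with k=1):

```python
import math

# Equation of state P(r_+) at fixed temperature, and its analytic derivative
# dP/dr_+, which ties the pressure step DeltaP to the horizon step Delta r_+.
q, m, c1, c2, k = 2.0, 1.0, 0.05, 2.0, 1
T = 0.1                                # illustrative fixed temperature

def P(r):
    return (T/(2*r) - c1*m**2/(8*math.pi*r)
            - (k + c2*m**2)/(8*math.pi*r**2) + q**2/(32*math.pi*r**4))

def dP_dr(r):
    """Term-by-term derivative of the equation of state at fixed T."""
    return (-T/(2*r**2) + c1*m**2/(8*math.pi*r**2)
            + (k + c2*m**2)/(4*math.pi*r**3) - q**2/(8*math.pi*r**5))

r = 1.5                                # illustrative horizon radius
num = (P(r + 1e-6) - P(r - 1e-6)) / 2e-6
print(num, dP_dr(r))                   # the two values agree
```

In the linear approximation above, one would then take ΔP = dP_dr(r_+) · Δr_+ to keep the expansion on the chosen isotherm.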
For the large black hole phase, the contributions of Δ_1 and Δ_2 to the real part of the QNM frequency are comparable, but the change of P (or l) wins out slightly. In addition, for the isothermal phase transition at T=T_c, the QNM frequencies for the small and large black holes are plotted in Fig. <ref>, which shows the same behavior as the horizon radius increases.§ CONCLUSIONS AND DISCUSSIONS We have calculated the QNMs of a massless scalar field perturbation around small and large charged topological AdS black holes in four-dimensional dRGT massive gravity. When the Van der Waals-like SBH/LBH phase transition happens in the extended phase space, whether in the isobaric process (fixed pressure P) or in the isothermal process (fixed temperature T), the slopes of the QNM frequencies change drastically and differ between the small and large black hole phases as the horizon radius r_+ increases. This clearly shows the signature of the phase transition between small and large black holes. Moreover, we have also found that, at the critical isothermal and isobaric phase transitions, the QNM frequencies of both small and large black holes behave in the same way, suggesting that QNMs are not appropriate for probing the second order phase transition of black holes.Compared with the action of Eq. (<ref>), Ref. <cit.> recently asserted the existence of a Van der Waals-like SBH/LBH phase transition with the massive potential U_3≠0 in the five-dimensional case. Moreover, charged black holes <cit.>, Born-Infeld black holes <cit.> and black holes in the Maxwell and Yang-Mills fields <cit.> have recently been constructed in Gauss-Bonnet massive gravity. The Van der Waals-like SBH/LBH phase transition also appears in these models.
It would be interesting to extend our discussion to these black hole solutions. The work is supported by the National Natural Science Foundation of China (NNSFC) (Grant No.11605152), and Natural Science Foundation of Jiangsu Province (Grant No.BK20160452). D.C.Z. is extremely grateful to Hai-Qing Zhang and Hua-Bi Zeng for useful discussions. § APPENDIX SECTION Here we present the related QNM frequencies of the massless scalar perturbation around small and large black holes in the isobaric as well as in the isothermal process. Gupta:1954zz S. N. Gupta,Phys. Rev. 96, 1683 (1954).Weinberg:1965rz S. Weinberg, Phys. Rev. 138, B988 (1965).Feynman:1996kb R. P. Feynman, F. B. Morinigo, W. G. Wagner and B. Hatfield,Reading, USA: Addison-Wesley (1995) 232 p. (The advanced book program) Hassan:2011zd S. F. Hassan and R. A. Rosen,JHEP 1202, 126 (2012) [arXiv:1109.3515 [hep-th]]. DAmico:2011eto G. D'Amico, C. de Rham, S. Dubovsky, G. Gabadadze, D. Pirtskhalava and A. J. Tolley,Phys. Rev. D 84, 124046 (2011) [arXiv:1108.5231 [hep-th]].Akrami:2012vf Y. Akrami, T. S. Koivisto and M. Sandstad,JHEP 1303, 099 (2013) [arXiv:1209.0457 [astro-ph.CO]].Akrami:2015qga Y. Akrami, S. F. Hassan, F. Könnig, A. Schmidt-May and A. R. Solomon,Phys. Lett. B 748, 37 (2015) [arXiv:1503.07521 [gr-qc]].Abbott:2016blz B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations],Phys. Rev. Lett.116, 061102 (2016) [arXiv:1602.03837 [gr-qc]].Fierz:1939ix M. Fierz and W. Pauli,Proc. Roy. Soc. Lond. A 173, 211 (1939).Boulware:1973my D. G. Boulware and S. Deser,Phys. Rev. D 6, 3368 (1972).deRham:2010ik C. de Rham and G. Gabadadze,Phys. Rev. D 82, 044020 (2010) [arXiv:1007.0443 [hep-th]].deRham:2010kj C. de Rham, G. Gabadadze and A. J. Tolley,Phys. Rev. Lett. 106, 231101 (2011) [arXiv:1011.1232 [hep-th]].Vegh:2013sk D. Vegh,arXiv:1301.0537 [hep-th].Ghosh:2015cva S. G. Ghosh, L. Tannukij and P. Wongjun,Eur. Phys. J. C 76, 119 (2016) [arXiv:1506.07119 [gr-qc]].Do:2016abo T. Q. Do,Phys. Rev.
D 93, 104003 (2016) [arXiv:1602.05672 [gr-qc]]. Li:2016fbf P. Li, X. z. Li and P. Xi,Phys. Rev. D 93, 064040 (2016) [arXiv:1603.06039 [gr-qc]].Cai:2012db Y. F. Cai, D. A. Easson, C. Gao and E. N. Saridakis,Phys. Rev. D 87, 064001 (2013) [arXiv:1211.0563 [hep-th]]. Berezhiani:2011mt L. Berezhiani, G. Chkareuli, C. de Rham, G. Gabadadze and A. J. Tolley,Phys. Rev. D 85, 044024 (2012) [arXiv:1111.3613 [hep-th]]. Hendi:2015hoa S. H. Hendi, B. Eslam Panah and S. Panahiyan, JHEP 1511, 157 (2015) [arXiv:1508.01311 [hep-th]].Tannukij:2017jtn L. Tannukij, P. Wongjun and S. G. Ghosh,arXiv:1701.05332 [gr-qc]. Hendi:2016hbe S. H. Hendi, S. Panahiyan, S. Upadhyay and B. Eslam Panah,Phys. Rev. D 95, no. 8, 084036 (2017) [arXiv:1611.02937 [hep-th]].Hendi:2016pvx S. H. Hendi, B. Eslam Panah and S. Panahiyan,JHEP 1605, 029 (2016) [arXiv:1604.00370 [hep-th]].Hendi:2015pda S. H. Hendi, S. Panahiyan and B. Eslam Panah,JHEP 1601, 129 (2016) [arXiv:1507.06563 [hep-th]]. Meng:2016its K. Meng and J. Li,Europhys. Lett.116,10005 (2016). Zeng:2014uoa H. B. Zeng and J. P. Wu,Phys. Rev. D 90,046001 (2014)[arXiv:1404.5321 [hep-th]]. Hendi:2016uni S. H. Hendi, N. Riazi and S. Panahiyan,arXiv:1610.01505 [hep-th].Ge:2014aza X. H. Ge, Y. Ling, C. Niu and S. J. Sin,Phys. Rev. D 92,106005 (2015)[arXiv:1412.8346 [hep-th]]. Baggioli:2014roa M. Baggioli and O. Pujolas,Phys. Rev. Lett.114,251602 (2015)[arXiv:1411.1003 [hep-th]]. Hu:2015dnl Y. P. Hu, H. F. Li, H. B. Zeng and H. Q. Zhang,Phys. Rev. D 93, 104009 (2016)[arXiv:1512.07035 [hep-th]]. Sadeghi:2015vaa M. Sadeghi and S. Parvizi,Class. Quant. Grav. 33, 035005 (2016) [arXiv:1507.07183 [hep-th]]. Hendi:2015bna S. H. Hendi, B. Eslam Panah and S. Panahiyan,Class. Quant. Grav. 33,235007 (2016)[arXiv:1510.00108 [hep-th]]. Hendi:2016yof S. H. Hendi, G. Q. Li, J. X. Mo, S. Panahiyan and B. Eslam Panah,Eur. Phys. J. C 76,571 (2016) [arXiv:1608.03148 [gr-qc]].Cai:2014znn R. G. Cai, Y. P. Hu, Q. Y. Pan and Y. L. Zhang,Phys. Rev. 
D 91, 024032 (2015) [arXiv:1409.2369 [hep-th]].Adams:2014vza A. Adams, D. A. Roberts and O. Saremi, Phys. Rev. D 91, 046003 (2015) [arXiv:1408.6560 [hep-th]].Prasia:2016esx P. Prasia and V. C. Kuriakose, Eur. Phys. J. C 77, no. 1, 27 (2017) [arXiv:1608.05299 [gr-qc]]. Caldarelli:1999xj M. M. Caldarelli, G. Cognola and D. Klemm,Class. Quant. Grav.17, 399 (2000) [hep-th/9908022].Kastor:2009wy D. Kastor, S. Ray and J. Traschen,Class. Quant. Grav.26, 195011 (2009) [arXiv:0904.2765 [hep-th]]. Lu:2012xu H. Lu, Y. Pang, C. N. Pope and J. F. Vazquez-Poritz,Phys. Rev. D 86, 044011 (2012) [arXiv:1204.1062 [hep-th]].Kubiznak:2012wp D. Kubiznak and R. B. Mann,JHEP 1207, 033 (2012)[arXiv:1205.0559 [hep-th]]. Gunasekaran:2012dq S. Gunasekaran, R. B. Mann and D. Kubiznak, JHEP 1211, 110 (2012)[arXiv:1208.6251 [hep-th]]. Hendi:2012um S. H. Hendi and M. H. Vahidinia,Phys. Rev. D 88, 084045 (2013)[arXiv:1212.6128 [hep-th]]. Hendi:2015hgg S. H. Hendi, R. M. Tad, Z. Armanfard and M. S. Talezadeh, Eur. Phys. J. C 76, 263 (2016) [arXiv:1511.02761 [gr-qc]].Zhao:2013oza R. Zhao, H. -H. Zhao, M. -S. Ma and L. -C. Zhang,Eur. Phys. J. C 73, 2645 (2013)[arXiv:1305.3725 [gr-qc]]. Zou:2013owa D. C. Zou, S. J. Zhang and B. Wang,Phys. Rev. D 89, 044002 (2014)[arXiv:1311.7299 [hep-th]]. Zou:2014mha D. C. Zou, Y. Liu and B. Wang,Phys. Rev. D 90, 044063 (2014) [arXiv:1404.5194 [hep-th]].Cai:2013qga R. G. Cai, L. M. Cao, L. Li and R. Q. Yang,JHEP 1309, 005 (2013)[arXiv:1306.6233 [gr-qc]].Dehghani:2014caa M. H. Dehghani, S. Kamrani and A. Sheykhi,Phys. Rev. D 90, 104020 (2014) [arXiv:1505.02386 [hep-th]].Mo:2014qsa J. X. Mo and W. B. Liu,Eur. Phys. J. C 74, 2836 (2014)[arXiv:1401.0785 [gr-qc]]. Hennigar:2015esa R. A. Hennigar, W. G. Brenna and R. B. Mann,JHEP 1507, 077 (2015) [arXiv:1505.05517 [hep-th]].Zhang:2014jfa L. C. Zhang, M. S. Ma, H. H. Zhao and R. Zhao,Eur. Phys. J. C 74, 3052 (2014) [arXiv:1403.2151 [gr-qc]].Xu:2013zea W. Xu, H. Xu and L. Zhao,Eur. Phys. J. 
C 74, 2970 (2014) [arXiv:1311.3053 [gr-qc]]. Rajagopal:2014ewa A. Rajagopal, D. Kubizk and R. B. Mann,Phys. Lett. B 737, 277 (2014) [arXiv:1408.1105 [gr-qc]].Frassino:2014pha A. M. Frassino, D. Kubiznak, R. B. Mann and F. Simovic,JHEP 1409, 080 (2014) [arXiv:1406.7015 [hep-th]].Wei:2014hba S. W. Wei and Y. X. Liu,Phys. Rev. D 90,044057 (2014) [arXiv:1402.2837 [hep-th]].Altamirano:2014tva N. Altamirano, D. Kubiznak, R. B. Mann and Z. Sherkatghanad,Galaxies 2, 89 (2014) [arXiv:1401.2586 [hep-th]].Wei:2015iwa S. W. Wei and Y. X. Liu,Phys. Rev. Lett.115, 111302 (2015) Erratum: [Phys. Rev. Lett. 116,169903 (2016)] [arXiv:1502.00386 [gr-qc]].Cheng:2016bpx P. Cheng, S. W. Wei and Y. X. Liu,Phys. Rev. D 94, 024025 (2016) [arXiv:1603.08694 [gr-qc]].Wei:2015ana S. W. Wei, P. Cheng and Y. X. Liu,Phys. Rev. D 93, 084015 (2016) [arXiv:1510.00085 [gr-qc]].Hendi:2015cqz S. H. Hendi, S. Panahiyan, B. E. Panah and Z. Armanfard,Eur. Phys. J. C 76, 396 (2016) [arXiv:1511.00598 [gr-qc]]. B.:2015koa C. B. Prasobh, J. Suresh and V. C. Kuriakose,Eur. Phys. J. C 76, 207 (2016) [arXiv:1510.04784 [gr-qc]].Mo:2016ndm J. X. Mo, G. Q. Li and X. B. Xu,Eur. Phys. J. C 76,545 (2016) [arXiv:1609.06422 [gr-qc]].Belhaj:2014eha A. Belhaj, M. Chabab, H. El moumni, K. Masmar and M. B. Sedra,Eur. Phys. J. C 75,71 (2015) [arXiv:1412.2162 [hep-th]].Xu:2014kwa W. Xu and L. Zhao,Phys. Lett. B 736, 214 (2014) [arXiv:1405.7665 [gr-qc]].Xu:2014tja H. Xu, W. Xu and L. Zhao,Eur. Phys. J. C 74,3074 (2014) [arXiv:1405.4143 [gr-qc]].Sadeghi:2016dvc J. Sadeghi, B. Pourhassan and M. Rostami,Phys. Rev. D 94, 064006 (2016) [arXiv:1605.03458 [gr-qc]].Hansen:2016ayo D. Hansen, D. Kubiznak and R. B. Mann,JHEP 1701, 047 (2017) [arXiv:1603.05689 [gr-qc]].Lan:2017yia S. Lan and W. Liu,arXiv:1701.04662 [hep-th].Liang:2017rng J. Liang, Z. H. Guan, Y. C. Liu and B. Liu, Gen. Rel. Grav. 49, 29 (2017). Zeng:2016fsb X. X. Zeng and L. F. Li,Adv. High Energy Phys.2016, 6153435 (2016)[arXiv:1609.06535 [hep-th]]. Hendi:2016usw S. H. 
Institut d'Astrophysique Spatiale (IAS), Bâtiment 121, Université Paris-Sud 11 and CNRS, F-91405 Orsay, France

Polarized extinction and emission from dust in the interstellar medium (ISM) are hard to interpret, as they have a complex dependence on dust optical properties, grain alignment and magnetic field orientation. This is particularly true in molecular clouds. The aforementioned phenomena are usually considered independently in polarization studies, while it is likely that they all contribute and their effects have yet to be disentangled. The data available today are not yet used to their full potential. The combination of emission and extinction, in particular, provides information not available from either of them alone. We combine data from the scientific literature on polarized dust extinction with Planck data on polarized emission, and we use them to constrain the possible variations in dust and environmental conditions inside molecular clouds, and especially translucent lines of sight, taking into account magnetic field orientation. We focus on the dependence between λ_max – the wavelength of maximum polarization in extinction – and other observables, such as the extinction polarization, the emission polarization and the ratio of the two. We set out to reproduce these correlations using Monte-Carlo simulations in which the relevant quantities in a dust model – grain alignment, size distribution and magnetic field orientation – vary to mimic the diverse conditions expected inside molecular clouds. None of the quantities chosen can explain the observational data on its own: the best results are obtained when all quantities vary significantly across and within clouds. However, some of the data – most notably the stars with a low emission-to-extinction polarization ratio – are not reproduced by our simulation.
Our results suggest not only that dust evolution is necessary to explain polarization in molecular clouds, but that a simple change in size distribution is not sufficient to explain the data, and they point the way for future, more sophisticated models.

Interplay of dust alignment, grain growth and magnetic fields in polarization: lessons from the emission-to-extinction ratio

L. Fanciullo, V. Guillet, F. Boulanger, A. P. Jones

§ INTRODUCTION

The light of stars often shows a degree of polarization correlated with interstellar extinction, up to a few percent per magnitude of extinction. This phenomenon has long been recognized as the effect of cosmic dust grains aligned with interstellar magnetic field lines <cit.>. Dust extinguishes starlight, and in the case of non-spherical grains the component of the electric field parallel to a grain's longer axis is more extinguished than the orthogonal one. Furthermore, interstellar dust grains align their shorter axes with the interstellar magnetic field, so they are not generally randomly oriented <cit.>. The overall result is that the dusty, magnetized interstellar medium (ISM) polarizes starlight that was not originally polarized. The polarization fraction p and the polarization angle ψ of starlight therefore provide information on both interstellar dust and the Galactic magnetic field, or at least on the component of the latter that is parallel to the plane of the sky.

It is mainly the large grains that are aligned <cit.>, and in typical ISM conditions their thermal emission falls mainly in the far-infrared (FIR) and submillimeter (submm) range.
This emission is also polarized, since emission is more efficient for the electric field component parallel to the grain's longer axis, and it is an important complement to observations of polarized extinction in the optical and near-infrared <cit.>. It should be noted that, since radiation polarized parallel to the grains' longer axis is least intense in extinction and most intense in emission, we expect ψ in the submm to be orthogonal to ψ in the optical.

The main factors that determine the polarization fraction p are the optical properties of the dust, the alignment efficiency, and the orientation of the magnetic field lines. Polarization can be expressed as <cit.>

p = p_0 R cos^2 γ

where p_0 is the maximum possible polarization given the dust properties, the parameter R – comprised between 0 and 1 – accounts for the effects of imperfect alignment[When grains are in the Rayleigh regime – as is the case for the thermal emission of large dust grains – the parameter R can be calculated analytically, and it is called the Rayleigh reduction factor <cit.>], and γ is the angle between the magnetic field lines and the plane of the sky. The wavelength dependence of polarization in extinction usually follows the so-called Serkowski curve <cit.>:

p(λ) = p_max · exp( - K · ln(λ/λ_max)^2 )

where the polarization reaches its maximum p_max at a wavelength λ_max, usually falling in the visible; the value of the parameter K, tied to the inverse of the FWHM, is usually around unity. Since the polarization efficiency of a grain peaks at a wavelength of ∼ 2π times its size <cit.>, λ_max traces the typical size of aligned grains: variations in λ_max between lines of sight may indicate a change in the grain size distribution, in the dependence of alignment on size, or both <cit.>. The issue is further complicated by the fact that λ_max also shows some dependence on the magnetic field angle γ <cit.>. Since polarization depends on many factors at once, its interpretation is a degenerate and difficult problem.
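The Serkowski curve is simple to evaluate numerically. The following is a minimal sketch (the function name and the default K = 1.15, the value often fixed by AP07, are illustrative choices, not part of any published code):

```python
import numpy as np

def serkowski(wavelength_um, p_max, lambda_max_um, K=1.15):
    """Serkowski law: p(lambda) = p_max * exp(-K * ln(lambda/lambda_max)^2).

    wavelength_um, lambda_max_um: wavelengths in microns;
    p_max: peak polarization; K sets the inverse width of the curve.
    """
    return p_max * np.exp(-K * np.log(wavelength_um / lambda_max_um) ** 2)

# By construction the curve peaks at lambda = lambda_max:
wavelengths = np.linspace(0.3, 1.0, 200)   # NUV to NIR, in microns
curve = serkowski(wavelengths, p_max=3.0, lambda_max_um=0.55)
```

Fitting this function to multi-band polarimetry (with K free or fixed) is how λ_max and p_max are extracted from the observations.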
This is especially true of the dense and complex environments that are molecular clouds, where magnetic field orientation, grain alignment and dust properties are expected to change on small scales. Despite this, studies of dust polarization are often focused on constraining the grain alignment efficiency (e.g. AP07) or the structure of the magnetic field <cit.> without accounting for the other variables. One way of confronting the problem is to construct a cloud model that includes dust evolution, magnetic field structure and grain alignment; however, such models are complex and very computationally demanding. The judicious combination of observational data can also provide interesting insights into dust physics while requiring far lighter calculations. One example of such labor-saving combinations is the complementary use of extinction data and polarized dust thermal emission. The latter has the dimensions of an intensity and is usually observed in the far-infrared and submillimeter (FIR and submm). All-sky surveys in the submillimeter such as Planck <cit.> are opening new possibilities for this kind of multi-wavelength analysis. The idea that extinction and emission combined can be more informative than either of them alone is explored in <cit.>, which examines the P_353/p_V ratio[This ratio is called R_P/p in <cit.>.] in the diffuse ISM, P_353 being the polarized intensity in emission in the Planck 353 GHz channel and p_V being the starlight polarization degree in the V band. This ratio is measured in MJy sr^-1 and, since P_353 and p_V have (to first approximation) the same dependence on alignment and γ, it should provide strong constraints on the properties of aligned dust grains. Among the new results made possible by Planck is the determination of the average P_353/p_V in the diffuse ISM: 5.4 MJy sr^-1, about 2.5 times higher than predicted by pre-existing dust models <cit.>.
The present paper aims to extend the study of P_353/p_V to denser environments – namely, translucent lines of sight in molecular clouds – using an updated methodology and a dust model optimized for the high P_353/p_V found by Planck <cit.>. The paper is organized as follows. Section <ref> introduces the observational data used, mostly from translucent lines of sight, and the selection that had to be made for the emission/extinction comparison to be meaningful. Section <ref> presents the dust model used in our work, especially constructed to reproduce the P_353/p_V ratio as well as classic dust observables. The dust model was created for the diffuse ISM, while we study translucent clouds where dust evolution may be taking place: the modifications that are needed for the model to fit the data hint at the nature of dust evolution in these areas. The model results are compared to observations in Section <ref>, and the meaning of this comparison is discussed in Section <ref>. Finally, Section <ref> contains our conclusions and future perspectives.

§ DATA

Our work combines measures of starlight polarization in the near-ultraviolet to near-infrared (NUV to NIR) range, recovered from the published literature, and Planck measurements of total and polarized dust emission at 353 GHz for the same lines of sight. We use a total of 132 objects, which are reduced to 70 after selection (Section <ref>).

§.§ Extinction in the NUV to NIR

Most of our data points are from AP07 who, in a polarimetric study of the Coalsack nebula, compared their results with data on other clouds [Chamaeleon, Musca, Ophiuchus, R Coronae Australis (RCrA) and Taurus; see Tab. <ref>] taken from the pre-existing scientific literature <cit.>. The AP07 study is particularly useful for this purpose because the authors did not employ the Serkowski fit results from the literature, but used the photometric and polarimetric data therein to conduct their own fits, thus minimizing the systematic effects of different fitting procedures.
To increase the statistics we also included data from <cit.>, which provides more lines of sight in Ophiuchus, and <cit.>, providing lines of sight not associated, for the most part,[The star HD 147933 from <cit.> is associated with the Ophiuchus cloud, but this star was eliminated from our sample in the selection process (Section <ref>).] with the aforementioned clouds. From the literature we obtained the Serkowski polarization parameters λ_max and p_max for all stars, as well as the polarization angle ψ_ext. The values of K are not calculated in a consistent fashion in the scientific literature: AP07 often use K = 1.15 and only fit K as a free parameter if it constitutes a statistically significant improvement, while <cit.> impose that K be a linear function of λ_max. For this reason, we chose not to include K in our work. The AP07 data retrieved from the VizieR online database[http://vizier.u-strasbg.fr] do not include the uncertainties on ψ_ext for Ophiuchus, so we complemented the data with the original article <cit.>: we use the average of ψ over the various bands as the value of ψ_ext and their standard deviation as its uncertainty. We excluded the stars with standard deviations greater than 7^∘, which we interpreted as stars where the angles in different bands are not compatible. We also obtained the value of the V-band polarization p_V for most of the AP07 stars, by reading the data directly from the references <cit.>. The data from <cit.> are in the form of (polarized) spectra rather than multi-band photometry; for their lines of sight we took the polarization at λ = 545 nm as the value of p_V. We did not use p_V for the stars in <cit.>, who fit data from multiple sources and thus provide non-unique values for each band.
Finally, we obtained the extinction parameters for most of the stars: A_V and R_V for the AP07 stars, and E(B-V) for the <cit.> and <cit.> stars.

§.§ Emission: Planck and IRAS submm maps

Our submm data consist of the Planck 353 GHz (850 μm) maps of the I, Q and U Stokes parameters, from which the polarized intensity P_353 and angle ψ_353 were obtained. We did not use any other frequencies because of their lower S/N. For selection purposes we also used the all-sky submm dust opacity maps created by Marc-Antoine Miville-Deschênes using Planck and IRAS data <cit.>.

We used the second Planck public data release[http://irsa.ipac.caltech.edu/data/Planck/release_2/all-sky-maps/], which consists of HEALPix all-sky maps of 10 quantities: the Stokes parameters I, Q and U, the number of hits, the variances of the Stokes parameters II, QQ and UU, and the covariances IQ, IU and QU. Maps are in NESTED ordering and Galactic coordinates; they have a pixelization N_side = 2048, for a total of 12 · 2048^2 = 50 331 648 pixels with 1'.7 side lengths, so that the beam of the instrument (FWHM ∼ 5') is well sampled. The maps are in units of K_CMB and are converted to MJy/sr with a conversion factor of 287.45 at 353 GHz <cit.>. To obtain the value of I, Q and U at the position of each star and increase the S/N, we employed the same technique as <cit.>: we averaged the values of the Stokes parameters over a Gaussian PSF centered on the star coordinates with a FWHM of 5', bringing the effective resolution to ∼7'. In the case of Q and U, since we are working on a flat map recovered from a spherical one, we need to account for the fact that the direction of the north changes from pixel to pixel.
We do this by rotating the doublet (Q, U) until it is on the equator in the local reference frame.

With the values of the submillimeter Q and U for all the lines of sight we calculated the polarized intensity in emission, P_353 = √(Q^2 + U^2), and the polarization angle ψ_353 = 1/2 arctan(U, Q), using the HEALPix angle convention (where the relative signs of Q and U are inverted with respect to the IAU convention). Being a quadratic function of measurements with finite noise, P_353 has a positive bias and was debiased with the conventional formula <cit.>: P_deb = √(P_bias^2 - σ_P^2). We did not apply more recent debiasing methods <cit.> because, after the smoothing, the environments we are studying have a high signal-to-noise ratio, and therefore a low bias. There was no need to apply CMB and CIB corrections, which are negligible at this wavelength and for our dataset. For each star we also calculated the polarization angle dispersion function S <cit.>, a tracer of disorder in polarization and therefore in the magnetic field orientation. The (I, Q, U) triplet was again smoothed to increase the S/N, using a Gaussian PSF with 5' FWHM and bringing the maps to a ∼7' resolution. The maps thus obtained are oversampled (4 pixels per beam), so we also degraded the pixelization of the Q and U maps to N_side = 1024 to get closer to the Nyquist criterion. The dispersion function S is then computed for the pixel containing the star, with a lag δ = 5'.

§.§ Selection

Since this paper compares different phenomena (dust extinction and thermal emission) observed at different wavelengths (NUV-to-NIR vs. submm), we need to ensure that the comparison is meaningful <cit.>. This amounts to making sure that, first, we are observing the same type of grain at the two wavelengths and, second, that the two wavelengths probe the same volumes of ISM. The first condition is met, to first approximation, as a consequence of dust physics: only large grains contribute to polarization, in extinction and in emission.
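The computation of P_353 and ψ_353 from the smoothed Stokes parameters, including the conventional debiasing described in the previous subsection, reduces to a few lines. A minimal sketch (the function name and call signature are illustrative; the angle follows the HEALPix convention of the maps, not the IAU one):

```python
import numpy as np

def polarization_from_stokes(Q, U, sigma_P=0.0):
    """Polarized intensity and angle from Stokes Q and U.

    Angle in the HEALPix/COSMO convention used by the Planck maps
    (sign of U flipped with respect to the IAU convention).
    Debiasing uses the conventional formula P_deb = sqrt(P^2 - sigma_P^2),
    clipped at zero for noise-dominated pixels.
    """
    P = np.hypot(Q, U)                                    # sqrt(Q^2 + U^2)
    P_deb = np.sqrt(np.clip(P**2 - sigma_P**2, 0.0, None))
    psi = 0.5 * np.arctan2(U, Q)                          # radians
    return P_deb, psi
```

In practice Q, U and σ_P would be the beam-averaged values extracted at each star's position.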
Large grains are also the main contributors to submm emission, as well as to visual and IR extinction (small grains are important in the UV). It should be kept in mind, however, that polarization and overall extinction do not necessarily trace the same grains: a population of large grains that are spherical or unaligned would contribute to extinction and emission, but not to polarization. The second condition is not trivial. Extinction measured on background stars – as is the case for the data described in Section <ref> – only probes the matter in front of the star itself; emission has no such limitation, especially in the submm where the ISM is optically thin (see Fig. <ref>). In the presence of a background to the star, the total intensity I measures systematically more dust than what is observed in the optical. The effect of background on polarization is more complex, and we will detail it in Section <ref>. To compare extinction and emission, therefore, we need to discard those lines of sight that have significant dust emission from behind the star. As shown in <cit.>, this selection can be based on three criteria:

Galactic latitude: All stars close to the Galactic plane are very likely to have significant background, so we only keep stars with Galactic latitude | b | ≥ 2^∘. This forces us to exclude from the study some well-studied clouds, such as the Coalsack Nebula (AP07), located on the Galactic plane.

Polarization angle: Dust polarization in extinction should be orthogonal to that in emission. In lines of sight where the angles ψ_ext and ψ_353 are not orthogonal, extinction and emission do not come from the same dust; we exclude such lines of sight from our sample, with a tolerance of 3σ or 10^∘, whichever is smaller. For stars in <cit.>, whose ψ_ext are given without uncertainties, we assume σ(ψ_ext) = 0.
Since ψ_353 angle uncertainties are usually larger than the V-band angle uncertainties, this should not make a large difference.

Column density: The dust submm optical depth τ_353 can be converted to an expected extinction, A_V or E(B-V), and compared to the extinction actually measured; lines of sight where the τ_353-derived extinction shows an excess have significant background. For this we use the empirical conversion factor E(B-V)/τ_353 = 1.49 · 10^4 obtained by <cit.> for the diffuse ISM. In molecular clouds, however, the dust opacity τ_353/N_H is known to increase by a factor ∼ 2-3 compared to the diffuse ISM <cit.>. We therefore decided to relax this condition and keep all lines of sight where the τ_353-derived extinction is less than 3 times the measured value.

While none of these selection procedures intrinsically excludes all of the contaminated sightlines, when combined, and used together with the selection already operated by AP07, they are more robust. We did some additional selection to improve the data quality. We only kept those stars with an S/N greater than 3 in P_353 and greater than 5 in p_V. For a few of the stars in AP07 the quality of the Serkowski fit was low and the Serkowski parameters were not an adequate representation of the polarization curve; we recovered the observational data from <cit.> and excluded those stars that do not follow the Serkowski law. Finally, we excluded those stars that, according to <cit.>, are likely to have intrinsic polarization. The combined selection left us with the values of λ_max, p_max, ψ_ext, P_353 and ψ_353 for 70 lines of sight, 56 of which also have information on p_V and A_V.

§.§ Line-of-sight and beam depolarization

The magnetic field in the ISM has a non-negligible disordered (or "meandering") component that introduces a confounding variable called "depolarization". When in an observation there is confusion between polarized sources with different orientation angles, it is possible for orthogonal components of polarization to cancel each other out, so the overall polarization observed may be lower than that of each source taken separately.
Depolarization may occur if the interstellar magnetic field changes orientation along a line of sight (line-of-sight depolarization), or if an instrument has a finite observational beam and the magnetic field changes orientation on scales smaller than said beam (beam depolarization). In most polarization studies the two effects are put together under the name of "beam depolarization", or simply "depolarization". However, since the two types of depolarization have different effects on extinction and emission, we will treat them separately in the present paper.

Line-of-sight depolarization, to first approximation, has the same effect on extinction and emission if they probe the same ISM. Complications arise if there is significant emission from the background to the star: if the magnetic field orientation is very different in the foreground and in the background, depolarization in emission may be very different from that in extinction. This would give unreliable measurements of, e.g., the ratio P_353/p_V. The selection described in Section <ref>, if effective, should ensure that line-of-sight depolarization affects extinction and emission in the same way. We remark that having a uniform magnetic field orientation on the line of sight is not equivalent to having no background emission: since P_353 is additive along the line of sight, in this case we would observe an excess of polarization in emission compared to extinction, and overestimate P_353/p_V.

Beam depolarization affects observations that have a finite beam size. This is usually the case at FIR and submm wavelengths: the Planck beams measure 5' or more. Extinction observations on stars, on the other hand, are pointlike and suffer no beam effects, so beam depolarization affects only the polarized emission. Unlike line-of-sight depolarization, beam effects are unaffected by our selection criteria. However, the amount of beam depolarization can be estimated from observational data, such as the function S (Section <ref>), which measures field disorder.
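The angle dispersion function S can be computed directly from the Stokes parameters, which avoids the π-ambiguity that arises when subtracting two polarization angles. A minimal sketch for a single central pixel and its ring of neighbors at the lag δ (the function name and call signature are illustrative, not taken from any published code):

```python
import numpy as np

def angle_dispersion(Q0, U0, Q_ring, U_ring):
    """Polarization angle dispersion S at one pixel.

    (Q0, U0): Stokes parameters of the central pixel;
    (Q_ring, U_ring): arrays of Stokes parameters for the pixels at
    distance delta (the lag). The angle difference between two pixels
    is computed from the Stokes parameters themselves:
        dpsi = 0.5 * arctan2(Qi*U0 - Q0*Ui, Qi*Q0 + Ui*U0)
    and S is the rms of dpsi over the ring.
    """
    Q_ring = np.asarray(Q_ring, dtype=float)
    U_ring = np.asarray(U_ring, dtype=float)
    dpsi = 0.5 * np.arctan2(Q_ring * U0 - Q0 * U_ring,
                            Q_ring * Q0 + U_ring * U0)
    return np.sqrt(np.mean(dpsi ** 2))
```

A perfectly ordered field gives S = 0, while large S flags sightlines where beam depolarization may bias P_353 low.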
§.§ Final observables

Our observational data, to be compared to a dust model, are plotted in Fig. <ref>. The top panel shows the normalized polarization in extinction, p_V/τ_V, as a function of λ_max. The two quantities have a clear negative correlation; we also see that the values of polarization are very widely scattered, their upper limit marking an "envelope" – as is typical of polarization observations – the shape of which may be partly determined by line-of-sight depolarization <cit.>. A very similar behavior can be seen in the submm polarization fraction P_353/I_353 as a function of λ_max, shown in the central panel. The bottom panel of Fig. <ref> shows a different observable: P_353/p_max. This quantity, like the ratio P_353/p_V used in <cit.>, is meant to trace the optical properties of grains by normalizing out the effects of alignment and magnetic field orientation, which affect both emission and extinction. We decided to use p_max in the construction of this ratio, as opposed to p_V, to avoid introducing spurious correlations: many of our stars have high values of λ_max (> 0.6 μm), and p_V and λ_max are going to be negatively correlated in that range. The bottom panel of Fig. <ref> shows that, even if the scatter in P_353/p_max is quite large, it is still small compared to the one observed in p_V/τ_V and P_353/I_353; also, the dependence of P_353/p_max on λ_max is much less pronounced. This is consistent with our expectation that this quantity is nearly independent of alignment and magnetic field orientation.

§ MODEL: DUSTEM WITH POLARIZATION

The dust model we use should ideally have the following three characteristics: it should predict polarization in both extinction and emission, it should be compatible with the latest results from the Planck mission – especially the P_353/p_V ratio, which is underpredicted by pre-Planck models – and it should allow the dust properties to be modified to simulate dust evolution.
Unfortunately, while models that allow detailed dust evolution exist, to the authors' knowledge they either do not predict polarization <cit.> or they are calibrated on extinction alone and cannot be expected to reproduce the correct P_353/p_V ratio <cit.>. We decided instead to use a model optimized for fitting the latest Planck data, at the cost of a simplified treatment of dust evolution in which only grain size is accounted for.

We adopt the dust model recently developed by <cit.> and called "Model A", a modified version of <cit.>. The computation is done with the DustEM Fortran numerical tool[https://www.ias.u-psud.fr/DUSTEM/] and its IDL wrapper.[http://dustemwrap.irap.omp.eu/] The model populations and size distributions are chosen to minimize the number of free parameters; the parameters themselves are calculated by fitting the observables typical of the low-latitude diffuse ISM (| b | < 30^∘): the extinction curve, the polarization in extinction up to 4 μm, and the emission and polarization SEDs updated with Planck results. The model therefore reproduces the average observations of the diffuse ISM, including ratios such as P_353/p_V. The model includes three grain types (see Tab. <ref> and Fig. <ref>): a population of neutral PAHs with a lognormal size distribution, plus two populations of big grains (amorphous carbon and silicates, respectively) distributed as power laws: dn/da ∝ a^α. We are mainly interested in observables where the big grain contribution is dominant, so the model, unlike <cit.>, has no separate population for very small carbonaceous grains: the very small grains are included in the amorphous carbon population, which is why the power law for carbon is weighted towards small sizes. Large grains are prolate spheroids with an axial ratio of 3 <cit.>.
The neutral PAHs and the amorphous carbon grains have the same compositions as their counterparts in <cit.>; the silicate grains have the same composition as in <cit.>, with added porosity: 20% of their volume consists of vacuum inclusions. The porosity of the silicate grains is essential in raising their emission-to-extinction polarization ratio to the value observed by Planck.

In the model, silicate grains are aligned according to the phenomenological alignment function provided by DustEM:

f(a) = 1/2 f_max ( 1 + tanh( ln(a/a_alig) / p_stiff ) )

where f(a) is the fraction of grains aligned as a function of the equivalent radius[For non-spherical grains, the equivalent radius is the radius of a sphere of corresponding volume.] a, f_max is the maximum alignment efficiency, a_alig is the size threshold for grain alignment and the parameter p_stiff sets the width of the transition. This alignment function is designed to increase monotonically with size, since small grains are generally unaligned. This parametric function is not designed to test any particular alignment process, and while its shape resembles the typical result of the radiative torque model <cit.>, it is also compatible with magnetic alignment for grains with superparamagnetic inclusions <cit.>. The variation of the alignment function for varying a_alig, and its effects on the polarization curve in extinction, are shown in Fig. <ref>.

In addition to causing polarization, grain alignment affects dust extinction and emission as well. The resulting correction is very small and it is generally ignored in dust models; nonetheless, DustEM provides the option of including alignment effects in extinction and emission.
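The parametric alignment function above is straightforward to transcribe. A minimal sketch (the default values of a_alig and p_stiff are illustrative placeholders, not the fitted model parameters):

```python
import numpy as np

def alignment_fraction(a, f_max=1.0, a_alig=0.1, p_stiff=0.25):
    """DustEM-style phenomenological alignment function:
        f(a) = 0.5 * f_max * (1 + tanh(ln(a / a_alig) / p_stiff))

    a, a_alig: grain equivalent radius and alignment threshold, in the
    same units (e.g. microns); f_max: maximum alignment efficiency;
    p_stiff: width of the transition.
    """
    return 0.5 * f_max * (1.0 + np.tanh(np.log(a / a_alig) / p_stiff))
```

By construction f(a_alig) = f_max/2, f → 0 well below the threshold and f → f_max well above it, so raising a_alig shifts the aligned population towards larger grains.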
In the present paper, we chose to ignore these effects to allow a more direct comparison with the results of <cit.>, where minor alignment effects were ignored as well.

§.§ Model results: fitting

DustEM provides the full extinction curve and dust emission for the model, including polarization; most of the parameters introduced in Section <ref> have to be obtained from a fit to the DustEM output. We take the extinction curve interpolated at 550 nm as the τ_V of the model, and the thermal emission at 353 GHz – plus a color correction to account for the spectral response of the corresponding Planck band – as the value I_353 of the SED. The same interpolation, operated on the polarized extinction and emission, gives us the model predictions for p_V and P_353. The λ_max, p_max and K of the model are calculated by interpolating the model polarized extinction at the effective central wavelengths of the UBVRIJH photometric bands (Tab. <ref>) and fitting a Serkowski function to the synthetic observations thus obtained, keeping K as a free parameter. We also fitted the model SED with a modified blackbody: I_λ = B_λ(T) · τ_0 · (λ/λ_0)^-β. The fit was performed on the model emission interpolated and color-corrected at the wavelengths of the Planck HFI bands (350, 550, 850, 1380 and 2100 μm) as well as the IRAS 100 μm band. Since emission at those wavelengths is dominated by big grains, integrating the modified blackbody over wavelength provides the radiance ℛ, or emitted power, of the big grain populations <cit.>.

§.§ Model variations

The model described so far has been developed to fit the average observables of the diffuse ISM. Inside molecular clouds, however, evolution alters the properties of dust considerably, the main alteration being grain growth due to accretion and coagulation <cit.>. Translucent lines of sight, such as those studied in this paper, are typically at the onset of such evolution: for instance, <cit.> and <cit.> find that coagulation takes place where A_V is greater than 2 or 3.
This raises the question of whether our sample can be explained by a dust model designed for the diffuse ISM. To answer this question, we studied how the model output is affected by grain alignment efficiency, magnetic field orientation and grain size. Comparing these results to the observations will tell us whether the data can be explained by variations of alignment and field orientation alone, using the same dust as in the diffuse ISM, or whether dust growth is necessary, and to what extent. The modifications to our model are purely phenomenological, not based on simulations of grain growth, dust alignment or magnetic field structure; however, they are useful for estimating the variations that physical models will have to reproduce.

§.§.§ Variations of the alignment function

Loss of grain alignment inside molecular clouds is sometimes invoked to explain the decrease in polarization efficiency observed at high A_V <cit.>. This weakening of polarization is also in qualitative agreement with some alignment theories, e.g. the radiative torque model, which predicts that only the largest grains are aligned inside molecular clouds. The alignment efficiency in DustEM is a function of the three parameters a_alig, p_stiff and f_max, as shown in Eq. <ref>. We simulated different alignment efficiencies by running the model with different values of a_alig, keeping the same p_stiff for simplicity. We did not change f_max (equal to 1 in our model) because it has the exact same effects on p_V and P_353, and it can have no effect on λ_max; the parameter a_alig, on the other hand, affects p_V and P_353 similarly but not identically, since the polarization cross-section has a different size-dependent behavior in the visible (where scattering is dominant) and in the submillimeter. Fig.
<ref> shows how f(a) changes as a function of a_alig and the effect this has on the polarization curve; a higher a_alig corresponds to a lower p_max, since fewer grains are aligned, and a higher λ_max, since the average aligned grain is larger.

§.§.§ Variations in magnetic field orientation

An ordered magnetic field forming an angle γ with the plane of the sky introduces a factor cos^2 γ in the polarized intensity, as shown in Section <ref>. Dust polarization models would greatly benefit from measures of the angle γ; unfortunately, dust only traces the magnetic field component parallel to the plane of the sky, so this information is not usually available. The angle γ is therefore another variable parameter in our model: we ran the model for γ = 0^∘, 30^∘, 45^∘ and 60^∘. The Galactic magnetic field is actually the sum of an ordered component and a disordered, or meandering, one: this latter component causes the phenomena known as line-of-sight and beam depolarization, as explained in Section <ref>. Our model does not include a disordered magnetic field component and therefore cannot predict depolarization; however, the polarization angle dispersion S (Section <ref>) can be used as a measure of field disorder <cit.>. Using this we were able to assess some of the effects of the disordered magnetic field, as shown in Section <ref>.

§.§.§ Variations of the grain size distribution

Gas accretion on grain surfaces <cit.> and the formation of aggregates are known to increase grain sizes inside molecular clouds. This growth is supported by theoretical studies <cit.> and is consistent with observed phenomena such as the flattening of extinction curves in dense environments <cit.> and the coreshine observed in the NIR <cit.>. As already mentioned, there are as yet no dust models that treat dust evolution realistically while reproducing the emission-to-extinction polarization ratio revealed by Planck. We therefore opted for a model that is compatible with Planck data <cit.> at the cost of a simplified treatment of dust evolution.
Specifically, we will focus on a single aspect affected by dust evolution: the size distribution of grains. In our model, the size distribution for big grains is a power law defined by three parameters: the minimum and maximum grain sizes a_ min and , and the power law index α. In the case of silicates, most of the mass is in the large grains (see Fig. <ref>), meaning that the size distribution is most sensitive to ; furthermore, none of the observables we use are in the UV, where the contribution of small grains is important. Therefore, we decided to vary the  of silicates between 350 nm and 1 μm (the standard value is ∼ 500 nm) and keep α and a_ min fixed. Although we chose to fix α mainly as a matter of convenience, we note that this is consistent with the model of grain growth by <cit.>, where the slope of the size distribution does not change much during evolution, and the largest variation is in the upper size cutoff. The case of carbon grains is different, as their distribution is weighted towards small sizes: the mass available for large grains now also depends on a_ min and on the amount of PAHs, so a realistic model becomes a necessity. For this reason we only varied the size distribution of silicates while leaving that of carbon grains fixed. While this choice does not give realistic results for the variation of extinction and emission with size distribution, it still allows us to predict the dust polarization, which in our model depends on silicates alone.

§.§.§ Multi-parameter study: Monte-Carlo The phenomena described in Sections <ref> to <ref> are all expected to occur in molecular clouds, so that variations of a single model parameter at a time are not realistic, even if studying their effect can be instructive. We decided to use a Monte-Carlo simulation to explore the effects of simultaneous variations of many parameters.
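As a rough illustration of why a mass-dominated power-law distribution is most sensitive to its upper cutoff, the fraction of grain mass above an alignment threshold can be sketched in closed form. The exponent value -3.32 (the midpoint of the ±0.5 range quoted later for α) and the sizes are assumptions for the sketch, not model outputs.

```python
def aligned_mass_fraction(a_alig, a_min, a_max, alpha=-3.32):
    """Fraction of the grain mass above the alignment threshold
    a_alig, for a power-law size distribution n(a) ∝ a^alpha
    between a_min and a_max (all sizes in nm). Mass scales as
    a^3 n(a), so we integrate a^(3 + alpha); the closed form
    below assumes alpha != -4."""
    p = 4 + alpha
    return (a_max**p - a_alig**p) / (a_max**p - a_min**p)

# With most of the mass in large grains, the aligned fraction
# reacts strongly to a_max and only weakly to the lower cutoff.
print(aligned_mass_fraction(100, 4, 500))
print(aligned_mass_fraction(100, 4, 1000))
```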
As explained in the previous section, we can only vary the size distribution of silicates, which gives realistic results for polarization but not for unpolarized observables; therefore our Monte-Carlo results can only be compared to polarization observables. The model was run one thousand times, and the values of  and  for silicates were uniformly distributed within the ranges 50 nm <  < 300 nm and 350 nm <  < 1000 nm. The operation was repeated for four values of γ (0^∘, 30^∘, 45^∘ and 60^∘), bringing the simulations to a total of 4000. We found that not all combinations of ,  and γ in the ranges chosen are realistic: the synthetic observables obtained for some such combinations have values that are never observed. We set out to find a realistic range of parameters by imposing that model results have a range as close as possible to that of actual observations <cit.>. Restricting  and  to the following ranges eliminates most of the unrealistic values for  and K: 75 nm <  < 150 nm and 350 nm <  < 800 nm, while the same four values for γ are kept. This selection left us with 844 Monte-Carlo iterations.

§ RESULTS

§.§ Alignment efficiency and magnetic field orientation The effects of dust alignment efficiency and magnetic field orientation are shown in Fig. <ref>, which compares the model results with observational data. Dots represent the observed values of / (top), / (middle) and / (bottom) as a function of . Curves represent the model; within each curve,  varies between 75 and 150 nm, and the four curves correspond to the four values of γ used, 0^∘, 30^∘, 45^∘ and 60^∘. The combination of variable alignment and magnetic field orientation can reproduce most of the observations in the case of / and /, both the general trends and the dispersion. The curve for γ = 0^∘ coincides roughly with the highly polarized, low- lines of sight in our sample.
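The parameter draw and the subsequent restriction to realistic ranges can be sketched as follows. The helper `is_realistic` is a hypothetical stand-in for the actual comparison with observed ranges; only the sampling ranges are taken from the text.

```python
import random

random.seed(0)  # reproducible illustration
GAMMAS_DEG = (0, 30, 45, 60)

def draw_batch(n=1000):
    """One batch of n draws per field angle: a_alig and a_max (nm)
    uniform in the initial, wide ranges."""
    return [(random.uniform(50, 300), random.uniform(350, 1000), g)
            for g in GAMMAS_DEG for _ in range(n)]

def is_realistic(a_alig, a_max, gamma):
    # Stand-in for the comparison with observations: keep only the
    # restricted intervals quoted in the text.
    return 75 <= a_alig <= 150 and 350 <= a_max <= 800

runs = draw_batch()
kept = [r for r in runs if is_realistic(*r)]
print(len(runs), len(kept))  # 4000 draws; roughly a fifth survive
```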
For higher values of γ the polarization decreases, but the dispersion in  caused by the variation of  increases, pushing the maximum  to larger values: as a result, the model predicts that weakly polarized lines of sight can have either small or large , which is indeed the trend found in the observational data. The relation between γ and  described by <cit.> is evident in Fig. <ref>: although  is mainly affected by the alignment size threshold , the model curves with a higher γ are clearly shifted to higher values of . The figure, however, reveals something more: the strength of the γ- relation itself increases with . The leftmost tips of the model curves, corresponding to  = 75 nm, all have very similar . On the contrary, the rightmost tips, corresponding to  = 150 nm, show wide differences in , comparable to the differences due to  itself. It should be noted that our model assumes a uniform magnetic field, and therefore it does not include line-of-sight or beam depolarization. If these effects were important, this may mean that the ordered component of the field is closer to the plane of the sky than our model predicts. Also in Fig. <ref> we see that alignment and magnetic field orientation have very little effect on the model results for /, as was indeed expected: the different curves are close to each other. In fact, the dispersion in the observed / is much larger than predicted by the model, which suggests that variations in the dust optical properties occur in translucent lines of sight. Again, one possible confounding factor in this interpretation is the depolarization caused by meandering of the magnetic field: we will now attempt to assess its effects. While our model assumes a uniform magnetic field, the disorder in the field lines can be estimated from the angle dispersion function S (Section <ref>). In the top panel of Fig.
<ref> we see an anticorrelation between S and the polarization fraction in emission, a well-known effect usually attributed to line-of-sight depolarization caused by meandering of the field <cit.>. Beam depolarization is also a possible cause, but we can see that beam effects appear negligible in our sample: the bottom panel of Fig. <ref> shows that there is no clear influence of field meandering on the polarization ratio /. Beam depolarization, if present, should affect emission but not extinction, introducing an anti-correlation between S and /. The fact that we see no such correlation suggests that beam depolarization is negligible in our sample: this supports the idea that we are probing dust in relatively homogeneous regions, which is an important assumption in the extinction/emission comparison. The effect of line-of-sight depolarization on the polarization fraction is unfortunately impossible to assess without more advanced modelling, but even if present it should have little effect on / following our selection (Sections <ref> and <ref>).

§.§ Grain size distribution As explained in Section <ref>, we ignore variations of size distribution in carbonaceous grains, which affect  and  but not  and . Because of this, our results on the effects of grain growth on observations are unrealistic for / and /, but realistic for /, and we show only this last observable. Fig. <ref> compares the observations and the model results for 350 ≤  ≤ 10^3 nm (solid black line). The effect of a variable  on  is small compared to the effect of grain alignment; on the other hand,  has a strong effect on / and is a plausible contributor to the large dispersion observed in this quantity. For comparison, the figure also shows the model results for fixed  and α varying by ± 0.5 around the standard value, -3.82 ≤ α ≤ -2.82 (dashed line). The variations in α have a modest effect on , comparable to that of ; however, unlike , the parameter α has nearly no influence on the / ratio.
§.§ Multi-parameter Monte-Carlo We have already mentioned that, realistically, all the model parameters we use will simultaneously vary inside a molecular cloud, and we chose to represent this with a Monte-Carlo simulation (Section <ref>). As in the previous section, our modelling does not account for the size distribution of carbon grains and we will only show polarization results. Fig. <ref> compares the Monte-Carlo results to the observational data. The Monte-Carlo was run for γ = 0^∘, 30^∘, 45^∘ and 60^∘; the other model parameters for the Monte-Carlo are uniformly distributed in the regions 75 nm ≤  ≤ 150 nm and 350 nm ≤  ≤ 800 nm. The combined variation of dust alignment, field orientation and grain size allows the model to reproduce most of the observations. However, some lines of sight, spanning the full range of observed , have a lower / than the model can reproduce. This may be the result of variations in the dust polarization properties due to factors other than size (grain shape, porosity, chemical composition), not considered in our model.

§ DISCUSSION We have seen that both the grain alignment and the orientation of the magnetic field have important effects on our observables: a combination of variable  and variable γ reproduces the general trends and the dispersions for / and / as a function of ; in fact, we find the familiar “envelope” in the distribution of points (Fig. <ref>). There are, however, some stars, mainly in the Musca cloud, with a higher polarization than the model can reproduce. This is because the model was made to reproduce a / of ∼ 3%, the usually accepted value for the diffuse interstellar medium <cit.>, while the outlier stars in Fig. <ref> have / ∼ 4%. We should point out, however, that the measurements in question have relatively large error bars and most stars are within ∼ 1 σ above the envelope. We can also see in Fig.
<ref> that a combination of variable alignment, magnetic field orientation and grain growth reproduces the general trend and most of the data scatter in the / vs.  relation. Again, some data are outside the model's range; in this case, the stars with low /. These lines of sight may belong to regions with different dust properties: while our model reproduces the high / ratio from <cit.>, most models give lower values. Another possibility is that the low / is an effect of beam depolarization, which would lower the observed value of  without affecting ; however, in Section <ref> we showed that beam depolarization does not seem important in our sample. A lower / is also what we would expect in regions of low dust heating, since  is proportional to dust emission: our model results are calculated for the typical interstellar radiation field intensity[We used the default interstellar radiation field in DustEM, which is the <cit.> SED for the Solar neighborhood multiplied by the dimensionless intensity parameter G_0.] in the diffuse ISM, G_0 = 1, but we would expect the radiation field to be less intense in clouds. However, in the long-wavelength range of the Planck function – which is the case of  observations of dust – emission tends to depend linearly on dust temperature, meaning that the effect of heating on the dust SED may be small. A proper estimation of heating effects would be no trivial task, as there is no one-to-one relation between the observed  and the extinction actually experienced by dust (see AP07 for a discussion), but a preliminary analysis is shown in Fig. <ref>. The grey curves in the figure represent the model results for 0.1 ≤ G_0 ≤ 3. The top panel shows no visible correlation between / and dust temperature, suggesting that G_0 only has modest effects <cit.>. The same conclusion seems to be supported by the bottom panel: this shows that / is not correlated with the normalized radiance ℛ/.
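The near-linear temperature dependence invoked above can be checked directly on the Planck function. The wavelengths and temperatures below are illustrative choices, not the survey bands.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(lam, temp):
    """Planck spectral radiance B_lambda(T), SI units."""
    x = H * C / (lam * KB * temp)
    return (2 * H * C**2 / lam**5) / math.expm1(x)

# Doubling the dust temperature at submm/mm wavelengths raises the
# emission by a factor close to (though above) the linear value 2,
# approaching exact linearity (the Rayleigh-Jeans limit) as the
# wavelength grows.
for lam in (850e-6, 3e-3):
    print(lam, planck(lam, 40.0) / planck(lam, 20.0))
```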
Since the radiance ℛ is the bolometric emission of big dust grains (Section <ref>), ℛ/ is a measure of the power emitted (and therefore absorbed) by big grains, averaged along the line of sight.[Stars 10 and 24 of Ophiuchus <cit.> are not shown in the bottom panel of Fig. <ref>, because their large ℛ/ places them outside the plot area. The (ℛ/, /) values for these outliers are (2.15, 6.45) and (3.24, 6.98) respectively, in the units shown on the figure axes.]

§ CONCLUSIONS AND PERSPECTIVES Dust polarization depends on the efficiency of grain alignment, the orientation and meandering of the magnetic field and the optical properties of the dust itself. Many studies have attempted to explain polarization in molecular clouds in terms of one of these factors (see AP07 for a study focused on alignment and <cit.> for one focused on magnetic field meandering); nonetheless, it is likely that all factors are at play at once. In this paper we use dust model A from <cit.> and vary its alignment efficiency, grain size and magnetic field orientation to reproduce the diverse conditions that one may find in molecular clouds. The results are compared to extinction and emission data from both bibliographic sources and the  survey, with particular attention to the polarization observables /, / and / as a function of . We find that none of the model parameters employed can explain the full set of observations on its own. Monte-Carlo simulations show that most of the data can be reproduced by letting  vary between 75 and 150 nm,  vary between 350 nm and 800 nm, and γ vary between 0^∘ and 60^∘. Thus, any study of polarization in molecular clouds needs to take into account all these aspects to explain the full range of the data. In particular, the ratio / is very useful in reducing the contributions of alignment and magnetic field orientation, and in highlighting variations in dust properties, especially the size distribution.
Within the context of our model, the variations observed in / can be partly explained by varying the maximum grain size , while the power law index α has little effect. Nonetheless, some of the observations fall outside the range of the model results, most notably some lines of sight with very low values of / that are found over the full range of observed . This is likely to indicate variations in dust properties other than size distribution, such as shape, structure or chemical composition. Non-size-related dust evolution may also influence our estimate of magnetic field orientation: for instance, lines of sight with a low / – which our model attributes to magnetic field lines nearly orthogonal to the plane of the sky (large γ) – may be explained instead by dust with a low polarization cross-section. The lines of sight with low / can in principle be explained without resorting to dust evolution: a dim radiation field due to extinction, or the beam depolarization due to a disordered magnetic field, would lower the value of polarization in emission without affecting extinction. However, the analysis conducted in the previous sections suggests that both dust heating and beam depolarization have little effect on our sample. It is possible that the width of the magnetic field orientation range found by our model (0^∘ to 60^∘) is overestimated. Aside from the aforementioned influence of dust properties, field meandering – which is absent in our model – could introduce line-of-sight depolarization, the effects of which are degenerate with an increase of the angle γ of (the ordered component of) the magnetic field. Including field meandering in an ISM model would make for a very interesting follow-up, but a polarized radiation transfer code is needed for that. It would also be useful to independently determine γ in future research, using MHD simulations or, where available, measurements of the line-of-sight magnetic field such as Zeeman observations.
Not all potentially interesting cases were considered in the present paper: this is a first application of this technique to the study of polarization in molecular clouds. The full implications of this technique will become clearer with more detailed modelling and more observational data. A continuation of this work would benefit from extending the dataset to near-infrared extinction and polarization. Observations in the NIR can probe denser lines of sight than those in the visible; furthermore, increasing the wavelength of observation means getting closer to the Rayleigh limit, so that the observables are better trackers of the overall mass of aligned grains and less dependent on the details of alignment and size distribution. This makes NIR observations an interesting complement to observations in the optical. Finally, different types of observations could improve our constraints on the model variables: maps of molecular lines and elemental depletion could be useful in constraining grain growth processes such as accretion and coagulation. We would like to thank S. N. Shore for his stimulating discussions and insightful remarks on a variety of subjects – from ISM physics to data analysis to writing – during the editing of this article. We are also indebted to N. V. Voshchinnikov for his helpful comments.
http://arxiv.org/abs/1702.08356v2
{ "authors": [ "Lapo Fanciullo", "Vincent Guillet", "François Boulanger", "Anthony Jones" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170227162016", "title": "Interplay of dust alignment, grain growth and magnetic fields in polarization: lessons from the emission-to-extinction ratio" }
Theoretical studies of superconductivity in doped BaCoSO Jiangping Hu December 30, 2023 ========================================================= This work aims to show the applicability of the privacy by design approach to biometric systems, how it can be carried out, and the benefit of using formal methods to this end. Starting from a general framework introduced at STM in 2014, which enables the definition of privacy architectures and formal reasoning about their properties, we explain how it can be adapted to biometrics. The choice of particular techniques and the role of the components (central server, secure module, biometric terminal, smart card, etc.) in the architecture have a strong impact on the privacy guarantees provided by a biometric system. Some architectures have already been analysed in the literature. However, the existing proposals were made on a case by case basis, which makes it difficult to compare them and to provide a rationale for the choice of specific options. In this paper, we describe, on different architectures with various levels of protection, how a general framework for the definition of privacy architectures can be used to specify the design options of a biometric system and to reason about them in a formal way.

§ INTRODUCTION Applications of biometric recognition, as the most natural tool to identify or to authenticate a person, have grown over the years. They now range from criminal investigations and identity documents to many public or private uses, such as physical access control or authentication from a smartphone towards an internet service provider. Such biometric systems involve two main phases: enrolment and verification (either authentication or identification) <cit.>. Enrolment is the registration phase, in which the biometric traits of a person are collected and recorded within the system.
In the authentication mode, a fresh biometric trait is collected and compared with the registered one by the system to check that it corresponds to the claimed identity. In the identification mode, a fresh biometric data is collected and the corresponding identity is searched in a database of enrolled biometric references. During each phase, to enable efficient and accurate comparison, the collected biometric data are converted into discriminative features, leading to what is called a biometric template. With the increased use of biometric systems, and more recently with the development of personal data protection regulations, the issues related to protecting the privacy of the biometric traits used have received particular attention. As leakage of biometric traits may lead to privacy risks, including tracking and identity theft, a privacy by design approach is often needed. As a technical security challenge, it has attracted a lot of research work for at least 15 years, and a wide array of well-documented primitives, such as encryption, homomorphic encryption, secure multi-party computation, hardware security, template protection, etc., are known in the literature. With these building blocks, various architectures have been proposed to take into account privacy requirements in the implementation of privacy preserving biometric systems. Some solutions involve dedicated cryptographic primitives such as secure sketches <cit.> and fuzzy vaults <cit.>; others rely on adaptations of existing cryptographic tools <cit.> or the use of secure hardware solutions <cit.>. The choice of particular techniques and the role of the components (central server, secure module, terminal, smart card, etc.) in the architecture have a strong impact on the privacy guarantees provided by a solution.
However, existing proposals were made on a case by case basis, which makes it difficult to compare them, to provide a rationale for the choice of specific options and to capitalize on past experience. Here, we aim to show how to use and adapt a general framework that has been introduced in <cit.> for the formal definition and validation of privacy architectures. The goal is to specify the various design options in a consistent and comparable way, and then to reason about them in a formal way in order to justify their design in terms of trust assumptions and achieved privacy properties. The privacy by design approach is often praised by lawyers as well as computer scientists as an essential step towards better privacy protection. It is even becoming more and more often legally mandated, as for instance in the European Union with the General Data Protection Regulation <cit.> entering into force. Nevertheless, it is one thing to impose by law the adoption of privacy by design, quite another to define precisely what it is intended to mean technically and to ensure that it is put into practice by developers. The overall philosophy is that privacy should not be treated as an afterthought but rather as a first-class requirement in the design phase of systems: in other words, designers should have privacy in mind from the start when they define the features and architecture of a system.
However, the practical application raises a number of challenges: first of all, the privacy requirements must be defined precisely; then it must be possible to reason about potential tensions between privacy and other requirements and to explore different combinations of privacy enhancing technologies to build systems meeting all these requirements. This work, which has been conducted in particular within the French ANR research project BioPriv <cit.>, an interdisciplinary project involving lawyers and computer scientists, can be seen as an illustration of the feasibility of the privacy by design approach in an industrial environment. A step in this direction has been described in <cit.>, which introduces a system for defining privacy architectures and reasoning about their properties. In Section <ref>, we provide an outline of this framework. Then we show how this framework can be used to apply a privacy by design approach to the implementation of biometric systems. In Sections <ref> to <ref>, we describe several architectures for biometric systems, considering both existing systems and more advanced solutions, and show that they can be defined in this framework. This makes it possible to highlight their commonalities and differences, especially with regard to their underlying trust assumptions. In the second part of this paper, we address a security issue which cannot be expressed in the framework presented in Section <ref>. The origin of the problem is that side-channel information may leak from the execution of the system. This issue is acute for biometric systems because the result of a matching between two biometric data inherently provides some information, even if the underlying cryptographic components are correctly implemented <cit.>. To address this issue, in Section <ref>, we propose an extension of the formal framework, in which information leaks spanning several sessions of the system can be expressed.
In Section <ref>, we apply the extended model to analyse biometric information leakage in several variants of biometric system architectures. Finally, Section <ref> sketches related work and Section <ref> concludes the paper with suggestions of avenues for further work.

§ GENERAL APPROACH The work presented in <cit.> can be seen as a first step towards a formal and systematic approach to privacy by design. In practice, this framework makes it possible to express privacy and integrity requirements (typically the fact that an entity must obtain guarantees about the correctness of a value), to analyse their potential tensions and to make reasoned architectural choices based on explicit trust assumptions. The motivations for the approach come from the following observations: * First, one of the key decisions that has to be taken in the design of a privacy compliant system is the location of the data and the computations: for example, a system in which all data is collected and all results computed on a central server brings strong integrity guarantees to the operator at the price of a loss of privacy for data subjects. Decentralized solutions may provide better privacy protections but weaker guarantees for the operator. The use of privacy enhancing technologies such as homomorphic encryption or secure multi-party computation can in some cases reconcile both objectives. * The choice among the architectural options should be guided by the trust that the actors can place in the other actors and in the components of the architecture. This trust itself can be justified in different ways (security protocol, secure or certified hardware, accredited third party, etc.). As far as the formal model is concerned, the framework proposed in <cit.> relies on a dedicated epistemic logic.
Indeed, because privacy is closely connected with the notion of knowledge, epistemic logics <cit.> form an ideal basis to reason about privacy properties, but standard epistemic logics based on possible-worlds semantics suffer from a weakness (called “logical omniscience” <cit.>) which makes them unsuitable in the context of privacy by design. We assume that the functionality of the system is expressed as the computation of a set of equations Ω := {X = T} over a language Term of terms T defined as follows, where c represents constants (c ∈ Const), X variables (X ∈ Var) and F functions (F ∈ Fun):

T ::= X | c | F (T_1, …, T_n)

An architecture is defined by a set of components C_i, for i ∈ [1, N], and a set A of relations. The relations define the capacities of the components and the trust assumptions. We use the following language to define the relations:

A ::= {R}
R ::= Has_i (X) | Receive_i,j ({St}, {X}) | Compute_G (X = T) | Verify_i (St) | Trust_i,j
St ::= Pro | Att
Att ::= Attest_i ({Eq})
Pro ::= Proof_i ({P})
Eq ::= Pred (T_1, …, T_m)
P ::= Att | Eq

The notation {Z} denotes a set of terms of category Z. Has_i (X) denotes the fact that component C_i possesses (or is the origin of) the value of X, which may correspond to situations in which X is stored on C_i or C_i is a sensor collecting the value of X. In this paper we use the set of predicates Pred := { =, ∈}. Compute_G (X = T) means that the set of components G can compute the term T and assign its value to X, and Trust_i,j represents the fact that component C_i trusts component C_j. Receive_i,j ({St}, {X}) means that C_i can receive the values of the variables in {X} together with the statements in {St} from C_j. We consider two types of statements here, namely attestations: Attest_i ({Eq}) is the declaration by component C_i that the properties in {Eq} hold; and proofs: Proof_i ({P}) is the delivery by C_i of a set of proofs of properties.
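The term and relation languages above can be transcribed directly into datatypes; the sketch below covers only the syntax (a few of the relations), not the semantics or the proof rules, and the encoding is our own.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# Term ::= X | c | F(T_1, ..., T_n)
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    value: object

@dataclass(frozen=True)
class App:
    fun: str
    args: Tuple["Term", ...]

Term = Union[Var, Const, App]

# A few of the architecture relations, as tagged records.
@dataclass(frozen=True)
class Has:        # Has_i(X)
    i: str
    x: str

@dataclass(frozen=True)
class Compute:    # Compute_G(X = T)
    group: Tuple[str, ...]
    x: str
    t: Term

@dataclass(frozen=True)
class Receive:    # Receive_{i,j}({St}, {X})
    i: str
    j: str
    statements: Tuple[str, ...]
    xs: Tuple[str, ...]

# dec = mu(br', bs, thr), as in the biometric functionality Omega:
matching = Compute(("T",), "dec",
                   App("mu", (Var("br'"), Var("bs"), Var("thr"))))
print(matching)
```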
Verify_i is the verification by component C_i of the corresponding statements (the validity of a proof or the authenticity of an attestation). In any case, the architecture level does not provide details on how a verification is done. The verification of an attestation concerns the authenticity of the statement only, not its truth, which C_i may not even be able to check itself. In practice, it could be the verification of a digital signature. Graphical data flow representations can be derived from architectures expressed in this language. For the sake of readability, we use both notations in the next sections. The subset of the privacy logic used in this paper is the following dedicated epistemic logic:

φ ::= Has_i (X) | Has_i^none (X) | K_i (Prop) | φ_1 ∧ φ_2
Prop ::= Pred (T_1, …, T_n) | Prop_1 ∧ Prop_2

Has_i (X) and Has_i^none (X) denote the facts that component C_i respectively can or cannot get the value of X. K_i denotes epistemic knowledge following the “deductive algorithmic knowledge” philosophy <cit.>, which makes it possible to avoid the logical omniscience problem. In this approach, the knowledge of a component C_i is defined as the set of properties that this component can actually derive using its own information and its deductive system ▹_i. Another relation, Dep_i, is used to take into account dependencies between variables. Dep_i (Y, 𝒳) means that if C_i can obtain the values of each variable in the set of variables 𝒳, then it may be able to derive the value of Y. The absence of such a relation is an assumption that C_i cannot derive the value of Y from the values of the variables in 𝒳. It should be noted that this dependency relation is associated with a given component: different components may have different capacities.
For example, if component C_i is the only component able to decrypt a variable ev to get the clear text v, then Dep_i (v, { ev }) holds but Dep_j (v, { ev }) does not hold for any j ≠ i. The semantics S (A) of an architecture A is defined as the set of states of the components C_i of A resulting from compatible execution traces <cit.>. A compatible execution trace contains only events that are instantiations of relations (e.g. Receive_i,j, Compute_G, etc.) of A (as further discussed in Section <ref>). The semantics S (φ) of a property φ is defined as the set of architectures meeting φ. For example, A ∈ S (Has_i^none (X)) if for all states σ ∈ S (A), the sub-state σ_i of component C_i is such that σ_i (X) = , which expresses the fact that the component C_i cannot assign a value to the variable X. To make it possible to reason about privacy properties, an axiomatics of this logic is presented and is proven sound and complete. A ⊢ φ denotes that φ can be derived from A thanks to the deductive rules (i.e. there exists a derivation tree such that all steps belong to the axiomatics, and such that the leaf is A ⊢ φ). A subset of the axioms useful for this paper is presented in Figure <ref>.

§ BIOMETRIC SYSTEMS ARCHITECTURES Before starting the presentation of the different biometric architectures in the next sections, we introduce in this section the basic terminology used in this paper and the common features of the architectures. For the sake of readability, we use upper case sans serif letters , , etc. rather than indexed variables C_i to denote components. By abuse of notation, we will use component names instead of indices and write, for example, Receive_𝖴, 𝖳 ({}, {𝚍𝚎𝚌}). Typewriter letters , , etc. denote variables.
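The decryption example above can be made concrete by closing a component's variable set under its Dep relation with a small fixpoint computation; this is a sketch of the intuition, not part of the framework itself.

```python
def derivable(initial, deps):
    """Close a component's variables under its Dep relation.
    deps is an iterable of (y, xs) pairs meaning the component
    may derive y once it holds every variable in xs."""
    known = set(initial)
    changed = True
    while changed:
        changed = False
        for y, xs in deps:
            if y not in known and set(xs) <= known:
                known.add(y)
                changed = True
    return known

# Only the component holding the decryption key has (v, {ev}) in
# its Dep relation, so only it recovers the clear text v.
dep_i = [("v", ("ev",))]
dep_j = []  # no way to open ev
print(derivable({"ev"}, dep_i))  # component i derives v
print(derivable({"ev"}, dep_j))  # component j is stuck with ev
```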
The set of components of an architecture is denoted by 𝒥. The variables used in biometric system architectures are the following: * A biometric reference template  built during the enrolment phase, where a template corresponds to a set or vector of biometric features that are extracted from raw biometric data in order to be able to compare biometric data accurately. * A raw biometric data  provided by the user during the verification phase. * A fresh template  derived from  during the verification phase. * A threshold  which is used during the verification phase as a closeness criterion for the biometric templates. * The output  of the verification, which is the result of the matching between the fresh template  and the enrolled templates , considering the threshold . Two components appear in all biometric architectures: a component 𝖴 representing the user, and the terminal  which is equipped with a sensor used to acquire biometric traits. In addition, biometric architectures may involve an explicit issuer , enrolling users and certifying their templates, a server  managing a database containing enrolled templates, a module (which can be a hardware security module, denoted HSM) to perform the matching and possibly to take the decision, and a smart card  to store the enrolled templates (and in some cases to perform the matching). Figure <ref> introduces some graphical representations used in the figures of this paper. In this paper, we focus on the verification phase and assume that enrolment has already been done. Therefore the biometric reference templates are stored on a component which can be either the issuer (Has_𝖨 (𝚋𝚛)) or a smart card (Has_𝖢 (𝚋𝚛)). A verification process is initiated by the terminal  receiving as input a raw biometric data  from the user .  extracts the fresh biometric template  from  using the function Extract ∈ Fun. All architectures A therefore include Receive_𝖳, 𝖴 ({}, {𝚛𝚍}) and Compute_𝖳 (𝚋𝚜 = Extract (𝚛𝚍)), and the Dep_𝖳 relation is such that (𝚋𝚜, {𝚛𝚍}) ∈ Dep_𝖳.
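For illustration only, the Extract and μ functions can be instantiated by a toy binary-template scheme with a Hamming-distance comparison; real systems use dedicated feature extractors and matchers, and the hashing below is merely a stand-in for feature extraction.

```python
def extract(raw, n_bits=16):
    """Toy stand-in for Extract: hash the raw sample into a
    fixed-length binary template (illustrative only)."""
    return [(hash(raw) >> i) & 1 for i in range(n_bits)]

def mu(br, bs, thr):
    """Matching: accept when the Hamming distance between the
    reference template br and the fresh template bs is within
    the threshold thr."""
    return sum(a != b for a, b in zip(br, bs)) <= thr

br = extract("enrolment sample")
bs = br[:]
bs[0] ^= 1  # a re-acquisition typically differs in a few bits
print(mu(br, bs, 3))                   # close enough: accepted
print(mu(br, [1 - b for b in br], 3))  # far apart: rejected
```

The threshold trades false rejections (genuine users whose fresh template drifts too far) against false acceptances, which is why it appears explicitly as the variable 𝚝𝚑𝚛 in the architectures.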
In all architectures A, the user receives the final decision 𝚍𝚎𝚌 (which can typically be positive or negative) from the terminal: Receive_𝖴, 𝖳 ({}, {𝚍𝚎𝚌}) ∈ A. The matching itself, which can be performed by different components depending on the architecture, is expressed by the function μ∈ Fun which takes as arguments two biometric templates and the threshold 𝚝𝚑𝚛.

§ APPLICATION OF THE FRAMEWORK TO SEVERAL ARCHITECTURES FOR BIOMETRIC SYSTEMS WITH VARIOUS PROTECTION LEVELS

§.§ Protecting the reference templates with encryption

Let us consider first the most common architecture deployed for protecting biometric data. When a user is enrolled, his reference template is stored encrypted, either in a terminal with an embedded database, or in a central database. During the identification process, the user supplies a fresh template, the reference templates are decrypted by a component (which can typically be the terminal or a dedicated hardware security module) and the comparison is done inside this component. The first part of Figure <ref> shows an architecture A_𝖾𝖽 in which reference templates are stored in a central database and the decryption of the references and the matching are done inside the terminal. The second part of the figure shows an architecture A_𝗁𝗌𝗆 in which the decryption of the references and the matching are done on a dedicated hardware security module. Both architectures are considered in turn in the following paragraphs.

§.§.§ Use of an encrypted database.

The first architecture A_𝖾𝖽 is composed of a user 𝖴, a terminal 𝖳, a server 𝖲 managing an encrypted database and an issuer 𝖨 enrolling users and generating the encrypted database. The set Fun includes the encryption and decryption functions Enc and Dec. When applied to an array, Enc is assumed to encrypt each entry of the array. At this stage, for the sake of conciseness, we consider only biometric data in the context of an identification phase.
The same types of architectures can be used to deal with authentication, which does not raise any specific issue. The functionality of the architecture is Ω := {𝚎𝚋𝚛 = Enc (𝚋𝚛), 𝚋𝚛' = Dec (𝚎𝚋𝚛), 𝚋𝚜 = Extract (𝚛𝚍), 𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛)}, and the architecture is defined as:

A_𝖾𝖽 := {Has_𝖨 (𝚋𝚛), Has_𝖴 (𝚛𝚍), Has_𝖳 (𝚝𝚑𝚛), Compute_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛)),
Receive_𝖲, 𝖨 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))}, {𝚎𝚋𝚛}),
Receive_𝖳, 𝖲 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))}, {𝚎𝚋𝚛}), Trust_𝖳, 𝖨,
Verify_𝖳 (Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))), Receive_𝖳, 𝖴 ({}, {𝚛𝚍}),
Compute_𝖳 (𝚋𝚜 = Extract (𝚛𝚍)), Compute_𝖳 (𝚋𝚛' = Dec (𝚎𝚋𝚛)),
Compute_𝖳 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛)), Receive_𝖴, 𝖳 ({}, {𝚍𝚎𝚌}) }

The properties of the encryption scheme are captured by the dependence and deductive relations. The dependence relations are: (𝚎𝚋𝚛, {𝚋𝚛}) ∈ Dep_𝖨, and {(𝚋𝚜, {𝚛𝚍}), (𝚍𝚎𝚌, {𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛}), (𝚋𝚛', {𝚎𝚋𝚛}), (𝚋𝚛, {𝚎𝚋𝚛})} ⊆ Dep_𝖳. Moreover the deductive algorithmic knowledge relation contains: {𝚎𝚋𝚛 = Enc (𝚋𝚛)}▹{𝚋𝚛 = Dec (𝚎𝚋𝚛)}.

From the point of view of biometric data protection, the property that this architecture is meant to ensure is the fact that the server should not have access to the reference template, that is to say: Has_𝖲^none (𝚋𝚛), which can be proven using Rule (the same property holds for 𝚋𝚛'):

Has_𝖲 (𝚋𝚛) ∉A_𝖾𝖽 ∄𝒳: (𝚋𝚛, 𝒳) ∈ Dep_𝖲 ∄ T: Compute_𝖲 (𝚋𝚛 = T) ∈ A_𝖾𝖽 ∄ j ∈𝒥, ∄ S, ∄ E, Receive_𝖲, j (S, E) ∈ A_𝖾𝖽∧𝚋𝚛∈ E
A_𝖾𝖽⊢ Has_𝖲^none (𝚋𝚛)

It is also easy to prove, using rules and , that the terminal has access to 𝚋𝚛': Has_𝖳 (𝚋𝚛').

As far as integrity is concerned, the terminal should be convinced that the matching is correct.
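The premises of the rule applied above are purely syntactic conditions on the architecture, so they can be checked mechanically. The sketch below re-encodes the relevant fragment of A_𝖾𝖽 as tagged tuples (a hypothetical representation, not the paper's syntax) and tests the side conditions for the server 𝖲 and the variable 𝚋𝚛:

```python
# Side conditions of the Has^none rule for component i and variable x:
# no Has_i(x), no Compute_i(x = T), no Receive_i,j(S, E) with x in E,
# and no dependence (x, X) in Dep_i. Encoding is an illustrative sketch.

def derives_has_none(arch, dep, i, x):
    for prim in arch:
        kind, comp = prim[0], prim[1]
        if comp != i:
            continue
        if kind in ("Has", "Compute") and prim[2] == x:
            return False
        if kind == "Receive" and x in prim[2]:
            return False
    return all(y != x for (y, _) in dep.get(i, set()))

# Fragment of A_ed relevant to S and T (sources and attestations omitted).
A_ed = {
    ("Has", "T", "thr"),
    ("Receive", "S", frozenset({"ebr"})),
    ("Receive", "T", frozenset({"ebr", "rd"})),
    ("Compute", "T", "br'"),
    ("Compute", "T", "dec"),
}
DEP = {"T": {("br", frozenset({"ebr"})), ("br'", frozenset({"ebr"}))}}

assert derives_has_none(A_ed, DEP, "S", "br")       # S never gets br
assert not derives_has_none(A_ed, DEP, "T", "br")   # T can derive br
```

The integrity argument that follows, by contrast, relies on trust relations and attestations rather than on these syntactic conditions.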
The proof relies on the trust placed by the terminal in the issuer (about the correctness of 𝚎𝚋𝚛) and the computations that the terminal can perform by itself (through Compute_𝖳 and the application of ▹):

Verify_𝖳 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))}) ∈ A_𝖾𝖽 Trust_𝖳, 𝖨∈ A_𝖾𝖽
A_𝖾𝖽⊢ K_𝖳 (𝚎𝚋𝚛 = Enc (𝚋𝚛))

{𝚎𝚋𝚛 = Enc (𝚋𝚛)}▹{𝚋𝚛 = Dec (𝚎𝚋𝚛)} A_𝖾𝖽⊢ K_𝖳 (𝚎𝚋𝚛 = Enc (𝚋𝚛))
A_𝖾𝖽⊢ K_𝖳 (𝚋𝚛 = Dec (𝚎𝚋𝚛))

Compute_𝖳 (𝚋𝚛' = Dec (𝚎𝚋𝚛)) ∈ A_𝖾𝖽
A_𝖾𝖽⊢ K_𝖳 (𝚋𝚛' = Dec (𝚎𝚋𝚛))

Assuming that all deductive relations include the properties (commutativity and transitivity) of the equality, can be used to derive: A_𝖾𝖽⊢ K_𝖳 (𝚋𝚛 = 𝚋𝚛'). A further application of with another transitivity rule for the equality allows us to obtain the desired integrity property:

A_𝖾𝖽⊢ K_𝖳 (𝚋𝚛 = 𝚋𝚛') Compute_𝖳 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛)) ∈ A_𝖾𝖽
A_𝖾𝖽⊢ K_𝖳 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛))
A_𝖾𝖽⊢ K_𝖳 (𝚍𝚎𝚌 = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛))

§.§.§ Encrypted database with a hardware security module.

The architecture presented in the previous subsection relies on the terminal to decrypt the reference template and to perform the matching operation. As a result, the clear reference template is known by the terminal and the only component that has to be trusted by the terminal is the issuer. If it does not seem sensible to entrust the terminal with this central role, another option is to delegate the decryption of the reference template and the computation of the matching to a hardware security module so that the terminal itself never stores any clear reference template. This strategy leads to architecture A_𝗁𝗌𝗆 pictured in the second part of Figure <ref>.

In addition to the user 𝖴, the issuer 𝖨, the terminal 𝖳, and the server 𝖲, the set of components contains a hardware security module 𝖬. The terminal does not perform the matching, but has to trust 𝖬. This trust can be justified in practice by the level of security provided by the HSM (which can also be endorsed by an official security certification scheme).
The architecture is described as follows in our framework:

A_𝗁𝗌𝗆 := {Has_𝖨 (𝚋𝚛), Has_𝖴 (𝚛𝚍), Has_𝖬 (𝚝𝚑𝚛), Compute_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛)),
Receive_𝖲, 𝖨 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))}, {𝚎𝚋𝚛}),
Receive_𝖳, 𝖲 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))}, {𝚎𝚋𝚛}), Trust_𝖳, 𝖨,
Verify_𝖳 (Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))), Receive_𝖳, 𝖴 ({}, {𝚛𝚍}),
Compute_𝖳 (𝚋𝚜 = Extract (𝚛𝚍)), Receive_𝖬, 𝖳 ({}, {𝚋𝚜, 𝚎𝚋𝚛}),
Compute_𝖬 (𝚋𝚛' = Dec (𝚎𝚋𝚛)), Compute_𝖬 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛)),
Verify_𝖳 ({Attest_𝖬 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛))}), Trust_𝖳, 𝖬,
Receive_𝖳, 𝖬 (𝒜, {𝚍𝚎𝚌}), Verify_𝖳 ({Attest_𝖬 (𝚋𝚛' = Dec (𝚎𝚋𝚛))}) }

where the set of attestations 𝒜 received by the terminal from the module is 𝒜 := {Attest_𝖬 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛)), Attest_𝖬 (𝚋𝚛' = Dec (𝚎𝚋𝚛))}.

The trust relation between the terminal and the module makes it possible to apply rule twice:

Verify_𝖳 ({Attest_𝖬 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛))}) ∈ A_𝗁𝗌𝗆 Trust_𝖳, 𝖬∈ A_𝗁𝗌𝗆
A_𝗁𝗌𝗆⊢ K_𝖳 (𝚍𝚎𝚌 = μ (𝚋𝚛', 𝚋𝚜, 𝚝𝚑𝚛))

Verify_𝖳 ({Attest_𝖬 (𝚋𝚛' = Dec (𝚎𝚋𝚛))}) ∈ A_𝗁𝗌𝗆 Trust_𝖳, 𝖬∈ A_𝗁𝗌𝗆
A_𝗁𝗌𝗆⊢ K_𝖳 (𝚋𝚛' = Dec (𝚎𝚋𝚛))

The same proof as in the previous subsection can be applied to establish the integrity of the matching. The trust relation between the terminal and the issuer and the rules , make it possible to derive: A_𝗁𝗌𝗆⊢ K_𝖳 (𝖻𝗋 = Dec (𝖾𝖻𝗋)). Then two successive applications of regarding the transitivity of the equality lead to: A_𝗁𝗌𝗆⊢ K_𝖳 (𝚍𝚎𝚌 = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛)).

As in architecture A_𝖾𝖽, the biometric references are never disclosed to the server. However, in contrast with A_𝖾𝖽, they are not disclosed to the terminal either, as shown by rule :

Has_𝖳 (𝚋𝚛) ∉A_𝗁𝗌𝗆 ∄𝒳: (𝚋𝚛, 𝒳) ∈ Dep_𝖳 ∄ T: Compute_𝖳 (𝚋𝚛 = T) ∈ A_𝗁𝗌𝗆 ∄ j ∈𝒥, ∄ S, ∄ E, Receive_𝖳, j (S, E) ∈ A_𝗁𝗌𝗆∧𝚋𝚛∈ E
A_𝗁𝗌𝗆⊢ Has_𝖳^none (𝚋𝚛)

§.§ Enhancing protection with homomorphic encryption

In both architectures of Section <ref>, biometric templates are protected, but the component performing the matching (either the terminal or the secure module) gets access to the reference templates.
In this section, we show how homomorphic encryption can be used to ensure that no component gets access to the biometric reference templates during the verification.

Homomorphic encryption schemes <cit.> make it possible to compute certain functions over encrypted data. For example, if Enc is a homomorphic encryption scheme for multiplication then there is an operation ⊗ such that:

c_1 = Enc (m_1) ∧ c_2 = Enc (m_2) ⇒ c_1 ⊗ c_2 = Enc (m_1 × m_2).

Figure <ref> presents an architecture A_𝗁𝗈𝗆 derived from A_𝗁𝗌𝗆 in which the server performs the whole matching computation over encrypted data. The user supplies a template that is sent encrypted to the server (denoted 𝚎𝚋𝚜). The server also owns an encrypted reference template 𝚎𝚋𝚛. The comparison, i.e. the computation of the distance between the templates, is done by the server, leading to the encrypted distance 𝚎𝚍𝚎𝚌, but the server does not get access to the biometric data or to the result. This is made possible through the use of a homomorphic encryption scheme. On the other hand, the module gets the result, but does not get access to the templates. Let us note that A_𝗁𝗈𝗆 is just one of the possible ways to use homomorphic encryption in this context: the homomorphic computation of the distance could actually be made by another component (for example the terminal itself) since it does not lead to any leak of biometric data.

The homomorphic property of the encryption scheme needed for this application depends on the matching algorithm. An option is to resort to a fully homomorphic encryption scheme (FHE) <cit.> as in the solution described in <cit.> which uses a variant of a FHE scheme for face-recognition. However, schemes with simpler homomorphic functionalities can also be sufficient (examples can be found in <cit.>). Since we describe our solutions at the architecture level, we do not need to enter into details regarding the chosen homomorphic scheme.
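The multiplicative property above can be observed concretely with textbook RSA, which is homomorphic for multiplication. The toy parameters below are for illustration only; textbook RSA without padding is not semantically secure, which is precisely why dedicated homomorphic schemes are used in practice:

```python
# Textbook RSA is multiplicatively homomorphic:
# Enc(m1) * Enc(m2) mod n = Enc(m1 * m2 mod n).
# Toy parameters for illustration only (n = 61 * 53).

n, e = 3233, 17          # public modulus and exponent
d = 2753                 # private exponent: d * e = 1 mod phi(n)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 12, 7
c1, c2 = enc(m1), enc(m2)

# The homomorphic operation on ciphertexts is modular multiplication.
c_prod = (c1 * c2) % n
assert dec(c_prod) == (m1 * m2) % n   # decrypts to 84
```

The matching function μ is of course more involved than a single product, which is why the text only assumes a function Hom-μ with the required homomorphic property.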
We just need to assume the existence of a homomorphic matching function Hom-μ with the following properties captured by the algorithmic knowledge relations:

{𝚎𝚋𝚛 = Enc (𝚋𝚛), 𝚎𝚋𝚜 = Enc (𝚋𝚜), 𝚎𝚍𝚎𝚌 = Hom-μ (𝚎𝚋𝚛, 𝚎𝚋𝚜, 𝚝𝚑𝚛) }▹{Dec (𝚎𝚍𝚎𝚌) = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛)}

The dependence relations include the following: {(𝚋𝚜, {𝚛𝚍}), (𝚎𝚋𝚜, {𝚋𝚜}) }⊆ Dep_𝖳; (𝚎𝚋𝚛, {𝚋𝚛}) ∈ Dep_𝖨; {(𝚋𝚛, {𝚎𝚋𝚛}), (𝚋𝚜, {𝚎𝚋𝚜}), (𝚍𝚎𝚌, {𝚎𝚍𝚎𝚌})}⊆ Dep_𝖬. Architecture A_𝗁𝗈𝗆 is defined as follows:

A_𝗁𝗈𝗆 := { Has_𝖨 (𝚋𝚛), Has_𝖴 (𝚛𝚍), Has_𝖲 (𝚝𝚑𝚛), Compute_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛)),
Receive_𝖲, 𝖨 ({Attest_𝖨 ({𝚎𝚋𝚛 = Enc (𝚋𝚛)})}, {𝚎𝚋𝚛}), Receive_𝖳, 𝖴 ({}, {𝚛𝚍}),
Compute_𝖳 (𝚋𝚜 = Extract (𝚛𝚍)), Compute_𝖳 (𝚎𝚋𝚜 = Enc (𝚋𝚜)),
Receive_𝖲, 𝖳 ({}, {𝚎𝚋𝚜}), Compute_𝖲 (𝚎𝚍𝚎𝚌 = Hom-μ (𝚎𝚋𝚛, 𝚎𝚋𝚜, 𝚝𝚑𝚛)),
Receive_𝖳, 𝖲 (𝒜, {𝚎𝚍𝚎𝚌}), Verify_𝖳 (Attest_𝖨 ({𝚎𝚋𝚛 = Enc (𝚋𝚛)})),
Verify_𝖳 (Attest_𝖲 ({𝚎𝚍𝚎𝚌 = Hom-μ (𝚎𝚋𝚛, 𝚎𝚋𝚜, 𝚝𝚑𝚛)})), Trust_𝖳, 𝖲,
Trust_𝖳, 𝖨, Receive_𝖬, 𝖳 ({}, {𝚎𝚍𝚎𝚌}), Compute_𝖬 (𝚍𝚎𝚌 = Dec (𝚎𝚍𝚎𝚌)),
Receive_𝖳, 𝖬 ({Attest_𝖬 ({𝚍𝚎𝚌 = Dec (𝚎𝚍𝚎𝚌)})}, {𝚍𝚎𝚌}), Trust_𝖳, 𝖬,
Verify_𝖳 (Attest_𝖬 ({𝚍𝚎𝚌 = Dec (𝚎𝚍𝚎𝚌)})), Receive_𝖴, 𝖳 ({}, {𝚍𝚎𝚌}) }

where the set 𝒜 of attestations received by the terminal from the server is: 𝒜 := {Attest_𝖨 ({𝚎𝚋𝚛 = Enc (𝚋𝚛)}), Attest_𝖲 ({𝚎𝚍𝚎𝚌 = Hom-μ (𝚎𝚋𝚛, 𝚎𝚋𝚜, 𝚝𝚑𝚛)})}.

In order to prove that the terminal can establish the integrity of the result 𝚍𝚎𝚌, we can proceed in two steps, proving first the correctness of 𝚎𝚍𝚎𝚌 and then deriving the correctness of 𝚍𝚎𝚌 using the properties of homomorphic encryption.
The first step relies on the capacities of component 𝖳 and the trust assumptions on components 𝖨 and 𝖲, using rules and respectively.

Compute_𝖳 (𝚎𝚋𝚜 = Enc (𝚋𝚜)) ∈ A_𝗁𝗈𝗆
A_𝗁𝗈𝗆⊢ K_𝖳 (𝚎𝚋𝚜 = Enc (𝚋𝚜))

Verify_𝖳 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (𝚋𝚛))}) ∈ A_𝗁𝗈𝗆 Trust_𝖳, 𝖨∈ A_𝗁𝗈𝗆
A_𝗁𝗈𝗆⊢ K_𝖳 (𝚎𝚋𝚛 = Enc (𝚋𝚛))

Verify_𝖳 ({Attest_𝖲 (𝚎𝚍𝚎𝚌 = Hom-μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛))}), Trust_𝖳, 𝖲∈ A_𝗁𝗈𝗆
A_𝗁𝗈𝗆⊢ K_𝖳 (𝚎𝚍𝚎𝚌 = Hom-μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛))

The second step can be done through the application of the deductive algorithmic knowledge regarding the homomorphic encryption property (with LHS_1 the left hand-side of equation (<ref>)):

LHS_1 ▹{Dec (𝚎𝚍𝚎𝚌) = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛)} ∀ Eq ∈ LHS_1: A_𝗁𝗈𝗆⊢ K_𝖳 (Eq)
A_𝗁𝗈𝗆⊢ K_𝖳 (Dec (𝚎𝚍𝚎𝚌) = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛))

The desired property is obtained through the application of rules and , exploiting the trust relation between 𝖳 and 𝖬 and the transitivity of equality.

Verify_𝖳 ({Attest_𝖬 (𝚍𝚎𝚌 = Dec (𝚎𝚍𝚎𝚌))}) ∈ A_𝗁𝗈𝗆 Trust_𝖳, 𝖬∈ A_𝗁𝗈𝗆
A_𝗁𝗈𝗆⊢ K_𝖳 (𝚍𝚎𝚌 = Dec (𝚎𝚍𝚎𝚌))

A_𝗁𝗈𝗆⊢ K_𝖳 (Dec (𝚎𝚍𝚎𝚌) = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛)) A_𝗁𝗈𝗆⊢ K_𝖳 (𝚍𝚎𝚌 = Dec (𝚎𝚍𝚎𝚌))
A_𝗁𝗈𝗆⊢ K_𝖳 (𝚍𝚎𝚌 = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛))

As far as privacy is concerned, the main property that A_𝗁𝗈𝗆 is meant to ensure is that no component (except the issuer) has access to the biometric references. Rule makes it possible to prove that 𝖴, 𝖳, and 𝖲 never get access to 𝚋𝚛, as in Section <ref>. The same rule can be applied here to prove A_𝗁𝗈𝗆 ⊢ Has_𝖬^none (𝚋𝚛), exploiting the fact that neither (𝚋𝚛, {𝚎𝚍𝚎𝚌}) nor (𝚋𝚛, {𝚍𝚎𝚌}) belong to Dep_𝖬.

§.§ The Match-On-Card technology

Another solution can be considered when the purpose of the system is authentication rather than identification. In this case, it is not necessary to store a database of biometric reference templates and a (usually unique) reference template can be stored on a smart card. A smart-card-based privacy-preserving architecture has recently been proposed, which relies on the idea of using the card not only to store the reference template but also to perform the matching itself.
Since the comparison is done inside the card, the reference template never leaves the card. In this Match-On-Card (MOC) technology <cit.> (also called comparison-on-card), the smart card receives the fresh biometric template, carries out the comparison with its reference template, and sends the decision back (as illustrated in Figure <ref>).

In this architecture, the terminal is assumed to trust the smart card. This trust assumption is justified by the fact that the card is a tamper-resistant hardware element. This architecture is simpler than the previous ones but not always possible in practice (for a combination of technical and economic reasons) and may represent a shift in terms of trust if the smart card is under the control of the user.

More formally, the MOC architecture is composed of a user 𝖴, a terminal 𝖳, and a card 𝖢. The card 𝖢 attests that the templates 𝚋𝚛 and 𝚋𝚜 are close (with respect to the threshold 𝚝𝚑𝚛):

A_𝗆𝗈𝖼 := {Has_𝖢 (𝚋𝚛), Has_𝖴 (𝚛𝚍), Has_𝖢 (𝚝𝚑𝚛), Receive_𝖳, 𝖴 ({}, {𝚛𝚍}),
Compute_𝖳 (𝚋𝚜 = Extract (𝚛𝚍)), Receive_𝖢, 𝖳 ({}, {𝚋𝚜}),
Compute_𝖢 (𝚍𝚎𝚌 = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛)), Receive_𝖴, 𝖳 ({}, {𝚍𝚎𝚌}),
Receive_𝖳, 𝖢 ({Attest_𝖢 (𝚍𝚎𝚌 = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛))}, {𝚍𝚎𝚌}),
Verify_𝖳 ({Attest_𝖢 (𝚍𝚎𝚌 = μ (𝚋𝚛, 𝚋𝚜, 𝚝𝚑𝚛))}), Trust_𝖳, 𝖢}

Using rule , it is easy to show that no component apart from 𝖢 gets access to 𝚋𝚛. The proof of the integrity property relies on the capacities of component 𝖳 and the trust assumption on component 𝖢, using rules and respectively.

§ EXTENSION OF THE FRAMEWORK TO INFORMATION LEAKAGE

§.§ Extension of the architecture language

Motivated by the need to analyse the inherent leakage of the result of a matching between two biometric data in biometric systems (cf. <cit.>), we now propose an extension of the formal framework sketched in Section <ref>, in which the information leaking through several executions can be expressed. We highlight the differences with the framework introduced in Section <ref> without repeating their common parts.
The term language we use is now the following.

[T::= X̃|c| F (X̃_1, …, X̃_m, c_1, …, c_q); X̃::=X| X[k];]

Functions may take as parameters both variables and constants. Variables X̃ can be simple variables or arrays of variables. If X is an array, Range (X) denotes its size.

In this extended framework, in addition to defining a set of primitives, an architecture can also provide a bound on the number of times a primitive can be used.

[ [ [7mm]A::=5l{R};R::=Has_i^(n) (X)|Has_i (c)| 3lReceive_i, j^(n) ({St}, {X}∪{c}); | Trust_i, j|Reset|Compute_G^(n) (X = T)|Verify^(n)_i ({St});]; ;[[7mm]St::=Pro|Att Att::=Attest_i ({Eq});Pro::=6lProof_i ({P}) Eq::= Pred (T_1, …, T_m);P::=Att| Eq;] ]

The superscript notation ^(n) denotes that a primitive can be carried out at most n ∈ (ℕ∖{0}) ∪{∞} times by the component(s) – where (∀ n' ∈ℕ: n' < ∞). We assume that n is never equal to 0. 𝗆𝗎𝗅 (α) denotes the multiplicity (n) of the primitive α, if any. The Reset primitive is used to reinitialize the whole system.

As in the initial model, consistency assumptions are made about the architectures to avoid meaningless definitions. For instance, we require that components carry out computations only on the values that they have access to (either through Has, Compute, or Receive). We also require that all multiplicities n specified by the primitives are identical in a consistent architecture. As a result, a consistent architecture A is parametrized by an integer n ≥ 1 (we note A (n) when we want to make this integer explicit).

A key concept for the definition of the semantics is the notion of trace. A trace is a sequence of events and an event[Except for the Session event.] is an instantiation of an architectural primitive[Except for Trust primitives, which cannot be instantiated into events because they are global assumptions.]. The notion of successive sessions is captured by the addition of a Session event[Computations can involve different values of the same variables from different sessions.].
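Operationally, the bound ^(n) can be enforced by counting, between two Reset events, how many events instantiate each bounded primitive. A minimal sketch (the string event names and the dict-based bound table are our own assumptions):

```python
# Compatibility check for multiplicity bounds: a trace is rejected
# if, between two Reset events, some primitive is instantiated more
# than n times. Events are tagged with the primitive they instantiate.
from collections import Counter

def respects_bounds(trace, bounds):
    """trace: list of primitive names (plus 'Reset'/'Session');
    bounds: dict primitive -> n (absent means unbounded)."""
    counts = Counter()
    for ev in trace:
        if ev == "Reset":
            counts.clear()        # reinitialize the whole system
            continue
        counts[ev] += 1
        if ev in bounds and counts[ev] > bounds[ev]:
            return False
    return True

bounds = {"Receive_M_T": 2}
ok = ["Receive_M_T", "Session", "Receive_M_T", "Reset", "Receive_M_T"]
bad = ["Receive_M_T", "Receive_M_T", "Receive_M_T"]
assert respects_bounds(ok, bounds)
assert not respects_bounds(bad, bounds)
```

Note that only Reset clears the counters: sessions may follow one another, but the total number of uses of a bounded primitive between two resets stays below n.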
A trace θ of events is said to be compatible with a consistent architecture A (n) if all events in θ (except the computations) can be obtained by instantiation of some architectural primitive from A, and if the number of events between two Reset events corresponding to a given primitive is less than the bound n specified by the architecture. We denote by T (A) the set of traces which are compatible with an architecture A.

[θ::=𝖲𝖾𝗊 (ϵ);ϵ::=Has_i (X:V)|Has_i (c)| 3lReceive_i, j ({St}, {X:V}∪{c}); |Session|Reset|Compute_G (X = T)|Verify_i ({St});]

An event can instantiate variables X with specific values V. Constants always map to the same value. Let Val be the set of values the variables and constants can take. The set Val_⊥ is defined as Val ∪ {⊥} where ⊥ ∉ Val is a specific symbol used to denote that a variable or a constant has not been assigned yet.

The semantics of an architecture follows the approach introduced in <cit.>. Each component is associated with a state. Each event in a trace of events affects the state of each component involved in the event. The semantics of an architecture is defined as the set of states reachable by compatible traces.

The state of a component is either the Error state or a pair consisting of: (i) a variable state assigning values to variables, and (ii) a property state defining what is known by a component.

[ State_= (State_V × State_P) ∪{Error};State_V= Var ∪ Const →𝖫𝗂𝗌𝗍 (Val_⊥);State_P={Eq}∪{Trust_i, j} ]

The data structure 𝖫𝗂𝗌𝗍 over a set S denotes the finite ordered lists of elements of S, 𝗌𝗂𝗓𝖾 (L) denotes the size of the list L, and () is the empty list. For a non-empty list L = (e_1, …, e_n) ∈ S^n where 𝗌𝗂𝗓𝖾 (L) = n ≥ 1, L[m] denotes the element e_m for 1 ≤ m ≤ n, 𝗅𝖺𝗌𝗍 (L) denotes L [n], and 𝖺𝗉𝗉𝖾𝗇𝖽 (L, e) denotes the list (e_1, …, e_n, e) ∈ S^n + 1. Let σ := (σ_1, …, σ_N) denote the global state (i.e.
the list of states of all components) defined over (State_)^N and σ_i^v and σ_i^pk denote, respectively, the variable and the knowledge state of the component C_i.

The variable state assigns values to variables and to constants (each constant is either undefined or taking a single value). σ_i^v (X) [m] (resp. σ_i^v (c) [m]) denotes the m-th entry of the variable state of X ∈ Var (resp. c ∈ Const). The initial state of an architecture A is denoted by Init^A = ⟨ Init^A_1, …, Init^A_N⟩ where: ∀ C_i: Init^A_i = (Empty, {Trust_i, j |∃ C_j: Trust_i, j∈ A}). Empty associates to each variable and constant a list made of a single undefined value (⊥). We assume that, in the initial state, the system is in its first session. Alternatively, we could set empty lists in the initial state and assume that every consistent trace begins with a Session event.

Let S_T: Trace × (State_)^N→ (State_)^N and S_E: Event × (State_)^N→ (State_)^N be the following two functions. S_T is defined recursively by iteration of S_E: for every state σ∈ (State_)^N, event ϵ∈ Event and consistent trace θ∈ Trace, S_T (⟨⟩, σ) = σ and S_T (ϵ·θ, σ) = S_T (θ, S_E (ϵ, σ)). The modification of a state is noted σ [σ_i/(v, pk)], where the variable and knowledge states of C_i are replaced by v and pk respectively. σ[σ_i /Error] denotes that the Error state is reached for component C_i. We assume that a component reaching an Error state no longer gets involved in any later action (until a reset of the system). The function S_E is defined event per event.

The effect of Has_i (X:V) and Receive_i,j (S, {(X:V)}) on the variable state of component C_i is the replacement of the last value of the variable X by the value V: 𝗅𝖺𝗌𝗍 (σ_i^v (X)) := V.
This effect is denoted by σ_i^v [X / V]:

S_E (Has_i (X:V), σ) = S_E (Receive_i, j (S, {X:V}), σ) = σ[σ_i / (σ_i^v[X/V], σ^pk_i)].

In the case of constants, the value V is determined by the interpretation of c (as for the function symbols in the computations).

The effect of Compute_G (X = T) is to assign to X, for each component C_i ∈ G, the value V produced by the evaluation (denoted ε) of T. The new knowledge is the equation X = T. A computation may involve values of variables from different sessions. As a result, some consistency conditions must be met, otherwise an error state is reached:

S_E (Compute_G (X = T), σ) = σ[∀ C_i ∈ G: σ_i / (σ_i^v[X / V], σ^pk_i ∪{X = T})] if the condition on the computation holds, σ[σ_i / Error] otherwise,

where V := ε(T, ∪_C_i ∈ Gσ_i^v). For each X̃^(n)∈ T, the evaluation of T is done with respect to the n last values of X̃ that are fully defined. An error state is reached if n such values are not available. The condition on the computation is then: ∀ C_i ∈ G, X̃^(n)∈ T: 𝗌𝗂𝗓𝖾({m|σ_i^v (V (X̃)) [m] is fully defined}) ≥ n.

Semantics of the verification events are defined according to the (implicit) semantics of the underlying verification procedures. In each case, the knowledge state of the component is updated if the verification passes, otherwise the component reaches an Error state.
The variable state is not affected.

S_E (Verify_i (Proof_j (E)), σ) = σ[σ_i/ (σ_i^v, σ_i^pk∪ new^pk_Proof)] if the proof is valid, σ[σ_i / Error] otherwise,

S_E (Verify_i (Attest_j (E)), σ) = σ[σ_i / (σ_i^v, σ_i^pk∪ new^pk_Attest)] if the attestation is valid, σ[σ_i / Error] otherwise.

The new knowledge new^pk_Proof and new^pk_Attest are defined as:

new^pk_Proof := {Eq|Eq ∈ E∨ ([ ∃ C_k: Attest_k (E') ∈ E∧ Eq ∈ E'; ∧ Trust_i, k∈σ_i^pk ])} and new^pk_Attest := {Eq|Eq ∈ E∧ Trust_i, j∈σ_i^pk}.

In the session case, the knowledge state is reinitialized and a new entry is added in the variable states:

S_E (Session, σ) = σ [∀ i:σ_i / (upd^v, {Trust_i, j |∃ C_j: Trust_i, j∈ A})],

where the new variable state upd^v is such that σ_i^v (X) := 𝖺𝗉𝗉𝖾𝗇𝖽 (σ_i^v (X), ⊥) for all variables X ∈ Var, and σ_i^v (c) := 𝖺𝗉𝗉𝖾𝗇𝖽 (σ_i^v (c), 𝗅𝖺𝗌𝗍 (σ_i^v (c))) for all constants c ∈ Const. The session event is not local to a component; all component states are updated. As a result, we associate to each global state σ a unique number, noted 𝗌 (σ), which indicates the number of sessions. In the initial state, 𝗌 (σ) := 1, and at each Session event, 𝗌 (σ) is incremented.

In the reset case, all values are dropped and the initial state is restored: S_E (Reset, σ) = Init^A.

This ends the definition of the semantics of traces of events. The semantics S (A) of an architecture A is defined as the set of states reachable by compatible traces.

§.§ Extension of the privacy logic

The privacy logic is enhanced to express access to n values of a given variable. The formula Has_i (X^(n)) represents n ≥ 1 accesses by C_i to some variable X.

[φ::=Has_i (X^(n))|Has_i (c)| Has^none_i (X)| Has^none_i (c)| K_i (Eq)| φ_1 ∧φ_2; Eq::= 5lPred (T_1, …, T_m) ]

Several values of the same variables from different sessions can provide information about other variables, which is expressed through the dependence relation. The semantics S (φ) of a property φ∈ℒ_P remains defined as the set of architectures where φ is satisfied.
The fact that φ is satisfied by a (consistent) architecture A is defined as follows.

* A satisfies Has_i (X^(n)) if there is a reachable state in which X is fully defined (at least) n ≥ 1 times.
* A satisfies Has_i (c) if there is a reachable state in which c is fully defined.
* A satisfies Has^none_i (X) (resp. Has^none_i (c)) if no compatible trace leads to a state in which C_i assigns a value to X (resp. c).
* A satisfies K_i (Eq) if for all reachable states, there exists a state in the same session in which C_i can derive Eq.
* A satisfies φ_1 ∧φ_2 if A satisfies φ_1 and A satisfies φ_2.

A set of deductive rules for this privacy logic is given in Figure <ref>. One can show that this axiomatics is sound and complete with respect to the semantics above. The soundness theorem states that for all A, if A ⊢φ, then A ∈ S (φ). Completeness means that for all A, if A ∈ S (φ) then A ⊢φ.

Due to the length of the proofs and the lack of space, we only give sketches of these proofs. Soundness is proved by induction on the derivation tree. For each theorem A ⊢φ, one can find traces satisfying the claimed property, or show that all traces satisfy the claimed property (depending on the kind of property). Completeness is shown by induction on the property φ. For each property belonging to the semantics, one can exhibit a tree that derives it from the architecture.

A trace is said to be a covering trace if it contains an event corresponding to each primitive specified in an architecture A (except trust relations) and if for each primitive it contains as many events as the multiplicity ^(n) of the primitive. As a first step to prove soundness, it is shown that for every consistent architecture A, there exists a consistent trace θ∈ T (A) that covers A.

Then the soundness is shown by induction on the depth of the tree A ⊢φ.

* Let us assume that A ⊢ Has_i (X^(n)), and that the derivation tree is of depth 1. By definition of 𝒟, such a proof is obtained by application of (), () or ().
In each case, it is shown (thanks to the existence of covering traces) that an appropriate trace can be found in the semantics of A, hence A ∈ S (Has_i (X^(n))). The case of A ⊢ Has_i (c) is very similar.
* Let us assume that A ⊢ K_i (Eq), and that the derivation tree is of depth 1. By definition of 𝒟, such a proof is obtained by application of (), (), (), () or (). In each case, starting from a state σ' ∈ S_i (A) such that 𝗌 (σ') ≥ n, it is first shown that there exists a covering trace θ≥θ' that extends θ' and that contains n corresponding events Compute_G (X = T) ∈θ in n distinct sessions (for the case, and other events for the other rules). Then by the properties of the deductive algorithmic knowledge, it is shown that the semantics of the property A ∈ S (K_i (X = T)) holds.
* Let us assume that A ⊢ Has_i (X^(n)), and that the derivation tree is of depth strictly greater than 1. By definition of 𝒟, such a proof is obtained by application of () or (). In the first case, by the induction hypothesis and the semantics of properties, there exists a reachable state σ∈ S (A) and n indices i_1, …, i_n such that σ_i^v (X) [i_l] is fully defined for all l ∈ [1, n]. This gives, a fortiori, A ∈ S (Has_i (X^(m))) for all m such that 1 ≤ m ≤ n. In the second case, we have that (Y, {X_1^(n_1), …, X_m^(n_m), c_1, …, c_q}) ∈ Dep_i, that ∀ l ∈ [1, m]: A ⊢ Has_i (X_l^(n_l)) and ∀ l ∈ [1, q]: A ⊢ Has_i (c_l). The proof shows the existence of a covering trace that contains an event Compute_G (Y = T) (where i ∈ G), allowing us to conclude that A ∈ S (Has_i (Y^(1))). Again, the corresponding cases for constants are very similar.
* A derivation for Has^none is obtained by application of (). The proof assumes, towards a contradiction, that A ∉S (Has^none_i (X)). It is shown, by the architecture semantics, that there exists a compatible trace that enables us to derive A ⊢ Has_i (X^(1)). However, since () was applied, we have A ⊬ Has_i (X^(1)), hence a contradiction.
* The last case (the conjunction ∧) is fairly straightforward.

The completeness is proved by induction on the definition of φ.

* Let us assume that A ∈ S (Has_i (X^(n))). By the architecture semantics and the semantics of traces, it is shown that the corresponding traces either contain events where X is computed, received or measured, or that some dependence relation on X exists. In the first case, we have A ⊢ Has_i (X^(n)) by applying (respectively) (), (), or () (after an eventual application of ()). In the last case, the proof shows how to exhibit a derivation tree to obtain A ⊢ Has_i (X^(n)) (the () rule is used).
* Let us assume that A ∈ S (Has^none_i (X)). By the semantics of properties, this means that in all reachable states, X does not receive any value. The proof shows that A ⊬ Has_i (X^(1)), otherwise A ∈ S (Has^none_i (X)) would be contradicted. So as a conclusion, A ⊢ Has^none_i (X) by applying ().
* The constant cases A ∈ S (Has_i (c)) and A ∈ S (Has^none_i (c)) are similar to the variable cases.
* Let us assume that A ∈ S (K_i (Eq)). By the semantics of properties, this means that for all reachable states, there exists a later state in the same session where the knowledge state enables Eq to be derived. By the semantics of architectures, we can exhibit a compatible trace that reaches a state where Eq can be derived. By the semantics of compatible traces, the proof shows, by reasoning on the events of the traces, that A ⊢ K_i (Eq) by applying either (), (), (), () or ().
* Finally the conjunctive case is straightforward.

§ EXTENSION OF THE MATCH-ON-CARD TO THE IDENTIFICATION PARADIGM

We now show how the extended framework can be used to reason about the privacy properties of a biometric system where some information leaks after several sessions of the same protocol.

The biometric system introduced in <cit.> aims at extending the MOC technology (cf. Section <ref>) to the identification paradigm.
A quantized version – corresponding to short binary representations of the templates – of the database is stored inside a secure module, playing the role of the card in the MOC case. From each biometric reference template, a quantization is computed, using typically a secure sketch scheme <cit.>. The reference database is encrypted and stored outside the secure module, whereas the quantizations of the templates are stored inside.

The verification step is processed as follows. Suppose one wants to identify himself in the system. A terminal captures the fresh biometrics, extracts a template, computes its quantization 𝚚𝚜 and sends them to the secure module. Then, the module proceeds to a comparison between the fresh quantization and all enrolled quantizations 𝚚𝚛. The c nearest quantizations, for some parameter c of the system, are the c potential candidates for the identification. Then, the module requests the c corresponding (encrypted) templates from the database (by using the list of indices 𝚒𝚗𝚍 of those c nearest quantized versions 𝚚𝚛 of the enrolled templates). This gives the module access to the set 𝚜𝚎𝚋𝚛 of the c encrypted templates. The module decrypts them, and compares them with the fresh template 𝚋𝚜. The module finally sends its response to the terminal: 1 if one of the enrolled templates is close enough to the fresh template, 0 otherwise. Figure <ref> gives a graphical representation of the resulting architecture.

n denotes the size of the database (i.e. the number of enrolled users), q the size of the quantizations, and c the number of indices asked by the card. The ranges are Range (br, 𝚎𝚋𝚛, 𝚚𝚛) = n, Range (𝚛𝚍, thr, 𝚋𝚜, 𝚚𝚜, 𝚍𝚎𝚌) = 1, and Range (𝚒𝚗𝚍, 𝚜𝚎𝚋𝚛, 𝚜𝚋𝚛) = c.
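The verification flow just described can be sketched end to end. Everything below is a toy stand-in: the quantization keeps the first q bits of a deterministic toy template, the "encryption" is a XOR with a fixed key, and μ is a Hamming-distance threshold; a real system would use a secure sketch and a proper cipher:

```python
import random

q, n, c, thr = 8, 5, 2, 2             # quantization bits, users, candidates
                                      # asked, matching threshold (toy sizes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def quant(t):                         # toy quantization: keep the first q bits
    return t[:q]

key = [random.randrange(2) for _ in range(32)]
def enc(t):                           # toy XOR "encryption", its own inverse
    return [x ^ k for x, k in zip(t, key)]
dec = enc

# Enrolment: deterministic toy templates; encrypted templates ebr are
# stored outside the module, quantizations qr inside it.
br = [[(i >> (j % 4)) & 1 for j in range(32)] for i in range(n)]
ebr = [enc(t) for t in br]
qr = [quant(t) for t in br]

# Verification for the user enrolled at index 3.
bs = list(br[3])
qs = quant(bs)
ind = sorted(range(n), key=lambda i: hamming(qs, qr[i]))[:c]   # c nearest
sebr = [ebr[i] for i in ind]          # the module asks for these ciphertexts
sbr = [dec(t) for t in sebr]          # decrypted candidates
decision = int(any(hamming(t, bs) <= thr for t in sbr))        # the output
```

The point of the indirection through 𝚚𝚜, 𝚚𝚛 and 𝚒𝚗𝚍 is that only c ciphertexts are ever pulled into the module per session; the next subsections show that this very list of indices is also what leaks information.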
The set Fun of functions contains the extraction procedure Extract, the encryption and decryption procedures Enc and Dec, the (non-invertible) quantization Quant of the biometric templates, the comparison of the quantizations QComp, which takes as inputs two sets of quantizations and the parameter c, the selection of the encrypted templates EGet, and finally the matching μ, which takes as arguments two biometric templates and the threshold thr.

The biometric reference templates are enrolled by the issuer 𝖨 (Has_𝖨 (br)). A verification process is initiated by the terminal 𝖳 receiving as input a raw biometric data 𝚛𝚍 from the user 𝖴. 𝖳 extracts the fresh biometric template 𝚋𝚜 from 𝚛𝚍 using the function Extract ∈ Fun. The architecture then contains, as other biometric systems, Receive_𝖳, 𝖴 ({}, {𝚛𝚍}) and Compute_𝖳 (𝚋𝚜 = Extract (𝚛𝚍)) and the Dep_𝖳 relation is such that (𝚋𝚜, {𝚛𝚍}) ∈ Dep_𝖳. The user receives the final decision 𝚍𝚎𝚌 from the terminal: Receive_𝖴, 𝖳 ({}, {𝚍𝚎𝚌}). To sum up, the architecture is described as follows in the framework of Section <ref>:

A^𝗆𝗂 := { Has_𝖨 (br), Has_𝖴 (𝚛𝚍), Has_𝖬 (c), Has_𝖬 (thr),
Compute_𝖨 (𝚎𝚋𝚛 = Enc (br)), Compute_𝖨 (𝚚𝚛 = Quant (br)),
Compute_𝖳 (𝚋𝚜 = Extract (𝚛𝚍)), Compute_𝖳 (𝚜𝚎𝚋𝚛 = EGet (𝚎𝚋𝚛, 𝚒𝚗𝚍)),
Compute_𝖳 (𝚚𝚜 = Quant (𝚋𝚜)), Compute_𝖬 (𝚒𝚗𝚍 = QComp (𝚚𝚜, 𝚚𝚛, c)),
Compute_𝖬 (𝚜𝚋𝚛 = Dec (𝚜𝚎𝚋𝚛)), Compute_𝖬 (𝚍𝚎𝚌 = μ (𝚜𝚋𝚛, 𝚋𝚜, thr)),
Receive_𝖲, 𝖨 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (br))}, {𝚎𝚋𝚛}), Receive_𝖳, 𝖴 ({}, {𝚛𝚍}),
Receive_𝖳, 𝖲 ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (br))}, {𝚎𝚋𝚛}), Receive_𝖬, 𝖳 ({}, {𝚚𝚜}),
Receive_𝖬, 𝖨 ({Attest_𝖨 (𝚚𝚛 = Quant (br))}, {𝚚𝚛}), Receive_𝖳, 𝖬 ({}, {𝚒𝚗𝚍}),
Receive_𝖬, 𝖳 ({}, {𝚜𝚎𝚋𝚛, 𝚋𝚜}), Receive_𝖳, 𝖬 ({}, {𝚍𝚎𝚌}),
Trust_𝖳, 𝖨, Trust_𝖬, 𝖨, Trust_𝖳, 𝖬, Verify_𝖳^ (Attest_𝖨 (𝚎𝚋𝚛 = Enc (br))),
Verify_𝖳^ ({Attest_𝖬 (𝚍𝚎𝚌 = μ (𝚜𝚋𝚛, 𝚋𝚜, thr))}),
Verify_𝖬^ (Attest_𝖨 (𝚚𝚛 = Quant (br))), Verify_𝖳^ ({Attest_𝖬 (𝚜𝚋𝚛 = Dec (𝚎𝚋𝚛))}) }

The issuer encrypts the templates and computes the quantizations, which is expressed by the dependencies: Dep_𝖨^𝗆𝗂 := {(𝚎𝚋𝚛, {br}), (𝚚𝚛, {br})}.
The terminal and module computations are reflected in the dependencies as well: Dep_𝖳^𝗆𝗂 := {(𝚋𝚜, {𝚛𝚍}), (𝚚𝚜, {𝚋𝚜}), (𝚜𝚎𝚋𝚛, {𝚋𝚜, 𝚒𝚗𝚍})}. The dependency relation of the module reflects its ability to decrypt the templates: Dep_𝖬^𝗆𝗂 := {(𝚒𝚗𝚍, {𝚚𝚜, 𝚚𝚛, c}), (𝚜𝚋𝚛, {𝚜𝚎𝚋𝚛}), (𝚍𝚎𝚌, {𝚜𝚋𝚛, 𝚋𝚜, thr}), (br, {𝚎𝚋𝚛})}. The absence of such a relation in the other dependencies prevents the corresponding components from getting access to the plain references, even if they get access to the ciphertexts.

§.§ Learning from the selected quantizations

Let us now discuss the following point: the formalism of Section <ref> is insufficient to capture the leakage of the sensitive biometric data stored inside the module. In A^𝗆𝗂, we would like the terminal to get no access to the quantizations: A^𝗆𝗂∈ Has^none_𝖳 (𝚚𝚛). It is indeed possible to derive A^𝗆𝗂⊢ Has^none_𝖳 (𝚚𝚛), thanks to the () rule. According to the notations of <cit.>, where Has_i (X) stands for Has_i (X^(1)) in this paper, we have: ∄ X: Dep_𝖳 (𝚚𝚛, X) ∈ A^𝗆𝗂 Has_𝖳 (𝚚𝚛) ∉ A^𝗆𝗂 ∄ j, S: Receive_𝖳, j (S, {𝚚𝚛}) ∈ A^𝗆𝗂 ∄ T: Compute_𝖳 (𝚚𝚛 = T) ∈ A^𝗆𝗂 A ⊬ Has_𝖳 (𝚚𝚛) A ⊢ Has^none_𝖳 (𝚚𝚛) This corresponds to the intuition that the quantizations are protected, since they are stored in a secure hardware element.

However, an attack (described in <cit.>) shows that, in practice, the quantizations can be learned if a sufficient number of queries to the module is allowed. The attack roughly proceeds as follows (we drop the masks for the sake of clarity). The attacker maintains an n×q table (say T) of counters for the bits to be guessed. All entries are initialized to 0. The attacker then repeatedly picks a random q-bit vector Q and sends it to the module. The attacker observes the set of indices 𝚒𝚗𝚍⊆ [1, n] corresponding to the encrypted templates asked by the module. It updates its table T as follows, according to its query Q and the response 𝚒𝚗𝚍: for each i ∈ 𝚒𝚗𝚍 and j ∈ [1, q], it decrements the entry T[i][j] if Q[j] = 0, and increments it if Q[j] = 1.
At the end of the attack, the n quantizations are guessed from the signs of the counters.

The number of queries made to the module is the crucial point in the attack above (and, more generally, in other black-box attacks against biometric systems <cit.>). Our extended model makes it possible to introduce a bound on the number of actions allowed to be performed. We now use this model to integrate such a bound into the formal architecture description. Let A^𝗆𝗂-𝖾 (n) be the following architecture, for some n ≥ 1:

A^𝗆𝗂-𝖾(n) := { Has_𝖨 (br), Has_𝖴^(n) (𝚛𝚍), Has_𝖬 (c), Has_𝖬 (thr), Compute_𝖨^(n) (𝚎𝚋𝚛 = Enc (br)), Compute_𝖨^(n) (𝚚𝚛 = Quant (br)), Compute_𝖳^(n) (𝚋𝚜 = Extract (𝚛𝚍)), Compute_𝖳^(n) (𝚜𝚎𝚋𝚛 = EGet (𝚎𝚋𝚛, 𝚒𝚗𝚍)), Compute_𝖳^(n) (𝚚𝚜 = Quant (𝚋𝚜)), Compute_𝖬^(n) (𝚒𝚗𝚍 = QComp (𝚚𝚜, 𝚚𝚛, c)), Compute_𝖬^(n) (𝚜𝚋𝚛 = Dec (𝚜𝚎𝚋𝚛)), Compute_𝖬^(n) (𝚍𝚎𝚌 = μ (𝚜𝚋𝚛, 𝚋𝚜, thr)), Receive_𝖲, 𝖨^(n) ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (br))}, {𝚎𝚋𝚛}), Receive_𝖳, 𝖴^(n) ({}, {𝚛𝚍}), Receive_𝖳, 𝖲^(n) ({Attest_𝖨 (𝚎𝚋𝚛 = Enc (br))}, {𝚎𝚋𝚛}), Receive_𝖬, 𝖳^(n) ({}, {𝚚𝚜}), Receive_𝖬, 𝖨^(n) ({Attest_𝖨 (𝚚𝚛 = Quant (br))}, {𝚚𝚛}), Receive_𝖳, 𝖬^(n) ({}, {𝚒𝚗𝚍}), Receive_𝖬, 𝖳^(n) ({}, {𝚜𝚎𝚋𝚛, 𝚋𝚜}), Receive_𝖳, 𝖬^(n) ({}, {𝚍𝚎𝚌}), Trust_𝖳, 𝖨, Trust_𝖬, 𝖨, Trust_𝖳, 𝖬, Verify_𝖳^(n) (Attest_𝖨 (𝚎𝚋𝚛 = Enc (br))), Verify_𝖳^(n) ({Attest_𝖬 (𝚍𝚎𝚌 = μ (𝚜𝚋𝚛, 𝚋𝚜, thr))}), Verify_𝖬^(n) (Attest_𝖨 (𝚚𝚛 = Quant (br))), Verify_𝖳^(n) ({Attest_𝖬 (𝚜𝚋𝚛 = Dec (𝚎𝚋𝚛))}) }

In addition to the dependencies of A^𝗆𝗂, the dependence relations indicate that the leakage is conditioned by a specific link mapping between the outsourced ciphertexts and the stored quantizations: Dep_𝖳^𝗆𝗂-𝖾 (𝚚𝚛, {𝚒𝚗𝚍^(n·q), 𝚚𝚜^(n·q)}). Furthermore, the module may learn the entire database in a number of queries depending on the size of the database and the number of indices asked by the module: Dep_𝖬^𝗆𝗂-𝖾 (𝚎𝚋𝚛, {𝚜𝚎𝚋𝚛^(⌈n / c⌉)}).

§.§ Strengthened variants of the architecture

Now, based on some counter-measures to the attacks indicated in <cit.>, we express several variants of the architecture A^𝗆𝗂-𝖾.
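The counter-based learning attack described above can be simulated directly. The sketch below assumes a module that answers each query with the indices of the c stored quantizations nearest in Hamming distance, which matches the identification protocol but is otherwise a simplification:

```python
import random

def learn_quantizations(qr, c, num_queries, rng):
    """Black-box attack: recover the n stored q-bit quantizations qr by
    observing only the index sets the module returns for random queries."""
    n, q = len(qr), len(qr[0])

    def module_indices(query):
        # The module's response: indices of the c nearest quantizations.
        by_dist = sorted(range(n),
                         key=lambda i: sum(a != b for a, b in zip(qr[i], query)))
        return by_dist[:c]

    counters = [[0] * q for _ in range(n)]       # the n x q table T
    for _ in range(num_queries):
        query = [rng.randint(0, 1) for _ in range(q)]
        for i in module_indices(query):          # update only the returned rows
            for j in range(q):
                counters[i][j] += 1 if query[j] == 1 else -1
    # Guess each bit from the sign of its counter.
    return [[1 if counters[i][j] > 0 else 0 for j in range(q)] for i in range(n)]
```

Queries whose nearest neighbour is template i tend to agree with qr[i] bit-wise, so the counters drift toward the stored bits; this is why bounding the number of queries (or of observed 𝚒𝚗𝚍 values) is an effective counter-measure.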
For each variant, the deductive rules 𝒟 for the property language ℒ_P are used to show that, under some conditions on the parameters, the quantizations are protected.

§.§.§ Variant 1

As a first counter-measure, the module could ask for the entire database at each invocation. This is rather inefficient and, in some sense, runs against the initial motivation of the design. However, it can be described within the language ℒ_A and, in practice, can be manageable for small databases. This architecture, denoted A^𝗆𝗂-𝖾1, is given by A^𝗆𝗂-𝖾 (n) for some n ≥ 1, except that Dep_𝖳^𝗆𝗂-𝖾1 := Dep_𝖳^𝗆𝗂. It is now possible to prove that the quantizations are protected, even in the presence of several executions of the protocol. Since the relation Dep_𝖳 no longer contains a dependence leading to 𝚚𝚛, an application of the () rule becomes possible and gives the expected property. ∄ X: Dep_𝖳 (𝚚𝚛, X) ∈ A^𝗆𝗂-𝖾1 Has_𝖳^(n) (𝚚𝚛) ∉ A^𝗆𝗂-𝖾1 ∄ j: Receive_𝖳, j^(n) (S, {𝚚𝚛}) ∈ A^𝗆𝗂-𝖾1 ∄ T: Compute_𝖳^(n) (𝚚𝚛 = T) ∈ A^𝗆𝗂-𝖾1 ∀ n: A ⊬ Has_𝖳 (𝚚𝚛^(n)) A ⊢ Has^none_𝖳 (𝚚𝚛)

§.§.§ Variant 2

In the previous variant, the effect of the counter-measure is the withdrawal of the dependence relation. We now consider architectures where such a dependency is still present, but where counter-measures are used to prevent a critical bound on the number of queries from being reached.

A first measure is to limit the number of attempts the terminal can make. The module can detect it and refuse to respond. This architecture, denoted A^𝗆𝗂-𝖾2, is given by A^𝗆𝗂-𝖾 (b), for some b≪n·q. As a result, the Has^none_i (𝚚𝚛) property can be derived. In particular, one must show that A^𝗆𝗂-𝖾2⊬ Has_𝖳 (𝚒𝚗𝚍^(n·q)), in order to prevent the dependence rule from being applied. ∄ S: Receive_𝖳, 𝖬^(b) (S, {𝚒𝚗𝚍}) ∈ A^𝗆𝗂-𝖾2 Has_𝖳^(b) (𝚒𝚗𝚍) ∈ A^𝗆𝗂-𝖾2 b < n·q ∄ T: Compute_𝖳^(b) (𝚒𝚗𝚍 = T) ∈ A^𝗆𝗂-𝖾2 A^𝗆𝗂-𝖾2⊬ Has_𝖳 (𝚒𝚗𝚍^(n·q)) An application of the () rule then allows us to conclude.
Dep_𝖳^𝗆𝗂-𝖾2 (𝚚𝚛, {𝚒𝚗𝚍^(n·q)}) ∈ A^𝗆𝗂-𝖾2 Has_𝖳^(b) (𝚚𝚛) ∉ A^𝗆𝗂-𝖾2 ∄ j: Receive_𝖳, j^(b) (S, {𝚚𝚛}) ∈ A^𝗆𝗂-𝖾2 A^𝗆𝗂-𝖾2⊬ Has_𝖳 (𝚒𝚗𝚍^(n·q)) ∄ T: Compute_𝖳^(b) (𝚚𝚛 = T) ∈ A^𝗆𝗂-𝖾2 A^𝗆𝗂-𝖾2⊬ Has_𝖳 (𝚚𝚛^(1)) A^𝗆𝗂-𝖾2⊢ Has^none_𝖳 (𝚚𝚛)

§.§.§ Variant 3

In the previous variant, the terminal cannot accumulate enough information since it cannot query the module enough times to derive useful knowledge. We now describe a variant where the terminal has no bound on the number of times it may query the module, but where the system is regularly reinitialised, so that the accumulated information becomes useless.

The leakage at system runtime depends on some association between the quantizations 𝚚𝚛 and the encrypted database 𝚎𝚋𝚛; namely, the association π that maps the quantization 𝚚𝚛[i] = Quant (br[π (i)]) to the encrypted template from which it has been computed, 𝚎𝚋𝚛[π (i)] = Enc (br[π (i)]). Once this mapping is changed, the accumulated information is cancelled. For instance, the database can be randomly permuted after b queries to the secure module.

Formally, this is captured by adding a Reset primitive to the architecture. Let A^𝗆𝗂-𝖾3 be the architecture defined as A^𝗆𝗂-𝖾3 := A^𝗆𝗂-𝖾2∪{Reset}. The semantics of the Reset events ensures that no more than b values of 𝚒𝚗𝚍 will be gathered by the terminal for a fixed mapping. The proof that A^𝗆𝗂-𝖾3⊢ Has^none_𝖳 (𝚚𝚛) is analogous to the proof that A^𝗆𝗂-𝖾2⊢ Has^none_𝖳 (𝚚𝚛).

§ RELATED WORK

Generally speaking, while the privacy of biometric data has attracted a lot of attention in the news (for instance, with the introduction of a fingerprint sensor in the new iPhone) and among lawyers and policy makers[For example, with a proposal adopted by the French Senate in May 2014 to introduce stronger requirements for the use of biometrics.], it has not triggered such a strong interest in the computer science community so far. Most studies in this area are done on a case-by-case basis and at a lower level than the architectures described here.
For instance, <cit.> proposes a security model for biometric-based authentication taking into account privacy properties – including impersonation resilience, identity privacy and transaction anonymity – and applies it to biometric authentication. The underlying proofs rely on cryptographic techniques related to the ElGamal public key encryption scheme. <cit.> develop formal models from an information-theoretic perspective, relying on specific representations of biometric templates close to error-correcting codes.

As far as formal approaches to privacy are concerned, two main categories can be identified: the qualitative approach and the quantitative approach. Most proposals in the first category rely on a language which can be used to define systems and to express privacy properties. For example, process calculi such as the applied pi-calculus <cit.> have been applied to define privacy protocols <cit.>. Other studies <cit.> involve dedicated privacy languages. The main departure of the approach advocated in this paper from this line of work is that we reason at the level of architectures, providing ways to express properties without entering into the details of specific protocols. Proposals in the second category rely on privacy metrics such as k-anonymity, l-diversity, or ϵ-differential privacy <cit.>, which can be seen as ways to measure the level of privacy provided by an algorithm. Methods <cit.> have been proposed to design algorithms achieving given privacy metrics or to verify that a system achieves a given level of privacy. These contributions on privacy metrics are complementary to the work described in this paper. We follow a qualitative (or logical) approach here, proving that a given privacy property is met (or not) by an architecture. As suggested in the next section, an avenue for further research would be to cope with quantitative reasoning as well, using inference systems to derive properties expressed in terms of privacy metrics.
Several authors <cit.> have already pointed out the complexity of “privacy engineering” as well as the “richness of the data space” <cit.>, calling for the development of more general and systematic methodologies for privacy by design. <cit.> point out the complexity of the implementation of privacy and the large number of options that designers have to face. To address this issue and favour the adoption of these tools, <cit.> proposes a number of guidelines for the design of compilers for secure computation and zero-knowledge proofs, whereas <cit.> provides a language and a compiler to perform computations on private data by synthesising zero-knowledge protocols. None of these proposals addresses the architectural level or makes it possible to get a global view of a system and to reason about its underlying trust assumptions.

§ CONCLUSION

This work is the result of a collaboration between academics, industry and lawyers to show the applicability of the privacy by design approach to biometric systems and the benefit of formal methods to this end. Indeed, even if privacy by design becomes a legal obligation in the European Union <cit.>, its application to real systems is far from obvious. We have presented in the same formal framework a variety of architectural options for privacy preserving biometric systems. We have also introduced an extension of this formal framework in order to capture the leakage due to the system runtime.

One of the main advantages of the approach is to provide formal justifications for the architectural choices and a rigorous basis for their comparison. Table <ref> is a recap chart of the architectures reviewed in the first part of this paper. One of the most interesting pieces of information is the set of trust assumptions highlighted by the model.
The first line shows that A_𝖾𝖽 is the architecture in which the strongest trust is put in the terminal, which does not have to trust any other component apart from the issuer and is able to get access to . Architecture A_𝗁𝗌𝗆 is a variant of A_𝖾𝖽; it places less trust in the terminal, which has to trust the hardware security module to perform the matching. A_𝗁𝗈𝗆 is the architecture in which the terminal is the least trusted: it has to trust the issuer, the hardware security module and the server for all sensitive operations, and its role is limited to the collection of the fresh biometric trait and the computation of the fresh template. Architecture A_𝗆𝗈𝖼 is similar in this respect, but all sensitive operations are gathered into a single component, namely the smart card. It should be clear that no solution is inherently better than the others and, depending on the context of deployment and the technology used, some trust assumptions may be more reasonable than others. In any case, it is of prime importance to understand the consequences of a particular choice in terms of trust.

A benefit of the formal approach followed in this paper is that it can provide the foundations for a systematic approach to privacy by design. A proof-of-concept implementation of a system to support designers in their task has been proposed in <cit.>. In this system, the user can introduce his privacy and integrity requirements (as well as any requirements imposed by the environment, such as the location of a given operation on a designated component) and choose different options for the distribution of the operations and the trust assumptions. When an architecture has been built, the system can try to verify the required properties with or without the help of the designer.

As stated above, we focused on the architectural level. As a result, we do not cover the full development cycle.
Preliminary work has been done to address the mapping from the architecture level to the protocol level, to ensure that a given implementation, expressed as an applied pi-calculus protocol, is consistent with an architecture <cit.>. As far as the formal approach is concerned, it would also be interesting to study how it could be used in the context of future privacy certification schemes. This would be especially interesting in the context of the European General Data Protection Regulation <cit.>, which promotes not only privacy by design but also privacy seals.
We consider a Persistent Intelligence, Surveillance and Reconnaissance (PISR) routing problem, which involves collecting data from a set of specified task locations and delivering that data to a control station. Each task is assigned a refresh rate based on its priority, where higher-priority tasks require higher refresh rates. The UAV team's objective is to minimize the maximum of the delivery times of all the tasks' data to the control station, while simultaneously satisfying each task's revisit period constraint. The centralized path planning problem for this PISR routing problem is formulated using mixed integer linear programming and solved using a branch-and-cut algorithm. Heuristics are presented to find sub-optimal feasible solutions that require much less computation time. The algorithms are tested on several instances, and their performance is compared with respect to the optimal cost and computation time.

§ INTRODUCTION

Unmanned Aerial Vehicles (UAVs) are a natural choice for deployment in many military Intelligence, Surveillance and Reconnaissance (ISR) missions <cit.>. A typical ISR scenario involves monitoring a set of task locations for an indefinitely long period of time. These task locations can be buildings, road networks bordering a military base, etc. Since these task locations are spatially dispersed, UAVs can be deployed to visit them regularly and ferry information such as images, videos, and sensor data to a control station. This data needs to be delivered to the control station at regular intervals. The importance level of each of the task locations may vary from minimal to highly critical.
While scheduling these monitoring missions, it is imperative to schedule the UAVs to visit important tasks more frequently than those of lesser significance.

We consider a persistent monitoring scenario, where a set of task locations needs to be visited persistently by multiple UAVs. We assume all the available UAVs are homogeneous; therefore, there is no difference between visits by different UAVs to the same task. We are interested in two metrics, viz., data latency or delivery time (to the control station) and revisit rate or revisit period. We define the data delivery time (or latency) as the time elapsed from the collection of data at a task to the time the data is delivered to the control station. The revisit period is the time between two successive visits to a task location; a task with higher priority needs to have a smaller revisit period than a low-priority one.

Prior work: Several variants of the persistent routing problem have been addressed in <cit.>. Strategies to perform patrolling tasks by multiple agents on a network defined as a graph are presented in <cit.>. In <cit.>, persistent surveillance of an area decomposed into cells is considered; <cit.> attempts to minimize the maximum time since the last visit over all the cells, whereas <cit.> balances the frequency of visits to each cell. Persistent surveillance problems with spatially distributed tasks are posed as vehicle routing problems with time windows in <cit.>. Patrolling strategies that minimize the refresh time of the viewpoints are presented in <cit.>. Approximation algorithms that minimize the maximum weighted latency (time between successive visits) in discrete environments are presented in <cit.>. In <cit.>, the authors attempt to minimize the time between two consecutive visits to partitioned regions while satisfying the temporal logic constraints of each agent.
A persistent routing scenario where some regions need more visitation than others is addressed in <cit.>, and a policy to achieve that is proposed. In this article, we consider persistent routing of spatially distributed tasks, where the data collected at the task locations needs to be delivered to a control station (also referred to as the depot). In the existing literature on persistent routing, the concept of a control station is not considered and delivery time is not addressed.

We model this persistent routing problem as a multiple traveling salesman problem with revisit period constraints, and formulate it as a mixed integer linear programming (MILP) problem. The contributions of this article are the following: (i) We present a novel formulation addressing two important metrics for ISR missions, delivery time and revisit period, and model it as a multiple vehicle path planning problem with cycle length constraints. (ii) We present two different MILP models to find optimal solutions to the corresponding path planning problem. The two MILP models contain novel constraints to address the cycle length limits, which could be applied to any routing problem involving constraints on cycle length. (iii) A heuristic via assignment-tree search is presented that produces good sub-optimal solutions and could be easily generalized to address different cost functions and/or constraints. (iv) We test the algorithms on several random instances, and computational results are presented.

The PISR routing problem is closely related to the distance-constrained and fuel-constrained vehicle routing problems <cit.>. It differs from these as follows: rather than the constraints being dictated by the UAV, the constraints on a tour are dictated by the tasks a UAV visits, which is a harder constraint to deal with.
Also, the cost function chosen here to capture the latency requirements is different from the cost function of distance-constrained VRPs.

§ PRELIMINARIES AND ASSUMPTIONS

Here, we aim to optimize the total cost of the persistent ISR mission by centralized planning. We solve the problem before the mission begins, assigning the tasks each UAV/agent is to perform in a pre-specified sequence. Ideally, in a persistent routing scenario, the objective is to optimize the chosen metrics over an infinite time horizon. To plan the mission and schedule the tasks to be serviced by each UAV, we would need to generate an infinite sequence of visits for each UAV, which is infeasible. To overcome this, one may periodically solve a receding horizon problem, generate the task sequences, and schedule the UAVs as proposed in <cit.>. However, reliable UAV-to-UAV communication links would be required, as well as the precise locations of all the UAVs each time the planner is executed. A critical assumption we make for a priori mission planning is the following: we restrict each task to be serviced by the same UAV throughout the mission. Each UAV performs the tasks assigned, returns to the control station, and repeats exactly the same sequence throughout the mission. For example, consider a mission with two UAVs and five tasks {t_1, t_2, t_3, t_4, t_5}, and let t_d represent the control station (or depot). A sample assignment for two vehicles V_1 and V_2 is as follows: V_1: t_d → t_1 → t_2 and V_2: t_d → t_3 → t_4 → t_5. Here, if the time of travel for the sequence t_d → t_1 → t_2 → t_d is R_1 seconds, then tasks t_1 and t_2 are serviced once every R_1 seconds. Some of the advantages of this class of solutions are the following: we do not need communication between UAVs to update the scheduled tasks at each planning interval.
Whenever a UAV breaks down or needs refueling, the operator knows precisely which tasks are not being serviced, so that a contingency plan can be scheduled. The revisit period for each task is exactly known from the tasks and the sequence assigned to the UAVs. Also, under this restriction, the data collected from a task is delivered to the control station before the task's next visit. This is not guaranteed in the unrestricted case, where a task could be serviced by two different UAVs at successive visits. Along with the above advantages, there is a shortcoming: the cost one would optimize with this restriction could be different from the cost of the unrestricted case. However, due to its advantages in planning and implementation, we pursue the restricted case, where each task is assigned to one of the UAVs and is serviced by the same UAV throughout the mission.

There are two important metrics that need to be addressed in PISR missions. The first is the data delivery time or data latency (D_i) for each task t_i; D_i is the elapsed time from when a task is completed until the vehicle returns to the depot. This is not the direct travel time between the task and the depot, as the vehicle may service other tasks before returning to the depot. This is illustrated in Fig. <ref>; the delivery times for tasks t_3, t_4 and t_5 are shown as D_3, D_4 and D_5, respectively. The other metric that we consider is the revisit period of each task. It is the time between two successive visits of a task by a UAV. Since each UAV visits the same set of tasks and repeats, the revisit period is the same for every successive visit. The revisit periods R_1 and R_2 are illustrated in Fig. <ref> for the tasks t_1 and t_2. We aim to solve the PISR routing problem where a maximum limit on the revisit period, R_i, is specified for each task t_i.
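Both metrics are determined by the assigned cycle alone, so they can be computed directly. A minimal sketch, assuming a travel-time matrix c with index 0 for the depot (the matrix values in the example are hypothetical):

```python
def cycle_metrics(seq, c):
    """For one vehicle repeating the cycle depot -> seq -> depot:
    return (revisit_period, {task: delivery_time}).
    c[i][j] is the travel time between nodes i and j; node 0 is the depot."""
    path = [0] + seq + [0]
    legs = [c[a][b] for a, b in zip(path, path[1:])]
    revisit = sum(legs)          # the same cycle repeats, so one period for all
    # Delivery time of a task = travel time remaining after completing it.
    delivery = {t: sum(legs[k + 1:]) for k, t in enumerate(seq)}
    return revisit, delivery

# Example: depot 0, tasks 1 and 2, one UAV flying 0 -> 1 -> 2 -> 0.
c = [[0, 2, 3],
     [2, 0, 1],
     [3, 1, 0]]
print(cycle_metrics([1, 2], c))   # (6, {1: 4, 2: 3})
```

Task 1 is completed first, so its data rides along for the rest of the cycle: its delivery time (4) exceeds that of task 2 (3), even though task 1 is closer to the depot.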
We want the data delivered at the control station to be as fresh as possible, which requires the delivery times to be as small as possible. To accomplish this, we minimize the maximum delivery time over all the tasks: min max_i ∈ T D_i. This cost is different from minimizing the total cost of the paths; it is a measure of the delivery time of the first task that is serviced.

§ PROBLEM FORMULATION

In this section, we define the path planning problem for PISR missions in more detail. We also present a mixed integer linear programming (MILP) formulation to find the optimal solution to the path planning problem. We model the MILP using node-based and arc-based formulations; these models are akin to the models in <cit.> and <cit.> used to solve the traveling salesman problem with time windows and the distance-constrained vehicle routing problem. Similar formulations were also used to solve the fuel-constrained multiple vehicle routing problem in <cit.>. In those articles, the length of a tour starting from a depot is constrained. In the formulation presented here, the length of the path from the depot to each task and the length of the return path from that task to the depot are constrained together. The novelty of this formulation lies in modeling and constraining these two lengths together. Also, it could be applied to other routing problems that require handling cycle lengths, such as the min-max traveling salesman problem.

Let T = {t_1, t_2, … t_n } represent the set of tasks, and, as in the previous example, let d be the index referring to the depot. We define the problem on a graph G(V, E), where V is the set of nodes V = T ∪{d}, and E is the set of edges between every pair of nodes in V. Let n_v represent the number of UAVs available for the mission.
The problem can be stated as follows: find at most n_v cycles that minimize the maximum delivery time such that (i) each task in T is covered by exactly one cycle, and (ii) if a task t_i is assigned to a UAV v with cycle length L_v, then L_v ≤ R_i, ∀ i ∈ T.

§.§ MILP Formulation

In the MILP formulation, we use a set of binary variables x_ij and two sets of real variables u_i and v_i. Each variable x_ij corresponds to an edge (i,j), and x_ij=1 if edge (i,j) is in any of the UAV cycles; otherwise, x_ij=0. For a particular cycle (assignment), the variable u_i denotes the travel time from the depot to task t_i, and v_i represents the return travel time along the cycle from task t_i to the depot; this is illustrated in Fig. <ref>. Let c_ij represent the time elapsed from task t_i to task t_j. Here, c_ij includes the time of travel between the tasks and the time to perform task t_j. We now present the MILP formulation using degree constraints, sub-tour elimination constraints (SEC) and revisit period constraints.

Degree constraints:

∑_j ∈ V x_ij = 1, ∑_j ∈ V x_ji = 1, ∀ i ∈ T,
∑_j ∈ T x_dj ≤ n_v, ∑_j ∈ T x_jd ≤ n_v,
x_ij ∈ {0,1}, ∀ (i,j) ∈ E.

Constraints in Equation (<ref>) state that for every node representing a task, there should be exactly one incoming edge and one outgoing edge. Constraints (<ref>) state that there should be at most n_v incoming and n_v outgoing edges for the depot node, and the binary constraints on the x_ij variables are in (<ref>).

Sub-tour elimination constraints:

u_i - u_j + c_ij ≤ M(1-x_ij), ∀ i ∈ V, j ∈ T,
c_di ≤ u_i ≤ R_i - c_id, ∀ i ∈ T,
v_j - v_i + c_ij ≤ M(1-x_ij), ∀ i ∈ T, j ∈ V,
c_id ≤ v_i ≤ R_i - c_di, ∀ i ∈ T.

With just the degree constraints, the MILP may produce solutions containing sub-tours; a sub-tour is an assignment where a subset of tasks is connected as a cycle but isolated from the depot. One may refer to <cit.> for further reading on sub-tours.
One can remove these infeasible solutions using the sub-tour elimination constraints. To this end, we use the inequalities (<ref>, <ref>), similar to the MTZ constraints used to solve the traveling salesman problem <cit.>. If an edge (i,j) is present in the tour, then the value of the variable u_j should be at least the sum of the travel time from the depot to task t_i and the travel time from task t_i to t_j; that is, if x_ij=1, then u_j ≥ u_i + c_ij. This is enforced by the constraints in (<ref>), where M (referred to as big-M in the literature) is a constant of arbitrarily high value. When x_ij is zero, constraint (<ref>) is trivially satisfied. For every t_i, the minimum value of u_i is the direct travel time from the depot to the task, and the maximum is R_i - c_id. These lower and upper limits on the u_i are imposed by (<ref>). Inequalities (<ref>), (<ref>) are the counterparts of (<ref>), (<ref>), using the variables v_i instead. Inequality (<ref>) states that, when x_ij=1, v_j should be no greater than v_i - c_ij. Either the set of constraints (<ref>), (<ref>) or (<ref>), (<ref>) is sufficient to eliminate the sub-tours, but we need both of them to formulate the revisit period constraint, which requires both sets of variables, the u_i and the v_i.

Revisit period constraints:

u_i + v_i ≤ R_i, ∀ i ∈ T.

For each task, u_i is the time of travel from the depot to task t_i, and v_i is the return time of travel from the task to the depot. Hence, the sum of these two variables gives the time of travel of the full cycle which covers task t_i. Therefore, inequalities (<ref>) enforce the revisit period constraints for all of the tasks in T.

Objective:

min z, v_i ≤ z, ∀ i ∈ T.

The variables v_i are also equal to the delivery times of the data collected from the tasks. To minimize the maximum of all the delivery times, we introduce an auxiliary variable z, which is needed to formulate the min-max objective.
The objective min z and the inequalities (<ref>) together minimize the maximum of all the delivery times (the v_i). The MILP formulation for the PISR routing problem is then the following:

(ℱ_1) min z subject to the degree, sub-tour elimination, revisit period and objective constraints above.

In this formulation, the big-M in the constraints (<ref>), (<ref>) is known to cause computational problems <cit.>, and hence makes the MILP model computationally less efficient. We propose a second formulation without big-M constraints and compare the computational performance of the two formulations.

§.§ Formulation based on arcs (ℱ_2)

Here, we use the binary variables x_ij as in the previous formulation, and the real variables y_ij and w_ij, ∀ i,j ∈ V, are used instead of the u_i and v_i. The variable y_ij represents the travel time from the depot to task t_j if the edge (i,j) is selected in the assignment, i.e., x_ij = 1. Also, when x_ij=1, the variable w_ij is equal to the return travel time from t_i to the depot. For each task t_i, only one of the variables in the set y_ij, j ∈ T, and one of the variables in the set w_ij, j ∈ T, are non-zero. The arc-based MILP formulation to solve the PISR routing problem is the following:

(ℱ_2) min z subject to

∑_j ∈ V x_ij = 1, ∑_j ∈ V x_ji = 1, ∀ i ∈ T,
∑_j ∈ T x_dj ≤ n_v, ∑_j ∈ T x_jd ≤ n_v,
∑_j ∈ V y_ij - ∑_j ∈ V y_ji = ∑_j ∈ V c_ij x_ij, ∀ i ∈ T,
y_di = c_di x_di, ∀ i ∈ T,
0 ≤ y_ij ≤ R_j x_ij, ∀ i ∈ V, j ∈ T,
∑_j ∈ V w_ji - ∑_j ∈ V w_ij = ∑_j ∈ V c_ji x_ji, ∀ i ∈ T,
w_id = c_id x_id, ∀ i ∈ T,
0 ≤ w_ij ≤ R_i x_ij, ∀ i ∈ T, j ∈ V,
∑_j ∈ V y_ji + ∑_j ∈ V w_ij ≤ R_i, ∀ i ∈ T,
w_ij ≤ z, ∀ i ∈ T, j ∈ V,
x_ij ∈ {0,1}, ∀ (i,j) ∈ E.

Here, the constraints in Equation (<ref>) are the degree constraints, which enforce that exactly one incoming and one outgoing edge exists for any task. Equation (<ref>) imposes that at most n_v edges go out of and come into the depot.
The time elapsed from leaving the depot to the end of task t_i is given by the summation ∑_j ∈ V y_ji, and the time from leaving the depot to the end of the task serviced after t_i is given by the summation ∑_j ∈ V y_ij; these two summations are illustrated in Fig. <ref>. Constraints (<ref>) and (<ref>) ensure that the difference between these two is equal to the time between t_i and the task serviced after t_i. These constraints are needed to eliminate the sub-tours. Equations (<ref>) ensure that the variables y_ij are always non-negative, and nonzero only when x_ij=1. Constraints in (<ref>) are the counterparts of (<ref>), where the summations ∑_j ∈ V w_ij and ∑_j ∈ V w_ji are the return travel times to the depot from task t_i and from the task serviced before t_i, respectively. The difference between these two should be equal to the time between these two tasks, and this is imposed by the constraints in (<ref>) and (<ref>). The non-negativity and non-zero constraints on the variables w_ij are enforced by (<ref>). The set of constraints (<ref>) - (<ref>) or (<ref>) - (<ref>) is sufficient to eliminate the sub-tours; however, we need both of them to implement the revisit period constraints.

The summation ∑_j ∈ V y_ji is the time of travel for a UAV from the depot to task t_i, and the summation ∑_j ∈ V w_ij is the return travel time from task t_i to the depot. If UAV_v visits task t_i, then these two summations add up to the total time elapsed to service all the tasks assigned to UAV_v. Constraints (<ref>) enforce the maximum limit on the revisit period for each of the tasks. The variables w_ij are the delivery times of the tasks; hence, the objective function min z and the inequalities (<ref>) together minimize the maximum delivery time.
Inequalities (<ref>) - (<ref>) and (<ref>) - (<ref>) in formulation ℱ_2 can be strengthened using the following inequalities: y_ij ≤ (R_j-c_jd)x_ij, ∀ i,j ∈ V, y_id ≤ R_i x_id, ∀ i ∈ V, y_ij ≥ (c_di + c_ij)x_ij, ∀ i,j ∈ V, w_ij ≤ (R_i-c_di)x_ij, ∀ i,j ∈ V, w_di ≤ R_i x_di, ∀ i ∈ V, w_ij ≥ (c_ij + c_jd)x_ij, ∀ i,j ∈ V. When j is not the depot index, the sum of the time from leaving the depot to the end of task t_j and c_jd, the direct travel time from the end of task t_j to the depot, should be less than the revisit period limit for the task t_j. This inequality is expressed in (<ref>), and since c_jd is non-negative, (<ref>) is tighter than (<ref>). Inequality (<ref>) is the same as (<ref>) when j is the depot index. Inequality (<ref>) indicates that when i and j are not depot indices and x_ij=1, the time to travel from the depot to t_j should be at least the sum of the time to travel from the depot to t_i and the time to travel from t_i to t_j. Inequalities (<ref>) - (<ref>) are the counterparts of the inequalities (<ref>) - (<ref>) for the w_ij variables. Inequality (<ref>) states that, when x_ij=1, the sum of the direct travel time from the depot to t_i and the time from t_i to the depot along the cycle should be less than the revisit period limit of t_i. When x_ij=1, w_ij is the time from the end of task t_i to returning to the depot, which should be at least the sum of the time to travel from t_i to t_j and the time to travel from t_j to the depot; this is enforced by (<ref>). To minimize the maximum delivery time, inequalities (<ref>) can be replaced with the following inequalities: w_di - c_di ≤ z, ∀ i ∈ T. Clearly, the first task serviced has the highest delivery time. If t_i is the first task, then w_di - c_di is the delivery time for t_i. Therefore, the objective min z along with inequalities (<ref>) minimizes the maximum delivery time. Note that, if t_i is not the first task visited, then w_di=0, and (<ref>) is trivially satisfied.
We present the strengthened arc-based formulation as follows: (ℱ_3) z (<ref>) - (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) - (<ref>). § ASSIGNMENT TREE SEARCH HEURISTIC In this section, we present a heuristic to solve the PISR routing problem. The heuristic is a greedy assignment tree search, based on the prior work in <cit.>, for planning missions involving multiple UAVs. Here, we present a synopsis of the tree search algorithm; one can refer to <cit.> for further details. The tree search follows a best-first search pattern until it finds a feasible assignment. At the root node, the algorithm creates branches and a child node at each branch. Each child corresponds to an assignment of one of the tasks to one of the available UAVs. The number of child nodes equals the number of all possible ways to select one unassigned task and assign it to a UAV. Among all the child nodes, the algorithm selects the node with the least cost, and repeats the branching process similar to the branching at the root node. This process repeats until all the tasks are assigned. At this point in the search, the algorithm arrives at a "leaf" node, which corresponds to a feasible solution to the planning problem. Once a feasible assignment is found, the algorithm stores the solution as the incumbent solution. It proceeds to search the tree by evaluating the unexplored child nodes, and tries to find solutions of lower cost than the incumbent solution. Also, the tree search prunes a branch before reaching a leaf node if the current cost at the branch is more than the cost of the incumbent solution. The search is terminated either when it finds a feasible solution or when it reaches a pre-specified maximum number of nodes to be explored. We use this tree search heuristic to find feasible paths for the PISR routing problem.
The algorithm is adapted to find feasible paths such that the cycle length of each UAV adheres to the revisit period constraints of the tasks assigned to that UAV, and the maximum delivery time of all the tasks is minimized. Tree search heuristic: * Initialize the problem at a root node with the following: the locations of the tasks, the travel times between the tasks, the maximum limit on the revisit period for each task, and the number of UAVs available. * Create child nodes (n_i), each corresponding to an assignment of a task to an available UAV. Each node corresponds to a list of assignments for each UAV. * Compute the current cost of each child node C(n_i), which is the maximum of the delivery times of all the tasks. For example, if a UAV is assigned tasks in the order t_s_1, t_s_2, t_s_3, the maximum of the delivery times is the sum c_s_1s_2 + c_s_2s_3 + c_s_3d. (Here, c_ij is the sum of the travel time between t_i and t_j and the time to perform task t_j.) * Check if the assignments violate the revisit period constraints of all the tasks assigned so far. For example, if the tasks t_s_1, t_s_2, t_s_3 are assigned to a UAV, check that the travel time of the cycle, c_ds_1 + c_s_1s_2 + c_s_2s_3 + c_s_3d, is less than R_s_1, R_s_2 and R_s_3. If any UAV violates the revisit period constraints, assign an infinite cost to the child node. * To select a child node for further branching, we scale the cost by two factors based on the current task and the current UAV that are assigned at each node. The first scale S_c_1 forces a task with a low revisit period limit to be assigned earliest to a UAV: S_c_1 = R_i/R_max, where R_i is the revisit period limit of the task t_i that is assigned at the current node, and R_max is the maximum of the revisit period limits of all the tasks.
The second scale S_c_2 prioritizes a UAV which was assigned a task with a revisit period limit in earlier assignments (at the parent or higher nodes); S_c_2 = 10^-n_t, where n_t is the number of revisit-period-constrained tasks assigned to the current UAV. * Select the child node with the lowest scaled cost S_c_1 S_c_2 C(n_i), and repeat the branching, steps <ref> - <ref>, until a leaf node is found. Update the incumbent solution, and proceed to explore the unevaluated child nodes at the parent nodes and above until another leaf node is found. * Exit the tree search when there are no child nodes to be evaluated, or when the number of nodes evaluated reaches the specified limit, and output the solution with the lowest cost. § COMPUTATIONAL RESULTS The MILP formulations ℱ_1 and ℱ_3 are solved using a branch-and-cut algorithm. The algorithms are implemented using CPLEX (version 12.6) with the C++ API. CPLEX solves the MILP using a branch-and-cut algorithm, which iteratively generates feasible solutions (upper bounds) and lower bounds based on a solution to the dual problem, and outputs the optimal solution when the gap between the lower bound and the upper bound converges to zero. All the simulations were run on a MacBook with an Intel i5 2.7 GHz processor and 8 GB of memory. We generated random instances by choosing task locations (xy-coordinates) from a uniform distribution in a square grid of size 4000 × 4000 meters. We tested the algorithms on 30 random instances, 10 each with 10, 20 and 30 tasks and 4 UAVs. We impose the revisit period constraints on 3, 4 and 5 of the tasks for the instances with 10, 20 and 30 tasks, respectively. For all the instances, we have chosen the task farthest from the depot, and its revisit period limit is set to 1.1 times the sum of the travel time from the depot to the farthest task and from the task back to the depot. We have selected the nearest 2, 3 and 4 tasks for the instances with 10, 20 and 30 tasks, respectively, to set the revisit period limits.
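The scaled-cost child selection used by the heuristic above can be sketched in a few lines of pure Python. The candidate children, their costs C(n_i), revisit limits and n_t counts below are all hypothetical illustration values:

```python
# Hedged sketch of the scaled-cost child selection in the tree search:
# S_c1 = R_i / R_max favors tasks with tight revisit limits, and
# S_c2 = 10**(-n_t) favors UAVs already carrying revisit-constrained tasks.
# All numbers are hypothetical.
R_max = 40.0

def scaled_cost(cost, R_i, n_t):
    # A task without a revisit limit gets no S_c1 scaling.
    s_c1 = (R_i / R_max) if R_i is not None else 1.0
    s_c2 = 10.0 ** (-n_t)
    return s_c1 * s_c2 * cost

# Each candidate: (label, cost C(n_i), revisit limit R_i of the assigned
# task or None, number n_t of constrained tasks already on that UAV).
children = [
    ("t5 -> UAV1", 12.0, 20.0, 1),
    ("t7 -> UAV2", 10.0, None, 0),   # t7 has no revisit limit
    ("t9 -> UAV1", 15.0, 40.0, 1),
]
best = min(children, key=lambda ch: scaled_cost(ch[1], ch[2], ch[3]))
```

Even though the raw cost of "t7 -> UAV2" is lowest, the scaling selects the child that assigns the tightly constrained task t5 to the UAV that already carries a constrained task, which is the intended behavior of the two scales.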
We set the revisit period limit to be 1.1 times the optimal cost of the traveling salesman problem solved on these tasks including the depot, with travel times as the cost of travel between tasks. We assume the UAVs travel at unit speed (one meter per second), and the Euclidean distance between the task locations is chosen to be the travel time between them. The tree search heuristic is implemented in C++, and the maximum number of nodes to be evaluated is set to one million. The computational results for the 30 instances are shown in Table <ref>. The first and second columns refer to the instance number and the number of tasks in the instance, respectively. The third and fourth columns refer to the cost of the solution and the computation time required by the formulation ℱ_1. The fifth and sixth columns refer to the solution cost and the computation time required for solving using formulation ℱ_3. Time limits of one hour and 2.5 hours are set for solving the instances with 20 and 30 tasks, respectively. For the instances where the algorithm could not find optimal solutions in the set time limit, the cost of the best found solution is listed. All the computation times reported are in seconds. The seventh and eighth columns refer to the cost of the first feasible solution found (referred to as the best-first search solution) and the corresponding computation time by the tree search heuristic. The ninth and tenth columns refer to the final cost of the heuristic solution and the computation time required after exploring one million nodes of the tree. With the formulation ℱ_1, the branch-and-cut algorithm could not converge within the time limit for the instances with 20 and 30 tasks. We could find optimal solutions for all the instances with 10 and 20 tasks using the formulation ℱ_3, and for 4 out of 10 instances with 30 tasks.
Though tight feasible solutions are found, the lower bounds given by the LP relaxations of these formulations are not tight enough, and therefore the algorithm needs more computation time to converge. Finding better valid inequalities may solve this problem, which can be a future direction of this research. From the computational results, clearly the formulation ℱ_3 outperformed ℱ_1. The tree search heuristic generated a best-first search solution within 11 milliseconds and final solutions within 11 seconds for all the instances. Also, the costs of these solutions are within 50% of the optimal for most of the instances. This heuristic is well suited for quick planning and onboard re-planning of the missions. Plots of the solutions found by solving the MILP formulation ℱ_3 and by the heuristic for an instance with 20 tasks are shown in Fig. <ref>. One can see that the tasks t_6, t_8 and t_14 with tight revisit period limits lie on a UAV tour with the smallest tour length. The task t_10 also has a revisit period limit; however, the corresponding UAV also visits other tasks without violating the revisit period constraint of t_10. § CONCLUSION We considered a path planning problem for PISR missions that involves multiple UAVs collecting data from spatially dispersed tasks and delivering it at a depot. We modeled this as an optimization problem to minimize the maximum delivery time for all the tasks while satisfying the revisit period constraints for the high-priority tasks. To find optimal solutions, we presented two MILP formulations, ℱ_1 and ℱ_3, which include novel constraints to satisfy revisit period limits. These formulations are solved using a branch-and-cut algorithm, which could find optimal solutions for instances with up to thirty tasks. Also, we presented a heuristic based on assignment tree search; it produces sub-optimal solutions which require only a few seconds of computation time.
The heuristic could find feasible solutions for all the instances within 10 milliseconds. For missions where onboard re-planning is necessary due to changes in the tasks or their locations, this heuristic is well suited for quick onboard re-planning. The future directions of this research include finding better valid inequalities for the formulations to strengthen the lower bounds for computational efficiency. Also, one can develop similar MILP models to find paths that minimize the weighted sum of the revisit periods of all the tasks.
http://arxiv.org/abs/1702.08494v1
{ "authors": [ "Satyanarayana G. Manyam", "Steven Rasmussen", "David W. Casbeer", "Krishnamoorthy Kalyanam", "Suresh Manickam" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170227195614", "title": "Multi-UAV Routing for Persistent Intelligence Surveillance & Reconnaissance Missions" }
The crystallographic stacking order in multilayer graphene plays an important role in determining its electronic structure. In trilayer graphene, rhombohedral stacking (ABC) is particularly intriguing, exhibiting a flat band with an electric-field tunable band gap. Such an electronic structure is distinct from simple hexagonal stacking (AAA) or typical Bernal stacking (ABA), and is promising for nanoscale electronics and optoelectronics applications. So far, clean experimental electronic spectra of the first two stackings are missing because the samples are usually too small in size (μm or nm scale) to be resolved by conventional angle-resolved photoemission spectroscopy (ARPES). Here, by using ARPES with a nanospot beam size (NanoARPES), we provide direct experimental evidence for the coexistence of three different stackings of trilayer graphene and reveal their distinctive electronic structures directly. By fitting the experimental data, we provide important experimental band parameters for describing the electronic structure of trilayer graphene with different stackings. Graphene, a monolayer of carbon atoms arranged in a honeycomb lattice, has been extensively investigated in the last decade as a two-dimensional material with intriguing properties originating from its linearly dispersing Dirac cone at the K point <cit.>. When stacking monolayer graphene to form multilayer graphene, the interlayer interaction can lead to dramatic changes of the Dirac cone, depending on how the graphene layers are stacked. The simplest example is bilayer graphene, which has two different stacking sequences, AB (Bernal) and AA stackings. In contrast to bilayer graphene with AA stacking, which shows two linearly dispersing Dirac cones displaced from the K point, bilayer Bernal graphene shows parabolic dispersions.
A band gap can be induced by a perpendicular electric field, making it potentially useful for applications in electronics and photonics <cit.>. In trilayer graphene, the different stacking sequences provide an even richer playground for electronic band structure engineering <cit.>. There are three stacking sequences, simple hexagonal (AAA), Bernal (ABA) and rhombohedral (ABC) stackings, as schematically shown in Fig. <ref>(a)-(c). The different stackings lead to different vibrational and electronic properties. The Raman- and infrared-active modes and the electron-phonon coupling are quite different for the ABA and ABC stacking sequences, which has also been used as a reliable and efficient method to determine the stacking sequence <cit.>. Moreover, they have different responses under an applied electric field. AAA stacking has the highest symmetry, and its electronic band structure consists of three equally displaced Dirac cones, as shown in Fig. <ref>(d). Applying an electric field can increase the separation between the Dirac cones; however, the Dirac cones remain gapless <cit.>. The most common ABA stacking has mirror symmetry and lacks inversion symmetry. Its band structure is effectively the superposition of a linear Dirac cone from monolayer graphene and two quadratic dispersions from AB-stacked bilayer graphene (see Fig. <ref>(e)) <cit.>. Applying an electric field will induce a gap only for the linearly dispersing band, while the parabolic bands will still remain gapless <cit.>. Because of the absence of a band gap even under an applied electric field, neither of these two graphene stackings is very useful for electronic devices. Rhombohedral stacking (ABC) graphene has inversion symmetry but lacks mirror symmetry. The crystallographic symmetry leads to two flat bands at the Fermi level with cubic dispersion (Fig.
<ref>(f)) <cit.>. The double degeneracy in rhombohedral stacking graphene can be lifted by applying different potentials to the top and bottom graphene layers <cit.>. Experimentally, evidence of the existence of the tunable gap has been inferred from infrared conductivity <cit.> and electrical and magnetic transport measurements <cit.>. Another important property of rhombohedral trilayer graphene is that the band at the K point near the Fermi level has a very small velocity, and in the limit of many layers it will become a flat band <cit.>. The high density of states from the flat band provides new opportunities for realizing many exotic properties, e.g. flat-band high-temperature superconductivity <cit.>, various magnetic orders <cit.>, etc. Under an applied magnetic field, a Lifshitz transition induced by trigonal warping in rhombohedral trilayer graphene has been reported <cit.>. It has also been predicted that ferromagnetic spin polarization can exist on the (0001) surfaces of rhombohedral graphite <cit.>, and that a thin film of rhombohedral graphene can undergo a magnetic phase transition from the antiferromagnetic state to the ferromagnetic state under a perpendicular electric field <cit.>. In brief, rhombohedral trilayer graphene provides a platform to investigate very rich physics and promises applications in electronics, optoelectronics and so on. Energetically, rhombohedral stacking is less stable than Bernal stacking <cit.>. Although rhombohedral graphene has been identified in exfoliated graphene <cit.> and in multilayer graphene grown on 3C-SiC(111) or 6H-SiC(0001) substrates <cit.>, rhombohedral graphene is usually mixed with the dominant Bernal graphene, and the small grain size makes it challenging to obtain clean electronic band dispersions <cit.>. The epitaxial graphene sample was grown by annealing the Pt(111) substrate in ultrahigh vacuum at elevated temperatures up to 1600 ^∘C using electron beam bombardment <cit.>.
Using nanospot angle-resolved photoemission spectroscopy (NanoARPES), we are able to record electronic nanoimages which reveal different densities of states close to the Fermi level associated with distinct regions of different stacking sequences. We resolved the ABA and ABC stacking orders completely and obtained very clean, high-quality electronic bands compared to previous reports <cit.>. We also obtained the band structure of AAA-stacked trilayer graphene for the first time. The hopping parameters are extracted using tight-binding fitting. Our work reveals the three dissimilar dispersion relations for the three trilayer graphene stacking sequences, and provides important experimental band parameters for describing the electronic structure of trilayer graphene. Figure <ref> shows the characterization of the orientation of multilayer graphene relative to the substrate using low-energy electron diffraction (LEED) and ARPES. Figure <ref>(a) shows the LEED pattern. The most obvious features are indicated by red and black arrows, which have 0^∘ and 30^∘ azimuthal angles relative to the platinum substrate. Besides, there is some arc-like shape around these two patterns. This can be seen more clearly in the azimuthal dependence curve in Fig. <ref>(b). In addition to the 0^∘ and 30^∘ grains, there exist 23.4^∘ and 36.6^∘ grains, indicated by blue and purple arrows respectively, and shoulders around the 0^∘ peak in Fig. <ref>(b). The Fermi surface map obtained by regular ARPES with a beam size of ≈ 100 μm is shown in Fig. <ref>(c), and its evolution at -0.5 eV and -1.0 eV is shown in Fig. <ref>(d) and (e) respectively, which show Dirac cone features with different orientations. The azimuthal dispersion (Fig. <ref>(f)) and the momentum distribution curve at E_F (Fig. <ref>(g)) reveal domains with orientations consistent with the LEED pattern.
More orientations can be distinguished in the ARPES spectrum, which are hidden in the shoulders of the LEED data and indicated by gray (6^∘) and green (-6^∘) arrows. Combining LEED and regular ARPES data, we reveal at least six different orientations of graphene grown on the (111) surface of platinum: 0^∘, 6^∘, 23.4^∘, 30^∘, 36.6^∘ and 54^∘ (equivalent to -6^∘). The rich orientations result from the weak graphene-substrate interaction, which is also likely to lead to different stacking sequences. To resolve the electronic structure of multilayer graphene, we perform NanoARPES measurements. Figure <ref>(a) shows the spatially resolved map integrated from -0.1 to -0.5 eV along the Γ-K-M direction of the 0^∘ grain. Figure <ref>(b)-(f) shows NanoARPES data taken from five different typical regions marked in panel (a). The large region marked by label b shows negligible intensity within -2 eV of E_F, and there is a peak at ≈ -2.2 eV. This is identified as 23.4^∘ graphene. The existence of this peak also in panels (c), (e) and (f) suggests that these graphene domains are on 23.4^∘ graphene. In Fig. <ref>(c), the linear dispersion from 0^∘ monolayer graphene on 23.4^∘ graphene is very clear. The asymmetrical intensity is due to the matrix element effect <cit.>. Dispersions characteristic of trilayer graphene are observed in the regions marked by labels d, e and f. In Fig. <ref>(d), three linear dispersions are present, and they are identified to be from simple hexagonal trilayer graphene. A linear dispersion band and two quadratic dispersion bands from Bernal trilayer graphene are shown clearly in Fig. <ref>(e). Figure <ref>(f) shows two intersecting quadratic dispersion bands below E_F and a flat band at E_F from rhombohedral trilayer graphene. Therefore, in the area which we studied, there are monolayer graphene and trilayer graphene with simple hexagonal, Bernal and rhombohedral stackings. Every trilayer graphene domain shows a very distinct dispersion associated with its corresponding stacking.
To confirm the stacking sequences of trilayer graphene and to reveal the interlayer and intralayer coupling of graphene in different stacking sequences, we use a tight-binding model <cit.> to fit the ARPES spectra (see the supplementary material for details). As shown in Fig. <ref>(a)-(c), ab initio numerical calculations <cit.> and quantum capacitance measurements <cit.> suggest that the hopping terms γ_0, γ_1 and γ_3 are enough to describe Bernal and rhombohedral stacked trilayer graphene, and γ_0, γ_1 are enough for simple hexagonal stacking, so we only take these hopping terms into account, where γ_0 is the nearest-neighbor hopping in monolayer graphene, γ_1 is the vertical nearest interlayer hopping term, and γ_3 is the next-nearest-neighbor interlayer hopping term. The comparison of the experimental results and the fitted dispersions in Fig. <ref>(g)-(j) shows good agreement for monolayer, simple hexagonal, Bernal and rhombohedral trilayer graphene respectively, confirming unambiguously the existence of all three trilayer stacking sequences. Besides, these hopping parameters reveal the interlayer and intralayer coupling of graphene and determine the band structure directly. The hopping integrals obtained from fitting the experimental data are listed in Table <ref>. It is well known that the Fermi velocity is directly proportional to the nearest intralayer coupling (γ_0). The Fermi velocity obtained from γ_0 is ≈ 1.0×10^6 m/s for all stackings and is very close to that of pristine graphene <cit.>. The splitting between the bands increases as the vertical nearest interlayer coupling (γ_1) gets stronger. Meanwhile, the next-nearest-neighbor interlayer coupling (γ_3) tilts the bands for Bernal and rhombohedral stackings.
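As a quick consistency check on the quoted velocity, the standard monolayer tight-binding relation v_F = 3 a γ_0 / (2ħ), with a the carbon-carbon distance (≈ 1.42 Å), reproduces ~1.0×10^6 m/s. The value γ_0 = 3.1 eV below is an illustrative input, not the fitted value from the table:

```python
# Sketch: Fermi velocity from the nearest-neighbor hopping gamma_0 via the
# standard monolayer relation v_F = 3 * a_cc * gamma_0 / (2 * hbar).
# gamma_0 = 3.1 eV is an illustrative value, not the fitted one in Table <ref>.
E_CHARGE = 1.602176634e-19   # J per eV
HBAR = 1.054571817e-34       # J s
A_CC = 1.42e-10              # carbon-carbon distance, m

def fermi_velocity(gamma0_eV):
    """Fermi velocity in m/s for a given nearest-neighbor hopping in eV."""
    return 3 * A_CC * gamma0_eV * E_CHARGE / (2 * HBAR)

v_f = fermi_velocity(3.1)    # ~1.0e6 m/s, matching the value quoted in the text
```

This makes explicit why the fitted γ_0 values directly pin down the Fermi velocity: the two are proportional, with the lattice constant and ħ fixing the conversion factor.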
For Bernal and rhombohedral stacking, the interlayer hopping integrals (γ_1, γ_3) agree well with previous theoretical <cit.> and experimental results, including mechanically exfoliated trilayer graphene on SiO_2 <cit.> and synthesized trilayer graphene on 6H-SiC(0001) and 3C-SiC(111) substrates <cit.>, as shown in Table <ref>. For simple hexagonal stacking (AAA) trilayer graphene, the interlayer hopping parameter (γ_1) is much weaker compared with Bernal or rhombohedral stacking. As far as we know, there are very few experimental reports about simple hexagonal stacked graphene <cit.>, and no hopping integral of AAA stacking has been reported before to compare with experimentally. In summary, we have distinguished rhombohedral and Bernal stacked trilayer graphene spatially by their different band structures directly for the first time. Besides, we also observed simple hexagonal stacked trilayer graphene. We have observed that the undoped trilayer graphene films are characterized by a Dirac point locked at the Fermi level with different nesting depending on the stacking. Clear dispersions of the AAA, ABA and ABC stackings are obtained, and the experimental hopping parameters γ_0, γ_1 and γ_3 are obtained by fitting the NanoARPES spectra. We acknowledge support from the National Natural Science Foundation of China (Grant No. 11334006 and 11427903), from the Ministry of Science and Technology of China (Grant No. 2015CB921001 and 2016YFA0301004) and the Tsinghua University Initiative Scientific Research Program (2012Z02285). The Synchrotron SOLEIL is supported by the Centre National de la Recherche Scientifique (CNRS) and the Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA), France. § METHODS The epitaxial graphene sample was grown by annealing the Pt(111) substrate in ultrahigh vacuum at elevated temperatures up to 1600 ^∘C using electron beam bombardment, which has been reported in our previous works <cit.>.
The sample was annealed at 450 ^∘C to clean the surface before the ARPES measurements. NanoARPES experiments were performed at the ANTARES beamline of the SOLEIL synchrotron, France. All ARPES data were taken at a photon energy of 100 eV with a Scienta R4000 analyzer, using linearly polarized light. The vacuum was better than 2×10^-10 Torr, and the sample was kept at 80 K during the measurements.
http://arxiv.org/abs/1702.08307v1
{ "authors": [ "Changhua Bao", "Wei Yao", "Eryin Wang", "Chaoyu Chen", "José Avila", "Maria C. Asensio", "Shuyun Zhou" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170227145157", "title": "Stacking-dependent electronic structure of trilayer graphene resolved by nanospot angle-resolved photoemission spectroscopy" }
http://arxiv.org/abs/1702.08477v2
{ "authors": [ "Zheng Li" ], "categories": [ "quant-ph", "physics.acc-ph" ], "primary_category": "quant-ph", "published": "20170227191502", "title": "Amplification of entangled photon states by klystron" }
Jiali Wang^1 (Corresponding author: Jiali Wang, Research School of Finance, Actuarial Studies and Statistics, College of Business and Economics Building 26C, The Australian National University, Canberra, ACT 2601, Australia. Phone: +61 2 612 57290. Email: u5298171@anu.edu.au), Bronwyn Loong^1, Anton H. Westveld^1,2, Alan H. Welsh^3. [1] Research School of Finance, Actuarial Studies and Statistics, College of Business and Economics, The Australian National University, AUS. [2] Statistics Laboratory @ the Bio5 Institute & Statistics G.I.D.P., The University of Arizona, USA. [3] Mathematical Sciences Institute, College of Physical & Mathematical Sciences, The Australian National University, AUS.

A Copula-based Imputation Model for Missing Data of Mixed Type in Multilevel Data Sets

December 30, 2023

We propose a copula-based method to handle missing values in multivariate data of mixed types in multilevel data sets. Building upon the extended rank likelihood of <cit.> and the multinomial probit model, our model is a latent variable model which is able to capture the relationship among variables of different types as well as account for the clustering structure. We fit the model by approximating the posterior distribution of the parameters and the missing values through a Gibbs sampling scheme. We use the multiple imputation procedure to incorporate the uncertainty due to missing values in the analysis of the data. Our proposed method is evaluated through simulations to compare it with several conventional methods of handling missing data. We also apply our method to a data set from a cluster randomized controlled trial of a multidisciplinary intervention in acute stroke units.
We conclude that our proposed copula-based imputation model for mixed-type variables achieves reasonably good imputation accuracy and recovery of parameters in some models of interest, and that adding random effects enhances performance when the clustering effect is strong. § INTRODUCTION Multivariate analysis often involves understanding the relationship among variables of different types. Our motivating data set is from the Quality in Acute Stroke Care (QASC) study, which implemented a multidisciplinary intervention to manage fever, hyperglycaemia and swallowing dysfunction in acute stroke patients <cit.>. This study was one of the largest rigorously evaluated clinical trials which showed that organised stroke unit care significantly reduced death and disability among stroke patients. There were 19 acute stroke units in New South Wales, Australia that participated in the study, and they were randomly assigned to an intervention group (10 units) and a control group (9 units). A pre-intervention and a post-intervention cohort of patients were recruited; their demographic variables such as age, gender and marital status were obtained, and process of care variables such as temperature, time from onset to hospital and length of stay in hospital were recorded. The researchers were interested to see if the implementation of the protocols reduced death and dependency, and improved physical and mental health scores. The four primary outcome variables considered were: (1) modified Rankin Scale (an ordinal variable ranging from 0 to 6, measuring the degree of disability or dependence in daily activities); (2) Barthel index (an ordinal variable ranging from 0 to 100, which also measures performance in activities of daily living; it is usually reported as a dichotomised variable with 60 or more and 95 or more as cut points); (3) mean SF-36 mental component summary score; (4) mean SF-36 physical component summary score.
Mental and physical component summary scores were measured on continuous scales between 0 and 100. In the QASC study, all four outcome variables had moderate amounts of missing data, and most of the explanatory variables had missing values as well (Table 1). Ignoring all the patients with missing values, which is known as complete case analysis, is a commonly used approach to handle missing data but may lead to biased estimates and reduced statistical power. In other words, the remaining cases may not be representative of the target population if we ignore the incomplete cases completely. The smaller sample size also decreases the power to detect significant treatment effects. Due to the potential for positive dependence among units within the same cluster, this is especially serious in multilevel data sets. Case-wise deletion reduces the sample size of patients within hospitals and the number of hospitals at the same time if any information at the hospital level is missing. As a consequence, both the variations between and within hospitals may not be accurately estimated. An alternative approach is to `impute' missing values, so that after imputation complete data analysis can be performed using standard software. Some ad-hoc procedures include mean imputation and last observation carried forward. More principled imputation methods are model-based, such as joint modelling <cit.> and fully conditional specification <cit.>. Current methods to handle missing data are potentially inadequate for the QASC study, which is complicated by the clustering effect and the mix of variable types. <cit.> proposed using a semiparametric copula model based on the extended rank likelihood to analyse multivariate data of mixed types. We extend the work of <cit.> by adding random effects to introduce correlation among individuals within clusters. The model in <cit.> can only be used for continuous and ordinal variables, so we consider a multinomial probit model to handle nominal variables.
We then evaluate our model by its ability to recover missing data and estimate the true parameters in some models of interest, in both a simulation study and a real data study.

The structure of this manuscript is as follows. In section 2 we briefly summarize some popular multivariate techniques to perform missing data imputation and review the general Gaussian copula model and the extended rank likelihood for semiparametric copula estimation as discussed in <cit.>. In section 3 we describe this extended rank likelihood with random effects and combine the copula model with a multinomial probit model. We outline our algorithm to impute missing data in a multilevel data set using our proposed copula model. In section 4, we present and discuss the results of our simulation and real data studies to evaluate our model. The proposed model is compared against several conventional methods using readily available software packages. Section 5 provides concluding remarks and discusses some future research.

§ BACKGROUND OF MISSING DATA IMPUTATION

Let Y=(Y_obs,Y_mis) denote the `complete' data, with observed part Y_obs and missing part Y_mis. Let θ denote the parameter describing the `complete' data Y. Throughout this paper we assume the data are Missing at Random (MAR) <cit.>, meaning that the probability of missing an entry depends only on the observed data, not on the entry value itself, so that inference about (Y_mis,θ) can be made based on the observed data Y_obs alone, and no extra effort is needed to model the missing data process <cit.>. The MAR assumption cannot be tested except in artificial simulation settings; however, it is a simplifying assumption which can be made more reasonable by expanding the model to include more variables that are related to the missing data. Data augmentation <cit.> is often used as a simulation based computational algorithm to approximate the joint posterior distribution p(θ,Y_mis|Y_obs).
It draws Y_mis from p(Y_mis|Y_obs,θ) and θ from p(θ|Y) iteratively. The draws of θ can be treated as coming from the marginal distribution p(θ|Y_obs), and the draws of Y_mis as coming from p(Y_mis|Y_obs), if our interest lies in filling in the missing values to create complete data sets.

§.§ Multiple Imputation

Having obtained guesses for the missing data from an imputation model (which will be discussed further below), we cannot treat them as the `true' data because of the uncertainty due to nonresponse. <cit.> proposed multiple imputation (MI) to obtain M independent draws of Y_mis from p(Y_mis|Y_obs) to create M complete data sets. Combining rules are then applied to the parameter estimates from each of the M complete data sets to obtain a single inferential result, as follows. Let Q be the target population quantity of interest, for example, the coefficients of a regression model. Suppose q̂_m is the point estimate of Q from the m^th imputed complete data set and ŵ_m is an associated measure of sampling variance, m=1,...,M. Three quantities are required for inference on Q:

q̅ = 1/M ∑_m=1^M q̂_m

B = 1/(M-1) ∑_m=1^M (q̂_m - q̅)^2

W̅ = 1/M ∑_m=1^M ŵ_m

The analyst uses q̅ as the point estimate of Q. The sampling variance of q̅ is estimated by T = W̅ + (1 + 1/M)B. The total variance associated with q̅ is thus a function of the within imputation variance and the between imputation variance. Next we discuss common approaches to impute missing values.

§.§ Approaches to Generate Imputations for Missing Values

A good imputation method aims to preserve relationships among survey variables of interest. The joint modelling (JM) approach usually assumes the data follow an elliptical joint distribution, for example, a multivariate normal or a multivariate t distribution. For continuous variables, some transformations may be needed to approximate the assumed distribution <cit.>. Discrete variables are treated as if they were generated from underlying continuous variables and then discretized.
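The combining rules above are simple enough to state in code. The following sketch (Python; the function name pool_estimates is our own, and it handles a scalar quantity of interest Q) computes q̅, B, W̅ and the total variance T = W̅ + (1 + 1/M)B:

```python
import numpy as np

def pool_estimates(q_hats, w_hats):
    """Rubin's combining rules for M point estimates and their
    sampling variances: returns the pooled estimate and total variance."""
    q_hats = np.asarray(q_hats, dtype=float)
    w_hats = np.asarray(w_hats, dtype=float)
    M = len(q_hats)
    q_bar = q_hats.mean()           # pooled point estimate
    B = q_hats.var(ddof=1)          # between-imputation variance
    W_bar = w_hats.mean()           # within-imputation variance
    T = W_bar + (1.0 + 1.0 / M) * B # total variance of q_bar
    return q_bar, T
```

Applied to the coefficients of the models of interest in sections 4, this pooling step is what the mitools package performs in R.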
Most software packages implement the joint modelling approach by first transforming any variables with missing values into responses that follow a multivariate normal distribution. The transformed responses are then regressed against the fully observed variables. Software packages that implement this approach include norm <cit.> and Amelia <cit.> in R and PROC MI in SAS. Other joint modelling techniques include loglinear models and general location models, specifically designed for categorical data and mixed data respectively <cit.>. Another useful package in R, pan <cit.>, is designed to impute missing values in panel data, assuming a multivariate Gaussian distribution with random effects. <cit.> further extended Schafer's multilevel imputation model by allowing for multivariate response variables at all levels of a data hierarchy, and used Box-Cox type normalizing transformations for continuous non-Gaussian responses. Although elliptical distributions allow for a parsimonious description of data, they are restrictive in that the marginal distributions are fully determined by the parent joint distribution, and in their limited capacity to capture complex dependencies among variables.

The fully conditional specification (FCS) <cit.> approach breaks the joint model into a series of univariate regression models. Generalized linear models are often specified to accommodate different types and shapes of variables, as well as to add constraints among variables. This method has been implemented in many software packages, for instance, mice <cit.> and mi <cit.> in R, ice in STATA <cit.> and the SAS-based software IVEware <cit.>. To the best of our knowledge, there are no available packages to implement multilevel fully conditional specification except for the `mice.impute.2l.norm' function in the mice package in R, which fits mixed effects linear regression models for variables with missing values.
Because of the lack of packages for practitioners, some authors have investigated including indicator variables for clusters <cit.> in the imputation models, or ignoring the clustering effects. The main criticism of the fully conditional methods, however, is the lack of theoretical justification to ensure that the univariate conditional distributions converge to a proper joint distribution.

Several papers have compared JM and FCS MI, but there is no clear conclusion about the circumstances under which practitioners should favour one over the other. <cit.> performed simulations under three missing data mechanisms and their results showed that JM and FCS produce similar results despite the data not being multivariate normal. <cit.> assessed not only the accuracy of the coefficients fitted to models of interest, but also the accuracy of the imputed values. Their study found that FCS imputed categorical variables more accurately than JM, but the differences were small for continuous variables. <cit.> studied the performance of JM and FCS in multilevel settings, and showed using simulations that FCS MI outperforms JM MI in having less bias, and that when the intraclass correlation is small, more accurate parameter estimates are obtained from both JM and FCS.

§.§ Copulas

To provide more flexibility in the marginal distributions while at the same time ensuring a proper joint distribution, we consider copula modelling approaches to impute missing values. The word `copula' means `a link, tie, bond'. In mathematics and statistics, it refers to joining together one-dimensional distribution functions to form a multivariate distribution function. Specifically, suppose the distribution functions for the random variables y_1,...,y_p are F_1(y_1),...,F_p(y_p). Sklar's theorem <cit.> shows that there always exists a function C such that F(y_1,...,y_p)=C(F_1(y_1),...,F_p(y_p)), where the function C is called the copula function.
Each variable is modeled by its marginal distribution F_l(y_l)=u_l, l=1,...,p, where u_l is uniformly distributed, and their dependence is captured by the copula function C. Copula modelling has proven to be very powerful for modeling variables of different types and shapes when there is an underlying dependence among them. It adopts a `bottom-up' strategy where the starting point is the marginal distributions F_l, which are then glued together by the copula function C. In the `top-down' joint modelling approach, the marginal distributions are fully determined by their parent joint distribution, so there is no flexibility to model them. In addition, copula models guarantee the existence of a compatible joint distribution, which is not guaranteed by the fully conditional specification approach. Existing models, like multinomial (ordered) probit models for (ordered) categorical data, can be treated as special cases of copulas, because the underlying latent variables corresponding to each category are assumed to follow a multivariate Gaussian distribution <cit.>.

In a copula model, the parameters are the marginal distributions F_l and the copula function C. <cit.> developed a fully Bayesian estimation procedure to model the joint distribution of both sources of parameters. However, specifying each of the marginal distributions is labour intensive, and variables in real data sets may not be accurately represented without a large number of parameters. Some authors suggested transforming the variables using the empirical distribution F̂_l to obtain pseudo data <cit.> and avoid the parametric estimation of marginal distributions. However, this only applies to continuous variables. To link discrete variables with continuous latent variables, <cit.> provided a simple way of analysing the correlation among variables with meaningful ordering (continuous and ordered categorical variables), via the extended rank likelihood.
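To make the `bottom-up' construction concrete, the following sketch (Python with NumPy/SciPy; the function name and the two margins are illustrative choices of ours, not those of the QASC data) draws from a bivariate Gaussian copula by transforming correlated normals to uniforms and then through arbitrary inverse marginal CDFs:

```python
import numpy as np
from scipy import stats

def sample_gaussian_copula(n, Gamma, marginal_ppfs, seed=0):
    """Draw n observations whose dependence is a Gaussian copula with
    correlation matrix Gamma and whose margins are the given inverse CDFs."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(Gamma)), Gamma, size=n)
    u = stats.norm.cdf(z)  # uniform margins, Gaussian-copula dependence
    return np.column_stack([ppf(u[:, l]) for l, ppf in enumerate(marginal_ppfs)])

# Illustration: a skewed Gamma margin glued to a standard normal margin.
Gamma = np.array([[1.0, 0.6], [0.6, 1.0]])
y = sample_gaussian_copula(1000, Gamma,
                           [lambda u: stats.gamma.ppf(u, a=3, scale=2),
                            stats.norm.ppf])
```

The latent correlation of 0.6 shows up as positive rank dependence in y regardless of how non-Gaussian the margins are, which is exactly the flexibility motivating the copula approach.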
This makes use of the fact that the ordering of the underlying latent variables is consistent with the observed data, and inference about the association parameters can be drawn from the `rank-based' latent variables through a simple parametric form. The extended rank likelihood has already been applied to other closely related models; for example, the general Bayesian Gaussian copula factor model proposed by <cit.> and the bifactor model considered by <cit.> can be treated as imposing a special structure on the correlation matrix of a Gaussian copula.

Using the copula model as an imputation engine is relatively new but has drawn some attention in the literature. <cit.> were among the first authors to consider imputation using a Gaussian copula where the missing data pattern was monotone. <cit.> found that copula based imputation from the Archimedean family compared favourably with nearest neighbour donor imputation and regression imputation by the EM algorithm. <cit.> compared the performance of imputation by the copula model using the extended rank likelihood approach <cit.> with JM (as implemented in Amelia) and FCS (as implemented in MICE) and concluded that the copula imputation approach maintained prediction accuracy at least as well as the other two approaches, with faster convergence of the sampling algorithm.

§ SEMI-PARAMETRIC GAUSSIAN COPULA MODEL

§.§ The Extended Rank Likelihood with Random Effects

Among a variety of copulas, we focus on the Gaussian copula in this paper. For further theoretical details of copulas, see <cit.>, and for a good summary of some applications of copulas, see <cit.>. Rather than assuming a Gaussian distribution on the data Y directly, the Gaussian copula specifies a joint multivariate Gaussian distribution on the corresponding latent variables, defined as follows. Let l=1,...,p denote the index of the l^th random variable. Then the l^th latent variable is z_l=Φ^-1(u_l), where u_l=F_l(y_l).
That is, C(u_1,...,u_p|Γ)=Φ_p(Φ^-1(u_1),...,Φ^-1(u_p)|Γ)=Φ_p(z_1,...,z_p|Γ), where Φ_p(·|Γ) is the cumulative distribution function of the p-variate normal distribution with mean zero and correlation matrix Γ. Note that the Gaussian copula can reach the full range of pairwise correlation (-1,1), and the parameters that need to be estimated come only from the correlation matrix Γ.

<cit.> derived a rank-based likelihood to estimate the correlation matrix Γ so that there is no need to specify the marginal distributions F_l. The idea is that since Φ^-1(F(·)) is a monotone transformation, the ordering of the data Y provides partial information about what z should be; that is, y_i_1l<y_i_2l implies z_i_1l<z_i_2l. Suppose we have in total N observations, n=1,...,N. Observing y=(y_1,...,y_N) tells us that z=(z_1,...,z_N) must lie in the set

{z ∈ ℝ^N×p: max{z_hl: y_hl<y_nl} < z_nl < min{z_hl: y_hl>y_nl}}.

Let D denote the set of all possible z consistent with the ordering of y. Then the event `z ∈ D' can be treated as the observed event upon which inference about Γ is made. The full likelihood can be decomposed as

p(y|Γ, F_1,...,F_p) = p(z∈D, y|Γ, F_1,...,F_p) = p(z∈D|Γ) × p(y|z∈D, Γ, F_1,...,F_p).

<cit.> proved that it is partially sufficient (in the sense of G-sufficiency and L-sufficiency) to carry out inference about Γ based on the density p(z∈D|Γ), and referred to this as the `extended rank likelihood'. In doing so, we lose the information about Γ contained in the density p(y|z∈D,Γ,F_1,...,F_p), but we do not need to estimate the potentially complicated marginal distribution functions, and the extended rank likelihood provides a more general and flexible framework for joint modelling. To take clustering effects into account, we extend Hoff's work by adding random effects to the Gaussian copula model at the latent variable level.
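The truncation region D can be computed one column at a time; a minimal sketch (Python; rank_bounds is our own name) returns the interval for a single entry z_n implied by the observed ordering, with infinite bounds when no smaller or larger observed value exists:

```python
import numpy as np

def rank_bounds(y_col, z_col, n):
    """Truncation bounds for z[n] implied by the ordering of one column y:
    max{z_h : y_h < y[n]} < z[n] < min{z_h : y_h > y[n]}.
    NaN comparisons are False, so a missing y imposes no constraint."""
    below = z_col[y_col < y_col[n]]
    above = z_col[y_col > y_col[n]]
    lo = below.max() if below.size else -np.inf
    hi = above.min() if above.size else np.inf
    return lo, hi
```

Ties in y (repeated values) constrain neither side against each other, matching the strict inequalities in the definition of D.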
The idea is that the clustering of the observed data is carried through to the latent variable level. Our model can be described as

z_ij|b_i1 ∼ N_p(b_i1,Γ_1), b_i1 ∼ N_p(0,Ψ_1),

where i=1,...,m is the group index, j=1,...,n_i is the individual index within group i, Γ_1 is a correlation matrix and Ψ_1 is a variance-covariance matrix for z_ij and b_i1 respectively. Both z_ij and b_i1 are vectors of length p, because we consider the l=1,...,p variables jointly. In this model, the parameters that need to be estimated are (Γ_1,Ψ_1), which can be thought of as splitting the total correlation into two parts: the variability within groups and the variability between groups. However, like any model that relies on the ordering of the data but not their magnitude, model (<ref>) suffers from an identifiability problem without constraints on Γ_1. To see this, if we shift the location of the latent variable z_l by μ_l and scale it by σ_l, the model remains unchanged because the new latent variables satisfy the ordering of the observed data as well. The extended rank likelihood contains only the information about the relative ordering of z, but no information about location or scale. To solve the identifiability problem of scale, we fix Γ_1 to be a correlation matrix instead of a covariance matrix. In other words, there is no need to estimate the variances of z conditional on the random effects, so we fix them at 1. Because the marginal distributions of z have mean equal to 0, there is no identifiability issue for location. We will briefly describe how to add covariates in the discussion section, so that the mean of z is no longer 0.

§.§ Copula Model for Mixed Type Variables

Notice that the extended rank likelihood described above applies only to continuous and ordinal variables, since it makes no sense to consider meaningful numeric values for nominal variables (categorical variables without ordering).
To include nominal variables in the copula model as well, we consider a multinomial probit model <cit.>, which can itself be treated as a Gaussian copula. The idea is to relate a nominal variable to a vector of latent variables which can be thought of as the unnormalized probabilities of choosing each of the categories. Suppose a single nominal variable y has K categories, and we define K-1 latent variables for unit ij as w_ij=(w_ij1,...,w_ij,K-1), which follow a multivariate Gaussian distribution. Since all the variables appear on one side and we model them jointly, there are no covariates as predictors for now; we therefore include only an intercept vector β, representing the relative differences between each of the categories 1,...,K-1 and the baseline category K. To add a second level to the hierarchy, we again include random effects b_i2 in the model:

w_ij = β + b_i2 + ϵ_ij,
b_i2 ∼ N_K-1(0,Ψ_2), ϵ_ij ∼ N_K-1(0,Γ_2),
y_ij = k if w_ijk > w_ijk' and w_ijk > 0, for all k' ≠ k,
y_ij = K if w_ijk < 0 for all k=1,...,K-1.

The rule for deciding the category is a mapping from the latent variable vector to the observed category. Category k=1,...,K-1 is observed if the k^th element of the vector w_ij is the largest and is greater than 0; the last category K is observed if the largest element in w_ij is smaller than 0. We also fix the diagonal elements of Γ_2 equal to 1 for identifiability.

To provide a unified framework of multivariate analysis for mixed type variables, we combine model (<ref>) for variables with ordering and model (<ref>) for variables without ordering as follows:

z_ij|b_i1 ∼ N_p(b_i1,Γ_1), w_ij ∼ N_K-1(β+b_i2,Γ_2),
b_i=(b_i1,b_i2) ∼ N_p+K-1(0,Ψ), Ψ = [Ψ_1 Ψ_12; Ψ_21 Ψ_2],
(z_ij,w_ij)|b_i ∼ N_p+K-1((0,β)+b_i,Γ), Γ = [Γ_1 Γ_12; Γ_21 Γ_2].

The correlations between variables y_1,...,y_p and y_p+1 are modelled through the off-diagonal matrices Ψ_12 and Γ_12 at the group level and the individual level respectively.
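The deterministic rule mapping the latent vector to the observed category can be written directly; a sketch (Python, with an illustrative function name):

```python
import numpy as np

def category_from_latent(w):
    """Map a length K-1 latent vector to a category in 1..K:
    category k if w[k-1] is the largest element and positive,
    the baseline category K if every element is negative."""
    w = np.asarray(w, dtype=float)
    K = len(w) + 1
    k = int(np.argmax(w))
    return k + 1 if w[k] > 0 else K
```

For K=4, the latent vector (0.2, 1.3, -0.4) maps to category 2, while an all-negative vector maps to the baseline category 4, exactly as in the observation rule above.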
Since both Γ_1 and Γ_2 have diagonal elements fixed at 1, the combined matrix Γ is an identifiable correlation matrix.

§.§ A Gibbs Sampler

A Gibbs sampling scheme is constructed to approximate the joint posterior distribution p(β,Ψ,Γ,b,z,w,y_mis|y_obs), where the unknown quantities in model (<ref>) are the parameters (β,Ψ,Γ) and the latent variables (b,z,w), as well as the missing data y_mis. A simple conjugate prior does not exist for a correlation matrix, and we follow the idea in <cit.> of employing a parameter expansion approach <cit.> to facilitate the MCMC sampling. Specifically, we put an Inverse Wishart prior on the matrix Γ̃, which is the semi-conjugate prior in a multivariate Gaussian sampling model. The full conditional distribution of Γ̃ can then be derived analytically. After updating Γ̃ in each iteration, we rescale it to a correlation matrix Γ. For ease of computation, we put an improper flat prior on β and a semi-conjugate Inverse Wishart prior on Ψ, where the hyperparameters are the degrees of freedom ν and the scale matrix Λ:

p(β) ∝ 1, Γ̃ ∼ Inv Wishart(ν_1,Λ_1), Ψ ∼ Inv Wishart(ν_2,Λ_2).
Under these priors, it is straightforward to derive the full conditional distributions for the parameters (β,Γ,Ψ) as follows:

* p(β|…) ∼ N(1/N ∑_i=1^m ∑_j=1^n_i (w_ij - b_i2 - Γ_21Γ_1^-1(z_ij - b_i1)), 1/N(Γ_2 - Γ_21Γ_1^-1Γ_12));

* p(Γ̃|…) ∼ Inv Wishart(ν_1-1+N, Λ_1 + ∑_i=1^m ϵ_i^T ϵ_i), where ϵ_i = (z_i,w_i) - (0,β) - b_i, and Γ_[g,h] = Γ̃_[g,h]/√(Γ̃_[g,g]Γ̃_[h,h]), g,h=1,...,p+K-1; that is, Γ is rescaled from Γ̃ after each draw;

* p(Ψ|…) ∼ Inv Wishart(ν_2+m, Λ_2 + B^T B), where B=(b_1,...,b_m).

From the joint Gaussian distribution of (z,w), we can derive the following conditional distributions for the latent variables z, w and b:

* p(z_ij|…) ∼ N(b_i1 + Γ_12Γ_2^-1(w_ij - β - b_i2), Γ_1 - Γ_12Γ_2^-1Γ_21);

* p(w_ij|…) ∼ N(β + b_i2 + Γ_21Γ_1^-1(z_ij - b_i1), Γ_2 - Γ_21Γ_1^-1Γ_12);

z_ij and w_ij should be sampled from a truncated Gaussian distribution and from a Gaussian distribution under the observed category constraint, respectively; see below for details.

* p(b_i|…) ∼ N(U_i(Γ^-1 ⊗ 1_n_i^T) vec((z_i,w_i) - (0,β)), U_i), where U_i = (Ψ^-1 + n_iΓ^-1)^-1.

The operator ⊗ is the Kronecker product and vec() is the operator that vectorizes a matrix by stacking its columns. Updating the latent variable z is achieved by sampling from a truncated multivariate Gaussian distribution, where the lower and upper bounds for each single entry z_ijl are determined by lw = max(z_hl: y_hl < y_ijl) and up = min(z_hl: y_hl > y_ijl) respectively, where h indexes all the rows of the l^th variable. For example, the lower bound for z_ijl is the maximum value of the latent variable z in the l^th column whose corresponding y value is smaller than y_ijl, and the upper bound is defined analogously. Updating the latent variable w is achieved by sampling from a multivariate Gaussian distribution under the constraint of the observed category, using an acceptance-rejection algorithm <cit.>.
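The rescaling step Γ_[g,h] = Γ̃_[g,h]/√(Γ̃_[g,g]Γ̃_[h,h]) applied after each Inverse Wishart draw is an ordinary covariance-to-correlation conversion; a sketch (Python; the name cov2cor mirrors the R utility of the same name):

```python
import numpy as np

def cov2cor(Sigma):
    """Rescale a covariance draw Sigma to the correlation matrix
    Gamma[g, h] = Sigma[g, h] / sqrt(Sigma[g, g] * Sigma[h, h])."""
    d = 1.0 / np.sqrt(np.diag(Sigma))
    return Sigma * np.outer(d, d)
```

The result always has unit diagonal, which is what restores identifiability of Γ inside the parameter-expanded sampler.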
Specifically, we sample a w vector from the multivariate Gaussian distribution and accept the draw if and only if the maximum element of w occurs at the position of the observed category and is greater than 0, or all the elements of w are smaller than 0 and we observe the reference category K. We continue to sample w until a draw is accepted. When there are missing values in (y_1,...,y_p), the lower and upper bounds for z are undefined, and when a missing value occurs in y_p+1, the observed category in y_p+1 does not exist. In these cases, we simply sample z and/or w from the multivariate Gaussian distributions without the constraints. To sample missing values for variables with ordering, we apply the monotone transformation on z: y_ijl = F̂_l^-1[Φ(z_ijl)], l=1,...,p, where F̂_l is the univariate empirical distribution function of variable y_l. To sample missing values for nominal variables, we choose the category corresponding to the largest element of w if it is greater than 0, and choose the reference category otherwise.

§ SIMULATIONS AND REAL DATA ANALYSIS ON THE QASC

We evaluated the performance of the proposed model through two simulation studies: (i) simulated artificial data with missing values and (ii) the QASC data set with randomly deleted records. We compared the proposed imputation model with other commonly used procedures to treat missing data.

§.§ Simulation Based on Artificial Data

We generated 100 complete multilevel data sets with correlated variables of different types, and then deleted some entries under the MAR assumption. The total number of clusters in each data set was 20, the cluster size was 50, and the five variables X_1, X_2, X_3, X_4, X_5 had Gamma, binary, nominal, ordinal and normal distributions respectively. The variable X_1 followed a skewed Gamma distribution: X_1 ∼ Gamma(3,0.5).
We assumed all subsequent variables were generated depending on the previous ones, to introduce correlation among the variables. Specifically, X_2 was a binary variable such that logit(p_X_2) = X_1 + ϵ_ij, where p_X_2 is the probability that X_2 equals 1 and ϵ_ij ∼ N(0,1). The nominal variable X_3 had 4 categories and was generated by a multinomial probit model, so that 3 latent variables were needed: (l_X_3,1, l_X_3,2, l_X_3,3) ∼ N((X_1,X_2)B_X_3, C_X_3), where B_X_3 is a randomly generated coefficient matrix of dimension 2×3 and C_X_3 is a correlation matrix of dimension 3×3. The category of X_3 was chosen to be k (for k=1,2,3) if l_X_3,k was the largest component and was greater than 0, and to be 4 if max(l_X_3)<0. Because we aimed to create a data set with a multilevel structure, we let the ordinal variable X_4 be generated from a random intercept model, l_X_4 = b_X_4,i + X_1 + X_2 + β_X_3 X_3 + ϵ_ij, with ϵ_ij ∼ N(0,1), b_X_4,i ∼ N(0,ρ_X_4), and β_X_3 a vector of length 3 corresponding to the 3 non-baseline categories of X_3. Three thresholds were used to determine the four levels: the 20%, 30% and 50% quantiles of l_X_4. Lastly, the normally distributed variable X_5 was also generated from a random intercept model, X_5 = b_X_5,i + X_1 + X_2 + β_X_3 X_3 + β_X_4 X_4 + ϵ_ij, where ϵ_ij ∼ N(0,1), b_X_5,i ∼ N(0,ρ_X_5), and β_X_3 and β_X_4 are vectors of length 3.

To create missing data under the MAR assumption, we assumed X_5 was completely observed and that the probabilities of missingness in X_j (j=1,...,4) depended on X_5. Specifically, let p_mis,ij be the probability that observation i is missing its value for the variable X_j, and assume logit(p_mis,ij) = α_j X_5.
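The MAR deletion mechanism logit(p_mis,ij) = α_j X_5 can be sketched as follows (Python; delete_mar and the extreme α values used in the check are our own illustrations, not the study's settings):

```python
import numpy as np

def delete_mar(X, x5, alphas, seed=0):
    """Introduce MAR missingness: entry (i, j) of X is set to NaN with
    probability expit(alphas[j] * x5[i]), so that missingness depends
    only on the fully observed variable x5."""
    rng = np.random.default_rng(seed)
    X = np.array(X, dtype=float)
    p = 1.0 / (1.0 + np.exp(-np.outer(x5, alphas)))  # logistic probabilities
    X[rng.random(X.shape) < p] = np.nan
    return X
```

Tuning alphas per column controls the per-variable missingness rate, which is how the 10%, 30% and 50% scenarios below can be produced.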
By adjusting the parameters α_j, we can control the amount of missingness in each variable. We varied the parameters that generated the data to consider different scenarios: (1) missing rates for each variable from low (10%) and medium (30%) to high (50%); (2) clustering effect from low (ρ_X_4=ρ_X_5=0.2) to high (ρ_X_4=ρ_X_5=1), corresponding to intra-class correlation coefficients of 0.17 and 0.5 respectively. In the imputation step, we set the number of imputations to M=10 <cit.>.

§.§ Simulation Results Summary

To compare the performance of the proposed method with others, we considered six competing methods, some of which have already been implemented in software packages. These methods are listed in Table 2. We used the package mitools in R <cit.> to apply the combining rules (<ref>) after the M complete data sets had been generated. The assessment of the relative performance of each method was based on a comparison of the imputation accuracy, as well as the 95% coverage rates of the coefficients in the following random intercept logistic regression, taken as a model of interest. We chose this model arbitrarily, and believe that the results would also hold for other models of interest. For each of the 100 simulated complete data sets, we fitted the model logit(p(X_2=1)) = b_i + β_0 + β_1 X_1 + β_2 X_3,2 + β_3 X_3,3 + β_4 X_3,4, b_i ∼ N(0,σ^2). We used the glmer() function in the lme4 package in R to obtain parameter estimates for β=(β_0,β_1,β_2,β_3,β_4). These are our `true' parameter values. After deletion of records by MAR, we applied each of the missing data methods listed in Table 2, and calculated point and variance estimates for β using the combining rules.
We reported the average of the squared bias of the coefficient estimates over the 100 data sets as well as the coverage rates of 95% confidence intervals.

[Table 3 about here: a comparison of squared bias and coverage of coefficient estimates of the model of interest under the seven methods to handle missing data, with ICC=0.17 and missing rates of 10%, 30% and 50%.]

[Table 4 about here: a comparison of squared bias and coverage of coefficient estimates of the model of interest under the seven methods to handle missing data, with ICC=0.5 and missing rates of 10%, 30% and 50%.]

Table 3 summarizes the results of the simulation experiments under the three missingness rates (10%, 30% and 50%) using the seven methods, when the ICC used to generate the variables X_4 and X_5 is 0.17. When the missingness rate is 10%, all the approaches give reasonably good results in terms of achieving the nominal 95% coverage rate, though CC and the two joint modelling approaches (JM and Cluster JM) do worse than the others.
A possible reason for this is that the joint modelling approaches specify multivariate Gaussian distributions, which is clearly not true in our data generating process, whereas the sequential imputation approaches (FCS and Cluster FCS) allow more flexible univariate imputation models to best accommodate the different variable types. For the copula-based methods, the empirical distribution function transformations were applied before fitting a multivariate Gaussian distribution on the latent variable scale, where the dependence among the variables was captured. In addition, the squared bias increases with the missingness rate, as expected. With a moderate to high level of missingness, Cluster FCS and our proposed method (Cluster Copula) tend to outperform FCS and Copula_Hoff. While all the methods suffer from under-coverage when the missing rates are 30% and 50%, CC seems to be the worst, producing the most biased results. These results meet our expectations because as the percentage of missing data increases, less observed data is available to capture the complex dependency among the variables. Under the MAR assumption, CC produces the most biased results by using only the complete records, while its alternatives make use of all the observed data.

Table 4 is similar to Table 3, except that performance is evaluated at ICC=0.5. In other words, the data sets exhibit higher levels of clustering. Compared to the results in Table 3, the results are worse across all methods for the higher ICC value. The imputation methods which take clustering effects into account almost always do better than their counterparts, which was less evident in Table 3 when ICC=0.17. Conditional imputation methods do better than the joint modelling approaches, and the two copula-based methods tend to achieve the best results, attaining the smallest squared bias in almost half of the simulation settings.

We also compared the imputation accuracy.
That is, for each missing data value we calculated the discrepancy between the 10 imputed values and the before-deletion true value. Note that this comparison does not apply to the CC method. The Euclidean distance was used to measure the imputation accuracy for the continuous variable X_1 and the ordinal variable X_4:

1/10 ∑_m=1^10 ∑_i=1^N (X_i,true - X_i,imp^(m))^2 / #miss X,

and the misclassification rate was used to measure the imputation accuracy for the binary variable X_2 and the nominal variable X_3:

1 - 1/10 ∑_m=1^10 ∑_i=1^N 1_(X_i,true = X_i,imp^(m)) / #miss X.

Figure 1 shows the results of the imputation accuracy for each simulation study. The points are the means of the Euclidean distances/misclassification rates over the missing observations in a single data set, and the error bars show the 5% and 95% quantiles over the 100 data sets. For variable X_1, which follows a Gamma distribution, there is not much difference in imputation accuracy across the six methods. For the nominal variable X_3, our proposed Cluster Copula method always performs the best except in the top-left panel, while the JM approach is always the worst. The misclassification rates for the binary variable X_2 are smallest in all scenarios when using our proposed Cluster Copula model, but do not differ much from those of the other methods. The distances for the ordinal variable X_4 are again highest for JM, and those for the copula-based methods are smaller than the others when the missing rates are 30% and 50%. Generally speaking, the copula based methods tend to impute categorical variables more accurately, while doing no worse than the other methods for continuous variables. The joint modelling methods, especially JM, give the least accurate imputation, as the multivariate Gaussian distribution assumption does not hold.
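The two accuracy summaries can be sketched as follows (Python; the function names are our own). Each takes the true values at the missing positions and an M × #miss array of imputations:

```python
import numpy as np

def accuracy_numeric(true_vals, imputed):
    """Average over M imputations of the mean squared discrepancy between
    imputed and true values at the missing positions."""
    imputed = np.asarray(imputed, dtype=float)  # shape (M, n_miss)
    return np.mean((imputed - np.asarray(true_vals, dtype=float)) ** 2)

def misclassification(true_vals, imputed):
    """One minus the average proportion of imputed categories
    matching the true categories, over M imputations."""
    imputed = np.asarray(imputed)  # shape (M, n_miss)
    return 1.0 - np.mean(imputed == np.asarray(true_vals))
```

Broadcasting the length-#miss vector of true values against the M × #miss array averages over imputations and missing entries in one step, matching the double sums in the formulas above.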
As the missingness rate and/or ICC increase, all the methods perform worse for every variable, with larger discrepancies from the true values and higher misclassification rates, but the pattern of relative performance among the six methods remains broadly the same.

§.§ Simulation Based on the QASC Data Set

We also ran simulation studies using the QASC data set to evaluate our proposed method and the competing methods. Here we treated all the complete cases in the QASC data set (75.34% of the original data set) as the `true' data, and sub-sampled 300 patients 100 times to create 100 sub data sets. Then, for each of the sub data sets, missing values were created so as to mimic the missing data pattern in the original data set. We distinguish between the demographic variables, which we treat as MCAR, and the process of care variables, which we treat as MAR. Specifically, for the demographic variables `ATSI', `age', `education' and `marital status', values were randomly deleted to roughly match the missingness percentages in Table 1. For the process of care variables and outcome variables, we assumed their missingness depended on the completely observed variables. A missing indicator variable, equal to 1 if an entry was missing, was associated with every variable with missing data. For the missing indicators, we fitted logistic regression models on the original data set for `time taken to hospital', `mean temperature', `modified Rankin Scale', `Bartell Index', `physical health score' and `mental health score' respectively against `gender', `period' and `treatment', and the probabilities of missingness for the sub-sampled data sets were determined by the predicted values of these logistic regression models.
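The MAR mechanism described above, in which missingness probabilities are driven by a logistic model in fully observed covariates, can be mimicked directly. A sketch with hypothetical coefficients (the values below are purely illustrative, not those estimated from the QASC data):

```python
import math
import random

def mar_indicator(covariates, coefs, intercept, rng):
    """Draw a missingness indicator (1 = missing) whose probability is a
    logistic function of fully observed covariates -- a MAR mechanism."""
    eta = intercept + sum(b * x for b, x in zip(coefs, covariates))
    prob = 1.0 / (1.0 + math.exp(-eta))  # fine for moderate eta
    return 1 if rng.random() < prob else 0

rng = random.Random(0)
# hypothetical coefficients for (gender, period, treatment)
coefs, intercept = [0.4, -0.3, 0.5], -1.0
rows = [(0, 1, 1), (1, 0, 0), (1, 1, 1)] * 100
# each entry is flagged missing with its model-implied probability
indicators = [mar_indicator(row, coefs, intercept, rng) for row in rows]
```

Because the probability depends only on observed covariates, deleting the flagged entries produces data that are missing at random rather than missing completely at random.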
We noticed that 9.39% of `Bartell Index', `physical health score' and `mental health score' were missing together, and we also took this into account when creating missing data.

The relative performance of each method was again compared based on the average imputation accuracy and on the squared bias and 95% coverage rate of interval estimates of parameters in some models of interest. Ten imputations were created for each of the six imputation methods. The accuracy is shown in Table 5. All discrepancies between the imputed values and the true values were measured by Euclidean distance, except for the nominal variables `marital status' and `ATSI', which used misclassification rates. Our proposed imputation model achieves the smallest disparity more than half of the time (7/11), and Copula_Hoff is superior in performance to the other four methods. It is interesting to note that the joint modelling methods perform better than their FCS counterparts (JM vs. FCS and Cluster JM vs. Cluster FCS) and that adding clustering effects enhances the imputation accuracy.

The models of interest are based on the models fitted in <cit.>. They fitted logistic regression models for the dichotomous outcomes: `Bartell Index' with cut points equal to 60 and 95, and `modified Rankin Scale' with cut point equal to 2; and linear models for the continuous variables `physical health score' and `mental health score', including as predictors the variables `treatment', `period' and the interaction between `treatment' and `period'.
The models are

logit(mrs2) = b_i + β_0 + β_1 period + β_2 treatment + β_3 treatment*period,
logit(bi60) = b_i + β_0 + β_1 period + β_2 treatment + β_3 treatment*period,
logit(bi95) = b_i + β_0 + β_1 period + β_2 treatment + β_3 treatment*period,
mcs = b_i + β_0 + β_1 period + β_2 treatment + β_3 treatment*period + ϵ,
pcs = b_i + β_0 + β_1 period + β_2 treatment + β_3 treatment*period + ϵ.

The coefficient β_3 and its p-value were used to assess whether the pre-post change in the intervention group differed significantly from the change in the control group. All the models included a random intercept term, b_i, to capture the clustering effects.

We first fitted the five models of interest to the completely observed patients in each of the 100 sub data sets, obtained the parameter estimates β=(β_0,β_1,β_2,β_3), and treated them as the true values. Then the parameter estimates from all seven competing methods were compared against these true parameters, and the 95% coverage rates were obtained from the 100 repetitions. The results are reported in Table 6. The CC approach has the largest bias and the smallest coverage rate. This is not unexpected: the missing data were generated under the MAR assumption, and CC uses only about 40% of the data to fit the models, so the coefficient estimates are biased with large uncertainty. The proposed method Cluster Copula and Copula_Hoff outperform the other methods, with Copula_Hoff doing marginally better than Cluster Copula for the first and second logistic models `mrs2' and `bi60', and Cluster Copula doing better for the fifth linear model for `pcs'. There is little difference between the two copula-based methods, because the clustering effects were small in the QASC data set (ICC in the models of interest lay between 0.009 and 0.026), and the only nominal variable (marital status) considered in the imputation models did not enter into the models of interest.
In other words, taking the clustering effect into account and giving special treatment to the nominal variable does not affect the inference much in this case. However, we do observe that when the ICC is higher, as for the variable `pcs', our proposed model achieves better imputation accuracy.

Table 6: A comparison of squared bias in point estimates of coefficients, standard deviations and 95% coverage of the five models of interest under seven treatments of missing data in the 100 sub-sampled QASC data sets. (The table reports the bias, standard deviation and coverage of β_0 to β_3 under CC, JM, FCS, Cluster JM, Cluster FCS, Copula_Hoff and Cluster Copula for the models based on the modified Rankin Scale, Bartell Index 60, Bartell Index 95, mental health score and physical health score.)

§.§ Application to QASC Data Set

We now apply our proposed method to impute missing data in the original QASC data set with a total of 1480 patients. Unlike in Section 4.3, where we deliberately deleted some records so that we knew the true values, here we do not know the true missing values and therefore cannot measure imputation accuracy.
We check the imputation quality by using the diagnostics discussed in <cit.> and <cit.>. Specifically, we examined the trace plots of the parameters and convergence in our proposed model (not shown here), and plotted the univariate densities/frequencies of the fully observed values (in black) and the average imputed values (in six colors) for some variables (see Figure 2). All the imputation methods generally agree with the complete data for the continuous variables `length of stay' and `age', and there are small disagreements for the variables `mental health score' and `physical health score'. The imputed values are more spread out for `Bartell Index' than the observed data, which is concentrated around 0. Overall, the frequencies of the categorical variables match the observed data with a few exceptions; for example, FCS imputes significantly more at level 4 for `Marital status', and JM does not have any imputed values that fall into level 6 for `Modified Rankin Scale'. A departure from the observed data does not necessarily mean the imputation is poor; rather, it may mean that the distribution of the missing data differs from that of the observed data, for instance because the missing data process is MAR rather than MCAR, or because of lack of fit in the imputation model.

We also report the point estimates of coefficients as well as the standard deviations and p-values of the five models of interest in Table 7, for CC and the six imputation methods. While there are differences in the parameter estimates, the p-values across all the methods generally agree with each other, leading to the same clinical conclusions.
There are some exceptions: for example, in the random intercept logistic regression model for `Bartell Index 60', the coefficient of the interaction term β_3 is significant only at the 0.1 level for CC, FCS and our proposed Cluster Copula method, but is significant at the 0.05 level for JM, Cluster JM, Cluster FCS and Copula_Hoff.

Table 7: A comparison of point estimates of coefficients, standard deviations and p-values of the five models of interest under seven treatments of missing data in the original QASC data set.

§ DISCUSSION

In this paper, we developed a copula-based imputation model for multilevel data sets with mixed data.
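The core latent-scale construction behind this model (transform each margin by its empirical distribution function, then map to the Gaussian scale with Φ^-1) can be sketched in a few lines. This is our own minimal illustration rather than the implementation used in the paper; in particular, the 1/(n+1) rescaling of the ranks, which keeps every probability strictly inside (0, 1), and the tie-breaking rule are assumptions:

```python
from statistics import NormalDist

def to_latent_gaussian(y):
    """Map one observed margin onto the latent Gaussian scale:
    z_i = Phi^{-1}(F_hat(y_i)), where F_hat is the empirical CDF.
    Ranks are rescaled by 1/(n + 1) so every probability lies in (0, 1)."""
    n = len(y)
    order = sorted(range(n), key=lambda i: y[i])  # ties broken by index
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1.0)
    return [NormalDist().inv_cdf(p) for p in u]
```

Applying this transformation to each variable separately yields latent variables whose joint dependence can then be modelled with a multivariate Gaussian distribution and group-level random effects, as in the Cluster Copula model above.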
Copula-based imputation models have a sound theoretical foundation, and we have shown through simulations that they achieve reasonably accurate predictions of the missing values and good recovery of parameters in models of interest. The copula-based imputation models outperform the competing methods, especially when the variable distributions depart from normality. We also recommend taking clustering effects into account to incorporate information from the grouping structure in the analysis. This is confirmed by our simulation results: when the ICC is high, imputation models with added random effects achieve better results.

One extension to our models is to add some `fixed' covariates. For the copula models in Section 3, all the variables appear on one side of the equations in (<ref>) and we model their relationship through the correlation matrices on the latent variable scale. But it is often of interest to examine both the relationships among the variables on the response side and the relationship between the responses and some covariates. For example, in the QASC data set `treatment' is fixed by design at the hospital level, so we can treat it as a regressor. By doing so, the treatment effects on some process of care variables can be detected directly through the copula model on the latent variable scale. Here we consider variables with an ordering; the extension to nominal variables is straightforward. Let i=1,...,m be the group index, j=1,...,n_i the individual index within group i, and l=1,...,p the variable index. Suppose the first k variables have common covariates x_i1,...,x_iq at the group level; in other words, these covariates are fixed within group i. The correlation matrices for the residuals and the random effects b_i are Γ and Ψ respectively, as before, but the mean of the latent variables z is no longer zero.
Again we use the monotone transformation z_ijl = Φ^-1(F(y_ijl)) to obtain the extended rank likelihood; the model then becomes

z_ij ∼ N(b_i + x_i(β, 0), Γ), b_i ∼ N(0, Ψ),

or, written out for one individual,

(z_ij1, ..., z_ijk, ..., z_ijp) ∼ N( b_i + (x_i1, ..., x_iq) [ β_11 ⋯ β_1k 0 ⋯ 0; ⋮ ⋮ ⋮ ⋮; β_q1 ⋯ β_qk 0 ⋯ 0 ], Γ ),

where the q × p coefficient matrix has zero columns for the last p-k variables. It is straightforward to derive the full conditional distributions for the Gibbs sampler, and we omit the details here.

Choosing the form of the copula is another critical yet complicated task. <cit.> and <cit.> provide some guidance on choosing among existing copulas or creating new families of copulas. In this paper we focused on the Gaussian copula because it is easy to extend to higher dimensions and computationally convenient. However, the main drawbacks of the Gaussian copula are the symmetry assumption and the absence of tail dependence <cit.>. Therefore, goodness-of-fit tests should be used to check whether other forms of copula are needed, for example a (mixture of skewed) t-copula.

§ REFERENCES

Abayomi, Kobi; Gelman, Andrew, and Levy, Marc. Diagnostics for multivariate imputations. Journal of the Royal Statistical Society: Series C (Applied Statistics), 57(3): 273–291, 2008.

Aitchison, John and Bennett, Jo A. Polychotomous quantal response by maximum indicant. Biometrika, 57(2): 253–262, 1970.

Albert, James H and Chib, Siddhartha. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422): 669–679, 1993.

Bates, Douglas; Mächler, Martin; Bolker, Ben, and Walker, Steve. Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823, 2014.

Buuren, Stef and Groothuis-Oudshoorn, Karin.
mice: Multivariate imputation by chained equations in R. Journal of Statistical Software, 45(3), 2011.

Chib, Siddhartha and Greenberg, Edward. Analysis of multivariate probit models. Biometrika, 85(2): 347–361, 1998.

Demarta, Stefano and McNeil, Alexander J. The t copula and related copulas. International Statistical Review, pages 111–129, 2005.

Drechsler, Jörg. Multiple imputation of multilevel missing data—rigor versus simplicity. Journal of Educational and Behavioral Statistics, 40(1): 69–95, 2015.

Eddings, W and Marchenko, Y. Accounting for clustering with mi impute. Stata FAQ, available online at http://www.stata.com/support/faqs/statistics/clustering-and-mi-impute/, 2011.

Fox, John. Package `norm'. 2013.

Genest, Christian; Ghoudi, Kilani, and Rivest, L-P. A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika, 82(3): 543–552, 1995.

Goldstein, Harvey; Carpenter, James; Kenward, Michael G, and Levin, Kate A. Multilevel models with multivariate mixed response types. Statistical Modelling, 9(3): 173–197, 2009.

Graham, John W; Olchowski, Allison E, and Gilreath, Tamika D. How many imputations are really needed? Some practical clarifications of multiple imputation theory. Prevention Science, 8(3): 206–213, 2007.

Gruhl, Jonathan; Erosheva, Elena A, and Crane, Paul K. A semiparametric approach to mixed outcome latent variable models: Estimating the association between cognition and regional brain volumes. The Annals of Applied Statistics, 7(4): 2361–2383, 2013.

Hoff, Peter D. Extending the rank likelihood for semiparametric copula estimation. The Annals of Applied Statistics, pages 265–283, 2007.

Hollenbach, Florian M; Metternich, Nils W; Minhas, Shahryar, and Ward, Michael D. Fast & easy imputation of missing social science data. arXiv preprint arXiv:1411.0647, 2014.

Honaker, James; King, Gary, and Blackwell, Matthew. Amelia II: A program for missing data. Journal of Statistical Software, 45(7): 1–47, 2011.

Käärik, Ene and Käärik, Meelis. Modeling dropouts by conditional distribution, a copula-based approach. Journal of Statistical Planning and Inference, 139(11): 3830–3835, 2009.

Kole, Erik; Koedijk, Kees, and Verbeek, Marno. Selecting copulas for risk management. Journal of Banking and Finance, 31(8): 2405–2423, 2007.

Kropko, Jonathan; Goodrich, Ben; Gelman, Andrew, and Hill, Jennifer. Multiple imputation for continuous and categorical data: Comparing joint and conditional approaches. Columbia University, Department of Statistics, New York, 2013.

Di Lascio, F. Marta L.; Giannerini, Simone, and Reale, Alessandra. Exploring copulas for the imputation of complex dependent data. Statistical Methods and Applications, pages 159–175, 2015.

Lee, Katherine J and Carlin, John B. Multiple imputation for missing data: fully conditional specification versus multivariate normal imputation. American Journal of Epidemiology, kwp425, 2010.

Little, Roderick JA and Rubin, Donald B. Statistical Analysis with Missing Data. John Wiley & Sons, 2002.

Liu, Jun S and Wu, Ying Nian. Parameter expansion for data augmentation. Journal of the American Statistical Association, 94(448): 1264–1274, 1999.

Lumley, T. mitools: Tools for multiple imputation of missing data. R package version 2.0, 2014.

Middleton, Sandy; McElduff, Patrick; Ward, Jeanette; Grimshaw, Jeremy M; Dale, Simeon; D'Este, Catherine; Drury, Peta; Griffiths, Rhonda; Cheung, N Wah; Quinn, Clare, and others. Implementation of evidence-based treatment protocols to manage fever, hyperglycaemia, and swallowing dysfunction in acute stroke (QASC): a cluster randomised controlled trial. The Lancet, 378(9804): 1699–1706, 2011.

Murray, Jared S; Dunson, David B; Carin, Lawrence, and Lucas, Joseph E. Bayesian Gaussian copula factor models for mixed data. Journal of the American Statistical Association, 108(502): 656–665, 2013.

Nelsen, Roger B. An Introduction to Copulas. Springer Science & Business Media, 2007.

Pitt, Michael; Chan, David, and Kohn, Robert. Efficient Bayesian inference for Gaussian copula regression models. Biometrika, 93(3): 537–554, 2006.

Raghunathan, Trivellore E; Lepkowski, James M; Van Hoewyk, John, and Solenberger, Peter. A multivariate technique for multiply imputing missing values using a sequence of regression models. Survey Methodology, 27(1): 85–96, 2001.

Raghunathan, Trivellore E; Solenberger, Peter W, and Van Hoewyk, John. IVEware: Imputation and variance estimation software. Ann Arbor, MI: Survey Methodology Program, Survey Research Center, Institute for Social Research, University of Michigan, 2002.

Royston, Patrick. Multiple imputation of missing values: update of ice. Stata Journal, 5(4): 527, 2005.

Rubin, Donald B. Inference and missing data. Biometrika, 63(3): 581–592, 1976.

Rubin, Donald B. Multiple Imputation for Nonresponse in Surveys. Wiley Series in Probability and Statistics, 1987.

Schafer, Joseph L. Analysis of Incomplete Multivariate Data. CRC Press, 1997.

Schafer, Joseph L and Yucel, Recai M. Computational strategies for multivariate linear mixed-effects models with missing values. Journal of Computational and Graphical Statistics, 11(2): 437–457, 2002.

Sklar, M. Fonctions de répartition à n dimensions et leurs marges. Université Paris 8, 1959.

Su, Yu-Sung; Gelman, Andrew; Hill, Jennifer, and Yajima, Masanao. Multiple imputation with diagnostics (mi) in R: Opening windows into the black box. Journal of Statistical Software, 45(2): 1–31, 2011.

Tanner, Martin A and Wong, Wing Hung. The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82(398): 528–540, 1987.

Trivedi, Pravin K and Zimmer, David M. Copula Modeling: An Introduction for Practitioners. Now Publishers Inc, 2007.

Van Buuren, Stef. Multiple imputation of discrete and continuous data by fully conditional specification. Statistical Methods in Medical Research, 16(3): 219–242, 2007.

Zhao, Enxu and Yucel, Recai M. Performance of sequential imputation method in multilevel applications. In American Statistical Association Proceedings of the Survey Research Methods Section, 2009.
http://arxiv.org/abs/1702.08148v1
{ "authors": [ "Jiali Wang", "Bronwyn Loong", "Anton H. Westveld", "Alan H. Welsh" ], "categories": [ "stat.ME" ], "primary_category": "stat.ME", "published": "20170227050258", "title": "A Copula-based Imputation Model for Missing Data of Mixed Type in Multilevel Data Sets" }
Running head: Urban Analysis of Philadelphia

Urban Vibrancy and Safety in Philadelphia

Colman Humphrey^1, Shane T. Jensen^1, Dylan S. Small^1 and Rachel Thurston^2

^1 Department of Statistics, The Wharton School, University of Pennsylvania
^2 Stantec

Correspondence: Shane T. Jensen, Department of Statistics, The Wharton School, University of Pennsylvania, 463 Huntsman Hall, 3730 Walnut Street, Philadelphia, PA, USA 19102. stjensen@wharton.upenn.edu

July 10, 2016; Revised January 10, 2017

Abstract. Statistical analyses of urban environments have recently been improved through publicly available high resolution data and mapping technologies that have been adopted across industries. These technologies allow us to create metrics to empirically investigate urban design principles of the past half-century. Philadelphia is an interesting case study for this work, with its rapid urban development and population increase in the last decade. We outline a data analysis pipeline for exploring the association between safety and local neighborhood features such as population, economic health and the built environment. As a particular example of our analysis pipeline, we focus on quantitative measures of the built environment that serve as proxies for vibrancy: the amount of human activity in a local area. Historically, vibrancy has been very challenging to measure empirically. Measures based on land use zoning are not an adequate description of local vibrancy, and so we construct a database and set of measures of business activity in each neighborhood. We employ several matching analyses to explore the relationship between neighborhood vibrancy and safety, such as comparing high crime versus low crime locations within the same neighborhood. We find that neighborhoods with more vacancy are associated with higher crime, but that within neighborhoods, crimes tend not to be located near vacant properties. We also find that longer term residential ownership in a local area is associated with lower levels of crime. In addition, we find that more crimes tend to occur near business locations, but businesses that are active (open) for longer periods are associated with fewer crimes. As additional sources of urban data become available, our analysis pipeline can serve as the template for further investigations into the relationships between safety, economic factors and the built environment at the local neighborhood level.

§ INTRODUCTION

Throughout history there have been many perspectives on the planning of cities, with a notable clash between dense, organically-formed urban spaces and large-scale clearing and planning of “superblocks” and automobile-centric layouts. The former perspective viewed city development as a social enterprise created by many hands, whereas the top-down central planning approach involved less input from the residents affected by city changes. The urban renewal movement of the 1960s and 1970s is the largest example of this latter effort, but the same mentality still drives many current development decisions.

One historical motivation for top-down urban renewal projects was the idea that cities were over-crowded. <cit.> discusses both positive and negative perspectives on the effects of population density in urban settings. <cit.> argues that the emotional stress caused by high population density produces negative attitudes and hostility among the populace.
<cit.> find both positive and negative effects of population density and suggest that population size is a more important factor for attitudes and behavior in urban environments. Earlier responses to anti-density rhetoric and the challenges of urban living during the industrial age resulted in code regulations, restrictive land use zoning, and sometimes large-scale clearing of entire neighborhoods. During the age of urban renewal, dense urban environments were demolished and replaced by trending architectural works, civic monuments and tree-lined boulevards intended to reduce population density and ease automobile traffic, along with large housing projects for displaced communities. Over time, a large number of these projects failed to attract pedestrian activity and resulted in high-crime housing areas.

In her seminal work The Death and Life of Great American Cities (1961), Jane Jacobs challenged the proponents of urban renewal and outlined several alternative hypotheses for sustaining successful urban environments. Many of her ideas were based on her own anecdotal observations of urban residents, but they can now be investigated quantitatively using recently available urban data. Jacobs was a pioneering voice for the concept of urban vibrancy: a measure of positive activity or energy in a neighborhood that makes an urban place enjoyable to its residents despite the challenges of urban living.

An important term coined by Jane Jacobs was “eyes on the street”, which summarized her viewpoint that safer and more vibrant neighborhoods were those that had many people engaging in activities (either commercial or residential) on the street level at different times of the day <cit.>. This concept of eyes on the street has more recently been expressed as the “natural surveillance” component of the Crime Prevention through Environmental Design movement <cit.>.
These contemporary theories argue that the likelihood of criminal activity is strongly linked to the presence or absence of people on the street. As <cit.> states: “Criminals do not like to be seen or recognized, so they will choose situations where they can hide and easily escape.” Policies which promote vibrancy and activity could potentially benefit crime prevention. The recent explosion in high resolution data on cities offers an exciting opportunity for quantitative evaluation of contrasting urban development perspectives as well as current urban planning efforts. The goal of this paper is to outline a pipeline for data collection and analysis of the relationships between neighborhood safety, economic and demographic conditions and the built environment within urban environments. As an example, we target our analysis pipeline towards a specific task: using high resolution data to create quantitative measures of the built environment that can serve as proxies for the human vibrancy of a local area. We then investigate the association between these vibrancy measures and safety in the city of Philadelphia. We focus on vibrancy measures based on land use as well as business activity, which follows the “natural surveillance” idea that the presence of open businesses encourages safety through the store front presence of both staff and customers.
Past investigations into the built environment and safety include <cit.>, which investigates the influence of liquor establishments on crime in Cleveland. <cit.> explore the trajectories of crime over a fourteen year period on specific street segments in Seattle, while <cit.> examines the association between crime and specific characteristics of parks in Philadelphia. <cit.> provides a recent comprehensive review of past research into the association between the built environment and safety, where many quasi-experimental studies have shown that changes in housing, zoning and public transit can help to manage crime. In Section <ref>, we will try to emulate a quasi-experimental setting in our own analysis by comparing locations within census block groups, thereby matching locations in terms of their economic and demographic characteristics. The effects of natural surveillance on neighborhood vibrancy can be both subtle and complicated. The presence of a commercial business can encourage vibrancy through the presence of many people, or can give a sense of vacancy and isolation to an area if it is closed during a particular time of the day. In order to get an accurate picture of whether commercial businesses help to encourage safety, we will need to examine whether or not those businesses are open and active, as we outline in Section <ref>. Jane Jacobs also wrote about the positive benefits of residents that feel invested long term in their communities. The defensible space hypothesis of <cit.> argues that an area is safer when people feel a sense of ownership and responsibility in their local community. We will investigate this hypothesis through the relationship between the long term ownership tenure of residents and neighborhood safety. Philadelphia is an interesting case study for this work as it is currently encountering many contemporary issues in urban revival, population growth and desirability.
Recent work has shown that urban city centers are growing relative to their suburban counterparts in many areas of the country <cit.>. <cit.> finds an association between population movement of high-income and college-educated households and declining crime rates in central city neighborhoods. We first outline our data collection for the city of Philadelphia in Section <ref> and then explore the associations between safety and several economic, population and land use measures in Section <ref>. To get a more detailed picture of neighborhood vibrancy, we compile a database and several measures of business vibrancy in Section <ref>. In Section <ref>, we employ several matching analyses to evaluate the association between business vibrancy, land use, long term ownership and safety within local neighborhoods. We find several interesting associations between crime and our measures of vibrancy, land use, and ownership tenure, though we are not implying causal effects with these findings. We conclude with a brief discussion in Section <ref>. In order to encourage replication of our analyses and adaptation to other research questions, we have made the code and public data that were used in our analyses available as a github repository at: https://github.com/ColmanHumphrey/urbananalytics

§ URBAN DATA IN PHILADELPHIA

Our analysis will be based on the geographical units defined by the US Census Bureau. Philadelphia county is divided into 384 census tracts, which are divided into 1,336 block groups, which are in turn divided into 18,872 blocks. Population and economic data are provided by the US Census Bureau, crime data are provided by the Philadelphia Police Department and land use data are provided by the City of Philadelphia. A general theme of our urban work is that results can vary depending on the resolution level of the data.
Most of our analyses will be done at the block group level, which allows for the best compatibility between our economic and built environment data, but we also perform several analyses at the block level. Shape files from the US Census Bureau delineate the boundaries and area of each census block and block group. Shape files from the City of Philadelphia delineate the boundaries and area of each lot in Philadelphia. For the vast majority of lots in Philadelphia, the lot is entirely contained within a single Census Bureau block. Figure <ref> summarizes the data sources used in our analysis and we provide additional details for each data source in Sections <ref>-<ref>.

§.§ Population Data

Our population demographic data were pulled from the census website by setting the geography as all blocks in Philadelphia and setting the data source as “Hispanic or Latino Origin By Race” (which is SF1 P5 in their database). The raw demographic data give the population count in each block from the 2010 census. Of the 18,872 blocks in Philadelphia, 4,558 have no residents (e.g. parks, industrial areas, etc.). At the block level, we restrict our analysis to blocks with at least 25 people, which gives 12,874 blocks that contain 98.9% of the population. At the block group level, we restrict our analysis to block groups with at least 400 people in them, which gives 1,325 block groups (out of 1,336) that contain 99.96% of the population. We calculate the population count and population density in each block group i from the raw population data and using the area of each block group from the US Census Bureau shape files.

§.§ Economic Data

Our economic data were pulled from the American Community Survey on the census website: tables B19301 for income and C17002 for poverty, both from 2015. These data are only available at the block group resolution level.
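The population filtering and density calculation above are straightforward; a minimal sketch in Python (the `population` and `area_km2` field names are illustrative assumptions, since the actual pipeline works from Census shape files):

```python
# Sketch of the block-level filtering: keep units with at least `min_pop`
# residents and attach a population-density field (people per square km).
def filter_units(units, min_pop):
    kept = []
    for u in units:
        if u["population"] >= min_pop:
            u = dict(u, density=u["population"] / u["area_km2"])
            kept.append(u)
    return kept

blocks = [
    {"id": "b1", "population": 120, "area_km2": 0.04},
    {"id": "b2", "population": 10,  "area_km2": 0.02},  # dropped: < 25 people
    {"id": "b3", "population": 0,   "area_km2": 0.50},  # e.g. a park
]
analysis_blocks = filter_units(blocks, min_pop=25)  # block-level threshold
```

The same function with `min_pop=400` would implement the block-group threshold.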
For each block group in Philadelphia, we have the per-capita income and the fraction of the population in seven different brackets of income-to-poverty-line ratios: [0, 0.5), [0.5, 1), [1, 1.25), [1.25, 1.5), [1.5, 1.85), [1.85, 2), [2, ∞). For interpretation, the [0.5, 1) bracket represents families that have income between 50% of the poverty line and the poverty line. The poverty line threshold for each household is defined by the Census Bureau according to the size and composition of the household. As an example, a family of four with two children has a poverty line threshold of $23,999. We define a scalar poverty measure for each block group based on the weighted sum of the proportion of block group households in each of the seven poverty brackets:

Poverty_i = ∑_{q=1}^{7} w_q p_{i,q}

where p_{i,1} is the proportion of block group i households in the lowest bracket [0, 0.5) and p_{i,7} is the proportion of block group i households in the highest bracket [2, ∞). We employ linearly decreasing weights w = [1, 5/6, 4/6, 3/6, 2/6, 1/6, 0] to give highest weight to the brackets with highest poverty. Our Poverty_i metric varies from 0 to 1, with larger values implying higher poverty. Maps by block group of population density, per-capita income and poverty in Philadelphia are given in Figure S1 of our supplementary materials.

§.§ Land Use Zoning Data

Land use zoning data were downloaded from the City of Philadelphia. The land use data consist of a shapefile that divides the city into approximately 560,000 lots and provides the area and registered land use zoning designation (commercial, residential, industrial, vacant, transportation, water, park, civic, recreation, culture, and cemetery) for each of these lots.
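The poverty metric above is a simple weighted average and can be computed directly; a minimal sketch:

```python
# Poverty metric: weighted sum over the seven income-to-poverty-line
# brackets [0,0.5), [0.5,1), [1,1.25), [1.25,1.5), [1.5,1.85), [1.85,2), [2,inf),
# with weights decreasing linearly from 1 (poorest bracket) to 0 (richest).
W = [1, 5/6, 4/6, 3/6, 2/6, 1/6, 0]

def poverty(p):
    """p: proportions of a block group's households in each bracket
    (must sum to 1). Returns a value in [0, 1]; larger means poorer."""
    assert len(p) == 7 and abs(sum(p) - 1) < 1e-9
    return sum(w_q * p_q for w_q, p_q in zip(W, p))
```

For example, a block group with every household above twice the poverty line scores 0, and one with every household below half the poverty line scores 1.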
Almost all Philadelphia city lots are contained entirely within a single census block (and block group), which makes it easy to calculate the proportion of land in a census block (or block group) that is designated with a particular land use. Note that we combine the “commercial business” and “commercial consumer” categories into a single commercial designation, and all three “residential” categories into a single residential designation. For the rest of this paper, mixed use refers to the designation of “commercial / residential mixed”. We create several land use measures from these zoning designations. First, we calculate the fraction of area in each geographic unit (either block or block group) i that is designated as “Vacant”,

Vacant.Prop_i = Area_i(Vacant) / Area_i

Second, we calculate the ratio of the area in each geographic unit (either block or block group) i that is commercial versus residential,

ComRes.Prop_i = Area_i(Commercial) / [Area_i(Commercial) + Area_i(Residential)]

Finally, we calculate the proportion of every block or block group that is designated as mixed use,

MixedUse.Prop_i = Area_i(Mixed Use) / Area_i

These land use zoning metrics provide our first set of proxy measures for the vibrancy of a local neighborhood. Further details about the land use designations and our created land use metrics are given in Figure S2 of our supplementary materials. Philadelphia's zoning procedures were revised in 2012 (<http://www.phila.gov/li/Pages/Zoning.aspx>). Our zoning data were downloaded in June 2014, and all of our analyses are based on that snapshot. While most of the city's zoning remains unchanged, lots can be rezoned through applications on a continual basis.

§.§ Property Data

Property data are made available by the City of Philadelphia in the Property dataset.
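The three land use metrics can be sketched as a small aggregation over the lots in one geographic unit. This is a simplification: real lots carry more attributes, and the zoning labels here are shortened stand-ins for the designations above.

```python
def land_use_measures(lots):
    """lots: list of (area, zoning) pairs within one block or block group.
    Returns Vacant.Prop, ComRes.Prop and MixedUse.Prop for that unit."""
    area = {}
    for a, z in lots:
        area[z] = area.get(z, 0.0) + a
    total = sum(area.values())
    com = area.get("commercial", 0.0)
    res = area.get("residential", 0.0)
    return {
        "vacant_prop": area.get("vacant", 0.0) / total,
        # undefined when a unit has no commercial or residential land
        "comres_prop": com / (com + res) if com + res > 0 else None,
        "mixeduse_prop": area.get("mixed", 0.0) / total,
    }

# Toy unit: 40 residential, 10 commercial, 30 vacant, 20 mixed (area units).
example = land_use_measures(
    [(40.0, "residential"), (10.0, "commercial"),
     (30.0, "vacant"), (20.0, "mixed")])
```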
These data contain the estimated market value, size, age and various other characteristics for each property in the city of Philadelphia. With these data, we focus our analysis pipeline on long term residential ownership, which we estimate by tabulating the time period since the last sale for every residential property in Philadelphia. For any location in the city, we can calculate the average ownership tenure: the time period since the last sale for every residential property around that specified location.

§.§ Crime Data

Crime data for Philadelphia come from the Philadelphia Police Department through the website. The data contain the date and time of each crime as well as the location in terms of latitude and longitude (WGS84 decimal degrees). Each crime is categorized into one of several types: Homicide, Sexual, Robbery, Assault, Burglary, Theft, Motor Theft, Arson, Vandalism, or Disorderly Conduct. For our subsequent analysis, we combine these types into two super-categories of crimes: a. violent crimes (Homicide, Sexual, Robbery and Assault) and b. non-violent crimes (Burglary, Theft, Motor Theft, Arson, Vandalism, and Disorderly Conduct). Further details about the relative frequency and spatial distribution of crimes in Philadelphia are provided in Figure S3 of our supplementary materials.

§ EXPLORING NEIGHBORHOOD FACTORS ASSOCIATED WITH SAFETY IN PHILADELPHIA

§.§ Association between Crime and Population

We first examine whether population is associated with either violent or non-violent crimes in Philadelphia.
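The two super-categories of crime can be encoded as simple lookup sets, using the type labels listed above:

```python
# Super-categories of crime types, as defined in the Crime Data section.
VIOLENT = {"Homicide", "Sexual", "Robbery", "Assault"}
NON_VIOLENT = {"Burglary", "Theft", "Motor Theft", "Arson",
               "Vandalism", "Disorderly Conduct"}

def crime_counts(crime_types):
    """Count violent and non-violent crimes in a list of crime-type labels,
    e.g. all crimes falling inside one block group."""
    violent = sum(c in VIOLENT for c in crime_types)
    non_violent = sum(c in NON_VIOLENT for c in crime_types)
    return violent, non_violent

counts = crime_counts(["Theft", "Assault", "Theft", "Arson"])
```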
We find that population count is more strongly associated with both violent crime (r = 0.26) and non-violent crime (r = 0.46) than population density. In fact, population density is not significantly associated with violent crime (r = -0.01), and is negatively associated with non-violent crime (r = -0.09). These correlations were calculated from a robust regression that downweights outlying values <cit.>. Plots of these relationships are provided in Figure S4 of our supplementary materials. We also explored Poisson and Negative Binomial regressions but found that these alternative formulations did not give substantially different results. The lack of a strong positive association between population density and crime is notable in the context of historical hypotheses such as <cit.>, which argue that high population density leads to negative attitudes and hostility. In contrast, we find population size to be more strongly associated with crime, which supports the work of <cit.>, though this finding is specific to our focus on Philadelphia. To incorporate the association between crime and population count into our subsequent analyses, we define excess violent crime in each block group as the residuals from the robust regression of violent crime on population count. Similarly, we define excess non-violent crime in each block group as the residuals from the robust regression of non-violent crime on population count. We can interpret these excess crime (violent or non-violent) totals as the number of crimes in a block group beyond their expectation based on population count.

§.§ Association between Excess Crime and Economic Measures

As outlined in Section <ref>, we focus on two measures of the economic health of each block group in Philadelphia: per-capita income and our constructed poverty metric. We find a strong negative relationship between excess violent crime and income (r = -0.44) and a strong positive relationship between excess violent crime and poverty (r = 0.59).
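The "excess crime" construction is just the residual from a robust fit. As an illustrative stand-in for the robust regression cited above (not necessarily the authors' exact estimator), a Huber-type iteratively reweighted least squares fit can be sketched with numpy:

```python
import numpy as np

def huber_irls(x, y, k=1.345, n_iter=50):
    """Fit y ~ a + b*x by iteratively reweighted least squares with Huber
    weights, which downweight outlying observations."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y, dtype=float)
    for _ in range(n_iter):
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted least squares
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale via MAD
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)            # Huber weights
    return beta, y - X @ beta                       # coefficients, residuals

# Toy example: one block group (index 10) has far more crime than its
# population predicts; its residual is its "excess crime".
x = np.arange(20.0)        # population (rescaled)
y = 2 + 3 * x              # crime counts following the trend
y[10] += 100.0             # outlying block group
beta, excess = huber_irls(x, y)
```

The residual vector plays the role of excess crime: the count beyond what population alone would predict, with the outlier barely influencing the fitted trend.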
We also find substantial non-linearity in the relationship between income and violent crime, with a strong linear relationship between violent crime and income for per-capita income below $50,000 but very little relationship above per-capita income of $50,000. These economic measures have a much weaker relationship with excess non-violent crime. There is a weak negative association between per-capita income and excess non-violent crime (r = -0.12) and a weak positive association between poverty and excess non-violent crime (r = 0.33). Plots of these relationships are provided in Figure S5 of our supplementary materials. Together, these results suggest that per-capita income and poverty are strongly associated with excess violent crime but not excess non-violent crime, possibly because non-violent crimes are more likely to be crimes of opportunity occurring in areas located away from where the perpetrators reside. Crimes of opportunity may be more driven by locations of businesses, which helps to motivate our work in Sections <ref>-<ref>. To incorporate the association between crime and these economic measures into our subsequent analysis, we now re-define excess violent crime in each block group as the residuals from the robust regression of violent crime on population count, per-capita income and poverty; excess non-violent crime is re-defined analogously. We can interpret these excess crime (violent or non-violent) totals as the number of crimes in a block group beyond their expectation based on population count, income and poverty.

§.§ Association between Excess Crime and Land Use Zoning

Up to this point, we have focused on the relationship between safety and features based on the population and economic health of each neighborhood.
However, our primary interest is the influence of the built environment of the neighborhood on safety, which could inform future urban planning initiatives. One type of data that we have pertaining to the built environment is the set of land use zoning designations for each lot in the city of Philadelphia (Section <ref>), from which we created three measures of the “vibrancy” in each block group: the fraction of vacant land, the fraction of mixed use land, and the ratio of commercial area to residential area. We find a moderately strong positive relationship between vacant proportion and excess violent crime (r = 0.2) and between vacant proportion and excess non-violent crime (r = 0.2). We also find moderately strong positive relationships between mixed use proportion and excess violent crime (r = 0.23) and between mixed use proportion and non-violent crime (r = 0.23). We find stronger positive relationships between commercial vs. residential proportion and excess violent crime (r = 0.42) and between commercial vs. residential proportion and excess non-violent crime (r = 0.65). Plots of these relationships are provided in Figure S6 of our supplementary materials. The association we find between vacant proportion and safety is relevant to recent studies of the effect of “greening” vacant lots on neighborhood safety <cit.>. That study found that the “greening” of vacant lots was associated with a reduction in gun assaults and vandalism. The strong positive relationship we find between commercial proportion and crime is also interesting in the context of contemporary theories of urbanism.
As we describe in Section <ref>, the “eyes on the street” theory of <cit.> and the “natural surveillance” theory of <cit.> argue that safer neighborhoods are those that have a greater presence of people on the street, achieved through a mixing of commercial and residential properties. Our preliminary findings do not support the idea that a mix of commercial and residential land use is associated with increased safety. However, we must concede that land use zoning designations are a low resolution indication of vibrancy that only indicates intended use. In particular, the zoning designation of a lot as commercial does not provide insight into whether the commercial enterprise located there contributes positively or negatively to vibrancy in the area, or whether that commercial enterprise is open or closed during times when crimes tend to be committed. This missing information motivates our investigation of more detailed measures of neighborhood vibrancy based on business data in the following Section <ref>.

§ URBAN VIBRANCY MEASURES BASED ON BUSINESS DATA

As discussed in Section <ref>, measures based on land use zoning designations are an insufficient summary of the vibrancy of a neighborhood. We cannot evaluate whether a mix of commercial and residential properties promotes safety without first establishing what types of business enterprises are present in lots zoned for commercial use and when those businesses are active. To that end, we outline our manual assembly and curation of a database of Philadelphia businesses, as well as the construction of several measures of business vibrancy from those data.
§.§ Business Data

We have manually assembled a database of Philadelphia businesses by scraping three different web sources: Google Places, Yelp, and Foursquare. Each of these sources provides the GPS locations for a large number of businesses in Philadelphia, as well as opening hours for a subset of those businesses. The most difficult issues with assembling this business database are: 1. integrating these three data sources and removing overlapping businesses and 2. categorizing all businesses into a small set of business types. Table <ref> gives the number of businesses and the number of those businesses where we have opening hour information. We also give counts of the total number of businesses and the number of businesses with opening hours in the union of all three data sources (removing duplicates between data sources). Each data source has its own categorization for businesses, with Google using about a hundred categories and Yelp and Foursquare each using closer to a thousand categories. Out of this myriad of business categorizations, we created ten business types: Cafe (4,166), Convenience (1,481), Gym (1,273), Institution (24,489), Liquor (316), Lodging (461), Nightlife (5,108), Pharmacy (799), Restaurant (7,909), and Retail (31,147). The values in parentheses are the total number of businesses in each business type. A particular business can belong to multiple business types, e.g. a restaurant that also sells liquor to go. Most of these business types are self-explanatory, but we need to clarify a few details. The cafe type includes cafes, bakeries and coffee shops that are not full restaurants. The restaurant type also includes meal delivery and meal take out businesses. Institution is a broad type that includes banks, post offices, churches, museums, schools, police and fire departments, as well as many others.
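The first integration issue noted above, collapsing duplicate listings across Google Places, Yelp, and Foursquare, can be sketched by keying each listing on a normalized name plus a coarse lat/lon grid cell. The field names and the roughly 50 meter grid size are illustrative assumptions, not the paper's exact procedure:

```python
def dedupe(listings, grid=0.0005):
    """Merge listings that share a normalized name and fall in the same
    small lat/lon grid cell. Returns one record per business, remembering
    which sources listed it."""
    merged = {}
    for b in listings:
        key = (b["name"].strip().lower(),
               round(b["lat"] / grid), round(b["lon"] / grid))
        merged.setdefault(key, {**b, "sources": set()})
        merged[key]["sources"].add(b["source"])
    return list(merged.values())

listings = [
    {"name": "Cafe A", "lat": 39.95260, "lon": -75.16520, "source": "google"},
    {"name": "cafe a", "lat": 39.95260, "lon": -75.16520, "source": "yelp"},
    {"name": "Cafe A", "lat": 39.95260, "lon": -75.16520, "source": "foursquare"},
    {"name": "Gym B",  "lat": 39.96000, "lon": -75.17000, "source": "google"},
]
unique = dedupe(listings)
```

A real pipeline would also need fuzzier name matching (abbreviations, punctuation) and handling of businesses that sit on grid-cell boundaries.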
§.§ Measures of Business Vibrancy

We use our assembled business data to create several high resolution measures of business vibrancy at any particular location in the city of Philadelphia. We want these measures to encapsulate whether a given location has a concentration of businesses of a particular type, and whether those businesses are active storefronts (i.e. open) during times of the week when crimes tend to be highest. We focus on two high crime windows that have a disproportionately large number of crimes (both violent and non-violent) relative to other times of the week. These two high crime windows are weekday evenings (6-12pm Monday-Friday) and weekend nights (12-4am Saturday-Sunday). The first set of measures of business vibrancy we consider are simply the total number of businesses of each business type near to any particular location in the city, as we expect that some business types will be more associated with safety than others. We also want to take into account whether those businesses are active storefronts in the sense of being open. In particular, we are interested in whether a given location has businesses of a particular type (e.g. cafes) that are open longer than expected. We first establish a consensus number of open hours for each business type by calculating the average open hours across all businesses of that type in Philadelphia.
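The consensus-hours bookkeeping, and the excess-hours measure built from it, can be sketched as follows (the record fields are hypothetical; `hours` stands for a business's total weekly open hours):

```python
from collections import defaultdict

def excess_open_hours(businesses):
    """Consensus hours = mean weekly open hours per business type;
    each business's excess hours = its own hours minus that consensus."""
    hours_by_type = defaultdict(list)
    for b in businesses:
        hours_by_type[b["type"]].append(b["hours"])
    consensus = {t: sum(h) / len(h) for t, h in hours_by_type.items()}
    return {b["name"]: b["hours"] - consensus[b["type"]] for b in businesses}

cafes = [{"name": "c1", "type": "cafe", "hours": 40},
         {"name": "c2", "type": "cafe", "hours": 60},
         {"name": "c3", "type": "cafe", "hours": 80}]
excess = excess_open_hours(cafes)
```

For instance, a cafe open 80 hours a week in a city where cafes average 60 open hours has an excess of +20.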
For each business in Philadelphia where we have open hours, we can then calculate its excess number of open hours relative to the consensus for its business type. Building upon these calculations, the second set of measures of business vibrancy we consider are the average excess hours of businesses near to any particular location in the city. Thus, for any particular location in Philadelphia, we have two sets of business vibrancy measures: the number of businesses of each business type and the average excess hours of each business type. The excess hour measures can be calculated over the entire week or just within the high crime windows mentioned above. In Section <ref>, we evaluate the association between these business vibrancy measures and both excess violent and non-violent crimes within the local neighborhoods of Philadelphia.

§ ASSOCIATION BETWEEN BUSINESS VIBRANCY, LAND USE, OWNERSHIP AND SAFETY

Our goal is to evaluate the association between our created business vibrancy measures and safety at the local neighborhood level, while controlling for the characteristics of those neighborhoods. We will control for neighborhood characteristics by focusing our analyses on comparing pairs of locations within each block or within each block group. Underlying this strategy is an assumption that the census blocks or block groups are small enough that different locations within these geographic units should be highly similar with regards to the demographic and economic measures we examined in Sections <ref> and <ref>. In Section <ref> below, we focus on pairs of locations within blocks where one location has the highest frequency of crimes and the other location has the lowest frequency of crimes within that block. We then examine these within-block pairs to see if there are differences in our business vibrancy measures between the “high crime” vs. “low crime” locations.
As an additional study, we also compared crime totals between a location with a business that has longer hours versus another location in that same block group with a business that has shorter hours. Details of this study are given in Section 3 of our supplementary materials.

§.§ Comparing “High Crime” vs. “Low Crime” Locations

We first calculate the location with the highest crime frequency and the location with the lowest crime frequency within each block. We perform this analysis at the census block level (rather than the block group level) in order to give an even higher resolution view of the association between vibrancy and safety. For each block, we calculate our two sets of business vibrancy measures, the number of businesses of each business type and the average excess hours of each business type, in a 50 meter radius around the high crime and low crime locations in those blocks. Many blocks do not contain any businesses of a particular type near either the highest or lowest crime locations, which excludes those blocks from any comparisons involving that particular business type. We further restrict ourselves to blocks where the highest crime and lowest crime locations are at least 100 meters apart so that the 50 meter radii around these locations do not overlap. For each business type, we calculate a matched pairs t-statistic for the differences in the business vibrancy measures around the low crime location minus the business vibrancy measures around the high crime location within each block. If business vibrancy helps to deter crime and promote safety, then these differences in business vibrancy should be positive. In Figure <ref>, we display the matched mean differences in the two business vibrancy measures (the number of businesses of each business type and the average excess hours of each business type) between the low crime and high crime within-block locations. We calculate differences for locations based on violent crimes and locations based on non-violent
crimes. The significance threshold for these t-statistics was Bonferroni-adjusted to account for the number of comparisons being performed. Figure <ref> provides these comparisons for the entire week as well as the weekend nights time window. Due to limited space, we do not display the comparisons for weekday evenings, but the results are highly similar to weekend nights. We see in Figure <ref> that the difference in number of businesses is significantly negative (red) for both violent and non-violent crimes for essentially all business types, most strongly retail stores and restaurants. This result suggests that there are significantly more businesses around the higher crime locations than the lower crime locations. However, we also observe in Figure <ref> that for some of these business types, such as gyms and cafes, there are positive differences (blue) for our average excess hours metric, which implies that businesses are open longer around the low crime location compared to the high crime location. These differences are not as significant, but we still see evidence of an interesting and subtle finding: more crimes tend to occur near cafes and gyms, but fewer crimes tend to occur near cafes and gyms that are open longer. We can also compare our original land use zoning measures of vibrancy from Section <ref> in a 50 meter radius between these high and low crime locations.
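The matched-pairs machinery above reduces to a one-sample t-statistic on the within-block differences, compared against a Bonferroni-adjusted threshold; a stdlib-only sketch:

```python
import math

def paired_t(diffs):
    """Matched-pairs t statistic: mean of the within-block differences
    (low-crime value minus high-crime value) over its standard error."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def bonferroni_alpha(alpha, n_comparisons):
    """Per-comparison significance level after Bonferroni adjustment."""
    return alpha / n_comparisons

# Toy within-block differences in a vibrancy measure; all positive, so
# the low-crime locations look more vibrant here.
t = paired_t([1, 2, 3, 4, 5])
```

A large positive t supports the hypothesis that vibrancy is higher around the low-crime location of each pair.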
We again calculate differences for locations based on violent crimes and locations based on non-violent crimes, but now the differences are based on our three land use vibrancy measures: the fraction of vacant land, the fraction of mixed use land and the ratio of commercial area to residential area. We also calculate differences in average ownership tenure of residential properties (as outlined in Section <ref>) between the high crime and low crime locations. Figure <ref> gives the matched mean differences in average ownership tenure and our three land use vibrancy measures between the low crime and high crime within-block group locations. We display these comparisons for the entire week as well as the weekend nights time window. The comparisons for weekday evenings are not displayed due to limited space, but the results are highly similar to weekend nights. In Figure <ref>, we see very strong negative differences for mixed proportion and commercial vs. residential proportion, both of which strongly suggest that there is more mixed and commercial zoning near to the high crime locations. This association between commercial enterprises and crime was also observed in Section <ref> and motivated our development of more detailed measures of business vibrancy in Section <ref>. We also see very strong positive differences for the vacant land proportion, which suggests the presence of more vacant land near to low crime locations compared to the high crime locations. This finding is notable when compared to the positive association between vacant proportion and crime that we found in Section <ref>. Together, those two findings suggest that neighborhoods with more vacant properties overall have higher crime, but when looking within those neighborhoods, crimes tend not to be located near vacant properties.
These results are especially interesting given the mixed effects on crime from the “greening” of vacant lots in the study by <cit.>. We also see in Figure <ref> that average ownership tenure of residential properties is significantly longer around the low crime location compared to the high crime location for both violent and non-violent crimes. This finding provides some support for the defensible space hypothesis <cit.> that longer term residential investment in the community (which we estimate with average ownership tenure) is associated with lower crime.

§.§ Summary of Business Vibrancy and Safety

Our analysis pipeline for studying the association between land use, business vibrancy and safety in Section <ref> above (and Section 3 of our supplementary materials) has produced several findings that could inform current evaluations of contemporary theories in urban planning. First, we find that more crimes occur near business locations but that some types of businesses (such as cafes and gyms) that are open for longer periods are associated with fewer crimes. Second, we find that although neighborhoods with more aggregate vacancy have higher crime (Section <ref>), when comparing locations within each neighborhood, crimes tend not to be located near vacant properties. Third, we find significantly longer residential ownership around the low crime locations compared to the high crime locations. Another important observation from Figure <ref> is the substantial heterogeneity in the association between business vibrancy and crime both across different business types and different time windows. The power of both studies was compromised by small sample sizes, as there are only a limited number of block groups that permit a pair of comparable locations.
The associations between land use zoning and safety in Figure <ref> are more significant due to much larger sample sizes of locations for these comparisons. Clearly, the associations between safety and neighborhood vibrancy are subtle, heterogeneous, and in need of even higher resolution studies to fully understand.

§ DISCUSSION

The recent availability of high resolution data on cities provides a tremendous opportunity for sophisticated quantitative evaluation of historical and current urban development. To aid these efforts, this paper outlines a framework for data collection and analysis of the associations between safety, economic and demographic conditions and the built environment within local neighborhoods. We used this framework for a specific task: the creation of quantitative measures of “vibrancy” based on the built environment of a neighborhood and exploration of the association between these vibrancy measures and neighborhood safety. We find that population density is not strongly associated with either violent or non-violent crime, which argues against the theory of <cit.>. We find that population count is a more important predictor of crime, which supports the work of <cit.>.
We also explored the association between crime and economic measures as well as measures of vibrancy derived from land use zoning data, but found that these measures were not an adequate summary of the local commercial vibrancy of an area. To address vibrancy at a higher resolution, we constructed several measures of business vibrancy and employed matching of locations within block groups to evaluate the relationship between business vibrancy and safety. Our business vibrancy measures (number of businesses and average excess hours of businesses) are designed to be proxies for the “eyes on the street" concept of <cit.>. Our results suggest that more crimes tend to occur near business locations but that businesses of some types that are active (open) for longer periods are associated with fewer crimes. We also find that the overall proportion of vacancy in a neighborhood is associated with higher crime but that, within a neighborhood, crimes tend not to occur as often near vacant properties. Finally, we find that longer term residential investment in the community (as measured by average ownership tenure of residential properties) is associated with lower levels of crime.
However, these preliminary findings should not be interpreted as causal effects. For example, underlying community effects could be driving both the longer business hours and the lower crime rates in our observed associations. Rather, we consider our findings to be opportunities for further in-depth investigation. More direct and high resolution measures of foot traffic or human activity would certainly improve measures of urban vibrancy. For example, <cit.> use mobile phone data to estimate vibrancy in Shenzhen, China. However, these types of direct data sources are not currently publicly available for the city of Philadelphia. Measures of human activity from geographically-linked social media usage are another promising research direction. For example, <cit.> use geo-coded Twitter data to measure human activity around London Underground stations. As further studies deepen our understanding of the role of vibrancy as an indicator of safety, we can consider public policy initiatives that encourage vibrancy by promoting multiple use spaces as well as the potential deregulation of closing times and noise curfews in order to allow businesses to experiment with longer opening hours. More generally, we encourage the adaptation of our analysis pipeline to other research questions within the urban analytics community. The code and public data that were used in our analyses are available as a github repository at: https://github.com/ColmanHumphrey/urbananalytics

§ ACKNOWLEDGMENTS

This research was supported by a grant from the Wharton Social Impact Initiative (WSII) at the University of Pennsylvania.

Supplementary Materials for “Urban Vibrancy and Safety in Philadelphia"

§ MAPS OF URBAN DATA IN PHILADELPHIA

Figure <ref> gives a map outlining the 1,336 block groups in Philadelphia and the population density, per capita income and poverty metric for each of those block groups.
Philadelphia county is divided into 384 census tracts, which are divided into 1,336 block groups, which are divided into 18,872 blocks. The area of the Philadelphia census blocks has an average of 0.00756 square miles (median of 0.00372 square miles) with a standard deviation of 0.0235 square miles. The area of the Philadelphia block groups has an average of 0.107 square miles (median of 0.0482 square miles) with a standard deviation of 0.323 square miles. Figure <ref> gives the land use designations for the city of Philadelphia with a focus in on the neighborhood of center city, as well as the distribution by block group of the vacant proportion (left) and mixed use proportion (right) calculated from these land use designations. Figure <ref> gives the relative frequency of each type of crime in our data as well as the spatial distribution by block group of violent vs. all crimes committed in Philadelphia from 2006-2015. Note that these crime categories are roughly ordered in terms of severity, and that high severity crimes are much less frequent. We see substantial heterogeneity in crime counts across the city with a large outlier count of both violent and non-violent crimes in the Market East block group of center city.

§ EXPLORING NEIGHBORHOOD FACTORS ASSOCIATED WITH SAFETY IN PHILADELPHIA

Figure <ref> plots the relationship between crime (either violent or non-violent) and population (either population count or population density).
We also provide the correlation and test statistic for the slope from a robust regression that downweights outlying values. Figure <ref> plots the relationship between excess crime (either violent or non-violent) and our economic measures (either per-capita income or poverty). We also provide the correlation and test statistic for the slope from a robust regression that downweights outlying values. Figure <ref> plots the relationship between excess crime (either violent or non-violent) and our land use measures (vacant proportion, commercial proportion and mixed use proportion). We also provide the correlation and test statistic for the slope from a robust regression that downweights outlying values.

§ COMPARING “OPEN SHORTER" VS. “OPEN LONGER" LOCATIONS

The goal of our analysis in Section 5 of our paper is evaluating the association between measures of business vibrancy and safety at the local neighborhood level, while controlling for the characteristics of those neighborhoods by comparing pairs of locations within each block or within each block group. In our main paper, we focus on comparisons of “high crime" vs. “low crime" locations. In this supplemental section, we examine block groups with one location that has “open longer" businesses and another location that has “open shorter" businesses. We then examine these within-block-group pairs to see if there are differences in crime between “open shorter" vs. “open longer" locations. Specifically, for each of our ten business types, we identify block groups that contain a pair of businesses (of that type) where one of those businesses has long opening hours and the other business has short opening hours. We define a business as having long opening hours if its total opening hours are above the 75th percentile for businesses of that type. Similarly, we define a business as having short opening hours if its total opening hours are below the 25th percentile for businesses of that type.
We further restrict ourselves to block groups where the pair of businesses are at least 140 meters apart, which is roughly the size of a Philadelphia city block. Only a small subset of the block groups in Philadelphia will contain such a valid pair of locations. For each block group with a valid pair of locations, we count the number of crimes that occurred within a 70 meter radius around each of the two business locations, and then calculate the matched pair mean difference in crime (crimes around the short opening hour business minus crimes around the long opening hour business) in each within-block-group pair. If businesses that are active (open for a longer period) help to deter crime and promote safety, then these differences in crime should be positive. In Figure <ref>, we display the matched pair mean differences in crime between short opening and long opening hour businesses of each business type separately. We calculate different matched pair mean differences for only violent crimes, only non-violent crimes and all crimes. The significance threshold for these t-statistics was Bonferroni-adjusted to account for the number of comparisons being tested. We also examine these comparisons for the entire week versus just the high crime weekend nights time window. In Figure <ref>, we see mostly negative differences (red), which imply that more crimes are occurring around the business location with longer open hours, especially nightlife locations and restaurants. A notable exception is gyms, which show positive differences that imply fewer crimes occurring around the gym with longer open hours.
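The matched-pairs computation described above (within-block-group crime differences, a paired t-statistic, and a Bonferroni-adjusted threshold) can be sketched in a few lines. The counts below are invented toy data, not the Philadelphia data, and `matched_pair_test` is a hypothetical helper name:

```python
import numpy as np
from scipy import stats

def matched_pair_test(crimes_short, crimes_long, n_comparisons):
    """Paired t-test on within-block-group crime differences
    (short-hours location minus long-hours location), with a
    Bonferroni-adjusted significance threshold."""
    diffs = np.asarray(crimes_short, float) - np.asarray(crimes_long, float)
    t_stat, p_value = stats.ttest_rel(crimes_short, crimes_long)
    alpha_adj = 0.05 / n_comparisons  # Bonferroni correction
    return diffs.mean(), t_stat, p_value, p_value < alpha_adj

# toy data: 8 block groups, crime counts within a 70 meter radius
short_hours = [12, 9, 15, 7, 11, 14, 10, 13]
long_hours  = [10, 8, 12, 9, 10, 11, 9, 12]
mean_diff, t, p, significant = matched_pair_test(short_hours, long_hours, 30)
```

The Bonferroni adjustment simply divides the nominal 0.05 level by the number of comparisons being tested, which is why, with many business types and crime categories, only large differences clear the threshold.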
Recall that our preliminary hypothesis was that greater business vibrancy would be associated with fewer crimes around those vibrant locations relative to less vibrant locations in the same block group. The results in Figure <ref> for gyms do show a trend in this expected direction, but the results for several other business types go against that hypothesis. That said, there are not many differences in Figure <ref> that are statistically significant. To a large extent, the lack of significance is driven by the small sample sizes in these comparisons.
http://arxiv.org/abs/1702.07909v4
{ "authors": [ "Colman Humphrey", "Shane T. Jensen", "Dylan Small", "Rachel Thurston" ], "categories": [ "stat.AP" ], "primary_category": "stat.AP", "published": "20170225155633", "title": "Urban Vibrancy and Safety in Philadelphia" }
Criticality & Deep Learning I: Generally Weighted Nets

Dan Oprisa, Peter Toth
CriticalAI, http://www.critical.ai

Motivated by the idea that criticality and universality of phase transitions might play a crucial role in achieving and sustaining learning and intelligent behaviour in biological and artificial networks, we analyse a theoretical and a pragmatic experimental set up for critical phenomena in deep learning. On the theoretical side, we use results from statistical physics to carry out critical point calculations in feed-forward/fully connected networks, while on the experimental side we set out to find traces of criticality in deep neural networks. This is our first step in a series of upcoming investigations to map out the relationship between criticality and learning in deep networks.

§ INTRODUCTION

Various systems in nature display patterns, forms, attractors and recurrent behavior which are not caused by a law per se; the ubiquity of such systems, and the similar statistical properties of the order they exhibit, has led to the term "universality", since such phenomena show up in cosmology, the fur of animals <cit.>, chemical and physical systems <cit.>, landscapes, biological prey-predator systems and endlessly many others <cit.>. Furthermore, because of universality, it turns out that the most simplistic mathematical models exhibit the same statistical properties when their parameters are tuned correctly. As such it suffices to study N-particle systems with simple, "atomistic" components and interactions, since they already exhibit many non-trivial emergent properties in the large N limit. Certain "order" parameters change behavior in a non-classical fashion at specific noise levels. Using the rich and deep knowledge gained in statistical physics about those systems, we map out the mathematical properties and learn about novel behaviors in deep learning setups.
Specifically we look at a collection of N units on a lattice with various pair interactions; when the units are binary spins with values (± 1), the model is known as a Curie-Weiss model. From a physical point of view, this is one of the basic, analytically solvable models which still possesses the rich emergent properties of critical phenomena. However, given its general mathematical structure, the model has already been used to explain population dynamics in biology <cit.>, opinion formation in society <cit.>, machine learning <cit.> and many others <cit.>. All those systems, with rich and diverse origins, possess almost identical behavior at criticality. In the latter case of machine learning, the Curie-Weiss model encodes information about fully connected and feed-forward architectures to first order. Similar work was done in <cit.>, where insights from Ising models and fully connected layers are drawn and applied to net architectures; in <cit.> a natural link between the energy function and an autoencoder is established. We will address the generalisation of the fully connected system and understand its properties, before moving to the deep learning network and applying there the same techniques and intuition. The article is organised as follows: section <ref> gives a short introduction to critical systems with appropriate examples from physics; in section <ref> we map a concrete, non-linear, feed-forward net to its physical counterpart and discuss other architectures as well; then we turn to investigating the practical question of whether we can spot traces of criticality in current deep learning nets in <ref>.
Finally we summarise our findings in <ref> and hint at future directions for the rich map between statistical systems and deep learning.

§ BRIEF SUMMARY OF CRITICAL PHENOMENA

Critical phenomena were first thoroughly explained and analysed in the field of statistical mechanics, although they had been observed in various other systems without a theoretical understanding. The study of criticality belongs to statistical physics and is an incredibly rich and wide field, hence we can only briefly summarise a few results of interest for the present article; a much more comprehensive coverage can be found in, e.g., <cit.>. In a nutshell, the subject is concerned with the behavior of systems in the neighbourhood of their critical points <cit.>. One thus looks at systems composed of (families of) many identical particles, trying to derive properties of macroscopic parameters, such as density or polarisation, from the microscopic properties and interactions of the particles; statistical mechanics can hence be understood as a bridge between macroscopic phenomenology (e.g. thermodynamics) and microscopic dynamics (e.g. molecular or quantum-mechanical interacting collections of particles). Criticality is achieved when macroscopic parameters show anomalous, divergent behavior at a phase transition. Depending on the system at hand, the parameters might be magnetisation, polarisation, correlation, density, etc. Specifically it is the correlation function of the "components" which then displays divergent behavior, and signals strong coordinated group behavior over a wide range of magnitudes. Usually it is the noise (temperature) which at certain values will induce the phase transition accompanied by the critical anomalous behavior. Given its relevance in physics and also its mathematical analogy to our deep learning networks, we will briefly review here the Curie-Weiss model with non-constant coupling and examine its behavior at criticality.
§.§ Curie-Weiss model

A simplistic, fully solvable model for a magnet is the Curie-Weiss model (CW) <cit.>. It possesses many interesting features, exhibits critical behavior and correctly predicts some of the experimental findings. As its mathematics is used later on in our deep learning setup, we briefly present the main properties and solutions for the sake of self-consistency. The Hamiltonian of the CW model is given by

H = -J/(2N) ∑_ij^N s_i s_j - b ∑_i^N s_i

Here the s_i are a collection of interacting "particles" - in our physical case, spins - that take values (± 1) and interact pairwise with each other at long distances via the coupling J; the inclusion of a factor of 1/N multiplying the quadratic spin term makes this long-range interaction tractable in the large N limit. Furthermore, there is a directed external magnetic field which couples to every spin via b. Since the coupling between spins is a constant and since every spin interacts with every other spin (the double counting of pairs is accounted for by the factor of 1/2), the Hamiltonian can be rewritten as

H = -J/(2N) (∑_i^N s_i)^2 - b ∑_i^N s_i

With β = 1/kT the inverse temperature, the partition function can be formulated as

Z = ∑_s_i∈{± 1} e^{-β H(s)} = ∑_s_i∈{± 1} exp β[ J/(2N) (∑_i^N s_i)^2 + b ∑_i^N s_i ]

which can be fully solved <cit.> by summing over each of the 2^N states; given an explicit partition function Z, the free energy can be computed via

F = -kT ln Z

Once we have F, various macroscopic values of interest can be inferred, such as the magnetisation of the system, i.e. the first derivative of F wrt. b. This is a so-called "order parameter", which carries various other denominations, such as polarisation, density, opinion imbalance, etc., depending on the system at hand. It basically measures how arranged or homogeneous the system is under the influence of the outside field which couples to the spins via b.
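For small N, the sum over the 2^N states can be carried out directly on a computer, which gives a useful sanity check on the formulas that follow. The sketch below (units with k = 1, magnetisation via a central finite difference of ln Z; the enumeration cost grows as 2^N, so this is illustrative only):

```python
import itertools
import numpy as np

def cw_logZ(N, J, b, T):
    """Brute-force log partition function of the Curie-Weiss model,
    H = -J/(2N) (sum_i s_i)^2 - b sum_i s_i, summed over all 2^N states."""
    beta = 1.0 / T
    logs = []
    for s in itertools.product((-1, 1), repeat=N):
        M = sum(s)  # total spin of this configuration
        logs.append(beta * (J * M * M / (2 * N) + b * M))
    return np.logaddexp.reduce(logs)

def cw_magnetisation(N, J, b, T, eps=1e-5):
    # m = (T/N) * d ln Z / d b, via a central finite difference
    return T * (cw_logZ(N, J, b + eps, T) - cw_logZ(N, J, b - eps, T)) / (2 * N * eps)

m_hot   = cw_magnetisation(N=10, J=1.0, b=0.0, T=2.0)  # above T_c, no field: m = 0
m_field = cw_magnetisation(N=10, J=1.0, b=0.5, T=2.0)  # external field aligns spins
```

At b = 0 the spin-flip symmetry s → -s forces m to vanish, while a finite field produces a finite magnetisation; the log-sum-exp reduction keeps the enumeration numerically stable for larger β.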
A full treatment and derivation of the model including all its critical behavior can be found in <cit.>, from where we get the equation of state for the magnetisation

m = tanh( K m + b/T )

with K = J/T. The analysis of this equation for various temperatures T and couplings J, b reveals a phase transition at the critical temperature T_c = J, i.e. at K = 1. Introducing the dimensionless parameter t = (T/T_c - 1) and expanding (<ref>) in small couplings, the famous power law dependence of the magnetisation on temperature emerges:

m ≃ √(3) (K-1)^{1/2}/K^{3/2} ∼ |t|^{1/2}

Here we recognise one of the very typical power laws which are ubiquitous in critical systems. The quantity we are most interested in, though, is the second derivative of the free energy F wrt. b, which is basically the 2-point correlation function of the spins s_i. Again, expanding the second derivative of the free energy in small couplings and looking in the neighbourhood of the critical temperature T_c yields

⟨ s_i s_j ⟩ ∼ (b^2/T_c) |t|^{-1}

again displaying power law behavior, with a power coefficient γ = 1. The innocent looking equation (<ref>) has actually tremendous consequences, as it implies that correlation is not simply restricted to nearest neighbours but extends over very long distances, decaying only slowly; further, because of the power law behavior, there will be self-similar, fractal patterns in the system: islands of equal magnetisation will form within other islands and so on, through all scales. Also, the correlation diverges at the critical point T_c. We will carry out the explicit calculations for our case of interest - non-constant matrix couplings - later on, in section <ref>.
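The equation of state m = tanh(Km + b/T) (with K = J/T, so the critical point sits at K = 1) can be solved numerically by fixed-point iteration; a minimal sketch that also checks the small-m expansion quoted above:

```python
import math

def cw_mean_field(K, h=0.0, m0=0.5, iters=2000):
    """Solve m = tanh(K*m + h) by fixed-point iteration
    (K = J/T; the critical point is K = 1)."""
    m = m0
    for _ in range(iters):
        m = math.tanh(K * m + h)
    return m

m_below = cw_mean_field(K=0.9)   # disordered phase: m -> 0
m_above = cw_mean_field(K=1.1)   # ordered phase: m > 0
# just above K = 1 the small-m expansion gives m ~ sqrt(3)*(K-1)^{1/2}/K^{3/2}
predicted = math.sqrt(3 * 0.1) / 1.1 ** 1.5
```

Below the critical coupling the iteration contracts to m = 0, while above it a finite fixed point appears whose value agrees with the (K-1)^{1/2} expansion to within a few percent at K = 1.1.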
§.§ Criticality in real-world networks

Two of the main motivations for looking for criticality, and exploiting it, in artificial networks are the universal emergence of this phenomenon and the various hints of its occurrence in biological <cit.> and neural systems <cit.>; once systems get "sizable" enough, gaining complexity, critical behavior emerges, which also applies to man-made nets <cit.>. Various measures can be formulated to detect criticality, and they all show power law distribution behavior. In the world wide web, e.g., the number of links to a source and the number of links away from a source both exhibit a power law distribution

P(k) ∼ k^{-γ}

for some power coefficient γ ≠ 0. Similar behavior can be uncovered in various other networks, if sizable enough, such as citation behavior of scientific articles, social networks, etc. A simple, generic metric to detect criticality in networks is the degree distribution, defined as the number of (weighted) links connecting to one node. Further, the correlation between nodes is also non-trivial, such that nodes with similar degree have a higher probability of being connected than nodes of different degree <cit.>, chapter VII. We will follow a similar path as proposed above and grow an experimental network in which new nodes have the simplest preferential and directed attachment towards existing nodes, as a function of their degree:

Π(k) ∼ k^α

Here, Π(k) denotes the probability that some node will grow a link to another node of degree k. Hence, every new node will prefer nodes with higher degrees, leading to the overall power distribution observed in real world systems. Additional metrics we look at are single neuron activity as well as layer activity and pattern behavior; more details on that in section <ref>.
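A minimal growth model with Π(k) ∼ k (i.e. α = 1, linear preferential attachment of the Barabási-Albert type) can be sketched as follows; degree-proportional sampling is implemented by drawing from a list that contains each node once per unit of its degree:

```python
import random

def grow_network(n_nodes, m_links=2, seed=0):
    """Grow a network by preferential attachment: each new node links to
    m_links existing nodes chosen with probability proportional to degree
    (Pi(k) ~ k, i.e. alpha = 1)."""
    rng = random.Random(seed)
    degree = [1, 1]          # start from a single edge between nodes 0 and 1
    targets = [0, 1]         # node list with multiplicity equal to degree
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < min(m_links, new):
            chosen.add(rng.choice(targets))   # degree-proportional sampling
        degree.append(0)
        for t in chosen:
            degree[t] += 1
            degree[new] += 1
            targets.extend([t, new])
    return degree

deg = grow_network(2000)
# a log-log histogram of deg shows the heavy tail: a few hubs
# acquire a disproportionate share of the links
```

The early nodes accumulate large degrees while most later nodes keep the minimum, which is exactly the heavy-tailed degree distribution that motivates the degree-distribution metric above.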
§ CRITICALITY IN DEEP LEARNING NETS

§.§ From feed-forward to fully connected architecture

We will focus now on a feed-forward network with two layers, a_i and b_j, connected via a weight matrix w_ij. In order to probe our system for criticality, we write down its Hamiltonian

H = -1/(2N) ∑_ij^N w_ij a_i b_j - h ∑_i^N b_i

which was first formulated in the seminal paper <cit.>. Here, the values of the a and b are {0,1}. Further, by absorbing the biases in the usual way, we can assume our weight matrix has the form

W = [  2Nh    0     ⋯    0
       2Nh   w_11   ⋯   w_1n
        ⋮      ⋮     ⋱    ⋮
      -2Nh   w_n1   ⋯   w_nn ]

while the V_i read (1, V_1, ⋯, V_N). This Hamiltonian describes a two layer net containing rectified linear units (ReLU) in the b-layer with a common bias term h. The weight matrix w_ij sums the binary inputs coming from the a_i and feeds them into b_i; depending on whether the ReLU threshold has been reached, b_i is activated - hence the binary values allowed for both the inputs and the b-layer. Further, we show in appendix <ref> that the partition function is, up to a constant, the same for units taking values in {± 1} or {0,1}. Redefining N+1 → N, we can then formulate the partition function as

Z = ∑_a,b∈{± 1} e^{-β/(2N) ∑_ij W_ij a_i b_j}

where β is the inverse temperature 1/T. This is the partition function of a bipartite graph with non-constant connection matrix w. However, it turns out that the partition function of the fully connected layer is the leading (first order) contribution to that of our feed-forward network (see appendix <ref>), further simplifying the expression to

Z = ∑_s_i∈{± 1} e^{-β/(2N) ∑_ij W_ij s_i s_j}

We will now proceed and compute the free energy F, defined as F = -T ln Z, using the procedure presented in <cit.>. From the free energy we then find all quantities of interest, especially the 2-point correlation function of the neurons.
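The derivation that follows rests on the Gaussian linearisation identity e^{a²} = (2π)^{-1/2} ∫ dx e^{-x²/2 + √2 a x}. A quick numerical sanity check on a fine uniform grid (a Riemann sum suffices here, since the integrand decays so fast that truncating at |x| = 30 is harmless):

```python
import numpy as np

def hs_rhs(a, x_max=30.0, n=200001):
    """Right-hand side of the Hubbard-Stratonovich identity,
    (1/sqrt(2*pi)) * integral of exp(-x^2/2 + sqrt(2)*a*x) dx,
    evaluated as a Riemann sum on a fine uniform grid."""
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    integrand = np.exp(-x**2 / 2.0 + np.sqrt(2.0) * a * x)
    return integrand.sum() * dx / np.sqrt(2.0 * np.pi)

a = 1.3
lhs = np.exp(a**2)   # left-hand side: e^{a^2}
rhs = hs_rhs(a)      # numerical integral; agrees to high precision
```

Completing the square in the exponent shows why the two sides match: the shifted Gaussian integrates to √(2π) e^{a²}, which is exactly how the quadratic spin term is traded for an auxiliary field below.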
§.§ Fully connected architecture with non-constant weights

In order to solve the CW model analytically, one has to perform the sum over spins, which is hindered by the quadratic term s_i s_j. The standard way to overcome this problem is the Gaussian linearisation trick, which replaces the quadratic term by a term linear in s_i at the cost of one additional continuous variable - the "mean" field - which is integrated over the entire real line:

e^{a^2} = 1/√(2π) ∫_{-∞}^{∞} dx e^{-x^2/2 + √2 a x}

which in physics is known as the Hubbard-Stratonovich transform. Unfortunately our coupling is not scalar, and hence we will linearise the sum term by term to keep track of all the weight matrix entries. First we insert N identities via the Dirac delta function into our Hamiltonian as used in (<ref>):

H(s) = -1/(2N) ∑_ij^N s_i W_ij s_j = -1/(2N) ∏_k ∫_{-∞}^{∞} dV_k δ(s_k - V_k) ∑_ij^N V_i W_ij V_j = ∏_k ∫_{-∞}^{∞} dV_k δ(s_k - V_k) H(V)

With the definition of the delta function

δ(x) = 1/(2π i) ∫_{-i∞}^{i∞} dy e^{xy}

the partition function (<ref>) now reads

Z(s) = ∏_k ∫_{-∞}^{∞} dV_k δ(s_k - V_k) ∑_s_i∈{± 1} e^{-β H(V)} ∼ ∏_k ∫_{-∞}^{∞} dV_k ∫_{-i∞}^{i∞} dU_k ∑_s_i∈{± 1} e^{U_k(s_k - V_k)} e^{-β H(V)} = ∏_k ∫_{-∞}^{∞} dV_k ∫_{-i∞}^{i∞} dU_k e^{-U_k V_k + ln(cosh U_k)} e^{-β H(V)}

As already stated, we could perform the sum over the binary units s_i, since they show up linearly in the exponential after the change of variables via the delta identity[In general we're not interested in numerical multiplicative constants, as later on, when logging the partition function and computing the free energy, those terms will be simple additive constants without any contribution after differentiating the expression]; we effectively converted the sum over binary values {± 1} into integrals over R, leading to

Z = c ∏_i ∫_{-∞}^{∞} dV_i ∫_{-i∞}^{i∞} dU_i e^{-H^g(V,U,T)}

with a generalised Hamiltonian

H^g = -β/(2N) ∑_ij W_ij V_i V_j + ∑_i [ U_i V_i - ln(cosh U_i) ] = -β/(2N) ∑_ij w_ij V_i V_j - β h ∑_i V_i + ∑_i [ U_i V_i - ln(cosh U_i) ]

Ultimately we are interested in the free energy per unit, which contains the partition function, via F = lim_{N→∞}
(-T ln Z)/N. From F we can now obtain all quantities of interest via derivatives, in our case with respect to h. The partition function Z still contains a product of double integrals, which can be solved via the saddle point approximation; we recall here the one-dimensional case

∫_{-∞}^{∞} dx e^{-f(x)} ≈ (2π/f''(x_0))^{1/2} e^{-f(x_0)}

where x_0 is the stationary value of f and f''(x_0) is in our case the Hessian evaluated at the stationary point:

H^g_{V_iV_j} = -β/N w_ij
H^g_{U_iU_j} = -δ_ij (1 - tanh^2 U_i)
H^g_{V_iU_j} = δ_ij

while H^g is given in (<ref>). The expression (<ref>) can now be computed by applying the saddle point conditions for both integrals simultaneously. The stationarity conditions[We keep in mind that we enlarged W to contain h as well, hence the explicit equations are h dependent] for V_i and U_i give

∂ H^g/∂ V_i = -β/N ∑_j W_ij V_j + U_i = -β(∑_j w_ij V_j/N + h) + U_i = 0
∂ H^g/∂ U_i = V_i - tanh U_i = 0

which combined deliver the self-consistency mean field equation of the fully connected layer (<ref>). Further, denoting by H^g_0 the Hamiltonian satisfying the stationarity conditions, it reads

H^g_0 = β/(2N) ∑_ij w_ij V_i V_j - ∑_i ln cosh β(∑_j w_ij V_j/N + h)

Equation (<ref>) already manifestly displays the consistency equation for the mean field, as taking the first derivative wrt.
V_i yields exactly the consistency equation, by construction. Now we can rewrite the free energy (<ref>) as

F = lim_{N→∞} T/N [H^g_0 + ln H^g_{hh}] ∼ lim_{N→∞} [ 1/(2N^2) ∑_ij w_ij V_i V_j + 1/N^2 ∑_ij ln[ w_ij(1-V^2_i) - 1 ] - T/N ∑_i ln cosh β(∑_j w_ij V_j/N + h) ]

We now need to address the large N limit; obviously the second term, coming from the determinant, vanishes in the large-N limit, as the logarithm is slowly increasing while we divide by N^2; the first term - a double sum over the V_i - is of order N^2 and hence a well defined average in the limit; the last term - ln cosh - when expanded is again linear in the sum[The interior sum over j is an average, hence well defined in the limit; after expansion, we're left with the outer sum (over i), which is again a well defined average when divided by N], and hence a well defined average after dividing by N. We are thus left with the free energy

F = T/N H_0^g = 1/(2N^2) ∑_ij w_ij V_i V_j - T/N ∑_i ln cosh β(∑_j w_ij V_j/N + h)

We're at the point now where all quantities of interest can be derived from the free energy F; the order parameter (aka magnetisation when dealing with spins) per unit is defined as

m ≡ dF/dh = ∂ F/∂ h|_{V^st} + (∂ F/∂ V_i)(∂ V_i/∂ h)|_{V^st}

The second term on the right vanishes identically, as we recognize it being evaluated at the stationarity condition V^st for the Hamiltonian. The contribution of the first term is

∑_i w_ik V_k/N = 1/N ∑_i tanh β(∑_k w_ik V_k/N + h) w_ik   ⇕   V_i = tanh β(∑_k w_ik V_k/N + h)

which is (the weighted-sum version of) the iconic self-consistency mean field equation of the CW magnet (<ref>). The critical point P_c is located where the correlation function diverges for h → 0; the 2-point correlation function (aka susceptibility when dealing with spins) is the second derivative of F, i.e. the derivative of (<ref>) wrt.
h:

P_c ≡ d^2F/dh^2 = dm/dh   ⇕   ∂ V_i/∂ h = β(1 - V_i^2)(1 + ∑_k w_ik (∂ V_k/∂ h)/N)

where we used equation (<ref>) for taking the derivatives. It is worth first contemplating equations (<ref>) and (<ref>). They both capture the essence of the criticality of our system, including its power law behavior. When the weight matrix reduces to a scalar coupling, both equations reduce to the classical CW system and display the behavior shown in (<ref>) and (<ref>). Furthermore, eq. (<ref>) encodes all the information needed for finding the critical point of the matrix system at hand; we recall that all the V (and their derivatives) are already implicitly "solved" in terms of h and w_ij via the stationarity equation (<ref>), and hence the V_i are just placeholders for functions of w and h; we're thus left with a non-linear system of first order differential equations in N variables, which will produce poles for specific values of the couplings and temperature at criticality.

§ EXPERIMENTAL RESULTS

After investigating criticality through the partition function in our theoretical setup, we now turn to a practical question: do current deep learning networks exhibit critical behaviour, or, put differently, can we spot traces of critical phenomena in them? Instead of directly attacking the partition function of real world deep neural nets, we start with the practical observation that systems around criticality show power law distributions in certain internal attributes. Concretely, for networks <cit.> we look for traces of power laws in weight distributions, layer activation pattern frequencies, single node activation frequencies and average layer activations. In the following we will present experimental results for multilayer feed-forward networks, convolutional neural nets and autoencoders.
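The stationarity equation V_i = tanh β(∑_k w_ik V_k/N + h) and the linear system for ∂V_i/∂h can be checked numerically. A sketch using a constant coupling w_ij = J, for which the scalar CW susceptibility χ = β/(1 − βJ) above T_c should be recovered:

```python
import numpy as np

def mean_field_V(w, h, beta, iters=5000):
    """Fixed-point solution of V_i = tanh(beta*(sum_k w_ik V_k / N + h))."""
    N = w.shape[0]
    V = np.full(N, 0.1)
    for _ in range(iters):
        V = np.tanh(beta * (w @ V / N + h))
    return V

def susceptibility(w, h, beta):
    """Solve dV_i/dh = beta*(1 - V_i^2)*(1 + sum_k w_ik (dV_k/dh)/N)
    as a linear system (I - diag(D) w / N) chi = D with D_i = beta*(1-V_i^2)."""
    N = w.shape[0]
    V = mean_field_V(w, h, beta)
    D = beta * (1 - V**2)
    chi = np.linalg.solve(np.eye(N) - D[:, None] * w / N, D)
    return chi

# constant coupling recovers the scalar CW result chi = beta/(1 - beta*J),
# which grows as the critical point beta*J = 1 is approached from above T_c
N, J = 8, 1.0
w = np.full((N, N), J)
chi_far  = susceptibility(w, h=0.0, beta=0.5)   # T = 2*T_c: chi = 1
chi_near = susceptibility(w, h=0.0, beta=0.9)   # closer to T_c: chi = 9
```

The divergence of the solve as βJ → 1 (the matrix I − Dw/N becomes singular) is exactly the pole structure that locates the critical point in the general weighted case.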
For all networks we ran experiments on the CIFAR-10 dataset, training each model for 200 epochs using ReLU activations and the Adam optimizer without gradient clipping, and ran inference for 100 epochs. The feed-forward network had 3 layers with 500, 400 and 200 nodes, the CNN had 3 convolutional layers followed by 3 fully connected layers, and the autoencoder had one layer with 500 nodes. For weight distributions we looked at the sum of absolute values of the outgoing weights at each node, as a weighted order of the node. In fig. <ref> we have a log-log plot of counts versus the node order as defined above, and detect no linear behavior. For layer activation patterns we counted the frequency of each layer activation pattern through the inference epochs. Figures <ref> and <ref> are log-log plots of layer activation pattern frequencies versus their respective counts for the feed-forward layer and the autoencoder. As we see, the hidden layer activation pattern frequencies of the autoencoder resemble a truncated straight line, indicating that learning hidden features in an unsupervised manner can give rise to scale free, power law phenomena in accordance with the findings of <cit.>, but no other architecture shows traces of any power law. For single node activation frequencies we counted the frequency of each node activation through the inference epochs. Figures <ref> and <ref> depict the behavior of the feed-forward and CN network. The flat, nearly horizontal line in the latter architecture is again a sign of the absence of any power-law exponent. As a last measure we employed the sum of activations, defined as the average activations on each layer throughout the inference epochs. Spontaneous and detectable criticality did not arise in classical architectures, so the next step will be to create and experiment with systems that have induced criticality and learning rules that take criticality into account.
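A simple way to quantify "linear behavior on a log-log plot" is to fit a least-squares slope to the histogram over a well-populated range. This is a crude estimator (maximum-likelihood fits are more robust for heavy tails), but it suffices as a sanity check; shown here on synthetic samples with a known k^{-2.5} tail rather than on network data:

```python
import numpy as np

def fit_exponent(values, kmin=5, kmax=50):
    """Least-squares slope of log(count) vs log(k) over a well-populated
    histogram range - a crude power-law exponent estimate."""
    counts = np.bincount(np.asarray(values, int))
    k = np.arange(kmin, min(kmax, len(counts) - 1) + 1)
    mask = counts[k] > 0
    x, y = np.log(k[mask]), np.log(counts[k][mask])
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(0)
# inverse-CDF sampling: u in (0,1], floor(u^{-1/1.5}) has tail P(k) ~ k^{-2.5}
u = 1.0 - rng.random(200000)
samples = np.floor(u ** (-1.0 / 1.5)).astype(int)
slope = fit_exponent(samples)   # should come out near -2.5
```

Restricting the fit to bins with expected counts well above 1 avoids the familiar bias from the sparse tail, where bins of count 0 or 1 flatten the fitted slope.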
Our first approach was to grow a fully connected net using the preferential attachment algorithm, to induce at least some power law in the node weights, and to use the fully connected net as a hidden-to-hidden module. We further experimented with different solutions regarding input and read-out of activations from this hidden-to-hidden module, without changing the power law distribution. (This would simulate a system located at a critical state, with a power law weight distribution.) Our findings so far show that learning in these systems is very unstable, with no improvement in learning or inference. The fundamental missing part is how to naturally induce a critical state in a network equipped with learning rules that inherently take the critical state into account. For that we need new architectures and new learning rules, derived from the critical point equations (<ref>).

§ SUMMARY AND OUTLOOK

Summary: In this article we make our first steps in investigating the relationship between criticality and deep learning networks. After a short introduction to criticality in statistical physics and real world networks, we started with the theoretical setup of a fully connected layer. We used continuous mean field approximation techniques to tackle the partition function of the system, ending up with a system of differential equations that determines the critical behaviour of the system. These equations can be the starting point for a possible network architecture with induced criticality and learning rules exploiting criticality. After that we presented results of experiments aiming to find traces of power law distributions in current deep learning networks such as multilayer feed-forward nets, convolutional networks and autoencoders. The results - except for the autoencoder - were negative, establishing as the next step the need to create networks with induced criticality and learning rules that exploit the critical state.
Outlook: Obviously the fully connected layer, which can be solved analytically, is of limited importance, as it translates into a rather simplistic architecture. More realistic, widely used set-ups, e.g. convolutional or recurrent nets, do contain the feed-forward mechanism, but deviate strongly from it and hence map only partially onto our theoretical treatment. It would be essential to address theoretically the convolution mechanism of deep nets and establish a link between the theoretical and experimental sides. Also, inducing criticality into the net via eq. (<ref>) could prove beneficial and might well affect learning behavior and the flow on the surface of the loss function.

Appendix

§ DIFFERENT UNIT VALUES

We show here that the partition function with Hamiltonian H_{0,1} = ∑_ij a_i w_ij a_j + ∑_i h_i a_i, whose units take values in {0,1}, encodes the same qualities as the partition function with Hamiltonian H_{±1}, whose units take values in {±1}. We rewrite the Hamiltonian in (<ref>) with units taking values in {±1}, substituting a_i = (1-u_i)/2 (and using Einstein's summation convention over repeated indices): H = 1/4 (1-u_i) w_ij (1-u_j) + 1/2 h_i (1-u_i), where the u_i take values in {±1}. Carrying out the multiplications in (<ref>) yields H = 1/4 ∑_ij w_ij + 1/4 u_i w_ij u_j - 2/4 u_i w_ij 1_j - 1/2 h_i u_i + 1/2 h_i = 1/2 (c + u_i w_ij u_j + h'_i u_i) with h'_i = -w_ij 1_j - h_i. Hence when computing the partition function Z with (<ref>) we obtain Z = ∑_{u ∈ {±1}} e^H = e^c ∑_{a ∈ {0,1}} e^{a_i w_ij a_j + h'_i a_i}, where the right-hand side contains the original Hamiltonian with a shifted coupling h'. The additional constant c factors out completely, and hence taking the logarithm and the second derivative does not change the outcome. Also we note that the second derivative with respect to
h' equals that with respect to h, ∂_h'∂_h' = ∂_h∂_h.

§ FIRST ORDER CONTRIBUTION

We consider here the Hamiltonian of the bipartite graph connected via the weight matrix w (with Einstein summation convention): H_b = u_i w_ij v_j, with the free energy F_b = -ln ∑_{u,v ∈ {±1}} exp(u_i w_ij v_j). Without any loss of generality we set the temperature T=1 and do not keep track of it. Carrying out the partial sum over the v_j yields F_b = -ln ∑_u ∏_j [exp(u_i w_ij) + exp(-u_i w_ij)] = -ln ∑_u ∏_j [2 cosh(u_i w_ij)]. The sum over the v_j is understood as a collection of 2^N terms, each corresponding to a unique combination of +1's and -1's in the vector of length N representing that specific state of the spins; however, the sum can be conveniently written as a product of N binary summands, where each contains exactly the two possible states of the j-th spin - this is where the product over j in the formula above comes from. Expanding now to lowest order in w we obtain F_b ∼ -ln ∑_u ∏_j (1 + (u_i w_ij)^2/2) ∼ -ln ∑_u exp ∑_j (u_i w_ij)^2/2 = -ln ∑_u e^{H(u_i)}, where H(u_i) is the Hamiltonian of a fully connected graph, defined as (Einstein summation convention) H(u_i) = ∑_j (u_i w_ij)^2/2 = 1/2 ∑_ik (∑_j w_ij w_jk) u_i u_k = 1/2 ∑_ik u_i w'_ik u_k, with the effective coupling w'_ik = ∑_j w_ij w_jk. A few notes are in order regarding eq.
(<ref>): the matrix w'_ik is now symmetric by construction and hence mediates between equally sized (in fact identical) layers; further, since all higher terms of the cosh expansion are even, all higher contributions are symmetric, higher-order interactions of the layer u_i with itself.

per_bak_nature Per Bak, "How Nature Works: the science of self-organized criticality", Copernicus Springer-Verlag, New York, 1996
Scale_Invariance_Lesne_Lagues Lesne Annick, Lagues Michel, "Scale Invariance, From Phase Transitions to Turbulence", Springer, 2012
pruessner Gunnar Pruessner, "Self-organised criticality", Cambridge University Press, 2012
bio_sys_poised_crit Thierry Mora, William Bialek, "Are biological systems poised at criticality?", arXiv:1012.2242 [q-bio.QM]
Cover_Joy_Elements_of_Information_Theory Thomas M. Cover, Joy A. Thomas, "Elements of Information Theory", Wiley Series, 2006
fukushima_self_org Kunihiko Fukushima, "Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position", <www.cs.princeton.edu/courses/archive/spr08/cos598B/Readings/Fukushima1980.pdf>
barra_phase_boltzmann Adriano Barra, Giuseppe Genovese, Peter Sollich, Daniele Tantari, "Phase transitions in Restricted Boltzmann Machines with generic priors", arXiv:1612.03132 [cond-mat.dis-nn]
hopf_boltzmann Adriano Barra, Alberto Bernacchia, Enrica Santucci, Pierluigi Contucci, "On the equivalence of Hopfield Networks and Boltzmann Machines", arXiv:1105.2790 [cond-mat.dis-nn]
hopfield_nn_phys_sys J. J. Hopfield, "Neural Networks and physical systems with emergent collective computational abilities", Proceedings of the National Academy of Science, USA, 79 (1982) 2554-2558
peterson_anderson_mft Carsten Peterson, James R. Anderson, "A Mean Field Theory Learning Algorithm for Neural Networks", Complex Systems 1 (1987) 995-1019
loss_surface Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, Yann LeCun, "The Loss Surfaces of Multilayer Networks", arXiv:1412.0233 [cs.LG]
spinglass_bialek Gasper Tkacik, Elad Schneidman, Michael J. Berry II, William Bialek, "Spin glass models for a network of real neurons", arXiv:0912.5409 [q-bio.NC]
energyae Hanna Kamyshanska, Roland Memisevic, "The Potential Energy of an Autoencoder", <https://www.iro.umontreal.ca/ memisevr/pubs/AEenergy.pdf>
complex_crit Kim Christensen, Nicholas R. Moloney, "Complexity and Criticality (Advanced Physics Texts)", Imperial College Press, 2005
life_order Philip Ball, "One rule of life: Are we poised on the border of order?", New Scientist, April 2014
renorm_stat_phys Uwe C. Tauber, "Renormalization Group: Applications in Statistical Physics", Nuclear Physics B, (2011) 1–28
origin_order Stuart A. Kauffman, "The Origins of Order: Self-Organization and Selection in Evolution", Oxford University Press, 1993
scale_renorm_univers H.E. Stanley, "Scaling, universality and renormalization: three pillars of modern critical phenomena", <http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.71.S358>
salinas Silvio Salinas, "Introduction to statistical physics", Springer, 2001
cw_full_solution Martin Kochmanski, Tadeusz Paszkiewicz, Slawomir Wolski, "Curie-Weiss magnet: a simple model of phase transition", arXiv:1301.2141 [cond-mat.stat-mech]
crit_brain_dynam Dante R. Chialvo, "Critical brain dynamics at large scale", arXiv:1210.3632 [q-bio.NC]
weak_pair_wise_corr_popul Elad Schneidman, Michael J. Berry II, Ronen Segev, William Bialek, "Weak pairwise correlations imply strongly correlated network states in a neural population", arXiv:q-bio/0512013 [q-bio.NC]
stat_mech_complex_nets Reka Albert, Albert-Laszlo Barabasi, "Statistical mechanics of complex networks", arXiv:cond-mat/0106096 [cond-mat.stat-mech]
zipf_crit_no_fine David J. Schwab, Ilya Nemenman, Pankaj Mehta, "Zipf's law and criticality in multivariate data without fine-tuning", arXiv:1310.0448v3 [q-bio.NC]
http://arxiv.org/abs/1702.08039v2
{ "authors": [ "Dan Oprisa", "Peter Toth" ], "categories": [ "cs.AI", "cs.LG" ], "primary_category": "cs.AI", "published": "20170226144338", "title": "Criticality & Deep Learning I: Generally Weighted Nets" }
alex.iacp.dvo@mail.ru
^1School of Natural Sciences, Far Eastern Federal University, 6 Sukhanova Str., 690041 Vladivostok, Russia
^2Institute for Automation and Control Processes FEB RAS, 5 Radio Str., 690041 Vladivostok, Russia
^3Samara National Research University, Moskovskoe shosse, 34, Samara 443086, Russia
^4Lebedev Physical Institute, Leninskiy prospect 53, Moscow 119991, Russia
^5ITMO University, St.-Petersburg 197101, Russia
Donut-shaped laser radiation carrying orbital angular momentum, namely an optical vortex, was recently shown to provide vectorial mass transfer, twisting transiently molten material and producing chiral micro-scale structures on surfaces of different bulk materials upon their resolidification. In this paper, we show for the first time that nanosecond laser vortices can produce chiral nanoneedles (nanojets) of variable size on thin films of such plasmonic materials as silver and gold covering thermally insulating substrates. The main geometric parameters of the produced chiral nanojets, such as height and aspect ratio, are shown to be tunable in a wide range by varying the metal film thickness, the supporting substrate, and the optical size of the vortex beam. Donut-shaped vortex nanosecond laser pulses carrying two vortices with opposite handedness are demonstrated to produce two chiral nanojets twisted in opposite directions. The results provide new important insights into the fundamental physics of vectorial laser-beam-assisted mass transfer in metal films and demonstrate the great potential of this technique for fast, easy-to-implement fabrication of chiral plasmonic nanostructures.
OCIS codes: (050.4865) Optical vortices; (140.3390) Laser materials processing; (220.4241) Nanostructure fabrication; (050.1940) Diffraction; (140.3300) Laser beam shaping
Direct laser printing of chiral plasmonic nanojets by vortex beams
S. Syubaev^1^,^2, A. Zhizhchenko^1, A. Porfirev^3, E. Pustovalov^1, O. Vitrik^1^,^2, Yu.
Kulchin^2, S. Khonina^3, S. Kudryashov^4^,^5, A. Kuchmizhak^1^,^2^,================================================================================================================================================================§ INTRODUCTIONChirality is a specific inherent feature, readily occurring in natural living systems at almost all length scales from snails and sea shells to chiral molecules and DNA. Artificially designed nanoscale chiral-shaped structures, mimicking their natural analogues owing to unique ways of interaction with optical radiation, possess remarkable properties as circular dichroism, enhancement of nonlinear signals, highly directional emission and photoactivity [1-4]. Meanwhile, utilization of state-of-the-art direct nanofabrication techniques based on ion- or electron beam milling for chiral nanostructures fabrication is rather challenging, which triggers, in its turn, search for novel pathways to produce such unique nanostructures. At nanometer scale, fabrication of chiral nanostructures can be provided by assembly of specifically designed chiral molecules via various self-assembling processes [5-8]. However, the number of the available materials as well as the achievable size of the produced structures are both limited.Recently, in a number of papers an alternative pioneering approach, employing optical radiation with specially designed intensity-, polarization- or phase states, was proposed for fabrication of chiral structures [9-19]. In particular, using inherent chirality of nanoparticles, synthesis of twisted nanoribbons through self-assembly in water solution under irradiation with continuous-wave circularly polarized laser radiation was demonstrated [9]. More importantly, vortex laser pulses, carrying simultaneously orbital (OAM) and spin (SAM) angular momenta, were demonstrated to twist transiently molten metal producing chiral-shaped micron-size needles [10,11]. 
Later, similar effects were used to produce chiral structures on the surface of polymers [12-14] and semiconductors [15,16]. Based on their experiments with ablation of bulk tantalum targets [17], the authors revealed that the mass transfer and handedness of the produced surface structures are associated with the corresponding phase helicity (or OAM), while the polarization helicity (or SAM) accelerates/decelerates the movement of the molten material. Despite the apparently huge potential of such a direct, easy-to-implement method for fabrication of different chiral nanostructures mentioned in the previous studies, to the best of our knowledge, no experiments showing formation of twisted nanoneedles of variable size on the surface of such common plasmonic materials as silver or gold have been reported so far. In this paper, we study the formation of twisted nanojets under direct ablation of silver (Ag) films of variable thickness with nanosecond (ns) laser vortex pulses generated by passing circularly polarized radiation through an S-waveplate [20]. The main geometric parameters of the produced chiral nanoneedles - height, width and aspect ratio - are shown to be tunable in a wide range by varying the thickness of the irradiated metal film, the supporting substrate type, and the optical size of the vortex beam. Moreover, nanosecond vortex pulses with a donut-shaped intensity distribution, carrying two vortices with opposite handedness, are demonstrated to produce two chiral nanojets twisted in opposite directions.
We believe the obtained results provide useful and important insights into the fundamental picture of vectorial-beam-driven mass transfer in metal films, and demonstrate the great potential of direct vortex-beam ablation as a versatile fabrication technique for chiral nanophotonics and plasmonics.

§ EXPERIMENTAL DETAILS

A vortex beam carrying OAM ℓ=±1 (per photon, in units of ħ) was generated by transferring second-harmonic radiation from a Nd:YAG laser system (Brio, Quantel: central wavelength – 532 nm, pulse duration – 8 ns, repetition rate – 20 Hz, maximal pulse energy – 50 mJ) through a Glan-Taylor polarizer and a quarter-waveplate (Fig. 1(A)), to produce a circularly polarized beam, and finally through a commercially available radial polarization converter (S-waveplate, Altechna). The generated ns-laser vortex pulses were then focused onto the sample surface by means of different microscope objectives (Xirox: NA = 0.45, 50x; Nikon Plan Fluor: NA = 0.3, 15x and NA = 0.65, 50x), yielding donut-shaped intensity distributions (Fig. 1(B)) with outer optical diameters D_vortex of ≈2.05 μm, ≈2.7 μm and ≈3.9 μm for NA = 0.65, 0.45 and 0.3, respectively. As samples for nanostructuring, Ag films of variable thickness, ranging from 100 to 1000 nm, were used, deposited by e-beam evaporation (Ferrotec EV M-6) at a pressure of 5·10^-6 bar and an average deposition rate of 0.5 nm/s onto optically smooth silica glass substrates. The resulting film thicknesses were pre-controlled by a calibrated quartz-crystal microbalance system (Sycon STC-2002) and then verified by atomic force microscopy. Moreover, in order to study the substrate effect on the formation of twisted nanojets, two types of substrates for 100-nm thick Ag films – silica glass and poly(methyl methacrylate) – were used. The samples were arranged on a PC-driven micropositioning platform (Newport XM and GTS series), providing a minimal translation step of 50 nm along each axis.
Pulse energy E was varied by means of a transmission filter and controlled by a pyroelectric photodetector. All ablation experiments were performed under ambient conditions in the single-pulse mode. The morphology of the produced nanostructures was characterized by high-resolution scanning electron microscopy (SEM, Carl Zeiss Ultra 55+).§ RESULTS AND DISCUSSIONSSeries of energy-resolved SEM images (Fig. 2(A-D)) illustrates the formation and evolution of the twisted nanojets produced under single-pulse irradiation of the 500-nm thick Ag film, covering the silica glass substrate, with ns-laser vortex pulse focused at NA=0.3. Besides the evident twisted shape of the produced nanojets with their clockwise rotational symmetry (the sign of vortex helicity), expectedly coinciding with the vortex pulse handedness, their formation process appears to be very similar to that for common nanojets fabricated under single-pulse ablation of noble (semi-noble) metal films with Gaussian-shape (zero-OAM) ns-laser pulses [21-26]. The main energy-resolved steps include (i) accumulation of the molten material at the spatial center of the beam through thermocapillary forces and melt flows, which rapidly thin the peripheral part, (ii) formation of the liquid nanojet, which undergoes a Rayleigh-Plateau hydrodynamic instability, resulting in appearance and ejection of the molten droplets and, finally, (iii) formation of the through hole by breaking the significantly thinned area, surrounding the nanojet, at increased pulse energy (this stage is not shown in the SEM images). The tipped twisted nanojets are expectedly fabricated for typical pulse energies sufficient to trigger the ejection of all molten droplets from the resolidifying tip. Under such condition, the produced twisted nanojet can be characterized by very fine tips with the typical averaged curvature radius Rtip 12±6 nm, as it was shown by the high-resolution SEM imaging (see inset in Fig. 2(C)). 
For more tightly focused vortex pulses, similar chiral nanojets, having ultrafine tips, can be produced on the surface of the 500-nm thick Ag film under the single-pulse ablation (see Fig. 2(E-L)). The main difference, evident from the analysis of the presented SEM images, consists in the amount of the molten material involved in the formation process, yielding in variation of the main geometrical parameters of the produced twisted conically shaped nanojets – height (H), width (w), base diameter D and height-to-width (aspect) ratio A. By varying the focusing conditions and systematically studying the formation process of the nanojets under single-pulse ablation of such films, we identify the several general trends characterizing the effect of the objective’s NA. First, the formation of the thinner chiral jets (simultaneous decrease of w and D), providing evident increase of their aspect ratio A (Fig. 3(A-D), is observed for the more tightly focused vortex pulses. With the typical width w of the silver nanojets being varied between 0.95 and 0.68 μm for the tested NA range (almost 4-fold smaller values, comparing to those reported for nanojets produced on the bulk tantalum surface [10]), the aspect ratio A increases two-fold, in its turn (orange circles in Fig. 3D). Second, the average height H accessed for the tipped nanojets without the droplets atop was found to be slightly affected by the NA value (Fig. 3(C)). Similar observations were reported in [10]. Finally, the chirality of the nanojets decreases versus NA. In more general words, the chiral nanojets produced under high-NA focusing (NA=0.65) become less twisted (averaged number of the turnovers per single nanojet decreases) and more similar to those produced under zero-OAM Gaussian pulse ablation [21-26]. 
In particular, this can indicate that the formation process is directly connected with the resolidification time, which is expectedly smaller for the structures produced under the tight focusing conditions.Similar NA-dependent trends were observed for the chiral nanojets produced on the surface of the 100-nm thick Ag film, covering the glass substrate (grey circles in the Fig. 3(A-D)). Meanwhile, excluding the aspect ratio value A, all other geometric parameters of such nanojets demonstrate evident decrease, comparing to those measured for nanojets produced on the 500-nm thick Ag film, apparently owing to smaller amount of the involved molten material. The SEM study of the nanojets produced on the 100-nm thick Ag film at tight focusing (NA=0.65) indicates their chiral shape as well as pronounced shift of the nanojet position, comparing to the center of both the optical spot and the damaged area (marked by the orange circles in Fig. 3(F-H)), indicating the rotational motion in the direction coincided with the vortex handedness. Also, regardless of the tested experimental parameters (either the objective NA or the film thickness), the average curvature radius Rtip of the nanojet tip was found to be smaller than 17 nm in all cases.In terms of vortex pulse printing of twisted nanojet arrays, it is also important to evaluate their ultimate package density, measuring the size of the film damaged area. Fig. 3(E) shows the typical lateral size of the twisted nanojets as a function of the objective NA, demonstrating the linear scalability and potential few-micron package density at high-NA focusing, which will be studied in details in our forthcoming papers. Finally, since the thickness of the metal film in the microbump area is very small, its exposure to the second laser pulse usually results in destruction of the microbump and appearance of a through hole. 
In this respect, it seems impossible to use multi-pulse irradiation to reshape or enhance the chirality of the produced jets [10]. Meanwhile, as it was shown in this paper, the large set of geometrical parameters can be realized for nanojets via tuning corresponding focusing conditions and film thicknesses.One additional experimental aspect, uncovering formation and enabling extra tunability of nanojet shapes, which will be only briefly addressed in this paper, consists in variation of supporting substrate, which determines adhesion and acoustic impedance matching between the film and substrate. The “substrate effect” was studied by comparing typical geometrical parameters of the twisted nanojets produced on the surface of 100-nm thick Ag films on the silica glass and PMMA substrates at different pulse energies. With almost twice lower ablation threshold measured for the case of PMMA substrate, using the common linear approximation of the squared diameters of through holes D2 versus natural logarithm of applied pulse energy ln(E) [27], the typical width of the nanojets decreases considerably (Fig.3(A)), yielding in averaged 2.2-fold increase in the aspect ratio A values (see Fig. 3(D)). Similar “substrate effect” was found for the ordinary non-chiral nanojets produced under ablation of the same Ag film with zero-OAM pulses, pointing out the adhesion as one of the key factors, affecting the nanojet formation process and achievable parameter range, in its turn. Also, it should be stressed that ablation experiments undertaken for the 1000-nm thick Ag film, covering the glass substrate, did not demonstrate the formation of regular microscale nanojets for the whole tested NA range, while some rotational movement “fingerprints” were identified from the SEM analysis of the produced surface structures (not shown here).As it was mentioned above, the S-waveplate was used to generate the ns-laser vortex pulses, carrying OAM ℓ=±1, from circularly polarized ones. 
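The threshold measurement above relies on Liu's method [27], in which the squared damage diameter is fit linearly against the logarithm of the pulse energy; the slope gives the Gaussian spot size and the zero crossing gives the ablation threshold. A minimal numeric sketch with synthetic data (the values below are made up, not the measured ones):

```python
# Sketch of Liu's method [27] used above: for a Gaussian beam,
# D^2 = 2 w0^2 ln(E / E_th), so fitting D^2 linearly against ln(E) yields the
# spot size w0 from the slope and the ablation threshold E_th from the zero
# crossing.  The data below are synthetic, not the measured values.
import numpy as np

def liu_fit(energies, diameters):
    """Return (w0, E_th) from the linear fit D^2 = slope*ln(E) + intercept."""
    slope, intercept = np.polyfit(np.log(energies), np.asarray(diameters) ** 2, 1)
    w0 = np.sqrt(slope / 2.0)
    e_th = np.exp(-intercept / slope)
    return w0, e_th

# synthetic data generated with w0 = 1.0 um and E_th = 0.5 (arbitrary units)
E = np.array([0.8, 1.2, 2.0, 3.5, 6.0])
D = np.sqrt(2.0 * 1.0 ** 2 * np.log(E / 0.5))
w0, e_th = liu_fit(E, D)
print(round(w0, 3), round(e_th, 3))   # -> 1.0 0.5
```

With real through-hole diameters the fit is only approximate, but the same two-parameter regression recovers both the beam waist and the threshold energy.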
It is known that under illumination of the S-waveplate with elliptically polarized light, two optical vortices with opposite SAM handedness and opposite OAM handedness can appear [28,29]. This inherent feature of the S-waveplate was used in this paper to generate a donut-shaped beam, carrying two optical vortices with opposite handedness. To do this, for the fixed position of the S-waveplate, we rotated the quarter-wave plate to generate elliptically polarized light, while simultaneously detecting the interference pattern produced by the generated donut-beam and the plane wave from the reference arm of the common Mach-Zehnder interferometer scheme. For a certain position of the quarter-wave plate, the single “fork” in the interference pattern converts into two opposite “forks”, indicating the formation of the vortices with opposite helicity signs (Fig. 4(A,B)) without strong deformations of the donut-shaped intensity distribution.Surprisingly, the ablation of the 500-nm thick Ag film, covering the glass substrate with the donut-beam, carrying two vortices, produces two chiral nanojets with opposite handedness, as it is indicated by the series of the energy-resolved SEM images (Fig. 4(C,D)). At the increasing pulse energy, each separate nanojet undergoes the evolution, similar to those observed for single twisted nanojet, finally evolving into tipped nanoneedle with the pronounced chirality. This remarkable demonstration also indicates that, by tailoring complex intensity and phase distributions via DOEs or other optical elements, the complex patterns with multiple chiral plasmonic nanoneedles can be produced under single-pulse ablation.Finally, the similarity of our present observations and the previous extensive experience in fabrication of thin-film nanojets, using short- and ultrashort laser pulses [24-26,30], provokes us to compare formation of the non- and chiral nanoneedles. 
In comparison to previously reported results [10,17], where no nanojets were observed under irradiation of the bulk metal target with the zero-OAM Gaussian pulses, for silver films studied in this paper, both Gaussian- and donut-shaped beams produce nanojets, having non- and chiral shapes, respectively. In particular, this indicates that for almost similar general mechanism, underlying the formation of the nanojets and associated with temperature-gradient-driven thermocapillary flow of the molten film, strong rotational movement appears under vortex-pulse irradiation, twisting molten material in the direction, coinciding with the vortex helicity sign. Fingerprints of such rotational movement can be found on both microscale -in the chiral shape of the resolidified nanojets and in the central symmetry breaking of the nanojet spatial position (Fig. 3(F-H)), and nanoscale – in the bending of some nanocrystallites (Fig. 4(E)), typically having radial-symmetry arrangement [25]. High-resolution SEM imaging of produced twisted nanojets shows that such effect is observed only near the nanojet areas, where twisting thermocapillary flows, strong enough to perturb the recrystallization wave, appear (Fig. 4(E)). The helical thermocapillary melt flows within the evolving nanojet and surrounding microbump area possibly can originate from the corresponding characteristic spiral-like (or more complex, [9]) intensity distribution. The origin of such spiral-shaped intensity distribution and corresponding temperature profile on the metal film surface can be explained in terms of optical interference of the incident donut-shaped beam with the spherical wave reflected/scattered from the evolving surface profile of the molten metal film. 
As any considerable surface profile evolution of the initially smooth metal film starts after passing the electron-phonon relaxation time (few picosecond for noble metal film), helical shape of the nanojets is expected to disappear for pulse durations shorter, than this time. Similar observation was reported for vortex-beam ablation of silicon target [15]. The detailed picture of appearance of such secondary reflected/scattered wave and its interference with the incident one will be a subject of our ongoing experimental studies and theoretical modeling. We believe that this possible alternative explanation of nanojet helicity, together with the present one [15,17], contributes to the basic understanding and supports new elucidating studies of matter structuring by structured light.§ CONCLUSIONS To conclude, nanosecond vortex pulses generated by passing circularly polarized radiation through S-waveplate were found to produce twisted nanojets under single-pulse ablation of Ag films. Main geometric parameters of the produced chiral nanojets, such as height, width and an aspect ratio, were shown to be tunable in a wide range by varying metal film thickness, supporting substrate type, and the optical size of the vortex beam. Donut-shaped vortex nanosecond laser pulses, carrying two vortices with opposite handedness, were demonstrated to produce two chiral nanojets twisted in opposite directions. The results provide new important insights into fundamental physics of the vectorial laser-beam assisted mass transfer in metal films and demonstrate the great potential of this technique for fast easy-to-implement fabrication of chiral plasmonic nanostructures. 
§ FUNDING

Authors from IACP and FEFU are grateful for partial support to the Russian Foundation for Basic Research (Projects nos. 14-29-07203 - of_m, 15-02-03173-a, 17-02-00571-a, 17-02-00936-a) and to FASO through the "Far East Program". Authors from SNRU are grateful for partial support to the Russian Foundation for Basic Research (Project no. 16-29-11698). A.A. Kuchmizhak acknowledges the partial support from the RF Ministry of Science and Education (Contract No. МК-3287.2017.2) through the Grant of the RF President. S.I. Kudryashov is grateful for the partial support by the Government of the Russian Federation (Grant 074-U01) through the ITMO Visiting Professorship Program, and by the Presidium of the Russian Academy of Sciences. E.V. Pustovalov is grateful for the partial support by the Russian Ministry of Education and Science (grant # 3.7383.2017).

§ REFERENCES

1. N. M. Litchinitser, "Structured light meets structured matter," Science 337(6098), 1054-1055 (2012). 2. K. Y. Bliokh, F. J. Rodríguez-Fortuño, F. Nori and A. V. Zayats, "Spin-orbit interactions of light," Nat. Photon. 9(12), 796-808 (2015). 3. J. Kaschke and M. Wegener, "Optical and infrared helical metamaterials," Nanophotonics 5(4), 510-523 (2016). 4. G. Rui and Q. Zhan, "Tailoring optical complex fields with nano-metallic surfaces," Nanophotonics 4(1), 2-25 (2015). 5. A. Gopal, M. Hifsudheen, S. Furumi, M. Takeuchi and A. Ajayaghosh, "Thermally assisted photonic inversion of supramolecular handedness," Angew. Chem. 124(42), 10657-10661 (2012). 6. J. Kumar, T. Nakashima and T. Kawai, "Circularly polarized luminescence in chiral molecules and supramolecular assemblies," J. Phys. Chem. Lett. 6(17), 3445-3452 (2015). 7. H.
Department of Mathematics, University of Houston^1 Department of Bioengineering, Rice University^2 Center for Theoretical Biological Physics, Rice University^3 Department of Biosciences, Rice University^4 Department of Biology and Biochemistry, University of Houston^5 * = Corresponding author.Modeling mechanical interactions in growing populations of rod-shaped bacteria William Ott^1,* December 30, 2023 =================================================================================Advances in synthetic biology allow us to engineer bacterial collectives with pre-specified characteristics. However, the behavior of these collectives is difficult to understand, as cellular growth and division as well as extra-cellular fluid flow lead to complex, changing arrangements of cells within the population. To rationally engineer and control the behavior of cell collectives, we need theoretical and computational tools to understand their emergent spatiotemporal dynamics. Here, we present an agent-based model that allows growing cells to detect and respond to mechanical interactions. Crucially, our model couples the dynamics of cell growth to the cell's environment: mechanical constraints can affect cellular growth rate, and a cell may alter its behavior in response to these constraints. This coupling links the mechanical forces that influence cell growth and emergent behaviors in cell assemblies. We illustrate our approach by showing how mechanical interactions can impact the dynamics of bacterial collectives growing in microfluidic traps.§ INTRODUCTION To realize the full potential of synthetic biology, we need to be able to design assemblies of interacting cells and organisms. Cooperating cells can specialize and assume different responsibilities within a collective <cit.>.
This allows such bacterial consortia to outperform monocultures, both in terms of efficiency and range of functionality, as the collective can perform computations and make decisions that are far more sophisticated than those of a single bacterium <cit.>. Recent advances in synthetic biology allow us to design multiple, interacting bacterial strains, and observe them over many generations <cit.>. However, the dynamics of such microbial consortia are strongly affected by spatial and temporal changes in the densities of the interacting strains. The spatial distribution of each strain determines the concentration of the corresponding intercellular signals across the microfluidic chamber, and in turn, the coupling among strains. To effectively design and control such consortia, it is necessary to understand the mechanisms that govern the spatiotemporal dynamics of bacterial collectives. Agent-based modeling provides an attractive approach to uncovering these mechanisms. Such models can capture behaviors and interactions at the single-cell level, while remaining computationally tractable. The cost and time required for experiments make it difficult to explore the impact of inhomogeneous population distributions and gene activity under a variety of conditions. Agent-based models are far easier to run and modify. They thus provide a powerful method to generate and test hypotheses about gene circuits and bacterial consortia that can lead to novel designs. Importantly, agent-based models of microbial collectives growing in confined environments, such as microfluidic traps, should capture the effect of mechanical interactions between cells in the population. Forces acting on the constituent cells play a critical role in the complex dynamics of cellular growth and emergent collective behavior <cit.>.
Agent-based models, therefore, need to be able to model the force exerted by growing cells, as well as the mechanical interactions induced by cell-cell contacts or contact with environmental boundaries. Further, it has been shown that the environment of an individual cell can influence its growth, which in turn influences the collective's behavior through mechanical communication <cit.>. In particular, mechanical confinement can cause cells within the collective to grow at different rates <cit.>. Current agent-based models of microbial collectives (e.g. <cit.>) typically do not allow cells to alter their growth rate in direct response to mechanical sensory input. Adding such capability is challenging, due to the complex relationship between cell growth and the extracellular environment. Here, we introduce an agent-based bacterial cell model that can detect and respond to its mechanical environment. We show that our model can be used to make predictions about the spatiotemporal dynamics of consortia growing in two-dimensional microfluidic traps. Further, we demonstrate that emergent collective behavior can depend on how individual cells respond to mechanical interactions. § MODELING FRAMEWORK To understand the behavior of growing bacterial collectives, we must develop numerical tools that can capture the mechanisms that shape their spatiotemporal dynamics. Here, we propose an agent-based model of bacterial assemblies, using a framework that takes into account mechanical constraints that can impact cell growth and influence other aspects of cell behavior. Taking these constraints into account is essential for an understanding of colony formation, cell distribution and signaling, and other emergent behaviors in cell assemblies growing in confined or crowded environments.
Our framework differs from other published models in an important way: we assume that each cell comprises two axially independent cell halves that attach through a compressible, stiff spring, whose rest length increases to induce cell growth (Fig. <ref>(a)). Our spring model serves as a first-order approximation of peptidoglycan cell wall response to mechanical stresses <cit.>. The expansion rate of the spring rest length sets the target growth rate for the cell. However, in our model the growth rate may not be immediately achieved due to mechanical constraints, such as resistive damping, cell-cell contact, and contact with trap boundaries. Differences between rest-length expansion and actual cell growth result in sustained spring compressions, whose energy can be thought of as a stored growth potential for the cell. Most published models require that cells grow exactly at a priori prescribed rates. An exception is a model introduced to study the organization of crowded bacterial colonies growing in confined niches <cit.>. As a result, most models do not capture mechanical constraint detection and resultant growth modulation. Our approach introduces greater flexibility than, for example, assuming that growth rate is determined by the position of a cell in a trap <cit.>. We first present our cell model and assumptions, derive the theoretical equations of motion for growth of an isolated cell, and validate simulation results by comparing them to our theoretical model. In constructing our simulations and diagrams, we took advantage of two open-source software resources: the physics engine <cit.> for cell dynamics, and the cell simulation platform <cit.> (which we modified for use with our cell model) for visualizations and image sequencing. §.§ Cell construction We model each bacterium as an assembly of two independent cell halves. To model cell growth, we assume that these two halves expand symmetrically along the long axis of the bacterium (Fig. <ref>(a)).
Each cell half consists of a mass m at the center of a semi-circular pole, which connects to straight, long-body edges (as shown by different colors in Figure <ref>(a)). The two masses connect through a virtual spring with linear spring constant k. Importantly, the rest length of the spring increases in time. In confined environments, extension of the rest length induces forces on neighboring cells, microfluidic trap boundaries, and any other obstacles the cell may encounter. In order to ensure the cell halves act as a single, well-defined cellular unit (for example, upon collision with other cells or fixed barriers), we use a pair of symmetric ball-and-groove type connections to ensure that the halves remain aligned and resist bending <cit.>. This also ensures that any off-axis or rotational impulses are transmitted equally to both halves of the cell. Thus, cell growth forces are designed to act independently in the axial direction, whereas off-axis, cell-external forces act on the cell as a whole. Sufficiently large on-axis components of external forces could result in a cell-length compression in this model, but we mitigate this using a rigid-body, back-filling "ratchet" algorithm. Details about the implementation are provided in the Appendix. §.§ Growth model We induce axial cellular growth by extending the rest length, R, of the virtual spring that connects the cell halves (Fig. <ref>(a), top panel). The induced expansion force can be felt by all neighboring objects (see Fig. <ref>(a), bottom panel). Cho et al. <cit.> used a related model to study how mechanical constraints lead to self-organization in bacterial colonies grown in confined environments. Crucially, rest-length extension is an adjustable component of our model that captures the growth tendency of each cell. As we will see, altering how rest-length extension dynamics respond to constraint can impact the global dynamics of collectives.
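The core of this cell representation is small enough to sketch directly. The Python below is an illustrative reduction to the axial degree of freedom; the class, attribute, and method names are ours, not those of the authors' implementation, and the full model additionally handles off-axis rigid-body dynamics through the physics engine:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """Two axially independent halves joined by a virtual expansion spring."""
    x_left: float   # lab-frame axial position of the left half
    x_right: float  # lab-frame axial position of the right half
    R: float        # spring rest length (grows in time to drive expansion)
    k: float        # linear spring constant

    @property
    def length(self) -> float:
        return self.x_right - self.x_left

    @property
    def compression(self) -> float:
        # R - l: a sustained positive value signals mechanical constraint,
        # the locally measurable quantity the cell agent can respond to
        return self.R - self.length

    def expansion_force(self) -> float:
        # equal-and-opposite axial force k(R - l) applied to each half
        return self.k * self.compression

    def extend_rest_length(self, a: float, dt: float) -> None:
        # open-loop growth program: Rdot = a
        self.R += a * dt
```

Comparing `compression` against a threshold is all a cell agent needs in order to modulate its rest-length extension rate, which is the feedback mechanism explored later in the paper.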
To start, however, in Sections <ref> and <ref> we assume that the rest length grows at a constant rate, Ṙ = a. In this case, mechanical constraints can result in unphysiologically large potential energy stored in a highly compressed spring, an issue we address in subsequent sections. We assume cells grow in an extracellular fluid with a resistive damping parameter, γ, and that our system is in the non-inertial dynamics regime (see Appendix). Fluid damping resists cell growth via a damping force γẋ, where ẋ is the lab-frame speed of a cell half through the extracellular fluid. We explicitly model this parameter to explore the effects that fluid damping variations have on cell dynamics. Although γ defines non-inertial dynamics over a broad range of values, we will see that it directly governs response dynamics under the assumptions of our growth model. We make the simplifying assumption that γ captures all sources of resistive damping, including extracellular fluid damping and dissipative (non-Hamiltonian) damping forces within the cell itself. In particular, γ serves as an imperfect but computationally manageable proxy for cell-internal spring damping. Many published agent-based models treat bacterial cells as unitary rigid bodies under non-inertial dynamics that achieve cell growth by a process we will call the Expansion, Overlap, Relaxation (EOR) method. In these models, forward Euler integration of the growth rate a expands (E) a cell by increasing its length by a·dt, where dt is the time discretization step. If a cell is sufficiently near, or in contact with, another object (for example another cell or a trap wall) just before this time step, expansion will result in overlap (O). A relaxation algorithm (R) is then applied that resolves (or prevents) overlaps of all cells and objects using repulsion forces <cit.>, constraint <cit.>, iteration <cit.>, or a related algorithm <cit.>.
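For contrast with our approach, the EOR scheme described above can be sketched as follows. This is a deliberately minimal one-dimensional caricature with names of our choosing, and the pairwise push-apart rule stands in for the cited repulsion, constraint, and iteration algorithms; the point is that each cell reaches its prescribed length every step, and geometry is fixed up afterward:

```python
def eor_step(cells, a, dt, max_iter=50):
    """One Expansion-Overlap-Relaxation update for cells on a line.

    cells: list of [center, length] pairs; a: growth rate; dt: time step.
    A toy iterative relaxation resolves overlaps of adjacent cells.
    """
    # E: every cell grows by exactly a*dt -- growth is imposed, not emergent
    for c in cells:
        c[1] += a * dt
    # O + R: repeatedly push apart overlapping neighbors until none remain
    for _ in range(max_iter):
        cells.sort(key=lambda c: c[0])
        moved = False
        for left, right in zip(cells, cells[1:]):
            overlap = (left[0] + left[1] / 2) - (right[0] - right[1] / 2)
            if overlap > 1e-12:          # neighbors interpenetrate
                left[0] -= overlap / 2   # symmetric push-apart
                right[0] += overlap / 2
                moved = True
        if not moved:
            break
    return cells
```

Note that after the relaxation step there is no record of how constrained each cell was; this is precisely the information our spring compression R - ℓ retains.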
In our model, we prevent cell overlap by using collision dynamics to resolve competing growth expansion under the constraint of cell-cell or cell-barrier contact (see Appendix). Importantly, by constructing a cell with two axially independent halves, we do not have to assume that each cell reaches a predetermined size, determined by the growth rate, at the end of each time step. In contrast to the EOR method, this allows us to determine the impact of mechanical constraints on the growth of a cell by comparing the achieved cell length ℓ to spring rest length R at each time step. We can then link this measurement (which is made locally by the cell agents themselves), to other aspects of the cell model. As we will see,emergent assembly behavior can depend on how cells modulate growth in response toconstraints.§.§ Equations of motion for an isolated cellWe derive the equations of motion for an isolated cell in an extracellular fluid with resistive damping parameter γ. Cell growth results from a linear spring force computed from the difference between ℓ and the rest length R of our virtual spring (R - ℓ is thus spring compression) and applied to each cell half. Using linear spring constant k, we have the inertial equation of motion for an isolated cell, m ℓ̈/2 = k (R - ℓ) - γℓ̇/2. Assuming non-inertial dynamics (see Appendix), Eq. (<ref>) yields a differential equation for expansion velocity, ℓ̇ = 2k/γ(R - ℓ). In order to close Eq. (<ref>), we must describe the dynamics of the rest length, R. Bacteria grow approximately exponentially (see <cit.> and references contained therein). However, for simplicity we let R extend linearly at rate a, independent of cell length. This assumption can be relaxed, and does not affect the main points below. In Section <ref>, we will introduce mechanical feedback by modulating Ṙ in response to mechanical constraint.Setting R(0) = 0, we have R(t) = at, so Eq. (<ref>) becomesℓ̇ + 2k/γℓ =2k/γ a t. 
Defining τ = γ/2k, setting the initial cell length ℓ to zero, and solving Eq. (<ref>) gives the length of an isolated cell, and the rate of its expansion, ℓ(t) = a(t - τ + τ e^-t/τ), ℓ̇(t) = a(1 - e^-t/τ). The parameter τ acts as a time constant for growth dynamics. Eq. (<ref>) shows that ℓ̇ → a, and that τ governs the time required to reach steady state. Since τ is proportional to resistive damping γ for fixed k, resistive damping therefore governs this lag. Using Eq. (<ref>), the compression of the spring that drives the growth of the isolated cell is given by R(t) - ℓ(t) = at - a(t - τ + τ e^-t/τ) = aτ(1 - e^-t/τ). Notice that Eq. (<ref>) implies that (R - ℓ) → aτ, a measure of the sustained mechanical constraint felt by an isolated growing cell at steady state due to resistive damping. As described in the Appendix, we have implemented this model using the  environment. To validate our implementation, we first compared the growth of an isolated cell to that given by Eq. (<ref>). We varied the resistive damping γ by an order of magnitude, using units in which k = 1 and changing γ from 1 to 10. Figure <ref>(b,c) shows close agreement between theory and simulation for spring compression and expansion speed. The timescale at which both approach their equilibrium values increases with γ. § BEHAVIOR OF CELLS IN A MOTHER MACHINE To bridge the divide between a single, isolated cell and collectives growing in general two-dimensional geometries, we now study a one-dimensional `mother machine' configuration, in which cells are constrained to grow in long, narrow traps. Mother machines are microfluidic devices developed to study bacterial cell growth and division over hundreds of generations (see <cit.>). They consist of an array of impermeable, three-walled narrow channels, each just wide enough to hold a line of cells. The open end of each channel is perpendicular to a `trench' through which fresh nutrient medium flows.
Cells exiting the narrow channels are carried away by this flow. We simulated a mother machine using a single three-walled barrier that allowed cells to grow in a single file. We initialized cells in the channel by placing them pole-to-pole, with the `mother cell' placed against the back wall (Figure <ref>(a)). As cells grew, they were constrained to move toward the open end of the narrow channel. Using the model of cell growth described in Section <ref>, we simulated an array of four cells with constant rest-length extension rate, Ṙ = a, and recorded their resulting spring compressions (Figure <ref>(b)) and cell-frame expansion speeds (Figure <ref>(d)). We see that cell growth rates and spring compressions equilibrate after a transient time determined by the spring constant, resistive damping parameter, and cell position in the mother machine. This model predicts that the lead cell (the cell closest to the open end of the trap) equilibrates most quickly and is the least compressed. This is intuitive, since cells deeper in the trap must overcome the cumulative resistive drag of those nearer the open end. Analytically, we describe the growing line of cells as a coupled mass-spring system (see Appendix), whose dynamics match the simulations illustrated in Figure <ref>. Solving our analytical model shows that steady-state spring compression in a 1D line of cells is a quadratic function of cell position, as Figure <ref>(b) suggests. These simulations illustrate cell behavior resulting from competing growth and resistive forces of neighboring cells in a simple geometry. Note that steady-state compressions are relatively small in this example. This is due to the small number of interacting cells, as well as the parameters we selected. Compressions can grow substantially in larger traps due to increased cell confinement and resulting interaction forces, as we will see in the next section.
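The one-dimensional dynamics just described can be reproduced with a few lines of forward-Euler integration. The sketch below is ours, not the paper's code: it assumes a drag of γ/2 per cell half (so interior contact nodes, which carry two halves, have drag γ), a common rest length R(t) = at, and no cell division. With a single cell it reduces to the isolated-cell equation ℓ̇ = (2k/γ)(R - ℓ), and with several cells the steady-state compressions decrease from the mother cell toward the open end:

```python
def simulate_mother_machine(n_cells, a, k, gamma, t_end, dt=1e-3):
    """Forward-Euler integration of a 1D line of two-half spring cells.

    Node 0 is the trap back wall (fixed); node i is the contact point at the
    front of cell i.  Drag per half is gamma/2, so interior contact nodes
    (two halves) have drag gamma and the free-end node has drag gamma/2.
    Returns node positions and per-cell spring compressions R - l.
    """
    p = [0.0] * (n_cells + 1)                # node positions, p[0] = wall
    drag = [gamma] * n_cells + [gamma / 2]   # drag[i] for node i (i >= 1 used)
    R, t = 0.0, 0.0
    while t < t_end:
        R += a * dt                          # open-loop rest-length extension
        forces = [0.0] * (n_cells + 1)
        for i in range(1, n_cells + 1):
            c = R - (p[i] - p[i - 1])        # compression of cell i's spring
            forces[i] += k * c               # pushes node i outward
            if i + 1 <= n_cells:             # back-reaction of cell i+1's spring
                forces[i] -= k * (R - (p[i + 1] - p[i]))
        for i in range(1, n_cells + 1):      # non-inertial: v = F / drag
            p[i] += dt * forces[i] / drag[i]
        t += dt
    compressions = [R - (p[i] - p[i - 1]) for i in range(1, n_cells + 1)]
    return p, compressions
```

For example, with a = k = 1 and γ = 2 (so τ = γ/2k = 1), a single cell's length at t = 5 should be close to a(t - τ + τe^-t/τ) ≈ 4.007, and its compression close to aτ = 1; a four-cell run should show compression decreasing monotonically from the mother cell to the lead cell.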
Local constraint detection can significantly influence the global dynamics of growing collectives in two-dimensional geometries, as we now demonstrate. § TWO-DIMENSIONAL MICROFLUIDIC TRAP GEOMETRIES: RESULTS AND PREDICTIONS We next study bacterial assemblies in two-dimensional geometries. We start with a two-strain microbial consortium growing in a long, narrow trap with open sides. Our model predicts that, after a transient period, strains grow in vertically-oriented, curvilinear stripes perpendicular to the longer edge of the trap. Each stripe behaves as a collection of quasi-mother machines. Defects in the stripes form close to the shorter edges of the trap. While boundary geometry is known to direct the collective orientation of bacterial colonies growing in traps with hard walls <cit.>, our prediction of emergent spatiotemporal patterning in open traps is perhaps surprising. In a final study, we examine how allowing growth rate to depend on spring compression affects the global dynamics of an assembly growing in a trap with three walls. Our model predicts that both protein expression and the nematic (angular) ordering of the cells depend on how the rest-length extension rate Ṙ varies with spring compression. §.§ Two-strain consortium growing in an open trap Agent-based models of cellular growth have provided insights into the spatiotemporal dynamics of collectives <cit.>. Here, we use our agent-based model to examine the evolving distribution of two strains in a microfluidic trap open on all sides (see Figure <ref>(a)). Once a cell reaches the boundary of the trap, we assume that it is rapidly carried away by the flow of the media through a channel surrounding the trap. We simulated this by removing such cells from the simulation. We initialized the simulation by randomly placing several seed cells of each type into the empty trap.
Cell growth forces were induced by a constant rest-length extension rate, Ṙ = a. Figure <ref>(a) illustrates a typical spatiotemporal pattern that emerges after growth and expansion of the initial seed cells. Cells organize into vertically-oriented, curvilinear stripes, each composed of a single strain (except for cells near the left and right boundaries, which tend to flow horizontally toward their nearest exits). Each curvilinear column of cells operates as a quasi-mother machine: cells at the center of the column act as `mother cells', while descendants form outer components that flow vertically toward the trap boundary. Our simulations predict that the strain ratio is relatively stable once these stripes emerge. What determines this stable ratio and the width of the stripes remains unclear, since the transient dynamics that precede this quasi-steady state are complex. The strain type of the central cell in a given curvilinear cell column determines the strain type of all of the cells in the column. To predict the stable strain ratio, it is therefore sufficient to predict how the distribution of central cells emerges. However, this depends sensitively on the initial distribution of cells, the relative growth rate of the two strains, and other factors <cit.>. Stability of the strain ratio in our simulations emerges from the stability of the quasi-mother machines and their columnar flow, which inhibits lateral motion of cells; notably, only lateral displacement at the mother cell position by a different strain can influence the strain ratio non-transiently. Figure <ref>(b) illustrates the empirical distribution of normalized cell compression over the duration of a simulation. Each horizontal cross-section of this heat map represents the empirical probability density for compression at a given trap depth. As expected, the empirical compression data is consistent with the behavior of a one-dimensional mother machine.
In particular, mean compression is highest in the center of the trap, and tapers quadratically as one moves toward either of the horizontal trap boundaries (we will see that deviations from this quadratic behavior emerge in three-walled traps). Relatively sharp peaks of the distribution at the long edges of the trap indicate the low variability of spring compression for cells at the boundary of the columnar flow. §.§ Varying the rest-length extension program Thus far we have assumed that the rest-length extension rate is constant. We now explore the global implications of allowing the rest-length extension rate to vary with spring compression in our model. This study is motivated by experimental evidence supporting the thesis that mechanical forces shape the dynamics of collectives <cit.>. In particular, it has been shown that mechanical forces can become sufficiently large to slow cell growth <cit.>. How best to model the impact of such mechanical constraints on cell growth remains unclear. Here, we therefore consider a simple model of how cells modulate their target growth rates in response to mechanical forces, and explore the impact of such growth modulation on the emergent properties of the collective. We introduce a simple growth-rate dependence by setting Ṙ to a constant value for low values of spring compression C = R - ℓ, while decreasing it linearly to zero after the compression crosses a threshold, T. More precisely, we set Ṙ(C) = a, if C ⩽ T; a(2 - C/T), if T < C < 2T; 0, if C ⩾ 2T. We simulated a three-wall trap geometry, as illustrated in the left column of Figure <ref>. The first row of Figure <ref> shows simulation results for a high compression threshold T_h, the second for a low threshold T_l. The center column (panels (c) and (d)) shows normalized spring compression distributions over the lifespans of the simulations. The spring compression is normalized such that T = 0.5.
As before, a horizontal slice represents the empirical probability density for cell compression at a given trap depth. Three regimes emerge: in the bottom section of the trap, the compression profiles are quadratic, suggesting behavior akin to the quasi-mother machine dynamics we examined previously; mean compression levels off beyond the bottom section of the trap before spiking in the back. The sharply increased spring compression at the back wall emerges from the horizontal alignment tendency of cells in this area. Cells parallel to the back of the trap have no open trap boundary in their axial growth direction, which results in marked mechanical confinement, as evidenced using both thresholds in our simulations. §.§ Implications for protein accumulation Spring compression in our model can thus cause cells within the population to grow at different rates. This heterogeneity has implications for protein accumulation in growing collectives. We considered a simple case in which the amount, x, of some protein in each cell obeys the differential equation ẋ = αℓ - βx, where α denotes basal production rate and β is the rate of chemical degradation. When a cell divides, protein is distributed to the daughter cells in proportion to their lengths. The left column of Figure <ref> contains snapshots with cells shaded according to x/ℓ, i.e. protein per length of cell. As we assumed volume is proportional to length, the shading represents protein concentration within the cells, with brighter cells having a higher concentration of protein. Protein concentration is highest in the back of the trap, consistent with the fact that spring compression is highest there. Significantly more protein accumulation occurs when the threshold T is low (bottom snapshot).
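Both ingredients of this section, the piecewise rest-length extension law Ṙ(C) defined above and the protein balance ẋ = αℓ - βx, are straightforward to code. The sketch below uses our own function names and a plain forward-Euler protein update; note that the piecewise law is continuous at C = T, where both branches give Ṙ = a:

```python
def rest_length_rate(C, a, T):
    """Target growth rate Rdot as a function of spring compression C = R - l.

    Constant below the threshold T, ramping linearly to zero on [T, 2T],
    and zero beyond 2T (fully stalled growth).
    """
    if C <= T:
        return a
    if C < 2 * T:
        return a * (2.0 - C / T)
    return 0.0

def protein_step(x, ell, alpha, beta, dt):
    """Forward-Euler update of per-cell protein amount: xdot = alpha*l - beta*x."""
    return x + dt * (alpha * ell - beta * x)
```

For a cell held at fixed length ℓ, the protein amount relaxes toward the equilibrium x* = αℓ/β; a constrained cell that stalls at a shorter ℓ while continuing production at rate αℓ accumulates a higher concentration x/ℓ than an unconstrained neighbor, which is the effect visible in the snapshots.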
We remark that compression dynamics can be `faster' than protein dynamics in the following sense: when a cell under significant constraint and expressing a large amount of protein suddenly becomes dislodged (unconstrained), it may take several generations for protein concentrations in descendant cells to return to levels consistent with equilibrium in unconstrained cells. §.§ Implications for nematic order We finish by examining how nematic order is affected by altering the rest-length extension rate vs. spring compression profile. The right column of Figure <ref> shows cell angle distributions over the lifespans of the simulations. An angle of π/2 corresponds to a vertically oriented cell. Each horizontal slice in the figure represents the empirical probability density function for cell angle at the given trap depth. When the threshold T is high, as in Figure <ref>(e), cells show strong vertical alignment throughout the trap. We observe significantly more nematic disorder with a lower threshold (Figure <ref>(f)). Boyer et al. <cit.> have shown that nematic disorder in three-wall trap geometries can be caused by a buckling instability. Under the assumption that cells in the back of the trap both slow their growth and are smaller due to nutrient depletion, they further show that nematic disorder will be more prevalent there, since small cells are more likely to buckle (Figure 5 of <cit.>). By reducing T in our simulations, we observe that reduction of cell growth rate alone leads to strong nematic disorder in the back of a three-walled trap geometry. Consequently, we have recapitulated the Boyer result. However, in our case the mechanisms are different: nematic disorder emerges solely from slowing cell growth rate, which follows directly from detection and response to mechanical interactions, and not from postulated nutrient depletion. § DISCUSSION The growth of cells, both in natural environments and experimental conditions, is modulated by a number of factors.
These include mutations, nutrient depletion, extracellular forces, and other environmental signals. Cells actively respond to mechanical forces, which implies they are capable of sensing these signals and transducing them into a biological response <cit.>. Here, we have described a simple model of how bacteria effect changes in their growth in response to mechanical interactions. We have shown that such changes can impact the spatiotemporal dynamics of bacterial collectives growing in microfluidic traps. However, our model is certainly an oversimplification. We did not attempt to describe the other factors that modulate cell growth and can lead to emergent dynamical phenomena. For instance, assume that the growth rates of two co-repressing strains in a consortium depend on their transcriptional states, so that the strain with the higher level of expression grows more slowly. This type of interaction between cell growth, strain competition, and protein expression can lead to relaxation oscillations in both transcriptional and growth rates <cit.>. We expect that a variety of mechanisms that affect the growth rate of single cells, directly or indirectly, can lead to emergent phenomena at the level of the bacterial population. Our agent-based model stands in contrast to most previously developed models: we allow cells to follow first-order dynamics rather than assuming cells achieve their growth rate in each time step. Thus cells effectively monitor their environment and respond to mechanical interactions by modulating growth, and, potentially, other aspects of their interior dynamics. It is unclear whether a cell that is prevented from growing stores this potential. However, mechanical interactions certainly impact cell growth even when nutrient supply is adequate. This is confirmed by osmotic shock experiments, in which cells not only stop growing, but also return to the cell length they would have achieved had the shock not occurred <cit.>.
Thus, growth is "stored," and this also supports the use of increased spring compression in our model under constraint. Although our model is an oversimplification, it shows that mechanical interactions can play an important role in the organization and dynamics of growing bacterial collectives. We have described a flexible platform for understanding these effects. But much work remains: the predictions of these models, such as the organization of colonies in microfluidic traps and the impact of crowding on gene expression, will need to be validated experimentally. A deeper understanding of the emergence of order and disorder in these bacterial populations will require the development of effective continuum models of collective cell dynamics <cit.>. Agent-based models of the type we describe can serve as a starting point for these further developments. § APPENDIX §.§ NON-INERTIAL DYNAMICS ASSUMPTION The non-inertial dynamics assumption is satisfied in a regime defined by the value of a fast-scale time constant ξ, which we define with respect to the inertial equations of motion for an isolated cell in our model. We begin with the assumption that the expansion force on a cell is constant. In our model, this translates to a fixed compression R - ℓ of our expansion spring. The validity of this assumption for our simulations is justified by the scale difference between ξ and the discretization time step dt (during which we assume expansion force is constant). We will see that dt is much larger than ξ. Referring to Eq.
(<ref>) in section <ref>, we set F^exp := 2k(R - ℓ) as the constant expansion force. Assuming the mass, m, is constant, the inertial equation of motion for an expanding cell in our model is then: ℓ̈ = F^exp/m - (γ/m)ℓ̇. We define ξ := m/γ as our fast-scale time constant. Solving this equation with initial velocity ℓ̇(0) at time t = 0, we obtain the expansion velocity solution, ℓ̇(t) = (1 - e^-t/ξ) · F^exp/γ + e^-t/ξ · ℓ̇(0). Thus, for times t under which our constant-force assumption holds, the cell expansion velocity is a convex combination of its terminal velocity and initial condition. We can now compute an explicit equation for the acceleration of the cell by taking the time derivative of (<ref>): ℓ̈(t) = d/dt ℓ̇(t) = e^-t/ξ (F^exp/m - (γ/m)ℓ̇(0)). Thus, at t = 0 the acceleration is inertial, and it decays exponentially. From equation (<ref>), we thus see that non-inertial dynamics holds to the extent that F^exp can be assumed constant over a time interval t of interest, such that (conservatively) t ≥ 10ξ (the exponential decays to < 10^-4 in this time). If we take the mass of a cell as m_cell = 10^-15 kg and a fluid damping parameter γ = 10^-8 kg/s, we have ξ = 10^-7 s, or 0.1 μs. Our computer simulations use a time discretization on the order of dt = 0.001 min = 0.06 s. We then have dt/ξ > 10^5. Thus, during a simulation time interval dt (under which we assume spring rest length and cell mass are constant), our non-inertial dynamics assumption holds. Indeed, assuming the given cell mass and fluid damping values, non-inertial dynamics holds whenever system forces and masses can be assumed constant over time intervals of a microsecond or greater. §.§ TIME DISCRETIZATION REQUIREMENTS Under the non-inertial dynamics assumption (see section <ref>), expansion velocity is proportional to expansion force. In order to prevent overshoot of the expansion velocity for an isolated cell in our simulations, we must observe an upper bound for our discretization time step dt. To see this, we require that ℓ̇ < a.
That is, the achieved expansion speed of a cell (starting from rest) should be less than the cell growth rate a. In the RHS of equation (<ref>), we set t = dt to perform a forward Euler integration of the rest length (thus dR = a·dt). We set ℓ̇(0) = 0 and conclude: ℓ̇ < a ⟺ (2k/γ)a·dt < a ⟺ dt < γ/(2k) = τ/2. Thus, dt < τ/2 is a necessary condition in our discretization to prevent expansion-speed overshoot from rest. Importantly, this directly links the lower range of γ (for fixed k) to computation time: increased computation time is the result of a smaller dt, which is required by a smaller γ. Thus, simulations that explore smaller values of τ (equivalently, smaller values of γ for fixed k) will engender higher computational cost under the model described in this paper. However, a more sophisticated, nonlinear control scheme to regulate expansion velocity could be implemented to mitigate this restriction. Here we retained a simple open-loop growth algorithm to validate agreement between theory and our simulation environment, leaving the development of more advanced control algorithms for future work. §.§ COUPLED MASS-SPRING MATRIX EQUATIONS We now derive the equations of motion for a 1D line of bacterial cells using our model's mass-spring system. In this derivation, we also include the possibility of spring damping, which is a cell-frame dashpot damping added to the expansion spring. We analyze the impact of this damping on the resulting dynamics. §.§.§ 3-CELL MOTHER MACHINE We assume a 1D line of 3 cells in a mother machine configuration (see section <ref>) where cells are in contact pole-to-pole and are constrained to motion in the axial direction only. Since each cell is composed of two axially-independent halves, the mother machine configuration will identify positions of the contacting cell halves of adjacent cells.
The mother cell's trap-wall half will not move in this configuration, thus the equations of motion are determined for the identified positions of each successive cell-cell contact (i=1,2) and lastly for the free-end cell half (i=3), where i is the index number for the equations given below. We assume a spring constant k, fluid damping parameter γ_f, and spring damping parameter γ_s. The matrix-vector equations for an example 3-cell mother-machine system are generated by a stiffness matrix K and damping matrix Γ, which are second-difference matrices that follow from force-balance analysis <cit.> of the 1D line of masses and springs that represent a back-to-back line of cells in a mother machine using our model. We find K = [ 2 -1 0; -1 2 -1; 0 -1 1 ], and Γ = [ 2γ_s + γ_f, -γ_s, 0; -γ_s, 2γ_s + γ_f, -γ_s; 0, -γ_s, γ_s + γ_f ]. The equations of motion for the coupled system from Newton's 2nd Law are: mẍ = -kK x - Γẋ + k[ 0; 0; 1 ] a t, where a is the cell growth rate. Cell 1 is the mother cell and cell 3 the open-end cell in the mother machine. Internal force cancellation of adjacent cell halves results in the RHS of the above equation having a forcing term only for the outermost cell half of the open-end cell. Expansion forces are then realized through coupling in the stiffness matrix K. Assuming non-inertial dynamics (see section <ref>) and that Γ is invertible, (<ref>) becomes: ẋ = -kΓ^-1K x + k Γ^-1 [ 0; 0; 1 ] a t. Now, assuming the matrix product Γ^-1K is diagonalizable with eigenvector matrix Q, we have the equivalent system of equations in the eigen-basis (using the vector variable y in this basis): ẏ = -kQ^-1(Γ^-1K) Q y + k Q^-1Γ^-1 [ 0; 0; 1 ] a t. If we set b := kQ^-1Γ^-1 [ 0; 0; 1 ], with diagonal eigenvalue matrix D, the diagonalized matrix-vector equation becomes: ẏ = -kD y + b a t. The solution to the diagonalized system now follows as for the single-cell case given by Eq. (<ref>) in section (<ref>).
For i ∈{1,2,3}, we set τ_i := 1/(kD_ii), and have: ∂_t (e^{t/τ_i} y_i) = e^{t/τ_i} b_i a t. Assuming each y_i(0) = 0, we then have the diagonalized solutions: y_i = τ_i b_i a ( t - τ_i + τ_i e^{-t/τ_i} ), ẏ_i = τ_i b_i a ( 1 - e^{-t/τ_i} ). We then convert the solution back to the standard basis using x = Qy and ẋ = Qẏ. We thus have that the motion of each cell in the mother machine is a linear combination of eigen-modes of the matrix product Γ^-1K. We now explore the effects of the spring damping on the equations of motion. §.§.§ NO SPRING DAMPING With no spring damping, the Γ matrix is diagonal and we can replace it with a scalar parameter γ. Q, D are then the eigenvector and eigenvalue matrices of K, and we set: τ_i := γ/(k D_ii), b_i := kQ^-1_i3/γ. The solution is then given by (<ref> - <ref>). We find that the steady-state solutions to spring compression follow a quadratic profile vs. cell position in the mother machine. This is readily derived without the matrix equations by analyzing the force balance necessary to achieve linear growth in cell-end speeds towards the open end of the mother machine. If we assume each cell expands (in the cell's frame of reference) at a constant speed v, then each successive cell-end will move (in the laboratory frame of reference) at i · v, where i ∈ {1,…,N}, N is the number of cells in the mother machine, and i=1 is the mother cell. Since the end-cell half of cell N is independent and we are under non-inertial dynamics, this end-cell half must apply a force of γ N v to achieve speed N v in the laboratory frame. Each cell expands with symmetric force, thus cell end (N-1), which moves at (N-1) v, must have (by algebraic addition of forces from adjacent cell halves): γ C_{N-1} v - γ N v = γ (N-1) v, where C_{N-1} is the unknown scale factor for the penultimate cell. Clearly, C_{N-1} = N + (N-1). Continuing in this manner towards the mother cell, we see that the successive cell force differences lead to a quadratic expression for cell compression vs.
cell position. An example plot of the steady-state cell compression for N=10 cells is shown in Fig. (<ref>), where the quadratic profile is evident. §.§.§ LARGE SPRING DAMPING With spring damping much larger than fluid damping, we then have for the damping matrix: Γ = γ_s [ 2 -1 0; -1 2 -1; 0 -1 1 ]. We note in this case that Γ = γ_s K. Importantly, increasing spring damping relative to fluid damping leads to uniform dynamics of all cells in the mother machine. However, to maintain responsiveness, the spring constant k must scale with spring damping. For example, in the isolated cell case, the spring damping and fluid damping reference frames are the same, and the damping parameters add such that γ = γ_f + γ_s defines τ in Eq. (<ref>). Thus, to maintain the same first-order dynamics, k must scale with the resulting additive γ such that τ = γ/k remains constant. We find that, in the limit of k, γ_s →∞ while γ_s/k remains constant, we recover the EOR model described in section <ref>. §.§ CHIPMUNK 2D SIMULATION ENVIRONMENT We use the open-source Chipmunk2D physics engine <cit.> to define cell objects and traps to simulate interactions and dynamics of cell consortia. The use of this engine in <cit.> was the original inspiration for its use in our model.
We detail in this section our simulation loop algorithm and the relevant components from Chipmunk2D. A simulation step consists of the following. The 2D physics engine is assumed to have just completed a time-step. An un-ordered list structure of cell objects is then traversed to determine if a cell should divide or be removed from the simulation (sub-routines would either add a new daughter cell or remove the cell from the list, respectively). Each remaining cell's physics model is then updated as follows:
* The current cell length ℓ is computed by subtracting the positions of the cell ends, which are obtained by querying the respective components from the 2D physics engine. The current spring compression is then computed by subtracting the cell length from the spring rest length.
* As a function of the current spring compression, a growth rate is selected for the following time step (the growth rate may also be constant, i.e., independent of compression, or in general, it may be set algorithmically by the user). The growth rate is then asserted in the discrete-time simulation by an increase of the spring rest-length, with increment dR := a · dt, where a is the current growth rate and dt is the discrete time step. Thus, R ← R + dR.
* The cell expansion force is computed as F^exp = k(R - ℓ) and this is set for each cell half independently.
* The 2D physics engine is stepped. This consists of 3 principal parts within the Chipmunk2D software:
* (A) The current timestep velocity v_i is forward-integrated to determine new positions for all objects i in the space. Namely, each cell end is extended by v_i · dt, where v_i is computed at the end of the previous physics engine time step (or otherwise altered by the user in steps 1-3 above). In general, cells will not overlap each other as the result of a position integration.
Rather, objects in contact will move together with velocities that were resolved in the previous time step (in part C below) by the physics engine via collision dynamics.
* (B) The force programmed in item (3) above is used to determine new interaction velocities for all objects in the space. The cell halves' velocities in the non-inertial regime are computed directly by v̂ = F^exp/γ. (The previous velocity of the objects is set to zero in the non-inertial regime.) In general this velocity v̂ will not actually be achieved by the cell halves. The impulse solver in part C will adjust velocities and positions based on collision dynamics of objects in the space.
* (C) The 2D physics engine's impulse solver iterates over the space to resolve competing object velocities when two objects are in direct contact. The impulse solver adds impulses to each object and the resulting actual velocities of cell halves are computed and will be used in the following timestep's position integration. §.§ RATCHET ALGORITHM FOR CELL BACK-FILLING To ensure that axial compression is accounted for in our model, we employ an algorithm to back-fill contact area to each cell half, such that contraction of a cell is limited to a compression gap and ratchet step, which we now detail. Each cell half is constructed as in Fig. <ref> with a rectangular center and attached “frontside pole” that defines the frontal contact area of a cell. In addition, however (and not shown in Fig. <ref>), there is a “backside pole” that is attached to a ratchet-extended rectangular area, which is designed to keep the backside pole just inside the frontside pole of the other half. Usually, this backside pole contact surface is transparent to collision dynamics of a cell, since it lies inside the outer contact hull of the cell.
However, in case a cell becomes subject to constricting axial forces (from other cells or trap walls) larger than cell expansion forces, the two halves will contract, but only until the backside poles of each half align with the frontside poles of the other, at which time the cell acts as a rigid body not subject to further compression. As a cell expands, this backside contact area must be extended to limit the amount of compression before the poles from the two halves align (thus forming the rigid body). We employ a ratchet algorithm to achieve this extension, such that the backside pole is extended once the cell length passes a ratchet step r_s. In summary, each time the cell length grows past the next multiple of the ratchet step r_s, the backside contact area of each half is extended by one step, so that only a bounded compression gap remains before the cell becomes rigid. §.§ TABLE OF PARAMETER VALUES

PARAMETER | SIMULATION VALUE | SCALE | PHYSICAL VALUE
dt | 0.001 min | 1 | 0.06 sec
m | 1e-10 | 1e-5 kg | 1e-15 kg
γ | 60 min^-1 | 1e-5 kg | 1e-5 kg·sec^-1
k | 3600 min^-2 | 1e-5 kg | 1e-5 kg·sec^-2
ξ := m γ^-1 | 1.7e-12 min | 1 | 1e-10 sec
τ := γ k^-1 | 0.017 min | 1 | 1 sec

Simulation values are computed by dividing the physical value by the scale for each parameter and converting units appropriately. We used the mass m of a bacterial cell as given in <cit.>, and dimensionless mass units in our simulations (the scale value for mass is chosen to normalize k and γ to 1.0 in SI units). Our estimate for k derives from the data given in <cit.> (Supporting Information). We chose γ such that τ = 1 sec. In Fig. <ref>, panels (b),(c), we also used a value of 10γ for comparison of the cell dynamics. Both k and γ simulation values use dimensionless mass units. Two time constants are shown for reference: ξ defines a scale for non-inertial dynamics as in <ref>, and τ defines the first-order growth dynamics of our cell model, as derived in <ref>. We note that γ overestimates a physical value, but it is chosen as a convenient value for computational purposes (see <ref>).
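As a cross-check of the table above, the simulation values can be recomputed from the physical values using the stated mass scale (10^-5 kg) and the minute as the simulation time unit. The following is an illustrative sketch, not part of the simulation code; all variable names are ours:

```python
# Sketch: recompute the simulation values of the parameter table from the
# stated physical values (mass scale 1e-5 kg, simulation time unit = minutes).
MASS_SCALE = 1e-5      # kg per dimensionless simulation mass unit
SEC_PER_MIN = 60.0

m_phys = 1e-15         # kg, mass of a bacterial cell
gamma_phys = 1e-5      # kg/sec, fluid damping parameter
k_phys = 1e-5          # kg/sec^2, spring constant
dt_phys = 0.06         # sec, discretization time step

m_sim = m_phys / MASS_SCALE                           # dimensionless mass units
gamma_sim = (gamma_phys / MASS_SCALE) * SEC_PER_MIN   # min^-1
k_sim = (k_phys / MASS_SCALE) * SEC_PER_MIN**2        # min^-2
dt_sim = dt_phys / SEC_PER_MIN                        # min

xi_sim = m_sim / gamma_sim    # non-inertial (fast) time constant, in min
tau_sim = gamma_sim / k_sim   # first-order growth time constant, in min
```

Converting ξ and τ back to seconds recovers the physical values 10^-10 sec and 1 sec listed in the table.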
Acknowledgments: This work was supported by the National Institutes of Health, through grant R01GM117138 (MRB, KJ, WO) and the joint NSF-National Institute of General Medical Sciences Mathematical Biology Program grant R01GM104974 (MRB, KJ, WO); NSF grants DMS-1413437 (WO), MCB-1616755 (OI), and MCB-1411780 (OI); and the Welch Foundation grant C-1729 (MRB). The authors acknowledge the use of the Opuntia Cluster and the support from the Center of Advanced Computing and Data Systems at the University of Houston.
Impurity scattering and size quantization effects in a single graphene nanoflake Mikhail Fonin^1[Email address: mikhail.fonin@uni-konstanz.de] December 30, 2023 ================================================================================ We study the passive particle transport generated by a circular vortex path in a 2D ideal flow confined in a circular domain. Taking the strength and angular velocity of the vortex path as the main parameters, the bifurcation scheme of relative equilibria is identified. For a perturbed path, an infinite number of orbits around the centers persist, giving rise to periodic solutions with zero winding number. § INTRODUCTION The passive particle transport in a 2D incompressible flow with prescribed vorticity is a research topic of the highest relevance in Fluid Dynamics <cit.>. In the Lagrangian formulation, the advection of single particles is ruled by a Hamiltonian system where the stream function plays the role of the Hamiltonian. In this paper, we consider the dynamics induced in an ideal flow confined in a circular domain of radius R under the action of a prescribed T-periodic vortex path. This problem is classical in the literature (see for instance <cit.>) and can be seen as a 2D idealization of the mixing of a fluid in a cylindrical tank. Let B_R⊂ℝ^2 be the open ball of center (0,0) and radius R, and consider a T-periodic vortex path given by z:ℝ→ B_R. Then, the stream function of the fluid confined in B_R and under the action of the vortex is given by Ψ(t,ζ)=Γ/(2π)(ln|ζ-z(t)| - ln|ζ-(R^2/|z(t)|^2)z(t)|). Here, Γ is the strength or charge of the vortex, and its sign gives the sense of rotation. In this function, the first term accounts for the vortex action, whereas the second term models the influence of the solid circular boundary. It is useful to see ζ as a complex variable; then the corresponding Hamiltonian system is ζ̇^* = Γ/(2π i)(1/(ζ-z(t)) - 1/(ζ-(R^2/|z(t)|^2)z(t))), where the asterisk means the complex conjugate.
In the related literature, z(t) is called the stirring protocol. When it is constant, Ψ is a conserved quantity and all the particles rotate around the vortex in circular trajectories. When z(t) is time-dependent, the Hamiltonian ceases to be a conserved quantity and the analysis is more delicate. In <cit.>, it is proved that any smooth stirring protocol z(t) induces an infinite number of periodic trajectories rotating around the vortex (non-zero winding number). Hence, it is a natural question to try to identify the stirring protocols that generate periodic trajectories with zero winding number, that is, particles moving periodically but not rotating around the vortex. This question was posed explicitly as an open problem in <cit.>. Our intention is to advance in the comprehension of this difficult problem by analyzing the family of circular protocols z(t)=r_0 exp(iθ_0 t). In this case, the change to a corotating frame ζ(t)=η(t)exp(iθ_0 t) transforms (<ref>) into the autonomous system η̇^* = iθ_0η^* + Γ/(2π i)(1/(η-r_0) - 1/(η-R^2/r_0)). The Hamiltonian structure is preserved, so the streamlines are just the level curves of the corresponding Hamiltonian, which is indeed a conserved quantity. This fact enables a complete bifurcation analysis of the relative equilibria, performed in Section <ref>, which correspond to periodic solutions of zero winding number for the original system. Moreover, in Section <ref> it is shown that the bifurcation scheme identified in Section <ref> is robust under small perturbations of the vortex path. Finally, the last section exposes some conclusions of the presented study. § PHASE PORTRAIT AND BIFURCATIONS This section is devoted to the bifurcation analysis of the phase portrait of system (<ref>). Working in Cartesian coordinates, the streamlines are level curves of the Hamiltonian function Ψ(x,y)=-θ_0/2(x^2+y^2)+Γ/(2π)ln√(((x-r_0)^2+y^2)/((x-R^2/r_0)^2+y^2)). Here (R,Γ,θ_0)∈(0,+∞)×(ℝ∖{0})^2 and r_0∈(0,R).
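As a quick numerical illustration (a sketch, not part of the paper's argument; the values R = 1, Γ = 1, θ_0 = 0.3, r_0 = 0.5 and the initial point are arbitrary choices, and all function names are ours), one can check that Ψ is conserved along the corotating flow by integrating the Hamiltonian vector field (ẋ, ẏ) = (∂Ψ/∂y, -∂Ψ/∂x) with a Runge–Kutta step:

```python
import math

R, Gamma, theta0, r0 = 1.0, 1.0, 0.3, 0.5   # illustrative parameter values
c = Gamma / (2 * math.pi * theta0)

def field(x, y):
    # Hamiltonian vector field of the corotating system
    a, b = x - r0, x - R**2 / r0
    ra, rb = a*a + y*y, b*b + y*y
    return (-theta0*y + c*theta0*y*(1/ra - 1/rb),
            theta0*x - c*theta0*(a/ra - b/rb))

def psi(x, y):
    # corotating Hamiltonian (ln of the square root written as (1/2) ln)
    a, b = x - r0, x - R**2 / r0
    return -0.5*theta0*(x*x + y*y) + Gamma/(4*math.pi)*math.log((a*a + y*y)/(b*b + y*y))

def rk4_step(x, y, h):
    k1 = field(x, y)
    k2 = field(x + 0.5*h*k1[0], y + 0.5*h*k1[1])
    k3 = field(x + 0.5*h*k2[0], y + 0.5*h*k2[1])
    k4 = field(x + h*k3[0], y + h*k3[1])
    return (x + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            y + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

x, y = 0.1, 0.2          # a particle inside the disk, away from the vortex
p0 = psi(x, y)
for _ in range(2000):    # integrate up to t = 2
    x, y = rk4_step(x, y, 1e-3)
```

Up to the integration error, the particle stays on the level curve of Ψ through its initial position, and in particular inside the disk.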
From now on, for the sake of further simplicity, we denote a(x):=a(x,r_0)=x-r_0, b(x):=b(x,R,r_0)=x-R^2/r_0 and c:=Γ/(2πθ_0). Thus, system (<ref>) can be written in the (x,y)-variables as ẋ = ∂Ψ/∂y = -θ_0y + cθ_0y(1/(a(x)^2+y^2) - 1/(b(x)^2+y^2)), ẏ = -∂Ψ/∂x = θ_0x - cθ_0(a(x)/(a(x)^2+y^2) - b(x)/(b(x)^2+y^2)). Let D_R⊂ℝ^2 be the closed ball of center (0,0) and radius R. It is an immediate calculation to show that D_R is invariant by the flow of system (<ref>). The next result deals with the phase portrait of the system on D_R. It will be shown that the position r_0 and the angular velocity θ_0 of the vortex path are the main parameters of the system, whereas the remaining ones can be normalized. To this end, and for the sake of simplicity in the statement, we set ρ_0:=r_0/R and ϕ_0:=R^2/c=2π R^2θ_0/Γ. Thus, the parameter space of system (<ref>) becomes Λ:={(ρ_0,ϕ_0)∈ℝ^2: 0<ρ_0<1 and ϕ_0≠ 0}. Moreover, let us define f(ρ_0,ϕ_0):=27ρ_0^2(ρ_0^2-1)+ϕ_0(2-3ρ_0^2-3ρ_0^4+2ρ_0^6-2(1-ρ_0^2+ρ_0^4)^{3/2}), and ℬ:={(ρ_0,ϕ_0)∈Λ : ϕ_0 f(ρ_0,ϕ_0)(ρ_0-(1-ϕ_0)/(1+ϕ_0))(ρ_0-(ϕ_0-1)/(1+ϕ_0))=0}. The curve ℬ is the union of three curves, namely C_i, i=1,…,3, and splits the parameter space Λ into five connected components, ℛ_i, i=1,…,5, according to Figure <ref>. Let (ρ_0,ϕ_0)∈Λ. The set Λ∖ℬ corresponds to regular parameters of system (<ref>). On each connected component, the phase portrait is the following: * If (ρ_0,ϕ_0)∈ℛ_1 then the dynamics on D_R is a global vortex at (r_0,0) (see Figure <ref>). * If (ρ_0,ϕ_0)∈ℛ_2 then the system has a vortex at (r_0,0), a center at (x_c^*,0) with x_c^*∈(-R,0) and two hyperbolic saddles (x_s^*,± y_s^*) at ∂D_R with a saddle connection inside D_R (see Figure <ref>). * If (ρ_0,ϕ_0)∈ℛ_3 then the system has a vortex at (r_0,0), a center at (x_c^*,0) with x_c^*∈(-R,0) and a hyperbolic saddle at (x_s^*,0) with x_s^*∈(r_0,R) (see Figure <ref>). * If (ρ_0,ϕ_0)∈ℛ_4 then the system has a vortex at (r_0,0), a center (x_c^*,0) and a hyperbolic saddle (x_s^*,0) satisfying 0<x_c^*<x_s^*<r_0 (see Figure <ref>).
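The role of f can be checked numerically against the cusp scenario treated in the proof below: on the branch of {f = 0} with ϕ_0 < 0, the level λ = c(r_0 - R^2/r_0) is tangent to the cubic x(x-r_0)(x-R^2/r_0) at its local maximum, which is the cusp abscissa. The following is an illustrative sketch (R = 1 and ρ_0 = 0.5 are arbitrary choices, and the code is ours, not the authors'):

```python
import math

R = 1.0
def f(rho, phi):
    # bifurcation function f(rho_0, phi_0) from the text
    s = (1 - rho**2 + rho**4) ** 1.5
    return 27*rho**2*(rho**2 - 1) + phi*(2 - 3*rho**2 - 3*rho**4 + 2*rho**6 - 2*s)

rho0 = 0.5
# solve f(rho0, phi0) = 0 for phi0 (the cusp curve; it lies in phi0 < 0)
s = (1 - rho0**2 + rho0**4) ** 1.5
phi0 = -27*rho0**2*(rho0**2 - 1) / (2 - 3*rho0**2 - 3*rho0**4 + 2*rho0**6 - 2*s)

r0 = rho0 * R
c = R**2 / phi0                    # since phi0 = R^2 / c
lam = c * (r0 - R**2 / r0)         # the level the cubic must touch
P = lambda x: x * (x - r0) * (x - R**2 / r0)
# local maximum of P on (0, r0): smaller root of P'(x) = 0
xM = (R**2 + r0**2 - math.sqrt(R**4 - R**2*r0**2 + r0**4)) / (3*r0)
```

At the computed ϕ_0 the cubic has a double root at x = x_M, consistent with the saddle-node (cusp) bifurcation on C_3.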
* If (ρ_0,ϕ_0)∈ℛ_5 then the dynamics on D_R is a global vortex at (r_0,0) (see Figure <ref>). Moreover, the set ℬ corresponds to bifurcation parameters of system (<ref>). On each curve the phase portrait is the following: * If (ρ_0,ϕ_0)∈ C_1 then the system has a vortex at (r_0,0) and a degenerate saddle at (-R,0) (see Figure <ref>). * If (ρ_0,ϕ_0)∈ C_2 then the system has a vortex at (r_0,0), a center at (x_c^*,0) with x_c^*∈(-R,0) and a degenerate saddle at (R,0) (see Figure <ref>). * If (ρ_0,ϕ_0)∈ C_3 then the system has a vortex at (r_0,0) and a cusp at (x_p^*,0) (see Figure <ref>), where x_p^*:=x_p^*(R,r_0)=(R^2+r_0^2-√(R^4-R^2r_0^2+r_0^4))/(3r_0). The bifurcation diagram of the phase portrait of system (<ref>) on D_R with respect to the variables (ρ_0,ϕ_0) is depicted in Figure <ref>. There are eight possibilities (see Figure <ref>), each of which presents a vortex at (r_0,0): * If ϕ_0>0 and 0<ρ_0<(1-ϕ_0)/(1+ϕ_0) then it has no critical points in D_R. Then the dynamics in D_R is a global vortex. * If ϕ_0>0 and ρ_0=(1-ϕ_0)/(1+ϕ_0) then it has exactly one critical point in D_R: a degenerate saddle at (-R,0). * If ϕ_0>0 and ρ_0>max{(1-ϕ_0)/(1+ϕ_0),(ϕ_0-1)/(1+ϕ_0)} then it has exactly three critical points in D_R: a center (x_C^*,0) with x_C^*∈(-R,0) and two hyperbolic saddles (x_SB^*,± y_SB^*) in ∂D_R with a saddle connection inside D_R. * If ϕ_0>0 and ρ_0=(ϕ_0-1)/(1+ϕ_0) then it has exactly two critical points in D_R: a center (x_C^*,0) with x_C^*∈(-R,0) and a degenerate saddle at (R,0). * If ϕ_0>0 and ρ_0<(ϕ_0-1)/(1+ϕ_0) then it has exactly two critical points in D_R: a center (x_C^*,0) with x_C^*∈(-R,0) and a hyperbolic saddle at (x_S^*,0) with x_S^*∈(r_0,R). * If ϕ_0<0 and η(ρ_0,ϕ_0)>0 then it has no critical points inside D_R. * If ϕ_0<0 and η(ρ_0,ϕ_0)=0 then it has exactly one critical point inside D_R: a cusp at (x_SN^*,0), where x_SN^*=x_SN^*(R,r_0)=(R^2+r_0^2-√(R^4-R^2r_0^2+r_0^4))/(3r_0). * If ϕ_0<0 and η(ρ_0,ϕ_0)<0 then it has exactly two critical points inside D_R: a center (x_C^*,0) and a hyperbolic saddle (x_S^*,0) satisfying
0<x_C^*<x_S^*<r_0. For the sake of simplicity, we first begin the proof by showing that the only critical points in D_R that do not lie on the line {y=0} are the hyperbolic saddles (x_s^*,± y_s^*) at ∂D_R of case (b) of the statement. To this end, assuming y≠ 0, from the equations in (<ref>) we have that ẋ=0 if and only if -1+c(1/(a(x)^2+y^2) - 1/(b(x)^2+y^2))=0. Since a(x)^2<b(x)^2 for all x<R, if ϕ_0<0 then c<0 and so the left-hand side of the previous equality is negative. Then assume ϕ_0>0. In this case, ẋ=0 if and only if y^2=-(1/2)(a(x)^2+b(x)^2)+(1/2)√((b(x)^2-a(x)^2)(4c+b(x)^2-a(x)^2)). Substituting the previous equality into the expression of ẏ in (<ref>) and equating to zero one gets the equation ((2x-a(x)-b(x))(a(x)+b(x))+√((b(x)^2-a(x)^2)(4c+b(x)^2-a(x)^2)))/(2(a(x)+b(x)))=0. Thus, using that a(x)=x-r_0 and b(x)=x-R^2/r_0, the previous equation has the unique solution x^*_s=(R^2+r_0^2)/(2r_0)-(c/(2r_0))(1-r_0^2/R^2), and so (y^*_s)^2=(1/4)(2(c^2+R^4)/R^2-(c-R^2)^2/r_0^2-(c+R^2)^2r_0^2/R^4). It is a computation to show that x_s^*∈(-R,R) if and only if ρ_0>max{(1-ϕ_0)/(1+ϕ_0),(ϕ_0-1)/(1+ϕ_0)} (and so if and only if (ρ_0,ϕ_0)∈ℛ_2) and that (x_s^*)^2+(y_s^*)^2=R^2. It only remains to prove that (x_s^*,± y_s^*) are hyperbolic saddles. This can be done by evaluating the previous expressions of the points (x_s^*,± y_s^*) in the Jacobian matrix of system (<ref>). In the case of (x_s^*,y_s^*) the determinant of the Jacobian matrix is det(DX(x_s^*,y_s^*))=((cR-R^3-(c+R^2)r_0)(cR-R^3+(c+R^2)r_0)θ_0^2)/(c^2(R^2-r_0^2)), which is negative if and only if ρ_0>max{(1-ϕ_0)/(1+ϕ_0),(ϕ_0-1)/(1+ϕ_0)}. Then, (x_s^*,y_s^*) is a hyperbolic saddle. The same argument is valid for (x_s^*,-y_s^*). Moreover, since ∂D_R is an invariant curve of system (<ref>) and (x_s^*,y_s^*)∈∂D_R, ∂D_R is the stable manifold of one saddle (and the unstable one of the other). The corresponding unstable (stable) manifold cuts the disk of radius R transversally due to the hyperbolicity of the saddles, and so the connection between the saddles follows by the Poincaré–Bendixson theorem.
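The closed-form boundary saddle can be sanity-checked numerically. The following is an illustrative sketch, not part of the proof; R = 1, ρ_0 = 0.6, ϕ_0 = 2 place the parameters in ℛ_2, and θ_0 = 1 is arbitrary:

```python
import math

R, rho0, phi0, theta0 = 1.0, 0.6, 2.0, 1.0   # (rho0, phi0) chosen inside region R_2
r0 = rho0 * R
c = R**2 / phi0                               # c = Gamma/(2*pi*theta0)

def vector_field(x, y):
    # planar system: xdot = dPsi/dy, ydot = -dPsi/dx
    a, b = x - r0, x - R**2 / r0
    ra, rb = a*a + y*y, b*b + y*y
    return (-theta0*y + c*theta0*y*(1/ra - 1/rb),
            theta0*x - c*theta0*(a/ra - b/rb))

# boundary saddle formulas obtained above
xs = (R**2 + r0**2)/(2*r0) - (c/(2*r0))*(1 - r0**2/R**2)
ys2 = 0.25*(2*(c**2 + R**4)/R**2 - (c - R**2)**2/r0**2 - (c + R**2)**2 * r0**2/R**4)
ys = math.sqrt(ys2)
fx, fy = vector_field(xs, ys)
```

Both (x_s^*, y_s^*) and its mirror image (x_s^*, -y_s^*) lie on the circle of radius R and annihilate the vector field.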
The previous argument shows that, out of case (b) of the statement, all the critical points of system (<ref>) lie on {y=0}. Let us now prove the remaining cases of the result. Let us first consider ϕ_0>0. This corresponds to statements (a)-(c) and (f)-(g). We can assume with no loss of generality that θ_0>0 and Γ>0. The case with θ_0<0 and Γ<0 follows by reversion of time. Notice that the hypothesis ϕ_0>0 implies c>0. System (<ref>) has a critical point at (x^*,0) inside the disk of radius R if and only if the function F(x):=θ_0(x-c(1/a(x)-1/b(x))) satisfies F(x^*)=0 for some x^*∈(-R,R). Multiplying by a(x)b(x), the previous condition turns into F(x^*)a(x^*)b(x^*)=0. We point out that, on account of the expressions of a(x) and b(x), the previous two conditions are equivalent if x^*∉{r_0,R^2/r_0} (those values correspond to singularities of the Hamiltonian function and so to no critical points). Thus, system (<ref>) has a critical point at (x^*,0) in D_R if and only if x^*a(x^*)b(x^*)=c(r_0-R^2/r_0)=:λ=λ(r_0,R,c). Notice that, since ϕ_0>0, λ<0. The cubic polynomial P(x):=xa(x)b(x) has zeros at x=0, x=r_0 and x=R^2/r_0. P(x) is negative if x∈(-∞,0)∪(r_0,R^2/r_0) and it is positive if x∈(0,r_0)∪(R^2/r_0,+∞), and the local maximum and minimum are attained, respectively, at x_M=(R^2+r_0^2-√(R^4-R^2r_0^2+r_0^4))/(3r_0), x_m=(R^2+r_0^2+√(R^4-R^2r_0^2+r_0^4))/(3r_0). On the other hand, since ϕ_0>0, λ=λ(r_0,R,c) varies from zero to -∞. Thus P(x)-λ=0 always has a solution x^*∈(-∞,0); it has a double solution x^*=x_m if P(x_m)=λ and two solutions in (r_0,R^2/r_0) if P(x_m)<λ. Let us study when these solutions correspond to critical points in D_R. We point out that x_m>R, so at most two zeros of P(x)-λ lie in (-R,R). It is a computation to show that P(R)-λ>0 if ρ_0>(ϕ_0-1)/(1+ϕ_0), P(R)-λ=0 if ρ_0=(ϕ_0-1)/(1+ϕ_0) and P(R)-λ<0 if ρ_0<(ϕ_0-1)/(1+ϕ_0). On the other hand, P(-R)-λ>0 if ρ_0<(1-ϕ_0)/(1+ϕ_0), P(-R)-λ=0 if ρ_0=(1-ϕ_0)/(1+ϕ_0) and P(-R)-λ<0 if ρ_0>(1-ϕ_0)/(1+ϕ_0).
Thus, if (ρ_0,ϕ_0)∈ℛ_1, no roots of P(x)-λ are inside [-R,R] and so the result in (a) holds. If (ρ_0,ϕ_0)∈ C_1, the unique zero of P(x)-λ in [-R,R] is x=-R. This corresponds to a critical point of system (<ref>) at (-R,0). Moreover, it is a degenerate saddle since ∂D_R is an invariant curve of system (<ref>), so (f) is proved. If (ρ_0,ϕ_0)∈ℛ_2, P(x)-λ has only one zero x_c^*∈(-R,0). Then, on account of the previous discussion about the hyperbolic saddles (x_s^*,± y_s^*) on ∂D_R, the result in (b) is proved. If (ρ_0,ϕ_0)∈ C_2, P(x)-λ has two zeros: x=x_c^*∈(-R,0) and x=R. The critical point (R,0) corresponds to a degenerate saddle since ∂D_R is an invariant curve of system (<ref>). Then (g) holds. Finally, if (ρ_0,ϕ_0)∈ℛ_3, P(x)-λ has two zeros: x=x_c^*∈(-R,0) and x=x_s^*∈(r_0,R). In order to finish with the case ϕ_0>0 it only remains to prove that x_c^* and x_s^* are a center and a hyperbolic saddle, respectively. The Jacobian matrix associated to system (<ref>) with y=0 is given by DX(x,0)=( 0, θ_0(-1+c(1/a(x)^2-1/b(x)^2)); θ_0(1+c(1/a(x)^2-1/b(x)^2)), 0 ). Notice that θ_0(1+c(1/a(x)^2-1/b(x)^2))>0 for all x∈(-R,R). On the other hand, setting x=x^* a critical point of (<ref>), we have c(1/a(x^*)^2-1/b(x^*)^2)-1=x^*(1/a(x^*)+1/b(x^*))-1=((x^*)^2-R^2)/((x^*-r_0)(x^*-R^2/r_0)), where we used F(x^*)=0 in the first equality and the expressions of a(x) and b(x) in the second. Thus DX(x^*,0)=( 0, θ_0((x^*)^2-R^2)/((x^*-r_0)(x^*-R^2/r_0)); θ_0(2+((x^*)^2-R^2)/((x^*-r_0)(x^*-R^2/r_0))), 0 ). Consequently, taking x^*=x_c^*∈(-R,0) we have (x_c^*)^2-R^2<0 and (x_c^*-r_0)(x_c^*-R^2/r_0)>0. Therefore ((x_c^*)^2-R^2)/((x_c^*-r_0)(x_c^*-R^2/r_0))<0 and so det(DX(x_c^*,0))>0. This implies that (x_c^*,0) is a center. On the other hand, taking x^*=x_s^*∈(r_0,R), ((x_s^*)^2-R^2)/((x_s^*-r_0)(x_s^*-R^2/r_0))>0 and so det(DX(x_s^*,0))<0. This implies that (x_s^*,0) is a hyperbolic saddle. This ends the proof of statements (a), (b), (c), (f) and (g). Let us now consider the case ϕ_0<0. This corresponds to statements (d), (e) and (h).
In this situation we can assume with no loss of generality that θ_0>0 and Γ<0. The opposite case follows by reversion of time. Notice that the hypothesis ϕ_0<0 implies c<0. Consequently, on account of equation (<ref>), critical points can only belong to {(x,y)∈ℝ^2:y=0}. Similarly as before, system (<ref>) has a critical point at (x^*,0) inside the disk of radius R if and only if (<ref>) is satisfied. Notice that, since ϕ_0<0, in this case λ=λ(r_0,R,c) varies from zero to +∞. Thus, on account of R<R^2/r_0, if λ stays above the local maximum of P(x) in (0,r_0) then P(x)-λ=0 has a unique zero, which is larger than R. This happens when f(ρ_0,ϕ_0)<0. If f(ρ_0,ϕ_0)=0 then the maximum of P(x) in (0,r_0) touches λ and gives the cusp (x_p^*,0) with x_p^*=x_M. Finally, if f(ρ_0,ϕ_0)>0 then the maximum of P(x) is greater than λ and so P(x)-λ has two real roots inside (0,r_0), namely x_c^* and x_s^*, satisfying 0<x_c^*<x_M<x_s^*<r_0. It only remains to prove the stability of such critical points. This follows from the expression in (<ref>) of the Jacobian matrix associated to system (<ref>) with y=0. We point out that, since a(x)^2<b(x)^2 for all x∈(0,r_0) and c<0, we have θ_0(-1+c(1/a(x)^2-1/b(x)^2))<0. On the other hand, setting x=x^* a critical point of system (<ref>), on account of F(x^*)=0 we have 1+c(1/a(x^*)^2-1/b(x^*)^2)=2+((x^*)^2-R^2)/((x^*-r_0)(x^*-R^2/r_0))=(3r_0(x^*)^2-2(R^2+r_0^2)x^*+R^2r_0)/((r_0-x^*)(R^2-r_0x^*)). The previous expression is positive if x^*∈(0,x_M) and it is negative if x^*∈(x_M,r_0). This proves that x_c^* is a center and x_s^* is a hyperbolic saddle, and ends the proof of (d), (e) and (h). § PERIODIC PERTURBATIONS AND LOCAL CONTINUATION OF PERIODIC ORBITS Given an autonomous planar Hamiltonian system η̇= J∇ℋ(η), it is interesting to ask about the existence of periodic solutions of the non-autonomous planar Hamiltonian system η̇= J∇ H(t,η;ϵ), which are small T-periodic perturbations of (<ref>), meaning that H(t,η;0)≡ℋ(η). A. Fonda, M. Sabatini and F.
Zanolin prove in <cit.> that, under the hypothesis of the existence of a non-isochronous period annulus for the autonomous Hamiltonian system and some regularity conditions on H(t,η;ϵ), such periodic orbits exist. More precisely, consider ℋ:𝒜→ℝ twice continuously differentiable and 𝒜⊆ℝ^2 a period annulus such that the inner and outer components of its boundary are Jordan curves. Assume that 𝒜 is not isochronous, that is, the period of the periodic orbits in 𝒜 covers an interval [𝒯_min,𝒯_max], with 𝒯_min<𝒯_max. Then consider H:ℝ×𝒜×(0,ϵ_0)→ℝ, whose gradient with respect to the second variable, denoted by ∇ H(t,η;ϵ), is continuous in (t,η;ϵ), locally Lipschitz continuous in η and T-periodic in t for some T>0. Under these assumptions, the authors in <cit.> prove the following result: Given two positive integers m and n satisfying 𝒯_min<mT/n<𝒯_max, there is an ϵ̅>0 such that, if ϵ≤ϵ̅, then system (<ref>) has at least two mT-periodic solutions, whose orbits are contained in 𝒜, which make exactly n rotations around the origin in the period time mT. The authors also emphasize the following immediate consequence: For any positive integer N there is an ϵ̅_N>0 such that, if ϵ<ϵ̅_N, then system (<ref>) has at least N periodic solutions, whose orbits are contained in 𝒜. Our purpose in this section is to illustrate this situation in the case when system (<ref>) has a non-degenerate center inside D_R. This occurs for parameters (ρ_0,ϕ_0)∈ℛ_2∪ℛ_3∪ℛ_4, corresponding to the phase portraits (b), (c) and (d) in Figure <ref> and Theorem <ref>. Let us denote by 𝒜 the period annulus of the center. The inner boundary of 𝒜 is the center itself, namely p, whereas the outer boundary of 𝒜 is formed by saddle connections. In both cases the outer boundary has critical points, so it is clear that the period function tends to infinity as the orbits approach the outer boundary. In particular, the center is not isochronous. The next result states the period of the linearized center.
Let (ρ_0,ϕ_0)∈ℛ_2∪ℛ_3∪ℛ_4 and let p=(x,0) be the non-degenerate center of system (<ref>). Then the period of the associated linearized system at p is T_0(x)=2π/√(θ_0^2ν(x/R)(2+ν(x/R))), where ν(x):=(x^2-1)/((x-ρ_0)(x-1/ρ_0)). From the expression of the Jacobian matrix of the system in (<ref>) we have that the eigenvalues associated to the center are λ_±=±ω i=±√(θ_0^2((x^2-R^2)/((x-r_0)(x-R^2/r_0)))(2+(x^2-R^2)/((x-r_0)(x-R^2/r_0))))i, where ω denotes the frequency of the linearized center. Thus, setting ρ_0=r_0/R and using that the period of the linearized center is T_0=2π/ω, the result holds. Let us now consider the periodically perturbed stirring protocol z_ϵ(t)=r_ϵ(t)exp(iθ_0 t) in system (<ref>), where r_ϵ(t) is a smooth T-periodic perturbation of r_0. More concretely, r_ϵ(t)=r_0+ϵ f(t)+g(t;ϵ), with f and g(·;ϵ) T-periodic analytic functions and g(t;ϵ) tending to zero uniformly in t∈ℝ as ϵ tends to zero. The same change to a corotating frame as presented at the beginning of this paper transforms (<ref>) into a periodic Hamiltonian system with Hamiltonian function Ψ(t,x,y;ϵ)=-θ_0/2(x^2+y^2)+Γ/(2π)ln√(((x-r_ϵ(t))^2+y^2)/((x-R^2/r_ϵ(t))^2+y^2)). Let (ρ_0,ϕ_0)∈ℛ_2∪ℛ_3∪ℛ_4. For any positive integer N there is an ϵ̅_N>0 such that, if ϵ<ϵ̅_N, then the Hamiltonian system u̇ = J∇Ψ(t,u;ϵ) has at least N periodic solutions contained in D_R. In particular, the flow induced by system (<ref>) with the T-periodic protocol z_ϵ(t) has infinitely many periodic trajectories with zero winding number. The spirit of this proof is to use Theorem <ref> in a certain period annulus where the regularity hypotheses are satisfied. On the one hand, since the outer boundary of the whole period annulus of the center of system (<ref>) is a saddle connection, the period of the periodic orbits tends to infinity as they approach the outer boundary. On the other hand, setting p=(x^*,0) the center itself, Lemma <ref> states that the period tends to T_0(x^*)=2π/√(θ_0^2ν(x^*/R)(2+ν(x^*/R))) as the orbits tend to p.
The analyticity of the period function then ensures that for any M>0 large enough there exists a period annulus, namely 𝒜_M, such that the period of its orbits covers [T_0(x^*),M]. Let us apply Theorem <ref> in 𝒜_M. To this end, it is enough to show that ∇Ψ(t,η;ϵ) is continuous in (t,η;ϵ)∈ℝ×𝒜_M×(0,ϵ_0) for some ϵ_0>0, locally Lipschitz continuous in η=(x,y)∈𝒜_M and T-periodic in t. From the expression in (<ref>),

∇Ψ(t,η;ϵ) = ( -θ_0 y + cθ_0 y ( 1/((x-r_ϵ(t))^2+y^2) - 1/((x-R^2/r_ϵ(t))^2+y^2) ),
θ_0 x - cθ_0 ( (x-r_ϵ(t))/((x-r_ϵ(t))^2+y^2) - (x-R^2/r_ϵ(t))/((x-R^2/r_ϵ(t))^2+y^2) ) ).

The previous vector is continuous for all (t,(x,y);ϵ)∈ℝ×ℝ^2×(0,ϵ_0) provided (x,y)∉{(r_ϵ(t),0),(R^2/r_ϵ(t),0)}. Since r_ϵ(t)=r_0+o(ϵ) and R^2/r_0>R, we can take ϵ small enough to ensure that R^2/r_ϵ(t)>R. On the other hand, by Theorem <ref> (b)-(d), d_H(𝒜_M,r_0)>d_H(𝒜_M,γ)>0, where γ denotes the saddle connection that forms the outer boundary of the period annulus and d_H denotes the Hausdorff distance of non-empty compact subsets of ℝ^2. Thus, by continuity, there exists ϵ_0>0 small enough such that d_H(𝒜_M,r_ϵ(t))>d_H(𝒜_M,γ)>0 for all ϵ<ϵ_0. This implies that ∇Ψ(t,η;ϵ) is continuous for all (t,η;ϵ)∈ℝ×𝒜_M×(0,ϵ_0), as desired. Moreover, for fixed (t,ϵ)∈ℝ×(0,ϵ_0), since d_H(𝒜_M,r_ϵ(t))>d_H(𝒜_M,γ)>0, then ∇Ψ(t,η;ϵ)∈ C^1(𝒜_M). Consequently, ∇Ψ(t,η;ϵ) is locally Lipschitz continuous in 𝒜_M. We can then apply Theorem <ref> and, in particular, Corollary <ref> to show that for any positive integer N there exists 0<ϵ̅_N<ϵ_0 such that if ϵ<ϵ̅_N then system η̇=J∇Ψ(t,η;ϵ) has at least N periodic solutions in 𝒜_M⊂ D_R. Finally, by construction of 𝒜_M, those periodic solutions have zero winding number with respect to the vortex.

§ CONCLUSIONS The main result of Section <ref> has a natural reading for the underlying physical model. Note that ρ_0 is the ratio of the path radius to the domain radius, while ϕ_0 measures the relation between the path angular speed and the vortex strength.
The sign of ϕ_0 indicates whether the vortex and the path rotate in the same or in opposite senses. For example, fix the positive parameters R, Γ, r_0 and let θ_0 vary: for small positive θ_0 there are no equilibria. A first bifurcation point is θ^*_0 = (Γ/2π R^2)(R-r_0)/(R+r_0), where a degenerate saddle appears at (-R,0). A second bifurcation point appears at θ^**_0 = (Γ/2π R^2)(R+r_0)/(R-r_0). For θ_0∈ ]θ^*_0,θ^**_0[, there is a center and two hyperbolic saddles on the boundary of the domain connected by a heteroclinic orbit. The saddles travel along the boundary until they collide at θ_0=θ^**_0 into a degenerate saddle, which enters the domain as a hyperbolic saddle for values above θ^**_0. On the other hand, for negative values of θ_0, corresponding to an opposite sense of rotation of the vortex and the stirring protocol, we identify a typical saddle-node bifurcation. In the identified bifurcation scheme, saddles are connected by heteroclinic or homoclinic orbits that constitute barriers for the flux transport. Around the centers, the particles rotate with different periods, and this fact makes possible an application of a suitable result for perturbed Hamiltonians, proving that for a perturbed vortex path there exist infinitely many periodic solutions that do not rotate around the vortex. The problem of identifying more general classes of vortex protocols that generate this kind of periodic orbits with zero winding number is still open.

§ REFERENCES

Aref84 H. Aref, Stirring by chaotic advection, J. Fluid Mech. 143, 1–21 (1984).
Aref2011 H. Aref, J. Roenby, M.A. Stremler and L. Tophøj, Nonlinear excursions of particles in ideal 2D flows, Physica D 240, 199–207 (2011).
BT13 A. Boscaggin and P.J. Torres, Periodic motions of fluid particles induced by a prescribed vortex path in a circular domain, Physica D: Nonlinear Phenomena 261, 81–84 (2013).
CM T. Carletti and A. Margheri, Measuring the mixing efficiency in a simple model of stirring: some analytical results and a quantitative study via frequency map analysis, J. Phys. A: Mathematical and General 39 (2), 299–312 (2006).
FrZ93 P. Franzese and L. Zannetti, Advection by a point vortex in closed domains, Eur. J. of Mech. B-Fluids 12, 1–24 (1993).
FSZ A. Fonda, M. Sabatini and F. Zanolin, Periodic solutions of perturbed Hamiltonian systems in the plane by the use of the Poincaré-Birkhoff Theorem, Topol. Methods Nonlinear Anal. 40, 29–52 (2012).
Saf P.G. Saffman, Vortex Dynamics, Cambridge Univ. Press, 1992.
Torres P.J. Torres, Mathematical Models with Singularities, Atlantis Briefs in Differential Equations (1), Atlantis Press, Paris (2015).
Ot S. Wiggins and J.M. Ottino, Foundations of chaotic mixing, Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 362, 937–970 (2004).
Deep Voice: Real-time Neural Text-to-Speech

Sercan Ö. Arık, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi (authors listed alphabetically by last name)

Baidu Silicon Valley Artificial Intelligence Lab, 1195 Bordeaux Dr., Sunnyvale, CA 94089

Keywords: speech generation, deep voice, deep learning, text-to-speech, tts, deep neural networks, machine learning, convolutional neural networks, recurrent neural networks

We present Deep Voice, a production-quality text-to-speech system constructed entirely from deep neural networks. Deep Voice lays the groundwork for truly end-to-end neural speech synthesis. The system comprises five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For the segmentation model, we propose a novel way of performing phoneme boundary detection with deep neural networks using connectionist temporal classification (CTC) loss. For the audio synthesis model, we implement a variant of WaveNet that requires fewer parameters and trains faster than the original. By using a neural network for each component, our system is simpler and more flexible than traditional text-to-speech systems, where each component requires laborious feature engineering and extensive domain expertise. Finally, we show that inference with our system can be performed faster than real time and describe optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x speedups over existing implementations.
§ INTRODUCTION

Synthesizing artificial human speech from text, commonly known as text-to-speech (TTS), is an essential component in many applications such as speech-enabled devices, navigation systems, and accessibility for the visually-impaired. Fundamentally, it allows human-technology interaction without requiring visual interfaces. Modern TTS systems are based on complex, multi-stage processing pipelines, each of which may rely on hand-engineered features and heuristics. Due to this complexity, developing new TTS systems can be very labor intensive and difficult. Deep Voice is inspired by traditional text-to-speech pipelines and adopts the same structure, while replacing all components with neural networks and using simpler features: first we convert text to phonemes, and then we use an audio synthesis model to convert linguistic features into speech <cit.>. Unlike prior work (which uses hand-engineered features such as spectral envelope, spectral parameters, aperiodic parameters, etc.), our only features are phonemes with stress annotations, phoneme durations, and fundamental frequency (F0). This choice of features makes our system more readily applicable to new datasets, voices, and domains without any manual data annotation or additional feature engineering. We demonstrate this claim by retraining our entire pipeline, without any hyperparameter changes, on an entirely new dataset that contains solely audio and unaligned textual transcriptions, and generating relatively high quality speech. In a conventional TTS system this adaptation requires days to weeks of tuning, whereas Deep Voice requires only a few hours of manual effort plus the time it takes the models to train. Real-time inference is a requirement for a production-quality TTS system; without it, the system is unusable for most applications of TTS. Prior work has demonstrated that a WaveNet <cit.> can generate close to human-level speech.
However, WaveNet inference poses a daunting computational problem due to the high-frequency, autoregressive nature of the model, and it has been hitherto unknown whether such models can be used in a production system. We answer this question in the affirmative and demonstrate efficient, faster-than-real-time WaveNet inference kernels that produce high-quality 16 kHz audio and realize a 400x speedup over previous WaveNet inference implementations <cit.>.

§ RELATED WORK

Previous work uses neural networks as substitutes for several TTS system components, including grapheme-to-phoneme conversion models <cit.>, phoneme duration prediction models <cit.>, fundamental frequency prediction models <cit.>, and audio synthesis models <cit.>. Unlike Deep Voice, however, none of these systems solve the entire problem of TTS, and many of them use specialized hand-engineered features developed specifically for their domain. Most recently, there has been a lot of work in parametric audio synthesis, notably WaveNet, SampleRNN, and Char2Wav <cit.>. While WaveNet can be used for both conditional and unconditional audio generation, SampleRNN is only used for unconditional audio generation. Char2Wav extends SampleRNN with an attention-based phoneme duration model and the equivalent of an F0 prediction model, effectively providing local conditioning information to a SampleRNN-based vocoder. Deep Voice differs from these systems in several key aspects that notably increase the scope of the problem. First, Deep Voice is completely standalone; training a new Deep Voice system does not require a pre-existing TTS system, and can be done from scratch using a dataset of short audio clips and corresponding textual transcripts.
In contrast, reproducing either of the aforementioned systems requires access to and an understanding of a pre-existing TTS system, because they use features from another TTS system either at training or inference time. Second, Deep Voice minimizes the use of hand-engineered features; it uses one-hot encoded characters for grapheme-to-phoneme conversion, one-hot encoded phonemes and stresses, phoneme durations in milliseconds, and normalized log fundamental frequency that can be computed from waveforms using any F0 estimation algorithm. All of these can easily be obtained from audio and transcripts with minimal effort. In contrast, prior works use a much more complex feature representation, which effectively makes reproducing the system impossible without a pre-existing TTS system. WaveNet uses several features from a TTS system <cit.>, including values such as the number of syllables in a word, position of syllables in the phrase, position of the current frame in the phoneme, and dynamic features of the speech spectrum like spectral and excitation parameters, as well as their time derivatives. Char2Wav relies on vocoder features from the WORLD TTS system <cit.> for pre-training its alignment module, which include F0, spectral envelope, and aperiodic parameters. Finally, we focus on creating a production-ready system, which requires that our models run in real-time for inference. Deep Voice can synthesize audio in fractions of a second, and offers a tunable trade-off between synthesis speed and audio quality. In contrast, previous results with WaveNet require several minutes of runtime to synthesize one second of audio. We are unaware of similar benchmarks for SampleRNN, but the 3-tier architecture as described in the original publication requires approximately 4-5x as much compute during inference as our largest WaveNet models, so running the model in real-time may prove challenging.

§ TTS SYSTEM COMPONENTS

As shown in Fig.
<ref>, the TTS system consists of five major building blocks:

* The grapheme-to-phoneme model converts from written text (English characters) to phonemes (encoded using a phonemic alphabet such as ARPABET).
* The segmentation model locates phoneme boundaries in the voice dataset. Given an audio file and a phoneme-by-phoneme transcription of the audio, the segmentation model identifies where in the audio each phoneme begins and ends.
* The phoneme duration model predicts the temporal duration of every phoneme in a phoneme sequence (an utterance).
* The fundamental frequency model predicts whether a phoneme is voiced. If it is, the model predicts the fundamental frequency (F0) throughout the phoneme's duration.
* The audio synthesis model combines the outputs of the grapheme-to-phoneme, phoneme duration, and fundamental frequency prediction models and synthesizes audio at a high sampling rate, corresponding to the desired text.

During inference, text is fed through the grapheme-to-phoneme model or a phoneme dictionary to generate phonemes. Next, the phonemes are provided as inputs to the phoneme duration model and F0 prediction model to assign durations to each phoneme and generate an F0 contour. Finally, the phonemes, phoneme durations, and F0 are used as local conditioning input features to the audio synthesis model, which generates the final utterance. Unlike the other models, the segmentation model is not used during inference. Instead, it is used to annotate the training voice data with phoneme boundaries. The phoneme boundaries imply durations, which can be used to train the phoneme duration model. The audio, annotated with phonemes and phoneme durations as well as fundamental frequency, is used to train the audio synthesis model. In the following sections, we describe all the building blocks in detail.

§.§ Grapheme-to-Phoneme Model

Our grapheme-to-phoneme model is based on the encoder-decoder architecture developed by <cit.>.
However, we use a multi-layer bidirectional encoder with a gated recurrent unit (GRU) nonlinearity and an equally deep unidirectional GRU decoder <cit.>. The initial state of every decoder layer is initialized to the final hidden state of the corresponding encoder forward layer. The architecture is trained with teacher forcing and decoding is performed using beam search. We use 3 bidirectional layers with 1024 units each in the encoder, 3 unidirectional layers of the same size in the decoder, and a beam search with a width of 5 candidates. During training, we use dropout with probability 0.95 after each recurrent layer. For training, we use the Adam optimization algorithm with β_1=0.9, β_2=0.999, ε=10^-8, a batch size of 64, a learning rate of 10^-3, and an annealing rate of 0.85 applied every 1000 iterations <cit.>.

§.§ Segmentation Model

Our segmentation model is trained to output the alignment between a given utterance and a sequence of target phonemes. This task is similar to the problem of aligning speech to written output in speech recognition. In that domain, the connectionist temporal classification (CTC) loss function has been shown to focus on character alignments to learn a mapping between sound and text <cit.>. We adapt the convolutional recurrent neural network architecture from a state-of-the-art speech recognition system <cit.> for phoneme boundary detection. A network trained with CTC to generate sequences of phonemes will produce brief peaks for every output phoneme. Although this is sufficient to roughly align the phonemes to the audio, it is insufficient to detect precise phoneme boundaries. To overcome this, we train to predict sequences of phoneme pairs rather than single phonemes. The network will then tend to output phoneme pairs at timesteps close to the boundary between two phonemes in a pair. To illustrate our label encoding, consider the string “Hello!”.
To convert this to a sequence of phoneme pair labels, convert the utterance to phonemes (using a pronunciation dictionary such as CMUDict or a grapheme-to-phoneme model) and pad the phoneme sequence on either end with the silence phoneme to get “sil HH EH L OW sil”. Finally, construct consecutive phoneme pairs to get “(sil, HH), (HH, EH), (EH, L), (L, OW), (OW, sil)”. Input audio is featurized by computing 20 Mel-frequency cepstral coefficients (MFCCs) with a ten millisecond stride. On top of the input layer, there are two convolution layers (2D convolutions in time and frequency), three bidirectional recurrent GRU layers, and finally a softmax output layer. The convolution layers use kernels with unit stride, height nine (in frequency bins), and width five (in time), and the recurrent layers use 512 GRU cells (for each direction). Dropout with a probability of 0.95 is applied after the last convolution and recurrent layers. To compute the phoneme-pair error rate (PPER), we decode using beam search. To decode phoneme boundaries, we perform a beam search with width 50 under the constraint that neighboring phoneme pairs overlap by at least one phoneme, and keep track of the positions in the utterance of each phoneme pair. For training, we use the Adam optimization algorithm with β_1=0.9, β_2=0.999, ε=10^-8, a batch size of 128, a learning rate of 10^-4, and an annealing rate of 0.95 applied every 500 iterations <cit.>.

§.§ Phoneme Duration and Fundamental Frequency Model

We use a single architecture to jointly predict phoneme duration and time-dependent fundamental frequency. The input to the model is a sequence of phonemes with stresses, with each phoneme and stress encoded as a one-hot vector. The architecture comprises two fully connected layers with 256 units each, followed by two unidirectional recurrent layers with 128 GRU cells each, and finally a fully-connected output layer.
Dropout with a probability of 0.8 is applied after the initial fully-connected layers and the last recurrent layer. The final layer produces three estimations for every input phoneme: the phoneme duration, the probability that the phoneme is voiced (i.e. has a fundamental frequency), and 20 time-dependent F0 values, which are sampled uniformly over the predicted duration. The model is optimized by minimizing a joint loss that combines phoneme duration error, fundamental frequency error, the negative log likelihood of the probability that the phoneme is voiced, and a penalty term proportional to the absolute change of F0 with respect to time to impose smoothness. The specific functional form of the loss function is described in Appendix <ref>. [Appendices and audio samples are available on arXiv as supplementary material.] For training, we use the Adam optimization algorithm with β_1=0.9, β_2=0.999, ε=10^-8, a batch size of 128, a learning rate of 3×10^-4, and an annealing rate of 0.9886 applied every 400 iterations <cit.>.

§.§ Audio Synthesis Model

Our audio synthesis model is a variant of WaveNet. WaveNet consists of a conditioning network, which upsamples linguistic features to the desired frequency, and an autoregressive network, which generates a probability distribution ℙ(y) over discretized audio samples y ∈{0, 1, …, 255}. We vary the number of layers ℓ, the number of residual channels r (dimension of the hidden state of every layer), and the number of skip channels s (the dimension to which layer outputs are projected prior to the output layer). WaveNet consists of an upsampling and conditioning network, followed by ℓ 2×1 convolution layers with r residual output channels and gated tanh nonlinearities. We break the convolution into two matrix multiplies per timestep, with W_prev and W_cur. These layers are connected with residual connections.
The hidden state of every layer is concatenated to an ℓ r vector and projected to s skip channels with W_skip, followed by two layers of 1×1 convolutions (with weights W_relu and W_out) with relu nonlinearities. WaveNet uses transposed convolutions for upsampling and conditioning. We find that our models perform better, train faster, and require fewer parameters if we instead first encode the inputs with a stack of bidirectional quasi-RNN (QRNN) layers <cit.> and then perform upsampling by repetition to the desired frequency. Our highest-quality final model uses ℓ=40 layers, r=64 residual channels, and s=256 skip channels. For training, we use the Adam optimization algorithm with β_1=0.9, β_2=0.999, ε=10^-8, a batch size of 8, a learning rate of 10^-3, and an annealing rate of 0.9886 applied every 1,000 iterations <cit.>. Please refer to Appendix <ref> for full details of our WaveNet architecture and the QRNN layers we use.

§ RESULTS

We train our models on an internal English speech database containing approximately 20 hours of speech data segmented into 13,079 utterances. In addition, we present audio synthesis results for our models trained on a subset of the Blizzard 2013 data <cit.>. Both datasets are spoken by a professional female speaker. All of our models are implemented using the TensorFlow framework <cit.>.

§.§ Segmentation Results

We train on 8 TitanX Maxwell GPUs, splitting each batch equally among the GPUs and using a ring all-reduce to average gradients computed on different GPUs, with each iteration taking approximately 1300 milliseconds. After approximately 14,000 iterations, the model converges to a phoneme pair error rate of 7%. We also find that phoneme boundaries do not have to be precise: randomly shifting phoneme boundaries by 10-30 milliseconds makes no difference in the audio quality, and so we suspect that audio quality is insensitive to the phoneme pair error rate past a certain point.
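The phoneme-pair label construction described in the segmentation model section can be sketched in a few lines. This is a minimal illustration rather than code from the paper; the helper name and signature are our own.

```python
def phoneme_pair_labels(phonemes, silence="sil"):
    """Build segmentation targets as consecutive phoneme pairs.

    Pads the phoneme sequence with silence on both ends, then pairs
    neighbors, so each label marks the boundary between two phonemes.
    """
    padded = [silence] + list(phonemes) + [silence]
    return list(zip(padded[:-1], padded[1:]))

# "Hello!" -> phonemes HH EH L OW (via CMUDict or a grapheme-to-phoneme model)
labels = phoneme_pair_labels(["HH", "EH", "L", "OW"])
# labels == [('sil', 'HH'), ('HH', 'EH'), ('EH', 'L'), ('L', 'OW'), ('OW', 'sil')]
```

A network trained on such targets emits pair labels near phoneme boundaries, which the constrained beam search then turns into boundary positions.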
§.§ Grapheme-to-Phoneme Results

We train a grapheme-to-phoneme model on data obtained from CMUDict <cit.>. We strip out all words that do not start with a letter, that contain numbers, or that have multiple pronunciations, which leaves 124,978 of the original 133,854 grapheme-phoneme sequence pairs. We train on a single TitanX Maxwell GPU, with each iteration taking approximately 150 milliseconds. After approximately 20,000 iterations, the model converges to a phoneme error rate of 5.8% and a word error rate of 28.7%, which are on par with previously reported results <cit.>. Unlike prior work, we do not use a language model during decoding and do not include words with multiple pronunciations in our data set.

§.§ Phoneme Duration and Fundamental Frequency Results

We train on a single TitanX Maxwell GPU, with each iteration taking approximately 120 milliseconds. After approximately 20,000 iterations, the model converges to a mean absolute error of 38 milliseconds (for phoneme duration) and 29.4 Hz (for fundamental frequency).

§.§ Audio Synthesis Results

We divide the utterances in our audio dataset into one second chunks with a quarter second of context for each chunk, padding each utterance with a quarter second of silence at the beginning. We filter out chunks that are predominantly silence and end up with 74,348 total chunks. We trained models with varying depth, including 10, 20, 30, and 40 layers in the residual layer stack. We find that models below 20 layers result in poor quality audio. The 20, 30, and 40 layer models all produce high quality recognizable speech, but the 40 layer models have less noise than the 20 layer models, a difference that can be detected with high-quality over-ear headphones. Previous work has emphasized the importance of receptive field size in determining model quality. Indeed, the 20 layer models have half the receptive field of the 40 layer models.
However, when run at 48 kHz, models with 40 layers have only 83 milliseconds of receptive field, but still generate high quality audio. This suggests the receptive field of the 20 layer models is sufficient, and we conjecture the difference in audio quality is due to some factor other than receptive field size. We train on 8 TitanX Maxwell GPUs with one chunk per GPU, using a ring all-reduce to average gradients computed on different GPUs. Each iteration takes approximately 450 milliseconds. Our model converges after approximately 300,000 iterations. We find that a single 1.25s chunk is sufficient to saturate the compute on the GPU and that batching does not increase training efficiency. As is common with high-dimensional generative models <cit.>, model loss is somewhat uncorrelated with perceptual quality of individual samples. While models with unusually high loss sound distinctly noisy, models that optimize below a certain threshold do not have a loss indicative of their quality. In addition, changes in model architecture (such as depth and output frequency) can have a significant impact on model loss while having a small effect on audio quality. To estimate the perceptual quality of the individual stages of our TTS pipeline, we crowdsourced mean opinion score (MOS) ratings (ratings between one and five, higher values being better) from Mechanical Turk using the CrowdMOS toolkit and methodology <cit.>. In order to separate the effect of the audio preprocessing, the WaveNet model quality, and the phoneme duration and fundamental frequency model quality, we present MOS scores for a variety of utterance types, including synthesis results where the WaveNet inputs (duration and F0) are extracted from ground truth audio rather than synthesized by other models. The results are presented in Table <ref>.
We purposefully include ground truth samples in every batch of samples that raters evaluate, to highlight the delta from human speech and allow raters to distinguish finer-grained differences between models; the downside of this approach is that the resulting MOS scores will be significantly lower than if raters were presented only with synthesized audio samples. First of all, we find a significant drop in MOS when simply downsampling the audio stream from 48 kHz to 16 kHz, especially in combination with μ-law companding and quantization, likely because a 48 kHz sample is presented to the raters as a baseline for a 5 score, and a low quality noisy synthesis result is presented as a 1. When used with ground truth durations and F0, our models score highly, with the 95% confidence intervals of our models intersecting those of the ground truth samples. However, using synthesized frequency reduces the MOS, and further including synthesized durations reduces it significantly. We conclude that the main barrier to progress towards natural TTS lies with duration and fundamental frequency prediction, and our systems have not meaningfully progressed past the state of the art in that regard. Finally, our best models run slightly slower than real-time (see Table <ref>), so we demonstrate that synthesis quality can be traded for inference speed by adjusting model size, obtaining scores for models that run 1x and 2x faster than real-time. We also tested WaveNet models trained on the full set of features from the original WaveNet publication, but found no perceptual difference between those models and models trained on our reduced feature set.

§.§ Blizzard Results

To demonstrate the flexibility of our system, we retrained all of our models with identical hyperparameters on the Blizzard 2013 dataset <cit.>. For our experiments, we used a 20.5 hour subset of the dataset segmented into 9,741 utterances.
We evaluated the model using the procedure described in Section <ref>, which encourages raters to compare synthesized audio directly with the ground truth. On the held out set, 16 kHz companded and expanded audio receives a MOS score of 4.65±0.13, while our synthesized audio received a MOS score of 2.67±0.37.

§ OPTIMIZING INFERENCE

Although WaveNet has shown promise in generating high-quality synthesized speech, initial experiments reported generation times of many minutes or hours for short utterances. WaveNet inference poses an incredibly challenging computational problem due to the high-frequency, autoregressive nature of the model, which requires orders of magnitude more timesteps than traditional recurrent neural networks. When generating audio, a single sample must be generated in approximately 60 μs (for 16 kHz audio) or 20 μs (for 48 kHz audio). For our 40 layer models, this means that a single layer (consisting of several matrix multiplies and nonlinearities) must complete in approximately 1.5 μs. For comparison, accessing a value that resides in main memory on a CPU can take 0.1 μs. In order to perform inference in real time, we must take great care to never recompute any results, store the entire model in the processor cache (as opposed to main memory), and optimally utilize the available computational units. These same techniques could be used to accelerate image synthesis with PixelCNN <cit.> to fractions of a second per image. Synthesizing one second of audio with our 40 layer WaveNet model takes approximately 55×10^9 floating point operations (FLOPs). The activations in any given layer depend on the activations in the previous layer and the previous timestep, so inference must be done one timestep and one layer at a time. A single layer requires only 42×10^3 FLOPs, which makes achieving meaningful parallelism difficult.
In addition to the compute requirements, the model has approximately 1.6×10^6 parameters, which equate to about 6.4 MB if represented in single precision. (See Appendix <ref> for a complete performance model.) On CPU, a single Haswell or Broadwell core has a peak single-precision throughput of approximately 77×10^9 FLOPS and an L2-to-L1 cache bandwidth of approximately 140 GB/s [assuming two 8-wide AVX FMA instructions every cycle and an L2-to-L1 bandwidth of 64 bytes per cycle]. The model must be loaded from cache once per timestep, which requires a bandwidth of 100 GB/s. Even if the model were to fit in L2 cache, the implementation would need to utilize 70% of the maximum bandwidth and 70% of the peak FLOPS in order to do inference in real-time on a single core. Splitting the calculations across multiple cores reduces the difficulty of the problem, but nonetheless it remains challenging, as inference must operate at a significant fraction of maximum memory bandwidth and peak FLOPS while keeping threads synchronized. A GPU has higher memory bandwidth and peak FLOPS than a CPU, but provides a more specialized and hence restrictive computational model. A naive implementation that launches a single kernel for every layer or timestep is untenable, but an implementation based on the persistent RNN technique <cit.> may be able to take advantage of the throughput offered by GPUs. We implement high-speed optimized inference kernels for both CPU and GPU and demonstrate that WaveNet inference at faster-than-real-time speeds is achievable. Table <ref> lists the CPU and GPU inference speeds for different models. In both cases, the benchmarks include only the autoregressive, high-frequency audio generation and do not include the generation of linguistic conditioning features (which can be done in parallel for the entire utterance). Our CPU kernels run at real-time or faster-than-real-time for a subset of models, while the GPU models do not yet match this performance.
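The back-of-the-envelope budget above can be re-derived with a few lines of arithmetic, using only figures quoted in the text (16 kHz output, 40 layers, 1.6×10^6 single-precision parameters re-read from cache every sample):

```python
# Inference budget for the 40 layer, 16 kHz model, using figures from the text.
sample_rate = 16_000                     # samples per second
layers = 40
params = 1.6e6                           # model parameters
bytes_per_param = 4                      # single precision

per_sample_us = 1e6 / sample_rate        # time budget per audio sample
per_layer_us = per_sample_us / layers    # time budget per layer
model_mb = params * bytes_per_param / 1e6
# The whole model is re-read from cache once per sample:
bandwidth_gb_s = params * bytes_per_param * sample_rate / 1e9

print(f"{per_sample_us:.1f} us/sample, {per_layer_us:.2f} us/layer, "
      f"{model_mb:.1f} MB model, {bandwidth_gb_s:.0f} GB/s")
# 62.5 us/sample, 1.56 us/layer, 6.4 MB model, 102 GB/s
```

The ~62.5 μs per-sample and ~1.5 μs per-layer budgets and the ~100 GB/s cache bandwidth requirement match the figures in the discussion above.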
§.§ CPU Implementation

We achieve real-time CPU inference by avoiding any recomputation, doing cache-friendly memory accesses, parallelizing work via multithreading with efficient synchronization, minimizing nonlinearity FLOPs, avoiding cache thrashing and thread contention via thread pinning, and using custom hardware-optimized routines for matrix multiplication and convolution. For the CPU implementation, we split the computation into the following steps:

* Sample Embedding: Compute the WaveNet input causal convolution by doing two sample embeddings, one for the current timestep and one for the previous timestep, and summing them with a bias. That is,

x^(0) = W_emb,prev · y_i-1 + W_emb,cur · y_i + B_embed.

* Layer Inference: For every layer j from j=1 to ℓ with dilation width d:

  * Compute the left half of the width-two dilated convolution via a matrix-vector multiply:

  a^(j)_prev = W_prev^(j) · x_i-d^(j-1).

  * Compute the right half of the dilated convolution:

  a^(j)_cur = W_cur^(j) · x_i^(j-1).

  * Compute the hidden state h^(j) given the conditioning vector L^(j)_h:

  a^(j) = a^(j)_prev + a^(j)_cur + B^(j)_h + L^(j)_h,
  h^(j) = tanh(a^(j)_0:r) · σ(a^(j)_r:2r),

  where v_0:r denotes the first r elements of the vector v and v_r:2r denotes the next r elements. Then, compute the input to the next layer via a matrix-vector multiply:

  x^(j) = W_res^(j) · h^(j) + B_res^(j).

  * Compute the contribution to the skip-channel matrix multiply from this layer, accumulating over all layers, with q^(0) = B_skip:

  q^(j) = q^(j-1) + W_skip^(j) · h^(j).

* Output: Compute the two output 1×1 convolutions:

  z_s = relu(q^(ℓ)),
  z_a = relu(W_relu · z_s + B_relu),
  p = softmax(W_out · z_a + B_out).

Finally, sample y_i+1 randomly from the distribution p. We parallelize these steps across two groups of threads, as depicted in Figure <ref>. A group of main threads computes x^(0), a^(j)_cur, h^(j), x^(j), z_a, and p.
A group of auxiliary threads computes a^(j)_prev, q^(j), and z_s, with the a^(j)_prev being computed for the next upcoming timestep while the main threads compute z_a and p. Each of these groups can consist of a single thread or of multiple threads; if there are multiple threads, each thread computes one block of each matrix-vector multiply, binary operation, or unary operation, and thread barriers are inserted as needed. Splitting the model across multiple threads both splits up the compute and can also be used to ensure that the model weights fit into the processor L2 cache.

Pinning threads to physical cores (or disabling hyperthreading) is important for avoiding thread contention and cache thrashing and increases performance by approximately 30%.

Depending on model size, the nonlinearities (tanh, sigmoid, and softmax) can also take a significant fraction of inference time, so we replace all nonlinearities with high-accuracy approximations, which are detailed in Appendix <ref>. The maximum absolute error arising from these approximations is 1.5× 10^-3 for tanh, 2.5× 10^-3 for sigmoid, and 2.4× 10^-5 for e^x. With approximate instead of exact nonlinearities, performance increases by roughly 30%.

We also implement inference with quantized weight matrices and find no change in perceptual quality when using quantization. For larger models, quantization offers a significant speedup when using fewer threads, but the overhead of thread synchronization prevents it from being useful with a larger number of threads.

Finally, we write custom AVX assembly kernels for matrix-vector multiplication using PeachPy <cit.> specialized to our matrix sizes. Inference using our custom assembly kernels is up to 1.5X faster than Intel MKL and 3.5X faster than OpenBLAS. Neither library provides the equivalent quantized operations.
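Before threading, quantization, and custom kernels are applied, the per-timestep computation enumerated above can be sketched in NumPy. The function name and dictionary layout here are illustrative, not the paper's implementation; the residual add follows the appendix form x^(i) = x^(i-1) + W_r · h^(i) + B_r:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def wavenet_step(x0, buffers, layers, B_skip, W_relu, B_relu, W_out, B_out):
    """One timestep of the per-layer recurrence from the step list above.

    x0      : (r,) vector from the sample-embedding step
    buffers : per-layer caches holding x^(j-1) from d timesteps ago
    layers  : dicts with keys W_prev, W_cur, B, L, W_res, B_res, W_skip
    """
    x = x0
    q = B_skip.copy()                                   # skip accumulator, q^(0)
    for layer, x_prev in zip(layers, buffers):
        a = layer["W_prev"] @ x_prev + layer["W_cur"] @ x + layer["B"] + layer["L"]
        r = a.shape[0] // 2
        h = np.tanh(a[:r]) * sigmoid(a[r:])             # gated activation
        q = q + layer["W_skip"] @ h                     # skip-channel accumulation
        x = x + layer["W_res"] @ h + layer["B_res"]     # residual connection
    z_s = np.maximum(q, 0.0)                            # relu
    z_a = np.maximum(W_relu @ z_s + B_relu, 0.0)
    logits = W_out @ z_a + B_out
    e = np.exp(logits - logits.max())                   # numerically stable softmax
    return e / e.sum()
```

The caller is responsible for maintaining the per-layer dilation buffers; the two-thread-group scheme described above corresponds to splitting these matrix-vector products between the main and auxiliary threads.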
§.§ GPU Implementation Due to their computational intensity, many neural models are ultimately deployed on GPUs, which can have a much higher computational throughput than CPUs. Since our model is memory bandwidth and FLOP bound, it may seem like a natural choice to run inference on a GPU, but doing so comes with a different set of challenges.

Usually, code is run on the GPU in a sequence of kernel invocations, with every matrix multiply or vector operation being its own kernel. However, the latency for a CUDA kernel launch (which may be up to 50 μs) combined with the time needed to load the entire model from GPU memory are prohibitively large for an approach like this. An inference kernel in this style ends up being approximately 1000X slower than real-time.

To get close to real-time on a GPU, we instead build a kernel using the techniques of persistent RNNs <cit.>, which generates all samples in the output audio in a single kernel launch. The weights for the model are loaded to registers once and then used without unloading them for the entire duration of inference. Due to the mismatch between the CUDA programming model and such persistent kernels, the resulting kernels are specialized to particular model sizes and are incredibly labor-intensive to write. Although our GPU inference speeds are not quite real-time (Table <ref>), we believe that with these techniques and a better implementation we can achieve real-time WaveNet inference on GPUs as well as CPUs. Implementation details for the persistent GPU kernels are available in Appendix <ref>.

§ CONCLUSION

In this work, we demonstrate that current Deep Learning approaches are viable for all the components of a high-quality text-to-speech engine by building a fully neural system. We optimize inference to faster-than-real-time speeds, showing that these techniques can be applied to generate audio in real-time in a streaming fashion.
Our system is trainable without any human involvement, dramatically simplifying the process of creating TTS systems.

Our work opens many new possible directions for exploration. Inference performance can be further improved through careful optimization, model quantization on GPU and quantization on CPU, as well as experimenting with other architectures such as the Xeon Phi. Another natural direction is removing the separation between stages and merging the segmentation, duration prediction, and fundamental frequency prediction models directly into the audio synthesis model, thereby turning the problem into a full sequence-to-sequence model, creating a single end-to-end trainable TTS system, and allowing us to train the entire system with no intermediate supervision. In lieu of fusing the models, improving the duration and frequency models via larger training datasets or generative modeling techniques may have an impact on voice naturalness.

§ WAVENET ARCHITECTURE AND DETAILS

The WaveNet consists of a conditioning network c = C(v), which converts low-frequency linguistic features v to the native audio frequency, and an auto-regressive process P(y_i | c, y_i-1, …, y_i-R) which predicts the next audio sample given the conditioning for the current timestep c and a context of R audio samples. R is the receptive field size, and is a property determined by the structure of the network. A sketch of the WaveNet architecture is shown in Figure <ref>. The network details are described in the following subsections.

§.§ Auto-regressive WaveNet The structure of the auto-regressive network is parameterized by the number of layers ℓ, the number of skip channels s, and the number of residual channels r.

Audio is quantized to a=256 values using μ-law companding, as described in Section 2.2 of WaveNet.
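For reference, a common μ-law companding encoder can be sketched as follows; the exact rounding and bin conventions used in the paper's pipeline may differ:

```python
import numpy as np

def mu_law_encode(audio, mu=255):
    """Map waveform samples in [-1, 1] to a=256 discrete values (mu-law companding)."""
    audio = np.clip(audio, -1.0, 1.0)
    # Compress amplitude: sign(y) * ln(1 + mu*|y|) / ln(1 + mu), still in [-1, 1].
    companded = np.sign(audio) * np.log1p(mu * np.abs(audio)) / np.log1p(mu)
    # Shift from [-1, 1] to integer bins {0, ..., 255}.
    return ((companded + 1.0) / 2.0 * mu + 0.5).astype(np.int64)
```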
The one-hot encoded values go through an initial 2x1 convolution which generates the input x^(0)∈ℝ^r for the first layer in the residual stack:

x^(0) = W_embed * y + B_embed,

where * is the one-dimensional convolution operator. Since the input audio y is a one-hot vector, this convolution can be done via embeddings instead of matrix multiplies. Each subsequent layer computes a hidden state vector h^(i) and then (due to the residual connections between layers) adds to its input x^(i-1) to generate its output x^(i):

h^(i) = tanh(W^(i)_h * x^(i-1) + B^(i)_h + L^(i)_h) · σ(W^(i)_g * x^(i-1) + B^(i)_g + L^(i)_g)

x^(i) = x^(i-1) + W^(i)_r · h^(i) + B^(i)_r,

where L^(i) is the output for that layer of the conditioning network. Since each layer adds its output to its input, the dimensionality of the layers must remain fixed at the number of residual channels, r. Although this is written here as two convolutions, one for W_h and one for W_g, it is actually done more efficiently with a single convolution with r input and 2r output channels. During inference, this convolution is replaced with two matrix-vector multiplies with matrices W_prev (the left half of the convolution) and W_cur (the right half). Thus we can reformulate the computation of h^(i) for a specific timestep t as follows:

h'^(i) = W^(i)_prev · x^(i-1)_t-d + W^(i)_cur · x^(i-1)_t + B^(i) + L^(i)

h^(i) = tanh(h'^(i)_0:r) · σ(h'^(i)_r:2r),

where L^(i) is a concatenation of L^(i)_h and L^(i)_g, and B^(i) is a concatenation of B^(i)_h and B^(i)_g.

The hidden state h^(i) from each of the layers 1 through ℓ is concatenated and projected with a learned W_skip down to the number of skip channels s:

h = [ h^(1); h^(2); ⋮; h^(ℓ) ], h ∈ℝ^ℓ r

z_s = relu(W_skip · h + B_skip), z_s ∈ℝ^s

where relu(x) = max(0, x).
z_s is then fed through two fully connected relu layers to generate the output distribution p ∈ℝ^a:

z_a = relu(W_relu · z_s + B_relu), z_a ∈ℝ^a

p = softmax(W_out · z_a + B_out)

§.§ Conditioning Network When trained without conditioning information, WaveNet models produce human-like "babbling sounds", as they lack sufficient long-range information to reproduce words. In order to generate recognizable speech, every timestep is conditioned by an associated set of linguistic features. This is done by biasing every layer with a per-timestep conditioning vector generated from a lower-frequency input signal containing phoneme, stress, and fundamental frequency features.

The frequency of the audio is significantly higher than the frequency of the linguistic conditioning information, so an upsampling procedure is used to convert from lower-frequency linguistic features to higher-frequency conditioning vectors for each WaveNet layer.

The original WaveNet does upsampling by repetition or through a transposed convolution. Instead, we first pass our input features through two bidirectional quasi-RNN layers <cit.> with fo-pooling and 2x1 convolutions. A unidirectional QRNN layer with fo-pooling is defined by the following equations:

h̃ = tanh(W_h * x + B_h)

o = σ(W_o * x + B_o)

f = σ(W_f * x + B_f)

h_t = f_t · h_t-1 + (1 - f_t) · h̃_t

z_t = o_t · h_t

A bidirectional QRNN layer is computed by running two unidirectional QRNNs, one on the input sequence and one on a reversed copy of the input sequence, and then stacking their output channels. After both QRNN layers, we interleave the channels, so that the tanh and the sigmoid in the WaveNet both get channels generated by the forward QRNN and backward QRNN.

Following the bidirectional QRNN layers, we upsample to the native audio frequency by repetition[Upsampling using bilinear interpolation slowed convergence and reduced generation quality by adding noise or causing mispronunciations, while bicubic upsampling led to muffled sounds.
Upsampling by repetition is done by computing the ratio of the output frequency to the input frequency and repeating every element in the input signal an appropriate number of times.]. We find that the model is very sensitive to the upsampling procedure: although many variations of the conditioning network converge, they regularly produce phoneme mispronunciations.

§.§ Input Featurization Our WaveNet is trained with 8-bit μ-law companded audio which is downsampled to 16384 Hz from 16-bit dual-channel PCM audio at 48000 Hz. It is conditioned on a 256 Hz phoneme signal. The conditioning feature vector has 227 dimensions. Of these, two are for fundamental frequency: one indicates whether the current phoneme is voiced (and thus has an F0) and the other is normalized log-frequency, computed by normalizing the log of F0 relative to the minimum observed F0 so that it lies approximately between -1 and 1. The rest of the features describe the current phoneme, the two previous phonemes, and the two next phonemes, with each phoneme being encoded via a 40-dimensional one-hot vector for phoneme identity (39 ARPABET phonemes and 1 for silence) and a 5-dimensional one-hot vector for phoneme stress (no stress, primary stress, secondary stress, tertiary stress, and quaternary stress). Not all of the datasets we work with have tertiary or quaternary stress, and those features are always zero for the datasets that do not have those stress levels.

In our experiments, we found that including the phoneme context (two previous and two next phonemes) is crucial for upsampling via transposed convolution and less critical but still important for our QRNN-based upsampling. Although sound quality without the phoneme context remains high, mispronunciation of a subset of the utterances becomes an issue.
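The 227-dimensional layout described above (2 F0 features plus 5 phonemes × (40 identity + 5 stress) one-hot dimensions = 227) can be sketched as follows; `featurize` and its argument layout are illustrative, not the paper's code:

```python
import numpy as np

N_PHONEMES, N_STRESS = 40, 5  # 39 ARPABET phonemes + silence; 5 stress levels

def featurize(phoneme_window, stress_window, voiced, log_f0_norm):
    """Build one conditioning vector.

    phoneme_window, stress_window : 5 integer ids each (two previous phonemes,
                                    the current phoneme, and two next phonemes)
    voiced        : whether the current phoneme has an F0
    log_f0_norm   : normalized log-F0, roughly in [-1, 1]
    """
    parts = [np.array([float(voiced), log_f0_norm])]   # the 2 F0 features
    for ph, st in zip(phoneme_window, stress_window):
        ph_onehot = np.zeros(N_PHONEMES); ph_onehot[ph] = 1.0
        st_onehot = np.zeros(N_STRESS);   st_onehot[st] = 1.0
        parts.append(ph_onehot)
        parts.append(st_onehot)
    return np.concatenate(parts)                       # 2 + 5 * 45 = 227 dims
```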
We also found that including extra prosody features such as word and syllable breaks, pauses, phoneme and syllable counts, frame position relative to phoneme, etc., was unhelpful and did not result in higher quality synthesized samples.

In order to convert from phonemes annotated with durations to a fixed-frequency phoneme signal, we sample the phonemes at regular intervals, effectively repeating each phoneme (with context and F0) a number of times proportional to its duration. As a result, phoneme duration is effectively quantized to 1/256 sec ≈ 4 ms. We use Praat <cit.> in batch mode to compute F0 at the appropriate frequency, with a minimum F0 of 75 and a maximum F0 of 500. The Praat batch script used to generate F0 is available at https://github.com/baidu-research/deep-voice/blob/master/scripts/f0-script.praat and can be run with Praat in batch mode.

§.§ Sampling from Output Distribution At every timestep, the synthesis model produces a distribution over samples, P(y), conditioned on the previous samples and the linguistic features. In order to produce the samples, there are a variety of ways one could use this distribution:

* Direct Sampling: Sample randomly from P(y).

* Temperature Sampling: Sample randomly from a distribution adjusted by a temperature t,

P̃_t(y) = (1/Z) P(y)^1/t,

where Z is a normalizing constant.

* Mean: Take the mean of the distribution, E_P[y].

* Mode: Take the most likely sample, argmax P(y).

* Top k: Sample from an adjusted distribution that only permits the top k samples,

P̃_k(y) = 0 if P(y) < kth(P(y)), and P̃_k(y) = P(y)/Z otherwise,

where Z is a normalizing constant.

We find that out of these different sampling methods, only direct sampling produces high quality outputs.
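For concreteness, the temperature and top-k adjustments listed above can be sketched as follows (the helper names are hypothetical):

```python
import numpy as np

def temperature_adjust(p, t):
    """P_t(y) proportional to P(y)^(1/t); t=1 recovers the learned distribution."""
    q = p ** (1.0 / t)
    return q / q.sum()

def top_k_adjust(p, k):
    """Zero out all but the k most likely samples, then renormalize."""
    q = np.where(p >= np.sort(p)[-k], p, 0.0)
    return q / q.sum()
```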
Temperature sampling produces acceptable quality results, and indeed outperforms direct sampling early on in training, but for converged models it is significantly worse. This observation indicates that the generative audio model accurately learns a conditional sample distribution and that modifying this distribution through the above heuristics is worse than just using the learned distribution.

§.§ Training We observed several tendencies of the models during training. As expected, the randomly initialized model produces white noise. Throughout training, the model gradually increases the signal-to-noise ratio, and the volume of the white noise dies down while the volume of the speech signal increases. The speech signal can be inaudible for tens of thousands of iterations before it dominates the white noise.

In addition, because the model is autoregressive, rare mistakes can produce very audible disturbances. For example, a common failure mode is to produce a small number of incorrect samples during sampling, which then results in a large number of incorrect samples due to compounding errors. This is audible as a brief period of loud noise before the model stabilizes. The likelihood of this happening is higher early on in training, and it does not happen in converged models.

§ PHONEME MODEL LOSS The loss for the n^th phoneme is

L_n = |t̂_n - t_n| + λ_1 CE(p̂_n, p_n) + λ_2 ∑_t=0^T-1 |F̂0_n,t - F0_n,t| + λ_3 ∑_t=0^T-2 |F̂0_n,t+1 - F̂0_n,t|,

where the λ_i are tradeoff constants, t̂_n and t_n are the estimated and ground-truth durations of the n^th phoneme, p̂_n and p_n are the estimated and ground-truth probabilities that the n^th phoneme is voiced, CE is the cross-entropy function, and F̂0_n,t and F0_n,t are the estimated and ground-truth values of the fundamental frequency of the n^th phoneme at time t.
T time samples are equally spaced along the phoneme duration.

§ NONLINEARITY APPROXIMATION DETAILS During inference, we replace exact implementations of the neural network nonlinearities with high-accuracy rational approximations. In this appendix, we detail the derivation of these approximations.

§.§ tanh and sigmoid approximation Denoting by ẽ(x) an approximation to e^|x|, we use the following approximations for tanh and σ:

tanh(x) ≈ sign(x) (ẽ(x) - 1/ẽ(x)) / (ẽ(x) + 1/ẽ(x))

σ(x) ≈ ẽ(x)/(1 + ẽ(x)) for x ≥ 0, and 1/(1 + ẽ(x)) for x ≤ 0.

We choose a fourth-order polynomial to represent ẽ(x). The following fit produces accurate values for both tanh(x) and σ(x):

ẽ(x) = 1 + |x| + 0.5658 x^2 + 0.143 x^4

By itself, ẽ(x) is not a very good approximation to e^|x|, but it yields good approximations when used to approximate tanh and σ as described in Equations <ref> and <ref>.

§.§ e^x approximation We follow the approach of <cit.> to calculate an approximate e^x function. Instead of approximating e^x directly, we approximate 2^x and use the identity e^x = 2^(x/ln 2). Let ⌊x⌋ be the floor of x ∈ ℝ. Then,

2^x = 2^⌊x⌋ · 2^(x - ⌊x⌋) = 2^⌊x⌋ · (1 + (2^(x - ⌊x⌋) - 1)),

where 0 ≤ 2^(x - ⌊x⌋) - 1 < 1 since 0 ≤ x - ⌊x⌋ < 1. If we use a 32-bit float to represent 2^x, then ⌊x⌋ + 127 and 2^(x - ⌊x⌋) - 1 are represented by the exponent and fraction bits of 2^x. Therefore, if we interpret the byte pattern of 2^x as a 32-bit integer (represented by I_2^x), we have

I_2^x = (⌊x⌋ + 127) · 2^23 + (2^(x - ⌊x⌋) - 1) · 2^23.

Rearranging Equation <ref> and using z = x - ⌊x⌋ results in

I_2^x = (x + 126 + {2^z - z}) · 2^23.

If we can accurately approximate g(z) = 2^z - z over z ∈ [0, 1), then by interpreting the byte representation of I_2^x in Equation <ref> back as a 32-bit float, we can accurately approximate 2^x.
We use a rational approximation

g(z) ≈ -4.7259162 + 27.7280233/(4.84252568 - z) - 1.49012907 z,

which gives a maximum error of 2.4× 10^-5 for x ∈ (-∞, 0].

§ PERSISTENT GPU KERNELS An NVIDIA GPU has multiple Streaming Multiprocessors (SMs), each of which has a register file and an L1 cache. There is also a coherent L2 cache that is shared by all SMs. The inference process needs to generate one sample every 61 μs. Due to the high latency of a CUDA kernel launch and of reading small matrices from GPU memory, the entire audio generation process must be done by a single kernel with the weights loaded into the register file across all SMs. This raises two challenges—how to split the model across registers in a way that minimizes communication between SMs, and how to communicate between SMs given the restrictions imposed by the CUDA programming model.

We split the model across the register file of 24 SMs, numbered SM1 to SM24, of a TitanX GPU. We do not use SM24. SM1 to SM20 each store two adjacent layers of the residual stack. This means SM1 stores layers 1 and 2, SM2 stores layers 3 and 4, and so on. Each layer has three matrices and three bias vectors—W_prev and B_prev, W_cur and B_cur for the dilated convolutions, and W_r and B_r. Thus SMi generates the two hidden states h^(2i-1) and h^(2i) and the output x^(2i). Each SM also stores the rows of the W_skip matrix that interact with the hidden state vectors it generates. Thus W_skip is partitioned across 20 SMs. Only SM20 needs to store B_skip. SM21 stores W_relu and B_relu. Finally, W_out is split across two SMs—SM22 and SM23—because of register file limitations, and SM23 stores B_out.

The next challenge is to coordinate the data transfer between SMs, since the CUDA programming model executes one kernel across all SMs in parallel. However, we want execution to proceed sequentially in a round-robin fashion from SM1 to SM23 and back again to SM1 as we generate one audio sample at a time.
We launch our CUDA kernel with 23 thread blocks and simulate such sequential execution by spinning on locks, one for each SM, that are stored in global memory and cached in L2. First, SM1 executes two layers of the WaveNet model to generate h^(1), h^(2), and x^(2). It then unlocks the lock that SM2 is spinning on and sets its own lock. It does this by bypassing the L1 cache to write to global memory, so that all SMs have a coherent view of the locks. Then SM2 does the same for SM3, and this sequential locking and unlocking chain continues for each SM. Finally, SM23 generates the output distribution p for timestep t and unlocks SM1 so that the entire process can repeat to generate p for timestep t+1.

Just like the locks, we pass data between SMs by reading and writing to global memory, bypassing the L1 cache. Since NVIDIA GPUs have a coherent L2 cache, a global memory write bypassing the L1, followed by a memory fence, results in a coherent view of memory across SMs.

This partitioning scheme, however, is quite inflexible and only works for the specific values of ℓ, r, and s shown in Table <ref>. This is because each SM has a fixed-size register file, which, combined with the relatively inflexible and expensive communication mechanism between SMs, implies that splitting weight matrices between SMs is challenging. Any change in those parameters means a new kernel has to be written, which is a very time-consuming process.

There are two main reasons why the GPU kernels are slower than the CPU kernels. First, synchronization between SMs in a GPU is expensive, since it is done by busy-waiting on locks in the L2 cache. Second, even though we divide the model in a way that fits in the register file of each SM, the CUDA compiler still spills to L1 cache. We hope that with handcrafted assembly code, we will be able to match the performance of the CPU kernels.
However, the lack of parallelism in WaveNet inference makes it difficult to hide the latencies inherent in reading and writing small matrices from GPU memory, which are exposed in the absence of a rich cache hierarchy in GPUs.

§ PERFORMANCE MODEL

We present a performance model for the autoregressive WaveNet architecture described in Appendix <ref>. In our model, a dot product between two vectors of dimension r takes 2r FLOPs—r multiplications and r additions. This means that a matrix-vector multiply between W, an r × r matrix, and x, an r × 1 vector, takes 2r · r = 2r^2 FLOPs. Thus, calculating h'^(i) uses

Cost(h'^(i)) = (2r · 2r) + (2r · 2r) + 2r + 2r + 2r FLOPs.

Let division and exponentiation take f_d and f_e FLOPs, respectively. This means tanh and σ each take (f_d + 2f_e + 1) FLOPs. Thus, calculating h^(i) takes 2r · (f_d + 2f_e + 1) + r FLOPs. Finally, calculating x^(i) for each layer takes r + (2r · r) + r FLOPs. This brings the total FLOPs for calculating one layer to

Cost(layer) = 10r^2 + 11r + 2r(f_d + f_e) FLOPs.

Under the same model, calculating z_s takes (ℓ · 2r) · s + s + s FLOPs, where we assume that relu takes 1 FLOP. Similarly, calculating z_a takes 2s · a + a + a FLOPs, and W_out · z_a + B_out takes 2a · a + a FLOPs.

Calculating the numerically stable softmax takes one max, one subtract, one exponentiation, one sum, and one division per element of a vector. Hence calculating p takes 3a + a(f_d + f_e) FLOPs.

Adding it all up, our final performance model to generate each audio sample is as follows:

Cost(sample) = ℓ(10r^2 + 11r + 2r(f_d + f_e)) + s(2r · ℓ + 2) + a(2s + 2a + 3) + a(3 + f_d + f_e) FLOPs.

If we let ℓ=40, r=64, and s=a=256, and assume that f_d = 10 and f_e = 10, then with a sampling frequency of 16384 Hz we have approximately 55×10^9 FLOPs for every second of synthesis.
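Evaluating the closed-form model above at the quoted operating point reproduces the 55×10^9 figure:

```python
# The closed-form cost model above, evaluated at the quoted operating point.
def flops_per_sample(l, r, s, a, f_d=10, f_e=10):
    layers = l * (10 * r ** 2 + 11 * r + 2 * r * (f_d + f_e))
    skip = s * (2 * r * l + 2)
    output = a * (2 * s + 2 * a + 3) + a * (3 + f_d + f_e)
    return layers + skip + output

per_second = flops_per_sample(l=40, r=64, s=256, a=256) * 16384  # 16384 Hz
print(f"{per_second / 1e9:.0f} GFLOPs per second of synthesis")  # ~55
```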
http://arxiv.org/abs/1702.07825v2
{ "authors": [ "Sercan O. Arik", "Mike Chrzanowski", "Adam Coates", "Gregory Diamos", "Andrew Gibiansky", "Yongguo Kang", "Xian Li", "John Miller", "Andrew Ng", "Jonathan Raiman", "Shubho Sengupta", "Mohammad Shoeybi" ], "categories": [ "cs.CL", "cs.LG", "cs.NE", "cs.SD" ], "primary_category": "cs.CL", "published": "20170225031104", "title": "Deep Voice: Real-time Neural Text-to-Speech" }
Corresponding author: safaei@ut.ac.ir

Department of Physical Chemistry, School of Chemistry, College of Science, University of Tehran, Tehran, I. R. Iran

High-order harmonic generation (HHG) from the molecular ion HeH^2+ in the two initial states 1sσ (ground state) and 2pσ (first excited state) is investigated theoretically, in homogeneous and plasmonic-enhanced laser fields. The electron ionization and the enhancement of the HHG yield for the ground and first excited states as initial states are studied, and a strong orientation dependence of the ionization of the electron in the 2pσ state is discovered, which can be used to achieve a longer lifetime for the electron in the first excited state during high harmonic emission. In order to investigate the effect of the plasmonic field on the cutoff position of the HHG spectra, first the enhancement of the laser field in a nano-antenna is calculated, and then the interaction of the enhanced field with the HeH^2+ molecule is studied by solving the time-dependent Schrödinger equation. According to the results, when the HeH^2+ molecule, in the initial state 2pσ, is irradiated by a plasmonic-enhanced polarized laser field with the polarization normal to the molecular axis, the HHG yield and cutoff position are enhanced while the excited state retains a long lifetime during the HHG process.

Quantum mechanical study of high-order harmonic generation from the HeH^2+ molecule in homogeneous and plasmonic-enhanced laser fields

Nehzat Safaei

Received: date / Accepted: date
==================================================================

§ INTRODUCTION

High-harmonic generation (HHG) is an extremely nonlinear, nonperturbative response of atoms and molecules to strong laser fields, which provides an important tool to investigate ultrafast electronic dynamics [1-2]. When atoms and molecules are subject to intense laser radiation, new phenomena appear, such as HHG and above-threshold ionization (ATI) [3-5].
In particular, HHG has become a very interesting topic, since it is the most reliable way to obtain coherent light sources in the spectral range from the ultraviolet to the extreme ultraviolet. The HHG mechanism can be interpreted by a semiclassical three-step model [6-8]. In this model, when atoms and molecules are exposed to intense laser fields, the outer-shell electron tunnels through the Coulomb barrier as a consequence of the nonperturbative interaction with the coherent electromagnetic radiation; the released electron is then accelerated by the laser field and may finally return to the parent ion due to a phase change of the electric field, followed by the emission of attosecond bursts of electromagnetic waves. Recent studies have discovered some novel effects in HHG from asymmetric charged molecules such as HeH^2+ and LiH^3+. For example, Bian and Bandrauk [9] reported the orientation dependence of nonadiabatic effects in HHG from the HeH^2+ ion, in the ground state as the initial state, in short, intense laser pulses. Research shows that, in these asymmetric molecules with permanent dipoles, the excited states are localized states with a long lifetime. Hence the role of the excited states must be considered in molecular high-order harmonic generation (MHOHG) from asymmetric molecules, and the three-step model, which mainly takes the ground and continuum states into account, cannot explain some properties of HHG from these molecules, such as the maximum cutoff energy [10]. Bian and Bandrauk [11] proposed a four-step model for molecular high-order harmonic generation from the HeH^2+ molecule. By investigating LiH^3+, BeH^4+, and HeH^2+, Feng and Chu [12] have shown that this excited-state effect is a general characteristic of asymmetric molecules. On the other hand, previous investigations of HHG from atoms proved that excited states can enhance harmonic yields [13-14].
In this article, we use the simplest asymmetric molecular ion, HeH^2+, which has a first excited state 2pσ with a comparably long lifetime, to investigate the orientation dependence of electron ionization and high harmonic emission from the molecule with the ground and first excited states as initial states, in order to find an efficient way to generate harmonics. We also employ a plasmonic field, one of the recently developed and important ways to extend the cutoff energy of HHG. Plasmonic field enhancement in nano-antennas has gained enormous interest, and there has been remarkable theoretical [15-19] and experimental [20-24] activity on this subject. The collective motion of confined electrons in a nanostructure exposed to an external electromagnetic field results in an enhanced field, which can be calculated by solving Maxwell's equations. Recent experiments show that, using a combination of engineered metal nanostructures and rare gases, HHG can be produced without using extra cavities or laser pumping to amplify the power of the input pulse [25], as the local electric fields are enhanced by more than 20 dB [26-27]. The numerical and semiclassical approaches used to simulate strong laser-matter interaction, in particular high-order harmonic generation, are largely based on the dipole approximation for laser-atom/molecule interactions [15,16]. Within this assumption, the laser electric field E(r,t) and its associated vector potential A(r,t) are spatially homogeneous in the region where the electron dynamics take place, and their spatial dependence is ignored, i.e., E(r,t) = E(t) and A(r,t) = A(t). On the contrary, the field generated by a plasmonic nanostructure is spatially dependent on a nanometer scale and cannot be described within this assumption. In the present work, the cutoff extension of HHG from the HeH^2+ molecule in plasmonic-enhanced fields is calculated for the two initial states 1sσ and 2pσ and compared to the results for homogeneous fields.
The rest of the paper is organized as follows. In the next section (Sec. II), the structure of the nano-antenna and the calculation of the field enhancement are presented. In Sec. III, the theoretical method, which is based on the time-dependent Schrödinger equation, is described. Results and discussion are presented in Sec. IV. The last part of the paper, Sec. V, provides a short summary and outlook.

§ STRUCTURE OF NANO-ANTENNA

The nano-antenna is formed by two identical triangular gold pads separated by an air gap g. As presented in Fig. 1(a), the structure of the nano-antenna is characterized by three geometrical parameters. The curvature radii of the tips are set to 4 nm in order to avoid nonphysical field enhancement due to the tip effect. The spatial profile of the field enhancement around the bow-tie is determined by means of finite-difference time-domain (FDTD) calculations (COMSOL Multiphysics) [15] and is shown in Fig. 1(b). According to Fig. 1(b), the laser electric field peak amplitude is enhanced by a factor of more than 2.4 near the metal tips compared with the value at the center.

§ COMPUTATIONAL METHODS

The interaction of the HeH^2+ molecule with a linearly polarized laser field is described by the corresponding two-dimensional time-dependent Schrödinger equation (TDSE) [10-11] (atomic units are used throughout the paper):

i ∂ψ(x,y,t)/∂t = [H_0(x,y) + H(x,y,t)] ψ(x,y,t),

where the unperturbed Hamiltonian H_0 of the system is given by

H_0(x,y) = -1/2 ∇_x,y^2 + V(x,y).

In the above equation, V is the soft-core Coulomb potential of the HeH^2+ molecule,

V(x,y) = -2/√(β+(x-R/2)^2+y^2) - 1/√(β+(x+R/2)^2+y^2),

where the softening parameter β is chosen to reproduce the real 1sσ energy curve and match the ionization potential of the HeH^2+ ion, i.e., E_1sσ = -2.25 a.u. The interaction term in the length gauge is H(x,y,t) = E(t,x)·x in the case of θ=0^∘ and H(x,y,t) = E(t,y)·y for θ=90^∘. The laser polarization is along the molecular axis in the case of θ=0^∘ (parallel) and normal to the molecular axis for θ=90^∘ (perpendicular).
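As an illustration, the soft-core potential above can be evaluated on a 2D grid as follows. The value of β here is a placeholder (in practice β is tuned so that E_1sσ = -2.25 a.u.), and the inclusion of the transverse coordinate y in the denominators is the standard 2D soft-core form:

```python
import numpy as np

def soft_coulomb(x, y, R=4.0, beta=0.5):
    """2D soft-core potential of HeH^2+: charge 2 (He) at x=+R/2, charge 1 (H) at x=-R/2.

    beta is the softening parameter; the value here is illustrative only.
    """
    v_he = -2.0 / np.sqrt(beta + (x - R / 2) ** 2 + y ** 2)
    v_h = -1.0 / np.sqrt(beta + (x + R / 2) ** 2 + y ** 2)
    return v_he + v_h
```

On a numerical grid this is broadcast over meshgrid arrays of x and y; the deeper well sits at the He nucleus, consistent with the asymmetric charge distribution discussed in the text.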
During the simulations, an absorbing potential is included to avoid unphysical reflections of the electron wavepacket at the boundaries. The TDSE is solved using the unitary split-operator method, where an eleven-point finite-difference method is used for calculating the first and second derivatives. The Crank-Nicolson method, which expresses the exponential operator to third order, is used to handle the time propagation. The simulation box is set to 200 a.u. × 200 a.u. and the adaptive grid spacing is set to 0.2 a.u. (near the center of the simulation box) and 0.5 a.u. (near the borders of the simulation box) in both directions. The corresponding time step is set to 0.01 a.u.

For the HeH^2+ molecule the 1sσ ground state is repulsive, whereas the first excited state 2pσ is a bound state with a minimum at R = 3.89 a.u. In the present paper the internuclear distance is fixed at R = 4 a.u., near the first excited-state minimum. The excited state of the HeH^2+ molecule is determined numerically by the Gram-Schmidt orthogonalization method. Based on the Ehrenfest theorem, the time-dependent dipole acceleration in the case of θ=0^∘ can be written as [28]:

d_A(t) = ⟨ψ(x,y,t)| x̂·[∇V(x,y) + E(t,x)] |ψ(x,y,t)⟩.

The HHG spectra are calculated as the square of the windowed Fourier transform of the dipole acceleration d_A(t) in the direction of polarization of the electric field,

S(ω) = |1/√(2π) ∫_0^T d_A(t) H(t) exp[-iω t] dt|^2,

where

H(t) = 1/2[1 - cos(2π t/T)]

is the Hanning filter and T is the total pulse duration. The time dependence of the harmonics is obtained by the Morlet wavelet transform of the dipole acceleration d_A(t) via:

w(ω,t) = √(ω/(π^1/2 σ)) ∫_-∞^+∞ d_A(t') exp[-iω(t'-t)] exp[-ω^2(t'-t)^2/(2σ^2)] dt'.

§ RESULTS AND DISCUSSION

In order to investigate the electron ionization and HHG spectra of the HeH^2+ molecule in homogeneous and plasmonic-enhanced fields, the Schrödinger equation is solved numerically for the two orientations θ=0^∘ and θ=90^∘.
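The Hanning-windowed Fourier transform used to obtain S(ω) can be sketched numerically as follows; `hhg_spectrum` is an illustrative helper built on a discrete FFT, not the authors' code:

```python
import numpy as np

def hhg_spectrum(d_acc, dt):
    """|windowed Fourier transform|^2 of the dipole acceleration.

    d_acc : dipole acceleration d_A(t) sampled on a uniform time grid
    dt    : time step (a.u.)
    Returns (omega, S) with S(omega) = |(1/sqrt(2 pi)) FT[d_A(t) H(t)]|^2,
    where H(t) is the Hanning filter over the total duration T = n*dt.
    """
    n = d_acc.size
    t = np.arange(n) * dt
    hanning = 0.5 * (1.0 - np.cos(2.0 * np.pi * t / (n * dt)))   # H(t)
    spec = np.fft.rfft(d_acc * hanning) * dt / np.sqrt(2.0 * np.pi)
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)               # rad per a.u.
    return omega, np.abs(spec) ** 2
```

Plotting S(ω) against ω/ω_0 (with ω_0 = 0.057 a.u. for 800 nm) gives the harmonic spectrum in units of the harmonic order.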
The peak intensity of the laser pulse is I=3× 10^14 W/cm^2 and the wavelength is λ=800 nm (ω=0.057 a.u.). The laser pulse has a trapezoidal shape, f(t), with 0.5 cycles ramp on, 9 cycles constant, and 0.5 cycles ramp off. The interaction of the laser field with the electron, in the case of θ=0^∘, can be written as: E(x,t) = E_0f(t) cos(ω t) ( 1 + g(x)). In the above equation, g(x) = ∑ b_j x^j, where the coefficients b_j are determined by fitting the data obtained from the FDTD simulations; they are set to zero in the case of a homogeneous field. First, the interaction of the HeH^2+ molecule, initially in the 1sσ state, with the homogeneous laser field is simulated. The calculated populations of the 1sσ and 2pσ states during the simulation are presented in Fig. 2 as a function of the laser cycle, for both the θ=0^∘ and θ=90^∘ orientations. As can be seen, when the polarization of the laser field is along the molecular axis, the population changes of the 1sσ and 2pσ states in every half cycle indicate enhanced excitation (EE), which shows that the 2pσ state can play a key role in the HHG process. As can also be seen from Fig. 2, in contrast with the parallel orientation, when the polarization of the laser field is perpendicular to the molecular axis, θ=90^∘, no significant change is observed in the population of the 2pσ state, while the population of the 1sσ state follows the same pattern as in the θ=0^∘ orientation, which indicates the role of the next higher excited states. As pointed out by Kamta et al. [29], for F<0 (the electric field amplitude | F| corresponds to the peak laser intensity), the electric field shifts the energies of the ground and first excited states upward by +| F| R/2 and downward by -| F| R/2, respectively, which results in a small energy gap between the dressed states, and both EE and EI (enhanced ionization) occur. On the contrary, for the F>0 part of the electric field, the energies of the ground and first excited states shift downward and upward, respectively, and the energy gap between these dressed states becomes large.
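The driving field introduced at the start of this section, a trapezoidal envelope times cos(ωt) with a polynomial inhomogeneity g(x), can be sketched as follows. The b_j values below are placeholders for the FDTD fit coefficients, which the paper does not list; the intensity-to-field conversion uses the standard atomic intensity unit 3.51×10^16 W/cm^2.

```python
import numpy as np

omega0 = 0.057                    # 800 nm in a.u.
E0 = (3e14 / 3.51e16) ** 0.5      # peak field in a.u. for I = 3e14 W/cm^2
Tc = 2.0 * np.pi / omega0         # one optical cycle

def envelope(t, ramp=0.5, flat=9.0):
    """Trapezoidal f(t): 0.5-cycle ramp on, 9 cycles constant, 0.5-cycle ramp off."""
    c = t / Tc  # time measured in optical cycles
    return np.clip(np.minimum(c / ramp, (2 * ramp + flat - c) / ramp), 0.0, 1.0)

def field(x, t, b=(0.0, 0.002)):
    """E(x,t) = E0 f(t) cos(w t) (1 + g(x)) with g(x) = sum_j b_j x^j.
    b holds placeholder fit coefficients; b = (0,) recovers the homogeneous field."""
    g = sum(bj * x ** j for j, bj in enumerate(b))
    return E0 * envelope(t) * np.cos(omega0 * t) * (1.0 + g)
```

Setting all b_j to zero reproduces the homogeneous reference field used for the comparison in Fig. 2.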
Consequently, the electron excited and ionized during the F<0 part can transit to the ground state during the F>0 part and emit photons. As is apparent from Fig. 2, the periodic population change of the ground state for the θ=90^∘ orientation is larger than that for θ=0^∘, from which one can conclude that the energy gap between the dressed ground state and one of the higher excited states in the perpendicular orientation is smaller than the gap between the dressed ground and first excited states in the parallel orientation. No difference was observed between the populations of the states in homogeneous and inhomogeneous fields. The calculated HHG spectra for the HeH^2+ molecule initially in the 1sσ state, in both homogeneous and plasmonic fields, are presented in Fig. 3 for the two orientations θ=0^∘ and θ=90^∘. From Fig. 3, one can see a resonance in the HHG spectrum when the molecule is pumped by a laser pulse polarized along the molecular axis, θ=0^∘, which is consistent with the previous findings of Bian and Bandrauk [9]. HHG from an asymmetric molecule shows some differences compared to that from a symmetric system. For asymmetric charged molecules with permanent dipoles, such as the HeH^2+ molecular ion, the lifetimes of the excited states can be comparatively long. The mean lifetime of the first excited state 2pσ of HeH^2+ is about 4 ns [30]. A linearly polarized intense laser field along the molecular axis pumps the system to the excited states, and the resulting laser-induced electron transfer opens multichannel molecular high-order harmonic generation. In order to describe the observed resonance, Bian et al. [11] proposed a four-step model instead of the semiclassical three-step one. In this four-step model, the laser field first pumps the electron from the ground state to the excited state, and then part of the electron wavepacket ionizes. The ionized electron accelerates in the laser field and finally recombines to the ground state, emitting the radiation.
In contrast with the θ=0^∘ orientation, there is no resonance for the perpendicular orientation in Fig. 3, where the transition from the 1sσ to the 2pσ state is forbidden by selection rules. As can be seen in Fig. 3, the cutoff position extends from 78 (θ=0^∘ orientation) and 70 (θ=90^∘ orientation) in the homogeneous field to 105 (θ=0^∘ orientation) and 118 (θ=90^∘ orientation) in the inhomogeneous field. In this part, the electron ionization and the high-order harmonic emission from the HeH^2+ molecule, initially in the 2pσ state, are investigated. The electronic probability density of the 2pσ state is mainly localized on the H^+ core, while the probability density of the electron in the 1sσ state concentrates on the He^2+ core. The populations of the 1sσ and 2pσ states, and the electron probability density (norm), as functions of the laser cycle in the homogeneous laser field, are illustrated in the left panel of Fig. 4 for both the θ=0^∘ and θ=90^∘ orientations. For the sake of comparison, the norm and the population changes of the ground state 1sσ_g and the first excited state 2pσ_u of the symmetric molecule H_2^+, initially in the first excited state, in a laser field with the same parameters as the field applied to the HeH^2+ molecule, are presented in the right panel of Fig. 4. As can be seen in Fig. 4, when the molecule is irradiated by a laser field linearly polarized along the molecular axis, θ=0^∘, most of the electron wavepacket, in both HeH^2+ and H_2^+, ionizes during the irradiation. In the case of the asymmetric molecule HeH^2+, there is a population transfer between the ground and first excited states in the last laser cycles, as shown in the inset, which corresponds to electron transfer between the nuclei He^2+ and H^+. This population transfer between the ground and first excited states for the initial state 2pσ keeps the intensity of the HHG higher than that for the initial state 1sσ.
When the polarization of the laser field is normal to the molecular axis, θ=90^∘, in contrast with the symmetric molecule H_2^+, the ionization rate of the electron in the asymmetric molecule HeH^2+ is much lower than that for the θ=0^∘ orientation. This strong orientation effect on the electron ionization can be used to achieve a longer lifetime for the electron in the first excited state, which is responsible for the enhancement of the HHG yield during the HHG process. As for the initial state 1sσ, no difference was observed between the populations of the states for the initial state 2pσ in homogeneous and plasmonic-enhanced fields. Therefore, the extension of the cutoff in plasmonic fields comes from the further acceleration of the ionized electron due to the gradient of the laser field. The HHG spectra of HeH^2+ in the 2pσ initial state, in homogeneous and plasmonic-enhanced fields, are presented in Fig. 5 for the θ=0^∘ and θ=90^∘ orientations. As can be seen, HHG from the HeH^2+ molecule in the 2pσ initial state has a higher intensity than in the 1sσ initial state, in both orientations. An extension of the cutoff position from 65 in the homogeneous field to 95 in the plasmonic field is observed for both orientations. The calculated time profiles of the HHG emission from HeH^2+ in the 1sσ and 2pσ initial states for the θ=90^∘ orientation, in homogeneous and plasmonic-enhanced laser fields, are shown in Figs. 6(a)-(d). The cutoff extension is evident for HHG in the inhomogeneous field for both initial states. Consistent with Fig. 5, when the HeH^2+ molecule is in the 2pσ initial state, one can see an enhancement of the HHG yield in comparison with the 1sσ initial state. As shown in Figs. 6(a) and 6(b), in the case of the 2pσ initial state, the high harmonic emission from HeH^2+ takes place during the entire laser pulse.
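The time profiles of individual harmonics shown in Fig. 6 follow from the Morlet wavelet transform w(ω,t) defined in the methods section. A direct discretisation (our own schematic implementation, with σ = 2π as a hypothetical width choice) might look like this:

```python
import numpy as np

def morlet_profile(d_A, t, w, sigma=2.0 * np.pi):
    """|w(omega, t)|^2 for fixed omega = w: Morlet wavelet transform of the
    dipole acceleration sampled at times t (direct O(N^2) evaluation)."""
    dt = t[1] - t[0]
    norm = np.sqrt(w / (np.sqrt(np.pi) * sigma))
    out = np.empty(len(t), dtype=complex)
    for i, ti in enumerate(t):
        tau = t - ti
        kernel = (np.exp(-1j * w * tau)
                  * np.exp(-w ** 2 * tau ** 2 / (2.0 * sigma ** 2)))
        out[i] = norm * np.sum(d_A * kernel) * dt
    return np.abs(out) ** 2
```

Evaluating the profile at the cutoff frequency reveals at which times within the pulse that harmonic is emitted; a frequency matching the signal yields a much larger response than a mismatched one.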
For comparison, the time profiles of the high harmonic emission from H_2^+ in the first excited state for the θ=90^∘ orientation, in homogeneous and plasmonic-enhanced laser fields, are presented in Figs. 6(e) and 6(f), respectively. As one can see from the comparison of Figs. 6(a) and 6(b) with 6(e) and 6(f), HHG from the HeH^2+ molecule has a higher cutoff than that from the H_2^+ molecule during the irradiation.
§ CONCLUSIONS AND OUTLOOK
The generation of high-order harmonics by the HeH^2+ molecule, in the two initial states 1sσ and 2pσ, is investigated in homogeneous and plasmonic-enhanced laser fields. The orientation dependence of the electron ionization and the high-order harmonic emission in both initial states is presented and compared for two relative orientations of the laser polarization and the molecular axis, θ=0^∘ and θ=90^∘. It is found that, when the molecule is irradiated by a laser field polarized normal to the molecular axis, θ=90^∘, the excited state, which is responsible for the enhancement of the HHG yield, has a longer lifetime than in the θ=0^∘ case. HHG from the 2pσ initial state has a higher intensity and a lower cutoff compared to the 1sσ initial state in both orientations. The cutoff extension of HHG from the HeH^2+ molecule in a plasmonic field, generated in a bow-tie-shaped gold nano-antenna, is calculated and amounts to about 30 harmonics for the first excited state 2pσ.
§ ACKNOWLEDGMENTS
The author sincerely thanks Dr H. Ahmadi for valuable discussions.
§ REFERENCES
[1] Krausz F and Ivanov M 2009 Rev. Mod. Phys. 81 163
[2] Chang Z and Corkum P 2010 J. Opt. Soc. Am. B 27 B9
[3] Lein M 2005 Phys. Rev. Lett. 94 053004
[4] Brabec T and Krausz F 2000 Rev. Mod. Phys. 72 545
[5] Protopapas M, Keitel C H and Knight P L 1997 Rep. Prog. Phys. 60 389
[6] Corkum P B 1993 Phys. Rev. Lett. 71 1994
[7] Du H N and Miao X Y 2012 Spectrosc. Lett. 45 556
[8] Du H N and Miao X Y 2013 Spectrosc. Lett. 46 535
[9] Bian X B and Bandrauk A D 2012 Phys. Rev. A 86 053417
[10] Bian X B and Bandrauk A D 2010 Phys. Rev. Lett. 105 093903
[11] Bian X B and Bandrauk A D 2013 Appl. Sci. 3 267
[12] Feng L Q and Chu T S 2013 Commun. Comput. Chem. 1 11
[13] Paul P M, Clatterbuck T O, Lynga C, Colosimo P, Dimauro L F, Agostini P and Kulander K C 2005 Phys. Rev. Lett. 94 113906
[14] Sheehy B, Martin J D D, Dimauro L F, Agostini P, Schafer K J, Gaarde M B and Kulander K C 1999 Phys. Rev. Lett. 83 5270
[15] Ciappina M F, Acimovic S S, Shaaran T, Biegert J, Quidant R and Lewenstein M 2012 Opt. Express 20 26261
[16] Yavuz I, Ciappina M F, Chacon A, Altun Z, Kling M F and Lewenstein M 2016 Phys. Rev. A 93 033404
[17] Yavuz I, Tikman Y and Altun Z 2015 Phys. Rev. A 92 023413
[18] Zhong H, Guo J, Feng W, Li P C and Liu X S 2016 Phys. Lett. A 380 12 188
[19] Carrera J J and Chu S I 2007 Phys. Rev. A 75 033807
[20] Fromm D P, Sundaramurthy A, Schuck P J, Kino G and Moerner W E 2004 Nano Lett. 4 957
[21] Genov D A, Sarychev A K, Shalaev V M and Wei A 2004 Nano Lett. 4 153
[22] Fischer H and Martin O J F 2008 Opt. Express 16 9144
[23] Li K, Stockman M I and Bergman D J 2003 Phys. Rev. Lett. 91 227402
[24] Bharadwaj P and Novotny L 2007 Opt. Express 15 14266
[25] Schuck P J, Fromm D P, Sundaramurthy A, Kino G S and Moerner W E 2005 Phys. Rev. Lett. 94 017402
[26] Jones R J, Moll K D, Thorpe M J and Ye J 2005 Phys. Rev. Lett. 94 193201
[27] Muhlschlegel P, Eisler H J, Martin O J F, Hecht B and Pohl D W 2005 Science 308 1607
[28] Kamta G L and Bandrauk A D 2005 Phys. Rev. A 71 053407
[29] Kamta G L, Bandrauk A D and Corkum P B 2005 J. Phys. B: At. Mol. Opt. Phys. 38 L339
[30] Ben-Itzhak I, Gertner I, Heber O and Rosner B 1993 Phys. Rev. Lett. 71 1347
MITP/16-103 IFT-UAM/CSIC-17-010
December 30, 2023

Deformations, Moduli Stabilisation and Gauge Couplings at One-Loop

Gabriele Honecker^a, Isabel Koltermann^a and Wieland Staessens^b,c

^a PRISMA Cluster of Excellence, MITP & Institut für Physik (WA THEP), Johannes Gutenberg-Universität, 55099 Mainz, Germany
^b Instituto de Física Teórica UAM-CSIC, Cantoblanco, 28049 Madrid, Spain
^c Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain

^Gabriele.Honecker@uni-mainz.de, ^kolterma@uni-mainz.de, ^wieland.staessens@csic.es

Abstract

We investigate deformations of _2 orbifold singularities on the toroidal orbifold T^6/(_2×_6) with discrete torsion in the framework of Type IIA orientifold model building with intersecting D6-branes wrapping special Lagrangian cycles. To this aim, we employ the hypersurface formalism developed previously for the orbifold T^6/(_2×_2) with discrete torsion and adapt it to the (_2×_6×) point group by modding out the remaining _3 subsymmetry and the orientifold projection . We first study the local behaviour of the _3 × invariant deformation orbits under non-zero deformation and then develop methods to assess the deformation effects on the fractional three-cycle volumes globally. We confirm that D6-branes supporting USp(2N) or SO(2N) gauge groups do not constrain any deformation, while deformation parameters associated to cycles wrapped by D6-branes with U(N) gauge groups are constrained by D-term supersymmetry breaking.
These features are exposed in global prototype MSSM, Left-Right symmetric and Pati-Salam models first constructed in <cit.>, for which we here count the number of stabilised moduli and study flat directions changing the values of some gauge couplings. Finally, we confront the behaviour of tree-level gauge couplings under non-vanishing deformations along flat directions with the one-loop gauge threshold corrections at the orbifold point and discuss phenomenological implications, in particular on possible LARGE volume scenarios and the corresponding value of the string scale M_string, for the same global D6-brane models.
§ INTRODUCTION
Since the dawn of string phenomenology, toroidal orbifolds have played a prominent rôle in string model building <cit.>: they provide for exactly solvable conformal field theories, allow for supersymmetric compactifications and are capable of accommodating the necessary ingredients to construct chiral gauge theories. In the context of Type IIA orientifold model building with intersecting D6-branes, factorisable toroidal orbifolds come with factorisable special Lagrangian (sLag) three-cycles as underlying building blocks for such chiral gauge theories, see e.g. <cit.>.[Type IIA orientifold compactifications on non-factorisable toroidal orbifolds <cit.> have only recently been considered for model building purposes <cit.>.] More precisely, these fractional three-cycles are wrapped by (stacks of coincident) D6-branes, which support (non-)Abelian gauge theories on their worldvolumes. Consequently, the parameters characterising the gauge theory are related to geometric data associated to the three-cycles. The square of the tree-level gauge coupling, for instance, scales inversely with the volume of the three-cycle wrapped by the corresponding D6-brane <cit.>.
At the singular orbifold point, all exceptional three-cycles located at orbifold singularities have vanishing volumes, and the volume of a fractional three-cycle is simply (a fraction of) the volume of the bulk three-cycle inherited from the ambient six-torus. However, a thorough study of the four-dimensional effective field theory emerging from a Type IIA orientifold compactification requires considering a region in moduli space where the orbifold singularities have been resolved or deformed.[In addition to phenomenological considerations, the known prescriptions for identifying dual string theoretic descriptions via mirror symmetry to Type IIB orientifolds or via M-theory to E_8 × E_8 heterotic compactifications, see e.g. <cit.>, are, to the best of our knowledge, only valid for smooth Calabi-Yau backgrounds.] The resolution or deformation of such singular points will have undeniable geometric and physical consequences for the D6-branes wrapping them. First of all, one has to verify whether or not the sLag condition of the corresponding fractional three-cycle is preserved under the deformation. Whenever the deformation violates the sLag condition, supersymmetry is broken via the appearance of a Fayet-Iliopoulos D-term in the four-dimensional effective field theory; the deformation modulus is then bound to be stabilised at the singular orbifold point.
If a fractional three-cycle remains sLag under a particular deformation, its volume - and thereby also the associated inverse of the tree-level gauge coupling squared at M_string - is expected to alter with an increasing deformation along this flat direction.One can of course also deform a singularity in the toroidal orbifold at which none of the D6-branes are located, in which case the associated deformation modulus can take any vacuum expectation value (vev) without affecting the physics of the chiral gauge theory at leading order.When resolving orbifold singularities on a singular Calabi-Yau variety, one usually turns to the toolbox of algebraic geometry and toric geometry, see e.g. <cit.>, which would offer us the necessary techniques to resolve exceptional two- and four-cycles through blow-ups. Toric singularities and blow-up resolutions of divisor four-cycles happen to be part of the modus operandi for constructing chiral gauge theories on the Type IIB side <cit.> using fractional D3-branes located at the singularities or D7-branes wrapping the resolved four-cycles. However, in the case of Type IIA model building with fractional D6-branes on orbifolds with discrete torsion, the orbifold singularities have to be deformed rather than blown up, which forces us to consider different tools from algebraic geometry: by viewing two-tori as elliptic curves in the weighted projective space _112^2, a factorisable toroidal orbifold with discrete torsion can be described as a hypersurface in a weighted projective space, with its topology being a double cover of ^1 ×^1 ×^1. 
Building on this hypersurface formalism first sketched in <cit.> for the T^6/(_2 ×_2) orbifold with discrete torsion and extended to its T^6/(_2 ×_2 ×) and T^6/(_2 ×_6' ×) orientifold versions with underlying isotropic square <cit.> or hexagonal <cit.> two-tori, respectively, we focus here on the so far most fertile patch in the Type IIA orientifold landscape with rigid D6-branes <cit.>, the T^6/(_2 ×_6 ×) orientifold with discrete torsion and one rectangular and two hexagonal underlying two-tori. In this case, the _2^(1)-twisted sector conceptually differs from the _2^(2)- and _2^(3)-twisted sectors, necessitating separate discussions for the respective deformations and making the deformations of this toroidal orbifold more intricate than those of the other previously discussed orbifolds with discrete torsion. Upon embedding a toroidal orbifold with discrete torsion as a hypersurface in a weighted projective space with carefully chosen weights, (a subset of) sLag three-cycles can be constructed as the fixed loci under the anti-holomorphic involution contained in the orientifold projection , in a similar spirit as in <cit.>. The deformations in the hypersurface formalism allow for the description of exceptional and fractional three-cycles, besides the bulk three-cycles, by which the set of sLag three-cycles on the deformed toroidal orbifold can be immensely extended, all corresponding to calibrated submanifolds <cit.> with respect to the same holomorphic volume three-form Ω_3. It is exactly the presence of these fractional sLag three-cycles that makes toroidal orbifolds with discrete torsion so appealing for D6-brane model building. Contrary to a bulk sLag three-cycle, a fractional sLag three-cycle is not necessarily accompanied by an open string moduli space <cit.>, as it is (at least in the absence of an additional _3 symmetry) completely projected out by the _2×_2 point group.
The absence of open string deformation moduli ensures that the non-Abelian gauge group supported by a stack of D6-branes cannot be spontaneously broken by the displacement of a D-brane in that stack. On a more formal level, knowledge about the moduli space of sLag three-cycles is vital in the search for the mirror manifold <cit.> of the deformed toroidal orbifold. The absence of an open string moduli space for fractional three-cycles is expected to complicate this search, which makes studying the geometric characteristics of fractional sLag three-cycles and uncovering their relations to the closed string moduli space all the more essential.In this article, a first step in revealing those relations for the T^6/(_2 ×_6 ×) orientifold with discrete torsion is taken by studying the functional dependence of the fractional three-cycle volumes on the complex structure (deformation) moduli, whose vevs measure the volumes of the exceptional three-cycles. Through this connection, the viability of a non-zero deformation is assessed by virtue of the preserved sLag conditions of the fractional three-cycles away from the orbifold point, as mentioned before. The physical implications of these deformations for D6-brane model building are discussed in terms of potential Fayet-Iliopoulos terms and/or altering tree-level gauge coupling strength in the effective four-dimensional gauge theories resulting from the orientifold compactifications with D6-branes.Also Kähler moduli are expected to have a substantial influence on the effective four-dimensional gauge theories, as exhibited through their presence in the one-loop threshold corrections to the gauge couplings at the singular orbifold point, see e.g. <cit.> in the context of D6-branes. These gauge threshold corrections can be sizeable for specific anisotropic choices of two-torus volumes <cit.>, given by the vacuum expectation values of the (CP-even part of the) Kähler moduli. 
In the class of models under consideration, these sizeable gauge threshold corrections are able to lift the degeneracy of the tree-level gauge coupling strengths for distinct fractional D6-brane stacks wrapping the same bulk three-cycle. With a lifted degeneracy of the gauge couplings already at the singular orbifold point at one-loop, it becomes more conceivable to construct global intersecting D6-brane models with e.g. a very strongly coupled hidden gauge group, whose gaugino condensate forms a natural source for spontaneous supersymmetry breaking. Clearly, establishing the full moduli-dependence of the one-loop correction to the gauge coupling represents a conditio sine qua non for string model builders, both at and away from the singular orbifold point. This article is organised as follows: in section <ref>, we briefly review the hypersurface formulation for describing local deformations of T^6/(_2 ×_2 ×) singularities as discussed in <cit.> and then go on to discuss additional constraints imposed by the extra _3 symmetry of the T^6/(_2 ×_6 ×) orientifold. Special attention will be devoted to the sLag cycles used for particle physics model building. In section <ref>, additional subtleties in global deformations of T^6/(_2 ×_6 ×) singularities are discussed, and several prototype examples of global D6-brane models with particle physics spectra are examined. Section <ref> is devoted to the computation of the one-loop corrections to the gauge couplings at the orbifold point and the phenomenological implications of their specific geometric moduli dependences. Finally, section <ref> contains our conclusions and outlook. Additional technical details useful for the computation of deformations and one-loop corrections are relegated to appendices <ref>, <ref> and <ref>.
§ DEFORMING ORBIFOLD SINGULARITIES IN THE HYPERSURFACE FORMALISM
To start, we first briefly review the construction of fractional three-cycles as sums of toroidal and exceptional three-cycles stuck at orbifold singularities in section <ref>, in particular on the orientifold of phenomenological interest T^6/(_2 ×_6 ×). Then, we move on to reviewing Lagrangian (Lag) lines on two-tori of rectangular and hexagonal shape in the hypersurface formalism in section <ref>. In section <ref>, we first discuss deformations of _2 ×_2 singularities and afterwards impose relations among deformations due to the specific additional _3 symmetry of the _2 ×_6 action. As a final element, in section <ref> we discuss the general procedures allowing for the quantitative study of special Lagrangian (sLag) three-cycles on deformations of T^6/(_2 ×_6 ×) in the hypersurface formalism.
§.§ Reminiscing about three-cycles on the T^6/(_2 ×_6 ×) orientifold
The action of the orbifold group _2 ×_6 on the factorisable six-torus T^6 = T_(1)^2× T_(2)^2 × T_(3)^2 consists of a rotation of the complex coordinates z_k parametrising the respective two-torus T_(k)^2 with k∈{1,2,3}: θ^m ω^n: z_k → e^2 π i (m a_k + n b_k) z_k, with a⃗ = 1/2 (1,-1,0), b⃗ = 1/6(0,1,-1). Note that the point group _2 ×_6 is generated by the elements θ and ω, with θ generating the _2-factor acting on the four-torus T_(1)^2× T_(2)^2 and ω generating the _6-part acting on the four-torus T_(2)^2× T_(3)^2. As a direct product of two Abelian factors, each containing _2 as a (sub)group, the orbifold group allows for a global discrete torsion factor η=± 1 <cit.>, whose presence alters the amount of two- and three-cycles supported in the orbifold twisted sectors, as indicated in table <ref> listing the Hodge numbers per sector. In the absence of discrete torsion (η=1), the _2^(i) singularities can be resolved through a blow-up in the respective twisted sector.
In the presence of discrete torsion (η = -1), one has to resort to deformations of the _2^(i) singularities, yielding exceptional three-cycles located at the former _2^(i) fixed loci. The three-cycles in the _2^(i)-twisted sectors turn out to be useful tools with regard to particle physics phenomenology and D6-brane model building <cit.>, encouraging us to focus for the remainder of the article on the orbifold with discrete torsion.

Hodge numbers (h^11,h^21) per sector for the factorisable orbifold T^6/(_2 ×_6):

         | Untwisted | _2^(1): ω^3 | _2^(2): θω^3 | _2^(3): θ | _3: ω^2 | _6: ω  | _6': θω | _6': θω^2 | Total (h^11, h^21)
η = +1   |  (3, 1)   |   (6, 0)    |    (8, 0)    |  (8, 0)   | (8, 2)  | (2, 0) |  (8, 0) |  (8, 0)   | (51, 1+2)
η = -1   |  (3, 1)   |   (0, 6)    |    (0, 4)    |  (0, 4)   | (8, 2)  | (0, 2) |  (4, 0) |  (4, 0)   | (19, 15+2×2)

Table: Hodge numbers (h^11,h^21) per sector for the factorisable toroidal orbifold T^6/(_2 ×_6) with lattice configuration SU(2)^2× SU(3)× SU(3). In the absence of discrete torsion, the Hodge numbers match those of the orbifold T^6/(_2×_2), namely (h^11,h^21)=(51,3), but with a different distribution over twisted sectors. Considering the orbifolds with discrete torsion leads to a reduction of the initial 51 three-cycles on T^6/(_2×_2) to only 19 three-cycles on T^6/(_2×_6) due to the additional _3-action.

A first observation regarding the _6-action deals with the shape of the two-tori T_(2)^2 × T_(3)^2, whose underlying lattice is constrained to be (up to overall rescaling per two-torus) the root lattice of SU(3)× SU(3), i.e. both lattices are hexagonal, and the complex structures of these two-tori are fixed. Only the first two-torus, whose lattice configuration corresponds to (up to overall scaling) the root lattice of SU(2)^2, has an unfrozen complex structure modulus, matching the Hodge number h^21_ bulk = 1 for the _2 ×_6 orbifold.
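As a quick cross-check of the table, the per-sector entries can be summed and compared to the total Hodge numbers; a small Python snippet:

```python
# (h11, h21) per sector, ordered as in the table:
# untwisted, Z2^(1), Z2^(2), Z2^(3), Z3, Z6, Z6' (theta*omega), Z6' (theta*omega^2)
sectors = {
    +1: [(3, 1), (6, 0), (8, 0), (8, 0), (8, 2), (2, 0), (8, 0), (8, 0)],
    -1: [(3, 1), (0, 6), (0, 4), (0, 4), (8, 2), (0, 2), (4, 0), (4, 0)],
}

# column-wise sums of (h11, h21) over all sectors, per torsion choice
totals = {eta: tuple(map(sum, zip(*hs))) for eta, hs in sectors.items()}
print(totals)  # {1: (51, 3), -1: (19, 19)}
```

The sums reproduce (h^11,h^21) = (51,3) without and (19,19) with discrete torsion, as stated in the caption.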
The comparison with the orbifold T^6/(_2×_2) with discrete torsion shows that the additional _3-action reduces the number of three-cycles in the _2^(i) twisted sectors by triple identifications, and thereby also enforces the simultaneous deformation of the associated _2×_2 singularities, as we will discuss in detail in section <ref>. For now, we restrict ourselves to counting the (orbits of) singularities appearing in the various twisted sectors of the orbifold and to indicating how they relate to the Hodge numbers in table <ref>: * Three _2^(i=1,2,3)-twisted sectors generated by (ω^3, ω^3 θ, θ), respectively, where each _2^(i) sector comes with 16 fixed two-tori or fixed lines labelled by the points (αβ) with α, β∈{1,2,3,4} along T^4_(i)≡ T^2_(j)× T^2_(k), as depicted in figure <ref>. In the _2^(1)-twisted sector, the fixed point (11) on T_(2)^2× T_(3)^2 remains invariant under the orbifold action by ω, while the other fixed points recombine into orbifold-invariant orbits consisting of three fixed points each. More explicitly, the _6-action rotates the fixed points as 2→ 3 → 4 → 2 on T^2_(2) and as 2→ 4 → 3 → 2 on T^2_(3), which implies the following five orbits of _2^(1) fixed points: [(21),(31),(41)], [(12),(14),(13)], [(33),(42),(24)], [(22),(34),(43)] and [(44),(23),(32)]. On the orbifold T^6/(_2×_6) without discrete torsion, these _2^(1) fixed point orbits contribute to the h^11 Kähler moduli, while they contribute to the h^21 complex structure moduli for the orbifold set-up with discrete torsion when tensored with a one-cycle on T_(1)^2. In the _2^(j=2,3)-twisted sectors, the four fixed points (α 1) on T_(1)^2 × T_(k=3,2)^2 are invariant under the _6-action, while the other twelve fixed points recombine into four _6-invariant orbits of three fixed points each: [(α 2),(α 3),(α 4)].
On the orbifold without discrete torsion, all fixed point orbits contribute to the counting of h^11, while on the orbifold with discrete torsion only the non-trivial _6-invariant orbits tensored with a one-cycle on T_(j)^2 (which is also rotated under _6) contribute to h^21. * One _3-twisted sector generated by ω^2 with nine fixed two-tori labelled by the fixed points (a b) with a,b ∈{1,5,6} on T_(2)^2× T_(3)^2. The fixed points are subject to the _2×_2 orbifold action mapping 5↔ 6 and 1↺, such that the _3 fixed points along T^4_(1) recombine into four invariant orbits: (11), [(15),(16)], [(51),(61)] and [(55),(56),(65),(66)]. As detailed in <cit.>, the _3-twisted sector does not feel the discrete torsion phase η=± 1, such that in any case each fixed point orbit supports two two-cycles per T^4_(1)/_3 singularity, and three-cycles arise from tensoring the _2 ×_2 quadruplet [(55),(56),(65),(66)] on T^4_(1) with one-cycles on T^2_(1). * One _6-twisted and two _6'-twisted sectors generated by (ω, θω, θω^2), respectively. The _6-twisted sector associated to ω comes with one fixed two-torus or fixed line located at the singularity (11) on T_(2)^2× T_(3)^2. As detailed in <cit.>, the discrete torsion phase acts non-trivially in this sector, which accounts for h^11=2 in the case of η=+1 and h^21=2 in the case of η=-1, in analogy to the _2^(1)-twisted sector. The other two _6^' actions have a different structure: the one generated by θω yields twelve fixed points labelled by (α a 1), and the last one generated by θω^2 comes with twelve fixed points labelled by (α 1 a), where α∈{1,2,3,4} and a∈{1,5,6} in both cases. Under the _2×_2 orbifold action, the twelve _6^' fixed points in the θω sector recombine into eight orbits [(α 1 1)]_θω, [(α 5 1), (α 6 1)]_θω.
In the absence of discrete torsion (η=+1), each orbit supports a two-cycle and its dual four-cycle, while in the presence of non-trivial discrete torsion (η=-1), only the non-trivial orbits [(α 5 1), (α 6 1)]_θω each support one two-cycle and its dual four-cycle, see <cit.> for details. The second _6^'-twisted sector is obtained by permutation of the two-torus indices, T^2_(2)↔ T^2_(3). Now that we have a clear understanding of the untwisted and twisted sectors and how they contribute to the Hodge numbers, we can infer the different types of orbifold-invariant three-cycles supported on the orbifold T^6/(_2 ×_6) with discrete torsion: (1) Bulk three-cycles are orbifold-invariant products of three one-cycles, where each one-cycle extends along a different two-torus T_(i)^2. The homology class of each one-cycle is specified by two integer-valued co-prime torus wrapping numbers (n^i,m^i) w.r.t. the basis one-cycles π_2i-1, π_2i of each two-torus T_(i)^2, see figure <ref> for the conventional choice of basis used in this article. The orbifold-invariant products of the basis one-cycles combine into four basis bulk three-cycles (ρ_1, ρ_2, ρ_3, ρ_4), matching the Betti number b^ bulk_3 = 2 (h^21_ bulk + 1) = 4 counting the number of basis three-cycles inherited from the factorisable six-torus (T^2)^3 after _2×_6 identifications. Generic bulk three-cycles can then be expressed in terms of these four basis bulk three-cycles: Π^ bulk = n^1 (n^2 n^3- m^2 m^3) ρ_1 + n^1 (n^2 m^3 + m^2 n^3+m^2m^3) ρ_2 + m^1 (n^2 n^3- m^2 m^3) ρ_3 + m^1 (n^2 m^3 + m^2 n^3+m^2m^3) ρ_4. (2) Exceptional three-cycles are orbifold-invariant products of a one-cycle on the _2^(i)-invariant two-torus T_(i)^2 with an exceptional divisor e_αβ^(i) located at the _2^(i) fixed points (αβ) along the four-torus T^4_(i). The _2^(i) fixed points can be resolved by gluing in a two-sphere per singularity.
The _6-invariant products of the basis one-cycles with the exceptional divisors yield twelve basis exceptional cycles (ϵ_λ^(1), ϵ̃_λ^(1)) in the _2^(1)-twisted sector with λ∈{0,1,2,3,4,5} and 2 × 8 basis exceptional cycles (ϵ_α^(k), ϵ̃_α^(k)) in the _2^(k=2,3)-twisted sectors with α∈{1,2,3,4}. The dimensionality of the full set of basis exceptional three-cycles expected from all _2^(i)-sectors combined matches the Betti-number b^_2_3 = 2 h__2^21 = 2 ·( 6 + 2 × 4)= 28. Furthermore, the _6- and _3-twisted sectors also yield b^_6+_3_3 = 2 · (2+2)=8 exceptional three-cycles located at the ω and ω^2 fixed points along T_(2)^2× T_(3)^2 as detailed above. These latter basis three-cycles need to be taken into account when searching for a unimodular basis of the full three-cycle lattice, but they do not contribute to the standard CFT constructions of Type IIA/ΩR orientifold models <cit.>; in particular, they are not expected to contribute to the open string one-loop annulus amplitude <cit.>. They therefore require no further attention on our part, and we shall only focus on the exceptional three-cycles that can be expressed in terms of the _2^(i) exceptional basis three-cycles. (3) Fractional three-cycles are linear combinations of some bulk three-cycle and several exceptional three-cycles. When a bulk three-cycle passes through the _2^(i)-fixed points and represents its own _2-orbifold image, one has to add the appropriate set of exceptional three-cycles (weighted with appropriate sign factors) in order to form a closed fractional three-cycle.
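As a cross-check of this bookkeeping, the sector-by-sector contributions quoted above add up to the full third Betti number; a short sketch using the η=-1 Hodge numbers of the text:

```python
# h^21 contributions for eta = -1: bulk, the three Z2^(i) sectors, and Z6 + Z3
h21 = {"bulk": 1, "Z2^(1)": 6, "Z2^(2)": 4, "Z2^(3)": 4, "Z6+Z3": 4}

b3_bulk = 2 * (h21["bulk"] + 1)                                # 4 basis bulk cycles
b3_Z2 = 2 * (h21["Z2^(1)"] + h21["Z2^(2)"] + h21["Z2^(3)"])    # 28 exceptional cycles
b3_Z6Z3 = 2 * h21["Z6+Z3"]                                     # 8 cycles at Z6/Z3 points

b3 = b3_bulk + b3_Z2 + b3_Z6Z3
print(b3_bulk, b3_Z2, b3_Z6Z3, b3)        # 4 28 8 40
assert b3 == 2 * (sum(h21.values()) + 1)  # consistency: b3 = 2 (h^21 + 1)
```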
As such, a fractional three-cycle can be expressed as Π^ frac = 1/4Π^ bulk +1/4∑_i=1^3 Π^_2^(i) =1/4Π^ bulk + 1/4∑_i=1^3 ∑_λ(x^(i)_λϵ_λ^(i) + y^(i)_λϵ̃_λ^(i)), where the integer-valued exceptional wrapping numbers (x^(i),y^(i)) are constructed from the torus wrapping numbers (n^i,m^i) along the two-torus T_(i)^2 weighted by sign factors associated to the discrete _2^(i)-eigenvalues ± 1 and to (-1) exponentiated by the discrete Wilson-lines (τ^j,τ^k) ∈{0,1}. The explicit form of (x^(i),y^(i)) is constrained by the position of the two-cycle on T^4_(i) set by the discrete shift parameters (σ^j, σ^k) ∈{0,1}, as detailed in table 36 of <cit.>. The sum over λ runs over at most four different values in the _2^(1)-twisted sector with λ∈{ 0,1,2,3,4,5} and two values in the _2^(2,3)-twisted sectors with α∈{1,2,3,4 } for the orbifold T^6/(_2 ×_6). A detailed discussion of three-cycles on the orbifold T^6/(_2×_6) can be found in <cit.>, while their prospects for intersecting D6-brane model building have been thoroughly investigated in <cit.>. For instance, phenomenologically viable models with three chiral generations were identified in abundance on this orbifold - a feature that can be traced back to the _3-factor in the orbifold group, in analogy to other orbifolds with a _3-factor within the point group <cit.>. Type IIA string compactifications on T^6/(_2×_6) preserve N=2 supersymmetry in the closed string sector, which can be broken to a phenomenologically more appealing N=1 supersymmetry by including an orientifold projection consisting of the worldsheet parity Ω, a left-moving fermion number projection (-)^F_L and an anti-holomorphic involution R.
The fixed planes under the involution R combine into four inequivalent orbits under the _6-action, corresponding to the O6-planes ΩR and ΩR_2^(i=1,2,3), respectively, as listed in table <ref> for the aAA-lattice configuration.[The anti-holomorphic involution R also constrains the shape of the two-torus lattices and limits the orientation of each lattice w.r.t. the orientifold-invariant direction to two invariant orientations: A or B for a hexagonal lattice and a or b for a rectangular lattice. Through a non-supersymmetric rotation of the lattices, the a priori six independent lattice configurations can be reduced <cit.> to two physically distinct ones: the aAA and bAA lattices. From the model building perspective with intersecting D6-branes, the aAA-lattice configuration turned out <cit.> to be the most fruitful background allowing for global three-generation MSSM, Left-Right (L-R) symmetric and Pati-Salam (PS) models.] Each of the O6-planes carries RR-charge whose sign is denoted by η_ΩR(_2^(i))∈{± 1}, and worldsheet consistency of the Klein-bottle amplitude relates them to the discrete torsion parameter <cit.>: η = η_ΩR∏_i=1^3 η_ΩR_2^(i). This implies that at least one of the O6-planes is exotic, with the sign of its RR-charges opposite w.r.t. the other O6-planes. Anticipating the phenomenologically appealing global models discussed in sections <ref> and <ref>, we select the ΩR_2^(3)-plane as the exotic O6-plane, i.e. η_ΩR_2^(3) = -1.[The choice η_ΩR_2^(2)=-1 is equivalent upon permutation of T^2_(2)↔ T^2_(3), while there exists a second inequivalent choice of η_ΩR=-1 allowing for supersymmetric solutions to the RR tadpole cancellation conditions.] The absence of twisted sector contributions in the tree-channel for the Klein bottle and Möbius strip amplitudes indicates that the sum over all O6-planes corresponds topologically to a (fraction of a) pure bulk three-cycle.
As a consequence, D6-branes wrapping fractional three-cycles should be chosen such that the sum of their bulk three-cycles cancels the RR-charges of the O6-planes, while the _2^(i)-twisted exceptional three-cycle parts should sum to zero separately for each i ∈{1,2,3}, in order to ensure vanishing RR tadpoles. Note that the basis bulk and exceptional three-cycles decompose into ΩR-even and ΩR-odd three-cycles under the orientifold projection, depending on the choice of the exotic O6-plane, as can be deduced from table <ref>.

[ R-invariant planes on T^6/(_2×_6×ΩR):
O6-plane | Torus wrapping numbers | RR-charge | Global models
ΩR | (1,0;1,0;1,0) | η_ΩR | +1
ΩR_2^(1) | (1,0;-1,2;1,-2) | η_ΩR_2^(1) | +1
ΩR_2^(2) | (0,1;1,0;1,-2) | η_ΩR_2^(2) | +1
ΩR_2^(3) | (0,1;1,-2;1,0) | η_ΩR_2^(3) | -1 ]
O6PlanesZ2Z6: O6-planes on the aAA lattice of T^6/(_2×_6×ΩR) with discrete torsion (η=-1). The last column indicates the regular/exotic sign of the RR-charges for the global models discussed in sections <ref> and <ref>.

[ Orientifold images of basis three-cycles on T^6/(_2 ×_6×ΩR) with η= -1:
Bulk cycles: ρ_1 ↦ρ_1, ρ_2 ↦ρ_1 - ρ_2, ρ_3 ↦ -ρ_3, ρ_4 ↦ρ_4-ρ_3;
_2^(1) twisted sector: ϵ^(1)_λ↦ - η_(1) ϵ^(1)_λ and ϵ̃^(1)_λ↦η_(1) ϵ̃^(1)_λ for λ∈{0,1,2,3}, while ϵ^(1)_4 ↦ - η_(1) ϵ^(1)_5, ϵ̃^(1)_4 ↦η_(1) ϵ̃^(1)_5 and ϵ^(1)_5 ↦ - η_(1) ϵ^(1)_4, ϵ̃^(1)_5 ↦η_(1) ϵ̃^(1)_4;
_2^(l) twisted sectors (l = 2,3): ϵ^(l)_α↦ - η_(l) ϵ^(l)_α and ϵ̃^(l)_α↦η_(l) ( ϵ̃^(l)_α -ϵ^(l)_α ) for α∈{1,2,3,4}. ]
Z2Z6OrientifoldExceptionalCycles: Orientifold images of the basis bulk and _2^(k) exceptional three-cycles for the aAA lattice configuration, depending on the choice of the exotic O6-plane orbit with sign factor η_(k)≡η_ΩRη_ΩR_2^(k).

In order for the D6-branes to preserve the same N=1 supersymmetry, they are required to wrap special Lagrangian (sLag) three-cycles Π: J_(1,1)|_Π =0, Re(Ω_3)|_Π > 0, Im(Ω_3)|_Π =0. Three-cycles satisfying condition (<ref>), where the pullback of the Kähler (1,1)-form J_(1,1) w.r.t. the three-cycle worldvolume vanishes, are called Lagrangian (Lag) cycles.
It is straightforward to check that the (factorisable) bulk three-cycles satisfy this condition. Three-cycles satisfying condition (<ref>) are calibrated w.r.t. the (real part of the) holomorphic volume form Ω_3, deserving the epithet special. At the orbifold point, the condition (<ref>) reduces to constraints on the torus wrapping numbers and the bulk complex structure moduli. Deforming the background away from the orbifold point can yield an exceptional three-cycle with non-vanishing volume, which no longer satisfies the special condition; supersymmetry can then only be maintained when the volume of such an exceptional three-cycle vanishes, in other words when the twisted complex structure modulus is stabilised at vanishing vacuum expectation value (vev). Explicit examples of this phenomenon will be discussed in section <ref>. For the sake of completeness regarding the discussion of geometric moduli on the ΩR orientifold with discrete torsion, we also list the counting of Kähler moduli and closed string vectors on the aAA lattice in table <ref>. The counting on the inequivalent bAA lattice can be found in table 46 of <cit.>.

[ Hodge numbers (h^11_+,h^11_-) per sector for the factorisable orientifold T^6/(_2 ×_6 ×ΩR):
Sector: Untwisted | _2^(1) (ω^3) | _2^(2) (θω^3) | _2^(3) (θ) | _3 (ω^2) | _6 (ω) | _6' (θω) | _6' (θω^2) | Total
η= +1: (0;3) | (1;5) | (0;8) | (0;8) | (0;8) | (0;2) | (0;8) | (0;8) | (1;50)
η= -1: (0;3) | (0;0) | (0;0) | (0;0) | (0;8) | (0;0) | (2(1+η_(2));2(1-η_(2))) | (2(1+η_(3));2(1-η_(3))) | (4+2(η_(2)+η_(3)); 15-2(η_(2)+η_(3))) ]
Z2Z6HodgeSplitting: Splitting of the Hodge number h^11 into ΩR-even and ΩR-odd parts on the aAA lattice. h^11_- counts the number of Kähler moduli on the orientifold, while h^11_+ counts the number of closed string vectors. The models in sections <ref> and <ref> obey η_(2)=1=-η_(3).

It is noteworthy that for the phenomenologically interesting choice of exotic _2^(3)-plane, i.e.
η_(2)=-η_(3)=1, the θω-twisted sector does not contain any Kähler moduli, i.e. h^11_-=0. The orientifold projection thus removes the geometric moduli in this sector required for resolving the _6^' singularities, and a full resolution and deformation of the toroidal orbifold background to a smooth Calabi-Yau threefold is not possible in the presence of the ΩR orientifold action on Type IIA string theory. §.§ Lagrangian lines on the elliptic curve in the hypersurface formalism At the orbifold point, the geometric engineering method for D6-brane models on T^6/_2N or T^6/(_2 ×_2M) backgrounds reviewed above formally uses exceptional divisors at _2-singularities and their topological intersection numbers, even though their volumes are set to zero, or in other words the associated twisted complex structure moduli have vanishing vevs. When moving away from the orbifold point into the Calabi-Yau moduli space by deforming the _2-singularities, we have to use an extended toolbox of algebraic geometry and embed the orbifold as a hypersurface in an ambient toric space. The first step in this process consists in reformulating the two-tori as elliptic curves in the weighted complex projective space _112^2 and describing Lag lines on the elliptic curves. Thus, we introduce the coordinates (x,v,y) with weights (1,1,2) as the homogeneous coordinates of the projective space _112^2 and describe a two-torus as a hypersurface within _112^2. More explicitly, a two-torus corresponds to an elliptic curve in _112^2, which forms the zero locus of a polynomial f of degree 4: f ≡ -y^2 +F(x,v) = 0, F(x,v) = 4 v x^3 - g_2 v^3 x - g_3 v^4, where we choose the Weierstrass form for the elliptic curve. There exists a _2 reflection symmetry acting only on y→ -y, and its fixed points correspond to the roots of the polynomial F(x,v).
By expanding F(x,v) in terms of its (finite) roots ϵ_2, ϵ_3 and ϵ_4, F(x,v) = 4v ( x - ϵ_2 v) ( x - ϵ_3 v) ( x - ϵ_4 v), the coefficients g_2 and g_3 are easily related to the roots: g_2 = - 4 ( ϵ_2 ϵ_3 + ϵ_2 ϵ_4 + ϵ_3 ϵ_4) and g_3 = 4 ϵ_2 ϵ_3 ϵ_4, with the roots satisfying the condition ϵ_2 + ϵ_3 + ϵ_4 = 0. The fourth root ϵ_1 located at x = ∞ (in the v=1 patch) represents the _2 fixed point at the origin. The coefficients g_2 and g_3 are on the other hand uniquely determined by the torus lattice and its complex structure parameter τ, such that we can limit ourselves to those torus lattices relevant for the orbifold T^6/(_2×_6): (1) a-type lattices or untilted (rectangular) tori with Re(τ) = 0: generically the roots ϵ_α are all real and can be ordered as ϵ_4<ϵ_3<ϵ_2. A square torus with τ = i represents a special case for which g_3=0, ϵ_2 = - ϵ_4 = 1 and ϵ_3=0. (2) b-type lattices or tilted tori with Re(τ) ≠ 0: generically the roots ϵ_2 and ϵ_4 are related by complex conjugation, ϵ_2 = ϵ̅_4≡ξ, while ϵ_3 = - 2Re(ξ) is a real parameter. A hexagonal torus with τ = e^i π/3 (cf. figure <ref> (b)) forms a special case with g_2 = 0 and ξ = e^2π i/3, for which the elliptic curve exhibits an additional _3 symmetry x/v→ e^2π i/3 x/v. This _3 symmetry is in correspondence with a _3 subgroup acting on the hexagonal two-torus lattice, suggesting that the two-tori T_(2)^2 and T_(3)^2 are perfectly described by this type of elliptic curve. A pictorial representation of a square untilted and a hexagonal torus lattice with their respective roots is given in figure <ref>. The two-torus T_(1)^2 is not affected by the _3 subgroup of the _2×_6 point group, hence its torus lattice can in principle be either untilted (a-type lattice) or tilted (b-type lattice).
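The relations between (g_2, g_3) and the roots can be checked numerically for both special lattices; a quick sketch (the helper function is our own):

```python
import cmath

def weierstrass_coefficients(e2, e3, e4):
    """g2, g3 of F(x,v) = 4v (x - e2 v)(x - e3 v)(x - e4 v), assuming e2 + e3 + e4 = 0."""
    g2 = -4 * (e2 * e3 + e2 * e4 + e3 * e4)
    g3 = 4 * e2 * e3 * e4
    return g2, g3

# Square torus (tau = i): real roots e2 = 1, e3 = 0, e4 = -1  ->  g3 = 0
print(weierstrass_coefficients(1, 0, -1))      # (4, 0)

# Hexagonal torus (tau = e^{i pi/3}): e2 = conj(e4) = xi = e^{2 pi i/3}, e3 = -2 Re(xi)
xi = cmath.exp(2j * cmath.pi / 3)
g2, g3 = weierstrass_coefficients(xi, -2 * xi.real, xi.conjugate())
print(abs(g2) < 1e-9, abs(g3 - 4) < 1e-9)      # True True, i.e. g2 = 0 as stated
```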
As a tilted two-torus T_(1)^2 does not provide for any (known) phenomenologically appealing global intersecting D6-brane models <cit.>, we confine ourselves to an untilted T_(1)^2 and simplify the set-up even more by choosing a square two-torus when studying deformations. This simplification is justified by the fact that for the choice of an exotic _2^(3)-plane, the (bulk) RR tadpole cancellation conditions can only be solved in a supersymmetric way if all D6-branes extend along π_1 on the a-type lattice, see section <ref> for several examples. Such configurations are supersymmetric for any value of the complex structure parameter Im(τ)>0 on T^2_(1). The full map between a two-torus and an elliptic curve is given by Weierstrass' elliptic function ℘(z), mapping bijectively the holomorphic coordinate z on the two-torus with modular parameter τ to the elliptic curve with coefficients g_2 and g_3. It is easy to see that Weierstrass' elliptic function ℘(z) satisfies the hypersurface equation (<ref>) through the identification ℘(z) = x/v, ℘'(z) = y/v^2, which reduces to a differential equation on ℘(z). One-cycles on a two-torus T_(i)^2 were introduced in the previous section, parameterised by the torus wrapping numbers (n^i,m^i) w.r.t. the basis one-cycles. In order to discuss Lag lines on an elliptic curve, we introduce an anti-holomorphic involution σ acting on the homogeneous coordinates as follows: σ: ([ x; v ]) ⟼A ([ x̅; v̅ ]), y ⟼ e^i βy̅, with A ∈ GL_2(ℂ). For this action to be an involution, the matrix A has to satisfy the condition A A̅ = 1. The involution also has to be a symmetry of the elliptic curve, which boils down to the following condition σ( F(x,v) ) = e^2i β F( x,v). Solving both conditions allows one to extract the unequivocal forms of the various anti-holomorphic involutions <cit.>. Afterwards, one can determine the fixed loci for each individual anti-holomorphic involution, which will constitute only a subset of all Lag lines on the elliptic curve <cit.>.
Fortunately for us, the Lag lines defined as fixed loci under σ are in one-to-one correspondence with the torus one-cycles used as building blocks for global intersecting D6-brane models, as can be checked explicitly by virtue of Weierstrass' elliptic function ℘(z). Distinguishing between square and hexagonal lattices leads to the following classification of Lag lines: (1) untilted square torus: we distinguish between four one-cycles aX (with X = I, II, III, IV) passing through two roots ϵ_α and one-cycles cX (with X = I, II) not passing through any of the roots. The first type of one-cycles will serve as fractional cycles, while the latter type of one-cycles remain bulk cycles once the orbifold T^6/(_2×_6) is modelled as a hypersurface in an ambient toric space. A full overview of these Lag lines on the square torus and the relations to the roots ϵ_α is offered in table <ref>; their positions in the x-plane are depicted in figure <ref> (a) for the v=1 coordinate patch. (2) hexagonal torus: here we can identify the one-cycles bX (with X = I, II, III, IV) passing through two roots ϵ_α and corresponding to σ-involution invariant directions. Due to the _3 symmetry, we also find the images of these one-cycles rotated by ±2π/3, tripling the number of individual one-cycles. A full list of Lag lines is given in table <ref>, while figure <ref> (b) shows their position in the x-plane for the v=1 coordinate patch and clearly exhibits the _3 symmetry. The holomorphic one-form Ω_1 defined on the elliptic curve descends from the following holomorphic two-form defined on the ambient space _112^2: Ω_1 = 1/(π i) ∫_γ ( x dv ∧ dy - v dx ∧ dy + y dx∧ dv )/f, where we divided by the polynomial f defined on the l.h.s. of equation (<ref>) to obtain a well-defined scale-invariant two-form on _112^2.
The integral is taken over a curve γ around the singular region f=0, such that we can apply Cauchy's residue theorem in a suitable patch to obtain the expression: Ω_1 = ( v dx - x dv )/y |_f=0. The expression for y in terms of x and v follows by imposing the hypersurface equation f=0 and choosing one branch of the square root. With this prescription, the holomorphic one-form (<ref>) allows us to uncover the calibration form for each of the Lag lines identified above. Roughly speaking, Lag cycles with X=even are calibrated w.r.t. Re(Ω_1), while Lag lines with X=odd have Im(Ω_1) as calibration form. §.§ T^6/(_2 ×_6 ×ΩR) in the hypersurface formalism Describing deformations of the orbifold T^6/(_2 ×_6) requires us to adopt a hypersurface formalism in which T^6/(_2 ×_6) with discrete torsion is embedded into an appropriate toric space. In a first step, we will review in section <ref> how the deformations of T^6/(_2 ×_2) with discrete torsion are realised in the hypersurface formalism, after which we mod out the remaining _3 symmetry within the _2 ×_6 point group in section <ref> to end up with the hypersurface formalism describing the deformations of T^6/(_2 ×_6). Finally, we impose additional constraints due to the orientifold involution σ_ R. §.§.§ Hypersurface formalism for T^6/(_2 ×_2) with discrete torsion The factorisable orbifold T^6/(_2 ×_2) with discrete torsion can be mapped <cit.> to the direct product of three distinct elliptic curves modded out by the orbifold group _2 ×_2, whose action only keeps the _2 ×_2-invariant subring of the ring of polynomials.
The _2 ×_2-invariant “monomials” are subject to a single equation describing a hypersurface in the toric space parametrised by the coordinates (x_i,v_i,y)_i=1,2,3 with weights q_i according to the weight diagram in table <ref>. Indeed, the orbifold T^6/(_2 ×_2) with discrete torsion or its deformations are described by the zero locus of the polynomial f(x_i,v_i,y) whose most general form reads: f ≡-y^2 + F_(1)(x_1,v_1) F_(2)(x_2,v_2) F_(3)(x_3,v_3) - ∑_i≠ j ≠ k ≠ i∑_α,β=1^4 ε^(i)_αβ F_(i)(x_i,v_i) δ F_(j)^α (x_j,v_j) δ F_(k)^β (x_k,v_k) + ∑_α,β,γ =1^4 ε_αβγδ F_(1)^α (x_1,v_1) δ F_(2)^β (x_2,v_2) δ F_(3)^γ (x_3,v_3). A few comments regarding this polynomial are in order: * The polynomials F_(i)(x_i,v_i) correspond to the homogeneous polynomials of degree four defining the two-torus T_(i)^2 as in equation (<ref>) or its rewritten form (<ref>) and encode information about the complex structure of T^2_(i). As each two-torus has its own complex structure, we have a priori three bulk complex structure moduli in total (with the number later on being reduced by imposing an extra _3 symmetry, cf. section <ref>). Setting all the deformation parameters to zero, ε_αβ^(i) = 0 = ε_αβγ, corresponds to the orbifold point in the complex structure moduli space, with the roots of F_(i)(x_i,v_i) determining the positions of the _2^(k≠ i) fixed points. * The deformation polynomials δ F_(i)^α(x_i,v_i) are also homogeneous polynomials of degree four and have the same roots as F_(i)(x_i,v_i) up to the α-th zero.
Thus, δ F_(i)^α(x_i,v_i) allows one to deform the _2-fixed point associated to the α-th root. * Deformations of the form δ F_(i)^α F_(j) F_(k) with (ijk) a cyclic permutation of (123) are not explicitly considered in equation (<ref>) as they correspond to deformations of the complex structure for the two-torus T_(i)^2, up to PSL(2,ℂ) transformations acting on (x_i,v_i).[The PSL(2,ℂ) transformations are able to eliminate three complex parameters, leaving exactly one independent parameter per two-torus.] Hence, their coefficients correspond to untwisted moduli and their CFT counter-parts are given by the three truly marginal operators from the untwisted sector in the associated N=(2,2) super-conformal field theory, following the construction prescriptions in <cit.>. * The parameter ε_αβ^(i) allows for the deformation of the _2^(i) singularity with index (αβ) on the four-torus T^4_(i)≡ T^2_(j)× T^2_(k) with (ijk) some permutation of (123). Counting the number of distinct deformation parameters, one finds 3 × 4 × 4 = 48 parameters in total, one for each _2^(i) singularity. The number 48 matches exactly the number of truly marginal operators in the _2^(i) twisted sectors of the corresponding N=(2,2) SCFT. * Within the set of possible deformations of T^6/(_2 ×_2), one also observes the deformations associated to the parameters ε_αβγ. The total number of these parameters amounts to 4× 4× 4 = 64 and is in one-to-one correspondence with the number of _2×_2 fixed points on the orbifold. However, these parameters do not represent independent deformations, but rather depend on the complex structure deformations ε_αβ^(i) from the _2^(i) twisted sectors, such that (at most) 64 conifold singularities remain and cannot be deformed away.
This observation is supported <cit.> by the absence of truly marginal operators in the N=(2,2) SCFT corresponding to ε_αβγ-deformations.[Contrary to the blow-up procedure, where blowing up the co-dimension two singularities in the _2^(i) twisted sectors also eliminates the co-dimension three singularities on T^6/(_2×_2) without discrete torsion, the deformation procedure does not automatically lead to the resolution of the 64 _2×_2 fixed points on T^6/(_2×_2) with discrete torsion.] Thus, whether or not conifold singularities are present can only be assessed through the geometric description of the deformed orbifold in the hypersurface formalism in terms of the independent deformation parameters ε_αβ^(i). In order to determine the holomorphic volume form Ω_3 on T^6/(_2 ×_2) with discrete torsion, we extend the philosophy from above that allowed us to identify the appropriate hypersurface equation (<ref>). More explicitly, we consider the wedge product of three one-forms Ω_1, one for each two-torus T_(i)^2 as defined in equation (<ref>), and mod out the _2 ×_2 symmetry to obtain the following (simplified) expression (in the v_i=1 patch): Ω_3 = ( dx_1∧ dx_2 ∧ dx_3 )/y(x_i) |_f=0, up to an overall normalisation constant and possible phase. The expression for y in terms of x_i follows by imposing the hypersurface equation f = 0 on the defining equation (<ref>) and fixing a branch cut for the square root. A thorough analysis of Lag lines on the (deformed) orbifold T^6/(_2 ×_2) on square tori was the subject of <cit.>, and further comments and extensions to products of three hexagonal tori can be found in <cit.>. As is well known, the orbifold T^6/(_2 ×_2) with discrete torsion does not support exceptional two-cycles, implying the absence of twisted sector blow-up modes. If one wants to blow up the _2^(i) singularities rather than deform them, one ought to look at the version of the orbifold without discrete torsion, see e.g. <cit.>.
Physical implications of resolving the orbifold singularities through blow-ups have been investigated in <cit.>. §.§.§ Hypersurface formalism for orbifolds with additional _3 ×ΩR action Setting up the hypersurface formalism for T^6/(_2×_6) with discrete torsion now consists in acting with the _3-subgroup generated by 2b⃗=1/3(0,1,-1) on the hypersurface formulation of T^6/(_2×_2) with discrete torsion, such that the resulting hypersurface polynomial is invariant under the _3-symmetry.[Notice that in <cit.> a different T^6/(_2 ×_6') orbifold with the _6' factor generated by b⃗^ '=1/6(-2,1,1) was considered. The analysis in that case with _2 ×_6' point group was simpler due to all three two-tori being of hexagonal shape and all three _2^(i)-twisted sectors being equivalent (up to a relative sign factor in the orientifold projection if one of the ΩR_2^(j)-planes is chosen as the exotic one).] The _3-action generated by ω^2 will in the first place restrict the form of the homogeneous polynomials F_2 and F_3, while the form of F_1 remains generic as in equation (<ref>).
Anticipating the torus lattice configurations for the global models in section <ref>, we consider the first two-torus to be a square untilted two-torus, such that the homogeneous polynomials are given by: [F_(1) (x_1, v_1) = 4 v_1 x_1 (x_1^2 -v_1^2),;F_(2) (x_2, v_2) =4 v_2 (x_2^3 - v_2^3),;F_(3) (x_3, v_3) =4 v_3 (x_3^3 - v_3^3).; ] Evidently, the _3-action also constrains the form of the deformation polynomials δ F_(j=2,3)^α, while the deformation polynomials δ F_(1)^α are shaped by the untilted square lattice choice for the two-torus T_(1)^2:[δ F_(1)^1= x_1^2 (x_1^2 - v_1^2),δ F_(j)^1= x_j (x_j^3 - v_j^3), for j=2,3;δ F_(1)^2=v_1 x_1(x_1 + v_1)^2,δ F_(j)^2=v_j^2 (v_j - x_j) (v_j - ξ x_j) ,;δ F_(1)^3= - v_1^2 (x_1^2 - v_1^2),δ F_(j)^3=v_j^2 (v_j - ξ x_j) (v_j - ξ^2 x_j) ,;δ F_(1)^4= -v_1 x_1 (x_1 -v_1)^2,δ F_(j)^4=v_j^2 (v_j - x_j) (v_j - ξ^2 x_j) .;]By virtue of the Weierstrass' elliptic function, one can easily deduce that the _3-subgroup also acts on the homogeneous coordinates x_i as follows:[Note that the holomorphic three-form defined in equation (<ref>) remains invariant under the _3-symmetry.]ω^2: (x_1,x_2,x_3) ↦ (x_1, ξ x_2, ξ^2 x_3),with ξ = e^2π i/3,which leaves the homogeneous polynomials F_(i)(x_i,v_i) invariant, but forces the deformation polynomials to transform as follows:[ω^2: δ F_(1)^α↦δ F_(1)^α∀ α∈{1,2,3,4},;δ F_(2)^1 ↦ξδ F_(2)^1, δ F_(2)^2 ↦δ F_(2)^3 ↦δ F_(2)^4 ↦δ F_(2)^2,; δ F_(3)^1 ↦ξ^2 δ F_(3)^1, δ F_(3)^2 ↦δ F_(3)^4 ↦δ F_(3)^3 ↦δ F_(3)^2.; ]Keeping in mind these transformation properties, we can deduce that the linear combination δ F_(j)^2+δ F_(j)^3+δ F_(j)^4 = 3 v_j^4 represents a_3-invariant polynomial for j=2,3. However, a simple coordinate transformation on x_j eliminates any deformation of the type 3 v_j^4 F_(1) F_(k≠ j), leaving the complex structure of the two-torus T_(j=2,3)^2 unaltered. The non-invariance of δ F_(j=2,3)^1 under _3 also excludes any type of deformation of the form δ F_(j=2,3)^1 F_(1) F_(k=3,2). 
This observation agrees with the considerations in section <ref> that the complex structures of the two-tori T_(j=2,3)^2 are frozen to hexagonal shape by the _3-action. The deformation polynomials δ F_(1)^α are left invariant by the _3-action, such that deformations of the type δ F_(1)^α F_(2) F_(3) do exist, up to PSL(2,ℂ) transformations acting on (x_1,v_1), and they represent one untwisted complex structure modulus, in line with h^21_ bulk = 1. Recall from footnote <ref> that the three complex parameters of the PSL(2,ℂ) symmetry allow to reduce the four deformations to a single independent deformation. Deformations of the _2^(i) singularities are performed through polynomials of the form F_(i)δ F_(j)^αδ F_(k)^β, which should also be _3-invariant for consistency. In this sense, the _3-action will put restrictions on the deformation parameters ε_αβ^(i) and reduce the number of independent deformation moduli. For the _2^(1)-twisted sector, we find six independent deformation parameters which concur with the Hodge number h^21__2^(1) = 6 from table <ref>: * ε_11^(1)≡ε_0^(1) is left untouched and deforms the singularity (11) on the four-torus T^2_(2)× T^2_(3); * ξ^2 ε_21^(1) =ε_31^(1) = ξ ε_41^(1)≡ε_1^(1) deforms the singular orbit [(21),(31),(41)] on T^2_(2)× T^2_(3); * ξ^2 ε_12^(1) =ε_13^(1) = ξ ε_14^(1)≡ε_2^(1) deforms the singular orbit [(12),(13),(14)] on T^2_(2)× T^2_(3); * ε_33^(1) =ε_42^(1) = ε_24^(1)≡ε_3^(1) deforms the singular orbit [(33),(42),(24)] on T^2_(2)× T^2_(3); * ε_22^(1) =ε_34^(1) = ε_43^(1)≡ε_4^(1) deforms the singular orbit [(22),(34),(43)] on T^2_(2)× T^2_(3); * ε_44^(1) =ε_23^(1) = ε_32^(1)≡ε_5^(1) deforms the singular orbit [(44),(23),(32)] on T^2_(2)× T^2_(3). The _2^(j=2,3) sectors are equivalent as a result of the T_(2)^2↔ T_(3)^2 exchange symmetry, such that we can treat them jointly.
In the _2^(j)-sector we find four independent deformation parameters, which match the Hodge numbers h^21__2^(2) = h^21__2^(3)=4 in table <ref>: * ∀ α∈{1,2,3,4} we find ε_α 1^(j) = 0, which excludes any type of deformation of the fixed points (α 1) on T_(1)^2 × T_(k≠ j)^2; * ε_1 2^(j) = ε_1 3^(j) = ε_1 4^(j)≡ε_1^(j) deforms the singular orbit [(12),(13),(14)] on T^2_(1)× T^2_(k); * ε_2 2^(j) = ε_2 3^(j) = ε_2 4^(j)≡ε_2^(j) deforms the singular orbit [(22),(23),(24)] on T^2_(1)× T^2_(k); * ε_3 2^(j) = ε_3 3^(j) = ε_3 4^(j)≡ε_3^(j) deforms the singular orbit [(32),(33),(34)] on T^2_(1)× T^2_(k); * ε_4 2^(j) = ε_4 3^(j) = ε_4 4^(j)≡ε_4^(j) deforms the singular orbit [(42),(43),(44)] on T^2_(1)× T^2_(k). The skeptical reader might object that there exists a certain freedom in choosing the forms of the deformation polynomials, yet any consistent _3-invariant choice of the polynomials δ F_(j)^α should yield the same numbers, as the pairing of the _2^(i) fixed points into orbifold-invariant orbits is a direct consequence of the _3-action. After all, the amount of independent _2^(i) deformations has to match the Hodge numbers h^21__2 in the _2^(i)-twisted sectors discussed in section <ref>. The last type of deformations to consider have the form δ F_(1)^αδ F_(2)^βδ F_(3)^γ and are also subject to the _3-action.
The invariance of δ F_(1)^α under the _3-action suggests that we should only worry about the remaining two deformation polynomials and make sure they recombine into _3-invariant combinations with the following relations: * ε_α11 remains unconstrained (∀ α∈{1,2,3,4}); * ξ^2 ε_α 21 = ε_α 31 = ξ ε_α 41 (∀ α∈{1,2,3,4}); * ξ^2 ε_α 12 = ε_α 13 = ξ ε_α 14 (∀ α∈{1,2,3,4}); * ε_α 33 = ε_α 24 = ε_α 42 (∀ α∈{1,2,3,4}); * ε_α 22= ε_α 34 = ε_α 43 (∀ α∈{1,2,3,4}); * ε_α 44 = ε_α 23 = ε_α 32 (∀ α∈{1,2,3,4}). Hence, we obtain in total 6× 4 = 24 _3-invariant deformations of the type δ F_(1)^αδ F_(2)^βδ F_(3)^γ on the T^6/(_2 ×_6) orbifold, which should be contrasted with the 64 parameters on the T^6/(_2 ×_2) orbifold. Observe that the initial 64 _2×_2 fixed points split up into four fixed points, which are left invariant under the _3 action, and 60 fixed points, which are regrouped into _3-invariant triplets. This simple counting explains the 24 allowed deformations δ F_(1)^αδ F_(2)^βδ F_(3)^γ. Similar to the _2 ×_2 orbifold, the ε_αβγ-deformations depend on the twisted complex structure deformation parameters ε_αβ^(i), such that at most 24 conifold singularities remain upon deformation by ε_αβ^(i). In CFT language this would imply that the truly marginal operators associated to the ε_αβγ-deformations do not exist in the associated N=(2,2) SCFT. Hence, also for this orbifold the potential presence of conifold singularities can only be assessed by investigating the hypersurface equation algebraically in the hypersurface formalism. The last element missing to describe sLags in this hypersurface set-up is the orientifold involution σ_ R which acts on the homogeneous coordinates as follows: σ_ R: (x_i,v_i,y) ⟼ ( x̅_i,v̅_i,y̅). This anti-holomorphic involution, constructed from the involution σ defined in equation (<ref>) by choosing A=1_2 on each two-torus T^2_(i), has to be a symmetry of the hypersurface, which boils down to the condition σ_ R(f) =f.
At the orbifold point, this latter condition constrains the shape of the three two-tori to be either of a-type or b-type (the latter corresponding to both A- and B-orientation for hexagonal lattices) as discussed in section <ref> and ensures that the orientifold involution is an automorphism of the torus-lattices. The orientifold involution also acts on the deformation polynomials (<ref>) as follows: [ δ F_(1)^α↦δ F_(1)^α ∀ α∈{1,2,3,4}; δ F_(j)^1 ↦δ F_(j)^1, δ F_(j)^2 ↦δ F_(j)^4, δ F_(j)^3 ↦δ F_(j)^3, δ F_(j)^4 ↦δ F_(j)^2 forj=2,3 , ] which concurs with the -action on the _2 fixed points of the aAA lattice, whose individual two-torus positions are depicted in figure <ref>. Taking into account the action of the involution, we observe that the deformation parameters are even further reduced: the a priori complex deformation parameters are either constrained to be real, or two complex deformation parameters are identified, leaving only one independent complex deformation parameter. The latter occurs for the deformation parameters ε_4^(1) and ε_5^(1), for which we can introduce ε_4|5^(1) = 1/2( ε_4+5^(1)± i ε_4-5^(1)) with ε_4±5^(1)∈ℝ. All the other deformation parameters ε_λ=0,1,2,3^(1) and ε_α=1,2,3,4^(2,3) are constrained to be real. A summary of the independent deformation parameters for the orientifold T^6/(_2×_6 ×ΩR) with discrete torsion and aAA lattice is given in table <ref>.
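The fixed-point orbit counting underlying the parameter identifications above can be cross-checked by brute force. The sketch below (our own labels) acts with the _3-permutations read off from the transformation of the deformation polynomials, namely 2↦3↦4↦2 on the T^2_(2) root labels and 2↦4↦3↦2 on the T^2_(3) root labels, with trivial action on T^2_(1):

```python
from itertools import product

S2 = {1: 1, 2: 3, 3: 4, 4: 2}   # Z3 action on the Z2 fixed-point labels of T^2_(2)
S3 = {1: 1, 2: 4, 3: 2, 4: 3}   # Z3 action on the Z2 fixed-point labels of T^2_(3)

def orbits(points, step):
    """Collect the cyclic-group orbits of `points` under the map `step`."""
    out = set()
    for p in points:
        o, q = {p}, step(p)
        while q not in o:
            o.add(q)
            q = step(q)
        out.add(frozenset(o))
    return out

# Z2^(1) sector: fixed points (beta, gamma) on T^2_(2) x T^2_(3)
z21 = orbits(product(range(1, 5), repeat=2), lambda p: (S2[p[0]], S3[p[1]]))
# Z2 x Z2 fixed points (alpha, beta, gamma); alpha on T^2_(1) is Z3-invariant
z2z2 = orbits(product(range(1, 5), repeat=3), lambda p: (p[0], S2[p[1]], S3[p[2]]))

print(len(z21), len(z2z2))   # 6 independent Z2^(1) deformations, 24 invariant orbits
```

The six _2^(1) orbits reproduce ε_0^(1) through ε_5^(1), and the 24 orbits of the 64 _2×_2 fixed points (4 invariant points plus 20 triplets) match the 24 allowed ε_αβγ-deformations.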
[ 5|c|Independent deformation parameters for T^6/(_2×_6 ×) with discrete torsion;
_2^(i)  Parameter  Identification  Parameter range  Exceptional wrapping numbers;
ε_0^(1)  ε_11^(1)  (x_0^(1),y_0^(1));
ε_1^(1)  ξ^2 ε_21^(1) = ε_31^(1) = ξε_41^(1)  (x_1^(1),y_1^(1));
_2^(1)  ε_2^(1)  ξ^2 ε_12^(1) = ε_13^(1) = ξε_14^(1)  (x_2^(1),y_2^(1));
ε_3^(1)  ε_33^(1) = ε_42^(1) = ε_24^(1)  (x_3^(1),y_3^(1));
ε_4^(1), ε_5^(1)  ε_22^(1) = ε_34^(1) = ε_43^(1), ε_44^(1) = ε_23^(1) = ε_32^(1)  (x_4^(1),y_4^(1), x_5^(1),y_5^(1));
_2^(2)  ε_α=1,2,3,4^(2)  ε_α2^(2) = ε_α3^(2) = ε_α4^(2)  (x_α^(2),y_α^(2));
_2^(3)  ε_α=1,2,3,4^(3)  ε_α2^(3) = ε_α3^(3) = ε_α4^(3)  (x_α^(3),y_α^(3));
]
DefParZ2Z6CompleteOverview of the independent deformation parameters per _2^(i)-twisted sector for T^6/(_2×_6 ×) with discrete torsion (η = -1) and the restrictions following from the _3-action and the -action. The last column relates the deformation parameters to the relevant exceptional wrapping numbers introduced in equation (<ref>) of section <ref>. In conclusion, the T^6/(_2×_6 ×) orientifold with discrete torsion corresponds to the zero locus of the following polynomial f:f=- y^2 +v_1 x_1 (x_1^2 -v_1^2) v_2 (x_2^3 - v_2^3) v_3 (x_3^3 - v_3^3) - F_(1){ε_0^(1)· x_2 (x_2^3 - v_2^3) · x_3(x_3^3 - v_3^3) + ε_1^(1)· 3 v_2^3 x_2· x_3(x_3^3 - v_3^3) . + ε_2^(1)· x_2(x_2^3 - v_2^3) · 3 v_3^3 x_3 + ε_3^(1)· 3 v_2^2 v_3^2 (v_2^2 v_3^2 + v_2 v_3 x_2 x_3 + x_2^2 x_3^2).+ ε_4+5^(1)· 3 v_2^2 v_3^2 (v_2 v_3 - x_2 x_3) (2 v_2 v_3 + x_2 x_3) + ε_4-5^(1)· 3 √(3) v_2^2 v_3^2 x_2 x_3 (v_2 v_3 - x_2 x_3 ) }- ∑_j,k∈{2,3},j≠ k F_(j){ε_1^(j)· x_1^2 (x_1^2- v_1^2) · 3 v_k^4 +ε_2^(j)· v_1 x_1 (x_1 + v_1)^2 · 3 v_k^4 .
.+ε_3^(j)· v_1^2(x_1^2 - v_1^2) · 3 v_k^4 + ε_4^(j)· v_1 x_1(x_1 - v_1)^2 · 3 v_k^4 } .In this polynomial expression, one clearly notices the difference in the _2^(1) deformations on the one hand and the _2^(2,3) deformations on the other hand, in analogy with the difference in exceptional cycles from the respective _2^(i)-twisted sectors as reviewed in section <ref>. This will obviously imply that the effect of _2^(1) deformations on sLag three-cycles has to be studied separately from the effect of _2^(2,3) deformations. The latter deformations on the other hand are expected to follow a similar pattern due to the two-torus exchange symmetry , which is reflected in the coordinate permutation (x_2,v_2)↔ (x_3,v_3) accompanied by a permutation of the twisted parameters ε_1^(1)↔ε_2^(1).§.§ Deforming special Lagrangian cycles on T^6/(_2 ×_6 ×)Having formulated the consistent hypersurface formalism to discuss deformations ofwith discrete torsion, we can now turn our attention to the geometric properties of sLag three-cycles away from the orbifold point, after providing a concise translation of the three-cycles introduced in section <ref> into the hypersurface formalism.§.§.§ sLags at the Orbifold PointA minimal set of sLags is defined as the fixed loci under the orientifold involution σ_ R, introduced in equation (<ref>). Due to the additional _3-symmetry in equation (<ref>) on the x_2,3 coordinates, this set of sLags can be extended by demanding that they be invariant under the action of σ_ R×_3, a group isomorphic to the symmetric group S_3.To describe the location of the O6-planes in the hypersurface formalism, it suffices to determine the invariant solutions for the σ_ R-sector, as the _3 element ω^2 maps the O6-planes from the σ_ Rω^2-and σ_ Rω^4-sectors to this one. 
In a coordinate patch where v_i≠0, we can use the (^⋆)^3-scaling symmetry of the ambient space to set v_i=1, such that the O6-planes form a three-dimensional subspace spanned by { Re(x_1),Re(x_2),Re(x_3)} within the complex three-dimensional space parametrised by the coordinates { x_1, x_2, x_3}. Furthermore, this three-dimensional subspace corresponds to the region y^2(x_i)≥ 0, implying that the O6-planes are calibrated w.r.t. Re(Ω_3), the real part of the holomorphic three-form Ω_3 defined in equation (<ref>). This identification of the O6-planes as a real three-dimensional subspace of the x_i-planes matches the geometric description in terms of the torus wrapping numbers provided by table <ref> and will allow us to verify which sLag three-cycles are calibrated w.r.t. the same holomorphic three-form and therefore preserve the same N=1 supersymmetry. When it comes to the sLag three-cycles, we should first establish a clear dictionary between the three types of three-cycles defined in section <ref> and three-dimensional subspaces of the x_i-planes. In order to construct (factorisable) three-cycles on the x_i-planes, we can consider the product N_1⊗ M_2 ⊗ M_3 consisting of Lag lines on each of the two-tori, with N_1 one of the Lag lines from table <ref> and M_2, M_3 Lag lines from table <ref> (or displacements thereof). A necessary condition for the product N_1⊗ M_2 ⊗ M_3 to be a supersymmetric three-cycle is that their relative angles w.r.t. the O6-planes add up to 0 modulo 2π.[The relative angles w.r.t. the -invariant plane can be inferred from the torus wrapping numbers: ϕ^(1) = arctan( m^1/n^1) determines the angle (mod π) on an A-type square T_(1)^2 and ϕ^(j) = arctan( √(3) m^j/(2 n^j + m^j)) on an A-type hexagonal T_(j)^2 with j=2,3, while the sign of n^i encodes the orientation needed to fix the angle (mod 2π).]
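The angle formulas quoted in the footnote are easy to evaluate numerically. The sketch below uses `atan2` as one convenient way to implement the orientation prescription (our choice, not spelled out in the text) and checks the calibration condition for the hexagonal-torus wrapping numbers that the text later assigns to bI^0×bI^0 and bII^0×bII^0, namely (n^2,m^2;n^3,m^3)=(1,0;1,0) and (-1,2;1,-2).

```python
import math

def angle_hex(n, m):
    """Angle of a one-cycle with wrapping numbers (n, m) on an A-type hexagonal
    two-torus, following the footnote's formula arctan(sqrt(3) m / (2n + m));
    atan2 keeps track of the orientation, i.e. the angle mod 2*pi."""
    return math.atan2(math.sqrt(3) * m, 2 * n + m)

# wrapping numbers quoted in the text for two-cycles on T^2_(2) x T^2_(3)
two_cycles = {
    'bI^0 x bI^0': (1, 0, 1, 0),
    'bII^0 x bII^0': (-1, 2, 1, -2),
}
for label, (n2, m2, n3, m3) in two_cycles.items():
    total = angle_hex(n2, m2) + angle_hex(n3, m3)
    print(label, 'total angle / pi =', round(total / math.pi, 6))
```

Both combinations add up to a total angle of 0 (mod 2π), consistent with them being calibrated w.r.t. the same two-form as the O6-planes once paired with a suitable one-cycle on T_(1)^2.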
The factorisable bulk three-cycles of section <ref> are then constructed from Lag lines which do not cross the _2^(i) fixed points, such as lines cI and cII on T_(1)^2 and continuous displacements of bX^0,± with X∈{ I, II,III,IV}. This type of Lag lines forms curves (circles) on each x_i-plane separately, as shown for instance in figure <ref>, indicating that a bulk three-cycle typically has the topology of a three-torus T^3. Only bulk three-cycles that lie sufficiently close to a deformed _2^(i) singularity will experience alterations to their overall three-dimensional volume, yet they will always keep their sLag property. This vanilla-like behaviour of bulk three-cycles on deformed orbifolds suggests that we dwell on them no longer than necessary and focus instead on the two other types of three-cycles. As a matter of fact, the interesting phenomena occur for exceptional and fractional three-cycles passing through deformed singularities. At the orbifold point, it is not possible to express the exceptional divisors e^(i)_αβ in terms of the homogeneous coordinates x_i, as their volumes are shrunk to zero. Hence, we relegate the discussion of the exceptional three-cycles to the next two subsections, where we will discuss in detail the geometry of exceptional divisors located at deformed _2^(i) fixed points for the various distinct deformation parameters. In the meantime, we develop a dictionary for the fractional three-cycles associated to the _2^(i)-twisted sectors at the orbifold point and assume that deformations are turned on in only one twisted sector at a time. In that case, we can describe the fractional three-cycles for the _2^(i)-twisted sector as a direct product of a one-cycle on T_(i)^2 and a two-cycle on T_(i)^4/_2^(i) (or on Def(T_(i)^4/_2^(i)) in later sections). If we limit ourselves to the cycles aI, aII, aIII and aIV on T_(1)^2, the total number of two-cycles on T_(2)^2× T_(3)^2 based on table <ref> is 12^2=144.
The sum of the relative angles for these two-cycles w.r.t. the O6-planes adds up to {0,±π/6,±π/3,π/2}, suggesting six different calibration angles. The 24 two-cycles with calibration angle 0 combine with the one-cycles aI or aIII to form three-cycles calibrated w.r.t. the same three-form Ω_3 as the O6-planes, while the 24 two-cycles with calibration angle π/2 combine with one-cycles aII or aIV to form supersymmetric three-cycles. This leads to 96 three-cycles, which are further reduced to 32 independent ones as a result of the identification of the two-cycles on T^4_(1)≡ T_(2)^2× T_(3)^2 under the _3-action. Taking into account the possibility of turning on discrete Wilson lines or discrete _2^(i)-eigenvalues at the singularities offers a large enough class of fractional three-cycles to construct a variety of phenomenologically interesting global intersecting D6-brane models <cit.>. If we look closer at the fractional three-cycles constructed from aI, aII, aIII, aIV, bI^0 and bII^0, we notice that these one-cycles all lie along the real axis Re(x_i) in the complex x_i-planes in figure <ref> and have _2 fixed points as boundaries. For the aI, aII, bI^0 and bII^0 cycles one should imagine the point ϵ_1 at infinity x_i= ±∞ as the second boundary, representing the singularity at the origin of the two-tori T^2_(i). For a fractional three-cycle associated to e.g. the _2^(1)-twisted sector, the one-cycle on T_(1)^2 has the topology of a circle S^1, while the two-cycle on T^4_(1)/_2^(1) corresponds to a two-torus pinched down at the boundary points, i.e. at the _2^(1) fixed points lying on the zero locus y=0. Hence, the topology of a fractional three-cycle is simply S^1× T^2/_2. A more pictorial representation will be given in sections <ref> and <ref>.
For these kinds of fractional three-cycles, the holomorphic three-form Ω_3 factorises as Ω_3 = Ω_2 ∧Ω_1, such that we can compute the integrals of Ω_2 over the two-cycles on T^4_(i)/_2^(i) separately from the integrals of Ω_1 over the one-cycles on T_(i)^2.In the next two subsections we discuss the effects of deformations in the _2^(1)- and _2^(2,3)-twisted sectors on the exceptional and fractional three-cycles and investigate how their volumes increase or decrease due to the deformation. Due to the exchange symmetry T_(2)^2↔ T_(3)^2 it suffices to discuss only one of the _2^(2,3)-twisted sectors, as the other one will yield the same results. Hence, we can choose to focus on deformations in the _2^(3)-twisted sector. §.§.§ sLags in the deformed _2^(1)-twisted sectorFor a qualitative appreciation of the deformation effects in the _2^(1)-twisted sector, we switch to the x_i=1 patch and describe the sLags in terms of the homogeneous coordinates v_i. The cycles bI^0 and bII^0 are still given by real hypersurfaces at the orbifold point in terms of the homogeneous coordinates v_i=2,3, though in comparison to figure <ref> their regional conditions are changed: for the Lag line bI^0 we find the constraint 0 ≤ v_i≤ 1, while the Lag line bII^0 consists of the union {-∞≤ v_i≤ 0}∪{ 1≤ v_i ≤ + ∞}. Figure <ref> depicts the T^2/_2 topology for the two-cycles constructed from bI^0 and bII^0 on Def(T_(1)^4/_2^(1)), with the blue-shaded regions representing sLag two-cycles calibrated with respect to Re(Ω_2). The blue contour-lines correspond to the zero locus y=0 in the -projected plane (v_2, v_3), and these lines intersect at the _2^(1) fixed points (11), (13), (31) and (33). The fact that we are able to depict graphically the behaviour of the aforementioned singularities follows immediately from the choice of the coordinate patch x_i = 1. 
Other singularities correspond to complex roots and therefore do not lie in the -restricted plane (v_2, v_3), such that they are not depicted in figure <ref>. The two-cycles bI^0 ×bI^0 and bII^0 ×bII^0 should be paired with a one-cycle aI or aIII on T_(1)^2 to form a sLag three-cycle calibrated with respect to Re(Ω_3).The white regions in figure <ref> on the other hand represent sLag two-cycles calibrated with respect to Im(Ω_2), namely the two-cycles bI^0 ×bII^0 and bII^0 ×bI^0.Anticipating the examples later on, the hidden stacks h_1,2 in the Left-Right (L-R) symmetric model I of section <ref> belong to the three-cycle type aI×bI^0 ×bI^0, while the hidden stacks of the Pati-Salam (PS) II model of section <ref> and of the L-R symmetric II model of section <ref> are of the type aI×bII^0 ×bII^0, and the hidden stacks of the L-R symmetric IIb model are of the type aIII×bII^0 ×bII^0. By turning on each _2^(1) deformation parameter ε^(1)_λ separately in figure <ref>, we can explicitly see which singularities are deformed and which singularities are displaced by the respective deformation parameter. This information is also summarised in the upper part of table <ref>, which results from determining the singular points of the hypersurface equation (<ref>). At the deformed singularity, an exceptional cycle with non-vanishing volume emerges, which is indicated by a red dashed line in figure <ref>. An interesting observation is that the point (33) gets deformed too when turning on the deformation parameters ε_4+5^(1) and ε_4-5^(1). This phenomenon was also observed <cit.> for specific deformations of complex co-dimension 2 singularities on the subspace T^4_(i)/_2 of the orbifold T^6/(_2×_6') with discrete torsion and can be resolved by turning on a correction-term ε_3^(1) depending on the respective deformation parameter, as depicted in the lower diagrams of figures <ref> (f) and (g). 
A similar consideration holds for the fixed points (22) and (44), which are also deformed, but now for a non-vanishing deformation parameter ε_3^(1)≠ 0. Further details about the counter-terms can be found in appendix <ref>.
[ 4|c|Behaviour of _2^(i) fixed points under deformations of T^6/(_2×_6 ×) with discrete torsion;
4|c|Deformations in the _2^(1)-twisted sector;
modulus  deformed singular orbit  additional deformed orbit  displaced orbit;
ε_0^(1)  e_0^(1) ≡ (11)  e_1^(1), e_2^(1);
ε_1^(1)  e_1^(1) ≡ [(31),(41),(21)]  e_3^(1), e_4^(1), e_5^(1);
ε_2^(1)  e_2^(1) ≡ [(13),(12),(14)]  e_3^(1), e_4^(1), e_5^(1);
ε_3^(1)  e_3^(1) ≡ [(33),(42),(24)]  e_4^(1), e_5^(1);
ε_4+5^(1) & ε_4-5^(1)  e_4^(1) ≡ [(22),(34),(43)], e_5^(1) ≡ [(44),(32),(23)]  e_3^(1);
4|c|Deformations in the _2^(3)-twisted sector;
ε_1^(3)  e_1^(3) ≡ [(13),(14),(12)]  e_2^(3), e_4^(3);
ε_2^(3)  e_2^(3) ≡ [(23),(24),(22)]  e_1^(3), e_3^(3);
ε_3^(3)  e_3^(3) ≡ [(33),(34),(32)]  e_2^(3), e_4^(3);
ε_4^(3)  e_4^(3) ≡ [(43),(44),(42)]  e_1^(3), e_3^(3);
]
OverviewDeformperZ2SectorOverview of the deformed singular _3-orbits composed of the _2^(i) fixed points of T^6/(_2×_6 ×) with discrete torsion per deformation parameter ε_λ=0,…,4± 5^(1) and ε_α=1,…,4^(3). The third column indicates which other singular orbits are deformed, while the last column lists which orbits remain singular but are displaced. Figure <ref> provides a graphical representation of how the _2^(1) singularities (11), (13), (31) and (33) on T_(1)^4/_2^(1) behave under deformations. The other singular points belonging to the respective _3-invariant orbits are not depicted, as they do not lie in the -restricted plane. To depict these latter singularities, one has to perform an appropriate _3-rotation on the coordinates. Similar considerations hold for the _2^(3) singularities depicted in figure <ref>.
For a more quantitative picture of the deformation effects, we go back to the coordinate patch in which v_i=1, where we can rewrite the hypersurface equation in the vicinity of a singular point on T_(1)^4/_2^(1) as a ^2/_2 (or A_1-type) singularity. First, we look at the zero locus of the hypersurface equation (<ref>), turn on each _2^(1) deformation separately and discuss its effects in a local patch around the singular point (33). This local description can be extracted straightforwardly for the deformation ε_3^(1), which deforms the exceptional cycles ϵ_3^(1) and ϵ̃_3^(1) at the singularity (33). For the other deformations (ε_0^(1), ε_1^(1), ε_2^(1)), we have to perform a Möbius transformation as explained in appendix <ref>, or a complex rotation (ε_4+5^(1), ε_4-5^(1)), to extract the proper local structure by placing the singularity at the point (33). More explicitly, there exists a Möbius transformation λ_3 <cit.> acting on the homogeneous coordinate x_i that allows us to map the _2 fixed point ϵ_1 situated at the origin of a two-torus T^2_(i) to the fixed point ϵ_3 located on the real axis in the new coordinate x̃_i, such that a singular point (αβ) with either α=1 and/or β=1 can always be mapped by λ_3 to the point (33) in the new coordinates. The _2 fixed points ϵ_2 and ϵ_4 on the other hand are mapped to the point ϵ_3 by a _3-transformation, as can be seen in figure <ref> (b). With an appropriate rescaling of the homogeneous coordinates, we then find that the singularity is locally described by the following hypersurface equation:ỹ^2 = x̃_2 x̃_3 - c_λε_λ^(1), c_λ = {[9 (λ = 0),;3 (λ = 1,2),;1 (λ = 3),;1/2 (λ = 4±5). ].A first observation is of course that the co-dimension two singularity locally takes the form of a ^2/_2 singularity in the (x̃_2, x̃_3)-plane.
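As a quick sanity check of the local A_1 description, one can exhibit the vanishing two-sphere explicitly: choosing the real slice in which x̃_3 is the complex conjugate of x̃_2 and ỹ is purely imaginary (so that ỹ^2<0), the local equation becomes a round S^2 of radius √(c_λ ε_λ^(1)). The choice of slice is ours, made only to make the sphere manifest; a symbolic verification with sympy:

```python
import sympy as sp

u, v, s = sp.symbols('u v s', real=True)
c, eps = sp.symbols('c epsilon', positive=True)

# real slice of the local A_1 equation  y~^2 = x~_2 x~_3 - c_lambda * eps_lambda
x2t = u + sp.I * v      # x~_2
x3t = u - sp.I * v      # x~_3 chosen as the complex conjugate of x~_2 (our slice)
yt = sp.I * s           # y~ purely imaginary, so y~^2 = -s^2 < 0

lhs_minus_rhs = sp.expand(yt**2 - (x2t * x3t - c * eps))
# on the hypersurface lhs_minus_rhs = 0, i.e. u^2 + v^2 + s^2 = c*eps,
# the equation of a two-sphere of radius sqrt(c*eps):
print(sp.simplify(lhs_minus_rhs + (u**2 + v**2 + s**2 - c * eps)))
```

The printed residual vanishes identically, so the deformed singularity carries an S^2 whose radius scales as √(ε), in line with the square-root behaviour of the exceptional cycle volumes discussed below.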
The two-cycles bI^0 ×bI^0 and bII^0 ×bII^0, with torus wrapping numbers (n^2,m^2;n^3,m^3)=(1,0;1,0) and (-1,2;1,-2) on T_(2)^2× T_(3)^2, respectively, pass through the _2^(1) fixed points affected by a non-vanishing deformation parameter ε_λ=0,1,2,3^(1). A full fractional three-cycle calibrated with respect to Re(Ω_3) is then constructed as described in section <ref> by combining these two-cycles with e.g. the one-cycle aI on T_(1)^2, such that the _2^(1) exceptional contributions to a fractional three-cycle can be expressed in terms of the basis three-cycles (ϵ_λ^(1), ϵ̃_λ^(1)) ∼ orbits of (π_1 ⊗ e_λ^(1), π_2 ⊗ e_λ^(1)) as follows: Π^_2^(1) = (-)^τ^_2^(1)( ϵ_0^(1) + (-)^τ^2ϵ_1^(1) + (-)^τ^3ϵ_2^(1) +(-)^τ^2 + τ^3ϵ_3^(1)).Now, by switching on one of the associated four deformation parameters, an exceptional two-cycle e^(1)_λ=0,1,2,3 with non-vanishing volume grows out of the respective fixed point along T^4_(1)/_6, and it is clear from this construction that the volume of an associated fractional three-cycle is also influenced by the evolution of the volume of the exceptional cycle under deformation. At the orbifold point, the two-cycles bI^0 ×bI^0 and bII^0 ×bII^0 along T^4_(1)/_6 can be represented by a set of real two-dimensional regions x̃_2 ·x̃_3 ≥ 0 in the (x̃_2, x̃_3)-plane. When turning on a deformation parameter ε_λ=0,1,2,3^(1), the volume of these two-cycles will shrink or grow depending on the sign of the deformation parameter: * ε_λ^(1) > 0: bI^0 ×bI^0 and bII^0 ×bII^0 are still two separate two-cycles, both with shrinking sizes as an exceptional two-cycle e_λ^(1) grows out of the singularity (33) in the region ỹ^2 <0. This situation is represented in the upper diagram of figure <ref> (b)-(c)-(d)-(e). The exceptional two-cycle satisfies the algebraic condition x̃_2 = x̃_3, by which the hypersurface equation (<ref>) reduces to the equation for a two-sphere S^2 with radius √(c_λε_λ^(1)).
As the two-cycles bI^0 ×bI^0 and bII^0 ×bII^0 remain two separate sLags, the exceptional two-cycle e_λ^(1) has to be calibrated with respect to the same two-form Re(Ω_2). This statement can be shown explicitly by computing the (non-vanishing) volume ∫_e_λ^(1) Re(Ω_2) of the exceptional two-cycle as a function of the parameter ε_λ^(1) in the hypersurface formalism and taking into account that ỹ^2 <0 and x̃_2 = x̃_3. Extending these considerations to the three-cycles on (T^2_(1)× T^4_(1)/_6)/_2^(3),we conclude that the exceptional three-cycle ϵ_λ^(1) is calibrated with respect to the same three-form Re(Ω_3) as the bulk three-cycles and that there should be a relative minus sign between the bulk three-cycle Π^ bulk and Π^_2^(1) in order for the volume of the fractional cycle to decrease upon deformation of the singularity, in line with figure <ref> (e).The cycles bI^0 ×bII^0 and bII^0 ×bI^0 on the other hand have merged into one big two-cycle and are no longer sLag two-cycles separately. As these two-cycles are calibrated with respect to Im(Ω_2), we should take a union two-cycle bI^0 ×bII^0⊕bII^0 ×bI^0 from which the exceptional cycle e_λ^(1) is eliminated, such that the union two-cycle remains sLag with respect to Im(Ω_2). * ε_λ^(1) < 0: bI^0 ×bI^0 and bII^0 ×bII^0 are no longer separate two-cycles but melt together as shown in the lower diagram of figure <ref> (e), while an exceptional two-cycle e_λ^(1) grows out of the singularity (33) in the region ỹ^2 >0. The hypersurface equation (<ref>) reproduces the topology of a S^2 for the algebraic condition x̃_2 = - x̃_3, which implies that the exceptional two-cycle e_λ^(1) is now calibrated with respect to Im(Ω_2). 
A union two-cycle bI^0 ×bI^0⊕bII^0 ×bII^0 from which the exceptional two-cycle e_λ^(1) is eliminated will then correspond to one big sLag cycle calibrated with respect to Re(Ω_2). Once again, the two-cycles bI^0 ×bII^0 and bII^0 ×bI^0 behave differently with respect to the two-cycles bI^0 ×bI^0 and bII^0 ×bII^0 under the deformation, as their sizes shrink for increasing |ε_λ^(1)|. Combining them with e.g. the one-cycle aII on T_(1)^2 allows for the construction of fractional three-cycles calibrated with respect to Re(Ω_3), and bulk three-cycles parallel to the _2^(2)- and _2^(3)-plane, respectively. Their _2^(1) exceptional contributions Π^_2^(1) can thus be decomposed in terms of the basis three-cycles ϵ̃_λ^(1):Π^_2^(1) = (-)^τ^_2^(1)( ϵ̃_0^(1) + (-)^τ^2ϵ̃_1^(1) + (-)^τ^3ϵ̃_2^(1) +(-)^τ^2 + τ^3ϵ̃_3^(1)),implying that the basis three-cycles ϵ̃_λ=0,1,2,3^(1) are calibrated with respect to Re(Ω_3). As the volumes of these fractional three-cycles shrink for a non-vanishing deformation according to figure <ref> (e), there should be a relative minus sign between Π^ bulk and the contribution to Π^_2^(1). For instance, for ε^(1)_3<0 equation (<ref>) describes a fractional three-cycle with τ^_2^(1) + τ^2 + τ^3 = 1 mod 2. The situation for the deformations ε_4+5^(1) and ε_4-5^(1) is different, as they deform singularities through which the two-cycles bI^0 ×bIII^0, bIII^0 ×bI^0, bIII^0 ×bIII^0, bII^0 ×bIV^0, bIV^0 ×bII^0 and bIV^0 ×bIV^0 calibrated w.r.t. Re(Ω_2) pass. As such, the singular point (33) should not be deformed, at least for small non-vanishing deformations ε_4+5^(1) or ε_4-5^(1), which explains the required non-vanishing correction term ε_3^(1)(ε_4+5^(1)) or ε_3^(1)(ε_4-5^(1)), respectively, as depicted in the lower diagrams of figures <ref> (f) and (g).
A brief discussion on how to obtain these correction terms is given in appendix <ref>. The (local) discussion of the singularities deformed by a non-vanishing parameter ε_4+5^(1) and ε_4-5^(1) follows the same pattern as the one conducted above for the other deformations ε^(1)_λ∈{0,1,2,3}. Nonetheless, there is an important difference, as the exceptional three-cycles (ϵ^(1)_4, ϵ̃^(1)_4 ) are not mapped to (linear combinations of) themselves under the orientifold projection, but to (linear combinations of) the three-cycles (ϵ^(1)_5, ϵ̃^(1)_5 ) and vice versa, as indicated in table <ref>. This implies that, within the context of Type IIA/ orientifolds, the _3-invariant orbits e_4^(1) and e_5^(1) from table <ref> are always deformed simultaneously for a single non-zero deformation ε^(1)_4+5 or ε^(1)_4-5. Exposing the global behaviour of the exceptional three-cycle volumes for each deformation separately requires us to impose the algebraic condition x_2 = ± x_3 on the full hypersurface equation (<ref>) and to extract a real hypersurface equation allowing for a geometric description of an exceptional three-cycle in the hypersurface formalism. Let us work this out explicitly for three-cycles with a bulk orbit parallel to bI^0×bI^0 on T_(1)^4. Consistency with the plots in figure <ref> indicates that exceptional three-cycles calibrated w.r.t. Re(Ω_3) are subject to the constraint y^2 ( Re(x_1), Re(x_2), Im(x_2),ε_λ^(1))≤ 0, which allows us to define the integration domain for the volume of the respective exceptional three-cycles. Consider first the deformation ε_0^(1) of the complex co-dimension 2 singularity at the origin (11) on T_(1)^4/_6, which can be placed along the real axes in the (x_2,x_3)-planes by virtue of the Möbius transformation λ_3.
Imposing subsequently the algebraic condition x_2 = x_3 yields a real hypersurface equation reminiscent of a _2-singularity on T^4/_2, implying that the _3-action does not influence the geometrical properties of this exceptional three-cycle. Depicting the volume of the exceptional cycle e_0^(1) as a function of the deformation parameter ε_0^(1) fully confirms this statement, as can be seen explicitly from the left plot of figure <ref>. For small deformations, the exceptional cycle volume exhibits a square-root-like dependence on ε_0^(1), characteristic for deformed exceptional two-cycles on ^2/_2. For larger values of the deformation parameter, the exceptional cycle volume goes over into a more linear behaviour, before it evolves into a quadratic dependence for very large values of ε_0^(1), enforced by the topology of the ambient T^4.[The volumes of the exceptional cycle and the fractional three-cycles are normalised to the volume of the fractional cycle at the orbifold point, i.e. Vol(Π^ frac)=1 for vanishing deformation ε_λ^(1), throughout the paper. In this section we compute the volumes for the fractional cycles with bulk orbit parallel to aI×bI^0×bI^0, such that the integration contours lie completely along the real lines Re(x_i)≥ 1, with i∈{1,2,3}.] The middle panel of figure <ref> shows the ε_0^(1)-dependence of the fractional three-cycle volume with bulk orbit parallel to aI×bI^0×bI^0, which shrinks to zero as the deformation parameter goes to one. Hence, this plot depicts the global behaviour of the fractional three-cycle Π^ frac_- = 1/2( Π^ bulk - ϵ_0^(1)). On the right panel of figure <ref>, we depict the ε_0^(1)-dependence of the volume of the fractional three-cycle Π^ frac_+ = 1/2( Π^ bulk + ϵ_0^(1)), where the bulk orbit is once more parallel to aI×bI^0×bI^0.
For this latter fractional three-cycle we observe that its volume grows for increasing values of the deformation parameter, with the same functional behaviour as the exceptional cycle volume. Closer inspection of the behaviour of the bulk cycle volume under deformation reveals that the correct representant in the homology class of bulk cycles corresponds to the three-cycle aI×bIII^0×bIII^0, which happens to lie furthest away from the deformed singularity (11), and whose volume is therefore the least affected by the deformation. One can confirm this explicitly by adding the exceptional cycle volume to twice the volume of the fractional cycle Π^frac_- and comparing the volume-dependence of the resulting bulk cycle to the volume-dependence of the bulk three-cycle aI×bIII^0×bIII^0 under deformation. Next, we focus on the deformation ε_3^(1), for which it suffices to impose the exceptional cycle condition x_2 =x_3 on equation (<ref>) to extract the real hypersurface equation describing the exceptional cycle volume. The points (33), (24) and (42) in the _3-invariant orbit e_3^(1) are simultaneously deformed for a non-vanishing ε_3^(1), such that the exceptional cycle consists initially of three distinct S^2's resolving each of the three _2^(1) singularities, as shown in the left plot of figure <ref>. In order to extract the volume-dependence of a single S^2 as a function of the deformation parameter, we depict one third of the exceptional cycle volume in the left plot of figure <ref>, for which we observe a similar qualitative behaviour as for the exceptional cycle e_0^(1). More precisely, we notice a square-root-type functional dependence of Vol(e_3^(1)) for small deformations, which goes over into a linear behaviour and ends in a quadratic dependence for larger deformations. A quantitative difference with respect to the cycle e_0^(1) is the region of validity for the parameter ε_3^(1).
For values of ε_3^(1)∼ 0.37 and higher, the three two-spheres S^2 merge together into one large exceptional three-cycle as depicted in the right panel of figure <ref>, at which point we can no longer reliably describe the exceptional cycle through the hypersurface formalism. This is manifested in the horizontal plateau truncating the exceptional cycle volume for values ε_3^(1)≥ 0.37 in the left plot of figure <ref>. The other two plots in figure <ref> represent the (normalised) volumes of the fractional three-cycles Π^frac_± = 1/2( Π^ bulk±ϵ_3^(1)) as a function of ε_3^(1) with bulk orbit parallel to aI×bI^0×bI^0. The representant in the bulk homology class is, however, not the factorisable three-cycle aI×bI^0×bI^0 itself, but a bulk three-cycle aI× C^0× C^0 consisting of the union of one-cycles C^0 = bII^+ ∪ bII^- along both two-tori T_(2)^2 and T_(3)^2. Once again, it suffices to subtract the exceptional cycle volume from twice the fractional cycle volume to uncover the dependence of the bulk cycle on the deformation parameter ε_3^(1) and verify that this functional behaviour matches the one of the three-cycle aI× C^0× C^0. A pictorial representation of the one-cycle C^0 is offered in figure <ref>, from which it is immediately clear that the one-cycle does not represent a sLag cycle, since the pull-back of the Kähler two-form on the two-torus does not vanish. Nonetheless, the three-cycle aI× C^0× C^0 belongs to the same homology class as the bulk three-cycles parallel to aI×bI^0×bI^0, such that its integrated volumes are equal to each other, as argued in more detail in <cit.>. Discussing the global aspects of the exceptional cycle e_1^(1) on T^4_(1)/_6 follows a slightly different logic, as the geometric condition x_2 = λ_3( x_3) does not define a fixed set under the orientifold involution, i.e. the resulting hypersurface equation is not real and therefore does not offer the desired direct access to the exceptional sLag.
The intuition following from the study of the exceptional cycles e_0^(1) and e_3^(1) allows us, nonetheless, to express the functional dependence of e_1^(1) on the deformation parameter ε_1^(1) through a small detour: we first compute the normalised volume of the bulk cycle aI× C^0×bIII^0 as a function of the deformation parameter ε_1^(1) and then subtract the normalised volume of the fractional cycle with integration contours completely along the real lines Re(x_i=1,2,3)≥ 1. The result of that computation is depicted in the left panel of figure <ref>, from which we can extract the square-root like functional dependence Vol_ norm(e_1^(1))∼√(ε_1^(1)). The plot does not contain information about a potential quadratic dependence on ε_1^(1) for large deformations, as was the case for the exceptional cycles e_0^(1) and e_3^(1). It appears that this type of information can only be extracted explicitly from the hypersurface equation for the exceptional cycle e_1^(1), whose form is constrained by the topology of the ambient T^4. When restricting to the real part of the hypersurface equation upon imposing the condition x_2 = λ_3(x_3), one can qualitatively see three distinct exceptional cycles growing out of the _2^(1) singularities (31), (21) and (41) for non-vanishing ε_1^(1), which merge together for larger deformation parameters analogously to the behaviour of the deformed exceptional cycle e_3^(1) depicted in figure <ref>. In this respect, the _3-action and the T^4 topology do qualitatively constrain the behaviour of the exceptional cycle e_1^(1), even though their full effects cannot be extracted more quantitatively due to an indisputable imaginary component of the hypersurface equation for the exceptional cycle.The functional dependence of the (normalised) volumes for the fractional three-cycles Π^ frac_± = 1/2( Π^ bulk±ϵ_1^(1)) is given in the middle and right plot of figure <ref>, respectively. 
As expected, the volume of the fractional cycle Π^ frac_- shrinks with growing deformation ε_1^(1), while the volume of Π^ frac_+ grows with increasing deformation ε_1^(1). Due to the exchange symmetry T_(2)^2 ↔ T_(3)^2, the discussion of the global description of the exceptional cycle e_2^(1) is completely analogous to the one for e_1^(1).This brings us finally to the global description of the exceptional cycles e_4^(1) and e_5^(1) on T^4_(1)/_6, which are related to each other through the orientifold projection. In the hypersurface equation (<ref>) this relation under the -projection is manifestly built in, such that a non-zero deformation parameter ε_4+5^(1) resolves both e_4^(1) and e_5^(1) simultaneously and similarly for a non-zero deformation parameter ε_4-5^(1). To extract the hypersurface equation for the exceptional cycles, we have to rotate the x_i=2,3-coordinates over an angle ξ or ξ^2 (or use the Möbius transformation λ_2 or λ_4), after which we can impose the algebraic condition x_2 = ± x_3. Unfortunately, the resulting hypersurface equation does not correspond to a fixed set under the orientifold involution, which is manifested by a purely imaginary contribution to the hypersurface equation. Hence, similarly to the exceptional cycle e_1^(1), we are not able to directly access the exceptional cycles e_4^(1) and e_5^(1).[One can, however, focus on the real part of the hypersurface equation for e_4^(1) and e_5^(1) and compute the volume as a function of the respective deformation parameter. This offers a qualitative understanding of the geometry of e_4^(1) and e_5^(1) and shows that these orbits have a similar behaviour under deformation as the orbit e_3^(1): for small deformations, the exceptional cycle volume exhibits a square-root type functional dependence, while the topology of the ambient T^4_(1) enforces a quadratic behaviour for larger deformations. 
The two-spheres S^2 at the resolved singularities in the orbit merge together into one large exceptional cycle for a sufficiently large deformation. This common behaviour is inherited from the isotropy between the orbits e_3^(1), e_4^(1) and e_5^(1) on the parent toroidal orbifold T^4_(1)/_6.] The situation is even more complicated in this case, as the fractional three-cycles wrapping one or more of the _2^(1)-fixed points in the orbits e_4^(1) and e_5^(1) do not lie along the real axes in the x_2- and x_3-planes, such that we are not able to directly compute the fractional cycle volume as a function of ε_4+5^(1) or ε_4-5^(1) either. To understand the impact of the deformation ε_4+5^(1) on the volume of a fractional cycle, one first has to establish that the resolved orbits e_4^(1) and e_5^(1) on the parent toroidal orbifold T^6/(_2×_6) with discrete torsion have the exact same structure as the resolved orbit e_3^(1) discussed above, upon respectively considering non-zero complex deformation parameters ε_4^(1) and ε_5^(1) individually. Taking afterwards the orientifold projection into account implies - based on the calibration properties with respect to the volume three-form Ω_3 - that the exceptional three-cycles ϵ_4^(1) + ϵ_5^(1) and ϵ̃_4^(1) + ϵ̃_5^(1) are resolved by a non-zero deformation parameter ε_4+5^(1), while the exceptional three-cycles ϵ_4^(1) - ϵ_5^(1) and ϵ̃_4^(1) - ϵ̃_5^(1) are resolved by a non-zero deformation parameter ε_4-5^(1). To assess the impact of the deformation ε_4+5^(1) on the volume of a fractional cycle wrapping _2^(1) singularities in the orbits e_4^(1) and e_5^(1), we exploit our intuition obtained from the other deformations in the _2^(1)-twisted sector and propose the following method to compute the volume for e.g.
the fractional three-cycle aI×bI^+×bIII^-:

* Compute the (normalised) volume of the bulk cycle aI× C^-× C^+, composed of the union of the one-cycles C^- =bII^0∪bII^- and C^+ =bII^0∪bII^+ as drawn in figure <ref>, as a function of the deformation parameter ε_4+5^(1);
* Consider the volume of a single two-sphere S^2 obtained by deforming the exceptional cycle e_3^(1), as presented in the left panel of figure <ref>, and re-interpret[This identification of the exceptional cycle volumes is supported by the _3-symmetry among the orbits e_3^(1), e_4^(1) and e_5^(1) on the ambient toroidal orbifold.] this volume as the volume of the resolved exceptional two-cycle in the _3- and -invariant exceptional three-cycle ϵ_4^(1)+ϵ_5^(1);
* Subtract the resulting exceptional cycle volume from, or add it to, the bulk cycle volume to obtain the volumes of the fractional cycles Π_-^ frac and Π_+^ frac, respectively:

Vol_ norm(Π^ frac_±) = Vol_ norm(Π^ bulk) ± Vol_ norm(ϵ_4^(1)+ϵ_5^(1)).

The proposed method does not allow us to obtain any quantitative information about the fractional cycle volume for a given deformation ε_4+5^(1)≠ 0, but it does enable us to envision the qualitative behaviour of the volumes of the fractional cycles Π_±^ frac parallel to e.g. the three-cycle aI×bI^+×bIII^- as presented in figure <ref>. There, we see that the volumes of the fractional three-cycles Π_±^ frac exhibit the expected behaviour under deformation: the volume Vol(Π_-^ frac) decreases with growing ε_4+5^(1), while the volume Vol(Π_+^ frac) increases for growing deformation ε_4+5^(1). The numerical noise for large deformations (ε_4+5^(1)≥ 0.37) is a reflection of the merging of the two-spheres S^2 at the resolved singularities into one large exceptional two-cycle.
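The three-step procedure just described can be summarised in a few lines of code; the square-root form of the exceptional volume and the constant bulk volume below are assumptions standing in for the numerically computed building blocks:

```python
import math

def frac_volume(vol_bulk, vol_exc, sign):
    """Step 3: Vol_norm(Pi_frac_pm) = Vol_norm(Pi_bulk) +/- Vol_norm(eps_4 + eps_5)."""
    return vol_bulk + sign * vol_exc

# Assumed building blocks (steps 1 and 2 deliver these numerically):
vol_bulk = lambda eps: 1.0            # roughly eps-independent bulk volume
vol_exc = lambda eps: math.sqrt(eps)  # small-deformation sqrt behaviour

eps_grid = [0.0, 0.1, 0.2, 0.3]
vol_minus = [frac_volume(vol_bulk(e), vol_exc(e), -1) for e in eps_grid]
vol_plus = [frac_volume(vol_bulk(e), vol_exc(e), +1) for e in eps_grid]
# vol_minus decreases and vol_plus increases with eps, matching the
# qualitative behaviour described in the text.
```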
One can easily repeat this method for fractional three-cycles wrapping any of the other _2^(1) singularities in the orbits e_4^(1) and e_5^(1), or apply this method to probe the effect of a non-vanishing deformation ε_4-5^(1) on the volume of such fractional three-cycles, provided one chooses the appropriate bulk three-cycle. In all of these cases, the qualitative functional behaviour of the fractional three-cycles can be brought back to the case presented in figure <ref>, namely Vol(Π^ frac_±) ∼ Vol(Π^ bulk) ±√(ε_i^(1)) with i=4+5 or i=4-5.

§.§.§ sLags in the deformed _2^(3)-twisted sector

To investigate the deformation effects in the _2^(3)-twisted sector, we turn to the v_i=1 patch so that we can describe the Lag lines in terms of the homogeneous coordinates x_i as in section <ref>. Real hypersurfaces at the orbifold point are represented in this coordinate patch by the Lag lines aI, aII, aIII and aIV on the two-torus T_(1)^2 and by bI^0 and bII^0 on T_(2)^2. Combining these Lag lines, we can construct a set of sLag two-cycles with topology T^2/_2 on Def(T_(3)^4/_2^(3)) and calibrated with respect to Re(Ω_2), represented by the blue-coloured regions in figure <ref> (a), i.e. the two-cycles aI×bI^0, aIV×bII^0, aIII×bI^0 and aII×bII^0. The white regions correspond to sLag two-cycles calibrated by Im(Ω_2): aI×bII^0, aIV×bI^0, aIII×bII^0 and aII×bI^0. The blue contour-lines in the -projected (x_1, x_2) plane represent the zero locus y=0 and intersect at the real _2^(3) fixed points (23), (33) and (43). In order to obtain fractional three-cycles calibrated w.r.t. Re(Ω_3) on (T^4/_2^(3)× T^2_(3))/_6, the two-cycles calibrated w.r.t.
Re(Ω_2) and Im(Ω_2) on T^4_(3)/_2^(3) should be paired with a one-cycle bI^0/bIII^0 and bII^0/bIV^0, respectively, on T_(3)^2.

By turning on the _2^(3) deformation parameters ε_α^(3) one by one, figure <ref> shows exactly which singularities (or singular orbits under the _3 ⊂_6 symmetry) are deformed and which singularities are displaced, in agreement with the lower part of table <ref>. Statements about singularities (1α) cannot be made in this coordinate patch v_i=1 as they are located here at x_1 = ∞, and hence they require us instead to describe the hypersurface equation in terms of the homogeneous coordinates v_i in the coordinate patch x_i=1. At the deformed singularities, an exceptional two-cycle with non-vanishing volume appears, as indicated by the red dashed lines in figure <ref>. We can study the effects of the deformation parameters more qualitatively by studying the zero locus of the hypersurface equation (<ref>) in a local patch around the singular point (23), for which the hypersurface equation reduces locally to the form (after rescalings):

ỹ^2 = x̃_1 x̃_2 - 2 ε_α^(3), α = 1,2,3,4.

For the deformations ε_1^(3), ε_3^(3) and ε_4^(3) we have to perform an appropriate Möbius transformation from appendix <ref> to mould the hypersurface equation (<ref>) into this specific form, corresponding locally to a ^2/_2-type singularity. The two-cycles passing through the singularity (23) are given by aI×bI^0 and aIV×bII^0, associated to the torus wrapping numbers (n^1,m^1;n^2,m^2)=(1,0;1,0) and (0,1;1,-2) on T_(1)^2× T_(2)^2, respectively. Combining for instance the first two-cycle aI×bI^0 with a one-cycle bI^0 on T_(3)^2 yields a fractional three-cycle as defined in section <ref>, calibrated w.r.t. Re(Ω_3). Its overall _2^(3) exceptional three-cycle is given in terms of the basis exceptional three-cycles as:

Π^_2^(3) = (-)^τ^_2^(3)( (-)^τ^3ϵ_1^(3) + (-)^τ^2 + τ^3ϵ_2^(3)).
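That the deformed local equation describes a two-sphere can be made explicit by a standard change of variables for this A_1-type singularity (a worked step added here for clarity; for ε_α^(3) > 0 the sphere sits in the region ỹ^2 < 0):

```latex
\begin{align}
  u = \tfrac{1}{2}\big(\tilde{x}_1 + \tilde{x}_2\big), \qquad
  w = \tfrac{1}{2i}\big(\tilde{x}_1 - \tilde{x}_2\big)
  \quad\Longrightarrow\quad
  \tilde{x}_1 \tilde{x}_2 = u^2 + w^2 ,
\end{align}
so that the local hypersurface equation becomes
$u^2 + w^2 - \tilde{y}^2 = 2\,\varepsilon_\alpha^{(3)}$.
Restricting to real $u,w$ and purely imaginary $\tilde{y} = i\,t$ yields
\begin{align}
  u^2 + w^2 + t^2 = 2\,\varepsilon_\alpha^{(3)} ,
\end{align}
i.e. a two-sphere $S^2$ of radius $\sqrt{2\,\varepsilon_\alpha^{(3)}}$, in agreement
with the square-root scaling of the exceptional cycle volumes found in the other
twisted sectors.
```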
By turning on the deformation parameter ε_2^(3), an exceptional two-cycle e_2^(3) with non-vanishing volume grows out of the singular point (23) on T^4_(3)/_2^(3), and depending on the sign of the deformation the volumes of the two-cycles aI×bI^0 and aIV×bII^0 will shrink or grow:

* ε_2^(3)>0: aI×bI^0 and aIV×bII^0 still form two separate two-cycles with reduced size as an exceptional two-cycle emerges out of the singular point (23) in the region ỹ^2 <0, as depicted in the upper diagram of figure <ref> (b). The local hypersurface equation (<ref>) reduces to the equation of a two-sphere S^2 with radius √(2 ε_2^(3)), when we impose the algebraic condition x̃_1 = x̃_2 for the exceptional cycle. The exceptional two-cycle e_2^(3) is calibrated with respect to Re(Ω_2), a feature supported by the fact that the cycles aI×bI^0 and aIV×bII^0 remain sLag for positive deformations. Translating these considerations to the fractional three-cycle with exceptional part displayed in equation (<ref>), we find that the exceptional three-cycle ϵ_2^(3) is calibrated with respect to Re(Ω_3) and that there should be a relative minus sign between the bulk three-cycle Π^ bulk and its contribution to Π^_2^(3), i.e. (-)^τ^_2^(3) + τ^2+ τ^3=-1, in order for the volume of the fractional three-cycle to decrease for positive deformations. The cycles aI×bII^0 and aIV×bI^0 on the other hand are no longer sLag on their own and melt together to one big two-cycle, still calibrated with respect to Im(Ω_2). This bigger cycle is described by the union two-cycle aI×bII^0⊕aIV×bI^0 from which the exceptional cycle e_2^(3) has been eliminated.
* ε_2^(3)<0: we observe the opposite picture for negative deformations, namely the two-cycles aI×bI^0 and aIV×bII^0 have merged together to one big two-cycle as shown in the lower diagram of figure <ref> (b). This can be traced back to the fact that the exceptional two-cycle is now calibrated with respect to Im(Ω_2).
By taking the union two-cycle aI×bI^0⊕aIV×bII^0, we can ensure that the exceptional two-cycle drops out, such that the union two-cycle remains a sLag two-cycle. For negative deformation parameters, the sizes of the two-cycles aI×bII^0 and aIV×bI^0 shrink, in line with the consideration that the exceptional two-cycle e_2^(3) is now calibrated by the same two-form as both two-cycles. The S^2 topology of the exceptional cycle with radius √(-2ε_2^(3)) follows from equation (<ref>) by restricting to the slice x̃_1 = - x̃_2. Combining the two-cycles with the one-cycle bII^0 on T_(3)^2 allows for the construction of fractional three-cycles, whose volumes are now decreasing for increasing |ε_2^(3)|. This implies a relative minus sign between Π^ bulk and the contribution of ϵ_2^(3) to Π^_2^(3) for the fractional three-cycle.

If we want to turn to a global description of the fractional three-cycles located at the deformed singularities, we have to consider the full hypersurface equation (<ref>) and impose the algebraic conditions x_1= ±x_2 (beyond a neighbourhood of the original singular point and possibly after acting with an appropriate Möbius transformation on the coordinates), allowing us to determine the fixed loci of the orientifold projection. Note, however, that the resulting equation does not reduce to a real hypersurface equation, not even for vanishing deformations. The inability to describe _2^(3) deformations globally is an immediate consequence of the different complex structures on T_(1)^2 and T_(2)^2, which prevent the conditions x_1= ±x_2 from representing the fixed loci of the orientifold projection globally. Nevertheless, we are able to extract information about the functional dependence of the exceptional cycle volume on e.g.
the deformation parameter ε_2^(3) by using the following strategy: compute the (normalised) volume of the bulk three-cycle aIII× C^0×bI^0 under non-vanishing deformation ε_2^(3) and subtract it from the (normalised) volume of the fractional three-cycle with integration contours completely along the real lines Re(x_i)≥ 1 in the complex x_i=1,2,3-plane. The result of this computation is shown in the left panel of figure <ref> and exhibits a square-root like dependence on the parameter ε_2^(3) for the (normalised) exceptional cycle volume. The behaviour of the (normalised) fractional three-cycle volumes Vol(Π^ frac_±) under non-zero deformation ε_2^(3) is shown in the middle and right panel of figure <ref>. As expected, the three-cycle Π^ frac_- is characterised by a shrinking volume for increasing ε_2^(3), while the volume of the three-cycle Π^ frac_+ increases for growing ε_2^(3). Given that the other (_3-orbits of) _2^(3) exceptional divisors are related to e_2^(3) by virtue of a Möbius transformation, the volumes of the other exceptional cycles reproduce the same structure under their respective deformation as the one presented in figure <ref>. Due to the exchange symmetry T_(2)^2↔ T_(3)^2, we can straightforwardly transpose the entire analysis into the _2^(2) sector, where the same conclusions can be drawn for the _2^(2) exceptional and fractional three-cycle volumes. One subtle difference arises between the _2^(2)- and _2^(3)-twisted sectors when the choice of the exotic O6-plane is taken into account. If either the _2^(2)- or _2^(3)-plane is taken to be the exotic O6-plane, their respective RR-charges are opposite, i.e. η_(2) = - η_(3), resulting in a different decomposition into -even and -odd cycles for both sectors, according to table <ref>, as will be discussed in the following section in terms of prototypical global D6-brane models.
§ DEFORMATION MODULI IN GLOBAL D6-BRANE MODELS

In this section, we apply the findings of geometric deformations in section <ref> to global D6-brane models of phenomenological interest and discuss which moduli are stabilised at the orbifold point or constitute flat directions of the global model affecting physical gauge couplings, either by a direct tree-level dependence or only via higher order and/or non-perturbative effects.

§.§ Some generic considerations

In section <ref>, the different types of three-cycles on the T^6/(_2 ×_6 ×) orientifold with discrete torsion were briefly reviewed. While a D6-brane a by itself wraps a fractional three-cycle Π_a of the form (<ref>), it is generically accompanied by its orientifold image a' with associated three-cycle Π_a'. The global model remains inert under the exchange when simultaneously changing the gauge representations for their conjugates, e.g. . The scalar potential only depends on the sum of the two <cit.>,

V_scalar^NS-NS ⊃ ∑_a N_a [ Vol(Π_a) + Vol(Π_a') ] - Vol(Π_O6)  { = 0 if all D6_a-branes are sLag ;  > 0 else }
, which leads to the following qualitative situations observed first in the context of and models with discrete torsion in <cit.>, see also <cit.>:

* The D6-brane a couples to the _2^(i)-twisted deformation modulus ζ^(i)_α such that the sLag condition is violated for a non-vanishing vev ⟨ζ^(i)_α⟩∼√(ε^(i)_α). The deformation modulus ζ^(i)_α itself can be seen as the period associated to the _2^(i) exceptional three-cycle δ̃_α^(i): ζ^(i)_α = ∫_δ̃_α^(i)Ω_3, where the Calabi-Yau three-form is defined in equation (<ref>) and the three-cycle δ̃_α^(i) is an -odd linear combination of the exceptional three-cycles (ϵ_α^(i),ϵ̃_α^(i)) in line with table <ref>.
* The D6-brane a couples to the _2^(i)-twisted deformation modulus ζ^(i)_α and stays sLag for arbitrary values of ⟨ζ^(i)_α⟩∼√(ε^(i)_α). In this case the deformation modulus ζ^(i)_α corresponds to the period associated to an -even linear combination of the exceptional three-cycles (ϵ_α^(i),ϵ̃_α^(i)) following table <ref>.
* The D6-brane a does not couple directly to the _2^(i)-twisted deformation modulus ζ^(i)_α.

In the first case, it is argued that - from a low-energy field theory point of view - the stack of N_a D6_a-branes supports a U(N_a) gauge group, and the U(1)_a factor within accounts for a D-term potential with Fayet-Iliopoulos term D_a ∝√(ε^(i)_α), whose numerical prefactor is fixed by the associated orientifold-odd combination of the exceptional wrapping numbers (x^(i)_α,a, y^(i)_α,a), cf. table <ref> for T^6/(_2 ×_6 ×) with discrete torsion.[As discussed in the previous section, fractional three-cycles lose their sLag property under deformation when one of their resolved exceptional three-cycles is no longer calibrated with respect to the same volume-form as the bulk three-cycle (and thereby also the orientifold fixed planes).
According to equation (<ref>) this results in a positive contribution to the total tension of the D6-branes and O6-planes, upon pursuing the dimensional reduction of the corresponding DBI-actions. Extracting the functional dependence of the FI-parameter on the deformation parameter then follows by computing the volume of the D6-brane on the resolved background and subtracting the tension of the O6-planes. In case the O6-planes are calibrated w.r.t. Re(Ω_3), the positive contribution to the NS-NS scalar potential will scale as ( ∫_Π_a Im(Ω_3) )^2 for small deformations, which is understood from a four-dimensional perspective as (the square of) a FI-parameter <cit.>.] The appearance of the Fayet-Iliopoulos D-term leads to the stabilisation of the deformation modulus at the singular orbifold point, i.e. ⟨ζ^(i)_α⟩ =0.[There exists in principle the possibility that the vevs of charged scalars belonging to a vector-like pair in the bifundamental representation compensate the Fayet-Iliopoulos term ⟨ζ^(i)_α⟩, but due to the form of the scalar potential V_scalar∼∑_z D_z^2, such a vev can only circumvent the stabilisation of the deformation modulus if the two gauge factors have equal rank, i.e. instead a gauge symmetry breaking SU(N) × SU(N) ⟨(,)⟩ or ⟨(,)⟩≠ 0⟶ SU(N)_diag occurs <cit.>.] In the second case, the D6_a-brane stack has only an orientifold-even exceptional wrapping number, whereas in the last case, both orientifold-even and -odd wrapping numbers of the associated exceptional cycle vanish. The scalar potential (<ref>) possesses a flat direction in the deformation modulus ζ^(i)_α if for all D6-brane stacks in a given global model the second or third case applies, as we will first demonstrate in a global model with USp(2)^4 gauge group in section <ref>.
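The stabilisation mechanism can be illustrated schematically: modelling each D-term as D_a ∝ y_a √ε, with y_a the relevant orientifold-odd exceptional wrapping number (prefactors set to one, an assumption of this sketch), the positive-definite potential V ∼ Σ_a D_a² forces ε to zero as soon as a single stack carries a non-vanishing odd wrapping number, and is flat in ε otherwise:

```python
def d_term(y_odd, eps):
    # Schematic Fayet-Iliopoulos D-term, D_a ~ y_odd * sqrt(eps);
    # the model-dependent prefactor is set to 1 in this sketch.
    return y_odd * abs(eps) ** 0.5

def scalar_potential(odd_wrapping_numbers, eps):
    # Schematic V ~ sum_a D_a^2 over the D6-brane stacks.
    return sum(d_term(y, eps) ** 2 for y in odd_wrapping_numbers)

# One stack with non-zero odd exceptional wrapping number (case 1):
# the potential is minimised only at eps = 0, stabilising the modulus.
v_stabilised = [scalar_potential([2, 0], e) for e in (0.0, 0.1, 0.2)]
# All odd wrapping numbers vanish (cases 2 and 3): eps is a flat direction.
v_flat = [scalar_potential([0, 0], e) for e in (0.0, 0.1, 0.2)]
```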
The four-dimensional gauge couplings at tree-level are obtained by dimensionally reducing the (6+1)-dimensional Dirac-Born-Infeld (DBI) action along the compact cycle Π_a and its orientifold image Π_a' of the D6_a-brane worldvolume <cit.>,

4π/g_a,tree^2 = 1/(16 √(2) k_a) · M_Planck/M_string · Vol(Π_a + Π_a')/√(Vol_6),  with k_a = 1 for SU(N_a) and k_a = 2 for SO/USp(2N_a),

with the toroidal three-cycle volume at the orbifold point of the aAA lattice of given by:

Vol(Π_a)/√(Vol_6)|_orb = Vol(Π_a')/√(Vol_6)|_orb = ∏_i=1^3 L_a^(i) = √(R_1^(1)/R_2^(1) (n^1)^2 + R_2^(1)/R_1^(1) (m^1)^2) ∏_i=2^3 √(2/√(3) ((n^i)^2 + n^i m^i + (m^i)^2)) = √(R_1^(1)/R_2^(1)) × { 2/√(3) if a || the -invariant plane ; 2 √(3) if a || the _2^(1)-invariant plane },

where R_i^(1) are the radii associated to the one-cycles π_i=1,2 of the a-type torus T^2_(1) and r_j the length scales of the hexagonal A-type tori T^2_(j), j∈{2,3} appearing in the definition of the two-torus volumes in equation (<ref>). In the second case, the flat direction affects the strength of the gauge coupling, whereas in the third case the gauge coupling only feels the flat direction via (higher-order and non-perturbative) backreactions of the deformation on the toroidal cycles. In sections <ref>, <ref> and <ref>, we will present how the three different cases appear in several globally consistent prototype MSSM, L-R symmetric and PS models, respectively. For the T^6/(_2 ×_6 ×) orientifold with discrete torsion, aAA lattice background and the choice of _2^(3) as the exotic O6-plane orbit, the orientifold-even and -odd wrapping numbers are obtained from , and , where (P_a,Q_a,U_a,V_a) correspond to the four different products of toroidal wrapping numbers appearing in equation (<ref>) and (x^(i)_α,a, y^(i)_α,a) to the exceptional wrapping numbers in equation (<ref>). The latter have also been associated to the deformation parameters ε^(i)_α in table <ref>. At the singular orbifold point, only relative _2 ×_2 eigenvalues among pairs of D6-brane stacks a and b are of physical importance; e.g.
for the counting of chiral fermions the intersection numbers need to be computed. For illustrative purposes, we choose here the D6_a-brane stack supporting the strong interaction SU(3)_a (or SU(4)_a for PS models) to have (_2^(1), _2^(2), _2^(3)) eigenvalues (+++). The details of a prototype MSSM example, four different L-R symmetric examples and two PS examples are provided in sections <ref>, <ref> and <ref>, respectively. The D6-brane models presented in these sections form prototypes for the respective supersymmetric extensions of the Standard Model with three chiral generations of quarks and leptons and a minimal amount of (chiral) exotic matter (in the adjoint, symmetric and/or antisymmetric representation under the strong and weak interactions) constructed with ϱ-independent - i.e. all D6-branes along the one-cycle π_1 of T^2_(1) - supersymmetric sLags at the orbifold point of T^6/(_2 ×_6 ×) with discrete torsion. The discrete parameter choice in the D6-brane configurations is not unique, as different choices for the discrete Wilson lines, displacements, or _2^(i) eigenvalues may also yield global D6-brane models satisfying the same stringent conditions as the prototype D6-brane configurations. Nonetheless, such a global D6-brane model (with different discrete parameters) will always be characterised by the same massless (open and closed) string spectrum as one of the prototype D6-brane models.
In other words, all global intersecting D6-brane models with three chiral generations of quarks and leptons and a minimal amount of exotic matter constructed with ϱ-independent supersymmetric sLags at the orbifold point will be classifiable by the massless string spectra of one of the prototype models considered here. At this point, we anticipate that the bulk cycles of all D6-branes in the prototype models are either parallel to the - or the _2^(1)-invariant (orbit of) O6-plane(s), and orientifold image branes - such as a and a' - thus only differ in their _2 ×_2 eigenvalues, with the precise relation depending on the discrete Wilson lines and displacements (τ⃗,σ⃗), see table <ref> for a compact summary. Further anticipating the decomposition of bulk and exceptional wrapping numbers into orientifold-even and -odd parts as summarised in table <ref>, we can summarise the counting of stabilised moduli per model as well as flat directions affecting the gauge couplings according to the relation (<ref>) in table <ref>. Details of deformations of the global MSSM, L-R symmetric and PS models will be discussed in sections <ref>, <ref> and <ref>, respectively.

§.§ Deformations in a global USp(2)^4 model

Before discussing phenomenologically interesting particle physics models on D6-branes with all their intricacies of stabilised moduli, here we first consider deformations in a global model with USp(2)^4 gauge group, which a priori is expected to contain only flat supersymmetric directions. To obtain gauge group enhancement on all D6-brane stacks and consistency with the bulk RR tadpole cancellation conditions <cit.> for the choice of the _2^(3)-orbit as exotic O6-plane, the bulk part of each fractional three-cycle has to be either parallel to the - or the _2^(1)-invariant orbit.
In either case, only gauge group enhancement of the type U(N) ↪ USp(2N) occurs for arbitrary choices of _2 ×_2 eigenvalues and arbitrary discrete Wilson line and displacement parameters (τ^1,σ^1) on the a-type torus T^2_(1), see table 10 of <cit.> for details. With the discrete Wilson lines and displacements (2τ^2τ^3;2σ^2σ^3) along T^4_(1) listed as lower index, such three-cycles have the form:

Π^|| _(201;211) =ρ_1/4+ (-1)^τ^_2^(1)/4(ϵ^(1)_5- ϵ^(1)_4)+ (-1)^τ^_2^(2)/4([ ϵ^(2)_κ_1-2ϵ̃^(2)_κ_1 ] + (-1)^τ^1 [ ϵ^(2)_κ_2-2ϵ̃^(2)_κ_2 ])- (-1)^τ^_2^(3)/4(ϵ^(3)_κ_1 + (-1)^τ^1 ϵ^(3)_κ_2),

Π^|| _2^(1)_(21τ^3;210) = 3ρ_1/4+ (-1)^τ^_2^(1)+τ^3/4(ϵ^(1)_4 - ϵ^(1)_5 ) -(-1)^τ^_2^(2)+τ^3/4([ϵ^(2)_κ_1 - 2 ϵ̃^(2)_κ_1 ] + (-1)^τ^1 [ϵ^(2)_κ_2 - 2 ϵ̃^(2)_κ_2 ] ) - 3 (-1)^τ^_2^(3)/4(ϵ^(3)_κ_1 +(-1)^τ^1 ϵ^(3)_κ_2),

and the remaining two possibilities of gauge group enhancement, Π^|| _(2τ^21;201) with arbitrary discrete Wilson line and Π^|| _2^(1)_(210;211), are obtained from those by choosing identical values of (τ⃗;σ⃗) along (T^2)^3 and replacing the _2 ×_2 eigenvalues (τ⃗^_2) as follows:

(τ^_2^(1)+τ^2+1, τ^_2^(2), τ^_2^(3)+τ^2+1 )^|| _(2τ^21;201) ↔ (τ⃗^_2)^|| _(201;211),
(τ^_2^(1)+ τ^3 + 1, τ^_2^(2) + τ^3 +1, τ^_2^(3) )^|| _2^(1)_(21τ^3;210) ↔ (τ⃗^_2)^|| _2^(1)_(210;211).

Π^|| _(201;211) with the choice of _2 ×_2 eigenvalues (+++) and discrete data τ^1=0=σ^1 on T^2_(1) only differs from the QCD stack a of all phenomenologically appealing models discussed later on in this article in the choice of one discrete Wilson line, τ^2, cf. e.g. the MSSM D6-brane configuration in table <ref>. Π^|| _2^(1)_(21τ^3;210) with the choice τ^3=0 on the other hand corresponds to the left- and right-symmetric D6-branes b and c, respectively, of all L-R symmetric and the PS prototype II model, cf. tables <ref> and <ref>, for the choice of _2 ×_2 eigenvalues (+++) and (-+-), respectively. Moreover, Π^|| _2^(1)_(21τ^3;210) with the choice τ^3=0 corresponds to stack b (and c) of the MSSM (and the PS prototype I), cf.
table <ref> (and <ref>), for a different choice of _2 ×_2 eigenvalues (–+) (and (+–)). The D6-brane data of a global model with USp(2)^4 gauge group enhancement satisfying all bulk and twisted RR tadpole cancellation conditions[Notice that the K-theory constraints are trivially satisfied in this model.] is presented in table <ref>,

     wrapping numbers    Angle/π        _2^(i) eigenvalues   (τ⃗)      (σ⃗)      gauge group
  a  (1,0;1,0;1,0)       (0,0,0)        (+++)                (0,0,1)   (0,1,1)   USp(2)
  d  (1,0;1,0;1,0)       (0,0,0)        (-+-)                (0,0,1)   (0,1,1)   USp(2)
  b  (1,0;-1,2;1,-2)     (0,1/2,-1/2)   (+++)                (0,1,0)   (0,1,0)   USp(2)
  c  (1,0;-1,2;1,-2)     (0,1/2,-1/2)   (-+-)                (0,1,0)   (0,1,0)   USp(2)

Table <ref>: D6-brane configuration of a global USp(2)^4 model on the aAA lattice of T^6/(_2 ×_6 ×)_η=-1 with discrete torsion and with _2^(3) as the exotic O6-plane orbit. The D6-branes b and c are chosen identical to all L-R symmetric models and the PS II model in tables <ref> and <ref>, respectively.

and the non-vanishing bulk and exceptional wrapping numbers are displayed here in table <ref> for convenience and for comparison with those of the phenomenologically interesting models in table <ref>. They are obtained from the expansion of the following fractional three-cycles:

Π_a,d = ρ_1/4∓1/4(ϵ^(1)_4- ϵ^(1)_5)+ 1/4([ ϵ^(2)_1-2ϵ̃^(2)_1 ] + [ ϵ^(2)_2-2ϵ̃^(2)_2 ])∓1/4( ϵ^(3)_1+ ϵ^(3)_2),
Π_b,c = 3ρ_1/4±1/4(ϵ^(1)_4 - ϵ^(1)_5 )-1/4( [ϵ^(2)_1 - 2 ϵ̃^(2)_1 ] +[ϵ^(2)_2 - 2 ϵ̃^(2)_2 ] ) ∓3/4(ϵ^(3)_1 + ϵ^(3)_2).

The massless matter spectrum of this D6-brane configuration is displayed in table <ref> and agrees in the bb, bc and cc sectors by construction with that of the L-R symmetric models and the PS II model displayed in table <ref>. Here, it is noteworthy that at the orbifold point only the relative _2 ×_2 eigenvalues among the D6-branes a, b, c and d are of physical relevance.
The absolute _2 ×_2 eigenvalues in all three _2^(i)-twisted sectors will, however, become important when switching on deformations since the gauge couplings scale with the cycle volume, 1/g^2_z,tree∝Vol(Π_z) for Π_z = Π_z', and the change in Vol(Π_z) is e.g. proportional to y^(2)_1,z·√(ε^(2)_1) with the absolute sign of y^(2)_1,z =± 2 specified in table <ref> for the choice of discrete D6-brane data in table <ref>. Exchanging (+++) ↔ (-+-) in the D6-brane configuration of table <ref> will obviously only pairwise permute the D6-brane labels a ↔ d and b ↔ c, while the sign flip (+++)(-+-)↔(–+)(+–) provides a physically distinct model once deformations along the directions of non-vanishing exceptional wrapping numbers listed in table <ref> are switched on. The D6-branes b and c in this latter case agree with those of the PS prototype I model specified in table <ref>, whose b-brane in turn agrees with the MSSM stack b of the weak interactions in table <ref>. Since the non-vanishing wrapping numbers in table <ref> are only of orientifold-even type - as expected for a model with just USp-gauge factors - each gauge coupling feels flat directions along the untwisted complex structure ϱ of the a-type two-torus T^2_(1) and the twisted complex structure moduli ζ^(i)_α associated to the deformation parameters √(ε^(i)_α)∼⟨ζ ^(i)_α⟩ with (i,α) ∈{(1, 4-5),(2,1),(2,2),(3,1),(3,2) }. All other twisted complex structure moduli only affect the gauge couplings through higher-order, e.g. field redefinitions at loop level, or non-perturbative effects. It is also important to stress that, while the {b,c} sector of the USp(2)^4 model locally agrees with the L-R symmetric models in section <ref> and the PS II model in section <ref> (or the PS I model in section <ref> upon flipping the _2^(1)×_2^(2) eigenvalues), the global models differ significantly: all phenomenologically appealing models contain at least two U(N_1) × U(N_2) gauge factors with N_1 ≠ N_2.
The QCD stack with either U(3)_a or U(4)_a gauge symmetry e.g. couples by an orientifold-odd exceptional wrapping number to the two _2^(3)-twisted deformation moduli ζ^(3)_1,2, which leads to their stabilisation, i.e. ε^(3)_1 = 0 = ε^(3)_2. Except for the L-R symmetric model of prototype I, the spectrum does not contain any other U(3) gauge factor, and even in the L-R symmetric prototype I the two hidden stacks with U(3)_h_1× U(3)_h_2 symmetry do not possess orientifold-odd wrapping numbers along the directions ζ^(3)_1,2, as can be read off from table <ref>. In all phenomenologically appealing prototypes presented in this article, the Fayet-Iliopoulos term(s) generated by some non-vanishing deformation parameter(s) ε^(3)_1,2 can thus not be compensated by any supersymmetry-preserving vev of some charged scalar matter field. The same argument involving the QCD stack a applies to two deformations, ε^(1)_3 and ε^(1)_4+5, from the _2^(1)-twisted sector. Further stabilisations of deformation moduli involving the remaining stacks with unitary gauge groups will be discussed below on a model-by-model basis. As can be read off from the summary in table <ref>, all phenomenologically appealing examples share the property that the D6-brane b of weak interactions – and in all L-R symmetric and PS models also the right-symmetric D6-brane c – experience a flat direction in the deformation parameter ε^(1)_4-5 changing the value of the respective gauge coupling(s). The wrapping numbers and discrete displacements of the D6-brane configuration in table <ref> indicate that the branes a,d are represented by the three-cycle aI×bIII^0×bIII^0 and the branes b,c by the three-cycle aI×bIV^0×bII^0 in the hypersurface formalism, following the dictionary at the singular orbifold point in section <ref>.
The deformation ε_4-5^(1) in the _2^(1)-twisted sector allows for resolved exceptional three-cycles calibrated with respect to the same Re(Ω_3) as the bulk parts of the four fractional three-cycles in the parameter region ε_4-5^(1)≤ 0, and similarly the deformations ε_1,2^(i) in the _2^(i=2,3)-twisted sectors yield correctly calibrated (resolved) exceptional three-cycles in the parameter regions ε_1,2^(i)≤ 0. In both cases, this can be verified by applying the Möbius transformation λ_4 on the cycles bIII^0 and bIV^0, after which one can study the (-projected) contour plots that form the equivalent to figures <ref> and <ref> in section <ref>. To obtain the functional behaviour of the fractional three-cycle volumes under the deformations, we first determine which bulk three-cycles and which exceptional cycles at _2^(i) singularities are wrapped by the fractional three-cycles in table <ref>, cf. equation (<ref>). Then, we reconstruct the (normalised) volume of the fractional three-cycles using the results for the exceptional three-cycle volumes from sections <ref> and <ref> as primary building blocks, in the same spirit as the method presented on page <ref>. More precisely, we compute the fractional three-cycle volumes directly as the sum or difference of (normalised) bulk three-cycle volumes and (normalised) exceptional three-cycle volumes as computed in the aforementioned sections. The results of these computations are presented in figures <ref> and <ref> for branes {a,d} and {b,c}, respectively, for the deformation parameter ε_4-5^(1) in the _2^(1)-twisted sector and in figure <ref> for all branes {a,b,c,d} for the deformation parameter ε_2^(2) in the _2^(2)-twisted sector.
Through the relationship (<ref>) between the tree-level gauge coupling of the gauge theory living on a D6-brane and the volume of the three-cycle wrapped by the D6-brane internally, we can deduce the qualitative behaviour of the gauge coupling when going away from the singular orbifold point by switching on deformations along flat directions. More explicitly, in this toy model we observe in figures <ref> and <ref> that the volumes of the fractional cycles wrapped by branes a and c decrease for increasing deformation parameter |ε^(1)_4-5| in the _2^(1)-twisted sector, implying that the gauge couplings of their respective gauge theories increase when going away from the singular orbifold point. The gauge couplings of branes b and d on the other hand are weaker at points in the moduli space away from the orbifold point, as the fractional cycle volumes wrapped by branes b and d grow for non-zero deformation along the flat direction in the _2^(1)-twisted sector, as depicted in figures <ref> and <ref>. For the _2^(2) deformations, we observe in figure <ref> that branes a and d obtain a stronger gauge coupling, whereas branes b and c have a weaker gauge coupling away from the singular orbifold point.
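As a concrete sketch of this relationship, the normalised toroidal three-cycle volume of equation (<ref>) can be evaluated directly from the torus wrapping numbers; the two bulk cycles appearing in the USp(2)^4 model then reproduce the values 2/√3 (branes a, d, parallel to the ΩR-invariant plane) and 2√3 (branes b, c, parallel to the ΩRℤ_2^(1)-invariant plane) quoted earlier, and the tree-level couplings scale inversely with these volumes. Variable names and the default complex-structure ratio are choices of this sketch:

```python
import math

def vol_norm(n1, m1, n2, m2, n3, m3, rho=1.0):
    """Normalised three-cycle volume Vol(Pi)/sqrt(Vol_6) at the orbifold
    point, following the text: an a-type T^2_(1) with complex-structure
    ratio rho = R_1^(1)/R_2^(1) (default 1, an assumption of this sketch)
    times two hexagonal A-type tori."""
    l1 = math.sqrt(rho * n1 ** 2 + m1 ** 2 / rho)
    l23 = 1.0
    for n, m in ((n2, m2), (n3, m3)):
        l23 *= math.sqrt(2.0 / math.sqrt(3.0) * (n * n + n * m + m * m))
    return l1 * l23

# Branes a, d: bulk cycle (1,0;1,0;1,0), parallel to the OmegaR-plane.
v_ad = vol_norm(1, 0, 1, 0, 1, 0)        # -> 2/sqrt(3)
# Branes b, c: bulk cycle (1,0;-1,2;1,-2), parallel to the OmegaR Z2^(1)-plane.
v_bc = vol_norm(1, 0, -1, 2, 1, -2)      # -> 2*sqrt(3)
# Tree-level gauge couplings scale as g^2 ~ 1/Vol: the larger cycle of
# branes b, c yields the weaker coupling at the orbifold point.
```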
The computations for the deformation parameter ε_1^(2) in the _2^(2)-twisted sector follow the same logic and reproduce exactly the same functional dependence of the various D6-brane volumes as in figure <ref>, as can be inferred from the exceptional wrapping numbers in table <ref>. The deformations in the _2^(3)-twisted sector require exactly the same method as the one used for the _2^(2)-twisted sector, which leads (qualitatively) to similar functional dependences of the fractional cycle volumes on the parameters ε_1,2^(3) as presented in figure <ref>: the volumes of branes a and b exhibit the functional behaviour of the left panel, while the volumes of the branes c and d increase for growing deformations |ε_1,2^(3)| as in the middle and right panel of figure <ref>. Hence, the gauge groups USp(2)_a and USp(2)_b acquire a stronger gauge coupling at the string scale M_string, whereas the gauge groups USp(2)_c and USp(2)_d acquire a weaker gauge coupling under deformations along the two flat directions in the _2^(3)-twisted sector.§.§ Deformations in a global MSSM model In <cit.>, we performed a systematic search for MSSM-like models on T^6/(_2 ×_6 ×) with discrete torsion and found that there is a unique choice of bulk parts supporting the Standard Model gauge group SU(3)_a × SU(2)_b × U(1)_Y ⊂ U(3)_a × USp(2)_b × U(1)_c × U(1)_d, and that all seemingly different choices of discrete (Wilson line, displacement and _2 ×_2 eigenvalue) parameters lead to the same massless matter spectrum when completing to a global five-stack model with one `hidden' stack, see the discussion in section 4.1 of <cit.> for details. The D6-brane configuration displayed in table <ref> agrees with the previous example from <cit.> in all relative _2 ×_2 eigenvalues and absolute values of Wilson line and displacement parameters and is thus identical at the orbifold point.
For the sake of comparing the different phenomenologically appealing models, we choose here, however, the absolute _2 ×_2 eigenvalues (+++) for the (up to rank) common QCD stack a. [ 7|c|D6-brane configuration of a global 5-stack MSSM model on the aAA lattice;wrapping numbersAngle/π _2^(i) eigenvalues (τ⃗) (σ⃗)gauge group;a(1,0;1,0;1,0)(0,0,0)(+++)(0,1,1)(0,1,1) U(3);b(1,0;-1,2;1,-2) (0,1/2,-1/2) (–+)(0,1,0)(0,1,0) USp(2);c(1,0;-1,2;1,-2)(0, 1/2,-1/2)(-+-)(0,1,1)(0,1,1) U(1);d(1,0;-1,2;1,-2)(0, 1/2,-1/2) (+–)(0,0,1)(0,0,1) U(1);h(1,0;1,0;1,0)(0,0,0) (–+)(0,1,1)(0,1,1) U(4);] 5stackMSSMaAAPrototypeID6-brane configuration of a global five-stack model with gauge group SU(3)_a × SU(2)_b × U(1)_Y × SU(4)_h ×_3 after Green-Schwarz mechanism, which leads to the orientifold-even and -odd wrapping numbers labelled by MSSM in table <ref>.The massless open string spectrum of this MSSM example is for convenience displayed in table <ref>. [ 8|c|Overview of the massless matter spectrum of a global 5-stack MSSM on the aAA lattice ofT^6/(_2 ×_6 ×);sector(U(3)_a ×USp(2)_b ×U(4)_h)_U(1)_c×U(1)_d Q_Y_3sector(U(3)_a ×USp(2)_b ×U(4)_h)_U(1)_c×U(1)_d Q_Y_3;ab 3 ×(, , )_(0,0) 1/6 0 aa' 2×[ (_,,)_(0,0) + h.c.]±1/3 0;ac6 ×(, ,)_(1,0) 1/3 1bb5 ×(,_,)_(0,0) 0 0;ad3 ×(, , )_(0,-1)-1/3 1cc 4 ×(,,)_(0,0) 0 0; ad'3 ×(, , )_(0,-1)-2/3 1dd 5 ×(,,)_(0,0) 0 0;bc3 ×(, ,)_(1,0) + 3 ×[ (, ,)_(1,0) + h.c. ]1/2,±1/21,1||2 dd' [(,,)_(0,2) + h.c.]±11||2; 5-8bd6 ×(,,)_(0,-1)+2 ×[ (,,)_(0,1)+ h.c. ]-1/2, ±1/21,2||1bh 3 ×(,,)_(0,0) 0 2;cd 3 ×(,,)_(-1,1) +3 ×[ (,,)_(-1,1) + h.c. ] 01,1||2 ch'6 ×(,,)_(-1,0)-1/2 0; cd'3 ×(,,)_(1,1)+3 ×[ (,,)_(1,1) + h.c. ] 1, ±1 0dh 3 ×(,,)_(0,1) 1/2 0; 1-4ah 2 ×[ (,,)_(0,0)+h.c.]±1/61||2 dh' 3 ×(,,)_(0,1) 1/2 1; ah' [(,,)_(0,0)+h.c.]±1/6 2 ||1 hh' 2 ×[ (,,6_)_(0,0) + h.c.] 
01||2; ] 5stackMSSMaAAPrototypeISpectrumChiral and non-chiral massless (open string) matter spectrum of the five-stack D6-brane model from table <ref> with gauge group SU(3)_a× USp(2)_b × U(1)_Y × SU(4)_h ×_3 after the Green-Schwarz mechanism has been taken into account. For vector-like states, different charges under the discrete _3 ⊂ U(1)_c + 2 U(1)_d + 2 U(1)_h symmetry are denoted using the logic symbol ||. The closed string sector for this model - as well as all other explicit examples discussed in this article - contains (h^11_+,h^11_-,h^21)=(4,15,19) vectors, Kähler moduli and complex structure moduli, respectively. Ignoring the possibility of vevs for charged scalars as suggested in footnote <ref> on page Footnote:SUNSUNFI for the moment, we can read off from table <ref> that the QCD stack a and the `hidden' stack h each couple with orientifold-odd wrapping numbers to the deformation moduli ζ^(1)_3,4+5 and ζ^(3)_1,2, while the D6-brane c couples in an orientifold-odd way to ζ^(1)_3,4+5 and ζ^(2)_1,2 and the D6-brane d to ζ^(2)_1,2 and ζ^(3)_1,2. This MSSM example thus allows for the stabilisation of at most six of the 14 _2-twisted deformation moduli as displayed in table <ref>. Out of the remaining eight _2-twisted deformation moduli, only ζ^(1)_4-5 couples in an orientifold-even way to the two D6-branes b and d. The associated gauge couplings of SU(2)_b ≃ USp(2)_b and U(1)_Y = U(1)_a/6 + [U(1)_c+U(1)_d]/2 thus experience one flat direction at tree-level.
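As a concrete consistency check of the hypercharge combination U(1)_Y = U(1)_a/6 + [U(1)_c + U(1)_d]/2, one can recompute a few Q_Y entries of the spectrum table from the Abelian charges. The U(1)_a charge signs below are inferred so as to match the listed Q_Y values (the representation symbols of the table are not reproduced here), and should be read as illustrative assumptions:

```python
from fractions import Fraction as F

def hypercharge(q_a, q_c, q_d):
    # massless hypercharge of this model: U(1)_Y = U(1)_a/6 + [U(1)_c + U(1)_d]/2
    return F(q_a, 6) + F(q_c + q_d, 2)

# spot checks against the Q_Y column of the spectrum table
# (U(1)_a charge signs are inferred, not read off from the table)
assert hypercharge(+1, 0, 0) == F(1, 6)    # ab sector: left-handed quarks
assert hypercharge(-1, 1, 0) == F(1, 3)    # ac sector
assert hypercharge(+1, 0, -1) == F(-1, 3)  # ad sector
assert hypercharge(-1, 0, -1) == F(-2, 3)  # ad' sector
```

Using exact rationals avoids any floating-point ambiguity in the sixths and halves entering the charge bookkeeping.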
The remaining seven _2-twisted deformation moduli ζ^(1)_0,1,2 and ζ^(2,3)_(3,4) on the other hand do not couple directly to any D6-brane in this global model and can at most change the couplings in the low-energy effective MSSM Lagrangian via non-perturbative or higher order corrections. Let us emphasise here that at the singular orbifold point, the volume of a given fractional three-cycle and thereby the tree-level gauge coupling only depends on the untwisted complex structure modulus ϱ∝R^(1)_2/R^(1)_1 as stated in equations (<ref>) and (<ref>), while the untwisted Kähler moduli v_i, i∈{1,2,3} defined in equation (<ref>) influence the gauge couplings once the one-loop gauge threshold corrections are taken into account, as will be discussed in section <ref>. Before turning to the technicalities of executing the different deformations, let us briefly discuss possible caveats in the counting of the maximal number of stabilised deformation moduli when allowing for vevs of charged matter fields. While the massless matter spectrum in table <ref> provides all charged (open string) scalars that might possibly trigger some D6-brane recombination process while compensating the Fayet-Iliopoulos term generated by the vev of some (closed string) deformation modulus, a more detailed analysis of the origin of each matter state per D6-brane intersection sector x(ω^k y^(')) in table <ref> is required to deduce allowed terms in the low-energy effective action, in particular the necessary selection rule of a closed polygon with n edges along the partaking D6-brane directions per n-point coupling. Let us start by considering the chiral sector of the spectrum in table <ref> only.
The only states in bifundamental representations of two unitary gauge factors of equal rank are the right-handed selectrons and sneutrinos[Notice that U(1)_massive = U(1)_c - U(1)_d in this example acts as perturbative global Peccei-Quinn symmetry in the low-energy effective action with the right-handed sneutrinos naturally identified as QCD axions of a generalised `stringy' DFSZ model <cit.>.] in the cd' and cd sectors, respectively, charged under U(1)_c × U(1)_d. As summarised in table <ref>, both D6-branes c and d couple to the twisted deformation moduli ζ^(2)_1,2, and table <ref> shows that the corresponding orientifold-odd exceptional wrapping numbers are opposite, [2 x^(2)_α,c + y^(2)_α,c ]_α=1,2 = - [2 x^(2)_α,d + y^(2)_α,d ]_α=1,2. The D-terms can therefore only be compensated by some vev of the right-handed sneutrinos due to their opposite U(1)_c and U(1)_d charges. In <cit.>, three-point functions involving vevs of this type were shown to be suitable for generating mass terms e.g. of the vector-like down-type quark pairs originating from the ac and ad sectors as well as of the vector-like left-handed lepton (or Higgs) pairs from the bc and bd sectors.
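The relation [2x^(2)_α,c + y^(2)_α,c] = -[2x^(2)_α,d + y^(2)_α,d] can be made explicit with a toy bookkeeping sketch; the numerical values below are placeholders chosen to satisfy this relation, not the actual model data:

```python
# Toy bookkeeping of the orientifold-odd exceptional couplings (2x + y) of
# branes c and d to the Z2^(2)-twisted moduli zeta^(2)_alpha (alpha = 1, 2).
# Placeholder values chosen to realise opposite signs, as quoted in the text.
odd_couplings_c = {("Z2(2)", 1): +1, ("Z2(2)", 2): -1}
odd_couplings_d = {("Z2(2)", 1): -1, ("Z2(2)", 2): +1}

def summed_couplings(wc, wd):
    # orientifold-odd coupling of the combined system c + d per twisted modulus
    return {key: wc.get(key, 0) + wd.get(key, 0) for key in set(wc) | set(wd)}

# The opposite signs make the summed odd couplings vanish: a twisted vev
# <zeta^(2)_alpha> sources Fayet-Iliopoulos terms of opposite sign on c and d,
# consistent with the argument that only the oppositely charged right-handed
# sneutrinos can compensate the resulting D-terms.
assert all(v == 0 for v in summed_couplings(odd_couplings_c, odd_couplings_d).values())
```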
At this point, however, we notice two caveats: on the one hand, our discussion up to now only includes the relative sign among the U(1)_c × U(1)_d charges, whereas in a more thorough field theoretical study it remains to be seen whether the sneutrino/axion or its hermitian conjugate representation is suitable for cancelling a Fayet-Iliopoulos term; on the other hand, the naive geometric intuition of Π_c + Π_d = [2Π^bulk_c=d + Π^_2^(1),odd_c + Π^_2^(1),even_d + Π^_2^(3),even_c + Π^_2^(3),odd_d]/4 as a merging of cycles in the _2^(2)-twisted sector is contrasted by the naive merging of orientifold image cycles Π_c + Π_c' = [Π^bulk_c + Π^_2^(3)_c]/2 and Π_d + Π_d' = [Π^bulk_d + Π^_2^(1)_d]/2 leaving only one _2-twisted sector. One might wonder if taking the vector-like matter states into consideration as well can produce additional flat directions, in particular if (some of) the antisymmetric representations of SU(3)_a or SU(4)_h receive a vev. The former is clearly undesirable since it would break the part SU(3)_a × U(1)_Y of the Standard Model gauge group. On the other hand, if the vector-like pairs in the aa' and hh' sectors of table <ref> were originating from N=2 supersymmetric sectors, their vevs would not be protected by the N=1 SU(N) D-term argument of <cit.> but would instead be expected to constitute flat directions associated to the recombination of orientifold image cycles of the type Π_z + Π_z' = [Π^bulk_z + Π^_2^(2)_z]/2 for z ∈{a,h}, such that the deformation moduli ζ^(1)_3,4+5 and ζ^(3)_1,2 in the _2^(1)- and _2^(3)-twisted sectors were to be at most stabilised by the existence of stacks c and d, respectively. However, the state-per-sector list in table <ref> shows that usually a chiral multiplet in one sector x(ω^k y) is paired with an anti-chiral multiplet in some x(ω^l ≠ k y) sector.
Even the vector-like pair of states in the antisymmetric representation in the aa' (or hh') sector with the orientifold image D6-branes parallel along all three two-tori T^2_(i) does not constitute a genuine N=2 supersymmetric sector due to the different relative _2 ×_2 eigenvalues entering the orbifold projection on the a priori three chiral multiplets containing e.g. the massless scalar ψ^i or ī_-1/2|0 ⟩_NS. A much more detailed and dedicated computation of n-point couplings appearing in the low-energy effective action is thus needed to completely settle this issue. This, however, goes well beyond the scope of the present work, since not even 3-point Yukawa couplings have been computed for the cases at hand of a vanishing intersection angle along T^2_(1). The vanishing arguments of <cit.> rely on N=2 supersymmetry on the factorisable six-torus T^6=(T^2)^3 or its _2 ×_2 orbifold without discrete torsion and are clearly not applicable for the models with discrete torsion discussed in this article. The only element left to discuss at this point is the _2^(1)-twisted deformation ε_4-5^(1), to which the D6-branes b and d couple through the orientifold-even exceptional three-cycle. Hence, this deformation represents a _2^(1)-twisted modulus with a flat direction as indicated in table <ref>. A closer look at the three-cycle configuration in table <ref> for branes b and d reveals that the functional dependence of their volumes in terms of ε_4-5^(1) is equivalent to the branes supporting the gauge groups USp(2)_c and USp(2)_b, respectively, in the toy model of the previous section: the b-brane volume behaves as the right-hand part of figure <ref> under a deformation by ε_4-5^(1)≠ 0, while the d-brane cycle exhibits the behaviour of the left-hand side of figure <ref> under the deformation parameter ε_4-5^(1).
This implies that the weak gauge coupling becomes stronger when the _2^(1)-twisted modulus ζ_4-5^(1) acquires a non-zero vev and we move away from the singular orbifold point along the flat direction. The massless hypercharge on the other hand is characterised by a smaller gauge coupling away from the orbifold point. Compatibility between the calibration forms for the bulk and (resolved) exceptional three-cycles limits the parameter space of the deformations to the half-line ε_4-5^(1)≤ 0, similarly to the toy model of the previous section. §.§ Deformations in global Left-Right symmetric models In this section, we study how deformations affect each of the prototype L-R symmetric models classified in <cit.>. The common local D6-brane configuration of the observable sector is displayed in the upper part of table <ref>, with the four different types of global completion by two `hidden' sector D6-branes provided in the lower part of the same table. [ 7|c|D6-brane configurations of global L-R symmetric models on T^6/(_2 ×_6 ×);wrapping numbersAngle/π _2^(i) eigenvalues (τ⃗) (σ⃗)gauge group;7|c|Universal observable sector;a(1,0;1,0;1,0)(0,0,0)(+++)(0,1,1)(0,1,1) U(3);b(1,0;-1,2;1,-2) (0,1/2,-1/2)(+++)(0,1,0)(0,1,0) USp(2);c(1,0;-1,2;1,-2)(0, 1/2,-1/2)(-+-)(0,1,0)(0,1,0) USp(2);d(1,0;1,0;1,0) (0, 0,0) (+–)(0,1,1)(0,1,1) U(1); 7|c|Global completion of prototype I;h_1(1,0;1,0;1,0)(0,0,0)(+++)(0,0,0)(0,0,0) U(3);h_2(1,0;1,0;1,0)(0,0,0) (+–)(0,0,0)(0,0,0) U(3);7|c|Global completion of prototype II;h_1(1,0;-1,2;1,-2) (0,1/2,-1/2)(+++)(0,0,0)(0,0,0) U(1);h_2(1,0;-1,2;1,-2) (0,1/2,-1/2) (+–)(0,0,0)(0,0,0) U(1); 7|c|Global completion of prototype IIb;h_1(1,0;-1,2;1,-2) (0,1/2,-1/2)(-+-)(0,0,0)(1,0,0) U(1);h_2(1,0;-1,2;1,-2) (0,1/2,-1/2) (–+)(0,0,0)(1,0,0) U(1); 7|c|Global completion of prototype IIc;h_1(1,0;-1,2;1,-2) (0,1/2,-1/2)(+++)(0,0,1)(0,0,1) U(1);h_2(1,0;-1,2;1,-2) (0,1/2,-1/2)(+++)(0,0,1)(1,1,1) U(1);] 6stackLRSaAALocal observable L-R symmetric D6-brane sector in the 
first block with four different global completions by two `hidden' branes h_1, h_2 listed below. Observe that for prototype IIb, we have (+++)_h_1' and (+–)_h_2', and thus the only difference w.r.t. prototype II is the different choice of the discrete displacement parameter σ^1_h_i^ IIb=1. The common observable part of the matter spectrum is displayed in table <ref>,[ 8|c|Visible spectrum of all L-R symmetric & the PS I models on T^6/(_2 ×_6 ×);sector(U(3)_a ×USp(2)_b ×USp(2)_c )_U(1)_d_3^I U(1)_B-L^II+IIb+IIcsector(U(3)_a ×USp(2)_b ×USp(2)_c )_U(1)_d_3^I U(1)_B-L^II+IIb+IIc; 8|c|L-R symmetric models & PS I model (with →, 3_→ 6_ of U(3)_a → U(4)_a);ab 3 ×(,,)_(0) 1 1/3 aa' 2×[ (_,,)_(0) + h.c.]2||1±2/3;ac 3 ×(,,)_(0) 2-1/3bb5 ×(,_,)_(0) 0 0;bc10 ×(,,)_(0) 0 0cc 5 ×(,,_A)_(0) 0 0; 8|c|L-R symmetric models only;ad(,,)_(-1) + h.c.1||2±4/3bd3 ×(,,)_(-1) 0 1; ad' 2 ×[ (,,)_(1) + h.c.]1||2∓2/3cd 3 ×(,,)_(1) 0-1; ] 6stackLRSMaAAVisibleSpectrumCommon visible spectrum of the L-R symmetric models. Prototype I has the low-energy gauge group SU(3)_a × USp(2)_b × USp(2)_c × SU(3)_h_1× SU(3)_h_2×_3with the _3 charge displayed in the third and seventh column. The prototype II model has the low-energy gauge group SU(3)_a × USp(2)_b × USp(2)_c ×U(1)_B-L with theU(1)_B-L≡1/3 U(1)_a - U(1)_d - U(1)_h_1 + U(1)_h_2 charge displayed in the fourth and eighth column. For the prototype IIb & IIc models, the U(1)_B-L symmetry is massive and thus only a perturbative global symmetry. The massless matter spectrum is completed by the `hidden' sectors displayed in table <ref> for each prototype I, II, IIb and IIc. The massless closed string sector is for each case identical to the MSSM-like model, i.e.it contains (h^11_+,h^11_-,h^21)=(4,15,19) vectors, Kähler moduli and complex structure moduli, respectively. and the individual `hidden' spectra are given intable <ref>. 
[ 6|c|Overview of the `hidden' spectra for the L-R symmetric models on T^6/(_2 ×_6 ×); 6|c|Prototype I;sector(U(3)_a ×USp(2)_b ×USp(2)_c ×U(3)_h_1 ×U(3)_h_2 )_U(1)_d_3sector(U(3)_a ×USp(2)_b ×USp(2)_c ×U(3)_h_1 ×U(3)_h_2 )_U(1)_d_3;ah_1 2 ×(,,,,)_(0) 0dh_1 2 ×(,,,,)_(1) 2;ah_2 2 ×(,,,,)_(0) 0dh_22 ×(,,,,)_(-1) 1;bh_1 (,,,,)_(0)+[ (,,,,)_(0) +h.c. ] 2, 2||1 h_1 h_2 (,,,,)_(0) + h.c. 0;bh_2 (,,,,)_(0) + [(,,,,)_(0) + h.c. ] 1, 1||2h_1 h_2'2×[ (,,,,)_(0) + h.c.]2||1;ch_1(,,,,)_(0) + [(,,,,)_(0) +h.c. ]2 , 2||1h_1 h_1' 2×[ (,,,_,)_(0) + h.c.]2||1;ch_2(,,,,)_(0) + [ ,,,,)_(0) + h.c.]1,1||2h_2 h_2' 2×[ (,,,,_)_(0) + h.c.]2||1;6|c|Prototype II;sector(U(3)_a ×USp(2)_b ×USp(2)_c)_U(1)_d×U(1)_h_1 ×U(1)_h_2U(1)_B-Lsector(U(3)_a ×USp(2)_b ×USp(2)_c)_U(1)_d×U(1)_h_1 ×U(1)_h_2U(1)_B-L;ah_1 2 ×[ (,,)_(0,-1,0) + h.c. ]±4/3dh_1 2 ×[ (,,)_(1,-1,0) + h.c. ] 0; ah_1' (,,)_(0,1,0) + h.c.∓2/3 dh_1' (,,)_(1,1,0) + h.c.∓2;ah_2 2 ×[ (,,)_(0,0,-1) + h.c. ]∓2/3dh_2 2×[(,,)_(1,0,-1) + h.c. ]∓2; ah_2' (,,)_(0,0,1) + h.c.±4/3 dh_2' (,,)_(1,0,1) + h.c. 0;bh_13 ×(,,)_(0,1,0) + 3 ×[ (,,)_(0,-1,0) +h.c. ]-1, ±1 h_1 h_2 5 ×[ (,,)_(0,1,-1) + h.c. ]∓2;bh_2 3 ×(,,)_(0,0,-1) + 3 [ ×(,,)_(0,0,1) + h.c. ] -1,±1h_1 h_2' 6×[ (,,)_(0,1,1) + h.c. ] 0;ch_1 3 ×(,,)_(0,-1,0) + 3 ×[(,,)_(0,1,0) +h.c. ] 1, ∓1h_1h_1 4 ×(,,)_(0,0,0) 0;ch_23 ×(,,)_(0,0,1) + 3 [ ×(,,)_(0,0,1) + h.c. ] 1, ±1h_2h_2 4 ×(,,)_(0,0,0) 0; 6|c|Prototype IIb; h_1 h_2 5 ×[ (,,)_(0,1,-1) + h.c. ]∓2h_1h_1 4 ×(,,)_(0,0,0) 0;h_1 h_2' 6×[ (,,)_(0,1,1) + h.c. ] 0h_2h_2 4 ×(,,)_(0,0,0) 0; 6|c|Prototype IIc;ah_1 3 ×(,,)_(0,1,0)-4/3dh_1 3×(,,)_(1,-1,0) 0; ah_1' 3 ×(,,)_(0,1,0)-2/3 dh_1'3×(,,)_(-1,-1,0)-2;bh_14 ×[ (,,)_(0,-1,0) +h.c. ]±1h_1h_1 5 ×(,,)_(0,0,0) 0;ch_16 ×(,,)_(0,-1,0) +2 ×[(,,)_(0,1,0) +h.c. ] 1, ∓1h_2h_2 5 ×(,,)_(0,0,0) 0; ] 6stackLRSMaAAHiddenSpectrum`Hidden' massless spectrum per L-R symmetric model completing the common observable sectordisplayed in table <ref>. 
The QCD stack a agrees by construction with the one of the MSSM example discussed in section <ref>, while the left- and right-symmetric groups USp(2)_b × USp(2)_c have identical bulk cycles and discrete Wilson line and displacement parameters, but differ in their _2 ×_2 eigenvalues from the D6-brane b of the MSSM example. The D6-brane d only differs from the QCD stack a in the _2 ×_2 eigenvalues and stack size N_d=1 vs. N_a=3. The orientifold-even and -odd wrapping numbers are summarised in table <ref> and lead to the naive counting of the maximal number of stabilised deformation moduli in table <ref>. We observe the following differences: * L-R IIc has the maximal number of ten stabilised deformation moduli due to the different choices of displacements (σ^1,σ^2)=(0,0)_h_1, (1,1)_h_2 of the two `hidden' D6-branes.* L-R I is the only model with a non-Abelian `hidden' sector and thus states in bifundamental representations of gauge groups of identical rank, SU(3)_a × SU(3)_h_1× SU(3)_h_2. While the twisted deformation modulus ⟨ζ^(1)_3 ⟩ =0 is stabilised at the orbifold point by the presence of D6-brane d, vevs of ζ^(1)_0,1,2 and ζ^(2)_1,2 could potentially be compensated by vevs of scalars in the h_1h_2^(') sectors, which would simultaneously break SU(3)_h_1× SU(3)_h_2→ SU(3)_h^diag.[ More explicitly, one can consider e.g. the vector-like pair (,,,,)_(0) + h.c. from the h_1 h_2 sector and solve the Abelian and non-Abelian D-term constraints in terms of non-vanishing vevs of the scalar components in both bifundamental representations of U(3)_h_1× U(3)_h_2. Such a vacuum configuration would break the non-Abelian gauge groups to the diagonal gauge group U(3)_h^ diag, which would correspond geometrically to the recombination of the two D6-brane stacks h_1,h_2 into a single stack h wrapping [Π^bulk_h + Π^_2^(1)_h]/2, cf. also the microscopic origin of the vector-like matter states from the h_1(ω^0 h_2) sector according to table <ref>.
]* L-R II, IIb, IIc contain Abelian `hidden' D6-branes and thus three gauge groups of equal rank, U(1)_d × U(1)_h_1× U(1)_h_2. In the prototypes II and IIb, some vev of the type ⟨ζ^(1)_0,1,2⟩ might potentially be compensated by vevs associated to charged scalar fields belonging to vector-like pairs in the h_1h_2^(') sector. For prototype IIb, the same considerations apply also to ⟨ζ^(3)_3,4⟩. Details of these potential compensations among vevs of closed and open string scalars depend on the microscopic origin of the latter as detailed in table <ref>.* Prototypes I and IIc have, according to the naive counting in table <ref>, only one flat direction affecting the tree-level gauge couplings, in the _2^(1)-twisted sector, while prototypes II and IIb additionally have two flat directions of direct physical consequence in the _2^(2)-twisted sector. The counting of matter states per intersection sector is displayed in tables <ref> and <ref> for the universal observable and individual `hidden' sectors, respectively, and – just as for the MSSM example – a dedicated derivation of the low-energy effective action is needed to determine if vevs of matter states can indeed allow for potential flat directions in the deformation moduli space as stated above. Looking closer at the _2^(i)-twisted moduli with a flat direction, we notice first of all that the discussion for the USp(2)_b × USp(2)_c sector can be brought back to the analysis presented in section <ref> for the global USp(2)^4 toy model. A common calibration w.r.t. Re(Ω_3) for bulk and resolved exceptional three-cycles then constrains the deformation parameter to lie on the half-line ε_4-5^(1)≤ 0.
Going away from the orbifold point along the flat direction of the _2^(1)-twisted modulus ζ^(1)_4-5 implies a weaker gauge coupling for the left stack USp(2)_b and a stronger gauge coupling for the right stack USp(2)_c.[For the prototype IIc model, the modulus ζ^(1)_4-5 also couples to the `hidden' D6-brane stacks h_1 and h_2, such that a deformation along its flat direction also affects their respective U(1) gauge couplings at the string scale M_string, before these Abelian gauge groups are spontaneously broken at the KK-scale by virtue of the Stückelberg mechanism. That is to say, the (_2^(1)-twisted sector of the) fractional three-cycle for h_1/h_2 can be brought back to the (_2^(1)-twisted sector of the) fractional cycle supporting the USp(2)_b/c gauge group, such that the gauge coupling for U(1)_h_1 decreases and the one for U(1)_h_2 increases for non-zero deformation ε_4-5^(1). Realising a strongly coupled anomalous U(1) gauge theory at the string scale through these geometric deformations of _2 singularities opens up avenues for D6-brane model building scenarios <cit.> realising Nambu-Jona-Lasinio type models upon integrating out the massive U(1).] The deformation parameters ε_1,2^(2) in the _2^(2)-twisted sector allow for a mutually compatible calibration w.r.t. Re(Ω_3) between bulk and exceptional three-cycles provided the parameters lie on the half-line ε_1,2^(2)≤ 0. The (_2^(2) sector of the) fractional three-cycle for the strong D6-brane stack a is identical to the geometry of the three-cycle (in the _2^(2) sector) supporting the USp(2)_a gauge group in the toy model of section <ref>. This implies that the QCD gauge group acquires a stronger gauge coupling when either the twisted modulus ζ_1^(2) or ζ_2^(2) acquires a non-zero vev.
The opposite occurs for the left stack b and right stack c, whose gauge couplings decrease for non-zero deformations along the flat directions of ζ_1,2^(2) as detailed in the context of the USp(2)^4 toy model in section <ref>. The d-brane also couples to the deformation moduli ζ_1,2^(2) along flat directions, yet its geometric properties cannot be reduced to a situation discussed in previous sections. Using the same modus operandi as in section <ref>, we can compute the normalised volume of the fractional three-cycle of the d-brane, which is shown on the left-hand side of figure <ref>. From this figure, the qualitative picture clearly shows the U(1)_d gauge coupling decreasing for non-zero deformations in the _2^(2)-twisted sector along the flat directions ζ_1,2^(2). Due to the relative factor 1/3 in the definition of the generalised U(1)_B-L symmetry of prototype II, the behaviour of the U(1)_d gauge coupling is expected to dominate over that of the U(1)_a gauge coupling under _2^(2) deformations, implying that the U(1)_B-L symmetry will also be more weakly coupled for non-zero vevs in the ζ_1,2^(2)-directions. According to table <ref>, the hidden D6-brane stacks h_1 and h_2 in the prototype II and IIb models couple to deformations with flat directions too, which requires us to investigate their fractional cycle volume under deformation. In both prototype models the (relevant part of the) fractional three-cycle for h_1 can be recast into the fractional three-cycle supporting the USp(2)_b gauge group in the global toy model of section <ref>. This implies that the fractional cycle volume for hidden stack h_1 exhibits the same behaviour as depicted in the middle and right panel of figure <ref> and that the U(1)_h_1 gauge coupling becomes more weakly coupled for non-zero deformations ⟨ζ_1,2^(2)⟩≠0 in case of prototype II models and ⟨ζ_3,4^(2)⟩≠0 in case of prototype IIb models.
The (relevant part of the) fractional three-cycle for h_2 has not been encountered before in this article, but using the same techniques as in section <ref> we can compute its fractional three-cycle volume as a function of the _2^(2) deformations, yielding the plots in the middle and right panel of figure <ref>. Hence, the U(1)_h_2 gauge coupling is expected to increase when going away from the singular orbifold point along the flat directions ζ_1,2^(2) for the prototype II models and along the flat directions ζ_3,4^(2) for the prototype IIb models.§.§ Deformations in global Pati-Salam models In <cit.>, a systematic computer scan led to two prototype PS models. In order to streamline the discussion of deformations and of one-loop corrections to the gauge couplings, we here present the prototype I with (up to the size N_a) identical stacks {a,b} as in the MSSM example of section <ref>. The D6-brane configuration is displayed in table <ref> and differs from the original one in <cit.> by a swap of the last two two-tori, T^2_(2)↔ T^2_(3), and by a flip of the absolute _2 ×_2 eigenvalues.[ 7|c|D6-brane configuration of a global Pati-Salam model on the aAA lattice: prototype I;wrapping numbers Angle/π_2^(i) eigenvalues(τ⃗)(σ⃗) gauge group; a (1,0;1,0;1,0) (0,0,0) (+++) (0,1,1) (0,1,1)U(4); b (1,0;-1,2;1,-2)(0,1/2,-1/2)(–+) (0,1,0) (0,1,0)USp(2); c (1,0;-1,2;1,-2) (0, 1/2,-1/2)(+–) (0,1,0) (0,1,0)USp(2); h (1,0;1,0;1,0) (0,0,0)(–+) (0,1,1) (0,1,1)U(6); ] PatiSalamaAAPrototypeIPati-Salam model, prototype I, on the T^6/(_2 ×_6 ×) orientifold with discrete torsion. The three-cycles wrapped by the D6-branes a and b are identical to those of the MSSM example in table <ref>. The spectrum of prototype I is displayed in table <ref>. [ 4|c|Matter spectrum of the prototype I global Pati-Salam model on aAA;sectorU(4)_a×USp(2)_b×USp(2)_c ×U(6)_hsectorU(4)_a×USp(2)_b×USp(2)_c ×U(6)_h;ab 3 ×(, , ; ) aa'2 ×[(6_,,;) + h.c.
];ac3 ×(, , ;) bb'5 ×(,_, ;);bc10 ×(,,; ) cc'5 ×(,, _;);ah 2 ×[ (,,;) + h.c. ]bh 3 ×(, , ;6); ah'(,,;) + h.c.ch3 ×(, , ; 6); hh' 2 ×[ (,,;15_) + h.c.]; ] PatiSalamaAAPrototypeISpectrumChiral and vector-like massless (open string) matter spectrum of the prototype I global PS model on the T^6/(_2 ×_6 ×) orientifold with D6-brane configuration given in table <ref>. The universal closed string spectrum of all global models in this article can e.g. be found in the caption of table <ref>. Both U(1)_a× U(1)_d gauge factors acquire masses through the Stückelberg mechanism and survive as perturbative global symmetries at low energies. The D6-brane configuration of the prototype II in table <ref> is up to the analogous swaps in two-torus indices and absolute choice of _2 ×_2 eigenvalues plus a swap in h ↔ h' also identical to the original one from <cit.> with the massless matter spectrum displayed in table <ref> for the observable part[ 7|c|D6-brane configuration of a global Pati-Salam model on the aAA lattice: prototype II;wrapping numbersAngle/π _2^(i) eigenvalues (τ⃗) (σ⃗)gauge group;a(1,0;1,0;1,0)(0,0,0)(+++)(0,1,1)(0,1,1) U(4);b(1,0;-1,2;1,-2) (0,1/2,-1/2)(+++)(0,1,0)(0,1,0) USp(2);c(1,0;-1,2;1,-2)(0, 1/2,-1/2)(-+-)(0,1,0)(0,1,0) USp(2);h(1,0;-1,2;1,-2)(0, 1/2,-1/2)(+++)(0,0,0)(0,0,0) U(2);] PatiSalamaAAPrototypeIIPati-Salam model, prototype II, on the T^6/(_2 ×_6 ×) orientifold with discrete torsion. The three-cycles wrapped by D6-branes {a,b,c} are identical to those of the L-R symmetric models in table <ref>. Moreover, the `hidden' brane h of this PS II model wraps the same three-cycle as h_1 of the L-R symmetric II model.and in table <ref> for the `hidden' part. All discrete data are chosen such that the sector {a,b,c} of the PS II model agrees (up to the stack size N_a) with that of the L-R symmetric models in section <ref>.
[ 4|c|`Hidden' spectrum of the prototype II global Pati-Salam model on aAA; sector U(4)_a×USp(2)_b×USp(2)_c ×U(2)_h sector U(4)_a×USp(2)_b×USp(2)_c ×U(2)_h; ah2 ×[ (,,;) + h.c. ] bh3 ×(, , ;)+ 3×[(, , ;) + h.c. ];ah' (,,;) + h.c. ch3 ×(, , ; ) + 3×[(, , ;) + h.c. ];hh'6 ×[ (,,;_) + h.c.];] PatiSalamaAAPrototypeIISpectrumChiral and vector-like massless `hidden' matter spectrum of the prototype II global Pati-Salam model with D6-brane configuration in table <ref>. The observable spectrum is (up to the rank of stack a) identical to the {a,b,c} sector of the L-R symmetric models, cf. table <ref>. Also in this prototype, both U(1)_a× U(1)_d gauge factors acquire a Stückelberg mass, turning them into perturbative global symmetries in the low-energy effective field theory. The orientifold-odd and -even wrapping numbers are given in table <ref> and lead to the naive counting of the maximal number of stabilised moduli of four and seven for the PS I and PS II model, respectively, in table <ref>. Even though the models differ in the sectors {b,c,h}, in both cases the gauge couplings of branes b,c are sensitive to flat directions in the deformation moduli ζ^(1)_4-5 and ζ^(2)_1,2, with also the gauge couplings of a and h sensitive to ζ^(2)_1,2. Looking more closely at table <ref>, we observe that the deformation moduli ζ_0,1,2^(1) only couple to the hidden stack h in the prototype PS II model. This opens up the possibility of stabilising one of the vevs ⟨ζ_0,1,2^(1)⟩ by compensating it with the vevs of the scalar components in a vector-like pair (, , ;) + h.c. in the bifundamental representation under USp(2)_b× U(2)_h or a vector-like pair (, ,;) + h.c. in the bifundamental representation under USp(2)_c× U(2)_h.
If the necessary terms indeed appear in the low-energy effective action, the gauge group will be spontaneously broken from USp(2)_x∈{ b,c }× U(2) to a diagonal SU(2)^ diag, corresponding to a recombination of two D-brane stacks, which, however, does not bear an interpretation as a new fractional cycle only wrapping _2^(i) singularities in a single sector i, cf. the overview over the couplings to _2^(i)-twisted sectors in table <ref>. Whether or not such a recombination can happen has to be studied in more detail from a geometric perspective in the future and cannot be solely assessed through the study of the D-term equations. Furthermore, a more detailed study of the full scalar potential is also required for these prototype models, but goes well beyond the scope of this article. Yet, we offer the listing of matter states per intersection sector in table <ref> for the prototype I example and in table <ref> for the prototype II examples, from which a field theoretic study of the scalar potential could start, based on the necessary selection rule of the existence of closed polygons. Let us instead analyse the _2^(i)-twisted moduli representing flat directions for the PS models, as anticipated in table <ref>. In the _2^(1)-twisted sector, the D6-branes b and c couple to the twisted modulus ζ_4-5^(1) and (the relevant part of) their fractional three-cycles can be brought back to the fractional three-cycles supporting the USp(2)_b,c gauge groups in the toy model of section <ref>. There is a substantial qualitative difference between the prototype I and prototype II models: for the prototype I models the fractional three-cycle volume for the b-brane is given by the right panel of figure <ref> and the one for the c-brane is depicted on the left-hand side of figure <ref>, whereas the prototype II models have the reverse identifications with respect to figure <ref>.
This implies that the left-symmetric USp(2)_b gauge group in the prototype I models acquires a stronger gauge coupling when we go away from the singular orbifold point along the flat direction ζ_4-5^(1), while the USp(2)_b gauge group in the prototype II models receives a weaker gauge coupling. The gauge coupling of the right-symmetric USp(2)_c gauge group exhibits the exact opposite behaviour w.r.t. the one of the left-symmetric stack for non-zero deformations ⟨ζ_4-5^(1)⟩≠0. This qualitative difference between the two prototype PS models can be traced back to the relative difference in _2×_2 eigenvalues for the b- and c-stack between both prototypes. The _2^(2) deformations affect the gauge coupling of the hidden gauge group in the prototype I and II PS models, as the hidden D6-brane stack h in both cases couples to the twisted deformation moduli ζ_1,2^(2) along flat directions. The (relevant part) of the fractional three-cycle wrapped by the hidden stack h in the prototype I model is characterised by the same geometry as the d-brane in the L-R symmetric model, such that its volume exhibits the behaviour of the left panel of figure <ref> under non-zero deformation parameters ε_1,2^(2). Hence, the hidden U(6)_h gauge group in the prototype I model has a smaller gauge coupling at the string scale M_string for resolved _2^(2) singularities. The _2^(2) exceptional three-cycle of the hidden stack h in the prototype II model on the other hand takes the same form as the one of the USp(2)_b stack in the toy model of section <ref>. Hence, its fractional three-cycle volume is characterised by the middle and right panel of figure <ref>, and the hidden gauge group U(2)_h has a smaller gauge coupling in the prototype II model as well when we consider non-zero deformations ε_1,2^(2) along flat directions. Note that compatibility of the common calibration w.r.t. 
Re(Ω_3) of the resolved exceptional three-cycles constrains the parameter space of the deformations to the half-lines ε_4-5^(1)≤0 and ε_1,2^(2)≤0 in both prototype PS models. § GAUGE COUPLINGS AT ONE-LOOP, GEOMETRIC MODULI AND M_STRING Up to this point, changes in the volumes of the three-cycles wrapped by D6-branes have been discussed, which are related to the tree-level gauge couplings according to equation (<ref>). At the orbifold point, the volumes stem solely from the toroidal cycles as detailed in equation (<ref>). For the MSSM example of section <ref>, the relation (<ref>) amounts to the following relation among the tree-level gauge couplings at the orbifold point: 1/g^2_SU(3)_a = 1/g^2_SU(4)_h = 2/3 1/g^2_USp(2)_b = 6/19 1/g^2_U(1)_Y, which obviously disagrees with the proposal of gauge coupling unification at M_string≃ M_GUT∼ 10^16 GeV <cit.>. As discussed in section <ref>, deformations of exceptional cycles along directions of only orientifold-even wrapping numbers change the volumes of fractional cycles such that the degeneracy of e.g. the gauge coupling strength of USp(2)_b and USp(2)_c in the L-R symmetric and PS models is lifted, and also the relations in (<ref>) might be abolished. On the other hand, the degeneracy of gauge couplings is already lifted at the orbifold point when including one-loop corrections Δ_G_x to the gauge couplings of any gauge factor G_x, 8 π^2/g^2_G_x(μ) = 8 π^2/g^2_G_x,tree + b_G_x/2 ln M^2_string/μ^2 + Δ_G_x/2 , which depend on the discrete D6-brane data such as _2 ×_2 eigenvalues and discrete Wilson lines and displacement parameters. In section <ref>, we will therefore present the one-loop gauge thresholds Δ_G_x and discuss in section <ref> the impact on the low-energy phenomenology of the global particle physics vacua with D6-brane data specified in tables <ref>, <ref>, <ref> and <ref>.
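As an aside, the running above is elementary to evaluate numerically. The following minimal sketch uses purely illustrative input values (tree-level coupling, beta function coefficient and threshold are not taken from any of the models in this article):

```python
import math

def inv_alpha(inv_alpha_tree, b, M_string, mu, Delta):
    """One-loop gauge coupling from
    8*pi^2/g^2(mu) = 8*pi^2/g_tree^2 + (b/2) ln(M_string^2/mu^2) + Delta/2,
    returned as 1/alpha(mu) = 4*pi/g^2(mu)."""
    eight_pi2 = 2.0 * math.pi * inv_alpha_tree  # 8 pi^2/g^2 = 2 pi * (4 pi/g^2)
    eight_pi2 += 0.5 * b * math.log(M_string ** 2 / mu ** 2) + 0.5 * Delta
    return eight_pi2 / (2.0 * math.pi)

# Illustrative only: tree-level 1/alpha = 24 at M_string = 10^16 GeV, an
# asymptotically free group with b = -3 in this sign convention, run down to
# mu = 10^3 GeV with a threshold Delta = 4.
print(inv_alpha(24.0, -3.0, 1e16, 1e3, 4.0))
```

In this sign convention a negative b makes the coupling stronger towards the infrared, while a positive threshold Δ weakens it, which is the interplay discussed for v_1-dependent thresholds below.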
In particular, the class of models at hand shares the unusual feature that all D6-branes wrap the (_2^(1))-invariant one-cycle along T^2_(1), such that not only one-loop gauge thresholds can - at least in principle - contribute to lowering the string scale M_string well below the Planck scale M_Planck <cit.>, but the weakness of gravity can already be generated at tree level by a large hierarchy between the two radii R_2^(1) and R_1^(1) of the two-torus T^2_(1). §.§ Gauge couplings at one-loop Let us briefly summarise the formalism of one-loop gauge threshold corrections for the case at hand of fractional D6-branes parallel along at least one two-torus <cit.> before applying these general results to the stringy particle physics vacua on the background with exotic O6-plane orbit _2^(3). The Kähler metrics and contributions to the SU(N_x) beta function coefficients, b_SU(N_a) = N_a (-3+φ^Adj_a) + N_a/2 ( φ^Sym_a + φ^Anti_a) + ( φ^Sym_a - φ^Anti_a) + ∑_b≠ a N_b/2 ( φ^ab + φ^ab') ≡ ∑_k=0^2 b_a(ω^k a)^ A + ∑_k=0^2 b_a(ω^k a')^ A + ∑_k=0^2 b_(ω^k a)^ M + ∑_b≠ a∑_k=0^2 (b_a(ω^k b)^ A + b_a(ω^k b')^ A), per open string sector are displayed in table <ref> for the cases at hand:

Table <ref> (beta_coeffs_Kaehler_metrics): Kähler metrics and SU(N_x) beta function coefficients on T^6/(_2 ×_2M×) with discrete torsion for vanishing angle ϕ_xy^(1)=0, with b^A_xy = ∑_i=1^3 δ_σ^i_x^σ^i_y δ_τ^i_x^τ^i_y b̃^A,(i)_xy or δ_σ^1_x^σ^1_y δ_τ^1_x^τ^1_y b̃^A_xy, and b^M_x = ∑_i=1^3 b^M,(i)_x for (ϕ⃗)=(0⃗), for η__2^(3)=-1 and only for y=x'.
For angles (ϕ^(1)_xy,ϕ^(2)_xy,ϕ^(3)_xy)=(0,0,0): K_R = g_string/(v_1v_2v_3) √(2π) L_x^(i); b̃^A,(i)_xy = -N_y I_xy^_2^(i),(j·k)/4; b^M_x = -2 ( 1 + (-1)^τ^2_x σ^2_x - (-1)^τ^3_x σ^3_x ) for x ∥ and -2 ( 1 - (-1)^τ^2_x σ^2_x + (-1)^τ^3_x σ^3_x ) for x ∥ _2^(1).
For angles (0,ϕ,-ϕ): K_R = g_string/(v_1v_2v_3) √(2π) L_x^(1); b̃^A_xy = -N_y (I_xy^(2·3) + I_xy^_2^(1),(2·3))/4; b^M_x = -1/2 ( |Ĩ_x^,(2·3)| + |Ĩ_x^_2^(1),(2·3)| ).
Caption: Kähler metrics K_R_x with two-torus volumes v_i defined in (<ref>) and one-cycle lengths L_x^(i) in (<ref>) for matter representations R_x ∈{ (_x,_y), (_x,_y), (_x), (_x) } and contributions to the beta function coefficients from annulus and Möbius strip topologies, b^ A_xy, b^ M_x. For details on the computation of _2-invariant intersection numbers, the interested reader is referred to e.g. <cit.>. The factor (-1)^2 b_iσ^i_x τ^i_x appearing in the Möbius strip contribution to the beta function coefficient is required for consistency of spectra on tilted tori (b_i=1/2), as first noted in the caption of table 49 in <cit.>, but the exact shape of the corresponding Möbius strip amplitude with b_iσ^i_x τ^i_x ≠ 0 - needed for any known phenomenologically appealing model with rigid D6-branes - is to the present day not known, see appendix B.1 of <cit.> for an extended discussion.

For the generic case with three non-vanishing angles and/or non-rigid D6-branes, the interested reader is referred to <cit.>. The associated one-loop gauge threshold corrections per open string sector with at least one vanishing angle are collected in table <ref>:

Table <ref> (1-loop-thresholds): One-loop corrections to the gauge couplings of fractional D6-branes at some vanishing angle on T^6/(_2 ×_2M×) orientifolds with discrete torsion, with Δ^A_xy = N_y Δ̃^A_xy ⊂ Δ^A_SU(N_x) and Δ^M_x ⊂ Δ^M_SU(N_x).
For angles (0,0,0): Δ̃^A_xy = ∑_i=1^3 b̃^A,(i)_xy Λ_τ^i_xy,σ^i_xy (v_i); Δ^M_x = ∑_i=1^3 b̃^M,(i)_x Λ_b_i,τ^i_x,σ^i_x (v_i).
For angles (0,ϕ,-ϕ): Δ̃^A_xy = b̃^A_xy Λ_τ^1_xy,σ^1_xy (v_1) + N_y ln2/2 ( I_xy^_2^(2) - I_xy^_2^(3) ) (sgn(ϕ)/2 - ϕ); Δ^M_x = b̃^M_x Λ_0,τ^1_x,σ^1_x (v_1) + ( |I^_2^(2)_x| - |I^_2^(3)_x| ) ln2/2.
Caption: The annulus lattice sums only depend on relative Wilson lines τ^i_xy ≡ |τ^i_x - τ^i_y| ∈{0, 1} and displacements σ^i_xy = |σ^i_x - σ^i_y| ∈{0, 1}.
To complete the picture, the gauge thresholds for rigid D6-branes at three non-vanishing angles can be found in <cit.>, where also the conversion from Δ_G_x to the holomorphic gauge kinetic function f_G_x using the Kähler metrics from table <ref> is discussed in detail. For all D6-brane examples parallel to the (_2^(1))-invariant planes on the aAA lattice of T^6/(_2 ×_6 ×) discussed in this article, the constant contribution from the Möbius strip topology vanishes due to |I^_2^(2)_x| = |I^_2^(3)_x|. The standard annulus lattice sums are defined in (<ref>), while the hatted lattice sums in the one-loop Möbius corrections are defined in equation (<ref>). The following abbreviations of lattice sums for the annulus topology are used: Λ_0,0(v) = - ln( 2 π L^2 η^4 (i v) ), Λ_τ,σ≠ 0,0(v) = -ln( e^-π (σ)^2 v/4 | ϑ_1 (τ - i σ v/2, iv) |/η(iv))^2 , with the two-torus volume v defined by v = R_1R_2/α' for the untilted lattice ( a) and v = √(3)/2 r^2/α' for the hexagonal lattice ( A), and the one-cycle length L as given in equation (<ref>). For later use in section <ref>, we already provide the asymptotic behaviour of the lattice sums here for two-torus volumes large in units of α': Λ_τ,σ(v) v ≫ 1⟶ {[ π v/3 - ln (2π L^2) (τ,σ)=(0,0); [3 (1-σ)^2 -1] π v/6 - 2 δ_σ,0 ln[2 sin(πτ/2)] ≠ (0,0) ]} for τ,σ∈{0,1} = {[ π v/3 - ln (2π L^2) (0,0); π v/3 - 2 ln2 (1,0); -π v/6 (τ,1) ]. , which turns out to be already an excellent approximation for v ≳ 1, cf. figure 2 of <cit.>. Let us discuss for example the MSSM model with D6-brane data specified in table <ref>.
The massless matter spectrum per sector is provided in table <ref>, from which the beta function coefficients b^ A,(i) – or whenever vanishing the reduced numbers b̃^ A,(i) – can be read off. Combining this information with the discrete D6-brane data in table <ref>, we can derive the full annulus contributions to the gauge thresholds, which are given by (with R_1 ≡ R_1^(1)): Δ_SU(3)_a^A,MSSM = 3 ×( Δ̃_aa + Δ̃_aa') + 2 ×Δ̃_ab + ( Δ̃_ac +Δ̃_ac') + ( Δ̃_ad +Δ̃_ad') + 4 ×(Δ̃_ah +Δ̃_ah') = 16Λ_0,0^|| (_2^(1)) (v_1) + 2Λ_0,0^||(v_2) - 4/3 ln 2 v ≫ 1⟶ 16 π/3 v_1 + 2 π/3 v_2 - ln(2^4/3 (2π)^18( R_1^2/α')^16( r_2^2/α')^2 ) , Δ_USp(2)_b^A,MSSM = 3 ×Δ̃_ab + 2 ×Δ̃_bb + Δ̃_bc + Δ̃_bd + 4 ×Δ̃_bh = 28Λ_0,0^|| (_2^(1))(v_1) - 2Λ_0,0^|| _2^(1)(v_2) - 2Λ_0,0^|| _2^(1)(v_3) + Λ_1,1(v_3) + 11/3 ln 2 v ≫ 1⟶ 28 π/3 v_1 - 2π/3 v_2 - 5π/6 v_3 - ln(2^-11/3 (2π)^24( R_1^2/α')^28(r_2^2/α')^-2(r_3^2/α')^-2) , Δ_U(1)_Y^A,MSSM = 1/36 Δ_U(1)_a^A,MSSM + 1/4 Δ_U(1)_c^A,MSSM + 1/4 Δ_U(1)_d^A,MSSM + ( - Δ̃_ac +Δ̃_ac') + ( - Δ̃_ad +Δ̃_ad') + ( - Δ̃_cd + Δ̃_cd') = 152/3 Λ_0,0^|| (_2^(1)) (v_1) + 1/3 Λ_0,0^|| (v_2) + 2Λ_0,0^|| _2^(1)(v_2) - 2Λ_1,1(v_2) + Λ_0,0^|| (v_3) + Λ_1,1(v_3) + 47/18 ln 2 v ≫ 1⟶ 152 π/3 v_1 + 10 π/9 v_2 + π/6 v_3 - ln( 9 · 2^-47/18 (2π)^54( R_1^2/α')^152/3(r_2^2/α')^7/3( r_3^2/α') ) , Δ_SU(4)_h^A,MSSM = 3 ×(Δ̃_ah +Δ̃_ah') + 2 ×Δ̃_bh + (Δ̃_ch + Δ̃_ch') + ( Δ̃_dh +Δ̃_dh') + 4 ×(Δ̃_hh +Δ̃_hh') = 16Λ_0,0^|| (_2^(1))(v_1) - 2Λ_0,0^|| (v_2) + 2ln 2 v ≫ 1⟶ 16 π/3 v_1 - 2 π/3 v_2 - ln( 2^-2 (2π)^14( R_1^2/α')^16(r_2^2/α')^-2) , where we have made use of the same notation Δ̃_xy=Δ̃_yx as in e.g. <cit.>.
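The quoted asymptotics can be cross-checked numerically from the q-product representation of the Dedekind eta function. The following sketch assembles Δ^A_SU(3)_a for the illustrative choice of radii R_1^2/α' = r_2^2/α' = 1 (so that all one-cycle length factors L^2 = 1), an assumption made here purely for the check:

```python
import math

def eta(v, nmax=100):
    """Dedekind eta function at purely imaginary argument:
    eta(iv) = e^{-pi v/12} * prod_{n>=1} (1 - e^{-2 pi n v})."""
    q = math.exp(-2.0 * math.pi * v)
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1.0 - q ** n
    return math.exp(-math.pi * v / 12.0) * prod

def Lambda_00(v, L2=1.0):
    """Annulus lattice sum Lambda_{0,0}(v) = -ln(2 pi L^2 eta^4(iv)); L2 = L^2."""
    return -math.log(2.0 * math.pi * L2 * eta(v) ** 4)

def delta_SU3a(v1, v2):
    """Full annulus threshold 16*Lambda_{0,0}(v1) + 2*Lambda_{0,0}(v2) - (4/3) ln 2
    with all length factors set to one."""
    return 16.0 * Lambda_00(v1) + 2.0 * Lambda_00(v2) - (4.0 / 3.0) * math.log(2.0)

def delta_SU3a_asym(v1, v2):
    """Quoted asymptotics (16 pi/3) v1 + (2 pi/3) v2 - ln(2^{4/3} (2 pi)^{18})."""
    return (16.0 * math.pi / 3.0) * v1 + (2.0 * math.pi / 3.0) * v2 \
        - math.log(2.0 ** (4.0 / 3.0) * (2.0 * math.pi) ** 18)
```

Already at v_1 = v_2 = 2 the exact and asymptotic expressions agree to within 10^-3 in absolute terms, consistent with the statement that the asymptotics are an excellent approximation for v ≳ 1.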
We will come back to the impact of these one-loop correction in section <ref> after having also determined the Möbius strip contributions, but already point out here that all threshold contributions have a positive dependence on the two-torus volume v_1, while v_2 and v_3 appear with negative prefactors in the one-loop correction to USp(2)_b and SU(4)_h.The annulus contributions to the gauge thresholds of the L-R symmetric models are analogously computed using the D6-brane data in table <ref> and the resulting state-per-sector counting in tables <ref> and <ref> with the following results: Δ_SU(3)_a^A,L-R = {[ 16Λ_0,0^|| (_2^(1))(v_1) L-R I & II; 10Λ_0,0^|| (_2^(1))(v_1) + 6Λ_0,1(v_1)L-R IIb; 13Λ_0,0^|| (_2^(1))(v_1) + 3Λ_0,1(v_1)L-R IIc ]} -4Λ_0,0^|| (v_2) - 10/3 ln 2 ,Δ_USp(2)_b^A,L-R = {[33Λ_0,0^|| (_2^(1))(v_1)L-R I & II; 24 Λ_0,0^|| (_2^(1))(v_1) + 9Λ_0,1(v_1) L-R IIb; 28 Λ_0,0^|| (_2^(1))(v_1) + 5Λ_0,1(v_1) L-R IIc ]} - 4Λ_0,0^|| _2^(1)(v_2) -20/3 ln 2 , Δ_USp(2)_c^A,L-R = {[33Λ_0,0(v_1)^|| (_2^(1))L-R I & II; 24 Λ_0,0(v_1)^|| (_2^(1)) + 9Λ_0,1(v_1) L-R IIb; 29 Λ_0,0(v_1)^|| (_2^(1)) + 4Λ_0,1(v_1) L-R IIc ]} - 4Λ_0,0^|| _2^(1)(v_2) -20/3 ln 2 .The asymptotic behaviour is again easily extracted, producing again only positive contributions from the two-torus volume v_1.For the prototype I L-R symmetric model, the non-Abelian hidden gauge groups experience the following one-loop gauge threshold correction, Δ_SU(3)_h_1/h_2^A,L-R I only = 16Λ_0,0^|| (_2^(1))(v_1) ∓20/3 ln 2 ,and finally the generalised massless U(1)_B-L gauge group of the prototype II L-R symmetric model receives the following one loop correction: Δ_U(1)_B-L^A,L-R II = 704/3 Λ_0,0^|| (_2^(1))(v_1)+ 16/3 Λ_0,0^||(v_2) + 16Λ_0,0^|| (v_3)+ 16 Λ_0,0^|| _2^(1)(v_3) + 674/27 ln 2 .Finally, for the PS models, the analogous computations using the D6-brane data in tables <ref> and <ref> and the state-per-sector counting in tables <ref> and <ref> leads to: Δ_SU(4)_a^A,PS =16Λ_0,0^|| (_2^(1))(v_1) + {[ 
4Λ_0,0^|| (v_2) - 4/3 ln 2 PS I; -8Λ_0,0^|| (v_2) - 16/3 ln 2 PS II ]. , Δ_USp(2)_b^A,PS = Δ_USp(2)_c^A,PS = 33Λ_0,0^|| (_2^(1))(v_1) - 4 Λ_0,0^|| _2^(1)(v_2) + {[ 8/3 ln 2 PS I; -2Λ_1,1(v_2) - 26/3 ln 2 PS II ]. , Δ_SU(6_ I/2_ II)_h^A,PS = {[ 16Λ_0,0^|| (_2^(1))(v_1) - 4Λ_0,0^|| (v_2) + 128ln 2 PS I; 48Λ_0,0^|| (_2^(1))(v_1) - 4Λ_0,0^|| _2^(1)(v_2) - 4Λ_1,1(v_2) - 1280/3 ln 2 PS II ]. . As for the MSSM and L-R symmetric examples, any of these gauge threshold contributions grows (asymptotically) linearly with v_1. For the Möbius strip topology, modified lattice sums, Λ_b,τ,σ(v̂) with v̂ ≡ v/(1-b) and b ∈{0,1/2}, appear. For the untilted torus with b=0, i.e. the two-torus T^2_(1) in the T^6/(_2 ×_6 ×) examples at hand, and only discrete Wilson lines and displacements τ,σ∈{0,1}, the lattice sum on T^2_(1) is simply given by Λ_0,τ,σ(v̂) τ,σ∈{0,1}≡Λ_0(v̂) = Λ_0,0(v) - 2ln 2 , where the constant term -2ln 2 stems from the replacement <cit.> L^2 → 2L̂^2 with L̂^2=2 L^2 in the first line of equation (<ref>). However, for (τ,σ)=(1,1) on tilted tori b=1/2, the sign factor (-1)^τσ in the beta function coefficients in table <ref> indicates that also the lattice sum for the hexagonal two-tori T^2_(2)× T^2_(3) needs to be modified in a yet unknown way. We will therefore write the formal expression (<ref>) for the lattice sums throughout the computation and only estimate their size via the ansatz for the asymptotics Λ_1/2,τ,σ(v̂) v ≫ 1⟶ π c^1/2_τ,σ/3 v with coefficients c^1/2_τ,σ = O(1) when discussing hierarchies among compact directions in section <ref>. Using the same D6-brane data as for the annulus amplitudes, we obtain as joint expressions for all models, Δ^ M,all_SU(3/4)_a = Δ^ M,MSSM_SU(4)_h = Δ^ M,PS I_SU(6)_h = -4 Λ_0,0,0^|| (_2^(1))(v̂_1) + 2 Λ_1/2,1,1 (v̂_2) - 2Λ_1/2,1,1 (v̂_3), Δ^ M,all_USp(2)_b = Δ^ M,PS + L-R_USp(2)_c = - 6Λ_0,0,0^|| (_2^(1))(v̂_1) - Λ_1/2,1,1 (v̂_2) - Λ_1/2,0,0^|| _2^(1) (v̂_3) , since Möbius strip contributions are independent of the _2 ×_2 eigenvalues.
The remaining massless and anomaly-free gauge symmetries of the MSSM, PS II and L-R symmetric II model receive the following Möbius strip contributions to the one-loop gauge thresholds, Δ^ M,MSSM_U(1)_Y = - 38/3 Λ_0,0,0^|| (_2^(1))(v̂_1) + Λ_1/2,0,0^_2^(1) (v̂_2) - 2/3 Λ_1/2,1,1 (v̂_2) + 5/3 Λ_1/2,1,1 (v̂_3), Δ^ M,L-R II_U(1)_B-L = - 176/3 Λ_0,0,0^|| (_2^(1))(v̂_1) + 8 Λ_1/2,0,0^_2^(1)(v̂_2) + 16/3 Λ_1/2,1,1 (v̂_2) - 8Λ_1/2,0,0^_2^(1)(v̂_3) - 16/3 Λ_1/2,1,1 (v̂_3) , Δ^ M,PS II_SU(2)_h = -12Λ_0,0,0^|| (_2^(1))(v̂_1) + 2Λ_1/2,0,0^_2^(1) (v̂_2) - 2Λ_1/2,0,0^_2^(1)(v̂_3) . At this point, it is noteworthy that the absolute values of the negative coefficients in equations (<ref>) and (<ref>) of the lattice sum Λ_0,0,0^|| (_2^(1))(v̂_1) on T^2_(1) are always smaller than the positive coefficients in the corresponding annulus contributions, and thus the one-loop threshold correction of each gauge group still grows (asymptotically) linearly with the two-torus volume v_1 in any model considered in this article. Moreover, the coefficients of Λ_1/2,1,1 (v̂_2) and Λ_1/2,1,1 (v̂_3) have identical absolute value and opposite sign for SU(3/4)_a^ all, SU(4)_h^ MSSM, SU(6_ I/2_ II)_h^ PS. When choosing isotropic torus volumes v_2=v_3, these yet unknown contributions thus cancel.
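The statement about the v̂_1 coefficients can be checked mechanically by tabulating, for each gauge group, the coefficient of the Λ(v_1) lattice sum in the annulus and Möbius strip expressions quoted above (restricting to the model variants for which both numbers are displayed); a sketch:

```python
from fractions import Fraction as F

# Coefficients of the Lambda(v_1) lattice sum in (annulus, Moebius strip)
# threshold contributions, read off from the expressions quoted in the text.
v1_coeffs = {
    "MSSM SU(3)_a":      (F(16),     F(-4)),
    "MSSM USp(2)_b":     (F(28),     F(-6)),
    "MSSM U(1)_Y":       (F(152, 3), F(-38, 3)),
    "MSSM SU(4)_h":      (F(16),     F(-4)),
    "L-R I/II SU(3)_a":  (F(16),     F(-4)),
    "L-R I/II USp(2)_b": (F(33),     F(-6)),
    "L-R II U(1)_B-L":   (F(704, 3), F(-176, 3)),
    "PS SU(4)_a":        (F(16),     F(-4)),
    "PS USp(2)_b":       (F(33),     F(-6)),
    "PS I SU(6)_h":      (F(16),     F(-4)),
    "PS II SU(2)_h":     (F(48),     F(-12)),
}

# The Moebius coefficient is negative but smaller in magnitude than the
# annulus one, so the net coefficient of v_1 stays positive for every group:
for name, (ann, moeb) in v1_coeffs.items():
    assert ann > 0 and moeb < 0 and ann + moeb > 0, name
print("net v_1 coefficients all positive")
```

Both lattice sums share the same large-volume slope π v_1/3 on the untilted torus, so a positive net coefficient indeed implies (asymptotically) linear growth of each total threshold with v_1.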
§.§ Balancing M_string with gravitational and gauge couplings in four dimensions Dimensional reduction of the ten-dimensional Einstein-Hilbert term leads to the following relation among the four-dimensional Planck scale M_Planck, the string scale M_string, the string coupling g_string and the Kähler moduli v_i inherited from the underlying torus T^6=(T^2_(i))^3, M_Planck^2/M_string^2 = 4π/g^2_string v_1v_2v_3 with g_string≡ e^ϕ_10, which in conventional D6-brane models on tori or toroidal orbifolds with phenomenologically acceptable sizes of gauge couplings at tree-level is only consistent with a high string scale M_string≲ M_GUT <cit.>, since usually the O6-plane tensions are cancelled by D6-branes at non-trivial angles on all three two-tori. However, the situation in all global D6-brane models discussed in this article is special, since the existence of the exotic O6-plane orbit _2^(3) on T^6/(_2 ×_6 ×) with discrete torsion enforces all D6-branes to lie along the (_2^(1))-invariant direction on the first two-torus T^2_(1) with length scale R_1^(1), while gravitational forces also propagate along the perpendicular direction and are thus sensitive to v_1 = R_1^(1) R_2^(1)/α'. The tree-level gauge coupling in equation (<ref>) of SU(3)_a and SU(4)_h of the MSSM example can e.g. be rewritten using the relation (<ref>), 8π^2/g^2_SU(3)_a/SU(4)_h,tree = π/2√(6) √(R_1^(1)/R_2^(1)) M_Planck/M_string , which means that a large hierarchy between M_Planck and M_string can - at least in principle - be compensated by a large hierarchy between R_2^(1) and R_1^(1) to arrive at some phenomenologically appealing order of magnitude of the gauge couplings, as already briefly sketched in <cit.>. Such large hierarchies in the D6-brane models at hand would provide explicit examples for the low-string-scale scenario, see e.g.
<cit.>. We will now also take into account the one-loop gauge threshold corrections, which for the MSSM example of section <ref> have the following asymptotic behaviour, Δ^ A+ M,MSSM_SU(3)_a/2 v ≫ 1⟶ 2πv_1 + π/3(v_2 + c^1/2_1,1(v_2 - v_3 )) - ln( (R_1^(1)/R_2^(1) v_1 )^6 v_2 ) - 12 , Δ^ A+ M,MSSM_USp(2)_b/2 v ≫ 1⟶ 11 π/3 v_1 - π/3(1 + c^1/2_1,1/2)v_2 - 3π/4 v_3 - ln( (R_1/R_2 v_1 )^11 v_2^-1 v_3^-3/2 ) - 11 , Δ^ A+ M,MSSM_U(1)_Y/2 v ≫ 1⟶ 209 π/9 v_1 + (8-c^1/2_1,1 )π/9 v_2 + (3 + 10 c^1/2_1,1) π/36 v_3 - ln( ( R_1/R_2 v_1)^19 v_2^5/3 v_3^1/2) - 36 , Δ^ A+ M,MSSM_SU(4)_h/2 v ≫ 1⟶ 2πv_1 + π/3( - v_2 + c^1/2_1,1(v_2 - v_3 )) - ln( ( R_1/R_2 v_1 )^6 v_2^-1) - 7, where we have rewritten the logarithmic terms in terms of the complex structure parameter R_2^(1)/R_1^(1) on T^2_(1) and the Kähler moduli v_i using the definition (<ref>) and evaluated the constant contributions, e.g. ln(2^4/3 (2π)^18( 2/√(3))^2 ) ≈ 34.29 in the annulus contribution Δ^ A,MSSM_SU(3)_a. Let us stress here again that in all MSSM, L-R symmetric and PS models of section <ref>, the gauge coupling of the QCD stack is weakened by the positive (asymptotically) linear one-loop contribution of the volume v_1, at least for v_1 ≳ 2. Due to the lack of knowledge of the exact shape of the Möbius strip contribution for D6-branes with bulk part along some orientifold-invariant direction and non-vanishing Wilson line and displacement, (τ,σ)=(1,1), along some tilted two-torus T^2_(2 or 3), we have to distinguish two cases, where our (asymptotic) field theory results in equation (<ref>) are classified as reliable:

* Isotropic volumes of the last two tori, i.e. v_2=v_3: in this case, the unknown Möbius strip contributions within the gauge thresholds of SU(3)_a and SU(4)_h of the MSSM example cancel.

* A much larger first two-torus, i.e.
v_1 ≫ v_2, v_3: the one-loop gauge threshold corrections are expected to be dominated by the asymptotics linear in v_1, and all gauge couplings will be weakened due to the positive prefactor of v_1 for every single gauge group in each MSSM, L-R symmetric and PS model of section <ref>.

Let us discuss each of these two cases further: in the first case of two isotropic tori, for generic volumes v_1 > 2 the (asymptotic) linear dependence on v_1 surpasses the negative constant contributions to all four one-loop gauge thresholds in (<ref>), and at least the QCD stack will have a weaker coupling at one-loop due to the (asymptotic) linear dependence on v_2, while the coupling of the hidden stack becomes stronger. This behaviour facilitates the formation of a gaugino condensate on the hidden stack, which will in turn lead to supersymmetry breaking mediated to the observable sector on the one hand via gravitational couplings and on the other hand through the messenger particles with USp(2)_b × U(1)_Y × SU(4)_h charges in the last block of table <ref>. Going to the edge of the validity of the geometric regime, v_1 ≳ 1 and 6 ≳ v_2,3≳ 1, one can read off from equation (<ref>) that the one-loop corrections to the inverse of (the square of) the SU(3)_a × SU(4)_h gauge couplings will be negative, while for USp(2)_b × U(1)_Y the contributions of v_1 cancel the negative constant contributions. To achieve v_1 ≳ 1, we further assume R_1^(1)∼ R_2^(1)∼√(α'), and equation (<ref>) favours a high string scale M_string. Such a choice of scales is in turn consistent with very small volumes v_i ∼ O(1) in (<ref>) and a not too weak string coupling g_string. In the second case of one two-torus significantly larger than the other two, v_1 ≫ v_2,v_3, we can e.g. make the ansatz of M_string∼ 10^12 GeV and g_string∼ 0.1 in equation (<ref>) leading to v_1v_2v_3 ∼ 10^11.
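The quoted order of magnitude follows directly from the Planck-string relation above; a quick numerical check (taking M_Planck ≈ 1.22 × 10^19 GeV, a choice made here purely for illustration):

```python
import math

M_PLANCK = 1.22e19  # GeV; the precise choice only matters at O(1)

def volume_product(M_string, g_string):
    """Invert M_Planck^2/M_string^2 = (4 pi/g_string^2) v1 v2 v3 for v1*v2*v3."""
    return (M_PLANCK / M_string) ** 2 * g_string ** 2 / (4.0 * math.pi)

print(f"v1*v2*v3 = {volume_product(1e12, 0.1):.2e}")  # of order 10^11
```

For M_string = 10^12 GeV and g_string = 0.1 this gives v_1 v_2 v_3 ≈ 1.2 × 10^11, reproducing the ansatz in the text.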
In order to achieve α^-1_QCD∼ O(1), equation (<ref>) then requires R_1^(1)/R_2^(1)∼ 10^-12, which is problematic if we require v_i ≳ 1 for i∈{1,2,3} and R_1,2^(1)/√(α')≳ 1 to be in the geometric regime where the supergravity approximation is expected to be reliable. Even when we choose a larger string coupling of g_string∼ 0.3, v_1 ∼ 10^12 and v_2 = v_3 ≳ 1, all one-loop corrections in equation (<ref>) - as well as the analogous expressions for all L-R symmetric and PS models - receive exponentially large positive contributions from v_1, thereby exponentially suppressing all gauge couplings. Let us now assume a high string scale M_string∼ M_GUT in equation (<ref>) and discuss the possibility of gauge coupling unification. The tree-level relation (<ref>), together with the fact that in the MSSM example only USp(2)_b and U(1)_d ⊂ U(1)_Y possess one flat direction along the deformation modulus ε^(1)_4-5, boils down to the requirement that the combination of a deformation along this direction with the one-loop gauge threshold correction reduces the inverse of the gauge coupling squared by 50 % for USp(2)_b ≃ SU(2)_weak and by roughly a factor of three for U(1)_Y. Figure <ref> shows that the volume Vol(Π_b) ∝ 1/g_USp(2)_b^2 decreases by at most 20 % when reaching the upper bound |ε^(1)_4-5| ≈ 0.4, while Vol(Π_d) increases at the same time by about 10 %. Thus, the one-loop corrections of USp(2)_b × U(1)_Y both have to be sufficiently negative compared to the one-loop correction of SU(3)_a. Using equation (<ref>), we find that for the unknown constant c^1/2_1,1 < - 3/22 at least the prefactor of v_3 is negative for both gauge threshold differences [Δ^ A+ M,MSSM_USp(2)_b - Δ^ A+ M,MSSM_SU(3)_a]/2 and [Δ^ A+ M,MSSM_U(1)_Y - Δ^ A+ M,MSSM_SU(3)_a]/2 under consideration, while for 5/4 < c^1/2_1,1 at least the prefactor of v_2 is negative.
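The two bounds on c^1/2_1,1 can be re-derived from the v_2 and v_3 prefactors of the asymptotic threshold expressions given earlier for SU(3)_a, USp(2)_b and U(1)_Y; the following sketch checks the signs of the two threshold differences with exact fractions (overall factors of π dropped):

```python
from fractions import Fraction as F

def prefactors(c):
    """(v2, v3) prefactors (in units of pi) of Delta^{A+M}/2 for SU(3)_a,
    USp(2)_b and U(1)_Y, read off from the asymptotic expressions."""
    su3 = ((1 + c) / 3, -c / 3)
    usp2 = (-(1 + c / 2) / 3, F(-3, 4))
    u1y = ((8 - c) / 9, (3 + 10 * c) / 36)
    return su3, usp2, u1y

def both_v3_prefactors_negative(c):
    su3, usp2, u1y = prefactors(c)
    return usp2[1] - su3[1] < 0 and u1y[1] - su3[1] < 0

def both_v2_prefactors_negative(c):
    su3, usp2, u1y = prefactors(c)
    return usp2[0] - su3[0] < 0 and u1y[0] - su3[0] < 0
```

For any c^1/2_1,1 below -3/22 both v_3 prefactor differences are negative, and for any c^1/2_1,1 above 5/4 both v_2 prefactor differences are, reproducing the bounds quoted in the text; the boundary values themselves give a vanishing prefactor.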
In both cases, either v_2 or v_3 has to be the largest Kähler modulus, and only the missing computation of the Möbius strip contribution for tilted tori and non-vanishing Wilson line and displacement parameters, i.e. 2b_iτ^i σ^i =1, can settle the question if gauge coupling unification in the MSSM example is feasible. § CONCLUSIONS AND OUTLOOK This article forms the third part in a triptych about deforming _2 singularities of toroidal orbifolds T^6/(_2×_2M) with discrete torsion in Type IIA superstring theory, focusing here on the - technically most involved but at the same time of greatest phenomenological interest - T^6/(_2 ×_6 ×) orientifold. To study the deformations of such singularities, we employ the techniques of the hypersurface formalism developed for the _2 ×_2 toroidal orbifold with discrete torsion and extend them to the _2 ×_6 toroidal orbifold, in a similar spirit as was done for the _2 ×_6' toroidal orbifold before. The extension boils down to modding out an additional _3 ⊂_6 action along a four-torus in the hypersurface formalism for the _2 ×_2 toroidal orbifold. Since the _6 action does not constrain the geometry of the first two-torus, the deformations in the _2^(1)-twisted sector are structurally different from the ones in the _2^(i=2, 3)-twisted sectors. This finding agrees with the difference in the relevant Hodge numbers as follows: we can identify six deformation moduli (four real and one complex) in the _2^(1)-twisted sector and four (real) deformation moduli in the _2^(i=2,3)-twisted sector each, as presented in the hypersurface equation (<ref>). This inherent difference among the _2^(i)-twisted sectors presents new challenges that were absent for the orbifold groups _2 ×_2 and _2 ×_6', which act isotropically on all three two-tori.
It also forces us to discuss the deformations in the _2^(1)-twisted sector in section <ref> separately from the ones in the _2^(i=2,3)-twisted sectors in section <ref>. Introducing an anti-holomorphic involution - a necessary geometric part of the Type IIA orientifold projection - in the hypersurface formalism opens the door for identifying sLag three-cycles, albeit only a minimal subset of sLag three-cycles defined as the fixed loci under this involution. By virtue of Weierstrass' elliptic function, one can easily divide this subset into bulk three-cycles and fractional three-cycles for vanishing deformations. For non-zero deformations, exceptional three-cycles are characterised in the hypersurface formalism by (real) algebraic equations whose local form reduces to the ones of a (resolved) ^2/_2 singularity. Their global description on the other hand is for most deformations not attainable due to the compact topology of the ambient T^4 and/or different complex structures of the two-torus lattices within the factorisable ambient T^4_(2 or 3) in case of the _2^(2 or 3)-twisted sector. Only the exceptional three-cycles associated to the _2^(1) deformation parameters ε_0^(1) and ε_3^(1) can be fully described globally, with the former deforming only the singularity at the origin of T^4_(1)/_2^(1) and the latter deforming a _3-invariant orbit of three singularities on T^4_(1)/_6. By computing the volume-dependence on these deformation parameters for bulk and fractional three-cycles parallel to the -invariant orientifold plane, we are able to cross-check and validate the results of the exceptional three-cycle volume. Moreover, by studying the effects of deformations ε_0^(1) and ε_3^(1) on simple fractional three-cycles in detail, we obtain the necessary intuition to investigate the effects of other deformations, for which the exceptional three-cycles cannot be accessed directly due to the absence of a global description.
This also allows us to propose a method for assessing qualitatively any fractional three-cycle volume as a function of a particular deformation parameter, as explained on page <ref>. When applying these techniques to global intersecting D6-brane models, we observe in first instance that -even exceptional three-cycles give rise to deformation moduli with a flat direction, while D6-branes wrapping -odd exceptional three-cycles couple to deformation moduli which ought to be stabilised at vanishing vev to avoid the presence of non-vanishing Fayet-Iliopoulos terms, which in turn would signal a breakdown of supersymmetry at the string or Kaluza-Klein scale. A loophole to this consideration emerges when the massless open string spectrum contains suitably charged states whose scalar components can develop non-zero vevs to compensate the non-zero FI-term(s) in all D-term equations simultaneously. If present, this phenomenon would result in a spontaneous breaking of the gauge group supported by the D6-branes and would correspond geometrically to a recombination of two separate D-brane stacks (or a separation of D-branes within a single stack). In order to fully understand whether the loophole is realisable, it is of utmost importance to acquire a better handle on the low-energy effective action from first principles through CFT computations <cit.> or through dimensional reduction <cit.>. We hope to resolve and report on this issue in future work. Bypassing the loophole, we can count the maximal number of stabilised deformation moduli ⟨ζ^(i)_λ⟩ = 0, i.e. those ζ^(i)_λ coupling to D6-branes via the respective -odd exceptional wrapping number. An overview of this counting for the previously constructed and phenomenologically interesting global D6-brane models on T^6/(_2 ×_6 ×) is provided in table <ref>, which shows that a maximum of ten out of 14 twisted complex structure moduli can be stabilised in the L-R symmetric prototype IIc model.
For the other prototype models, the maximal number of stabilised deformation moduli is lower, with the lowest number being four in case of the global prototype I PS model. In table <ref>, we also observe a number of twisted moduli with a flat direction affecting the low-energy effective field theory. By going away from the singular orbifold point along these flat directions, the (inverse squared of the) tree-level gauge coupling of some D6-brane acquires a (square root-like) dependence on the non-zero twisted moduli, provided the D6-brane wraps only the associated -even exceptional three-cycle. Depending on the relative sign between the bulk three-cycle and the exceptional three-cycle, the gauge theory on such a D6-brane can become more weakly or more strongly coupled at the string scale, as discussed e.g. for the left-symmetric gauge group SU(2)_L in all global prototype models, for the strong gauge group SU(3)_QCD and the generalised B-L symmetry in global L-R symmetric models and for the hidden gauge group in global PS models. An important observation to be made here concerns the _2×_2 eigenvalues of the respective D6-branes: at the orbifold point, the physics only depends on the relative _2×_2 eigenvalues among the D6-branes (e.g. in the computation of the particle spectra), whereas the absolute _2×_2 eigenvalues enter explicitly in the fractional three-cycle volumes on the deformed toroidal orbifold. This last feature can be useful to improve the matching of the gauge coupling strength at the string scale, or might allow for further subdivisions in the global prototype models. Finally, in each of the global models there exists also a set of deformation moduli to which none of the D6-branes couples, implying that deforming the associated _2 singularities is only expected to impact the fractional three-cycle volumes in subleading order through higher order corrections such as field redefinitions of the moduli or instanton corrections. 
Additional untwisted moduli-dependent corrections to the tree-level gauge couplings result from the one-loop gauge threshold corrections, which exhibit a linear and logarithmic dependence on the Kähler moduli associated to the two-torus volumes (in units of α') in the geometric regime (v_i≳ 1). In the global D6-brane models considered here, all fractional three-cycles have a bulk one-cycle part along T_(1)^2 parallel to the (_2^(1))-invariant plane, such that the gauge threshold corrections depend in each intersection sector on the Kähler modulus v_1 representing the area of the first two-torus T_(1)^2. This dependence with an overall positive prefactor presents various phenomenological challenges for compactifications with an intermediate string scale M_string and LARGE internal volumes as discussed in section <ref>: a large hierarchy between the Planck scale M_Planck and the string scale M_string can yield a reasonable tree-level coupling when compensated by a hierarchically large complex structure modulus R_2^(1)/R_1^(1) of T_(1)^2. Taking into account the v_1-dependent one-loop gauge threshold corrections makes it, however, difficult to stay in the geometric regime and implies an exponential suppression of the running gauge coupling, excluding the possibility of a strong coupling regime of the QCD stack. Thus, the phenomenologically interesting, global D6-brane models at hand prefer a high string scale, while gauge unification is neither easily realised at tree-level, nor when including the one-loop gauge threshold corrections. Each global D6-brane prototype model at hand forms a realisation of a supersymmetric Minkowski vacuum with all internal RR-fluxes or NS-NS-fluxes set to zero.
Despite the underlying mathematical consistency of these global D6-brane models, there are two important elements missing in these vacuum configurations: a mechanism to stabilise all moduli vevs including the dilaton (and thereby also the moduli masses) and an alternative mechanism for supersymmetry breaking beyond the formation of a gaugino condensate in some hidden sector - which is of particular relevance to those models with only Abelian `hidden' gauge bosons such as the L-R symmetric prototypes II, IIb and IIc - producing soft supersymmetry-breaking terms that lift the mass degeneracy of the massless open string matter states. In Type II superstring theory, one can (at least in principle) kill these two birds with one stone by turning on internal RR- and NS-NS-fluxes generating a non-vanishing F-term for one (or more) of the moduli multiplets. This consideration begs the question whether one can consistently switch on twisted internal NS-NS-fluxes supported along resolved exceptional three-cycles and discuss moduli stabilisation for the deformation moduli (or twisted complex structure moduli) perturbatively, in contrast to the moduli stabilisation scheme for Kähler moduli in Type IIB string theory through solely non-perturbative effects as discussed e.g. in <cit.>. This question is being addressed in a separate research project and we hope to answer it positively in the near future. Moreover, when switching on twisted NS-NS-fluxes one generally also expects to generate a scalar potential for the axionic (or CP-odd) partners of the twisted complex structure moduli, and it is natural to ask whether the shape and symmetries of this scalar potential exhibit the generalised Kaloper-Sorbo structure, as recently established <cit.> for Type IIA orientifold compactifications on smooth Calabi-Yau backgrounds.
Similar to the potential cancellation of FI-terms by virtue of vevs associated to charged matter states, this question should be settled at tree-level through a dimensional reduction of the Chern-Simons action and the Dirac-Born-Infeld action to an effective four-dimensional N=1 supergravity following the methods in <cit.>.

Acknowledgements: W.S. would like to thank Iñaki García-Etxebarria for enlightening conversations. This work is partially supported by the Cluster of Excellence `Precision Physics, Fundamental Interactions and Structure of Matter' (PRISMA) DFG no. EXC 1098, the DFG research grant HO 4166/2-2 and the DFG Research Training Group `Symmetry Breaking in Fundamental Interactions' GRK 1581. W.S. is supported by the ERC Advanced Grant SPLE under contract ERC-2012-ADG-20120216-320421, by the grant FPA2012-32828 from the MINECO, and the grant SEV-2012-0249 of the “Centro de Excelencia Severo Ochoa" Programme.

§ MÖBIUS TRANSFORMATIONS

In section <ref>, a two-torus T^2 was described as an elliptic curve E in the weighted projective space ^2_112 through eq. (<ref>) with a built-in _2 symmetry acting on the homogeneous coordinate y of weight 2. The fixed points of this _2 action correspond to the roots of the polynomial F(x,v) in equation (<ref>). The position of these roots in the x-coordinate or in the v-coordinate is tied to the complex structure of the two-torus. In table <ref> we provide a list of the roots in all coordinate patches for a square two-torus and a hexagonal two-torus.
_2^(i) fixed points per T_(i)^2 in various coordinate patches:

  _2^(1) fixed points on the square T_(1)^2:
    fixed point α | x_1-coordinate | v_1-coordinate
    1             | x_1 = ∞        | v_1 = 0
    2             | x_1 = 1        | v_1 = 1
    3             | x_1 = 0        | v_1 = ∞
    4             | x_1 = -1       | v_1 = -1

  _2^(l) fixed points on the hexagonal T_(l=2,3)^2:
    fixed point α | x_l-coordinate | v_l-coordinate
    1             | x_l = ∞        | v_l = 0
    3             | x_l = 1        | v_l = 1
    2             | x_l = ξ        | v_l = ξ^2
    4             | x_l = ξ^2      | v_l = ξ

OverviewZ2FixedPointsPerT2 Overview of the _2^(i) fixed points on the square torus T_(1)^2 and the hexagonal tori T_(l=2,3)^2 in the homogeneous coordinates x_i (v_i=1 patch) and in the homogeneous coordinates v_i (x_i=1 patch). The labelling of the fixed points matches the one in figure <ref>.

There exist automorphisms λ_α: E → E of the elliptic curve <cit.> interchanging the coordinates x and v through the action:

λ_α: ( [ x; v ]) ↦ λ_α ( [ x; v ]) = 1/√(2 ϵ_α^2 + ϵ_β ϵ_γ) ( [ ϵ_α , ϵ_α^2 + ϵ_β ϵ_γ ; 1 , -ϵ_α ]) ( [ x; v ]), y ↦ y,

where the ϵ_α correspond to the roots entering in equation (<ref>). These automorphisms λ_α interchange the _2 fixed points:

λ_2: 1 ↔ 2, 3 ↔ 4, λ_3: 1 ↔ 3, 2 ↔ 4, λ_4: 1 ↔ 4, 2 ↔ 3,

in line with the coordinate transformation, and are further constrained by the tiltedness of the torus lattice:

untilted: λ̄_α = λ_α, tilted: λ̄_2 = λ_4, λ̄_3 = λ_3.

In a coordinate patch where v=1 (or x=1), the automorphisms λ_α coincide with Möbius transformations. Recall that for a square two-torus the roots are given by ϵ_2 = 1, ϵ_3 = 0 and ϵ_4 = -1, while a hexagonal two-torus is characterised by the roots ϵ_2 = ξ, ϵ_3 = 1 and ϵ_4 = ξ^2. The automorphisms λ_α also allow one to exchange Lag lines among each other.
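As a quick consistency check (not part of the original derivation), the quoted exchanges of fixed points under λ_α can be verified numerically for the square torus, whose roots are ϵ_2 = 1, ϵ_3 = 0, ϵ_4 = -1; the labels and homogeneous coordinates (x : v) follow the table above.

```python
# Fixed points of the square T^2 in homogeneous coordinates (x : v),
# labelled as in the table above: 1 ~ x=infinity, 2 ~ x=1, 3 ~ x=0, 4 ~ x=-1.
FIXED = {1: (1, 0), 2: (1, 1), 3: (0, 1), 4: (-1, 1)}
EPS = {2: 1, 3: 0, 4: -1}          # roots eps_2, eps_3, eps_4 of the square torus

def mobius(alpha, point):
    """Unnormalised matrix of lambda_alpha acting on (x : v); the prefactor
    1/sqrt(2 eps_a^2 + eps_b*eps_g) drops out in projective coordinates."""
    others = [EPS[b] for b in EPS if b != alpha]
    ea, ebg = EPS[alpha], others[0] * others[1]
    x, v = point
    return (ea * x + (ea**2 + ebg) * v, x - ea * v)

def same(p, q):
    """Projective equivalence (x : v) ~ (x' : v') via vanishing cross product."""
    return p[0] * q[1] == q[0] * p[1]

def permutation(alpha):
    return {i: next(j for j, q in FIXED.items() if same(mobius(alpha, p), q))
            for i, p in FIXED.items()}

# Reproduces the exchanges quoted in the text:
assert permutation(2) == {1: 2, 2: 1, 3: 4, 4: 3}   # lambda_2: 1<->2, 3<->4
assert permutation(3) == {1: 3, 2: 4, 3: 1, 4: 2}   # lambda_3: 1<->3, 2<->4
assert permutation(4) == {1: 4, 2: 3, 3: 2, 4: 1}   # lambda_4: 1<->4, 2<->3
```

The same routine applied with the hexagonal roots (ϵ_2 = ξ, ϵ_3 = 1, ϵ_4 = ξ^2) requires complex arithmetic but proceeds identically.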
For a square T^2 the Lag lines aI-aIV from table <ref> behave as follows under Möbius transformations: λ_2 swaps the Lag lines aI ↔ aIV leaving aII and aIII invariant, λ_3 exchanges aI ↔ aIV and aII ↔ aIII, and finally λ_4 swaps aII ↔ aIII leaving aI and aIV invariant. For a hexagonal T^2, the Lag lines from table <ref> behave as follows under Möbius transformations: λ_3 keeps all the cycles bX invariant, whereas λ_2 and λ_4 exchange the Lag lines bI^0 ↔ bIII^0 and bII^0 ↔ bIV^0.

§ CORRECTION TERMS FOR _2^(1)-DEFORMATIONS

In section <ref> we observed that certain deformations in the _2^(1)-twisted sector deform too many singular orbits. The deformations with parameters ε_4+5^(1) and ε_4-5^(1), for instance, inadvertently deform the singularity (33). Such a feature was first observed in <cit.> and appears to be intrinsic to _2-twisted sectors subject to an additional _3 action on the full covering four-torus T^4, as evidenced in table <ref>. In order to overcome the unwanted deformation of singular points, which are spatially separated from the one where the deformation parameter is localised, we have to add a counter-term which eliminates the effect of the initial deformation there. Building further on the ε_4+5^(1) example, we have to switch on a correction term ε_3^(1) as a function of ε_4+5^(1). In order to determine this functional dependence, we consider a sLag two-cycle on T^4_(1)/_6, represented by the algebraic condition x_2 = x_3, going through the singular point (33) and enforce that there is no exceptional two-cycle emerging from that point for a deformation parameter ε_4+5^(1) ≠ 0 elsewhere.
This allows us to find a series expansion for ε_3^(1) as a function of ε_4+5^(1) ≡ ε_in:

ε_3^(1)(ε_in) = - (1/4) ε_in^2 - (5/(4 × 3!)) ε_in^3 - (31/(4 × (3!)^2)) ε_in^4 - ((13 × 17)/(4 × (3!)^3)) ε_in^5 - (1753/(4 × (3!)^4)) ε_in^6 - ((11 × 13 × 53)/(2 × (3!)^5)) ε_in^7 - ((167 × 421)/(2 × (3!)^6)) ε_in^8 - ((137 × 3331)/(8 × (3!)^6)) ε_in^9 - ((13 × 17 × 41 × 43)/(4 × (3!)^6)) ε_in^10 + O(ε_in^11).

We have to point out that this approach only works for small (initial) deformations ε_in, as long as the correction term ε_3^(1)(ε_in) remains smaller than the original deformation. Applying this requirement for the (conservative) assumption |ε_3^(1)| ∼ (1/10) ε_in implies the range |ε_in| ≲ 0.3 for the initial deformation. Similar considerations have to be made for other deformation parameters, as summarised in table <ref>. In each particular case, we apply the same method as described above and construct a Taylor expansion in terms of the initial deformation.

Deformations & correction terms in the _2^(1) sector:

                     deformation parameter
  correction term  | ε_3^(1)     | ε_4+5^(1)   | ε_4-5^(1)
  ε_3^(1)          | -           | eq. (<ref>) | eq. (<ref>)
  ε_4+5^(1)        | eq. (<ref>) | -           | -

SummDefCorrZ21 Overview of correction terms for various _2^(1) deformation parameters with the corresponding Taylor expansion indicated by the equations in the main text.

More explicitly, a non-vanishing deformation parameter ε_3^(1) ≡ ε_in deforms the _2^(1)-fixed point orbits e_4^(1) and e_5^(1) unintentionally, as pointed out in table <ref>. This effect can be undone by turning on a correction term ε_4+5^(1) as a function of ε_3^(1).
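As a numerical aside, the quoted bound |ε_in| ≲ 0.3 can be reproduced by evaluating the truncated series for ε_3^(1)(ε_in) and locating the point where the correction reaches one tenth of the initial deformation; a short sketch (the bisection bracket is an illustrative choice):

```python
from fractions import Fraction as F

# Taylor coefficients of eps_3^(1)(eps_in) from the expansion above
# (powers eps_in^2 ... eps_in^10); 3! = 6.
COEFF = {
    2: -F(1, 4),
    3: -F(5, 4 * 6),
    4: -F(31, 4 * 6**2),
    5: -F(13 * 17, 4 * 6**3),
    6: -F(1753, 4 * 6**4),
    7: -F(11 * 13 * 53, 2 * 6**5),
    8: -F(167 * 421, 2 * 6**6),
    9: -F(137 * 3331, 8 * 6**6),
    10: -F(13 * 17 * 41 * 43, 4 * 6**6),
}

def eps3(eps_in):
    """Truncated series eps_3^(1)(eps_in)."""
    return sum(float(c) * eps_in**n for n, c in COEFF.items())

# Solve |eps_3^(1)(eps_in)| = eps_in / 10 by bisection: the point where the
# correction term reaches 10% of the initial deformation.
lo, hi = 0.1, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if abs(eps3(mid)) < mid / 10:
        lo = mid
    else:
        hi = mid

print(round(lo, 3))   # ~0.3, the validity range |eps_in| <~ 0.3 quoted in the text
```

The leading term alone, |ε_3^(1)| ≈ ε_in^2/4 = ε_in/10, already gives ε_in = 0.4; the higher-order terms tighten this to roughly 0.3.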
Following the same logic as presented above, one finds the following series expansion for ε_4+5^(1):

ε_4+5^(1)(ε_in) = - (1/2) ε_in^2 - (1/(2 × 3)) ε_in^3 - (19/(2^5 × 3^2)) ε_in^4 - (1/2^5) ε_in^5 - (1/(2^6 × 3^2)) ε_in^6 - ((5^2 × 17)/(2^9 × 3^3)) ε_in^7 - ((149 × 463)/(2^13 × 3^5)) ε_in^8 - ((131 × 223)/(2^12 × 3^5)) ε_in^9 - (266401/(2^14 × 3^6)) ε_in^10 + O(ε_in^11).

Turning on the deformation parameter ε_4-5^(1) ≡ ε_in on the other hand unintentionally deforms the _2^(1)-fixed point orbit e_3^(1), as can be seen from table <ref>. This effect can be undone by a correction term ε_3^(1) depending on the initial deformation parameter ε_4-5^(1), whose functional dependence follows from repeating the same logic as above:

ε_3^(1)(ε_in) = - (1/(2^2 × 3)) ε_in^2 + (1/(2^3 × 3 √3)) ε_in^3 + (1/(2^4 × 3^2)) ε_in^4 - (11/(2^5 × 3^3 √3)) ε_in^5 - (7/(2^6 × 3^4)) ε_in^6 + (11/(2^3 × 3^5 √3)) ε_in^7 + (59/(2^7 × 3^7)) ε_in^8 - (3019/(2^9 × 3^6 √3)) ε_in^9 - (25/(2^6 × 3^9)) ε_in^10 + O(ε_in^11).

After having determined the relevant correction term for each of the deformations, one can also explicitly verify how the correction terms restore the singular nature of the orbits, as depicted in the lower plots of figure <ref> (f) and (g).

§ TABLES OF MATTER STATES PER D6-BRANE INTERSECTION SECTOR

In this appendix, we summarise the massless open string spectrum per sector for each of the global prototype D6-brane models with interesting phenomenological features as discussed in section <ref>. Table <ref> contains the massless open string spectrum of the global prototype MSSM model discussed in section <ref>, and table <ref> contains the counting of massless open string states per intersection sector for the global PS prototype I model discussed in section <ref>.
Table <ref> displays the counting of the observable massless open string sector of all L-R symmetric models discussed in section <ref> as well as of the PS prototype II model.Finally, table <ref> provides the sector-per-state counting for the different `hidden' completions of the massless matter spectrum per L-R symmetric model.The presentation in terms of matter states per sector is indispensable for the computation of the gauge threshold corrections in section <ref> and is useful for the discussion about potentially flat directions for twisted moduli in section <ref> to <ref>. [h] 21cm[ 10|c| Total amount of massless matter per sector for a 5-stack MSSM model on the aAA lattice of T^6/(_2 ×_6 ×) with discrete torsion; (χ^x (ω^k ∈{0,1,2} y))y=a y=a'y=by=c y=c'y=d y=d'y=h y=h'; x= a(0,0,0)([2]_2-2+2,-1,1)(0,0,0)(2,1,0) (-4,-1,-1)(0,0,0)(2,1,0)(-2,-1,0) ([2],1,-1) ([2]_-2+2+2,0,0); 2-3 x= b 2|c| (0,3 +[2]/2,-3+[2]/2)(0,0,0) (0,-3 + [2],[4])-(0,3+[2],3+[2])-(-2,-1,0)-; 4-4 x= c 3|c|(0,[4]/2,[4]/2)([2]_2+2-2,3+[2],-3+[2])(0,0,0) (0,[4],-3+[2])(0,[4],3+[2])(0,0,0) (-4,-1,-1); 5-6 x= d 5|c|(0,-3 +|2|/2,3+|2|/2) (0,3+[2],-3+[2])([2]_-2+2+2,0,0)(2,1,0)(2,0,1); 7-8 x= h 7|c|(0,0,0)([2]_2-2+2,-1,1)(0,0,0);]Overview of the total amount of chiral and vector-like massless (open string) matter per sector x(ω^k y) for the global five-stack MSSM-like model with fractional D6-brane configuration given in table <ref>. If the net-chirality |χ^x(ω^ky)| < φ^x(ω^ky), the sector x(ω^ky) comes with a set of vector-like pairs of matter states, whose multiplicity corresponds to n^x(ω^ky)_NC≡φ^x(ω^ky) - |χ^x(ω^ky)|, e.g. n^a(ω^0h)_NC=[2] denotes one vector-like pair of bifundamental states in the sector a(ω^0h).The diagonal entries φ^x(ω x) = φ^x(ω^2 x) = φ^_x/2 count the number of states in the adjoint representation for the x-stack, e.g. 
the entry (0,[4]/2,[4]/2) in the c(ω^k c)_k=0,1,2 sectors corresponds to four multiplets in the adjoint representation arising at intersections of c with its orbifold images (ω^k c)_k=1,2. The upper and lower entries in the x(ω^k x') sectors count the number of states in the antisymmetric and symmetric representation, respectively. The entries ([2]_2-2+2,-1,1)(0,0,0) for the a(ω^k a')sectors e.g. amount to one vector-like pair of antisymmetric reprentations from the aa' sector plus a second pair spread over the a(ω a')+a(ω^2 a') sectors. The lower index of e.g. 2-2+2 in the aa' sector, moreover, determines the decomposition of ∑_i=1^3 δ^0_σ^i_xyδ^0_τ^i_xyb̃^ A,(i)_xy introduced in table <ref>, which is required as input for the computation of the one-loop gauge threshold Δ_aa'^ A according totable <ref>.21cm[ 7|c| Total amount of matter per sector for the Pati-Salam model prototype I onT^6/(_2 ×_6 ×);(χ^x (ω^k∈{0,1,2} y))y=a y=a'y=by=cy=h y=h'; x= a(0,0,0)([2]_2-2+2,-1,1)(0,0,0)(2,1,0)(-2,0,-1) ([2]_2+2-2,1,-1) ([2]_-2+2+2,0,0); 2-3 x= b 2|c| (0,3 +[2]/2,-3+[2]/2)(0,0,0)([2]_2-2+2,[4],[4])(-2,-1,0)-; 4-4 x= c 3|c|(0,3+[2]/2,-3+[2]/2)(0,0,0)(2,0,1)-; 5-5 x= h 4|c|(0,0,0)([2]_2-2+2,-1,1)(0,0,0);] Overview of the chiral and vector-like matter spectrum per sector x(ω^k y) of the PS I model with D6-brane data specified in table <ref>.For details of the notation see the caption of table <ref>.[h] 21cm[ 9|c| Total amount of matter per sector: universal visible part for all L-R symmetric models and Pati-Salam II onT^6/(_2 ×_6 ×);(χ^x (ω^k∈{0,1,2} y))y=a y=a'y=by=c y=d_ L-Ry=d_ L-R' y=h_ PS II y=h_ PS II^'; x= a(0,0,0)([2]_2-2+2,-1,1)(0,0,0)(2,0,1)(-2,-1,0) ([2]_-2+2+2,0,0) ([2]_2+2-2,1,-1) ([2],-1,1)([2],0,0); 2-3 x= b 2|c| (0,3 +[2]/2,-3+[2]/2)(0,0,0) ([2]_2-2+2,[4], [4])(2,1,0)- (0,[4],-3+[2])-; 4-4 x= c 3|c| (0,3 +[2]/2,-3+[2]/2)(0,0,0)(-2,0,-1)-(0,3+[2],[4])-;x= d_ L-R4|c||(0,0,0)([2]_2-2+2,-1,1)(0,0,0) 2|c|; x=h_ PS 
II6|c||(0,[4]/2,[4]/2)([2]_2-2+2,3+[2],-3+[2])(0,0,0);]Overview of the massless visible matter states per intersection sector of the L-R symmetric models and the massless visible plus `hidden' matter of the PS II model with D6-brane data specified in tables <ref> and <ref>, respectively.For the sector-per-state counting of the `hidden' matter in the L-R symmetric models see table <ref>. Details of the notation are given in the caption of table <ref>. 21cm [ 9|c| Total amount of `hidden' matter per sector forthe L-R symmetric models on T^6/(_2 ×_6 ×) with discrete torsion; 4|c|| L-R I 4|c| L-R II; (χ^x (ω^k y)) y=h_1y=h_1' y=h_2y=h_2' y=h_1y=h_1' y=h_2y=h_2';x= a (0,-1,-1) (0,0,0) (0,1,1) (0,0,0)([2],-1,1) ([2],0,0)([2],1,-1) ([2],0,0); x=b(2,0,-1) -(-2,0,1) - (0_0,-2,0,[4],-3+[2]) - (0_0+2+0,[4],3+[2]) -; x=c(2,-1,0) -(-2,1,0) -(0_0,-2,0,3+[2],[4]) -(0_0+2+0,-3+[2],[4]) -;x= d (0,1,1) (0,0,0) (0,-1,-1) (0,0,0)([2],1,-1) ([2],0,0)([2],-1,1) ([2],0,0);x= h_1 (0,0,0)([2]_2+2-2,-1,1(0,0,0)([2]_-2+2+2,0,0)([2]_2-2+2,-1,1) (0,[4]/2,[4]/2) ([2]_2-2+2,3+[2],-3+[2])(0,0,0)([2]_-2+2+2,[4],[4])([2]_2+2-2,-3+[2],3+[2]); 2-36-7 x= h_22|c| (0,0,0) ([2]_2+2-2,-1,1)(0,0,0) (0,[4]/2,[4]/2) ([2]_2-2+2,3+[2],-3+[2])(0,0,0); 4|c|| L-R IIb4|c| L-R IIc; (χ^x (ω^k y)) y=h_1y=h_1' y=h_2y=h_2' y=h_1y=h_1' y=h_2y=h_2';x= a (0_2,0,0) (0_2,0_1,0_1) (0_2,0,0) (0_2,0_1,0_1) (-2,-1,0) (2,1,0) (0_2,0,0_1) (0_2,0,0_1); x=b (0,0_5,0_4) - (0,0_5,0_4) - (0,[4],[4]) - (0,0_5,0_5) -; x=c (0,0_4,0_5) - (0,0_4,0_5) - (0,3+[2],3+[2]) - (0,0_4,0_4) -;x= d (0_2,0,0) (0_2,0_1,0_1) (0_2,0,0) (0_2,0_1,0_1) (2,1,0) (-2,-1,0) (0_2,0,0) (0_2,0_1,0_1);x= h_1 (0,[4]/2,[4]/2)([2]_2-2+2,3+[2],-3+[2] )(0,0,0)([2]_-2+2+2,[4],[4])([2]_2+2-2,-3+[2],3+[2]) (0,[5]/2,[5]/2) ([2]_-2+2+2,[5],[5] (0,0_5,0_5) (0,0_5,0_5); 2-36-7 x= h_22|c| (0,[4]/2,[4]/2)([2]_2-2+2,3+[2],-3+[2] )(0,0,0)2|c| (0,[5]/2,[5]/2) ([2]_-2+2+2, [5],[5](0,0,0); ]Overview of the total amount of `hidden' chiral and vector-like matter per sector x(ω^k 
h_i^(')) and h_i(ω^k h_j^(')) in the L-R symmetric models with D6-brane data specified in table <ref>. The counting of the associated visible matter states per sector is displayed in table <ref>, and the notation is explained in the caption of table <ref>.
http://arxiv.org/abs/1702.08424v1
{ "authors": [ "Gabriele Honecker", "Isabel Koltermann", "Wieland Staessens" ], "categories": [ "hep-th", "hep-ph" ], "primary_category": "hep-th", "published": "20170227183056", "title": "Deformations, Moduli Stabilisation and Gauge Couplings at One-Loop" }
I. Importance of accretion efficiency and deuterium abundance The isochrones are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/599/A49 Department of Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi 464-8602, Japan kunitomo@nagoya-u.jp Université de Nice-Sophia Antipolis, Observatoire de la Côte d'Azur, CNRS UMR 7293, 06304 Nice CEDEX 04, France Department of Earth and Planetary Sciences, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551, Japan Earth-Life Science Institute, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551, Japan Protostars grow from the first formation of a small seed and subsequent accretion of material. Recent theoretical work has shown that the pre-main-sequence (PMS) evolution of stars is much more complex than previously envisioned. Instead of the traditional steady, one-dimensional solution, accretion may be episodic and not necessarily symmetrical, thereby affecting the energy deposited inside the star and its interior structure. Given this new framework, we want to understand what controls the evolution of accreting stars. We use the MESA stellar evolution code with various sets of conditions. In particular, we account for the (unknown) efficiency of accretion in burying gravitational energy into the protostar through a parameter, ξ, and we vary the amount of deuterium present. We confirm the findings of previous works that, in terms of evolutionary tracks on the Hertzsprung-Russell (H-R) diagram, the evolution changes significantly with the amount of energy that is lost during accretion. We find that deuterium burning also regulates the PMS evolution. In the low-entropy accretion scenario, the evolutionary tracks in the H-R diagram are significantly different from the classical tracks and are sensitive to the deuterium content.
A comparison of theoretical evolutionary tracks and observations allows us to exclude some cold accretion models (ξ∼ 0) with low deuterium abundances. We confirm that the luminosity spread seen in clusters can be explained by models with a somewhat inefficient injection of accretion heat. The resulting evolutionary tracks then become sensitive to the accretion heat efficiency, initial core entropy, and deuterium content. In this context, we predict that clusters with a higher D/H ratio should have less scatter in luminosity than clusters with a smaller D/H. Future work on this issue should include radiation-hydrodynamic simulations to determine the efficiency of accretion heating and further observations to investigate the deuterium content in star-forming regions.

Revisiting the pre-main-sequence evolution of stars

Masanobu Kunitomo<ref>, Tristan Guillot<ref>, Taku Takeuchi<ref> (present affiliation: Sanoh Industrial Co., Ltd., Japan), Shigeru Ida<ref>

Received 5 February 2016 / Accepted 6 December 2016

§ INTRODUCTION

Since the pioneering work of <cit.>, the first phase of stellar evolution, the so-called pre-main-sequence or PMS, is generally considered with the following simple approach: A star is formed with a large radius and contracts isotropically, and a huge release of gravitational potential energy heats the interior and yields a similarly large luminosity, which ensures convection in most of the interior of the star. As the star heats up, thermonuclear reactions set in, contraction is slowed, and a radiative zone begins to grow.
In standard models for the contraction of a solar mass star, the growth of the radiative zone starts after 2 million years and the star reaches the main sequence (with an outer convective zone that is only about 2.5% of the total mass of the present Sun) after about 40 million years. This long phase, in which the star possesses a very deep convective zone or is even fully convective, almost guarantees a homogeneous composition in the stellar interior owing to extremely fast mixing in convective zones.A large portion of the gravitational energy of the accreted material is supposed to be given to the star, ensuring a large radius and important luminosity <cit.>.Contrary to that ideal picture, a number of studies have revealed that the PMS evolution can be strongly affected by the way material is accreted onto the star during the accretion phase. If material loses entropy before or during the accretion onto the star, it can grow from a small radius and avoid the large quasi-static contraction phase <cit.>. This has strong consequences for the stellar evolutionary tracks and therefore the inferred physical properties of the star <cit.>. Finally, it controls the level to which planet formation affects stellar surface composition, as suggested from observations <cit.> or theoretical models <cit.>.In this article, we wish to understand what controls the evolutionary tracks on the PMS, explain apparent differences seen in published results, and obtain limits on physically plausible evolutionary tracks from observational constraints. In a subsequent article, we will attempt to understand what controls the evolution of radiative and convective zones in PMS stars.This paper is organized as follows. In Sect. <ref>, we describe our physical model and computation method for simulating the PMS evolution including accretion.In Sect. <ref>, we examine how, and to what extent, energy losses during accretion control PMS evolution. 
We also identify deuterium as playing a leading role in this evolution phase despite its fast burning nature. In Sect. <ref>, we explore the dependence on the entropy of accreting matter. In Sect. <ref>, we compile the results of Sects. <ref> and <ref> to evaluate observational constraints and observational consequences. Our results are summarized in Sect. <ref>.

§ METHOD

§.§ Stellar evolution code

Our evolution models are calculated for spherically symmetric single stars without strong magnetic fields or rotation, but including accretion from a small initial seed. We use the one-dimensional stellar evolution code MESA version 6596 <cit.> and refer to the Paxton et al. papers for full details of the computational method. The code numerically solves the equations of continuity, momentum, energy, temperature gradient, and composition. We assume hydrostatic equilibrium for the momentum equation. The temperature gradient is determined by the mixing length theory of <cit.> and <cit.>. The Ledoux criterion of convection is used. The composition is changed by thermonuclear reactions and diffusion. The diffusion coefficient is given by the mixing length theory and the overshooting prescription of <cit.>, in which the diffusion coefficient exponentially approaches zero in the radiative zone from the boundary with the convective zone. We use the mixing length parameter, αMLT=1.90506, which is the ratio of the mixing length to the pressure scale height, and the overshooting parameter, fov=0.0119197, which determines the extent of the overshooting region <cit.> (see Appendix <ref>). The energy equation is described in Sect. <ref> in detail. We use the radiative opacity of <cit.> and <cit.> for the temperature range below 10^4.5 K, and the electron conduction opacity of <cit.>.
We adopt the MESA default version of the equation of state; we basically follow the OPAL EOS tables <cit.> and SCVH tables <cit.>.

§.§ Thermonuclear reactions

Pre-main-sequence stars ignite light elements such as deuterium and lithium when the central temperature exceeds ∼ 10^6 and ∼ 3×10^6 K, respectively. Deuterium burning proceeds as D + ^1H → ^3He + γ. This is a strongly exothermic reaction, which produces 5.494 MeV per reaction. Although lithium burning produces about three times more energy per reaction, it does not affect the evolution owing to its ∼ 10^3 times smaller abundance <cit.>. We use the thermonuclear reaction rates of <cit.>. Main-sequence (MS) stars ignite hydrogen when the central temperature exceeds ∼ 10^7 K.

§.§ Chemical composition

The initial mass fractions of hydrogen, helium, and metals are denoted by Xini, Yini, and Zini, respectively. We choose the input parameters that reproduce values of the present-day Sun estimated by helioseismic and spectroscopic observations (see Appendix <ref>). We conducted a χ^2 test and used the following converged input parameters: X_ini=0.70046, Y_ini=0.27948 and Z_ini=0.02006. We assume ^3He/^4He = 10^-4, similar to the value in the Jovian atmosphere <cit.>. We use the composition of metals of <cit.>. As described in the previous section, the deuterium content is a key parameter for the PMS evolution. In this study we use the mass fraction of deuterium, XD; our fiducial XD is 20 ppm (parts per million, 10^-6), which is an interstellar value. However, the interstellar XD remains uncertain, and we therefore explore the consequences of varying the deuterium content in Sect. <ref>. Finally, the accreting gas is assumed to have a constant composition with time. Importantly, this includes fresh deuterium, which is added to the outer layers. Since deuterium is quickly burned in the stellar interior, this is an important factor governing the evolution on the PMS.
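To gauge why such a trace species matters: with X_D = 20 ppm and 5.494 MeV released per reaction, deuterium carries a specific energy reservoir of a few 10^13 erg g^-1, the same order of magnitude as GM_⋆/R_⋆ for a young, extended protostar. A quick estimate (pure arithmetic with standard physical constants; the comparison itself is ours, not a statement from the paper):

```python
# Specific energy available from deuterium burning for X_D = 20 ppm,
# using Q = 5.494 MeV per D + p -> 3He reaction (values from the text).
MeV = 1.602176634e-6        # erg
amu = 1.66053907e-24        # g
Q   = 5.494 * MeV           # energy per reaction, erg
X_D = 20e-6                 # deuterium mass fraction
m_D = 2.0141 * amu          # deuteron mass, g

e_D = X_D / m_D * Q         # erg released per gram of stellar material
print(f"{e_D:.2e} erg/g")   # a few 1e13 erg/g
```

For comparison, GM/R for a 0.1 M_⊙, 1 R_⊙ protostar is ∼2×10^14 erg g^-1, which is why deuterium ignition can visibly halt or reverse contraction.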
The deuterium abundance profile is calculated by accounting for diffusion.

§.§ Initial conditions

Stars are formed by the collapse of molecular cloud cores through a number of stages <cit.>. As the collapse of a molecular cloud core proceeds, the central density increases and the central region becomes adiabatic. After the first hydrostatic core is formed, the dissociation of hydrogen molecules causes the second collapse once the central temperature exceeds ∼ 2000 K. As a result of the second collapse, the central temperature increases and eventually a second hydrostatic core is formed. In this paper, we focus on the evolution after the formation of the second core. This second core is only about a Jovian mass initially, and most of the mass is thus accreted in a subsequent phase. As accretion is progressively suppressed, the star enters its quasi-static contraction phase, increases its central temperature, and eventually becomes a main-sequence star. In this paper, we set our fiducial values of the initial mass and radius as 0.01 M_⊙ and 1.5 R_⊙, respectively, following <cit.> and <cit.>. This choice is essentially driven by convergence issues with MESA at low masses. However, our fiducial initial mass is higher than the second core mass, which is derived by radiation-hydrodynamic simulations <cit.>. Therefore initial seeds in the present work correspond to protostars slightly evolved from second cores. As discussed by <cit.>, the values of the initial seed mass and radius set the entropy of the forming star and therefore its subsequent evolution. We show in Appendix <ref> that for seed masses between 1 and 4 Jovian masses, the range of radii when the mass attains 0.01 M_⊙ is between 0.25 and 1.5 R_⊙. Our fiducial value of 1.5 R_⊙ therefore corresponds to a high initial value of the entropy of the star. In Sect. <ref> we explore the consequences of variations of the initial radius down to 0.25 R_⊙.
§.§ Mass accretion

The collapse of the molecular cloud core yields an accretion rate Ṁ ∼ cs^3/G, where cs is the characteristic speed of sound in the cloud <cit.>. For molecular cloud temperatures T ∼ 30 K, this implies Ṁ ≈ 10^-5 M_⊙/yr, which is our fiducial accretion rate. The accretion onto the star is however mostly determined by the angular momentum of the collapsing gas and the formation of a circumstellar disk <cit.>. It can be strongly variable (episodic), as observed in FU Ori objects. We explore the effect of varying this rate in Sect. <ref>. We stop mass accretion abruptly when the stellar mass reaches the final mass, Mfin.

§.§ Modeling the consequences of accretion

The consequence of accretion onto the forming star is of course an increase of its mass, but accretion also modifies the stellar environment and its radiation to space and delivers energy to the star. Accretion also delivers fresh deuterium, which is an important source of combustible material. It is difficult at this point to model the problem in its full complexity, in particular because this should involve accounting for the angular momentum of material in the molecular cloud core, its magnetic field, the presence of outflows, and the particular geometry that arises from the birth of a circumstellar disk. Instead, we adopt a simple parametric approach inspired by the work of <cit.>. First, we assume that the specific entropy of the accreting material is the same as that at the stellar surface. This follows from the estimate of <cit.> that, in the case of accretion from disk to protostar, any entropy (equivalent temperature) excess would be quickly radiated to space because, especially in the presence of an inner cavity, the optical thickness toward the stellar interior is much larger than toward the disk. In practice, this treatment is favorable to the computational convergence.
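For reference, the collapse estimate Ṁ ∼ cs^3/G quoted above can be checked numerically; a sketch assuming a mean molecular weight μ ≈ 2.33 for cold molecular gas (a value we adopt here, not stated in the text):

```python
import math

# Order-of-magnitude check of Mdot ~ c_s^3 / G for a T ~ 30 K cloud.
k_B   = 1.380649e-23      # J/K
m_H   = 1.6735575e-27     # kg
G     = 6.674e-11         # m^3 kg^-1 s^-2
M_sun = 1.989e30          # kg
year  = 3.156e7           # s

T, mu = 30.0, 2.33
c_s  = math.sqrt(k_B * T / (mu * m_H))   # isothermal sound speed, ~330 m/s
mdot = c_s**3 / G                        # kg/s
mdot_msun_yr = mdot * year / M_sun

print(f"c_s = {c_s:.0f} m/s, Mdot = {mdot_msun_yr:.1e} Msun/yr")  # ~1e-5 Msun/yr
```

The estimate indeed lands within a factor of a few of the fiducial 10^-5 M_⊙/yr, consistent with the steep Ṁ ∝ T^3/2 dependence on cloud temperature.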
We point out that validating this hypothesis would require a multidimensional radiation-hydrodynamic simulation beyond the scope of the present work. Second, we parametrize the heat injected by the accreting material as

Ladd = ξ G M_⋆ Ṁ / R_⋆,

where M_⋆ and R_⋆ are the stellar mass and radius, respectively, and energy constraints impose that ξ is such that 0 ≤ ξ ≤ 1 [ Our ξ is equivalent to αϵ in <cit.>. ]. In situations in which radiation to space is favored (e.g., when accreting from a disk with an inner cavity), we expect a low value of ξ. Even when this is not the case, <cit.> show that radiative transfer considerations impose ξ ≤ 3/4 for spherical accretion. If material accretes onto the star through an active Keplerian disk, the upper limit of ξ should be 0.5 owing to the radiative cooling from the disk surface. If the star rotates rapidly, a large fraction of the gravitational energy can be stored as rotational energy, which also decreases ξ. Therefore, in practice we opt for ξ=0.5 as a realistic upper limit for what we call “hot accretion”. We label simulations with ξ=0 as “cold accretion”, and all cases with 0<ξ<0.5 as “warm accretion”. Third, we use two simple models to parametrize how the energy enters the stellar interior. The first is the uniform model, which is adopted by <cit.>, and in which Ladd is distributed uniformly and instantaneously within the entire star. The energy deposited per unit mass is thus εadd^(uniform) = Ladd/M_⋆. The second is the linear model, in which we consider that the accretion energy is deposited only in an outer layer of relative mass mke, such that the accretion energy is zero in the layer from the stellar center to relative mass (1-mke) and that it increases linearly with mass up to the photosphere.
With this assumption, the energy deposited per unit mass is

εadd^(linear) = (Ladd/M_⋆) Max[ 0, (2/mke^2)( M_r/M_⋆ - 1 + mke) ],

where M_r is the mass coordinate and mke is expressed as a fraction of the total mass of the star at time t, so that 0 < mke ≤ 1. In both cases, εadd satisfies the relation ∫ εadd dM = Ladd. The energy conservation equation thus is written

∂ L/∂ M_r = εnuc - ( T ∂ S/∂ t)_M_r + εadd,

where L is the luminosity, T the temperature, S the specific entropy, t the time, and εnuc is the energy production rate from thermonuclear reactions in the shell. The uniform model thus corresponds to the case studied by BC10, in which Ladd is distributed uniformly and instantaneously within the entire star. In the linear model, we consider that the accretion energy is deposited preferentially in the outer layers of the star and that this energy deposition is linear in mass over a shell of mass mke (with mass expressed as a fraction of the total mass of the star at time t). BC10 justify the uniform model by the fact that low-mass (≲ 2 M_⊙) stars are fully convective during their PMS stage and that convection may rapidly deliver the accretion energy into the deep interior. We believe that such an efficient transport of energy (or equivalently entropy) is probably unrealistic owing to the difficulty in transporting energy into a star. Furthermore, the generation of heat at the surface of accretion flows can potentially hinder convection, thereby preventing any uniform entropy mixing, as recently found by two-dimensional hydrodynamic simulations with high accretion rates <cit.>. The linear model is hence a highly simplified but useful parametrization that allows us to explore the consequences of a penetration of the accretion energy only to a fraction of the radius of a star. In the case of hot accretion, other authors adopt a different approach; they do not consider that accretion energy can be transported inward but instead adopt an outer boundary condition to account for the heat generated by accretion <cit.>.
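The normalization ∫ εadd dM = Ladd can be verified directly for both deposition profiles; a minimal sketch in fractional mass coordinates (M_⋆ = 1; the grid size and the value mke = 0.1 are illustrative choices):

```python
def eps_add_uniform(m_frac, L_add):
    """Uniform model: eps_add = L_add / M_star, independent of depth
    (written per unit fractional mass, i.e. M_star = 1)."""
    return L_add

def eps_add_linear(m_frac, L_add, m_ke):
    """Linear model: zero below relative mass 1 - m_ke, then growing
    linearly in mass coordinate up to the photosphere (eq. above)."""
    return L_add * max(0.0, (2.0 / m_ke**2) * (m_frac - 1.0 + m_ke))

def integrate(profile, n=200000):
    """Trapezoidal integral of profile(m) over m in [0, 1]."""
    h = 1.0 / n
    vals = [profile(i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

L_add, m_ke = 1.0, 0.1
assert abs(integrate(lambda m: eps_add_uniform(m, L_add)) - L_add) < 1e-6
assert abs(integrate(lambda m: eps_add_linear(m, L_add, m_ke)) - L_add) < 1e-6
```

The 2/mke^2 prefactor is exactly what makes the triangular deposition profile integrate to Ladd regardless of the chosen shell mass.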
In principle, this should be a better approach. However, the approach remains one-dimensional and is not parametrized. It does not include the possibility that part of the energy may be transported to deeper levels in a non-radiative way. We therefore believe that the different approaches are complementary. Numerically, evaluating the thermodynamic state of the accreting material requires careful consideration. The mass increase is performed within MESA using an “Eulerian” scheme <cit.>. MESA originally treated the thermodynamic state of the accreting material with compressional heating <cit.>. However, this is not suitable for rapid accretion, whose timescale is shorter than the thermal relaxation timescale. Thus, a new scheme was implemented in version 5527 and is used in this study. The detailed treatment is described in Sect. 7 of <cit.>. §.§ Outer boundary condition The pressure Ps and temperature Ts at the outer boundary are specified by interpolation of a model atmosphere, which is calculated under the assumption that material accretes onto a small fraction of the stellar surface from a thin disk and does not affect the properties of the photosphere. We found that the choice of the model atmosphere table directly affects the convergence of the calculations. We hence selected the following two tables depending on stellar mass and radius: For stars such that M_⋆≥ 1 and R_⋆ > 0.7, we adopted a “photospheric” table for an optical depth τs=2/3 and Ts=Teff and a surface pressure Ps set by the model atmospheres PHOENIX <cit.> and ATLAS9 <cit.>. For less massive or smaller stars, we used the “τs=100” model, which specifies Ps and Ts at τs=100 from the ATLAS9 and COND <cit.> model atmospheres. The impact of the switch of the boundary condition on the stellar structure and evolution is negligible. §.§ Adopted parameters and comparison with previous studies Table <ref> summarizes the values of our main parameters. Our fiducial values are shown in boldface.
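As a concrete illustration of the heat-injection parametrization above, the rate Ladd=ξ GM_⋆Ṁ/R_⋆ and the two deposition profiles can be sketched in a few lines. This is only a sketch, not the MESA implementation; the numerical values (a 1 solar-mass, 1 solar-radius star accreting at 10^-5 solar masses per year) are illustrative choices.

```python
import numpy as np

# Sketch (not the MESA implementation) of the accretion-heating
# parametrization: L_add = xi * G * M * Mdot / R, deposited either
# uniformly over the star or linearly in an outer shell of relative
# mass m_ke. All quantities in cgs units.
G, MSUN, RSUN, YR = 6.674e-8, 1.989e33, 6.957e10, 3.156e7

def l_add(xi, mstar, mdot, rstar):
    """Heat injected per unit time (erg/s); 0 <= xi <= 1."""
    assert 0.0 <= xi <= 1.0
    return xi * G * mstar * mdot / rstar

def eps_uniform(m, l_tot, mstar):
    """Uniform model: l_tot spread over the whole star (erg/s/g)."""
    return l_tot / mstar * np.ones_like(m)

def eps_linear(m, l_tot, mstar, mke):
    """Linear model: zero below relative mass 1-mke, linear above."""
    return l_tot / mstar * np.maximum(0.0, 2.0 / mke**2 * (m - 1.0 + mke))

# Illustrative numbers: 1 Msun, 1 Rsun, Mdot = 1e-5 Msun/yr, xi = 0.5
ltot = l_add(0.5, MSUN, 1e-5 * MSUN / YR, RSUN)

# Trapezoidal check of the normalization for the linear profile:
m = np.linspace(0.0, 1.0, 200001)      # fractional mass coordinate
e = eps_linear(m, ltot, MSUN, mke=0.1)
integral = np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(m)) * MSUN
print(integral / ltot)                 # ~1.0
```

The check at the end verifies the normalization ∫εadd dM=Ladd stated in the text: the prefactor 2/mke^2 is exactly what makes the linear ramp integrate to Ladd over the heated shell.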
We also provide the range of values that we consider in Sects. <ref>–<ref> when we seek to explore the consequences of deviations of some parameters from the fiducial values. In particular, we explore the dependence on the deuterium mass mixing ratio, something which had not been considered so far. We compare our results with two particular evolutionary models from previous studies ( and ) in Sect. <ref>. § EVOLUTION IN THE CONTEXT OF A “COLD” ACCRETION In this section we describe the resulting PMS evolution in the context of a “cold” accretion of material, i.e., in the extreme case when all of the accretion energy is radiated away (ξ=0 in Eq. <ref>). Figure <ref> shows how the radius changes with time in the cold accretion case with our fiducial settings (see Table <ref>). As described in more detail in Appendix <ref>, the evolution can be split into five phases: (I) the contraction phase, (II) the deuterium-burning phase, (III) the second contraction phase, (IV) the swelling phase, and (V) the main sequence. The contraction seen in phase (I) is induced by mass accretion with the same entropy as the star. Phase (II) begins when the internal temperature becomes high enough to start burning the pre-existing deuterium, resulting in an expansion. With the decrease of the deuterium content, we enter phase (III), in which mass accretion again dominates over deuterium burning and the star contracts. After mass accretion is completed, the star expands owing to a change of its internal entropy profile. Eventually, after ∼ 2 × 10^7 yr, it enters the main sequence. Starting from a different initial condition (e.g., a smaller initial radius) does not change this sequence, but it alters the duration and characteristics of the phases before the star reaches the main sequence. After presenting the main differences between classical evolution models and those in the context of cold accretion (Sect.
<ref>), we examine the influence of mass accretion rate, deuterium content, final stellar mass, and initial conditions in Sects. <ref>–<ref>. We compare our results against previous studies in Sect. <ref>. §.§ Cold versus classical models We now compare the radius evolution in the case of cold steady accretion with that in the classical model. Figure <ref>a shows that the stellar radius in the case of low-entropy accretion is one order of magnitude smaller than in the classical evolution case. Classically, a spherically symmetrical accretion prevents radiative losses, implying a large stellar entropy and therefore a large radius. Following <cit.>, we adopt for the classical evolution case an initial radius of 4.92 at 10^5 yr. When radiative cooling of the accreting material is allowed, the resulting specific entropy is much smaller, which yields a smaller protostar. Moreover, in the case of low-entropy accretion, the star reaches the MS at about 20 million years, earlier than in the classical evolution by a factor of about two. This difference comes from the smaller thermal timescale in the stellar interior because of the smaller radius in the case of low-entropy accretion. The pre-main-sequence evolutionary tracks for a 1 M_⊙ star in the classical, non-accreting case and in the cold accretion scenario are shown in Fig. <ref>b. The tracks are drastically different. In the classical case, the star starts with a high entropy, then evolves along the almost vertical Hayashi track until it reaches the horizontal Henyey track in the Hertzsprung-Russell (H-R) diagram. In the cold accretion scenario, the initial evolution in the H-R diagram is nearly horizontal because the accretion does not inject energy and both the entropy and intrinsic luminosity remain small. The implications of these results, including the dependence on ξ, are discussed in Sect. <ref>. We point out that the evolutionary track in the cold accretion case in Fig.
<ref>b does not include the accretion luminosity. Once the star moves out of the class I phase and becomes a visible T-Tauri star, its accretion luminosity is easily distinguished from its intrinsic luminosity in the spectral energy distribution because it is emitted in the UV or X-rays rather than in the visible. However, in the embedded phase, the accretion luminosity is reabsorbed by the surrounding molecular cloud and re-emitted at longer wavelengths, making the distinction with the intrinsic luminosity impossible. In that case, its location in the H-R diagram should be shifted upward. In the following three sections, we explore the consequences of changing the accretion history, deuterium content, initial entropy, and final mass. The other parameters are set to the fiducial values listed in Table <ref>. §.§ Dependence on mass accretion rate As described in Sect. <ref>, the accretion rate onto the star can be highly time dependent and variable from one star to the next. We calculate the evolution for mass accretion rates ranging from 10^-4 to 10^-6 /yr and for time-variable (episodic) accretion, following the pioneering works of , and . For episodic accretion, we adopt the parameters of , on the basis of hydrodynamic simulations by <cit.>. We confirm the findings of and that the variation of the accretion rate only has a moderate impact on the evolution. The difference in radii for the same mass is at most 0.2 even for accretion rates that differ by two orders of magnitude. As described below, this is much smaller than the difference from other effects (the deuterium content XD and the heat injection efficiency ξ). The evolutionary tracks are also hardly affected by the variation of the accretion history. We stress that rather than accretion history, it is the entropy of the accreted material that matters.
We therefore choose to fix the mass accretion rate to 10^-5 /yr from now on. §.§ Dependence on deuterium content The PMS evolution is controlled by deuterium burning when it occurs vigorously (i.e., during phase II in Sect. <ref>). The deuterium content can largely differ from star to star because of galactic evolution or the local environment. We now explore how PMS evolution is affected by deuterium content. This was already investigated by <cit.>, who concluded that the radius is only moderately affected by deuterium content. For example, when stars become 1 the radii are only increased by a factor of ≃1.6 from a deuterium-free star to the case with XD=35 ppm. However, since they assumed spherical accretion, this study is to be re-examined in the framework of low-entropy accretion. First we summarize the currently available values of the deuterium content[These values are estimated using the number ratio of deuterium to hydrogen, (D/H). Since the hydrogen mass fraction X≃ 0.7 and the mass ratio of deuterium to hydrogen is about two, XD≃1.4 (D/H).]. The primordial value at the beginning of the universe is XD,prim=35.8±2.5 ppm <cit.>, the indirectly[ Two methods may be used to constrain the protosolar deuterium abundance: using either the deuterium content in the Jovian atmosphere <cit.> or the enhanced ^3He measured in the solar wind and compared to the Jovian atmosphere <cit.>.] estimated value of the protosolar nebula is XD,PSN=28.0±2.8 ppm <cit.>, and the present-day value of the local interstellar medium varies widely from XD,ISM=13.7±5.3 ppm <cit.> to at least 32.3 ± 3.4 ppm <cit.>. Classical studies, such as <cit.>, used a higher value (XD=35 ppm) based on classical observations of the local interstellar medium <cit.>. Although these values still remain uncertain to some extent and, in particular, XD,ISM is still under debate <cit.>, they suggest that the deuterium content evolves with time.
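The footnote's conversion between the D/H number ratio and the deuterium mass fraction can be written out explicitly. The D/H value used below is an assumption for illustration, chosen to match the primordial abundance quoted above:

```python
# Sketch of the footnote's conversion: with hydrogen mass fraction
# X ~ 0.7 and a deuteron-to-proton mass ratio of ~2, the deuterium
# mass fraction is X_D ~ 2 * (D/H) * X ~ 1.4 * (D/H).
def deuterium_mass_fraction(d_over_h, x_hydrogen=0.7):
    return 2.0 * d_over_h * x_hydrogen

# An assumed primordial D/H of ~2.55e-5 gives X_D ~ 36 ppm,
# consistent with the X_D,prim = 35.8 +- 2.5 ppm quoted in the text.
print(deuterium_mass_fraction(2.55e-5) * 1e6)  # in ppm
```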
Considering that XD,PSN=28 ppm at 4.57 Gyr ago, we assume that the deuterium content of present-day star-forming regions may be as low as 20 ppm and use that as our fiducial value. In addition to the time evolution, the deuterium content may show a wide range of values depending on the environment. For example, for stars formed from a cloud strongly affected by winds from nearby evolved stars, we could expect a smaller deuterium content than for other stars. It is therefore important to examine how variations in the deuterium fraction affect the PMS evolution of stars. In order to do so, we use the fiducial settings but vary the D/H ratio. Figure <ref> shows the radius evolution and evolutionary tracks with different deuterium mass fractions from zero to 40 ppm, where the upper limit is set by the cosmic primordial value. Whereas the evolution before deuterium burning is totally independent of the deuterium content, we see that the tracks deviate after deuterium fusion sets in. For example, in the case XD=40 ppm, the radius and luminosity are up to ∼ 4 and ∼ 30 times larger, respectively, than for the deuterium-free case. With the largest deuterium content, Lnuc exceeds 1/7 Lacc for a longer period of time and then the star becomes larger (see Eq. <ref>), where Lacc≡ GM_⋆Ṁ/R_⋆ is the rate of gravitational energy release of the accreting material and Lnuc is the energy production rate by thermonuclear reactions. Evolutionary tracks are also largely different depending on XD in the region where log Teff≳ 3.48, i.e., Teff≳ 3000 K. These large differences illustrate the importance of the deuterium content for the PMS evolution. In the extreme case XD=0, the star keeps shrinking even after ∼ 10^4 years. Just before the accretion is completed at 10^5 years, the radius becomes ∼ 0.3 and eventually the star slightly expands. This expansion results from hydrogen burning due to a sufficiently high central temperature even before the accretion is completed.
We thus find that the deuterium content affects low-entropy accretion evolution more strongly than found by <cit.> with spherical accretion. This is because the total energy injected by accretion, Eacc=∫ Ladd dt, exceeds the total energy generated by deuterium burning, ED=qD XD Mfin, where qD=2.63× 10^18 erg g^-1 is the energy released by deuterium fusion per gram of deuterium, when ξ≳ 0.05 (XD/20 ppm) R̅ Mfin^-1 (with R̅ and Mfin in solar units), where R̅ is the time-averaged stellar radius. In spherical accretion, Eacc always dominates ED and the difference in deuterium content is less pronounced. The reverse is true in the case of cold accretion, implying changes in the radii by up to a factor 3 and in luminosity by up to 2 orders of magnitude between the extreme XD values. §.§ Dependence on initial conditions So far we have assumed that our initial stellar seed is characterized by 1.5 and 0.01. As pointed out by , since the entropy of the accreting material is related to the stellar entropy in our prescription (see Sect. <ref>), the initial conditions affect the evolution <cit.>. This is also pointed out by , who stress the importance of the initial seed mass in the subsequent PMS evolution. Here, for computational reasons, we choose to vary the initial entropy (i.e., radius) instead of the initial mass. As discussed in Appendix <ref>, the two approaches are equivalent, with the current uncertainties in stellar seed masses leading to values of the radius at 0.01 between 0.25 and 1.5. In Fig. <ref>, we show the evolutionary tracks of accreting protostars with brown-dwarf masses obtained with Rini from 0.25 to 3 and XD=20 ppm and 35 ppm. We do not show the results for Rini≳3 since these are similar to the Rini=3 case because of their short K-H timescale. Like (see their Figs. 5–7), we confirm that different Rini values can produce a scatter in the low-temperature region (by up to a 1.5 dex difference in luminosity), which cannot be produced by a varying deuterium abundance.
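The ξ≳0.05 threshold quoted earlier in this section can be recovered with a rough energy budget. Below we assume the simple estimate Eacc≈ξ GMfin^2/(2R̅) (a constant accretion rate and a time-averaged radius R̅), which is an order-of-magnitude stand-in for the time integral ∫Ladd dt, not the paper's actual calculation:

```python
# Rough check of the xi at which the injected accretion energy
#   E_acc ~ xi * G * Mfin^2 / (2 * Rbar)   (assumed form, constant Mdot)
# equals the total deuterium energy reservoir
#   E_D = q_D * X_D * Mfin.
G, MSUN, RSUN = 6.674e-8, 1.989e33, 6.957e10   # cgs
Q_D = 2.63e18                                  # erg per gram of deuterium

def xi_critical(x_d, mfin, rbar):
    e_d = Q_D * x_d * mfin
    return e_d / (G * mfin**2 / (2.0 * rbar))

# Fiducial case: X_D = 20 ppm, Mfin = 1 Msun, Rbar = 1 Rsun
print(xi_critical(20e-6, MSUN, RSUN))  # ~0.055, consistent with the text
```

Note that xi_critical scales linearly with XD and R̅ and inversely with Mfin, reproducing the scaling ξ≳0.05 (XD/20 ppm) R̅ Mfin^-1 in solar units.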
Separately, we note that changing the initial conditions results in deuterium fusion setting in at a different stellar mass (0.08 and 0.03 for Rini=1.5 and 0.25, respectively; see Fig. <ref>). We see that the evolutionary track with Rini=0.25 and XD=20 ppm is characterized by a very low luminosity. The main reason for the small luminosity is the small entropy of both the initial seed and the accreting material. In the cold accretion cases, the entropy of the accreting material is assumed to be the same as the stellar entropy (see Sect. <ref>). Moreover, the effect of deuterium burning has less impact in this case. The star expands at Teff≃2500–2900 K because of deuterium burning and its luminosity increases. However, the duration of the expansion phase is shorter than in the other cases because of the large gravitational energy of the accreting material, Lacc[The luminosity in the present paper is always the stellar intrinsic luminosity and does not include the accretion luminosity emitted from the shock surface.]. As described in Appendix <ref>, stars expand when LD>1/7 Lacc (Eq. <ref>). Hence the larger Lacc makes the expansion phase (phase II in Sect. <ref>) shorter, and the stellar radius and luminosity remain small (see also ). §.§ Evolution tracks for different final masses In this section we focus on the evolution of stars with varying masses. Apart from the final masses, the settings are the same as the fiducial values listed in Table <ref>. The cold accretion evolution of stars with masses of 0.05, 0.1, 0.3, 1, and 1.5 is shown in Fig. <ref>. Low-mass stars contract for a long time because of their long K-H timescale. For example, stars with 0.1 and 0.3 enter their MS at 10^9 and 3 × 10^8 years, respectively. During the first million years of evolution, Fig. <ref> shows that the size of stars undergoing cold accretion is not a monotonic function of mass.
For example, a 0.1 star expands just after accretion ceases due to deuterium burning and becomes larger than the other stars considered here for some time. On the other hand, a 0.05 star contracts monotonically, and even faster after ≃ 1–2 million years, when all deuterium has been consumed. Higher mass stars (1.0, 1.5) expand just after accretion has stopped because of hydrogen burning. §.§ Comparison with previous studies Here we compare our results with the studies of and . We use two particular examples: the long-dashed line in Fig. 2 of (episodic and cold accretion) and the case of “mC5-C” in (steady and cold accretion). The other settings are summarized in Table <ref>. Quantitatively, their results differ: obtain radii that are often about two times larger than those of . For example, in , the radius when M_⋆ = 0.9 is ∼ 1.3 (see their Fig. 2), while for the same mass it is ∼ 0.6 in . An important difference between these calculations is that the assumed deuterium contents differ by a factor 1.75. As shown in the previous section, this difference has a large impact on the evolution (see Fig. <ref>). Figure <ref> compares the radius evolution in and to our calculations with the same hypotheses. Given the fact that different stellar models are used (including different values of αMLT – see Table <ref>), there is excellent agreement between our calculations and those of and . The difference between the former and the latter is indeed caused by the different assumed deuterium abundances, XD=20 ppm and 35 ppm, respectively. The figure also compares two calculations with episodic and steady accretion for XD=20 ppm. The difference between these two calculations is much smaller than that between calculations with different deuterium abundances: the evolution of the PMS star is principally governed by the mean accretion rate and by the deuterium abundance of the accreted material, and much less by the nature (steady or episodic) of the accretion.
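The role of the factor 1.75 between the two assumed deuterium abundances can be made explicit: it carries over directly to the total deuterium energy reservoir ED=qD XD Mfin defined earlier. The sketch below uses illustrative cgs values with Mfin=1 solar mass assumed:

```python
# Deuterium energy reservoir E_D = q_D * X_D * Mfin for the two
# deuterium abundances assumed by the compared studies (20 vs 35 ppm).
MSUN = 1.989e33          # g
Q_D = 2.63e18            # erg per gram of deuterium

def e_deuterium(x_d, mfin=MSUN):
    return Q_D * x_d * mfin

ratio = e_deuterium(35e-6) / e_deuterium(20e-6)
print(ratio)   # 1.75: the abundance ratio carries over to the reservoir
```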
§ HOT AND WARM ACCRETION In this section we explore how heat injection by accretion affects stellar evolution. First, we assume that accretion energy is redistributed uniformly in the star and examine variations with the accretion efficiency parameter ξ defined in Eq. (<ref>). Then, we examine, with the parameter mke (see Eq. <ref>), how assumptions concerning where the accretion energy is released affect the PMS evolution tracks. In this section the parameters other than ξ and mke are set to their default values (see Table <ref>). §.§ Early evolution with a varying ξ Figure <ref> shows the evolution tracks obtained with the heat injection parameter in the range ξ=0–0.5. When comparing simulations with ξ=0.5 and ξ=0, the radii differ by 1 order of magnitude and the luminosities differ by 2 orders of magnitude. These differences in evolutionary tracks are larger and concern a wider range of effective temperatures than those due to a different deuterium content. Thus, the most important parameter controlling the PMS evolution is the amount of heat injected by accretion. The evolutionary track of accreting stars corresponds to the “birthline” proposed by <cit.>. As pointed out, Fig. <ref>b shows that the birthlines strongly depend on the heat injection ξ, implying that the concept of a definitive birthline may be elusive. The radius when the accretion is completed (i.e., at 10^5 yr) in the case of ξ=0.5 is almost the same as obtained by <cit.> for spherical accretion. Figure <ref>a shows that before deuterium burning (at 10^4 to 10^5 yr), the radius initially increases for ξ=0.5, decreases slightly for ξ=0.1, and shows a pronounced decrease for ξ≲ 0.05. Again, this behavior can be understood using the energy equation as in Appendix <ref>. The situation corresponds to a case when nuclear reactions and radiative cooling may be neglected (i.e., Lnuc, L_⋆≪ Lacc).
The energy equation may thus be shown to yield Ṙ/R_⋆ = ( 2-(1-ξ)/C) Ṁ/M_⋆, where C is a constant depending on the specific heat ratio and the polytropic index (see Appendix <ref>). Therefore, the radius should be constant if ξ = 1-2C. In the case of a fully convective star with monoatomic gas (C=3/7), the critical ξ for a constant radius is 1/7. In a second phase, for ξ≲ 0.1, deuterium burning sets in and leads to a limited increase of the stellar radius. However, this does not occur in the case of ξ=0.5. First, in this case, deuterium fusion starts only at ∼ 1.2×10^5 years, i.e., after the accretion is completed, because of the low central temperature during the accreting phase due to the large radius. Moreover, since Lnuc does not exceed L_⋆, deuterium burning only delays the contraction from radiative cooling, and the star does not expand (see Appendix <ref> and Eq. <ref>). §.§ Importance of the location of heat injection So far, we assumed a uniform distribution of injected heat as in (see Eq. <ref>). However, this assumption may not be valid, especially in stars with a large radiative core. Moreover, as described in Sect. <ref>, recent radiation-hydrodynamic simulations by <cit.> showed that the uniform heat redistribution may not be accurate. We now examine the case in which the accretion heat is injected only in surface regions (see Eq. <ref>). Figure <ref>b shows that the evolution of an accreting solar-mass star strongly depends on the assumed location of the heat injection region. In the case of ξ=0.05 (blue lines), the luminosity with mke=0.1 is up to about one order of magnitude larger than in the case of the uniform distribution and is almost the same as in the case with ξ=0.3 and a uniform distribution. This dependence mainly comes from the assumption that the entropy of the accreting material is the same as that of the stellar surface (see Sect. <ref>). If mke is small and ξ>0, the accretion heat is injected only in the outer envelope.
This causes εadd to be large at these locations, which, according to Eq. (<ref>), leads to an increase of the stellar luminosity and hence of the specific entropy in this region. The surface entropy then becomes larger than in the case of a large mke, or equivalently, of a uniform injection of accretion heat. The assumption that any added mass has the same entropy as the stellar photosphere effectively leads to accreting material with a higher entropy and therefore to a larger radius. The assumptions of a uniform or linear injection of accretion energy and of continuity of entropy between the photosphere and accreted material are questionable. However, we can see that the uniform model with low ξ effectively represents one extreme of low-entropy evolution. For high ξ values (ξ>0.3), we expect the models to evolve very rapidly initially, losing memory of the initial conditions and resembling standard evolutionary tracks. In that sense, although radiation-hydrodynamic simulations would be desirable, the uniform model defined by Eq. (<ref>) is a useful simplification to represent possible evolutionary tracks. § IMPLICATION FOR THE EVOLUTIONARY TRACKS IN THE H-R DIAGRAM In this section we compare our results with observations in the H-R diagram. We now turn to isochrones, obtained by combining the evolution calculations for the various final masses shown in Sect. <ref>.
We give special attention to whether these new evolutionary tracks can explain the luminosity spread problem for young stellar objects (YSOs), i.e., the fact that for a given cluster, stars are spread over a relatively wide range of luminosities instead of forming the well-defined luminosity-effective temperature relation that would be expected for stars of similar ages and compositions. Although the consequences of accretion in the H-R diagram have been investigated by previous works (, and ), we choose here to attempt to constrain the values of ξ that are in agreement with the observations of young clusters. §.§ The PMS luminosity spread problem §.§.§ Observational constraints The luminosity spread of PMS stars has been a matter of debate for decades <cit.>. This spread is seen almost ubiquitously in star-forming regions and young clusters. Three types of explanations have been proposed: (i) observational or astrophysical uncertainties <cit.>, (ii) an intrinsic age spread <cit.>, and (iii) physical processes <cit.>. Determining the reason for this spread is important for our understanding of star formation. It is difficult to determine the luminosity of young PMS stars because it is subject to observational (e.g., differential extinction, reddening, distance, and cluster membership) and astrophysical (e.g., circumstellar material and its accretion, unresolved binarity, and variability) uncertainties <cit.>. However, the contribution to the luminosity spread of each uncertainty has been quantitatively estimated <cit.>, and various authors claimed that the sum of their contributions is smaller than the observed luminosity spread. Moreover, <cit.> found that the projected radii, which are less affected by observational uncertainties than luminosities, also show a wide spread. These results suggest that the luminosity spread is genuine. A luminosity spread of 0.2–0.3 dex would correspond to a ∼ 0.4 dex spread in the ages of PMS stars.
This could be explained theoretically; for example, <cit.> propose that stars are formed by recurrent compressions by expanding bubbles and that, consequently, the members of a cluster are not necessarily formed in a short period of time. Instead, cold accretion leads to much smaller stellar radii and luminosities than the classical, non-accreting models, and this can explain the luminosity spread for stars with Teff≳ 3500 K (see ; see also Sect. <ref>). For stars of lower temperatures, the results depend on the size of the assumed seed radius and differ between the different studies (see Sect. <ref>). After showing how the PMS isochrones depend on ξ, XD, and Rini, we try to estimate possible values of ξ that are compatible with the observations. In this section, we assume a uniform injection of accretion heat (see Sect. <ref>). §.§.§ Isochrones as a function of assumed ξ In Fig. <ref> we compare the PMS stars observed in several young clusters to our theoretical isochrones for the cases ξ=0, 0.05, 0.1, and 0.5. The remaining parameters are our fiducial values (XD=20 ppm, Rini=1.5, and Mini=0.01). We use the following observational data of young stars: ρ Ophiuchus <cit.>, σ Orionis <cit.>, Taurus and Chamaeleon I <cit.>, Taurus-Auriga <cit.>, and the Orion nebula cluster <cit.>. The first panel shows that classical evolutionary tracks with ξ=0.5 would require a large spread of ages to explain all clusters. The ages of stars would need to range between 0.3 and 10 Myr in ρ Oph, ONC, and Tau-Aur and between 1 and 10 Myr in the other clusters to reproduce the observed luminosity spread. Even in that case, a few stars are underluminous and can be explained only by invoking that their luminosity has been underestimated. <cit.> estimated that the errors in the luminosities of these stars are ∼0.5 dex, which indeed means that this is a possibility. Other fixed values of ξ do not allow us to find a better solution. When ξ=0.1, the most luminous stars cannot be explained anymore.
When ξ≤ 0.05, the slope of the isochrones becomes inconsistent with the ensemble of observational data points. At the same time, the ξ=0 isochrones are characterized by very low luminosities and can explain the stars with the lowest luminosities observed in ρ Oph. Conversely, if one assumes that stars within a cluster are coeval, a distribution of the values of ξ within a cluster can be invoked to explain the luminosity spread, as proposed by , , and . However, this approach fails to reproduce the luminosity spread of very-low-mass stars in Tau and Cha I. We revisit this in Sect. <ref>. §.§.§ Dependence on deuterium content As described in Sect. <ref>, even the deuterium fraction of the present-day local ISM is still under debate. We now investigate how isochrones depend on deuterium content. Figure <ref> shows 1 Myr isochrones obtained with different values of ξ and with either XD=20 ppm or 35 ppm. The isochrones obtained with ξ=0.5 are almost independent of XD. However, isochrones obtained for ξ=0 differ very significantly when XD changes. As described in Sect. <ref>, this is because in the case of cold accretion, deuterium burning regulates the PMS evolution. But Fig. <ref> also shows that the isochrones remain almost parallel. For example, the isochrone with (ξ, XD)=(0.1, 20 ppm) is very similar to that with (0, 35 ppm). This is because both ξ and XD control the specific entropy of the accreted material. In the context of a variable ξ value, a low abundance of deuterium yields a more important spread of solutions than a high abundance. We do not expect XD to vary within a cloud because of turbulent mixing. However, if XD varies from one cluster to the next, this would yield a more important luminosity spread for clusters with low XD values. §.§.§ Dependence on initial conditions As described in Sect.
<ref> in the context of cold accretion, the PMS evolutionary tracks depend on the initial conditions, and in particular on the physical characteristics of the initial stellar seed. In Fig. <ref>, we show the isochrones obtained with Rini of 0.25 and 3, for different values of ξ. The comparison of the results with the same ξ and different Rini shows that the dependence on Rini is significant across the entire effective temperature range, except for hot accretion (ξ∼ 0.5). The dependence is most important for cold accretion (see dotted lines in Fig. <ref>) and corresponds to the situation described in Sect. <ref>. For intermediate values of ξ∼ 0.1 (dotted lines), the dependence on Rini is smaller but still significant. It is most pronounced at small ages (≲1 Myr, which corresponds to the K-H timescale) and for low-mass stars; since the total energy of high-mass stars is dominated by the injected energy, Eadd, the initial energy does not matter. A high value of Rini>1.5 is required to explain the most luminous stars, particularly the low-Teff (<3000 K) stars. However, invoking a variable Rini including low values Rini∼ 0.25 is required to explain the observed luminosity spread in the Tau and Cha I clusters by assuming coeval stars. §.§.§ Resulting constraints The observed luminosities and effective temperatures of young clusters can hence be explained in the context of coeval stars by assuming that ξ and Rini can vary within a given cluster. This interpretation suggests that ρ Oph, ONC, and Tau-Aur are extremely young (∼0.3 Myr); σ Ori is slightly older at ∼1 Myr[ We do not derive the age of Tau and Cha I because the data in <cit.> are an assembly of young stars in two star-forming regions.]. However, inaccurate estimations of stellar luminosities, membership issues, and other problems can affect these age estimates. We can see in Sect. <ref> and Fig.
<ref> that for a majority of stars ξ>0.1 for XD=20 ppm and Rini=1.5, considering the following two facts: the slopes of the isochrones are not compatible with the observations for ξ≤ 0.05, and the number of underluminous stars in Teff=3500–4500 K is small. For higher values of XD, this constraint on ξ should be relaxed somewhat. For lower values of Rini, we would have to impose ξ to be even closer to 0.5. Even with higher values of Rini, ξ<0.1 would still be rare because the slope of the isochrone does not match the observations. It is thus premature at this point, without a proper modeling of where the accretion heat is deposited, to attempt to constrain ξ more precisely. However, one important conclusion that we can draw is that the evolution tracks cannot deviate too significantly from the standard model: stars with entropies that are too low are ruled out by the observation of young clusters. In that sense, the limit set by the model characterized by a uniform accretion heat redistribution, ξ=0.1, XD=20 ppm, and Rini=1.5 is useful when considering the range of possibilities in agreement with observational data. Finally, if indeed star growth is characterized by an inefficient burial of accretion heat (ξ < 0.5), then we should expect clusters characterized by a lower deuterium mixing ratio XD to have a larger luminosity spread than clusters with a higher XD. This may thus become testable, depending on the possibility to reliably determine D/H ratios in clusters (see Sect. <ref>). §.§ The case of CoRoT 223992193 We now consider the case of a specific system, the eclipsing binary CoRoT 223992193 <cit.>. We expect both stars to have the same age. Therefore, the system provides important constraints to test our evolution models and retrieve ξ independently of the cluster results discussed previously.
Figure <ref> shows the constraints on luminosity and effective temperature for both components of the system and compares them to evolutionary tracks for classical (non-accreting) models, and for our models with values of ξ between 0 and 0.1, XD of 20 and 35 ppm, and Rini=1.5.The two stars have masses between 0.6 and 0.8 and an age less than 5 Myr. Although the evolutionary tracks are strongly sensitive to the assumed values of ξ,the isochrones remain mostly parallel to each other, except for extremely low values of ξ with small XD. This implies that varying ξ essentially results in a shift in age, where stars with a given effective temperature and luminosity are younger for lower values of ξ. Quantitatively, the constraints that we can derive on ξ are very similar to those obtained for young clusters (and with the same caveats). With the assumptions of Rini and the uniform distribution, for XD=20 ppm, we obtain that ξ≥ 0.05, with the limiting case corresponding to both stars being close to their birthline. For XD=35 ppm, the initial entropy is larger so that no constraint on ξ can be obtained. § CONCLUSIONSThe PMS evolution of stars has long been considered a relatively simple theoretical problem governed by the quasi-static contraction of an almost isentropic star. We have seen that this evolution phase may in fact be strongly altered by the fact that the stellar envelope must be accreted onto a protostellar seed and that a significant fraction of the energy may be lost in the accretion shock connected to this process. The goal of this paper was to understand what controls the PMS and to derive constraints from the observations. In order to do so, we used simulations with the stellar evolution code MESA that account for a progressive accretion of material onto a forming star. 
We first showed that, beyond classical parameters such as mass and metallicity, the evolution on the PMS is controlled essentially by three parameters: ξ, the efficiency at which the gravitational energy of the accreted material is transformed into internal energy of the star; XD, the mass mixing ratio of deuterium; and the entropy of the initial stellar seed (or equivalently, in the present work, its radius Rini for a mass of 0.01 M_⊙). The parameter ξ=0.5 corresponds to the classical models and leads to the formation of stars with a large radius and entropy and an evolution that is essentially independent of the two other parameters, XD and Rini. Progressively lower values of ξ yield stars with (much) smaller radii and a richer ensemble of possibilities in terms of their evolution. In particular, because the entropy of the accreted material is then effectively smaller, the abundance of deuterium in that material becomes very important in deciding whether stars will expand significantly (high XD values) or will retain a small size all the way to the main sequence (for XD<30 ppm). We showed that the differences between the results of <cit.> and <cit.> can be accurately reproduced and result from different choices of XD. We compared the evolutionary tracks to the observations of several young clusters. We confirmed that the spread in luminosities in each cluster could be explained without invoking an age spread, but by instead assuming that ξ can vary from one star to the next. A variation of Rini (i.e., stellar entropy at 0.01 M_⊙) is also needed to explain underluminous, cool stars in Tau and Cha I. However, the observations indicate that most stars cannot be too low in entropy when they form. This implies that, within the uniform accretion model, we can rule out as unlikely those scenarios with low ξ and XD values. Specifically, the model with XD=20 ppm, Rini=1.5, and ξ=0.1 sets a useful boundary: stars can have lower entropies only in relatively rare cases.
This means that for a majority of stars, stellar evolution cannot differ from the classical evolution tracks beyond the limit set by this limiting model. On the other hand, because of the multi-parameter dependence, we cannot derive an independent constraint on ξ. For example, a model with XD=35 ppm and ξ=0 is equivalent, in terms of entropy and luminosities in the H-R diagram, to the XD=20 ppm and ξ=0.1 model. We found these constraints to be compatible with the observational constraints from the PMS eclipsing binary CoRoT 223992193. One caveat is that if a significant number of stars in the clusters are affected by observational errors (e.g., in L_⋆ or membership), our constraints would change. Separately, we found that the possibility of reliably measuring deuterium abundances in clusters would allow testing for inefficient accretion (i.e., ξ<0.5). If accretion is indeed inefficient, we would expect the spread in luminosity to be larger in clusters with a lower deuterium-to-hydrogen ratio. Present observations indicate that σ Ori may have a smaller luminosity spread than other clusters. Although many other factors are to be considered, it is possible that this is due to a higher D/H ratio in that cluster. The PMS evolution of stars is not as simple as once thought and merits further investigation. On the observational side, further insight would be gained through the discovery of more very young eclipsing binaries, the determination of deuterium abundances in clusters, and further observations of young accreting stars. On the theoretical side, the main uncertainties in our calculations are due to extremely simplified outer boundary conditions and the ad hoc prescription used to relate the gravitational energy of the accreted material to the internal energy of the star.
Three-dimensional radiation hydrodynamic simulations of a collapsing molecular cloud core with sufficient resolution to resolve the central stellar seed are needed.A consequence of this more complex PMS evolution is that the stellar interior is not necessarily fully convective during most of this phase. This may have strong implications to understand the chemical composition of stars and connect measurements of stellar compositions to the formation of planets. We will investigate this issue in our next paper. We express our gratitude to S. Inutsuka, T. Hosokawa, P. Morel, T. Nakamoto, M. Kuzuhara, and M. Ikoma for fruitful discussions and comments. Bill Paxton and Dean Townsley kindly helped M.K. use the stellar-evolution code MESA. We appreciate the critical and constructive comments of the referees, which helped us to improve this paper. M.K. is supported by Grant-in-Aid for JSPS Fellows Grant Number 24·9296, MEXT of Japan (Grant: 23244027) and Foundation for Promotion of Astronomy. aa 86 natexlab#1#1[Allard et al.(2001)Allard, Hauschildt, Alexander, Tamanai, & Schweitzer]Allard+01 Allard, F., Hauschildt, P. H., Alexander, D. R., Tamanai, A., & Schweitzer, A. 2001, , 556, 357[Amelin et al.(2002)Amelin, Krot, Hutcheon, & Ulyanov]Amelin+02 Amelin, Y., Krot, A. N., Hutcheon, I. D., & Ulyanov, A. A. 2002, Science, 297, 1678[Angulo et al.(1999)Angulo, Arnould, Rayet, Descouvemont, Baye, Leclercq-Willain, Coc, Barhoumi, Aguer, Rolfs, Kunz, Hammer, Mayer, Paradellis, Kossionides, Chronidou, Spyrou, degl'Innocenti, Fiorentini, Ricci, Zavatarelli, Providencia, Wolters, Soares, Grama, Rahighi, Shotter, & Lamehi Rachti]Angulo+99 Angulo, C., Arnould, M., Rayet, M., et al. 1999, Nuclear Physics A, 656, 3[Asplund et al.(2009)Asplund, Grevesse, Sauval, & Scott]Asplund+09 Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, , 47, 481[Bahcall et al.(2005)Bahcall, Basu, Pinsonneault, & Serenelli]Bahcall+05 Bahcall, J. N., Basu, S., Pinsonneault, M., & Serenelli, A. M. 
2005, , 618, 1049[Baraffe & Chabrier(2010)]BC10 Baraffe, I. & Chabrier, G. 2010, , 521, A44[Baraffe et al.(1998)Baraffe, Chabrier, Allard, & Hauschildt]Baraffe+98 Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, , 337, 403[Baraffe et al.(2009)Baraffe, Chabrier, & Gallardo]BCG09 Baraffe, I., Chabrier, G., & Gallardo, J. 2009, , 702, L27[Baraffe et al.(2012)Baraffe, Vorobyov, & Chabrier]BVC12 Baraffe, I., Vorobyov, E., & Chabrier, G. 2012, , 756, 118[Basu & Antia(2004)]Basu+Antia04 Basu, S. & Antia, H. M. 2004, , 606, L85[Burningham et al.(2005)Burningham, Naylor, Littlefair, & Jeffries]Burningham+05 Burningham, B., Naylor, T., Littlefair, S. P., & Jeffries, R. D. 2005, , 363, 1389[Cassisi et al.(2007)Cassisi, Potekhin, Pietrinferni, Catelan, & Salaris]Cassisi+07 Cassisi, S., Potekhin, A. Y., Pietrinferni, A., Catelan, M., & Salaris, M. 2007, , 661, 1094[Castelli & Kurucz(2004)]Castelli+Kurucz04 Castelli, F. & Kurucz, R. L. 2004, ArXiv Astrophysics e-prints[Chabrier et al.(2007)Chabrier, Gallardo, & Baraffe]Chabrier+07 Chabrier, G., Gallardo, J., & Baraffe, I. 2007, , 472, L17[Chambers(2010)]Chambers10 Chambers, J. E. 2010, , 724, 92[Chandrasekhar(1967)]Chandrasekhar67 Chandrasekhar, S. 1967, An introduction to the study of stellar structure[Cox & Giuli(1968)]Cox+Giuli68 Cox, J. & Giuli, R. 1968, Gordon and Breach, New York, 401[Da Rio et al.(2010)Da Rio, Robberto, Soderblom, Panagia, Hillenbrand, Palla, & Stassun]Da-Rio+10 Da Rio, N., Robberto, M., Soderblom, D. R., et al. 2010, , 722, 1092[Dunham & Vorobyov(2012)]Dunham+Vorobyov12 Dunham, M. M. & Vorobyov, E. I. 2012, , 747, 52[Ferguson et al.(2005)Ferguson, Alexander, Allard, Barman, Bodnarik, Hauschildt, Heffner-Wong, & Tamanai]Ferguson+05 Ferguson, J. W., Alexander, D. R., Allard, F., et al. 2005, , 623, 585[Gatti et al.(2008)Gatti, Natta, Randich, Testi, & Sacco]Gatti+08 Gatti, T., Natta, A., Randich, S., Testi, L., & Sacco, G. 
2008, , 481, 423[Gatti et al.(2006)Gatti, Testi, Natta, Randich, & Muzerolle]Gatti+06 Gatti, T., Testi, L., Natta, A., Randich, S., & Muzerolle, J. 2006, , 460, 547[Geroux et al.(2016)Geroux, Baraffe, Viallet, Goffrey, Pratt, Constantino, Folini, Popov, & Walder]Geroux+16 Geroux, C., Baraffe, I., Viallet, M., et al. 2016, , 588, A85[Gillen et al.(2014)Gillen, Aigrain, McQuillan, Bouvier, Hodgkin, Alencar, Terquem, Southworth, Gibson, Cody, Lendl, Morales-Calderón, Favata, Stauffer, & Micela]Gillen+14 Gillen, E., Aigrain, S., McQuillan, A., et al. 2014, , 562, A50[Grevesse & Sauval(1998)]GS98 Grevesse, N. & Sauval, A. J. 1998, , 85, 161[Guillot(1999)]Guillot99 Guillot, T. 1999, , 47, 1183[Guillot et al.(2014)Guillot, Ida, & Ormel]Guillot+14 Guillot, T., Ida, S., & Ormel, C. W. 2014, , 572, A72[Hartmann(2001)]Hartmann01 Hartmann, L. 2001, , 121, 1030[Hartmann et al.(1997)Hartmann, Cassen, & Kenyon]Hartmann+97 Hartmann, L., Cassen, P., & Kenyon, S. J. 1997, , 475, 770[Hauschildt et al.(1999a)Hauschildt, Allard, & Baron]Hauschildt+99a Hauschildt, P. H., Allard, F., & Baron, E. 1999a, , 512, 377[Hauschildt et al.(1999b)Hauschildt, Allard, Ferguson, Baron, & Alexander]Hauschildt+99b Hauschildt, P. H., Allard, F., Ferguson, J., Baron, E., & Alexander, D. R. 1999b, , 525, 871[Hayashi(1961)]Hayashi61 Hayashi, C. 1961, , 13, 450[Heber et al.(2008)Heber, Baur, Bochsler, Burnett, Reisenfeld, Wieler, & Wiens]Heber+08 Heber, V. S., Baur, H., Bochsler, P., et al. 2008, in Lunar and Planetary Inst. Technical Report, Vol. 39, Lunar and Planetary Science Conference, 1779[Hébrard et al.(2005)Hébrard, Tripp, Chayer, Friedman, Dupuis, Sonnentrucker, Williger, & Moos]Hebrard+05 Hébrard, G., Tripp, T. M., Chayer, P., et al. 2005, , 635, 1136[Henyey et al.(1965)Henyey, Vardya, & Bodenheimer]Henyey+65 Henyey, L., Vardya, M. S., & Bodenheimer, P. 1965, , 142, 841[Herwig(2000)]Herwig00 Herwig, F. 2000, , 360, 952[Hillenbrand(2009)]Hillenbrand09 Hillenbrand, L. A. 2009, in IAU Symposium, Vol. 
258, IAU Symposium, ed. E. E. Mamajek, D. R. Soderblom, & R. F. G. Wyse, 81–94[Hosokawa et al.(2011)Hosokawa, Offner, & Krumholz]Hosokawa+11 Hosokawa, T., Offner, S. S. R., & Krumholz, M. R. 2011, , 738, 140[Hosokawa & Omukai(2009)]Hosokawa+Omukai09 Hosokawa, T. & Omukai, K. 2009, , 691, 823[Hueso & Guillot(2005)]Hueso+Guillot05 Hueso, R. & Guillot, T. 2005, , 442, 703[Inutsuka(2012)]Inutsuka12 Inutsuka, S.-i. 2012, Progress of Theoretical and Experimental Physics, 2012, 010000[Inutsuka et al.(2015)Inutsuka, Inoue, Iwasaki, & Hosokawa]Inutsuka+15 Inutsuka, S.-i., Inoue, T., Iwasaki, K., & Hosokawa, T. 2015, , 580, A49[Inutsuka et al.(2010)Inutsuka, Machida, & Matsumoto]Inutsuka+10 Inutsuka, S.-i., Machida, M. N., & Matsumoto, T. 2010, , 718, L58[Jeffries(2007)]Jeffries07 Jeffries, R. D. 2007, , 381, 1169[Jeffries(2012)]Jeffries12 Jeffries, R. D. 2012, Are There Age Spreads in Star Forming Regions?, ed. A. Moitinho & J. Alves, 163[Kenyon & Hartmann(1995)]Kenyon+Hartmann95 Kenyon, S. J. & Hartmann, L. 1995, , 101, 117[Larson(1969)]Larson69 Larson, R. B. 1969, , 145, 271[Lellouch et al.(2001)Lellouch, Bézard, Fouchet, Feuchtgruber, Encrenaz, & de Graauw]Lellouch+01 Lellouch, E., Bézard, B., Fouchet, T., et al. 2001, , 370, 610[Linsky et al.(2006)Linsky, Draine, Moos, Jenkins, Wood, Oliveira, Blair, Friedman, Gry, Knauth, Kruk, Lacour, Lehner, Redfield, Shull, Sonneborn, & Williger]Linsky+06 Linsky, J. L., Draine, B. T., Moos, H. W., et al. 2006, , 647, 1106[Mahaffy et al.(1998)Mahaffy, Donahue, Atreya, Owen, & Niemann]Mahaffy+98 Mahaffy, P. R., Donahue, T. M., Atreya, S. K., Owen, T. C., & Niemann, H. B. 1998, , 84, 251[Martin et al.(2012)Martin, Lubow, Livio, & Pringle]Martin+12 Martin, R. G., Lubow, S. H., Livio, M., & Pringle, J. E. 2012, , 423, 2718[Masunaga & Inutsuka(2000)]Masunaga+Inutsuka00 Masunaga, H. & Inutsuka, S.-i. 2000, , 531, 350[Meléndez et al.(2009)Meléndez, Asplund, Gustafsson, & Yong]Melendez+09 Meléndez, J., Asplund, M., Gustafsson, B., & Yong, D. 
2009, , 704, L66[Mercer-Smith et al.(1984)Mercer-Smith, Cameron, & Epstein]Mercer-Smith+84 Mercer-Smith, J. A., Cameron, A. G. W., & Epstein, R. I. 1984, , 279, 363[Muzerolle et al.(2005)Muzerolle, Luhman, Briceño, Hartmann, & Calvet]Muzerolle+05 Muzerolle, J., Luhman, K. L., Briceño, C., Hartmann, L., & Calvet, N. 2005, , 625, 906[Nelder & Mead(1965)]Nelder+Mead65 Nelder, J. A. & Mead, R. 1965, The computer journal, 7, 308[Palla & Stahler(1992)]Palla+Stahler92 Palla, F. & Stahler, S. W. 1992, , 392, 667[Palla & Stahler(2000)]Palla+Stahler00 Palla, F. & Stahler, S. W. 2000, , 540, 255[Paxton et al.(2011)Paxton, Bildsten, Dotter, Herwig, Lesaffre, & Timmes]Paxton+11 Paxton, B., Bildsten, L., Dotter, A., et al. 2011, , 192, 3[Paxton et al.(2013)Paxton, Cantiello, Arras, Bildsten, Brown, Dotter, Mankovich, Montgomery, Stello, Timmes, & Townsend]Paxton+13 Paxton, B., Cantiello, M., Arras, P., et al. 2013, , 208, 4[Paxton et al.(2015)Paxton, Marchant, Schwab, Bauer, Bildsten, Cantiello, Dessart, Farmer, Hu, Langer, Townsend, Townsley, & Timmes]Paxton+15 Paxton, B., Marchant, P., Schwab, J., et al. 2015, , 220, 15[Prantzos(2007)]Prantzos+07 Prantzos, N. 2007, , 130, 27[Ramírez et al.(2009)Ramírez, Meléndez, & Asplund]Ramirez+09 Ramírez, I., Meléndez, J., & Asplund, M. 2009, , 508, L17[Ramírez et al.(2011)Ramírez, Meléndez, Cornejo, Roederer, & Fish]Ramirez+11 Ramírez, I., Meléndez, J., Cornejo, D., Roederer, I. U., & Fish, J. R. 2011, , 740, 76[Reggiani et al.(2011)Reggiani, Robberto, Da Rio, Meyer, Soderblom, & Ricci]Reggiani+11 Reggiani, M., Robberto, M., Da Rio, N., et al. 2011, , 534, A83[Rogers & Nayfonov(2002)]Rogers+Nayfonov02 Rogers, F. J. & Nayfonov, A. 2002, , 576, 1064[Saumon et al.(1995)Saumon, Chabrier, & van Horn]Saumon+95 Saumon, D., Chabrier, G., & van Horn, H. M. 1995, , 99, 713[Seaton(2005)]Seaton05 Seaton, M. J. 2005, , 362, L1[Shu(1977)]Shu77 Shu, F. H. 
1977, , 214, 488[Soderblom et al.(2014)Soderblom, Hillenbrand, Jeffries, Mamajek, & Naylor]Soderblom+14 Soderblom, D. R., Hillenbrand, L. A., Jeffries, R. D., Mamajek, E. E., & Naylor, T. 2014, Protostars and Planets VI, 219[Stahler(1983)]Stahler+83 Stahler, S. W. 1983, , 274, 822[Stahler(1988)]Stahler88 Stahler, S. W. 1988, , 332, 804[Stahler & Palla(2005)]Stahler+Palla05 Stahler, S. W. & Palla, F. 2005, The Formation of Stars[Stahler et al.(1986)Stahler, Palla, & Salpeter]Stahler+86 Stahler, S. W., Palla, F., & Salpeter, E. E. 1986, , 302, 590[Stahler et al.(1980)Stahler, Shu, & Taam]SST80I Stahler, S. W., Shu, F. H., & Taam, R. E. 1980, , 241, 637[Stamatellos et al.(2007)Stamatellos, Whitworth, Bisbas, & Goodwin]Stamatellos+07 Stamatellos, D., Whitworth, A. P., Bisbas, T., & Goodwin, S. 2007, , 475, 37[Stassun et al.(2014)Stassun, Feiden, & Torres]Stassun+14 Stassun, K. G., Feiden, G. A., & Torres, G. 2014, , 60, 1[Steigman(2006)]Steigman06 Steigman, G. 2006, International Journal of Modern Physics E, 15, 1[Sugimoto & Nomoto(1975)]Sugimoto+Nomoto75 Sugimoto, D. & Nomoto, K. 1975, , 27, 197[Tomida et al.(2013)Tomida, Tomisaka, Matsumoto, Hori, Okuzumi, Machida, & Saigo]Tomida+13 Tomida, K., Tomisaka, K., Matsumoto, T., et al. 2013, , 763, 6[Townsley & Bildsten(2004)]Townsley+Bildsten04 Townsley, D. M. & Bildsten, L. 2004, , 600, 390[Vaytet et al.(2013)Vaytet, Chabrier, Audit, Commerçon, Masson, Ferguson, & Delahaye]Vaytet+13 Vaytet, N., Chabrier, G., Audit, E., et al. 2013, , 557, A90[Vidal-Madjar & Gry(1984)]Vidal-Madjar+Gry84 Vidal-Madjar, A. & Gry, C. 1984, , 138, 285[Vorobyov & Basu(2005)]VB05 Vorobyov, E. I. & Basu, S. 2005, , 633, L137[Vorobyov & Basu(2010)]Vorobyov+Basu10 Vorobyov, E. I. & Basu, S. 2010, , 719, 1896[Winkler & Newman(1980)]Winkler+Newman80 Winkler, K.-H. A. & Newman, M. J. 1980, , 236, 201 § Χ^2 TEST FOR THE INPUT PARAMETERSIn this study we chose the input parameters that reproduce the observed solar quantities. 
In addition to the radius and luminosity, the internal structure and surface composition are constrained by helioseismic and spectroscopic analyses <cit.>. Although the solar metallicity remains a matter of debate[ Using three-dimensional atmosphere models, the metallicity in the solar atmosphere is estimated to be ∼ 0.0134, i.e., reduced to about 70% of the classical value (∼ 0.02) <cit.>.], this issue is beyond the scope of this paper. We performed a χ^2 test to find the best initial settings using the “Nelder-Mead simplex algorithm” <cit.>. The input parameters are the initial composition (X_ini, Y_ini, and Z_ini; see Sect. <ref>), the mixing-length parameter, α_MLT, and the overshoot mixing parameter, f_ov <cit.>. We calculated the evolution of 1 M_⊙ stars from the PMS phase, varying these parameters to minimize the χ^2 value at the solar age, which is assumed to be 4.567 Gyr <cit.>. The results of the χ^2 test listed in Table <ref> are used in this paper. § EVOLUTION AND UNDERLYING PHYSICS IN THE COLD ACCRETION CASE In this Appendix, we explain the basic behavior and the underlying physics in the case of cold accretion shown in Sect. <ref>. The evolution can be split into five phases, as shown in Fig. <ref>. We explain each phase below. I. The adiabatic contraction phase: In this phase, the star shrinks while increasing its mass. The radius evolution can be fitted by R_⋆∝ M_⋆^-1/3. Indeed, we show in Appendix <ref> that this relation holds for a perfect gas star that accretes gas with the same entropy. In the current settings, the accretion is intense enough that the Kelvin-Helmholtz (K-H) timescale, τKH≡ |Etot|/L_⋆, is much longer than the accretion timescale, τacc≡ M_⋆/Ṁ, where Etot is the total energy of the star and L_⋆ is the stellar intrinsic luminosity [ The total luminosity of protostars is the sum of the intrinsic luminosity, L_⋆, and the radiation from the accretion shock front (i.e., Lacc - Ladd). ]. This is equivalent to Lacc≫ L_⋆, as shown in Fig. <ref>.
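The equivalence between τKH ≫ τacc and Lacc ≫ L_⋆ can be made explicit by writing Etot = -C GM_⋆^2/R_⋆, with C an order-unity constant (C = 3/7 for a fully convective star, see Appendix <ref>). The following is a sketch of that argument, not part of the original derivation:

```latex
\frac{\tau_{\rm KH}}{\tau_{\rm acc}}
  = \frac{|E_{\rm tot}|/L_\star}{M_\star/\dot{M}}
  = \frac{C\,GM_\star^2/R_\star}{L_\star}\cdot\frac{\dot{M}}{M_\star}
  = C\,\frac{GM_\star\dot{M}/R_\star}{L_\star}
  = C\,\frac{L_{\rm acc}}{L_\star}\,,
```

so, up to the factor C ∼ 1, the ratio of the two timescales is simply the ratio of the accretion luminosity to the intrinsic luminosity.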
The accretion luminosity is defined as Lacc ≡ GM_⋆Ṁ/R_⋆, which is, for example, 31.3 L_⊙ in the case of M_⋆=0.1 M_⊙, R_⋆=1 R_⊙, and Ṁ=10^-5 M_⊙/yr. II. The deuterium-burning phase: After the central temperature exceeds ∼ 10^6 K, deuterium fusion affects the evolution (see Sect. <ref>). In the current settings, this happens at t = 7 × 10^3 yr and M_⋆≃ 0.08 M_⊙. As described in Sect. <ref>, the energy production rate of deuterium burning has a strong temperature dependence, that is, εnuc∝ (T/10^6 K)^11.8. This strong temperature sensitivity is responsible for a rapid expansion of the star through the so-called “thermostat effect” <cit.>. Indeed, after the ignition of deuterium fusion, the central temperature is maintained nearly constant at about 10^6 K: any temperature increase would result in an expansion and hence adiabatic cooling of the deuterium-burning region, while any temperature decrease would be balanced by adiabatic heating. With the approximate relations for a perfect gas star in Appendix <ref>, Tc ∝ Pc/ρc ∝ M_⋆/R_⋆, and therefore a constant central temperature implies that a mass increase results in an expansion of the star. However, the central temperature is not exactly constant, so we must test whether the energy produced by deuterium burning is sufficient to cause the expansion. In Appendix <ref>, we derive Eq. (<ref>), which shows how the rate of change of the radius of a fully convective perfect gas star depends on the mass accretion rate, intrinsic luminosity, and nuclear energy production rate. Our derivation follows <cit.>. Neglecting the intrinsic luminosity (since the Kelvin-Helmholtz timescale is long compared to the relevant timescales in this phase), Ṙ/R_⋆ = -(1/3) Ṁ/M_⋆ + (7/3) R_⋆ Lnuc/(GM_⋆^2) . Therefore, the condition for expansion is given by Lnuc > (1/7) Lacc . In the current phase, this condition is satisfied owing to the vigorous burning of the pre-existent deuterium, and thus Ṙ>0. III.
A second contraction phase: After ∼ 1.5 × 10^4 yr, the pre-existing deuterium has been burned up and deuterium fusion relies on the accretion of fresh deuterium. In this phase, the star contracts again, because the mass accretion rate of fresh deuterium, Ṁ XD, is not sufficient to compensate for the compression by accretion. The maximum energy production rate of deuterium burning, LD,max, is calculated by assuming that accreted deuterium is burned instantaneously, LD,max = 8.65 L_⊙ (Ṁ / (10^-5 M_⊙/yr)) (XD / 2.0×10^-5) . This may be compared to the accretion luminosity Lacc defined in Eq. (<ref>), but for a mass that is now M_⋆∼ 0.5 M_⊙. We thus obtain LD,max/Lacc∼ 0.05 for standard values of the parameters. According to Eq. (<ref>), this shows that the burning of accreted deuterium cannot prevent the adiabatic contraction, independently of the accretion rate. In this phase, thermonuclear energy production is dominated by deuterium burning, implying that Lnuc≃ LD,max until the accretion ceases at 10^5 yr (see Fig. <ref>). IV. The swelling phase: After the accretion ceases, the radius remains nearly constant for several million years. This timescale is determined by what happens in the deep interior and is shorter than the photospheric K-H timescale of ∼ 10^7–10^8 years: the luminosity in the deep interior is much larger than that at the surface, owing to the absorption of energy in the stellar interior. Thus, the internal thermal timescale, ∼ GM_r^2/(2rl), is down to a few million years in the deep interior, where r and l are the local radius and luminosity. For most of this phase, the dominant source of the luminosity is the energy release in each mass shell, i.e., T∂ S/∂ t in Eq. (<ref>). After several million years, the star expands. A “luminosity wave” <cit.> gradually propagates from the interior to the atmosphere and is accompanied by an increase in temperature and a decrease in opacity of the outer layers.
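As a cross-check of the numbers quoted for phases I–III, the accretion and deuterium-burning luminosities can be evaluated directly. The physical constants and the 5.49 MeV energy release per D(p,γ)³He reaction below are standard values not taken from the text, and R_⋆ ≈ 1 R_⊙ at M_⋆ ≈ 0.5 M_⊙ is an assumption implied by the quoted ratio of ∼0.05:

```python
# Constants (CODATA/IAU-like; Lsun and yr conventions vary at the ~1% level,
# which is why the quoted 31.3 and 8.65 are matched only approximately).
G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10   # cgs
Lsun, yr, MeV = 3.828e33, 3.156e7, 1.602e-6    # erg/s, s, erg

Mdot = 1e-5 * Msun / yr                        # fiducial accretion rate, g/s

# Accretion luminosity for M = 0.1 Msun, R = 1 Rsun (text: 31.3 Lsun).
L_acc = G * (0.1 * Msun) * Mdot / Rsun / Lsun
assert 30.8 < L_acc < 32.0

# Maximum D-burning luminosity: each D(p,g)3He reaction releases 5.49 MeV.
X_D, m_D = 2.0e-5, 3.344e-24                   # mixing ratio, deuteron mass (g)
L_D_max = (Mdot * X_D / m_D) * 5.49 * MeV / Lsun
assert 8.4 < L_D_max < 8.9                     # text: 8.65 Lsun

# Phase III: at M ~ 0.5 Msun (assuming R ~ 1 Rsun), the ratio is ~0.05 < 1/7,
# so burning of freshly accreted deuterium cannot halt the contraction.
ratio = L_D_max / (5.0 * L_acc)
assert ratio < 1.0 / 7.0
```

Both quoted luminosities are recovered to within ∼1%, and the ratio falls safely below the 1/7 expansion threshold of Eq. (<ref>).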
The redistribution of entropy in the star due to the luminosity wave changes the internal structure (e.g., the polytropic index n) and then causes the stellar expansion <cit.>. These profiles are shown in Fig. <ref>. V. The main sequence: The star then shrinks slightly through K-H contraction to enter the main sequence (MS). In this phase, the intrinsic luminosity is almost entirely due to hydrogen burning. The energy equation becomes almost time independent, which means that stars evolve on a much longer timescale. This slow evolution is caused by the change in chemical composition of the central regions due to nuclear energy production. The MS lasts until hydrogen is exhausted in the central regions. § ANALYTICAL RELATIONS Here we derive the mass-radius relationship of Eq. (<ref>) in two ways [In addition, another derivation is possible using a polytropic analysis <cit.>.]. First, we use the characteristic density, pressure, and entropy, following <cit.>. The characteristic density and pressure of a star in hydrostatic equilibrium are given by ρ̃ ∝ M_⋆/R_⋆^3 , P̃ ∝ M_⋆^2/R_⋆^4 . The entropy is given by S = c_V ln( P/ρ^γad ) + S_0, where c_V is the specific heat at constant volume, γad = c_P/c_V is the adiabatic index, and S_0 is a constant. Substituting Eq. (<ref>) into this equation, we obtain R_⋆ = M_⋆^-(2-γad)/(3γad-4) exp[ (S-S_0) / ((3γad-4) c_V) ] . In the case of a monatomic ideal gas (γad=5/3), we obtain R_⋆ = M_⋆^-1/3 exp[ (2/3)(μ/ℛ)(S-S_0) ] , where μ is the mean molecular weight and ℛ is the gas constant. This equation shows that the radius is determined only by the entropy and mass. In isentropic stars, R_⋆∝ M_⋆^-1/3 (Eq. <ref>). Secondly, Eq. (<ref>) can be derived from the energy equation, which also gives Eq. (<ref>), following <cit.>. The total energy of a star is given by the sum of the internal and gravitational energies, Etot = Eint + Eg .
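Before turning to the energy-equation route, the entropy-based scaling can be verified numerically: for γad = 5/3 the combination P̃/ρ̃^γad depends on mass and radius only through M_⋆^1/3 R_⋆, so constant entropy forces R_⋆ ∝ M_⋆^-1/3. A minimal sketch using the homology relations above:

```python
import math

gamma_ad = 5.0 / 3.0   # monatomic ideal gas

def s_tilde(M, R):
    """Homology entropy ln(P~ / rho~^gamma), up to an additive constant,
    with P~ ~ M^2/R^4 and rho~ ~ M/R^3 in arbitrary units."""
    P, rho = M**2 / R**4, M / R**3
    return math.log(P / rho**gamma_ad)

# Scaling M by 8 and R by 8^(-1/3) leaves the entropy unchanged...
assert abs(s_tilde(8.0, 8.0**(-1.0 / 3.0)) - s_tilde(1.0, 1.0)) < 1e-12
# ...whereas any other radius scaling changes it.
assert abs(s_tilde(8.0, 1.0) - s_tilde(1.0, 1.0)) > 0.1
```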
The energy conservation is expressed as follows: dEtot/dt = -L_⋆ + Lnuc - GM_⋆Ṁ/R_⋆ + Ladd . The right-hand side (RHS) contains the radiative cooling, the energy production by thermonuclear reactions, and the gravitational and internal energies of the accreted material. Following Eq. (<ref>), we assume Ladd = ξ GM_⋆Ṁ/R_⋆ . From the virial theorem, Etot = (4-3γad) Eint = [(3γad-4)/(3(γad-1))] Eg . If we assume the polytropic relation P(ρ) = Kρ^(1+1/n), then Eg = -[3/(5-n)] GM_⋆^2/R_⋆ , where n is the polytropic index. If we define C ≡ (3γad-4)/[(γad-1)(5-n)], then Etot = -C GM_⋆^2/R_⋆. Thus, the total energy evolution in Eq. (<ref>) becomes Ṙ/R_⋆ = ( 2 - (1-ξ)/C ) Ṁ/M_⋆ - R_⋆ L_⋆/(C GM_⋆^2) + R_⋆ Lnuc/(C GM_⋆^2) . The second term of the RHS of Eq. (<ref>) corresponds to the inverse of the K-H timescale τKH, which is defined as the typical timescale of radiative cooling (i.e., τKH≡|Etot|/L_⋆). During the main-accretion phase, τKH is in general much longer than the accretion timescale τacc≡ M/Ṁ [This is equivalent to the condition L_⋆≪ Lacc, as shown in Fig. <ref>.]. Therefore, the second term of the RHS of Eq. (<ref>) can be neglected if the first term of the RHS is not zero, i.e., 1-ξ≠ 2C. In fully convective stars, which consist of a monatomic ideal gas, n=3/2 and γad=5/3, and then C=3/7. In addition, in the case that ξ=0, we obtain Eq. (<ref>). Moreover, if Lnuc≪ Lacc, it reduces to Eq. (<ref>). § DEPENDENCE ON THE INITIAL STELLAR SEED MASS In this paper we chose 10^-2 M_⊙ as the fiducial value of the initial stellar seed mass. We stress that this value is higher than the second-core masses in recent works on the hydrodynamic collapse of molecular clouds <cit.>. For example, a second core is formed with 4 and 4 in <cit.>, while 1.4 and 0.65 in the ten simulations of <cit.> [<cit.> indicate that the difference probably results from different opacity tables.]. Therefore, our calculations start with slightly evolved seeds rather than second cores. Our approach in Sects.
<ref> and <ref> was to fix the initial seed mass and explore different values of the initial seed radius, following <cit.>. This choice was essentially motivated by the fact that the convergence of evolution models with MESA is more difficult for very low seed masses. However, in light of the fact that, as pointed out by <cit.>, a low value of the initial seed mass has important consequences for the subsequent stellar evolution, we must show that our range of initial conditions yields evolutionary tracks that are equivalent to those we would obtain with small initial seed masses. Figure <ref> illustrates seven evolutionary models starting from different seed masses: 0.01 M_⊙ (I–II), 0.004 M_⊙ (III–IV), and 10^-3 M_⊙ (V–VII), for various initial radii R_ini and heat injection efficiencies ξ. Case (I) corresponds to the fiducial initial condition used in the present paper, and case (II) is used in Sect. <ref>. Case (III) corresponds to seed conditions obtained by <cit.>. Case (IV) corresponds to a seed with the same specific entropy as our fiducial seed (I) but a mass of only 0.004 M_⊙ (see Eq. <ref>). Case (V) corresponds to the largest specific entropy for which we could converge an evolution calculation for a 10^-3 M_⊙ seed. The radius of a 10^-3 M_⊙ seed with the same entropy as case (I) would be 3.2 R_⊙, but unfortunately the corresponding calculation failed to converge. Finally, cases (VI) and (VII) are obtained by arbitrarily changing the initial radii of 10^-3 M_⊙ seeds to span a range of specific entropies. Interestingly, stars evolving from the initial conditions (II), (V), (VI), or (VII) have radii so small that they reach central temperatures high enough to ignite hydrogen burning when their mass reaches ≃0.6 M_⊙. We can see in Fig. <ref> that the range of radii obtained for models with different initial seed masses and different values of ξ is the same as that obtained with our fiducial seed mass and various values of the initial radius.
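The quoted seed radii can be checked against the entropy–mass–radius relation of Appendix <ref>: at fixed specific entropy, R_⋆ ∝ M_⋆^-1/3. The fiducial values (0.01 M_⊙, Rini = 1.5) are taken from the text; the case (IV) radius is our own inference, not stated in the paper:

```python
# Appendix C gives R = M^(-1/3) exp[(2/3)(mu/Rgas)(S - S0)] for a monatomic
# ideal gas, so two seeds with the same specific entropy obey
#   R2 / R1 = (M2 / M1)^(-1/3).
M_fid, R_fid = 1.0e-2, 1.5        # fiducial seed: 0.01 Msun, Rini = 1.5 Rsun

# 1e-3 Msun seed with the entropy of the fiducial seed (text: "would be 3.2").
R_small = R_fid * (1.0e-3 / M_fid) ** (-1.0 / 3.0)
assert 3.1 < R_small < 3.3

# Case (IV): 0.004 Msun seed with the same entropy (our inference: ~2.0 Rsun).
R_case4 = R_fid * (4.0e-3 / M_fid) ** (-1.0 / 3.0)
assert 1.9 < R_case4 < 2.1
```

The first value, 1.5 × 10^(1/3) ≈ 3.23 R_⊙, reproduces the 3.2 R_⊙ quoted above.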
Furthermore, we see that the evolution is largely controlled by the value of the ξ parameter: for ξ=0.5, the evolutionary tracks converge independently of the choice of the initial seed properties above about 0.1 M_⊙. This is also the case for ξ=0.1, although differences in radii on the order of ∼ 10% remain even after accretion is completed. The cold accretion case is the one for which differences in initial conditions persist the longest; they can still be on the order of ∼ 40% at the end of the accretion phase. Our conclusion that most stars would have been formed with ξ≳0.1 is not affected by the uncertainties in the initial conditions. In order to explain the small sizes of young cool stars, <cit.> invoked the need for a small seed radius (0.2 R_⊙ at 0.01 M_⊙), while <cit.> invoked the need for a small seed mass. Our calculations in Fig. <ref> show that these conditions are equivalent.
http://arxiv.org/abs/1702.07901v1
{ "authors": [ "Masanobu Kunitomo", "Tristan Guillot", "Taku Takeuchi", "Shigeru Ida" ], "categories": [ "astro-ph.SR", "astro-ph.EP" ], "primary_category": "astro-ph.SR", "published": "20170225150041", "title": "Revisiting the pre-main-sequence evolution of stars I. Importance of accretion efficiency and deuterium abundance" }
Building upon work by Matsumoto, we show that the quantum relative entropy with full-rank second argument is determined by four simple axioms: i) Continuity in the first argument, ii) the validity of the data-processing inequality, iii) additivity under tensor products, and iv) super-additivity. This observation has immediate implications for quantum thermodynamics, which we discuss. Specifically, we demonstrate that, under reasonable restrictions, the free energy is singled out as a measure of athermality. In particular, we consider an extended class of Gibbs-preserving maps as free operations in a resource-theoretic framework, in which a catalyst is allowed to build up correlations with the system at hand. The free energy is the only extensive and continuous function that is monotonic under such free operations. Axiomatic characterization of the quantum relative entropy and free energy Henrik Wilming, Rodrigo Gallego, Jens Eisert December 30, 2023 ========================================================================== § INTRODUCTION The quantum relative entropy captures the statistical distinguishability of two quantum states. For two states ρ and σ supported on the same Hilbert space, it is defined as S(ρ || σ) = tr(ρlogρ - ρlogσ), whenever supp(ρ) ⊆ supp(σ), and set to infinity otherwise. This quantity has a clear interpretation in the statistical discrimination of ρ from σ, appearing as an error rate in quantum hypothesis testing <cit.>, a result commonly known as Stein's Lemma. It is hence no surprise that this quantity appears in a plethora of places in contemporary quantum physics. This is particularly true in the context of quantum information theory <cit.>. In the relative entropy of entanglement, it quantifies the entanglement content of a general quantum state <cit.>. More generally, it appears in conversion rates in so-called resource theories <cit.>. Relatedly, it takes center stage in the problem of (approximately) recovering quantum information <cit.>.
But the applications are not confined to quantum information theory. In many-body physics, it provides bounds on the clustering of correlations in space in terms of the mutual information <cit.>. In quantum thermodynamics <cit.>, which is the context in the focus of attention in this note, its interpretation as the non-equilibrium free energy gives an upper bound to how much work can be extracted from a non-equilibrium system, and it is important in answering how to operationally define work in the quantum regime in the first place <cit.>. Not least, it has appeared in the context of the AdS/CFT correspondence <cit.>, again drawing from and building upon the above-mentioned applications.

In this note, we restrict to the case where the second argument σ has full rank and only consider finite-dimensional Hilbert spaces. Essentially by re-interpreting and building upon a theorem by Matsumoto <cit.>, we will show that the quantum relative entropy (<ref>) is (up to a constant factor) the only function featuring the following four properties:

*Continuity: For fixed σ, the map ρ↦ S(ρ || σ) is continuous <cit.>.

*Data-processing inequality: For any quantum channel T we have

S(T(ρ) || T(σ)) ≤ S(ρ || σ).

*Additivity:

S(ρ_1⊗ρ_2 || σ_1⊗σ_2) = S(ρ_1||σ_1) + S(ρ_2||σ_2).

*Super-additivity: For any bipartite state ρ_1,2 with marginals ρ_1, ρ_2 we have

S(ρ_1,2 || σ_1⊗σ_2) ≥ S(ρ_1||σ_1) + S(ρ_2||σ_2).

No subset of these properties characterizes the relative entropy uniquely; Properties <ref> to <ref> are, for example, also fulfilled by the Rényi divergences <cit.>. The uniqueness of the quantum relative entropy under Properties <ref>-<ref> has significant implications for quantum thermodynamics (QT), which we elaborate upon. The formalism of QT has recently been recast within the framework of a resource theory <cit.>, i.e., one in which quantum states that are different from Gibbs states (at a fixed environment temperature) are considered resources.
We will refer here to this kind of resource as athermality. Within this resource theory one is, among other problems, interested in finding bona fide measures of athermality. These are functions that quantify the amount of athermality of a given system. A requirement for a function to reasonably quantify the degree of athermality is that it does not increase under the free operations of the resource theory. The problem of identifying such functions has been studied intensively in recent years for different classes of free operations, providing families of valid measures that are regarded as generalizations of the free energy <cit.>. They share the property that they are all based on generalizations of the quantum relative entropy (<ref>).

In this work, we will use the uniqueness result on the quantum relative entropy to show that the usual non-equilibrium free energy emerges as the unique continuous and extensive measure of athermality under a certain meaningful choice of free operations. In this sense, we also provide a fresh link between resource-theoretic considerations in quantum thermodynamics and more traditional descriptions of thermodynamic processes in the quantum regime.

§ AXIOMATIC DERIVATION OF QUANTUM RELATIVE ENTROPY

We start by formally stating the main technical result. Let f be a function on pairs of quantum states acting on the same finite-dimensional Hilbert space, with the second argument having full rank. Suppose f fulfills Properties <ref>-<ref>. Then it is given by

f(ρ,σ) = C tr(ρ log ρ - ρ log σ) := C S(ρ||σ),

for some constant C > 0. The proof relies on a characterization of the relative entropy in terms of different properties laid out in Ref. <cit.>. To state it, we first require a definition: Let (ρ,σ) be a pair of states on a finite-dimensional Hilbert space H and {ρ'_n} be a sequence of states on the Hilbert spaces H^⊗ n. We define a function f on pairs of quantum states to be lower asymptotically semi-continuous (l.a.s.)
with respect to σ if

lim_n→∞ ||ρ^⊗ n - ρ'_n||_1 = 0

implies

lim inf_n→∞ 1/n (f(ρ'_n,σ^⊗ n) - f(ρ^⊗ n,σ^⊗ n)) ≥ 0.

Then Matsumoto's theorem <cit.> for the relative entropy can be stated in the following way.

Let f fulfill the data-processing inequality, additivity, and be lower asymptotically semi-continuous with respect to all σ. Then f ∝ S.

The proof of Theorem <ref> follows from the subsequent Lemma, which in turn implies that Properties <ref>-<ref> give rise to the conditions of Theorem <ref>.

Let f be a function on pairs of quantum states with the following properties: * The map ρ↦ f(ρ,σ) is continuous for any fixed σ. * Additivity: f(ρ_1⊗ρ_2,σ_1⊗σ_2) = ∑_i=1^2 f(ρ_i,σ_i). * Super-additivity: f(ρ_1,2,σ_1⊗σ_2) ≥ f(ρ_1⊗ρ_2,σ_1⊗σ_2). Then f is lower asymptotically semi-continuous with respect to any σ.

Let {ρ'_n} be a sequence of states such that ||ρ'_n - ρ^⊗ n||_1 → 0. Since the trace norm fulfills the data-processing inequality, we know that ||ρ'_n,i - ρ||_1 → 0, where ρ'_n,i denotes the marginal of ρ'_n on the i-th tensor factor. Hence, the marginals converge to ρ. From the properties of f, we furthermore see that

1/n (f(ρ'_n,σ^⊗ n) - f(ρ^⊗ n,σ^⊗ n)) ≥ 1/n ∑_i (f(ρ'_n,i,σ) - f(ρ,σ)) ≥ min_i {f(ρ'_n,i,σ)} - f(ρ,σ) ⟶ 0 as n→∞,

where the limit follows from continuity, and the first inequality from super-additivity together with additivity (the second bounds an average by its minimum).

§ UNIQUENESS OF THE FREE ENERGY

The results of the previous section, in particular Theorem <ref>, can be applied to the resource theory of β-athermality. We formulate it as a resource theory of pairs of a quantum state and a Hamiltonian (ρ,H) that we call an object. An object (ρ,H) is said to have the resource of β-athermality if it fulfills

ρ ≠ ω_β,H,

where ω_β,H is the Gibbs state for the Hamiltonian H and inverse temperature β > 0, given by

ω_β,H := e^-β H / tr(e^-β H).

In this way, the resource theory of β-athermality is concerned with the absence of thermal equilibrium at temperature 1/β <cit.>.
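As an illustration (ours, not part of the original text), the Gibbs state can be constructed numerically by exponentiating the Hamiltonian in its eigenbasis; the energy shift below is only for numerical stability and drops out after normalization.

```python
import numpy as np

def gibbs_state(H, beta):
    """Gibbs state omega_{beta,H} = exp(-beta H) / tr(exp(-beta H)) for Hermitian H."""
    vals, vecs = np.linalg.eigh(H)
    w = np.exp(-beta * (vals - vals.min()))  # shift by the ground energy for stability
    w /= w.sum()                             # normalization removes the shift again
    return (vecs * w) @ vecs.conj().T

# A qubit with energy gap 1 at inverse temperature beta = 1:
H = np.diag([0.0, 1.0])
omega = gibbs_state(H, beta=1.0)
print(np.round(np.diag(omega).real, 4))  # [0.7311 0.2689], populations ~ [1, e^-1]
```

The populations are proportional to the Boltzmann weights 1 and e^-1, normalized by the partition function 1 + e^-1.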
Concerning the set of free operations, we will consider here the most general set of operations that do not create resourceful states from states featuring fewer resources. In order to progress, let us first define the so-called Gibbs-preserving maps (GP), which are quantum channels that have the Gibbs state (<ref>) as a fixed point. More formally, a GP channel is defined as a trace-preserving completely positive map G_β with the property that

G_β(ω_β,H) = ω_β,H for all H.

Note that, formulated as above, GP channels only induce transitions that change the quantum state but not the Hamiltonian. This can be extended by simply considering functions G that act on the object, possibly changing also the Hamiltonian, but which at the same time do not create β-athermality. In this way, we define a GP map as a function (ρ,H) ↦ (σ,K) = G_β(ρ,H) such that

G_β(ω_β,H, H) = (ω_β,K, K).

This condition can equivalently be cast into the following form: one may define the set of GP channels as G_β^H(ω_β,H) = ω_β,K(H) for all H, and the map between Hamiltonians as G̅(H) = K, so that

G(ρ,H) = (G_β^H(ρ), G̅(H)).

With this notation, condition (<ref>) is simply given by

G_β^H(ω_β,H) = ω_β,G̅(H).

GP maps G are not only a natural extension of GP channels for the case where Hamiltonians are modified; one can also see that any GP map can be implemented if one is given access to a GP channel and an ancillary system in a Gibbs state. This is formalized by the following Lemma, taken from Ref. <cit.>. Any map G_β fulfilling (<ref>) acting on a system S can be implemented by adding an ancillary system A in the Gibbs state (ω_β,K, K) and applying a GP channel G to the entire compound. More formally, we find that

G_β(ρ_S,H_S) := (σ,K) = (tr_S(G_β(ρ ⊗ ω_β,K)), K).

Once we have established the set of GP maps for the objects, we will now introduce the notion of a catalyst in this framework. This is done analogously to the case of catalysts for other sets of operations, such as thermal operations <cit.>.
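A minimal concrete example of a GP channel, added here as our own illustration: mixing the input with the Gibbs state itself, G(ρ) = (1-p)ρ + p ω_β,H, is trace-preserving, completely positive, and fixes ω_β,H for any p ∈ [0,1].

```python
import numpy as np

def gibbs_state(H, beta):
    vals, vecs = np.linalg.eigh(H)
    w = np.exp(-beta * (vals - vals.min())); w /= w.sum()
    return (vecs * w) @ vecs.conj().T

def partial_thermalization(rho, omega, p):
    """A simple Gibbs-preserving channel: with probability p, replace the state
    by the Gibbs state. The Gibbs state is a fixed point for every p in [0, 1]."""
    return (1 - p) * rho + p * omega

H = np.diag([0.0, 1.0])
omega = gibbs_state(H, beta=2.0)
rho = np.array([[0.0, 0.0], [0.0, 1.0]])  # excited state: an athermal resource
print(np.allclose(partial_thermalization(omega, omega, p=0.3), omega))  # True
print(np.isclose(np.trace(partial_thermalization(rho, omega, p=0.3)).real, 1.0))  # True
```

This channel only pushes states towards equilibrium; it can never generate athermality from the Gibbs state, which is exactly the defining property of a GP channel.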
In the following, we will also frequently drop the β-subscript from GP maps for simplicity of notation.

§.§ Catalysts and correlations

We will now turn to defining the transitions between objects that can be performed with GP maps and the use of what is called a "catalyst" in this context. This is simply an ancillary quantum system that is left in the same state (in a sense that will be made precise later) after the transition is performed, rendering the metaphor of an actual catalyst quite appropriate. We say that the transition

(ρ_S,H_S) → (σ_S,K_S)

is a catalytic free transition if there exist a GP map G and a system A described by the object (γ_A,R_A) such that

G((ρ_S,H_S) ⊗ (γ_A,R_A)) = (σ_S,K_S) ⊗ (γ_A,R_A).

We will in this case simply denote it by

(ρ_S,H_S) c> (σ_S,K_S).

Here, we are employing the convenient notation (ρ_S,H_S) ⊗ (γ_A,R_A) := (ρ_S ⊗ γ_A, H_S ⊗ 𝕀_A + 𝕀_S ⊗ R_A) to describe tensor products of objects. In the remainder of this work, we will simply write H_S ⊗ 𝕀_A + 𝕀_S ⊗ R_A := H_S + R_A. Importantly, we are assuming that the catalyst A is left in the same state and Hamiltonian, and also uncorrelated with S. The role of correlations of the catalyst in quantum thermodynamics was first noted in Ref. <cit.>. There, one considers a catalyst consisting of k subsystems and merely demands that the marginal state of each subsystem is left untouched. We define it here formally for the case of GP maps. We say that the transition

(ρ_S,H_S) → (σ_S,K_S)

is a marginal-catalytic free transition if there exist a GP map G and systems A_1,…,A_k described by the object (γ_A,R_A) = ⊗_i=1^k (γ^i,R^i) such that

G((ρ_S,H_S) ⊗ (γ_A,R_A)) = (σ_S,K_S) ⊗ (γ̃_A,R_A),

where the marginal of γ̃_A on A_i equals that of γ_A for all i ∈ {1,…,k}. We will in this case simply denote it by

(ρ_S,H_S) mc> (σ_S,K_S).

Note that in this case the system A does not remain unchanged, but only its local marginals. In this sense, it is not truly a catalyst, but a catalyst on its reduced states.
It is natural to expect that this indeed allows for a larger set of transitions, since the system A is "used up" by employing the initial lack of correlations as a resource. We will now consider a family of transitions that also introduces correlations, but for which the catalyst is, unlike in Definition <ref>, left entirely untouched. In this case, correlations are built up between the system and the catalyst. In this way, the catalyst is re-usable as long as it is employed to implement a transition on a new system. We call these transitions, originally introduced in Ref. <cit.>, correlated-catalytic free transitions: We say that the transition

(ρ_S,H_S) → (σ_S,K_S)

is a correlated-catalytic free transition if there exist a GP map G and a system A described by the object (γ_A,R_A) such that

G((ρ_S,H_S) ⊗ (γ_A,R_A)) = (η, K_S + R_A),

where tr_A(η) = σ_S and tr_S(η) = γ_A. We will in this case simply denote it by

(ρ_S,H_S) cc> (σ_S,K_S).

We will now show that the non-equilibrium free energy is the only function, under reasonable assumptions, that does not increase under operations of the form of Definitions <ref> and <ref>.

§.§ Free energy as a unique measure of non-equilibrium

We will call a measure of non-equilibrium a function that quantifies how far a given object (ρ,H) is from its equilibrium object (ω_β,H, H). The minimal requirement on such a measure is that it is non-increasing under free transitions. The larger the set of free transitions, the more restricted the allowed set of measures. One of the most well-studied measures of non-equilibrium is based on the quantum relative entropy. It is related to the free energy as

Δ F_β(ρ,H) := 1/β S(ρ || ω_β,H) = F_β(ρ,H) - F_β(ω_β,H, H),

where F_β(ρ,H) = tr(ρ H) - β^-1 S(ρ), with S being the von Neumann (and not the relative) entropy.
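The identity Δ F_β(ρ,H) = (1/β) S(ρ||ω_β,H) can be checked numerically. The sketch below is our own illustration (not part of the original text) and uses commuting (diagonal) states, for which the quantum relative entropy reduces to the classical Kullback-Leibler divergence of the populations.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy, in nats."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return -np.sum(vals * np.log(vals))

def gibbs_state(H, beta):
    vals, vecs = np.linalg.eigh(H)
    w = np.exp(-beta * (vals - vals.min())); w /= w.sum()
    return (vecs * w) @ vecs.conj().T

def free_energy(rho, H, beta):
    """F_beta(rho,H) = tr(rho H) - S(rho)/beta, with S the von Neumann entropy."""
    return np.trace(rho @ H).real - vn_entropy(rho) / beta

beta, H = 1.0, np.diag([0.0, 1.0])
omega = gibbs_state(H, beta)
rho = np.diag([0.2, 0.8])  # population-inverted, hence athermal

delta_F = free_energy(rho, H, beta) - free_energy(omega, H, beta)
# For diagonal states, S(rho||omega) is the KL divergence of the populations:
p, q = np.diag(rho), np.diag(omega)
kl = np.sum(p * np.log(p / q))
print(np.isclose(delta_F, kl / beta))  # True
```

The agreement is exact: expanding S(ρ||ω_β,H) using log ω_β,H = -βH - log Z reproduces F_β(ρ,H) - F_β(ω_β,H,H) term by term.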
The measure Δ F_β fulfills the following properties, which we express here for a generic measure denoted by M_β:

*Continuity: For fixed Hamiltonian H, the map ρ ↦ M_β(ρ,H) is continuous.

*Additivity:

M_β(ρ_1⊗ρ_2, H_1+H_2) = M_β(ρ_1,H_1) + M_β(ρ_2,H_2).

*Monotonicity:

* Monotonicity: M_β(ρ,H) ≥ M_β(σ,K) if G(ρ,H) = (σ,K).

* Catalytic monotonicity: M_β(ρ,H) ≥ M_β(σ,K) if (ρ,H) c> (σ,K).

* Marginal-catalytic monotonicity: M_β(ρ,H) ≥ M_β(σ,K) if (ρ,H) mc> (σ,K).

* Correlated-catalytic monotonicity: M_β(ρ,H) ≥ M_β(σ,K) if (ρ,H) cc> (σ,K).

All these properties apply to all states and Hamiltonians involved. The fact that Δ F_β fulfills <ref> and <ref> follows from the continuity and additivity properties of the quantum relative entropy. The other properties can be related to the data-processing inequality and super-additivity, as we will see in Theorem <ref>. Before that, let us note that for any function M_β on objects, we can define a function M_β on pairs of quantum states via M_β(ρ, ω_β,H) = M_β(ρ,H). At the same time, it is true that any full-rank state σ can be thought of as the Gibbs state of the modular Hamiltonian

H_σ := -1/β log σ + C,

for any C ∈ ℝ. With this notation, all objects of the form (σ, H_σ) are Gibbs objects. Importantly, the modular Hamiltonian H_σ is only defined up to an additive constant. It turns out, however, that Properties <ref> and <ref> imply that M_β(ρ,H) = M_β(ρ,H+C) for any C ∈ ℝ (see Appendix <ref> for a proof). Any additive measure of athermality is hence automatically gauge-invariant in this sense. Thus, the function on objects and the function on pairs of states are in a one-to-one correspondence.
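The gauge invariance M_β(ρ,H) = M_β(ρ,H+C) can be verified directly for the concrete measure Δ F_β. The following sketch (our own numerical illustration, not part of the original text) shifts a qubit Hamiltonian by an arbitrary constant and confirms that Δ F_β is unchanged.

```python
import numpy as np

def vn_entropy(rho):
    vals = np.linalg.eigvalsh(rho); vals = vals[vals > 1e-12]
    return -np.sum(vals * np.log(vals))

def gibbs_state(H, beta):
    vals, vecs = np.linalg.eigh(H)
    w = np.exp(-beta * (vals - vals.min())); w /= w.sum()
    return (vecs * w) @ vecs.conj().T

def delta_F(rho, H, beta):
    """Non-equilibrium free energy difference Delta F_beta(rho, H)."""
    F = lambda r: np.trace(r @ H).real - vn_entropy(r) / beta
    return F(rho) - F(gibbs_state(H, beta))

beta = 0.7
H = np.array([[0.0, 0.5], [0.5, 1.0]])  # a non-diagonal qubit Hamiltonian
rho = np.diag([0.9, 0.1])
C = 3.2  # arbitrary energy offset
print(np.isclose(delta_F(rho, H, beta), delta_F(rho, H + C * np.eye(2), beta)))  # True
```

The offset C shifts both F_β(ρ,H) and F_β(ω_β,H,H) by the same amount (the Gibbs state itself is unchanged by a uniform shift), so the difference is gauge-invariant, as the appendix argues abstractly.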
With this equivalence, we say that a measure M_β is super-additive if, for any bipartite quantum state, it fulfills

M_β(ρ_1,2, σ_1⊗σ_2) ≥ M_β(tr_2(ρ) ⊗ tr_1(ρ), σ_1⊗σ_2),

and additive if it fulfills

M_β(ρ_1⊗ρ_2, σ_1⊗σ_2) = M_β(ρ_1,σ_1) + M_β(ρ_2,σ_2).

Also, M_β is said to fulfill the data-processing inequality if

M_β(T(ρ), T(σ)) ≤ M_β(ρ,σ)

for all ρ, all full-rank σ, and all quantum channels T.

At this point, a note of caution is appropriate. We have previously defined the function M_β only in the specific case where the second argument has full rank. There clearly are quantum channels T that reduce the rank of full-rank states, in which case M_β(T(ρ),T(σ)) may at first seem undefined. This is not a problem, however. To see this, we make use of the following fact about quantum channels: Let T: B(H) → B(H') be a quantum channel and σ any full-rank state. If T(σ) is only supported on a subspace P ⊆ H', then T(ρ) is supported only within P for any ρ. The proof is given in the appendix. By this lemma, we see that any quantum channel that maps a full-rank state σ into a state T(σ) without full rank simply maps all states into the smaller Hilbert space P = supp(T(σ)), and should be considered as a map from states on H to states on P instead. Since the function M_β is defined on all finite-dimensional Hilbert spaces, we can simply assume that it acts on B(P)×B(P) in this case. In other words, the function M_β(ρ,σ) is always defined if supp(ρ) ⊆ supp(σ), by restricting it to supp(σ):

M_β(ρ,σ) = M_β(ρ|_supp(σ), σ|_supp(σ)).

We can then show the following two equivalences between the properties of the measure of athermality M_β(ρ,H) and the corresponding function M_β(ρ,σ).
* The measure M_β fulfills additivity <ref> and marginal-catalytic monotonicity <ref> if and only if M_β is super-additive <ref>, additive <ref>, and fulfills the data-processing inequality <ref>.

* The measure M_β fulfills additivity <ref> and correlated-catalytic monotonicity <ref> if and only if M_β is super-additive <ref>, additive <ref>, and fulfills the data-processing inequality <ref>.

The proof of this theorem, together with a more detailed set of implications between the properties of M_β and the corresponding function M_β, is provided in the appendix. The previous theorem simply tells us that any additive measure of athermality M_β that does not increase under marginal-catalytic operations (Definition <ref>) or under correlated-catalytic operations (Definition <ref>) is in one-to-one correspondence with a function M_β that is additive, super-additive, and fulfills the data-processing inequality. This has as a first consequence that the measure Δ F_β indeed fulfills Properties <ref>-<ref>. More importantly, using our re-formulation of Matsumoto's result in Theorem <ref>, we can show that Δ F_β is, up to a constant factor, the only measure of athermality that fulfills <ref>-<ref>. This is the content of our main result, which follows from Theorem <ref>.

[Uniqueness of monotones] Any monotone for marginal-catalytic transitions or correlated-catalytic transitions at environment temperature β that is additive and depends continuously on the density matrix is proportional to Δ F_β.

The implication of this result is that the free energy difference Δ F_β is the only bona fide quantifier of athermality under the most general set of free operations that do not create the resource.

§ DISCUSSION AND OUTLOOK

In this work, we have investigated the question of which properties uniquely determine the quantum relative entropy among all functions on pairs of quantum states.
Our re-formulation of Matsumoto's result highlights the role of super-additivity as a key property in the axiomatic derivation of the quantum relative entropy. The role of super-additivity in the arena of quantum thermodynamics has been shown to be related to the build-up of correlations between the system at hand and a catalyst, which in turn represents the components of the machine that come back to their initial state after the cyclic process. We have shown how the relative entropy and the non-equilibrium free energy uniquely emerge from considerations about how to treat catalysts and their correlations in the resource-theoretic approach to quantum thermodynamics. Usually, notions of relative entropy are employed to capture asymptotic, weakly correlated settings (the thermodynamic limit), that is, when acting on many uncorrelated copies of a system (see Ref. <cit.> for a recent discussion of asymptotic thermodynamics from the point of view of resource theories). Importantly, and in contrast, in our approach they emerge without having to invoke any thermodynamic limit, but rather follow from properties of monotones in the single-shot setting. However, they are precisely singled out by the fact that we disregard correlations in the setting of marginal-catalytic and correlated-catalytic free transitions. It thus seems that the crucial feature for the emergence of the free energy is the disregarding of correlations. Note that this fits well with how these quantities appear in notions of macroscopic thermodynamics: macroscopic equilibrium thermodynamics usually emerges in large systems which are well within thermodynamic phases. In such phases, correlations decay exponentially in space. Hence, the correlations of an object with its surroundings scale like its surface area and not like its volume.
Macroscopic objects are then essentially uncorrelated with other objects due to their small surface-to-volume ratio.

In this work, we have distinguished two ways of creating correlations with the catalyst: the marginal-catalytic one of Definition <ref> and the correlated-catalytic one of Definition <ref>. The first represents the situation where the components of the machine become correlated among themselves, while the second represents the case where the machine builds up correlations with the system upon which it induces a transition. Although these two sets both give rise to the free energy difference as a unique measure of athermality, we consider the latter a much more adequate set of operations to incorporate correlations in thermodynamics. The reason is that the correlations built up between the catalyst and the system do not prevent one from re-using the catalyst to implement a transition of the same kind on another system. It is an additional contribution of this work to flesh out this difference.

We end the discussion by posing an interesting open question: to characterize all the possible thermodynamic transitions that can be implemented with correlated catalysts. In Ref. <cit.> we have seen that the operations of Definition <ref> are indeed more powerful than those of Definition <ref>. At the same time, it has recently been shown in Ref. <cit.> that a variant of Definition <ref> allows one to extract work from passive states. The question remains whether all transitions that do not increase the free energy difference Δ F_β are possible, as they indeed are for the operations of Definition <ref>, as shown in Ref. <cit.>. If this is indeed true also for correlated catalysts, one would have found an interpretation of the free energy as a unique criterion for the second law of thermodynamics. If it is not true, then it is necessary to consider genuinely new monotones, which are not additive or not continuous.
Both options would be interesting from the perspective of the further development of quantum thermodynamics.

Acknowledgments: We acknowledge funding from the DFG (GA 2184/2-1), the BMBF, the EU (COST, AQuS), the ERC (TAQ), and the Studienstiftung des Deutschen Volkes.

[1] F. Hiai and D. Petz, in Asymptotic Theory of Quantum Statistical Inference (World Scientific, 2005), pp. 43–63.
[2] T. Ogawa and H. Nagaoka, in Asymptotic Theory of Quantum Statistical Inference (World Scientific, 2005), pp. 28–42.
[3] F. G. Brandao and M. B. Plenio, Commun. Math. Phys. 295, 791 (2010).
[4] V. Vedral, Rev. Mod. Phys. 74, 197 (2002).
[5] V. Vedral, M. B. Plenio, M. A. Rippin, and P. L. Knight, Phys. Rev. Lett. 78, 2275 (1997).
[6] F. G. S. L. Brandao, M. Horodecki, J. Oppenheim, J. M. Renes, and R. W. Spekkens, Phys. Rev. Lett. 111, 250404 (2013).
[7] F. G. S. L. Brandao and G. Gour, Phys. Rev. Lett. 115, 070503 (2015).
[8] M. Junge, R. Renner, D. Sutter, M. M. Wilde, and A. Winter, in 2016 IEEE International Symposium on Information Theory (ISIT) (IEEE, 2016), pp. 2494–2498.
[9] M. J. Kastoryano and J. Eisert, J. Math. Phys. 54, 102201 (2013).
[10] H. Bernigau, M. J. Kastoryano, and J. Eisert, J. Stat. Mech. (2015), P02008.
[11] J. Goold, M. Huber, A. Riera, L. del Rio, and P. Skrzypczyk, J. Phys. A 49, 143001 (2016).
[12] R. Gallego, J. Eisert, and H. Wilming, New J. Phys. 18, 103017 (2016).
[13] N. Lashkari and M. Van Raamsdonk, J. High Energy Phys. 2016 (2016).
[14] K. Matsumoto, "Reverse test and characterization of quantum relative entropy," arXiv:1010.1030 (2010).
[15] K. M. R. Audenaert and J. Eisert, J. Math. Phys. 46, 102104 (2005).
[16] M. Tomamichel, Quantum Information Processing with Finite Resources, SpringerBriefs in Mathematical Physics, Vol. 5 (Springer, 2016).
[17] D. Janzing, P. Wocjan, R. Zeier, R. Geiss, and T. Beth, Int. J. Theor. Phys. 39, 2717 (2000).
[18] M. Horodecki and J. Oppenheim, Nature Commun. 4, 2059 (2013).
[19] F. G. S. L. Brandao, M. Horodecki, N. H. Y. Ng, J. Oppenheim, and S. Wehner, PNAS 112, 3275 (2015).
[20] G. Gour, "Quantum resource theories in the single-shot regime," arXiv:1610.04247 (2016).
[21] F. Buscemi and G. Gour, "Quantum relative Lorenz curves," arXiv:1607.05735 (2016).
[22] M. Lostaglio, M. P. Müller, and M. Pastena, Phys. Rev. Lett. 115, 150402 (2015).
[23] N. H. Y. Ng, L. Mančinska, C. Cirstoiu, J. Eisert, and S. Wehner, New J. Phys. 17, 085004 (2015).
[24] C. Sparaciari, J. Oppenheim, and T. Fritz, "A resource theory for work and heat," arXiv:1607.01302 (2016).
[25] C. Sparaciari, D. Jennings, and J. Oppenheim, "Energetic instability of passive states in thermodynamics," arXiv:1701.01703 (2017).

§ GAUGE INVARIANCE OF M_β

Here, we show that any measure of athermality fulfilling Properties <ref>-<ref> is gauge-invariant, in the sense that M_β(ρ,H) = M_β(ρ,H+C) for all C ∈ ℝ. To see this, first note that since tracing out and adding a thermal ancilla are free transitions, M_β(ω_β,H, H) = 0 for any H. A simple calculation using additivity then also shows gauge invariance:

M_β(ρ, H+C) = M_β((ρ, H+C) ⊗ (ω_β,K, K))
= M_β(ρ ⊗ ω_β,K, H ⊗ 𝕀 + C 𝕀 ⊗ 𝕀 + 𝕀 ⊗ K)
= M_β((ρ,H) ⊗ (ω_β,K, K+C))
= M_β((ρ,H) ⊗ (ω_β,K+C, K+C))
= M_β(ρ,H),

where we made use of the gauge invariance of Gibbs states, ω_β,K+C = ω_β,K.

§ RANK-DECREASING QUANTUM CHANNELS

In this appendix we prove the validity of Lemma <ref> of the main text. We have to show that given a channel T: B(H) → B(H') and a full-rank state σ such that supp(T(σ)) ⊆ P, we also have supp(T(ρ)) ⊆ P for all states ρ. Here, P is an arbitrary subspace of the total Hilbert space H'. Let σ = ∑_i q_i |i⟩⟨i| be the eigen-decomposition of σ. Since T maps positive operators to positive operators, and the support of a sum of positive operators is the union of the supports of the operators, we conclude that T(|i⟩⟨i|) is supported in P for all i. We thus only need to show that also operators of the form T(|i⟩⟨j|) are supported on P. Now consider any density operator ρ = d + r, where d is the diagonal part of ρ (in the eigenbasis of σ) and r the rest. We know that tr(T(d)) = 1 since T is trace-preserving. Hence tr(T(r)) = 0. Let us now assume (to arrive at a contradiction) that T(r) has support within the subspace Q = 𝕀 - P, where, by slight abuse of notation, we also write P and Q for the projectors onto the respective subspaces.
Since T maps positive operators to positive operators,

0 ≤ Q T(ρ) Q = Q T(r) Q.

Thus we conclude on the one hand that Q T(r) Q ≥ 0. On the other hand, we know that

1 = tr(T(ρ)) ≥ tr(P T(ρ)) = 1 + tr(P T(r)).

Hence, tr(P T(r)) = 0. Since T is trace-preserving, we also have tr(Q T(r)) = -tr(P T(r)) = 0. Hence Q T(r) Q = 0, and also Q T(ρ) Q = 0. By positivity and Hermiticity of T(ρ), we also get P T(ρ) Q = 0 and Q T(ρ) P = 0. We thus conclude that T(ρ) = P T(ρ) P, which finishes the proof.

§ PROOF OF THEOREM <REF> AND OTHER EQUIVALENCES

We will show a more complete set of equivalences than those of Theorem <ref>, which corresponds simply to iii) and iv). The following properties are equivalent:

i) M_β fulfills monotonicity <ref> ⟺ M_β fulfills the data-processing inequality (DPI) (<ref>).

ii) M_β fulfills catalytic monotonicity <ref> and additivity <ref> ⟺ M_β fulfills additivity (<ref>) and the DPI (<ref>).

iii) M_β fulfills marginal-catalytic monotonicity <ref> and additivity <ref> ⟺ M_β fulfills super-additivity (<ref>), additivity (<ref>) and the DPI (<ref>).

iv) M_β fulfills correlated-catalytic monotonicity <ref> and additivity <ref> ⟺ M_β fulfills super-additivity (<ref>), additivity (<ref>) and the DPI (<ref>).

Let us first show i) (⇒). Let T be any given quantum channel. We have to show that M_β(T(ρ),T(σ)) ≤ M_β(ρ,σ). By the previous discussion, T(σ) can always be considered to be full-rank. Therefore, the Hamiltonian H_T(σ) exists, and the map (ρ, H_σ) ↦ (T(ρ), H_T(σ)) is automatically a GP map. We therefore obtain

M_β(T(ρ),T(σ)) = M_β(T(ρ), H_T(σ)) ≤ M_β(ρ, H_σ) = M_β(ρ,σ),

where the inequality is monotonicity <ref>. The direction i) (⇐) follows as

M_β(σ,K) = M_β(G_β^H(ρ), ω_β,G̅(H)) = M_β(G_β^H(ρ), G_β^H(ω_β,H)) ≤ M_β(ρ, ω_β,H) = M_β(ρ,H),

where we used (<ref>) and then the DPI (<ref>). The proof of ii) (⇒) is trivial given i), since <ref> ⇒ <ref> and it follows straightforwardly that (<ref>) ⇒ <ref>.
The proof of ii) (⇐) follows from noting that (ρ,H) c> (σ,K) implies that there exists G so that

G(ρ ⊗ γ, H + R) = (σ,K) ⊗ (γ,R).

Hence, we find that

M_β(σ, ω_β,K) + M_β(γ, ω_β,R) = M_β(σ ⊗ γ, ω_β,K ⊗ ω_β,R) ≤ M_β(ρ, ω_β,H) + M_β(γ, ω_β,R),

using additivity (<ref>) and the DPI (<ref>), which straightforwardly implies M_β(ρ,H) ≥ M_β(σ,K), that is, (<ref>).

Now we show iii) (⇒). Note that <ref> implies <ref>, since using a catalyst is a particular case of using a marginal catalyst. Together with the equivalences i) and ii), we thus only need to show super-additivity, Eq. (<ref>). This follows from the fact that

(ρ_1,2, H_1+H_2) mc> (ρ_1 ⊗ ρ_2, H_1+H_2).

To show this, let us choose as catalyst γ = ρ_1 ⊗ ρ_2. The GP map performing the transition is just a swap between the initial system and the catalyst. Hence, the final system is (σ,K) = (ρ_1 ⊗ ρ_2, H_1+H_2) and the final catalyst is γ̃ = ρ_1,2, which clearly fulfills the conditions of Definition <ref>, since the marginals of ρ_1,2 are precisely ρ_1 and ρ_2. To see iii) (⇐), we first note that (<ref>) and (<ref>) already imply <ref> and <ref>. It thus remains to show that adding (<ref>) also implies <ref>. This follows since super-additivity of M_β together with additivity implies

M_β(γ̃_A, R_A) = M_β(γ̃_A, ⊗_i ω_β,R^i) ≥ ∑_i M_β(γ^i, ω_β,R^i) = ∑_i M_β(γ^i, R^i) = M_β(γ_A, R_A),

where we used that the marginals of γ̃_A coincide with the γ^i.

Finally, let us turn to iv). Again, since correlated-catalytic transitions include catalytic transitions, the only non-trivial property left to show is that <ref> and <ref> also imply (<ref>). To see this, consider the initial object (ρ_1,2, H_1+H_2) together with the catalyst (ρ_2, H_2), and use a similar trick as for marginal-catalytic transitions. Since a swap between the second subsystem of the initial object and the catalyst is a Gibbs-preserving transition which leaves the catalyst correlated but otherwise unchanged, we know that M_β(ρ_1,2, H_1+H_2) ≥ M_β(ρ_1 ⊗ ρ_2, H_1+H_2). But then we have

M_β(ρ_1,2, ω_β,H_1 ⊗ ω_β,H_2) = M_β(ρ_1,2, H_1+H_2) ≥ M_β(ρ_1 ⊗ ρ_2, H_1+H_2) = M_β(ρ_1 ⊗ ρ_2, ω_β,H_1 ⊗ ω_β,H_2),

which completes the argument.
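To close, super-additivity (Property <ref>) can also be probed numerically. For a product second argument σ_1 ⊗ σ_2, the gap S(ρ_1,2||σ_1⊗σ_2) - S(ρ_1||σ_1) - S(ρ_2||σ_2) equals the mutual information of ρ_1,2, which the following sketch (our own illustration, not part of the original text) confirms for a classically correlated two-qubit state.

```python
import numpy as np

def matrix_log(m):
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.log(vals)) @ vecs.conj().T

def rel_ent(rho, sigma):
    """Quantum relative entropy S(rho||sigma), sigma full rank, in nats."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return np.sum(vals * np.log(vals)) - np.trace(rho @ matrix_log(sigma)).real

def partial_trace(rho, d1, d2, keep):
    """Trace out one factor of a (d1*d2)-dimensional bipartite state."""
    r = rho.reshape(d1, d2, d1, d2)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('ijil->jl', r)

# Classically correlated two-qubit state: 0.5(|00><00| + |11><11|)
rho12 = 0.5 * (np.kron(np.diag([1.0, 0.0]), np.diag([1.0, 0.0]))
             + np.kron(np.diag([0.0, 1.0]), np.diag([0.0, 1.0])))
sigma1 = sigma2 = np.eye(2) / 2

rho1 = partial_trace(rho12, 2, 2, 0)
rho2 = partial_trace(rho12, 2, 2, 1)

lhs = rel_ent(rho12, np.kron(sigma1, sigma2))
rhs = rel_ent(rho1, sigma1) + rel_ent(rho2, sigma2)
print(lhs >= rhs, round(lhs - rhs, 4))  # True 0.6931: the gap is the mutual information, log 2
```

Here the marginals are maximally mixed, so both single-party terms vanish and the gap is exactly the mutual information log 2 of the correlated state, illustrating why the inequality is strict precisely in the presence of correlations.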