The asymptotic volume of diagonal subpolytopes of symmetric stochastic matrices

J. de Jong, R. Wulkenhaar

arXiv:1701.07719 [math.CO]
The asymptotic volume of the polytope of symmetric stochastic matrices can be determined by asymptotic enumeration techniques, as in the case of the Birkhoff polytope. These methods can be extended to polytopes of symmetric stochastic matrices with a given diagonal, provided this diagonal does not vary too wildly. To this end, the asymptotic number of symmetric matrices with entries in the natural numbers, zero diagonal and varying row sums is determined.
Keywords: Asymptotic enumeration, Polytope volumes
MSC 2010: 05A16, 52B11
§ INTRODUCTION
Convex polytopes arise naturally in various places in mathematics. A fundamental problem is the computation of a polytope's volume. Some results are known for low-dimensional setups <cit.>, polytopes with only a few vertices, or highly symmetric cases <cit.>. This work belongs to the latter category.
A convex polytope P is the convex hull of a finite set S_P={v_j∈ℝ^n} of vertices.
Stochastic matrices are square matrices with nonnegative entries, such that every row of the matrix sums to one. The symmetric stochastic N× N-matrices are an example of a convex polytope. It will be denoted by 𝒫_N. Its vertices are given by the symmetric permutation matrices. There are ∑_j=0^⌊ N/2⌋\binom{N}{2j}(2j-1)!! such matrices. It follows directly from the Birkhoff–von Neumann theorem that all symmetric stochastic matrices are of this form. A basis for this space is given by
{I_N}∪{B^(jk)|1≤ j<k≤ N} ,
where I_N is the N× N identity matrix and the matrix elements of B^(jk) are given by
B^(jk)_lm={[ 1 , if {l,m}={j,k} ;; 1 , if j≠ l=m≠ k ;; 0 , otherwise ]. .
All these basis elements are linearly independent, and it follows that the polytope has dimension N(N-1)/2.
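The vertex count can be checked by brute force for small N: a permutation matrix is symmetric exactly when the underlying permutation is an involution. A minimal Python sketch, with helper names chosen here for illustration:

```python
from itertools import permutations
from math import comb

def double_factorial_odd(n):
    # (2j-1)!! = 1*3*5*...*(2j-1); empty product (= 1) for j = 0
    result = 1
    for k in range(1, n + 1, 2):
        result *= k
    return result

def count_by_formula(N):
    # sum_{j=0}^{floor(N/2)} binom(N, 2j) * (2j-1)!!
    return sum(comb(N, 2 * j) * double_factorial_odd(2 * j - 1)
               for j in range(N // 2 + 1))

def count_by_enumeration(N):
    # a permutation matrix is symmetric iff p is an involution
    return sum(1 for p in permutations(range(N))
               if all(p[p[i]] == i for i in range(N)))

for N in range(1, 8):
    assert count_by_formula(N) == count_by_enumeration(N)
print("vertex count confirmed for N = 1..7")
```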
A convex subpolytope P' of a convex polytope P is the convex hull of a finite set {v'_j∈P} of elements in P.
Slicing a polytope yields a cross-section, which is itself convex and, hence, a polytope. Determining its vertices is in general very difficult.
Spaces of symmetric stochastic matrices with several diagonal entries fixed are examples of such slice subpolytopes of 𝒫_N, provided that these entries lie between zero and one. The slice subpolytope of 𝒫_N, obtained by fixing all diagonal entries h_j∈[0,1], will be called the diagonal subpolytope P_N(h_1,…,h_N) here. This is a polytope of dimension N(N-3)/2. These polytopes form the main subject of this paper.
To keep the notation light, vectors of N elements are usually written with a bold symbol. The diagonal subpolytope with entries h_1,…,h_N will thus be written as P_N(h).
The main results are the following two theorems.
Let V_N(t;λ) be the number of symmetric N× N-matrices with zero diagonal and entries in the natural numbers such that t_j is the j-th row sum. Denote the total entry sum by x=∑_j=1^Nt_j and let λ be the average matrix entry
λ=x/N(N-1)>C/log N .
If for some ω∈(0,1/4)
lim_N→∞t_j-λ(N-1)/λ N^1/2+ω=0 for all j=1,…,N ,
then the number of such matrices is asymptotically (N→∞) given by
V_N(t;λ)=√(2)(1+λ)^N2/(2πλ(λ+1)N)^N/2(1+1/λ)^x/2exp[14λ^2+14λ-1/12λ(λ+1)]
×exp[-1/2λ(λ+1)N∑_m(t_m-λ(N-1))^2]exp[-1/λ(λ+1)N^2∑_m(t_m-λ(N-1))^2]
×exp[2λ+1/6λ^2(λ+1)^2N^2∑_m(t_m-λ(N-1))^3]exp[-3λ^2+3λ+1/12λ^3(λ+1)^3N^3∑_m(t_m-λ(N-1))^4]
×exp[1/4λ^2(λ+1)^2N^4(∑_m(t_m-λ(N-1))^2)^2]×(1+𝒪(N^-1/2+6ω)) .
Let h=(h_1,…,h_N) with h_j∈[0,1] and χ=∑_j=1^Nh_j. If for some ω∈(loglog N/2log N,1/4)
lim_N→∞N^1/2-ωN-1/N-χ·|h_j-χ/N|=0 for all j=1,…,N ,
then the asymptotic volume (N→∞) of the polytope of symmetric stochastic N× N-matrices with diagonal (h_1,…,h_N) is given by
(P_N(h))=√(2)e^7/6(e(N-χ)/N(N-1))^N2(N(N-1)^2/2π(N-χ)^2)^N/2exp[-N(N-1)^2/2(N-χ)^2∑_j(h_j-χ/N)^2]
×exp[-(N-1)^2/(N-χ)^2∑_j(h_j-χ/N)^2]exp[-N(N-1)^3/3(N-χ)^3∑_j(h_j-χ/N)^3]
×exp[-N(N-1)^4/4(N-χ)^4∑_j(h_j-χ/N)^4]exp[(N-1)^4/4(N-χ)^4(∑_j(h_j-χ/N)^2)^2]
×(1+𝒪(N^-1/2+6ω)) .
The outline of this paper is as follows. In Paragraph <ref> the volume problem is formulated as a counting problem and subsequently as a contour integral. Under the assumption that the integration region can be restricted, this integral is computed in Paragraph <ref>. Paragraph <ref> is dedicated to a fundamental lemma that actually restricts the integration region. The volume of the diagonal subpolytopes is extracted from the counting result in Paragraph <ref>.
§ COUNTING PROBLEM
The volume of a polytope P in ℝ^n with basis {ℬ_j∈ℝ^n|1≤ j≤ d} is obtained by
∫_[0,1]^du 1_P(∑_j=1^du_jℬ_j) ,
where 1_P is the indicator function for the polytope P. If the polytope is put on a lattice (aℤ)^n with lattice parameter a∈(0,1), an approximation of this volume is obtained by counting the lattice sites inside the polytope and multiplying this by the volume a^n of a single cell. This approximation becomes better as the lattice parameter shrinks. In the limit this yields
(P)=lim_a→ 0 a^n |{P∩ (aℤ)^n}| .
This approach is formalized by the Ehrhart polynomial <cit.>, which counts the number of lattice sites of ℤ^n in a dilated polytope. A dilation of a polytope P by a factor a^-1>1 yields the polytope a^-1P, which is the convex hull of the dilated vertices S_a^-1P={a^-1v|v∈ S_P}. That the obtained volume is the same follows from the observation
|{a^-1P∩ℤ^n}|=|{P∩ (aℤ)^n}| .
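As a toy illustration of this limit (on a hypothetical example, not the polytope studied here), the following sketch approximates the area of the triangle {x,y≥0, x+y≤1} in ℝ^2, whose exact volume is 1/2:

```python
# approximate vol(P) = lim_{a->0} a^n |P ∩ (aZ)^n| for the standard triangle
def lattice_volume_estimate(a):
    m = int(round(1 / a))
    inside = sum(1 for i in range(m + 1) for j in range(m + 1)
                 if i + j <= m)           # i.e. a*i + a*j <= 1
    return a * a * inside

for a in [0.1, 0.01, 0.001]:
    print(a, lattice_volume_estimate(a))  # tends to 1/2 as a -> 0
```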
The volume integral of the diagonal subpolytope P_N(h) is
(P_N(h))={∏_1≤ k<l≤ N∫_0^1 u_kl} 1_P_N(h)(I_N+∑_1≤ k<l≤ Nu_kl(B^(kl)-I_N)) .
To see that this integral covers the polytope, it suffices to see that any symmetric stochastic matrix A=(a_kl) decomposes in basis vectors as
A=(a_kl)=I_N+∑_1≤ k<l≤ Na_kl(B^(kl)-I_N) .
The next step is to introduce a lattice (aℤ)^\binom{N}{2} and count the sites inside the polytope. Each such site is a symmetric stochastic matrix with h_1,…,h_N on the diagonal.
Since the volume depends continuously on the extremal points, it can be assumed without loss of generality that all h_j are rational. This implies that a dilation factor a^-1 exists, such that all a^-1(1-h_j)=t_j∈ℕ and that the matrices that solve
([ 0 b_12 ⋯ b_1N; b_12 0 ⋯ b_2N; ⋮ ⋮ ⋱ ⋮; b_1N b_2N ⋯ 0 ])([ 1; 1; ⋮; 1 ])=([ t_1; t_2; ⋮; t_N ])
with t_j,b_jk∈ℕ are to be counted. This yields a number V_N(t). The polytope volume is then given by
(P_N(h))=lim_a→0 a^N(N-3)/2V_N(1-h_1/a,…,1-h_N/a) ,
where
V_N(t)=∮_𝒞 w_1/2π i w_1^1+t_1…∮_𝒞 w_N/2π i w_N^1+t_N ∏_1≤ k<l≤ N1/1-w_kw_l .
To see this, let the possible values m for the matrix element b_jk be given by the generating function
1/1-w_jw_k=∑_m=0^∞(w_jw_k)^m .
Applying this to all matrix entries shows that V_N(t) is given by the coefficient of the term w_1^t_1w_2^t_2… w_N^t_N in ∏_1≤ j<k≤ N1/1-w_jw_k. Formulating this in derivatives yields
V_N(t)=1/t_1!∂^t_1/∂ w_1^t_1|_w_1=0…1/t_N!∂^t_N/∂ w_N^t_N|_w_N=0 ∏_1≤ k<l≤ N1/1-w_kw_l .
By Cauchy's integral formula the number of matrices (<ref>) follows from this. The contour 𝒞 encircles the origin once in the positive direction, but not the pole at w_kw_l=1.
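For small N this identification is easily tested; the sketch below (a check added here, with our own helper names) counts the solutions of (<ref>) by brute force and extracts the same coefficient from truncated geometric series.

```python
from itertools import product

def count_by_enumeration(t):
    # symmetric zero-diagonal matrices over the naturals with row sums t,
    # enumerated through their upper-triangle entries b_kl
    N = len(t)
    pairs = [(k, l) for k in range(N) for l in range(k + 1, N)]
    hits = 0
    for entries in product(range(max(t) + 1), repeat=len(pairs)):
        rows = [0] * N
        for (k, l), b in zip(pairs, entries):
            rows[k] += b
            rows[l] += b
        if rows == list(t):
            hits += 1
    return hits

def count_by_coefficient(t):
    # coefficient of w_1^t_1 ... w_N^t_N in prod_{k<l} 1/(1 - w_k w_l),
    # built up by multiplying truncated series sum_m (w_k w_l)^m
    N = len(t)
    poly = {tuple([0] * N): 1}            # exponent vector -> coefficient
    for k in range(N):
        for l in range(k + 1, N):
            new = {}
            for expo, c in poly.items():
                m = 0
                while expo[k] + m <= t[k] and expo[l] + m <= t[l]:
                    e = list(expo); e[k] += m; e[l] += m
                    key = tuple(e)
                    new[key] = new.get(key, 0) + c
                    m += 1
            poly = new
    return poly.get(tuple(t), 0)

t = (2, 2, 2, 2)
print(count_by_enumeration(t), count_by_coefficient(t))  # equal
```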
The next step is to parametrize this contour explicitly and find a way to compute the integral for N→∞. This must be done in such a way that a combinatorial treatment is avoided. A convenient choice is
w_j=√(λ_j/λ_j+1)e^iφ_j , with λ_j∈ℝ_+ and φ_j∈[-π,π) .
Later a specific value for λ_j will be chosen.
The counting problem has now been turned into an integral over the N-dimensional torus
V_N(t)=(∏_j=1^N(1+1/λ_j)^t_j/2)(2π)^-N∫_𝕋^Nφ e^-i∑_j=1^Nφ_jt_j
×∏_1≤ k<l≤ N√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) 1/1-√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) (e^i(φ_k+φ_l)-1) ,
where we have written φ for φ_1…φ_N.
The notations
x=∑_jt_j=∑_j=1^Nt_j and ∑_k<l(φ_k+φ_l)=∑_1≤ k<l≤ N(φ_k+φ_l)
are used, when no doubt about N can exist. When no summation bounds are mentioned, these will always be 1 and N. The notation a≪ b indicates that a<b and a/b→0.
The main tool for these integrals will be the stationary phase method, also called the saddle-point method. In the form used in this paper, the exponential of a function f is integrated around its maximum x̃, so that
lim_Λ→∞∫ x e^Λ f(x)=lim_Λ→∞exp[Λ f(x̃)]∫ x exp[Λ f^(2)(x̃)/2(x-x̃)^2+Λ f^(3)(x̃)/6(x-x̃)^3]
×exp[Λ f^(4)(x̃)/24(x-x̃)^4]
=exp[Λ f(x̃)]√(-2π/Λ f^(2)(x̃))(1+5(f^(3)(x̃))^2/24Λ(-f^(2)(x̃))^3+f^(4)(x̃)/8Λ (f^(2)(x̃))^2+𝒪(Λ^-2)) .
Many counting problems can be computed asymptotically by the saddle-point method <cit.>. Often it is assumed that all t_j are equal, but we show that it suffices to demand that they do not deviate too much from this symmetric case.
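The form of the correction terms can be tried out on a one-dimensional toy integral. In the sketch below f(x)=-x^2/2-x^4/4, so f^(2)(0)=-1, f^(3)(0)=0 and f^(4)(0)=-6, and the expansion predicts ∫ x e^Λ f(x)≈√(2π/Λ)(1-3/(4Λ)):

```python
import math

def f(x):
    return -x * x / 2 - x ** 4 / 4

def integral(lam, half_width=8.0, steps=400_000):
    # plain Riemann sum; the integrand decays fast, so this is accurate
    h = 2 * half_width / steps
    return h * sum(math.exp(lam * f(-half_width + i * h))
                   for i in range(steps + 1))

for lam in [10, 50, 200]:
    predicted = math.sqrt(2 * math.pi / lam) * (1 - 3 / (4 * lam))
    print(lam, integral(lam) / predicted)   # ratio is 1 + O(lam^-2)
```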
§ INTEGRATING THE CENTRAL PART
The integrals in (<ref>) are too difficult to compute in full generality. A useful approximation can be obtained from the observation that the integrand
|1/1-μ(e^iy-1)|^2=1/1-2μ(μ+1)(cos(y)-1) for y∈(-2π,2π)
is concentrated in a neighbourhood of the origin and the antipode y=±2π, where it takes the value 1. This is plotted in Figure <ref>. For small y and μ y the absolute value of the integrand factor can be written as
|1/1-μ(e^iy-1)|=√(1/1+μ(μ+1)y^2)(1+𝒪(y^4)) .
It is concentrated in a small region around the origin and the antipode. The form of the region is assumed to be [-δ_N,δ_N]^N with
δ_N=N^-αζ_N/min_j{λ_j} ,
where α∈(0,1/2) and ζ_N tends slowly to infinity. In the remainder of this paragraph the integral inside this box will be computed.
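Both the exact absolute value and its small-y approximation are easy to confirm numerically; a short sketch (the value μ=1.7 is an arbitrary choice):

```python
import cmath, math

mu = 1.7
# exact identity: |1/(1 - mu(e^{iy}-1))|^2 = 1/(1 - 2 mu(mu+1)(cos y - 1))
for y in [0.3, 1.0, 2.5, 3.1]:
    lhs = abs(1 / (1 - mu * (cmath.exp(1j * y) - 1))) ** 2
    rhs = 1 / (1 - 2 * mu * (mu + 1) * (math.cos(y) - 1))
    assert abs(lhs - rhs) < 1e-12

# small-y form: |...| = (1 + mu(mu+1) y^2)^(-1/2) * (1 + O(y^4))
for y in [0.1, 0.03, 0.01]:
    exact = abs(1 / (1 - mu * (cmath.exp(1j * y) - 1)))
    approx = 1 / math.sqrt(1 + mu * (mu + 1) * y ** 2)
    print(y, (exact - approx) / y ** 4)    # bounded ratio: error is O(y^4)
```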
To this end, a lower bound is introduced. Below this threshold we do not strive for accuracy. The aim is thus to find the asymptotic number V_N(t) for configurations t, such that this number is larger than the Lower bound.
Lower bound
For N∈ℕ, α∈(0,1/2), t_j∈ℕ and λ_j∈ℝ_+ for j=1,…,N we define the Lower bound by
ℰ_α = (2πλ(λ+1)N)^-N/2(∏_j=1^N(1+1/λ_j)^t_j/2)(∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
×exp[14λ^2+14λ-1/12λ(λ+1)]exp[-N^1-2α] ,
where λ=N^-1∑_jλ_j.
The integral in [-δ_N,δ_N]^N can now be cast into a simpler form, where the size δ_N of this box can be used as an expansion parameter. The expansion used is
1/1-μ(exp[iy]-1)=exp[∑_j=1^kA_j(iy)^j]+𝒪(y^k+1(1+μ)^k+1) .
The coefficients A_j(μ) (or A_j if the argument is clear) are polynomials in μ of degree j. They are obtained as the polylogarithms
A_n(μ)=(-1)^n/n! Li_1-n(1+1/μ) .
The first four coefficients are
A_1=μ ; A_2=μ/2(μ+1) ; A_3=μ/6(μ+1)(2μ+1)
and A_4=μ/24(μ+1)(6μ^2+6μ+1) .
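These coefficients can be confirmed symbolically: the series of log(1/(1-μ(e^iy-1))) in powers of iy must reproduce A_1,…,A_4. A sympy sketch:

```python
import sympy as sp

y = sp.symbols('y')
mu = sp.symbols('mu', positive=True)

# expand log of the generating factor around y = 0
series = sp.series(-sp.log(1 - mu * (sp.exp(sp.I * y) - 1)),
                   y, 0, 5).removeO()
A = {1: mu,
     2: mu * (mu + 1) / 2,
     3: mu * (mu + 1) * (2 * mu + 1) / 6,
     4: mu * (mu + 1) * (6 * mu ** 2 + 6 * mu + 1) / 24}
for n in range(1, 5):
    coeff = series.coeff(y, n) / sp.I ** n   # coefficient of (i*y)^n
    assert sp.simplify(coeff - A[n]) == 0
print("A_1..A_4 confirmed")
```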
The value of the parameter μ in the above formulas can be approximated. Assuming that ε_k is small compared to λ and writing ε=max_kε_k, this is
√((λ+ε_k)(λ+ε_l))/√((λ+ε_k+1)(λ+ε_l+1))-√((λ+ε_k)(λ+ε_l))≈λ+ε_k+ε_l/2
-2λ+1/8λ(λ+1)(ε_k-ε_l)^2+2λ^2+2λ+1/16λ^2(λ+1)^2(ε_k^3-ε_k^2ε_l-ε_kε_l^2+ε_l^3)+𝒪(ε^4/λ^3) .
Applying this in combination with (<ref>) produces the combinations
∑_k<l(φ_k+φ_l)·(√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
=∑_j=1^Nφ_j[N-2/2λ_j+N/2λ-B_1(Nε_j^2+∑_mε_m^2)+C_1(Nε_j^3-ε_j∑_mε_m^2+∑_mε_m^3)]
×(1+𝒪(Nε^4/λ^4)) ;
∑_k<l(φ_k+φ_l)^2· A_2(√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
=[∑_j=1^Nφ_j^2((N-2)A_2+ε_jB_2(N-4)-(N-4)C_2ε_j^2-C_2∑_mε_m^2)
+∑_j=1^Nφ_j(A_2∑_mφ_m+2B_2ε_j∑_mφ_m-2C_2ε_j^2∑_mφ_m+D_2ε_j∑_mε_mφ_m)]
×(1+𝒪(Nε^3/λ^3)) ;
∑_k<l(φ_k+φ_l)^3· A_3(√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
=[∑_jφ_j^3A_3(N-4)+3A_3∑_jφ_j^2∑_mφ_m]×(1+𝒪(ε/λ)) and
∑_k<l(φ_k+φ_l)^4· A_4(√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
=[∑_jφ_j^4A_4(N-8)+4A_4∑_jφ_j^3∑_mφ_m+3A_4(∑_jφ_j^2)^2]×(1+𝒪(ε/λ)) .
Here we used the additional combinations
B_1 = 2λ+1/8λ(λ+1) ; C_1=2λ^2+2λ+1/16λ^2(λ+1)^2 ; B_2=2λ+1/4
C_2=2λ^2+2λ+1/16λ(λ+1) ; D_2=6λ^2+6λ+1/8λ(λ+1)
to simplify the notation.
The simplest way to compute this integral is to ensure that the linear part of the exponent is small.
We therefore split λ_j=λ+ε_j and choose the value
ε_j=2/N-2(t_j-λ(N-1)) .
Combined with the assumption that x=∑_jt_j=λ N(N-1), this implies that ∑_mε_m=0. Assuming furthermore that |t_j-λ(N-1)|≪λ N^1/2+ω, the error terms |ε/λ|≪ N^-1/2+ω follow.
The first step now is to focus on the integral inside the box [-δ_N,δ_N]^N, simplify and calculate this.
The estimates in Lemma <ref> cause the integral (<ref>) to depend non-trivially on λ. For that reason λ is explicitly mentioned as an argument.
Assume that K,N∈ℕ, ω,α∈ℝ_+ are chosen such that ω∈(0,log(Kα-2) + loglog N/4log N), α∈(0,1/4-ω) and K>2/α+1. Define
δ_N=N^-αζ_N/min{λ_j} ,
so that ζ_N→∞ and N^-δζ_N→0 for any δ>0, when N→∞. If x=∑_jt_j, the average matrix entry λ=x/N(N-1) and
lim_N→∞t_j-λ(N-1)/λ N^1/2+ω=0 for j=1,…,N ,
then the integral
V_N(t)=(∏_j=1^N(1+1/λ_j)^t_j/2)(2π)^-N∫_[-δ_N,δ_N]^Nφ e^-i∑_j=1^Nφ_jt_j
×∏_1≤ k<l≤ N√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) 1/1-√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) (e^i(φ_k+φ_l)-1)
is given by
V_N(t;λ)=2/(2π)^N(∏_j=1^N(1+1/λ_j)^t_j/2)·(∏_1≤ k<l≤ N√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
×∫_[-δ_N,δ_N]^Nφ exp[-i∑_jφ_jt_j]exp[∑_n=1^K-1i^n∑_k<lA_n(√(λkλ_l)/√((1+λk)(1+λ_l))-√(λkλ_l))·(φ_k+φ_l)^n]+𝒟 ,
up to a difference 𝒟 that satisfies
|𝒟|≤𝒪(N^2-Kα)√(2)(1+λ)^N2/(2πλ(λ+1)N)^N/2(1+1/λ)^x/2exp[10λ^2+10λ+1/4λ(λ+1)]
exp[-1/2λ(λ+1)N∑_m(t_m-λ(N-1))^2]exp[3/4λ^2(λ+1)^2N^2∑_m(t_m-λ(N-1))^2]
×exp[2λ+1/6λ^2(λ+1)^2N^2∑_m(t_m-λ(N-1))^3]exp[6λ^2+6λ+1/24λ^3(λ+1)^3N^3∑_m(t_m-λ(N-1))^4]
×exp[6λ^2+6λ+1/8λ^3(λ+1)^3N^4(∑_m(t_m-λ(N-1))^2)^2] .
To the fraction
(1-√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) (e^i(φ_k+φ_l)-1))^-1
in the integral (<ref>) the expansion (<ref>) in combination with (<ref>) and (<ref>) is applied. To prove that contributions in (<ref>) of K-th order or higher are irrelevant, we put these in the exponential exp[h(x)]. To estimate their contribution, the estimate
|∫ x e^f(x)(e^h(x)-1)|≤𝒪(sup_x |e^h(x)-1|)·∫ x |e^f(x)|
is applied to the integral. Taking the absolute value of the integrand sets the imaginary parts of the exponential to zero. In terms of (<ref>) and (<ref>) this means that A_3, B_1 and C_1 are set to zero. This integral is calculated in Lemma <ref>. Taking this result and setting these coefficients to zero completes the proof.
Assume that K,N∈ℕ, ω,α∈ℝ_+ are chosen such that ω∈(0,log(Kα-2) + loglog N/4log N), α∈(0,1/4-ω) and K>2/α+1. Define
δ_N=N^-αζ_N/min{λ_j} ,
so that ζ_N→∞ and N^-δζ_N→0 for any δ>0, when N→∞. If x=∑_jt_j, the average matrix entry λ=x/N(N-1)>C/log N and
lim_N→∞t_j-λ(N-1)/λ N^1/2+ω=0 for j=1,…,N ,
then the integral
V_N(t;λ)=2/(2π)^N(∏_j=1^N(1+1/λ_j)^t_j/2)·(∏_1≤ k<l≤ N√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
×∫_[-δ_N,δ_N]^Nφ exp[-i∑_jφ_jt_j]exp[∑_n=1^K-1i^n∑_k<lA_n(√(λkλ_l)/√((1+λk)(1+λ_l))-√(λkλ_l))·(φ_k+φ_l)^n]
is asymptotically (N→∞) given by
V_N(t;λ)=√(2)/(2πλ(λ+1)N)^N/2[∏_n(1+1/λ_n)^t_n/2][∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l)]
×exp[14λ^2+14λ-1/12λ(λ+1)]exp[∑_mε_m^2/16λ^2(λ+1)^2]exp[-(2λ+1)^2/128λ^3(λ+1)^3(∑_mε_m^2)^2]
×exp[-(2λ+1)^2N/128λ^3(λ+1)^3∑_mε_m^4]×(1+𝒪(N^-1/2+6ω+N^2+1/3C-Kαexp[N^4ω]))
=√(2)(1+λ)^N2/(2πλ(λ+1)N)^N/2(1+1/λ)^x/2exp[14λ^2+14λ-1/12λ(λ+1)]
exp[-1/2λ(λ+1)N∑_m(t_m-λ(N-1))^2]exp[-1/λ(λ+1)N^2∑_m(t_m-λ(N-1))^2]
×exp[2λ+1/6λ^2(λ+1)^2N^2∑_m(t_m-λ(N-1))^3]exp[-3λ^2+3λ+1/12λ^3(λ+1)^3N^3∑_m(t_m-λ(N-1))^4]
×exp[1/4λ^2(λ+1)^2N^4(∑_m(t_m-λ(N-1))^2)^2]×(1+𝒪(N^-1/2+6ω+N^2+1/3C-Kαexp[N^4ω])) .
This is much larger than the Lower bound from Definition <ref>
V_N(t;λ)/ℰ_α→∞ .
Define ε_j=2/N-2(t_j-λ(N-1)) and assume that |ε_j|≤λ N^-1/2+ω with 0<ω<1/14. It follows that ∑_jε_j=0.
To the integral V_N(t;λ) the expansion (<ref>) for k=4 in combination with (<ref>) and (<ref>) is applied. It will follow automatically that the higher orders (K>5) in this expansion will yield asymptotically irrelevant factors. This expansion produces the combinations (<ref>).
Introducing δ-functions for S_1=∑_mφ_m, S_2=∑_mφ_m^2, T_3=∑_mε_mφ_m and T_4=∑_mε_m^2φ_m through their Fourier representation yields the integral
V_N(t;λ)=2/(2π)^N[∏_n(1+1/λ_n)^t_n/2][∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l)]∫τ_1∫ S_1∫τ_3
×∫ T_3∫ T_4∫τ_4∫ S_2∫τ_2 exp[2π i(τ_1S_1+τ_2S_2+τ_3T_3+τ_4T_4)
-A_2S_1^2-2B_2S_1T_3+2C_2S_1T_4-2D_2T_3^2+3A_4S_2^2]
×{∏_j∫_-δ_N^δ_Nφ_j exp[iφ_j(-B_1Nε_j^2-B_1∑_mε_m^2-2πτ_1-3A_3S_2
+C_1Nε_j^3-C_1ε_j∑_mε_m^2+C_1∑_mε_m^3-2πτ_3ε_j)]
×exp[-φ_j^2(A_2(N-2)+B_2(N-4)ε_j-(N-4)C_2ε_j^2-C_2∑_mε_m^2+2π iτ_2)]
×exp[-iφ_j^3(A_3(N-4)+4iA_4S_1)]
×exp[φ_j^4(A_4(N-8))] } .
To ensure that the overall error consists of asymptotically irrelevant factors only, the φ_j-integral must be computed up to 𝒪(N^-1). Dividing the integration parameter φ_j by √(A_2(N-2)) shows that the φ_j-integral is of the form
1/√(A_2(N-2))∫_-δ_N√(A_2(N-2))^δ_N√(A_2(N-2))φ exp[iφ Q_1/√(A_2(N-2))-φ^2Q_2-iφ^3Q_3/(A_2(N-2))^3/2+φ^4Q_4/(A_2(N-2))^2]
=√(π/A_2(N-2))[Q_2+3iQ_3φ̃/(A_2(N-2))^3/2]^-1/2
×exp[iQ_1φ̃/√(A_2(N-2))-Q_2φ̃^2-iQ_3φ̃^3/(A_2(N-2))^3/2+Q_4φ̃^4/A_2^2(N-2)^2]
×{1-15Q_3^2/16(A_2(N-2))^3(Q_2+3iQ_3φ̃/(A_2(N-2))^3/2)^3+3Q_4/4A_2^2(N-2)^2(Q_2+3iQ_3φ̃/(A_2(N-2))^3/2)^2} ,
which is calculated by the saddle-point method (<ref>) around the maximum φ̃ of the integrand. Observing that Q_1=𝒪(N^2ω), Q_2=𝒪(1) and Q_3,4=𝒪(N), shows that
φ̃=iQ_1/2Q_2√(A_2(N-2))+𝒪(N^-3/2+4ω)=𝒪(N^-1/2+2ω)
is sufficient for the desired accuracy. This implies that
exp[iQ_1φ̃/√(A_2(N-2))-Q_2φ̃^2-iQ_3φ̃^3/(A_2(N-2))^3/2]=exp[-Q_1^2/4A_2(N-2)]×(1+𝒪(N^-2+6ω)) .
The terms in square and curly brackets are then rewritten using
1/√(1+y)≈ e^-y/2+y^2/4 and 1+z≈exp[z]
respectively. Using the same order of factors as in (<ref>), the result of the φ_j-integral is
√(π/A_2(N-2))exp[-B_2(N-4)ε_j/2A_2(N-2)+C_2(N-4)ε_j^2/2A_2(N-2)+C_2∑_mε_m^2/2A_2(N-2)-iπτ_2/A_2(N-2)
-3A_3B_1N(N-4)ε_j^2/4A_2^2(N-2)^2-3A_3B_1(N-4)∑_mε_m^2/4A_2^2(N-2)^2-3πτ_1A_3(N-4)/2A_2^2(N-2)^2-9A_3^2S_2(N-4)/4A_2^2(N-2)^2]
×exp[B_2^2(N-4)^2ε_j^2/4A_2^2(N-2)^2]exp[-B_1^2N^2ε_j^4/4A_2(N-2)-B_1^2Nε_j^2∑_mε_m^2/2A_2(N-2)
-πτ_1B_1Nε_j^2/A_2(N-2)-3A_3B_1S_2Nε_j^2/2A_2(N-2)-B_1^2(∑_mε_m^2)^2/4A_2(N-2)-πτ_1B_1∑_mε_m^2/A_2(N-2)-3A_3B_1S_2∑_mε_m^2/2A_2(N-2)
-π^2τ_1^2/A_2(N-2)-3πτ_1A_3S_2/A_2(N-2)-9A_3^2S_2^2/4A_2(N-2)]exp[-15A_3^2(N-4)^2/16A_2^3(N-2)^3]exp[3A_4(N-8)/4A_2^2(N-2)^2] .
Now integrating over τ_2 yields a delta function that assigns the value
S_2=N/2A_2(N-2) .
Doing the same for τ_3 and τ_4 yields T_3=0 and T_4=0. The S_1-integral is
∫ S_1 exp[2π iτ_1S_1-A_2S_1^2]=√(π/A_2)exp[-π^2τ_1^2/A_2] .
and the final integral
∫τ_1 exp[-2π^2(N-1)τ_1^2/A_2(N-2)-3πτ_1A_3N/A_2^2(N-2)-2πτ_1B_1N∑_mε_m^2/A_2(N-2)]
=√(A_2(N-2)/2π (N-1))exp[9A_3^2N^2/8A_2^3(N-1)(N-2)+B_1^2N^2(∑_mε_m^2)^2/2A_2(N-1)(N-2)+3A_3B_1N^2∑_mε_m^2/2A_2^2(N-1)(N-2)] .
Putting this all together yields
V_N(t;λ)=√(2)/(2πλ(λ+1)N)^N/2[∏_n(1+1/λ_n)^t_n/2][∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l)]
×exp[14λ^2+14λ-1/12λ(λ+1)]exp[∑_mε_m^2/16λ^2(λ+1)^2]exp[-(2λ+1)^2/128λ^3(λ+1)^3(∑_mε_m^2)^2]
×exp[-(2λ+1)^2N/128λ^3(λ+1)^3∑_mε_m^4] .
Comparing (<ref>) to the Lower bound ℰ_α, it is immediately clear that V_N(t;λ) is much larger. Expanding the products in square brackets around λ,
[∏_j(1+1/λ_j)^λ(N-1)+(N-2)ε_j/2/2][∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l)]
=(1+1/λ)^x/2(1+λ)^N2exp[2λ(N-1)+(N-2)ε_j/4log(1+λ+ε_j/1+λλ/λ+ε_j)]
×exp[-∑_k<llog(1+λ-√((1+λ-1+λ/1+λ+ε_k)(1+λ-1+λ/1+λ+ε_l)))]
=(1+1/λ)^x/2(1+λ)^N2
×exp[(∑_mε_m^2)·[Nλ^2/4λ^2(λ+1)^2+λ/4λ^2(λ+1)^2-(N-1)3λ^2+λ/8λ^2(λ+1)^2-1/8λ(λ+1)]
+(∑_mε_m^3)·[-N6λ^3+3λ^2+λ/24λ^3(λ+1)^3+N14λ^3+9λ^2+3λ/48λ^3(λ+1)^3]
+(∑_mε_m^4)·[N6λ^4+6λ^3+4λ^2+λ/24λ^4(λ+1)^4-N30λ^4+28λ^3+19λ^2+5λ/128λ^4(λ+1)^4]
+(∑_mε_m^2)^2·[6λ^2+6λ+1/128λ^3(λ+1)^3]×(1+𝒪(N^-1/2+5ω)) ,
combined with (<ref>), yields the desired result.
To determine the error from the difference 𝒟 from Lemma <ref>, we divide it by V_N(t;λ). Assuming that |t_j-λ(N-1)|=λ N^1/2+ω takes maximal values, it follows that the relative difference is at most
𝒪(N^2-Kα)exp[4λ^2+4λ+1/3λ(λ+1)]exp[4λ^2+4λ+3/4(λ+1)^2N^2ω]
×exp[(2λ+1)^2λ N^4ω/8(λ+1)^3]exp[(2λ+1)^2λ N^4ω/8(λ+1)^3] .
Only the first exponential can become large if λ is small. Assuming that λ > C/log(N), this factor adds an error N^1/3C.
To keep this relative error small, it is furthermore necessary that exp[N^4ω]≪ N^Kα-2. Solving this yields
0<ω<log(Kα-2)+log(log(N))/4log(N) .
Choosing the value of λ may seem arbitrary at first. It is not. Comparing (<ref>) to the Lower bound ℰ_1/2-r for some small r>0, the outcome is only much larger if
∑_mε_m^4≪ N^2r and ∑_mε_m^2≪ N^r .
It follows that λ N(N-1)=x in the limit. In <cit.> the number of matrices V_N(t;λ) has been calculated for the case that all t_j are equal. They require λ to be the average matrix entry for infinitely large matrices. Because Lemma <ref> covers this case too, the same value for λ had to be expected.
For any ω∈(0,loglog N/6log N) and α∈(0,1/4-ω), define
δ_N=N^-αζ_N/min{λ_j} ,
such that ζ_N→∞ and N^-δζ_N→0 for any δ>0. Assuming that x=∑_jt_j=λ N(N-1), |t_j-λ(N-1)|≪λ N^1/2+ω and λ>C/log(N), the integral
V_N(t;λ)=2/(2π)^N(∏_j=1^N(1+1/λ_j)^t_j/2)·(∏_1≤ k<l≤ N√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
×∫_[-δ_N,δ_N]^Nφ exp[-i∑_jφ_jt_j]exp[∑_n=1^K-1i^n∑_k<lA_n(√(λkλ_l)/√((1+λk)(1+λ_l))-√(λkλ_l))·(φ_k+φ_l)^n]
is then for any ⌊2/α⌋+1≤ K≤log(N) given by
V_N(t;λ)=√(2)(1+λ)^N2/(2πλ(λ+1)N)^N/2(1+1/λ)^x/2exp[14λ^2+14λ-1/12λ(λ+1)]
exp[-1/2λ(λ+1)N∑_m(t_m-λ(N-1))^2]
×exp[-64λ^6-192λ^5-160λ^4+40λ^2+8λ+1/64λ^4(λ+1)^4N^2∑_m(t_m-λ(N-1))^2]
×exp[2λ+1/6λ^2(λ+1)^2N^2∑_m(t_m-λ(N-1))^3]exp[-3λ^2+3λ+1/12λ^3(λ+1)^3N^3∑_m(t_m-λ(N-1))^4]
×exp[6λ^2+6λ+1/8λ^3(λ+1)^3N^4(∑_m(t_m-λ(N-1))^2)^2]
×exp[-8λ^4+16λ^3-8λ-1/48λ^5(λ+1)^5N^6(∑_m(t_m-λ(N-1))^2)^3]
×(1+𝒪(N^-1/2+6ω+N^2+1/3C-Kα)) .
where the difference 𝒟 from Lemma <ref> has been absorbed into the error term.
Define ε_j=2/N-2(t_j-λ(N-1)) and assume that |ε_j|≤λ N^-1/2+ω with 0<ω<1/14. It follows that
|ε_j/λ|≪ N^-1/2+ω and ∑_jε_j=0 .
To the integral V_N(t;λ) the expansion (<ref>) for k=4 in combination with (<ref>) and (<ref>) is applied. It will follow automatically that the higher orders in this expansion will yield smaller contributions. This expansion produces the combinations (<ref>) and (<ref>). Introducing δ-functions for S_1=∑_mφ_m, S_2=∑_mφ_m^2, T_3=∑_mε_mφ_m, T_4=∑_mε_m^2φ_m and T_5=∑_mε_mφ_m^2 through their Fourier representation yields the integral
V_N(t;λ)=2/(2π)^N[∏_n(1+1/λ_n)^t_n/2][∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l)]∫τ_1∫ S_1∫τ_3
×∫ T_3∫τ_4∫ T_4∫ S_2∫τ_2∫ T_5∫τ_5 exp[2π i(τ_1S_1+τ_2S_2+τ_3T_3+τ_4T_4+τ_5T_5)
-A_2S_1^2-2B_2S_1T_3-2C_2T_3^2+3A_4S_2^2-D_2S_2∑_mε_m^3-E_2S_2∑_mε_m^4-F_2T_5∑_mε_m^3-G_2T_4^2
+6B_4T_5S_2+3D_4T_5^2]
×{∏_j∫_-δ_N^δ_Nφ_j exp[iφ_j(-B_1Nε_j^2-B_1∑_mε_m^2-2πτ_1-3B_3T_5-3A_3S_2
+C_1Nε_j^3-C_1ε_j∑_mε_m^2+C_1∑_mε_m^3-2πτ_3ε_j-3D_3ε_jT_5-2iD_2T_4ε_j-3B_3S_2ε_j
-D_1(N-2)ε_j^4-D_1∑_mε_m^4+E_1ε_j∑_mε_m^3-2πτ_4ε_j^2+3C_3S_2ε_j^2+F_1ε_j^2∑_mε_m^2
-2iC_2S_1ε_j^2-2E_1ε_j^4-F_1ε_j^4+2iD_2S_1ε_j^3+2iE_2S_1ε_j^4+2iF_2T_3ε_j^3)]
×exp[-φ_j^2(A_2(N-2)+B_2(N-4)ε_j-NC_2ε_j^2-C_2∑_mε_m^2+2π iτ_2+ND_2ε_j^3
+2π iτ_5ε_j-D_2ε_j∑_mε_m^2+E_2(N-4)ε_j^4+G_2ε_j^2∑_mε_m^2-3iC_3S_1ε_j^2-6C_4S_2ε_j^2
×-4F_2ε_j^4-2G_2ε_j^4)]
×exp[-iφ_j^3(A_3(N-4)+B_3(N-8)ε_j-C_3(N-8)ε_j^2-C_3∑_mε_m^2
+4iA_4S_1+4iB_4T_3+4iC_4T_4+4iD_4T_3ε_j+4iB_4S_1ε_j-4D_3ε_j^2+4iC_4S_1ε_j^2)]
×exp[φ_j^4(A_4(N-8)+B_4(N-16)ε_j+C_4(N-16)ε_j^2+C_4∑_mε_m^2-8D_4ε_j^2)] } .
To ensure that the overall error is of order 𝒪(N^-1), the φ_j-integral must be computed up to 𝒪(N^-2). Dividing the integration parameter φ_j by √(A_2(N-2)) shows that the φ_j-integral is of the form
1/√(A_2(N-2))∫_-δ_N√(A_2(N-2))^δ_N√(A_2(N-2))φ exp[iφ Q_1/√(A_2(N-2))-φ^2Q_2-iφ^3Q_3/(A_2(N-2))^3/2+φ^4Q_4/(A_2(N-2))^2]
=√(π/A_2(N-2))[Q_2+3iQ_3φ̃/(A_2(N-2))^3/2]^-1/2exp[iQ_1φ̃/√(A_2(N-2))-Q_2φ̃^2-iQ_3φ̃^3/(A_2(N-2))^3/2]
×{1-15Q_3^2/16(A_2(N-2))^3(Q_2+3iQ_3φ̃/(A_2(N-2))^3/2)^3+3Q_4/4A_2^2(N-2)^2(Q_2+3iQ_3φ̃/(A_2(N-2))^3/2)^2} ,
which is calculated by the saddle-point method (<ref>) around the maximum φ̃ of the integrand. Observing that Q_1=𝒪(N^2ω), Q_2=𝒪(1) and Q_3,4=𝒪(N), shows that
φ̃=iQ_1/2Q_2√(A_2(N-2))+3iQ_1^2Q_3/8Q_2^3(A_2(N-2))^5/2+𝒪(N^-5/2+6ω)=𝒪(N^-1/2+2ω)
is sufficient for the desired accuracy. This implies that
exp[iQ_1φ̃/√(A_2(N-2))-Q_2φ̃^2-iQ_3φ̃^3/(A_2(N-2))^3/2]=exp[-Q_1^2/4A_2(N-2)-Q_3Q_1^3/8Q_2^3(A_2(N-2))^3]×(1+𝒪(N^-2)) .
The terms in square and curly brackets are then rewritten using
1/√(1+y)=e^-y/2+y^2/4-y^3/6+y^4/8×(1+𝒪(N^-2)) and 1+z=exp[z]×(1+𝒪(N^-2))
respectively.
It is a tedious but straightforward exercise to show that[It serves the calculations and checks to write this formula down.]
φ̃=-iB_1Nε_j^2/2√(A_2(N-2))-iB_1∑_mε_m^2/2√(A_2(N-2))-π iτ_1/√(A_2(N-2))-3iB_3T_5/2√(A_2(N-2))-3iA_3S_2/2√(A_2(N-2))
+iC_1Nε_j^3/2√(A_2(N-2))-iC_1ε_j∑_mε_m^2/2√(A_2(N-2))+iC_1∑_mε_m^3/2√(A_2(N-2))-iπτ_3ε_j/√(A_2(N-2))
-3iD_3T_5ε_j/2√(A_2(N-2))+D_2T_4ε_j/√(A_2(N-2))-3iB_3S_2ε_j/2√(A_2(N-2))-iD_1(N-2)ε_j^4/2√(A_2(N-2))-iD_1∑_mε_m^4/2√(A_2(N-2))
+iE_1ε_j∑_mε_m^3/2√(A_2(N-2))-π iτ_4ε_j^2/√(A_2(N-2))+3iC_3S_2ε_j^2/2√(A_2(N-2))+iF_1ε_j^2∑_mε_m^2/2√(A_2(N-2))+C_2S_1ε_j^2/√(A_2(N-2))
+iB_1B_2N(N-4)ε_j^3/2(A_2(N-2))^3/2+iB_1B_2(N-4)ε_j∑_mε_m^2/2(A_2(N-2))^3/2+iπτ_1B_2(N-4)ε_j/2(A_2(N-2))^3/2
+3iB_2B_3T_5(N-4)ε_j/2(A_2(N-2))^3/2+3iA_3B_2S_2(N-4)ε_j/2(A_2(N-2))^3/2-iB_2C_1N(N-4)ε_j^4/2(A_2(N-2))^3/2
+iB_2C_1(N-4)ε_j^2∑_mε_m^2/2(A_2(N-2))^3/2-iB_2C_1(N-4)ε_j∑_mε_m^3/2(A_2(N-2))^3/2+iπτ_3B_2(N-4)ε_j^2/(A_2(N-2))^3/2
+3iB_2D_3T_5(N-4)ε_j^2/2(A_2(N-2))^3/2-B_2D_2T_4(N-4)ε_j^2/(A_2(N-2))^3/2+3iB_2B_3S_2(N-4)ε_j^2/2(A_2(N-2))^3/2
-iB_1C_2N^2ε_j^4/2(A_2(N-2))^3/2-iB_1C_2Nε_j^2∑_mε_m^2/2(A_2(N-2))^3/2-iπτ_1NC_2ε_j^2/(A_2(N-2))^3/2-3iB_3C_2T_5Nε_j^2/2(A_2(N-2))^3/2
-3iA_3C_2S_2Nε_j^2/2(A_2(N-2))^3/2-iB_1C_2Nε_j^2∑_mε_m^2/2(A_2(N-2))^3/2-iB_1C_2(∑_mε_m^2)^2/2(A_2(N-2))^3/2-iπτ_1C_2∑_mε_m^2/(A_2(N-2))^3/2
-3iB_3C_2T_5∑_mε_m^2/2(A_2(N-2))^3/2-3iA_3C_2S_2∑_mε_m^2/2(A_2(N-2))^3/2-πτ_2B_1Nε_j^2/(A_2(N-2))^3/2-πτ_2B_1∑_mε_m^2/(A_2(N-2))^3/2
-2π^2τ_1τ_2/(A_2(N-2))^3/2-3πτ_2B_3T_5/(A_2(N-2))^3/2-3πτ_2A_3S_2/(A_2(N-2))^3/2
-iB_1B_2^2N(N-4)^2ε_j^4/2(A_2(N-2))^5/2-iB_1B_2^2(N-4)^2ε_j^2∑_mε_m^2/2(A_2(N-2))^5/2-iπτ_1B_2^2(N-4)^2ε_j^2/(A_2(N-2))^5/2
-3iB_2^2B_3T_5(N-4)^2ε_j^2/2(A_2(N-2))^5/2-3iA_3B_2^2S_2(N-4)^2ε_j^2/2(A_2(N-2))^5/2+3iA_3B_1^2N^2(N-4)ε_j^4/8(A_2(N-2))^5/2
+3iA_3B_1^2N(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^5/2+3iπτ_1A_3B_1N(N-4)ε_j^2/2(A_2(N-2))^5/2+9iA_3B_1B_3T_5N(N-4)ε_j^2/4(A_2(N-2))^5/2
+9iA_3^2B_1S_2N(N-4)ε_j^2/4(A_2(N-2))^5/2+3iA_3B_1^2(N-4)(∑_mε_m^2)^2/8(A_2(N-2))^5/2+3iπτ_1A_3B_1(N-4)∑_mε_m^2/2(A_2(N-2))^5/2
+9iA_3B_1B_3T_5(N-4)∑_mε_m^2/4(A_2(N-2))^5/2+9iA_3^2B_1S_2(N-4)∑_mε_m^2/4(A_2(N-2))^5/2+3iπ^2τ_1^2A_3(N-4)/2(A_2(N-2))^5/2
+9iπτ_1A_3B_3T_5(N-4)/2(A_2(N-2))^5/2+9iπτ_1A_3^2S_2(N-4)/2(A_2(N-2))^5/2+27iA_3B_3^2T_5^2(N-4)/8(A_2(N-2))^5/2
+27iA_3^2B_3S_2T_5(N-4)/4(A_2(N-2))^5/2+27iA_3^3S_2^2(N-4)/8(A_2(N-2))^5/2+𝒪(N^-3/2-2ω) .
It is furthermore helpful to expand[It serves the calculations and checks to write this formula down.]
Q_2+3Q_3iφ̃/(A_2(N-2))^3/2=1+B_2(N-4)ε_j/A_2(N-2)-C_2Nε_j^2/A_2(N-2)-C_2∑_mε_m^2/A_2(N-2)
+2iπτ_2/A_2(N-2)+D_2Nε_j^3/A_2(N-2)+2iπτ_5ε_j/A_2(N-2)-D_2ε_j∑_mε_m^2/A_2(N-2)+E_2(N-4)ε_j^4/A_2(N-2)
+G_2ε_j^2∑_mε_m^2/A_2(N-2)-3iC_3S_1ε_j^2/A_2(N-2)-6C_4S_2ε_j^2/A_2(N-2)
+3A_3B_1N(N-4)ε_j^2/2(A_2(N-2))^2
+3A_3B_1(N-4)∑_mε_m^2/2(A_2(N-2))^2+3πτ_1A_3(N-4)/(A_2(N-2))^2+9A_3B_3T_5(N-4)/2(A_2(N-2))^2
+9A_3^2S_2(N-4)/2(A_2(N-2))^2-3A_3C_1N(N-4)ε_j^3/2(A_2(N-2))^2+3A_3C_1(N-4)ε_j∑_mε_m^2/2(A_2(N-2))^2
-3A_3C_1(N-4)∑_mε_m^3/2(A_2(N-2))^2+3πτ_3A_3(N-4)ε_j/(A_2(N-2))^2+9A_3D_3T_5(N-4)ε_j/2(A_2(N-2))^2
+3iA_3D_2T_4(N-4)ε_j/(A_2(N-2))^2+9A_3B_3S_2(N-4)ε_j/2(A_2(N-2))^2+3A_3D_1(N-2)(N-4)ε_j^4/2(A_2(N-2))^2
+3A_3D_1(N-4)∑_mε_m^4/2(A_2(N-2))^2-3A_3E_1(N-4)ε_j∑_mε_m^3/2(A_2(N-2))^2+3πτ_4A_3(N-4)ε_j^2/(A_2(N-2))^2
-9A_3C_3S_2(N-4)ε_j^2/2(A_2(N-2))^2-3A_3F_1(N-4)ε_j^2∑_mε_m^2/2(A_2(N-2))^2+3iA_3C_2S_1(N-4)ε_j^2/(A_2(N-2))^2
-3A_3B_1B_2N(N-4)^2ε_j^3/2(A_2(N-2))^3-3A_3B_1B_2(N-4)^2ε_j∑_mε_m^2/2(A_2(N-2))^3-3πτ_1A_3B_2(N-4)^2ε_j/2(A_2(N-2))^3
-9A_3B_2B_3T_5(N-4)^2ε_j/2(A_2(N-2))^3-9A_3^2B_2S_2(N-4)^2ε_j/2(A_2(N-2))^3+3A_3B_2C_1N(N-4)^2ε_j^4/2(A_2(N-2))^3
-3A_3B_2C_1(N-4)^2ε_j^2∑_mε_m^2/2(A_2(N-2))^3+3A_3B_2C_1(N-4)^2ε_j∑_mε_m^3/2(A_2(N-2))^3-3πτ_3A_3B_2(N-4)^2ε_j^2/2(A_2(N-2))^3
-9A_3B_2D_3T_5(N-4)^2ε_j^2/2(A_2(N-2))^3-3iA_3B_2D_2T_4(N-4)^2ε_j^2/(A_2(N-2))^3-9A_3B_2B_3S_2(N-4)^2ε_j^2/2(A_2(N-2))^3
+3A_3B_1C_2N^2(N-4)ε_j^4/2(A_2(N-2))^3+3A_3B_1C_2N(N-4)ε_j^2∑_mε_m^2/2(A_2(N-2))^3+3πτ_1A_3C_2N(N-4)ε_j^2/(A_2(N-2))^3
+9A_3B_3C_2T_5N(N-4)ε_j^2/2(A_2(N-2))^3+9A_3^2C_2S_2N(N-4)ε_j^2/2(A_2(N-2))^3+3A_3B_1C_2N(N-4)ε_j^2∑_mε_m^2/2(A_2(N-2))^3
+3A_3B_1C_2(N-4)(∑_mε_m^2)^2/2(A_2(N-2))^3+3πτ_1A_3C_2(N-4)∑_mε_m^2/(A_2(N-2))^3+9A_3B_3C_2T_5(N-4)∑_mε_m^2/2(A_2(N-2))^3
+9A_3^2C_2S_2(N-4)∑_mε_m^2/2(A_2(N-2))^3-3iπτ_2A_3B_1N(N-4)ε_j^2/(A_2(N-2))^3-3iπτ_2A_3B_1(N-4)∑_mε_m^2/(A_2(N-2))^3
+3A_3B_1B_2^2N(N-4)^3ε_j^4/2(A_2(N-2))^4+3A_3B_1B_2^2(N-4)^3ε_j^2∑_mε_m^2/2(A_2(N-2))^4+3πτ_1A_3B_2^2(N-4)^3ε_j^2/(A_2(N-2))^4
+9A_3B_2^2B_3T_5(N-4)^3ε_j^2/2(A_2(N-2))^4+9A_3^2B_2^2S_2(N-4)^3ε_j^2/2(A_2(N-2))^4-9A_3^2B_1^2N^2(N-4)^2ε_j^4/8(A_2(N-2))^4
-9A_3^2B_1^2N(N-4)^2ε_j^2∑_mε_m^2/4(A_2(N-2))^4-9πτ_1A_3^2B_1N(N-4)^2ε_j^2/2(A_2(N-2))^4-27A_3^2B_1B_3T_5N(N-4)^2ε_j^2/4(A_2(N-2))^4
-27A_3^3B_1S_2N(N-4)^2ε_j^2/4(A_2(N-2))^4-9A_3^2B_1^2(N-4)^2(∑_mε_m^2)^2/8(A_2(N-2))^4-9πτ_1A_3^2B_1(N-4)^2∑_mε_m^2/2(A_2(N-2))^4
-27A_3^2B_1B_3T_5(N-4)^2∑_mε_m^2/4(A_2(N-2))^4-27A_3^3B_1S_2(N-4)^2∑_mε_m^2/4(A_2(N-2))^4+3B_1B_3N(N-8)ε_j^3/2(A_2(N-2))^2
+3B_1B_3(N-8)ε_j∑_mε_m^2/2(A_2(N-2))^2+3πτ_1B_3(N-8)ε_j/(A_2(N-2))^2+9B_3^2T_5(N-8)ε_j/2(A_2(N-2))^2
+9A_3B_3S_2(N-8)ε_j/2(A_2(N-2))^2-3B_3C_1N(N-8)ε_j^4/2(A_2(N-2))^2+3B_3C_1(N-8)ε_j^2∑_mε_m^2/2(A_2(N-2))^2
-3B_3C_1(N-8)ε_j∑_mε_m^3/2(A_2(N-2))^2+3πτ_3B_3(N-8)ε_j^2/(A_2(N-2))^2+9B_3D_3T_5(N-8)ε_j^2/2(A_2(N-2))^2
+3iB_3D_2T_4(N-8)ε_j^2/(A_2(N-2))^2+9B_3^2S_2(N-8)ε_j^2/2(A_2(N-2))^2-3B_1B_2B_3N(N-4)(N-8)ε_j^4/2(A_2(N-2))^3
-3B_1B_2B_3(N-4)(N-8)ε_j^2∑_mε_m^2/2(A_2(N-2))^3-3πτ_1B_2B_3(N-4)(N-8)ε_j^2/2(A_2(N-2))^3
-9B_2B_3^2T_5(N-4)(N-8)ε_j^2/2(A_2(N-2))^3-9A_3B_2B_3S_2(N-4)(N-8)ε_j^2/2(A_2(N-2))^3-3B_1C_3N(N-8)ε_j^4/2(A_2(N-2))^2
-3B_1C_3(N-8)ε_j^2∑_mε_m^2/2(A_2(N-2))^2-3πτ_1C_3(N-8)ε_j^2/(A_2(N-2))^2-9B_3C_3T_5(N-8)ε_j^2/2(A_2(N-2))^2
-9A_3C_3S_2(N-8)ε_j^2/2(A_2(N-2))^2-3B_1C_3Nε_j^2∑_mε_m^2/2(A_2(N-2))^2-3B_1C_3(∑_mε_m^2)^2/2(A_2(N-2))^2
-3πτ_1C_3∑_mε_m^2/(A_2(N-2))^2-9B_3C_3T_5∑_mε_m^2/2(A_2(N-2))^2-9A_3C_3S_2∑_mε_m^2/2(A_2(N-2))^2
+6iA_4B_1S_1Nε_j^2/(A_2(N-2))^2+6iA_4B_1S_1∑_mε_m^2/(A_2(N-2))^2+6iB_1B_4T_3Nε_j^2/(A_2(N-2))^2+6iB_1B_4T_3∑_mε_m^2/(A_2(N-2))^2
+6iB_1C_4T_4Nε_j^2/(A_2(N-2))^2+6iB_1C_4T_4∑_mε_m^2/(A_2(N-2))^2+𝒪(N^-2) .
Writing out (<ref>), where the order of factors is preserved, yields
√(π/A_2(N-2))exp[-B_2(N-4)ε_j/2A_2(N-2)+C_2Nε_j^2/2A_2(N-2)+C_2∑_mε_m^2/2A_2(N-2)
-iπτ_2/A_2(N-2)-D_2Nε_j^3/2A_2(N-2)-iπτ_5ε_j/A_2(N-2)+D_2ε_j∑_mε_m^2/2A_2(N-2)-E_2(N-4)ε_j^4/2A_2(N-2)
-G_2ε_j^2∑_mε_m^2/2A_2(N-2)+3iC_3S_1ε_j^2/2A_2(N-2)+3C_4S_2ε_j^2/A_2(N-2)
-3A_3B_1N(N-4)ε_j^2/4(A_2(N-2))^2
-3A_3B_1(N-4)∑_mε_m^2/4(A_2(N-2))^2-3πτ_1A_3(N-4)/2(A_2(N-2))^2-9A_3B_3T_5(N-4)/4(A_2(N-2))^2
-9A_3^2S_2(N-4)/4(A_2(N-2))^2+3A_3C_1N(N-4)ε_j^3/4(A_2(N-2))^2-3A_3C_1(N-4)ε_j∑_mε_m^2/4(A_2(N-2))^2
+3A_3C_1(N-4)∑_mε_m^3/4(A_2(N-2))^2-3πτ_3A_3(N-4)ε_j/2(A_2(N-2))^2-9A_3D_3T_5(N-4)ε_j/4(A_2(N-2))^2
-3iA_3D_2T_4(N-4)ε_j/2(A_2(N-2))^2-9A_3B_3S_2(N-4)ε_j/4(A_2(N-2))^2-3A_3D_1(N-2)(N-4)ε_j^4/4(A_2(N-2))^2
-3A_3D_1(N-4)∑_mε_m^4/4(A_2(N-2))^2+3A_3E_1(N-4)ε_j∑_mε_m^2/4(A_2(N-2))^2-3πτ_4A_3(N-4)ε_j^2/2(A_2(N-2))^2
+9A_3C_3S_2(N-4)ε_j^2/4(A_2(N-2))^2+3A_3F_1(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^2-3iA_3C_2S_1(N-4)ε_j^2/2(A_2(N-2))^2
+3A_3B_1B_2N(N-4)^2ε_j^3/4(A_2(N-2))^3+3A_3B_1B_2(N-4)^2ε_j∑_mε_m^2/4(A_2(N-2))^3+3πτ_1A_3B_2(N-4)^2ε_j/4(A_2(N-2))^3
+9A_3B_2B_3T_5(N-4)^2ε_j/4(A_2(N-2))^3+9A_3^2B_2S_2(N-4)^2ε_j/4(A_2(N-2))^3-3iA_3B_2^2T_3(N-4)^2ε_j/2(A_2(N-2))^3
-3A_3B_2C_1N(N-4)^2ε_j^4/4(A_2(N-2))^3+3A_3B_2C_1(N-4)^2ε_j^2∑_mε_m^2/4(A_2(N-2))^3
-3A_3B_2C_1(N-4)^2ε_j∑_mε_m^3/4(A_2(N-2))^3+3πτ_3A_3B_2(N-4)^2ε_j^2/4(A_2(N-2))^3+9A_3B_2D_3T_5(N-4)^2ε_j^2/4(A_2(N-2))^3
+3iA_3B_2D_2T_4(N-4)^2ε_j^2/2(A_2(N-2))^3+9A_3B_2B_3S_2(N-4)^2ε_j^2/4(A_2(N-2))^3-3A_3B_1C_2N^2(N-4)ε_j^4/4(A_2(N-2))^3
-3A_3B_1C_2N(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^3-3πτ_1A_3C_2N(N-4)ε_j^2/2(A_2(N-2))^3-9A_3B_3C_2T_5N(N-4)ε_j^2/4(A_2(N-2))^3
-9A_3^2C_2S_2N(N-4)ε_j^2/4(A_2(N-2))^3-3A_3B_1C_2N(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^3-3A_3B_1C_2(N-4)(∑_mε_m^2)^2/4(A_2(N-2))^3
-3πτ_1A_3C_2(N-4)∑_mε_m^2/2(A_2(N-2))^3-9A_3B_3C_2T_5(N-4)∑_mε_m^2/4(A_2(N-2))^3-9A_3^2C_2S_2(N-4)∑_mε_m^2/4(A_2(N-2))^3
+3iπτ_2A_3B_1N(N-4)ε_j^2/2(A_2(N-2))^3+3iπτ_2A_3B_1(N-4)∑_mε_m^2/2(A_2(N-2))^3-3A_3B_1B_2^2N(N-4)^3ε_j^4/4(A_2(N-2))^4
-3A_3B_1B_2^2(N-4)^3ε_j^2∑_mε_m^2/4(A_2(N-2))^4-3πτ_1A_3B_2^2(N-4)^3ε_j^2/2(A_2(N-2))^4-9A_3B_2^2B_3T_5(N-4)^3ε_j^2/4(A_2(N-2))^4
-9A_3^2B_2^2S_2(N-4)^3ε_j^2/4(A_2(N-2))^4+9A_3^2B_1^2N^2(N-4)^2ε_j^4/16(A_2(N-2))^4+9A_3^2B_1^2N(N-4)^2ε_j^2∑_mε_m^2/8(A_2(N-2))^4
+9πτ_1A_3^2B_1N(N-4)^2ε_j^2/4(A_2(N-2))^4+27A_3^2B_1B_3T_5N(N-4)^2ε_j^2/8(A_2(N-2))^4+27A_3^3B_1S_2N(N-4)^2ε_j^2/8(A_2(N-2))^4
+9A_3^2B_1^2(N-4)^2(∑_mε_m^2)^2/16(A_2(N-2))^4+9πτ_1A_3^2B_1(N-4)^2∑_mε_m^2/4(A_2(N-2))^4+27A_3^2B_1B_3T_5(N-4)^2∑_mε_m^2/8(A_2(N-2))^4
+27A_3^3B_1S_2(N-4)^2∑_mε_m^2/8(A_2(N-2))^4-3B_1B_3N(N-8)ε_j^3/4(A_2(N-2))^2-3B_1B_3(N-8)ε_j∑_mε_m^2/4(A_2(N-2))^2
-3πτ_1B_3(N-8)ε_j/2(A_2(N-2))^2-9B_3^2T_5(N-8)ε_j/4(A_2(N-2))^2-9A_3B_3S_2(N-8)ε_j/4(A_2(N-2))^2+3B_3C_1N(N-8)ε_j^4/4(A_2(N-2))^2
-3B_3C_1(N-8)ε_j^2∑_mε_m^2/4(A_2(N-2))^2+3B_3C_1(N-8)ε_j∑_mε_m^3/4(A_2(N-2))^2-3πτ_3B_3(N-8)ε_j^2/2(A_2(N-2))^2
-9B_3D_3T_5(N-8)ε_j^2/4(A_2(N-2))^2-3iB_3D_2T_4(N-8)ε_j^2/2(A_2(N-2))^2-9B_3^2S_2(N-8)ε_j^2/4(A_2(N-2))^2
+3B_1B_2B_3N(N-4)(N-8)ε_j^4/4(A_2(N-2))^3+3B_1B_2B_3(N-4)(N-8)ε_j^2∑_mε_m^2/4(A_2(N-2))^3
+3πτ_1B_2B_3(N-4)(N-8)ε_j^2/4(A_2(N-2))^3+9B_2B_3^2T_5(N-4)(N-8)ε_j^2/4(A_2(N-2))^3
+9A_3B_2B_3S_2(N-4)(N-8)ε_j^2/4(A_2(N-2))^3+3B_1C_3N(N-8)ε_j^4/4(A_2(N-2))^2+3B_1C_3(N-8)ε_j^2∑_mε_m^2/4(A_2(N-2))^2
+3πτ_1C_3(N-8)ε_j^2/2(A_2(N-2))^2+9B_3C_3T_5(N-8)ε_j^2/4(A_2(N-2))^2+9A_3C_3S_2(N-8)ε_j^2/4(A_2(N-2))^2
+3B_1C_3Nε_j^2∑_mε_m^2/4(A_2(N-2))^2+3B_1C_3(∑_mε_m^2)^2/4(A_2(N-2))^2+3πτ_1C_3∑_mε_m^2/2(A_2(N-2))^2+9B_3C_3T_5∑_mε_m^2/4(A_2(N-2))^2
+9A_3C_3S_2∑_mε_m^2/4(A_2(N-2))^2-3iA_4B_1S_1Nε_j^2/(A_2(N-2))^2-3iA_4B_1S_1∑_mε_m^2/(A_2(N-2))^2-3iB_1B_4T_3Nε_j^2/(A_2(N-2))^2
-3iB_1B_4T_3∑_mε_m^2/(A_2(N-2))^2-3iB_1C_4T_4Nε_j^2/(A_2(N-2))^2-3iB_1C_4T_4∑_mε_m^2/(A_2(N-2))^2]
×exp[B_2^2(N-4)^2ε_j^2/4(A_2(N-2))^2-B_2C_2N(N-4)ε_j^3/2(A_2(N-2))^2-B_2C_2(N-4)ε_j∑_mε_m^2/2(A_2(N-2))^2+iπτ_2B_2(N-4)ε_j/(A_2(N-2))^2
+B_2D_2N(N-4)ε_j^4/2(A_2(N-2))^2+iπτ_5B_2(N-4)ε_j^2/(A_2(N-2))^2-B_2D_2(N-4)ε_j^2∑_mε_m^2/2(A_2(N-2))^2
+3A_3B_1B_2N(N-4)^2ε_j^3/4(A_2(N-2))^3+3A_3B_1B_2(N-4)^2ε_j∑_mε_m^2/4(A_2(N-2))^3+3πτ_1A_3B_2(N-4)^2ε_j/2(A_2(N-2))^3
+9A_3B_2B_3T_5(N-4)^2ε_j/4(A_2(N-2))^3+9A_3^2B_2S_2(N-4)^2ε_j/4(A_2(N-2))^3-3A_3B_2C_1N(N-4)^2ε_j^4/4(A_2(N-2))^3
+3A_3B_2C_1(N-4)^2ε_j^2∑_mε_m^2/4(A_2(N-2))^3-3A_3B_2C_1(N-4)^2ε_j∑_mε_m^3/4(A_2(N-2))^3+3πτ_3A_3B_2(N-4)^2ε_j^2/2(A_2(N-2))^3
+9A_3B_2D_3T_5(N-4)^2ε_j^2/4(A_2(N-2))^3+3iA_3B_2D_2T_4(N-4)^2ε_j^2/2(A_2(N-2))^3+9A_3B_2B_3S_2(N-4)^2ε_j^2/4(A_2(N-2))^3
-3A_3B_1B_2^2N(N-4)^3ε_j^4/4(A_2(N-2))^4-3A_3B_1B_2^2(N-4)^3ε_j^2∑_mε_m^2/4(A_2(N-2))^4-3πτ_1A_3B_2^2(N-4)^3ε_j^2/4(A_2(N-2))^4
-9A_3B_2^2B_3T_5(N-4)^3ε_j^2/4(A_2(N-2))^4-9A_3^2B_2^2S_2(N-4)^3ε_j^2/4(A_2(N-2))^4+3B_1B_2B_3N(N-4)(N-8)ε_j^4/4(A_2(N-2))^3
+3B_1B_2B_3(N-4)(N-8)ε_j^2∑_mε_m^2/4(A_2(N-2))^3+3πτ_1B_2B_3(N-4)(N-8)ε_j^2/2(A_2(N-2))^3
+9B_2B_3^2T_5(N-4)(N-8)ε_j^2/4(A_2(N-2))^3+9A_3B_2B_3S_2(N-4)(N-8)ε_j^2/4(A_2(N-2))^3+C_2^2N^2ε_j^4/4(A_2(N-2))^2
+C_2^2Nε_j^2∑_mε_m^2/2(A_2(N-2))^2-iπτ_2C_2Nε_j^2/(A_2(N-2))^2-3A_3B_1C_2N^2(N-4)ε_j^4/4(A_2(N-2))^3
-3A_3B_1C_2N(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^3-3πτ_1A_3C_2N(N-4)ε_j^2/2(A_2(N-2))^3-9A_3B_3C_2T_5N(N-4)ε_j^2/4(A_2(N-2))^3
-9A_3^2C_2S_2N(N-4)ε_j^2/4(A_2(N-2))^3+C_2^2(∑_mε_m^2)^2/4(A_2(N-2))^2-iπτ_2C_2∑_mε_m^2/(A_2(N-2))^2
-3A_3B_1C_2N(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^3-3A_3B_1C_2(N-4)(∑_mε_m^2)^2/4(A_2(N-2))^3
-3πτ_1A_3C_2(N-4)∑_mε_m^2/2(A_2(N-2))^3-9A_3B_3C_2T_5(N-4)∑_mε_m^2/4(A_2(N-2))^3-9A_3^2C_2S_2(N-4)∑_mε_m^2/4(A_2(N-2))^3
+3iπτ_2A_3B_1N(N-4)ε_j^2/2(A_2(N-2))^3+3iπτ_2A_3B_1(N-4)∑_mε_m^2/2(A_2(N-2))^3+9A_3^2B_1^2N^2(N-4)^2ε_j^4/16(A_2(N-2))^4
+9A_3^2B_1^2N(N-4)^2ε_j^2∑_mε_m^2/8(A_2(N-2))^4+9πτ_1A_3^2B_1N(N-4)^2ε_j^2/4(A_2(N-2))^4+27A_3^2B_1B_3T_5N(N-4)^2ε_j^2/8(A_2(N-2))^4
+27A_3^3B_1S_2N(N-4)^2ε_j^2/8(A_2(N-2))^4+9A_3^2B_1^2(N-4)^2(∑_mε_m^2)^2/16(A_2(N-2))^4+9πτ_1A_3^2B_1(N-4)^2∑_mε_m^2/4(A_2(N-2))^4
+27A_3^2B_1B_3T_5(N-4)^2∑_mε_m^2/8(A_2(N-2))^4+27A_3^3B_1S_2(N-4)^2∑_mε_m^2/8(A_2(N-2))^4]
×exp[-B_2^3(N-4)^3ε_j^3/6(A_2(N-2))^3+B_2^2C_2N(N-4)^2ε_j^4/2(A_2(N-2))^3+B_2^2C_2(N-4)^2ε_j^2∑_mε_m^2/2(A_2(N-2))^3
-iπτ_2B_2^2(N-4)^2ε_j^2/(A_2(N-2))^3-3A_3B_1B_2^2N(N-4)^3ε_j^4/4(A_2(N-2))^4-3A_3B_1B_2^2(N-4)^3ε_j^2∑_mε_m^2/4(A_2(N-2))^4
-3πτ_1A_3B_2^2(N-4)^3ε_j^2/2(A_2(N-2))^4-9A_3B_2^2B_3T_5(N-4)^3ε_j^2/4(A_2(N-2))^4-9A_3^2B_2^2S_2(N-4)^3ε_j^2/4(A_2(N-2))^4]
exp[B_2^4(N-4)^4ε_j^4/8A_2^4(N-2)^4]×exp[-B_1^2N^2ε_j^4/4A_2(N-2)-B_1^2Nε_j^2∑_mε_m^2/2A_2(N-2)
-πτ_1B_1Nε_j^2/A_2(N-2)-3B_1B_3T_5Nε_j^2/2A_2(N-2)-3A_3B_1S_2Nε_j^2/2A_2(N-2)
+B_1C_1N^2ε_j^5/2A_2(N-2)-B_1C_1Nε_j^3∑_mε_m^2/2A_2(N-2)+B_1C_1Nε_j^2∑_mε_m^3/2A_2(N-2)-πτ_3B_1Nε_j^3/A_2(N-2)
-3B_1D_3T_5Nε_j^3/2A_2(N-2)-iB_1D_2T_4Nε_j^3/A_2(N-2)-3B_1B_3S_2Nε_j^3/2A_2(N-2)-B_1D_1N(N-2)ε_j^6/2A_2(N-2)
-B_1D_1Nε_j^2∑_mε_m^4/2A_2(N-2)+B_1E_1Nε_j^3∑_mε_m^3/2A_2(N-2)-πτ_4B_1Nε_j^4/A_2(N-2)+3B_1C_3S_2Nε_j^4/2A_2(N-2)
+B_1F_1Nε_j^4∑_mε_m^2/2A_2(N-2)-iB_1C_2S_1Nε_j^4/A_2(N-2)-B_1^2(∑_mε_m^2)^2/4A_2(N-2)-πτ_1B_1∑_mε_m^2/A_2(N-2)
-3B_1B_3T_5∑_mε_m^2/2A_2(N-2)-3A_3B_1S_2∑_mε_m^2/2A_2(N-2)+B_1C_1Nε_j^3∑_mε_m^2/2A_2(N-2)
-B_1C_1ε_j(∑_mε_m^2)^2/2A_2(N-2)+B_1C_1∑_mε_m^2∑_nε_n^3/2A_2(N-2)-πτ_3B_1ε_j∑_mε_m^2/A_2(N-2)
-3B_1D_3T_5ε_j∑_mε_m^2/2A_2(N-2)-iB_1D_2T_4ε_j∑_mε_m^2/A_2(N-2)-3B_1B_3S_2ε_j∑_mε_m^2/2A_2(N-2)
-B_1D_1(N-2)ε_j^4∑_mε_m^2/2A_2(N-2)-B_1D_1∑_mε_m^2∑_nε_n^4/2A_2(N-2)+B_1E_1ε_j∑_mε_m^2∑_nε_n^3/2A_2(N-2)
-πτ_4B_1ε_j^2∑_mε_m^2/A_2(N-2)+3B_1C_3S_2ε_j^2∑_mε_m^2/2A_2(N-2)+B_1F_1ε_j^2(∑_mε_m^2)^2/2A_2(N-2)
-iB_1C_2S_1ε_j^2∑_mε_m^2/A_2(N-2)-π^2τ_1^2/A_2(N-2)-3πτ_1B_3T_5/A_2(N-2)-3πτ_1A_3S_2/A_2(N-2)
+πτ_1C_1Nε_j^3/A_2(N-2)-πτ_1C_1ε_j∑_mε_m^2/A_2(N-2)+πτ_1C_1∑_mε_m^3/A_2(N-2)-2π^2τ_1τ_3ε_j/A_2(N-2)-3πτ_1D_3T_5ε_j/A_2(N-2)
-2iπτ_1D_2T_4ε_j/A_2(N-2)-3πτ_1B_3S_2ε_j/A_2(N-2)-πτ_1D_1(N-2)ε_j^4/A_2(N-2)-πτ_1D_1∑_mε_m^4/A_2(N-2)
+πτ_1E_1ε_j∑_mε_m^3/A_2(N-2)-2π^2τ_1τ_4ε_j^2/A_2(N-2)+3πτ_1C_3S_2ε_j^2/A_2(N-2)
+πτ_1F_1ε_j^2∑_mε_m^2/A_2(N-2)-2iπτ_1C_2S_1ε_j^2/A_2(N-2)-9B_3^2T_5^2/4A_2(N-2)-9A_3B_3S_2T_5/2A_2(N-2)
+3B_3C_1T_5Nε_j^3/2A_2(N-2)-3B_3C_1T_5ε_j∑_mε_m^2/2A_2(N-2)+3B_3C_1T_5∑_mε_m^3/2A_2(N-2)-3πτ_3B_3T_5ε_j/A_2(N-2)
-9B_3D_3T_5^2ε_j/2A_2(N-2)-3iB_3D_2T_4T_5ε_j/A_2(N-2)-9B_3^2S_2T_5ε_j/2A_2(N-2)-3B_3D_1T_5(N-2)ε_j^4/2A_2(N-2)
-3B_3D_1T_5∑_mε_m^4/2A_2(N-2)+3B_3E_1T_5ε_j∑_mε_m^3/2A_2(N-2)-3πτ_4B_3T_5ε_j^2/A_2(N-2)+9B_3C_3S_2T_5ε_j^2/2A_2(N-2)
+3B_3F_1T_5ε_j^2∑_mε_m^2/2A_2(N-2)-3iB_3C_2S_1T_5ε_j^2/A_2(N-2)-9A_3^2S_2^2/4A_2(N-2)
+3A_3C_1S_2Nε_j^3/2A_2(N-2)-3A_3C_1S_2ε_j∑_mε_m^2/2A_2(N-2)+3A_3C_1S_2∑_mε_m^3/2A_2(N-2)-3πτ_3A_3S_2ε_j/A_2(N-2)
-9A_3D_3S_2T_5ε_j/2A_2(N-2)-3iA_3D_2S_2T_4ε_j/A_2(N-2)-9A_3B_3S_2^2ε_j/2A_2(N-2)-3A_3D_1S_2(N-2)ε_j^4/2A_2(N-2)
-3A_3D_1S_2∑_mε_m^4/2A_2(N-2)+3A_3E_1S_2ε_j∑_mε_m^3/2A_2(N-2)-3πτ_4A_3S_2ε_j^2/A_2(N-2)+9A_3C_3S_2^2ε_j^2/2A_2(N-2)
+3A_3F_1S_2ε_j^2∑_mε_m^2/2A_2(N-2)-3iA_3C_2S_1S_2ε_j^2/A_2(N-2)-C_1^2N^2ε_j^6/4A_2(N-2)+C_1^2Nε_j^4∑_mε_m^2/2A_2(N-2)
-C_1^2Nε_j^3∑_mε_m^3/2A_2(N-2)+πτ_3C_1Nε_j^4/A_2(N-2)+3C_1D_3T_5Nε_j^4/2A_2(N-2)+iC_1D_2T_4Nε_j^4/A_2(N-2)
+3B_3C_1S_2Nε_j^4/2A_2(N-2)-C_1^2ε_j^2(∑_mε_m^2)^2/4A_2(N-2)+C_1^2ε_j∑_mε_m^2∑_nε_n^3/2A_2(N-2)-πτ_3C_1ε_j^2∑_mε_m^2/A_2(N-2)
-3C_1D_3T_5ε_j^2∑_mε_m^2/2A_2(N-2)-iC_1D_2T_4ε_j^2∑_mε_m^2/A_2(N-2)-3B_3C_1S_2ε_j^2∑_mε_m^2/2A_2(N-2)-C_1^2(∑_mε_m^3)^2/4A_2(N-2)
+πτ_3C_1ε_j∑_mε_m^3/A_2(N-2)+3C_1D_3T_5ε_j∑_mε_m^3/2A_2(N-2)+iC_1D_2T_4ε_j∑_mε_m^3/A_2(N-2)
+3B_3C_1S_2ε_j∑_mε_m^3/2A_2(N-2)-π^2τ_3^2ε_j^2/A_2(N-2)-3πτ_3D_3T_5ε_j^2/A_2(N-2)
-2iπτ_3D_2T_4ε_j^2/A_2(N-2)-3πτ_3B_3S_2ε_j^2/A_2(N-2)-9D_3^2T_5^2ε_j^2/4A_2(N-2)-3iD_2D_3T_4T_5ε_j^2/A_2(N-2)
-9B_3D_3S_2T_5ε_j^2/2A_2(N-2)+D_2^2T_4^2ε_j^2/A_2(N-2)-3iB_3D_2S_2T_4ε_j^2/A_2(N-2)-9B_3^2S_2^2ε_j^2/4A_2(N-2)
+A_3B_1^3N^3(N-4)ε_j^6/8(A_2(N-2))^3+3A_3B_1^3N^2(N-4)ε_j^4∑_mε_m^2/8(A_2(N-2))^3+3πτ_1A_3B_1^2N^2(N-4)ε_j^4/4(A_2(N-2))^3
+9A_3B_1^2B_3T_5N^2(N-4)ε_j^4/8(A_2(N-2))^3+9A_3^2B_1^2S_2N^2(N-4)ε_j^2/8(A_2(N-2))^3
+3A_3B_1^3N(N-4)ε_j^2(∑_mε_m^2)^2/8(A_2(N-2))^3+3πτ_1A_3B_1^2N(N-4)ε_j^2∑_mε_m^2/2(A_2(N-2))^3
+9A_3B_1^2B_3T_5N(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^3+9A_3^2B_1^2S_2N(N-4)ε_j^2∑_mε_m^2/4(A_2(N-2))^3
+3π^2τ_1^2A_3B_1N(N-4)ε_j^2/2(A_2(N-2))^3+9πτ_1A_3B_1B_3T_5N(N-4)ε_j^2/2(A_2(N-2))^3
+9πτ_1A_3^2B_1S_2N(N-4)ε_j^2/2(A_2(N-2))^3+27A_3B_1B_3^2T_5^2N(N-4)ε_j^2/8(A_2(N-2))^3
+27A_3^2B_1B_3S_2T_5N(N-4)ε_j^2/4(A_2(N-2))^3+27A_3^3B_1S_2^2N(N-4)ε_j^2/8(A_2(N-2))^3
+A_3B_1^3(N-4)(∑_mε_m^2)^3/8(A_2(N-2))^3+3πτ_1A_3B_1^2(N-4)(∑_mε_m^2)^2/4(A_2(N-2))^3
+9A_3B_1^2B_3T_5(N-4)(∑_mε_m^2)^2/8(A_2(N-2))^3+9A_3^2B_1^2S_2(N-4)(∑_mε_m^2)^2/8(A_2(N-2))^3
+3π^2τ_1^2A_3B_1(N-4)∑_mε_m^2/2(A_2(N-2))^3+9πτ_1A_3B_1B_3T_5(N-4)∑_mε_m^2/2(A_2(N-2))^3
+9πτ_1A_3^2B_1S_2(N-4)∑_mε_m^2/2(A_2(N-2))^3+27A_3B_1B_3^2T_5^2(N-4)∑_mε_m^2/8(A_2(N-2))^3
+27A_3^2B_1B_3S_2T_5(N-4)∑_mε_m^2/4(A_2(N-2))^3+27A_3^3B_1S_2^2(N-4)∑_mε_m^2/8(A_2(N-2))^3]
×exp[-15A_3^2(N-4)^2/16(A_2(N-2))^3+45A_3^2B_2(N-4)^3ε_j/16(A_2(N-2))^4-45A_3^2C_2N(N-4)^2ε_j^2/16(A_2(N-2))^4
-45A_3^2C_2(N-4)^2∑_mε_m^2/16(A_2(N-2))^4+135A_3^3B_1N(N-4)^3ε_j^2/32(A_2(N-2))^5+135A_3^3B_1(N-4)^3∑_mε_m^2/32(A_2(N-2))^5
-45A_3^2B_2^2(N-4)^4ε_j^2/8(A_2(N-2))^5-15A_3B_3(N-4)(N-8)ε_j/8(A_2(N-2))^3
+45A_3B_2B_3(N-4)^2(N-8)ε_j^2/8(A_2(N-2))^4+15A_3C_3(N-4)(N-8)ε_j^2/8(A_2(N-2))^3
+15A_3C_3(N-4)∑_mε_m^2/8(A_2(N-2))^3-15B_3^2(N-8)^2ε_j^2/16(A_2(N-2))^3]
×exp[3A_4(N-8)/4(A_2(N-2))^2-3A_4B_2(N-4)(N-8)ε_j/2(A_2(N-2))^3+3A_4C_2N(N-8)ε_j^2/2(A_2(N-2))^3
+3A_4C_2(N-8)∑_mε_m^2/2(A_2(N-2))^3-9A_3A_4B_1N(N-4)(N-8)ε_j^2/4(A_2(N-2))^4
-9A_3A_4B_1(N-4)(N-8)∑_mε_m^2/4(A_2(N-2))^4+9A_4B_2^2(N-4)^2(N-8)ε_j^2/4(A_2(N-2))^4
+3B_4(N-16)ε_j/4(A_2(N-2))^2-3B_2B_4(N-4)(N-16)ε_j^2/2(A_2(N-2))^3+3C_4(N-16)ε_j^2/4(A_2(N-2))^2
+3C_4∑_mε_m^2/4(A_2(N-2))^2]×(1+𝒪(N^-2)) .
The product of N such exponentials yields various sums. These expressions simplify somewhat because ∑_mε_m=0. Integrating τ_5 yields a delta function, from which it follows that
T_5=-B_2(N-4)∑_mε_m^2/2A_2^2(N-2)^2+𝒪(N^-1) .
This means that T_5=𝒪(N^-1+2ω), so that many of the T_5-terms may be neglected. Using the same strategy it follows that the τ_2-integral yields a delta function, so that
S_2=N/2A_2(N-2)
+2A_2C_2N(N-2)+B_2^2(N-4)^2-6A_3B_1N(N-4)/2A_2^3(N-2)^3∑_mε_m^2+𝒪(N^-1) .
Also the τ_4-integral yields a delta function, so that
T_4=-3iA_3(N-4)∑_mε_m^2/4A_2^2(N-2)^2-iB_1N∑_mε_m^4/2A_2(N-2)-iB_1(∑_mε_m^2)^2/2A_2(N-2)
-iπτ_1∑_mε_m^2/A_2(N-2)-3iA_3S_2∑_mε_m^2/2A_2(N-2)+𝒪(N^-1) ,
which implies that all T_4-dependence may be neglected. Alternatively, the T_4-integral and afterwards the τ_4-integral may be computed using the saddle point method (<ref>), yielding the same result. Using this for the T_3-integral yields
∫ T_3 exp[2π iτ_3T_3-2B_2S_1T_3-2C_2T_3^2-3iB_1B_4T_3N∑_mε_m^2/A_2^2(N-2)^2]
=√(π/2C_2)exp[-π^2τ_3^2/2C_2+B_2^2S_1^2/2C_2-iπτ_3B_2S_1/C_2+3πτ_3B_1B_4N∑_mε_m^2/2C_2A_2^2(N-2)^2
+3iB_1B_2B_4S_1N∑_mε_m^2/2C_2A_2^2(N-2)^2]×(1+𝒪(N^-1)) .
The τ_3-integral is very similar, although a bit longer. We obtain
∫τ_3 exp[-π^2τ_3^2/2C_2-iπτ_3B_2S_1/C_2+3πτ_3B_1B_4N∑_mε_m^2/2C_2A_2^2(N-2)^2+9πτ_3A_3B_2(N-4)^2∑_mε_m^2/4A_2^3(N-2)^3
-3πτ_3B_3(N-8)∑_mε_m^2/2A_2^2(N-2)^2-πτ_3B_1N∑_mε_m^3/A_2(N-2)+πτ_3C_1N∑_mε_m^4/A_2(N-2)-πτ_3C_1(∑_mε_m^2)^2/A_2(N-2)
-π^2τ_3^2∑_mε_m^2/A_2(N-2)-3πτ_3B_3S_2∑_mε_m^2/A_2(N-2)]
=√(2C_2/π)√(1/1+2C_2∑_mε_m^2/A_2(N-2))exp[-B_2^2S_1^2/2C_2+B_2^2S_1^2∑_mε_m^2/A_2(N-2)-3iB_1B_2B_4S_1N∑_mε_m^2/2C_2A_2^2(N-2)^2
-9iA_3B_2^2S_1(N-4)^2∑_mε_m^2/4A_2^3(N-2)^3+3iB_2B_3S_1(N-8)∑_mε_m^2/2A_2^2(N-2)^2+iB_1B_2S_1N∑_mε_m^3/A_2(N-2)
+B_1^2C_2N^2(∑_mε_m^3)^2/2A_2^2(N-2)^2-iB_2C_1S_1N∑_mε_m^4/A_2(N-2)+iB_2C_1S_1(∑_mε_m^2)^2/A_2(N-2)
+3iB_2B_3S_1S_2∑_mε_m^2/A_2(N-2)]×(1+𝒪(N^-1)) .
The next integral is
∫ S_1 exp[2π iτ_1S_1-A_2S_1^2+B_2^2S_1^2∑_mε_m^2/A_2(N-2)-9iA_3B_2^2S_1(N-4)^2∑_mε_m^2/4A_2^3(N-2)^3
+3iB_2B_3S_1(N-8)∑_mε_m^2/2A_2^2(N-2)^2+iB_1B_2S_1N∑_mε_m^3/A_2(N-2)-iB_2C_1S_1N∑_mε_m^4/A_2(N-2)
+iB_2C_1S_1(∑_mε_m^2)^2/A_2(N-2)+3iB_2B_3S_1S_2∑_mε_m^2/A_2(N-2)+3iC_3S_1∑_mε_m^2/2A_2(N-2)
-3iA_3C_2S_1(N-4)∑_mε_m^2/2A_2^2(N-2)^2-6iA_4B_1S_1N∑_mε_m^2/A_2^2(N-2)^2-iB_1C_2S_1N∑_mε_m^4/A_2(N-2)
-iB_1C_2S_1(∑_mε_m^2)^2/A_2(N-2)-2iπτ_1C_2S_1∑_mε_m^2/A_2(N-2)-3iA_3C_2S_1S_2∑_mε_m^2/A_2(N-2)]
=√(π/A_2)√(1/1-B_2^2∑_mε_m^2/A_2^2(N-2))exp[-π^2τ_1^2/A_2-π^2τ_1^2B_2^2∑_mε_m^2/A_2^3(N-2)-πτ_1B_1B_2N∑_mε_m^3/A_2^2(N-2)
-B_1^2B_2^2N^2(∑_mε_m^3)^2/4A_2^3(N-2)^2+9πτ_1A_3B_2^2(N-4)^2∑_mε_m^2/4A_2^4(N-2)^3-3πτ_1B_2B_3(N-8)∑_mε_m^2/2A_2^3(N-2)^2
+πτ_1B_2C_1N∑_mε_m^4/A_2^2(N-2)-πτ_1B_2C_1(∑_mε_m^2)^2/A_2^2(N-2)-3πτ_1B_2B_3S_2∑_mε_m^2/A_2^2(N-2)-3πτ_1C_3∑_mε_m^2/2A_2^2(N-2)
+3πτ_1A_3C_2(N-4)∑_mε_m^2/2A_2^3(N-2)^2+6πτ_1A_4B_1N∑_mε_m^2/A_2^3(N-2)^2+πτ_1B_1C_2N∑_mε_m^4/A_2^2(N-2)
+πτ_1B_1C_2(∑_mε_m^2)^2/A_2^2(N-2)+2π^2τ_1^2C_2∑_mε_m^2/A_2^2(N-2)+3πτ_1A_3C_2S_2∑_mε_m^2/A_2^2(N-2)]×(1+𝒪(N^-1)) .
For the final integral we repeat the process once more. This yields
∫τ_1 exp[-π^2τ_1^2/A_2-π^2τ_1^2B_2^2∑_mε_m^2/A_2^3(N-2)-πτ_1B_1B_2N∑_mε_m^3/A_2^2(N-2)
+9πτ_1A_3B_2^2(N-4)^2∑_mε_m^2/4A_2^4(N-2)^3-3πτ_1B_2B_3(N-8)∑_mε_m^2/2A_2^3(N-2)^2
+πτ_1B_2C_1N∑_mε_m^4/A_2^2(N-2)-πτ_1B_2C_1(∑_mε_m^2)^2/A_2^2(N-2)-3πτ_1B_2B_3S_2∑_mε_m^2/A_2^2(N-2)-3πτ_1C_3∑_mε_m^2/2A_2^2(N-2)
+3πτ_1A_3C_2(N-4)∑_mε_m^2/2A_2^3(N-2)^2+6πτ_1A_4B_1N∑_mε_m^2/A_2^3(N-2)^2+πτ_1B_1C_2N∑_mε_m^4/A_2^2(N-2)
+πτ_1B_1C_2(∑_mε_m^2)^2/A_2^2(N-2)+2π^2τ_1^2C_2∑_mε_m^2/A_2^2(N-2)+3πτ_1A_3C_2S_2∑_mε_m^2/A_2^2(N-2)]
×exp[-3πτ_1A_3N(N-4)/2A_2^2(N-2)^2-6πτ_1A_3C_2N(N-4)∑_mε_m^2/A_2^3(N-2)^3
-15πτ_1A_3B_2^2(N-4)^3∑_mε_m^2/4A_2^4(N-2)^4+9πτ_1A_3^2B_1N(N-4)^2∑_mε_m^2/A_2^4(N-2)^4
+9πτ_1B_2B_3(N-4)(N-8)∑_mε_m^2/4A_2^3(N-2)^3+3πτ_1C_3(N-4)∑_mε_m^2/A_2^2(N-2)^2
-2πτ_1B_1N∑_mε_m^2/A_2(N-2)-π^2τ_1^2N/A_2(N-2)-3πτ_1B_3T_5N/A_2(N-2)-3πτ_1A_3S_2N/A_2(N-2)+2πτ_1C_1N∑_mε_m^3/A_2(N-2)
-2πτ_1D_1(N-1)∑_mε_m^4/A_2(N-2)+3πτ_1C_3S_2∑_mε_m^2/A_2(N-2)+πτ_1F_1(∑_mε_m^2)^2/A_2(N-2)
+3πτ_1A_3B_1^2N^2(N-4)∑_mε_m^4/4A_2^3(N-2)^3+3π^2τ_1^2A_3B_1N(N-4)∑_mε_m^2/A_2^3(N-2)^3
+9πτ_1A_3B_1^2N(N-4)(∑_mε_m^2)^2/4A_2^3(N-2)^3+9πτ_1A_3^2B_1S_2N(N-4)∑_mε_m^2/A_2^3(N-2)^3]
=√(A_2(N-2)/2π(N-1))√(1/1-(-B_2^2/2A_2^2(N-1)+C_2/A_2(N-1)+3A_3B_1N(N-4)/2A_2^2(N-1)(N-2)^2)∑_mε_m^2)×exp[3A_3B_1N^2∑_mε_m^2/2A_2^2(N-1)(N-2)
+3A_3B_1N^2(∑_mε_m^2)^2/2A_2^2(N-1)(N-2)(-B_2^2/2A_2^2(N-1)+C_2/A_2(N-1)+3A_3B_1N(N-4)/2A_2^2(N-2)^2(N-1))+B_1^2N^2(∑_mε_m^2)^2/2A_2(N-1)(N-2)
+B_1^2N^2(∑_mε_m^2)^3/2A_2(N-1)(N-2)(-B_2^2/2A_2^2(N-1)+C_2/A_2(N-1)+3A_3B_1N(N-4)/2A_2^2(N-2)^2(N-1))+9A_3^2N^2/8A_2^3(N-1)(N-2)
+B_1^2B_2N^2(∑_mε_m^2)(∑_nε_n^3)/2A_2^2(N-1)(N-2)+3A_3B_1B_2N^2∑_mε_m^3/4A_2^3(N-1)(N-2)+B_1^2B_2^2N^2(∑_mε_m^3)^2/8A_2^3(N-1)(N-2)
+C_1^2N^2(∑_mε_m^3)^2/2A_2(N-1)(N-2)-B_1C_1N^2(∑_mε_m^2)(∑_nε_n^3)/A_2(N-1)(N-2)-3A_3C_1N^2∑_mε_m^3/2A_2^2(N-1)(N-2)
+3A_3B_1N^2(∑_mε_m^2)^2/4A_2^4(N-1)(N-2)^4(2A_2C_2N(N-2)+B_2^2(N-4)^2-6A_3B_1N(N-4))
+9A_3^2N^2∑_mε_m^2/8A_2^5(N-1)(N-2)^4(2A_2C_2N(N-2)+B_2^2(N-4)^2-6A_3B_1N(N-4))
-9A_3B_1B_2^2N(N-4)^2(∑_mε_m^2)^2/8A_2^4(N-1)(N-2)^3-27A_3^2B_2^2N(N-4)^2∑_mε_m^2/16A_2^5(N-1)(N-2)^3+3B_1B_2B_3N(N-8)(∑_mε_m^2)^2/4A_2^3(N-1)(N-2)^2
+9A_3B_2B_3N(N-8)∑_mε_m^2/8A_2^4(N-1)(N-2)^2-B_1B_2C_1N^2(∑_mε_m^2)(∑_nε_n^4)/2A_2^2(N-1)(N-2)-3A_3B_2C_1N^2∑_mε_m^4/4A_2^3(N-1)(N-2)
+B_1B_2C_1N(∑_mε_m^2)^3/2A_2^2(N-1)(N-2)+3A_3B_2C_1N(∑_mε_m^2)^2/4A_2^3(N-1)(N-2)+3B_1B_2B_3S_2N(∑_mε_m^2)^2/2A_2^2(N-1)(N-2)
+9A_3B_2B_3S_2N∑_mε_m^2/4A_2^3(N-1)(N-2)+3B_1C_3N(∑_mε_m^2)^2/4A_2^2(N-1)(N-2)+9A_3C_3N∑_mε_m^2/8A_2^3(N-1)(N-2)
-3A_3B_1C_2N(N-4)(∑_mε_m^2)^2/4A_2^3(N-1)(N-2)^2-9A_3^2C_2N(N-4)∑_mε_m^2/8A_2^4(N-1)(N-2)^2-3A_4B_1^2N^2(∑_mε_m^2)^2/A_2^3(N-1)(N-2)^2
-9A_3A_4B_1N^2∑_mε_m^2/2A_2^4(N-1)(N-2)^2-B_1^2C_2N^2(∑_mε_m^2)(∑_nε_n^4)/2A_2^2(N-1)(N-2)-3A_3B_1C_2N^2∑_mε_m^4/4A_2^3(N-1)(N-2)
-B_1^2C_2N(∑_mε_m^2)^3/2A_2^2(N-1)(N-2)-3A_3B_1C_2N(∑_mε_m^2)^2/4A_2^3(N-1)(N-2)-3A_3B_1C_2S_2N(∑_mε_m^2)^2/2A_2^2(N-1)(N-2)
-9A_3^2C_2S_2N∑_mε_m^2/4A_2^3(N-1)(N-2)+3A_3B_1C_2N^2(N-4)(∑_mε_m^2)^2/A_2^3(N-1)(N-2)^3+9A_3^2C_2N^2(N-4)∑_mε_m^2/2A_2^4(N-1)(N-2)^3
+15A_3B_1B_2^2N(N-4)^3(∑_mε_m^2)^2/8A_2^4(N-1)(N-2)^4+45A_3^2B_2^2N(N-4)^3∑_mε_m^2/16A_2^5(N-1)(N-2)^4-9A_3^2B_1^2N^2(N-4)^2(∑_mε_m^2)^2/2A_2^4(N-1)(N-2)^4
-27A_3^3B_1N^2(N-4)^2∑_mε_m^2/4A_2^5(N-1)(N-2)^4-9B_1B_2B_3N(N-4)(N-8)(∑_mε_m^2)^2/8A_2^3(N-1)(N-2)^3
-27A_3B_2B_3N(N-4)(N-8)∑_mε_m^2/16A_2^4(N-1)(N-2)^3-3B_1C_3N(N-4)(∑_mε_m^2)^2/2A_2^2(N-1)(N-2)^2-9A_3C_3N(N-4)∑_mε_m^2/4A_2^3(N-1)(N-2)^2
+3B_1B_3T_5N^2∑_mε_m^2/2A_2(N-1)(N-2)+9A_3B_3T_5N^2/4A_2^2(N-1)(N-2)+B_1D_1N(N-1)(∑_mε_m^2)(∑_nε_n^4)/A_2(N-1)(N-2)
+3A_3D_1N(N-1)∑_mε_m^4/2A_2^2(N-1)(N-2)-3B_1C_3S_2N(∑_mε_m^2)^2/2A_2(N-1)(N-2)-9A_3C_3S_2N∑_mε_m^2/8A_2^2(N-1)(N-2)
-B_1F_1N(∑_mε_m^2)^3/2A_2(N-1)(N-2)-3A_3F_1N(∑_mε_m^2)^2/4A_2^2(N-1)(N-2)-3A_3B_1^3N^3(N-4)(∑_mε_m^2)(∑_nε_n^4)/8A_2^3(N-1)(N-2)^3
-9A_3^2B_1^2N^3(N-4)∑_mε_m^4/16A_2^4(N-1)(N-2)^3-9A_3B_1^3N^2(N-4)(∑_mε_m^2)^3/8A_2^3(N-1)(N-2)^3-27A_3^2B_1^2N^2(N-4)(∑_mε_m^2)^2/16A_2^4(N-1)(N-2)^3
-9A_3^2B_1^2S_2N^2(N-4)(∑_mε_m^2)^2/2A_2^3(N-1)(N-2)^3-27A_3^3B_1S_2N^2(N-4)∑_mε_m^2/4A_2^4(N-1)(N-2)^3]×(1+𝒪(N^-1)) ,
where (<ref>) is applied to exp[-3πτ_1A_3S_2N/A_2(N-2)] to simplify the notation a bit.
Besides this, the expansion
[∏_j(1+1/λ_j)^λ(N-1)+(N-2)ε_j/2/2][∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l)]
=(1+1/λ)^x/2(1+λ)^N2exp[N-1/2∑_j(λ+ε_j)log(1+λ+ε_j/1+λλ/λ+ε_j)]
×exp[-∑_k<llog(1+λ-√((1+λ-1+λ/1+λ+ε_k)(1+λ-1+λ/1+λ+ε_l)))]
=(1+1/λ)^x/2(1+λ)^N2
×exp[(∑_mε_m^2)·[Nλ^2/4λ^2(λ+1)^2+λ/4λ^2(λ+1)^2-(N-1)3λ^2+λ/8λ^2(λ+1)^2-1/8λ(λ+1)]
+(∑_mε_m^3)·[-N6λ^3+3λ^2+λ/24λ^3(λ+1)^3-3λ^2+λ/12λ^3(λ+1)^3
+(N-1)14λ^3+9λ^2+3λ/48λ^3(λ+1)^3+2λ+1/16λ^2(λ+1)^2]
+(∑_mε_m^4)·[N6λ^4+6λ^3+4λ^2+λ/24λ^4(λ+1)^4+6λ^3+4λ^2+λ/24λ^4(λ+1)^4
-(N-1)30λ^4+28λ^3+19λ^2+5λ/128λ^4(λ+1)^4-2λ^2+2λ+1/32λ^3(λ+1)^3-6λ^2+6λ+1/128λ^3(λ+1)^3]
+N(∑_mε_m^5)[-20λ^5+30λ^4+30λ^3+15λ^2+3λ/80λ^5(λ+1)^5+248λ^5+300λ^4+310λ^3+165λ^2+35λ/1280λ^5(λ+1)^5]
+N(∑_mε_m^6)[15λ^6+30λ^5+40λ^4+30λ^3+12λ^2+2λ/60λ^6(λ+1)^6-504λ^6+744λ^5+1038λ^4+836λ^3+357λ^2+63λ/3072λ^6(λ+1)^6]
+(∑_mε_m^2)^2·[6λ^2+6λ+1/128λ^3(λ+1)^3]
+(∑_mε_m^2)(∑_nε_n^3)·[-(2λ+1)^3/128λ^4(λ+1)^4]
+(∑_mε_m^2)(∑_nε_n^4)·[40λ^4+80λ^3+74λ^2+34λ+5/1024λ^5(λ+1)^5]
+(∑_mε_m^3)^2·[40λ^4+80λ^3+58λ^2+18λ+3/768λ^5(λ+1)^5]]×(1+𝒪(N^-1))
is needed. Together with all the exponents the final answer
V_N(t;λ)=√(2)(1+λ)^N2/(2πλ(λ+1)N)^N/2(1+1/λ)^x/2exp[14λ^2+14λ-1/12λ(λ+1)]
×exp[(∑_mε_m^2)·[Nλ^2/4λ^2(λ+1)^2+λ/4λ^2(λ+1)^2-(N-1)3λ^2+λ/8λ^2(λ+1)^2-1/8λ(λ+1)
+32λ^4+64λ^3+40λ^2+8λ+1/256λ^4(λ+1)^4-264λ^6+792λ^5+784λ^4+248λ^3-33λ^2-25λ-3/192λ^4(λ+1)^4N]]
×exp[(∑_mε_m^3)·[-N6λ^3+3λ^2+λ/24λ^3(λ+1)^3-3λ^2+λ/12λ^3(λ+1)^3
+(N-1)14λ^3+9λ^2+3λ/48λ^3(λ+1)^3+2λ+1/16λ^2(λ+1)^2
-2λ^3+3λ^2+3λ+1/48λ^3(λ+1)^3]]
×exp[(∑_mε_m^4)·[N6λ^4+6λ^3+4λ^2+λ/24λ^4(λ+1)^4+6λ^3+4λ^2+λ/24λ^4(λ+1)^4
-(N-1)30λ^4+28λ^3+19λ^2+5λ/128λ^4(λ+1)^4-2λ^2+2λ+1/32λ^3(λ+1)^3-6λ^2+6λ+1/128λ^3(λ+1)^3
-(2λ+1)^2N/128λ^3(λ+1)^3-6λ^4+12λ^3+20λ^2+14λ+3/128λ^4(λ+1)^4]]
×exp[N(∑_mε_m^5)·[-20λ^5+30λ^4+30λ^3+15λ^2+3λ/80λ^5(λ+1)^5+248λ^5+300λ^4+310λ^3+165λ^2+35λ/1280λ^5(λ+1)^5
+4λ^3+6λ^2+4λ+1/128λ^4(λ+1)^4]]
×exp[N(∑_mε_m^6)[15λ^6+30λ^5+40λ^4+30λ^3+12λ^2+2λ/60λ^6(λ+1)^6
-504λ^6+744λ^5+1038λ^4+836λ^3+357λ^2+63λ/3072λ^6(λ+1)^6
-14λ^4+28λ^3+36λ^2+22λ+5/768λ^5(λ+1)^5]]
×exp[(∑_mε_m^2)^2·[6λ^2+6λ+1/128λ^3(λ+1)^3+72λ^6+216λ^5+220λ^4+80λ^3-2λ^2-6λ-1/512λ^5(λ+1)^5N]]
×exp[(∑_mε_m^2)(∑_nε_n^3)·[-(2λ+1)^3/128λ^4(λ+1)^4+(2λ+1)^3/128λ^4(λ+1)^4]]
×exp[(∑_mε_m^2)(∑_nε_n^4)·[40λ^4+80λ^3+74λ^2+34λ+5/1024λ^5(λ+1)^5
-16λ^4+32λ^3+44λ^2+28λ+5/1024λ^5(λ+1)^5]]
×exp[(∑_mε_m^3)^2·[40λ^4+80λ^3+58λ^2+18λ+3/768λ^5(λ+1)^5-8λ^4+16λ^3-4λ^2-12λ-3/1024λ^5(λ+1)^5]]
×exp[(∑_mε_m^2)^3-8λ^4-16λ^3+8λ+1/3072λ^5(λ+1)^5]×(1+𝒪(N^-1))
is obtained. Applying
ε_j=2/N-2(t_j-λ(N-1))
to this yields the claimed result.
To determine the error from the difference 𝒟 from Lemma <ref>, we divide it by V_N(t;λ). Assuming that t_j-λ(N-1)=λ N^1/2+ω takes maximal values, it follows that the relative difference is given by
𝒪(N^2-Kα)exp[4λ^2+4λ+1/3λ(λ+1)]
×exp[N^2ω/64λ^2(λ+1)^4(16(λ+1)^2λ^2(4λ^2+4λ+3)
+64λ^6+192λ^5+160λ^4-40λ^2-8λ-1)]
×exp[(4λ^2+4λ+1)λ N^4ω/8(λ+1)^3]exp[λ N^6ω/48(λ+1)^5(8λ^4+16λ^3-8λ-1)] .
Only the first exponential can become large if λ is small. Assuming that λ > C/log(N), this factor adds an error N^1/3C.
To keep this relative error small, it is furthermore necessary that exp[N^6ω]<N^Kα-2. Solving this yields
0<ω<log(Kα-2)+log(log(N))/6log(N) .
Fixing the value ω=ρlog(Kα-2)+log(log(N))/6log(N) for some ρ∈(0,1) shows that the relative error is 𝒪(N^2+1/3C-KαN^(-2+Kα)^ρ)=𝒪(N^2+1/3C-Kα).
Methods to treat such multi-dimensional combinatorial Gaussian integrals in more generality have been discussed in <cit.>.
§ REDUCTION OF THE INTEGRATION REGION
In the previous paragraph the result of the integral (<ref>) in a small box around the origin was obtained. Knowing this makes it much easier to compare the contributions inside and outside of this box. This is the main aim of Lemma <ref>.
For a∈[0,1] and n∈ℕ the estimates
exp[na log(2)]≤ (1+a)^n≤exp[na]
hold.
The right-hand side follows from
(1+a)^n=∑_j=0^n\binom{n}{j}a^j=∑_j=0^n(na)^j/j!·n!/(n^j(n-j)!)≤∑_j=0^n(na)^j/j!≤exp[na] .
For the left-hand side it suffices to show that log(1+a)≥ alog(2). Because equality holds at one and zero, this follows from the concavity of the logarithm.
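A quick numerical spot check of the two bounds (illustrative only):

```python
import math, random

random.seed(0)
for _ in range(1000):
    a = random.random()          # a in [0,1]
    n = random.randint(1, 100)
    value = (1 + a) ** n
    assert math.exp(n * a * math.log(2)) <= value <= math.exp(n * a)
print("exp[na log 2] <= (1+a)^n <= exp[na] holds on all samples")
```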
For any ω∈(0,1/4) and α∈(0,1/4-ω), define
δ_N=N^-αζ_N/min{λ_j} ,
such that ζ_N→∞ and N^-δζ_N→0 for any δ>0. Assuming that x=∑_jt_j=λ N(N-1), |t_j-λ(N-1)|≪λ N^1/2+ω and λ>C/log(N), the integral
V_N(t)=(∏_j=1^N(1+1/λ_j)^t_j/2)(2π)^-N∫_𝕋^Nφ e^-i∑_j=1^Nφ_jt_j
×∏_1≤ k<l≤ N√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) 1/1-√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) (e^i(φ_k+φ_l)-1)
can be restricted to
V_N(t;λ)=2/(2π)^N(∏_j=1^N(1+1/λ_j)^t_j/2)∫_[-δ_N,δ_N]^Nφ exp[-i∑_jφ_j(t_j-λ(N-1))]
×∏_1≤ k<l≤ N√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) 1/1-√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) (e^i(φ_k+φ_l)-1)
×(1+𝒪(N^3/2exp[-N^1-2αζ_N^2])) .
The idea of the proof is to consider the integrand in a small box [-δ_N,δ_N]^N and see what happens to it if some of the angles φ lie outside of it.
Because x is even, it follows that the integrand takes the same value at φ and φ+π=(φ_1+π,…,φ_N+π). This means that only half of the space has to be considered and the result must be multiplied by 2.
This estimate follows directly from application of (<ref>-<ref>) to the integrand and a computation like the one in the proof of Lemma <ref>. Writing
μ_kl=√(λ_kλ_l)/√((1+λ_k)(1+λ_l))-√(λ_kλ_l) and ε_j=λ_j-λ
with |ε_j|≪λ N^-1/2+ω this yields
|∫_[-δ_N,δ_N]^Nφ ∏_1≤ k<l≤ N1/1-μ_kl(exp[i(φ_k+φ_l)]-1)|
≤∫_[-δ_N/2,δ_N/2]^Nφ |exp[∑_m=1i^m∑_k<lA_m(μ_kl)(φ_k+φ_l)^m]|
≤√(2)(2π/λ(λ+1)N)^N/2exp[10λ^2+10λ+1/4λ(λ+1)]exp[N^1/2+2ω] .
The final exponent exp[N^1/2+2ω] here comes from the estimate μ_kl≥λ(1-N^-1/2+ω).
Now we argue case by case why other configurations of the angles φ_j are asymptotically suppressed.
Case 1. All but finitely many angles lie in the box [-δ_N,δ_N]^N. A finite number m of angles lies outside of it. We label these angles {φ_1,…,φ_m}. The maximum of the integrand
f:(φ_m+1,…,φ_N)↦∏_1≤ k<l≤ N1/1-μ_kl(exp[i(φ_k+φ_l)]-1)
in absolute value is given by the equations
0=∂_φ_j|f|=∑_k≠ jsin(φ_j+φ_k)/1-2μ_kl(μ_kl+1)(cos(φ_j+φ_k)-1) for j=m+1,…,N .
It is clear that the maximum is found for φ̃=φ_m+1=…=φ_N. The first order solution to this is then
φ̃=-1/2(N-m-1)∑_k=1^msin(φ_k)/1+2μ_kj(μ_kj+1)(1-cosφ_k) .
This shows that the maximum will lie in the box [-δ_N/2,δ_N/2]^N. This implies that |φ_j-φ_k|>δ_N/2, when 1≤ j≤ m and m+1≤ k≤ N. Applying the estimate (<ref>) to pairs of such angles and afterwards (<ref>) to the remaining N-m angles in the box [-δ_N,δ_N]^N-m gives us an upper bound of
2√(2)/(2πλ(λ+1)(N-m))^N-m/2(∏_j(1+1/λ_j)^t_j/2)·(∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))
×\binom{N}{m}exp[30λ^2+30λ+3/12λ(λ+1)]exp[N^1/2+2ω](1+λ(λ+1)δ_N^2/4)^-Nm/2
on the part of the integral in the small box [-δ_N,δ_N]^N. There are \binom{N}{m} ways to select the m angles. Applying Lemma <ref> to the final factor and comparing the result with the Lower bound shows that this may be neglected if
2√(2)e^16λ^2+16λ+4/12λ(λ+1)e^m/2N^3m/2(2πλ(λ+1))^m/2exp[N^1-2α+N^1/2+2ω]e^-Nmlog(2)/8λ(λ+1)δ_N^2→0 .
The condition 0<α<1/4-ω and the sequence ζ_N→∞ guarantee this. In fact, the same argument works for all m such that m/N→0.
Case 2. If the number m=ρ N of angles outside the integration box [-δ_N,δ_N]^N increases faster, another estimate is needed, because the maximum φ̃ may lie outside of [-δ_N/2,δ_N/2]. It is clear that 0<ρ<1 in the limit.
Estimating the location φ_j=φ̃ of the maximum is much trickier now. Regardless of its precise location, we will take the maximum value as the estimate for the integrand in the entire integration box. The smaller box [-δ_N/2,δ_N/2]^N is considered once more. We distinguish two options.
-Case 2a. The maximum lies in [-δ_N/2,δ_N/2]^N, thus φ̃∈[-δ_N/2,δ_N/2].
Applying the estimate (<ref>) to this yields an upper bound
\binom{N}{ρ N}(2δ_N)^N(1-ρ)(2π)^ρ N(∏_j(1+1/λ_j)^t_j/2)
×(∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))(1+1/4λ(λ+1)δ_N^2)^-N^2ρ(1-ρ)/4 .
Applying Lemma <ref> to the last factor and dividing this by ℰ_α shows that
(2^1/ρπδ^1-ρ/ρ(2πλ(λ+1)N)^2/ρNexp[N^-2α/ρ+N^-1/2+2ω(1-ρ)/ρ-λ(λ+1)δ_N^2N(1-ρ)log(2)/16])^ρ N→0
is a sufficient and satisfied condition.
-Case 2b. The maximum lies not in [-δ_N/2,δ_N/2]^N. This is the same as δ_N/2<|φ̃|≤δ_N.
Applying (<ref>) only to the angles φ_ρ N+1,…,φ_ρ N in the integration box gives an upper bound
\binom{N}{ρ N}(2δ_N)^N(1-ρ)(2π)^ρ N(∏_j(1+1/λ_j)^t_j/2)
×(∏_k<l√((1+λ_k)(1+λ_l))/√((1+λ_k)(1+λ_l))-√(λ_kλ_l))(1+1/4λ(λ+1)δ_N^2)^-N^2(1-ρ)^2/4 .
The same steps as in Case 2a. will do.
This shows that the integration can be restricted to the box [-δ_N,δ_N]^N. The error terms follow from Case 1., since convergence there is much slower.
Lemma <ref> shows that for every α∈(0,1/4-ω) and N∈ℕ there is a box that contains most of the integral's mass. As N increases, this box shrinks and the approximation becomes better. The parameter α determines how fast this box shrinks. Smaller values of α lower the Lower bound and, hence, increase the number of configurations within reach at the price of more intricate integrals and less accuracy.
The observation that the choices ζ_N=log(N) and K≥α^-1(3/2+1/3C-6ω) satisfy all the demands proves Theorem <ref>.
An idea of the accuracy of these formulas can be obtained from Tables <ref> and <ref>, where the quantities
y_k=∑_j=1^N(t_j-λ(N-1))^k for k≥ 2
of various configurations are compared with the reference values 2^-kλ^kN^1+k/2.
§ POLYTOPES
In the previous paragraphs the asymptotic counting of symmetric matrices with zero diagonal and entries in the natural numbers was discussed. This allows us to return to the polytopes. The first step is to count the total number of symmetric matrices with zero diagonal and integer entries summing up to x to see which fraction of such matrices are covered by Theorem <ref>. This is easily done by a line of \binom{N}{2}+x/2 elements, for example unit elements 1, and \binom{N}{2}-1 semicolons. Putting the semicolons between the elements, such that the line begins and ends with a unit element and no semicolons stand next to each other, creates such a matrix. The number of elements before the first semicolon minus one is the first matrix element b_12. The number of elements minus one between the first and second semicolon yields the second matrix element b_13. In this way, we obtain the \binom{N}{2} elements of the upper triangular matrix. There are \binom{N}{2}-1+x/2 positions to put \binom{N}{2}-1 semicolons and thus
\binom{\binom{N}{2}-1+x/2}{\binom{N}{2}-1}≈1/N√(1/πλ(λ+1))(1+λ)^\binom{N}{2}(1+1/λ)^x/2(1+𝒪(N^-1))
such matrices, where we have used Stirling's approximation and the average matrix entry condition λ=x/(N(N-1)) for the approximation.
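For small parameters this count can be verified by brute force. The sketch below (ours) enumerates the upper-triangular entries directly and compares with the binomial coefficient.

```python
# Brute-force check of the stars-and-bars count for a small case: symmetric
# N x N matrices with zero diagonal, nonnegative integer entries and total
# entry sum x; the upper triangle then sums to x/2.
from itertools import product
from math import comb

N, x = 4, 8
K = N * (N - 1) // 2            # number of independent entries b_kl with k < l
half = x // 2

brute = sum(1 for cells in product(range(half + 1), repeat=K) if sum(cells) == half)
formula = comb(K - 1 + half, K - 1)
print(brute, formula)           # both are 126 for N = 4, x = 8
```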
The next step is to estimate the number of matrices within reach of Theorem <ref>. Using only the leading order, the number of covered matrices is given by
∫t V_N(t;λ) δ(λ N(N-1)-∑_jt_j)
=(1+λ)^\binom{N}{2}(1+1/λ)^x/2/π^N/2√(λ (λ+1)N)exp[14λ^2+14λ-1/12λ(λ+1)]∫ S∫σ∫τ exp[2π iσ S+S^2]
×{∏_j=1^N∫_-N^ω^N^ω y_j exp[2π iτ y_j]exp[-y_j^2(1+2/N-2π iσ/N)]exp[√(2)(2λ+1)y_j^3/3√(λ(λ+1)N)]
×exp[-3λ^2+3λ+1/3λ(λ+1)Ny_j^4]}
=(1+λ)^\binom{N}{2}(1+1/λ)^x/2/N√(πλ (λ+1))exp[14λ^2+14λ-1/12λ(λ+1)]∫ S∫σ∫τ exp[2π iσ S+S^2]
×exp[-π^2τ^2(1-2/N+2iπσ/N)]exp[-1+π i σ+π iτ(2λ+1)/√(2λ(λ+1))]exp[5(2λ+1)^2/24λ (λ+1)]
×exp[-3λ^2+3λ+1/4λ(λ+1)]×(1-𝒪(exp[-N^2ω]/N^2ω))^N
=(1+λ)^\binom{N}{2}(1+1/λ)^x/2/N√(πλ (λ+1))exp[-1/4λ(λ+1)]×(1-𝒪(exp[-N^2ω]/N^2ω))^N×(1+𝒪(N^-1)) .
Here we have used that
∫y exp[ay-by^2+cy^3+dy^4]=exp[a^2/4b+a^3c/8b^3+9a^4c^2/64b^5+a^4d/16b^4]√(π/(b-3ac/2b-9a^2c^2/8b^3-3da^2/2b^2))×(1+15c^2/16b^3+3d/4b^2) ,
assuming that a,b=𝒪(1), c=𝒪(N^-1/2) and d=𝒪(N^-1).
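This expansion is easy to probe numerically. The sketch below (ours; the values of a, b, c, d are arbitrary small choices, with d < 0 so the integral converges) compares it against direct quadrature.

```python
# Numerical probe of the quartic Laplace-type expansion above.
import numpy as np
from scipy.integrate import quad

a, b, c, d = 0.3, 1.0, 0.02, -0.01

exact, _ = quad(lambda y: np.exp(a*y - b*y**2 + c*y**3 + d*y**4), -np.inf, np.inf)

prefactor = np.exp(a**2/(4*b) + a**3*c/(8*b**3) + 9*a**4*c**2/(64*b**5) + a**4*d/(16*b**4))
width = np.sqrt(np.pi / (b - 3*a*c/(2*b) - 9*a**2*c**2/(8*b**3) - 3*d*a**2/(2*b**2)))
approx = prefactor * width * (1 + 15*c**2/(16*b**3) + 3*d/(4*b**2))

print(exact, approx, abs(exact - approx) / exact)  # small relative deviation
```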
A fraction exp[-1/4λ(λ+1)] of the matrices is covered, provided that ω is large enough. A sufficient condition is that
ω≥loglog N/2log N .
Combining this with the condition
ω≤log (Kα-2) + log(log N)/4log (N)
shows that K≥log(N)/α+2 is necessary to satisfy both demands. However, such large values of K remain without consequences, because higher values of K only influence the error term in Lemma <ref>.
As λ→∞, the fraction of covered matrices tends to one and the volume of the diagonal subpolytopes of symmetric stochastic matrices can be determined by (<ref>). In terms of the variables
t_j=1-h_j/a and χ=∑_jh_j
the volume of the diagonal subpolytope is calculated by
vol(P_N(h))=lim_a→0a^N(N-3)/2V_N((1-h)/a;(N-χ)/(aN(N-1)))
=√(2)e^7/6(e(N-χ)/N(N-1))^\binom{N}{2}(N(N-1)^2/2π(N-χ)^2)^N/2exp[-N(N-1)^2/2(N-χ)^2∑_j(h_j-χ/N)^2]
×exp[-(N-1)^2/(N-χ)^2∑_j(h_j-χ/N)^2]exp[-N(N-1)^3/3(N-χ)^3∑_j(h_j-χ/N)^3]
×exp[-N(N-1)^4/4(N-χ)^4∑_j(h_j-χ/N)^4]exp[(N-1)^4/4(N-χ)^4(∑_j(h_j-χ/N)^2)^2] .
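For concreteness, the formula can be transcribed directly into code. The following sketch evaluates the stated leading-order volume for a given diagonal h; the function name vol_PN is ours, and no care is taken with numerical over- or underflow for large N.

```python
# Direct transcription of the leading-order volume formula above (a sketch).
import numpy as np

def vol_PN(h):
    h = np.asarray(h, dtype=float)
    N, chi = len(h), float(np.sum(h))
    s = h - chi / N                      # centered diagonal entries h_j - chi/N
    S2, S3, S4 = (s**2).sum(), (s**3).sum(), (s**4).sum()
    r = (N - 1) / (N - chi)              # the recurring ratio (N-1)/(N-chi)
    return (np.sqrt(2.0) * np.exp(7.0 / 6.0)
            * (np.e * (N - chi) / (N * (N - 1))) ** (N * (N - 1) / 2)
            * (N * (N - 1) ** 2 / (2 * np.pi * (N - chi) ** 2)) ** (N / 2.0)
            * np.exp(-N * r**2 * S2 / 2 - r**2 * S2 - N * r**3 * S3 / 3
                     - N * r**4 * S4 / 4 + r**4 * S2**2 / 4))

print(vol_PN([0.2] * 8))   # constant diagonal: all correction terms vanish
```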
The convergence criterion becomes
|t_j-λ (N-1)|/λ N^1/2+ω=N^1/2-ω(N-1)/N-χ|h_j-χ/N|→0 .
This is the same as
∑_j|h_j-χ/N|^k≪ (N-χ/N-1)^kN^1-k/2+kω for all k≥2 .
This means that we only have accuracy in a small neighbourhood around χ/N. However, the calculation (<ref>) shows that this corresponds to almost all matrices asymptotically, so that outside of this region the polytopes will have very small volumes. There, not all relevant factors are known, but missing factors will be small compared to the dominant factor. This means that for diagonals that satisfy
lim_N→∞(N-1)^2∑_j(h_j-χ/N)^2/(N-χ)^2log(N)=0
qualitatively reasonable results are expected.
Since we are calculating a \binom{N}{2}-dimensional volume with only one length scale, it follows that no correction can become large in this limit. It inherits the relative error from Theorem <ref>. This proves Theorem <ref>. Examples of this formula at work are given in Figures <ref> and <ref>.
§.§ Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (SFB 878 - groups, geometry & actions). We thank N. Broomhead and L. Hille for valuable discussions. In particular, we thank B.D. McKay and M. Isaev for clarifications of various aspects in asymptotic enumeration, pointing out a gap in the previous version and their suggestions to fill it.
|
http://arxiv.org/abs/1701.07675v2 | 20170126124158 | Sparse Ternary Codes for similarity search have higher coding gain than dense binary codes | [
"Sohrab Ferdowsi",
"Slava Voloshynovskiy",
"Dimche Kostadinov",
"Taras Holotyak"
] | cs.IT | [
"cs.IT",
"cs.CV",
"cs.IR",
"math.IT"
] |
Sparse Ternary Codes for similarity search have higher coding gain than dense binary codes
Sohrab Ferdowsi, Slava Voloshynovskiy, Dimche Kostadinov, Taras Holotyak
============================================================================
This paper addresses the problem of Approximate Nearest Neighbor (ANN) search in pattern recognition where feature vectors in a database are encoded as compact codes in order to speed-up the similarity search in large-scale databases. Considering the ANN problem from an information-theoretic perspective, we interpret it as an encoding, which maps the original feature vectors to a less entropic sparse representation while requiring them to be as informative as possible. We then define the coding gain for ANN search using information-theoretic measures. We next show that the classical approach to this problem, which consists of binarization of the projected vectors is sub-optimal. Instead, a properly designed ternary encoding achieves higher coding gains and lower complexity.
Approximate Nearest Neighbor search, content identification, binary hashing, coding gain, sparse representation
§ INTRODUCTION
The problem of content identification, e.g., identification of people from their biometrics or objects from unclonable features was first formalized in information-theoretic terms by Willems et. al. in <cit.>. They defined the identification capacity as the exponent of the number of database items M that could reliably be identified in an asymptotic case where the feature dimension n →∞. They modeled the enrollment and acquisition systems as noisy communication channels while they considered the vectors as random channel codes. The authors then characterized the identification capacity as I(F;Q), the mutual information between the enrolled items F and the noisy queries Q. In this setup, however, the increase in n leads to an exponential increase in M ≃ 2^nI(F;Q). This incurs infeasible search/memory complexities, making the system impractical. Subsequent works attempted at decreasing these complexities. For example, <cit.> considered a two-stage clustering-based system to speed-up the search, while <cit.> considered the compression of vectors prior to enrollment and studied the achievable storage and identification rates.
Similar to the content identification problem, the pattern recognition community considers the similarity search based on Nearest Neighbors (NN's) to a given query within the items in a database. This problem is the basis for many applications like similar image retrieval, copy detection, copyright protection, etc. The idea is that semantic similarity can be mapped to distances within a vectorial space ℝ^n. The search for similarity then reduces to search for NN's by comparing the vectorial representation of a given query to representations of items in a database. The challenge, however, is when M is huge, e.g., billions. The NN search based on linear scan then becomes the main bottleneck for the system. Approximative solutions, where performance is traded with memory and complexity, are then to be preferred. This is addressed in Approximate Nearest Neighbor (ANN) search, an active topic in computer vision and machine learning communities.[Refer to <cit.> for detailed review of ANN methods and applications.]
The methods introduced for ANN in pattern recognition communities can roughly be divided into two main categories. A first category of methods is based on quantization of the data. These methods mainly target memory constraints where every database item is given an index, or a set of indices which refer to codeword(s) from a trained codebook. At the query time, the items are reconstructed from the codebooks and matched with the given query, usually using look-up-tables.
A second family of methods for ANN, also considered in this work, is based on projecting the data from ℝ^n to a usually lower dimensional space. The projected data are further binarized using the sign function. The search in the binary space is faster since it is less entropic than the original ℝ^n. Moreover, this approach brings more practical advantage since the distance matching in the binary space using the Hamming distance is performed in fixed-point instead of the floating point operations in the original real space or the space of codebooks in quantization-based methods.
While the content identification literature is rigorously studying the fundamental limits of identification under storage or complexity constraints for known source and channel distributions and infinite code-lengths, the literature of ANN search drops these information-theoretic assumptions and hence treats the problem from a more practical perspective. This work tries to bridge these approaches. While we do not assume the asymptotic case and hence do not consider fundamental limits with their achievability and converse arguments, we propose a practical systems where it is explicitly required to maximize the mutual information between the query and database codes. Moreover, to address complexity requirements, we further require the codes to be sparse and hence allowing efficient search and storage. Concretely, this work brings the following contributions:
* Considering the projection approach to ANN, we introduce the concept of “coding gain” to quantify the efficiency of a coding scheme for similarity search.
* We then show that the standard approach in the literature, which is based on binarization of the projected values, is sub-optimal. Instead, as was also shown recently in <cit.>, the ternarization approach namely the Sparse Ternary Codes (STC) is to be preferred. While in <cit.> this advantage was shown from the signal approximation points of view, here we characterize it in terms of coding gain.
* We next show how to design the STC codes to maximize the coding gain. As opposed to the Maximum Likelihood (ML) decoder, we consider a sub-optimal but fast decoder, which brings sub-linear search complexity. Considering the memory-complexity trade-offs, this provides a wealth of possibilities for design, much richer than the binary counterpart while maintains all its advantages. Based on this decoder, we simulate an identification scenario showing the efficiency of the proposed methodology.
The problem formulation, along with the definition of the proposed coding gain are discussed in section <ref>. Binary codes are summarized in section <ref> while the ternary encoding is discussed in section <ref>. Comparison of the two encoding schemes in terms of the coding gain and also complexity ratio is performed in section <ref>. Finally, section <ref> concludes the paper.
§ PRELIMINARIES
§.§ Problem formulation
We consider a database ℱ = {𝐟(1), ⋯, 𝐟(M)} of M items as vectorial data-points 𝐟(i)'s, where 𝐟(i) = [f_1(i), ⋯, f_n(i)]^T ∈ℝ^n and 1 ⩽ i ⩽ M. We assume, for the sake of analysis, that each database entry 𝐟(i) is a realization of a random vector 𝐅 whose n elements are i.i.d. realizations of a random variable F with F ∼𝒩(0,σ_F^2).
The query 𝐪 = [q_1, ⋯, q_n]^T ∈ℝ^n is a perturbed version of an item from ℱ. We assume the perturbation follows an AWGN model as 𝐐 = 𝐅 + 𝐏, where elements of 𝐏∼𝒩(0,σ_P^2 𝐈_n ) are independent from elements of 𝐅. For a given query, the similarity search is based on a distance measure 𝒟(·,·): ℝ^n ×ℝ^n →ℝ^+, which is assumed here to be the ℓ_2 distance, i.e., 𝒟_ℓ_2(𝐪,𝐟(i)) = ||𝐪 - 𝐟(i)||_2.
In the case of the content identification problem, the search system finds the nearest item from ℱ to 𝐪, i.e., î = argmin_1 ⩽ i ⩽ M[𝒟_ℓ_2(𝐪,𝐟(i))], or produces a list ℒ(𝐪) of nearest items in the more general (A)NN search, i.e., ℒ(𝐪) = { i : 𝒟_ℓ_2(𝐪,𝐟(i)) ⩽ϵ n }, where ϵ > 0 is a threshold.
§.§ Coding gain for similarity search
The above problem formulation implies a memoryless observation channel for queries. In <cit.> the identification capacity is defined for this channel (along with another AWGN for the enrollment channel) as a measure for the number of items that can be reliably identified. While this analysis is based on the asymptotic assumption where the number of items M should grow exponentially with dimension n, it can be argued, however, that M is fixed in practice and can be even well below the amount that the capacity would accommodate. Moreover, the probability of correct identification might be only asymptotically achievable and is less than 1 in practice.
Therefore, in this work, instead of the channel coding arguments, we consider the practical case where the focus is on fast decoding for which the given database is encoded. This comes with the price of lower identification performance.
To quantify the efficiency of a coding scheme for fast ANN search under the setup of section <ref>, we consider the coding gain as the ratio of mutual information between the encoded versions of the enrolled items and query and the entropy of the enrolled items, i.e.:
g_ℱ(ψ_e,ψ_i) = I(X;Y)/H(X),
where the encoding for enrollment provides X = ψ_e[F] and encoding for identification provides Y = ψ_i[Q].[Notice that, unlike the usual binary design, in the STC framework, these two stages need not be the same. In fact, they are designed according to the statistics of the observation channel.]
Mutual information in the definition of Eq. <ref> can directly be linked to channel transition probabilities. The choice of entropy, on the other hand, is justified by both memory and complexity requirements. Obviously, the cost of the database storage is directly linked to H(X) when source coding is used. Moreover, since the effective space size is |𝒳^n| ≈ 2^nH(X), a lower entropic space also implies a lower search complexity.
§ BINARY CODES
Binary encoding is a classical approach widely used in the pattern recognition literature to address fast search methods. As mentioned earlier, they are based on projecting the data to a lower-dimensional space and binarizing them.
§.§ Random Projections
The projection of vectors of ℱ and 𝐪 in ℝ^n is performed using random projections to ℝ^l.[The recent trend in pattern recognition is favoring projectors which are learned from the data. However, in practice, it turns out that the performance boost obtained from these methods is limited to low bit-rate regimes only. Moreover, they are not straightforward for analytic purposes. Therefore, here we opt for random projections.] This choice is justified by results like Johnson-Lindenstrauss lemma <cit.>, where, under certain conditions, the pair-wise distances in the projected domain are essentially preserved with probabilistic guarantees.
Random projections are usually performed using an n × l unit-norm matrix W whose elements are generated as i.i.d. from a Gaussian distribution W ∼𝒩(0,1/n). In the case of binary codes, for memory constraints, usually l < n. For the ternary codes proposed in <cit.>, which we analyze next, however, since the entropy can be bounded by imposing sparsity, longer code lengths can be considered. In this case, the use of sparse random projections of <cit.> is justified, where performance guarantees require longer lengths. This is very useful since the sparsity in the projector matrix can be reduced to 𝒪(2nl/s) instead of 𝒪(nl), where s is the sparsity parameter as in
W ∼ +√(s/2n) , w.p. 1/s ,
0 , w.p. 1-2/s ,
-√(s/2n) , w.p. 1/s .
According to the data model assumed in section <ref>, the elements of the projected data F̃ and Q̃ will also follow Gaussian distribution as F̃∼𝒩(0,σ_F^2) and Q̃|F̃∼𝒩(F̃, σ_P^2), where 𝐅̃^T = 𝐅^T W and 𝐐̃^T = 𝐐^T W. This gives the joint-distribution of the projected data as a bivariate Gaussian with ρ = σ_F/√(σ_F^2 + σ_P^2), i.e., p(f̃,q̃) = 𝒩([0,0]^T,[ σ_F^2 σ_F^2; σ_F^2 σ_F^2 + σ_P^2 ]).
§.§ Binarizing the projections
Binary codes are obtained by binarizing the projected values using the sign function, i.e., X_b = sign (F̃) and Y_b = sign (Q̃). One can consider an equivalent to the observation channel in the encoded case. In <cit.>, for the i.i.d. Gaussian setup, this channel was derived as a BSC with I(X_b,Y_b) = 1 - H_2(P_b), where P_b = 𝔼_p(q̃)[𝒬(|q̃|/σ_P)] = 1/πarccos(ρ) is the probability of bit flip and 𝒬(u) = ∫_u^∞1/√(2π)e^-u'/2 du' is the Q-function.
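For reference, both quantities are one-liners in code; in the sketch below (ours) the noise level is parametrized by the ratio σ_F^2/σ_P^2 in decibels.

```python
# Bit-flip probability P_b = arccos(rho)/pi and I(X_b;Y_b) = 1 - H2(P_b).
import numpy as np

def binary_info(snr_db):
    rho = 1.0 / np.sqrt(1.0 + 10 ** (-snr_db / 10.0))  # sigma_F/sqrt(sigma_F^2+sigma_P^2)
    Pb = np.arccos(rho) / np.pi
    H2 = -Pb * np.log2(Pb) - (1 - Pb) * np.log2(1 - Pb)
    return Pb, 1.0 - H2

print(binary_info(6.0))  # roughly P_b ~ 0.15 and ~0.4 bits per dimension at 6 dB
```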
The maximum-likelihood optimal decoder for the binary codes is simply the minimum Hamming-distance decoder, i.e., î = argmin_1 ⩽ i ⩽ M [𝒟_H(𝐲_b,𝐱_b(i))] with 𝒟_H (𝐲, 𝐱) = 1/l∑_j = 1^l y_j ⊕ x_j and ⊕ representing the XOR operator. As mentioned earlier, this has two main practical advantages. First, the cost of storage of any database vector will reduce to l bits. Second, the search is performed on the binary vectors using fixed-point operations. However, as we will show in the next section, better performance can be achieved with STC, the ternary counterpart of the binary codes.
§ SPARSE TERNARY CODES (STC)
The idea behind the STC <cit.> is that different projection values have different robustness to noise which can be expressed as a bit reliability measure. This can be achieved by ignoring the values whose magnitude is below a threshold, i.e., X_t = ϕ_λ_X(F̃) and Y_t = ϕ_λ_Y(Q̃), where:
ϕ_λ(t) =
+1, t ⩾λ,
0, -λ <t < λ,
-1, t ⩽ -λ,
and λ_X and λ_Y are threshold values for the enrolled items and the query, respectively. The threshold values λ_X and λ_Y directly influence the sparsity ratios α, the ratio of the number of elements of 𝐗_t with non-zero values ({± 1 }'s) to the code length l and γ, the ratio of the number of elements of 𝐘_t with non-zero values to the code length l. These values can be easily calculated as:
α = ∫_-∞^-λ_X p(f̃) df̃ + ∫_λ_X^∞ p(f̃) df̃ = 2𝒬(λ_X/σ_F),
γ = ∫_-∞^-λ_Y p(q̃) dq̃ + ∫_λ_Y^∞ p(q̃) dq̃ = 2𝒬(λ_Y/√(σ_F^2 + σ_P^2)).
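In code these two relations reduce to calls of the Gaussian tail function; the variances and thresholds in the sketch below are illustrative choices of ours.

```python
# Sparsity ratios alpha = 2Q(lam_X/sigma_F), gamma = 2Q(lam_Y/sqrt(sigma_F^2+sigma_P^2)).
from scipy.stats import norm

sigma_F, sigma_P = 1.0, 0.5   # illustrative signal / noise scales
lam_X, lam_Y = 1.0, 1.2       # thresholds for enrollment and query

alpha = 2 * norm.sf(lam_X / sigma_F)                           # norm.sf is the Q-function
gamma = 2 * norm.sf(lam_Y / (sigma_F**2 + sigma_P**2) ** 0.5)
print(alpha, gamma)
```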
Complementary to <cit.> where the choice of ϕ_λ was justified as an approximation for the hard-thresholding function, the optimal solution to the direct sparse approximation problem, here we propose an information-theoretic argument.
§.§ Information measures for STC
It is very useful to study the equivalent ternary channel between X_t and Y_t. The element-wise entropy of the ternary code is:
H(X_t) = -2α log_2(α) - (1-2α)log_2(1-2α).
According to Eq. <ref>, this is only a function of the threshold value λ_X (see Fig. <ref>.a).
As for I(X_t;Y_t), unlike the binary case we cannot have a closed form expression. Instead, we calculate this quantity with numerical integration.
Consider the transition probabilities for ternary channel:
P = p(y|x)=
[ p_+1|+1 p_0|+1 p_-1|+1; p_+1|0 p_0|0 p_-1|0; p_+1|-1 p_0|-1 p_-1|-1; ].
The elements of the transition matrix P are defined as the integration of the conditional distribution p(Q̃|F̃) = p(F̃,Q̃)/p(F̃) with proper integral limits and calculated numerically for these threshold values. For example, p_+1|+1 = p_Q̃|F̃(+1|+1) = ∫_λ_Y^∞∫_λ_X^∞ p(f̃,q̃) df̃ dq̃/∫_λ_X^∞ p(f̃) df̃ is the probability that values of F̃ greater than λ_X are queried with values greater than λ_Y while p_0|+1 = p_Q̃|F̃(0|+1) = ∫_-λ_Y^λ_Y∫_λ_X^∞ p(f̃,q̃) df̃ dq̃/∫_λ_X^∞ p(f̃) df̃ quantifies the probability that these values have magnitudes less than λ_Y in the query and hence will be assigned as `0' in the ternary representation. Out of these 9 transition probabilities, 5 are independent and the rest are replicated due to symmetry.
The mutual information in the ternary case is:
I(X_t;Y_t) = H(X_t) + H(Y_t) - H(X_t,Y_t),
where H(X_t) is given by Eq. <ref>. Similarly, H(Y_t) = -2γ log_2(γ) - (1-2γ)log_2(1-2γ), where γ can be derived from its definition in Eq. <ref> or, equivalently, γ = α (P(1,1) + P(1,3)) + (1-2 α) P(2,1). The joint entropy is then calculated from elements of P as:
H(X_t,Y_t) = -2 αP(1,1)log(αP(1,1))
-2 αP(1,2)log(αP(1,2)) -2 αP(1,3)log(αP(1,3))
-2 (1-2α) P(2,1)log((1-2α) P(2,1))
-(1-2α) P(2,2)log((1-2α) P(2,2)).
This, in fact, is a function only of λ_X, λ_Y and σ_P^2. We will use these quantities in section <ref> to compare the coding gain of these ternary codes with those of binary codes.
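Since no closed form is available, these quantities lend themselves to direct numerical evaluation. The sketch below (ours) integrates the bivariate Gaussian over the nine threshold cells and obtains I(X_t;Y_t) directly from the resulting joint distribution rather than via the intermediate entropies; the parameter values are illustrative.

```python
# Ternary channel transition matrix and I(X_t;Y_t) by numerical integration.
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import norm

sigma_F, sigma_P = 1.0, 0.5
lam_X, lam_Y = 1.0, 1.2
LIM = 10.0  # effective infinity for the Gaussian tails

def pdf(q, f):  # joint density p(f~, q~) = p(f~) p(q~ | f~)
    return norm.pdf(f, 0.0, sigma_F) * norm.pdf(q, f, sigma_P)

f_cut = [(-LIM, -lam_X), (-lam_X, lam_X), (lam_X, LIM)]   # x = -1, 0, +1
q_cut = [(-LIM, -lam_Y), (-lam_Y, lam_Y), (lam_Y, LIM)]   # y = -1, 0, +1

J = np.empty((3, 3))  # joint probabilities p(x, y), ordered -1, 0, +1
for i, (flo, fhi) in enumerate(f_cut):
    for j, (qlo, qhi) in enumerate(q_cut):
        J[i, j], _ = dblquad(pdf, flo, fhi, qlo, qhi)

px, py = J.sum(axis=1), J.sum(axis=0)
I = sum(J[i, j] * np.log2(J[i, j] / (px[i] * py[j]))
        for i in range(3) for j in range(3) if J[i, j] > 0)
print("transition matrix p(y|x):\n", J / px[:, None])
print("I(X_t;Y_t) =", I, "bits")
```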
§.§ Fast sub-linear decoder
Given a ternary sequence 𝐲 = (y_1,⋯,y_l)^T, the maximum-likelihood rule to choose the optimal 𝐱(i) amongst the registered 𝐱(1),⋯,𝐱(M) is given as î = argmax_1 ⩽ i ⩽ M p_𝐘|𝐗(𝐲 |𝐱(i) ). This rule can be simplified as the maximization of the log-likelihood as:
î = argmax_1 ⩽ i ⩽ M∑_j = 1^l logp_Y|X(y_j|x_j(i)).
This requires to take into account all the 9 transition probabilities of Eq. <ref>.
Similar to the binary codes, the above maximum-likelihood decoder should exhaustively scan all the M items in the database. However, by skipping the transitions to and from `0' and only considering elements of P containing `+1' and `-1', an alternative decoder can be considered as:
î = argmax_1 ⩽ i ⩽ M∑_j = 1^l [ ν (1_{ x_j = y_j = +1 } + 1_{ x_j = y_j = -1 } )
+ν' (1_{ x_j = +1, y_j = -1 } + 1_{ x_j = -1, y_j = +1 } ) ],
where ν and ν' are voting constants that will be specified shortly and 1_{·} is the indicator function.
This decoder is sub-optimal in the ML sense since instead of taking into account all the 9 transitions (from which 5 independent), it considers only 4 transitions (from which 3 independent) and treats the other 5 as the same. However, it performs the search in sub-linear complexity instead of the otherwise exhaustive scan. As we will show next, this allows us to drastically reduce the search complexity. An algorithmic description of this decoder was given in <cit.> for practical scenarios. In this work, however, since we are assuming a known distribution for the data, we can calculate the optimal values of ν and ν' as Eq. <ref>, where ν encourages sign match and ν' is a negative value that penalizes sign mismatch. Since we are ignoring the `0' transitions, ν^0 is a bias term and compensates by considering the expectation of the occurrences of logp(y|x) terms in Eq. <ref>, when either x or y is `0'.
ν = ν^0 + log(P(1,1)),
ν' = ν^0 + log(P(1,3)),
ν^0 = -[ 2αP(1,2) logP(1,2)
+ (1-2 α) (2P(2,1) logP(2,1) + P(2,2) logP(2,2)) ].
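A dense sketch of this scoring rule is given below (ours). A practical implementation would index only the nonzero code coordinates, e.g. with inverted lists, which is what makes the search sub-linear; the dense version only illustrates the voting.

```python
# Sub-optimal STC decoder: score = nu * (#sign matches) + nu' * (#sign mismatches).
import numpy as np

def stc_decode(y, X, nu, nu_p):
    """y: (l,) ternary query in {-1,0,+1}; X: (M,l) ternary codes; nu_p < 0."""
    match = ((X == y) & (y != 0)).sum(axis=1)       # positions with x_j = y_j = +/-1
    mismatch = ((X == -y) & (y != 0)).sum(axis=1)   # positions with x_j = -y_j = +/-1
    return int(np.argmax(nu * match + nu_p * mismatch))
```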
§ SIMULATION RESULTS
§.§ Comparison of Sparse Ternary Codes with dense binary codes
Here we compare the coding gain of STC with dense binary codes. The mutual information of the ternary code depends on both λ_X and λ_Y, therefore, for different values of λ_X[In practice, this is chosen based on memory constraints.], we choose λ_Y that maximizes I(X_t;Y_t). Since there is no analytic expression for the maximum of I(X_t;Y_t), for a fixed λ_X, we use a simple grid search among different values of λ_Y for the maximizer of mutual information. Fig. <ref>.d shows the (λ_X, λ_Y^*) pairs under three different noise levels, as characterized by SNR = 10log_10σ_F^2/σ_P^2.
For fair comparison, we choose the code lengths of binary and ternary cases (l_b and l_t, respectively) to ensure the same code entropy, i.e., l_bH(X_b) = l_t H(X_t). We then compare
l_bI(X_b,Y_b) with l_tI(X_t,Y_t) in Fig. <ref>.b.
As is seen from this figure, the proper choice of thresholds leads to interesting regimes where, for the same entropy and hence the same number of bits, the ternary code preserves more mutual information compared to binary codes.
§.§ Identification performance
We consider the identification of synthetic data by comparing the probability of correct identification for different pairs of memory and complexity ratio in Fig. <ref>. Memory usage is measured by entropy of a coded block, i.e., l_bH(X_b) for the binary and l_tH(X_t) and ternary while complexity ratio is measured as l_b/n and 4αγ l_t/n, for the binary and ternary codes, respectively. Keeping the complexity of the sparse random projections stage the same in both cases in each experiment, for equal memory usage, a large gap is observed between the complexity ratios of the two counterparts. Furthermore, usually much better performance is achieved for the STC.
§ CONCLUSIONS
This work is an attempt to bridge the problem of content identification with that of similarity search based on Nearest Neighbors in pattern recognition. Based on information-theoretic insights, the concept of coding gain was proposed as a figure of merit. It was shown that the Sparse Ternary Codes posses higher coding gain than the classical dense binary hashing scheme and hence provide better performance-complexity-memory trade-offs. As a future work, the extension to compression-based setups for the STC will be studied.
|
http://arxiv.org/abs/1701.08117v2 | 20170127170800 | Neutron activation and prompt gamma intensity in Ar/CO$_{2}$-filled neutron detectors at the European Spallation Source | [
"E. Dian",
"K. Kanaki",
"R. J. Hall-Wilton",
"P. Zagyvai",
"Sz. Czifrus"
] | physics.ins-det | [
"physics.ins-det"
] |
E. Dian^1,2,3 (corresponding author, dian.eszter@energia.mta.hu), K. Kanaki^2, R. J. Hall-Wilton^2,4, P. Zagyvai^1,3, Sz. Czifrus^3
^1 Hungarian Academy of Sciences, Centre for Energy Research, 1525 Budapest 114., P.O. Box 49., Hungary
^2 European Spallation Source ESS ERIC, P.O Box 176, SE-221 00 Lund, Sweden
^3 Budapest University of Technology and Economics, Institute of Nuclear Techniques, 1111 Budapest, Műegyetem rakpart 9.
^4 Mid-Sweden University, SE-851 70 Sundsvall, Sweden
Monte Carlo simulations using MCNP6.1 were performed to study the effect of neutron activation in Ar/CO_2 neutron detector counting gas. A general MCNP model was built and validated with simple analytical calculations. Simulations and calculations agree that only the ^40Ar activation can have a considerable effect. It was shown that neither the prompt gamma intensity from the ^40Ar neutron capture nor the produced ^41Ar activity have an impact in terms of gamma dose rate around the detector and background level.
Keywords: ESS, neutron detector, B_4C, neutron activation, ^41Ar, MCNP, Monte Carlo simulation
§ INTRODUCTION
Ar/CO_2 is a widely applied detector counting gas, with a long history in radiation detection. Nowadays, the application of Ar/CO_2-filled detectors extends to the field of neutron detection as well. However, the exposure of Ar/CO_2 counting gas to neutron radiation carries the risk of neutron activation. Therefore, detailed consideration of the effect and amount of neutron-induced radiation in the Ar/CO_2 counting gas is a key issue, especially for large volume detectors.
In this paper methodology and results of detailed analytical calculations and Monte Carlo simulations of prompt and decay gamma production in boron-carbide-based neutron detectors filled with Ar/CO_2 counting gas are presented (see Appendix).
In Section 2 a detailed calculations method for prompt gamma and activity production and signal-to-background ratio is introduced, as well as a model built in MCNP6.1 for the same purposes. The collected bibliographical data (cross section, decay constant) and the cross section libraries used for MCNP6.1 simulation are also presented.
In Section 3 the results of the analytical calculations and the simulation, their comparison and their detailed analysis are given.
In Section 4 the obtained results are concluded from the aspects of gamma emission during and after irradiation, radioactive waste production and emission, and the effect of self-induced gamma background on the measured signal.
§ CONTEXT
The European Spallation Source (ESS) has the goal to be the world's leading neutron source for the study of materials by the second quarter of this century <cit.>.
Large scale material-testing instruments, beyond the limits of the current state-of-the-art instruments, are going to be served by the brightest neutron source in the world, delivering 5 MW power on target.
At the same time the ^3He crisis instigates detector scientists to open a new frontier for potential ^3He substitute technologies and adapt them to the requirements of the large scale instruments that used to be fulfilled by well-tested ^3He detectors. One of the most promising replacements is the ^10B converter based gaseous detector technology, utilising an Ar/CO_2 counting gas. Ar/CO_2-filled detectors will be utilised among others for inelastic neutron spectrometers <cit.>, where on the one hand, very large detector volumes are foreseen, on the other hand, very low background radiation is required. Consequently, due to the high incoming neutron intensity and large detector volumes, the effects of neutron-induced reactions, especially neutron activation in the solid body or in the counting gas of the detector could scale up and become relevant, both
in terms of background radiation and
radiation safety.
Gaseous detectors
The gaseous ionisation chamber is one of the most common radiation detectors. The ionisation chamber itself is a gas filled tank that contains two electrodes with DC voltage <cit.>. The detection method is based on the collision between atoms of the filling gas and the photons or charged particles to be detected, during which electrons and positively charged ions are produced. Due to the electric field between the electrodes, the electrons drift to the anode, inducing a measurable signal. However, this measurable signal is very low, therefore typically additional wires are included and higher voltage is applied in order to obtain a gain on the signal, while the signal is still proportional to the energy of the measured particle; these are the so-called proportional chambers <cit.>. These detectors can be used as neutron detectors if appropriate converters are applied that absorb the neutron while emitting detectable particles via a nuclear reaction. In the case of ^3He- or ^10BF_3-based detectors the converter is the counting gas itself, but solid converters could be used as well with conventional counting gases.
Thermal neutron detector development
One leading development has been set on the Ar/CO_2 gas filled detectors with solid enriched boron-carbide (^10B_4C) neutron converters <cit.>, detecting neutrons via the ^10B(n,α)^7Li reaction <cit.>.
With this technology the optimal thickness of the boron-carbide layer is typically 1 μm <cit.>, otherwise the emitted α particle is stopped inside the layer and remains undetected.
But a thinner boron-carbide layer means smaller neutron conversion efficiency. The idea behind the detector development at ESS is the multiplication of boron-carbide neutron converter layers by using repetitive geometrical structures, in order to increase the neutron conversion efficiency and obtain a detection efficiency that is competitive with that of the ^3He detectors <cit.>.
Shielding issues in detector development
The modern neutron instruments are being developed to reach high efficiency, but also higher performance, such as time or energy resolution to open new frontiers in experimentation. One of the most representative characteristics of these instruments is the signal-to-background ratio,
which is targeted in the optimisation process. While the traditional solutions for improving the signal-to-background ratio are based on increasing the source power and improving the transmittance of neutron guides, for modern instruments the background reduction via optimised shielding becomes equally relevant. For state-of-the-art instruments the cost of a background reducing shielding can be a major contribution in the total instrument budget <cit.>. In order to optimise the shielding not only for radiation safety purposes but in order to improve the signal-to-background ratio a detailed map of potential background sources is essential. While the components of the radiation background coming from the neutron source and the neutron guide system are well known, the effect of newly developed
boron-carbide converter based gaseous detectors still has to be examined, especially the background radiation and potential self-radiation coming from the neutron activation of the solid detector components and of the detector filling gas.
Argon activation
The experience over the last decades showed that for facilities, e.g. nuclear power plants, research reactors and research facilities with accelerator tunnels, there is a permanent activity emission during normal operation that mainly contains airborne radionuclides <cit.>. For most of these facilities ^41Ar is one of the major contributors
to the radiation release. ^41Ar is produced via thermal neutron capture from the naturally occurring ^40Ar,
which is the main isotope of natural argon with 99.3 % abundance <cit.>. At most
facilities ^41Ar is produced from the irradiation of the natural argon content of air. In air-cooled and water-cooled reactors ^40Ar is exposed in the reactor core as part of the coolant; in the latter case it is coming from the air dissolved in the primary cooling water.
Air containing argon is also present in the narrow
gap between the reactor vessel and the biological shielding.
The produced ^41Ar mixes with the air of the reactor hall and is removed by the ventilation system. In other facilities ^41Ar is produced in the accelerator tunnel. In all cases, within the radiation safety plan of the facility the ^41Ar release is taken into account <cit.> and well estimated either via simple analytical calculations or Monte Carlo simulations. The average yearly ^41Ar release of these facilities can reach a few thousand GBq.
For the ESS the ^41Ar release coming from the accelerator and the spallation target is already calculated <cit.>,
but in addition
the exposure of the large volume of Ar/CO_2
contained in the neutron detectors should also be considered. Due to the 70-90 % argon content of the counting gas and the fact that most instruments operate with thermal or cold neutron flux that leads to a higher average reaction rate, the ^41Ar production in the detectors could be commensurate with the other sources.
For all the above mentioned reasons, argon activation is an issue to consider at ESS both in terms of activity release and in terms of occupational exposure in the measurement hall.
§ APPLIED METHODS
Analytical calculation for neutron activation
Neutron activation occurs during the (n,γ) reaction where a neutron is captured by a target nucleus. The capture itself is usually followed by an instant photon emission; these are the so called prompt photons. The energies of the emitted prompt photons are specific for the target nucleus. After capturing the neutron, in most cases the nucleus gets excited, and becomes radioactive; this is the process of neutron activation, and the new radionuclide will suffer decay with its natural half-life. Due to their higher number of neutrons, the activated radionuclei mostly undergo β^- decay, accompanied by a well-measurable decay gamma radiation. The gamma energies are specific for the radionucleus. These two phenomena form the basics of two long-used and reliable analytical techniques, the neutron activation analysis (NAA <cit.>) and the prompt gamma activation analysis (PGAA <cit.>). Consequently, detailed measured and simulated data are available for neutron activation calculation.
For shielding and radiation safety purposes the produced activity concentration (a [Bq/cm^3]) and the prompt photon intensity have to be calculated that are depending on the number of activated nuclei (N^* [1/cm^3]). The production of radionuclides (reaction rate) depends on the number of target nuclei (N_0 [1/cm^3]) for each relevant isotope, the irradiating neutron flux (Φ [n/cm^3/s]) and the (n,γ) reaction cross section (σ [cm^2]) at the irradiating neutron energies, while the loss of radionuclides is determined by their decay constants (λ [1/s]). A basic assumption is that the number of target nuclei can be treated as constant if the loss of target nuclei during the whole irradiation
does not exceed 0.1 %. This condition is generally fulfilled, like in the cases examined in this study, therefore the rate of change of the number of activated nuclei is given by Equation <ref>.
dN^*/dt = N_0·Φ·σ - λ· N^*
With the same conditions, the activity concentration after a certain time of irradiation (t_irr [s]) can be calculated with Equation <ref>.
a (t_irr) = N_0·Φ·σ·(1- e^-λ t_irr)
In this study, as the activation calculation is based on Equation <ref>, the activity yield of the naturally present radionuclides (e.g. cosmogenic ^14C in CO_2) is ignored due to the very low abundance of these nuclides. The activity yield of the secondary activation products, the products of multiple independent neutron captures on the same target nucleus, are ignored as well, because of the low probability of the multiple interaction.
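As an illustration, the buildup formula above reproduces the saturated ^41Ar activity reported in the results section from first principles. In the sketch below (ours) the ^40Ar number density follows from the ideal gas law at 1 bar and 293 K; the thermal capture cross section of ^40Ar is taken as roughly 0.66 b, an assumed value standing in for the ones listed in Table <ref>.

```python
# Activity concentration a(t_irr) = N0 * phi * sigma * (1 - exp(-lambda * t_irr)).
import numpy as np

def activity(N0, phi, sigma, half_life, t_irr):
    """N0 [1/cm^3], phi [n/cm^2/s], sigma [cm^2], half_life and t_irr [s] -> Bq/cm^3."""
    lam = np.log(2.0) / half_life
    return N0 * phi * sigma * (1.0 - np.exp(-lam * t_irr))

# 40Ar in 80/20 V% Ar/CO2 at 1 bar and 293 K (ideal gas), 99.3 % isotopic abundance:
N0_Ar40 = 1e5 / (1.380649e-23 * 293.0) * 1e-6 * 0.8 * 0.993     # nuclei per cm^3
a41 = activity(N0_Ar40, 1e4, 0.66e-24, 109.34 * 60.0, 1e6)      # sigma_th ~ 0.66 b assumed
print(a41)  # ~1.3e-1 Bq/cm^3, in line with the saturated 41Ar value of the results
```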
The prompt gamma intensity (I [1/s/cm^3]) coming from the neutron capture can be calculated similarly to the (n,γ) reaction rate. In this case a prompt gamma line (i) specific cross section (σ_pg,i) has to be used <cit.>, that is proportional to the (n,γ) cross section, the natural abundance of the target isotope in the target element, and the weight of the specific gamma energy with respect to the total number of gamma lines. For this reason in Equation <ref> the number of target nuclei corresponds to the element (N'_0 [1/cm^3]), not the isotope (N_0 [1/cm^3]).
I_i = N'_0·Φ·σ_pg,i
In the current study, activity concentration, prompt gamma intensity and the respective prompt gamma spectrum have been calculated for each isotope in the natural composition <cit.> of an 80/20 volume ratio Ar/CO_2 counting gas at room temperature and 1 bar pressure and in an aluminium alloy
used for the detector frame. Alloy Al5754 <cit.> has been chosen as a typical alloy used in nuclear science for mechanical structures.
Activity concentration and prompt gamma intensity calculations have been done for several monoenergetic neutron beams in the range of 0.6–10 Å (227.23–0.82 meV). Since for isotopes of interest the energy dependence of the (n,γ) cross section is in the 1/v (velocity) region <cit.>, the cross sections for each relevant energy
have been easily extrapolated from the thermal (1.8 Å) neutron capture cross sections listed in Table <ref>.
The irradiating neutron flux has been approximated with 10^4 n/cm^2/s. This
value has been determined for a worst case scenario based on the following assumptions: the planned instruments are going to have various neutron fluxes at the sample position, and the highest occurring flux can be conservatively estimated to 10^10 n/cm^2/s <cit.>. The neutron fraction scattered from the sample is in the range of 1-10 %. Calculating with 10 %, the approximation remains conservative. A realistic sample surface is 1 cm^2,
reducing the scattered flux to 10^9 n/s. The sample-detector distance also varies among the instruments, so the smallest realistic distance of 100 cm was used for a conservative approximation. Therefore the neutron yield has to be normalised to a 10^5 cm^2 surface area at this sample-detector distance. According to these calculations, 10^4 n/cm^2/s is a conservative estimation for the neutron flux the detector is exposed to. This simple approach allows that the result can be scaled to alternate input conditions, i.e. a higher neutron flux or detector geometry.
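The arithmetic of this worst-case estimate can be spelled out explicitly (our transcription of the numbers quoted above):

```python
# Worst-case detector flux: sample flux -> scattered yield -> spread over a sphere.
import math

phi_sample = 1e10                       # n/cm^2/s at the sample position
scattered = 0.10 * phi_sample * 1.0     # 10 % scattered from a 1 cm^2 sample -> n/s
area = 4.0 * math.pi * 100.0 ** 2       # ~1.26e5 cm^2 sphere at 100 cm distance
print(scattered / area)                 # ~8e3 n/cm^2/s, conservatively rounded to 1e4
```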
MCNP simulation for neutron activation
Monte Carlo simulations have been performed in order to determine the expected activity concentration and prompt gamma intensity in the counting gas and the aluminium frame of boron-carbide-based neutron detectors.
The MCNP6.1 <cit.> version has been used for the simulations. The detector gas volume has been approximated
as a generic 10 cm x 10 cm x 10 cm cube, surrounded by a 5 mm thick aluminium box made of Al5754 alloy, representing the detector frame, as it is described in Figure <ref>.
In order to avoid interference with the prompt photon emission of the Ar/CO_2, the counting gas was replaced with vacuum while observing the activation on the aluminium frame.
The detector geometry has been irradiated with a monoenergetic neutron beam from a monodirectional disk source of 8.5 cm radius at 50 cm distance from the surface of the target volume. A virtual sphere has been defined around the target gas volume with 10 cm radius for simplifying prompt photon counting. Both the activity concentration and the prompt gamma intensity determined with MCNP6.1 simulations have been scaled to a 10^4 n/cm^2/s irradiating neutron flux.
Different runs have been prepared for each element in the gas mixture
and the Al5754 alloy
to determine the prompt gamma spectrum and total intensity. The prompt photon spectrum has been determined for each element with the following method: a virtual sphere
has been defined around the cubic target volume. Since the target volume was located in vacuum, all the prompt photons produced in a neutron activation reaction have to cross this virtual surface. Within MCNP, the particle current integrated over a surface, can be easily determined (F1 tally <cit.>). Knowing the volume of the target, the prompt photon intensity can be calculated for the simulated neutron flux (Φ_MCNP, [flux/source particle]). After the Φ_MCNP average neutron flux in the target volume has been determined (F4 tally <cit.>), the prompt photon intensity can be scaled for any desired neutron flux, 10^4 n/cm^2/s in this case. With this method the self-absorption of the target gas volume can be considered to be negligible.
The activity concentration is not given directly by the simulation, but it can be calculated from the R_MCNP reaction rate (reaction/source particle) and the Φ_MCNP flux. The R_MCNP is calculated in MCNP in the following way: first the track length density of neutrons has to be determined in the target volume (F4 tally <cit.>), and then this value has to be multiplied with the reaction cross section of the specific reaction of interest, through the entire spectrum, taking into account the number of target nuclei of the irradiated material (FM tally multiplication card <cit.>). In the current simulations each isotope has been defined as a different material, with their real partial atomic density ([atom/barn/cm]) in the counting gas or in the aluminium alloy for the (n,γ) reaction (ENDF reaction 102). As the reaction rate given by the MCNP simulation is the saturated reaction rate for the Φ_MCNP flux, and contains all the geometrical and material conditions of the irradiation, the time-dependent activity concentration for any Φ flux can be calculated with Equation <ref>.
a (t_irr) = R_MCNP·Φ/Φ_MCNP·(1- e^-λ t_irr)
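In code this scaling is a one-line variant of the buildup formula (a sketch; the function name is ours):

```python
# Activity from a simulated reaction rate, rescaled from the MCNP flux to a target flux.
import numpy as np

def activity_from_mcnp(R_mcnp, phi_mcnp, phi, half_life, t_irr):
    """R_mcnp [reactions/source particle], phi_mcnp [flux/source particle]."""
    lam = np.log(2.0) / half_life
    return R_mcnp * phi / phi_mcnp * (1.0 - np.exp(-lam * t_irr))
```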
In order to determine the above mentioned quantities, the cross section libraries have to be chosen carefully for the simulation.
Within the current study different libraries have been used to simulate the prompt gamma production and the reaction rates. Several databases have been tested, but only a few of them contain data on photon production for the isotopes of interest.
Tables <ref> and <ref>
present
the combinations
that give the best agreement with the theoretical expectations, especially in terms of spectral distribution. These are the ENDF <cit.>, TALYS <cit.> and LANL <cit.> databases.
The MCNP6.1 simulation has been repeated for each naturally occurring isotope in the counting gas and the aluminium frame, and analytical calculations have been also prepared to validate the simulation, in order to obtain reliable and well-applicable data on the detector housing and counting gas activation and gamma emission both for shielding and for radiation protection purposes.
In order to demonstrate the effect of gamma radiation on the measured signal, the signal-to-background has been calculated for a typical and realistic detector geometry. A generic boron-carbide based detector can be represented by a 5-20 mm thick gas volume surrounded by a few millimetre thin aluminium box, carrying the few micrometers thick boron-carbide converter layer(s). The gas volume is determined by the typical distance needed for the energy deposition. In a realistic application, a larger gas volume used to be used for efficiency purposes, built up from the above mentioned subvolumes.
As a representative example a V_gas = 256 cm^3 counting gas volume has been chosen as the source of gamma production, with an A_in = 16 cm^2 entrance surface for incident neutrons, divided into 20 mm thick subvolumes by 16 layers of 2 μm thin enriched
boron-carbide.
In this study the gamma efficiency has been approximated with 10^-7 for the entire gamma energy range <cit.> due to its relatively low energy-dependence, whereas the neutron efficiency has been calculated for all the mentioned energies on the basis of <cit.>, resulting in a neutron efficiency varying between 0.4-0.72 within the given energy range. Therefore the measured signal and the signal of the gamma background were calculated as in Equations <ref>-<ref>, where η_i is the detection efficiency for particle type i, Φ is the incident neutron flux and I_photon is the photon production in a unit gas volume. The signal-to-(gamma-)background ratio has been calculated as S_n/S_γ.
S_n = A_in·Φ·η_n
S_γ = V_gas· I_photon·η_γ
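For the geometry above these expressions give, as a rough illustration (ours): the photon term keeps only the dominant argon prompt contribution of 3.9·10^-1 photon/cm^3/s per unit flux at 1.8 Å reported in the results below, and η_n = 0.5 is taken from the quoted 0.4-0.72 range.

```python
# Signal-to-gamma-background for the 16-layer example geometry (illustrative numbers).
A_in, V_gas = 16.0, 256.0     # cm^2 entrance window, cm^3 counting gas
phi = 1e4                     # n/cm^2/s incident neutron flux
eta_n, eta_g = 0.5, 1e-7      # neutron and gamma detection efficiencies
I_photon = 0.39 * phi         # photon/cm^3/s, argon prompt yield at 1.8 A scaled to phi

S_n = A_in * phi * eta_n
S_g = V_gas * I_photon * eta_g
print(S_n, S_g, S_n / S_g)    # ~8e4 vs ~0.1 counts/s, ratio ~8e5
```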
All calculations and simulations have been done for a 10^4 n/cm^2/s monoenergetic neutron irradiation for 0.6, 1, 1.8, 2, 4, 5 and 10 Å neutron wavelengths. Activity concentration has been calculated for t_irr = 10^6 s irradiation time and t_cool = 10^7 s cooling time.
This irradiation time roughly corresponds to typical lengths of operation cycles for spallation facilities.
Photon production has been normalised for a 1 cm^3 volume, irradiated with Φ = 1 n/cm^2/s or Φ = 10^4 n/cm^2/s neutron flux.
Therefore here the photon production in a unit gas or aluminium volume irradiated with a unit flux is given as photon/cm^3/s/n/cm^2/s.
The uncertainties of the simulated and the bibliographical data have been taken into account. The MCNP6.1 simulations had high enough statistics, that the uncertainties of the simulated results were comparable to the uncertainties of the measured/bibliographical qualities used for the analytical calculations. The uncertainty of the total prompt photon production for all elements were below 5 % for the entire neutron energy range, while the uncertainties of the main prompt gamma lines were below 10 % for all elements, and less than 5 % for argon and the elements of the aluminium alloy.
For the analytical calculations,
the error propagation takes into account the uncertainty of
the prompt gamma line specific cross section, given in the IAEA PGAA Database <cit.>, being below 5 % for the main lines of all major isotopes, the σ absorption cross section and the λ decay constant (see Appendix).
The obtained uncertainties of the photon intensities are generally within the size of the marker, here the error bars have been omitted. They have also been omitted for some of the spectra for better visibility.
§ RESULTS AND DISCUSSION
§.§ Prompt gamma intensity in detector counting gas
The total prompt photon production and its spectral distribution in Ar/CO_2 counting gas has been analytically calculated (Equation <ref>) on the basis of detailed prompt gamma data from IAEA PGAA Data-base <cit.>.
The same data have been obtained with Monte Carlo simulation using MCNP6.1.
Prompt photon production normalised to incident neutron flux has been calculated for all mentioned wavelengths.
The comparison of the result has shown, that the simulated and calculated total prompt photon yields qualitatively agree for Ar, C, and O within 2 %, 11 % and 21 %, respectively.
It has also been shown that for these three elements proper cross section libraries can be found (see Table <ref>), the use of which in MCNP simulations produces prompt photon spectra that qualitatively agree with the calculated ones. As an example Figure <ref> shows the simulated and calculated prompt photon spectra from argon in Ar/CO_2 for a 1.8 Å, Φ = 1 n/cm^2/s neutron flux, irradiating a 1 cm^3 volume. Since numerous databases lack proper prompt photon data, this agreement is not trivial to achieve for all the elements.
For these three elements MCNP simulations can effectively replace analytical calculations, which is especially valuable for more complex geometries. For all these reasons hereinafter only the MCNP6.1 simulated results are presented.
In Figure <ref> it is also shown, that the prompt photon emission is dominated by argon, as expected due to the very small capture cross section of the oxygen and the carbon; the argon total prompt photon yield is 3 orders of magnitude higher than the highest of the rest. According to Figure <ref>, within the argon prompt gamma spectrum, there are 3 main gamma lines that are responsible for the majority of the emission;
the ones at 167 ± 20 keV, 1187 ± 3 keV and 4745 ± 8 keV.
§.§ Activity concentration and decay gammas in detector counting gas
The induced activity in the irradiated Ar/CO_2 gas volume, as well as the photon yield coming from the activated radionuclei has been determined via analytical calculation, based on the bibliographical thermal (25.30 meV) neutron capture cross sections and the half-lives of the isotopes in the counting gas (see Table <ref>). A similar calculation has been prepared on the bases of reaction rates determined with MCNP simulations for each isotope of the counting gas. Activity concentrations obtained from the calculation and the MCNP6.1 simulation agree within the margin of error, therefore only the MCNP simulations are presented.
As an example the build-up of activity during irradiation time for 1.8 Å is given in Figure <ref> for all the produced radionuclei.
It can be stated, that the total activity of the irradiated counting gas practically equals the ^41Ar activity (see Figure <ref>), which is 1.28 · 10^-1 Bq/cm^3 at the end of the irradiation time. This is 2 orders of magnitude higher than the activity of ^37Ar, which is 6.90 · 10^-4 Bq/cm^3, and 7 orders of magnitude higher than the activity of ^38Ar (7.99 · 10^-9 Bq/cm^3) and ^19O (3.19 · 10^-8 Bq/cm^3). The activity of carbon is negligible.
The decrease of activity in the detector counting gas because of the natural radioactive decay is shown in Figure <ref>.
After the end of the irradiation the main component of the total activity is the ^41Ar, although it practically disappears after a day (10^5 s), due to its short 109.34 min half-life, with ^37Ar becoming the dominant isotope. However, in terms of gamma emission, all the remaining isotopes, ^37Ar, ^39Ar and ^14C, are irrelevant, since they are pure beta-emitters. Therefore,
with the above listed conditions there is only minimal gamma emission from the Ar/CO_2 counting gas after 10^5 s cooling time. For the same reason, the ^41Ar activity quickly saturates and
accordingly it can contribute to the gamma emission during the irradiation as well.
Decay gamma emission of the activated radionuclei from a unit volume per second, with the activity reached by the end of the irradiation time have been calculated. It is shown that the decay gamma yield practically all comes from the activated argon; the emission of the 1293.587 keV ^41Ar line is 8 orders of magnitudes higher than the yield of any other isotope.
Comparing the prompt and the maximum decay gamma emission of all the isotopes, as it is shown in Table <ref>, it is revealed that for the argon, the prompt photon production (3.9 · 10^-1 photon/cm^3/s/n/cm^2/s) and the saturated decay gamma production (1.27 · 10^-1 photon/cm^3/s/n/cm^2/s) are comparable. There is a factor of 3 difference, whereas for carbon and oxygen the decay gamma production is negligible comparing with the prompt gamma production.
Prompt and decay gamma emission from 80/20 V% Ar/CO_2 at 1 bar pressure and from Al5754 aluminium alloy, irradiated with 10^4 1/cm^2 s monoenergetic neutron flux for 10^6 s irradiation time. Results of MCNP6.1 simulation. Photon yields [1/cm^3 s] are listed for neutron wavelengths of 0.6, 1, 1.8, 2, 4, 5 and 10 Å, in this order.

Ar, prompt: (1.3200±0.0400)·10^-1, (2.1500±0.0500)·10^-1, (3.9600±0.0800)·10^-1, (4.3700±0.0900)·10^-1, (8.6400±0.1400)·10^-1, (1.0800±0.0160)·10^0, (2.1500±0.0250)·10^0
Ar, decay: (4.2270±0.0010)·10^-2, (7.0450±0.0020)·10^-2, (1.2667±0.0003)·10^-1, (1.4090±0.0004)·10^-1, (2.8179±0.0007)·10^-1, (3.5224±0.0009)·10^-1, (7.0440±0.0020)·10^-1
C, prompt: (8.1000±1.4000)·10^-5, (1.3300±0.1800)·10^-4, (2.2100±0.2300)·10^-4, (2.5100±0.2500)·10^-4, (5.3300±0.3600)·10^-4, (6.9000±0.4000)·10^-4, (1.3600±0.0600)·10^-3
C, decay: (8.4900±0.1100)·10^-21, (1.4400±0.0200)·10^-20, (2.5100±0.0300)·10^-20, (2.7900±0.0400)·10^-20, (5.5600±0.0700)·10^-20, (6.9400±0.0900)·10^-20, (1.3900±0.0200)·10^-19
O, prompt: (1.5800±0.4300)·10^-5, (2.5100±0.5500)·10^-5, (4.1000±0.7000)·10^-5, (4.8100±0.7700)·10^-5, (1.1200±0.1200)·10^-4, (1.4300±0.1400)·10^-4, (2.9600±0.1900)·10^-4
O, decay: (1.6190±0.0350)·10^-8, (2.7000±0.0600)·10^-8, (4.8000±0.1000)·10^-8, (5.4000±0.1200)·10^-8, (1.0800±0.0200)·10^-7, (1.3500±0.0300)·10^-7, (2.6900±0.0600)·10^-7
Al, prompt: (8.2700±0.1100)·10^1, (1.3790±0.0150)·10^2, (2.4700±0.0200)·10^2, (2.7500±0.0200)·10^2, (5.4400±0.0300)·10^2, (6.7600±0.0300)·10^2, (1.3000±0.0050)·10^3
Al, decay: (4.4419±0.0018)·10^1, (7.4010±0.0030)·10^1, (1.3288±0.0005)·10^2, (1.4773±0.0006)·10^2, (2.9290±0.0010)·10^2, (3.6373±0.0015)·10^2, (6.9981±0.0028)·10^2
Cr, prompt: (2.0000±0.1000)·10^0, (3.3500±0.1400)·10^0, (6.0000±0.2000)·10^0, (6.7000±0.2000)·10^0, (1.3400±0.0300)·10^1, (1.6800±0.0360)·10^1, (3.3500±0.0500)·10^1
Cr, decay: (5.4774±0.0026)·10^-3, (9.1310±0.0040)·10^-3, (1.6418±0.0008)·10^-2, (1.8263±0.0009)·10^-2, (3.6530±0.0020)·10^-2, (4.5660±0.0020)·10^-2, (9.1300±0.0040)·10^-2
Cu, prompt: (7.3000±0.1000)·10^-1, (1.2300±0.1300)·10^0, (2.2000±0.1700)·10^0, (2.4400±0.1900)·10^0, (4.8800±0.2900)·10^0, (6.0900±0.3400)·10^0, (1.2200±0.0500)·10^1
Cu, decay: (6.4400±0.0300)·10^-3, (1.0730±0.0050)·10^-2, (1.9300±0.0100)·10^-2, (2.1500±0.0100)·10^-2, (4.2900±0.0200)·10^-2, (5.3660±0.0260)·10^-2, (1.0730±0.0050)·10^-1
Fe, prompt: (1.6900±0.1200)·10^0, (2.8400±0.1600)·10^0, (5.1000±0.2000)·10^0, (5.7000±0.2000)·10^0, (1.1300±0.0300)·10^1, (1.4120±0.0370)·10^1, (2.8200±0.0500)·10^1
Fe, decay: (2.3400±0.0600)·10^-4, (3.9000±0.1000)·10^-4, (7.0000±0.2000)·10^-4, (7.8000±0.2100)·10^-4, (1.5600±0.0400)·10^-3, (1.9500±0.0500)·10^-3, (3.9000±0.1000)·10^-3
Mg, prompt: (1.6100±0.1200)·10^0, (2.6800±0.1700)·10^0, (4.8400±0.2300)·10^0, (5.3800±0.2400)·10^0, (1.0800±0.0300)·10^1, (1.3450±0.0380)·10^1, (2.6700±0.0500)·10^1
Mg, decay: (3.1900±0.0300)·10^-2, (5.3200±0.0500)·10^-2, (9.5600±0.0900)·10^-2, (1.0600±0.0100)·10^-1, (2.1200±0.0200)·10^-1, (2.6520±0.0250)·10^-1, (5.2790±0.0490)·10^-1
Mn, prompt: (1.7700±0.0600)·10^1, (2.9500±0.0800)·10^1, (5.3000±0.1100)·10^1, (5.8900±0.1200)·10^1, (1.1800±0.0200)·10^2, (1.4800±0.0200)·10^2, (2.9500±0.0300)·10^2
Mn, decay: (9.3000±0.1000)·10^0, (1.5600±0.0200)·10^1, (2.8000±0.0300)·10^1, (3.1140±0.0360)·10^1, (6.2300±0.0700)·10^1, (7.7900±0.0900)·10^1, (1.5600±0.0200)·10^2
Si, prompt: (2.7500±0.1800)·10^-1, (4.5200±0.2300)·10^-1, (8.1000±0.3000)·10^-1, (9.1000±0.3000)·10^-1, (1.8150±0.0460)·10^0, (2.2700±0.0500)·10^0, (4.5500±0.0700)·10^0
Si, decay: (1.6812±0.0007)·10^-6, (2.8020±0.0010)·10^-6, (5.0380±0.0020)·10^-6, (5.6040±0.0020)·10^-6, (1.1207±0.0004)·10^-5, (1.4008±0.0006)·10^-5, (2.8010±0.0010)·10^-5
Ti, prompt: (2.6000±0.1500)·10^0, (4.4000±0.2000)·10^0, (7.8000±0.3000)·10^0, (8.7000±0.3500)·10^0, (1.7500±0.0500)·10^1, (2.1800±0.0600)·10^1, (4.3600±0.0900)·10^1
Ti, decay: (1.5950±0.0080)·10^-3, (2.6600±0.0100)·10^-3, (4.7790±0.0250)·10^-3, (5.3160±0.0280)·10^-3, (1.0630±0.0060)·10^-2, (1.3290±0.0070)·10^-2, (2.6600±0.0100)·10^-2
Zn, prompt: (4.9300±1.3800)·10^-1, (8.3000±1.9000)·10^-1, (1.4900±0.2700)·10^0, (1.6600±0.2900)·10^0, (3.3200±0.4300)·10^0, (4.1300±0.4800)·10^0, (8.3000±0.7000)·10^0
Zn, decay: (1.1140±0.0080)·10^-3, (1.8600±0.0100)·10^-3, (3.3380±0.0250)·10^-3, (3.7100±0.0300)·10^-3, (7.4200±0.0600)·10^-3, (9.2800±0.0700)·10^-3, (1.8600±0.0100)·10^-2
Figure <ref> and Table <ref> demonstrate that, as both the prompt and the decay gamma yields are determined by the neutron absorption cross section, their energy dependence follows the 1/v rule within the observed energy range for all the isotopes of the Ar/CO_2 counting gas. Therefore activation with cold neutrons produces a higher yield, and the thermal fraction is negligible.
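As a quick sanity check of the 1/v behaviour, the tabulated argon prompt yields can be divided by the wavelength; if the rule holds, the ratio is constant, since v = h/(m_n λ) for thermal and cold neutrons. The following minimal Python sketch (our own check, not part of the original analysis) does exactly that:

wavelengths = [0.6, 1.0, 1.8, 2.0, 4.0, 5.0, 10.0]     # Angstrom
ar_prompt = [1.32e-1, 2.15e-1, 3.96e-1, 4.37e-1,
             8.64e-1, 1.08e0, 2.15e0]                   # tabulated Ar prompt yields

for lam, y in zip(wavelengths, ar_prompt):
    # Under the 1/v rule, yield/lambda should be (nearly) constant.
    print(f"{lam:5.1f} A   yield/lambda = {y / lam:.3e}")

Running it gives ratios of 0.215-0.220 for all seven wavelengths, confirming the linear scaling with wavelength.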
As has been indicated, most of the activated nuclei are beta emitters, and some of the isotopes in the Ar/CO_2 are pure beta emitters; therefore the effect of beta radiation should also be evaluated. In Table <ref>, the activated beta-emitter isotopes in Ar/CO_2 and the most significant ones in the aluminium housing have been collected. As an example, according to the calculated activity concentrations (see Figure <ref>), only ^41Ar has a considerable activity in the counting gas. Therefore the only beta that might have to be taken into account is the 1197 keV ^41Ar beta. However, with the usual threshold settings <cit.> of proportional systems, the energy deposition of the beta radiation does not appear in the measured signal. Therefore, on the one hand, the effect of beta radiation is negligible in terms of the detector signal-to-background ratio, while on the other hand, in terms of radiation protection, due to the absorption length of a few 10 cm in gas and a few millimeters in aluminium, the beta exposure from the detector is also negligible.
Consequently, only the prompt and the decay gamma emission contribute considerably to the measured background spectrum, and both of them are dominated by ^41Ar, during and after the irradiation. A typical neutron beam-on gamma emission spectrum is shown in Figure <ref>, for 1.8 Å and 10^4 n/cm^2/s incident neutron flux, calculated with saturated ^41Ar activity.
In order to demonstrate how the gamma radiation background induced by neutrons in the detector itself affects the measured signal, the signal-to-background ratio has been calculated for the detector-filling gas on the basis of Equations <ref> and <ref>. As described above, Ar/CO_2 can be represented by ^41Ar in terms of gamma emission. Owing to its very short saturation time, both the prompt and the decay gamma production have been included in the background.
In Figure <ref> the good agreement of the calculated and the simulated signal-to-background ratios is shown for the self-induced gamma background coming from neutron activation. In both cases the signal-to-background ratio increases with the square root of the energy and varies between 10^9 and 10^10 over the entire energy range.
The calculation has been done with a neutron efficiency of the order of 10^-1, which is typical for a well-designed boron-carbide-based neutron detector, and it has been shown that the effect of the gamma background is very small, giving only a negligible contribution to the measured signal. Moreover, applying the same calculation to beam monitors, which have the lowest practical neutron efficiency (approximated as 10^-5), the signal-to-background ratio is still 10^5, meaning that even for beam monitors the self-induced gamma background is vanishingly small.
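The √E scaling follows directly from the 1/v rule: the gamma background scales with the wavelength (B ∝ λ ∝ 1/√E), while the detected neutron signal is, to first order, energy-independent, hence S/B ∝ √E. A rough Python sketch (ours; the normalization SB_ref at the cold edge is an assumed order of magnitude taken from the text) reproduces the quoted range across the band:

import math

def neutron_energy_meV(lam_A):
    # de Broglie relation for neutrons: E[meV] = 81.81 / lambda[A]^2
    return 81.81 / lam_A**2

SB_ref = 1e9                     # assumed S/B at the 10 A edge (order of magnitude from the text)
E_ref = neutron_energy_meV(10.0)

for lam in (10.0, 5.0, 2.0, 1.0, 0.6):
    E = neutron_energy_meV(lam)
    # B ~ 1/sqrt(E), S taken as constant, so S/B ~ sqrt(E)
    print(f"{lam:5.1f} A   E = {E:7.2f} meV   S/B ~ {SB_ref * math.sqrt(E / E_ref):.2e}")

Between 10 Å and 0.6 Å the energy grows by a factor of ~280, so S/B grows by √280 ≈ 17, i.e. from 10^9 to a few times 10^10, consistent with the simulated range.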
§.§ Prompt gamma intensity in Al5754 aluminium frame
The prompt and decay photon yield of the aluminium frame or housing of the detectors have been determined via analytical calculation and MCNP6.1 simulation with the same methods and parameters as the ones used for the Ar/CO_2.
The prompt photon production normalised to the incident neutron flux has been calculated.
For the Al5754 alloy as well, the calculated and MCNP6.1-simulated spectra agree qualitatively, although the agreement within the total prompt photon production varies from element to element, as shown in Table <ref>. Even with the best-fitting choice of cross section databases (Table <ref>), the difference is not higher than 10 % for most elements, but for Mn and Zn the differences between the prompt photon productions are 28 % and 23 %, respectively. However, since for all isotopes of these elements the simulation results are conservative, the MCNP simulation remains reliable. Figure <ref> is given as an example to show the produced prompt photon spectrum for a Φ = 1 n/cm^2/s neutron flux irradiating a 1 cm^3 volume.
Comparing the prompt photon emission from a unit volume of Al5754 with that from Ar/CO_2 (see Table <ref>), it can be stated that the prompt photon intensity coming from the aluminium housing is 3 orders of magnitude higher than the one coming from the counting gas. However, for large-area detectors, like the ones used in chopper spectrometry, where the gas volume might be 10^5 cm^3 (see <cit.>), the prompt photon yield of the detector counting gas can become comparable to that of the solid frame.
The two main contributors to the prompt photon emission are the aluminium and the manganese (Figure <ref>); the total prompt photon yield of aluminium is 2 orders of magnitude, and that of manganese 1 order of magnitude, higher than the yield of the rest. Consequently, even the minor components in the aluminium alloy can be relevant for photon production if they have a considerable neutron capture cross section.
According to Figure <ref>, within the simulated Al5754 prompt gamma spectrum there is one main gamma line that is responsible for the majority of the emission, the 7724.03 ± 0.04 keV line of ^27Al. It has to be mentioned that in the analytically calculated spectrum a second main gamma line appears at 30.638 ± 0.001 keV, also from ^27Al; it has a significant yield only on the basis of the IAEA data, and it is not reproduced in the simulation. However, the mentioned gamma energy is low enough that for practical purposes the MCNP simulation remains reliable.
§.§ Activity concentration and decay gammas in Al5754 aluminium frame
An analytical calculation has been performed in order to determine the induced activity in the irradiated aluminium housing, as well as the photon yield coming from the activated radionuclides, with the same methods that were used for the counting gas. The calculation was based on the bibliographical thermal neutron capture cross sections and the half-lives of the isotopes in the Al5754 aluminium alloy (see Table <ref>).
An example of the activity build-up during the irradiation time at 1.8 Å is presented in Figure <ref> for all the produced radionuclei. According to Figures <ref> and <ref>, for most of the isotopes in Al5754 the activity concentrations obtained from calculations and MCNP6.1 simulations agree within the margin of error or within the range of 5 %. However, for a few isotopes the difference is significant. In the case of ^55Cr, with the most suitable choice of cross section libraries the largest discrepancy between the simulations and the calculations <cit.> is 13 %.
Extra care is also needed when treating Zn in the simulations: with calculations made on the basis of the thermal neutron cross section data of Mughabghab <cit.>, the discrepancies for ^65Zn, ^69Zn and ^71Zn are 5 %, 7 % and 10 %, respectively, while when using the NIST database <cit.> for the calculations, the differences are 18 %, 3 % and 1 %. Since ^64Zn, the parent isotope of ^65Zn, is the major component of natural zinc, the use of the first database is recommended. According to Table <ref>, the activity concentration of the zinc is 5 orders of magnitude smaller than the highest occurring activity concentration, hence the large difference between the calculated and the simulated result does not have a significant impact on the results for the whole alloy.
In Figure <ref> it is demonstrated that the majority of the produced total activity is given by ^28Al and ^56Mn, reaching 1.33 · 10^2 Bq/cm^3 and 1.96 · 10^1 Bq/cm^3 at the end of the irradiation time, respectively. It is also shown that for all isotopes the activity concentration saturates quickly at the beginning of the irradiation time; therefore the decay gamma radiation is produced during practically the entire irradiation time, with a yield constant in time.
The decay gamma intensity of the activated radionuclei from a unit volume has also been calculated, with the activity reached by the end of the irradiation time, as in the case of Ar/CO_2 (see Table <ref>). It is shown that the decay gamma emission is dominated by ^28Al and ^56Mn; their decay photon emission is 3 and 2 orders of magnitude, respectively, higher than that of the rest. The decay gamma spectrum is dominated by the 1778.969 ± 0.012 keV line of ^28Al.
Figure <ref> and Table <ref> demonstrate that for aluminium and manganese the prompt photon production (2.47 · 10^2 and 5.27 · 10^1 photon/cm^3/s per n/cm^2/s) and the saturated decay gamma production (1.33 · 10^2 and 2.8 · 10^1 photon/cm^3/s per n/cm^2/s) are comparable; the decay photon yield is 53-54 % of the prompt photon yield, whereas for all the other isotopes the decay gamma production is less than 1 % of the prompt gamma production.
Figure <ref> shows that the total gamma emission spectrum during the neutron irradiation is dominated by the aluminium. The majority of the total photon yield comes from the ^27Al prompt gamma emission, while the two main lines of the measured spectrum are the 1778.969 ± 0.012 keV ^28Al decay gamma line and the 7724.03 ± 0.04 keV ^27Al prompt gamma line.
The decrease of activity in the aluminium housing due to natural radioactive decay has also been calculated, and the obtained results are shown in Figure <ref>, analogously to the case of Ar/CO_2 in Figure <ref>. Three isotopes become major components of the total activity for some period during the cooling time: ^28Al, with 1 order of magnitude higher activity than the rest within 0-6·10^3 s (10 min); ^56Mn, with 2 orders of magnitude higher activity than the rest within 6·10^3-10^6 s (11 days); and ^51Cr, with 1 order of magnitude higher activity than the rest from 10^6 s onwards. The total activity therefore decreases relatively fast. However, because of the long half-life of ^55Fe (T_1/2 = 2.73 ± 0.03 y), a small background activity is expected to remain for years after the irradiation.
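The saturation and cooling behaviour discussed above follows the textbook activation formulas A(t) = A_sat (1 - 2^{-t/T_1/2}) during irradiation and A(t) = A_end · 2^{-t/T_1/2} afterwards. The sketch below is ours: the half-lives are standard nuclear data (assumed here), and the saturated activities are the values quoted in the text for the two dominant isotopes.

import math

half_life_s = {"Al-28": 2.245 * 60.0,       # standard nuclear data (assumed)
               "Mn-56": 2.579 * 3600.0}
A_sat = {"Al-28": 1.33e2, "Mn-56": 1.96e1}  # Bq/cm^3 at end of irradiation, from the text

def build_up(A_s, T, t):
    return A_s * (1.0 - 2.0 ** (-t / T))    # activity during irradiation

def cool_down(A0, T, t):
    return A0 * 2.0 ** (-t / T)             # activity during cooling

for iso, T in half_life_s.items():
    t99 = T * math.log2(100.0)              # time to reach 99% of saturation
    print(f"{iso}: 99% of saturation after {t99:8.0f} s, "
          f"activity after 6e3 s cooling: {cool_down(A_sat[iso], T, 6e3):.2e} Bq/cm^3")

The output shows ^28Al saturating within ~900 s but vanishing after 6·10^3 s of cooling, while ^56Mn retains ~13 Bq/cm^3 at that point, matching the cross-over of the dominant isotopes described above.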
§ CONCLUSIONS
Analytical calculations and MCNP6.1 modelling have been performed and compared in order to study the effect of neutron activation in boron-carbide-based neutron detectors. A set of MCNP6.1 cross section databases has been collected for the Ar/CO_2 counting gas and for the aluminium detector housing, approximated as Al5754, which gives good agreement with the analytical calculations or an acceptable, conservative estimate, both for prompt gamma production and for activity calculations. These databases are recommended for more complex geometries, where the analytical calculations should be replaced by MCNP simulations.
It has been shown that the prompt photon emission of the aluminium housing is dominated by the Al and Mn contributions, while that of the counting gas is mainly given by Ar. The prompt photon intensity from a unit volume of the aluminium housing is 3 orders of magnitude higher than that from the counting gas.
The total activity concentration of the housing is mainly given by ^28Al and ^56Mn, and that of the counting gas by ^41Ar. Due to the short half-lives of the main isotopes, their decay gammas already appear and saturate during the irradiation period, giving a decay gamma emission comparable in yield to the prompt photon emission.
With the aforementioned typical counting gas, the ^41Ar activity saturates at 1.28 · 10^-1 Bq/cm^3, and based on this value operational scenarios can be envisaged. With these results it has been shown that only a low level of activation is expected in the detector counting gas. Therefore, with a flushing of 1 detector volume of gas per day and assuming a V = 10^7 cm^3 detector volume, an activity production of 1.28 · 10^6 Bq/day is expected. By varying the flush rate and storing the counting gas for up to 1 day before release, only negligible levels of activity will be present in the waste Ar/CO_2 stream.
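A quick worked check of this scenario (our arithmetic; the ^41Ar half-life of ~109.6 min is standard nuclear data, assumed here):

A_sat_Ar41 = 1.28e-1            # Bq/cm^3, saturated Ar-41 activity from the text
V = 1e7                         # cm^3, detector gas volume assumed in the text
T_half_s = 109.6 * 60.0         # Ar-41 half-life (standard nuclear data)

produced_per_day = A_sat_Ar41 * V                 # Bq flushed out per day
decay_factor_1d = 2.0 ** (-86400.0 / T_half_s)    # decay over 1 day of storage

print(f"activity flushed per day      : {produced_per_day:.2e} Bq")   # 1.28e+06 Bq
print(f"remaining after 1 day storage : {produced_per_day * decay_factor_1d:.2e} Bq")

One day corresponds to ~13 half-lives, so the stored activity drops by a factor of ~10^4, to the level of a few hundred Bq, supporting the statement that the released activity is negligible.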
The neutron-induced gamma signal-to-background ratio has also been determined for several neutron energies, revealing that it varies within the range of 10^9 - 10^10 for general boron-carbide-based detector geometries, and remains 10^5 even for beam monitors, which have the lowest practical efficiency.
The effect of beta radiation coming from the activated isotopes has also been considered, and it can be stated that the beta radiation is negligible both in terms of the signal-to-background ratio and in terms of radiation protection.
In this study all simulations and calculations were made for a generic geometry, and a reliable set of data on activity and photon production was given that can be generally applied and scaled for any kind of boron-carbide-based neutron detector filled with Ar/CO_2.
§ ACKNOWLEDGMENTS
This work has been supported by the In-Kind collaboration between ESS ERIC and the Centre for Energy Research of the Hungarian Academy of Sciences (MTA EK).
Richard Hall-Wilton would like to acknowledge the EU Horizon2020 Brightness Grant [676548].
|
http://arxiv.org/abs/1701.07717v5 | 20170126143040 | Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro | ["Zhedong Zheng", "Liang Zheng", "Yi Yang"] | cs.CV | ["cs.CV"] |
Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in vitro

Zhedong Zheng, Liang Zheng, Yi Yang (to whom all correspondence should be addressed)
Centre for Artificial Intelligence, University of Technology Sydney
{zdzheng12,liangzheng06,yee.i.yang}@gmail.com
The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. This is challenging in two respects: 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline.
We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at <https://github.com/layumi/Person-reID_GAN>.
§ INTRODUCTION
Unsupervised learning can serve as an important auxiliary task to supervised tasks <cit.>. In this work, we propose a semi-supervised pipeline that works on the original training set without an additional data collection process. First, the training set is expanded with unlabeled data using a GAN. Then our model minimizes the sum of the supervised and the unsupervised losses through a new regularization method. This method is evaluated with person re-ID, which aims to spot the target person in different cameras. This has been recently viewed as an image retrieval problem <cit.>.
This paper addresses three challenges. First, current research on GANs typically considers the quality of sample generation, with or without semi-supervised learning, in vivo <cit.>. Yet an open problem remains: moving the generated samples out of the box and using them in currently available learning frameworks. To this end, this work uses unlabeled data produced by the DCGAN model <cit.> in conjunction with the labeled training data. As shown in Fig. <ref>, our pipeline feeds the newly generated samples into another learning machine (i.e. a CNN). Therefore, we use the term “in vitro” to differentiate our method from <cit.>; these methods perform semi-supervised learning in the discriminator of the GANs (in vivo).
Second, the challenge of performing semi-supervised learning using labeled and unlabeled data in CNN-based methods remains. Usually, the unsupervised data is used as a pre-training step before supervised learning <cit.>. Our method uses all the data simultaneously. In <cit.>, the unlabeled/weak-labeled real data are assigned labels according to pre-defined training classes, but our method assumes that the GAN generated data does not belong to any of the existing classes. The proposed LSRO method neither includes unsupervised pre-training nor label assignments for the known classes. We address semi-supervised learning from a new perspective. Since the unlabeled samples do not belong to any of the existing classes, they are assigned a uniform label distribution over the training classes. The network is trained not to predict a particular class for the generated data with high confidence.
Third, in person re-ID, data annotation is expensive, because one has to draw a pedestrian bounding box and assign an ID label to it. Recent progress in this field can be attributed to two factors: 1) the availability of large-scale re-ID datasets <cit.> and 2) the learned embedding of pedestrians using a CNN <cit.>. That being said, the number of images for each identity is still limited, as shown in Fig. <ref>. There are on average 17.2 images per identity in Market-1501 <cit.>, 9.6 images in CUHK03 <cit.>, and 23.5 images in DukeMTMC-reID <cit.>. So using additional data is non-trivial for avoiding model overfitting. In the literature, pedestrian images used in training are usually provided by the training sets, without being expanded. So it is unknown whether a larger training set with unlabeled images would bring any extra benefit. This observation inspired us to resort to GAN samples to enlarge and enrich the training set. It also motivated us to employ the proposed regularization to implement a semi-supervised system.
In an attempt to overcome the above-mentioned challenges, this paper 1) adopts GAN in unlabeled data generation, 2) proposes the label smoothing regularization for outliers (LSRO) for unlabeled data integration, and 3) reports improvements over a CNN baseline on three person re-ID datasets. In more details, in the first step, we train DCGAN <cit.> on the original re-ID training set. We generate new pedestrian images by inputting 100-dim random vectors in which each entry falls within [-1, 1]. Some generated samples are shown in Fig. <ref> and Fig. <ref>. In the second step, these unlabeled GAN-generated data are fed into the ResNet model <cit.>. The LSRO method regularizes the learning process by integrating the unlabeled data and, thus, reduces the risk of over-fitting.
Finally, we evaluate the proposed method on person re-ID and show that the learned embeddings demonstrate a consistent improvement over the strong ResNet baseline.
To summarize, our contributions are:
* the introduction of a semi-supervised pipeline that integrates GAN-generated images into the CNN learning machine in vitro;
* an LSRO method for semi-supervised learning. The integration of unlabeled data regularizes the CNN learning process. We show that the LSRO method is superior to the two available strategies for dealing with unlabeled data; and
* a demonstration that the proposed semi-supervised pipeline has a consistent improvement over the ResNet baseline on three person re-ID datasets and one fine-grained recognition dataset.
§ RELATED WORK
In this section, we will discuss the relevant works on GANs, semi-supervised learning and person re-ID.
§.§ Generative Adversarial Networks
The generative adversarial networks (GANs) learn two sub-networks: a generator and a discriminator. The discriminator reveals whether a sample is generated or real, while the generator produces samples to cheat the discriminator. GANs were first proposed by Goodfellow <cit.> to generate images and gain insights into neural networks. Then, DCGANs <cit.> provided some techniques to improve the stability of training. The discriminator of DCGAN can serve as a robust feature extractor. Salimans <cit.> achieve a state-of-the-art result in semi-supervised classification and improve the visual quality of GANs. InfoGAN <cit.> learns interpretable representations by introducing latent codes. On the other hand, GANs also demonstrate potential in generating images for specific fields. Pathak <cit.> propose an encoder-decoder method for image inpainting, where GANs are used as the image generator. Similarly, Yeh <cit.> improve the inpainting performance by introducing two loss types. In <cit.>, 3D object images are generated by a 3D-GAN. In this work, we do not focus on investigating more sophisticated sample generation methods. Instead, we use a basic GAN model <cit.> to generate unlabeled samples from the training data and show that these samples help improve discriminative learning.
§.§ Semi-supervised Learning
Semi-supervised learning is a sub-class of supervised learning taking unlabeled data into consideration, especially when the volume of annotated data is small. On the one hand, some research treats unsupervised learning as an auxiliary task to supervised learning. For example, in <cit.>, Hinton learn a stack of unsupervised restricted Boltzmann machines to pre-train the model. Ranzato propose to reconstruct the input at every level of a network to get a compact representation <cit.>. In <cit.>, the auxiliary task of ladder networks is to denoise representations at every level of the model. On the other hand, several works assign labels to the unlabeled data. Papandreou <cit.> combine strong and weak labels in CNNs using an expectation-maximization (EM) process for image segmentation. In <cit.>, Lee assigns a “pseudo label” to the unlabeled data in the class that has the maximum predicted probability. In <cit.>, the samples produced by the generator of the GAN are all taken as one class in the discriminator. Departing from previous semi-supervised works, we adopt a different regularization approach by assigning a uniform label distribution to the generated samples.
§.§ Person Re-identification
Some pioneering works focus on finding discriminative handcrafted features <cit.>. Recent progress in person re-ID mainly consists of advancing CNNs. Yi <cit.> split a pedestrian image into three horizontal parts and respectively train three part-CNNs to extract features. Similarly, Cheng <cit.> split the convolutional map into four parts and fuse the part features with the global feature. In <cit.>, Li add a new layer that multiplies the activation of two images in different horizontal stripes. They use this layer to explicitly allow patch matching in the CNN. Later, Ahmed <cit.> improve the performance by proposing a new patch matching layer that compares the activation of two images in neighboring pixels. In addition, Varior <cit.> combine the CNN with some gate functions, aiming to adaptively focus on the salient parts of input image pairs; this method is limited by computational inefficiency because the input must consist of image pairs.
A CNN can be very discriminative by itself without explicit part-matching. Zheng <cit.> directly use a conventional fine-tuning approach (called the ID-discriminative embedding, or IDE) on the Market-1501 dataset <cit.> and its performance exceeds many other recent results. Wu <cit.> combine the CNN embedding with hand-crafted features. In <cit.>, Zheng combine an identification model with a verification model and improve the fine-tuned CNN performance. In this work, we adopt the IDE model <cit.> as a baseline, and show that the GAN samples and LSRO effectively improve its performance. Recently, Barbosa <cit.> propose synthesizing human images through a photorealistic body generation software. These images are used to pre-train an IDE model before dataset-specific fine-tuning. Our method is different from <cit.> in both data generation and the training strategy.
§ NETWORK OVERVIEW
In this section, we describe the pipeline of the proposed method. As shown in Fig. <ref>, the real data in the training set is used to train the GAN model. Then, the real training data and the newly generated samples are combined as training input for the CNN. In the following section, we will illustrate the structure of the two components, i.e., the GAN and the CNN, in detail. Note that our system does not make major changes to the network structures of the GAN or the CNN, with one exception - the number of neurons in the last fully-connected layer in the CNN is modified according to the number of training classes.
§.§ Generative Adversarial Network
Generative adversarial networks have two components: a generator and a discriminator. For the generator, we follow the settings in <cit.>. We start with a 100-dim random vector and enlarge it to 4×4×16 using a linear function. To enlarge the tensor, five deconvolution functions are used with a kernel size of 5×5 and a stride of 2. Every deconvolution is followed by a rectified linear unit and batch normalization. Additionally, one optional deconvolutional layer with a kernel size of 5×5 and a stride of 1, and one tanh function are added to fine-tune the result. A sample that is 128×128×3 in size can then be generated.
The input of the discriminator network includes the generated images and the real images in the training set. We use five convolutional layers to classify whether the generated image is fake. Similarly, the size of the convolutional filters is 5×5 and their stride is 2. We add a fully-connected layer to perform the binary classification (real or fake).
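For concreteness, a minimal PyTorch re-expression of the generator just described is given below. The paper itself uses the TensorFlow DCGAN code; the 100-dim input, the 4×4 seed tensor, the 5×5 stride-2 deconvolutions, the optional stride-1 layer and the final tanh follow the text, while the channel widths (the ngf multiples) are our assumption in the usual DCGAN style.

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, ngf=64):
        super().__init__()
        self.ngf = ngf
        self.fc = nn.Linear(z_dim, 4 * 4 * ngf * 16)   # random vector -> 4x4 seed tensor

        def up(cin, cout):
            # 5x5 deconvolution with stride 2 (doubles H and W), then BN + ReLU
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 5, stride=2, padding=2, output_padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True))

        self.net = nn.Sequential(                      # 4 -> 8 -> 16 -> 32 -> 64 -> 128
            up(ngf * 16, ngf * 8),
            up(ngf * 8, ngf * 4),
            up(ngf * 4, ngf * 2),
            up(ngf * 2, ngf),
            nn.ConvTranspose2d(ngf, 3, 5, stride=2, padding=2, output_padding=1),
            nn.ConvTranspose2d(3, 3, 5, stride=1, padding=2),  # optional fine-tuning layer
            nn.Tanh())                                 # 128x128x3 sample in [-1, 1]

    def forward(self, z):
        x = self.fc(z).view(-1, self.ngf * 16, 4, 4)
        return self.net(x)

g = Generator()
z = torch.rand(8, 100) * 2 - 1                         # entries in [-1, 1], as in the paper
print(g(z).shape)                                      # torch.Size([8, 3, 128, 128])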
§.§ Convolutional Neural Network
The ResNet-50 <cit.> model is used in our experiment. We resize the generated images to 256×256×3 using bilinear sampling. The generated images are mixed with the original training set as the input of the CNN. That is, the labeled and unlabeled data are simultaneously trained. These training images are shuffled. Following the conventional fine-tuning strategy <cit.>, we use a model pre-trained on ImageNet <cit.>. We modify the last fully-connected layer to have K neurons to predict the K-classes, where K is the number of the classes in the original training set (as well as the merged new training set). Unlike <cit.>, we do not view the new samples as an extra class but assign a uniform label distribution over the existing classes. So the last fully-connected layer remains K-dimensional. The assigned label distribution of the generated images is discussed in the next section.
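A sketch of this baseline setup in PyTorch/torchvision (our re-expression; the original experiments use Matconvnet, see the implementation details) simply swaps the final classifier of an ImageNet-pretrained ResNet-50 for a K-way layer:

import torch.nn as nn
from torchvision import models

K = 751                                   # identities in the Market-1501 training set
model = models.resnet50(pretrained=True)  # ImageNet-pretrained backbone
# Replace the 1000-way ImageNet classifier by a K-way identity classifier;
# the 2048-dim input of this layer is the embedding extracted at test time.
model.fc = nn.Linear(model.fc.in_features, K)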
§ THE PROPOSED REGULARIZATION METHOD
In this section, we first revisit the label smoothing regularization (LSR), which is used for fully-supervised learning. We then extend LSR to the scenario of unlabeled learning, yielding the proposed label smoothing regularization for outliers (LSRO) method.
§.§ Label Smoothing Regularization Revisit
LSR was proposed in the 1980s and recently re-discovered by Szegedy <cit.>. In a nutshell, LSR assigns small values to the non-ground truth classes instead of 0. This strategy discourages the network to be tuned towards the ground truth class and thus reduces the chances of over-fitting. LSR is proposed for use with the cross-entropy loss <cit.>.
Formally, let k∈{1,2,...,K} be the pre-defined classes of the training data, where K is the number of classes. The cross-entropy loss can be formulated as:
l = -∑_k=1^Klog(p(k))q(k),
where p(k)∈[0,1] is the predicted probability of the input belonging to class k, output by the CNN. It is derived from the softmax function which normalizes the output of the previous fully-connected layer. q(k) is the ground truth distribution. Let y be the ground truth class label; q(k) can then be defined as:
q(k)=
0, if k≠ y;
1, if k=y.
If we discard the 0 terms in Eq. <ref>, the cross-entropy loss is equivalent to only considering the ground truth term in Eq. <ref>.
l = - log(p(y)).
So, minimizing the cross-entropy loss is equivalent to maximizing the predicted probability of the ground-truth class.
In <cit.>, the label smoothing regularization (LSR) is introduced to take the distribution of the non-ground truth classes into account. The network is thus encouraged not to be too confident towards the ground truth. In <cit.>, the label distribution q_LSR(k) is written as:
q_LSR(k)=
ε/K, if k≠ y;
1-ε+ε/K, if k=y,
where ε∈[0,1] is a hyperparameter. If ε is zero, Eq. <ref> reduces to Eq. <ref>. If ε is too large, the model may fail to predict the ground truth label. So in most cases, ε is set to 0.1.
Szegedy assume that the non-ground truth classes take on a uniform label distribution. Considering Eq. <ref> and Eq. <ref>, the cross-entropy loss evolves to:
l_LSR = - (1-ε)log(p(y)) - ε/K∑_k=1^Klog(p(k)).
Compared with Eq. <ref>, Eq. <ref> pays additional attention to the other classes, rather than only the ground truth class. In this paper, we do not employ LSR on the IDE baseline because it yields a slightly lower performance than using Eq. <ref> (see Section <ref>). We re-introduce LSR because it inspires us in designing the LSRO method.
§.§ Label Smoothing Regularization for Outliers
The label smoothing regularization for outliers (LSRO) is used to incorporate the unlabeled images in the network. This extends LSR from the supervised domain to leverage unsupervised data generated by the GAN.
In LSRO, we propose a virtual label distribution for the unlabeled images. We set the virtual label distribution to be uniform over all classes, based on two considerations. 1) We assume that the generated samples do not belong to any pre-defined class. 2) LSR assumes a uniform distribution over all classes to address over-fitting. During testing, we expect that the maximum class probability of a generated image will be low, i.e., the network will fail to predict a particular class with high confidence. Formally, for a generated image, its class label distribution, q_LSRO(k), is defined as:
q_LSRO(k)= 1/K.
We call Eq. <ref> the label smoothing regularization for outliers (LSRO).
The one-hot distribution defined in Eq. <ref> will still be used for the loss computation for the real images in the training set.
Combining Eq. <ref>, Eq. <ref> and Eq. <ref>, we can re-write the cross-entropy loss as:
l_LSRO = -(1-Z) log(p(y)) - Z/K∑_k=1^Klog(p(k)).
For a real training image, Z=0. For a generated training image, Z=1. So our system actually has two types of losses, one for real images and one for generated images.
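The LSRO loss is straightforward to implement. The following PyTorch sketch (ours) computes the standard cross-entropy for real images (Z=0) and the uniform-distribution term for GAN images (Z=1):

import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, z):
    """logits: (N, K) class scores; labels: (N,) ground truth (arbitrary for GAN
    images); z: (N,) float flags, 1 for GAN images and 0 for real images."""
    log_p = F.log_softmax(logits, dim=1)                     # log p(k)
    real_term = F.nll_loss(log_p, labels, reduction="none")  # -log p(y)
    uniform_term = -log_p.mean(dim=1)                        # -(1/K) sum_k log p(k)
    return ((1 - z) * real_term + z * uniform_term).mean()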
Advantage of LSRO. Using LSRO, we can deal with more training images (outliers) that are located near the real training images in the sample space, and introduce more color, lighting and pose variances to regularize the model. For instance, if we only have one green-clothed identity in the training set, the network may be misled into considering that the color green is a discriminative feature, and this limits the discriminative ability of the model. By adding generated training samples, such as an unlabeled green-clothed person, the classifier will be penalized if it makes the wrong prediction towards the labeled green-clothed person. In this manner, we encourage the network to find more underlying causes and to be less prone to over-fitting. We only use the GAN trained on the original training set to produce outlier images. It would be interesting to further evaluate whether real-world unlabeled images are able to achieve a similar effect (see Table <ref>).
Competing methods. We compare LSRO with two alternative methods. Details of both methods are available in existing literature <cit.>; brief descriptions follow.
* All in one. Using <cit.>, a new class label is created, i.e., K+1, and every generated sample is assigned to this class. CNN training then follows as in Section <ref>.
* Pseudo label. Using <cit.>, during network training, each incoming GAN-image is passed forward through the current network and is assigned a pseudo label by taking the maximum value of the probability prediction vector (p(k) in Eq. <ref>). This GAN-image can be thus trained in the network with this pseudo label. During training, the pseudo label is assigned dynamically, so that the same GAN-image may receive different pseudo labels each time it is fed into the network. In our experiments, we begin feeding GAN images and assigning them pseudo labels after 20 epochs. We also set a global weight to the softmax loss of 0.1 to the GAN and 1 to the real images.
Our experimental results show that the two methods also work on the GAN images and that LSRO is superior to “All in one” and “Pseudo label”. Explanations are provided in the Section <ref>.
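For illustration, the target distributions the three strategies assign to a single GAN image can be written down side by side (a toy sketch of ours, with K = 4 existing classes):

import torch
import torch.nn.functional as F

K = 4
logits = torch.randn(1, K)                                # current prediction for a GAN image

all_in_one = F.one_hot(torch.tensor([K]), K + 1).float()  # one-hot on a new (K+1)-th class
pseudo = F.one_hot(logits.argmax(dim=1), K).float()       # one-hot on the argmax class
lsro = torch.full((1, K), 1.0 / K)                        # uniform over the K existing classes
print(all_in_one, pseudo, lsro, sep="\n")

The first two strategies still commit to a one-hot target, whereas LSRO deliberately spreads the target mass uniformly.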
§ EXPERIMENT
We mainly evaluate the proposed method using the Market-1501 <cit.> dataset, because it is a large scale and has a fixed training/testing split. We also report results on the CUHK03 dataset <cit.>, but due to the computational cost of 20 training/testing splits, we only use the GAN images generated from the Market-1501 dataset. In addition, we evaluate our method on a recently released pedestrian dataset DukeMTMC-reID <cit.> and a fine-grained recognition dataset CUB-200-2011 <cit.>.
§.§ Person Re-id Datasets
Market-1501 is a large-scale person re-ID dataset collected from six cameras. It contains 19,732 images for testing and 12,936 images for training. The images are automatically detected by the deformable part model (DPM) <cit.>, so misalignment is common, and the dataset is close to realistic settings. There are 751 identities in the training set and 750 identities in the testing set. There are 17.2 images per identity in the training set. We use all the 12,936 detected images from the training set to train the GAN.
CUHK03 contains 14,097 images of 1,467 identities. Each identity is captured by two cameras on the CUHK campus. This dataset contains two image sets. One is annotated by hand-drawn bounding boxes, and the other is produced by the DPM detector <cit.>. We use the detected set in this paper. There are 9.6 images per identity in the training set. We report the averaged result after training/testing 20 times. We use the single shot setting.
DukeMTMC-reID is a subset of the newly-released multi-target, multi-camera pedestrian tracking dataset <cit.>. The original dataset contains eight 85-minute high-resolution videos from eight different cameras. Hand-drawn pedestrian bounding boxes are available. In this work, we use a subset of <cit.> for image-based re-ID, in the format of the Market-1501 dataset <cit.>. We crop pedestrian images from the videos every 120 frames, yielding 36,411 total bounding boxes with IDs annotated by <cit.>. The DukeMTMC-reID dataset for re-ID has 1,812 identities from eight cameras. There are 1,404 identities appearing in more than two cameras and 408 identities (distractor ID) who appear in only one camera. We randomly select 702 IDs as the training set and the remaining 702 IDs as the testing set. In the testing set, we pick one query image for each ID in each camera and put the remaining images in the gallery. As a result, we get 16,522 training images with 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images. The evaluation protocol is available on our website <cit.>. Some example re-ID results from the DukeMTMC-reID are shown in Fig. <ref>.
§.§ Implementation Details
CNN re-ID baseline. We adopt the CNN re-ID baseline used in <cit.>. Specifically, the Matconvnet <cit.> package is used. During training, we use the ResNet-50 model <cit.> and modify the fully-connected layer to have 751, 702 and 1,367 neurons for Market-1501, DukeMTMC-reID and CUHK03, respectively. All the images are resized to 256×256 before being randomly cropped into 224×224 with random horizontal flipping. We insert a dropout layer before the final convolutional layer and set the dropout rate to 0.5 for CUHK03 and to 0.75 for Market-1501 and DukeMTMC-reID. We use stochastic gradient descent with momentum 0.9. The learning rate of the convolution layers is set to 0.002 and decays to 0.0002 after 40 epochs; we stop training after the 50th epoch. During testing, we extract the 2,048-dim CNN embedding in the last convolutional layer for a 224×224 input image. The similarity between two images is calculated by the cosine distance for ranking.
GAN training and testing. We use Tensorflow <cit.> and the DCGAN package <cit.> to train the GAN model using the provided data in the original training set without preprocessing (e.g., foreground detection). All the images are resized to 128×128 and randomly flipped before training. We use Adam <cit.> with the parameters β_1=0.5, β_2=0.99. We stop training after 30 epochs. During GAN testing, we input a 100-dim random vector into the GAN, with the value of each entry in [-1, 1]. The output image is resized to 256×256 and then used in CNN training (with LSRO). More GAN images are shown in Fig. <ref>.
§.§ Evaluation
The ResNet baseline. Using the training/testing procedure described in Section <ref>, we report the baseline performance of ResNet in Table <ref>, Table <ref> and Table <ref>. The rank-1 accuracy is 73.69%, 71.5% and 60.28% on Market-1501, CUHK03 and DukeMTMC-reID, respectively. Our baseline results are on par with those reported in <cit.>. Note that the baseline alone exceeds many previous works <cit.>.
The GAN images improve the baseline.
As shown in Table <ref>, when we add 24,000 GAN images to the CNN training, our method significantly improves the re-ID performance on Market-1501. We observe improvements of +4.37% (from 73.69% to 78.06%) and +4.75% (from 51.48% to 56.23%) in rank-1 accuracy and mAP, respectively. On CUHK03, we observe improvements of +1.6%, +1.2%, +0.8% and +1.6% in rank-1, rank-5, rank-10 accuracy and mAP, respectively. The improvement on CUHK03 is relatively small compared to that on Market-1501, because the DCGAN model is trained on Market-1501 and the generated images share a more similar distribution with Market-1501 than with CUHK03. We also observe improvements of +2.46% and +2.14% in rank-1 and mAP, respectively, over the strong ResNet baseline on the DukeMTMC-reID dataset. These results indicate that the unlabeled images generated by the GAN effectively yield improvements over the baseline using the LSRO method.
The impact of using different numbers of GAN images during training.
We evaluate how the number of GAN images affects the re-ID performance. Since unlabeled data is easy to obtain, we expect the model to learn more general knowledge as the number of unlabeled images increases. The results on Market-1501 are shown in Table <ref>. We note that the number of real training images in Market-1501 is 12,936. Two observations are made.
First, the addition of different numbers of GAN images consistently improves the baseline. Adding approximately 3× GAN images relative to the real training set still yields a +2.38% improvement in rank-1 accuracy.
Second, the peak performance is achieved when 2× GAN images are added. When too few GAN samples are incorporated into the system, the regularization ability of the LSRO is inadequate. In contrast, when too many GAN samples are present, the learning machine tends to converge towards assigning uniform prediction probabilities to all the training samples, which is not desirable. Therefore, a trade-off is recommended to avoid poor regularization and over-fitting to uniform label distributions.
GAN images vs. real images in training. To further evaluate the proposed method, we replace the GAN images with real images from CUHK03, which are viewed as unlabeled in training. Since CUHK03 contains only 14,097 images, we randomly select 12,000 for a fair comparison.
Experimental results are shown in Table <ref>. We compare the results obtained using the 12,000 CUHK03 images and the 12,000 GAN images. We find that the real data from CUHK03 also assists in the regularization and improves the performance, but the model trained with GAN-generated data is slightly better. In fact, although the images generated by DCGAN are visually imperfect (see Fig. <ref>), they possess a regularization ability similar to that of the real images.
Comparison with the two competing methods.
We compare the LSRO method with the “All in one” and “Pseudo label” methods implied in <cit.> and <cit.>, respectively. The experimental results on Market-1501 are summarized in Table <ref>.
We first observe that both strategies yield improvement over the baseline. The “All in one” method treats all the unlabeled samples as a new class, which forces the network to make “careful” predictions for the existing K classes. The “Pseudo label” method gradually labels the new data, and thus introduces more variance to the network.
Nevertheless, we find that LSRO exceeds both strategies by approximately +1% ∼ +2%. We speculate the reason is that the “All in one” method makes a coarse label estimation, while the “Pseudo label” method originally assumes that all the unlabeled data belongs to the existing classes <cit.>, which is not true in person re-ID. While these two methods still use the one-hot label distribution, the LSRO method makes a weaker assumption (label smoothing) about the labels of the GAN images. These reasons may explain why LSRO has superior performance.
Comparison with the state-of-the-art methods.
We compare our method with the state-of-the-art methods on Market-1501 and CUHK03 in Table <ref> and Table <ref>, respectively. On Market-1501, we achieve rank-1 accuracy = 78.06%, mAP = 56.23% in the single query mode, which is the best result among the published papers and the second best among all available results including ArXiv papers. On CUHK03, we arrive at rank-1 accuracy = 73.1%, mAP = 77.4%, which is also very competitive. The previous best result is produced by combining the identification and verification losses <cit.>. We further investigate whether the LSRO could work with this model. We fine-tuned the publicly available model in <cit.> with LSRO and achieve state-of-the-art results of rank-1 accuracy = 83.97%, mAP = 66.07% on Market-1501. On CUHK03, we also observe a state-of-the-art performance of rank-1 accuracy = 84.6%, mAP = 87.4%. We therefore show that the LSRO method is complementary to previous methods due to the regularization provided by the GAN data.
§.§ Fine-grained Recognition
Fine-grained recognition also faces the problem of a lack of training data and annotations. To further test the effectiveness of our method, we provide results on the CUB-200-2011 dataset <cit.>. This dataset contains 200 bird classes with 29.97 training images per class on average. Bounding boxes are used in both training and testing. We do not use part annotations.
In our implementation, the ResNet baseline has a recognition accuracy of 82.6%, which is slightly higher than the 82.3% reported in <cit.>. This is the baseline we will compare our method with.
Using the same pipeline in Fig. <ref>, we train DCGAN on the 5,994 images in the training set, and then we combine the real images with the generated images (see Fig. <ref>) to train the CNN. During testing, we adopt the standard 10-crop testing <cit.>, which uses 256×256 images as input and the averaged prediction as the classification result. As shown in Table <ref>, the strong baseline outperforms some recent methods, and the proposed method further yields an improvement of +0.6% (from 82.6% to 83.2%).
We also combine the two models generated by our method with different initializations to form an ensemble. This leads to an 84.4% recognition accuracy. In <cit.>, Liu report an 85.5% accuracy with a five-model ensemble using parts and the global scene. We do not include this result in the comparison because extra annotations are used. We focus on the regularization ability of the GAN, not on producing a state-of-the-art result.
§ CONCLUSION
In this paper, we propose an “in vitro” usage of the GANs for representation learning, , person re-identification. Using a baseline DCGAN model <cit.>, we show that the imperfect GAN images effectively demonstrate their regularization ability when trained with a ResNet baseline model. Through the proposed LSRO method, we mix the unlabeled GAN images with the labeled real training images for simultaneous semi-supervised learning. Albeit simple, we demonstrate consistent performance improvement over the re-ID and fine-grained recognition baseline systems, which sheds light on the practical use of GAN-generated data.
In the future, we will continue to investigate whether GAN images of better visual quality yield superior results when integrated into supervised learning. This paper provides baseline evaluations using the imperfect GAN images, and further investigation along this line would be intriguing.
Acknowledgements. We thank the support of Data to Decisions Cooperative Research Centre (<www.d2dcrc.com.au>), Google Faculty Research Award and NVIDIA Corporation with the donation of TITAN X (Pascal) GPU.
ieee
|
http://arxiv.org/abs/1702.00299v2 | 20170127121013 | A charged anisotropic well-behaved Adler-Finch-Skea solution Satisfying Karmarkar Condition | ["Piyali Bhar", "Ksh. Newton Singh", "Farook Rahaman", "Neeraj Pant", "Sumita Banerjee"] | physics.gen-ph | ["physics.gen-ph"] |
piyalibhar90@gmail.com
Department of
Mathematics,Government General Degree College, Singur, Hooghly 712 409, West Bengal,
India
ntnphy@gmail.com
Department of Physics, National Defence Academy, Khadakwasla, Pune-411023, India.
rahaman@associates.iucaa.in
Department of Mathematics, Jadavpur University, Kolkata, West Bengal-700032, India.
neeraj.pant@yahoo.com
Department of Mathematics, National Defence Academy, Khadakwasla, Pune-411023, India.
banerjee.sumita.jumath@gmail.com
Department of Mathematics, Budge-Budge Institute of Technology, Budge-Budge, West Bengal, India.
In the present article, we discover a new well-behaved charged anisotropic solution of the Einstein-Maxwell field equations. We take as ansatz the metric potential g_00 of the form given by Maurya et al. (arXiv:1607.05582v1) with n=2. In their article it is mentioned that for n=2 the solution is not well-behaved for a neutral configuration, as the speed of sound is non-decreasing radially outward. However, the solution can represent a physically possible configuration with the inclusion of some net electric charge, i.e. the solution becomes well-behaved, with the speed of sound decreasing radially outward, for a charged configuration. Due to the inclusion of electric charge the solution leads to a very stiff equation of state (EoS), with central sound velocities v_r0^2=0.819, v_t0^2=0.923, and the compactness parameter u=0.823 is close to the Buchdahl limit 0.889. This stiff EoS supports a compact star configuration of mass 5.418M_⊙ and radius of 10.1 km.
pacs: 02.60.Cb; 04.20.-q; 04.20.Jb; 04.40.Nr; 04.40.Dg
A charged anisotropic well-behaved Adler-Finch-Skea solution Satisfying Karmarkar Condition

Jan 15 2017
§ INTRODUCTION
Many studies of astrophysical massive compact objects assume that the matter distribution is isotropic. Such simplified assumptions yield satisfactory results only to some extent and not for all systems. Recent research in theoretical physics on compact stellar systems suggests that the matter distribution in the interior of these compact objects is most probably anisotropic in certain density ranges <cit.>. In the light of these studies, a new line of research has emerged to study the properties of anisotropic matter distributions in general relativity. The anisotropy in pressure could be introduced by the existence of a solid core, a type P superfluid, complex nuclear interactions, or the inclusion of a net electric charge. The energy-momentum tensor T^μν of such anisotropic matter is equivalent to that obtained by assuming a fluid composed of two perfect fluids, or a perfect fluid and a null fluid, or two null fluids <cit.>. The methods most commonly adopted by researchers to explore new analytic solutions of the field equations assume g_00 or g_11, the radial pressure, the anisotropy, the electric field, the density or an equation of state, so that the field equations become integrable <cit.>. The Buchdahl limit for ideal fluid distributions postulates that the compactness parameter u=2M/R should be ≤ 8/9, so that the configuration does not undergo gravitational collapse to form a singularity or black hole. This upper bound on u was generalized by Andreasson <cit.> with the inclusion of charge, anisotropy and even a cosmological constant. In the derivation of the new generalized upper bound on u, he assumed a simple inequality between pressure and density, p_r+2p_t≤ρ. Many investigations have also shown that the behavior of a collapsing star is strongly influenced by its initial static configuration on account of various parameters like pressure anisotropy, charge, EoS, shear, radiation etc. Two initially static configurations with the same masses and radii but different pressure profiles lead, when undergoing collapse, to very different temperature evolutions at their later stages <cit.>. Hence, to completely understand the physics of evolving stars one needs to understand the initial static configurations with the inclusion of these various factors. In the Newtonian approximation, adiabatic collapse of a fluid distribution obeying a polytropic EoS is possible only when the adiabatic index Γ < 4/3. However, this picture was overturned by Chan et al. <cit.> for anisotropic fluid objects, where collapse is still possible even for Γ≥ 4/3, depending on the nature of the anisotropy. Herrera and Santos <cit.> have also suggested that the stability of static stellar configurations can be enhanced by the nature of the local anisotropy. The inclusion of charge is a source of an electric field which can cause pressure anisotropy, as pointed out by Usov <cit.>, and it can counterbalance the gravitational attraction by electric repulsion in addition to the pressure gradient. Using this concept, Ivanov <cit.> proposed a model for a charged perfect fluid that inhibits the growth of spacetime curvature so as to avoid singularities. Bonnor <cit.> also pointed out that a dust distribution of arbitrarily large mass can be bound within a very small radius and maintained in equilibrium against gravity by the repulsive force produced by a small amount of charge. It is thus interesting to study the implications of the Einstein-Maxwell field equations in the general relativistic context.

Bhar et al. <cit.> also presented a new class of exact interior solutions of the Einstein-Maxwell field equations in (2+1)-dimensional spacetime by assuming a Chaplygin gas EoS and the Krori-Barua metric, with the charged BTZ spacetime as exterior. Using this solution they discussed all the physical properties of a charged anisotropic stellar configuration. Bhar and Rahaman <cit.> proposed a new model of a dark energy star consisting of five zones, viz., a solid core of constant energy density, a thin shell between core and interior, an inhomogeneous interior region with anisotropic pressures, a thin shell, and the exterior vacuum region. Bhar <cit.> also used the Krori-Barua metric potentials in the presence of a quintessence field to model a stable strange star.

In this article, we adopt a new approach in order to discover a new exact solution of the Einstein-Maxwell field equations that satisfies the Tolman-Oppenheimer-Volkoff (TOV) equation. We follow the method used by Karmarkar (1948) to solve the field equations, whose solutions are classified as Class One. In this method the Riemann curvature tensor ℛ_μναβ satisfies a particular equation that links the two metric components g_00 and g_11 in a single equation, i.e. the two metric components depend on each other. Therefore, we only need to assume one of the metric potentials and the electric field intensity in order to integrate the field equations. The rest of the physical quantities, like pressure, density, sound speed, anisotropy, etc., can then be completely determined from g_00, g_11 and E^2 alone. Many other articles on embedding Class One solutions are available in the literature <cit.>.
§ BASIC FIELD EQUATIONS
To describe the interior of a static and spherically symmetric object, the line element can be taken in canonical coordinates as
ds^2=-e^ν(r)dt^2+e^λ(r)dr^2+r^2(dθ^2+sin^2θ dϕ^2)
where ν and λ are functions of the radial coordinate r only.
Now if the space-time (<ref>) satisfies the Karmarkar condition <cit.>
ℛ_1414ℛ_2323=ℛ_1212ℛ_3434+ ℛ_1224ℛ_1334
with ℛ_2323≠ 0 <cit.>, it represents a space-time of embedding class 1.
For the condition (<ref>), the line element (<ref>) gives the following differential equation
λ'ν'/(1-e^λ)=-2(ν”+ν'^2)+ν'^2+λ'ν'.
with e^λ≠ 1. Solving equation (<ref>) we get,
e^λ=1+Fν'^2e^ν
where F ≠ 0, an arbitrary constant.
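Equation (<ref>) can be verified symbolically; the following sympy sketch (ours) confirms that e^λ = 1 + Fν'^2 e^ν satisfies the Karmarkar differential equation for an arbitrary ν(r):

import sympy as sp

r, F = sp.symbols("r F", positive=True)
nu = sp.Function("nu")(r)
elam = 1 + F * sp.diff(nu, r)**2 * sp.exp(nu)   # proposed solution for e^lambda
lamp = sp.diff(sp.log(elam), r)                 # lambda' derived from it
nup = sp.diff(nu, r)

lhs = lamp * nup / (1 - elam)
rhs = -2 * (sp.diff(nu, r, 2) + nup**2) + nup**2 + lamp * nup
print(sp.simplify(lhs - rhs))                   # expected output: 0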
We assume that the matter within the star is charged and anisotropic in nature. The corresponding energy-momentum tensor is described by
T^μ_ξ = ρ v^μ v_ξ + p_r χ_ξχ^μ + p_t(v^μ v_ξ - χ_ξχ^μ - g^μ_ξ) + (1/4π)(-ℱ^μνℱ_ξν + (1/4) g^μ_ξℱ_σνℱ^σν)
here all the symbols have their usual meanings.
Now for the line element (<ref>) and the matter distribution (<ref>), the Einstein-Maxwell field equations (assuming G=c=1) take the form
ℛ^μ_ξ - (1/2)ℛ g^μ_ξ = -8π T^μ_ξ

(1/√(-g)) ∂/∂ x^β(√(-g) ℱ^μβ) = -4π𝒥^μ
ℱ^μν_;β+ℱ^νβ_;μ+ℱ^βμ_;ν = 0
The electromagnetic field tensor ℱ^μβ is defined by
ℱ^μβ=∂^β A^μ -∂^μ A^β
Here A^μ=(ϕ(r),0,0,0) is the electromagnetic four-potential and 𝒥^μ is the four-current density, defined as
𝒥^μ = (σ_0/√(g_00)) dx^μ/dx^0
provided σ_0 is the proper charge density.
For a static fluid configuration, the only non-zero component of the four-current density is 𝒥^0, and it is a function of r alone because of spherical symmetry. From (<ref>) we get
ℱ^01 = -e^-(ν+λ)/2 q(r)/r^2
where q(r) is the charge enclosed within a sphere of radius r and given by
q(r)=4π∫_0^r e^λ/2σ_0 η^2 dη
The Einstein-Maxwell field equations (<ref>)-(<ref>) reduce to the following system of four non-linear differential equations:

(1-e^-λ)/r^2 + e^-λλ'/r = 8πρ + E^2

(e^-λ-1)/r^2 + e^-λν'/r = 8π p_r - E^2

e^-λ(ν”/2 + ν'^2/4 - ν'λ'/4 + (ν'-λ')/2r) = 8π p_t + E^2

(e^-λ/2/(4π r^2)) (r^2E)' = σ(r)
where σ(r) is the charge density and E=q(r)/r^2 is the electric field intensity at the interior.
Now we have to solve the Einstein-Maxwell field equations (<ref>)-(<ref>) with the help of equation (<ref>). One can notice that we have five equations with six unknowns, namely λ, ν, ρ, p_r, p_t and E. To solve the above set of equations, let us adopt for the metric coefficient g_tt the ansatz proposed by Adler <cit.>,
e^ν=B(1+Cr^2)^2
where B and C are constants.
Let us also assume the electric field in the form given below,
E^2 = KCr^2/(1+Cr^2)
On using equation (<ref>) and (<ref>) we obtain,
e^λ=1 + 16 B C^2 F r^2
This metric form of e^λ is similar to that of the Finch-Skea solution <cit.>.
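One can check this directly from the class-one relation e^λ = 1 + Fν'^2e^ν: with the Adler ansatz,

ν' = 4Cr/(1+Cr^2), ν'^2 e^ν = [16C^2r^2/(1+Cr^2)^2] B(1+Cr^2)^2 = 16BC^2r^2,

so that e^λ = 1 + 16BC^2Fr^2 follows immediately, with the constants entering only through the combination BC^2F.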
Now employing the values of e^ν and e^λ in the Einstein-Maxwell field equations (<ref>)-(<ref>) and using the expression for E^2 given in eq. (<ref>), we obtain the expressions for the matter density, radial and transverse pressures and proper charge density as,
8πρ = 16BC^2F(3 + 16BC^2Fr^2)/(1 + 16BC^2Fr^2)^2 - KCr^2/(1 + Cr^2)
8π p_r = [4C + KCr^2 - 16BC^2F{1 + Cr^2(1 - Kr^2)}] / [(1 + Cr^2)(1 + 16BC^2Fr^2)]
8π p_t = [1/((1 + Cr^2)(1 + 16BC^2Fr^2)^2)] × [4C - KCr^2 - 256B^2C^5F^2Kr^6 - 16BC^2F{1 - Cr^2(1 - 2Kr^2)}]
σ(r) = CKr(Cr^2+2) / [2π(Cr^2+1)^2 √(16BC^2Fr^2+1)]
and the anisotropic factor Δ is obtained as,
8πΔ = 8π(p_t-p_r)
= [2Cr^2/((1 + Cr^2)(1 + 16BC^2Fr^2)^2)] × [-K - 16BC^2F{1 + 2Kr^2 - 8BCF(1 + Cr^2 - 2KCr^4)}]
§ BOUNDARY CONDITIONS AND DETERMINATION OF THE CONSTANTS
We match our interior space-time to the exterior Reissner-Nordström line element given by
ds^2 = -(1-2m/r+q^2/r^2)dt^2+(1-2m/r+q^2/r^2)^-1dr^2
+r^2(dθ^2+sin^2θ dϕ^2)
with the radial coordinate r>m+√(m^2-q^2)
Using the continuity of the metric coefficients e^ν and e^λ across the boundary we get the following equations
1-2M/r_b+q^2(r_b)/r_b^2 = B(1+C r_b^2)^2
(1-2M/r_b+q^2(r_b)/r_b^2)^-1 = 1 + 16 B C^2 F r_b^2
p_r(r=r_b) = 0
On using the boundary conditions (<ref>)-(<ref>) we get
F = K r_b^2+4/16 C B (1-C K r_b^4+C r_b^2)
B = C K r_b^5-2 C M r_b^2+C r_b^3+r_b-2 M/r_b (C r_b^2+1)^2
C = [-r_b^2 √(K^2r_b^6 + 4Kr_b^4 - 16Mr_b + 4r_b^2 + 16M^2) - Kr_b^5 + 6Mr_b^2 - 2r_b^3] / [2(3Kr_b^7 - 5Mr_b^4 + 2r_b^5)]
We have chosen M, r_b and K as free parameters.
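For concreteness, the matching can be evaluated numerically. A minimal Python sketch (in geometrized units with G=c=1, so M and r_b are in km and K in km^-2; the function name and sample inputs below are ours, for illustration only, and are not fitted values from the text):

import numpy as np

def model_constants(M, rb, K):
    """Return (C, B, F) from the boundary conditions above (geometrized units)."""
    # C from matching e^nu, e^lambda and p_r(rb) = 0 simultaneously
    root = np.sqrt(K**2*rb**6 + 4*K*rb**4 - 16*M*rb + 4*rb**2 + 16*M**2)
    C = (-rb**2*root - K*rb**5 + 6*M*rb**2 - 2*rb**3) \
        / (2.0*(3*K*rb**7 - 5*M*rb**4 + 2*rb**5))
    # B from continuity of the metric potentials at r = rb
    B = (C*K*rb**5 - 2*C*M*rb**2 + C*rb**3 + rb - 2*M) / (rb*(C*rb**2 + 1)**2)
    # F from the vanishing of the radial pressure at the boundary
    F = (K*rb**2 + 4) / (16*C*B*(1 - C*K*rb**4 + C*rb**2))
    return C, B, F

C, B, F = model_constants(M=2.0, rb=10.1, K=1.0e-4)   # placeholder inputs
print(C, B, F)   # inputs must be tuned until C, B, F are all positive (physical branch)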
§ PHYSICAL ANALYSIS OF OUR PRESENT MODEL
Our present model satisfies the following conditions:
* We know that the metric coefficients should be regular inside the stellar interior. From our solution we can easily check that e^λ(r=0)=1 and e^ν(r=0)=B, a positive constant. To see the behavior of the metric potentials we plot e^-ν and e^-λ in fig. <ref> (left). The profiles show that the metric coefficients are regular, monotonically decreasing functions of r inside the stellar interior.
* The matter density, radial and transverse pressures should be positive inside the stellar interior for a physically acceptable model. The radial pressure should vanish at the boundary of the star.
Moreover the central density, central pressure and Zeldovich condition can be obtained as,
ρ_c = 6BC^2F/π>0
p_rc = p_tc = C(1-4BCF)/(2π) > 0
p_rc/ρ_c = (1-4BCF)/(12BCF) ≤ 1
The above equations imply that our model is free from central singularities. They also give a constraint on BCF: the Zeldovich condition (1-4BCF)/(12BCF) ≤ 1 is equivalent to BCF ≥ 1/16, while p_rc>0 requires BCF < 1/4, so that 1/16 ≤ BCF < 1/4.
Now for our model the gradients of the matter density and radial pressure are obtained as,
dρ/dr = -[2 C K r/(1 + C r^2)^2+512 B^2 C^4 F^2 r (5 + 16 B C^2 F r^2)/(1 + 16 B C^2 F r^2)^3]
dp_r/dr = [2Cr/((1 + Cr^2)^2 (1 + 16BC^2Fr^2)^2)] × [K - 4C + 32BC^2F{8BCF(1 + 2Cr^2 + C^2r^4 + CKr^4) + (K-4C)r^2 - 2}]
We note that at the point r=0 both dρ/dr=0 and dp_r/dr=0 and,
d^2ρ/dr^2 = -(C/4π)(K + 1280B^2C^3F^2)
d^2p_r/dr^2 = [CK - 4C^2(1 + 16BCF - 256B^2C^2F^2)]/(4π)
The profiles of the matter density, radial and transverse pressures are shown in fig. <ref> (right) and fig. <ref> (left) respectively. The figures show that the matter density ρ, radial pressure (p_r) and transverse pressure (p_t) are monotonically decreasing functions of r. All are positive for 0<r≤ r_b (r_b being the boundary of the star); both ρ and p_t remain positive at the boundary, whereas the radial pressure vanishes there. The profiles of the pressure-to-density ratios are also monotonically decreasing and less than 1, fig. <ref> (right). The profiles of dρ/dr, dp_r/dr and dp_t/dr are plotted in fig. <ref> (right). The plots show that dρ/dr, dp_r/dr and dp_t/dr are all negative, which once again verifies that ρ, p_r and p_t are monotonically decreasing functions of r.
* The profile of the anisotropic factor Δ is shown against r in fig. <ref> (left). The anisotropic factor is negative (i.e. p_t<p_r) from the center up to r=5.47 km and positive (i.e. p_t>p_r) for r>5.47 km up to the surface, with an increasing trend. Moreover, at the center of the star the anisotropic factor vanishes, which is also a required condition. The electric field vanishes at the center and increases monotonically outward, while the charge density increases up to r=3.84 km and then decreases until the surface, fig. <ref> (right).
§ MASS-RADIUS RELATION AND COMPACTNESS PARAMETER
The mass and compactness parameter of the compact star is obtained as,
m(r) = ∫_0^r 4πρ r^2 dr
= -Kr^3/6 + (r/2)[K/C - 16BC^2Fr^2/(1+16BC^2Fr^2)] - (K/(2C^3/2)) tan^-1(√(C) r)
u(r) = 2m(r)/r = -Kr^2/3 + K/C - 16BC^2Fr^2/(1+16BC^2Fr^2) - (K/C^3/2) tan^-1(√(C) r)/r
The profiles of the compactness parameter and the mass function are plotted against r in fig. <ref>. They show that the mass and compactness parameter are increasing functions of r and are regular everywhere inside the stellar interior.
§ ENERGY CONDITIONS
In this section we are going to verify the energy conditions, namely the null energy condition (NEC), dominant energy condition (DEC) and weak energy condition (WEC), at all points in the interior of the star; they are satisfied if the following inequalities hold simultaneously:
NEC : ρ(r)≥ 0 ,
WEC : ρ(r)-p_r(r) ≥ 0 and ρ(r)-p_t(r) ≥ 0,
DEC : ρ(r) ≥ |p_r|, |p_t|
We check the energy conditions with the help of graphical representation. In Fig. <ref> (left), we have plotted the L.H.S. of the above inequalities, which verifies that all the energy conditions are satisfied in the stellar interior.
§ STABILITY OF THE MODEL AND EQUILIBRIUM
§.§ Equilibrium under various forces
The equilibrium state under four forces, viz. the gravitational, hydrostatic, anisotropic and electric forces, can be analyzed by checking whether they satisfy the generalized Tolman-Oppenheimer-Volkoff (TOV) equation, which is given by
-[M_g(r)(ρ+p_r)/r] e^(ν-λ)/2 - dp_r/dr + (2/r)(p_t-p_r) + σ(r) E(r) e^λ/2 = 0,
where M_g(r) represents the gravitational mass within the radius r, which can be derived from the Tolman-Whittaker formula and the Einstein field equations and is defined by
M_g(r) = 4 π∫_0^r (T^t_t-T^r_r-T^θ_θ-T^ϕ_ϕ) r^2 e^(ν+λ)/2dr
For Eqs. (<ref>)-(<ref>), the above Eq. (<ref>) reduces to
M_g(r) = (1/2) r e^(λ-ν)/2 ν'.
Plugging the value of M_g(r) in equation (<ref>), we get
-(ν'/2)(ρ+p_r) - dp_r/dr + (2/r)(p_t-p_r) + σ(r) E(r) e^λ/2 = 0.
The above expression may also be written as
F_g+F_h+F_a+F_e=0,
where F_g, F_h, F_a and F_e represent the gravitational, hydrostatic, anisotropic and electric forces respectively.
The expression for F_g, F_h, F_a and F_e can be written as,
F_g = -(ν'/2)(ρ+p_r)
= -C^2r[1 + 8BCF(1 + 3Cr^2)] / [π(1 + Cr^2)^2 (1 + 16BC^2Fr^2)^2]
F_h = -dp_r/dr
= -Cr[K - 4C + 32BC^2F f_1(r)] / [4π(1 + Cr^2)^2 (1 + 16BC^2Fr^2)^2]
F_a = Cr^2[-K - 16BC^2F f_2(r)] / [2πr(1 + Cr^2)(1 + 16BC^2Fr^2)^2]
F_e = σ E e^λ/2 = CKr(3 + 2Cr^2) / [4π(1 + Cr^2)^2]
f_1(r) = 8BCF(1 + 2Cr^2 + C^2r^4 + CKr^4) + (K - 4C)r^2 - 2
f_2(r) = 1 + 2Kr^2 - 8BCF(1 + Cr^2 - 2KCr^4)
The profiles of the different forces are plotted in fig. <ref> (left). The figure shows that the gravitational force is dominating in nature and is counterbalanced by the combined effect of the hydrostatic and anisotropic forces.
§.§ Causality and stability condition
In this section we determine the subliminal sound speeds and the stability condition. For a physically acceptable model of an anisotropic fluid sphere, the radial and transverse speeds of sound should be less than 1, which is known as the causality condition. The radial (v_sr^2) and transverse (v_st^2) sound speeds can be obtained as
v_sr^2 = [(1 + 16BC^2Fr^2)/(K + 16BC^2F f_3(r))] × [4C - K - 32BC^2F{8BCF(1 + 2Cr^2 + C^2r^4 + CKr^4) + (K - 4C)r^2 - 2}]
v_st^2 = [1/(K + 16BC^2F f_3(r))] × [K + 4C[1 + 4BCF{6 + 3(4C + K)r^2 + 256B^2C^4F^2Kr^6 + 16BCF(2C^2r^4 + 3CKr^4 - 2Cr^2 - 2)}]]
f_3(r) = 3Kr^2 + 256B^2C^3F^2r^2{1 + 2Cr^2 + C(C + K)r^4} + 16BCF{5 + 10Cr^2 + C(5C + 3K)r^4}
The profiles of the radial and transverse sound speeds are plotted in fig. <ref>; the figure indicates that our model satisfies the causality condition.
§.§ Adiabatic index and stability condition
For a relativistic anisotropic sphere the stability is related to the adiabatic index Γ, the ratio of two specific heats, defined by <cit.>,
Γ_r = [(ρ+p_r)/p_r] dp_r/dρ; Γ_t = [(ρ+p_t)/p_t] dp_t/dρ
Now Γ>4/3 gives the condition for the stability of a Newtonian sphere, and Γ=4/3 is the condition for neutral equilibrium proposed by <cit.>. This condition changes for a relativistic isotropic sphere due to the regenerative effect of pressure, which renders the sphere more unstable. For an anisotropic general relativistic sphere the situation becomes more complicated, because the stability will depend on the type of anisotropy. For an anisotropic relativistic sphere the stability condition is given by <cit.>,
Γ > 4/3 + [(4/3)(p_ti-p_ri)/(|p_ri'| r) + (1/2) κ ρ_i p_ri/(|p_ri'| r)],
where p_ri, p_ti, and ρ_i are the initial radial pressure, tangential pressure, and energy density in static equilibrium satisfying (<ref>). The first and last terms inside the square brackets represent the anisotropic and relativistic corrections respectively; both quantities are positive and increase the unstable range of Γ <cit.>.
§.§ Harrison-Zeldovich-Novikov static stability criterion
The stability analyses adopted by <cit.>, <cit.> etc. require the determination of the eigen-frequencies of all the fundamental modes. However, <cit.> and <cit.> simplified such messy calculations and reduced them to a much simpler formalism. They assumed that the adiabatic index of a pulsating star is the same as in slowly deformed matter. This leads to a stable configuration only if the mass of the star is increasing with central density, i.e. dM/dρ_c > 0, and unstable if dM/dρ_c < 0.
In our solution, the mass as a function of central density can be written as
M = (8πρ_c R^3/6)/(1 + 16πρ_c R^2/3) + KαR^5/(2(1+αR^2))
α = √(πρ_c/(6BF))
which gives us (for a given radius, B and F)
dM/dρ_c = [3R^3 √(πρ_c/(BF)) / (2ρ_c (8πρ_c R^2+3)^2 [R^2 √(6πρ_c/(BF))+6]^2)] × [R^2 {48πρ_c (R^2 √(πρ_c/(BF)) + 2√6) + √6 K(8πρ_c R^2+3)^2} + 288√(πρ_c BF)]
Since all the quantities used in (<ref>) are positive and finite, dM/dρ_c is always > 0, which implies that the total mass of the stellar system increases with increasing central density. This condition is further confirmed by Fig. <ref> (right).
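The sign of dM/dρ_c can also be verified numerically from M(ρ_c) above without differentiating the closed form by hand. A short sketch in Python (geometrized units; the function name and the values of R, K, B, F here are placeholders of ours, not fitted parameters from the text):

import numpy as np

def mass_of_rhoc(rho_c, R=10.1, K=1.0e-4, B=0.5, F=50.0):
    """M(rho_c) as written in the text (geometrized units)."""
    alpha = np.sqrt(np.pi*rho_c/(6.0*B*F))
    return (8*np.pi*rho_c*R**3/6.0)/(1.0 + 16*np.pi*rho_c*R**2/3.0) \
           + K*alpha*R**5/(2.0*(1.0 + alpha*R**2))

rho = np.linspace(1e-5, 1e-3, 200)        # scan of central densities
dM = np.gradient(mass_of_rhoc(rho), rho)  # finite-difference derivative
print("dM/drho_c > 0 everywhere:", np.all(dM > 0))

Both terms of M(ρ_c) are manifestly increasing in ρ_c for positive parameters, so the printed check returns True for any such choice.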
§ DISCUSSION AND CONCLUSION
It has been observed that the physical parameters (e^-λ, e^-ν, ρ, p_r, p_t, p_r/ρ, p_t/ρ, v_r^2, v_t^2) are positive at the center, within the limits of a realistic equation of state, and monotonically decreasing outward (Figs. <ref>, <ref>, <ref>). However, the anisotropy, z_s, E^2 and Γ are increasing outward, which is necessary for a physically viable configuration (Figs. <ref>, <ref>). The proper charge density at the interior is also shown in Fig. <ref>. The variation of the compactness parameter u and the mass distribution with the radial coordinate in Fig. <ref> signifies that the new charged anisotropic solution leads to a very stiff EoS. This stiff EoS yields a compactness parameter of 0.823, which is very close to the Buchdahl limit 0.889, and a total mass of 4.156M_⊙ is bound within a very small radius of only 10.1 km. For the chosen values mentioned in Fig. <ref>, the variation of the anisotropy factor signifies that for 0 ≤ r ≤ 5.45 km, Δ < 0 (or p_r>p_t), and for 5.45 km < r ≤ 10.1 km, Δ > 0 (or p_t>p_r). The non-singular nature of the solution is reflected in the finite values of the central density 4.7× 10^15 g/cm^3, central pressure 28.03× 10^35 dyne/cm^2, and central sound speeds v_r^2=0.819 and v_t^2=0.923. The relativistic adiabatic indices at the center are Γ_r0=2.062 and Γ_t0=2.348, and both increase monotonically outward. The redshift of the configuration at the surface is about 1.01.
Furthermore, our presented solution satisfies the Weak Energy Condition (WEC), Null Energy Condition (NEC) and Dominant Energy Condition (DEC), as shown in Fig. <ref>. The stability factor |v_t^2-v_r^2| lies between 0 and 1, which represents a stable configuration (Fig. <ref>). The decreasing nature of the pressures and density is further justified by the negativity of their gradients, Fig. <ref>. The solution also represents a static equilibrium configuration, as the forces acting on the fluid sphere counterbalance each other: for a charged anisotropic stellar fluid in equilibrium, the gravitational force, the hydrostatic force, the Coulomb force and the anisotropic force act through a generalized TOV equation and counterbalance each other, Fig. <ref>. The stability analysis of the solution is also extended by adopting the Harrison-Zeldovich-Novikov static stability criterion. According to the static stability criterion, the mass must increase with increasing central density, i.e. dM/dρ_c>0 for stable and dM/dρ_c ≤ 0 for unstable configurations. In Fig. <ref>, we have plotted the mass by varying ρ_c from 0 to 8.082 × 10^16 g/cm^3, and it is noticeable that the maximum mass saturates at 5.418M_⊙ from about 6.735 × 10^16 g/cm^3 onward. Hence, from all the above analysis, we conclude that the presented solution satisfies the TOV equation, showing its equilibrium, and also satisfies the static stability criterion of Harrison-Zeldovich-Novikov, i.e. dM/dρ_c>0.
§.§ Acknowledgments
FR would like to thank the authorities of the Inter-University Centre
for Astronomy and Astrophysics, Pune, India for providing research facilities. FR is also
thankful to DST and SERB, Govt. of India for providing financial support.
99
rud M. Ruderman, Annu. Rev. Astron. Astrophys. 10, 427 (1972)
can V. Canuto, Neutron stars: general review, Solvay Conference on astrophysics and gravitation. Brussels, Belgium. 1973.
let1 P. S. Letelier, Phys. Rev. D, 22, 807 (1980)
let2 P. S. Letelier and R. Machado J. Math. Phys. 22, 827 (1981)
let3 P. S. Letelier, Nuovo Cimento B 69, 145 (1982)
bay S. S. Bayin, Phys. Rev. D 26, 1262 (1982)
dev1 D. Krsna, M. Gleiser: Gen. Relativ. Gravit. 34, 1793 (2002)
dev2 D. Krsna, M. Gleiser: Gen. Relativ. Gravit. 35, 1435 (2003)
esc Esculpi, M., Malaver, M., Aloma, E.: Gen. Relativ. Gravit. 39, 633 (2007)
mah1 Maharaj, S.D., Govender, M.: Aust. J. Phys. 50, 959 (1997)
mah2 Maharaj, S.D., Govender, M.: Int. J. Mod. Phys. D 14, 667 (2005)
ntn1 K. N. Singh et al.: Astrophys. Space Sci. (2015) 358:1
ntn2 K. N. Singh et al.: Int. J. Theor. Phys. (2015) 54:3408
ntn3 K. N. Singh, N. Pant Ind. J. Phys. 90, 843 (2016)
ntn4 K. N. Singh, N. Pant, M. Govender: Ind. J. Phys. 90, 1215 (2016)
maur1 Maurya, S.K., Gupta, Y.K.: Astrophys. Space Sci. 332, 481 (2011b)
maur2 Maurya, S.K., Gupta, Y.K.: Astrophys. Space Sci. 332, 155 (2011c)
and1 H. Andreasson: Commun. Math. Phys. 288, 715–730 (2009)
and2 H. Andreasson et al.: Class. Quantum Grav. 29, 095012 (2012)
nai N. Naidu, M. Govender, 25, 1650092 (2016)
chan R. Chan et al.: MNRAS 265, 533 (1993)
her1 L. Herrera, N. O. Santos: Physics Reports 286, 53 (1997)
uso Usov, V.V.: Phys. Rev. D 70, 067 (2004)
iva Ivanov, B.V.: Phys. Rev. D 65, 104001 (2002)
bon Bonnor, W.B.: Mon. Not. R. Astron. Soc. 137, 239 (1965)
bhar1 Bhar, P. et al.: Astrophys. Space Sci. 360, 32 (2015)
bhar2 Bhar, P., Rahaman, R.: Eur. Phys. J. C 75, 41 (2015)
bhar3 Bhar, P.: Astrophys. Space Sci. 356, 365 (2015)
kn1 K. N. Singh, et al.: Astrophys. Space Sci.: (2016) 361:173
kn2 K. N. Singh, N. Pant: Astrophys. Space Sci. (2016) 361:177
kn3 K. N. Singh, et al.: Int. J. Mod. Phys. D (2016): 25 1650099
kn4 K. N. Singh, et al.: Astrophys. Space Sci. (2016) 361:339
kn5 K. N. Singh, et al.: Ind. J. Phys. (2016) DOI 10.1007/s12648-016-0917-7
kn6 K. N. Singh, et al.: Chin. Phys. C (2016) DOI:10.1088/1674-1137/41/1/015103
kn7 K. N. Singh, N. Pant: Eur. Phys. J. C (2016) 76 : 524
bhar4 Bhar, P., et al.: arXiv:1604.00531 [gr-qc]
kar K.R. Karmarkar, Proc. Ind. Acad. Sci. A 27, 56 (1948)
pandey S.N. Pandey, S.P. Sharma, Gen. Relativ. Gravit. 14 (1982)
adl Adler, R.J.: J. Math. Phys. 15, 727 (1974)
finch M.R. Finch, J.E.F. Skea, Class. Quantum. Grav. 6, (1989) 467
ab Abreu, H., et al.: Class. Quantum Gravity 24, 4631 (2007).
bondi64 H. Bondi, Proc. R. Soc. Lond. A 281, 39 (1964)
chan64 S. Chandrasekhar, Phys. Rev. Lett. 12, 114 (1964)
har65 B. K. Harrison et al., Gravitational Theory and Gravitational Collapse (Chicago: University of Chicago Press) (1965)
zel Ya. B. Zeldovich, I. D. Novikov, Relativistic Astrophysics Vol. 1: Stars and Relativity (Chicago: University of Chicago Press) (1971)
|
http://arxiv.org/abs/1701.07833v1 | 20170126190003 | The large-N Yang-Mills S-matrix is ultraviolet finite, but the large-N QCD S-matrix is only renormalizable | [
"Marco Bochicchio"
] | hep-th | [
"hep-th",
"hep-ph"
] |
Marco Bochicchio
INFN sez. Roma 1, Piazzale A. Moro 2, Roma, I-00185, Italy
marco.bochicchio@roma1.infn.it
YM and QCD are known to be renormalizable, but not ultraviolet finite, order by order in perturbation theory. It is a fundamental question as to whether YM or QCD are ultraviolet finite, or only renormalizable,
order by order in the large-N 't Hooft or Veneziano expansions. We demonstrate that Renormalization Group and Asymptotic Freedom imply that in 't Hooft large-N expansion the S-matrix in YM is ultraviolet finite, while in both 't Hooft and Veneziano large-N expansions the S-matrix in confining QCD with massless quarks is renormalizable but not ultraviolet finite. By the same argument it follows that the large-N 𝒩=1 SUSY YM S-matrix is ultraviolet finite as well. Besides, we demonstrate that the correlators of local gauge-invariant operators, as opposed to the S-matrix, are renormalizable but in general not ultraviolet finite in the large-N 't Hooft and Veneziano expansions, neither in pure YM and 𝒩=1 SUSY YM nor a fortiori in massless QCD. Moreover, we compute explicitly the counterterms that arise renormalizing the large-N 't Hooft and Veneziano expansions, by deriving in confining massless QCD-like theories a low-energy theorem of NSVZ type, that relates the log derivative with respect to the gauge coupling of a k-point correlator, or the log derivative with respect to the RG-invariant scale, to a k+1-point correlator with the insertion of F^2 at zero momentum. Finally, we argue that similar results hold in the large-N limit of a vast class of confining QCD-like theories with massive matter fields, provided a renormalization scheme exists, as for example MS, in which the beta function is independent on the masses. In particular, in both 't Hooft and Veneziano large-N expansions the S-matrix in confining massive QCD and massive 𝒩=1 SUSY QCD is renormalizable but not ultraviolet finite.
The large-N Yang-Mills S-matrix is ultraviolet finite, but the large-N QCD S-matrix is only renormalizable
==========================================================================================================
§ INTRODUCTION
SU(N) Yang-Mills (YM) and SU(N) QCD with N_f quark flavors are known to be renormalizable but not ultraviolet finite in perturbation theory. It is a fundamental question, that has never been considered previously, as to whether their large-N 't Hooft or Veneziano expansions (Section <ref>) enjoy better ultraviolet properties non-perturbatively, perhaps limiting only to the large-N S-matrix, once the lowest 1/N order has been made finite by renormalization as defined in Sections <ref>, <ref>. Answering this question sets the strongest constraints on the solution, that is yet to come, of large-N YM and QCD.
The first main result in this paper is that Renormalization Group (RG) and Asymptotic Freedom (AF) imply that in 't Hooft expansion the large-N YM S-matrix is ultraviolet finite, while in both 't Hooft and Veneziano expansions the large-N S-matrix in confining massless QCD [By massless QCD we mean QCD with massless quarks.] is renormalizable but not ultraviolet finite (Section <ref>): In 't Hooft expansion due to log divergences of meson loops (Section <ref>) starting at order of N_f/N, in Veneziano expansion due to loglog divergences of "overlapping" meson-glueball loops (Section <ref>) starting at order of N_f/N^3. By the same argument it follows that in 't Hooft expansion the large-N 𝒩=1 SUSY YM S-matrix is ultraviolet finite as well.
Correlators (Section <ref>), as opposed to the S-matrix, turn out to be renormalizable but loglog divergent in general, in addition to the possible divergences of the S-matrix in the aforementioned large-N expansions, but at the lowest order, even in pure large-N YM and 𝒩=1 SUSY YM.
The second main result is a low-energy theorem (Section <ref>) of Novikov-Shifman-Vainshtein-Zakharov (NSVZ) type in confining massless QCD-like theories [By QCD-like theory we mean a confining Asymptotically Free (AF) gauge theory admitting the large-N 't Hooft or Veneziano limits. We call such a theory massive if its matter fields are massive, and massless if a choice of parameters exists for which the theory is massless to all orders perturbation theory.], that allows us to compute explicitly the lowest-order large-N counterterms implied by RG and AF as opposed to perturbation theory.
Finally, we argue that similar results hold (Section <ref>) for the large-N S-matrix in a vast class of confining QCD-like theories with massive matter fields, provided a renormalization scheme exists in which the beta function is independent on the masses. MS is an example of such a scheme. Besides, the asymptotic results in Section <ref> extend also to the correlators of the massive theory provided the massless limit of the massive theory exists smoothly.
§ LARGE-N 'T HOOFT AND VENEZIANO EXPANSIONS
We recall briefly the 't Hooft <cit.> and Veneziano <cit.> expansions in large-N YM and QCD with N_f quark flavors.
Non-perturbatively, 't Hooft large-N limit is defined computing the QCD functional integral in a neighborhood of N=∞ with 't Hooft gauge coupling g^2 = g^2_YM N and N_f fixed. The corresponding perturbative expansion, once expressed in terms of g^2, can be reorganized in such a way that each power of 1/N contains the contribution of an infinite series in g^2<cit.>.
The lowest-order contribution in powers of 1/N to connected correlators of local single-trace gauge-invariant operators 𝒢_i(x_i) and of quark bilinears ℳ_i(x_i), both normalized in such a way that the two-point correlators are on the order of 1, turns out to be on the order of:
⟨𝒢_1(x_1)𝒢_2(x_2)⋯𝒢_n(x_n)⟩_conn ∼ N^(2-n) ; ⟨ℳ_1(x_1)ℳ_2(x_2)⋯ℳ_k(x_k)⟩_conn ∼ N^(1-k/2)
⟨𝒢_1(x_1)𝒢_2(x_2)⋯𝒢_n(x_n)ℳ_1(x_1)ℳ_2(x_2)⋯ℳ_k(x_k)⟩_conn ∼ N^(1-n-k/2)
This is the 't Hooft Planar Theory, that perturbatively sums Feynman graphs triangulating respectively a sphere with n punctures, a disk with k punctures on the boundary, and a disk with k punctures on the boundary and n punctures in the interior. The punctured disk arises in 't Hooft large-N expansion from Feynman diagrams whose boundary is exactly one quark loop.
Higher-order contributions correspond to summing the Feynman graphs triangulating orientable Riemann surfaces with smaller fixed Euler characteristic. They correct additively 't Hooft Planar Theory with a weight N^χ, where χ=2-2g-h-n-k/2 is the Euler characteristic of an orientable Riemann surface of genus g (i.e. a sphere with g handles), with h holes (or boundaries), n marked points in the interior, and k marked points on the boundary of some hole, that the Feynman graphs triangulate. Non-perturbatively a handle is interpreted as a glueball loop, and a hole as a meson loop <cit.>.
On the contrary, non-perturbatively Veneziano large-N limit is defined computing the QCD functional integral in a neighborhood of N=∞ with g^2 and N_f/N fixed. Since in large-N QCD factors of the ratio N_f/N, that is kept fixed, may arise perturbatively only from quark loops, Veneziano large-N expansion contains perturbatively already at the lowest order Feynman graphs that triangulate a punctured sphere or a punctured disk with any number of holes, i.e. it contains the sum of all the Riemann surfaces that are geometrically planar: This is the Veneziano Planar Theory. Higher orders contain higher-genus Riemann surfaces.
§ LARGE-N YM AND MASSLESS QCD S-MATRIX
We assume that YM and QCD have been regularized in a way that we leave undefined, except in special cases in Section <ref>, by introducing a common cutoff scale Λ, perturbatively, in the large-N expansion, and non-perturbatively. The details of the regularization do not matter for our arguments.
In perturbation theory, pure YM and massless QCD need only gauge-coupling renormalization in the classical action in order to get a finite large-Λ limit, since in massless QCD there is no quark-mass renormalization because chiral symmetry is exact in perturbation theory. In addition, local gauge invariant operators need also in general multiplicative renormalizations, associated to the anomalous dimensions of the operators, in order to make their correlators finite.
We will see in Section <ref> that also in large-N YM and massless QCD non-Planar multiplicative renormalizations occur in general in both 't Hooft and Veneziano expansions, once the Planar correlators (i.e. the lowest-order correlators) have been made finite by the Planar gauge-coupling and multiplicative renormalizations.
However, multiplicative renormalizations must cancel in the S-matrix because of the LSZ reduction formulae, since the S-matrix cannot depend on the choice of the interpolating fields for a given asymptotic state in the external lines <cit.> (see also Section <ref>). Therefore, only gauge-coupling renormalization is necessary in the large-N YM and massless QCD S-matrix, but non-perturbatively according to the RG [We assume that the aforementioned theories actually exist mathematically and are renormalizable, that the 1/N expansion is at least asymptotic, and that standard RG is actually asymptotic in the ultraviolet to the exact result because of asymptotic freedom. Though these statements are universally believed, no rigorous mathematical construction of YM or of QCD or of their large-N limits presently exists, let alone a mathematically rigorous proof of these statements.], because of the summation of an infinite number of Feynman graphs at any fixed 1/N order.
Non-perturbatively, gauge-coupling renormalization is equivalent to making finite and (asymptotically) constant the RG-invariant scale: Λ_RG = const Λ exp(-1/(2β_0 g^2)) (β_0 g^2)^(-β_1/(2β_0^2)) (1+...), uniformly for arbitrarily large Λ in a neighborhood of g=0, where the dots represent an asymptotic series in g^2 of renormalization-scheme dependent terms, that obviously vanish as g → 0. The overall constant is scheme dependent as well.
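For orientation, at one loop the defining relation can be integrated in closed form: solving ∂g/∂logΛ = -β_0 g^3 gives

1/(2β_0 g^2(Λ)) = log(Λ/Λ_RG), i.e. g^2(Λ) = 1/(2β_0 log(Λ/Λ_RG)),

so that Λ_RG arises as the RG-invariant integration constant of the flow, and keeping it finite dictates how the bare coupling must run with the cutoff.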
Moreover, non-perturbatively RG requires that every physical mass scale of the theory is proportional to Λ_RG. Therefore, being Λ_RG the only parameter occurring in the S-matrix in both large-N YM and confining massless QCD, the ultraviolet finiteness of the large-N S-matrix is equivalent to the existence of a renormalization scheme for g in which the large-N expansion of Λ_RG is finite. This is decided as follows.
We consider first 't Hooft expansion in large-N YM. In this case, β_0 = β_0^P = (1/(4π)^2)(11/3), β_1 = β_1^P = (1/(4π)^4)(34/3), where the superscript P stands for 't Hooft Planar. Now, both in the 't Hooft Planar Theory and to all the 1/N orders, the first-two coefficients of the beta function β_0, β_1 get contributions only from 't Hooft Planar diagrams. This implies that in large-N YM the 1/N expansion of Λ_YM is in fact finite <cit.>, the non-Planar 1/N corrections occurring in the dots or in const contributing only at most a finite change of renormalization scheme to the 't Hooft Planar RG-invariant scale, Λ^P_YM = const Λ exp(-1/(2β^P_0 g^2)) (β^P_0 g^2)^(-β^P_1/(2(β_0^P)^2)) (1+...).
't Hooft expansion of large-N massless QCD is deeply different.
In this case, β_0 = β_0^P + β_0^NP = (1/(4π)^2)(11/3) - (1/(4π)^2)(2/3)(N_f/N) and β_1 = β_1^P + β_1^NP = (1/(4π)^4)(34/3) - (1/(4π)^4)(13/3 - 1/N^2)(N_f/N), where the superscripts NP stand for non-'t Hooft Planar.
Since quark loops occur at order of 1/N, the first coefficient of the beta function, β^P_0, gets an additive non-'t Hooft Planar 1/N correction, β_0^NP = -(1/(4π)^2)(2/3)(N_f/N). As a consequence it is impossible to find a renormalization scheme for g that makes Λ_QCD finite in the 't Hooft Planar Theory and in the next order of the 1/N expansion at the same time, as the following computation shows <cit.>:
Λ_QCD ∼ Λ exp(-1/(2β^P_0(1+β_0^NP/β_0^P)g^2))
∼ Λ exp(-1/(2β^P_0 g^2)) (1 + (β_0^NP/β_0^P)(1/(2β^P_0 g^2)))
∼ Λ^P_QCD (1 + (β_0^NP/β_0^P) log(Λ/Λ^P_QCD))
where in the first line g is a bare free parameter according to the RG to all the 1/N orders, while in the last line we have renormalized g according to the Asymptotic Freedom of the 't Hooft Planar Theory, 1/(2β^P_0 g^2) ∼ log(Λ/Λ^P_QCD), as follows for consistency by requiring that Λ^P_QCD is finite uniformly in a neighborhood of Λ = ∞.
The symbol ∼ in this paper means asymptotic equality in a sense specified by the context, up to perhaps a non-zero constant overall factor. We should notice that the equalities in Equation <ref> hold asymptotically, uniformly for large finite Λ and small g even before Planar renormalization, without the need to actually take the limits Λ→∞, g → 0, as they are obtained expressing g identically in terms of Λ^P_QCD in the last asymptotic equality. We emphasize that the log divergence in Equation <ref> occurs precisely because of the Asymptotic Freedom of the Planar Theory.
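Spelling out the step between the first and second lines of the computation above: expanding for small β_0^NP/β_0^P,

1/(1+β_0^NP/β_0^P) ≈ 1 - β_0^NP/β_0^P,
exp(-1/(2β_0^P(1+β_0^NP/β_0^P)g^2)) ≈ exp(-1/(2β_0^P g^2)) (1 + (β_0^NP/β_0^P)(1/(2β_0^P g^2))),

and substituting the one-loop running 1/(2β_0^P g^2) ∼ log(Λ/Λ^P_QCD) produces the log divergence in the last line.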
In Section <ref> we will compute explicitly by means of a low-energy theorem the large-N counterterm due to the renormalization of Λ^P_QCD, that turns out to agree exactly, within the leading-log accuracy, with the perturbative counterterm due to quark loops. Indeed, were Λ^P_QCD to get only a finite renormalization, the complete large-N QCD and the 't Hooft Planar Theory would have the same β_0, that is false.
Hence, being Λ_QCD the only physical mass scale, glueball and meson masses receive 1/N log-divergent self-energy corrections proportional to the one of Λ_QCD, that can arise only from a log divergence of meson loops. This is a physical fact, that characterizes the meson interactions in the ultraviolet (UV), reflecting the corresponding perturbative quark interactions in the UV.
Therefore, 't Hooft expansion of the QCD S-matrix, though renormalizable, starting at order of N_f/N is log divergent, due to log divergences of meson loops.
The chances of finiteness would seem more promising in the Veneziano expansion. In this case, β_0 = β_0^VP = (1/(4π)^2)(11/3) - (1/(4π)^2)(2/3)(N_f/N) and β_1 = β_1^VP + β_1^NVP, with β_1^VP = (1/(4π)^4)(34/3 - (13/3)(N_f/N)) and β_1^NVP = (1/(4π)^4)(N_f/N^3), where the superscripts VP and NVP stand for Veneziano Planar and non-Veneziano Planar.
Since the Veneziano Planar Theory contains already all quark loops, the first coefficient of the Veneziano Planar beta function
and of the complete beta function coincide. As a consequence there is no log divergence in the expansion of Λ_QCD.
Nevertheless, also in the Veneziano expansion it is impossible to find a renormalization scheme for g in which both Λ^VP_QCD and its 1/N corrections are finite at the same time, because of a loglog divergence starting at order of N_f/N^3 due to "overlapping"
glueball-meson loops, as the following computation shows:
Λ_QCD ∼ Λ exp(-1/(2β_0 g^2)) (g^2)^(-β^VP_1/(2β_0^2)) (g^2)^(-β^NVP_1/(2β_0^2))
∼ Λ exp(-1/(2β_0 g^2)) (g^2)^(-β^VP_1/(2β_0^2)) (1 - (β^NVP_1/(2β_0^2)) log g^2)
∼ Λ^VP_QCD (1 + (β^NVP_1/(2β_0^2)) loglog(Λ/Λ^VP_QCD))
Thus the large-N Veneziano expansion of the S-matrix in confining [In fact, Equation <ref> may be valid only for N_f/N and g in a certain neighborhood of 0. Indeed, it is believed that there is a critical value of N_f/N and of g at which massless QCD becomes exactly conformal because of an infrared zero of the beta function. At this critical value of N_f/N and of g, Λ_QCD may vanish due to the infrared zero. Similar considerations may apply to other massless QCD-like theories (Section <ref>).] massless QCD is not ultraviolet finite as well. In any case both 't Hooft and Veneziano expansions of the S-matrix are renormalizable, all the aforementioned divergences being reabsorbed order by order in the 1/N expansions by a redefinition of Λ_QCD.
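The loglog in the computation above arises from one-loop Asymptotic Freedom alone: since 1/(2β_0 g^2) ∼ log(Λ/Λ^VP_QCD), one has

g^2 ∼ 1/(2β_0 log(Λ/Λ^VP_QCD)), hence -log g^2 ∼ loglog(Λ/Λ^VP_QCD),

which turns the residual factor (g^2)^(-β^NVP_1/(2β_0^2)) into the displayed loglog divergence.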
§ LARGE-N YM AND MASSLESS QCD CORRELATORS
We study now the multiplicative renormalizations of gauge-invariant operators in the large-N 't Hooft and Veneziano expansions. They are sufficient to make the correlators finite, once the gauge coupling and Λ_QCD have been renormalized as described in Section <ref>, in any massless QCD-like theory. The computations greatly simplify if we reconstruct the asymptotic structure of the bare correlators from the asymptotic renormalized correlators, either in the complete theory or in the large-N expansions. In order to do so, we employ an asymptotic structure theorem <cit.> for glueball and meson two-point correlators in 't Hooft large-N limit of massless QCD-like theories,
and the associated, but much more general, asymptotic estimates <cit.>, that hold both in the complete theory and a fortiori in 't Hooft and Veneziano large-N limits.
For the aims of this paper it is sufficient to report the asymptotic theorem <cit.> in the coordinate representation. Under mild assumptions, it reads as follows.
The connected two-point Euclidean correlator of a hermitian local single-trace gauge-invariant operator or of a quark bilinear, 𝒪^(s), of spin s, naive mass dimension D, and with anomalous dimension γ_𝒪^(s)(g)[We suppose that the matrix of anomalous dimensions has been diagonalized, as generically possible at least at the leading order, which is the only one that matters for the asymptotic behavior.],
asymptotically for short distances, and at the leading order in the large-N limit, has the following spectral representation and asymptotic behavior in the coordinate representation, for x≠ 0[For x≠ 0 no contact term (i.e. distribution supported at x=0) occurs, and there are no convergence problems for the spectral sum and the spectral integral in Equation <ref> provided they are performed after the Fourier transform to the coordinate representation <cit.>.]:
⟨𝒪^(s)(x) 𝒪^(s)(0)⟩_conn ∼ ∑_n=1^∞ ∫ P^(s)(p_α/m_n^(s)) m_n^(s)2D-4 Z_n^(s)2 ρ_s^-1(m_n^(s)2)/(p^2+m_n^(s)2) e^ip·x d^4p
∼ ∫_m_1^(s)2^∞ ∫ P^(s)(p_α/p) p^2D-4 Z^(s)2(m)/(p^2+m^2) e^ip·x d^4p dm^2
∼ [𝒫^(s)(x_α/x)/x^2D] Z^(s)2(x, μ) 𝒢^(s)(g(x))
∼ [𝒫^(s)(x_α/x)/x^2D] (g^2(x)/g^2(μ))^γ_0/β_0
∼ [𝒫^(s)(x_α/x)/x^2D] [1/(β_0 log(1/(x^2 Λ^2_QCD))) (1 - (β_1/β_0^2) loglog(1/(x^2 Λ^2_QCD))/log(1/(x^2 Λ^2_QCD)))]^γ_0/β_0
where the infinite diverging sequence { m^(s)_n } is supposed to be characterized by a smooth RG-invariant asymptotic spectral density (possibly dependent on 𝒪^(s)) of the masses squared ρ_s(m^2)=dn/dm^2 <cit.>, for large masses and fixed spin, with dimension of the inverse of a mass squared. P^(s)(p_α/m^(s)_n) is a dimensionless polynomial in the four-momentum p_α, that projects on the free propagator of spin s and mass m^(s)_n, and γ_𝒪^(s)(g) = -∂log Z^(s)/∂logμ = -γ_0 g^2 + O(g^4), with Z_n^(s) the associated renormalization factor computed at the momentum scale p^2 = m_n^(s)2: Z_n^(s) ≡ Z^(s)(m_n^(s)) = exp∫_g(μ)^g(m_n^(s)) [γ_𝒪^(s)(g)/β(g)] dg.
The renormalization factors are fixed asymptotically for large n to be:
Z_n^(s)2 ∼ [1/(β_0 log(m_n^(s)2/Λ^2_QCD)) (1 - (β_1/β_0^2) loglog(m_n^(s)2/Λ^2_QCD)/log(m_n^(s)2/Λ^2_QCD) + O(1/log(m_n^(s)2/Λ^2_QCD)))]^γ_0/β_0
P^(s)(p_α/p) is the projector obtained substituting -p^2 for m_n^2 in P^(s)(p_α/m_n)[We use Veltman conventions for Euclidean and Minkowski propagators of spin s <cit.>.]. This substitution in Equation <ref> is an identity up to contact terms <cit.>, that do not contribute for x ≠ 0.
The second line in Equation <ref> occurs because asymptotically, under mild assumptions <cit.>, we can substitute to the discrete sum the continuous integral weighted by the spectral density.
Thus the asymptotic spectral representation depends only on the anomalous dimension but not on the spectral density. This integral form of the Kallen-Lehmann representation holds asymptotically in the UV also in the Veneziano Theory and in the complete theory, since it does not assume a discrete spectrum. 𝒫^(s)(x_α/x) is the dimensionless spin projector in the coordinate representation in the conformal limit. The RG-invariant function of the running coupling only, 𝒢^(s)(g(x)), admits the expansion: 𝒢^(s)(g(x))= const(1+ O(g^2(x))).
Indeed, perturbatively at the lowest non-trivial order the correlator of a hermitian operator in the coordinate representation must be exactly conformal and non-vanishing in a massless QCD-like theory, because the two-point correlator of a non-zero hermitian operator cannot vanish in a unitary conformal theory.
In fact, the coordinate representation is the most fundamental for deriving <cit.> the asymptotic theorem, because only in the coordinate representation the operators are multiplicatively renormalizable, since for x ≠ 0 no further additive renormalization due to possibly divergent contact terms may arise.
The asymptotic structure of the bare correlators in the complete theory follows from Equation <ref> dividing by the asymptotic multiplicative renormalization factor of the complete theory (g^2(Λ)/g^2(μ))^γ_0/β_0 :
⟨𝒪^(s)(x) 𝒪^(s)(0)⟩_bare ∼ [𝒫^(s)(x_α/x)/x^2D] (g^2(x)/g^2(Λ))^γ_0/β_0.
Reinserting the Planar multiplicative renormalization necessary to make finite the Planar correlator, we get in both 't Hooft and Veneziano Planar expansions (the superscript 𝒫 stands for P or VP):
⟨𝒪^(s)(x) 𝒪^(s)(0)⟩_conn ∼ [𝒫^(s)(x_α/x)/x^2D] (g^2(Λ)/g^2(μ))^γ^𝒫_0/β^𝒫_0 (g^2(x)/g^2(Λ))^γ_0/β_0
= [𝒫^(s)(x_α/x)/x^2D] (g^2(Λ)/g^2(μ))^γ^𝒫_0/β^𝒫_0 (g^2(x)/g^2(Λ))^γ^𝒫_0/β^𝒫_0 (g^2(x)/g^2(Λ))^(γ_0/β_0 - γ^𝒫_0/β^𝒫_0)
∼ ⟨𝒪^(s)(x) 𝒪^(s)(0)⟩^𝒫 (1 + (γ_0/β_0 - γ^𝒫_0/β^𝒫_0) log(g^2(x)/g^2(Λ)))
∼ ⟨𝒪^(s)(x) 𝒪^(s)(0)⟩^𝒫 (1 + (γ_0/β_0 - γ^𝒫_0/β^𝒫_0) log(log(Λ^2/Λ^2_QCD)/log(1/(x^2 Λ^2_QCD))))
Thus the expansion of the correlators around the Planar Theory has in general loglog divergences due to the 1/N corrections to the anomalous dimensions. Remarkably, the correlator of F^2:
⟨F^2(x) F^2(0)⟩_conn ∼ (1/x^8) (g^4(x)/g^4(μ))
∼ (1/x^8) [1/(β_0 log(1/(x^2 Λ^2_QCD))) (1 - (β_1/β_0^2) loglog(1/(x^2 Λ^2_QCD))/log(1/(x^2 Λ^2_QCD)))]^2
has no such loglog corrections in the 't Hooft and Veneziano expansions, since γ_0=2β_0 for F^2, both in the complete theory and in the Planar Theory, and thus the change of the anomalous dimension is always compensated by the change of the beta function. Hence the only renormalization in Equation <ref> is due to the 1/N expansion of Λ_QCD described in Section <ref>.
§ LARGE-N MASSLESS QCD COUNTERTERMS FROM A LOW-ENERGY THEOREM, AS OPPOSED TO PERTURBATION THEORY
A new version of a NSVZ low-energy theorem is obtained as follows. For a set of operators 𝒪_i, deriving:
⟨𝒪_1⋯𝒪_i⟩ = [∫𝒪_1⋯𝒪_i exp(-(N/(2g^2))∫F^2(x)d^4x+⋯)] / [∫ exp(-(N/(2g^2))∫F^2(x)d^4x+⋯)]
with respect to log g, we get:
∂⟨𝒪_1⋯𝒪_i⟩/∂log g = (N/g^2) ∫ [⟨𝒪_1⋯𝒪_i F^2(x)⟩ - ⟨𝒪_1⋯𝒪_i⟩⟨F^2(x)⟩] d^4x
Since non-perturbatively in massless QCD-like theories the only parameter is Λ_QCD, we can trade g for Λ_QCD in the LHS:
∂⟨𝒪_1⋯𝒪_i⟩/∂(-1/g^2) = (∂⟨𝒪_1⋯𝒪_i⟩/∂Λ_QCD)(∂Λ_QCD/∂(-1/g^2)). Employing the defining relation: (∂/∂logΛ + β(g)∂/∂g)Λ_QCD = 0, with β(g) = -β_0 g^3 - β_1 g^5 + ⋯, we obtain:
∂Λ_QCD/∂(-1/g^2) = (g^3/2) ∂Λ_QCD/∂g = -(g^3/(2β(g))) ∂Λ_QCD/∂logΛ = -(g^3/(2β(g))) Λ_QCD,
where the last identity follows from the relation: Λ_QCD=Λ f(g)= e^logΛ f(g), for some function f(g). Hence we get a NSVZ low-energy theorem:
∂⟨𝒪_1⋯𝒪_i⟩/∂logΛ_QCD = -(Nβ(g)/g^3) ∫ [⟨𝒪_1⋯𝒪_i F^2(x)⟩ - ⟨𝒪_1⋯𝒪_i⟩⟨F^2(x)⟩] d^4x
Now we specialize to multiplicatively renormalized operators in the Planar Theory, 𝒪_i = F^2, in such a way that the only source of divergences is the renormalization of Λ_QCD (see the comment below Equation <ref>), the combination (Nβ(g)/g^3) F^2(x) being already RG invariant [The unusual power of g in front of F^2(x) is due to the non-canonical normalization of the action in Equation <ref>.].
Therefore, the divergent part of the correlator at the lowest 1/N order is:
[⟨𝒪_1⋯𝒪_i⟩^𝒩𝒫]_div = (∂⟨𝒪_1⋯𝒪_i⟩^𝒫/∂Λ_QCD) Λ_QCD^𝒩𝒫,
where 𝒫 = P, VP, Λ_QCD^NP = (β_0^NP/β_0^P) Λ^P_QCD log(Λ/Λ^P_QCD) + ⋯, and
Λ_QCD^NVP = (β^NVP_1/(2β_0^2)) Λ^VP_QCD loglog(Λ/Λ^VP_QCD) + ⋯, in 't Hooft and Veneziano expansions of massless QCD respectively.
It follows from Equation <ref> that the divergent part of the correlator at leading order in the non-Planar Theory satisfies the new low-energy theorem at large-N:
[⟨F^2 ⋯ F^2⟩^𝒩𝒫]_div = (Nβ^𝒫(g) Λ_QCD^𝒩𝒫/(g^3 Λ_QCD^𝒫)) ∫ [⟨F^2 ⋯ F^2⟩^𝒫 ⟨F^2(x)⟩^𝒫 - ⟨F^2 ⋯ F^2 F^2(x)⟩^𝒫] d^4x
and thus arises, up to finite scheme-dependent corrections, from the divergent counterterm in the action: -β^𝒫_0 N (Λ_QCD^𝒩𝒫/Λ_QCD^𝒫) ∫ F^2(x) d^4x.
In 't Hooft expansion: -β^𝒫_0 N Λ_QCD^𝒩𝒫/Λ_QCD^𝒫 = -Nβ_0^NP [log(Λ/Λ^P_QCD) + (1/(2β_0^P))(β_1^NP - β_0^NP β_1^P/β_0^P) loglog(Λ/Λ^P_QCD)] = (1/(4π)^2)(2/3) N_f log(Λ/Λ^P_QCD) + ⋯, that coincides exactly within the leading-log accuracy, perhaps as expected, with the perturbative counterterm arising from quark loops. The loglog(Λ/Λ^P_QCD) counterterm follows from Equation <ref>, including the contributions from β_1, by a straightforward but tedious computation.
§ S-MATRIX IN LARGE-N MASSIVE QCD-LIKE THEORIES
We may wonder as to whether the results for massless theories described in Section <ref>, <ref>, <ref>, apply also to confining massive QCD-like theories, in particular to the large-N limit of massive QCD [This point was raised by an anonymous referee.]. Introducing further mass scales is an additional complication, that may involve extra renormalizations associated to the mass parameters. However, the question that we answer in this Section is as to whether, supposing the further parameters have been already renormalized, the large-N expansion of the massive theory may get milder ultraviolet divergences than the massless one.
The simple answer is negative, provided a renormalization scheme exists in which the beta function is independent of the masses, as is appropriate for the UV-complete massive theory, as opposed to the "low-energy" effective theory at scales much smaller than the masses: In such a scheme the renormalization of Λ_QCD goes through exactly as in the massless theory, as described in Sections <ref>, <ref>, <ref>. An example is the MS scheme in massive QCD-like theories.
In particular, the large-N massive QCD S-matrix is renormalizable but not UV finite, as is the case for its massless limit. Moreover, both the 't Hooft and the Veneziano expansions of the 𝒩=1 SUSY massive QCD S-matrix in the Confining/Higgs phase <cit.> are renormalizable but not UV finite, because the first-two coefficients of the beta function: β_0 = (1/(4π)^2)(3 - N_f/N), β_1 = (1/(4π)^4)(6 - (4 - 2/N^2)(N_f/N)), imply that β_0^NP = -(1/(4π)^2)(N_f/N) and β_1^NVP = (2/(4π)^4)(N_f/N^3).
Another question is what happens regularizing and renormalizing a QCD-like theory by the embedding into an ultraviolet finite theory [This point was raised by the same anonymous referee.], that for example is feasible concretely for 𝒩=1 SUSY QCD with 1 ≤ N_f ≤ N and for 𝒩=1 SUSY YM, by the embedding into a suitable finite 𝒩=2 SUSY theory <cit.> containing massive multiplets on the order of M that act as regulators, and may eventually be decoupled in the limit M →∞, in order to recover the original theory <cit.>.
In this respect the Veneziano limit of massive 𝒩=1 SUSY QCD with 1 ≤ N_f ≤ N is particularly interesting, since in this case both the Veneziano Planar Theory and the next orders in the large-N expansion of the regularizing 𝒩=2 theory are UV finite, since the beta function vanishes and the 𝒩=2 SUSY is only softly broken by the massive multiplets <cit.>, the absence of divergences depending only on the vanishing of β_0 because of the 𝒩=2 SUSY <cit.>.
However, being asymptotically conformal in the deep ultraviolet, the regularizing 𝒩=2 theory is not 𝒩=1 SUSY QCD, that instead is AF, that means that the conformal behavior is corrected in general in the correlators by fractional powers of logs, according to Equation <ref>.
Thus, despite the finiteness of the 𝒩=2 theory, what we want really to discover is the gauge-coupling renormalization of its 𝒩=1 "low-energy limit" in the Veneziano expansion as the mass M of the regulator multiplets goes to infinity. This is again the original question that we already answered above, the only difference being that the effective cutoff of the regularized 𝒩=1 theory is now on the order of M instead of Λ.
Hence, though the regularized massive 𝒩=1 SUSY QCD theory is finite for finite M, it is UV divergent in the Veneziano expansion as M →∞.
Finally, we should add that the asymptotic estimates for the correlators in the massless theory in Section <ref> apply without modification to massive QCD-like theories provided the massless limit of the massive theory exists smoothly, since in this case the leading UV asymptotics of the correlators is independent of the masses. Yet some modification may possibly arise in massive 𝒩=1 SUSY QCD with 1 ≤ N_f ≤ N, because the massless limit in the correlators may not necessarily be smooth, the massless limit of certain SUSY meson one-point correlators being divergent <cit.>.
§ ACKNOWLEDGMENTS
We would like to thank Gabriele Veneziano for helpful comments.
H1 G. 't Hooft, A planar diagram theory for strong interactions, Nucl. Phys. B 72 (1974) 461.
Veneziano0 G. Veneziano, Some Aspects of a Unified Approach to Gauge, Dual and Gribov Theories, Nucl. Phys. B 117 (1976) 519.
AFB M. Bochicchio, An asymptotic solution of large-NQCD, EPJ Web of Conferences 80, (2014) 00010, http://arxiv.org/abs/arxiv:1409.5144arXiv:1409.5144 [hep-th].
AF M. Bochicchio, Asymptotic Freedom versus Open/Closed Duality in Large-N QCD, https://arxiv.org/abs/arXiv:1606.04546arXiv:1606.04546 [hep-th].
MBN M. Bochicchio, Glueball and meson propagators of any spin in large-NQCD, Nucl. Phys. B 875 (2013) 621, https://arxiv.org/abs/1305.0273arXiv:1305.0273 [hep-th].
MBM M. Bochicchio, S. P. Muscinelli, Ultraviolet asymptotics of glueball propagators, JHEP 08 (2013) 064, https://arxiv.org/abs/1304.6409arXiv:1304.6409 [hep-th].
Seiberg K. Intriligator, N. Seiberg, Lectures on supersymmetric gauge theories and electric-magnetic duality, Nucl. Phys. Proc. Suppl. B 45 (1996) 1, https://arxiv.org/abs/arXiv:hep-th/9509066arXiv:hep-th/9509066 [hep-th].
AM N. Arkani-Hamed, H. Murayama, Holomorphy, Rescaling Anomalies and Exact beta Functions in Supersymmetric Gauge Theories, JHEP 06 (2000) 030, https://arxiv.org/abs/arXiv:hep-th/9707133arXiv:hep-th/9707133 [hep-th].
|
http://arxiv.org/abs/1701.07881v1 | 20170126212629 | Prevalence of Chaos in Planetary Systems Formed Through Embryo Accretion | [
"Matthew S. Clement",
"Nathan A. Kaib"
] | astro-ph.EP | [
"astro-ph.EP"
] |
1HL Dodge Department of Physics & Astronomy, University of Oklahoma, Norman, OK 73019, USA & corresponding author email: matt.clement@ou.edu
The formation of the solar system's terrestrial planets has been numerically modeled in various works, and many other studies have been devoted to characterizing our modern planets' chaotic dynamical state. However, it is still not known whether our planets’ fragile chaotic state is an expected outcome of terrestrial planet accretion. We use a suite of numerical simulations to present a detailed analysis and characterization of the dynamical chaos in 145 different systems produced via terrestrial planet formation in <cit.>. These systems were created in the presence of a fully formed Jupiter and Saturn, using a variety of different initial conditions. They are not meant to provide a detailed replication of the actual present solar system, but rather serve as a sample of similar systems for comparison and analysis. We find that dynamical chaos is prevalent in roughly half of the systems we form. We show that this chaos disappears in the majority of such systems when Jupiter is removed, implying that the largest source of chaos is perturbations from Jupiter. Chaos is most prevalent in systems that form 4 or 5 terrestrial planets. Additionally, an eccentric Jupiter and Saturn is shown to enhance the prevalence of chaos in systems. Furthermore, systems in our sample with a center of mass highly concentrated between ∼0.8–1.2 AU generally prove to be less chaotic than systems with more exotic mass distributions. Through the process of evolving systems to the current epoch, we show that late instabilities are quite common in our systems. Of greatest interest, many of the sources of chaos observed in our own solar system (such as the secularly driven chaos between Mercury and Jupiter) are shown to be common outcomes of terrestrial planetary formation. Thus, consistent with previous studies such as <cit.>, the solar system's marginally stable, chaotic state may naturally arise from the process of terrestrial planet formation.
Keywords: Chaos, Planetary Formation, Terrestrial Planets
§ INTRODUCTION
Our four terrestrial planets are in a curious state where they are evolving chaotically, and are only marginally stable over time <cit.>. This chaos is largely driven by interactions with the 4 giant planets. However our understanding of the dynamical evolution of the gas giants, particularly Jupiter and Saturn, has changed drastically since the introduction of the Nice Model <cit.>.
The classical model of terrestrial planetary formation, where planets form from a large number of small embryos and planetesimals that interact and slowly accrete, is the basis for numerous studies of planetary evolution <cit.>. Using direct observations of proto-stellar disks <cit.>, it is clear that free gas disappears long before the epoch when Earth's isotope record indicates the conclusion of terrestrial planetary formation <cit.>. For these reasons, a common initial condition taken when numerically forming the inner planets is a fully formed system of gas giants at their current orbital locations. Many numerical models have produced planets using this method. However, none to date have analyzed the chaotic nature of fully evolved accreted terrestrial planets up to the solar system's current epoch. It should be noted that other works have modeled the outcome of terrestrial planetary formation up to 4.5 Gyr. <cit.> evolved 5000 such systems from 10000 planetesimals and showed correlations between the resulting power-law orbital spacing and the initial mass distribution. Furthermore, many works have performed integrations of the current solar system, finding solutions that showed both chaos and a very real possibility of future instabilities <cit.>. Our work is unique in that we take systems formed via direct numerical integration of planetary accretion, evolve them to the solar system's age, probe for chaos and its source, and draw parallels to the actual solar system.
Although the classical terrestrial planet formation model has succeeded in replicating many of the inner solar system’s features, the mass of Mars remains largely unexplained <cit.>. Known as the Mars mass deficit problem, most simulations routinely produce Mars analogues which are too massive by about an order of magnitude. <cit.> argue for an early inward, and subsequent outward migration of a fully formed Jupiter, which results in a truncation of the proto-planetary disc at 1 AU prior to terrestrial planetary formation. If correct, this “Grand Tack Model” would explain the peculiar mass distribution observed in our inner solar system. Another interesting solution involves local depletion of the disc in the vicinity of Mars's orbit <cit.>. A detailed investigation of the Mars mass deficit problem is beyond the scope of this paper. It is important, however, to note that accurately reproducing the mass ratios of the terrestrial planets is a significant constraint for any successful numerical model of planetary formation.
Through dynamical modeling, we know chaos is prevalent in our solar system <cit.>. It is important to note the difference between “stability" and “chaos." While a system without “chaos" can generally be considered stable, a system with “chaos" is not necessarily unstable <cit.>. As is convention in other works, in this paper “chaos” implies both a strong sensitivity of outcomes to specific initial conditions, and a high degree of mixing across all energetically accessible points in phase space <cit.>. Conversely, “instability” is used to describe systems which experience specific dynamical effects such as ejections, collisions or excited eccentricities.
The chaos in our solar system mostly affects the terrestrial planets, particularly Mercury, and can cause the system to destabilize over long periods of time. <cit.> even shows a 1–2% probability of Mercury's eccentricity being excited to a degree which would risk planetary collision in the next 5 Gyr. What we still don't fully understand is whether these chaotic symptoms (highly excited eccentricities, close encounters and ejection) are an expected outcome of the planetary formation process as we presently understand it, or merely a quality of our particular solar system. The work of <cit.> showed us that the outcomes of semi-analytic planetary formation models of our own solar system show symptoms of chaos, and are connected to the particular initial mass distribution which is chosen. However these systems were formed without the presence of the gas giants, and planetesimal interactions were simplified to minimize computing time. Perhaps our solar system is a rare outlier in the universe, with it's nearly stable, yet inherently chaotic system of orbits occurring by pure chance. Of even greater interest, if it turns out that systems like our own are unlikely results of planetary formation, we may need to consider other mechanisms that can drive the terrestrial planets into their modern chaotic state.
This work takes 145 systems of terrestrial planets formed in <cit.> as a starting point. The systems are broken into three ensembles. The first set of 50 simulations, “Circular Jupiter and Saturn" (cjs), are formed with Jupiter and Saturn on nearly circular (e ∼ 0.01) orbits, at their current semi-major axes. The simulations use 100 self-interacting embryos on nearly circular and coplanar orbits between 0.5 and 4.0 AU, and 1000 smaller non-self-interacting planetesimals. The smaller planetesimals interact with the larger bodies, but not with each other. Additionally, the initial embryo spacing is uniform and embryo mass decreases with semi-major axis to yield an r^-3/2 surface density profile. The second ensemble (containing 46 integrations), “Extra Eccentric Jupiter and Saturn" (eejs), evolves from the same initial embryo configuration as cjs, with Jupiter and Saturn initially on higher-eccentricity (e=0.1) orbits. The final batch of integrations (49 systems), “Annulus" (ann), begins with Jupiter and Saturn in the same configuration as cjs; however, no planetesimals are used. The 400 planetary embryos for ann are confined to a thin annulus between 0.7–1.0 AU, roughly representative of the conditions described following Jupiter's outward migration in the Grand Tack Model <cit.>.
After advancing each system to t=4.5 Gyr, we perform detailed 100 Myr simulations and probe multiple chaos indicators. By careful analysis we aim to show whether chaotic systems naturally emerge from accretion models, and whether the source of the chaos is the same as has been shown for our own solar system.
§ METHODS
§.§ System Formation and Evolution
We use the simulations modeling terrestrial planet formation in <cit.> as a starting point for our current numerical work. In <cit.>, all simulations are stopped after 200 Myrs of evolution, an integration time similar to previous studies of terrestrial planet formation <cit.>. Because we ultimately want to compare the dynamical state of our solar system (a 4.5 Gyr old planetary system) with the dynamical states of our simulated systems, we begin by integrating the systems from <cit.> from t=200 Myr to t=4.5 Gyr. Since bodies can evolve onto crossing orbits and collide before t=4.5 Gyr, accurately handling close encounters between massive objects is essential. Thus, we use the MERCURY hybrid integrator <cit.> to integrate our systems up to t=4.5 Gyr. During these integrations, we use a 6-day timestep and remove bodies if their heliocentric distance exceeds 100 AU. Because we are unable to accurately integrate through very low pericenter passages, objects are also merged with the central star if their heliocentric distance falls below 0.1 AU. Though by no means ideal, the process of removing objects at 0.1 AU is commonplace in direct numerical models of planetary formation due to the limitations of the integrators used for such modeling. <cit.> showed that this does not affect the ability to accurately form planets in the vicinity of the actual inner solar system, since objects crossing 0.1 AU must have very high eccentricities. These excited objects interact weakly when encountering forming embryos due to their high relative velocity, and rarely contribute to embryo accretion. It should be noted that many discovered exoplanetary systems have planets with semi-major axis interior to 0.1 AU. However, we are not interested in studying such systems since we aim to draw parallels to our actual solar system. The WHFAST integrator used in the second phase of this work (Section 2.2), however, can integrate the innermost planet to arbitrarily high eccentricities, so the 0.1 AU filter is no longer used. Finally, to assess the dynamical chaos among planetary-mass bodies, any “planetesimal” particles (low-mass particles that do not gravitationally interact with each other) that still survive after 4.5 Gyrs are manually removed from the final system.
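The removal criteria above amount to a simple pruning filter applied throughout the long integration. As a minimal illustration of the equivalent logic in Python (this phase of the work actually used MERCURY; the rebound-based function below and its names are ours, for illustration only):

import rebound

def evolve_with_removal(sim, t_end, chunk=1.0e4):
    """Integrate in chunks, pruning bodies per the criteria in the text.

    sim: rebound.Simulation with the star as particle 0.
    Bodies farther than 100 AU from the star are removed; bodies inside
    0.1 AU are merged with the star (here crudely, by adding their mass).
    """
    while sim.t < t_end:
        sim.integrate(min(sim.t + chunk, t_end))
        star = sim.particles[0]
        for i in reversed(range(1, sim.N)):
            p = sim.particles[i]
            r = ((p.x - star.x)**2 + (p.y - star.y)**2 + (p.z - star.z)**2)**0.5
            if r > 100.0:
                sim.remove(i)          # treated as ejected
            elif r < 0.1:
                star.m += p.m          # crude stand-in for merging with the star
                sim.remove(i)
    return sim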
§.§ Numerical Analysis
Numerical simulations for detailed analysis of the fully evolved systems are performed using the WHFAST integrator in the Python module Rebound <cit.>. WHFAST <cit.> is a freely available, next-generation Wisdom–Holman symplectic integrator <cit.> ideal for this project due to its reduction of the CPU hours required to accurately simulate systems of planets over long timescales. WHFAST's reduction in error arising from Jacobi coordinate transformations, incorporation of the MEGNO (Mean Exponential Growth factor of Nearby Orbits) parameter, improved energy conservation error and tunable symplectic corrector up to order 11 motivate the integrator choice. The accuracy of many mixed variable symplectic integration routines is degraded by integrating orbits through phases of high eccentricity and low pericenter. For this reason, in Figure <ref> we plot the variation in energy from our simulation with the lowest pericenter (q=0.136 AU). In the upper panel, we plot the energy of the innermost planet, since this should be fixed in the secular regime. We see that energy variations stay well below one part in 10^3. In the lower panel, we plot the fractional change in the total energy of this system. Again, we find that energy variations rarely exceed one part in 10^4. Finally, in Figure <ref> we show a histogram of the fractional energy change between the start and end of the integration for all of our simulations. For the vast majority of systems, the fractional energy change is far less than one part in 10^4, and no simulation exceeds one part in 10^3. Values in excess of one part in 10^4 are from simulations where the eccentricities of the giant planets were artificially inflated and the performance of the integrator is degraded by close encounters. These systems with frequent close encounters are obviously chaotic, so our chaos determination is not affected.
MEGNO is the primary tool for identifying chaotic systems. Introduced in <cit.>, MEGNO represents the time-averaged ratio of the derivative of the infinitesimal displacement of an arc of orbit in N-dimensional phase space to the displacement itself. For quasi-periodic (stable) motion, MEGNO converges to a value of 2 in the infinite-time limit. For chaotic systems, however, MEGNO will diverge <cit.>. <cit.> showed that MEGNO is an extremely useful and accurate tool for detecting chaos. Systems which are non-chaotic will maintain stable MEGNO values of ∼2 for the duration of the simulation, while chaotic systems diverge from 2. For this project, systems which attained a maximum value of MEGNO ≥ 3.0 were classified as chaotic.
For use in certain analyses, the Lyapunov Timescale (τ_L) is also output. WHFAST calculates the inverse of τ_L by least squares fitting the time evolution of MEGNO <cit.>. Systems classified as chaotic tended to have a τ_L less than ∼10–100 Myr.
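As a concrete illustration of how these two indicators are obtained, the sketch below sets up a toy two-planet system in Rebound and reads out MEGNO and τ_L after the integration. The bodies and integration length are placeholders, and the method names shown (init_megno, calculate_megno, calculate_lyapunov) are those of the Rebound versions contemporary with this work—newer releases expose the same quantities as megno() and lyapunov().

```python
import rebound

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.integrator = "whfast"
sim.dt = 3.65 / 365.25               # 3.65-day timestep, in years

sim.add(m=1.0)                       # star
sim.add(m=3.0e-6, a=1.0)             # Earth-mass planet (placeholder)
sim.add(m=9.5e-4, a=5.2, e=0.05)     # Jupiter-mass planet
sim.move_to_com()

sim.init_megno()                     # attach the variational equations
sim.integrate(1.0e8)                 # 100 Myr (shorten for a quick test)

megno = sim.calculate_megno()        # ~2 for quasi-periodic motion
tau_L = 1.0 / sim.calculate_lyapunov()   # Lyapunov timescale in years
print(megno, tau_L, "chaotic" if megno >= 3.0 else "non-chaotic")
```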
Some simulations which quickly displayed chaos were terminated early to save computing time. Terminating these simulations early did not affect the chaos determination since MEGNO had already clearly diverged, nor did shorter simulations affect follow on data analysis and reduction (such as the detection of resonances described in section 2.4).
Another tool we use to characterize the chaos in our systems is the Angular Momentum Deficit (AMD) <cit.>. AMD (equation <ref>) measures the difference between the z-component of the angular momentum of a given system and that of a zero-eccentricity, zero-inclination system with the same masses and semi-major axes. Evaluating the evolution of the AMD over the duration of a simulation probes whether angular momentum is being exchanged between the giant planets and the terrestrial planets as orbits excite and de-excite due to induced chaos <cit.>.
M_{z,\mathrm{def}} = \frac{\sum_i m_i\sqrt{a_i}\left[1 - \sqrt{1 - e_i^2}\,\cos i_i\right]}{\sum_i m_i\sqrt{a_i}}
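Equation (<ref>) translates directly into a few lines of NumPy; the function and variable names below are ours, chosen for illustration, and the example elements are rough values for the four inner solar system planets.

```python
import numpy as np

def amd(m, a, e, inc):
    """Normalized angular momentum deficit, following the equation above.

    m, a, e, inc: arrays of planet masses, semi-major axes,
    eccentricities, and inclinations (inclinations in radians)."""
    m, a, e, inc = map(np.asarray, (m, a, e, inc))
    deficit = m * np.sqrt(a) * (1.0 - np.sqrt(1.0 - e**2) * np.cos(inc))
    return deficit.sum() / (m * np.sqrt(a)).sum()

# Approximate elements for Mercury-Mars (masses in Msun, a in AU)
m = [1.66e-7, 2.45e-6, 3.00e-6, 3.23e-7]
a = [0.387, 0.723, 1.000, 1.524]
e = [0.206, 0.007, 0.017, 0.093]
inc = np.radians([7.00, 3.39, 0.00, 1.85])
print(amd(m, a, e, inc))   # reference value used to normalize our results
```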
§.§ Simulation Parameters
Simulations are run for 100 Myr, with an integration timestep of 3.65 days. Orbital data is output every 5000 years. The order of the symplectic corrector is the order to which the symplectic correction term in the interaction Hamiltonian (ϵdt in <cit.>) is expanded. Here, we set this to order 1 (WHFAST allows for corrections up to order 11) <cit.>. 10 sample systems were integrated at different corrector values in order to determine the lowest corrector order necessary to accurately detect chaos. Additionally, a total of 16 1 Gyr simulations consisting of both chaotic and non-chaotic systems of 3, 4 and 5 terrestrial planets were performed to evaluate long-term behavior and verify the adequacy of 100 Myr runs. Finally, 6 sets of 145 simulations are performed, results and findings for which are reported in section 3. The 6 runs are summarized in Table <ref>.
§.§ Detecting Mean Motion Resonances
A Mean Motion Resonance (MMR) occurs when the periods of orbital revolution of 2 bodies are in integer ratio to one another. For a given MMR, the resonant angle will librate between 2 values <cit.>. Many possible resonant angles exist. For this paper, however, we only consider 4 of the more common planar resonant angles <cit.>. Only planar resonances are considered because our systems typically have very low inclinations.
To detect MMRs, the average Keplerian period is calculated for all bodies in the simulation. 4 resonant angles are calculated for all sets of bodies with period ratios within 5% of a given integer ratio. 21 different MMRs (all possible combinations of integer ratios between 2:1 and 8:7) are checked for. Using a Kolmogorov–Smirnov test, each resulting time-resonant angle distribution (e.g. figure <ref>) is compared to a uniform distribution (e.g. figure <ref>d), yielding a p-value. All distributions with p-values less than 0.01 are evaluated by eye for libration. Figure <ref> shows 4 different example distributions and their classification.
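A minimal sketch of this procedure is given below, assuming time series of mean longitudes λ and longitudes of pericenter ϖ have already been extracted from the simulation output; the specific angle constructed here is one common planar choice for a (p+q):p commensurability, not necessarily identical to all four angles used in our analysis.

```python
import numpy as np
from scipy.stats import kstest

def resonant_angle(lam_in, lam_out, pomega_in, p, q):
    """Planar resonant angle for a (p+q):p period commensurability."""
    phi = (p + q) * lam_out - p * lam_in - q * pomega_in
    return np.mod(phi, 2.0 * np.pi)

def mmr_candidate(phi, alpha=0.01):
    """Compare the angle distribution against uniform on [0, 2*pi)."""
    stat, pval = kstest(phi / (2.0 * np.pi), "uniform")
    return pval < alpha   # candidates are then inspected by eye

# e.g. phi = resonant_angle(lam1, lam2, pom1, p=2, q=1) for a 3:2 MMR
```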
§ RESULTS
§.§ System Evolution Beyond 200 Myrs
In <cit.>, systems of terrestrial planets were generated via simulations of terrestrial planet accretion. These simulations were terminated after 200 Myrs of evolution, as each simulation had evolved into a system dominated by 1–6 terrestrial planet-mass bodies. Terminating accretion simulations after 200–400 Myrs of system evolution is common practice since the great majority of accretion events occur well before these final times are reached. However, it remains unknown how these newly formed systems evolve over the next several Gyrs. Do planetesimals and embryos naturally accrete into indefinitely stable configurations of terrestrial planets? Or are the systems that arise from terrestrial planet accretion often only marginally stable, with major instabilities occurring hundreds of Myrs or Gyrs after formation?
To begin answering this question, we take the systems from <cit.> and integrate them for another 4.3 Gyrs with MERCURY. In Figure <ref>, we show the cumulative distributions of times at which these systems lose their last terrestrial planet mass body (m> 0.055 M_⊕). These planets can be lost via collision with a larger planet, collision with the Sun, or ejection from the system (r>100 AU). We find that there are many systems that undergo substantial dynamical evolution after their first 200 Myrs. As Figure <ref> shows, between 20 and 50% of systems lose at least 1 planet after t=200 Myrs. This fraction varies with the simulation batch. Systems in the cjs set are the most likely to lose planets at late times. This is likely due to the fact that these systems often form planets well beyond 2 AU <cit.>, where dynamical timescales are longer and planets require a longer time period to undergo ejections or final collisions compared to those at ∼1 AU. In eejs simulations, a smaller fraction of systems (∼25%) lose a planet after their first 200 Myrs of evolution. In these systems, the effects of an eccentric Jupiter and Saturn greatly deplete the mass orbiting beyond 1.5–2 AU <cit.>, and this absence of more distant material may explain the decrease in late instabilities. Finally, the conditions are even more extreme in the ann simulations, where the initial planetesimal region is truncated at 1 AU. These simulations have the lowest rate of late (t>200 Myrs) instabilities at 18%.
It should also be noted that some systems lose planets at extremely late times. 8 out of 150 systems (∼5%) lose planets after t=1 Gyr. 5 of these systems are from cjs, while eejs and ann yield 1 and 2 systems, respectively. This small, yet non-negligible fraction of systems undergoing late instabilities may help explain the existence of transient hot dust around older main sequence stars <cit.>. These very late instabilities in our systems occur even though the orbits of Jupiter and Saturn are effectively fixed for the entire integration. The rate of instabilities would likely be significantly higher if the orbits of the gas giants evolved substantially over time <cit.>.
In Figure <ref>, we look at the mass distributions for the last planet lost from each system with an instability after t=200 Myrs. In general, we see that the last planets lost from systems with late instabilities have masses below ∼0.5 M_⊕. This is not surprising, since during an instability event it is typical for the smallest planets to be driven to the highest eccentricities, resulting in their collision or ejection <cit.>. However, not all systems abide by this. In particular, 3 of the 13 eejs systems that undergo late instabilities lose planets with masses well over 1 M_⊕.
This suggests there may be a different instability mechanism in eejs systems. Indeed, when we look at how the last planets are lost from eejs systems, we find that 10 of the 13 systems with late instabilities lose their planets via collision with the Sun. This contrasts strongly with the cjs and ann systems, where there is only one instance of a planet-Sun collision among the 34 systems that have late instabilities. Moreover, there are 4 eejs systems with only 1 planet at t=200 Myrs, which go on to have a planet-Sun collision before t=4.5 Gyrs. In these cases, the gas giants are clearly driving instabilities. This is not surprising, since the heightened eccentricities of Jupiter and Saturn will enhance the secular and resonant perturbations they impart on the terrestrial planets. When interactions between a gas giant and the terrestrial planets are the main driver of an instability, the relative masses of the terrestrial planets lose their significance because they are all so small relative to the gas giants. This allows for more massive planets to be lost from these systems.
Figure <ref>A shows an example of an instability within a cjs system. In this case, a system of 5 terrestrial planets are orbiting at virtually fixed semi-major axes for 3.8 Gyrs when an instability develops between the inner 3 planets. The second and third planets collide and the resulting 4-planet system finishes the simulation with smaller orbital eccentricities than it began with. On the other hand, the evolution of an eejs system is shown in Figure <ref>B. Here we see the eccentricities of 3 relatively well separated planets driven up around 400–500 Myrs, leading to a collision between the second and third planets. After the collision, the outermost planet's eccentricity is again quickly excited and eventually approaches 0.8. Shortly after this point, the planet collides with the Sun (after a scattering event with the inner planet). While the detailed dynamics of this system are undoubtedly complex, the behavior is clearly different from that of the cjs system, and is almost certainly a consequence of the enhanced gas giant perturbations produced from their increased eccentricities.
We also study how the properties of systems with late instabilities differ from systems that do not lose planet-mass bodies after t=200 Myrs. In Figure <ref> we look at the number of planets that each system has. In panels A–C, we see that after 200 Myrs of evolution, cjs systems typically have 4–6 planet-mass bodies (an average of 4.68 planets per system). This is significantly higher than eejs and ann systems, which have an average of 2.45 and 3.00 planets per system, respectively. The differences can largely be attributed to the lack of distant planets in these systems, owing to their initial conditions. Panels A–C also show which systems go on to lose planets at later times. For cjs and ann simulations, these systems tend to have more planets than the overall distribution. In contrast, no such trend is seen among eejs systems. Regardless of planet number, the eejs systems all seem to have roughly the same probability of losing a planet at late times. This is again a symptom of the gas giants driving instabilities within these systems, unlike the cjs and ann systems, where interactions between terrestrial planets play a larger role in late instabilities. Finally, panels D–F show the distributions of planets per system after 4.5 Gyrs of evolution. At the end of our integrations, the cjs, eejs, and ann systems have an average of 3.76, 2.12, and 2.78 planets per system respectively. For all of our simulation batches, we see that systems with late instabilities tend to have lower numbers of planets than the overall distribution of systems. Thus, in the case of cjs and ann systems, late instabilities tend to transform systems with relatively high numbers of planets into systems with relatively few planets. We also note that 4 eejs systems finish with no terrestrial planets whatsoever.
Finally, we show the AMD of each of our terrestrial planet systems at t=200 Myrs and t=4.5 Gyrs in Figure <ref>. Panels A–C show our systems' AMD distribution at t=200 Myrs. For each of our simulation batches, the median AMD is greater than the solar system's value. Our cjs, eejs and ann simulations have median AMD values of 2.9, 4.0, and 1.7 times the value of the modern inner solar system. Again, we also show the AMD distributions for systems that go on to have late instabilities. These systems tend to have larger AMD values. For the cjs, eejs and ann systems that undergo late instabilities the median AMD values at t=200 Myrs are 4.1, 14, and 4.3 times the solar system's AMD, respectively. Interestingly, though, panels D–F demonstrate that these systems are not always destined to maintain a relatively large AMD. Systems in the cjs and ann batches that undergo late instabilities have median AMD values of 2.7 and 3.7 times the value of the solar system after 4.5 Gyrs of evolution, respectively. Thus, a late instability does not necessarily increase the AMD of the system, and in some situations can result in moderate decreases. On the other hand, in eejs systems, the excited orbits of Jupiter and Saturn continue to wreak havoc on the terrestrial planets. Systems that experience late instabilities have a median AMD of 132 times that of the solar system!
§.§ Prevalence of Chaos
A selection of results from our simulations is provided in Appendix A (Table <ref>). τ_L and MEGNO are listed for run 1 for all systems. Additionally, we provide our chaos determination (yes or no) for runs 1, a and b, as well as MMRs detected for run 1. Figure <ref> compares the fraction of all systems which are chaotic between runs 1, a and b. We find that removing Jupiter and Saturn has the greatest effect on reducing chaos in our systems: in general, ∼50% of systems exhibit some form of chaos; when Saturn is removed, only ∼40% of systems are chaotic; and when Jupiter is removed, that number is only ∼20%. This indicates that the chaos in most of our systems is likely driven by perturbations from Jupiter. In fact, when Jupiter was removed, all but one system had τ_L values that either increased or remained within 1.5 orders of magnitude of their original values.
Figure <ref> also clearly shows the disparity of chaos between systems with different numbers of terrestrial planets. This is most pronounced in 5-planet configurations, where only 2 such systems are free of chaos with the outer planets in place, and only 3 when they are removed. Having 4 terrestrial planets may not be a significant source of chaos in our own solar system. This can be seen in figure <ref>, which provides MEGNO plots for the solar system with and without the 4 outer planets. In fact, the solar system may better be described as a 3-planet configuration when compared to our results. Most 4- and 5-planet systems in our study differ greatly from our own since they typically contain only planets with masses comparable to Earth and Venus (see Table <ref> for 2 such 5-planet examples). Mars analogues are rare in our systems, and Mercury-sized planets are almost non-existent. If we consider our solar system a 3 terrestrial planet arrangement, its inherent chaos fits in well with our results, where about half of systems show chaos with the giant planets in place, and only around 1 in 6 when they are removed. A shortcoming of this comparison is that many studies have shown that Mercury is a very important source of the chaos in our own solar system <cit.>. However, when we integrate the solar system without Mercury, the system is still chaotic. Therefore, though the actual solar system does match our results, this comparison is limited by the fact that present models of terrestrial planet formation systematically fail to produce Mercury analogs.
MMRs between planets are common features in many of our chaotic systems, implying that they are often important sources of the dynamical chaos. We detect 365 MMRs among all simulations in this phase of the project, 82% of which occur in chaotic systems. Further analysis shows that the MMRs which do occur in non-chaotic systems tend to be of higher order between smaller terrestrial planets. It should be noted that the vast majority of these MMRs are intermittent, and last only a fraction of the entire simulation duration.
Figure <ref> shows the fraction of systems which are chaotic in runs 1, c, d and e. It is clear that an eccentric Jupiter and Saturn can quickly introduce chaos to an otherwise non-chaotic system. One interesting result from this batch of simulations is that when the eccentricities of the outer planets are inflated, the likelihood of a 5:2 MMR between Jupiter and Saturn developing increases. In almost all systems, this resonant perturbation introduces chaos, and can possibly destabilize the system. Since systems labeled cjs were formed with the giant planets on near circular orbits, simply multiplying the already low eccentricity by 1.5 or 2 was not enough to produce a noticeable effect. For this reason, run e was performed using a step increase of 0.05. There is a clear parallel between the results of this scenario and a Nice Model instability <cit.>, where the outer planets rapidly transition from nearly circular to relatively eccentric orbits.
To further probe this effect, we repeat run e for cjs and ann systems using the MERCURY hybrid integrator in order to accurately detect collisions and ejections. Systems are integrated for 1 Gyr using simulation parameters similar to those discussed in section 2.1. We find instabilities are relatively common in these systems. 29% of ann systems and 52% of cjs systems lose one or more planets over the 1 Gyr integration. Table <ref> shows the percentage of systems which lose a given number of terrestrial planets. In fact, the resulting systems are quite similar to those produced after integrating the eejs batch to the current epoch. A small fraction of systems lose all inner planets, and some can have instabilities occur very late in the simulations (Figure <ref>). Overall we show that an event similar to the Nice Model scenario, where the giant planets' eccentricities quickly inflate, can result in a non-negligible probability of inner planet loss.
If we again classify our solar system as a 3 terrestrial planet system, we can draw further parallels between the results of such cjs run e configurations and the Nice Model instability, since Jupiter and Saturn begin on near circular orbits. In run e, 6 out of 8 such systems were chaotic, as compared to only 2 out of 8 in run 1. Indeed, we see that even a perturbation in the giant planets' eccentricities of 0.05 is successful in rapidly making a system chaotic. In fact, when our own solar system is integrated with the outer planets on circular (e ≲ 0.001, i ∼ 0) orbits, the chaos disappears.
§.§ Angular Momentum Deficit and Last Loss Analysis
For our systems, we evaluate the difference between the average AMD of the inner planets over the first and last 3 Myr of our 100 Myr simulations. Taking the average removes the contributions from periodic forcing in the AMD from Jupiter and Saturn. We find a weak trend for chaotic 4 and 5 planet systems to have larger changes in AMD over the duration of the simulation than their non-chaotic counterparts. Of the 13 systems which had total changes in AMD greater than the actual solar system's value, 9 were classified as chaotic. The largest outlier, a non-chaotic eejs 2 planet system (eejs25), is discussed further in section 3.5.1. We also search for any correlation between the time and mass of the last object (m> 0.055 M_⊕) lost, and the chaos of a system. Though we show that some unstable, chaotic systems can stabilize after losing a planetary mass body, we are unable to identify any conclusive trends as to whether a late instability will shape the ultimate chaotic state of a system.
§.§ Mass Concentration Statistic and Center of Mass Analysis
To evaluate the degree to which mass is concentrated at a given distance away from the central star, we utilize a mass concentration statistic (S_c) <cit.>:
S_c = \max_a\left(\frac{\sum_i m_i}{\sum_i m_i\left[\log_{10}(a/a_i)\right]^2}\right)
The expression in parentheses in (<ref>) is essentially the level of mass concentration at any point as a function of semi-major axis. <cit.> utilizes a logarithm in the equation since, in our own solar system, the semi-major axes of the planets are spaced in a rough geometric series (the famous Titius–Bode law). S_c is the maximum value of the mass concentration function. A system where most of the mass is concentrated in a single, massive planet would have a very steep mass concentration curve, and a high value of S_c. A system of multiple planets with the same mass would have a smoother curve and yield a lower S_c. As a point of reference, the S_c value of the solar system's 4 inner planets, where most of the mass is concentrated in Venus and Earth, is 90. S_c values are provided in the same format as AMD values in figure <ref>. A general, weak correlation can be seen between chaotic systems and slightly higher values of S_c; however, this trend is not very conclusive.
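A direct numerical evaluation of equation (<ref>) simply scans the expression over a grid of semi-major axes and takes the maximum; the grid range and resolution below are arbitrary illustrative choices.

```python
import numpy as np

def mass_concentration(m, a, n_grid=2000):
    """Mass concentration statistic S_c from the equation above."""
    m, a = np.asarray(m), np.asarray(a)
    grid = np.geomspace(0.5 * a.min(), 2.0 * a.max(), n_grid)
    f = [m.sum() / (m * np.log10(ag / a)**2).sum() for ag in grid]
    return max(f)

# Inner solar system: returns a value near the quoted S_c of 90
m = [1.66e-7, 2.45e-6, 3.00e-6, 3.23e-7]   # Msun
a = [0.387, 0.723, 1.000, 1.524]           # AU
print(mass_concentration(m, a))
```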
We also provide the center of mass for each system of terrestrial planets in figure <ref>. A clear trend is visible where non-chaotic systems tend to have a center of mass between ∼0.8–1.2 AU (the value increasing slightly with the number of planets). In general, the more mass is concentrated closer to the central star, or closer to Jupiter, the greater the likelihood of chaos developing. This is likely related to Jupiter's role in introducing chaos to systems. This trend holds for all three simulation subsets, but is particularly strong in the cjs and eejs batches.
§.§ Systems of Particular Interest
§.§.§ eejs25
The system with the largest change in AMD over the duration of the simulation is surprisingly non-chaotic. This outlier (eejs25) is a system of just 2 inner planets. The innermost planet is ∼117% the mass of Earth, residing at a semi-major axis of 0.62 AU, and the second planet is ∼96% the mass of Venus at a semi-major axis of 1.35 AU. The innermost planet is locked in a strong, secularly driven resonance with Jupiter (figure <ref>). This causes the eccentricity of the innermost planet to periodically oscillate between ∼0.15 and ∼0.7 over a period of ∼8 Myr. These oscillations are remarkably stable. In fact, due to the fortuitous spacing between the Sun and the inner 2 planets, this oscillation does not lead to interactions with other bodies in the system.
§.§.§ cjs10 and cjs13
The 2 most stable 5-planet configurations occurred in cjs10 and cjs13, with both systems classified as non-chaotic through runs 1, a and b. In the runs which vary eccentricity, cjs10 started to develop a weak 5:2 MMR between Jupiter and Saturn, causing mild chaos in run c (where eccentricities are multiplied by 1.5). However, the chaos in this run was mild (MEGNO only rose to 3.703 and τ_L for this run was 1.57E+09 years). A step increase of 0.05 to Jupiter and Saturn's eccentricity in run e was required to fully introduce chaos (maximum MEGNO values of 198.9 and 186.4 for cjs10 and cjs13, respectively) to both of these systems. This excitation of the eccentricities of the giant planets drove an occasional 3:1 MMR between the second and fourth inner planets in cjs13, possibly contributing to this chaos. The most remarkable similarity between these systems is their mass spacing and distribution (summarized in Table <ref>). Both have similarly low values of S_c (20.1 and 15.6). In fact, the planet spacing of both systems is somewhat reminiscent of a Titius–Bode series. For example, all orbital locations of cjs13 are within 6% of a Titius–Bode series beginning at the inner planet's semi-major axis.
§ DISCUSSION AND CONCLUSIONS
We have presented an analysis of systems of terrestrial planets formed through direct numerical integration of terrestrial accretion, fully evolved to the present epoch. Our work aims to assess whether our solar system, with its inherently chaotic dynamics, is a likely result of planetary formation as we currently understand it. We report that roughly half of our systems display some form of chaos. By far, the most common source of this dynamical chaos is perturbations from Jupiter. Additionally, we find that systems in our sample with greater numbers of terrestrial planets are far more prone to chaos than those with fewer inner planets. Unfortunately, systems formed through numerical integrations (including those of <cit.> which are used for this work) still routinely produce Mercury and Mars analogues which are far too massive <cit.>. Consequently, we find it best to consider our solar system a 3 terrestrial planet system for the purposes of comparison in this work. This classification of the solar system works well with our results that 3-planet systems have a ∼50% chance of being chaotic, and a much lower probability when the giant planets are removed.
By varying the eccentricities of Jupiter and Saturn in 3 separate batches of simulations, we show that an eccentric system of outer planets can quickly introduce dynamical chaos and trigger instabilities in otherwise stable systems. This result confirms the findings in numerous previous works <cit.>. The inflation in eccentricity required to create such a chaotic system is surprisingly small. By varying the eccentricity of a batch of systems with Jupiter and Saturn on nearly circular orbits, we show that dynamical chaos quickly ensues. This sort of event is akin to a Nice Model-like instability <cit.>. We go on to show that in such an instability, the possibility of destabilizing the inner planets to the point where a terrestrial planet is lost by either collision or ejection is fairly high.
Additionally, we find that systems most immune to developing dynamical chaos tend to have centers of mass between ∼0.8–1.2 AU, though that range is by no means absolute. This is an interesting result, and likely related to Jupiter's role in driving chaos in many of these systems.
We consistently identified systems throughout our suite of simulations which displayed many of the same chaotic dynamics as our own solar system. It is clear that chaotic systems such as our own are common results of planetary formation. The largest source of chaos in our own system, perturbations from Jupiter, is the most common source of chaos observed in our work. The solar system, however, is akin to only a small fraction (∼10%) of our simulations since removing just the planets beyond Jupiter turns our system non-chaotic. Additionally we show that late instabilities are common among these systems, and it is not far-fetched to imagine a late instability shaping dynamics within our own system. Finally, we find many systems with similar numbers of terrestrial planets, semi-major axis configurations, mass concentrations and chaos indicators (τ_L and MEGNO) as our own.
§ ACKNOWLEDGEMENTS
This work was supported by NSF award AST-1615975. Simulations in this paper made use of the REBOUND code which can be downloaded freely at http://github.com/hannorein/rebound. The bulk of our simulations were performed over a network managed with the HTCondor software package (<https://research.cs.wisc.edu/htcondor/>).
§ SUPPLEMENTAL DATA
Table: Simulation Results

System    τ_L (years)    MEGNO    Inner Planets    Run 1[1]    Run a[1]    Run b[1]    Run 1 MMRs[2]
cjs1 7.09E+05 100.7 5 Y Y Y
cjs2 1.69E+07 45.35 4 Y N N 7:4
cjs3 2.93E+05 187.2 5 Y Y Y
cjs4 3.69E+05 160.9 5 Y Y Y 2:1
cjs5 4.63E+05 183.5 5 Y Y Y
cjs6 3.61E+05 160.0 5 Y Y Y
cjs7 5.04E+06 126.0 5 Y N Y 7:4,5:1,
7:1 (S),
5:2 (J,S)
cjs8 1.71E+10 2.004 4 N N N
cjs9 5.93E+10 1.997 3 N N N
cjs10 8.28E+08 1.896 5 N N N
cjs11 7.80E+10 1.999 2 N N N
cjs12 7.14E+06 90.97 5 Y Y Y
cjs13 6.10E+09 2.003 5 N N N
cjs14 1.78E+10 1.988 3 N N N
cjs15 7.61E+07 8.794 5 Y Y Y 7:3 (J)
cjs16 4.89E+04 212.8 4 Y Y Y
cjs17 5.97E+08 4.467 4 Y N N 3:1 (J)
cjs18 5.16E+07 21.04 3 Y N N 3:1 (J)
cjs19 1.50E+07 53.36 4 Y Y Y
cjs20 7.23E+10 2.000 2 N N N
cjs21 9.03E+10 1.996 2 N N N
cjs22 3.86E+10 2.000 3 N N N
cjs23 8.83E+03 169.0 4 Y Y Y
cjs24 1.66E+06 20.29 4 Y N Y 5:2 (J,S)
cjs25 2.16E+10 2.040 4 N N N 2:1
cjs26 1.39E+05 183.2 5 Y Y Y 6:1
cjs27 8.50E+10 1.999 3 N N N
cjs28 7.22E+10 1.999 2 N N N
cjs29 9.22E+07 12.82 4 Y Y Y 5:3:1
cjs30 7.50E+08 4.816 3 Y N N
cjs31 1.58E+10 2.001 2 N N N
cjs32 2.81E+09 1.966 4 N N N
cjs33 1.31E+10 1.999 2 N N N
cjs34 7.89E+04 117.0 4 Y Y Y
cjs35 2.26E+10 2.000 3 N N N
cjs36 3.33E+06 186.5 4 Y N Y
cjs37 2.14E+05 193.5 5 Y Y Y
cjs38 3.50E+08 5.732 3 Y N Y
cjs39 6.30E+10 1.999 4 N N N
cjs40 6.05E+10 1.999 4 N N N
cjs41 6.46E+06 113.5 4 Y Y Y 5:3
cjs42 9.03E+10 1.998 4 N N N 5:1
cjs43 1.47E+07 49.51 4 Y N Y 4:1
cjs44 1.93E+05 189.4 4 Y Y Y
cjs45 1.38E+05 194.1 4 Y N Y
cjs46 5.10E+10 2.011 4 N N N
cjs47 2.11E+06 129.4 4 Y N Y
cjs48 1.68E+08 8.483 4 Y Y Y
cjs49 1.49E+07 20.44 4 Y N Y
cjs50 5.08E+08 6.271 4 Y N Y
eejs1 4.36E+05 178.4 3 Y N Y
eejs2 1.75E+05 87.08 4 Y Y Y 5:3
eejs3 4.55E+05 202.9 4 Y N Y 8:5,7:1
eejs4 9.62E+05 174.4 1 Y N N
eejs5 1.05E+10 2.000 2 N N N
eejs6 4.11E+07 29.38 3 Y Y Y 3:1,7:1 (S)
eejs7 2.41E+07 34.65 4 Y N Y
eejs8 1.33E+10 1.988 3 N N N
eejs9 1.01E+08 9.676 5 Y Y Y 8:3,8:5
eejs10 6.92E+05 151.8 1 Y N N
eejs11 1.95E+10 2.000 2 N N N
eejs13 9.19E+08 7.045 4 Y Y N
eejs15 2.57E+09 1.999 1 N N N
eejs16 3.39E+10 2.002 3 N N N 5:2 (J,S)
eejs18 1.87E+10 2.017 4 N N N
eejs19 2.93E+10 2.000 1 N N N
eejs20 1.94E+09 1.992 3 N N N
eejs21 4.17E+09 1.919 3 N N N
eejs22 1.05E+10 1.998 2 N N N
eejs23 3.11E+06 183.4 4 Y Y Y
eejs24 1.21E+07 57.05 5 Y Y Y 8:1 (S)
eejs25 6.18E+09 2.041 2 N N N
eejs26 6.31E+09 1.912 3 N N N
eejs27 8.95E+08 1.813 3 N N N
eejs28 2.48E+09 4.370 3 Y N N
eejs29 3.43E+05 173.0 1 Y N N 5:2 (J,S)
eejs30 6.32E+05 158.7 2 Y N Y
eejs31 2.45E+05 184.6 4 Y Y Y
eejs32 1.96E+10 1.969 3 N N N 7:3
eejs33 3.96E+10 2.004 3 N N N 7:2,8:5
eejs34 2.67E+05 196.5 1 Y Y N
eejs35 4.23E+06 154.0 2 Y N Y 7:3
eejs36 3.39E+06 180.2 1 Y N N
eejs37 1.06E+06 183.2 4 Y Y Y
eejs38 2.29E+05 171.8 2 Y N N
eejs39 1.84E+09 2.005 2 N N N
eejs40 1.08E+05 189.1 4 Y Y Y
eejs41 5.27E+06 33.10 4 Y Y Y 5:3
eejs42 1.31E+10 2.511 4 N N N
eejs43 4.94E+09 1.998 1 N N N
eejs44 7.84E+04 193.8 3 Y N Y
eejs45 2.09E+06 178.1 4 Y N Y
eejs46 5.52E+10 2.009 4 N N N
eejs47 5.37E+08 8.082 3 Y N N 8:5
eejs49 3.06E+10 1.996 3 N N N 5:2
eejs50 2.10E+10 1.929 2 N N N 7:3
ann1 3.64E+09 2.014 3 N N Y
ann2 3.02E+10 1.998 3 N N N
ann3 3.06E+10 1.988 4 N N N
ann4 4.94E+10 2.000 3 N N N
ann5 4.97E+07 29.32 3 Y N Y 8:5,7:1 (J)
ann6 1.66E+08 7.441 4 Y N N 7:4
ann7 3.91E+09 3.132 3 Y N N
ann8 2.87E+05 185.6 3 Y N Y
ann9 4.67E+10 2.010 3 N N N 2:1
ann10 2.06E+10 1.998 2 N N N
ann11 3.17E+09 2.294 3 N N N
ann12 6.72E+03 195.1 4 Y Y Y
ann13 2.89E+06 125.7 2 Y Y Y
ann14 1.68E+07 60.09 4 Y N N 4:1
ann15 9.01E+09 2.103 3 N N N
ann16 1.85E+10 1.99 2 N N N
ann17 1.05E+09 4.372 4 Y N N 5:2 (J,S)
ann18 8.23E+08 2.159 3 N N N
ann19 3.61E+05 183.8 3 Y Y Y
ann20 2.07E+06 175.4 4 Y Y Y
ann21 4.55E+06 21.76 3 Y N Y 3:1
ann22 2.00E+07 44.64 3 Y N Y
ann23 2.00E+07 53.55 3 Y N Y 7:4
ann24 2.53E+10 1.999 3 N N N
ann25 3.59E+07 20.22 4 Y Y Y 8:3
ann26 1.08E+08 9.212 4 Y Y Y 5:3
ann27 1.15E+08 13.65 4 Y Y N 5:2,8:5
ann28 1.90E+08 15.54 5 Y Y Y
ann29 5.97E+06 115.7 4 Y N Y
ann30 5.10E+03 193.1 3 Y Y Y
ann31 1.32E+08 8.47 3 Y Y Y
ann32 1.85E+06 182.3 4 Y Y Y
ann33 2.36E+06 191.6 4 Y Y Y 2:1
ann35 3.93E+07 23.00 3 Y N N
ann36 3.50E+10 1.998 3 N Y N 7:3
ann37 2.75E+05 192.5 4 Y Y Y
ann38 4.23E+10 2.001 3 N N N
ann39 2.50E+05 189.6 3 Y Y Y
ann40 6.91E+05 192.9 4 Y Y Y
ann41 4.22E+10 1.998 3 N N N
ann42 1.01E+10 2.000 2 N N N
ann43 1.49E+11 2.000 3 N N N
ann44 2.75E+08 11.41 3 Y N N
ann45 9.52E+09 2.000 2 N N N
ann46 4.81E+06 152.4 3 Y Y Y
ann47 3.71E+10 2.004 3 N N N
ann48 8.07E+06 54.18 4 Y Y Y 5:2 (J,S)
ann49 7.25E+05 185.7 3 Y N Y 7:4
ann50 5.53E+10 2.008 3 N N N
[1] “Y” indicates chaos was detected; “N” indicates it was not.
[2] (J) and (S) indicate the resonance was with Jupiter or Saturn, respectively.
http://arxiv.org/abs/1701.07862v2 | 20170126201104 | Automated construction of molecular active spaces from atomic valence orbitals | [
"Elvira R. Sayfutyarova",
"Qiming Sun",
"Garnet K. -L. Chan",
"Gerald Knizia"
] | physics.chem-ph | [
"physics.chem-ph"
] |
We introduce the atomic valence active space (AVAS), a simple and well-defined automated technique for constructing active orbital spaces for use in multi-configuration and
multi-reference (MR) electronic structure calculations.
Concretely, the technique constructs active molecular orbitals capable of describing all relevant electronic configurations emerging from a targeted set of atomic valence orbitals (e.g., the metal d orbitals in a coordination complex).
This is achieved via a linear transformation of the occupied and unoccupied orbital spaces from an easily obtainable single-reference wavefunction (such as from a Hartree-Fock or Kohn-Sham calculation) based on projectors onto the targeted atomic valence orbitals.
We discuss the premises, theory, and implementation of the idea, and several of its variations are tested.
To investigate the performance and accuracy, we calculate the excitation energies for various transition metal complexes in typical application scenarios.
Additionally, we follow the homolytic bond breaking process of a Fenton reaction along its reaction coordinate.
While the described AVAS technique is not a universal solution to the active space problem, its premises are fulfilled in many application scenarios of transition metal chemistry and bond dissociation processes.
In these cases the technique makes MR calculations easier to execute, easier to reproduce by any user, and simplifies
the determination of the appropriate size of the active space required for accurate results.
§ INTRODUCTION
Multiconfigurational and multireference (MR) methods remain indispensable in the treatment of
challenging electronic structure problems.
Transition metal complexes provide a rich source of examples, as they often feature strongly correlated electronic degrees of freedom, which render density functional theory (DFT) calculations unreliable.
Unfortunately, MR methods require an a priori choice of a suitable set of active molecular orbitals (an active space), which critically determines the quality of the results.
The choice of active space is non-trivial and often represents a major challenge in practical computations.
The standard way to choose an active space is as follows: First, a full set of molecular orbitals of the molecule is computed with a simple electronic structure method, such as Hartree-Fock (HF) or Kohn-Sham DFT (KS-DFT). Second, these molecular orbitals, both occupied and unoccupied, are visually inspected, and based on their shape, energy, occupation numbers, etc., one selects into the active space the set of orbitals which are expected to be chemically most relevant
(e.g., with significant transition metal d-electron character or ligand character in the case of transition metal complexes—regions which are empirically known to be important).
But despite the existence of general advice on how to select a good set of starting orbitals <cit.>, there are some problems with this approach.
First, the selection of molecular orbitals for the active space is performed by the user, normally based on personal experience.
This makes MR methods hard to apply, and gives results that are complex to reproduce and hard to judge in terms of quality.
This stands in contrast to single-reference calculations which do not require active spaces and therefore do not have this level of arbitrariness.
Second, the molecular orbitals mix together valence orbitals of different character;
for example, typically a metal's d atomic valence orbitals contribute to a very large number of molecular orbitals, and it is often not easy to truncate these to a small active subset in such a way that they retain the capability of describing all the right physics in complicated systems.
In the case of large complexes, especially with multiple metal ions, this procedure commonly becomes a matter of trial and error.
Techniques to aid in the construction of high quality active spaces are therefore highly desirable, particularly as more powerful electronic structure methods are becoming available which are capable of treating increasingly large active spaces efficiently.<cit.>
There have been a number of contributions in this area, and the general procedures can loosely be summarized as based on
estimating the (correlated) occupation numbers of the orbitals (including the full single-orbital density matrix in Refs. Keller:ActiveSpaceSelection and Stein:ActiveSpaceSelection), followed by selecting the active orbitals based on partial occupancy according to an input threshold. The various techniques differ primarily
in how the occupation number information is obtained. For example, correlated occupation numbers
can be estimated from unrestricted Hartree-Fock <cit.> or Kohn-Sham calculations or from correlated calculations,
such as MP2 <cit.> or approximate DMRG calculations <cit.>.
Nonetheless, while these existing automated approaches are advancements from typical ad hoc active space constructions, there is still room for further improvement.
For example, an obvious drawback of the above procedures is that they all require a non-trivial preliminary calculation:
either a correlated calculation must be performed, or a suitable broken symmetry solution must be found.
This is not always possible: for example, there may not always be a broken symmetry solution involving the region of interest, or
the preliminary correlated calculation may simply be too costly.
A perhaps less obvious drawback, but one which one encounters in practice, is that there is no guarantee that the active orbitals
found in these automated procedures are actually spatially located in the region of chemical interest. For example, in
models of enzymatic binding sites, particularly those with charged ligands, the unpaired electrons may lie in functional groups which are spatially
far from and irrelevant to the chemistry of the metal center. In this case, an additional inspection is once again required to choose the subset of
active orbitals of chemical interest.
Here we propose an alternative approach to construct molecular active spaces for multireference problems automatically and systematically.
This approach does not suffer from the drawbacks mentioned above, and is particularly well-suited for practical calculations involving
transition metal complexes. In its simplest formulation, the procedure requires only an easily obtained single-determinant reference function, together with a choice of target atomic valence orbitals.
The technique is based on the following idea: It is empirically known that one can typically identify a small set of atomic valence orbitals which give rise to the strong correlation effects (for example, d atomic valence orbitals in transition metal complexes).
We therefore aim to construct a set of active molecular orbitals by defining them in terms of these atomic valence orbitals.
Concretely, using simple linear algebra, we can define mathematical rotations of the occupied and virtual molecular orbitals of the single-determinant reference function which maximizes their given atomic valence character (e.g. 3d character).
After this rotation, the relevant molecular orbitals to include into the active space can be selected automatically.
The ideas employed here are closely related to earlier work of Iwata<cit.> and Schmidt and coworkers<cit.> regarding the construction of approximate valence spaces;
however these earlier works focused on constructing molecular orbitals capable of spanning the entire valence space (not only the valence space of selected atomic orbitals, with the goal of identifying active spaces for processes involving said atomic orbitals), and the latter work also markedly differs in intent.
Similar mathematical techniques were also used in the construction of molecule-intrinsic minimal basis sets capable of spanning a set of given occupied orbital spaces<cit.>.
Sec. <ref> explains the motivation, background, theory, and implementation of the construction proposed here.
Sec. <ref> then describes criteria for judging the quality of the constructed active spaces, and discusses its application to a large number of prototypical MR calculations of metal complexes.
Sec. <ref> describes conclusions and possible implication for future work.
§ THEORY
§.§ What is an active space?
Many naturally occurring stable molecules have an electronic structure which is qualitatively well described by a single-determinant self-consistent field (SCF) wave function,
such as used in Kohn-Sham Density Functional Theory (KS-DFT) or Hartree-Fock (HF).
In this case, the entire space of molecular orbitals (MO) is strictly divided into fully occupied and
fully unoccupied molecular (spin-) orbitals.
However, there are important classes of chemical systems in which this picture breaks down and where a superposition of multiple N-electron determinants is required to describe the electronic structure even qualitatively.
This phenomenon is sometimes called strong correlation; prototypical cases in which it occurs are (a) the process of homolytic bond breaking, and (b) various kinds of transition metal complexes—particularly when the complex is in an overall low-spin state generated
by coupling of the metal to other metals or redox non-innocent ligands, as frequently encountered in catalysis and in bio-inorganic systems.
Both of these prototypical cases share the same root cause: the occurrence of valence atomic orbitals with energy levels similar to those of other valence orbitals, but with poor orbital overlap, which gives rise to small energy splittings between bonding and anti-bonding linear combinations.
In these cases, quantum resonances between the bonding and anti-bonding orbitals must be explicitly considered to qualitatively describe the electronic structure—and single-determinant wave functions are incapable of doing so, because in them each MO is either occupied or not, but never both.
The core idea developed in this manuscript is that in these two most important cases, the emergence of strong correlation is tightly linked to specific valence atomic orbitals, which are easy to identify. In the case of transition metals, these are the compact d orbitals (and possibly some specific ligand orbitals), and in the case of bond dissociation, these are the valence atomic orbitals of the dissociating atom(s).
Thus, in the following we will assume that a small number of specific valence atomic orbitals are explicitly selected by the user (e.g., the d orbitals of metal centers), and that the goal is to construct an active space suitable for describing all highly relevant determinants that they give rise to.
We stress that this initial selection is easy in practice—the core problem is how to use the information to build a suitable active space.
We thus briefly consider what an active space is.
An active space wavefunction is one where the superpositions
of determinants are restricted so that varied occupations are found only within the active orbitals {ϕ_1 …ϕ_m}. Technically, this means that the wavefunction can be written as
a second quantized product
|Ψ⟩ = Ψ̂{ϕ_1 …ϕ_m } |core⟩
where Ψ̂{ϕ_1 …ϕ_m} denotes a general occupancy wavefunction within the active orbitals, and |core⟩
denotes a single determinant.
It is clear that the active orbitals must span the space of our chosen specific valence atomic orbitals. However,
Eq. (<ref>) additionally implies that the rest of the molecule must be well described by the single core determinant.
To achieve this, the active space must contain orbitals additional to our set of specific valence atomic orbitals, and
to remain compact, we need to define the minimal additional set.
To this end, we here employ the basic observation in density matrix embedding theory (DMET),<cit.> which
describes how to construct such an active space explicitly.
In particular, DMET tells us that the active space with the above properties is at most twice the size of the initial set of chosen valence atomic orbitals.
However, in this work we modify the presentation and practice of the DMET procedure
to make it more natural in the active space setting.
In particular, in contrast to the original presentation in Refs. GK_GKC1 and GK_GKC2 (but as described in the appendix of Ref. zheng2016ground)
we do not introduce separate “fragment” and “bath” orbitals, but rather retain the occupied and virtual character of the constructed active orbitals.
This has the important benefit that it leads to a natural truncation procedure, which allows us to further reduce the size of the active space in
the most chemically meaningful way.
In Sec. <ref> we describe the isolation of entangled orbitals for the active space construction, and Sec. <ref> will provide technical details and discuss various practical aspects relevant in the active space case.
Finally, Sec. <ref> outlines how the presented approach may be extended to more complex cases, such as double-shell effects or complex metal/ligand interactions.
§.§ Isolating target-overlapping orbitals for the active space
Let A={|p⟩} denote the (small) set of chosen target valence
atomic orbitals (not necessarily orthonormal), which we expect to be
responsible for strong correlations (e.g. the five 3d orbitals of a
third row transition metal atom of a complex, details on their
selection and representation will follow). Let |Φ⟩ denote
a Slater determinant, which represents the electronic structure of our
system of interest at the SCF level (HF or KS-DFT).
Our method for active space construction is built around the following
physical assumptions, which are those used in
DMET<cit.>: (a) an
SCF wave function |Φ⟩ may be unable to describe
precisely how our target atomic orbitals are bonded with the rest of
the molecule; however, it will generally describe the rest of
the molecule reasonably well (experience in DMET suggests that this works
even in cases where the SCF wave function as a whole is qualitatively
very wrong<cit.>), and (b) we can isolate the part of
|Φ⟩ which involves our target AOs from the part which does
not, and employ the former part as the active space (therefore allowing it
to be replaced by a more powerful wave function description), while retaining the
simple determinantal description of the rest.
To this end, we employ a rotation within the set of occupied molecular
orbitals of |Φ⟩ which splits them into two groups: one group
which has overlap with our target AOs, and one group which does not.
A simple dimensional counting argument will show that for a set of
|A| selected target AOs, there is a rotation of the occupied
molecular orbitals such that at most |A| of them have non-zero
weight on the target AOs. We similarly split the set of virtual
molecular orbitals into one group of at most |A| virtual orbitals
which have weight on the target atomic valence orbitals, and the other
group which does not. The idea is now to explicitly construct these
rotated orbital groups, and then employ the at most 2|A| combined
occupied and virtual orbitals with target overlap as active orbitals,
while leaving the other occupied and virtual orbitals without target
overlap as inactive (closed-shell) or virtual orbitals, respectively,
in the following multi-configuration treatment. As all of the
selected AOs in A then lie within the span of this active space, the
resulting multi-configurational wave function is then capable of
representing arbitrary quantum resonances involving the target AOs.
We first discuss the occupied case. Let i = 1… N_occ
denote N_occ occupied molecular orbitals (MOs) of
|Φ⟩. The projector P̂ onto the space of atomic
orbitals in A, is given by
P̂ = ∑ _p,q∈ A |p ⟩ [σ^-1]_pq⟨ q |.
Here σ denotes the |A|×|A| target AO overlap matrix with
elements [σ]_pq = ⟨ p|q⟩, and σ^-1 its
matrix inverse.
Employing these projectors, we construct the first set of active
orbitals by rotating |Φ⟩'s occupied MOs {|i⟩} as
follows.
First, we calculate the N_occ× N_occ overlap
matrix of occupied orbitals projected onto (A), the space of
selected target atomic orbitals:
[ S^A]_ij = ⟨ i|P̂|j ⟩,
where i,j are occupied orbital indices. Next we compute the
N_occ× N_occ (unitary) matrix of eigenvectors
[ U]_ij of S^A, such that
S^A U = U
diag(σ_1,…,σ_N_occ)
(where diag(…) denotes a diagonal matrix of the
given elements), or, written in component form,
∀ i,j: ∑_k [ S^A]_ik [ U]_kj = [ U]_ijσ_j.
There are at most |A| non-zero eigenvalues {σ_i}, because
(A) is a |A|-dimensional space, and [ S]^A_ij
involves a projection onto it. Furthermore, the eigenvectors [
U]_ij of S^A_ij define a rotation on the occupied orbitals:
|i⟩ ↦ |ĩ⟩ = ∑_k|k⟩ [
U]_ki,
which clearly separates them into two groups: The at most |A|
rotated occupied orbitals |ĩ⟩ with σ_i≠ 0,
which have non-vanishing overlap with our target atomic orbitals (and
which therefore should go into the active space), and
the remaining |ĩ⟩ with σ_i= 0 which have no
overlap with our target atomic orbitals, and therefore can stay as
inactive (inner closed shell) orbitals in the subsequent
multiconfigurational methods.
Note that the rotated occupied orbitals {|ĩ⟩} in
(<ref>) are obtained as a unitary
transformation of |Φ⟩'s original occupied orbitals
{|i⟩}. Consequently, a determinantal wave function
|Φ̃⟩ built from the {|ĩ⟩} is physically
equivalent (differs by at most a phase factor) from the original
determinant |Φ⟩. That is, so far we have done nothing to
|Φ⟩ except for splitting its occupied orbitals into a convenient
set of at most |A| orbitals related to our |A| target AOs and the
remaining set we can treat as inactive.
We then proceed similarly for the virtual orbitals {|a⟩, a =
1,…,N_vir} of |Φ⟩: We form the
N_vir× N_vir projected overlap matrix
[S̅]^A_ab = ⟨ a|P̂|b ⟩,
where a,b are virtual orbital indices, then find its unitary matrix
of eigenvectors U̅ such that
S̅^AU̅ = U̅diag(σ_1,…,σ_N_vir)
⇔ ∀ a,b: ∑_c [S̅^A]_ac [U̅]_cb = [U̅]_abσ_b,
and use U̅ to rotate the virtual orbitals via
|a⟩ ↦ |ã⟩ = ∑_c|c⟩
[U̅]_ca.
Again, the at most |A| of the new virtual orbitals {|ã⟩} with eigenvalues σ_a≠ 0 are selected for the active
space, while the remaining orbitals will stay unoccupied in the
subsequent multi-configuration treatment.
Finally, having active orbitals with overlap with A from both sides,
occupied and unoccupied orbitals in |Φ⟩, we can form the
total active space by combining the sets of {|ĩ⟩} and
{|ã⟩} with non-zero projected overlap eigenvalues.
Since the combined set includes all orbitals which have
non-vanishing overlap with our target space (A), all of the
selected AOs in A then lie within the span of this active space.
Therefore, a multi-configurational wave function with this active
space will be capable of representing arbitrary quantum resonances
involving the target AOs—which was the goal of our construction.
§.§ Technical details of the construction
Sec. <ref> discusses the formal framework of the active space construction.
However, several practical aspects still need to be discussed:
(a) How are the target AOs A={|p⟩} chosen and represented?
(b) How are the actual rotation matrices U (eq. (<ref>)) and U̅ (eq. (<ref>)) computed in practice?
(c) Can the active space (formally twice the number of the target AOs) be further reduced in size?
(d) How should open-shell systems be handled? (In particular, what to do for restricted open-shell functions |Φ⟩?)
We will discuss these questions in the current and next subsections.
Let us first assume that |Φ⟩ is a closed-shell Slater determinant obtained from an SCF calculation.
Its occupied and virtual molecular orbitals are expressed as:
|i ⟩ = ∑_μ∈ B_1 |μ⟩ C^μ_i
|a ⟩ = ∑_μ∈ B_1 |μ⟩C̅^μ_a,
where μ are basis functions from the (large) computational basis set B_1 (e.g., cc-pVTZ or def2-TZVPP), and C^μ_i=[ C_occ]_μ i and C̅^μ_a=[ C_vir]_μ a are the coefficients of the basis function μ in the expansion of the occupied orbital i and virtual orbital a, respectively.
C_occ and C_vir denote the |B_1|× N_occ occupied and |B_1|× N_vir virtual sub-matrices of the |B_1|× |B_1| SCF orbital matrix C (note that N_occ+N_vir=|B_1|—each orbital is either occupied or virtual).
In general, computational basis sets such as B_1 do not contain basis functions directly corresponding to AOs of any sort.
For this reason, we here select our target AOs A={|p⟩} based on a second auxiliary basis set B_2.
This is a minimal basis set of tabulated free-atom AOs (here MINAO is used<cit.>; but other choices, such as subsets of ANO-RCC,<cit.> ano-pVnZ,<cit.> or ANO-VT-XZ<cit.>, could be considered; see also Sec. <ref>).
This choice leads to simple expressions for the projected overlap matrices
S^A_ij = ⟨i|P̂|j|=⟩∑_μμ' C^μ_i P_μμ' C^μ'_j
S̅^A_ab = ⟨a|P̂|b|=⟩∑_μμ'C̅^μ_a P_μμ'C̅^μ'_b,
where the matrix elements of the projector are
P_μμ' =∑_pp'⟨μ|p ⟩ [σ^-1]_pp'⟨ p'| μ' ⟩.
Combining all formulas into a numerical algorithm, the rotated orbitals are constructed as follows:
* Let A ⊂ B_2 denote the subset of AOs we choose as target AOs for the active space construction (for example, the five 3d AOs in a transition metal complex with one metal center).
* Form the overlap matrix σ with elements σ_pp' = ⟨ p | p' ⟩, where p, p' ∈ A, as well as
σ's inverse matrix, with elements σ^pp'=[σ^-1]_pp'. Both matrices have dimension of |A| × |A|.
* Form the overlap matrix 𝐒_21 between the functions p of A ⊂ B_2 and the functions μ of the large basis set B_1, with elements [ S_21]_pμ= ⟨ p | μ⟩.
* Form the projector P_μμ' =∑_pp'⟨μ|p ⟩σ^pp'⟨ p'| μ' ⟩ , or 𝐏=𝐒_21^†σ^-1𝐒_21.
* Form the projected overlap matrices 𝐒^A= C_occ^† P C_occ for the occupied orbitals (eq. (<ref>)), and S̅^A= C_vir^† P C_vir for the virtual orbitals (eq. (<ref>)).
* Finally, diagonalize both projected overlap matrices to obtain the transformation matrices separating the MO sets by overlap with (A).
Concretely, diagonalize 𝐒^A to obtain the eigenvector matrix U, and use it to find the transformed occupied orbital matrix C̃_occ = C_occ U.
Then diagonalize 𝐒̅^A to obtain the eigenvector matrix U̅, and use it to find the transformed virtual orbital matrix C̃_vir = C_virU̅. These are the expansion coefficients of {|ĩ⟩} (eq. (<ref>)) and {|ã⟩} (eq. (<ref>)), respectively.
Rather than using the minimal basis B_2 directly, one could consider choosing the target AOs from a set of polarized AOs which take the molecular environment into account, such as the Intrinsic Atomic Orbitals (IAOs).<cit.>
If the IAOs are given as
|p̃⟩ = ∑_μ |μ⟩ T_μ p,
where T_μ p denotes the elements of the |B_1|× |B_2| IAO transformation matrix,<cit.> this can be incorporated by updating the projection matrix in S^A_ij and S^A_ab in eqs. (<ref>) and (<ref>) as
𝐏 = 𝐒𝐓(𝐓^†𝐒𝐓)^-1𝐓^†𝐒.
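In the same schematic NumPy notation as above (with S the |B_1|×|B_1| overlap matrix and T the IAO coefficient matrix, both assumed given), this updated projector is a one-liner:

```python
# Projector built from IAOs instead of free-atom AOs, cf. eq. above
P = S @ T @ np.linalg.solve(T.T @ S @ T, T.T @ S)
```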
For simplicity, we do not follow this approach in this work, and choose the target AOs directly from the minimal basis B_2 as described above.
Additionally, in some cases one may wish to extend the target space by some AOs which lie outside the valence space. This scenario is discussed in Sec. <ref>.
§.§ Truncating the active space
The eigenvalues σ_i and σ_a of the projected overlap matrices
in eqs. (<ref>) and (<ref>)
reflect the degree to which the transformed orbitals |ĩ⟩ and |ã⟩ overlap with the space of our target AOs.
If we include every such transformed orbital with σ_a ≠ 0 and σ_i ≠ 0 into our active space, the resulting CAS space exactly includes all electronic configurations which can be formed over the given AOs, and the maximum size of the CAS space is twice that of the target AO space. However, this CAS space may be too large, and we may need to truncate it.
Here we can use the fact that often many of the σ_i and σ_a are small.
As a practical measure, we can set a threshold, such as 0.05–0.1, to exclude MOs with negligible overlap with span(A).
In addition to reducing the size of the active space, this can further
improve the reproducibility of calculations in the case of very small eigenvalues.
This threshold becomes the only numerical parameter to be chosen by the user, and together with the selection of target AOs and the type of the SCF wave function |Φ⟩, it fully determines the active space.
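In terms of the earlier sketch, this truncation is a single masking step on the eigenvalues returned by avas_rotation; the value 0.1 below is just the illustrative default discussed above:

thresh = 0.1
keep_occ = sig_occ > thresh        # rotated occupied MOs kept active
keep_vir = sig_vir > thresh        # rotated virtual MOs kept active
# Rotated occupied MOs below the threshold stay in the core determinant.
C_act = np.hstack([C_occ_rot[:, keep_occ], C_vir_rot[:, keep_vir]])
n_act_elec = 2 * int(np.count_nonzero(keep_occ))   # closed-shell case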
Of course, if truncation is used, the active space no longer captures all possible configurations involving the target AOs which can be formed. However, the truncation does not affect the quality
of the description of the rest of the molecule by the core determinant in Eq. (<ref>). This guarantees
that the CASCI energy lies below the variational HF energy. We can also imagine the opposite tradeoff,
where one obtains a truncated active space which retains the ability to capture all possible configurations involving the target AOs, at the cost of worsening
the quality of the core determinant which describes the rest of the molecule.
(In DMET language, this would correspond to truncating the “bath” orbitals,
which is considered in Refs. Wouters2016 and qmmmdmet).
However, this may be a worse truncation procedure in the current setting, as the energy gained by treating the fluctuations in occupation number in the target AOs (such as a TM
3d shell) may not make up for the energy lost in incompletely describing the mean-field hybridization between the target AOs and the rest of the molecule.
In particular, this second bath truncation procedure can, in principle, lead to a CASCI energy above the variational HF energy.
§.§ Treatment of open-shell systems
If |Φ⟩ is a closed-shell determinant, then the active space construction algorithm can be directly used as described in Sec. <ref>.
However, in the case of open shell determinants, several choices can be considered:
* One may perform the algorithm separately for alpha and beta orbitals, thus creating active orbitals with different spatial parts for alpha and beta-spin electrons.
While this choice is the most straight-forward and, arguably, creates the best initial active orbitals in the open-shell case, this option is not directly feasible if a spin-adapted multiconfigurational calculation will follow—most existing MCSCF programs cannot use such unrestricted orbitals (although this is implemented in the code we use here <cit.>).
A possible remedy for this problem would be to construct a single set of “corresponding orbitals”<cit.> from the separate alpha- and beta-orbital sets, but this has not been tested here.
* One may use exclusively the alpha
orbitals to construct the active space orbitals (and inactive orbitals determining the core determinant).
This treatment can be applied to both restricted and unrestricted SCF functions |Φ⟩ in a simple manner.
The rationale for this is that in the restricted open-shell case, the occupied beta orbitals lie entirely within the linear span of the occupied alpha orbitals, so one can argue that this choice takes care of both spin cases.
However, this argument is somewhat misleading because it may lead to some unoccupied beta orbitals being transformed into the core space, therefore enforcing their occupation with two electrons.
This error in the core means that the CASCI energy may be higher than the variational HF energy.
If the singly-occupied MOs in |Φ⟩ have only small components on the target valence AOs, this can lead to very bad CASCI wavefunctions.
* If a ROHF determinant |Φ⟩ is used, one may apply the construction of Sec. <ref> exclusively to the doubly-occupied and fully unoccupied orbitals of |Φ⟩, to form the core determinant and initial part of the active space, and then include additionally all the singly-occupied orbitals of |Φ⟩ into the active space.
The CASCI energy is then guaranteed to be below the variational HF energy, and further spin-adaptation can be used.
The main drawback is that the active space is usually larger in this procedure.
By default, we use method <ref> (a sketch of this option is given below); however, we compare the different schemes in one of the systems below.
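For a ROHF reference, this third option can be expressed in terms of the earlier avas_rotation sketch; the coefficient blocks C_docc, C_somo and C_uocc for the doubly-, singly- and un-occupied ROHF orbitals are our own illustrative names:

# Option 3: project only the doubly-occupied and virtual ROHF orbitals,
# then include every singly-occupied orbital (SOMO) in the active space.
sig_d, Cd_rot, sig_u, Cu_rot = avas_rotation(S_AA, S21, C_docc, C_uocc)
keep_d, keep_u = sig_d > thresh, sig_u > thresh   # same threshold as before
C_act = np.hstack([Cd_rot[:, keep_d], C_somo, Cu_rot[:, keep_u]])
n_act_elec = 2 * int(np.count_nonzero(keep_d)) + C_somo.shape[1]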
§ COMPUTATIONAL DETAILS
We implemented the atomic valence active space (AVAS) construction within the PySCF<cit.> package.
All Complete Active Space Self-Consistent Field (CASSCF), Complete Active Space Configuration Interaction (CASCI), and strongly contracted N-electron valence state perturbation theory (NEVPT2)<cit.>
calculations were carried out using PySCF.
For active spaces with more than 16–17 orbitals we used the Block code<cit.> through the PySCF-Block interface to perform DMRG calculations in the active space.
The AVAS construction is also being implemented in a development version of Molpro.
For simplicity, we used all-electron cc-pVTZ-DK<cit.> basis sets for all systems, apart from the Fenton reaction (vide infra).
For the auxiliary minimal basis B_2 used to choose the target AOs, we employed the MINAO basis<cit.>, which is a truncated subset of the cc-pVTZ basis; for most atoms, this set
consists of spherically averaged ground-state Hartree-Fock orbitals for the free atoms.
We did not use point group symmetry in the present calculations, since in a straight-forward implementation, the MOs do not necessarily retain symmetry-adaption after rotation; however, if symmetry-adaptable sets of target AOs are chosen, symmetry respecting orbital rotations can in principle be constructed.
For completeness, scalar relativistic effects were included using the exact-two-component (X2C) approach,<cit.> but this did not lead to significant differences from
the non-relativistic calculations in any of the considered examples. Spin-orbit coupling was not considered.
For the case of the Fenton reaction, the geometries were optimized with the DSCF and GRAD modules of Turbomole 7.0. This was done at the level of symmetry-broken unrestricted B3LYP<cit.> with def2-TZVP basis sets<cit.>, starting from the structures provided in Ref. petit:FentonReaction. The RI approximation was not employed, and solvation effects were not considered.
The characters of the starting geometry, transition state geometry, and product geometry were confirmed by computing the analytic nuclear Hessians at these points, via the AOFORCE module.<cit.>
The reaction path was computed by tracing the intrinsic reaction coordinate<cit.> in both directions,
starting at the transition state geometry.
This was done using Turbomole's DRC module.
Finally, the structures of the starting point, transition state, product, and IRC segments (of both directions) were joined, aligned, and compressed using the development version of IboView.<cit.>
Employing these geometries, the reported multi-configuration AVAS calculations were performed with PySCF, using cc-pVTZ orbital basis sets and a non-relativistic Hamiltonian.
The used molecular geometries are supplied in the supporting information, as are selected visualizations of the obtained active spaces.
Orbital visualizations were made with IboView,<cit.> and show iso-surfaces enclosing 80% of the orbital's electron density.
§ RESULTS AND DISCUSSION
Judging the quality of an initial active space is difficult.
Here we employ two complementary criteria:
* In the course of a CASSCF calculation, the active space orbitals are optimized.
If the overlap of the optimized active space with our initial active space guess remains high, we here regard our active space guess as being of high quality. To quantify this aspect, we compute the N_act× N_act overlap matrix
S_change = ( C_act^final)^† S ( C_act^initial)
between the initial guess and the optimized final active orbitals, and compute its singular value decomposition (a short code sketch of this diagnostic is given after the list).
If all singular values are close to 1.0, the active space remains mostly unchanged during the optimization.
On the contrary, each singular value close to 0.0 indicates that an initial active orbital
had to be completely replaced by an unrelated orbital. The latter case may indicate that the initial active space lacks the capability to represent some essential features of the strongly correlated electronic structure of the given molecule (since the optimized active orbital, which presumably is needed to represent energetically important electron configurations of the system, lies completely outside the linear span of the initial active space guess); the initial active space guess may therefore need to be changed or enlarged.
* We compare our computed excitation energies to experimental results and other high-level calculations reported in the literature.
When constructing a well-behaved series of active spaces, we should be able to see convergence or stability of the computed properties
with respect to the active space size.
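The diagnostic of criterion 1 reduces to a few lines of NumPy; here S is the B_1 overlap matrix, and the two coefficient blocks (illustrative names) hold the initial and the CASSCF-optimized active orbitals:

# Singular values near 1.0 mean the optimized active space stayed close
# to the initial guess; values near 0.0 flag fully replaced orbitals.
S_change = C_act_final.conj().T @ S @ C_act_initial
svals = np.linalg.svd(S_change, compute_uv=False)
print("smallest singular value:", svals.min())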
We note that, strictly speaking, neither criterion can establish an initial active space as “definitely good”: While criterion 1 tests whether the CASSCF-optimized active space is close to the initial active space, there is no formal guarantee that the optimized CASSCF wave function itself is the best representative of the sought after electronic state (e.g., the optimization algorithm could be stuck in an undesirable local minimum). While criterion 2 establishes the compatibility with some specific physical properties and reference systems, it cannot guarantee that no other physical properties or systems exist which need different active spaces for a proper description.
However, both criteria could establish our active space construction to be unfit for its target applications—by showing counter-examples and negative results.
In order to investigate the basic properties of the AVAS procedure,
we now apply these criteria in calculations on various transition-metal complexes.
§.§ A. Ferrocene
We begin by considering the electronic structure of ferrocene, Fe(C5H5)2.
The ground state of ferrocene is dominated by a single configuration with Fe d^6.
Both the MO analysis in Ref. fecp2_1 and our CI expansion coefficients indicate that the lowest excited states of ferrocene
have significant multiconfigurational character.
We carried out an initial restricted Hartree-Fock (ROHF) calculation for the singlet ground state.
We used the optimized D_5h geometry from Ref. fecp2_geom (using the cc-pw-CVTZ basis set at the CCSD(T) level).
In this geometry, the two cyclopentadienyl (Cp) rings are planar and the z-axis is aligned with the Cp-Fe-Cp axis.
As our target set of AOs, we first chose the five 3d orbitals of Fe.
Using a threshold of 0.1, the AVAS scheme produces a seven orbital active space: five orbitals from the occupied orbital space and two orbitals from the unoccupied orbital space are combined into a (10e,7o) active space; the orbitals are visualized in Fig. <ref>.
The five overlap eigenvalues above the threshold from the occupied space are 0.325, 0.325, 0.973, 0.973, 0.995, and the two eigenvalues from the unoccupied space are 0.675, 0.675.
There are three omitted unoccupied orbitals: two of them have only 2.7% weight in the 3d AO space and (3d_xy, 3d_x^2-y^2) character, and one has 0.6% weight in the 3d AO space, corresponding to the 3d_z^2 AO.
The two orbitals from the occupied orbital space with eigenvalues 0.325 and 0.325, as well as the two selected MOs from the unoccupied orbital space with eigenvalues 0.675, 0.675,
both have (3d_xz, 3d_yz) character; based on this, we conclude that (3d_xz, 3d_yz) are most strongly involved in bonding with the two Cp rings, in agreement with the bonding picture established in earlier studies <cit.>.
To construct a second, larger active space, we next included the ten p_z orbitals of the carbon atoms in the Cp rings into the target AO list.
The AVAS construction then yields 15 orbitals above the 0.1 overlap threshold, giving an (18e,15o) active space: 9 of occupied character
and 6 of unoccupied character.
Of the 9 occupied orbitals, four do not carry Fe 3d character, instead belonging to the π-system of the Cp rings; two more have strong mixing with the Fe (3d_xz, 3d_yz) orbitals, and the final three represent almost-pure Fe 3d_xy, 3d_x^2-y^2, and 3d_z^2 atomic orbitals (with ≥95% orbital weight on the Fe 3d AOs).
Among six unoccupied orbitals, two have no Fe 3d character, two have predominantly Fe (3d_xz, 3d_yz) character, and two have <5% character of Fe 3d_xy and 3d_x^2-y^2 orbitals, respectively.
This agrees with our observations for the smaller (10e,7o) active space; namely, the additional orbitals in the (18e,15o) space are practically non-bonding in character.
Lowering the AVAS threshold from 0.1 to 0.05 gives a (22e,17o) active space, which includes some further effectively non-bonding orbitals beyond the (18e,15o) active space.
We calculated the ground state and singlet and triplet excited states of ferrocene with these active spaces at the CASCI and CASSCF levels.
Previous studies<cit.> indicate that there are three low-lying d → d singlet transitions (1 ^1E”_2, 1 ^1E”_1, 2 ^1E”_1) and
three low-lying d → d triplet transitions (1 ^3E”_1, 1 ^3E”_2, 2 ^3E”_1).
These d→ d transitions describe excitations from the three non-bonding orbitals (predominantly of 3d_x^2-y^2, 3d_xy and 3d_z^2 metal character,
as described above) to the two antibonding orbitals having mostly (3d_xz, 3d_yz) metal character. All
these excited states have multiconfigurational (but single-excitation) character.
Note that the E”_1 and E”_2 states are doubly degenerate, thus there are 6 low-lying singlet and triplet excited states.
In the CASSCF calculations we therefore state-averaged over 7 roots and 6 roots in the singlet and triplet manifolds, respectively.
Table <ref> displays the excitation energies,
compared to experimental and theoretical numbers from the literature,
including singly excited configuration interaction (SECI) <cit.>, symmetry adapted cluster configuration interaction (SAC-CI) <cit.>
and time-dependent density functional theory (TD-DFT) calculations <cit.>.
Table <ref> compares the impact of using different active spaces with different methods.
The performance of CASCI for the smallest (10e,7o) active space is reasonable for most of the excited states, with the 2 ^1E”_1 and 2 ^3E”_1 states being exceptions. These two states have some Rydberg character,<cit.>
and therefore the valence CASCI overestimates these transitions by about 2 eV; this effect can also be seen in the errors of the SECI energies.
Averaging over all the states in the CASSCF seemed to spread the error over the states, lowering all the energies.
An accurate description of the differential correlation in the 2 ^1E”_1 and 2 ^3E”_1 states thus
requires a dynamic correlation treatment. We find excellent agreement for all states at the CASSCF+NEVPT2 level,
with a largest error of only 0.21 eV.
Using the larger (18e,15o) active space, which includes the ligand π-orbitals, worsens the CASSCF excitation energies
(except for the 2 ^1E”_1 and 2 ^3E”_1 states). However, incorporating dynamic
correlation through NEVPT2 rebalances the states, improving CASCI and CASSCF energies (except for 1 ^1E”_2 and 1 ^3E”_1) and
yielding better agreement with experiment and the (10e,7o) CASSCF+NEVPT2 excitation energies.
If we further go from the (18e,15o) active space to the (22e,17o) active space, by additionally including the truncated orbitals with eigenvalues below 0.1, the excitation energies change by less than 0.01 eV.
Together, these observations indicate that the multiconfigurational character is already well
converged in the smallest (10e,7o) active space.
As discussed above, a second test of the quality of the AVAS active space is provided by the SVD decomposition
of the overlap between the CASSCF-optimized active space and the active space initial guess. In the case of ideal coincidence, the SVD eigenvalues should be equal to 1.
The smallest SVD eigenvalues we found were 0.927 for the (10e,7o) active space and 0.906 for the (18e,15o) active space, respectively.
This indicates that here the AVAS provides a stable and accurate initial guess for the CASSCF procedure.
§.§ B. [Fe(NO)(CO)3]-
We next consider the complex anion [Fe(NO)(CO)3]-, which exhibits catalytic activity in a range of organic reactions, and has been extensively characterized both theoretically and
experimentally (see Ref. fenoco3 and references therein).
The complex features three-center bonds along both the Fe-N-O axis and between Fe and each pair of CO ligands;
<cit.> its catalytic mode of action exhibits a highly unusual nitrosyl-ligand based oxidation in some cases,<cit.>
and response to photo-activation in other cases.<cit.>
Analysis of the ground-state CASSCF wavefunction and natural orbital occupations indicates that
it has some multiconfigurational character, and that it should be thought of as a Fe(0) species bound via two covalent
π-bonds to the [NO^-].<cit.>
As in the previous example, we started with a RHF calculation for the singlet ground state, and
for the simplest active space we chose five 3d orbitals of Fe as the target AOs.
We used the geometry of Ref. fenoco3_2.
Using an overlap threshold of 0.1 gives rise to five occupied orbitals and three unoccupied orbitals for the active space; two unoccupied orbitals with
only 6% weight in the 3d orbital space lie below the threshold for active space inclusion.
Of the three unoccupied orbitals included in the active space, two have (3d_yz,3d_xy) and (3d_xz,3d_x^2-y^2) character, respectively, while the third one has mostly 3d_z^2 character.
The resulting active space is visualized in Fig. <ref>.
Additionally including the nitrogen 2p orbitals into the set of target AOs (with the same threshold) gives a (16e,14o) active space; this adds
three MOs with 2p character, involved in the three-center Fe-N-O bonds, and three involved in N-Fe-CO type bonds.
Unfortunately, there is no gas phase experimental excitation data for this system. However, theoretical vertical excitation energies
from state-averaged CASSCF calculations followed by MRCI+Q, using the def2-TZVPP basis set (omitting g-functions) have previously been reported,<cit.> which we can compare against.
We used CASSCF and NEVPT2 to compute the vertical transition energies
averaging over five singlet and four triplet states, as in Ref. fenoco3_2.
The smallest SVD eigenvalue for the active space overlap with the initial guess for the (10e,8o) active space is 0.806, indicating
that AVAS provides a good initial guess. For the (16e,14o) active space, the lowest SVD eigenvalue decreases to 0.652—apparently adding only the nitrogen 2p orbitals,
without also adding the carbon 2p orbitals, leads to a less balanced active space compared to the (10e,8o) AVAS initial active space.
The vertical excitation energies from CASSCF with the (10e,8o) active space are in better agreement with the CASSCF/MRCI+Q excitation
energies than with CASSCF results from Ref. fenoco3_2. This indicates that the (10e,8o) active space constructed using AVAS
provides a more balanced description of electron correlation than the larger active spaces.
However, the NEVPT2 dynamical correlation treatment significantly raises the obtained excitations energies above the MRCI+Q values.
Similarly, CASSCF calculations with the larger (16e,14o) active space also yield significantly higher excitation
energies than with the (10e,8o) active space, and the CASSCF+NEVPT2 excitation energies in this larger space are also
fairly different from the values obtained from the (10e,8o) active space. This lack of stability with respect to the active space size indicates
that the excited states are not benign electronically, and that their accurate description requires a more sophisticated dynamic correlation treatment beyond 2nd order perturbation theory.
This is supported by the MRCI+Q study in Ref. fenoco3_2, where the (empirical) Q-contribution to the excitation energy is as large as 0.2 eV.
To further substantiate these claims, we also computed CASSCF+NEVPT2 results for the same manually selected (14e,9o) initial active space as described
in Ref. fenoco3_2 (in these calculations, the initial 14 active orbitals were manually selected for Fe d and NO π and π^*-character by visual inspection of KS-DFT/PBE
orbitals computed with the def2-TZVPP basis set; the CASSCF excitation energies thus obtained reproduced the results reported in the supporting information of Ref. fenoco3_2 with better than 0.01 eV accuracy).
By comparing the results of (our) NEVPT2 and (the referenced) MRCI+Q for the same active space of Ref. fenoco3_2, we can separate the effect of the dynamic correlation treatment from the quality of the active space.
We see that CASSCF+NEVPT2 calculations performed with our automatically constructed (10e,8o) active space and the manually selected (14e,9o) active space
show a fair agreement in the case of singlet excited states; however, the CASSCF+NEVPT2 method overestimates the energies of the triplet excited states with the (14e,9o) active space
by 0.5–0.75 eV more than with the (10e,8o) active space, compared to the CASSCF/MRCI+Q transition energies.
Combined, these facts strongly suggest that the approximate NEVPT2 correlation treatment is the primary cause of deviation from the
MRCI+Q reference values, rather than our automatically constructed active space, and that the smaller AVAS provides a more balanced description of this system.
§.§ C. FeO4^2-
As our next system, we consider the bare tetraoxoferrate (VI) ion, FeO4^2-.
We assume a tetrahedral FeO4^2- cluster with an Fe–O distance of 1.660 Å.
We started with a ROHF calculation for the ^3A_2 ground state.
Including only the 3d orbitals in the target AO set gives an (8e,8o) active space.
Three unoccupied MOs have 34.5% weight in the 3d orbital space, with 3d_yz, 3d_xz, and 3d_x^2-y^2 character,
but are mostly centred on the ligands; the occupied MOs have mostly metal character.
This indicates that some of the low-lying excitations are charge-transfer excitations.
An earlier study <cit.> found that the ground and excited states cannot be described by
a simple Ligand Field Theory d^2 model; rather, they contain superpositions of a large number of configurations, including
ligand-to-metal excitations. From this it has been argued that it is insufficient
to only consider molecular orbitals with Fe 3d character in
the active space to describe excited states.
Indeed, we find that CASSCF+NEVPT2 calculations with the (8e,8o) active space (generated only with the 3d orbitals in the target AO set)
significantly overestimate the excited-state energies, by ≈6500 cm^-1 (≈0.81 eV), compared to experiment.
For this reason, we expanded the target AO list to the five 3d orbitals of Fe and 2p orbitals of all four O atoms.
Using option <ref> to transform the alpha orbitals, our scheme with the 0.1 overlap threshold produced 14
occupied orbitals and 3 unoccupied orbitals, resulting in a (26e,17o) active space.
We calculated the vertical excitation energies of FeO4^2-, namely
transition energies from the ground ^3A_2 state to the first two excited states, ^1E and ^1A_1 (see Table <ref>),
using CASCI and CASSCF (state-averaged over three singlet states and one triplet state) and CASCI+NEVPT2,
comparing to previously reported RASSCF and experimental numbers.
The CASCI calculations significantly overestimate the excitation energies; however, this is significantly improved
by optimizing the orbitals using state-averaged CASSCF.
The smallest SVD singular value for the active space overlap between the initial
and optimized active orbitals is 0.968, indicating that
our initial active orbitals provide a very good guess for the CASSCF procedure and only require
a little relaxation to yield good agreement with experiment.
Including dynamic correlation by means of NEVPT2 on top of CASCI or CASSCF significantly improves the results.
Note that the difference between our CASSCF excitation energies and those in Ref. feo4m2 with 17 orbitals, obtained with
the RASSCF method, reflects both the slightly different basis set as well as the truncated CI configuration space in RAS.
§.§ D. VOCl4^2-
We now consider the oxotetrachlorovanadate(IV) anion, VOCl4^2-.
We use a square pyramidal geometry for VOCl4^2-, as in Ref. Vancoillie (although
we use a different orientation: the V atom is at the origin, the O atom is on the z axis above the x-y plane and the Cl atoms are below the x-y plane).
In this complex, vanadium is in a d^1 configuration.
As in the next example, here the d→ d excitations do not have much multiconfigurational character. However, it is important
that multireference methods (and their active spaces) provide a balanced description of all states, not just multiconfigurational
ones. The vanadium complex provides a system to test this in an early transition metal (single-reference) problem.
If we choose a set of five AOs, representing the five 3d orbitals of a vanadium atom,
we obtain five occupied and four unoccupied MOs from the ROHF reference wavefunction
using the AO-projector option <ref>. One of the occupied MOs
is a non-bonding 3d_xy atomic orbital, while the
other V 3d AOs strongly mix with the valence orbitals of the oxygen atom and four chlorine atoms.
This results in four doubly occupied bonding MOs and four anti-bonding MOs which are unoccupied in the ground state.
Two of the unoccupied MOs have 69.4% overlap with the 3d orbital space and carry (3d_xz,3d_yz) character,
while the other two have 54.0% and 70.3% overlap with the 3d orbital space and have 3d_z^2 and 3d_x^2-y^2 character, respectively;
all four unoccupied MOs have mostly metal character.
Using the (9e,9o) active space we calculated the lowest transitions, which are essentially d → d in nature.
There are four possible ligand-field transitions from the highest non-bonding 3d_xy orbital to four unoccupied MOs, thus
in CASSCF we averaged over five doublet states. Table <ref> summarizes the low-lying vertical d → d excitation energies.
The CASSCF method with the small (9e,9o) active space gives an accurate ^2B_1 state, but strongly overestimates
the other excited states.
Using NEVPT2 to treat the dynamic correlation on top of CASSCF significantly improves the excited state energies,
resulting in a good agreement with the experimental values and with CASPT2 results obtained with an (11e,10o) space <cit.>.
This (11e,10o) space is similar to ours, with the addition of the oxygen 2p shell.
We also construct a larger (33e,21o) active space, including the 2p orbitals of O and the 3p orbitals of Cl into the target
AO list. In this larger space, to reduce computational cost, we used CASCI+NEVPT2 rather than CASSCF+NEVPT2.
The excited states from CASCI+NEVPT2 with the (33e,21o) active space are also in excellent agreement with the experimental data.
The stability of the CASCI/CASSCF+NEVPT2 excitations with respect to expanding the active space confirms that the correlation
is well converged by all these treatments.
In the CASSCF calculation with the (9e,9o) active space, the smallest SVD eigenvalue for the active space overlap between the initial and the optimized active orbitals amounts to 0.821. This implies that, in this case, AVAS provides a reasonable, but not perfect, initial guess for CASSCF.
We also used this complex to test and compare options <ref> and <ref>, as described in Sec. <ref>, for constructing the active space with a ROHF reference determinant |Φ⟩.
The excitation energies, calculated with these two options, differ by less than 100 cm^-1 (≈0.01 eV). However, VOCl4^2-'s ground state has only one singly occupied orbital,
and it is possible that larger differences will occur for systems with ground states of higher spin.
§.§ E. [CuCl4]^2-
We finally consider the D_4h [CuCl4]^2- complex, with a Cu–Cl bond length of 2.291 Å, as in Ref. Vancoillie.
As in the vanadium system, the d→ d transitions are single-reference in character: this complex provides a late transition metal example.
Using the 3d AOs of Cu as the target AOs and a default cutoff of 0.1, we obtain only 5 occupied MOs and no unoccupied MOs. This result might be surprising at first glance. However, in this case, the antibonding orbitals have ligand character,
and thus there are no unoccupied MOs having more than 5% 3d character.
The lowest ligand-field transitions arise from the excitation of electrons from the doubly-occupied MOs
with dominant 3d character to the singly occupied MOs with 3d character.
Despite the fact that the lowest transitions happen mostly within the 3d orbital space,
such a small (9e,5o) active space is insufficient to describe them at the CASSCF level—because
the nearly filled space leaves no room for electron correlation.
CASSCF+NEVPT2 however, provides good agreement with the experimental numbers (see Table <ref>).
To see the effect of a larger active space, we also included the
3p AOs of Cl in the target AO list, obtaining a (33e,17o) active space. The corresponding CASSCF
excitation energies are still poor, indicating that the necessary correlation is not of valence character.
The CASSCF+NEVPT2 excitation energies in this larger space, however, remain in very good agreement with experiment.
The insensitivity to active space indicates that correlations are well converged in the CASSCF+NEVPT2 treatment.
In CASSCF calculations we averaged over five doublet states, each having four doubly occupied d orbitals and one d-orbital with a single electron.
The smallest SVD eigenvalue for the active space overlap with the initial guess in converged CASSCF calculations
is equal to 0.930 in the case of the small (9e,5o) active space and 0.985 for the (33e,17o) active space, indicating
that AVAS provides a good initial guess for CASSCF.
§.§ F. Fenton reaction: an example of homolytic bond dissociation
When modeling chemical reactions with multi-configuration methods, one particularly challenging problem is the selection of active spaces capable of representing all relevant configurations in a homolytic bond dissociation process.
In fact, aside from special cases, where e.g. a reasonable active space choice along the reaction path is enabled by “accidental” factors, such as molecular symmetry, or all the relevant orbitals are energetically well-separated from and unmixed with other orbitals, or the molecule is simply small enough to allow for a full-valence active space treatment, active space selection constitutes
the primary bottleneck in the real-world applications of multi-configurational methods to reaction chemistry.
However, as rationalized in Sec. <ref>, we expect the AVAS scheme to be applicable when studying homolytic bond dissociation processes.
To illustrate this capability, we here consider one key step of a Fenton reaction. Concretely, we consider the homolytic splitting of hydrogen peroxide (H2O2) by aqueous ferrous (Fe(II)) iron at low pH, to yield OH^. radicals and an aqueous ferric (Fe(III)) species. Summarized, we examine the inner step of
Fe^2+ + H2O2 -> [Fe^2+(HOOH)] -> [Fe^3+(OH^-)] + OH^. -> Fe^3+ + OH^. + OH^-.
Variants of this reaction have been proposed as possibly relevant elementary steps in biochemical processes involving iron-oxo (Fe(IV)=O) species (including those performed by cytochromes P450<cit.>), but this chemistry has been the subject of intense mechanistic debate since its inception in 1894<cit.>.
The history, background, and significance of this reaction, as well as the surrounding controversies, are discussed in Refs. dunford:Iron23VsHydrogenPeroxideReview, kremer:fenton1999, and petit:FentonReaction.
Here no attempt is made at quantitatively describing the reaction, or at resolving any of the controversies it involves; rather, we only examine whether AVAS is capable of producing an active space capable of qualitatively describing the reaction mechanism of a non-trivial homolytic bond dissociation along the entire IRC. In particular, no solvation effects are considered.
To model the Fenton reaction, we considered the [Fe(H2O)5(H2O2)]^2+ complex as an initial reagent.
It represents an octahedral [Fe(H2O)6]^2+ complex, a model for the aqueous ferrous ion,
with one water molecule substituted by the hydrogen peroxide molecule.
Such a complex is expected to be formed when H2O2 enters the coordination sphere of [Fe(H2O)6]^2+.
The homolytic bond cleavage/dissociation of the [Fe(H2O)5(H2O2)]^2+ complex leads to the formation of two spin-coupled radical fragments: ferric [Fe(H2O)5(OH)]^2+ and hydroxyl OH radicals <cit.>.
For this process we computed a reaction path along the intrinsic reaction coordinate, at the level of UB3LYP/def2-TZVP as described in Sec. <ref> (the geometry optimizations were started with structures from Ref. petit:FentonReaction).
The reaction model is visualized in Fig. <ref>.
The reactant complex has four unpaired electrons on iron and a total spin quantum number of S=4/2.
To build the active space, we first computed ROHF wave functions (with four unpaired electrons) for each structure along the IRC.
To ensure the convergence for the ROHF method and retain a continuous character of the ROHF solution along the entire reaction path, we used the orbitals obtained for the previous geometry as an initial guess for the next one, starting at the reactant side.
The actual active space was then formed by the AVAS procedure, in which we used the three 2p orbitals of two dissociating oxygen atoms and five 3d orbitals of Fe as target AOs.
For the open-shell treatment, we invoked option 3 from Sec. <ref>; that is, we applied the AVAS projection only to the doubly-occupied and virtual orbitals of the ROHF wavefunction, and added the four singly-occupied 3d
orbitals of Fe unchanged to the active space, to keep the correct total spin S for the entire system.
For the initial complex, AVAS with a 10% threshold produced 12 occupied orbitals and 1 unoccupied orbital to form the (20e,13o) active space.
To retain consistency of the active space along the reaction path, we then fixed the size of the active space to the values thus obtained; that is, for each geometry we chose the 12 occupied orbitals corresponding to the 12 largest overlap eigenvalues with the target space, and the one virtual orbital with the largest overlap with the target space.
With this active space, multi-configuration calculations along the reaction path were executed.
We wish to probe the capability of the AVAS to describe the qualitative features of the electronic structure on both sides of the reaction, despite the fact that AVAS was constructed from a ROHF wave function, which is itself incorrect on the product side. For this reason, the orbitals were not optimized in the multiconfiguration treatment—that is, we use (approximate) CASCI with ROHF orbitals as a multi-configuration method, and not (orbital-optimized) CASSCF.
Concretely, the CASCI energies were approximated with DMRG-CI (M=1600), which should be essentially exact with this active space.
The obtained energies are displayed in Fig. <ref> (top).
During the process, the ROHF energy rises, because a ROHF determinant is incapable of qualitatively describing the electronic structure of the product complex.
In contrast, with the same ROHF orbitals, the computed CASCI energies show an energy maximum, and a decreasing energy towards the product side.
In Fig. <ref> (bottom) we also plot the CASCI-occupation numbers of two active MOs participating in this homolytic bond cleavage along the IRC.
As expected for a homolytic bond cleavage, in which one doubly occupied molecular orbital splits into two singly occupied molecular orbitals, the occupation numbers go from ≈2.0/0.0 to ≈1.0/1.0 during the process.
This confirms the formation of two radicals on the product side.
Concretely, the active orbital obtained from the virtual side of the AVAS construction at ROHF level changes its occupation
number from 0.06 to 0.96 along the IRC, at the DMRG-CI level.
As expected at this level of theory, the absolute energetics of the process are not well reproduced (compared to accurate CCSD(T)-F12 calculations<cit.>).
However, we conclude that the qualitative features of the process are correctly described with the given active space.
In particular, the AVAS contains the relevant active orbitals for describing the electronic structure of both the reactant side and product side of the reaction—even though the AVAS is constructed from a ROHF wave function which is qualitatively incorrect on the product side.
This observation is consistent with the earlier finding in DMET that relevant entanglement spaces can be constructed from Hartree-Fock wave functions even in cases where the latter are qualitatively incorrect.<cit.>
All geometries, energies and occupation numbers are available in Supporting Information.
§ SPECULATIVE EXTENSIONS FOR COMPLEX CASES
We believe that AVAS is useful for the application cases fulfilling the premises described in Sec. <ref>.
However, it is clear that various application scenarios fall outside of this range.
We here outline two straight-forward extensions of the AVAS concept, which may extend its range of applicability.
Investigations of the details and practical performance of these approaches are beyond the scope of this work and will be done elsewhere.
§.§ Active spaces beyond the valence space
For some combinations of electronic structure methods and/or application scenarios, it may be desirable to construct active spaces capable of explicitly representing electronic degrees of freedom which cannot be represented within a (minimal) valence-AO basis.
One such case is the treatment of Rydberg excited states: describing such states on even a qualitative level will require diffuse orbitals in the active space.
The other, more common, case, is the extension of the active space in order to improve the quantitative accuracy of computations in certain cases.
For example, the accuracy of CASPT2 (and related PT2 methods) for transition metal complexes is often substantially improved if a correlating second radial shell of d-AOs is included into the active space.
This is known as the “double-shell effect”,<cit.> and rests on the fact that doing so, effectively, relocates the treatment of d-electron radial electron correlation from the (very approximate) “PT2 part” of the method to the (very accurate) “CAS part”.
While in this work only valence-spanned active spaces are considered, the AVAS procedure should be able to also create such extended active spaces—by including the additional desired non-valence AOs in the target AO space. This can be done by simply taking any non-minimal atomic natural orbital basis set (such as ANO-RCC,<cit.> ano-pVnZ,<cit.> ANO-VT-XZ,<cit.> or any other) as source for these target AOs—rather than the MINAO basis used here.
For example, a “double-shell” active space for a 3d transition metal atom could be created by including both 3d and 4d AOs in the “target” AO space of AVAS. One could even add an additional shell of tight f functions, to follow the idea of “correlation consistency”<cit.> of dynamic correlation—although doing so is not common practice.
§.§ Handling complex metal/ligand interactions or multi-nuclear coordination complexes
We expect that normally the simple d-electron AVAS will be sufficient to qualitatively represent the multi-reference wave functions of ground states and (at least) d↦ d-excited states of single-metal coordination complexes. However, there are many application scenarios in which either complicated and non-transparent metal-ligand interactions, or the presence of multiple metal cores in coordination complexes, would make a straightforward active space construction via AVAS either questionable (if ligand AOs or additional metal AOs are not included in the target space) or infeasible (if such additional AOs are included, but the so-created active space becomes too large for convenient quantitative handling).
In the presence of such effects it may be warranted to combine AVAS with either one of several methods of constructing approximate FCI wave functions in large active spaces (vide infra), or with the entanglement-based active-space construction procedure of Reiher and coworkers<cit.>.
In the simplest case of suspected complex metal-ligand interactions in a single-metal complex, one might, for example, first create an AVAS by including as target AOs both the d-electrons of the transition metal, as well as all ligand AOs suspected to play an important role (e.g., all ligand valence AOs from the first coordination sphere, or all p_z-orbtials of a large π-system coordinated to the metal atom).
In practice, the active space created from this choice will frequently be too large for an accurate quantitative multi-reference calculation including dynamic correlation.
However, there are methods capable of providing qualitative wave functions and first-order reduced density matrices at an approximate CASCI level for such active spaces.
For large active spaces without too many open shells, methods such as SHCI <cit.> or FCIQMC <cit.> can
provide an efficient description, while for active spaces with a larger number of open shells, but fewer than 50 orbitals, QC-DMRG <cit.> is a reliable approach.
Once such a qualitative wave function is present, it can be used to determine which orbitals are not needed in the active space.
Concretely, we suggest the following procedure for handling these complex cases (a sketch of steps 3 and 4 is given after the list):
* Make an initial AVAS, with a target space including all AOs suspected to be possibly relevant.
* Compute an approximate CASCI wave function |Ψ_approx⟩ in this active space—by using, for example, SHCI<cit.> (with high threshold ϵ) or QC-DMRG<cit.> (with low bond dimension M).
* Compute the natural orbitals {ψ_i} and their occupation numbers {n_i} from the approximate CASCI wave function |Ψ_approx⟩.
* Do the actual quantitative calculation with an active space {ψ_i; t≤ n_i≤ (2-t)}, where t is some numeric threshold (e.g., t=0.05). That is, use an active space composed of the natural orbitals ψ_i from step 3 for which the occupation numbers n_i differ significantly from 0.0 (empty in all configurations of |Ψ_approx⟩) and 2.0 (doubly-occupied in all configurations of |Ψ_approx⟩).
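Steps 3 and 4 amount to diagonalizing the active-space one-particle reduced density matrix of |Ψ_approx⟩. A minimal sketch, assuming the spin-summed 1-RDM dm1 (expressed in the basis of the initial active orbitals C_act0; both illustrative names) has already been computed:

# Natural orbitals are the eigenvectors of the 1-RDM; the eigenvalues
# are the occupation numbers n_i.
n_i, V = np.linalg.eigh(dm1)
C_nat = C_act0 @ V
t = 0.05                                  # example threshold from step 4
keep = (n_i > t) & (n_i < 2.0 - t)        # keep fractionally occupied NOs
C_act_final = C_nat[:, keep]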
While not investigated here, we expect this combination of methods to be capable of identifying suitable active spaces in many application scenarios.
§.§ Quantitative treatment of homolytic bond dissociation processes
The obtained data in Sec. <ref> shows that the AVAS contains the necessary orbitals for describing the electron configurations relevant in a homolytic bond dissociation process, along the entire reaction path.
However, we note that for actual quantitative calculations, particularly in simple concerted reaction processes without hidden intermediates,<cit.> modified procedures may be beneficial.
For example, if the chemical transformation of the reaction is sufficiently characterized by a single transition state (as in Sec. <ref>), a more economical way of treating the reaction might be to compute an AVAS active space only at the transition state, use it to initialize a CASSCF calculation there, and then propagate the initial orbitals from geometry to geometry.
The main benefit of this latter approach would be that in the AVAS procedure, not only the rotated occupied and virtual orbitals with very low overlap with the target AO space could be eliminated from the active space, but also the orbitals with very high overlap (say, ≥98%) could be eliminated, as such values indicate that they stay doubly occupied/unoccupied during the reaction.
The active space size could therefore be substantially reduced in such cases, therefore allowing the application of more powerful electron correlation methods.
However, a detailed study of this is beyond the scope of this work.
We also wish to explicitly note that there are many reactions in which the chemical transformations are not characterized by only the transition state (e.g., see Ref. knizia:CurlyArrows Fig. 3, or Refs. kraka:UrvaReview,joo:MechanismOfBarrierlessReaction,kraka:StunningExampleOfComplexMechanism), and in these cases the ability to construct a consistent active space along the entire reaction path, such as afforded by the AVAS procedure, may be valuable.
§ CONCLUSIONS
In this work, we investigated how to systematically and automatically construct
molecular active spaces solely from a single determinant wavefunction
together with a list of atomic valence orbitals. The atomic valence active space (AVAS) procedure is based on a straightforward
linear algebraic rotation of the occupied and unoccupied molecular orbital spaces which maximizes their given atomic valence character.
The method automatically detects the valence bonding partners of a given atomic valence orbital, and, by using
a single small threshold, can also detect non-bonding orbitals without constructing
spurious partners (either occupied or unoccupied).
To assess our scheme, we tested both the quality of our orbitals as initial guesses for CASSCF optimization,
as well as the accuracy and stability of the valence excitation energies calculated within our spaces. We find
high overlap of our orbitals with fully optimized CASSCF orbitals, demonstrating their high quality.
We can also obtain good CASSCF excitation energies in cases where the excitations are dominated by valence correlation.
In molecules where the excitations are not of this character, we find that the
addition of dynamic correlation (through the N-electron valence perturbation theory)
yields quantitative agreement with experiment.
No doubt it will still be necessary to experiment with active spaces in the modeling of large and very complex molecules.
Our study provides two reasons to believe that the difficulty of
performing reliable multireference calculations in complex problems can be reduced using the AVAS technique.
First, the simple procedure makes it trivial to obtain not only the minimal active spaces, but
also extended active spaces, for example, including additional ligand orbitals.
This makes it simpler to systematically explore different active spaces, eliminating user error and subjectivity in their definition,
and allowing for convergence of properties with respect to the active space size.
Second, systematically varying active space size, while including a dynamic correlation treatment (such as NEVPT2)
provides a straightforward way to assess whether our active space is converged, as computed observables should become
insensitive to the active space size.
For these reasons, we believe that the AVAS construction provides a simple route to painless multireference calculations by non-experts,
particularly in complex systems involving transition metals.
§ SUPPORTING INFORMATION
Supporting information: Additional computational details; geometric structures of the studied transition metal complexes; additional figures of studied active orbital spaces; numerical and structural data of the studied reaction path of the Fenton reaction.
This material is available free of charge via the Internet at http://pubs.acs.org.
§ ACKNOWLEDGMENTS
We acknowledge the US National Science Foundation for funding this research primarily through the award NSF:CHE-1665333. Additional support for software development and to support QS was provided through NSF:CHE-1657286. We
acknowledge additional support for GKC from the Simons Foundation through a Simons Investigatorship.
|
http://arxiv.org/abs/1701.07700v2 | 20170126135216 | Post-Newtonian parameter $γ$ and the deflection of light in ghost-free massive bimetric gravity | ["Manuel Hohmann"] | gr-qc | ["gr-qc", "hep-th"] |
manuel.hohmann@ut.ee
Laboratory of Theoretical Physics, Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
We consider the parametrized post-Newtonian (PPN) limit of ghost-free massive bimetric gravity with two mutually non-interacting matter sectors coupled to the two metrics. Making use of a gauge-invariant differential decomposition of the metric perturbations, we solve the field equations up to the linear PPN order for a static, point-like mass source. From the result we derive the PPN parameter γ for spherically symmetric systems, which describes the gravitational deflection of light by visible matter. By a comparison to its value measured in the solar system we obtain bounds on the parameters of the theory. We further discuss the deflection of light by dark matter and find an agreement with the observed light deflection by galaxies. We finally speculate about a possible explanation for the observed distribution of dark matter in galactic mergers such as Abell 520 and Abell 3827.
Post-Newtonian parameter γ and the deflection of light
in ghost-free massive bimetric gravity
Manuel Hohmann
December 30, 2023
=============================================================================================
§ MOTIVATION
Current observations, together with their interpretation according to the standard ΛCDM model of cosmology, indicate that visible matter constitutes only 5% of the total matter content of the universe, while the remaining constituents are given by 26% dark matter and 69% dark energy <cit.>. These dark components of the universe arise purely from phenomenology: dark energy offers a potential explanation for the observed accelerating expansion of the universe <cit.>, while dark matter potentially explains the rotation curves of galaxies <cit.>, the formation of large scale structures in the early universe <cit.> or the lensing and peculiar motion in galaxy clusters <cit.>. Note that all of these observations are based on the gravitational influence of the dark universe on visible matter. Despite significant effort, no direct, non-gravitational interaction of visible matter and dark components has been observed. This raises the question whether any such non-gravitational interaction exists, or whether the coupling between the dark and visible sectors is purely gravitational, or even gravity itself constitutes at least part of the dark sector.
In this article we discuss a model which naturally features a purely gravitational coupling prescription for dark matter, while at the same time being able to accommodate dark energy. This model is based on the idea that the geometry of spacetime, which mediates the gravitational interaction, is described not by a single metric as in general relativity, but by two separate metrics. Further, each matter field couples to only one of these metrics, and there is no direct, non-gravitational coupling between matter fields associated to different metrics. These assumptions imply the existence of two different matter sectors, whose mutual interaction is mediated only by an interaction between the two metrics, so that they appear mutually dark. However attractive in its phenomenology, this idea also leads to potential theoretical issues, since a coupling between them requires at least one of the corresponding gravitons to be massive <cit.>, and such massive gravity theories generally suffer from the existence of a ghost instability <cit.>.
While it has been believed for several decades that the aforementioned ghost instability completely excludes any theories with gravitationally interacting massive spin 2 particles, it has been discovered recently that this is not the case, and that a particular, narrow class of theories avoids the ghost <cit.>; see <cit.> for a number of reviews. The most simple class of such theories indeed features two metric tensors, and allows for two separate, mutually non-gravitationally non-interacting classes of matter fields, each of which couples exclusively to one metric, and which interact with each other only through an interaction between the two metrics <cit.>. The interpretation of one of them as corresponding to dark matter, as we discussed above, has also been studied <cit.>, possibly involving an additional “graviphoton” vector field and reproducing modified Newtonian dynamics (MOND) on galactic scales <cit.>. Note, however, that in contrast to the latter we do not introduce a graviphoton, or aim to model dark matter as a gravitational effect as in MOND. We further remark that in massive gravity theories also the massive graviton is a potential dark matter candidate, an idea which has only recently been considered <cit.>.
Besides providing possible explanations for the observed dark sector of the universe, any viable theory of gravity must of course also pass tests in the solar system. An important tool for testing metric gravity theories with high-precision data from solar system experiments is the parametrized post-Newtonian (PPN) formalism <cit.>. The main idea of the PPN formalism is to express the metric tensor as a perturbation around a flat background, and then expand the perturbation in terms of certain integrals over the gravitating matter distribution. The coefficients of these potentials in the metric perturbation are characteristic for a given gravity theory, and can directly be linked to observable quantities. In this article we focus on a particular PPN parameter, conventionally denoted γ, which has been measured to high precision in numerous solar system experiments <cit.>, in particular through very long baseline interferometry <cit.>, via the Shapiro delay of radio signals <cit.> and using combined observations of the motion of bodies in the solar system <cit.>. To present all observations are in full agreement with the general relativity value γ = 1 <cit.>.
While all of the aforementioned experiments observed the gravitational interaction within the visible sector, it should be noted that the PPN parameter γ has also been determined through the deflection of visible light by galaxies, whose total gravitating matter content contains a significant contribution from dark matter <cit.>. Also these observations, although less precise, are in agreement with the general relativity value γ = 1. It is needless to say that understanding the light deflection by dark matter is essential for a correct interpretation of observations where the dark matter distribution is reconstructed from lensing under the assumption that dark matter deflects light in the same way as visible matter. This is particularly important in the case of galactic mergers, such as the so-called “Bullet Cluster” 1E0657-558 <cit.>, the “Train Wreck Cluster” Abell 520 <cit.>, MACS J0025.4-1222 <cit.> or Abell 3827 <cit.>, where visible and dark matter appear clearly separated from each other. However, it is not a priori clear that this assumption is valid in a theory in which dark and visible matter couple differently to gravity.
As mentioned above, the PPN formalism in its standard form requires a single dynamical metric for the description of gravity. In order to discuss theories with multiple metric tensors, we need an extension of this standard PPN formalism. A possible extension, which features massless and massive gravity modes, but includes only one type of gravitating source matter, has been introduced and applied in <cit.>. A complementary extension to multiple metrics and a corresponding number of matter sectors, but including only massless gravity modes, has been developed and applied in <cit.>. In this article we choose to make use of the latter, and to extend it to also allow calculating the PPN parameter γ for both dark and visible matter in massive gravity. This is the simplest possible extension, and the first step towards a fully general extension of the formalism to massive gravity theories; the latter would allow for a calculation of all PPN parameters.
We remark that the perturbative expansion of the metric in a weak field limit, which is an important ingredient to the PPN formalism, is not always valid in the context of bimetric gravity due to the Vainshtein mechanism <cit.>, and that a full, non-linear treatment is required in order to determine the gravitational dynamics close to the source mass. This non-linear mechanism typically suppresses all deviations from general relativity within a given radius around the source mass, called the Vainshtein radius. A perturbative treatment is valid only outside this radius. We will not discuss the Vainshtein mechanism in this article, and restrict ourselves to the case of theories in which the Vainshtein radius is sufficiently small so that the perturbative treatment is valid on solar system scales and above.
The outline of this article is as follows. In section <ref> we briefly review the action and field equations of ghost-free massive bimetric gravity. We then perform a perturbative expansion of these field equations in section <ref>, using an adapted version of the PPN formalism. We further simplify the obtained equations using gauge-invariant perturbation theory in section <ref>. This yields a set of equations, which we solve for a static, point-like mass source in section <ref>, and thus determine the effective Newtonian gravitational constant and PPN parameter γ. We connect our result to observations, in particular of the deflection of light, in section <ref>. We end with a conclusion in section <ref>. A few lengthy calculations are displayed in the appendix. In appendix <ref> we derive the linearized interaction potential connecting the two metrics. In appendix <ref> we list derivatives of the Yukawa potential. We show how to check our solution of the field equations in appendix <ref>.
§ ACTION AND FIELD EQUATIONS
In this section we start our discussion of the post-Newtonian limit of bimetric gravity with a brief review of its action and gravitational field equations, which are derived by variation with respect to the two metrics. We then trace-reverse the field equations, as this will be more convenient when we construct their solution. These trace-reversed field equations will be the main ingredient for our calculation.
The starting point for our derivation is the action functional
S = _Md^4x[m_g^2/2√(- g)R^g + m_f^2/2√(- f)R^f - m^4√(- g)∑_n = 0^4β_ne_n(√(g^-1f))
+ √(- g)ℒ_m^g(g,Φ^g) + √(- f)ℒ_m^f(f,Φ^f)]
for two metric tensors g_μν, f_μν and two sets of matter fields Φ^g,f, each of which couples to only one metric tensor, and between which there is no direct, non-gravitational interaction. One may thus interpret, e.g., Φ^g as visible matter, constituted by the standard model fields and governed by the standard matter Lagrangian ℒ^g, and Φ^f as dark matter, constituted by a distinct set of fields with possibly different structure of the Lagrangian ℒ^f. However, we will not make any assumptions on the constituting fields of the two matter types here, or on their Lagrangians, as these will not be relevant for our calculation.
Note that since there are two metrics, there is no canonical prescription for raising or lowering tensor indices. Therefore, we will not raise or lower indices automatically, but provide definitions for all tensor fields with fixed index positions. In the action (<ref>) this applies to the Ricci scalars, each of which is defined solely through its corresponding metric, such that they are related to the Ricci tensors by
R^g = g^μνR^g_μν , R^f = f^μνR^f_μν .
Note further the appearance of the (1,1) tensor field g^-1f, which we assume to have a square root A such that
A^μ_σA^σ_ν = g^μσf_σν .
This is certainly the case in a sufficiently small neighborhood of the flat proportional background metrics g_μν = η_μν, f_μν = c^2η_μν, which we will henceforth consider. The functions e_0, …, e_4 in the action are the matrix invariants
e_k(A) = A^μ_1_[μ_1⋯ A^μ_k_μ_k] = 1/k!(4 - k)!ϵ^μ_1 ⋯μ_k λ_1 ⋯λ_4 - kϵ_ν_1 ⋯ν_k λ_1 ⋯λ_4 - kA^ν_1_μ_1⋯ A^ν_k_μ_k
of this square root, while their coefficients β_0, …, β_4 are constant, dimensionless parameters to the action. The remaining parameters are the Planck masses m_g, m_f and the interaction mass m, all of which have mass dimension one. Any choice of the constant parameters determines a particular action, and hence a particular theory.
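The invariants e_k(A) are the elementary symmetric polynomials of the eigenvalues of A, i.e. the sign-adjusted coefficients of the characteristic polynomial of A. As a quick symbolic cross-check (our addition, assuming the sympy library; not part of the original derivation), one can evaluate them on the proportional background A = c·1, where e_k should reduce to the binomial coefficient (4 choose k) times c^k:

```python
import sympy as sp

c, lam = sp.symbols('c lambda', positive=True)

# e_k(A) are the (sign-adjusted) coefficients of the characteristic polynomial
# det(lambda*1 - A) = lambda^4 - e1*lambda^3 + e2*lambda^2 - e3*lambda + e4
A = c * sp.eye(4)                        # background value A^mu_nu = c * delta^mu_nu
coeffs = A.charpoly(lam).all_coeffs()    # [1, -e1, e2, -e3, e4]
e = [(-1)**k * coeffs[k] for k in range(5)]
print(e)                                 # [1, 4*c, 6*c**2, 4*c**3, c**4]
```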
By variation with respect to the metric tensors we obtain the gravitational field equations
m_g^2(R^g_μν - 1/2g_μνR^g) + m^4V^g_μν = T^g_μν ,
m_f^2(R^f_μν - 1/2f_μνR^f) + m^4V^f_μν = T^f_μν .
Here we have defined the energy-momentum tensors as usual through
T^g_μν = -2/√(- g)δ(√(- g)ℒ_m^g(g,Φ_g))/δ g^μν ,
T^f_μν = -2/√(- f)δ(√(- f)ℒ_m^f(f,Φ_f))/δ f^μν .
The potential terms V^g,f_μν are given by
V^g_μν = g_μρ∑_n = 0^3(-1)^nβ_nY_n^ρ_ν(A) ,
V^f_μν = f_μρ∑_n = 0^3(-1)^nβ_4 - nY_n^ρ_ν(A^-1) ,
where the functions Y_0, …, Y_3 are defined as
Y_n(A) = ∑_k = 0^n(-1)^ke_k(A)A^n - k
and analogously for Y_n(A^-1). We remark that the action (<ref>), and hence also the field equations (<ref>), are fully symmetric under a simultaneous exchange g_μν↔ f_μν and β_n ↔β_4 - n.
While we could work directly with the field equations (<ref>), it turns out to be more convenient to use the trace-reversed equations instead, which read
m_g^2R^g_μν + m^4V̅^g_μν = T̅^g_μν ,
m_f^2R^f_μν + m^4V̅^f_μν = T̅^f_μν .
Here we have trace-reversed each term with its corresponding metric, i.e., we have applied the definitions
V̅^g_μν = V^g_μν - 1/2g_μνg^ρσV^g_ρσ , T̅^g_μν = T^g_μν - 1/2g_μνg^ρσT^g_ρσ ,
V̅^f_μν = V^f_μν - 1/2f_μνf^ρσV^f_ρσ , T̅^f_μν = T^f_μν - 1/2f_μνf^ρσT^f_ρσ .
The field equations (<ref>) are the equations we will be working with during the remainder of this article. In order to calculate the post-Newtonian limit, we will need a perturbative expansion of these equations. This will be done in the next section.
§ POST-NEWTONIAN APPROXIMATION
We now come to a perturbative expansion of the field equations (<ref>) displayed in the previous section. For this purpose, we first briefly review the notion of velocity orders in section <ref>, and label the relevant components of the metric and energy-momentum tensors. We then discuss the metric ansatz, essentially following the construction developed in <cit.>, in section <ref>. Finally, we apply these constructions to the field equations under consideration. We derive and solve the field equations at the zeroth velocity order in section <ref>, and derive the second order equations in section <ref>.
§.§ Expansion in velocity orders
A central ingredient of the PPN formalism is the assumption that the gravitating source matter is constituted by a perfect fluid. Since there are two different types of matter Φ^g,f in the theory we consider, which interact only gravitationally, we apply this assumption to each of them. Their energy-momentum tensors therefore take the form
T^g μν = (ρ^g + ρ^gΠ^g + p^g)u^g μu^g ν + p^gg^μν ,
T^f μν = (ρ^f + ρ^fΠ^f + p^f)u^f μu^f ν + p^ff^μν ,
with rest energy densities ρ^g,f, specific internal energies Π^g,f, pressures p^g,f and four-velocities u^g,f μ. Note that the four-velocities are normalized with their corresponding metrics,
u^g μu^g νg_μν = u^f μu^f νf_μν = -1 .
We further assume that the source matter is slow-moving within our chosen frame of reference, so that the velocity components satisfy
v^g,f i = u^g,f i/u^g,f 0≪ 1 .
We then assign orders of magnitude 𝒪(n) ∝ |v⃗|^n to all dynamical quantities. For the matter variables we assign ρ^g,f∼Π^g,f∼𝒪(2) and p^g,f∼𝒪(4), based on their values for the matter constituting the solar system. For the metrics we assume a small perturbation around a flat, proportional background solution, where we expand the perturbation in velocity orders in the form
g_μν = η_μν + h_μν = η_μν + h^(1)_μν + h^(2)_μν + h^(3)_μν + h^(4)_μν + 𝒪(5) ,
c^-2f_μν = η_μν + e_μν = η_μν + e^(1)_μν + e^(2)_μν + e^(3)_μν + e^(4)_μν + 𝒪(5)
with constant c > 0. Not all of these components are relevant for the post-Newtonian limit, while others vanish due to symmetries and conservation laws. The only non-vanishing components that are relevant for our calculation of the PPN parameter γ in this article are
h^(2)_00 , h^(2)_ij , e^(2)_00 , e^(2)_ij .
Further, we only consider quasi-static solutions, so that changes of the metric are induced only by the motion of the source matter. Every time derivative ∂_0 therefore carries an additional velocity order 𝒪(1). We finally assume that the source matter is located in a bounded region and that the metrics are asymptotically flat, so that the metric perturbations and their derivatives vanish at infinity.
Since we are interested only in the second order metric perturbations (<ref>), and hence need to solve the field equations only up to the second velocity order, it is also sufficient to expand the energy-momentum tensors (<ref>) to the second velocity order. The only relevant components are
T^g(2) 00 = ρ^g , T^f(2) 00 = ρ^f/c^2 , T^g(2) ij = T^f(2) ij = 0 .
In order to use them in the field equations (<ref>), we need to lower their indices with their corresponding metrics, which yields
T^g(2)_00 = ρ^g , T^f(2)_00 = c^2ρ^f , T^g(2)_ij = T^f(2)_ij = 0 .
Finally, we also need to trace-reverse these terms with their corresponding metrics, from which we obtain
T̅^g(2)_00 = 1/2ρ^g , T̅^f(2)_00 = c^2/2ρ^f , T̅^g(2)_ij = 1/2ρ^gδ_ij , T̅^f(2)_ij = c^2/2ρ^fδ_ij .
These terms will enter the trace-reversed field equations (<ref>).
§.§ Metric ansatz and PPN parameters
Another important ingredient of the PPN formalism is an ansatz for the metric in terms of potentials, which are integrals over the source matter distribution. Their coefficients in the metric are observable quantities which allow a characterization of the gravity theory under examination. For single metric theories there is a standard form for this PPN metric ansatz <cit.>. Here we use a generalization to multimetric theories developed in <cit.>. Our ansatz for the second order metric perturbation reads
h^(2)_00 = -α^ggχ^g - α^gfχ^f ,
h^(2)_ij = 2θ^ggχ^g_,ij + 2θ^gfχ^f_,ij - [(γ^gg + θ^gg)χ^g + (γ^gf + θ^gf)χ^f]δ_ij ,
e^(2)_00 = -α^fgχ^g - α^ffχ^f ,
e^(2)_ij = 2θ^fgχ^g_,ij + 2θ^ffχ^f_,ij - [(γ^fg + θ^fg)χ^g + (γ^ff + θ^ff)χ^f]δ_ij ,
where ∆ = ∂^i∂_i and indices are raised and lowered with the flat metric η_μν. The PPN potentials we have introduced here are second order derivatives of the superpotentials
χ^g(t, x⃗) = -∫ρ^g(t, x⃗') |x⃗ - x⃗'| d^3x' , χ^f(t, x⃗) = -c^3∫ρ^f(t, x⃗') |x⃗ - x⃗'| d^3x' .
The definition of χ^f contains a factor c^3, which originates from the volume element of the spatial part of the unperturbed contribution c^2η_μν of the metric f_μν. In the PPN metric we further have twelve PPN parameters α^g,f g,f, γ^g,f g,f, θ^g,f g,f. Note that these are not independent, as we have not yet fixed a gauge for the metric. The gauge freedom allows us to apply a diffeomorphism generated by a vector field ξ^μ, provided that it preserves the perturbation ansatz (<ref>). This means that the vector field ξ^μ must be of the same order as the metric perturbations. Recall that under a diffeomorphism the metrics change according to
δ_ξg_μν = (ℒ_ξg)_μν = 2g_σ(μ∇^g_ν)ξ^σ , δ_ξf_μν = (ℒ_ξf)_μν = 2f_σ(μ∇^f_ν)ξ^σ ,
where ℒ denotes the Lie derivative. At the linear perturbation level, which is sufficient for our calculation here, this yields the transformation of the metric perturbations
δ_ξh_μν = δ_ξe_μν = 2η_σ(μ∂_ν)ξ^σ .
Further demanding consistency with the PPN metric ansatz (<ref>) we find that the only allowed and relevant vector field is of second velocity order and can be written as <cit.>
ξ_0 = 0 , ξ_i = λ^gχ^g_,i + λ^fχ^f_,i ,
with two constants λ^g,f, and where we have defined ξ_μ = η_μνξ^ν. Under a diffeomorphism generated by this vector field the metric perturbations change by
δ_ξh^(2)_00 = δ_ξe^(2)_00 = 0 , δ_ξh^(2)_ij = δ_ξe^(2)_ij = 2λ^gχ^g_,ij + 2λ^fχ^f_,ij .
By choosing λ^g = -θ^gg and λ^f = -θ^ff we can always eliminate these two PPN parameters from the metric ansatz. In the remainder of our calculation we will adopt this gauge, in which θ^gg = θ^ff = 0, as this turns out to be compatible with the standard PPN gauge for single metric theories <cit.>.
§.§ Background solution
Recall from section <ref> that we have expanded the metrics around a flat Minkowski background, as usual in the PPN formalism. For the PPN formalism to be applicable in this form, it is necessary that this background is a solution of the field equations at the zeroth velocity order. Since both the Ricci tensors R^g,f(0)_μν and the energy-momentum tensors T̅^g,f(0)_μν at the zeroth velocity order vanish, these simply reduce to
m^4V̅^g(0)_μν = 0 , m^4V̅^f(0)_μν = 0 ,
where we assume m > 0. In order to determine the potential terms V̅^g,f(0)_μν, and also the second velocity order in the next section, it is useful to first linearize the potential in the metric perturbations. Since this is a rather lengthy procedure, we have deferred it to appendix <ref>. Here we make use of the result (<ref>), from which we read off the zeroth order contribution
V̅^g(0)_μν = -(β̃_0 + 3β̃_1 + 3β̃_2 + β̃_3)η_μν = -(β_0 + 3cβ_1 + 3c^2β_2 + c^3β_3)η_μν ,
V̅^f(0)_μν = -(β̃_1 + 3β̃_2 + 3β̃_3 + β̃_4)c^-2η_μν = -(β_1 + 3cβ_2 + 3c^2β_3 + c^3β_4)c^-1η_μν ,
where we used the abbreviations β̃_k = c^kβ_k. We require that these equations, which are polynomial in c, possess at least one common positive solution c > 0. Note that a particular fixed c solves both equations if and only if the parameters in the action (<ref>) satisfy
β_0 = -3cβ_1 - 3c^2β_2 - c^3β_3 , β_4 = -c^-3β_1 - 3c^-2β_2 - 3c^-1β_3 .
The condition that the background equations are solved by proportional flat metrics therefore completely determines the two parameters β_0 and β_4 in the action in terms of a new free parameter c > 0. In the following we will therefore replace β_0 and β_4, and hence β̃_0 and β̃_4, using
β̃_0 = -3β̃_1 - 3β̃_2 - β̃_3 , β̃_4 = -β̃_1 - 3β̃_2 - 3β̃_3 ,
and keep β_1, β_2, β_3 and c as free parameters of the class of theories we discuss.
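These substitutions can be obtained mechanically; a short symbolic sketch (our cross-check, assuming the sympy library) solves the two background conditions for β_0 and β_4:

```python
import sympy as sp

b0, b1, b2, b3, b4 = sp.symbols('beta0:5', real=True)
c = sp.symbols('c', positive=True)

# vanishing of the zeroth-order potentials, cf. the two background equations
eq_g = b0 + 3*c*b1 + 3*c**2*b2 + c**3*b3
eq_f = b1 + 3*c*b2 + 3*c**2*b3 + c**3*b4

print(sp.solve([eq_g, eq_f], [b0, b4]))
# {beta0: -3*c*beta1 - 3*c**2*beta2 - c**3*beta3,
#  beta4: (-beta1 - 3*c*beta2 - 3*c**2*beta3)/c**3}
```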
§.§ Second order field equations
For the remainder of our calculation and in order to determine the second order metric perturbations we will need to expand the field equations (<ref>) to the second velocity order. The only relevant components are given by
m_g^2R^g(2)_00 + m^4V̅^g(2)_00 = T̅^g(2)_00 ,
m_g^2R^g(2)_ij + m^4V̅^g(2)_ij = T̅^g(2)_ij ,
m_f^2R^f(2)_00 + m^4V̅^f(2)_00 = T̅^f(2)_00 ,
m_f^2R^f(2)_ij + m^4V̅^f(2)_ij = T̅^f(2)_ij .
We have already calculated the necessary components (<ref>) of the energy-momentum tensor at the second velocity order. The components of the Ricci tensor are easily obtained and yield the standard textbook result <cit.>
R^g(2)_00 = -1/2 h^(2)_00 ,
R^g(2)_ij = -1/2( h^(2)_ij - h^(2)_00,ij + h^(2)_kk,ij - h^(2)_ik,jk - h^(2)_jk,ik) ,
R^f(2)_00 = -1/2 e^(2)_00 ,
R^f(2)_ij = -1/2( e^(2)_ij - e^(2)_00,ij + e^(2)_kk,ij - e^(2)_ik,jk - e^(2)_jk,ik) .
Finally, we also need the second velocity order contribution from the potential terms. Using the result (<ref>) derived in appendix <ref> one finds the components
V̅^g(2)_00 = 1/4β̃(3h^(2)_00 - 3e^(2)_00 - h^(2)_ii + e^(2)_ii) ,
V̅^g(2)_ij = 1/4β̃[2h^(2)_ij - 2e^(2)_ij + (h^(2)_kk - e^(2)_kk - h^(2)_00 + e^(2)_00)δ_ij] ,
V̅^f(2)_00 = 1/4c^2β̃(3e^(2)_00 - 3h^(2)_00 - e^(2)_ii + h^(2)_ii) ,
V̅^f(2)_ij = 1/4c^2β̃[2e^(2)_ij - 2h^(2)_ij + (e^(2)_kk - h^(2)_kk - e^(2)_00 + h^(2)_00)δ_ij] ,
where we introduced the abbreviation
β̃ = β̃_1 + 2β̃_2 + β̃_3 .
These are the field equations we will be using during the remainder of this article. However, directly working with these equations poses two difficulties. First, the field equations possess a gauge freedom, as discussed in section <ref>, and so the solution will be unique only after gauge fixing. Second, the equations turn out to be involved and cumbersome to solve due to the mixing of tensor components. Both of these difficulties can be solved straightforwardly by performing a gauge-invariant differential decomposition of the metric perturbations. We will detail this formalism in the next section.
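Before proceeding, the 00-component above can be cross-checked against the linearized potential derived in appendix <ref>: inserting the background condition on β̃_0 there collapses all coefficients to multiples of β̃. A small symbolic sketch of this check (our addition, assuming sympy):

```python
import sympy as sp

b1, b2, b3 = sp.symbols('bt1:4', real=True)      # the rescaled beta-tilde_k
b0 = -3*b1 - 3*b2 - b3                           # background condition on beta-tilde_0
bt = b1 + 2*b2 + b3                              # the abbreviation beta-tilde

h00, e00, hii, eii = sp.symbols('h00 e00 hii eii', real=True)
tr = -(h00 - e00) + (hii - eii)                  # eta^{rho sigma}(h - e)_{rho sigma}

# 00-component of the linearized trace-reversed potential (appendix A), eta_00 = -1
V00 = (b1/4 + b2/2 + b3/4)*(-1)*tr \
      - (b0 + sp.Rational(5, 2)*b1 + 2*b2 + b3/2)*h00 \
      - (b1/2 + b2 + b3/2)*e00
print(sp.simplify(V00 - bt/4*(3*h00 - 3*e00 - hii + eii)))   # 0
```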
§ GAUGE-INVARIANT DIFFERENTIAL DECOMPOSITION
In the previous section we have performed an expansion of the gravitational field equations (<ref>) into velocity orders and obtained the second order equations (<ref>). Instead of solving them directly for the metric perturbations (<ref>), we will first bring them into a significantly simpler form in this section. For this purpose, we employ the formalism of gauge-invariant perturbations, which is well-known from cosmology <cit.>. We apply this procedure in several steps. First, we decompose the metric perturbations into gauge-invariant potentials in section <ref>. In section <ref> we further decompose these potentials into velocity orders as required by the PPN formalism. Using the expressions obtained, we then decompose the Ricci tensors (<ref>) in section <ref>, the potentials (<ref>) in section <ref> and the energy-momentum tensors (<ref>) in section <ref>. This finally yields a full decomposition of the field equations (<ref>) in section <ref>.
§.§ Decomposition of the metrics
We start with a differential decomposition of the metric perturbations. Using the split into time and space components, we introduce the decomposition
h_00 = -2ϕ^g,
h_0i = ∂_iB^g + B^g_i,
h_ij = -2ψ^gδ_ij + 2∆̂_ijE^g + 4∂_(iE^g_j) + 2E^g_ij ,
e_00 = -2ϕ^f,
e_0i = ∂_iB^f + B^f_i,
e_ij = -2ψ^fδ_ij + 2∆̂_ijE^f + 4∂_(iE^f_j) + 2E^f_ij
into four scalars ϕ^g,f, ψ^g,f, B^g,f, E^g,f, two divergence-free vectors B^g,f_i, E^g,f_i and one trace-free, divergence-free tensor E^g,f_ij. Here ∆̂_ij denotes the trace-free second derivative ∆̂_ij = ∂_i∂_j - 1/3δ_ij∆. From these quantities we further derive the potentials
I_1^g,f = ϕ^g,f + ∂_0B^g,f - ∂_0^2E^g,f ,
I_2^g,f = ψ^g,f + 1/3∆E^g,f ,
I_3^g,f = B^g,f ,
I_4^g,f = E^g,f ,
I^g,f_i = B^g,f_i - 2∂_0E^g,f_i ,
I'^g,f_i = E^g,f_i ,
I^g,f_ij = E^g,f_ij .
The advantage of using these potentials becomes apparent when we consider gauge transformations of the metric, i.e., diffeomorphisms generated by a vector field ξ^μ which preserve the perturbation ansatz (<ref>) as discussed in section <ref>. Here we introduce a differential decomposition for ξ_μ = η_μνξ^ν of the form
ξ_0 = X , ξ_i = ∂_iX' + X_i
into two scalars X, X' and one divergence-free vector X_i. One now easily computes from the decomposition (<ref>) the transformations
δ_ξϕ^g,f = -∂_0X , δ_ξψ^g,f = -1/3∆X' , δ_ξB^g,f = ∂_0X' + X , δ_ξE^g,f = X' ,
δ_ξB^g,f_i = ∂_0X_i , δ_ξE^g,f_i = 1/2X_i , δ_ξE^g,f_ij = 0 .
The potentials (<ref>) hence transform as
δ_ξI_1^g,f = δ_ξI_2^g,f = 0 , δ_ξI_3^g,f = ∂_0X' + X , δ_ξI_4^g,f = X' ,
δ_ξI^g,f_i = 0 , δ_ξI'^g,f_i = 1/2X_i , δ_ξI^g,f_ij = 0 .
Finally, defining the linearly related potentials
I_1^± = I_1^g ± I_1^f ,
I_2^± = I_2^g ± I_2^f ,
I_3^± = I_3^g ± I_3^f ,
I_4^± = I_4^g ± I_4^f ,
I^±_i = I^g_i ± I^f_i ,
I'^±_i = I'^g_i ± I'^f_i ,
I^±_ij = I^g_ij± I^f_ij ,
we see that the six scalar potentials I_1^±, I_2^±, I_3^-, I_4^-, the three vectors I^±_i, I'^-_i and the two tensors I^±_ij are invariant under gauge transformations, while the remaining two scalars I_3^+, I_4^+ and the vector I'^+_i are pure gauge degrees of freedom corresponding to the two scalars and the vector constituting the diffeomorphisms. The only physical degrees of freedom are the gauge invariant potentials. Since the gravitational field equations are derived from a diffeomorphism invariant action, we can fully express them in terms of these gauge invariants. In the following we will do so by introducing a suitable differential decomposition of the Ricci tensors, potentials and energy-momentum tensors.
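The invariance of, for example, I_1 and I_2 is easily verified mechanically. The following sketch (our cross-check, assuming sympy) applies the gauge variations listed above to generic generating functions X, X' and confirms that both combinations are unaffected:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X  = sp.Function('X')(t, x, y, z)       # time component of the gauge vector field
Xp = sp.Function('Xp')(t, x, y, z)      # scalar part of its spatial components

lap = lambda f: sum(sp.diff(f, v, 2) for v in (x, y, z))

# gauge variations of the scalar metric potentials
d_phi, d_psi = -sp.diff(X, t), -sp.Rational(1, 3)*lap(Xp)
d_B, d_E = sp.diff(Xp, t) + X, Xp

d_I1 = d_phi + sp.diff(d_B, t) - sp.diff(d_E, t, 2)
d_I2 = d_psi + sp.Rational(1, 3)*lap(d_E)
print(sp.simplify(d_I1), sp.simplify(d_I2))   # 0 0
```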
§.§ Gauge invariant potentials and velocity orders
Recall from section <ref> that we have decomposed the metric perturbations into velocity orders and that only the components (<ref>) are relevant for the calculation we present in this article. We now apply the decomposition into velocity orders to the gauge invariant potentials above in order to determine which of them will be relevant for our calculation. A comparison of the relevant components (<ref>) with the differential decomposition (<ref>) shows that only the quantities
ϕ^g,f(2) , ψ^g,f(2) , E^g,f(2) , E^g,f(2)_i , E^g,f(2)_ij
at the second velocity order will be relevant. Using the relations (<ref>), while taking into account that time derivatives are weighted with an additional velocity order 𝒪(1), then yields the relevant potentials
I_1^g,f(2) = ϕ^g,f(2) ,
I_2^g,f(2) = ψ^g,f(2) + 1/3∆E^g,f(2) ,
I_4^g,f(2) = E^g,f(2) ,
I'^g,f(2)_i = E^g,f(2)_i ,
I^g,f(2)_ij = E^g,f(2)_ij .
Finally, transitioning to the linearly related potentials (<ref>) shows that the only relevant gauge-invariant potentials are the five scalars I_1^±(2), I_2^±(2), I_4^-(2), the vector I'^-(2)_i and the two tensors I^±(2)_ij, while the scalar I_4^+(2) and the vector I'^+(2)_i are pure gauge quantities. From the former we can now calculate the relevant components of the Ricci tensor and the potential.
§.§ Decomposition of the Ricci tensors
We now perform a differential decomposition of the Ricci tensors, similar to the differential decomposition (<ref>) of the metric introduced above. Here we use the defining relations
R^g,f_00 = K_1^g,f ,
R^g,f_0i = ∂_iK_3^g,f + K^g,f_i ,
R^g,f_ij = 1/3K_2^g,fδ_ij + ∆̂_ijK_4^g,f + 2∂_(iK'^g,f_j) + K^g,f_ij .
From this definition and the second order field equations (<ref>) it follows that the only relevant components for our calculation are given by
K_1^g,f(2) = ∆I_1^g,f(2) ,
K_2^g,f(2) = 4∆I_2^g,f(2) - ∆I_1^g,f(2) ,
K_4^g,f(2) = I_2^g,f(2) - I_1^g,f(2) ,
K'^g,f(2)_i = 0 ,
K^g,f(2)_ij = -∆I^g,f(2)_ij .
Comparing these expressions with the gauge transformations (<ref>) we see that they contain only gauge-invariant potentials, as expected from the fact that they originate from a diffeomorphism invariant action.
§.§ Decomposition of the potentials
For the trace-reversed potentials we proceed in full analogy to the decomposition (<ref>) of the Ricci tensors. Here we use the decomposition
V̅^g,f_00 = U_1^g,f , V̅^g,f_0i = ∂_iU_3^g,f + U^g,f_i , V̅^g,f_ij = 1/3U_2^g,fδ_ij + ∆̂_ijU_4^g,f + 2∂_(iU'^g,f_j) + U^g,f_ij .
From the second order field equations (<ref>) we read off that the relevant components are given by
U_1^g(2) = -c^2U_1^f(2) = -1/2β̃(3I_1^-(2) - 3I_2^-(2) + ∆I_4^-(2)) ,
U_2^g(2) = -c^2U_2^f(2) = 1/2β̃(3I_1^-(2) - 15I_2^-(2) + 5∆I_4^-(2)) ,
U_4^g(2) = -c^2U_4^f(2) = β̃I_4^-(2) ,
U'^g(2)_i = -c^2U'^f(2)_i = β̃I'^-(2)_i ,
U^g(2)_ij = -c^2U^f(2)_ij = β̃I^-(2)_ij .
Again we see that these depend only on gauge invariant potentials, as expected.
§.§ Decomposition of the energy-momentum tensors
We finally also need to perform a differential decomposition of the energy-momentum tensors. Following the same prescription as for the Ricci tensors and the potentials we define
T̅^g,f_00 = Q_1^g,f , T̅^g,f_0i = ∂_iQ_3^g,f + Q^g,f_i , T̅^g,f_ij = 1/3Q_2^g,fδ_ij + ∆̂_ijQ_4^g,f + 2∂_(iQ'^g,f_j) + Q^g,f_ij .
From the expressions (<ref>) for the second order trace-reversed energy-momentum tensors of the perfect fluid we then obtain the relevant components
Q_1^g(2) = 1/2ρ^g ,
Q_1^f(2) = 1/2c^2ρ^f ,
Q_2^g(2) = 3/2ρ^g ,
Q_2^f(2) = 3/2c^2ρ^f ,
Q_4^g,f(2) = 0 ,
Q'^g,f(2)_i = 0 ,
Q^g,f(2)_ij = 0 .
These are all expressions we need for the field equations (<ref>) at the second velocity order.
§.§ Decomposition of the field equations
We now have all expressions at hand which are necessary to perform a differential decomposition of the second order field equations (<ref>) and to fully express them in terms of gauge-invariant quantities. It is an important feature of the differential decomposition that it is unique and bijective under the boundary conditions mentioned in section <ref>, which imply that all metric perturbations and their derivatives vanish at infinity. It thus follows that the field equations (<ref>) are equivalent to the decomposed field equations
m_g^2K_1^g(2) + m^4U_1^g(2) = Q_1^g(2) ,
m_f^2K_1^f(2) + m^4U_1^f(2) = Q_1^f(2) ,
m_g^2K_2^g(2) + m^4U_2^g(2) = Q_2^g(2) ,
m_f^2K_2^f(2) + m^4U_2^f(2) = Q_2^f(2) ,
m_g^2K_4^g(2) + m^4U_4^g(2) = Q_4^g(2) ,
m_f^2K_4^f(2) + m^4U_4^f(2) = Q_4^f(2) ,
m_g^2K'^g(2)_i + m^4U'^g(2)_i = Q'^g(2)_i ,
m_f^2K'^f(2)_i + m^4U'^f(2)_i = Q'^f(2)_i ,
m_g^2K^g(2)_ij + m^4U^g(2)_ij = Q^g(2)_ij ,
m_f^2K^f(2)_ij + m^4U^f(2)_ij = Q^f(2)_ij .
We can now insert the expressions for the differential components of the Ricci tensors, the potentials and the energy-momentum tensors which we derived above. We start with the trace-free, divergence-free tensor equations (<ref>). Inserting the components K^g,f(2)_ij, U^g,f(2)_ij, Q^g,f(2)_ij yields the equations
-m_g^2/2(∆I^+(2)_ij + ∆I^-(2)_ij) + m^4β̃ I^-(2)_ij = 0 ,
-m_f^2/2(∆I^+(2)_ij - ∆I^-(2)_ij) - m^4β̃/c^2 I^-(2)_ij = 0 .
Note that together with the boundary conditions they yield the trivial solution I^±(2)_ij = 0. We then continue with the divergence-free vector equations (<ref>). Using the expressions for K'^g,f(2)_i, U'^g,f(2)_i, Q'^g,f(2)_i we obtain
m^4β̃I'^-(2)_i = 0 , -m^4β̃/c^2I'^-(2)_i = 0 .
These equations are equivalent as a consequence of the Bianchi identities, which follow from the diffeomorphism invariance of the action (<ref>). Also these equations yield a trivial solution I'^-(2)_i = 0. We are thus left with the scalar equations (<ref>), (<ref>) and (<ref>), which take the form
1/2ρ^g = m_g^2/2(∆I_1^+(2) + ∆I_1^-(2)) - m^4β̃/2(3I_1^-(2) - 3I_2^-(2) + ∆I_4^-(2)) ,
c^2/2ρ^f = m_f^2/2(∆I_1^+(2) - ∆I_1^-(2)) + m^4β̃/2c^2(3I_1^-(2) - 3I_2^-(2) + ∆I_4^-(2)) ,
3/2ρ^g = m_g^2/2(4∆I_2^+(2) + 4∆I_2^-(2) - ∆I_1^+(2) - ∆I_1^-(2)) + m^4β̃/2(3I_1^-(2) - 15I_2^-(2) + 5∆I_4^-(2)) ,
3c^2/2ρ^f = m_f^2/2(4∆I_2^+(2) - 4∆I_2^-(2) - ∆I_1^+(2) + ∆I_1^-(2)) - m^4β̃/2c^2(3I_1^-(2) - 15I_2^-(2) + 5∆I_4^-(2)) ,
0 = m_g^2/2(I_2^+(2) + I_2^-(2) - I_1^+(2) - I_1^-(2)) + m^4β̃ I_4^-(2) ,
0 = m_f^2/2(I_2^+(2) - I_2^-(2) - I_1^+(2) + I_1^-(2)) - m^4β̃/c^2 I_4^-(2) .
Note that also these equations are not independent, but are related to each other as a consequence of the Bianchi identities. Indeed, one easily checks that
K_2^g,f(2) - 3K_1^g,f(2) - 4∆K_4^g,f(2) = 0 ,
Q_2^g,f(2) - 3Q_1^g,f(2) - 4∆Q_4^g,f(2) = 0 ,
U_2^g(2) - 3U_1^g(2) - 4∆U_4^g(2) = 6β̃(I_1^-(2) - 2I_2^-(2)) = -c^2(U_2^f(2) - 3U_1^f(2) - 4∆U_4^f(2)) ,
which shows that the corresponding linear combinations of the scalar equations become identical. Symbolically, this can be written as
(<ref>) - 3(<ref>) - 4∆(<ref>) = -c^2[(<ref>) - 3(<ref>) - 4∆(<ref>)] .
Hence, one of the equations (<ref>) is redundant and can be omitted. The remaining five equations then determine the five gauge-invariant scalar potentials I_1^±(2), I_2^±(2), I_4^-(2). We will solve these equations in the following section for the special case of a static point mass source.
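The structure of the solution can already be confirmed at this stage. The sketch below (our cross-check, assuming sympy; not part of the original derivation) solves the five remaining scalar equations in Fourier space, where ∆ → -q^2 and the point-source densities become constants. The f-sector equations have been multiplied by c^2 so that only the rescaled quantities m̃_g = m_g, m̃_f = cm_f, M̃^g = M^g, M̃^f = cM^f introduced in the next section appear; the result for I_1^{-(2)} is then precisely the Fourier transform of the screened Poisson solution found there:

```python
import sympy as sp

q2, m4b, mg2, mf2, Mg, Mf = sp.symbols('q2 m4b mg2 mf2 Mg Mf', positive=True)
I1p, I1m, I2p, I2m, I4m = sp.symbols('I1p I1m I2p I2m I4m')

# five independent scalar equations in Fourier space (Laplacian -> -q2, delta -> 1);
# m4b stands for m^4 * beta-tilde, mg2/mf2 for the squared rescaled Planck masses
S = 3*I1m - 3*I2m - q2*I4m
eqs = [
    sp.Eq(-mg2*q2/2*(I1p + I1m) - m4b/2*S, Mg/2),
    sp.Eq(-mf2*q2/2*(I1p - I1m) + m4b/2*S, Mf/2),
    sp.Eq(-mg2*q2/2*(4*I2p + 4*I2m - I1p - I1m)
          + m4b/2*(3*I1m - 15*I2m - 5*q2*I4m), 3*Mg/2),
    sp.Eq(-mf2*q2/2*(4*I2p - 4*I2m - I1p + I1m)
          - m4b/2*(3*I1m - 15*I2m - 5*q2*I4m), 3*Mf/2),
    sp.Eq(mg2/2*(I2p + I2m - I1p - I1m) + m4b*I4m, 0),
]
sol = sp.solve(eqs, [I1p, I1m, I2p, I2m, I4m], dict=True)[0]

mu2 = m4b*(1/mf2 + 1/mg2)                       # graviton mass squared
target = -sp.Rational(2, 3)*(Mg/mg2 - Mf/mf2)/(q2 + mu2)
print(sp.simplify(sol[I1m] - target))           # 0: the screened Poisson solution
```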
§ STATIC SPHERICALLY SYMMETRIC SOLUTION
Using the gauge invariant field equations (<ref>) derived in the preceding section we are now in the position to construct an explicit solution. The starting point will be a point mass, which we discuss in section <ref>. We will then determine a solution for the gauge invariant potentials in section <ref>. From these we will derive the metric components in section <ref>, reversing the procedure detailed in section <ref>. By comparison with the metric ansatz (<ref>) we read off the PPN parameters in section <ref>. We finally discuss a few limiting cases in section <ref>.
§.§ Point-mass source
The matter source we consider for our solution is a static point mass located at the origin of our coordinate system, which is constituted by masses M^g and M^f with respect to the two matter sectors. Invoking the interpretation of the matter sectors as visible and dark matter, this would correspond to a source containing both visible and dark matter, unless one of the masses vanishes. This choice is the most general one, and includes the physically relevant case of a galaxy with a dark matter component, as we discuss later in section <ref>. A source of this type is characterized by the matter variables
ρ^g = M^gδ(x⃗) , ρ^f = M^fδ(x⃗)/c^3 , Π^g,f = 0 , p^g,f = 0 , v^g,f_i = 0 ,
where we have normalized the delta function in ρ^f with the spatial volume element c^3 of the unperturbed metric f^(0)_μν = c^2η_μν. Note that this factor cancels the volume element in the corresponding superpotential (<ref>). Using isotropic spherical coordinates, the superpotentials thus read
χ^g,f = -M^g,fr .
For later convenience we also list the second order derivatives of the superpotentials, which take the form
χ^g,f_,ij = M^g,f(x_ix_j/r^3 - δ_ij/r) , ∆χ^g,f = -2M^g,f/r .
These will be used when we read off the PPN parameters in section <ref>.
§.§ Gauge-invariant potentials
We will now determine the gauge-invariant potentials I_1^±(2), I_2^±(2), I_4^-(2) by solving the scalar part (<ref>) of the field equations at the second velocity order, where we assume the matter source given by the point mass introduced above. It will turn out to be convenient to use rescaled mass units
m̃_g = m_g , m̃_f = cm_f , M̃^g = M^g , M̃^f = cM^f .
We then start with the purely algebraic equations (<ref>) and (<ref>). Using the definitions above these take the form
0 = m̃_g^2/2(I_2^+(2) + I_2^-(2) - I_1^+(2) - I_1^-(2)) + m^4β̃I_4^-(2) ,
0 = m̃_f^2/2(I_2^+(2) - I_2^-(2) - I_1^+(2) + I_1^-(2)) - m^4β̃I_4^-(2) .
Here we choose to solve these equations for the potentials I_2^±(2). The solutions are given by
I_2^±(2) = I_1^±(2) - (1/m̃_g^2∓1/m̃_f^2)m^4β̃I_4^-(2) .
We can use this relation to eliminate I_2^±(2) from the remaining equations. Using the linear combination (<ref>), which together with the boundary conditions yields
I_1^-(2) = 2I_2^-(2) ,
we can then solve for I_4^-(2) and obtain the solution
I_4^-(2) = 1/2μ^2I_1^-(2) ,
where we have defined the mass parameter
μ = m^2√(β̃(1/m̃_f^2 + 1/m̃_g^2)) .
We now take a suitable linear combination of the scalar equations (<ref>) and (<ref>), so that the terms involving I_1^+(2) cancel. Eliminating I_2^-(2) and I_4^-(2) with the relations (<ref>) and (<ref>) we obtain
∆I_1^-(2) - μ^2I_1^-(2) = 2/3(M̃^g/m̃_g^2 - M̃^f/m̃_f^2)δ(x⃗) ,
which is a screened Poisson equation for I_1^-(2). The solution is given by
I_1^-(2) = -(M̃^g/m̃_g^2 - M̃^f/m̃_f^2) e^-μ r/(6π r) .
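Away from the origin this profile satisfies the homogeneous screened Poisson equation, while the familiar normalization ∆(1/r) = -4πδ(x⃗) of the Coulomb kernel fixes the prefactor 1/(6π) = (2/3)·1/(4π). A minimal symbolic check (ours, assuming sympy):

```python
import sympy as sp

r, mu = sp.symbols('r mu', positive=True)
Y = sp.exp(-mu*r)/r                     # Yukawa profile, valid for r != 0
lap_Y = sp.diff(r*Y, r, 2)/r            # radial Laplacian (1/r) d^2/dr^2 (r Y)
print(sp.simplify(lap_Y - mu**2*Y))     # 0; the -4*pi*delta term lives at r = 0
```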
From the relations (<ref>) and (<ref>) then immediately follows
I_2^-(2) = -(M̃^g/m̃_g^2 - M̃^f/m̃_f^2) e^-μ r/(12π r) ,
I_4^-(2) = -(M̃^g/m̃_g^2 - M̃^f/m̃_f^2) e^-μ r/(12πμ^2 r) .
Inserting these results into the scalar equation (<ref>), we obtain an equation for I_1^+(2), which is most conveniently expressed as
∆[I_1^+(2) + (m̃_g^2 - m̃_f^2)/(m̃_g^2 + m̃_f^2) I_1^-(2)] = (M̃^g + M̃^f)/(m̃_g^2 + m̃_f^2) δ(x⃗) .
This is an ordinary Poisson equation and one immediately reads off the solution
I_1^+(2) = -(M̃^g + M̃^f)/(m̃_g^2 + m̃_f^2) 1/(4π r) - (m̃_g^2 - m̃_f^2)/(m̃_g^2 + m̃_f^2) I_1^-(2) = -(M̃^g + M̃^f)/(m̃_g^2 + m̃_f^2) 1/(4π r) + (m̃_g^2 - m̃_f^2)/(m̃_g^2 + m̃_f^2)(M̃^g/m̃_g^2 - M̃^f/m̃_f^2) e^-μ r/(6π r) .
Finally, making use of the relation (<ref>) yields
I_2^+(2) = -(M̃^g + M̃^f)/(m̃_g^2 + m̃_f^2) 1/(4π r) + (m̃_g^2 - m̃_f^2)/(m̃_g^2 + m̃_f^2)(M̃^g/m̃_g^2 - M̃^f/m̃_f^2) e^-μ r/(12π r) .
This completes the solution of the field equations in terms of gauge-invariant potentials.
§.§ Metric components
Before we calculate the metric components from the solution for the gauge-invariant potentials, it is convenient to introduce the abbreviations
ℐ_M = (M̃^g + M̃^f)/(8π(m̃_g^2 + m̃_f^2)) , ℐ_± = ±1/(24πμ^2)(M̃^g/m̃_g^2 ± M̃^f/m̃_f^2) , 𝒟 = -(m̃_g^2 - m̃_f^2)/(m̃_g^2 + m̃_f^2)
for a few frequently occurring constants. Further, we use the shorthand notation
𝒴_μ(r) = e^-μ r/r , 𝒴_0(r) = 1/r
for the Yukawa and Coulomb potentials. Using these abbreviations, the solution derived above takes the simple form
I_1^-(2) = 4μ^2ℐ_-𝒴_μ ,
I_2^-(2) = 2μ^2ℐ_-𝒴_μ ,
I_4^-(2) = 2ℐ_-𝒴_μ ,
I_1^+(2) = -2ℐ_M𝒴_0 + 4μ^2𝒟ℐ_-𝒴_μ ,
I_2^+(2) = -2ℐ_M𝒴_0 + 2μ^2𝒟ℐ_-𝒴_μ .
In order to separate these potentials into the potentials for the individual metrics, we further need to fix the pure gauge potential I_4^+(2). A convenient choice, which turns out to be compatible with the standard PPN gauge, is given by
I_4^+(2) = 2ℐ_+𝒴_μ .
Together with the relations (<ref>) we then obtain the potentials
I_1^g,f(2) = -ℐ_M𝒴_0 + 2μ^2(𝒟± 1)ℐ_-𝒴_μ ,
I_2^g,f(2) = -ℐ_M𝒴_0 + μ^2(𝒟± 1)ℐ_-𝒴_μ ,
I_4^g,f(2) = (ℐ_+ ±ℐ_-)𝒴_μ .
Now using the relations (<ref>) we obtain the quantities
ϕ^g,f(2) = -ℐ_M𝒴_0 + 2μ^2(𝒟± 1)ℐ_-𝒴_μ ,
ψ^g,f(2) = -ℐ_M𝒴_0 + μ^2(𝒟 ± 1)ℐ_-𝒴_μ - 1/3(ℐ_+ ± ℐ_-)∆𝒴_μ ,
E^g,f(2) = (ℐ_+ ± ℐ_-)𝒴_μ ,
and finally using their definition (<ref>) yields the components of the metric perturbations
h^(2)_00 = 2ℐ_M𝒴_0 - 4μ^2(𝒟 + 1)ℐ_-𝒴_μ ,
e^(2)_00 = 2ℐ_M𝒴_0 - 4μ^2(𝒟 - 1)ℐ_-𝒴_μ ,
h^(2)_ij = 2[ℐ_M𝒴_0 - μ^2(𝒟 + 1)ℐ_-𝒴_μ]δ_ij + 2(ℐ_+ + ℐ_-)∂_i∂_j𝒴_μ ,
e^(2)_ij = 2[ℐ_M𝒴_0 - μ^2(𝒟 - 1)ℐ_-𝒴_μ]δ_ij + 2(ℐ_+ - ℐ_-)∂_i∂_j𝒴_μ .
For later use we now insert the constants (<ref>) and the Yukawa and Coulomb potentials (<ref>). Note that second order derivatives of these potentials contain also delta functions, which must be taken into account for deriving further quantities from the metric perturbations. We have listed the relevant formulas in appendix <ref>. Using these formulas we obtain
h^(2)_00 = (M̃^g + M̃^f)/(4π(m̃_g^2 + m̃_f^2)r) + (m̃_f^2M̃^g - m̃_g^2M̃^f)/(3πm̃_g^2(m̃_g^2 + m̃_f^2)r) e^-μ r ,
e^(2)_00 = (M̃^g + M̃^f)/(4π(m̃_g^2 + m̃_f^2)r) - (m̃_f^2M̃^g - m̃_g^2M̃^f)/(3πm̃_f^2(m̃_g^2 + m̃_f^2)r) e^-μ r ,
h^(2)_ij = [(M̃^g + M̃^f)/(4π(m̃_g^2 + m̃_f^2)r) - (2m̃_g^2m̃_f^2M̃^f - m̃_g^4M̃^f - 3m̃_f^4M̃^g)/(18πm̃_g^2m̃_f^2(m̃_g^2 + m̃_f^2)r) e^-μ r - 2M̃^f/(9m̃_f^2μ^2) δ(x⃗)]δ_ij
+ [μ r(μ r + 3) + 3]M̃^f/(6πm̃_f^2μ^2r^5) e^-μ r (x_ix_j - 1/3 r^2δ_ij) ,
e^(2)_ij = [(M̃^g + M̃^f)/(4π(m̃_g^2 + m̃_f^2)r) - (2m̃_g^2m̃_f^2M̃^g - m̃_f^4M̃^g - 3m̃_g^4M̃^f)/(18πm̃_g^2m̃_f^2(m̃_g^2 + m̃_f^2)r) e^-μ r - 2M̃^g/(9m̃_g^2μ^2) δ(x⃗)]δ_ij
+ [μ r(μ r + 3) + 3]M̃^g/(6πm̃_g^2μ^2r^5) e^-μ r (x_ix_j - 1/3 r^2δ_ij) .
Note that the off-diagonal contribution of h^(2)_ij depends only on the mass M̃^f, while the off-diagonal contribution of e^(2)_ij contains only the mass M̃^g. This is a consequence of our gauge choice (<ref>), and the reason for making this choice.
§.§ PPN parameters
We can now read off the PPN parameters by comparing the solution (<ref>) to the PPN metric ansatz (<ref>). Since we have used rescaled mass units (<ref>), it is convenient to replace the superpotentials (<ref>), which for a point mass source take the form (<ref>), by the correspondingly rescaled superpotentials
χ̃^g = χ^g = -M^gr = -M̃^gr , χ̃^f = cχ^f = -cM^fr = -M̃^fr .
We thus use the modified PPN metric ansatz
h^(2)_00 = 2(α̃^ggM̃^g + α̃^gfM̃^f)/r ,
h^(2)_ij = 2(γ̃^ggM̃^g + γ̃^gfM̃^f)/r δ_ij + 2(θ̃^ggM̃^g + θ̃^gfM̃^f)/r^3 x_ix_j ,
e^(2)_00 = 2(α̃^fgM̃^g + α̃^ffM̃^f)/r ,
e^(2)_ij = 2(γ̃^fgM̃^g + γ̃^ffM̃^f)/r δ_ij + 2(θ̃^fgM̃^g + θ̃^ffM̃^f)/r^3 x_ix_j .
Note that the observable parameters α^gg = α̃^gg, γ^gg = γ̃^gg and θ^gg = θ̃^gg, which govern the gravitational interaction within the visible matter sector, are unaffected by this rescaling, and that only the PPN parameters involving the dark sector receive constant factors. We then read off the PPN parameters
α̃^gg = (3m̃_g^2 + 4m̃_f^2e^-μ r)/(24πm̃_g^2(m̃_f^2 + m̃_g^2)) , α̃^gf = (3 - 4e^-μ r)/(24π(m̃_f^2 + m̃_g^2)) ,
α̃^ff = (3m̃_f^2 + 4m̃_g^2e^-μ r)/(24πm̃_f^2(m̃_f^2 + m̃_g^2)) , α̃^fg = (3 - 4e^-μ r)/(24π(m̃_f^2 + m̃_g^2)) ,
γ̃^gg = (3m̃_g^2 + 2m̃_f^2e^-μ r)/(24πm̃_g^2(m̃_f^2 + m̃_g^2)) , γ̃^gf = [9m̃_f^2 + 2(m̃_g^2 - 2m̃_f^2)e^-μ r]/(72πm̃_f^2(m̃_f^2 + m̃_g^2)) - [μ r(μ r + 3) + 3]/(36πm̃_f^2μ^2r^2) e^-μ r ,
γ̃^ff = (3m̃_f^2 + 2m̃_g^2e^-μ r)/(24πm̃_f^2(m̃_f^2 + m̃_g^2)) , γ̃^fg = [9m̃_g^2 + 2(m̃_f^2 - 2m̃_g^2)e^-μ r]/(72πm̃_g^2(m̃_f^2 + m̃_g^2)) - [μ r(μ r + 3) + 3]/(36πm̃_g^2μ^2r^2) e^-μ r ,
θ̃^gg = 0 , θ̃^gf = [μ r(μ r + 3) + 3]/(12πm̃_f^2μ^2r^2) e^-μ r ,
θ̃^ff = 0 , θ̃^fg = [μ r(μ r + 3) + 3]/(12πm̃_g^2μ^2r^2) e^-μ r .
We find that the gauge condition θ̃^gg = θ̃^ff = 0, which we have introduced in section <ref>, is satisfied, due to our choice (<ref>). From these parameters we can in particular derive the observable quantities
G_eff = α̃^gg = (3m̃_g^2 + 4m̃_f^2e^-μ r)/(24πm̃_g^2(m̃_f^2 + m̃_g^2)) , γ = γ̃^gg/α̃^gg = (3m̃_g^2 + 2m̃_f^2e^-μ r)/(3m̃_g^2 + 4m̃_f^2e^-μ r) ,
which are the effective Newtonian constant and the usual PPN parameter γ. Both quantities depend on the distance r between the mass source and the location where the gravitational field is probed, in contrast to general relativity, where both quantities are constant. It is further remarkable that γ depends only on the ratio m̃_f/m̃_g of the two Planck masses and the graviton mass μ, and that this result essentially resembles the observable parameters of scalar-tensor theory with a general potential <cit.>, or the more general Horndeski class of theories <cit.>, which depend on the Brans-Dicke parameter ω and the scalar field mass.
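Since both observables are elementary functions of the mass ratio m̃_f/m̃_g and of μr, their limiting behaviour is easy to inspect; the sketch below (our illustration, assuming sympy) encodes the two expressions and anticipates the limiting cases discussed in the next subsection:

```python
import sympy as sp

mg, mf, mu, r = sp.symbols('m_g m_f mu r', positive=True)
y = sp.exp(-mu*r)

G_eff = (3*mg**2 + 4*mf**2*y) / (24*sp.pi*mg**2*(mg**2 + mf**2))
gamma = (3*mg**2 + 2*mf**2*y) / (3*mg**2 + 4*mf**2*y)

print(sp.limit(gamma, mf, 0))               # 1: general relativity limit
print(sp.limit(gamma, mu, sp.oo))           # 1: highly massive graviton
print(sp.cancel(gamma.subs(mf, mg)))        # (3 + 2*exp(-mu*r))/(3 + 4*exp(-mu*r))
print(sp.limit(8*sp.pi*G_eff, mu, sp.oo))   # 1/(m_g**2 + m_f**2): effective Planck mass
```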
§.§ Limiting cases
We finally discuss a few interesting limiting cases for the mass parameters m̃_g,f and μ and their consequences for the PPN parameters. These are in particular:
* It is well known that in the limit m̃_f → 0, while keeping the parameters m and β_k in the interaction potential fixed, one obtains the general relativity limit for the visible sector <cit.>. Note that in this limit we also have μ→∞. The PPN parameters (<ref>) then take the form
α̃^gg = α̃^gf = α̃^fg = α̃^ff = 1/8πm̃_g^2 ,
γ̃^gg = γ̃^gf = γ̃^fg = γ̃^ff = 1/8πm̃_g^2 ,
θ̃^gg = θ̃^gf = θ̃^fg = θ̃^ff = 0 ,
while the observable parameters (<ref>) are given by
G_eff = 1/8πm̃_g^2 , γ = 1 ,
as usual in general relativity.
* For equal Planck mass parameters m̃_g = m̃_f one obtains the PPN parameters
α̃^gg = α̃^ff = 3 + 4e^-μ r/48πm̃_g^2 , α̃^gf = α̃^fg = 3 - 4e^-μ r/48πm̃_g^2 ,
γ̃^gg = γ̃^ff = 3 + 2e^-μ r/48πm̃_g^2 , γ̃^gf = γ̃^fg = 3 - 2e^-μ r/48πm̃_g^2 - (μ r + 1)e^-μ r/12πm̃_g^2μ^2r^2 ,
θ̃^gg = θ̃^ff = 0 , θ̃^gf = θ̃^fg = [μ r(μ r + 3) + 3]e^-μ r/12πm̃_g^2μ^2r^2
and the observable parameters
G_eff = 3 + 4e^-μ r/48πm̃_g^2 , γ = 3 + 2e^-μ r/3 + 4e^-μ r .
We remark that this result is similar to the PPN parameter γ in higher-order gravity, except for an additional scalar contribution and a different sign due to the massive graviton being a ghost in the latter class of theories <cit.>. Note that the effective Planck mass for the visible sector,
m_Pl^2 = lim_r →∞1/8π G_eff ,
is given by m_Pl^2 = 2m̃_g^2.
* In the limit μ→∞ of a highly massive graviton we find the PPN parameters
α̃^gg = α̃^gf = α̃^fg = α̃^ff = 1/8π(m̃_g^2 + m̃_f^2) ,
γ̃^gg = γ̃^gf = γ̃^fg = γ̃^ff = 1/8π(m̃_g^2 + m̃_f^2) ,
θ̃^gg = θ̃^gf = θ̃^fg = θ̃^ff = 0 ,
from which follow the observable parameters
G_eff = 1/8π(m̃_g^2 + m̃_f^2) , γ = 1 .
In this case the effective Planck mass (<ref>) turns out to be m_Pl^2 = m̃_g^2 + m̃_f^2.
This concludes our discussion of the post-Newtonian limit of ghost-free bimetric gravity for a static point mass. The PPN parameters we have obtained now allow us to discuss observable effects, and in particular the deflection of light by both dark and visible matter. This will be done in the following section.
§ CONFRONTATION WITH OBSERVATIONS
In the previous section we obtained both a general result and a number of limiting cases for the effective gravitational constant and the PPN parameter γ, as well as additional PPN parameters which govern effects involving a second, dark type of matter. We can now compare our results with observations, in particular of the deflection of light. We will restrict ourselves to visible matter in section <ref> and derive bounds on the parameters of ghost-free massive bimetric gravity from solar system experiments. In section <ref> we will discuss the deflection of visible light by dark matter and its consistency with observations of lensing effects by galaxies. We will further speculate on a possible explanation for the lensing effects observed in the vicinity of galactic mergers, in particular Abell 520 and Abell 3827.
§.§ Solar system consistency
We have remarked in section <ref> that our result (<ref>) for the effective Newtonian constant G_eff and the PPN parameter γ has essentially the same form as the corresponding result for scalar-tensor gravity with a general potential <cit.>, or the more general Horndeski class of theories <cit.>. Hence the experimental constraints on the parameters of these theories derived from measurements of γ can directly be translated to constraints on the parameters of ghost-free massive bimetric gravity, and in particular to the ratio m̃_f/m̃_g of the Planck masses and the graviton mass μ. An important obstacle that must be taken into account is the fact that γ is not constant, but depends exponentially on the distance r between the gravitating mass source and the observer. This restricts the possible experimental tests of γ to those for which such an interaction distance can be defined. The most precise observation of γ which satisfies this condition is the measurement of the Shapiro time delay of radio signals between Earth and the Cassini spacecraft on its way to Saturn, from which a value γ - 1 = (2.1 ± 2.3) · 10^-5 was obtained <cit.>. The signals passed the sun at a distance of 1.6 solar radii, so that we define the interaction distance r_0 ≈ 7.44 · 10^-3 AU. Following the same procedure as detailed in <cit.>, we find that the area of the parameter space shown in figure <ref> is excluded at 2σ confidence level. Note, however, that the assumption of a constant interaction distance for this experiment is only an approximation, and that more accurate results are obtained from a thorough treatment of light propagation in the solar gravitational field <cit.>.
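The shape of the excluded region can be made explicit by inverting the bound analytically: writing x = (m̃_f/m̃_g)^2 and y = e^{-μ r_0}, the result (<ref>) gives 1 - γ = 2xy/(3 + 4xy), so a bound 1 - γ ≤ ε excludes all x > 3ε/[y(2 - 4ε)]. The following numerical sketch (our illustration; the value of ε is an assumption standing in for the published 2σ limit, and it should not be read as a replacement for the full analysis) traces this boundary:

```python
import numpy as np

r0  = 7.44e-3          # interaction distance in AU (1.6 solar radii)
eps = 2.5e-5           # assumed 2-sigma bound on 1 - gamma (the model gives gamma <= 1)

mu = np.logspace(-1, 2, 150) / r0               # graviton mass in units of 1/AU
x_max = 3*eps / (np.exp(-mu*r0) * (2 - 4*eps))  # largest allowed (mf/mg)^2
ratio_max = np.sqrt(x_max)                      # boundary of the excluded region

for m, rho in zip(mu[::50], ratio_max[::50]):
    print(f"mu = {m:9.3e} / AU  ->  mf/mg <= {rho:.3e}")
```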
§.§ Light deflection by dark matter
The full set (<ref>) of PPN parameters, which we derived in section <ref>, allows us to discuss also the gravitational interaction of dark matter. We first consider the parameter α̃^gf, which can be interpreted as an effective Newtonian constant for the gravitational influence of dark matter Φ^f on visible matter Φ^g. For short distances, μ r < ln(4/3), we see that α̃^gf becomes negative, so that the gravitational interaction between dark and visible matter becomes repulsive; however, taking into account the bounds shown in figure <ref>, we see that this is possible only on scales significantly smaller than the solar system, and hence does not play any role for the observed dark matter concentrations. On the scales of galaxies or even galactic clusters we can safely assume μ r ≫ 1, and thus use the PPN parameters obtained in the limit μ→∞ in section <ref>. In this limit, the gravitational effects on both test masses and light become indistinguishable between visible and dark matter sources. In particular, it follows that the deflection of visible light by dark matter is likewise governed by a PPN parameter
γ̅ = γ̃^gf/α̃^gf→ 1
in the limit of large scales. This agrees with observations of the deflection of light by galaxies, which contain significant amounts of dark matter in addition to the visible mass <cit.>.
Our result plays an important role in particular for the observed light deflection by galactic mergers, such as most prominently the so-called “Bullet Cluster” 1E0657-558 <cit.> or more recently MACS J0025.4-1222 <cit.>. Measurements of the mass distribution in these and other mergers using weak lensing together with x-ray imaging show that the gas component of the merger, which is heated by the collision and which constitutes the major amount of visible matter, is not at the same location as the dominant gravitating matter contribution, and that the motion of the latter is largely unaffected by the collision. This leads to the conclusion that their dark matter content is non-interacting, so that the dark matter components of the colliding objects pass through each other <cit.>. However, observations of the so-called “Train Wreck Cluster” Abell 520 <cit.> or Abell 3827 <cit.> show a more differentiated picture. While also Abell 520 shows evidence for dark matter components which have passed through each other unaffectedly, one has further identified another dark mass concentration in the central region, which is difficult to explain if dark matter is non-interacting. Similar stress on the non-interacting dark matter model is put by an observed separation between stellar and dark matter in Abell 3827. A possible explanation for these observations is to assume that dark matter also possesses a component which interacts non-gravitationally <cit.>.
The bimetric class of theories we studied in this article allows for an interesting tentative model for the aforementioned observations, which hint towards the existence of both interacting and non-interacting dark matter components. Invoking the interpretation of the matter sector Φ^f as dark matter, as suggested in <cit.>, and further assuming that Φ^f contains an interacting component, would suggest that the central dark matter concentration in Abell 520 and the separated dark matter concentration in Abell 3827 result from a collision of these interacting components, while any dark matter constituted by massive gravitons, as suggested in <cit.>, would pass the merger unaffectedly, and could thus account for the dark matter concentrations away from the center of Abell 520 or the unaffected dark matter halos in Abell 3827. Future extensions of our work presented here will be necessary in order to quantitatively assert the viability of such models.
§ CONCLUSION
We have considered the post-Newtonian limit of ghost-free massive bimetric gravity with two mutually non-interacting matter sectors. From the assumption that the vacuum field equations are solved by two flat metrics proportional to the Minkowski metric, we have derived restrictions on the parameters in the action. For this restricted class of theories we have derived the field equations up to the second velocity order by making use of a suitable extension of the PPN formalism to multiple metrics. We have solved these equations for a point-like mass source using a gauge-invariant differential decomposition of the metric perturbations. From this solution we have read off the effective gravitational constant G_eff and the PPN parameter γ for the visible matter sector. By comparing our result to the observed value determined by the Cassini tracking experiment we have derived combined bounds on two parameters of the theory, namely on the mass of the massive graviton and on the ratio of the Planck masses occurring in the bimetric action.
We have further discussed the interpretation of the additional matter sector as a possible constituent of dark matter. From our experimental bounds we then concluded that on scales significantly larger than the solar system, and hence in particular on the observationally relevant scales of galaxies and clusters, the gravitational effects caused by visible and dark matter become indistinguishable from each other. It thus follows that dark matter should deflect light in the same way as visible matter does, in agreement with measurements of the PPN parameter γ through the lensing effect of galaxies, which contain a significant dark matter component. Another possible experimental test of this result could be performed by searching for possible (non-)correlations between the ratio of dark to visible matter of a galaxy and its light deflection. Such an analysis would be most effective with data of higher precision than available to date <cit.>.
On a more speculative note, we have considered that besides the second matter sector also massive gravitons could contribute to the observed dark matter content of the universe. The assumption that the former contains non-gravitational self-interactions, while the latter interacts only gravitationally, then provides a tentative explanation for the observed separation of apparently different dark matter components in galactic mergers such as Abell 520 and Abell 3827. The question arises whether such different dark matter constituents could be distinguished also in other processes besides galactic mergers, for example, by their light deflection properties. An extension of our work presented here to the light deflection caused by massive graviton concentrations might answer this question.
There are also other possibilities to further extend the theoretical analysis we presented in this article. While we have studied only linear perturbations of flat vacuum solutions, considering also the quadratic perturbation order would allow us to calculate the PPN parameter β, and thus open the possibility for additional tests using solar system observations. This would ultimately lead to a full generalization of the formalism developed in <cit.> to massive gravity theories. Further, one may also include cosmological corrections to the PPN formalism along the lines of <cit.>, and thus relax the condition of a flat background. Finally, one may consider more general theories with N > 2 metric tensors and a corresponding number of matter sectors <cit.>, or involving an effective metric <cit.>, both of which allow for ghost-free matter coupling prescriptions <cit.>. We intend to study these generalizations in future research.
The author is happy to thank the members of the Laboratory of Theoretical Physics at the University of Tartu for fruitful discussions. He gratefully acknowledges the full financial support of the Estonian Research Council through the Startup Research Grant PUT790 and the European Regional Development Fund through the Center of Excellence TK133 “The Dark Side of the Universe”.
§ LINEARIZATION OF THE POTENTIAL
In this appendix we show how to obtain the linearized potentials, which enter the gravitational field equations at the zeroth and second velocity order as shown in sections <ref> and <ref>. The starting point for our derivation is a linear perturbation ansatz for the metrics, which we write in the form
g_μν = η_μν + h_μν ,
f_μν = c^2(η_μν + e_μν) .
Up to the linear perturbation order, we can then write their inverses as
g^μν = η^μν - η^μρη^νσh_ρσ + 𝒪(h^2) ,
f^μν = 1/c^2(η^μν - η^μρη^νσe_ρσ) + 𝒪(e^2) .
For their product we find
g^μρf_ρν = c^2(δ^μ_ν - D^μ_ν) + 𝒪({h,e}^2) ,
where we introduced the perturbation tensor
D^μ_ν = η^μρ(h_ρν - e_ρν) .
Since the matrix g^μρf_ρν is given as a perturbation of the Kronecker symbol δ^μ_ν, we can find its square root A^μ_ν as defined in (<ref>) using a series expansion analogously to the well-known Taylor series
√(1 + x) = 1 + x/2 + 𝒪(x^2) .
This series expansion yields
A^μ_ν = c(δ^μ_ν - 1/2D^μ_ν) + 𝒪({h,e}^2) .
For later use we also need to expand powers of A into linear perturbations. These are given by
(A^k)^μ_ν = c^k(δ^μ_ν - k/2D^μ_ν) + 𝒪({h,e}^2) .
The matrix invariants e_k(A) defined by (<ref>) then take the form
e_0(A) = 1 ,
e_1(A) = A^μ_μ = c(4 - 1/2D^μ_μ) + 𝒪({h,e}^2) ,
e_2(A) = 1/2(A^μ_μA^ν_ν - A^μ_νA^ν_μ) = c^2(6 - 3/2D^μ_μ) + 𝒪({h,e}^2) ,
e_3(A) = 1/6(A^μ_μA^ν_νA^ρ_ρ - 3A^μ_νA^ν_μA^ρ_ρ + 2A^μ_νA^ν_ρA^ρ_μ) = c^3(4 - 3/2D^μ_μ) + 𝒪({h,e}^2) ,
e_4(A) = 1/24(A^μ_μA^ν_νA^ρ_ρA^σ_σ - 6A^μ_μA^ν_νA^ρ_σA^σ_ρ + 3A^μ_νA^ν_μA^ρ_σA^σ_ρ + 8A^μ_νA^ν_ρA^ρ_μA^σ_σ
=- 6A^μ_νA^ν_ρA^ρ_σA^σ_μ) = c^4(1 - 1/2D^μ_μ) + 𝒪({h,e}^2) .
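These expansions are straightforward to verify symbolically. The sketch below (our cross-check, assuming sympy) confirms the square-root property of the linearized A and the first-order expansion of e_2 for a generic perturbation D, with a bookkeeping parameter ε marking the perturbative order:

```python
import sympy as sp

eps = sp.symbols('epsilon')
c = sp.symbols('c', positive=True)
D = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'D{i}{j}'))

A = c*(sp.eye(4) - eps*D/2)                       # linearized square root

# A^2 reproduces c^2 (1 - eps D) up to second order in the perturbation
sq = sp.expand(A*A - c**2*(sp.eye(4) - eps*D))
print(all(entry.coeff(eps, 1) == 0 for entry in sq))   # True

# e_2(A) = c^2 (6 - (3/2) eps tr D) + O(eps^2)
e2 = sp.expand((A.trace()**2 - (A*A).trace())/2)
print(e2.coeff(eps, 0))                                # 6*c**2
print(sp.simplify(e2.coeff(eps, 1) + sp.Rational(3, 2)*c**2*D.trace()))   # 0
```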
For the matrices Y_n defined via (<ref>) we then find the expressions
Y_0^μ_ν(A) = δ^μ_ν + 𝒪({h,e}^2) ,
Y_1^μ_ν(A) = c(-3δ^μ_ν - 1/2D^μ_ν + 1/2D^ρ_ρδ^μ_ν) + 𝒪({h,e}^2) ,
Y_2^μ_ν(A) = c^2(3δ^μ_ν + D^μ_ν - D^ρ_ρδ^μ_ν) + 𝒪({h,e}^2) ,
Y_3^μ_ν(A) = c^3(-δ^μ_ν - 1/2D^μ_ν + 1/2D^ρ_ρδ^μ_ν) + 𝒪({h,e}^2) .
In order to obtain the corresponding expressions for A^-1 = √(f^-1g) instead of A, one simply replaces D by -D and c by c^-1. We can now calculate the potentials (<ref>). Using renormalized parameters β̃_k = c^kβ_k we obtain
V^g_μν = [(β̃_0 + 3β̃_1 + 3β̃_2 + β̃_3)η_μν - (1/2β̃_1 + β̃_2 + 1/2β̃_3)η_μνη^ρσ(h_ρσ - e_ρσ)
=+ (β̃_0 + 7/2β̃_1 + 4β̃_2 + 3/2β̃_3)h_μν - (1/2β̃_1 + β̃_2 + 1/2β̃_3)e_μν] + 𝒪({h,e}^2) ,
V^f_μν = 1/c^2[(β̃_1 + 3β̃_2 + 3β̃_3 + β̃_4)η_μν + (1/2β̃_1 + β̃_2 + 1/2β̃_3)η_μνη^ρσ(h_ρσ - e_ρσ)
=+ (3/2β̃_1 + 4β̃_2 + 7/2β̃_3 + β̃_4)e_μν - (1/2β̃_1 + β̃_2 + 1/2β̃_3)h_μν] + 𝒪({h,e}^2) .
Finally, we calculate the trace-reversed potentials (<ref>), which are given by
V̅^g_μν = [-(β̃_0 + 3β̃_1 + 3β̃_2 + β̃_3)η_μν + (1/4β̃_1 + 1/2β̃_2 + 1/4β̃_3)η_μνη^ρσ(h_ρσ - e_ρσ)
=- (β̃_0 + 5/2β̃_1 + 2β̃_2 + 1/2β̃_3)h_μν - (1/2β̃_1 + β̃_2 + 1/2β̃_3)e_μν] + 𝒪({h,e}^2) ,
V̅^f_μν = 1/c^2[-(β̃_1 + 3β̃_2 + 3β̃_3 + β̃_4)η_μν - (1/4β̃_1 + 1/2β̃_2 + 1/4β̃_3)η_μνη^ρσ(h_ρσ - e_ρσ)
=- (1/2β̃_1 + 2β̃_2 + 5/2β̃_3 + β̃_4)e_μν - (1/2β̃_1 + β̃_2 + 1/2β̃_3)h_μν] + 𝒪({h,e}^2) .
These expressions can now be used in the post-Newtonian field equations at the zeroth velocity order in section <ref> and at the second velocity order in section <ref>.
§ DERIVATIVES OF THE YUKAWA POTENTIAL
During our calculation we have frequently encountered (mostly second order) derivatives of the Yukawa potential, for which we introduced the shorthand notation
𝒴_k(r) = e^-kr/r .
Taking into account the singularity at the origin, its second derivatives are given by
∂_i∂_j𝒴_k = {[kr(kr + 3) + 3]x_ix_j/r^5 - (kr + 1)δ_ij/r^3}e^-kr - 4π/3δ_ijδ(x⃗) ,
which is a straightforward generalization of the well-known formula for the Coulomb potential <cit.>. Taking the trace yields the standard formula
∆𝒴_k = k^2e^-kr/r - 4πδ(x⃗) .
These formulas cover all expressions which appear in the final result for the Ricci tensor and the interaction potential. Note that during intermediate steps also fourth order derivatives of the Yukawa potential occur in derivatives of the metric perturbations. For completeness we also list the corresponding expressions. From the formula given above it immediately follows that
∂_i∂_j∆𝒴_k = k^2{[kr(kr + 3) + 3]x_ix_j/r^5 - (kr + 1)δ_ij/r^3}e^-kr - 4π k^2/3 δ_ijδ(x⃗) - 4π∂_i∂_jδ(x⃗)
and thus
∆∆𝒴_k = k^4e^-kr/r - 4π k^2δ(x⃗) - 4π∆δ(x⃗) .
These are all terms which occur during our calculation.
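For r ≠ 0 these identities can be verified symbolically; the following sketch (ours, assuming sympy) checks an off-diagonal second derivative and the trace formula away from the origin, where the delta function terms do not contribute:

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
Y = sp.exp(-k*r)/r

lhs = sp.diff(Y, x, y)                                   # off-diagonal, i != j, r != 0
rhs = (k*r*(k*r + 3) + 3) * x*y/r**5 * sp.exp(-k*r)
print(sp.simplify(lhs - rhs))                            # 0

lap = sum(sp.diff(Y, v, 2) for v in (x, y, z))           # Laplacian away from the origin
print(sp.simplify(lap - k**2*Y))                         # 0
```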
§ CHECKING THE FIELD EQUATIONS IN COMPONENTS
Since we have used a rather technical transformation of the field equations to gauge invariant potentials in section <ref> and the corresponding inverse transformation of their solution to metric components in section <ref>, it is appropriate to check the obtained result also using the field equations in their original component form as shown in section <ref>. While this is rather cumbersome using the explicit expressions (<ref>) and requires careful tracking of singular contributions from higher derivatives of Coulomb and Yukawa potentials, it becomes considerably simpler by using the abbreviations (<ref>), starting from the expressions (<ref>) and finally evaluating higher derivatives using the formulas shown in appendix <ref>.
From the expressions (<ref>) one easily reads off the traces
h^(2)_ii = 2[3ℐ_M𝒴_0 - 3μ^2(𝒟 + 1)ℐ_-𝒴_μ + (ℐ_+ + ℐ_-)∆𝒴_μ] ,
e^(2)_ii = 2[3ℐ_M𝒴_0 - 3μ^2(𝒟 - 1)ℐ_-𝒴_μ + (ℐ_+ - ℐ_-)∆𝒴_μ]
of the spatial components of the metric perturbations. Using the formulas (<ref>) for the potential at the second velocity order we then obtain
V̅^g(2)_00 = -c^2V̅^f(2)_00 = -β̃ℐ_-(3μ^2𝒴_μ + ∆𝒴_μ) ,
V̅^g(2)_ij = -c^2V̅^f(2)_ij = -β̃ℐ_-(3μ^2𝒴_μδ_ij - ∆𝒴_μδ_ij - 2∂_i∂_j𝒴_μ) .
Further, we need to evaluate second order derivatives of the metric, which read
h^(2)_00,ij = 2ℐ_M∂_i∂_j𝒴_0 - 4μ^2(𝒟 + 1)ℐ_-∂_i∂_j𝒴_μ ,
e^(2)_00,ij = 2ℐ_M∂_i∂_j𝒴_0 - 4μ^2(𝒟 - 1)ℐ_-∂_i∂_j𝒴_μ ,
∆h^(2)_00 = 2ℐ_M∆𝒴_0 - 4μ^2(𝒟 + 1)ℐ_-∆𝒴_μ ,
∆e^(2)_00 = 2ℐ_M∆𝒴_0 - 4μ^2(𝒟 - 1)ℐ_-∆𝒴_μ ,
h^(2)_kk,ij = 2[3ℐ_M∂_i∂_j𝒴_0 - 3μ^2(𝒟 + 1)ℐ_-∂_i∂_j𝒴_μ + (ℐ_+ + ℐ_-)∂_i∂_j∆𝒴_μ] ,
e^(2)_kk,ij = 2[3ℐ_M∂_i∂_j𝒴_0 - 3μ^2(𝒟 - 1)ℐ_-∂_i∂_j𝒴_μ + (ℐ_+ - ℐ_-)∂_i∂_j∆𝒴_μ] ,
∆h^(2)_ij = 2[ℐ_M∆𝒴_0 - μ^2(𝒟 + 1)ℐ_-∆𝒴_μ]δ_ij + 2(ℐ_+ + ℐ_-)∂_i∂_j∆𝒴_μ ,
∆e^(2)_ij = 2[ℐ_M∆𝒴_0 - μ^2(𝒟 - 1)ℐ_-∆𝒴_μ]δ_ij + 2(ℐ_+ - ℐ_-)∂_i∂_j∆𝒴_μ ,
∆h^(2)_ii = 2[3ℐ_M∆𝒴_0 - 3μ^2(𝒟 + 1)ℐ_-∆𝒴_μ + (ℐ_+ + ℐ_-)∆∆𝒴_μ] ,
∆e^(2)_ii = 2[3ℐ_M∆𝒴_0 - 3μ^2(𝒟 - 1)ℐ_-∆𝒴_μ + (ℐ_+ - ℐ_-)∆∆𝒴_μ] ,
h^(2)_ik,jk = 2[ℐ_M∂_i∂_j𝒴_0 - μ^2(𝒟 + 1)ℐ_-∂_i∂_j𝒴_μ + (ℐ_+ + ℐ_-)∂_i∂_j∆𝒴_μ] ,
e^(2)_ik,jk = 2[ℐ_M∂_i∂_j𝒴_0 - μ^2(𝒟 - 1)ℐ_-∂_i∂_j𝒴_μ + (ℐ_+ - ℐ_-)∂_i∂_j∆𝒴_μ] .
Inserting these expressions into the formulas (<ref>) for the Ricci tensor then yields the components
R^g,f(2)_00 = -ℐ_M∆𝒴_0 + 2μ^2(𝒟 ± 1)ℐ_-∆𝒴_μ ,
R^g,f(2)_ij = -ℐ_M∆𝒴_0δ_ij + μ^2(𝒟 ± 1)ℐ_-(∆𝒴_μδ_ij - ∂_i∂_j𝒴_μ) .
Inserting the expressions (<ref>) and (<ref>) into the second order field equations (<ref>), applying the definitions (<ref>) and using the relations for the Coulomb and Yukawa potentials listed in appendix <ref> finally yields
m̃_g^2R^g(2)_00 + m^4V̅^g(2)_00 = -[m̃_g^2(M̃^g + M̃^f)∆𝒴_0 + (m̃_f^2M̃^g - m̃_g^2M̃^f)(∆𝒴_μ - μ^2𝒴_μ)]/(8π(m̃_g^2 + m̃_f^2))
= M̃^g/2 δ(x⃗) = T̅^g(2)_00 ,
m̃_f^2/c^2 R^f(2)_00 + m^4V̅^f(2)_00 = -[m̃_f^2(M̃^g + M̃^f)∆𝒴_0 - (m̃_f^2M̃^g - m̃_g^2M̃^f)(∆𝒴_μ - μ^2𝒴_μ)]/(8π c^2(m̃_g^2 + m̃_f^2))
= M̃^f/(2c^2) δ(x⃗) = T̅^f(2)_00 ,
m̃_g^2R^g(2)_ij + m^4V̅^g(2)_ij = -[m̃_g^2(M̃^g + M̃^f)∆𝒴_0 + (m̃_f^2M̃^g - m̃_g^2M̃^f)(∆𝒴_μ - μ^2𝒴_μ)]δ_ij/(8π(m̃_g^2 + m̃_f^2))
= M̃^g/2 δ(x⃗)δ_ij = T̅^g(2)_ij ,
m̃_f^2/c^2 R^f(2)_ij + m^4V̅^f(2)_ij = -[m̃_f^2(M̃^g + M̃^f)∆𝒴_0 - (m̃_f^2M̃^g - m̃_g^2M̃^f)(∆𝒴_μ - μ^2𝒴_μ)]δ_ij/(8π c^2(m̃_g^2 + m̃_f^2))
= M̃^f/(2c^2) δ(x⃗)δ_ij = T̅^f(2)_ij .
This shows that the field equations are indeed satisfied.
| http://arxiv.org/abs/1701.07555v1 | 20170126024906 | Robust analysis of second-leg home advantage in UEFA football through better nonparametric confidence intervals for binary regression functions | ["Gery Geenens", "Thomas Cuddihy"] | stat.ME | ["stat.ME", "stat.AP"] |
Robust analysis of second-leg home advantage in UEFA football through better nonparametric confidence intervals for binary regression functions
Gery Geenens (corresponding author: ggeenens@unsw.edu.au, School of Mathematics and Statistics, UNSW Sydney, Australia, tel +61 2 938 57032, fax +61 2 9385 7123)
School of Mathematics and Statistics,
UNSW Sydney, Australia Thomas Cuddihy
School of Mathematics and Statistics,
UNSW Sydney, Australia
December 30, 2023
=================================================================================================================================================================================================================================================================================================================
In international football (soccer), two-legged knockout ties, with each team playing at home in one leg and the final outcome decided on aggregate, are common. Many players, managers and followers seem to believe in the `second-leg home advantage', i.e. that it is beneficial to play at home on the second leg. A more complex effect than the usual and well-established home advantage, it is harder to identify, and previous statistical studies did not prove conclusive about its actuality. Yet, given the amount of money handled in international football competitions nowadays, the question of existence or otherwise of this effect is of real import. As opposed to previous research, this paper addresses it from a purely nonparametric perspective and brings a very objective answer, not based on any particular model specification which could orientate the analysis in one or the other direction. Along the way, the paper reviews the well-known shortcomings of the Wald confidence interval for a proportion, suggests new nonparametric confidence intervals for conditional probability functions, revisits the problem of the bias when building confidence intervals in nonparametric regression, and provides a novel bootstrap-based solution to it. Finally, the new intervals are used in a careful analysis of game outcome data for the UEFA Champions and Europa leagues from 2009/10
to 2014/15. A slight `second-leg home advantage' is evidenced.
Keywords: football; home advantage; nonparametric regression; confidence intervals; undersmoothing.
§ INTRODUCTION
The `home field advantage' in sport is well established, as its quantitative study can be traced back to the late 70's <cit.>. A meta-analysis of 30 research articles by <cit.>, including over 30,000 games, found significant home field advantages in each of 10 different sports, be it individual such as tennis or team such as football (soccer). Yet, there seems to exist a downward trend in this advantage over time. <cit.> analysed more than 400,000 games, dating back to 1876, from ice hockey, baseball, American football, basketball and football, and found a decline in home field advantage for most sports. Nevertheless, the advantage was still positive for all sports in the final year analysed (2002).
The exact causes of that advantage are multiple and complex. In an effort to better understand them, an important literature review by <cit.> developed a conceptual framework, which was later updated by <cit.>. It incorporates 5 major components: game location; game location factors; critical psychological and physiological states; critical behavioural states; and performance outcomes. <cit.> posited that each component affects all subsequent ones. For example, `game location' influences `game location factors' such as crowd size and composition, which in turn affects the `psychological state' and then `behavioural state' of the players, etc. They concluded that further research is necessary in all of the 5 components to better understand the individual impacts, for example there is insufficient research into the effect of crowd density and absolute crowd size and their interaction. More recently, <cit.> reached a similar conclusion, and reiterated their thoughts from an analysis two decades earlier <cit.>: “Clearly, there is still much to be learnt about the complex mechanisms that cause home advantage, both in soccer and other sports. The topic remains a fruitful area of research for sports historians, sociologists, psychologists and statisticians alike.”
One aspect of the home field advantage whose existence is still being debated is that of the `second-leg home advantage' (hereafter: SLHA), when a contest between two teams comprises two matches (`legs'), with each team as the home team in one leg, and the final outcome decided on aggregate. At first sight, one might expect that each team would get their respective home advantage and that the effects would cancel out. Yet, a common belief is that the team playing at home for the second leg has a slight advantage over the other team. One theory to support this claim is that the team playing away on the first game can essentially play it safe there, while taking advantage of playing at home on the decider game when the difference is to be made. The stake of the second leg being higher, the crowd support and the induced pressure might indeed be more intense then than on the first leg, where getting the upper hand is unlikely to be final anyway. This may create an asymmetry in home advantage between the two legs <cit.>.
Those two-legged ties are very common in knockout stages of football international club competitions, such as national team play-offs in some qualification tournaments, including the FIFA World Cup, and most prominently the European cups, namely the UEFA Champions League and Europa League. Those competitions are big business, especially the UEFA Champions League. For instance, from the season 2015-2016 onwards, to qualify to the quarter-finals brings in a bonus of 6 millions of euros; to advance further to the semi-finals brings in an extra bonus of 7 millions of euros; and to qualify to the final brings in another 10.5 or 15 millions of euros (depending on the outcome of the final)[http://www.uefa.com/uefachampionsleague/news/newsid=2398575.html – these amounts are the `fixed amounts', awarded according to the clubs' performance; they excluded the so-called `market pool', which essentially comes from television income.]. Hence, beyond the sporting aspect, the difference between qualifying or being eliminated at some level of the knockout stage represents a huge amount of money. As a result, an unwarranted advantage for the team playing at home second implies an equally unwarranted economic shortfall for the team playing at home first, whose only fault is to have been unlucky at the draw. The question of existence of the SLHA is, therefore, of great significance.
Consequently, scholars have attempted to evidence or otherwise the SLHA, with research focussing on the case of UEFA administered competitions. Yet, none really proved conclusive. A naive comparison of the fraction of teams qualifying when playing at home on the second leg against when playing at home on the first leg, is not telling. This is because of the non-random manner in which UEFA sometimes seeds teams in knockout stages of their competitions. For instance, in the first knockout round following the initial group stage, such as the Round-of-16 in the Champions league or Round-of-32 in the Europa league, group winners (supposedly the best teams) are automatically assigned to play the second game at home. Obviously, this may lead to the spurious result of existence of SLHA, as stronger teams are preferentially allocated as playing at home second in some instances. This induces an obvious confounding factor.
<cit.> and <cit.> adjusted for it by conditioning their analyses on the difference between the UEFA coefficients of the two teams, assumed to be a reasonable proxy for their relative strength (more on this in Section <ref>). <cit.> found no evidence of SLHA and <cit.> found a significant effect in seasons before 1994/95 but not afterwards. By contrast, <cit.> and <cit.> did find a significant SLHA effect, however they did not control for the confounding factor, which greatly lowers the value of their study. It seems indeed clear that the relative strength of the matched teams should be taken into account for meaningful analyses. Hence this paper will focus on the `conditional' case only.
Specifically, denote p(x) the probability of the second-leg home team qualifying given that the difference in `strength' between the two teams (however it is measured) at the time of the game is x. Then, p(0) is the probability of the second-leg home team going through, given that the two teams are of the same strength. The probability p(0) is, therefore, the parameter of main interest in this study, with a value for p(0) above 1/2 indicating a second-leg home advantage. Existence of the SLHA effect can thus be formally tested by checking whether an empirical estimate p̂(0) of p(0) significantly lies above 1/2.
<cit.> and <cit.> estimated p(x) using a logistic regression model, but failed to provide an examination of goodness-of-fit of this parametric model and just stated regression estimates as-is. To offer an alternate viewpoint on the question, this paper explores the topic using nonparametric techniques, really `letting the data speak for themselves'. To this effect, a Nadaraya-Watson (NW) kernel regression model <cit.> has been developed to regress the final outcome of the two-legged knockout on a measure of the inequality in team strengths, based again on the UEFA club coefficients. This model allows robust and objective estimation of p(x) from historical data. A measure of statistical significance is provided by the asymptotic normality of the Nadaraya-Watson estimator, which enables the calculation of pointwise confidence intervals for p(x) at any x <cit.>.
However, the working model being essentially a conditional Bernoulli model here, the estimated probability p̂(x) is some kind of `conditional sample proportion' (see Section <ref> for details). The above standard confidence intervals for p(x) thus amount to some sort of Wald intervals, adapted to the conditional case. In the classical (i.e., non-conditional) framework, it is well known <cit.> that the coverage of the Wald interval for a proportion p can be very poor, and this for any values of p ∈ (0,1) and the sample size n. <cit.> explained how the coverage probability of the Wald interval is affected by both systematic negative bias and oscillations,
and recommended three alternatives to the Wald interval: the Wilson, the Agresti-Coull and the Jeffreys intervals.
The methodological contribution of this paper is twofold. First, such `better' confidence intervals will be obtained for a conditional probability estimated by the Nadaraya-Watson estimator. `Conditional' versions of the Wilson and Agresti-Coull confidence intervals will thus be constructed. When doing so, the inherent bias of the NW estimator will be a major factor to take into account, as often in procedures based on nonparametric function estimation. Consequently, the second contribution will be to devise a careful strategy for efficiently dealing with that bias when computing the above confidence intervals in practice. Finally, those will be used for the interval-estimation of the probability p(0) of interest. This will allow the research question about the existence of some SLHA to be addressed in a very robust way.
The paper is organised as follows. Section <ref> provides an overview of the work by <cit.> and <cit.> about confidence intervals for a proportion. A particular emphasis will be on the Wilson and Agresti-Coull confidence intervals, in order to prepare for their adaptation to the conditional case in Section <ref>.
There, some background on the standard, Wald-type interval for a conditional probability estimated by the Nadaraya-Watson estimator is provided. Then details of the derivations of the Wilson and Agresti-Coull intervals for the conditional case are given. The problem of the bias is addressed, and a novel way of choosing the right smoothing parameter when constructing the confidence intervals is suggested. The performance of the new intervals based on this strategy is then analysed through a simulation study. Section <ref> comes back to the research question and presents the results. A discussion about the implications and limitations of the current study follows, with some suggestions for future research in the field. Finally, Section <ref> concludes.
§ CONFIDENCE INTERVALS FOR A PROPORTION
§.§ Background
Consider a random sample {Y_1,Y_2,…,Y_n} i.i.d. ∼ Bernoulli(p), for some value p ∈ (0,1) (degenerate cases p=0 or p=1 have very limited interest), and say that Y_i = 1 if individual i has a certain characteristic, and Y_i=0 otherwise. Denote S ≐∑_i=1^n Y_i, the number of sampled individuals with the characteristic. The sample proportion p̂ = ∑_i=1^n Y_i/n = S/n is known to be the maximum likelihood estimator of p, satisfying
√(n)(p̂ - p) →_d 𝒩(0,p(1-p))
as n →∞ from the Central Limit Theorem. Substituting in p̂ for the estimation of the standard error, a confidence interval of level 1-α for p easily follows:
CI_Wa = [p̂± z_1-α/2√(p̂(1-p̂)/n) ],
where z_a is the quantile of level a ∈ (0,1) of the standard normal distribution. This is the Wald interval.
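For concreteness, (<ref>) takes a couple of lines of Python; the following sketch is purely illustrative (the function name and the truncation of the interval to [0,1] are our own choices):

    import numpy as np
    from scipy.stats import norm

    def wald_ci(S, n, alpha=0.05):
        # Wald interval: p_hat +/- z_{1-alpha/2} * sqrt(p_hat*(1-p_hat)/n)
        z = norm.ppf(1 - alpha / 2)
        p_hat = S / n
        half = z * np.sqrt(p_hat * (1 - p_hat) / n)
        return max(0.0, p_hat - half), min(1.0, p_hat + half)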
Unfortunately, this interval reputedly shows poor behaviour in terms of coverage probability, as has been known for quite some time, see for instance <cit.> and <cit.>. <cit.> provided a thorough understanding of the cause of the phenomenon. They showed, even for large values of n, that the pivotal function W=n^1/2(p̂ - p)/√(p̂(1-p̂)) can be significantly non-normal with large deviations of bias, variance, skewness and kurtosis. Of particular importance is the bias term <cit.>:
𝔼(W) = 𝔼(n^1/2(p̂ - p)/√(p̂(1-p̂))) = (p - 1/2)/√(np(1-p))·(1 + 7/(2n) + 9(p-1/2)^2/(2np(1-p))) + o(n^-3/2),
which is non-zero for p ≠1/2 and changes sign depending if p<1/2 or p>1/2. Hence, even though p̂ is unbiased for p, the estimation of the standard error by substituting in p̂ introduces some substantial positive or negative bias in W. This eventually results in systematic negative bias for the coverage probability of the interval based on W, a correction of which would require a shift of the centre of the interval towards 1/2. Obviously, this problem is most serious for values of p `far away' from 1/2, i.e. close to 0 or 1. This is the origin of the popular rule-of-thumb `np(1-p) must be greater than 5 (or 10)', supposed to validate the usage of the Wald interval, but mostly discredited by (<ref>). In addition, the coverage probability also suffers from important oscillations across n and p <cit.>. This is essentially due to the discrete nature of S: clearly, for finite n, p̂ can only take on a finite number of different values (0, 1/n, 2/n, …, (n-1)/n, 1), and that causes problems when approximating its distribution by a normal smooth curve, see <cit.> for details.
Hence <cit.> went on to investigate a dozen of alternatives, including the `exact' Clopper-Pearson interval, an interval based on the Likelihood Ratio test, and some intervals based on transformations such as arcsine and logit. They recommended only three of them as replacements for the Wald interval: the Wilson, the Agresti-Coull and the Jeffreys intervals. This is due to their superior performance in coverage probability and interval length, as well as their ease of interpretation.
In this paper the Wilson and Agresti-Coull intervals will be adapted to the conditional, kernel regression-based, setting. These two intervals are derived from the asymptotic normality of the sample proportion (<ref>). As it is known, the Nadaraya-Watson estimator of the conditional probability p(x), that will be used in Sections <ref> and <ref>, is a weighted sample average, which can be regarded as a `conditional sample proportion' in this framework. In particular, the Nadaraya-Watson estimator is - under mild conditions - asymptotically normally distributed as well <cit.>, which allows a natural extension of the Wilson and Agresti-Coull intervals to the conditional case. On the other hand, the Jeffreys interval has a Bayesian derivation, being essentially a credible interval from the posterior distribution of p when some uninformative (`Jeffreys') prior is used. It is less obvious how this construction would fit in the conditional setting. It seems, therefore, reasonable to leave the Jeffreys interval on the side in this paper.
§.§ The Wilson and Agresti-Coull confidence intervals for a proportion
The Wilson interval, first described by <cit.>, follows from the inversion of the score test for a null hypothesis H_0: p=p_0, hence it is also known as `score interval'. Like the Wald interval, it is obtained from (<ref>). The main difference, though, is that the variance factor p(1-p) is not estimated and keeps its unknown nature through the derivation. Specifically, from a statement like
ℙ(-z_1-α/2 < (p̂-p)/√(p(1-p)/n) < z_1-α/2) ≃ 1- α,
essentially equivalent to (<ref>), it follows that the confidence interval for p at level 1-α should be the set of all values of p such that
(p̂-p)^2 ≤ p(1-p)/n z_1-α/2^2.
Solving this quadratic inequality in p yields
CI_Wi = [ (p̂ + z_1-α/2^2/(2n) ± (z_1-α/2/n^1/2)√(p̂(1-p̂) + z_1-α/2^2/(4n))) / (1+z_1-α/2^2/n) ] = [ (S + z_1-α/2^2/2)/(n+z_1-α/2^2) ± (n^1/2z_1-α/2/(n+z_1-α/2^2))√(p̂(1-p̂) + z_1-α/2^2/(4n)) ].
<cit.> showed that this interval can be written
CI_Wi = [(1-w)p̂ + w·(1/2) ± z_1-α/2√((1-w)·p̂(1-p̂)/(n+z^2_1-α/2) + w·1/(4(n+z^2_1-α/2))) ],
where w=z_1-α/2^2/(n+z_1-α/2^2). This interval is symmetric around (1-w)p̂ + w·(1/2), a weighted average of p̂ and the uninformative prior 1/2, with the weight on p̂ heading to 1 asymptotically. Compared to the Wald interval (<ref>), the interval centre is now shifted towards 1/2, which substantially reduces the bias in coverage probability, as suggested below (<ref>). Likewise, the coefficient on z_1-α/2 in the ± term is the same weighted average of the variance when p=p̂ and when p = 1/2. <cit.> `strongly recommend' the Wilson interval as alternative to the Wald interval. However, they acknowledged that the form of this interval is complicated and so suggested a new interval with a simple form which they called the `Adjusted Wald interval', now better known as the `Agresti-Coull' interval.
They noted that the Wilson interval is like a `shrinking' of both the midpoint and variance estimate in the Wald interval towards 1/2 and 1/4 respectively, with the amount of shrinkage decreasing along with n. This led them to consider the midpoint of the Wilson interval, p̃≐ (S + z_1-α/2^2/2)/(n + z_1-α/2^2), as another point estimate of p, and then continue with the Wald Interval derivation. For the `usual' confidence level 1-α = 0.95, z_1-α/2 =1.96 ≃ 2, hence p̃≃ (S + 2)/(n + 4) and this procedure is sometimes loosely called the `add 2 successes and 2 failures' strategy. It combines the idea of shifting the centre toward 1/2, à la Wilson, with the simplicity of the Wald interval derivation which substitutes in an estimate for the standard error.
Specifically, define
S̃≐ S + z_1-α/2^2/2, ñ≐ n + z_1-α/2^2 and p̃≐S̃/ñ.
Then, given that p̃-p̂ = O(n^-1), it follows from (<ref>) that
√(n)( p̃-p) →_d 𝒩(0, p(1-p)),
and acting as in Section <ref> yields
CI_AC =[ p̃± z_1-α/2√(p̃(1-p̃)/ñ) ].
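The two constructions are easily compared in code; the sketch below (illustrative only, reusing the imports of the previous snippet) makes explicit that the Agresti-Coull interval is the Wald recipe applied to the Wilson centre:

    def wilson_ci(S, n, alpha=0.05):
        # Wilson (score) interval, centred at (S + z^2/2)/(n + z^2)
        z = norm.ppf(1 - alpha / 2)
        p_hat = S / n
        centre = (S + z**2 / 2) / (n + z**2)
        half = (z * np.sqrt(n) / (n + z**2)) * np.sqrt(p_hat * (1 - p_hat) + z**2 / (4 * n))
        return centre - half, centre + half

    def agresti_coull_ci(S, n, alpha=0.05):
        # 'Adjusted Wald': substitute p_tilde = (S + z^2/2)/(n + z^2) into the Wald formula
        z = norm.ppf(1 - alpha / 2)
        n_tilde = n + z**2
        p_tilde = (S + z**2 / 2) / n_tilde
        half = z * np.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
        return max(0.0, p_tilde - half), min(1.0, p_tilde + half)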
<cit.> provided an excellent breakdown of the performance of the Wald, Wilson and Agresti-Coull intervals as a whole. The Wilson and Agresti-Coull intervals behave very similarly. In particular, they are almost indistinguishable around p=0.5 <cit.>. Most importantly, the Wilson and Agresti-Coull intervals maintain a coverage probability very close to the confidence level 1-α, and this for all p and even for small values of n, as opposed to the Wald interval whose coverage probability remains consistently below target, even for large n. The oscillations persist, though, which could be expected: it is the discreteness of the Bernoulli distribution that causes the oscillations, not how the intervals are formed. <cit.> also proved that both the Agresti-Coull and the Wilson intervals are shorter on average than the Wald interval. In the following section, conditional versions of these intervals are constructed, and it is analysed if these observations carry over to the case of estimating a conditional probability via nonparametric binary regression.
§ CONFIDENCE INTERVALS FOR A CONDITIONAL PROBABILITY
§.§ Binary regression and Nadaraya-Watson estimator
Consider now a bivariate sample 𝒮={(X_1,Y_1),…,(X_n,Y_n)} of i.i.d. replications of a random vector (X,Y) ∈ℝ×{0,1} such that X ∼ F (unspecified) and Y |X ∼Bernoulli(p(X)). Now, the probability of a certain individual having the characteristic of interest is allowed to vary along with another explanatory variable X. Of interest is the estimation of the conditional probability function
p(x) = ℙ(Y=1 | X=x).
Assuming that X is a continuous variable whose distribution admits a density f, the estimation of p(x) actually falls within the topic of regression. Indeed,
𝔼(Y|X=x) = 0×(1-p(x)) + 1× p(x) = p(x),
meaning that p(x) is actually a conditional expectation function. The problem is called binary regression.
Common parametric specifications for p include logistic and probit models. Their use is so customary that their goodness-of-fit is often taken for granted in applied studies. E.g., within the application considered in this paper, <cit.> and <cit.> did not attempt any validation of their logistic model. The primary tool for suggesting a reasonable parametric specification in the `continuous-response' context is often the basic (X,Y)-graph (scatter-plot). When the response is binary, though, a scatter-plot is not much informative (no clear shape for the cloud of data points, see for instance Figure <ref> below), hence binary regression actually lacks that convenient visual tool. Maybe that is the reason why the question of goodness-of-fit of a logistic regression model is so often overlooked in the literature, as if the incapacity of visually detecting departures from a model automatically validates it. Yet, without any visual guide, the risk of model misspecification is actually higher in binary regression than in other cases <cit.>, with misspecification typically leading to non-consistent estimates, biased analyses and questionable conclusions.
In order to avoid any difficulty in postulating and validating some parametric specification for the function p, here a Nadaraya-Watson kernel regression estimator will be used. Kernel smoothing is a very popular nonparametric regression method <cit.>, and the Nadaraya-Watson (NW) estimator <cit.> is one of its simplest variants. Given the sample 𝒮, the NW estimator is defined as
p̂_h(x) = ∑_i=1^n K(x-X_i/h)Y_i/∑_i=1^n K(x-X_i/h),
where K is a `kernel' function, typically a smooth symmetric probability density like the standard Gaussian, and h is a `bandwidth', essentially fixing the smoothness of the final estimate p̂_h. Clearly, (<ref>) is just a weighted average of the binary values Y_i's, with weights decreasing with the distance between x and the corresponding X_i. Hence it returns an estimation of the `local' proportion of the Y_i's equal to 1, for those individuals such that X_i ≃ x <cit.>, which is indeed a natural estimate of (<ref>). It is straightforward to see that p̂_h(x) always belongs to [0,1], as it is just a (weighted) average of 0/1 values. There exist more elaborated nonparametric regression estimators (Local Polynomial or Splines, for instance), but those usually fail to automatically satisfy this basic constraint on p(x). Hence the NW estimator seems a natural choice here.
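In code, (<ref>) is a one-line weighted average; the sketch below (illustrative, with the same imports as before) assumes X and Y are numpy arrays holding the sample:

    def nw_estimate(x, X, Y, h):
        # Nadaraya-Watson estimate of p(x): kernel-weighted 'local proportion' of Y_i = 1
        w = norm.pdf((x - X) / h)        # weights K((x - X_i)/h), standard Gaussian kernel
        return np.sum(w * Y) / np.sum(w)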
Classical results in kernel regression <cit.> state that estimator (<ref>) is a consistent one for p(x) provided that h → 0 and nh →∞ as n →∞. Moreover, if h = O(n^-1/5), it is asymptotically normal. Specifically, adapting Theorem 4.5 of <cit.> to the binary case gives, at all x such that f and p are twice continuously differentiable, and f(x) > 0,
√(nh)(p̂_h(x) - p(x)) →_d 𝒩((1/2)λ^1/2μ_2(K) b(x), R(K) p(x)(1-p(x))/f(x)),
where λ = lim_n→∞ nh^5 <∞, μ_2(K) = ∫ u^2 K(u) du and R(K) = ∫ K^2(u) du are kernel-dependent constants, and
b(x) = p''(x) + 2p'(x)f'(x)/f(x).
Balancing squared bias and variance, in order to achieve minimum Mean Squared Error for the estimator, requires to take h ∼ n^-1/5 <cit.>. This means λ >0, materialising a non-vanishing bias term in (<ref>).
This bias has a major impact on any statistical procedure based on nonparametric function estimation <cit.>, and requires careful treatment. In the context of building confidence intervals, it can be either explicitly estimated and corrected <cit.>, or one can act via undersmoothing: if h=o(n^-1/5), hence purposely sub-optimal, then λ = 0 in (<ref>) which becomes
√(nh)(p̂_h(x) - p(x)) →_d 𝒩(0, R(K) p(x)(1-p(x))/f(x)).
The bias is seemingly gone; of course, at the price of an increased variance. <cit.> and <cit.> theoretically demonstrated the superiority of treating the bias via undersmoothing over explicit correction in terms of empirical coverage of the resulting confidence intervals. Clearly, (<ref>) is the analogue of (<ref>) for a conditional probability. It will consequently serve below as the basis for constructing confidence intervals for p(x) at any x.
§.§ Wald interval for a conditional probability
In particular, a Wald-type confidence interval at level 1-α for p(x) is
CI_Wa(x;h)= [p̂_h(x) ± z_1-α/2√(p̂_h(x)(1-p̂_h(x))/(nhf̂_h(x)/R(K))) ].
It directly follows from the asymptotic normality statement (<ref>) with the variance R(K) p(x)(1-p(x))/f(x) being estimated: p(x) is estimated by p̂_h(x) (<ref>), and f(x) is estimated by its classical kernel density estimator <cit.>
f̂_h(x) = 1/nh∑_i=1^nK(x-X_i/h).
Although it would not necessarily be optimal for estimating f(x) itself, here the same kernel K and bandwidth h as in (<ref>) should be used in (<ref>). The reason why a factor 1/f(x) arises in the variance of (<ref>)/(<ref>) is that not all n observations, but only a certain fraction (asymptotically) proportional to f(x) are effectively used by the essentially local estimator (<ref>) for estimating p at x. So, the estimation of f here should be driven by accurately quantifying that `local equivalent sample size' at x. The fact that the quantity nh f̂_h(x)/R(K) is actually that equivalent sample size follows by seeing that nhf̂_h(x) is the denominator of (<ref>), while R(K) gives an appreciation of how large is the weight given to those observations `close' to x, overall.
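A sketch of (<ref>) in Python, reusing nw_estimate from above (the helper names are ours; recall that R(K) = 1/(2√π) for the standard Gaussian kernel):

    R_K = 1 / (2 * np.sqrt(np.pi))       # R(K) = int K^2(u) du, standard Gaussian kernel

    def kde(x, X, h):
        # kernel density estimate f_hat_h(x), same kernel and bandwidth as p_hat_h
        return np.mean(norm.pdf((x - X) / h)) / h

    def cond_wald_ci(x, X, Y, h, alpha=0.05):
        # conditional Wald interval; n_loc is the 'local equivalent sample size'
        z = norm.ppf(1 - alpha / 2)
        p_hat = nw_estimate(x, X, Y, h)
        n_loc = len(X) * h * kde(x, X, h) / R_K
        half = z * np.sqrt(p_hat * (1 - p_hat) / n_loc)
        return max(0.0, p_hat - half), min(1.0, p_hat + half)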
The Wald nature of (<ref>) and the shortcomings of its analogue exposed in Section <ref>, though, motivate the adaptation of the Wilson and Agresti-Coull intervals to the conditional context.
§.§ Wilson and Agresti-Coull intervals for a conditional probability
The derivation of the `conditional' Wilson interval follows the same steps and justification as those presented in Section <ref>. From (<ref>), it is seen that a confidence interval of level 1-α for p(x) should wrap up all those values of p(x) such that
-z_1-α/2 < (p̂_h(x) - p(x))/√(R(K)p(x)(1-p(x))/(nhf(x))) < z_1-α/2,
that is,
(p̂_h(x) - p(x))^2 ≤ (R(K)p(x)(1-p(x))/(nhf(x))) z_1-α/2^2.
Solving for p(x), and estimating the unknown f(x) by its kernel estimator f̂_h (<ref>), yields the interval
CI_Wi(x;h) = [ ( p̂_h(x) + z_1-α/2^2R(K)/(2nhf̂_h(x)) ± (z_1-α/2 R(K)^1/2/(nhf̂_h(x))^1/2)√(p̂_h(x)(1-p̂_h(x)) + z_1-α/2^2R(K)/(4nhf̂_h(x))) ) / ( 1 + z_1-α/2^2R(K)/(nhf̂_h(x)) ) ]
= [ (p̂_h(x)·nhf̂_h(x)/R(K) + z_1-α/2^2/2)/(nhf̂_h(x)/R(K)+z_1-α/2^2) ± (z_1-α/2(nhf̂_h(x)/R(K))^1/2/(nhf̂_h(x)/R(K)+z_1-α/2^2))√(p̂_h(x)(1-p̂_h(x)) + z_1-α/2^2R(K)/(4nhf̂_h(x))) ].
Similarly to (<ref>), the centre of this interval can be represented as (1-w)p̂_h(x) + w·(1/2), where w = z_1-α/2^2/(nhf̂_h(x)/R(K)+z_1-α/2^2). This highlights that the adaptation to the conditional case has not altered the interval's nature. In addition, (<ref>) directly suggests an `Agresti-Coull', simpler version of it.
Indeed, the (non-conditional) Agresti-Coull interval (<ref>) is built around p̃, the centre of the corresponding (non-conditional) Wilson interval (<ref>). Extending this to the conditional case from (<ref>), one can define
CI_AC(x;h) = [p̃_h(x) ± z_1-α/2√(p̃_h(x)(1-p̃_h(x))/ñ_h(x)) ]
where
p̃_h(x) = (p̂_h(x)·nhf̂_h(x)/R(K) + z_1-α/2^2/2)/(nhf̂_h(x)/R(K)+z_1-α/2^2) and ñ_h(x) = nhf̂_h(x)/R(K)+z_1-α/2^2.
The interpretation of nhf̂_h(x)/R(K) as the `local equivalent sample size' makes the analogy between this and (<ref>) obvious.
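This analogy translates directly into code: both conditional intervals are the classical formulas with n replaced by the local equivalent sample size (a sketch, reusing nw_estimate, kde and R_K from the previous snippets):

    def cond_wilson_ci(x, X, Y, h, alpha=0.05):
        # conditional Wilson interval (<ref>)
        z = norm.ppf(1 - alpha / 2)
        p_hat = nw_estimate(x, X, Y, h)
        n_loc = len(X) * h * kde(x, X, h) / R_K    # nh f_hat(x) / R(K)
        centre = (p_hat * n_loc + z**2 / 2) / (n_loc + z**2)
        half = (z * np.sqrt(n_loc) / (n_loc + z**2)) * np.sqrt(p_hat * (1 - p_hat) + z**2 / (4 * n_loc))
        return centre - half, centre + half

    def cond_ac_ci(x, X, Y, h, alpha=0.05):
        # conditional Agresti-Coull interval (<ref>)-(<ref>)
        z = norm.ppf(1 - alpha / 2)
        p_hat = nw_estimate(x, X, Y, h)
        n_loc = len(X) * h * kde(x, X, h) / R_K
        n_tilde = n_loc + z**2
        p_tilde = (p_hat * n_loc + z**2 / 2) / n_tilde
        half = z * np.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
        return max(0.0, p_tilde - half), min(1.0, p_tilde + half)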
§.§ Choice of `undersmoothed' bandwidth
The three intervals (<ref>), (<ref>) and (<ref>) follow straight from (<ref>), which holds true if and only if h=o(n^-1/5) (undersmoothing). Results of <cit.> on asymptotic intervals in nonparametric regression, suggest that the coverage probability of the Wald-type interval (<ref>) is
ℙ(p(x) ∈ CI_Wa(x;h)) = 1-α + O(nh^5 + h^2 +(nh)^-1).
Hence, for minimising the coverage error of the confidence interval computed on p̂_h(x), one should take h such that h ∼ n^-1/3. Common practice in nonparametric methods requiring such undersmoothing, is to take h `smaller' than a value, say h_0, supposed to be optimal for estimating p(x). Typically, h_0 would be returned by a data-driven selection procedure, such as cross-validation or plug-in, see <cit.> for a review. Often, h_0 is then just divided by some constant which heuristically looks appropriate. In the present case, one could take h = h_0 n^-1/3/n^-1/5 = h_0 n^-2/15, so as to (supposedly) obtain h ∼ n^-1/3, given that h_0 ∼ n^-1/5, see comments below (<ref>).
There is actually no justification for doing so in practice. `Undersmoothing' is a purely asymptotic, hence theoretical, concept. The `undersmoothed' h is to tend to 0 quicker than the optimal h_0 as n would tend to ∞, but this convergence is obviously meaningless when facing a sample of data of fixed size n. Indeed expressions like h ∼ n^-1/3 or h_0 ∼ n^-1/5 do not really make sense for fixed n. It is understood that, mainly because of the inherent bias of p̂_h(x), the value of h leading to confidence intervals with good coverage properties is not, in general, the optimal bandwidth h_0 for estimating p(x). However asymptotic expressions such as (<ref>) cannot really be of any practical help. In fact, there are no effective empirical ways of selecting a right h in this framework, as <cit.> deplored.
This paper fills this gap, as a sensible way of selecting such a value of h is devised. A practical procedure, it does not claim to return an `undersmoothed' bandwidth or otherwise. It just aims to return a numerical value of h which guarantees, for the data at hand, the intervals (<ref>), (<ref>) or (<ref>) to have high degree of coverage accuracy. It is essentially a bootstrap procedure, which in many ways resembles <cit.>'s idea. A main difference, though, is that <cit.> looked for the (higher) nominal confidence level that they should target for their intervals, so that their empirical versions have a coverage probability close to 1-α. Here, the approach is more direct as the constructive parameter h is the only focus.
The procedure goes as follows (a code sketch is given right after the list):
* From the initial sample 𝒮, estimate p(x) by p̂_h_0(x) using an appropriate bandwidth h_0 returned by any data-driven procedure;
* Generate a large number B of bootstrap resamples 𝒮^*(b)={(X_i,Y_i^*(b)); i=1,…,n}, b=1,…, B, according to Y_i^*(b)∼Bernoulli(p̂_h_0(X_i));
* For b ∈{1,…,B}, compute on 𝒮^*(b) a collection of intervals[Here CI(x;h) denotes a generic confidence interval for p(x), which can be CI_Wa(x;h), CI_Wi(x;h) or CI_AC(x;h).] CI^*(b)(x;h) on a fine grid of candidate values of h;
* Estimate the coverage probability of the interval CI(x;h) by the fraction P(x;h) of intervals CI^*(b)(x;h) which contain the `true' value p̂_h_0(x);
* Select for bandwidth one of the values of h for which P(x;h) is above 1-α. If P(x;h) nowhere takes a value higher than 1-α, choose h which maximises P(x;h).
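The whole procedure fits in a short function; the sketch below (the names, default grid and B are our own choices, reusing nw_estimate and the conditional interval functions above) follows the five steps literally:

    def select_h(X, Y, h0, ci_fun, x=0.0, grid=None, B=1000, alpha=0.05, seed=None):
        # bootstrap selection of h for the interval ci_fun at location x
        rng = np.random.default_rng(seed)
        grid = np.linspace(0.05, 2.0, 200) if grid is None else grid
        p0 = np.array([nw_estimate(xi, X, Y, h0) for xi in X])  # step 1: pilot fit
        target = nw_estimate(x, X, Y, h0)                       # 'true' value p_hat_h0(x)
        cover = np.zeros(len(grid))
        for _ in range(B):
            Y_star = rng.binomial(1, p0)   # step 2: resample responses, design kept fixed
            for j, h in enumerate(grid):
                lo, hi = ci_fun(x, X, Y_star, h, alpha)         # step 3
                cover[j] += (lo <= target <= hi)
        cover /= B                                              # step 4: P(x; h)
        ok = grid[cover >= 1 - alpha]                           # step 5
        return ok.mean() if ok.size > 0 else grid[np.argmax(cover)]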
The intervals (<ref>), (<ref>) and (<ref>) heavily depend on the local geometry of the data around x, through the `local equivalent sample size' nhf̂_h(x)/R(K) (see comments below (<ref>)). In order for the bootstrap intervals to mimic this appropriately, the resampling in 2. is done conditionally on the design values {X_i} and those are kept fixed; only the Y_i's are resampled. Also, theoretical considerations about bootstrap methods for nonparametric curve estimation suggest that resampling such as in 2. should be done from an estimate p̂_g with g, this time, an oversmoothed bandwidth – with the same caveat about what this really means in practice. Again, the reason behind this is to do with the bias of the estimator, see e.g. <cit.>. As opposed to other procedures, though, here the bootstrap resamples are only used for identifying another bandwidth, not for direct estimation of quantities of interest. Hence, oversmoothing is not theoretically required (see <cit.> for justification of this).
This procedure is tested through simulations in the next section, where it is seen to perform very well. Those simulations show that P(x;h) is essentially a concave function of h, up to some minor fluctuations due to sampling (see Figure <ref> below), and that in almost all situations P(x;h) indeed takes values higher than 1-α. Because of concavity, the values of h for which it is the case forms a convex subset of ^+. The value of h chosen in 5. can then be the average of those values, for instance. This guarantees the selected value of h to correspond to a value of P(x;h) close to its maximum, hence leaning more to the side of conservatism than otherwise. For the rare cases in which P(x;h) does not go above 1-α, its maximum value is very close to 1-α, anyway.
§.§ Simulation study
In order to test, compare and validate the above construction of confidence intervals for p(x) and the suggested bandwidth selection procedure, a twofold simulation study was run.
Scenario 1. The data was generated according to the following process:
X ∼𝒰(-π,π),
Y | X ∼Bernoulli(p(X)),
p(x) = e^(3 sin(x))/(1+e^(3 sin(x))).
This regression function, shown in Figure <ref>, was used in Example 5.119 in <cit.>. The three confidence intervals (<ref>), (<ref>) or (<ref>) for p(x) at x=0 and x= π/2 were computed on M=1,000 independent samples generated as above. All confidence intervals were truncated to [0,1] when necessary. The coverage probabilities of the three intervals, at the two locations, were approximated by the fraction of those M=1,000 intervals which include the true values p(0) = 1/2 and p(π/2) ≃ 0.953. In the non-conditional case, values of p close to 0 or 1 are known to be problematic (see comments below (<ref>)), so comparing how much impact the value of p(x) has on the performance of the `conditional' intervals is of interest. To isolate that effect, a uniform design for X was considered, to ensure that the areas `close to x=0' and `close to x=π/2' are equally populated by data.
Three sample sizes were considered: n=50 (`small' sample), n=250 (`medium' sample) and n=1000 (`large sample'). The targeted confidence level was 95%, i.e. α = 0.05. For each sample, the values of h to use in (<ref>), (<ref>) or (<ref>) were determined by the procedure described in Section <ref>. The initial value of h_0 was taken here as the theoretically optimal value of the bandwidth for estimating p - in this simulation study it is accessible as we know the `truth': it is h_0 ≃ 0.745 n^-1/5. This allows a fair comparison of the observed results as they are not impacted by other arbitrary decisions. The number of bootstrap replications was set to B=1,000 and the best value of h was looked for on a grid of 200 equispaced values from 0.05 to 2. The final value of h was taken as the centre (average) of the set of values producing an estimated coverage higher than 95%, as suggested at the end of Section <ref>. The (approximated) coverage probabilities of the intervals built according to this procedure are given in Table <ref>.
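For illustration, a single Monte-Carlo replication of this design can be reproduced as follows (a sketch only; the seeds are arbitrary and B is reduced for speed):

    rng = np.random.default_rng(1)
    n = 1000
    X = rng.uniform(-np.pi, np.pi, n)
    p_true = np.exp(3 * np.sin(X)) / (1 + np.exp(3 * np.sin(X)))
    Y = rng.binomial(1, p_true)

    h0 = 0.745 * n ** (-0.2)                 # theoretically optimal bandwidth
    h = select_h(X, Y, h0, cond_wilson_ci, x=0.0, B=200, seed=2)
    print(cond_wilson_ci(0.0, X, Y, h))      # should contain p(0) = 1/2 in about 95% of replications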
It emerges from Table <ref> that the Wilson and Agresti-Coull intervals reach an empirical coverage consistently very close to the targeted 95%, and this for all sample sizes and at both locations (i.e., both when p(x) ≃ 1/2 and p(x) ≃ 1). A notable exception, though, is when x = π/2 and n=50 (`small' sample). As explained in Section <ref>, the number of observations effectively playing a role in estimating p at x is nh f̂_h(x)/R(K). Here, with n=50, f ≡ 1/(2π) and R(K) = 1/(2√(π)) (for K=ϕ the standard Gaussian kernel), the local equivalent sample size is roughly 1 observation for the values of h around h_0. That means that the effect of the shrinkage described below (<ref>) is severe here, and the centre of the interval is seriously held back toward 1/2. At x=0, this is a good thing as p(0)=1/2, and the observed coverage is actually higher than 95%; at x=π/2, with p(π/2) close to 1, this is detrimental to the level of the intervals (which keep an empirical coverage higher than 90%, though). As expected given the behaviour of its non-conditional counterpart, the Wald interval struggles to maintain a reasonable level of coverage, even at large sample sizes or in favourable cases (x=0). Only when x=0 and n=1000 does the Wald interval produces reasonable results (but still slightly less accurate in terms of coverage probability than Wilson and Agresti-Coull).
Another perspective on this is provided by Figures <ref> and <ref>, which show boxplots of the lengths of the M=1,000 confidence intervals computed from each construction for n=1000, as well as the selected values of h, for x=0 and x=π/2. At x=0, the three intervals are always very similar, and so are the values of h selected by the procedure described in Section <ref>. The empirical coverage are, therefore, similar as well as shown by Table <ref>. At x=π/2, however, the procedure selects values of h much smaller for the Wilson and Agresti-Coull intervals, than for the Wald interval (Figure <ref>, right panel). This means that the constructed Wilson and Agresti-Coull intervals are indeed longer than the Wald interval (Figure <ref>, left panel), but that is the price to pay to keep a coverage probability of 95%. Recalling that h is selected via a bootstrap procedure, the value of h guaranteeing a high coverage for the bootstrap replications of the Wald interval, is not guaranteed to maintain such high coverage `in the real world'. For the Wilson and Agresti-Coull intervals, on the other hand, that is the case.
Figures <ref> and <ref> also show the optimal value h_0 = 0.188 (for n=1000; dashed line in the right panel). According to the boxplots, the value h supposed to be good for constructing the confidence intervals (<ref>), (<ref>) or (<ref>) is, for many samples, smaller than h_0, in agreement with what `undersmoothing' suggests. It is, however, not always the case. Oftentimes (especially at x=0), taking h (much) greater than h_0 seems to be the right thing to do. The fact that a `small' h is not always ideal is easily understood through the case x=0. In this scenario, due to symmetry, p(0) is actually equal to (Y) = p, the non-conditional probability (Y=1). As a result, any confidence interval for p such as those described in Section <ref> can be used for p(0) as well. This is advantageous, as the sample average Y̅=p̂ is naturally a better estimator (smaller variance, no bias) of the global p than any local (h `small'), conditional attempt. Those `non-conditional' intervals are actually recovered from (<ref>), (<ref>) or (<ref>) as h→∞. Indeed, it is known that taking a large bandwidth in nonparametric regression essentially makes local estimators into global ones <cit.>. Intuitively, if h is very large (h ≃∞) in (<ref>) then all observations are equally weighted and p̂_∞(x) just reduces to the sample average Y̅=S/n=p̂. Therefore, it is beneficial to take h `large' here. Of course, this is a very particular situation, but it exemplifies that the best h really depends on the design and is not necessarily `small'. Hence heuristic rules such as taking h = C_n h_0, with C_n a small constant (possibly depending on n), are thus to be precluded and h should be selected by a careful data-driven selection procedure. This shows the value of the procedure developed in Section <ref>.
Scenario 2. The purpose of this second scenario is to empirically validate the real data analysis shown in the next section. Essentially the same study as in Scenario 1 was repeated, but this time data sets of size n=1350 were generated as
X ∼ 0.45 ×𝒩(-1,1/4) + 0.55 ×𝒩(0.8,1/4),
Y | X ∼Bernoulli(p(X)),
p(x) = 1/(1+e^(-0.088-0.770 x)).
The above mixture of Normals is a good parametric approximation of the distribution of the predictor X in the application below (Figure <ref>), the above function p is the best logistic fit for the analysed data (Figure <ref>), and the sample size n=1350 is akin to that sample size as well. Hence the results of this simulation gives an appreciation of the validity of the real case analysis described in the next Section. Again, M=1,000 independent samples were generated. For each of them, the values of h to use in (<ref>), (<ref>) or (<ref>) were determined by the procedure described in Section <ref>. The coverage probabilities of the three types of confidence intervals for p(0) was approximated by the fractions of the M=1,000 such intervals which include the true p(0) = 0.522. Those were 0.934 for the Wald interval, 0.953 for the Wilson interval, and 0.955 for the Agresti-Coull interval. Figure <ref> shows the lengths and selected values of h for the M=1,000 independent samples generated in this scenario. The conclusion are very similar to what was said for the case x=0, n=1000 in Scenario 1. In particular, both the Wilson and Agresti-Coull intervals show coverage probabilities very close to their nominal level 95%, while, as p(0) ≃ 1/2 and n is `large', the Wald interval is not doing too bad either. This indicates that the conclusions drawn in Section <ref> can be given some credibility.
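For completeness, the data-generating step of this second scenario can be sketched as follows (we read 𝒩(m,1/4) as mean m and variance 1/4, i.e. standard deviation 1/2 — an assumption on the notation):

    rng = np.random.default_rng(3)
    n = 1350
    comp = rng.random(n) < 0.45              # mixture component indicator
    X = np.where(comp, rng.normal(-1.0, 0.5, n), rng.normal(0.8, 0.5, n))
    p_true = 1 / (1 + np.exp(-(0.088 + 0.770 * X)))
    Y = rng.binomial(1, p_true)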
§ ANALYSIS OF UEFA COMPETITIONS
§.§ UEFA coefficients
The main aim of this analysis is to investigate the existence, or otherwise, of the Second Leg Home Advantage (SLHA) in UEFA competitions (Champions League and Europa League). The SLHA is described as the advantage in a balanced two-legged knockout tie whereby teams are, on average, more likely to qualify if they play at home in the second leg. Hereafter, the team playing at home on the second leg will be called `Second Leg Home Team' [SLHT], and conversely for the `First Leg Home Team' [FLHT]. Table 1 in <cit.> exemplifies that many players and coaches believe that the SLHA exists. Maybe more surprising is that the UEFA body itself seems to believe in it as well: the design of the UEFA Champions and Europa Leagues is such that teams which finished first in their group in the first stage, are rewarded by being guaranteed to play the second leg at home in the first subsequent knockout round.
If only for that, any serious empirical analysis of the SLHA effect must adjust for team strengths, as briefly explained in Section <ref>. This said, the `strength' of a team is a rather vague concept, which in addition is likely to vary over the course of a season according to many imponderable factors (e.g., the injury of a key player may seriously affect a team's abilities). A simple proxy for such `strength' seems to be the UEFA club coefficient. Its calculation is actually based on two elements: first, an individual club index, obtained from the points awarded to the team for their performance during the course of the Champions or Europa league each year (details to be found in <cit.>); second, a country index, which is the average of the indices of all the clubs of the same country which took part to a European competition that year. The country index for a particular season is defined as the sum of that country's points from the previous five seasons. Since 2009, the club UEFA coefficients have been calculated as the sum of the clubs points from the previous five seasons plus the addition of 20% of the relevant country coefficient for the latest season.[Between 2004-05 and 2008-09, the country weight was 33%, whereas it was 50% before 2003-04. ] The club coefficients take thus into account the recent performance of the club in European competitions, but also the strength of the league from which the team comes. Although not perfect, they form the basis for ranking and seeding teams in the UEFA competitions, and should be reasonably representative of the relative overall strengths of the teams over one season.
Support for this assertion is indirectly provided in <cit.>. They hypothesised that, as the club coefficients are only updated at the end of each season, group stage performance would provide a more up-to-date measure of team strength for the following knockout rounds. As such, they included both club coefficients and group stage performance as predictors in their logistic regression model. It turned out that `group stage performance' was not statistically significant either in a model by itself or in a model which also used `club coefficient' (which was significant). It seems, therefore, fair to consider the UEFA coefficient of clubs as the main indicator of team strength for a given season, and the modelling below will make use of that index only.
§.§ Preliminaries
Data preparation. The analysed data, all drawn from <cit.>, include all match results from UEFA Champions and Europa leagues from 2009/2010 through to 2014/15 and the UEFA coefficients of all clubs which took part to some European competition for those seasons. Initially, data from seasons 1999/00 to 2014/15 were planned to be analysed, however UEFA has changed the method for calculating the club coefficients three times during that period. These changes mean that the time periods (1999/00 to 03/04, 04/05 to 08/09, and 09/10 onwards) are no longer directly comparable given that the UEFA coefficients would be fundamentally different variables. This necessitated analysis of only one time period, and the most recent (2009/10 to 2014/15) was naturally chosen. It would have been possible to recalculate the UEFA coefficients of each club for the years prior to 2009 using the current method, but it is probably meaningful here to focus on the last years only, given the possible fading of the SLHA over years that have sometimes been suggested by previous studies (see Section <ref>). This analysis will consequently evaluate the existence of the SLHA `now', and not from a historical perspective.
Altogether, 4160 matches were played in the Champions and Europa leagues from 2009/2010 to 2014/2015. Of course, only the knockout two-legged ties were of interest in this study. Hence, the first, obvious action was to remove group stage games, played in a round robin style and not qualifying as a two-stage knockout, and match-ups where only one game was played such as Finals or match-ups where one game was cancelled (it happened that some games were cancelled due to security concerns, or others[http://www.uefa.org/news/newsid=1666823.html]). It remained n=1353 two-legged ties (i.e., pairs of matches), both in qualifying rounds (before the group stage) and in final knockout stage (after the group stage).
In a two-legged knockout tie, the aggregate score over the two matches is tallied and the team which scored the most goals on aggregate is declared the winner and qualifies to the next round. If the teams have scored an equal number of goals, the so-called `away goals rule' applies: the team which scored most `away from home' would qualify. If this criterion does not break the tie, then two extra periods of 15 minutes are played at the end of the normal time of the second leg. If the teams are still tied at that point, the result is decided by penalty shoot-out. For games which went to extra-time (there were n_ET = 84 of them), it is reasonable to hypothesise that some SLHA may be induced by the fact that the SLHT plays 30 minutes longer at home than the FLHT, hence benefiting more from its home advantage <cit.>. This could justify excluding those games from the study, given that they should artificially give rise to some sort of SLHA. It was, however, decided to keep them at first, arguing that it might precisely be the key element to take into account when assessing the SLHA. The possibility of playing extra time and/or taking penalties on their home ground might explain why most players favour playing at home on the second leg. In a second step, those n_ET=84 games will be excluded from the analysis, in order to appreciate the real effect of extra-time/shoot-out on the SLHA (see Section <ref>).
Predictor. Call Y the binary outcome of a two-legged tie, and for the ith tie define Y_i = 1 if the SLHT qualifies and Y_i = 0 otherwise (hence, if the FLHT qualifies). The study is based on the regression of Y on an explanatory variable X quantifying the inequality in strength between the two teams involved. Call C_1 and C_2 the UEFA coefficients of the FLHT and SLHT, respectively, at the time of meeting, and define[There are actually two occurrences where teams had a coefficient of zero, which causes a problem for defining their logarithm: two teams from Gibraltar made their first appearances in 2014/15, and Gibraltar itself was a newly accepted member of the UEFA so had a country coefficient of zero as well. Those teams were artificially given a coefficient of C=0.001, well below the next smallest coefficient of 0.050.]
X = log(C_2/C_1)=log(C_2) - log(C_1).
A positive value of X indicates that the SLHT is stronger than the FLHT, and conversely for a negative X. The value X=0 indicates an exactly balanced tie. The value X_i is the observed value of X for the ith tie.
Note that <cit.> and <cit.> used the difference C_2-C_1 as control variable (<cit.> actually used a normalised version of it). The reason why a log transformation is introduced in (<ref>) is that C_1 and C_2 are positive variables. It turns out that two positive numbers are more naturally compared through their ratio than through their difference.[There are mathematical reasons for this. Algebraically, (ℝ^+,×) is a group, (ℝ^+,+) is not. The Haar measure (i.e., the `natural' mathematical measure) on (ℝ^+,×) is ν(dx) = dx/x, translating to the measure of an interval [a,b] ⊂ℝ^+ being log(b)-log(a).] As an illustration, in the first qualifying round of the Champions League 2014-2015, the Sammarinese team of La Fiorita (C_1=0.699) faced the Estonian team of Levadia Tallinn (C_2=4.575); the same year, in the semi-finals, FC Barcelona (C_1=157.542) clashed with Bayern München (C_2 = 154.328). The (absolute) difference in coefficients is roughly the same for both ties (3.876 and 3.214, respectively), however anybody with a slight appreciation for European football would know that the second case, bringing together two giants of the discipline, was to be much tighter than the first one, opposing a new-coming team from one of the weakest leagues in Europe (San Marino) to a more experienced team from a mid-level league. The ratio of the coefficients C_2/C_1, respectively 6.545 and 1.021 for the above two ties, is much more representative of the relative forces involved.
Figure <ref> shows the kernel estimate f̂ (<ref>) of the density f of the predictor X, overlaid on a histogram. The bandwidth h=0.252 was selected by direct plug-in <cit.> and the kernel was the standard Gaussian density. The estimate clearly suggests that f is bimodal. This can be understood the following way. In the qualifying rounds, it is very rare to see two teams of very similar strength facing each other: there is often a team `much stronger' (at that level) than the other. See the above example La Fiorita versus Levadia Tallinn: although the two teams can be considered `weak' and both their UEFA coefficients are `low', the ratio of those coefficients unequivocally tells which team is likely to qualify. A value of X close to 0 is actually only observed if the UEFA coefficients of the two matched-up teams are really of the same order, and that typically happens in the final rounds, when the strongest teams meet (for example, FC Barcelona versus Bayern München, see above). There are, obviously, many more qualifying games than semi-finals, hence there are comparatively fewer games characterised by a value of X close to 0, than otherwise. In any case, f̂ shows a peak on the positive side noticeably higher than that on the negative side. In fact, the observed proportion of positive values X_i's is 752/1353 = 0.556 (Wilson confidence interval: [0.529;0.582]). This means that the stronger team is indeed more often the SLHT than the contrary, which confirms the existence of the confounding factor described in Sections <ref> and <ref>, and the necessity of taking it properly into account.
Model. In this application, it is sensible to treat the observed values X_i = log(C_2,i)-log(C_1,i) of the predictor as known constants set by the design (`fixed design'), and it will be assumed that
Y_i |X_i ∼Bernoulli(p(X_i))
for i = 1,…, n = 1353, independently of one another. The function p is left totally unspecified except that it is twice continuously differentiable.
§.§ Analysis
The Nadaraya-Watson estimator p̂_h_0 (<ref>) was computed on the data set 𝒮 = {(X_i,Y_i); i=1,…,n}. The kernel K was the standard Gaussian density, while the optimal bandwidth was approximated by the method based on AIC described in <cit.>,[This bandwidth selector is implemented in the R package np.] which returned h_0 = 0.525. The resulting estimate is shown in Figure <ref> (left). Common sense suggests that p should be a monotonic function of x, hence the little `bump' in p̂_h_0 between x=-5 and x=-4 is most certainly due to random fluctuation only. This happens in an area where data are rather sparse, so it is not surprising that the nonparametric, essentially local, estimator is not the most accurate there. If the focus of the analysis was that part of the range of values of X, then more involved estimation methods could be used (e.g., adaptive estimation based on variable bandwidths, robust version of the NW estimator such as LOESS, or isotonic regression). However, this study is mainly interested in what happens around x=0, where data are abundant. The NW estimate p̂_h_0 looks well-behaved and smooth over [-2,2], say (see close-up in Figure <ref>, right), so it is probably enough here. In particular, the estimator gives p̂_h_0(0) = 0.539 > 1/2, indicating a potential SLHA. The statistical significance of this effect is examined below by computing the confidence intervals (<ref>), (<ref>) and (<ref>) for p(0), using a bandwidth h obtained from the procedure described in Section <ref>. Details are given below.
Step 1. is what Figure <ref> shows. In Step 2., B=5,000 bootstrap resamples were generated as Y_i^*(b)∼Bernoulli(p̂_h_0(X_i)); i=1,…,n, b = 1,…,B. For Step 3., an equispaced grid of 200 candidate values for h, from h=0.05 to h=2, was built. On each bootstrap resample, the 3 confidence intervals (<ref>), (<ref>) and (<ref>) at x=0 for each candidate value of h were computed. The appearance of the functions P(0;h) (Step 4.) for the three types of intervals is shown in Figure <ref>. The returned values of the bandwidth to use (Step 5.) were obtained as the average of all the values h which give an estimated coverage higher than 95%. Those were: h=0.873 for the Wald interval, and h=0.854 for both the Wilson and the Agresti-Coull interval. With these values of h in (<ref>), (<ref>) and (<ref>), we obtain as 95% confidence intervals for p(0):
CI_Wa(x=0;h=0.873) = CI_Wi(x=0;h=0.854) = CI_AC(x=0;h=0.854) = [0.504; 0.574].
Interestingly, with this sample size, the three intervals are mostly indistinguishable, as they differ only in the fourth decimal place. Of course p=1/2 does not belong to this interval, evidencing the statistical significance of the SLHA (more on this in Section <ref>).
From a methodological point of view, it is interesting to note that the empirically `optimal' bandwidth h for building the confidence intervals (here h ≃ 0.86) is actually greater than the bandwidth h_0 = 0.525, considered `best' for estimating p. As already noted in Section <ref>, this is in contrast to what a naive interpretation of `undersmoothing' would suggest (to take h smaller than h_0). Admittedly, h_0 still belongs to the acceptable area ({h >0: P(0;h) ≥ 0.95}), see Figure <ref>. However, if one took h = h_0 n^-1/3/n^-1/5, as has sometimes been suggested in the literature following (<ref>), it would be h=0.2 here, and the resulting intervals would have a coverage probability much lower than the targeted 95%. Note that taking h too small also has an adverse effect on the length of the intervals, given that the standard error of the estimator p̂_h(x) is essentially inversely proportional to h, by (<ref>)-(<ref>). This confirms that ad-hoc procedures aiming at producing a supposedly `undersmoothed' bandwidth need not work well in finite samples, and should not be used.
For completeness, some elements of analysis based on a parametric logistic model are briefly given below. Importantly, it is noted that a classical goodness-of-fit test for GLM based on the deviance and Pearson's residuals rejects the logistic model for the data (p-value ∼ 0.001). The le Cessie-van Houwelingen test <cit.>, precisely based on the Nadaraya-Watson estimator (<ref>), shows marginal evidence against it as well (p-value =0.06). Therefore, what follows is not fully supported by the data, and is shown for illustration only. The fitted logistic model is logit(p(x)) = α + β x, and the coefficients are estimated at α̂ = 0.088 and β̂ = 0.770. This logistic fit is shown in Figure <ref>, overlaid to the Nadaraya-Watson estimator, to appreciate their discrepancy around 0. A 95% confidence interval for α is [-0.035;0.210] ∋ 0, indicating the non-significance of the intercept. Given that p(0) = e^α/(1+e^α), this translates into a 95% confidence interval [0.491,0.552] ∋ 1/2 for p(0). Hence an analysis based on logistic modelling would fail to highlight a potential SLHA. Figure <ref> reveals that the constrained logistic specification for p forces the estimate to `take a shortcut' compared to what the data really say (i.e., the nonparametric estimate), and that smoothes over the interesting features around 0. Remarkably, the length of the interval for p(0) based on this parametric model is 0.061, whereas the nonparametric confidence interval (<ref>) is only slightly longer (margin of 0.070). The price to pay for the flexibility and robustness granted by the nonparametric approach does not seem to be in terms of precision (i.e., length of the confidence intervals) in this study.
Finally, the nonparametric analysis was repeated, but this time excluding the ties which went to extra-time, in order to appreciate the effect of those on the existence or otherwise of the SLHA. There were n_ET = 84 such occurrences of extra-time, so 1,269 two-legged ties remained. The Nadaraya-Watson estimate (h_0 =0.539 by AIC criterion, standard Gaussian kernel) is shown in Figure <ref>. Compared to the estimator using all data, the two curves are remarkably close around 0. This `no-extra-time' version gives p̂_h_0(0) = 0.540. So, somewhat surprisingly, the magnitude of the SLHA is actually not influenced at all by the `extra-time' element. What the plot reveals, though, is that the two curves move apart as x moves away from 0. For x>0, the dashed line (`no-extra-time' curve) is above the solid line, and conversely for x<0. This can be interpreted the following way: when an underdog playing away on the second leg still manages to qualify, that happens `often' after extra-time, i.e., after a long and tough battle. Indeed, one does not expect an underdog to go and qualify easily at the home ground of a much stronger team.
§.§ Discussion
This analysis investigated the existence of a second leg home advantage in two-stage knockout matches in the UEFA Champions and Europa leagues from 2009/10 to 2014/15. A significant effect was found. This finding contrasts with other research where the difference in team strength was controlled for, most relevantly <cit.>, who found a significant positive SLHA that had disappeared by 1995/96, and <cit.>, who found no effect. Obviously, these conflicting conclusions may have many different origins. First and foremost, the analysed data were not exactly the same. <cit.> analysed historical data from all the European cups from their early time (1955/56) up until 2005/06 (taking into account a potential effect of time), while <cit.> looked only at the final knockout stage of the UEFA Champions League from 1994/95 to 2009/10.
In this paper it was decided to analyse data from 2009/10 to 2014/15 in order to keep the control variable X, based on the UEFA club coefficients, homogeneous across the study. Indeed, the way that the UEFA calculates those coefficients has drastically changed over time, most recently in 1999, 2004 and 2009. <cit.> acknowledged this issue and addressed it by using indicator variables to create three covariates to include in their logistic model:
logit(p(x)) = α + β_1 𝐈_(60/61-98/99) x + β_2 𝐈_(99/00-03/04) x + β_3 𝐈_(04/05-05/06) x
(where in their case, x is the difference in UEFA coefficients, as mentioned in Section <ref>). This allowed them to estimate the second-leg home advantage across all seasons as a whole whilst controlling for team strength via the three calculation methods simultaneously. However, the usefulness of an overall measure is not particularly high, as the strength of the effect is known to be changing within the same time period. Their refined time series analysis of the effect essentially breaks the data down into sub-time periods, anyway. <cit.>, on the other hand, did not account for the differences in coefficient calculation methods, or at least they did not mention anything of that sort. In fact, they incorrectly stated that their entire dataset utilises 20% of the country coefficient and then assumed that there are no changes in calculation method [their Section 2.3]. However, there were three changes in the weight that the country index carries over the period that they studied (see Footnote <ref>). The control variable was also different, as here the log-difference between the UEFA coefficients was used, as opposed to their difference in <cit.> and <cit.>. This is arguably more natural for comparing such positive indices.
Finally, <cit.> and <cit.> both used logistic regression modelling, but neither provided, nor made reference to, any diagnostics investigating the validity of their models. Yet, there are serious concerns about it. For instance, Figure 2 in <cit.> indicates that, for a maximum (normalised) difference in UEFA coefficients (-1 or 1), so in the most unbalanced tie one can imagine, the `underdog' team retains around a 20% chance of qualifying. Common sense alone suggests that, if La Fiorita was ever to face FC Barcelona, they would not have 1 in 5 chances of qualifying.
Actually, testing the goodness-of-fit of the logistic model for the data analysed in this paper led to its rejection. Worse, forcing a logistic-based analysis did not allow a significant SLHA effect to be highlighted, whereas it was picked up by the nonparametric procedure.
Highlighting a significant SLHA effect opens the door for further analyses. For instance, <cit.> analysed 199 two-legged knockout ties from the Champions and Europa leagues from 1994/95 to 2006/07, and investigated whether the number of goals scored in each leg is associated with the second-leg home advantage. Table <ref> shows that the average number of goals scored by the away team remains similar in each leg, but the average number scored by the home team increases by approximately 33% from the FLHT to the SLHT. This suggests that the cause of any potential second-leg home advantage is not due to any pressures or influences on the FLHT when playing away on the second leg, but rather it is a boost to the performance of the SLHT on their own soil relative to the FLHT on theirs. Admittedly, this analysis did not control for the relative team strengths (see Section <ref>), but it does provide an interesting avenue for future research to explore. Reproducing this type of analysis whilst controlling for team strengths would probably provide a better understanding of whether the effect is an advantage to the SLHT or more so a disadvantage to the FLHT - or both.
More specific questions may be asked as well. For instance, <cit.> stated in their review that the home field advantage is apparently universal across all types of sport, yet not universal across all teams within a sport. Investigating whether the second-leg home advantage affects different teams individually would also be of interest. Historically, some clubs have indeed demonstrated expertise in improbable comebacks when playing home on the second leg, what is now known in football folklore as the remontada (Spanish for `comeback' or `catch-up'). Research could also turn to the realm of psychology and sociology, attempting to develop for the SLHA a conceptual framework similar to that of <cit.> and <cit.>, briefly exposed in Section <ref>.
§ CONCLUSION
Motivated by a formal analysis of the existence (or otherwise) of the so-called `second-leg home advantage' in some international football competitions, this paper aimed to develop better tools for drawing reliable conclusions from a binary regression model, that is, when a conditional probability function p(x) is to be empirically estimated. In particular, a reliable method for constructing pointwise (i.e., for a fixed value of x) confidence intervals with good empirical coverage properties was needed.
Avoiding rigid and sometimes unwarranted parametric specifications for the function p, the method developed here is based on the Nadaraya-Watson estimator, arguably one of the simplest nonparametric regression estimators. In the case of a binary response, this estimator returns a kind of `conditional sample proportion', from which standard confidence intervals of Wald type can easily be constructed. However, in the basic case of estimating a binomial probability, the Wald confidence interval is known to perform very poorly, and alternative confidence intervals, such as the Wilson and the Agresti-Coull intervals, have been strongly recommended.
The first main methodological contribution of the paper was to extend those `better' confidence intervals to the conditional case. Given that the Nadaraya-Watson estimator is a locally weighted average of the observed binary responses, that extension was very natural and did not present any problem. `Conditional versions' of the Wilson and Agresti-Coull intervals were thus proposed. Actually, any estimator of type
p̂_θ(x) = ∑_i=1^n W_i(x;θ) Y_i,
where {W_i(·;θ); i=1,…,n} is a set of weights, possibly depending on a parameter θ (often: a smoothing parameter, such as a bandwidth) and summing to 1, can be regarded as a `local sample proportion', and hence could serve as the basis of the methodology mutatis mutandis. Nonparametric regression estimators of type (<ref>) are known as linear smoothers, and include many common nonparametric regression estimators such as Local Linear, Splines or basic (i.e., without thresholding) Wavelet estimators, for instance. The methodology developed here is thus very general.
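As an illustration of this generality, the short sketch below computes the equivalent-kernel weights of a local linear estimator, which can then be plugged into exactly the same interval constructions as the Nadaraya-Watson weights. This is a hedged sketch: the function name local_linear_weights is ours, a Gaussian kernel is assumed, and it reuses the numpy/scipy imports from the earlier snippet.

def local_linear_weights(x0, X, h):
    # Equivalent-kernel weights W_i(x0; h) of the local linear estimator;
    # they sum to one, so sum(W_i * Y_i) is again a 'local sample proportion'
    K = norm.pdf((X - x0) / h)
    d = X - x0
    s1, s2 = np.sum(K * d), np.sum(K * d ** 2)
    w = K * (s2 - d * s1)
    return w / np.sum(w)

# Usage: p_hat = np.sum(local_linear_weights(0.0, X, h) * Y)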
As often when nonparametric function estimation is involved, the inherent bias of estimators like (<ref>) constitutes a major stumbling block when devising inferential tools. When building confidence intervals, it has been advocated that proceeding via undersmoothing, that is, working purposely with a sub-optimal bandwidth, would be beneficial in theory. However, an attractive and effective data-driven procedure for selecting such an `undersmoothed' bandwidth was missing up until now. The second main methodological contribution of the paper was to suggest such a procedure, based on some bootstrap resampling scheme.
Somewhat surprisingly, the bandwidth returned by the procedure, supposed to be optimal for building good confidence intervals, was not necessarily smaller than the bandwidth that would otherwise be considered optimal for estimation. That is in contrast with what a naive interpretation of `undersmoothing' would suggest. The procedure was validated through a simulation study, and proved very efficient at returning a bandwidth guaranteeing empirical coverage very close to the nominal level for the so-constructed intervals in all situations. Importantly, nothing in the procedure pertains to the binary regression framework, so it is clear that the suggested methodology can be used for selecting the right bandwidth for building confidence intervals for a general regression function as well.
These new intervals were finally used for answering the research question as to the existence of the second-leg home advantage in international football competitions. To that purpose, data from the UEFA Champions and Europa leagues from 2009/10 to 2014/15 were collected and analysed. Working within the regression framework allowed for the abilities of the teams involved to be taken into account, which, due to UEFA seeding regulations, confound the relationship between playing at home in the second game and the probability of qualification. This confounding factor was confirmed by an exploratory analysis of the data. For reasons made clear in the paper, the relative strength of the matched teams was measured through the log-difference of the UEFA coefficients of the clubs. Then, the nonparametric model revealed a significant second-leg home advantage, with an estimated probability of qualifying when playing at home on the second leg of 0.539 and 95% confidence interval [0.504;0.574], after controlling for the teams' abilities. The existence of such an unwarranted advantage for the team playing at home second may call for some system of compensation and/or handicap in knockout stages of UEFA administered competitions.
Importantly, the analysis provided in this paper is very objective, in the sense that, purely nonparametric in nature, it does not rely on any arbitrary assumption enforced by the analyst which could orientate the conclusions in one or the other direction. In particular, no second-leg home advantage effect was evidenced by previous research, exclusively based on parametric models such as logistic regression but without any justification or validation of that parametric specification. It is revealing to observe that, although not fully supported by the data here, a similar analysis based on logistic modelling was not able to highlight the effect. Model misspecification can, indeed, hide interesting features.
§ ACKNOWLEDGEMENTS
This research was supported by a Faculty Research Grant from the Faculty of Science, UNSW Sydney (Australia).
[Agresti and Coull(1998)]Agresti1998 Agresti, A. and Coull, B.A. (1998), Approximate is better than “exact” for interval estimation of binomial proportions, Amer. Statist., 52, 119-126.
[Blyth and Still(1983)]Blyth1983 Blyth, C.R. and Still, H.A. (1983), Binomial confidence intervals, J. Amer. Statist. Assoc., 78, 108-116.
[Brown et al(2001)]Brown2001b Brown, L.D., Cai, T.T. and DasGupta, A. (2001), Interval estimation for a binomial proportion, Statist. Sci., 16, 101-133.
[Brown et al(2002)]Brown2002 Brown, L.D., Cai, T.T. and DasGupta, A. (2002), Confidence intervals for a binomial proportion and asymptotic expansions, Ann. Statist., 30, 160-201.
[Carron et al(2005)]Carron2005 Carron, A.V., Loughhead, T.M. and Bray, S.R. (2005), The home advantage in sports competitions: Courneya and Carron's (1992) conceptual framework a decade later, J. Sports Sci., 23, 395-407.
[Chen and Qin(2002)]Chen2002 Chen, S.X. and Qin, Y.S. (2002), Confidence intervals based on local linear smoother, Scand. J. Statist., 29, 89-99.
[Copas(1983)]Copas83 Copas, J.B. (1983), Plotting p against x, J. Roy. Statist. Soc. Ser. C, 32, 25-31.
[Courneya and Carron(1992)]Courneya1992 Courneya, K.S. and Carron, A.V. (1992), The home advantage in sport competitions: A literature review, Journal of Sport and Exercise Psychology, 14, 13-27.
[Cressie(1978)]Cressie1978 Cressie, N. (1978), A finely tuned continuity correction, Ann. Inst. Statist. Math., 30, 435-442.
[Eguchi et al(2003)]Eguchi03 Eguchi, S., Kim, T.Y. and Park, B.U. (2003), Local likelihood method: A bridge over parametric and nonparametric regression, J. Nonparametr. Stat., 15, 665-683.
[Eubank and Speckman(1993)]Eubank1993 Eubank, R.L. and Speckman, P.L. (1993), Confidence Bands in Nonparametric Regression, J. Amer. Statist. Assoc., 88, 1287-1301.
[Eugster et al(2011)]Eugster2011 Eugster, M.J.A., Gertheiss, J. and Kaiser, S. (2011), Having the second leg at home: advantage in the UEFA Champions League knockout phase?, J. Quant. Anal. Sports, 7, 1.
[Flores et al(2015)]Flores2015 Flores, R., Forrest, D., de Pablo, C. and Tena, J. (2015), What is a good result in the first leg of a two-legged football match? European J. Oper. Res., 247, 641-647.
[Ghosh(1979)]Ghosh1979 Ghosh, B.K. (1979), A comparison of some approximate confidence intervals for the binomial parameter, J. Amer. Statist. Assoc., 74, 894-900.
[Hall(1992)]Hall1992 Hall, P. (1992), On bootstrap confidence intevals in nonparametric regression, Ann. Statist., 20, 695-711.
[Hall and Horowitz(2013)]Hall13 Hall, P. and Horowitz, J. (2013), A simple bootstrap method for constructing nonparametric confidence bands for functions, Ann. Statist., 41, 1892-1921.
[Härdle and Bowman(1988)]Hardle1988 Härdle, W.K. and Bowman, A.W. (1988), Bootstrapping in nonparametric regression: local adaptive smoothing and confidence bands, J. Amer. Statist. Assoc., 83, 102-110.
[Härdle et al(2004)]Hardle2004 Härdle, W.K., Müller, M., Sperlich, S. and Werwatz, A., Nonparametric and Semiparametric Models: an Introduction, Springer, 2004.
[Horowitz and Savin(2001)]Horowitz2001 Horowitz, J.L. and Savin, N.E. (2001), Binary response models: logits, probits and semiparametrics, J. Econ. Persp., 15, 43-56.
[Hurvich et al(1998)]Hurvich98 Hurvich, C.M., Simonoff, J.S. and Tsai, C.-L. (1998), Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion, J. R. Stat. Soc. Ser. B Stat. Methodol., 60, 271-293.
[Jamieson(2010)]Jamieson2010 Jamieson, J.P. (2010), The home field advantage in athletics: a meta-analysis, J. Appl. Soc. Psychol., 40, 1819-1848.
[Kassies(2016)]Kassies Kassies, B. (2016), UEFA European Cup Coefficients Database, http://kassiesa.home.xs4all.nl/bert/uefa/data/index.html.
[Köhler et al(2014)]Kohler14 Köhler, M., Schindler, A. and Sperlich, S. (2014), A Review and Comparison of Bandwidth Selection Methods for Kernel Regression, Int. Stat. Rev., 82, 243-274.
[le Cessie and van Houwelingen(1991)]Cessie91 le Cessie, S. and van Houwelingen, J.C. (1991), A goodness-of-fit test for binary regression models based on smoothing methods, Biometrics, 47, 1267-1282.
[Lidor et al(2010)]Lidor2011 Lidor, R., BarEli, M., Arnon, M. and BarEli, A.A. (2010), On the advantage of playing the second game at home in the knockout stages of European soccer cup competitions, International Journal of Sport and Exercise Psychology, 8, 312-325.
[Nadaraya(1964)]Nadaraya1964 Nadaraya, E.A. (1964), On estimating regression, Theory Probab. Appl., 9, 141-142.
[Neumann(1997)]Neumann1997 Neumann, M.H. (1997), Pointwise confidence intervals in nonparametric regression with heteroscedastic error structure, Statistics, 29, 1-36.
[Nevill and Holder(1999)]Nevill99 Nevill, A.M. and Holder, R.L. (1999), Home advantage in sport: an overview of studies on the advantage of playing at home, Sports Med., 28, 221-236.
[Olivier and May(2006)]Olivier2006 Olivier, J. and May, W.L. (2006), Weighted confidence interval construction for binomial parameters, Stat. Methods Med. Res., 15, 37-46.
[Page and Page(2007)]Page2007a Page, L. and Page, K. (2007), The second leg home advantage: evidence from European football cup competitions, J. Sports Sci., 25, 1547-1556.
[Pollard(1986)]Pollard1986 Pollard, R. (1986), Home advantage in soccer: a retrospective analysis, J. Sports Sci., 4, 237-248.
[Pollard(2006)]Pollard2006a Pollard, R. (2006), Home advantage in soccer: variations in its magnitude and a literature review of the inter-related factors associated with its existence, J. Sport Behav., 29, 169-189.
[Pollard(2008)]Pollard2008 Pollard, R. (2008), Home advantage in football: a current review of an unsolved puzzle, Open Sports Sci. J., 1, 12-14.
[Pollard and Pollard(2005)]Pollard2005a Pollard, R. and Pollard, G. (2005), Long-term trends in home advantage in professional team sports in North America and England (1876-2003), J. Sports Sci., 23, 337-350.
[Rodríguez-Campos and Cao-Abad(1993)]Rodriguez93 Rodríguez-Campos, M.C. and Cao-Abad, R. (1993), Nonparametric bootstrap confidence intervals for discrete regression functions, J. Econometrics, 58, 207-222.
[Schwartz and Barsky(1977)]Schwartz1977 Schwartz, B. and Barsky, S.F. (1977), The home advantage, Social Forces, 55, 641-661.
[Wasserman(2006)]Wasserman2006 Wasserman, L., All of Nonparametric Statistics, Springer, 2006.
[Watson(1964)]Watson1964 Watson, G.S. (1964), Smooth regression analysis, Sankhya, 26, 359-372.
[Wilson(1927)]Wilson1927 Wilson, E. (1927), Probable inference, the law of succession, and statistical inference, J. Amer. Statist. Assoc., 22, 209-212.
[Xia(1998)]Xia98 Xia, Y. (1998), Bias-corrected confidence bands in nonparametric regression, J. R. Stat. Soc. Ser. B Stat. Methodol., 60, 797-811.
|
http://arxiv.org/abs/1701.07835v2 | 20170126190005 | How to Reconcile the Observed Velocity Function of Galaxies with Theory | [
"Alyson M. Brooks",
"Emmanouil Papastergis",
"Charlotte R. Christensen",
"Fabio Governato",
"Adrienne Stilp",
"Thomas R. Quinn",
"James Wadsley"
] | astro-ph.GA | [
"astro-ph.GA"
] |
1Department of Physics & Astronomy, Rutgers University, 136 Frelinghuysen Rd., Piscataway, NJ 08854;
abrooks@physics.rutgers.edu
2Kapteyn Astronomical Institute, University of Groningen, Landleven 12, Groningen NL-9747AD, Netherlands;
papastergis@astro.rug.nl
†NOVA postdoctoral fellow
3Department of Physics, Grinnell College, Noyce Science Center, 1116 Eighth Ave., Grinnell, IA 50112
4Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195-1580
5Department of Biostatistics, Box 359461, University of Washington, 4333 Brooklyn Ave. NE, Seattle, WA 98195-9461
6Department of Physics & Astronomy, McMaster University, Hamilton, ON, L8S 4M1, Canada
Within a Λ Cold Dark Matter (ΛCDM) scenario, we use high resolution cosmological simulations spanning over four orders of magnitude in galaxy mass to understand the deficit of dwarf galaxies in observed velocity functions. We measure velocities in as similar a way as possible to observations, including generating mock HI data cubes for our simulated galaxies. We demonstrate that this apples-to-apples comparison yields an “observed” velocity function in agreement with observations, reconciling the large number of low-mass halos expected in a ΛCDM cosmological model with the low number of observed dwarfs at a given velocity. We then explore the source of the discrepancy between observations and theory, and conclude that the dearth of observed dwarf galaxies is primarily explained by two effects. The first effect is that galactic rotational velocities derived from the HI linewidth severely underestimate the maximum halo velocity.
The second effect is that a large fraction of halos at the lowest masses are too faint to be detected by current galaxy surveys. We find that cored dark matter density profiles can contribute to the lower observed velocity of galaxies, but only for galaxies in which the velocity is measured interior to the size of the core (∼3 kpc).
§ INTRODUCTION
The velocity function of galaxies is indicative of the number of galactic halos that exist as a function of mass, and is therefore a powerful test of our cosmological galaxy formation model. For galaxies with velocities above ∼100 km s^-1 that are primarily dispersion-dominated, the observed velocity function (VF) is generally in agreement with theoretical expectations within a Λ Cold Dark Matter (CDM) cosmology, as long as the effects of baryons are included <cit.>. To probe lower galaxy masses, which are more likely to be rotation-dominated, HI rotation data are ideal.
<cit.> and <cit.> were some of the first to combine the early type galaxy VF from SDSS with the HIVF from HIPASS to probe lower galaxy masses, and found that theory predicted more dwarfs below ∼80 km s^-1 than observed <cit.>.
The HIVF has since been updated, thanks in large part to data from the ALFALFA HI survey <cit.>, and from systematic optical searches for neighboring galaxies <cit.>.
<cit.> used early data from the ALFALFA survey (at 40% of its eventual sample size) to confirm that there is a deficit of low mass observed galaxies compared to that expected in a ΛCDM cosmology.
In galaxies with rotational velocities, v_rot, ∼25 km s^-1, the ALFALFA HIVF shows nearly an order of magnitude fewer galaxies than expected based on straightforward ΛCDM estimates (e.g., that each dark matter halo contains one luminous galaxy).
<cit.> made a separate measurement of the VF using the catalog of Local Volume galaxies out to 10 Mpc <cit.>. They derived velocities for galaxies as faint as M_B = -10.
Despite probing to these low masses and correcting for completeness, they still found a dearth of low velocity galaxies compared to the number expected in CDM. Likely due to the fact that they could include gas-poor faint galaxies, the discrepancy is not as large as seen in the ALFALFA HIVF sample, but they confirm the nearly factor of 10 discrepancy between theory and observation at v_rot ∼25 km s^-1.
This missing dwarf problem is reminiscent of the missing satellites problem <cit.>, but now extends into the field, beyond the virial radius of more massive galaxies. This means that solutions that rely on the tidal field of the host galaxy to reduce the numbers and masses of satellite dwarfs <cit.> should not apply, and a new mechanism to reduce the number of field galaxies needs to be invoked. One long-standing solution to the missing dwarf problem is warm dark matter <cit.>, in which the thermal relic mass of the dark matter particle is ≳ 2 keV.
However, <cit.> and <cit.> showed that WDM more massive than 1.5 keV doesn't suppress enough structure at low masses to be compatible with the observed VF <cit.>. Lighter masses have already been ruled out based on the small scale structure observed in the Lyman-α forest at high redshift <cit.>. Hence, WDM is difficult to make compatible with all available observational constraints.
Another interpretation of the observed VF is not that there are dwarfs missing, but that low mass galaxies display lower velocities than anticipated. This may be due to complications related to the way rotational velocities are measured observationally <cit.>, baryonic physics <cit.>, or to dark matter physics if the dark matter is self-interacting <cit.>.
However, assigning galaxies with low HI rotational velocities to relatively large halos in order to reproduce the observed VF has also proven challenging in the ΛCDM context. This is because the internal kinematics of dwarfs seem to indicate low-mass hosts. <cit.> demonstrated that galaxies with stellar masses in the 10^6-10^8 M_star range appear to be hosted by much smaller halos than predicted by abundance matching, based on their observed rotation velocities. <cit.> extended this to a much larger sample, but confirmed that galaxies with HI rotation velocities below ∼25 km s^-1 were incompatible with residing in the more massive halos that abundance matching predicts. <cit.> used the observed densities of local group dwarf irregulars to derive the halo masses that they reside in.
All of the galaxies seemed to be in halos of similar mass,
but it was a much lower mass than predicted by abundance matching. They concluded that it does not seem possible to simultaneously reproduce the measured velocity function (i.e., satisfy abundance matching) and the observed densities of galaxies, an issue referred to as the too big to fail problem.
However, recent work by <cit.> used results from simulations in which stellar feedback processes alter the dark matter content of dwarf galaxies to show that they can simultaneously match the densities and velocities of observed dwarfs. In this scenario, feedback from stars and supernovae creates bursty star formation histories in dwarf galaxies that fluctuate the gravitational potential well at the center of the dwarf <cit.>. Dark matter core creation leads to a better match between theory and observed rotation curves <cit.>. Feedback is particularly effective in dwarf galaxies with halo masses of a few 10^10 M_⊙ <cit.>, where it can transform an initially steep inner dark matter density profile into a flatter “cored” profile. At lower halo masses there is less star formation, leading to less energy injection and lower core formation efficiencies <cit.>. At higher masses, the deeper potential wells of galaxies make core formation increasingly difficult <cit.>, at least if an additional source of feedback, such as AGN <cit.>, is neglected. In this model with baryonic feedback, it is possible to assign dwarf galaxies to relatively massive halos, despite the low rotational velocities measured from their spatially resolved stellar kinematics. This is because baryonic feedback can push dark matter out of the central regions, lowering the enclosed mass at the radii that stellar kinematics probe (but without affecting the total halo mass[modulo a slight reduction in halo mass caused by the loss of baryons or preventive feedback <cit.>]). Hence, both the densities and the apparent velocities of the galaxies are lowered, reconciling the observations with theory.
Based on such simulations, <cit.> derived an analytic model for the dark matter density profile that varies with stellar-to-halo mass ratio. <cit.> and <cit.> used this analytic model to derive galaxy trends that they claim reconcile the halo densities and the observed VF. In this work, we use simulations directly. These simulations also create dark matter cores <cit.>, following very similar trends to those in <cit.>. However, because we use the simulations directly, we do not have to resort to analytic models for the baryon distribution in the galaxies. <cit.> also recently used simulations directly to show that baryonic simulations can be reconciled with observations. However, they did not investigate the role of dark matter cores in their results. We show that accounting for the gas distribution is important and not straightforward. Unlike <cit.>, we do not find that dark matter core creation consistently has a large impact on observed velocities of galaxies, yet we do find that we can reproduce the observed VF.
This paper is organized as follows: In Section 2 we present information about the simulations. In Section 3 we demonstrate that deriving velocities from baryons yields a substantially lower velocity in dwarf galaxies than expected from theoretical results that rely on dark matter-only simulations. In Section 3.1 we explore how completeness (i.e., the number of detectable halos at low velocity) affects the observed VF. In Section 3.2 we describe our method to mimic observations and derive velocities in as close a way as possible to the observations. In Section 3.3 we re-derive the expected VF given our completeness results and mock observed velocities. In Section 3.4 we demonstrate that our simulations match other essential scaling relations. We systematically explore the importance of various effects in reducing the observed velocities relative to theoretical velocities in Section 4. In Section 5 we explore the role of dark matter cores on the reduced observed velocities. We find that cores are only important in galaxies where the velocity is measured interior to the size of the core. We compare our results to previous work in Section 6, and conclude in Section 7.
§ THE SIMULATIONS
Table 1: Properties of the Simulated Galaxies

Simulation    v_max,dmo range    M_star range       m_DM,part    m_star,part    Softening    Overdensity       N_DM within R_vir
              (km s^-1)          (M_⊙)              (M_⊙)        (M_⊙)          (pc)         Δρ/ρ
(1)           (2)                (3)                (4)          (5)            (6)          (7)               (8)
Fields 1-6    30-150             2×10^5-10^10       1.6×10^5     8×10^3         174          -0.15 to 1.35     0.03-3.4×10^6
Field 7       43-56              2×10^7-3×10^8      2×10^4       10^3           85           -0.02             0.05-2×10^6
Field 8       38                 10^8               6×10^3       4.2×10^2       64           0.01              2×10^6

Notes. All fields have been run both with baryons and as DM-only. Column (2) lists the v_max,dmo range of each galaxy at z=0 in the DM-only version of the run. Column (3) lists the stellar mass range of the galaxies at z=0 in the baryonic version of the run. Columns (4) and (5) list the masses of individual dark matter and star particles, respectively, in the baryonic runs. Column (6) gives ϵ, the spline gravitational force softening, in pc. Column (7) gives the environmental density relative to the cosmic average, measured as the rms mass fluctuation Δρ/ρ on 8h^-1 Mpc scales. Column (8) lists the range in total number of DM particles within the virial radius of the halos at z=0 in the baryonic runs.
The high-resolution simulations used in this work were run with pkdgrav <cit.> and its baryonic (SPH) version gasoline <cit.>, using a Λ Cold Dark Matter (ΛCDM) cosmology with Ω_m = 0.24, Ω_Λ = 0.76, H_0 = 73 km s^-1 Mpc^-1, σ_8=0.77, and n=0.96. The galaxies were originally selected from two uniform dark matter-only simulations of 25 and 50 comoving Mpc per side. From these volumes, eight field-like regions were selected, each centered on a galaxy with halo mass[The virial radius is defined relative to critical density, ρ_c, where the mean density enclosed is ρ/ρ_c ≈ 100 at z=0.] ranging from 10^10 to 10^12 M_⊙. Each field was then resimulated using the “zoom-in” volume renormalization technique <cit.>, which simulates a region out to roughly 1 Mpc of the primary halo at the highest resolution, while fully preserving the surrounding large scale structure that builds angular momentum in tidal torque theory <cit.>. These simulations were run from approximately z=150 to z=0. A uniform UV background turns on at z = 9, mimicking cosmic reionization following a modified version of <cit.>. The rms mass fluctuation relative to the cosmic average, δρ/ρ, for each chosen field ranges from -0.15 to 1.35 when measured on a scale of 8h^-1 Mpc (see Table <ref>). Five of the fields fall within 0.05 standard deviations of the cosmic mean density.
The spline force softening, ϵ, ranges from 64 pc to 174 pc in the high resolution regions (see Table <ref>), and is kept fixed in physical pc at z < 10. The dark matter (DM) and stellar mass resolutions are listed in Table <ref>. The gas smoothing length is allowed to shrink as small as 0.1ϵ in very dense regions (0.5ϵ is typical) to ensure that hydro forces dominate at very small scales. The main galaxy in every zoomed region contains several millions of DM particles within its virial radius.
The high resolution of these cosmological simulations allows us to identify the high density peaks where H_2 can form. We track the non-equilibrium formation and destruction of H_2, following both a gas-phase and a dust (and hence metallicity) dependent scheme that traces the Lyman-Werner radiation field and allows for gas and dust self-shielding <cit.>. We include cooling from both metal lines and H_2 <cit.>. Metal cooling, H_2 fractions, and self-shielding of high density gas from local radiation play an important role in determining the structure of the interstellar medium and where star formation can occur <cit.>. With this approach, we link the local star formation efficiency directly to the local H_2 abundance. As described in <cit.>, the efficiency of star formation, c^*, is tied to the H_2 fraction, X_ H_2. The resulting star formation rate (SFR) depends on the local gas density such that SFR ∝ c^*X_ H_2(ρ_ gas)^1.5, with c^* = 0.1. This value of c^* gives the correct normalization of the Kennicutt-Schmidt relation.[Note that the efficiency of star formation in any given region is actually much lower than the implied 10%, due to the fact that feedback from newly formed stars quickly disrupts gas, shuts off cooling, and lowers the overall efficiency <cit.>.] Because star formation is restricted to occurring in the presence of H_2, stars naturally form in high density regions (> 100 amu cm^-3), with no star formation density threshold imposed.
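To make the scaling of this recipe concrete, the sketch below shows one schematic way the star formation probability of a gas particle could be computed per timestep. It is a hedged illustration, not the actual gasoline implementation: the function name and the use of the local dynamical time t_dyn ∝ ρ_gas^-1/2 (which yields the quoted SFR ∝ c^* X_H2 ρ_gas^1.5 scaling) are our assumptions.

import numpy as np

def star_formation_probability(rho_gas, X_H2, dt, c_star=0.1, G=4.301e-3):
    # Assumed units: rho_gas in Msun/pc^3, dt in pc/(km/s) (~0.978 Myr),
    # G = 4.301e-3 pc Msun^-1 (km/s)^2
    # Local dynamical time t_dyn = 1/sqrt(4 pi G rho); combined with the
    # H2-dependent efficiency this gives SFR ~ c* X_H2 rho_gas / t_dyn ~ rho_gas^1.5
    t_dyn = 1.0 / np.sqrt(4.0 * np.pi * G * rho_gas)
    # Probability that the gas particle spawns a star particle in timestep dt
    return 1.0 - np.exp(-c_star * X_H2 * dt / t_dyn)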
Star particles represent a simple stellar population born with a <cit.> initial mass function. The star particles lose mass through stellar winds and supernovae (SN Ia and SN II). Supernovae deposit 10^51 ergs of thermal energy into the
surrounding gas following the “blastwave” scheme described in <cit.>. No velocity “kicks” are given to the surrounding gas particles, but thermal energy is deposited and cooling is turned off within a “blastwave” radius and adiabatic expansion phase calculated following <cit.>. The thermal energy deposition from supernovae can lead to bubbles of hot gas that expand, driving winds from the galaxies. Unlike other “sub-grid” schemes, the gas stays hydrodynamically coupled while in galactic outflows. Despite its reliance on supernovae, this model should be interpreted as a scheme to model the effect of energy deposited in the local interstellar medium by all processes related to young stars, including UV radiation from massive stars <cit.>. The rate of ejected mass in winds in these simulations is dependent on galaxy mass, ranging from less than the current SFR in Milky Way-mass galaxies, to typically a few times the current SFR in galaxies with v_circ ∼ 50 km s^-1, to more than 10 times the current SFR in galaxies with v_circ ∼ 20 km s^-1 <cit.>. These ejection rates are similar to what is observed in real galaxies over a range of redshifts <cit.>. Additionally, <cit.> demonstrated that these simulations match the observed stellar mass to halo mass relation <cit.>, by creating a more realistic star formation efficiency as a function of galaxy mass.
The star formation and feedback in these simulations leads to important trends in the resulting galaxies that are important for the present study. First, feedback strongly suppresses star formation, but the amount of suppression scales with galaxy mass <cit.>. In the deeper potential wells of massive galaxies, high densities make it easier for the gas to cool quickly after being heated by supernovae. The lower densities in dwarf galaxies are more susceptible to heating, driving the star formation efficiencies even lower in dwarfs. Hence, even though the simulated dwarf galaxies may lose much of their gas in winds <cit.>, the gas that stays behind is very inefficient at forming stars, so that the dwarfs are very gas rich <cit.>. Second, when star formation is tied directly to high density regions with H_2, subsequent feedback causes these cold, dense regions to become massively over-pressurized. This leads to very bursty star formation histories in dwarf galaxies <cit.>. Bursty star formation creates fluctuations in the galaxy potential well, particularly in halos with masses a few 10^10 M_⊙ <cit.>, which causes initially cuspy dark matter density profiles to transform into flatter “cores”. In Section <ref> we examine whether this core formation lowers the measured v_rot of halos from that predicted in DM-only simulations.
§ THE IMPACT OF BARYONS ON THE VF
The goal of this study is to identify whether baryonic processes can reconcile the VF expected theoretically in a ΛCDM universe with observations. The first attempts to compare the theoretical and observational VFs were based on the results of DM-only cosmological simulations <cit.>. However, this simple approach neglects several baryonic effects that are important for making a fair comparison between theory and observations.
Here, we analyze the impact of baryonic effects on the theoretical VF by using simulations run both with baryons and as DM-only. Our aim is to perform “mock observations” of our baryonic simulations, and derive a theoretical VF in as similar a way as possible to current observational determinations.
In what follows, various definitions of velocity arise. In order to compare results from observations to results from theory, one must define a characteristic galaxy velocity that can be compared. In practice, each defined characteristic velocity is slightly different, being derived in a slightly different way. Below, we explore in detail the results of various definitions of characteristic velocity. To minimize confusion for the reader, in Table <ref> we define each velocity that we use in the remainder of this paper.
We focus our comparison on the VF measured in the Local Volume (D ≲ 10 Mpc) by <cit.>, based on the catalog of nearby galaxies of <cit.>. The catalog is optically selected, and probes with reasonable completeness galaxies as faint as M_B = -10. The majority (∼80%) of galaxies in this Local Volume catalog have measurements of their rotational velocity based on the width of their HI profile, w_50. Some fraction of galaxies lack HI data, either because they are intrinsically gas-poor (e.g., satellites of nearby massive galaxies), or because they have not been targeted by HI observations. These galaxies are assigned rotational velocities based on stellar kinematic measurements when available, or otherwise according to an empirical luminosity-velocity relation. Note that the <cit.> VF is consistent with other independent observational measurements of the VF, such as the one performed by the ALFALFA blind HI survey <cit.>.
In order to make an appropriate comparison with the observational VF measured by <cit.>, we need to model two key observational effects using our baryonic simulations. First, we need to replicate the completeness limitations of the <cit.> catalog at low luminosities. Faint galaxies tend to have low rotational velocities. Hence, if a halo is not detectable in current surveys, the density of observed galaxies at the low velocity end of the VF will be suppressed relative to theoretical expectations that populate each halo with a detectable galaxy. Thus completeness can significantly impact the measurement of the low-velocity end of the VF and must be accounted for. Second, we need to compute the theoretical VF in terms of the rotational velocity measured observationally, w_50. This entails deriving realistic estimates of the HI linewidths for our baryonic halos.
Table 2: Characteristic Velocity Definitions in the Text

v_circ: circular velocity; v_circ = √(GM/r), where M is the mass enclosed within radius r

v_max,dmo: the maximum value of v_circ for a dark-matter-only simulated halo; 2 v_max,dmo is the theoretical counterpart to w^e_50

2 v_max,dmo sin i: twice the maximum value of v_circ measured for a dark-matter-only simulated halo, multiplied by the sine of the observational inclination angle i; the theoretical counterpart to w_50

w^e_50: for galaxies with measurable HI: the full width of a galaxy's HI line profile, measured at 50% of the profile peak height when the galaxy is viewed edge-on (inclination i = 90^∘); for galaxies with no measurable HI: twice the stellar velocity dispersion; the observational counterpart to 2 v_max,dmo

w_50: w^e_50 × sin(i), for a galaxy viewed at a random inclination angle i; the observational counterpart to 2 v_max,dmo sin i

w^e_20: similar to w^e_50 but measured at 20% of the HI profile peak height

V_f: velocity of a galaxy measured on the flat part of the rotation curve

v_max,sph: the maximum value of v_circ for a galaxy halo in a baryonic simulation

v_out: v_circ measured at R_out, the radius at which a galaxy's HI surface density falls below 1 M_⊙/pc^2

v_out,dmo+b: v_circ for a dark-matter-only halo (reduced by a velocity consistent with removing the cosmic baryon fraction) plus v_circ for only the baryons in the counterpart baryonic simulation, measured at R_out, where R_out is determined from the simulated baryonic counterpart

Note that w^e_50, w_50, and w^e_20 are all derived from spatially unresolved data. The remainder of the characteristic velocities are derived from spatially resolved data, and are associated with a particular radius within a given galaxy.
§.§ Detectability of halos
Each “zoomed” simulation contains a high resolution region centered on a halo, ranging in virial mass from 10^10 M_⊙ to 10^12 M_⊙. In addition to the central halo, every zoomed region contains smaller galaxies that we also include in our analysis. Because the theoretical VF is traditionally derived using results from DM-only simulations, for every simulated baryonic halo we identify its counterpart in the DM-only run in order to assign a v_max,dmo value, the maximum circular velocity in the DM-only runs. Because the DM particles are identical in both the baryonic and DM-only initial conditions, identifying a counterpart is relatively straightforward. For all halos in the DM-only run with more than 64 particles, we identify the DM particles that make up each halo in the DM-only run, then find those same particles in the baryonic run and note the halo[Halos are identified with AHF, AMIGA's Halo Finder <cit.>. AHF is publicly available for download.] that most of those particles belong to. We find a matching counterpart for 6271 halos and subhalos.
We use this sample to compute the fraction of halos hosting simulated galaxies with M_∗ > 10^6 M_⊙ in the baryonic runs, as a function of the maximum circular velocity in the DM-only runs, f_det(v_max,dmo). The M_∗ > 10^6 M_⊙ cutoff is chosen because it corresponds to the typical stellar mass of galaxies with M_B = -10, which define the faint limit of the <cit.> measurement. The result is shown in Figure <ref>. As the figure shows, virtually all halos with v_max,dmo ≳ 35 km s^-1 host detectable galaxies, and thus are expected to be included in the VF measurement of <cit.>. On the other hand, the detectable fraction drops precipitously at lower values of v_max,dmo, falling below the 5% level at v_max,dmo ≲ 25 km s^-1. As shown in <ref>, this sharp drop in the fraction of detectable galaxies at low values of v_max,dmo has important consequences for the measurement of the low-velocity end of the VF.
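A minimal sketch of how the detectable fraction f_det(v_max,dmo) can be tabulated from the matched halo catalogs is given below. The function name and the binning scheme are our own choices; the only physical input is the M_∗ > 10^6 M_⊙ detectability cutoff quoted above.

import numpy as np

def detectable_fraction(vmax_dmo, mstar, bin_edges, mstar_cut=1.0e6):
    # vmax_dmo: DM-only maximum circular velocities of the matched halos (km/s)
    # mstar:    stellar masses of their baryonic counterparts (Msun)
    detected = mstar > mstar_cut
    f_det = np.full(len(bin_edges) - 1, np.nan)
    for k in range(len(bin_edges) - 1):
        in_bin = (vmax_dmo >= bin_edges[k]) & (vmax_dmo < bin_edges[k + 1])
        if in_bin.any():
            f_det[k] = detected[in_bin].mean()
    return f_det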
Keep in mind that the value of v_max,dmo where the dramatic drop in detectability takes place is dependent on the depth of the galaxy catalog used to measure the VF. If a deeper census of Local Volume galaxies were available, the minimum detectable stellar mass would be lower than ∼ 10^6 M_⊙, and the drop in detectability would consequently appear at lower values of v_max,dmo than shown in Figure <ref>. Eventually, a physical effect will limit galaxy formation in halos with very low values of v_max,dmo, namely reionization feedback <cit.>.
§.§ Mock “observed” rotational velocities
Most of the galaxies (∼80%) in <cit.> have rotational velocities derived from HI. For this reason, we analyze the HI content of our baryonic halos and derive observationally motivated rotational velocities for our simulated galaxies that contain enough HI mass to fall into the <cit.> catalog. <cit.> also include dispersion-supported galaxies with no measurable HI down to M_B = -10. In this section, we describe our selection criteria to mimic this sample and derive mock observational velocities.
To restrict our sample to halos with enough baryonic material to fall into the <cit.> sample, we identify all halos in the DM-only zoomed runs that have v_max,dmo ≥ 15 km s^-1 at z = 0 and their counterparts in the baryonic zoomed runs.
This yields an initial sample of 57 halos. From this initial sample, we identify those with an HI mass, M_ HI, greater than 10^6 M_⊙, corresponding to the HI mass of the faintest galaxies in the <cit.> catalog that have HI linewidth data. This yields a sample of 42 galaxies with enough HI mass to generate mock HI data cubes (described below). Of the remaining gas-poor galaxies, we keep only those with r-band magnitudes brighter than -10 in the baryonic runs, to approximately mimic the M_B = -10 limit of the <cit.> catalog. Five out of the initial sample of 57 halos are fainter than this r-band cutoff, and are therefore not included in the subsequent analysis.
Ten gas-poor halos with HI masses below our adopted cutoff, M_HI < 10^6 M_⊙, remain in the sample. Four of these dispersion-supported halos with no HI are satellites of a Milky Way-mass galaxy, and we adopt for them the stellar velocity dispersion as the mock “observed” velocity. For the other six faint galaxies without HI data cubes, we adopted the procedure of <cit.>, who assigned a velocity dispersion of 10 km s^-1 to all halos with M_K fainter than -15.5. Hence, we assign these halos a velocity dispersion of 10 km s^-1. However, whether we use a fixed 10 km s^-1 or the stellar velocity dispersion measured directly from the simulation makes no change to our results, as the simulated velocity dispersions are on the order of 10 km s^-1, similar to the observational data.
The HI mass fraction of every gas particle in the baryonic runs is calculated based on the particle's temperature, density, and the cosmic UV background radiation flux, while including a prescription for self-shielding of H_2 and dust shielding in both HI and H_2 <cit.>. This allows for the straightforward calculation of the total HI mass of each simulated galaxy. We create mock HI data cubes only for the 42 halos that contain M_HI > 10^6 M_⊙. Specifically, we create mock data cubes that mimic ALFALFA observations <cit.>. After specifying a viewing angle (see below), our code considers the line-of-sight velocity of each gas particle. The velocity of each particle is tracked in the simulation by solving Newton's equations of motion, but any turbulent velocity of the gas is not taken into account. Velocity dispersions in dwarf galaxies can be on the order of the rotational velocity, ∼10-15 km s^-1 <cit.>. Dispersions are thought to be driven at least partially by thermal velocities or supernovae <cit.>. In our simulations, supernovae inject thermal energy, and the thermal state of the HI gas needs to be considered in the mock HI linewidth for a realistic comparison to observations. To account for the thermal velocity, the HI mass of each gas particle is assumed to be distributed along the line-of-sight in a Gaussian distribution with a standard deviation given by the thermal velocity dispersion, σ = √(kT/m_HI), where T is the temperature of the gas particle. After this thermal broadening is calculated, a mock HI data cube can be generated by specifying the spatial and velocity resolution.
For all of our galaxies, we adopted a spatial resolution of 54 pixels across 2R_vir. In practice, this corresponds to resolutions ranging from ∼1 kpc in our lowest mass galaxies up to ∼9 kpc in our most massive galaxies.
However, the spatial resolution plays no role in our study, since measurements of the VF are based on spatially unresolved HI data. For the velocity resolution, we match the ALFALFA specification of 11.2 km s^-1 (two-channel boxcar smoothed).
For each of the 42 galaxies with M_HI > 10^6 M_⊙, we create two HI data cubes. In the first case, we orient each galaxy to be viewed edge-on, i.e., such that the HI angular momentum vector is lying in the image plane. This generates HI data cubes without inclination effects. In the second case, we pick a random orientation of each simulated galaxy (the x-axis of the simulation volume in all cases) and generate HI data cubes that capture inclination effects. In both cases, we measure the width of the HI profile at 50% of the peak height. Hereafter, we denote the edge-on velocity width by w^e_50, while we denote the velocity width projected at a random inclination angle by w_50. The latter projected velocity width, w_50, is the one that can be directly measured observationally.
For the 10 gas-poor, dispersion-dominated galaxies, we define both w^e_50 and w_50 to be twice the stellar velocity dispersion.
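The essential steps of this spatially unresolved linewidth measurement can be summarized in the short sketch below. This is a simplified illustration rather than our actual data-cube pipeline: the function name is ours, the profile is binned directly in velocity (no spatial gridding, which is irrelevant for unresolved linewidths), and w_50 is taken as the width between the outermost channels at half the profile peak.

import numpy as np

def mock_w50(v_los, m_HI, T, chan=11.2, k_B=1.3807e-23, m_H=1.6726e-27):
    # v_los: line-of-sight particle velocities (km/s); m_HI: particle HI masses
    # Each particle's HI mass is spread as a Gaussian in velocity with thermal
    # width sigma = sqrt(k_B T / m_H), converted from m/s to km/s
    sigma = np.sqrt(k_B * T / m_H) / 1.0e3
    v_grid = np.arange(v_los.min() - 100.0, v_los.max() + 100.0, chan)
    flux = np.zeros_like(v_grid)
    for v, m, s in zip(v_los, m_HI, sigma):
        flux += m * np.exp(-0.5 * ((v_grid - v) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
    above = np.where(flux >= 0.5 * flux.max())[0]
    return v_grid[above[-1]] - v_grid[above[0]]   # full width at 50% of the peak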
Example HI line profiles for three simulated galaxies spanning a large range of mass are shown in Figure <ref>. The HI profiles are derived at random inclination angles, which are indicated in each panel. The figure demonstrates how the HI rotational velocity can differ from the simple theoretical expectation based on the DM-only runs. In particular, we compare the measured w_50 of the simulated galaxies to its simplest theoretical equivalent[The form of the “theoretical velocity width”, 2 v_max,dmo sin i, follows from the fact that the HI profiles plotted in Fig. <ref> are projected on a viewing angle of inclination i, and include emission from both the approaching and receding sides of the HI disk. In the text we will generally use 2 v_max,dmo to compare DM-only velocities with edge-on HI velocity widths, w^e_50, and 2 v_max,dmo sin i to compare with projected HI velocity widths, w_50.], 2 v_max,dmo sin i. The example simulated galaxies shown in Figure <ref> demonstrate a trend that has a profound impact on the computation of the theoretical VF. In particular, the HI velocity for massive galaxies is larger than the DM-only velocity, w_50 > 2 v_max,dmo sin i. This shift to higher velocities in the baryonic run is attributed to the cooling of baryons onto the central halo in massive galaxies <cit.>. On the contrary, low-mass simulated galaxies display the opposite effect. The single-peaked shape of their HI profile leads to a measured value of w_50 that is significantly smaller than 2 v_max,dmo sin i.
Figure <ref> shows the relation between the mock observational and DM-only rotational velocities for all our baryonic halos. More specifically, we compare the edge-on velocity widths, w^e_50, with the equivalent edge-on DM-only widths, 2 v_max,dmo. This is done in order to facilitate a direct comparison that neglects inclination effects. The red points show the average relation in bins containing four (in the highest velocity bins) to nine (in the lowest velocity bins) data points, depending on the density of the data. Error bars reflect the 1σ standard deviation about the average. The dashed line in both panels shows a one-to-one relation between the baryon and DM-only results. It is obvious from this plot that galaxies with 2v_max,dmo ≳ 150 km s^-1 show higher velocities in the baryonic runs than the DM-only runs, while the trend is reversed at lower masses. The dotted line in the right panel shows the decrease expected in velocity from the DM-only runs if all of the baryons had been lost from the halo. The lowest mass galaxies show a much larger change than can be explained due to baryon loss alone.[Note that even if a simulated dwarf galaxy loses a large percentage of the cosmic baryon fraction, it remains gas-rich at z=0 due to the fact that the gas that remains behind is inefficient at forming stars, unless it is a satellite and has had its gas stripped.] We dissect the reasons for this lower-than-expected velocity in Section <ref>.
Twenty of the 52 halos plotted in Figure <ref> are subhalos (denoted by squares) of larger halos. As seen in this figure and those that follow, the simulated galaxies hosted by subhalos follow similar kinematic trends to those hosted by central halos.
§.§ Re-deriving the Expected VF
Based on the results of <ref> and <ref>, we can now compute a realistic expectation for the VF of galaxies in a ΛCDM universe. The process is illustrated in Figure <ref>. In particular, we start from the VF of halos in a ΛCDM universe with Planck cosmological parameters <cit.>. This DM-only VF is plotted as a black dashed line in Fig. <ref>, and is obtained from the BolshoiP dissipationless cosmological simulation <cit.>. The halo VF represents the number density of halos as a function of their maximum circular velocity v_max,dmo. We denote the theoretical DM-only VF by
ϕ_h(v_max,dmo) = dN_h / [dV dlog_10(v_max,dmo)] .
In the equation above, dN_h is the number of halos contained in a representative volume element dV of the universe that have rotational velocities within the logarithmic velocity bin dlog_10(v_max,dmo).
Second, we correct the plotted DM-only halo VF to take into account the detectability of halos as a function of v_max,dmo. We perform this correction based on the result of Figure <ref>. In particular,
ϕ_h,det = f_det(v_max,dmo) ×ϕ_h(v_max,dmo) .
The corrected DM-only VF is plotted in Fig. <ref> as thin grey lines. The bundles of lines represent the uncertainty due to the number of simulated halos used to make Figure <ref>.
Lastly, we compute the change in the theoretical VF that is due to the difference between the theoretical and observational measures of rotational velocity,
ϕ_h,det(v_max,dmo) →ϕ_h,det(w_50) .
This is done by first generating a large number of v_max,dmo values according to the DM-only halo VF corrected for halo detectability (grey lines in Fig. <ref>). We then assign to each generated halo an edge-on velocity width value, w^e_50, based on the mean and scatter of the w^e_50 - 2v_max,dmo relation shown in Figure <ref>. Lastly, we calculate the projected HI velocity width as w_50 = w^e_50 × sin i. Inclination values, i, are drawn assuming random orientations, i.e., such that cos i is uniformly distributed in the [0,1] interval. The final results for the baryonic VF expected in a ΛCDM cosmology according to our simulations are shown by the blue lines in Figure <ref>. The bundles again represent the uncertainty due to the combined uncertainties introduced by the number of simulated halos used to calculate detectability and the number of galaxies in each of the bins in Figure <ref>. This distribution is directly comparable to the observational VF measured in the Local Volume by <cit.>.
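The Monte Carlo construction just described can be sketched as follows. This is a schematic version under stated assumptions: the interpolating functions f_det, w50e_mean and w50e_scatter (our names) encapsulate the detectability and velocity relations of Figures <ref> and <ref>, and the intrinsic scatter about the mean relation is taken to be Gaussian.

import numpy as np

def mock_baryonic_vf(vmax_dmo, f_det, w50e_mean, w50e_scatter, seed=0):
    # vmax_dmo: halo velocities sampled from the DM-only halo VF (km/s)
    rng = np.random.default_rng(seed)
    # Correct for detectability: keep each halo with probability f_det(vmax)
    keep = rng.random(vmax_dmo.size) < f_det(vmax_dmo)
    v = vmax_dmo[keep]
    # Assign edge-on widths from the mean/scatter of the w50e - 2*vmax relation
    w50e = rng.normal(w50e_mean(v), w50e_scatter(v))
    # Project at random orientations: cos(i) uniform in [0, 1]
    cos_i = rng.random(v.size)
    w50 = w50e * np.sqrt(1.0 - cos_i ** 2)
    return w50   # histogram in logarithmic bins to obtain phi(w_50)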
Figure <ref> clearly demonstrates that taking into account both the “observed” velocities and the luminous fraction of halos has a dramatic effect on the theoretical VF. At the high velocity end, the baryonic VF displays a higher normalization than the DM-only distribution, which is caused by the fact that the HI velocity width, w_50, is larger than 2 v_max,dmo sin i for massive halos (refer to Figs. <ref> & <ref>; though note the effect appears less strong in Figure <ref> because it shows 2 v_max,dmo instead of v_max,dmo). However, baryonic effects have their largest impact on the low-velocity end of the theoretical VF. In particular, the fact that low-mass halos have w^e_50 values significantly smaller than 2 v_max,dmo means that the theoretical VF systematically “shifts” towards lower velocities in the dwarf regime. This translates into a substantial reduction of the VF normalization at w_50 ≲ 100 km s^-1.
At even lower velocities, w_50 ≲ 40 km s^-1, the very low detectability of small halos further suppresses the normalization of the baryonic VF. Together, the effects of the baryonic velocity shift and of halo detectability lead to a dramatic decrease in the number of low-velocity galaxies expected in ΛCDM, compared to the simplistic DM-only estimate. As Fig. <ref> shows, the difference is more than an order of magnitude already at w_50 = 50 km s^-1. This huge suppression in the number density at low velocities brings our theoretical VF in agreement with the observational measurements, and shows no signs of the overproduction of dwarf galaxies typically encountered in ΛCDM.
§.§ Validation Against Other Scaling Relations
A key point regarding the results of Fig. <ref> is that reproducing the observational VF in a simulation is not physically meaningful unless the typical HI disk sizes in dwarf galaxies are also reproduced correctly.
This is because the ratio between w^e_50 and 2 v_max,dmo can be made arbitrarily small in dwarf galaxies by producing simulated galaxies with very small HI disks.
Because the innermost portion of the rotation curve is rapidly rising,
it could be possible to reproduce the observed VF but not accurately reproduce observed disk sizes.
Figure <ref> compares the sizes of HI disks in our simulated galaxies with the observed sizes in the sample of galaxies with interferometric HI observations compiled by <cit.>. The observational datapoints show the outermost radius where the HI rotational velocity can be measured by the interferometric observations for each galaxy. One complication here is that the outermost HI radius for the galaxies in the <cit.> sample is not defined in a consistent way, but depends on the depth of each interferometric observation and the quality of each galaxy's kinematics. For the simulations, we derive “outermost” HI radii where the HI surface density profiles of our simulated galaxies fall below 1 M_⊙/pc^2. The adopted HI surface density cutoff corresponds to the value probed by typical interferometric HI observations. We examined the results using different definitions of “outermost” HI radius for our simulated galaxies, and found that the results were generally consistent but that this definition produces the least scatter. This is not surprising, because we have also verified that our simulated galaxies follow the observed HI mass – radius relation from <cit.>, where the HI radius is again defined at the 1 M_⊙/pc^2 isophote. The observed relation has remarkably low scatter, so it is reassuring that using a similar definition for the simulations also produces the smallest scatter. Overall, Fig. <ref> shows that our simulated galaxies have HI disk sizes that are in agreement with observations, indicating that the mock observational velocities computed in <ref> are realistic.
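For reference, the adopted definition of the outermost HI radius can be implemented as below. This is a hedged sketch (the function name and the linear interpolation between profile bins are our choices); the physical input is only the 1 M_⊙/pc^2 surface density threshold.

import numpy as np

def r_out(r, sigma_HI, threshold=1.0):
    # r: radial bins (kpc); sigma_HI: HI surface density profile (Msun/pc^2)
    # Returns the radius where sigma_HI first drops below the threshold,
    # linearly interpolated between neighboring bins
    below = np.where(sigma_HI < threshold)[0]
    if below.size == 0:
        return r[-1]            # profile never drops below the threshold
    k = below[0]
    if k == 0:
        return r[0]
    frac = (sigma_HI[k - 1] - threshold) / (sigma_HI[k - 1] - sigma_HI[k])
    return r[k - 1] + frac * (r[k] - r[k - 1])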
Similarly, the fraction of detectable halos computed in <ref> is not physically meaningful unless our simulations reproduce the baryonic content of real galaxies. In Figure <ref> we show the baryonic (cold gas plus stellar mass) Tully-Fisher relation for the simulated galaxies used in this work (black points, top panel). We restrict ourselves to central galaxies only (excluding subhalos) for comparison to the observational data, which is taken from <cit.>. The line in both panels is the baryonic Tully-Fisher relation fit to observed galaxies in <cit.>, log(M_b) = 1.61 + 4.04 log(V_f). Since the <cit.> measurement refers to the flat outer velocity of galactic rotation curves, we adopt for the simulations the circular velocity of the baryonic runs measured at 4 disk scale lengths as V_f. The bottom panel is for the cold gas mass (1.33 M_HI in the simulations) only. The simulations have been divided into a gas-rich (M_star/1.33M_HI < 2.0, blue points) and gas-poor (M_star/1.33M_HI > 2.0, red points) sample. Like the observational data, gas-rich galaxies follow the observed baryonic Tully-Fisher relation, while gas-poor galaxies lie below the relation (the dotted line shows the relation reduced by a factor of 5). This plot demonstrates that our simulated galaxies match the stellar and HI masses of galaxies as a function of velocity.
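For reference, the quoted relation can be evaluated directly. The sketch below computes a galaxy's offset from the fit; the helper name btfr_logmb and the input masses and velocity are made-up example values, not data from this work.

import numpy as np

def btfr_logmb(vf_kms):
    """Baryonic Tully-Fisher fit quoted in the text:
    log10(M_b/M_sun) = 1.61 + 4.04*log10(V_f)."""
    return 1.61 + 4.04 * np.log10(vf_kms)

# offset of a galaxy from the relation (dex); inputs are illustrative
m_star, m_hi, vf = 2e8, 5e8, 55.0          # M_sun, M_sun, km/s
m_b = m_star + 1.33 * m_hi                  # cold gas mass = 1.33 * M_HI
print(np.log10(m_b) - btfr_logmb(vf))       # ~0.3 dex above the fit here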
Overall, Figures <ref> & <ref> give us confidence that the theoretical VF computed in <ref> is physically well motivated. Consequently, moving from predictions based on DM-only runs to baryonic simulations may be the key to reconciling the theoretical expectation of the VF with the observational measurements.
§ VELOCITY CHANGES IN THE PRESENCE OF BARYONS
In this section, we examine the baryonic effects that lead to dwarf halos being observed at lower velocities than predicted based on DM-only simulations, and that help to reconcile the theory with the observations.
It is well known that the rotation curves of many dwarf galaxies are still rising at their outermost measured point <cit.>, suggesting that the true v_max of the halo is higher than HI measures. In this section we
use the velocity at the outermost HI data point in our baryonic simulations in order to determine how much of a role this plays in the lowered velocities we see in the dwarf simulations compared to their DM-only v_max,dmo values. Recall that in Figure <ref> we defined the outermost HI data point, R_out, in our simulations to be the point at which the HI surface density falls below 1 M_⊙/pc^2. In what follows, we refer to the circular velocity at R_out as v_out.
In our dwarf galaxies, the radius of the outermost HI data is generally still on the rising part of the
rotation curve. We quantify this in the top panel of Figure <ref>, where
we compare v_out to the maximum value of the circular velocity in the baryonic
run, v_max,sph.[Note that up until now the v_max we have been
dealing with comes from the DM-only runs, v_max,dmo. v_max,sph will differ
from v_max,dmo due to processes like baryonic contraction at high masses, or
loss of most of the baryons from the smallest mass halos. We wish to quantify how
well HI traces the rotation velocity after these other factors have had their
influence, and ultimately determine how well w_50 is tracing the outermost
HI rotation velocity. Hence, we switch to v_max,sph in Figure <ref>.]
In the more massive galaxies, v_out is indeed
capturing the maximum value of the rotation curve. However, in galaxies below
∼50 km s^-1, the outermost HI rotation velocity systematically underestimates
v_max,sph.
Next we wish to know if w^e_50 is tracing the outermost velocity, v_out. The second panel in Figure <ref> shows the ratio of the two. In the four most massive galaxies, w^e_50 traces a slightly higher velocity than the outermost HI rotation velocity, due to the fact that these galaxies have large bulges and higher velocities near their center. More importantly for interpreting dwarf galaxy data, w^e_50 is systematically smaller than v_out. In other words, v_out already under-measures the maximum rotational velocity of the galaxy because it lies on the rising rotation curve, but w^e_50 measures an even lower velocity. This suggests that w^e_50 may be probing a velocity even closer to the center than v_out.
Evidence that this is the case is found in the third panel of Figure <ref>, where we compare w^e_20 to v_out instead. w^e_20 measures the width of the HI profile at 20% of the peak height rather than 50%. While it slightly overestimates v_out in galaxies above ∼50 km s^-1, it does a much better job of capturing v_out in the lower mass galaxies. In summary, w^e_20 is a more reliable indicator of the outermost measurable rotation velocity in the dwarf galaxies.
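The w_50 vs. w_20 distinction amounts to measuring the profile width at different fractions of the peak flux. Below is a simplified stand-in for such a measurement (real pipelines fit the profile edges and correct for noise and instrumental broadening); the Gaussian toy profile illustrates why w_20 exceeds w_50 so strongly for dwarf-like line shapes.

import numpy as np

def profile_width(v_kms, flux, frac):
    """Velocity width of an HI line profile at `frac` (e.g. 0.5 or 0.2)
    of the peak flux, with linear interpolation at the edge crossings."""
    level = frac * flux.max()
    above = np.where(flux >= level)[0]
    i_lo, i_hi = above[0], above[-1]

    def edge(i_in, i_out):
        # interpolate between the inside (>= level) and outside bins
        f_out, f_in = flux[i_out], flux[i_in]
        return v_kms[i_out] + (level - f_out) / (f_in - f_out) * \
            (v_kms[i_in] - v_kms[i_out])

    v_left = edge(i_lo, i_lo - 1) if i_lo > 0 else v_kms[0]
    v_right = edge(i_hi, i_hi + 1) if i_hi < len(flux) - 1 else v_kms[-1]
    return v_right - v_left

# toy Gaussian (dwarf-like) profile: w20 noticeably exceeds w50
v = np.linspace(-100.0, 100.0, 401)
flux = np.exp(-0.5 * (v / 20.0) ** 2)
print(profile_width(v, flux, 0.5), profile_width(v, flux, 0.2))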
Finally, the bottom panel of Figure <ref> shows the ratio between our
w^e_50 and w^e_20 measurements, and demonstrates that w^e_20 can measure a much larger velocity in the dwarfs than w^e_50, up to a factor of two larger in the lowest mass galaxies. This difference has been noted previously. Using
ALFALFA data, <cit.> showed that the difference between the two velocities is well described by the relation
w^e_20 = w^e_50 + 25 km s^-1 <cit.>. This relation is shown as the black line in the bottom panel of Figure <ref>. <cit.> showed that the discrepancy between w^e_20 and w^e_50 could lead to substantial differences in the slope of the baryonic Tully-Fisher relation, while <cit.> demonstrated that the use of w^e_50 instead of v_max,dmo could fully explain the difference in the theoretical VF compared to observations. We note that almost all observational measurements of the VF are based on w^e_50 rather than w^e_20 <cit.>, because the line width at 20% of the peak height can be hard to measure given the spectrum noise at typical signal-to-noise ratios.
The change between w^e_20 and w^e_50 is likely due to the shape of the HI profile as a function of mass. As was seen in Figure <ref>, more
massive galaxies exhibit a double-horned profile. The horns are built up due
to the piling up of velocity along the flat part of the rotation curve in large
spirals. However, lower mass galaxies
are usually still rising at the outermost HI data point, as discussed above.
This leads to an HI profile that is more Gaussian. The drop-off at the edges
of the double-horned profile is rapid, so that the difference between w^e_20 and w^e_50 is small. However, the Gaussian shape in the dwarfs ensures that this is no longer true. Measuring lower in the HI profile can lead to a much
larger velocity width. These higher velocities must come from further out on
the rotation curve.
In summary, the rotational velocity traced by HI does not generally reach the full v_max,sph for dwarf galaxies below ∼50 km s^-1. This is due to the fact that the outermost HI is still on the rising part of the rotation curve. Additionally, w^e_50 does not measure the outermost HI rotation velocity in dwarf galaxies, compounding the problem further. The combination of these two effects leads to the shift in velocities measured between the baryonic and DM-only simulations seen in Figure <ref>.
§ DOES DARK MATTER CORE CREATION MATTER?
Recent high resolution cosmological simulations of galaxies, including those
used in this study, have shown
that feedback from young stars and supernovae can create dark matter cores
in galaxies <cit.>. <cit.>
and <cit.> showed that this result varies with stellar mass (and
thus also with halo mass, given that there is a stellar-to-halo mass relation).
The shallow potential wells of dwarf galaxies at M_vir∼ 10^10 M_⊙
are particularly susceptible to core creation, but the deeper potential wells
of MW-mass galaxies are less so; it is also harder to create large cores in lower mass halos, which form fewer stars and therefore inject
less energy <cit.>.
In this section we explore whether the change in the dark matter profile in
dwarf galaxies has any impact on the observed VF. Work by <cit.> concluded that measuring a theoretical velocity at the radius which reproduces w_50 is not enough to match observed velocities in models that retain a cuspy, NFW dark matter density profile. Instead, they showed that additionally considering dark matter core creation could lower the theoretical velocities enough to bring them in line with observations. We demonstrate here that this is true only for galaxies which have R_out≲ 3 kpc.
Assessing the impact of core creation is not simple because the densities in the baryonic simulations may also be subjected to some level of contraction due to the presence of the baryons, and disentangling the two effects is not straightforward. Note that this contraction does not have to be adiabatic contraction of the dark matter, and in fact adiabatic contraction of the dark matter is unlikely to occur in the dwarf regime that we are exploring here. However, as we demonstrate below, the fact that gas can cool to the center of the galaxy can increase the rotation velocity in the inner regions, even in dwarf galaxies, in the baryonic simulations. This effect must be accounted for before a direct comparison can be made between the velocities in the baryonic runs and the DM-only runs. If it is not accounted for, a comparison between the baryonic and DM-only velocities would minimize the impact of dark matter core creation.
To overcome this, we develop a proxy for a contracted model without core creation by adding together the velocity profile in a DM-only run[A DM-only simulation contains the cosmic density of all matter, Ω_baryon and Ω_DM. Here, we scale down the velocity profile of the DM-only run by an amount consistent with removing the cosmic baryonic fraction, so that we can add the baryonic contribution from the SPH runs instead.] with the velocity profile of only the baryonic component in its counterpart SPH run. This effectively “contracts” the profile due to the presence of baryons, but does not include dark matter cores since the DM-only runs do not experience core creation. We measure the velocity from this combined model at the outermost HI radius, R_out, determined from the SPH runs, and label it v_out,dmo+b.
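Since v_c^2(r) = GM(<r)/r is additive in the enclosed mass, this combined model can be sketched as a quadrature sum of circular velocities, as below. The Planck-like baryon fraction and the sample curves are illustrative assumptions; the exact rescaling used in the paper follows the footnote above.

import numpy as np

def v_contracted(v_dmo, v_bar_sph, f_baryon=0.157):
    """Proxy 'contracted, no-core' rotation curve: the DM-only circular
    velocity (rescaled to remove the cosmic baryon fraction it implicitly
    contains) added in quadrature to the baryonic contribution from the
    SPH run. Contributions add in v^2 because v_c^2 = G M(<r)/r.
    f_baryon ~ 0.157 is an assumed Planck-like Omega_b/Omega_m."""
    v_dm_only = v_dmo * np.sqrt(1.0 - f_baryon)
    return np.sqrt(v_dm_only ** 2 + v_bar_sph ** 2)

# illustrative curves sampled on the same radial grid (km/s)
v_dmo = np.array([20.0, 35.0, 45.0, 52.0, 55.0])
v_bar = np.array([10.0, 14.0, 15.0, 14.0, 12.0])
print(v_contracted(v_dmo, v_bar))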
In Figure <ref> we compare v_out measured in the SPH runs to v_out,dmo+b as a function of R_out. The ratio v_out/v_out,dmo+b gives us an estimate of how much core creation alone has suppressed the rotation curve in the baryonic runs. The data points are color coded based on the slope of their dark matter density profile, measured between 300-700 pc, labeled α_500pc. We include in this plot all simulated galaxies with HI. The lowest mass galaxies have star formation efficiencies too low to create substantial dark matter cores. Core creation is not the only mechanism that can suppress the rotation curve, as loss of baryons alone can lower the baryonic rotation curve relative to the DM-only case. The ratio expected for pure baryonic mass loss is shown by the dotted line in Figure <ref>. If core creation is important, we would expect the strongly cored galaxies to lie systematically lower than other galaxies. We find this is only true for galaxies with R_out < 2-3 kpc.
For galaxies with R_out < 3 kpc (corresponding to v_out < 50 km s^-1), the galaxies with dark matter cores generally occupy the lowest velocity ratios. This suggests that core creation contributes to velocity suppression in this regime. The velocities can be lower by up to 40%, comparable to the reduction from measuring on the rising part of the rotation curve alone (see top panel of Figure <ref>). Thus, cores do seem to substantially contribute to lowered velocities for galaxies with R_out < 3 kpc.
For galaxies with R_out > 3 kpc, there are strongly cored galaxies that do not show any signs of having their velocities reduced. In Figure <ref>, we provide an example of why a galaxy with a strong dark matter core may not have a lower velocity.
Figure <ref> shows the rotation curve for one of our dwarf galaxies that
undergoes significant dark matter core creation. At z=0 this halo has a dark matter density slope of -0.3.
This profile causes the rotation curve to rise much more slowly in the baryonic run (red solid line) compared to the combined DM-only/baryonic model (solid black line) or the DM-only run (black dashed line). The red dashed line shows the DM contribution to the baryonic run's total v_circ, to emphasize the presence of the dark matter core.
It can be seen that the baryonic run has a lower rotational velocity than the
combined DM-only/baryonic model interior to ∼2 kpc. It is clear from Figure <ref> that if the HI is tracing velocity interior to ∼2kpc, then core creation would reduce the measured velocity in this galaxy. However, this galaxy has HI gas that extends out to roughly 5 kpc, where it is tracing the flat part of the rotation curve, and is an excellent measure of v_max,sph.
This galaxy also highlights another subtle point.
The DM-only run reaches v_max,dmo = 55.8 km s^-1 at 27 kpc. The baryonic run reaches v_max,sph = 58.3 km s^-1 at 7.5 kpc. The velocity from the HI profile, w^e_50, is 55 km s^-1, comparable to the v_max,dmo measured in the DM-only run. Thus, there is almost no change in v_max between the two runs, i.e., this halo does not undergo adiabatic contraction in the usual sense. It is simply that the radius at which v_max occurs is quite different. In the baryonic run, the fact that gas can cool leads to the mass being more centralized than in the DM-only run, without increasing v_max overall. Likewise, the “contracted” model combining the DM-only profile with the baryonic profile is not adiabatically contracted, but simply reaches v_max at a smaller radius. The cold gas increases the central velocity relative to the DM-only run despite
the fact that this dwarf is dark matter dominated overall, with a baryon ratio (cold gas and stellar mass to total DM mass) of only 2% at z = 0 (but remains gas-rich due to the fact that star formation is inefficient).
There are a total of four galaxies in our sample where the DM-only counterpart has v_max,dmo∼ 55 km s^-1, like the galaxy shown in Figure <ref>. All of these galaxies have stellar masses between 1.5-3×10^8 M_⊙, and all have a cored dark matter density profile, but their HI masses vary by an order of magnitude. Two of them have R_out∼ 1.5 kpc, while two have R_out∼ 5 kpc. As expected, the two with small R_out have substantially lower w^e_50 values compared to v_max,dmo. Hence, scatter in the HI content at a given halo mass leads to scatter in the role of dark matter cores.
From these examples, we learn that if core creation is to impact the measured
velocity in a galaxy, the HI must not extend significantly further than the size of the dark matter core. A similar conclusion was found by <cit.> by analyzing observational dwarf data. In simulated galaxies with efficient core creation, the dark matter cores are often 1-2 kpc. From Figure <ref>, we see that the strongly cored galaxies with R_out < 2 kpc do indeed tend to show a lower rotation velocity in the baryonic run than their DM-only counterpart.
§ COMPARISON WITH PREVIOUS WORKS
In this section we discuss how our results compare to previous works on this topic. First we focus on the ability to reproduce the VF, then specifically on the impact of dark matter cores.
§.§ Velocities
<cit.> were the first to show explicitly that using w^e_50 instead of v_max,dmo could reconcile the theoretical VF with the observed VF. Their approach was semi-empirical, using abundance matching (a relationship between baryonic mass and halo mass) convolved with a relation between baryonic mass and velocity. They showed the impact of using various definitions of velocity, with only w^e_50 recovering the observed VF.
Like the work presented in this paper, <cit.> also used cosmological zoomed simulations, the NIHAO suite, to make mock HI profiles, and showed that their measured w^e_50 could reproduce the observed VF. <cit.> followed a similar analysis as in this paper, and both works use galaxies simulated with the code Gasoline, but the simulations vary in terms of details. A slightly lower resolution in most of the NIHAO galaxies prevents the use of H_2-based star formation as used here, but NIHAO includes a prescription for early stellar feedback (feedback from young massive stars that is deposited prior to the first SNII from any given star particle). A detailed comparison of mock observed velocities at a given v_max,dmo shows that the mock velocities in NIHAO are lower than in this work. Perhaps because of this, <cit.> need not consider completeness in order to reproduce the observed VF; the lower velocities of w^e_50 alone are enough to allow the NIHAO galaxies to match the data (and may even slightly over-reduce the velocities in the lowest halos; see their figure 3).
Thus, this work and both <cit.> and <cit.> have concluded that the difference between v_max,dmo and w^e_50 is the primary reason for the disagreement between theory and observations. An apples-to-apples comparison between models and real galaxies alleviates the tension.
On the other hand, <cit.> attempt to correct observed velocities to their underling v_max. Using a sample of galaxies with resolved HI rotation curves from <cit.>, they fit v_out to both NFW and cored rotation curve models in order to infer the true v_max of each galaxy. This correction can then be applied to galaxies with unresolved HI velocities of similar baryonic mass. However, they conclude that there is not enough of a shift to resolve the discrepancy between the theoretical and observed VF, even when the effects of dark matter core creation are taken into account.
To reconcile the work of <cit.> with the conclusions in this paper, <cit.>, and <cit.>, the correction from observed v_out to v_max must fail. Mock resolved HI rotation curves of the simulated galaxies should, in principle, be able to address this question. However, results so far are inconclusive. <cit.> made mock HI rotation curves of two simulated dwarf galaxies and tested the conditions under which they could reliably recover the model halo masses. They found that starburst and post-starburst dwarf galaxies have large HI bubbles that push the rotation curve out of equilibrium, and that galaxies viewed near face-on also presented problems, but could otherwise recover their model inputs (as long as they used a model with a dark matter core). They concluded that a carefully selected sample should allow for a reliable recovery of true halo masses. In <cit.> they applied their method to 19 observed galaxies, and derived a stellar mass-to-halo mass relation in agreement with abundance matching results for field galaxies, concluding that there are no dwarf galaxy problems in ΛCDM. On the other hand, <cit.> failed to recover the true v_circ of any of their 10 dwarf galaxies (from the Moria simulation suite) when producing mock HI rotation curves. They conclude that the disks of dwarfs are simply too thick, and that feedback causes significant structure and disequilibrium, so that the HI rotation curve fails to be a good measure of the underlying gravitational potential. Given the mixed results, more work in this area is required.
§.§ Dark Matter Cores
A recent analysis by <cit.> also examined the effects of dark matter core creation on the observed galaxy VF, comparing to the Local Volume VF derived in <cit.>. The top panel of Figure <ref> shows the measured velocity dispersion (for HI poor galaxies) or edge-on w^e_50/2 (for HI rich galaxies) versus the stellar mass in the simulated galaxies. The flattening of w^e_50/2 below ∼10^7 M_⊙ is attributed to core creation in <cit.>. This flattening is not reproduced in their models with an NFW profile (they examine galaxies down to 10^6 M_⊙ in stellar mass). Only their model that includes dark matter core creation reproduces this flattening.
Although we also find this flattening to occur at ∼10^7 M_⊙, the bottom panel of Figure <ref> demonstrates that this flattening in velocity cannot be due to core creation, as the trend is found in DM-only runs as well.
While /2 is a quantity derived from the baryonic simulations, the bottom panel of Figure <ref> shows v_max,dmo plotted against the stellar mass of the galaxies in the baryonic version of the runs. Recall that v_max,dmo is a quantity derived from the DM-only versions of the galaxies. DM core creation requires the presence of baryons, and hence cores cannot form in the DM-only runs. The galaxies in the DM-only runs retain a steep, cuspy DM density profile. Despite the steep inner profile, the flattening of the trend at low stellar masses persists in each panel. Hence, core creation cannot be responsible for the flattening. This is contrary to the conclusions in <cit.>. Reinforcing this conclusion, the data points in Figure <ref> are again color coded by the slope of the DM density profile in the baryonic version of the runs. While cored galaxies tend to cluster in a given stellar mass range, they do not appear to play a role in the flattening of the trend below stellar masses of 10^7 M_⊙.
We note that the flattening below stellar masses of 10^7 M_⊙ is consistent with observational data <cit.>, which show a roughly constant velocity of ∼10 km s^-1 and tend to be in dispersion supported galaxies. As previously discussed in Section 3.2, our faintest simulated galaxies have velocity dispersions ∼10 km s^-1, consistent with the observations. This is a more direct comparison to the observations than presented in <cit.>, where they measured v_circ of a model galaxy at the radius that best reproduced w_50 values.
We offer a different interpretation for the flattening of velocities at low galaxy masses: the steep relation between M_star and v_max,dmo (or, equivalently, between M_star and M_halo) at low halo masses. As has been noted by previous authors <cit.>, the steep relation at low halo masses suggests that galaxies over a wide range of stellar masses (10^6-10^8 M_⊙) reside in nearly the same host halo mass <cit.>. The bottom panel of Figure <ref> confirms that this trend also occurs in our simulations. All of the low stellar mass galaxies reside in a narrow range of v_max,dmo (or equivalently, M_halo). This will lead them to have similar observed velocities as well, as seen in the top panel.
We note that unlike <cit.>, our results are not in conflict with the conclusions in <cit.>. In that paper, the role of core creation on the Too Big to Fail Problem <cit.> was explored using an analytic model (not simulations) for galaxies that had halo masses determined using stellar velocity dispersions at their half light radii. In all cases, the half light radius is ≲1 kpc. As we have seen, core creation can reduce the velocities of dwarfs interior to 1 kpc, and will thus alter the derived masses (though the magnitude of the reduction may not be as significant as <cit.> predicted for dwarf Irregulars, since they neglected gas in the inner kpc). HI, on the other hand, can extend to much larger radii than typical half light radii, and eliminates any impact of dark matter cores on the measured velocity <cit.>.
§ CONCLUSIONS
In this work, we have used high resolution cosmological simulations of individual galaxies in order to resolve the discrepancy between the observed galaxy VF and the predicted VF within ΛCDM. In particular we study the apparent dearth of observed low velocity galaxies.
To ensure that the simulated galaxies have realistic sizes and gas contents, and thus can be used to interpret observations, we verified that the simulated galaxies with baryons match observed scaling relations. In particular, the simulations match the HI sizes of galaxies as a function of velocity and the baryonic Tully-Fisher relation.
We use these realistic galaxies to generate mock “observed” velocities. For galaxies with M_HI > 10^6 M_⊙, we produce mock HI datacubes and derive a characteristic velocity using the width of the HI profile at 50% of the peak height, w_50. This is the velocity commonly used to generate the observed galaxy VF <cit.>. For gas poor galaxies, we follow the procedure of <cit.> and use stellar velocity dispersion. When the “observed” velocities from baryonic simulations are compared to theoretical velocities (derived from the maximum circular velocity of matched counterpart halos in dark matter-only simulations), we find that there is a systematic shift in dwarf galaxies to lower velocities (see Figure <ref>). The magnitude of this velocity shift, combined with a proper accounting of luminous halos, reconciles the observed VF with the theoretical VF (Figure <ref>).
Thus, there are two primary considerations necessary to bring the theoretical VF into agreement with the observed VF. First, to match the observed VF at velocities below ∼40 km s^-1, the fraction of luminous halos must be accounted for. If a halo does not host a luminous galaxy, it will remain undetected in current surveys, lowering the observed number of galaxies at low velocities compared to theoretical expectations that allow all halos to host a detectable galaxy. Here, we calculated the luminous fraction for halos with M_* > 10^6 M_⊙, which corresponds to the lower luminosity limit used to calculate the observed VF in <cit.>. The fraction of luminous halos drops precipitously below 40 km s^-1. Without considering this effect, the velocity difference alone between our mock observations and theoretical velocities is not sufficient to reproduce the observed VF at the low velocity end. We note that previous work on this subject did not explicitly consider the fraction of luminous halos <cit.>.
Second, to match the observed VF it is necessary to derive a relationship between observed characteristic velocities of galaxies and theoretical velocities for halos. We have demonstrated here that this relationship shifts the predicted VF into agreement with the current observations. The source of the velocity shift in dwarf galaxies is a combination of factors:
(1) The primary shift that makes observed velocities lower than theoretical velocities in dwarf galaxies is due to the fact that the velocity tracer (typically HI) does not trace the full potential wells of dwarfs. That is, the outermost HI is still on the rising part of the rotation curve <cit.>. We demonstrate this in the top panel of Figure <ref>, where we explicitly found the circular velocity at the radius that the HI surface density dropped below 1 M_⊙/pc^2, i.e., the outermost observable rotation velocity, v_out, in simulated galaxies with baryons. The top panel of Figure <ref> shows that v_out underpredicts the maximum value of the circular velocity in dwarf galaxies.
(2) For galaxies that derive a characteristic velocity using w_50, there is an additional reduction in observed velocity. We demonstrate this in the second panel of Figure <ref>, where we compare the velocity derived from w_50 to v_out. Although v_out was already lower than the true maximum velocity of a galaxy's dark matter halo, w_50 can be an additional 50% lower than v_out. This is because the HI line profile shape in dwarfs tends to be Gaussian. Measuring at a lower peak height, 20%, instead agrees with v_out (third panel of Figure <ref>). To date, essentially all observed VF measurements have been made with w_50 rather than w_20 because typical signal-to-noise ratios generally prevent a reliable measurement of w_20.
(3) For galaxies with HI sizes under ∼3 kpc, an additional reduction in velocity can occur if the galaxy has a dark matter core. Typical core sizes found in simulations are on the order of 1-2 kpc, and we demonstrate in Figure <ref> that core creation reduces the overall circular velocity in the very center of cored galaxies. However, if the characteristic rotational velocity is derived at a larger radius, then the measured circular velocity is usually comparable to the expected theoretical velocity. We attempted to quantify the contribution of core creation to the reduction in velocity in Figure <ref>. There, we compared the circular velocity of halos in both the baryonic and contracted (dark matter-only + baryons) models at R_out. This removes the contribution to the reduction in velocity due to being on the rising part of the rotation curve, and avoids the reduction due to . Figure <ref> shows that galaxies with HI sizes < 3 kpc typically have lower circular velocities than the contracted dark matter models, by up to 40%. This reduction is comparable to the reduction in velocity from measuring on the rising part of the rotation curve alone (see top panel of Figure <ref>). Hence, core creation leads to a further reduction in observed velocities for galaxies with R_out < 3 kpc.
Overall, we have demonstrated in this paper that we can start with the abundance of dwarf galaxies predicted in ΛCDM and reconcile the theoretical predictions with the observed VF. We do this by properly accounting for the relation between characteristic velocities derived from observations and the characteristic velocities typically derived from theory, and by accounting for the fraction of observable halos detectable in current VF studies. We conclude that there is no missing dwarf problem in ΛCDM.
AB acknowledges support from National Science Foundation (NSF) grant AST-1411399. EP is supported by a NOVA postdoctoral fellowship to the Kapteyn Astronomical Institute. FG was partially supported by NSF grant AST-1410012, HST theory grant AR-14281, and NASA grant NNX15AB17G.
Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293.
[Abramson et al.(2014)Abramson, Williams, Benson,
Kollmeier, & Mulchaey]Abramson2014
Abramson, L. E., Williams, R. J., Benson, A. J., Kollmeier, J. A., &
Mulchaey, J. S. 2014, , 793, 49
[Agertz & Kravtsov(2015)]Agertz2014
Agertz, O., & Kravtsov, A. V. 2015, , 804, 18
[Agertz et al.(2013)Agertz, Kravtsov, Leitner, &
Gnedin]Agertz2013
Agertz, O., Kravtsov, A. V., Leitner, S. N., & Gnedin, N. Y. 2013,
, 770, 25
[Arraki et al.(2014)Arraki, Klypin, More, &
Trujillo-Gomez]Arraki2014
Arraki, K. S., Klypin, A., More, S., & Trujillo-Gomez, S. 2014,
, 438, 1466
[Barnes & Efstathiou(1987)]Barnes1987
Barnes, J., & Efstathiou, G. 1987, , 319, 575
[Bekeraitė et al.(2016)Bekeraitė, Walcher, Wisotzki,
Croton, Falcón-Barroso, Lyubenova, Obreschkow, Sánchez,
Spekkens, Torrey, van de Ven, Zwaan, Ascasibar, Bland-Hawthorn,
González Delgado, Husemann, Marino, Vogelsberger, &
Ziegler]Bekeraite2016
Bekeraitė, S., Walcher, C. J., Wisotzki, L., et al. 2016, ,
827, L36
[Bigiel et al.(2008)Bigiel, Leroy, Walter, Brinks, de
Blok, Madore, & Thornley]Bigiel2008
Bigiel, F., Leroy, A., Walter, F., et al. 2008, , 136, 2846
[Bigiel et al.(2010)Bigiel, Walter, Blitz, Brinks, de Blok, &
Madore]Bigiel2010
Bigiel, F., Walter, F., Blitz, L., et al. 2010, , 140, 1194
[Blanc et al.(2009)Blanc, Heiderman, Gebhardt, Evans, &
Adams]Blanc2009
Blanc, G. A., Heiderman, A., Gebhardt, K., Evans, N. J., & Adams, J. 2009,
, 704, 842
[Bode et al.(2001)Bode, Ostriker, & Turok]Bode2001
Bode, P., Ostriker, J. P., & Turok, N. 2001, , 556, 93
[Boylan-Kolchin et al.(2011)Boylan-Kolchin, Bullock, &
Kaplinghat]Boylan-kolchin2011
Boylan-Kolchin, M., Bullock, J. S., & Kaplinghat, M. 2011, , 415,
L40
[Bradford et al.(2015)Bradford, Geha, &
Blanton]Bradford2015
Bradford, J. D., Geha, M. C., & Blanton, M. R. 2015, , 809, 146
[Brook & Di Cintio(2015a)]Brook2014
Brook, C. B., & Di Cintio, A. 2015a, , 450, 3920
[Brook & Di Cintio(2015b)]Brook2015
—. 2015b, , 453, 2133
[Brook et al.(2016)Brook, Santos-Santos, &
Stinson]Brook2016b
Brook, C. B., Santos-Santos, I., & Stinson, G. 2016, , 459, 638
[Brook & Shankar(2016)]Brook2016
Brook, C. B., & Shankar, F. 2016, , 455, 3841
[Brooks et al.(2007)Brooks, Governato, Booth, Willman,
Gardner, Wadsley, Stinson, & Quinn]Brooks2007
Brooks, A. M., Governato, F., Booth, C. M., et al. 2007, , 655,
L17
[Brooks et al.(2013)Brooks, Kuhlen, Zolotov, &
Hooper]Brooks2013
Brooks, A. M., Kuhlen, M., Zolotov, A., & Hooper, D. 2013, , 765,
22
[Brooks & Zolotov(2014)]BZ2014
Brooks, A. M., & Zolotov, A. 2014, , 786, 87
[Catinella et al.(2006)Catinella, Giovanelli, &
Haynes]Catinella2006
Catinella, B., Giovanelli, R., & Haynes, M. P. 2006, , 640, 751
[Chae(2010)]Chae2010
Chae, K.-H. 2010, , 402, 2031
[Chan et al.(2015)Chan, Kereš, Oñorbe, Hopkins,
Muratov, Faucher-Giguère, & Quataert]Chan2015
Chan, T. K., Kereš, D., Oñorbe, J., et al. 2015, , 454,
2981
[Christensen et al.(2012)Christensen, Quinn, Governato,
Stilp, Shen, & Wadsley]Christensen2012
Christensen, C., Quinn, T., Governato, F., et al. 2012, , 425,
3058
[Christensen et al.(2016)Christensen, Davé, Governato,
Pontzen, Brooks, Munshi, Quinn, & Wadsley]Christensen2016
Christensen, C. R., Davé, R., Governato, F., et al. 2016, ,
824, 57
[Christensen et al.(2014)Christensen, Governato, Quinn,
Brooks, Shen, McCleary, Fisher, & Wadsley]Christensen2014a
Christensen, C. R., Governato, F., Quinn, T., et al. 2014, , 440,
2843
[de Blok et al.(2008)de Blok, Walter, Brinks,
Trachternach, Oh, & Kennicutt]deblok2008
de Blok, W. J. G., Walter, F., Brinks, E., et al. 2008, , 136, 2648
[Di Cintio et al.(2014a)Di Cintio, Brook,
Dutton, Macciò, Stinson, & Knebe]diCintio2014b
Di Cintio, A., Brook, C. B., Dutton, A. A., et al. 2014a,
, 441, 2986
[Di Cintio et al.(2014b)Di Cintio, Brook,
Macciò, Stinson, Knebe, Dutton, & Wadsley]diCintio2014a
Di Cintio, A., Brook, C. B., Macciò, A. V., et al.
2014b, , 437, 415
[Di Cintio et al.(2013)Di Cintio, Knebe, Libeskind, Brook,
Yepes, Gottlöber, & Hoffman]diCintio2013
Di Cintio, A., Knebe, A., Libeskind, N. I., et al. 2013, , 431,
1220
[Domínguez et al.(2015)Domínguez, Siana,
Brooks, Christensen, Bruzual, Stark, & Alavi]Dominguez2014
Domínguez, A., Siana, B., Brooks, A. M., et al. 2015, ,
451, 839
[Dutton et al.(2011)Dutton, Conroy, van den Bosch, Simard,
Mendel, Courteau, Dekel, More, & Prada]Dutton2011
Dutton, A. A., Conroy, C., van den Bosch, F. C., et al. 2011, ,
416, 322
[Dutton et al.(2016)Dutton, Macciò, Dekel, Wang,
Stinson, Obreja, Di Cintio, Brook, Buck, & Kang]Dutton2016
Dutton, A. A., Macciò, A. V., Dekel, A., et al. 2016, , 461,
2658
[Elbert et al.(2015)Elbert, Bullock, Garrison-Kimmel,
Rocha, Oñorbe, & Peter]Elbert2014
Elbert, O. D., Bullock, J. S., Garrison-Kimmel, S., et al. 2015,
, 453, 29
[Ferrero et al.(2012)Ferrero, Abadi, Navarro, Sales, &
Gurovich]Ferrero2012
Ferrero, I., Abadi, M. G., Navarro, J. F., Sales, L. V., & Gurovich,
S. 2012, , 425, 2817
[Fry et al.(2015)Fry, Governato, Pontzen, Quinn,
Tremmel, Anderson, Menon, Brooks, & Wadsley]Fry2015
Fry, A. B., Governato, F., Pontzen, A., et al. 2015, , 452, 1468
[Garrison-Kimmel et al.(2014)Garrison-Kimmel, Boylan-Kolchin,
Bullock, & Kirby]Garrison-Kimmel2014
Garrison-Kimmel, S., Boylan-Kolchin, M., Bullock, J. S., & Kirby,
E. N. 2014, , 444, 222
[Gill et al.(2004)Gill, Knebe, & Gibson]Gill2004
Gill, S. P. D., Knebe, A., & Gibson, B. K. 2004, , 351, 399
[Giovanelli et al.(2005)Giovanelli, Haynes, Kent,
Perillat, Saintonge, Brosch, Catinella, Hoffman, Stierwalt,
Spekkens, Lerner, Masters, Momjian, Rosenberg, Springob,
Boselli, Charmandaris, Darling, Davies, Garcia Lambas, Gavazzi,
Giovanardi, Hardy, Hunt, Iovino, Karachentsev, Karachentseva,
Koopmann, Marinoni, Minchin, Muller, Putman, Pantoja, Salzer,
Scodeggio, Skillman, Solanes, Valotto, van Driel, & van
Zee]Giovanelli2005
Giovanelli, R., Haynes, M. P., Kent, B. R., et al. 2005, , 130, 2598
[Gnedin & Kravtsov(2011)]Gnedin2011
Gnedin, N. Y., & Kravtsov, A. V. 2011, , 728, 88
[Gnedin et al.(2009)Gnedin, Tassis, &
Kravtsov]Gnedin2009
Gnedin, N. Y., Tassis, K., & Kravtsov, A. V. 2009, , 697, 55
[Gonzalez et al.(2000)Gonzalez, Williams, Bullock, Kolatt,
& Primack]Gonzalez2000
Gonzalez, A. H., Williams, K. A., Bullock, J. S., Kolatt, T. S., &
Primack, J. R. 2000, , 528, 145
[Governato et al.(2010)Governato, Brook, Mayer, Brooks,
Rhee, Wadsley, Jonsson, Willman, Stinson, Quinn, &
Madau]Governato2010
Governato, F., Brook, C., Mayer, L., et al. 2010, , 463, 203
[Governato et al.(2012)Governato, Zolotov, Pontzen,
Christensen, Oh, Brooks, Quinn, Shen, & Wadsley]Governato2012
Governato, F., Zolotov, A., Pontzen, A., et al. 2012, , 422, 1231
[Haardt & Madau(2001)]Haardt2001
Haardt, F., & Madau, P. 2001, in Clusters of Galaxies and the High
Redshift Universe Observed in X-rays, ed. D. M. Neumann & J. T. V. Tran
[Haynes et al.(2011)Haynes, Giovanelli, Martin, Hess,
Saintonge, Adams, Hallenbeck, Hoffman, Huang, Kent, Koopmann,
Papastergis, Stierwalt, Balonek, Craig, Higdon, Kornreich,
Miller, O'Donoghue, Olowin, Rosenberg, Spekkens, Troischt, &
Wilcots]Haynes2011
Haynes, M. P., Giovanelli, R., Martin, A. M., et al. 2011, , 142,
170
[Hopkins et al.(2011)Hopkins, Quataert, &
Murray]Hopkins2011
Hopkins, P. F., Quataert, E., & Murray, N. 2011, , 417, 950
[Karachentsev et al.(2013)Karachentsev, Makarov, &
Kaisina]Karachentsev2013
Karachentsev, I. D., Makarov, D. I., & Kaisina, E. I. 2013, , 145,
101
[Katz et al.(2017)Katz, Lelli, McGaugh, Di Cintio,
Brook, & Schombert]Katz2017
Katz, H., Lelli, F., McGaugh, S. S., et al. 2017, , 466, 1648
[Katz & White(1993)]Katz1993
Katz, N., & White, S. D. M. 1993, , 412, 455
[Kauffmann(2014)]Kauffmann2014
Kauffmann, G. 2014, , 441, 2717
[Kennicutt(1998)]Kennicutt1998
Kennicutt, Jr., R. C. 1998, , 498, 541
[Kirby et al.(2011)Kirby, Martin, & Finlator]Kirby2011
Kirby, E. N., Martin, C. L., & Finlator, K. 2011, , 742, L25
[Klypin et al.(2015)Klypin, Karachentsev, Makarov, &
Nasonova]Klypin2015
Klypin, A., Karachentsev, I., Makarov, D., & Nasonova, O. 2015,
, 454, 1798
[Klypin et al.(1999)Klypin, Kravtsov, Valenzuela, &
Prada]Klypin1999
Klypin, A., Kravtsov, A. V., Valenzuela, O., & Prada, F. 1999, ,
522, 82
[Knollmann & Knebe(2009)]Knollmann2009
Knollmann, S. R., & Knebe, A. 2009, , 182, 608
[Koribalski et al.(2004)Koribalski, Staveley-Smith, Kilborn,
Ryder, Kraan-Korteweg, Ryan-Weber, Ekers, Jerjen, Henning,
Putman, Zwaan, de Blok, Calabretta, Disney, Minchin, Bhathal,
Boyce, Drinkwater, Freeman, Gibson, Green, Haynes, Juraszek,
Kesteven, Knezek, Mader, Marquarding, Meyer, Mould, Oosterloo,
O'Brien, Price, Sadler, Schröder, Stewart, Stootman, Waugh,
Warren, Webster, & Wright]Koribalski2004
Koribalski, B. S., Staveley-Smith, L., Kilborn, V. A., et al. 2004,
, 128, 16
[Kornei et al.(2012)Kornei, Shapley, Martin, Coil, Lotz,
Schiminovich, Bundy, & Noeske]Kornei2012
Kornei, K. A., Shapley, A. E., Martin, C. L., et al. 2012, , 758,
135
[Kroupa et al.(1993)Kroupa, Tout, & Gilmore]Kroupa1993
Kroupa, P., Tout, C. A., & Gilmore, G. 1993, , 262, 545
[Krumholz & McKee(2008)]Krumholz2008
Krumholz, M. R., & McKee, C. F. 2008, , 451, 1082
[Loeb & Weiner(2011)]Loeb2011
Loeb, A., & Weiner, N. 2011, Physical Review Letters, 106, 171302
[Lovell et al.(2012)Lovell, Eke, Frenk, Gao, Jenkins,
Theuns, Wang, White, Boyarsky, & Ruchayskiy]Lovell2012
Lovell, M. R., Eke, V., Frenk, C. S., et al. 2012, , 420, 2318
[Macciò et al.(2016)Macciò, Udrescu, Dutton,
Obreja, Wang, Stinson, & Kang]Maccio2016
Macciò, A. V., Udrescu, S. M., Dutton, A. A., et al. 2016, ,
463, L69
[Martin(1998)]Martin1998
Martin, C. L. 1998, , 506, 222
[Martizzi et al.(2013)Martizzi, Teyssier, &
Moore]Martizzi2013
Martizzi, D., Teyssier, R., & Moore, B. 2013, , 432, 1947
[Maxwell et al.(2015)Maxwell, Wadsley, &
Couchman]Maxwell2015
Maxwell, A. J., Wadsley, J., & Couchman, H. M. P. 2015, , 806, 229
[McGaugh & Schombert(2015)]McGaugh2015
McGaugh, S. S., & Schombert, J. M. 2015, , 802, 18
[Menci et al.(2012)Menci, Fiore, & Lamastra]Menci2012
Menci, N., Fiore, F., & Lamastra, A. 2012, , 421, 2384
[Moore et al.(1999)Moore, Ghigna, Governato, Lake,
Quinn, Stadel, & Tozzi]Moore1999
Moore, B., Ghigna, S., Governato, F., et al. 1999, , 524, L19
[Moster et al.(2013)Moster, Naab, & White]Moster2013
Moster, B. P., Naab, T., & White, S. D. M. 2013, , 428, 3121
[Munshi et al.(2013)Munshi, Governato, Brooks,
Christensen, Shen, Loebman, Moster, Quinn, &
Wadsley]Munshi2013
Munshi, F., Governato, F., Brooks, A. M., et al. 2013, , 766, 56
[Narayanan et al.(2012)Narayanan, Krumholz, Ostriker, &
Hernquist]Narayanan2012
Narayanan, D., Krumholz, M. R., Ostriker, E. C., & Hernquist, L. 2012,
, 421, 3127
[Nierenberg et al.(2013)Nierenberg, Treu, Menci, Lu, &
Wang]Nierenberg2013
Nierenberg, A. M., Treu, T., Menci, N., Lu, Y., & Wang, W. 2013,
, 772, 146
[Oñorbe et al.(2015)Oñorbe, Boylan-Kolchin, Bullock,
Hopkins, Kereš, Faucher-Giguère, Quataert, &
Murray]Onorbe2015
Oñorbe, J., Boylan-Kolchin, M., Bullock, J. S., et al. 2015,
, 454, 2092
[Obreschkow et al.(2013)Obreschkow, Ma, Meyer, Power,
Zwaan, Staveley-Smith, & Drinkwater]Obreschkow2013
Obreschkow, D., Ma, X., Meyer, M., et al. 2013, , 766, 137
[Oh et al.(2011)Oh, Brook, Governato, Brinks, Mayer, de
Blok, Brooks, & Walter]Oh2011
Oh, S.-H., Brook, C., Governato, F., et al. 2011, , 142, 24
[Oh et al.(2015)Oh, Hunter, Brinks, Elmegreen, Schruba,
Walter, Rupen, Young, Simpson, Johnson, Herrmann, Ficut-Vicas,
Cigan, Heesen, Ashley, & Zhang]Oh2015
Oh, S.-H., Hunter, D. A., Brinks, E., et al. 2015, , 149, 180
[Okamoto et al.(2008)Okamoto, Gao, & Theuns]Okamoto2008
Okamoto, T., Gao, L., & Theuns, T. 2008, , 390, 920
[Ostriker & McKee(1988)]Ostriker1988
Ostriker, J. P., & McKee, C. F. 1988, Reviews of Modern Physics, 60, 1
[Papastergis et al.(2015)Papastergis, Giovanelli, Haynes, &
Shankar]Papastergis2015
Papastergis, E., Giovanelli, R., Haynes, M. P., & Shankar, F. 2015,
, 574, A113
[Papastergis et al.(2011)Papastergis, Martin, Giovanelli, &
Haynes]Papastergis2011
Papastergis, E., Martin, A. M., Giovanelli, R., & Haynes, M. P. 2011,
, 739, 38
[Papastergis & Ponomareva(2017)]Papastergis2017
Papastergis, E., & Ponomareva, A. A. 2017, , 601, A1
[Papastergis & Shankar(2016)]Papastergis2016
Papastergis, E., & Shankar, F. 2016, , 591, A58
[Peñarrubia et al.(2012)Peñarrubia, Pontzen, Walker,
& Koposov]Penarrubia2012
Peñarrubia, J., Pontzen, A., Walker, M. G., & Koposov, S. E. 2012,
, 759, L42
[Peebles(1969)]Peebles1969
Peebles, P. J. E. 1969, , 155, 393
[Planck Collaboration et al.(2014)Planck Collaboration, Ade,
Aghanim, Armitage-Caplan, Arnaud, Ashdown, Atrio-Barandela,
Aumont, Baccigalupi, Banday, & et al.]Planck2014
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2014, ,
571, A16
[Polisensky & Ricotti(2011)]Polisensky2011
Polisensky, E., & Ricotti, M. 2011, , 83, 043506
[Pontzen & Governato(2012)]Pontzen2012
Pontzen, A., & Governato, F. 2012, , 421, 3464
[Pontzen & Governato(2014)]Pontzen2014
—. 2014, , 506, 171
[Read et al.(2016a)Read, Agertz, &
Collins]Read2016a
Read, J. I., Agertz, O., & Collins, M. L. M. 2016a, ,
459, 2573
[Read et al.(2016b)Read, Iorio, Agertz, &
Fraternali]Read2016b
Read, J. I., Iorio, G., Agertz, O., & Fraternali, F.
2016b, , 462, 3628
[Read et al.(2017)Read, Iorio, Agertz, &
Fraternali]Read2016c
—. 2017, , 467, 2019
[Rodríguez-Puebla et al.(2016)Rodríguez-Puebla,
Behroozi, Primack, Klypin, Lee, & Hellinger]Rodriguez2016
Rodríguez-Puebla, A., Behroozi, P., Primack, J., et al. 2016,
, 462, 893
[Santos-Santos et al.(2017)Santos-Santos, Di Cintio, Brook,
Macciò, Dutton, & Domínguez-Tenreiro]Santos2017
Santos-Santos, I. M., Di Cintio, A., Brook, C. B., et al. 2017, ArXiv
e-prints, arXiv:1706.04202
[Sawala et al.(2013)Sawala, Frenk, Crain, Jenkins,
Schaye, Theuns, & Zavala]Sawala2013
Sawala, T., Frenk, C. S., Crain, R. A., et al. 2013, , 431, 1366
[Sawala et al.(2015)Sawala, Frenk, Fattahi, Navarro,
Bower, Crain, Dalla Vecchia, Furlong, Jenkins, McCarthy, Qu,
Schaller, Schaye, & Theuns]Sawala2015
Sawala, T., Frenk, C. S., Fattahi, A., et al. 2015, , 448, 2941
[Schneider et al.(2017)Schneider, Trujillo-Gomez,
Papastergis, Reed, & Lake]Schneider2016
Schneider, A., Trujillo-Gomez, S., Papastergis, E., Reed, D. S., &
Lake, G. 2017, , 470, 1542
[Schruba et al.(2011)Schruba, Leroy, Walter, Bigiel,
Brinks, de Blok, Dumas, Kramer, Rosolowsky, Sandstrom,
Schuster, Usero, Weiss, & Wiesemeyer]Schruba2011
Schruba, A., Leroy, A. K., Walter, F., et al. 2011, , 142, 37
[Seljak et al.(2006)Seljak, Makarov, McDonald, &
Trac]Seljak2006
Seljak, U., Makarov, A., McDonald, P., & Trac, H. 2006, Physical
Review Letters, 97, 191303
[Shen et al.(2010)Shen, Wadsley, & Stinson]Shen2010
Shen, S., Wadsley, J., & Stinson, G. 2010, , 407, 1581
[Sheth et al.(2003)Sheth, Bernardi, Schechter, Burles,
Eisenstein, Finkbeiner, Frieman, Lupton, Schlegel, Subbarao,
Shimasaku, Bahcall, Brinkmann, & Ivezić]Sheth2003
Sheth, R. K., Bernardi, M., Schechter, P. L., et al. 2003, , 594,
225
[Spergel & Steinhardt(2000)]Spergel2000
Spergel, D. N., & Steinhardt, P. J. 2000, Physical Review Letters, 84,
3760
[Stadel(2001)]Stadel2001
Stadel, J. G. 2001, PhD thesis, UNIVERSITY OF WASHINGTON
[Stanimirović et al.(2004)Stanimirović,
Staveley-Smith, & Jones]Stanimirovic2004
Stanimirović, S., Staveley-Smith, L., & Jones, P. A. 2004, ,
604, 176
[Stilp et al.(2013a)Stilp, Dalcanton, Skillman,
Warren, Ott, & Koribalski]Stilp2013b
Stilp, A. M., Dalcanton, J. J., Skillman, E., et al.
2013a, , 773, 88
[Stilp et al.(2013b)Stilp, Dalcanton, Warren,
Skillman, Ott, & Koribalski]Stilp2013a
Stilp, A. M., Dalcanton, J. J., Warren, S. R., et al.
2013b, , 765, 136
[Stinson et al.(2006)Stinson, Seth, Katz, Wadsley,
Governato, & Quinn]Stinson2006
Stinson, G., Seth, A., Katz, N., et al. 2006, , 373, 1074
[Swaters et al.(2009)Swaters, Sancisi, van Albada, & van
der Hulst]Swaters2009
Swaters, R. A., Sancisi, R., van Albada, T. S., & van der Hulst, J. M.
2009, , 493, 871
[Tamburro et al.(2009)Tamburro, Rix, Leroy, Mac Low,
Walter, Kennicutt, Brinks, & de Blok]Tamburro2009
Tamburro, D., Rix, H.-W., Leroy, A. K., et al. 2009, , 137, 4424
[Teyssier et al.(2013)Teyssier, Pontzen, Dubois, &
Read]Teyssier2013
Teyssier, R., Pontzen, A., Dubois, Y., & Read, J. I. 2013, ,
429, 3068
[Trujillo-Gomez et al.(2011)Trujillo-Gomez, Klypin, Primack,
& Romanowsky]Trujillo-Gomez2011
Trujillo-Gomez, S., Klypin, A., Primack, J., & Romanowsky, A. J. 2011,
, 742, 16
[Trujillo-Gomez et al.(2016)Trujillo-Gomez, Schneider,
Papastergis, Reed, & Lake]Trujillo2016
Trujillo-Gomez, S., Schneider, A., Papastergis, E., Reed, D. S., &
Lake, G. 2016, ArXiv e-prints, arXiv:1610.09335
[van der Wel et al.(2011)van der Wel, Straughn, Rix,
Finkelstein, Koekemoer, Weiner, Wuyts, Bell, Faber, Trump,
Koo, Ferguson, Scarlata, Hathi, Dunlop, Newman, Dickinson,
Jahnke, Salmon, de Mello, Kocevski, Lai, Grogin, Rodney, Guo,
McGrath, Lee, Barro, Huang, Riess, Ashby, &
Willner]vanderwel2011
van der Wel, A., Straughn, A. N., Rix, H.-W., et al. 2011, , 742,
111
[Verbeke et al.(2017)Verbeke, Papastergis, Ponomareva,
Rathi, & De Rijcke]Verbeke2017
Verbeke, R., Papastergis, E., Ponomareva, A. A., Rathi, S., & De
Rijcke, S. 2017, ArXiv e-prints, arXiv:1703.03810
[Viel et al.(2013)Viel, Becker, Bolton, &
Haehnelt]Viel2013
Viel, M., Becker, G. D., Bolton, J. S., & Haehnelt, M. G. 2013, ,
88, 043502
[Viel et al.(2008)Viel, Becker, Bolton, Haehnelt, Rauch,
& Sargent]Viel2008
Viel, M., Becker, G. D., Bolton, J. S., et al. 2008, Physical Review
Letters, 100, 041304
[Viel et al.(2006)Viel, Lesgourgues, Haehnelt, Matarrese,
& Riotto]Viel2006
Viel, M., Lesgourgues, J., Haehnelt, M. G., Matarrese, S., & Riotto,
A. 2006, Physical Review Letters, 97, 071301
[Vogelsberger et al.(2012)Vogelsberger, Zavala, &
Loeb]Vogelsberger2012
Vogelsberger, M., Zavala, J., & Loeb, A. 2012, , 423, 3740
[Wadsley et al.(2004)Wadsley, Stadel, &
Quinn]Wadsley2004
Wadsley, J. W., Stadel, J., & Quinn, T. 2004, New Astronomy, 9, 137
[Wang et al.(2016)Wang, Koribalski, Serra, van der Hulst,
Roychowdhury, Kamphuis, & Chengalur]Wang2016
Wang, J., Koribalski, B. S., Serra, P., et al. 2016, , 460, 2143
[Wetzel et al.(2016)Wetzel, Hopkins, Kim,
Faucher-Giguère, Kereš, & Quataert]Wetzel2016
Wetzel, A. R., Hopkins, P. F., Kim, J.-h., et al. 2016, , 827, L23
[Wise et al.(2012)Wise, Abel, Turk, Norman, &
Smith]Wise2012
Wise, J. H., Abel, T., Turk, M. J., Norman, M. L., & Smith, B. D.
2012, , 427, 311
[Yaryura et al.(2016)Yaryura, Helmi, Abadi, &
Starkenburg]Yaryura2016
Yaryura, C. Y., Helmi, A., Abadi, M. G., & Starkenburg, E. 2016,
, 457, 2415
[Zavala et al.(2009)Zavala, Jing, Faltenbacher, Yepes,
Hoffman, Gottlöber, & Catinella]Zavala2009
Zavala, J., Jing, Y. P., Faltenbacher, A., et al. 2009, , 700, 1779
[Zavala et al.(2013)Zavala, Vogelsberger, &
Walker]Zavala2013
Zavala, J., Vogelsberger, M., & Walker, M. G. 2013, , 431, L20
[Zolotov et al.(2012)Zolotov, Brooks, Willman, Governato,
Pontzen, Christensen, Dekel, Quinn, Shen, &
Wadsley]Zolotov2012
Zolotov, A., Brooks, A. M., Willman, B., et al. 2012, , 761, 71
[Zwaan et al.(2010)Zwaan, Meyer, &
Staveley-Smith]Zwaan2010
Zwaan, M. A., Meyer, M. J., & Staveley-Smith, L. 2010, , 403, 1969
|
http://arxiv.org/abs/1701.07944v1 | 20170127044820 | Tracing interstellar magnetic field using velocity gradient technique: Application to Atomic Hydrogen data | [
"Ka Ho Yuen",
"A. Lazarian"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO",
"astro-ph.IM"
] |
kyuen2@wisc.edu, lazarian@astro.wisc.edu
1Department of Astronomy, University of Wisconsin-Madison
2Department of Physics, The Chinese University of Hong Kong
The advancement of our understanding of MHD turbulence opens ways to develop new techniques to probe magnetic fields. In MHD turbulence, the velocity gradients are expected to be perpendicular to magnetic fields, and this fact was used by <cit.> to introduce a new technique for tracing magnetic fields using velocity centroid gradients, which can be obtained from spectroscopic observations. We apply the technique to GALFA HI survey data and compare the directions of magnetic fields obtained with our technique to those obtained using PLANCK polarization. We find excellent correspondence between the two ways of tracing magnetic fields, which is evident both from visual comparison and from the statistics of magnetic field fluctuations obtained with the polarization data and with our technique. This suggests that velocity centroid gradients have the potential to measure foreground magnetic field fluctuations and thus provide a new way of separating foreground and CMB polarization signals.
§ INTRODUCTION
Turbulence is ubiquitous in astrophysics. The Big Power Law in the Sky <cit.> shows clear evidence that interstellar turbulence extends over 10 orders of magnitude in scale in the interstellar medium (ISM). The ISM is magnetized and therefore the turbulence is magnetohydrodynamic (MHD) in nature, e.g. see <cit.>.
The modern theory of turbulence has been developed on the basis of the prophetic work by Goldreich & Sridhar (1995, henceforth GS95). The original ideas were modified and augmented in subsequent theoretical and numerical studies (see <cit.> for a review).[We do not consider the modifications of the GS95 model that were intended to explain the spectrum k^-3/2 that was reported in some numerical studies (e.g. <cit.>). We believe that the reason for the deviations from the GS95 predictions is the numerical bottleneck effect, which is more extended in MHD compared to hydro turbulence <cit.>. This explanation is supported by high resolution numerical simulations that correspond to the GS95 predictions (see <cit.>). The simulations also strongly support the anisotropy predicted in GS95 and rule out the anisotropy prediction of the aforementioned alternative model.] The Alfvenic incompressible motions dominate the cascade. This cascade can be visualized as a cascade of elongated eddies rotating perpendicular to the local direction of the magnetic field.[The notion of the local direction was not a part of the original GS95 model. It was introduced and justified in more recent publications (see <cit.>).] Naturally, this induces the strongest gradients of velocity perpendicular to the magnetic field. Thus one can expect that measuring velocity gradients in a turbulent medium can reveal the local direction of the magnetic field. This property of velocity gradients was employed in <cit.> (hereafter GL16) to introduce a radically new way of tracing magnetic fields using spectroscopic data. Instead of using aligned grains or synchrotron polarization (see <cit.>), GL16 applied velocity centroid gradients (henceforth VCGs) to synthetic maps obtained via MHD simulations and obtained good agreement between the projected magnetic fields and the directions traced by the VCGs. As velocity centroids are readily available from spectroscopic observations (see Esquivel & Lazarian 2005), this provided a way not only for observational tracing of magnetic fields but also for finding the field strength using the GL16 technique, which is similar to the well-known Chandrasekhar-Fermi method.
Motivated by the GL16 study, in this paper we calculate the VCGs using HI data from the GALFA survey <cit.> and compare the directions of the magnetic fields that we trace using the gradients with the directions of magnetic fields that are available from the PLANCK polarization survey <cit.>.[Based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada.] To do this, we first significantly improve the procedure of calculating the VCGs and test it with numerical data. Our recipe for calculating the VCGs is presented in <ref>, while in <ref> we apply the technique to trace magnetic fields. We discuss our results in <ref>, and our conclusions are presented in <ref>.
§ IMPROVED PROCEDURE FOR CALCULATING VELOCITY GRADIENTS
GL16 established that the VCGs can trace magnetic field in MHD turbulence. However, this exploratory study lacked a criterion for judging how well gradients trace magnetic fields, which makes it difficult to determine the resolution required to trace magnetic field vectors and the associated uncertainties. Therefore our first goal is to introduce a more robust procedure for calculating the VCGs, one that returns a tracing that is independent of the resolution of the simulations and depends only
on the parameters of MHD turbulence.
We used a single fluid, operator-split, staggered-grid Eulerian MHD code, ZEUS-MP/HK,[Maintained by Otto & Yuen (https://bitbucket.org/cuhksfg/zeusmp-hk/)] a variant of the well-tested code ZEUS-MP <cit.>, to set up a three-dimensional, uniform, isothermal, supersonic, sub-Alfvenic turbulent medium. We adopted periodic boundary conditions. The initial cube was set with a uniform density and an initial uniform field. Turbulence was injected solenoidally and continuously, e.g. see <cit.>, see also the Appendix of <cit.>. Our simulations had a resolution of 792^3. We selected two cubes with sonic Mach number M_s=5 and Alfvenic Mach number M_A=0.6 but different initial magnetic field orientations (one parallel to the z-axis, the other at an angle π/7 to the z-axis). Compared to GL16, we used higher resolution simulations and studied the effect of varying the magnetic-field direction relative to the line of sight.
To trace the magnetic field we generated polarization maps by projecting our data cubes along the x-axis, assuming that the dust producing the polarization followed the gas and was perfectly aligned by the magnetic field. Let ϕ=tan^-1(B_y/B_z), where B_y,z are the y and z components of the magnetic field. The intensity I, velocity centroid C, and Stokes parameters Q, U were computed as:
I(r) = ∫ρ(r,x) dx
C(r) = I^-1∫ρ(r,x) v_x(r,x) dx
Q(r) ∝ ∫ρ(r,x) cos2ϕ dx
U(r) ∝ ∫ρ(r,x) sin2ϕ dx
where r is the two-dimensional vector in the y-z plane. The polarization angle is given by ϕ_2d = 0.5 tan^-1(U/Q). Polarization thus traces the plane-of-sky magnetic field integrated along the line of sight.
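A sketch of how the four projected maps above translate into array operations on a simulation cube follows; the random test cube is purely illustrative, and axis 0 of the arrays is taken as the x-axis (line of sight), as in the text.

import numpy as np

def observables(rho, vx, by, bz):
    """Project an (nx, ny, nz) cube along the x-axis (axis 0) to get
    the intensity I, velocity centroid C, and Stokes Q, U maps under
    the assumption of perfect grain alignment and emissivity ~ density."""
    I = rho.sum(axis=0)
    C = (rho * vx).sum(axis=0) / I
    phi = np.arctan2(by, bz)                 # phi = atan(B_y / B_z)
    Q = (rho * np.cos(2.0 * phi)).sum(axis=0)
    U = (rho * np.sin(2.0 * phi)).sum(axis=0)
    pol_angle = 0.5 * np.arctan2(U, Q)       # phi_2d
    return I, C, Q, U, pol_angle

# tiny random cube just to exercise the function
rng = np.random.default_rng(0)
rho = rng.uniform(0.5, 1.5, (16, 32, 32))
vx = rng.normal(0.0, 1.0, rho.shape)
bz = np.ones_like(rho)
by = 0.3 * rng.normal(size=rho.shape)
I, C, Q, U, ang = observables(rho, vx, by, bz)
print(I.shape, C.mean(), ang.std())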
We calculated velocity centroids following GL16 but modified the VCG calculations to increase the accuracy of the procedure. In particular, we performed cubic spline interpolation, which uses a three-point estimate, to provide the maps for the gradient study. The resulting map is 10 times larger than the original one. To search for the maximum gradient direction at each data point, we selected a neighborhood with radius r∈ (0.9,1.1) pixels in the interpolated map. The interpolation process is accurate to within a 3^o error, comparable to the Sobel operator used in <cit.>. We smoothed our data with a σ=2 pixel Gaussian kernel.
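A simplified version of this gradient recipe is sketched below. It keeps the σ=2 pixel smoothing, the cubic spline, and the tenfold upsampling, but replaces the annulus search of the original recipe with the spline's analytic derivatives; that substitution, and the ordering of the smoothing and interpolation steps, are our own shortcuts rather than the exact procedure.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import RectBivariateSpline

def centroid_gradient_angles(C, upsample=10, sigma=2.0):
    """Smooth the centroid map with a Gaussian kernel, fit a cubic
    spline, evaluate it on a grid `upsample` times finer, and return
    the per-pixel gradient orientation (radians)."""
    Cs = gaussian_filter(C, sigma)
    ny, nx = Cs.shape
    spl = RectBivariateSpline(np.arange(ny), np.arange(nx), Cs, kx=3, ky=3)
    yf = np.linspace(0, ny - 1, upsample * ny)
    xf = np.linspace(0, nx - 1, upsample * nx)
    gy = spl(yf, xf, dx=1)        # derivative along y on the fine grid
    gx = spl(yf, xf, dy=1)        # derivative along x on the fine grid
    return np.arctan2(gy, gx)

# exercise on a toy centroid map with fluctuations along x
rng = np.random.default_rng(1)
C = np.cumsum(rng.normal(size=(64, 64)), axis=1)
angles = centroid_gradient_angles(C)
print(angles.shape)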
The statistical properties of gradient fields can determine the mean direction of magnetic fields in a sub-region of interest. We divided our synthetic maps into sub-regions and examined the statistical behavior of the gradient vector orientation (hereafter absolute angle, AA) and of the relative angle ϕ between gradients and fields (hereafter relative angle, RA) within each region. The upper four panels of <ref> show what the distributions of the AA and RA look like as the size of the block changes. As the block size increases, the mean gradient direction becomes more well-defined, and the alignment between the gradients and the magnetic field also becomes more clear. We find that when the block size reaches 100×100, a sharp distribution emerges with a well-defined mean and dispersion. By measuring the mean of the AA distribution, we determine the mean magnetic field direction within the respective block; the RA distribution tells us how accurate this prediction of the magnetic field is. We shall call this treatment sub-block averaging in the following sections. Notice that sub-block averaging is not a smoothing method: it is used to emphasize the important statistics and suppress noise in a region, and it provides an estimate of the accuracy of the averaging through the AA-RA diagram, whereas smoothing provides no such estimate. A detailed discussion of how white noise affects sub-block averaging and smoothing is provided in an extended paper by Lazarian et al. (2017), where a companion new measure, namely synchrotron intensity gradients, is studied.
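Sub-block averaging itself reduces to taking a mean orientation per tile; a sketch follows. Because gradient orientations are axial (defined modulo 180 degrees), the example averages the doubled angles as unit vectors, a standard choice for axial data but an assumption on our part in place of the histogram-based mean described above.

import numpy as np

def subblock_mean_angles(angles, block=100):
    """Split a gradient-angle map into block x block tiles and return
    the mean orientation per tile via the circular mean of 2*angle."""
    ny, nx = angles.shape
    out = np.empty((ny // block, nx // block))
    for j in range(out.shape[0]):
        for i in range(out.shape[1]):
            tile = angles[j*block:(j+1)*block, i*block:(i+1)*block]
            s = np.sin(2.0 * tile).mean()
            c = np.cos(2.0 * tile).mean()
            out[j, i] = 0.5 * np.arctan2(s, c)
    return out

# toy map: orientations scattered about 30 degrees
rng = np.random.default_rng(2)
ang = np.deg2rad(30.0) + 0.3 * rng.normal(size=(200, 200))
print(np.rad2deg(subblock_mean_angles(ang, block=100)))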
The benefits of our approach can be seen in Figure <ref>. We divided the whole simulation domain into 16 blocks of equal size and predicted the magnetic field direction in each block. As one can see from these figures, the VCGs trace magnetic fields well. We also confirmed this for synthetic observations in which the line of sight was at different angles to the mean direction of the magnetic field.
The Chandrasekhar-Fermi technique (<cit.>, hereafter C-F) provides an expression relating the strength of the plane-of-sky magnetic field to the dispersions of the turbulent velocities δ v and of the polarization angles δϕ in magnetized turbulence (for an improved C-F method, see <cit.>):
δ B ∼√(4πρ)δ v/δϕ
The mean magnetic field strength can also be calculated using the same concept within sub-block averaging. The dispersion of the VCGs and that of the magnetic-field directions are not exactly the same, but the difference is small. GL16 introduced a factor γ of ∼ 1.29 to account for this difference. In our case, using our improved procedure for the gradient calculation, we find that the dispersion of the VCGs in blocks is just 1.07 times that of polarization, with a standard deviation of the ratio of the dispersions of 0.05. As illustrated in GL16, the factor γ varies with the parameters of MHD turbulence. Elsewhere we shall provide a fitting expression for γ as a function of M_s and M_A, which should further increase the accuracy of obtaining the magnetic field strength. More details on the technique of obtaining the magnetic field intensity using only spectroscopic information and no polarimetry will be provided in our forthcoming paper (Yuen & Lazarian, in preparation).
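In practice the estimate amounts to a one-line formula per block (our sketch, assuming cgs units; the factor γ rescales the measured VCG dispersion to the polarization dispersion it stands in for):

```python
import numpy as np

def cf_field_strength(rho_mean, dv, dphi_vcg, gamma=1.07):
    """C-F estimate delta_B ~ sqrt(4 pi rho) dv / dphi, with the measured
    VCG angle dispersion dphi_vcg divided by gamma to mimic the
    polarization angle dispersion."""
    return np.sqrt(4.0 * np.pi * rho_mean) * dv * gamma / dphi_vcg
```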
§ APPLICATION TO OBSERVATION DATA
With the tested procedure in hand, we selected diffuse regions from observational surveys. We acquired data from the Galactic Arecibo L-Band Feed Array HI Survey (GALFA-HI) and compared the VCG directions to the PLANCK polarization data. In diffuse media, the polarization of emitted radiation is perpendicular to the local magnetic field direction <cit.>, i.e. oriented the same way as the VCGs. To account for the difference in resolution, we adjusted the block size used for Planck so that it covers the same physical region as the GALFA blocks.
The region we selected from GALFA-HI survey data spans right ascension 15^o to 35^o and declination 4^o to 16^o. The bin size along the velocity axis is 0.18 km/s. We analyzed 353GHz polarization data obtained by the Planck satellite's High Frequency Instrument (HFI).[We use the planckpy module to extract polarization data in a particular region with J2000 equatorial coordinate: (https://bitbucket.org/ezbc/planckpy/src)] We performed the same procedure as indicated in Section <ref>. We checked the AA and RA, as shown in the lower 4 panels of <ref>, to pick an appropriate block size for a gradient vector. For the given case, a 100× 100 block satisfies the requirement in the recipe. The velocity gradient vectors are plotted with polarization vectors in Figure <ref>. In this region, most of the gradient vectors align very well with polarization vectors. The detailed study of the observed deviations from the perfect alignment will be provided in our subsequent publication.
Following GL16, we provide a comparison of the alignment with the magnetic field, as traced by polarization, for both the velocity and the intensity gradients. The emission intensity of atomic hydrogen is proportional to its column density, and column density gradients were shown to act as tracers of magnetic fields in <cit.>. Figure <ref> shows the histograms of the relative orientations of the velocity and intensity gradient vectors with respect to polarization. In agreement with the theoretical expectations as well as the results in GL16, our improved procedure for calculating the VCGs shows that the latter are much better aligned with polarization than the intensity gradients. Indeed, nearly 80% of the VCGs are within a 45^o deviation from the polarization direction, compared to 61% of the intensity gradients.
§ DISCUSSION
§.§ Structure functions of velocity gradients
The structure functions of the polarization and gradient fields also allow us to study how well-aligned they are. As the statistics of polarization depend on the Alfvenic Mach number M_A <cit.>, the close relationship between the rotated VCGs and magnetic fields suggests that the gradient statistics should behave similarly to the polarization statistics. To compare the VCGs to polarization in synthetic maps, we extended the sub-block averaging algorithm to every point of our map, and computed the structure function in terms of the orientation θ of the gradient/polarization vectors:
SF_2( r) = ⟨(θ( r')-θ( r'+ r))^2⟩
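The structure function itself is straightforward to evaluate on a map (our sketch; the wrapping of π-periodic orientation differences is our convention):

```python
import numpy as np

def angle_structure_function(theta, lags):
    """SF2(r) = <(theta(x) - theta(x + r))^2>, averaged over the two axis
    directions at each integer pixel lag r."""
    sf = []
    for r in lags:
        d = np.concatenate([(theta[r:, :] - theta[:-r, :]).ravel(),
                            (theta[:, r:] - theta[:, :-r]).ravel()])
        d = (d + np.pi / 2) % np.pi - np.pi / 2   # wrap to (-pi/2, pi/2]
        sf.append(np.mean(d ** 2))
    return np.array(sf)
```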
The statistics of dust polarization are important for studying magnetic field turbulence <cit.> and for cleaning CMB polarization maps. If we want to do the same using the VCGs, it is important to test to what extent the statistics of the VCGs are similar to those revealed by polarization.
The left and middle panels of <ref> show the power spectra P_ϕ(k) and the second-order structure functions SF_2( r), respectively, of the VCG orientations and the polarization angles. In terms of the spectra, both the VCG orientations and the polarization angles exhibit a -2 slope. We also examined the structure functions of the polarization and VCG distributions from the observational data using the same procedure. The right panel of Figure <ref> shows the structure function computed from the observational data; the +1 slope emerges there as well.
§.§ Comparison with other techniques and earlier papers
This paper presents the first application of the VCGs to observational data arising from diffuse media. By comparing the results obtained with the VCGs to the PLANCK polarimetry data, we have demonstrated the practical utility of the VCGs for tracing magnetic fields and obtaining statistical information about the magnetic field in this diffuse region.
The gradient techniques have a big advantage over other techniques for estimating the magnetic field direction and strength: they only require velocity centroids, which are easily available. Unlike the PLANCK map, the VCG maps do not require unique multi-billion dollar satellites but can be routinely obtained with existing spectroscopic surveys. By using different species, one can distinguish and study separately different regions along the line of sight. Combining the VCGs, which trace magnetic fields in diffuse gas, with polarimetry, e.g. ALMA polarimetry, which traces magnetic fields in molecular clouds, one can study what happens to magnetic fields as star formation takes place. This may be a way to test different predictions, e.g. the prediction of magnetic flux removal through the reconnection diffusion process <cit.>.
The alignment of density gradients was previously explored by <cit.>. The alignment of these gradients with the magnetic field is also due to the properties of turbulence: for instance, <cit.> showed that GS95 turbulence can in some situations imprint its structure on density. However, density does not trace turbulence as directly as velocity does. Therefore, we expect more deviations of the density gradients from the magnetic field direction compared to the velocity gradients. Our study confirms the conclusion in GL16 that the VCGs provide a better tracer. We expect that the density gradients are related to the filaments which align with magnetic fields, as reported in <cit.>. Therefore we expect the VCGs to trace magnetic fields better than the filaments.
We have to stress, however, that this region is only a particular example of how the VCG technique works; it does not mean that the technique is applicable everywhere without caution about its limitations. One should understand that both density and velocity properties are important components of MHD turbulent cascades. Therefore, the deviations of the gradients from the magnetic field direction are themselves informative. For instance, we observe a different behavior of the VCGs and density gradients in regions of strong shocks as well as in self-gravitating regions (Yuen & Lazarian, in prep.). There is therefore an important synergy in the simultaneous use of the VCGs, density/intensity gradients and polarimetry. Adding to the list the newly suggested technique of synchrotron intensity gradients, discussed in a new paper by Lazarian et al. (2017), increases the wealth of the available tools. This opens new ways of exploring magnetic fields in the multi-phase ISM.
We would also like to point out that while the polarimetry directions in Figure <ref> seem to be well aligned over significant patches of the sky, this does not mean that there is no turbulence there. The correspondence of the VCG and polarization directions can be understood only if the medium is turbulent. The power-law behavior of the statistics related to both the VCGs and the polarization directions confirms this. The fact that the power law does not correspond to the GS95 slope is due to the effects of the geometry of the emitting region, as discussed in <cit.>.
§ CONCLUSIONS
Our work provides a promising example of how the Velocity Centroid Gradient (VCG) technique introduced in GL16 traces magnetic fields in interstellar media. In this paper:
* We provide a new robust prescription for calculating the VCGs and test this new approach using the synthetic data obtained with MHD simulations.
* We show that with the new prescription the estimates of magnetic field strength based on the C-F approach can be improved.
* We apply the VCGs to the available high latitude HI GALFA data and demonstrate an excellent alignment of the direction of the VCGs and those measured by PLANCK polarization.
* We show that the statistics of the fluctuations measured by the VCGs and by polarization have the same slope for both synthetic and observational data, which suggests that the VCGs could be a promising tool for accounting for polarized foregrounds within CMB studies.
* The differences between the directions defined by the polarization, the VCGs and the intensity gradients carry information about the turbulent interstellar medium and this calls for the synergetic use of the three measures.
We thank Susan Clark for her help with the GALFA data. We thank Avi Loeb and Diego F. Gonzalez-Casanova for useful discussions. We also thank Paul Law for his generous help with the PLANCK data. The stay of KHY at UW-Madison is supported by the Fulbright-Lee Hysan research fellowship and the Department of Physics, CUHK. AL acknowledges the support of the NSF grant AST 1212096 and NASA grant NNX14AJ53G, as well as a distinguished visitor PVE/CAPES appointment at the Physics Graduate Program of the Federal University of Rio Grande do Norte, the INCT INEspaço and the Physics Graduate Program/UFRN.
Adam, R., Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A8
Andersson, B.-G., Lazarian, A., & Vaillancourt, J. E. 2015, ARA&A, 53, 501
Armstrong, J. W., Rickett, B. J., & Spangler, S. R. 1995, ApJ, 443, 209
Beresnyak, A. 2014, ApJ, 784, L20
Beresnyak, A., & Lazarian, A. 2010, ApJ, 722, L110
Beresnyak, A., Lazarian, A., & Cho, J. 2005, ApJ, 624, L93
Boldyrev, S. 2006, Phys. Rev. Lett., 96, 115002
Brandenburg, A., & Lazarian, A. 2013, Space Sci. Rev., 178, 163
Chandrasekhar, S., & Fermi, E. 1953, ApJ, 118, 116
Chepurnov, A., & Lazarian, A. 2010, ApJ, 710, 853
Cho, J., & Lazarian, A. 2002, Phys. Rev. Lett., 88, 245001
Cho, J., & Lazarian, A. 2003, MNRAS, 345, 325
Cho, J., & Lazarian, A. 2009, ApJ, 701, 236
Cho, J., Lazarian, A., & Vishniac, E. T. 2002, ApJ, 564, 291
Cho, J., & Vishniac, E. T. 2000, ApJ, 539, 273
Clark, S. E., Hill, J. C., Peek, J. E. G., Putman, M. E., & Babler, B. L. 2015, Phys. Rev. Lett., 115, 241302
Draine, B. T. 2011, Physics of the Interstellar and Intergalactic Medium (Princeton, NJ: Princeton University Press)
Falceta-Gonçalves, D., Lazarian, A., & Kowal, G. 2008, ApJ, 679, 537
Goldreich, P., & Sridhar, S. 1995, ApJ, 438, 763
González-Casanova, D. F., & Lazarian, A. 2016, arXiv:1608.06867
Hayes, J. C., Norman, M. L., Fiedler, R. A., et al. 2006, ApJS, 165, 188
Kowal, G., & Lazarian, A. 2010, ApJ, 720, 742
Lazarian, A. 2005, in AIP Conf. Proc. 784, Magnetic Fields in the Universe, 42
Lazarian, A. 2007, J. Quant. Spectrosc. Radiat. Transfer, 106, 225
Lazarian, A. 2014, Space Sci. Rev., 181, 1
Lazarian, A., Esquivel, A., & Crutcher, R. 2012, ApJ, 757, 154
Lazarian, A., & Vishniac, E. T. 1999, ApJ, 517, 700
Li, Z.-Y., Banerjee, R., Pudritz, R. E., et al. 2014, arXiv:1401.2219
Lithwick, Y., & Goldreich, P. 2001, ApJ, 562, 279
Maron, J., & Goldreich, P. 2001, ApJ, 554, 1175
Norman, M. L. 2000, arXiv:astro-ph/0005109
Ostriker, E. C., Stone, J. M., & Gammie, C. F. 2001, ApJ, 546, 980
Otto, F., Ji, W., & Li, H.-b. 2017, arXiv:1701.01806
Peek, J. E. G., Heiles, C., Douglas, K. A., et al. 2011, ApJS, 194, 20
Pillai, T., Kauffmann, J., Tan, J. C., et al. 2015, ApJ, 799, 74
Soler, J. D., Hennebelle, P., Martin, P. G., et al. 2013, ApJ, 774, 128
Zhang, Q., Qiu, K., Girart, J. M., et al. 2014, ApJ, 792, 116
|
http://arxiv.org/abs/1701.07469v1 | 20170125201043 | Numerical Approximations for a three components Cahn-Hilliard phase-field Model based on the Invariant Energy Quadratization method | [
"Xiaofeng Yang",
"Jia Zhao",
"Qi Wang",
"Jie Shen"
] | math.NA | [
"math.NA"
] |
|
http://arxiv.org/abs/1701.07750v3 | 20170126155639 | Skyrmion Gas Manipulation for Probabilistic Computing | [
"Daniele Pinna",
"Flavio Abreu Araujo",
"Joo-Von Kim",
"Vincent Cros",
"Damien Querlioz",
"Perre Bessiere",
"Jacques Droulez",
"Julie Grollier"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.dis-nn",
"physics.comp-ph"
] | |
http://arxiv.org/abs/1701.08115v2 | 20170127165904 | Covering and tiling hypergraphs with tight cycles | [
"Jie Han",
"Allan Lo",
"Nicolás Sanhueza-Matamala"
] | math.CO | [
"math.CO"
] |
Covering and tiling hypergraphs with tight cycles

Jie Han (Department of Mathematics, University of Rhode Island, Kingston, RI, USA, 02881; jie_han@uri.edu)
Allan Lo and Nicolás Sanhueza-Matamala (School of Mathematics, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK; s.a.lo@bham.ac.uk, NIS564@bham.ac.uk)

The research leading to these results was partially supported by FAPESP (Proc. 2013/03447-6, 2014/18641-5, 2015/07869-8) (J. Han), EPSRC grant no. EP/P002420/1 (A. Lo) and the Becas Chile scholarship scheme from CONICYT (N. Sanhueza-Matamala).
A k-uniform tight cycle C^k_s is a hypergraph on s > k vertices with a cyclic ordering such that every k consecutive vertices under this ordering form an edge.
The pair (k,s) is admissible if gcd(k,s) = 1 or k/gcd(k,s) is even.
We prove that if s ≥ 2k^2 and H is a k-uniform hypergraph with minimum codegree at least (1/2 + o(1))|V(H)|, then every vertex is covered by a copy of C^k_s.
The bound is asymptotically sharp if (k,s) is admissible.
Our main tool allows us to arbitrarily rearrange the order in which a tight path wraps around a complete k-partite k-uniform hypergraph, which may be of independent interest.
For hypergraphs F and H, a perfect F-tiling in H is a spanning collection of vertex-disjoint copies of F.
For k ≥ 3, there are currently only a handful of known F-tiling results when F is k-uniform but not k-partite.
If s ≢ 0 (mod k), then C^k_s is not k-partite.
Here we prove an F-tiling result for a family of non k-partite k-uniform hypergraphs F.
Namely, for s ≥ 5k^2, every k-uniform hypergraph H with minimum codegree at least (1/2 + 1/(2s) + o(1))|V(H)| has a perfect C^k_s-tiling.
Moreover, the bound is asymptotically sharp if k is even and (k,s) is admissible.
§ INTRODUCTION
Let H and F be graphs.
An F-tiling in H is a set of vertex-disjoint copies of F.
An F-tiling is perfect if it spans the vertex set of H.
Note that a perfect F-tiling is also known as an F-factor or a perfect F-matching.
The following question in extremal graph theory has a long and rich history: given F and n, what is the maximum δ such that there exists a graph H on n vertices with minimum degree at least δ without a perfect F-tiling?
We call such δ the tiling degree threshold for F and denote it by t(n, F).
Note that if n ≢ 0 (mod |V(F)|) then a perfect F-tiling cannot exist, so this case is not interesting.
Hence we will always assume that n ≡ 0 (mod |V(F)|) whenever we discuss t(n, F).
A first result in the study of tiling thresholds in graphs comes from the celebrated theorem of Dirac <cit.> on Hamiltonian cycles, which easily shows that t(n, K_2) = n/2 - 1.
Corrádi and Hajnal <cit.> proved that t(n, K_3) = 2n/3 - 1,
and Hajnal and Szemerédi <cit.> generalized this result for complete graphs of any size, showing that t(n, K_t) = (1 - 1/t)n -1.
For a general graph F, Kühn and Osthus <cit.> determined t(n, F) up to an additive constant depending only on F. This improved previous results due to Alon and Yuster <cit.>, Komlós, Sárközy and Szemerédi <cit.> and Komlós <cit.>.
We study tilings in the setting of k-graphs, i.e. hypergraphs where every edge has exactly k vertices, for some k ≥ 2.
We focus on tilings using “tight cycles”, which are k-graphs that generalise the usual notion of cycles in graphs.
We also study the related problem of finding F-coverings in a hypergraph H, that is, finding copies of F, not necessarily vertex-disjoint, which together cover every vertex of H.
After choosing a notion of “minimum degree” for k-uniform hypergraphs, both tilings and coverings give rise to corresponding questions in extremal hypergraph theory, which generalise the “tiling thresholds” in graphs to the setting of hypergraphs.
In what follows, we describe precisely all of the problems under consideration.
§.§ Tiling thresholds
A hypergraph H = (V(H),E(H)) consists of a vertex set V(H) and an edge set E(H), where each edge e ∈ E(H) is a subset of V(H).
We will simply write V and E for V(H) and E(H), respectively, if it is clear from the context.
Given a set V and a positive integer k, Vk denotes the set of subsets of V with size exactly k.
We say that H is a k-uniform hypergraph or k-graph, for short, if E ⊆ Vk.
Note that 2-graphs are usually known simply as graphs.
Given a hypergraph H and a set S ⊆ V, let the neighbourhood N_H(S) of S be the set { T ⊆ V ∖ S : T ∪ S ∈ E } and let deg_H(S) = |N_H(S)| denote the number of edges of H containing S.
If w ∈ V, then we also write N_H(w) for N_H( {w} ).
We will omit the subscript if H is clear from the context.
We denote by δ_i(H) the minimum i-degree of H, that is, the minimum of deg_H(S) over all i-element sets S ∈ Vi.
Note that δ_0(H) is equal to the number of edges of H.
Given a k-graph H, δ_k-1(H) and δ_1(H) are referred to as the minimum codegree and the minimum vertex degree of H, respectively.
For k-graphs H and F, an F-tiling in H is a set of vertex-disjoint copies of F;
and an F-tiling is perfect if it spans the vertex set of H.
For a k-graph F, define the codegree tiling threshold t(n, F) to be the maximum of δ_k-1(H) over all k-graphs H on n vertices without a perfect F-tiling.
We implicitly assume n ≡ 0 (mod |V(F)|) whenever we discuss t(n, F).
We describe known results on tiling thresholds for k-graphs, when k ≥ 3.
Let K^k_t denote the complete k-graph on t vertices.
For k ≥ 3, Kühn and Osthus <cit.> determined t(n, K^k_k) asymptotically;
the exact value was determined by Rödl, Ruciński and Szemerédi <cit.> for sufficiently large n.
Lo and Markström <cit.> determined t(n, K^3_4) asymptotically, and independently, Keevash and Mycroft <cit.> determined t(n, K^3_4) exactly for sufficiently large n.
We say that a k-graph H is t-partite (or that H is a (k,t)-graph, for short) if V has a partition { V_1, …, V_t} such that |e ∩ V_i| ≤ 1 for all edges e ∈ E and all 1 ≤ i ≤ t.
A (k,t)-graph H is complete if E consists of all k-sets e such that |e ∩ V_i| ≤ 1, for all 1 ≤ i ≤ t.
Recently, Mycroft <cit.> determined the asymptotic value of t(n, K) for all complete (k,k)-graphs K.
However, much less is known for non-k-partite k-graphs.
For more results on tiling thresholds for k-graphs, see the survey of Zhao <cit.>.
§.§ Covering thresholds
Given a k-graph F, an F-covering in H is a spanning set of copies of F.
Similarly, define the codegree covering threshold c(n, F) of F to be the maximum of δ_k-1(H) over all k-graphs H on n vertices not containing an F-covering.
Trivially, a perfect F-tiling is an F-covering, and an F-covering contains a copy of F.
Thus,
ex_k-1(n, F) ≤ c(n, F) ≤ t(n, F),
where ex_k-1(n, F) is the codegree Turán threshold, that is, the maximum of δ_k-1(H) over all F-free k-graphs H on n vertices.
In this sense, the covering problem is an intermediate problem between the Turán and the tiling problems.
As for results on covering thresholds, for any non-empty (2-)graph F, we have c(n, F) = ( (χ(F) - 2)/(χ(F) - 1) + o(1) )n, see <cit.>,
where χ(F) is the chromatic number of F.
Han, Zang and Zhao <cit.> studied the vertex-degree variant of the covering problem, for complete (3,3)-graphs K.
Falgas-Ravry and Zhao <cit.> studied c(n, F) when F is K^3_4, K^3_4 with one edge removed, K^3_5 with one edge removed and other 3-graphs.
§.§ Cycles in hypergraphs
Given 1 ≤ℓ < k, we say that a k-graph on more than k vertices is an ℓ-cycle if every vertex lies in some edge and there is a cyclic ordering of the vertices such that under this ordering, every edge consists of k consecutive vertices and two consecutive edges intersect in exactly ℓ vertices.
Note that an ℓ-cycle on s vertices can exist only if k - ℓ divides s.
If ℓ = 1 we call the cycle loose, if ℓ = k-1 we call the cycle tight.
We write C^k_s for the k-uniform tight cycle on s vertices.
When k = 2, ℓ-cycles reduce to the usual notion of cycles in graphs.
Corrádi and Hajnal <cit.> determined t(n, C^2_3) and Wang <cit.> determined t(n, C^2_4) and t(n, C^2_5).
In fact, El-Zahar <cit.> gave the following conjecture on cycle tilings.
Let G be a graph on n vertices and let n_1, …, n_r ≥ 3 be integers such that n_1 + … + n_r = n.
If δ(G) ≥∑_i=1^r ⌈ n_i / 2 ⌉, then G contains r vertex-disjoint cycles of lengths n_1, …, n_r respectively.
The bound on the minimum degree, if the conjecture is true, would be best possible.
In particular, the conjecture would imply that t(n, C^2_s) = ⌈ s / 2 ⌉ n / s - 1.
The conjecture was verified for r = 2 by El-Zahar and a proof (for large n) was announced by Abbasi <cit.> as well as by Abbasi, Khan, Sárközy and Szemerédi (see <cit.>).
Given integers ℓ, k such that 1 ≤ℓ≤ (k-1)/2, it is easy to see that a k-uniform ℓ-cycle on s vertices C satisfies c(n, C) ≤ s+1 (by constructing C greedily).
If s ≡ 0 (mod k), then the tight cycle C^k_s is k-partite.
For all t ≥ 1, let K^k(t) denote the complete (k,k)-graph whose vertex classes each have size t.
Note that C^k_s is a spanning subgraph of K^k(s/k).
Erdős <cit.> proved the following result, which implies an upper bound on the Turán number of C^k_s.
For all k ≥ 2 and s > 1, there exists n_0 = n_0(k,s) such that ex( n, K^k(s) ) < n^{k - 1/s^{k-1}} for all n ≥ n_0.
Our first result is a sublinear upper bound for c(n, C^k_s) when s ≡ 0 (mod k).
For all 2 ≤ k ≤ s with s ≡ 0 (mod k), there exist n_0(k, s) and c = c(k, s) such that c(n, C_s^k) ≤ c n^{1 - 1/s^{k-1}} for all n ≥ n_0.
There are some previously known results for tiling problems regarding ℓ-cycles.
Whenever C is a 3-uniform loose cycle, t(n, C) was determined exactly by Czygrinow <cit.>.
For general loose cycles C in k-graphs, t(n, C) was determined asymptotically by Mycroft <cit.> and exactly by Gao, Han and Zhao <cit.>.
For tight cycles C^k_s with s ≡ 0 (mod k), Mycroft <cit.> proved that t(n, C^k_s) = (1/2 + o(1))n.
Notice that all mentioned cycle tiling results correspond to cases where the cycles are k-partite (since k-uniform loose cycles are k-partite for k ≥ 3).
We now focus on the covering and tiling problems for the tight cycle C^k_s, for all integers k,s which do not necessarily make C^k_s a (k,k)-graph.
We show that a minimum codegree of (1/2+o(1))n suffices to find a C^k_s-covering.
Let k, s ∈ℕ with k ≥ 3 and s ≥ 2k^2.
For all γ > 0, there exists n_0 = n_0(k, s, γ) such that for all n ≥ n_0, c(n, C^k_s) ≤ (1/2 + γ)n.
Moreover, this result is asymptotically tight if k and s satisfy the following divisibility conditions.
Let 2 ≤ k < s and let d = gcd(k,s). We say that the pair (k,s) is admissible if d = 1 or k/d is even.
Note that an admissible pair (k,s) satisfies s ≢ 0 (mod k).
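Admissibility is a purely arithmetic condition and is easy to test; for illustration (ours, not part of the paper):

```python
from math import gcd

def admissible(k, s):
    """(k, s) with 2 <= k < s is admissible iff gcd(k, s) = 1
    or k / gcd(k, s) is even."""
    d = gcd(k, s)
    return d == 1 or (k // d) % 2 == 0

# e.g. admissible(3, 5) and admissible(4, 6) hold, but admissible(3, 6)
# fails, consistent with the remark that admissible pairs have s % k != 0.
```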
Let 3 ≤ k < s be such that (k,s) is admissible.
Then c(n, C^k_s) ≥⌊ n / 2 ⌋ - k + 1.
Moreover, if k is even, then ex_k-1(n, C^k_s) ≥ ⌊ n / 2 ⌋ - k + 1.
Notice that if (k,s) is admissible, k ≥ 3 is even and s ≥ 2k^2, then Theorem <ref> and Proposition <ref> imply that ex_k-1(n, C^k_s) = (1/2 + o(1))n.
We also study the tiling problem corresponding to C^k_s.
We give some lower bounds on t(n, C^k_s).
Notice that the bound is significantly higher if (k,s) is admissible.
Let 2 ≤ k < s ≤ n with n divisible by s.
Then t(n, C^k_s) ≥⌊ n/2 ⌋ - k.
Moreover, if (k,s) is admissible, then
t(n, C_s^k) ≥⌊(1/2 + 1/2s) n ⌋ - k if k is even,
⌊( 1/2 + k/4s(k-1) + 2k) n ⌋- k if k is odd.
On the other hand, recall that the case s ≡ 0 (mod k) was solved asymptotically by Mycroft <cit.>, thus we study the complementary case.
We prove an upper bound on t(n, C^k_s) which is valid whenever s ≢ 0 (mod k) and s ≥ 5k^2.
Note that the bound is asymptotically sharp if k is even and (k,s) is admissible.
Let 3 ≤ k < s be such that s ≥ 5k^2 and s ≢ 0 (mod k).
Then, for all γ > 0, there exists n_0 = n_0(k, s, γ) such that for all n ≥ n_0 with n ≡ 0 (mod s),
t(n, C_s^k) ≤ ( 1/2 + 1/(2s) + γ)n.
§.§ Organisation of the paper
In Section <ref> we set up basic notation and give sketches of the proofs of our main results, Theorems <ref> and <ref>.
In Section <ref> we give constructions which imply lower bounds for the Turán numbers and covering and tiling thresholds of tight cycles, thus proving Propositions <ref> and <ref>.
In the next two sections we study the covering problem.
In Section <ref> we describe a family of gadgets which will be useful during the proofs of Proposition <ref> and Theorem <ref>.
Those proofs are done in Section <ref>.
Sections <ref>–<ref> are dedicated to investigating the tiling problem.
Our aim is the proof of Theorem <ref>, i.e. bounding t(n, C^k_s) from above.
In Section <ref>, we review the absorption technique for tilings, which we use in Section <ref> to prove Theorem <ref> under the assumption that we can find an almost perfect C^k_s-tiling (Lemma <ref>).
We prove Lemma <ref> in the next two sections: in Section <ref> we review tools of hypergraph regularity and in Section <ref> we introduce various auxiliary tilings that we use to finish the proof.
We conclude with some remarks and open problems in Section <ref>.
§ NOTATION AND SKETCHES OF PROOFS
For a hypergraph H and S ⊆ V, we denote by H[S] the subgraph of H induced on S, that is, V(H[S]) = S and E(H[S]) = { e ∈ E : e ⊆ S }.
Let H ∖ S = H[V ∖ S].
For hypergraphs H and G, let H - G be the subgraph of H obtained by removing all edges in E(H) ∩ E(G).
Given a, b, c reals with c > 0, by a = b ± c we mean that b - c ≤ a ≤ b + c.
We write x ≪ y to mean that for all y ∈ (0,1] there exists an x_0 ∈ (0,1) such that for all x ≤ x_0 the subsequent statement holds.
Hierarchies with more constants are defined in a similar way and are to be read from the right to the left.
We will always assume that the constants in our hierarchies are reals in (0,1].
Moreover, if 1/x appears in a hierarchy, this implicitly means that x is a natural number.
For all k-graphs H and all x ∈ V, define the link (k-1)-graph H(x) of x in H to be the (k-1)-graph with V(H(x)) = V ∖ { x } and E(H(x)) = N_H(x).
Given integers a_1, …, a_t ≥ 1, let K^k(a_1, …, a_t) denote the complete (k,t)-graph with vertex partition V_1, …, V_t such that |V_i| = a_i for all 1 ≤ i ≤ t.
For a family ℱ of k-graphs, an ℱ-tiling is a set of vertex-disjoint copies of (not necessarily identical) members of ℱ.
For a sequence of distinct vertices v_1, …, v_s in a k-graph H, we say P=v_1 … v_s is a tight path if every k consecutive vertices form an edge.
Note that all tight paths have an associated ordering of vertices.
Hence, v_1 … v_s and v_s … v_1 are assumed to be different tight paths, even if the corresponding subgraphs they define are the same.
Suppose that P_1 = v_1 … v_s and P_2 = w_1 … w_s' are two vertex-disjoint tight paths in a k-graph H.
If it happens that v_1 … v_s w_1 … w_s' is also a tight path in H, then we will denote it by P_1 P_2.
We sometimes refer to P_1 P_2 as the concatenation of P_1 and P_2.
Note that P_1P_2 has more edges than P_1 ∪ P_2.
We naturally extend this definition (whenever it makes sense) to the concatenation of a sequence of paths P_1, …, P_r, and we denote the resulting path by P_1 … P_r.
For two tight paths P_1 and P_2, we say that P_2 extends P_1, if P_2 = P_1 P' for some tight path P' (where we may have |V(P')| < k, that is, P' contains no edge).
Also, we may define a tight cycle C by writing C = v_1 … v_s, whenever v_i… v_s v_1 … v_i-1 is a tight path for all 1 ≤ i ≤ s.
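These definitions are easy to operationalise; the following small checker (our illustration, representing a k-graph by a set of frozensets of vertices) makes the conventions explicit:

```python
def is_tight_path(edges, seq, k):
    """True iff every k consecutive vertices of seq form an edge; sequences
    with fewer than k vertices contain no edge and count as tight paths."""
    if len(set(seq)) != len(seq):
        return False
    return all(frozenset(seq[i:i + k]) in edges
               for i in range(len(seq) - k + 1))

def is_tight_cycle(edges, seq, k):
    """True iff seq is a cyclic ordering in which every cyclic window of k
    consecutive vertices forms an edge (and |seq| > k)."""
    n = len(seq)
    if n <= k or len(set(seq)) != n:
        return False
    ext = list(seq) + list(seq[:k - 1])      # unroll the cyclic windows
    return all(frozenset(ext[i:i + k]) in edges for i in range(n))
```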
For all k ∈ℕ, let [k] = {1, …, k }. Let S_k be the symmetric group of all permutations of the set [k], with the composition of functions as the group operation. Let id∈ S_k be the identity function that fixes all elements in [k]. Given distinct i_1, …, i_r ∈ [k], the cyclic permutation (i_1 i_2 … i_r) ∈ S_k is the permutation that maps i_j to i_j+1 for all 1 ≤ j < r and i_r to i_1, and fixes all the other elements; we say that such a cyclic permutation has length r. All permutations σ∈ S_k can be written as a composition of cyclic permutations σ_1 …σ_t such that these cyclic permutations are disjoint, meaning that there are no common elements between all pairs of these different cyclic permutations.
Let H be a k-graph, V_1, …, V_k be disjoint vertex sets of V and let σ∈ S_k.
We say that a tight path P = v_1 … v_ℓ in H has end-type σ with respect to V_1, …, V_k if for all 2 ≤ i ≤ k, v_ℓ - k + i∈ V_σ(i). Similarly, we say P has start-type σ with respect to V_1, …, V_k if v_i∈ V_σ(i) for all 1 ≤ i ≤ k-1. If H and V_1, …, V_k are clear from the context, we simply say that P has end-type σ and start-type σ, respectively.
Note that one could define start-type and end-type in terms of (k-1)-tuples in [k] instead.
However, for our purposes, it is more convenient to define it in terms of permutations of [k].
§.§ Sketches of proofs of Theorems <ref> and <ref>
We now sketch the proof of Theorem <ref>.
Let H be a k-graph on n vertices with δ_k-1(H) ≥ (1/2 + γ)n.
Consider any vertex x ∈ V(H).
We can show that, for some appropriate value of t, x is contained in some copy K of K^k_k(t) with vertex classes V_1, …, V_k.
Suppose that s ≡ r (mod k) with 1 ≤ r < k.
Suppose P = v_1 … v_k is a tight path in K such that v_i ∈ V_i for all 1 ≤ i ≤ k and v_1 = x.
By wrapping around K, we may find a tight path P_2 = v_1 … v_ℓ which extends P, but if we only use vertices and edges of K, then we have v_j ∈ V_i where j ≡ i (mod k), for all j ∈ [ℓ].
To break this pattern, we will use some gadgets (see Section <ref> for a formal definition).
Roughly speaking, a gadget is a k-graph on V(K) and some extra vertices of H.
Using these gadgets we can extend P to a tight path P' with end-type σ, for an arbitrary σ∈ S_k (see Lemma <ref>).
Having done that (and choosing σ appropriately), then it is easy to extend P' into a copy of C_s^k by wrapping around V_1, …, V_k.
The proof of Theorem <ref> uses the absorbing method, introduced by Rödl, Ruciński and Szemerédi <cit.>.
We first find a small vertex set U ⊆ V(H) such that H[U ∪ W] has a perfect C_s^k-tiling for all small sets W with |U| + |W| ≡ 0 (mod s).
Thus the problem of finding a perfect C_s^k-tiling is reduced to finding a C_s^k-tiling in H ∖ U covering almost all of the remaining vertices.
However, we do not find such a C_s^k-tiling directly.
First we show that there exists a k-graph F_s on s vertices containing a C_s^k which has a particularly useful structure: it is obtained from a complete (k,k)-graph by adding a few extra vertices.
So finding an almost perfect F_s-tiling suffices.
Instead, we show that there exists an { F_s, E_s }-tiling for some suitable k-graph E_s, subject to the minimisation of some objective function ϕ.
We do so by considering its fractional relaxation, which we call a weighted fractional { F^∗_s, K^∗_s }-tiling (see Section <ref>).
Further, we use the hypergraph regularity lemma in the form of `regular slice lemma' of Allen, Böttcher, Cooley and Mycroft <cit.>.
§ LOWER BOUNDS
In this section, we construct k-graphs which give lower bounds for the codegree Turán numbers and covering and tiling thresholds for tight cycles.
These constructions will imply Proposition <ref> and Proposition <ref>.
We remark that the bounds obtained here can be improved by an additive constant via careful calculations and case distinctions, which we omit for the sake of giving a clear presentation.
Let A and B be disjoint vertex sets.
Define H^k_0 = H^k_0(A,B) to be the k-graph on A ∪ B such that the edges of H^k_0 are exactly the k-sets e of vertices that satisfy |e ∩ B| ≡ 1 (mod 2).
Note that δ_k-1(H^k_0) ≥ min{ |A|,|B| } - k + 1.
Let 3 ≤ k ≤ s and d = gcd(k,s).
Let A and B be disjoint vertex sets.
Suppose that H^k_0(A,B) contains a tight cycle C^k_s on s vertices with V(C^k_s) ∩ A ≠ ∅.
Then |V(C^k_s) ∩ A| ≡ 0 (mod s/d) and (k,s) is not an admissible pair.
Let C_s^k = v_1 … v_s.
For all 1 ≤ i ≤ s, let ϕ_i ∈{ A, B } be such that v_i ∈ϕ_i and let ϕ_s + i = ϕ_i.
If two edges e and e' in E(H^k_0(A,B)) satisfy |e ∩ e'| = k-1, then |e ∩ A| = |e' ∩ A| by construction.
Thus ϕ_{i+k} = ϕ_i for all 1 ≤ i ≤ s.
Therefore, since d = gcd(k,s) and the indices are taken modulo s, we have ϕ_{i+d} = ϕ_i for all 1 ≤ i ≤ s.
Hence, |V(C_s^k) ∩ A| ≡ 0 (mod s/d).
Let r = |{ v_1, …, v_k }∩ A| = |{ i: 1 ≤ i ≤ k, ϕ_i = A }|.
Note that r > 0 and r ∈{ k/d, 2k/d, …, k }.
Since { v_1, …, v_k } is an edge in H^k_0(A,B), it follows that k-r ≡ 1 (mod 2) and so, r ≢ k (mod 2).
This implies d ≥ 2 and k/d is odd, i.e. (k,s) is not an admissible pair.
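The arithmetic at the heart of this proof can be double-checked mechanically: r must lie in { k/d, 2k/d, …, k } with k - r odd, and such an r exists precisely when (k,s) fails to be admissible. A brute-force sanity check of this equivalence over small parameters (our illustration):

```python
from math import gcd

def parity_obstruction(k, s):
    """True iff no r in {k/d, 2k/d, ..., k} has k - r odd, i.e. no edge of
    H_0(A, B) is compatible with a tight cycle C_s^k meeting A."""
    d = gcd(k, s)
    return all((k - j * (k // d)) % 2 == 0 for j in range(1, d + 1))

# the obstruction holds exactly for the admissible pairs:
assert all(parity_obstruction(k, s)
           == (gcd(k, s) == 1 or (k // gcd(k, s)) % 2 == 0)
           for k in range(2, 12) for s in range(k + 1, 40))
```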
Now we use Proposition <ref> to prove Propositions <ref> and <ref>.
Let A and B be disjoint vertex sets of sizes |A| = ⌊ n/2 ⌋ and |B| = ⌈ n/2 ⌉.
Consider the k-graph H_0 = H^k_0(A,B).
By Proposition <ref>, no vertex of A can be covered by a copy of C^k_s.
Then c(n, C^k_s) ≥ δ_k-1(H_0) ≥ ⌊ n / 2 ⌋ - k + 1.
Moreover, if k is even, then H^k_0(A,B) = H^k_0(B,A).
So no vertex of B can be covered by a copy of C^k_s.
Hence H_0 is C^k_s-free.
Therefore, ex_k-1(n, C^k_s) ≥ δ_k-1(H_0) ≥ ⌊ n / 2 ⌋ - k + 1.
To see the first part of the statement, let d := gcd(k,s) and s' := s/d.
Note that d ≤ k < s, thus s' > 1.
Let A and B be disjoint vertex sets chosen such that |A| + |B| = n, | |A| - |B| | ≤ 2 and |A| ≢ 0 (mod s').
Consider the k-graph H_0 = H^k_0(A,B) and note that δ_k-1(H_0) ≥ min{ |A|,|B| } - k + 1 ≥ ⌊ n/2 ⌋ - k.
Proposition <ref> implies that all copies C of C^k_s in H_0 satisfy |V(C) ∩ A| ≡ 0 (mod s').
Since |A| ≢ 0 (mod s'), it is impossible to cover all vertices in A with vertex-disjoint copies of C_s^k.
This proves that t(n, C^k_s) ≥ δ_k-1(H_0) ≥ ⌊ n/2 ⌋ - k, as desired.
Now suppose that (k,s) is an admissible pair.
Let H be the k-graph on n vertices with a vertex partition {A, B, T} with |A| = ⌈ (n - |T|) / 2 ⌉ and |B| = ⌊ (n - |T|)/2 ⌋, where |T| will be specified later.
The edge set of H consists of all k-sets e such that |e ∩ B| ≡ 1 (mod 2) or e ∩ T ≠ ∅.
Note that δ_k-1(H) ≥ min{|A|, |B|} + |T| - (k-1) ≥ ⌊ (n + |T|)/2 ⌋ - k + 1. We separate the analysis into two cases depending on the parity of k.
Case 1: k even.
Since H[A ∪ B] = H^k_0(A, B) = H^k_0(B,A), by Proposition <ref>, H[A ∪ B] is C^k_s-free.
Thus, all copies of C^k_s in H must intersect T in at least one vertex.
Hence, any C^k_s-tiling has at most |T| vertex-disjoint copies of C^k_s.
Taking |T| = n/s - 1 ensures that H does not contain a perfect C^k_s-tiling.
This implies that t(n, C^k_s) ≥ ⌊(1/2 + 1/(2s)) n⌋ - k.
Case 2: k odd.
Since H[A ∪ B] = H^k_0(A,B), by Proposition <ref> no vertex in A can be covered by a copy of C^k_s lying in H[A ∪ B].
Hence, all copies of C^k_s in H with non-empty intersection with A must also have non-empty intersection with T.
Moreover, all edges of H intersect A in at most k-1 vertices, so all copies of C^k_s in H intersect A in at most s(k-1)/k vertices.
Thus a perfect C^k_s-tiling would contain at most |T| and at least k|A|/(s(k-1)) cycles intersecting A.
Let |T| = ⌈ nk / (2s(k-1) + k) ⌉ - 1.
Since |T| < nk/(2s(k-1)+k) and |A| ≥ (n-|T|)/2,
k|A|/(s(k-1)) ≥ k(n-|T|)/(2s(k-1)) > (nk/(2s(k-1))) ( 1 - k/(2s(k-1) + k) ) > |T|,
and thus a perfect C^k_s-tiling in H cannot exist.
This implies
t(n, C^k_s) ≥ δ_k-1(H) ≥ ⌊(n+|T|)/2⌋ - k + 1 ≥ ⌊( 1/2 + k/(4s(k-1) + 2k) ) n ⌋ - k,
as desired.
§ G-GADGETS
Throughout this section, let τ = (1 2 3 … k) ∈ S_k.
Let H be a k-graph, and let K be a complete (k,k)-graph in H with its natural vertex partition { V_1, …, V_k}.
Knowing the end-types and start-types of paths with respect to V_1, …, V_k will help us to concatenate them and form longer paths which contain them both.
For instance, if P_1 and P_2 are vertex-disjoint tight paths, P_1 has end-type π and P_2 has start-type π, then we can concatenate the paths and obtain P_1 P_2.
Let P be a tight path in H with end-type π∈ S_k.
For x ∈ V_π(1) ∖ V(P), Px is a tight path of H with end-type πτ.
We call such an extension a simple extension of P.
By repeatedly applying r simple extensions (which is possible as long as there are available vertices), we may obtain an extension P x_1 … x_r of P with end-type πτ^r, using r extra vertices and edges in K.
In the same spirit, observe that if P_1 has end-type π and P_2 has start-type πτ, then the sequence of ordered clusters corresponding to the last k-1 vertices of P_1 coincides with the corresponding sequence of the first k-1 vertices of P_2.
Thus, by using one extra vertex x ∈ V_π(1)∖ (V(P_1) ∪ V(P_2)) and setting P_1 x P_2, we can join these paths.
If P is a path with end-type π, we would like to find a path P' that extends P such that |V(P')| ≡ |V(P)| (mod k) and P' has end-type σ, for arbitrary σ ∈ S_k.
The goal of this section is to define and study `G-gadgets', a tool which will allow us to do precisely that.
Let G be a 2-graph on [k] and S ⊆ V().
We say W_G ⊆ V() is a G-gadget for K avoiding S if there exists a family of pairwise-disjoint sets { W_ij : ij ∈ E(G) } such that W_G = ⋃_ij ∈ E(G) W_ij, and for all ij ∈ E(G),
* |W_ij| = 2k - 1,
* |W_ij∖ V(K)| = 1, W_ij∩ S = ∅ and, for all 1 ≤ i' ≤ k,
|W_ij∩ V_i'| =
1 if i' ∈{i,j},
2 otherwise,
* for all σ ∈ S_k with σ(1) ∈ {i, j}, H[W_ij] contains a spanning tight path with start-type στ and end-type (ij) σ.
If K is clear from the context, we will just say “a G-gadget avoiding S”.
For all edges ij ∈ E(G), we write w_ij for the unique vertex in W_ij∖ V(K).
We emphasize that <ref> is the key property that allows us to obtain an extension of a path at the same time we perform a change in the end-type.
In words, <ref> says that given any k-1 ordered clusters that miss V_i, there exists a tight path with vertex set W_ij, which starts with the same ordered k-1 clusters and ends with the same ordered k-1 clusters but with V_j replaced by V_i.
In other words, W_ij allows us to “switch” the type of a path by exchanging the roles of i and j.
See Figure <ref> for an example.
Suppose P is a tight path with end-type π and σ is a cyclic permutation. In the next lemma, we show how to extend P into a tight path with end-type σπ using a G-gadget, where G is a path.
Let k ≥ 3 and r ≥ 2.
Let σ = (i_1 i_2 … i_r) ∈ S_k be a cyclic permutation.
Let G be a 2-graph on [k] containing the path Q = i_1 i_2 … i_r.
Let H be a k-graph containing a complete (k,k)-graph K with vertex partition V_1, …, V_k.
Suppose that P is a tight path in with end-type π∈ S_k such that π(1) = i_r.
Suppose W_G is a G-gadget avoiding V(P) and |V_i_j∖ V(P)| ≥ 2|E(G)| for all 1 ≤ j ≤ r.
Then there exists an extension P' of P with end-type σπ such that
* |V(P')| = |V(P)| + 2k(r-1),
* for all 1 ≤ i ≤ k,
| V_i ∩ ( V(P') ∖ V(P) )| =
2 (r-1) - 1 if i ∈{i_1, i_2, …, i_r-1},
2 (r-1) otherwise,
* there exists a (G - Q)-gadget W_G - Q for K avoiding V(P') and
* V(P') ∖ V(P ∪ K) = { w_i_j i_j+1 : 1 ≤ j < r }.
We proceed by induction on r.
First suppose that r = 2 and so σ = (i_1 i_2).
Consider a G-gadget W_G avoiding V(P).
Since i_1 i_2 ∈ E(G), there exists a set W_i_1 i_2 ⊆ W_G disjoint from V(P) such that |W_i_1 i_2| = 2k - 1 and H[W_i_1 i_2] contains a spanning tight path P” with start-type πτ and end-type (i_1 i_2) π = σπ.
Note that |V_i_2 ∩ W_G| ≤ 2 |E(G)| - 1, as |V_i_2 ∩ W_i_1i_2| = 1.
Hence V_i_2 ∖ ( V(P) ∪ W_G ) ≠ ∅.
Take an arbitrary vertex x_i_2 ∈ V_i_2 ∖ ( V(P) ∪ W_G ) and set P' = P x_i_2 P”.
Since π( 1 ) = i_2, it follows that P' is a tight path with end-type σπ, and P' satisfies properties (i), (ii) and (iv).
Set W_G - i_1 i_2 = W_G ∖ W_i_1 i_2. Then W_G - i_1 i_2 is a (G - i_1 i_2)-gadget for K avoiding V(P'), so P' satisfies property (iii), as desired.
Next, suppose r > 2. Define σ' = (i_2 i_3 … i_r) and note that σ = (i_1 i_2) σ'. Then σ' is a cyclic permutation of length r-1, with π(1) = i_r and the path Q' = i_2 … i_r-1 i_r is a subgraph of G. By the induction hypothesis, there exists an extension P” of P with end-type σ' π such that |V(P”)| = |V(P)| + 2k(r-2) and,
for all 1 ≤ i ≤ k,
| V_i ∩ ( V(P”) ∖ V(P) )| =
2 (r-2) - 1 if i ∈{i_2, i_3, …, i_r-1},
2 (r-2) otherwise.
Moreover, there exists a (G - Q')-gadget W_G - Q' avoiding V(P”) and V(P”) ∖ V(P ∪ K) = { w_i_j i_j+1 : 2 ≤ j < r }.
Note that σ' π (1) = σ'( i_r ) = i_2 and i_1 i_2 ∈ E(G - Q').
For all 1 ≤ j ≤ r, | V_i_j ∖ V(P”) | ≥ 2 |E(G - Q')|.
Again by the induction hypothesis, there exists an extension P' of P” with end-type (i_1 i_2) σ' π = σπ such that |V(P')| = |V(P”)| + 2k = |V(P)| + 2k(r-1) and,
for all 1 ≤ i ≤ k,
| V_i ∩ ( V(P') ∖ V(P”) )| =
1 if i = i_1,
2 otherwise.
and V(P') ∖ (V(P”∪ K)) = { w_i_1 i_2}, so P' satisfies properties (i), (ii) and (iv). Furthermore, set W_G - Q = W_G - ⋃_j = 1^r-1 W_i_j i_j+1. Then W_G - Q is a (G - Q)-gadget for K avoiding V(P'), so P' satisfies property (iii) as well.
In the next lemma, we show how to extend a path with end-type id to one with an arbitrary end-type.
We will need the following definitions.
Consider an arbitrary σ∈ S_k ∖{id}.
Write σ in its cyclic decomposition
σ = (i_1,1 i_1,2… i_1, r_1 ) (i_2,1 i_2,2… i_2, r_2)
…
( i_t,1 i_t,2… i_t, r_t ) ,
where σ is a product of t = t(σ) disjoint cyclic permutations of respective lengths r_1, …, r_t so that r_j ≥ 2 and i_j,r_j = min{ i_j, r' : 1 ≤ r' ≤ r_j } for all 1 ≤ j ≤ t; and i_1,r_1 < i_2,r_2 < … < i_t,r_t.
Define m(σ) = i_t,r_t. On the other hand, if σ = id, then define t(σ) = 0 and m(σ) = 1.
Define G_σ to be the 2-graph on [k] consisting precisely of the (vertex-disjoint) paths Q_j = i_j,1 i_j,2… i_j,r_j for all 1 ≤ j ≤ t(σ).
So G_id is an empty 2-graph.
Note that for all σ,
2| E(G_σ)| + t(σ) = 2 ∑_j=1^t(σ) r_j - t(σ) ≤ 2k-1.
For 1 ≤ i ≤ k and σ∈ S_k ∖{id}, set X_i, σ = 1 if i ∈{ i_t', 1, …, i_t', r_t' - 1} for some 1 ≤ t' ≤ t, and X_i, σ = 0 otherwise.
Also, for 1 ≤ i ≤ k, set Y_i, σ = 1 if i ∈{σ(j) : 1 ≤ j < m(σ) } and Y_i, σ = 0 otherwise. If σ = id, then define X_i, σ = Y_i, σ = 0 for all 1 ≤ i ≤ k.
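This normal form is mechanical to compute; the routine below (our illustration, with a permutation of [k] given as a dict) returns the cycles in the order fixed above, from which t(σ) is the number of cycles, m(σ) is the last element of the last cycle, and E(G_σ) is read off directly:

```python
def cycle_decomposition(sigma):
    """Nontrivial cycles of sigma, each written i_{j,1} ... i_{j,r_j} with
    its minimum element last, and listed so the cycle minima increase."""
    seen, cycles = set(), []
    for i in sorted(sigma):                  # visit minima in increasing order
        if i in seen or sigma[i] == i:
            seen.add(i)
            continue
        orb, j = [i], sigma[i]
        while j != i:
            orb.append(j)
            seen.add(j)
            j = sigma[j]
        seen.add(i)
        cycles.append(orb[1:] + orb[:1])     # rotate: minimum element last
    return cycles

def G_sigma_edges(sigma):
    """Edges of G_sigma: consecutive pairs along each path Q_j."""
    return [(c[a], c[a + 1]) for c in cycle_decomposition(sigma)
            for a in range(len(c) - 1)]
```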
Let k ≥ 3. Let H be a k-graph containing a complete (k,k)-graph K with vertex partition V_1, …, V_k and a tight path P with end-type id.
Let σ ∈ S_k and let G be a 2-graph on [k] containing G_σ.
Suppose that K has a G-gadget W_G avoiding V(P), and |V_i ∖ V(P)| ≥ 2 | E(G) | + 2 for all 1 ≤ i ≤ k.
Then there exists an extension P' of P with end-type στ^m(σ) - 1 such that
* |V(P')| = |V(P)|+ 2 k |E(G_σ)| + m(σ) - 1,
* for all 1 ≤ i ≤ k, |V_i ∩ ( V(P') ∖ V(P) )| = 2|E(G_σ)| - X_i, σ + Y_i, σ,
* K has a (G - G_σ)-gadget avoiding V(P') and
* V(P') ∖ V(P ∪ K) = { w_ij : ij ∈ E(G_σ) }.
Let
σ = (i_1,1 i_1,2… i_1, r_1 ) (i_2,1 i_2,2… i_2, r_2)
…
( i_t,1 i_t,2… i_t, r_t )
as defined above. We proceed by induction on t = t(σ). If t = 0, then σ = id and m(σ) = 1, so the lemma holds by setting P' = P. Now suppose that t ≥ 1 and the lemma is true for all σ' ∈ S_k with t(σ') <t. Let
σ_1 = (i_1,1 i_1,2… i_1, r_1 ) (i_2,1 i_2,2… i_2, r_2) … ( i_t-1,1 i_t-1,2… i_t-1, r_t-1 )
and σ_2 = ( i_t,1 i_t,2… i_t, r_t ), so σ_1 σ_2 = σ_2 σ_1 = σ.
For 1 ≤ i ≤ 2, let G_i = G_σ_i and m_i = m(σ_i).
Note that G_σ = G_1 ∪ G_2.
Let G' = G - G_1.
Since t(σ_1)= t-1, by the induction hypothesis, there exists a path P_1 that extends P with end-type σ_1 τ^ m_1- 1 such that
* |V(P_1)| = |V(P)| + 2 k |E(G_1)| + m_1 - 1,
* for all 1 ≤ i ≤ k, |V_i ∩ ( V(P_1) ∖ V (P) )| = 2 |E(G_1)| - X_i, σ_1 + Y_i, σ_1,
* K has a G'-gadget W_G' avoiding V(P_1) and
* V(P_1) ∖ V(P ∪ K) = { w_ij : ij ∈ E(G_1) }.
Note that for all 1 ≤ i ≤ k,
| V_i ∖( V(P_1) ∪ W_G') | ≥ 2 |E(G)| + 2 - (2 |E(G_1)| + 1 ) - 2 |E(G')| =1.
We extend P_1 using m_2 - m_1 >0 simple extensions, avoiding the set V(P_1) ∪ W_G' in each step, to obtain an extension P_2 of P_1 with end-type σ_1 τ^m_1- 1τ^m_2 - m_1 = σ_1 τ^m_2 - 1 such that
|V(P_2)| = |V(P_1)| + m_2 - m_1 = |V(P)| + 2 k |E(G_1)| + m_2 - 1
and W_G' is a G'-gadget for K that avoids V(P_2).
As P_1 has end-type σ_1 τ^m_1 - 1, V(P_2) ∖ V(P_1) contains precisely one vertex in V_i for all i ∈{σ_1 τ^m_1 - 1(j) : 1 ≤ j ≤ m_2 - m_1 } = {σ_1 ( m_1 ), …, σ_1(m_2 - 1) }.
Since σ_1(i) = σ(i) for all m_1 ≤ i < m_2 and m_2 = i_t, r_t, together with <ref> we deduce that
|V_i ∩ ( V(P_2) ∖ V (P) )| = 2 |E(G_1)| - X_i, σ_1 + Y_i, σ.
Note that σ_1 τ^m_2 - 1(1) = σ_1 ( m_2 ) = σ_1 ( i_t, r_t ) = i_t, r_t.
Since G' contains G_2, by Lemma <ref> there exists an extension P' of P_2 with |V(P')| = |V(P_2)| + 2k|E(G_2)| and P' has end-type σ_2 σ_1 τ^m_2 - 1 = στ^m(σ) - 1, as m_2 = m(σ).
Moreover, as G' - G_2 = G - G_σ, K has a (G - G_σ)-gadget avoiding V(P'), implying (iii).
Similarly, (iv) holds.
Note that
|V(P')| = |V(P_2)| + 2k|E(G_2)| = |V(P)| + 2k |E(G_σ)| + m(σ) - 1
implying (i). Finally, for all 1 ≤ i ≤ k, we have
|V_i ∩ (V(P') ∖ V(P_2) )| =
2 |E(G_2)| - 1 if i ∈{i_t,1, …, i_t, r_t - 1},
2 |E(G_2)| otherwise.
So |V_i ∩ (V(P') ∖ V(P_2))| = 2|E(G_2)| - X_i, σ_2.
Note that X_i, σ = X_i, σ_1 + X_i, σ_2 because σ_1 and σ_2 are disjoint.
Thus, together with (<ref>), (ii) holds.
Now we want to use the previous lemmas to find tight cycles of a given length.
Let P be a tight path with start-type σ and end-type π. If π = σ, then there exists a tight cycle C containing P with V(C) = V(P). Similarly if π = στ^-r, then (by using r simple extensions) there exists a tight cycle C on |V(P)| + r vertices containing P.
In general, in order to extend P into a tight cycle we use Lemma <ref> to first extend P to a path P' with end-type στ^- r for some suitable r, using the edges of K and a suitable G-gadget.
The next lemma formalises the aforementioned construction of the tight cycle C containing P and gives us precise bounds on the sizes of V_i ∩ (V(C) ∖ V(P)) in the case where σ = π, which will be useful during Section <ref>.
Let k ≥ 3. Let σ, π∈ S_k and 0 ≤ r < k.
Then there exists a 2-graph G := G(σ, π, r) on [k] consisting of a vertex-disjoint union of paths such that the following holds for all s ≥ k(2k-1) with s ≡ r (mod k):
let H be a k-graph containing a complete (k,k)-graph K with vertex partition V_1, …, V_k, and let P be a tight path with start-type σ and end-type π.
Suppose W_G is a G-gadget for K avoiding V(P) and | V_i ∖ V(P) | ≥ ⌊ s/k ⌋ + 1 for all 1 ≤ i ≤ k.
Then there exists a tight cycle C on |V(P)| + s vertices containing P, such that
V(C) ∖ (V(P ∪ K)) = { w_ij : ij ∈ E(G) }.
Moreover, if σ = π, then for all 1 ≤ i,j ≤ k,
| | V_i ∩ ( V(C) ∖ V(P)) | - | V_j ∩ ( V(C) ∖ V(P)) | | ≤ 1.
Without loss of generality, we may assume that π = id.
Define σ' = στ^-r∈ S_k.
Let G = G_σ'.
Note that |E(G)| ≤ k-1, t(σ')≤ k/2 and 2|E(G)| + t(σ') ≤ 2k - 1 by (<ref>).
Let ,K,P be as defined in the lemma.
By Lemma <ref>, there exists an extension P' of P with end-type σ' τ^m(σ') - 1 such that
|V(P')| = |V(P)|+ 2 k |E(G)| + m(σ') - 1, for all 1 ≤ i ≤ k,
|V_i ∩ ( V(P') ∖ V (P) )| = 2 |E(G)| - X_i, σ' + Y_i, σ'
and V(P') ∖ (V(P ∪ K)) = { w_ij : ij ∈ E(G) }.
We use k - m(σ') + 1 simple extensions to get an extension P” of P' of order
|V(P”)| = |V(P')| + (k - m(σ') + 1) = |V(P)| + 2 k |E(G)|+k.
Note that V(P”) ∖ V(P') uses precisely one vertex in each of the clusters V_i for all i ∈{σ' τ^m(σ') - 1(j) : 1 ≤ j ≤ k - m(σ') + 1 } = {σ'(j) : m(σ') ≤ j ≤ k } = { j : Y_j, σ' = 0 }. It follows that for all 1 ≤ i ≤ k,
|V_i ∩ ( V(P”) ∖ V (P) )| = 2 |E(G)| + 1 - X_i, σ'.
Note that P” has end-type σ ' τ^m(σ') - 1τ^k - m(σ') + 1 = σ' = στ^-r.
For all 1 ≤ i ≤ k and 0 ≤ r < k, set Z_i, σ, r = 1 if i ∈{σ(j) : k-r+1 ≤ j ≤ k }, and set Z_i, σ, r = 0 otherwise. We use r more simple extensions to get an extension P”' of P with end-type στ^-rτ^r = σ of order
|V(P”')| = |V(P”)| + r = |V(P)| + 2 k |E(G)|+k + r
such that, for all 1 ≤ i ≤ k,
|V_i ∩ ( V(P”') ∖ V (P) )| = 2 |E(G)| + 1 + Z_i, σ, r - X_i, σ'.
Since |E(G)| ≤ k-1 and s ≡ r (mod k), it follows that |V(P”')| ≤ |V(P)| + s. Also, |V(P”') ∖ V(P)| ≡ s (mod k). For all 1 ≤ i ≤ k,
|V_i ∖ V(P”') |
≥ |V_i ∖ V(P) | - 2 |E(G)| - 1 + X_i, σ' - Z_i, σ, r
≥⌊ s / k ⌋ - 2 |E(G)| - 1 = 1/k( k ⌊ s / k ⌋ - 2 k |E(G)| - k )
= 1/k( s - r - 2 k |E(G)| - k ) = 1/k( s - (|V(P”')| - |V(P)|) ).
Since P”' has start-type σ and end-type σ, then we can easily extend P”' (using simple extensions) into a tight cycle C on |V(P)| + s vertices.
Note that V(C) ∖ (V(P ∪ K)) = { w_ij : ij ∈ E(G) }, as desired.
Moreover, for all 1 ≤ i,j ≤ k,
| | V_i ∩ ( V(C) ∖ V(P)) | - | V_j ∩ ( V(C) ∖ V(P)) | |
= | | V_i ∩ ( V(P”') ∖ V(P)) | - | V_j ∩ ( V(P”') ∖ V(P)) | |
= | ( Z_i, σ, r - X_i, σ' ) - ( Z_j, σ, r - X_j, σ' ) |.
Suppose now that σ = π = id. We will show that -1 ≤ Z_i, σ, r - X_i, σ'≤ 0 for all 1 ≤ i ≤ k, implying that for all 1 ≤ i, j ≤ k, | | V_i ∩ ( V(C) ∖ V(P)) | - | V_j ∩ ( V(C) ∖ V(P)) | | ≤ 1.
It suffices to show that if Z_i, σ, r = 1, then X_i, σ' = 1.
If r = 0 then it is obvious, so suppose that 1 ≤ r < k.
Let 1 ≤ i ≤ k such that Z_i, σ, r = 1.
Since σ = π = id, then σ' = τ^-r.
So if Z_i, σ, r = 1, then k - r + 1 ≤ i ≤ k.
To show that X_i, τ^-r = 1, we need to show that i is not the minimal element of the cycle to which it belongs in the cyclic decomposition of τ^-r, that is, that there exists m < i such that i is in the orbit of m under τ^-r.
Let d = (r, k).
Choose 1 ≤ m ≤ d such that m ≡ i d.
The order of τ^-r is exactly k/d and the orbit of m has exactly k/d elements.
There are exactly k/d elements i' satisfying 1 ≤ i' ≤ k and i' ≡ m d, and all elements i' in the orbit of m also satisfy i' ≡ m d, so it follows that i is in the orbit of m under τ^-r.
Finally, m ≤ d ≤ k - r < i.
This proves that X_i, τ^-r = 1, as desired.
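To illustrate the final step with a concrete instance (with τ the k-cycle used throughout), take k = 6 and r = 4, so d = (4, 6) = 2: the cyclic decomposition of τ^-4 = τ^2 consists of the two cycles {1, 3, 5} and {2, 4, 6}, with minimal elements 1 and 2, so X_i, τ^-4 = 1 precisely for i ∈{3, 4, 5, 6}; on the other hand Z_i, id, 4 = 1 precisely for i ∈{3, 4, 5, 6}, and indeed Z_i, σ, r - X_i, σ'∈{-1, 0} for every i.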
§.§ Finding G-gadgets in k-graphs with large codegree
We now turn our attention to the existence of G-gadgets.
We prove that all large complete (k,k)-graphs contained in a k-graph with δ_k-1(H) large have a G-gadget, for an arbitrary 2-graph G on [k].
Let 0 < 1/n, 1/t_0 ≪γ, 1/k.
Let H be a k-graph on n vertices with δ_k-1(H) ≥ (1/2 + γ)n containing a complete (k,k)-graph K with vertex partition V_1, …, V_k.
Let S ⊆ V(H) be a set of vertices such that |V(K) ∪ S| ≤γ n /2 and |V_i ∖ S| ≥ t_0 for all 1 ≤ i ≤ k.
Let G be a 2-graph on [k].
Then there exists a G-gadget for K avoiding S.
Choose 0 < 1/t ≪γ, 1/k and let t_0 = t + k^2.
Suppose that ij ∈ E(G) and |V_ℓ∖ S| ≥ t + 2 |E(G)| for all 1 ≤ℓ≤ k.
Let U_ℓ⊆ V_ℓ∖ S with |U_ℓ| = t for all 1 ≤ℓ≤ k and let R = [k] ∖{i, j}.
Let U = ⋃_1 ≤ℓ≤ k U_ℓ and
T = { A ∈Uk-1 : | A ∩ U_r| = 1 for all r ∈ R and |A ∩ (U_i ∪ U_j)| = 1 }.
Then T has size 2 t^k-1.
By the codegree condition, every member of T has at least (1/2 + γ)n - |V(K) ∪ S| ≥ (1/2 + γ/2)n neighbours outside of V(K) ∪ S, and by an averaging argument there exists a vertex w ∉ V(K) ∪ S whose link H(w) satisfies | H(w) ∩ T | ≥ (1 + γ) t^k-1.
For all u ∈ U_i ∪ U_j, N_{H(w) ∩ T}(u) is a family of (k-2)-sets of ⋃_r ∈ R U_r. We have that
∑_(u_i, u_j) ∈ U_i × U_j |N_{H(w) ∩ T}(u_i) ∩ N_{H(w) ∩ T}(u_j)|
≥∑_(u_i, u_j) ∈ U_i × U_j( d_{H(w) ∩ T}(u_i) + d_{H(w) ∩ T}(u_j) - t^k-2)
= t |H(w) ∩ T| - t^k ≥ t^k(1 + γ) - t^k = γ t^k,
and by an averaging argument, there exists a pair (x^∗_i, x^∗_j) ∈ U_i × U_j such that |N_{H(w) ∩ T}(x^∗_i) ∩ N_{H(w) ∩ T}(x^∗_j)| ≥γ t^k-2.
By the choice of t and by Theorem <ref>, we have that N_{H(w) ∩ T}(x^∗_i) ∩ N_{H(w) ∩ T}(x^∗_j) contains a copy K' of K^k-2_k-2(2). Define W_ij = V(K') ∪{ w, x_i^∗, x_j^∗} and note that |W_ij| = 2(k-2) + 3 = 2k - 1.
We now check that <ref> holds for W_ij.
Recall that, informally, this means that given any k-1 ordered clusters that miss V_i, there exists a tight path with vertex set W_ij, which starts with the same ordered k-1 clusters and ends with the same ordered k-1 clusters but with V_j replaced by V_i.
For all r ∈ R, let U_r ∩ V(K') = { x_r, x'_r}.
Consider an arbitrary σ∈ S_k with σ(1) = i and σ(j') = j.
By construction, we have that
x_σ(2) x_σ(3)… x_σ(j' - 1) x^∗_j x_σ(j' + 1) x_σ(j' + 2)… x_σ(k) w x'_σ(2) x'_σ(3)… x'_σ(j' - 1) x^∗_i x'_σ(j'+1) x'_σ(j'+2)… x'_σ(k)
is a spanning tight path in H[W_ij], of start-type στ and end-type (ij)σ.
Clearly W_ij is an ij-gadget avoiding S.
Set S' = S ∪ W_ij and G' = G - ij.
Repeating this construction for all edges in E(G - ij) and using that t_0 = t + k^2, it is possible to conclude that K has a G-gadget avoiding S.
§.§ Auxiliary k-graphs F_s
Given a tight cycle C_s^k, we would like to find a k-graph F_s such that C_s^k ⊆ F_s and F_s is obtained from a complete (k,k)-graph by adding “few” extra vertices.
This will be useful in Section <ref>.
Let K be a (k,k)-graph with vertex partition V_1, …, V_k.
Consider a 2-graph G on [k] with E(G) = { j_i j'_i : 1 ≤ i ≤ℓ} and let y_1, …, y_ℓ be a set of ℓ vertices disjoint from V(K).
Let W_G := { y_1, …, y_ℓ}.
We define the G-augmentation of K to be the k-graph F = F(K,G) such that
V(F) = V(K) ∪ W_G and
E(F) = E(K) ∪⋃_1 ≤ i ≤ℓ ( E(H(y_i, j_i)) ∪ E(H(y_i, j'_i)) ),
where H(v,j) is a complete (k,k)-graph with partition { v }, V_1, V_2, …, V_j-1, V_j+1, …, V_k.
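For instance, for k = 3 and G consisting of the single edge 12, the G-augmentation of K has vertex set V(K) ∪{ y_1 }, and its edges are those of K together with all triples { y_1, v_2, v_3 } with v_2 ∈ V_2, v_3 ∈ V_3 (coming from H(y_1, 1)) and all triples { y_1, v_1, v_3 } with v_1 ∈ V_1, v_3 ∈ V_3 (coming from H(y_1, 2)).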
The easy (but crucial) observation is that if |V_i| ≥ 2 ℓ for all 1 ≤ i ≤ k,
then the G-augmentation of K contains a G-gadget for K avoiding ∅.
Using that, we can prove the following.
Let k ≥ 3, s ≥ 2k^2 and s ≢0 k.
Then there exists a 2-graph G_s on [k] that is a disjoint union of paths, and a_s,1, …, a_s,k, ℓ∈ℕ such that
| a_s,i - a_s,j | ≤ 1 for all i, j ∈ [k],
ℓ = |E(G_s)| ≤ k - 1,
and if K = K^k(a_s,1, …, a_s,k), then F_s, the G_s-augmentation of K, contains a spanning C_s^k and |V(F_s) ∖ V(K)| = ℓ.
Let r ∈{1, …, k-1 } be such that s ≡ r k.
Let G_s be the 2-graph obtained from Lemma <ref> (with parameters σ = π = id and r).
Note that G_s is a disjoint union of paths and thus ℓ = |E(G_s)| ≤ k-1.
Suppose that V_1, …, V_k are disjoint sets of size ⌊ s/k ⌋ + 1 and let K' be the complete (k,k)-graph with partition { V_1, …, V_k }.
For all i ∈ [k] let v_i ∈ V_i and consider the tight path P = v_1 … v_k.
Note that P has both start-type and end-type id.
Let F' be the G_s-augmentation of K'.
It is easily checked that |V_i ∖ V(P)| ≥ 2(k-1) ≥ 2ℓ and therefore there is a G_s-gadget for K' in F' avoiding V(P).
By the choice of G_s, F' contains a tight cycle C on s vertices containing P such that V(C) ∖ V(K') = V(F') ∖ V(K') = W_G_s and, over the range i ∈ [k], the values |V(C) ∩ V_i| differ by at most 1.
It is easily checked that letting a_s,i := |V(C) ∩ V_i| we obtain the desired properties.
§ COVERING THRESHOLDS FOR TIGHT CYCLES
In this section, we prove the upper bounds for the covering codegree threshold for tight cycles, proving Proposition <ref> and Theorem <ref>.
We first prove Proposition <ref>, which immediately implies Proposition <ref> since K^k(s) contains a C^k_s'-covering for all s' ≡ 0 k with s' ≤ sk.
We will use the following classic result of Kővári, Sós and Turán <cit.>.
Let z(m,n; s,t) denote the maximum possible number of edges in a bipartite 2-graph G with parts U and V for which |U| = m and |V| = n, which does not contain a K_s,t subgraph with s vertices in U and t vertices in V. Then
z(m,n; s,t) < (s-1)^1/t (n - t + 1) m^1 - 1/t + (t-1)m.
For all k ≥ 3 and s ≥ 1, let n, c ≥ 2 such that 1/n, 1/c ≪ 1/k, 1/s.
Then c(n, K^k(s)) ≤ c n^1 - 1/s^k-1.
Let H be a k-graph on n vertices with δ_k-1(H) ≥ c n^1 - 1/s^k-1.
Fix a vertex x ∈ V(H) and consider the link (k-1)-graph H(x) of x.
Let U_1 := E(H(x)).
Note that
| U_1 | ≥n-1k-2δ_k-1(H)/(k - 1)≥ c^1/2 n^k - 1 - 1/s^k-1.
Let U_2 := V(H) ∖{ x}.
Consider the bipartite 2-graph B with parts U_1 and U_2, where e ∈ U_1 is joined to u ∈ U_2 if and only if e ∪{u}∈ E(H).
By the codegree condition of H, every (k-1)-set e ∈ U_1 has degree at least δ_k-1(H) - 1 in B.
Hence
|E(B)| ≥ | U_1|(δ_k-1(H) - 1) ≥ | U_1| ( c n^1 - 1/s^k-1 - 1 ).
We claim that B contains a copy of K_n^k-1-1/s^k-2,s-1 as a subgraph, with n^k-1-1/s^k-2 vertices in U_1 and s-1 vertices in U_2.
Suppose not.
Then, by Theorem <ref>,
|E(B)|
≤ z(| U_1|, n-1; n^k - 1 - 1/s^k-2, s-1)
< (n^k - 1 - 1/s^k-2)^1/s-1 n | U_1|^1 - 1/s-1 + (s-1) | U_1|
= | U_1| ( n ( n^k - 1 - 1/s^k-2/| U_1|)^1/s-1 + s-1 )
(<ref>)≤ | U_1| ( c^-1/2(s-1) n^1 - 1/s^k-1 + s-1 ) < | U_1| n^1 - 1/s^k-1.
This contradicts (<ref>).
Let K be a copy of K_n^k-1-1/s^k-2,s-1 in B.
Let W := V(K) ∩ U_1 and X := { x_1, …, x_s-1} = V(K) ∩ U_2.
Since |W| = n^k - 1 - 1/s^k-2 and 1/n ≪ 1/k, 1/s, by Theorem <ref>, W contains a copy K' of K^k-1(s).
By construction, for all y ∈{ x }∪ X and all e ∈ E(K'), { y }∪ e ∈ E(H).
Hence, H[ { x }∪ X ∪ V(K')] contains a K^k(s) covering x, as desired.
We are ready to prove Theorem <ref>.
Let t ∈ℕ be such that 1/n_0≪ 1/t ≪γ, 1/s. Let H be a k-graph on n ≥ n_0 vertices with δ_k-1(H) ≥ (1/2 + γ) n. Fix a vertex x and a copy K of K^k_k(t) containing x, which exists by Proposition <ref>. Let V_1, …, V_k be the vertex partition of K with x ∈ V_1. By the choice of t, |V_i| ≥max{2 k^2 + 2, ⌊ s / k ⌋ + 2 } for all 1 ≤ i ≤ k.
Let x_1 = x and select arbitrarily vertices x_i ∈ V_i for 2 ≤ i ≤ k.
Now P = x_1 … x_k is a tight path on k vertices with both start-type and end-type id.
Let G be a complete 2-graph on [k].
By Lemma <ref>, there exists a G-gadget for K avoiding V(P).
Thus, by Lemma <ref>, there exists a tight cycle in H on s vertices containing P, and in turn x.
§ ABSORPTION
We need the following “absorbing lemma”, which is a special case of a lemma of Lo and Markström <cit.>.
Let s ≥ k ≥ 3 and 0 < 1/n ≪η, 1/s and 0 < α≪μ≪η, 1/s.
Suppose that H is a k-graph on n vertices such that for all distinct vertices x, y ∈ V(H) there exist at least η n^s-1 sets S of size s-1 such that H[S ∪{ x }] and H[ S ∪{ y } ] contain a spanning C_s^k.
Then there exists U ⊆ V(H) of size |U| ≤μ n with |U| ≡ 0 s such that there exists a perfect C_s^k-tiling in H[U ∪ W] for all W ⊆ V(H) ∖ U of size |W| ≤α n with |W| ≡ 0 s.
Thus to find an absorbing set U, it is enough to find many (s-1)-sets S as above for each pair x, y ∈ V().
First we show that we can find one such S.
Let s ≥ 5k^2 with s ≢0 k.
Let 1/n ≪γ, 1/s.
Let H be a k-graph on n vertices with δ_k-1(H) ≥ (1/2 + γ)n.
Then for every pair of distinct vertices x, y ∈ V(H), there exists S ⊆ V(H) ∖{ x, y } such that |S| = s-1 and both H[S ∪{ x }] and H[S ∪{ y}] contain a spanning C_s^k.
Let 1/n ≪ 1/t ≪γ, 1/s.
Consider the k-graph H_xy with vertex set V(H_xy) = (V(H) ∖{ x, y }) ∪{ z } (for some z ∉ V(H)) and edge set
E(H_xy) = E(H ∖{ x, y }) ∪{{ z }∪ S : S ∈ N_H(x) ∩ N_H(y) }.
Note that |V(H_xy)| = n - 1 and δ_k-1(H_xy) ≥γ |V(H_xy)|.
By Proposition <ref>, H_xy contains a copy K of K^k_k(t) containing z.
Let V_1, …, V_k be the vertex partition of K with z ∈ V_1.
Select arbitrarily vertices v_i ∈ V_i for 2 ≤ i ≤ k.
Let H' = H_xy∖{ z, v_2, …, v_k } and K' = K ∖{ z, v_2, …, v_k }.
Note that δ_k-1(H') ≥ (1/2 + γ/2) |V(H')| and K' ⊆ H'.
By Lemma <ref> with H' and K' playing the roles of H and K respectively, there exists a K_k-gadget for K' in H'.
Hence, there exists a K_k-gadget for K in H_xy avoiding { z, v_2, …, v_k }.
Now we construct a copy of C_s^k in H_xy containing z.
Note that P = z v_2 … v_k is a tight path on k vertices with start-type and end-type id.
Since there exists a K_k-gadget for K avoiding V(P), by Lemma <ref> H_xy contains a copy C of C_s^k containing z.
Finally, let S = V(C) ∖{z}⊆ V(H).
By construction, |S| = s-1 and both H[S ∪{x}] and H[S ∪{y}] contain a spanning C_s^k, as desired.
We now apply the standard supersaturation trick to find many sets S.
Let k ≥ 3 and 0 < 1/m ≪γ, 1/k.
Let H be a k-graph on n ≥ m vertices with δ_k-1(H) ≥ (1/2 + γ)n.
Let x, y ∈ V(H) be distinct.
Then the number of m-sets R ⊆ V(H) ∖{ x, y } such that δ_k-1( H[ R ∪{ x, y } ] ) ≥ (1/2 + γ/2) (m+2) is at least n-2m / 2.
To prove Lemma <ref>, first we recall the following fact about concentration for hypergeometric random variables around their mean (see, e.g., <cit.>).
Let a, γ > 0 with a + γ < 1.
Suppose that S ⊆ [n] and |S| ≥ (a + γ) n.
Then
| { M ∈[n]m : |M ∩ S| ≤ am }| ≤nm e^- γ^2 m/3(a + γ)≤nm e^- γ^2 m / 3.
Let T be a (k-1)-set in V().
Note that, since 1/n ≤ 1/m ≪γ,
|N_H(T) ∖{x,y}|
≥( 1/2 + γ)n - 2
≥( 1/2 + 2/3γ)(n-2).
We call an m-set R ⊆ V(H) ∖{x,y} bad for T if |N_H(T) ∩ R| ≤ (1/2 + 3γ/5)m.
An application of Lemma <ref> (with 1/2 + 3γ/5, γ/15, n-2, N_H(T) ∖{x,y} playing the roles of a, γ, n and S, respectively) implies that the number of m-sets which are bad for T is at most
| { R ∈V(H) ∖{x,y}m : |N_H(T) ∩ R| ≤ (1/2 + 3γ/5)m }| ≤n-2m e^- γ^2 m / 675.
Say an m-set R ⊆ V(H) ∖{ x, y } is good if δ_k-1( H[ R ∪{x,y} ] ) > (1/2 + 3 γ / 5)m (and bad, otherwise).
Note that for any good m-set R,
δ_k-1(H[R ∪{x,y}]) > (1/2 + 3 γ / 5)m ≥ (1/2 + γ/2)(m+2),
thus it is enough to prove that there are at most n-2m/2 bad m-sets.
Note that R is bad if and only if there exists a (k-1)-set T ⊆ R ∪{x,y} such that R is bad for T.
Therefore, the number of bad sets is at most
m+2k-1n-2m e^-γ^2 m / 675≤1/2n-2m,
where the inequality follows from the choice of m.
Let k ≥ 3 and s ≥ 5k^2.
Let 1/n ≪α≪μ≪γ, 1/s.
Let H be a k-graph on n vertices with δ_k-1(H) ≥ (1/2 + γ) n.
Then there exists U ⊆ V(H) of size |U| ≤μ n with |U| ≡ 0 s such that there exists a perfect C_s^k-tiling in H[U ∪ W] for all W ⊆ V(H) ∖ U of size |W| ≤α n with |W| ≡ 0 s.
Let μ≪η≪ 1/m ≪γ, 1/s.
Let x, y be distinct vertices in V(H).
By Lemma <ref>, at least n-2m/2 of the m-sets R ⊆ V(H) ∖{ x, y } are such that δ_k-1( H[ R ∪{ x, y } ] ) ≥ (1/2 + γ / 2) (m+2).
By Lemma <ref>, each one of these subgraphs contains a set S ⊆ R of size s-1 such that H[S ∪{ x }] and H[S ∪{ y }] have spanning copies of C_s^k.
Then the number of these sets S in H is at least
1/2n-2m/n - 2 - (s-1)m - (s-1) = n-2s-1/2 ms-1≥η n^s-1.
Then the result follows from Lemma <ref>.
§ TILING THRESHOLDS FOR TIGHT CYCLES
Now we prove Theorem <ref> under the assumption that the following `almost perfect C_s^k-tiling lemma' holds.
Let k ≥ 3 and s ≥ 5k^2 with s ≢0 k, and let 1/n ≪α, γ, 1/s.
Let H be a k-graph on n vertices with δ_k-1(H) ≥ (1/2 + 1/(2s) + γ)n.
Then H has a C_s^k-tiling covering at least (1 - α) n vertices.
Assuming Lemma <ref> is true, we use it to prove Theorem <ref>.
Choose 1/n ≪α≪μ≪γ, 1/k, 1/s.
By Lemma <ref>, there exists U ⊆ V(H) of size |U| ≤μ n with |U| ≡ 0 s such that there exists a perfect C_s^k-tiling in H[U ∪ W] for all W ⊆ V(H) ∖ U of size |W| ≤α n with |W| ≡ 0 s.
Define H' = H ∖ U.
Then δ_k-1(H') ≥δ_k-1(H) - |U| ≥ (1/2 + 1/(2s) + γ/2)|V(H')|.
An application of Lemma <ref> (with γ/2, |V(H')| playing the roles of γ, n, respectively, and noting that the hierarchies of constants in both lemmas are consistent) implies that there exists a C_s^k-tiling 𝒯' in H' covering at least (1 - α)|V(H')| vertices.
Let W be the set of vertices of H' left uncovered by 𝒯'.
Then |W| ≤α n and |W| ≡ 0 s.
By the absorbing property of U, there exists a perfect C_s^k-tiling 𝒯” in H[U ∪ W].
Then 𝒯' ∪𝒯” is a perfect C_s^k-tiling in H.
The rest of the paper will be devoted to the proof of Lemma <ref>.
§ HYPERGRAPH REGULARITY AND REGULAR SLICE LEMMA
To prove Lemma <ref> we will use the hypergraph regularity lemma, which requires the following definitions.
§.§ Regular complexes
Let 𝒫 be a partition of V into vertex classes V_1, …, V_s. A subset S ⊆ V is 𝒫-partite if |S ∩ V_i| ≤ 1 for all 1 ≤ i ≤ s.
A hypergraph is 𝒫-partite if all of its edges are 𝒫-partite, and it is s-partite if it is 𝒫-partite for some partition 𝒫 with |𝒫| = s.
A hypergraph 𝒢 is a complex if whenever e∈ E(𝒢) and e' is a non-empty subset of e we have that e'∈ E(𝒢).
All the complexes considered in this paper have the property that all vertices are contained in an edge.
For a positive integer k, a complex 𝒢 is a k-complex if all the edges of 𝒢 consist of at most k vertices.
The edges of size i are called i-edges of 𝒢.
Given a k-complex 𝒢, for all 1 ≤ i ≤ k we denote by 𝒢_i the underlying i-graph of 𝒢: the vertices of 𝒢_i are those of 𝒢 and the edges of 𝒢_i are the i-edges of 𝒢.
Given s≥ k, a (k,s)-complex is an s-partite k-complex.
Let 𝒢 be a 𝒫-partite k-complex.
For i ≤ k and X ∈𝒫i, we write 𝒢_X for the subgraph of 𝒢_i induced by ⋃ X.
Note that 𝒢_X is an (i,i)-graph.
In a similar manner we write 𝒢_X^< for the hypergraph on the vertex set ⋃ X, whose edge set is ⋃_X' ⊊ X𝒢_X'.
Note that if 𝒢 is a k-complex and X is a k-set, then 𝒢_X^< is a (k-1, k)-complex.
Given i≥ 2, consider an (i,i)-graph H_i and an (i-1,i)-graph H_i-1 on the same vertex set, which are i-partite with respect to the same partition 𝒫.
We write K_i(H_i-1) for the family of all 𝒫-partite i-sets that form a copy of the complete (i-1)-graph K_i^i - 1 in H_i-1.
We define the density of H_i with respect to H_i-1 to be
d(H_i|H_i-1)=|K_i(H_i-1)∩ E(H_i)|/|K_i(H_i-1)| if |K_i(H_i-1)|>0,
and d(H_i|H_i-1)=0 otherwise.
More generally, if Q=(Q_1, …, Q_r) is a collection of r subhypergraphs of H_i-1, we define K_i( Q):=⋃_j=1^r K_i(Q_j) and
d(H_i| Q)=|K_i( Q)∩ E(H_i)|/|K_i( Q)| if |K_i( Q)|>0,
and d(H_i| Q)=0 otherwise.
We say that H_i is (d_i,ε,r)-regular with respect to H_i-1 if for all r-tuples Q with |K_i( Q)|> ε |K_i(H_i-1)| we have d(H_i| Q) = d_i ±ε.
Instead of (d_i, ε, 1)-regularity we simply refer to (d_i, ε)-regularity; we also say simply that H_i is (ε, r)-regular with respect to H_i-1 to mean that there exists some d_i for which H_i is (d_i, ε, r)-regular with respect to H_i-1.
Given an i-graph G whose vertex set contains that of H_i-1, we say that G is (d_i, ε, r)-regular with respect to H_i-1 if the i-partite subgraph of G induced by the vertex classes of H_i-1 is (d_i, ε, r)-regular with respect to H_i-1.
Given 3 ≤ k ≤ s and a (k,s)-complex 𝒢 with vertex partition 𝒫, we say that 𝒢 is (d_k, d_k-1, …, d_2, ε_k, ε, r)-regular if the following conditions hold:
* For all 2 ≤ i ≤ k-1 and A ∈𝒫i, 𝒢_A is (d_i, ε)-regular with respect to (𝒢_A^<)_i-1, and
* for all A ∈𝒫k, the induced subgraph 𝒢_A is (d_k, ε_k, r)-regular with respect to (𝒢_A^<)_k-1.
Sometimes we denote (d_k, …, d_2) by 𝐝 and write (𝐝, ε_k, ε, r)-regular to mean (d_k, …, d_2, ε_k, ε, r)-regular.
We will need the following “regular restriction lemma” which states that the restriction of regular complexes to a sufficiently large set of vertices in each vertex class is still regular, with somewhat degraded regularity properties.
Let k, m ∈ℕ and β, ε, ε_k, d_2, …, d_k be such that
1/m≪ε≪ε_k, d_2, …, d_k-1 and ε_k ≪β, 1/k.
Let r, s ∈ℕ and d_k > 0.
Set 𝐝 = (d_k, …, d_2).
Let G be a (𝐝, ε_k, ε, r)-regular (k,s)-complex with vertex classes V_1, …, V_s each of size m.
Let V'_i ⊆ V_i with |V'_i| ≥β m for all 1 ≤ i ≤ s. Then the induced subcomplex G[V'_1 ∪…∪ V'_s] is (𝐝, √(ε_k), √(ε), r)-regular.
§.§ Statement of the regular slice lemma
In this section we state the version of the regularity lemma (Theorem <ref>) due to Allen, Böttcher, Cooley and Mycroft <cit.>, which they call the regular slice lemma.
A similar lemma was previously applied by Haxell, Łuczak, Peng, Rödl, Ruciński and Skokan in the case of 3-graphs <cit.>.
This lemma says that all k-graphs G admit a regular slice 𝒥, which is a regular multipartite (k-1)-complex whose vertex classes have equal size, such that G is regular with respect to 𝒥.
Let t_0, t_1 ∈ℕ and ε > 0. We say that a (k-1)-complex 𝒥 is (t_0, t_1, ε)-equitable if it has the following two properties:
* There exists a partition 𝒫 of V(𝒥) into t parts of equal size, for some t_0 ≤ t ≤ t_1, such that 𝒥 is 𝒫-partite.
We refer to 𝒫 as the ground partition of 𝒥, and to the parts of 𝒫 as the clusters of 𝒥.
* There exists a density vector 𝐝 = (d_k-1, …, d_2) such that, for all 2 ≤ i ≤ k-1, we have d_i ≥ 1/t_1 and 1/d_i ∈ℕ, and the (k-1)-complex 𝒥 is (𝐝, ε, ε, 1)-regular.
Let X ∈𝒫k.
We write 𝒥_X for the (k-1, k)-graph (𝒥_X^<)_k-1.
A k-graph G on V(𝒥) is (ε_k, r)-regular with respect to 𝒥_X if there exists some d such that G is (d, ε_k, r)-regular with respect to 𝒥_X.
We also write d^∗_𝒥, G(X) for the density of G with respect to 𝒥_X, or simply d^∗(X) if 𝒥 and G are clear from the context.
Given ε, ε_k > 0, r, t_0, t_1 ∈ℕ, a k-graph G and a (k-1)-complex 𝒥 on V(G), we call 𝒥 a (t_0, t_1, ε, ε_k, r)-regular slice for G if 𝒥 is (t_0, t_1, ε)-equitable and G is (ε_k, r)-regular with respect to all but at most ε_k tk of the k-sets of clusters of 𝒥, where t is the number of clusters of 𝒥.
Given a regular slice 𝒥 for a k-graph G, we keep track of the relative densities d^∗(X) for k-sets X of clusters of 𝒥, which is done via a weighted k-graph.
Given a k-graph G and a (t_0, t_1, ε)-equitable (k-1)-complex 𝒥 on V(G), we let R_𝒥(G) be the complete weighted k-graph whose vertices are the clusters of 𝒥, and where each edge X is given weight d^∗(X). When 𝒥 is clear from the context we write R(G) instead of R_𝒥(G).
The regular slice lemma (Theorem <ref>) guarantees the existence of a regular slice 𝒥 with respect to which R(G) resembles G in various senses.
In particular, R(G) inherits the codegree condition of G in the following sense.
Let G be a k-graph on n vertices.
Given a set S ∈V(G)k - 1, recall that deg_G(S) is the number of edges of G which contain S.
The relative degree deg(S; G) of S with respect to G is defined to be
deg(S; G) = deg_G(S)/(n - k +1).
Thus, deg(S; G) is the proportion of k-sets of vertices in G extending S which are in fact edges of G.
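For example, if δ_k-1(G) ≥μ n, then deg(S; G) = deg_G(S)/(n - k + 1) ≥μ n/(n - k + 1) ≥μ for every (k-1)-set S of vertices.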
To extend this definition to weighted k-graphs G with weight function d^∗, we define
deg(S; G) = ∑_e ∈ E(G): S ⊆ e d^∗(e)/(n - k +1).
Finally, for a collection 𝒮 of (k-1)-sets in V(G), the mean relative degree deg(𝒮; G) of 𝒮 in G is defined to be the mean of deg(S; G) over all sets S ∈𝒮.
We will need an additional property of regular slices.
Suppose G is a k-graph, 𝒮 is a (k-1)-graph on the same vertex set, and 𝒥 is a regular slice for G on t clusters.
We say 𝒥 is (η, 𝒮)-avoiding if for all but at most ηtk-1 of the (k-1)-sets Y of clusters of 𝒥, it holds that |𝒥_Y ∩𝒮| ≤η | 𝒥_Y|.
We can now state the version of the regular slice lemma that we will use.
Let k ∈ℕ with k ≥ 3.
For all t_0 ∈ℕ, ε_k > 0 and all functions r: ℕ→ℕ and ε: ℕ→ (0, 1], there exist t_1, n_1 ∈ℕ such that the following holds for all n ≥ n_1 which are divisible by t_1!.
Let G be a k-graph on n vertices,
and let 𝒮 be a (k-1)-graph on the same vertex set with |E(𝒮)| ≤θnk-1.
Then there exists a (t_0, t_1, ε(t_1), ε_k, r(t_1))-regular slice 𝒥 for G such that, for all (k-1)-sets Y of clusters of 𝒥, we have deg(Y; R(G)) = deg(𝒥_Y; G) ±ε_k,
and furthermore 𝒥 is (3 √(θ), 𝒮)-avoiding.
We remark that the original statement of <cit.>
did not include the “avoiding” property with respect to a fixed (k-1)-graph 𝒮.
This, however, can be obtained easily from their proof.
We sketch this in Appendix <ref>.
§.§ The d-reduced k-graph and strong density
Once we have a regular slice for a k-graph G, we would like to work within k-tuples of clusters with respect to which G is both regular and dense. To keep track of those tuples, we introduce the following definition.
Let G be a k-graph and let 𝒥 be a (t_0, t_1, ε, ε_k, r)-regular slice for G.
Then for d > 0 we define the d-reduced k-graph R_d(G) of G to be the k-graph whose vertices are the clusters of 𝒥 and whose edges are all k-sets of clusters X of 𝒥 such that G is (ε_k, r)-regular with respect to X and d^∗(X) ≥ d.
Note that R_d(G) depends on the choice of 𝒥 but this will always be clear from the context.
The next lemma states that for regular slices as in Theorem <ref>, the codegree conditions are also preserved by R_d(G).
Let k, r, t_0, t_1 ∈ℕ and ε, ε_k > 0.
Let G be a k-graph and let 𝒥 be a (t_0, t_1, ε, ε_k, r)-regular slice for G.
Then for all (k-1)-sets Y of clusters of 𝒥, we have
deg(Y; R_d(G)) ≥deg(Y; R(G)) - d - ζ(Y),
where ζ(Y) is defined to be the proportion of k-sets Z of clusters with Y ⊆ Z that are not (ε_k, r)-regular with respect to G.
For 0 ≤μ, θ≤ 1, we say that a k-graph H on n vertices is (μ, θ)-dense if there exists 𝒮⊆V(H)k - 1 of size at most θnk-1 such that, for all S ∈V(H)k - 1∖𝒮, we have deg_H(S) ≥μ(n - k + 1). In particular, if δ_k-1(H) ≥μ n, then H is (μ, 0)-dense.
By using Lemma <ref>, we show that R_d(G) `inherits' the property of being (μ, θ)-dense.
Let 1/n ≪ 1/t_1 ≤ 1/t_0 ≪ 1/k and μ, θ, d, ε, ε_k > 0.
Suppose that G is a k-graph on n vertices, that G is (μ, θ)-dense, and let 𝒮 be the (k-1)-graph on V(G) whose edges are precisely { S ∈V(G)k-1 : deg_G(S) < μ (n-k+1) }.
Let 𝒥 be a (t_0, t_1, ε, ε_k, r)-regular slice for G such that for all (k-1)-sets Y of clusters of 𝒥, we have deg(Y; R(G)) = deg(𝒥_Y; G) ±ε_k, and furthermore 𝒥 is (3 √(θ), 𝒮)-avoiding.
Then R_d(G) is ((1 - 3√(θ))μ - d - ε_k - √(ε_k), 3√(θ) + 3√(ε_k) )-dense.
Let 𝒫 be the ground partition of 𝒥 and t = | 𝒫 |.
Let m = n/t.
Clearly |V| = m for all V ∈𝒫.
Let 𝒴_1 be the set of all Y ∈𝒫k - 1 such that |𝒥_Y ∩𝒮| ≥ 3 √(θ) |𝒥_Y|.
Since 𝒥 is (3 √(θ), 𝒮)-avoiding, |𝒴_1| ≤ 3 √(θ)tk-1.
For all Y ∈𝒫k - 1, let ζ(Y) be defined as in Lemma <ref>.
Let 𝒴_2 be the set of all Y ∈𝒫k - 1 with ζ(Y) > √(ε_k).
Since G is (ε_k, r)-regular with respect to all but at most ε_k tk of the k-sets of clusters of 𝒥, it follows that |𝒴_2| ·√(ε_k) (t - k + 1) / k ≤ε_k tk, namely, |𝒴_2| ≤√(ε_k)tk-1.
Then it follows that |𝒴_1 ∪𝒴_2| ≤ 3 (√(θ) + √(ε_k)) tk-1.
We will show that all Y ∈𝒫k - 1∖ (𝒴_1 ∪𝒴_2) have large codegree in R_d(G), thus proving the lemma.
Consider any Y ∈𝒫k - 1∖ (𝒴_1 ∪𝒴_2).
Since Y ∉𝒴_2, ζ(Y) ≤√(ε_k).
By Lemma <ref>, we have
deg(Y; R_d(G))
≥deg(Y; R(G)) - d - ζ(Y)
≥deg(Y; R(G)) - d - √(ε_k)
≥deg(𝒥_Y; G) - ε_k - d - √(ε_k).
So it suffices to show that deg(𝒥_Y; G) ≥ (1 - 3 √(θ)) μ.
Recall that deg(𝒥_Y; G) is the mean of deg(S; G) over all S ∈𝒥_Y.
Since Y ∉𝒴_1, |𝒥_Y ∩𝒮| ≤ 3√(θ) |𝒥_Y|.
By definition, for all S ∈𝒥_Y ∖𝒮, deg_G(S) ≥μ(n - k + 1).
Thus deg(𝒥_Y; G) ≥ (1 - 3√(θ)) μ, as required.
For 0 ≤μ, θ≤ 1, a k-graph H on n vertices is strongly (μ, θ)-dense if it is (μ, θ)-dense and, for all edges e ∈ E(H) and all (k-1)-sets X ⊆ e, deg_H(X) ≥μ (n - k + 1).
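For example, every k-graph H on n vertices with δ_k-1(H) ≥μ n is strongly (μ, 0)-dense, since then every (k-1)-set of vertices (whether or not it lies inside an edge) has degree at least μ n ≥μ(n - k + 1).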
We prove that all (μ, θ)-dense k-graphs contain a strongly (μ', θ')-dense subgraph, for some degraded constants μ', θ'.
Let n ≥ 2k and 0 < μ, θ < 1.
Suppose that H is a k-graph on n vertices that is (μ, θ)-dense.
Then there exists a sub-k-graph H' on V(H) that is strongly (μ - 2^k θ^1/(2k - 2), θ + θ^1/(2k - 2))-dense.
Let 𝒮_1 be the set of all S ∈V(H)k-1 such that deg_H(S) < μ(n - k + 1).
Thus, |𝒮_1| ≤θnk - 1.
Let β = θ^1/(k-1).
Now, for all j ∈{ k - 1, k - 2, …, 1} in turn we construct 𝒜_j ⊆V(H)j in the following way.
Initially, let 𝒜_k - 1 = 𝒮_1.
Given j > 1 and 𝒜_j, we define 𝒜_j-1⊆V(H)j - 1 to be the set of all X ∈V(H)j - 1 such that there exist at least β (n - j + 1) vertices w ∈ V(H) with X ∪{ w }∈𝒜_j.
For all 1 ≤ j ≤ k-1, |𝒜_j| ≤β^jnj.
We prove it by induction on k - j. When j = k-1 it is immediate.
Now suppose 2 ≤ j ≤ k - 1 and that |𝒜_j| ≤β^jnj.
By double counting the number of tuples (X, w) where X is a (j-1)-set in 𝒜_j-1 and X ∪{ w }∈𝒜_j we have |𝒜_j-1| β (n - j + 1) ≤ j |𝒜_j|.
By the induction hypothesis it follows that
|𝒜_j-1| ≤j/β(n - j + 1) |𝒜_j| ≤β^j-1nj - 1.
For all 1 ≤ j ≤ k - 1, let F_j be the set of edges e ∈ E(H) such that there exists S ∈𝒜_j with S ⊆ e, and let F = ⋃_j = 1^k - 1 F_j. Define H' = H - F. We will show that it satisfies the desired properties.
For each j-set, there are at most n - jk - j k-edges containing it.
Thus, for all 1 ≤ j ≤ k - 1, the claim above implies that
|F_j| ≤ |_j| n - jk - j≤β^j njn - jk - j = β^j kjnk.
Therefore
| F | ≤∑_j=1^k - 1 |F_j| ≤nk∑_j=1^k - 1kjβ^j≤ 2^k βnk.
Let 𝒮_2 be the set of all S ∈V(H)k - 1 contained in more than 2^k √(β) (n - k + 1) edges of F.
It follows that |𝒮_2| ≤√(β)nk - 1.
This implies that |𝒮_1 ∪𝒮_2| ≤ (θ + √(β)) nk - 1 = (θ + θ^1/(2k - 2)) nk - 1.
Now consider an arbitrary S ∈V(H)k - 1∖ (𝒮_1 ∪𝒮_2). As S ∉𝒮_1, it follows that deg_H(S) ≥μ(n - k + 1). As S ∉𝒮_2, it follows that
deg_H'(S)
≥deg_H(S) - 2^k √(β) (n - k + 1) ≥ (μ - 2^k θ^1/(2k - 2)) (n - k + 1).
Therefore, H' is (μ - 2^k θ^1/(2k - 2), θ + θ^1/(2k - 2))-dense.
Let e ∈ E(H') and let X ∈ek-1.
It is enough to prove that X ∉𝒮_1 ∪𝒮_2.
As e ∉ F_k-1, it follows that X ∉𝒜_k-1 = 𝒮_1.
So it is enough to prove that X ∉𝒮_2.
Suppose the contrary, that X ∈𝒮_2.
Then X is contained in more than 2^k √(β) (n - k + 1) edges e' ∈ E(F).
Let W = N_F(X).
For all w ∈ W, fix a set A_w ∈⋃_j=1^k-1𝒜_j such that A_w ⊆ X ∪{ w } and let T_w = X ∩ A_w.
If A_w ⊆ X then A_w ⊆ e ∈ E(H'), a contradiction.
Hence w ∈ A_w for all w ∈ W, and therefore |T_w| = |A_w| - 1 ≤ k-2 < |X| for all w ∈ W.
We deduce T_w ≠ X for all w ∈ W.
By the pigeonhole principle, there exists T ⊊ X and W_T ⊆ W such that for all w ∈ W_T, T_w = T and |W_T| ≥ |W|/(2^k-1) ≥ 2 √(β) (n - k + 1) > √(β) n.
Suppose |T| = t ≥ 1. Then for all w ∈ W_T, T ∪{ w } = A_w ∈𝒜_t + 1, so there are at least √(β) n ≥β (n - t) vertices w ∈ V(H) such that T ∪{ w }∈𝒜_t+1.
Therefore, T ∈𝒜_t and T ⊆ X ⊆ e, which is a contradiction because e ∉ F_t.
Hence, we may assume that T = ∅.
Then for all w ∈ W_T, { w }∈𝒜_1.
And so |𝒜_1| ≥ |W_T| > √(β) n, contradicting the claim.
§.§ The embedding lemma
We will need a version of “embedding lemma” which gives sufficient conditions to find a copy of a (k,s)-graph in a regular (k, s)-complex G.
Suppose that G is a (k,s)-graph with vertex classes V_1, …, V_s, which all have size m.
Suppose also that is a (k,s)-graph with vertex classes X_1, …, X_s of size at most m.
We say that a copy of in G is partition-respecting if for all 1 ≤ i ≤ s, the vertices corresponding to those in X_i lie within V_i.
Given a k-graph G and a (k-1)-graph J on the same vertex set, we say that G is supported on J if for all e ∈ E(G) and all f ∈ek - 1, f ∈ E(J).
We state the following lemma which can be easily deduced from a lemma stated by Cooley, Fountoulakis, Kühn and Osthus <cit.>.
Let k, s, r, t, m_0 ∈ℕ and let d_2, …, d_k-1, d, ε, ε_k > 0 be such that 1/d_i ∈ℕ for all 2 ≤ i ≤ k-1, and
1/m_0≪1/r, ε≪ε_k, d_2, …, d_k-1 and ε_k ≪ d, 1/t, 1/s.
Then the following holds for all m ≥ m_0.
Let H be a (k,s)-graph on t vertices with vertex classes X_1, …, X_s.
Let 𝒢 be a (d_k-1, …, d_2, ε, ε, 1)-regular (k-1, s)-complex with vertex classes V_1, …, V_s all of size m.
Let G be a k-graph on ⋃_1 ≤ i ≤ s V_i which is supported on 𝒢_k-1 such that for all e ∈ E(H) intersecting the vertex classes { X_i_j : 1 ≤ j ≤ k }, the k-graph G is (d_e, ε_k, r)-regular with respect to the k-set of clusters { V_i_j : 1 ≤ j ≤ k }, for some d_e ≥ d depending on e.
Then there exists a partition-respecting copy of H in G.
The differences between Lemma <ref> and <cit.> are discussed in Appendix <ref>.
§ ALMOST PERFECT C_S^K-TILINGS
The aim of this section is to prove Lemma <ref>, that is, finding an almost perfect C_s^k-tiling.
Throughout this section, we fix k ≥ 3 and s ≥ 5k^2 with s ≢0 k.
Let G_s, W_G_s, a_s,1, …, a_s,k, ℓ, F_s be given by Proposition <ref>.
Recall that F_s contains a spanning C_s^k.
Therefore, an F_s-tiling in H implies the existence of a C_s^k-tiling in H of the same size.
Here we summarise some useful inequalities that will be used throughout the section.
Let M_s = max_i a_s,i and m_s = min_i a_s,i.
We have
ℓ + ∑_i=1^k a_s,i = s, M_s ≤ m_s+1, and 1 ≤ℓ≤ k - 1.
From this, we can easily deduce
m_s + 1 ≥ M_s ≥s - ℓ/k≥s - k + 1/k.
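For concreteness, consider for instance k = 3 and s = 50 (so that s ≥ 5k^2 and s ≢0 k): then ℓ≤ 2, so the inequalities above give M_s ≥ (50 - 2)/3 = 16 and hence m_s ≥ M_s - 1 ≥ 15, while ∑_i=1^k a_s,i = s - ℓ≥ 48.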
Define E_s = K^k(M_s), the complete (k,k)-graph with each part of size M_s.
Given an { F_s, E_s }-tiling 𝒯 in H, let 𝒯_F and 𝒯_E be the sets of copies of F_s and E_s in 𝒯, respectively.
Define
ϕ(𝒯) = 1/n( n - s ( |𝒯_F| + 3/5 |𝒯_E| ) ).
Note that if 𝒯_E = ∅, then 𝒯 is an F_s-tiling covering all but ϕ(𝒯)n vertices.
Let ϕ(H) be the minimum of ϕ(𝒯) over all { F_s, E_s}-tilings 𝒯 in H.
Given n ≥ k and 0 ≤μ, θ < 1, let Φ(n, μ, θ) be the maximum of ϕ(H) over all (μ, θ)-dense k-graphs H on n vertices.
Note that ϕ(H) and Φ(n, μ, θ) depend on k and s but they will be clear from the context.
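Observe that 0 ≤ϕ(𝒯) ≤ 1 for every { F_s, E_s }-tiling 𝒯: since m_s + 1 ≥ M_s ≥ (s - k + 1)/k gives k M_s ≥ s - k + 1 > 3s/5 (as s ≥ 5k^2), we have s ( |𝒯_F| + 3/5 |𝒯_E| ) ≤ s |𝒯_F| + k M_s |𝒯_E| ≤ n. In particular ϕ(𝒯) = 0 forces 𝒯_E = ∅, and then 𝒯 is a perfect F_s-tiling.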
Let k ≥ 3 and s ≥ 5k^2 with s ≢0 k.
Let 1/n, θ≪α, γ, 1/k, 1/s.
Then Φ(n, 1/2 + 1/(2s) + γ, θ) ≤α.
We now show that Lemma <ref> implies Lemma <ref>.
Fix α, γ > 0. Note that |V(F_s)| = s and |V(E_s)| = k M_s.
Let δ = 7/10.
Using s ≥ 5k^2, (<ref>) and k ≥ 3, we deduce kM_s/s ≥ 1 - (k-1)/(5k^2) ≥ 43/45.
Hence
3 s / 5 ≤ 43 s δ/45 ≤δ k M_s.
Define α_1 = α (1 - δ) and choose some θ≪α, γ, 1/k, 1/s.
Since 1/n ≪α, γ, 1/k, 1/s as well, Lemma <ref> (with α_1 in place of α) implies that Φ(n, 1/2 + 1/(2s) + γ, θ) ≤α_1.
Let H be a k-graph on n vertices with δ_k-1(H) ≥ (1/2 + 1/(2s) + γ)n.
Then ϕ(H) ≤Φ(n, 1/2 + 1/(2s) + γ, 0) ≤Φ(n, 1/2 + 1/(2s) + γ, θ) ≤α_1.
Let 𝒯 be an { F_s, E_s }-tiling in H with ϕ(𝒯) ≤α_1.
Hence,
1 - α_1 ≤ 1 - ϕ(𝒯)
≤s/n( | 𝒯_F| + 3 /5 |𝒯_E| )
(<ref>)≤1/n( s |𝒯_F| + δ k M_s |𝒯_E| ).
As 𝒯 is a tiling, we have that s |𝒯_F| + k M_s |𝒯_E| ≤ n.
Hence, 1 - α_1 ≤ (1 - δ) s | 𝒯_F |/n + δ and so
s |𝒯_F| ≥( 1 - α_1/(1 - δ)) n = (1 - α) n.
Therefore H contains an F_s-tiling 𝒯_F covering all but at most α n vertices, implying the existence of a C_s^k-tiling of the same size.
§.§ Weighted fractional tilings
Our strategy for proving Lemma <ref> is to apply the regular slice lemma (Theorem <ref>).
In the reduced k-graph, we find a fractional {F^∗_s, E^∗_s }-tiling for some simpler k-graphs F^∗_s and E^∗_s.
By using the regularity methods, this fractional tiling can then be lifted to an actual tiling with copies of F_s, E_s in the original k-graph, which covers a similar proportion of vertices.
To define the k-graphs F^∗_s and E^∗_s, we use the notion of G-augmentation introduced in Subsection <ref>.
Let K be a k-edge with vertices { x_1, …, x_k }.
Let G_s be the 2-graph on [k] given by Corollary <ref>.
Let F^∗_s be the G_s-augmentation of K (with respect to the vertex partition V_i := {x_i} for all i ∈ [k]).
Let V(F^∗_s) = { x_1, …, x_k }∪{ y_1, …, y_ℓ}, where ℓ = |E(G_s)|.
We refer to c(F^∗_s) = { x_1, …, x_k } as the set of core vertices of F^∗_s and p(F^∗_s) = { y_1, …, y_ℓ} as the set of pendant vertices of F^∗_s.
Define the function α: V(F^∗_s) →ℕ to be such that for u ∈ V(F^∗_s),
α(u) =
a_s,i if u = x_i,
1 if u ∈ p(F^∗_s).
Note that there is a natural k-graph homomorphism θ from F_s to F^∗_s such that for all u ∈ V(F^∗_s), |θ^-1(u)| = α(u).
Observe that (<ref>), s ≥ 5k^2 and k ≥ 3 imply that α(u) = 1 if and only if u is a pendant vertex.
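For instance, in the k = 3, s = 50 illustration above with ℓ = 2 (say), each core vertex x_i receives α(x_i) = a_50,i = 16 and each pendant vertex receives weight 1; in general, since ℓ + ∑_i a_s,i = s, we have ∑_u ∈ V(F^∗_s)α(u) = s, in line with |θ^-1(u)| = α(u) and |V(F_s)| = s.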
Let ℱ^∗_s(H) be the set of copies of F^∗_s in H.
Given v ∈ V(H) and F^∗_s ∈ℱ^∗_s(H), define
α_F^∗_s(v) = α(u) if v corresponds to the vertex u ∈ V(F^∗_s),
0 otherwise.
Given v ∈ V(H) and e ∈ E(H), define
α_e(v) =
M_s if v ∈ e,
0 otherwise.
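Note that ∑_v ∈ V(H)α_e(v) = k M_s for every e ∈ E(H), while ∑_v ∈ V(H)α_F^∗_s(v) = s for every F^∗_s ∈ℱ^∗_s(H); these totals are exactly the per-copy contributions s and k M_s appearing in the proposition below.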
We now define a weighted fractional { F^∗_s, E_s^∗}-tiling of H to be a function ω^∗ : ℱ^∗_s(H) ∪ E(H) → [0,1] such that, for all vertices v ∈ V(H),
ω^∗(v) := ∑_F^∗_s ∈ℱ^∗_s(H)ω^∗(F^∗_s) α_F^∗_s(v) + ∑_e ∈ E(H)ω^∗(e) α_e(v) ≤ 1.
Note that if (contrary to our assumptions) a_s,1 = … = a_s,k = 1,
then we have α_F^∗_s(v) = 1{ v ∈ V(F^∗_s) } and α_e(v) = 1{ v ∈ e } implying that ω^∗ is the standard fractional { F_s, E_s }-tiling.
Note that the definition depends on k and the functions α_F^∗_s and α_e, but those will always be clear from the context.
Define the minimum weight of ω^∗ to be
ω^∗_min = min{ω^∗(J) α_J(v) : J ∈ℱ^∗_s(H) ∪ E(H), v ∈ V(H), ω^∗(J) α_J(v) ≠ 0 }.
Analogously to ϕ(𝒯), define
ϕ( ω^∗ ) = 1/n( n - s ( ∑_F^∗_s ∈ℱ^∗_s(H)ω^∗(F^∗_s) + 3/5∑_e ∈ E(H)ω^∗(e) ) ).
Given c > 0 and a k-graph H, let ϕ^∗(H, c) be the minimum of ϕ(ω^∗) over all weighted fractional { F^∗_s, E^∗_s }-tilings ω^∗ of H with ω^∗_min≥ c.
Note that ϕ^∗(H, c) also depends on k, s, α_F^∗_s and α_e, which will always be clear from the context.
Let 𝒯 be an { F_s, E_s }-tiling. We say that a vertex v is saturated under 𝒯 if it is covered by a copy of F_s and v corresponds to a vertex in W_G_s under that copy.
Let S(𝒯) denote the set of all saturated vertices under 𝒯.
Define U(𝒯) as the set of all vertices not covered by 𝒯.
Analogously, given a weighted fractional { F^∗_s, E^∗_s }-tiling ω^∗, we say that a vertex v is saturated under ω^∗ if
∑_F^∗_s ∈ℱ_s^∗(H) : α_F^∗_s(v) = 1ω^∗(F^∗_s) α_F^∗_s (v) = 1,
that is, ω^∗(v) = 1 and all its weight comes from copies of F^∗_s in which v corresponds to a pendant vertex.
Let S(ω^∗) be the set of all saturated vertices under ω^∗.
Also, define U(ω^∗) as the set of all vertices v ∈ V(H) such that ω^∗(v) = 0.
Let k ≥ 3 and s ≥ 5k^2 with s ≢0 k.
Let H be a k-graph on n vertices.
Let ω^∗ be a weighted fractional { F^∗_s, E^∗_s }-tiling in H.
Then the following holds:
* s ∑_F^∗∈ℱ^∗_s(H)ω^∗ (F^∗) + k M_s ∑_e ∈ E(H)ω^∗ (e) ≤ n. In particular, ∑_F^∗∈ℱ^∗_s(H)ω^∗ (F^∗) ≤ n/s and ∑_e ∈ E(H)ω^∗ (e) ≤ n/ (k M_s),
* |S(ω^∗)| ≤ℓ n / s,
* if S' ⊆ S(ω^∗) with |S'| > n/s, then there exists F^∗∈ℱ^∗_s(H) with ω^∗(F^∗) > 0 such that |p(F^∗) ∩ S'| ≥ 2.
For <ref>, note that
n
≥∑_v ∈ V(H)∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) α_F^∗(v) + ∑_v ∈ V(H)∑_e ∈ E(H)ω^∗(e) α_e(v)
= ∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) ∑_v ∈ V(H)α_F^∗(v) + ∑_e ∈ E(H)ω^∗(e) ∑_v ∈ V(H)α_e(v)
= s ∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) + k M_s ∑_e ∈ E(H)ω^∗(e).
To prove <ref>, recall that all of the vertices v ∈ S(ω^∗) only receive weight from pendant vertices, and all copies of F^∗∈ℱ^∗_s(H) have precisely ℓ pendant vertices, and therefore
|S(ω^∗)| = ∑_v ∈ S(ω^∗)∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) α_F^∗(v) ≤ℓ∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) <ref>≤ℓ n / s.
Finally, for <ref>, suppose the contrary, that, for all F^∗∈ℱ^∗_s(H) with ω^∗(F^∗) > 0, we have ∑_v ∈ S'α_F^∗(v) = |p(F^∗) ∩ S'| ≤ 1. Then
|S'|
= ∑_v ∈ S'∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) α_F^∗(v) = ∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) ∑_v ∈ S'α_F^∗(v)
≤∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗) ≤ n/s,
a contradiction.
Note that F_s admits a natural perfect weighted fractional F^∗_s-tiling, defined as follows.
Let a = ∏_1 ≤ i ≤ k a_s,i.
Let F be a copy of F_s and suppose that V(F) = V_1 ∪…∪ V_k ∪ W, where V_1, …, V_k forms a complete (k,k)-graph with |V_i| = a_s,i for all 1 ≤ i ≤ k and |W| = ℓ.
Note that a ≤ M^k_s.
For all (v_1, …, v_k) ∈ V_1 ×…× V_k, the vertices { v_1, …, v_k }∪ W span a copy of F^∗_s, where we identify { v_1, …, v_k } with the core vertices of F^∗_s and W with the pendant vertices of F^∗_s.
Define ω^∗ by assigning to all such copies the weight 1 / a.
A similar method shows that E_s admits a perfect weighted fractional E^∗_s-tiling, by setting ω^∗(e) = M^-k_s for all e ∈ E_s.
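As a quick sanity check that ω^∗(v) = 1 for every vertex v in these constructions: a core vertex v ∈ V_i lies in exactly ∏_j ≠ i a_s,j = a/a_s,i of the chosen copies of F^∗_s, each contributing (1/a) · a_s,i, so ω^∗(v) = (a/a_s,i)(1/a) a_s,i = 1; a pendant vertex w ∈ W lies in all a copies, each contributing (1/a) · 1; and in E_s every vertex lies in exactly M_s^k-1 edges, each contributing M_s^-k· M_s.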
We can naturally extend these constructions to find a weighted fractional { F^∗_s, E^∗_s }-tiling given an {F_s, E_s}-tiling, by repeating the above procedure over all copies of F_s and E_s.
The following proposition collects useful properties of the obtained fractional tiling, for future reference.
All of them are straightforward to check using the construction outlined above, so we omit the proof.
Let k ≥ 3 and s ≥ 5k^2 with s ≢0 k.
Let H be a k-graph and let 𝒯 be an { F_s, E_s }-tiling in H.
Then there exists a weighted fractional { F^∗_s, E^∗_s }-tiling ω^∗ such that
* ϕ(𝒯) = ϕ(ω^∗),
* |𝒯_F| = ∑_F^∗∈ℱ^∗_s(H)ω^∗(F^∗),
* |𝒯_E| = ∑_e ∈ E(H)ω^∗(e),
* S(ω^∗) = S(𝒯) and U(ω^∗) = U(𝒯),
* for all F^∗∈ℱ_s^∗(H), ω^∗(F^∗) ∈{ 0, a^-1}, where a = ∏_1 ≤ i ≤ k a_s,i,
* for all e ∈ E(H), ω^∗(e) ∈{ 0, M^-k_s }; moreover if e ∈ E(E_s) for some E_s ∈𝒯_E, then ω^∗(e) = M^-k_s,
* ω^∗_min≥ M_s^-k, and
* ω^∗(v) ∈{0,1} for all v ∈ V(H).
The next lemma ensures that if R is a d-reduced k-graph of H, then ϕ(H) is roughly bounded above by ϕ^∗(R, c).
Let k ≥ 3 and s ≥ 5k^2 with s ≢0 k.
Let c ≥β > 0 and
1/n≪ε, 1/r ≪ε_k ≪ 1/t_1 ≤ 1/t_0 ≪β, c, 1/s, 1/k
and
ε_k ≪ d, 1/k, 1/s.
Let H be a k-graph on n vertices, let 𝒥 be a (t_0, t_1, ε, ε_k, r)-regular slice for H, and let R = R_d(H) be the d-reduced k-graph obtained from 𝒥.
Then ϕ(H) ≤ϕ^∗(R, c) + s β / c.
Let ω^∗ be a weighted fractional { F^∗_s, E^∗_s }-tiling on R such that ϕ(ω^∗) = ϕ^∗(R, c) and ω^∗_min≥ c.
Let t = |V(R)| and let m = n/t, so that each cluster in 𝒥 has size m.
Let n^∗_F be the number of F^∗_s ∈ℱ^∗_s(R) with ω^∗(F^∗_s) > 0 and n^∗_E be the number of E ∈ E(R) with ω^∗(E) > 0.
Since ω^∗_min≥ c and α_J(v) ≤ M_s for all J and v, every J with ω^∗(J) > 0 satisfies ω^∗(J) ≥ c/M_s; together with Proposition <ref><ref> applied to R, and using M_s ≤ s/k + 1 and k ≥ 3, this gives
n^∗_F + n^∗_E ≤M_s/c( t/s + t/k M_s) ≤ t/c.
For all clusters U ∈ V(R), we subdivide U into disjoint sets { U_J }_J ∈ℱ^∗_s(R) ∪ E(R) of size |U_J| = ⌊ω^∗(J) α_J(U) m ⌋; this is possible since ω^∗(U) ≤ 1.
In the next claim, we show that if ω^∗(J) > 0 for some J ∈ℱ^∗_s(R) ∪ E(R) then we can find a large F_s-tiling or a large E_s-tiling on ⋃_U ∈ V(J) U_J.
For all J ∈ℱ_s^∗(R) ∪ E(R) with ω^∗(J) > 0, H[ ⋃_U ∈ V(J) U_J ] contains
* an F_s-tiling _J with |_J| ≥ m (ω^∗(J) - β ) if J ∈^∗_s(R); or
* an E_s-tiling _J with |_J| ≥ m (ω^∗(J) - β ) if J ∈ E(R).
We will only consider the case when J ∈ℱ^∗_s(R), as the case J ∈ E(R) is proved similarly.
Suppose c(J) = { X_1, …, X_k } and p(J) = {Y_1, …, Y_ℓ}, so V(J) = c(J) ∪ p(J).
We will first show that if X'_i ⊆ X_i for all 1 ≤ i ≤ k and Y'_j ⊆ Y_j for all 1 ≤ j ≤ℓ are such that |X'_i| = |Y'_j| ≥β m, then H[ ⋃_1 ≤ i ≤ k X'_i ∪⋃_1 ≤ j ≤ℓ Y'_j ] contains a copy F of F_s such that |V(F) ∩ X'_i| = a_s,i for all 1 ≤ i ≤ k and |V(F) ∩ Y'_j| = 1 for all 1 ≤ j ≤ℓ.
Indeed, take X'_i, Y'_j as above and construct the subcomplex 𝒢' obtained by restricting 𝒥 together with H to the subsets X'_i, Y'_j and then deleting the edges of H not supported on k-tuples of clusters corresponding to edges in E(J).
Then 𝒢' is a (k, k + ℓ)-complex.
Since 𝒥 is (t_0, t_1, ε)-equitable, there exists a density vector 𝐝 = (d_k-1, …, d_2) such that, for all 2 ≤ i ≤ k-1, we have d_i ≥ 1/t_1, 1/d_i ∈ℕ and 𝒥 is (d_k-1, …, d_2, ε, ε, 1)-regular.
As J ⊆ R, all edges e in E(J) ∩ E(R) induce k-tuples X_e of clusters in 𝒥 with d^∗(X_e) = d_e ≥ d, and H is (d_e, ε_k, r)-regular with respect to X_e.
By Lemma <ref>, the restriction of X_e to the subsets { X'_1, …, X'_k, Y'_1, …, Y'_ℓ} is (d_e, √(ε_k), √(ε), r)-regular.
Hence, by Lemma <ref>, there exists a partition-respecting copy F of F_s in 𝒢', that is, F satisfies |V(F) ∩ X'_i| = a_s,i for all 1 ≤ i ≤ k and |V(F) ∩ Y'_j| = 1 for all 1 ≤ j ≤ℓ, as desired.
Now consider the largest F_s-tiling 𝒯_J in H[ ⋃_U ∈ V(J) U_J ] such that all F ∈𝒯_J satisfy |V(F) ∩ X_i| = a_s,i for all 1 ≤ i ≤ k and |V(F) ∩ Y_j| = 1 for all 1 ≤ j ≤ℓ.
Let V(𝒯_J) = ⋃_F ∈𝒯_J V(F).
By the discussion above, we may assume that |U_J ∖ V(𝒯_J) | < β m for some U ∈ V(J).
A simple calculation shows that |(Y_j)_J ∖ V(𝒯_J) | < β m for all 1 ≤ j ≤ℓ and |(X_i)_J ∖ V(𝒯_J) | < a_s,iβ m for all 1 ≤ i ≤ k.
Therefore, 𝒯_J covers at least sm(ω^∗(J) - β) vertices and it follows that |𝒯_J| ≥ m (ω^∗(J) - β).
Now consider the { F_s, E_s }-tiling 𝒯 = 𝒯_F ∪𝒯_E in H, where 𝒯_F = ⋃_J ∈ℱ^∗_s(R)𝒯_J and 𝒯_E = ⋃_J ∈ E(R)𝒯_J as given by the claim (and we take 𝒯_J = ∅ whenever ω^∗(J) = 0).
Therefore
|𝒯_F| + 3/5 |𝒯_E|
≥∑_ F^∗_s ∈ℱ^∗_s(R), ω^∗(F^∗_s) > 0 m(ω^∗(F^∗_s) - β) + 3/5∑_ E ∈ E(R), ω^∗(E) > 0 m(ω^∗(E) - β )
≥ m ( ∑_F^∗_s ∈ℱ^∗_s(R)ω^∗(F^∗_s) + 3/5∑_ E ∈ E(R)ω^∗(E) - β (n^∗_F + n^∗_E) )
≥ m ( ∑_F^∗_s ∈ℱ^∗_s(R)ω^∗(F^∗_s) + 3/5∑_ E ∈ E(R)ω^∗(E) - β t/c)
= m t ( (1 - ϕ(ω^∗))/s - β/c) = n/s( 1 - ϕ(ω^∗) - β s/c).
Thus we have ϕ(H) ≤ϕ(𝒯) ≤ϕ(ω^∗) + s β / c = ϕ^∗(R, c) + s β / c.
§.§ Proof of Lemma <ref>
We begin with some lemmas before formally proving Lemma <ref>.
Let k ≥ 3 and s ≥ 5k^2 with s ≢0 k.
Let μ + γ/3 ≤ 2/3.
Then Φ(n, μ, θ) ≤Φ((1 + γ)n, μ + γ/3, θ) + s γ.
Let H be a k-graph on n vertices that is (μ, θ)-dense. Consider the k-graph H' on the vertex set V(H) ∪ A obtained from H by adding a set A of |A| = γ n new vertices and adding all of the k-sets that have non-empty intersection with A as edges. Since
μ + γ/1 + γ≥μ + γ / 3
as μ + γ/3 ≤ 2/3, H' is (μ + γ/3, θ)-dense.
Let 𝒯' be an { F_s, E_s }-tiling on H' satisfying ϕ(𝒯') = ϕ(H').
Consider the { F_s, E_s }-tiling 𝒯 in H obtained from 𝒯' by removing all copies of F_s or E_s intersecting A. It follows that
1 - ϕ(𝒯)
= s/n( |𝒯_F| + 3/5 |𝒯_E| )
≥s/n( |𝒯'_F| + 3/5 |𝒯'_E| ) - s γ
≥s/(1 + γ) n( |𝒯'_F| + 3/5 |𝒯'_E| ) - s γ = 1 - ϕ(𝒯') - s γ.
Hence, ϕ(H) ≤ϕ(𝒯) ≤ϕ(𝒯') + s γ≤Φ((1 + γ)n, μ + γ/3, θ) + s γ.
The next lemma shows that given an { F_s, E_s }-tiling of a strongly (μ, θ)-dense k-graph with ϕ(T) “large”, we can always find a better weighted fractional { F^∗_s, E^∗_s }-tiling in terms of ϕ^∗.
Let k ≥ 3, s ≥ 5k^2 with s ≢0 k, and c = s^-2k.
For all γ > 0 and 0 ≤α≤ 1 there exists n_0 = n_0(k, s, γ, α) ∈ℕ and ν = ν(k, s, γ) > 0 and θ = θ(α, k) such that following holds for all n ≥ n_0.
Let H be a k-graph on n vertices that is strongly (1/2 + 1/(2s) + γ, θ)-dense with ϕ(H) ≥α. Then ϕ^∗(H, c) ≤ (1 - ν) ϕ(H).
We defer the proof of Lemma <ref> to the next subsection and now we use it to prove Lemma <ref>.
Consider a fixed γ > 0.
Suppose the result is false, that is, there exists α > 0 such that for all n ∈ℕ and θ^∗ > 0 there exists n' > n satisfying Φ(n', 1/2 + 1/(2s) + γ, θ^∗) > α.
Let α_0 be the supremum of all such α.
Apply Lemma <ref> (with parameters γ/2, α_0/2 playing the roles of γ, α) to obtain n_0 = n_0(k, s, γ/2, α_0/2), ν = ν(k, s, γ/2) and θ = θ(α_0/2, k).
Let
0 < η≪ν, γ, α_0, 1/s.
By the definition of α_0, there exists θ_1 > 0 and n_1 ∈ℕ such that for all n ≥ n_1,
Φ(n, 1/2 + 1/(2s) + γ, θ_1) ≤α_0 + η/2.
Now we prepare the setup to use the regular slice lemma (Theorem <ref>).
Let β, _k, , d, θ^∗, θ' > 0 and t_0, t_1, r, n_2 ∈ℕ be such that
1/n_2 ≪ε, 1/r ≪ε_k, 1/t_1 ≪ 1/t_0 ≪β≪γ' ≪η, c=s^-2k, 1/s, 1/k, 1/n_0, 1/n_1;
ε_k ≪ d ≪γ';
ε_k ≪θ' ≪θ^∗≪γ', θ, θ_1
and n_2 ≡ 0 t_1!.
Let H be a (1/2 + 1/(2s) + γ, θ')-dense k-graph on n ≥ n_2 vertices with
ϕ(H) > α_0 - η;
such an H exists by the definition of α_0. By removing at most t_1! - 1 vertices we get a k-graph H' on at least n_2 - t_1! vertices such that |V(H')| is divisible by t_1! and H' is (1/2 + 1/(2s) + γ - γ', 2 θ')-dense.
Let 𝒮 be the set of (k-1)-tuples T of vertices of V(H') such that deg_H'(T) < (1/2 + 1/(2s) + γ - γ')(|V(H')| - k + 1).
Thus |𝒮| ≤ 2 θ' |V(H')|k-1.
By Theorem <ref>, there exists a (t_0, t_1, ε, ε_k, r)-regular slice 𝒥 for H' such that for all (k-1)-sets Y of clusters of 𝒥, we have deg(Y; R(H')) = deg(𝒥_Y; H') ±ε_k, and furthermore, 𝒥 is (3 √(2 θ'), 𝒮)-avoiding.
Let R = R_d(H') be the d-reduced k-graph obtained from H' and 𝒥.
Since θ', d, ε_k ≪γ', ε_k ≪θ' and 𝒥 is (3 √(2 θ'), 𝒮)-avoiding, Lemma <ref> implies that R is (1/2 + 1/(2s) + γ - 2 γ', 5 √(θ'))-dense.
By Lemma <ref>, there exists a subgraph R' ⊆ R on the same vertex set that is strongly (1/2 + 1/(2s) + γ - 3 γ', θ^∗)-dense, as θ' ≪γ', 1/k, θ^∗.
Since the vertices of R' are the clusters of 𝒥, we have |V(R')| ≥ t_0 ≥ n_1.
By the fact that θ^∗≤θ_1, Lemma <ref> (with 9 γ ' playing the role of γ) and (<ref>), we deduce that
ϕ(R')
≤Φ( |V(R')|, 1/2 + 1/(2s) + γ - 3 γ', θ^∗ )
≤Φ( (1 + 9 γ') |V(R')|, 1/2 + 1/(2s) + γ, θ^∗ ) + 9 γ' s
≤α_0 + η/2 + 9 γ' s ≤α_0 + η.
We further claim that ϕ^∗(R', c) ≤α_0 - 2 η.
Note that c = s^-2k and α_0 ≥ 4 η.
Therefore, if ϕ(R') < α_0 / 2, then the claim holds by Proposition <ref>.
Thus we may assume that ϕ(R') ≥α_0/2.
Note that |V(R')| ≥ t_0 ≥ n_0, γ - 3 γ' ≥γ / 2, and θ^∗≤θ.
By the choice of n_0, ν, and θ (given by Lemma <ref>lemma:fractionalisbetter),
we have
ϕ^∗(R', c) ≤ (1 - ν) ϕ(R') ≤ (1 - ν) (α_0 + η) ≤α_0 - 2 η,
where the last inequality holds since η≪ν, α_0.
Finally, recall that β≪η, c, and note that ϕ(H) ≤ϕ(H') + t_1!/n, since any optimal tiling of H' is also a tiling of H. An application of Lemma <ref> (to H' and 𝒥) then implies that
ϕ(H) ≤ϕ^∗(R, c) + s β / c + t_1!/n ≤ϕ^∗(R', c) + s β / c + t_1!/n ≤α_0 - η,
contradicting (<ref>).
§.§ Proof of Lemma <ref>
Before proceeding with the full details of the proof of Lemma <ref>, we first give a rough outline of the proof.
Let 𝒯 be an { F_s, E_s }-tiling of H satisfying ϕ(𝒯) = ϕ(H).
By Proposition <ref>, we obtain a weighted fractional { F^∗_s, E^∗_s }-tiling ω^∗_0 with ϕ(ω^∗_0) = ϕ(𝒯), U(ω^∗_0) = U(𝒯) and (ω^∗_0)_min≥ M_s^-k.
Our aim is to sequentially define weighted fractional { F^∗_s, E^∗_s }-tilings ω^∗_1, ω^∗_2, …, ω^∗_t such that ϕ(ω^∗_j-1) - ϕ(ω^∗_j) ≥ν_1 / n for all j ∈ [t], where ν_1 is a fixed positive constant.
We will follow this procedure for t = Ω(n) steps, and we will show that ω^∗_t satisfies the required properties.
Moreover, we will construct ω^∗_j+1 based on ω^∗_j by changing the weights of _s() and E() on a small number of vertices, such that no vertex has its weight changed more than once during the whole procedure.
Recall that U(𝒯) is the set of uncovered vertices.
If |U(𝒯)| is large, then we construct ω^∗_j+1 from ω^∗_j by assigning weight to edges that contain at least k-1 vertices in U(𝒯).
Suppose instead that |U(𝒯)| is small.
Since ϕ(H) ≥α, not all of the weight of ω^∗_0 can be contributed by copies of F^∗_s.
Thus there must exist edges e ∈ E(H) with positive weight under ω^∗_0.
We use this to find e ∈ E(H) with ω^∗_j(e) > 0.
The crucial property is that a copy of F^∗_s might be obtained from an edge by adding a few extra vertices to it.
We use this to obtain ω^∗_j+1 from ω^∗_j by reducing the weight on e before assigning weight to some copy of F^∗_s which originates from e.
More care is needed to ensure that ω^∗_j+1 is indeed a weighted fractional { F^∗_s, E^∗_s }-tiling.
Ideally, the extra vertices added to e to form a copy of F^∗_s should not be saturated, if possible.
We summarise and recall the relevant properties of F^∗_s, which was originally as defined at the beginning of Subsection <ref>.
There exists a 2-graph G_s on [k] with ℓ≤ k-1 edges which consists of a disjoint union of paths.
Suppose e_1, …, e_ℓ is an enumeration of the edges of G_s and e_i = j_i j'_i for all 1 ≤ i ≤ℓ.
If X = { x_1, …, x_k }, then we may describe F^∗_s as having vertices V(F^∗_s) = X ∪{ y_1, …, y_ℓ}, and the edges of F^∗_s are X together with (X ∖{ x_j_i}) ∪{ y_i} and (X ∖{ x_j'_i}) ∪{ y_i} for all 1 ≤ i ≤ℓ.
We call c(F^∗_s) = X and p(F^∗_s) = { y_1, …, y_ℓ} the core and pendant vertices of F^∗_s, respectively.
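For instance (as a hypothetical smallest case), if k = 3 and e_1 = 12 were the only edge of G_s, then E(F^∗_s) = { x_1 x_2 x_3, x_2 x_3 y_1, x_1 x_3 y_1 }.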
The following two lemmas are needed for the case when U() is small.
The idea is the following: suppose H is a k-graph on n vertices with δ_k-1(H) ≥ (1/2 + 1/(2s) + γ)n.
If X is a k-edge in H, we would like to extend it into a copy F of F^∗_s such that c(F) = X.
Lemma <ref> will indicate where we should look for the vertices of p(F).
Let k ≥ 3, s ≥ 2k^2 and ℓ≤ k - 1.
Suppose that N_i ⊆ [n] are such that |N_i| ≥ (1/2 + 1/(2s) + γ)n for all 1 ≤ i ≤ k.
Let G be a 2-graph on { N_1, …, N_k } such that N_i N_j ∈ E(G) if and only if |N_i ∩ N_j| ≤ (ℓ/s + γ)n.
Then G is bipartite.
We will show that G does not have any cycle of odd length.
It suffices to show that N_i_1 N_i_2j+1∉ E(G) for all paths N_i_1… N_i_2j+1 in G on an odd number of vertices.
For any S ⊆ [n], write S̅ := [n] ∖ S.
First, note that if N_i is adjacent to N_j in G, then |N_i ∖N̅_j| = |N_i ∩ N_j| ≤( ℓ/s + γ) n and |N̅_j∖ N_i| ≤ (n - |N_j|) - (|N_i| - |N_i ∩ N_j|) ≤( ℓ/s - γ) n.
Hence if N_i N_j N_k is a path on three vertices in G, then
|N_i ∖ N_k| ≤ |N_i ∖N̅_j| + |N̅_j∖ N_k| ≤ 2 ℓ n / s.
Now consider a path in G on an odd number of vertices.
Without loss of generality (after a suitable relabelling), we assume the path is given by N_1 N_2 … N_2j+1 for some j which necessarily satisfies 2j+1 ≤ k.
By using the previous bounds repeatedly, we obtain
|N_1∖ N_2j+1|
≤ |N_1 ∖ N_3| + |N_3 ∖ N_5| + … + |N_2j-1∖ N_2j+1|
≤2 ℓ j n/s≤ℓ (k-1) n/s.
Since ℓ≤ k-1 and s > 2k^2, we obtain
|N_1∩ N_2j+1|
≥ |N_1| - ℓ(k-1)n/s≥( 1/2 + 1/2s + γ) n - (k-1)^2 n /s
> ( ℓ/s + γ) n.
Hence, N_1 N_2j+1∉ E(G) as desired.
Let k ≥ 3 and s ≥ 5k^2 with s ≢0 k.
Let 1/n ≪γ, 1/k and θ > 0.
Let H be a strongly (1/2 + 1/(2s) + γ, θ)-dense k-graph on n vertices.
Let X = { x_1, …, x_k } be an edge of H and let N_i = N_H(X ∖{ x_i }) for all 1 ≤ i ≤ k.
Let S ⊆ V(H) with |S| ≤ (ℓ/s + γ/3)n and y_0 ∈ N_1 ∩ N_2.
Suppose either |N_1 ∩ N_2| < (ℓ/s + 2 γ / 3)n or |N_i ∩ N_j| ≥ (ℓ/s + 2 γ/3)n for all 1 ≤ i, j ≤ k.
Then there exists a copy F^∗ of F^∗_s such that c(F^∗) = X and p(F^∗) ∩ (S ∖{ y_0 }) = ∅.
Note that |N_i| ≥ (1/2 + 1/(2s) + γ)(n-k+1) ≥ (1/2 + 1/(2s) + 2γ/3)n for all 1 ≤ i ≤ k.
Let G be the 2-graph on [k] such that ij ∈ E(G) if and only if |N_i ∩ N_j| < (ℓ/s + 2γ/3)n.
Note that if ij ∉ E(G), then |N_i ∩ N_j| ≥ (ℓ/s + 2 γ/3)n ≥ |S|+ℓ.
Recall that G_s, the 2-graph which defines F^∗_s, is a disjoint union of paths.
By our assumption, either 12 ∈ E(G) or G is empty.
By Lemma <ref>, G is bipartite.
Thus, in either case, there exists a bijection ϕ: V(G_s) → [k] such that {ϕ(j_i) ϕ(j'_i) : j_i j'_i ∈ E(G_s) }∩ E(G) ⊆{ 12 }.
Let e_1, …, e_ℓ be an enumeration of the edges of E(G_s).
Consider e_i = j_i j'_i ∈ E(G_s).
If {ϕ(j_i), ϕ(j'_i) } = { 1, 2 }, then let y_i = y_0.
Otherwise, ϕ(j_i) ϕ(j'_i) ∉ E(G) and therefore |N_ϕ(j_i)∩ N_ϕ(j'_i)| ≥ |S| + ℓ.
Thus we can greedily pick y_i ∈ (N_ϕ(j_i)∩ N_ϕ(j'_i)) ∖ S such that y_1, …, y_ℓ are pairwise distinct.
Then there exists a copy F^∗ of F^∗_s with c(F^∗) = X and p(F^∗) = { y_1, …, y_ℓ}, which satisfies the required properties.
Now we are ready to prove Lemma <ref>.
We may assume that γ≪α, 1/k, 1/s.
Recall that our aim is to define a sequence of fractional { F^∗_s, E^∗_s }-tilings ω^∗_0, …, ω^∗_t, for some t ≥ 0.
Let
ν_1 = s/25 k M_s^k, ν_2 = γ/40 k^3 s^k, and ν = ν_1 ν_2/2.
Choose θ≪α, 1/k and 1/n_0 ≪α, γ, 1/k, 1/s.
Let H be a strongly (1/2 + 1/(2s) + γ, θ)-dense k-graph on n ≥ n_0 vertices with ϕ(H) ≥α.
Choose t = ⌊ν_2 ϕ(H) n ⌋.
Recall that G_s, ℓ, F_s, m_s, M_s are given by Proposition <ref> and they satisfy (<ref>) and (<ref>).
Let 𝒯 be an { F_s, E_s }-tiling on H with ϕ(𝒯) = ϕ(H).
Apply Proposition <ref> to obtain a weighted fractional { F^∗_s, E^∗_s }-tiling w^∗_0 satisfying all the properties of the proposition.
Given that ω^∗_j has been defined for some 0 ≤ j ≤ t, define
A_j = { v ∈ V(H) : ∀ J ∈ℱ^∗_s(H) ∪ E(H), ω^∗_j(J) α_J(v) = ω^∗_0(J) α_J(v) }.
So A_j is the set of vertices on which ω^∗_j is “identical to ω^∗_0”.
Note that by Proposition <ref><ref>, for all v ∈ A_j,
ω^∗_j(v) = ω^∗_0(v) ∈{0,1}.
Clearly we have A_0 = V().
Let 𝒯^+_0 = { J ∈ℱ^∗_s(H) ∪ E(H) : ω^∗_0(J) > 0 }.
The set A_j will indicate where we should look for graphs J ∈𝒯^+_0 whose weight under ω^∗_j is known (namely, the weight of J under ω^∗_0), and we will modify those to define the subsequent weighting ω^∗_j+1.
By Proposition <ref> and (<ref>), we have that for all J ∈𝒯^+_0, if V(J) ∩ A_j ≠∅, then ω^∗_j(J) = ω^∗_0(J) and therefore
ω^∗_j(J) - M_s^-k
= 0 if J ∈ E(H) or m_s = M_s,
≥ c otherwise.
Now we turn to the task of making the construction of ω^∗_1, …, ω^∗_t explicit.
There is a sequence of weighted fractional { F^∗_s, E^∗_s }-tilings ω^∗_1, …, ω^∗_t such that for all 1 ≤ j ≤ t,
* A_j ⊆ A_j-1 and |A_j| ≥ |A_j-1| - 5k^2;
* (ω^∗_j)_min≥ c and
* ϕ(ω^∗_j) ≤ϕ(ω^∗_j-1) - ν_1/n.
Note that Lemma <ref> follows immediately from Claim <ref> as ϕ(ω^∗_t) ≤ϕ() - ν_1 t / n ≤ (1 - ν) ϕ().
Proof of Claim <ref>.
Suppose that, for some 0 ≤ j < t, we have already defined ω^∗_0, ω^∗_1, …, ω^∗_j satisfying <ref>–<ref>.
We write U_i = U(ω^∗_i) for each 0 ≤ i ≤ j.
Observe that U_0 = U(𝒯) by the choice of ω^∗_0 and Proposition <ref><ref>.
Note that <ref> implies that |A_j| ≥ |A_0| - 5 k^2 j ≥ n - 5k^2 ν_2 ϕ(H)n ≥ (1 - αγ / 40)n, and therefore
|V ∖ A_j| = n - |A_j| ≤αγ/40 n.
Now our task is to construct ω^∗_j+1.
We will use the following shorthand notation.
For all J ∈ℱ^∗_s(H) ∪ E(H), once the values of ω^∗_j+1 have been specified, let
∂(J) = ω^∗_j+1(J) - ω^∗_j(J).
The proof splits on two cases depending on the size of U_0.
Case 1: |U_0| ≥ 3 α n / 4.
Note that (U_0∖ U_j)∩ A_j=∅, which implies that A_j ∩ U_0⊆ A_j ∩ U_j.
By (<ref>), |A_j ∩ U_j| ≥ |A_j ∩ U_0| ≥ |U_0| - αγ n / 40 ≥ 3 α n / 4 - αγ n / 40 ≥α n / 2.
Together with 1/n ≪α, we get
|U_j ∩ A_j |k - 1≥α n / 2 k - 1≥α^k-1/2^knk - 1≥θnk - 1 + k^2 nk - 2
as θ, 1/n ≪α, 1/k.
Since H is strongly (1/2 + 1/(2s) + γ, θ)-dense, we can (greedily) find k disjoint (k-1)-sets W_1, …, W_k of U_j ∩ A_j such that deg_H(W_i) ≥ (1/2 + 1/(2s) + γ)(n - k + 1) for all 1 ≤ i ≤ k.
Define N_i = N(W_i) ∩ A_j.
Then
|N_i|
≥( 1/2 + 1/2s + γ)(n-k+1) - (n - |A_j|) (<ref>)≥( 1/2 + 1/2s + γ/2) n.
Suppose that for some 1 ≤ i ≤ k, there exists x ∈ N_i ∩ U_j.
Then e = { x }∪ W_i ∈ E(H), so we can define ω^∗_j+1(e ) = M_s^-1 and ω^∗_j+1(J) = ω^∗_j(J) for all J ∈ (ℱ^∗_s(H) ∪ E(H)) ∖{ e }; this is again a weighted fractional tiling, since all k vertices of e satisfy ω^∗_j(v) = 0.
In this case, |A_j+1| = |A_j| - k ≥ |A_j| - 5k^2, (ω^∗_j+1)_min≥min{ (ω^∗_j)_min, 1 }≥ c and ϕ(ω^∗_j+1) = ϕ(ω^∗_j) - 3s/(5 M_s n) ≤ϕ(ω^∗_j) - ν_1 / n, so we are done.
Thus, we may assume that
⋃_1 ≤ i ≤ k N_i ⊆ A_j ∖ U_j.
For all F^∗∈^∗_s() and e ∈ E(), define
d_F^∗ = ∑_i=1^k| N_i ∩ c(F^∗) |
and
d_e = ∑_i=1^k |N_i ∩ e|.
Case 1.1: there exists F^∗∈^∗_s(H) with ω^∗_j(F^∗) > 0 and d_F^∗≥ k+1.
There exist distinct i, i' ∈{1, …, k} and distinct x∈ N_i∩ c(F^∗), x' ∈ N_i'∩ c(F^∗) such that both e_1 = W_i ∪{ x } and e_2 = W_i'∪{ x' } are edges in .
Note that since x ∈ A_j, by (<ref>) we have ω^∗_j(F^∗) = ω^∗_0(F^∗) ≥ M_s^-k.
Also, since x, x' ∈ c(F^∗), α_F^∗(x), α_F^∗(x') ≥ m_s.
Define ω^∗_j+1 to be such that
∂(J)
=
m_s M_s^-(k+1) if J ∈{ e_1, e_2},
-M_s^-k if J = F^∗,
0 otherwise.
Then ω^∗_j+1 is a weighted fractional { F^∗_s, E^∗_s }-tiling.
First, note that |A_j+1| = |A_j| - (3k + ℓ - 2) ≥ |A_j| - 5k^2.
Secondly, using (<ref>) we have that ω^∗_j+1(F^∗) = ω^∗_j(F^∗) - M_s^-k is either 0 or at least c.
Thus we obtain
(ω^∗_j+1)_min ≥min{ (ω^∗_j)_min, M_s ω^∗_j+1(e_1), c }≥min{ c, m_s M_s^-k, c }≥ c.
Finally,
ϕ(ω^∗_j) - ϕ(ω^∗_j+1)
= s/n( ∂(F^∗) + 3/5 (∂(e_1) + ∂(e_2) ) ) = s/n M_s^k( 6 m_s/5 M_s - 1 ).
Using (<ref>), s ≥ 5k^2, ℓ≤ k-1 and k ≥ 3, we can lower bound m_s/M_s by
m_s/M_s≥M_s - 1/M_s≥s - ℓ - k/s - ℓ = 1 - k/s - ℓ≥ 1 - k/5k^2 - k+1≥40/43.
We deduce ϕ(ω^∗_j) - ϕ(ω^∗_j+1) ≥ 5 s / (43 M_s^k n) ≥ν_1 / n, so we are done in this subcase.
Case 1.2: there exists e ∈ E(H) with ω^∗_j(e) > 0 and d_e≥ k+1.
We prove this case using a similar argument used in Case 1.1.
There exist distinct i, i' ∈{1, …, k} and distinct x, x' ∈ e such that both e_1 = W_i ∪{ x } and e_2 = W_i'∪{ x' } are edges in .
Since x ∈ A_j, Proposition <ref><ref> and (<ref>) imply that ω^∗_j(e) = M_s^-k.
Define ω^∗_j+1 to be such that
∂(J) =
-M_s^-k if J = e,
M_s^-k if J ∈{ e_1, e_2 },
0 otherwise.
Then ω^∗_j+1 is a weighted fractional { F^∗_s, E^∗_s }-tiling with |A_j+1| = |A_j| - (3k - 2) ≥ |A_j| - 5k^2.
Note ω^∗_j+1(e) = 0 and ω_j+1^∗(e_i) > ω_j^∗(e_i) for i ∈ [2], so we have (ω^∗_j+1)_min≥ (ω^∗_j)_min≥ c.
Note that
ϕ(ω^∗_j) - ϕ(ω^∗_j+1)
= 3 s/5 n(∂(e_1) + ∂(e_2) + ∂(e) ) = 3s/ 5 M_s^k n≥ν_1/n,
so this finishes the proof of this subcase.
Case 1.3: Both Cases 1.1 and 1.2 do not hold.
Thus d_J≤ k for all J ∈^∗_s() ∪ E() with ω^∗_j(J) > 0.
Recall that α_F^∗(v) ≤ M_s if v ∈ c(F^∗) and α_F^∗(v) = 1 if v ∈ p(F^∗).
Thus, for all F^∗∈^∗_s() with ω^∗_j(F^∗) > 0, we have
∑_i=1^k∑_x ∈ N_iα_F^∗(x)
≤∑_i=1^k( M_s |N_i ∩ c(F^∗)| + |N_i ∩ p(F^∗)| )
= M_s d_F^∗ + ∑_i=1^k |N_i ∩ p(F^∗)| ≤ k(M_s + ℓ) ≤ s + k^2.
Therefore,
∑_F^∗∈^∗_s∑_i=1^k∑_x ∈ N_iω^∗_0(F^∗) α_F^∗(x)
≤ (s + k^2) ∑_F^∗∈^∗_s() w^∗_0(F^∗).
Similarly, for e ∈ E() with ω^∗_j(e) > 0, we obtain
∑_i=1^k ∑_x ∈ N_iα_e(x)
= ∑_i=1^k M_s |e ∩ N_i| = M_s d_e ≤ k M_s.
Hence,
∑_e ∈ E()∑_i=1^k ∑_x ∈ N_iω^∗_0(e) α_e(x) ≤ k M_s ∑_e ∈ E() w^∗_0(e).
Combining everything, we deduce
∑_i=1^k |N_i|
= ∑_i=1^k ∑_x ∈ N_i 1
(<ref>), (<ref>)=∑_i=1^k ∑_x ∈ N_iω^∗_0(x)
= ∑_i=1^k ∑_x ∈ N_i∑_J ∈^∗_s() ∪ E()ω^∗_0(J) α_J(x)
= ∑_J ∈^∗_s() ∪ E()∑_i=1^k ∑_x ∈ N_iω^∗_0(J) α_J(x)
(<ref>), (<ref>)≤ (s + k^2) ∑_F^∗∈_s^∗() w^∗_0(F^∗) + k M_s ∑_e ∈ E() w^∗_0(e)
Prop. <ref><ref>≤ n + k^2 ∑_F^∗∈_s^∗() w^∗_0(F^∗) ≤ n + (k^2/s) n ≤ 6n/5,
where the last inequality uses s ≥ 5k^2.
This contradicts (<ref>) and finishes the proof of Case 1.
Case 2: |U_0| < 3 α n / 4.
Write , for _, _, respectively.
Note that n = s || + k M_s | | + |U_0|.
Hence,
α≤ϕ() ≤ 1 - (s/n) | | ≤ (1/n)( k M_s | | + |U_0| ) ≤ k M_s | |/n + 3 α/4.
Using that s ≥ 5k^2, that k ≥ 3, that 1/n ≪α, γ≤ 1 and (<ref>), we have
| | ≥α n/(4 k M_s) ≥αγ n/40 + 1 ≥ n - |A_j| + 1.
Hence there exists E_s ∈ with V(E_s) ⊆ A_j.
By Proposition <ref><ref>, there exists an edge X = { x_1, …, x_k }∈ E() such that X ⊆ A_j and
w^∗_j(X) = w^∗_0(X) = M_s^-k.
We would like to use Lemma <ref> to find copies F of F^∗_s with c(F) = X, and decrease the weight of X to be able to increase the weight of an appropriate copy of F^∗_s.
Recall that S(ω^∗_j) is the set of saturated vertices with respect to ω^∗_j.
We write S_j = S(ω^∗_j) and let S' = S_j ∪ (V(H) ∖ A_j).
Proposition <ref><ref> and (<ref>) together imply that |S'| ≤ (ℓ/s + γ/40)n.
For all 1 ≤ i ≤ k, let N_i = N_H(X ∖{ x_i }).
We may assume (by relabelling) that either |N_1 ∩ N_2| < (ℓ/s + 2γ/3)n or |N_i ∩ N_j| ≥ (ℓ/s + 2γ/3)n for all 1 ≤ i, j ≤ k.
Case 2.1: (N_1 ∩ N_2) ∖ S' ≠∅.
In this case, select y ∈ (N_1 ∩ N_2) ∖ S' and apply Lemma <ref> with S', y playing the roles of S, y_0.
We obtain a copy F_1 of F^∗_s such that c(F_1) = X and p(F_1) ∩ S' = ∅.
Then p(F_1) ⊆ A_j ∖ S_j.
Let P_0 = p(F_1) ∖ U_j.
For p ∈ p(F_1) ∩ U_j, by (<ref>), ω^∗_j(p) = 0.
For every p ∈ P_0, by the definitions of A_j and U_j, there exists J_p ∈^+_0 such that p ∈ V(J_p), and since p ∉ S_j we can also choose J_p such that α_J_p(p) ≥ m_s.
(The J_p might coincide for different p ∈ P_0.)
Define ω^∗_j+1 to be such that
∂(J) =
M_s^-k if J = F_1,
- M_s^-k if J = X,
- M_s^-k/m_s if J = J_p for some p ∈ P_0,
0 otherwise.
Then ω^∗_j+1 is a weighted fractional { F^∗_s, E^∗_s }-tiling.
First, note that |A_j+1| ≥ |A_j| - (|V(F_1)| + ∑_p ∈ P_0 |V(J_p)| ) ≥ |A_j| - (2k+2k^2) ≥ |A_j| - 5k^2.
Secondly, (<ref>) implies that ω^∗_j+1(X) = 0 and ω^∗_j+1(F_1) ≥ c, and moreover, for all p ∈ P_0, ω^∗_j+1(J_p) ≥ M_s^-k(1 - 1/m_s) ≥ M_s^-k-1≥ c. Thus
(ω^∗_j+1)_min≥ c.
Finally, since |P_0| ≤ |p(F_1)| = ℓ, we have
ϕ(ω^∗_j) - ϕ(ω^∗_j+1)
≥s/n( ∂(F_1) + 3/5∂(X) + ∑_p ∈ P_0∂(J_p) )
≥s/n M_s^k( 2/5 - |P_0|/m_s)
≥s/n M_s^k( 2/5 - ℓ/m_s).
By (<ref>), ℓ≤ k-1 and s ≥ 5 k^2, we get
ℓ/m_s ≤ (k-1)/(M_s - 1) ≤ k/M_s ≤ k^2/(s - ℓ) ≤ k^2/(5k^2 - k + 1) ≤ 1/4,
where the last inequality holds for every k ≥ 3.
Thus ϕ(ω^∗_j) - ϕ(ω^∗_j+1) ≥ 3s / (20 n M^k_s) ≥ν_1 / n and we are done.
Case 2.2: N_1 ∩ N_2 ⊆ S'.
Since H is strongly (1/2 + 1/(2s) + γ, θ)-dense and 1/n ≪γ, 1/k, we deduce |N_1 ∩ N_2| ≥ (1/s + γ)n.
Using N_1 ∩ N_2 ⊆ S' and (<ref>), we have |N_1 ∩ N_2 ∩ S_j ∩ A_j| ≥ (1/s + γ/2)n.
By Proposition <ref><ref>, there exists F_2 ∈^∗_s() ∩^+_0 such that |p(F_2) ∩ N_1 ∩ N_2 ∩ S_j ∩ A_j| ≥ 2.
Let y'_1, y”_1 be two distinct vertices in p(F_2) ∩ N_1 ∩ N_2 ∩ S_j ∩ A_j.
We claim that
there exists F'_2 ∈^∗_s() such that p(F'_2) ∖ p(F_2) ⊆ A_j ∖ (S_j ∪ X),
their core vertices satisfy c(F'_2) = c(F_2),
and { y'_1, y”_1 }∖ p(F'_2) ≠∅.
To see where we are heading, if we have found such F'_2, then our aim will be to define ω^∗_j+1 by decreasing the weight of F_2 and X, which will allow us then to increase the weight of F'_2 and a copy F'_1 of F^∗_s such that c(F'_1) = X and { y'_1, y”_1 }∩ p(F'_1) ≠∅.
Let us check (<ref>) holds.
Let Z = c(F_2) = { z_1, …, z_k } and for every 1 ≤ i ≤ k let Z_i = N_H(Z ∖{ z_i }).
Since y'_1 ∈ p(F_2), without loss of generality (by relabelling) we may assume that y'_1 ∈ Z_1 ∩ Z_2.
Suppose first that (Z_1 ∩ Z_2) ∖ (S' ∪ X ∪ V(F_2)) is non-empty.
Select any y”'_1 ∈ (Z_1 ∩ Z_2) ∖ (S' ∪ X ∪ V(F_2)).
Thus there exists F'_2 ∈^∗_s() such that c(F'_2) = Z, p(F'_2) = (p(F_2) ∖{ y'_1 }) ∪{ y”'_1 }, p(F'_2) ∖ p(F_2) = {y”'_1 }⊆ A_j ∖ (S_j ∪ X) and y'_1 ∈{ y'_1, y”_1 }∖ p(F'_2), as desired.
Hence, we may assume Z_1 ∩ Z_2 ⊆ S' ∪ X ∪ V(F_2).
This implies that |Z_1 ∩ Z_2| ≤ |S' ∪ X ∪ V(F_2)| ≤ (ℓ/s + γ/40)n + |X|+|V(F_2)| < (ℓ/s + 2 γ/3)n.
Apply Lemma <ref> (with Z, Z_i, S' ∪ X ∪ V(F_2), y'_1 playing the roles of X, N_i, S and y_0, respectively) to obtain F'_2 ∈^∗_s such that c(F'_2) = Z and p(F'_2) ∩ ( S' ∪ X ∪ V(F_2) ∖{ y'_1 } ) = ∅.
It is easily checked that F'_2 satisfies (<ref>).
Now take such an F'_2 and assume (after relabelling, if necessary) that y'_1 ∉ p(F'_2).
Apply Lemma <ref> (with X, N_i, S' ∪ V(F'_2), y'_1 playing the roles of X, N_i, S and y_0, respectively) to obtain F'_1 such that c(F'_1) = X and p(F'_1) ∩ (S' ∖{ y'_1 }) = ∅.
Let P' = (p(F'_1) ∖{ y'_1 }) ∪ (p(F'_2) ∖ p(F_2)) and observe that P' ⊆ A_j ∖ S_j.
Let P'_0 = P' ∖ U_j.
Arguing as in the previous case we see that for every p ∈ P' ∩ U_j, ω^∗_j(p) = 0, and for every p ∈ P'_0 there exists J_p ∈^+_0 such that p ∈ V(J_p) and α_J_p(p) ≥ m_s.
Let ω^∗_j+1 be such that
∂(J) =
M_s^-k if J = F'_1,
M_s^-(k+1) m_s if J = F'_2,
- M_s^-k if J ∈{ X, F_2 },
- M_s^-k / m_s if J = J_p for some p ∈ P'_0,
0 otherwise.
Since p(F'_1) ∪ p(F'_2) ⊆ P' ∪ p(F_2), the decrease of weight in F_2 and the J_p implies that the vertices in p(F'_1) ∪ p(F'_2) get weight at most 1 under ω^∗_j+1.
Using that, it is not difficult to check that ω^∗_j+1 is indeed a weighted fractional { F^∗_s, E^∗_s }-tiling.
Note that A_j∖ A_j+1⊆ V(F'_1) ∪ V(F_2) ∪ V(F'_2) ∪⋃_p ∈ P'_0 V(J_p) and |P'_0| ≤ |p(F'_1)| + |p(F'_2)| = 2 ℓ.
Using that ℓ≤ k-1, we deduce |A_j+1| ≥ |A_j| - 3(k+ℓ) - |P'_0|(k+ℓ) ≥ |A_j| - (3 + 2 ℓ)(k+ℓ) ≥ |A_j| - 5k^2.
Similarly as in the previous case, we deduce from (<ref>) that
(ω^∗_j+1)_min≥ c.
Using that |P'_0| ≤ 2 ℓ, we deduce
ϕ(ω^∗_j) - ϕ(ω^∗_j+1)
≥s/n( ∂(F'_1) + ∂(F'_2) + ∂(F_2) + 3/5∂(X) + ∑_p ∈ P'_0∂(J_p) )
= s/n M_s^k( 1 + m_s/M_s - 1 - 3/5 - |P'_0|/m_s)
≥s/n M_s^k( m_s/M_s - 3/5 - 2 ℓ/m_s).
From (<ref>), s ≥ 5k^2 and ℓ≤ k-1, we deduce
m_s/M_s - 3/5 - 2ℓ/m_s ≥ 2/5 - 1/M_s - 2ℓ/m_s ≥ 2/5 - (1 + 2ℓ)/m_s ≥ 2/5 - k(1 + 2ℓ)/(s - ℓ - k)
≥ 2/5 - (2k^2 - k)/(5k^2 - 2k + 1)
= (k+2)/(25k^2 - 10k + 5) ≥ (k+2)/(25k^2) ≥ 1/(25k).
Thus we get ϕ(ω^∗_j) - ϕ(ω^∗_j+1) ≥ s/(25 M_s^kkn) ≥ν_1/n and we are done.
This finishes the proof of Case 2.2 and of Claim <ref>.
□
This concludes the proof of Lemma <ref>.
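The elementary estimates used in Cases 1.1, 2.1 and 2.2 depend only on the constraints k ≥ 3, ℓ ≤ k-1 and s ≥ 5k^2, and each bound improves as s grows or ℓ shrinks. The following Python sketch (ours, purely illustrative) checks them at the extremal values s = 5k^2 and ℓ = k-1:

from fractions import Fraction as F

# Check the three elementary bounds at the extremal admissible values
# s = 5k^2, l = k-1; each bound only improves for larger s or smaller l.
for k in range(3, 200):
    s, l = 5 * k * k, k - 1
    # Case 1.1:  m_s/M_s >= 1 - k/(s-l) >= 40/43
    assert 1 - F(k, s - l) >= F(40, 43)
    # Case 2.1:  l/m_s <= k^2/(s-l) <= 1/4
    assert F(k * k, s - l) <= F(1, 4)
    # Case 2.2:  2/5 - k(1+2l)/(s-l-k) = (k+2)/(25k^2-10k+5) >= 1/(25k)
    gap = F(2, 5) - F(k * (1 + 2 * l), s - l - k)
    assert gap == F(k + 2, 25 * k * k - 10 * k + 5) >= F(1, 25 * k)
print("all three bounds hold for 3 <= k < 200")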
§ REMARKS AND FURTHER DIRECTIONS
The following family of examples gives lower bounds for the Turán problems of tight cycles on a number of vertices not divisible by k (and hence for the tiling and covering problem, as well).
We acknowledge and thank a referee for suggesting this construction.
We are not aware of its appearance in the literature before, although it bears some resemblance to examples considered by Mycroft to give lower bounds for tiling problems <cit.>.
Let k ≥ 2 and p > 1 be a divisor of k.
For n > 0, we define the k-graph H^k_n,p as follows.
Given a vertex set V of size n, partition it into p disjoint vertex sets V_1, …, V_p of size as equal as possible.
Assume that every x ∈ V_i is labelled with i, for all 1 ≤ i ≤ p.
Let H^k_n,p be the k-graph on V where the edges are the k-sets such that the sum of the labels of its vertices is congruent to 1 modulo p.
Using this construction, we deduce the following lower bounds for _k-1(n, C^k_s) when s is not divisible by k (and therefore, also for c(n, C^k_s)).
Let s > k ≥ 2 with s not divisible by k.
Let p be a divisor of k which does not divide s.
Then _k-1(n, C^k_s) ≥⌊ n/p ⌋ - k + 2.
In particular, _k-1(n, C^k_s) ≥⌊ n/k ⌋ - k + 2.
Given k, p, n, let H = H^k_n,p be the k-graph given by Construction <ref>.
Since the sets V_i are chosen to have size as equal as possible, we deduce |V_i| ≥⌊ n/p ⌋ holds for all 1 ≤ i ≤ p.
It is easy to check that no edge of H is entirely contained in any set V_i,
and that, for every (k-1)-set S in V, N(S) = V_j ∖ S for some j.
Thus δ_k-1(H) ≥⌊ n/p ⌋ - k + 2.
We show that is C^k_s-free.
Let C be a tight cycle on t vertices in .
It is enough to show that p divides t (since p does not divide s, it will follow that t ≠ s).
Recall from Construction <ref> that every x ∈ V_i is labelled with i.
We double count the sum T of the labels of vertices, over all the edges of C.
On one hand, T ≡ 0 (mod k) since each vertex appears in exactly k edges of C and thus is counted k times.
Since p divides k, T ≡ 0 (mod p).
On the other hand, the sum of the labels of a single edge is congruent to 1 modulo p and there are t of them, thus T ≡ t (mod p).
This implies that p divides t.
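For small parameters, both the codegree bound and the class-crossing property of H^k_{n,p} can be verified by brute force. A minimal Python sketch, assuming 0-based vertices labelled by v ↦ (v mod p) + 1:

from itertools import combinations

def construction(n, k, p):
    """The k-graph H^k_{n,p}: vertex v gets label (v % p) + 1, so the classes
    V_1, ..., V_p have sizes as equal as possible; the edges are the k-sets
    whose label sum is congruent to 1 modulo p."""
    label = lambda v: v % p + 1
    edges = {e for e in combinations(range(n), k)
             if sum(label(v) for v in e) % p == 1}
    return label, edges

def min_codegree(n, k, edges):
    return min(sum(1 for v in range(n) if v not in S
                   and tuple(sorted(S + (v,))) in edges)
               for S in combinations(range(n), k - 1))

for (n, k, p) in [(9, 3, 3), (12, 3, 3), (10, 4, 2), (12, 4, 4)]:
    label, edges = construction(n, k, p)
    # no edge lies entirely inside one class V_i (label sum would be 0 mod p)
    assert all(len({label(v) for v in e}) >= 2 for e in edges)
    assert min_codegree(n, k, edges) >= n // p - k + 2
print("codegree bound verified on the sample cases")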
Now we discuss covering thresholds.
Let s > k ≥ 3.
Theorem <ref> and Proposition <ref> imply that c(n, C_s^k) = (1/2 + o(1))n for all admissible pairs (k,s) with s ≥ 2k^2.
A natural open question is to determine c(n, C_s^k) for the non admissible pairs (k,s).
The smallest case not covered by our constructions is when (k,s) = (6,8), and Proposition <ref> implies that c(n, C^6_8) ≥⌊ n/3 ⌋ - 4.
Is the lower bound for c(n, C_s^k) given by Proposition <ref> asymptotically tight, for non admissible pairs (k,s)?
In particular, is c(n, C^6_8) = (1/3 + o(1))n?
Now, we consider the Turán thresholds.
Theorem <ref> and Proposition <ref> also show that _k-1(n, C_s^k) = (1/2 + o(1))n for k even, s ≥ 2k^2 and (k, s) is an admissible pair.
We would like to know the asymptotic value of _k-1(n, C_s^k) in the cases not covered by our constructions.
Proposition <ref> implies that _k-1(n, C_s^k) ≥⌊ n/k ⌋ - k+ 2 for s not divisible by k; but on the other hand, if s ≡ 0 (mod k) then _k-1(n, C_s^k) = o(n), which follows easily from Theorem <ref>.
The simplest open case is when k = 3 and s is not divisible by 3.
Note that C^3_4 = K^3_4, and the lower bound _2(n, C^3_4) ≥ (1/2 + o(1))n holds in this case <cit.>.
We conjecture that in the case k=3, for s > 4 and not divisible by three, the lower bound given by Proposition <ref> describes the correct asymptotic behaviour of _k-1(n, C^k_s).
_2(n, C^3_s) = (1/3 + o(1))n for every s > 4 with s ≢ 0 (mod 3).
Finally, we discuss tiling thresholds.
Let (k,s) be an admissible pair such that s ≥ 5k^2.
If k is even, then Theorem <ref> and Proposition <ref> imply that t(n, C_s^k) = (1/2 + 1/(2s) + o(1))n.
We conjecture that for k odd, the bound given by Proposition <ref> is asymptotically tight.
Let (k,s) be an admissible pair such that k ≥ 3 is odd and s ≥ 5k^2.
Then t(n, C_s^k) = (1/2 + k/(4 s(k-1) + 2k ) + o(1))n.
Note that, for k odd, the extremal example given by Proposition <ref> is an example of the so-called space barrier construction.
However, it is different from the common construction which is obtained by attaching a new vertex set W to an F-free k-graph and adding all possible edges incident with W.
On the other hand, for k even, it is indeed the common construction of a space barrier.
It also would be interesting to find bounds on the Turán, covering and tiling thresholds that hold whenever k < s ≤ 5k^2. The known thresholds for these kinds of k-graphs do not necessarily follow the pattern of the bounds we have found for longer cycles. For example, note that C^k_k+1 is a complete k-graph on k+1 vertices, which suggests that for lower values of s the problem behaves in a different way. Concretely, when (k, s) = (3, 4), it is known that t(n, C^3_4) = (3/4 + o(1))n <cit.>.
Given k ≥ 3, what is the minimum s such that t(n, C_s^k) ≤ (1/2 + 1/(2s) + o(1))n holds?
§ ACKNOWLEDGEMENTS
We thank Richard Mycroft and Guillem Perarnau for their valuable comments and insightful discussions.
We also thank an anonymous referee for their comments and suggestions that simplified some parts and vastly improved the presentation of the paper.
In particular, we are grateful for their suggestions of a simpler proof of Lemma <ref> and Construction <ref>.
§ HYPERGRAPH REGULARITY
In Section <ref> we stated modified versions of some regularity statements which follow from easy modifications of the original statements or proofs.
In this appendix we sketch how to guarantee those properties hold.
§.§ Avoiding fixed (k-1)-graphs
Our version of the Regular Slice Lemma (Theorem <ref>) includes an additional property (that of “avoiding” a fixed (k-1)-graph 𝒮 on the same vertex set as G) which is not present in the original statement <cit.>.
We claim that this extra property follows already from their proof by doing one simple extra step.
Their proof of the Regular Slice Lemma can be summarised as follows (we refer the reader to <cit.> for the precise definitions).
First, they obtain an “equitable family of partitions” 𝒫^∗ from (a strengthened version of) the Hypergraph Regularity Lemma.
This can be used to find suitable complexes in the following way: first, for each pair of clusters of 𝒫^∗, select a 2-cell uniformly at random. Then, for each triple of clusters of 𝒫^∗ select a 3-cell uniformly at random which is supported on the corresponding previously selected 2-cells; and so on, until we select (k-1)-cells.
This will always output a (t_0, t_1, )-equitable (k-1)-complex , and the task is to check that, with positive probability, is actually a (t_0, t_1, , _k, r)-regular slice satisfying the “desired properties” with respect to the reduced k-graph.
Having selected 𝒥 at random as before, the most technical part of the proof is to show that the “desired properties” of the reduced k-graph (labelled (a), (b) and (c) in <cit.>) hold with probability tending to 1 as n goes to infinity.
Thankfully, that part of the proof does not require any modification for our purposes.
Moreover, the selected 𝒥 will be a (t_0, t_1, , _k, r)-regular slice with probability at least 1/2.
This is shown by upper bounding the expected number of k-sets of clusters of 𝒥 for which G is not (_k, r)-regular, and an application of Markov's inequality (cf. <cit.>).
It is a natural adaptation of this method that will show that 𝒥 is also (3 θ^1/2, 𝒮)-avoiding with probability at least 2/3.
Let 𝒮 be a (k-1)-graph on V(G) of size at most θ\binom{n}{k-1}.
We only need to consider the edges of 𝒮 which are 𝒫-partite.
Every 𝒫-partite edge of 𝒮 is supported in exactly one (k-1)-cell of the family of partitions 𝒫^∗, which by <cit.> is present in 𝒥 with probability p = ∏_i=2^k-1 d_i^\binom{k-1}{i}.
Thus the expected value of |E(𝒮) ∩ E(𝒥_k-1)| is at most |E(𝒮)| p ≤ θ p \binom{n}{k-1}.
By Markov's inequality, with probability at least 2/3 we have |E(𝒮) ∩ E(𝒥_k-1)| ≤ 3 θ p \binom{n}{k-1}.
By the previous discussion, with positive probability 𝒥 satisfies all of the properties of <cit.> and also that |E(𝒮) ∩ E(𝒥_k-1)| ≤ 3 θ p \binom{n}{k-1}.
Thus we may assume 𝒥 satisfies all of the previous properties simultaneously, and it is only necessary to check that 𝒥 is (3 θ^1/2, 𝒮)-avoiding.
Let t be the number of clusters of 𝒥 and m the size of a cluster in 𝒥.
For each (k-1)-set of clusters Y, 𝒥_Y has (1 ± ε_k/10) p m^k-1 edges (see <cit.>).
We say a (k-1)-set of clusters Y is bad if |𝒥_Y ∩ E(𝒮)| > √(6 θ) |𝒥_Y| and let ℬ be the set of bad (k-1)-sets.
Then
3 θ p \binom{n}{k-1} ≥ ∑_Y ∈ ℬ |𝒥_Y ∩ E(𝒮)| ≥ |ℬ| √(6 θ) (1 - ε_k/10) p m^k-1,
which implies |ℬ| ≤ 3 θ^1/2 \binom{t}{k-1}.
It follows that 𝒥 is (3 θ^1/2, 𝒮)-avoiding, as desired.
§.§ Embedding lemma
Note that <cit.> is stronger than Lemma <ref> in the sense that it allows embeddings of k-graphs with bounded maximum degree whose number of vertices is linear in m, but we don't require that property here.
The main technical difference between Lemma <ref> and Theorem 2 in <cit.> is that their lemma asks for the stronger condition that for all e ∈ E() intersecting the vertex classes { X_i_j : 1 ≤ j ≤ k }, the k-graph G should be (d, _k, r)-regular with respect to the k-set of clusters { V_i_j : 1 ≤ j ≤ k }, such that the value d does not depend on e and 1/d ∈ℕ, whereas we allow G to be (d_e, _k, r)-regular for some d_e ≥ d depending on e and not necessarily satisfying 1/d_e ∈ℕ.
By the discussion after Lemma 4.6 in <cit.>, we can reduce to that case by working with a sub-k-complex of ∪ G which is (d, d_k-1, d_k-2, …, d_2, _k, , r)-regular, whose existence is guaranteed by an application of the “slicing lemma” <cit.>.
|
http://arxiv.org/abs/1701.07915v2 | 20170127005321 | An overpartition analogue of $q$-binomial coefficients, II: combinatorial proofs and $(q,t)$-log concavity | [
"Jehanne Dousse",
"Byungchan Kim"
] | math.CO | [
"math.CO",
"math.NT",
"11P81, 11P84, 05A10, 05A17, 11B65, 05A20, 05A30"
] |
An overpartition analogue of q-binomial coefficients, II: combinatorial proofs and (q,t)-log concavity
Institut für Mathematik, Universität Zürich
Winterthurerstrasse 190, 8057 Zürich, Switzerland
jehanne.dousse@math.uzh.ch
School of Liberal Arts
Seoul National University of Science and Technology
232 Gongneung-ro, Nowon-gu, Seoul,01811, Korea
bkim4@seoultech.ac.kr
2010 Mathematics Subject Classification: 11P81, 11P84, 05A10, 05A17, 11B65, 05A20, 05A30
This research was supported by the International Research & Development Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology(MEST) of Korea (NRF-2014K1A3A1A21000358), the Forschungskredit of the University of Zurich, grant no. FK-16-098, and the STAR program number 32142ZM
In a previous paper, we studied an overpartition analogue of Gaussian polynomials as the generating function for overpartitions fitting inside an m × n rectangle. Here, we add one more parameter counting the number of overlined parts, obtaining a two-parameter generalization m+n n_q,t of Gaussian polynomials, which is also a (q,t)-analogue of Delannoy numbers. First we obtain finite versions of classical q-series identities such as the q-binomial theorem and the Lebesgue identity, as well as two-variable generalizations of classical identities involving Gaussian polynomials. Then, by constructing involutions, we obtain an identity involving a finite theta function and prove the (q,t)-log concavity of m+n n_q,t. We particularly emphasize the role of combinatorial proofs and the consequences of our results on Delannoy numbers. We conclude with some conjectures about the unimodality of m+n n_q,t.
§ INTRODUCTION
Gaussian polynomials (or q-binomial coefficients) are defined by
m+n n_q = (q)_m+n/(q)_m (q)_n,
where (a)_k = (a;q)_k := ∏_j=1^k (1-aq^j-1 ) for k ∈ℕ_0∪{∞}. They are the generating functions for partitions fitting inside an m × n rectangle. In our previous paper <cit.>, we studied an overpartition analogue m+n n_q of these polynomials as the generating function for the number of overpartitions fitting inside an m × n rectangle. We recall that an overpartition is a partition in which the last occurrence of each distinct number may be overlined <cit.>, the eight overpartitions of 3 being
3, \overline{3}, 2+1, \overline{2}+1, 2+\overline{1}, \overline{2}+\overline{1}, 1+1+1, 1+1+\overline{1}.
In this paper, we add a variable t counting the number of overlined parts in our over q-binomial coefficients and define
m+n n_q,t := ∑_k,N ≥ 0 p(m,n,k,N) t^k q^N,
where p(m,n,k,N) counts the number of overpartitions of N, with k overlined parts, fitting inside an m × n rectangle, i.e. with largest part ≤ m and number of parts ≤ n. We call these two-variable polynomials m+n n_q,t over-(q,t)-binomial coefficients or (q,t)-overGaussian polynomials. If we set t=0, meaning that no part is overlined, we obtain the classical q-binomial coefficients, and if we set t=1 we obtain the over q-binomial coefficients of <cit.>. As we shall see in Section <ref>, the polynomials m+n n_q,t are also (q,t)-analogues of the Delannoy numbers D(m,n) <cit.>.
Again, by conjugation of the Ferrers diagrams, it is clear that
m+n n_q,t = m+n m_q,t.
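The coefficients p(m,n,k,N) can be tabulated directly from the definition: a partition in the m × n box with d distinct part sizes gives rise to \binom{d}{k} overpartitions of the same weight with exactly k overlined parts. A brute-force Python sketch (function names are ours), which also confirms the conjugation symmetry on small boxes:

from math import comb

def over_qt_bruteforce(m, n):
    """Coefficients {(k, N): p(m, n, k, N)} of [m+n, n]_{q,t}: each partition
    in the m x n box with d distinct part sizes yields comb(d, k)
    overpartitions of the same weight with exactly k overlined parts."""
    coeffs = {}
    def extend(max_part, slots, total, distinct):
        d = len(distinct)
        for k in range(d + 1):
            coeffs[(k, total)] = coeffs.get((k, total), 0) + comb(d, k)
        for part in range(1, max_part + 1):
            if slots > 0:
                extend(part, slots - 1, total + part, distinct | {part})
    extend(m, n, 0, frozenset())
    return coeffs

# [4, 2]_{q,t} = 1 + (1+t)q + (2+2t)q^2 + (1+2t+t^2)q^3 + (1+t)q^4
print(sorted(over_qt_bruteforce(2, 2).items()))
# conjugation symmetry on small boxes
for m in range(5):
    for n in range(5):
        assert over_qt_bruteforce(m, n) == over_qt_bruteforce(n, m)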
Most of our results of <cit.> easily generalize to this new setting. Moreover, the new variable t also allows us to do more precise combinatorial reasoning. Therefore in this paper we mainly focus on combinatorial proofs, which turn out to be very powerful and often simpler than q-theoretic proofs.
The limiting behavior of over-(q,t)-binomial coefficients is interesting, with
lim_n →∞n j_q,t = (-t q)_j/(q)_j,
as when n tends to infinity, the restriction on the size of the largest part (or equivalently the number of parts) disappears. From this limiting behavior, we expect natural finite versions of identities in which overpartitions naturally arise. In this direction, we consider finite versions of classical q-series identities. For example, we prove a finite version of the q-binomial theorem.
For every positive integer n,
∑_k ≥ 0 n+k-1 k_q,t z^k q^k = (-tzq^2)_n-1/(zq)_n.
By taking the limit as n →∞, we find that
∑_k ≥ 0 (-tq)_k/(q)_k z^k q^k = (-tzq^2)_∞/(zq)_∞.
Replacing z by z/q and t by -t/q gives the q-binomial theorem.
We also prove a finite version of a special case of the Rogers-Fine identity.
For a positive integer n,
∑_k ≥ 0 n+k-1 k_q,t z^k q^k = ∑_k ≥ 0z^k q^k^2+k (-t zq^2)_k/(zq)_k+1n-1 k_q,t +t z q^2k+2 n-2 k_q,t.
By taking the limit as n →∞, we obtain
∑_k ≥ 0(-tq)_k/(q)_k z^k q^k = ∑_k ≥ 0z^k q^k^2+k (-tzq^2)_k (-tq)_k/ (q)_k (zq)_k+11 +t z q^2k+2,
which is the case a=1 of the Rogers-Fine identity <cit.>
∑_k ≥ 0 (-tq)_k/ (aq)_k z^k q^k = ∑_k ≥ 0a^k z^k q^k^2+k (-tzq^2 /a)_k (-tq)_k/ (aq)_k (zq)_k+11 +t z q^2k+2.
We also prove the following very curious identity, which contains a truncated theta function.
For each nonnegative integer n,
∑_k=0^n (-1)^k n k_q,1 =
0 if n is odd,
∑_j=-n/2^n/2 (-1)^j q^j^2 if n is even.
This identity is interesting in several aspects. First of all, it is not clear at all how the cancellation occurs. Its proof is reminiscent of Franklin's proof of Euler's pentagonal number theorem <cit.>.
∑_k ≥ 0 (q)_k = -1/2∑_k ≥ 1 k χ(k) q^(k^2 -1 )/24.
In our case, by taking the limit as n goes to infinity, we obtain the “formal” identity
∑_k ≥ 0 (-1)^k (-q)_k/(q)_k = ∑_k ∈ℤ (-1)^k q^k^2.
Here by “formal” identity, we mean that the left-hand side does not converge as a power series in q. Thirdly, in a q-theoretic sense, Theorem <ref> is equivalent to
∑_|j| ≤ n (-1)^j q^j^2 = 2 ∑_j=0^n-1 ∑_k=0^j (-1)^j q^k(k+1)/2 2n-k j_q j k_q + (-1)^n ∑_k=0^n q^k(k+1)/2 2n-k n_q n k_q.
Lastly, the involution to prove Theorem <ref> implies the following identity.
For each positive integer n, we have
1 + ∑_k=1^n (-q)^k ( n k _q,1 + n-1 k-1 _q,1 ) = ∑_ |j| ≤ ⌊ (n+1)/2 ⌋ (-1)^j q^j^2.
Corollary <ref> is a finite version of a special case of Alladi's weighted partition theorem <cit.>.
We also study q-log concavity properties. In <cit.>, Butler showed that q-binomial coefficients are q-log concave, namely that for all 0<k<n,
n k_q^2 - n k-1_qn k+1_q
has non-negative coefficients as a polynomial in q. Actually, Butler <cit.> proved a much stronger result, namely that
n k_q n ℓ_q - n k-1_q n ℓ +1_q
has non-negative coefficients as a polynomial in q for 0<k ≤ℓ < n. Here we prove that over-(q,t)-binomial coefficients satisfy a generalization of this property, and therefore are also (q,t)-log concave.
For all 0<k ≤ℓ < n,
n k_q,t n ℓ_q,t - n k-1_q,t n ℓ +1_q,t
has non-negative coefficients as a polynomial in t and q.
Our proof is again combinatorial, as we construct an injection to show the non-negativity. The q-log concavity of q-binomial coefficients and of Sagan's q-Delannoy numbers <cit.>, as well as the log-concavity (and therefore unimodality) of Delannoy numbers follow immediately from the proof of Theorem <ref>, as we shall see in Section <ref>.
The remainder of this paper is organized as follows. In Section <ref>, we study basic properties of over q-binomial coefficients and give connections with Delannoy numbers. Then in Section <ref>, we study finite versions of the q-binomial theorem, a special case of the Rogers-Fine identity and the Lebesgue identity. In Section <ref>, we focus on two-variable generalizations of classical identities involving Gaussian polynomials. Then in Section <ref>, we give the involution proof of Theorem <ref>. In Section <ref>, we prove Theorem <ref> by constructing an involution and study its implications. In Section <ref>, we conclude with some observations and conjectures concerning the unimodality of the over-(q,t)-binomial coefficients m+n n_q,t.
§ BASIC PROPERTIES AND CONNECTION TO DELANNOY NUMBERS
The Delannoy numbers <cit.> D(m,n), also sometimes called Tribonacci numbers <cit.>, are the number of paths from (0,0) to (m,n) on a rectangular grid, using only East, North and North-East steps, namely steps from (i,j) to (i+1,j), (i,j+1), or (i+1,j+1). Let 𝒟_m,n be the set of such paths. For a path p ∈𝒟_m,n, we define the weight of each of its steps p_k as
wt(p_k) :=
0, if it goes from (i,j) to (i+1,j),
i, if it goes from (i,j) to (i,j+1),
i+1, if it goes from (i,j) to (i+1,j+1).
Then we define the weight wt(p) of p to be the sum of the weights of its steps, and d(p) to be the number of North-East steps in p. By mapping North-East steps to overlined parts, we obtain a bijection between Ferrers diagrams of overpartitions fitting inside a m × n rectangle and Delannoy paths from the origin to (m,n). Therefore, we can see that over-(q,t)-binomial coefficient are generating functions for Delannoy paths.
For non-negative integers m and n,
m+n n _q,t = ∑_ p ∈𝒟_m,n t^d(p) q^wt(p).
In this sense, we can say that over-(q,t)-binomial coefficients are (q,t)-analogues of Delannoy numbers, which generalize the q-Delannoy numbers introduced by Sagan <cit.> (after exchanging t and q),
D_q (m,n) = ∑_ p ∈𝒟_m,n q^d(p).
In particular when q=t=1 we have
m+n n _1,1 = D(m,n).
A different q-analogue of Delannoy numbers has been given by Ramirez in <cit.>.
Most of our results of <cit.> generalize to the new setting with the additional variable t. It is sufficient to keep track of the number of overlined parts in the original proofs. Here we present two of them which have an interesting connection with Delannoy numbers.
Now we give an exact formula for m+n n_q,t.
For non-negative integers m and n,
m+n n_q,t = ∑_k=0^min{m,n} t^k q^k(k+1)/2(q)_m+n-k/(q)_k(q)_m-k(q)_n-k.
As in <cit.>, let G(m,n,k) denote the generating function for overpartitions fitting inside an m × n rectangle and having exactly k overlined parts.
We have
G(m,n,k) = q^k(k+1)/2m k_q n+m-k n-k_q
= q^k(k+1)/2(q)_m+n-k/(q)_k(q)_m-k(q)_n-k.
Since G(m,n,k) is non-zero if and only if 0 ≤ k ≤min{m,n}, we have
m+n n_q,t =∑_k=0^min{m,n} t^k G(m,n,k) = ∑_k=0^min{m,n} t^k q^k(k+1)/2(q)_m+n-k/(q)_k(q)_m-k(q)_n-k.
The case t=0 gives the classical formula for Gaussian polynomials and the case t=1 corresponds to Theorem 1.1 in <cit.>. Lemma 3 in <cit.> is essentially another formulation of Theorem <ref>, but their proof is more complicated as it involves several q-series identities, while ours is purely combinatorial. Moreover, when t=q=1, we obtain the following classical formula for Delannoy numbers:
D(m,n)= ∑_k=0^min{m,n} \binom{n}{k} \binom{m+n-k}{n}.
Note that using q-multinomial coefficients
a+b+c a,b,c_q := (q)_a+b+c/(q)_a(q)_b(q)_c,
we can rewrite (<ref>) as
m+n n_q,t = ∑_k=0^min{m,n} t^k q^k(k+1)/2m+n-k k,m-k,n-k_q.
In the same way, the analogues of Pascal's triangle of <cit.> can also be generalized.
For positive integers m and n, we have
m+n n_q,t = m+n -1 n-1_q,t +q^n m+n-1 n_q,t + tq^n m+n-2 n-1_q,t,
m+n n_q,t = m+n -1 n_q,t +q^m m+n-1 n-1_q,t + tq^m m+n-2 n-1_q,t.
Again t=0 gives the classical recurrences for q-binomial coefficients, t=1 gives Theorem 1.2 of <cit.>, and t=q=1 gives the classical recurrence for Delannoy numbers:
D(m,n)=D(m-1,n)+D(m,n-1)+D(m-1,n-1).
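Both recurrences translate directly into a memoized computation of the coefficient table of m+n n_q,t; the following Python sketch (illustrative only) checks that they agree with each other, with the conjugation symmetry, and with the Delannoy specialization q=t=1:

from functools import lru_cache

def shifted(P, dk, dN):
    return {(k + dk, N + dN): c for (k, N), c in P.items()}

def added(*polys):
    out = {}
    for P in polys:
        for key, c in P.items():
            out[key] = out.get(key, 0) + c
    return out

@lru_cache(maxsize=None)
def F1(m, n):
    """[m+n, n]_{q,t} as {(k, N): coeff}, by the first recurrence."""
    if m == 0 or n == 0:
        return {(0, 0): 1}                     # the empty overpartition
    return added(F1(m, n - 1),                 # [m+n-1, n-1]
                 shifted(F1(m - 1, n), 0, n),  # q^n [m+n-1, n]
                 shifted(F1(m - 1, n - 1), 1, n))  # t q^n [m+n-2, n-1]

@lru_cache(maxsize=None)
def F2(m, n):
    """Same table by the second recurrence (the conjugate form)."""
    if m == 0 or n == 0:
        return {(0, 0): 1}
    return added(F2(m - 1, n),
                 shifted(F2(m, n - 1), 0, m),
                 shifted(F2(m - 1, n - 1), 1, m))

@lru_cache(maxsize=None)
def delannoy(m, n):
    if m == 0 or n == 0:
        return 1
    return delannoy(m - 1, n) + delannoy(m, n - 1) + delannoy(m - 1, n - 1)

for m in range(7):
    for n in range(7):
        assert F1(m, n) == F2(m, n) == F1(n, m)
        assert sum(F1(m, n).values()) == delannoy(m, n)   # q = t = 1
print("recurrences, symmetry and Delannoy specialization agree")

On small boxes this also reproduces the brute-force enumeration sketched in the Introduction.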
We also obtain (q,t)-analogues of two other classical formulas for Delannoy numbers.
Recall that the basic hypergeometric series _rϕ_s are defined by
_rϕ_s (a_1,a_2,…,a_r;b_1,…,b_s ;q,z) := ∑_n ≥ 0 (a_1)_n (a_2)_n⋯ (a_r)_n/ (q)_n (b_1)_n⋯ (b_s)_n[ (-1)^n q^n(n-1)/2]^1+s-r z^n.
We can express over-(q,t)-binomial coefficients using a basic hypergeometric series.
For all m,n positive integers,
m+n n _q,t = m+n n _q_2ϕ_1 ( q^-n, q^-m ; q^-n-m ; q, -tq).
We may assume m ≥ n, as otherwise we could consider the conjugates of the Ferrers diagrams of the overpartitions. Using the fact that
(q^-n ; q)_k = (q;q)_n/(q;q)_n-k (-1)^k q^k(k-1)/2 - nk,
we derive that
_2ϕ_1 ( q^-n, q^-m ; q^-n-m ; q, -tq) = ∑_k =0^n (q^-n)_k (q^-m)_k/ (q)_k (q^-n-m)_k (-tq)^k
= (q)_n (q)_m/(q)_n+m∑_k=0^n (q)_m+n-k t^k q^k(k+1)/2/(q)_n-k (q)_m-k (q)_k
= (q)_n (q)_m/(q)_n+mm+n n _q,t
as desired.
By setting t=q=1, we can recover the well-known formula for Delannoy numbers
D(m,n) = \binom{m+n}{n} _2F_1 (-n,-m; -m-n ; -1),
where _2F_1 is a hypergeometric function.
Moreover, from <cit.> we have a transformation formula for the terminating series
_2ϕ_1 (q^-n, b ; c ; q,z) = (c/b)_n/(c)_n b^n _3ϕ_1 (q^-n, b, q/z ; bq^1-n/c ; q , z/c ).
By setting b=q^-m, c=q^-n-m, and z=-tq, we find another expression for over-(q,t)-binomial coefficients.
For all non-negative integers m and n,
m+n n_q,t = ∑_k=0^min{m ,n } t^k q^k(k+1)/2 (-1/t)_k m k_q n k_q.
By setting q=t=1, we can obtain another well-known formula for Delannoy numbers
D(m,n) = ∑_k=0^min{m,n} 2^k \binom{m}{k} \binom{n}{k}.
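Both classical Delannoy formulas recovered above are easy to confirm against the defining recurrence; a short Python check:

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def D(m, n):
    if m == 0 or n == 0:
        return 1
    return D(m - 1, n) + D(m, n - 1) + D(m - 1, n - 1)

for m in range(12):
    for n in range(12):
        assert D(m, n) == sum(comb(n, k) * comb(m + n - k, n)
                              for k in range(min(m, n) + 1))
        assert D(m, n) == sum(2 ** k * comb(m, k) * comb(n, k)
                              for k in range(min(m, n) + 1))
print("both classical Delannoy formulas verified for 0 <= m, n < 12")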
Actually, the bijection given in <cit.> gives a combinatorial proof of Proposition <ref>. As the details are lengthy and we do not use this bijection later, we omit them here.
§ FINITE VERSIONS OF CLASSICAL Q-SERIES IDENTITIES
§.§ The q-binomial theorem
In this section we use the (q,t)-overGaussian polynomials m+n n_q,t to prove new finite versions of classical q-series identities. Recall the q-binomial theorem.
For |t|,|z|<1,
∑_k ≥ 0(t)_k/(q)_k z^k = (tz)_∞/(z)_∞.
We start by giving two different finite versions of the q-binomial theorem involving over-(q,t)-binomial coefficients.
We first prove combinatorially Theorem <ref>.
Notice that z^k q^k generates a column of k unoverlined 1's. We append the partition generated by n+k-1 k_q,t to the right of these 1's. Therefore, we find that the left-hand side of (<ref>) is the generating function for the number of overpartitions with largest part ≤ n and no overlined 1, where the exponent of z counts the number of parts and the exponent of t counts the number of overlined parts. It is clear that the right-hand side of (<ref>) generates the same partitions.
Moreover Proposition 3.1 of <cit.> can be easily generalized by keeping track of the number of overlined parts in the original proof, and gives another finite version of the q-binomial theorem.
For every positive integer n, we have
(-tzq)_n/(zq)_n = 1+ ∑_k ≥ 1 z^k q^k ( n+k-1 k _q,t + t n+k-2 k-1 _q,t ).
By letting n tend to infinity, we obtain the following.
Let p (n,k,ℓ) be the number of overpartitions of n with k parts and ℓ overlined parts. Then,
∑_n,k,ℓ≥ 0p (n,k,ℓ) z^k t^ℓ q^n =(-tzq)_∞/(zq)_∞ = 1+ ∑_k ≥ 1 z^k q^k (-t)_k/(q)_k.
Now replacing z by z/q and t by -t in the above gives the q-binomial theorem.
§.§ A special case of the Rogers-Fine identity
We now turn to the proof of Theorem <ref>, which uses Durfee decomposition.
We first observe that for every positive integer n,
∑_k ≥ 0 n+k-1 k_q,t z^k q^k = ∑_k ≥ 0 z^k q^k^2+k (-tzq^2)_k/(zq)_k+1 n-1 k_q,t
+ ∑_k ≥ 0 t z^k q^k^2+k (-tzq^2)_k-1/(zq)_k n-2 k-1_q,t.
The left-hand side is the generating function for the number of overpartitions with largest part ≤ n and no overlined 1, where the exponent of z counts the number of parts and the exponent of t counts the number of overlined parts, as in the proof of Theorem <ref>. Now we consider the Durfee rectangle of size (k+1) × k. We can distinguish two cases according to whether the bottom-right corner of Durfee rectangle is overlined or not. When it is not overlined, z^k q^k^2 + k generates the Durfee rectangle. Moreover, (-t zq^2)_k/ (zq)_k+1 generates the overpartition below the Durfee rectangle and n-1 k_q,t generates the overpartition to the right of the Durfee rectangle. When the bottom-right corner of the Durfee rectangle is overlined, t z^k q^k^2+k generates the Durfee rectangle. Since the parts below the Durfee rectangle are less then k+1 in this case, they are generated by (-t zq^2)_k-1/ (zq)_k. Moreover, there could be no further overlined k+1, so n- 2 k-1_q,t generates the overpartition to the right of the Durfee rectangle. By replacing k by k+1 in the second sum, we obtain the desired identity.
§.§ The Lebesgue identity
Finally we also have a generalization of Sylvester's identity <cit.>, which is a finite version of the Lebesgue identity. We define
S(n;t,y,q) := 1 + ∑_j ≥ 1 ( t n-1 j-1 _q,t (-tyq)_j-1/(yq)_j-1 y^j q^j^2 + n j _q,t (-tyq)_j/(yq)_j y^j q^j^2 ).
For any positive integer n,
S(n;t,y,q) = (-tyq)_n/(yq)_n.
The new variable t allows us to deduce Lebesgue's identity from Theorem <ref>.
For |q| <1,
∑_k ≥ 0 (-tq)_k q^k(k+1)/2/(q)_k = (-tq^2 ;q^2)_∞/ (q;q^2)_∞.
In (<ref>), we replace q by q^2, t by tq, and y by 1/q. Then, by taking the limit as n goes to infinity, we find that
(-tq^2 ; q^2)_∞/ (q;q^2)_∞ = 1+ ∑_j ≥ 1 tq (-tq^3;q^2)_j-1 (-tq^2 ; q^2)_j-1/( (q^2;q^2)_j-1 (q;q^2)_j-1 ) q^2j^2-j + (-tq^3;q^2)_j (-tq^2 ;q^2)_j/( (q^2;q^2)_j (q;q^2)_j ) q^2j^2 - j
= 1 + ∑_j ≥ 1 tq (-tq^2)_2j-2/(q)_2j-2 q^2j^2 - j + (-tq^2)_2j/(q)_2j q^2j^2 - j
=1+ ∑_j ≥ 1 (-tq)_2j-1/(q)_2j-1 · tq(1-q^2j-1)/(1+tq) · q^2j^2 - j + (-tq)_2j/(q)_2j · (1+tq^2j+1)/(1+tq) · q^2j^2-j
=1+ ∑_j ≥ 1 (-tq)_2j-1/(q)_2j-1 ( 1 - (1+tq^2j)/(1+tq) ) q^2j^2 - j + (-tq)_2j/(q)_2j ( q^2j + (1-q^2j)/(1+tq) ) q^2j^2 - j
= 1+ ∑_j ≥ 1 (-tq)_2j-1/(q)_2j-1 q^2j^2 - j + (-tq)_2j/(q)_2j q^2j^2 + j
= ∑_j ≥ 0 (-tq)_j q^j(j+1)/2/(q)_j.
Thus we can see Theorem <ref> as a finite version of the Lebesgue identity. Two different finite versions were given by Rowell <cit.>, and Alladi and Berkovich <cit.>, respectively. Alladi <cit.> gave another proof of the Lebesgue identity in terms of partitions into distinct odd parts.
§ GENERALIZATIONS OF Q-BINOMIAL COEFFICIENTS IDENTITIES
In this section, we prove two-variable generalizations of Gaussian polynomial identities. As a first example, by tracking the number of parts, one can easily see that the following identity <cit.> holds:
∑_j=0^n q^j m+j j_q = n+m+1 m+1_q,
which is a q-analogue of the classical identity
∑_j=0^n \binom{m+j}{j} = \binom{n+m+1}{m+1}.
By tracking the number of overlined and non-overlined parts separately, we can prove the following two-parameter generalization of (<ref>).
For positive integers m and n,
m+n+1 m+1_q,t = 1 + ∑_j=1^n q^j ( m+j j_q,t + t m+j-1 j-1_q,t ).
By taking the limit when m →∞, we also find that
(-tq)_n/(q)_n = 1 + ∑_j=1^n q^j ( (-tq)_j/(q)_j + t (-tq)_j-1/(q)_j-1 ) = 1+ ∑_j=1^n (-t)_j q^j/(q)_j.
By setting q=t=1, we find that
D(m+1,n) = 1 + ∑_j=1^n ( D(m,j) + D(m,j-1) ).
Secondly, we find an over-Gaussian polynomial generalization of the identity <cit.>
∑_k=0^h n k_q m h-k_q q^(n-k)(h-k) = m+n h_q,
which is a q-analogue of the classical identity
∑_k=0^h \binom{n}{k} \binom{m}{h-k} = \binom{n+m}{h}.
For positive integers m, n ≥ h,
∑_k=0^h q^(n-k)(h-k) ( n k_q,t m h-k_q,t + t n-1 k_q,t m-1 h-k-1_q,t ) = m+n h_q,t.
For an overpartition generated by the right hand side, we consider the largest rectangle of the form (n-k) × (h-k) fitting inside the Ferrers diagram of , i.e. its Durfee rectangle of size (n-k) × (h-k). It is clear that such a k is uniquely determined, and as has at most h parts, k is between 0 and h. We have two cases according to whether the bottom right corner of the Durfee rectangle is overlined or not. In the case where it is non-overlined, the overpartition on the right side of the Durfee rectangle does fit inside a (m-h+k) × (h-k) rectangle and the overpartition below the Durfee rectangle is inside a (n-k) × k rectangle. The generating function of such partitions is q^(n-k)(h-k) n k_q,t m h-k_q,t . In the case where the bottom right corner is overlined, we can see that the overpartition on the right side should be inside a (m-h+k) × (h-k-1) rectangle and the overpartition below the Durfee rectangle fits inside a (n-k-1) × k rectangle, the generating function of such partitions equals t q^(n-k)(h-k) n-1 k_q,t m-1 h-k-1_q,t.
By setting q=t=1 and m=m+h in Proposition <ref>, we find that for n,m ≥ h >0,
D(m+n,h) = ∑_k=0^h ( D(n-k,k)D(m+k,h-k) + D(n-k-1,k) D(m+k,h-k-1) ).
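Both specializations can be verified numerically from the Delannoy recurrence, with the convention D(a,b) = 0 whenever a or b is negative (matching the vanishing of the corresponding over-(q,t)-binomial coefficients); a Python sketch:

from functools import lru_cache

@lru_cache(maxsize=None)
def D(m, n):
    if m < 0 or n < 0:
        return 0            # empty-box convention for out-of-range indices
    if m == 0 or n == 0:
        return 1
    return D(m - 1, n) + D(m, n - 1) + D(m - 1, n - 1)

for m in range(10):
    for n in range(1, 10):
        assert D(m + 1, n) == 1 + sum(D(m, j) + D(m, j - 1)
                                      for j in range(1, n + 1))

for h in range(1, 7):
    for m in range(h, 10):
        for n in range(h, 10):
            assert D(m + n, h) == sum(
                D(n - k, k) * D(m + k, h - k)
                + D(n - k - 1, k) * D(m + k, h - k - 1)
                for k in range(h + 1))
print("both Delannoy specializations verified")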
Finally, in <cit.>, Prellberg and Stanton used the following identity
1/(x)_n = ∑_m=0^n-1 ( n+m-1 2m _q q^2m^2 x^2m/(x)_m + n+m 2m+1 _q q^2m^2+m x^2m+1/(x)_m+1 )
to prove that for all n, the coefficients of
(1-q) 1/(q^n )_n + q
are non-negative.
By employing Durfee rectangle dissection according to whether the size of the Durfee rectangle is (m+1) × 2m or m × (2m-1) and whether the corner of Durfee rectangles is overlined or not, we can deduce an overpartition version.
For any positive integer n,
(-tzq)_n/ (zq)_n = 1+ ∑_m=1^n-1 ( n+m-1 2m _q,t + t n+m-2 2m-1 _q,t ) z^2m q^2m^2+2m (-tzq)_m/(zq)_m
+∑_m=1^nn+m-1 2m-1 _q,t z^2m-1 q^2m^2 - m (-tzq)_m/(zq)_m
+ ∑_m=1^nn+m-2 2m-2 _q,t t z^2m-1 q^2m^2-m (-tzq)_m-1/(zq)_m-1 .
By taking the limit n →∞, we find that
(-tzq)_∞/ (zq)_∞ = 1+ ∑_m=1^∞ ( (-tq)_2m/(q)_2m + t (-tq)_2m-1/(q)_2m-1 ) z^2m q^2m^2+2m (-tzq)_m/(zq)_m
+∑_m=1^∞ (-tq)_2m-1/(q)_2m-1 z^2m-1 q^2m^2 - m (-tzq)_m/(zq)_m
+ ∑_m=1^∞ (-tq)_2m-2/(q)_2m-2 t z^2m-1 q^2m^2-m (-tzq)_m-1/(zq)_m-1
=∑_m=0^∞ z^2m q^2m^2+2m (-tq)_2m (-tzq)_m/( (q)_2m (zq)_m ) (1+tzq^m+1)
+ ∑_m=1^∞ z^2m-1 q^2m^2 - m (-tq)_2m-1 (-tzq)_m/( (q)_2m-1 (zq)_m ) (1 + tzq^3m).
This can be viewed as an overpartition analogue of
(-zq)_∞ = ∑_k ≥ 0 z^k q^k(3k+1)/2 (1 + zq^2k+1 ) (-zq)_k/(q)_k,
which becomes Euler's pentagonal number theorem when z=-1.
Numerics suggest an overpartition analogue of the result of Prellberg and Stanton.
For all positive integers n, the coefficients of
(1-q) (-q^n)_n/(q^n)_n + q
are non-negative.
§ THE INVOLUTION PROOF OF THEOREM <REF>
We now prove Theorem <ref>, using an involution similar to Franklin's proof of Euler's Pentagonal Numbers Theorem.
For convenience, we allow non-overlined 0 as a part. Then, we can interpret the coefficient of q^N in
(-1)^k n k_q,1
as the number of overpartitions of N into “exactly” k parts ≤ n-k with weight (-1)^k. Let 𝒪_k,n be the set of above-mentioned overpartitions. For an overpartition λ∈𝒪_k,n with k ≤ n, we denote by π the overpartition below its Durfee square and by μ the conjugate of the overpartition on the right of the Durfee square. If the size of the Durfee square is d (≤ k), then π has k-d parts and μ has less than N-k-d parts. Define s(π) to be the smallest nonzero part of π and s(μ) to be the smallest part of μ. (Note that μ does not have 0 as a part). If there is no nonzero part in π or μ then we define s(π) = 0 or s(μ) =0 accordingly. We also define s_2 (π) (resp. s_2(μ)) to be the second smallest nonzero part of π (resp. μ).
We build a sign-reversing involution ϕ on 𝒪_n = ∪_0 ≤ k ≤ n𝒪_k,n as follows:
Case 1. If s(π) = s(μ) =0, then ϕ(λ)=λ. This case is invariant under this map.
Case 2. If s(π)=0 and s(μ)>0, or s(π) > s(μ), then ϕ(λ) is obtained by moving s(μ) below s(π). The resulting overpartition is in 𝒪_k+1,n since it now has k+1 parts and the size of the largest part is decreased by 1, so it does not violate the maximum part condition for 𝒪_k+1,n.
Case 3. If s(π) < s(μ),
Case 3.1 if s_2(π) = s(π) and s(π) is not overlined, we overline s(π) and s_2(π) and move s(π) to the right of s(μ).
Case 3.2 if s_2(π) > s(π) or s(π) is overlined, ϕ(λ) is obtained by moving s(π) to the right of s(μ).
In both cases, the resulting overpartition is in 𝒪_k-1,N.
Case 4. s(π) = s(μ). We have to consider different subcases according to whether s(π) and s(μ) are overlined or not.
For convenience, we define χ ( a ) = 1 if a is an overlined part and χ(a)=0 if a is a non-overlined part.
Case 4.1. If χ( s(μ) )=χ(s(π))=1, we move s(μ) below s(π) and un-overline both s(π) and s(μ). Note that the resulting overpartition is in 𝒪_k+1,n.
Case 4.2. If χ( s(μ)) =1 and χ(s(π))=0, we move s(μ) below s(π). The resulting overpartition is in 𝒪_k+1,n.
Case 4.3. If χ ( s(μ) ) = 0 and χ(s(π)) =1, we move s (π) to the right of s(μ) and this gives an overpartition in 𝒪_k-1,n.
Case 4.4. If χ ( s(μ) ) = χ(s(π))=0
Case 4.4.1. if s_2(π) = s(π), then overline s_2(π) and s(π) and move s (π) to the right of s(μ).
Case 4.4.2. if s_2(π) > s(π) or s_2 (π) =0, we move s (π) to the right of s(μ).
In both cases, this gives an overpartition in 𝒪_k-1,n.
Before proving ϕ is an involution, here we give one example.
For an overpartition (5,\overline{5},3,2,0) ∈𝒪_5,10, the size of the Durfee square is 3, π = (2,0), and μ = (2, \overline{2}). Thus, s(π)=s(μ)=2. Since χ(s(μ))=1, we move the \overline{2} in μ below s(π). As a result, we have a new overpartition with π = (2, \overline{2}, 0) and μ = (2), which gives the overpartition (4,4,3,2,\overline{2},0) ∈𝒪_6,10. Note that ϕ ((4,4,3,2,\overline{2},0)) = (5,\overline{5},3,2,0) ∈𝒪_5,10 as we expected.
Now we prove that this is true in general.
The map ϕ is an involution.
We need to prove that for every overpartition λ in 𝒪_n, we have ϕ(ϕ(λ))=λ. Here also, we need to distinguish several cases.
Case 1. If s(π) = s(μ) =0, then ϕ(λ)=λ, so ϕ(ϕ(λ))=λ.
Case 2. If s(π) > s(μ),
Case 2.a. if s_2(μ) > s(μ), then ϕ(λ) is obtained by moving s(μ) below s(π). Thus ϕ(λ) is in the case 3.2 and we obtain ϕ(ϕ(λ)) by moving s(μ) back to its initial place. Therefore ϕ(ϕ(λ))=λ.
Case 2.b. if s_2(μ) = s(μ) and s(μ) is overlined, then ϕ(λ) is in the case 4.3 and ϕ(ϕ(λ))=λ.
Case 2.c. if s_2(μ) = s(μ) and s(μ) is non-overlined, then ϕ(λ) is in the case 4.4.2 and ϕ(ϕ(λ))=λ.
Case 3. If s(π) < s(μ),
Case 3.a. if s_2(π) = s(π) and s(π) is not overlined, λ is in the case 3.1 and ϕ(λ) is obtained by overlining s(π) and s_2(π) and moving s(π) to the right of s(μ). Thus ϕ(λ) is in the case 4.1 and we obtain ϕ(ϕ(λ)) by moving s(π) back to its initial place and un-overlining s(π) and s_2(π) again. Therefore ϕ(ϕ(λ))=λ.
Case 3.b. if s_2(π) = s(π) and s(π) is overlined, λ is in the case 3.2 and ϕ(λ) is obtained by moving s(π) to the right of s(μ). Thus ϕ(λ) is in the case 4.2 and we obtain ϕ(ϕ(λ)) by moving s(π) back to its initial place. Therefore ϕ(ϕ(λ))=λ.
Case 3.c. if s_2(π) > s(π), λ is in the case 3.2, ϕ(λ) is in the case 2, and ϕ(ϕ(λ))=λ.
Case 4. If s(π) = s(μ),
Case 4.1. if χ( s(μ) )=χ(s(π))=1,
Case 4.1.a. if s_2(μ) = s(μ), then ϕ(λ) is obtained by moving s(μ) under s(π) and un-overlining both. Thus ϕ(λ) is in case 4.4.1 and we get ϕ(ϕ(λ)) by moving s(μ) back to its initial place and overlining s(π) and s(μ) again. Therefore ϕ(ϕ(λ))=λ.
Case 4.1.b. if s_2(μ) > s(μ), then ϕ(λ) is in case 3.1 and ϕ(ϕ(λ))=λ.
Case 4.2. if χ( s(μ)) =1 and χ(s(π))=0,
Case 4.2.a. if s_2(μ) = s(μ), then ϕ(λ) is obtained by moving s(μ) under s(π). Thus ϕ(λ) is in case 4.3 and ϕ(ϕ(λ))=λ.
Case 4.2.b. if s_2(μ) > s(μ), then ϕ(λ) is in case 3.2 and ϕ(ϕ(λ))=λ.
Case 4.3. if χ ( s(μ) ) = 0 and χ(s(π)) =1,
Case 4.3.a. if s_2(π) = s(π), then ϕ(λ) is obtained by moving s(π) to the right of s(μ). Thus ϕ(λ) is in case 4.2 and ϕ(ϕ(λ))=λ.
Case 4.3.b. if s_2(π) > s(π), then ϕ(λ) is in case 2 and ϕ(ϕ(λ))=λ.
Case 4.4. if χ ( s(μ) ) = χ(s(π))=0
Case 4.4.1. if s_2(π) = s(π), then ϕ(λ) is obtained by overlining s_2(π) and s(π) and moving s (π) to the right of s(μ). Thus ϕ(λ) is in case 4.1 and we get ϕ(ϕ(λ)) by moving s(π) back to its initial place and un-overlining s(π) and s(μ) again. Therefore ϕ(ϕ(λ))=λ.
Case 4.4.2. if s_2(π) > s(π), then ϕ(λ) is in case 2 and ϕ(ϕ(λ))=λ.
Thus in every case, ϕ(ϕ(λ))=λ.
Now we are finally ready to prove Theorem <ref>.
From the sign-reversing involution ϕ, we see that only square overpartitions survive after pairing λ ∈𝒪_n and ϕ(λ) ∈𝒪_n.
∑_ k = j^n-j (-1)^k =
0, if n is odd,
(-1)^j, if n is even,
as the summation runs over n-2j+1 consecutive integers from j. By considering overlined and non-overlined square overpartitions, we arrive at
∑_k=0^n (-1)^k n k_q,1 =
0, if n is odd,
1 + 2 ∑_j=1^n/2 (-1)^j q^j^2, if n is even,
as for j ≥ 1 each square overpartition occurs in an overlined and a non-overlined version, while the empty partition does not.
A proof of Corollary <ref> follows from the simple observation that the left-hand side of Corollary <ref> corresponds to having exactly k positive parts in the involution instead of k non-negative parts.
Finally, by setting q=t=1 in Theorem <ref>, we obtain the following.
For all positive integers n,
∑_k=0^n (-1)^k D(n-k,k) =
0, if n is odd,
-1, if n ≡ 2 (mod 4),
1, if n ≡ 0 (mod 4).
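Theorem <ref> and the corollary above can be checked by computer for small n, using the recurrence for n k_q,1; an illustrative Python sketch:

from functools import lru_cache

@lru_cache(maxsize=None)
def over_q(m, n):
    """Coefficient tuple (in q) of [m+n, n]_{q,1}, from the Pascal-type
    recurrence specialised at t = 1."""
    if m == 0 or n == 0:
        return (1,)
    out = [0] * (m * n + 1)
    for i, v in enumerate(over_q(m, n - 1)):
        out[i] += v
    for src in (over_q(m - 1, n), over_q(m - 1, n - 1)):
        for i, v in enumerate(src):
            out[i + n] += v
    return tuple(out)

for n in range(13):
    alt = [0] * (n * n // 4 + 1)          # sum_k (-1)^k [n, k]_{q,1}
    for k in range(n + 1):
        for i, v in enumerate(over_q(n - k, k)):
            alt[i] += (-1) ** k * v
    if n % 2 == 1:
        assert all(v == 0 for v in alt)
    else:
        theta = [0] * len(alt)            # truncated theta function
        for j in range(-(n // 2), n // 2 + 1):
            theta[j * j] += (-1) ** j
        assert alt == theta
    # Delannoy specialisation: sum_k (-1)^k D(n-k, k) at q = 1
    s = sum((-1) ** k * sum(over_q(n - k, k)) for k in range(n + 1))
    assert s == (0 if n % 2 else (1 if n % 4 == 0 else -1))
print("alternating sums verified for n <= 12")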
§ (Q,T)-LOG-CONCAVITY OF THE OVER Q-BINOMIAL COEFFICIENTS
In this section, we prove Theorem <ref> by constructing an involution. Before starting the proof, we introduce some notation. Let 𝒫 denote the set of overpartitions of non-negative integers, and 𝒫 (m,n) the set of overpartitions fitting inside an m × n rectangle. We also write #_o (λ) for the number of overlined parts in λ and |λ| for the weight of λ (i.e. the sum of its parts).
To prove Theorem <ref>, we want to find an injection ϕ from 𝒫 (n-k+1, k-1 ) ×𝒫 ( n-ℓ-1, ℓ +1 ) to
𝒫 (n-k, k ) ×𝒫 ( n-ℓ, ℓ ), such that, if ϕ(λ,μ)=(η,ρ), then |λ|+|μ| = |η|+|ρ| and #_o (λ)+ #_o (μ) = #_o (η) + #_o (ρ).
We generalize the proof in <cit.> to overpartitions. We define two maps 𝒜 and ℒ on 𝒫×𝒫, and take ϕ to be the restriction of ℒ∘𝒜 to the domain 𝒫 (n-k+1, k-1 ) ×𝒫 ( n-ℓ-1, ℓ +1 ).
We obtain the injectivity of ϕ by showing that
* 𝒜 and ℒ are involutions on 𝒫×𝒫,
* 𝒜( 𝒫 (n-k+1, k-1) ×𝒫 ( n-ℓ-1, ℓ +1 ) ) ⊂𝒫 (n-k, k-1) ×𝒫 (n-ℓ, ℓ +1 ),
* ℒ( 𝒫 (n-k, k-1) ×𝒫 ( n-ℓ, ℓ +1) ) ⊂𝒫 (n-k, k) ×𝒫 (n-ℓ, ℓ ).
Let us start by defining 𝒜. For a given overpartition pair (λ, μ) ∈𝒫×𝒫, we define I as the largest integer satisfying
λ_I - μ_I+1 ≥ℓ -k +1, if λ_I is not overlined,
ℓ -k +2, if λ_I is overlined,
where we define μ_i+1 = 0 if λ_i > 0 but the number of parts in μ is less than i+1. If there is no such I, we define I=0. Now we define
𝒜 (λ, μ) = (γ, τ),
where
γ := ( μ_1 + (ℓ -k +1), …, μ_I + (ℓ -k +1), λ_I+1, λ_I+2, … ),
τ := ( λ_1 - (ℓ -k +1), …, λ_I - (ℓ -k +1), μ_I+1, μ_I+2, … ).
Note that if λ_i (resp. μ_i), i ≤ I, was overlined (resp. non-overlined) in λ (resp. μ), then λ_i - (ℓ -k +1) (resp. μ_i + (ℓ -k +1)) is overlined (resp. non-overlined) in τ (resp. γ).
Before defining ℒ, let us introduce two maps 𝒮 and 𝒞 on 𝒫×𝒫 by
𝒮 (λ, μ) := (μ, λ) and 𝒞 (λ, μ) := (λ^c, μ^c).
Then we define ℒ as
ℒ := 𝒮∘𝒞∘𝒜∘𝒞∘𝒮.
We now want to verify that (i) is satisfied. Since 𝒮 and 𝒞 are involutions on 𝒫×𝒫, we only need to show that 𝒜 is an involution.
First of all, let us verify that 𝒜 is well defined, i.e. if 𝒜 (λ, μ) = (γ, τ) and (λ, μ) ∈𝒫×𝒫, then γ and τ are also overpartitions.
By definition ( μ_1 + (ℓ -k +1), …, μ_I + (ℓ -k +1)), (λ_I+1, λ_I+2, … ), (λ_1 - (ℓ -k +1), …, λ_I - (ℓ -k +1)) and (μ_I+1, μ_I+2, … ) are overpartitions so we only need to check that
μ_I + (ℓ -k +1) ≥ λ_I+1 if μ_I + (ℓ -k +1) is not overlined
λ_I+1 +1 if μ_I + (ℓ -k +1) is overlined,
and
λ_I - (ℓ -k +1) ≥ μ_I+1 if λ_I - (ℓ -k +1) is not overlined
μ_I+1 +1 if λ_I - (ℓ -k +1) is overlined.
Equation (<ref>) is clear by (<ref>). Let us turn to (<ref>).
By definition of I, we have
μ_I+2 + (ℓ -k +1) ≥λ_I+1 +1 , if λ_I+1 is not overlined,
λ_I+1, if λ_I+1 is overlined,
If μ_I is not overlined, then μ_I ≥μ_I+2, so
μ_I + (ℓ -k +1) ≥ λ_I+1.
If μ_I is overlined, then by definition of an overpartition μ_I ≥μ_I+2 +1, so
μ_I + (ℓ -k +1) ≥ λ_I+1 + 1.
This completes the verification of (<ref>).
Now we want to check that 𝒜 is an involution. Let (λ, μ) ∈𝒫×𝒫 and (γ,τ) = 𝒜 (λ,μ). We want to show that 𝒜 (γ,τ) = (λ,μ).
By definition of I and 𝒜, the parts with indices ≥ I+1 of γ (resp. τ) are exactly the same as those of λ (resp. μ) and will therefore not be moved when we apply 𝒜 again. Therefore the only thing left to check is that
γ_I -τ_I+1≥ℓ -k +1, if γ_I is not overlined,
ℓ -k +2, if γ_I is overlined,
that is that
μ_I +(ℓ -k+1) -μ_I+1≥ℓ -k +1, if μ_I is not overlined,
ℓ -k +2, if μ_I is overlined,
which is clear by definition of an overpartition.
Thus the I of (γ,τ) is the same as the one of (λ,μ), and 𝒜(γ,τ) = (λ,μ).
The point (i) is proved.
Then, point (ii) is obvious from the definition of 𝒜.
Finally let us verify (iii). We have, by definition of 𝒮, 𝒞 and 𝒜,
𝒮( 𝒫 (n-k, k-1) ×𝒫 ( n-ℓ, ℓ +1) ) = 𝒫 ( n-ℓ, ℓ +1) ×𝒫 (n-k, k-1),
𝒞( 𝒫 ( n-ℓ, ℓ +1) ×𝒫 (n-k, k-1) ) = 𝒫 (ℓ +1, n-ℓ) ×𝒫 (k-1, n-k),
𝒜( 𝒫 (ℓ +1, n-ℓ) ×𝒫 (k-1, n-k) ) ⊂𝒫 (ℓ, n-ℓ) ×𝒫 (k, n-k),
𝒞( 𝒫 (ℓ, n-ℓ) ×𝒫 (k, n-k) ) = 𝒫 (n-ℓ, ℓ) ×𝒫 (n-k,k),
𝒮( 𝒫 (n-ℓ, ℓ) ×𝒫 (n-k,k) ) = 𝒫 (n-k,k) ×𝒫 (n-ℓ, ℓ).
Thus (iii) is satisfied.
Here we give an example to illustrate the map ϕ = ℒ∘𝒜 = 𝒮∘𝒞∘𝒜∘𝒞∘𝒮∘𝒜.
When n=10, k=4, and ℓ=5, we consider the partition pair (λ, μ) ∈𝒫 (7,3) ×𝒫 (4,6), where
λ = (7,6,4) and μ = (4,4,3,3,2,2).
Then, we see that
(λ, μ) ↦^𝒜 ( (6,6,4), (5,4,3,3,2,2) ) ↦^𝒮 ( (5,4,3,3,2,2), (6,6,4) ) ↦^𝒞 ( (6,6,4,2,1), (3,3,3,3,2,2) )
↦^𝒜 ( (5,5,4,2,1), (4,4,3,3,2,2) ) ↦^𝒞 ( (5,4,3,3,2), (6,6,4,2) ) ↦^𝒮 ( (6,6,4,2), (5,4,3,3,2) ),
which is in 𝒫 (6,4) ×𝒫 (5, 5) as desired.
If we forbid overlined parts (i.e. if we set t=0), the above proof becomes Butler's proof of (<ref>).
From Theorem <ref>, we can deduce several interesting corollaries.
The over-(q,t)-binomial coefficients are (q,t)-log-concave, namely for all 0<k < n,
n k_q,t^2- n k-1_q,t n k+1_q,t
has non-negative coefficients as a polynomial in q and t.
By setting t=0, we obtain Butler's result on the q-log-concavity of q-binomial coefficients.
Recall that we have shown that ℒ is an injection from 𝒫 (n-k, k-1) ×𝒫 ( n-ℓ, ℓ +1) to 𝒫 (n-k, k) ×𝒫 (n-ℓ, ℓ ). From this we obtain the following.
For all 0<k ≤ℓ < n,
n k_q,t n ℓ_q,t - n-1 k-1_q,t n+1 ℓ+1 _q,t
has non-negative coefficients as a polynomial in q and t.
By setting q=t=1 in Theorems <ref> and <ref>, we deduce the following result on Delannoy numbers.
For all 0 <k ≤ℓ <n, we have
D(n-k,k) D(n-ℓ,ℓ) ≥ D(n-k+1,k-1) D(n-ℓ-1,ℓ+1),
D(n-k,k) D(n-ℓ,ℓ) ≥ D(n-k,k-1) D(n-ℓ,ℓ+1).
Now, setting ℓ=k and n=n+k in Corollary <ref> yields the log-concavity of Delannoy numbers, which also implies their unimodality.
For all n>k>0, the Delannoy numbers D(n,k) satisfy
D(n,k)^2 ≥ D(n+1,k-1) D(n-1,k+1),
D(n,k)^2 ≥ D(n,k-1) D(n,k+1).
In particular, the Delannoy numbers D(n,k) are log-concave.
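A quick numerical confirmation of this corollary (illustrative Python):

from functools import lru_cache

@lru_cache(maxsize=None)
def D(m, n):
    if m == 0 or n == 0:
        return 1
    return D(m - 1, n) + D(m, n - 1) + D(m - 1, n - 1)

for n in range(2, 40):
    for k in range(1, n):
        assert D(n, k) ** 2 >= D(n + 1, k - 1) * D(n - 1, k + 1)
        assert D(n, k) ** 2 >= D(n, k - 1) * D(n, k + 1)
print("log-concavity of D(n, k) verified for n < 40")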
Similarly, by setting q=1 and t=q, in Theorems <ref> and <ref>, we deduce the following result on Sagan's q-Delannoy numbers.
For all 0 <k ≤ℓ <n, we have
D_q(n-k,k) D_q(n-ℓ,ℓ) ≥ D_q(n-k+1,k-1) D_q(n-ℓ-1,ℓ+1),
D_q(n-k,k) D_q(n-ℓ,ℓ) ≥ D_q(n-k,k-1) D_q(n-ℓ,ℓ+1).
And by setting ℓ=k and n=n+k in Corollary <ref>, we obtain the q-log-concavity of Sagan's q-Delannoy numbers.
For all n>k>0, Sagan's q-Delannoy numbers D_q(n,k) satisfy that
D_q (n,k)^2 - D_q(n+1,k-1) D_q(n-1,k+1)
and D_q (n,k)^2 - D_q(n,k-1) D_q(n,k+1)
have non-negative coefficients as polynomials in q. In particular Sagan's q-Delannoy numbers D_q(n,k) are q-log-concave.
Moreover, we can also generalize Corollary 4.5 of <cit.> to over-(q,t)-binomial coefficients.
For 0 ≤ k-r ≤ k ≤ℓ≤ℓ+r ≤ n,
n k_q,t n ℓ_q,t - n k-r_q,t n ℓ+r_q,t
has non-negative coefficients as a polynomial in t and q.
The proof is similar to the one in <cit.>.
By Theorem <ref>, all the terms of the telescoping sum
n k_q,t n ℓ_q,t - n k-r_q,t n ℓ+r_q,t
= ∑_i=0^r-1( n k-i_q,t n ℓ+i_q,t - n k-i-1_q,t n ℓ+i+1_q,t)
have non-negative coefficients.
As usual, setting q=t=1 yields some interesting result on Delannoy numbers.
For 0 ≤ k-r ≤ k ≤ℓ≤ℓ+r ≤ n,
D(n-k,k) D(n-ℓ,ℓ) ≥ D(n-k+r,k-r) D(n-ℓ-r,ℓ+r).
§ UNIMODALITY CONJECTURES
We now present a few conjectures and observations about the unimodality of over-(q,t)-binomial coefficients.
Recall that a polynomial p(x)= a_0 + a_1 x + ⋯ + a_r x^r is unimodal if there is an integer ℓ (called the peak) such that
a_0 ≤ a_1 ≤⋯≤ a_ℓ-1≤ a_ℓ≥ a_ℓ+1≥⋯≥ a_r.
It is well-known that Gaussian polynomials <cit.> and q-multinomial coefficients <cit.> are unimodal.
We extend this definition to polynomials in two variables. We say that a polynomial P(q,t) = ∑_k=0^r ∑_n=0^s a_k,nt^kq^n is doubly unimodal if
* for every fixed k ∈{0, … , r}, the coefficient of t^k in P(q,t) is unimodal in q, that is there exists an integer ℓ such that
a_k,0≤ a_k,1≤⋯≤ a_k,ℓ-1≤ a_k,ℓ≥ a_k,ℓ+1≥⋯≥ a_k,s,
* for every fixed n ∈{0, … , s}, the coefficient of q^n in P(q,t) is unimodal in t, that is there exists an integer ℓ' such that
a_0,n≤ a_1,n≤⋯≤ a_ℓ'-1,n≤ a_ℓ',n≥ a_ℓ'+1,n≥⋯≥ a_r,n.
Computer experiments suggest that the following conjectures are true.
For every positive integers m and n, the over-(q,t)-binomial coefficient m +n n _q,t is doubly unimodal.
By the formula (<ref>) and using the fact that q-multinomial coefficients are unimodal, we can easily deduce that part (i) of the definition is satisfied. Therefore the challenging part of the conjecture is to prove that for every N, the coefficient of q^N in m n_q,t is unimodal in t.
For every positive integers m and n, m +n n _q,1 is unimodal in q.
Conjecture <ref> doesn't immediately imply Conjecture <ref>, as the peaks in q are not the same for each t^k. Therefore, even if they might be related, the two conjectures are of independent interest.
We illustrate our conjectures for m=n=4 in Table 1.
Pak and Panova <cit.> recently proved that the classical q-binomial coefficients are strictly unimodal. Experiments show that it should also be the case for m+n n _q,1, and for the coefficients of q^N in m+n n_q,t (as a polynomial in t). However it is not the case for the coefficients of t^k in m+n n_q,t (as a polynomial in q).
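The following Python sketch (ours) reproduces a fragment of these experiments, checking double unimodality of m+n n_q,t for all m, n ≤ 6:

from functools import lru_cache

def merged(P, Q, dk, dN):
    out = dict(P)
    for (k, N), c in Q.items():
        out[(k + dk, N + dN)] = out.get((k + dk, N + dN), 0) + c
    return out

@lru_cache(maxsize=None)
def over_qt(m, n):
    """{(k, N): coeff} of [m+n, n]_{q,t}, via the recurrence of Section 2."""
    if m == 0 or n == 0:
        return {(0, 0): 1}
    P = merged(over_qt(m, n - 1), over_qt(m - 1, n), 0, n)
    return merged(P, over_qt(m - 1, n - 1), 1, n)

def unimodal(seq):
    while seq and seq[-1] == 0:     # strip trailing and leading zeros
        seq.pop()
    while seq and seq[0] == 0:
        seq.pop(0)
    i = 0
    while i + 1 < len(seq) and seq[i] <= seq[i + 1]:
        i += 1
    return all(seq[j] >= seq[j + 1] for j in range(i, len(seq) - 1))

for m in range(1, 7):
    for n in range(1, 7):
        C, K = over_qt(m, n), min(m, n)
        for k in range(K + 1):      # coefficient of t^k, as a polynomial in q
            assert unimodal([C.get((k, N), 0) for N in range(m * n + 1)])
        for N in range(m * n + 1):  # coefficient of q^N, as a polynomial in t
            assert unimodal([C.get((k, N), 0) for k in range(K + 1)])
print("double unimodality confirmed for all 1 <= m, n <= 6")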
§ ACKNOWLEDGEMENTS
The authors thank Krishna Alladi, Manjul Bhargava, Alex Berkovich, Bruce Berndt, and Ali Uncu for their valuable comments.
30
Alladi1
K. Alladi, A new combinatorial study of the Rogers-Fine identity and a related partial theta series, Int. J. Number Theory 5 (2009), 1311–1320.
Alladi3
K. Alladi, V. E. Hoggatt, On Tribonacci numbers and related functions, Fibonacci Quart. 15 (1977), 42–45.
Alladi2
K. Alladi, Partitions with non-repeating odd parts and combinatorial identities, Ann. Comb. 20 (2016), 1–20.
AllBer
K. Alladi, A. Berkovich, New polynomial analogues of Jacobi's triple product and Lebesgue's identities, Adv. Appl. Math. 32 (2004), 801–824.
Abook
G. E. Andrews, The Theory of Partitions, Addison–Wesley, Reading, MA, 1976; reissued: Cambridge University Press,
Cambridge, 1998.
And_aGauss
G. E. Andrews, a-Gaussian polynomials and finite Rogers-Ramanujan identities, In Theory and Applications of Special Functions: A Volume Dedicated to Mizan Rahman, M. Ismail and E. Koelink eds., 39–60. Springer, New York, 2005.
Delannoy
C. Banderier, S. Schwer, Why Delannoy numbers?,
J. Statist. Plann. Inference 135 (2005), no. 1, 40–54.
Butler
L. M. Butler, The q-log-concavity of q-binomial coefficients,
J. Combin. Theory Ser. A 54 (1990), no. 1, 54–63.
LC
S. Corteel, J. Lovejoy, Overpartitions, Trans. Amer. Math. Soc. 356 (2004), no. 4, 1623–1635.
DK
J. Dousse and B. Kim, An overpartition analogue of q-binomial coefficients, Ramanujan J. 42 (2017), 267–283.
Fine
N. J. Fine, Basic Hypergeometric Series and Applications, American Mathematical Society, Providence, RI, 1988.
GR
G. Gasper and M. Rahman, Basic Hypergeometric Series, 2nd Edition, Cambridge Univ. Press, Cambridge, 2004.
Pak
I. Pak, G. Panova, Strict unimodality of q-binomial coefficients,
Comptes Rendus Mathématiques 351 (2013), 415–418.
PS
T. Prellberg and D. Stanton, Proof of a monotonicity conjecture,
J. Combin. Theory Ser. A 103 (2003), no. 2, 377–381.
Ramirez
J. L. Ramirez, Incomplete Tribonacci numbers and polynomials, J. Integer Seq. 17 (2014), article 14.4.2.
Rowell
M. Rowell, A new exploration of the Lebesgue identity, Int. J. Number Theory 6 (2010), 785–798.
Sagan
B. Sagan, Unimodality and the reflection principle.
Ars Combin. 48 (1998), 65–72.
Syl
J. J. Sylvester, A constructive theory of partitions, arranged in three acts, an interact, and an exodion, in The Collected Papers of J. J. Sylvester, Vol. 3, Cambridge University Press, London, 1–83; reprinted by Chelsea, New York, 1973.
Syl2
J. J. Sylvester, Proof of the hitherto undemonstrated
fundamental theorem of invariants, Philosophical Magazine 5 (1878), 178–188.
Za
D. Zagier, Vassiliev invariants and a strange identity related to the Dedekind eta-function, Topology 40 (2001), 945–960.
|
http://arxiv.org/abs/1701.07681v1 | 20170126130948 | Fast and Accurate Time Series Classification with WEASEL | [
"Patrick Schäfer",
"Ulf Leser"
] | cs.DS | [
"cs.DS",
"cs.LG",
"stat.ML"
] |
Fast and Accurate Time Series Classification with WEASEL
Patrick Schäfer
Humboldt University of Berlin
Berlin, Germany
patrick.schaefer@hu-berlin.de
Ulf Leser
Humboldt University of Berlin
Berlin, Germany
leser@informatik.hu-berlin.de
December 30, 2023
======================================================================================================================================================================================================================================================
Time series (TS) occur in many scientific and commercial applications, ranging from earth surveillance to industry automation to the smart grids. An important type of TS analysis is classification, which can, for instance, improve energy load forecasting in smart grids by detecting the types of electronic devices based on their energy consumption profiles recorded by automatic sensors. Such sensor-driven applications are very often characterized by (a) very long TS and (b) very large TS datasets needing classification. However, current methods to time series classification (TSC) cannot cope with such data volumes at acceptable accuracy; they are either scalable but offer only inferior classification quality, or they achieve state-of-the-art classification quality but cannot scale to large data volumes.
In this paper, we present WEASEL (Word ExtrAction for time SEries cLassification), a novel TSC method which is both scalable and accurate. Like other state-of-the-art TSC methods, WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set.
On the popular UCR benchmark of 85 TS datasets, WEASEL is more accurate than the best current non-ensemble algorithms at orders-of-magnitude lower classification and training times, and it is almost as accurate as ensemble classifiers, whose computational complexity makes them inapplicable even for mid-size datasets. The outstanding robustness of WEASEL is also confirmed by experiments on two real smart grid datasets, where it out-of-the-box achieves almost the same accuracy as highly tuned, domain-specific methods.
§ INTRODUCTION
A (one-dimensional) time series (TS) is a collection of values sequentially ordered in time. TS emerge in many scientific and commercial applications, like weather observations, wind energy forecasting, industry automation, mobility tracking, etc. One driving force behind their rising importance is the sharply increasing use of sensors for automatic and high resolution monitoring in domains like smart homes <cit.>, starlight observations <cit.>, machine surveillance <cit.>, or smart grids <cit.>.
Research in TS is diverse and covers topics like storage, compression, clustering, etc.; see <cit.> for a survey. In this work, we study the problem of time series classification (TSC): Given a concrete TS, the task is to determine to which of a set of predefined classes this TS belongs to, the classes typically being characterized by a set of training examples. Research in TSC has a long tradition <cit.>, yet progress was focused on improving classification accuracy and mostly neglected scalability, i.e., the applicability in areas with very many and/or very long TS. However, many of today's sensor-driven applications have to deal with exactly these data, which makes methods futile that do not scale, irrespective of their quality on small datasets. Instead, TSC methods are required that are both very fast and very accurate.
As a concrete example, consider the problem of classifying energy consumption profiles of home devices (a dish washer, a washing machine, a toaster etc.). In smart grids, every device produces a unique profile as it consumes energy over time; profiles are unequal between different types of devices, but rather similar for devices of the same type (see Figure <ref>). The resulting TSC problem is as follows: Given an energy consumption profile (which is a TS), determine the device type based on a set of exemplary profiles per type. For an energy company such information helps to improve the prediction of future energy consumption <cit.>.
For approaching these kinds of problems, algorithms that are very fast and very accurate are required. Regarding scalability, consider millions of customers each having dozens of devices, each recording one measurement per second. To improve forecasting, several millions of classifications of time series have to be performed every hour, each considering thousands of measurements. Even when optimizations like TS sampling or adaptive re-classification intervals are used, the number of classifications remains overwhelming and can only be approached with very fast TSC methods.
Regarding accuracy, it should be considered that any improvement in prediction accuracy may directly transform into substantial monetary savings. For instance, <cit.> report that a small improvement in accuracy (below 10%) can save tens of millions of dollars per year and company. However, achieving high accuracy classification of home device energy profiles is non trivial due to different usage rhythms (e.g., where in a dishwasher cycle has the TS been recorded?), differences in the profiles between concrete devices of the same type, and noise within the measurements, for instance because of the usage of cheap sensors.
Current TSC methods are not able to deal with such data at sufficient accuracy and speed. Several high accuracy classifiers, such as Shapelet Transform (ST) <cit.>, have bi-quadratic complexity (power of 4) in the length of the TS; even methods with quadratic classification complexity are infeasible. The current most accurate method (COTE <cit.>) is even an ensemble of dozens of core classifiers, many of which have a quadratic, cubic or bi-quadratic complexity. On the other hand, fast TSC methods, such as BOSS VS <cit.> or Fast Shapelets <cit.>, perform much worse in terms of accuracy compared to the state of the art <cit.>. As a concrete example, consider the (actually rather small) PLAID benchmark dataset <cit.>, consisting of 1074 profiles of 501 measurements each stemming from 11 different devices. Figure <ref> plots classification times (in log scale) versus accuracy for seven state-of-the-art TSC methods and the novel algorithm presented in this paper, WEASEL. Euclidean distance (ED) based methods are the fastest, but their accuracy is far below standard. Dynamic Time Warping methods (DTW, DTW CV) are common baselines and show a moderate runtime of 10 to 100 ms but also low accuracy. Highly accurate classifiers such as ST <cit.> and BOSS <cit.> require orders-of-magnitude longer prediction times. For this rather small dataset, the COTE ensemble classifier has not yet terminated training after eight CPU weeks (Linux user time), so we cannot report its accuracy yet. In summary, the fastest methods for this dataset require around 1ms per prediction, but have an accuracy below 80%; the most accurate methods achieve 85%-88% accuracy, but require 80ms up to 32sec for each TS.
In this paper, we propose a new TSC method called WEASEL: Word ExtrAction for time SEries cLassification. WEASEL is both very fast and very accurate; for instance, on the dataset shown in Figure <ref> it achieves the highest accuracy while being the third-fastest algorithm (requiring only 4ms per TS). Like several other methods, WEASEL conceptually builds on the so-called bag-of-patterns approach: It moves a sliding window over a TS and extracts discrete features per window which are subsequently fed into a machine learning classifier. However, the concrete way of constructing and filtering features in WEASEL is completely different from any previous method. First, WEASEL considers differences between classes already during feature discretization instead of relying on fixed, data-independent intervals; this leads to a highly discriminative feature set. Second, WEASEL uses windows of varying lengths and also considers the order of windows instead of considering each fixed-length window as independent feature; this allows WEASEL to better capture the characteristics of each classes. Third, WEASEL applies aggressive statistical feature selection instead of simply using all features for classification; this leads to a much smaller feature space and heavily reduced runtime without impacting accuracy. The resulting feature set is highly discriminative, which allows us to use fast logistic regression instead of more elaborated, but also more runtime-intensive methods.
We performed a series of experiments to assess the impact of (each of) these improvements. First, we evaluated WEASEL on the popular UCR benchmark set of 85 TS collections <cit.> covering a variety of applications, including motion tracking, ECG signals, chemical spectrograms, and starlight-curves. WEASEL outperforms the best core-classifiers in terms of accuracy while also being one of the fastest methods; it is almost as accurate as the current overall best method (COTE) but multiple orders-of-magnitude faster in training and in classification. Second, for the concrete use case of energy load forecasting, we applied WEASEL to two real-live datasets and compared its performance to the other general TSC methods and to algorithms specifically developed and tuned for this problem. WEASEL again outperforms all other TS core-classifiers in terms of accuracy while being very fast, and achieves an accuracy on-par with the domain-specific methods without any domain adaptation.
The rest of this paper is organized as follows: In Section 2 we present related work. Section 3 briefly recaps bag-of-patterns classifiers and feature discretization using Fourier transform. In Section 4 we present WEASEL's novel way of feature generation and selection. Section 5 presents evaluation results. The paper concludes with Section 6.
§ RELATED WORK
With time series classification (TSC) we denote the problem of assigning a given TS to one of a predefined set of classes. TSC has applications in many domains; for instance, it is applied to determine the species of a flying insect based on the acoustic profile generated from its wing-beat <cit.>, or for identifying the most popular TV shows from smart meter data <cit.>.
The techniques used for TSC can be broadly categorized into two classes: whole series-based methods and feature-based methods <cit.>. Whole series similarity measures make use of a point-wise comparison of entire TS. These include 1-NN Euclidean Distance (ED) or 1-NN Dynamic Time Warping (DTW) <cit.>, which is commonly used as a baseline in comparisons <cit.>. Typically, these techniques work well for short TS but fail for noisy or long TS <cit.>. Furthermore, DTW has a computational complexity of O(n^2) for TS of length n. Techniques like early pruning of candidate TS with cascading lower bounds can be applied to reduce the effective runtime <cit.>. Another speed-up technique first clusters the input TS based on the fast ED and later analyzes the clusters using the triangle inequality <cit.>.
In contrast, feature-based classifiers rely on comparing features generated from substructures of TS. The most successful approaches can be grouped as either using shapelets or bag-of-patterns (BOP). Shapelets are defined as TS subsequences that are maximally representative of a class. In <cit.> a decision tree is built on the distance to a set of shapelets. The Shapelet Transform (ST) <cit.>, which is the most accurate shapelet approach according to a recent evaluation <cit.>, uses the distance to the shapelets as input features for an ensemble of different classification methods. In the Learning Shapelets (LS) approach <cit.>, optimal shapelets are synthetically generated. The drawback of shapelet methods is the high computational complexity resulting in rather long training and classification times.
The alternative approach within the class of feature-based classifiers is the bag-of-patterns (BOP) model <cit.>. Such methods break up a TS into a bag of substructures, represent these substructures as discrete features, and finally build a histogram of feature counts as basis for classification. The first published BOP model (which we abbreviate as BOP-SAX) uses sliding windows of fixed lengths and transforms these measurements in each window into discrete features using Symbolic Aggregate approXimation (SAX) <cit.>. Classification is implemented as 1-NN classifier using Euclidean distance of feature counts as distance measure. SAX-VSM <cit.> extends BOP-SAX with tf-idf weighing of features and uses the Cosine distance; furthermore, it builds only one feature vector per class instead of one vector per sample, which drastically reduces runtime. Another current BOP algorithm is the TS bag-of-features framework (TSBF) <cit.>, which first extracts windows at random positions with random lengths and next builds a supervised codebook generated from a random forest classifier. In our prior work, we presented the BOP-based algorithm BOSS (Bag-of-SFA-Symbols) <cit.>, which uses the Symbolic Fourier Approximation (SFA) <cit.> instead of SAX. In contrast to shapelet-based approaches, BOP-based methods typically have only linear computational complexity for classification.
The most accurate current TSC algorithms are ensembles. These classify a TS using a set of different core classifiers and then aggregate the results using techniques like bagging or majority voting. The Elastic Ensemble (EE PROP) classifier <cit.> uses 11 whole series classifiers including DTW CV, DTW, LCSS and ED. The COTE ensemble <cit.> is based on 35 core-TSC methods including EE PROP and ST. If designed properly, ensembles combine the advantages of their core classifiers, which often leads to superior results. However, the price to pay is excessive runtime requirements for training and for classification, as each core classifier is used independently of all others.
§ TIME SERIES, BOP, AND SFA
The method we introduce in this paper follows the BOP approach and uses truncated Fourier transformations as first step on feature generation. In this section we present these fundamental techniques, after formally introducing time series and time series classification.
In this work, a time series (TS) T is a sequence of n∈ℕ real values, T=(t_1,…,t_n), t_i∈ℝ[Extensions to multivariate time series are discussed in Section 6]. As we primarily address time series generated from automatic sensors with a fixed sampling rate, we ignore time stamps. Given a TS T, a window S of length w is a subsequence with w contiguous values starting at offset a in T, i.e., S(a,w)=(t_a,…,t_a+w-1) with 1≤ a≤ n-w+1. We associate each TS with a class label y∈ Y from a predefined set Y. Time series classification (TSC) is the task of predicting a class label for a TS whose label is unknown. A TS classifier is a function that is learned from a set of labeled time series (the training data), takes an unlabeled time series as input and outputs a label.
Algorithms following the BOP model build this classification function by (1) extracting windows from a TS, (2) transforming each window of real values into a discrete-valued word (a sequence of symbols over a fixed alphabet), (3) building a feature vector from word counts, and (4) finally using a classification method from the machine learning repertoire on these feature vectors. Figure <ref> illustrates these steps from a raw time series to a BOP model using overlapping windows.
BOP methods differ in the concrete way of transforming a window of real-valued measurements into discrete words (discretization). WEASEL builds upon SFA which works as follows <cit.>: (1) Values in each window are normalized to have standard deviation of 1 to obtain amplitude invariance. (2) Each normalized window of length w is subjected to dimensionality reduction by the use of the truncated Fourier Transform, keeping only the first l<w coefficients for further analysis. This step acts as a low pass filter, as higher order Fourier coefficients typically represent rapid changes like dropouts or noise. (3) Each coefficient is discretized to a symbol of an alphabet of fixed size c to achieve further robustness against noise. Figure <ref> exemplifies this process.
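To make these three steps concrete, the following Python sketch maps one window to an SFA-style word. The function name, the interleaving of real and imaginary coefficients into the first l values, and the precomputed bins array (one row of c-1 boundaries per retained value) are our own illustrative choices, not the reference implementation.

import numpy as np

def sfa_word(window, l, bins):
    # bins: (l, c-1) array of learned discretization boundaries per value
    w = (window - window.mean()) / (window.std() + 1e-12)   # (1) normalize
    coeffs = np.fft.rfft(w)                                 # (2) truncated DFT,
    vals = np.empty(l)                                      #     acting as a
    vals[0::2] = coeffs.real[:(l + 1) // 2]                 #     low-pass filter
    vals[1::2] = coeffs.imag[:l // 2]
    return tuple(int(np.digitize(v, b))                     # (3) one symbol
                 for v, b in zip(vals, bins))               #     per value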
§ WEASEL
In this section, we present our novel TSC method WEASEL (Word ExtrAction for time SEries cLassification). WEASEL specifically addresses the major challenges any TSC method has to cope with when being applied to data from sensor readouts, which can be summarized as follows (using home device classification as an example):
Invariance to noise: TS can be distorted by (ambiance) noise as part of the recording process. In a smart grid, such distortions are created by imprecise sensors, information loss during transmission, stochastic differences in energy consumption, or interference of different consumers connected to the same power line. Identifying TS class-characteristic patterns requires to be noise robust.
Scalability: TS in sensor-based applications are typically recorded with high sampling rates, leading to long TS. Furthermore, smart grid applications typically have to deal with thousands or millions of TS. TSC methods in such areas need to be scalable in the number and length of TS.
Variable lengths and offsets: TS to be classified may have variable lengths, and recordings of to-be-classified intervals can start at any given point in time. In a smart grid, sensors produce continuous measurements, and the partitioning of this essentially infinite stream into classification intervals is independent from the usages of devices. Thus, characteristic patterns may appear anywhere in a TS (or not at all), but typically in the same order.
Unknown characteristic substructures: Feature-based classifiers exploit local substructures within a TS, and thus depend on the identification of recurring, characteristic patterns. However, the position, form, and frequency of these patterns is unknown; many substructures may be irrelevant for classification. For instance, the idle periods of the devices in Figure <ref> are essentially identical.
We carefully engineered WEASEL to address these challenges. Our method conceptually builds on the BOP model in BOSS <cit.>, yet uses rather different approaches in many of the individual steps. We will use the terms feature and word interchangeably throughout the text. Compared to previous works in TSC, WEASEL implements the following novel ideas, which will be explained in detail in the following subsections:
* Discriminative feature generation: WEASEL derives discriminative features based on class characteristics of the concrete dataset.
This differs from current BOP <cit.> methods, which apply the same feature generation method independent of the actual dataset, possibly leading to features that are equally frequent in all classes, and thus not discriminative.
Specifically, our approach first Fourier transforms each window, next determines discriminative Fourier coefficients using the ANOVA f-test and finally applies information gain binning for choosing appropriate discretization boundaries. Each step aims at separating TS from different classes.
* Co-occurring words: The order of substructures (each represented by a word) is lost in the BOP model. To mitigate this effect, WEASEL also considers bi-grams of words as features. Thus, local order is encoded into the model, but as a side effect the feature space is increased drastically.
* Variable-length windows: Typically, characteristic TS patterns do not all have the same length. Current BOP approaches, however, assume a fixed window length, which leads to ignorance regarding patterns of different lengths. WEASEL removes this restriction by extracting words for multiple window lengths and joining all resulting words in a single feature vector - instead of training separate vectors and selecting (the best) one as in other BOP models. This approach can capture more relevant signals, but again increases the feature space.
* Feature selection: The wide range of features considered captures more of the characteristic TS patterns but also introduces many irrelevant features. Therefore, WEASEL uses an aggressive Chi-Squared test to filter the most relevant features in each class and reduce the feature space without negatively impacting classification accuracy.
WEASEL is composed of the building blocks depicted in Figure <ref>: our novel supervised symbolic representation for discriminative feature generation and the novel bag-of-patterns model for building a discriminative feature vector.
First, WEASEL extracts normalized windows of different lengths from a time series. Next, each window is approximated using the Fourier transform, and those Fourier coefficients are kept that best separate TS from different classes using the ANOVA F-test. The remaining Fourier coefficients are discretized into a word using information gain binning, which also chooses discretization boundaries to best separate the TS classes; More detail is given in Subsection 4.2.
Finally, a single bag-of-patterns is built from the words (unigrams) and neighboring words (bigrams). This bag-of-patterns encodes unigrams, bigrams and windows of variable lengths. To filter irrelevant words, the Chi-Squared test is applied to this bag-of-patterns (Subsection 4.1). As WEASEL builds a highly discriminative feature vector, a fast linear time logistic regression classifier is applied, as opposed to more complex, quadratic time classifiers (Subsection 4.1).
Algorithm <ref> illustrates WEASEL: sliding windows of length w are extracted (line 6) and windows are normalized (line 7). We empirically set the window lengths to all values in [8,…,n]. Smaller values are possible, but the feature space can become untraceable, and small window lengths are basically meaningless for TS of length >10^3.
Our supervised symbolic transformation is applied to each real-valued sliding window (line 11,15). Each word is concatenated with the window length and its occurrence is counted (line 12,16). Lines 15–16 illustrate the use of bigrams: the preceding sliding window is concatenated with the current window. Note, that all words (unigrams,bigrams,window-length) are joined within a single bag-of-patterns. Finally irrelevant words are removed from this bag-of-patterns using the Chi-Squared test (line 19). The target dimensionality l is learned through cross-validation.
BOP-based methods have a number of parameters, which heavily influence their performance. Of particular importance is the window length w. An optimal value for this parameter is typically learned for each new dataset using techniques like cross-validation. This does not only carry the danger of over-fitting (if the training samples are biased compared to the to-be-classified TS), but also leads to substantial training times. In contrast, WEASEL removes the need to set this parameter, by constructing one joined high-dimensional feature vector, in which every feature encodes the parameter's values (Algorithm <ref> lines 12,16).
Figure <ref> illustrates our use of unigrams, bigrams and variable window lengths. The depicted dataset contains two classes 'A' and 'B' with two samples each. The time series are very similar and differences between these are difficult to spot, and are mostly located between time stamps 80 and 100 to 130. The center (right) column illustrates the features extracted for window length 50 (75).
Feature '75 aa ca' (a bigram for length 75) is characteristic for the A class, whereas the feature '50 db' (an unigram for length 50) is characteristic for the B class. Thus, we use different window lengths, bigrams, and unigrams to capture subtle differences between TS classes.
We show the impact of variable-length windows and bigrams to classification accuracy in Section <ref>.
§.§ Feature Selection and Weighting: Chi-squared Test and Logistic Regression
The dimensionality of the BOP feature space is O(c^l) for word length l and c symbols. It is independent of the number of time series N as these only affect the frequencies. For common parameters like c=4, l=4, n=256 this results in a sparse vector with 4^4 = 256 dimensions for a TS. WEASEL uses bigrams and O(n) window lengths, thus the dimensionality of the feature space rises to O(c^l· c^l· n). For the previous set of parameters this feature space explodes to 4^8· 256 = 256^3.
WEASEL uses the Chi-squared (χ^2) test to identify the most relevant features in each class to reduce this feature space to a few hundred features prior to training the classifier. This statistical test determines if for any feature the observed frequency within a specific group significantly differs from the expected frequency, assuming the data is nominal. Larger χ^2-values imply that a feature occurs more frequently within a specific class. Thus, we keep those features with χ^2-values above the threshold. This highlights subtle distinctions between classes. All other features can be considered superfluous and are removed. On average this reduces the size of the feature space by 30-70% to roughly 10^4 to 10^5 features.
Still, with thousands of time series or features an accurate, quadratic time classifier can take days to weeks to train on medium-sized datasets <cit.>. For sparse vectors, linear classifiers are among the fastest, and they are known to work well for large dimensional (sparse) vectors, like in document
classification. These linear classifiers predict the label based on a dot-product of the input feature vector and a weight vector. The weight vector represents the model trained on labeled train samples. Using a weight vector highlights features that are characteristic for a class label and suppresses irrelevant features. Thus, the classifier aims at finding those features, that can be used to determine a class label. Methods to obtain a weight vector include Support Vector Machines <cit.> or logistic regression <cit.>. We implemented our classifier using liblinear <cit.> as it scales linearly with the dimensionality of the feature space <cit.>. This results in a moderate runtime compared to Shapelet or ensemble classifiers, which can be orders of magnitude slower (see Section 5.3).
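The selection-and-training step might look as follows; scikit-learn's chi2 scorer and LogisticRegression are used here as stand-ins for the liblinear setup of the paper, and the default threshold is only our reading of the fixed chi-squared cutoff.

import numpy as np
from sklearn.feature_selection import chi2
from sklearn.linear_model import LogisticRegression

def select_and_train(X, y, threshold=2.0):
    # X: sparse matrix of word counts (one row per TS), y: class labels
    scores, _ = chi2(X, y)                  # chi-squared score per word
    keep = np.where(scores > threshold)[0]  # drop low-scoring words
    clf = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
    return clf, keep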
§.§ Supervised Symbolic Representation
A symbolic representation is needed to transform a real-valued TS window to a word using an alphabet of size c. The problem with SFA <cit.> is that it (a) filters the high frequency components of the signal, just like a low-pass filter. But for instance, the pitch (frequency) of a bird sound is relevant for the species but lost after low-pass filtering. Furthermore, it (b) does not distinguish between class labels when quantizing values of the Fourier transform. Thus, there is a high likelihood of SFA words to occur in different classes with roughly equal frequencies. For classification, we need discriminative words for each class. Our approach is based on:
* Discriminative approximation: We introduce feature selection to the approximation step by using the one-way ANOVA F-test: we keep the Fourier values whose distribution best separates the class labels in disjoint groups.
* Discriminative quantization: We propose the use of information gain <cit.>. This minimizes the entropy of the class labels for each split. I.e., the majority of values in each partition correspond to the same class label.
In Figure <ref> we revisit our sample dataset. This time with a window length of 25. When using SFA words (left), the words are evenly spread over the whole bag-of-patterns for both prototypes. There is no single feature whose absence or presence is characteristic for a class.
However, when using our novel discriminative words (center), we observe less distinct words, more frequent counts and the word 'db' is unique within the 'B' class. Thus, any subsequent classifier can separate classes just by the occurrence of this feature. When training a logistic regression classifier on these words (right), the word 'db' gets boosted and other words are filtered.
Note, that the counts of the word 'db' differ for both representations, as it represents other frequency ranges for the SFA and discriminative words.
This showcase underlines that not only different window lengths or bigrams (as in Figure <ref>), but also the symbolic representation helps to generate discriminative feature sets. Our showcase is the Gun-Point dataset <cit.>, which represents the hand movement of actors, who aim a gun (prototype A) or point a finger (prototype B) at people.
§.§.§ Discriminative Approximation using One-Way ANOVA F-test
For approximation, each TS is Fourier transformed first. We aim at finding those real and imaginary Fourier values that best separate between class labels for a set of TS samples, instead of simply taking the first ones. Figure <ref> (left) shows the distribution of the Fourier values for the samples from the Gun-Point dataset. The Fourier value that best separates between the classes is imag_3 with the highest F-value of 1.5 (bottom).
We chose to use a one-way ANOVA F-test <cit.> to select the best Fourier coefficients, as it is applicable on continuous variables, as opposed to the Chi-squared test, which is limited to categorical variables. The one-way ANOVA F-test checks the hypothesis that two or more groups have the same normal distribution around the mean. The analysis is based on two estimates for the variance existing within and between groups: mean square within (MS_W) and mean square between (MS_B). The F-value is then defined as: F=MS_B/MS_W. If there is no difference between the group means, the F-value is close to or below 1. If the groups have different distributions around the mean, MS_B will be larger than MS_W. When used as part of feature selection, we are interested in the largest F-values, equal to large differences between group means. The F-value is calculated for each real real_i∈ REAL(T) and imaginary imag_i∈ IMAG(T) Fourier value. We keep those l Fourier values with the largest F-values. In Figure <ref> these are real_0 and imag_3 for l=2 with F-values 0.6 and 1.5.
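A sketch of this selection step, computed directly from the MS_B/MS_W definition above; treating each real and imaginary Fourier value as one feature column, as well as the function name, are our illustrative choices (at least two classes are assumed).

import numpy as np

def select_fourier_values(windows, labels, l):
    # windows: (n_samples, w) z-normalized windows; labels: class per sample
    coeffs = np.fft.rfft(windows, axis=1)
    feats = np.hstack([coeffs.real, coeffs.imag])   # one column per value
    grand = feats.mean(axis=0)
    classes = np.unique(labels)
    ss_b = sum((labels == c).sum() * (feats[labels == c].mean(axis=0) - grand)**2
               for c in classes)                    # between-group sum of squares
    ss_w = sum(((feats[labels == c] - feats[labels == c].mean(axis=0))**2).sum(axis=0)
               for c in classes)                    # within-group sum of squares
    f = (ss_b / (len(classes) - 1)) / (ss_w / (len(feats) - len(classes)) + 1e-12)
    return np.argsort(f)[::-1][:l]                  # indices of the l largest F-values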
Assumptions made for the ANOVA F-test:
* The ANOVA F-test assumes that the data follows a normal distribution with equal variance. The BOP (WEASEL) approach extracts subsequences for z-normalized time series. It has been shown that subsequences extracted from z-normalized time series perfectly mimic normal distribution <cit.>. Furthermore, the Fourier transform of a normal distribution
f(x)=1/σ√(2π)· e^-x^2/2σ^2
with μ=0,σ=1 results in a normal distribution of the Fourier coefficients <cit.>:
F(t)=∫ f(x)· e^-itx dx = e^-iμ t· e^-1/2(σ t)^2 = e^-1/2(σ t)^2
Thus, the Fourier coefficients follow a symmetrical and uni-modal normal distribution with equal variance.
* The ANOVA F-test assumes that the samples are independently drawn. To guarantee independence, we are extracting disjoint subsequences, i.e. non-overlapping, to train the quantization intervals. Using disjoint windows for sampling further decreases the likelihood of over-fitting quantization intervals.
§.§.§ Discriminative Quantization using Entropy / Information Gain
A quantization step is applied to find for each selected real or imaginary Fourier value the best split points, so that in each partition a majority of values correspond to the same class. We use information gain <cit.> and search for the split with largest information gain, which represents an increase in purity. Figure <ref> (right) illustrates five possible split points for the imag_3 Fourier coefficient on the two labels 'Gun' (orange) and 'Point' (red). The split point with the highest information gain of 0.46 is chosen.
Our quantization is based on binning (bucketing). The value range is partitioned into disjoint intervals, called bins. Each bin is labeled by a symbol. A real value that falls into an interval is represented by its discrete label. Common methods to partition the value range include equi-depth or equi-width bins. These ignore the class label distribution and splits are solely based on the value distribution. Here we introduce entropy-based binning. This leads to disjoint feature sets. Let Y={ (s_1,y_1),…,(s_N,y_N)} be a list
of value and class label pairs with N unique class labels. The multi-class entropy is then given by: Ent(Y)=∑_(s_i,y_i)∈ Y-p_y_ilog_2p_y_i, where p_y_i is the relative frequency of label y_i in Y. The entropy for a split point sp with all labels on the left Y_L={ (s_i,y_i)|s_i≤ sp, (s_i,y_i)∈ Y.} and all labels on the right Y_R={ (s_i,y_i)|s_i>sp, (s_i,y_i)∈ Y.} is given by:
Ent(Y,sp)=|Y_L|/|Y|Ent(Y_L)+|Y_R|/|Y|Ent(Y_R)
The information gain for this split is given by:
Information Gain=Ent(Y)-Ent(Y,sp)
Algorithm <ref> illustrates entropy-binning for a c symbol alphabet and word length l. For each set of the l real and imaginary Fourier values, an order-line is built (line 5). We then search for the c split points that maximize the information gain (line 6). After choosing the first split point (line 10) any remaining partition Y_L or Y_R that is not pure is recursively split (lines 13-14). The recursion ends once we have found c bins (line 12).
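The inner search of Algorithm <ref> reduces to finding the split point of maximal information gain on one order-line. A direct quadratic-time sketch is given below (an incremental-count implementation would be faster); it is applied recursively to impure partitions until c bins remain.

import numpy as np

def entropy(y):
    p = np.unique(y, return_counts=True)[1] / len(y)
    return -np.sum(p * np.log2(p))

def best_split(values, y):
    # values, y: 1-D numpy arrays of Fourier values and class labels
    order = np.argsort(values)
    v, y = values[order], y[order]
    base, n, best = entropy(y), len(y), (0.0, None)
    for i in range(1, n):
        if v[i] == v[i - 1]:
            continue                    # split only between distinct values
        gain = base - (i * entropy(y[:i]) + (n - i) * entropy(y[i:])) / n
        if gain > best[0]:
            best = (gain, 0.5 * (v[i] + v[i - 1]))
    return best                         # (information gain, split point)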
We fix the alphabet size c to 4, as it has been shown in the context of BOP models that using a constant c=4 is very robust over all TS considered <cit.>.
§ EVALUATION
§.§ Experimental Setup
We mostly evaluated our WEASEL classifier using the full UCR benchmark dataset of 85 TSC problems <cit.>[The UCR archive has recently been extended from 45 to 85 datasets.]. Furthermore, we compared its performance on two real-life datasets from the smart grid domain; results are reported in Section <ref>.
Each UCR dataset provides a train and test split which we use unchanged to make our results comparable to prior publications. We compare WEASEL to the best published TSC methods (following <cit.>), namely COTE (Ensemble) <cit.>, 1-NN BOSS <cit.>, Learning Shapelets <cit.>, Elastic Ensemble (EE PROP) <cit.>, Time Series Bag of Features (TSBF) <cit.>, Shapelet Transform (ST) <cit.>, and 1-NN DTW with and without a warping window set through cross validation on the training data (CV) <cit.>. A recent study <cit.> reported COTE, ST, BOSS, and EE PROP as the most accurate (in this order).
All experiments ran on a server running LINUX with 2xIntel Xeon E5-2630v3 and 64GB RAM, using JAVA JDK x64 1.8. We measured runtimes of all methods using the implementation given by the authors <cit.> wherever possible, resorting to the code by <cit.> if this was not the case. For 1-NN DTW and 1-NN DTW CV, we make use of the state-of-the-art cascading lower bounds from <cit.>. Multi-threaded code is available for BOSS and WEASEL, but we have restricted all codes to use a single core to ensure comparability of numbers. Regarding accuracy, we report numbers published by each author <cit.>, complemented by the numbers published by <cit.> for those datasets where results are missing (due to the growth of the benchmark datasets). All numbers are accuracy on the test split.
For WEASEL we performed 10-fold cross-validation on the training datasets to find the most appropriate value for the SFA word length l∈[4,6,8]. We kept c=4 and the chi-squared threshold chi=2 constant, as varying these values has only a negligible effect on accuracy (data not shown). We used liblinear with default parameters (bias=1, p=0.1 and solver L2R_LR_DUAL). To ensure reproducible results, we provide the WEASEL source code and the raw measurement sheets <cit.>.
§.§ Accuracy
Figure <ref> shows a critical difference diagram (introduced in <cit.>) over the average ranks of the different TSC methods. Classifiers with the lowest (best) ranks are to the right. The group of classifiers that are not significantly different in their rankings are connected by a bar. The critical difference (CD) length, which represents statistically significant differences, is shown above the graph.
The 1-NN DTW and 1-NN DTW CV classifiers are commonly used as benchmarks <cit.>. Both perform significantly worse than all other methods. Shapelet Transform (ST), Learning Shapelets (LS) and BOSS have a similar rank and competitive accuracies. WEASEL has the best (lowest) rank among all core classifiers (DTW, TSBF, LS, BOSS, ST), i.e., it is on average the most accurate core classifier. This confirms our assumption that the WEASEL pipeline matches the requirements of time series similarity (see Section 5.3).
Ensemble classifiers generally show compelling accuracies at the cost of enormous runtimes. The high accuracy is confirmed in Figure <ref>, where COTE <cit.> is the overall best method. The advantage of WEASEL is its much lower runtime, which we address in Section 5.3.
We performed a Wilcoxon signed rank test to assess the differences between WEASEL and COTE, ST, BOSS, EE. The p-values are 0.0001 for BOSS, 0.017 for ST, 0.0000032 for EE, and 0.57 for COTE. Thus, at a cutoff of p=0.05, WEASEL is significantly better than BOSS, ST and EE, yet very similar to COTE.
§.§ Scalability
Figure <ref> plots for all TSC methods the total runtime on the x-axis in log scale vs the average accuracy on the y-axis for training (top) and prediction (bottom). Runtimes include all preprocessing steps like feature extraction or selection. Because of the high wall-clock time of some classifiers, we limited this experiment to the 45 core UCR datasets, encompassing roughly N=17000 train and N=62000 test time series. The slowest classifiers took more than 340 CPU days to train (Linux user time).
The DTW classifier is the only classifier that does not require training. The DTW CV classifier requires a training step to set a warping window, which significantly reduces the runtime for the prediction step. Training DTW CV took roughly 186 CPU hours until completion. WEASEL and BOSS have similar train times of 16-24 CPU hours and are one to two orders of magnitude faster than the other core classifiers. WEASEL's prediction time is 38ms on average and one order of magnitude faster than that of BOSS. LS and TSBF have the lowest prediction times but a limited average accuracy <cit.>. As expected, the two Ensemble methods in our comparison, EE PROP and COTE, show by far the longest training and classification times. On the NonInvasiveFatalECGThorax1, NonInvasiveFatalECGThorax2, and StarlightCurves datasets training each ensemble took more than 120, 120 and 45 CPU days.
§.§ Accuracy by datasets and by domain
In this experiment we found that WEASEL performs well independent of the domain. We studied the individual accuracy of each method on each of the 85 different datasets, and also grouped datasets by domain to see if different methods have domain-dependent strengths or weaknesses. We used the predefined grouping of the benchmark data into four types: synthetic, motion sensors, sensor readings and image outlines. Image outlines result from drawing a line around the shape of an object. Motion recordings can result from video captures or motion sensors. Sensor readings are real-world measurements like spectrograms, power consumption, light sensors, starlight-curves or ECG recordings. Synthetic datasets were created by scientists to have certain characteristics. For this experiment, we only consider the non-ensemble classifiers. Figure <ref> shows the accuracies of WEASEL (black line) vs. the six core classifiers (orange area). The orange area shows a high variability depending on the datasets.
Overall, the performance of WEASEL is very competitive for almost all datasets. The black line is mostly very close to the upper outline of the orange area, indicating that WEASEL's performance is close to that of its best competitor. In total WEASEL has 36 out of 85 wins against the group of six core classifiers. On 69 (78) datasets it is within 5% (10%) of the best classifier. The no-free-lunch-theorem implies that there is no single classifier that can be best for all kinds of datasets. Table <ref> shows the correlation between the classifiers and each of the four dataset types. It gives an idea of when to use which kind of classifier based on dataset types. E.g., when dealing with sensor readings, WEASEL is likely to be the best, with 48.6% wins. Overall, WEASEL has the highest percentage of wins in the groups of sensor readings, synthetic and image outline datasets. Within the group of motion sensors, it performs equally well as LS and ST.
The main advantage of WEASEL is that it adapts to variable-length characteristic substructures by calculating discriminative features in combination with noise filtering. Thus, all datasets that are composed of characteristic substructures benefit from the use of WEASEL. This applies to most sensor readings like all EEG or ECG signals (CinC_ECG_torso, ECG200, ECG5000, ECGFiveDays, NonInvasiveFatalECG_Thorax1, NonInvasiveFatalECG_Thorax2, TwoLeadECG, ...), but also mass spectrometry (Strawberry, OliveOil, Coffee, Wine, ...), or recordings of insect wing-beats (InsectWingbeatSound). These are typically noisy and have variable-length, characteristic substructures that can appear at arbitrary time stamps <cit.>. ST also fits to this kind of data but, in contrast to WEASEL, is sensitive to noise.
Image outlines represent contours of objects. For example, arrow-heads, leafs or planes are characterized by small differences in the contour of the objects. WEASEL identifies these small differences by the use of feature weighting. In contrast to BOSS it also adapts to variable length windows. TSBF does not adapt to the position of a window in the time series. ST and WEASEL adapt to variable length windows at variable positions but WEASEL also offers noise reduction, thereby smoothing the contour of an object.
Overall, if you are dealing with noisy data that is characterized by windows of variable lengths and at variable positions, which may contain superfluous data, WEASEL might be the best technique to use.
§.§ Influence of Design Decisions on WEASEL's Accuracy
We look into the impact of three design decisions on the WEASEL classifier:
* The use of a novel supervised symbolic representation that generates discriminative features.
* The novel use of bigrams that adds order-variance to the bag-of-patterns approach.
* The use of multiple window lengths to support variable length substructures.
We cannot test the impact of the Chi-Squared-test, as the feature space of WEASEL is not computationally feasible without feature selection (see Section <ref>).
Figure <ref> shows the average ranks of the WEASEL classifier where each extension is disabled or enabled: (a) "one window length, supervised and bigrams", (b) "unsupervised and unigrams", (c) "unsupervised and bigrams", (d) "supervised and unigrams", and (e) "supervised and bigrams". The single window approach is least accurate. This underlines that the choice of window lengths is crucial for accuracy. The unsupervised approach with unigrams is equal to the standard bag-of-patterns model. Using a supervised symbolic representation or bigrams slightly improves the ranks. Both extensions combined, significantly improve the ranks.
The plot justifies the design decisions made as part of WEASEL. Each extension of the standard bag-of-patterns model contributes to the classifier's accuracy. Bigrams add order variance and the supervised symbolic representation produces disjoint feature sets for different classes. Datasets contain characteristic substructures of different lengths which is addressed by building a bag-of-patterns using all possible window lengths.
§.§ Use Case: Smart Plugs
Appliance load monitoring has become an important tool for energy savings <cit.>. We tested the performance of different TSC methods on data obtained from intrusive load monitoring (ILM), where energy consumption is separately recorded at every electric device. We used two publicly available datasets ACS-F1 <cit.> and PLAID <cit.>. The PLAID dataset consists of 1074 signatures from 11 appliances. The ACS-F1 dataset contains 200 signatures from 100 appliances and we used their intersession split. These capture the power consumption of typical appliances including air conditioners, lamps, fridges, hair-dryers, laptops, microwaves, washing machines, bulbs, vacuums, fans, and heaters. Each appliance has a characteristic shape. Some appliances show repetitive substructures while others are distorted by noise. As the recordings capture one day, these are characterized by long idle periods and some high bursts of energy consumption when the appliance is active. When active, appliances show different operational states.
Figure <ref> shows the accuracy and runtime of WEASEL compared to the state of the art. COTE did not finish training after eight CPU weeks, thus we cannot report its results yet. ED and DTW do not require training.
WEASEL scores the highest accuracies with 92% and 91.8% for both datasets. With a prediction time of 10 and 100 ms it is also fast. Train times of WEASEL are comparable to that of DTW CV and much lower than that of the other high accuracy classifiers.
On the large PLAID dataset WEASEL has a significantly lower prediction time than its competitors, while on the small sized ACS-F1 dataset the prediction time is slightly higher than that of DTW or BOSS. 1-NN classifiers such as BOSS and DTW scale with the size of the train dataset. Thus, for larger train datasets, they become slower. At the same time, for small datasets like PLAID, they can be quite fast.
The results show that our approach naturally adapts to appliance load monitoring. These data show how WEASEL automatically adapts to idle and active periods and short, repetitive characteristic substructures, which were also important in the sensor readings or image outline domains (Section 5.4).
Note that the authors of the ACS-F1 dataset scored 93% <cit.> using a hidden Markov model and a manual feature set. Unfortunately their code is not available and the runtime was not reported. Our accuracy is close to theirs, while our approach was not specially adapted for the domain.
§ CONCLUSION AND FUTURE DIRECTION
In this work, we have presented WEASEL, a novel TSC method following the bag-of-pattern approach which achieves highly competitive classification accuracies and is very fast, making it applicable in domains with very high runtime and quality constraints. The novelty of WEASEL is its carefully engineered feature space using statistical feature selection, word co-occurrences, and a supervised symbolic representation for generating discriminative words. Thereby, WEASEL assigns high weights to characteristic, variable-length substructures of a TS. In our evaluation on altogether 87 datasets, WEASEL is consistently among the best and fastest methods, and competitors are either at the same level of quality but much slower or equally fast but much worse in accuracy.
In future work, we will explore two directions. First, WEASEL currently only deals with univariate TS, as opposed to multi-variate TS recorded from an array of sensors. We are currently experimenting with extensions to WEASEL to also deal with such data; a first approach which simply concatenates the different dimensions into one vector shows promising results, but requires further validation. Second, throughout this work, we assumed fixed sampling rates, which let us omit time stamps from the TS. In future work, we also want to extend WEASEL to adequately deal with TS which have varying sampling rates.
|
http://arxiv.org/abs/1701.07906v1 | 20170127000657 | Collective dynamics in atomistic models with coupled translational and spin degrees of freedom | ["Dilina Perera", "Don M. Nicholson", "Markus Eisenbach", "G. Malcolm Stocks", "David P. Landau"] | cond-mat.mtrl-sci | ["cond-mat.mtrl-sci"] |
dilinanp@physast.uga.edu
Center for Simulational Physics, The University of Georgia, Athens, Georgia 30602, USA
Department of Physics and Astronomy, Mississippi State University, Mississippi State, Mississippi 39762, USA
University of North Carolina at Asheville, Asheville, North Carolina 28804, USA
Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
Center for Simulational Physics, The University of Georgia, Athens, Georgia 30602, USA
Using an atomistic model that simultaneously treats the dynamics of translational and spin degrees of freedom,
we perform combined molecular and spin dynamics simulations
to investigate the mutual influence of the phonons and magnons on their respective frequency spectra and lifetimes in ferromagnetic bcc iron.
By calculating the Fourier transforms of the space- and time-displaced correlation functions,
the characteristic frequencies and the linewidths of the vibrational and magnetic excitation modes were determined.
Comparison of the results with that of the standalone molecular dynamics and spin dynamics simulations
reveal that the dynamic interplay between the phonons and magnons leads to a shift in the respective frequency spectra
and a decrease in the lifetimes.
Moreover, in the presence of lattice vibrations, additional longitudinal magnetic excitations were observed with the same frequencies as the longitudinal phonons.
Collective dynamics in atomistic models with coupled translational and spin degrees of freedom
David P. Landau
==============================================================================================
§ INTRODUCTION
For decades, dynamical simulations of atomistic models have played a pivotal role in the study of collective phenomena in materials at finite temperatures.
Molecular dynamics (MD) <cit.> utilizing empirical potentials
has been extensively used in the analysis of vibrational properties in a variety of systems such as
metals and alloys <cit.>, polymers <cit.>, carbon nanotubes <cit.>, graphene <cit.> etc.
With regard to magnetic excitations, the lesser-known spin dynamics (SD) method <cit.>
has proven to be an indispensable tool for investigating classical lattice-based spin models for which the analytical solutions are intractable.
Over the years, SD simulations have expanded our understanding of spin waves and solitons in magnetic materials,
leading to a number of groundbreaking discoveries,
including the existence of propagating spin waves in paramagnetic bcc iron <cit.>,
presence of longitudinal two-spin-wave modes <cit.> that subsequently lead to experimental verification <cit.>,
and an unexpected form of transverse spin wave excitations in antiferromagnetic nanofilms <cit.>.
Study of collective dynamics in magnetic materials faces an enormous challenge
due to the coupling of lattice vibrations and spin waves which is inherently neglected in the aforementioned atomistic models.
In magnetic metals and alloys, the atomic magnetic moments and exchange interactions
strongly depend on the local atomic environment <cit.>
and therefore change dynamically as the local crystal structure is distorted by lattice vibrations <cit.>.
On the other hand, magnetic interactions themselves are integral for maintaining the structural stability of such systems <cit.>.
For instance, the stabilization of the bcc crystal structure in iron has long been conceived to be of magnetic origin <cit.>.
Furthermore, a number of recent studies emphasize the significance of phonon-magnon coupling on various dynamical processes such as
self diffusion <cit.>, thermal transport <cit.>, dislocation dynamics <cit.>, and spin-Seebeck effect <cit.>.
The dynamics of atomic and magnetic degrees of freedom are, hence, inseparable and should be treated in a self-consistent manner.
The idea of integrating spin dynamics with molecular dynamics was pioneered by Omelyan et al. <cit.> in the context of a simple model for ferrofluids.
The foundation of this combined molecular and spin dynamics (MD-SD) approach lies in the unification of an atomistic potential and a Heisenberg spin Hamiltonian,
with the coupling between the atomic and spin subsystems established via a coordinate-dependent exchange interaction.
With the use of an empirical many-body potential and a parameterized exchange interaction,
Ma et al. <cit.> further extended MD-SD into a framework for realistic modeling of bcc iron.
The parameterization developed by Ma et al. <cit.>
has since been successfully adopted to investigate various phenomena in bcc iron such as
magneto-volume effects <cit.>,
vacancy formation and migration <cit.>,
and external magnetic field effects <cit.>.
Moreover, the method has been recently extended by incorporating spin-orbit interactions
to facilitate the dynamic exchange of angular momentum between the lattice and spin subsystems <cit.>.
This, in particular, extends the applicability of MD-SD to accurate modeling of non-equilibrium processes.
The aim of this paper is to improve our understanding of phonon-magnon interactions in the ferromagnetic phase of bcc iron within the context of MD-SD.
This study is an extension of our earlier preliminary work <cit.>
which primarily focused on the effect of lattice vibrations on the spin-spin dynamic structure factor in the [100] lattice direction.
In this paper, we provide a more in-depth analysis of the mutual influence of phonons and magnons on their respective frequency spectra and lifetimes
for all three high-symmetry lattice directions: [100], [110] and [111].
This is achieved by comparing the results obtained for MD-SD simulations with those of standalone MD and SD simulations
in which spin-lattice coupling is completely neglected.
In Sec. <ref>, we present the MD-SD formalism and the parameterization for bcc iron, followed by a comprehensive description of the methods we adopt
for characterizing collective excitations.
Sec. <ref> and <ref>, respectively, report our results on vibrational and magnetic excitations,
followed by conclusions in Sec. <ref>.
§ METHODS
§.§ Combined molecular and spin dynamics
MD-SD is essentially a reformulation of the MD approach,
in which the effective spin angular momenta of the atoms {𝐒_i} are incorporated into the Hamiltonian
and treated as explicit phase variables.
For a classical system of N magnetic atoms of mass m described by their positions {𝐫_i},
velocities {𝐯_i}, and the atomic spins {𝐒_i},
the MD-SD Hamiltonian takes the form
ℋ = ∑_i=1^N mv_i^2 /2 + U({𝐫_i}) - ∑_i<j J_ij({𝐫_k }) 𝐒_i ·𝐒_j,
where the first term represents the kinetic energy of the atoms,
and U({𝐫_i}) is the spin-independent (non-magnetic) scalar interaction between the atoms.
The Heisenberg-like exchange interaction with the coordinate-dependent exchange parameter J_ij({𝐫_k })
specifies the exchange coupling between the ith and jth spins.
The aforementioned Hamiltonian has true dynamics as described by the classical equations of motion
d𝐫_i/dt = 𝐯_i
d𝐯_i/dt = 𝐟_i/m
d𝐒_i/dt = 1/ħ 𝐇_i^eff ×𝐒_i
where 𝐟_i = -∇_𝐫_iℋ and 𝐇_i^eff =∇_𝐒_iℋ
are the interatomic force and the effective field acting on the ith atom/spin.
The goal of the MD-SD approach is to numerically solve the above equations of motion
starting from a given initial configuration,
and obtain the trajectories of both the atomic and spin degrees of freedom.
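The structure of one integration step can be sketched as below. We use a leapfrog-like splitting with exact single-spin precession (Rodrigues' rotation) purely for illustration; production MD-SD codes instead use symplectic second-order Suzuki-Trotter decompositions of the Liouville operator, and the force/field callbacks, the sequential spin sweep, and the reduced units here are our simplifications.

import numpy as np

HBAR = 1.0  # reduced units assumed throughout this sketch

def precess(s, h, dt):
    # rotate spin s about the effective field h by the angle |h|*dt/hbar;
    # this is exact if the field is held constant over the step
    hn = np.linalg.norm(h)
    if hn == 0.0:
        return s
    n, theta = h / hn, hn * dt / HBAR
    return (s * np.cos(theta) + np.cross(n, s) * np.sin(theta)
            + n * np.dot(n, s) * (1.0 - np.cos(theta)))

def mdsd_step(r, v, s, dt, force, field, mass):
    v += 0.5 * dt * force(r, s) / mass            # half kick on the lattice
    r += dt * v                                   # drift
    for i in range(len(s)):                       # precess each spin in its
        s[i] = precess(s[i], field(r, s, i), dt)  # instantaneous field
    v += 0.5 * dt * force(r, s) / mass            # second half kick
    return r, v, s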
MD-SD is a generic framework that with proper parameterization, can be readily adopted for any magnetic material
in which the spin interactions can be modeled classically.
In this study, we adopt the parameterization introduced by Ma et al. <cit.> for bcc iron,
in which U({𝐫_i}) is constructed as
U({𝐫_i}) = U_DD - E_spin^ground,
where U_DD is the “magnetic” embedded atom potential developed by Dudarev and Derlet <cit.>,
and E_spin^ground = - ∑_i<j J_ij({𝐫_k })|𝐒_i||𝐒_j|
is the energy contribution from a collinear spin state,
subtracted out to eliminate the magnetic interaction energy that is implicitly contained in U_DD.
With the particular form of U({𝐫_i}) given in Eq. (<ref>),
Hamiltonian (<ref>) provides the same ground state energy as U_DD.
The exchange interaction is modeled via a simple pairwise function J(r_ij) parameterized by first-principles calculations <cit.>,
with spin lengths absorbed into its definition, i.e. J(r_ij) = J_ij({𝐫_k }) |𝐒_i||𝐒_j|.
We assume constant spin lengths |𝐒| = 2.2/g, with g being the electron g factor.
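For orientation, the parameterized exchange is a short-ranged pairwise curve that vanishes at a cutoff radius; a sketch of such a cutoff polynomial form follows (the cubic shape is our recollection of the Ma et al. fit, and the amplitude is a placeholder, not the published value).

def exchange(r, j0=1.0, rc=3.75):
    # pairwise exchange J(r): rc is the cutoff radius in angstroms and j0 a
    # placeholder amplitude (both are assumptions, not the fitted constants)
    return j0 * (1.0 - r / rc)**3 if r < rc else 0.0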
We would like to point out that the fluctuation of the magnitudes of magnetic moments
and spin-orbit interactions are not considered in this work.
In transition metals and alloys, fluctuation of spin magnitudes may have a notable effect on the material properties, particularly at high temperatures.
Ma et al. <cit.> proposed a way of incorporating longitudinal spin fluctuations into SD and MD-SD simulations via a Langevin-type equation of motion
within the context of fluctuation-dissipation theorem.
Numerical coefficients of the corresponding Landau Hamiltonian can be determined from ab initio calculations <cit.>.
An accurate depiction of spin-orbit interactions can be potentially achieved with the use of Hubbard-like Hamiltonians
as the foundation for deriving the equations of motion <cit.>.
A phenomenological approach for modeling spin-orbit interactions in MD-SD has also been recently proposed <cit.>,
but was not adopted in this study due to its computationally demanding nature.
§.§ Characterizing collective excitations
In MD and SD simulations, space-displaced, time-displaced correlation functions of the microscopic dynamical variables are
integral to the study of the collective phenomena in the system <cit.>.
Fourier transforms of these quantities directly yield information regarding the frequency spectra and the lifetimes of the
respective collective modes.
Let us define microscopic atom density as
ρ_n (𝐫, t) = ∑_i δ[𝐫-𝐫_i(t)].
The spatial Fourier transform of the space-displaced, time-displaced density-density correlation function,
namely, the intermediate scattering function <cit.> then takes the form
F_nn(𝐪, t) = 1/N< ρ_n(𝐪, t) ρ_n(-𝐪, 0) >,
where ρ_n(𝐪, t) = ∫ρ_n(𝐫, t) e^ -i𝐪·𝐫 d𝐫 = ∑_i e^ -i𝐪·𝐫_i(t).
The power spectrum of the intermediate scattering function
S_nn(𝐪, ω) = 1/2π∫_-∞^+∞ F_nn(𝐪, t) e^-iω t dt,
is called the “density-density dynamic structure factor” for the momentum transfer 𝐪 and frequency (energy) transfer ω.
S_nn(𝐪, ω) is directly related to the differential cross section measured in inelastic neutron scattering experiments <cit.>.
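As an illustration, the two equations above can be evaluated from a stored trajectory with a few lines of numpy. This is a minimal sketch (the function names are ours), assuming positions is a (T, N, 3) array sampled on a uniform time grid, with the average over time origins built into the autocorrelation.

```python
import numpy as np

def intermediate_scattering(positions, q):
    """F_nn(q, t) from a trajectory; positions has shape (T, N, 3), q shape (3,)."""
    rho = np.exp(-1j * positions @ q).sum(axis=1)      # rho_n(q, t), shape (T,)
    T = len(rho)
    # <rho(q, t) rho(-q, 0)>, averaged over time origins; rho(-q, t) = conj(rho(q, t))
    F = np.array([(rho[dt:] * np.conj(rho[:T - dt])).mean() for dt in range(T // 2)])
    return F / positions.shape[1]                      # normalize by N

def dynamic_structure_factor(F, dt):
    """One-sided Fourier transform of F(q, t) on a uniform grid with spacing dt."""
    omega = 2 * np.pi * np.fft.rfftfreq(len(F), d=dt)  # angular frequencies
    S = np.fft.rfft(F).real * dt / np.pi               # uses F(-t) = F(t)*
    return omega, S
```

In practice one would truncate F(q, t) at a cutoff time (0.5 ns in this study) and possibly apply a window function before transforming.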
Local density fluctuations in a system are caused by the thermal diffusion of atoms as well
as vibrational modes related to the propagating lattice waves <cit.>.
For liquid systems, the thermal diffusive mode can be identified as a peak in S_nn(𝐪, ω) centered at ω = 0,
whereas for solids this peak will disappear due to the absence of thermal diffusion <cit.>.
In crystalline solids, peaks in S_nn(𝐪, ω) at non-zero frequencies can be uniquely associated with
longitudinal vibrational modes with the corresponding frequencies and wave vectors.
As the transverse lattice vibrations do not cause local density fluctuations towards the direction of wave propagation,
S_nn(𝐪, ω) is incapable of revealing
information about these modes.
Therefore, to identify transverse lattice vibrations, one needs to consider the time-dependent correlations of transverse velocity components.
With the microscopic “velocity density” defined as ρ_v(𝐫, t) = ∑_i 𝐯_i(t) δ[𝐫-𝐫_i(t)],
the spatial Fourier transform of the velocity-velocity correlation function takes the form
F_vv^L,T(𝐪, t) = 1/N< ρ_v^L,T(𝐪, t) ·ρ_v^L,T(-𝐪, 0) >,
where ρ_v^L,T(𝐪, t) = ∑_i 𝐯_i^L,T(t) e^ -i𝐪·𝐫_i(t),
with the superscripts L and T respectively denoting the longitudinal and transverse components with reference to the direction of the wave propagation.
Peaks in the corresponding power spectra S_vv^L(𝐪, ω) and S_vv^T(𝐪, ω)
respectively reveal longitudinal and transverse vibrational modes of the system.
It can be shown that S_vv^L(𝐪, ω) is directly related to the density-density dynamic structure factor S_nn(𝐪, ω)
via the relationship S_nn(𝐪, ω) = ω^2/q^2 S_vv^L(𝐪, ω) <cit.>.
Just as the time-dependent density-density and velocity-velocity correlations reveal vibrational excitations associated with the lattice subsystem,
spin density autocorrelations can elucidate the magnetic excitations associated with the spin subsystem.
The microscopic “spin density” is given by
ρ_s (𝐫, t) = ∑_i 𝐒_i(t) δ(𝐫-𝐫_i(t)).
Treating the spin-spin correlations along x, y, and z directions separately,
we define the intermediate scattering function as
F_ss^k(𝐪, t) = 1/N< ρ_s^k(𝐪, t) ρ_s^k(-𝐪, 0) >,
where k = x, y, or z, and
ρ_s(𝐪, t) = ∑_i 𝐒_i(t) e^-i𝐪·𝐫_i(t).
For a ferromagnetic system in the microcanonical ensemble, the magnetization vector is a constant of motion
and serves as a fixed symmetry axis throughout the time evolution of the system.
To differentiate between the magnetic excitations that propagate parallel and perpendicular to this symmetry axis,
we redefine the coordinate system in spin space such that the z axis is parallel to the magnetization vector.
The components {F_ss^k(𝐪, t)} can then be simply regrouped to yield the longitudinal component
F_ss^L(𝐪, t) = F_ss^z(𝐪, t),
and the transverse component
F_ss^T(𝐪, t) = 1/2( F_ss^x(𝐪, t) + F_ss^y(𝐪, t) ).
Note that the separation of magnetic excitations into longitudinal and transverse modes is only meaningful for temperatures below the Curie temperature T_C,
since above T_C, the net magnetization vanishes and all directions in spin space become equivalent.
Fourier transforms of F_ss^L,T(𝐪, t) yield the spin-spin dynamic structure factors S_ss^L,T(𝐪, ω).
Just like the density-density dynamic structure factor, the spin-spin dynamic structure factor
is a measurable quantity in inelastic neutron scattering experiments <cit.>.
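A minimal numpy sketch of these longitudinal and transverse spin correlation functions is given below (the function name is ours; positions and spins are assumed to be (T, N, 3) trajectory arrays, and the magnetization direction is estimated from the stored spins).

```python
import numpy as np

def spin_intermediate_scattering(positions, spins, q):
    """F_ss^L(q, t) and F_ss^T(q, t) with the spin z axis along the magnetization."""
    M = spins.mean(axis=(0, 1))
    zhat = M / np.linalg.norm(M)                      # net magnetization direction
    phase = np.exp(-1j * positions @ q)               # (T, N)
    s_par = spins @ zhat                              # longitudinal components, (T, N)
    s_perp = spins - s_par[..., None] * zhat          # transverse components, (T, N, 3)
    rho_L = (s_par * phase).sum(axis=1)               # (T,)
    rho_T = (s_perp * phase[..., None]).sum(axis=1)   # (T, 3)
    T_len, N = spins.shape[:2]
    F_L = np.array([(rho_L[dt:] * np.conj(rho_L[:T_len - dt])).mean()
                    for dt in range(T_len // 2)]) / N
    # F^T = (F^x + F^y)/2 equals half the correlation of the transverse vector parts
    F_T = np.array([(rho_T[dt:] * np.conj(rho_T[:T_len - dt])).sum(axis=1).mean()
                    for dt in range(T_len // 2)]) / (2 * N)
    return F_L, F_T
```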
In this study, we are primarily interested in investigating wave propagation along the three principle lattice directions:
[100], [110] and [111]. Let us denote the wave vectors in these directions as 𝐪 = (q, 0, 0), (q, q, 0), and (q, q, q), respectively.
Due to the finite size of the simulation box, the accessible values of q in each direction are constrained to a discrete set given by
q = 2 π n_q/(La), with n_q = ± 1, ± 2, …, ± L for the [100] and [111] directions, and n_q = ± 1, ± 2, …, ± L/2 for the [110] direction,
where L is the linear lattice dimension and a = 2.8665 Å is the lattice constant of bcc iron.
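These discrete wave-vector sets are straightforward to tabulate; a short sketch with the values of L and a used here:

```python
import numpy as np

a, L = 2.8665, 16                                         # lattice constant (Angstrom), linear dimension
n100 = np.arange(1, L + 1)
q_100 = np.outer(2 * np.pi * n100 / (L * a), [1, 0, 0])   # wave vectors along [100]
q_111 = np.outer(2 * np.pi * n100 / (L * a), [1, 1, 1])   # along [111]
n110 = np.arange(1, L // 2 + 1)
q_110 = np.outer(2 * np.pi * n110 / (L * a), [1, 1, 0])   # along [110]
```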
§.§ Simulation details
For integrating the coupled equations of motion presented in Eq. (<ref>), we adopted an algorithm based on
the second order Suzuki-Trotter (ST) decomposition of the non-commuting operators <cit.>.
To obtain a reasonable level of accuracy as reflected by the energy and magnetization conservation,
an integration time step of Δ t = 1 fs was used.
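The elementary operation in such a scheme is an exact, norm-conserving precession of a single spin about its instantaneous effective field. A minimal sketch based on the Rodrigues rotation formula is given below, assuming H is expressed in eV and dt in seconds.

```python
import numpy as np

HBAR = 6.582119569e-16  # eV s

def precess_spin(S, H, dt):
    """Advance dS/dt = (1/hbar) H x S exactly: rotate S about H by |H| dt / hbar."""
    H_norm = np.linalg.norm(H)
    if H_norm == 0.0:
        return S
    k = H / H_norm                       # rotation axis
    theta = H_norm * dt / HBAR           # precession angle
    return (S * np.cos(theta)
            + np.cross(k, S) * np.sin(theta)
            + k * np.dot(k, S) * (1.0 - np.cos(theta)))
```

Within a Suzuki-Trotter sweep, spins are typically updated in groups whose members do not interact directly, with the effective fields recomputed in between, so that each elementary rotation is exact.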
For computing canonical averages of time-dependent correlation functions, we used time series obtained from microcanonical dynamical simulations
that are, in turn, initiated from equilibrium states drawn from the canonical ensemble at the desired temperature T.
Averaging over the results of multiple simulations started from different initial states
yields good estimates of the respective canonical ensemble averages <cit.>.
For generating the initial states for our microcanonical MD-SD simulations, we adhere to the following procedure.
First, we equilibrate the subspace consisting of positions and spins using the Metropolis Monte Carlo (MC) method <cit.>.
As the second step, we assign initial velocities to the atoms based on the Maxwell-Boltzmann distribution at the desired temperature T.
Finally, we perform a short microcanonical MD-SD equilibration run (typically ∼ 1000 time steps with Δ t = 1 fs),
which would ultimately bring the whole system to the equilibrium by resolving any inconsistencies
between the position-spin subspace and the velocity distribution.
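The velocity-assignment step of this procedure might look as follows (a sketch; the unit convention for the mass is an assumption and must match the rest of the MD code):

```python
import numpy as np

KB = 8.617333e-5  # Boltzmann constant in eV/K

def maxwell_boltzmann_velocities(n_atoms, mass, T, seed=None):
    """Velocities drawn from the Maxwell-Boltzmann distribution at temperature T."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, np.sqrt(KB * T / mass), size=(n_atoms, 3))
    return v - v.mean(axis=0)  # remove center-of-mass drift
```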
Fig. <ref> shows the time evolution of the instantaneous lattice and spin temperatures
as observed in a microcanonical MD-SD simulation initiated from an equilibrium state generated from the aforementioned technique for T=800 K.
Both lattice and spin temperatures fluctuate about a mean value of T=800 K,
indicating that the lattice and the spin subsystems are in mutual equilibrium.
To characterize phonon and magnon modes,
we performed simulations for the system size L=16 (8192 atoms) at temperatures T=300 K, 800 K, and 1000 K.
T = 1000 K was particularly chosen due to its vicinity to the Curie temperature of bcc iron, T_C ≈ 1043 K.
(A recent high resolution Monte Carlo study revealed the transition temperature of the particular spin-lattice model
used in our study to be T ≈ 1078 K <cit.>).
Equations of motion were integrated up to a total time of t_max = 1 ns, and the space-displaced, time-displaced correlation functions were computed
for the three principle lattice directions: [100], [110] and [111].
To increase the accuracy, we have averaged these quantities over different starting points in the time series.
Canonical ensemble averages were estimated using the results of 200 independent simulations, each initiated from a different initial state.
The time Fourier transform in Eq. (<ref>) was carried out to a cutoff time of t_cutoff = 0.5 ns.
As our primary goal is to understand the mutual impact of the phonons and magnons on their respective frequency spectra and lifetimes,
we have also performed standalone MD and SD simulations for comparison.
For MD simulations, we used the Dudarev-Derlet potential to model the interatomic interactions
while completely neglecting the spin-spin interactions.
SD simulations were conducted with the atoms frozen at perfect bcc lattice positions, and the exchange parameters
determined from the same pairwise function used for MD-SD simulations.
§ RESULTS
§.§ Vibrational excitations
For all the temperatures considered, we observe well defined excitation peaks at non-zero frequencies in the
density-density dynamic structure factor S_nn(𝐪, ω), as well as in
the longitudinal and the transverse components of the velocity-velocity dynamic structure factor: S_vv^L(𝐪, ω) and S_vv^T(𝐪, ω).
For each 𝐪 along [100] and [111] lattice directions, all three quantities show single peaks (See Fig. <ref> for an example).
The peak positions in S_nn(𝐪, ω) and S_vv^L(𝐪, ω) for the same wave vector coincide with each other as they
are both associated with the longitudinal vibrational modes, and hence convey the same information.
The peaks in S_vv^T(𝐪, ω) are associated with the transverse lattice vibrations.
Since there are two orthogonal directions perpendicular to a given wave vector 𝐪,
there are, in fact, two transverse vibrational modes for each 𝐪.
Due to the four-fold and three-fold rotational symmetry about the axes [100] and [111], respectively,
the two transverse modes for the wave vectors along these directions become degenerate <cit.>.
As a result, we only observe a single peak in S_vv^T(𝐪, ω) for the wave vectors along these directions.
We also observe single peak structures in S_nn(𝐪, ω) and S_vv^L(𝐪, ω) for the wave vectors along the [110] direction.
However, for the case of S_vv^T(𝐪, ω), one can clearly identify two distinct peaks.
This is a consequence of the two transverse modes being non-degenerate due to the reduced rotational symmetry (two-fold) about the [110] axis
in comparison to [100] and [111] directions <cit.>.
To extract the positions and the half-widths of the phonon peaks,
we fit the simulation results for the dynamic structure factor to a Lorentzian function of the form <cit.>
S(𝐪, ω) = I_0 Γ^2 / [(ω-ω_0)^2 + Γ^2],
where ω_0 is the characteristic frequency of the vibrational mode, I_0 is the intensity or the amplitude of the peak,
and Γ is the half-width at half maximum (HWHM) which is inversely proportional to the lifetime of the excitation.
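A minimal scipy sketch of this single-peak fit (the function names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, I0, w0, Gamma):
    """Lorentzian lineshape of the equation above."""
    return I0 * Gamma**2 / ((w - w0)**2 + Gamma**2)

def fit_phonon_peak(w, S, guess):
    """Fit one peak in S(q, w); guess = (I0, w0, Gamma). Returns best-fit values."""
    popt, _ = curve_fit(lorentzian, w, S, p0=guess)
    return popt
```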
The errors of the fitting parameters were estimated using the following procedure.
The complete set of correlation function estimates obtained from 200 independent simulations was divided into 10 groups,
and the data within each group were averaged over to yield 10 results sets.
Dynamic structure factors were independently computed for these 10 correlation function sets.
To estimate the errors in the fitting parameters,
we separately performed curve fits to these 10 independent dynamic structure factor estimates,
and calculated the standard deviations of the fitting parameters.
Statistical error bars obtained in this manner were found to be an order of magnitude larger than the error bars estimated by the curve-fitting tool.
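The grouping procedure described above might be sketched as follows, reusing fit_phonon_peak from the previous sketch; spectrum stands for the Fourier-transform step and is an assumed callable.

```python
import numpy as np

def grouped_fit_errors(corr_sets, spectrum, guess, n_groups=10):
    """Std. dev. of fit parameters over group-averaged dynamic structure factors.

    corr_sets : (n_runs, n_t) array of F(q, t) from independent runs (200 here)
    spectrum  : callable mapping an averaged F(q, t) to (w, S) arrays
    """
    params = []
    for group in np.array_split(corr_sets, n_groups):
        w, S = spectrum(group.mean(axis=0))
        params.append(fit_phonon_peak(w, S, guess))
    return np.std(params, axis=0)  # one error estimate per fit parameter
```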
For all the temperatures and wave vectors considered,
the Lorentzian lineshape given in Eq. (<ref>) fitted well with the peaks observed in S_nn(𝐪, ω) and S_vv^L,T(𝐪, ω).
Fig. <ref> shows an example curve fit for the MD-SD results of S_nn(𝐪, ω)
for 𝐪 = (1.1 Å-1,0, 0) at T=300 K.
To fit the two peak structure observed in S_vv^T(𝐪, ω) for the [110] direction,
we use the sum of two Lorentzians.
Using the peak positions obtained from the Lorentzian fits, one can construct phonon dispersion relations for the three principle lattice directions.
Fig. <ref> shows the dispersion curves determined from our MD-SD simulations for T=300 K,
along with the experimental results <cit.> obtained from inelastic neutron scattering.
For comparison, we have also shown the results of standalone MD simulations for the same temperature.
In general, for small to moderate q values, both MD-SD and MD dispersion curves agree well with the experimental results,
but deviations can be observed for larger q values, particularly near the zone boundaries in [100] and [111] directions.
Although the MD-SD and MD dispersion curves are indistinguishable within the resolution of Fig. <ref>,
we will show later on that there are, in fact, deviations larger than the error bars.
At temperatures in the vicinity of absolute zero,
due to the low occupation of vibrational modes,
phonons behave as weakly interacting quasiparticles that can be treated within the harmonic approximation <cit.>.
In this limit, characteristic frequencies of the phonons are well defined
and the lifetimes are practically infinite.
As the temperature is increased, phonon occupation numbers also increase,
which in turn increases the probability of mutual interactions.
As a result of such phonon-phonon scattering at elevated temperatures,
characteristic frequencies of the phonons may shift,
and the lifetimes may shorten <cit.>.
In magnetic crystals, the co-existence of phonons and magnons gives rise to another class of scattering processes,
namely, phonon-magnon scattering.
Just as phonon-phonon scattering, phonon-magnon scattering
may also lead to a shift in the characteristic phonon frequencies,
as well as shortening of the phonon lifetimes.
As the occupancy of both phonon and magnon modes increases with temperature,
these effects will be more pronounced as the temperature is increased.
To carefully examine the changes in the phonon frequency spectrum due to magnons,
we compare the characteristic frequencies determined from MD-SD simulations (ω_MD-SD) with the ones obtained from
MD simulations (ω_MD) by calculating the fractional frequency shift,
(ω_MD-SD - ω_MD)/ω_MD.
The results for the three principle directions are shown in Figs. <ref> and <ref>,
for the longitudinal and the transverse modes, respectively.
With the exception of the high frequency transverse branch along the [110] direction (TA2),
phonon frequencies shift to higher values in the presence of magnons.
In general, the shift in frequencies becomes more pronounced as the temperature is increased.
A particularly interesting behavior occurs in the longitudinal branch for the [111] direction
where we observe dips in the curves for all three temperatures at the same q value.
For all three temperatures, the frequency shift of the vibrational mode that corresponds to the bottom of the dip is close to zero.
Therefore, the frequency of this phonon mode appears to be unaffected by the presence of magnons.
Lifetimes of the phonon excitations are inversely proportional to the half-widths at half maximum of the corresponding vibrational peaks
observed in S_nn(𝐪, ω) and S_vv^L,T(𝐪, ω).
To study the impact of the magnons on the phonon lifetimes, we compare the half-widths
obtained from MD-SD simulations with that of the MD simulations.
Fig. <ref> shows the results for the longitudinal phonons.
For the longitudinal phonons at T=300 K, a marginal increase in the half-widths can be observed due to the magnons,
which becomes more pronounced as the temperature is increased.
For the case of transverse phonons, we did not observe any notable difference between the MD-SD and MD half-widths outside the error bars,
for all the temperatures considered.
§.§ Magnetic excitations
§.§.§ Transverse magnon modes
For the temperatures T=300 K and T=800 K, our results for the transverse component of the spin-spin dynamic structure factor S_ss^T(𝐪, ω)
show a single spin wave peak, that can be fitted to a Lorentzian lineshape of the form Eq. (<ref>) (See Fig. <ref> (a) for an example.).
For T=1000 K, we also observe a diffusive central peak at ω = 0, as observed in neutron scattering experiments <cit.>
and previous SD studies <cit.>.
This two-peak structure can be best captured by a function of the form <cit.>
S(𝐪, ω) = I_c exp(-ω^2/ω_c^2) + I_0 Γ^2 / [(ω-ω_0)^2 + Γ^2],
where the first term (Gaussian) corresponds to the central peak, and the second term (Lorentzian) describes the spin wave peak
(See Fig. <ref> (b) for an example.).
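For fitting, this two-component lineshape translates directly into a model function that can be handed to scipy.optimize.curve_fit just like the single Lorentzian used for the phonon peaks (a sketch):

```python
import numpy as np

def central_plus_spin_wave(w, Ic, wc, I0, w0, Gamma):
    """Gaussian central peak plus Lorentzian spin-wave peak."""
    return Ic * np.exp(-(w / wc)**2) + I0 * Gamma**2 / ((w - w0)**2 + Gamma**2)
```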
For large 𝐪 values at T=800 K and T=1000 K, spin wave peaks in S_ss^T(𝐪, ω) were found to be asymmetric,
and hence did not yield good fits to Lorentzian lineshapes.
Therefore, one cannot obtain reliable estimates of the magnon half-widths.
However, spin wave peak positions can still be determined relatively precisely, thus the magnon dispersion relations can be constructed.
Fig. <ref> shows the transverse magnon dispersion relations for small |𝐪| values along the three principle directions
as determined from MD-SD simulations at T=300 K.
In agreement with the experimental findings <cit.>, the three dispersion relations are isotropic when plotted as
functions of the magnitude of the wave vector |𝐪|.
Moreover, for small |𝐪| values, our results agree quantitatively with the experimental results for the [110] direction <cit.>.
Fig. <ref> shows the complete dispersion curves determined from MD-SD and SD simulations for T=300 K, T=800 K, and T=1000 K.
For both MD-SD and SD, the characteristic frequencies shift to lower values as the temperature is increased.
This indicates increased magnon-magnon scattering at elevated temperatures.
For all three temperatures, particularly near the zone boundaries, we can observe a marginal difference between the MD-SD and SD dispersion curves.
This, in fact, is a result of phonon-magnon scattering.
To further investigate the magnon softening due to phonons,
we calculate the fractional frequency shift of the magnons,
(ω_MD-SD - ω_SD)/ω_SD.
The results are shown in Fig. <ref> for the three principle directions.
For small q values, magnon modes shift to lower frequencies in the presence of phonons.
As q increases, the direction of the shift is reversed.
Moreover, the shift in frequencies becomes more pronounced as the temperature is increased.
Fig. <ref> compares the transverse magnon half-widths obtained from MD-SD and SD simulations for T=300 K.
Although the difference between the half-widths is negligible for small q values,
for moderate to large q values, half-widths for the MD-SD results are significantly larger than that for the SD results.
This indicates significant shortening of the magnon lifetimes due to phonon-magnon scattering.
§.§.§ Longitudinal magnon modes
Our results for the longitudinal spin-spin dynamic structure factor S_ss^L(𝐪, ω)
obtained from both MD-SD and SD simulations show many very low-intensity excitation peaks, for all wave vectors considered.
Fig. <ref> shows S_ss^L(𝐪, ω) for a small system size L=8 at T=300 K,
where we compare the SD results [panel (a)] with the MD-SD results [panel (b)]
for 𝐪 = 2 π/La(1, 0, 0).
In the context of classical Heisenberg models, Bunker et al. <cit.> showed that
the excitation peaks observed in S_ss^L(𝐪, ω) are two-spin-wave creation and/or annihilation peaks
which result from the pairwise interactions between transverse magnon modes.
For ferromagnetic systems, only spin wave annihilation peaks are present,
and their frequencies are given by
ω_ij^- (𝐪_i ±𝐪_j) = ω(𝐪_i) - ω(𝐪_j),
where 𝐪_i and 𝐪_j are the wave vectors of the two transverse magnon modes which comprise the two-spin-wave excitation.
Since the set of allowable wave vectors {𝐪_i} depends on the system size L,
the resultant two-spin-wave spectrum also varies with L.
For a real magnetic crystal where L is practically infinite,
the two-spin-wave spectrum would become continuous.
To verify whether the peaks we observe in S_ss^L(𝐪, ω) are two-spin-wave peaks,
we chose a relatively small system size (L=8)
so that the set of allowable wave vectors is reduced to a manageable size.
Then, using MD-SD and SD simulations, we separately determined the transverse magnon frequencies that correspond to the first few n_q values
along all possible lattice directions.
With this information at hand, we can predict the expected positions of all the two-spin-wave annihilation peaks using Eq. (<ref>)
for both SD and MD-SD case.
As an example, let us consider the wave vector pair 𝐪_i = (1, 1, 1) and 𝐪_j = (1, 1, 0).
Since 𝐪_i - 𝐪_j = (0, 0, 1), they produce a spin wave annihilation peak in S_ss^L(𝐪, ω)
for 𝐪 = (0, 0, 1) at the frequency ω^- = ω(𝐪_i) - ω(𝐪_j).
(Note that we have ignored the common pre-factor 2 π/La from the wave vectors.)
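This bookkeeping is easy to automate; the sketch below enumerates all annihilation peaks expected at a given q_target (an integer triplet, with the common prefactor 2π/(La) dropped) from a user-supplied table omega of transverse magnon frequencies. The function name is ours, and omega is assumed to be defined for all integer triplets within the cutoff and to satisfy omega(-q) = omega(q).

```python
import numpy as np
from itertools import product

def two_spin_wave_frequencies(q_target, omega, n_max=3):
    """Predicted two-spin-wave annihilation frequencies at wave vector q_target."""
    peaks = set()
    for qi in product(range(-n_max, n_max + 1), repeat=3):
        qj = tuple(int(c) for c in np.subtract(qi, q_target))
        if any(abs(c) > n_max for c in qj):
            continue                                  # partner outside the table
        if qi == (0, 0, 0) or qj == (0, 0, 0):
            continue
        w = omega(qi) - omega(qj)                     # annihilation-peak position
        if w > 0:
            peaks.add(round(w, 6))
    return sorted(peaks)
```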
In Fig. <ref> (a) and (b), we have superimposed the predicted spin wave annihilation peak positions
corresponding to each case.
We see an excellent match between the observed peaks and the predicted two-spin-wave peak positions,
with the exception of the particular sharp peak at ω≈ 10 meV which only appears in panel (b).
Surprisingly, the position of this peak coincides with the frequency of the longitudinal phonon mode for the same 𝐪
as determined from the peak position of S_nn(𝐪, ω) or S_vv^L(𝐪, ω).
Similar excitation peaks were observed for all wave vectors,
for all system sizes and temperatures considered.
The origin of these coupled phonon-magnon modes can be explained as follows.
Unlike transverse phonons, when a longitudinal phonon propagates along a certain lattice direction, it generates fluctuations in the local atom density
along that direction with the corresponding phonon frequency.
This, in turn, leads to fluctuations in the local density of the longitudinal components of the spins (i.e. components of the spin vectors
parallel to the net magnetization).
These longitudinal spin fluctuations propagate along with the phonon, yielding a sharp, coupled mode in the longitudinal magnon spectrum.
Fig. <ref> and Fig. <ref> show
S_ss^L(𝐪, ω) for 𝐪 = 2 π/La(1, 0, 0) at T=800 K and T=1000 K, respectively,
where we compare the SD results [panel (a)] with the MD-SD results [panel (b)].
In each figure, the inset of panel (b) shows the longitudinal density-density dynamic structure factor for the same wave vector.
In comparison to the results for T=300 K, we observe that the diffusive central peak becomes more pronounced as the temperature rises,
and many of the low-intensity two-spin-wave peaks broaden and disappear into its tail.
These observations are in qualitative agreement with previous SD studies of the ferromagnetic Heisenberg model <cit.>.
The coupled phonon-magnon peak also becomes less pronounced with increasing temperature, as the diffusive central peak becomes more pronounced.
At T=1000 K, the intensity of the peak is very low and is barely recognizable.
Above the Curie temperature, spins are randomly oriented and the vector sum of spins per unit volume will be zero on average.
Hence, the coupled phonon-magnon mode should entirely disappear;
however, it is already so faint at T=1000 K that we clearly would not have sufficient resolution to test this behavior above the Curie temperature.
We would like to point out that the existence of these coupled phonon-magnon modes is a phenomenon that so far has not been observed experimentally.
In fact, this is not surprising since the experimental detection of these peaks would be extremely challenging
due to their very low intensities.
§ CONCLUSIONS
To investigate collective phenomena in ferromagnetic bcc iron, we performed combined molecular and spin dynamics (MD-SD) simulations
at temperatures T=300 K, T=800 K, and T=1000 K.
From the trajectories of these simulations, space- and time-displaced correlation functions associated with the atomic and spin variables were calculated.
Fourier transforms of these quantities, namely, dynamic structure factors, directly reveal information regarding the
frequencies and the lifetimes of the vibrational and magnetic excitation modes.
For small q values, the dispersion relations obtained from our simulations at T=300 K agree well with the experimental results,
but deviations can be observed for large q values, especially for the transverse magnon dispersion curves.
These discrepancies can be attributed to the anharmonic effects not being faithfully captured in the embedded atom potential and the pairwise functional
representation of the exchange interaction.
In fact, Yin et al. <cit.> recently pointed out that the exchange parameters in bcc iron depend on
the local atomic environment in a complicated manner that may not be properly characterized through a pairwise distance-dependent function.
Thus, a more accurate depiction of magnetic interactions necessitates the development of sophisticated models of exchange interactions
that effectively capture the contribution of the local environment.
To understand the mutual influence of the phonons and magnons on each other,
we compared our results with that of the standalone molecular dynamics and spin dynamics simulations.
Due to phonon-magnon coupling, we observe a shift in the characteristic frequencies,
as well as a decrease in the lifetimes.
These effects become more pronounced as the temperature is increased.
Moreover, the frequency shifts and the lifetime reductions that occur in magnons due to phonons
are found to be far more pronounced than the corresponding effects experienced by phonons due to magnons.
This is not surprising considering the fact that the energy scale associated with the spin-spin interactions is about an order of magnitude smaller than
that of the atomic (non-magnetic) interactions.
A comparison of our results at different temperatures shows that the effects of spin-lattice coupling becomes more pronounced
as the temperature rises.
However, due to critical fluctuations, the size of the error bars for magnon properties increases rapidly as the temperature approaches the Curie temperature
(See Fig. <ref> (b) and Fig. <ref> for example.).
Therefore, obtaining reliable estimates of magnon properties becomes increasingly difficult as the Curie temperature is approached.
The unprecedented resolution provided by our simulations has allowed us to clearly identify
two-spin-wave peaks in the longitudinal spin-spin dynamic structure factor
with amplitudes down to six orders of magnitude smaller than that of the highest single spin wave peak observed.
In addition, in the presence of lattice vibrations,
we also observe additional longitudinal magnetic excitations with frequencies which coincide with those of the longitudinal phonons.
This is an unexpected form of longitudinal spin wave excitations that so far has not been detected in inelastic neutron scattering experiments,
presumably due to their very low intensities.
This work was sponsored by the “Center for Defect Physics”, an Energy Frontier Research Center of the Office of Basic Energy Sciences (BES),
U.S. Department of Energy (DOE); the later stages of the work of G.M.S. and M.E. was supported by the Materials Sciences
and Engineering Division of BES, US-DOE.
We also acknowledge the computational resources provided by the Georgia Advanced Computing Resource Center.
References
[1] D. C. Rapaport, The Art of Molecular Dynamics Simulation, 2nd ed. (Cambridge University Press, New York, 2004).
[2] D. Frenkel and B. Smit, Understanding Molecular Simulation: from Algorithms to Applications, 2nd ed. (Academic Press, San Diego, 2002).
[3] W. Petry, A. Heiming, J. Trampenau, M. Alba, C. Herzig, H. R. Schober, and G. Vogl, Phys. Rev. B 43, 10933 (1991).
[4] A. Heiming, W. Petry, J. Trampenau, M. Alba, C. Herzig, H. R. Schober, and G. Vogl, Phys. Rev. B 43, 10948 (1991).
[5] L. Sun and J. Y. Murthy, Appl. Phys. Lett. 89, 171919 (2006).
[6] R. Meyer and P. Entel, Phys. Rev. B 57, 5140 (1998).
[7] P. S. Branicio, J. P. Rino, F. Shimojo, R. K. Kalia, A. Nakano, and P. Vashishta, J. Appl. Phys. 94, 3840 (2003).
[8] A. Henry and G. Chen, Phys. Rev. Lett. 101, 235502 (2008).
[9] J. T. Lopez Navarrete and G. Zerbi, J. Chem. Phys. 94, 957 (1991).
[10] J. Shiomi and S. Maruyama, Phys. Rev. B 73, 205420 (2006).
[11] E. N. Koukaras, G. Kalosakas, C. Galiotis, and K. Papagelis, Sci. Rep. 5, 12923 (2015).
[12] D. P. Landau and M. Krech, J. Phys.: Condens. Matter 11, R179 (1999).
[13] S.-H. Tsai, A. Bunker, and D. P. Landau, Phys. Rev. B 61, 333 (2000).
[14] K. Chen and D. P. Landau, Phys. Rev. B 49, 3266 (1994).
[15] R. W. Gerling and D. P. Landau, Phys. Rev. B 41, 7139 (1990).
[16] R. E. Watson, M. Blume, and G. H. Vineyard, Phys. Rev. 181, 811 (1969).
[17] X. Tao, D. P. Landau, T. C. Schulthess, and G. M. Stocks, Phys. Rev. Lett. 95, 087207 (2005).
[18] A. Bunker and D. P. Landau, Phys. Rev. Lett. 85, 2601 (2000).
[19] W. Schweika, S. V. Maleyev, T. Brückel, V. P. Plakhty, and L.-P. Regnault, EPL 60, 446 (2002).
[20] Z. Hou, D. P. Landau, G. M. Stocks, and G. Brown, Phys. Rev. B 91, 064417 (2015).
[21] M. Shimizu, Rep. Prog. Phys. 44, 329 (1981).
[22] R. F. Sabiryanov, S. K. Bose, and O. N. Mryasov, Phys. Rev. B 51, 8958 (1995).
[23] R. F. Sabiryanov and S. S. Jaswal, Phys. Rev. Lett. 83, 2062 (1999).
[24] J. Yin, M. Eisenbach, D. M. Nicholson, and A. Rusanu, Phys. Rev. B 86, 214423 (2012).
[25] I. A. Abrikosov, P. James, O. Eriksson, P. Söderlind, A. V. Ruban, H. L. Skriver, and B. Johansson, Phys. Rev. B 54, 3380 (1996).
[26] M. Ekman, B. Sadigh, K. Einarsdotter, and P. Blaha, Phys. Rev. B 58, 5296 (1998).
[27] H. Hasegawa and D. G. Pettifor, Phys. Rev. Lett. 50, 130 (1983).
[28] H. C. Herper, E. Hoffmann, and P. Entel, Phys. Rev. B 60, 3839 (1999).
[29] H. Ding, V. I. Razumovskiy, and M. Asta, Acta Mater. 70, 130 (2014).
[30] S. R. Boona and J. P. Heremans, Phys. Rev. B 90, 064421 (2014).
[31] S. L. Dudarev, R. Bullough, and P. M. Derlet, Phys. Rev. Lett. 100, 135503 (2008).
[32] C. M. Jaworski, J. Yang, S. Mack, D. D. Awschalom, R. C. Myers, and J. P. Heremans, Phys. Rev. Lett. 106, 186601 (2011).
[33] I. P. Omelyan, I. M. Mryglod, and R. Folk, Phys. Rev. Lett. 86, 898 (2001).
[34] P.-W. Ma, C. H. Woo, and S. L. Dudarev, Phys. Rev. B 78, 024434 (2008).
[35] C. P. Chui and Y. Zhou, AIP Advances 4, 087123 (2014).
[36] H. Wen, P.-W. Ma, and C. Woo, J. Nucl. Mater. 440, 428 (2013).
[37] H. Wen and C. Woo, J. Nucl. Mater. 455, 31 (2014).
[38] C. P. Chui and Y. Zhou, AIP Advances 4, 037110 (2014).
[39] D. Perera, M. Eisenbach, D. M. Nicholson, G. M. Stocks, and D. P. Landau, Phys. Rev. B 93, 060402 (2016).
[40] D. Perera, D. P. Landau, D. M. Nicholson, G. Malcolm Stocks, M. Eisenbach, J. Yin, and G. Brown, J. Appl. Phys. 115, 17D124 (2014).
[41] D. Perera, D. P. Landau, D. M. Nicholson, G. M. Stocks, M. Eisenbach, J. Yin, and G. Brown, J. Phys.: Conf. Ser. 487, 012007 (2014).
[42] S. L. Dudarev and P. M. Derlet, J. Phys.: Condens. Matter 17, 7097 (2005).
[43] P. Derlet and S. Dudarev, Prog. Mater. Sci. 52, 299 (2007).
[44] P.-W. Ma and S. L. Dudarev, Phys. Rev. B 86, 054416 (2012).
[45] P.-W. Ma and S. L. Dudarev, Phys. Rev. B 90, 024425 (2014).
[46] M. E. A. Coury, S. L. Dudarev, W. M. C. Foulkes, A. P. Horsfield, P.-W. Ma, and J. S. Spencer, Phys. Rev. B 93, 075101 (2016).
[47] S. W. Lovesey, Theory of Neutron Scattering from Condensed Matter (Oxford University Press, Oxford, 1984).
[48] J. P. Hansen and I. R. McDonald, Theory of Simple Liquids, 3rd ed. (Academic Press, London, 2006).
[49] N. Anento and J. A. Padró, Phys. Rev. B 70, 224211 (2004).
[50] S.-H. Tsai, H. K. Lee, and D. P. Landau, Am. J. Phys. 73, 615 (2005).
[51] M. Krech, A. Bunker, and D. P. Landau, Comput. Phys. Commun. 111, 1 (1998).
[52] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).
[53] W. B. Nurdin and K.-D. Schotte, Phys. Rev. E 61, 3579 (2000).
[54] D. Perera, T. Vogel, and D. P. Landau, Phys. Rev. E 94, 043308 (2016).
[55] M. T. Dove, Introduction to Lattice Dynamics, Cambridge Topics in Mineral Physics and Chemistry (Cambridge University Press, Cambridge, 2005).
[56] V. J. Minkiewicz, G. Shirane, and R. Nathans, Phys. Rev. 162, 528 (1967).
[57] B. N. Brockhouse, H. E. Abou-Helal, and E. D. Hallman, Solid State Commun. 5, 211 (1967).
[58] A. A. Maradudin and A. E. Fein, Phys. Rev. 128, 2589 (1962).
[59] B. Fultz, Prog. Mater. Sci. 55, 247 (2010).
[60] H. A. Mook and J. W. Lynn, J. Appl. Phys. 57, 3006 (1985).
[61] J. W. Lynn, Phys. Rev. B 11, 2624 (1975).
[62] M. F. Collins, V. J. Minkiewicz, R. Nathans, L. Passell, and G. Shirane, Phys. Rev. 179, 417 (1969).
UT-17-02
KYUSHU-RCAPP 2017-01
IPMU17-0016
Cornering Compressed Gluino at the LHC

Natsumi Nagata^1, Hidetoshi Otono^2, and Satoshi Shirai^3

^1 Department of Physics, University of Tokyo, Tokyo 113-0033, Japan
^2 Research Center for Advanced Particle Physics, Kyushu University, Fukuoka 812-8581, Japan
^3 Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Kashiwa 277-8583, Japan
We discuss collider search strategies of gluinos which are highly
degenerate with the lightest neutralino in mass. This scenario is
fairly difficult to probe with conventional search strategies at
colliders, and thus may provide a hideaway of
supersymmetry. Moreover, such a high degeneracy plays an important role
in dark matter physics as the relic abundance of the lightest
neutralino is significantly reduced via coannihilation. In this paper,
we discuss ways of uncovering this scenario with the help of the longevity
of gluinos; if the mass difference between the lightest neutralino and
the gluino is ≲ 100 GeV and squarks are heavier than the gluino, then
the decay length of the gluino tends to be of the order of the
detector-size scale. Such gluinos can be explored in searches for
displaced vertices, disappearing tracks, and anomalously large energy
deposit by (meta)stable massive charged particles. We find that these
searches are complementary to each other, and by combining their
results we may probe a wide range of the compressed gluino region in
the LHC experiments.
§ INTRODUCTION
Supersymmetric (SUSY) extensions of the Standard Model (SM) have been
thought of as the leading candidate for physics beyond the SM. In
particular, weak-scale SUSY has various attractive features—the
electroweak scale is stabilized against quantum corrections, gauge
coupling unification is achieved, and so on—and
therefore has widely been studied so far.
This paradigm is, however, under strong pressure from the results
obtained at the Large Hadron Collider (LHC).
Direct searches of SUSY particles impose stringent limits on
their masses, especially those of colored particles such as squarks and
gluino <cit.>. In addition, the observed value of the mass of the
SM-like Higgs boson <cit.>, m_h≃ 125 GeV, may also imply that SUSY particles
are rather heavy. In the minimal supersymmetric Standard Model (MSSM),
the tree-level value of the SM-like Higgs boson mass is smaller than the
Z-boson mass <cit.>, and we need sizable
quantum corrections in order to explain the discrepancy between the
tree-level prediction and the observed value. It turns out that
sufficiently large radiative corrections are provided by stop-loop
diagrams <cit.> if the stop masses are much larger than the electroweak
scale.
The SUSY SM with heavy SUSY particles (or, equivalently, with a high
SUSY-breaking scale) has various advantages from the
phenomenological point of view <cit.>; i) severe limits from the measurements of
flavor-changing processes and electric dipole moments can be evaded
<cit.>; ii) heavy sfermions do
not spoil successful gauge coupling unification if gauginos and
Higgsinos remain around the TeV scale <cit.>; iii)
the dimension-five proton decay caused by the color-triplet Higgs
exchange <cit.>, which was problematic
for weak-scale SUSY <cit.>, is suppressed
by sfermion masses and thus the current proton decay bound can be evaded
if SUSY particles are heavy enough <cit.>, making the minimal SUSY
SU(5) grand unification <cit.> viable;
iv) cosmological problems in SUSY theories, such as the gravitino
problem <cit.> and the Polonyi problem
<cit.> can be avoided. Nevertheless, high-scale SUSY
models may have a potential problem regarding dark matter. In SUSY SMs
with R-parity conservation, the lightest SUSY particle (LSP) is
stable and thus can be a dark matter candidate. In particular, if
the LSP is the lightest neutralino, its relic abundance is determined by the
ordinary thermal freeze-out scenario. Then, it turns out that if the
mass of the neutralino LSP is well above the weak scale, its thermal relic
abundance tends to exceed the observed value of dark matter density,
Ω_ DM h^2 ≃ 0.12 <cit.>. Thus, the
requirement of Ω_ DM h^2 ≲ 0.12 imposes a severe
constraint on models with high-scale SUSY breaking.[In fact,
environmental selection with multiverse may naturally give the condition
Ω_ DM h^2 ≲ 0.12 and favor a “Spread SUSY”-type
spectrum <cit.>. ]
In order to avoid over-production of the neutralino LSP in the high-scale
SUSY scenario, it is necessary to assure a large annihilation cross
section for the LSP. A simple way to do that is to assume the neutralino
LSP to be an almost pure SU(2)_L multiplet, i.e., a wino or
Higgsino. In such cases, the LSP has the electroweak interactions and
thus has a relatively large annihilation cross section, which is further
enhanced by the so-called Sommerfeld effects <cit.>. Indeed, the thermal relic abundance of wino and
Higgsino with a mass of around 3 TeV <cit.> and 1 TeV
<cit.>, respectively, is found to be in good agreement
with the observed dark matter density Ω_ DM h^2 ≃ 0.12
<cit.>. Smaller masses are also allowed by the observation;
in such cases, their thermal relic accounts for only a part of the total
dark matter density and the rest should be filled with other dark
matter candidates and/or with non-thermal contribution via, e.g.,
the late-time decay of gravitinos <cit.>. For previous studies of wino and Higgsino dark matter,
see Refs. <cit.> and
Refs. <cit.>, respectively, and references therein.
On the other hand, bino-like dark matter in general suffers from over-production, and thus a
certain mechanism is required to enhance the annihilation cross
section. For example, we may utilize the s-channel resonant
annihilation through the exchange of the Higgs bosons (called funnel)
<cit.>. Coannihilation <cit.> may
also work if there is a SUSY particle that is degenerate with the LSP in
mass and has a large annihilation cross section; stau
<cit.>, stop
<cit.>, gluino <cit.>, wino
<cit.>, etc., can be such a candidate. In particular,
only the latter two can have a mass close to the LSP in the case of the
split-SUSY type models <cit.>.
In this paper, we especially focus on the neutralino-gluino
coannihilation case as this turns out to offer a variety of
interesting signatures at colliders and thus can be probed in various
search channels at the LHC. For a search strategy of the bino-wino
coannihilation scenario at the LHC, see Ref. <cit.>.
In order for the neutralino-gluino coannihilation to work, the
neutralino LSP and gluino should be highly degenerate in mass. For
instance, if the neutralino LSP is bino-like, the mass difference
between the LSP and gluino, Δ m, needs to be less than around
100 GeV and squark masses should be less than O(100) TeV for
coannihilation to be effective <cit.>. Such a small mass
difference makes it difficult to probe this scenario in the conventional
LHC searches as hadronic jets from the decay products of gluinos tend to
be soft. In the previous work <cit.>, however, it is
pointed out that such a compressed gluino with heavy squarks has a decay
length of ≳ O(1) mm and therefore may be probed by using
searches for displaced vertices (DVs). In fact, it was shown in
Ref. <cit.> that the DV search at the LHC can investigate
a wide range of parameter space where the correct dark matter abundance
is obtained for the bino LSP through coannihilation with
gluinos.
In this paper, we further study the prospects of the LHC searches to
probe this compressed gluino scenario. In particular, we discuss search
strategies for the very degenerate case, i.e., Δ m ≲
O(10) GeV. Such an extremely small mass difference considerably
narrows down the reach of DV searches. We however find that for such a
small mass difference, long-lived gluinos leave disappearing track
signals when they form charged R-hadrons, and thus can be probed in the
disappearing track searches. In addition, for gluinos which have a decay
length of ≳ 1 m, searches for anomalously large energy deposit
by (meta)stable heavy charged particles can be exploited. We see below that
these three searches are complementary to each other. Hence, by
combining the results from these searches we can thoroughly study the
compressed gluino scenario at the LHC.
§ PROPERTIES OF COMPRESSED GLUINO
Here we discuss the compressed gluino signatures at colliders. If
the masses of squarks are very large and/or the mass difference between the
gluino and the neutralino LSP Δ m is quite small, the gluino
decay width is strongly suppressed and its decay length can be as large
as the detector-size scale. We briefly discuss this feature in
Sec. <ref>.[For detailed discussions on the
decay of (long-lived) gluinos, see Refs. <cit.>.] When the gluino lifetime
is longer than the QCD hadronization time scale, gluinos produced at
colliders form R-hadrons. We discuss the properties of R-hadrons and their
implications on the LHC searches in
Sec. <ref>.[For previous studies on the R-hadron
properties in the split-SUSY, see Ref. <cit.>. ]
§.§ Gluino decay
The decay of gluinos is induced by the diagrams shown in
Fig. <ref>. Here, we focus on the case where the mass
difference Δ m is rather small (Δ m ≲ 100 GeV) and
squark masses are much larger than the electroweak scale; i.e.,
m≳ 10 TeV, where m denotes the
typical scale of squark masses.[We however note that gluinos
can be long-lived even though m is around the TeV scale if
the mass difference Δ m is small enough. This possibility is
briefly discussed in Sec. <ref>.] In this
case, the width of the tree-level three-body decay process shown in
Fig. <ref> scales as (Δ m)^5, and thus is strongly
suppressed for compressed gluinos.
For the loop-induced two-body decay in Fig. <ref>, on the other
hand, the decay width strongly depends not only on the mass difference
but also on the nature of the lightest neutralino. If the neutralino LSP
is a pure bino and left-right mixing of the scalar top quarks is small,
the matrix element of the two-body decay process is
suppressed by a factor of m_g - m_B
(m_g and m_B are the gluino and bino
masses, respectively), which
originates from a chirality flipping in the external lines. As a result,
the two-body decay rate also goes as Γ(g→B g) ∝ (Δ m)^5, which makes the three-body decay channel dominate the two-body one. If the LSP is a pure wino, the
two-body decay is suppressed by the SU(2)_L gauge symmetry, and thus
the three-body decay is again the dominant channel. Contrary to these
cases, if the LSP is Higgsino-like, the two-body decay is the main decay
process. In this case, the dominant contribution to this decay process
comes from the top-stop loop diagram and its matrix element
is proportional to the top mass m_t. Thus, the two-body decay rate
goes as Γ(g→H^0 g) ∝ m_t^2 (Δ
m)^3. In addition, this loop contribution is logarithmically enhanced
as the masses of the scalar top quarks get larger. The tree-level decay
process is, on the other hand, suppressed by small Yukawa couplings
since the stop exchange process, which would otherwise be large because of the top
Yukawa coupling, is kinematically forbidden for compressed
gluinos.
Eventually, we see that the gluino decay length is approximately given by
cτ_g =
 O(1-10) cm × (Δ m/10 GeV)^-5 × (m/10 TeV)^4   (bino or wino LSP) ,
 O(0.01-0.1) mm × (Δ m/10 GeV)^-3 × (m/10 TeV)^4   (Higgsino LSP) .
The dependence of these approximated formulae on Δ m is captured
by the above discussions. From these expressions, we find that
compressed gluinos generically have decay lengths of the order of the
detector size when the squark mass scale is ≳ 10 TeV. We make
the most of this observation to probe the compressed gluino scenario.
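For quick numerical estimates, this scaling can be wrapped into a small helper; the following is a sketch only, with the O(1) prefactors set to representative midpoints of the quoted ranges (an assumption, not a precise calculation).

```python
def gluino_decay_length_cm(dm_gev, msq_tev, lsp="bino"):
    """Order-of-magnitude c*tau of the gluino in cm, following the scaling above."""
    if lsp in ("bino", "wino"):
        return 3.0 * (dm_gev / 10.0)**-5 * (msq_tev / 10.0)**4
    if lsp == "higgsino":
        return 3.0e-3 * (dm_gev / 10.0)**-3 * (msq_tev / 10.0)**4  # 0.03 mm scale
    raise ValueError("lsp must be 'bino', 'wino', or 'higgsino'")
```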
There are several situations where the above tendency of gluino
decay may be altered. For the bino LSP case, the two-body decay channel can
be important if the left-right mixing in the scalar top sector is
large and Δ m is very small. This decay branch
can also be enhanced if there is a sizable bino-Higgsino mixing. In
addition, if Δ m ≪ 10 GeV, the parton-level description gets
less appropriate, and the hadronic properties of decay products
significantly affect the gluino decay. Further dedicated studies are
required to give a precise theoretical calculation of the gluino decay
rate for this very small Δ m region, which is beyond the scope of
this paper.
§.§ R-hadrons
A long-lived gluino forms a bound state with quarks and/or gluons once
they are produced at colliders. Such bound states, being R-parity odd,
are called R-hadrons <cit.>. R-hadrons are categorized into
several classes in terms of their constituents; if R-hadrons are
composed of a gluino and a pair of quark and anti-quark, gq̅q, they are called R-mesons; if they consist of a gluino and
three quarks (anti-quarks), g qqq (gq̅q̅q̅), they are called R-baryons (R-antibaryons);
a bound state which is made of a gluino and a gluon, g g, is
referred to as an R-glueball.
The production fractions of R-hadron species have a direct impact on
the gluino search sensitivities discussed below, as some of these
search strategies rely on the production of charged R-hadrons. A
computation <cit.> in which hadronization is performed
using Pythia <cit.> shows that the production
rates of R-mesons dominate those of R-baryons. The production
fraction of R-glueball is, on the other hand, theoretically unknown
and thus regarded as a free parameter. In the analysis of
Ref. <cit.>, this fraction is set to be 10%, which is
the default value used in Pythia. Then, it is found that the
fraction of R-mesons is 88.5% while that of R-baryons is only
1.6%. Among them, charged R-hadrons are 44.8%. This value
however significantly decreases if we set the R-glueball fraction to
be a larger value. Taking this ambiguity into account, in the following
analysis, we take different values for the R-glueball fraction and
regard the resultant changes as theoretical uncertainty.
The mass spectrum of R-hadrons, especially that of R-mesons and
R-glueball, also affects the R-hadron search strategy
significantly. An estimation <cit.> based on a simple mass
formula for the lowest hadronic states <cit.>, which is
derived from the color-spin interaction given by one gluon exchange,
indicates that the lightest R-meson state is “R-rho”, namely, a bound
state which consists of a gluino and a vector iso-triplet made of up and
down quarks. The mass of “R-pion” (a bound state of a gluino and a
spin-zero iso-triplet made of up and down quarks) is found to be larger
than the R-rho mass by about 80 MeV. This observation is consistent
with the calculations using the bag model <cit.> and
QCD lattice simulation <cit.>, which predict R-rho to be
lighter than R-pion by 40 MeV and 50 MeV, respectively. On the other
hand, there is controversy about the estimation of the R-glueball
mass. An estimation by means of the constituent masses of partons
shows that R-glueball is heavier than R-rho by 120 MeV
<cit.>. The bag-model calculation
<cit.> also predicts R-glueball to be slightly heavier
than R-rho. However, the lattice result <cit.> shows that
the R-rho mass is larger than the R-glueball mass by 47 MeV, though
we cannot conclude by this result that these results are incompatible
with each other since the uncertainty of this calculation is as large as
90 MeV (and the former two estimations also suffer from uncertainties of
similar size). We here note that if R-glueball is lighter than
R-rho and the mass difference between them is larger than the pion
mass, then an R-rho can decay into an R-glueball and a pion via
strong interactions. Other R-mesons may also decay into R-glueball.
This considerably reduces the number of tracks associated with charged
R-mesons, and thus weakens the discovery reach of R-hadrons. In the
following analysis, we assume that such decay channels are kinematically
forbidden, as is supported by the above calculations of the R-meson
and R-glueball mass spectrum. As for
R-baryons, the flavor singlet J=0 state, which has a non-zero
strangeness, is the lightest <cit.>. In addition, there are flavor octet states which are
stable against strong decays and heavier than the singlet state
by about a few hundred MeV. Their weak-decay lifetime is likely to be
sufficiently long so that they can be regarded as stable at colliders
<cit.>.
While R-hadrons are propagating through a detector, they may scatter
off nuclei in it. Such processes are potentially important since they may change the R-hadron species. For instance, by scattering off a nucleon in the detector
material, an R-meson or an R-glueball can be converted into an
R-baryon while emitting a pion.
However, the reverse process is unlikely since pions
are rarely found in the detector material and the process itself suffers
from kinematical suppression. For this reason, although R-mesons and
R-glueball are dominantly produced at the outset, we have a sizable
fraction for R-baryons in the outer part of detectors, such as in the
Muon Spectrometer. The nuclear reaction rates of R-hadrons are
evaluated in Refs. <cit.>, and it is found that an R-hadron
interacts with nucleons about five times while propagating in 2 m of
iron. Therefore, most R-mesons and R-glueballs may be converted into
R-baryons before they enter into the Muon Spectrometer. In the
analysis discussed below, however, we focus on the R-hadron searches
using the Inner Detector, on which the nucleon interactions have little
impact as the matter density up to the Inner Detector region is
very low. This makes our search strategies free from uncertainties
originating from the estimation of R-hadron interactions in
detectors—we neglect these effects in the following analysis.
§ LHC SEARCHES
Next, we discuss the LHC signatures of the compressed gluinos. Depending
on the gluino lifetime and the gluino-LSP mass difference, we need
to adopt different strategies to catch gluino signals. In this paper, we
focus on the ATLAS detector; the performance of the CMS detector is
similar, though.
§.§ ATLAS experiment
The ATLAS detector is located at one of the interaction points of the LHC,
which consists of the Inner Detector, the calorimeters, the Muon
Spectrometer, and the magnet systems <cit.>. The long-lived
gluino searches discussed in this paper make full use of the Pixel
detector and the SemiConductor Tracker (SCT) in the Inner
Detector. Various dedicated techniques, which have been developed to
search for new long-lived particles, can be applied to the long-lived
gluino searches in order to maximize its discovery potential. Here, we
briefly review the detectors used in the searches relevant to this work.
§.§.§ Pixel detector
The Pixel detector is the sub-detector closest to the interaction point,
which has a four-layer cylindrical structure with a length of about
800 mm in the barrel region. The innermost layer called
Insertable B-Layer (IBL), which was installed before the LHC-Run2
started <cit.>, has silicon pixel sensors of
50 × 250 μm² and is located at a radius of 33
mm. The other layers have silicon pixel sensors of 50 × 400 μm² at radii of 50.5 mm, 88.5 mm, and
122.5 mm. The Pixel detector can measure the energy deposit
along the trajectory of each charged particle, i.e., dE/dx,
which is sensitive to slow-moving (meta)stable particles according to
the Bethe-Bloch formula.
§.§.§ SCT
The SCT surrounds the Pixel detector, which has four layers in the
barrel region with a length of about 1500 mm at radii of
299 mm, 371 mm, 443 mm, and
514 mm. A module on each layer consists of two 80 μm-pitch silicon strip sensors with a stereo angle of 40 mrad
between the strip directions. The SCT does not have the capability to measure
dE/dx in contrast to the Pixel detector, since the SCT employs binary
readout architecture.
§.§ ATLAS searches
Gluinos give rise to various signatures at colliders depending on the
decay lifetime. In this work, we consider the following four search
strategies,[Some of these search strategies have already been
considered in the context of long-lived gluino searches in
Refs. <cit.>.] which are sensitive to
different ranges of the gluino decay length:
§.§.§ Prompt decay <cit.>:
cτ_g≲ 1 mm
Searches for a new particle which is assumed to decay at the interaction
point are also sensitive to long-lived particles. The ATLAS experiment
reconstructs tracks whose transverse impact parameters (d_0) are less
than 10 mm, then checks a correspondence with primary vertices.
As a result, a portion of the decay vertices of long-lived particles is
merged with one of the primary vertices.
Generally, for metastable gluinos, these inclusive searches become less
effective for cτ_g≫ 1 cm, since jets from the
gluino decay are displaced from the primary vertex and fail the event
selection criteria <cit.>. In the compressed gluino case,
however, these searches are less sensitive to the gluino lifetime, since
it is jets from the initial state radiation that play a main role in the
conventional jets + MET searches. For this reason, even for a gluino
with a decay length greater than O(1) m, the resultant mass
bound will be similar to that for a prompt decay compressed gluino.
§.§.§ Displaced-vertex search <cit.>:
cτ_g≳ 1 mm
A long-lived gluino decaying to quarks or a gluon leaves a displaced
vertex (DV) away from the interaction point. In order to reconstruct tracks
from such a DV, the requirement on the transverse impact parameter for
tracks is loosened such that 2 mm < |d_0| < 300 mm. As a
result, the sensitivity is maximized for particles with a decay length
of O(10) mm.
The sensitivity becomes worse as the mass difference between gluino and
the LSP gets smaller <cit.>, since the invariant mass of
DVs is required to be larger than 10 GeV in order to separate
signal events from background fake vertices. Due to this requirement,
gluino-LSP mass differences of ≲ 20 GeV are hard to probe.
§.§.§ Disappearing-track search <cit.>:
cτ_g≳ 10 cm
Originally, the disappearing-track search has been developed for the
search of long-lived charged winos with the neutral wino being the
LSP <cit.>. This technique may also be applicable to long-lived
gluinos for the
following reason. As we discuss in the previous section, a certain
fraction of long-lived gluinos form charged R-hadrons. If the gluino and
the LSP are degenerate in mass, the track associated with a charged
R-hadron seems to disappear when the gluino in the R-hadron decays,
since the jet emission from the gluino decay is very soft. As mentioned
above, the DV search does not work efficiently when the gluino-LSP mass
difference is very small. Such a degenerate mass region can instead be
covered by the disappearing-track search.
A candidate track in the disappearing-track search should have four hits
in the Pixel detector and the SCT with no activity after the last hit
required. Thanks to the installation of the IBL, the minimum length of
disappearing tracks which can be searched for by this strategy is
shortened from 299 mm to 122.5 mm in the LHC Run 2. This allows the range
of decay lengths covered by the disappearing-track search to be slightly
wider than that in the DV search.
§.§.§ Pixel dE/dx search <cit.>:
cτ_g≳ 1 m
A particle with a mass of the order of the electroweak scale or larger
tends to travel with a low velocity after it is produced at
the LHC, which may be observed as a large dE/dx in the Pixel
detector. Hence, by searching for this signature, we can probe charged
R-hadrons. While a minimum-ionizing particle is expected to give ∼1.2 MeV·cm^2/g of dE/dx, the threshold for
the Pixel dE/dx search is set to be 1.8 MeV·cm^2
/g with a small correction depending on η.
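A rough estimate of the velocity range probed by this threshold follows from the low-velocity Bethe-Bloch scaling dE/dx ∝ 1/β² (logarithmic term neglected). The sketch below, which simply normalizes to the minimum-ionizing value quoted above, is only meant to illustrate the order of magnitude.

```python
import math

MIP = 1.2        # MeV cm^2/g, minimum-ionizing value quoted above
THRESHOLD = 1.8  # MeV cm^2/g, Pixel dE/dx selection threshold

def dEdx(beta):
    # Crude low-velocity Bethe-Bloch scaling, normalized to the MIP value
    # (logarithmic rise neglected).
    return MIP / beta**2

# The threshold is crossed for particles slower than beta_max:
beta_max = math.sqrt(MIP / THRESHOLD)
print("sensitive to beta < %.2f" % beta_max)  # roughly 0.82
```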
For the track selection, at least seven hits in the Pixel detector and
the SCT are required, which corresponds to a minimum track length of
371 mm. Note that this search strategy does not require the decay of
particles, and thus is also sensitive to completely stable
particles. For this reason, the range of decay lengths covered by the
Pixel dE/dx search is quite broad.
§.§ LHC Prospects
Now, we show the sensitivities of the searches listed in
Sec. <ref> to the compressed gluino scenario.
In this study, we use the program Madgraph5
<cit.>+Pythia8
<cit.>+Delphes3
<cit.> and estimate the production cross sections of
SUSY particles with Prospino2 <cit.> or NLL-fast <cit.>.
In Fig. <ref>, we show the prospects of each
search in the c τ_g̃–m_g̃ plane with an
integrated luminosity of 40 fb^-1 at the 13 TeV LHC. The blue dashed
lines show the expected limits from the DV search with the
gluino-LSP mass difference set to be Δ m =100 GeV, 20 GeV, and
15 GeV from top to bottom. We use the same event selection requirements
as in the 8 TeV study <cit.> except that we require
missing energy E_ T^ miss to be greater than 200 GeV as a
trigger, which was E_ T^ miss > 100 GeV in the 8 TeV study
<cit.>. We expect the number of background events for
this signal to be as small as in the case of the 8 TeV run
<cit.>; here, we assume it to be 0–10 and the systematic
uncertainty of the background estimation to be 10%. The upper (lower)
border of each band corresponds to the case where the number of the
background events is 0 (10). It is found that the sensitivity of the DV
search is maximized for a gluino with a decay length of ∼ 10 cm,
and may reach a gluino mass of about 2.2 TeV (1.2 TeV) for Δ m =
100 GeV (15 GeV). We also find that the DV search becomes less powerful
when the gluino-LSP mass difference gets smaller, as mentioned in
Sec. <ref>.
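To get a feeling for how these background assumptions translate into an expected limit, one can use a minimal Poisson counting estimate; the sketch below ignores the CLs machinery and systematic uncertainties used in the actual analysis, and is only meant to show the order of magnitude of the excluded signal yield.

```python
from scipy.stats import poisson

def s95(b, n_obs=None):
    """Expected 95% CL upper limit on the signal yield for an expected
    background b, assuming n_obs = b events are observed (simple Poisson
    counting, no systematics)."""
    n = int(round(b)) if n_obs is None else n_obs
    s = 0.0
    while poisson.cdf(n, s + b) > 0.05:
        s += 0.01
    return s

print(s95(0.0))   # ~3.0 events for a background-free search
print(s95(10.0))  # the limit degrades considerably for b = 10
```

This illustrates why the upper and lower borders of the bands, corresponding to 0 and 10 background events, differ noticeably.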
The red dashed lines show the expected reaches of the disappearing-track
search. As we discussed above, this search relies on the production of
charged R-hadrons, but the estimate of the charged R-hadron
production rates suffers from uncertainty due to the unknown
R-glueball fraction. To take this uncertainty into account, in
Fig. <ref>, we set the R-glueball fraction
to be 10% and 50% in the top and bottom lines, respectively. To estimate the
prospects, we basically adopt the same selection criteria as the charged
wino search at the LHC Run 1 <cit.>, including the isolation
criteria and the detection efficiency of a disappearing track. To consider
the update in the LHC Run 2, we refer to the result presented in
Ref. <cit.>. In addition, to take into account the
improvement due to the IBL, we assume a detection efficiency of 60% for
|η|<1.5 and 13 < r < 30 cm, which was zero in the Run 1 study
<cit.>. We further adopt the following kinematic cut:
E_ T^ miss > 140 GeV and the leading jet with a transverse
momentum of P_ T>140 GeV. The number of background events is
assumed to be 10 and its uncertainty is supposed to be 10% for
40 fb^-1. The result in Fig. <ref> shows
that the disappearing track search is sensitive to a
decay length of ∼ 1 m, and can probe a longer lifetime region than
the DV search. Notice that even for c τ_g̃ =
O(10) cm, around which the sensitivity of the DV search is
maximized, the reach of the disappearing-track search may exceed that of
the DV search if the gluino-LSP mass difference is very small.
The expected reach of the Pixel dE/dx search is plotted in the green
long-dashed lines. Here again, we set the R-glueball fraction to be
10% and 50% in the top and bottom lines, respectively, to show the
uncertainty from the unknown R-glueball fraction. For this analysis,
we adopt the same event selection as in the study of ATLAS with
√(s)=13 TeV and an integrated luminosity of 3.2 fb^-1
<cit.>, where the R-glueball fraction is set to be
10%. We have estimated the number of background events by rescaling the
result of 3.2 fb^-1 to 40 fb^-1. We have also
assumed that the gluino-LSP mass difference is so tiny that the missing
energy, which is required in Ref. <cit.>, comes from
initial state radiations. As can be seen, the Pixel dE/dx search can
probe c τ_g̃≳ 1 m, and its sensitivity does not
decrease for c τ_g̃≫ 1 m, contrary to the previous two
cases.
To show the prospects of the above searches for probing the compressed
gluino scenario, in Fig. <ref>, we show their expected
reaches with the 13 TeV 40 fb^-1 LHC data in the
m_g̃–Δ m plane, where the shaded areas with the blue
dashed, red solid, green long-dashed, and orange dash-dotted borders
correspond to the searches of DVs, disappearing tracks,
anomalous energy deposit (dE/dx) in the Pixel detector, and prompt
gluino decays, respectively. We also show a contour plot for the gluino
decay length cτ_g̃ in the black dotted lines. All squark
masses (collectively denoted by m_q̃) are set to be 10 TeV
and 50 TeV in Figs. <ref> and <ref>,
respectively, and the LSP is assumed to be a pure bino. To obtain the
expected sensitivity of the prompt gluino decay search with the
40 fb^-1 data, we adopt the event cut criteria used in
Ref. <cit.> and estimate the number of background
events by rescaling the result of the 13.3 fb^-1 case.
These figures show that the highly compressed region can
be probed with the disappearing-track search and the Pixel dE/dx
search, while if the gluino-LSP mass splitting is large enough (such
that cτ_g̃≪ 1 mm), only the traditional gluino searches
can have sensitivities. Between these two regions, the DV
search offers the best sensitivity. We also find that the reach of the
DV search strongly depends on the squark masses.
To obtain a gluino decay length of cτ_g̃∼
10 cm, to which the DV search is most sensitive, a
smaller Δ m is required for lighter squarks (see
Eq. (<ref>)). As noted above, the sensitivity of the
DV search is considerably reduced for a small Δ m;
for this reason, the reach of the DV search shrinks for
small squark masses. On the other hand, the disappearing-track and Pixel
dE/dx searches are rather robust against changes of m_q̃, as
these searches do not rely on the jet emission from the gluino decay. In
any case, Fig. <ref> shows that the experimental strategies
discussed in Sec. <ref> play complementary roles in searching for
long-lived gluinos, and by combining the results from these searches we
can probe a wide range of parameter space in the compressed gluino
scenario.
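Although Eq. (<ref>) is not reproduced here, the qualitative dependence quoted above can be captured by the standard scaling for gluino decay through an off-shell squark, Γ ∝ Δm⁵/m_q̃⁴, so that cτ_g̃ ∝ m_q̃⁴/Δm⁵. A sketch of this scaling, normalized to an arbitrary reference point (the reference values below are placeholders, not numbers from the text), reads:

```python
def ctau_scaled(dm, m_sq, ctau_ref=1.0, dm_ref=100.0, m_sq_ref=10e3):
    """Gluino decay length from the naive scaling Gamma ~ dm^5 / m_sq^4.
    All masses in GeV; ctau_ref is a placeholder normalization at
    (dm_ref, m_sq_ref)."""
    return ctau_ref * (dm_ref / dm)**5 * (m_sq / m_sq_ref)**4

# E.g., raising m_sq from 10 TeV to 50 TeV lengthens ctau by 5^4 = 625,
# while doubling dm shortens it by 2^5 = 32.
print(ctau_scaled(200.0, 50e3) / ctau_scaled(100.0, 10e3))  # ~19.5
```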
§ CONCLUSION AND DISCUSSION
The compressed-gluino parameter region in the MSSM, where gluino and the
LSP are highly degenerate in mass, can evade the over-abundance of dark
matter and thus is still viable. Therefore, it is important to test this
possibility experimentally. It is however difficult to probe this
scenario using the conventional search strategy at the LHC based on the
jets plus missing energy signatures, since jets emitted from the gluino
decay tend to be very soft. In this paper, we have discussed the
LHC search strategies which are sensitive to the compressed gluino
scenario. They include the searches of DVs, disappearing
tracks, and anomalous energy deposit in the Pixel detector, on top of
the ordinary inclusive searches. Then we have found that these searches
are indeed sensitive to long-lived gluinos.
In summary, we show the cover areas of these search strategies in the
Δ m–cτ_g plane in Fig. <ref>, where
we set the gluino mass to be 1.5 TeV and consider the 13 TeV LHC run
with an integrated luminosity of 40 fb^-1. As seen in this figure,
depending on the lifetime and the gluino-LSP mass difference, we can
adopt different search strategies. When the gluino decay length is
≳ 1 m, the Pixel dE/dx search offers the best sensitivity to
gluinos. For 0.1 m≲ cτ_g̃≲ 10 m, the disappearing-track
search is quite promising. The DV search can cover the
range of 1 mm≲ c τ_g̃≲ 1 m, though
its sensitivity strongly depends on the gluino-LSP mass difference. The
c τ_g̃ < 1 mm region can be probed by ordinary
prompt-decay gluino searches. As a consequence, these searches
complement each other, which allows us to investigate a broad range of
the compressed gluino region in the LHC experiments.
Notice that although we have focused on the cases with a high
sfermion mass scale (m≳ 10 TeV), the search
strategies discussed in this paper, especially the disappearing-track
search and the dE/dx search, can also be powerful for
m < 10 TeV if the gluino-LSP mass difference is small
enough to make gluino long-lived. Such a possibility may be
interesting since it offers a refuge for (semi)natural SUSY models, which
may be uncovered by the long-lived gluino searches.
In this paper, we have discussed long-lived gluinos whose longevity
comes from the small mass difference between the gluino and the
LSP. In fact, such a long-lived particle may appear not only in the gluino case but also in more general “co-LSP”
scenarios. For instance, in the
very compressed stau-LSP and stop-LSP cases, metastable charged
particles may appear and thus can be observed in the long-lived particle
searches at the LHC. As it turns out, for
instance, by using the setup of the disappearing track search discussed
above, we can probe a right-handed stau (stop) with a mass of around 200
(950) GeV at the LHC Run 2 for cτ=10 cm. Detailed studies of such
generalization are out of the scope of the present paper and will be
discussed elsewhere.
§ ACKNOWLEDGMENTS
This work was supported by World Premier International Research Center
Initiative (WPI), MEXT, Japan.
Full jet in quark-gluon plasma with hydrodynamic medium response

Yasuki Tachibana, Ning-Bo Chang, and Guang-You Qin
(arXiv:1701.07951 [nucl-th], 27 Jan 2017)
yasuki.tachibana@mail.ccnu.edu.cn
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China
changnb@mail.ccnu.edu.cn
Institute of Theoretical Physics, Xinyang Normal University, Xinyang, Henan 464000, China
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China
guangyou.qin@mail.ccnu.edu.cn
Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei, 430079, China
We study the nuclear modifications of full jets and their structures in relativistic heavy-ion collisions including the effect of hydrodynamic medium response to jet quenching.
To study the evolutions of the full jet shower and the traversed medium with energy and momentum exchanges between them,
we formulate a coupled jet-fluid model consisting of a set of jet transport equations and relativistic hydrodynamics equations with source terms.
In our model, the full jet shower interacts with the medium and gets modified via collisional and radiative processes during the propagation.
Meanwhile, the energy and momentum are deposited from the jet shower to the medium and then evolve with the medium hydrodynamically.
The full jet defined by a cone size in the final state includes the jet shower and the particles produced from jet-induced flow.
We apply our model to calculate the full jet energy loss and the nuclear modifications of jet rate and shape in Pb+Pb collisions at 2.76 A TeV.
It is found that the inclusion of the jet-induced flow contribution leads to a stronger jet-cone size dependence of jet energy loss and jet suppression.
Jet-induced flow also has a significant contribution to jet shape function and dominates at large angles away from the jet axis.
§ INTRODUCTION
In relativistic heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), the deconfined state of quarks and gluons, namely quark-gluon plasma (QGP), has been created.
One of the remarkable properties of the produced QGP is its strong collective motion, which has been well described by relativistic hydrodynamics with an extremely small viscosity-to-entropy ratio <cit.>.
The hydrodynamic behavior of the QGP has been confirmed by the large anisotropic flows as measured by experiments, which implies strong interactions among the QGP constituents.
In addition to the collective phenomena, one can study the novel features of the QGP as strongly interacting matter via jet quenching <cit.>.
In high-energy nucleus-nucleus collisions, energetic partons that develop into jet showers may be produced in hard processes at a very early time.
During their propagation through the QGP, the jet partons interact with the medium constituents via collisional and radiative processes, which change the momenta of the shower partons in the hard jet. The phenomena related to the modification of the jets and their inner structure by the QGP medium effect are commonly referred to as jet quenching in a broad sense.
In recent experiments of Pb+Pb collisions at the LHC, detailed measurements of fully reconstructed jets with very high transverse momentum have become available, thanks to the high center of mass energy.
One of the most general consequences of jet quenching seen in the experiments is the suppression of jet rates due to the energy loss of full jets with a given cone size R = √(Δη_p^2 + Δϕ_p^2) <cit.>.
Furthermore, the measurements of substructures inside the full jets have provided us more detailed information on jet quenching <cit.>.
Motivated by the detailed measurements, a lot of theoretical effort has been devoted to the study of jet-medium interaction and the medium effect on the full jets with showering structures in relativistic heavy-ion collisions
<cit.>.
The interactions between the energetic partons in the jet shower and the QGP medium constituents exchange energies and momenta between them.
For the QGP fluid, the jet shower is a bunch of fast-moving energy-momentum deposition sources, which are supposed to excite the medium fluid, and induce flows propagating with the jet as a collective medium response <cit.>.
The jet-induced flow carries the energy and momentum deposited by the jet and enhances the hadron emission from the medium around the direction of the jet axis.
Some of these enhanced hadrons are detected as part of the jet together with the fragments from the jet shower and affect the final state full jet energy and structures <cit.>.
Since the structure of the jet-induced flow is characterized by the bulk properties of the QGP, e.g., the sound velocity and viscosities, detailed investigations of the contribution of the collective medium response to the jet structure may provide not only a precise interpretation of the experimental data, but also unique opportunities to study the bulk properties of the QGP through jet events in relativistic heavy-ion collisions.
In this work, we study the nuclear modification of full jet structures in the QGP medium including the contributions from the hydrodynamic medium response to jet-deposited energy and momentum.
We employ a coupled full jet shower and QGP fluid model which is composed of a set of transport equations for the jet shower evolution and the hydrodynamic equations with source terms for the QGP medium evolution.
The transport equations describe the evolution of the three-dimensional momentum distributions of partons in the jet shower <cit.>, including the collisional energy loss, the transverse momentum broadening, and medium-induced partonic splittings for all partons within the showering jet.
The space-time evolution of the QGP medium is described by (3+1)-dimensional relativistic ideal hydrodynamic equations with source terms <cit.>.
The source terms account for the transfer of the deposited energy and momentum from the jet shower to the QGP fluid, and are constructed with the evolving distributions of the partons in the jet shower
obtained as solutions of the jet transport equations.
Based on our coupled jet-fluid model, we perform simulations of jet events in Pb+Pb collisions at 2.76A TeV, investigate the flow induced in the medium as a hydrodynamic response to jet quenching, and study how the jet-induced flow affects the full jet structures in the final state.
Our study shows that the contribution of the particles originating from the jet-induced flows increases the jet-cone size dependence of the full jet energy loss.
It is also found that jet-induced flow has a significant contribution to the final state full jet shape, and dominates jet shape function at very large angles away from the jet direction.
The paper is organized as follows. In Sec. <ref>, we present the formulation of our coupled full jet shower and QGP fluid model used in this work.
In Sec. <ref>, we present and discuss the results from the simulations of jet events in Pb+Pb collisions at 2.76A TeV.
We will focus on the effect of the hydrodynamic medium response to the jet quenching on the final state full jet observables. Section <ref> is devoted to the summary and concluding remarks of this work.
§ THE COUPLED JET-FLUID MODEL
§.§ Jet Shower Evolution in Medium
Jets are collimated clusters of the particles originating from high-p_T partons produced in early stage partonic hard scatterings.
The produced high-p_T parton successively radiates partons and develops a shower of partons due to its high virtuality.
In relativistic heavy-ion collisions, the interactions between the propagating jet and the constituents of the QGP medium, including collisional and radiative processes, change
the momenta of the jet shower partons, and thus modify the energy
as well as the structure of the full jet defined by a cone size.
In this work, to describe the time-evolution of the jet shower structure driven by the interaction with the medium, we employ a set of coupled transport equations for the energy and transverse momentum distributions of the partons contained in the jet shower,
f_i(ω_i, k_i⊥^2,t)=dN_i(ω_i, k_i⊥^2,t)/dω_i dk_i⊥^2,
where the index i denotes the parton species (quark or gluon), ω_i is its energy, and k_i⊥ is its transverse momentum with respect to the jet axis.
The transport equations have the following generic form <cit.>:
d/dtf_j(ω_j, k_j⊥^2, t) = (ê_j ∂/∂ω_j
+ 1/4q̂_j ∇_k_⊥^2)f_j(ω_j, k_j⊥^2, t)
+∑_i∫ dω_idk_i⊥^2 dΓ̃_i→ j(ω_j, k_j⊥^2|ω_i, k_i⊥^2)/dω_j dk^2_j⊥dt f_i(ω_i, k_i⊥^2, t)
-∑_i ∫ dω_idk_i⊥^2 dΓ̃_j→ i(ω_i, k_i⊥^2|ω_j, k_j⊥^2)/dω_i dk^2_i⊥dtf_j(ω_j, k_j⊥^2, t).
The first two terms on the right hand side in Eq. (<ref>) describe the energy-momentum changes of the jet partons via
scatterings with the medium constituents.
The first term accounts for the energy-momentum changes in the longitudinal direction with respect to the jet axis; such contribution is called collisional energy loss and its strength is determined by the longitudinal momentum loss rate ê=d⟨ E⟩/dt.
The momentum changes in the transverse direction, whose effect is called transverse momentum broadening, is taken into account through the second term, with the exchange rate of transverse momentum squared q̂=d⟨Δ p_⊥^2⟩/dt <cit.>.
The last two terms are the contributions from the medium-induced partonic splitting processes, including the gain term coming from the radiation of parton j with ω_j and k_j⊥ from the parton i with ω_i and k_i⊥ and the loss term for the radiation of parton i from parton j.
It should be noted that the transport equations (<ref>) are coupled to each other through the terms for the medium-induced parton splittings, i.e., the gain term for the process i→ j in the transport equation for parton j also appears as a loss term in the transport equation for parton i.
The medium-induced parton splittings change the number of partons and the internal momentum distribution of the jet, but do not change the total energy and momentum inside the full jet (for sufficiently large cone size) because they are conserved in the splitting processes.
For the rates of the medium-induced partonic splittings, we employ the results from higher-twist jet energy loss formalism <cit.>:
dΓ_i→ j(ω_j, k_j⊥^2|E_i)/dω_j dk_j⊥^2 dt = 2 α_s/πq̂_g x P_i→ j(x) /ω_j k_j⊥^4sin^2 (t - t_i/2τ_f).
Here, P_i→ j(x=ω_j/E_i) is the vacuum splitting function for the process i→ j with ω_j the energy of the radiated parton and E_i the energy of the parent parton, τ_f = 2E_ix(1-x)/k_j⊥^2 is the formation time of the radiated parton with k_j⊥ the transverse momentum with respect to the propagation direction of the parent parton, and t_i is the production time of the parent parton.
To obtain the splitting rates of the form dΓ̃_i→ j(ω_j, k_j⊥^2|ω_i, k_i⊥^2)/dω_j d^2k_j⊥dt
in Eq. (<ref>), the multiplication with the Jacobian 𝒥 = |∂ k_ij⊥^2/∂ k_j⊥^2| is necessary (see Ref. <cit.> for more details).
In the calculation, we impose the constraint that medium-induced radiation is allowed only for partons whose formation times have been reached (t > τ_f).
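For illustration, the rate in Eq. (<ref>) can be evaluated numerically; the sketch below assumes the q → qg channel with a fixed coupling α_s = 0.3 (an illustrative choice, not a parameter quoted in the text) and uses ħc = 0.1973 GeV·fm to convert between GeV and fm.

```python
import math

ALPHA_S = 0.3          # illustrative fixed coupling
CF = 4.0 / 3.0
HBARC = 0.1973         # GeV fm

def P_q_to_g(x):
    # Vacuum q -> q g splitting function (x = energy fraction of the gluon).
    return CF * (1.0 + (1.0 - x)**2) / x

def tau_f(E, x, kT):
    # Formation time 2 E x (1 - x) / kT^2, converted to fm.
    return 2.0 * E * x * (1.0 - x) / kT**2 * HBARC

def dGamma(omega, kT2, E, qhat_g, t, t_i):
    """Higher-twist rate dGamma / (domega dkT^2 dt) for q -> q g;
    omega, kT2, E in GeV units, qhat_g in GeV^2/fm, t and t_i in fm."""
    x = omega / E
    tf = tau_f(E, x, math.sqrt(kT2))
    return (2.0 * ALPHA_S / math.pi) * qhat_g * x * P_q_to_g(x) \
           / (omega * kT2**2) * math.sin((t - t_i) / (2.0 * tf))**2
```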
In our model, the strength of the each contribution of the medium modification in Eq. (<ref>) is controlled by two transport coefficients: ê for collisional energy loss,
and q̂ for transverse momentum broadening and medium-induced partonic splittings.
Assuming that the medium is in almost local thermal equilibrium, the fluctuation-dissipation theorem can relate these two transport coefficients to each other: q̂ = 4T ê <cit.>.
Also since q̂ for quarks and gluons are related by a color factor, q̂_g/q̂_q = C_ A/C_ F, only one transport coefficient (we choose q̂_q for quark) determines the sizes of all the medium effects in Eq. (<ref>).
In this work, we use q̂_q of the form,
q̂_q (x) = q̂_q,0[T(x)/T_0]^3 p· u(x)/p_0.
where T_0=T(τ=τ_0,x=0,y=0,η_ s=0) is the initial temperature at the center of the QGP medium produced in central Pb+Pb collisions at 2.76A TeV at the LHC, p^μ is the four-momentum of the propagating parton, u_μ is the flow four-velocity of the medium, and q̂_q,0 is the exchange rate of transverse momentum squared for quark in a static QGP medium at T_0. The factor p· u/p_0 is introduced to account for the flow effect in a non-static medium <cit.>.
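In practice, the local value of q̂ entering the transport equations can be evaluated as in the following sketch. Here T0 = 0.5 GeV is a placeholder for the initial central temperature (the text does not quote its numerical value), the default q̂_{q,0} = 1.7 GeV²/fm is the value adopted later in this subsection, and the color-factor and fluctuation-dissipation relations are those stated above.

```python
CA_OVER_CF = 9.0 / 4.0

def qhat_q(T, p_dot_u, p0, qhat_q0=1.7, T0=0.5):
    """Local quark transport coefficient, Eq. (qhat):
    T, T0 in GeV; qhat_q0 in GeV^2/fm; p_dot_u / p0 is the flow factor."""
    return qhat_q0 * (T / T0)**3 * p_dot_u / p0

def qhat_g(T, p_dot_u, p0):
    # Gluon coefficient via the color factor CA/CF = 9/4.
    return CA_OVER_CF * qhat_q(T, p_dot_u, p0)

def ehat(qhat, T):
    # Fluctuation-dissipation relation: qhat = 4 T ehat.
    return qhat / (4.0 * T)
```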
We solve the transport equations (<ref>) numerically to describe the jet evolution in terms of the energy and transverse momentum distributions of the partons contained in the jet shower.
The initial condition for the full jet shower is constructed from Monte Carlo simulation <cit.>.
The jet shower evolves according to the transport equations
during the propagation through the QGP medium with the local temperature
above T_c = 160 MeV.
We introduce a minimum energy ω_ cut for the partons in the jet shower.
When the energy of the partons in the jet shower becomes below ω_ cut during their evolution, they are considered to be absorbed in the medium.
We put the energy and momentum of the absorbed partons into the medium together with the collisional energy loss and transverse momentum broadening terms in Eq. (<ref>).
The same cut-off energy is also used for the medium-induced partonic splittings, i.e., only partons with energies above ω_ cut can be radiated; this takes into account, to some extent, the balance between radiation and absorption.
In this study, we set q̂_q,0=1.7 GeV^2/fm and ω_ cut=1 GeV. These values provide a good description to the nuclear modification factor R_ AA for inclusive jet spectrum in most central Pb+Pb collisions at 2.76 A TeV measured by the ATLAS, ALICE, and CMS Collaborations <cit.>.
Our value of q̂_q,0 is also consistent with the one obtained by JET Collaboration <cit.>.
§.§ Hydrodynamic Equations with Source Terms
The conventional way to describe the space-time evolution of the QGP is to use the hydrodynamic equations ∂_μ T_ QGP^μν=0,
where T_ QGP^μν is the energy-momentum tensor of the QGP fluid.
This equation represents the energy-momentum conservation only in the fluid.
However, in the case that jets propagate through the QGP, the QGP and the jets exchange their energies and momenta through the scatterings between their constituents.
Therefore, the energy-momentum conservation is satisfied not for the QGP only, but for the combined system of the QGP and the jets:
∂_μ[T_ QGP^μν(x)+T_ jet^μν(x)]=0.
Here
T_ jet^μν is the energy-momentum tensor of the jet shower.
Assuming that the energy and momentum deposited by the jets are quickly thermalized, we model the QGP as an ideal fluid in local equilibrium whose energy-momentum tensor can be decomposed as:
T^μν_ QGP=(ϵ+p)u^μu^ν-pη^μν,
where ϵ is the energy density, p is the pressure, u^μ is the flow four-velocity, and η^μν= diag(1,-1,-1,-1) is the Minkowski metric.
If we define the source terms by
J^ν(x)=-∂_μT_ jet^μν(x),
Equation (<ref>) becomes the hydrodynamic equations with source terms:
∂_μT_ QGP^μν(x) = J^ν(x).
In the numerical hydrodynamic calculations for the medium evolution in relativistic heavy-ion collisions, we use the relativistic τ-η_ s coordinates, which are convenient for describing the longitudinal dynamics.
To do so, we employ the equations in which the partial derivatives in Eq. (<ref>) are replaced with covariant derivatives in the relativistic τ-η_ s coordinates:
D_μ̅T_ QGP^μ̅ν̅ = J^ν̅,
where μ̅ and ν̅ are the suffixes for the components in the τ-η_ s coordinate system: x^μ̅=(τ,x,y,η_ s).
Equation (<ref>) is numerically solved to describe the space-time evolution of the QGP medium.
Using this framework, the collective response to the jet quenching in the expanding QGP fluid can be properly described.
To close the system of hydrodynamic equations (<ref>), an equation of state is necessary.
Here we employ the equation of state from the lattice QCD calculation <cit.>.
As the fluid expands, the QGP fluid cools down and finally turns into a hadronic matter according to the equation of state.
We keep using Eq. (<ref>) to describe the space-time evolution of the hadronic matter until the temperature drops to the freeze-out temperature T_ FO.
§.§ The Source Terms
The source terms in Eq. (<ref>) describe the transfer of the energy-momentum between the jet and the QGP fluid.
Here we construct the source terms from the evolving parton distributions in the jet shower obtained as the solutions of the transport equation (<ref>).
We recall the kinetic definition of the energy-momentum tensor of the jet shower <cit.>:
T^μν_ jet(x)
= ∑_j∫d^3k_j/ω_j k_j^ν k_j^μ f_j([k]_j,[x],t),
where f_j([k]_j,[x],t) is the phase-space distribution of the parton j in the jet shower.
Referring to Eq. (<ref>), the source term can be written as:
J^ν(x) = -dP_ jet^ν/dt d^3x = -∑_j∫d^3k_j/ω_j k_j^ν k_j^μ∂_μ f_j([k]_j,[x],t),
where P_ jet^ν is the total four-momentum of the jet shower
and dP_ jet^ν/d^3x is its 3-dimensional space density.
As mentioned above, there are three types of processes, i.e., the collisional energy loss, the transverse momentum broadening, and the medium-induced partonic splitting, contributing to the derivative of the phase-space parton distribution in Eq. (<ref>).
However, since the total energy and momentum in the jet shower are conserved in the splitting processes, their contribution turns out to vanish by the integration and summation in Eq. (<ref>).
Therefore, the energy and momentum are exchanged between the jet and the QGP fluid through the processes of first two terms on the right hand side of Eq. (<ref>), i.e., the collisional energy loss and transverse momentum broadening.
We also have to estimate the position of each energetic parton in the full jet. For a parton with energy ω_j and momentum [k]_j at time t, its position is estimated as: [x]=[x]_0^ jet+([k]_j/ω_j)t, where [x]_0^ jet is the production point of the jet.
Then the source term is obtained as:
J^ν(x) = -∑_j∫d^3k_j k^ν_j .df_j ([k]_j,t)/d t|_ col. δ^(3)([x]-[x]_0^ jet-[k]_j/ω_j t).
Here .df_i ([k],t)/d t |_ col. is the part of the time derivative of the momentum distribution corresponding to the first and second terms on the right hand side in Eq. (<ref>):
.df_j (ω_j,k^2_j⊥,t)/dt|_ col. = (ê_j ∂/∂ω_j+1/4q̂_j ∇_k_⊥^2)f_j(ω_j, k_j⊥^2, t).
Considering the rotational symmetry for the distributions of the shower partons along the jet axis and using the collimated shower approximation (r = θ≈sinθ = k_⊥/ω), the source term can be written as:
J^ν(x) ≈ - 1/2π r t^3(x^ν- x^ν_ jet,0) .d E^ jet/d t dr|_ col. δ(|[x]-[x]_0^ jet|-t),
where
.d E^ jet/d t dr|_ col. = ∑_j∫dω_j dk^2_j⊥ ω_j .df_j(ω_j,k^2_j⊥,t)/dt|_ col. δ(r -k_j⊥/ω_j).
Here r is defined as r=√((η_ s-η_ jet)^2+(ϕ'-ϕ_ jet)^2),
with ϕ'=arctan[(y-y_0^ jet)/(x-x_0^ jet)] denoting the azimuthal angle with respect to the initial position of the jet center [x]_0^ jet, η_ jet and ϕ_ jet the pseudorapidity and azimuthal angle of the jet, and [x]^ jet(t) the position of the jet center at the time t.
Using Eqs. (<ref>), (<ref>) and (<ref>), we can construct the source terms from the energy and transverse momentum distributions of the partons inside the jet shower.
To obtain the source terms
in the relativistic τ-η_ s coordinates for Eq. (<ref>),
we perform Lorentz transformation:
J^ν̅(τ, x, y, η_s) = -dP_ jet^ν̅/τ dτ dx dy dη_s = Λ^ν̅_μ J^μ(x) = - Λ^ν̅_μ dP_ jet^μ/dt d^3x,
where
Λ^ν̅_μ = [ coshη_ s 0 0 -sinhη_ s; 0 1 0 0; 0 0 1 0; -1/τsinhη_ s 0 0 1/τcoshη_ s ].
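Numerically, the transformation in Eq. (<ref>) amounts to applying the matrix Λ to the Cartesian source four-vector at each grid point, e.g.:

```python
import numpy as np

def lambda_matrix(tau, eta_s):
    """The matrix of Eq. (Lambda), mapping J^mu in Cartesian coordinates
    to its (tau, x, y, eta_s) components."""
    ch, sh = np.cosh(eta_s), np.sinh(eta_s)
    return np.array([[ch,        0.0, 0.0, -sh],
                     [0.0,       1.0, 0.0, 0.0],
                     [0.0,       0.0, 1.0, 0.0],
                     [-sh / tau, 0.0, 0.0, ch / tau]])

# J_milne = lambda_matrix(tau, eta_s) @ J_cartesian
```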
§.§ Initial Condition of the Medium
We assume that the QGP medium is locally thermalized at the proper time τ=τ_0 (set as τ_0 = 0.6 fm/c in our calculation), and then apply relativistic hydrodynamics to describe its space-time evolution.
We set the initial entropy density distribution in the transverse plane at midrapidity η_ s=0 as:
s_T([x]_⊥) = C/τ_0 [ (1-α)/2 n^[b]_ part([x]_⊥) + α n^[b]_ coll([x]_⊥) ],
where n^[b]_ part and n^[b]_ coll are the number densities of participating nucleons and nucleon-nucleon binary collisions generated from the optical Glauber model with impact parameter [b], respectively.
The parameters C=41.4 and α=0.08 are chosen for Pb+Pb collision at 2.76A TeV by fitting the centrality dependence of the multiplicity at midrapidity from the ALICE Collaboration <cit.>.
In η_ s direction, we initialize the medium profile using the following function form:
H(η_ s) = exp[-(|η_ s|-η_ flat/2)^2/(2σ_η^2) θ(|η_ s|-η_ flat/2) ].
The function H(η_s) has a shape consisting of a flat region around the mid-rapidity,
like the Bjorken scaling solution <cit.>,
and two halves of a Gaussian connected smoothly to the vacuum at both ends of the flat region.
The parameters η_ flat=3.8 and σ_η=3.2 are fitted to reproduce the pseudorapidity distribution for the multiplicity for central Pb+Pb collisions at 2.76A TeV.
Finally, the full 3-dimensional profile of the initial entropy density is given as:
s(τ_0,[x]_⊥,η_ s) = s_T([x]_⊥) H(η_ s).
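A compact implementation of the initial profile, Eqs. (<ref>)-(<ref>), is sketched below; the participant and binary-collision densities n_part and n_coll (per unit transverse area) are assumed to be supplied by an optical Glauber calculation.

```python
import numpy as np

C, ALPHA, TAU0 = 41.4, 0.08, 0.6   # fit parameters; TAU0 in fm/c
ETA_FLAT, SIGMA_ETA = 3.8, 3.2

def s_transverse(n_part, n_coll):
    # Two-component Glauber combination at midrapidity, Eq. (sT).
    return (C / TAU0) * (0.5 * (1.0 - ALPHA) * n_part + ALPHA * n_coll)

def H(eta_s):
    # Eq. (H): Bjorken-like plateau with half-Gaussian tails.
    d = np.abs(eta_s) - 0.5 * ETA_FLAT
    return np.exp(-np.where(d > 0.0, d**2, 0.0) / (2.0 * SIGMA_ETA**2))

def s_init(n_part, n_coll, eta_s):
    return s_transverse(n_part, n_coll) * H(eta_s)
```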
We further assume that there is no initial flow at τ=τ_0 in the transverse direction, which means that the radial expansion of the medium is caused solely by the initial pressure gradient incorporated in the initial profile condition of Eq. (<ref>).
For the flow velocity in the longitudinal direction, the space-time rapidity component is initially set to zero: u^η_ s(τ=τ_0)=0 <cit.>.
In this work, we employ the smooth averaged initial profile of the medium obtained from the optical Glauber model, which does not include the initial geometrical fluctuations of the nucleon positions and of their internal structures in the colliding heavy ions.
These initial state fluctuations can in principle affect both the jet evolution and the medium response to jet quenching; we would like to leave event-by-event studies including the initial state fluctuations as future work.
§.§ Freeze-out
The hydrodynamic medium response to the energy and momentum deposited from the jets will induce additional flows in the QGP medium.
These jet-induced flows propagate and enhance hadron emissions in directions around the jet axes.
Some of the hadrons produced from jet-induced flows remain in the jet cone after the background subtraction and are counted as part of the full jets.
To obtain the momentum distributions of particles produced from the medium, we use the Cooper-Frye formula <cit.>:
E_i dN_i/d^3p_i = g_i/(2π)^3∫_Σ p_i^μ dσ_μ(x)/{exp[p_i^μu_μ(x)/T(x)]∓ 1},
where g_i is the degeneracy, ∓ corresponds to Bose or Fermi distribution for particle species i,
and Σ is the freeze-out hypersurface.
The freeze-out is chosen to occur at a fixed temperature T_ FO.
Here a typical value T_ FO=140 MeV (e.g., Ref. <cit.>) is used.
For the sake of simplicity, when calculating the particle spectra via the Cooper-Frye formula, we consider only one species of bosons with the same mass as the charged pions.
The contributions from other hadron species are taken into account by replacing g_i by the effective degrees of freedom d_ eff defined as follows:
e_ lat(T) = d_ eff(T)/(2π)^3∫ d^3p √(m_π^±^2+[p]^2)/{exp[√(m_π^±^2+[p]^2)/T]-1},
where e_ lat is the energy density obtained from lattice QCD calculations <cit.>.
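The effective degeneracy d_eff(T) can be obtained by numerically inverting Eq. (<ref>); in the sketch below the lattice energy density e_lat(T) is an external input (we do not reproduce the lattice parametrization here).

```python
import numpy as np
from scipy.integrate import quad

M_PI = 0.13957  # charged-pion mass, GeV

def e_one_boson(T):
    """Energy density (GeV^4) of a single massive Bose degree of freedom."""
    def integrand(p):
        E = np.sqrt(M_PI**2 + p**2)
        return p**2 * E / (np.exp(E / T) - 1.0)
    # d^3p = 4 pi p^2 dp; cut the integral off well above the thermal scale.
    return 4.0 * np.pi / (2.0 * np.pi)**3 * quad(integrand, 0.0, 50.0 * T)[0]

def d_eff(T, e_lat):
    # Eq. (deff): match the lattice energy density at temperature T.
    return e_lat / e_one_boson(T)
```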
The contribution from the jet-induced medium flow to the full jet is obtained by removing the contribution from the background medium without jet propagation:
ΔdN/d^3 p = dN/d^3 p|_ w/ jet - dN/d^3 p|_ w/o jet.
The contribution Δ dN/d^3 p is added to that of the jet shower calculated from the transport equations (<ref>) to obtain the final state full jet.
For the particles included in Eq. (<ref>), we impose transverse momentum cuts of p^ trk, hyd_T>1 GeV/c for inclusive jet analyses and p^ trk, hyd_T>0.5 GeV/c for dijet analyses, following the measurements by the CMS Collaboration <cit.>.
§ SIMULATIONS AND RESULTS
In this work, we initialize the jet production points in the transverse plane η_ s=0 according to the distributions of the binary nucleon-nucleon collisions which are calculated by using the Glauber model <cit.>.
Jet spectra and the momentum distributions of the shower partons inside the full jets are obtained via Monte Carlo simulation <cit.>, with the package of Ref. <cit.> used for full jet reconstruction.
Hard jets are assumed to be created at τ = 0, and travel freely until the thermalization proper time of the QGP, τ=0.6 fm/c.
Then the jet shower starts to interact with the QGP and evolves according to the transport equations (<ref>).
The jet-medium interaction is turned off when the local temperature of the medium is below T_c = 160 MeV.
The interaction in the hadronic matter is usually small compared to the QGP phase, and is neglected in this work.
The medium profile at initial time τ=τ_0 is calculated by employing Eq. (<ref>) with the impact parameter [b]=0 (central collisions).
The evolution of the medium is governed by the ideal hydrodynamic equations with source terms (<ref>).
As the system expands and cools down, it transits from QGP phase to hadronic phase, and finally the freeze-out occurs.
The momentum distributions of hadrons produced from the medium are calculated via the Cooper-Frye formula (<ref>).
After the subtraction of the background (without jet), the remaining part (<ref>) contributes to final state full jets.
Hereafter we refer to the part of the jet described by the transport equations (<ref>) as the shower part, and to the part coming from the jet-induced flow via freeze-out as the hydro part.
In this study, we mainly focus on the contribution of the hydro part of the full jet.
Detailed studies of the contribution of each medium modification effect on the jet shower part can be found in Ref. <cit.>.
Also since we perform the simulation independently for each single jet shower, the effect of possible interference between the flows induced by multiple jet showers in one event is not included.
§.§ Flow Induced by Full Jets
Figure <ref> shows the snapshots of the energy density distribution of the medium in the transverse plane at η_ s=0 at different proper times for an example event.
For this event, the single jet
with the initial p_T^ jet = 150 GeV/c
is produced at (x_0^ jet, y_0^ jet)=(0 fm, 6.54 fm) and travels in the direction of ϕ_p=5π/8.
The upper panels show the whole energy density of the medium, and the lower panels show the energy density after the subtraction of the energy density in the events without jet propagation.
From these figures, we can see that the V-shaped wave fronts (seen as regions of higher energy density) are induced by the jet propagation, and develop with time in the medium.
This V-shaped wave front is the Mach cone <cit.>, a conical shock wave that appears as an interference of sound waves caused by an object moving faster than the medium sound velocity.
Here the highly collimated jet shower deposits its energy and momentum and induces a Mach cone whose vertex is the center of the jet <cit.>.
This wave front of the Mach cone carries the energy and momentum, propagates outward and also causes the lower energy density region behind the wave front.
During the propagation, the Mach cone and the radial flow of the medium are pushed and distorted by each other.
One can see that the Mach cone is asymmetrically deformed in this example because the jet travels through the off-central path in the medium.
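For orientation, the opening angle of such a conical wave front is set by the Mach relation sin θ_M = c_s/v. The sketch below uses the ideal massless-gas value c_s = 1/√3 and v = c as illustrative assumptions; the actual angle in the simulation follows from the lattice equation of state and is position dependent.

```python
import math

c_s = 1.0 / math.sqrt(3.0)   # ideal massless-gas sound speed (units of c)
v_jet = 1.0                  # ultra-relativistic jet

theta_M = math.degrees(math.asin(c_s / v_jet))
print("Mach half-angle ~ %.1f deg" % theta_M)  # ~35.3 deg from the jet axis
```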
In this work, we neglect the effect of the finite small shear viscosity of the QGP and model the medium created in relativistic heavy-ion collisions as an ideal (non-viscous) fluid.
The finite viscosities are important for a more precise description of the medium evolution and the collective anisotropic flows observed in the final states <cit.>.
It can also affect the shape of the medium response to the jet-deposited energy and momentum, e.g., the Mach cone can be smeared by the finite shear viscosity <cit.>.
In our study, we assume the instantaneous thermalization of the energy and momentum deposited by the jet; the finite relaxation time effects may be included in the source terms <cit.> (note that the smearing due to the finite grid size in the hydrodynamic simulation mimics some relaxation effect).
Since the relaxation times for the deposited energy and momentum are closely related to the transport coefficients of the QGP, the inclusion of such effects would provide further information on the QGP's properties; we leave this as future work.
§.§ Full Jet Energy Loss and Suppression
In our framework, the final full jets receive contributions from two parts: the jet shower part and the hydrodynamic response part.
The shower part of the jet loses energy due to three mechanisms: the collisional energy loss and the absorption of the soft partons by the medium, the transverse momentum broadening which kicks the partons out of the jet cone, and the medium-induced radiation outside the jet cone.
The hydro part of the jet comes from the energy and momentum lost from the jet shower, which thermalize into the medium and induce conical flow;
Thus the hydro part will partially compensate the energy loss experienced by the jet shower part.
Here we study the effect of jet-induced medium flow on full jet energy loss and full jet suppression.
Figure <ref> shows the mean value of the total energy (transverse momentum) loss for inclusive jets with and without the hydro part contribution as a function of initial full jet transverse momentum.
The left panel shows the results for the jet-cone sizes R=0.2, 0.3, and 0.4, and the right panel for R=0.3, 0.6, and 0.9.
One can see the general feature that the amount of the energy loss increases with increasing initial jet transverse momentum while the fractional energy loss decreases.
The total p^ jet_T loss for the full jets with the inclusion of the hydro part contribution is smaller than that without the contribution from the medium response.
For jets with the cone size R=0.3, about 10 % of the lost p^ jet_T from the jet shower part is recovered by the hydro part.
We can also see the jet cone size dependence of jet energy loss from Figure <ref>.
For the shower part without the hydro part contribution, the jet cone size dependence is rather weak.
This is due to the fact that the shower part of the jet is quite collimated, i.e., most of the energy in the shower part is contained within a narrow jet cone; therefore, the jet energy does not change much with increasing jet cone size.
On the contrary, jet-induced flow evolves with medium, diffuses, and can spread quite widely around jet axis. As a result, the jet cone size dependence becomes much stronger when adding the hydro part contribution.
The effect of full jet energy loss in the relativistic heavy-ion collisions can be quantified by the measurements of nuclear modification factor R_ AA for single inclusive jet spectrum, defined as:
R_ AA= 1/⟨ N_ coll⟩d^2N_ jet^ AA/dη_p dp^ jet_T/d^2N_ jet^ pp/dη_p dp^ jet_T,
where ⟨ N_ coll⟩ is the number of binary nucleon-nucleon collisions averaged over events in a given centrality class, N_ jet^ AA is the number of jets in nucleus-nucleus collisions, and N_ jet^ pp is that in p+p collisions.
One important result of jet energy loss is that jet p_T spectrum in nucleus-nucleus collisions is shifted to lower p_T^ jet compared to that in p+p collisions.
Since the jet spectrum is a steeply decreasing function of p_T^ jet, jet R_ AA becomes smaller than unity in the high-p_T^ jet region.
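Operationally, R_AA is computed bin by bin from the per-event spectra, e.g.:

```python
import numpy as np

def jet_RAA(dN_AA, dN_pp, n_coll_avg):
    """Eq. (RAA) bin by bin: dN_AA and dN_pp are per-event jet yields
    d^2N/(deta dpT) in matching pT bins; n_coll_avg is <N_coll> for the
    chosen centrality class."""
    return np.asarray(dN_AA) / (n_coll_avg * np.asarray(dN_pp))
```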
Figure <ref> shows the nuclear modification factor R_ AA for single inclusive jets as a function of p_T^ jet for different jet cone sizes:
the left panel for the jet-cone sizes R=0.2, 0.3, and 0.4, and the right for R=0.3, 0.6, and 0.9.
We also compare the results with and without the inclusion of the contribution from the jet-induced flow.
We find that without the hydro part contribution, the jet cone size dependence of jet R_ AA is very weak, which is consistent with the weak dependence of jet energy loss as seen in Figure <ref>.
The inclusion of the contribution from jet-induced flow decreases the total energy loss and thus increase the value of R_ AA; it also increases the jet-cone size dependence of R_ AA.
Our results are comparable with CMS measurements with the jet-cone sizes R=0.2, 0.3, and 0.4, which show relatively small jet cone size dependence (but with large error bars).
§.§ Full Jet Shape Function
One of the advantages of studying fully reconstructed jets in relativistic heavy-ion collisions is that one may investigate not only the full jet energy loss and suppression, but also their internal structures, which provide detailed information on how the energy is distributed inside the full jets and how the energy distribution is modified by the interaction with the QCD medium.
Jet shape function describes how the energy inside (and outside) the full jets is distributed in the radial direction (transverse to the jet axis) and is defined as follows:
ρ_ jet(r) = 1/N_ jet∑_ jet[ 1/p_T^ jet∑_ trk∈(r-δ r/2,r+δ r/2) p_T^ trk/δ r],
where r = √((η_p - η_ jet)^2 + (ϕ_p - ϕ_ jet)^2) is the radial distance of the jet constituents from the jet axis, δ r is the bin size,
and the sum is taken over all constituents (tracks) of the full jets in the bin at r.
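A direct implementation of Eq. (<ref>) from reconstructed jets and their constituent tracks is sketched below; each jet is represented here by its (p_T^jet, η_jet, φ_jet) and an array of track (p_T, η, φ) triplets, which is a choice of data layout for illustration rather than the format of any particular analysis code.

```python
import numpy as np

def jet_shape(jets, r_edges):
    """Eq. (rho): radial pT profile averaged over jets.
    jets: list of (pT_jet, eta_jet, phi_jet, tracks), where tracks is an
    array of shape (n_trk, 3) holding (pT, eta, phi) per track."""
    rho = np.zeros(len(r_edges) - 1)
    for pT_jet, eta_j, phi_j, trk in jets:
        # Wrap the azimuthal difference into (-pi, pi].
        dphi = (trk[:, 2] - phi_j + np.pi) % (2.0 * np.pi) - np.pi
        r = np.hypot(trk[:, 1] - eta_j, dphi)
        hist, _ = np.histogram(r, bins=r_edges, weights=trk[:, 0])
        rho += hist / pT_jet
    return rho / (len(jets) * np.diff(r_edges))
```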
The left panel of Figure <ref> shows our result for the jet shape function inside the jet cone for inclusive jets with p^ jet_T>100 GeV/c and R=0.3 in central Pb+Pb collisions and in p+p collisions, compared to the experimental data from CMS Collaboration <cit.>.
To see the medium effect on the jet shape function more clearly, the nuclear modification factor for the jet shape function R^ρ_ AA(r)= ρ_ AA(r)/ρ_ pp(r) is shown in the right panel of Figure <ref>.
We can see that our results (both with and without the contribution from the hydrodynamic response part) show a nuclear modification pattern of the jet shape function similar to the experimental data from the CMS Collaboration, i.e., little change at small r, a dip at r∼0.1, and an enhancement at large r.
In other words, the inner hard core of the jet is more collimated while the tail (the outer soft part) of the jet is broadened, in central Pb+Pb collisions compared to pp collisions.
The medium modification feature for the shower part of the full jet has been extensively studied in Ref. <cit.> which shows that the collisional energy loss and the thermalization of the soft shower partons (into the medium) make the jet narrower with more collimated hard core, while the transverse momentum broadening and medium-induced radiation transport the energy from the inner to the outer sides of the jet and broaden the tail of the jet shape function.
After the inclusion of the contribution from jet-induced medium flow, the jet shape function at small r is not modified much, but for large r region (r>0.2-0.25), there is a significant enhancement of the jet broadening effect.
This seems to be quite natural considering the jet cone size dependence of full jet energy loss as seen in Figure <ref>, i.e., the energy loss from the shower part of the jet induces conical flow and medium excitation which evolve with the medium and diffuse to larger angles with respect to the jet axis.
To see more clearly the contribution from the hydrodynamic response part (jet-induced medium flow) to jet broadening effect, we show in Figure <ref> the jet shape function ρ(r) for inclusive jets with an extended radial distance 0 < r<1.
The trigger p_T threshold for the inclusive full jets is set to be p_T^ jet> 100 GeV/c.
Here we still use p_T^ jet defined by the jet-cone size R=0.3 as the normalization factor for the jet shape function at r>R=0.3, to be consistent with the experimental results from CMS Collaboration <cit.>.
The red solid line shows the result for jets with both shower and hydro parts,
the orange dash-dotted line shows the contribution from the hydro part,
and the blue dashed line shows the result for jets without the hydro part.
The green dotted line shows the result from the vacuum simulation.
As we can see, the shower part of the jet shape function is a steeply falling function of r, while the energy (momentum) from the hydrodynamic response part is distributed quite flatly over a wide range of r.
This is because the energy loss from the shower part is carried away by the jet-induced flow which evolves with the medium and diffuses to large distances <cit.>.
Compared to the vacuum simulation, the shower part of the jet is broadened out to large r by the transverse momentum kicks and medium-induced radiation, but the contribution from the hydro part to the jet shape function is quite flat and eventually dominates over the shower part in the region r>0.5.
CMS Collaboration has recently measured the jet shape functions with a wide range of r (up to r=1) for both leading and subleading jets in asymmetric dijet events in Pb+Pb collisions at 2.76A TeV <cit.>.
We also perform the calculation for the jet shape functions in dijet events, and the comparison with CMS data is shown in Figure <ref>.
In the calculation, we chose dijet events with
the leading jet with p^ jet_T,1 > 120 GeV/c,
the subleading jet with p^ jet_T,2 > 50 GeV/c,
and the azimuthal angle between the leading and subleading jets Δφ_1,2>5π/6.
In the figure, we do not show the jet shape function for p+p collisions from CMS <cit.> since the data contain the contamination from the underlying event (and therefore are quite different at large r region as compared to the jet shape function obtained from simulation).
In Pb+Pb collisions, such background effect is supposed to be small at large r, since the jet shape function is dominated by the shower part at small r and by the jet-induced medium flow at large r (see below).
From Figure <ref>(a), we can see that the jet shape function for leading jets in central Pb+Pb collisions is quite similar to that for inclusive jets shown in Figure <ref>.
The shower part dominates the jet shape function at relatively small r and is broadened by the medium effect starting from r=0.2-0.3, while the hydro part starts to dominate the jet shape function at large r region (r>0.5).
Our full result on jet shape function with the contributions from both shower and hydro parts reproduces the experimental result quite well throughout the entire r range (up to r=1).
For subleading jets as shown in Figure <ref>(b), we can see that the jet shape function is much broader than that of leading jets due to larger jet-medium interaction for subleading jets.
As a result, the shower part of the jet shape function is wider and the hydro part contribution is also larger and more widely distributed than that in leading jets.
Our full result also provides a good description of the jet shape function for subleading jets, except in the intermediate r region.
The above results clearly show that the hydrodynamic medium response to jet-medium interaction plays an important role in the study of fully reconstructed jets, especially at large r region.
This means that the jet shape function at large angles with respect to the jet direction provides a good opportunity to study the hydrodynamic medium response to jet quenching.
§ SUMMARY
In this work, we have studied the nuclear modifications of full jet structures in relativistic heavy-ion collisions with the inclusion of the contribution from the medium excitations induced by the propagating hard jets.
We have formulated a model which consists of a set of transport equations to describe the full jet shower evolution in medium and relativistic ideal hydrodynamic equations with source terms to describe the dynamical evolution of the QGP medium.
The transport equations control the evolutions of energy and transverse momentum distributions of the shower partons within the full jet.
The contribution from the momentum exchange with the medium via
scatterings with medium constituents is taken into account by collisional energy loss and transverse momentum broadening terms.
The partonic splitting terms account for the contribution of medium-induced radiation; the rates for the induced splittings were taken from the higher-twist jet energy loss formalism.
The local temperature and flow velocity of the medium are embedded in the jet quenching parameter q̂ which controls the amplitudes of all the medium modification processes.
The relativistic ideal hydrodynamic equations with source terms determine the space-time evolution of the medium which exchanges the energy and momentum with the propagating jet shower.
The energy and momentum deposited by the jet shower into the medium fluid are included via the source terms, which may be constructed from the solutions of jet shower transport equations based on the energy-momentum conservation for the combined system of the QGP medium and the jet shower.
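Schematically, such a source term can be assembled by depositing the four-momentum lost by each shower parton into the fluid cells with a smooth smearing kernel. The sketch below uses an isotropic Gaussian of width sigma; both the kernel and the width are illustrative assumptions rather than the precise construction used in our simulations:

import numpy as np

def deposit_source(J, xs, ys, zs, parton_pos, dP_dt, sigma=0.5):
    """Add a Gaussian-smeared deposition rate dP^nu/dt (nu = 0..3) of one
    shower parton to the source array J[nu, ix, iy, iz]; `sigma` (fm) and
    the isotropic kernel are illustrative choices."""
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    r2 = ((X - parton_pos[0]) ** 2 + (Y - parton_pos[1]) ** 2
          + (Z - parton_pos[2]) ** 2)
    kernel = np.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2) ** 1.5
    for nu in range(4):   # the kernel integrates to one over all space
        J[nu] += dP_dt[nu] * kernel
    return J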
Based on our coupled jet shower and QGP fluid model, we have performed the simulations of jet events in central Pb+Pb collisions at 2.76A TeV.
The transport equations for jet shower were numerically solved with the initial conditions generated by .
We kept track of both the energy and transverse momentum distribution of all shower partons in the full jet until the interaction with the QGP ceases.
The relativistic ideal hydrodynamic equations with source terms were numerically solved in the (3+1)-dimensional τ-η_ s coordinates, with the initial profile of the medium obtained from the optical Glauber model.
We found that the additional energy density flow can be induced by the jet shower propagation in the medium, and the jet-induced conical flow is pushed and distorted by the medium radial flow (and vice versa) during the propagation.
To study how the hydrodynamic medium response contributes to the full jet observables, we calculated the particles produced from the hydrodynamic medium response by using the Cooper-Frye formula, which are combined with the jet shower part to obtain the final state full jets.
We generated (di)jet events in central Pb+Pb collisions by using for the momentum distribution and the Glauber model for the spatial distribution.
We calculated the total energy (p_T) loss of the full jets for different jet-cone sizes and found that the contribution of the hydro part partially compensates the energy loss of the jet shower.
Such compensation effect increases with increasing jet cone size.
As a result, one obtains a stronger jet cone-size dependence for the single inclusive jet R_AA when taking into account the hydrodynamic response contribution.
The effect of jet-induced medium flow on jet shape functions was studied for inclusive jets as well as for dijets in central Pb+Pb collisions at 2.76A TeV.
Jet shape functions for leading jets in dijet events and single inclusive jets are quite similar, while the nuclear modification for subleading jets is larger due to more jet-medium interaction.
Our results showed that the particles produced from jet-induced medium flow do not much affect the jet shape function at small r, but significantly enhance the broadening of the jet shape function and eventually dominate in the large-r region.
Our full results for jet shape functions can reproduce quite well the experimental data from CMS Collaboration <cit.>, after taking into account both jet shower and hydrodynamic response contributions.
In summary, we have found that jet-induced flow plays a significant role in the study of jet structure in relativistic heavy-ion collisions, especially at large angles with respect to the jet axis; detailed studies of full jet structure at large r should provide much information about the medium response in the process of jet-medium interaction.
The authors would like to thank X.-N. Wang for discussions and helpful comments.
Also, Y.T. is grateful to Y. Hirono for discussions regarding numerical implementations.
This work is supported in part by the Natural Science Foundation of China (NSFC) under Grant Nos. 11375072 and 11405066, the Chinese Ministry of Science and Technology under Grant No. 2014DFG02050, and the Major State Basic Research Development Program in China under Grant No. 2014CB845404.
|
http://arxiv.org/abs/1701.08151v1 | 20170127185455 | Underscreening in concentrated electrolytes | [
"Alpha A. Lee",
"Carla Perez-Martinez",
"Alexander M. Smith",
"Susan Perkin"
] | physics.chem-ph | [
"physics.chem-ph",
"cond-mat.soft"
] |
alphalee@g.harvard.edu
John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
Department of Chemistry, Physical and Theoretical Chemistry Laboratory, University of Oxford, Oxford OX1 3QZ, U.K.
Department of Chemistry, Physical and Theoretical Chemistry Laboratory, University of Oxford, Oxford OX1 3QZ, U.K.
Department of Inorganic and Analytical Chemistry, University of Geneva, 1205 Geneva, Switzerland
susan.perkin@chem.ox.ac.uk
Department of Chemistry, Physical and Theoretical Chemistry Laboratory, University of Oxford, Oxford OX1 3QZ, U.K.
Screening of a surface charge by an electrolyte and the resulting interaction energy between charged objects are of fundamental importance in scenarios from bio-molecular interactions to energy storage. The conventional wisdom is that the interaction energy decays exponentially with object separation and that the decay length is a decreasing function of ion concentration; the interaction is thus negligible in a concentrated electrolyte. Contrary to this conventional wisdom, we have shown by surface force measurements that the decay length is an increasing function of ion concentration and Bjerrum length for concentrated electrolytes. In this paper we report surface force measurements that directly test the scaling of the screening length with Bjerrum length. Furthermore, we identify a relationship between the concentration dependence of this screening length and empirical measurements of the activity coefficient and differential capacitance. The dependence of the screening length on the ion concentration and the Bjerrum length can be explained by a simple scaling conjecture based on the physical intuition that solvent molecules, rather than ions, are the charge carriers in a concentrated electrolyte.
Underscreening in concentrated electrolytes
Susan Perkin
===========================================
The structure of electrolytes near a charged surface underpins a plethora of applications, from supercapacitors <cit.> to colloidal self-assembly <cit.> and electroactive materials such as ionomeric polymer-metal composites <cit.>. The structure of dilute electrolytes is relatively well-understood <cit.>. However, dilute electrolytes have a low conductivity, because the conductivity is proportional to the concentration of charge carriers. As such, dilute electrolytes are generally unsuitable for many electroactive materials, and concentrated electrolytes are preferred up to the point where the viscosity increases significantly. Understanding electrolytes at high concentrations remains a conceptual challenge because the ion-ion Coulomb interaction is strong and long-ranged. The most extreme case of concentrated electrolytes is ionic liquids — liquids at room temperature which comprise pure ions without any solvent <cit.>.
To segue into exploring the physics of concentrated electrolytes, we first revisit the physics of dilute electrolytes. The seminal Debye-Hückel theory <cit.> predicts that the interaction between two charged surfaces in an electrolyte decays exponentially with the surface separation <cit.>. The characteristic decay length, known as the Debye length, is given by
λ_D =√(ϵ k_B T/4 π q^2 c_ion)≡1/√(4 π l_B c_ion),
where ϵ is the dielectric constant of the medium, k_B the Boltzmann constant, T the temperature, q the ion charge, c_ion the ion concentration (which is twice the salt concentration for a 1:1 electrolyte), and
l_B = q^2 /ϵ k_B T
is the Bjerrum length. The Bjerrum length is the distance at which the interaction energy between two ions equals the thermal energy unit k_B T. The Debye-Hückel theory is a mean-field theory valid when l_B^3 c_ion≪ 1, i.e. when the ion-ion separation is far greater than the Bjerrum length and thus the Coulomb interactions can be treated as a perturbation to ideal gas behaviour. Therefore, the Debye-Hückel theory is only applicable for dilute electrolytes.
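For concreteness, the SI-unit forms of Equations (<ref>) and (<ref>) can be evaluated as follows; the conversion of molar ion concentration to number density is the only additional ingredient (a sketch in Python):

import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
N_A = 6.02214076e23          # Avogadro's number, 1/mol

def bjerrum_length(eps_r, T=298.0):
    """Bjerrum length in metres (SI form of Equation (2))."""
    return E_CHARGE**2 / (4.0 * np.pi * eps_r * EPS0 * K_B * T)

def debye_length(eps_r, c_ion_molar, T=298.0):
    """Debye length in metres for a 1:1 electrolyte (SI form of Equation (1)).
    `c_ion_molar` is the total ion concentration in mol/L (twice the salt
    concentration), converted here to ions per cubic metre."""
    n = c_ion_molar * 1.0e3 * N_A
    return 1.0 / np.sqrt(4.0 * np.pi * bjerrum_length(eps_r, T) * n)

# Example: water (eps_r ~ 78) with c_ion = 4 M (2 M NaCl) gives
# debye_length(78.0, 4.0) ~ 2.2e-10 m, already comparable to the bare
# ion diameter.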
For concentrated electrolytes, only a handful of analytical results are known. A well-known result pertains to the pair correlation function, g_ij(r), which is the probability of finding a particle of component j at a distance r from another
particle of component i <cit.>. Mathematical analysis of the Ornstein-Zernicke equation reveals that, for particles interacting via a short-ranged <cit.> or Coulomb <cit.> potential, the asymptotic decay of the correlation function takes the form
r (g_ij(r)-1) ∼ A_ij e^-α_0 rcos(α_1 r + θ_ij), as r →∞.
Crucially, Equation (<ref>) implies that all correlation functions in the system decay with the same rate α_0 and oscillate with the same wavelength 2π/α_1 in the asymptotic limit; only the amplitude A_ij and phase θ_ij are species-dependent. In an electrolyte solution, 1/α_0 is the electrostatic screening length and 2π/α_1 the characteristic correlation wavelength. Equation (<ref>) is a general asymptotic result for the decay of correlations that is independent of the electrolyte model. Analytical expressions for α_0 and α_1 can be obtained for the restricted primitive model <cit.>. However, the restricted primitive model does not explicitly account for space-filling solvent molecules and thus may not capture certain important features of screening in electrolytes (c.f. Section <ref>). Without considering specific models to compute α_0 and α_1, we will use Equation (<ref>) to organise our discussion of the different theories.
The decay of correlations in the bulk electrolyte is directly related to the decay of interactions between charged surfaces, measurable via techniques such as the Surface Force Balance (SFB). To illustrate why this is the case, recall from Equation (<ref>) that the asymptotic wavelength is the same for all correlation functions. Therefore, if we consider two large charged spheres of radius R immersed in the electrolyte, their asymptotic pair correlation function is given by r (g_ss(r)-1) ∼ A_ss e^-α_0 rcos(α_1 r + θ_ss). Thus the potential of mean force v(r) ∼ - k_B T log g(r) ∼ A_ss e^-α_0 rcos(α_1 r + θ_ss)/r. As the concentration of the large spheres is negligible compared to the ions and solvent, α_0 and α_1 are independent of the properties of the spheres. The interaction between the charged plates in the SFB decays in the same way as the interaction between two charged spheres of radius R→∞. Therefore, within this picture, the electrostatic screening length and characteristic correlation wavelength measured by the SFB are the same as those of the bulk electrolyte.
We first consider the characteristic correlation wavelength 2π/α_1. The Debye-Hückel theory for dilute electrolytes corresponds to the limiting case α_1 =0 and α_0 =1/λ_D. However, a finite oscillatory period emerges (α_1>0) when a/ λ_D = √(4 π l_B c_ion a^2)≳√(2), where a is the ion diameter <cit.>. This threshold value of a/λ_D is widely known as the Kirkwood line <cit.>, first reported by John Kirkwood in 1936. In other words, past the Kirkwood line, the decay of ion-ion correlations switches from a monotonic exponential decay to a damped oscillatory decay. In the context of ionic liquids, the presence of an oscillatory decay of ion charge density away from charged interfaces has been called “overscreening” <cit.>. Integral equation theories also predict that, at even higher electrolyte concentrations, the density-density correlation function becomes oscillatory and acquires a decay length longer than that of the charge-charge correlation function <cit.>; this is termed “core-dominated” decay.
The subject of this paper is the electrostatic screening length 1/α_0. The Debye-Hückel theory (<ref>) predicts that the electrostatic screening length decreases as the electrolyte concentration increases. Direct experimental measurements of this screening length for concentrated electrolytes are relatively scarce, which is perhaps surprising given that the theory of electrolyte solutions has received significant attention over the past century <cit.>. The first sign that the Debye-Hückel screening length is qualitatively awry for concentrated electrolytes is a series of SFB studies showing that the interaction force between charged surfaces in an ionic liquid decays exponentially, but with a decay length that is orders of magnitude larger than the Debye length or the ion diameter <cit.>. It was then shown, via SFB measurements of the screening length in ionic liquid-solvent mixtures and alkali halide salt solutions, that the long electrostatic screening length is not unique to pure ionic liquids: the electrostatic screening length in concentrated electrolytes increases with ion concentration, contrary to the predictions of the Debye-Hückel theory <cit.>. Moreover, we provide empirical evidence that the screening length scales as
λ_S ∼ l_B c_ion a^3.
In the remainder of this paper, we will refer to Equation (<ref>) as “underscreening”. The electrolyte solution “underscreens” charged surfaces in the sense that the interaction between charged surfaces is significantly longer-ranged than predicted in the Debye-Hückel regime, where ions behave as a weakly interacting gas.
To allay potential confusion, we emphasize that “underscreening” and its cognate “overscreening” <cit.> are two distinct parameters in the decay of ion-ion correlation, Equation (<ref>). Underscreening pertains to the anomalously long electrostatic screening length and overscreening pertains to a finite oscillatory period. Therefore, mathematically speaking, overscreening and underscreening could occur together if an electrolyte has an oscillatory decay of ion-ion correlation with a decay length that follows the scaling (<ref>). However, experimentally oscillations are measured only in the near-surface region and no oscillatory component is detected in the long-ranged component of the surface force <cit.>.
In this paper, we first discuss the experimental evidence for underscreening and report a new set of experiments verifying the scaling relationship (<ref>). We then show how underscreening is reflected in two classic properties of electrolytes: the activity coefficient and differential capacitance. Finally, we propose a scaling conjecture to understand the phenomenology of underscreening.
§ EXPERIMENTAL MEASUREMENTS OF THE SCREENING LENGTH
The screening lengths, λ_S, of electrolyte solutions were determined from direct measurements of the change in the interaction force with distance between two charged mica plates across the electrolyte. The apparatus used for such measurements, called the surface force balance (SFB; see Figure <ref>), employs white light interferometry to determine the separation between the plates to ∼ 0.1 nm. The mica plates are supported on cylindrical lenses (each of radius ∼ 1 cm) and mounted in crossed-cylinder configuration; the arrangement is geometrically equivalent to a sphere of radius 1 cm approaching a flat plate. The symmetry and well-defined geometry make the setup particularly useful for quantitative comparison to theory; the technique has been used over the past few decades to study forces across dilute electrolytes <cit.>, simple molecular liquids <cit.>, and soft matter <cit.>. In the case of dilute electrolytes the surface force is dominated by a repulsive osmotic pressure, increasing exponentially as D decreases, with decay length equal to λ_D which decreases with increasing concentration in accordance with Equation (<ref>).
In contrast to the measurements in dilute electrolytes, surface force measurements across pure ionic liquids have revealed short-range oscillations reminiscent of structural forces in molecular liquids <cit.> and, beyond the oscillatory region, monotonic screening extending to distances far greater than predicted by simple application of Debye-Hückel theory <cit.>. In a study aimed at connecting up the dilute electrolyte and ionic liquid ends of the electrolyte spectrum, some of us recently reported a non-monotonic trend in the asymptotic screening length with concentration <cit.>. In this section we describe those experiments in detail and investigate the scaling behaviour of λ_S with c_ion and, separately, the scaling of λ_S with l_B achieved by varying solvent ϵ at constant c_ion.
§.§ Experimental details
In the SFB experiments, white light interferometry is used to determine the forces between two molecularly smooth mica surfaces separated by a thin film of electrolyte (see Figure <ref>) <cit.>. The mica sheets of equal thickness are backsilvered to create a partially reflecting and partially transmitting mirror before gluing onto the lenses and injection of liquid in the gap between the mica surfaces. The resulting silver-mica-liquid-mica-silver stack acts as an interferometric cavity. Bright collimated white light incident on the interferometer emerges, after dispersion with a spectrometer, as a set of bright fringes of equal chromatic order (FECO).
The bottom lens is mounted on a horizontal leaf spring, while the top lens is mounted on a piezo-electric tube (PZT). By expanding the PZT, the top surface is brought at constant velocity towards the bottom surface from separations D of 200-400 nm down to D of one or a few molecular diameters. The rate of approach is sufficiently slow that there is no measurable hydrodynamic contribution to the force, as evidenced by the insensitivity of the measured forces to small changes in the rate of approach. The FECO pattern is captured by a camera at rates of approximately 10 frames per second. At large separations, there is no normal force on the spring, but as the lenses are brought into contact, the normal forces arising from interactions between the surfaces cause bending of the spring. The deflection of the spring, and thus the normal force, can be inferred from the interferometric pattern. The force between the surfaces can then be related to the interaction energy E using the Derjaguin approximation <cit.>: E=F/(2π R), where R is the local radius of curvature between the lenses, of the order of 1 cm for these experiments.
In order to vary the solvent ϵ at constant c_ion, we used solutions of ionic liquid at fixed 2M concentration in solvents of varying polarity. The ionic liquid was 1-butyl-1-methylpyrrolidinium bis[(trifluoromethyl)sulfonyl]imide (abbreviated [C_4C_1Pyrr][NTf_2], Iolitec 99.5 %), and the molecular solvents were propylene carbonate (Sigma-Aldrich, anhydrous 99.7 %), dimethyl sulfoxide (Sigma-Aldrich, anhydrous 99.9%), acetonitrile (Sigma-Aldrich, anhydrous 99.8%), benzonitrile (Sigma-Aldrich, anhydrous 99%) and butyronitrile (Fluka, purity ≥99%).
The FECO fringes were analysed using the method outlined by Israelachvili <cit.>; our analysis uses the refractive index values of the bulk mixture to compute the separation between the mica surfaces. The refractive indices of the mixtures of 2M [C_4C_1Pyrr][NTf_2] in dimethyl sulfoxide and in benzonitrile were measured to be 1.441 and 1.461, respectively, using an Abbe 60 refractometer; for mixtures of 2M [C_4C_1Pyrr][NTf_2] in propylene carbonate, butyronitrile, and acetonitrile, the FECO analysis used estimated refractive index values of 1.422, 1.408, and 1.380, respectively. These estimated values are weighted averages between the refractive index of the pure ionic liquid (1.425; measured by the supplier) and the refractive index of the solvents (1.4189 for propylene carbonate and 1.3842 for butyronitrile, both from the CRC handbook <cit.>, and 1.344 for acetonitrile, provided by the supplier). Calculation of such a weighted average for dimethyl sulfoxide and benzonitrile solutions led to values in very good agreement with our direct measurements.
Several precautions were taken to ensure the purity and stability of the liquid mixtures during the measurements. In all experiments, the ionic liquid [C_4C_1Pyrr][NTf_2] was dried in vacuo (10^-2 mbar, 70^∘C) for several hours to remove residual water. In the case of the acetonitrile, butyronitrile and propylene carbonate experiments, the liquid was obtained from freshly opened bottles, while for the benzonitrile measurements and some of the dimethyl sulfoxide experiments, the bottles had been opened within two weeks of the measurement. The dried ionic liquid was then mixed with the solvents and introduced in between the lenses within a few minutes, in order to minimise exposure of the mixture to atmospheric moisture. In all cases, the liquid film between the mica surfaces was in contact with a large bulk reservoir. For solutions of ionic liquid with dimethyl sulfoxide, benzonitrile, and propylene carbonate, a droplet of solution of approximately 20 μL was injected between the lenses. In the case of acetonitrile and butyronitrile solutions, which are volatile, the bottom lens was immersed in a bath of the solution, and for acetonitrile tests, additional solvent was introduced in the SFB chamber to create a saturated solvent vapour and thus minimise evaporation. The drying agent P_2O_5 was also introduced in the chamber to capture any residual water vapour. Different glues were used to attach the mica sheets to the lenses, depending on the compatibility of the solvents: glucose (Sigma-Aldrich, 99.5%) was used as glue for the propylene carbonate experiments, EPON 1004 (Shell Chemicals) was used for the benzonitrile, acetonitrile and butyronitrile solutions, and paraffin (Aldrich, melting point 53-57^∘C) was used for the dimethyl sulfoxide experiments.
§.§ Experimental measurements varying c_ion
An example of the measured interaction force between two mica plates as a function of separation, D, across a pure ionic liquid, [C_4C_1Pyrr][NTf_2], is shown in Figure <ref>(b). As the surfaces approach from large D they experience a repulsive force, exponentially increasing with decreasing D, eventually giving way to an oscillatory region at D ≲ 5-8 nm. The key signature of oscillatory forces is the presence of minima in the profile as detected on retraction of the surfaces; these are shown using a linear scale in the inset to Figure <ref>(b). Interpretation of the structural features in ionic liquids leading to such oscillatory forces has been discussed in the past <cit.>, the details of this near-surface region depend on ionic liquid molecular features such as cation-anion size asymmetry and ion amphiphilicity, surface chemistry and surface charge. Here we focus instead on the monotonic tail of the interaction force—in this pure ionic liquid, the tail is measurable above our resolution limit from about 20 nm (several tens of ion diameters)—which we will show to be relatively insensitive to the molecular features of the ionic liquid or electrolyte. The exponential decay length in the asymptotic limit is taken as the screening length λ_S.
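In practice, λ_S is obtained by fitting the monotonic tail of the normalized force F/R to a single exponential. A sketch of such a fit is given below (Python with scipy); the choice of D_min, marking the onset of the asymptotic regime, is made by eye for each profile and is therefore an input assumption:

import numpy as np
from scipy.optimize import curve_fit

def fit_screening_length(D, F_over_R, D_min):
    """Fit F/R = A exp(-D/lambda_S) to the monotonic tail D >= D_min.
    D in nm and F/R in mN/m are illustrative unit choices; returns
    (A, lambda_S)."""
    D, F_over_R = np.asarray(D), np.asarray(F_over_R)
    mask = D >= D_min
    # A linear fit of log(F/R) vs D gives a robust initial guess ...
    slope, intercept = np.polyfit(D[mask], np.log(F_over_R[mask]), 1)
    p0 = (np.exp(intercept), -1.0 / slope)
    # ... which is then refined by a non-linear least-squares fit.
    popt, _ = curve_fit(lambda d, A, lam: A * np.exp(-d / lam),
                        D[mask], F_over_R[mask], p0=p0)
    return popt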
We studied the variation of λ_S with c_ion in a mixture of [C_4C_1Pyrr][NTf_2] with propylene carbonate (molecular solvent), chosen for their miscibility and liquidity over the full range of mole fraction, from pure solvent to pure salt, at room temperature. Figure <ref>(a) shows three force profiles chosen at concentration points to demonstrate the clear decrease in λ_S between 0.01 M (λ_S=2.7± 0.3 nm) and 1.0 M (λ_S=1.05± 0.4 nm), and the subsequent increase in λ_S between 1.0 M and 2.0 M (λ_S=5.4± 0.7 nm). Figure <ref>(b) shows how λ_S varies with c_ion^1/2, and also shows similar measurements made for NaCl in water. It is clear that in both cases there exists a minimum in λ_S at intermediate concentration.
The realisation that NaCl in water at sufficiently high concentration shows the same divergence of screening length as observed in ionic liquids led us to hypothesise that the origin of the anomalous λ_S lies in electrostatic interactions between ions, rather than in a mechanism dependent on chemical features such as hydrogen bonding or nanoscale aggregation of non-polar domains. This indeed appears to be the case, as demonstrated by the collapse of all data points when the screening length is scaled by the Debye length and the concentration is scaled by the dielectric constant and ion diameter, as shown in Figure <ref>. We note that the dielectric constant varies substantially as a function of ion concentration; the dielectric constants of ionic liquid solutions are calculated using effective medium theory <cit.>, and the dielectric constant of alkali halide solutions are taken from the literature <cit.>. Included in Figure <ref> are also a wide range of pure ionic liquids, and some further 1:1 inorganic salts in water; the common scaling appears to be general across these electrolytes. The abscissa in Figure <ref> is the nondimensional quantity a/λ_D, where a is the mean ion diameter in the electrolyte; the ion diameter of ionic liquid is estimated from X-ray scattering experiments <cit.>, and we take the ion diameter of alkali halide salts to be the unhydrated ion diameter <cit.>. a/λ_D scales as (c_ion/ϵ)^1/2. As we will show in the following section, a/λ_D is also an important parameter describing the scaling of the chemical potential and activity in electrolyte solutions.
There are two distinct scaling regimes in Figure <ref>, which we call “low” and “high” concentration. At low concentration, where a/λ_D<1, the measured screening length is the Debye length, i.e. λ_S/λ_D=1. This persists until the point at which the Debye length shrinks to the ion diameter, a=λ_D. At high concentration, when a/λ_D>1, the scaling switches to a power law:
λ_S/λ_D∼(a/λ_D)^3
which is equivalent to λ_S∼ c_ion a^3 l_B. That is to say, our measurements suggest that in the high-concentration regime the screening length scales linearly with Bjerrum length.
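A minimal numerical summary of these two regimes, using the debye_length and bjerrum_length helpers sketched above, is as follows; the sharp crossover at a = λ_D is our simplification of the data in Figure <ref>, not a derived result:

def screening_length(eps_r, c_ion_molar, a, T=298.0):
    """Empirical screening length in metres: Debye-Hueckel below the
    crossover a = lambda_D, and the underscreening scaling
    lambda_S ~ lambda_D (a/lambda_D)^3 above it; `a` is the mean ion
    diameter in metres."""
    lam_D = debye_length(eps_r, c_ion_molar, T)
    x = a / lam_D
    return lam_D if x < 1.0 else lam_D * x**3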
§.§ Experimental measurements varying l_B
The experiments described above consist of a survey across the concentration spectrum for two classes of electrolyte. This series of experiments, however, does not provide the most direct test of the scaling of the screening length, because along this axis c_ion and ϵ are coupled: each increment in concentration also alters the dielectric constant of the mixture. Therefore, in order to test the apparent linear relationship between λ_S and l_B in the high-concentration regime, we next carried out a series of experiments in which the Bjerrum length was varied at fixed (high) salt concentration. This was achieved using a range of molecular solvents with a wide range of dielectric constants, mixed with salt at fixed c_ion (2M). We measured the force between mica plates across each of these electrolytes in the SFB: in each case a long-ranged, exponentially decaying force was apparent, qualitatively similar to those described above, and the asymptotic decay length was extracted. The resulting screening lengths are plotted against the Bjerrum length of the electrolyte mixture in Figure <ref>. Considerable experimental error arises in these measurements when employing volatile solvents (particularly ACN) and when removing traces of water is difficult (as for DMSO). Nonetheless, it is clear that the screening length increases with Bjerrum length, and the data are consistent with a linear scaling of λ_S with l_B, as predicted by the underscreening relationship (Equation (<ref>)).
Finally, we note that the proposed underscreening scaling of λ_S with l_B also implies that λ_S ∼ 1/T, and therefore experiments with varying temperature also provide a test of the underscreening relationship. In a recent paper comparing the screening lengths in [C_2mim][NTf_2] and [C_3mim][NTf_2] at different temperatures it was indeed found that the screening length decreases with increasing temperature <cit.>. However the activated mechanism proposed there led to the suggestion of an Arrhenius dependence on temperature, i.e. log( λ_S) ∼ 1/T, which is not consistent with the underscreening scaling (Equation (<ref>)). As such we now revisit the data presented in ref <cit.>. Figure <ref>(a)-(b) show the measured screening lengths λ_S vs. 1/T and Figure <ref>(c)-(d) show log(λ_S) vs. 1/T. The goodness-of-fit of logλ_S against 1/T – the test of Arrhenius dependence – is actually inferior to that of λ_S against 1/T, although the difference is slight. Therefore the data in Figure <ref> are consistent with the scaling λ_S ∼ l_B c_0 a^3. Further studies of temperature dependence of λ_S in different electrolytes will help distinguish between the Arrhenius dependence on temperature and the λ_S ∼ 1/T implied by the underscreening scaling. We note that our analysis and that in ref <cit.> ignore the dependence of the dielectric constant on temperature; the validity of this assumption must be addressed in future works.
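The comparison between the two temperature scalings amounts to comparing the quality of two linear regressions; a minimal sketch is given below. It ignores measurement errors, which a full analysis should propagate:

import numpy as np

def compare_temperature_scalings(T, lam_S):
    """R^2 of lam_S vs 1/T (underscreening, via l_B ~ 1/T) against R^2 of
    log(lam_S) vs 1/T (Arrhenius), for screening lengths measured at
    temperatures T (in K)."""
    x = 1.0 / np.asarray(T, dtype=float)
    def r_squared(y):
        coeffs = np.polyfit(x, y, 1)
        residuals = y - np.polyval(coeffs, x)
        return 1.0 - residuals.var() / y.var()
    lam_S = np.asarray(lam_S, dtype=float)
    return {"linear in 1/T": r_squared(lam_S),
            "Arrhenius": r_squared(np.log(lam_S))}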
§ RELATING UNDERSCREENING TO PHYSICAL PROPERTIES
Setting aside the question of why the screening length in concentrated electrolytes is anomalously long, in this section we instead assess whether this anomalously long screening length can be connected to other, independently measured physical properties of the electrolytes. We will consider two archetypal properties of concentrated electrolytes: the activity coefficient of the ions and the differential capacitance at the point of zero charge. Analytical expressions that connect the Debye length to both quantities for dilute electrolytes are well known. An intuitive approach is to replace the Debye length with the experimentally measured screening length and compare with the experimentally measured capacitance and activity coefficient. However, it is not obvious that this intuitive substitution is self-consistent. Instead, we will construct a semi-phenomenological free energy functional and derive the relationship between the screening length, activity coefficient, and differential capacitance from this free energy.
Consider a simple Landau-Ginzburg expansion for the free energy F of the electrolyte in response to an infinitesimally small external potential δ V(𝐫) and fixed charge distribution σ(𝐫). We expand the free energy as a functional of local charge density ρ = c_+ - c_-,
F[ρ] = e^2/2ϵ∫∫[q ρ( 𝐫)+σ(𝐫)] [q ρ (𝐫')+σ(𝐫')]/| 𝐫-𝐫'|d𝐫d𝐫' + ∫[ p/2ρ(𝐫)^2 - ρ(𝐫) δ V(𝐫) ] d𝐫,
where p is a phenomenological coefficient. This expansion is valid for infinitesimal charge fluctuations, ρ≪ c where c = c_+ + c_-. In this limit, we can assume that the fluctuations in the charge density are independent of the total ion density, and the total ion density is uniform in space. The first term in Equation (<ref>) captures the electrostatic interactions between ions and fixed charges, the second term is a local energetic penalty to accumulating charge density. In the Debye-Hückel formalism, this term can be derived by linearising the ideal gas entropy, yielding p = k_B T/c. However, we will leave p to be a phenomenological parameter which may depend on c; we will determine p later by fitting to the experimentally measured screening length. Finally, the last term in Equation (<ref>) describes the interaction between the external potential and the induced charge density in the electrolyte.
We first consider the case with no fixed charge (σ(𝐫)=0). Minimising the free energy (<ref>) yields the Euler-Lagrange equation
p ρ(𝐫) + q^2/ϵ∫ρ(𝐫')/|𝐫 - 𝐫'| d𝐫' = δ V(𝐫).
After performing a three-dimensional Fourier transform on Equation (<ref>), we arrive at
ρ̂(k) = δ̂ ̂V̂(k)/p + 4 π q^2/ϵ k^2,
thus the susceptibility is given by χ(k) = (p + 4 π q^2/(ϵ k^2))^-1. Therefore, electric field perturbations decay exponentially in the medium with a characteristic screening length
λ_S = √(ϵ p/ 4 π q^2).
As such, p could be inferred by measuring the screening length experimentally. A long screening length corresponds to a large value of p.
§.§ Activity coefficient
To motivate the concept of an activity coefficient, we note that the chemical potential of an ideal solution as a function of concentration c reads
μ^id = μ^* + k_B T log c,
where μ^* is the standard chemical potential, i.e. the chemical potential of a 1M solution at standard conditions. Electrolyte solutions are non-ideal due to ion-ion interactions. The actual chemical potential of the cation/anion can be written as a sum of the ideal solution part and the excess part
μ_± = μ^* + k_B T log c_± + μ^ex_± = μ^* + k_B T log (γ_± c_±)
where γ_± = e^μ^ex_±/(k_B T) is called the activity coefficient. In other words, the activity, γ_± c_±, is a measure of the “effective concentration” of species in the system and the activity coefficient, γ_±, is a measure of the deviation of the electrolyte from ideality. The activity coefficient/excess chemical potential is a quantity that has been measured extensively in the literature because of its relevance to electrochemistry <cit.>.
To derive the activity coefficient theoretically, we need to solve for the electric potential outside an ion, which we will model as a uniformly charged spherical shell of radius a. The charge distribution is given by σ(𝐫) = Σδ(|𝐫|-a), where Σ=q/(4 π a^2) is the surface charge density of the ion. Substituting this charge distribution into Equation (<ref>) and setting the external potential δ V(𝐫) =0, we arrive at
∇^2 ϕ - 1/λ_S^2ϕ = -4π/ϵΣδ(|𝐫|-a).
where
ϕ(𝐫)= (1/ϵ)∫ [q ρ(𝐫') + σ(𝐫')]/|𝐫 - 𝐫'| d𝐫'
is the electric potential. Equation (<ref>) can be solved to yield
ϕ(r) =
(q/(ϵ r)) [e^a/λ_S/(1+a/λ_S)] e^-r/λ_S , for a<r ,
(q/(ϵ a)) [1/(1+a/λ_S)] , for 0<r<a .
Equation (<ref>) captures the physics that the self-energy of an ion is reduced by the surrounding ionic atmosphere. This reduction in self-energy due to the ionic atmosphere is given by
ϕ_self = (q/(ϵ a)) (1/(1+a/λ_S)) - q/(ϵ a) = - q/(ϵ (λ_S+a)) .
The excess chemical potential due to ion-ion correlations can thus be computed by the Debye charging process: we consider the ionic atmosphere fixed, and compute the work required to increase the charge of the ion from 0 to q amid the ionic atmosphere
μ_ex/k_B T = ∫_0^qϕ_self(q')/k_B Tdq' = - 1/2 l_B/λ_S+ a.
Therefore, the activity coefficient predicted using the semi-phenomenological model (<ref>) is indeed the classic Debye-Hückel expression but with the Debye length replaced by the experimentally measured screening length.
Figure <ref> shows that the activity coefficient/excess chemical potential for aqueous sodium chloride computed using Equation (<ref>) and the experimentally determined screening length agrees quantitatively with direct measurements of the activity coefficient <cit.>. The experimental measurements of the screening length and estimates of the Bjerrum length are outlined in Section (<ref>). In particular, the increase in the experimentally measured screening length explains the upturn in the excess chemical potential which is usually attributed to excluded-volume interactions <cit.> or dielectric saturation <cit.>.
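The comparison in Figure <ref> amounts to evaluating Equation (<ref>) with the measured λ_S(c) and the concentration-dependent l_B; a one-line transcription is given below, with all lengths in consistent units (the example numbers in the comment are purely illustrative):

def ln_activity_coefficient(lam_S, l_B, a):
    """ln(gamma) = mu_ex / (k_B T) from Equation (15):
    -(1/2) l_B / (lam_S + a), with all lengths in the same units."""
    return -0.5 * l_B / (lam_S + a)

# e.g. l_B = 0.7 nm, a = 0.3 nm, lam_S = 0.2 nm gives ln(gamma) ~ -0.7.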
§.§ Differential capacitance at the point of zero charge
A quantity crucial to energy storage using electrical double layer supercapacitors is the differential capacitance, defined as
C_d = ∂σ/∂ V,
where σ is the surface charge density on the electrode and V is the applied potential difference. The differential capacitance can be computed using the classic Gouy-Chapman-Stern model <cit.>: we assume a layer of ions adsorpted onto the electrode – the Stern layer – and a “diffuse” layer of ions adjacent to the Stern layer which is held in place by ion-electrode electrostatic interactions (Figure <ref>).
The total capacitance C_d is the sum of the diffuse and Stern components in series,
1/C_d = 1/C_Stern + 1/C_Diffuse.
The capacitance of the Stern layer can be estimated by assuming that it is a parallel plate capacitor, with one plate being the electrode and another plate being the ions, thus
C_Stern = ϵ/4 π a.
The capacitance of the diffuse layer can be computed by minimising the semi-phenomenological free energy (<ref>) given a fixed surface potential at the interface between the Stern layer and the diffuse layer. Analogous to the classic Debye-Hückel theory, the electric potential away from a surface with potential ϕ_0 is simply
ϕ = ϕ_0 e^-x/λ_S
where x is the direction normal to the surface and x=0 denotes the position of the surface, the so-called Outer Helmholtz Plane. The induced surface charge is therefore given by Gauss's law,
4 πσ = - ϵ (dϕ/dx)|_x=0 = ϵϕ_0/λ_S ,
thus
C_Diffuse = ϵ/4 πλ_S
and substituting Equations (<ref>) and (<ref>) into (<ref>), we arrive at the differential capacitance at the point of zero charge
C_d = ϵ/4 π1/a + λ_S.
Beyond the point of zero charge, non-linear effects such as the finite size of ions become important <cit.> and thus cannot be captured by the simple free energy (<ref>). As such, we restrict ourselves to comparing with experimental measurement of the differential capacitance at the point of zero charge.
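For reference, the SI form of Equation (<ref>) can be evaluated as below; the numbers in the usage comment are purely illustrative:

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def capacitance_pzc(eps_r, a, lam_S):
    """Differential capacitance at the point of zero charge (SI form of
    Equation (20)): Stern and diffuse layers in series. `a` and `lam_S`
    in metres; returns F/m^2."""
    return eps_r * EPS0 / (a + lam_S)

# Illustrative numbers only: eps_r = 20, a = 0.7 nm, lam_S = 2 nm gives
# capacitance_pzc(20.0, 0.7e-9, 2.0e-9) ~ 0.066 F/m^2 = 6.6 uF/cm^2.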
To our knowledge, systematic measurements of the differential capacitance as a function of ion concentration are scarce. A recent study reported the differential capacitance as a function of dilution for the ionic liquid [EMIm][NTf_2] in propylene carbonate and other organic solvents on a glassy carbon electrode <cit.>. Although the screening length of [EMIm][NTf_2] in propylene carbonate has not been measured, we will use the measured screening length of [C_4 C_1 Pyrr][NTf_2] in propylene carbonate as a close proxy, since the two ionic liquids share similar ion sizes and chemical functional groups.
Figure <ref> shows that the differential capacitance at the potential of zero charge predicted by Equation (<ref>) agrees with the experimentally measured differential capacitance for low and intermediate ion concentrations. In particular, the minimum in the screening length as a function of concentration appears to match the maximum in the differential capacitance, a counterintuitive phenomenon that lies outside the standard Gouy-Chapman-Stern model, as the classic Debye length is a decreasing function of concentration. However, the capacitance at higher concentrations cannot be captured by Equation (<ref>). For the pure ionic liquid, the measured capacitance is significantly larger than what one might predict from the very long screening length. Taken together, this discrepancy between the screening length at high concentrations and the capacitance may indicate the dominant role of specific ion-surface interactions in determining the capacitance of ionic liquids <cit.>. Indeed, for pure ionic liquids, the differential capacitance is dominated by the lateral structure of the monolayer of ions nearest to the electrode <cit.>, and thus any surface chemistry or corrugations will significantly modify the differential capacitance.
§ TOWARDS A THEORY OF UNDERSCREENING: A CONJECTURE
The dependence of the screening length on the Bjerrum length and ion concentration observed empirically, Equation (<ref>), is the opposite of the relationship that one would expect from the expression of the classic Debye length (<ref>). The classic Debye length decreases with increasing ion concentration and Bjerrum length whilst the electrostatic screening length in concentrated electrolytes increases with ion concentration and Bjerrum length. In this section, we will conjecture a simple physical argument that explains this screening length.
Our argument begins with a thought experiment: suppose we put a slab of ionic crystal between two charged surfaces, and ask whether the crystal screens the electric field. The answer is evidently no, because the ions are immobile and the crystal thus acts as a dielectric slab. Now, suppose the crystal contains Schottky defects. Charge transport in such a defect-laden ionic crystal occurs via ions hopping onto defect sites. Alternatively, reminiscent of particle-hole symmetry, one could view the defects themselves as the charge carriers. Defects in the sub-lattice of the cations behave as negative charges, and defects in the sub-lattice of the anions behave as positive charges. Such a system would be able to screen an external electric field, but the charge carrier density that enters the Debye length is the defect concentration rather than the ion concentration; a similar conclusion is reached by analyzing the 1D lattice Coulomb fluid near close packing <cit.>.
An ionic crystal is an extreme example of a correlated Coulomb melt where the ions are translationally immobile. We conjecture that a concentrated electrolyte behaves similarly to an ionic crystal in the sense that the electric potential felt by an ion due to all other ions is significantly greater than thermal fluctuations, and therefore the incentive for an ion to respond to an external potential perturbation is minimal. The role of Schottky defects is played by solvent molecules. Although solvents are charge-neutral molecules, they disrupt ion-ion correlation by freeing up a site that would have been occupied by an ion. Therefore, solvent molecules acquire an effective charge analogous to a defect in an ionic crystal. Another way to phrase the same statement is that solvent concentration fluctuations are coupled with charge fluctuations, which has been observed in molecular dynamics simulations of electrical double layer capacitors <cit.>.
We can put the physical intuition suggested above on a more quantitative footing by rewriting the “defect” Debye length
Λ_D = (4 πq̃_solv^2 l_B c_solv)^-1/2,
where q̃_solv^2 is the mean-squared effective charge of a solvent molecule relative to the charge of an ion (the mean charge of a “defect” is zero in a symmetric electrolyte because it is as likely for a solvent molecule to be in the “cation sub-lattice” as in the “anion sub-lattice”), l_B is the Bjerrum length of the electrolyte, and c_solv is the concentration of solvent molecule. Assuming the system is incompressible, c_solv = c_tot - c_ion, where c_tot is the total concentration of the system which is assumed to be independent of ion concentration.
The next step is to estimate the effective mean-squared charge of a solvent molecule, or “defect”, in this concentrated ionic system. Qualitatively, the defect takes the position of an ion in this correlated ionic system, and as such the energy of creating a defect must be comparable to the fluctuation energy of the ionic system per ion. The energy of a defect scales as E_defect∼q̃_solv^2. This can be seen via symmetry (the defect energy is symmetric with respect to the charge of the defect), or by noting that a uniformly charged sphere of net charge q has a self-energy that scales as ∼ q^2.
The energy density of the ion system can be derived using dimensional analysis: the only relevant electrostatic lengthscale in a system where Debye-Hückel screening is negligible is the Bjerrum length. Therefore, one would expect the energy density e_ion∼ l_B^-3 from dimensional analysis. This estimate is analogous to the fluctuation energy for a dilute electrolyte, which is known to scale as ∼λ_D^-3 <cit.>, except that the role of the Debye length in dilute electrolytes is replaced by the Bjerrum length in concentrated electrolytes because Debye screening is suppressed by strong ion-ion correlation. The electrostatic energy per ion is therefore E_ion∼ a^3 e_ion∼ (a/l_B)^3. Equating E_ion with E_defect gives the scaling relationship
q̃_solv^2 ∼(a/l_B)^3.
This charge scaling shows the important physics that strong ionic correlations (large Bjerrum length) suppresses thermal fluctuations in the system, and therefore the mean-squared charge of a defect which is acquired through fluctuations.
Substituting (<ref>) and the incompressibility constraint into Equation (<ref>), we obtain
Λ_D ∼ (4 π ( c_tot - c_ion) a^3/l_B^2)^-1/2≈ (4 π c_tot a^3/l_B^2)^-1/2 + [l_B c_ion a^3] / [2 √(4 π) (c_tot a^3)^3/2] ,
where the expansion is valid for c_ion≪ c_tot. Equation (<ref>) shows that the leading-order correction to Debye-Hückel behaviour scales as ∼ l_B c_ion a^3, agreeing with the scaling observed empirically (c_tot a^3 in the denominator is the total packing fraction of molecular species and is approximately a constant, independent of concentration). We note that for ionic liquids, although there are no solvent molecules per se, the internal degrees of freedom of the ions, in particular the alkyl chains on the cation, could perform the role of the solvent by disrupting order in the strongly correlated ionic melt.
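The conjecture of Equations (<ref>)-(<ref>) can be transcribed directly; the order-unity prefactors dropped in the scaling argument are set to one here, so only trends, not absolute values, should be read off:

import numpy as np

def defect_debye_length(l_B, c_tot, c_ion, a):
    """'Defect' Debye length of Equation (21) with the effective solvent
    charge of Equation (23): q_solv^2 ~ (a/l_B)^3 and c_solv = c_tot - c_ion.
    Consistent units are assumed (e.g. lengths in nm, concentrations in
    nm^-3); order-unity prefactors are set to one."""
    q2_solv = (a / l_B) ** 3
    c_solv = c_tot - c_ion
    return 1.0 / np.sqrt(4.0 * np.pi * q2_solv * l_B * c_solv)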
We next consider the ion concentration at which this “ionic crystal” analogy becomes appropriate. The discussion above suggests that the ionic crystal regime is reached when the typical ion-ion electrostatic interaction energy is greater than k_B T. We can put this intuition on a more quantitative footing: consider a spherical blob of electrolyte of radius R in the bulk electrolyte. Modelling the blob as a uniformly charged sphere, the fluctuation energy of the blob is given by
E_fluct∼ k_B T l_B <Q^2>/R
where Q is the charge of the blob. If charge fluctuations in the blob follow Gaussian statistics, then <Q^2> ∼ N_ion where N_ion is the number of ions in the blob, which in turn is related to the bulk density via N_ion∼ c_ion R^3. Therefore E_fluct∼ k_B T l_B c_ion R^2 and the fluctuation energy increases with the blob size. The minimal blob size is obviously the ion diameter, and the strong correlation regime is reached when the fluctuation energy of even this minimal blob is above k_B T. In other words
l_B c_ion a^2 ∼ 1
The scaling relationship (<ref>) can be rewritten as a/λ_D ∼ 1, which agrees with experimental results.
We emphasise that the arguments presented above must be read as speculative conjectures. Key steps, such as assuming that the fluctuation energy of the correlated ion melt has only the Bjerrum length as the relevant lengthscale, and ignoring the possible dependence of the prefactor of E_defect on the ion concentration, all require more rigorous justification. Nonetheless, we believe the idea of solvent molecules being effective charge carriers in a concentrated ionic melt suggests that an analytical theory of asymptotically concentrated electrolytes like ionic liquids could be within reach.
§ DISCUSSION AND CONCLUSION
We have addressed the recent demonstration of anomalously long screening lengths in concentrated electrolytes and put forward a scaling law, termed underscreening, that appears robust in experiments where the solvent dielectric constant and the electrolyte concentration are varied separately. We hypothesise that underscreening could be seen in many more systems beyond surface forces, activity coefficients and capacitance. The obvious experimental candidate is the interaction between charged colloids in concentrated electrolytes <cit.>. Other candidate systems include the rate of electrochemical reactions as a function of the spectator ion concentration, as the redox rate depends on the potential drop near the electrode, which in turn depends on the screening length. Probing the bulk correlation length using small-angle scattering techniques <cit.> or molecular simulations would confirm the connection between the asymptotic decay of surface forces and bulk properties.
Our paper identifies several open questions, perhaps the most pressing of which is the development of a rigorous theory of underscreening. We have identified two avenues towards building a microscopic model. First, the fact that our semi-phenomenological free energy (<ref>) agrees with the measured activity coefficient and, to some extent, the differential capacitance suggests that the screening length has its origin in a large local energy penalty for the accumulation of charge density. One should revisit classic theories of electrolyte solutions to identify the physics that may give rise to such a local energy penalty, bearing in mind that it cannot be specific to the chemistry of the ions, because this scaling is robust for a diverse class of electrolytes. Second, the scaling argument presented in Section <ref> suggests that perhaps one could construct a theory of concentrated electrolytes by considering a dilute theory of interacting solvent molecules with a fluctuation-induced charge. Systematically averaging over the ionic degrees of freedom to arrive at a representation based on interacting “holes” is the analytical challenge.
The physical quantities and measurements that we have mentioned thus far are equilibrium properties. The next frontier is dynamic or non-equilibrium effects. We expect that underscreening may manifest itself in linear response quantities such as conductivity, which is related to the equilibrium structure via the fluctuation-dissipation theorem. Extending the free energy (<ref>) to understand linear non-equilibrium response and comparing with experimental data is clearly the next step. The physics beyond linear response is much richer. For example, one could imagine that there is a threshold electric field above which the migration of the strongly correlated ions under the applied electric field dominates over ion-ion correlations and underscreening thus becomes unimportant; continuing the analogy between ionic crystals and concentrated electrolytes, this threshold electric field may be analogous to dielectric breakdown. Indeed, the dissociation constant of weak electrolytes is known to be an increasing function of electric field strength <cit.>, although a simple argument shows that underscreening cannot be understood by simple ion pairing <cit.>. For pure ionic liquids, the fact that they comprise domains of alkyl chains and domains of charged groups with locally heterogeneous dynamics <cit.> will complicate the microscopic picture of ion transport.
In summary, we have presented a series of experimental results showing that the interaction between charged surfaces in a concentrated electrolyte decays exponentially with a decay length that follows the scaling relationship λ_S ∼ l_B c_ion a^3, where l_B is the Bjerrum length, c_ion the ion concentration and a the ion diameter. This scaling relationship is robust to varying the chemical functionalities or molecular features of the ions, and is verified for both ionic liquid solutions and alkali halide solutions. This anomalously long screening length which increases linearly with l_B and c_ion is the opposite of what one would expect from the classic Debye length and is termed “underscreening”. By constructing a semi-phenomenological free energy, we show that underscreening explains the classic measurements that the activity coefficient in aqueous sodium chloride solution is a non-monotonic function of ion concentration. Underscreening also explains the observation that the differential capacitance at the point of zero charge is a non-monotonic function of ion concentration. We conjecture that in a concentrated electrolyte with strong ion-ion correlations, it is the neutral solvent molecules rather than ions that acts as charge carriers; the solvent molecules acquire an effective charge through thermal fluctuations. We show that the empirically observed scaling relationship λ_S ∼ l_B c_ion a^3 follows naturally from this heuristic conjecture.
AAL is supported by a George F. Carrier Fellowship at Harvard University. AMS is supported by a Doctoral Prize from the EPSRC. SP and CPM are supported by The Leverhulme Trust (RPG-2015-328) and the ERC (under Starting Grant LIQUISWITCH). AAL and SP thank S Safran and J Klein for their hospitality at the Weizmann Institute and insightful discussions. We are also grateful to R Evans, J Forsman, D Limmer, P Pincus, N Green, R van Roij, E Trizac and participants in the Workshop on Anomalous Screening at the Weizmann Institute for many interesting ideas and comments.
|
http://arxiv.org/abs/1701.07888v1 | 20170126220006 | There Are (super)Giants in the Sky: Searching for Misidentified Massive Stars in Algorithmically-Selected Quasar Catalogs | [
"Trevor Dorn-Wallenstein",
"Emily Levesque"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA",
"astro-ph.HE"
] |
There Are (super)Giants in the Sky: Searching for Misidentified Massive Stars in Algorithmically-Selected Quasar Catalogs
T. Dorn-Wallenstein and E. Levesque
==========================================================================================================================
Thanks to incredible advances in instrumentation, surveys like the Sloan Digital Sky Survey have been able to find and catalog billions of objects, ranging from local M dwarfs to distant quasars. Machine learning algorithms have greatly aided in the effort to classify these objects; however, there are regimes where these algorithms fail, where interesting oddities may be found. We present here an X-ray bright quasar misidentified as a red supergiant/X-ray binary, and a subsequent search of the SDSS quasar catalog for X-ray bright stars misidentified as quasars.
§ INTRODUCTION/OVERVIEW
§.§ Red Supergiant X-ray Binaries
Over the past decade, many exotic close binary systems with supergiant components have been discovered. Systems like NGC 300 X-1 — a Wolf-Rayet/black hole X-ray binary (<cit.>) — and SN2010da — a sgB[e]/neutron star X-ray binary (<cit.>) — are two such examples of a coupling between a massive star in a short-lived evolutionary phase and a compact stellar remnant. Interestingly, no X-ray binaries with confirmed red supergiant (RSG) counterparts have been discovered (RSGs have been proposed as candidate donor stars for a few Ultraluminous X-ray Sources; see <cit.>). This may be partially explained by the rarity of RSGs; nevertheless, RSGs are both longer-lived and more common than most other evolved massive stars, owing to the smaller — 10-25 M_⊙ — initial masses of their zero age main sequence progenitors.
RSG X-ray binaries, if they exist, offer a view into an interesting edge case of accretion; their extended envelopes and strong winds (Ṁ ∼ 10^-4 M_⊙ yr^-1, <cit.>) could allow for accretion from both the wind and Roche-lobe overflow, in an environment continually enriched with dust produced by the RSG. RSG X-ray binaries are also the immediate progenitors of Thorne-Żytkow Objects — stars with embedded neutron star cores (<cit.>) — assuming the neutron star plunges into the RSG as it expands (<cit.>).
§.§ J0045+41
To search for RSG X-ray binaries, we used the photometry of the Local Group Galaxy Survey (LGGS, <cit.>), which covers M31, M33, the Magellanic Clouds and 7 dwarf galaxies in the Local Group. Following <cit.> to find RSGs among the nearly-identical foreground dwarfs, we cross-referenced the positions of the LGGS RSGs with the Chandra Source Catalog (CSC, <cit.>), and found one RSG coincident with an X-ray source.
LGGS J004527.30+413254.3 (J0045+41 hereafter) is a bright (V ≈ 19.9) object of previously-unknown nature in the disk of M31. <cit.> classify J0045+41 as an eclipsing binary with a period of ∼ 76 days. J0045+41 was also observed with the Palomar Transient Factory (PTF); the g-band lightcurve shows evidence for a ∼650 day period. On the other hand, <cit.> identify J0045+41 as a globular cluster, and it has been included in catalogs of M31 globular clusters as recently as 2014 (<cit.>). The LGGS photometry is consistent with the color and brightness of a RSG. Indeed, following <cit.>, we found that, as an RSG, J0045+41 would have an effective temperature of ∼3500 K and a bolometric magnitude of -6.67, consistent with a 12-15 M_⊙ RSG. However, a complete SED fit to photometry from the Panchromatic Hubble Andromeda Treasury (PHAT, <cit.>) using the Bayesian Extinction and Stellar Tool (BEAST, <cit.>) yields the unphysical result of a 300 M_⊙, 10^5 K star, extincted by A_V∼4 magnitudes. Furthermore, the object appears extended in the PHAT images (though its radial profile appears similar to that of other nearby stars).
J0045+41 is separated by ∼1.18^'' from an X-ray source. The source, CXO J004527.3+413255, is bright (F_X = 1.98×10^-13 erg s^-1 cm^-2) and hard; fitting a spectrum obtained by Williams et al. (in prep) yields a power law with Γ∼1.5. The best-fit neutral hydrogen column density is 1.7×10^21 cm^-2, which corresponds to A_V∼1.
§.§.§ Observations and Data Reduction
The apparent periodicity of J0045+41 and its apparent association with a hard and unabsorbed X-ray source prompted us to obtain follow-up spectroscopic observations to determine the true nature of this object. We obtained a longslit spectrum of J0045+41 using the Gemini Multi-Object Spectrograph (GMOS) on Gemini North. Four 875 second exposures were taken 2016 July 5 using the grating centered on 5000 Å, and four 600 second exposures were taken 2016 July 9 using the grating centered on 7000 Å, with a blocking filter to remove second-order diffraction. Due to the gaps between GMOS's three CCDs, two of each set of exposures were offset by +50 Å. The data were reduced using the standard package. The final reduced spectrum has continuous signal from ∼4000 to ∼9100 Å at a resolution of R ∼ 1688 (blue)/1918 (red).
§.§.§ Spectrum and Redshift Determination
The spectrum (Figure <ref>) shows that J0045+41 is a quasar at z≈0.21 (measured with Hα, Hβ, [OIII]λ5007 and Ca II H and K). While a false positive, this quasar is quite interesting in its own right. The (low-significance) detections of periodicity on short timescales by multiple sources are difficult to explain. Furthermore, mistaking a blue quasar for a red star would imply a high reddening along the line of sight, which is consistent with a sightline through M31; however, the low H column density implied by the X-ray counterpart's spectrum, and our inability to achieve a satisfactory fit to the optical spectrum by reddening the (redshifted) quasar template spectrum from <cit.>, indicate that J0045+41 belongs to a small and intriguing class of intrinsically red quasars, observed through a low-extinction region of the ISM in M31 (<cit.>).
§ STARS IN THE SDSS QUASAR CATALOG
If a quasar can be misidentified as a RSG, are there red stars — especially X-ray bright stars — in already-existing quasar catalogs? To answer this question, we turned to the quasar catalog of the Sloan Digital Sky Survey (SDSS, <cit.>).
§.§ Sample Selection
We selected all SDSS objects automatically tagged as quasars that were within 0.2 magnitudes of J0045+41 in g-r vs. r-i vs. i-z color-space — J0045+41 is too faint in u to utilize u-g — and ignored any warning flags to avoid throwing out interesting objects that were not easily identified by the SDSS algorithm. 1098 objects in this sample had associated spectroscopic observations. Interestingly, many of the spectroscopically-determined redshifts were unbelievably small or even negative, implying that these objects are in a regime of color-space where classification algorithms may fail. Indeed, on visual inspection of these spectra, many of them are stellar.
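For illustration, the snippet below sketches such a color-space cut in NumPy; the interpretation of the 0.2-mag criterion as a Euclidean ball in (g-r, r-i, i-z) and the reference colors are our assumptions, not details of the actual SDSS query.

```python
import numpy as np

# Minimal sketch of the color-space selection described above. The
# reference colors of J0045+41 below are placeholders, not measured values.
def select_color_neighbors(g, r, i, z, ref_gr, ref_ri, ref_iz, radius=0.2):
    """Boolean mask of objects within `radius` mag of the reference point
    in (g-r, r-i, i-z) color space, treated here as a Euclidean ball."""
    d = np.sqrt((g - r - ref_gr)**2 + (r - i - ref_ri)**2 + (i - z - ref_iz)**2)
    return d < radius

# Example with random photometry standing in for the SDSS quasar catalog:
rng = np.random.default_rng(0)
g, r, i, z = rng.normal([20.5, 19.9, 19.6, 19.4], 0.3, size=(1000, 4)).T
mask = select_color_neighbors(g, r, i, z, ref_gr=0.6, ref_ri=0.3, ref_iz=0.2)
print(mask.sum(), "objects selected")
```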
§.§ Stars
We used emcee, a Python implementation of Markov Chain Monte Carlo (MCMC) by <cit.>, to fit Gaussian profiles to the Ca II triplet (λ = 8498, 8542, and 8662 Å), which we use to identify stars. The posterior distributions of the parameters allow us to determine whether the triplet is well fit, and to estimate errors for each parameter. Because the relative centroids and strengths of the lines are fixed, a good fit guarantees the lines are actually detected, while simultaneously measuring — with accurate errors — the radial velocity and equivalent width (W_λ) of the triplet. After a follow-up inspection by eye of spectra that were noisy or missing data, we find 344 confirmed cool stars, representing ∼31% of the total sample. Figure <ref> shows the distribution of W_λ for W_λ/σ_W_λ > 1. We follow <cit.> to estimate luminosity from W_λ, and find that most stars are dwarfs (W_λ≲ 6.5 Å) but ∼40 stars have larger equivalent widths, indicating they are likely giants or supergiants (the relationship depends on effective temperature and metallicity, so these labels are approximate).
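A minimal sketch of such a fit is shown below, using emcee as cited above. The fixed relative line strengths (RATIOS), the priors, and the synthetic spectrum are placeholder choices for illustration, not the exact values used in our analysis.

```python
import numpy as np
import emcee

CA_II = np.array([8498.02, 8542.09, 8662.14])   # rest wavelengths [Angstrom]
RATIOS = np.array([0.55, 1.0, 0.85])            # assumed relative strengths (placeholder)

def model(theta, lam):
    # theta = (v [km/s], sigma [Angstrom], amp, cont): shared Doppler shift
    # and width, one overall depth, and a flat continuum level.
    v, sigma, amp, cont = theta
    centers = CA_II * (1.0 + v / 2.998e5)
    prof = sum(r * np.exp(-0.5 * ((lam - c) / sigma)**2)
               for r, c in zip(RATIOS, centers))
    return cont - amp * prof                    # absorption triplet

def log_prob(theta, lam, flux, err):
    v, sigma, amp, cont = theta
    if not (-500 < v < 500 and 0.1 < sigma < 10 and 0 <= amp < 10 and 0 < cont < 10):
        return -np.inf                          # flat priors within loose bounds
    resid = (flux - model(theta, lam)) / err
    return -0.5 * np.sum(resid**2)

# Synthetic spectrum standing in for an SDSS fiber spectrum of a cool star:
rng = np.random.default_rng(0)
lam = np.linspace(8450.0, 8700.0, 500)
flux = model((60.0, 2.0, 0.4, 1.0), lam) + 0.02 * rng.standard_normal(lam.size)
err = np.full_like(lam, 0.02)

ndim, nwalkers = 4, 32
p0 = np.array([0.0, 2.0, 0.2, 1.0]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(lam, flux, err))
sampler.run_mcmc(p0, 2000, progress=False)
theta_med = np.median(sampler.get_chain(discard=500, flat=True), axis=0)

# Equivalent width of the triplet from the fitted profile:
dlam = lam[1] - lam[0]
W = np.sum(1.0 - model(theta_med, lam) / theta_med[3]) * dlam
print(theta_med, W)
```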
§ DISCUSSION AND FUTURE WORK
This result demonstrates that, when looking for rare objects like RSG X-ray binaries, it is important to look in unlikely places; e.g., a red and X-ray bright star may be confused for a quasar if the classification algorithm mistakes the continuum between the TiO bands in a RSG spectrum for an emission line. The fact that some of the color-space containing M-dwarfs — by far the most common type of star — is a regime where classification algorithms fail underlines the importance of improving on these algorithms until they perform as well as the human eye. Indeed, many of these stars were previously identified (see <cit.>), but are still listed as quasars on the SDSS online data portal.
Future work will focus on improving our star-finding algorithm to use alternate spectral features when the Ca II triplet is missing or obscured by noise, and on finding which areas of color-space contain significant numbers of these misidentified stars, with the goal of finding RSG X-ray binaries as well as improving our knowledge of where classification algorithms fail.
[Crowther (2010)]crowther10 Crowther, P. A., Barnard, R., Carpano, S., 2010, , 403, L41
[Dalcanton (2012)]dalcanton12 Dalcanton, J. J., Williams, B. F., Lang, D., et al. 2012, , 200, 18
[Davidsen (1977)]davidsen77 Davidsen, A., Malina, R., & Bowyer, S. 1977, , 211, 866
[Evans (2010)]evans10 Evans, I. N., Primini, F. A., Glotfelty, K. J., et al. 2010, , 189, 37-82
[Foreman-Mackey (2013)]formanmackey13 Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, , 125, 306
[Gordon (2016)]gordon16 Gordon, K. D., Fouesneau, M., Arab, H., et al. 2016, , 826, 104
[Heida (2016)]heida16 Heida, M., Jonker, P. G., Torres, M. A. P., 2016, , 459, 771
[Jennings & Levesque (2016)]jennings16 Jennings, J., & Levesque, E. M. 2016, , 821, 131
[Kim (2007)]kim07 Kim, S. C., Lee, M. G., Geisler, D., et al. 2007, , 134, 706
[Levesque (2006)]levesque06 Levesque, E. M., Massey, P., Olsen, K. A. G., et al. 2006, , 645, 1102
[Massey (1998)]massey98 Massey, P. 1998, , 501, 153
[Massey & Olsen(2003)]massey03 Massey, P., & Olsen, K. A. G. 2003, , 126, 2867
[Massey (2006)]massey06 Massey, P., Olsen, K. A. G., Hodge, P. W., et al. 2006, , 131, 2478
[Massey (2007)]massey07 Massey, P., Olsen, K. A. G., Hodge, P. W., et al. 2007, , 133, 2393
[Richards (2003)]richards03 Richards, G. T., Hall, P. B., Vanden Berk, D. E., et al. 2003, , 126, 1131
[Taam (1978)]taam78 Taam, R. E., Bodenheimer, P., & Ostriker, J. P. 1978, , 222, 269
[Thorne & Żytkow (1975)]thorne75 Thorne, K. S., & Żytkow, A. N. 1975, , 199, L19
[Vanden Berk (2001)]vandenberk01 Vanden Berk, D. E., Richards, G. T., Bauer, A., et al. 2001, , 122, 549
[van Loon (2005)]vanloon05 van Loon, J. T., Cioni, M.-R. L., Zijlstra, A. A., & Loup, C. 2005, , 438, 273
[Vilardell (2006)]vilardell06 Vilardell, F., Ribas, I., & Jordi, C. 2006, , 459, 321
[Villar (2016)]villar16 Villar, V. A., Berger, E., Chornock, R., 2016, , 830, 11
[Wang (2014)]wang14 Wang, S., Ma, J., Wu, Z., & Zhou, X. 2014, , 148, 4
[West (2011)]west11 West, A. A., Morgan, D. P., Bochanski, J. J., et al. 2011, , 141, 97
[York (2000)]york00 York, D. G., Adelman, J., Anderson, J. E., Jr., et al. 2000, , 120, 1579
|
http://arxiv.org/abs/1701.08222v1 | 20170127235208 | Sampling Without Time: Recovering Echoes of Light via Temporal Phase Retrieval | [
"Ayush Bhandari",
"Aurelien Bourquard",
"Ramesh Raskar"
] | cs.IT | [
"cs.IT",
"cs.CV",
"math.IT"
] |
Sampling Without Time:
Recovering Echoes of Light via Temporal Phase Retrieval
Ayush Bhandari^†, Aurélien Bourquard ^ and Ramesh Raskar^†
^† Media Laboratory and ^ Research Laboratory of Electronics
Massachusetts Institute of Technology Cambridge, MA 02139–4307 USA.
^†,ayush@MIT.edu ∙ aurelien@MIT.edu ∙ raskar@MIT.edu
To Appear in the Proceedings of IEEE ICASSP, 2017 (the 42^nd IEEE International Conference on Acoustics, Speech, and Signal Processing).
This work expands on the ideas discussed in <cit.> and <cit.>.
Abstract
This paper considers the problem of sampling and reconstruction of a continuous-time sparse signal without assuming the knowledge of the sampling instants or the sampling rate. This topic has its roots in the problem of recovering multiple echoes of light from its low-pass filtered and auto-correlated, time-domain measurements. Our work is closely related to the topic of sparse phase retrieval and in this context, we discuss the advantage of phase-free measurements. While this problem is ill-posed, cues based on physical constraints allow for its appropriate regularization. We validate our theory with experiments based on customized, optical time-of-flight imaging sensors. What singles out our approach is that our sensing method allows for temporal phase retrieval as opposed to the usual case of spatial phase retrieval. Preliminary experiments and results demonstrate a compelling capability of our phase-retrieval based imaging device.
§ INTRODUCTION
During a recent visit to meet a collaborator, the author (who happens to be an avid photographer) saw a stark reflection of a local monument on the window panes of the Harvard science center. This is shown in Fig. <ref>. This is a commonplace phenomenon at a macro scale as well as a micro scale (for example, microscopy). At the heart of this problem is a fundamental limitation, that is, all conventional imaging sensors are agnostic to time information. Alternatively stated, the image–formation process is insensitive to the potentially distinct times that the photons can spend traveling between their sources and the detector.
To elaborate, consider the Gedankenexperiment version of Fig. <ref> described in Fig. <ref> (a). Let us assume that a light source is co-located with the imaging sensor (such as a digital camera). The reflection from the semi-reflective sheet (such as a window pane) arrives at the sensor at time t_1 = 2d_1/c, while the mannequin is only observable on or after t_2 = 2d_2/c. Here, c is the speed of light. While we typically interpret images as a two-dimensional, spatial representation of the scene, let us for now consider the photograph in Fig. <ref> along the time dimension. For the pixel (x_0, y_0), this time-aware image is shown in Fig. <ref> (b). Mathematically, our time-aware image can be written as a 2-sparse signal (and as a K-sparse signal in general),
m( r,t) = ( λ _1Γ _1)( r)δ( t - t_1 (r)) + ( λ _2Γ _2)( r)δ( t - t_2(r))
where {Γ_k(r)} are the constituent image intensities, {λ_k(r)} are the reflection coefficients, r = (x, y)^⊤ is the 2D spatial coordinate, and δ(·) is the Dirac distribution.
A conventional imaging sensor produces images by marginalizing time information, resulting in the 2D photograph,
∫_0^Δ m(r, t) dt = (λ_1 Γ_1)(r) + (λ_2 Γ_2)(r) ≡ I(r) , Δ ≫ t_2 .
Recovering {Γ_k}, k = 1, 2, given I, or more generally, K echoes of light,
m( r,t) = ∑_k = 0^K - 1( λ _kΓ _k)( r)δ( t - t_k(r))
is an ill-posed problem. Each year, a number of papers <cit.> attempt to address this issue by using regularization and/or measurement-acquisition diversity based on image statistics, polarization, shift, motion, color, or scene features. Unlike previous works, here, we ask the question: Can we directly estimate {Γ_k}_k=0^K-1 in (<ref>)? In practice, sampling (<ref>) would require exorbitant sampling rates and this is certainly not an option. Also, we are interested in {Γ_k}_k=0^K-1 only, as opposed to {Γ_k, t_k}_k=0^K-1 <cit.>, where t_k is a non-linear argument in (<ref>). As a result, our goal is to recover the intensities of a sparse signal. For this purpose, we explore the idea of sampling without time—a method for sampling a sparse signal which does not assume the knowledge of sampling instants or the sampling rate.
In this context, our contributions are twofold:
* For the general case of K-echoes of light, our work relies on estimating the constituent images {Γ _k}_k = 0^K - 1 from the filtered, auto-correlated, time-resolved measurements. This is the distinguishing feature of our approach and is fundamentally different from methods proposed in literature which are solely based on spatio-angular information (cf. <cit.> and references therein).
* As will be apparent shortly, our work is intimately linked with the problem of (sparse) phase retrieval (cf. <cit.> and references therein). Our "sampling without time" architecture leads to an interesting measurement system which is based on time-of-flight imaging sensors <cit.>, and suggests that K^2−K+1 measurements suffice for the estimation of K echoes of light. To the best of our knowledge, neither such a measurement device nor such bounds have been studied in the context of image source separation <cit.>.
For the sake of simplicity, we will drop the dependence of m and Γ on the spatial coordinates r. This is particularly well suited for our case because we do not rely on any cross-spatial information or priors. Also, the scaling factors λ_k are assumed to be absorbed in Γ_k.
Significance of Phase-free or Amplitude-only measurements: It is worth emphasizing that resolving spikes from a superposition of low-pass filtered echoes is a problem that frequently occurs in many other disciplines. This is a prototype model used in the study of multi-layered or multi-path models. Some examples include seismic imaging <cit.>, time-delay estimation <cit.>, channel estimation <cit.>, optical tomography <cit.>, ultrasound imaging <cit.>, computational imaging <cit.> and source localization <cit.>. Almost all of these variations rely on amplitude and phase information or amplitude and time-delay information. However, recording amplitude-only data can be advantageous due to several reasons. Consider a common-place example based on pulse-echo ranging. Let p(t) = sin(ω t) be the emitted pulse. Then, the backscattered signal reads r(t) = Γ_0 sin(ω t − θ_0). In this setting, on-chip estimation of phase (θ_0) or time delay (t_0) <cit.>,
* can either be computationally expensive or challenging since t_k is a non-linear parameter in (<ref>), and hence, in r( t ).
* requires more measurements. For instance, 2 measurements suffice for amplitude-only estimation (Γ_0), while an amplitude-phase pair requires 4 measurements <cit.> (see the sketch after this list). This aspect of phase estimation is an important bottleneck as the frame rate of an imaging device is inversely proportional to the number of time-domain measurements.
* is prone to errors. In many applications, multiple-frequency measurements, ω = kω_0, are acquired <cit.>, assuming that phase and frequency, θ_0 = ω t_0, are linearly proportional. However, this is not the case in practice. The usual practice is to oversample <cit.>.
In all such cases, one may adopt our methodology of working with intensity-only measurements.
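To make the measurement-count comparison in the second item concrete, the following sketch simulates both read-out schemes for a single sinusoidal return; the modulation frequency and sampling offset are arbitrary illustrative choices, not parameters of our sensor.

```python
import numpy as np

omega = 2 * np.pi * 30e6            # modulation frequency [rad/s], illustrative
Gamma0, theta0 = 0.7, 1.1           # ground-truth amplitude and phase
r = lambda t: Gamma0 * np.sin(omega * t - theta0)
T = 2 * np.pi / omega
t0 = 1.234e-9                       # arbitrary sampling offset

# Amplitude only: two samples a quarter period apart suffice, since
# r(t)^2 + r(t + T/4)^2 = Gamma0^2 for any t.
amp2 = np.hypot(r(t0), r(t0 + T / 4))

# Amplitude AND phase: the usual four equi-spaced samples are combined.
y = r(t0 + (T / 4) * np.arange(4))
amp4 = 0.5 * np.hypot(y[0] - y[2], y[1] - y[3])
phase4 = (omega * t0 - np.arctan2(y[0] - y[2], y[1] - y[3])) % (2 * np.pi)
print(amp2, amp4, phase4)           # approx. 0.7, 0.7, 1.1
```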
§ IMAGE–FORMATION MODEL FOR TOF SENSORS
Optical ToF sensors are active devices that capture 3D scene information. We adopt the generic image–formation model used in <cit.> which is common to all ToF modalities such as lidar/radar/sonar, optical coherence tomography, terahertz, ultrasound, and seismic imaging. In its full generality, and dropping dependency on the spatial coordinate for convenience, one can first formalize this ToF acquisition model as:
p → h → r → ϕ → y , 𝐲 ↦ 𝐦 = 𝐲̄ ,
where,
* p( t ) >0 is a T-periodic probing function which is used to illuminate the scene. This may be a sinusoidal waveform, or even a periodized spline, Gaussian, or Gabor pulse, for instance.
* h(t, τ) is the scene response function (SRF). This may be a filter, a shift-invariant function h_SI(t, τ) = h(t − τ), or a partial differential equation modeling some physical phenomenon <cit.>.
* r( t ) = ∫p( τ)h( t,τ)dτ is the reflected signal resulting from the interaction between the probing function and the SRF.
* y( t ) = ∫r( τ)ϕ( t,τ)dτ is the continuous-time signal resulting from the interaction between the reflected function and the instrument response function (IRF), or ϕ, which characterizes the electro-optical transfer function of the sensor.
* 𝐲 is a set of discrete measurements of the form y(t)|_t = nΔ.
* 𝐲̄ = 𝐲 * 𝐲̌ is the cyclic auto-correlation of 𝐲, where * and 𝐲̌ denote the (cyclic) convolution operator and the time reversal of 𝐲, respectively.
The interplay between p, h, and ϕ results in variations on the theme of ToF imaging <cit.>. In this work, we will focus on an optical ToF setting. Accordingly:
* The probing function corresponds to a time-localized pulse.
* The SRF, accounting for the echoes of light, is a K-sparse filter,
h_SI(t, τ) ≡ h_K(t − τ) = ∑_k=0^K-1 Γ_k δ(t − τ − t_k) .
* The IRF is fixed by design as ϕ_SI(t, τ) = p(t + τ). This implements the so-called homodyne, lock-in sensor <cit.>.
Due to this specific shift-invariant structure of the SRF and IRF, we have y = (p * p̌) * h_K ≡ p̄ * h_K, where p̄ = p * p̌ denotes the autocorrelation of p.
Finally, the measurements read,
𝐦 = 𝐲̄ = φ̄ * h̄_K , φ = p̄ .
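A discrete sketch of this acquisition chain is given below; the pulse shape, echo amplitudes, and delays are illustrative stand-ins for the calibrated quantities, and the Fourier-domain identity m̂ = |p̂|^4 |ĥ_K|^2 implied by the model is verified numerically.

```python
import numpy as np

N = 1024
t = np.arange(N)
p = np.exp(-0.5 * ((t - N / 2) / 25.0)**2)     # stand-in probing pulse (one period)

h = np.zeros(N)                                 # K-sparse SRF with K = 2
h[[180, 420]] = [1.0, 0.6]                      # amplitudes Gamma_k at delays t_k

cconv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
phi = np.real(np.fft.ifft(np.abs(np.fft.fft(p))**2))   # phi = autocorrelation of p
y = cconv(phi, h)                               # lock-in output, y = phi * h_K
m = np.real(np.fft.ifft(np.abs(np.fft.fft(y))**2))     # m = autocorr(y): phase-less

# Fourier-domain consistency check: m_hat = |p_hat|^4 |h_hat|^2.
lhs = np.fft.fft(m)
rhs = np.abs(np.fft.fft(p))**4 * np.abs(np.fft.fft(h))**2
assert np.allclose(lhs, rhs, atol=1e-3)
```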
§ SAMPLING ECHOES OF LIGHT
§.§ Bandlimited Approximation of Probing Function
Due to physical constraints inherent to all electro-optical systems, it is reasonable to approximate the probing signal as a bandlimited function <cit.>. We use a (2L+1)-term Fourier series with basis functions u_ℓ(ω_0 t) = e^jω_0 ℓ t, ω_0 T = 2π, to approximate p with,
p(t) ≈ p̃(t) = ∑_|ℓ| ⩽ L p̂_ℓ u_ℓ(ω_0 t) ,
where the p̂_ℓ are the Fourier coefficients. Note that there is no need to compute p̃(t) explicitly, as we only require the knowledge of p̄ (cf. (<ref>)). In Fig. <ref>, we plot the calibrated p and its Fourier coefficients. In this case, T = 232.40 ns. We also plot the approximation of φ = p̄ obtained with L = 25, together with the measured p and φ.
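The sketch below mirrors this approximation step on a synthetic pulse: the Fourier coefficients of one period are computed with an FFT and truncated to |ℓ| ≤ L. The Gaussian stand-in for the calibrated pulse and the grid size are our own choices.

```python
import numpy as np

N, L = 2048, 25
t = np.arange(N) / N                            # one normalized period (T = 1)
p = np.exp(-0.5 * ((t - 0.5) / 0.04)**2)        # stand-in for the calibrated pulse

c = np.fft.fft(p) / N                           # Fourier-series coefficients p_hat_l
ell = np.fft.fftfreq(N, d=1.0 / N).astype(int)  # harmonic index of each FFT bin
p_hat = np.where(np.abs(ell) <= L, c, 0.0)      # keep only |l| <= L
p_tilde = np.real(np.fft.ifft(p_hat) * N)       # (2L+1)-term approximation of p

print(np.linalg.norm(p - p_tilde) / np.linalg.norm(p))   # relative fit error
psi_hat = np.abs(p_hat)**4                      # only |p_hat_l|^4 enters the pipeline
```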
§.§ Sampling Theory Context
The shift-invariant characterization of the ToF image–formation model allows to re-interpret the sampled version of (<ref>) as the filtering of a sparse signal h̄_K with a low-pass kernel ψ = φ̄ (cf. <cit.>).
m(t) = (h̄_K * φ̄)(t) ≡ (h̄_K * ψ)(t) .
Note that the properties of the auto-correlation operation imply that the sparsity of h̄_K is K^2 − K + 1, unlike h_K which is K-sparse; due to symmetry, h̄_K is completely specified by (K^2 − K)/2 + 1 values. Based on the approximation (<ref>) and the properties of convolution and complex exponentials, and defining ψ̂_ℓ = |p̂_ℓ|^4, we rewrite m(t) as,
m(t) = ∑_|ℓ| ≤ L ψ̂_ℓ u_ℓ(ω_0 t) ∫ h̄_K(τ) u_ℓ^*(ω_0 τ) dτ ,
where the integral is the Fourier integral yielding the ℓ-th coefficient of h̄_K.
Finally, the properties of the Fourier transform imply that sampling the above signal m(t) at time instants t = nΔ results in discrete measurements of the form 𝐦 = 𝐔𝐃_ψ̂𝐬̂, which corresponds to the available acquired data in our acquisition setting. Combining all the above definitions, it follows that:
* 𝐦 ∈ ℝ^N is a vector of filtered measurements (cf. (<ref>)).
* 𝐔 ∈ ℂ^N×(2L+1) is a DFT matrix with elements [u_ℓ(ω_0 nΔ)]_n,ℓ.
* 𝐃_ψ̂ ∈ ℂ^(2L+1)×(2L+1) is a diagonal matrix with diagonal elements ψ̂_ℓ. These are the Fourier-series coefficients of φ̄ = ψ.
* 𝐬̂ ∈ ℝ^2L+1 is a phase-less vector containing the Fourier coefficients of h̄_K, which is obtained by sampling the Fourier transform ŝ(ω) of h̄_K at instants ω = ℓω_0. The signal ŝ(ω) directly depends on the quantities |Γ_k| of interest to be retrieved and is expressed as
ŝ(ω) = |∑_k=0^K-1 Γ_k u_t_k(ω)|^2 ≡ |ĥ_K(ω)|^2
= ∑_k=0^K-1 |Γ_k|^2 + ∑_k=0^K-1 ∑_m=k+1^K-1 2|Γ_k||Γ_m| cos(ω t_k,m + ∠Γ_k,m) ≡ a_0 + ∑_k<m a_k,m cos(ω t_k,m + ∠Γ_k,m) ,
where a_0 = ∑_k |Γ_k|^2, a_k,m = 2|Γ_k||Γ_m|, t_k,m = t_k − t_m, and ∠Γ_k,m = ∠Γ_k − ∠Γ_m.
It is instructive to note that the relevant unknowns {Γ_k}_k=0^K-1 can be estimated from 𝐬̂ ∈ ℝ^2L+1 in (<ref>), which in turn depends only on L, as opposed to the sampling rate and the sampling instants. This aptly justifies our philosophy of sampling without time.
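As a sanity check of this linear model, the sketch below builds 𝐔 and 𝐃_ψ̂ for K = 2 (the diagonal matrix is implemented as an elementwise multiply) and verifies that the weighted deconvolution recovers 𝐬̂; the probe spectrum, amplitudes, and delay difference are placeholder values.

```python
import numpy as np

L, N, T = 20, 128, 128.0
Delta = T / N                                   # sampling step (placeholder units)
omega0 = 2 * np.pi / T
ell = np.arange(-L, L + 1)
n = np.arange(N)

U = np.exp(1j * omega0 * np.outer(n * Delta, ell))    # [u_l(omega0 n Delta)]_{n,l}
p_hat = np.exp(-0.02 * np.abs(ell))                   # placeholder probe spectrum
psi = np.abs(p_hat)**4                                # psi_hat_l = |p_hat_l|^4

G0, G1, t01 = 1.0, 0.6, 17.0                          # |Gamma_k| and delay difference
s = G0**2 + G1**2 + 2 * G0 * G1 * np.cos(omega0 * ell * t01)   # samples of s_hat

m = np.real(U @ (psi * s))                            # m = U D_psi s
s_rec = np.real(np.linalg.pinv(U) @ m) / psi          # weighted deconvolution
print(np.max(np.abs(s_rec - s)))                      # ~ 0 up to roundoff
```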
§ RECONSTRUCTION VIA PHASE RETRIEVAL
Given the data model 𝐦 = 𝐔𝐃_ψ̂𝐬̂ (<ref>), we aim to recover the image parameters |Γ_k| at each sensor pixel. For this purpose, we first estimate samples of ŝ(ω) as 𝐬̂ = 𝐃_ψ̂^-1 𝐔^+ 𝐦, where 𝐔^+ is the matrix pseudo-inverse of 𝐔. This is akin to performing a weighted deconvolution, knowing that 𝐔 is a DFT matrix. Next, we solve the problem of estimating |Γ_k| in two steps: (1) we first estimate the a_k,m, and then (2), based on the estimated values, we resolve the ambiguities due to |·|.
Parameter Identification via Spectral Estimation: Based on the coefficients 𝐬̂, one can then retrieve the amplitude and frequency parameters that are associated with the oscillatory terms as well as the constant value in (<ref>). The oscillatory-term and constant-term parameters correspond to {a_k,m, t_k,m} and a_0, respectively. All parameter values are retrievable from 𝐬̂ through spectral estimation <cit.>; details are provided in <cit.> for the interested reader.
Note that, given the form of (<ref>) and our acquisition model, the sparsity level of the sequence s—corresponding to the total amount of oscillatory and constant terms—is (K^2 - K)/2 + 1. The magnitude values a_k, m and a_0 can thus be retrieved if at least L≥ (K^2 - K) + 1 autocorrelation measurements are performed <cit.>, which is the case in the experiments described in Section <ref>.
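The sketch below illustrates this stage under our own assumptions: Cadzow denoising implemented as alternating rank truncation and anti-diagonal (Hankel) averaging, followed by an annihilating-filter (Prony-type) step. For K = 2 the exponential model order is 3 (one constant term plus a conjugate pair); the sample values and noise level are illustrative.

```python
import numpy as np

def cadzow(x, rank, n_iter=25):
    """Alternate rank truncation and anti-diagonal (Hankel) averaging."""
    n = len(x)
    r = n // 2 + 1
    for _ in range(n_iter):
        H = np.array([x[i:i + n - r + 1] for i in range(r)])  # Hankel: H[i,j] = x[i+j]
        U, sv, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * sv[:rank]) @ Vt[:rank]             # rank truncation
        acc = np.zeros(n, dtype=complex)
        cnt = np.zeros(n)
        for i in range(H.shape[0]):
            for j in range(H.shape[1]):
                acc[i + j] += H[i, j]                          # average anti-diagonals
                cnt[i + j] += 1
        x = acc / cnt
    return x

def prony(x, p):
    """Roots z_k and weights c_k of p exponentials x[m] = sum_k c_k z_k^m."""
    A = np.array([x[i:i + p][::-1] for i in range(len(x) - p)])
    a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]              # annihilating filter
    z = np.roots(np.concatenate(([1.0], a)))
    V = np.vander(z, len(x), increasing=True).T                # V[m, k] = z_k^m
    c = np.linalg.lstsq(V, x, rcond=None)[0]
    return z, c

omega0, ell = 2 * np.pi / 128, np.arange(-20, 21)
a0, a01, t01 = 1.36, 1.2, 17.0                                 # cf. the K = 2 model
rng = np.random.default_rng(1)
s = a0 + a01 * np.cos(omega0 * ell * t01) + 0.01 * rng.standard_normal(ell.size)

z, c = prony(cadzow(s.astype(complex), rank=3), 3)
k = np.argmax(np.abs(np.angle(z)))                             # one of the conjugate pair
print(np.abs(np.angle(z[k])) / omega0, 2 * np.abs(c[k]))       # approx. t01, a01
```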
Resolving Ambiguities:
Based on the aforementioned retrieved parameters, one then wishes to deduce {|Γ_k|}_k=0^K-1. The estimated cross terms {a_k,m, t_k,m} allow to retrieve the values of |Γ_k| through simple pointwise operations. Here, we will focus on the case K = 2. Due to space limitations, we refer the interested readers to our companion paper <cit.> for details on the general case (K > 2). In the case K = 2,
ŝ(ω) = |Γ_0 u_t_0^*(ω) + Γ_1 u_t_1^*(ω)|^2 + ε_ω(Γ_k, t_k)_k>2 , ε_ω ≈ 0 ,
= |Γ_0|^2 + |Γ_1|^2 + 2|Γ_0||Γ_1| cos(ω t_0,1 + ∠Γ_0,1) ≡ a_0 + a_01 cos(ω t_0,1 + ∠Γ_0,1) ,
may be the result of two distinct echoes (Fig. <ref>) or of the approximation of higher-order echoes by two dominant echoes (due to the inverse-square law). In this case, we effectively estimate two terms: a_0 and a_01 (<ref>). The set of retrieved magnitude values then amounts to solving <cit.>:
||Γ_0| ± |Γ_1|| = √(a_0 ± a_01) , a_0, a_01 > 0 .
Thanks to the isoperimetric property for rectangles, a_0 ≥ a_01, so the r.h.s. above is always positive unless there is an estimation error, in which case an exchange of variables leads to the unique estimates,
{|Γ_k|}_k=0,1 = (√(a_0 + a_01) ± √(a_0 − a_01))/2 .
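In code, this last step is just a pair of square roots; the guard below handles the estimation-error case mentioned above.

```python
import numpy as np

def split_magnitudes(a0, a01):
    """|Gamma_0|, |Gamma_1| from a0 = |G0|^2 + |G1|^2 and a01 = 2|G0||G1|."""
    a01 = min(a01, a0)                  # enforce a0 >= a01 under estimation error
    s, d = np.sqrt(a0 + a01), np.sqrt(a0 - a01)
    return (s + d) / 2, (s - d) / 2

g0, g1 = 0.9, 0.4
print(split_magnitudes(g0**2 + g1**2, 2 * g0 * g1))   # -> (0.9, 0.4)
```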
§ EXPERIMENTAL VALIDATION
§.§ Simulation
Noting that the measurements 𝐦 = 𝐔𝐃_ψ̂𝐬̂ and 𝐬̂ are linked by an invertible, linear system of equations, knowing 𝐦 amounts to knowing 𝐬̂. We have presented a detailed comparison of simulation results in <cit.> for the case where one directly measures 𝐬̂. In this paper, we will focus on the practical setting where the starting point of our algorithm is (<ref>).
Experimental results on acquisition and reconstruction of echoes of light in a scene. (a) Single-pixel, time-stamped samples y = p̄ * h_2, which serve as our ground truth because the phase information is known. We also plot the estimated, 2-sparse SRF (<ref>) in red ink. (b) Fourier-domain frequency samples of the SRF, ĥ_2(ℓω_0), with ℓ ∈ [−20, 20] and ω_0 = 2π/Δ, Δ = 70 ps. We plot the measured data in green and the fitted data in black. The frequencies are estimated using Cadzow's method. (c) Same pixel, time-stampless, low-pass filtered, auto-correlated samples (<ref>) together with the estimated, auto-correlated SRF h̄_2 in red ink. (d) Fourier-domain samples ŝ(ℓω_0) (<ref>) and their fitted version (black ink). (e) Ground-truth images for the experiment. (f) Estimated images using temporal phase retrieval.
§.§ Practical Validation
Our experimental setup mimics the setting of Fig. <ref>. A placard that reads “Time–of–Flight” is covered by a semi-transparent sheet; hence K = 2. The sampling rate is Δ ≈ 70 × 10^-12 seconds, using our custom-designed ToF imaging sensor <cit.>. Overall, the goal of this experiment is to recover the magnitudes {|Γ_k|}_k=0^1 given auto-correlated measurements m(nΔ) = ȳ(nΔ), 𝐦 ∈ ℝ^2795. To be able to compare with a “ground truth”, we acquire time-domain measurements y(nΔ) before autocorrelation, whose Fourier-domain phases are intact. In Fig. <ref>(a), we plot the non-autocorrelated measurements 𝐲, while the phase-less measurements 𝐦 are shown in Fig. <ref>(c), from which we note that the samples are symmetric in the time domain due to 𝐦 = 𝐲̄ (cf. <ref>). In the first case (cf. Fig. <ref>(a)), where y = p̄ * h_K, we have <cit.>,
ŷ(ω) = |p̂(ω)|^2 ĥ_K^*(ω) , ĥ_K(ω) = ∑_k=0^K-1 Γ_k u_t_k(ω) .
Similar to 𝐦 in (<ref>), in this case the measurements read 𝐲 = 𝐔𝐃_φ̂𝐡̂ and we can estimate the complex-valued vector 𝐡̂. The phase information in 𝐡̂ allows for the precise computation of {Γ_k, t_k}_k=0^K-1 <cit.>. These “intermediate” measurements serve as our ground truth. The spikes corresponding to the SRF (<ref>) are also marked in Fig. <ref>(a).
Fourier-domain measurements ĥ_2^* linked with y(nΔ) are shown in Fig. <ref>(b). Accordingly, we sample ĥ_2^*(ω) at ω = ℓω_0, ℓ ∈ {−L, …, L}, where ω_0 = 2π/Δ and L = 20. We estimate the frequency parameters {Γ_k, t_k}_k=0^K-1 using Cadzow's method <cit.>. The resulting fit is plotted in Fig. <ref>(b).
In parallel to Figs. <ref>(a),(b), the normalized, autocorrelated data 𝐦 = 𝐲̄ are marked in Fig. <ref>(c). The signal 𝐲 is also shown as a reference (green dashed line). We also plot the estimated h̄_2 (autocorrelated SRF). In Fig. <ref>(d), we plot the measured and deconvolved vector 𝐬̂ = 𝐃_ψ̂^-1 𝐔^+ 𝐦, from which we estimate a_0 and a_k,m (<ref>). The result of fitting using Cadzow's method with 41 samples is shown in Fig. <ref>(d).
The reconstructed images, |Γ_0| and |Γ_1|, due to amplitude-phase measurements (our ground truth, Fig. <ref>(a)) are shown in Fig. <ref>(e). Alternatively, the reconstructed images with auto-correlated/intensity-only information are shown in <ref>(f). One can observe the great similarity between the images obtained in both cases, where only a few outliers appear in the phase-less setting. The PSNR values between the maps reconstructed with and without the phase information are 30.25 dB for |Γ_1| and 48.88 dB for |Γ_0|. These numerical results indicate that, overall, the phase loss that occurs in our autocorrelated measurements still allows for accurate reconstruction of the field of view of the scene.
Finally, in order to determine the consistency of our reconstruction approach, the PSNR between the original available measurements and their re-synthesized versions—as obtained when reintroducing our reconstructed profiles |Γ_k| into our forward model—is also provided. As shown in Figs. <ref>(a) and (c), the PSNR values in the oracle and phase-less settings correspond to 43.68 dB and 42.41 dB, respectively, which confirms that our reconstruction approach accurately takes the parameters and structure of the acquisition model into account.
§ CONCLUSIONS
In this paper, we have introduced a method to satisfactorily recover the intensities of superimposed echoes of light, using a customized ToF imaging sensor for acquisition and temporal phase retrieval for reconstruction. To the best of our knowledge, this is the first method that performs time-stampless sampling of a sparse ToF signal, in the sense that we only measure amplitudes sampled at uniform instants, without caring about the particular sampling times or sampling rates. This innovation can potentially lead to alternative hardware designs and to mathematical simplicity, as phases need not be estimated in hardware.
Bhandari:2016b
A. Bhandari and R. Raskar, “Signal processing for time-of-flight imaging
sensors,” IEEE Signal Processing Magazine, vol. 33, no. 4, pp.
45–58, Sep. 2016.
Bhandari:2016a
A. Bhandari, A. Bourquard, S. Izadi, and R. Raskar, “Time-resolved image
demixing,” in Proc. of IEEE Intl. Conf. on Acoustics, Speech and
Sig. Proc. (ICASSP), Mar. 2016, pp. 4483–4487.
Ohnishi:1996
N. Ohnishi, K. Kumaki, T. Yamamura, and T. Tanaka, “Separating real and
virtual objects from their overlapping images,” in Lecture Notes in
Computer Science.1em plus 0.5em minus 0.4emSpringer Science
+ Business Media, 1996, pp. 636–646.
Levin:2004
A. Levin, A. Zomet, and Y. Weiss, “Separating reflections from a single image
using local features,” in Proc. of IEEE Comp. Vis. and Patt.
Recognition, vol. 1, 2004, pp. I–306.
Bronstein:2005
A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, “Sparse
ICA for blind separation of transmitted and reflected images,” Int.
J. Imaging Syst. Technol., vol. 15, no. 1, pp. 84–91, 2005.
Kong:2011
N. Kong, Y.-W. Tai, and S. Y. Shin, “High-quality reflection separation using
polarized images,” vol. 20, no. 12, pp. 3393–3405, 2011.
Li:2014
Y. Li and M. S. Brown, “Single image layer separation using relative
smoothness,” in Proc. of IEEE Comp. Vis. and Patt. Recognition, 2014,
pp. 2752–2759.
Chandramouli:2016
P. Chandramouli, M. Noroozi, and P. Favaro, “ConvNet-based depth estimation,
reflection separation and deblurring of plenoptic images,” University of
Bern, submitted, 2016.
Bhandari:2016
A. Bhandari, A. M. Wallace, and R. Raskar, “Super-resolved time-of-flight
sensing via FRI sampling theory,” in Proc. of IEEE Intl. Conf. on
Acoustics, Speech and Sig. Proc. (ICASSP), Mar. 2016, pp. 4009–4013.
Eldar:2016
Y. C. Eldar, N. Hammen, and D. G. Mixon, “Recent advances in phase retrieval
[Lecture Notes],” vol. 33, no. 5, pp. 158–162, Sep. 2016.
Qiao:2016
H. Qiao and P. Pal, “Sparse phase retrieval with near minimal measurements: A
structured sampling based approach,” in Proc. of IEEE Intl. Conf. on
Acoustics, Speech and Sig. Proc. (ICASSP), Mar. 2016, pp. 4722–4726.
Rajaei:2016
B. Rajaei, E. W. Tramel, S. Gigan, F. Krzakala, and L. Daudet, “Intensity-only
optical compressive imaging using a multiply scattering material and a double
phase retrieval approach,” in Proc. of IEEE Intl. Conf. on Acoustics,
Speech and Sig. Proc. (ICASSP), Mar. 2016.
Huang:2016
K. Huang, Y. C. Eldar, and N. D. Sidiropoulos, “Phase retrieval from 1D
Fourier measurements: Convexity, uniqueness, and algorithms,” vol. 64,
no. 23, pp. 6105–6117, Dec. 2016.
Lu:2011
Y. M. Lu and M. Vetterli, “Sparse spectral factorization: Unicity and
reconstruction algorithms,” in Proc. of IEEE Intl. Conf. on Acoustics,
Speech and Sig. Proc. (ICASSP), May 2011.
Levy:1981
S. Levy and P. K. Fullagar, “Reconstruction of a sparse spike train from a
portion of its spectrum and application to high-resolution deconvolution,”
Geophysics, vol. 46, no. 9, pp. 1235–1243, Sep. 1981.
Fuchs:1999
J. J. Fuchs, “Multipath time-delay detection and estimation,” vol. 47, no. 1,
pp. 237–243, Jan. 1999.
Barbotin:2012
Y. Barbotin, A. Hormati, S. Rangan, and M. Vetterli, “Estimation of sparse
MIMO channels with common support,” vol. 60, no. 12, pp. 3705–3716, Dec.
2012.
Seelamantula:2014
C. S. Seelamantula and S. Mulleti, “Super-resolution reconstruction in
frequency-domain optical-coherence tomography using the
finite-rate-of-innovation principle,” vol. 62, no. 19, pp. 5020–5029, Oct.
2014.
Boufounos:2011
P. Boufounos, “Compressive sensing for over-the-air ultrasound,” in
Proc. of IEEE Intl. Conf. on Acoustics, Speech and
Sig. Proc. (ICASSP), May 2011.
Velten:2016
A. Velten, R. Raskar, D. Wu, B. Masia, A. Jarabo, C. Barsi, C. Joshi,
E. Lawson, M. Bawendi, and D. Gutierrez, “Imaging the propagation of light
through scenes at picosecond resolution,” Commun. ACM, vol. 59,
no. 9, pp. 79–86, Aug. 2016.
Shin:2016
D. Shin, F. Xu, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “Computational
multi-depth single-photon imaging,” Optics Express, vol. 24, no. 3,
p. 1873, Jan. 2016.
Bhandari:2014a
A. Bhandari, A. Kadambi, R. Whyte, C. Barsi, M. Feigin, A. Dorrington, and
R. Raskar, “Resolving multipath interference in time-of-flight imaging via
modulation frequency diversity and sparse regularization,” Optics
Letters, vol. 39, no. 6, pp. 1705–1708, Mar. 2014.
Kadambi:2013
A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and
R. Raskar, “Coded time of flight cameras: sparse deconvolution to address
multipath interference and recover time profiles,” ACM Trans.
Graphics, vol. 32, no. 6, pp. 1–10, Nov. 2013.
Fuchs:1994
J. J. Fuchs and H. Chuberre, “A deconvolution approach to source
localization,” vol. 42, no. 6, pp. 1462–1470, Jun. 1994.
Bhandari:2015
A. Bhandari, C. Barsi, and R. Raskar, “Blind and reference-free fluorescence
lifetime estimation via consumer time-of-flight sensors,” Optica,
vol. 2, no. 11, pp. 965–973, Nov. 2015.
Hibino:1997
K. Hibino, B. F. Oreb, D. I. Farrant, and K. G. Larkin, “Phase-shifting
algorithms for nonlinear and spatially nonuniform phase shifts,”
Jour.l of the Opt. Soc. of America A, vol. 14, no. 4, p. 918, Apr.
1997.
Hariharan:1987
P. Hariharan, B. F. Oreb, and T. Eiju, “Digital phase-shifting interferometry:
a simple error-compensating phase calculation algorithm,” Appl. Opt.,
vol. 26, no. 13, p. 2504, Jul. 1987.
Pan:2017
H. Pan, T. Blu, and M. Vetterli, “Towards generalized FRI sampling with an
application to source resolution in radioastronomy,” vol. 65, no. 4, pp.
821–835, Feb. 2017.
Stoica1997
P. Stoica and R. L. Moses, Introduction to Spectral Analysis.  Prentice Hall, Upper Saddle River, 1997, vol. 1.
Potts:2010
D. Potts and M. Tasche, “Parameter estimation for exponential sums by
approximate prony method,” Signal Processing, vol. 90, no. 5, pp.
1631–1642, May 2010.
Cadzow:1988
J. A. Cadzow, “Signal enhancement–A composite property mapping algorithm,”
vol. 36, no. 1, pp. 49–62, 1988.
|
http://arxiv.org/abs/1701.07758v1 | 20170126161817 | Semi-analytical model of the contact resistance in two-dimensional semiconductors | [
"Roberto Grassi",
"Yanqing Wu",
"Steven J. Koester",
"Tony Low"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
rgrassi@umn.edu
Department of Electrical and Computer Engineering, University of Minnesota, 200 Union St. SE, Minneapolis, MN 55455, USA
Wuhan National High Magnetic Field Center and School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074, China
Department of Electrical and Computer Engineering, University of Minnesota, 200 Union St. SE, Minneapolis, MN 55455, USA
Contact resistance is a severe performance bottleneck for electronic devices based on two-dimensional layered (2D) semiconductors, whose contacts are Schottky rather than Ohmic. Although there is general consensus that the injection mechanism changes from thermionic to tunneling with gate biasing, existing models tend to oversimplify the transport problem, by neglecting the 2D transport nature and the modulation of the Schottky barrier height, the latter being of particular importance in back-gated devices. In this work, we develop a semi-analytical model based on Bardeen's transfer Hamiltonian approach to describe both effects. Remarkably, our model is able to reproduce several experimental observations of a metallic behavior in the contact resistance, i.e., a decreasing resistance with decreasing temperature, occurring at high gate voltage.
Semi-analytical model of the contact resistance in two-dimensional semiconductors
Tony Low
=================================================================================
Introduction— 2D layered semiconducting materials, such as transition metal dichalcogenides (TMDs) and black phosphorus (BP), have many interesting electrical and optical properties <cit.>, but tend to form Schottky barriers (SB) at the interfaces with metal contacts, resulting in a large contact resistance that severely degrades the device performance <cit.>.
Thermionic emission <cit.> is commonly assumed when extracting the SB height from temperature-dependent current measurements of field-effect transistors (FETs). In <cit.>, it was pointed out that this procedure is correct only at the gate voltage corresponding to the flat-band condition. Considering an n-type device, for example, above the flat-band voltage, the conduction band edge in the channel is higher than at the interface with the contact, hence electrons traversing the channel see a larger barrier than the SB height. Below the flat-band voltage, tunneling starts to contribute and the thermionic emission theory loses validity. As a result, this can lead to unphysical negative SB heights <cit.>. Furthermore, experiments show that, as opposed to the insulating behavior of a SB contact, the two-terminal resistance <cit.> as well as contact resistance <cit.> can decrease with decreasing temperature at high gate voltage. The origin of this metallic behavior is debated and not yet clarified <cit.>.
Recently, a model for SB FETs has been proposed in <cit.> and applied to extract the SB height and bandgap of BP devices. This model assumes one-dimensional transport and a bias-independent SB height. However, in a typical geometry with a top contact to a multilayer 2D semiconductor as in the sketch of Fig. <ref>a, transport is inherently 2D. To be precise, due to quantization, the SB height Φ_1 to a 2D semiconductor should be defined as the difference between the edge E_1 of the first energy subband in the semiconductor and the Fermi level of the metal μ (see the schematic band profile in Fig. <ref>b for an n-type device). Transport occurring at energies above (below) E_1 is generally referred to as “thermionic” (“tunneling”). However, in the presence of a back gate, the subband edge and thus the SB height are expected to be modulated by the vertical electric field. When E_1 is lower than the bulk band edge at the interface with the metal, even the electrons traversing the junction at energies above E_1 see a tunneling barrier in the vertical (i.e., z) direction and a new transport regime arises.
In this paper, we present a semi-analytical model of the contact resistance to multilayer 2D semiconductors in this “vertical tunneling” regime. The model is based on a triangular barrier approximation of the vertical potential profile in the semiconductor underneath the contact. 2D transport is separated into a sequence of two 1D mechanisms: (i) quantum tunneling through the SB at the metal-to-semiconductor interface, followed by (ii) semiclassical “diffusive” transport across the semiconductor (the source of scattering being the in- and out-tunneling across the SB). The model is benchmarked against numerical solutions of the 2D quantum transport problem and employed to study the dependence of the contact resistance on vertical electric field and temperature. We show that, when the SB height is sufficiently lowered by the vertical electric field, contact resistance shows a metallic behavior with temperature, as observed in experiments. The model predicts a smooth transition from a thermionic-like regime at low electric field, where the tunneling barrier is almost transparent, to a true vertical tunneling regime at high electric field. In the former case, the extraction method of the SB height based on thermionic emission theory can still be applied.
Model— We consider a single planar junction of length L_x between a metal and a multi-layer 2D semiconductor with thickness a (Fig. <ref>a). A vertical electric field is created inside the semiconductor by the presence of a back gate. Let x and z be the longitudinal and vertical directions, respectively. The device is uniform in the y direction. We focus on the portion of the semiconductor covered by the metal (contact region) assuming that the uncovered part (channel) simply acts as a “reflectionless” contact or semi-infinite lead <cit.>. A current can flow as indicated in Fig. <ref>a.
We neglect hole transport and discuss only injection of electrons from the metal to the conduction band of the semiconductor. A simple single-valley effective mass Hamiltonian, with the same values of the effective masses m_x,y,z in the metal and in the semiconductor, is adopted. Such effective mass model has been shown to provide an accurate description of the out-of-plane quantization in multilayer BP <cit.>. For the case of multilayer TMDs, one should employ a more fundamental tight-binding model <cit.>. Whereas the form of the equations will be different, the general trends are not expected to change significantly. The bulk band edge profile is assumed to be uniform along the x direction and is approximated with a triangular barrier along the z direction, as shown in Fig. <ref>b, where the value of the band edge in the metal V_0 is chosen low enough so that it provides significant density of states at the Fermi level. Within this non-self-consistent approximation, which is valid at low carrier concentration, the magnitude of the vertical electric field F in the semiconductor is simply proportional to the voltage V_G applied between the back gate and the top metal:
F = V_G/a + (ϵ_s/ϵ_ox)t_ox ,
where the workfunctions of the two metals are taken to be equal, t_ox is the back oxide thickness, and ϵ_s and ϵ_ox are the dielectric constants of the semiconductor and oxide, respectively.
Due to vertical confinement, the energy spectrum in the semiconductor splits into a set of discrete 2D subbands. Within a triangular well approximation, the subband edges E_i (i positive integer) can be computed as <cit.>
E_i = - qF ( a - | ζ_i |/k_F) ,
where the energy reference is taken at the top of the barrier in Fig. <ref>b, q is the elementary electric charge, the wavevector k_F is defined as
k_F = ( 2 m_z qF/ħ^2)^1/3 ,
and ζ_i are the zeros of Airy's function, i.e., Ai(ζ_i)=0, which can be approximated as <cit.>
ζ_i ≈ - [ 3 π/8 (4 i-1) ]^2/3 .
We limit the discussion to the case E_i < 0. Indeed, the subband description loses validity above the barrier.
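A minimal numerical sketch of these expressions follows; the effective mass, field, and thickness are placeholder values, not parameters quoted from a device.

```python
import numpy as np

hbar, q, m0 = 1.054571817e-34, 1.602176634e-19, 9.1093837015e-31

def subband_edges(F, a, mz, n_sub=4):
    """Subband edges E_i [eV] from the triangular-well (Airy-zero) model;
    only values with E_i < 0 are meaningful in the present discussion."""
    kF = (2 * mz * q * F / hbar**2) ** (1 / 3)
    i = np.arange(1, n_sub + 1)
    zeta = (3 * np.pi / 8 * (4 * i - 1)) ** (2 / 3)   # |zeta_i|, approximated
    E = -q * F * (a - zeta / kF)                      # joules
    return E / q                                      # convert to eV

# Placeholder example: F = 0.1 V/nm, a = 6 nm, m_z = 0.2 m0.
print(subband_edges(0.1e9, 6e-9, 0.2 * m0))
```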
We assume that transport within the semiconductor can be described by a set of decoupled 1D Boltzmann's transport equations <cit.>, one for each subband, where the tunneling from the metal to the semiconductor and vice versa is included as a scattering mechanism. The corresponding relaxation time τ_i, or inverse of the probability rate that an electron originally in the k-space state (k_x,k_y) of the i-th subband tunnels into the metal, is computed according to Bardeen's transfer Hamiltonian theory <cit.>, which has been recently applied to describe tunneling in vertical heterostructures of 2D materials <cit.> and electron-hole bilayer tunnel FETs <cit.>. In the limit of large L_x, we get
1/τ_i = (1/h) (ħ^2 k_F^2 / (2 m_z Ai'^2(ζ_i))) (4√(-E_i (E_i - V_0)) / (-V_0)) e^-2γ_0 for V_0 < E_i < 0 , and 1/τ_i = 0 otherwise ,
where h is Planck's constant, ħ=h/(2π), γ_0 is defined as
γ_0 = 2/3ζ_0^3/2 ,
ζ_0 = - k_F E_i/qF = k_F a - | ζ_i | ,
and Ai'(ζ_i) is the derivative of Airy's function evaluated at ζ_i, which can be approximated as <cit.>
|Ai'(ζ_i)| ≈1/√(π)[ 3 π/8 (4 i-1) ]^1/6 .
Note that τ_i is independent of both k_x and k_y. The tunneling current is computed from the x-dependent distribution function of each subband, which is obtained by solving Boltzmann's transport equation with τ_i as the scattering relaxation time and with appropriate boundary conditions. In particular, we assume that the electrons are backscattered at the left end of the contact region. Differentiating the tunneling current with respect to the applied bias V_D and taking the limit V_D ≪ k_B T/q (k_B is Boltzmann's constant and T the temperature) gives us the low bias conductance G or inverse of contact resistance (per unit width). We obtain the semi-analytical expression
G = 2q^2/h∫_-∞^∞ d ε T(ε) (-∂ F_0/∂ε) ,
F_0(ε) = √(m_y k_B T / (2πħ^2)) ℱ_-1/2((μ - ε)/(k_B T)) ,
where ε is the total energy for electrons with k_y=0 and ℱ_-1/2 the Fermi-Dirac integral of order -12. The total transmission function T is defined as
T(ε) = ∑_i T_i(ε) ,
with the transmission probability T_i of each subband given by
T_i(ε) = {[ 0, ε < E_i; 1 - e^-2 L_x/λ_i, ε > E_i ]. ,
where λ_i = |v_x| τ_i is the mean free path related to tunneling and |v_x| = √( 2 (ε-E_i)/m_x ) is the longitudinal carrier velocity. In the Supporting information, we provide a detailed derivation of the model.
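To make the full chain concrete, the sketch below evaluates T(ε) and the conductance integral numerically, using a change of variables for the order −1/2 Fermi-Dirac integral and finite differences for −∂F_0/∂ε. All material and geometry parameters are placeholders chosen to exercise the formulas, not values fitted to a device.

```python
import numpy as np

hbar, q = 1.054571817e-34, 1.602176634e-19
kB, h, m0 = 1.380649e-23, 6.62607015e-34, 9.1093837015e-31
trapz = lambda f, x: float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

# Placeholder parameters (energies in joules):
mx = my = mz = 0.2 * m0
F, a, Lx, Tmp = 0.08e9, 6e-9, 100e-9, 300.0
V0, mu = -0.5 * q, -0.25 * q

kF = (2 * mz * q * F / hbar**2) ** (1 / 3)

def subband(i):
    """Return (E_i, 1/tau_i); the rate vanishes outside V0 < E_i < 0."""
    zeta = (3 * np.pi / 8 * (4 * i - 1)) ** (2 / 3)          # |zeta_i|
    E = -q * F * (a - zeta / kF)
    if not (V0 < E < 0):
        return E, 0.0
    dAi2 = (3 * np.pi / 8 * (4 * i - 1)) ** (1 / 3) / np.pi  # |Ai'(zeta_i)|^2
    gamma0 = (2 / 3) * (kF * a - zeta) ** 1.5
    rate = (1 / h) * (hbar**2 / (2 * mz)) * kF**2 / dAi2 \
           * 4 * np.sqrt(-E * (E - V0)) / (-V0) * np.exp(-2 * gamma0)
    return E, rate

def transmission(eps, n_sub=4):
    Tt = 0.0
    for i in range(1, n_sub + 1):
        E, rate = subband(i)
        if eps > E and rate > 0.0:
            lam = np.sqrt(2 * (eps - E) / mx) / rate         # lambda_i = |v_x| tau_i
            Tt += 1.0 - np.exp(-2 * Lx / lam)
    return Tt

def F0(eps):
    """sqrt(m_y kB T / (2 pi hbar^2)) * F_{-1/2}((mu - eps)/(kB T))."""
    eta = (mu - eps) / (kB * Tmp)
    u = np.linspace(0.0, 30.0, 4000)                         # substitution x = u^2
    fd = (2 / np.sqrt(np.pi)) * trapz(1.0 / (1.0 + np.exp(np.minimum(u**2 - eta, 700.0))), u)
    return np.sqrt(my * kB * Tmp / (2 * np.pi * hbar**2)) * fd

eps = np.linspace(V0, 0.0, 1500)
F0v = np.array([F0(e) for e in eps])
mdF0 = -np.gradient(F0v, eps[1] - eps[0])                    # -dF0/deps > 0
Teps = np.array([transmission(e) for e in eps])
G = (2 * q**2 / h) * trapz(Teps * mdF0, eps)                 # conductance per width [S/m]
print(G)
```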
Results— In Fig. <ref> we plot the tunneling rate 1/τ_i of the first two subbands as a function of electric field and semiconductor thickness. The predicted tunneling rate goes to zero at small F or a because the Bardeen model does not account for the above-the-barrier regime at E_i>0. At high electric field or large semiconductor thickness, the exponential term in (<ref>) is dominating. In this regime, an increase of F or a results in a decrease of the scattering rate 1/τ_i. This can be understood by noting that, since both k_x and k_y are conserved in the tunneling process, an electron can tunnel from the metal to the semiconductor only if its vertical energy is equal to E_i. However, according to (<ref>), E_i shifts to lower energies with increasing F or a. Because of that shift, the tunneling distance, which is equal to |E_i|/(qF) at the vertical energy E_i, becomes longer as F or a increase.
In order to benchmark the proposed model, we solve numerically the 2D Schrödinger equation with open boundary conditions using the Green function (GF) method <cit.> and assuming the same non-self-consistent triangular potential profile as in Fig. <ref>b. Details on the GF calculation can be found in the Supporting information. Fig. <ref> shows the plot of the total transmission function T(ε) computed with the analytical expression in (<ref>)–(<ref>) and with GF for different sets of parameter values. The two models are in good general agreement. The transmission function increases by one at each energy corresponding to a subband edge E_i, indicating a resonant tunneling regime, and shows a decaying behavior between two successive subband edges. Indeed, different energies correspond to different k_x states. Since the length of the contact L_x is finite and the transfer length or average distance traveled by an electron in the semiconductor before tunneling into the metal is equal to the mean free path λ_i = |v_x| τ_i, the probability of escaping into the metal is larger for the states closer to the subband edge which have smaller velocity. As shown in Fig. <ref>, increasing the contact length tends to raise the transmission probability of each single subband to unity because the ratio L_x/λ_i between the contact length and the average distance before tunneling increases, which means that the electrons have more chances to enter the contact. The shift of E_i to lower energy as a or F increase is clearly seen from the shift of the transmission peaks in Fig. <ref>b and c, respectively. This is accompanied by a narrowing of the peaks, which is related to the decrease of 1/τ_i discussed above.
It should be noted that, with reference to the generic subband of index i, our model predicts no vertical tunneling contribution at energies below E_i (see Eq. <ref>). This process could be possible in the case of a realistic band bending between the channel and the contact region. However, since tunneling is decreasing exponentially with the tunneling distance, the effect would be concentrated at the contact edge. Therefore, if E_i is sufficiently close to the metal Fermi level μ, the contribution from energies below E_i (lateral tunneling) is negligible compared to energies above E_i (vertical tunneling) because of the large surface-to-edge ratio of the contact.
Fig. <ref>a plots G, obtained by numerically computing the energy integral in (<ref>) and resolved for the first two subbands, as a function of F and temperature T. It can be seen that G is a non-monotonic function of the electric field. This is consistent with our previous observations: the transmission probability reduces with increasing F because the tunneling distance increases. Fig. <ref>a shows that, at high electric field before the second subband starts to contribute significantly, the derivative ∂ G/∂ T is negative, i.e., contact resistance decreases with decreasing temperature. This has to do with the factor -∂ F_0/∂ε, which is plotted in Fig. <ref>b. It can be proved that its derivative with respect to temperature changes sign at the energy ε_0 ≈μ+0.857 k_B T, which has only a weak temperature dependence (ε_0=-0.243 and -0.228 eV at T=100 and 300 K, respectively) and appears as a crossover point in Fig. <ref>b. As the transmission function shifts to lower energy with increasing F (compare the plots at F=0.08 and 0.1 V/nm in Fig. <ref>b), more contribution to the integral in (<ref>) comes from the energy range where ∂(-∂ F_0/∂ε)/∂ T < 0 and eventually leads to ∂ G/∂ T<0.
The model in (<ref>) allows for an analytical solution in two limiting cases. In order to simplify the discussion, we assume that only the first subband contributes to transport. Note that the energy range relevant for transport goes from E_1 to few k_B T's above E_1 or μ, whichever is maximum. If L_x ≫λ_1 in this energy range, it follows that T_1(ε)≈ 1 (i.e., an almost transparent barrier) and (<ref>) simplifies to
G ≈2q^2/h F_0(E_1) .
If, in addition, Φ_1=E_1-μ≫ k_B T, then (<ref>) further reduces to
G ∝ T^1/2exp( -Φ_1/k_B T) ,
which is the expression of the thermionic emission theory for a 2D system <cit.>. The prefactor is T^1/2 instead of T^3/2 because we are considering the low bias limit V_D ≪ k_B T/q. Expression (<ref>) implies that the SB height Φ_1 can be extracted from the slope of ln(G/T^1/2) vs 1/T. On the other hand, if L_x ≪λ_1 in most of the energy window for transport, one can derive (see Supporting information)
G ≈2q^2/h√(m_x m_y)/ħL_x/τ_1 f_0(E_1) ,
where f_0(E) = {exp[(E-μ)/(k_B T)]+1}^-1 is the Fermi-Dirac function. Fig. <ref>c compares the Arrhenius plot of G/T^1/2 computed with the rigorous model in (<ref>) and the approximated expressions in (<ref>) and (<ref>). E_1 is calculated according to (<ref>) in all three cases. Similar Arrhenius plots are commonly used to extract the SB height in experiments <cit.>. For the chosen set of parameter values, approximation (<ref>) is valid up to F≈ 0.08 V/nm. At higher electric field, the transmission function becomes increasingly peaked around the subband edge (see Fig. <ref>b) and (<ref>) becomes a better approximation. A positive slope, or metallic behavior, is predicted at high electric field similar to what has been reported in experiments <cit.>. In Fig. <ref>d, we plot the SB height obtained by fitting the data of the rigorous model in (<ref>) with the thermionic expression (<ref>), compared with the actual value of Φ_1=E_1-μ from (<ref>). It is seen that the extraction method based on the thermionic emission theory can provide good results at low electric field values, where the tunneling barrier is almost transparent. In the high-field regime, a fitting based on (<ref>) would provide a more physical result.
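The thermionic extraction procedure referred to above reduces to a linear fit, as the sketch below shows on synthetic data generated from the low-field expression with Φ_1 = 0.10 eV (an arbitrary test value).

```python
import numpy as np

kB_eV = 8.617333262e-5               # Boltzmann constant [eV/K]

def extract_barrier(T, G):
    """Thermionic fit: slope of ln(G / T^(1/2)) vs 1/T equals -Phi_1 / kB."""
    slope, _ = np.polyfit(1.0 / T, np.log(G / np.sqrt(T)), 1)
    return -slope * kB_eV            # Phi_1 in eV

# Synthetic check: G ~ T^(1/2) exp(-Phi_1 / (kB T)) with Phi_1 = 0.10 eV.
T = np.linspace(150.0, 300.0, 7)
G = np.sqrt(T) * np.exp(-0.10 / (kB_eV * T))
print(extract_barrier(T, G))         # -> 0.10
```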
We conclude by noting that the model presented in this work can be easily extended to account for a finite carrier mobility in the semiconductor by introducing an additional relaxation time τ_s (and a corresponding mean free path λ_s = |v_x| τ_s) related to elastic scattering. The main effect of scattering would be, for each subband, a shorter transfer length and a transmission probability that does not saturate to unity in the limit of a long contact length. In the regime when only one subband is populated, the model could also be extended to include self-consistent electrostatics using the variational approach in <cit.>.
Conclusions— In summary, we have demonstrated that the metallic behavior of the contact resistance observed in recent experiments can be explained by taking into account the modulation of the vertical tunneling due to the SB lowering with increasing electric field in back-gated devices. To the best of our knowledge, this transport regime has not been discussed before. The model also suggests a non-monotonic behavior of the contact resistance with respect to vertical electric field and semiconductor thickness. Our semi-analytical model provides a reasonable description of contact resistance in 2D semiconductors and could be useful for contact engineering in future 2D electronics.
[novoselov2005two] K. Novoselov, D. Jiang, F. Schedin, T. Booth, V. Khotkevich, S. Morozov, and A. Geim, Proceedings of the National Academy of Sciences of the United States of America 102, 10451 (2005).
[wang2012electronics] Q. H. Wang, K. Kalantar-Zadeh, A. Kis, J. N. Coleman, and M. S. Strano, Nature Nanotechnology 7, 699 (2012).
[xu2014spin] X. Xu, W. Yao, D. Xiao, and T. F. Heinz, Nature Physics 10, 343 (2014).
[fiori2014electronics] G. Fiori, F. Bonaccorso, G. Iannaccone, T. Palacios, D. Neumaier, A. Seabaugh, S. K. Banerjee, and L. Colombo, Nature Nanotechnology 9, 768 (2014).
[low2016polaritons] T. Low, A. Chaves, J. D. Caldwell, A. Kumar, N. X. Fang, P. Avouris, T. F. Heinz, F. Guinea, L. Martin-Moreno, and F. Koppens, Nature Materials (2016).
[koppens2014photodetectors] F. Koppens, T. Mueller, P. Avouris, A. Ferrari, M. Vitiello, and M. Polini, Nature Nanotechnology 9, 780 (2014).
[sun2016optical] Z. Sun, A. Martinez, and F. Wang, Nature Photonics 10, 227 (2016).
[das2013does] S. Das and J. Appenzeller, Nano Letters 13, 3396 (2013).
[du2014device] Y. Du, H. Liu, Y. Deng, and P. D. Ye, ACS Nano 8, 10035 (2014).
[haratipour2015black] N. Haratipour, M. C. Robbins, and S. J. Koester, IEEE Electron Device Letters 36, 411 (2015).
[sze2006physics] S. M. Sze and K. K. Ng, Physics of Semiconductor Devices (John Wiley & Sons, 2006).
[allain2015electrical] A. Allain, J. Kang, K. Banerjee, and A. Kis, Nature Materials 14, 1195 (2015).
[yu2014graphene] L. Yu, Y.-H. Lee, X. Ling, E. J. Santos, Y. C. Shin, Y. Lin, M. Dubey, E. Kaxiras, J. Kong, H. Wang, et al., Nano Letters 14, 3055 (2014).
[avsar2015air] A. Avsar, I. J. Vera-Marun, J. Y. Tan, K. Watanabe, T. Taniguchi, A. H. Castro Neto, and B. Özyilmaz, ACS Nano 9, 4138 (2015).
[liu2015toward] Y. Liu, H. Wu, H.-C. Cheng, S. Yang, E. Zhu, Q. He, M. Ding, D. Li, J. Guo, N. O. Weiss, et al., Nano Letters 15, 3030 (2015).
[cui2015multi] X. Cui, G.-H. Lee, Y. D. Kim, G. Arefe, P. Y. Huang, C.-H. Lee, D. A. Chenet, X. Zhang, L. Wang, F. Ye, et al., Nature Nanotechnology 10, 534 (2015).
[radisavljevic2013mobility] B. Radisavljevic and A. Kis, Nature Materials 12, 815 (2013).
[penumatcha2015analysing] A. V. Penumatcha, R. B. Salazar, and J. Appenzeller, Nature Communications 6 (2015).
[datta1997electronic] S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, 1997).
[zhang2017infrared] G. Zhang, S. Huang, A. Chaves, C. Song, V. O. Özçelik, T. Low, and H. Yan, Nature Communications 8, 14071 (2017).
[kang2016unified] J. Kang, L. Zhang, and S.-H. Wei, The Journal of Physical Chemistry Letters 7, 597 (2016).
[miller2008quantum] D. A. Miller, Quantum Mechanics for Scientists and Engineers (Cambridge University Press, 2008).
[abramowitz1964handbook] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, No. 55 (Courier Corporation, 1964).
[rudan2015physics] M. Rudan, Physics of Semiconductor Devices (Springer, 2015).
[bardeen1961tunnelling] J. Bardeen, Physical Review Letters 6, 57 (1961).
[harrison1961tunneling] W. A. Harrison, Physical Review 123, 85 (1961).
[duke1969tunneling] C. B. Duke, Tunneling in Solids, Vol. 10 (Academic Press, 1969).
[feenstra2012single] R. M. Feenstra, D. Jena, and G. Gu, Journal of Applied Physics 111, 043711 (2012).
[britnell2013resonant] L. Britnell, R. Gorbachev, A. Geim, L. Ponomarenko, A. Mishchenko, M. Greenaway, T. Fromhold, K. Novoselov, and L. Eaves, Nature Communications 4, 1794 (2013).
[alper2013quantum] C. Alper, L. Lattanzio, L. De Michielis, P. Palestri, L. Selmi, and A. M. Ionescu, IEEE Transactions on Electron Devices 60, 2754 (2013).
[agarwal2014engineering] S. Agarwal, J. T. Teherani, J. L. Hoyt, D. A. Antoniadis, and E. Yablonovitch, IEEE Transactions on Electron Devices 61, 1599 (2014).
[anwar1999effects] A. Anwar, B. Nabet, J. Culp, and F. Castro, Journal of Applied Physics 85, 2663 (1999).
[anugrah2015determination] Y. Anugrah, M. C. Robbins, P. A. Crowell, and S. J. Koester, Applied Physics Letters 106, 103108 (2015).
[stern1972self] F. Stern, Physical Review B 5, 4891 (1972).
Supporting information: Semi-analytical model of the contact resistance in two-dimensional semiconductors
§ MODEL OF VERTICAL TUNNELING
We compute the tunneling rate in the limit of an infinite contact length and assume that the electric potential does not depend on the longitudinal position x, which implies translational invariance along x. Let z=0 be the vertical position of the metal-to-semiconductor interface. We consider a simple effective mass Hamiltonian
ℋ = -ħ^2/2∇·m̂^-1∇ + E_c(z) = 𝒯 + E_c(z), m̂ = (
[ m_x 0 0; 0 m_y 0; 0 0 m_z ]),
where the values of the effective masses m_x,y,z are taken to be the same in the metal and in the semiconductor and the conduction band edge profile E_c(z) is modeled as a triangular barrier (F > 0 is the magnitude of the vertical electric field, see Fig. <ref>a):
E_c(z) = {[ V_0, z< 0; - q F z, 0 <z< a; ∞, z > a ].
Note that this is different from the Fowler-Nordheim field-emission problem <cit.> because of the presence of the hard wall at z=a. Suppose that an electron is launched from z<0 towards the interface. The electron wavefunction will be totally reflected at z=a, resulting in a reflection coefficient, measured as the ratio between the probability currents of reflected and incident waves, identically equal to one at all energies. Does it mean that the tunneling probability is zero? The Bardeen Transfer Hamiltonian method <cit.> provides a way to overcome this difficulty: the tunneling process across the barrier is thought of as a scattering event between states localized on different sides of the junction and the corresponding transition probability is computed through Oppenheimer's version of time-dependent perturbation theory <cit.>.
More precisely, ℋ is taken as the perturbed Hamiltonian acting in the time interval 0<t<t_P. For t<0, the unperturbed Hamiltonian must be identified with a Hamiltonian ℋ_L that approximates well the true Hamiltonian ℋ on the metal side of the junction but whose eigenfunctions decay in the semiconductor. We take ℋ_L = 𝒯 + E_cL(z) with E_cL(z) a potential step (Fig. <ref>b):
E_cL(z) = {[ V_0, z< 0; 0, z > 0 ].
For t>t_P, one must choose a different unperturbed Hamiltonian ℋ_R which, conversely, approximates well ℋ on the semiconductor side of the junction but whose eigenfunctions decay in the metal. We take ℋ_R = 𝒯 + E_cR(z) with E_cR(z) a triangular well (Fig. <ref>c):
E_cR(z) = {[ - q F z, z < a; ∞, z > a ].
It is assumed that prior to the perturbation the electron wavefunction coincides with an eigenfunction ψ_L,α of ℋ_L. Since this is not an eigenstate of ℋ, the electron wavefunction will evolve during the time interval 0<t<t_P according to the time-dependent Schrödinger equation. If t_P is sufficiently large, the probability that the electron is subsequently found in the eigenstate ψ_R,β of ℋ_R at t>t_P, is, to first order and per unit t_P,
P_αβ = 2π/ħ| ⟨ψ_R,β | ℋ-ℋ_L | ψ_L,α⟩|^2 δ( E_L,α - E_R,β) ,
where E_L,α and E_R,β are the eigenvalues corresponding to the initial state ψ_L,α and final state ψ_R,β, respectively, and δ is Dirac's delta function <cit.>. The conservation of energy is related to the perturbation being constant in time. In (<ref>), it is assumed that each set of eigenfunctions ψ_L,α and ψ_R,β is discrete and orthonormal. We consider a rectangular domain with finite sides L_x and L_y in the plane parallel to the junction and prescribe the periodic boundary conditions ψ_L,α/R,β(x=0,y,z)=ψ_L,α/R,β(x=L_x,y,z), ψ_L,α/R,β(x,y=0,z)=ψ_L,α/R,β(x,y=L_y,z). In addition, we consider a finite length L_z of the metal region in the z direction with the hard-wall boundary condition ψ_L,α(x,y,z=-L_z)=0. This way, both energy spectra are discrete and the corresponding eigenfunctions normalizable. Later, we will take the limit as L_x,L_y,L_z go to infinity in order to recover the continuous case. It should be noted that, contrary to the standard time-dependent perturbation theory <cit.>, the matrix element in (<ref>) must be computed between eigenstates of different Hamiltonians. Also, the perturbation Hamiltonian must be evaluated with respect to the initial Hamiltonian. The validity of (<ref>) rests on the assumption that the two sets of eigenstates of ℋ_L and ℋ_R are "almost orthogonal" to each other, in particular that ⟨ψ_R,β | ψ_L,α⟩≪ 1 <cit.>. Similarly, one has for the probability rate of the inverse transition
P_βα = 2π/ħ| ⟨ψ_L,α | ℋ-ℋ_R | ψ_R,β⟩|^2 δ( E_L,α - E_R,β) = P_αβ ,
where the last equality follows from the delta function and the Hermiticity of the various Hamiltonians.
From (<ref>)-(<ref>) we get
ℋ = {[ ℋ_L, z<0; ℋ_R, z>0 ].
which means that ℋ satisfies the separability property of Bardeen's model Hamiltonian, i.e., that ℋ - ℋ_L ≠ 0 only in regions of space where ℋ - ℋ_R ≡ 0 <cit.> [Bardeen's theory is often introduced by writing the Hamiltonian in the form ℋ = ℋ_L + ℋ_R + ℋ_T, with ℋ_T being the "transfer" Hamiltonian, despite the fact that Bardeen himself did not make use of such a decomposition in his original paper <cit.>. In our case, we get from (<ref>)
ℋ_T = {[ -ℋ_R, z<0; -ℋ_L, z>0 ].
but this Hamiltonian does not correspond to either of the perturbation Hamiltonians that appear in (<ref>) or (<ref>). See also the discussion in <cit.>.]. Since ℋ - ℋ_L ≡ 0 for z<0, the matrix element in (<ref>) can be written as
⟨ψ_R,β | ℋ-ℋ_L | ψ_L,α⟩ = ∫_0^L_x dx ∫_0^L_y dy ∫_-∞^∞ dz ψ_R,β^* ( ℋ-ℋ_L ) ψ_L,α
= ∫_Ω_Rψ_R,β^* ( ℋ-ℋ_L ) ψ_L,α d^3 r ,
where 𝐫=(x,y,z) and Ω_R = {𝐫;0<x<L_x,0<y<L_y,0<z<∞}. Noting that ℋ - ℋ_R ≡ 0 for z>0, we can get the symmetric expression
⟨ψ_R,β | ℋ-ℋ_L | ψ_L,α⟩ = ∫_Ω_R[ ψ_R,β^* ( ℋ-ℋ_L ) ψ_L,α - ψ_L,α( ℋ-ℋ_R ) ψ_R,β^* ] d^3 r
= ∫_Ω_R[ ψ_R,β^* ( 𝒯-E_L,α) ψ_L,α - ψ_L,α( 𝒯-E_R,β) ψ_R,β^* ] d^3 r .
Due to the delta function in (<ref>), we are only interested in the case E_L,α = E_R,β, for which
⟨ψ_R,β | ℋ-ℋ_L | ψ_L,α⟩ = ∫_Ω_R[ ψ_R,β^* 𝒯ψ_L,α - ψ_L,α𝒯ψ_R,β^* ] d^3 r .
Applying Green's theorem, we finally get
⟨ψ_R,β | ℋ-ℋ_L | ψ_L,α⟩ = -iħ∫_Σ_R𝐧·𝐉_βα d^2 r ,
where Σ_R is the surface of Ω_R, 𝐧 is the unit vector normal to Σ_R pointing in the outward direction, and
𝐉_βα = -iħ/2m̂^-1[ ψ_R,β^* ∇ψ_L,α - ψ_L,α∇ψ_R,β^* ]
is the matrix element of the probability current density operator between the states ψ_L,α and ψ_R,β.
Let us now gather all the ingredients that we need to calculate (<ref>). The eigenfunctions and corresponding eigenvalues of ℋ_L are <cit.>
ψ_L,α(𝐫) ≡ψ_L(𝐤_L;𝐫) = 1/√(L_x L_y) e^i (k_xL x + k_yL y) b ×{[ 0, z<-L_z; cos(k_z z + φ), -L_z<z<0; cos(φ)e^-κ z, z>0 ].
E_L,α ≡ E_L(𝐤_L) = ħ^2/2( k_xL^2/m_x + k_yL^2/m_y) + E_z ,
E_z = V_0 + ħ^2 k_z^2/2 m_z ,
κ = √(- 2 m_z E_z)/ħ ,
φ = arctan(κ/k_z) ,
where 𝐤_L = (k_xL,k_yL,k_z) and it is assumed that E_z < 0 [We limit the discussion to the case E_z < 0 because (<ref>) loses validity if ⟨ψ_R,β | ψ_L,α⟩ is not a small number.].
Because of the periodic boundary conditions, the transverse components of the wavevector are quantized as k_xL = 2 π l/L_x, k_yL = 2 π m/L_y (l,m integers). As for k_z, the allowed values are the roots of the transcendental equations k_z L_z - π (n-1/2) = φ in the interval 0 < k_z <√(-2 m_z V_0)/ħ. For large L_z, we have k_z ≈π n/L_z (n positive integer). The constant b can be obtained from the normalization condition
1 = |b|^2 [ ∫_-L_z^0cos^2(k_z z + φ) dz + cos^2(φ) ∫_0^∞ e^-2 κ z dz ] = |b|^2/2( L_z+1/κ) ≈ |b|^2 L_z/2 ,
where only the leading term in L_z has been kept. Thus, up to an unimportant phase, b = √(2/L_z).
The solutions of the eigenvalue problem of ℋ_R are <cit.>
ψ_R,β(𝐫) ≡ψ_R,i(𝐤_∥ R; 𝐫) = 1/√(L_x L_y) e^i (k_xR x + k_yR y) c ×{[ Ai(ζ), z<a; 0, z>a ].
E_R,β ≡ E_R,i(𝐤_∥ R) = ħ^2/2( k_xR^2/m_x + k_yR^2/m_y) + E_i ,
E_i = - qF ( a - | ζ_i |/k_F) ,
ζ = -k_F ( z+E_i/qF),
k_F = ( 2 m_z qF/ħ^2)^1/3 ,
where 𝐤_∥ R = (k_xR,k_yR), Ai is Airy's function and ζ_i are its zeros, which can be approximated as (i positive integer) <cit.>
ζ_i ≈ - [ 3 π/8 (4 i-1) ]^2/3 .
The constant c can be obtained from the normalization condition
1 = |c|^2 ∫_-∞^aAi^2(ζ) dz = |c|^2/k_F∫_ζ_i^∞Ai^2(ζ) dζ ,
where the last integral can be evaluated using integration by parts and the fact that Ai is a solution of Airy's equation Ai” = ζAi (the prime symbol indicates derivative with respect to ζ):
∫_ζ_i^∞Ai^2(ζ) dζ = - ∫_ζ_i^∞ 2 Ai(ζ) Ai'(ζ) ζ dζ = - ∫_ζ_i^∞ 2 Ai'(ζ) Ai”(ζ) dζ = Ai'^2(ζ_i) .
Therefore, c = √(k_F)/|Ai'(ζ_i)|, in which we can use the approximated expression <cit.>
|Ai'(ζ_i)| ≈1/√(π)[ 3 π/8 (4 i-1) ]^1/6 .
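These asymptotic expressions are easy to test against exact values; the short Python sketch below (ours, for illustration only) compares them with the Airy zeros computed by scipy:

# Numerical check of the asymptotics for zeta_i and |Ai'(zeta_i)|.
import numpy as np
from scipy.special import ai_zeros

n = 10
a, ap, ai_vals, aip = ai_zeros(n)       # a: zeros of Ai, aip: Ai' at those zeros
i = np.arange(1, n + 1)
zeta_approx = -((3 * np.pi / 8) * (4 * i - 1)) ** (2 / 3)
aip_approx = ((3 * np.pi / 8) * (4 * i - 1)) ** (1 / 6) / np.sqrt(np.pi)
print(np.abs(a - zeta_approx).max())                 # already ~1e-3 at i = 1
print(np.abs(np.abs(aip) - aip_approx).max())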
The surface Σ_R in (<ref>) is made up of six faces. By inserting (<ref>) and (<ref>) into (<ref>), it can be shown that the integrals over the two faces at x=0 and x=L_x, as well as the integrals over the two faces at y=0 and y=L_y, cancel each other out exactly [For example, J_x,βα (x=L_x,y,z)- J_x,βα (x=0,y,z) ∝ e^i (k_xL -k_xR) L_x - 1=0 because of the periodic boundary conditions.]. The integral over the face at z=∞ is also zero because ψ_L,α is vanishingly small. We are only left with the integral over the face at z=0:
⟨ψ_R,β | ℋ-ℋ_L | ψ_L,α⟩ = ħ^2/2 m_z√(2/L_z)√(k_F)/|Ai'(ζ_i)|cos(φ) [ Ai(ζ) d/dz e^-κ z - e^-κ zd/dzAi(ζ) ]_z=0
×1/L_x∫_0^L_x e^i (k_xL -k_xR) x dx 1/L_y∫_0^L_y e^i (k_yL -k_yR) y dy
= - ħ^2/2 m_z√(2/L_z)√(k_F)/|Ai'(ζ_i)|cos(φ) [ κAi(ζ_0) - k_F Ai'(ζ_0) ] δ_k_xL,k_xRδ_k_yL,k_yR ,
where
ζ_0 ≡ζ(z=0) = - k_F E_i/qF = k_F a - |ζ_i|
and δ is Kronecker's delta function. Conservation of transverse momentum is a consequence of the translational symmetry along x and y. Combined with energy conservation, it implies that E_z = E_i and thus ζ_0 = ( κ/ k_F )^2. Assuming ζ_0 ≫ 1 (which is consistent with ⟨ψ_R,β | ψ_L,α⟩≪ 1), we can substitute in (<ref>) the asymptotic expressions
Ai(ζ_0) ≈e^-γ_0/2 √(π)ζ_0^1/4 ,
Ai'(ζ_0) ≈ - ζ_0^1/4 e^-γ_0/2 √(π) ,
where γ_0 = (2/3) ζ_0^3/2 <cit.>, to get
⟨ψ_R,β | ℋ-ℋ_L | ψ_L,α⟩ = - ħ^2/2 m_z√(2/L_z)√(k_F)/|Ai'(ζ_i)|cos(φ) [ κ + k_F √(ζ_0)] e^-γ_0/2 √(π)ζ_0^1/4δ_k_xL,k_xRδ_k_yL,k_yR
= - ħ^2/2 m_z√(2/L_z)k_F/|Ai'(ζ_i)|2 k_z √(κ)/√(k_z^2+κ^2)e^-γ_0/2 √(π)δ_k_xL,k_xRδ_k_yL,k_yR .
Finally, plugging (<ref>) into (<ref>), we obtain
P_αβ = 1/h( ħ^2/2 m_z)^22 π/L_zk_F^2/Ai'^2(ζ_i)4 k_z^2 κ/k_z^2+κ^2 e^-2γ_0δ_k_xL,k_xRδ_k_yL,k_yRδ( E_L,α - E_R,β) .
§ MODEL OF LONGITUDINAL DIFFUSION
Suppose that the states in the metal (L) are populated according to a Fermi-Dirac distribution with Fermi level μ_L:
f_L(E_L,α) = 1/exp( E_L,α-μ_L/k_B T) + 1
with k_B Boltzmann's constant and T the temperature. As for the semiconductor, we cannot assume that the states are in equilibrium because a current has to flow in the x direction as shown in Fig. 1 of the main text. In order to compute the population of such states, we assume semiclassical diffusive transport and make use of Boltzmann's transport equation <cit.>.
Let f_i(x,𝐤_∥ R) be the distribution function in the four-dimensional phase space associated with the i-th subband [f_i is independent of y because of the translational symmetry along y.]. Under the assumption that the electric potential is uniform along x, Boltzmann's equation reads
v_x ∂ f_i/∂ x = C, 0<x<L_x
where v_x = ħ k_xR/m_x is the longitudinal carrier velocity and the transitions from the metal to the semiconductor and vice versa due to vertical tunneling are included through a collision term C [Other types of scattering, which could be responsible for a finite carrier mobility in the semiconductor, are here neglected.]:
C = ∑_α f_L(E_L,α) P_αβ ( 1-f_i) - f_i P_βα[ 1 - f_L(E_L,α) ]
= ∑_α P_αβ[ f_L(E_L,α) - f_i ] .
Note that expression (<ref>) takes into account Pauli's exclusion principle. Using (<ref>), we get
C = [ f_L(E_R,β) - f_i ] ∑_α P_αβ = f_L(E_R,β) - f_i /τ_i ,
where the relaxation time τ_i is defined as
1/τ_i = ∑_α P_αβ = ∑_k_z1/h( ħ^2/2 m_z)^22 π/L_zk_F^2/Ai'^2(ζ_i)4 k_z^2 κ/k_z^2+κ^2 e^-2γ_0δ( E_z - E_i ) .
Going to the limit of large L_z, we can replace
∑_k_z→∫L_z/π d k_z
so that
1/τ_i = ∫_0^√(-2 m_z V_0)/ħ d k_z 1/h( ħ^2/2 m_z)^2 2 k_F^2/Ai'^2(ζ_i)4 k_z^2 κ/k_z^2+κ^2 e^-2γ_0δ( E_z - E_i )
and, with the change of variables k_z → E_z,
1/τ_i = ∫_V_0^0 d E_z 1/hħ^2/2 m_zk_F^2/Ai'^2(ζ_i)4 √(-E_z (E_z - V_0))/-V_0 e^-2γ_0δ( E_z - E_i )
= {[ 1/hħ^2/2 m_zk_F^2/Ai'^2(ζ_i)4 √(-E_i (E_i - V_0))/-V_0 e^-2γ_0, V_0 < E_i < 0; 0, otherwise ].
Besides the subband index i, the relaxation time depends on the parameters m_z, F, a, and V_0. It can be shown that, replacing the eigenfunctions (<ref>) of ℋ_L by their WKB <cit.> approximation
ψ_L,α^WKB(𝐫) = 1/√(L_x L_y) e^i (k_xL x + k_yL y)√(2/L_z)×{[ 0, z<-L_z; cos(k_z z + π/4), -L_z<z<0; 1/2√(k_z/κ) e^-κ z, z>0 ].
the expression of τ_i simplifies to
1/τ_i^WKB = {[ 1/hħ^2/2 m_zk_F^2/Ai'^2(ζ_i) e^-2γ_0, V_0 < E_i < 0; 0, otherwise ].
where V_0 appears only as an energy cut-off. This last formulation, which does not depend on the precise bandstructure of the metal, could be useful for treating injection from the metal to the valence band of the semiconductor.
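As an illustration, the WKB rate above can be evaluated directly; in the following sketch the effective mass, field and barrier thickness are placeholder values chosen by us, not parameters from the text:

# Illustrative evaluation of 1/tau_i^WKB (only subbands with V_0 < E_i < 0,
# i.e. zeta_0 > 0, contribute).
import numpy as np
from scipy.constants import hbar, h, e, m_e
from scipy.special import ai_zeros

m_z, F, a = 0.5 * m_e, 3e8, 5e-9          # assumed mass, field (V/m), thickness
kF = (2 * m_z * e * F / hbar**2) ** (1 / 3)
zeros, _, _, aip = ai_zeros(5)            # zeros of Ai and Ai' evaluated there
E_i = -e * F * (a - np.abs(zeros) / kF)   # subband energies
zeta0 = kF * a - np.abs(zeros)
gamma0 = (2 / 3) * np.clip(zeta0, 0, None) ** 1.5
rate = (hbar**2 / (2 * m_z)) * kF**2 / (h * aip**2) * np.exp(-2 * gamma0)
rate[zeta0 <= 0] = 0.0                    # outside the allowed energy window
print(E_i / e)                            # subband energies in eV
print(1 / rate[rate > 0])                 # relaxation times tau_i in seconds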
Let f_i^+ and f_i^- denote the distribution functions of right-going and left-going states, respectively, i.e., f_i^±(x,k_xR,k_yR) = f_i(x,± k_xR,k_yR) with k_xR>0. We impose the boundary conditions
f_i^-(x = L_x,𝐤_∥ R) = f_R(E_R,β) ,
f_i^+(x = 0,𝐤_∥ R) = f_i^-(x = 0,𝐤_∥ R) ,
where f_R is a Fermi-Dirac function similar to (<ref>) with μ_L replaced by μ_R (see Fig. <ref>). The latter condition makes sure that the longitudinal current vanishes at x=0. The Boltzmann equations (<ref>) for different wavevectors are independent of each other except for ± k_xR. The solutions are
f_i^± = -[ f_L(E_R,β) - f_R(E_R,β) ] e^- L_x ± x/|v_x| τ_i + f_L(E_R,β) .
The current (per unit width) from the semiconductor to the metal can be obtained by summing the net flux |v_x|(f_i^+-f_i^-) at x = L_x over all semiconductor states per unit area, multiplying by 2 for spin degeneracy and multiplying by the electronic charge q:
I = 2q/L_x L_y∑_k_xR>0∑_k_yR∑_i |v_x| (f_i^+-f_i^-)_x=L_x
= 2 q/L_x L_y∑_k_xR>0∑_k_yR∑_i |v_x| ( 1 - e^-2 L_x/|v_x| τ_i) [ f_L(E_R,β) - f_R(E_R,β) ] .
Going to the limit of large L_x,L_y, we can replace
∑_k_xR→∫L_x/2π d k_xR , ∑_k_yR→∫L_y/2π d k_yR
to get
I = 2q/( 2 π)^2∫_0^∞ d k_xR∫_-∞^∞ d k_yR∑_i |v_x| ( 1 - e^-2 L_x/|v_x| τ_i) [ f_L(E_R,β) - f_R(E_R,β) ] .
Finally, with the change of variables k_xR→ε = ħ^2 k_xR^2 / (2 m_x) + E_i, we obtain the Landauer formula <cit.>
I = 2q/h∑_i∫_E_i^∞ d ε ( 1 - e^-2 L_x/|v_x| τ_i) 1/2 π∫_-∞^∞ d k_yR[ f_L( ħ^2 k_yR^2/2 m_y + ε) - f_R( ħ^2 k_yR^2/2 m_y + ε) ]
= 2q/h∑_i∫_E_i^∞ d ε ( 1 - e^-2 L_x/|v_x| τ_i) [ F_L(ε) - F_R(ε) ]
= 2q/h∫_-∞^∞ d ε T(ε) [ F_L(ε) - F_R(ε) ] ,
where the longitudinal velocity must be computed as |v_x| = √( 2 (ε-E_i) / m_x), the supply function F_L/R is defined as
F_L/R(ε) = √(m_y k_B T/2 πħ^2)ℱ_-1/2( μ_L/R - ε/k_B T)
with ℱ_-1/2 the Fermi-Dirac integral of order -1/2 <cit.>, T is the transmission function
T(ε) = ∑_i T_i(ε) ,
with the transmission probability T_i given by
T_i(ε) = {[ 0, ε < E_i; 1 - e^-2 L_x/λ_i, ε > E_i ].
and λ_i = |v_x| τ_i. For ε > E_i, we can have the asymptotic behaviors
L_x ≫λ_i : T_i(ε) ≈ 1
L_x ≪λ_i : T_i(ε) ≈2 L_x/λ_i
Approximation (<ref>) holds, in particular, as ε→ E_i^+ (resonant tunneling). Note also that, when (<ref>) is satisfied, T_i decays as a function of energy as 1/√(ε).
The low bias conductance per unit width G can be evaluated from (<ref>). Let μ_L=μ and μ_R=μ-qV_D. We have
G = ∂ I/∂ V_D|_V_D=0 = 2q^2/h∫_-∞^∞ d ε T(ε) (-∂ F_0/∂ε) ,
where F_0(ε) is given by (<ref>) with μ_L/R replaced by μ.
Assume for simplicity that only the first subband contributes to transport. In the two limiting cases when either (<ref>) or (<ref>) are satisfied over the whole energy range of interest for transport (from E_1 to few k_B T's above max{E_1,μ}), it is possible to derive analytical expressions for G. If L_x ≫λ_1 in this energy range, it follows immediately from (<ref>)
G = 2q^2/h F_0(E_1) .
To work out the expression of G in the other limiting case when L_x ≪λ_1 in most of the energy window for transport[The inequality does not hold for energies close to E_1 but their contribution becomes increasingly smaller as τ_1 increases.], it is convenient to go back to the double-integral formulation of the tunneling current in (<ref>) and do the change of variables k_xR→ E_R,β:
I = 2q/h∫_E_1^∞ d E_R,β( 1/π∫_0^k_ymax d k_yR2 L_x/|v_x| τ_1) [ f_L(E_R,β) - f_R(E_R,β) ] ,
where k_ymax = √(2 m_y (E_R,β-E_1))/ħ and |v_x| = √( 2 [E_R,β-ħ^2 k_yR^2/(2 m_y)-E_1] / m_x). The integral over the transverse wavevector can be easily computed with the change of variables k_yR→arcsin(k_yR/k_ymax) to give
I = 2q/h∫_E_1^∞ d E_R,βk_ymax L_x/√(2 (E_R,β-E_1)/m_x)τ_1[ f_L(E_R,β) - f_R(E_R,β) ]
= 2q/h√(m_x m_y)/ħL_x/τ_1∫_E_1^∞ d E_R,β[ f_L(E_R,β) - f_R(E_R,β) ] .
Letting μ_L=μ and μ_R=μ-qV_D, we finally get
G = ∂ I/∂ V_D|_V_D=0 = 2q^2/h√(m_x m_y)/ħL_x/τ_1∫_E_1^∞ d E_R,β(-∂ f_0/∂ E_R,β) = 2q^2/h√(m_x m_y)/ħL_x/τ_1 f_0(E_1) ,
where f_0 is a Fermi-Dirac function similar to (<ref>) with μ_L replaced by μ.
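Both limiting expressions are straightforward to evaluate numerically; in the sketch below the masses, subband edge, Fermi level, relaxation time and contact length are arbitrary example values (not from the text), and ℱ_-1/2 is computed by direct quadrature:

# Low-bias conductance per unit width in the two limits (example parameters).
import numpy as np
from scipy.integrate import quad
from scipy.constants import hbar, h, e, m_e, k as kB

def F_minus_half(eta):
    # Fermi-Dirac integral of order -1/2, via the substitution x = u^2
    val, _ = quad(lambda u: 2.0 / (1.0 + np.exp(u * u - eta)), 0.0, np.inf)
    return val / np.sqrt(np.pi)

T = 300.0
m_x = m_y = 0.5 * m_e                    # assumed in-plane effective masses
E1, mu = -0.05 * e, -0.02 * e            # assumed subband edge and Fermi level
eta = (mu - E1) / (kB * T)

F0 = np.sqrt(m_y * kB * T / (2 * np.pi * hbar**2)) * F_minus_half(eta)
G_long = 2 * e**2 / h * F0               # limit L_x >> lambda_1

tau1, Lx = 1e-12, 100e-9                 # assumed relaxation time, contact length
f0 = 1.0 / (np.exp((E1 - mu) / (kB * T)) + 1.0)
G_short = 2 * e**2 / h * np.sqrt(m_x * m_y) / hbar * Lx / tau1 * f0
print(G_long, G_short)                   # conductance per unit width (S/m)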
§ GREEN FUNCTION ALGORITHM
We discretize the Hamiltonian in (<ref>) using finite differences on a two-dimensional rectangular grid (Fig. <ref>a). The same linear potential profile as in (<ref>) is assumed. The transmission function is computed as <cit.>
T(ε) = tr[ Γ^L G^r Γ^R G^a ] ,
where the symbol tr indicates the trace, G^r is the retarded Green function, G^a=G^r†, Γ^L/R = i (Σ^r,L/R-Σ^a,L/R), with Σ^r,L/R the retarded self-energy representing the renormalization of the Hamiltonian of the channel region (black dots in Fig. <ref>a) due to the presence of the semi-infinite left/right lead, and Σ^a,L/R = ( Σ^r,L/R )^†. The channel region is partitioned into layers as shown in Fig. <ref>b. Using matrix block notation and noting that the only non-null block of Σ^r,L is Σ^r,L_0,0 and the only non-null block of Σ^r,R is Σ^r,R_N_x+1,N_x+1, (<ref>) can be rewritten as
T(ε) = tr[ Γ^L_0,0 G^r_0,N_x+1Γ^R_N_x+1,N_x+1G^a_N_x+1,0] .
The self-energy of the left lead is computed analytically using the prescription given in <cit.>:
Σ^r,L_0,0(i,i') = ∑_m=1^N_xχ_m(i) σ_m χ_m(i') ,
χ_m(i) = √(2/N_x+1)sin(k_x i) ,
σ_m = t_z ×{[ λ-1+√(λ^2-2λ), λ < 0; λ-1-√(λ^2-2λ), λ > 2; λ-1-i√(2λ-λ^2), 0 < λ < 2 ].
λ = ε - V_0 - 2 t_x (1 - cos k_x)/2 t_z ,
k_x = π m/N_x+1 ,
where t_x = ħ^2/(2 m_x Δ_x^2) and similarly for t_z. The self-energy of the right lead is obtained numerically using a well-known iterative algorithm <cit.>. The matrix block G^r_0,N_x+1 is computed through a combination of the recursive and decimation algorithms <cit.>, modified so as to treat a non-tridiagonal-block Hamiltonian matrix. Let A = ε I - H_C - Σ^r,L - Σ^r,R, where H_C is the Hamiltonian matrix of the channel region alone, and define δ_1^(0) = A_0,0, δ_2^(0) = A_1,1, α^(0) = A_0,1, β^(0) = A_1,0. The algorithm consists in eliminating the layers from 1 to N_x with the formulas
δ_1^(n) = δ_1^(n-1) - α^(n-1)[ δ_2^(n-1)]^-1β^(n-1) ,
δ_2^(n) = A_n+1,n+1 - A_n+1,n[ δ_2^(n-1)]^-1 A_n,n+1 ,
α^(n) = -α^(n-1)[ δ_2^(n-1)]^-1 A_n,n+1 + A_0,n+1 ,
β^(n) = - A_n+1,n[ δ_2^(n-1)]^-1β^(n-1) + A_n+1,0
for n=1,…,N_x, where it is understood that A_0,N_x+1=A_N_x+1,0^† = 0. At the end, the required matrix block of the Green function can be obtained as
G^r_0,N_x+1 = - [ δ_1^(N_x)]^-1α^(N_x){δ_2^(N_x) - β^(N_x)[ δ_1^(N_x)]^-1α^(N_x)}^-1 .
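A compact NumPy sketch of this elimination scheme is given below; the block matrix A = ε I - H_C - Σ^r,L - Σ^r,R is supplied as a callable A(i,j) that returns the (i,j) block (zero blocks included, with A(0,N_x+1) = A(N_x+1,0) = 0 for a block-tridiagonal Hamiltonian), so that the code is a direct transcription of (<ref>)-(<ref>):

# Sketch of the modified recursive/decimation algorithm for G^r_{0,Nx+1}.
import numpy as np

def corner_green(A, Nx):
    d1, d2 = A(0, 0), A(1, 1)            # delta_1^(0), delta_2^(0)
    al, be = A(0, 1), A(1, 0)            # alpha^(0),   beta^(0)
    for n in range(1, Nx + 1):           # eliminate layers 1..Nx
        inv = np.linalg.inv(d2)
        d1 = d1 - al @ inv @ be
        al, be = (-al @ inv @ A(n, n + 1) + A(0, n + 1),
                  -A(n + 1, n) @ inv @ be + A(n + 1, 0))
        d2 = A(n + 1, n + 1) - A(n + 1, n) @ inv @ A(n, n + 1)
    inv1 = np.linalg.inv(d1)
    return -inv1 @ al @ np.linalg.inv(d2 - be @ inv1 @ al)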
R. H. Fowler and L. Nordheim, Proceedings of the Royal Society of London A 119, 173 (1928).
J. Bardeen, Physical Review Letters 6, 57 (1961).
W. A. Harrison, Physical Review 123, 85 (1961).
C. B. Duke, Tunneling in Solids, Vol. 10 (Academic Press, 1969).
J. R. Oppenheimer, Physical Review 31, 66 (1928).
M. Rudan, Physics of Semiconductor Devices (Springer, 2015).
D. A. Miller, Quantum Mechanics for Scientists and Engineers (Cambridge University Press, 2008).
M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, No. 55 (Courier Corporation, 1964).
A. Messiah, Quantum Mechanics, Vol. 1 (North-Holland, Amsterdam, 1961).
S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, 1997).
M. P. López Sancho, J. M. López Sancho, and J. Rubio, Journal of Physics F: Metal Physics 15, 851 (1985).
T. Low and J. Appenzeller, Physical Review B 80, 155406 (2009).
|
http://arxiv.org/abs/1701.07820v2 | 20170126185520 | Hierarchy construction and non-Abelian families of generic topological orders | [
"Tian Lan",
"Xiao-Gang Wen"
] | cond-mat.str-el | [
"cond-mat.str-el",
"math.CT"
] |
Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
Department of Physics and Astronomy,
University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Department of Physics, Massachusetts Institute of
Technology, Cambridge, Massachusetts 02139, USA
We generalize the hierarchy construction to generic 2+1D topological orders
(which can be non-Abelian) by condensing Abelian anyons in one topological
order to construct a new one. We show that such a construction is reversible and leads to a new equivalence relation between topological orders. We refer to the corresponding equivalence class (the orbit of
the hierarchy construction) as “the non-Abelian family”. Each non-Abelian
family has one or a few root topological orders with the smallest number of
anyon types. All the Abelian topological orders belong to the trivial
non-Abelian family whose root is the trivial topological order. We show that
Abelian anyons in root topological orders must be bosons or
fermions with trivial mutual statistics between them. The classification of
topological orders is then greatly simplified, by focusing on the roots of each
family: those roots are given by non-Abelian modular extensions of
representation categories of Abelian groups.
Hierarchy construction and non-Abelian families of generic topological orders
Tian Lan and Xiao-Gang Wen
=============================================================================
Introduction:
The ultimate dream of classifying objects in nature may be creating a “table”
for them. A classic example of such a classification result is the “Periodic Table” for chemical elements. As for the topologically ordered<cit.> phases of matter, which have drawn more and more research interest recently, we are
already able to create some “tables” for
them<cit.>, via
the theory of
categories. However, efforts are needed to further understand the tables, for
example, to reveal some “periodic” structures in the table.
In the Periodic Table, elements are divided into several “families” (the
columns of the table), and those in the same family have similar chemical
properties. The underlying reason for this is that elements in the same family
have similar outer electron structures, and only differ by “noble gas cores”.
The last family consists of noble gas elements, which are chemically “inert”,
as they have no outer electrons besides the noble gas cores. Thus the
“family” can be considered as the equivalence class up to the “inert” noble
gas elements.
When it comes to topological orders, we also have “inert” ones: the Abelian
topological orders are “inert”, for example, in the application of
topological quantum computation<cit.>. Abelian anyons can not
support non-local topological degeneracy, which is an essential difference from
non-Abelian anyons. Is it possible to define equivalence classes for topological orders, up to Abelian topological orders? In this letter, we use the hierarchy construction to establish such equivalence classes, which we will call
the “non-Abelian families”. The hierarchy construction is well known in the
study of Abelian fractional quantum Hall (FQH)
states<cit.>. In this letter we generalize it to
arbitrary (potentially non-Abelian) topological orders.
We show that the generalized hierarchy construction is reversible. Thus, we
can say that two topological orders belong to the same “non-Abelian family”
if they are related by the hierarchy construction.
Each non-Abelian family has special “root” topological orders (see Table
<ref>), with the following properties:
* Root states have the smallest rank (number of anyon types) among
the non-Abelian family.
* Abelian anyons in a root state are all bosons or fermions, and have
trivial mutual statistics with each other.
Since any topological order in the same non-Abelian family can be reconstructed
from a root state, our work simplifies the classification of generic
topological orders to the classification of root states.
Our calculation is based on quantitative characterizations of topological
orders. One way to do so is to use the S,T modular matrices obtained
from the non-Abelian geometric phases of degenerate ground states on torus
<cit.>. We will show, starting from a topological order described
by S,T, how to obtain another topological order described by new S',T' via
a condensation of Abelian anyons. (For a less general approach based on wave
functions, see BS07113204.) The calculation uses the theory of fusion and
braiding of quasiparticles (which will be called anyons) in topological
order. Such a theory is the so called “unitary modular tensor category (UMTC) theory”
(for a review and much more details on UMTC, see W150605768).
A UMTC is simply a set of anyons (two anyons connected by a local operator
are regarded as the same type), plus data to describe their fusion and
braiding. Just as the fusion of two spin-1 particles gives rise to a “direct
sum” of spin-0,1,2 particles: 1 ⊗ 1 =0⊕ 1 ⊕ 2, the fusion
of two anyons i and j in general gives rise to a “direct sum” of several
other anyons: i⊗ j = ⊕_k N^ij_k k. So the fusion of anyons
is quantitatively described by a rank-3 integer tensor N^ij_k. From
N^ij_k, we can determine the internal degrees of freedom of anyons, which
is the so called quantum dimension. For example, the quantum dimension
of a spin-S particle is d=2S+1. For an anyon i, its quantum dimension
d_i, which can be non-integer, is the largest eigenvalue of matrix N_i with
(N_i)_kj=N^ij_k.
After knowing the fusion, the braiding of anyons can be fully determined by the
fractional part of their angular momentum L^z: s_i =mod(L^z_i,1).
s_i is called the topological spin (or simply spin) of the anyon i.
The last piece of data to characterize topological orders is the chiral
central charge c, which is the number of right-moving edge modes minus the
number of left-moving edge modes.
It turns out that two sets of data (S,T) and (N^ij_k,s_i) can fully
determine each other:
T_ij = e^{i2π s_i} δ_ij,
S_ij = ∑_k e^{i2π (s_i+s_j-s_k)} N^ij_k d_k/D,
e^{i2π s_i} = T_ii,
N^ij_k = ∑_l S_li S_lj S^*_lk / S_l1 ,
where D=√(∑_i d_i^2) is the total quantum dimension.
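As a sanity check of these relations, one can start from the fusion rules and spins of a small UMTC, build S and T, and verify that the Verlinde-type formula recovers the fusion coefficients. The sketch below (our example, using the standard Fibonacci data {𝟏,τ} with s_τ=2/5) does this:

# Round-trip check of the (N, s) <-> (S, T) relations for Fibonacci anyons.
import numpy as np

phi = (1 + np.sqrt(5)) / 2
d = np.array([1.0, phi])                 # quantum dimensions of 1, tau
D = np.sqrt((d**2).sum())
s = np.array([0.0, 2 / 5])               # topological spins
N = np.zeros((2, 2, 2))                  # N[i, j, k] = N^{ij}_k
N[0, 0, 0] = N[0, 1, 1] = N[1, 0, 1] = N[1, 1, 0] = N[1, 1, 1] = 1

S = np.array([[sum(np.exp(2j * np.pi * (s[i] + s[j] - s[k])) * N[i, j, k] * d[k]
                   for k in range(2)) / D for j in range(2)] for i in range(2)])
assert np.allclose(S @ S.conj().T, np.eye(2))        # unitarity
# Verlinde formula: N^{ij}_k = sum_l S_li S_lj S*_lk / S_l1
N_back = np.einsum('li,lj,lk,l->ijk', S, S, S.conj(), 1 / S[:, 0])
assert np.allclose(N_back, N, atol=1e-12)
print(np.round(S * D, 6))                # (1/D) [[1, phi], [phi, -1]]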
Hierarchy construction in generic topological orders: Let us consider an Abelian anyon condensation in a generic topological order, described by a UMTC 𝒞. (Such a condensation in an Abelian topological order is discussed in Appendix <ref>.) The anyons in 𝒞 are labeled by i,j,k,⋯. Let a_c be an Abelian anyon in 𝒞 with spin s_c. We condense a_c into the Laughlin state Ψ=∏(z_i-z_j)^{m_c-2s_c}, where m_c is even and m_c-2s_c≠ 0. [This is different from the so-called “anyon condensation” categorical approach, where only bosons condensing into the trivial state are considered.] The resulting topological order is described by a UMTC 𝒟, determined by 𝒞, a_c and m_c.
To calculate 𝒟, we note that the anyons in 𝒟 are the anyons i in 𝒞 dressed with the vortices of the Laughlin state of a_c. The vorticity is given by m-t_i, where m is an integer, and 2π t_i is the mutual statistics angle between the anyon i and the condensing anyon a_c in the original topological order 𝒞, which can be extracted from the S matrix: e^{-i2π t_i}=S^𝒞_{i a_c}/S^𝒞_{i𝟏}, or -t_i = s_i + s_{a_c} - s_{i⊗ a_c}. Thus anyons in 𝒟 are labeled by pairs I=(i,m). We would like to ask: what are the spin and the fusion rules of I=(i,m)?
The spin of (i,m) is given by the spin of i plus the spin of the m-t_i flux in the Laughlin state:
s_(i,m) = s_i + (m-t_i)^2/[2(m_c-2s_c)] .
Fusing i with m-t_i flux and j with n-t_j flux gives us
i⊗ j with m-t_i + n-t_j flux:
(i,m) ⊗(j,n) ∼⊕_kN^ij_k(k,m-t_i+n-t_j+t_k),
where N^ij_k is the fusion coefficient in 𝒞.
Since a_c with m_c-t_{a_c}=m_c-2s_c flux is condensed, fusing with the (a_c,m_c) anyon does not change the anyon type in 𝒟. So, we have
an equivalence relation:
(i,m) ∼ (i⊗ a_c, m-t_i+m_c-2s_c+t_i⊗ a_c),
The above three relations fully determine the topological order 𝒟.<cit.>
It is important to fix a “gauge” for t_i, say by choosing t_i
∈ [0,1). The same label (i,m) may label different anyons under different
“gauge” choices of t_i. Similarly, we have fixed a “gauge” for s_c that
fixed the meaning of m_c. Note that t_a_c is automatically fixed when
s_c is fixed: t_a_c=2 s_c, while other t_i can be freely chosen. This
ensures that the equivalence relation (<ref>) is compatible with fusion
(<ref>), where (<ref>) is generated by fusing with the
trivial anyon (a_c,m_c). The combinations m-t_i,
m_c-2s_c determine the final spins and fusion rules; they are
gauge-invariant quantities. Thus, if we change the gauge of t_i,s_c, i.e.,
modify them by some integers, m,m_c should be modified by the same integers to
ensure that the construction remains the same.
Below we will study the properties of 𝒟 in detail. Let M_c=m_c-2s_c.
Applying the equivalence relation (<ref>) q times, we obtain
(i,m)∼ (i⊗ a_c^⊗ q,m-t_i+qM_c+t_i⊗ a_c^⊗
q).
Let q_c be the “period” of a_c, i.e., the smallest positive integer such that a_c^⊗ q_c=𝟏. We see that
(i,m)∼ (i,m+q_cM_c).
Thus, we can focus on the reduced range of m∈{0,1,2,⋯,q_c|M_c|-1}.
Let |𝒞|, |𝒟| denote the ranks of 𝒞, 𝒟, respectively. Now within the reduced range of m, we have q_c|M_c||𝒞| different labels, and we want to show that the orbits generated by the equivalence relation (<ref>) all have the same length, which is q_c. To see this, just note that for 0<q<q_c, either i≠ i⊗ a_c^⊗ q, or, if i=i⊗ a_c^⊗ q, then m≠ m-t_i+qM_c+t_{i⊗ a_c^⊗ q}=m+qM_c; in other words, the labels (i,m) are all different within q_c steps. It follows that the rank of 𝒟 is |𝒟|=|M_c||𝒞|.
Strictly speaking, anyons in 𝒟 should correspond one-to-one to the equivalence classes of (i,m). However, as the orbits have the same length, it is more convenient to use (i,m) directly (as we will see later, this is the same as working in a pre-modular category 𝒞̃). For example, when we need to sum over anyons in 𝒟, we can instead do
∑_{I∈𝒟} → 1/q_c ∑_{i∈𝒞} ∑_{m=0}^{q_c|M_c|-1} .
Now we are ready to calculate other quantities of the new topological order 𝒟. First, it is easy to see that the quantum dimensions remain the same,
d_(i,m)=d_i.
The total quantum dimension is then
D_𝒟^2 = 1/q_c ∑_{i∈𝒞} ∑_{m=0}^{q_c|M_c|-1} d_(i,m)^2 = |M_c| D_𝒞^2.
The S matrix is
S^𝒟_(i,m),(j,n) = ∑_k N^ij_k/D_𝒟 d_k e^{i2π [s_(i,m)+s_(j,n)-s_(k,m+n+t_k-t_i-t_j)]}
 = 1/√(|M_c|) S^𝒞_ij e^{-i2π(m-t_i)(n-t_j)/M_c}.
It is straightforward to check that S^𝒟_(i,m),(j,n) is unitary (with respect to equivalence classes of (i,m)). Moreover, this formula for S can recover the equivalence relation (<ref>) and the fusion rules (<ref>) via unitarity and the Verlinde formula.
The new S^𝒟,T^𝒟 matrices (the T^𝒟 matrix is determined by the spins of the anyons, s^𝒟_(i,m) in (<ref>)), as well as S^𝒞,T^𝒞, should both obey the modular relation STS = e^{i2π c/8} T^† S T^†, from which we can extract the central charge of 𝒟. The new central charge is found to be (see Appendix <ref>)
c^𝒟 = c^𝒞 + sgn(M_c) .
Clearly, the one-step hierarchy construction described by (<ref>),
(<ref>), and (<ref>) is fully determined by an Abelian anyon a_c and
M_c, where M_c+2s_c is an even integer. In Appendix <ref>, we
discuss the above hierarchy construction more rigorously at the full
categorical level.
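As a minimal worked example (ours, not from the text), take 𝒞 to be the Fibonacci order 2_14/5 with spins {0,2/5} and condense the trivial anyon a_c=𝟏 (so s_c=0, q_c=1, t_i=0) with m_c=2, i.e. M_c=2. The spin formula above then yields the four spins {0,1/4,2/5,13/20} and the central charge becomes c=14/5+1=19/5, reproducing the 4_19/5 entry of the SU(2)_3 family table in Appendix <ref>:

# One-step condensation of a_c = 1 (m_c = 2) applied to the Fibonacci order.
from fractions import Fraction

s_C = [Fraction(0), Fraction(2, 5)]      # spins of the parent order 2_{14/5}
M_c = 2                                  # M_c = m_c - 2 s_c; here q_c = 1
spins_D = sorted((s + Fraction(m**2, 2 * M_c)) % 1
                 for s in s_C for m in range(M_c))
print(spins_D)                           # [0, 1/4, 2/5, 13/20]
print(Fraction(14, 5) + 1)               # new central charge 19/5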
As an application, let us explain the “eight-fold way” observed in the table
of topological orders<cit.>: whenever there is a
fermionic quasiparticle, the topological order has eight companions with the
same rank and quantum dimensions but different spins and central charges. If we
apply the one-step condensation with a_c being a fermion, and M_c=± 1, a
new topological order of the same rank is obtained. [Physically this
amounts to condensing the fermionic quasiparticle into an integer quantum Hall
state. If we instead condense the fermionic quasiparticle into p ± ip states, we are able to obtain the “sixteen fold way”. However, such
condensation is beyond the construction of this work.] The spins of the anyons
carrying fermion parity flux (having non-trivial mutual statistics with the
fermion a_c) are shifted by ±1/8, and the central charge is shifted by
± 1, while all the quantum dimensions remain the same. If we repeat it
eight times, we will go back to the original state (up to an E_8 state), generating the
“eight fold way”.
Reverse construction and non-Abelian families:
The one-step condensation from 𝒞 to 𝒟 is always reversible. In 𝒟, choosing a_c'=(𝟏,1), s_c'=1/(2M_c), m_c'=0, M_c'=-1/M_c, and repeating the construction, we will go back to 𝒞.
One may first perform the construction for the pre-modular category 𝒞̃ and then reduce the resulting category to a modular category. Taking (j,n)=a_c'=(𝟏,1) in (<ref>), we find that the mutual statistics between (i,m) and a_c'=(𝟏,1) is
t'_(i,m) = (m-t_i)/M_c.
Let (i,m,p),(j,n,q) label the anyons after the above one-step condensation; the new S matrix is
S_(i,m,p),(j,n,q) = S^𝒞_ij e^{-i2π(m-t_i)(n-t_j)/M_c} e^{-i2π(p-t'_(i,m))(q-t'_(j,n))/M_c'}
 = S^𝒞_ij e^{i2π (t_i q + t_j p - t_{a_c} pq)}
 = S^𝒞_{i⊗a̅_c^⊗ p, j⊗a̅_c^⊗ q},
which means that we can identify (i,m,p) with i⊗a̅_c^⊗ p (a̅_c denotes the anti-particle of a_c). It is easy to check that they have the same spin, s_(i,m,p)=s_{i⊗a̅_c^⊗ p}. Therefore, i ∼ (i⊗ a_c^⊗ p,m,p), ∀ m,p, and we have come back to the original state 𝒞.
Therefore, generic hierarchy constructions are reversible, which defines an
equivalence relation between topological orders. We call the corresponding
equivalence classes the “non-Abelian families”.
Now we examine the important quantity M_c=m_c-2s_c, which relates the ranks before and after the one-step condensation, |𝒟|=|M_c||𝒞|. Since m_c is a freely chosen even integer, when a_c is not a boson or a fermion (s_c≠ 0 or 1/2 mod 1), we can always make 0<|M_c|<1, which means that the rank is reduced after the one-step condensation. We then have the first important conclusion: each non-Abelian family has “root” topological orders with the smallest rank; the Abelian anyons in the “root” states are all bosons or fermions.
We can further show that the Abelian bosons or
fermions in the “root” states have trivial mutual statistics among them.
To see this, assume that a,b are Abelian anyons in a root state. Since the mutual statistics is given by D S_ab = exp[i2π(s_a+s_b-s_{a⊗ b})], and a,b,a⊗ b are all bosons or fermions, a non-trivial mutual statistics can only be D S_ab=-1. Now consider two cases: (1) One of a,b, say a, is a fermion. Then by condensing a (choosing a_c=a, m_c=2, s_c=1/2, t_b=1/2), in the new topological order the rank remains the same but s_(b,0)=s_b+t_b^2/(2M_c)=s_b+1/8, which means (b,0) is an Abelian anyon but neither a boson nor a fermion. By condensing (b,0) again we can reduce the rank, which conflicts with the “root” state assumption. (2) a,b are both bosons. Still we condense a, with m_c=2, s_c=0, t_b=1/2. In the new topological order the rank is doubled, but s_(b,0)=s_b+t_b^2/(2M_c)=1/16, which means that by further condensing (b,0) with m_c'=0 the rank is multiplied by |M_c'|=1/8, so it becomes again smaller than the rank of the original root state, a contradiction.
Therefore, in the root states, Abelian anyons are bosons or fermions with
trivial mutual statistics.
We also have a straightforward corollary: all Abelian topological orders
have the same unique root state, which is the trivial topological order. In
other words, all Abelian topological orders are in the same trivial non-Abelian
family, which resembles the noble gas family in the Periodic Table. Thus, the
non-Abelian families are indeed equivalence classes up to Abelian topological
orders.
To easily determine if two states belong to the same non-Abelian family,
it is very helpful to introduce some non-Abelian invariants. One is the fractional part of
the central charge. Since the one-step condensation changes the central charge
by sgn(M_c) (see (<ref>)), we know that central charges in the same
non-Abelian family can only differ by integers.
Another invariant is the quantum dimension. It is not hard to check that, in
the one-step condensation, the number of anyons with the same quantum dimension
is also multiplied by |M_c|. The third invariant is a bit involved. Note that
in the one-step condensation, if i has trivial mutual statistics with a_c,
t_i=0, then (i,0) in 𝒟 has the same spin as i in 𝒞 and the same mutual statistics with (j,m), ∀ m, as that between i and j in 𝒞.
Therefore, the centralizer of Abelian anyons, namely, the subset of anyons that
have trivial mutual statistics with all Abelian anyons (the anyons in red in
Table <ref>), is the same within one non-Abelian family. These facts
enable us to quickly tell that two states are not in the same non-Abelian
family.
Examples: Realizations of non-Abelian FQH states were first
proposed in Wnab,MR9162. One of them
is <cit.>
Ψ_ν=1({z_i})=[χ_2({z_i})]^2,
where χ_k({z_i}) is the many-fermion wave function with k filled
Landau levels. The bulk effective theory is the
SU(2)_-2^f Chern-Simons (CS) theory with 3 types of anyons and the edge has
c=5/2 (see Appendix <ref>). So the state is one of the root states, N_c=3_5/2, in Table <ref>. Another bosonic non-Abelian FQH liquid at ν=1 is
<cit.>
Ψ_ν=1 = Pf( 1/(z_i-z_j) ) ∏ (z_i-z_j),
whose edge has a chiral central charge c=3/2. It is the state described by
N_c=3_3/2, which belongs to the same non-Abelian family as the 3_5/2 state above. The experimentally realized ν=5/2 FQH state is likely to
belong to this non-Abelian family <cit.>.
A more interesting non-Abelian state (which can perform universal topological quantum computation <cit.>) is
Ψ_ν=3/2({z_i})=[χ_3({z_i})]^2,
whose edge has a chiral central charge c=21/5. The bulk effective
theory is the SU(2)_-3^f CS theory with 4 types of anyons <cit.>.
So the state is N_c=4_21/5, which belongs to the same non-Abelian
family as the state 2_26/5 in Table <ref> (see Appendix
<ref>, which contains more examples of non-Abelian states and
non-Abelian families).
We like to remark that the topological orders studied in this paper do not
require and do not have any symmetry. However, some c=0 topological orders
with a Z_2 automorphism i→ i' that changes the sign of spins
s_i =-s_i' can be realized by
time-reversal symmetric states <cit.>.
Conclusion and Outlook:
In this letter we introduced the hierarchy construction in generic topological
orders, which established a new equivalence relation: Two topological orders
related by the hierarchy construction belong to the same “non-Abelian
family”. This reveals intriguing new structures in the classification of
topological orders.
Non-Abelian families are equivalent classes up to Abelian topological orders.
Topological orders in the same non-Abelian family share some properties, such
as quantum dimensions and the fractional part of central charges.
In particular we studied the “root” states, the states in a non-Abelian
family with the smallest rank. Other states can be constructed from the root
states via the hierarchy construction. Thus, classifying all topological
orders is the same as classifying all root states, namely, all states such that
their Abelian anyons have trivial mutual statistics. In other words, we can try
to generate all possible topological orders by constructing all the root
states, which can be obtained by starting with an Abelian group G,
extending its representation category Rep(G) or Rep(G^f) to a modular category<cit.> while requiring all the extra anyons to be non-Abelian (which is referred to as a non-Abelian modular extension).
This is a promising future problem and may be an efficient way to produce
tables of topological orders.
Although in this letter we focused on bosonic topological orders with no
symmetry (described by modular categories), the construction also applies to
bosonic/fermionic topological orders with any symmetry (described by certain
pre-modular categories)<cit.>. The same argument goes
for non-Abelian families and root states with symmetries.
TL thanks Zhenghan Wang for helpful discussions. This research was supported by NSF Grant No. DMR-1506475 and NSFC 11274192.
§ HIERARCHY CONSTRUCTION IN ABELIAN TOPOLOGICAL ORDERS
In this section, we will discuss hierarchy construction, Abelian anyon
condensation, in Abelian topological orders in a very general setting. This
motivates the similar construction for generic non-Abelian states discussed in
the main text.
Consider a bosonic Abelian topological order, which can always be described by
an even K-matrix K_0 of dimension κ. Anyons are labeled by κ-dimensional integer vectors ľ_0. Two
integer vectors ľ_0 and ľ'_0 are equivalent (describe the same
type of topological excitation) if they are related by
ľ'_0 = ľ_0 + K_0 ǩ,
where ǩ is an arbitrary integer vector. The mutual statistical angle between two
anyons, ľ_0 and ǩ_0, is given by
θ_ľ_0,ǩ_0 = 2πǩ_0^T K_0^-1ľ_0.
The spin of the anyon ľ_0 is given by
s_ľ_0 = 1/2ľ_0^T K_0^-1ľ_0.
In the hierarchy construction of a new topological order from an old one, a
basic step is to condense Abelian anyons into a Laughlin-like
state. Let us construct a new topological order from the K_0
topological order by assuming Abelian anyons labeled by ľ_c condense.
Here we treat the anyon as a bound state between a boson and flux. We then
smear the flux such that it behaves like an additional uniform magnetic field,
and condense the boson into the ν=1/m_c Laughlin state (where m_c is even). The resulting new topological order is described by the (κ+1)-dimensional K-matrix
K_1 = ( [ K_0 ľ_c; ľ_c^T m_c ] ) .
In the following, we are going to show that, to describe the result of the ľ_c anyon condensation, we do not need to know K_0 directly. We only need
to know the spin of the condensing particle ľ_c
s_c=1/2ľ_c^T K_0^-1ľ_c,
and the mutual statistics
θ_ľ_0,ľ_c ≡ 2π t_ľ_0, t_ľ_0 = ľ_c^T K_0^-1ľ_0
between ľ_0 and ľ_c.
First, we find that, as long as m_c-2s_c≠ 0, K_1 is invertible with
K_1^-1 = ( [ K_0^-1 + K_0^-1ľ_c ľ_c^T K_0^-1/(m_c - 2s_c) -K_0^-1ľ_c/(m_c - 2s_c); -ľ_c^T K_0^-1/(m_c - 2s_c) 1/(m_c - 2s_c) ] ) .
The anyons in the new K_1 topological order are labeled by (κ+1)-dimensional integer vectors ľ^T = (ľ_0^T, m). The spin of ľ is
s_ľ = 1/2 ľ^T K_1^-1 ľ = 1/2[ 2s_ľ_0 + (m^2+t_ľ_0^2-2 m t_ľ_0)/(m_c-2s_c) ] = s_ľ_0 + (m-t_ľ_0)^2/[2(m_c-2s_c)] .
The vectors
ľ^T = (ľ_0^T, m)
and
ľ^' T = (ľ_0^' T, m')
are equivalent if they are related by
ľ_0^' - ľ_0 = K_0 ǩ_0 + k ľ_c, m' - m = ľ_c^T ·ǩ_0+m_c k,
for any κ-dimensional integer vector ǩ_0 and integer k.
To avoid the gauge ambiguity, for the
integer vectors ľ_0, we pick a representative for each equivalence class
(by (<ref>), fixing the gauge).
Taking k=1 and appropriate ǩ_0 such that ľ_0' and ľ_0 are
the pre-fixed representatives, we see that
(ľ_0^T, m) ∼ (ľ'_0^T∼ľ_0^T+ľ_c^T, m+t_ľ'_0-t_ľ_0+m_c-2s_c).
We also want to express the fusion in the new state in terms of the pre-fixed
representatives ľ_1,ľ_2,ľ_3. Assuming that (ľ_3^T,m_3)∼ (ľ_1^T+ľ_2^T,m_1+m_2), and taking k=0 and appropriate ǩ_0 in
(<ref>) (the cases of non-zero k can be generated via
(<ref>)), we find that
(ľ_1^T,m_1)+(ľ_2^T,m_2)
∼ (ľ_3^T∼ľ_1^T+ľ_2^T,m_3=m_1+m_2+t_ľ_3-t_ľ_1-t_ľ_2).
We can easily calculate the determinant of K_1 whose absolute value is the
rank of the new state:
det(K_1) = det( [ K_0 ľ_c; ľ_c^T m_c ] ) = det(K_0)(m_c-ľ_c^T K_0^-1ľ_c) = (m_c-2s_c)det(K_0)
Let M_c=m_c-2s_c. It is an important gauge invariant quantity relating the
ranks of the two states. If we perform the condensation with a different anyon ľ_c'
and a different even integer m_c', but make sure that ľ_c'∼ľ_c
and M_c'=m_c'-2s_c'=m_c-2s_c=M_c, the new topological order will be the same.
It is worth mentioning that such construction is reversible: for the K_1
state, take ľ_c'^T=(0̌^T,1),m_c'=0, and repeat the construction:
K_2 = ( [ K_0 ľ_c 0̌; ľ_c^T m_c 1; 0̌^T 1 0 ] ) ∼ ( [ K_0 0̌ 0̌; 0̌^T 0 1; 0̌^T 1 0 ] ) ∼ K_0.
We return to the original K_0 state.
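A quick numerical illustration of this step (with example values chosen by us, not from the text): start from K_0=(2), the bosonic ν=1/2 Laughlin state, condense the ľ_c=(1) anyon with m_c=2, and verify the determinant relation together with the spin formula:

# K-matrix hierarchy step for K_0 = (2), l_c = (1), m_c = 2.
import numpy as np

K0 = np.array([[2.0]])
lc = np.array([1.0])
mc = 2
sc = 0.5 * lc @ np.linalg.inv(K0) @ lc                 # s_c = 1/4
K1 = np.block([[K0, lc[:, None]], [lc[None, :], [[mc]]]])
assert np.isclose(np.linalg.det(K1), (mc - 2 * sc) * np.linalg.det(K0))

spin = lambda K, l: 0.5 * l @ np.linalg.inv(K) @ l     # s_l = l^T K^{-1} l / 2
print(K1, spin(K1, np.array([1.0, 0.0])))              # e.g. s_(1,0) = 1/3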
§ CALCULATING THE CENTRAL CHARGE DIFFERENCE OF ONE-STEP CONDENSATION
In the one-step condensation from 𝒞 to 𝒟, the central charge is changed by sgn(M_c). In this section we give the detailed calculation.
Firstly, using the modular relation for both 𝒞 and 𝒟, we find that
1/(q_c√(|M_c|)) ∑_{i,j,k∈𝒞} ∑_{p=0}^{q_c|M_c|-1} { S^𝒞_xi S^𝒞_ik T^𝒞_kk S^𝒞_kj S^𝒞_jy × exp[ i2π/(2M_c) (t_i+t_j-t_k+p)^2 ] }
 = exp( i2π (c^𝒟-c^𝒞)/8 ) T^𝒞_xx δ_xy.
To show
c^𝒟 - c^𝒞 = sgn(M_c) mod 8,
we need to use the reciprocity theorem for generalized
Gauss sums<cit.>:
∑_{n=0}^{|c|-1} e^{iπ(an^2+bn)/c} = √(|c/a|) e^{(iπ/4)[sgn(ac)-b^2/(ac)]} ∑_{n=0}^{|a|-1} e^{-iπ(cn^2+bn)/a},
where a,b,c are integers, ac≠ 0 and ac+b is even.
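This identity is easy to verify numerically; for instance (our example), take a=3, b=1, c=5, for which ac+b=16 is even:

# Numerical check of the generalized Gauss sum reciprocity.
import numpy as np

def gauss_sum(a, b, c):
    n = np.arange(abs(c))
    return np.exp(1j * np.pi * (a * n**2 + b * n) / c).sum()

a, b, c = 3, 1, 5
m = np.arange(abs(a))
rhs = (np.sqrt(abs(c / a))
       * np.exp(1j * np.pi / 4 * (np.sign(a * c) - b**2 / (a * c)))
       * np.exp(-1j * np.pi * (c * m**2 + b * m) / a).sum())
assert np.isclose(gauss_sum(a, b, c), rhs)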
Thus,
∑_{p=0}^{q_c|M_c|-1} exp[ i2π/(2M_c) (t_i+t_j-t_k+p)^2 ]
 = 1/q_c e^{iπ(t_i+t_j-t_k)^2/M_c} ∑_{p=0}^{q_c^2|M_c|-1} e^{iπ/(M_c q_c^2) [q_c^2 p^2 + 2q_c^2(t_i+t_j-t_k)p]}
 = √(|M_c|)/q_c e^{iπ sgn(M_c)/4} ∑_{p=0}^{q_c^2-1} e^{-iπ[M_c p^2 + 2(t_i+t_j-t_k)p]}
 = √(|M_c|)/q_c e^{iπ sgn(M_c)/4} ∑_{p=0}^{q_c^2-1} e^{-iπ(m_c-2s_c)p^2} (S^𝒞_{i a_c^⊗ p}/S^𝒞_{i𝟏}) (S^𝒞_{j a_c^⊗ p}/S^𝒞_{j𝟏}) (S^𝒞_{k a_c^⊗ p}/S^𝒞_{k𝟏})
 = √(|M_c|)/q_c e^{iπ sgn(M_c)/4} ∑_{p=0}^{q_c^2-1} T^𝒞_{a_c^⊗ p, a_c^⊗ p} (S^𝒞_{i a_c^⊗ p}/S^𝒞_{i𝟏}) (S^𝒞_{j a_c^⊗ p}/S^𝒞_{j𝟏}) (S^𝒞_{k a_c^⊗ p}/S^𝒞_{k𝟏}).
Substituting the above result into (<ref>), we have
1/(q_c√(|M_c|)) ∑_{i,j,k∈𝒞} ∑_{p=0}^{q_c|M_c|-1} { S^𝒞_xi S^𝒞_ik T^𝒞_kk S^𝒞_kj S^𝒞_jy × exp[ i2π/(2M_c) (t_i+t_j-t_k+p)^2 ] }
 = 1/q_c^2 e^{iπ sgn(M_c)/4} ∑_{p=0}^{q_c^2-1} ∑_k T^𝒞_kk T^𝒞_{a_c^⊗ p,a_c^⊗ p} (S^𝒞_{k a_c^⊗ p}/S^𝒞_{k𝟏}) × ∑_i S^𝒞_xi S^𝒞_ik S^𝒞_{i a_c^⊗ p}/S^𝒞_{i𝟏} ∑_j S^𝒞_kj S^𝒞_jy S^𝒞_{j a_c^⊗ p}/S^𝒞_{j𝟏}
 = 1/q_c^2 e^{iπ sgn(M_c)/4} ∑_{p=0}^{q_c^2-1} ∑_k T^𝒞_{k⊗ a_c^⊗ p, k⊗ a_c^⊗ p} N^{k,a_c^⊗ p}_x N^{k,a_c^⊗ p}_y
 = 1/q_c^2 e^{iπ sgn(M_c)/4} ∑_{p=0}^{q_c^2-1} ∑_k T^𝒞_{k⊗ a_c^⊗ p, k⊗ a_c^⊗ p} δ_{k⊗ a_c^⊗ p,x} δ_xy
 = e^{iπ sgn(M_c)/4} T^𝒞_xx δ_xy,
as desired.
In fact, based on the physical picture, we have a stronger result:
c^𝒟 = c^𝒞 + sgn(M_c) .
So the central charge is changed by sgn(M_c) after the one-step
condensation. A direct corollary is that the central charge of an Abelian
topological order is given by the signature of its K-matrix (the number of
positive eigenvalues minus the number of negative eigenvalues).
§ THE GENERALIZED HIERARCHY CONSTRUCTION
AT FULL CATEGORICAL LEVEL
Does the generalized hierarchy construction from 𝒞 to 𝒟 described by (<ref>), (<ref>), and (<ref>) always give a valid topological order 𝒟? To confirm this, below we will give a more rigorous construction
at full categorical level, which goes down to the level of F,R matrices.
The first step is to construct a pre-modular category 𝒞̃, based on the observation that the range of the second integer label can be reduced to q_c|M_c|, and the combination m-t_i for (i,m) works as a gauge invariant quantity. Such a 𝒞̃ can be viewed as a version of “semi-direct product” of 𝒞 with ℤ_{q_c|M_c|}. We use the gauge invariant m̃=m-t_i instead of the integer m to label anyons in 𝒞̃; in other words, the anyons are labeled by the new pair (i,m̃) where i∈𝒞 and m̃+t_i ∈ ℤ_{q_c|M_c|}. Fusion is then given by addition,
(i,m̃)⊗ (j,ñ) = ⊕_k N^ij_k (k,[m̃+ñ]_{q_c|M_c|}),
where [⋯]_{q_c|M_c|} denotes the residue modulo q_c|M_c|. The F,R matrices in 𝒞̃ are given by those in 𝒞 modified by appropriate phase factors. More precisely, let F^{i_1i_2i_3}_{i_4} and R^{i_1i_2}_{i_3} be the F,R matrices in 𝒞; then in 𝒞̃ we take
F̃^{(i_1,m̃_1)(i_2,m̃_2)(i_3,m̃_3)}_{(i_4,m̃_4)} = F^{i_1i_2i_3}_{i_4} e^{iπ/M_c m̃_1(m̃_2+m̃_3-[m̃_2+m̃_3]_{q_c|M_c|})},
R̃^{(i_1,m̃_1)(i_2,m̃_2)}_{(i_3,m̃_3)} = R^{i_1i_2}_{i_3} e^{iπ/M_c m̃_1m̃_2} = R^{i_1i_2}_{i_3} e^{iπ/M_c (m_1-t_{i_1})(m_2-t_{i_2})}.
It is straightforward to check that they satisfy the pentagon and hexagon
equations, and 𝒞̃ is a valid pre-modular category. Moreover, the modified R matrices do give us the desired modified spins. This also suggests
that the hierarchy construction equally works for pre-modular categories, thus
can be easily generalized to topological orders with
symmetries<cit.>.
The second step is to reduce the pre-modular category 𝒞̃ to the modular category 𝒟. Categorically, just note that {(i=a_c,m̃=M_c)^⊗ q, q=0,…,q_c-1} forms the Müger center of 𝒞̃, which can be identified with Rep(ℤ_q_c); by condensing this Rep(ℤ_q_c) we obtain the desired modular category 𝒟. Put simply, we just further impose the equivalence relation (<ref>) in 𝒞̃,
such that one orbit of length q_c is viewed as one type of anyon instead of
q_c different types. This way we rigorously recover the same construction
described by (<ref>), (<ref>), and (<ref>).
§ TABLES OF NON-ABELIAN FAMILIES
In this section, we list some non-Abelian families.
Each table contains a family up to a certain N.
Each row corresponds to a topological order. The anyons are listed with
increasing quantum dimensions. Only the quantum dimensions of a root state are
explicitly given. The quantum dimensions of other topological orders can be
easily obtained from those of the root, since the degeneracy for each
value of quantum dimension scales linearly with N.
The anyons in red have trivial mutual statistics with all Abelian anyons.
Such sets of anyons are the same within each family, and constitute an invariant of the non-Abelian family.
The following is the Abelian family:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
1_0 1 0 1
2_1 2 0,1/4
2_7 2 0,3/4
3_2 3 0,1/3,1/3
3_6 3 0,2/3,2/3
4_0 4 0,0,0,1/2
4_0 4 0,0,1/4,3/4
4_1 4 0,1/8,1/8,1/2
4_7 4 0,7/8,7/8,1/2
4_2 4 0,1/4,1/4,1/2
4_6 4 0,3/4,3/4,1/2
4_3 4 0,3/8,3/8,1/2
4_5 4 0,5/8,5/8,1/2
4_4 4 0,1/2,1/2,1/2
5_0 5 0,1/5,1/5,4/5,4/5
5_4 5 0,2/5,2/5,3/5,3/5
6_1 6 0,1/12,1/12,3/4,1/3,1/3
6_7 6 0,11/12,11/12,1/4,2/3,2/3
6_3 6 0,1/4,1/3,1/3,7/12,7/12
6_5 6 0,3/4,2/3,2/3,5/12,5/12
7_2 7 0,1/7,1/7,2/7,2/7,4/7,4/7
7_6 7 0,6/7,6/7,5/7,5/7,3/7,3/7
8_0 8 0,1/8,1/8,7/8,7/8,1/4,3/4,1/2
8_1 8 0,0,0,1/4,1/4,1/4,3/4,1/2
8_1 8 0,0,1/16,1/16,1/4,1/4,9/16,9/16
8_1 8 0,0,13/16,13/16,1/4,1/4,5/16,5/16
8_7 8 0,0,0,1/4,3/4,3/4,3/4,1/2
8_7 8 0,0,15/16,15/16,3/4,3/4,7/16,7/16
8_7 8 0,0,3/16,3/16,3/4,3/4,11/16,11/16
8_2 8 0,1/8,1/8,1/4,3/4,3/8,3/8,1/2
8_6 8 0,7/8,7/8,1/4,3/4,5/8,5/8,1/2
8_3 8 0,1/4,1/4,1/4,3/4,1/2,1/2,1/2
8_5 8 0,1/4,3/4,3/4,3/4,1/2,1/2,1/2
8_4 8 0,1/4,3/4,3/8,3/8,5/8,5/8,1/2
The following non-Abelian family is described by effective SU(2)_-3
Chern-Simons (CS) theory plus some Abelian CS theories. So it is called the
SU(2)_-3 non-Abelian family. Due to the level-rank duality, it is also
called the SU(3)_2 non-Abelian family since its contains a state described
by SU(3)_2 CS theory. We can also call the family as the Fibonacci
non-Abelian family since the root state is the Fibonacci non-Abelian state.
This family contains FQH state <cit.>
Ψ_ν=3/2({z_i})=[χ_3({z_i})]^2, N_c =4_21/5
In general, for a state
Ψ_ν=k/n({ z_i})=[χ_k({ z_i})]^n,
its low energy effective theory obtained from the projective parton
construction is given by <cit.>
ℒ = ψ^†_{aα} ( i∂_0 δ_{αβ} - (a_0)_{αβ} ) ψ_{aβ} - 1/2m |[ (i∂_i - A_i) δ_{αβ} - (a_i)_{αβ} ] ψ_{aβ}|^2 ,
where a=1,⋯,k=3 and α,β=1,⋯,n=2, and a_μ is the SU(n) gauge
field doing the projection. Before the projection (when a_μ=0) the
above effective theory describes a filling fraction ν=kn IQH state whose
edge has a chiral central charge c=kn (has kn right-moving modes). After the projection (after
integrating out the non-zero dynamical SU(n) gauge field a_μ), the edge
states will have a reduced central charge
c = kn - k (n^2-1) /k+n.
For our case here, k=3 and n=2 and we find c=21/5 in
(<ref>).
If we integrate out the fermion fields ψ_a first, we will obtain an effective SU(n) CS theory at level -k with central charge -k(n^2-1)/(k+n). The state (<ref>) and the effective theory (<ref>) have the same number of anyon types as the SU(n)_{-k} CS theory. But the spins of the anyons in (<ref>)
and in (<ref>) is not given by those of SU(n)_-k CS theory. They
may differ by 1/2 since the anyons in (<ref>) may contain an extra
fermion field ψ_a. So the spins of the anyons in (<ref>) and
in (<ref>) are related to the spins in SU(n)_-k CS theory via
s_i = s_i^SU(n)_-k mod 1/2.
In other words,
the spins of the anyons in (<ref>) and
in (<ref>) are related to the spins in SU(n)_k CS theory via
s_i = - s_i^SU(n)_k mod 1/2.
This allows us to identify the state (<ref>) in the table of the
non-Abelian family which is marked by the red N_c. We will denote the
effective SU(n) CS theory obtained from (<ref>) by integrating out the
fermions as SU(n)_-k^f. So the red N_c=4_21/5 row in the
following table is described by the SU(n)_-k^f CS effective theory
(<ref>). On the other hand, the row N_c=4_31/5 is described by
the pure SU(n)_-k CS effective theory (without coupling to fermionic
fields).
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
2_26/5 3.618 0,3/5 1,ζ_3^1
4_31/5 7.236 0,1/4,17/20,3/5 SU(2)_-3 CS
4_21/5 7.236 0,3/4,7/20,3/5 SU(2)_-3^f CS
6_36/5 10.85 0,1/3,1/3,14/15,14/15,3/5
6_16/5 10.85 0,2/3,2/3,4/15,4/15,3/5
8_1/5 14.47 0,3/8,3/8,1/2,39/40,39/40,1/10,3/5
8_36/5 14.47 0,1/4,1/4,1/2,1/10,17/20,17/20,3/5
8_6/5 14.47 0,1/2,1/2,1/2,1/10,1/10,1/10,3/5
8_31/5 14.47 0,1/8,1/8,1/2,1/10,29/40,29/40,3/5
8_11/5 14.47 0,5/8,5/8,1/2,1/10,9/40,9/40,3/5
8_26/5 14.47 0,0,1/4,3/4,17/20,7/20,3/5,3/5
8_26/5 14.47 0,0,0,1/2,1/10,3/5,3/5,3/5
8_16/5 14.47 0,3/4,3/4,1/2,1/10,7/20,7/20,3/5
8_21/5 14.47 0,7/8,7/8,1/2,1/10,3/5,19/40,19/40
10_6/5 18.09 0,2/5,2/5,3/5,3/5,0,0,1/5,1/5,3/5
10_26/5 18.09 0,1/5,1/5,4/5,4/5,4/5,4/5,2/5,2/5,3/5
The following non-Abelian family is described by effective SU(2)_3 CS theory.
So it is called the SU(2)_3 non-Abelian family. The states in the following
table are time-reversal conjugates of those in the previous table. This family
contains FQH state <cit.>
Ψ_ν=-3/2({z̅_i})=[χ_3({z̅_i})]^2, N_c =4_19/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
2_14/5 3.618 0,2/5 1,ζ_3^1
4_9/5 7.236 0,3/4,3/20,2/5 SU(2)_3 CS
4_19/5 7.236 0,1/4,13/20,2/5 SU(2)_3^f CS
6_4/5 10.85 0,2/3,2/3,1/15,1/15,2/5
6_24/5 10.85 0,1/3,1/3,11/15,11/15,2/5
8_39/5 14.47 0,5/8,5/8,1/2,1/40,1/40,9/10,2/5
8_4/5 14.47 0,3/4,3/4,1/2,9/10,3/20,3/20,2/5
8_34/5 14.47 0,1/2,1/2,1/2,9/10,9/10,9/10,2/5
8_9/5 14.47 0,7/8,7/8,1/2,9/10,11/40,11/40,2/5
8_29/5 14.47 0,3/8,3/8,1/2,9/10,31/40,31/40,2/5
8_14/5 14.47 0,0,1/4,3/4,3/20,13/20,2/5,2/5
8_14/5 14.47 0,0,0,1/2,9/10,2/5,2/5,2/5
8_24/5 14.47 0,1/4,1/4,1/2,9/10,13/20,13/20,2/5
8_19/5 14.47 0,1/8,1/8,1/2,9/10,2/5,21/40,21/40
10_34/5 18.09 0,2/5,2/5,3/5,3/5,0,0,4/5,4/5,2/5
10_14/5 18.09 0,1/5,1/5,4/5,4/5,1/5,1/5,2/5,3/5,3/5
The following SU(2)_2 Ising non-Abelian family contains FQH states <cit.>
Ψ_ν=1({z_i})=[χ_2({z_i})]^2, N_c=3_5/2
and
Ψ_ν=1 = Pf( 1/(z_i-z_j) ) ∏ (z_i-z_j), N_c=3_3/2
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
3_1/2 4 0,1/2,1/16 1,1,ζ_2^1
3_15/2 4 0,1/2,15/16
3_3/2 4 0,1/2,3/16
3_13/2 4 0,1/2,13/16
3_5/2 4 0,1/2,5/16
3_11/2 4 0,1/2,11/16
3_7/2 4 0,1/2,7/16
3_9/2 4 0,1/2,9/16
6_1/2 8 0,1/4,3/4,1/2,15/16,3/16
6_15/2 8 0,1/4,3/4,1/2,1/16,13/16
6_3/2 8 0,1/4,3/4,1/2,1/16,5/16
6_13/2 8 0,1/4,3/4,1/2,15/16,11/16
6_5/2 8 0,1/4,3/4,1/2,3/16,7/16
6_11/2 8 0,1/4,3/4,1/2,13/16,9/16
6_7/2 8 0,1/4,3/4,1/2,5/16,9/16
6_9/2 8 0,1/4,3/4,1/2,11/16,7/16
9_1/2 12 0,5/6,5/6,1/3,1/3,1/2,7/48,7/48,13/16
9_1/2 12 0,1/6,1/6,2/3,2/3,1/2,47/48,47/48,5/16
9_15/2 12 0,5/6,5/6,1/3,1/3,1/2,1/48,1/48,11/16
9_15/2 12 0,1/6,1/6,2/3,2/3,1/2,41/48,41/48,3/16
9_3/2 12 0,5/6,5/6,1/3,1/3,1/2,15/16,13/48,13/48
9_3/2 12 0,1/6,1/6,2/3,2/3,1/2,5/48,5/48,7/16
9_13/2 12 0,5/6,5/6,1/3,1/3,1/2,43/48,43/48,9/16
9_13/2 12 0,1/6,1/6,2/3,2/3,1/2,1/16,35/48,35/48
9_5/2 12 0,5/6,5/6,1/3,1/3,1/2,1/16,19/48,19/48
9_5/2 12 0,1/6,1/6,2/3,2/3,1/2,11/48,11/48,9/16
9_11/2 12 0,5/6,5/6,1/3,1/3,1/2,37/48,37/48,7/16
9_11/2 12 0,1/6,1/6,2/3,2/3,1/2,15/16,29/48,29/48
9_7/2 12 0,5/6,5/6,1/3,1/3,1/2,3/16,25/48,25/48
9_7/2 12 0,1/6,1/6,2/3,2/3,1/2,11/16,17/48,17/48
9_9/2 12 0,5/6,5/6,1/3,1/3,1/2,5/16,31/48,31/48
9_9/2 12 0,1/6,1/6,2/3,2/3,1/2,13/16,23/48,23/48
The following SU(2)_5 non-Abelian family contains the FQH state <cit.>
Ψ_ν=-5/2({z̅_i})=[χ_5({z̅_i})]^2, N_c=6_1/7
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
3_8/7 9.295 0,6/7,2/7 1,ζ_5^1,ζ_5^2
6_1/7 18.59 0,3/4,6/7,17/28,1/28,2/7
6_15/7 18.59 0,1/4,3/28,6/7,2/7,15/28
9_50/7 27.88 0,2/3,2/3,6/7,11/21,11/21,20/21,20/21,2/7
9_22/7 27.88 0,1/3,1/3,6/7,4/21,4/21,2/7,13/21,13/21
12_1/7 37.18 0,7/8,7/8,1/2,6/7,41/56,41/56,5/14,9/56,9/56,11/14,2/7
12_50/7 37.18 0,3/4,3/4,1/2,6/7,5/14,17/28,17/28,1/28,1/28,11/14,2/7
12_8/7 37.18 0,0,1/4,3/4,3/28,6/7,6/7,17/28,1/28,2/7,2/7,15/28
12_8/7 37.18 0,0,0,1/2,6/7,6/7,6/7,5/14,11/14,2/7,2/7,2/7
12_43/7 37.18 0,5/8,5/8,1/2,6/7,5/14,27/56,27/56,51/56,51/56,11/14,2/7
12_15/7 37.18 0,1/8,1/8,1/2,55/56,55/56,6/7,5/14,11/14,2/7,23/56,23/56
12_36/7 37.18 0,1/2,1/2,1/2,6/7,5/14,5/14,5/14,11/14,11/14,11/14,2/7
12_22/7 37.18 0,1/4,1/4,1/2,3/28,3/28,6/7,5/14,11/14,2/7,15/28,15/28
12_29/7 37.18 0,3/8,3/8,1/2,6/7,13/56,13/56,5/14,11/14,2/7,37/56,37/56
The following SU(2)_-5 non-Abelian family (or SU(5)_2 non-Abelian family) contains the FQH state <cit.>
Ψ_ν=5/2({z_i})=[χ_5({z_i})]^2, N_c=6_55/7
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
3_48/7 9.295 0,1/7,5/7 1,ζ_5^1,ζ_5^2
6_55/7 18.59 0,1/4,1/7,11/28,27/28,5/7
6_41/7 18.59 0,3/4,25/28,1/7,5/7,13/28
9_6/7 27.88 0,1/3,1/3,1/7,10/21,10/21,1/21,1/21,5/7
9_34/7 27.88 0,2/3,2/3,1/7,17/21,17/21,5/7,8/21,8/21
12_55/7 37.18 0,1/8,1/8,1/2,1/7,15/56,15/56,9/14,47/56,47/56,3/14,5/7
12_6/7 37.18 0,1/4,1/4,1/2,1/7,9/14,11/28,11/28,27/28,27/28,3/14,5/7
12_48/7 37.18 0,0,1/4,3/4,25/28,1/7,1/7,11/28,27/28,5/7,5/7,13/28
12_48/7 37.18 0,0,0,1/2,1/7,1/7,1/7,9/14,3/14,5/7,5/7,5/7
12_13/7 37.18 0,3/8,3/8,1/2,1/7,9/14,29/56,29/56,5/56,5/56,3/14,5/7
12_41/7 37.18 0,7/8,7/8,1/2,1/56,1/56,1/7,9/14,3/14,5/7,33/56,33/56
12_20/7 37.18 0,1/2,1/2,1/2,1/7,9/14,9/14,9/14,3/14,3/14,3/14,5/7
12_34/7 37.18 0,3/4,3/4,1/2,25/28,25/28,1/7,9/14,3/14,5/7,13/28,13/28
12_27/7 37.18 0,5/8,5/8,1/2,1/7,43/56,43/56,9/14,3/14,5/7,19/56,19/56
The following three non-Abelian families are obtained
by stacking two FQH states from the two
Fibonacci non-Abelian families.
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
4_0 13.09 0,2/5,3/5,0 1,ζ_3^1,ζ_3^1,ζ_8^2
8_1 26.18 0,1/4,17/20,13/20,2/5,3/5,0,1/4
8_7 26.18 0,3/4,3/20,7/20,2/5,3/5,0,3/4
12_2 39.27 0,1/3,1/3,14/15,14/15,11/15,11/15,2/5,3/5,0,1/3,1/3
12_6 39.27 0,2/3,2/3,1/15,1/15,4/15,4/15,2/5,3/5,0,2/3,2/3
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
4_12/5 13.09 0,3/5,3/5,1/5 1,ζ_3^1,ζ_3^1,ζ_8^2
8_7/5 26.18 0,3/4,7/20,7/20,3/5,3/5,19/20,1/5
8_17/5 26.18 0,1/4,17/20,17/20,3/5,3/5,1/5,9/20
12_2/5 39.27 0,2/3,2/3,4/15,4/15,4/15,4/15,3/5,3/5,13/15,13/15,1/5
12_22/5 39.27 0,1/3,1/3,14/15,14/15,14/15,14/15,3/5,3/5,1/5,8/15,8/15
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
4_28/5 13.09 0,2/5,2/5,4/5 1,ζ_3^1,ζ_3^1,ζ_8^2
8_33/5 26.18 0,1/4,13/20,13/20,2/5,2/5,1/20,4/5
8_23/5 26.18 0,3/4,3/20,3/20,2/5,2/5,4/5,11/20
12_38/5 39.27 0,1/3,1/3,11/15,11/15,11/15,11/15,2/5,2/5,2/15,2/15,4/5
12_18/5 39.27 0,2/3,2/3,1/15,1/15,1/15,1/15,2/5,2/5,4/5,7/15,7/15
The following SU(2)_7 non-Abelian family contains the FQH state <cit.>
Ψ_ν=-7/2({z̅_i})=[χ_7({z̅_i})]^2, N_c=8_13/3
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
4_10/3 19.23 0,1/3,2/9,2/3 1,ζ_7^1,ζ_7^2,ζ_7^3
8_7/3 38.46 0,3/4,1/12,1/3,35/36,2/9,2/3,5/12
8_13/3 38.46 0,1/4,1/3,7/12,2/9,17/36,11/12,2/3
12_4/3 57.70 0,2/3,2/3,0,0,1/3,8/9,8/9,2/9,1/3,1/3,2/3
12_16/3 57.70 0,1/3,1/3,1/3,2/3,2/3,2/9,5/9,5/9,0,0,2/3
The following SU(2)_-7 non-Abelian family contains the FQH state <cit.>
Ψ_ν=7/2({z_i})=[χ_7({z_i})]^2, N_c=8_35/3
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
4_14/3 19.23 0,2/3,7/9,1/3 1,ζ_7^1,ζ_7^2,ζ_7^3
8_17/3 38.46 0,1/4,11/12,2/3,1/36,7/9,1/3,7/12
8_11/3 38.46 0,3/4,2/3,5/12,7/9,19/36,1/12,1/3
12_20/3 57.70 0,1/3,1/3,0,0,2/3,1/9,1/9,7/9,1/3,2/3,2/3
12_8/3 57.70 0,2/3,2/3,1/3,1/3,2/3,7/9,4/9,4/9,0,0,1/3
The following SU(2)_4 non-Abelian family contains the FQH state <cit.>
Ψ_ν=1/2({z_i})=[χ_2({z_i})]^4, N_c=10_3
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
5_2 12 0,0,1/8,5/8,1/3 1,1,ζ_4^1,ζ_4^1,2
5_2 12 0,0,7/8,3/8,1/3
10_1 24 0,0,3/4,3/4,1/8,7/8,3/8,5/8,1/12,1/3
10_1 24 0,0,3/4,3/4,1/16,1/16,9/16,9/16,1/12,1/3
10_1 24 0,0,3/4,3/4,13/16,13/16,5/16,5/16,1/12,1/3
10_3 24 0,0,1/4,1/4,1/8,7/8,3/8,5/8,1/3,7/12
10_3 24 0,0,1/4,1/4,3/16,3/16,11/16,11/16,1/3,7/12 SU(4)_-2^f, SU(4)_-2
10_3 24 0,0,1/4,1/4,15/16,15/16,7/16,7/16,1/3,7/12
15_0 36 0,0,2/3,2/3,2/3,2/3,1/24,1/24,7/8,3/8,13/24,13/24,0,0,1/3
15_0 36 0,0,2/3,2/3,2/3,2/3,1/8,19/24,19/24,7/24,7/24,5/8,0,0,1/3
15_4 36 0,0,1/3,1/3,1/3,1/3,23/24,23/24,1/8,5/8,11/24,11/24,1/3,2/3,2/3
15_4 36 0,0,1/3,1/3,1/3,1/3,7/8,5/24,5/24,17/24,17/24,3/8,1/3,2/3,2/3
The following SU(2)_-4 non-Abelian family contains the FQH state <cit.>
Ψ_ν=-1/2({z̅_i})=[χ_2({z̅_i})]^4, N_c=10_5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
5_6 12 0,0,7/8,3/8,2/3 1,1,ζ_4^1,ζ_4^1,2
5_6 12 0,0,1/8,5/8,2/3
10_7 24 0,0,1/4,1/4,1/8,7/8,3/8,5/8,11/12,2/3
10_7 24 0,0,1/4,1/4,15/16,15/16,7/16,7/16,11/12,2/3
10_7 24 0,0,1/4,1/4,3/16,3/16,11/16,11/16,11/12,2/3
10_5 24 0,0,3/4,3/4,1/8,7/8,3/8,5/8,2/3,5/12
10_5 24 0,0,3/4,3/4,13/16,13/16,5/16,5/16,2/3,5/12 SU(4)_2^f, SU(4)_2
10_5 24 0,0,3/4,3/4,1/16,1/16,9/16,9/16,2/3,5/12
15_0 36 0,0,1/3,1/3,1/3,1/3,7/8,5/24,5/24,17/24,17/24,3/8,0,0,2/3
15_0 36 0,0,1/3,1/3,1/3,1/3,23/24,23/24,1/8,5/8,11/24,11/24,0,0,2/3
15_4 36 0,0,2/3,2/3,2/3,2/3,1/8,19/24,19/24,7/24,7/24,5/8,1/3,1/3,2/3
15_4 36 0,0,2/3,2/3,2/3,2/3,1/24,1/24,7/8,3/8,13/24,13/24,1/3,1/3,2/3
The following SU(2)_9 non-Abelian family contains the FQH state <cit.>
Ψ_ν=-9/2({z̅_i})=[χ_9({z̅_i})]^2, N_c=10_5/11
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
5_16/11 34.64 0,9/11,2/11,1/11,6/11 1,ζ_9^1,ζ_9^2,ζ_9^3,ζ_9^4
10_5/11 69.29 0,3/4,9/11,25/44,41/44,2/11,1/11,37/44,13/44,6/11
10_27/11 69.29 0,1/4,3/44,9/11,2/11,19/44,1/11,15/44,35/44,6/11
15_82/11 103.9 0,2/3,2/3,9/11,16/33,16/33,28/33,28/33,2/11,1/11,25/33,25/33,7/33,7/33,6/11
15_38/11 103.9 0,1/3,1/3,5/33,5/33,9/11,2/11,17/33,17/33,1/11,14/33,14/33,29/33,29/33,6/11
The following SU(2)_-9 non-Abelian family contains the FQH state <cit.>
Ψ_ν=9/2({z_i})=[χ_9({z_i})]^2, N_c=10_83/11
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
5_72/11 34.64 0,2/11,9/11,10/11,5/11 1,ζ_9^1,ζ_9^2,ζ_9^3,ζ_9^4
10_83/11 69.29 0,1/4,2/11,19/44,3/44,9/11,10/11,7/44,31/44,5/11
10_61/11 69.29 0,3/4,41/44,2/11,9/11,25/44,10/11,29/44,9/44,5/11
15_6/11 103.9 0,1/3,1/3,2/11,17/33,17/33,5/33,5/33,9/11,10/11,8/33,8/33,26/33,26/33,5/11
15_50/11 103.9 0,2/3,2/3,28/33,28/33,2/11,9/11,16/33,16/33,10/11,19/33,19/33,4/33,4/33,5/11
The following SU(3)_4 non-Abelian family contains the FQH state <cit.>
Ψ_ν=-7/4({z̅_i})=χ_1({z̅_i})[χ_4({z̅_i})]^3, N_c=15_4/7
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
5_18/7 35.34 0,6/7,6/7,1/7,3/7 1,ζ_5^2,ζ_5^2,ζ_12^2,ζ_12^4
10_11/7 70.68 0,3/4,6/7,6/7,17/28,17/28,25/28,1/7,5/28,3/7
10_25/7 70.68 0,1/4,3/28,3/28,6/7,6/7,1/7,11/28,19/28,3/7
15_4/7 106.0 0,2/3,2/3,6/7,6/7,11/21,11/21,11/21,11/21,1/7,17/21,17/21,2/21,2/21,3/7
15_32/7 106.0 0,1/3,1/3,6/7,6/7,4/21,4/21,4/21,4/21,1/7,10/21,10/21,16/21,16/21,3/7
The following SU(3)_-4 non-Abelian family contains the FQH state <cit.>
Ψ_ν=7/4({z_i})=χ_1({z_i})[χ_4({z_i})]^3, N_c=15_52/7
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
5_38/7 35.34 0,1/7,1/7,6/7,4/7 1,ζ_5^2,ζ_5^2,ζ_12^2,ζ_12^4
10_45/7 70.68 0,1/4,1/7,1/7,11/28,11/28,3/28,6/7,23/28,4/7
10_31/7 70.68 0,3/4,25/28,25/28,1/7,1/7,6/7,17/28,9/28,4/7
15_52/7 106.0 0,1/3,1/3,1/7,1/7,10/21,10/21,10/21,10/21,6/7,4/21,4/21,19/21,19/21,4/7
15_24/7 106.0 0,2/3,2/3,1/7,1/7,17/21,17/21,17/21,17/21,6/7,11/21,11/21,5/21,5/21,4/7
The following four non-Abelian families are obtained by stacking an FQH state
from the two SU(2)_± 3 non-Abelian families with an FQH state from the two
SU(2)_± 5 non-Abelian families.
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_58/35 33.63 0,2/5,1/7,5/7,19/35,4/35 1,ζ_3^1,ζ_5^1,ζ_5^2,2.915,3.635
12_23/35 67.26 0,3/4,3/20,2/5,25/28,1/7,5/7,13/28,41/140,19/35,4/35,121/140
12_93/35 67.26 0,1/4,13/20,2/5,1/7,11/28,27/28,5/7,111/140,19/35,4/35,51/140
18_268/35 100.8 0,2/3,2/3,1/15,1/15,2/5,1/7,17/21,17/21,5/7,8/21,8/21,22/105,22/105,19/35,4/35,82/105,82/105
18_128/35 100.8 0,1/3,1/3,11/15,11/15,2/5,1/7,10/21,10/21,1/21,1/21,5/7,92/105,92/105,19/35,4/35,47/105,47/105
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_138/35 33.63 0,2/5,6/7,2/7,9/35,24/35 1,ζ_3^1,ζ_5^1,ζ_5^2,2.915,3.635
12_103/35 67.26 0,3/4,3/20,2/5,6/7,17/28,1/28,2/7,1/140,9/35,24/35,61/140
12_173/35 67.26 0,1/4,13/20,2/5,3/28,6/7,2/7,15/28,9/35,71/140,131/140,24/35
18_68/35 100.8 0,2/3,2/3,1/15,1/15,2/5,6/7,11/21,11/21,20/21,20/21,2/7,97/105,97/105,9/35,24/35,37/105,37/105
18_208/35 100.8 0,1/3,1/3,11/15,11/15,2/5,6/7,4/21,4/21,2/7,13/21,13/21,9/35,62/105,62/105,2/105,2/105,24/35
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_142/35 33.63 0,3/5,1/7,5/7,26/35,11/35 1,ζ_3^1,ζ_5^1,ζ_5^2,2.915,3.635
12_177/35 67.26 0,1/4,17/20,3/5,1/7,11/28,27/28,5/7,139/140,26/35,11/35,79/140
12_107/35 67.26 0,3/4,7/20,3/5,25/28,1/7,5/7,13/28,26/35,69/140,9/140,11/35
18_212/35 100.8 0,1/3,1/3,14/15,14/15,3/5,1/7,10/21,10/21,1/21,1/21,5/7,8/105,8/105,26/35,11/35,68/105,68/105
18_72/35 100.8 0,2/3,2/3,4/15,4/15,3/5,1/7,17/21,17/21,5/7,8/21,8/21,26/35,43/105,43/105,103/105,103/105,11/35
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_222/35 33.63 0,3/5,6/7,2/7,16/35,31/35 1,ζ_3^1,ζ_5^1,ζ_5^2,2.915,3.635
12_257/35 67.26 0,1/4,17/20,3/5,3/28,6/7,2/7,15/28,99/140,16/35,31/35,19/140
12_187/35 67.26 0,3/4,7/20,3/5,6/7,17/28,1/28,2/7,29/140,16/35,31/35,89/140
18_12/35 100.8 0,1/3,1/3,14/15,14/15,3/5,6/7,4/21,4/21,2/7,13/21,13/21,83/105,83/105,16/35,31/35,23/105,23/105
18_152/35 100.8 0,2/3,2/3,4/15,4/15,3/5,6/7,11/21,11/21,20/21,20/21,2/7,13/105,13/105,16/35,31/35,58/105,58/105
The following two non-Abelian families are obtained by stacking an FQH state
from the two Fibonacci non-Abelian families with an FQH state from the Ising
non-Abelian family.
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_3/10 14.47 0,1/2,11/16,9/10,2/5,7/80 1,1,ζ_2^1,ζ_3^1,ζ_3^1,2.288
6_73/10 14.47 0,1/2,9/16,9/10,2/5,77/80
6_13/10 14.47 0,1/2,13/16,9/10,2/5,17/80
6_63/10 14.47 0,1/2,7/16,9/10,2/5,67/80
6_23/10 14.47 0,1/2,15/16,9/10,2/5,27/80
6_53/10 14.47 0,1/2,5/16,9/10,2/5,57/80
6_33/10 14.47 0,1/2,1/16,9/10,2/5,37/80
6_43/10 14.47 0,1/2,3/16,9/10,2/5,47/80
12_3/10 28.94 0,1/4,3/4,1/2,13/16,9/16,9/10,3/20,13/20,2/5,77/80,17/80
12_73/10 28.94 0,1/4,3/4,1/2,11/16,7/16,9/10,3/20,13/20,2/5,7/80,67/80
12_13/10 28.94 0,1/4,3/4,1/2,15/16,11/16,9/10,3/20,13/20,2/5,7/80,27/80
12_63/10 28.94 0,1/4,3/4,1/2,5/16,9/16,9/10,3/20,13/20,2/5,77/80,57/80
12_23/10 28.94 0,1/4,3/4,1/2,1/16,13/16,9/10,3/20,13/20,2/5,17/80,37/80
12_53/10 28.94 0,1/4,3/4,1/2,3/16,7/16,9/10,3/20,13/20,2/5,67/80,47/80
12_33/10 28.94 0,1/4,3/4,1/2,15/16,3/16,9/10,3/20,13/20,2/5,27/80,47/80
12_43/10 28.94 0,1/4,3/4,1/2,1/16,5/16,9/10,3/20,13/20,2/5,57/80,37/80
18_3/10 43.41 0,1/6,1/6,2/3,2/3,1/2,15/16,29/48,29/48,1/15,1/15,9/10,2/5,17/30,17/30,1/240,1/240,27/80
18_3/10 43.41 0,5/6,5/6,1/3,1/3,1/2,37/48,37/48,7/16,9/10,7/30,7/30,11/15,11/15,2/5,67/80,41/240,41/240
18_73/10 43.41 0,5/6,5/6,1/3,1/3,1/2,5/16,31/48,31/48,9/10,7/30,7/30,11/15,11/15,2/5,11/240,11/240,57/80
18_73/10 43.41 0,1/6,1/6,2/3,2/3,1/2,13/16,23/48,23/48,1/15,1/15,9/10,2/5,17/30,17/30,211/240,211/240,17/80
18_13/10 43.41 0,1/6,1/6,2/3,2/3,1/2,1/16,35/48,35/48,1/15,1/15,9/10,2/5,17/30,17/30,31/240,31/240,37/80
18_13/10 43.41 0,5/6,5/6,1/3,1/3,1/2,43/48,43/48,9/16,9/10,7/30,7/30,11/15,11/15,2/5,77/80,71/240,71/240
18_63/10 43.41 0,5/6,5/6,1/3,1/3,1/2,3/16,25/48,25/48,9/10,7/30,7/30,11/15,11/15,2/5,221/240,221/240,47/80
18_63/10 43.41 0,1/6,1/6,2/3,2/3,1/2,11/16,17/48,17/48,1/15,1/15,9/10,2/5,17/30,17/30,7/80,181/240,181/240
18_23/10 43.41 0,5/6,5/6,1/3,1/3,1/2,1/48,1/48,11/16,9/10,7/30,7/30,11/15,11/15,2/5,7/80,101/240,101/240
18_23/10 43.41 0,1/6,1/6,2/3,2/3,1/2,41/48,41/48,3/16,1/15,1/15,9/10,2/5,17/30,17/30,61/240,61/240,47/80
18_53/10 43.41 0,5/6,5/6,1/3,1/3,1/2,1/16,19/48,19/48,9/10,7/30,7/30,11/15,11/15,2/5,191/240,191/240,37/80
18_53/10 43.41 0,1/6,1/6,2/3,2/3,1/2,11/48,11/48,9/16,1/15,1/15,9/10,2/5,17/30,17/30,77/80,151/240,151/240
18_33/10 43.41 0,5/6,5/6,1/3,1/3,1/2,7/48,7/48,13/16,9/10,7/30,7/30,11/15,11/15,2/5,17/80,131/240,131/240
18_33/10 43.41 0,1/6,1/6,2/3,2/3,1/2,47/48,47/48,5/16,1/15,1/15,9/10,2/5,17/30,17/30,57/80,91/240,91/240
18_43/10 43.41 0,5/6,5/6,1/3,1/3,1/2,15/16,13/48,13/48,9/10,7/30,7/30,11/15,11/15,2/5,161/240,161/240,27/80
18_43/10 43.41 0,1/6,1/6,2/3,2/3,1/2,5/48,5/48,7/16,1/15,1/15,9/10,2/5,17/30,17/30,67/80,121/240,121/240
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_77/10 14.47 0,1/2,5/16,1/10,3/5,73/80 1,1,ζ_2^1,ζ_3^1,ζ_3^1,2.288
6_7/10 14.47 0,1/2,7/16,1/10,3/5,3/80
6_67/10 14.47 0,1/2,3/16,1/10,3/5,63/80
6_17/10 14.47 0,1/2,9/16,1/10,3/5,13/80
6_57/10 14.47 0,1/2,1/16,1/10,3/5,53/80
6_27/10 14.47 0,1/2,11/16,1/10,3/5,23/80
6_47/10 14.47 0,1/2,15/16,1/10,3/5,43/80
6_37/10 14.47 0,1/2,13/16,1/10,3/5,33/80
12_77/10 28.94 0,1/4,3/4,1/2,3/16,7/16,1/10,17/20,7/20,3/5,3/80,63/80
12_7/10 28.94 0,1/4,3/4,1/2,5/16,9/16,1/10,17/20,7/20,3/5,73/80,13/80
12_67/10 28.94 0,1/4,3/4,1/2,1/16,5/16,1/10,17/20,7/20,3/5,73/80,53/80
12_17/10 28.94 0,1/4,3/4,1/2,11/16,7/16,1/10,17/20,7/20,3/5,3/80,23/80
12_57/10 28.94 0,1/4,3/4,1/2,15/16,3/16,1/10,17/20,7/20,3/5,63/80,43/80
12_27/10 28.94 0,1/4,3/4,1/2,13/16,9/16,1/10,17/20,7/20,3/5,13/80,33/80
12_47/10 28.94 0,1/4,3/4,1/2,1/16,13/16,1/10,17/20,7/20,3/5,53/80,33/80
12_37/10 28.94 0,1/4,3/4,1/2,15/16,11/16,1/10,17/20,7/20,3/5,23/80,43/80
18_77/10 43.41 0,1/6,1/6,2/3,2/3,1/2,11/48,11/48,9/16,1/10,23/30,23/30,4/15,4/15,3/5,13/80,199/240,199/240
18_77/10 43.41 0,5/6,5/6,1/3,1/3,1/2,1/16,19/48,19/48,14/15,14/15,1/10,3/5,13/30,13/30,239/240,239/240,53/80
18_7/10 43.41 0,5/6,5/6,1/3,1/3,1/2,3/16,25/48,25/48,14/15,14/15,1/10,3/5,13/30,13/30,29/240,29/240,63/80
18_7/10 43.41 0,1/6,1/6,2/3,2/3,1/2,11/16,17/48,17/48,1/10,23/30,23/30,4/15,4/15,3/5,229/240,229/240,23/80
18_67/10 43.41 0,1/6,1/6,2/3,2/3,1/2,5/48,5/48,7/16,1/10,23/30,23/30,4/15,4/15,3/5,3/80,169/240,169/240
18_67/10 43.41 0,5/6,5/6,1/3,1/3,1/2,15/16,13/48,13/48,14/15,14/15,1/10,3/5,13/30,13/30,209/240,209/240,43/80
18_17/10 43.41 0,5/6,5/6,1/3,1/3,1/2,5/16,31/48,31/48,14/15,14/15,1/10,3/5,13/30,13/30,73/80,59/240,59/240
18_17/10 43.41 0,1/6,1/6,2/3,2/3,1/2,13/16,23/48,23/48,1/10,23/30,23/30,4/15,4/15,3/5,19/240,19/240,33/80
18_57/10 43.41 0,1/6,1/6,2/3,2/3,1/2,47/48,47/48,5/16,1/10,23/30,23/30,4/15,4/15,3/5,73/80,139/240,139/240
18_57/10 43.41 0,5/6,5/6,1/3,1/3,1/2,7/48,7/48,13/16,14/15,14/15,1/10,3/5,13/30,13/30,179/240,179/240,33/80
18_27/10 43.41 0,5/6,5/6,1/3,1/3,1/2,37/48,37/48,7/16,14/15,14/15,1/10,3/5,13/30,13/30,3/80,89/240,89/240
18_27/10 43.41 0,1/6,1/6,2/3,2/3,1/2,15/16,29/48,29/48,1/10,23/30,23/30,4/15,4/15,3/5,49/240,49/240,43/80
18_47/10 43.41 0,1/6,1/6,2/3,2/3,1/2,41/48,41/48,3/16,1/10,23/30,23/30,4/15,4/15,3/5,63/80,109/240,109/240
18_47/10 43.41 0,5/6,5/6,1/3,1/3,1/2,1/48,1/48,11/16,14/15,14/15,1/10,3/5,13/30,13/30,23/80,149/240,149/240
18_37/10 43.41 0,5/6,5/6,1/3,1/3,1/2,43/48,43/48,9/16,14/15,14/15,1/10,3/5,13/30,13/30,13/80,119/240,119/240
18_37/10 43.41 0,1/6,1/6,2/3,2/3,1/2,1/16,35/48,35/48,1/10,23/30,23/30,4/15,4/15,3/5,79/240,79/240,53/80
𝔰𝔬(10)_2 non-Abelian family:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_0 20 0,0,1/5,4/5,0,1/2 1,1,2,2,√(5),√(5)
6_0 20 0,0,1/5,4/5,1/4,3/4
12_1 40 0,0,1/4,1/4,1/20,1/5,4/5,9/20,0,1/4,3/4,1/2
12_1 40 0,0,1/4,1/4,1/20,1/5,4/5,9/20,1/16,1/16,9/16,9/16
12_1 40 0,0,1/4,1/4,1/20,1/5,4/5,9/20,13/16,13/16,5/16,5/16
12_7 40 0,0,3/4,3/4,19/20,1/5,4/5,11/20,0,1/4,3/4,1/2
12_7 40 0,0,3/4,3/4,19/20,1/5,4/5,11/20,15/16,15/16,7/16,7/16
12_7 40 0,0,3/4,3/4,19/20,1/5,4/5,11/20,3/16,3/16,11/16,11/16
18_2 60 0,0,1/3,1/3,1/3,1/3,2/15,2/15,1/5,4/5,8/15,8/15,0,5/6,5/6,1/3,1/3,1/2
18_2 60 0,0,1/3,1/3,1/3,1/3,2/15,2/15,1/5,4/5,8/15,8/15,1/12,1/12,1/4,3/4,7/12,7/12
18_6 60 0,0,2/3,2/3,2/3,2/3,13/15,13/15,1/5,4/5,7/15,7/15,11/12,11/12,1/4,3/4,5/12,5/12
18_6 60 0,0,2/3,2/3,2/3,2/3,13/15,13/15,1/5,4/5,7/15,7/15,0,1/6,1/6,2/3,2/3,1/2
𝔰𝔬(5)_2 non-Abelian family:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_4 20 0,0,2/5,3/5,1/4,3/4 1,1,2,2,√(5),√(5)
6_4 20 0,0,2/5,3/5,0,1/2
12_3 40 0,0,3/4,3/4,3/20,7/20,2/5,3/5,0,1/4,3/4,1/2
12_3 40 0,0,3/4,3/4,3/20,7/20,2/5,3/5,3/16,3/16,11/16,11/16
12_3 40 0,0,3/4,3/4,3/20,7/20,2/5,3/5,15/16,15/16,7/16,7/16
12_5 40 0,0,1/4,1/4,17/20,13/20,2/5,3/5,0,1/4,3/4,1/2
12_5 40 0,0,1/4,1/4,17/20,13/20,2/5,3/5,13/16,13/16,5/16,5/16
12_5 40 0,0,1/4,1/4,17/20,13/20,2/5,3/5,1/16,1/16,9/16,9/16
18_2 60 0,0,2/3,2/3,2/3,2/3,1/15,1/15,4/15,4/15,2/5,3/5,0,1/6,1/6,2/3,2/3,1/2
18_2 60 0,0,2/3,2/3,2/3,2/3,1/15,1/15,4/15,4/15,2/5,3/5,11/12,11/12,1/4,3/4,5/12,5/12
18_6 60 0,0,1/3,1/3,1/3,1/3,14/15,14/15,11/15,11/15,2/5,3/5,1/12,1/12,1/4,3/4,7/12,7/12
18_6 60 0,0,1/3,1/3,1/3,1/3,14/15,14/15,11/15,11/15,2/5,3/5,0,5/6,5/6,1/3,1/3,1/2
The following SU(2)_11 non-Abelian family contains the FQH state <cit.>
Ψ_ν=-11/2({z̅_i})=[χ_11({z̅_i})]^2, N_c=12_59/13
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_46/13 56.74 0,4/13,2/13,7/13,6/13,12/13 1,ζ_11^1,ζ_11^2,ζ_11^3,ζ_11^4,ζ_11^5
12_33/13 113.4 0,3/4,3/52,4/13,47/52,2/13,15/52,7/13,11/52,6/13,12/13,35/52
12_59/13 113.4 0,1/4,4/13,29/52,2/13,21/52,41/52,7/13,37/52,6/13,12/13,9/52
18_20/13 170.2 0,2/3,2/3,38/39,38/39,4/13,2/13,32/39,32/39,8/39,8/39,7/13,5/39,5/39,6/13,12/13,23/39,23/39
18_72/13 170.2 0,1/3,1/3,4/13,25/39,25/39,2/13,19/39,19/39,34/39,34/39,7/13,31/39,31/39,6/13,12/13,10/39,10/39
The following SU(2)_-11 non-Abelian family contains the FQH state <cit.>
Ψ_ν=11/2({z_i})=[χ_11({z_i})]^2, N_c=12_45/13
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_58/13 56.74 0,9/13,11/13,6/13,7/13,1/13 1,ζ_11^1,ζ_11^2,ζ_11^3,ζ_11^4,ζ_11^5
12_71/13 113.4 0,1/4,49/52,9/13,5/52,11/13,37/52,6/13,41/52,7/13,1/13,17/52
12_45/13 113.4 0,3/4,9/13,23/52,11/13,31/52,11/52,6/13,15/52,7/13,1/13,43/52
18_84/13 170.2 0,1/3,1/3,1/39,1/39,9/13,11/13,7/39,7/39,31/39,31/39,6/13,34/39,34/39,7/13,1/13,16/39,16/39
18_32/13 170.2 0,2/3,2/3,9/13,14/39,14/39,11/13,20/39,20/39,5/39,5/39,6/13,8/39,8/39,7/13,1/13,29/39,29/39
The following is the 𝔰𝔬(8)_-3 non-Abelian family:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_8/3 74.61 0,1/9,1/9,1/9,1/3,2/3 1,ζ_7^3,ζ_7^3,ζ_7^3,ζ_16^4,ζ_16^6
12_5/3 149.2 0,3/4,1/9,1/9,1/9,31/36,31/36,31/36,1/12,1/3,2/3,5/12
12_11/3 149.2 0,1/4,1/9,1/9,1/9,13/36,13/36,13/36,1/3,7/12,11/12,2/3
18_2/3 223.8 0,2/3,2/3,1/9,1/9,1/9,7/9,7/9,7/9,7/9,7/9,7/9,0,0,1/3,1/3,1/3,2/3
18_14/3 223.8 0,1/3,1/3,1/9,1/9,1/9,4/9,4/9,4/9,4/9,4/9,4/9,1/3,2/3,2/3,0,0,2/3
The following is the 𝔰𝔬(8)_3 non-Abelian family:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_16/3 74.61 0,8/9,8/9,8/9,2/3,1/3 1,ζ_7^3,ζ_7^3,ζ_7^3,ζ_16^4,ζ_16^6
12_19/3 149.2 0,1/4,8/9,8/9,8/9,5/36,5/36,5/36,11/12,2/3,1/3,7/12
12_13/3 149.2 0,3/4,8/9,8/9,8/9,23/36,23/36,23/36,2/3,5/12,1/12,1/3
18_22/3 223.8 0,1/3,1/3,8/9,8/9,8/9,2/9,2/9,2/9,2/9,2/9,2/9,0,0,2/3,1/3,2/3,2/3
18_10/3 223.8 0,2/3,2/3,8/9,8/9,8/9,5/9,5/9,5/9,5/9,5/9,5/9,1/3,1/3,2/3,0,0,1/3
The following is the (E_7)_3 non-Abelian family:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_2 100.6 0,6/7,5/7,3/7,0,1/3 1,3+√(21)/2,3+√(21)/2,3+√(21)/2,5+√(21)/2,7+√(21)/2
12_1 201.2 0,3/4,6/7,5/28,5/7,17/28,3/7,13/28,0,3/4,1/12,1/3
12_3 201.2 0,1/4,27/28,3/28,6/7,5/7,19/28,3/7,0,1/4,1/3,7/12
18_0 301.8 0,2/3,2/3,2/21,2/21,6/7,5/7,8/21,8/21,3/7,11/21,11/21,0,2/3,2/3,0,0,1/3
18_4 301.8 0,1/3,1/3,1/21,1/21,6/7,4/21,4/21,16/21,16/21,5/7,3/7,0,1/3,1/3,1/3,2/3,2/3
The following is the (E_7)_-3 non-Abelian family:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
6_6 100.6 0,1/7,2/7,4/7,0,2/3 1,3+√(21)/2,3+√(21)/2,3+√(21)/2,5+√(21)/2,7+√(21)/2
12_7 201.2 0,1/4,1/7,23/28,2/7,11/28,4/7,15/28,0,1/4,11/12,2/3
12_5 201.2 0,3/4,1/28,25/28,1/7,2/7,9/28,4/7,0,3/4,2/3,5/12
18_0 301.8 0,1/3,1/3,19/21,19/21,1/7,2/7,13/21,13/21,4/7,10/21,10/21,0,1/3,1/3,0,0,2/3
18_4 301.8 0,2/3,2/3,20/21,20/21,1/7,17/21,17/21,5/21,5/21,2/7,4/7,0,2/3,2/3,1/3,1/3,2/3
The following are other non-Abelian families:
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_1/4 27.31 0,1/2,27/32,27/32,1/4,3/4,7/32 1,1,ζ_6^1,ζ_6^1,ζ_6^2,ζ_6^2,ζ_6^3
7_29/4 27.31 0,1/2,23/32,23/32,1/4,3/4,3/32
7_5/4 27.31 0,1/2,31/32,31/32,1/4,3/4,11/32
7_25/4 27.31 0,1/2,19/32,19/32,1/4,3/4,31/32
7_9/4 27.31 0,1/2,3/32,3/32,1/4,3/4,15/32
7_21/4 27.31 0,1/2,15/32,15/32,1/4,3/4,27/32
7_13/4 27.31 0,1/2,7/32,7/32,1/4,3/4,19/32
7_17/4 27.31 0,1/2,11/32,11/32,1/4,3/4,23/32
14_1/4 54.62 0,1/4,3/4,1/2,31/32,31/32,23/32,23/32,0,1/4,3/4,1/2,3/32,11/32
14_29/4 54.62 0,1/4,3/4,1/2,27/32,27/32,19/32,19/32,0,1/4,3/4,1/2,31/32,7/32
14_5/4 54.62 0,1/4,3/4,1/2,3/32,3/32,27/32,27/32,0,1/4,3/4,1/2,7/32,15/32
14_25/4 54.62 0,1/4,3/4,1/2,23/32,23/32,15/32,15/32,0,1/4,3/4,1/2,3/32,27/32
14_9/4 54.62 0,1/4,3/4,1/2,31/32,31/32,7/32,7/32,0,1/4,3/4,1/2,11/32,19/32
14_21/4 54.62 0,1/4,3/4,1/2,11/32,11/32,19/32,19/32,0,1/4,3/4,1/2,31/32,23/32
14_13/4 54.62 0,1/4,3/4,1/2,3/32,3/32,11/32,11/32,0,1/4,3/4,1/2,23/32,15/32
14_17/4 54.62 0,1/4,3/4,1/2,7/32,7/32,15/32,15/32,0,1/4,3/4,1/2,27/32,19/32
21_1/4 81.94 0,5/6,5/6,1/3,1/3,1/2,89/96,89/96,89/96,89/96,19/32,19/32,1/12,1/12,1/4,3/4,7/12,7/12,31/32,29/96,29/96
21_1/4 81.94 0,1/6,1/6,2/3,2/3,1/2,3/32,3/32,73/96,73/96,73/96,73/96,11/12,11/12,1/4,3/4,5/12,5/12,13/96,13/96,15/32
21_29/4 81.94 0,1/6,1/6,2/3,2/3,1/2,31/32,31/32,61/96,61/96,61/96,61/96,11/12,11/12,1/4,3/4,5/12,5/12,1/96,1/96,11/32
21_29/4 81.94 0,5/6,5/6,1/3,1/3,1/2,77/96,77/96,77/96,77/96,15/32,15/32,1/12,1/12,1/4,3/4,7/12,7/12,27/32,17/96,17/96
21_5/4 81.94 0,5/6,5/6,1/3,1/3,1/2,5/96,5/96,5/96,5/96,23/32,23/32,1/12,1/12,1/4,3/4,7/12,7/12,3/32,41/96,41/96
21_5/4 81.94 0,1/6,1/6,2/3,2/3,1/2,85/96,85/96,85/96,85/96,7/32,7/32,11/12,11/12,1/4,3/4,5/12,5/12,25/96,25/96,19/32
21_25/4 81.94 0,1/6,1/6,2/3,2/3,1/2,27/32,27/32,49/96,49/96,49/96,49/96,11/12,11/12,1/4,3/4,5/12,5/12,85/96,85/96,7/32
21_25/4 81.94 0,5/6,5/6,1/3,1/3,1/2,65/96,65/96,65/96,65/96,11/32,11/32,1/12,1/12,1/4,3/4,7/12,7/12,5/96,5/96,23/32
21_9/4 81.94 0,1/6,1/6,2/3,2/3,1/2,1/96,1/96,1/96,1/96,11/32,11/32,11/12,11/12,1/4,3/4,5/12,5/12,23/32,37/96,37/96
21_9/4 81.94 0,5/6,5/6,1/3,1/3,1/2,27/32,27/32,17/96,17/96,17/96,17/96,1/12,1/12,1/4,3/4,7/12,7/12,7/32,53/96,53/96
21_21/4 81.94 0,5/6,5/6,1/3,1/3,1/2,7/32,7/32,53/96,53/96,53/96,53/96,1/12,1/12,1/4,3/4,7/12,7/12,89/96,89/96,19/32
21_21/4 81.94 0,1/6,1/6,2/3,2/3,1/2,23/32,23/32,37/96,37/96,37/96,37/96,11/12,11/12,1/4,3/4,5/12,5/12,3/32,73/96,73/96
21_13/4 81.94 0,1/6,1/6,2/3,2/3,1/2,13/96,13/96,13/96,13/96,15/32,15/32,11/12,11/12,1/4,3/4,5/12,5/12,27/32,49/96,49/96
21_13/4 81.94 0,5/6,5/6,1/3,1/3,1/2,31/32,31/32,29/96,29/96,29/96,29/96,1/12,1/12,1/4,3/4,7/12,7/12,65/96,65/96,11/32
21_17/4 81.94 0,5/6,5/6,1/3,1/3,1/2,3/32,3/32,41/96,41/96,41/96,41/96,1/12,1/12,1/4,3/4,7/12,7/12,77/96,77/96,15/32
21_17/4 81.94 0,1/6,1/6,2/3,2/3,1/2,25/96,25/96,25/96,25/96,19/32,19/32,11/12,11/12,1/4,3/4,5/12,5/12,31/32,61/96,61/96
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_31/4 27.31 0,1/2,5/32,5/32,1/4,3/4,25/32 1,1,ζ_6^1,ζ_6^1,ζ_6^2,ζ_6^2,ζ_6^3
7_3/4 27.31 0,1/2,9/32,9/32,1/4,3/4,29/32
7_27/4 27.31 0,1/2,1/32,1/32,1/4,3/4,21/32
7_7/4 27.31 0,1/2,13/32,13/32,1/4,3/4,1/32
7_23/4 27.31 0,1/2,29/32,29/32,1/4,3/4,17/32
7_11/4 27.31 0,1/2,17/32,17/32,1/4,3/4,5/32
7_19/4 27.31 0,1/2,25/32,25/32,1/4,3/4,13/32
7_15/4 27.31 0,1/2,21/32,21/32,1/4,3/4,9/32
14_31/4 54.62 0,1/4,3/4,1/2,1/32,1/32,9/32,9/32,0,1/4,3/4,1/2,29/32,21/32
14_3/4 54.62 0,1/4,3/4,1/2,5/32,5/32,13/32,13/32,0,1/4,3/4,1/2,1/32,25/32
14_27/4 54.62 0,1/4,3/4,1/2,29/32,29/32,5/32,5/32,0,1/4,3/4,1/2,25/32,17/32
14_7/4 54.62 0,1/4,3/4,1/2,9/32,9/32,17/32,17/32,0,1/4,3/4,1/2,29/32,5/32
14_23/4 54.62 0,1/4,3/4,1/2,1/32,1/32,25/32,25/32,0,1/4,3/4,1/2,21/32,13/32
14_11/4 54.62 0,1/4,3/4,1/2,21/32,21/32,13/32,13/32,0,1/4,3/4,1/2,1/32,9/32
14_19/4 54.62 0,1/4,3/4,1/2,29/32,29/32,21/32,21/32,0,1/4,3/4,1/2,9/32,17/32
14_15/4 54.62 0,1/4,3/4,1/2,25/32,25/32,17/32,17/32,0,1/4,3/4,1/2,5/32,13/32
21_31/4 81.94 0,5/6,5/6,1/3,1/3,1/2,29/32,29/32,23/96,23/96,23/96,23/96,1/12,1/12,1/4,3/4,7/12,7/12,83/96,83/96,17/32
21_31/4 81.94 0,1/6,1/6,2/3,2/3,1/2,7/96,7/96,7/96,7/96,13/32,13/32,11/12,11/12,1/4,3/4,5/12,5/12,1/32,67/96,67/96
21_3/4 81.94 0,5/6,5/6,1/3,1/3,1/2,1/32,1/32,35/96,35/96,35/96,35/96,1/12,1/12,1/4,3/4,7/12,7/12,95/96,95/96,21/32
21_3/4 81.94 0,1/6,1/6,2/3,2/3,1/2,19/96,19/96,19/96,19/96,17/32,17/32,11/12,11/12,1/4,3/4,5/12,5/12,5/32,79/96,79/96
21_27/4 81.94 0,1/6,1/6,2/3,2/3,1/2,91/96,91/96,91/96,91/96,9/32,9/32,11/12,11/12,1/4,3/4,5/12,5/12,29/32,55/96,55/96
21_27/4 81.94 0,5/6,5/6,1/3,1/3,1/2,11/96,11/96,11/96,11/96,25/32,25/32,1/12,1/12,1/4,3/4,7/12,7/12,71/96,71/96,13/32
21_7/4 81.94 0,1/6,1/6,2/3,2/3,1/2,31/96,31/96,31/96,31/96,21/32,21/32,11/12,11/12,1/4,3/4,5/12,5/12,91/96,91/96,9/32
21_7/4 81.94 0,5/6,5/6,1/3,1/3,1/2,5/32,5/32,47/96,47/96,47/96,47/96,1/12,1/12,1/4,3/4,7/12,7/12,11/96,11/96,25/32
21_23/4 81.94 0,1/6,1/6,2/3,2/3,1/2,5/32,5/32,79/96,79/96,79/96,79/96,11/12,11/12,1/4,3/4,5/12,5/12,25/32,43/96,43/96
21_23/4 81.94 0,5/6,5/6,1/3,1/3,1/2,95/96,95/96,95/96,95/96,21/32,21/32,1/12,1/12,1/4,3/4,7/12,7/12,9/32,59/96,59/96
21_11/4 81.94 0,1/6,1/6,2/3,2/3,1/2,25/32,25/32,43/96,43/96,43/96,43/96,11/12,11/12,1/4,3/4,5/12,5/12,7/96,7/96,13/32
21_11/4 81.94 0,5/6,5/6,1/3,1/3,1/2,9/32,9/32,59/96,59/96,59/96,59/96,1/12,1/12,1/4,3/4,7/12,7/12,29/32,23/96,23/96
21_19/4 81.94 0,5/6,5/6,1/3,1/3,1/2,83/96,83/96,83/96,83/96,17/32,17/32,1/12,1/12,1/4,3/4,7/12,7/12,5/32,47/96,47/96
21_19/4 81.94 0,1/6,1/6,2/3,2/3,1/2,1/32,1/32,67/96,67/96,67/96,67/96,11/12,11/12,1/4,3/4,5/12,5/12,31/96,31/96,21/32
21_15/4 81.94 0,5/6,5/6,1/3,1/3,1/2,71/96,71/96,71/96,71/96,13/32,13/32,1/12,1/12,1/4,3/4,7/12,7/12,1/32,35/96,35/96
21_15/4 81.94 0,1/6,1/6,2/3,2/3,1/2,29/32,29/32,55/96,55/96,55/96,55/96,11/12,11/12,1/4,3/4,5/12,5/12,19/96,19/96,17/32
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_2 28 0,0,1/7,2/7,4/7,1/8,5/8 1,1,2,2,2,√(7),√(7)
7_2 28 0,0,1/7,2/7,4/7,7/8,3/8
14_1 56 0,0,3/4,3/4,1/28,25/28,1/7,2/7,9/28,4/7,1/8,7/8,3/8,5/8
14_1 56 0,0,3/4,3/4,1/28,25/28,1/7,2/7,9/28,4/7,1/16,1/16,9/16,9/16
14_1 56 0,0,3/4,3/4,1/28,25/28,1/7,2/7,9/28,4/7,13/16,13/16,5/16,5/16
14_3 56 0,0,1/4,1/4,1/7,23/28,2/7,11/28,4/7,15/28,1/8,7/8,3/8,5/8
14_3 56 0,0,1/4,1/4,1/7,23/28,2/7,11/28,4/7,15/28,3/16,3/16,11/16,11/16
14_3 56 0,0,1/4,1/4,1/7,23/28,2/7,11/28,4/7,15/28,15/16,15/16,7/16,7/16
21_0 84 0,0,2/3,2/3,2/3,2/3,20/21,20/21,1/7,17/21,17/21,5/21,5/21,2/7,4/7,1/24,1/24,7/8,3/8,13/24,13/24
21_0 84 0,0,2/3,2/3,2/3,2/3,20/21,20/21,1/7,17/21,17/21,5/21,5/21,2/7,4/7,1/8,19/24,19/24,7/24,7/24,5/8
21_4 84 0,0,1/3,1/3,1/3,1/3,19/21,19/21,1/7,2/7,13/21,13/21,4/7,10/21,10/21,23/24,23/24,1/8,5/8,11/24,11/24
21_4 84 0,0,1/3,1/3,1/3,1/3,19/21,19/21,1/7,2/7,13/21,13/21,4/7,10/21,10/21,7/8,5/24,5/24,17/24,17/24,3/8
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_6 28 0,0,6/7,5/7,3/7,7/8,3/8 1,1,2,2,2,√(7),√(7)
7_6 28 0,0,6/7,5/7,3/7,1/8,5/8
14_7 56 0,0,1/4,1/4,27/28,3/28,6/7,5/7,19/28,3/7,1/8,7/8,3/8,5/8
14_7 56 0,0,1/4,1/4,27/28,3/28,6/7,5/7,19/28,3/7,15/16,15/16,7/16,7/16
14_7 56 0,0,1/4,1/4,27/28,3/28,6/7,5/7,19/28,3/7,3/16,3/16,11/16,11/16
14_5 56 0,0,3/4,3/4,6/7,5/28,5/7,17/28,3/7,13/28,1/8,7/8,3/8,5/8
14_5 56 0,0,3/4,3/4,6/7,5/28,5/7,17/28,3/7,13/28,13/16,13/16,5/16,5/16
14_5 56 0,0,3/4,3/4,6/7,5/28,5/7,17/28,3/7,13/28,1/16,1/16,9/16,9/16
21_0 84 0,0,1/3,1/3,1/3,1/3,1/21,1/21,6/7,4/21,4/21,16/21,16/21,5/7,3/7,7/8,5/24,5/24,17/24,17/24,3/8
21_0 84 0,0,1/3,1/3,1/3,1/3,1/21,1/21,6/7,4/21,4/21,16/21,16/21,5/7,3/7,23/24,23/24,1/8,5/8,11/24,11/24
21_4 84 0,0,2/3,2/3,2/3,2/3,2/21,2/21,6/7,5/7,8/21,8/21,3/7,11/21,11/21,1/8,19/24,19/24,7/24,7/24,5/8
21_4 84 0,0,2/3,2/3,2/3,2/3,2/21,2/21,6/7,5/7,8/21,8/21,3/7,11/21,11/21,1/24,1/24,7/8,3/8,13/24,13/24
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_8/5 86.75 0,4/5,2/15,0,2/5,1/3,4/5 1,ζ_13^1,ζ_13^2,ζ_13^3,ζ_13^4,ζ_13^5,ζ_13^6
14_3/5 173.5 0,3/4,4/5,11/20,53/60,2/15,0,3/4,3/20,2/5,1/12,1/3,4/5,11/20
14_13/5 173.5 0,1/4,1/20,4/5,2/15,23/60,0,1/4,13/20,2/5,1/3,7/12,1/20,4/5
21_38/5 260.2 0,2/3,2/3,4/5,7/15,7/15,2/15,4/5,4/5,0,2/3,2/3,1/15,1/15,2/5,0,0,1/3,4/5,7/15,7/15
21_18/5 260.2 0,1/3,1/3,2/15,2/15,4/5,2/15,7/15,7/15,0,1/3,1/3,11/15,11/15,2/5,1/3,2/3,2/3,2/15,2/15,4/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_32/5 86.75 0,1/5,13/15,0,3/5,2/3,1/5 1,ζ_13^1,ζ_13^2,ζ_13^3,ζ_13^4,ζ_13^5,ζ_13^6
14_37/5 173.5 0,1/4,1/5,9/20,7/60,13/15,0,1/4,17/20,3/5,11/12,2/3,1/5,9/20
14_27/5 173.5 0,3/4,19/20,1/5,13/15,37/60,0,3/4,7/20,3/5,2/3,5/12,19/20,1/5
21_2/5 260.2 0,1/3,1/3,1/5,8/15,8/15,13/15,1/5,1/5,0,1/3,1/3,14/15,14/15,3/5,0,0,2/3,1/5,8/15,8/15
21_22/5 260.2 0,2/3,2/3,13/15,13/15,1/5,13/15,8/15,8/15,0,2/3,2/3,4/15,4/15,3/5,1/3,1/3,2/3,13/15,13/15,1/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_1 93.25 0,1/2,1/2,1/4,1/4,5/8,0 1,ζ_6^2,ζ_6^2,2+√(2),2+√(2),2ζ_6^2,3+√(8)
14_0 186.5 0,3/4,1/4,1/4,1/2,1/2,0,0,1/4,1/4,3/8,5/8,0,3/4
14_2 186.5 0,1/4,3/4,3/4,1/2,1/2,1/4,1/4,1/2,1/2,7/8,5/8,0,1/4
21_7 279.7 0,2/3,2/3,1/6,1/6,1/6,1/6,1/2,1/2,11/12,11/12,11/12,11/12,1/4,1/4,7/24,7/24,5/8,0,2/3,2/3
21_3 279.7 0,1/3,1/3,5/6,5/6,5/6,5/6,1/2,1/2,1/4,1/4,7/12,7/12,7/12,7/12,23/24,23/24,5/8,0,1/3,1/3
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
7_7 93.25 0,1/2,1/2,3/4,3/4,3/8,0 1,ζ_6^2,ζ_6^2,2+√(2),2+√(2),2ζ_6^2,3+√(8)
14_0 186.5 0,1/4,3/4,3/4,1/2,1/2,0,0,3/4,3/4,3/8,5/8,0,1/4
14_6 186.5 0,3/4,1/4,1/4,1/2,1/2,3/4,3/4,1/2,1/2,1/8,3/8,0,3/4
21_1 279.7 0,1/3,1/3,5/6,5/6,5/6,5/6,1/2,1/2,1/12,1/12,1/12,1/12,3/4,3/4,17/24,17/24,3/8,0,1/3,1/3
21_5 279.7 0,2/3,2/3,1/6,1/6,1/6,1/6,1/2,1/2,3/4,3/4,5/12,5/12,5/12,5/12,1/24,1/24,3/8,0,2/3,2/3
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_2/5 47.36 0,2/5,2/5,2/5,4/5,4/5,4/5,1/5 1,ζ_3^1,ζ_3^1,ζ_3^1,ζ_8^2,ζ_8^2,ζ_8^2,2+√(5)
16_37/5 94.72 0,3/4,3/20,3/20,3/20,2/5,2/5,2/5,4/5,4/5,4/5,11/20,11/20,11/20,19/20,1/5
16_7/5 94.72 0,1/4,13/20,13/20,13/20,2/5,2/5,2/5,1/20,1/20,1/20,4/5,4/5,4/5,1/5,9/20
24_32/5 142.0 0,2/3,2/3,1/15,1/15,1/15,1/15,1/15,1/15,2/5,2/5,2/5,4/5,4/5,4/5,7/15,7/15,7/15,7/15,7/15,7/15,13/15,13/15,1/5
24_12/5 142.0 0,1/3,1/3,11/15,11/15,11/15,11/15,11/15,11/15,2/5,2/5,2/5,2/15,2/15,2/15,2/15,2/15,2/15,4/5,4/5,4/5,1/5,8/15,8/15
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_14/5 47.36 0,2/5,2/5,3/5,0,0,4/5,2/5 1,ζ_3^1,ζ_3^1,ζ_3^1,ζ_8^2,ζ_8^2,ζ_8^2,2+√(5)
16_9/5 94.72 0,3/4,3/20,3/20,7/20,2/5,2/5,3/5,0,0,4/5,3/4,3/4,11/20,3/20,2/5
16_19/5 94.72 0,1/4,17/20,13/20,13/20,2/5,2/5,3/5,0,0,1/20,4/5,1/4,1/4,13/20,2/5
24_4/5 142.0 0,2/3,2/3,1/15,1/15,1/15,1/15,4/15,4/15,2/5,2/5,3/5,0,0,4/5,2/3,2/3,2/3,2/3,7/15,7/15,1/15,1/15,2/5
24_24/5 142.0 0,1/3,1/3,14/15,14/15,11/15,11/15,11/15,11/15,2/5,2/5,3/5,0,0,2/15,2/15,4/5,1/3,1/3,1/3,1/3,11/15,11/15,2/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_26/5 47.36 0,2/5,3/5,3/5,0,0,1/5,3/5 1,ζ_3^1,ζ_3^1,ζ_3^1,ζ_8^2,ζ_8^2,ζ_8^2,2+√(5)
16_31/5 94.72 0,1/4,17/20,17/20,13/20,2/5,3/5,3/5,0,0,1/5,1/4,1/4,9/20,17/20,3/5
16_21/5 94.72 0,3/4,3/20,7/20,7/20,2/5,3/5,3/5,0,0,19/20,1/5,3/4,3/4,7/20,3/5
24_36/5 142.0 0,1/3,1/3,14/15,14/15,14/15,14/15,11/15,11/15,2/5,3/5,3/5,0,0,1/5,1/3,1/3,1/3,1/3,8/15,8/15,14/15,14/15,3/5
24_16/5 142.0 0,2/3,2/3,1/15,1/15,4/15,4/15,4/15,4/15,2/5,3/5,3/5,0,0,13/15,13/15,1/5,2/3,2/3,2/3,2/3,4/15,4/15,3/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_38/5 47.36 0,3/5,3/5,3/5,1/5,1/5,1/5,4/5 1,ζ_3^1,ζ_3^1,ζ_3^1,ζ_8^2,ζ_8^2,ζ_8^2,2+√(5)
16_3/5 94.72 0,1/4,17/20,17/20,17/20,3/5,3/5,3/5,1/5,1/5,1/5,9/20,9/20,9/20,1/20,4/5
16_33/5 94.72 0,3/4,7/20,7/20,7/20,3/5,3/5,3/5,19/20,19/20,19/20,1/5,1/5,1/5,4/5,11/20
24_8/5 142.0 0,1/3,1/3,14/15,14/15,14/15,14/15,14/15,14/15,3/5,3/5,3/5,1/5,1/5,1/5,8/15,8/15,8/15,8/15,8/15,8/15,2/15,2/15,4/5
24_28/5 142.0 0,2/3,2/3,4/15,4/15,4/15,4/15,4/15,4/15,3/5,3/5,3/5,13/15,13/15,13/15,13/15,13/15,13/15,1/5,1/5,1/5,4/5,7/15,7/15
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_8/15 69.59 0,3/5,1/3,2/9,2/3,14/15,37/45,4/15 1,ζ_3^1,ζ_7^1,ζ_7^2,ζ_7^3,3.040,4.097,4.658
16_113/15 139.1 0,3/4,7/20,3/5,1/12,1/3,35/36,2/9,2/3,5/12,14/15,41/60,37/45,103/180,1/60,4/15
16_23/15 139.1 0,1/4,17/20,3/5,1/3,7/12,2/9,17/36,11/12,2/3,14/15,11/60,13/180,37/45,4/15,31/60
24_98/15 208.7 0,2/3,2/3,4/15,4/15,3/5,0,0,1/3,8/9,8/9,2/9,1/3,1/3,2/3,14/15,3/5,3/5,37/45,22/45,22/45,14/15,14/15,4/15
24_38/15 208.7 0,1/3,1/3,14/15,14/15,3/5,1/3,2/3,2/3,2/9,5/9,5/9,0,0,2/3,14/15,4/15,4/15,7/45,7/45,37/45,4/15,3/5,3/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_28/15 69.59 0,3/5,2/3,7/9,1/3,4/15,17/45,14/15 1,ζ_3^1,ζ_7^1,ζ_7^2,ζ_7^3,3.040,4.097,4.658
16_13/15 139.1 0,3/4,7/20,3/5,2/3,5/12,7/9,19/36,1/12,1/3,1/60,4/15,23/180,17/45,14/15,41/60
16_43/15 139.1 0,1/4,17/20,3/5,11/12,2/3,1/36,7/9,1/3,7/12,4/15,31/60,113/180,17/45,14/15,11/60
24_118/15 208.7 0,2/3,2/3,4/15,4/15,3/5,1/3,1/3,2/3,7/9,4/9,4/9,0,0,1/3,14/15,14/15,4/15,2/45,2/45,17/45,14/15,3/5,3/5
24_58/15 208.7 0,1/3,1/3,14/15,14/15,3/5,0,0,2/3,1/9,1/9,7/9,1/3,2/3,2/3,4/15,3/5,3/5,32/45,32/45,17/45,14/15,4/15,4/15
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_92/15 69.59 0,2/5,1/3,2/9,2/3,11/15,28/45,1/15 1,ζ_3^1,ζ_7^1,ζ_7^2,ζ_7^3,3.040,4.097,4.658
16_107/15 139.1 0,1/4,13/20,2/5,1/3,7/12,2/9,17/36,11/12,2/3,59/60,11/15,157/180,28/45,1/15,19/60
16_77/15 139.1 0,3/4,3/20,2/5,1/12,1/3,35/36,2/9,2/3,5/12,11/15,29/60,67/180,28/45,1/15,49/60
24_2/15 208.7 0,1/3,1/3,11/15,11/15,2/5,1/3,2/3,2/3,2/9,5/9,5/9,0,0,2/3,1/15,1/15,11/15,43/45,43/45,28/45,1/15,2/5,2/5
24_62/15 208.7 0,2/3,2/3,1/15,1/15,2/5,0,0,1/3,8/9,8/9,2/9,1/3,1/3,2/3,11/15,2/5,2/5,13/45,13/45,28/45,1/15,11/15,11/15
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_112/15 69.59 0,2/5,2/3,7/9,1/3,1/15,8/45,11/15 1,ζ_3^1,ζ_7^1,ζ_7^2,ζ_7^3,3.040,4.097,4.658
16_7/15 139.1 0,1/4,13/20,2/5,11/12,2/3,1/36,7/9,1/3,7/12,1/15,19/60,8/45,77/180,59/60,11/15
16_97/15 139.1 0,3/4,3/20,2/5,2/3,5/12,7/9,19/36,1/12,1/3,1/15,49/60,167/180,8/45,11/15,29/60
24_22/15 208.7 0,1/3,1/3,11/15,11/15,2/5,0,0,2/3,1/9,1/9,7/9,1/3,2/3,2/3,1/15,2/5,2/5,8/45,23/45,23/45,1/15,1/15,11/15
24_82/15 208.7 0,2/3,2/3,1/15,1/15,2/5,1/3,1/3,2/3,7/9,4/9,4/9,0,0,1/3,1/15,11/15,11/15,38/45,38/45,8/45,11/15,2/5,2/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_62/17 125.8 0,5/17,2/17,8/17,6/17,13/17,12/17,3/17 1,ζ_15^1,ζ_15^2,ζ_15^3,ζ_15^4,ζ_15^5,ζ_15^6,ζ_15^7
16_45/17 251.7 0,3/4,3/68,5/17,2/17,59/68,15/68,8/17,7/68,6/17,13/17,35/68,12/17,31/68,63/68,3/17
16_79/17 251.7 0,1/4,5/17,37/68,2/17,25/68,49/68,8/17,6/17,41/68,1/68,13/17,65/68,12/17,3/17,29/68
24_28/17 377.6 0,2/3,2/3,49/51,49/51,5/17,2/17,40/51,40/51,7/51,7/51,8/17,1/51,1/51,6/17,13/17,22/51,22/51,12/17,19/51,19/51,43/51,43/51,3/17
24_96/17 377.6 0,1/3,1/3,5/17,32/51,32/51,2/17,23/51,23/51,41/51,41/51,8/17,35/51,35/51,6/17,5/51,5/51,13/17,2/51,2/51,12/17,3/17,26/51,26/51
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
8_74/17 125.8 0,12/17,15/17,9/17,11/17,4/17,5/17,14/17 1,ζ_15^1,ζ_15^2,ζ_15^3,ζ_15^4,ζ_15^5,ζ_15^6,ζ_15^7
16_91/17 251.7 0,1/4,65/68,12/17,15/17,9/68,53/68,9/17,61/68,11/17,4/17,33/68,5/17,37/68,5/68,14/17
16_57/17 251.7 0,3/4,12/17,31/68,15/17,43/68,19/68,9/17,11/17,27/68,67/68,4/17,3/68,5/17,14/17,39/68
24_108/17 377.6 0,1/3,1/3,2/51,2/51,12/17,15/17,11/51,11/51,44/51,44/51,9/17,50/51,50/51,11/17,4/17,29/51,29/51,5/17,32/51,32/51,8/51,8/51,14/17
24_40/17 377.6 0,2/3,2/3,12/17,19/51,19/51,15/17,28/51,28/51,10/51,10/51,9/17,16/51,16/51,11/17,46/51,46/51,4/17,49/51,49/51,5/17,14/17,25/51,25/51
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
9_0 16 0,0,1/2,1/2,1/16,15/16,7/16,9/16,0 1,1,1,1,ζ_2^1,ζ_2^1,ζ_2^1,ζ_2^1,2
9_0 16 0,0,1/2,1/2,3/16,13/16,5/16,11/16,0
9_1 16 0,0,1/2,1/2,1/16,1/16,9/16,9/16,1/8
9_1 16 0,0,1/2,1/2,15/16,3/16,11/16,7/16,1/8
9_1 16 0,0,1/2,1/2,13/16,13/16,5/16,5/16,1/8
9_7 16 0,0,1/2,1/2,3/16,3/16,11/16,11/16,7/8
9_7 16 0,0,1/2,1/2,1/16,13/16,5/16,9/16,7/8
9_7 16 0,0,1/2,1/2,15/16,15/16,7/16,7/16,7/8
9_2 16 0,0,1/2,1/2,1/16,3/16,11/16,9/16,1/4
9_2 16 0,0,1/2,1/2,15/16,13/16,5/16,7/16,1/4
9_6 16 0,0,1/2,1/2,15/16,13/16,5/16,7/16,3/4
9_6 16 0,0,1/2,1/2,1/16,3/16,11/16,9/16,3/4
9_3 16 0,0,1/2,1/2,3/16,3/16,11/16,11/16,3/8
9_3 16 0,0,1/2,1/2,1/16,13/16,5/16,9/16,3/8
9_3 16 0,0,1/2,1/2,15/16,15/16,7/16,7/16,3/8
9_5 16 0,0,1/2,1/2,13/16,13/16,5/16,5/16,5/8
9_5 16 0,0,1/2,1/2,15/16,3/16,11/16,7/16,5/8
9_5 16 0,0,1/2,1/2,1/16,1/16,9/16,9/16,5/8
9_4 16 0,0,1/2,1/2,3/16,13/16,5/16,11/16,1/2
9_4 16 0,0,1/2,1/2,1/16,15/16,7/16,9/16,1/2
18_0 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,1/16,13/16,13/16,5/16,5/16,9/16,9/16,1/8,7/8
18_0 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,0,0,1/2,1/2,1/2,1/2,1/8,7/8
18_0 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/8,1/8,7/8,7/8,3/8,3/8,5/8,5/8,1/8,7/8
18_0 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/4,1/4,1/4,1/4,3/4,3/4,3/4,3/4,1/8,7/8
18_1 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,15/16,3/16,13/16,5/16,11/16,7/16,9/16,0,1/4
18_1 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,1/8,1/8,5/8,5/8,1/2,1/2,0,1/4
18_1 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,7/8,7/8,1/4,1/4,3/4,3/4,3/8,3/8,0,1/4
18_7 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,15/16,3/16,13/16,5/16,11/16,7/16,9/16,0,3/4
18_7 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/8,1/8,1/4,1/4,3/4,3/4,5/8,5/8,0,3/4
18_7 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,7/8,7/8,3/8,3/8,1/2,1/2,0,3/4
18_2 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,1/16,13/16,13/16,5/16,5/16,9/16,9/16,1/8,3/8
18_2 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/8,1/8,1/8,1/8,5/8,5/8,5/8,5/8,1/8,3/8
18_2 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/8,3/8
18_2 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,7/8,7/8,7/8,7/8,3/8,3/8,3/8,3/8,1/8,3/8
18_6 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,1/16,13/16,13/16,5/16,5/16,9/16,9/16,7/8,5/8
18_6 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,7/8,7/8,7/8,7/8,3/8,3/8,3/8,3/8,7/8,5/8
18_6 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,1/4,1/4,3/4,3/4,1/2,1/2,7/8,5/8
18_6 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/8,1/8,1/8,1/8,5/8,5/8,5/8,5/8,7/8,5/8
18_3 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,15/16,3/16,13/16,5/16,11/16,7/16,9/16,1/4,1/2
18_3 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/8,1/8,1/4,1/4,3/4,3/4,5/8,5/8,1/4,1/2
18_3 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,7/8,7/8,3/8,3/8,1/2,1/2,1/4,1/2
18_5 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,15/16,3/16,13/16,5/16,11/16,7/16,9/16,3/4,1/2
18_5 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,7/8,7/8,1/4,1/4,3/4,3/4,3/8,3/8,3/4,1/2
18_5 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,1/8,1/8,5/8,5/8,1/2,1/2,3/4,1/2
18_4 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/16,1/16,13/16,13/16,5/16,5/16,9/16,9/16,3/8,5/8
18_4 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/4,1/4,1/4,1/4,3/4,3/4,3/4,3/4,3/8,5/8
18_4 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,1/8,1/8,7/8,7/8,3/8,3/8,5/8,5/8,3/8,5/8
18_4 32 0,0,1/4,1/4,3/4,3/4,1/2,1/2,0,0,0,0,1/2,1/2,1/2,1/2,3/8,5/8
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
9_5/14 37.18 0,1/2,3/16,1/7,9/14,3/14,5/7,37/112,101/112 1,1,ζ_2^1,ζ_5^1,ζ_5^1,ζ_5^2,ζ_5^2,2.548,3.177
9_103/14 37.18 0,1/2,1/16,1/7,9/14,3/14,5/7,23/112,87/112
9_19/14 37.18 0,1/2,5/16,1/7,9/14,3/14,5/7,51/112,3/112
9_89/14 37.18 0,1/2,15/16,1/7,9/14,3/14,5/7,9/112,73/112
9_33/14 37.18 0,1/2,7/16,1/7,9/14,3/14,5/7,65/112,17/112
9_75/14 37.18 0,1/2,13/16,1/7,9/14,3/14,5/7,107/112,59/112
9_47/14 37.18 0,1/2,9/16,1/7,9/14,3/14,5/7,79/112,31/112
9_61/14 37.18 0,1/2,11/16,1/7,9/14,3/14,5/7,93/112,45/112
18_5/14 74.36 0,1/4,3/4,1/2,1/16,5/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,23/112,51/112,3/112,87/112
18_103/14 74.36 0,1/4,3/4,1/2,15/16,3/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,9/112,37/112,101/112,73/112
18_19/14 74.36 0,1/4,3/4,1/2,3/16,7/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,37/112,65/112,101/112,17/112
18_89/14 74.36 0,1/4,3/4,1/2,1/16,13/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,107/112,23/112,87/112,59/112
18_33/14 74.36 0,1/4,3/4,1/2,5/16,9/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,79/112,51/112,3/112,31/112
18_75/14 74.36 0,1/4,3/4,1/2,15/16,11/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,9/112,93/112,73/112,45/112
18_47/14 74.36 0,1/4,3/4,1/2,11/16,7/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,93/112,65/112,17/112,45/112
18_61/14 74.36 0,1/4,3/4,1/2,13/16,9/16,25/28,1/7,9/14,11/28,27/28,3/14,5/7,13/28,107/112,79/112,31/112,59/112
27_5/14 111.5 0,1/6,1/6,2/3,2/3,1/2,5/48,5/48,7/16,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,83/336,83/336,65/112,17/112,275/336,275/336
27_5/14 111.5 0,5/6,5/6,1/3,1/3,1/2,15/16,13/48,13/48,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,9/112,139/336,139/336,331/336,331/336,73/112
27_103/14 111.5 0,1/6,1/6,2/3,2/3,1/2,47/48,47/48,5/16,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,41/336,41/336,51/112,3/112,233/336,233/336
27_103/14 111.5 0,5/6,5/6,1/3,1/3,1/2,7/48,7/48,13/16,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,107/112,97/336,97/336,289/336,289/336,59/112
27_19/14 111.5 0,1/6,1/6,2/3,2/3,1/2,11/48,11/48,9/16,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,79/112,125/336,125/336,317/336,317/336,31/112
27_19/14 111.5 0,5/6,5/6,1/3,1/3,1/2,1/16,19/48,19/48,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,23/112,181/336,181/336,37/336,37/336,87/112
27_89/14 111.5 0,1/6,1/6,2/3,2/3,1/2,41/48,41/48,3/16,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,335/336,335/336,37/112,101/112,191/336,191/336
27_89/14 111.5 0,5/6,5/6,1/3,1/3,1/2,1/48,1/48,11/16,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,55/336,55/336,93/112,247/336,247/336,45/112
27_33/14 111.5 0,1/6,1/6,2/3,2/3,1/2,11/16,17/48,17/48,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,93/112,167/336,167/336,23/336,23/336,45/112
27_33/14 111.5 0,5/6,5/6,1/3,1/3,1/2,3/16,25/48,25/48,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,37/112,223/336,223/336,101/112,79/336,79/336
27_75/14 111.5 0,1/6,1/6,2/3,2/3,1/2,1/16,35/48,35/48,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,293/336,293/336,23/112,87/112,149/336,149/336
27_75/14 111.5 0,5/6,5/6,1/3,1/3,1/2,43/48,43/48,9/16,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,13/336,13/336,79/112,31/112,205/336,205/336
27_47/14 111.5 0,1/6,1/6,2/3,2/3,1/2,13/16,23/48,23/48,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,107/112,209/336,209/336,65/336,65/336,59/112
27_47/14 111.5 0,5/6,5/6,1/3,1/3,1/2,5/16,31/48,31/48,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,265/336,265/336,51/112,3/112,121/336,121/336
27_61/14 111.5 0,1/6,1/6,2/3,2/3,1/2,15/16,29/48,29/48,1/7,17/21,17/21,13/42,13/42,9/14,37/42,37/42,3/14,5/7,8/21,8/21,9/112,251/336,251/336,107/336,107/336,73/112
27_61/14 111.5 0,5/6,5/6,1/3,1/3,1/2,37/48,37/48,7/16,41/42,41/42,1/7,9/14,10/21,10/21,1/21,1/21,3/14,5/7,23/42,23/42,307/336,307/336,65/112,17/112,163/336,163/336
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
9_107/14 37.18 0,1/2,13/16,6/7,5/14,11/14,2/7,75/112,11/112 1,1,ζ_2^1,ζ_5^1,ζ_5^1,ζ_5^2,ζ_5^2,2.548,3.177
9_9/14 37.18 0,1/2,15/16,6/7,5/14,11/14,2/7,89/112,25/112
9_93/14 37.18 0,1/2,11/16,6/7,5/14,11/14,2/7,61/112,109/112
9_23/14 37.18 0,1/2,1/16,6/7,5/14,11/14,2/7,103/112,39/112
9_79/14 37.18 0,1/2,9/16,6/7,5/14,11/14,2/7,47/112,95/112
9_37/14 37.18 0,1/2,3/16,6/7,5/14,11/14,2/7,5/112,53/112
9_65/14 37.18 0,1/2,7/16,6/7,5/14,11/14,2/7,33/112,81/112
9_51/14 37.18 0,1/2,5/16,6/7,5/14,11/14,2/7,19/112,67/112
18_107/14 74.36 0,1/4,3/4,1/2,15/16,11/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,89/112,61/112,109/112,25/112
18_9/14 74.36 0,1/4,3/4,1/2,1/16,13/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,103/112,75/112,11/112,39/112
18_93/14 74.36 0,1/4,3/4,1/2,13/16,9/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,75/112,47/112,11/112,95/112
18_23/14 74.36 0,1/4,3/4,1/2,15/16,3/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,5/112,89/112,25/112,53/112
18_79/14 74.36 0,1/4,3/4,1/2,11/16,7/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,33/112,61/112,109/112,81/112
18_37/14 74.36 0,1/4,3/4,1/2,1/16,5/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,103/112,19/112,39/112,67/112
18_65/14 74.36 0,1/4,3/4,1/2,5/16,9/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,19/112,47/112,95/112,67/112
18_51/14 74.36 0,1/4,3/4,1/2,3/16,7/16,3/28,6/7,5/14,17/28,1/28,11/14,2/7,15/28,5/112,33/112,81/112,53/112
27_107/14 111.5 0,1/6,1/6,2/3,2/3,1/2,1/16,35/48,35/48,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,103/112,197/336,197/336,5/336,5/336,39/112
27_107/14 111.5 0,5/6,5/6,1/3,1/3,1/2,43/48,43/48,9/16,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,253/336,253/336,47/112,95/112,61/336,61/336
27_9/14 111.5 0,1/6,1/6,2/3,2/3,1/2,41/48,41/48,3/16,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,5/112,239/336,239/336,47/336,47/336,53/112
27_9/14 111.5 0,5/6,5/6,1/3,1/3,1/2,1/48,1/48,11/16,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,295/336,295/336,61/112,109/112,103/336,103/336
27_93/14 111.5 0,1/6,1/6,2/3,2/3,1/2,15/16,29/48,29/48,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,89/112,155/336,155/336,299/336,299/336,25/112
27_93/14 111.5 0,5/6,5/6,1/3,1/3,1/2,37/48,37/48,7/16,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,33/112,211/336,211/336,19/336,19/336,81/112
27_23/14 111.5 0,1/6,1/6,2/3,2/3,1/2,47/48,47/48,5/16,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,281/336,281/336,19/112,89/336,89/336,67/112
27_23/14 111.5 0,5/6,5/6,1/3,1/3,1/2,7/48,7/48,13/16,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,1/336,1/336,75/112,11/112,145/336,145/336
27_79/14 111.5 0,1/6,1/6,2/3,2/3,1/2,13/16,23/48,23/48,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,75/112,113/336,113/336,11/112,257/336,257/336
27_79/14 111.5 0,5/6,5/6,1/3,1/3,1/2,5/16,31/48,31/48,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,19/112,169/336,169/336,313/336,313/336,67/112
27_37/14 111.5 0,1/6,1/6,2/3,2/3,1/2,5/48,5/48,7/16,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,323/336,323/336,33/112,81/112,131/336,131/336
27_37/14 111.5 0,5/6,5/6,1/3,1/3,1/2,15/16,13/48,13/48,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,43/336,43/336,89/112,25/112,187/336,187/336
27_65/14 111.5 0,1/6,1/6,2/3,2/3,1/2,11/16,17/48,17/48,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,71/336,71/336,61/112,109/112,215/336,215/336
27_65/14 111.5 0,5/6,5/6,1/3,1/3,1/2,3/16,25/48,25/48,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,5/112,127/336,127/336,271/336,271/336,53/112
27_51/14 111.5 0,1/6,1/6,2/3,2/3,1/2,11/48,11/48,9/16,1/42,1/42,6/7,5/14,11/21,11/21,20/21,20/21,11/14,2/7,19/42,19/42,29/336,29/336,47/112,95/112,173/336,173/336
27_51/14 111.5 0,5/6,5/6,1/3,1/3,1/2,1/16,19/48,19/48,6/7,4/21,4/21,29/42,29/42,5/14,5/42,5/42,11/14,2/7,13/21,13/21,103/112,85/336,85/336,229/336,229/336,39/112
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
9_12/5 52.36 0,0,33/40,13/40,1/5,1/5,1/8,5/8,3/5 1,1,ζ_8^1,ζ_8^1,ζ_8^2,ζ_8^2,ζ_8^3,ζ_8^3,ζ_8^4
9_12/5 52.36 0,0,3/40,23/40,1/5,1/5,7/8,3/8,3/5
18_7/5 104.7 0,0,3/4,3/4,3/40,33/40,13/40,23/40,19/20,19/20,1/5,1/5,1/8,7/8,3/8,5/8,7/20,3/5
18_7/5 104.7 0,0,3/4,3/4,61/80,61/80,21/80,21/80,19/20,19/20,1/5,1/5,1/16,1/16,9/16,9/16,7/20,3/5
18_7/5 104.7 0,0,3/4,3/4,1/80,1/80,41/80,41/80,19/20,19/20,1/5,1/5,13/16,13/16,5/16,5/16,7/20,3/5
18_17/5 104.7 0,0,1/4,1/4,3/40,33/40,13/40,23/40,1/5,1/5,9/20,9/20,1/8,7/8,3/8,5/8,17/20,3/5
18_17/5 104.7 0,0,1/4,1/4,11/80,11/80,51/80,51/80,1/5,1/5,9/20,9/20,15/16,15/16,7/16,7/16,17/20,3/5
18_17/5 104.7 0,0,1/4,1/4,71/80,71/80,31/80,31/80,1/5,1/5,9/20,9/20,3/16,3/16,11/16,11/16,17/20,3/5
27_2/5 157.0 0,0,2/3,2/3,2/3,2/3,3/40,29/120,29/120,89/120,89/120,23/40,13/15,13/15,13/15,13/15,1/5,1/5,1/24,1/24,7/8,3/8,13/24,13/24,4/15,4/15,3/5
27_2/5 157.0 0,0,2/3,2/3,2/3,2/3,119/120,119/120,33/40,13/40,59/120,59/120,13/15,13/15,13/15,13/15,1/5,1/5,1/8,19/24,19/24,7/24,7/24,5/8,4/15,4/15,3/5
27_22/5 157.0 0,0,1/3,1/3,1/3,1/3,19/120,19/120,33/40,13/40,79/120,79/120,1/5,1/5,8/15,8/15,8/15,8/15,23/24,23/24,1/8,5/8,11/24,11/24,14/15,14/15,3/5
27_22/5 157.0 0,0,1/3,1/3,1/3,1/3,3/40,109/120,109/120,49/120,49/120,23/40,1/5,1/5,8/15,8/15,8/15,8/15,7/8,5/24,5/24,17/24,17/24,3/8,14/15,14/15,3/5
N_c D^2 s_1,s_2,⋯ d_1,d_2,⋯
9_28/5 52.36 0,0,7/40,27/40,4/5,4/5,7/8,3/8,2/5 1,1,ζ_8^1,ζ_8^1,ζ_8^2,ζ_8^2,ζ_8^3,ζ_8^3,ζ_8^4
9_28/5 52.36 0,0,37/40,17/40,4/5,4/5,1/8,5/8,2/5
18_33/5 104.7 0,0,1/4,1/4,37/40,7/40,27/40,17/40,1/20,1/20,4/5,4/5,1/8,7/8,3/8,5/8,13/20,2/5
18_33/5 104.7 0,0,1/4,1/4,19/80,19/80,59/80,59/80,1/20,1/20,4/5,4/5,15/16,15/16,7/16,7/16,13/20,2/5
18_33/5 104.7 0,0,1/4,1/4,79/80,79/80,39/80,39/80,1/20,1/20,4/5,4/5,3/16,3/16,11/16,11/16,13/20,2/5
18_23/5 104.7 0,0,3/4,3/4,37/40,7/40,27/40,17/40,4/5,4/5,11/20,11/20,1/8,7/8,3/8,5/8,3/20,2/5
18_23/5 104.7 0,0,3/4,3/4,9/80,9/80,49/80,49/80,4/5,4/5,11/20,11/20,13/16,13/16,5/16,5/16,3/20,2/5
18_23/5 104.7 0,0,3/4,3/4,69/80,69/80,29/80,29/80,4/5,4/5,11/20,11/20,1/16,1/16,9/16,9/16,3/20,2/5
27_38/5 157.0 0,0,1/3,1/3,1/3,1/3,37/40,91/120,91/120,31/120,31/120,17/40,2/15,2/15,2/15,2/15,4/5,4/5,23/24,23/24,1/8,5/8,11/24,11/24,11/15,11/15,2/5
27_38/5 157.0 0,0,1/3,1/3,1/3,1/3,1/120,1/120,7/40,27/40,61/120,61/120,2/15,2/15,2/15,2/15,4/5,4/5,7/8,5/24,5/24,17/24,17/24,3/8,11/15,11/15,2/5
27_18/5 157.0 0,0,2/3,2/3,2/3,2/3,37/40,11/120,11/120,71/120,71/120,17/40,4/5,4/5,7/15,7/15,7/15,7/15,1/8,19/24,19/24,7/24,7/24,5/8,1/15,1/15,2/5
27_18/5 157.0 0,0,2/3,2/3,2/3,2/3,101/120,101/120,7/40,27/40,41/120,41/120,4/5,4/5,7/15,7/15,7/15,7/15,1/24,1/24,7/8,3/8,13/24,13/24,1/15,1/15,2/5
| http://arxiv.org/abs/1701.08110v4 | 20170127163850 | The Conformal BMS Group | ["Sasha J. Haco", "Stephen W. Hawking", "Malcolm J. Perry", "Jacob L. Bourjaily"] | hep-th | ["hep-th"] |
The Conformal BMS Group
Sasha J. Haco, Stephen W. Hawking, Malcolm J. Perry, Jacob L. Bourjaily
=======================================================================
§ INTRODUCTION
In four-dimensional Minkowski space, the isometries of the spacetime are given by the ten independent solutions to Killing's equation. These solutions form the Poincaré algebra, made up of four translations, one in each of the spacetime directions, plus three boosts and three spatial rotations. These are the symmetries of special relativity.
As soon as gravitational fields are included via general relativity, the standard isometry transformations of flat space must be revised. In the 1960s Bondi, van der Burg, Metzner and Sachs (BMS) postulated that there must be some way in which the full Poincaré group represents `approximate' symmetry transformations <cit.>, <cit.>. They studied these approximate symmetries of curved spacetime by investigating the asymptotic symmetries of asymptotically flat spacetimes at null infinity: if the spacetime is asymptotically flat, then infinitely far away from any gravitational fields we must in some sense be able to reproduce the Poincaré group as the symmetry group. This group of asymptotic symmetries is known as the BMS group <cit.>, <cit.>, a group larger than the Poincaré group of flat space, which consists of the ordinary Lorentz transformations plus an infinite number of `supertranslations'.
This BMS group has been extensively studied over the years. Penrose investigated the BMS group as a symmetry group on null infinity <cit.>, and later with Newman, he looked into possible subgroups of BMS that might arise when considering scattering problems and the emission of radiation out to infinity <cit.>.
More recently, the BMS group has received renewed attention. An extension to the BMS group has been proposed to include `superrotations' <cit.>, and work has been done on the conserved quantities that would be associated to the asymptotic symmetries of the BMS group <cit.>. In the quantum picture, these conservation laws amount to relations between ingoing and outgoing scattering states <cit.>, and have been shown to be equivalent to so-called soft theorems <cit.> and subleading soft theorems <cit.>, originally formulated by Weinberg and Low <cit.>. Within the last year, the effect of these symmetries on black hole spacetimes has been investigated, along with the potential for these conservation laws to provide answers to the black hole information paradox <cit.>.
While the Poincaré and BMS groups describe the symmetries of special and general relativity, for any theory that also admits a conformal symmetry, the necessary group of isometries must be larger. In flat space, the Poincaré group gets extended to the conformal group at spacelike infinity, and at null infinity, one needs not the BMS group but a conformal version of it, which is developed here.
Conformal symmetry is at the heart of many important physical theories. For example, Maxwell's free field equations are conformally invariant, as is the massless Dirac equation. In terms of gravity, the situation is less clear, but for empty space the Weyl tensor is unchanged by conformal transformations of the metric <cit.>. Another hint at conformal symmetry in gravity is through the connection with Yang-Mills theory: some aspects of gravity, particularly scattering amplitudes, can be regarded as the product of two Yang-Mills theories <cit.>, and we know Yang-Mills to be a classically conformally invariant theory in Minkowski space.
Given that 𝒩=4 Yang-Mills theory exhibits conformal symmetry, an obvious next step will be to study the action of the conformal BMS group in this context. The BMS group has previously been shown to be a conformal extension of the Carroll group <cit.>. A generalization of the BMS group for supergravity has also been studied <cit.>, although without investigation into asymptotically conformal transformations. Recently, work on classifying the asymptotic symmetry algebras of theories in different dimensions has been studied in the context of holography <cit.>.
This paper is organized as follows. We first review the well-known symmetry groups of Minkowski space, and then extend this to the asymptotic symmetries of the BMS group. Next we introduce the conformal BMS group, and discuss its algebra and properties. The closure of this algebra is more subtle than it may at first appear, due to the fact that the generators are metric-dependent; closure is achieved through a modified bracket, defined and discussed below. We illustrate this modified bracket algebra with a detailed example.
§ CONFORMAL SYMMETRIES OF FLAT SPACE: POINCARÉ AND CONFORMAL GROUPS
In four-dimensional flat Minkowski spacetime, it is possible to identify certain symmetries of the metric: transformations that leave the spacetime invariant. These are the ten isometries which form the well-known Poincaré group of the symmetries of special relativity. These symmetries are found by asking for which vector fields ξ the Lie derivative of the metric vanishes, in other words, by solving Killing's equation,
(ℒ_ξg)_a b=∇_a ξ_b + ∇_b ξ_a = 0.
ℒ_ξ is the Lie derivative with respect to the vector field ξ. In (3+1)-dimensional Minkowski space, we get ten independent solutions (Killing vectors (KV)) that make up the Poincaré group. This Poincaré group consists of the Lorentz group, a subgroup made up of three boosts and three spatial rotations, as well as an abelian normal subgroup of four translations in each of the spacetime directions.
The generators of these symmetry transformations may be written,
M_a b≡(x_a∂_b - x_b∂_a), P_a≡∂_a ,
where the M_a b give the Lorentz transformations and the P_a the translations.
The commutation relations are,
[P_a, P_b ] = 0,
[M_a b, P_c ] = η_b c P_a - η_a c P_b,
[M_a b, M_c d ] = η_a d M_b c + η_b c M_a d - η_b d M_a c - η_a c M_b d,
where η_a b is the Minkowski metric of signature (-,+,+,+). These are the commutation relations of the Poincaré group; the M_a b alone generate the Lorentz group O(3,1).
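As a quick check of the second of these relations, acting with the vector-field representations above gives
[M_a b, P_c ] = [x_a∂_b - x_b∂_a, ∂_c] = -(∂_c x_a)∂_b + (∂_c x_b)∂_a = η_b c P_a - η_a c P_b ,
using ∂_c x_a = η_a c.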
We may also look at transformations which preserve the metric up to a conformal factor,
ℒ_ξ g = Ω^2 g .
By taking the trace, we can solve for Ω^2 and find that the transformations ξ correspond to solutions to the conformal Killing equation, which in four dimensions is:
∇_a ξ_b + ∇_b ξ_a - 1/2 g_a b ∇_c ξ^c = 0 .
The solutions are conformal Killing vectors (CKV).
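Explicitly, tracing ℒ_ξ g = Ω^2 g with the inverse metric gives
g^a b(∇_a ξ_b + ∇_b ξ_a) = 2∇_c ξ^c = Ω^2 g^a b g_a b = 4 Ω^2 ,
so that Ω^2 = 1/2∇_c ξ^c; substituting this back yields the conformal Killing equation above.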
In flat space, the conformal Killing vectors consist of the Poincaré generators, along with an extension to include special conformal transformations generated by K_a and dilatations (scalings) generated by D:
D ≡x^a ∂_a ,
K_a ≡x^2 ∂_a - 2 x_a x^b ∂_b .
The commutation relations are given by:
[D , K_a ] = K_a,
[D , P_a ] = -P_a,
[K_a, P_b ] = 2 η_a bD + 2 M_a b,
[K_a, M_b c ] = η_a bK_c - η_a c K_b .
These are the generators of the group O(4,2).
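These relations may be verified directly from the vector fields. For instance, since D = x^b∂_b is the Euler operator, any vector field V = V^a(x) ∂_a whose components are homogeneous of degree k in x satisfies
[D, V] = x^b(∂_b V^a)∂_a - V^a ∂_a = (k-1) V ,
from which [D, K_a] = K_a (degree k = 2) and [D, P_a] = -P_a (degree k = 0) follow immediately.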
§ CONFORMAL SYMMETRIES OF ASYMPTOTICALLY FLAT SPACE
In a curved spacetime the above transformations no longer hold as exact symmetries. However, in any asymptotically flat spacetime one can define `asymptotic symmetries' which correspond to those transformations that are consistent with the boundary conditions of asymptotic flatness. This amounts to the consideration of an `asymptotic Killing equation', the solutions to which are known to form a larger group of symmetries, known as the BMS group <cit.>. This consists of the ordinary Lorentz transformations, plus an infinite number of `supertranslations' and `superrotations'. Let us briefly review how these symmetries arise, before extending this algebra to include also the asymptotic manifestations of conformal symmetry.
In retarded Bondi coordinates (u, r, x^A), the flat Minkowski metric is given by,
ds^2 = - du^2 - 2 du dr + r^2 γ_AB dx^A dx^B ,
where γ_A B is the unit metric on the two-sphere at infinity. In the Bondi gauge,
g_rr = g_rA = 0, ∂_r det(g_AB/r^2) = 0 .
In order to maintain this metric asymptotically, any allowed transformations are constrained by a set of boundary conditions. These ensure that any non-zero components of the resulting Riemann tensor have suitable r-dependence as r →∞, so that the curvature falls off sufficiently fast. The corresponding changes to the metric must therefore obey certain fall-off conditions, given by
δg_uA ∼𝒪(r^0) ,
δg_ur ∼𝒪(r^-2) ,
δg_uu ∼𝒪(r^-1) ,
δg_AB ∼𝒪(r) .
In order to satisfy the Bondi gauge, we also require,
δg_rr = δg_rA = 0, ∂_r det( (g_AB + δg_AB)/r^2 ) = 0 .
If peeling holds <cit.>, any asymptotically flat metric can be written as an expansion in powers of 1/r. In Bondi coordinates near null infinity, this is,
ds^2= - du^2 - 2 du dr + r^2 γ_A B dx^A dx^B
+2m_b/r du^2 + r C_AB dx^A dx^B + D_A C^A_B du dx^B +… ,
where D_A is the covariant derivative with respect to the metric on the two-sphere, m_b and C_AB denote first order corrections to flat space. m_b is the `Bondi mass aspect', and ∂_u C_AB = N_AB where N_AB is the `Bondi news'. Capital letters A, B,... can be raised and lowered with respect to γ_A B.
Transformations that preserve these conditions and therefore maintain the structure of the metric correspond to asymptotic solutions to the Killing equation. These are generated by the vector fields,
ξ_T ≡f ∂_u + 1/2D^2 f ∂_r - 1/r D^A f ∂_A ,
ξ_R ≡1/2u ψ ∂_u - (1/2r ψ- 1/4u D^2 ψ) ∂_r + (Y^A - u/2rD^A ψ)∂_A ,
where f is any scalar spherical harmonic, Y^A are conformal Killing vectors on the 2-sphere, and ψ≡D_A Y^A. Further terms that are subleading in r have been neglected. The vectors ξ_T generate infinitesimal `supertranslations' and the ξ_R give the `superrotations'. The supertranslations act to shift individual light rays of null infinity forwards or backwards in retarded time. The standard BMS group of infinitesimal transformations preserving the asymptotically flat metric contains only the superrotations that are globally well defined on the sphere. These correspond to supertranslations ξ_T, and superrotations ξ_R for which Y^z = 1, z, z^2 and their conjugates, when expressed in stereographic coordinates on the two-sphere <cit.>. More recently, an `extended BMS' group has been proposed to include all vector fields ξ_R with Y^z= z^n+1 (and conjugates) for any n <cit.>. There is a similar construction at past null infinity.
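As a consistency check of ξ_T, one can compute its Lie derivative of the flat Bondi metric with computer algebra and confirm the Bondi-gauge conditions and the fall-offs above. The sympy sketch below is our own check, for a generic f(θ,φ); the helper names are ours, and it also reproduces the components δg_uA and δg_AB quoted in the appendix:

import sympy as sp

u, ph = sp.symbols('u phi')
r, th = sp.symbols('r theta', positive=True)
XX = [u, r, th, ph]
f = sp.Function('f')(th, ph)

gam = sp.diag(1, sp.sin(th)**2)          # unit two-sphere metric gamma_AB
gaminv = gam.inv()
ang = [th, ph]
Gam = [[[sum(gaminv[a, d]*(sp.diff(gam[d, b], ang[c]) + sp.diff(gam[d, c], ang[b])
             - sp.diff(gam[b, c], ang[d]))/2 for d in range(2))
         for c in range(2)] for b in range(2)] for a in range(2)]

def DD(a, b, s):                          # D_A D_B on a scalar
    return sp.diff(s, ang[a], ang[b]) - sum(Gam[c][a][b]*sp.diff(s, ang[c]) for c in range(2))

D2f = sum(gaminv[a, b]*DD(a, b, f) for a in range(2) for b in range(2))

g = sp.zeros(4, 4)                        # flat metric in retarded Bondi coordinates
g[0, 0] = -1; g[0, 1] = g[1, 0] = -1
g[2, 2] = r**2; g[3, 3] = r**2*sp.sin(th)**2

xi = [f, D2f/2] + [-sum(gaminv[a, b]*sp.diff(f, ang[b]) for b in range(2))/r
                   for a in range(2)]     # xi_T = f d_u + (D^2 f/2) d_r - (D^A f/r) d_A

h = sp.Matrix(4, 4, lambda a, b: sum(xi[c]*sp.diff(g[a, b], XX[c])
        + g[c, b]*sp.diff(xi[c], XX[a]) + g[c, a]*sp.diff(xi[c], XX[b]) for c in range(4)))

print([sp.simplify(h[i, j]) for (i, j) in [(1, 1), (1, 2), (1, 3), (0, 0), (0, 1)]])  # all 0
print(sp.simplify(h[0, 2] + sp.diff(2*f + D2f, th)/2))   # 0: h_uA = -D_A(2f + D^2 f)/2
print(sp.simplify(h[2, 2] + r*(2*DD(0, 0, f) - D2f)))    # 0: h_AB = -r(2 D_A D_B f - gamma_AB D^2 f)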
§ THE CONFORMAL BMS SYMMETRY GROUPS
For the conformal case, we look for asymptotic solutions to the conformal Killing equation, and ask that the infinitesimal changes in the metric satisfy the same fall-off conditions as above.
The group of solutions involves the ordinary BMS supertranslations (T) and superrotations (R), plus a dilatation (D), another sort of conformal dilatation, a `BMS dilatation' (E), and a new type of special conformal transformation, a `BMS special conformal transformation' (C). In our coordinates, at leading order, these are given by,
T ≡f ∂_u + 1/2D^2f ∂_r - 1/rD^A f ∂_A ,
R ≡1/2u ψ ∂_u - (1/2r ψ- 1/4u D^2 ψ) ∂_r + (Y^A - u/2rD^A ψ) ∂_A ,
D ≡u ∂_u + r ∂_r ,
E ≡u^2/2 ∂_u + r(u+r) ∂_r ,
C ≡u^2/4 ζ ∂_u - ( u^2/4 + r^2/2 + u r/2 )ζ ∂_r - u/2(1 + u/2r )D^Aζ ∂_A ,
where ψ≡D_A Y^A, ζ≡D_A Z^A, and Y^A and Z^A are conformal Killing vectors on the 2-sphere. Note that while the superrotations may be formed from any conformal Killing vectors, the special conformal transformations vanish if Z^A is a Killing vector. Therefore C is only formed from the divergence of strict conformal Killing vectors.
Thus the conformal BMS group is larger than both the conformal group and the BMS group. As well as the infinite number of supertranslations and superrotations, the new special conformal transformation also give an infinite number of symmetries—generated by the infinity of strictly conformal Killing vectors Z^A. Just as for the superrotations we can define both global and local special conformal transformations. The conformal BMS group described above is the group CBMS^+, as it is defined on future null infinity, 𝒥^+. Performing a similar calculation on past null infinity, 𝒥^-, we can obtain the corresponding (although different) group CBMS^-.
It is also worthwhile considering how the original (i.e. flat space) conformal group fits into this larger asymptotic group. In flat space, there are four special conformal transformations, given by K_a. When written in (u, r, x^A) coordinates, these are,
K_u ≡u^2 ∂_u + 2 r (u+r)∂_r ,
K_r ≡2 u^2 ∂_u - u^2∂_r ,
K_A ≡- u (u+2r)∂_A ;
we can thus identify,
K_u = 2 E ,
and the other components are contained within the superrotation and the new special conformal transformation C, for suitable choice of ψ and ζ.
For example, choosing coordinates (u, r, θ, ϕ), then
Z^θ = - 4 θ, Z^ϕ = 0, ζ = D_A Z^A , and 4 C + 2 E = K_r .
§.§ The Modified Bracket
In order to compute the algebra, there is an important subtlety that must be taken into account: it is not the Lie bracket that is required, but a modified version of it (see e.g. <cit.>). This is because the vector fields generate perturbations in the metric and these vector fields are themselves metric-dependent. Thus, in calculating the commutator an extra piece must be added or subtracted from the usual bracket in order to take into account how each vector field varies as the metric changes.
Consider the action of a vector field, ξ_1 on the metric, followed by another vector, ξ_2. We allow metric variations which satisfy the fall-off conditions given above and calculate the possible vector fields, ξ, which can give rise to such variations. Thus these vector fields are defined through,
ℒ̂_ξ_1g = ĥ ,
where the `conformal' Lie derivative is defined by,
(ℒ̂_ξg)_ab = ∇_a ξ_b + ∇_b ξ_a - 1/2g_ab ∇_c ξ^c ,
When the vector ξ_2 acts on the metric we allow for additional perturbations:
ξ_2 →ξ_2 + μ_2 ,
g + ĥ →g + ĥ + K̂
where μ_2 is a first order perturbation to the vector field and K̂ is a second order variation of the metric. We then find the action of ℒ̂_ξ_2g to second order. Explicitly,
K̂_ab = μ_2^c ∂_c g_ab + ξ^c ∂_c ĥ_ab + ∂_a ξ^c ĥ_bc + ∂_a μ_2^c g_bc + ∂_b μ_2^c g_ac + ∂_b ξ^c ĥ_ac
- 1/2g_ab ∂_c μ_2^c - 1/2ĥ_ab ∂_c ξ^c - 1/2g_ab Γ^c_cd μ_2^d - 1/2ĥ_ab Γ^c_cd ξ^d - 1/2g_ab δΓ^c_cd ξ^d ,
where δΓ^c_cd is the perturbation of the connection Γ^c_cd due to the change ĥ_ab. Asking that the corresponding changes to the metric still satisfy the boundary conditions and the Bondi gauge as above, we may solve for μ_2.
In order to find the commutator, [ξ_1, ξ_2 ] of two generators we must repeat the process—acting first with ξ_2 and then with ξ_1, and find the corresponding values of μ_1. We can then compute,
δμ= μ_2 - μ_1 ,
which gives the necessary piece that must be subtracted from the ordinary commutator to account for changes to the metric from the vector fields being themselves metric-dependent.
It turns out that the only commutators for which this modification is important are those involving T. In the appendix we illustrate this modified bracket in the most subtle case—showing that the commutator of two supertranslations, [T, T ], vanishes.
§.§ The Conformal BMS Algebra
In order to get a sense of the general structure of the group, it is useful to look at the elements involved in the commutation relations. The general results take the following overall form,
[T , R ] ∼T ,
[T, D ] ∼T ,
[R, R ] ∼R ,
[C, D ] ∼C ,
[D, E ] ∼E ,
[E, R ] ∼C .
We also have that
[R, C] ∼E ,
except in the special case where the vector, Y^A that generates the superrotations is a Killing vector, i.e., ψ = 0, in which case,
[R, C] ∼C .
All other commutators vanish:
[T , T ] = 0 ,
[C, T ] = 0 ,
[C, C ] = 0 ,
[R, D ] = 0 ,
[T, E ] = 0 ,
[C, E ] = 0 ,
[E, E ] = 0 ,
[D, D ] = 0 .
One can now compare this algebra with that of the flat space conformal group. The first thing to notice is that the structure is entirely different. In particular, no commutator ever produces a dilatation on the right hand side. In the case of flat space, a special conformal transformation commuted with a translation gives a combination of dilatations and rotations. In this conformal BMS group, the commutation of both C and E with a supertranslation give zero. In addition, when a BMS special conformal transformation is commuted with a superrotation that is generated by a Killing vector, the result is consistent with the flat space version: we get another BMS special conformal transformation. However, when the superrotation is generated by a conformal Killing vector then the commutator gives a different result, a BMS dilatation.
Both the flat space conformal group and the conformal BMS group have a subgroup involving the elements T, R, D, and these subgroups have the same structure—as seen in the first three lines of (<ref>). The superrotations form their own subgroup, just like the rotations in the flat space group.
Other subgroups of the conformal BMS group can be identified. There is one involving T, D, E, one with E, R, C, and one with T, R. There is another involving all elements except for the supertranslations, R, C, D, E. A dilatation with any other element also generates a subgroup.
With this group structure in mind, we can now look at the commutation relations in more detail. The supertranslations are generated by the function f, so we write T(f). Similarly, the superrotations and special conformal transformations are generated by vector fields, so we write R(Y^A) and C(Z^A). Then, more explicitly, the group algebra is given by,
[T(f), D] = T(f') ,  f' = f ,
[T(f), R(Y^A)] = T(f') ,  f' = 1/2 f ψ - Y^A D_A f ,
[D, C(Z^A)] = C((Z')^A) ,  (Z')^A = Z^A ,
[R(Y^A), E] = C((Z')^A) ,  (Z')^A = Y^A ,
[R(Y^A), R((Y')^A)] = R((Y”)^A) ,  (Y”)^A = Y^B D_B (Y')^A - (Y')^B D_B Y^A .
When R is generated by a strict conformal Killing vector,
[R(Y^A),C(Z^A)]= 1/4 (ζψ+ D^A ζD_A ψ) E ,
whereas when R is generated by a Killing vector,
[R(Y^A),C(Z^A)]= C((Z')^A), (Z')^A = Y^A ζ .
At first sight, when R is generated by a strict CKV it does not look as though the commutator with C gives simply E. However, closer inspection of the prefactor reveals that it is indeed a constant. This requires the following identities that hold for a 2d strict CKV:
Y^A = - 1/2D^A ψ ,
D_A D_B ψ = - γ_AB ψ .
Note that since the generators of C must be strict conformal Killing vectors, equation (<ref>) shows that if the superrotation involved is generated by a Killing vector, then the commutator vanishes. While equation (<ref>) gives a general expression for the commutation of two superrotations, it is worthwhile examining the result for the different cases in which the superrotations are generated by two KVs, two strict CKVs, or one of each. For either two KVs or two strict CKVs, the resulting superrotation generator, (Y”)^A is a KV, but for one KV and one strict CKV, one gets a strict CKV.
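Both identities, and the constancy of the [R, C ] prefactor, are easy to confirm for ℓ = 1 harmonics with computer algebra; the sympy sketch below is our own check (the helpers hess and pref are ours):

import sympy as sp

th, ph = sp.symbols('theta phi')
ang = [th, ph]
gam = sp.diag(1, sp.sin(th)**2)
gaminv = gam.inv()
Gam = [[[sum(gaminv[a, d]*(sp.diff(gam[d, b], ang[c]) + sp.diff(gam[d, c], ang[b])
             - sp.diff(gam[b, c], ang[d]))/2 for d in range(2))
         for c in range(2)] for b in range(2)] for a in range(2)]

def hess(s):      # D_A D_B acting on a scalar
    return sp.Matrix(2, 2, lambda a, b: sp.diff(s, ang[a], ang[b])
                     - sum(Gam[c][a][b]*sp.diff(s, ang[c]) for c in range(2)))

def pref(z, p):   # the [R, C] prefactor (zeta psi + D^A zeta D_A psi)/4
    return (z*p + sum(gaminv[a, b]*sp.diff(z, ang[a])*sp.diff(p, ang[b])
                      for a in range(2) for b in range(2)))/4

psi = sp.cos(th)                  # l = 1 harmonics
zeta = sp.sin(th)*sp.cos(ph)

print(sp.simplify(hess(psi) + gam*psi))    # zero matrix: D_A D_B psi = -gamma_AB psi
print(sp.simplify(hess(zeta) + gam*zeta))  # zero matrix
print(sp.simplify(pref(psi, psi)), sp.simplify(pref(zeta, psi)))   # 1/4 and 0: constants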
We have checked all the Jacobi identities, and provide an illustrated example of how these commutation relations are computed according to the modified bracket in the appendix.
§ CONCLUSIONS AND DISCUSSION
The symmetries of spacetime at asymptotic infinity—especially in the case of asymptotically flat geometry—are of particular interest to the physics of scattering processes. In particular, this is where the S-matrix should be measured. The fact that there are more symmetries at infinity than mere Poincaré is extremely suggestive, and the connection between the holomorphically extended BMS group and the recently proposed infinite-dimensional symmetries of soft-particle scattering amplitudes <cit.> related to soft-theorems <cit.> may hint at a previously overlooked simplicity in the structure of four dimensional theories involving massless particles.
Because many of the most intriguing results along these lines have been found in the context of the scattering of massless particles, the extension of the BMS group to include spacetimes with conformal symmetry is both natural and important. This is what we have done here. Continuing this generalization to the case of conformal theories with maximal supersymmetry is a natural road ahead—with exciting possibility of connecting the new symmetries proposed in <cit.> with those known to exist in the case of maximally supersymmetric Yang-Mills theory in the planar limit. In a subsequent paper, a twistor representation of this group along with its supersymmetric extension will be discussed.
§ ACKNOWLEDGEMENTS
This work was supported in part by the Danish National Research Foundation (DNRF91), a MOBILEX research grant from the Danish Council for Independent Research and a grant from the Villum Fonden (JLB), by the Avery-Tsui Foundation (SWH) and by STFC (SJH, MJP) and Trinity College research grants (MJP). We are also grateful to the support from the Cynthia and George Mitchell Foundation.
§ THE MODIFIED BRACKET
As explained in section 4.1, computing the algebra of the conformal BMS group required a delicate examination of the effect of each vector field on the spacetime, and how this would affect the action of a subsequent transformation. Here is a worked example for the commutator of two different supertranslations, [T_1 , T_2 ] = 0.
Start by considering the action of a supertranslation,
T_1 = g ∂_u + 1/2D^2g ∂_r - 1/rD^A g ∂_A .
The ordinary commutator of this supertranslation, together with another supertranslation, T_2, generated by the function f, gives,
[T_1, T_2 ] = [g ∂_u + 1/2D^2g ∂_r - 1/rD^A g ∂_A, f ∂_u + 1/2D^2f ∂_r - 1/rD^A f ∂_A],
=+ 1/2r(D^Af D_A D^2g - D^Ag D_A D^2f) ∂_r
= + 1/2r^2(D^2g D^Af - D^2f D^Ag + 2 D^Bg D_B D^A f - 2 D^B f D_B D^A g)∂_A .
This has the form,
[T_1, T_2 ] = 1/rA ∂_r + 1/r^2B^A∂_A ,
where A and B are functions of the two-sphere only.
By considering dimensions, this implies that
μ_2^u = 0 ,
and letting
μ_2^r = 1/r Â ,
and
μ_2^A = 1/r^2B̂^A .
Under the action of the first supertranslation the resulting infinitesimal changes to the metric are given by,
ĥ_uA = - 1/2 D_A(2 g + D^2 g) ,
ĥ_AB = - r (2 D_A D_B g - γ_AB D^2g) ,
with all other components zero.
Then, under the action of the second supertranslation, T_2, on the metric there will be extra second order terms, K̂_ab, given by,
K̂_ab = μ_2^c ∂_c g_ab + T_2^c ∂_c ĥ_ab + ∂_a T_2^c ĥ_bc + ∂_a μ_2^c g_bc + ∂_b μ_2^c g_ac + ∂_b T_2^c ĥ_ac
- 1/2g_ab ∂_c μ_2^c - 1/2ĥ_ab ∂_c T_2^c - 1/2g_ab Γ^c_cd μ_2^d - 1/2ĥ_ab Γ^c_cd T_2^d - 1/2g_ab δΓ^c_cd T_2^d .
The relevant Christoffel symbols and perturbations are given by,
Γ^A_Ar = 2/r ,
δΓ^r_rA = 1/2r D_A(D^2 + 2)g ,
δΓ^A_AB = - 1/2r D_B(D^2+2)g .
Thus, explicitly calculating the second order changes to the metric,
K̂_rA = 0 = g_ru D_A μ^u + g_AB ∂_r μ^B + ∂_r T_2^B ĥ_AB ,
= - r^2 γ_AB ( 2/r^3 B̂^B ) - 1/r D^B f ( 2 D_A D_B g - γ_AB D^2 g) ,
Therefore,
B̂_A = -1/2D^Bf (2 D_A D_B g - γ_AB D^2g) .
K̂_AB = 𝒪(r) = r^2 ( D_A μ_B + D_B μ_A - 1/2 γ_AB ( ∂_u μ^u + ∂_r μ^r + D_C μ^C - 2/r μ^r ) )
+ D_A T_2^C ĥ_BC + D_B T_2^C ĥ_AC + D_A T_2^u ĥ_uB + D_B T_2^u ĥ_uA + T_2^r ∂_r ĥ_AB
+ T_2^C D_C ĥ_AB - 1/2 ĥ_AB D_C T_2^C - 1/r ĥ_AB T_2^r - 1/2 r^2 γ_AB δΓ^c_cd T_2^d .
Since
ĥ^A_A = 0 ,
then,
K̂^A_A = r^2 D_A μ^A - r^2 ∂_r μ^r + 2r μ^r + 2 D^A T_2^B ĥ_AB + 2 D^A T_2^u ĥ_uA - r^2 δΓ^c_cd T_2^d ,
= 2r μ^r - r^2 ∂_r μ^r + r^2 D_A μ^A + 2 D^A D^B f (2 D_A D_B g - γ_AB D^2 g) - D^A f D_A (D^2+2) g ,
= 3Â - 1/2 D^A D^B f (2 D_A D_B g - γ_AB D^2 g) - 1/2 D^B f (2 D^2 D_B g - D_B D^2 g) + 2 D^A D^B f (2 D_A D_B g - γ_AB D^2 g) - D^A f D_A (D^2+2) g ,
= 3Â + 3 D^A D^B f D_A D_B g - 3/2 D^2 f D^2 g - 3/2 D^B f D_B D^2 g - 3 D^A f D_A g .
Since
∂_r det( g_AB/r^2 ) = 0 ,
we have,
 = 1/2 D^B f D_B D^2g - D_A D_B f D^A D^B g + 1/2D^2 f D^2g + D^A f D_A g .
Thus,
μ_2^u = 0 ,
μ_2^r = 1/r (1/2 D^B f D_B D^2g - D_A D_B f D^A D^B g + 1/2D^2 f D^2g +D^A f D_A g ) ,
μ_2^A = - 1/2 r^2 D^Bf (2 D_A D_B g - γ_AB D^2g) .
When we perform the same set of calculations using first the action of T_2, followed by T_1, we get the same results for μ_1^a, with f ↔g.
Therefore, we can calculate,
δμ^a = μ_2^a - μ_1^a ,
to find,
δμ^u = 0 ,
δμ^r = 1/2r(D^B f D_B D^2 g - D^B g D_B D^2 f) ,
δμ^A = 1/2r^2 (D_Bg (2 D^A D^B f - γ^AB D^2f) - D_Bf (2 D^A D^B g - γ^AB D^2g))
= 1/2r^2(D^2g D^Af - D^2f D^Ag + 2 D^Bg D_B D^A f - 2 D^B f D_B D^A g) .
These terms exactly cancel those arising from the ordinary commutator, and so upon subtracting these off, we find that,
[T_1, T_2 ] = 0 .
10
Sachs:1962zza
R. Sachs, “Asymptotic Symmetries in Gravitational Theory,”
http://dx.doi.org/10.1103/PhysRev.128.2851Phys. Rev. 128
(1962) 2851–2864.
Bondi:1962px
H. Bondi, M. G. J. van der Burg, and A. W. K. Metzner, “Gravitational Waves
in General Relativity, 7: Waves from Axisymmetric Isolated Systems,”
http://dx.doi.org/10.1098/rspa.1962.0161Proc. Roy. Soc. Lond.
A269 (1962) 21–52.
Penrose:1962ij
R. Penrose, “Asymptotic Properties of Fields and Space-Times,”
http://dx.doi.org/10.1103/PhysRevLett.10.66Phys. Rev. Lett. 10 (1963) 66–68.
Newman:1966ub
E. T. Newman and R. Penrose, “Note on the Bondi-Metzner-Sachs Group,”
http://dx.doi.org/10.1063/1.1931221J. Math. Phys. 7 (1966)
863–870.
Barnich:2011ct
G. Barnich and C. Troessaert, “Supertranslations Call for Superrotations,”
http://arxiv.org/abs/1102.4632 PoS (2010) 010,
arXiv:1102.4632 [gr-qc].
[Ann. U. Craiova Phys.21,S11(2011)].
Compere:2016jwb
G. Compère and J. Long,
“Vacua of the gravitational field,”
doi:10.1007/JHEP07(2016)137, JHEP 1607, 137 (2016), arXiv:1601.04958 [hep-th].
Strominger:2016wns
A. Strominger and A. Zhiboedov,
“Superrotations and Black Hole Pair Creation,”
https://arxiv.org/abs/1610.00639arXiv:1610.00639 [hep-th].
Hawking:2016b
S. W. Hawking, M. J. Perry and A. Strominger, “Superrotation Charge and Supertranslation Hair on Black Holes,”
https://arxiv.org/abs/1611.09175arXiv:1611.09175 [hep-th].
Compere:2016hzt
G. Compère and J. Long,
“Classical static final state of collapse with supertranslation memory,”
doi:10.1088/0264-9381/33/19/195001 Class. Quant. Grav. 33, no. 19, 195001 (2016), arXiv:1602.05197 [gr-qc]
Wald:1999wa
R. M. Wald and A. Zoupas, “A General Definition of `Conserved Quantities' in
General Relativity and other Theories of Gravity,”
http://dx.doi.org/10.1103/PhysRevD.61.084027Phys. Rev. D61 (2000) 084027,
http://arxiv.org/abs/gr-qc/9911095 arXiv:gr-qc/9911095 [gr-qc].
Barnich:2011mi
G. Barnich and C. Troessaert, “BMS Charge Algebra,”
http://dx.doi.org/10.1007/JHEP12(2011)105JHEP 12 (2011)
105,
http://arxiv.org/abs/1106.0213 arXiv:1106.0213 [hep-th].
Flanagan:2015pxa
E. E. Flanagan and D. A. Nichols, “Conserved Charges of the Extended
Bondi-Metzner-Sachs Algebra,”
http://arxiv.org/abs/1510.03386 arXiv:1510.03386 [hep-th].
Kapec:2014opa
D. Kapec, V. Lysov, S. Pasterski and A. Strominger,
“Semiclassical Virasoro symmetry of the quantum gravity 𝒮-matrix,”
doi:10.1007/JHEP08(2014)058, JHEP 1408, 058 (2014), arXiv:1406.3312 [hep-th].
He:2014laa
T. He, V. Lysov, P. Mitra, and A. Strominger, “BMS Supertranslations and
Weinberg's Soft Graviton Theorem,”
http://dx.doi.org/10.1007/JHEP05(2015)151JHEP 05 (2015)
151,
http://arxiv.org/abs/1401.7026 arXiv:1401.7026 [hep-th].
Strominger:2014pwa
A. Strominger and A. Zhiboedov,
“Gravitational Memory, BMS Supertranslations and Soft Theorems,”
doi:10.1007/JHEP01(2016)086 JHEP 1601, 086 (2016), arXiv:1411.5745 [hep-th].
Lysov:2014csa
V. Lysov, S. Pasterski, and A. Strominger, “Low's Subleading Soft Theorem as
a Symmetry of QED,”
http://dx.doi.org/10.1103/PhysRevLett.113.111601Phys. Rev.
Lett. 113 (2014) 111601,
http://arxiv.org/abs/1407.3814 arXiv:1407.3814 [hep-th].
Low:1954kd
F. E. Low, “Scattering of Light of Very Low Frequency by Systems of Spin
1/2,”
http://dx.doi.org/10.1103/PhysRev.96.1428Phys. Rev. 96
(1954) 1428–1432.
Weinberg:1965nx
S. Weinberg, “Infrared Photons and Gravitons,”
http://dx.doi.org/10.1103/PhysRev.140.B516Phys. Rev. 140
(1965) B516–B524.
Hawking:1976ra
S. W. Hawking,
“Breakdown of Predictability in Gravitational Collapse,”
doi:10.1103/PhysRevD.14.2460, Phys. Rev. D 14, 2460 (1976).
Hawking:2016msc
S. W. Hawking, M. J. Perry, and A. Strominger, “Soft Hair on Black Holes,”
http://dx.doi.org/10.1103/PhysRevLett.116.231301Phys. Rev.
Lett. 116 (2016) no. 23, 231301,
http://arxiv.org/abs/1601.00921 arXiv:1601.00921 [hep-th].
Penrose:1965am
R. Penrose,
“Zero rest mass fields including gravitation: Asymptotic behavior,”
doi:10.1098/rspa.1965.0058, Proc. Roy. Soc. Lond. A 284, 159 (1965).
Borsten:2015pla
L. Borsten and M. J. Duff,
“Gravity as the square of Yang–Mills?,”
doi:10.1088/0031-8949/90/10/108012 Phys. Scripta 90, 108012 (2015), arXiv:1602.08267 [hep-th]
Duval:2014-2
C. Duval, G. W. Gibbons, P. A. Horvathy,
“Conformal Carroll groups and BMS symmetry,”
http://arxiv.org/abs/1402.5894 Class. Quant. Grav. 31 (2014) 092001, arXiv:1402.5894 [gr-qc].
Duval:2014lpa
C. Duval, G. W. Gibbons and P. A. Horvathy,
“Conformal Carroll groups,”
http://arxiv.org/abs/1403.4213 J. Phys. A 47 (2014) 335204., arXiv:1403.4213 [hep-th].
Awada:1985by
M. A. Awada, G. W. Gibbons, and W. T. Shaw, “Conformal Supergravity, Twistors
and the Super BMS Group,”
http://dx.doi.org/10.1016/S0003-4916(86)80023-9Annals Phys. 171 (1986) 52.
Irakleidou:2016xot
M. Irakleidou and I. Lovrekovic, “Asymptotic Symmetry Algebra of Conformal
Gravity,”
http://arxiv.org/abs/1611.01810 arXiv:1611.01810 [hep-th].
Newman:1961qr
E. Newman and R. Penrose,
“An Approach to gravitational radiation by a method of spin coefficients,”
doi:10.1063/1.1724257, J. Math. Phys. 3, 566 (1962).
Strominger:2013jfa
A. Strominger, “On BMS Invariance of Gravitational Scattering,”
http://dx.doi.org/10.1007/JHEP07(2014)152JHEP 1407 (2014)
152,
http://arxiv.org/abs/1312.2229 arXiv:1312.2229 [hep-th].
Tanabe:2011es
K. Tanabe, S. Kinoshita, and T. Shiromizu, “Asymptotic Flatness at Null
Infinity in Arbitrary Dimensions,”
http://dx.doi.org/10.1103/PhysRevD.84.044055Phys. Rev. D84 (2011) 044055,
http://arxiv.org/abs/1104.0303 arXiv:1104.0303 [gr-qc].
Barnich:2010eb
G. Barnich and C. Troessaert, “Aspects of the BMS/CFT Correspondence,”
http://dx.doi.org/10.1007/JHEP05(2010)062JHEP 05 (2010)
062,
http://arxiv.org/abs/1001.1541 arXiv:1001.1541 [hep-th].
Barnich:2009se
G. Barnich and C. Troessaert, “Symmetries of Asymptotically Flat
4-Dimensional Spacetimes at Null Infinity Revisited,”
http://dx.doi.org/10.1103/PhysRevLett.105.111103Phys. Rev.
Lett. 105 (2010) 111103,
http://arxiv.org/abs/0909.2617 arXiv:0909.2617 [gr-qc].
Strominger:2013lka
A. Strominger, “Asymptotic Symmetries of Yang-Mills Theory,”
http://dx.doi.org/10.1007/JHEP07(2014)151JHEP 1407 (2014)
151,
http://arxiv.org/abs/1308.0589 arXiv:1308.0589 [hep-th].
Cachazo:2014fwa
F. Cachazo and A. Strominger, “Evidence for a New Soft Graviton Theorem,”
http://arxiv.org/abs/1404.4091 arXiv:1404.4091 [hep-th].
Cheung:2016iub
C. Cheung, A. de la Fuente, and R. Sundrum, “4D Scattering Amplitudes and
Asymptotic Symmetries from 2D CFT,”
http://arxiv.org/abs/1609.00732 JHEP 1701, 112 (2017), arXiv:1609.00732 [hep-th].
http://arxiv.org/abs/1701.08026v1 | 2017-01-27 | Curvature in Hamiltonian Mechanics And The Einstein-Maxwell-Dilaton Action | S. G. Rajeev | math-ph | math-ph, hep-th, math.MP
Department of Physics and Astronomy
Department of Mathematics
University of Rochester Rochester, NY 14627
s.g.rajeev@rochester.edu
Riemannian geometry is a particular case of Hamiltonian mechanics:
the orbits of the hamiltonian H=1/2g^ijp_ip_j are
the geodesics. Given a symplectic manifold (Γ,ω), a hamiltonian
H:Γ→ℝ and a Lagrangian sub-manifold M⊂Γ
we find a generalization of the notion of curvature. The particular
case H=1/2g^ij[p_i-A_i][p_j-A_j]+ϕ
of a particle moving in gravitational, electromagnetic and scalar
fields is studied in more detail. The integral of the generalized
Ricci tensor w.r.t. the Boltzmann weight reduces to the action principle
∫[R+1/4F_ikF_jlg^klg^ij+g^ij∂_iϕ∂_jϕ]e^-ϕ√(g)d^nq
for the scalar, vector and tensor fields.
Curvature in Hamiltonian Mechanics And The Einstein-Maxwell-Dilaton
Action
S. G. Rajeev
Jan 15 2017
==========================================================================
§ INTRODUCTION
The theory of geodesics on a Riemannian manifold is a particular
case of Hamiltonian mechanics: they are simply the solutions of Hamilton's
equations for H=1/2g^ijp_ip_j. Is there a generalization
of Riemannian geometry corresponding to more general hamiltonians?
The three ideas we would like to generalize are those of distance,
volume and curvature.
The volume is the easiest to generalize: the Boltzmann weight gives
a natural measure of integration in the phase space. Integrating out
the momenta gives the generalization of the Riemannian volume element
in configuration space (as long as ∫ e^-Hd^np converges).
Recall that the “reduced action” (a.k.a “eikonal”) σ_E(Q,Q')=∫_Q'^Qp_i(E,q)dq^i
of the trajectory of given energy E connecting two points is a
good candidate for distance; although in the most general case it
is not positive or symmetric, let alone satisfy the triangle inequality.
If the Hamiltonian is an even function of momenta H(q,p)=H(q,-p)
we have time reversal invariance and the symmetry σ_E(Q,Q')=σ_E(Q',Q)
follows. If in addition H(q,p) is a convex function of the momenta,
σ_E(Q,Q') satisfies the triangle inequality. In particular,
for familiar mechanical systems it reduces to the Jacobi-Maupertuis
metric.
The most subtle notion to generalize is curvature. We will use the
second variation of the action to find a quantity ℛ_ij(q,p)
which transforms as a symmetric tensor under co-ordinate transformations
in configuration space and which reduces to the Riemann tensor in
Riemannian geometry: ℛ_ij(q,p)=-g_im(q)p^kp^lR_ klj^m(q)
where two of the indices of the Riemann tensor are contracted by momentum.
An explicit formula for curvature in terms of derivatives of the hamiltonian
(up to fourth order) will be given. Its trace is the generalization
of the Ricci form; integrating over momentum gives the analogue of
the Ricci scalar density. We will show that in the case of most physical
interest,
H(q,p)=1/2g^ij(q)[p_i-A_i(q)][p_j-A_j(q)]+ϕ(q)
this Ricci scalar density is a natural unified action for gravity
coupled to electromagnetic and dilaton fields. Usually such actions
arise from much more complicated theories with many unwanted fields
(Kaluza-Klein, String theory etc.).
To further strengthen our claim of having found the correct generalization,
we will show that in some simple but rather subtle cases (Lagrange
points, Penning trap) the positivity of curvature is sufficient for
stability; while negative curvature implies instability.
The simple harmonic oscillator has constant positive curvature; it
also has finite diameter for the allowed subset of states of the configuration
space for a given a energy. Its phase space has finite volume w.r.t.
to the Boltzmann measure. Thus the simple harmonic oscillator is the
mechanical analogue of the sphere 𝕊^2.
In Riemannian geometry, Myer's theorem says that the diameter is finite
when the curvature is bounded below by a positive constant. Under
the same condition, the Lichnerowicz theorem says that the spectrum
of the Laplacian has a gap (the first non-zero eigenvalue is bounded
below by a constant) . Perhaps there are generalizations to mechanical
systems.
Let us spell out these ideas in some more detail before delving into
calculations.
§.§ Distance
Even in Riemannian geometry, the natural way to measure distance between
two points Q',Q is to find the minimum over piece-wise differentiable
curves of the action
S=1/2∫_0^Tg_ijq̇^iq̇^jdt, q(0)=Q', q(T)=Q
rather than the arc-length
l=∫_0^T√(g_ijq̇^iq̇^j)dt.
The square root makes the arc-length tricky to differentiate as a
function of the curve . (Just as |x| is not differentiable, unlike
x^2). In the jargon of high energy physics, the re-parametrization
invariance of l is a “gauge invariance” that needs to be fixed;
S is a “gauge-fixed” version of l. S is called energy
in many mathematics textbooks<cit.>, but the proper
mechanical analogue is action.
Let s_T(Q,Q') be the value of S on the minimizing curve. It
is related to the Riemannian distance d(Q,Q') (i.e., the minimum
of the arc-length) through
s_T(Q,Q')=d^2(Q,Q')/2T.
It satisfies the Hamilton-Jacobi equation
1/2g^ij∂_Q^is_T∂_Q^js_T+∂ s_T/∂ T=0.
Its Legendre transform
σ_E(Q,Q')=min_T[ET+s_T(Q,Q')], σ_E(Q,Q')=√(2E)d(Q,Q')
satisfies the stationary Hamilton-Jacobi equation (the eikonal equation
in the language of optics)
1/2g^ij∂_Q^iσ_E∂_Q^jσ_E=E.
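The Legendre transform relating s_T and σ_E can be checked in a few lines of computer algebra; a minimal sympy sketch:

import sympy as sp

T, E, d = sp.symbols('T E d', positive=True)
action = E*T + d**2/(2*T)                  # E T + s_T with s_T = d^2/(2T)
Tmin = sp.solve(sp.diff(action, T), T)[0]  # minimizing time T* = d/sqrt(2E)
print(sp.simplify(action.subs(T, Tmin)))   # sqrt(2)*sqrt(E)*d, i.e. sigma_E = sqrt(2E) d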
Because T,E only appear as overall factors in these formulas, it
is customary in geometry to choose units where they are set to constant
values (T=1/2=E) ; this corresponds to choosing a parametrization
where the velocity of the geodesic is unity.
For a more general mechanical system the action is still some integral
over the curve
S=∫ L(q,q̇)dt
but it may not be a purely quadratic function of the velocities. If
we drop the condition that L(q,q̇) be quadratic in q̇,
but insist that it is homogenous
L(q,λq̇)=λ^rL(q,q̇), λ,r>0
we get the well-studied case of Finsler geometry<cit.>.
As Chern points out, this is the case originally studied by Riemann
in his Habilitation; later, the name Riemannian geometry came to be
associated to the restricted case of a quadratic form. Finsler geometry
is so natural that Chern says<cit.>
“Finsler geometry is not a generalization of Riemannian geometry.
It is better described as Riemannian geometry without the quadratic
restriction”.
The typical hamiltonian of a mechanical system is a quadratic, but
not homogenous function of momenta
H(q,p)=1/2g^ij[p_i-A_i(q)][p_j-A_j(q)]+ϕ(q)
corresponding to a Lagrangian
L(q,q̇)=1/2g_ijq̇^iq̇^j+A_iq̇^i-ϕ(q).
This describes a particle moving under the influence of gravitational,
electromagnetic and scalar forces. It is natural look for a geometry
associated to such (or even more general) hamiltonians. To paraphrase
Chern, we seek
“Riemannian Geometry Without the Homogenous Restriction”.
Such ideas go back to Hamilton<cit.> himself, as noted by
Klimes<cit.>.
§.§ Hamiltonian Mechanics
Let us recall some basic facts of mechanics. By Darboux's theorem<cit.>
every symplectic manifold (Γ,ω) can be covered by co-ordinate
charts such that in each chart the symplectic form has constant coefficients:
ω=dp_idq^i.
These co-ordinates satisfy the canonical Poisson brackets
{ q^i,q^j} =0={ p_i,p_j} , { p_i,q^j} =δ_i^j
A hamiltonian H:Γ→ℝ determines a family of curves
that pass through each point of M, satisfying Hamilton's equations
q̇^i=H^i, ṗ_i=-H_i
where
q̇^i=dq^i/dt, H^i=∂ H/∂ p_i, H_i=∂ H/∂ q^i.
These curves are the extrema of the action
S=∫[p_iq̇^i-H]dt.
If we are also given a Lagrangian sub-manifold M⊂Γ,
locally Γ can be identified<cit.> with
the co-tangent bundle T^*M. That is, there is a neighborhood
in which M is determined by p_i=0 so that q^i are co-ordinates
on it. An example is the case where M is the configuration space
of a physical system.
Given a pair of points that are close enough and a time T, there
is a solution to Hamilton's equations with the boundary conditions
q^i(0)=Q^i', q^i(T)=Q^i
The action of this solution s_T(Q,Q') has a Legendre transform
σ_E(Q,Q')=min_T[ET+s_T(Q,Q')]
They satisfy the time dependent
H(Q,∂_Qs_T)+∂ s_T/∂ T=0
and stationary
H(Q,∂_Qσ_E)=E
versions of the Hamilton-Jacobi equation. Thus σ_E(Q,Q')
can be thought of as a generalization of the Riemannian distance function.
We will adopt terminology from optics and call σ_E(Q,Q')
the eikonal.
But there are important differences; it may not be a homogenous function
of E unless the hamiltonian happens to have some sort of scale
symmetry. In general, it is not symmetric:
σ_E(Q,Q')≠σ_E(Q',Q)
An example is a particle moving in a magnetic field (there is an explicit
calculation below). Thus, we will not be able to define a metric (in
the sense of topology) on the Lagrangian manifold M (configuration
space) using σ_E. But that should not bother physicists
too much: we already gave that up when we allowed g_ij to have
Lorentzian signature in relativistic mechanics.
§.§.§ Time Reversal Invariant systems
If the hamiltonian is time reversal invariant
H(q,p)=H(q,-p)
the eikonal will be a symmetric function
σ_E(Q,Q')=σ_E(Q',Q).
An example is the case of a particle moving in a potential but with
no magnetic field:
H(q,p)=1/2g^ijp_ip_j+V(q).
In this example the stationary Hamilton-Jacobi equation can be rewritten
as
1/(2[E-V(Q)]) g^ij∂_Q^iσ_E∂_Q^jσ_E=1
which is just the eikonal equation for the Jacobi-Maupertuis metric
ĝ_ij(q)=[E-V(q)]g_ij(q).
The trajectories must lie in the region where V(q)<E; they are
geodesics of the Jacobi-Maupertuis metric on the manifold whose boundary
consists of turning points where E=V(q). In particular, σ_E(Q,Q')
satisfies the triangle inequality and is a metric (in the sense of
topology) .
This example suggests that in the case of time reversal invariant
systems for which H(q,p) is a convex function of momenta, (i.e.,
H^ij(q,p)≡∂^2H/∂ p_i∂ p_j
is a positive matrix), the eikonal σ_E(Q,Q') is a metric
in the sense of topology on some subset M_E⊂ M of allowed
configurations. It would be interesting to have a rigorous mathematical
proof of this.
Some Remarks:
* Is there a version of Myer's theorem<cit.>? That
is, given that the Ricci curvature(defined below) is bounded below
ℛ≥ω^2>0 , does it follow that σ_E(Q,Q')≤πE/ω?
We will see some elementary examples that suggest that this is true.
Is the fundamental group of M_E finite?
* In the other direction, does -ℛ≥ω^2>0 and boundedness
of σ_E imply that M_E has infinite fundamental group?
This could have applications to ergodicity.
* When we pass to the quantum theory, σ_E becomes the phase
of the wave function and the HJ equation becomes the Schrodinger equation.
In the toy model where M is one-dimensional, it is possible to
find a quantum theory of gravity based on this interpretation<cit.>.
Perhaps it is of interest to see how much of that generalizes to higher
dimensions.
§.§ Volume
Given a hamiltonian H:Γ→ℝ, there is a natural measure
of integration on phase space (motivated by Thermodynamics), the Boltzmann
weight (of course, n = 1/2 dim Γ):
dμ_H=e^-Hd^npd^nq/[2π]^n/2.
It is normalized to agree, in that special case, with the Riemannian
volume √(g)d^nq on the configuration space (after integrating
out the momentum directions). Also, we have chosen units in which
the temperature is equal to one.
§.§ Curvature
As noted earlier, there is no obstruction to choosing local co-ordinates
in which the symplectic form has constant components. So, unlike a
Riemannian metric, a symplectic form does not uniquely determine a
connection or curvature. There are many torsion-less connections,
that preserve the symplectic form; there is no local obstruction to
choosing the curvature to be zero. There could be global obstructions
however. It is possible to choose a connection on a symplectic manifold
by a variational principle<cit.>. This is useful
in deformation quantization. None of this has any dependence on the
hamiltonian.
Instead, we want to construct a curvature from the Hamiltonian that
measures the response of a mechanical system to small perturbations.
But a hamiltonian H does not, by itself, determine a curvature. For example, there is a neighborhood
of every minimum of H in which the Hamiltonian can be brought to
the Birkhoff Normal Form<cit.>. Assuming
that the natural frequencies of small oscillations at the minimum
are not rationally related (which is the generic case) there is a
canonical transformation that brings H to the quadratic form
H(q,p)=1/2∑_k[p_k^2+ω_k^2q_k^2]+⋯
up to any desired order ≥3 in p_k,q_k.
In Riemannian geometry, the infinitesimal deviation of geodesics (which
is determined by the second variation of the action) determines curvature.
It would be useful to have a generalization of curvature to more general
mechanical systems. For example, negative curvature could be an indication
of dynamical instability<cit.>.
Given a symplectic manifold Γ, a hamiltonian H:Γ→ℝ
and a Lagrangian sub-manifold M⊂Γ we will construct
a notion of curvature. The trick is to again consider the second variation
of the action. We will be able to write it as
𝒮_1=∫[1/2G_ij(q,p)∘ξ^i∘ξ^j-1/2ξ^iξ^jℛ_ij(q,p)]dt
Here ξ is the infinitesimal variation of the orbit, thought of
as a curve in M. Also, ∘ξ^i is a covariant
derivative of ξ along the orbit. (The explicit formula is given
later). We do not attempt to define a covariant derivative (connection,
parallel transport etc.) along an arbitrary direction.
G_ij(q,p) is the inverse matrix of the second derivative of the
hamiltonian w.r.t. momentum
H^ikG_kj=δ_j^i, H^ij=∂^2 H/∂ p_i∂ p_j.
We will require that this second derivative H^ij of the
hamiltonian be a positive matrix, so that the inverse exists. (It
will be clear in most cases how to do a “Wick Rotation” to the
case (e.g., of Lorentzian signature) when H^ij is only invertible
and not positive.) That is, we require that H(q,p) is a convex
function of momenta. We will prove that the quantities G_ij(q,p),ℛ_ij(q,p)
transform as symmetric tensors under co-ordinate transformations q^i→q̃^i(q).
They are not in general homogenous functions of p. An explicit
expression for the curvature tensor ℛ_ij(q,p) in terms
of derivatives (up to fourth order) of H will be given.
There is also an analogue of the Ricci tensor
ℛ(q,p)=H^ijℛ_ij(q,p)
The generalization of the Ricci scalar-density is its average over
momentum:
ℜ(q)=∫ℛ(q,p)e^-H(q,p)d^np/[2π]^n/2.
§.§.§ Riemannian Geometry
In the particular case of Riemannian geometry
H(q,p)=1/2g^ij(q)p_ip_j
they reduce to the Riemann tensor R_ klj^m, Ricci tensor R_ij
and the Ricci scalar-density as follows:
ℛ_ij(q,p)=-g_imH^kH^lR_ klj^m(q), ℛ(q,p)=H^kH^lR_kl(q), ℜ(q)=√(g)R(q)
where
H^k=g^kmp_m.
Thus ℛ is the Einstein-Hilbert Lagrangian density for
GR (with Euclidean signature).
§.§.§ Adding a Magnetic Field
The hamiltonian
H=1/2g^kl[p_k-A_k][p_l-A_l]
leads to
ℛ_ij=-g_imH^kH^lR_ klj^m+1/4F_ikF_jlg^kl+1/2H^k{∂_jF_ki+∂_iF_kj}
ℛ=H^kH^lR_kl+1/4F_ikF_jlg^klg^ij+H^kg^ij∂_iF_kj
ℛ=[R+1/4F_ikF_jlg^klg^ij]√(g)
Thus ℛ is exactly the Lagrangian density for Einstein-Maxwell
theory. We get the correct “unified” variational principle in
a natural geometric theory without having to assume extra dimensions
(as in Kaluza-Klein theory).
§.§.§ Adding a Scalar Field
If we add also a scalar potential
H=1/2g^kl[p_k-A_k][p_l-A_l]+ϕ
ℛ_ij=-g_imH^kH^lR_ klj^m+1/4F_ikF_jlg^kl+1/2H^k{∂_jF_ki+∂_iF_kj} +∂_i∂_jϕ
In particular, the curvature of a non-relativistic particle with potential
energy ϕ is simply its Hessian ∂_i∂_jϕ.
The harmonic oscillator has constant positive curvature. The inverted
harmonic oscillator (which has an unstable equilibrium point) has
constant negative curvature. The Ricci curvature is
ℛ=H^kH^lR_kl+1/4F_ikF_jlg^klg^ij+H^kg^ij∂_iF_kj+Δϕ
Again for a a non-relativistic particle the Ricci curvature is the
Laplacian of the potential.
The generalization of the Ricci scalar density in this case
ℜ=[R+1/4F_ikF_jlg^klg^ij+Δϕ]e^-ϕ√(g)
Similar expression also arise as effective Lagrangian densities in
string theory and in Kaluza-Klein theories; the scalar ϕ is
the dilaton in that context[I thank Sumit Das for clarifying this point.].
If we make the field redefinition
g̃_ij=e^2αϕg_ij
and choose
α=-1/n-2
the scalar curvature density can be brought to the more conventional
form (dropping a total derivative)
ℜ=√(g̃)R̃+1/4F_ikF_jlg̃^klg̃^ij√(g̃)e^{-2ϕ/(n-2)}+(2n-1)/(n-2)√(g̃)g̃^ij∂_iϕ∂_jϕ
This action describes a scalar and a photon minimally coupled to the
gravitational field, with an additional non-minimal coupling of the
scalar to the photon. The parametrization of the original Hamiltonian
is, for reference,
H=1/2e^{-2ϕ/(n-2)}g̃^ij[p_i-A_i][p_j-A_j]+ϕ.
More specific examples are given later (Section <ref>).
We now turn to the explicit calculations to establish these facts.
§ CO-ORDINATE TRANSFORMATIONS
Our considerations are local, best described in old fashioned co-ordinate
notation. It is important to know how quantities transform under co-ordinate
transformations and to identify tensorial quantities, which transform
homogeneously.
Let us begin with Hamilton's equations themselves. The configuration
space M has co-ordinates q^i, which determine a canonical
co-ordinate system on Γ with conjugate variables p_i.
We can transform to any new set of co-ordinates q̃^i which
are smooth functions of q^i such that the inverse transformation
is smooth as well. The momenta p̃_i conjugate to q̃^i
are given by the transformation law of covariant vectors fields (components
of a 1-form)
p̃_i=∂ q^j/∂q̃^ip_j.
It follows that q̇^i and H^i=∂ H/∂ p_i
transform as the components of a contra-variant vector field.
H̃^j=∂q̃^j/∂ q^iH^i
But ṗ_i and H_i=∂ H/∂ q^i
do not transform homogeneously. Instead,
H̃_j=H_b∂ q^b/∂q̃^j+H^ap̃_k∂^2q̃^k/∂ q^c∂ q^a∂ q^c/∂q̃^j
We can see this by rewriting
dH=H_idq^i+H^idp_i
in the new canonical co-ordinate system:
dH=H_i∂ q^i/∂q̃^adq̃^a+H^id{p̃_a∂q̃^a/∂ q^i}
=H_i∂ q^i/∂q̃^adq̃^a+H^i∂q̃^a/∂ q^idp̃_a+H^ip̃_a∂^2q̃^a/∂ q^j∂ q^idq^j
By collecting the coefficients of dp̃_j,dq̃^j
we get the above transformation laws for H̃^j,H̃_j.
It will be convenient to denote the various derivatives of the hamiltonian
by
H_j_1⋯ j_s^i_1⋯ i_r=∂^r+sH/∂ p_i_1⋯∂ p_i_r∂ q^j_1⋯∂ q^j_s
That is, the upper indices correspond to differentiation with respect
to p_i and the lower indices to q^i. By extension of the
above argument, we see that H^i_1⋯ i_r transform as
the components of a symmetric tensor under canonical co-ordinate transformations;
but that the mixed derivatives H_j_1⋯ j_s^i_1⋯ i_r=∂^r+sH/∂ p_i_1⋯∂ p_i_r∂ q^j_1⋯∂ q^j_s
with s>0 transform inhomogeneously.
We are assuming that the matrix H^ij is positive; so it has an
inverse G_jk at every point (q,p).
H^ijG_jk=δ_k^i.
This G_ij (which could depend on p as well as q) is our
analogue of the metric tensor; in particular it transforms covariant
tensor. But we will not use G_ij or H^ij to raise
or lower indices (except when we talk of the special case of Riemannian
geometry).
The curvature computation makes sense as long as H^ij is invertible,
even if not positive (as in Lorentzian geometry). It would be interesting
to generalize to the case where H^ij are not invertible (“sub-Hamiltonian
geometry”), analogous to sub-Riemannian geometry<cit.>.
In fact, this paper arose out of my attempts to find a formula for
curvature in sub-Riemannian geometry.
§ THE SECOND VARIATION
Under the variation q^i↦ q^i+ϵξ^i,p_i↦ p_i+ϵπ_i
the change of the action S=∫[p_iq̇^i-H]dt is, to second
order,
S_ϵ=S+ϵ∫[π_iq̇^i+p_iξ̇^i-H_iξ^i-H^iπ_i]dt+ϵ^2∫[π_iξ̇^i-ℋ]dt+O(ϵ^3)
where
ℋ=1/2[H_ijξ^iξ^j+2H_i^jξ^iπ_j+H^ijπ_iπ_j]
Requiring that the first order variation of S be zero gives us
Hamilton's equations. Given a solution of Hamilton's equations, the
second variation (“Jacobi Functional”)
𝒮=∫[π_iξ̇^i-ℋ]dt
can be thought of as the action of a mechanical system with quadratic
(albeit time dependent) hamiltonian ℋ. It has an extremum
when
ξ̇^i=H_j^iξ^j+H^ijπ_j, π̇_i=-H_ijξ^j-H_i^jπ_j
The solutions of these equations are called “Jacobi fields”. They
describe the change of the orbit under infinitesimal perturbations
of boundary conditions. Because H^ij is invertible, we can eliminate
π_i in favor of ξ̇^i
π_j=G_jkξ̇^k-G_jkH_l^kξ^l
in the Jacobi functional to get a “Lagrangian” version of it:
𝒮_1=∫[1/2G_ijξ̇^iξ̇^j-ξ̇^iξ^jG_ikH_j^k+1/2ξ^iξ^j{ -H_ij+H_i^kH_j^lG_kl}]dt
We will mimic standard computations of Riemannian geometry <cit.>
to regroup the integrand into tensorial terms. This will lead us to
curvature.
Start with the identity
ξ̇^iξ^j=1/2[ξ̇^iξ^j-ξ̇^jξ^i]+1/2d/dt[ξ^iξ^j]
so that
-ξ̇^iξ^jG_ikH_j^k=-1/2[ξ̇^iξ^j-ξ̇^jξ^i]G_ikH_j^k-G_ikH_j^k1/2d/dt[ξ^iξ^j]
=1/2ξ̇^iξ^j[-G_ikH_j^k+G_jkH_i^k]+1/2ξ^iξ^jd/dt[G_ikH_j^k]+total derivative
=1/2ξ̇^iG_ik[-H_j^k+H^kmG_jlH_m^l]ξ^j+1/2ξ^iξ^jd/dt[G_ikH_j^k]+total derivative
Imitating the calculation in Riemannian geometry, we also add another
total derivative (this is the step that is not obvious):
d/dt[1/4Ġ_ijξ^iξ^j]=1/2Ġ_ijξ̇^iξ^j+1/4G̈_ijξ^iξ^j
allowing us to write
-ξ̇^iξ^jG_ikH_j^k=1/2ξ̇^iG_ik[-H_j^k+H^kmG_jlH_m^l+G^klĠ_lj]ξ^j+ξ^iξ^j{1/2d/dt[G_ikH_j^k]+1/4G̈_ij} +total derivative
This suggests that we define an analogue of the Christoffel symbol
Γ_ij^kq̇^i of Riemannian geometry:
γ_j^k=1/2[-H_j^k+H^kmG_jlH_m^l+H^klĠ_lj]
so that
-ξ̇^iξ^jG_ikH_j^k=ξ̇^iG_ikγ_j^kξ^j+ξ^iξ^j{1/2d/dt[G_ikH_j^k]+1/4G̈_ij} +total derivative
The point (which we prove below) is that although ξ̇^i
does not transform as a vector , the “covariant time derivative”
∘ξ^k=ξ̇^k+γ_j^kξ^j
does. We do not attempt to define a covariant derivative along
an arbitrary direction; only along the orbit of the Hamiltonian vector
field.
We can now rewrite 𝒮_1 in terms of this covariant derivative:
𝒮_1=∫[1/2G_ijξ̇^iξ̇^j+ξ̇^iG_ikγ_j^kξ^j+ξ^iξ^j{1/2d/dt[G_ikH_j^k]+1/4G̈_ij} +1/2ξ^iξ^j{ -H_ij+H_i^kH_j^lG_kl}]dt
=∫[1/2G_ij{ξ̇^iξ̇^j+2ξ̇^iγ_k^jξ^k} +ξ^iξ^j{1/2d/dt[G_ikH_j^k]+1/4G̈_ij} +1/2ξ^iξ^j{ -H_ij+H_i^kH_j^lG_kl}]dt
=∫[1/2G_ij{ξ̇^iξ̇^j+ξ̇^iγ_k^jξ^k+ξ̇^jγ_k^iξ^k} +ξ^iξ^j{1/2d/dt[G_ikH_j^k]+1/4G̈_ij} +1/2ξ^iξ^j{ -H_ij+H_i^kH_j^lG_kl}]dt
=∫[1/2G_ij{ξ̇^iξ̇^j+ξ̇^iγ_k^jξ^k+γ_k^iξ^kξ̇^j} +ξ^iξ^j{1/2d/dt[G_ikH_j^k]+1/4G̈_ij} +1/2ξ^iξ^j{ -H_ij+H_i^kH_j^lG_kl}]dt
=∫[1/2G_ij∘ξ^i∘ξ^j-1/2G_klγ_i^kγ_j^lξ^iξ^j+ξ^iξ^j{1/2d/dt[G_ikH_j^k]+1/4G̈_ij} +1/2ξ^iξ^j{ -H_ij+H_i^kH_j^lG_kl}]dt
Thus
𝒮=∫[1/2G_ij∘ξ^i∘ξ^j-ξ^iξ^j1/2{ G_klγ_i^kγ_j^l-d/dt[G_ikH_j^k]-1/2G̈_ij+H_ij-H_i^kH_j^lG_kl}]dt
The symmetric part of the quantity in the curly brackets is an analogue
of curvature.
𝒮_1=∫[1/2G_ij∘ξ^i∘ξ^j-1/2ξ^iξ^jℛ_ij]dt
Rewriting time derivatives as Poisson Brackets, we can express it
in terms of the first four derivatives of the hamiltonian:
ℛ_ij=G_klγ_i^kγ_j^l-1/2{ H,G_ikH_j^k+G_jkH_i^k} -1/2{ H,{ H,G_ij}} +H_ij-H_i^kH_j^lG_kl
The trace
ℛ=H^ijℛ_ij
plays the role of the Ricci tensor. There is no notion of Ricci scalar
in general Hamiltonian mechanics. But, it makes sense to integrate
this w.r.t. the Boltzmann measure
ℜ(H)=∫ℛ dμ_H
to give a functional of the hamiltonian. We will see that this reduces
to the integral of the Ricci scalar over a Riemannian manifold.
§.§ The Transformation of γ_j^i and ℛ_ij
Recall that ξ^i transforms as the components of a vector field:
ξ̃^i=∂q̃^i/∂ q^j ξ^j
but not its time derivative:
d/dtξ̃^i=d/dt[∂q̃^i/∂ q^jξ^j]
dξ̃^i/dt=∂q̃^i/∂ q^jξ̇^j+∂^2q̃^i/∂ q^k∂ q^jq̇^kξ^j
The inhomogeneous term in the covariant time derivative
∘ξ^i=ξ̇^i+γ_j^kξ^j
is cancelled if the symbols γ_j^i transform as
γ̃_j^i=∂q̃^i/∂ q^kγ_l^k∂ q^l/∂q̃^j-∂ q^l/∂q̃^j∂^2q̃^i/∂ q^k∂ q^lq̇^k
For,
∘ξ̃^i=dξ̃^i/dt+γ̃_j^iξ̃^j
=∂q̃^i/∂ q^jξ̇^j+∂^2q̃^i/∂ q^k∂ q^jq̇^kξ^j+{∂q̃^i/∂ q^kγ_l^k∂ q^l/∂q̃^j-∂ q^l/∂q̃^j∂^2q̃^i/∂ q^k∂ q^lq̇^k}∂q̃^j/∂ q^m ξ^m
=∂q̃^i/∂ q^j[ξ̇^j+γ_l^jξ^l]
Using Hamilton's equations, we can write the required transformation
law as
γ̃_j^i=∂q̃^i/∂ q^kγ_l^k∂ q^l/∂q̃^j-∂ q^l/∂q̃^j∂^2q̃^i/∂ q^k∂ q^lH^k
For later reference we rewrite this by a relabelling of indices as
γ̃_k^l=∂ q^c/∂q̃^k∂q̃^l/∂ q^aγ_c^a-∂ q^c/∂q̃^k∂^2q̃^l/∂ q^c∂ q^aH^a.
γ_k^l transforms as above. So ∘ξ^i
and ℛ_ij transform as tensors.
Proof. We need the transformations of H_k^l, P_k^l≡ H^ljG_kiH_j^i, and G_kjḢ^ik.
§.§.§ The Transformation of H_k^l
Recall that
H̃_j=H_b∂ q^b/∂q̃^j+H^ap̃_k∂^2q̃^k/∂ q^c∂ q^a∂ q^c/∂q̃^j
By differentiating w.r.t. to p̃_i we get the transformation
of H̃_j^i:
H̃_j^i=∂ q^b/∂q̃^j∂q̃^i/∂ q^aH_b^a+∂ q^c/∂q̃^j∂^2q̃^i/∂ q^c∂ q^aH^a+∂ q^c/∂q̃^j ∂q̃^i/∂ q^a ∂^2q̃^k/∂ q^c∂ q^bH^abp̃_k
Relabelling indices (for later use)
H̃_k^l=∂ q^c/∂q̃^k∂q̃^l/∂ q^aH_c^a+∂ q^c/∂q̃^k∂^2q̃^l/∂ q^c∂ q^aH^a+∂ q^b/∂q̃^k ∂q̃^l/∂ q^d ∂^2q̃^m/∂ q^b∂ q^cH^dcp̃_m
§.§.§ The Transformation of P_k^l
P̃_k^l=H̃^ljG̃_ki{∂ q^b/∂q̃^j∂q̃^i/∂ q^aH_b^a+∂ q^c/∂q̃^j∂^2q̃^i/∂ q^c∂ q^aH^a+∂ q^c/∂q̃^j ∂q̃^i/∂ q^a ∂^2q̃^m/∂ q^c∂ q^bH^abp̃_m}
Note that
H̃^ljG̃_ki∂ q^c/∂q̃^j ∂q̃^i/∂ q^a=H^dcG_na∂q̃^l/∂ q^d∂ q^n/∂q̃^k
so that
P̃_k^l=H^mbG_naH_b^a∂q̃^l/∂ q^m∂ q^n/∂q̃^k+∂q̃^l/∂ q^mH^mcG̃_ki∂^2q̃^i/∂ q^c∂ q^aH^a+H^dcG_na∂q̃^l/∂ q^d∂ q^n/∂q̃^k∂^2q̃^m/∂ q^c∂ q^bH^abp̃_m
=P_n^m∂q̃^l/∂ q^m∂ q^n/∂q̃^k+∂q̃^l/∂ q^mH^mcG̃_ki∂^2q̃^i/∂ q^c∂ q^aH^a+H^dc∂q̃^l/∂ q^d∂ q^b/∂q̃^k∂^2q̃^m/∂ q^c∂ q^bp̃_m
§.§.§ The Transformation of G_kiḢ^il
G̃_ki dH̃^il/dt=G̃_ki d/dt[∂q̃^i/∂ q^a∂q̃^l/∂ q^cH^ac]
=G̃_ki∂q̃^i/∂ q^a∂q̃^l/∂ q^cḢ^ac+G̃_ki∂q̃^i/∂ q^a∂^2q̃^l/∂ q^c∂ q^bq̇^bH^ac+G̃_ki∂^2q̃^i/∂ q^a∂ q^b∂q̃^l/∂ q^cq̇^bH^ac
=G̃_ki∂q̃^i/∂ q^a∂q̃^l/∂ q^cḢ^ac+∂^2q̃^l/∂ q^c∂ q^bq̇^b{ H^acG̃_ki∂q̃^i/∂ q^a} +G̃_ki∂^2q̃^i/∂ q^a∂ q^b∂q̃^l/∂ q^cq̇^bH^ac
=G̃_ki∂q̃^i/∂ q^a∂q̃^l/∂ q^cḢ^ac+∂^2q̃^l/∂ q^c∂ q^bq̇^b{ H^acG_ad∂ q^d/∂q̃^k} +G̃_ki∂^2q̃^i/∂ q^a∂ q^b∂q̃^l/∂ q^cq̇^bH^ac
=G̃_ki∂q̃^i/∂ q^a∂q̃^l/∂ q^cḢ^ac+∂^2q̃^l/∂ q^c∂ q^bq̇^b∂ q^c/∂q̃^k+G̃_ki∂^2q̃^i/∂ q^a∂ q^b∂q̃^l/∂ q^cq̇^bH^ac
=G̃_ki∂q̃^i/∂ q^a∂q̃^l/∂ q^cḢ^ac+∂^2q̃^l/∂ q^c∂ q^a∂ q^c/∂q̃^kq̇^a+G̃_ki∂^2q̃^i/∂ q^a∂ q^b∂q̃^l/∂ q^cq̇^bH^ac
=G̃_ki∂q̃^i/∂ q^a∂q̃^l/∂ q^cḢ^ac+∂^2q̃^l/∂ q^c∂ q^a∂ q^c/∂q̃^kH^a+G̃_ki∂^2q̃^i/∂ q^a∂ q^b∂q̃^l/∂ q^cH^bH^ac
=∂ q^b/∂q̃^k∂q̃^l/∂ q^cG_baḢ^ac+∂^2q̃^l/∂ q^c∂ q^a∂ q^c/∂q̃^kH^a+G̃_ki∂^2q̃^i/∂ q^a∂ q^b∂q̃^l/∂ q^cH^bH^ac
where we use Hamilton's equation q̇^a=H^a.
Relabeling c→ a,a→ b,b→ c in the first term and c→ m,a→ c,b→ a,
in the last term (for later use),
G̃_ki dH̃^il/dt=∂ q^c/∂q̃^k∂q̃^l/∂ q^aG_cbḢ^ba+∂^2q̃^l/∂ q^c∂ q^a∂ q^c/∂q̃^kH^a+G̃_ki∂^2q̃^i/∂ q^c∂ q^a∂q̃^l/∂ q^mH^aH^cm
So consider the linear combination
A H̃_k^l+B P̃_k^l+C G̃_ki dH̃^il/dt
=A{∂ q^c/∂q̃^k∂q̃^l/∂ q^aH_c^a+∂ q^c/∂q̃^k∂^2q̃^l/∂ q^c∂ q^aH^a+∂ q^b/∂q̃^k ∂q̃^l/∂ q^d ∂^2q̃^m/∂ q^b∂ q^cH^dcp̃_m}
+B{ P_c^a∂q̃^l/∂ q^a∂ q^c/∂q̃^k+∂q̃^l/∂ q^mH^mcG̃_ki∂^2q̃^i/∂ q^c∂ q^aH^a+H^dc∂q̃^l/∂ q^d∂ q^b/∂q̃^k∂^2q̃^m/∂ q^c∂ q^bp̃_m}
+C{∂ q^c/∂q̃^k∂q̃^l/∂ q^aG_cbḢ^ba+∂^2q̃^l/∂ q^c∂ q^a∂ q^c/∂q̃^kH^a+G̃_ki∂^2q̃^i/∂ q^c∂ q^a∂q̃^l/∂ q^mH^aH^cm}
If we choose
A+C=-1, B+C=0, A+B=0 ⟹ A=C=-1/2, B=1/2
we get the transformation law
1/2[-H̃_k^l+P̃_k^l-G̃_ki dH̃^il/dt]=∂ q^c/∂q̃^k∂q̃^l/∂ q^a1/2[-H_c^a+P_c^a-G_cbḢ^ba]-∂ q^c/∂q̃^k∂^2q̃^l/∂ q^c∂ q^aH^a
So we can choose
γ_c^a=1/2[-H_c^a+P_c^a-G_cbḢ^ba]=1/2[-H_c^a+P_c^a+Ġ_cbH^ba]
(where we used Ġ_cbH^ba+G_cbḢ^ba=0) to get
the transformation law
γ̃_k^l=∂ q^c/∂q̃^k∂q̃^l/∂ q^aγ_c^a-∂ q^c/∂q̃^k∂^2q̃^l/∂ q^c∂ q^aH^a
This is what we wanted.
§ COMPARISON WITH RIEMANNIAN GEOMETRY
We must show that the above formula (<ref>) reduces
to the usual one for curvature in Riemannian geometry. In this section,
unlike before, we raise and lower indices using the metric tensor.
If H=1/2g^ijp_ip_j Hamilton's equations reduce to
the geodesic equation<cit.> . Also,
H^ij=g^ij
H_i^j=∂_ig^jkp_k
Recalling the formula for the Christoffel symbols,
Γ_jk^i=1/2g^im[∂_kg_jm+∂_jg_km-∂_mg_jk]
we can rewrite them in terms of the contra-variant metric tensor
Γ_jk^iq̇^k=1/2g^im[∂_kg_jm+∂_jg_km-∂_mg_jk]q̇^k
=1/2g^im[ġ_jm+{ -g_kag_mb∂_jg^ab+g_jag_kb∂_mg^ab}q̇^k]
=1/2[g^ikġ_jk+{ -p_a∂_jg^ai+g_jap_bg^im∂_mg^ab}]
This is a particular case of the general formula
γ_j^i=1/2[H^ikĠ_kj-H_j^i+G_jlH_m^lH^im]
so that γ_j^i reduces to Γ_jk^iq̇^k
in Riemannian geometry.
The Riemann tensor is
R_ijk^l=∂_jΓ_ik^l-∂_kΓ_ij^l+Γ_jm^lΓ_ik^m-Γ_km^lΓ_ij^m
R_lijk=g_lnR_ ijk^n
To compare the curvatures, it is convenient to choose Riemann normal
co-ordinates in which the first derivative of the metric is zero at
the chosen point.( ≈ denotes equality in Riemannian normal
co-ordinates up to higher order terms.)
g_ij≈δ_ij
R_iklm≈1/2(∂_k∂_lg_im+∂_i∂_mg_kl-∂_k∂_mg_il-∂_i∂_lg_km)
R_iklj≈1/2(∂_k∂_lg_ij+∂_i∂_jg_kl-∂_k∂_jg_il-∂_i∂_lg_kj)
On the other hand,
ℛ_ij≈-1/2(Ḣ_j^i+Ḣ_i^j)-1/2G̈_ij+H_ij
In Riemannian geometry,
Ġ_ij≡{ H,G_ij} =H^k∂_kg_ij
G̈_ij=H^m∂_m[H^n∂_ng_ij]-H_m∂/∂ p_m[H^n∂_ng_ij]=H^mH^n∂_m∂_ng_ij+H^mH_m^n∂_ng_ij-H_mH^mn∂_ng_ij
Ḣ_j^i≡{ H,∂_ig^jkp_k} =H^m∂_m∂_ig^jkp_k-H_k∂_ig^jk
so that in normal co-ordinates
G̈_ij≈ p_kp_l∂_k∂_lg_ij
Ḣ_j^i≈ p_kp_m∂_m∂_ig^jk≈-p_kp_l∂_l∂_ig_jk
H_ij=1/2∂_i∂_jg^klp_kp_l≈-1/2p_kp_l∂_i∂_jg_kl
ℛ_ij≈1/2p_kp_l(∂_l∂_ig_jk+∂_l∂_jg_ik)-1/2p_kp_l∂_k∂_lg_ij-1/2p_kp_l∂_i∂_jg_kl
Thus
ℛ_ij≈-p_kp_lR_iklj
Since ℛ_ij and R_ijk^l are tensors we get the
equality in general co-ordinates
ℛ_ij=-g_imH^kH^lR_ klj^m, H^m=g^mnp_n
Moreover
H^ijℛ_ij=-H^kH^lR_ klm^m=H^kH^lR_ kml^m
so that
ℛ≡ H^ijℛ_ij=H^kH^lR_kl.
Note the Gaussian integrals
∫ e^-1/2g^ijp_ip_jd^np/[2π]^n/2=√(g), ∫ e^-1/2g^ijp_ip_jp_kp_ld^np/[2π]^n/2=√(g)g_kl
Thus we get the Einstein-Hilbert Lagrangian for GR as the Boltzmann
average of the Ricci tensor:
ℜ≡∫ℛ e^-Hd^np/[2π]^n/2=R√(g), R=R_klg^kl.
Thus there might be some merit in considering ℜ(H) as
a variational principle that determines the hamiltonian itself in
the general case.
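This reduction can also be tested mechanically. The sketch below is our own check: it implements the formulas for γ_j^k and ℛ_ij literally for H=1/2g^ijp_ip_j on the round unit 2-sphere, where R_kl=g_kl, so the trace must come out equal to 2H.

import sympy as sp

th, ph = sp.symbols('theta phi')
q = [th, ph]
p = list(sp.symbols('p1 p2'))

g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])     # round unit 2-sphere
H = sum(g.inv()[i, j]*p[i]*p[j] for i in range(2) for j in range(2))/2

def pb(Af, Bf):
    # Poisson bracket {A,B} = dA/dp_i dB/dq^i - dA/dq^i dB/dp_i
    return sum(sp.diff(Af, p[i])*sp.diff(Bf, q[i]) - sp.diff(Af, q[i])*sp.diff(Bf, p[i])
               for i in range(2))

Hup = sp.Matrix(2, 2, lambda i, j: sp.diff(H, p[i], p[j]))    # H^{ij}
G = Hup.inv()                                                 # G_{ij}
Hmix = sp.Matrix(2, 2, lambda j, k: sp.diff(H, p[k], q[j]))   # H_j^k
Gdot = sp.Matrix(2, 2, lambda i, j: pb(H, G[i, j]))           # {H, G_{ij}}

gamma = sp.Matrix(2, 2, lambda j, k: (-Hmix[j, k]
        + sum(Hup[k, m]*G[j, l]*Hmix[m, l] for m in range(2) for l in range(2))
        + sum(Hup[k, l]*Gdot[l, j] for l in range(2)))/2)

# gamma reduces to Gamma^k_{jm} qdot^m: e.g. gamma_theta^phi = cot(theta) p_phi/sin^2(theta)
print(sp.simplify(gamma[0, 1] - p[1]*sp.cos(th)/sp.sin(th)**3))   # 0

Hqq = sp.Matrix(2, 2, lambda i, j: sp.diff(H, q[i], q[j]))

def Rcal(i, j):
    return (sum(G[k, l]*gamma[i, k]*gamma[j, l] for k in range(2) for l in range(2))
            - pb(H, sum(G[i, k]*Hmix[j, k] + G[j, k]*Hmix[i, k] for k in range(2)))/2
            - pb(H, pb(H, G[i, j]))/2
            + Hqq[i, j]
            - sum(Hmix[i, k]*Hmix[j, l]*G[k, l] for k in range(2) for l in range(2)))

trace = sum(Hup[i, j]*Rcal(i, j) for i in range(2) for j in range(2))
print(sp.simplify(trace - 2*H))   # 0, since R_kl = g_kl on the unit sphere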
§ ADDING A MAGNETIC FIELD AND A SCALAR POTENTIAL
The typical hamiltonian of a point particle in physics is a polynomial
of order two in the momenta; it describes its interaction with a gravitational
electromagnetic and scalar field
H=1/2g^kl[p_k-A_k][p_l-A_l]+ϕ
We can compute,
H^k=g^kl[p_l-A_l]
H_j^k=∂_jg^kl[p_l-A_l]-g^kl∂_jA_l
H_m^l=∂_mg^ln[p_n-A_n]-g^ln∂_mA_n
H^kl=g^kl, G_kl=g_kl
H^kmG_jlH_m^l=g^kmg_jl{∂_mg^ln[p_n-A_n]-g^ln∂_mA_n}
=-g^km[∂_mg_jl]g^ln[p_n-A_n]-g^km∂_mA_j
H^klĠ_ij=g^klH^n∂_ng_kl
To proceed further we pass to Riemann normal co-ordinates; also choose A_i=0 at the origin by a choice of
gauge (but of course not the derivative ∂_iA_j).
γ_j^k≈1/2[∂_jA_k-∂_kA_j]=1/2F_jk
and
ℛ_ij≈γ_i^kγ_j^k-1/2{Ḣ_j^i+Ḣ_i^j} -1/2G̈_ij+H_ij-H_i^kH_j^lG_kl
Ḣ_j^i≈-p_kp_l∂_l∂_ig_jk-p_k∂_k∂_jA_i
G̈_ij≈ p_kp_l∂_k∂_lg_ij
H_ij≈-1/2p_kp_l∂_i∂_jg_kl+g^kl∂_iA_k∂_jA_l-p_k∂_i∂_jA_k+∂_i∂_jϕ
H_i^kH_j^lG_kl≈∂_iA_k∂_jA_k
-1/2{Ḣ_j^i+Ḣ_i^j} +H_ij-H_i^kH_j^lG_kl≈1/2p_kp_l[∂_l∂_ig_jk+∂_l∂_jg_ik-∂_i∂_jg_kl]+
p_k{1/2∂_k∂_jA_i+1/2∂_k∂_iA_j-∂_i∂_jA_k} +g^kl∂_iA_k∂_jA_l-∂_iA_k∂_jA_k+∂_i∂_jϕ
=1/2p_kp_l[∂_l∂_ig_jk+∂_l∂_jg_ik-∂_i∂_jg_kl]+1/2p_k{∂_jF_ki+∂_iF_kj} +∂_i∂_jϕ
ℛ_ij≈1/4F_ikF_jk+1/2p_kp_l[∂_l∂_ig_jk+∂_l∂_jg_ik-∂_i∂_jg_kl]+1/2p_k{∂_jF_ki+∂_iF_kj} +∂_i∂_jϕ
In a general co-ordinate system, this is the tensorial equality
ℛ_ij=-g_imH^kH^lR_ klj^m+1/4F_ikF_jlg^kl+1/2H^k{∂_jF_ki+∂_iF_kj} +∂_i∂_jϕ
Taking a trace
ℛ=H^kH^lR_kl+1/4F_ikF_jlg^klg^ij+H^kg^ij∂_iF_kj+Δϕ
§.§ An Action Principle for Fields
The integral over momentum with the Boltzmann weight now has an extra
factor of e^-ϕ:
∫ e^-{1/2g^ij[p_i-A_i][p_j-A_j]+ϕ}d^np/[2π]^n/2=e^-ϕ√(g),
∫ e^-{1/2g^ij[p_i-A_i][p_j-A_j]+ϕ}[p_k-A_k][p_l-A_l]d^np/[2π]^n/2=e^-ϕ√(g)g_kl
Thus
ℜ≡∫ℛ e^-Hd^np/[2π]^n/2=[R+1/4F_ikF_jlg^klg^ij+Δϕ]e^-ϕ√(g)
Similar actions also arise in string theory and in Kaluza-Klein theories;
the scalar ϕ is the dilaton in that context.
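For a diagonal metric the momentum integrals used above are elementary Gaussians and can be confirmed directly; a small sympy check of our own (with A_i set to zero, since shifting p_i-A_i leaves the integrals unchanged):

import sympy as sp

p1, p2, a1, a2, phi = sp.symbols('p1 p2 a1 a2 phi', positive=True)
# diagonal case: g_ij = diag(a1, a2), so sqrt(det g) = sqrt(a1 a2)
H = (p1**2/a1 + p2**2/a2)/2 + phi
w = sp.exp(-H)/(2*sp.pi)

Z0 = sp.integrate(w, (p1, -sp.oo, sp.oo), (p2, -sp.oo, sp.oo))
Z11 = sp.integrate(w*p1**2, (p1, -sp.oo, sp.oo), (p2, -sp.oo, sp.oo))

print(sp.simplify(Z0 - sp.sqrt(a1*a2)*sp.exp(-phi)))      # 0: integral = e^{-phi} sqrt(g)
print(sp.simplify(Z11 - a1*sp.sqrt(a1*a2)*sp.exp(-phi)))  # 0: <p_1 p_1> = g_11 e^{-phi} sqrt(g)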
If we make the field redefinition
g̃_ij=e^2αϕg_ij
and choose
α=-1/n-2
the action of the fields can be brought to the more conventional
form (dropping a total derivative)
ℜ=√(g̃)R̃+1/4F_ikF_jlg̃^klg̃^ij√(g̃)e^{-2ϕ/(n-2)}+(2n-1)/(n-2)√(g̃)g̃^ij∂_iϕ∂_jϕ
The parametrization of the original Hamiltonian is, for reference,
H=1/2e^{-2ϕ/(n-2)}g̃^ij[p_i-A_i][p_j-A_j]+ϕ.
This action describes a scalar and a photon minimally coupled to the
gravitational field, with an additional non-minimal coupling of the
scalar to the photon.
§ PARTICULAR CASES
§.§ The Free Particle
The simplest, but rather trivial, case is the hamiltonian of a particle
moving on the real line
H=1/2p^2
Of course, the curvature is zero. The trajectories are straight lines.
It is straightforward to get
s_T(Q,Q')=(Q-Q')^2/2T, σ_E(Q,Q')=√(2E)|Q-Q'|
They satisfy the HJ equations
1/2[∂_Qs_T]^2+∂ s_T/∂ T=0, 1/2[∂_Qσ_E]^2=E
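Both equations can be confirmed symbolically (on the branch Q > Q', to avoid the absolute value); a minimal sympy check:

import sympy as sp

Q, Qp, T, E = sp.symbols("Q Qp T E", positive=True)
s = (Q - Qp)**2/(2*T)
print(sp.simplify(sp.diff(s, Q)**2/2 + sp.diff(s, T)))   # 0: time-dependent HJ equation
sigma = sp.sqrt(2*E)*(Q - Qp)                            # branch Q > Q'
print(sp.simplify(sp.diff(sigma, Q)**2/2 - E))           # 0: eikonal equation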
§.§ The Harmonic Oscillator
The simplest non-Euclidean geometry is the sphere; it has constant
positive curvature. The mechanical analogue is the simple harmonic
oscillator
H=1/2[p^2+ω^2q^2]
The Lagrangian sub-manifold (configuration space) M is one-dimensional,
just the real line ℝ. Even the real line is curved in
our sense! It is constant and positive:
ℛ_11=ω^2
In this is case, this is also the Ricci form. The phase space has
finite volume in the Boltzmann measure; the induced volume element
on the real line is the Gaussian.
dq∫ e^-Hdp/√(2π)=e^-1/2ω^2q^2dq.
Given E>0, the set of allowed positions
M_E={ q| H(q,p)=E for some p} =[-√(2E)/ω,√(2E)/ω]
is just the interval |q|≤√(2E)/ω; i.e., the major
axis of the energy ellipse. Given two points Q,Q'∈ M_E we have
the solution to the eikonal equation
σ_E(Q,Q')=∫_Q'^Q√(2E-ω^2q^2)dq
Geometrically, this is the area of the region bounded by the energy ellipse,
vertical axes at Q,Q' and the horizontal axis. σ_E(Q,Q')
is a metric (in the sense of topology) on the above interval. The
maximum of σ_E(Q,Q') occurs when Q=√(2E)/ω, Q'=-√(2E)/ω
and is equal to half the area of the energy ellipse. Thus,
σ_E(Q,Q')≤πE/ω
This is reminiscent of Myer's inequality in Riemannian geometry<cit.>.
If the Ricci tensor is bounded below
R_ijξ^iξ^j≥ω^2ξ^iξ^jg_ij, ω>0
the distance between any two points in the manifold is bounded as
well:
d(Q,Q')≤π/ω.
Could there be a generalization of Myer's theorem to more general
mechanical systems with a convex, time-symmetric Hamiltonian?
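The oscillator bound itself is a one-line symbolic computation:

import sympy as sp

q, E, w = sp.symbols('q E omega', positive=True)
a = sp.sqrt(2*E)/w                                   # turning points q = ±a
sigma_max = sp.integrate(sp.sqrt(2*E - w**2*q**2), (q, -a, a))
print(sp.simplify(sigma_max))                        # pi*E/omega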
The inverted harmonic oscillator
H=1/2[p^2-ω^2q^2]
has an unstable equilibrium point at q=0=p; it has constant negative
curvature
ℛ_11=-ω^2
and is the mechanical analogue of Lobachewski space.
§.§ Constant Magnetic Field
In the case of a particle moving on the plane we denote the co-ordinates
by z≡(x,y) instead of q^1,q^2. The Hamiltonian
H=1/2[p_x+1/2By]^2+1/2[p_y-1/2Bx]^2
corresponds to a particle in a constant magnetic field B normal
to the plane. (We choose units where the mass and charge are equal
to one. So B is just the cyclotron frequency.)
The curvature is constant and positive:
ℛ_ij=1/4B^2δ_ij.
Hamilton's equations are equivalent to the Lorentz equations
ẍ=Bẏ, ÿ=-Bẋ
It is instructive to find the action s_T(Z,Z') of the solution
satisfying the boundary conditions
x(0)=X', x(T)=X
y(0)=Y' y(T)=Y
This is a standard exercise in physics textbooks<cit.>
. After a long but straightforward calculation we get
s_T(Z,Z')=[1/4B cot(BT/2)]| Z-Z'|^2-1/2BZ× Z'
where the cross-product is Z× Z'=XY'-YX'.
We can verify directly that this satisfies the Hamilton-Jacobi equation
1/2[∂_Xs_T+1/2BY]^2+1/2[∂_Ys_T-1/2BX]^2+∂ s_T/∂ T=0.
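Because the cotangent structure of s_T is easy to get wrong, a symbolic check is worthwhile; the sketch below (our notation) verifies the HJ equation for the action and Hamiltonian above.

import sympy as sp

B, T, X, Y, Xp, Yp = sp.symbols("B T X Y Xp Yp", real=True)
s_T = sp.Rational(1, 4)*B*sp.cot(B*T/2)*((X - Xp)**2 + (Y - Yp)**2) \
      - sp.Rational(1, 2)*B*(X*Yp - Y*Xp)

hj = sp.Rational(1, 2)*(sp.diff(s_T, X) + B*Y/2)**2 \
   + sp.Rational(1, 2)*(sp.diff(s_T, Y) - B*X/2)**2 + sp.diff(s_T, T)
print(sp.simplify(hj))                     # expect 0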
Its Legendre transform
σ_E(Z,Z')=min_T[ET+s_T(Z,Z')]
=2E/Barcsin[B|Z'-Z|/2√(2E)]+1/2|Z'-Z|√(2E-1/4|Z'-Z|^2B^2)-1/2B Z× Z'
Since the trajectory is a circle of radius √(2E)/B,
only points with |Z-Z'|<2√(2E)/B are connected by a
smooth trajectory. Farther points would be connected by stitching
together piecewise-circular segments. The above formula describes
only one such segment.
Again we can verify directly that the stationary HJ equation is satisfied:
1/2[∂_Xσ_E+1/2BY]^2+1/2[∂_Yσ_E-1/2BX]^2=E
In units where E=1/2 , (i.e., unit velocity)
σ(Z,Z')=1/Barcsin[B|Z'-Z|/2]+1/2|Z'-Z|√(1-1/4|Z'-Z|^2B^2)-1/2B Z× Z'
We can understand each of these terms geometrically. The trajectory
is a circle of radius √(2E)/B connecting Z to Z'
. If B>0 it is described in a counter-clockwise direction.
Let C be the center and let M be the point halfway on the chord
ZZ' . Consider the right triangle CMZ'. The lengths of its sides
are
|CZ'|=1/B, |MZ'|=1/2|Z'-Z|, |CM|=1/B√(1-1/4|Z'-Z|^2B^2)
The half-angle at the center is
∠MCZ'=arcsin[B|Z'-Z|/2]=∠MCZ
Thus each term in σ(Z,Z') has a meaning of an area (times
B), as illustrated in the figure:
* the first term is the area of the circular sector of angle ∠MCZ
(Blue)
* the second term is the area of the right triangle CMZ'(Yellow)
* the third term subtracts the area of the triangle OZZ' (Red)
[Figure: circular trajectory from Z to Z' with center C and chord midpoint M, showing the colored regions entering the area decomposition of σ(Z,Z').]
§.§ Magnetic Field Plus Quadratic potential
We can combine the above two cases and add an extra dimension to get
H=1/2[p_x+By]^2+1/2[p_y-Bx]^2+1/2p_z^2+ϕ
where ϕ is a positive quadratic form in x,y,z. This describes
a particle in a Penning trap or the immediate vicinity of a Lagrange
point in the circular restricted three-body problem. (In the co-rotating
frame of the primary bodies, there is a Coriolis force which is mathematically
identical to the force due to a constant magnetic field normal to
the plane of rotation.) The curvature
ℛ_ij=1/4F_ikF_jlg^kl+∂_i∂_jϕ
can be written conveniently in the co-ordinate system which diagonalizes
∂_i∂_jϕ
∂_i∂_jϕ=([ k_1 0 0; 0 k_2 0; 0 0 k_3 ])
F_ij=([ 0 B 0; -B 0 0; 0 0 0 ])
ℛ_ij=([ k_1+1/4B^2 0 0; 0 k_2+1/4B^2 0; 0 0 k_3 ])
If ϕ is harmonic (e.g., an electrostatic field as in the Penning
trap or a Newtonian Gravitational field as in the three-body problem)
k_1+k_2+k_3=0.
It is well known that such a harmonic potential ϕ does not have
a stable equilibrium as at least one of the k_i must be negative.
Adding a strong enough magnetic field can stabilize such a potential;
this is the idea behind the Penning trap and the surprising stability
of the Lagrange points L_4 and L_5.
Whether harmonic or not, the case
k_3>0, k_1<0, k_2<0, 1/2B^2>√(k_1k_2)+(|k_1|+|k_2|)/2
is known to be stable<cit.>.
In this case, if the curvature is positive,
k_3>0, 1/4B^2>|k_1|, 1/4B^2>|k_2|
it follows that 1/4B^2 is also greater than the average
of the geometric and arithmetic means of |k_1| and |k_2|:
1/4B^2>1/2[√(k_1k_2)+(|k_1|+|k_2|)/2].
Thus positivity of curvature is sufficient for stability in this case.
It is not necessary: we can have
|k_2|<1/4B^2<|k_1|
and still have
(|k_1|+|k_2|)/2<1/4B^2.
Since the arithmetic mean of positive numbers always exceeds their
geometric mean,
√(k_1k_2)<(|k_1|+|k_2|)/2
this would give stability without positivity of curvature.
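A concrete numerical instance makes this explicit; the values below are illustrative assumptions chosen to satisfy the stability criterion while violating positivity of curvature.

import math

k1, k2, k3 = -1.2, -0.2, 1.4     # illustrative harmonic case: k1 + k2 + k3 = 0
B = 2.0                          # so B**2/4 = 1.0

stable = B**2/2 > math.sqrt(k1*k2) + (abs(k1) + abs(k2))/2   # 2.0 > 1.19 -> True
curvature_positive = B**2/4 > abs(k1) and B**2/4 > abs(k2)   # 1.0 > 1.2 fails -> False
print(stable, curvature_positive)          # True False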
On the other hand, negative curvature along the third principal direction is sufficient for instability:
k_3<0.
§ ACKNOWLEDGEMENT
I thank Sumit Das for explaining that the scalar field ϕ is
the dilaton. In addition thanks to Miguel Alonso, Alex Iosevich, Andrew
Jordan, Arnab Kar, Govind Krishnaswami and Evan Ranken for discussions.
10
doCarmoRiemGeomM. P. do Carmo, Riemannian Geometry,
Birkhauser (1992).
ChernFinslerS. S. Chern, Geometry without the Quadratic
Restriction, Notices of the AMS, 959 (1996); D. Bao, R. L. Bryant,
S. S. Chern and Z. Shen, A Sampler of Riemann–Finsler
Geometry, Cambridge University Press (2004).
RandersGRUnifiedEMPhysRev.59.195G. Randers, Phys. Rev.
59, 195 (1941).
HamiltonW. R. Hamilton, Trans. Roy. Irish Acad., 17, 1–144
(1837).
KlimesL. Klimes, Journal of Electromagnetic Waves and Applications,
27,1589(2013).
WeinsteinLagrSubMfldsA. Weinstein, Adv. Math. 6,
329 (1971).
BambusiBirkhoffNormalFormD. Bambusi Birkhoff
normal form and almost global existence for some Hamiltonian PDEs.
(2007). Available at http://users.mat.unimi.it/users/bambusi/pedagogical.pdf
HendersonRajeev R.J. Henderson and S.G. Rajeev, Class.Quant.Grav.
11, 1631 (1994), arXiv:gr-qc/9401029.
SymplecticConnectionsP.Bieliavsky, M.Cahen, S. Gutt and
J. Rawnsley J. Geom. Phys. 38,140 (2001);P.Bieliavsky,
M.Cahen, S. Gutt, J. Rawnsley and L. Schwachhofer, Symplectic
Connections, arXiv:math/0511194 [math.SG]; K. Habermann and L.
Habermann, Introduction to Symplectic Dirac Operators, Springer
(2006)
ArnoldCurvature V. I. Arnold, Ann. Inst. Fourier (Grenoble)
16, 319 (1966)
Govind3BodyG. S. Krishnaswami and H. Senapati J. Math.
Phys. 57, 102901 (2016), arXiv:1606.05091.
MontgomerySubRiemGeomR. Montgomery, A Tour of Subriemannian
Geometries, Their Geodesics and Applications, AMS (2002)
JohnLeeRiemannianManifoldsJ. M. Lee, Riemannian Manifolds
Springer (1997)
FeynmanHibbsMagField Problem 3-10 in R. P. Feynman and
A. R. Hibbs, Quantum Mechanics and Path Integrals, McGraw-Hill
(1965)
RajeevMechanicsS. G. Rajeev, Advanced Mechanics,
Oxford (2012).
|
http://arxiv.org/abs/1701.07665v3 | 20170126115452 | Higgs Scalaron Mixed Inflation | [
"Yohei Ema"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] |
Department of Physics, Faculty of Science, The University of Tokyo
We discuss the inflationary dynamics of a system with a non-minimal coupling between the Higgs and the Ricci scalar
as well as a Ricci scalar squared term. There are two scalar modes in this system,
i.e. the Higgs and the spin-zero mode of the graviton, or the scalaron.
We study the two-field dynamics of the Higgs and the scalaron during inflation,
and clarify the condition where inflation is dominated by the Higgs/scalaron.
We also find that the cut-off scale at around the vacuum
is as large as the Planck scale,
and hence there is no unitarity issue,
although there is a constraint on the couplings from the perturbativity of the theory
at around the vacuum.
Higgs Scalaron Mixed Inflation
Yohei Ema
================================
UT 17-04
§ INTRODUCTION
After the observation of the cosmic microwave background (CMB) anisotropy,
inflation plays a central role in the modern cosmology.
It is usually assumed that inflation is caused by potential energy of a scalar field, or the inflaton,
but there is no candidate within the standard model (SM).
Hence we need to go beyond the SM to cause inflation.
Among a variety of such inflation models,
the Higgs-inflation <cit.>
and the R^2-inflation <cit.> models are intriguing
because of their minimality as well as consistency with the CMB observation.
In the Higgs-inflation model, the SM Higgs boson plays the role of the inflaton
thanks to a large non-minimal coupling to the Ricci scalar.
In the R^2-inflation model, a spin-zero component of the metric (or the scalaron)
obtains a kinetic term and plays the role of the inflaton once we introduce a Ricci scalar squared term
in the action. It is known that both models predict a similar value of the spectral index
which is in good agreement with the Planck observation <cit.>.
It is also attractive that both models predict the tensor-to-scalar ratio that may be detectable
in the future CMB experiment (CMB-S4) <cit.>.
In the actual analysis of these models,
it is sometimes assumed that the Higgs or the scalaron is
the only scalar degree of freedom during inflation.
In reality, however, the Higgs must always be there
even if we consider the R^2-inflation model.
In addition, if we consider the Higgs-inflation model,
the large non-minimal coupling of the Higgs to the Ricci scalar
may radiatively induce a large Ricci scalar squared term <cit.>
that makes the scalaron dynamical as well. Hence it is more realistic
to consider the dynamics of both the Higgs and the scalaron simultaneously.[
A similar study with an additional scalar field instead of the scalaron
with a non-minimal coupling to the Ricci scalar has been
performed in literature.
See, e.g. Refs. <cit.>
and references therein.
] In this paper we will thus study the Higgs-scalaron two-field inflationary dynamics,[
An analysis in this direction is also performed in Ref. <cit.>
although some aspects we discuss in this paper such as
the unitarity/perturbativity and the implication of
the electroweak vacuum metastability are not addressed there.
See also Refs. <cit.>
as other treatments.
] and derive the parameter dependence of the inflationary predictions in our system.
We will clarify the quantitative condition where inflation is dominated by the Higgs or the scalaron.
In addition, we will address the unitarity structure of our system,
which is much different from that of the Higgs-inflation.
The organization of this paper is as follows.
In Sec. <ref>, we discuss the inflationary dynamics of the Higgs-scalaron two-field system.
We first study the dynamics analytically,
and later confirm it by numerical calculation.
In Sec. <ref>, we study the unitarity structure of this system.
We find that the cut-off scale of our system is as large as the Planck scale,
which is similar to the case of the R^2-inflation rather than the Higgs-inflation.
In Sec. <ref>, we concentrate on the dynamics of the Higgs
when the electroweak (EW) vacuum is metastable.
The last section <ref> is devoted to the summary and discussions.
§ INFLATIONARY DYNAMICS
In this section, we study the two-field dynamics of the Higgs and the scalaron during inflation.
§.§ Action in Jordan/Einstein frame
We start from the following action in the Jordan frame:
S = ∫ d^4x √(-g_J) [
M_P^2/2(1+ξ_h h^2/M_P^2)R_J .
.
+ ξ_s/4R_J^2
-1/2g^μν_J∂_μ h∂_ν h - λ_h/4h^4
],
where g_Jμν is the metric (with the “almost-plus” convention), g_J is the determinant of the metric,
R_J is the Ricci scalar, M_P is the reduced Planck mass
and h is the Higgs in the unitary gauge.
We add the subscript J for the quantities in the Jordan frame.
We consider only the case
ξ_s, |ξ_h |≫ 1.
In particular, we concentrate on the case ξ_s > 0 since otherwise there is a tachyonic mode.
On the other hand, we do not specify the signs of ξ_h and λ_h.
Concerning the sign of λ_h,
the current measurement of the top and Higgs masses indicates that
it becomes negative at a high energy region, resulting in
the metastable EW vacuum <cit.>,
although the stable EW vacuum is also still allowed.
In view of this, we consider both λ_h > 0 and λ_h < 0 in this paper.
By introducing an auxiliary field s, the action (<ref>)
is rewritten as <cit.>[
This choice of the dual description is unique up to the shift and the rescaling of the auxiliary field s.
For more details, see App. <ref>.
]
S = ∫ d^4x √(-g_J) [
M_P^2/2(1+ξ_h h^2 + ξ_s s/M_P^2)R_J .
.
- ξ_s/4s^2
-1/2g^μν_J∂_μ h∂_ν h - λ_h/4h^4
].
Note that the variation with respect to s gives
s = R_J,
and we restore the original action (<ref>) after substituting it to Eq. (<ref>).
The field s corresponds to a spin-zero mode of the graviton that is dynamical due to
the presence of the Ricci scalar squared term.
We call it a “scalaron” in this paper.
First we perform the Weyl transformation to
obtain the action in the Einstein frame. We define the metric in the Einstein frame as
g_μν = Ω^2 g_Jμν, Ω^2 = 1+ξ_h h^2 + ξ_s s/M_P^2.
The Ricci scalar is transformed as
R_J = Ω^2[R + 3□lnΩ^2
- 3/2g^μν∂_μlnΩ^2 ∂_νlnΩ^2],
where R and □ are the Ricci scalar and the d'Alembert operator constructed from g_μν, respectively.
The action now reads
S = ∫ d^4x √(-g)[
M_P^2/2R - 3M_P^2/4g^μν∂_μlnΩ^2 ∂_νlnΩ^2
.
.
-g^μν/2Ω^2∂_μ h∂_ν h - U(h, s)
],
where the potential in the Einstein frame is given by
U(h, s) ≡λ_h h^4 + ξ_s s^2/4Ω^4.
We define a new field ϕ as
ϕ/M_P ≡√(3/2)lnΩ^2.
It corresponds to the inflaton degree of freedom in our system.
By eliminating s in terms of ϕ, we finally obtain
S = ∫ d^4x √(-g) [
M_P^2/2R - 1/2g^μν∂_μϕ∂_νϕ.
.
-1/2e^-χg^μν∂_μ h∂_ν h
- U(ϕ, h)
],
where the potential now reads
U(ϕ, h) = 1/4e^-2χ[
λ_h h^4
+ M_P^4/ξ_s(e^χ-1-ξ_h h^2/M_P^2)^2
],
and we have defined
χ≡√(2/3)ϕ/M_P.
This is the master action in our system.
Note that so far we have not used any approximation.
In the following, we study the inflationary dynamics of this action in the Einstein frame.
§.§ Two-field dynamics
Now we study the inflationary dynamics of the action (<ref>).
An analysis for a similar system is performed in Ref. <cit.>,
and we follow that procedure here.
The action (<ref>) contains the kinetic mixing term
between ϕ and h, and hence we define the following field τ to solve the mixing:
τ≡s/h^2.
Note that τ = 0 corresponds to the pure Higgs-inflation,
while τ = ∞ corresponds to the pure R^2-inflation.
The kinetic terms now read
ℒ_kin
=
-1/2(1+1/6(ξ_h + ξ_s τ)e^χ/e^χ - 1)(∂ϕ)^2
- M_P^2/8ξ_s^2(1-e^-χ)/(ξ_h+ ξ_s τ)^3(∂τ)^2
+ M_P/2√(6)ξ_s/(ξ_h + ξ_s τ)^2(∂ϕ) (∂τ).
Since we are interested in the inflationary dynamics, we concentrate on the case
ξ_h h^2 + ξ_s s ≫ M_P^2,
or
e^χ≫ 1,
in this section.
Then, the kinetic terms are approximated as
ℒ_kin
=
-1/2(1+1/6(ξ_h + ξ_s τ))(∂ϕ)^2
- M_P^2/8ξ_s^2/(ξ_h+ ξ_s τ)^3(∂τ)^2
+ M_P/2√(6)ξ_s/(ξ_h + ξ_s τ)^2(∂ϕ) (∂τ).
Note that τ satisfies
ξ_h + ξ_s τ≫M_P^2/h^2 > 0,
when the condition (<ref>) is satisfied,
and hence the kinetic term of τ has a correct sign.
In the following we assume
ξ_h + ξ_s τ≫ 1,
which is true for h ≲ M_P.[
It might not be true for, e.g. the critical case <cit.>
where ξ_h ∼𝒪(10) since the Higgs field value is
of order M_P during inflation in that case.
] Then after defining the canonically normalized field τ_c as
dτ_c ≡ξ_sM_P/2(ξ_h + ξ_sτ)^3/2dτ,
the kinetic mixing term between ϕ and τ_c is suppressed by 1/√(ξ_h + ξ_s τ).
Therefore, to the leading order in it,
we can approximate the kinetic term as
ℒ_kin
=
-1/2(∂ϕ)^2
- M_P^2/8ξ_s^2/(ξ_h + ξ_sτ)^3(∂τ)^2.
There is no kinetic mixing between ϕ and τ to the zero-th order in
1/√(ξ_h + ξ_s τ).
Armed with these diagonalized fields, we now study the structure of the potential,
which is expressed as
U(ϕ, τ) = M_P^4/4λ_h + ξ_sτ^2/(ξ_h + ξ_sτ)^2[1-exp(-√(2/3)ϕ/M_P)]^2.
To the leading order in Eq. (<ref>),
its derivative gives
∂ U/∂τ_c = (ξ_h τ - λ_h)/(ξ_h + ξ_sτ)^3/2 M_P^3.
Note that we take the derivative with respect to τ_c, not τ.
Hence the extrema are
τ = ∞, τ_min,
where we have defined
τ_min≡λ_h/ξ_h.
In particular, τ = 0 is not an extremum of the potential.
It means that the pure Higgs-inflation is never realized in our system,
although the pure R^2-inflation is possible.
Nevertheless, there is some parameter region where inflation is caused mostly by the Higgs as we will see below.
In order to be the inflationary trajectory,
these extrema must be minima of the potential.
The second derivative at each extremum is given by
.∂^2 U/∂τ_c^2|_τ = ∞ = -ξ_h/ξ_sM_P^2,
.∂^2 U/∂τ_c^2|_τ = τ_min = 2ξ_h/ξ_sM_P^2.
Note again that we take the derivative with respect to τ_c, not τ.
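These statements can be reproduced with computer algebra; the following sketch (our notation) works in the e^χ≫1 limit, where the overall [1-e^(-χ)]^2 factor is ≈1, and uses dτ_c/dτ from the normalization above.

import sympy as sp

tau, lam, xih, xis, MP = sp.symbols("tau lambda_h xi_h xi_s M_P", positive=True)

U = MP**4/4 * (lam + xis*tau**2) / (xih + xis*tau)**2   # e^chi >> 1 limit
dU = sp.diff(U, tau)
print(sp.solve(dU, tau))                                # expect [lambda_h/xi_h]

# second derivative with respect to the canonical field tau_c
dtau_dtauc = 2*(xih + xis*tau)**sp.Rational(3, 2) / (xis*MP)
d2U = dtau_dtauc * sp.diff(dtau_dtauc * dU, tau)
print(sp.simplify(d2U.subs(tau, lam/xih)))              # expect 2*xi_h*M_P**2/xi_s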
Thus, τ = ∞ is the minimum for ξ_hξ_s < 0,
while τ = τ_min is the minimum for ξ_hξ_s>0.
Also, the potential at the minimum must be positive to cause inflation.
The potential at each extremum is given by
.U/M_P^4|_τ = ∞ = 1/4ξ_s,
.U/M_P^4|_τ = τ_min = 1/4(ξ_h^2/λ_h + ξ_s).
The former is always positive in the case of our interest,
while the latter together with Eq. (<ref>) gives a non-trivial constraint on the parameters.
In fact, Eq. (<ref>) in particular means
ξ_h + ξ_s τ_min > 0,
and hence we obtain the condition
λ_h ξ_h > 0,
by combining the requirement that Eq. (<ref>) is positive.
Thus, the trajectory with τ = τ_min causes inflation only if
ξ_h > 0 and λ_h > 0. Note that we consider only the case ξ_s > 0.[
It is sufficient to require only these conditions since Eq. (<ref>)
is trivially satisfied under them.
]
In summary, there are three cases that are relevant for inflation: (a) ξ_h > 0, ξ_s > 0 and λ_h > 0,
(b) ξ_h < 0, ξ_s>0 and λ_h>0, and (c) ξ_h<0, ξ_s>0 and λ_h<0.
From now on, we mainly concentrate on the case (a) since τ = τ_min is the minimum only in this case.
We briefly comment on the case (b) at the end of this subsection.
The case (c) is also interesting since it corresponds to the metastable EW vacuum.
Hence we discuss the case (c) in detail in Sec. <ref>.
§.§.§ Case (a): ξ_h>0, ξ_s>0 and λ_h>0
In this case, the potential minimum for τ is given by
τ = τ_min≠ 0,
and hence the inflaton ϕ is a mixture of the Higgs and the scalaron.
We first assume that τ sits at this minimum during inflation.
Later we will see that this assumption is actually valid.
Recalling that τ = s/h^2, for ξ_h ≫ξ_s τ_min, the inflaton is dominated by the Higgs,
while in the other limit, it is mostly composed of the scalaron. Thus, the situation is as follows:[
A similar condition is obtained for a scale invariant model with an additional dilaton field
in Ref. <cit.>.
It may be reasonable because the theory is almost scale invariant in our case
as long as we consider the inflationary dynamics.
It is also consistent with the rough estimation in Ref. <cit.>.
]
λ_h ξ_s ≪ξ_h^2: Higgs-inflation like,
λ_h ξ_s ≫ξ_h^2: R^2-inflation like.
It is also seen from the potential. The potential for τ = τ_min is given by
.U|_τ = τ_min
= M_P^4/41/ξ_h^2/λ_h + ξ_s[1-exp(-√(2/3)ϕ/M_P)]^2,
and hence it is the same as the Higgs-inflation for λ_hξ_s ≪ξ_h^2,
while the same as the R^2-inflation for λ_hξ_s ≫ξ_h^2.
In the intermediate case, the inflaton is a mixture of the Higgs and the scalaron,
which we call a “Higgs scalaron mixed inflation.”
In order to reproduce the normalization of the CMB anisotropy,
the parameters should satisfy <cit.>
ξ_h^2/λ_h + ξ_s ≃ 2× 10^9.
It is well-known that this type of model is in good agreement with the spectral index
observed by the CMB experiments.
In Fig. <ref>, we show the schematic picture of the parameter region in the case (a).
Now we investigate the assumption that τ sits at the minimum of its potential during inflation.
It is valid if the mass squared of τ_c at around the minimum
∼ξ_h M_P^2/ξ_s (see Eq. (<ref>))
is much larger than the Hubble parameter squared during inflation H_inf^2.
The ratio is estimated as
ξ_h M_P^2/ξ_s/H_inf^2 ∼ξ_h/ξ_s(ξ_h^2/λ_h + ξ_s) > ξ_h ≫ 1.
Thus, τ sits at the minimum of its potential during inflation as long as ξ_h ≫ 1 is satisfied,
which is the case of our interest, and hence we have verified our assumption.
It means that the inflationary dynamics effectively reduces to a single field case,
whose potential is given by Eq. (<ref>).
§.§.§ Case (b): ξ_h<0, ξ_s>0 and λ_h>0
In this case, the potential minimum for τ is
τ = ∞,
and hence inflation is caused solely by the scalaron.
Actually, the potential is given by
.U|_τ = ∞
= M_P^4/4ξ_s[1-exp(-√(2/3)ϕ/M_P)]^2,
which is nothing but the potential of the R^2-inflation model.
It is consistent with the CMB for ξ_s ≃ 2× 10^9 <cit.>.
§.§ Numerical confirmation
In this subsection, we perform numerical calculation to confirm the analysis in the previous subsection.
In particular, we numerically study the case (a), or the Higgs scalaron mixed inflation case.
While some approximations are used in the previous subsection,
we emphasize that we use no approximation in this subsection.
More explicitly, we directly solve the background equations of motion
derived from the action (<ref>), which read
0 = ϕ̈ + 3Hϕ̇ + e^-χ/√(6)M_Pḣ^2 + ∂ U/∂ϕ,
0 = ḧ + (3H - √(2/3)ϕ̇/M_P)ḣ + e^χ∂ U/∂ h,
Ḣ = -1/2M_P^2(ϕ̇^2 + e^-χḣ^2),
H^2 = 1/6M_P^2(ϕ̇^2 + e^-χḣ^2 + 2U),
where the potential is
U(ϕ, h) = 1/4e^-2χ[
λ_h h^4
+ M_P^4/ξ_s(e^χ-1-ξ_h h^2/M_P^2)^2
].
Here we take the background metric in the Einstein frame to be
the Friedmann-Lemaître-Robertson-Walker (FLRW) one without spatial curvature,
H is the Hubble parameter and the dots denote the derivatives with respect to the time.
For the convenience of readers, we write down the explicit forms of the derivatives of the potential:
∂ U/∂ϕ =
e^-2χM_P^3/√(6) ξ_s[
(1+ξ_h h^2/M_P^2)(e^χ - 1 - ξ_h h^2/M_P^2)
- λ_h ξ_s h^4/M_P^4],
∂ U/∂ h =
e^-2χM_P^2 h/ξ_s[
-ξ_h(e^χ -1 -ξ_h h^2/M_P^2)
+ λ_h ξ_s h^2/M_P^2].
Actually, these equations of motion are redundant, and hence we have used the last one (<ref>)
to check the consistency of our numerical calculation. We have numerically solved the first three equations of motion,
and checked that Eq. (<ref>) is satisfied at least better than 10^-5 level.
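For readers who wish to reproduce this, a minimal Python integration sketch is given below. The couplings and initial conditions are illustrative assumptions (not the ones used for the figures); Eq. (<ref>) can be monitored along the solution as a consistency check, exactly as described above.

import numpy as np
from scipy.integrate import solve_ivp

MP = 1.0                                    # reduced Planck units
xi_h, xi_s, lam_h = 4.0e3, 1.0e9, 0.01      # illustrative case-(a) parameters

def chi(phi):
    return np.sqrt(2.0/3.0) * phi / MP

def U(phi, h):
    c = chi(phi)
    return 0.25*np.exp(-2*c)*(lam_h*h**4
            + MP**4/xi_s*(np.exp(c) - 1 - xi_h*h**2/MP**2)**2)

def dU_dphi(phi, h):
    c = chi(phi)
    return np.exp(-2*c)*MP**3/(np.sqrt(6.0)*xi_s)*(
        (1 + xi_h*h**2/MP**2)*(np.exp(c) - 1 - xi_h*h**2/MP**2)
        - lam_h*xi_s*h**4/MP**4)

def dU_dh(phi, h):
    c = chi(phi)
    return np.exp(-2*c)*MP**2*h/xi_s*(
        -xi_h*(np.exp(c) - 1 - xi_h*h**2/MP**2) + lam_h*xi_s*h**2/MP**2)

def rhs(t, y):
    phi, dphi, h, dh = y
    c = chi(phi)
    H = np.sqrt((dphi**2 + np.exp(-c)*dh**2 + 2*U(phi, h)) / (6*MP**2))
    ddphi = -3*H*dphi - np.exp(-c)*dh**2/(np.sqrt(6.0)*MP) - dU_dphi(phi, h)
    ddh = -(3*H - np.sqrt(2.0/3.0)*dphi/MP)*dh - np.exp(c)*dU_dh(phi, h)
    return [dphi, ddphi, dh, ddh]

# start on the plateau with a small Higgs displacement (illustrative)
sol = solve_ivp(rhs, (0.0, 5.0e3), [5.5*MP, 0.0, 1.0e-3*MP, 0.0],
                method="Radau", rtol=1e-9, atol=1e-12)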
In Fig. <ref>, we show the time evolution of τ.
The blue line corresponds to our numerical calculation,
while the red dashed line is τ_min given in Eq. (<ref>).
The top panel corresponds to the Higgs-inflation like case,
while the bottom one does to the R^2-inflation like case.
See the caption for more details on the parameters and the initial conditions.
As we can see from the figures, after hundreds or thousands of oscillations
τ eventually settles down to its potential minimum.
It happens within a few e-foldings,
and hence τ has almost no effect on the inflationary dynamics at all.
This result confirms our analysis in the previous subsection.
In Fig. <ref>, we also show the inflationary dynamics in the Higgs-scalaron field space.
The blue line again corresponds to our numerical calculation,
while the red dashed line is the ϕ direction that is orthogonal to the τ direction.
We can see that the fields oscillate in the τ direction at first, but after τ settles down to its potential minimum,
the system follows the trajectory of ϕ.
Thus, it is again consistent with our analytical treatment in the previous subsection.
In summary, we have verified that our analysis in the previous subsection well describes
the actual inflationary dynamics of the present system.
§ CUT-OFF SCALE AND PERTURBATIVITY
In this section, we discuss the cut-off scale and the perturbativity
of our model at around the vacuum.
Note that the discussion in this section holds for all the cases (a), (b) and (c).
We start our discussion with the exact action in the Einstein frame (<ref>).
The minimum of the potential (<ref>) is at (ϕ, h) = (0, 0),
so we expand the action around that point:
S
≃∫ d^4x √(-g)[ M_P^2/2R - 1/2(∂ϕ)^2
- 1/2(1-χ)(∂ h)^2
.
.
-λ_UV/4(1-2χ)h^4
-m_ϕ^2/2ϕ^2 +m_ϕ/3√(2ξ_s)ϕ^3
.
.
-7/108ξ_s(1-3χ/7)ϕ^4
+ξ_h m_ϕ/√(2ξ_s)ϕ h^2
-ξ_h/2ξ_s(1-7χ/9)ϕ^2h^2
],
where we have defined the Higgs quartic coupling in the original ultraviolet (UV) theory
and the inflaton mass squared as[
Although λ_UV is shifted due to the mixing between ϕ and h,
it does not help to stabilize the EW vacuum for λ_h < 0.
This is because the CMB observation fixes m_ϕ∼ 10^13 GeV,
which is much higher than the instability scale of the Higgs potential <cit.>.
Note that inflation is dominated by the scalaron for λ_h < 0,
and hence m_ϕ is fixed.
]
λ_UV ≡λ_h + ξ_h^2/ξ_s,
m_ϕ^2 ≡M_P^2/3ξ_s.
Neglected terms are suppressed by higher powers of M_P.
Thus, recalling that χ = √(2/3) ϕ/M_P, we can see that
the cut-off scale of our model is as high as the Planck scale.
It is in contrast to the case of the Higgs-inflation,
where a power counting argument[
Analysis beyond the power counting might change the situation <cit.>.
] suggests that the cut-off scale Λ_cut;h
at around the vacuum is of
𝒪(M_P/ξ_h) <cit.>.[
It does not necessarily mean an inconsistency of the Higgs-inflation during inflation <cit.>
although the unitarity can be broken during the inflaton oscillation epoch <cit.>.
See, e.g. Refs. <cit.>
for construction of a UV completed model and
also
Refs. <cit.> for the discussion of UV effects on the Higgs-inflation.
] Rather, our model is similar to the case of the R^2-inflation.
In the R^2-model, the cut-off scale is the Planck scale because the scalaron is just an auxiliary field
in the Jordan frame, and hence it can absorb the large non-minimal coupling
with the curvature <cit.>.
Similarly in our case, the scalaron absorbs the large non-minimal couplings ξ_s and ξ_h
so that it can avoid the unitarity issue.
An interesting point of our model is that, Higgs can be the dominant component of the inflaton while keeping
the cut-off to be the Planck scale.
Still, there is a constraint on the parameters if we require the theory to be perturbative.
In order to see this, let us keep only the renormalizable terms in the action at around the vacuum:
S
≃∫ d^4x √(-g) [ M_P^2/2R - 1/2(∂ϕ)^2
- 1/2(∂ h)^2
.
.
-λ_UV/4h^4
-m_ϕ^2/2ϕ^2 +m_ϕ/3√(2ξ_s)ϕ^3
.
.
-7/108ξ_sϕ^4
+ξ_h m_ϕ/√(2ξ_s)ϕ h^2
-ξ_h/2ξ_sϕ^2h^2
].
The Higgs quartic coupling in our system is λ_UV,
and hence it should be smaller than ∼ 4π for the theory to be perturbative.
Thus, we may require the following condition:[
We implicitly assume |λ_h |≲𝒪(0.1).
]
ξ_h^2/ξ_s≲ 4π,
otherwise it is unreasonable to use the tree-level potential such as Eq. (<ref>).
It means that the theory with non-zero ξ_s is qualitatively different from that with ξ_s = 0 even
if we take the limit ξ_s → 0 because of the presence of the scalaron.
Once we have the scalaron degree of freedom, we need Eq. (<ref>)
to keep the theory perturbative at around the vacuum,
while there is no such constraint if ξ_s = 0 from the beginning.
It might be interesting to see that Eq. (<ref>) requires the inflaton mass as
m_ϕ^2 ≲M_P^2/ξ_h^2∼Λ_cut;h^2,
where the right-hand-side is the cut-off scale of the Higgs inflation model at around the vacuum.
Since we consider the case |ξ_h|≫ 1, the inflaton-Higgs quartic coupling is always
in the perturbative region once Eq. (<ref>) is satisfied.
We should note that the perturbativity of our system is important
to safely obtain the SM at the energy scale below m_ϕ.
In order to see this point, we now derive the infrared (IR) theory from our system
(or the UV theory) below the energy scale of m_ϕ.
We may define the IR theory as
S_IR =
∫ d^4x √(-g) [ M_P^2/2R - 1/2(∂ h)^2
-λ_IR/4h^4
].
Then, λ_IR may be determined by matching, e.g. the scattering process
hh→ hh in the IR and UV theories.[
We consider only the degrees of freedom of h and ϕ for simplicity.
] If the UV and IR theories are perturbative,
we may obtain λ_IR = λ_h
by comparing the tree level processes in the IR and UV theories.
For more details on this point, see App. <ref>.
Thus, the IR theory is nothing but the SM
with the Higgs quartic coupling given by λ_h.
If the UV theory is strongly coupled, however,
the tree-level matching does not make sense.
We need to sum an infinite numbers of diagrams to calculate the scattering,
which is a complicated task.
Moreover, it is even non-trivial whether the Higgs remains as an asymptotic state,
and the IR theory might be totally different from the one described
by Eq. (<ref>).
Therefore it is at least secure to consider only the parameter region ξ_h^2/ξ_s ≲ 4π.
§ METASTABLE ELECTROWEAK VACUUM
In this section, we discuss the two-field dynamics of our system in the case (c):
ξ_h<0, ξ_s>0 and λ_h<0.
It corresponds to the case of the metastable EW vacuum.
Note that
the current measurement of the top and Higgs masses actually indicates that the Higgs quartic
coupling becomes negative at a high energy region,
resulting in the metastable EW vacuum <cit.>.
We now study the dynamics of the Higgs during and after inflation.
During inflation, the potential minimum for τ is
τ = ∞,
and hence it corresponds to the usual R^2-inflation.
In this case, as long as |ξ_h |≫ 1,
the Higgs stays at h = 0 during inflation
since τ is heavy enough (see Eq. (<ref>)).
It is nothing but the stabilization mechanism discussed, e.g. in
Refs. <cit.>.
It is well-known that the EW vacuum metastability has some tension
with high-scale inflation models including the R^2-inflation if there
is no coupling between the inflaton and the Higgs
sectors <cit.>.
However, once we introduce couplings between the inflaton and/or the Ricci scalar and the Higgs,
the EW vacuum is stabilized during inflation since they induce an effective mass of the Higgs.
Here we have explicitly shown that such a stabilization mechanism does work
for the R^2-inflation model.
After inflation, however, resonant Higgs production
typically occurs due to the inflaton oscillation,
and hence the EW vacuum may be destabilized
during the preheating epoch <cit.>.
Here we consider the dynamics of the Higgs after inflation in our system.
Soon after inflation ends, the ϕ^3 and ϕ^4 terms as well as the Planck suppressed terms
become negligible due to the cosmic expansion.
In the same way, the Higgs-inflaton quartic coupling becomes less important
than the Higgs-inflaton trilinear coupling.
Therefore we may approximate the action as
S
≃∫ d^4x √(-g) [ M_P^2/2R - 1/2(∂ϕ)^2
- 1/2(∂ h)^2
.
.
-λ_UV/4h^4
-m_ϕ^2/2ϕ^2
+ξ_h m_ϕ/√(2ξ_s)ϕ h^2
].
There are two cases depending on the sign of λ_UV.
If λ_UV is negative, or ξ_h^2 < |λ_h |ξ_s,
this system is studied in Ref. <cit.>.[
Actually in that study they also include the positive Higgs-inflaton quartic coupling,
but the trilinear coupling eventually becomes more important than the quartic coupling,
so we can safely apply their result to our system.
] In this case, the so-called tachyonic preheating occurs since the effective mass squared
of the Higgs oscillates between positive and negative values <cit.>.
As a result, the EW vacuum is destabilized
during the preheating epoch if the trilinear coupling satisfies <cit.>
|ξ_h m_ϕ/√(ξ_s)|≳𝒪(10)×m_ϕ^2/M_P.
In order to prevent such a catastrophe, ξ_h must satisfy
|ξ_h |≲𝒪(10).
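The origin of the 𝒪(10) number is transparent: substituting m_ϕ^2 = M_P^2/3ξ_s into the destabilization condition gives a bound on |ξ_h| that is independent of ξ_s, as the one-line check below (our script) shows.

import math
# |xi_h| m_phi/sqrt(xi_s) ~ 10 m_phi^2/M_P with m_phi = M_P/sqrt(3 xi_s)
# => |xi_h| ~ 10*sqrt(xi_s)*m_phi/M_P = 10/sqrt(3), independent of xi_s
print(10/math.sqrt(3))    # ≈ 5.8, i.e. |xi_h| of order 10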
Note that the Higgs-inflaton quartic coupling makes things even worse in our case
since it contributes negatively to the effective mass squared of the Higgs for ξ_h < 0.
If λ_UV is positive, or ξ_h^2 > |λ_h |ξ_s,
the situation is more complicated.
The direction ϕ = 0 considered
in Refs. <cit.>
is absolutely stable in this case.
Nevertheless, the potential has an unstable direction ϕ = ξ_h h^2/M_P
(or more precisely e^χ - 1 = ξ_h h^2/M_P^2)
since λ_h is negative. Fluctuations in this direction might be enhanced
since the inflaton inevitably couples to this direction.
Hence it might be possible that the EW vacuum is destabilized even in this case for large enough |ξ_h|,
although we leave a detailed study for future work.
§ SUMMARY AND DISCUSSIONS
In this paper, we have considered the inflationary dynamics of a system with the non-minimal coupling
between the Higgs and the Ricci scalar ξ_h h^2 R as well as the Ricci scalar squared term ξ_sR^2.
In such a system, there are two scalar degrees of freedom,
i.e. the Higgs and the scalar part of the metric, or the scalaron.
We have shown that inflation successfully occurs
in the following three cases: (a) ξ_h > 0, ξ_s > 0 and λ_h > 0,
(b) ξ_h < 0, ξ_s>0 and λ_h>0, and (c) ξ_h<0, ξ_s>0 and λ_h<0,
where λ_h is the Higgs quartic coupling in the Jordan frame.
We have seen that in every case the inflationary dynamics effectively reduces to a single field one
since the direction orthogonal to the inflaton is heavy enough for |ξ_h |≫ 1.
In particular, in the case (a), the inflaton is a mixture of the Higgs and the scalaron,
which we call a Higgs scalaron mixed inflation.
The inflaton potential in this case is given by
U
= M_P^4/41/ξ_h^2/λ_h + ξ_s[1-exp(-√(2/3)ϕ/M_P)]^2,
where ϕ is the inflaton and M_P is the reduced Planck mass,
and hence it is consistent well with the CMB observation as long as
ξ_h^2/λ_h + ξ_s ≃ 2× 10^9.
We have also addressed the unitarity structure of our system at around the vacuum,
and found that the cut-off scale is as large as the Planck scale.
This is in contrast to the Higgs-inflation where the cut-off scale at around the vacuum
is M_P/ξ_h ≪ M_P.
Rather, it is similar to the R^2-inflation.
Still, the parameters must satisfy ξ_h^2/ξ_s ≲ 4π if we require the perturbativity of our system,
and hence the inflaton mass should be less than ∼ M_P/ξ_h.
Finally we have briefly discussed the implications of the metastable EW vacuum
to the R^2-inflation.
We have explicitly shown that if |ξ_h |≫ 1,
the EW vacuum is not destabilized during inflation even though it is metastable.
Still, however, it is possible that the EW vacuum is destabilized during the inflaton oscillation
epoch due to a resonant enhancement of the Higgs quanta.
In order to avoid such a catastrophe, we might require |ξ_h |≲𝒪(10),
although the situation is unclear
when |ξ_h| is large enough so that ξ_h^2 ≳|λ_h|ξ_s.
It may be interesting to study further on this respect.
We have several remarks.
First of all, although we have mainly concentrated on the inflationary dynamics in this paper,
the (p)reheating dynamics after inflation is also important.
The inflationary predictions of our system depend
on the reheating temperature thorough the number of e-foldings.
Actually, it is the dynamics after inflation that makes
the differences of the inflationary predictions
between the Higgs- and R^2-inflation <cit.>.
We leave a detailed study of the reheating dynamics for future work.
Related to the reheating dynamics, we point out the difference of our system
to the Higgs inflation during the inflaton oscillation epoch.
In general, if there is a large non-minimal coupling between
the inflaton and the Ricci scalar (without a scalaron) as in the Higgs-inflation,
the inflaton dynamics shows a peculiar behavior called a “spike”-like feature
during the inflaton oscillation
epoch <cit.>,
which was not taken into account in the previous studies <cit.>.[
Actually there is a comment on the energy scale ∼√(λ_h) M_P
related to this spike-like feature in Sec. 3.4 in Ref. <cit.>.
Nevertheless, possible effects on the preheating dynamics are underestimated there.
] In the Jordan frame, this is because the kinetic term of the inflaton
(or the Higgs) suddenly changes when the inflaton passes
the points |ϕ|∼ M_P/ξ_h,
which has some influence even in the Einstein frame.
In particular, if the inflaton is gauge-charged such as the Higgs,
longitudinal gauge bosons with extremely high-momentum ∼√(λ_h) M_P
are efficiently produced at the first oscillation so that the unitarity may be violated <cit.>.
Here it is essential to note that the longitudinal gauge boson mass is different from
the transverse gauge boson one if the symmetry breaking field (the Higgs in our case)
is time-dependent <cit.>.
In our case with the scalaron, however,
such violent phenomena do not occur thanks to the presence of the scalaron.
Thus, we can trust our system from inflation until the present universe.
It is also valuable to note that some Higgs condensation is unavoidably produced
in the Higgs scalaron mixed inflation case even if the Higgs is sub-dominant.
The amplitude of the Higgs condensation at the beginning of the inflaton oscillation
is estimated for the Higgs sub-dominant case as follows.
During inflation, the ratio s/h^2 is fixed to be λ_h/ξ_h, and hence
exp(√(2/3)ϕ/M_P) - 1 = ξ_h h^2 + ξ_s s/M_P^2≃λ_h ξ_s/ξ_hh^2/M_P^2.
The left-hand-side is of order unity at the beginning of the inflaton oscillation,
and thus the amplitude of the Higgs condensation at that time is estimated as
h_osc∼√(ξ_h/λ_h ξ_s)M_P ∼ 10^-5√(ξ_h/λ_h)M_P,
where we substitute ξ_s ≃ 2× 10^9 from the CMB observation in the last estimate.
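Numerically, the prefactor indeed comes out at the 10^-5 level; the couplings below are illustrative assumptions.

import math
xi_s = 2.0e9                      # CMB normalization in the scalaron-dominated case
xi_h, lam_h = 10.0, 0.01          # illustrative values (assumption)
print(1/math.sqrt(xi_s))                  # ≈ 2.2e-5, the prefactor in h_osc
print(math.sqrt(xi_h/(lam_h*xi_s)))       # h_osc ≈ 7.1e-4 in units of M_P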
Although the dynamics of the Higgs condensation after inflation is non-trivial
due to the couplings to the inflaton,
it might have some phenomenological consequences such as
the spontaneous leptogenesis <cit.> and
the gravitational wave <cit.>.
Note that in our case the Higgs has no isocurvature perturbation
as opposed to the case considered
in Refs <cit.>
since τ is massive during inflation.
Note Added: While finalizing this paper, another paper <cit.>
was submitted to the arXiv that calculated the curvature perturbation in the same system.
§ ACKNOWLEDGMENTS
YE thanks Kazunori Nakayama for useful discussions.
YE also acknowledges Oleg Lebedev and the members of the theoretical high energy physics group
in the University of Helsinki for their hospitality, where some part of this work was done.
This work was supported by the JSPS Research Fellowships for Young Scientists
and the Program for Leading Graduate Schools, MEXT, Japan.
§ REDUNDANCY OF DUAL DESCRIPTION
In this appendix, we comment on redundancy of dual description of Eq. (<ref>).
Instead of Eq. (<ref>), we may consider the following action:
S = ∫ d^4x √(-g_J) [
M_P^2/2(1+ξ_h' h^2 + ξ_s' s/M_P^2)R_J .
.
- ξ_s”/4s^2-λ_sh/2s h^2
-1/2g^μν_J∂_μ h∂_ν h - λ_h'/4h^4
],
where the parameters ξ_h', ξ_s', ξ_s” and λ_h' are in general different
from ξ_h, ξ_s and λ_h.
As long as they satisfy
ξ_h' - ξ_s'/ξ_s”λ_sh = ξ_h, ξ_s'^2/ξ_s” = ξ_s, λ_h' - λ_sh^2/ξ_s” = λ_h,
the action (<ref>) reproduces Eq. (<ref>)
after integrating out the scalaron s, and hence they are actually redundant.
This redundancy corresponds to the shift and the rescaling of the scalaron s,
and the physics does not depend on this ambiguity of the dual description.
We can also see this in the Einstein frame.
By following the same procedure we obtained Eq. (<ref>),
we can show that the Einstein frame action obtained from Eq. (<ref>)
is exactly Eq. (<ref>) thanks to Eq. (<ref>).
In the main text, we have chosen the parameters as
ξ_h' = ξ_h, ξ_s' = ξ_s” = ξ_s, λ_sh = 0, λ_h' = λ_h,
which of course satisfy Eq. (<ref>).
§ MATCHING
In this appendix, we explain how to obtain the relation λ_IR = λ_h.
In order to express the coupling in the IR theory in terms of those in the UV theory,
we may compare the tree-level scattering amplitude of the process hh→ hh.
The corresponding diagrams in the IR and UV theories are shown in Fig. <ref>.
In the IR theory, the tree-level amplitude is given by
iℳ_IR = -6iλ_IR,
where the numerical factor comes from the permutation of the external Higgs particles.
In the UV theory, it is given by
iℳ_UV = -6iλ_UV
-2ξ_h^2 m_ϕ^2/ξ_s(i/s-m_ϕ^2+i/t-m_ϕ^2+i/u-m_ϕ^2)
≃ -6i(λ_UV - ξ_h^2/ξ_s)
+ 𝒪(k^2/m_ϕ^2),
where the second term comes from the inflaton-Higgs trilinear coupling with
s, t and u being the Mandelstam variables.
In the last line, we have taken only
leading order terms in 𝒪(k^2/m_ϕ^2)
with k being the typical momentum of the scattering.
Thus, by matching the scattering amplitude, we obtain
λ_IR = λ_UV - ξ_h^2/ξ_s = λ_h.
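This matching can be confirmed symbolically; the sketch below (our notation) takes the s,t,u→0 limit of ℳ_UV.

import sympy as sp

lam_h, xi_h, xi_s, m = sp.symbols("lambda_h xi_h xi_s m_phi", positive=True)
s, t, u = sp.symbols("s t u", real=True)

lam_UV = lam_h + xi_h**2/xi_s
M_UV = -6*lam_UV - 2*xi_h**2*m**2/xi_s*(1/(s - m**2) + 1/(t - m**2) + 1/(u - m**2))

low_energy = M_UV.subs({s: 0, t: 0, u: 0})
print(sp.simplify(low_energy + 6*lam_h))   # expect 0, i.e. M_IR = -6*lambda_h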
Note that this procedure is reasonable only when the IR and UV theories are perturbative.
apsrev4-1
|
http://arxiv.org/abs/1701.07713v1 | 20170126141903 | Seed Layer Impact on Structural and Magnetic Properties of [Co/Ni] Multilayers with Perpendicular Magnetic Anisotropy | [
"Enlong Liu",
"J. Swerts",
"T. Devolder",
"S. Couet",
"S. Mertens",
"T. Lin",
"V. Spampinato",
"A. Franquet",
"T. Conard",
"S. Van Elshocht",
"A. Furnemont",
"J. De Boeck",
"G. Kar"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
Seed Layer Impact on Structural and Magnetic Properties of [Co/Ni] Multilayers with Perpendicular Magnetic Anisotropy
Enlong.Liu@imec.be.
imec, Kapeldreef 75, Leuven 3001, Belgium.
Department of Electrical Engineering (ESAT), KU Leuven, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
Center for Nanoscience and Nanotechnology, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay, France
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
Department of Electrical Engineering (ESAT), KU Leuven, Leuven 3001, Belgium.
imec, Kapeldreef 75, Leuven 3001, Belgium.
[Co/Ni] multilayers with perpendicular magnetic anisotropy (PMA) have been researched and applied in various spintronic applications. Typically the seed layer material is studied to provide the desired face-centered cubic (fcc) texture to the [Co/Ni] to obtain PMA. The integration of [Co/Ni] in back-end-of-line (BEOL) processes also requires the PMA to survive post-annealing. In this paper, the impact of NiCr, Pt, Ru, and Ta seed layers on the structural and magnetic properties of [Co(0.3 nm)/Ni(0.6 nm)] multilayers is investigated before and after annealing. The multilayers were deposited in-situ on different seeds via physical vapor deposition at room temperature. The as-deposited [Co/Ni] films show the required fcc(111) texture on all seeds, but PMA is only observed on Pt and Ru. In-plane magnetic anisotropy (IMA) is obtained on NiCr and Ta seeds, which is attributed to strain-induced PMA loss. PMA is maintained on all seeds after post-annealing up to 400^∘C. The largest effective perpendicular anisotropy energy (K_U^eff≈ 2×10^5J/m^3) after annealing is achieved on NiCr seed. The evolution of PMA upon annealing cannot be explained by further crystallization during annealing or strain-induced PMA, nor can the observed magnetization loss and the increased damping after annealing. Here we identify the diffusion of the non-magnetic materials from the seed into [Co/Ni] as the major driver of the changes in the magnetic properties. By selecting the seed and post-annealing temperature, the [Co/Ni] can be tuned in a broad range for both PMA and damping.
G. Kar
=====================
§ INTRODUCTION
Materials with perpendicular magnetic anisotropy (PMA) have recently received a lot of interest due to their use in spin-transfer-torque magnetic random access memory (STT-MRAM) and spin logic applications<cit.>. Magnetic tunnel junctions (MTJs) with PMA are required for further scaling of the critical device dimension (CD). The perpendicular MTJs (p-MTJ) enable STT-MRAM devices with longer data retention time and lower switching current at a smaller CD when compared to MTJs with in-plane magnetic anisotropy (IMA)<cit.>. A typical p-MTJ stack comprises an MgO tunnel barrier sandwiched between a synthetic antiferromagnet (SAF) as fixed layer, and a magnetically soft layer as free layer. The material requirements for the fixed and free layer differ. Whereas the free layer is aimed to have a high PMA and low damping to ensure data retention and fast switching via STT, the SAF requires high PMA and preferentially has high damping to ensure that it remains fixed during STT writing and reading to avoid back hopping<cit.>. As such, it is of high importance to control the PMA strength and damping in PMA materials.
Co-based PMA multilayers [Co/X] (X = Pt, Pd) have received a lot of attention for their potential application in STT-MRAM, especially as SAF materials <cit.>. The PMA of these multilayers comes from the interface of [Co/X] in each bilayer repeat<cit.>. Besides, [Co/Ni] has been researched as alternative PMA material and has been employed in p-MTJ because of its high spin polarization and low Gilbert damping constant<cit.>. Also, [Co/Ni] has been incorporated in an ultrathin SAF<cit.>. Recently, the use of [Co/Ni] in the free layer material was proposed to enable free layers with high thermal stability needed at CD below 20nm<cit.>. Next to STT-MRAM, [Co/Ni] has also been used as domain wall motion path in magnetic logic devices<cit.>. Briefly speaking, [Co/Ni] multilayers are being considered for various applications in next generation spintronic devices.
First-principles calculations predicted that [Co/Ni] in fcc(111) texture possesses PMA. The maximum anisotropy is obtained when Co contains just 1 monolayer and Ni has 2 monolayers<cit.>. Experimental studies proved that prediction<cit.> and reported on the PMA in [Co/Ni] for various sublayer thickness, repetition number and deposition conditions<cit.>. To get [Co/Ni] with the correct crystallographic orientation and good texture quality, a careful seed selection is required. Various seed layers have been studied, as well as their impact on PMA and damping, including Cu<cit.>, Ti<cit.>, Au<cit.>, Pt<cit.>, Ru<cit.> and Ta<cit.>. PMA change in [Co/Ni] is commonly observed and attributed to interdiffusion of the Co/Ni bilayers<cit.>. However, we observed earlier that the diffusion of the seed material can also strongly impact the [Co/Ni] magnetization reversal, especially after annealing<cit.>. A more in-depth study on the impact of the seed after annealing on the PMA and damping of [Co/Ni] is therefore required. Certainly the thermal robustness is of high importance for CMOS applications since the [Co/Ni] needs to be able to withstand temperatures up to 400^∘C that are used in back-end-of-line (BEOL) processes. In this paper, we study four sub-5nm seed layers: Pt(3 nm), Ru(3 nm), Ta(2 nm) and Hf(1 nm)/NiCr(2 nm) and present their impact on both the structural and magnetic properties of as-deposited and annealed [Co/Ni]. We show that a good lattice match to promote fcc(111) texture and to avoid [Co/Ni] interdiffusion is not the only parameter that determines the choice of seed layer. The diffusion of the seed material in the [Co/Ni] is identified as a key parameter dominating the PMA and damping of [Co/Ni] after annealing.
§ EXPERIMENTAL DETAILS
The [Co/Ni] on various seed layers were deposited in-situ at room temperature (RT) on thermally oxidized Si(100) substrates using physical vapor deposition system in a 300 mm Canon Anelva EC7800 cluster tool. Prior to seed layer deposition, 1nm TaN is deposited to ensure adhesion and to reflect the bottom electrode material that is used in device processing<cit.>. The detailed stack structure is Si/SiO_2/TaN(1.0)/seed layer/[Ni(0.6)/Co(0.3)]_4/Ni(0.6)/Co(0.6)/Ru(2.0)/Ta(2.0) (unit: nm). Ru/Ta on top serves as capping layer to protect [Co/Ni] from oxidation in air. The films were further annealed at 300^∘C for 30 min and 400^∘C for 10 min in N_2 in a rapid thermal annealing (RTA) set-up.
The crystallinity of the [Co/Ni] films was studied via a θ-2θ scan using the Cu Kα wavelength of λ=0.154nm in a Bede MetrixL X-ray diffraction (XRD) set-up. The degree of texture is evaluated in the same tool by measuring full-width at half-maximum (FWHM) of the rocking curve in an ω scan with 2θ fixed at the fcc(111) peak position of [Co/Ni]. Transmission electron microscopy (TEM) is used to identify the microstructure of multilayers. Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) is used to study diffusion of the seed material in the [Co/Ni]. The measurements were conducted in a TOFSIMS IV from ION-TOF GmbH with dual beam configuration in interlaced mode, where O^2+ and Bi^3+ are used for sputtering and analysis, respectively. X-ray Photoelectron Spectroscopy (XPS) is used to quantify the diffusion amount of the seed layers. The measurements were carried out in angle resolved mode using a Theta300 system from ThermoInstruments. 16 spectra were recorded at an exit angle between 22^∘ and 78^∘ as measured from the normal of the sample. The measurements were performed using a monochromatized Al Kα X-ray source (1486.6 eV) and a spot size of 400μm. Because of the surface sensitivity of the XPS measurement (depth sensitivity is ±5 nm), the XPS analysis was carried out on [Co/Ni] films without Ru/Ta cap.
A Microsense vibrating sample magnetometer (VSM) is used to characterize the magnetization hysteresis loops and to determine the saturation magnetization (M_s). The effective perpendicular anisotropy field (μ_0H_k^eff) and the Gilbert damping constant (α) in [Co/Ni] were measured via vector-network-analyzer ferromagnetic resonance (VNA-FMR)<cit.> and corresponding analysis<cit.>. The effective perpendicular anisotropy energy (K_U^eff) is calculated by the equation K_U^eff = μ_0H_k^eff× M_s/2 in the unit of J/m^3.
§ RESULTS AND DISCUSSION
§.§ Seed layer impact on structural properties of [Co/Ni]
To obtain large PMA in [Co/Ni] systems, highly textured, smooth fcc(111) films are required<cit.>. Fig.1 shows the θ-2θ XRD patterns of [Co/Ni] on different seed layers as-deposited and after annealing. The fcc(111) peaks of bulk Co and Ni are located at 44.1^∘ and 43.9^∘, respectively<cit.>. The presence of a peak around 2θ = 44^∘ confirms the fcc(111) texture in as-deposited [Co/Ni] films on all seed layers<cit.>. After annealing, the peak intensity increases, in particular for the [Co/Ni] sample on a Pt seed. Additionally, a shift in the peak position is observed for all seeds. The black arrows indicate the shift direction in Fig.<ref>(a)-(d). For [Co/Ni] on NiCr and Ta seed, the diffraction peak of [Co/Ni] shifts towards larger peak position of bulk Co and Ni, meaning that [Co/Ni] films on NiCr and Ta possess tensile stress after annealing. In contrast, compressive stress is induced in [Co/Ni] on Pt seed, since the peak position moves towards the Pt(111) peak due to lattice matching. On Ru seed, the peak nearly does not shift after annealing.
The quality of fcc(111) texture of [Co/Ni] is further examined via rocking curves. The results of FWHM of the rocking curves are given in Fig.<ref>, as well as the influence of post-annealing. The larger FWHM observed in [Co/Ni] on Ru seed suggests a lower degree of texture, which is in agreement with the θ-2θ pattern where [Co/Ni] on Ru seed shows a peak with lower intensity. Post-annealing at 300^∘C leads to further crystallization and enhanced texturing, as indicated by the decreased FWHM for [Co/Ni] on all seed layers. However, FWHM increases after 400^∘C annealing, which may be attributed to the intermixing of Co and Ni and hence the degradation in crystal quality.
Fig.<ref> shows the microstructure of [Co/Ni] deposited on different seed layers imaged by TEM after 300^∘C annealing. In all [Co/Ni] samples the grains extend from seed to cap. The interface between [Co/Ni] and seed layer is clear and smooth in the samples with Pt and Ru seed layer in Fig.<ref>(b) and (c), respectively. For [Co/Ni] on NiCr, however, the interface between the multilayers and the seed layer cannot be distinguished (Fig.<ref>(a)), but both layers are clearly crystalline and texture matched. For the [Co/Ni] on Ta seed in Fig.<ref>(d), the interface is quite rough and a nanocrystalline structure at the interface between Ta and [Co/Ni] can be spotted, which may indicate intermixing.
§.§ Seed layer impact on magnetic properties of [Co/Ni]
The presence of PMA in [Co/Ni] before and after annealing is first checked by VSM (Fig.<ref>). Despite the presence of fcc(111) peaks on all seeds, no PMA was observed in the as-deposited films on NiCr and Ta seeds. In contrast, PMA occurs in as-deposited [Co/Ni] on Pt and Ru seed. After 300^∘C annealing, PMA appears in [Co/Ni] on NiCr and Ta seed (see Fig.4(a) and (c), respectively). Simultaneously, an M_s loss is observed. Note that for the [Co/Ni] on NiCr, the M_s loss is large and the hysteresis loop becomes bow-tie like with coercivity (μ_0H_c) increase (Fig.<ref>(a)). The large μ_0H_c enables [Co/Ni] on NiCr seed to function as hard layer in MTJ stacks<cit.>. Fig.<ref>(a) and (b) summarize M_s and μ_0H_c of the [Co/Ni] on different seed layers for various annealing conditions. The M_s and μ_0H_c of [Co/Ni] on NiCr and Ta seed decrease and increase, respectively, further after 400^∘C annealing. For the [Co/Ni] on Pt and Ru seed, there is no change in M_s and μ_0H_c (see Fig.<ref>(b) and (d), respectively), even after 400^∘C annealing, indicating a good thermal tolerance.
Fig.5(c) and (d) show the effective perpendicular anisotropy field (μ_0H_k^eff) values and the calculated K_U^eff of [Co/Ni] on each seed for different annealing conditions. As deposited, the μ_0H_k^eff of [Co/Ni] on NiCr and Ta seed is 0, meaning that they have IMA, as shown in Fig.<ref>, while the highest μ_0H_k^eff is found on Pt seed as expected from the small lattice mismatch and the crystalline nature of Pt buffers, i.e. fcc(111). After annealing at 300^∘C, μ_0H_k^eff significantly increases in [Co/Ni] on all seeds, except when the [Co/Ni] is grown on Ru. On Ru, μ_0H_k^eff is the lowest. After 400^∘C annealing, PMA is maintained in all samples, though μ_0H_k^eff of [Co/Ni] on Pt and Ta seed slightly decrease. For the NiCr seed, μ_0H_k^eff even increases further and becomes more than 2 times larger than the values found in the other samples. The large μ_0H_k^eff and K_U^eff of [Co/Ni] on NiCr, especially after 400^∘C, will be analyzed in the following.
§.§ Correlation between structural and magnetic properties of [Co/Ni]
Commonly, highly fcc(111) textured [Co/Ni] films result in large PMA. In our case, we have observed some anomalous behaviors. The as-deposited films on NiCr and Ta did not show PMA. The PMA occurred and increased significantly after annealing, while M_s loss was observed. On the other hand, only limited PMA increase is observed on Pt seed after annealing, despite the large increase in diffraction peak intensity shown in Fig.<ref>(b). In short, the improvement of [Co/Ni] film quality leads to limited increase in PMA for [Co/Ni] on Pt and Ru seed, yet there is a huge increase in PMA for [Co/Ni] on NiCr and Ta seed. In the following, we will discuss the mechanisms responsible for the observed trends.
§.§.§ Impact of strain on the static magnetic properties
As shown in Section III.A, the fcc(111) peak position of [Co/Ni] on the various seed layers differs from each other, indicating the existence of strain induced by the seed layer. Because of the magneto-elastic effect, strain-induced magnetic anisotropy (K_s) can be an important contribution of the total PMA. K_s is calculated as<cit.>
K_s = (18B_2^Co+30B_2^Ni)/48·(ε_0-ε_3).
In this equation, B_2^Co=-29MJ/m^3 and B_2^Ni=+10MJ/m^3 reflect the fcc(111) cubic magneto-elastic coupling coefficients of bulk Co and Ni, respectively<cit.>. Possible thin film effects on the coefficients are beyond the scope of the paper. ε_3 is the out-of-plane strain, which can be derived from the shift of fcc(111) peak in XRD, with d_111 = 2.054Å as the reference<cit.>. And ε_0=-ε_3/ν, where ν is Poisson ratio. ν is calculated as weight-averaged values of bulk Co and Ni<cit.>.
Fig.<ref> summarizes the strain-induced PMA before and after annealing. It is clear that the strain from Pt and Ru seed result always in a negative contribution to the PMA of [Co/Ni], which may in both cases counteract with the increase in PMA from improved film quality after annealing (see the narrower peak of [Co/Ni] with larger intensity after annealing in Fig.<ref> and decreased FWHM in Fig.<ref>) and results in little net K_U^eff improvement (see Fig.<ref>(d)). Similarly observed in Fig.<ref>(d), as-deposited [Co/Ni] on Ta seed shows low K_U^eff due to the negative strain-induced PMA, even though it has the required texture (see Fig.<ref>). After annealing, strain-induced PMA contributes positively to total K_U^eff of [Co/Ni] on Ta seed. For [Co/Ni] on NiCr seed, the strain after annealing promotes the increase in total K_U^eff. From this discussion, it is clear that the seed layer providing in-plane tensile strain to [Co/Ni] is desired for PMA increase.
However, only the strain contribution cannot explain the large PMA that is observed after annealing on NiCr. Moreover, as shown in Fig.<ref>(b), there is also a large increase in μ_0H_c for [Co/Ni] on NiCr and Ta samples, while their M_s reduce dramatically, which cannot be explained by the previous strain-induced PMA change.
§.§.§ Impact of diffusion on the static magnetic properties
We have performed an advanced compositional analysis of the [Co/Ni] after annealing. Fig.<ref> shows the ToF-SIMS depth profiles of Cr, Pt, Ru and Ta as-deposited and after 400^∘C annealing. The Ni signal is provided to indicate position of the [Co/Ni] multilayers. In the case of the NiCr seed, Cr is found throughout the whole layer of [Co/Ni] after 400^∘C annealing, since its signal appears at the same depth (sputter time) as Ni. That means Cr diffuses heavily in the [Co/Ni]. The same phenomenon is observed for Pt seed, but the intensity of the signal from diffused Pt is low when compared to the Pt signal in the seed layer part, indicating that the diffusion amount of Pt is limited. For Ru shown in Fig.<ref>(c), the interface between the Ru seed and [Co/Ni] remains sharp after annealing. On the contrary, the less steep increase in Ta signal suggest intermixing between [Co/Ni] and Ta after annealing at the interface, but does not suggest Ta diffusion in the bulk of the [Co/Ni] films.
To quantitatively study the diffusion of the seed in the [Co/Ni], XPS measurements are conducted and the apparent atomic concentration of Co, Ni and seed layer element in each sample with different annealing conditions are shown in Fig.<ref>. Note that the higher apparent concentration of Co when compared to Ni for the as-deposited sample is due to the surface sensitivity of the XPS technique as explained in the figure caption. It is clear that Cr diffuses the most among the four seed layers. The presence of Cr in [Co/Ni] may also lead to the shift of the [Co/Ni] peak towards the NiCr peak in the XRD pattern shown in Fig.<ref>(a), probably resulting in the formation of Co-Ni-Cr alloy. Furthermore, the observed change in magnetic properties can likely be attributed to the formation of the Co-Ni-Cr alloy<cit.>. Indeed, the uniform diffusion of Cr was reported to cause μ_0H_c increase in Co-Ni film with in-plane anisotropy<cit.>. And Cr can be coupled antiferromagnetically with its Co and Ni hosts causing the M_s drop<cit.> and higher PMA. Less diffusion is observed for the Pt seed, in agreement with the peak shift toward Pt seed (Fig.1(b)) and for the lower increase in PMA as well, since the [Co/Pt] system, alloys or multilayers, is a well-known PMA system and so a change in PMA due to Pt atoms in the [Co/Ni] matrix is not necessarily detrimental<cit.>. On the contrary, no significant diffusion of Ru into [Co/Ni] layers has been observed in Fig.<ref>(c), so no impact on PMA is to be expected. Finally, a large increase in PMA has been observed after annealing whereas the diffusion is limited for Ta seed. Possibly, the improvement of crystal structure and the formation of a Co-Ni-Ta alloy at the interface of Ta seed and [Co/Ni] part happens at higher temperature, which gives rise to PMA and M_s loss<cit.>.
Apart from the M_s and PMA change, the diffusion of the seed material into the [Co/Ni] might also modify the magneto-elastic coefficients that enter the calculation of the strain-induced PMA (see Section III.C.1). This impact is, however, not straightforward and requires further study.
§.§.§ Impact of diffusion on the dynamic magnetic properties
Earlier studies reported on dopants that increase the damping when incorporated into a ferromagnetic film<cit.>. In our case, it is natural to expect that the diffused seed layer element will also affect the dynamic magnetic properties of [Co/Ni]. Therefore, the Gilbert damping constant (α) of each sample for the different annealing conditions was derived from VNA-FMR. Fig.<ref> compares the permeabilities of the [Co/Ni] films as-deposited and after 400^∘C annealing when the FMR frequency is set to 15 GHz by a proper choice of the applied field. The broadening of the linewidths after annealing reflects the increase in damping in all cases, with the noticeable exception of Ru. The NiCr resonance was too broadened to be resolved after 400^∘C annealing, reflecting a very high damping or a very large inhomogeneity in the magnetic properties. Linear fits of the FMR linewidth versus FMR frequency were conducted (not shown) to extract the damping parameters, which are listed in Table <ref>; a sketch of this procedure is given below. It should be noted that in our VNA-FMR measurements the two-magnon contribution to the linewidth, and hence its impact on the damping derivation, can be excluded due to the perpendicular geometry of the measurement<cit.>, and the contribution of spin pumping into the seed layer to the linewidth is in all cases expected to be within the error bar<cit.>. The lowest damping in the as-deposited [Co/Ni] films was obtained on the Ta and NiCr seeds, i.e. in the in-plane magnetized samples. The highest damping was found on the Pt seed, a fact that is generally interpreted as arising from the large spin-orbit coupling of Pt. The damping values increase upon post-annealing for [Co/Ni] on all seed layers except Ru. Though the damping of [Co/Ni] is larger than that of CoFeB/MgO<cit.>, it is equal to or smaller than that of [Co/Pt] and [Co/Pd]<cit.>, which makes [Co/Ni] of interest as a free layer material with high thermal stability in high density STT-MRAM applications<cit.>.
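For illustration, the linewidth-versus-frequency fit used here to extract α can be reproduced with a few lines of code. This is a minimal sketch assuming that, in the perpendicular geometry, the frequency-swept linewidth follows Δf = Δf_0 + 2αf; the data points below are placeholders rather than the measured values of this work.

import numpy as np

# FMR frequencies (GHz) and frequency-swept linewidths (GHz); placeholder data
f_res = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
df = np.array([0.45, 0.65, 0.86, 1.05, 1.26])

# linear fit Delta_f = Delta_f0 + 2*alpha*f
slope, intercept = np.polyfit(f_res, df, 1)
alpha = slope / 2.0
print(f"alpha = {alpha:.4f}, inhomogeneous broadening = {intercept:.3f} GHz")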
Fig.<ref>(a) and (b) plot the correlations of the α and K_U^eff values, respectively, with the dopant concentration. Here, the seed element concentration in the [Co/Ni] multilayer is represented by the ratio of the apparent seed concentration (at.%) to the sum of the apparent Co and Ni concentrations (at.%) for each sample and annealing condition. There is a clear correlation between the damping and the concentration of the seed element in the [Co/Ni] system. The absence of evolution of the damping upon annealing for [Co/Ni] on Ru can thus be explained by the non-diffusive character of the Ru seed. In the case of the NiCr seed, the damping is the largest after annealing. A similar trend was observed before on Au seeds and attributed to the formation of superparamagnetic islands in thin [Co/Ni] multilayers<cit.>. However, since for the same [Co/Ni] system we observe no M_s loss and no μ_0H_c increase on Ru and Pt, and since the PMA on NiCr after annealing is the highest, the formation of superparamagnetic islands cannot explain the magnetic behavior. It is more likely that the increased damping comes from the formation of a textured Co-Ni-Cr alloy with high PMA and high damping, as mentioned in Section III.C.2. Finally, in the case of the Pt and Ta seeds, the damping on the Pt seed is higher than on the Ta seed for the same concentration. That is attributed to the spin-orbit coupling of Pt dopants being substantially higher than that of Ta. In conclusion, it is clear that seed diffusion cannot be ignored when studying the impact of annealing on the structural and magnetic properties of [Co/Ni].
§ CONCLUSIONS
In summary, the structural and magnetic properties of [Co/Ni] on different seed layers were investigated. [Co/Ni] on Pt and Ru seeds shows fcc(111) texture after deposition and has PMA. IMA is observed for [Co/Ni] on NiCr and Ta seeds, where PMA appears and increases after post-annealing. Further annealing improves the texture and hence increases the PMA. Meanwhile, the shift of the [Co/Ni] diffraction peaks in the XRD curves indicates the presence of strain in the [Co/Ni] film, which can also influence the PMA. The strain-induced PMA may have a positive effect on the total PMA (NiCr and Ta seeds) or a negative impact (Pt and Ru seeds). A dramatic reduction of M_s and a large increase in μ_0H_c of [Co/Ni] on the NiCr and Ta seeds after annealing are observed. The damping of [Co/Ni] on the different seed layers evolves with post-annealing similarly to the PMA. These phenomena are explained by the diffusion properties of the seed layer materials. High PMA and very large damping are obtained on the NiCr seed because of the dramatic diffusion of Cr and the formation of a Co-Ni-Cr alloy. Though the non-diffusive Ru seed results in low damping, only low PMA is obtained both as-deposited and after annealing. The Pt seed can provide good PMA, but its large spin-orbit coupling, exerted from the interface with [Co/Ni] and from the diffused Pt, increases the damping, especially after annealing. PMA as high as on Pt has been observed for the Ta seed, with lower damping due to its smaller spin-orbit coupling. Fig.<ref> schematically summarizes the impact of the seed layer and annealing on the structural and magnetic properties of [Co/Ni] on the different seed layers.
Finally, by selecting the seed and the post-annealing temperature, the damping of [Co/Ni] can be tuned over a broad range from low to high while maintaining PMA after annealing up to 400^∘C. As such, the [Co/Ni] multilayer system is envisaged for various applications in spintronics, such as highly damped fixed layers, or low-damping free layers in high density magnetic memory or domain-wall-motion mediated spin logic applications.
This work is supported by IMEC's Industrial Affiliation Program on STT-MRAM devices.
[ITR(2015)] The International Technology Roadmap for Semiconductors, http://www.itrs.net/ (2015).
[Driskill-Smith et al.(2011)] A. Driskill-Smith, D. Apalkov, V. Nikitin, X. Tang, S. Watts, D. Lottis, K. Moon, A. Khvalkovskiy, R. Kawakami, X. Luo, A. Ong, E. Chen, and M. Krounbi, in 2011 3rd IEEE International Memory Workshop (IMW) (IEEE, 2011), pp. 1–3.
[Devolder et al.(2016)] T. Devolder, J.-V. Kim, F. Garcia-Sanchez, J. Swerts, W. Kim, S. Couet, G. Kar, and A. Furnemont, Phys. Rev. B 93, 024420 (2016).
[Yakushiji et al.(2010)] K. Yakushiji, T. Saruya, H. Kubota, A. Fukushima, T. Nagahama, S. Yuasa, and K. Ando, Appl. Phys. Lett. 97, 232508 (2010).
[Chatterjee et al.(2014)] J. Chatterjee, T. Tahmasebi, S. Mertens, G. S. Kar, T. Min, and J. De Boeck, IEEE Trans. Magn. 50, 4401704 (2014).
[Chatterjee et al.(2015)] J. Chatterjee, T. Tahmasebi, J. Swerts, G. S. Kar, and J. De Boeck, Appl. Phys. Express 8, 063002 (2015).
[Johnson et al.(1996)] M. T. Johnson, P. J. H. Bloemen, F. J. A. den Broeder, and J. J. de Vries, Rep. Prog. Phys. 59, 1409 (1996).
[Mizukami et al.(2011)] S. Mizukami, X. Zhang, T. Kubota, H. Naganuma, M. Oogane, Y. Ando, and T. Miyazaki, Appl. Phys. Express 4, 013005 (2011).
[Tadisina et al.(2010a)] Z. R. Tadisina, A. Natarajarathinam, and S. Gupta, J. Vac. Sci. Technol. A 28, 973 (2010).
[Tadisina et al.(2010b)] Z. R. Tadisina, A. Natarajarathinam, B. D. Clark, A. L. Highsmith, T. Mewes, S. Gupta, E. Chen, and S. Wang, J. Appl. Phys. 107, 09C703 (2010).
[Lytvynenko et al.(2015)] I. Lytvynenko, C. Deranlot, S. Andrieu, and T. Hauet, J. Appl. Phys. 117, 053906 (2015).
[Kar et al.(2014)] G. S. Kar, W. Kim, T. Tahmasebi, J. Swerts, S. Mertens, N. Heylen, and T. Min, in 2014 IEEE International Electron Devices Meeting (IEEE, 2014), pp. 19.1.1–19.1.4.
[Swerts et al.(2015)] J. Swerts, S. Mertens, T. Lin, S. Couet, Y. Tomczak, K. Sankaran, G. Pourtois, W. Kim, J. Meersschaut, L. Souriau, D. Radisic, S. Van Elshocht, G. Kar, and A. Furnemont, Appl. Phys. Lett. 106, 262407 (2015).
[Tomczak et al.(2016)] Y. Tomczak, J. Swerts, S. Mertens, T. Lin, S. Couet, E. Liu, K. Sankaran, G. Pourtois, W. Kim, L. Souriau, S. Van Elshocht, G. Kar, and A. Furnemont, Appl. Phys. Lett. 108, 042402 (2016).
[Liu et al.(2016)] E. Liu, J. Swerts, S. Couet, S. Mertens, Y. Tomczak, T. Lin, V. Spampinato, A. Franquet, S. Van Elshocht, G. Kar, A. Furnemont, and J. De Boeck, Appl. Phys. Lett. 108, 132405 (2016).
[Bromberg et al.(2014)] D. M. Bromberg, M. T. Moneck, V. M. Sokalski, J. Zhu, L. Pileggi, and J.-G. Zhu, in 2014 IEEE International Electron Devices Meeting (IEEE, 2014), pp. 33.1.1–33.1.4.
[Daalderop et al.(1990)] G. H. O. Daalderop, P. J. Kelly, and M. F. H. Schuurmans, Phys. Rev. B 42, 7270 (1990).
[Daalderop et al.(1992)] G. H. O. Daalderop, P. J. Kelly, and F. A. den Broeder, Phys. Rev. Lett. 68, 682 (1992).
[den Broeder et al.(1992)] F. den Broeder, E. Janssen, W. Hoving, and W. Zeper, IEEE Trans. Magn. 28, 2760 (1992).
[Bloemen et al.(1992)] P. J. H. Bloemen, W. J. M. de Jonge, and F. A. den Broeder, J. Appl. Phys. 72, 4840 (1992).
[Gottwald et al.(2012)] M. Gottwald, S. Andrieu, F. Gimbert, E. Shipton, L. Calmels, C. Magen, E. Snoeck, M. Liberati, T. Hauet, E. Arenholz, S. Mangin, and E. E. Fullerton, Phys. Rev. B 86, 014425 (2012).
[Shioda et al.(2015)] A. Shioda, T. Seki, J. Shimada, and K. Takanashi, J. Appl. Phys. 117, 17C726 (2015).
[Beaujour et al.(2007)] J.-M. Beaujour, W. Chen, K. Krycka, C. C. Kao, J. Z. Sun, and A. D. Kent, Eur. Phys. J. B 59, 475 (2007).
[Gimbert and Calmels(2012)] F. Gimbert and L. Calmels, Phys. Rev. B 86, 184407 (2012).
[You et al.(2012)] L. You, R. C. Sousa, S. Bandiera, B. Rodmacq, and B. Dieny, Appl. Phys. Lett. 100, 172411 (2012).
[Haertinger et al.(2013)] M. Haertinger, C. H. Back, S.-H. Yang, S. S. P. Parkin, and G. Woltersdorf, J. Phys. D: Appl. Phys. 46, 175001 (2013).
[Akbulut et al.(2015)] S. Akbulut, A. Akbulut, M. Özdemir, and F. Yildiz, J. Magn. Magn. Mater. 390, 137 (2015).
[Shaw et al.(2010)] J. M. Shaw, H. T. Nembach, and T. J. Silva, J. Appl. Phys. 108, 093922 (2010).
[Wang et al.(2013)] G. Wang, Z. Zhang, B. Ma, and Q. Y. Jin, J. Appl. Phys. 113, 17C111 (2013).
[Song et al.(2013)] H. S. Song, K. D. Lee, J. W. Sohn, S. H. Yang, S. S. P. Parkin, C. Y. You, and S. C. Shin, Appl. Phys. Lett. 103, 022406 (2013).
[Kurt et al.(2010)] H. Kurt, M. Venkatesan, and J. M. D. Coey, J. Appl. Phys. 108, 073916 (2010).
[Posth et al.(2009)] O. Posth, C. Hassel, M. Spasova, G. Dumpich, J. Lindner, and S. Mangin, J. Appl. Phys. 106, 023919 (2009).
[Fukami et al.(2010)] S. Fukami, T. Suzuki, H. Tanigawa, N. Ohshima, and N. Ishiwata, Appl. Phys. Express 3, 113002 (2010).
[Gubbiotti et al.(2012)] G. Gubbiotti, G. Carlotti, S. Tacchi, M. Madami, T. Ono, T. Koyama, D. Chiba, F. Casoli, and M. G. Pini, Phys. Rev. B 86, 014401 (2012).
[Ju et al.(2015)] H. Ju, B. Li, Z. Wu, F. Zhang, S. Liu, and G. Yu, Acta Phys. Sin. 64, 097501 (2015).
[Sabino et al.(2014)] M. P. R. Sabino, M. Tran, C. Hin Sim, Y. Ji Feng, and K. Eason, J. Appl. Phys. 115, 17C512 (2014).
[Kato et al.(2011)] T. Kato, Y. Matsumoto, S. Okamoto, N. Kikuchi, O. Kitakami, N. Nishizawa, S. Tsunashima, and S. Iwata, IEEE Trans. Magn. 47, 3036 (2011).
[Cao et al.(2016)] Y. Cao, M.-H. Li, K. Yang, X. Chen, G. Yang, Q.-Q. Liu, and G.-H. Yu, Rare Metals, 1 (2016).
[Bilzer et al.(2007)] C. Bilzer, T. Devolder, P. Crozat, C. Chappert, S. Cardoso, and P. P. Freitas, J. Appl. Phys. 101, 074505 (2007).
[Devolder et al.(2013)] T. Devolder, P.-H. Ducrot, J.-P. Adam, I. Barisic, N. Vernier, J.-V. Kim, B. Ockert, and D. Ravelosona, Appl. Phys. Lett. 102, 022407 (2013).
[Davis(2000)] J. R. Davis, Nickel, Cobalt, and Their Alloys, ASM Specialty Handbook (ASM International, 2000).
[Rafaja et al.(2000)] D. Rafaja, J. Vacinova, and V. Valvoda, Thin Solid Films 374, 10 (2000).
[Sander et al.(1999)] D. Sander, A. Enders, and J. Kirschner, J. Magn. Magn. Mater. 200, 439 (1999).
[Gopman et al.(2016)] D. B. Gopman, C. L. Dennis, P. J. Chen, Y. L. Iunin, P. Finkel, M. Staruch, and R. D. Shull, Sci. Rep. 6, 27774 (2016); arXiv:1601.01349.
[Cardarelli(2008)] F. Cardarelli, Materials Handbook: A Concise Desktop Reference (Springer, London, 2008).
[Ishikawa et al.(1986)] M. Ishikawa, N. Tani, T. Yamada, Y. Ota, K. Nakamura, and A. Itoh, IEEE Trans. Magn. 22, 573 (1986).
[Tokushige and Miyagawa(1990)] H. Tokushige and T. Miyagawa, IEEE Transl. J. Magn. Jpn. 5, 575 (1990).
[Hasegawa et al.(1989)] K. Hasegawa, S. Ono, T. Kawanabe, S. Nakagawa, and M. Naoe, J. Magn. Soc. Jpn. 13, 445 (1989).
[Iwasaki and Ouchi(1978)] S. Iwasaki and K. Ouchi, IEEE Trans. Magn. 14, 849 (1978).
[Chen et al.(2015)] X. Chen, M. Li, K. Yang, S. Jiang, G. Han, Q. Liu, and G. Yu, AIP Adv. 5, 097121 (2015).
[Khan et al.(1990)] M. Khan, R. Fisher, and N. Heiman, IEEE Trans. Magn. 26, 118 (1990).
[Gupta(2001)] K. P. Gupta, J. Phase Equilib. 22, 65 (2001).
[Kil et al.(2015)] J. Kil, Y. Choi, G. Bae, H. Oh, W. Choi, and W. Park, IEEE Trans. Magn. 51, 1 (2015).
[Rantschler et al.(2007)] J. O. Rantschler, R. D. McMichael, A. Castillo, A. J. Shapiro, W. F. Egelhoff, B. B. Maranville, D. Pulugurtha, A. P. Chen, and L. M. Connors, J. Appl. Phys. 101, 033911 (2007).
[Shaw et al.(2014)] J. M. Shaw, H. T. Nembach, and T. J. Silva, Appl. Phys. Lett. 105, 062406 (2014).
[Zwierzycki et al.(2005)] M. Zwierzycki, Y. Tserkovnyak, P. J. Kelly, A. Brataas, and G. E. W. Bauer, Phys. Rev. B 71, 064420 (2005).
[Liu et al.(2011)] X. Liu, W. Zhang, M. J. Carter, and G. Xiao, J. Appl. Phys. 110, 033910 (2011).
[Kato et al.(2012)] T. Kato, Y. Matsumoto, S. Kashima, S. Okamoto, N. Kikuchi, S. Iwata, O. Kitakami, and S. Tsunashima, IEEE Trans. Magn. 48, 3288 (2012).
[Ishikawa et al.(2016)] S. Ishikawa, E. Enobio, H. Sato, S. Fukami, F. Matsukura, and H. Ohno, IEEE Trans. Magn. 52, 3400704 (2016).
|
http://arxiv.org/abs/1701.07894v1 | 20170126223350 | Heavy neutrino potential for neutrinoless double beta decay | [
"Yoritaka Iwata"
] | hep-ph | [
"hep-ph",
"nucl-th"
] |
§ INTRODUCTION
The observation of neutrino oscillations has established that neutrinos have nonzero mass.
The observation of neutrinoless double-beta decay, for which a nonzero neutrino mass plays a supportive role, would be associated with important physics, e.g.,
* the existence of Majorana particles,
* the breaking of lepton number conservation,
* the quantitative determination of the neutrino mass.
In this sense, neutrinoless double-beta decay is intriguing enough to provide an example of physics beyond the standard model of elementary particle physics (for a review, see <cit.>).
Although the LSND experiment <cit.> suggested the possible existence of heavy neutrinos (recognized as "sterile neutrinos" in the literature), theorists have started to account for such a contribution to the neutrinoless double-beta decay half-life only recently.
In addition, the GALLEX/SAGE experiments <cit.> and the reactor anomaly also support the existence of sterile neutrinos.
All three experiments suggest neutrino masses on the eV scale.
Another motivation is that sterile neutrinos could be dark matter candidates, in which case their masses are on the keV scale.
If heavy neutrinos exist, they mix into the effective mass.
An example of the relation between the half-life of neutrinoless double-beta decay, the effective light neutrino mass (m_ν), and the effective heavy neutrino mass (η_N) is given by <cit.>
[ [T_0 ν^1/2] ^-1 = G { |M^0 ν|^2 (m_ν/m_e)^2 + |M^0 N|^2 (η_N )^2 }, ]
where G is the phase space factor (whose value is obtained rather precisely), m_e is the electron mass (also precisely known), η_N denotes the effective mass relative to the electron mass, and M^0 ν and M^0 N are the nuclear matrix elements (NME, for short) for light and heavy neutrinos, respectively.
In this context, light neutrinos are the already-observed ordinary neutrinos.
If heavy neutrinos exist, half-life observations for two different double-beta decay events (for example, decays of calcium and xenon) are needed:
[ [T_0 ν, I^1/2] ^-1 = G_I { |M^0 ν_ I|^2 (m_ν/m_e)^2 + |M_ I^0 N|^2 (η_N )^2 }, ]
and
[ [T_0 ν, II^1/2] ^-1 = G_II{ |M^0 ν_ II|^2 (m_ν/m_e)^2 + |M_ II^0 N|^2 (η_N )^2 }, ]
where the indices I and II label the decaying nuclei.
Since there are two unknown quantities, m_ν and η_N, two equations are required.
In order to determine the neutrino masses, it is necessary to calculate M_ I^0 ν, M_ II^0 ν, M_ I^0 N and M_ II^0 N very precisely.
To this point, many calculations with various theoretical models have been dedicated to the NMEs.
Since detailed information on the initial and final states (i.e., the quantum level structure of these states) is necessary for the calculation of the NMEs, it is impossible to obtain reliable NMEs without knowing the nuclear structure.
The impact of precise NME calculations is expected to be large (e.g., for a large-scale shell-model calculation for light neutrinos, see Ref. <cit.>), and such calculations should help uncover the unknown leptonic mass hierarchy and the Majorana nature of neutrinos.
In this article, the heavy neutrino potential for neutrinoless double-beta decay (for the definition, see Eq. (<ref>)) is studied from a statistical point of view.
The results in this article are intended to be compared with the light (that is, ordinary) neutrino case presented in Ref. <cit.>.
The comparison clarifies the contribution of heavy neutrinos to the neutrinoless double-beta decay half-life.
§ CONDITION FOR THE EXISTENCE OF HEAVY NEUTRINO
The role of the nuclear matrix elements is seen by solving Eqs. (<ref>)-(<ref>).
Under the validity of
[ |M_ I^0 N|^2 / |M^0 ν_ I|^2 ≠ |M_ II^0 N|^2 / |M^0 ν_ II|^2, ]
the effective neutrino masses for light and heavy neutrinos are represented by
[ (m_ν/m_e)^2 =
- |M^0 N_ II|^2 [G_I T_0 ν, I^1/2] ^-1 + |M^0 N_ I|^2 [G_II T_0 ν, II^1/2] ^-1/ |M^0 ν_ II|^2 |M_ I^0 N|^2 - |M^0 ν_ I|^2 |M_ II^0 N|^2 , ]
and
[ (η_N )^2 =
|M^0 ν_ II|^2 [G_I T_0 ν, I^1/2] ^-1 - |M^0 ν_ I|^2 [G_II T_0 ν, II^1/2] ^-1/ |M^0 ν_ II|^2 |M_ I^0 N|^2 - |M^0 ν_ I|^2 |M_ II^0 N|^2 , ]
respectively.
Condition (<ref>) is valid as long as the nuclear structure effect on double-beta decay is not trivial; indeed, it fails only if the NMEs of the two decay candidates have exactly the same heavy-to-light ratio.
This condition was explored in Refs. <cit.>.
According to Eq. (<ref>), the experimentally-confirmed nonzero neutrino effective mass suggests that
[ |M^0 N_ I |^2 G_I T_0 ν, I^1/2 ≠ |M^0 N_ II |^2 G_II T_0 ν, II^1/2 ]
note that Eq. (<ref>) involves only heavy-neutrino NMEs and half-lives.
According to Eq. (<ref>),
|M^0 ν_ I|^2 G_I T_0 ν, I^1/2 = |M^0 ν_ II|^2 G_II T_0 ν, II^1/2
suggests that heavy neutrinos do not exist.
The satisfaction of Eq. (<ref>) implies one of the following two possibilities:
(i) the present framework (<ref>) is too simple to be valid,
(ii) heavy neutrinos do not exist.
Since Eq. (<ref>) can be checked without knowing anything about heavy neutrinos, its violation is practically used as a sufficient condition for the existence of heavy neutrinos under the validity of the framework (<ref>) (i.e., a heavy-neutrino existence condition based on Eq. (<ref>)).
It is worth noting that, as discussed around Eq. (15) of Ref. <cit.>, additional terms can be added to Eq. (<ref>).
Under the non-existence of heavy neutrinos (i.e., applying Eq. (<ref>) to Eq. (<ref>)),
[ (m_ν/m_e)^2 = [G_I T_0 ν, I^1/2] ^-1/ |M^0 ν_ I|^2 |M^0 ν_ II|^2 |M^0 N_ I|^2 - |M^0 ν_ I|^2 |M^0 N_ II|^2 / |M^0 ν_ II|^2 |M_ I^0 N|^2 - |M^0 ν_ I|^2 |M_ II^0 N|^2
= 1/G_I [T_0 ν, I^1/2] ^-1/ |M^0 ν_ I|^2 ]
trivially follows.
If the squared masses are positive,
[ ( |M^0 N_ II|^2 [G_I T_0 ν, I^1/2] ^-1 - |M^0 N_ I|^2 [G_II T_0 ν, II^1/2] ^-1)
( |M^0 ν_ II|^2 [G_I T_0 ν, I^1/2] ^-1 - |M^0 ν_ I|^2 [G_II T_0 ν, II^1/2] ^-1) ≤ 0, ]
must be satisfied (i.e., real mass condition).
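For concreteness, the two-decay system (<ref>)-(<ref>) can be inverted numerically for the two squared effective masses, and the above conditions can be checked at the same time. The following is a minimal sketch; all numerical inputs (NMEs, phase space factors, half-lives) are placeholders, not actual values.

import numpy as np

def effective_masses(M_nu, M_N, G, T_half):
    # Inputs: length-2 sequences for decays I and II.
    # Solves [G T]^{-1} = |M_nu|^2 (m_nu/m_e)^2 + |M_N|^2 eta_N^2 for the
    # two unknowns (m_nu/m_e)^2 and eta_N^2.
    y = 1.0 / (np.asarray(G, float) * np.asarray(T_half, float))
    A = np.array([[M_nu[0]**2, M_N[0]**2],
                  [M_nu[1]**2, M_N[1]**2]])
    m2, eta2 = np.linalg.solve(A, y)
    return m2, eta2

# placeholder inputs: dimensionless NMEs, G in 1/yr, half-lives in yr
m2, eta2 = effective_masses(M_nu=[4.0, 2.0], M_N=[200.0, 120.0],
                            G=[2.5e-14, 1.5e-14], T_half=[1.0e26, 2.0e26])
print(m2, eta2)
if m2 < 0 or eta2 < 0:
    print("real mass condition violated for these inputs")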
§ NEUTRINO POTENTIAL
§.§ Nuclear matrix element
The nuclear matrix elements of double-beta decay are investigated under the closure approximation.
This approximation replaces all the different virtual intermediate energies by a single averaged energy (the closure parameter).
For neutrinoless double-beta decay, the nuclear matrix elements for light and heavy neutrinos are written as
[ M^0 ν = M_ F^0 ν - g_V^2/g_A^2 M_ GT^0 ν + M_ T^0 ν ]
and
[ M^0 N = M_ F^0 N - g_V^2/g_A^2 M_ GT^0 N + M_ T^0 N; ]
respectively, where g_V and g_A denote the vector and axial coupling constants, and the index α of M_α^0 x labels the three parts of the double-beta decay operator: α = F, GT, T (Fermi, Gamow-Teller, and tensor parts).
According to Ref. <cit.>, each part is further represented by a sum over products of two-body transition densities (TBTD) and anti-symmetrized two-body matrix elements.
[ M_α^0 x = ⟨ 0_f^+ |O_α^0 x | 0_i^+ ⟩; = ∑ TBTD(n'_1 l'_1 j'_1 t'_1, n'_2 l'_2 j'_2 t'_2, n_1 l_1 j_1 t_1, n_2 l_2 j_2 t_2; J); ⟨ n'_1 l'_1 j'_1 t'_1, n'_2 l'_2 j'_2 t'_2; J|O_α^0 x(r) | n_1 l_1 j_1 t_1, n_2 l_2 j_2 t_2; J ⟩_ AS ]
where O^0 x_α(r) are transition operators of neutrinoless double beta decay, and 0_i^+ and 0_f^+ denote initial and final states, respectively (x is either ν or N).
The sum is taken over the indices (n_i l_i j_i t_i, n'_j l'_j j'_j t'_j) with i,j=1,2, where n, l, j and t denote the principal, orbital angular momentum, total angular momentum and isospin quantum numbers, respectively, j_1 and j_2 (or j_1' and j_2') are coupled to J, similarly l_1 and l_2 (or l_1' and l_2') are coupled to λ (or λ'), and t_1 = t_2 = 1/2, t_1' = t_2' = -1/2 holds if neutrons decay into protons.
The two-body matrix element before the anti-symmetrization is represented by
[ ⟨ n'_1 l'_1 j'_1 t'_1, n'_2 l'_2 j'_2 t'_2; J|O_α^0 x(r) | n_1 l_1 j_1 t_1, n_2 l_2 j_2 t_2; J ⟩; = 2 ∑_S, S', λ, λ'√(j_1' j_2' S' λ')√(j_1 j_2 S λ)
⟨ l_1' l_2' λ' S'; J| S_α | l_1 l_2 λ S; J ⟩
⟨ n_1' l_1' n_2' l_2'; J| H_α(r) | n_1 l_1 n_2 l_2 ⟩; {[ l_1' 1/2 j_1'; l_2' 1/2 j_2'; λ' S' J ]}
{[ l_1 1/2 j_1; l_2 1/2 j_2; λ S J ]} ]
where H_α(r) is the neutrino potential, S_α denotes the spin operators, S and S' are the two-body spins, and the curly bracket with nine entries denotes the 9j-symbol.
By implementing the Talmi-Moshinsky transforms:
⟨ n l, NL| n_1 l_1, n_2 l_2 ⟩_λ⟨ n' l', N'L'| n_1' l_1', n_2' l_2' ⟩_λ'
the harmonic oscillator basis is transformed to the center-of-mass system.
[ ⟨ l_1' l_2' λ' S'; J| S_α | l_1 l_2 λ S; J ⟩⟨ n_1' l_1' n_2' l_2'; J| H_α(r) | n_1 l_1 n_2 l_2 ⟩; = ∑_ mos2⟨ n l, NL| n_1 l_1, n_2 l_2 ⟩_λ⟨ n' l', N'L'| n_1' l_1', n_2' l_2' ⟩_λ'
⟨ l' L λ' S'; J| S_α | l L λ S; J ⟩⟨ n' l'| H_α(√(2)ρ) | n l ⟩, ]
where ρ = r/√(2) is the transformed coordinate in the center-of-mass system, and "mos2" means that the sum is taken over (n,n',l,l',N,N') <cit.>.
In this article, in order to allow a comparison with the preceding results <cit.>, we focus on the neutrino potential effect arising from
[ ⟨ n' l'| H_α(√(2)ρ) | n l ⟩. ]
This part is responsible for the amplitude of each transition from a state with (n, l) to another state with (n', l'), while the cancellation is governed by the spin-dependent part.
For calculations of heavy-neutrino exchange matrix elements, see Refs. <cit.>.
§.§ Neutrino potential represented in the center-of-mass system
Under the closure approximation, the neutrino potential <cit.> is represented by
[ H_α(√(2)ρ) = 2R/π∫_0^∞ f_α (√(2)ρ q) h_α(q)/√(q^2 + m_ν^2) ( √(q^2 + m_ν^2) + ⟨ E ⟩ ) q^2 dq. ]
where q is the momentum of the virtual neutrino, m_ν is the effective neutrino mass, R denotes the radius of the decaying nucleus, and f_α is a spherical Bessel function (α=0,2).
Here, ⟨ E ⟩ is called the closure parameter; it is the averaged excitation energy of the virtual intermediate states.
In Eq. (<ref>), the neutrino potentials include the dipole form factors (rather than bare couplings) that take the finite nucleon size into account.
The massless neutrino limit (m_ν→ 0) of the neutrino potential is
[ H_α(√(2)ρ) = 2R/π∫_0^∞ f_α (√(2)ρ q) h_α(q)/q+ ⟨ E ⟩ q dq, ]
and the heavy mass limit (m_ν >> ⟨ E ⟩, m_ν^2 >> q^2) of the neutrino potential is
[ H_α(√(2)ρ) = 1/m_ν^22R/π∫_0^∞ f_α (√(2)ρ q) h_α(q) q^2 dq. ]
For ordinary light neutrinos, the neutrino potential in the massless limit can be utilized.
For the heavy neutrino case, the Simkovic unit is employed, in which the value of m_ν^2 H_α(√(2)ρ) is divided by the proton and electron masses (i.e., the value of (m_ν^2/m_p m_e)H_α(√(2)ρ) is shown in this article).
Following the corresponding study of the massless-limit case <cit.>, this article is devoted to investigating the heavy-mass-limit case.
The neutrino potentials are represented as
[ h_ F(q^2) = g_V^2/(1+q^2/Λ_V^2)^4; h_ GT(q^2) = 2/3q^2/4 m_p^2 (μ_p - μ_n) ^2 g_V^2/(1+q^2/Λ_V^2)^4
+
( 1-2/3q^2/q^2+m_π^2 + 1/3( q^2/q^2+m_π^2)^2 )
g_A^2/(1+q^2/Λ_A^2)^4; h_ T(q^2) = 1/3q^2/4 m_p^2 (μ_p - μ_n) ^2 g_V^2/(1+q^2/Λ_V^2)^4
+
( 2/3q^2/q^2+m_π^2 - 1/3( q^2/q^2+m_π^2)^2 )
g_A^2/(1+q^2/Λ_A^2)^4; ]
where μ_p and μ_n are the magnetic moments satisfying μ_p - μ_n = 4.7, m_p and m_π are the proton and pion masses, and Λ_V = 850 MeV, Λ_A = 1086 MeV are the finite-size parameters.
Figure <ref> shows the integrand of Eq. (<ref>).
In all cases, ripples of the form q ρ = const. can be found when q and ρ are relatively large.
The upper limit of the integration range should therefore be at least q=1600 MeV.
In our research, including our recent publication <cit.>, we take q=2000 MeV and r=10 fm as the maximum values for the numerical integration of Eq. (<ref>) (massless neutrino case).
We noticed that, for convergence, the q_max of the integral should be considerably larger in the heavy cases than in the light cases.
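As an illustration of this convergence issue, the Fermi-part radial integral in the heavy mass limit can be evaluated numerically as follows. This is a minimal sketch in the Simkovic unit, i.e., it returns (m_ν^2/m_p m_e)H_F(√(2)ρ); the unit bookkeeping through ħc, the placeholder nuclear radius R, and the grid of q_max values are assumptions made for this illustration only.

import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

hbarc = 197.327            # MeV fm
m_p, m_e = 938.272, 0.511  # MeV
Lambda_V, g_V = 850.0, 1.0
R = 5.0                    # nuclear radius in fm (placeholder value)

def h_F(q):
    # Fermi-part form factor of Eq. (<ref>)
    return g_V**2 / (1.0 + (q / Lambda_V)**2)**4

def H_F_heavy(rho, q_max):
    # (2R/pi) * int_0^{q_max} j0(sqrt(2) rho q) h_F(q) q^2 dq, Simkovic unit
    integrand = lambda q: (spherical_jn(0, np.sqrt(2.0) * rho * q / hbarc)
                           * h_F(q) * q**2)
    val, _ = quad(integrand, 0.0, q_max, limit=500)
    return (2.0 * R / np.pi) * val / (m_p * m_e * hbarc)

for q_max in (1600.0, 2000.0, 4000.0, 6000.0):  # convergence check in q_max
    print(q_max, H_F_heavy(rho=1.0, q_max=q_max))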
§ STATISTICS
Since actual quantum states are represented by superpositions of basis states such as | nl ⟩ in the shell-model treatment, the contribution of the neutrino potential part can be regarded as the superposition:
[ ∑_n, n', l, l' k_n, n', l, l' ⟨ n' l'| H_α(√(2)ρ) | n l ⟩. ]
using a suitable set of coefficients { k_n, n', l, l'} determined by the nuclear structure of grandmother and daughter nuclei.
Accordingly, in order to see the difference between the light and heavy neutrino contributions, it is worth investigating the statistical properties of the neutrino potential part (<ref>) in the heavy mass limit.
The frequency distribution of the neutrino potential part (<ref>) is shown in Fig. <ref>.
The values are always positive for the Fermi and Gamow-Teller parts, while the tensor part includes non-negligible negative values.
Indeed, the decomposition of the tensor part shows that its total sum 9.458 is obtained by the cancellation between the positive contribution +9.943 and the negative contribution -0.485 (i.e., 9.458 = 9.943-0.485).
The order of magnitude is different only for the tensor part.
Indeed, the average of the nonzero components is 0.0526 for the Fermi part, 0.0485 for the Gamow-Teller part, and 0.0063 for the tensor part.
The l = l' = 0 contributions (summed) cover 49.6% of the total contributions (summed) for the Fermi part, 52.3% for the Gamow-Teller part, and 12.8% for the tensor part.
Since the corresponding values in the light ordinary neutrino case are 27.1% for the Fermi part, 27.1% for the Gamow-Teller part, and 7.2% for the tensor part <cit.>, the l = l' = 0 components clearly play a more dominant role (roughly by a factor of two) in the heavy neutrino case.
The largest contributions for the Fermi, Gamow-Teller and tensor parts are summarized in Table <ref>.
The contribution labeled by (n l n' l')=(0 0 0 0) (i.e., the transition between 0s orbits) is the largest in every part. Roughly speaking, the s-orbit is remarkably significant in the heavy neutrino case.
Indeed, the top 10 contributions of the Fermi and Gamow-Teller parts are completely filled with s-orbit contributions.
As seen in the top-10 list, the ordering of the labels (n l n' l') is similar for the Fermi and Gamow-Teller parts; note that for the ordinary light neutrino case the ordering of the Fermi and Gamow-Teller parts is exactly the same, as far as the top-10 list is concerned <cit.>.
The ten largest contributions (summed) cover 49.6% of the total contributions (summed) for the Fermi part, 52.3% for the Gamow-Teller part, and 13.4% for the tensor part.
The minimum value of the tensor part is -0.0086, achieved by (n l n' l')=(3 0 3 4) and (3 4 3 0).
The correlations between the values of Eq. (<ref>) for the different parts are examined in Fig. <ref>.
A comparison between the Fermi and Gamow-Teller parts shows that they take almost the same values, although the Fermi part is generally slightly larger than the Gamow-Teller part.
Such a quantitative similarity between the Fermi and Gamow-Teller parts is not trivial, since their mathematical representations differ, at least in the form factors (cf. Eq. (<ref>)).
The tensor part is positively correlated with the Fermi part (and therefore with the Gamow-Teller part).
The l = l' components of the tensor part (summed) cover 28.9% of the total tensor part contributions (summed).
§ SUMMARY
There are two kinds of components in the nuclear matrix element: one is responsible for the amplitude and the other for the cancellation.
As the component responsible for the amplitude, the neutrino potential part (i.e., Eq. (<ref>)) was investigated in this article.
The presented results are valid not only for a specific double-beta decay candidate but for all possible candidates within n, n' = 0, 1, ⋯, 3 and l, l' = 0, 1, ⋯, 6.
Note that, in terms of magnitude, almost 40% smaller values apply to the Gamow-Teller part in the calculation of the nuclear matrix element, since (g_V/g_A)^2 = (1/1.27)^2 ∼ 0.62 (cf. Eq. (<ref>)).
Among the several results on the heavy neutrino case, a positive correlation between the values of the Fermi, Gamow-Teller and tensor parts has been clarified.
This property is common to the light ordinary neutrino case.
Apart from the tensor part, almost half of the total contribution has been shown to come from the 10 largest contributions, which coincide exactly with the l = l' = 0 contributions.
As a result, an enhanced dominance of the s-wave contribution is found for the heavy neutrino case.
The other components of the NMEs, responsible for the cancellation, will be studied in future work.
16engel
J. Engel and J. Menéndez, arXiv:1610.06548.
06dodelson
S. Dodelson, A. Melchiorri, and A. Slosar, Phys. Rev. Lett. 97, 041301 (2006).
97bahcall
J. N. Bahcall, Phys. Rev. C 56, 3391 (1997).
12vergados
J. D. Vergados, H. Ejiri, and F. Simkovic, Rep. Prog. Phys. 75, 106301 (2012).
13horoi
M. Horoi, Phys. Rev. C 87, 014320 (2013).
15iwata
Y. Iwata, N. Shimizu, T. Otsuka, Y. Utsuno, J. Menéndez, M. Honma, and T. Abe, Phys. Rev. Lett. 116, 112502 (2016).
16iwata
Y. Iwata, to appear in Nucl. Phys. Lett.; arXiv:1609.03118.
11faessler
A. Faessler, G. L. Fogli, E. Lisi, A. M. Rotunno and F. Simkovic, Phys. Rev. D 83, 113015 (2011).
15lisi
E. Lisi, A. Rotunno and F. Simkovic, Phys. Rev. D 92, 093004 (2015).
iwata-cns
Y. Iwata, J. Menéndez, N. Shimizu, T. Otsuka, Y. Utsuno, M. Honma, and T. Abe, CNS annual report. CNS-REP-94, 71 (2016).
10blennow
M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, and J. Menéndez, JHEP 07, 096 (2010).
14faessler
A. Faessler, M. González, S. Kovalenko, and F. Simkovic, Phys. Rev. D90, 096010 (2014).
15barea
J. Barea, J. Kotila, and F. Iachello, Phys. Rev. D 92, 093001 (2015).
15hyvarinen
J. Hyvarinen and J. Suhonen, Phys. Rev. C91, 024613 (2015).
16horoi
M. Horoi and A. Neacsu, Phys. Rev. C93, 024308 (2016).
91tomoda
T. Tomoda, Rep. Prog. Phys. 54 53 (1991).
10horoi
M. Horoi and S. Stoica, Phys. Rev. C 81, 024321 (2010).
13senkov
R. A. Sen'kov and M. Horoi, Phys. Rev. C 88, 064312 (2013).
|
http://arxiv.org/abs/1701.07871v1 | 20170126203640 | Secure SWIPT Networks Based on a Non-linear Energy Harvesting Model | [
"Elena Boshkovska",
"Nikola Zlatanov",
"Linglong Dai",
"Derrick Wing Kwan Ng",
"Robert Schober"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
(Invited Paper)
Secure SWIPT Networks Based on a Non-linear Energy Harvesting Model
Elena Boshkovska,
Nikola Zlatanov, Linglong Dai, Derrick Wing Kwan Ng, and Robert Schober E. Boshkovska and R. Schober are with Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Germany. Linglong Dai is with Tsinghua University, Beijing, China. L. Dai is supported by the International Science & Technology Cooperation Program of China (Grant No. 2015DFG12760) and the National Natural Science Foundation of China (Grant No. 61571270).
D. W. K. Ng is with The University of New South Wales, Australia. N. Zlatanov is with Monash University, Australia. R. Schober is supported by the AvH Professorship Program of the Alexander von Humboldt Foundation. D. W. K. Ng is supported under the Australian Research Council's Discovery Early Career Researcher Award funding scheme (project number DE170100137).
We optimize resource allocation to enable communication security in simultaneous wireless information and power transfer (SWIPT) for internet-of-things (IoT) networks. The resource allocation algorithm design is formulated as a non-convex optimization problem. We aim at maximizing the total harvested power at energy harvesting (EH) receivers via the joint optimization of transmit beamforming vectors and the covariance matrix of the artificial noise injected to facilitate secrecy provisioning. The proposed problem formulation takes into account the non-linearity of energy harvesting circuits and the quality of service requirements for secure communication.
To obtain a globally optimal solution of the resource allocation problem, we first transform the resulting non-convex sum-of-ratios objective function into an equivalent objective function in parametric subtractive form, which facilitates the design of a novel iterative resource allocation algorithm. In each iteration, the semidefinite programming (SDP) relaxation approach is adopted to solve a rank-constrained optimization problem optimally. Numerical results reveal that the proposed algorithm can guarantee communication security and provide a significant performance gain in terms of the harvested energy compared to existing designs which are based on the traditional linear EH model.
§ INTRODUCTION
In the era of the Internet of Things (IoT), it is expected that 50 billion wireless communication devices will be connected worldwide <cit.>. Smart physical objects equipped with sensors and wireless communication chips are able to collect and exchange information. These smart objects are wirelessly connected to computing systems to provide intelligent everyday services such as e-health, automated control, energy management (smart city and smart grid), logistics, security control, and safety management. However, the limited energy storage capacity of battery-powered wireless communication devices severely limits the lifetime of wireless communication networks. Although battery replacement provides an intermediate solution to the energy shortage, frequent replacement of batteries can be costly and cumbersome. This creates a serious performance bottleneck for providing stable communication services. On the other hand, a promising approach to extend the lifetime of wireless communication networks is to equip wireless communication devices with energy harvesting (EH) technology to scavenge energy from external sources. Solar, wind, tidal, biomass, and geothermal are the major renewable energy sources for generating electricity <cit.>. Yet, these conventional natural energy sources are usually climate and location dependent, which limits the mobility of the wireless devices. More importantly, the intermittent and uncontrollable nature of these natural energy sources is a major obstacle for providing stable wireless communication via traditional EH technologies.
Recently, wireless energy transfer (WET) has attracted significant attention
from both academia and industry <cit.>–<cit.>, as a key to unlock the potential of the IoT. Generally speaking, WET technology can be divided into three categories: magnetic resonant coupling, inductive coupling, and radio frequency (RF)-based WET. The first two technologies rely on near-field magnetic fields and do not support any mobility of the EH devices, due to the short wireless charging distances and the required alignment of the magnetic field with the EH circuits. In contrast, RF-based WET technologies <cit.>–<cit.> utilize the far-field of electromagnetic
wireless charging and data communication in WET networks over long distances (e.g. hundreds of metres). Moreover, the broadcast nature of wireless channels facilitates one-to-many wireless charging which eliminates the need for power cords and manual recharging for IoT devices. As a result, simultaneous wireless information and power transfer (SWIPT) is expected to be a key enabler for sustainable IoT communication networks. Yet, the introduction of SWIPT to communication systems has led to a paradigm shift in both system architecture and resource allocation
algorithm design. For instance, in SWIPT systems, one can increase the energy
of the information carrying signal to increase the amount of RF energy harvested at the receivers. However, increasing the power of the information signals may also increase their susceptibility to
eavesdropping, due to the higher potential for information leakage. As a result, both communication security concerns and the need for efficient WET naturally arise in systems providing SWIPT services.
Nowadays, various types of cryptographic encryption algorithms are employed at the application layer for guaranteeing wireless communication security. However, secure secret key management and distribution via an authenticated third party is typically required for these algorithms, which may not be realizable in future wireless IoT networks due to the expected massive numbers of devices. Therefore, a considerable amount of work has recently been
devoted to information-theoretic physical (PHY) layer security
as a complementary technology to the existing encryption algorithms <cit.>–<cit.>. It has been shown that
in a wire-tap channel, if the source-destination channel enjoys better conditions compared to the source-eavesdropper channel <cit.>, perfectly secure
communication between a source and a destination is possible. Hence, multiple-antenna technology and advanced signal processing algorithms have been proposed to ensure secure communications. Specifically, by exploiting the extra degrees of freedom
offered by multiple antennas, the information beams can be focused on the desired legitimate receivers to reduce the chance of information leakage. Besides, artificial noise can be injected into the communication channel deliberately to degrade the channel quality of the eavesdroppers. These concepts have also been extended to SWIPT systems to provide secure communication. In <cit.>, beamforming was studied to enhance security and power efficiency in SWIPT systems. The authors of <cit.> proposed a multi-objective optimization framework to investigate the non-trivial tradeoff between interference, total harvested power, and energy consumption in a secure cognitive radio SWIPT network. However, the resource allocation algorithms designed for secure SWIPT systems <cit.> were based on a linear EH model which does not capture the highly non-linear characteristics of practical end-to-end WET <cit.>–<cit.>. In particular, existing resource allocation schemes designed for the
linear EH model may lead to severe
resource allocation mismatches resulting in performance
degradation in WET and secure communications. These observations motivate us to study the design of efficient resource allocation algorithms for secure SWIPT systems taking into account a practical non-linear EH model.
§ SYSTEM MODEL
In this section, we first introduce the notation adopted in this
paper. Then, we present the downlink channel
model for secure communication in SWIPT systems.
§.§ Notation
We use boldface capital and lower case letters to denote matrices and vectors, respectively. 𝐀^H, Tr(𝐀), Rank(𝐀), and Det(𝐀) represent the Hermitian transpose, trace, rank, and determinant of matrix 𝐀, respectively; 𝐀≻0 and 𝐀≽0 indicate that 𝐀 is a positive definite and a positive semidefinite matrix, respectively; 𝐈_N is the N× N identity matrix; [𝐪]_m:n returns a vector with the m-th to the n-th elements of vector 𝐪; ℂ^N× M denotes the set of all N× M matrices with complex entries; ℍ^N denotes the set of all N× N Hermitian matrices. The circularly symmetric complex Gaussian (CSCG) distribution is denoted by CN(𝐦,Σ) with mean vector 𝐦 and covariance matrix Σ; ∼ indicates "distributed as"; E{·} denotes statistical expectation; |·| represents the absolute value of a complex scalar; [x]^+ stands for max{0,x}; and [·]^T represents the transpose operation.
§.§ Channel Model
A frequency flat fading channel for downlink communication is considered. The SWIPT system comprises
a base station (BS), an information receiver (IR),
and J energy harvesting receivers (ER), as shown in Figure <ref>. The BS is equipped with N_T≥ 1 antennas. The IR is a single-antenna device and each ER is equipped with N_R≥ 1 receive antennas for EH. In the considered system, the signal intended for the IR is overheard by the ERs due to the broadcast nature of wireless channels. To guarantee communication security, the ERs are treated as potential eavesdroppers which has to be taken into account for resource allocation algorithm design. We assume that N_T> N_R
for the following study. The signals received at the IR and ER j∈{1,…, J} are modelled as
y = 𝐡^H(𝐰s+𝐯) +n,
𝐲_ER_j = 𝐆_j^H(𝐰s+𝐯)+𝐧_ER_j, ∀ j∈{1,…,J},
respectively, where s∈ℂ and 𝐰∈ℂ^N_T×1 are the information symbol and the corresponding beamforming vector, respectively. Without loss of generality, we assume that E{|s|^2}=1. 𝐯∈ℂ^N_T× 1 is an artificial noise vector generated by the BS to facilitate efficient WET and to guarantee communication security. In particular, 𝐯 is modeled as a random vector with circularly
symmetric complex Gaussian distribution
𝐯∼ CN(0, 𝐕),
where 𝐕∈ℍ^N_T, 𝐕≽0, denotes the covariance matrix of the artificial noise. The channel vector between the BS and the IR is denoted by 𝐡∈ℂ^N_T×1 and the channel matrix between the BS and ER j is denoted by 𝐆_j∈ℂ^N_T× N_R. n∼ CN(0,σ_s^2) and 𝐧_ER_j∼ CN(0,σ_s^2𝐈_N_R) are the additive white Gaussian noises (AWGN) at the IR and ER j, respectively, where σ_s^2 denotes the noise power at each antenna of the receiver.
§.§ Energy Harvesting Model
Figure <ref> depicts the block diagram of the ER in SWIPT systems. In general, a bandpass filter and a
rectifying circuit are adopted in an RF-ER to convert the received RF power to direct current (DC) power. The total received RF power at ER j is given by
P_ER_j=Tr((𝐰𝐰^H +𝐕)𝐆_j𝐆_j^H).
In the SWIPT literature, for simplicity, the total harvested power at ER j is typically modelled as follows:
Φ_ER_j^Linear=η_j P_ER_j,
where 0≤η_j≤1 denotes the energy conversion efficiency of ER j. From (<ref>), it can be seen that with existing models, the total harvested power at the ER is linearly and directly proportional to the received RF power. However, practical RF-based EH circuits consist of
resistors, capacitors, and diodes. Experimental results have shown that these circuits <cit.>–<cit.> introduce various non-linearities into the end-to-end WET. In order to design a resource allocation algorithm for practical secure SWIPT systems, we adopt the non-linear parametric EH model from <cit.>. Consequently, the total harvested power at ER j, Φ_ER_j, is modelled as:
Φ_ER_j = [Ψ_ER_j
- M_jΩ_j]/(1-Ω_j), Ω_j=1/(1+exp(a_jb_j)),
where
Ψ_ER_j = M_j/(1+exp(-a_j(P_ER_j-b_j)))
is a sigmoid function which takes the received RF power, P_ER_j, as the input.
Constant M_j denotes the maximal harvested power at ER j when the EH circuit is driven into saturation by an exceedingly large input RF power. Constants a_j and b_j capture the joint effects of resistance, capacitance, and circuit sensitivity. In particular, a_j reflects the non-linear charging rate (e.g. the steepness of the curve) with respect to the input power, and b_j is related to the minimum turn-on voltage of the EH circuit.
In practice, parameters a_j, b_j, and M_j of the proposed model in (<ref>)
can be obtained using a standard curve fitting algorithm for measurement results of a given EH hardware circuit. In Figure <ref>, we show an example for the curve fitting for the non-linear EH model in (<ref>) with parameters M=0.024, b=0.014, and a=150. It can be observed that the parametric non-linear model closely matches experimental results provided in <cit.> for the wireless power harvested by a practical EH circuit. Figure <ref> also illustrates the inability of the linear model in (<ref>) to capture the non-linear characteristics of practical EH circuits, especially in the high received RF power regime.
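For illustration, such a fit can be carried out with a standard non-linear least-squares routine. The sketch below generates synthetic "measurements" from the model itself and recovers (M_j, a_j, b_j); the data are placeholders, not the measurements of <cit.>.

import numpy as np
from scipy.optimize import curve_fit

def harvested_power(P, M, a, b):
    # non-linear EH model of (<ref>)
    Omega = 1.0 / (1.0 + np.exp(a * b))
    Psi = M / (1.0 + np.exp(-a * (P - b)))
    return (Psi - M * Omega) / (1.0 - Omega)

P_in = np.linspace(0.0, 0.06, 13)                    # input RF power (W)
P_out = harvested_power(P_in, 0.024, 150.0, 0.014)   # synthetic data
P_out += np.random.default_rng(0).normal(0.0, 2e-4, P_in.size)

popt, _ = curve_fit(harvested_power, P_in, P_out,
                    p0=(0.02, 100.0, 0.01), maxfev=10000)
print("M = %.4f W, a = %.1f, b = %.4f W" % tuple(popt))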
§.§ Secrecy Rate
Assuming perfect channel state information (CSI) is available at the
receiver for coherent detection, the achievable rate (bit/s/Hz) between the BS and the IR is given by
R = log_2(1+𝐰^H𝐇𝐰/(Tr(𝐇𝐕)+σ_s^2)),
where 𝐇=𝐡𝐡^H.
On the other hand, the capacity between the BS and ER j for decoding the signal of the IR can be expressed as
R_ER_j = log_2 det(𝐈_N_R+𝐐_j^{-1}𝐆_j^H𝐰𝐰^H𝐆_j), where
𝐐_j = 𝐆_j^H𝐕𝐆_j+σ_s^2𝐈_N_R≻0
denotes the interference-plus-noise covariance matrix for ER j. Hence, the achievable secrecy rate of the IR is given by <cit.>
R_sec = [R - max_{∀ j}{R_ER_j}]^+.
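For illustration, the achievable secrecy rate for a given beamformer 𝐰 and artificial-noise covariance 𝐕 can be evaluated numerically as in the following sketch (a naive numpy illustration with our own function names, not part of the proposed algorithm):

    import numpy as np

    def secrecy_rate(h, G_list, w, V, sigma2):
        # h: (Nt,) IR channel vector; G_list: list of (Nt, Nr) ER channel matrices.
        H = np.outer(h, h.conj())
        sinr = np.real(w.conj() @ H @ w) / (np.real(np.trace(H @ V)) + sigma2)
        R = np.log2(1.0 + sinr)
        R_ER = []
        for G in G_list:
            Q = G.conj().T @ V @ G + sigma2 * np.eye(G.shape[1])  # interference-plus-noise
            A = G.conj().T @ np.outer(w, w.conj()) @ G
            R_ER.append(np.log2(np.real(np.linalg.det(
                np.eye(G.shape[1]) + np.linalg.solve(Q, A)))))
        return max(R - max(R_ER), 0.0)                            # the [x]^+ operation

    rng = np.random.default_rng(0)
    h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    G = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
    w = h / np.linalg.norm(h)      # naive matched-filter beamformer for the test
    print(secrecy_rate(h, [G], w, 0.1 * np.eye(4), sigma2=1.0))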
§ OPTIMIZATION PROBLEM AND SOLUTION
In the considered SWIPT system, we aim to maximize the total harvested power in the system while providing secure communication to the IR. To this end, we formulate the resource allocation
algorithm design as the following non-convex optimization
problem assuming that perfect channel state information is available [In the sequel, since Ω_j does not affect the design of the optimal resource allocation policy, with a slight abuse of notation, we will directly use Ψ_ER_j to represent the harvested power at ER j for simplicity of presentation. ]:
Resource Allocation for Secure SWIPT:
maximize_{𝐕∈ℍ^N_T, 𝐰}  ∑_{j=1}^J Ψ_ER_j
subject to C1: ‖𝐰‖_2^2 + Tr(𝐕)≤ P_max,
C2: 𝐰^H𝐇𝐰/(Tr(𝐕𝐇)+σ_s^2)≥Γ_req,
C3: R_ER_j≤ R_ER^Tol, ∀ j,
C4: 𝐕≽0.
Constants P_max and Γ_req in constraints C1 and C2 denote the maximum transmit power budget and the minimum required signal-to-interference-plus-noise ratio (SINR) at the IR, respectively. Constant R_ER^Tol>0 in C3 is the maximum tolerable data rate which restricts the capacity of ER j if it attempts to decode the signal intended for the IR. In practice, the BS sets log_2(1+Γ_req)> R_ER^Tol>0, to ensure secure communication[ In the considered problem formulation, we can guarantee that the achievable secrecy rate is bounded below by R_sec≥log_2(1+Γ_req)-R_ER^Tol>0 if the problem is feasible.]. Constraint C4 and 𝐕∈ℍ^N_T constrain matrix 𝐕 to be a positive semidefinite Hermitian matrix. It can be observed that the objective function in (<ref>) is a non-convex function due to its sum-of-ratios form. Besides, the log-det function in C3 is non-convex. Now, we first transform the non-convex objective function into an equivalent objective function in subtractive form via the following theorem.
Suppose {𝐰^*,𝐕^*} is the optimal solution to (<ref>), then there exist two column vectors μ^*=[μ_1^*,…,μ_J^*]^T and β^*=[β_1^*,…,β_J^*]^T such that {𝐰^*,𝐕^*} is an optimal solution to the following optimization problem
maximize_{𝐕^*∈ℍ^N_T, 𝐰^*∈ F}  ∑_{j=1}^J μ_j^*[M_j- β_j^*(1+exp(-a_j(P_ER_j-b_j)))],
where F is the feasible solution set of (<ref>). Besides, {𝐰^*,𝐕^*} also satisfies the following system of equations:
β_j^*(1+exp(-a_j(P_ER_j^*-b_j)))-M_j = 0,
μ_j^*(1+exp(-a_j(P_ER_j^*-b_j)))-1 = 0,
and P_ER_j^*=Tr((𝐰^*(𝐰^*)^H +𝐕^*)𝐆_j𝐆_j^H).
Proof: Please refer to <cit.> for a proof of Theorem 1.
Therefore, for (<ref>),
we have an equivalent optimization problem in (<ref>) with an objective function in subtractive form with extra parameters (μ^*,β^*). More importantly, the two problems have the same optimal solution {𝐰^*,𝐕^*}.
Besides, the optimization problem in (<ref>) can be solved by an iterative algorithm consisting of two nested loops <cit.>. In the inner loop, the optimization problem in (<ref>) for given (μ,β), μ=[μ_1,…,μ_J]^T and β=[β_1,…,β_J]^T, is solved. Then, in the outer loop, we find the optimal (μ^*,β^*) satisfying the system of equations in (<ref>) and (<ref>), cf. Algorithm 1 in Table <ref>.
§.§ Solution of the Inner Loop Problem
In each iteration, in line 3 of Algorithm 1, we solve an inner loop optimization problem in its hypograph form:
Inner Loop Problem
maximize_{𝐖,𝐕∈ℍ^N_T, τ}  ∑_{j=1}^J μ_j^*[M_j- β_j^*(1+exp(-a_j(τ_j-b_j)))]
subject to C1: Tr(𝐖+𝐕)≤ P_max,
C2: Tr(𝐖𝐇)/Γ_req≥Tr(𝐕𝐇)+σ_s^2,
C3, C4,
C5: Tr((𝐖+𝐕)𝐆_j𝐆_j^H) ≥τ_j, ∀ j,
C6: rank(𝐖)=1, C7: 𝐖≽0,
where 𝐖=𝐰𝐰^H is a new optimization variable matrix and τ=[τ_1,τ_2,…,τ_J] is a vector of auxiliary optimization variables. The extra constraint C5 represents the hypograph of the inner loop optimization problem.
We note that the inner loop problem in (<ref>) is still a non-convex optimization problem. In particular, the non-convexity arises from the log-det function in C3 and the combinatorial rank constraint C6. To circumvent
the non-convexity, we first introduce the following proposition to handle constraint C3.
For R_ER^Tol> 0, ∀ j, and rank(𝐖)≤ 1, constraint C3 is equivalent to constraint C̃3, i.e.,
C3 ⇔ C̃3: 𝐆_j^H𝐖𝐆_j ≼ α_ER𝐐_j, ∀ j,
where α_ER=2^R_ER^Tol-1 is an auxiliary constant and C̃3
is a linear matrix inequality (LMI) constraint.
Proof: Please refer to Appendix A in <cit.> for the proof.
Now, we apply Proposition <ref> to Problem (<ref>) by replacing constraint C3 with constraint C̃3, which yields:
Equivalent Formulation of Problem (<ref>)
𝐖,𝐕∈ℍ^N_T,τ ∑_j=1^J μ_j^*[M_j- β_j^*(1+exp(-a_j(τ_j-b_j)))]
subject to C1, C2, C4, C5, C7,
C̃3: 𝐆_j^H𝐖𝐆_j≼α_ER𝐐_j, ∀ j,
C6: rank(𝐖)=1.
The non-convexity of (<ref>) is now only due to the rank constraint in C6. We adopt semidefinite programming (SDP) relaxation to obtain a tractable solution. Specifically, we remove the non-convex constraint C6 from (<ref>) which yields:
SDP Relaxation of Problem (<ref>)
maximize_{𝐖,𝐕∈ℍ^N_T, τ}  ∑_{j=1}^J μ_j^*[M_j- β_j^*(1+exp(-a_j(τ_j-b_j)))]
subject to C1, C2, C4, C5, C7,
C̃3: 𝐆_j^H𝐖𝐆_j≼α_ER𝐐_j, ∀ j.
In fact, (<ref>) is a standard convex optimization problem which can be solved by numerical convex program solvers such as Sedumi or SDPT3 <cit.>. Now, we study the tightness of the adopted SDP relaxation in (<ref>).
For Γ_req>0 and if the considered problem is feasible, we can construct a rank-one solution of (<ref>) based on the solution of (<ref>).
Proof: Please refer to the Appendix.
Therefore, the non-convex optimization problem in (<ref>) can be solved optimally.
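A possible modelling-language sketch of the relaxed inner-loop problem is given below. It is only an illustration of the formulation under stated assumptions: it uses standard CVXPY syntax, requires a solver supporting both exponential and semidefinite cones (e.g. SCS), and all function and variable names are our own:

    import cvxpy as cp
    import numpy as np

    def solve_inner(H, G_list, mu, beta, M, a, b, Pmax, Gamma_req, alpha_ER, sigma2):
        # mu, beta: current outer-loop parameters (nonnegative vectors of length J).
        Nt, J = H.shape[0], len(G_list)
        W = cp.Variable((Nt, Nt), hermitian=True)
        V = cp.Variable((Nt, Nt), hermitian=True)
        tau = cp.Variable(J)
        cons = [cp.real(cp.trace(W + V)) <= Pmax,                        # C1
                cp.real(cp.trace(W @ H)) / Gamma_req
                    >= cp.real(cp.trace(V @ H)) + sigma2,                # C2
                W >> 0, V >> 0]                                          # C7, C4
        for j, G in enumerate(G_list):
            Gh = G.conj().T
            Q = Gh @ V @ G + sigma2 * np.eye(G.shape[1])
            cons += [alpha_ER * Q - Gh @ W @ G >> 0,                     # LMI C3-tilde
                     cp.real(cp.trace((W + V) @ (G @ Gh))) >= tau[j]]    # C5
        # Concave objective since mu, beta >= 0 and exp(affine) is convex.
        obj = cp.Maximize(cp.sum(cp.multiply(
            mu, M - cp.multiply(beta, 1 + cp.exp(-cp.multiply(a, tau - b))))))
        cp.Problem(obj, cons).solve(solver=cp.SCS)
        return W.value, V.value, tau.value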
§.§ Solution of the Outer Loop Problem
In this section, an iterative algorithm based on the damped Newton method is adopted to update (μ,β) for the outer loop problem. For notational simplicity, we define functions φ_j(β_j)=β_j(1+exp(-a_j(P_ER_j-b_j)))-M_j
and φ_J+i(μ_i)=μ_i(1+exp(-a_i(P_ER_i-b_i)))-1, i∈{1,…,J}. It is shown in <cit.> that the unique optimal solution (μ^*,β^*) is obtained if and only if φ(μ, β)=[φ_1,φ_2,…,φ_2J]^T=. Therefore, in the n-th iteration of the iterative algorithm, μ^n+1 and β^n+1 can be updated as, respectively,
μ^{n+1} = μ^n+ζ^n𝐪^n_{J+1:2J}, β^{n+1}=β^n+ζ^n𝐪^n_{1:J},
where
𝐪^n = -[φ'(μ,β)]^{-1}φ(μ,β)
and φ'(μ,β) is the Jacobian matrix of φ(μ,β). ζ^n is the largest ε^l satisfying
‖φ(μ^n+ε^l𝐪^n_{J+1:2J}, β^n+ε^l𝐪^n_{1:J})‖ ≤ (1-ηε^l)‖φ(μ^n,β^n)‖,
where l∈{1,2,…}, ε^l∈(0,1), and η∈(0,1). The damped Newton method converges to the unique solution (μ^*,β^*) satisfying the system of equations (<ref>) and (<ref>), cf. <cit.>.
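A minimal sketch of one damped-Newton update is shown below. Since P_ER_j is held fixed between inner-loop solves, the Jacobian of φ is diagonal here, and we write the Newton direction with an explicit minus sign so that the residual norm decreases along the step (all names are our own):

    import numpy as np

    def phi_vec(mu, beta, P_ER, a, b, M):
        # Stacked optimality residuals: zero exactly at (mu*, beta*).
        e = 1.0 + np.exp(-a * (P_ER - b))
        return np.concatenate([beta * e - M, mu * e - 1.0])

    def damped_newton_step(mu, beta, P_ER, a, b, M, eps=0.5, eta=0.1, max_ls=30):
        J = len(mu)
        e = 1.0 + np.exp(-a * (P_ER - b))
        f = phi_vec(mu, beta, P_ER, a, b, M)
        q = -f / np.concatenate([e, e])       # diagonal Jacobian inverse times phi
        f0 = np.linalg.norm(f)
        for l in range(1, max_ls + 1):        # backtracking line search on the step
            step = eps ** l
            mu_new, beta_new = mu + step * q[J:], beta + step * q[:J]
            if np.linalg.norm(phi_vec(mu_new, beta_new, P_ER, a, b, M)) <= (1 - eta * step) * f0:
                return mu_new, beta_new
        return mu, beta                       # no acceptable step found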
§ RESULTS
In this section, simulation results are presented to illustrate the performance of the proposed
resource allocation algorithm. We summarize the most important simulation parameters in Table <ref>.
In the simulation, the IR and the J=10 ERs are located at 50 meters and 10 meters from the BS, respectively. The maximum tolerable data rate at the potential eavesdropper is set to R_ER^Tol=1 bit/s/Hz. For the non-linear EH circuit, we set M_j=20 mW which corresponds to the maximum harvested power per ER. Besides, we adopt a_j=6400 and b_j=0.003.
In Figure <ref>, we study the average total harvested power versus the minimum required receive SINR, Γ_req, at the IR for different numbers of transmit antennas and resource allocation schemes. As can be observed, the average total harvested power decreases with increasing Γ_req. Indeed, to satisfy a more stringent SINR requirement, the information beam has to be steered towards the IR, which leaves a smaller amount of RF energy for EH at the ERs.
For comparison, we also show the performance of a baseline scheme. For the baseline scheme, the resource allocation algorithm is designed based on an existing linear EH model, cf. (<ref>). Specifically, we optimize 𝐰,𝐕 to maximize the total harvested power subject to the constraints in (<ref>). Then, this baseline scheme is applied for resource allocation in the considered system with non-linear ERs. We observe from Figure <ref> that
a substantial performance gain is achieved by the proposed optimal resource allocation algorithm compared to the baseline scheme. This is due to the fact that resource allocation mismatch occurs in the baseline scheme as it does not account for the non-linear nature of the EH circuits.
Figure <ref> illustrates the average system secrecy rate versus the minimum required SINR Γ_req of the IR for different numbers of transmit antennas, N_T. The average system secrecy rate, R_sec, increases with increasing Γ_req. This is because the maximum achievable rate of the ERs for decoding the IR signal is limited by the resource allocation to be less than R_ER^Tol=1 bit/s/Hz. Moreover, although the minimum SINR requirement increases in Figure <ref>, the proposed optimal scheme is able to fulfill all QoS requirements thanks to the proposed optimization framework.
§ CONCLUSIONS
In this paper, a resource allocation algorithm enabling secure SWIPT in IoT communication networks was presented. The algorithm design based on a practical non-linear EH model was formulated as a non-convex optimization problem for the maximization of the total energy transferred to the ERs. We transformed the resulting non-convex optimization
problem into two nested optimization problems which led to an efficient iterative approach
for obtaining the globally optimal solution. Numerical
results unveiled the potential performance gain in EH brought by the
proposed optimization and its robustness against
eavesdropping for IoT applications.
§ APPENDIX-PROOF OF THEOREM <REF>
To start with, we first define τ^* as the optimal objective value of (<ref>).
When rank(𝐖)>1 holds after solving (<ref>), an optimal rank-one solution for (<ref>) can be constructed as follows <cit.>. For a given τ^*, we solve an auxiliary optimization problem:
minimize_{𝐖,𝐕∈ℍ^N_T}  Tr(𝐖)
subject to C1, C2, C̃3, C4, C5, C7.
It can be observed that the optimal solution of (<ref>) is also an optimal resource allocation policy for (<ref>) when τ^* is fixed in (<ref>). Therefore, in the remaining part of the proof, we show that solving (<ref>) returns a rank-one beamforming matrix 𝐖. To this end, we study the Lagrangian of problem (<ref>), which is given by:
L = Tr(𝐖)+λ(Tr(𝐖+𝐕)- P_max) - Tr(𝐖𝐑)
- ∑_{j=1}^Jρ_j (τ_j^*-Tr((𝐖+𝐕)𝐆_j𝐆_j^H)) - Tr(𝐕𝐙)
+ ∑_{j=1}^J Tr(𝐃_{C̃3_j}(𝐆_j^H𝐖𝐆_j-α_ER𝐐_j))
+ α(Tr(𝐕𝐇)+σ_s^2-Tr(𝐖𝐇)/Γ_req)+Δ,
where λ≥ 0, α≥ 0, 𝐃_{C̃3_j}≽0, ∀ j∈{1,…,J}, 𝐙≽0, ρ_j≥ 0, and 𝐑≽0 are the dual variables for constraints C1, C2, C̃3, C4, C5, and C7, respectively. Δ is a collection of variables and constants that are not relevant to the proof.
Then, we exploit the following Karush-Kuhn-Tucker (KKT) conditions which are needed for the proof[In this proof, the optimal
primal and dual variables of (<ref>) are denoted by the corresponding
variables with an asterisk superscript.]:
𝐑^*, 𝐙^*, 𝐃_{C̃3_j}^*≽0, λ^*, α^*, ρ^*_j≥0,
𝐑^*𝐖^*=0, 𝐙^*𝐕^*=0,
𝐑^* = (λ^*+1)𝐈_N_T-∑_{j=1}^Jρ_j𝐆_j𝐆_j^H +∑_{j=1}^J 𝐆_j𝐃_{C̃3_j}^* 𝐆_j^H - α^*𝐇/Γ_req,
𝐙^* = λ^*𝐈_N_T-∑_{j=1}^Jρ_j𝐆_j𝐆_j^H -∑_{j=1}^J α_ER𝐆_j𝐃_{C̃3_j}^* 𝐆_j^H+α^* 𝐇.
Then, subtracting (<ref>) from (<ref>) yields
𝐑^* = 𝐀 - α^*𝐇(1+1/Γ_req), where 𝐀 = 𝐈_N_T+𝐙^*+ ∑_{j=1}^J(1+α_ER) 𝐆_j𝐃_{C̃3_j}^* 𝐆_j^H ≻0.
Besides, constraint C2 in (<ref>) is satisfied with equality for the optimal solution, and we have α^*>0. From (<ref>), we have
rank(𝐑^*)+rank(α^*𝐇(1+1/Γ_req))
≥ rank(𝐑^*+α^*𝐇(1+1/Γ_req))
= rank(𝐀)=N_T
⇒ rank(𝐑^*)≥ N_T-1.
As a result, rank(𝐑^*) is either N_T-1 or N_T. Furthermore, since Γ_req>0 in C2, 𝐖^*≠0 is necessary. Hence, rank(𝐑^*)=N_T-1 and rank(𝐖^*)=1 hold and the rank-one solution for (<ref>) is constructed.
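In practice, once the relaxed SDP returns a (numerically) rank-one 𝐖^*, the beamforming vector can be recovered from the principal eigenpair, e.g. as in the following short sketch (our own illustration):

    import numpy as np

    def extract_beamformer(W, tol=1e-6):
        # Recover w from W = w w^H; eigh returns eigenvalues in ascending order.
        vals, vecs = np.linalg.eigh(W)
        if vals[-2] > tol * vals[-1]:
            raise ValueError("W is not numerically rank-one")
        return np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]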
JR:IOT
M. Zorzi, A. Gluhak, S. Lange, and A. Bassi, “From today's INTRAnet of things
to a Future INTERnet of Things: a Wireless- and Mobility-Related View,”
IEEE Wireless Commun., vol. 17, pp. 44–51, 2010.
JR:Kwan_hybrid_BS
D. W. K. Ng, E. S. Lo, and R. Schober, “Energy-Efficient Resource Allocation
in OFDMA Systems with Hybrid Energy Harvesting Base Station,” IEEE
Trans. Wireless Commun., vol. 12, pp. 3412–3427, Jul. 2013.
CN:Shannon_meets_tesla
P. Grover and A. Sahai, “Shannon Meets Tesla: Wireless Information and Power
Transfer,” in Proc. IEEE Intern. Sympos. on Inf. Theory, Jun. 2010,
pp. 2363 –2367.
Krikidis2014
I. Krikidis, S. Timotheou, S. Nikolaou, G. Zheng, D. W. K. Ng, and R. Schober,
“Simultaneous Wireless Information and Power Transfer in Modern
Communication Systems,” IEEE Commun. Mag., vol. 52, no. 11, pp.
104–110, Nov. 2014.
Ding2014
Z. Ding, C. Zhong, D. W. K. Ng, M. Peng, H. A. Suraweera, R. Schober, and H. V.
Poor, “Application of Smart Antenna Technologies in Simultaneous Wireless
Information and Power Transfer,” IEEE Commun. Mag., vol. 53, no. 4,
pp. 86–93, Apr. 2015.
JR:SWIPT_mag
X. Chen, Z. Zhang, H.-H. Chen, and H. Zhang, “Enhancing Wireless Information
and Power Transfer by Exploiting Multi-Antenna Techniques,” IEEE
Commun. Mag., no. 4, pp. 133–141, Apr. 2015.
JR:SWIPT_mag_Ming_Kwan
X. Chen, D. W. K. Ng, and H.-H. Chen, “Secrecy Wireless Information and Power
Transfer: Challenges and Opportunities,” IEEE Commun. Mag., 2016.
JR:QQ_WPC
Q. Wu, M. Tao, D. Ng, W. Chen, and R. Schober, “Energy-Efficient Resource
Allocation for Wireless Powered Communication Networks,” IEEE Trans.
Wireless Commun., vol. 15, pp. 2312–2327, Mar. 2016.
COML:EE_WIPT
X. Chen, X. Wang, and X. Chen, “Energy-Efficient Optimization for Wireless
Information and Power Transfer in Large-Scale MIMO Systems Employing Energy
Beamforming,” IEEE Wireless Commun. Lett., vol. 2, pp. 1–4, Dec.
2013.
JR:MIMO_WIPT
R. Zhang and C. K. Ho, “MIMO Broadcasting for Simultaneous Wireless
Information and Power Transfer,” IEEE Trans. Wireless Commun.,
vol. 12, pp. 1989–2001, May 2013.
JR:QQ_WPCN
Q. Wu, W. Chen, and J. Li, “Wireless Powered Communications With Initial
Energy: QoS Guaranteed Energy-Efficient Resource Allocation,” IEEE
Wireless Commun. Lett., vol. 19, Dec. 2015.
Report:Wire_tap
A. D. Wyner, “The Wire-Tap Channel,” Tech. Rep., Oct. 1975.
JR:Massive_MIMO
J. Zhu, R. Schober, and V. Bhargava, “Secure Transmission in Multicell
Massive MIMO Systems,” IEEE Trans. Wireless Commun., vol. 13, pp.
4766–4781, Sep. 2014.
JR:Artifical_Noise1
S. Goel and R. Negi, “Guaranteeing Secrecy using Artificial Noise,”
IEEE Trans. Wireless Commun., vol. 7, pp. 2180 – 2189, Jun. 2008.
JR:HM_security_1
H. M. Wang, C. Wang, D. Ng, M. Lee, and J. Xiao, “Artificial Noise Assisted
Secure Transmission for Distributed Antenna Systems,” IEEE Trans.
Signal Process., vol. PP, no. 99, pp. 1–1, 2016.
JR:HM_security_2
J. Chen, X. Chen, W. H. Gerstacker, and D. W. K. Ng, “Resource Allocation for
a Massive MIMO Relay Aided Secure Communication,” IEEE Trans. on
Inf. Forensics and Security, vol. 11, no. 8, pp. 1700–1711, Aug 2016.
JR:Kwan_secure_imperfect
D. W. K. Ng, E. S. Lo, and R. Schober, “Robust Beamforming for Secure
Communication in Systems with Wireless Information and Power Transfer,”
IEEE Trans. Wireless Commun., vol. 13, pp. 4599–4615, Aug. 2014.
JR:MOOP_SWIPT
——, “Multiobjective Resource Allocation for Secure Communication in
Cognitive Radio Networks With Wireless Information and Power Transfer,”
IEEE Trans. Veh. Technol., vol. 65, no. 5, pp. 3166–3184, May 2016.
CN:EH_measurement_2
J. Guo and X. Zhu, “An Improved Analytical Model for RF-DC Conversion
Efficiency in Microwave Rectifiers,” in IEEE MTT-S Int. Microw. Symp.
Dig., Jun. 2012, pp. 1–3.
JR:Energy_harvesting_circuit
C. Valenta and G. Durgin, “Harvesting Wireless Power: Survey of
Energy-Harvester Conversion Efficiency in Far-Field, Wireless Power Transfer
Systems,” IEEE Microw. Mag., vol. 15, pp. 108–120, Jun. 2014.
JR:EH_measurement_1
T. Le, K. Mayaram, and T. Fiez, “Efficient Far-Field Radio Frequency Energy
Harvesting for Passively Powered Sensor Networks,” IEEE J.
Solid-State Circuits, vol. 43, pp. 1287–1302, May 2008.
JR:non_linear_model
E. Boshkovska, D. Ng, N. Zlatanov, and R. Schober, “Practical Non-Linear
Energy Harvesting Model and Resource Allocation for SWIPT Systems,”
IEEE Commun. Lett., vol. 19, pp. 2082–2085, Dec. 2015.
JR:Elena_TCOM
E. Boshkovska, D. W. K. Ng, N. Zlatanov, A. Koelpin, and R. Schober, “Robust
Resource Allocation for MIMO Wireless Powered Communication Networks Based
on a Non-linear EH Model,” 2016, submitted to TCOM. [Online]. Available:
<http://arxiv.org/abs/1609.03836>
JR:sum_of_ratios
Y. Jong, “An Efficient Global Optimization Algorithm for Nonlinear
Sum-of-Ratios Problem,” May 2012. [Online]. Available:
<http://www.optimization-online.org/DB FILE/2012/08/3586.pdf>
JR:Kwan_CR_Layered
D. W. K. Ng, M. Shaqfeh, R. Schober, and H. Alnuweiri, “Robust Layered
Transmission in Secure MISO Multiuser Unicast Cognitive Radio Systems,”
IEEE Trans. Veh. Technol., vol. 65, no. 10, pp. 8267–8282, Oct. 2016.
website:CVX
M. Grant and S. Boyd, “CVX: Matlab Software for Disciplined Convex
Programming, version 2.0 Beta,” [Online] <https://cvxr.com/cvx>, Sep.
2013.
7463025
Y. Sun, D. W. K. Ng, J. Zhu, and R. Schober, “Multi-Objective Optimization
for Robust Power Efficient and Secure Full-Duplex Wireless Communication
Systems,” IEEE Trans. Wireless Commun., vol. 15, no. 8, pp.
5511–5526, Aug. 2016.
|
http://arxiv.org/abs/1701.07487v1 | 20170125211232 | A Linear, Decoupled and Energy stable scheme for smectic-A Liquid Crystal Flows | [
"Xiaofeng Yang",
"Alex Brylev"
] | math.NA | [
"math.NA"
] |
|
http://arxiv.org/abs/1701.07462v1 | 20170125195208 | Quartic time-dependent oscillatons | [
"A. Mahmoodzadeh",
"B. Malekolkalami"
] | gr-qc | [
"gr-qc",
"astro-ph.GA"
] |
Quartic time-dependent oscillatons
A. Mahmoodzadeh (a.mahmoodzadeh@iau-boukan.ac.ir), B. Malekolkalami (B.Malakolkalami@uok.ac.ir)
==============================================================================================
Faculty of Science, University of Kurdistan, Sanandaj, P.O.Box
416, Iran
In this paper we study some properties of oscillatons, spherically
symmetric objects made of a real time-dependent scalar field, using
a self-interacting quartic scalar potential instead of the quadratic
or exponential ones discussed in previous works. Since oscillatons
can be regarded as models for astrophysical objects which play the
role of dark matter, the investigation of their properties occupies
an important place in current physics research. We therefore
investigate the properties of these objects by solving the system of
differential equations obtained from the Einstein-Klein-Gordon
(EKG) equations, and we show their relevance as new candidates for
the role of dark matter on galactic scales.
I. INTRODUCTION
The first evidence for dark matter appeared in the 1930s, when the astronomer
Fritz Zwicky noticed that the motion of galaxies bound together by
gravity was not consistent with the laws of gravity. Zwicky argued
that there should be more matter than what is visible, and he named
this unknown kind of invisible matter dark matter. Since that time,
numerous observations have confirmed the existence of dark matter. For
instance, galaxy rotation curves, galaxy cluster composition, bulk motions in
the Universe, gravitational lensing, the formation of large scale
structure (LSS) and redshift measurements are examples that prove there
should be more than just visible matter in the Universe, in the form
of invisible, non-baryonic matter. Unfortunately, the problem of dark
matter remains one of the biggest unsolved challenges in astrophysics
and particle physics. The number of proposals presented for solving it
has therefore increased steadily in recent years. Candidates beyond the
standard model of particle physics, including WIMPs, super-WIMPs,
light gravitinos, hidden dark matter, sterile neutrinos, axions, and
other models based on warm dark matter, particles with
self-interactions, and complex scalar fields for bosonic dark matter,
have been under scrutiny, although these efforts have not yet yielded
a unified and definitive result [1-4]. Nowadays an alternative which
has received much attention is the real scalar field: the study of
oscillatons made of a real time-dependent scalar field has acquired
general importance for the dark matter hypothesis on galactic
scales [5-6].
II. MATHEMATICAL BACKGROUND
In this section, we study the case of a self-interacting quartic scalar
potential with spherical symmetry, similar to what was analyzed in [7].
The most general spherically-symmetric metric is written as
ds^2 = g_αβ dx^α dx^β = -e^{ν-μ}dt^2 + e^{ν+μ}dr^2 + r^2(dθ^2 + sin^2θ dφ^2), (1)
where ν=ν(t,r) and μ=μ(t,r) are functions of time and the
spherical radial coordinate (we have used natural units in which c=1).
The energy-momentum tensor for a real scalar field Φ(t,r) with a scalar potential
V(Φ) is defined as [7, 8, 9]
T_αβ = Φ_{,α}Φ_{,β} - (1/2)g_αβ[Φ^{,γ}Φ_{,γ} + 2V(Φ)]. (2)
The non-vanishing components of T_αβ are
-T^0_0 = ρ_Φ = (1/2)[e^{-(ν-μ)}Φ̇^2 + e^{-(ν+μ)}Φ'^2 + 2V(Φ)], (3)
T_01 = p_Φ = Φ̇Φ', (4)
T^1_1 = p_r = (1/2)[e^{-(ν-μ)}Φ̇^2 + e^{-(ν+μ)}Φ'^2 - 2V(Φ)], (5)
T^2_2 = p_⊥ = (1/2)[e^{-(ν-μ)}Φ̇^2 - e^{-(ν+μ)}Φ'^2 - 2V(Φ)], (6)
and we also have T^3_3 = T^2_2. Overdots denote ∂/∂t
and primes denote ∂/∂r. The components above are identified as
the energy density, ρ_Φ, the momentum density, p_Φ, the radial
pressure, p_r, and the angular pressure, p_⊥, respectively. The Einstein equations,
G_αβ = R_αβ - (1/2)g_αβR = k_0 T_αβ,
are used to obtain differential equations for the functions ν and μ:
ν̇ + μ̇ = k_0 r Φ̇Φ', (7)
ν' = (k_0 r/2)(e^{2μ}Φ̇^2 + Φ'^2), (8)
μ' = (1/r)[1 + e^{ν+μ}(k_0 r^2 V(Φ) - 1)], (9)
where R_αβ and R are the Ricci tensor and Ricci scalar,
respectively, and k_0 = 8πG = 8π/m_pl^2. The universal
gravitational constant, G, is the inverse of the reduced Planck
mass squared, m_pl^2. The conservation equation for the scalar
field energy-momentum tensor (2) requires
T^{αβ}_{;β} = [□Φ - dV(Φ)/dΦ]Φ^{,α} = 0, (10)
where □ = ∇_α∇^α is the d'Alembertian operator. From this we obtain the Klein-Gordon
(KG) equation for the scalar field Φ(t,r),
Φ'' + Φ'(2/r - μ') - e^{ν+μ} dV(Φ)/dΦ = e^{2μ}(Φ̈ + μ̇Φ̇). (11)
As we can see, this differential equation depends entirely on the form of the scalar
potential, and it may be regarded as representative of all oscillaton
models, for any choice of Φ(t,r) and V(Φ) [8].
III. QUARTIC POTENTIALS
The hypothesis of scalar dark matter in the universe with a minimally
coupled scalar field and a scalar potential of quadratic,
exponential or cosh form has been discussed before [7, 9, 10]. In
this study, however, we are interested in investigating the self-interaction
of an oscillaton described by a quartic scalar potential,
as a dark matter candidate on cosmological scales. This
scalar field potential can be written as
V(Φ) = (1/4)λΦ^4, (12)
where λ is the quartic interaction parameter, which is fixed
by constraints imposed in the formulation of the problem. If we choose
Φ(t,r) = σ(r)ϕ(t), then equation (11) reads
ϕ{σ'' + σ'(2/r - μ')} - λ e^{ν+μ}σ^3ϕ^3 = e^{2μ}σ(ϕ̈ + μ̇ϕ̇). (13)
Taking into account the Fourier-Bessel expansions
e^{±f(x)} = I_0(f(x)) + 2∑_{n=1}^∞ (±1)^n I_n(f(x)), (14.a)
e^{±f(x)cos(2θ)} = I_0(f(x)) + 2∑_{n=1}^∞ (±1)^n I_n(f(x)) cos(2nθ), (14.b)
where I_n(z) are the modified Bessel functions of the first kind.
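These expansions are easy to verify numerically; the following short sketch (assuming scipy is available) compares truncated series against the exact exponentials:

    import numpy as np
    from scipy.special import iv   # modified Bessel function of the first kind

    def exp_cos_series(z, theta, n_max=20, sign=+1):
        # Truncated right-hand side of (14.a)/(14.b); sign=-1 gives the (-1)^n series.
        n = np.arange(1, n_max + 1)
        return iv(0, z) + 2.0 * np.sum((sign ** n) * iv(n, z) * np.cos(2 * n * theta))

    z, theta = 0.7, 0.3
    print(np.exp(+z * np.cos(2 * theta)), exp_cos_series(z, theta, sign=+1))
    print(np.exp(-z * np.cos(2 * theta)), exp_cos_series(z, theta, sign=-1))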
Using these expansions, we can rewrite Eq. (13) as
(1/σ){σ'' + σ'(2/r - μ')} - λ e^{ν+μ}σ^2ϕ^2 = (e^{2μ}/ϕ)(ϕ̈ + μ̇ϕ̇). (15)
This equation is not separable, due to the second term on the left-hand
side. The right-hand side suggests that the scalar field oscillates
harmonically in time, with a damping term related to μ̇.
Following [8, 9], we simply take
√(k_0) Φ(t,r) = 2σ(r)cos(ωt), (16)
where ω is the fundamental frequency of the oscillaton.
A straightforward integration of Eq. (7) then gives
ν + μ = (ν+μ)_0 + rσσ' cos(2ωt), (17)
with (ν+μ)_0 an arbitrary function of the r-coordinate
only. The metric functions can then be expanded as
ν(t,r) = ν_0(r) + ν_1(r)cos(2ωt), (18.a)
μ(t,r) = μ_0(r) + μ_1(r)cos(2ωt), (18.b)
and comparison of these two equations with Eq. (17) reveals that
ν_1 + μ_1 = rσσ'. (18.c)
The metric coefficients can then be expanded using Eq. (14.b) as
e^{ν+μ} = e^{ν_0+μ_0}[I_0(ν_1+μ_1) + 2∑_{n=1}^∞ I_n(ν_1+μ_1)cos(2nωt)]
= e^{ν_0+μ_0}[I_0(rσσ') + 2∑_{n=1}^∞ I_n(rσσ')cos(2nωt)], (19.a)
e^{ν-μ} = e^{ν_0-μ_0}[I_0(ν_1-μ_1) + 2∑_{n=1}^∞ I_n(ν_1-μ_1)cos(2nωt)]. (19.b)
These equations show that the metric coefficients oscillate in time with
even multiples of ω, while the scalar field oscillates with odd multiples
of ω.
A. Differential equations
Similar to what has been done for boson stars in [6, 7, 11], we perform,
for numerical purposes, the variable changes
x = m_Φ r,  Ω = ω/m_Φ,  e^{ν_0} → e^{ν_0},  e^{μ_0} → Ω^{-1}e^{μ_0}, (20)
where now the metric coefficients are given by g_tt = -Ω^{-2}e^{ν-μ}
and g_rr = e^{ν+μ}. It is seen that the mass of the scalar field,
m_Φ, plays a basic role in the rescaling of time and distance.
The differential equations for the metric functions are then obtained
from Eqs. (8)-(11) by using Eqs. (19)-(20) and the scalar field
(16), and setting each Fourier component to zero:
ν_0' = x[e^{2μ_0}σ^2(I_0(2μ_1) - I_1(2μ_1)) + σ'^2], (21)
ν_1' = x[e^{2μ_0}σ^2(2I_1(2μ_1) - I_0(2μ_1) - I_2(2μ_1)) + σ'^2], (22)
μ_0' = (1/x){1 + e^{ν_0+μ_0}[(1/2)x^2σ^4(3I_0(xσσ') + 4I_1(xσσ') + I_2(xσσ')) - I_0(xσσ')]}, (23)
μ_1' = (1/x)e^{ν_0+μ_0}[x^2σ^4(2I_0(xσσ') + 3.5I_1(xσσ') + 2I_2(xσσ')) - 2I_1(xσσ')], (24)
σ'' = -σ'(2/x - μ_0' - (1/2)μ_1') + σ^3 e^{ν_0+μ_0}[3I_0(xσσ') + 4I_1(xσσ') + I_2(xσσ')] - e^{2μ_0}σ[I_0(2μ_1)(1-μ_1) + I_1(2μ_1) + μ_1 I_2(2μ_1)], (25)
where now the primes denote d/dx. These equations follow from
the rescaling in Eq. (20), which induces the following changes in the
metric functions ν, μ and the radial part of the scalar field σ:
ν(t,r) ≡ ν(t,x) → ν'(t,r) = m_Φ ν'(t,x), (26.a)
μ(t,r) ≡ μ(t,x) → μ'(t,r) = m_Φ μ'(t,x), (26.b)
σ(r) ≡ σ(x) → σ'(r) = m_Φ σ'(x) → σ''(r) = m_Φ^2 σ''(x). (26.c)
The constraints imposed on Eqs. (21)-(25) then require the condition
λ = m_Φ^2 k_0. (26.d)
It is necessary to state that, in making the expansions (21)-(25), the
neglected terms on the right-hand side were those containing cos(4ωt),
cos(6ωt) and so on, while the terms neglected in the Klein-Gordon
equation were those with cos(3ωt), cos(5ωt), and so on. This
suggests that the metric coefficients should be expanded in even
Fourier terms, while the scalar field expansion involves only odd
Fourier terms; the expansions used in [12] are therefore well
justified. By solving equations (21)-(25) numerically, the solutions
are completely determined: the metric functions and metric coefficients
are obtained, as well as the oscillaton mass and frequency. Before doing
any calculation with these equations, recall that Eq. (18.c)
is an exact algebraic relation. This means that we can solve
a system of four ordinary differential equations instead of a system
of five.
B. Initial Conditions
Non-singular solutions for the scalar field at x=0 require σ'(0)=0
and ν(t,0)+μ(t,0)=0, so ν_0(0)=-μ_0(0) and μ_1(0)=-ν_1(0);
the latter condition also follows directly from Eq. (18.c). If the
scalar field vanishes as x→∞, then Eq. (16) implies
that σ(∞)=0. Asymptotic flatness, i.e. compliance with
the Minkowski limit at infinity, requires μ_1(∞)=0
and ν_1(∞)=0, but μ_0(∞)=-ν_0(∞)≠0
because of the change of variables in (20); exp(ν-μ)(∞)=Ω^{-2}
then gives the value of the fundamental frequency ω, while still
exp(ν+μ)(∞)=1 [6, 7, 11]. The first step is to
choose a value for σ(0), called the central
value: for each value of σ(0) we have only two degrees
of freedom, so we adjust the central values μ_0(0) and
μ_1(0), from which ν_0(0) and ν_1(0) follow.
These values are sufficient to obtain the different n-node
solutions. On the other hand, as can be seen from Eq. (24), the radial
derivative of μ_1(x) is always positive; hence the asymptotically
flat condition is reached provided μ_1(0)<0. Moreover, since we have
neglected the higher terms of the expansions (19.a) and (19.b),
the condition |μ_1|<1 is needed for the solutions of Eqs. (21)-(25)
to converge.
C. Numerical results
If we expand the metric as
g = g_0(x) + g_2(x)cos(2ωt) + g_4(x)cos(4ωt) + ⋯ = ∑_{n=0}^∞ g_{2n}(x)cos(2nωt), (27)
and compare it with Eqs. (19.a)-(19.b), the typical metric coefficients
for the 0-node solution are obtained. The radial and time metric coefficients
for a central value σ(x=0)=0.4, with the boundary conditions above,
are shown in Fig. 1.
Figures 2 and 3 show the solutions of the differential equations
(21)-(25). As can be seen from Fig. 3, the radial part of the scalar
field, σ, decreases in a damped manner and becomes negative for some
values of x. This means that negative scalar fields play an effective
role in the existence of oscillatons.
Similar to what was done for the metric coefficients, we can rewrite
the energy density, Eq. (3), the radial pressure, Eq. (5), and the
angular pressure, Eq. (6), for the oscillaton as
ρ_Φ(t,x) = (m_pl^2 m_Φ^2/8π){σ^2 e^{-(ν-μ)}[1-cos(2ωt)] + σ'^2 e^{-(ν+μ)}[1+cos(2ωt)] + (σ^4/2)[cos(4ωt)+4cos(2ωt)+3]}, (28)
p_r(t,x) = (m_pl^2 m_Φ^2/8π){σ^2 e^{-(ν-μ)}[1-cos(2ωt)] + σ'^2 e^{-(ν+μ)}[1+cos(2ωt)] - (σ^4/2)[cos(4ωt)+4cos(2ωt)+3]}, (29)
p_⊥(t,x) = (m_pl^2 m_Φ^2/8π){σ^2 e^{-(ν-μ)}[1-cos(2ωt)] - σ'^2 e^{-(ν+μ)}[1+cos(2ωt)] - (σ^4/2)[cos(4ωt)+4cos(2ωt)+3]}. (30)
Equations (28)-(30) show that the different nodes of the energy density
and of the radial and angular pressure components are easily obtained
from these expansions. The values of ρ_Φ for times ωt=0,
π/2, and the static component ρ_Φ0, using the Fourier expansion to
second order, are shown in Fig. 4.
The computed radial and angular pressure components for times
ωt=0, π/2 and zero nodes are shown in Fig. 5. As can be seen
there, both the radial and the angular components take negative
effective values for some values of x; for negative pressure, work
is done on the oscillaton when it expands. Using Eqs. (4) and (16),
we can also evaluate the momentum density of the oscillaton; the
values of p_Φ for times ωt=0, π/2 are likewise shown in Fig. 5.
Since the metric coefficients are asymptotically flat, static and
comply with their Minkowski counterparts as x→∞,
exp(ν+μ) can be identified with 1/(1-2GM_Φ/r). The mass seen by an observer
at infinity may then be calculated as [7-9]
M_Φ = (m_pl^2/m_Φ) lim_{x→∞} (x/2)(1-e^{-ν-μ}). (31)
This is the mass associated with the scalar field, which can be employed
as a candidate for the role of dark matter. Equation (31) shows
that the calculated oscillaton mass is constant, i.e. the mass
observed at infinity is the same for all times. This
is natural, because the oscillaton should be in line with the Schwarzschild
solution of the same mass according to Birkhoff's theorem [13].
Something unusual appears here: for σ(x=0)<0.235
the mass obtained from Eq. (31) is negative. There is a
negative maximum mass M_max = -0.23377 m_Pl^2/m_Φ at σ_c(x=0)=0.175,
while the main component of the radial pressure, p_r(0,x), is negative
at least for these values, σ(x=0)<0.235. The
closest known real representative of such exotic matter is a region
of pseudo-negative pressure density produced by the Casimir
effect [14-17]. The negative mass can therefore be accommodated
by this model; this advantage distinguishes the quartic potential
from the quadratic and exponential scalar potentials. For values
σ(x=0)>0.235 the mass is positive and increases rapidly.
As mentioned among the boundary conditions, the fundamental frequency
is obtained from the asymptotic value exp(ν-μ)(∞)=Ω^{-2}.
Since μ_0(∞)=-ν_0(∞)≠0 by the boundary conditions,
and taking into account the rapid convergence of ν_0 (Fig. 2),
we have
Ω = e^{-ν_0(∞)}. (32)
The profiles of the fundamental frequencies are shown in Fig. 8. It
is clear that more massive oscillatons oscillate with smaller frequencies.
IV. THE STATIONARY LIMIT PROCEDURE
In the weak-field regime, where σ(0)≪1 (and consequently
|μ_1|≪1), Eqs. (21)-(25) simplify to
ν_0' = x[e^{2μ_0}σ^2 + σ'^2], (33)
μ_0' = (1/x){1 + e^{ν_0+μ_0}((3/2)x^2σ^4 - 1)}, (34)
μ_1' = σ e^{ν_0+μ_0}(x^2 - 1), (35)
σ'' = -σ'(2/x - μ_0') - 3σ^3 e^{ν_0+μ_0} - σ e^{2μ_0}, (36)
where, for z≪1, we have used the fact that I_0(z)∼𝒪(1)
and I_1(z)∼𝒪(z/2), and the higher orders I_n(z)
are neglected, while Eq. (18.c), under the variable changes
of Eq. (20), remains unchanged.
If we expand the scalar potential in a Fourier series using Eq. (16),
V(Φ) = ∑_{n=0}^4 V_n(σ)cos(nωt) = (m_Φ^2 m_Pl^2/16π)σ^4[cos(4ωt) + 4cos(2ωt) + 3], (37)
then, since λ = m_Φ^2 k_0, we obtain
σ = (16π V_0(σ)/(m_Pl^2 m_Φ^2))^{1/4}. (38)
It is clear that Eq. (36) is the only equation among (33)-(36)
that changes under this substitution; it can be rewritten as
σ'' = -σ'(2/x - μ_0') - 3(16π V_0(σ)/(m_Pl^2 m_Φ^2))^{3/4} e^{ν_0+μ_0} - (16π V_0(σ)/(m_Pl^2 m_Φ^2))^{1/4} e^{2μ_0}. (39)
At this stage it is worth recalling that, since oscillatons are
made of real scalar fields, we know from non-relativistic field theory
that the charge and current densities, ρ and J, vanish;
these objects are therefore electrically neutral. Likewise, a real Φ
corresponds to electrically neutral particles in the oscillaton
environment, hence we do not expect any electromagnetic waves
to be emitted from oscillatons [9].
Another interesting question is: could the oscillatons predicted
by the scalar field be somehow associated with the gravitational-wave
phenomenon?
As a motivation for this issue, we start with the following reasoning.
In the present status of our understanding of the universe, there is
an apparent asymmetry in the kinds of interactions that take part in
nature: the known fundamental interactions are either spin-1 or spin-2.
Electromagnetic, weak and strong interactions are spin-1 interactions,
while gravitational interactions are spin-2. Of course, this could be
just a coincidence. Nevertheless, we know that the simplest particles
are the spin-0 ones. The asymmetry lies in the fact that there are no
spin-0 fundamental interactions. Why did Nature forget to use spin-0
fundamental interactions? On the other hand, we know from the success
of the ΛCDM model that two fields currently take the main role
in the Cosmos, dark matter and dark energy. Recently, it has indeed
been proposed that dark matter is a scalar field, that is, a spin-0
fundamental interaction. This is the so-called Scalar Field Dark
Matter (SFDM) hypothesis. If true, this hypothesis could resolve the
apparent asymmetry in our picture of nature [18]. As a final part of
this work, it is interesting to compare the oscillatons produced by
the quartic scalar potential with those of our previous work, which
were described by an exponential scalar potential [9]. For the
quartic scalar potential, the metric coefficients for different
values of σ comply with the asymptotic flatness condition much
better than their counterparts for the exponential scalar potential,
as do the frequencies. In contrast to the exponential and quadratic
scalar potentials, however, we find several singular points in the
energy density and in the radial and angular pressure components for
this potential, with no persuasive explanation as yet [7, 9].
V. CONCLUSIONS
In this paper we presented the simplest approximation for solving
the minimally coupled Einstein-Klein-Gordon equations for a spherically
symmetric oscillating soliton object endowed with the quartic scalar
potential V(Φ) = (1/4)λΦ^4 and a harmonic
time-dependent scalar field Φ. Taking into account the Fourier
expansions of the differential equations, and imposing the boundary
conditions of non-singularity and asymptotic flatness, the
solutions are obtained easily. It should be emphasized that the dynamical
situation is confined to the region of the oscillaton only; the
metric and the solutions are asymptotically static. This fact allows
us to find the mass of these astronomical objects, the most important
quantity for addressing what is called dark matter, as well as their
fundamental frequency. The results show that a quartic scalar field
potential leads to different profiles for the metric functions and
metric coefficients, as well as for the energy density and mass
distribution, in comparison with previous works on quadratic and
exponential scalar field potentials. On the other hand, with the same
initial boundary conditions, all of these potentials share the same
fundamental frequency and mass relations [6, 7, 9]. Nevertheless,
some further problems concerning oscillatons derived from a quartic
scalar field remain to be investigated, among them:
* In the Fourier expansion we worked only to second order, for
simplicity; higher orders require considerably more complex
calculations.
* For the quartic potential studied here, the mass is negative for
σ(x=0)<0.235; this can be justified via the Casimir effect and
negative pressure, but more research should be carried out on this
point.
* For σ(x=0)>0.235, the mass values increase rapidly.
[1] J. L. Feng, arXiv:1003.0904v2 [astro-ph], 2010.
[2] A. Arbey, J. Lesgourgues and P. Salati, arXiv:astro-ph/0112324v2, 2002.
[3] W. Buchmüller and C. Lüdeling, arXiv:hep-ph/0609174v1, 2006.
[4] T. Rindler-Daller and P. R. Shapiro, arXiv:1312.1734v2 [astro-ph.CO], 2014.
[5] T. Matos and F. S. Guzmán, Class. Quantum Grav. 18, 5055 (2001).
[6] T. Matos, F. S. Guzmán, L. A. Ureña-López and D. Núñez, arXiv:astro-ph/0102419.
[7] L. A. Ureña-López, arXiv:gr-qc/0104093v3, 2002.
[8] L. A. Ureña-López, T. Matos and R. Becerril, Class. Quantum Grav. 19 (2002) 6259-6277.
[9] B. Malakolkalami and A. Mahmoodzadeh, Phys. Rev. D 94, 103505 (2016).
[10] T. Matos and F. S. Guzmán, arXiv:gr-qc/0108027v1, 2001.
[11] R. Friedberg, T. D. Lee and Y. Pang, Phys. Rev. D 35, 3640 (1987).
[12] E. Seidel and W.-M. Suen, Phys. Rev. Lett. 66, 1659 (1991).
[13] S. Weinberg, Gravitation and Cosmology (John Wiley and Sons, Inc., New York, 1972), p. 337.
[14] A. Lambrecht, The Casimir effect: a force from nothing, IOP Publishing Ltd, 2008, ISSN 0953-8585.
[15] A. Lambrecht and S. Reynaud, Casimir effect and experiments, arXiv:1112.1301v1 [quant-ph], 2011.
[16] S. Mbarek and M. B. Paranjape, Negative mass bubbles in de Sitter space-time, arXiv:1407.145v2 [gr-qc], 2014.
[17] J. P. Petit, Negative Mass Hypothesis in Cosmology and the Nature of Dark Energy, Astrophysics and Space Science 354, 2014.
[18] T. Matos, L. A. Ureña-López, M. Alcubierre, R. Becerril, F. S. Guzmán and D. Núñez, The Scalar Field Dark Matter Model: A Braneworld Connection, Lect. Notes Phys. 646, 401-420 (2004).
|
http://arxiv.org/abs/1701.07926v9 | 20170127024558 | Boosted nonparametric hazards with time-dependent covariates | [
"Donald K. K. Lee",
"Ningyuan Chen",
"Hemant Ishwaran"
] | stat.ML | [
"stat.ML",
"62N02 (Primary) 62G05, 90B22 (Secondary)"
] |
Boosted nonparametric hazards with time-dependent covariates
Donald K.K. Lee[Correspondence: donald.lee@emory.edu. Supported by a hyperplane], Ningyuan Chen[Supported by the HKUST start-up fund R9382], Hemant Ishwaran[Supported by the NIH grant R01 GM125072]
Emory University, University of Toronto, University of Miami
Preprint of Annals of Statistics 49(4):2101-2128 (2021), https://doi.org/10.1214/20-AOS2028
Given functional data from a survival process with time-dependent covariates, we derive a smooth convex representation for its nonparametric log-likelihood functional and obtain its functional gradient. From this we devise a generic gradient boosting procedure for estimating the hazard function nonparametrically.
An illustrative implementation of the procedure using regression trees is described to show how to recover the unknown hazard. The generic estimator is consistent if the model is correctly specified; alternatively an oracle inequality can be demonstrated for tree-based models. To avoid overfitting, boosting employs several regularization devices. One of them is step-size restriction, but the rationale for this is somewhat mysterious from the viewpoint of consistency. Our work brings some clarity to this issue by revealing that step-size restriction is a mechanism for preventing the curvature of the risk from derailing convergence.
MSC 2010 subject classifications. Primary 62N02; Secondary 62G05, 90B22.
Keywords. survival analysis, gradient boosting, functional data, step-size shrinkage, regression trees, likelihood functional.
§ INTRODUCTION
Flexible hazard models involving time-dependent covariates are
indispensable tools for studying systems that track covariates over
time. In medicine, electronic health records systems make it possible
to log patient vitals throughout the day, and these measurements can
be used to build real-time warning systems for adverse outcomes such
as cancer mortality <cit.>. In financial technology,
lenders track obligors' behaviours over time to assess and revise
default rate estimates. Such models are also used in many other
fields of scientific inquiry since they form the building blocks for
transitions within a Markovian state model. Indeed, this work was
partly motivated by our study of patient transitions in emergency
department queues and in organ transplant waitlist
queues <cit.>. For example, allocation for a donor heart
in the U.S. is defined in terms of coarse tiers
<cit.>, and transplant candidates are assigned to
tiers based on their health status at the time of listing. However, a
patient's condition may change rapidly while awaiting a heart, and
this time-dependent information may be the most predictive
of mortality and not the static covariates collected far in the past.
The main contribution of this paper is to introduce a fully
nonparametric boosting procedure for hazard estimation with
time-dependent covariates. We describe a generic gradient boosting
procedure for boosting arbitrary base learners for this
setting. Generally speaking, gradient boosting adopts the view of
boosting as an iterative gradient descent algorithm for minimizing a
loss functional over a target function space. Early work includes
Breiman <cit.> and Mason et al. <cit.>. A unified
treatment was provided by Friedman <cit.>, who coined the term
“gradient boosting” which is now generally taken to be the modern
interpretation of boosting.
Most of the existing boosting approaches for survival data focus on
time-static covariates and involve boosting the Cox proportional hazards model. Examples
include the popular R-packages mboost (Bühlmann and Hothorn <cit.>)
and gbm (Ridgeway <cit.>) which apply gradient boosting to the Cox partial likelihood loss.
Related work includes the penalized Cox partial likelihood approach of Binder and Schumacher <cit.>. Other important approaches, but not based on
the Cox model, include L_2Boosting <cit.> with inverse
probability of censoring weighting
(IPCW) <cit.>, boosted
transformation models of parametric families <cit.>, and
boosted accelerated failure time models <cit.>.
While there are many boosting methods for dealing with time-static
covariates, the literature is far more sparse for the case of
time-dependent covariates. In fact, to our knowledge there is no
general nonparametric approach for dealing with this setting. This is
because in order to implement a fully nonparametric estimator, one has
to contend with the issue of identifying the gradient, which turns out
to be a non-trivial problem due to the functional nature of the data.
This is unlike most standard applications of gradient boosting where
the gradient can easily be identified and calculated.
Time-dependent covariate framework
To explain why this is so challenging, we start by formally defining the
survival problem with time-dependent covariates. Our description
follows the framework of Aalen <cit.>. Let T denote the
potentially unobserved failure time. Conditional on the
history up to time t- the probability of failing at T ∈ [t,t+dt)
equals
λ(t,X(t))Y(t)dt.
Here λ(t,x) denotes the unknown hazard function,
X(t)∈⊆^p is a predictable covariate process, and
Y(t)∈{0,1} is a predictable indicator of whether the subject is
at risk at time t.[The filtration of interest is σ{X(s),Y(s),I(T≤ s):s≤ t}. If X(t) is only observable when
Y(t)=1, we can set X(t)=x^c∉ whenever Y(t)=0.] To
simplify notation, without loss of generality we normalize the units
of time so that Y(t)=0 for t>1.[Since the data is always observed up to some finite time, there is no information loss from censoring at that point. For example, if T' is
the failure time in minutes and the longest duration in the data is
τ'=60 minutes, the failure time in hours, T, is at most
τ=1 hour. The hazard function on the minute timescale,
λ_T'(t',X(t')), can be recovered from the hazard function
on the hourly timescale, λ_T(t,X(t)), via
λ_T'(t',X(t')) =
1/τ'λ_T(t'/τ',X(t'/τ')).]
In other words, the subject is not at risk after time t=1, so we can
restrict attention to the time interval (0,1].
If failure is observed at T∈(0,1] then the indicator Δ=Y(T)
equals 1; otherwise Δ=0 and we set T to an arbitrary number larger than 1, e.g. T=∞. Throughout we
assume we observe n independent and identically distributed
functional data samples
{(X_i(·),Y_i(·),T_i)}_i=1^n. The evolution of
observation i's failure status can then be thought of as a
sequence of coin flips at time increments t = 0,dt,2dt,⋯,
with the probability of “heads” at each time point given
by (<ref>). Therefore, observation i's contribution to
the likelihood is
{1-λ(0,X_i(0))Y_i(0)dt}×{1-λ(dt,X_i(dt))Y_i(dt)dt}×⋯×λ(T_i,X_i(T_i))^{Δ_i} → e^{-∫_0^1 Y_i(t)λ(t,X_i(t))dt} λ(T_i,X_i(T_i))^{Δ_i},
where the limit can be understood as a product integral. Hence, if the
log-hazard function is
F(t,x)=logλ(t,x),
then the (scaled) negative log-likelihood functional is
ℛ̂_n(F)
= 1/n∑_{i=1}^n∫_0^1 Y_i(t)e^{F(t,X_i(t))}dt
- 1/n∑_{i=1}^nΔ_iF(T_i,X_i(T_i)),
which we shall refer to as the likelihood risk. The goal is to
estimate the hazard function λ(t,x)=e^{F(t,x)} nonparametrically by minimizing ℛ̂_n(F).
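To fix ideas, ℛ̂_n(F) can be evaluated numerically by discretizing the time integral. The sketch below is a naive illustration (the subject representation, grid size and all names are our own assumptions, not part of the estimator):

    import numpy as np

    def likelihood_risk(F, subjects, n_grid=1000):
        # F(t, x): log-hazard; each subject is a dict with callables 'x' (t -> covariate)
        # and 'y' (t -> 0/1 at-risk indicator), plus the pair ('T', 'Delta').
        t = np.linspace(0.0, 1.0, n_grid)
        dt = t[1] - t[0]
        total = 0.0
        for s in subjects:
            at_risk = np.array([s['y'](u) for u in t])
            hazard = np.array([np.exp(F(u, s['x'](u))) for u in t])
            total += np.sum(at_risk * hazard) * dt          # Riemann sum of the integral
            if s['Delta'] == 1:
                total -= F(s['T'], s['x'](s['T']))          # log-hazard at the failure time
        return total / len(subjects)

    subj = {'x': lambda t: 0.3, 'y': lambda t: 1.0 if t <= 0.8 else 0.0,
            'T': 0.8, 'Delta': 1}
    print(likelihood_risk(lambda t, x: 0.1, [subj]))        # constant log-hazard example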
The likelihood does not have a gradient in generic function spaces
As mentioned, our approach is to boost F using functional gradient
descent. However, the chief difficulty is that the canonical
representation of the likelihood risk functional does not have a
gradient. To see this, observe that the directional derivative
of (<ref>) equals
d/dθ ℛ̂_n(F+θf)|_{θ=0}
= 1/n∑_{i=1}^n∫_0^1 Y_i(t)e^{F(t,X_i(t))} f(t,X_i(t))dt
- 1/n∑_{i=1}^nΔ_if(T_i,X_i(T_i)),
which is the difference of two different inner products ⟨e^F,f⟩_† - ⟨1,f⟩_⋆, where
⟨g,f⟩_† = 1/n∑_{i=1}^n ∫_0^1 Y_i(t) g(t,X_i(t)) f(t,X_i(t))dt,
⟨g,f⟩_⋆ = 1/n∑_{i=1}^n Δ_i g(T_i,X_i(T_i))f(T_i,X_i(T_i)).
Hence, (<ref>) cannot be expressed as a single inner
product of the form ⟨ g_F, f ⟩ for some function
g_F(t,x). Were it possible to do so, g_F would then be the gradient function.
In simpler non-functional data settings like regression or
classification, the loss can be written as L(Y,F(x)), where
F is the non-functional statistical target and Y is the
outcome, so the gradient is simply ∂L(Y,F(x))/∂F(x). The negative gradient is then approximated using a base
learner f∈ℱ from a predefined class of functions ℱ (this
being either parametric, for example linear learners, or
nonparametric, for example tree learners). Typically, the optimal
base learner f̂ is chosen to minimize the L^2-approximation
error and then scaled by a regularization parameter 0<ν≤ 1 to
obtain the updated estimate of F:
F ← F - νf̂, where f̂ = argmin_{f∈ℱ}‖∂L/∂F - f‖_2.
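For intuition, one such update step in the simpler setting might be sketched as follows, using a scikit-learn regression tree as the base learner (an assumption for illustration only, not the implementation developed in this paper):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def boost_step(X, y, F_vals, dL_dF, nu=0.1, depth=2):
        # One update F <- F - nu * f_hat, with f_hat fit to the pointwise gradient.
        g = dL_dF(y, F_vals)                                  # dL/dF evaluated at each x_i
        f_hat = DecisionTreeRegressor(max_depth=depth).fit(X, g)
        return F_vals - nu * f_hat.predict(X), f_hat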
Importantly, in the simpler non-functional data setting the gradient does
not depend on the space that F belongs to. By contrast, a key
insight of this paper is that the gradient of ℛ̂_n(F) can only
be defined after carefully specifying an appropriate sample-dependent
domain for ℛ̂_n(F). The likelihood risk can then be
re-expressed as a smooth convex functional, and an analogous
representation also exists for the population risk. These
representations resolve the difficulty above, allow us to describe and
implement a gradient boosting procedure, and are also crucial to
establishing guarantees for our estimator.
Contributions of the paper
A key discovery that unlocks the boosted hazard estimator is Proposition <ref> of Section <ref>. It provides an integral representation
for the likelihood risk from which several results follow, including,
importantly, an explicit representation for the gradient.
Proposition <ref> relies on defining a suitable space of
log-hazard functions defined on the time-covariate domain
[0,1]×𝒳. Identifying this space is the key insight that
allows us to rescue the likelihood approach and to derive the gradient
needed to implement gradient boosting. Arriving at this framework is
not conceptually trivial, and may explain the absence of boosted
nonparametric hazard estimators until now.
Algorithm <ref> of Section <ref> describes our estimator. The algorithm minimizes the likelihood risk (<ref>) over the defined space of log-hazard functions.
In the special case of regression tree learners, expressions for the likelihood risk and its gradient are obtained from Proposition <ref>, which are then used to describe a tree-based implementation of our estimator in Section <ref>. In Section <ref> we apply it to a high-dimensional dataset generated from a naturalistic simulation of patient service times in an emergency
department.
Section <ref> establishes the consistency of the
procedure. We show that the hazard estimator is consistent if the
space is correctly specified. In particular, if the space is the span
of regression trees, then the hazard estimator satisfies an oracle
inequality and recovers λ up to some error tolerance (Propositions <ref> and <ref>).
Another contribution of our work is to clarify the mechanisms used by gradient boosting to avoid overfitting. Gradient boosting typically applies two types of regularization to invoke slow learning: (i) A small
step-size is used for the update; and (ii) The number of boosting
iterations is capped. The number of iterations used in our algorithm
is set using the framework of Zhang and Yu <cit.>, whose work shows how stopping early ensures consistency. On the other hand, the role of step-size restriction is more mysterious. While <cit.> demonstrates that small step-sizes are needed to prove consistency,
unrestricted greedy step-sizes are already small enough for
classification problems <cit.> and also for commonly used
regression losses (see the Appendix of <cit.>). We show in
Section <ref> that shrinkage acts as a counterweight to the curvature of the risk (see Lemma <ref>). Hence if the curvature is unbounded, as is the case for hazard regression, then the step-sizes may need to be explicitly controlled to ensure convergence. This important result adds to our understanding of statistical convergence in gradient boosting. As noted by Biau and Cadre <cit.> the literature for this is relatively sparse, which motivated them to propose another regularization mechanism that also prevents overfitting.
Concluding remarks can be found in Section <ref>. Proofs not appearing in the body of the paper can be found in the Appendix.
The boosted hazard estimator
In this section, we describe our boosted hazard estimator. To provide readers with concrete examples for the ideas introduced here, we will show how the quantities defined in this section specialize in the case of regression trees, which is one of a few possible ways to implement boosting.
We begin by defining in Section <ref> an appropriate
sample-dependent domain for the likelihood risk ℛ̂_n(F). As
explained, this key insight allows us to re-express the likelihood risk and its population analogue as smooth convex functionals, thereby enabling us to compute their gradients in closed form in
Propositions <ref> and <ref> of
Section <ref>. Following this, the boosting
algorithm is formally stated in Section <ref>.
Specifying a domain for ℛ̂_n(F)
We will make use of two identifiability conditions <ref> and <ref> to define the domain for ℛ̂_n(F). Condition <ref> below is the same as Condition 1(iv)
of Huang and Stone <cit.>.
The true hazard function
λ(t,x) is bounded between some interval
[Λ_L,Λ_U]⊂(0,∞) on the time-covariate
domain [0,1]×𝒳.
Recall that we defined X(·) and Y(·) to be predictable processes, and so it
can be shown that the integrals and expectations appearing in this
paper are all well defined. Denoting the indicator function as
I(·), define the following population and empirical
sub-probability measures on [0,1]×𝒳:
μ(B) = 𝔼(∫_0^1 Y(t)· I[{t,X(t)}∈ B] dt),
μ̂_n(B) = 1/n∑_{i=1}^n∫_0^1 Y_i(t)· I[{t,X_i(t)}∈ B] dt,
and note that 𝔼μ̂_n(B)=μ(B) because the data is
i.i.d. by assumption.
Intuitively, μ̂_n measures the denseness of the observed sample
time-covariate paths on [0,1]×𝒳. For any integrable f,
∫ f dμ = 𝔼(∫_0^1 Y(t)· f(t,X(t)) dt),
∫ f dμ̂_n = 1/n∑_{i=1}^n∫_0^1 Y_i(t)· f(t,X_i(t)) dt.
This allows us to define the following (random) norms and inner products
‖f‖_{μ̂_n,1} = ∫|f| dμ̂_n,
‖f‖_{μ̂_n,2} = (∫ f^2 dμ̂_n)^{1/2},
‖f‖_∞ = sup{|f(t,x)| : (t,x)∈[0,1]×𝒳},
⟨f_1,f_2⟩_{μ̂_n} = ∫ f_1f_2 dμ̂_n,
and note that
‖·‖_{μ̂_n,1} ≤ ‖·‖_{μ̂_n,2} ≤ ‖·‖_∞
because μ̂_n([0,1]×𝒳)≤1.
By careful design, μ̂_n allows us to specify a natural domain for
ℛ̂_n(F). Let {ϕ_j(t,x)}_{j=1}^d be a set of bounded
functions [0,1]×𝒳↦[-1,1] that are linearly independent,
in the sense that
∫_{[0,1]×𝒳}(∑_jc_jϕ_j)^2 dtdx = 0 if and only
if c_1=⋯=c_d=0 (when some of the covariates are
discrete-valued, dx should be interpreted as the product of
a counting measure and the Lebesgue measure). The span of the functions is
ℱ = {∑_{j=1}^d c_jϕ_j : c_j∈ℝ}.
For example, the span of all regression tree functions that can be defined on [0,1]×𝒳 is ℱ = {∑_j c_jI_{B_j}(t,x) : c_j∈ℝ},[It is clear that said span is contained in ℱ. For the converse, it suffices to show that ℱ is also contained in the span of trees of some depth. This is easy to show for trees with p+1 splits, because they can generate partitions of the form (-∞,t]×(-∞,x^{(1)}]×⋯×(-∞,x^{(p)}] in [0,1]×𝒳 (Section 3 of <cit.>).] which are linear combinations of indicator functions over disjoint time-covariate cubes indexed[With a slight abuse of notation, the index j is only considered multi-dimensional when describing the geometry of B_j, such as in (<ref>). In all other situations j should be interpreted as a scalar index.] by j=(j_0,j_1,⋯,j_p):
B_j = {(t,x)∈[0,1]×𝒳 : t^{(j_0)}<t≤ t^{(j_0+1)}, x^{(1,j_1)}<x^{(1)}≤ x^{(1,j_1+1)}, ⋯, x^{(p,j_p)}<x^{(p)}≤ x^{(p,j_p+1)}}.
The regions B_j are formed using all possible split points
{x^(k,j_k)}_j_k for the k-th coordinate x^(k), with the spacing determined by the precision of the measurements. For example, if weight is measured to the closest kilogram, then the set of all possible split points will be {0.5, 1.5, 2.5,⋯} kilograms. Note that these split points are the finest possible for any realization of weight that is measured to the nearest kilogram. While abstract treatments of trees assume that there is a continuum of split points, in reality they fall on a discrete (but fine) grid that is pre-determined by the precision of the data.
When ℱ is equipped with ⟨·,·⟩_{μ̂_n}, we obtain the following sample-dependent subspace of
L^2(μ̂_n), which is the appropriate domain for ℛ̂_n(F):
(ℱ, ⟨·,·⟩_{μ̂_n}).
Note that the elements in (ℱ,⟨·,·⟩_{μ̂_n}) are equivalence classes rather than actual functions
that have well defined values at each (t,x). This is a problem
because the likelihood risk (<ref>) requires evaluating F(t,x)
at the points (T_i,X_i(T_i)) where Δ_i=1. We resolve this by fixing an orthonormal basis {ϕ̂_{nj}(t,x)}_j
for (ℱ,⟨·,·⟩_{μ̂_n}), and represent each member of (ℱ,⟨·,·⟩_{μ̂_n}) uniquely in the form
∑_j c_jϕ̂_{nj}(t,x). For example in the case of regression trees, applying the Gram-Schmidt procedure to {ϕ_j(t,x) = I_{B_j}(t,x)}_j gives
{ϕ̂_{nj}(t,x)}_j = {I_{B_j}(t,x)/μ̂_n(B_j)^{1/2} : μ̂_n(B_j)>0},
which by design have disjoint support.
The second condition we impose is for {ϕ_j}_j=1^d to be
linearly independent in L^2(μ), that is
‖∑_j c_jϕ_j‖_μ,2^2 = ∑_ij c_i(∫ϕ_iϕ_j dμ)c_j=0
if and only if c_1=⋯=c_d=0. Since by construction
{ϕ_j}_j=1^d are already linearly independent on
[0,1]×𝒳, the condition intuitively requires the set of all
possible time-covariate trajectories to be adequately dense in
[0,1]×𝒳 to intersect a sufficient amount of the support of
every ϕ_j. This is weaker than the identifiability conditions 1(ii)-1(iii) in
<cit.> which require X(t) to have a positive joint probability
density on [0,1]×𝒳.
The Gram matrix Σ_ij=∫ϕ_iϕ_jdμ is
positive definite.
Integral representations for the likelihood risk
Having deduced the appropriate domain for R̂_n(F), we can now
recast the risk as a smooth convex functional on (𝔽,⟨·,·⟩_μ̂). Proposition <ref> below provides closed form expressions for this and its gradient. We note that if the risk is actually of a certain simpler form, it might be possible to estimate its gradient empirically from our risk expression using <cit.>.
For functions F(t,x), f(t,x) of the form ∑_j c_j φ̂_nj(t,x), the likelihood risk (<ref>) can be written as
R̂_n(F)=∫(e^F − Ô F) dμ̂,
where Ô∈(𝔽,⟨·,·⟩_μ̂)
is the function
Ô(t,x) = 1/n ∑_j {∑_i=1^n Δ_i φ̂_nj(T_i,X_i(T_i))} φ̂_nj(t,x).
Thus there exists ρ∈(0,1) (depending on F and f)
for which the Taylor representation
R̂_n(F+f)=R̂_n(F) + ⟨ g_F,f⟩_μ̂ + 1/2 ∫ e^{F+ρ f} f^2 dμ̂
holds, where the gradient
g_F(t,x) = ∑_j ⟨ e^F,φ̂_nj⟩_μ̂ φ̂_nj(t,x) − Ô(t,x)
of R̂_n(F) is the projection of e^F − Ô onto
(𝔽,⟨·,·⟩_μ̂).
Hence if g_F=0 then the infimum of R̂_n(F) over
the span of {φ̂_nj(t,x)}_j is uniquely attained
at F.
For regression trees the expressions (<ref>) and
(<ref>) simplify further because 𝔽 is closed under
pointwise exponentiation, i.e. e^F∈𝔽 for F∈𝔽. This is
because the B_j's are disjoint so F = ∑_j c_j I_B_j and hence
e^F = ∑_j e^{c_j} I_B_j. Thus
Ô(t,x) = ∑_{j:μ̂(B_j)>0} N̂_j/(nμ̂(B_j)) I_B_j(t,x),
R̂_n(F) = ∑_{j:μ̂(B_j)>0} (e^{c_j} μ̂(B_j) − c_j N̂_j/n),
g_F(t,x) = ∑_{j:μ̂(B_j)>0} (e^{c_j} − N̂_j/(nμ̂(B_j))) I_B_j(t,x),
where
N̂_j=∑_i Δ_i I[{T_i,X_i(T_i)}∈ B_j]
is the number of observed failures in the time-covariate region
B_j.
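Since these closed forms drive the whole tree-based implementation, a small sketch may help; the array names are ours, and the check at the end uses the fact that the gradient vanishes exactly at the per-region maximizer c_j = log(N̂_j/(nμ̂(B_j))).

```python
import numpy as np

def tree_quantities(c, mu_hat_B, N_hat, n):
    """Closed-form risk and gradient for a tree F = sum_j c_j I_{B_j}.

    c        : coefficients c_j over the regions with mu_hat(B_j) > 0
    mu_hat_B : empirical masses mu_hat(B_j)
    N_hat    : observed failure counts N_hat_j in each B_j
    n        : sample size
    """
    risk = np.sum(np.exp(c) * mu_hat_B - c * N_hat / n)
    grad = np.exp(c) - N_hat / (n * mu_hat_B)  # coefficients of g_F on I_{B_j}
    return risk, grad

# Toy check: at c_j = log(N_hat_j / (n mu_hat(B_j))) the gradient vanishes,
# matching the unconstrained per-region hazard MLE.
mu_hat_B = np.array([0.10, 0.05, 0.20])
N_hat = np.array([3.0, 1.0, 4.0])
n = 50
c_star = np.log(N_hat / (n * mu_hat_B))
print(tree_quantities(c_star, mu_hat_B, N_hat, n))  # gradient ~ 0
```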
Fix a realization of {(X_i(·),Y_i(·),T_i)}_i=1^n.
Using (<ref>) we can rewrite (<ref>) as
R̂_n(F) = ∫ e^F dμ̂ − 1/n ∑_i=1^n Δ_i F(T_i,X_i(T_i)).
We can express F in terms of the basis {φ̂_nk}_k as
F(t,x)=∑_k c_k φ̂_nk(t,x). Hence
∫ Ô F dμ̂ = ∫ 1/n ∑_j {∑_i=1^n Δ_i φ̂_nj(T_i,X_i(T_i))} φ̂_nj(t,x) F(t,x) dμ̂
= 1/n ∑_j {∑_i=1^n Δ_i φ̂_nj(T_i,X_i(T_i))} ∫ φ̂_nj(t,x) F(t,x) dμ̂
= 1/n ∑_j {∑_i=1^n Δ_i φ̂_nj(T_i,X_i(T_i))} ∫ φ̂_nj(t,x) ∑_k c_k φ̂_nk(t,x) dμ̂
= 1/n ∑_j {∑_i=1^n Δ_i φ̂_nj(T_i,X_i(T_i))} c_j
= 1/n ∑_i=1^n Δ_i ∑_j c_j φ̂_nj(T_i,X_i(T_i))
= 1/n ∑_i=1^n Δ_i F(T_i,X_i(T_i)),
where the fourth equality follows from the orthonormality
of the basis. This completes the derivation of (<ref>).
By an interchange argument we obtain
d/dθ R̂_n(F+θ f) = ∫(e^{F+θ f} − Ô) f dμ̂,
d^2/dθ^2 R̂_n(F+θ f) = ∫ e^{F+θ f} f^2 dμ̂,
the latter being positive whenever f≠0; i.e., R̂_n(F) is
convex. The Taylor representation (<ref>) then follows
from noting that g_F is the orthogonal projection of
e^F − Ô ∈ L^2(μ̂) onto (𝔽,⟨·,·⟩_μ̂).
The expectation of the likelihood risk also has an integral
representation. A special case of the
representation (<ref>) below is proved in Proposition 3.2
of <cit.> for right-censored data only, under assumptions that do not allow for internal covariates. In the statement of the proposition below recall that
Λ_L and Λ_U are defined in <ref>. The constant α_𝔽
is defined later in (<ref>).
For F∈𝔽∪{logλ},
R(F)=𝔼{R̂_n(F)}=∫(e^F − λ F) dμ.
Furthermore the restriction of R(F) to 𝔽
is coercive:
1/2 R(F) ≥ (Λ_L/α_𝔽)‖F‖_∞ + Λ_U min{0,1−log(2Λ_U)},
and it attains its minimum at a unique point F^*∈(𝔽,⟨·,·⟩_μ). If 𝔽
contains the underlying log-hazard function then F^*=logλ.
Coerciveness (<ref>) implies that any F with expected
risk R(F) less than R(0)≤1<3 is uniformly bounded:
‖F‖_∞ < (α_𝔽/Λ_L)[3/2 + Λ_U max{0,log(2Λ_U)−1}] ≤ α_𝔽 β_Λ
where the constant
β_Λ = (3/2 + Λ_U max{0,log(2Λ_U)−1})/min{1,Λ_L}
is by design no smaller than 1 in order to simplify subsequent analyses.
The boosting procedure
In gradient boosting the key idea is to update an iterate in a direction that is approximately aligned with the negative gradient. To model this direction formally, we introduce the concept of an ε-gradient.
Suppose g_F ≠ 0. We say that a unit vector ĝ_F^ε ∈ (𝔽,⟨·,·⟩_μ̂) is an ε-gradient at F if
for some 0<ε≤1,
⟨ g_F/‖g_F‖_μ̂,2, ĝ_F^ε ⟩_μ̂ ≥ ε.
Call −ĝ_F^ε a negative ε-gradient if ĝ_F^ε is an ε-gradient.
Our boosting procedure seeks approximations ĝ_F^ε that
satisfy (<ref>) for some pre-specified alignment value ε.
The larger ε is, the closer the alignment is between the negative gradient
and the negative ε-gradient, and the greater the risk reduction. In
particular, −g_F/‖g_F‖_μ̂,2 is the unique negative 1-gradient with maximal risk reduction. In practice, however, we find that using a smaller value of ε leads to simpler approximations that prevent overfitting in finite samples. This is consistent with other implementations of boosting: it is well known that the statistical performance of gradient descent generally improves when simpler base learners are used.
Algorithm <ref> describes the proposed boosting procedure for
estimating λ. For a given level of alignment ε, Line 3
finds an ε-gradient ĝ_{F̂_m}^ε at F̂_m satisfying (<ref>) at the m-th iteration, and uses its negation for the boosting update in Line 4. If the ε-gradients are tree learners, as is the case with the implementation in Section <ref>, then the trees cannot be grown in the same way as in the standard boosting algorithm of Friedman <cit.>. This is because the standard approach grows all regression trees to a fixed depth, which may or may not ensure ε-alignment at each boosting iteration.
To ensure ε-alignment, the depth of the trees is not fixed in the implementation in Section <ref>. Instead, at each boosting iteration a tree is grown to whatever depth is needed to satisfy (<ref>); a sketch of the resulting loop is given below. This can always be done because the alignment is non-decreasing in the number of tree splits, and with enough splits we can recover the gradient g_{F̂_m} itself up to μ̂-almost everywhere.[Split the tree until each leaf node contains just one of the regions B_j in (<ref>) with μ̂(B_j)>0. Then set the value of the node equal to the value of the gradient function (<ref>) inside B_j.] As mentioned earlier, we recommend using small values of ε, which can be determined in practice using cross-validation. This differs from the standard approach where cross-validation is used to select a common tree depth to use for all boosting iterations.
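To make the structure of Algorithm <ref> concrete, here is a minimal sketch of the loop over the tree regions, assuming the gradient coefficients from (<ref>); the ε-gradient learner below fits piecewise-constant block averages until the alignment condition holds, which is a deliberate simplification of the real CART split search described later.

```python
import numpy as np

def epsilon_tree(g, w, eps):
    """Greedy piecewise-constant fit to g until <g/|g|, g_hat> >= eps.

    g, w : gradient values and weights mu_hat(B_j) > 0 over the regions.
    Returns a unit-norm (w-weighted) approximation g_hat.
    """
    g_norm = np.sqrt(np.sum(w * g**2))
    for n_leaves in range(1, g.size + 1):
        g_hat = np.empty_like(g)
        for block in np.array_split(np.arange(g.size), n_leaves):
            g_hat[block] = np.average(g[block], weights=w[block])
        norm = np.sqrt(np.sum(w * g_hat**2))
        if norm > 0 and np.sum(w * g * g_hat) / (g_norm * norm) >= eps:
            return g_hat / norm
    return g / g_norm  # full recovery of the gradient always aligns

def boost(mu_hat_B, N_hat, n, eps=0.3, nu=1.0, Psi=3.0, max_iter=500):
    """Skeleton of the boosting loop; c holds the coefficients of F on the B_j."""
    c = np.zeros_like(mu_hat_B)
    for m in range(max_iter):
        g = np.exp(c) - N_hat / (n * mu_hat_B)  # gradient coefficients
        if np.allclose(g, 0):                   # exact minimizer reached
            break
        g_hat = epsilon_tree(g, mu_hat_B, eps)  # line 3: eps-gradient
        c_next = c - nu / (m + 1) * g_hat       # line 4: boosting update
        if np.max(np.abs(c_next)) >= Psi:       # lines 5-10: early stopping
            break
        c = c_next
    return c

mu_hat_B = np.array([0.12, 0.08, 0.10, 0.05])
N_hat = np.array([2.0, 1.0, 3.0, 1.0])
print(boost(mu_hat_B, N_hat, n=40))
```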
In addition to the gradient alignment ε, Algorithm <ref> makes use of two other regularization parameters, Ψ_n and ν_n. The first defines the early stopping criterion (how many boosting iterations to use), while the second controls the step-sizes of the boosting updates. These are two common regularization techniques used in boosting:
* Early stopping. The number of boosting iterations m̂
is controlled by stopping the algorithm before the uniform
norm of the estimator ‖F̂_m̂‖_∞ reaches or
exceeds
Ψ_n=W(n^1/4)→∞,
where W(y) is the branch of the Lambert function that returns the
real root of the equation ze^z=y for y>0.
* Step-sizes. The step-size ν_n ≪ 1 used in gradient
boosting is typically held constant across iterations. While we can also do this in our procedure,[The term ν_n^2 e^Ψ_n in condition (<ref>) would then need to be replaced by m̂ν_n^2 e^Ψ_n, since with a constant step-size the sum of squared steps grows with the number of iterations.] the role of step-size shrinkage becomes more salient if we use ν_n/(m+1) instead as the step-size for the m-th iteration in Algorithm <ref>. This step-size is controlled in two ways. First, it is made to decrease with each iteration according to the Robbins-Monro condition that the sum of the steps diverges while the sum of squared steps converges. Second, the shrinkage factor ν_n is selected to make the step-sizes decay with n at rate
ν_n^2 e^Ψ_n<1,  ν_n^2 e^Ψ_n→ 0.
This acts as a counterbalance to R̂_n(F)'s unbounded curvature:
d^2/dθ^2 R̂_n(F+θ f)|_θ=0 = ∫ e^F f^2 dμ̂,
which is upper bounded by e^Ψ_n when ‖F‖_∞<Ψ_n
and ‖f‖_μ̂,2=1.
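For reference, both tuning sequences are easy to compute; the sketch below evaluates Ψ_n via scipy's Lambert W and a shrinkage ν_n matching the choice ν_n^2 e^Ψ_n = log n/(64 n^1/4) made later for the formal guarantees.

```python
import numpy as np
from scipy.special import lambertw

def regularization(n):
    """Psi_n = W(n^(1/4)) and nu_n with nu_n^2 exp(Psi_n) = log(n)/(64 n^(1/4))."""
    Psi_n = np.real(lambertw(n ** 0.25))  # principal branch gives the real root
    nu_n = np.sqrt(np.log(n) / (64 * n ** 0.25) * np.exp(-Psi_n))
    return Psi_n, nu_n

for n in [100, 10_000, 1_000_000]:
    Psi_n, nu_n = regularization(n)
    print(n, round(Psi_n, 3), round(nu_n, 5), nu_n**2 * np.exp(Psi_n) < 1)
```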
Consistency
Under <ref> and <ref>, guarantees for our hazard estimator λ̂ in Algorithm <ref> can be derived for two scenarios of interest. The guarantees rely on the regularizations described in Section <ref> to avoid overfitting. In the following development, recall from Proposition <ref> that F^* is the unique minimizer of R(F), so it satisfies the first order condition
⟨ e^{F^*} − λ, F⟩_μ=0
for all F∈𝔽. Recall that the span of all trees is closed under pointwise exponentiation (e^F∈𝔽), in which case (<ref>) implies that λ^* = e^{F^*} is the orthogonal projection of λ onto (𝔽,⟨·,·⟩_μ).
* Consistency when 𝔽 is correctly specified.
If the true log-hazard function logλ is in 𝔽,
then Proposition <ref> asserts that
F^*=logλ. It will be shown in this case
that λ̂ is consistent:
‖λ̂−λ‖_μ,2^2 = o_p(1).
* Oracle inequality for regression trees. If 𝔽 is closed under pointwise exponentiation, it follows from (<ref>) that λ^* is the best L^2(μ)-approximation to λ among all candidate hazard estimators {e^F:F∈𝔽}. It can then be shown that λ̂ converges to this best approximation:
‖λ̂−λ‖_μ,2^2 = ‖λ^*−λ‖_μ,2^2 + o_p(1).
This oracle result is in the spirit of the type of guarantees available for tree-based boosting in the non-functional data setting. For example, if tree stumps are used for L_2-regression, then the regression function estimate will converge to the best approximation to the true regression function in the span of tree stumps <cit.>. Similar results also exist for boosted classifiers <cit.>.
Propositions <ref> and <ref> below
formalize these guarantees by providing bounds on the error terms above. While sharper bounds may exist, the purpose of this paper is to introduce our generic estimator for the first time and to provide guarantees that apply across different implementations. More refined convergence rates may exist for a specific implementation, just like the analysis in Bühlmann and Yu <cit.> for L_2Boosting when componentwise spline learners are specifically used.
En route to establishing the guarantees, Lemma <ref> below clarifies the role played by step-size restriction in ensuring convergence of the estimator. As explained in the Introduction, explicit shrinkage is not necessary for classification and regression problems where the risk has bounded curvature. Lemma <ref> suggests that it may, however, be needed when the risk has unbounded curvature, as is the case with _n(F). Seen in this light, shrinkage is really a mechanism for controlling the growth of the risk curvature.
Strategy for establishing guarantees
The representations for R̂_n(F) and its population analogue R(F)
from Section <ref> are the key ingredients for formalizing the guarantees. We use them to first show that F̂_m̂∈(𝔽,⟨·,·⟩_μ̂) converges to F^*∈(𝔽,⟨·,·⟩_μ): Applying Taylor's theorem to the representation for R(F) in Proposition <ref> yields
‖F̂_m̂−F^*‖_μ,2^2 ≤ 2{R(F̂_m̂)−R(F^*)}/min_t,x(λ^*∧λ̂).
The problem is thus transformed into one of risk minimization
R(F̂_m̂)→ R(F^*), for which <cit.>
suggests analyzing separately the terms of the decomposition
0 ≤ R(F̂_m̂)−R(F^*)
≤ |R̂_n(F̂_m̂)−R(F̂_m̂)|   (I) complexity argument
+ |R̂_n(F^*)−R(F^*)|   (II) standard argument
+ {R̂_n(F̂_m̂)−R̂_n(F^*)}.   (III) curvature argument
The authors argue that in boosting, the point of limiting the number
of iterations (enforced by lines 5-10 in Algorithm <ref>) is to prevent F̂_m̂ from growing too fast, so that (I) converges to zero as n→∞. At the same time, m̂ is allowed to grow with n in a controlled manner so that the empirical risk R̂_n(F̂_m̂) in (III) is eventually minimized as n→∞. Lemmas <ref> and <ref> below show that our procedure achieves both
goals. Lemma <ref> makes use of complexity theory
via empirical processes, while Lemma <ref> deals with
the curvature of the likelihood risk. The term (II) will be bounded
using standard concentration results.
Bounding (I) using complexity
To capture the effect of using a simple negative ε-gradient (<ref>) as the descent direction, we bound (I) in terms of the complexity of[
For technical convenience, 𝔽_ε has been enlarged from
𝔽_ε,boost to include the unit ball.] 𝔽_ε = 𝔽_ε,boost ∪ {F∈𝔽:‖F‖_∞=1} ⊆ 𝔽,
where 𝔽_ε,boost={F̂_m = −∑_k=0^m−1 (ν_n/(k+1)) ĝ_{F̂_k}^ε : m = 0,1,…}.
Depending on the choice of weak learners for the ε-gradients,
𝔽_ε may be much smaller than 𝔽. For example, coordinate
descent might only ever select a small subset of basis functions
{ϕ_j}_j because of sparsity. As another example, if λ(t,x) is additively separable in time and also in each covariate, then regression trees might only ever select simple tree stumps (one tree split).
The measure of complexity we use below comes from empirical process
theory. Define 𝔽_ε^Ψ={F∈𝔽_ε:‖F‖_∞<Ψ}
for Ψ>0 and suppose that Q is a sub-probability measure on
[0,1]×𝒳. Then the L^2(Q)-ball of radius δ>0
centred at some F∈ L^2(Q) is
{F'∈𝔽_ε^Ψ:‖F'−F‖_Q,2<δ}. The covering
number 𝒩(δ,𝔽_ε^Ψ,Q) is the minimum number of
such balls needed to cover 𝔽_ε^Ψ (Definitions 2.1.5 and
2.2.3 of van der Vaart and Wellner <cit.>), so 𝒩(δ,𝔽_ε^Ψ,Q)=1 for
δ≥Ψ. A complexity measure for 𝔽_ε is
J_{𝔽_ε}=sup_Ψ,Q {∫_0^1 {log𝒩(uΨ,𝔽_ε^Ψ,Q)}^1/2 du},
where the supremum is taken over Ψ>0 and over all non-zero
sub-probability measures. As discussed, J_{𝔽_ε} is never greater than, and potentially much smaller than, J_𝔽, the complexity of 𝔽, which is fixed and finite.
Before stating Lemma <ref>, we note that the result
also shows that an empirical analogue to the norm equivalences
‖F‖_μ,1 ≤ ‖F‖_μ,2 ≤ ‖F‖_∞ ≤ (α_𝔽/2)‖F‖_μ,1 for all F∈𝔽
exists, where
α_𝔽 = 2 sup_{F∈𝔽:‖F‖_∞=1}(‖F‖_∞/‖F‖_μ,1)
= 2/inf_{F∈𝔽:‖F‖_∞=1}‖F‖_μ,1 > 1.
The factor of 2 serves to simplify the presentation, and can be
replaced with anything greater than 1.
There exists a universal constant
κ such that for any 0<η<1, with probability at least
1−4exp{−(η n^1/4/(κ J_{𝔽_ε}))^2}
an empirical analogue to (<ref>) holds for all F∈𝔽:
‖F‖_μ̂,1 ≤ ‖F‖_μ̂,2 ≤ ‖F‖_∞ ≤ α_𝔽‖F‖_μ̂,1,
and for all F∈𝔽_ε^Ψ_n,
|{R̂_n(F)−R̂_n(0)}−{R(F)−R(0)}|<η.
The equivalences (<ref>) imply that
dim(𝔽,⟨·,·⟩_μ̂)
equals its upper bound dim𝔽=d. That is, if
‖∑_j c_jϕ_j‖_μ̂,2=0, then
‖∑_j c_jϕ_j‖_∞=0, so c_1=⋯=c_d=0
because {ϕ_j}_j=1^d are linearly independent on
[0,1]×𝒳.
Bounding (III) using curvature
We use the representation in Proposition <ref> to study the
minimization of the empirical risk R̂_n(F) by boosting. Standard
results for exact gradient descent like Theorem 2.1.15
of Nesterov <cit.> are in terms of the norm of the minimizer, which
may not exist for R̂_n(F).[The infimum of R̂_n(F) is not always attainable: If f is non-positive and vanishes on the set {{T_i,X_i(T_i)}:Δ_i=1}, then R̂_n(F+θ f)=∫(e^{F+θ f}−Ô(F+θ f))dμ̂ is decreasing in θ, so f is a direction of recession. This is however not an issue for boosting because of early stopping.] If coordinate descent is used instead, Section 4.1 of <cit.> can be applied to convex functions whose infimum may not be attainable, but its curvature is
required to be uniformly bounded above. Since the second derivative of
R̂_n(F) is unbounded (<ref>),
Lemma <ref> below provides two remedies: (i) Use the
shrinkage decay (<ref>) of ν_n to counterbalance
the curvature; (ii) Use coercivity (<ref>) to show that
with increasing probability, {F̂_m}_m=0^m̂ are uniformly
bounded, so the curvatures at those points are also uniformly
bounded. Lemma <ref> combines both to derive a result
that is simpler than what can be achieved from either one alone. In
doing so, the role played by step-size restriction becomes clear. The
lemma relies in part on adapting the analysis in Lemma 4.1
of <cit.> for coordinate descent to the case of generic
ε-gradients. The conditions required below will be shown to hold
with high probability.
Suppose (<ref>) holds and that
|R̂_n(F^*)−R(F^*)|<1,  sup_{F∈𝔽_ε^Ψ_n}|R̂_n(F)−R(F)|<1.
Then the largest gap between F^* and {F̂_m}_m=0^m̂,
γ̂=max_{m≤m̂}‖F̂_m−F^*‖_∞∨1,
is bounded by a constant no greater than 2α_𝔽β_Λ,
and for n≥55,
R̂_n(F̂_m̂)−R̂_n(F^*)
< 2eβ_Λ(log n/(4n^1/4))^{ε/(α_𝔽γ̂)}
+ ν_n^2 e^Ψ_n.
The last term in (<ref>) suggests that the role of the
step-size shrinkage ν_n is to keep the curvature of the
risk in check, to prevent it from derailing convergence. Recall
from (<ref>) that e^Ψ_n describes the
curvature of R̂_n(F̂_m). Thus our result clarifies the
role of step-size restriction in boosting functional data.
Regardless of whether the risk curvature is bounded or not, smaller
step-sizes always improve the convergence bound. This can be seen from
the parsimonious relationship between ν_n
and (<ref>). Fixing n, pushing the value of
ν_n down towards zero yields the lower limit
2eβ_Λ(log n/(4n^1/4))^{ε/(α_𝔽γ̂)}.
However, this limit is unattainable as ν_n must be positive in
order to decrease the risk.
This effect has been observed in practical applications of boosting.
Friedman <cit.> noted improved performance for gradient boosting with
the use of a small shrinkage factor ν. At the same time, it was
also noted there was diminishing performance gain as ν became
very small, and this came at the expense of an increased number of
boosting iterations. This same
phenomenon has also been observed for
L_2Boosting <cit.> with componentwise linear learners.
It is known that the solution path for L_2Boosting closely matches
that of lasso as ν→ 0. However, the algorithm exhibits
cycling behaviour for small ν, which greatly increases the number
of iterations and offsets the performance gain in trying to
approximate the lasso (see Ehrlinger and Ishwaran <cit.>).
Formal statements of guarantees
As a reminder, we have defined the following quantities:
λ̂ = e^{F̂_m̂}, the boosted hazard estimator in Algorithm <ref>
λ^* = e^{F^*}, where F^* is the unique minimizer of R(F) in Proposition <ref>
Λ_L,Λ_U = lower and upper bounds on λ(t,x) as defined in <ref>
γ̂ = maximum gap between F^* and {F̂_m}_m=0^m̂ defined in (<ref>)
κ = a universal constant
α_𝔽 = constant defined in (<ref>)
β_Λ = constant defined in (<ref>)
J_{𝔽_ε} = complexity measure (<ref>), bounded above by J_𝔽
To simplify the results, we will assume that n≥55 and also set
the shrinkage to satisfy ν_n^2 e^Ψ_n=log n/(64n^1/4). Our first guarantee shows that our hazard estimator is consistent if the model is correctly specified.
(Consistency under correct model specification). Suppose 𝔽 contains the true log-hazard function logλ. Then with probability at least
1−8exp{−(log n/(κ(Λ_L^{-1}∨Λ_U)J_{𝔽_ε}))^2}
we have that ‖F̂_m̂‖_∞ is bounded and
‖λ̂−λ‖_μ,2^2
< 13β_Λ max_t,x(λ∨λ̂)^2/min_t,x(λ∧λ̂) · (log n/(4n^1/4))^{ε/(α_𝔽γ̂)}.
Thus λ̂ is consistent.
Via the tension between ε and J_{𝔽_ε}, Proposition <ref> captures the trade-off in statistical performance between weak and strong learners in gradient boosting. The advantage of low complexity (weak learners) is reflected in the increased probability of the L^2(μ)-bound holding, with this probability being maximized when J_{𝔽_ε}→0, which generally occurs as ε→0. However, diametrically opposed to this, we find that the L^2(μ)-bound is minimized by ε→1, which occurs with the use of stronger learners that are more aligned with the gradient. This same trade-off is also captured by our second guarantee, which establishes an oracle inequality for tree learners.
(Oracle inequality for tree learners). Suppose e^F∈𝔽 for F∈𝔽. Then among {e^F:F∈𝔽}, λ^* is the best L^2(μ)-approximation to λ, that is
λ^* = argmin_{e^F:F∈𝔽}‖e^F−λ‖_μ,2.
Moreover, λ̂ converges to this best approximation λ^*: With probability at least
1−8exp{−(log n/(κ(Λ_L^{-1}∨Λ_U)J_{𝔽_ε}))^2}
we have that ‖F̂_m̂‖_∞ is bounded and
‖λ̂−λ‖_μ,2^2 < ρ_𝔽^2 + 13β_Λ max_t,x(Λ_U∨λ̂)^2/min_t,x(Λ_L∧λ̂) (log n/(4n^1/4))^{ε/(α_𝔽γ̂)},
where ρ_𝔽^2 = ‖λ^*−λ‖_μ,2^2 is the smallest error one can achieve from using functions in {e^F:F∈𝔽} to approximate λ.
For tree learners, λ^*(t,x) is constant over each region B_j in (<ref>), and its value equals the local average of λ over B_j,
λ^*(t,x)|_B_j = (1/μ(B_j))∫_B_j λ dμ.
Hence if the B_j's are small, λ^* should closely approximate λ (recall from Remark <ref> that the size of the B_j's is fixed by the data). To estimate the approximation error ρ_𝔽 in terms of B_j, suppose that λ is sufficiently smooth, e.g. Hölder continuous |λ(t,x)−λ(t',x')| ≾ ‖(t−t',x−x')‖^b for some b>0. Then since inf_{B_j}λ ≤ λ^*|_B_j ≤ sup_{B_j}λ,
ρ_𝔽 ≤ ‖λ^*−λ‖_∞ ≾ max_j diam(B_j)^b.
A tree-based implementation
Here we describe an implementation of Algorithm <ref> using
regression trees, whereby the ε-gradient ĝ_{F̂_m}^ε is obtained by growing a tree to satisfy (<ref>) for a pre-specified ε.
To explain the tree growing process, first observe that the m-th step log-hazard estimator is an additive expansion of CART basis functions. Thus it can be written as
F̂_m(t,x) = ∑_b=0^m−1 ∑_l=1^L_b γ_b,l I_{A_b,l}(t,x) = ∑_j c_m,j I_B_j(t,x),
where A_b,l is the l-th leaf region of the b-th tree. Recall from Section <ref> that each tree is grown until (<ref>) is satisfied, so the number of leaf nodes L_b can vary from tree to tree. The leaf regions are typically large subsets of the time-covariate space [0,1]×𝒳 adaptively determined by the tree growing process (to be discussed shortly). Since each leaf region can be further decomposed into the finer disjoint regions B_j in (<ref>), F̂_m(t,x) can be rewritten as (<ref>). However, many of these regions will share the same coefficient value, so (<ref>) can be written more compactly as
F̂_m(t,x) = ∑_j c_m,j I_{B'_m,j}(t,x),
where B'_m,j is the union of contiguous regions whose coefficient equals c_m,j. This smooths the hazard estimator λ̂(t,x) over [0,1]×𝒳, thanks to the regularization imposed by limiting the number of trees (early stopping) and also by the use of weak tree learners. This is unlike the unconstrained hazard MLE defined in (<ref>), which can take on a different value in each region B_j, making it prone to overfit the data.
To construct an ε-gradient ĝ_{F̂_m}^ε with ε-alignment to g_{F̂_m} defined by (<ref>),
g_{F̂_m}(t,x) = ∑_{j:μ̂(B_j)>0}(e^{c_m,j} − N̂_j/(nμ̂(B_j))) I_B_j(t,x),
the tree splits are adaptively chosen to reduce the L^2(μ̂)-approximation error between ĝ_{F̂_m}^ε and g_{F̂_m}. We implement tree splits for both time and covariates. Specifically, suppose we wish to split a leaf region A ⊆ [0,1]×𝒳 into left and right daughter subregions A_1 and A_2, and assign values γ_1 and γ_2 to them. For example, a split on the k-th covariate could propose left and right daughters such as
A_1={(t,x)∈ A: x^(k)≤ s},  A_2={(t,x)∈ A: x^(k)> s},
or a split on time t could propose regions
A_1={(t,x)∈ A: t≤ s},  A_2={(t,x)∈ A: t> s}.
Now note that g_{F̂_m} is constant within each region B_j. We denote its value by g_{F̂_m}(t_B_j,x_B_j) where (t_B_j,x_B_j) is the centre of B_j. Hence the best split of A into A_1 and A_2 is the one that minimizes
min_γ_1 ∫_A_1 {g_{F̂_m}(t,x) − γ_1}^2 dμ̂ + min_γ_2 ∫_A_2 {g_{F̂_m}(t,x) − γ_2}^2 dμ̂
= min_γ_1 ∑_{j:B_j⊆ A_1} μ̂(B_j)·{g_{F̂_m}(t_B_j,x_B_j) − γ_1}^2
+ min_γ_2 ∑_{k:B_k⊆ A_2} μ̂(B_k)·{g_{F̂_m}(t_B_k,x_B_k) − γ_2}^2
= min_γ_1 ∑_{j:z_j∈ A_1, w_j>0} w_j·(ỹ_j − γ_1)^2
+ min_γ_2 ∑_{k:z_k∈ A_2, w_k>0} w_k·(ỹ_k − γ_2)^2,
where
ỹ_j = g_{F̂_m}(t_B_j,x_B_j) = e^{c_m,j} − N̂_j/(nμ̂(B_j))
represents the j-th pseudo-response, z_j = (t_B_j,x_B_j)
its covariate and w_j = μ̂(B_j) its weight. Thus the splits use a weighted least squares criterion, which can be efficiently computed as usual.
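A sketch of this split search over one candidate coordinate is given below; it assumes the triples (w_j, ỹ_j, z_j) have already been assembled, and scans all thresholds exhaustively as a standard weighted least squares tree split would.

```python
import numpy as np

def best_split(y, w, z):
    """Find the split minimizing the weighted least squares criterion.

    y : pseudo-responses  y_j = exp(c_mj) - N_j / (n * mu_hat(B_j))
    w : weights           w_j = mu_hat(B_j) > 0
    z : one coordinate (time or a covariate) of the region centres z_j
    Returns (threshold, left value, right value, criterion).
    """
    best = (None, None, None, np.inf)
    for s in np.unique(z)[:-1]:                      # candidate thresholds
        left, right = z <= s, z > s
        g1 = np.average(y[left], weights=w[left])    # optimal gamma_1
        g2 = np.average(y[right], weights=w[right])  # optimal gamma_2
        sse = (np.sum(w[left] * (y[left] - g1) ** 2)
               + np.sum(w[right] * (y[right] - g2) ** 2))
        if sse < best[3]:
            best = (s, g1, g2, sse)
    return best

# Toy example with five regions; the split lands between z=2 and z=3.
y = np.array([0.4, 0.5, -0.2, -0.3, -0.25])
w = np.array([0.1, 0.2, 0.15, 0.05, 0.1])
z = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(best_split(y, w, z))
```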
We split the tree until (<ref>) is satisfied, resulting in L_m leaf nodes (L_m−1 splits). As discussed in Section <ref>, we can always find a deep enough tree that is an ε-gradient because with enough splits we can recover the gradient g_{F̂_m} itself.[Split the tree until each leaf node contains just one of the regions B_j in (<ref>) with μ̂(B_j)>0. Then set the value of the node equal to the value of the gradient function (<ref>) inside B_j.] Recall also that a small value of ε performs best in practice, and this can be chosen by cross-validating on a set of small-sized candidates: For each one we implement Algorithm <ref>, and we select the one that minimizes the cross-validated risk R̂_n(F) defined in (<ref>). By contrast, the standard boosting algorithm <cit.> uses cross-validation to select a common number of splits to use for all trees, which does not ensure that each tree is an ε-gradient.
Regarding the possible split points for the covariates (<ref>), note that the k-th covariate x^(k) = x^(k)(t) is a time series that is sampled periodically. This yields a set of unique values equal to the union of all of the sampled values for the n observations. In direct analogy to non-functional data boosting, we place candidate split points in-between the sorted values in this set. In other words, splits for covariates only occur at values corresponding to the observed data, just as in non-functional boosting.
The resolution for the grid of candidate time
splits (<ref>) is set equal to the temporal
resolution. For example, the covariate trajectories in the simulation
in Section <ref> are piecewise constant and may change every
0.002 days. Placing the candidate split points at {0.002,0.004,…} days simplifies the exact computation of μ̂(B_j)
because every covariate trajectory is constant between these points.
Again, notice that the splits for time only occur at values informed by
the observed data.
Putting it together, the setup above leverages our insight in
(<ref>) by transforming the survival functional data
into the data values {w_j, ỹ_j, z_j}_j:w_j>0, which
enables the implementation to proceed like standard gradient boosting
for non-functional data. Only the pseudo-response ỹ_j in
{w_j, ỹ_j, z_j} needs to be updated at each boosting
iteration, while the other two do not change. In terms of storage it
costs 𝒪(np|𝒯|) to store {w_j, ỹ_j,
z_j}_j:w_j>0, where |𝒯| is the cardinality of the set
of candidate time splits.[Each {w_j, ỹ_j, z_j} is
of dimension p+3 and the number of time-covariate regions B_j
with w_j>0 is at most n(|𝒯|+1). To show the latter,
observe that B_j will only have w_j=μ̂(B_j)>0 if it is
traversed by at least one sample covariate trajectory. Then note
that each of the n sample covariate trajectories can traverse at
most |𝒯|+1 unique regions.] Computationally, choosing a
new tree split requires testing 𝒪(np|𝒯|)
candidate splits.[A sample covariate trajectory can have at
most |𝒯| unique observed values for the k-th covariate
x^(k), so there are at most n|𝒯| candidate splits
for x^(k). Thus there are 𝒪(np|𝒯|)
candidate splits for p covariates. The number of candidate splits
on time is obviously |𝒯|.] The space and time
complexities of the implementation are reasonable given that they are
𝒪(np) for non-functional data boosting: In the functional
data setting, each sample can have up to |𝒯| observations,
so n functional data samples is akin to
𝒪(n|𝒯|) samples in a non-functional data
setting.
Numerical experiment
We now apply the boosting procedure of Section <ref> to a
high-dimensional dataset generated from a naturalistic
simulation. This allows us to compare the performance of our estimator
to existing boosting methods. The simulation is of patient service
times in an emergency department (ED), and the hazard function of
interest is patient service rate in the ED. The study of patient
transitions in an ED queue is an important one in healthcare
operations, because without a high resolution model of patient flow
dynamics, the ED may be suboptimally utilized which would deny patients
of timely critical care.
Service rate The service rate model used in the simulation is based upon a service time dataset from the ED of an academic hospital in the United States. The dataset contains information on 86,983 treatment encounters from 2014 to early 2015. Recorded for each encounter was: Age, gender, Emergency Severity Index (ESI)[Level 1 is the most severe
(e.g., cardiac arrest) and level 5 is the least (e.g., rash). We
removed level 1 patients from the dataset because they were treated
in a separate trauma bay.], time of day when treatment in the ED
ward began, day of week of ED visit, and ward census. The last one
represents the total number of occupied beds in the ED ward, which
varies over the course of the patient's stay. Hence it is a
time-dependent variable. Lastly, we also have the duration of the
patient's stay (service time).
The service rate function is developed from the data in the following
way. First, we apply our nonparametric estimator to the data to
perform exploratory analysis. We find that:
* The key variables affecting the service rate (based on relative variable importance <cit.>) are ESI, age, and ward census. In addition, two of the most pronounced interaction terms identified by the tree splits are (age≥34, ESI=5) and (age≥34, ESI≤4).
* Holding all the variables fixed, the shapes of the estimated service rate function resemble the hazard functions of log-normal distributions. This agrees with the queuing literature that find log-normality to be a reasonable parametric fit for service durations.
Guided by these findings, we specify the service rate
λ(t,X(t)) for the simulation as a log-normal accelerated
failure time (AFT) model, and estimate its parameters from data.
This yields the service rate
λ(t,x) = θ(x) ·ϕ_l(θ(x)t;m,σ)/1-Φ_l(θ(x)t;m,σ),
where ϕ_l(·;m,σ) and Φ_l(·;m,σ) are the
PDF and CDF of the log-normal distribution with log-mean m=-1.8 and
log-standard deviation σ=0.74. The function θ(x) captures
the dependence of the service rate on the covariates:
logθ(X(t)) = -0.0071·age + 0.022·ESI - min{a·census_t/70, 2}
+ 0.10· I(age≥ 34, ESI=5) - 0.10· I(age≥ 34, ESI≤ 4)
+ 0·irrelevant_1 + ⋯ + 0·irrelevant_43.
The specification for θ(X(t)) above is a slight modification of
the original estimate, with the free parameter a allowing us to
study the effect of time-dependent covariates on hazard
estimation. When a=0, the service rate does not depend on
time-varying covariates, but as a increases, the dependency becomes
more and more significant. In the data, the ward census never exceeds
70, so we set the capacity of the simulated ED to 70 as well. The
min operator caps the impact that census can have on the simulated
service rate as a grows. The irrelevant covariates irrelevant_1,⋯,irrelevant_43 are added to the data in order to assess how boosting performs in high dimensions. We explicitly include them in (<ref>) to remind ourselves that the simulated data is high-dimensional. Forty of the irrelevant variables are generated synthetically as described in the next subsection, while the rest are variables from the original dataset not used in the simulation.
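For readers wishing to reproduce the simulation, the following sketch evaluates the service rate (<ref>)-(<ref>) with scipy's log-normal distribution; the assignment of the coefficients to age and ESI follows our reading of the model, and the query values in the example are arbitrary.

```python
import numpy as np
from scipy.stats import lognorm

M, SIGMA = -1.8, 0.74  # log-mean and log-sd of the fitted AFT model

def theta(age, esi, census_t, a):
    """theta(X(t)); the irrelevant covariates enter with coefficient 0."""
    lin = (-0.0071 * age + 0.022 * esi
           - min(a * census_t / 70.0, 2.0)
           + 0.10 * (age >= 34 and esi == 5)
           - 0.10 * (age >= 34 and esi <= 4))
    return np.exp(lin)

def service_rate(t, age, esi, census_t, a):
    """lambda(t,x) = theta(x) * phi_l(theta(x) t) / (1 - Phi_l(theta(x) t))."""
    th = theta(age, esi, census_t, a)
    # scipy's lognorm parameterization: shape s = sigma, scale = exp(m)
    pdf = lognorm.pdf(th * t, s=SIGMA, scale=np.exp(M))
    sf = lognorm.sf(th * t, s=SIGMA, scale=np.exp(M))
    return th * pdf / sf

print(service_rate(0.1, age=50, esi=3, census_t=35, a=2))
```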
Simulation model
Using (<ref>) and (<ref>), we simulate a naturalistic
dataset of 10,000 patient visit histories. The value of a will be
varied from 0 to 3 in order to study the impact of time-dependent
covariates on hazard estimation. Each patient is associated with a
46-dimensional covariate vector consisting of:
* The time-varying ward census. The initial value is sampled from its marginal empirical distribution in the original dataset. To simulate its trajectory over a patient's stay, for every timestep advance of 0.002 days (≈3 minutes), a Bernoulli(0.02) random variable is generated. If it is one, then the census is incremented by a normal random variable with zero mean and standard deviation 10. The result is truncated if it lies outside the range [1,70], the upper end being the capacity of the ED.
* The other five time-static covariates in the original dataset. These are sampled from their marginal empirical distributions in the original dataset. Two of the variables (age and ESI) influence the service rate, while the other three are irrelevant.
* An additional forty time-static covariates that do not affect the service rate (irrelevant covariates). Their values are drawn uniformly from [0,1].
We also generate independent censoring times (rounded to the nearest 0.002 days) for each visit from an exponential distribution. For each simulation, the rate of the exponential distribution is set to achieve an approximate target of 25% censoring.
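A sketch of the trajectory and censoring generators is given below; the initial census draw and the censoring rate are stand-ins (the actual simulation samples the initial value from the empirical marginal and tunes the rate to hit roughly 25% censoring).

```python
import numpy as np

rng = np.random.default_rng(1)
DT, CAP = 0.002, 70  # timestep (days) and ED capacity

def simulate_census(T_max):
    """Piecewise-constant ward census path on a 0.002-day grid."""
    steps = int(np.ceil(T_max / DT))
    census = np.empty(steps)
    census[0] = rng.integers(1, CAP + 1)   # stand-in for the empirical draw
    for k in range(1, steps):
        c = census[k - 1]
        if rng.random() < 0.02:            # Bernoulli(0.02) jump indicator
            c += rng.normal(0.0, 10.0)     # N(0, 10) increment
        census[k] = np.clip(c, 1, CAP)     # truncate to [1, 70]
    return census

def censoring_time(rate):
    """Exponential censoring time, rounded to the nearest 0.002 days."""
    return np.round(rng.exponential(1.0 / rate) / DT) * DT

path = simulate_census(1.0)
print(path[:5], censoring_time(rate=0.35))
```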
Comparison benchmarks
When the covariates are static in time, a few software packages are available for performing hazard estimation with tree ensembles. Given that the data is simulated from a log-normal hazard, we compare our nonparametric method to two correctly specified parametric estimators:
* The blackboost estimator in the R package mboost <cit.> provides a tree boosting procedure for fitting the log-normal hazard function. In order to apply this to the simulated data, we make ward census a time-static covariate by fixing it at its initial value.
* Transformation forests <cit.> in the R package trtf can also fit log-normal hazards. Moreover, it allows for left-truncated and right-censored data. Since the ward census variable is simulated to be piecewise constant over time, we can treat each segment as a left-truncated and right-censored observation. Thus for this simulation, transformation forests are able to handle time-dependent covariates with time-static effects. This falls in between the static covariate/static effect blackboost estimator and our fully nonparametric one.
Since the service rate model used in the simulations is in fact log-normal, the benchmark methods above enjoy a significant advantage over our nonparametric one, which is not privy to the true distribution. In fact, when a=0 the log-normal hazard (<ref>) depends only on time-static covariates, so the benchmarks should outperform our nonparametric estimator. However, as a grows, we would expect a reversal in relative performance.
To compare the performances of the estimators, we use Monte Carlo integration to evaluate the relative mean squared error
%MSE = 𝔼_X[∫_0^1 {λ(t,X)-λ̂(t,X)}^2dt]/𝔼_X [∫_0^1 λ(t,X)^2dt].
The Monte Carlo integrations are conducted using an independent test set of 10,000 uncensored patient visit histories. For the test set, ward census is held fixed over time at the initial value, and we use the grid {0,0.02,0.04,⋯,1} for the time integral. The numerator above is then estimated by the average of {λ(t,x)-λ̂(t,x)}^2 evaluated at the 51×10,000 points of (t,x). The denominator is estimated in the same manner.
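The Monte Carlo computation can be sketched as follows; the function names and the test-set interface are ours.

```python
import numpy as np

def relative_mse(lam_true, lam_hat, X_test, t_grid):
    """Monte Carlo estimate of %MSE over a test set of covariate vectors.

    lam_true, lam_hat : callables of (t, x)
    X_test            : (N, p) array of test covariates, held fixed over time
    t_grid            : e.g. np.arange(0, 1.0001, 0.02), the 51-point grid
    """
    num, den = 0.0, 0.0
    for x in X_test:
        lt = np.array([lam_true(t, x) for t in t_grid])
        lh = np.array([lam_hat(t, x) for t in t_grid])
        num += np.mean((lt - lh) ** 2)  # average over the time grid
        den += np.mean(lt ** 2)
    return 100.0 * num / den
```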
Results
For the implementation of our estimator in Section <ref>, the value of ε and the number of trees m̂ are jointly determined using ten-fold cross validation. The candidate values we tried for ε are {0.003, 0.004, 0.005, 0.006, 0.007}, and we limit m̂ to no more than 1,000 trees. A wider range of values can of course be explored for better performance (at the cost of more computations). As comparison, we run an ad-hoc version of our algorithm in which all trees use the same number of splits, as is the case in standard boosting <cit.>. This approach does not explicitly ensure that the trees will be ε-gradients for a pre-specified ε. The number of splits and the number of trees used in the ad-hoc method are jointly determined using ten-fold cross-validation.
In order to speed up convergence at the m-th iteration for both approaches, instead of using the step-size ν_n/(m+1) of Algorithm <ref>, we performed line-search within the interval (0,ν_n/(m+1)]. While Lemma <ref> shows that a smaller shrinkage ν_n is always better, this comes at the expense of a larger m̂ and hence computation time. For simplicity we set ν_n=1 for all the experiments here.
For fitting the blackboost estimator, we use the default setting of nu=0.1 for the step-size taken at each iteration. The other hyperparameters, mstop (the number of trees) and maxdepth (maximum depth of trees), are chosen to directly minimize the relative MSE on the test set. This of course gives the blackboost estimator an unfair advantage over our estimator, which is on top of the fact that it is based on the same distribution as the true model. Transformation forest (using also the true distribution) is fit using code kindly provided by Professor T. Hothorn.[In the code 100 trees are used in the forest, which takes about 700 megabytes to store the fitted object when applied to our simulated data.]

Variable selection. The relative importance of
variables <cit.> for our estimator are given in
Table <ref> for all four cases a=0,1,2,3. The four
factors that influence the service rate (<ref>) are
explicitly listed, while the irrelevant covariates are grouped
together in the last column. When a=0, the service rate does not
depend on census, and we see that the importance of census and the
other irrelevant covariates are at least an order of magnitude smaller
than the relevant ones. As a increases, census becomes more and more
important as correctly reflected in the table. Across all the cases
the importance of the relevant covariates are at least an order of
magnitude larger than the others, suggesting that our estimator is
able to pick out the influential covariates and largely avoid the
irrelevant ones.
Presence of time-dependent covariates. Table <ref> presents the relative MSEs for the estimators as the service rate function (<ref>) becomes increasingly
dependent on the time-varying census variable. When a=0 the service rate depends only on time-static covariates, so as expected, the parametric log-normal benchmarks perform the best when applied to data simulated from a log-normal AFT model.
However, as a increases, the service rate becomes increasingly
dependent on census. The corresponding performances of both benchmarks deteriorate dramatically, and they are handily outperformed by the proposed estimator. We note that the inclusion of just one time-dependent covariate is enough to degrade the
performances of the benchmarks, despite the fact that they have the
exact same parametric form as the true model.
Finally we find comparable performance among the ad-hoc boosted
estimator and our proposed one, although a slight edge goes to the latter especially in the more difficult simulations with larger a. The results here demonstrate that there is a place in the survival
boosting literature for fully nonparametric methods like this one that
can flexibly handle time-dependent covariates.
Discussion
Our estimator can also potentially be used to evaluate the
goodness-of-fit of simpler parametric hazard models. Since our
approach is likelihood-based, future work might examine whether model
selection frameworks like those in <cit.> can be extended to
cover likelihood functionals. For this, <cit.> provides
some guidance for determining the effective degrees of freedom for the
boosting estimator. The ideas in <cit.> may also be germane.
The implementation presented in Section <ref> is one of many possible ways to implement our estimator. We defer the design of a more refined implementation to future research, along with open-source code.
Acknowledgements. The review team provided many insightful comments that significantly improved our paper. We are grateful to Brian Clarke, Jack Hall, Sahand Negahban, and Hongyu Zhao for helpful discussions. Special thanks to Trevor Hastie for early formative discussions. The dataset used in Section <ref> was kindly provided by Dr. Kito Lord.
APPENDIX: PROOFS
§.§ Proof of Proposition <ref>
Writing
R(F)=𝔼( ∫_0^1 Y(t)· e^{F(t,X(t))} dt − Δ F(T,X(T)) ),
we can apply (<ref>) to establish the first part of the
integral in (<ref>) when F∈𝔽∪{logλ}. To
complete the representation, it suffices to show that the point
process
M(B)=Δ· I[{T,X(T)}∈ B]
has mean ∫_B λ dμ, and then apply Campbell's formula. To
this end, write N(t)=I(T≤ t) and consider the filtration σ{X(s),Y(s),N(s):s≤ t}. Then N(t) has the Doob-Meyer form
dN(t)=λ(t,X(t))Y(t)dt+dM(t) where M(t) is a
martingale. Hence
𝔼{ M(B) } = 𝔼( ∫_0^1 I[{t,X(t)}∈ B] dN(t))
= 𝔼( ∫_0^1 Y(t)· I[{t,X(t)}∈ B]·λ(t,X(t)) dt )
+ 𝔼(∫_0^1 I[{t,X(t)}∈ B] dM(t))
= ∫_B λ dμ
+ 𝔼(∫_0^1 I[{t,X(t)}∈ B] dM(t)),
where the last equality follows from (<ref>). Since
I[{t,X(t)}∈ B] is predictable because X(t) is, the desired
result follows if the stochastic integral ∫_0^1I[{t,X(t)}∈ B]dM(t) is a martingale. By Section 2 of Aalen <cit.>, this is true if M(t) is square-integrable. In fact, M(t)=N(t)-∫_0^tλ(t,X(t))dt is bounded because λ(t,x) is bounded above by <ref>. This establishes
(<ref>).
Now note that for a positive constant Λ the function e^y-Λ y
is bounded below by both -Λ y and Λ y+2Λ{1-log2Λ},
hence e^y-Λ y≥Λ|y|+2Λmin{0,1-log2Λ}.
Since Λmin{0,1-log2Λ} is non-increasing in Λ, <ref> implies that
e^F(t,x)-λ(t,x)F(t,x) ≥min{ e^F(t,x)-Λ_LF(t,x),e^F(t,x)-Λ_UF(t,x)}
≥Λ_L|F(t,x)|+2Λ_Umin{0,1-log(2Λ_U)}.
Integrating both sides and using the norm equivalence relation (<ref>)
shows that
R(F) ≥ Λ_L‖F‖_μ,1 + 2Λ_U min{0,1−log(2Λ_U)}
≥ (2Λ_L/α_𝔽)‖F‖_∞ + 2Λ_U min{0,1−log(2Λ_U)}
≥ (2Λ_L/α_𝔽)‖F‖_μ,2 + 2Λ_U min{0,1−log(2Λ_U)}.
The lower bound (<ref>) then follows from the second
inequality. The last inequality shows that R(F) is coercive on
(𝔽,⟨·,·⟩_μ). Moreover the
same argument used to derive (<ref>) shows that R(F) is
smooth and convex on (𝔽,⟨·,·⟩_μ). Therefore a unique minimizer F^* of R(F) exists in
(𝔽,⟨·,·⟩_μ). Since <ref>
implies there is a bijection between the equivalence classes of
(𝔽,⟨·,·⟩_μ) and the functions
in 𝔽, F^* is also the unique minimizer of R(F) in
𝔽. Finally, since e^{F(t,x)}−λ(t,x)F(t,x) is pointwise
bounded below by λ(t,x){1−logλ(t,x)},
R(F)≥∫(λ−λlogλ)dμ=R(logλ) for all
F∈𝔽.
§.§ Proof of Lemma <ref>
By a pointwise-measurable argument (Example 2.3.4 of <cit.>) it
can be shown that all suprema quantities appearing below are
sufficiently well behaved, so outer integration is not
required. Define the Orlicz norm
‖X‖_Φ=inf{C>0:𝔼Φ(|X|/C)≤ 1} where
Φ(x)=e^{x^2}−1. Suppose the following holds:
‖sup_{F∈𝔽_ε^Ψ_n}|{R̂_n(F)−R̂_n(0)}−{R(F)−R(0)}|‖_Φ ≤ κ' J_{𝔽_ε}/n^1/4,
‖sup_{G∈𝔽: ‖G‖_∞≤ 1}|‖G‖_μ̂,1−‖G‖_μ,1|‖_Φ ≤ κ''J_{𝔽_ε}/n^1/2,
where J_{𝔽_ε} is the complexity measure (<ref>),
and κ',κ'' are universal constants. Then by Markov's
inequality, (<ref>) holds with probability
at least 1−2exp[−{η n^1/4/(κ'J_{𝔽_ε})}^2], and
sup_{G∈𝔽: ‖G‖_∞≤ 1}{‖G‖_μ,1−‖G‖_μ̂,1} < 1/α_𝔽
holds with probability at least
1−2exp[−{n^1/2/(κ''J_{𝔽_ε})}^2]. Since
α_𝔽>1 and η<1, (<ref>) and
(<ref>) jointly hold with probability at least
1−4exp[−{η n^1/4/(κ J_{𝔽_ε})}^2]. The
lemma then follows if (<ref>)
implies (<ref>). Indeed, for any non-zero F∈𝔽,
its normalization G=F/‖F‖_∞ is in 𝔽_ε by
construction (<ref>). Then (<ref>) implies that
‖F‖_∞/‖F‖_μ̂,1=1/‖G‖_μ̂,1 ≤ α_𝔽
because
1/α_𝔽 > ‖G‖_μ,1−‖G‖_μ̂,1 ≥ 2/α_𝔽−‖G‖_μ̂,1,
where the last inequality follows from the definition of α_𝔽 (<ref>).
Thus it remains to establish (<ref>) and (<ref>),
which can be done by applying the symmetrization and maximal
inequality results in Sections 2.2 and 2.3.2 of <cit.>. Write
_n(F)=(1/n)∑_i=1^nl_i(F) where
l_i(F)=∫_0^1Y_i(t)e^F(t,X_i(t))dt-Δ_iF(T_i,X_i(T_i))
are independent copies of the loss
l(F)=∫_0^1 Y(t)· e^F(t,X(t))dt-Δ· F(T,X(T)),
which is a stochastic process indexed by F∈. As was shown in
Proposition <ref>, {l(F)}=R(F). Let
ζ_1,⋯,ζ_N be independent Rademacher random
variables that are independent of Z={(X_i(·),Y_i(·),T_i)}_i=1^n. It
follows from the symmetrization Lemma 2.3.6 of <cit.> for
stochastic processes that the left hand side of (<ref>) is
bounded by twice the Orlicz norm of
sup_{F∈𝔽_ε^Ψ_n}|1/n∑_i=1^n ζ_i{l_i(F)−l_i(0)}|
≤ 1/n sup_{F∈𝔽_ε^Ψ_n}|∑_i=1^n ζ_i∫_0^1
Y_i(t){e^{F(t,X_i(t))}−1} dt|
+ 1/n sup_{F∈𝔽_ε^Ψ_n}|∑_i=1^n ζ_iΔ_i F(T_i,X_i(T_i))|.
Now hold Z fixed so that only ζ_1,⋯,ζ_n are
stochastic, in which case the sum in the second line
of (<ref>) becomes a separable subgaussian
process. Since the Orlicz norm of ∑_i=1^n ζ_i a_i is
bounded by (6∑_i=1^n a_i^2)^1/2 for any constants
a_i, we obtain the following Lipschitz property for any
F_1,F_2∈𝔽_ε^Ψ_n:
‖∑_i=1^n ζ_i∫_0^1 Y_i(t){e^{F_1(t,X_i(t))}−e^{F_2(t,X_i(t))}} dt‖_{Φ|Z}^2
≤ 6∑_i=1^n [∫_0^1 Y_i(t){e^{F_1(t,X_i(t))}−e^{F_2(t,X_i(t))}} dt]^2
≤ 6e^{2Ψ_n}∑_i=1^n (∫_0^1 Y_i(t)·|F_1(t,X_i(t))−F_2(t,X_i(t))| dt)^2
≤ 6e^{2Ψ_n}∑_i=1^n ∫_0^1 Y_i(t){F_1(t,X_i(t))−F_2(t,X_i(t))}^2 dt
= 6ne^{2Ψ_n}‖F_1−F_2‖_μ̂,2^2,
where the second inequality follows from |e^x-e^y|≤ e^max(x,y)|x-y|
and the last from the Cauchy-Schwarz inequality. Putting the Lipschitz
constant (6n)^1/2e^Ψ_n obtained above into Theorem 2.2.4
of <cit.> yields the following maximal inequality: There is
a universal constant κ' such that
‖sup_{F∈𝔽_ε^Ψ_n}|∑_i=1^n ζ_i∫_0^1 Y_i(t)
{e^{F(t,X_i(t))}−1} dt|‖_{Φ|Z}
≤ κ' n^1/2 e^Ψ_n ∫_0^Ψ_n {log𝒩(u,𝔽_ε^Ψ_n,μ̂)}^1/2 du
≤ κ' n^1/2 e^Ψ_n Ψ_n J_{𝔽_ε},
where the last line follows from (<ref>). Likewise the
conditional Orlicz norm of the supremum of
|∑_i=1^n ζ_iΔ_i F(T_i,X_i(T_i))|
is bounded by κ' J_{𝔽_ε} n^1/2 Ψ_n. Since neither
bound depends on Z, plugging back into (<ref>)
establishes (<ref>):
‖sup_{F∈𝔽_ε^Ψ_n}|{R̂_n(F)−R̂_n(0)}−{R(F)−R(0)}|‖_Φ
≤ 2κ' J_{𝔽_ε} Ψ_n e^Ψ_n/n^1/2 · {1+e^{−Ψ_n}}
≤ 4κ' J_{𝔽_ε}/n^1/4,
where Ψ_n e^Ψ_n=n^1/4 by (<ref>). On noting that
‖G‖_μ̂,1 = 1/n∑_i=1^n∫_0^1 Y_i(t)|G(t,X_i(t))| dt,
‖G‖_μ,1 = 𝔼{∫_0^1 Y(t)|G(t,X(t))| dt},
(<ref>) can be established using the same approach.
§.§ Proof of Lemma <ref>
For m<m̂, applying (<ref>) to
R̂_n(F̂_{m+1})=R̂_n(F̂_m−(ν_n/(m+1))ĝ_{F̂_m}^ε)
yields
R̂_n(F̂_{m+1})
= R̂_n(F̂_m) − (ν_n/(m+1))⟨g_{F̂_m},ĝ_{F̂_m}^ε⟩_μ̂
+ (ν_n^2/(2(m+1)^2))∫(ĝ_{F̂_m}^ε)^2 exp{F̂_m+ρ(F̂_{m+1}−F̂_m)} dμ̂
< R̂_n(F̂_m) − (εν_n/(m+1))‖g_{F̂_m}‖_μ̂,2 + ν_n^2 e^Ψ_n/(2(m+1)^2),
where the bound for the second term is due to (<ref>)
and the bound for the integral follows from
∫(ĝ_{F̂_m}^ε)^2 dμ̂=1
(Definition <ref> of an ε-gradient) and
‖F̂_m‖_∞, ‖F̂_{m+1}‖_∞<Ψ_n for
m<m̂ (lines 5-6 of Algorithm <ref>). Hence for
m≤m̂, (<ref>) implies that
R̂_n(F̂_m) < R̂_n(0)
+ ∑_m=0^∞ ν_n^2 e^Ψ_n/(2(m+1)^2) < R̂_n(0)+1 ≤ 2
because ν_n^2 e^Ψ_n < 1 under (<ref>). Since
max_{m≤m̂}‖F̂_m‖_∞<Ψ_n, and using our
assumption sup_{F∈𝔽_ε^Ψ_n}|R̂_n(F)−R(F)|<1 in
the statement of the lemma, we have
R(F̂_m) ≤ R̂_n(F̂_m)+|R̂_n(F̂_m)−R(F̂_m)| < 3.
Clearly the minimizer F^* also satisfies R(F^*)≤ R(0)<3.
Thus coercivity (<ref>) implies that
‖F̂_m‖_∞,‖F^*‖_∞ < α_𝔽β_Λ,
so the gap γ̂ defined in (<ref>) is bounded
as claimed.
It remains to establish (<ref>), for
which we need only consider the case
R̂_n(F̂_m̂)−R̂_n(F^*)>0. The termination
criterion g_{F̂_m}=0 in Algorithm <ref> is never
triggered under this scenario, because by Proposition <ref> this
would imply that F̂_m̂ minimizes R̂_n(F)
over the span of {φ̂_nj(t,x)}_j, which also contains F^* (Remark <ref>). Thus either
m̂=∞, or the termination criterion ‖F̂_m̂−(ν_n/(m̂+1))ĝ_{F̂_m̂}^ε‖_∞≥Ψ_n in line 5 of Algorithm <ref> is met. In the latter case
Ψ_n ≤ ‖F̂_m̂−(ν_n/(m̂+1))ĝ_{F̂_m̂}^ε‖_∞ ≤ α_𝔽‖F̂_m̂−(ν_n/(m̂+1))ĝ_{F̂_m̂}^ε‖_μ̂,2
≤ α_𝔽(∑_m=0^m̂−1 ν_n/(m+1)+1)
where the inequalities follow from (<ref>) and from
‖ĝ_{F̂_m}^ε‖_μ̂,2=1. Since the sum is diverging, the inequality also holds when m̂ is sufficiently large (e.g. m̂=∞).
Given that F^* lies in the span
of {φ̂_nj(t,x)}_j, the Taylor expansion (<ref>) is valid for R̂_n(F^*).
Since the remainder term in the expansion is non-negative, we have
R̂_n(F^*) = R̂_n(F̂_m + F^*−F̂_m) ≥ R̂_n(F̂_m) + ⟨g_{F̂_m},F^*−F̂_m⟩_μ̂.
Furthermore for m ≤ m̂,
⟨g_{F̂_m},F̂_m−F^*⟩_μ̂ ≤ ‖F̂_m−F^*‖_μ̂,2·‖g_{F̂_m}‖_μ̂,2
≤ ‖F̂_m−F^*‖_∞·‖g_{F̂_m}‖_μ̂,2
≤ γ̂·‖g_{F̂_m}‖_μ̂,2.
Putting both into (<ref>) gives
R̂_n(F̂_{m+1})
< R̂_n(F̂_m) + (εν_n/(γ̂(m+1)))⟨g_{F̂_m},F^*−F̂_m⟩_μ̂
+ ν_n^2 e^Ψ_n/(2(m+1)^2)
≤ R̂_n(F̂_m) + (εν_n/(γ̂(m+1))){R̂_n(F^*)−R̂_n(F̂_m)} + ν_n^2 e^Ψ_n/(2(m+1)^2).
Subtracting R̂_n(F^*) from both sides above and denoting
δ_m=R̂_n(F̂_m)−R̂_n(F^*),
we obtain
δ_{m+1}
< (1−εν_n/(γ̂(m+1)))δ_m
+ ν_n^2 e^Ψ_n/(2(m+1)^2).
Since the term inside the first parenthesis is between 0 and 1, solving
the recurrence yields
δ_m̂ < δ_0∏_m=0^m̂−1(1−εν_n/(γ̂(m+1)))
+ ν_n^2 e^Ψ_n ∑_m=0^∞ 1/(2(m+1)^2)
≤ max{0,δ_0}exp(−(ε/γ̂)∑_m=0^m̂−1 ν_n/(m+1)) + ν_n^2 e^Ψ_n
≤ e·max{0,δ_0}exp(−(ε/(α_𝔽γ̂))Ψ_n) + ν_n^2 e^Ψ_n,
where in the second inequality we used the fact that 0≤1+y≤ e^y
for |y|<1, and the last line follows from (<ref>).
The Lambert function (<ref>) in Ψ_n=W(n^1/4) is
asymptotically log y−loglog y, and in fact by Theorem 2.1
of <cit.>, W(y)≥log y−loglog y for y≥ e. Since
by assumption n≥55>e^4, the above becomes
δ_m̂ < e·max{0,δ_0}(log n/(4n^1/4))^{ε/(α_𝔽γ̂)} + ν_n^2 e^Ψ_n.
The last step is to control δ_0, which is bounded
by 1−R̂_n(F^*) because R̂_n(F̂_0)=R̂_n(0)≤1.
Then under the hypothesis |R̂_n(F^*)−R(F^*)|<1,
we have
δ_0 ≤ 1−R(F^*)+1 < 2−R(F^*).
Since (<ref>)
implies R(F^*)≥2Λ_U min{0,1−log(2Λ_U)},
δ_0 < 2−R(F^*) ≤ 2 + 2Λ_U max{0,log(2Λ_U)−1} < 2β_Λ.
§.§ Proof of Proposition <ref>
Let δ=log n/(4n^1/4), which is less than one for n≥ 55 >
e^4. Since α_𝔽,γ̂≥1 and ε≤1, it follows that
δ < (log n/(4n^1/4))^{ε/(α_𝔽γ̂)}.
Now define the following probability sets
S_1 = {sup_{F∈𝔽_ε^Ψ_n}|{R̂_n(F)−R̂_n(0)}−{R(F)−R(0)}|<δ/3}
S_2 = {|R̂_n(0)−R(0)|<δ/3}
S_3 = {|R̂_n(F^*)−R(F^*)|<δ/3}
S_4 = {the empirical norm equivalences (<ref>) hold},
and fix a sample realization from ∩_k=1^4 S_k. Then the
conditions required in Lemma <ref> are satisfied with
sup_{F∈𝔽_ε^Ψ_n}|R̂_n(F)−R(F)|<2δ/3, so
γ̂ (and hence ‖F̂_m̂‖_∞) is bounded and (<ref>) holds. Since Algorithm <ref> ensures that ‖F̂_m̂‖_∞<Ψ_n, we have F̂_m̂∈𝔽_ε^Ψ_n and therefore it also follows that |R̂_n(F̂_m̂)−R(F̂_m̂)|<2δ/3. Combining (<ref>) and (<ref>) gives
‖F̂_m̂−F^*‖_μ,2^2 ≤ (2/min_t,x(λ^*∧λ̂))(2δ/3+δ/3
+ {R̂_n(F̂_m̂)−R̂_n(F^*)})
< (2/min_t,x(λ^*∧λ̂))(δ+2eβ_Λ(log n/(4n^1/4))^{ε/(α_𝔽γ̂)}
+ (1/16)·log n/(4n^1/4))
< (13β_Λ/min_t,x(λ^*∧λ̂))(log n/(4n^1/4))^{ε/(α_𝔽γ̂)},
where the second inequality follows from (<ref>) and
ν_n^2 e^Ψ_n=log n/(64n^1/4), and the last from (<ref>). Now, using the inequality |e^x−e^y|≤max(e^x,e^y)|x−y| yields
‖λ̂−λ^*‖_μ,2^2
< 13β_Λ max_t,x(λ^*∨λ̂)^2/min_t,x(λ^*∧λ̂)·(log n/(4n^1/4))^{ε/(α_𝔽γ̂)},
and the stated bound follows from F^*=logλ since 𝔽 is correctly specified (Proposition <ref>).
The next task is to lower bound ℙ(∩_k=1^4 S_k).
It follows from Lemma <ref> that
ℙ(S_1∩ S_4) ≥ 1−4exp{−(log n/(12κ J_{𝔽_ε}))^2}.
Bounds on ℙ(S_2) and ℙ(S_3) can be obtained using
Hoeffding's inequality. Note from (<ref>) that
R̂_n(0)=∑_i=1^n∫_0^1 Y_i(t)dt/n and
R̂_n(F^*)=∑_i=1^n l_i(F^*)/n for the loss
l(·) defined in (<ref>). Since 0≤∫_0^1
Y_i(t)dt≤1 and
−‖F^*‖_∞ < l(F^*) ≤ e^{‖F^*‖_∞}+‖F^*‖_∞,
ℙ(S_2) ≥ 1−2exp{−2n^1/2(log n/12)^2},
ℙ(S_3) ≥ 1−2exp{−2n^1/2(log n/(36e^{‖F^*‖_∞}))^2}.
By increasing the value of κ and/or replacing J_{𝔽_ε}
with max(1,J_{𝔽_ε}) if necessary, we can combine the
inequalities to get a crude but compact bound:
ℙ{∩_k=1^4 S_k} ≥ 1−8exp{−(log n/(κ J_{𝔽_ε}e^{‖F^*‖_∞}))^2}.
Finally, since ‖F^*‖_∞ = ‖logλ‖_∞ < max{|logΛ_L|,|logΛ_U|}, we can replace e^{‖F^*‖_∞} in the probability bound above by Λ_L^{-1}∨Λ_U.
§.§ Proof of Proposition <ref>
It follows from (<ref>) that λ^* is the orthogonal projection of
λ onto (𝔽,⟨·,·⟩_μ).
Hence
‖λ̂−λ‖_μ,2^2 = ‖e^{F^*}−λ‖_μ,2^2 + ‖e^{F̂_m̂}−e^{F^*}‖_μ,2^2
= min_{F∈𝔽}‖e^F−λ‖_μ,2^2 + ‖e^{F̂_m̂}−e^{F^*}‖_μ,2^2
≤ min_{F∈𝔽}‖e^F−λ‖_μ,2^2 + max_t,x(λ^*∨λ̂)^2‖F̂_m̂−F^*‖_μ,2^2,
where the inequality follows from |e^x−e^y|≤max(e^x,e^y)|x−y|.
Bounding the last term in the same way as Proposition <ref>
completes the proof. To replace e^{‖F^*‖_∞} in (<ref>) by Λ_L^{-1}∨Λ_U, it suffices to show that Λ_L ≤ λ^*(t,x) ≤ Λ_U. Since the value of λ^* over one of its piecewise constant regions B is ∫_B λ dμ/μ(B), the desired bound follows from <ref>. We can also replace max_t,x(λ^*∨λ̂) and min_t,x(λ^*∧λ̂) with max_t,x(Λ_U∨λ̂) and min_t,x(Λ_L∧λ̂) respectively.
Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis
Björn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, Michael Wojatzki
Some users of social media are spreading racist, sexist, and otherwise hateful content.
For the purpose of training a hate speech detection system, the reliability of the annotations is crucial, but there is no universally agreed-upon definition.
We collected potentially hateful messages and asked two groups of internet users to determine whether they were hate speech or not, whether they should be banned or not and to rate their degree of offensiveness. One of the groups was shown a definition prior to completing the survey.
We aimed to assess whether hate speech can be annotated reliably, and the extent to which existing definitions are in accordance with subjective ratings.
Our results indicate that showing users a definition caused them to partially align their own opinion with the definition but did not improve reliability, which was very low overall.
We conclude that the presence of hate speech should perhaps not be considered a binary yes-or-no decision, and raters need more detailed instructions for the annotation.
§ INTRODUCTION
Social media are sometimes used to disseminate hateful messages.
In Europe, the current surge in hate speech has been linked to the ongoing refugee crisis.
Lawmakers and social media sites are increasingly aware of the problem and are developing approaches to deal with it, for example promising to remove illegal messages within 24 hours after they are reported <cit.>.
This raises the question of how hate speech can be detected automatically.
Such an automatic detection method could be used to scan the large amount of text generated on the internet for hateful content and report it to the relevant authorities.
It would also make it easier for researchers to examine the diffusion of hateful content through social media on a large scale.
From a natural language processing perspective, hate speech detection can be considered a classification task: given an utterance, determine whether or not it contains hate speech.
Training a classifier requires a large amount of data that is unambiguously hate speech.
This data is typically obtained by manually annotating a set of texts based on whether a certain element contains hate speech.
The reliability of the human annotations is essential, both to ensure that the algorithm can accurately learn the characteristics of hate speech, and as an upper bound on the expected performance <cit.>.
As a preliminary step, six annotators rated 469 tweets. We found that agreement was very low (see Section 3).
We then carried out group discussions to find possible reasons. They revealed that there is considerable ambiguity in existing definitions.
A given statement may be considered hate speech or not depending on someone's cultural background and personal sensibilities.
The wording of the question may also play a role.
We decided to investigate the issue of reliability further by conducting a more comprehensive study across a large number of annotators, which we present in this paper.
Our contribution in this paper is threefold:
* To the best of our knowledge, this paper presents the first attempt at compiling a German hate speech corpus for the refugee crisis.[Available at <https://github.com/UCSM-DUE/IWG_hatespeech_public>]
* We provide an estimate of the reliability of hate speech annotations.
* We investigate how the reliability of the annotations is affected by the exact question asked.
§ HATE SPEECH
For the purpose of building a classifier, warner2012 define hate speech as “abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation”.
More recent approaches rely on lists of guidelines such as a tweet being hate speech if it “uses a sexist or racial slur” <cit.>.
These approaches are similar in that they leave plenty of room for personal interpretation, since there may be differences in what is considered offensive.
For instance, while the utterance “the refugees will live off our money” is clearly generalising and maybe unfair, it is unclear if this is already hate speech.
More precise definitions from law are specific to certain jurisdictions and therefore do not capture all forms of offensive, hateful speech, see e.g. matsuda1993.
In practice, social media services are using their own definitions which have been subject to adjustments over the years <cit.>.
As of June 2016, Twitter bans hateful conduct[“You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.
We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”, The Twitter Rules].
With the rise in popularity of social media, the presence of hate speech has grown on the internet.
Posting a tweet takes little more than a working internet connection but may be seen by users all over the world.
Along with the presence of hate speech, its real-life consequences are also growing.
It can be a precursor and incentive for hate crimes, and it can be so severe that it can even be a health issue <cit.>.
It is also known that hate speech does not only mirror existing opinions in the reader but can also induce new negative feelings towards its targets <cit.>.
Hate speech has recently gained some interest as a research topic on the one hand – e.g. <cit.> – but also as a problem to deal with in politics such as the No Hate Speech Movement by the Council of Europe.
The current refugee crisis has made it evident that governments, organisations and the public share an interest in controlling hate speech in social media.
However, there seems to be little consensus on what hate speech actually is.
§ COMPILING A HATE SPEECH CORPUS
As previously mentioned, there is no German hate speech corpus available for our needs, especially not for the very recent topic of the refugee crisis in Europe.
We therefore had to compile our own corpus.
We used Twitter as a source as it offers recent comments on current events.
In our study we only considered the textual content of tweets that contain certain keywords, ignoring those that contain pictures or links.
This section provides a detailed description of the approach we used to select the tweets and subsequently annotate them.
To find a large amount of hate speech on the refugee crisis, we used 10 hashtags[#Pack, #Aslyanten, #WehrDich, #Krimmigranten, #Rapefugees, #Islamfaschisten, #RefugeesNotWelcome, #Islamisierung, #AsylantenInvasion, #Scharia] that can be used in an insulting or offensive way.
Using these hashtags we gathered 13 766 tweets in total, roughly dating from February to March 2016.
However, these tweets contained a lot of non-textual content which we filtered out automatically by removing tweets consisting solely of links or images.
We also only considered original tweets, as retweets or replies to other tweets might only be clearly understandable when reading both tweets together.
In addition, we removed duplicates and near-duplicates by discarding tweets that had a normalised Levenshtein edit distance smaller than .85 to an aforementioned tweet.
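To make the filtering step concrete, the near-duplicate removal can be sketched as follows (a minimal pure-Python sketch; the greedy keep-first strategy and the function names are our illustration, while the 0.85 threshold follows the description above):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance with a rolling row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalised_distance(a, b):
    # Edit distance scaled to [0, 1] by the length of the longer string.
    return levenshtein(a, b) / max(len(a), len(b), 1)

def deduplicate(tweets, threshold=0.85):
    # Keep a tweet only if it is far enough from every tweet kept so far.
    kept = []
    for t in tweets:
        if all(normalised_distance(t, k) >= threshold for k in kept):
            kept.append(t)
    return kept
```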
A first inspection of the remaining tweets indicated that not all search terms were equally suited for our needs.
The search term #Pack (vermin or lowlife) found a potentially large amount of hate speech not directly linked to the refugee crisis. It was therefore discarded.
As a last step, the remaining tweets were manually read to eliminate those which were difficult to understand or incomprehensible.
After these filtering steps, our corpus consists of 541 tweets, none of which are duplicates, contain links or pictures, or are retweets or replies.
As a first measurement of the frequency of hate speech in our corpus, we personally annotated them based on our previous expertise.
The 541 tweets were split into six parts and each part was annotated by two out of six annotators in order to determine if hate speech was present or not.
The annotators were rotated so that each pair of annotators only evaluated one part.
Additionally the offensiveness of a tweet was rated on a 6-point Likert scale, the same scale used later in the study.
Even among researchers familiar with the definitions outlined above, there was still a low level of agreement (Krippendorff's α = .38).
This supports our claim that a clearer definition is necessary in order to be able to train a reliable classifier.
The low reliability could of course be explained by varying personal attitudes or backgrounds, but clearly needs more consideration.
§ METHODS
In order to assess the reliability of the hate speech definitions on social media more comprehensively, we developed two online surveys in a between-subjects design. They were completed by 56 participants in total (see Table <ref>).
The main goal was to examine the extent to which non-experts agree upon their understanding of hate speech given a diversity of social media content.
We used the Twitter definition of hateful conduct in the first survey.
This definition was presented at the beginning, and again above every tweet.
The second survey did not contain any definition.
Participants were randomly assigned one of the two surveys.
The surveys consisted of 20 tweets presented in a random order. For each tweet, each participant was asked three questions.
Depending on the survey, participants were asked (1) to answer (yes/no) if they considered the tweet hate speech, either based on the definition or based on their personal opinion.
Afterwards they were asked (2) to answer (yes/no) if the tweet should be banned from Twitter.
Participants were finally asked (3) to answer how offensive they thought the tweet was on a 6-point Likert scale from 1 (Not offensive at all) to 6 (Very offensive). If they answered 4 or higher, the participants had the option to state which particular words they found offensive.
After the annotation of the 20 tweets, participants were asked to voluntarily answer an open question regarding the definition of hate speech.
In the survey with the definition, they were asked if the definition of Twitter was sufficient.
In the survey without the definition, the participants were asked to suggest a definition themselves.
Finally, sociodemographic data were collected, including age, gender and more specific information regarding the participant's political orientation, migration background, and personal position regarding the refugee situation in Europe.
The surveys were approved by the ethical committee of the Department of Computer Science and Applied Cognitive Science of the Faculty of Engineering at the University of Duisburg-Essen.
§ PRELIMINARY RESULTS AND DISCUSSION
Since the surveys were completed by 56 participants, they resulted in 1120 annotations.
Table <ref> shows some summary statistics.
To assess whether the definition had any effect, we calculated, for each participant, the percentage of tweets they considered hate speech or suggested to ban and their mean offensiveness rating. This allowed us to compare the two samples for each of the three questions. Preliminary Shapiro-Wilk tests indicated that some of the data were not normally distributed. We therefore used the Wilcoxon-Mann-Whitney (WMW) test to compare the three pairs of series. The results are reported in Table <ref>.
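A sketch of this testing pipeline, using scipy's implementations of the Shapiro-Wilk and Wilcoxon-Mann-Whitney tests (the per-participant values below are made-up placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

def compare_groups(group1, group2):
    # group1/group2: one summary value per participant, e.g. the
    # percentage of tweets labelled hate speech or the mean rating.
    for name, g in (("with definition", group1), ("without definition", group2)):
        w, p = stats.shapiro(g)          # preliminary normality check
        print(f"Shapiro-Wilk ({name}): W={w:.3f}, p={p:.3f}")
    u, p = stats.mannwhitneyu(group1, group2, alternative="two-sided")
    print(f"WMW: U={u:.1f}, p={p:.4f}")
    return u, p

g1 = np.array([40.0, 55.0, 35.0, 60.0, 45.0, 50.0])  # hypothetical, group 1
g2 = np.array([30.0, 25.0, 45.0, 35.0, 40.0, 20.0])  # hypothetical, group 2
compare_groups(g1, g2)
```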
Participants who were shown the definition were more likely to suggest to ban the tweet.
In fact, participants in group one very rarely gave different answers to questions one and two (18 of 500 instances or 3.6%).
This suggests that participants in that group aligned their own opinion with the definition.
We chose Krippendorff's α to assess reliability, a measure from content analysis, where human coders are required to be interchangeable. Therefore, it measures agreement instead of association, which leaves no room for the individual predilections of coders. It can be applied to any number of coders and to interval as well as nominal data <cit.>.
This allowed us to compare agreement between both groups for all three questions.
Figure <ref> visualises the results.
Overall, agreement was very low, ranging from α = .18 to .29.
In contrast, for the purpose of content analysis, Krippendorff recommends a minimum of α = .80, or a minimum of .66 for applications where some uncertainty is unproblematic <cit.>.
Reliability did not consistently increase when participants were shown a definition.
To measure the extent to which the annotations using the Twitter definition (question one in group one) were in accordance with participants' opinions (question one in group two), we calculated, for each tweet, the percentage of participants in each group who considered it hate speech, and then calculated Pearson's correlation coefficient.
The two series correlate strongly (r = .895, p < .0001), indicating that they measure the same underlying construct.
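This correlation check can be sketched as follows (the binary answer matrices are simulated placeholders with the study's dimensions; the real data would be read from the survey responses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# answers[g] is a (participants x 20 tweets) 0/1 matrix for group g,
# where 1 means the tweet was considered hate speech.
answers_group1 = rng.integers(0, 2, size=(25, 20))   # with definition
answers_group2 = rng.integers(0, 2, size=(31, 20))   # personal opinion

pct_definition = answers_group1.mean(axis=0) * 100   # per-tweet percentages
pct_opinion = answers_group2.mean(axis=0) * 100

r, p = stats.pearsonr(pct_definition, pct_opinion)
print(f"Pearson r = {r:.3f}, p = {p:.2g}")
```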
§ CONCLUSION AND FUTURE WORK
This paper describes the creation of our hate speech corpus and offers first insights into the low agreement among users when it comes to identifying hateful messages.
Our results imply that hate speech is a vague concept that requires significantly better definitions and guidelines in order to be annotated reliably.
Based on the present findings, we are planning to develop a new coding scheme which includes clear-cut criteria that let people distinguish hate speech from other content.
Researchers who are building a hate speech detection system might want to collect multiple labels for each tweet and average the results.
Of course this approach does not make the original data any more reliable <cit.>. Yet, collecting the opinions of more users gives a more detailed picture of objective (or intersubjective) hatefulness.
For the same reason, researchers might want to consider hate speech detection a regression problem, predicting, for example, the degree of hatefulness of a message, instead of a binary yes-or-no classification task.
In the future, finding the characteristics that make users consider content hateful will be useful for building a model that automatically detects hate speech and users who spread hateful content, and for determining what makes users disseminate hateful content.
§ ACKNOWLEDGMENTS
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant No. GRK 2167, Research Training Group ”User-Centred Social Media”.
|
http://arxiv.org/abs/1701.07699v2 | 20170126134834 | Discontinuous transition from direct to inverse cascade in three-dimensional turbulence | [
"Ganapati Sahoo",
"Alexandros Alexakis",
"Luca Biferale"
] | nlin.CD | [
"nlin.CD"
] | |
http://arxiv.org/abs/1701.08142v5 | 20170127183009 | Modelling Preference Data with the Wallenius Distribution | [
"Clara Grazian",
"Fabrizio Leisen",
"Brunero Liseo"
] | stat.ME | [
"stat.ME",
"stat.AP",
"stat.CO",
"stat.ML"
] |
Modelling Preference Data with the Wallenius Distribution
Clara Grazian1
Fabrizio Leisen2
Brunero Liseo3
1
University of Oxford, U.K.
2
University of Kent, U.K.
3
Sapienza Università di Roma, Italy
=========================================================================================================================================================================================================================================================================
The Wallenius distribution is a generalisation of the Hypergeometric
distribution where weights are assigned to balls of different colours.
This naturally defines a model for ranking categories which can be
used for classification purposes. Since, in general,
the resulting likelihood is not analytically available, we adopt an approximate Bayesian computational (ABC) approach
for estimating the importance of the categories. We illustrate the performance of the estimation procedure on simulated datasets. Finally, we use the new model for analysing two datasets concerning movies ratings and Italian academic statisticians' journal preferences. The latter
is a novel dataset collected by the authors.
Keywords: Approximate Bayesian Computation, Biased Urn, Movies ratings, Scientific Journals Preferences.
§ INTRODUCTION AND MOTIVATIONS
Human beings naturally tend, in everyday life, to compare and rank concepts and objects such as food, shops, singers and football teams, according to their preferences. In general, to rank a set of objects means to arrange them in order with respect to some characteristic. Ranked data are often employed in contexts where objective and precise measurements are difficult, unreliable, or even impossible to obtain and the observer is bound to collect ordinal information about preferences, judgments, relative or absolute ranking among competitors, called items.
Modern web technologies have made available a huge amount of ranked data, which can provide information about social and psychological behaviour, marketing strategies and political preferences. The codification of this information has been of interest to statisticians since the beginning of the 20th century. The Thurstone model (TM) assumes that each
item i is associated with a score W_i on which the comparative judgment is based; examples of unidimensional scores are the unrecorded finishing times of players in a race or any possible preference/attitude measure towards items.
Item i is preferred to item j if W_i is greater than W_j, see <cit.>. From the modelling point of view, this corresponds to assigning a probability p_ij=P(W_i > W_j).
The Bradley-Terry model (BT) is a particular case of the TM model with p_ij=p_i(p_i+p_j)^-1 where p_i,p_j≥ 0 are the item parameters reflecting the rate of each item, see <cit.>. Paired comparison models are always applicable to rankings after converting the latter in a suitable set of pairwise preferences. Conversely, paired comparisons of K items do not necessarily correspond to a ranking, due to the potential presence of circularities. A popular extension of the BT model is the Plackett-Luce model (PL).
Given a set of L items and a vector of probabilities (p_1,…, p_L), such that ∑_i=1^L p_i=1, the PL model assigns a probability distribution on the set of all possible rankings of these objects, which is a function of (p_1,…, p_L), see <cit.> and <cit.>.
There is no wide consensus about the use of choice or ranking data for better representing preferences and, very often, the best solution is problem specific. In this paper, we consider a sort of hybrid situation; in fact, we assume that choices related to single items can be further classified into categories of different relevance, and the ranking of categories is the main goal of the statistical analysis.
Our approach makes use of an extension of the Hypergeometric distribution, namely the Wallenius distribution <cit.> and can be used in the cases where data are available in the form of rankings, votes, preferences of items but the interest is in defining the importance of the categories in which the items can be
clustered.
The Wallenius distribution arises quite naturally in situations where sampling is performed without replacement and units in the population have different probabilities to be drawn. To be more specific, consider a urn with balls of c different colours: for i = 1, …, c there are m_i balls of colour i. In addition, colour i has a priority ω_i>0 which specifies its relative importance with respect to the other colours. A sample of n balls, with n < ∑_i=1^c m_i, is drawn sequentially without replacement. The Wallenius distribution describes the probability distribution for all possible strings of balls of length n drawn from this urn.
This experimental situation arises in very different contexts. For example, in auditing problems, transactions are examined by randomly selecting a single euro (or pound, or dollar) among the total amount, so larger transactions are more likely to be drawn and checked.
The Wallenius distribution was introduced by <cit.> and it is also known as the noncentral Hypergeometric distribution; this alternative name is justified by the fact that, when all the priorities ω_i's are equal, one gets back to the classical Hypergeometric distribution. However this name should be avoided because, as extensively discussed by <cit.>, this is also the name of another distribution, proposed by <cit.>.
Although the Wallenius distribution is a very natural statistical model for the aforementioned situations, its popularity in applied settings has been prevented by the lack of a closed form expression of the probability mass function: see Section <ref> for details.
The gist of this paper is the use of the priorities vector ω=(ω_1, …, ω_c) of the Wallenius distribution as a measure of importance for different values of a categorical variable.
In particular, we analyse two datasets, where we aim at ranking the categories rather than the items.
The first dataset considers data downloaded from the MovieLens website, which consists of 105,339 ratings across 10,329 movies performed by 668 users.
In this framework, it is of interest to classify the different genres in terms of satisfaction, in order to provide some useful feedback to users and/or providers.
The second dataset considers data we collected between October and November 2016 among Italian academic statisticians.
They indicated their journal preferences from the 2015 ISI “Statistics and Probability” list of Journals. In this context, we are interested in ranking the journal categories in order to provide a description of the research interests of the Italian Statistical community.
We adopt a Bayesian methodology which allows us to overcome the computational problems related to the lack of a closed form expression of the probability mass function of the Wallenius distribution.
We propose a novel approximate Bayesian computational approach <cit.>, where the vector of summary statistics is represented by the relative frequencies of the different categories and the acceptance mechanism is based on the distance in variation <cit.>
The paper is organized as follows: in Section <ref> we introduce the Wallenius distribution; in Section <ref> our approximated inferential strategy is described, based on an ABC algorithm. The performance of the algorithm has been tested in several examples, first in an extensive simulation study (Section <ref>) and then on two real datasets (Section <ref>). A discussion concludes the paper.
§ THE WALLENIUS DISTRIBUTION
Consider an urn with N balls of c different colours. There are m_i balls of the i-th colour, so that ∑_i=1^c m_i=N. In this situation, the multivariate Hypergeometric distribution is the discrete probability distribution which describes the sampling without replacement of n balls. In this framework, the probability of drawing a ball of a certain colour is proportional to the number of balls of the same colour. It is possible to generalise the experiment with a biased sampling of balls. For instance, each colour may have a different priority or importance, say ω_i>0, i=1, …, c. Suppose we have drawn n balls without replacement from the urn and let X_n= (X_1n, X_2n, …, X_cn) denote the frequencies of balls of different colours in the sample. Let Z_n be the colour of the ball drawn at time n.
In this setting, the probability that the next ball is of colour i also depends on its priority and is defined as
P( Z_n+1 = i |X_n ) = (m_i-X_in) ω_i/∑_j=1^c ( m_j - X_jn) ω_j.
<cit.> provided the above expression and the probability mass function of X_n for the case c=2. <cit.>
derived the following general expression. For a given integer n, and parameters
m= (m_1, …, m_c) and ω= (ω_1, …, ω_c),
the probability of observing a vector of colour frequencies x=(x_1, … , x_c) is
P(x; n, m, ω) = ∏_j=1^c m_jx_j∫_0^1
∏_j=1^c ( 1- t^ω_j/d )^x_j dt,
where ∑_i=1^c x_i=n and d= ∑_j=1^c ω_j (m_j - x_j). When ω_i = ω, for every i=1,…, c, the Wallenius distribution reduces to the multivariate Hypergeometric distribution. This can be easily shown by considering, without loss of generality, ω=1 and c=2. In this particular case, the probability mass function simplifies to
P(x; n, m )= mxN-mn-x∫_0^1 ( 1- t^1/d )^ndt.
The change of variable z=t^1/d leads to
P(x; n, m ) = mxN-mn-x
d ∫_0^1 ( 1- z )^n z^d-1 dz
=mxN-mn-xΓ(d+1) Γ(n+1)/Γ(n+d+1).
Since d=N-n, the probability mass function reduces to
P(x; n, m )= mxN-mn-x /Nn,
which is the probability mass function of the Hypergeometric distribution when two colours are considered.
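For small urns, the integral in (<ref>) can be evaluated directly by one-dimensional quadrature. The following sketch (a naive approach adequate only for small examples, not Fog's more careful approximations, and assuming n < N so that d > 0) also verifies the reduction above numerically:

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def wallenius_pmf(x, m, omega):
    # P(x; n, m, omega) from the general expression, by quadrature.
    x, m, omega = map(np.asarray, (x, m, omega))
    d = float(np.sum(omega * (m - x)))               # assumes d > 0
    coef = np.prod([comb(int(mj), int(xj)) for mj, xj in zip(m, x)])
    integrand = lambda t: float(np.prod((1.0 - t ** (omega / d)) ** x))
    val, _ = quad(integrand, 0.0, 1.0)
    return coef * val

m, x = [5, 7, 4], [2, 3, 1]                          # n = 6 balls drawn
print(wallenius_pmf(x, m, [1.0, 1.0, 1.0]))          # equal weights
n, N = sum(x), sum(m)
hyp = np.prod([comb(mj, xj) for mj, xj in zip(m, x)]) / comb(N, n)
print(hyp)                                           # matches: ~0.1748
```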
The Wallenius distribution has been underemployed in the statistical literature mainly
because the integral appearing in (<ref>) cannot be solved in a closed form and numerical approximations are necessary. <cit.> has made substantial contributions in this direction, providing approximations based either on asymptotic expansions or numerical integration. To our knowledge, the Wallenius distribution has only been used in a limited number of applications, mainly devoted to auditing problems <cit.>, ecology <cit.>, vaccine efficacy <cit.> and modeling of RNA sequences <cit.>.
In this work, we propose a novel look at the Wallenius distribution and we use it as statistical model, with the goal of ranking the values of a categorical random variable, based on preference data.
This is motivated by the sampling nature of the Wallenius distribution where an importance ω_j is associated with category j. The highest ω_j's represent the most popular categories. This naturally defines a new model which allows us to rank preferences.
Notice that we are implicitly assuming that all balls of the same colour have the same importance; this may not be the case in some applications: we will discuss this aspect in the final section.
Recently, the development of social networks and the competitive pressure to provide customized services has motivated many new ranking problems involving hundreds or thousands of objects.
Recommendations on products such as movies, books and songs are typical examples in which the number of objects is extraordinarily large. In recent years, many researchers in statistics and computer science have developed models to handle such big data. For instance, in Section <ref> we consider the problem of ranking customer movie choices in terms of genres such as Comedy, Drama and Science Fiction. We consider data downloaded from the MovieLens website which consists of 105,339 online ratings of 10,329 movies by 668 raters on a scale of 1-5. We rank the categories by estimating the priority parameters of the Wallenius distribution by using an approximate Bayesian approach.
In particular, in the next section, we introduce a simple ABC algorithm which allows us to avoid the direct computation of the integral in equation (<ref>).
§ BAYESIAN INFERENCE FOR THE WALLENIUS MODEL
Let x_h=(x_h1, …, x_hc) be a draw of n_h balls from the Wallenius urn described in equation (<ref>),
where h=1, …, k and ∑_j=1^c x_hj=n_h.
In this paper we adopt a Bayesian approach, where the parameter vector ω
is considered random. For a given prior distribution π(ω), the resulting posterior
is
π(ω|x_1, …, x_k ) ∝π(ω)
∏_h=1^k
[ ∫_0^1 ∏_j=1^c ( 1- t_h^ω_j/d_h )^x_hj dt_h
],
with d_h = ∑_j=1^c ω_j (m_j - x_hj).
Here k represents the sample size, that is, the number of different and conditionally
independent preference lists provided by the interviewees, while n_h (h=1, …, k) is the number of items
selected by the h-th interviewee.
The above posterior distribution depends on
k different integrals which cannot be reduced to a closed form.
This makes the implementation of standard Markov Chain Monte Carlo (MCMC) methods for estimating ω rather complex.
Indeed, most MCMC methods rely on the direct evaluation of the unnormalized posterior distribution (<ref>).
Although there are many available routines, in different software packages, to evaluate univariate integrals, we noticed that they lack accuracy especially for large values
of the n_h's and m.
We believe that this problem has had a strong negative impact
on the popularization of the Wallenius distribution despite a need for
interpretable models in the applied setting. For instance, the Wallenius distribution arises naturally in genetics as an alternative to the Fisher exact test, see <cit.> and the references therein.
In this section, we propose an algorithm which allows sampling from the posterior distribution introduced in (<ref>). The algorithm belongs to the class of approximate Bayesian computational (ABC) methods. This approach is philosophically different from the standard MCMC methods since the implementation only requires drawing samples from the generating model for a given parameter value. In the case of the Wallenius distribution, the task of generating draws is not hard, making the use of ABC particularly straightforward. <cit.> provided methods and algorithms to sample from the Wallenius distribution. He also made available a reliable R package, called BiasedUrn, which has been used extensively in this work.
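Since the ABC scheme below only requires forward simulation, even a naive sequential sampler following (<ref>) is sufficient for moderate n; the following sketch is ours (the R package above provides far more efficient samplers):

```python
import numpy as np

def rwallenius(n, m, omega, rng=None):
    # Draw n balls sequentially without replacement, following eq. (1).
    rng = rng or np.random.default_rng()
    m = np.asarray(m, dtype=float)
    omega = np.asarray(omega, dtype=float)
    x = np.zeros_like(m)                  # colour frequencies so far
    for _ in range(n):
        w = (m - x) * omega               # remaining balls times priority
        x[rng.choice(len(m), p=w / w.sum())] += 1
    return x.astype(int)

print(rwallenius(6, m=[5, 7, 4], omega=[0.5, 0.3, 0.2]))
```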
The ABC methodology comprises a class of popular algorithms that achieve posterior simulation while avoiding the computation of the likelihood function: see <cit.>, <cit.> and <cit.> for recent surveys. As remarked by <cit.>, the first genuine ABC algorithm was
introduced by <cit.> in a population genetics setting.
Explicitly, we consider a parametric model {f(·|θ), θ∈Θ} and suppose that a dataset
y∈𝒟⊂ℝ^n is observed. Let
ε>0 be a tolerance level, η a summary statistic
(which is often not sufficient) defined on 𝒟 and ρ a
distance or metric acting on the η space.
Let π be a prior distribution for θ; the ABC algorithm is described in Algorithm 1.
The basic idea behind the ABC is that, for a small (enough)
ε and a representative summary statistic, we can obtain a
reasonable approximation of the posterior distribution. The practical implementation of an ABC algorithm requires the selection of a suitable summary statistic, a distance and a tolerance level. In our specific case we summarized the data by using the arithmetic mean of the
observed and simulated frequency vectors, i.e., at the ℓ-th iteration of pseudo data generation, we have
η(x^(ℓ)) =p^(ℓ) = 1/k∑_h=1^k p^(ℓ)_h,
with
p^(ℓ)_h = (x_h1^(ℓ)/n_h, …, x_hc^(ℓ)/n_h )
to be compared with the relative frequencies observed in the sample
η(x^(t))=p^(t)= 1/k∑_h=1^k p^(t)_h.
with
p^(t)_h = ( x_h1/n_h, …, x_hc/n_h ).
Since the frequencies p^(ℓ)=( p_1^(ℓ),…, p_c^(ℓ)) and p^(t)= ( p_1,…, p_c) can be interpreted as discrete probability distributions, it is natural to compare them through the “distance in variation” <cit.> metric
ρ(p^(ℓ),p^(t))=1/2∑_j=1^c |p_j^(ℓ)-p_j|
Regarding the setting of the tolerance level, we refer to Section <ref>, where the algorithm is tested on simulated data.
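Putting the pieces together, a minimal ABC rejection sampler for ω might look as follows (a sketch reusing the rwallenius simulator above; the Dirichlet prior anticipates the discussion in the next subsection):

```python
import numpy as np

def summary(draws, n):
    # Mean of the relative-frequency vectors.
    return np.mean([x / nh for x, nh in zip(draws, n)], axis=0)

def tv_distance(p, q):
    # Distance in variation between two discrete distributions.
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def abc_wallenius(data, n, m, eps, n_sims, alpha=None, rng=None):
    # data: list of k observed frequency vectors; n: list of draw sizes n_h.
    rng = rng or np.random.default_rng()
    alpha = np.ones(len(m)) if alpha is None else alpha     # Dir(1,...,1)
    s_obs = summary(data, n)
    accepted = []
    for _ in range(n_sims):
        omega = rng.dirichlet(alpha)                        # draw from prior
        sims = [rwallenius(nh, m, omega, rng) for nh in n]  # pseudo data
        if tv_distance(summary(sims, n), s_obs) <= eps:     # accept/reject
            accepted.append(omega)
    return np.array(accepted)
```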
§.§.§ The prior distribution
The vector of parameters ω= (ω_1, …, ω_c) assumes values in ℝ_+^c and different priors can be considered. However, one must take into account that the priority parameters ω_j must be interpreted in a relative way.
In fact, the quantity d in the p.m.f. of the Wallenius distribution (defined in equation (<ref>)) depends on the priority parameters ω. In particular,
d= ∑_j=1^c ω_j (m_j - x_j).
If we consider two different vectors ω' and ω such that ω' = κω for κ > 0, we have that
ω_j'/d'=κω_j/∑_j=1^c κω_j (m_j - x_j)=ω_j/∑_j=1^c ω_j (m_j - x_j)=ω_j/d
where d' and d are computed respectively with ω' and ω. Equation (<ref>) implies that the p.m.f. of the Wallenius distribution does not change if we consider the vector of priorities ω' instead of ω. This induces an identifiability issue, which can be resolved by a normalization step.
From this perspective, the most natural way forward is to assume that ∑_j=1^cω_j=1, and to assume a Dirichlet prior on the normalized vector. Hereafter we will assume that the Dirichlet prior we adopt in the simulations and the real data examples is symmetric (i.e., all the hyperparameters are equal). Our default choice will be to set them all equal to 1, making the prior uniform on its support.
An alternative default choice, especially useful when c is large, is given by α = 1/c, as explained in <cit.>.
§.§.§ Alternative computational approaches
The R package BiasedUrn allows the approximate numerical evaluation of the probability mass function of the Wallenius distribution. In a classical setting, this makes feasible the computation of the MLE.
In a Bayesian setting this enables the implementation of standard MCMC algorithms, such as the Metropolis-Hastings sampler.
Nonetheless, we deem more appropriate to use the ABC approach illustrated in this section for several reasons.
First, the output of the Bayesian approach is far richer than the one available in a classical setting. For instance, in Section <ref> we are able to easily compute important summaries of the posterior distribution, i.e. the probability p_ij=(ω_i>ω_j).
Second, standard MCMC methods require repeated evaluations of the likelihood function. This could lead to an unsustainable computational burden compared to ABC.
Last but not least, we have performed a simulation study regarding the behaviour of the maximum likelihood estimator of the vector
ω and we noticed that it typically tends to produce unreliable and unstable estimates when the “true” ω is close to the boundary of the simplex and/or when the number of categories is large.
§ SIMULATION STUDY
In order to test Algorithm <ref> with the summary statistics shown in Section <ref>, we have conducted an extensive simulation study, with different scenarios. We performed 20 repeated simulations of k draws from the Wallenius distribution, where each draw consists of a number n_h (h=1, …, k) of balls. We use the prior distribution defined in Section <ref>, i.e. a Dirichlet prior 𝒟ir(1,…,1). As already stated in Section <ref>, we use the summary statistics and the distance in variation defined in equations (<ref>) and (<ref>). The tolerance level ε has been chosen with a pilot simulation where 10^5 values have been simulated by fixing the tolerance level to a very large value. Then, the distribution of the distances from the true values has been studied. The tolerance level is fixed as a small quantile of this distribution (it is common practice to fix it as the quantile of level 0.05). The complete procedure will be described in the following.
The simulated experiments have been performed for different values of c, ranging between 2 and 20, and using three configurations for both m and ω, as explained below:
* same number of balls for each colour, i.e. m_j=m, j=1, …,c; uniform importance weights, i.e. ω_j=ω, j=1, …,c;
* increasing values for m_j's (all the integers between 1 and c) and ω's (all the integers between 1 and c, normalized to sum to one), j=1, …, c;
* increasing values for m_j's (all the integers between 1 and c) and decreasing values for the ω's (all the integers between c and 1, normalized to sum to one), j=1, …, c;
Finally, we have used three different sample sizes, namely k=5, k=50 and k=1000.
The values n_h have been taken to be half the total number of balls in the urn. The results are available in Tables <ref>, <ref> and <ref>.
Surprisingly, as the sample size k increases, the root mean squared error (RMSE) remains relatively stable.
Results are less accurate for those configurations where both ω and m are uniform, while they are more accurate for configurations where ω and m follow an opposite ordering. This may be explained by observing that data are carrying more information on ω in this situation.
The RMSE is decreasing almost everywhere as the value of c increases: the only case where this is not true is the case of both ω and m uniform.
This may suggest that the Wallenius distribution does not perform well when the “true” model is the simpler classical multivariate Hypergeometric model, especially when the number of categories c is large.
Table <ref>, <ref> and <ref> also show the average acceptance rates of the ABC algorithm used in the simulation experiments. The acceptance rate depends on the value of the tolerance level ϵ chosen in the experiment: we have followed the strategy described in <cit.>, where a pilot run is done to study the distribution of the distance between the summary statistics computed on the observed data and on the simulated data. Then, ε is chosen to be a quantile of the empirical distribution of this distance. We have chosen to consider the quantile of level 0.05. With this automatic choice of ε we obtain an acceptance rate of about 0.01-0.02 on average. We obtained lower acceptance rates in the case of a small number of colours. These rates are compatible with the average tolerance level. It could be possible to reduce the RMSE by reducing the tolerance level ε, however there is a balance between the goodness of the approximation and the computational cost. In an applied context, it is always advisable to compare several tolerance levels. We will propose this comparison in Section <ref>. In this context, we use only one threshold ε (in the automatic way above described) to focus the analysis on a Monte Carlo comparison by varying the sample size and the number of colours in the urn.
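The quantile-based tolerance selection described above can be sketched by reusing the previous helpers (again our illustration, not the exact implementation used for the tables):

```python
import numpy as np

def pilot_epsilon(data, n, m, alpha, n_pilot=100_000, level=0.05, rng=None):
    # Run a pilot with an effectively infinite tolerance, then set
    # epsilon to a low quantile of the observed distances.
    rng = rng or np.random.default_rng()
    s_obs = summary(data, n)
    dists = np.empty(n_pilot)
    for i in range(n_pilot):
        omega = rng.dirichlet(alpha)
        sims = [rwallenius(nh, m, omega, rng) for nh in n]
        dists[i] = tv_distance(summary(sims, n), s_obs)
    return np.quantile(dists, level)
```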
As a conclusive remark of the section, we have performed a sensitivity analysis regarding the common hyperparameter of the Dirichlet prior. For values ranging from 1/c (the choice suggested in <cit.>) and 1 (the uniform prior), we have always obtained similar results in terms of RMSE, showing a sort of robustness of the model, at least with respect to this particular aspect.
§ REAL DATA APPLICATIONS
We now apply the proposed approach to two real datasets, in order to assess the applicability and the performance of the algorithm.
In both cases, we obtain the ratings of a group of individuals about specific elements from a list. Each individual may choose the number of elements to rate. The elements are then grouped in categories and the goal is to provide a ranking of the categories. By using the urn terminology of Section <ref>, the categories are the colours and each element from the list is a ball; the aim of the analysis is to perform inference on the importance weights of each colour.
§.§ Movies dataset
This dataset describes 5-star (with half-star increments) ratings from MovieLens, a movie recommendation service (http://grouplens.org/datasets/movielens/). The dataset may change over time. We consider the dataset which contains 105,339 ratings across 10,329 movies. These data were created by 668 users between April 03, 1996 and January 09, 2016. This dataset was generated on January 11, 2016. Users were randomly selected by MovieLens, with no demographic information, and each of them has rated at least 20 movies. The movies in the dataset were described by genre, following the IMDb information (https://www.themoviedb.org/); nineteen genres were considered in the dataset, including a “no genre” category; we have decided to eliminate the empty category from the analysis. In this case, we consider a movie to be "good" if its rating is at least 3.5 stars. Therefore, the vector X_n represents the frequencies of "good movies" in each category. Each film may be described by more than one genre. In this case we have proceeded as follows: we have ordered the genres in terms of their generality and then assigned to the movie the least general genre with which it was described. We have decided the following order (from the least general to the most general): Animation → Children → Musical → Documentary → Horror → Sci-Fi → Film Noir → Crime → Fantasy → War → Western → Mystery → Action → Thriller → Adventure → Romance → Comedy → Drama. Of course, this is an experimental choice, which may affect the results. Since the movies can be cross-classified, an interesting (and more realistic) development would be to consider a model which can take this feature into account; this is left for further research. We have then replicated the same prior choice and the same choices of distance and vector of summary statistics described in Section <ref>. The tolerance level ε has been chosen with a pilot simulation in order to produce a sample of size 10^5, as described in Section <ref>. In this particular case, we have used ε=0.5. Table <ref> displays the posterior mean estimates of the vector of importance weights ω. The importance weights seem to be very close, with small differences among them. This suggests that there is not a category which is particularly popular. Nonetheless, we can observe a slight preference for the Action and Sci-Fi genres and less interest in the Fantasy, War and Drama genres. We believe that this similarity in the importance weights is due to an excessive number of categories in the movies dataset. In this setting the graphical comparison of the marginal posterior distributions can provide a better insight into the customer preferences. Figure <ref> shows that there is more variability in the users' preferences for a particular movie genre, such as Action or Romance.
§.§ Statistical Journals dataset
The scientific areas (or “settori scientifici disciplinari”, S.S.D.) are a characterization used in the academic Italian system to classify knowledge in higher education.
The sectors are determined by the Italian Ministry of Education. In particular, there are 367 S.S.D., divided into 14 macro-areas, and each member of the academic staff pertains to a single sector. We have performed a survey on the preferences of the researchers in Statistics (Sector SECS-S/01) of Italian universities about the available scientific journals. It should be noted that researchers in Probability and Mathematical Statistics, Medical, Economic and Social Statistics are not included in this survey, because they pertain to different sectors. We have considered only staff with both teaching and research contracts. Postdoctoral fellows and PhD students have been excluded. In this survey we have used the 2015 “Statistics and Probability” list of journals of the Institute for Scientific Information (ISI). We have asked SECS-S/01 researchers to indicate their preferences in this list, between a minimum of ten and a maximum of twenty. One difference from the Movies example of Section <ref> is that the participants do not have to indicate the level of their preference, only a list of journals which each of the participants considers either
* prestigious and/or
* likely for a potential submission and/or
* professionally significant (in terms of frequency of readings).
The survey was conducted between 25th October 2016 and 4th November 2016. We have collected 174 responses, distributed, in terms of role, as follows: 49 Full professors (Professori Ordinari), 72 Associate Professors (Professori Associati) and 53 Assistant Professors, both fixed-term and tenure-track (Ricercatori a tempo indeterminato e a tempo determinato). We have then grouped the journals by category, considering five main classes of interest: Methodology, Probability, Applied Statistics, Computational Statistics and Econometrics and Finance. The list of journals and relative category is available in the Appendix. Among the 124 journals available in the “Statistics and Probability” ISI list, we have classified 23 journals in Probability, 45 in Methodology, 34 in Applied Statistics, 9 in Computational Statistics and 13 in Econometrics and Finance. We assume the Wallenius distribution for modelling the dataset, where c represents the number of the categories. The preferences of each respondent are summarized in a vector where the position of each entry represents the number of journals falling in the corresponding category. We consider that this vector is a realization of the Wallenius distribution.
The results are available in Figure <ref>, Figure <ref> and Table <ref>, which show that there seems to be a preference for research in Methodological and Applied Statistics among the researchers in Statistics and less interest in journals of Probability. As already stated, this reflects the fact that researchers in Mathematical Statistics and Probability do not pertain to the investigated sector.
These results also show that the effect of a decrease of the tolerance level seems to be a concentration of the posterior distributions of the importance weights ω, except for the weight relative to the Computational journals, for which there is a shift. As a possible explanation of this fact, one should consider that this category is under-represented in the list (at least, according our classification) with respect to the others. Table <ref> shows the estimated pair comparison probabilities for the journal categories.
§ DISCUSSION
In this paper we have considered the problem of ranking categories of items.
We have proposed a novel model based on the Wallenius distribution. In terms of an urn scheme, it generalizes the Hypergeometric distribution with an additional vector of parameters ω, which represents the importance of the different types of balls in the urn.
A referee noticed that “the model assumes that the balls of the same colours (eg. the journals in the same category) are equally likely to be drawn.” This assumption may not be justified, since, in the Journal example, journals in the same category may have different standing. This is exactly the reason why we propose the Wallenius model for ranking categories rather than single items; the weight ω refers to the entire categories and they do not discriminate within categories. However, it is certainly of scientific interest to pursue the above issue and to conceive a nested model
where items might be further ranked within categories; see, for example, <cit.>. In a Bayesian nonparametric setting, this approach could be further generalized by using nested non-exchangeable species sampling sequences, see <cit.> and <cit.>.
So far the Wallenius model has been definitely under-employed, due to the analytical intractability of the probability mass function. In this work we proposed an approximate Bayesian computational algorithm which provides a fast and reliable approach to the estimation of the vector of priorities ω. Our method is easy to implement and it might be very useful in several statistical applications where balls are drawn from the urn in a biased fashion. Paradigmatic examples of the importance of the Wallenius model especially appear in auditing where transactions are randomly checked with probability proportional to their monetary value. We analysed two datasets concerning movies ratings and Italian academic statisticians' journal preferences. The ABC algorithm allows us to estimate the importance of movies categories or journal preferences under the assumption of a Wallenius generating model. Future work will focus on the use of the Wallenius distribution to other areas of application and on the estimation of the category multiplicities m given the knowledge of the importance weights ω.
§ ACKNOWLEDGEMENTS
The authors are very grateful to Martin Ridout for his valuable comments on a first draft of the paper. This project has been funded by the Royal Society International Exchanges Grant “Empirical and Bootstrap Likelihood Procedures for Approximate Bayesian Inference”. Fabrizio Leisen was supported by the European Community's Seventh Framework Programme [FP7/2007-2013] under grant agreement no: 630677.
20
natexlab#1#1
[Airoldi et al.(2014)]Airoldi
Airoldi, E., Costa, T., Bassetti, F., Leisen, F. and Guindani, M. (2014).
Generalized species sampling priors with latent beta reinforcements.
Journal of the American Statistical Association 109, 1466–1480.
[Allingham et al.(2009)]allingham2009bayesian
Allingham, D., King, R.A.R., and Mengersen, K.L. (2009).
Bayesian estimation of quantile distributions.
Statistics and Computing
19(2), 189–201.
[Alvo and Yu(2014)]alvo
Alvo, M. and Yu, P. L. H. (2014).
Statistical Methods for Ranking Data.
Springer, New York.
[Bassetti, Crimaldi and Leisen (2010)]Bassetti
Bassetti, F., Crimaldi, I. and Leisen, F. (2010).
Conditionally identically distributed species sampling sequences.
Advances in Applied Probability 42, 433–459.
[Beaumont(2010)]bea:10
Beaumont, M. (2010).
Approximate Bayesian computation in evolution and ecology.
Annual Review of Ecology, Evolution, and Systematics
41, 379–406.
[Berger et al.(2015)Berger, Bernardo and Sun]berger2015
Berger, J. O., Bernardo, J. M. and Sun, D. (2015).
Overall objective priors.
Bayesian Analysis 10, 189–221.
[Bradley and Terry(1952)]bt52
Bradley, R. A. and Terry, M. E. (1952).
Rank analysis of incomplete block designs. I: The method of paired
comparisons.
Biometrika 39, 324–345.
[Bremaud(1998)]bremaud
Bremaud, P. (1998).
Markov Chains: Gibbs Fields, Monte Carlo Simulation
and Queues.
Springer-Verlag: New York.
[Chesson(1976)]chesson
Chesson, J. (1976).
A non-central multivariate Hypergeometric distribution arising from
biased sampling with application to selective predation.
Journal of Applied Probability 13, 795–797.
[Fisher(1935)]fisher1
Fisher, R. (1935).
The logic of inductive inference.
Journal of the Royal Statistical Society 98, 39–82.
[Fog(2008a)]fog1
Fog, A. (2008a).
Calculation methods for Wallenius' Noncentral Hypergeometric
Distribution.
Communications in Statistics - Simulation and Computation
37, 258–273.
[Fog(2008b)]fog2
Fog, A. (2008b).
Sampling methods for Wallenius' and Fisher's Noncentral
Hypergeometric Distributions.
Communications in Statistics - Simulation and Computation
37, 241–257.
[Gao et al.(2011)Gao, Fang, Zhang, Zhi and Cui]gao11
Gao, L., Fang, Z., Zhang, K., Zhi, D. and
Cui, X. (2011).
Length bias correction for RNA-seq data in gene set analyses.
Bioinformatics 27, 662–669.
[Gillett(2000)]gillett
Gillett, P. R. (2000).
Monetary unit sampling: a belief-function implementation for audit
and accounting applications.
International Journal of Approximate Reasoning 25,
43–70.
[Hernández-Suárez and Castillo-Chavez(2000)]castillo2000urn
Hernández-Suárez, C. M. and Castillo-Chavez, C. (2000).
Urn models and vaccine efficacy.
Statistics in Medicine 19, 827–835.
[Inskip et al.(2013)]inskip
Inskip, C., Ridout, M., Fahad, Z., Tully, R.,
Barlow, A., Greenwood Barlow C., Islam, M.A., Roberts, T.,
MacMillan, D. (2013).
Human–Tiger Conflict in Context: Risks to Lives
and Livelihoods in the Bangladesh Sundarbans.
Human Ecology 41, 169–186.
[Karabatsos and Leisen (2018)]KL2018
Karabatsos, G. and Leisen, F. (2018).
An approximate likelihood perspective on ABC methods.
Statistics Surveys 12, 66–104.
[Luce(1959)]luce
Luce, R. D. (1959).
Individual Choice Behavior: A Theoretical Analysis.
John Wiley and Sons Inc., New York.
[Manly(1974)]manly
Manly, B. J. (1974).
A model for certain types of selection experiments.
Biometrics 30(2), 281–294.
[Marden(1995)]marden
Marden, J. (1995).
Analyzing and Modeling Rank Data.
Chapman and Hall, London.
[Marin et al.(2012)Marin, Robert and Pudlo]marin-abc
Marin, J. M., Robert, C. P. and Pudlo, P. (2012).
Approximate Bayesian computational methods.
Statistics and Computing 22, 1167–1180.
[Plackett(1975)]plack
Plackett, R. L. (1975).
The analysis of permutations.
Journal of the Royal Statistical Society Series C 24,
193–202.
[Pritchard et al.(1999)Pritchard, Seielstad, Perez-Lezaun and
Feldman]Pritchard
Pritchard, J., Seielstad, M., Perez-Lezaun, A. and
Feldman, M. (1999).
Population growth of human Y chromosomes: a study of Y chromosome micro-satellites.
Molecular Biology and Evolution 16, 1791–1798.
[Thurstone(1927)]thurs
Thurstone, L. L. (1927).
A law of comparative judgment.
Psychological review 34, 273–286.
[Wallenius(1963)]walle
Wallenius, K. T. (1963).
Biased Sampling: The Non-Central Hypergeometric Probability Distribution.
Ph.D. thesis, Department of Statistics, Stanford University.
§ APPENDIX
|
http://arxiv.org/abs/1701.08104v1 | 20170127163205 | FM-Delta: Fault Management Packet Compression | [
"Tal Mizrahi",
"Yoram Revah",
"Yehonathan Refael Kalim",
"Elad Kapuza",
"Yuval Cassuto"
] | cs.NI | [
"cs.NI"
] |
FM-Delta: Fault Management Packet Compression
Technical Report
This technical report is an extended version of <cit.>, which was accepted to the IFIP/IEEE International Symposium on Integrated Network Management, IM 2017.
Tal Mizrahi, Yoram Revah, Yehonathan Refael Kalim, Elad Kapuza, Yuval Cassuto
Marvell Semiconductors, Technion
talmi@marvell.com, revahyo@gmail.com, {srefaelk@campus, eladkap@campus, ycassuto@ee}.technion.ac.il
January 2017
================================================================================================================================================================================================================================================================================================
Fault Management (FM) is a cardinal feature in communication networks. One of the most common FM approaches is to use periodic keepalive messages. Hence, switches and routers are required to transmit a large number of FM messages periodically, requiring a hardware-based packet generator that periodically transmits a set of messages that are stored in an expensive on-chip memory.
With the rapid growth of carrier networks, and as 5G technologies emerge, the number of users and the traffic rates are expected to significantly increase over the next few years. Consequently, we expect the on-chip memories used for FM to become a costly component in switch and router chips.
We introduce a novel approach in which FM messages are stored in compressed form in the on-chip memory, allowing to significantly reduce the memory size. We present FM-Delta, a simple hardware-friendly delta encoding algorithm that allows FM messages to be compressed by a factor of 2.6.
We show that this compression ratio is very close to the results of the zlib compression library, which requires much higher implementation complexity.
§ INTRODUCTION
§.§ Background
Network devices, such as switches and routers, are often required to transmit control-plane messages. The ability to generate and transmit messages has been recognized as an essential building block in network devices, not only in conventional networks, but also in Software-Defined Networks (e.g., <cit.>). Specifically, the ability to generate packets is important in the context of Fault Management (FM).
Fault detection is essential in large-scale networks, enabling fast recovery and effective troubleshooting; it is one of the key components in Operations, Administration, and Maintenance (OAM) <cit.>. FM is typically implemented as a combination of proactive and reactive mechanisms for detecting failures. Some of the most commonly deployed FM protocols (e.g., <cit.>) are implemented using periodic keepalive messages; a fault is detected when no keepalive messages are received from a given source for a given period of time.
§.§ FM Scaling
The scaling problem of storing FM messages is a real-life problem that some of the authors of this paper encountered while designing packet processor silicons. A switch or a router that runs an FM protocol periodically transmits FM messages. The rate of FM messages varies from 1 packet every ten minutes to 300 packets per second per service <cit.>. Typical carrier Ethernet devices support tens of thousands of services (e.g., 16k in <cit.> and 64k in <cit.>). Due to these large scales, typical devices do not provide FM support at the highest rate for all services simultaneously. As 5G technologies evolve, network devices will be required to support a larger number of services at a higher bandwidth. Thus, fault detection will be expected with a low detection time, implying a high rate of FM messages. Hence, we expect these scales to continuously increase in the next few years.
Example.
The rate of traffic that a network device is expected to generate for 64k services, assuming the FM message length is roughly 100B <cit.>, and assuming 300 packets per second per service <cit.>, is 64k × 300 × 100B ≅ 16 Gbps. Such high traffic rates cannot be handled by the device's software layer, and thus must be implemented in the device's hardware. A typical implementation (e.g., <cit.>) uses a hardware engine and an on-chip memory; the engine periodically reads the messages stored in the memory and transmits them to the network. In this case the required memory space for 64k services is 64k × 100B ≅ 6.4 MB. This is a very significant size for an on-chip memory, consuming an expensive area of roughly 18 mm^2 in a 28nm process. As a point of reference, the entire on-chip packet memory of a typical switch is on the order of a few megabytes, e.g., 12 MB in <cit.>.
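The figures in the example follow from straightforward arithmetic; the short check below restates the assumed constants:

```python
services = 64 * 1024   # 64k services
rate_pps = 300         # FM packets per second per service
pkt_bytes = 100        # FM message length in bytes

traffic_gbps = services * rate_pps * pkt_bytes * 8 / 1e9
memory_mb = services * pkt_bytes / 1e6

print(f"generated FM traffic ~ {traffic_gbps:.1f} Gbps")  # ~15.7, i.e. ~16 Gbps
print(f"on-chip FM memory   ~ {memory_mb:.1f} MB")        # ~6.6 (~6.4 MB above)
```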
This example illustrates that as carrier and mobile backhaul networks evolve, the significant size of the FM on-chip memory may become infeasible.
§.§ FM Packet Compression in a Nutshell
We argue that if FM packets are compressed when stored in the memory, then the expensive on-chip memory can be significantly reduced.
In our approach FM packets are compressed offline, and stored in compressed form in an on-chip memory. Each packet is decompressed by the hardware packet generation engine before it is transmitted to the network. Notably, packets are in compressed form only when stored in memory, and are decompressed before transmission, making the compression transparent to the network.
Why do we focus on FM messages? The solution we propose relies on three properties that are unique to the problem at hand: (i) the entropy of the FM packets stored in the memory is low, as they share common properties, (ii) the FM packet memory is accessed sequentially, since all packets are sent periodically, and (iii) we can determine the order of the FM packets in the memory.[In the FM-Delta system we present we have full control of the order in which packets are stored in the memory, and thus the order in which packets are accessed and decompressed.] These three properties significantly improve the effectiveness of the compression.
§.§ Related Work
Data compression has been analyzed in the context of network switches and routers, e.g., <cit.>. Specifically, data-plane packet compression has been widely discussed and analyzed in the literature, e.g., <cit.>. For example, packet compression is widely used in the Hypertext Transfer Protocol (HTTP) <cit.>. It has been shown that HTTP compression can be very effective <cit.>, in some cases reducing the data to 20% of its original size. However,
it has been shown <cit.> that packet compression is ineffective for random internet traffic, as this traffic typically has high entropy, and is often already compressed or encrypted. In this paper we focus on control-plane packet compression, and present a use case in which packet compression is highly effective.
§.§ Contributions
The contributions of this paper are as follows:
* We present a novel approach that uses packet compression to reduce the size of the on-chip memory used in Fault Management (FM) message transmission.
* We introduce FM-Delta, a simple delta encoding algorithm that can easily be implemented in hardware, and present a high-level design of a system that uses FM-Delta.
* We evaluated the algorithm over a synthetically generated FM packet database, and show that it offers a compression ratio that is comparable to state-of-the-art compression algorithms, such as Lempel-Ziv (LZ).[It is important to note that known LZ implementations (such as zlib <cit.>) are unlikely to be realizable in the low-level hardware setup of interest here, due to their significant processing and memory requirements.] The FM packet database that we used in our experiments is publicly available <cit.>.
§ A SYSTEM DESIGN
§.§ Overview
Fig. <ref> illustrates the design of a system that uses FM-Delta to periodically generate FM packets. The system consists of two main components:
* Software module – responsible for compressing the FM packets and storing them in the on-chip memory of the packet generation engine. The compression procedure is performed offline. When an FM packet needs to be added to the FM packet database, or removed from it, the software layer invokes an insert or a remove operation, instructing the packet generation engine to perform an incremental update in the FM database.
* Packet generation engine – this hardware module sequentially reads the packets from the on-chip FM packet memory, decompresses them in real-time, and transmits them to the network.
§.§ Compression Algorithm
We introduce FM-Delta, a simple delta encoding algorithm that we designed and implemented.
FM-Delta.
We present an outline of the FM-Delta algorithm. Given a sequence of N uncompressed packets, the compressed packets are represented as follows.
The first packet is represented in uncompressed form, as shown in Fig. <ref>.
The i^th packet, for i≥ 2, is compressed, as shown in Fig. <ref>.
The compressed packet consists of the following fields (Fig. <ref>):
* Length – represents the length in bytes of the original (uncompressed) packet.
* Delta Bitmap – indicates the differences between packet i and packet i-1. Every packet is divided into equal-sized words, and each bit in the bitmap indicates whether the respective word in the current packet is equal to the corresponding word in the previous packet. The word size is a parameter of the algorithm. Section <ref> discusses the effectiveness of the algorithm with various word sizes.
* Values – the values of the words that differ from the previous packet.
The decompression algorithm is presented in Fig. <ref>.
Common (de)compression algorithms, such as Lempel-Ziv <cit.>, are not hardware-friendly, as they require a complex iterative indirection over a sliding window. In FM-Delta, every packet is decompressed by comparing it to the previous (decompressed) packet using the Delta Bitmap. This simple comparison does not require an iterative procedure, and thus all the words of the packet can be compared (decompressed) in parallel, making FM-Delta a silicon-friendly algorithm.
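The following is a behavioural software model of the encoder and decoder (a sketch, not the hardware design; the zero-padding of packets to a whole number of words, and the padding of the shorter packet when lengths differ, are our assumptions, as the format description leaves these details open):

```python
WORD = 2  # word size in bytes; a parameter of the algorithm

def _words(pkt):
    # Split a packet into WORD-sized chunks, zero-padding the tail.
    pkt = pkt + b"\x00" * (-len(pkt) % WORD)
    return [pkt[i:i + WORD] for i in range(0, len(pkt), WORD)]

def fm_delta_compress(prev, cur):
    # Encode cur against prev as (length, delta bitmap, changed values).
    pw, cw = _words(prev), _words(cur)
    pw += [b"\x00" * WORD] * (len(cw) - len(pw))   # pad if cur is longer
    bitmap, values = [], []
    for p, c in zip(pw, cw):
        bitmap.append(p != c)
        if p != c:
            values.append(c)
    return len(cur), bitmap, values

def fm_delta_decompress(prev, comp):
    length, bitmap, values = comp
    pw = _words(prev)
    pw += [b"\x00" * WORD] * (len(bitmap) - len(pw))
    it = iter(values)
    out = b"".join(next(it) if d else w for d, w in zip(bitmap, pw))
    return out[:length]

p1 = bytes(range(100))
p2 = p1[:40] + b"\xff\xff" + p1[42:]               # two changed bytes
assert fm_delta_decompress(p1, fm_delta_compress(p1, p2)) == p2
```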
§.§ Insertion and Removal
As described above, the packet database is compressed offline, and accessed by the chip in real-time. An essential question is how the packet database is updated, i.e., how a new packet can be added to the database, or how a packet can be removed from it.
One approach that allows simple insertion and removal is to use a linked list. Each compressed packet in the memory is followed by a pointer that points to the location of the next packet in memory. The linked list approach is simple, but requires some memory overhead for the next pointers, and also may be inefficient due to fragmentation.
We suggest a more efficient approach that allows for simple insertion and removal of packets when using FM-Delta. In this approach (Fig. <ref>) packets are stored contiguously in the memory. The packet generation engine proceeds by a read-decompress-write procedure; every packet is read from memory, decompressed, and then written back to the memory. There is a cache that stores the previous packet, which enables the engine to decompress the current packet using FM-Delta.
Removal
Assume that we have N compressed packets in the memory, denoted by P_1, P_2, …, P_N, and we want to remove a packet P_k from the database. Removing the k^th packet from the database includes two operations that need to take place atomically: (i) removing the k^th packet from the memory, and (ii) updating the (k+1)^th delta-encoded packet. The latter is required since after removing packet P_k, the compressed (k+1)^th packet is encoded with respect to packet P_k-1. The two operations must be performed atomically to prevent inconsistent reading or decompression.
In order to perform the two operations above atomically, we assume that there is a software layer that triggers the removal operation, and that the removal itself is performed by the packet generation engine. When the software layer triggers the operation, it also provides k, the index of the packet to be removed, and P'_k+1, the newly compressed (k+1)^th packet.
Once the packet generation engine receives the removal request it:
* Performs the conventional read-decompress-write procedure until it reaches packet P_k-1.
* Reads and decompresses packets P_k and P_k+1, but does not write them back to the memory.
* Writes the newly compressed packet, P'_k+1 immediately after packet P_k-1.
* Continues the read-decompress-write procedure on packets P_k+2, …, P_N, so that each packet is written after the previously written packet, thus eliminating the gap created by removing P_k.
Insertion
Assume we have N packet in the memory, and we want to insert a new compressed packet, P, before packet P_k. As in the removal procedure, two operations need to occur atomically: (i) inserting packet P, and (ii) updating packet P_k to a newly encoded P'_k. Again, we assume that there is a software layer that provides the location k, the packet that needs to be inserted P, and the newly encoded P'_k.
Upon receiving an insertion request, the packet generation engine:
* Performs the conventional read-decompress-write procedure until it reaches packet P_k-1.
* Reads and decompresses packet P_k, but does not write it back to the memory.
* Writes the newly inserted (compressed) packet P immediately after P_k-1.
* For packets P_k+1, …, P_N, the engine proceeds by reading packet P_j, decompressing it, and then writing packet P_j-1 after the previously written packet.
Note that in steps 2-4 above, the packet generation engine takes care not to write over packets that have not yet been read. For example, if the compressed packet P_j is slightly shorter than P_j-1, the engine writes P_j-1 only after having read P_j+1. Thus, the packet generation engine maintains a small cache that stores the currently read packets.
The packet ordering, insertion, and removal are described in further detail in the extended version of this paper <cit.>.
§.§ Random Access to FM Packets
In our approach we assumed that the FM packet memory is always accessed sequentially. However, in some cases the FM application may require an urgent packet P_k to be sent immediately. The delta encoding scheme implies that in order to access the k^th packet the packet generation engine must sequentially read all the preceding packets.
In order to allow quick random access, our delta encoding scheme can be extended to include entry points. For example, the FM software layer can store packets P_10, P_20, P_30, … in uncompressed form. Thus, when the software layer instructs the packet generation engine to access the 21^st packet, it also provides the uncompressed P_20, and the engine can decompress P_21 with a single memory access. In this example every packet in the FM memory can be reached in at most 9 memory access operations.
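A sketch of the corresponding lookup, reusing fm_delta_decompress from the earlier sketch (the spacing constant and the names are ours):

ENTRY_SPACING = 10  # an uncompressed entry point is kept every 10 packets

def read_packet(k, entry_points, compressed):
    # roll forward from the nearest preceding entry point, so packet k is
    # reachable in at most ENTRY_SPACING - 1 delta-decompression steps
    base = (k // ENTRY_SPACING) * ENTRY_SPACING
    pkt = entry_points[base]
    for j in range(base + 1, k + 1):
        pkt = fm_delta_decompress(pkt, *compressed[j])
    return pkt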
For j = 1 to N:
    Read packet j
    Decompress packet j
    Transmit packet j
    Wait period/N

Remove(k, P):
    For j = 1 to k-1:
        Read packet j
        Decompress packet j
        Transmit packet j
    Decompress packet P
    Transmit packet P
    Write packet P to the previous location of packet k
    For j = k+2 to N:
        Read packet j
        Decompress packet j
        Transmit packet j
        Write packet j to the previous location of packet j-1

Insert(k, P, P'):
    For j = 1 to k-1:
        Read packet j
        Decompress packet j
        Transmit packet j
    Decompress packet P
    Transmit packet P
    Write packet P to the previous location of packet k
    Decompress packet P'
    Transmit packet P'
    Write packet P' after packet P
    For j = k+1 to N:
        Read packet j
        Decompress packet j
        Transmit packet j
        Write packet j after the previously written packet
§ EVALUATION
§.§ Data Set
We evaluated our FM-Delta compression system on 100 sets of synthetically generated packets.[We did not use publicly available packet traces since these traces either do not include FM packets, or do not include packet payloads, thus preventing effective data compression analysis.] Each set comprises 100k packets. Our synthesized data sets are publicly available <cit.>.
The data sets consists of two types of packets: Continuity Check Messages (CCM) <cit.>, and Bidirectional Forwarding Detection (BFD) <cit.> control messages. Each packet type was used on half of the data set. CCMs are defined over Ethernet, while BFDs are over IPv4-over-Ethernet. Each packet included a random number of VLAN tags (either 0, 1, or 2 tags).
The network topology can significantly affect the extent to which FM packets can be compressed in our setting. For example, if multiple FM packets are sent to the same destination device, then some of the packet fields may be similar in these packets. Moreover, in CCMs <cit.> the MEG ID field is a 48-byte field that has a different value in each Maintenance Entity Group (MEG) <cit.>. Thus, the number of devices per MEG can significantly affect the similarity among packets.
We assumed that the current device has a set of 32 MAC addresses, and thus the source MAC address of each packet was randomly chosen from the pool of 32 addresses, while the destination MAC address was randomly chosen[Throughout this section the `randomly chosen' refers to a uniformly distributed selection.] without constraints. The IP addresses of each BFD packet were randomly chosen. The VLAN IDs of each packet that included a VLAN tag was also randomly chosen. The MEG ID field in CCM packets was randomly chosen for each set of 3 CCM packets.[We assumed that the number of Maintenance Points (MP) <cit.> per Maintenance Entity Group (MEG) <cit.> is 4 on average.]
Our experiments were performed in two modes: ordered mode, in which packets were arranged so as to allow a higher compression ratio, and random mode, in which packets were ordered randomly. In the ordered mode, the arrangement was implemented according to two criteria: (i) packets were ordered according to their size, allowing alignment between the fields of consecutive packets, and (ii) packets from the same MEG were grouped together.
§.§ Results
We present a glimpse at some of our experimental results. Fig. <ref> illustrates the compression ratio[The compression ratio is the ratio between the uncompressed data set and the compressed data set.] using zlib. Since zlib supports nine possible compression levels, the graph presents the compression ratio for each of the levels.
Fig. <ref> presents the compression ratio of the delta encoding algorithm as a function of the word size, which is a design parameter of the FM-Delta algorithm. For the FM protocols analyzed in this work, a word size of 2 bytes is expected to provide the best performance.
Notably, although the delta encoding algorithm is significantly simpler, it provides a comparable compression ratio; zlib reaches a ratio of 2.9 at its highest compression level but is not hardware-friendly, while the hardware-friendly FM-Delta provides a ratio of 2.6.
Another significant result is that the ordered message set allows a higher compression ratio, emphasizing the advantage of optimizing the order of the data set. Such an optimization is feasible in light of the relative flexibility of FM specifications as to the order and exact timing of packet transmission.
§ DISCUSSION
§.§ Why FM-Delta Works
The compression approach we present is effective due to three important properties of the FM packet memory.
Entropy.
The ability to compress the packets stored in the on-chip memory depends on the entropy of the data. An important observation is that the FM packets at hand have low entropy for two main reasons: (i) The packets stored in the memory have a similar packet format, or at most a small number of possible packet formats. (ii) It is often the case that multiple packets share common properties, such as IP addresses, or VLAN identifiers.
Sequential access.
We assume that the set of packets that is used by the packet-generation engine is compressed in advance, and stored in the on-chip memory in compressed form. The engine periodically performs the following steps for each packet: (i) read a compressed packet from the memory, (ii) decompress the packet, and (iii) transmit the packet. Since all packets are transmitted periodically, the memory is always accessed sequentially, according to a fixed order. This unique property enables the use of sequential (de)compression algorithms, such as delta encoding and Lempel-Ziv.
Order.
Both of the algorithms we analyzed perform sequential compression and decompression, which is most efficient when similar packets are grouped together. Since we have control over the order of the FM packets in the memory, these packets can be arranged in a way that allows highly effective compression. The significance of the packet order is demonstrated in Section <ref>.
§ CONCLUSION AND OUTLOOK
In this paper we have shown that packet compression can significantly reduce the required on-chip memory in Fault Management protocol implementations.
Surprisingly, delta encoding, which is typically dismissed as an ineffective compression approach, is shown to be highly effective for packet compression. We introduce a simple and hardware-friendly delta encoding algorithm that provides a compression ratio of 2.6, making it possible to reduce the cost of packet-processing silicon.
Potential improvements of FM-Delta can be considered, for example by using a dictionary for common field values. Furthermore, the preliminary results presented in this paper can be further established by experimenting with a hardware implementation of FM-Delta, and by analyzing real-life traces of FM messages. Notably, the concepts we presented may be applied not only to FM messages, but also to other types of control messages that are stored by switches and routers.
ieeetr
|
http://arxiv.org/abs/1701.07683v1 | 20170126132241 | The Run Control system of the NA62 experiment | [
"Nicolas Lurkin"
] | physics.ins-det | [
"physics.ins-det"
] |
^1 School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT, UK
ncl@hep.ph.bham.ac.uk
The NA62 experiment at the CERN SPS aims at measuring the ultra-rare decay
K^+ →π^+ νν̅ with 10 % accuracy. This can be achieved by detecting about 100 Standard Model events with 10% background in 2–3 years of data taking.
The experiment consists of a large number of subsystems dedicated to the detection of the incoming kaon and outgoing pion and also focusing on particle identification and vetoing capabilities.
Run Control has been designed to link their trigger and data acquisition systems in a single central application easily controllable by educated but non-expert operators.
The application has been continuously evolving over time, integrating new equipments and taking into account requirements and feedback from operation.
Future development includes a more automatized system integrating the knowledge acquired during the operation of the experiment.
§ THE NA62 EXPERIMENT
The NA62 experiment at the CERN SPS aims to collect about 100 standard model (SM) events of the ultra-rare kaon decay K^+ →π^+ νν̅ with less than 10% background in 2–3 years of data taking.
This process is forbidden at tree level and can occur only through a Flavour Changing Neutral Current loop.
It is theoretically very clean
and the prediction of the branching ratio has been computed within the SM <cit.> to an exceptionally high degree of precision: ℬ(K^+→π^+νν̅)_th = (8.4 ± 1.0)× 10^-11.
This channel is therefore an excellent probe of the Standard Model and any deviation from this value would indicate a new physics process.
A total of 7 events were observed by the BNL E787/E949 experiments <cit.> including an estimated background of (2.6 ± 0.4) events.
The measured branching ratio ℬ(K^+→π^+νν̅)_exp=(1.73^+1.15_-1.05)× 10^-10 has an uncertainty far too large to conclude on a possible deviation from the SM prediction, which is known at ∼ 10% precision.
To achieve its ambitious goal, NA62 needs to accumulate O(10^13) K^+ decays using a secondary 75 GeV/c momentum unseparated hadron beam containing only ∼6% K^+.
Many different sub-detectors have been developed to identify the signal process and fight the
background from other kaon decays and beam accidental coincidences. The current status and achieved sensitivity of the experiment is reported in <cit.>.
§ THE NA62 RUN CONTROL
Run Control is an application that centrally controls and monitors all equipment involved in the trigger and data acquisition (TDAQ) system.
Its purpose is to allow a non-expert operator to supervise the data taking easily while still giving specialists the possibility to achieve a high level of control of their own equipment.
Technologies and architecture:
Operation of the experiment involves other control systems dedicated to hardware equipment (Detector Control System DCS, Gas System, Cryogenics). A coherent and natural choice was
the industrial software "WinCC Open Architecture".
This software is the central part of two frameworks, JCOP <cit.> and UNICOS <cit.>, already widely used at CERN in the LHC experiments.
Some of the components provided by these developer toolkits are of special relevance for NA62:
* DIM (Distributed Information Management) is the communication layer between Run Control and the numerous pieces of equipment distributed across the experiment <cit.>;
* The FSM toolkit manages the finite state machines with SMI++ processes <cit.> according to their definition (states, transition rules, actions);
* The configuration database tool defines and/or applies sets of parameters from a database;
* The Farm Monitoring and Control manages the PC farm which receives information from individual subdetectors and builds the complete events.
The NA62 TDAQ system is composed of many devices, most of which operate
in their own specific way.
This complication is overcome by internally using a Finite State Machine (FSM) model of each device, thereby presenting a simple and uniform control interface to the operator.
Little equipment-specific knowledge is integrated in Run Control, while a common generic-device interface is created to transmit commands and information.
Simple basic instructions are sent through this channel, and are received by the device control software that is responsible for executing the proper sequence of actions on the hardware.
More complex configurations rely on a scheme of flexible XML files where the details of the configuration are again handled by the receiver.
Multiple sets of configurations are stored in an Oracle database and can be quickly loaded and distributed to the relevant equipment when needed.
This architecture allows Run Control and the controlled systems to evolve independently from each other while keeping a permanent compatibility.
The software is implemented as a hierarchical tree of FSM where the state of each node is either defined by a set of rules summarizing the state of the child-nodes or by the evaluation of logical expressions involving parameters transmitted by the devices.
The external nodes of the tree implement the FSM models representing the specific hardware or software devices (boards, computers, crates and control software).
Their states are evaluated according to the value of parameters received from the corresponding piece of equipment.
The internal nodes represent logical subsystems (subdetectors, PC farm), summarizing the states of their own child-nodes following a set of rules.
Finally the top node further aggregates the state of the subsystems to represent the global state of the experiment.
A change of state is always propagated upwards from the device where this change originates to the top node.
Conversely, a command can be issued at any node level and is always propagated downwards to all the child-nodes until it reaches a device node (Figure <ref>).
At this point, the command is generated and transmitted to the standardized interface through the network using the DIM protocol.
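As an illustration of this hierarchy, a minimal Python sketch could look as follows; the class names and the rule representation are ours, the actual system is built from WinCC OA and SMI++ components, and the DIM communication is abstracted here behind a 'dim' object.

class LogicalNode:
    # internal node: its state summarizes the children, commands go down
    def __init__(self, children, rules):
        self.children, self.rules = children, rules

    def state(self):
        # 'rules' maps the child states to a summary state, e.g.
        # any "ERROR" -> "ERROR", all "READY" -> "READY"
        return self.rules([c.state() for c in self.children])

    def command(self, cmd):
        for c in self.children:
            c.command(cmd)

class DeviceNode:
    # leaf node: the state is evaluated from parameters published by the
    # equipment, commands are forwarded over DIM to its control software
    def __init__(self, dim):
        self.dim = dim

    def state(self):
        return self.dim.read_state()

    def command(self, cmd):
        self.dim.send(cmd)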
Infrastructure
Run Control is a distributed system spread across different machines in the experiment.
The core of the system is located on a dedicated WinCC OA data server on the technical network.
A machine bridging the networks hosts the DIM managers which are responsible for loosely binding all devices in the experiment and Run Control.
The experiment DAQ continues to run correctly standalone in case of a connectivity problem or if this node is cut from one of the networks.
As soon as communication is re-established, all clients reconnect to the DIM servers and Run Control resumes control over the experiment.
The bridge also receives external information
from the beam line complex.
The user interface itself runs on a different computer in the control room and is remotely connected to the main system.
Future developments
The first version of Run Control was successfully deployed for a dry run in 2012 and has continuously evolved since then.
New equipment and subdetectors have been delivered and subsequently integrated.
The experience acquired during the following years and the feedback from the operators brought Run Control to a good level of reliability and usability.
The next steps in the development are the automation of known procedures and automatic error detection and recovery.
The ELectronic Eye of NA62 (ELENA62) is the core system being developed and already partially deployed.
It provides a framework for monitoring functions, interactions with the operator through visual and audio notifications, and confirmation windows.
It is highly customizable and configurable as the monitoring and control of specific components of the experiment are provided through plug-in modules.
Among those already in use to increase the efficiency and quality of the data taking are:
the PC farm module that handles the PC nodes by restarting the acquisition software after a crash is detected, or that reboots the nodes when a hardware error occurs;
two monitoring modules of the beam-line magnets and vacuum in the decay region that provide a fast feedback to the operator;
another module that automates the start and end of run procedure, additionally guiding the operator when a manual action is necessary.
In conclusion, Run Control has already proved its usability and good reliability, confirming the technological choices made for its design.
It has now reached a matured level of development with minimal daily maintenance, allowing the integration of the experience acquired during its operation into a more autonomous system.
§ REFERENCES
99
Buras_2015 Buras AJ, Buttazzo D, Girrbach-Noe J and Knegjens R, 2015 JHEP 1511 033
E787/E949 Artamonov A V et al. 2009 Phys. Rev. D 79 092004
GRkaon16 Ruggiero G, in KAON16 proceedings, this conference
JCOP Holme O et al 2005 Conf. Proc. C 051010 WE2.1
UNICOS Gayet P et al. 2005 ICALEPCS proceedings WE2.2-6I
DIM Gaspar C et al. 2001 Comput. Phys. Commun. 140 102
SMI++ Franek B and Gaspar C 2004 IEEE Nuclear Science Symposium Conference Record vol 3 1831
|
http://arxiv.org/abs/1701.08153v1 | 20170127185625 | Geometry and numerical continuation of multiscale orbits in a nonconvex variational problem | [
"Annalisa Iuorio",
"Christian Kuehn",
"Peter Szmolyan"
] | math.DS | [
"math.DS"
] |
Annalisa Iuorio
Institute for Analysis and Scientific Computing, Vienna University of Technology
Wiedner Hauptstraße 8-10,
1040, Vienna
Austria
annalisa.iuorio@tuwien.ac.at
Faculty of Mathematics, Technical University of MunichBoltzmannstraße 385748 Garching bei München,Germany
ckuehn@ma.tum.de
Institute for Analysis and Scientific Computing, Vienna University of Technology
Wiedner Hauptstraße 8-10,
1040, Vienna
Austria
szmolyan@tuwien.ac.at
Primary 70K70; Secondary 37G15
We investigate a singularly perturbed, non-convex variational problem arising in materials science with a
combination of geometrical and numerical methods.
Our starting point is a work by Stefan Müller, where it is proven that the solutions of the variational
problem are periodic and exhibit a complicated multi-scale structure. In order to get more insight into the
rich solution structure, we transform the corresponding Euler-Lagrange equation
into a Hamiltonian system of first order ODEs and then use geometric singular perturbation theory to study
its periodic solutions. Based on the geometric analysis we construct an initial periodic orbit to start numerical
continuation of periodic orbits with respect to the key parameters. This allows us to observe the influence of the
parameters on the behavior of the orbits and to study their interplay in the minimization process. Our results
confirm previous analytical results such as the asymptotics of the period of minimizers predicted by Müller.
Furthermore, we find several new structures in the entire space of admissible periodic orbits.
Geometry and numerical continuation of multiscale orbits in a nonconvex variational problem
Peter Szmolyan
January 27, 2017
===========================================================================================
§ INTRODUCTION
The minimization problem we consider is to find
min_u ∈ U{ℐ^ε(u):=∫_0^1( ε^2 u_XX^2 +
W(u_X) + u^2 ) dX },
where U is a space containing all sufficiently regular functions u:[0,1]
→ℝ of the spatial variable X ∈[0,1],
0 < ε≪ 1 is a small parameter,
u_X=∂ u/∂ X, u_XX=∂^2 u/∂ X^2, and the
function W is a symmetric, double well potential; in particular, here W is chosen as
W(u_X)=1/4(u_X^2-1)^2.
This model arises in the context of coherent solid-solid phase transformation to describe
the occurrence of simple laminate microstructures in one-space dimension. Simple laminates
are defined as particular structures where two phases of the same material (e.g.,
austenite/martensite) simultaneously appear in an alternating pattern <cit.>. This
situation is shown schematically in Figure <ref>(a). These and related structures
have been intensively studied both in the context of geometrically linear
elasticity <cit.> and in the one of fully nonlinear
elasticity <cit.>. A comparison
between these two approaches is given by Bhattacharya <cit.>. We focus here on the
one-dimensional case starting from the work of Müller <cit.>, but a 2D approach has also
been proposed <cit.>.
An alternative choice of the functional W, which considerably simplifies energy calculations
for equilibria has been recently adopted by Yip <cit.>. The same functional with more general boundary
conditions has been treated by Vainchtein et al <cit.>.
In all these cases, very significant theoretical and experimental advances have been reached.
Nevertheless, many interesting features concerning the asymptotics and dynamics of these problems can still be explored.
We start from the one-dimensional model (<ref>)-(<ref>) analyzed
by Müller and introduce a different approach based on geometric singular perturbation
theory <cit.> which allows us to better understand the critical points of the
functional ℐ^ε and to obtain an alternative method eventually able
to handle more general functionals.
In <cit.> minimizers are proven to exhibit a periodic multi-scale structure (Figure <ref>):
a fast scale of order 𝒪(ε) describes the "jumps" between the two values of the
derivative u_X, and a slow scale of order 𝒪(ε^1/3) represents the distance between
two points with equal value of u_X. From a physical viewpoint, the two values of the derivative
u_X=± 1 model the two different phases of the material. The jumps describe the transition
between the phases and the regions with almost constant values of u_X correspond to parts of
the material occupied by the same phase.
One of the key results in <cit.> consists in an
asymptotic formula for the period of minimizing solutions, when the solution space U is chosen
as the set of all u ∈ H^2(0,1) subject to Dirichlet boundary conditions.
For ε→ 0, the period P^ε behaves as
P^ε=2(6 A_0 ε)^1/3 + 𝒪(ε^2/3),
where A_0=2 ∫_-1^1 W^1/2(w) dw.
The approach based on fast-slow analysis of the Euler-Lagrange equation applied here
allows us to identify geometrically certain classes
of periodic orbits. These orbits are used as starting solutions for numerical continuation using the software
package <cit.>. This powerful tool has been adopted for example by
Grinfeld and Lord <cit.> in their numerical analysis of small amplitude periodic solutions of (<ref>).
We provide here a detailed study of periodic solutions based upon one-parameter continuation in the parameters
ε and μ. It turns out that several fold bifurcations of periodic orbits
structure the parameter space. A numerical comparison with the law (<ref>) will be presented,
by means of a minimization process of the functional ℐ^ε along certain families
of periodic orbits. Our work also leads to new insights into the dependence of the period
on the parameters ε and μ for non-minimizing sequences of periodic orbits.
The paper is structured as follows: Section <ref> introduces the approach based on geometric singular
perturbation theory using the intrinsic multi-scale structure of the problem. We describe the
transformation of the Euler-Lagrange equation associated to the functional ℐ^ε
into a multiscale ODE system, along with the decomposition of periodic orbits into slow and fast pieces
using the Hamiltonian function.
We identify a family of large amplitude singular periodic orbits and prove their persistence for ϵ small.
A crucial point is the construction of an initial periodic
orbit for ε≠ 0 in order to start numerical continuation: the strategy we use is
illustrated in Section <ref>, where the continuation of the orbits with respect to the main
parameters is also performed. This section includes also the comparison between the analytical expression
of the period given by Müller and our numerical results as well as the general parameter study of
periodic solutions. Section <ref> is devoted to conclusions and an outline for future
work.
§ THE EULER-LAGRANGE EQUATION AS A FAST-SLOW SYSTEM
In this section, the critical points (not only the minimizers) of the functional
ℐ^ε are analyzed. A necessary condition they
have to satisfy is the Euler-Lagrange equation <cit.>. The Euler-Lagrange
equation associated to ℐ^ε is the singularly perturbed,
fourth order equation
ε^2u_XXXX-1/2σ(u_X)_X+u=0,
where σ(u_X)=W'(u_X)=u_X^3-u_X. Equation (<ref>) can be rewritten
via
w := u_X,
v := -ε^2 w_XX+1/2σ(w),
z := ε w_X,
as an equivalent system of first order ODEs
u̇ =w,
v̇ =u,
εẇ = z,
εż = 1/2 (w^3-w)-v,
where the dot denotes the derivative d/dX.
Equations (<ref>) exhibit the structure of a (2,2)-fast-slow system,
with u, v as slow variables and w, z as fast variables. We recall that a
system is called (m,n)-fast-slow <cit.> when it has the form
εẋ = f(x,y,ε),
ẏ = g(x,y,ε),
where x ∈ℝ^m are the fast variables and y ∈ℝ^n
are the slow variables. The re-formulation of system (<ref>)
on the fast scale is obtained by using the change of variable ξ=X/ε, i.e.
d x/dξ = x' = f(x,y,ε),
d y/dξ = y' = ε g(x,y,ε).
On the fast scale, system (<ref>) has the form
u' =ε w,
v' =ε u,
w' = z,
z' = 1/2 (w^3-w)-v,
which for ε>0 is equivalent to (<ref>).
The system possesses the unique equilibrium
p_0= ( 0,0,0,0 ),
which is a center, since the eigenvalues of the Jacobian are all purely imaginary.
An important property of system (<ref>) is stated in the following result:
Equations (<ref>) and (<ref>) are singularly perturbed Hamiltonian
systems
u̇ = - ∂ H/∂ v,   v̇ = ∂ H/∂ u,   εẇ = -∂ H/∂ z,   εż = ∂ H/∂ w,
and, on the fast scale ξ=X/ε,
u' = -ε ∂ H/∂ v,   v' = ε ∂ H/∂ u,   w' = -∂ H/∂ z,   z' = ∂ H/∂ w,
i.e., they are Hamiltonian systems with respect to the symplectic form
dz ∧ dw + 1/ε dv ∧ du and with Hamiltonian function
H(u,v,w,z)=1/8(4 u^2 - 8 v w - 2 w^2 + w^4 - 4 z^2).
The result follows by differentiating (<ref>) with respect to
the four variables. For more background on fast-slow Hamiltonian systems
of this form see <cit.>.
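The Hamiltonian structure is also convenient for numerical sanity checks. A minimal Python sketch (our notation; scipy is assumed to be available) integrates the slow-scale system (<ref>) and monitors the drift of H, which should stay at the level of the integration tolerance:

import numpy as np
from scipy.integrate import solve_ivp

def H(u, v, w, z):
    return (4*u**2 - 8*v*w - 2*w**2 + w**4 - 4*z**2) / 8

def rhs(X, y, eps):
    u, v, w, z = y
    return [w, u, z/eps, (0.5*(w**3 - w) - v)/eps]

eps = 1e-3
y0 = [0.0, 0.0, 1.0, 0.1]   # an arbitrary initial condition
sol = solve_ivp(rhs, [0.0, 0.05], y0, args=(eps,),
                method="Radau", rtol=1e-10, atol=1e-12)
drift = np.max(np.abs(H(*sol.y) - H(*y0)))  # should be ~ tolerance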
Since the Hamiltonian (<ref>) is a first integral of the system, the dynamics
take place on level sets, defined by fixing H(u,v,w,z) to a constant value
μ∈ℝ. This allows us to reduce the dimension of the system by one, which
we use both in analytical and numerical considerations.
One main advantage in the use of geometric singular perturbation theory is that the
original problem can be split into two subsystems by analyzing the singular limit
ε→ 0 on the slow scale (<ref>) and on the fast
scale (<ref>). The subsystems are usually easier to handle. Under suitable
conditions the combination of both subsystems allows us to obtain information for
the full system when 0 < ε≪ 1. In particular, if one can construct
a singular periodic orbit by combining pieces of slow and fast orbits, then the existence of
a periodic orbit 𝒪(ε)-close to the singular one for small
ε≠ 0 can frequently be proven under suitable technical
conditions by tools from geometric singular perturbation
theory <cit.>.
The slow singular parts of an orbit are derived from the reduced problem (or
slow subsystem), obtained by letting ε→ 0 in (<ref>)
0 = f(x,y,0),
ẏ = g(x,y,0),
which describes the slow dynamics on the critical manifold
𝒞_0:={(x,y)∈ℝ^m×ℝ^n : f(x,y,0)=0}.
Considering ε→ 0 on
the fast scale (<ref>) yields the layer problem (or
fast subsystem)
x' = f(x,y,0),
y' = 0,
where the fast dynamics is studied on “layers” with constant values of the
slow variables. Note that 𝒞_0 can also be viewed as consisting of equilibrium
points for the layer problem. 𝒞_0 is called normally hyperbolic if
the eigenvalues of the matrix Df_x(p,0)∈ℝ^m× m
do not have zero real parts for p∈𝒞_0. For normally hyperbolic invariant
manifolds, Fenichel's Theorem applies and yields the existence of a slow
manifold 𝒞_ε. The slow manifold lies at a distance 𝒪(ε)
from 𝒞_0 and the dynamics on 𝒞_ε is well-approximated by the reduced
problem; for the detailed technical statements of Fenichel's Theorem we refer
to <cit.>.
In our Hamiltonian fast-slow context, we focus on the analysis of families of
periodic orbits for system (<ref>) which are parametrized by the
level set parameter μ. The first goal is to geometrically construct
periodic orbits in the singular limit ε=0. The reduced
problem is given by
u̇ = w,
v̇ = u,
on the critical manifold (see Fig. <ref>)
𝒞_0={ (u, v, w, z) ∈ℝ^4 : z=0, v=1/2(w^3-w) }.
The equations of the layer problem are
w' = z,
z' = 1/2(w^3-w)-v̅,
on “layers” where the slow variables are constant (u=u̅, v=v̅).
Note that for Hamiltonian fast-slow systems such as (<ref>), both reduced and layer
problems are Hamiltonian systems with one degree of freedom.
§.§ The Reduced Problem
Equations (<ref>) describe the reduced problem on 𝒞_0, if w is considered
as a function of (u, v) on 𝒞_0.
𝒞_0 is normally hyperbolic except for two fold lines
ℱ_- ={(u,(w_-^3-w_-)/2 ,w_-,0)∈ℝ^4},
ℱ_+ ={(u,(w_+^3-w_+)/2 ,w_+,0)∈ℝ^4},
where w_± are defined by σ'(w_±)=0, i.e., w_±=±1/√(3).
For p∈ℱ_±, the matrix D_x f(p,0) has a double zero
eigenvalue.
The lines ℱ_± naturally divide 𝒞_0 into three parts
𝒞_0,l=𝒞_0 ∩{ w < w_- }, 𝒞_0,m=𝒞_0 ∩{w_- ≤ w ≤ w_+}, 𝒞_0,r=𝒞_0 ∩{ w > w_+ },
as shown in Figure <ref>. The submanifolds involved in our analysis are
only 𝒞_0,l and 𝒞_0,r, which are normally hyperbolic. More
precisely, 𝒞_0,l and 𝒞_0,r are of saddle-type, since
the matrix D_x f(p,0) along them always has two real eigenvalues of opposite
sign. We remark that saddle-type critical manifolds have played an important role in the
history of fast-slow systems in the context of the travelling wave problem for the
FitzHugh-Nagumo equation, see for
example <cit.>.
On 𝒞_0∖ℱ_±, the flow of the reduced system is, up to a time rescaling, given by
u̇ = (3w^2-1)w,
ẇ = 2u.
We differentiate v=1/2(w^3-w) with respect to X, re-write the equation in
(u,w)-variables and apply the time rescaling corresponding to the multiplication
of the vector field by the factor (3w^2-1) (cf. <cit.>). On
𝒞_0,m this procedure changes the direction of the flow, but it does not affect
the parts of the critical manifold involved in our analysis.
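For the reader's convenience, the elimination can be made explicit: differentiating v=1/2(w^3-w) along the slow flow gives v̇ = 1/2(3w^2-1)ẇ, and since v̇ = u this yields ẇ = 2u/(3w^2-1); multiplying the resulting vector field (u̇, ẇ) = (w, 2u/(3w^2-1)) by the factor (3w^2-1) removes the singularity at the fold lines ℱ_± and produces (<ref>).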
The Hamiltonian function allows us to restrict our attention to two subsets of
𝒞_0,l^μ and 𝒞_0,r^μ by fixing the value of μ.
Analyzing the slow flow on these two normally hyperbolic branches, we see that
u decreases along 𝒞_0,l and increases along 𝒞_0,r as shown in Figure <ref>.
§.§ The Layer Problem
The layer problem is obtained by setting ε=0 in (<ref>). We
obtain a two-dimensional Hamiltonian vector field on “layers” where the slow variables
are constant (u=u̅,v=v̅)
w' = z,
z' = 1/2(w^3-w)-v̅.
The two branches 𝒞_0,l^μ and 𝒞_0,r^μ are
hyperbolic saddle equilibria for the system (<ref>) for every value
of u̅, v̅. To construct a singular limit periodic orbit we are
particularly interested in connecting orbits between equilibria of the
layer problem.
The layer problem (<ref>) has a double heteroclinic connection if and only
if v̅=0. These are the only possible heteroclinic connections of the layer
problem (<ref>).
System (<ref>) is Hamiltonian, with v̅ as a parameter and Hamiltonian
function
H_f(w,z)=-z^2/2 + w^4/8 - w^2/4 - v̅ w.
The lemma follows easily by discussing the level curves of the Hamiltonian;
for the convenience of the reader we outline the argument.
Indexing the level set value of H_f as θ, the solutions
of (<ref>) are level curves {H_f(w,z)=θ}. The equilibria
of (<ref>) are {z=0,w=w_l,w_m,w_r}; here w_l,w_m,w_r are the
three solutions of
2v̅-w^3+w=0
which depend upon v̅. We only have to
consider the case where there are at least two real equilibria w_l and w_r
which occurs for v̅∈[-1/(3√(3)),1/(3√(3))]. Let
H_f(w_l,0)=:θ_l, H_f(w_r,0)=:θ_r
and note that since (<ref>) is cubic we can calculate θ_l,r
explicitly. To get a heteroclinic connection we must have θ_l=θ_r
and by an explicit calculation this yields the condition v̅=0. Hence,
heteroclinic connections of (<ref>) can occur only if v̅=0.
For v̅=0 one easily finds that the relevant equilibria are located at
w_l=-1 and w_r=1 so that θ_l=-1/8=θ_r. The double heteroclinic
connection is then explicitly given by the curves {z=±1/2(1-w^2)}
(see also Figure <ref>).
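In fact, the connecting orbits can be written down in closed form: on the upper curve z=1/2(1-w^2) the first equation of (<ref>) becomes w'=1/2(1-w^2), which is solved by w(ξ)=tanh(ξ/2) with z(ξ)=1/2 sech^2(ξ/2); the lower connection follows via ξ↦ -ξ. In the original spatial variable X=εξ this makes the 𝒪(ε) width of the transition layers explicit.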
The next step is to check where the relevant equilibria of the layer problem
are located on the critical manifold 𝒞_0^μ for a fixed value of the
parameter μ since we have a level set constraint for the full system. Using
Lemma <ref> one must require w=±1,v=0 while z=0 is the critical
manifold constraint, hence
H(u,0,±1,0)=1/2 u^2-1/8 = μ.
Therefore, the transition points where fast jumps from 𝒞_0,l to 𝒞_0,r
and from 𝒞_0,r to 𝒞_0,l are possible are given by
𝒞_0^μ∩{v=0,w=± 1}={u=±√(2μ+1/4),v=0,w=±1,z=0}.
Observe that fast orbits corresponding to positive values of u connect
𝒞_0,r^μ to 𝒞_0,l^μ, while the symmetric orbits
with respect to the u=0 plane connect
𝒞_0,l^μ to 𝒞_0,r^μ.
Recall that w = ± 1 represent the two phases of the material.
Hence, the heteroclinic orbits of the layer problem can be interpreted as
instantaneous transitions between these phases.
§.§ Singular Fast-Slow Periodic Orbits
The next step is to define singular periodic orbits by combining pieces of orbits of
the reduced and layer problem. Figure <ref> illustrates the situation. The entire singular orbit
γ_0^μ is obtained connecting two pieces of orbits of the reduced problem with
heteroclinic orbits of the fast subsystem for a
fixed value of μ, see Figure <ref>(a). The configuration of the two-dimensional
critical manifold and the singular periodic orbit is indicated in Figure <ref>(b).
Here we are only interested in singular periodic orbits which have nontrivial slow
and fast segments. Therefore, we do need transition points from the fast
subsystem to the slow subsystem. This requirement implies, by using the
result (<ref>), the lower bound μ>-1/8. A second requirement
we impose is that the slow subsystem orbits lie inside the normally hyperbolic
parts 𝒞_0,l^μ and 𝒞_0,r^μ. The u-coordinate of the slow segment
closest to the lines ℱ_± is located at u=0. Hence, we calculate
the value of the Hamiltonian under the condition that the slow trajectory is tangent to
ℱ_±, which yields
H(0,1/2(w_±^3-w_±),w_±,0)=1/24.
Combining these considerations with the results from
Sections <ref>-<ref> gives the following result on the existence
of singular periodic orbits (ε=0):
For ε=0, the fast-slow system (<ref>),(<ref>)
has a family of periodic orbits {γ_0^μ}_μ consisting
of precisely two fast and two slow subsystem trajectories with slow parts
lying entirely in 𝒞_0,l and 𝒞_0,r if and only if
μ∈ I_μ, I_μ :=
(-1/8,1/24).
The persistence of these periodic orbits for 0 < ε≪ 1 on each individual surface level
of the Hamiltonian can be proven by using an argument based on the theorem introduced
by Soto-Treviño in <cit.>.
For every μ∈ I_μ and for ε > 0 sufficiently small,
there exists a locally unique periodic orbit of the fast-slow system (<ref>),(<ref>) that is
𝒪(ε) close to the corresponding singular orbit γ_0^μ.
The Hamiltonian structure of the system suggests to study the individual levels as parametrized
families by directly applying the Hamiltonian function as a first integral
to reduce the dimension of the system. At first sight, a convenient choice is to express v
as a function of the variables (u, w, z) and μ
v=(4u^2-8μ-2w^2+w^4-4z^2)/(8w).
Consequently, equations (<ref>) transform into a (2,1)-fast-slow system
u̇ = w,
εẇ = z,
εż = 1/2 (w^3-w)-(4u^2-8μ-2w^2+w^4-4z^2)/(8w).
Theorem 1 in <cit.> for a C^r (r ≥ 1) (2,1)-fast-slow system guarantees the persistence of periodic orbits
consisting of two slow pieces connected by heteroclinic orbits for 0 < ε≪ 1 when the following conditions hold:
* The critical manifolds are one-dimensional and normally hyperbolic (given by Lemma <ref>).
* The intersection between W^u(𝒞_0,l) (resp. W^u(𝒞_0,r)) and W^s(𝒞_0,r) (resp. W^s(𝒞_0,l)) is transversal (confirmed by Lemma <ref>, see Fig. <ref>).
* The full system possesses a singular periodic orbit and the slow flow on the critical manifolds is transverse to touch-down and take-off sets,
which reduce to 0-dimensional objects in this case as we explicitly obtained in (<ref>).
However, system (<ref>) appears to be nonsmooth at w=0 and the fast orbits necessarily cross w=0. To overcome this (apparent) difficulty we use other charts
for the manifold H(u,v,w,z) = μ for parts of the singular orbit close to w=0. Instead of (<ref>) we now express u as a function of the other variables, i.e.
u = ±1/2√(8 v w + 2 w^2 - w^4 + 4 z^2 + 8 μ).
This leads to the following description of the dynamics:
* System (<ref>) describes the dynamics on the slow pieces away from w=0;
* The heteroclinic connection corresponding to u>0 is expressed by
v' = + ε/2√(8 v w + 2 w^2 - w^4 + 4 z^2 + 8 μ),
w' = z,
z' = 1/2 (w^3-w)-v.
* The heteroclinic connection corresponding to u<0 is expressed by
v' = - ε/2√(8 v w + 2 w^2 - w^4 + 4 z^2 + 8 μ),
w' = z,
z' = 1/2 (w^3-w)-v.
If we consider system (<ref>) as a smooth dynamical system on the manifold defined by H=μ, the proof given in <cit.> (based on proving the transversal intersection of two manifolds obtained by flowing suitably chosen initial conditions forward and backward in time) goes through without being affected by the fact that we have to work with several coordinate systems, as described above.
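For completeness, the right-hand sides of the three charts can be coded up directly; a minimal Python sketch (function names ours) reads:

def rhs_slow_chart(X, y, eps, mu):
    # system with v eliminated via the level set H = mu; valid away from w = 0
    u, w, z = y
    v = (4*u**2 - 8*mu - 2*w**2 + w**4 - 4*z**2) / (8*w)
    return [w, z/eps, (0.5*(w**3 - w) - v)/eps]

def rhs_fast_chart(xi, y, eps, mu, sign):
    # systems with u eliminated; valid near w = 0 on the fast scale,
    # sign = +1 for the heteroclinic with u > 0, sign = -1 for u < 0
    v, w, z = y
    rad = 8*v*w + 2*w**2 - w**4 + 4*z**2 + 8*mu
    rad = max(rad, 0.0)   # equals (2u)^2 on the level set; guard round-off
    return [sign * 0.5 * eps * rad**0.5, z, 0.5*(w**3 - w) - v]

Switching between the charts when an orbit enters or leaves a neighbourhood of {w=0} then yields a globally smooth description of the dynamics on the level set {H=μ}.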
System (<ref>) has two parameters μ,ε, which naturally
leads to the question how periodic orbits deform and bifurcate when the two
parameters are varied. Furthermore, the fast-slow structure with orbits consisting
of two fast jumps and two slow segments as shown in Figure <ref> and the
three-dimensional form (<ref>) provide analogies to the travelling wave
frame system obtained from the partial differential equation version of the
FitzHugh-Nagumo <cit.> (FHN) equation. The three-dimensional fast-slow
FHN system has been studied in great detail using various fast-slow systems
techniques (see e.g. <cit.>).
One particular approach to investigate the FHN parameter space efficiently is to
employ numerical continuation methods <cit.>. In fact,
numerical approaches to FHN have frequently provided interesting conjectures and
thereby paved the way for further analytical studies. Adopting this approach, we are
going to investigate the problem (<ref>) considered here using numerical
continuation to gain better insight into the structure of periodic orbits.
§ NUMERICAL CONTINUATION
This section is devoted to the numerical investigation of the critical points
of the functional ℐ^ε via the Euler-Lagrange
formulation (<ref>). A powerful tool for such computations
is the numerical continuation software <cit.>, which is able to numerically
track periodic orbits depending upon parameters using a combination of a boundary
value problem (BVP) solver with a numerical continuation algorithm. Using
such a framework for fast-slow systems often yields a wide variety
of interesting numerical and visualization results; for a few recent examples we refer
to <cit.>.
The first task one has to deal with is the construction of a starting orbit for fixed
ε≠ 0. For (<ref>) this is actually a less trivial task than
for the FHN equation as we are going to explain in Section <ref>.
In Section <ref>, we are also going to construct a starting periodic
orbit based upon the geometric insights of Section <ref>.
Once the starting periodic orbit is constructed, we use this continuation software to perform
numerical continuation in both parameters μ and ε. This yields
bifurcation diagrams and the solutions corresponding to some interesting points
on the bifurcation branches. Then, the connection between the parameters in the
minimization process is investigated, in order to numerically determine the
correspondence that leads to the functional minimum. Finally, a comparison with
the period law (<ref>) predicted by Müller <cit.> is performed.
§.§ Construction of the starting orbit
As indicated already, the construction of a starting periodic orbit is not trivial:
* The singular orbit itself, obtained by matching slow and fast subsystem orbits
for a fixed value of μ and for ε=0, cannot be used owing to re-scaling
problems (the fast pieces would all correspond to x=0).
* The computation of a full periodic orbit using a direct initial value solver
approach is hard to perform since the slow manifolds are
of saddle type and an orbit computed numerically would diverge from them exponentially
fast <cit.>.
* Matching slow segments obtained with a saddle-type algorithm <cit.>
and fast parts computed with an initial value solver may cause problems at the points
where the four pieces should match.
* In contrast to the FitzHugh-Nagumo case <cit.>, the periodic
orbits we are looking for cannot be detected as Hopf bifurcations from the zero equilibrium.
In that case, we could use the continuation software to locate such bifurcations and then find a periodic
orbit for 0 < ε≪ 1 fixed by branch-switching at the Hopf bifurcation point. In
our case, however, the origin p_0 is a center equilibrium, and an infinite number of periodic
orbits exist around it in the formulation (<ref>).
* Starting continuation close to the equilibrium p_0 is difficult due to its degenerate nature (w=0).
Our strategy is to use the geometric insight from Section <ref> in combination with
a slow manifolds of saddle-type (SMST) algorithm and a homotopy approach. We construct an
approximate starting periodic orbit using a value of μ which leads to a short
“time” spent on the slow parts of the orbits, so that the saddle-type branches
do not lead to numerical complications. Then we use an SMST algorithm to find a suitable
pair of starting points lying extremely close to the left and right parts of the slow manifolds
𝒞^μ_ε,l and 𝒞^μ_ε,r. In the last step we employ
numerical continuation to study the values of μ we are actually interested in; this
is the homotopy step.
The value μ=-1/8 is peculiar, since in this case the singular slow segments for (<ref>) reduce
to two points
𝒞_0,l^-1/8 = { (0,-1,0) }, and 𝒞_0,r^-1/8 = { (0,1,0) },
i.e., touch-down and take-off sets for the fast dynamics coincide in this case.
The range (<ref>) we are considering does not include μ=-1/8;
however, this property makes it an excellent candidate for the first step of our strategy.
Indeed, we know already from the geometric analysis in Section <ref> that the time
spent near slow manifolds is expected to be very short in this case.
Although it is still not possible to compute the full orbit using forward/backward integration,
we can compute two halves, provided we choose the correct initial condition. We aim to find
a point on the slow manifolds 𝒞^μ_ε,l and 𝒞^μ_ε,r as an
initial value. The SMST algorithm <cit.> helps to solve this problem. The
procedure is based on a BVP method to compute slow manifolds of saddle-type in fast-slow
systems. Fixing ε and μ, we select manifolds B_l and B_r, which are
transverse to the stable and unstable eigenspaces of 𝒞_0,l^μ and
𝒞_0,r^μ, respectively (Figure <ref>). The plane B_l and the line
B_r provide the boundary conditions for the SMST algorithm.
Implementing the algorithm for μ=-1/8 and ε=0.001 for (<ref>)
shows that there are actually two points (0,w_L,0) and (0,w_R,0) which
are contained in the slow manifold even for ε≠ 0 as well as in the critical
manifold 𝒞_0. From the geometric analysis in Section <ref> we know that at
μ=-1/8 the take-off and touch-down points coincide and a singular
double-heteroclinic loop exists for v=0. This motivates the choice of (u,v,w,z)=(0,0,w_L,0)
and (u,v,w,z)=(0,0,w_R,0) in the following algorithm: a numerical integration
of the full four-dimensional problem (<ref>) forward and backward in x is performed,
imposing the Hamiltonian constraint using a projective algorithm. The computation is stopped
once the hyperplane {w=0} is reached. The full periodic orbit is then constructed by matching
two symmetric pieces together.
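The projection step can be sketched as follows (a simplified Python version, reusing H from the earlier sketch; names ours): after each integration step the point is pulled back onto the level set {H=μ} by a Newton iteration along the gradient of H.

import numpy as np

def grad_H(y):
    u, v, w, z = y
    return np.array([u, -w, 0.5*(w**3 - w) - v, -z])

def project_onto_level_set(y, mu, tol=1e-12, max_iter=20):
    # solve H(y + s * grad_H(y)) = mu for s; to first order
    # s = -(H(y) - mu) / |grad_H(y)|^2, iterated until converged
    y = np.asarray(y, dtype=float)
    for _ in range(max_iter):
        r = H(*y) - mu
        if abs(r) < tol:
            break
        g = grad_H(y)
        y = y - r * g / np.dot(g, g)
    return y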
In principle, there are different ways in which one may arrive at a useful construction of a
highly accurate starting periodic orbit. In our context, the geometric analysis guided the
way to identify the simplest numerical procedure, which is an approach that is likely to be
successful for many other non-trivial fast-slow numerical continuation problems.
§.§ Continuation in μ
A detailed analysis of the critical points' dependence on the Hamiltonian is performed.
The value of μ can be arbitrarily chosen only in the interval I_μ, while
ε is fixed to 0.001. Continuation is performed on system (<ref>)
using the initial orbit obtained numerically in Section <ref>. Starting at
μ=-1/8, the continuation software is able to compute the variation of the orbits up to
μ=1/24. The bifurcation diagram of the period P with respect to the
parameter μ is shown in Figure <ref>(a).
The first/upper branch of the continuation displays fast-slow orbits corresponding to perturbations
of the singular ones {γ_0^μ}_μ∈ I_μ for fixed
ε≠ 0. As predicted by the geometric analysis we observe that decreasing
μ reduces the length of the slow parts, so that the orbits almost correspond to the double
heteroclinic one analytically constructed at μ = -1/8; see Figure <ref>. Near
μ=-1/8 the bifurcation branch has a fold in (μ,P)-space leading to
the second/lower bifurcation branch. The difference between the orbits on the two
branches for a fixed value of μ is shown in Figure <ref>(b). Along the
second branch, periodic solutions around the center equilibrium appear, which collapse
into it with increasing μ (Figure <ref>(b)).
Furthermore, numerical continuation robustly indicates that the upper branch has another
fold when continued from μ=0 to higher values of μ as shown in
Figure <ref>(a). The orbits obtained by fixing a value of μ on the upper branch
and its continuation after the fold differ only because of the appearance of two new fast
parts near the plane {u=0} as shown in Figure <ref>(a). We conjecture that
these parts arise due to the loss of normal hyperbolicity at ℱ_±; see also
Section <ref>.
§.§ Continuation in ε
We perform numerical continuation in ε by fixing three values of
μ in order to capture the behavior of the solutions for the range I_μ
from Proposition <ref>. We consider μ_l ≈ -1/8 with
μ_l>-1/8, μ_c ≈ 0, and μ_r ≈1/24 with
μ_r<1/24; or more precisely μ_l = -0.12489619925,
μ_c = 1.5378905702· 10^-5, and μ_r = 0.04100005066.
For each of these values, we find two bifurcation branches connected via a fold
in (ε,P)-space; see Figure <ref>.
The bifurcation diagrams and the associated solutions shown in Figure <ref>
nicely illustrate the dependence of the period on the singular perturbation parameter
ε. When ε→ 0 there are two very distinct limits for
the period P=P(ε) (Figure <ref>(a)-(b)) depending on whether we are
on the upper or lower part of the main branch of solutions. In the case with
μ_r≈1/24, when orbits come close to non-hyperbolic singularities on
𝒞_0, we observe that P(0)
seems to be independent of whether we consider the upper or lower part of the branch
(see Figure <ref>(c)). Furthermore, functional forms of P(ε) are
clearly different for small ε so the natural conjecture is that there
is no universal periodic scaling law if we drop the functional minimization constraint.
The deformation under variation of ε of the periodic orbits in (w,z,u)-space
is also interesting. For μ=μ_l (Figure <ref>(a)), we observe that the upper
branch corresponds to the solutions that we expect analytically from
Proposition <ref> consisting of two fast and two slow segments when
approaching ε=0. A similar scenario occurs also
for the other values of μ (Figure <ref>(b) and Figure <ref>(c)).
When the ε value is too large, or when we are on
a different part of the branch of solutions, orbits close to the equilibrium of
the full system appear, or additional pieces resembling new fast contributions emerge.
§.§ Period scaling
So far, no boundary conditions have been imposed; moreover, the computed solutions
are not necessarily minimizers of the functional, but only critical points. Our
conjecture is that the interaction between the two main parameters of the system
μ and ε should allow us to obtain the true minimizers via a
double-limit. In other words, for every value of ε there is a
corresponding orbit which minimizes the functional ℐ^ε,
and since along this orbit the Hamiltonian has to constantly assume a certain
value μ̅, the minimization process should imply a direct connection
between the parameters. Consequently, it is interesting to investigate this
ansatz from the numerical viewpoint.
A first possibility is to establish a connection between the two parameters
ε and μ via a direct continuation in both parameters, starting
from certain special points, such as the fold points detected in
Sections <ref>-<ref>. However, it turns out that this
process does not lead to the correct scaling law for minimizers of
ℐ^ε, as shown in Figure <ref>.
Another option is instead to check if among the critical points of the Euler-Lagrange
equation (<ref>) we have numerically obtained there are also the minimizers of
the functional ℐ^ε respecting the power law (<ref>).
In <cit.>, boundary conditions on the interval [ 0,1 ] are also included
in the variational formulation, and from the results obtained from the continuation in
ε, one may expect that high values of μ would not be able to fit them,
since the period is always too high. Lower values of μ, instead, seem to have sufficiently
small period. Hence, one could fix one of those (for example, μ_l) and look at what happens
as ε→ 0. The hope is that the 𝒪(ε^1/3) leading-order scaling
for the period naturally emerges. Unfortunately, this does not happen, as we can see in
Figure <ref>; the lower branch seems to give a linear dependence on ε,
while the upper branch gives a quadratic one.
The reason the 𝒪(ε^1/3) leading-order scaling does not emerge from this naive
approach lies in the missing connection with the minimization process. However,
Figure <ref> demonstrates that there are several nontrivial scalings of natural
families of periodic orbits as ε→ 0.
So far, we have just assumed that the Hamiltonian value of the minimizers should be “low”,
but in fact there is a close connection between the values of ε one is considering
and the value of μ of the minimizers. In other words, there is not a unique value of
μ selected by the minimizers for every small ε; rather, minimizers move over different
Hamiltonian energy levels as ε→ 0. Starting from this consideration,
another option, which turns out to be the correct one to recover the scaling (<ref>),
is to use the periodic orbits from numerical continuation to compute the numerical value of
the functional ℐ^ε as a function of the period P fixing different
values of ε in a suitable range, such as:
I_ε= [ 10^-7, 10^-1].
Then, we obtain different parabola-shaped diagrams where we can extract the value of the period
minimizing the functional (Figure <ref>). When plotting these values against the
value of ε for which they have been computed, one obtains the results shown in
Figure <ref>. The values numerically extracted from our solutions match the analytical
results on the period proven by Müller (<ref>) when the value of ε is
sufficiently small. As ε increases, the period law is less accurate, as one
would expect.
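For the potential (<ref>) the constant A_0 can be evaluated in closed form: W^1/2(w)=1/2(1-w^2) for w∈[-1,1], so A_0 = 2∫_-1^1 W^1/2(w) dw = 4/3, and the leading-order prediction reduces to P^ε ≈ 2(6·(4/3)·ε)^1/3 = 4 ε^1/3. The comparison underlying Figure <ref> can then be scripted along the following lines (a Python sketch; variable names ours, and the minimizing periods would be read off from the parabola-shaped diagrams described above):

import numpy as np

A0 = 4/3                                  # A0 = 2 * int_{-1}^{1} sqrt(W(w)) dw
def period_mueller(eps):
    return 2.0 * (6.0 * A0 * eps)**(1/3)  # = 4 * eps**(1/3) to leading order

for eps in np.logspace(-7, -1, 7):
    print(f"eps = {eps:.1e}   predicted period = {period_mueller(eps):.4e}")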
§ CONCLUSION & OUTLOOK
In summary, we have shown that geometric singular perturbation theory and numerical continuation
methods can be very effective tools to understand nonconvex multiscale variational
problems via the Euler-Lagrange formulation. We have proven the existence of a class of singular
periodic orbits based upon a fast-slow decomposition approach and we have shown that these orbits persist
for ε small.
The geometric insight was used
to determine a starting solution for numerical continuation in the context of a reduced
three-dimensional fast-slow system. Then we studied the dependence of periodic orbits on the
singular perturbation parameter as well as the Hamiltonian energy level set parameter arising
in the reduction from a four- to a three-dimensional system. The parameter space is structured
by several fold points. Furthermore, we were able to study the shape of non-minimizing periodic
orbits for very broad classes of parameters. Finally, we showed that several natural scaling
laws for non-minimizing sequences of periodic solutions exist and also confirmed numerically
the leading-order scaling predicted by Müller for minimizing sequences.
Based upon this work, there are several open problems as well as generalizations one might
consider. In particular, it would be desirable to extend the persistence result
to the general class of singularly-perturbed Hamiltonian fast-slow systems (<ref>); this
is the subject of ongoing work.
Another important observation of our numerical study concerns the intricate orbits that seem to
arise when parts of the slow segments start to interact with the singularities ℱ_± where
the critical manifold is not normally hyperbolic. The natural conjecture is that the
additional small fast loops that we observe numerically could correspond to homoclinic “excursions”
in the fast subsystem anchored at points close to ℱ_±. The blow-up method <cit.> is
likely to provide an excellent tool to resolve the non-normally hyperbolic singularities; see
e.g. <cit.> where the existence of complicated fast-slow
periodic orbits involving loss of normal hyperbolicity is proven.
The construction of an initial orbit has been one of the hardest problems to tackle. It was solved
using analytical and numerical tools, after discarding several other plausible approaches.
The SMST algorithm <cit.> has been a helpful tool to determine good starting points on the slow
manifolds and then use an initial value solver to obtain segments of a complete orbit. Although
our approach works well in practical computations, there are interesting deep numerical analysis
questions still to be answered regarding the interplay between certain classes of fast-slow “initial
guess” starting orbits and the success or failure of Newton-type methods for the associated BVPs. In
particular, can one prove certain geometric conditions or restrictions on ε to guarantee
the convergence for the first solution?
Another highly relevant direction would be to extend our approach to more general classes of
functionals. There are many different singularly-perturbed variational problems, arising
e.g. in materials science, to which one may apply the techniques presented here. In this context,
it is important to emphasize that we expect that other non-convex functionals in particular could
be excellent candidates for future work.
From the viewpoint of applications, it would be interesting to study the practical relevance of
non-minimizing sequences of periodic solutions. Although we expect the long-term behavior to be
governed by minimizers, it is evident that non-minimizing periodic orbits can have a high impact
on time-dependent dynamics, e.g., either via transient behavior, via noise-induced phase
transitions, or as dynamical boundaries between different regimes.
Acknowledgements: AI and PS would like to thank the Fonds zur Förderung der
wissenschaftlichen Forschung (FWF) for support via a doctoral school (project W1245).
CK would like to thank the VolkswagenStiftung for
support via a Lichtenberg professorship. CK and PS also acknowledge partial support
of the European Commission (EC/REA) via a Marie-Curie International Reintegration Grant (MC-IRG).
|
http://arxiv.org/abs/1701.07464v2 | 20170125195601 | Tunnelling in Dante's Inferno | [
"Kazuyuki Furuuchi",
"Marcus Sperling"
] | hep-th | [
"hep-th",
"astro-ph.CO"
] |
UWTHPH-2017-2
Tunnelling in Dante's Inferno
Kazuyuki Furuuchi^a and Marcus Sperling^b,(a,c)
^aManipal Centre for Natural Sciences, Manipal University
Dr.T.M.A. Pai Planetarium Building
Madhav Nagar, Manipal, Karnataka 576104, India
^bFakultät für Physik,
Universität Wien
Boltzmanngasse 5, A-1090 Wien, Austria
^cInstitut für Theoretische Physik,
Leibniz Universität Hannover
Appelstraße 2, 30167 Hannover, Germany
We study quantum tunnelling in Dante's Inferno model of large field inflation.
Such a tunnelling process, which will terminate inflation, becomes problematic
if the tunnelling rate is rapid compared to the Hubble time scale at the time
of inflation.
Consequently, we constrain the parameter space of Dante's Inferno model by
demanding a suppressed tunnelling rate during inflation.
The constraints are derived and explicit numerical bounds are provided for
representative examples.
Our considerations are at the level of an effective field theory;
hence, the presented constraints have to hold regardless of any UV completion.
§ INTRODUCTION
The slow-roll inflation paradigm has been phenomenologically successful,
initially solving the naturalness issues in Big Bang Cosmology,
and later explaining the primordial density perturbations.
General predictions of slow-roll inflation on primordial density perturbations
agree very well with recent Cosmic Microwave Background (CMB) observations.
Nevertheless, slow-roll inflation has its own naturalness issue.
Protecting the flatness of the inflaton potential against quantum corrections
has been a long-standing challenge.
This issue is particularly severe in large field inflation models in which
the inflaton enjoys super-Planckian field excursion.
A standard approach to explain the flatness of a potential in an effective
field theory is imposing a symmetry.
For example, natural inflation <cit.> assumes a continuous shift
of an axion field as an approximate symmetry.
The comparison of this model with CMB data requires a super-Planckian axion
decay constant.
Naively, this indicates that the symmetry must be respected at the Planck scale.
However, there are strong indications that continuous global symmetries are not
respected in a quantum theory of gravity (see <cit.> for a recent
discussion together with a review of earlier studies).
An approach to circumvent this problem was proposed under the name of
extra-natural inflation <cit.>.
This model realises a super-Planckian axion decay constant in four dimensions
by means of an effective gauge field theory in higher dimensions.
The super-Planckian axion decay constant is achieved at the expense of a very
small gauge coupling.
However, it was immediately noticed that this model is difficult to realise in
string theory <cit.>.
The lasting difficulty in realising extra-natural inflation in string theory
led to the Weak Gravity Conjecture <cit.>,
which limits the relative weakness of gauge forces compared to the gravitational
force.
This conjecture may eventually forbid a super-Planckian axion decay constant
in effective field theories which can consistently couple to gravity,
though several logical steps need to be examined in more detail.
If a super-Planckian axion decay constant is forbidden in effective field
theories which are consistently coupled to gravity, then a new major
obstacle for the realisation of large field inflation via natural
inflation arises.
However, a possible way out may be axion monodromy
inflation <cit.>.
In this class of models, the axion decay constant is sub-Planckian,
but the axion couples to an additional degree of freedom, which we call
winding number direction below.
An effective super-Planckian excursion of the inflaton is achieved by going
through the axion direction multiple times, with a shift in the winding number
direction for each round.
This appears to be a promising avenue for realising large field inflation.
Nevertheless, the validity of axion monodromy inflation should be examined
further, both at the level of an effective field theory as well as at the level
of an UV completion.
In particular, it has been pointed out that quantum tunnelling through the
potential roughly in the winding number direction may terminate inflation
before it lasts long enough for solving the naturalness issues in Big Bang
Cosmology <cit.>.
It turned out that the tunnelling rate is highly model dependent.
Tunnelling in related models has
subsequently been studied in
<cit.>.
In this article, we study tunnelling in an axion monodromy model,
namely Dante's Inferno model <cit.>.
We limit our study to the level of an effective field theory.
Besides phenomenological interests in this promising model,
there is an attractive technical feature: The potential wall
orthogonal to the inflaton direction is explicitly given.
This allows us to apply a standard calculation à la
Coleman <cit.> in order to estimate the tunnelling rate.
In particular, one can estimate the tension of the surface of the
bubble, through which the false “vacuum" decays[
Precisely speaking, during inflation
the state is not in a local minimum of the potential,
but slowly rolling in ϕ-direction.
This point has been investigated in <cit.>.
In this article, we will loosely use the term (false) “vacuum"
for such configurations, because we will be mainly dealing with
slices of constant ϕ of the potential
in which the state is in a local minimum.].
This is in contrast to other axion monodromy models for which
the tension of the wall is treated as an input from a UV theory
<cit.>.
We constrain the parameter space of Dante's Inferno model by requiring a
suppressed tunnelling rate during inflation.
In particular, we will show that in some regions of the parameter space,
the suppression of the tunnelling process yields a new constraint.
This constraint comes purely at the level of an effective field theory;
hence, regardless of the UV completion of the theory, the constraint has to
hold.
For a fixed ratio Λ/f_1, where Λ is the parameter controlling
the height of the sinusoidal potential and f_1 is the smaller axion decay
constant in Dante's Inferno model, the condition that tunnelling is suppressed
introduces a lower bound on f_1 in such a parameter region.
We demonstrate this observation by providing explicit numerical bounds
in a couple of representative examples.
The outline of this article is as follows: Dante's Inferno model is briefly
reviewed in Sec. <ref>.
Thereafter, we discuss quantum tunnelling and suppression thereof in
Sec. <ref>.
We exemplify these considerations for the choice of a monomial inflaton
potential in Sec. <ref>.
Lastly, Sec. <ref> concludes.
Three appendices provide the necessary background and details for choosing
constant field values during inflation, the bounce
solution, and the thin-wall approximation.
§ DANTE'S INFERNO MODEL
In this section, we review Dante's Inferno model <cit.>
and fix our notation. The dynamics of the model are governed by the following
action:
S_DI =
∫ d^4 x
√(-g) [
(1/2) ∂_μϕ_1 ∂^μϕ_1
+
(1/2) ∂_μϕ_2 ∂^μϕ_2
-
V_DI(ϕ_1,ϕ_2)
] ,
where the scalar potential is given by
V_DI(ϕ_1,ϕ_2)
=
V_1(ϕ_1)
+
Λ^4( 1 - cos(ϕ_1/f_1 - ϕ_2/f_2)
).
Fig. <ref> displays the behaviour of the potential
V_DI(ϕ_1,ϕ_2) for some parameter values.
It is convenient to perform the following rotation in the field space:
(
[ χ; ϕ ])
=
(
[ cosγ - sinγ; sinγ cosγ ])
(
[ ϕ_1; ϕ_2 ]) ,
where
sinγ ≡ f_1/√(f_1^2 + f_2^2) ,
cosγ ≡ f_2/√(f_1^2 + f_2^2) .
In terms of the rotated fields, the potential (<ref>) becomes
V_DI(χ,ϕ)
=
V_1 ( χ cosγ + ϕ sinγ )
+
Λ^4
(
1 - cos(χ/f) ) ,
where
f ≡ f_1 f_2/√(f_1^2 + f_2^2) .
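As a quick consistency check of this field-space rotation, the following symbolic sketch (our own illustration; the symbol names are ours) verifies that the argument of the cosine collapses to χ/f, so that the sinusoidal term is independent of ϕ:

import sympy as sp

f1, f2, chi, phi = sp.symbols('f_1 f_2 chi phi', positive=True)
root = sp.sqrt(f1**2 + f2**2)
sing, cosg = f1 / root, f2 / root          # sin(gamma), cos(gamma)
f = f1 * f2 / root                         # effective decay constant

# invert the rotation: (phi_1, phi_2) in terms of the rotated fields (chi, phi)
phi1 = chi * cosg + phi * sing
phi2 = -chi * sing + phi * cosg

# argument of the cosine in the original potential, minus chi/f
print(sp.simplify(phi1 / f1 - phi2 / f2 - chi / f))   # -> 0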
According to <cit.>, the following two conditions are required
for Dante's Inferno model:
2π f_1 ≪ 2π f_2 ≲ M_P ,
Λ^4/f≫ V_1' .
Here, M_P ≡ (8π G)^-1/2 is the reduced Planck mass
with G being Newton's constant.
For later convenience, we rewrite condition (<ref>) as
s ≡ Λ^4/( f V_1' ) ≫ 1 .
The last inequality in (<ref>) is expected to follow from the Weak Gravity
Conjecture as we reviewed in the Introduction, which we assume in this article.
Next, condition (<ref>) implies
f ≃ f_1 ,
cosγ≃ 1 ,
sinγ≃f_1/f_2≪ 1 .
Now, let us take a closer look on the origin of condition (<ref>).
In large field inflation, we typically have ϕ_∗≳ 10 M_P,
where the suffix ∗ indicates that it is the value when the pivot scale
exited the horizon (see (<ref>) in Sec. <ref>
for values of ϕ_∗ in explicit examples).
This constrains sinγ via
sinγ
=
ϕ_1 ∗/ϕ_∗ .
For the effective field theory description of the potential V_1(ϕ_1) to be
valid, a natural expectation is that ϕ_1 ∗ is bounded from above by
the reduced Planck scale M_P.
This assumption together with (<ref>) implies that sinγ≲
0.1
whenever ϕ_∗≳ 10 M_P.
Note that the effective description may break down
at a much smaller energy scale, such that the value of ϕ_1 ∗
decreases accordingly.
For example, a moderate model assumption ϕ_1 ∗ ≲ 10^-1
M_P imposes via (<ref>) that sinγ ≲ 10^-2.
Next, let us examine condition (<ref>), which implies that the field
χ first settles down to the local minimum in a slice of constant ϕ
before the field ϕ, which plays the role of inflaton in Dante's Inferno
model, starts to slow-roll.
Then, from (<ref>) the inflaton potential V_I(ϕ) is given by
V_I(ϕ) = V_1 (ϕsinγ) .
We refer to Fig. <ref> to illustrate that the inflaton rolls along
the bottom of the valley. As one observes, there seem to be numerous
valleys in the potential, but all of them are connected
by the periodic identification in ϕ_2-direction.
As the inflaton rolls along the valley
one period in ϕ_2-direction,
the bottom of the valley is shifted in ϕ_1-direction.
While the axion decay constant f_2 is sub-Planckian as in (<ref>),
super-Planckian inflaton excursion can be achieved
by going round in ϕ_2-direction several times.
However, the slow-roll inflation may terminate if quantum tunnelling through
the
wall of the valley happens.
Requiring that the tunnelling rate is sufficiently small compared to the
Hubble time scale during inflation may impose further constraints on the
parameter space of Dante's Inferno model.
We will explore the consequences of this requirement in the next section.
§.§ Dante's Inferno model from higher dimensional gauge theories
Dante's Inferno model can be obtained from higher dimensional gauge theories.
In this circumstance there is an additional constraint on the
parameters <cit.>, which reads
Λ^4
≃ 3 c/( π^2 (2π L_5)^4 ) ,
where the natural value of c is O(1).
The axion decay constants f_1 and f_2 are given as
f_1 = 1/( g_1 (2π L_5) ) ,
f_2 = 1/( g_2 (2π L_5) ) ,
where g_1 and g_2 are the gauge couplings in four-dimension.
From (<ref>) and (<ref>), and assuming that the perturbative
approximation is valid, i.e. g_1 ≲ 1, we obtain
Λ≲ f .
§ TUNNELLING IN DANTE'S INFERNO MODEL
It is well-known that a quantum field theory with two local minima, ψ_±,
of the potential has two classically stable equilibrium states. However,
assuming that ψ_- is the unique state with lowest energy, the state
ψ_+ is rendered unstable quantum mechanically, because of a non-vanishing
tunnelling probability through the potential barrier into the so-called true
vacuum state ψ_-.
The decay of a false vacuum ψ_+ proceeds by nucleation of bubbles, inside
which the true vacuum[Below will study tunnelling between a false
vacuum and another false vacuum with lower energy, which can be analysed
without introducing new ingredients.] resides.
The tunnelling rate per volume Γ/Vol between true
and false vacua, as discussed in <cit.>,
can be parametrised by two quantities A and B (in leading order) via
Γ/Vol = A e^-B/ħ [ 1 + O(ħ) ] .
While the details of the coefficient A are somewhat complicated, it is
possible to provide a closed expression for B solely from the semi-classical
treatment. The relevant solution has been referred to as bounce and is
reviewed in App. <ref>.
From (<ref>) it is apparent that the tunnelling process is suppressed
provided B ≫ħ and the pre-factor A is well-behaved.
A dimensional analysis of the pre-factor reveals A ∼ M^4,
where M is a relevant mass scale in the model.
(We refer, for example, to <cit.> for numerical
calculations of the coefficients in the case of a simple scalar field theory.)
This estimate may be off by a few orders, but the error will still be small
compared to the exponential suppression factor e^-B ħ.
However, since B is positive, there may be scenarios in which the tunnelling
is not exponentially suppressed, i.e. e^-B ħ∼(1). For
instance in inflation models, if A induces a rapid rate compared to
the Hubble time scale during inflation, then the tunnelling becomes
potentially dangerous as it might terminate inflation too early. More
precisely,
this happens if A ≳ H^4, where H is the Hubble expansion rate at the
time of inflation.
Consequently, two cases arise:
* On the one hand, if all relevant scales in the
model are smaller than H, the tunnelling rate is irrelevant during inflation,
regardless of the precise order of B.
* If, on the other hand, we
assume that all relevant scales in Dante's Inferno model satisfy
Λ, f_1, f_2 ≳ H then one has to carefully verify which subsequent
parameter regions are protected from an unsuppressed tunnelling rate.
It is therefore the objective of this article to
analyse the exponent B together with the condition B ≫ 1 for Dante's
Inferno model for inflation in the regime Λ, f_1, f_2 ≳ H. As
customary, we set ħ≡ 1 for the rest of this article.
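To make this criterion concrete: with A ∼ M^4, demanding less than one nucleation per Hubble four-volume, Γ/Vol ≲ H^4, translates into B ≳ 4 ln(M/H). A minimal numeric illustration (the ratios M/H are ours, chosen only for orientation):

import numpy as np

M_over_H = np.array([10.0, 1e2, 1e3])    # illustrative ratios of the scale M to H
B_min = 4.0 * np.log(M_over_H)           # from  M^4 e^{-B} <~ H^4
for ratio, b in zip(M_over_H, B_min):
    print(f"M/H = {ratio:6.0f}  ->  B >~ {b:4.1f}")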
Obtaining a viable parameter region in Dante's Inferno model then means that
one has to avoid scenarios in which the tunnelling in χ-direction
is unsuppressed.
In those cases, one can investigate the dynamics of the field χ, while
regarding the value of ϕ as being fixed
in time[A path with
varying ϕ gives a larger action and is irrelevant for the
estimation of the tunnelling rate.]. We refer to App. <ref>
for a discussion of the effects of a time-dependent ϕ.
Since we are interested in the tunnelling rate during the slow-roll inflation,
we choose the value of the inflaton when the pivot scale exited the horizon,
ϕ = ϕ_∗, as a reference point.
(We comment briefly on other values of ϕ at the end of
Sec. <ref>.)
Then, from (<ref>) the potential V(χ) for the field χ
becomes
V(χ)
≡ V_1(χ cosγ + ϕ_∗ sinγ) +
Λ^4
(
1 - cos(χ/f) ) .
We first estimate the tunnelling rate including the effects of gravity in order
to understand when we can neglect the gravitational back-reactions.
Following <cit.>, the Euclidean action of a scalar
field χ coupled to Einstein gravity reads
S_E
=
∫ d^4 x
√(g) [
(1/2)
g^μν ∂_μχ ∂_νχ
+
V(χ)
-
(1/(16π G))
R
] .
To estimate the gravitational back-reaction, we employ an O(4)-symmetric
ansatz.
There are few limitations of such an ansatz:
Firstly, inflation with (almost) flat spatial space, which is supported by
observations, does not respect O(4) symmetry[
See <cit.> for a recent study
of a non-O(4)-symmetric bounce solution
without gravitational back-reactions.].
Secondly, there is no proof that the O(4)-symmetric bounce gives the least
action among all bounce solutions.
We will not try to fully justify the use of an O(4)-symmetric bounce in this
article. Nevertheless, since the space is empty during inflation, and we will
be
interested in processes which proceed fast compared to the Hubble expansion
rate, we hope that the first point may not be so crucial.
For the second point, we expect that even if there exists a non-O(4)-symmetric
bounce, with smaller action than the O(4)-symmetric bounce,
the O(4)-symmetric bounce provides at least the
lower bound for the tunnelling rate.
Moreover, we may expect that the difference between the constraints on the
parameter space of Dante's Inferno model from the non-O(4)-symmetric bounce
do not differ qualitatively from those of the O(4)-symmetric bounce.
Assuming O(4) symmetry, the metric takes the form
ds^2 =
d ξ^2
+
a^2(ξ)
dΩ^2,
where dΩ^2 is the canonical metric of the unit S^3.
Moreover, the O(4) symmetry restricts the field χ to be a function
of the radial coordinate ξ only.
Thus, for O(4)-symmetric solutions, the Euclidean action (<ref>) becomes
S_E
=
2π^2
∫ d ξ[
a^3
(
1/2(dχ/dξ)^2
+
V(χ)
)
-
3/16π G
a ( (da/dξ)^2 + 1 )
].
We have dropped a surface term, which is irrelevant, because we consider the
difference of actions with the same boundary conditions <cit.>.
It is convenient to rescale the variables as follows:
ψ ≡ χ/f ,
ρ ≡ f a ,
ζ ≡ f ξ .
Then (<ref>) becomes
S_E
=
2 π^2
∫ dζ[
ρ^3
(
1/2ψ̇^2
+
U (ψ)
)
-
3/κρ(
ρ̇^2 + 1
)
] ,
where
κ ≡ 8π G f^2 ,
U(ψ) ≡ U_0(ψ) + U_1(ψ) ,
U_0(ψ) ≡ λ^4 ( 1 - cosψ ) ,
λ ≡ Λ/f ,
U_1(ψ) ≡ (1/f^4) V_1 ( f ψ cosγ + ϕ_∗ sinγ ) .
The Euclidean equations of motion are given as
ψ̈
+
3
ρ̇/ρψ̇ =
U'(ψ) ,
ρ̇^2 - 1
=
κ/3ρ^2
(
1/2ψ̇^2
-
U(ψ)
),
where (<ref>) is the Friedmann equation.
The bounce action B reads
B ≡ S_E[ψ_B] - S_E[ψ_+] ,
where ψ_B is the bounce solution, and ψ_+ is the value of the field
ψ at the false vacuum we start with, ψ_+ =0 in our case.
Similarly to <cit.>, we evaluate the bounce
action (<ref>) in the so-called thin-wall approximation, which holds
provided the following two conditions are satisfied:
* The height of the barrier of the potential is much
larger than the energy difference
Δ U ≡ U(ψ_+) - U(ψ_-)
between a false vacuum and another false vacuum, to which the tunnelling occurs.
* The width of the surface wall of the bubble, through which
the initial false vacuum decays, is much smaller than the bubble size.
In our case, the condition (<ref>) gives
Δ U ≪ 2 λ^4 .
We examine the remaining condition (<ref>) along the way.
The bounce action for a general potential within the thin-wall approximation has
been presented in <cit.>.
In terms of our variables, the bounce action is given in (<ref>) of
App. <ref>.
Defining
h_0 ≡ H_0/f ,
H_0 ≡ √( 8π G V_1(ϕ_∗ sinγ)/3 ) ,
the bounce action reads as follows:
B
≃ 2 · 27 π^2 (8λ^2)^4 / √( (Δ U - 48 λ^4 κ)^2 + 12 h_0^2 (48 λ^4) )
× 1/[
( Δ U + √( (Δ U - 48 λ^4 κ)^2 + 12 h_0^2 (48 λ^4) ) )^2
-
( 48 λ^4 κ )^2
] .
From (<ref>) we observe that gravitational back-reactions are negligible
whenever
κ ≪ max{ Δ U/(48 λ^4),
h_0/(2λ^2) } .
When (<ref>) is satisfied, the bounce action reduces to
B
≃ 2 · 27 π^2 (8λ^2)^4 / [ √( (Δ U)^2 + 12 h_0^2 (48 λ^4) )
· ( Δ U + √( (Δ U)^2 + 12 h_0^2 (48 λ^4) ) )^2 ]
.
The demand (<ref>) suggests that expression (<ref>) simplifies
further in two extreme cases: in the
following subsections we assume that either Δ U/(48 λ^4) is
much larger than h_0/(2λ^2) or vice versa.
§.§ Flat-space limit
Let us first look at the situation that space-time can be regarded as flat,
i.e. the effect of the curvature of the de Sitter space, represented by h_0, is
negligible:
Δ U/(48 λ^4) ≫ h_0/(2λ^2) ,
which we will refer to as flat-space limit.
In this case, the action (<ref>) reduces to the result of
Coleman <cit.>:
B
≃
B_0
=
27π^2 S_1^4/( 2 (Δ U)^3 ) ,
where
S_1
≡ 2 ∫ dζ (
U_0 (ψ_B)
-
U_0 (ψ_-)
)
= 8 λ^2 ,
We refer to (<ref>) for the explicit calculation in our set-up.
As shown in App. <ref>, the thickness of the surface wall
is ∼ 2/λ^2.
Recalling (<ref>), the thin-wall approximation is valid in the flat-space
limit if the bubble size ρ̅ satisfies
ρ̅
=
ρ̅_0
=
3 S_1/Δ U
=
24λ^2/Δ U ≫ 2/λ^2 ,
which is equivalent to
Δ U/(12 λ^4) ≪ 1 .
We observe that (<ref>) is satisfied due to (<ref>).
Note that in the flat-space limit
the condition (<ref>) for negligible gravitational
back-reaction reduces to
κ ≪ Δ U/(48 λ^4) .
§.§ De Sitter limit
Next, let us look at the opposite limit of the flat-space
limit (<ref>),
in which the effect of the curvature of the de Sitter space,
represented by h_0, is dominant:
Δ U/(48 λ^4) ≪ h_0/(2λ^2) .
We refer to this limit as de Sitter limit.
In this case the bounce action (<ref>) becomes
B ≃16 π^2 Λ^2 f/H_0^3 .
So far we have kept the inflaton potential general. In order to quantitatively
discuss constraints arising from a suppressed tunnelling rate, we specify the
inflaton potential in the next section.
§ EXAMPLES: CHAOTIC INFLATION
Let us study examples with an inflaton potential V_I(ϕ) given by a
monomial, i.e.
V_I(ϕ) =
V_p(ϕ) ≡ α_p ϕ^p/p! .
In this section, we will work in the unit M_P≡1.
Without loss of generality, we take α_p > 0 and assume that inflation
took place when ϕ > 0.
The associated slow-roll parameters are defined as follows:
ϵ_V (ϕ)
≡ (1/2)(
V_p'/V_p )^2
=
p^2/(2 ϕ^2) ,
η_V (ϕ)
≡ V_p''/V_p
=
p(p-1)/ϕ^2 .
In slow-roll inflation, the spectral index n_s and the tensor-to-scalar ratio
r can be calculated via
n_s = 1 - 6 ϵ_V (ϕ_∗) + 2 η_V (ϕ_∗) ,
r = 16 ϵ_V (ϕ_∗) ,
where ∗ refers to the value when the pivot scale exited the horizon.
The CMB observations constrain ϵ_V, |η_V| ≲ O(10^-2)
through the relations (<ref>), see for instance <cit.>.
The number of e-folds N is readily computed to read
N(ϕ)
=
|
∫_ϕ_end^ϕ
dϕV_p/V_p'|
=
|
∫_ϕ_end^ϕ
dϕϕ/p|
=
1/2p[
ϕ^2
]_ϕ_end^ϕ
=
1/2p(
ϕ^2 - ϕ_end^2
) ,
where we define ϕ_end by the condition
ϵ_V(ϕ_end) = 1 ,
which in the examples under consideration gives
ϕ_end = p/√(2) .
Inserting (<ref>) into (<ref>)
and solving for ϕ_∗
at a given N_∗ ≡ N(ϕ_∗) yields
ϕ_∗ =
√(
2p (
N_∗ + p/4 )
) .
The scalar power spectrum in slow-roll inflation is given as
P_s
=
V_p(ϕ_∗)/( 24π^2 ϵ_V(ϕ_∗) )
=
2.2 · 10^-9 ,
where the numerical value stems from CMB observations <cit.>.
The coefficient α_p in (<ref>), for a given N_∗, is determined
by first computing ϕ_∗ via (<ref>), then inserting this value
into (<ref>) and subsequently solving for α_p.
Explicitly,
P_s
=
α_p ϕ_∗^p+2/( 12π^2 p! p^2 )
=
2.2 · 10^-9 ,
thus
α_p
=
( 12π^2 p! p^2/ϕ_∗^p+2 )· P_s
=
( 12π^2 p! p^2/ϕ_∗^p+2 )· 2.2 · 10^-9 .
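For concreteness, the chain (<ref>)–(<ref>) is straightforward to evaluate numerically; the following sketch (in units M_P = 1; function and variable names are ours) returns ϕ_∗ and α_p for given p and N_∗:

import math

def inflation_params(p, N_star, P_s=2.2e-9):
    """phi_* and alpha_p for V = alpha_p phi^p / p!  (units M_P = 1)."""
    phi_star = math.sqrt(2.0 * p * (N_star + p / 4.0))
    alpha_p = 12.0 * math.pi**2 * math.factorial(p) * p**2 / phi_star**(p + 2) * P_s
    return phi_star, alpha_p

for p in (1, 2):
    phi_star, alpha_p = inflation_params(p, N_star=60)
    print(f"p = {p}: phi_* = {phi_star:.2f} M_P, alpha_p = {alpha_p:.2e}")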
Now, we use this input data from inflation models constrained by
CMB observations to estimate the corresponding tunnelling rate in Dante's
Inferno model.
The parameter Δ U, as defined in (<ref>), reads in the current
example as follows:
Δ U
=
(1/f^4)(
V_1 (ϕ_∗ sinγ)
-
V_1 (-2π f cosγ + ϕ_∗ sinγ)
)
=
(1/f^4)(
V_p (ϕ_∗)
-
V_p (ϕ_∗ - 2π f cotγ)
)
≃ cotγ · 2π V_p'(ϕ_∗)/f^3 ,
where we have used two ingredients to obtain the last line: Firstly, we
employed (<ref>), more precisely
2π f cotγ ≃ 2π f_2 ,
and, secondly, due to the smallness of the slow-roll parameters ϵ_V
and
η_V, see (<ref>), it follows that the inflaton potential
V_p(ϕ) around ϕ∼ϕ_∗ does not change much over the Planck
scale, i.e. M_P ≳ 2π f_2.
In the following two subsections we examine the tunnelling rate
in two scenarios: Firstly, in the flat-space limit and, secondly, in the de
Sitter limit.
§.§ Flat-space limit
We begin with the parameter region of Dante's Inferno model in which the
flat-space limit (<ref>) is appropriate, i.e.
Δ U/(48 λ^4) ≫ h_0/(2λ^2) .
For negligible gravitational back-reaction the bounce action in the flat-space
limit is provided in (<ref>).
We investigate the validity of the negligibility of the gravitational
back-reaction later in the subsection.
Inserting (<ref>) into (<ref>) yields
B
=
27 · 2^8 Λ^8 f/π(
tanγ/V_p'(ϕ_∗))^3 ,
where we have used S_1 = 8 λ^2 for our set-up (c.f. (<ref>) in
App. <ref>).
Then, for a suppressed tunnelling process, i.e. B ≫ 1, the following
condition has to hold:
tanγ≫(
27 · 2^8 Λ^8 f/π)^-1/3
V_p'(ϕ_∗)
tanγ_T.
Using (<ref>)–(<ref>), the explicit form of tanγ_T reads as
tanγ_T
=
2^5/6π^7/3·1/(f Λ^8)^1/3·[ P_s
(p/4 N_∗+p)^3/2] .
In (<ref>) the numerical factor in the squared brackets is determined by
the parameters of the inflation model p,N_∗, and CMB observations
(<ref>).
The constraint (<ref>) should be compared with the defining condition of
Dante's
Inferno model (<ref>), which in terms of the parameters of the model
gives
Λ^4/f ≫ cotγ ·
V_p'(ϕ_∗) ,
or equivalently
tanγ ≫ ( f/Λ^4 )
V_p'(ϕ_∗)
≡ tanγ_DI .
In the above, we have used
dV_p/dϕ(ϕ)
=
d/dϕ V_1 (ϕsinγ)
=
dϕ_1/dϕdV_1/dϕ_1 (ϕ_1 = ϕsinγ)
=
sinγdV_1/dϕ_1(ϕ_1) ,
which follows from (<ref>) and the usual chain rule.
Again, using (<ref>)–(<ref>) allows to specialise tanγ_DI to
tanγ_DI
=
24 √2 π^2 · ( f/Λ^4 ) · [ P_s
( p/(4 N_∗+p) )^3/2 ] .
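The two lower bounds (<ref>) and (<ref>) are easy to compare numerically. The sketch below (our own illustration, in units M_P = 1) evaluates tanγ_T and tanγ_DI for Λ/f = 10, p = 2, N_∗ = 60, and also recovers the crossover ratio (27 · 2^8/π)^1/4 ≈ 7 found below:

import math

P_s, p, N_star = 2.2e-9, 2, 60
bracket = P_s * (p / (4.0 * N_star + p))**1.5        # common factor P_s (p/(4N*+p))^{3/2}

def tan_gamma_T(f, lam_over_f=10.0):
    Lam = lam_over_f * f
    return 2.0**(5.0/6.0) * math.pi**(7.0/3.0) * bracket / (f * Lam**8)**(1.0/3.0)

def tan_gamma_DI(f, lam_over_f=10.0):
    Lam = lam_over_f * f
    return 24.0 * math.sqrt(2.0) * math.pi**2 * bracket * f / Lam**4

for f in (1e-4, 3e-4, 1e-3):                          # f ~ f_1 in units of M_P
    print(f"f = {f:.0e}: tan(gamma_T) = {tan_gamma_T(f):.2e}, "
          f"tan(gamma_DI) = {tan_gamma_DI(f):.2e}")

# crossover of the two bounds: tan(gamma_T) > tan(gamma_DI) iff Lambda/f exceeds
print((27.0 * 2.0**8 / math.pi)**0.25)                # ~ 6.9, cf. Lambda/f >~ 7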
We are particularly interested in the scenario for which condition (<ref>)
for suppressed tunnelling enforces a stronger condition on the model
than (<ref>).
From (<ref>) and (<ref>), this is the case for
tanγ_T > tanγ_DI .
In terms of the parameters of the model under consideration, the
inequality (<ref>) reduces to
(
27 · 2^8 Λ^8 f/π)^-1/3
>
f/Λ^4 ,
or equivalently
Λ/f≳ 7 .
Whenever (<ref>) is satisfied,
the constraint (<ref>), which ensures the suppression of the tunnelling, is
more restrictive than the defining condition (<ref>) of Dante's Inferno
model. In other words, in the region of the parameter space where (<ref>)
holds tunnelling is not automatically suppressed in Dante's Inferno model;
thus, an additional constraint arises[As discussed in the beginning
of Sec. <ref>, we only
consider the region f_1, f_2, Λ≳ H.].
Note that in Dante's Inferno model derived from a higher dimensional gauge
theory discussed in Sec. <ref>, (<ref>) assures that
the tunnelling process is suppressed for natural values of the
parameters (<ref>) (at least in the simplest version of the model).
To demonstrate how condition (<ref>) constrains the parameter space,
we illustrate the scenarios Λ/f = 10, p=1,2, and N_∗ = 60 in
Fig. <ref> and Fig. <ref>, respectively.
If one wishes to fix a certain value of tanγ then the
condition (<ref>) yields a lower bound on f ≃ f_1.
For example, if we demand tanγ∼ 5 · 10^-2 then f ≳
10^-4 is required in the above cases,
as can be read off from (<ref>) or Fig. <ref> and
Fig. <ref>.
Now, let us focus on condition (<ref>), which defines
the flat-space limit.
From (<ref>), (<ref>), and (<ref>), one infers that
condition (<ref>) becomes
Δ U/(24 λ^2 h_0)
=
cotγ · (π/12)·V_p'(ϕ_∗)/(Λ^2 H_0) ≫ 1 ,
which we recast as
tanγ ≪
F
≡ (π/12)·V_p'(ϕ_∗)/(Λ^2 H_0) .
Specialising F via (<ref>)–(<ref>) to the current model, we obtain
F
=
(1/Λ^2)·( π^2 √(P_s) · p/(4 N_∗+p) )
∼ (1/Λ^2)·O(10^-6) ,
where the last numerical value holds for p=1,2 with N_∗ = 50-60.
As discussed around (<ref>), Dante's Inferno model requires
tanγ ≲ O(10^-1) or less.
Thus, the flat-space limit is appropriate for
Λ^2 ≪ O(10^-5) .
From (<ref>) and (<ref>), one readily computes the following ratio:
F/tanγ_DI
=
(π/12)·Λ^2/(f H_0)
=
( 1/(24 √(2 P_s)) )·√( (4N_∗+p)/p )·Λ^2/f
∼ O(10^5) ·(Λ/f)·Λ ,
where we have used p=1,2 and N_∗∼ 50-60.
Hence, when Λ/f ≳ 7 as in (<ref>), then (<ref>) implies
F ≫ tanγ_DI, provided Λ ≳ O(10^-5) holds.
Next, we examine condition (<ref>) for negligible gravitational
back-reaction.
In terms of the parameters of the current model, we obtain
tanγ ≪
K ≡ (π/24)·V_p'(ϕ_∗)/(Λ^2 f^3) .
By means of (<ref>)–(<ref>), we explicitly parametrise K as
K
=
( 1/(Λ^2 f^3) )·[
√2 π^3 P_s·(
p/(4N_∗+p) )^3/2 ]
∼ ( 1/(Λ^2 f^3) )·O(10^-11) ,
where the last numerical value holds for p=1,2 with N_∗ = 50-60.
As we assume tanγ≪ 1 in Dante's Inferno model, if K ≳ 1
then (<ref>) does not introduce a further constraint.
Thus, K ≳ 1 whenever
Λ^2 f^3 ≲ O(10^-11) .
For the range of the parameter f as displayed in Fig. <ref> to
Fig. <ref>, K is always much greater than 1 and, therefore,
the gravitational back-reaction can be neglected.
We note that (<ref>) and (<ref>) imply the following ratio:
K/F
=
H_0/(2f^3) .
Then H_0 ∼ O(10^-5), for p=1,2 with N_∗ = 50-60 as previously
used, implies that K ≳ F for f ≲ O(10^-2).
In this case, tanγ≪ K is automatically satisfied if tanγ≪
F.
Finally, we verify the validity of the thin-wall approximation.
Inserting (<ref>) and (<ref>) into the condition (<ref>)
for the validity of the thin-wall approximation gives
(6/π)·Λ^4/( f V_1' ) ≫ 1 .
We have used (<ref>) to obtain (<ref>).
Using the parameter s, as introduced in (<ref>), one can
rewrite (<ref>) as
(6/π) s
≫
1 .
We recall that s≫ 1 is one of the conditions (<ref>) required in
Dante's Inferno model.
Thus, the condition (<ref>) for the validity of the thin-wall
approximation gives numerically the same constraint on Dante's Inferno model
as (<ref>), up to a minor difference of a (1) numerical factor.
As a consequence, the thin-wall approximation is always valid in Dante's
Inferno model in the flat-space limit.
Finally, we notice from (<ref>) that, within the class of monomial inflation
potentials, the tunnelling rate either stays constant (for p=1) or decreases
(for p>1) as ϕ decreases below ϕ_∗.
Therefore, it is sufficient to estimate the tunnelling rate at
ϕ=ϕ_∗ in order to verify the suppression of the tunnelling process
in these cases.
§.§ De Sitter limit
In this subsection, we study the de Sitter limit (<ref>) which gives
Δ U/(24 λ^2 h_0)
=
cotγ · (π/12)·V_p'(ϕ_∗)/(Λ^2 H_0) ≪ 1 ,
or equivalently
tanγ ≫
F ≡
(π/12)·V_p'(ϕ_∗)/(Λ^2 H_0) ∼ (1/Λ^2)·O(10^-6) ,
where the last approximation holds for p=1,2 with N_∗∼ 50-60.
As discussed around (<ref>), Dante's Inferno model requires tanγ ≲ O(10^-1) or less.
Then (<ref>) implies at least
Λ^2 ≫ O(10^-5) .
One should keep in mind that the right hand side of (<ref>) can be
even smaller, depending on the desired tanγ.
In the de Sitter limit (<ref>), the bounce is given by (<ref>)
when gravitational back-reaction is negligible.
We will examine gravitational back-reaction shortly.
In this case, the condition for a suppressed tunnelling rate, i.e. B ≫ 1,
becomes
Λ^2 f ≫ H_0^3/(16π^2) ∼ O(10^-16) ,
where we have used the value of H_0 for p=1,2 with N_∗ = 50-60.
In the parameter region of a sufficiently rapid pre-factor A, i.e. all
relevant scales are above the Hubble scale at the time of inflation, condition
(<ref>) is not a constraint at all. Therefore, the tunnelling
process is suppressed provided the gravitational back-reaction is negligible
and the thin-wall approximation is applicable.
Consequently, we focus on the gravitational back-reaction for the de Sitter
limit first.
In this case, condition (<ref>) for negligibility of the gravitational
back-reaction reads
κ ≪ h_0/(2λ^2) .
In terms of the original parameters of the model, (<ref>)
becomes
2 Λ^2 f ≪ H_0 ∼ O(10^-5) .
By means of (<ref>), condition (<ref>) implies
f ≪ O(1) .
Reminding ourselves of one of the defining conditions of Dante's Inferno
model (<ref>), we conclude that (<ref>) is always satisfied
in this model.
Therefore, in Dante's Inferno model, the gravitational back-reaction is always
negligible in the de Sitter limit.
Finally, let us verify the validity of the thin-wall approximation in the de
Sitter limit.
To begin with, we note that the bubble size ρ̅ in the
de Sitter limit is always smaller than the bubble size ρ̅_0 in
the flat-space limit, which follows from the definition (<ref>) of
ρ̅ in App. <ref> and the fact that
x and y in (<ref>) are positive numbers.
For a quantitative estimate of ρ̅, we specialise
x and y of (<ref>) to the parameters in our model
(see (<ref>)):
x =
κ·(
Δ U/(48 λ^4) )^-1 ,
y
=
6 h_0^2/(κ Δ U) - 1
≃ 6 h_0^2/(κ Δ U) .
The last approximation in (<ref>) always holds in the de Sitter limit.
From (<ref>) we immediately infer
2xy
≃ (
Δ U/(24λ^2 h_0) )^-2 ≫ 1 ,
where the last hierarchy is a consequence of the de Sitter limit (<ref>).
Moreover, the negligible gravitational back-reaction in the de Sitter limit, as
discussed in (<ref>), allows to deduce
x/y
=
2κ^2 ·(
h_0/(2λ^2) )^-2 ≪ 2 .
By means of (<ref>) and (<ref>), we then obtain
ρ̅^2
≃ρ̅_0^2/2xy
=
(
24 λ^2/)^2
(
/24λ^2h_0)^2
=
1/h_0^2.
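This last simplification can be verified symbolically; a quick sketch (symbol names are ours):

import sympy as sp

lam, h0, dU = sp.symbols('lambda h_0 DeltaU', positive=True)
rho0_sq = (24 * lam**2 / dU)**2            # flat-space critical radius squared
two_xy = (dU / (24 * lam**2 * h0))**(-2)   # dominant term of 1 + 2xy + x^2
print(sp.simplify(rho0_sq / two_xy))       # -> h_0**(-2)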
Since the thickness of the surface wall is given as ∼ 2/λ^2 as
described in App. <ref>, the second condition (<ref>) of the
thin-wall approximation becomes
λ^2 ρ̅/2≃λ^2/2h_0
=
Λ^2/2H_0 f∼Λ^2/2f·(10^5)
≫ 1,
where we have used the value of H_0 for p=1,2 and N_∗ = 50-60.
Consequently, (<ref>) and (<ref>) imply that (<ref>)
is always satisfied in the current model within the de Sitter limit.
Therefore, the thin-wall approximation is always appropriate in this limit.
§ SUMMARY AND DISCUSSIONS
In this article, we studied tunnelling in Dante's Inferno model
within the thin-wall approximation and subsequent constraints on the parameter
space.
In general, we argued that the tunnelling process can only become fatal for
inflation if all scales in Dante's Inferno model satisfy Λ, f_1, f_2 ≳ H,
and if B is at most of order one. All other parameter regions are intrinsically
safe from tunnelling in the leading order of ħ.
We have shown in (<ref>) that the flat-space limit is appropriate for
Λ^2 ≪ O(10^-5).
In the flat-space limit, the parameter space is simultaneously constrained by
the condition (<ref>) for a suppressed tunnelling rate, and one of the
defining conditions (<ref>) of Dante's Inferno model, i.e.
tanγ = f_1/f_2≫max{tanγ_T, tanγ_DI} .
We have seen that for a fixed ratio Λ/f, a lower bound on the parameter
f ≃ f_1 is imposed by (<ref>), for a given f_1/f_2.
In particular, we have shown in (<ref>) that
tanγ_T is bigger than tanγ_DI
when Λ/f ≳ 7,
in which case the condition for a suppressed tunnelling rate
gives a stronger constraint than the defining condition
of Dante's Inferno model.
Since the parameter space is multi-dimensional, one has to choose certain
parameters to obtain a visualisable subspace.
We computed the bounds numerically in monomial chaotic inflation with
Λ/f = 10,
p=1,2 with N_∗ = 50-60 and exemplified these in Fig.
<ref> – <ref>.
While numerical values of the bounds were given in the examples, the method for
obtaining the bound is clearly general and can be straightforwardly applied to
other forms of the inflaton potential.
In the de Sitter limit,
which was shown to be appropriate for Λ^2 ≫ O(10^-5)
in (<ref>),
the condition for a suppressed tunnelling rate is
trivial in the problematic region Λ, f_1, f_2 ≳ H.
In other words, the tunnelling process is
always suppressed in this limit.
We summarized those constraints on the parameter space in
Table <ref>.
Additionally, we identified in each limit the parameter region in which the
thin-wall approximation is valid and the gravitational back-reactions are
negligible.
It turned out that this covers a large part of the parameter region of interest.
The original article <cit.> mentioned that a useful value of
Λ lies in the range from 10^-3 M_P to 10^-1 M_P,
and typical
values for f_1 and f_2 are 10^-3 M_P and 10^-1 M_P, respectively.
Our results confirm that these values are safe from tunnelling,
and further provide explicit constrains on these parameters from the condition
of a suppressed tunnelling rate.
For the article at hand, we restricted ourselves to the level of an effective
field theory.
For example, the parameter Λ in (<ref>), which controls the height
of the sinusoidal potential, was treated as an input parameter.
However, when the model is embedded in a UV theory
the height of the sinusoidal potential could be a function of the
required monodromy number N_mon ≡ Δϕ·cosγ/(2π f_2) ∼ O(10) · M_P / f_2.
(Here Δϕ denotes the field distance the inflaton field travels
during the inflation.)
In a related model embedded in string theory, for instance, the height of the
sinusoidal potential was shown <cit.> to be proportional to
e^-γ_br N_mon, where γ_br is a parameter independent of
N_mon.
Such a rapid decrease of the height of the sinusoidal potential for increasing
N_mon would give rise to much severer constraint on the axion decay
constants than the ones given in this article.
Consequently, UV completions of Dante's Inferno model and the constraints from
it are certainly an important direction to be investigated in the future.
Acknowledgments
We would like to thank Yoji Koyama, Olaf Lechtenfeld, and
Marco Zagermann for useful discussions.
This collaboration was supported by a Short Term Scientific Mission (STSM)
under COST action MP1405.
MS was supported by the DFG research training group GRK1463 “Analysis,
Geometry, and String Theory” and the Insitut für Theoretische Physik of the
Leibniz Universität Hannover. MS is currently supported by Austrian Science
Fund (FWF) grant P28590.
MS would like to thank Manipal Center for Natural Sciences,
Manipal University for hospitality and support during the visit.
§ EFFECTS OF TIME EVOLUTION OF INFLATON ON TUNNELLING
In this appendix we justify the claim of regarding ϕ as being fixed in
time during inflation.
Accounting for changes of the tunnelling rate through a time variation of
ϕ(ξ) is achieved by modifying (<ref>) as follows:
U_1(ψ,ζ)
≡ (1/f^4) V_1 (f ψ cosγ + ϕ(ζ) sinγ) ,
i.e. one simply keeps the time dependent ϕ(ζ) instead of choosing the
reference point ϕ_∗.
§.§ Flat-space limit
In the flat-space limit
discussed in Sec. <ref>,
the time variation of U_1 may become relevant
through its appearance in Δ U.
As in (<ref>), Δ U can be estimated as
Δ U = U(ψ_+) - U(ψ_-)
≃ cotγ · 2π V_I'(ϕ(ξ))/f^3 ,
where V_I is defined in (<ref>).
To judge the impact of a time dependent ϕ, we examine the time variation
of Δ U in a time interval Δζ relative to Δ U itself.
In detail,
(1/Δ U)·(dΔ U/dξ)·Δξ
≃ (V_I''/V_I')·(dϕ/dξ)·Δξ
≃ η_V H Δξ ,
where we used the slow-roll approximation of the equations of motion, i.e.
3 H dϕ/dξ ≃
V_I'
,
3 H^2
≃ V_I .
Recalling Sec. <ref>, we assume that
all relevant physical parameters are greater than the
Hubble expansion rate H, i.e.
Λ, f_1, f_2 ≳ H.
Consequently, during a time interval Δξ≃ 1/H the relative
change (<ref>) becomes of order η_V, and we may safely neglect the
time dependence of Δ U in the slow-roll regime η_V ≲ O(10^-2).
§.§ De Sitter limit
In the de Sitter limit discussed in
Sec. <ref>,
a time variation of U_1 enters via the time variation of
V_I.
In slow-roll inflation models, it is well known that the time variation of
V_I is suppressed by the slow-roll parameters. To be explicit, we find
(1/V_I)·(dV_I/dξ)·Δξ
≃ (V_I'/V_I)·(dϕ/dξ)·Δξ
≃
2 ϵ_V H Δξ ,
where we again made use of (<ref>).
Hence, (<ref>) is small in a time-scale Δξ∼ 1/H,
as ϵ_V ≲ O(10^-2).
§ SUMMARY OF THE BOUNCE SOLUTION IN THE THIN-WALL APPROXIMATION
The bounce action for general potential in the thin-wall approximation has been
given in <cit.> (<cit.> is also a useful read).
Here, we review the necessary results.
The relevant Euclidean action is of the form
S_E
=
2π^2
∫
dζρ^3
(
1/2ψ̇^2
+
U(ψ)
)
-
3ρ/κ_P(
ρ̇^2+1
).
The action (<ref>) has the same form[
Precisely speaking, variables in (<ref>) were dimensionless,
but it is straightforward to implement this point in the comparison,
e.g. by setting M_P ≡ 1 as we did.] as (<ref>),
with parameters κ being replaced with κ_P defined by
κ_P ≡ 8π G = M_P^-2 .
The bounce action is introduced as
B ≡ S_E[ψ_B] - S_E[ψ_+] ,
where ψ_B is the bounce solution.
In the thin-wall approximation, which is appropriate whenever
conditions (<ref>) and (<ref>) hold, we evaluate (<ref>) by dividing
the integration region into three parts:
Outside the bubble, at the surface of the bubble, and inside the bubble.
Outside the bubble, the bounce and false vacuum are identical;
therefore, the contribution B_out to the bounce action is
B_out = 0 .
At the surface wall of the bubble,
we can replace ρ by the position of the
centre of the surface wall
ρ̅.
Then, the contribution to the bounce action from
the surface wall B_w is given by
B_w
=
2 π^2 ρ̅^3 S_1 ,
where
S_1
≡ 2 ∫ dζ (
U_0 (ψ_B)
-
U_0 (ψ_-)
) .
Inside the bubble, ψ is constant, such that (<ref>) allows to
deduce
dζ
=
dρ (
1 - (κ_P/3) ρ^2 U(ψ)
)^-1/2 ,
and we then obtain
B_in =
-(12π^2/κ_P) ∫_0^ρ̅ dρ ρ [
(
1 - (κ_P/3) ρ^2 U_-
)^1/2
-
(
1 - (κ_P/3) ρ^2 U_+
)^1/2 ]
=
(12π^2/κ_P^2) [
U_-^-1(
(1 - (κ_P/3) ρ̅^2 U_-)^3/2
-1
)
-
U_+^-1(
(1 - (κ_P/3) ρ̅^2 U_+)^3/2
-1
)
] ,
where U_+ ≡ U(ψ_+) and U_- ≡ U(ψ_-) are the energy
density of the false vacuum we start with and another false vacuum we end with,
respectively.
Extremising B with respect to ρ̅ gives
ρ̅^2
=
ρ̅_0^2/( 1 + 2xy + x^2 ) ,
where
ρ̅_0
≡ 3 S_1/(U_+ - U_-) ,
is the critical bubble size without the presence of gravity, and x and y
are
defined as follows:
x ≡ (ρ̅_0^2/4)·κ_P (U_+ - U_-)/3 ,
y ≡ (U_+ + U_-)/(U_+ - U_-) .
The bounce action is obtained as
B ≃
B_0 r(x,y) ,
where
B_0 ≡ 27 π^2 S_1^4/( 2 (U_+-U_-)^3 ) ,
which is the bounce action in flat space. The function r(x,y) is defined
as follows:
r(x,y)
≡ 2 · [ (1+xy) - √(1+2xy+x^2) ] / [ x^2 (y^2-1) √(1+2xy+x^2) ]
=
2 · [ (1+xy)^2 - (1+2xy+x^2) ] / [ x^2 (y^2-1)
√(1+2xy+x^2) ( (1+xy) + √(1+2xy+x^2) ) ]
=
2 / [ √(1+2xy+x^2) ( (1+xy) + √(1+2xy+x^2) ) ] .
To obtain the bounce action for the set-up of this article, one simply has to
replace κ_P with κ as mentioned earlier.
Then, we use the result (<ref>) of App. <ref>:
S_1 = 8 λ^2 ,
together with
U_+ - U_- = Δ U ,
U_+ + U_- = 6 h_0^2/κ - Δ U .
Putting all the pieces together we obtain
B_0
=
27 π^2 S_1^4/( 2 (Δ U)^3 )
=
27 π^2 (8λ^2)^4/( 2 (Δ U)^3 ) ,
x
=
3 S_1^2 κ/(4 Δ U)
=
48 λ^4 κ/Δ U ,
y = 6 h_0^2/(κ Δ U) - 1 .
Moreover, we find
1 + 2xy + x^2
=
(1 - x)^2 + 2x · 6 h_0^2/(κ Δ U)
=
( 1/(Δ U)^2 )(
(Δ U - 48 λ^4 κ)^2
+ 12 h_0^2 (48 λ^4)
) ,
and
1 + xy
=
( 1/(Δ U)^2 )(
(Δ U)^2
-
48 λ^4 κ · Δ U
+
6 h_0^2 (48 λ^4)
) .
Finally, we arrive at
B ≃ 27 π^2 (8λ^2)^4 / [ √( (Δ U - 48 λ^4 κ)^2 + 12 h_0^2 (48 λ^4) )
× (
(Δ U)^2
-
48 λ^4 κ · Δ U
+
6 h_0^2 (48 λ^4)
+
Δ U √( (Δ U - 48 λ^4 κ)^2
+ 12 h_0^2 (48 λ^4) ) ) ]
=
2 · 27 π^2 (8λ^2)^4 / [ √( (Δ U - 48
λ^4 κ)^2
+ 12 h_0^2 (48 λ^4) )
× (
( Δ U
+
√( (Δ U - 48 λ^4 κ)^2
+ 12 h_0^2 (48 λ^4) ) )^2
-
( 48 λ^4 κ )^2 ) ]
,
which is the justification for (<ref>) used in the main text.
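For reference, a direct numerical transcription of this thin-wall result with the sinusoidal wall tension S_1 = 8λ^2 (a sketch; the dimensionless inputs λ, κ, h_0 and Δ U are those of the main text, and the function name is ours):

import math

def bounce_action(lam, kappa, h0, dU):
    """Thin-wall bounce B = B0 * r(x, y) for the sinusoidal wall, S1 = 8 lam^2."""
    S1 = 8.0 * lam**2
    B0 = 27.0 * math.pi**2 * S1**4 / (2.0 * dU**3)    # flat-space bounce action
    x = 3.0 * S1**2 * kappa / (4.0 * dU)              # = 48 lam^4 kappa / Delta U
    y = 6.0 * h0**2 / (kappa * dU) - 1.0
    root = math.sqrt(1.0 + 2.0 * x * y + x**2)
    r = 2.0 / (root * ((1.0 + x * y) + root))
    return B0 * r

# kappa, h0 -> 0 recovers the flat-space result B0 up to tiny corrections:
print(bounce_action(lam=1.0, kappa=1e-12, h0=1e-12, dU=0.1))
print(27.0 * math.pi**2 * 8.0**4 / (2.0 * 0.1**3))    # B0 for the same inputs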
§ INSTANTON FOR SINUSOIDAL POTENTIAL AND THE THIN-WALL APPROXIMATION
In the thin-wall approximation <cit.>, where the bubble radius
ρ̅ is much larger than the thickness of the surface wall of the
bubble, one can neglect the change in ρ at the wall.
(A more quantitative definition of the thickness of the surface wall
is given below.)
The problem of finding an O(4)-symmetric
bounce solution reduces to solving an instanton equation associated to the
following one-dimensional action:
S_ψ
=
∫ dζ[
1/2ψ̇^2
+
U_0(ψ)
],
where the potential for the case of our interest is the one defined
in (<ref>), i.e.
U_0(ψ)
=
λ^4
(
1 - cosψ) .
Note that U_1(ψ) in (<ref>) differs from its
constant part only by O(Δ U).
Due to (<ref>), this difference is irrelevant in the equation of motion and
can be dropped
in the thin-wall approximation in the leading order.
The equation of motion derived from the action (<ref>) reads
ψ̈ = ∂ U_0/∂ψ ,
and can be solved as follows: Multiplying ψ̇ on both sides
of (<ref>) and integrating once with respect to ζ gives
d ψ/dζ
=
±√(2 U_0(ψ)).
The action of the solution ψ_± of (<ref>) becomes
S_1
≡ S_ψ [ψ_±]
= ∫ dζ [
(1/2)ψ̇_±^2
+
U_0(ψ_±)
]
=
∫ dζ
2 U_0
=
∫ dψ √(2 U_0)
=
8 λ^2 .
The solution ψ_± of the equation (<ref>) is given by
ψ_± (ζ)
=
4 tan^-1[
exp( ±λ^2 (ζ - ζ_0) )
] ,
where ζ_0 is an integration constant.
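Both the wall tension S_1 = 8λ^2 and the instanton equation can be checked numerically; a short sketch (with ζ_0 = 0 and an arbitrary illustrative value of λ):

import numpy as np
from scipy.integrate import quad

lam = 0.7                                            # illustrative value
U0 = lambda psi: lam**4 * (1.0 - np.cos(psi))

# wall tension: S1 = integral of sqrt(2 U0) over one period, expected 8 lam^2
S1, _ = quad(lambda psi: np.sqrt(2.0 * U0(psi)), 0.0, 2.0 * np.pi)
print(S1, 8.0 * lam**2)                              # agree to quadrature accuracy

# instanton psi_+(zeta) = 4 arctan(exp(lam^2 zeta)) obeys dpsi/dzeta = sqrt(2 U0)
zeta = np.linspace(-4.0, 4.0, 9)
psi_p = 4.0 * np.arctan(np.exp(lam**2 * zeta))
dpsi = 4.0 * lam**2 * np.exp(lam**2 * zeta) / (1.0 + np.exp(2.0 * lam**2 * zeta))
print(np.max(np.abs(dpsi - np.sqrt(2.0 * U0(psi_p)))))   # ~ 0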
The asymptotic behaviour of the instanton (<ref>) at ζ→
+∞ is
π/2
-
ψ_+/4∼ψ_-/4∼
e^-λ^2(ζ-ζ_0) .
The asymptotic behaviour of the instanton (<ref>) at ζ→
-∞ is
ψ_+/4∼π/2
-
ψ_-/4∼
e^λ^2(ζ-ζ_0) .
These asymptotic behaviours are anticipated from the mass term at the vacua.
These exponential decays define the thickness of the surface wall
to be 2/λ^2.
Then the second condition (<ref>) of the thin-wall approximation, applied to
the bubble radius ρ̅ defined in (<ref>), gives
ρ̅≫2/λ^2.
In the flat-space limit, we have
ρ̅ = ρ̅_0
=
3 S_1/Δ U
=
24λ^2/Δ U .
Inserting (<ref>) into (<ref>), we obtain the consistency
condition for the thin-wall approximation in the flat-space limit:
Δ U/(12λ^4) ≪ 1 .
This condition is automatically satisfied due to the first condition
(<ref>)
of the thin-wall approximation, applied to our model (<ref>).
|
http://arxiv.org/abs/1701.07631v1 | 20170126094813 | Kinematic and stellar population properties of the counter-rotating components in the S0 galaxy NGC 1366 | [
"L. Morelli",
"A. Pizzella",
"L. Coccato",
"E. M. Corsini",
"E. Dalla Bontà",
"L. M. Buson",
"V. D. Ivanov",
"I. Pagotto",
"E. Pompei",
"M. Rocco"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Dipartimento di Fisica e Astronomia “G. Galilei”, Università di Padova,
vicolo dell'Osservatorio 3, I-35122 Padova, Italy
lorenzo.morelli@unipd.it
INAF-Osservatorio Astronomico di Padova, vicolo dell'Osservatorio 2,
I-35122 Padova, Italy
European Southern Observatory, Karl-Schwarzschild-Strasse 2,
D-85748 Garching bei München, Germany
European Southern Observatory, Avenida Alonso de Córdova 3107,
Vitacura, Casilla 19001, Santiago de Chile, Chile
Many disk galaxies host two extended stellar components
that rotate in opposite directions. The analysis of the stellar
populations of the counter-rotating components provides constraints
on the environmental and internal processes that drive their
formation.
The S0 NGC 1366 in the Fornax cluster is known to host a stellar
component that is kinematically decoupled from the main body of the galaxy. Here we successfully
separated the two counter-rotating stellar components to
independently measure the kinematics and properties of their stellar
populations.
We performed a spectroscopic decomposition of the spectrum obtained
along the galaxy major axis and separated the relative contribution
of the two counter-rotating stellar components and of the
ionized-gas component. We measured the line-strength indices of the
two counter-rotating stellar components and modeled each of them
with single stellar population models that account for the
α/Fe overabundance.
We found that the counter-rotating stellar component is younger, has nearly the same metallicity, and is less α/Fe enhanced than
the corotating component. Unlike most of the counter-rotating
galaxies, the ionized gas detected in NGC 1366 is neither associated
with the counter-rotating stellar component nor with the main
galaxy body. On the contrary, it has a disordered distribution
and a disturbed kinematics with multiple velocity components
observed along the minor axis of the galaxy.
The different properties of the counter-rotating stellar components
and the kinematic peculiarities of the ionized gas suggest that
NGC 1366 is at an intermediate stage of the acquisition process, building the counter-rotating components with some gas clouds still
falling onto the galaxy.
Kinematic and stellar population properties of the
counter-rotating components in the S0 galaxy NGC 1366Based
on observations made with ESO Telescopes at the La Silla-Paranal
Observatory under programmes 075.B-0794 and 077.B-0767.
L. Morelli1,2
A. Pizzella1,2
L. Coccato3
E. M. Corsini1,2
E. Dalla Bontà1,2
L. M. Buson2
V. D. Ivanov3,4
I. Pagotto1
E. Pompei4
M. Rocco1
================================================================================================================================================================================================================================================
§ INTRODUCTION
The photometric and kinematic analysis of nearby objects reveals that
disk galaxies may host decoupled structures on various scales, from
a few tens of pc <cit.>
to several kpc <cit.>.
In particular, observational evidence for two stellar disks, two
gaseous disks, or for a gaseous disk and a stellar disk rotating in
opposite directions have been found on large scales in galaxies of
different morphological types <cit.>. Counter-rotating stellar and/or gaseous disks occur in
∼30% of S0 galaxies <cit.> and in
∼10% of spirals <cit.>.
Different processes have been proposed to explain the formation of a
galaxy with two counter-rotating stellar disks, and each formation
scenario is expected to leave a noticeable signature in the stellar
population properties of the counter-rotating components.
A counter-rotating stellar disk can be built from gas accreted with an
opposite angular momentum with respect to the pre-existing galaxy from
the environment or from a companion galaxy. The counter-rotating gas
settles on the galaxy disk and forms the counter-rotating stars. In
this case, the gas is kinematically associated with the counter-rotating
stellar component, which is younger and less massive than
the main body of the galaxy <cit.>.
Another viable, but less probable, formation process is related to the
major merger between two disk galaxies with opposite rotation. The
difference in age of the two counter-rotating components depends on
the stellar population of the progenitors and on the timescale of the
star formation triggered by the binary merger. Moreover, the two
stellar disks are expected to have a different thickness
<cit.>.
Finally, the dissolution of a bar or triaxial stellar halo can build
two counter-rotating stellar components with similar age and mass
without involving gas. One of them is rotating in the same direction
as the bulge and disk of the pre-existing galaxy <cit.>.
These predictions are difficult to test, since outside our Galaxy
it is a hard task to separate the individual components of a composite stellar population. However, this is possible in a few galaxies because of
the difference in velocity of their extended counter-rotating stellar
components. Counter-rotating galaxies are therefore ideal laboratories
for studying how galaxies grow by episodic or continuous accretion of
gas and stars through acquisition and merging events.
<cit.> presented a spectroscopic decomposition
technique that allows separating the relative contribution of two
stellar components from the observed galaxy spectrum. This allows us to
study the kinematics and spectroscopic properties of individual
components independently, minimizing their cross-contamination along
the line of sight. We applied this technique to many of the galaxies
known to host counter-rotating stellar disks with the aim of
constraining their formation process <cit.>. In most of these cases, the available
evidence supports the hypothesis that stellar counter-rotation is the end product of
a retrograde acquisition of external gas and subsequent star
formation. Other teams developed their own algorithms for
separating the kinematics and stellar populations of
counter-rotating galaxies and found results similar to ours
<cit.>.
NGC 1366 is a bright spindle galaxy (Fig. <ref>) in the
Fornax cluster at a distance of 17 Mpc <cit.>. It is
classified as S0^0 by <cit.> and S0_1(7)/E7 by <cit.>
because it has a highly inclined thin disk. Although NGC 1366
belongs to the LGG 96 group <cit.>, it does not have any
nearby bright companion and shows an undisturbed morphology. It has an
absolute total B magnitude M_B_T^0=-18.30 mag, as derived from
B_T=11.97 mag <cit.> by correcting for the inclination and
extinction given by HyperLeda <cit.>. The apparent
isophotal diameters measured at a surface brightness level of μ_B =
25 mag arcsec^-2 are 2.1×0.9 arcmin corresponding to
10.4×4.5 kpc. Its surface-brightness distribution is well
fit by a Sérsic bulge and an exponential disk with a
bulge-to-total luminosity ratio B/T=0.2, as found by
<cit.>. These authors detected a
kinematically decoupled stellar component that is younger than the
host bulge and probably formed from enriched material acquired through
interaction or minor merging.
In this paper we revisit the case of NGC 1366 by successfully
separating the two counter-rotating components and properly
measuring the properties of their stellar populations
(Sect. <ref>). The analysis of the kinematics of the
stars and ionized gas and of the stellar populations is consistent
with the formation of the counter-rotating component from
external gas that is still accreting onto the galaxy
(Sect. <ref>).
§ LONG-SLIT SPECTROSCOPY
§.§ Observations and data reduction
We carried out the spectroscopic observations of NGC 1366 on 2005
January 25 with the 3.5 m New Technology Telescope (NTT) at the
European Southern Observatory (ESO) in La Silla (Chile). We obtained
2×45-minutes spectra along the major (P.A.=2^∘) and
minor (P.A.=92^∘) axis of the galaxy with the ESO Multi-Mode
Instrument (EMMI). It mounted a 1200 grooves mm^-1 grating
with a 1.0 arcsec × 5.5 arcmin slit, giving an instrumental
resolution σ_inst = 25 km s^-1. The detector was a mosaic of
the No. 62 and No. 63 MIT/LL CCDs. Each CCD has 2048 × 4096
pixels of 15 × 15 μ m^2. We adopted a 2×2
pixel binning. The wavelength range between about 4800 Å and 5400
Å was covered with a reciprocal dispersion of 0.40
Å pixel^-1 after 2×2 pixel binning. All the spectra were
bias subtracted, flat-field corrected, cleaned of cosmic rays, and
wavelength calibrated using standard IRAF[Image Reduction and
Analysis Facility (IRAF) is distributed by the National Optical
Astronomy Observatory (NOAO), which is operated by the Association
of Universities for Research in Astronomy (AURA), Inc. under
cooperative agreement with the National Science Foundation.]
routines. The spectra obtained along the same axis were coadded using
the center of the stellar continuum as reference. Further details
about the instrumental setup and spectra acquisition are given in
<cit.>. We followed the prescriptions of
<cit.> for the data reduction.
§.§ Stellar and ionized-gas kinematics
We derived the stellar kinematics along both the major and minor axis
of NGC 1366 with a single-component and with a two-component analysis
as done in <cit.>.
We first measured the spectra without separating the two
counter-rotating components <cit.>. We used the penalized pixel fitting <cit.> and gas and
absorption line fitting <cit.>
IDL[Interactive Data Language (IDL) is distributed by ITT
Visual Information Solutions.] codes with the ELODIE library of
stellar spectra from <cit.> and adopting a Gaussian
line-of-sight velocity distribution (LOSVD) to obtain the velocity
curve and velocity dispersion radial profile along the observed
axes. We subtracted the measured velocities from the systemic velocity,
but we did not apply any correction for the slit orientation and
galaxy inclination, while we corrected the measured velocity
dispersion for the instrumental velocity dispersion.
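The instrumental correction mentioned above is the standard subtraction in quadrature; a minimal sketch (the function name and example values are ours, with σ_inst = 25 km s^-1 from above):

import numpy as np

def correct_sigma(sigma_obs, sigma_inst=25.0):
    """Subtract the instrumental dispersion (km/s) in quadrature."""
    s2 = np.asarray(sigma_obs, dtype=float)**2 - sigma_inst**2
    return np.sqrt(np.clip(s2, 0.0, None))   # unresolved values clipped to zero

print(correct_sigma([150.0, 60.0, 30.0]))     # -> [147.9, 54.5, 16.6]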
We found a peculiar stellar kinematics along the major axis of
NGC 1366 (Fig. <ref>). The velocity curve is symmetric
around the center for the innermost |r| ≤ 11 arcsec. It is
characterized by a steep rise reaching a maximum of |v| ≃ 50 km s^-1
at |r| ≃ 2 arcsec and decreasing farther out to
|v| ≃ 0 km s^-1 at 6 ≲ |r| ≲ 11 arcsec. For |r| ≥ 11 arcsec
the spectral absorption lines clearly display a double peak that
is due to the difference in velocity of the two counter-rotating
components. The absorption lines of the two stellar populations
are so well separated that the pPXF-GANDALF procedure fits
only one of the two components. This is the reason for the shift in
velocities and the drop in velocity dispersion to lower values
that we measured on both sides of the galaxy at |r| ≥ 11 arcsec
(Fig. <ref>). The velocities measured at large negative
and positive radii are related to the counter-rotating and corotating
component, respectively.
The velocity dispersion shows a central maximum σ ≃ 150 km s^-1
and decreases outwards. It rises again to peak at
σ ≃ 140 km s^-1 at |r| ≃ 9 arcsec and decreases to a value
of σ ≃ 100 km s^-1 at |r| ≃ 25 arcsec.
The combination of zero velocity with two off-centered and symmetric
peaks in the velocity dispersion of the stellar component measured
along the galaxy major axis is indicative of two
counter-rotating components. This feature shows up in the kinematics
obtained from long-slit <cit.> and integral-field
spectroscopy <cit.> when the two counter-rotating
components have almost the same luminosity and their difference in
velocity is not resolved.
We found no kinematic signature of stellar decoupling along the minor
axis of NGC 1366 (Fig. <ref>). The velocity curve is
characterized by |v| ≃ 0 km s^-1 at all radii, indicating that the
photometric and kinematic minor axes of the galaxy coincide with each
other. The velocity dispersion profile is radially symmetric and
smoothly declines from σ ≃ 150 km s^-1 in the center to
≃ 60 km s^-1 at the last measured radius (r ≃ 14 arcsec).
Finally, we derived the kinematics of the two counter-rotating
components along the major axis at the radii where their difference in
velocity was resolved, giving rise to double-peaked absorption
lines. To reach the signal-to-noise ratio (S/N) needed to successfully perform the spectral
decomposition, we averaged the galaxy spectrum along the spatial
direction in the regions with the highest contribution of the
counter-rotating component. We obtained a minimum S/N ≥ 30
per resolution element, which increases to a maximum value S/N
≃ 50 in the very central region.
We performed the spectroscopic decomposition using the implementation
of the pPXF developed by <cit.>. We built for each
stellar component a best-fitting synthetic template as linear
combination of the ELODIE stellar spectra. The two templates depend on
the corresponding stellar populations of the corotating and
counter-rotating components and were convolved with a Gaussian LOSVD
according to their kinematics. We added multiplicative polynomials to
deal with differences in the continuum shape of the galaxy and stellar
spectra due to flux calibration and flat fielding residuals. We also
included a few Gaussian functions to account for the ionized-gas
emission lines and generated a synthetic galaxy spectrum that
matches the
observed spectrum. The spectroscopic decomposition returns the luminosity
fraction, the line-of-sight velocity, and velocity dispersion of the
two stellar components, the line-of-sight velocity and velocity
dispersion of the ionized gas, and the two best-fitting synthetic
stellar templates to be used for the analysis of the stellar
population properties. We quantified the errors on the luminosity
fraction, line-of-sight velocity, and velocity dispersion of the
two counter-rotating stellar components with a series of Monte Carlo
simulations on a set of artificial galaxy spectra, as done in <cit.>.
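To make the two-component decomposition concrete, the minimal sketch below is our own illustration (all array names are hypothetical placeholders; it is not the implementation of <cit.>, which also fits multiplicative polynomials and emission lines): it models the observed spectrum as the sum of two stellar templates, each convolved with its own Gaussian LOSVD, and fits the kinematics and the flux fraction by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.ndimage import gaussian_filter1d

# Assumed inputs (hypothetical): log-rebinned spectra on a common grid.
#   galaxy   observed spectrum
#   t1, t2   best-fitting synthetic templates of the two components
#   velscale km/s per pixel of the log-rebinned grid
def two_component_model(p, t1, t2, velscale):
    f1, v1, s1, v2, s2 = p
    def losvd(t, v, s):
        pix = np.arange(t.size)
        shifted = np.interp(pix - v/velscale, pix, t)      # shift by velocity v
        return gaussian_filter1d(shifted, max(s/velscale, 1e-3))
    return f1*losvd(t1, v1, s1) + (1.0 - f1)*losvd(t2, v2, s2)

def decompose(galaxy, t1, t2, velscale):
    p0 = [0.5, 120.0, 30.0, -90.0, 80.0]   # flux fraction, v1, sigma1, v2, sigma2
    fit = least_squares(
        lambda p: two_component_model(p, t1, t2, velscale) - galaxy, p0,
        bounds=([0.0, -400, 5, -400, 5], [1.0, 400, 300, 400, 300]))
    return fit.x
```

Errors on the recovered parameters can then be estimated, as in the text, by refitting many noise realizations of the spectrum.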
The decomposition of the galaxy spectrum in the radial bins at
r=-20.9, -12.6, 11.4 and 19.9 are
shown in Fig. <ref>, and the resulting kinematics of the
corotating and counter-rotating stellar components are plotted in
Fig. <ref>. The corotating stars are characterized by a higher
rotation velocity (|v|≃120 km s^-1) and a lower velocity
dispersion (σ≃30 km s^-1) than the counter-rotating stars, which
rotate with |v|≃90 km s^-1 and have σ≃80 km s^-1.
The corotating and counter-rotating components contribute
(45±15)% and (55±15)% of the stellar luminosity at all the
measured radii. We converted the luminosity fraction of each component
into mass fraction using the measured ages and metallicities and
adopting the models by <cit.>. We derived stellar
mass-to-light ratios of M/L=3.02 and M/L=1.63 for the corotating
and counter-rotating components, respectively. From these quantities
we found that the stellar mass fractions of the corotating and
counter-rotating components are 60% and 40%, respectively.
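The luminosity-to-mass conversion is a one-line computation; as a check on the quoted fractions (our illustration):

```python
L_co, L_cr = 0.45, 0.55        # luminosity fractions: corotating, counter-rotating
ML_co, ML_cr = 3.02, 1.63      # stellar mass-to-light ratios
m_co, m_cr = L_co*ML_co, L_cr*ML_cr
tot = m_co + m_cr
print(round(100*m_co/tot), round(100*m_cr/tot))   # 60 40
```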
A comparison between the stellar and ionized-gas velocity curves
indicates that the gas is disturbed and is not associated with one of
the two counter-rotating components. In fact, the gas rotates in the
same direction and with a velocity amplitude close to that of the
stellar component at small (|r|≲1'') and large radii
(|r|≥11''). A broad feature is clearly visible in the
gas structure at |r|≃7''-10'' along the major axis
(Fig. <ref>). Although the emission line has a
broad profile (Fig. <ref>), there is no clear evidence
for a double peak. The wavelength range of our spectra does not
cover the region, which prevents us from building a complete diagnostic
diagram to properly distinguish between the different excitation
mechanisms of the ionized gas. However, the high value of log(/)≃ 1.5 favors shocks as the excitation mechanism.
We detected two ionized-gas rotating components along the galaxy
minor axis as it results from the double-peaked emission line
shown in Fig. <ref>. We independently measured the brighter
emission line at lower velocities and the fainter emission line at
higher velocities. Their velocity and velocity dispersion are shown
in Fig. <ref>. The two gas components have a systematic
and almost constant offset in velocity with respect to the stellar
component, suggesting the presence of multiple gas clouds along the line
of sight. We prefer this interpretation to the idea of having two
gas components with mirrored asymmetric distributions, each with a brighter
and a fainter side, giving rise to an X-shaped emission
line. The gas velocity dispersion is typically σ_gas <
100 km s^-1 and mostly σ_gas≃ 50 km s^-1 along both
axes after correcting for the instrumental velocity dispersion.
§.§ Stellar populations
We measured the Lick line-strength indices <cit.> of the corotating and
counter-rotating components on the best-fitting synthetic templates
and derived the age, metallicity, and ratio of the corresponding
stellar population as in <cit.>. We derived the
errors on the equivalent widths of the line-strength indices of the
two counter-rotating stellar components with a series of Monte Carlo
simulations on a set of artificial galaxy spectra as done in <cit.>. We report the measurements in
Table <ref> and compare them to the line-strength indices
predicted for a single stellar population that accounts for the
α/Fe overabundance by <cit.> in
Fig. <ref>. We obtained the stellar population properties
of both components from the line-strength indices averaged on the two
galaxy sides. They are given in Table <ref> together
with the relative luminosity of the corotating and counter-rotating
components.
The comparison of the averaged age values suggests that the
counter-rotating component is significantly younger (age = 2.6
Gyr) than the corotating component (age = 5.6 Gyr). The two
averaged
metallicities are both subsolar and similar to each other ([Z/H] = -0.16 and
-0.18 dex for the counter-rotating and corotating components,
respectively). However, the large scatter in the metallicity
measurements of the corotating component does not allow us to draw a
firm conclusion. At face value, the subsolar [α/Fe] ratio of the
counter-rotating component ([α/Fe] = -0.07 dex) points to a longer
star-formation timescale than that of the corotating component,
which is characterized by a supersolar ratio ([α/Fe] = 0.08
dex).
§ DISCUSSION AND CONCLUSIONS
There is no morphological or photometric evidence that NGC 1366 is
hosting two counter-rotating stellar components. NGC 1366 is
characterized by an undisturbed morphology with no sign of recent
interaction with small satellites or companion galaxies of similar
size <cit.>. This is common for most
of the counter-rotating galaxies since their environment does not
appear statistically different from that of normal galaxies,
see <cit.>. In addition, the surface brightness distribution
of NGC 1366 is remarkably well fitted by a Sérsic bulge and an
exponential disk with no break at any radius <cit.>.
We provided the spectroscopic evidence of two
counter-rotating stellar components with a high rotation velocity and
low velocity dispersion (v/σ≃2) that give almost the same
contribution to the galaxy luminosity. We infer that they have a similar
scale length from the constant slope of the exponential
surface-brightness radial profile outside the bulge-dominated region
as in NGC 4138 <cit.> and NGC 4550
<cit.>. These kinematic and
photometric properties support the disk nature of the two components.
The stellar population of the corotating component is characterized
by an older age, consistent with that of the bulge <cit.>, subsolar metallicity, and almost solar
α/Fe enhancement. This suggests a formation timescale of a few
Gyr that occurred at the time of the galaxy assembly. The
counter-rotating stellar component is remarkably younger with lower
α/Fe enhancement and subsolar metallicity. The metallicity and
age values obtained for the two components are consistent within the
errors with the results obtained by <cit.> on the galaxy
integrated light when considering its strong radial gradients of
stellar population properties. Therefore, the counter-rotating stellar
component could be the end result of a slower star formation process
that occurred in a disk of gas accreted by a preexisting galaxy and
settled onto retrograde orbits. However, unlike most of previously
studied cases <cit.>,
the ionized gas of NGC 1366 is not associated with the
counter-rotating stellar component. It has peculiar kinematics, with multiple velocity components
along the minor axis that point to different gas clouds along
the line of sight.
gas and counter-rotating stellar component complicates the scenario of
gas accretion followed by star formation.
The most obvious possibility is to consider an episodic gas
accretion. The first event of capture of external gas occurred ∼3
Gyr ago and built the counter-rotating stellar component. It was
followed by a subsequent event that is still ongoing at
present. However, this raises the question of the origin of the
newly supplied and kinematically decoupled gas since there is no clear
donor candidate in the neighborhood of NGC 1366. This leaves us with
the possibility of the acquisition of small gas clouds coming either
from the environment or from the internal reservoir inside the galaxy
itself. When external gas is captured in distinct clouds, it settles
onto the galaxy disk in a relatively short time <cit.>. In this case, NGC 1366
could be an object caught at an intermediate stage of the acquisition
process, before its configuration becomes stable. It is interesting to
note that this could also have occurred in galaxies with gas associated
with the counter-rotating stellar component. Without clear evidence of
ongoing star formation or very young stars, the counter-rotating
stellar component could be the result of a past acquisition of gas
coming from the same reservoir that provides the counter-rotating gas we
observe at present.
An intriguing alternative was explored by <cit.>. They
showed the time evolution of the distribution and kinematics of gas and
stars in a set of numerical simulations aimed at investigating the
formation of the stellar counter-rotating disks of NGC 4550 from a
binary merger. One Gyr after the merger, while the stars have
settled in two counter-rotating disks with a relatively regular
kinematics, the gas distribution still remains rather disordered with
a disturbed kinematics. However, this configuration is not stable, and
the gas tends to a more regular configuration between 1 and 2 Gyr from
the merging event. The structure and stellar populations properties of
the counter-rotating components of NGC 1366 are somewhat different from
those of NGC 4550 for a direct comparison of our results with the
simulations by <cit.>, and dedicated simulations are
needed for a firmer interpretation of this galaxy in terms of a binary
merger.
These speculations need further evidence since the available
spectroscopic data are not conclusive. To date, NGC 1366 is a unique
example, and it may become a cornerstone for understanding the formation
of counter-rotation in relatively isolated and undisturbed
galaxies. Mapping the ionized-gas distribution and kinematics of
NGC 1366 with integral-field spectroscopy is a crucial complement for
the present dataset and is necessary to distinguish between different
scenarios and address the question of the origin of the gas. In the
case of an episodic gas acquisition, we expect to see a clear
morphological and kinematic signature of the incoming gas without a
counter-part in the stellar distribution. In contrast, in the case
of a galaxy binary merger, we expect to observe a morphological
association between the distribution of stars and gas, a regular
velocity field for the two counter-rotating stellar disks, and an
irregular velocity field for the ionized gas.
We benefited from discussion with Roberto P. Saglia. This work was
supported by Padua University through grants 60A02-5857/13,
60A02-5833/14, 60A02-4434/15, and CPDA133894. LM and EMC acknowledge
financial support from Padua University grants CPS0204 and
BIRD164402/16, respectively. LM is grateful to the ESO Scientific
Visitor Programme for the hospitality at ESO Headquarters while this
paper was in progress. This research made use of the HyperLeda
Database (http://leda.univ-lyon1.fr/) and NASA/IPAC Extragalactic
Database (NED) which is operated by the Jet Propulsion Laboratory,
California Institute of Technology, under contract with the National
Aeronautics and Space Administration (http://ned.ipac.caltech.edu/).
[Algorry et al.(2014)Algorry, Navarro, Abadi,
Sales, Steinmetz, & Piontek]Algorry2014 Algorry,
D. G., Navarro, J. F., Abadi, M. G., et al. 2014, , 437,
3596
[Bertola et al.(1996)Bertola, Cinzano, Corsini,
Pizzella, Persic, & Salucci]Bertola1996 Bertola, F.,
Cinzano, P., Corsini, E. M., et al. 1996, , 458, L67
[Bettoni et al.(2001)Bettoni, Galletta, &
Prada]Bettoni2001
Bettoni, D., Galletta, G., & Prada, F. 2001, , 374, 83
[Bettoni et al.(2014)Bettoni, Mazzei, Rampazzo,
Marino, Galletta, & Buson]Bettoni2014 Bettoni, D.,
Mazzei, P., Rampazzo, R., et al. 2014, , 354, 83
[Cappellari & Emsellem(2004)]Cappellari2004
Cappellari, M., & Emsellem, E. 2004, , 116, 138
[Coccato et al.(2011)Coccato, Morelli, Corsini,
Buson, Pizzella, Vergani, & Bertola]Coccato2011
Coccato, L., Morelli, L., Corsini, E. M., et al. 2011,
, 412, L113
[Coccato et al.(2013)Coccato, Morelli, Pizzella,
Corsini, Buson, & Dalla Bontà]Coccato2013 Coccato,
L., Morelli, L., Pizzella, A., et al. 2013, , 549, A3
[Coccato et al.(2015)Coccato, Fabricius, Morelli,
Corsini, Pizzella, Erwin, Dalla Bontà, Saglia,
Bender, & Williams]Coccato2015 Coccato, L., Fabricius,
M., Morelli, L., et al. 2015, , 581, A65
[Combes(2006)]Combes2006 Combes, F. 2006, in Mass
Profiles and Shapes of Cosmological Structures , eds. G. A. Mamon,
F. Combes, C. Deffayet, & B. Fort, EAS Publ. Ser., 20, 97
[Corsini et al.(2012)]Corsini2012 Corsini, E. M.,
Méndez-Abreu, J., Pastorello, N., et al. 2012, , 423, L79
[Corsini(2014)]Corsini2014 Corsini, E. M. 2014, in
Multi-Spin Galaxies, eds. E. Iodice, & E. M. Corsini,
ASP Conf. Ser., 486, 51
[Corsini et al.(2003)Corsini, Pizzella, Coccato, &
Bertola]Corsini2003 Corsini, E. M., Pizzella, A.,
Coccato, L., & Bertola, F. 2003, , 408, 873
[Crocker et al.(2009)Crocker, Jeong, Komugi,
Combes, Bureau, Young, & Yi]Crocker2009 Crocker,
A. F., Jeong, H., Komugi, S., et al. 2009, , 393, 1255
[Davis et al.(2011)Davis, Alatalo, Sarzi, Bureau,
Young, Blitz, Serra, Crocker, Krajnović, McDermid,
Bois, Bournaud, Cappellari, Davies, Duc, de Zeeuw,
Emsellem, Khochfar, Kuntschner, Lablanche, Morganti,
Naab, Oosterloo, Scott, & Weijmans]Davis2011 Davis,
T. A., Alatalo, K., Sarzi, M., et al. 2011, , 417, 882
[de Vaucouleurs et al.(1991)de Vaucouleurs, de
Vaucouleurs, Corwin, Buta, Paturel, & Fouque]RC3
de Vaucouleurs, G., de Vaucouleurs, A., Corwin, H. G.,
et al. 1991, Third Reference Catalogue of Bright Galaxies
(Springer, Berlin)
[Erwin(2004)]Erwin2004 Erwin, P. 2004, , 415, 941
[Evans & Collett(1994)]Evans1994 Evans, N. W. &
Collett, J. L. 1994, , 420, L67
[Ferguson(1989)]Ferguson1989 Ferguson, H. C. 1989,
, 98, 367
[Galletta(1996)]Galletta1996 Galletta, G. 1996,
in Barred Galaxies, eds. R. Buta, D. A. Crocker, & B. G. Elmegreen,
ASP Conf. Ser., 91, 429
[Garcia et al.(1993)]Garcia1993 Garcia, A. M., Paturel, G.,
Bottinelli, L., & Gouguenheim, L. 1993, , 98, 7
[Gorgas et al.(1990)Gorgas, Efstathiou, &
Salamanca]Gorgas1990 Gorgas, J., Efstathiou, G., &
Aragón-Salamanca, A. 1990, , 245, 217
[Johnston et al.(2013)Johnston, Merrifield,
Aragón-Salamanca, & Cappellari]Johnston2013
Johnston, E. J., Merrifield, M. R., Aragón-Salamanca, A.,
& Cappellari, M. 2013, , 428, 1296
[Jore et al.(1996)Jore, Broeils, &
Haynes]Jore1996 Jore, K. P., Broeils, A. H., & Haynes,
M. P. 1996, , 112, 438
[Kannappan & Fabricant(2001)]Kannappan2001
Kannappan, S. J., & Fabricant, D. G. 2001, , 121, 140
[Katkov et al.(2011)]Katkov2011 Katkov, I., Chilingarian,
I., Sil'chenko, O., Zasov, A., & Afanasiev, V. 2011, Baltic
Astronomy, 20, 453
[Katkov et al.(2013)Katkov, Sil'chenko, &
Afanasiev]Katkov2013 Katkov, I. Y., Sil'chenko, O. K., &
Afanasiev, V. L. 2013, , 769, 105
[Katkov et al.(2016)Katkov, Sil'chenko,
Chilingarian, Uklein, & Egorov]Katkov2016 Katkov,
I. Y., Sil'chenko, O. K., Chilingarian, I. V., Uklein, R. I.,
& Egorov, O. V. 2016, , 461, 2068
[Khoperskov & Bertin(2016)]Khoperskov2016
Khoperskov, S., & Bertin, G. 2016, , in press
[arXiv:1610.02705]
[Krajnović et al.(2011)Krajnović, Emsellem,
Cappellari, Alatalo, Blitz, Bois, Bournaud, Bureau,
Davies, Davis, de Zeeuw, Khochfar, Kuntschner,
Lablanche, McDermid, Morganti, Naab, Oosterloo, Sarzi,
Scott, Serra, Weijmans, & Young]Krajnovic2011
Krajnović, D., Emsellem, E., Cappellari, M., et al.
2011, , 414, 2923
[Kuijken & Garcia-Ruiz(2001)]Kuijken2001 Kuijken,
K., & Garcia-Ruiz, I. 2001, in Galaxy Disks and Disk Galaxies,
eds. J. G. Funes, & E. M. Corsini, ASP Conf. Ser., 230, 401
[Makarov et al.(2014)]Makarov2014 Makarov, D., Prugniel, P.,
Terekhova, N., Courtois, H., & Vauglin, I. 2014, , 570, A13
[Mapelli et al.(2015)]Mapelli2015 Mapelli, M., Rampazzo, R.,
& Marino, A. 2015, , 575, A16
[Maraston(2005)]Maraston2005 Maraston, C. 2005, , 362, 799
[Mitzkus et al.(2017)]Mitzkus2016 Mitzkus, M., Cappellari,
M., & Walcher, C. J. 2017, , 464, 4789
[Morelli et al.(2008)Morelli, Pompei, Pizzella,
Méndez-Abreu, Corsini, Coccato, Saglia, Sarzi, &
Bertola]Morelli2008 Morelli, L., Pompei, E., Pizzella,
A., et al. 2008, , 389, 341
[Morelli et al.(2012)Morelli, Corsini, Pizzella,
Dalla Bontà, Coccato, Méndez-Abreu, &
Cesetti]Morelli2012 Morelli, L., Corsini, E. M.,
Pizzella, A., et al. 2012, , 423, 962
[Morelli et al.(2015)]Morelli2015 Morelli, L.,
Pizzella, A., Corsini, E. M., et al. 2015, Astronomische
Nachrichten, 336, 208
[Morelli et al.(2016)]Morelli2016 Morelli, L., Parmiggiani,
M., Corsini, E. M., et al. 2016, , 463, 4396
[Pizzella et al.(2002)Pizzella, Corsini, Morelli,
Sarzi, Scarlata, Stiavelli, & Bertola]Pizzella2002
Pizzella, A., Corsini, E. M., Morelli, L., et al. 2002,
, 573, 131
[Pizzella et al.(2004)Pizzella, Corsini, Vega
Beltrán, & Bertola]Pizzella2004 Pizzella, A.,
Corsini, E. M., Vega Beltrán, J. C., & Bertola, F. 2004,
, 424, 447
[Pizzella et al.(2014)Pizzella, Morelli, Corsini,
Dalla Bontà, Coccato, & Sanjana]Pizzella2014
Pizzella, A., Morelli, L., Corsini, E. M., et al. 2014,
, 570, A79
[Prugniel & Soubiran(2001)]Prugniel2001 Prugniel,
P., & Soubiran, C. 2001, , 369, 1048
[Puerari & Pfenniger(2001)]Puerari2001 Puerari,
I., & Pfenniger, D. 2001, , 276, 909
[Rix et al.(1992)Rix, Franx, Fisher, &
Illingworth]Rix1992 Rix, H.-W., Franx, M., Fisher, D.,
& Illingworth, G. 1992, , 400, L5
[Rubin(1994)]Rubin1994 Rubin, V. C. 1994, , 108,
456
[Sandage & Bedke(1994)]CAG Sandage, A., & Bedke, J. 1994,
The Carnegie Atlas of Galaxies (Carnegie Institution of Washington,
Washington, DC)
[Sarzi et al.(2006)Sarzi, Falcón-Barroso,
Davies, Bacon, Bureau, Cappellari, de Zeeuw, Emsellem,
Fathi, Krajnović, Kuntschner, McDermid, &
Peletier]Sarzi2006 Sarzi, M., Falcón-Barroso, J.,
Davies, R. L., et al. 2006, , 366, 1151
[Sellwood & Merritt(1994)]Sellwood1994 Sellwood,
J. A., & Merritt, D. 1994, , 425, 530
[Thakar & Ryden(1996)]Thakar1996 Thakar, A. R., &
Ryden, B. S. 1996, , 461, 55
[Thakar & Ryden(1998)]Thakar1998 Thakar, A. R., &
Ryden, B. S. 1998, , 506, 93
[Thakar et al.(1997)Thakar, Ryden, Jore, &
Broeils]Thakar1997 Thakar, A. R., Ryden, B. S., Jore,
K. P., & Broeils, A. H. 1997, , 479, 702
[Thomas et al.(2003)Thomas, Maraston, &
Bender]Thomas2003 Thomas, D., Maraston, C., & Bender,
R. 2003, , 339, 897
[Vergani et al.(2007)]Vergani2007 Vergani, D., Pizzella, A.,
Corsini, E. M., et al. 2007, , 463, 883
[Worthey et al.(1994)Worthey, Faber, Gonzalez, &
Burstein]Worthey1994 Worthey, G., Faber, S. M.,
Gonzalez, J. J., & Burstein, D. 1994, , 94, 687
|
http://arxiv.org/abs/1701.07825v3 | 20170126190000 | Backflows by AGN jets: Global properties and influence on SMBH accretion | [
"S. Cielo",
"V. Antonuccio-Delogu",
"J. Silk",
"A. D. Romeo"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE"
] |
|
http://arxiv.org/abs/1701.07515v1 | 20170125230130 | Q-analogues of the Fibo-Stirling numbers | [
"Quang T. Bach",
"Roshil Paudyal",
"Jeffrey B. Remmel"
] | math.CO | [
"math.CO",
"05A15, 05E05"
] |
Let F_n denote the n^th Fibonacci number relative to the initial
conditions F_0=0 and F_1=1.
In <cit.>, we introduced Fibonacci analogues of the Stirling
numbers called Fibo-Stirling numbers
of the first and second kind. These numbers serve as the connection coefficients
between the Fibo-falling factorial basis
{(x)_↓_F,n:n ≥ 0} and the Fibo-rising factorial
basis {(x)_↑_F,n:n ≥ 0} which are defined by
(x)_↓_F,0 = (x)_↑_F,0 = 1 and for
k ≥ 1, (x)_↓_F,k = x(x-F_1) ⋯ (x-F_k-1) and
(x)_↑_F,k = x(x+F_1) ⋯ (x+F_k-1).
We gave a general rook theory model which allowed us to give combinatorial
interpretations of the Fibo-Stirling numbers of the first and second kind.
There are two natural
q-analogues of the falling and rising Fibo-factorial basis. That is, let
[x]_q = (q^x-1)/(q-1). Then we let
[x]_↓_q,F,0 = \overline{[x]}_↓_q,F,0 =
[x]_↑_q,F,0 = \overline{[x]}_↑_q,F,0 = 1 and, for
k > 0, we let
[x]_↓_q,F,k = [x]_q [x-F_1]_q ⋯ [x-F_k-1]_q,
\overline{[x]}_↓_q,F,k = [x]_q ([x]_q-[F_1]_q) ⋯ ([x]_q-[F_k-1]_q),
[x]_↑_q,F,k = [x]_q [x+F_1]_q ⋯ [x+F_k-1]_q, and
\overline{[x]}_↑_q,F,k = [x]_q ([x]_q+[F_1]_q) ⋯
([x]_q+[F_k-1]_q).
In this paper, we show we can modify the
rook theory model of <cit.> to give combinatorial interpretations
for the two different types of q-analogues of the Fibo-Stirling numbers which
arise as the connection coefficients between the two different q-analogues
of the Fibonacci falling and rising factorial bases.
§ INTRODUCTION
Let ℚ denote the rational numbers and
ℚ[x] denote the ring of polynomials over ℚ.
Many classical combinatorial sequences can be defined as
connection coefficients between various basis of
the polynomial ring ℚ[x].
There are three very natural bases for ℚ[x].
The usual power basis {x^n: n≥ 0},
the falling factorial basis {(x)_↓_n: n≥ 0},
and the rising factorial basis {(x)_↑_n: n≥ 0}.
Here we let (x)_↓_0 = (x)_↑_0 = 1 and for
k ≥ 1, (x)_↓_k = x(x-1) ⋯ (x-k+1) and
(x)_↑_k = x(x+1) ⋯ (x+k-1).
Then the Stirling numbers of the first kind s_n,k,
the Stirling numbers of the second kind S_n,k and
the Lah numbers L_n,k are defined by specifying
that for all n ≥ 0,
(x)_↓_n = ∑_k=1^n s_n,k x^k,
x^n = ∑_k=1^n S_n,k (x)_↓_k,
(x)_↑_n = ∑_k=1^n L_n,k (x)_↓_k.
The signless Stirling numbers of the first kind
are defined by setting c_n,k = (-1)^n-k s_n,k.
Then it is well known that c_n,k, S_n,k, and L_n,k
can also be defined by the recursions that
c_0,0 = S_0,0 = L_0,0 = 1,
c_n,k = S_n,k = L_n,k = 0 if either n < k or
k < 0, and
c_n+1,k = c_n,k-1+ n c_n,k,
S_n+1,k = S_n,k-1+kS_n,k,
L_n+1,k = L_n,k-1 +(n+k) L_n,k
for all n,k ≥ 0.
There are well known combinatorial interpretations of
these connection coefficients. That is,
S_n,k is the number of set partitions of [n] = {1, …, n}
into k parts, c_n,k is the number of permutations
in the symmetric group S_n with k cycles, and
L_n,k is the number of ways to place n labeled balls into k unlabeled
tubes with at least one ball in each tube.
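Since all three triangles satisfy recursions of the same shape, they are easy to tabulate; the short script below is an illustration we add here (not part of the original discussion), computing c_{n,k}, S_{n,k}, and L_{n,k} directly from the recursions above.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def triangle(n, k, weight):
    """Common pattern: T(n+1,k) = T(n,k-1) + weight(n,k)*T(n,k)."""
    if n == 0 and k == 0:
        return 1
    if k < 0 or k > n:
        return 0
    return triangle(n-1, k-1, weight) + weight(n-1, k)*triangle(n-1, k, weight)

c = lambda n, k: triangle(n, k, lambda m, j: m)      # signless first kind
S = lambda n, k: triangle(n, k, lambda m, j: j)      # second kind
L = lambda n, k: triangle(n, k, lambda m, j: m + j)  # Lah numbers

print(c(4, 2), S(4, 2), L(4, 2))   # 11 7 36
```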
In <cit.>, we introduced Fibonacci analogues of
the numbers s_n,k, S_n,k, and L_n,k.
We started with the tiling model of the F_n of <cit.>.
That is, let ℱ𝒯_n denote the set of tilings of
a column of height n with tiles of height
1 or 2 such that the bottommost tile is of height 1.
For example, possible tiling configurations
for ℱ𝒯_i for i ≤ 4 are shown in Figure <ref>.
[Figure (Tilings): The tilings counted by F_i for 1 ≤ i ≤ 4.]
For each tiling T ∈ℱ𝒯_n, we let
one(T) be the number of tiles of
height 1 in T and two(T) be the number of tiles of
height 2 in T, and define
F_n(p,q) = ∑_T ∈ℱ𝒯_n q^one(T)p^two(T).
It is easy to see that
F_1(p,q) =q, F_2(p,q) =q^2, and F_n(p,q) =q F_n-1(p,q)+
pF_n-2(p,q) for n ≥ 3, so that F_n(1,1) =F_n.
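The recursion makes F_n(p,q) easy to tabulate, and one can confirm it against a brute-force enumeration of the tilings; the script below is our illustrative check (not from <cit.>).

```python
from sympy import symbols, expand

p, q = symbols('p q')

def fib_poly(n):
    """F_n(p,q) from F_1 = q, F_2 = q^2, F_n = q F_{n-1} + p F_{n-2} (n >= 3)."""
    a, b = q, q**2
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, expand(q*b + p*a)
    return b

def fib_poly_brute(n):
    """Sum of q^one(T) p^two(T) over tilings of height n with bottom tile of height 1."""
    def tilings(h):                      # all tile sequences of total height h
        if h == 0:
            return [[]]
        out = [[1] + t for t in tilings(h - 1)]
        if h >= 2:
            out += [[2] + t for t in tilings(h - 2)]
        return out
    total = 0
    for rest in tilings(n - 1):          # the bottom tile is forced to height 1
        tiles = [1] + rest
        total += q**tiles.count(1) * p**tiles.count(2)
    return expand(total)

for n in range(1, 9):
    assert expand(fib_poly(n) - fib_poly_brute(n)) == 0
print(fib_poly(4))                       # 2*p*q**2 + q**4
```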
We then
defined the p,q-Fibo-falling
factorial basis
{(x)_↓_F,p,q,n:n ≥ 0} and the p,q-Fibo-rising factorial
basis {(x)_↑_F,p,q,n:n ≥ 0} by setting
(x)_↓_F,p,q,0 = (x)_↑_F,p,q,0 = 1 and setting
(x)_↓_F,p,q,k = x(x-F_1(p,q)) ⋯ (x-F_k-1(p,q))
(x)_↑_F,p,q,k = x(x+F_1(p,q)) ⋯ (x+F_k-1(p,q))
for k ≥ 1.
Our idea to define p,q-Fibonacci analogues of
the Stirling numbers of the first kind, 𝐬𝐟_n,k(p,q),
the Stirling numbers of the second kind, 𝐒𝐟_n,k(p,q),
and the Lah numbers, 𝐋𝐟_n,k(p,q), is to
define them
to be the connection coefficients between the usual power basis
{x^n: n ≥ 0} and the p,q-Fibo-rising factorial and
p,q-Fibo-falling factorial bases.
That is, we define 𝐬𝐟_n,k(p,q), 𝐒𝐟_n,k(p,q),
and 𝐋𝐟_n,k(p,q) by the equations
(x)_↓_F,p,q,n = ∑_k=1^n 𝐬𝐟_n,k(p,q) x^k,
x^n = ∑_k=1^n 𝐒𝐟_n,k(p,q) (x)_↓_F,p,q,k,
(x)_↑_F,p,q,n = ∑_k=1^n 𝐋𝐟_n,k(p,q) (x)_↓_F,p,q,k
for all n ≥ 0.
It is easy to see that these equations imply simple recursions for
the connection coefficients 𝐬𝐟_n,k(p,q)s, 𝐒𝐟_n,k(p,q)s, and 𝐋𝐟_n,k(p,q)s.
That is, 𝐬𝐟_n,k(p,q)s, 𝐒𝐟_n,k(p,q)s, and 𝐋𝐟_n,k(p,q)s can be defined by the following recursions
𝐬𝐟_n+1,k(p,q) = 𝐬𝐟_n,k-1(p,q)- F_n(p,q) 𝐬𝐟_n,k(p,q),
𝐒𝐟_n+1,k(p,q) = 𝐒𝐟_n,k-1(p,q)+ F_k(p,q) 𝐒𝐟_n,k(p,q),
𝐋𝐟_n+1,k(p,q) = 𝐋𝐟_n,k-1(p,q)+ (F_k(p,q) +F_n(p,q))𝐋𝐟_n,k(p,q)
plus the boundary
conditions
𝐬𝐟_0,0(p,q)=𝐒𝐟_0,0(p,q)=𝐋𝐟_0,0(p,q)=1
and
𝐬𝐟_n,k(p,q) =𝐒𝐟_n,k(p,q) =𝐋𝐟_n,k(p,q) =0
if k > n or k < 0.
If we define 𝐜𝐟_n,k(p,q):= (-1)^n-k𝐬𝐟_n,k(p,q), then
𝐜𝐟_n,k(p,q)s can be defined by the recursions
𝐜𝐟_n+1,k(p,q) = 𝐜𝐟_n,k-1(p,q)+F_n(p,q) 𝐜𝐟_n,k(p,q)
plus the boundary
conditions 𝐜𝐟_0,0(p,q)=1 and 𝐜𝐟_n,k(p,q) =0 if k > n or k < 0. It also follows that
(x)_↑_F,p,q,n = ∑_k=1^n 𝐜𝐟_n,k(p,q) x^k.
In <cit.>, we developed a new rook theory model to give a
combinatorial interpretation of
the 𝐜𝐟_n,k(p,q)s and the 𝐒𝐟_n,k(p,q)s and to give
combinatorial proofs of their basic properties.
This new rook theory model is
a modification of the rook theory model for S_n,k
and c_n,k except that we replace rooks by Fibonacci
tilings.
The main goal of this paper
is to show how that model can be modified to
give combinatorial interpretations to two new
q-analogues of the 𝐜𝐟_n,k(1,1)s and the
𝐒𝐟_n,k(1,1)s.
Let [0]_q =1 and [x]_q = (1-q^x)/(1-q). When
n is a positive integer, then [n]_q = 1+ q+ ⋯ +q^n-1 is
the usual q-analogue of n. Then there are two natural
q-analogues of the falling and rising Fibo-factorial bases. First we let
[x]_↓_q,F,0 = \overline{[x]}_↓_q,F,0 =
[x]_↑_q,F,0 = \overline{[x]}_↑_q,F,0 = 1. For k > 0,
we let
[x]_↓_q,F,k = [x]_q [x-F_1]_q ⋯ [x-F_k-1]_q,
\overline{[x]}_↓_q,F,k = [x]_q ([x]_q-[F_1]_q) ⋯ ([x]_q-[F_k-1]_q),
[x]_↑_q,F,k = [x]_q [x+F_1]_q ⋯ [x+F_k-1]_q, and
\overline{[x]}_↑_q,F,k = [x]_q ([x]_q+[F_1]_q) ⋯
([x]_q+[F_k-1]_q).
Then we define 𝐜𝐅_n,k(q) and \overline{𝐜𝐅}_n,k(q)
by the equations
[x]_↑_q,F,n= ∑_k=1^n 𝐜𝐅_n,k(q) [x]_q^k
and
\overline{[x]}_↑_q,F,n= ∑_k=1^n
\overline{𝐜𝐅}_n,k(q)[x]_q^k.
Similarly, we define 𝐒𝐅_n,k(q) and
\overline{𝐒𝐅}_n,k(q)
by the equations
[x]_q^n= ∑_k=1^n 𝐒𝐅_n,k(q) [x]_↓_q,F,k
and
[x]_q^n= ∑_k=1^n
\overline{𝐒𝐅}_n,k(q) \overline{[x]}_↓_q,F,k.
One can easily find recursions for these polynomials.
For example,
[x]_q^n+1 = ∑_k=1^n+1𝐒𝐅_n+1,k(q)
[x]_↓_q,F,k = ∑_k=1^n 𝐒𝐅_n,k(q) [x]_↓_q,F,k[x]_q
= ∑_k=1^n 𝐒𝐅_n,k(q) [x]_↓_q,F,k([F_k]_q
+ q^F_k[x-F_k]_q)
= ∑_k=1^n [F_k]_q 𝐒𝐅_n,k(q) [x]_↓_q,F,k
+∑_k=1^n q^F_k𝐒𝐅_n,k(q) [x]_↓_q,F,k+1.
Taking the coefficient of [x]_↓_q,F,k on both sides
shows that
𝐒𝐅_n+1,k(q)= q^{F_{k-1}}𝐒𝐅_n,k-1(q)+[F_k]_q
𝐒𝐅_n,k(q)
for 0 ≤ k ≤ n+1.
It is then easy to check that the 𝐒𝐅_n,k(q)s can be defined
by the recursions (<ref>) with the initial conditions
that 𝐒𝐅_0,0(q)=1 and 𝐒𝐅_n,k(q)=0 if k < 0 or
n < k.
A similar argument will show that
\overline{𝐒𝐅}_n,k(q) can be defined
by the initial conditions that
\overline{𝐒𝐅}_0,0(q)=1 and \overline{𝐒𝐅}_n,k(q)=0 if k < 0 or
n < k and the recursion
\overline{𝐒𝐅}_n+1,k(q)= \overline{𝐒𝐅}_n,k-1(q)+[F_k]_q
\overline{𝐒𝐅}_n,k(q)
for 0 ≤ k ≤ n+1.
Similarly, 𝐜𝐅_n,k(q) can be defined
by the initial conditions that
𝐜𝐅_0,0(q)=1 and 𝐜𝐅_n,k(q)=0 if k < 0 or
n < k and the recursion
𝐜𝐅_n+1,k(q)= q^{F_n}𝐜𝐅_n,k-1(q)+[F_n]_q
𝐜𝐅_n,k(q),
for 0 ≤ k ≤ n+1, and
\overline{𝐜𝐅}_n,k(q) can be defined
by the initial conditions that
\overline{𝐜𝐅}_0,0(q)=1 and \overline{𝐜𝐅}_n,k(q)=0 if k < 0 or
n < k and the recursion
\overline{𝐜𝐅}_n+1,k(q)= \overline{𝐜𝐅}_n,k-1(q)+
[F_n]_q \overline{𝐜𝐅}_n,k(q)
for 0 ≤ k ≤ n+1.
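All four recursions are immediate to implement; the sketch below (our illustration) tabulates 𝐒𝐅_n,k(q) and \overline{𝐒𝐅}_n,k(q) with sympy, and the 𝐜𝐅 variants follow the same pattern with [F_n]_q and q^{F_n} in place of [F_k]_q and q^{F_{k-1}}.

```python
from sympy import symbols, expand

q = symbols('q')

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def qint(m):                      # [m]_q = 1 + q + ... + q^(m-1)
    return sum(q**i for i in range(m))

def SF(n, k, barred=False):
    """SF_{n,k}(q), or its barred variant, via the recursions above."""
    if n == 0 and k == 0:
        return 1
    if k < 0 or k > n:
        return 0
    shift = 1 if barred else q**fib(k - 1)
    return expand(shift*SF(n-1, k-1, barred) + qint(fib(k))*SF(n-1, k, barred))

print(SF(4, 2))                   # plain variant
print(SF(4, 2, barred=True))      # barred variant
```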
The main goal of this paper is to give a rook theory model
for the polynomials 𝐜𝐅_n,k(q),
\overline{𝐜𝐅}_n,k(q), 𝐒𝐅_n,k(q), and
\overline{𝐒𝐅}_n,k(q). Our rook theory
model will allow us to give combinatorial proofs of
the defining equations (<ref>), (<ref>), (<ref>),
and (<ref>) as well as combinatorial proofs of
the recursions (<ref>), (<ref>), (<ref>), and (<ref>).
We shall see that our rook theory model for
𝐜𝐅_n,k(q), \overline{𝐜𝐅}_n,k(q), 𝐒𝐅_n,k(q), and
\overline{𝐒𝐅}_n,k(q) is essentially the same
as the rook theory model used in <cit.> to interpret
the 𝐒𝐟_n,k(p,q)s and 𝐜𝐟_n,k(p,q)s
but with a different weighting scheme.
The outline of the paper is as follows. In Section 2, we
describe a ranking and unranking theory for the set of
Fibonacci tilings, which will be a crucial element in our weighting
scheme for the rook theory model that we shall use to give
combinatorial interpretations of
the polynomials 𝐜𝐅_n,k(q),
\overline{𝐜𝐅}_n,k(q), 𝐒𝐅_n,k(q), and
\overline{𝐒𝐅}_n,k(q). In Section 3, we shall
review the rook theory model in <cit.> and show how it
can be modified for our purposes. In Section 4,
we shall prove general product formulas for Ferrers boards
in our new model which specialize to (<ref>), (<ref>), (<ref>),
and (<ref>) in the case where the Ferrers board
is the staircase board whose column heights are
0,1, …, n-1, reading from left to right. In Section 5,
we shall prove various special properties of the polynomials 𝐜𝐅_n,k(q),
\overline{𝐜𝐅}_n,k(q), 𝐒𝐅_n,k(q), and
\overline{𝐒𝐅}_n,k(q).
§ RANKING AND UNRANKING FIBONACCI TILINGS.
There is a well-developed theory for ranking and unranking
combinatorial objects; see, for example, Williamson's book
<cit.>. That is, given a collection of
combinatorial objects 𝒪 of cardinality n, one
wants to define bijections rank:𝒪→{0, …, n-1} and unrank:{0, …, n-1}→𝒪
which are inverses of each other. In our case, we let
ℱ_n denote the set of Fibonacci tilings of height n. Then
we construct a tree which we call the Fibonacci tree for F_n.
That is, we start from the top of a Fibonacci tiling
and branch left if we see a tile of height 1 and branch
right if we see a tile of height 2.
For example,
the Fibonacci tree for F_5 is pictured
in Figure <ref>.
[Figure (Ftree): The tree for F_5.]
Then for any tiling T ∈ℱ_n, we define
the rank of T for F_n, rank_n(T), to be the number of paths to the left of the path for T in the Fibonacci tree for F_n. Clearly
{rank_n(T): T ∈ℱ_n}= {0,1,2, … , F_n-1},
so that ∑_T ∈ℱ_n q^rank_n(T) = 1+q+ ⋯ +q^F_n-1 =
[F_n]_q. It is, in fact, quite easy to compute
the functions rank_n and unrank_n in this situation.
That is, suppose that we represent the tiling T as a sequence
seq(T) = (t_1, … ,t_n) where reading the tiles starting at the bottom,
t_i = 1 if there is a tiling t_i of height 1 that ends at level i in
T, t_i =2 if there is t_i of height 2 that ends at level i in T,
and t_i =0 if there is no tile t_i that ends at level i in T.
For example, the tiling T of height 9 pictured in Figure <ref>
would be represented by the sequence seq(T)= (1,0,2,1,1,1,0,2,1).
[Figure (F9): A tiling in ℱ_9.]
For any statement A, we let χ(A) =1 if A is true and χ(A) =0
if A is false. Then we have the following lemma.
Suppose that T ∈ℱ_n is a Fibonacci tiling
such that seq(T) = (t_1, …, t_n). Then
rank_n(T) = ∑_i=1^n F_i-1χ(t_i=2).
The lemma is easy to prove by induction. It is clearly true for
n=1 and n=2. Now suppose n ≥ 3. Then it is easy
to see from the Fibonacci tree for F_n that if t_n=2 so that t_n-1 =0,
then the tree
that starts at level n-1 which represents taking the path to the
left starting at level n is just the Fibonacci tree for F_n-1 and
hence this tree will contain F_n-1 leaves which will all be to the left
of the path for the tiling T. Then the tree starting at level
n-2, which represents taking the path to the
right starting at level n, is just the Fibonacci tree for F_n-2, and
the number of paths in this tree which lie to the left of
the path for T is just that the number of paths to the left of
the tiling T' such that seq(T') = (t_1,…,t_n-2) in
the Fibonacci tree for F_n-2.
Thus in this case
rank_n(T) = F_n-1+rank_n-2(T')=
F_n-1+rank_n-2(t_1, …, t_n-2).
On the other hand if t_n =1, then we branch left at level n so
that the number of paths to the left of the path for T in the
Fibonacci tree for F_n will just be the number of paths to
the left of the tiling T” such that seq(T”) = (t_1, …, t_n-1)
in the
Fibonacci tree for F_n-1.
Thus in this case
rank_n(T) = rank_n-1(T”)= rank_n-1(t_1, …, t_n-1).
For example, for the tiling T in Figure <ref>,
rank_9(T) = F_2+F_7 = 1+13 =14.
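The lemma translates directly into code; the following sketch (ours, for illustration) computes rank_n(T) from seq(T) and reproduces the value 14 for the tiling above.

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def rank(seq):
    """rank_n(T) = sum of F_{i-1} over the positions i (1-indexed) with t_i = 2."""
    return sum(fib(i - 1) for i, t in enumerate(seq, start=1) if t == 2)

print(rank((1, 0, 2, 1, 1, 1, 0, 2, 1)))   # 14 = F_2 + F_7
```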
For the unrank function, we must rely on Zeckendorf's theorem
<cit.>, which states that every positive integer n is uniquely represented
as a sum n = ∑_i=0^k F_c_i where each c_i ≥ 2 and
c_i+1 > c_i +1. Indeed, Zeckendorf's theorem
says that the greedy algorithm gives us the proper representation.
That is, given n, find k such that F_k ≤ n < F_k+1; then
the representation for n is obtained by taking the representation
for n-F_k and adding F_k. For example, suppose that we want to find
T such that rank_13(T) =100. Then
* F_11 =89 ≤ 100< F_12=144 so that we need to find
the Fibonacci representation of 100-89 =11.
* F_6 = 8 ≤ 11 < F_7 =13 so that we need to find the Fibonacci
representation of 11-8 =3.
* F_4 = 3 ≤ 3 < F_5 =5.
Thus we can represent
100 = F_4+ F_6 + F_11 = 3+ 8 + 89 so that
seq(T) =(1,1,1,0,2,0,2,1,1,1,0,2,1).
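The greedy Zeckendorf step combined with the sequence encoding yields the unrank function; the sketch below (again ours) recovers the tiling of rank 100 in ℱ_13 and inverts the rank function above.

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def unrank(n, r):
    """seq(T) for the tiling T in F_n with rank_n(T) = r, 0 <= r < F_n."""
    twos, i = set(), n
    while r > 0:                 # greedy: a position i with t_i = 2 contributes F_{i-1}
        while fib(i - 1) > r:
            i -= 1
        r -= fib(i - 1)
        twos.add(i)
        i -= 2                   # Zeckendorf: no two consecutive summands
    seq, j = [], 1
    while j <= n:
        if (j + 1) in twos:
            seq += [0, 2]        # a height-2 tile covers levels j and j+1
            j += 2
        else:
            seq.append(1)
            j += 1
    return tuple(seq)

print(unrank(13, 100))           # (1, 1, 1, 0, 2, 0, 2, 1, 1, 1, 0, 2, 1)
```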
§ THE ROOK THEORY MODEL FOR THE
𝐒𝐅_N,K(Q)S AND THE 𝐜𝐅_N,K(Q)S.
In this section, we shall give a rook theory model which
will allow us to give combinatorial interpretations for
the 𝐒𝐅_n,k(q)s and the 𝐜𝐅_n,k(q)s.
This rook theory
model is based on the one
which Bach, Paudyal, and Remmel used in <cit.> to give combinatorial interpretations to
the 𝐒𝐟_n,k(p,q)s and the 𝐜𝐟_n,k(p,q)s.
Thus, we shall briefly review the rook theory model in <cit.>.
A Ferrers board B=F(b_1, …, b_n) is
a board whose column heights are b_1, …, b_n, reading
from left to right, such that 0≤ b_1 ≤ b_2 ≤⋯≤ b_n.
We shall let B_n denote the Ferrers board F(0,1, …, n-1).
For example, the Ferrers board B = F(2,2,3,5) is
pictured on the left of Figure <ref>
and the Ferrers board B_4 is pictured on the right of
Figure <ref>.
[Figure (Ferrers): Ferrers boards.]
Classically, there are two types of rook placements that we
consider on a Ferrers board B. First we let
𝒩_k(B) be the set of all placements of
k rooks in B such that no two rooks lie in the same
row or column. We shall call an element of
𝒩_k(B) a placement of k non-attacking rooks
in B or just a rook placement for short. We let
ℱ_k(B) be the set of all placements of
k rooks in B such that no two rooks lie in the same
column. We shall call an element of
ℱ_k(B) a file placement of k rooks
in B.
Thus file placements differ from rook placements
in that file placements allow
two rooks to be in the same row. For example,
we exhibit a placement of 3 non-attacking rooks
in F(2,2,3,5) on the left in Figure
<ref> and a file placement of 3 rooks on
the right in Figure <ref>.
[Figure (placements): Examples of rook and file placements.]
Given a Ferrers board B = F(b_1, …, b_n), we
define the k-th rook number of B to be
r_k(B) = |𝒩_k(B)| and the k-th file number
of B to be f_k(B) = |ℱ_k(B)|. Then the rook
theory interpretation of the classical Stirling numbers is
S_n,k = r_n-k(B_n) 1 ≤ k ≤ n
c_n,k = f_n-k(B_n) 1 ≤ k ≤ n.
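Both interpretations are easy to confirm by brute force for small n; the script below (our illustration) enumerates placements on B_n column by column, using the fact that on a Ferrers board the counts factor over the chosen columns.

```python
from itertools import combinations
from math import prod

def rook_number(heights, k):
    """r_k(B): k non-attacking rooks; placing left to right, each earlier
    rook removes one available row from every later chosen column."""
    return sum(prod(max(heights[c] - i, 0) for i, c in enumerate(cols))
               for cols in combinations(range(len(heights)), k))

def file_number(heights, k):
    """f_k(B): k rooks, no two in the same column (same row allowed)."""
    return sum(prod(heights[c] for c in cols)
               for cols in combinations(range(len(heights)), k))

B5 = list(range(5))               # B_5 has column heights 0,1,2,3,4
print(rook_number(B5, 3))         # 15 = S_{5,2}
print(file_number(B5, 3))         # 50 = c_{5,2}
```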
The idea of <cit.> is to modify the sets 𝒩_k(B) and
ℱ_k(B) to replace rooks with Fibonacci tilings.
The analogue of file placements is very straightforward.
That is, if B=F(b_1, …, b_n), then we let
ℱ𝒯_k(B) denote the set of all configurations such that
there are k columns (i_1, …, i_k) of B where
1 ≤ i_1 < ⋯ < i_k ≤ n such that in each
column i_j, we have placed one of the tilings T_i_j for the Fibonacci
number F_b_i_j. We shall call such a configuration
a Fibonacci file placement and denote it by
P = ((i_1,T_i_1), …, (i_k,T_i_k)).
Let
one(P) denote the number of tiles of height 1 that appear
in P and two(P) denote the number of tiles of height 2 that appear
in P. Then in <cit.>, we defined the weight of P, WF(P,p,q), to be
q^one(P)p^two(P). For example, we have
pictured an element P of ℱ𝒯_3(F(2,3,4,4,5)) in
Figure <ref> whose weight is q^7 p^2. Then
we defined the k-th p,q-Fibonacci file polynomial of B, 𝐟𝐓_k(B,p,q),
by setting
𝐟𝐓_k(B,p,q) = ∑_P ∈ℱ𝒯_k(B) WF(P,p,q).
If k =0, then the only element of ℱ𝒯_k(B) is the empty placement
whose weight by definition is 1.
[Figure (Fibfile): A Fibonacci file placement.]
Then in <cit.>, we proved the following theorem concerning
Fibonacci file placements in Ferrers boards.
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0.
Let B^- = F(b_1, …, b_n-1). Then for all
1 ≤ k ≤ n,
𝐟𝐓_k(B,p,q) = 𝐟𝐓_k(B^-,p,q)+ F_b_n(p,q) 𝐟𝐓_k-1(B^-,p,q).
To obtain the q-analogues that we desire for this paper, we
define two new weight functions for Fibonacci file placements in a
Ferrers board B=F(b_1, …, b_n). That is, given
a Fibonacci file placement
P = ((i_1,T_i_1), …, (i_n-k,T_i_n-k)) in
ℱ𝒯_n-k(B), let (j_1, …, j_k) be
the sequence of columns in B which have no tilings, reading
from left to right. Then we define
𝐰_𝐁,𝐪(P) = q^{∑_s=1^n-k rank_b_i_s(T_i_s) +
∑_t=1^k F_b_j_t} and
\overline{𝐰}_𝐁,𝐪(P) =
q^{∑_s=1^n-k rank_b_i_s(T_i_s)}.
Note that the only difference between these two weight functions
is that if b_i is a column that does not contain a tiling in
P, then it contributes a factor of
q^F_b_i to 𝐰_𝐁,𝐪(P) and a factor of 1 to
\overline{𝐰}_𝐁,𝐪(P).
We then define 𝐅𝐓_k(B,q) and \overline{𝐅𝐓}_k(B,q)
by setting
𝐅𝐓_k(B,q) = ∑_P ∈ℱ𝒯_k(B)𝐰_𝐁,𝐪(P) and
\overline{𝐅𝐓}_k(B,q) = ∑_P ∈ℱ𝒯_k(B)\overline{𝐰}_𝐁,𝐪(P).
If k =0, then the only element of ℱ𝒯_k(B) is the empty
placement ∅, so that
𝐰_𝐁,𝐪(∅) =q^∑_i=1^n F_b_i and
\overline{𝐰}_𝐁,𝐪(∅) =1.
Then we have the following analogue of Theorem
<ref>.
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0.
Let B^- = F(b_1, …, b_n-1). Then for all
1 ≤ k ≤ n,
𝐅𝐓_k(B,q) = q^F_b_n𝐅𝐓_k(B^-,q)+ [F_b_n]_q 𝐅𝐓_k-1(B^-,q)
and
\overline{𝐅𝐓}_k(B,q) = \overline{𝐅𝐓}_k(B^-,q)+ [F_b_n]_q \overline{𝐅𝐓}_k-1(B^-,q).
We claim (<ref>)
results by classifying the Fibonacci file placements
in ℱ𝒯_k(B) according to whether there is a tiling in the
last column. If there is no tiling in the last column of P,
then removing the last column of P produces
an element of ℱ𝒯_k(B^-). Thus such placements
contribute q^F_b_n𝐅𝐓_k(B^-,q) to 𝐅𝐓_k(B,q)
since the fact that the last column has no tiling means that
it contributes
a factor of q^F_b_n to 𝐰_𝐁,𝐪(P).
If there is a tiling in
the last column, then the Fibonacci file placement
that results by removing the last column is an
element of ℱ𝒯_k-1(B^-) and the sum of
the weights of the possible
Fibonacci tilings of height b_n for the last column
is ∑_T ∈ℱ_b_n q^rank_b_n(T) =
[F_b_n]_q. Hence such placements
contribute [F_b_n]_q 𝐅𝐓_k-1(B^-,q) to
𝐅𝐓_k(B,q). Thus
𝐅𝐓_k(B,q) = q^F_b_n𝐅𝐓_k(B^-,q)+ [F_b_n]_q 𝐅𝐓_k-1(B^-,q).
A similar argument will prove (<ref>).
If B=F(b_1, …, b_n) is a Ferrers board,
then we let B_x denote the board that results by
adding x rows of length n below B. We label
these rows from top to bottom with the numbers
1,2, …, x. We shall call
the line that separates B from these x rows the bar.
A mixed file placement P on the board B_x consists
of picking for each column b_i either (i) a Fibonacci tiling
T_i of height b_i above the bar or (ii) picking
a row j below the bar to place a rook in the cell in row j
and column i. Let ℳ_n(B_x) denote the set of all
mixed file placements on B_x. For any P ∈ℳ_n(B_x),
we let one(P) denote the number of tiles of height 1 that appear
in P and two(P) denote the number of tiles of height 2 that appear
in P. Then in <cit.>, we defined the weight of P, WF(P,p,q), to be
q^one(P)p^two(P).
For example,
Figure <ref> pictures a mixed placement P in
B_x where B = F(2,3,4,4,5,5) and x is 9 such that
WF(P,p,q) = q^7p^2.
[Figure (mixed): A mixed file placement.]
Also in <cit.>, we proved the following theorem by counting
∑_P ∈ℳ_n(B_x) WF(P,p,q) in two different ways.
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0.
(x+F_b_1(p,q))(x+F_b_2(p,q)) ⋯ (x+F_b_n(p,q)) =
∑_k=0^n 𝐟𝐓_k(B,p,q) x^n-k.
To obtain the desired q-analogues for this paper, we
must define new weight functions
for mixed placements P ∈ℳ_n(B_x).
That is, suppose that P ∩ B is the Fibonacci file placement
Q = ((i_1,T_i_1), …, (i_n-k,T_i_n-k)), and suppose that, for the rooks below the bar in columns
1 ≤ j_1< ⋯ < j_k ≤ n, the rook in column
j_s is in row d_j_s for s =1, …, k.
Then we define
𝐰_𝐁_𝐱,𝐪(P) = 𝐰_𝐁,𝐪(Q)q^{∑_t=1^k (d_j_t -1)} =
q^{∑_s=1^n-k rank_b_i_s(T_i_s) + ∑_t=1^k (F_b_j_t+d_j_t-1)} and
\overline{𝐰}_𝐁_𝐱,𝐪(P) = \overline{𝐰}_𝐁,𝐪(Q)q^{∑_t=1^k (d_j_t -1)} =
q^{∑_s=1^n-k rank_b_i_s(T_i_s) + ∑_t=1^k (d_j_t-1)}.
That is, for each column i the choice of a Fibonacci tiling
T_i of height b_i above the bar contributes a factor
of q^rank_b_i(T_i) to 𝐰_𝐁_𝐱,𝐪(P), and the
choice of picking
a row j below the bar to place a rook in the cell in row j
and column i contributes a factor of
q^F_b_i+j-1 to 𝐰_𝐁_𝐱,𝐪(P). Similarly,
for each column i the choice of a Fibonacci tiling
T_i of height b_i above the bar contributes a factor
of q^rank_b_i(T_i) to \overline{𝐰}_𝐁_𝐱,𝐪(P), and the
choice of picking
a row j below the bar to place a rook in the cell in row j
and column i contributes a factor of
q^j-1 to \overline{𝐰}_𝐁_𝐱,𝐪(P).
Then we have the following analogue of Theorem <ref>.
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0.
Then for all positive integers x,
[x+F_b_1]_q [x+F_b_2]_q ⋯ [x+F_b_n]_q =
∑_k=0^n 𝐅𝐓_k(B,q) [x]_q^n-k
and
([x]_q+[F_b_1]_q) ([x]_q+[F_b_2]_q) ⋯ ([x]_q+[F_b_n]_q) =
∑_k=0^n \overline{𝐅𝐓}_k(B,q) [x]_q^n-k.
To prove (<ref>),
fix x to be a positive integer and consider
the sums
S = ∑_P ∈ℳ_n(B_x)𝐰_𝐁_𝐱,𝐪(P) and
\overline{S} = ∑_P ∈ℳ_n(B_x)\overline{𝐰}_𝐁_𝐱,𝐪(P).
For S, in a given column i, our choice of the Fibonacci tiling
of height b_i will contribute a factor
of ∑_T ∈ℱ_b_i q^rank_b_i(T) =[F_b_i]_q
to S. Our choice of placing a rook below the bar in
column i contributes a factor of
∑_j=1^x q^F_b_i+j-1 = q^F_b_i(1+q+q^2 + ⋯ +q^x-1) =
q^F_b_i[x]_q
to S. As [F_b_i]_q + q^F_b_i[x]_q =[x+F_b_i]_q,
each column b_i of B contributes
a factor of [x+ F_b_i]_q to S so that
S = ∏_i=1^n [x + F_b_i]_q.
For \overline{S}, in a given column i, our choice of the Fibonacci tiling
of height b_i will contribute a factor
of ∑_T ∈ℱ_b_i q^rank_b_i(T) =[F_b_i]_q
to \overline{S}. Our choice of placing a rook below the bar in
column i contributes a factor of
∑_j=1^x q^j-1 = [x]_q
to \overline{S}. Thus each column b_i contributes
a factor of [x]_q+ [F_b_i]_q to \overline{S} so that
\overline{S} = ∏_i=1^n ([x]_q + [F_b_i]_q).
On the other
hand, suppose that we fix a Fibonacci file placement
P ∈ℱ𝒯_k(B).
Then we want to compute S_P = ∑_Q ∈ℳ_n(B_x),
Q ∩ B = P𝐰_𝐁_𝐱,𝐪(Q), which is the sum of
𝐰_𝐁_𝐱,𝐪(Q) over all
mixed placements Q such that Q intersected with B equals P.
It is easy to see that such a Q arises by choosing
a rook to be placed below the bar for each column
that does not contain a tiling. Each such column contributes
a factor of 1+q+ ⋯ +q^x-1 =[x]_q in addition
to the weight 𝐰_𝐁,𝐪(P). Thus it follows that
S_P =𝐰_𝐁,𝐪(P) [x]_q^n-k.
Hence it follows that
S = ∑_k=0^n ∑_P ∈ℱ𝒯_k(B) S_P
= ∑_k=0^n [x]_q^n-k∑_P ∈ℱ𝒯_k(B)𝐰_𝐁,𝐪(P)
= ∑_k=0^n 𝐅𝐓_k(B,q) [x]_q^n-k.
The same argument will show that
\overline{S} = ∑_k=0^n \overline{𝐅𝐓}_k(B,q) [x]_q^n-k.
Now consider the special case of the previous two theorems
when B_n = F(0,1,2, …, n-1). Then (<ref>) implies
that
𝐅𝐓_n+1-k(B_n+1,q) = q^F_n𝐅𝐓_n+1-k(B_n,q) +
[F_n]_q
𝐅𝐓_n-k(B_n,q).
It then easily follows that for all 0 ≤ k ≤ n,
𝐜𝐅_n,k(q) = 𝐅𝐓_n-k(B_n,q).
Note that 𝐜𝐅_n,0(q) = 0 for all n ≥ 1 since
there are no Fibonacci file placements in
ℱ𝒯_n(B_n) since there are only n-1 non-zero columns.
Moreover, in such a situation, we see that (<ref>)
implies that
[x]_q [x+F_1]_q [x+F_2]_q ⋯ [x+F_n-1]_q =
∑_k=1^n 𝐜𝐅_n,k(q) [x]_q^k.
Thus we have given a combinatorial proof of
(<ref>).
Similarly, (<ref>) implies
that
\overline{𝐅𝐓}_n+1-k(B_n+1,q) = \overline{𝐅𝐓}_n+1-k(B_n,q) +
[F_n]_q
\overline{𝐅𝐓}_n-k(B_n,q).
It then easily follows that for all 0 ≤ k ≤ n,
\overline{𝐜𝐅}_n,k(q) = \overline{𝐅𝐓}_n-k(B_n,q).
Moreover, in such a situation, we see that (<ref>)
implies that
[x]_q ([x]_q+[F_1]_q) ([x]_q+[F_2]_q) ⋯ ([x]_q+[F_n-1]_q) =
∑_k=1^n \overline{𝐜𝐅}_n,k(q) [x]_q^k.
Thus we have given a combinatorial proof of
(<ref>).
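Both expansions are easy to verify by machine for small n; in the sketch below (our illustration) the symbol X stands for [x]_q, each factor [x+F_i]_q is rewritten as [F_i]_q + q^{F_i}[x]_q, and the coefficients are compared with 𝐜𝐅_n,k(q) computed from the recursion.

```python
from sympy import symbols, expand, Poly

q, X = symbols('q X')             # X plays the role of [x]_q

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def qint(m):
    return sum(q**i for i in range(m))

def cF(n, k):
    """cF_{n,k}(q) via cF_{n+1,k} = q^{F_n} cF_{n,k-1} + [F_n]_q cF_{n,k}."""
    if n == 0 and k == 0:
        return 1
    if k < 1 or k > n:
        return 0
    return expand(q**fib(n-1)*cF(n-1, k-1) + qint(fib(n-1))*cF(n-1, k))

n = 5
prod = X                          # [x]_q * prod_{i=1}^{n-1} ([F_i]_q + q^{F_i} [x]_q)
for i in range(1, n):
    prod *= qint(fib(i)) + q**fib(i)*X
coeffs = Poly(expand(prod), X).all_coeffs()[::-1]
for k in range(1, n + 1):
    assert expand(coeffs[k] - cF(n, k)) == 0
print("product formula checked for n =", n)
```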
The Fibonacci analogue of rook placements defined in
<cit.> is a slight
variation of Fibonacci file placements. The main
difference is that each
tiling cancels some of the topmost cells in each column
to its right that have not been canceled by a tiling
which is further to the left. Our goal is to ensure
that if we start with a Ferrers board B =F(b_1, …, b_n),
our cancellation scheme leaves the numbers of
uncanceled cells in the empty columns equal to b_1, …, b_n-k,
reading from left to right.
That is, if B=F(b_1, …, b_n), then we let
𝒩𝒯_k(B) denote the set of all configurations such
that there are k columns (i_1, …, i_k) of B where
1 ≤ i_1 < ⋯ < i_k ≤ n such that the
following conditions hold.
1. In column i_1, we place a Fibonacci tiling
T_i_1 of height b_i_1, and for each j > i_1,
this tiling cancels the top b_j-b_j-1 cells
of column j. This cancellation has the effect of
ensuring that the number of uncanceled cells in the columns
without tilings at this point is
b_1, …, b_n-1, reading from left to right.
2. In column i_2, our cancellation due to the tiling
in column i_1 ensures that there are b_i_2-1 uncanceled
cells in column i_2. Then we place a Fibonacci tiling
T_i,2 of height b_i_2-1 and for each j > i_2,
we cancel the top b_j-1-b_j-2 cells in column j
that has not been canceled by the tiling in column i_1.
This cancellation has the effect of
ensuring that the number of uncanceled cells in columns
without tilings at this point
is b_1, …, b_n-2, reading from left to right.
3. In general, when we reach column i_s, we assume
that the cancellation due to the tilings in columns
i_1, …, i_s-1 ensures that the number of uncanceled
cells in the columns without tilings is b_1, …, b_n-(s-1),
reading from left to right. Thus there will be
b_i_s -(s-1) uncanceled cells in column i_s at this point.
Then we place a Fibonacci tiling
T_i_s of height b_i_s-(s-1), and for each j > i_s,
this tiling will
cancel the top b_j-(s-1)-b_j-s cells in column
j that have not been canceled by the tilings in
columns i_1, …, i_s-1.
This cancellation has the effect of
ensuring that the number of uncanceled cells in columns
without tilings at this point
is b_1, …, b_n-s, reading from left to right.
We shall call such a configuration
a Fibonacci rook placement and denote it by
P = ((i_1,T_i_1), …, (i_k,T_i_k)).
Let one(P) denote the number of tiles of height 1 that appear
in P and two(P) denote the number of tiles of height 2 that appear
in P. Then in <cit.>, we defined the weight of P, WF(P,p,q), to be
q^one(P)p^two(P). For example, on the left in
Figure <ref>, we have
pictured an element P of 𝒩𝒯_3(F(2,3,4,4,6,6))
whose weight is q^5 p^2. In
Figure <ref>, we have
indicated the canceled cells by the tiling in column
i by placing an i in the cell.
We note that in the special case where B = F(0,k,2k, …, (n-1)k),
our cancellation scheme is quite simple. That is, each tiling
just cancels the top k cells in each column to its right which
have not been canceled by tilings to its left.
For example, on the right in
Figure <ref>, we have
pictured an element P of 𝒩𝒯_3(F(0,1,2,3,4,5))
whose weight is q^6 p. Again, we have
indicated the canceled cells by the tiling in column
i by placing an i in the cell.
[Figure (Fibrook): A Fibonacci rook placement.]
We define the k-th p,q-Fibonacci rook polynomial of B, 𝐫𝐓_k(B,p,q),
by setting
𝐫𝐓_k(B,p,q) = ∑_P ∈𝒩𝒯_k(B) WF(P,p,q).
If k =0, then the only element of 𝒩𝒯_k(B) is the empty placement,
whose weight by definition is 1.
Then in <cit.>, we proved the following two theorems concerning
Fibonacci rook placements in Ferrers boards.
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0.
Let B^- = F(b_1, …, b_n-1). Then for all
1 ≤ k ≤ n,
𝐫𝐓_k(B,p,q) = 𝐫𝐓_k(B^-,p,q)+ F_b_n-(k-1)(p,q) 𝐫𝐓_k-1(B^-,p,q).
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0.
x^n =
∑_k=0^n 𝐫𝐓_n-k(B,p,q) (x-F_b_1(p,q))(x-F_b_2(p,q))⋯
(x-F_b_k(p,q)).
To obtain the q-analogues that we want for this paper, we need to define two
new weight functions on Fibonacci rook tilings.
That is, suppose that B =F(b_1, …, b_n) is a Ferrers board
and P = ((i_1,T_i_1), …, (i_k,T_i_k)) is a Fibonacci
rook tiling in 𝒩𝒯_k(B). Then we know
that the numbers of uncanceled cells in the n-k columns
which do not have tilings are b_1, … ,b_n-k, reading from
left to right. Suppose that the numbers of uncanceled cells in
the columns with tilings are e_1, …, e_k, reading from left
to right, so that tiling T_i_j is of height e_j for j =1,
…, k. Then we define
𝐖_𝐁,𝐪(P) = q^{∑_s=1^k rank_e_s(T_i_s) +
∑_t=1^n-k F_b_t} and
\overline{𝐖}_𝐁,𝐪(P) = q^{∑_s=1^k rank_e_s(T_i_s)}.
For example, if B=F(2,3,4,4,5,5) and P=((1,T_1),(3,T_3),(5,T_5)) is the rook tiling pictured
in Figure <ref>, then e_1 =2, e_2 = 3, and e_3 =4, and
one can check that rank_2(T_1) =0, rank_3(T_3) = F_2 =1, and rank_4(T_5) = F_3 = 2. Thus
𝐖_𝐁,𝐪(P)=q^0+1+2+F_2+F_3+F_4 = q^9 and
\overline{𝐖}_𝐁,𝐪(P)=q^0+1+2 = q^3.
If k =0, then the only element of 𝒩𝒯_k(B) is the empty placement
∅, which means that 𝐖_𝐁,𝐪(∅) =
q^∑_i=1^n F_b_i and \overline{𝐖}_𝐁,𝐪(∅) =1.
Then we define 𝐑𝐓_k(B,q) and \overline{𝐑𝐓}_k(B,q)
by setting
𝐑𝐓_k(B,q) = ∑_P ∈𝒩𝒯_k(B)𝐖_𝐁,𝐪(P)
and
\overline{𝐑𝐓}_k(B,q) = ∑_P ∈𝒩𝒯_k(B)\overline{𝐖}_𝐁,𝐪(P).
Note that because of our cancellation scheme, there is a very
simple relationship between 𝐑𝐓_k(B,q) and
\overline{𝐑𝐓}_k(B,q) in the case where B = F(b_1, …,
b_n). That is, in any placement P ∈𝒩𝒯_k(B),
the empty columns have b_1, …, b_n-k uncanceled cells,
reading from left to right, so that
𝐑𝐓_k(B,q) = q^∑_i=1^n-k F_b_i \overline{𝐑𝐓}_k(B,q).
Let B=F(b_1, …, b_n) be a Ferrers board and
x be a positive integer.
Then we let AugB_x denote the board where
we start with B_x and add the flip of the board B about
its baseline below the board. We shall call the
the line that separates B from these x rows the upper bar
and the line that separates the x rows from the flip
of B added below the x rows the lower bar. We shall
call the flipped version of B added below B_x the board
B. For example,
if B=F(2,3,4,4,5,5), then the board AugB_7 is pictured
in Figure <ref>.
[Figure (aug): An example of an augmented board AugB_x.]
The analogue of mixed placements in AugB_x are
more complex than the mixed placements for B_x. We process
the columns from left to right.
If we are in column 1, then we can do one of the following three things.
i. We can put a Fibonacci tiling of height b_1 in the
cells of the first column of B. Then we must
cancel the top-most cells in each of the columns in B to its right
so that the number of uncanceled cells in
the columns to its right are b_1,b_2, …, b_n-1, respectively, as
we read from left to right. This means
that we will cancel b_i-b_i-1 at the top of column i in B
for i=2, …, n. We also cancel the same number
of cells at the bottom of the corresponding columns of B̃.
ii. We can place a rook in any row of column 1 that
lies between the upper bar and lower bar. This rook
will not cancel anything.
iii. We can put the flip of a Fibonacci tiling of height b_1
in column 1 of B̃. This tiling will not
cancel anything.
Next assume that when we get to column j, the
numbers of uncanceled cells in the columns that have
no tilings in B and B̃ are b_1, …, b_k for some k as
we read from left to right. Suppose there are
b_i uncanceled cells in B in column j.
Then we can do one of three things.
i. We can put a Fibonacci tiling of height b_i in the uncanceled cells in column j in B. Then we must
cancel top-most cells of the columns in B to its right
so that the number of uncanceled cells in
the columns which have no tilings up to this point
are b_1,b_2, …, b_k-1.
We also cancel the same number
of cells at the bottom of the corresponding columns of B̃.
ii. We can place a rook in any row of column j that
lies between the upper bar and lower bar. This rook
will not cancel anything.
iii. We can put the flip of a Fibonacci tiling in the b_i
uncanceled cells in column j of B̃. This tiling will not
cancel anything.
We let ℳ_n(AugB_x) denote the set of all
mixed rook placements on AugB_x. For any placement
P ∈ℳ_n(AugB_x),
we define 𝐖_𝐀𝐮𝐠𝐁_𝐱,𝐪(P) and \overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(P)
as follows. For any column i, suppose that
the number of uncanceled cells in B in column i is t_i.
Then the factor 𝐖_𝐢,𝐀𝐮𝐠𝐁_𝐱,𝐪(P) that the placement
in column i contributes to 𝐖_𝐀𝐮𝐠𝐁_𝐱,𝐪(P) is
* q^rank_t_i(T_i) if there is a tiling T_i in B in
column i,
* q^F_t_i+s_i-1 if there is a rook in the s_i^th row from
the top in the x rows that lie between the upper bar and lower bar,
and
* -q^rank_t_i(T_i) if there is a flip of a tiling T_i
in column i of B̃.
Then we define
𝐖_𝐀𝐮𝐠𝐁_𝐱,𝐪(P) = ∏_i=1^n 𝐖_𝐢,𝐀𝐮𝐠𝐁_𝐱,𝐪(P).
Similarly, the factor \overline{𝐖}_𝐢,𝐀𝐮𝐠𝐁_𝐱,𝐪(P)
that the tile placement
in column i contributes to \overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(P) is
* q^rank_t_i(T_i) if there is a tiling T_i in B in
column i,
* q^s_i-1 if there is a rook in the s_i^th row from
the top in the x rows that lie between the upper bar and lower bar,
and
* -q^rank_t_i(T_i) if there is a flip of a tiling T_i
in column i of B̃.
Then we define
\overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(P) = ∏_i=1^n
\overline{𝐖}_𝐢,𝐀𝐮𝐠𝐁_𝐱,𝐪(P).
For example,
Figure <ref> pictures a mixed placement P in
AugB_x where B = F(2,3,4,4,5,5) and x is 7, where
rank_2(T_1) =0, rank_4(T_4) =F_2 =1,
and rank_4(T_5) =F_3 =2, where T_i is the tiling in
column i for i ∈{1,4,5}. The rooks in columns 2 and 6
are in row 5 and the rook in column 3 is in row 3, so that s_2 =s_6 =5
and s_3=3. Thus
𝐖_𝐀𝐮𝐠𝐁_𝐱,𝐪(P) = -q^0+(4+F_2)+(2+F_3)+1+2+(4+F_4) = -q^19 and
\overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(P) =
-q^0+4+2+1+2+4 = -q^13.
[Figure (aug2): A mixed rook placement.]
Our next theorem results from counting
∑_P ∈ℳ_n(AugB_x)\overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(P)
in two different ways.
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0,
and let x be a positive integer. Then
[x]_q^n =
∑_k=0^n \overline{𝐑𝐓}_n-k(B,q)
([x]_q-[F_b_1]_q) ([x]_q-[F_b_2]_q)⋯
([x]_q-[F_b_k]_q).
Fix x to be a positive integer and consider
the sum \overline{S}=∑_P ∈ℳ_n(AugB_x)\overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(P).
First we consider the contribution of each column as we proceed
from left to right. Given our three choices
in column 1, the contribution of our choice of the tilings of
height b_1 in column 1 of B is [F_b_1]_q, the choice
of placing a rook in between the upper bar and the lower bar is [x]_q,
and the contribution of our choice of the flipped tilings of
height b_1 in column 1 of B̃ is -[F_b_1]_q.
Thus the contribution of our choices in
column 1 to \overline{S} is [F_b_1]_q+[x]_q -[F_b_1]_q = [x]_q.
In general, after we have processed our choices in
the first j-1 columns, our cancellation scheme ensures
that the number of uncanceled cells in B and B̃ in
the j-th column is b_i for some i ≤ j.
Thus given our three choices
in column j, the contribution of our choice of the tilings of
height b_i in column j of B is [F_b_i]_q, the choice
of placing a rook in between the upper bar and the lower bar is [x]_q,
and the contribution of our choice of the flipped tilings of
height b_i in column j of B̃ is -[F_b_i]_q.
Thus the contribution of our choices in
column j to \overline{S} is [F_b_i]_q+[x]_q -[F_b_i]_q = [x]_q.
It follows that \overline{S} = [x]_q^n.
On the other
hand, suppose that we fix a Fibonacci rook placement
P ∈𝒩𝒯_n-k(B).
Then we want to compute S_P = ∑_Q ∈ℳ_n(AugB_x),
Q ∩ B = P\overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(Q), which is the sum
of \overline{𝐖}_𝐀𝐮𝐠𝐁_𝐱,𝐪(Q) over all
mixed placements Q such that Q intersected with B equals P.
Our cancellation scheme ensures that the numbers of uncanceled cells in B and B̃
in the k columns that do not contain tilings in P are
b_1, …, b_k as we read from left to right.
For each such 1 ≤ i ≤ k, the factor that
arises from either choosing a rook to be placed
in between the upper bar and lower bar or a flipped
Fibonacci tiling of height b_i in B̃ is
[x]_q-[F_b_i]_q. It follows that
S_P =\overline{𝐖}_𝐁,𝐪(P) ∏_i=1^k ([x]_q-[F_b_i]_q).
Hence it follows that
\overline{S} = ∑_k=0^n ∑_P ∈𝒩𝒯_n-k(B) S_P
= ∑_k=0^n (∏_i=1^k ([x]_q-[F_b_i]_q))
∑_P ∈𝒩𝒯_n-k(B)\overline{𝐖}_𝐁,𝐪(P)
= ∑_k=0^n \overline{𝐑𝐓}_n-k(B,q) ( ∏_i=1^k ([x]_q-[F_b_i]_q)).
Let B =F(b_1, …, b_n) be a Ferrers
board where 0 ≤ b_1 ≤⋯≤ b_n and b_n > 0,
and let x ≥ F_b_n. Then
[x]_q^n =
∑_k=0^n 𝐑𝐓_n-k(B,q) [x-F_b_1]_q [x-F_b_2]_q⋯
[x-F_b_k]_q.
It is easy to see from our cancellation scheme that
𝐑𝐓_n-k(B,q) =q^F_b_1+ ⋯ + F_b_k \overline{𝐑𝐓}_n-k(B,q).
Thus it follows from (<ref>)
that
[x]_q^n =
∑_k=0^n 𝐑𝐓_n-k(B,q)q^-(F_b_1+ ⋯ + F_b_k)
([x]_q-[F_b_1]_q) ([x]_q-[F_b_2]_q)⋯
([x]_q-[F_b_k]_q).
However, since x ≥ F_b_i for every i,
[x]_q-[F_b_i]_q = q^F_b_i[x-F_b_i]_q,
so that
[x]_q^n =
∑_k=0^n 𝐑𝐓_n-k(B,q)
[x-F_b_1]_q [x-F_b_2]_q⋯
[x-F_b_k]_q.
Now consider the special case of the previous three theorems
when B_n = F(0,1,2, …, n-1). Then (<ref>) implies
that
𝐑𝐓_n+1-k(B_n+1,q) = q^{F_{k-1}}𝐑𝐓_n+1-k(B_n,q) +
[F_k]_q 𝐑𝐓_n-k(B_n,q).
Similarly, (<ref>) implies
that
\overline{𝐑𝐓}_n+1-k(B_n+1,q) =
\overline{𝐑𝐓}_n+1-k(B_n,q) +
[F_k]_q \overline{𝐑𝐓}_n-k(B_n,q).
It then easily follows that for all 0 ≤ k ≤ n,
𝐒𝐅_n,k(q) = 𝐑𝐓_n-k(B_n,q)
and
\overline{𝐒𝐅}_n,k(q) = \overline{𝐑𝐓}_n-k(B_n,q).
Note that 𝐒𝐅_n,0(q) = \overline{𝐒𝐅}_n,0(q) =0
for all n ≥ 1 since there are no Fibonacci rook placements in
𝒩𝒯_n(B_n), as there are only n-1 non-zero columns.
Moreover, in such a situation, we see that (<ref>)
implies that for x ≥ F_{n-1},
[x]_q^n =
∑_k=1^n 𝐒𝐅_n,k(q) [x]_q [x-F_1]_q [x-F_2]_q⋯
[x-F_k-1]_q.
Thus we have given a combinatorial proof of
(<ref>).
Similarly, (<ref>)
implies that for all positive integers x,
[x]_q^n =
∑_k=1^n \overline{𝐒𝐅}_n,k(q) [x]_q ([x]_q-[F_1]_q)
([x]_q-[F_2]_q)⋯
([x]_q-[F_k-1]_q).
Thus we have given a combinatorial proof of
(<ref>).
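As a quick machine check of the first identity (our illustration, reusing the SF routine sketched in the introduction), one can compare both sides symbolically for a small n and an integer x ≥ F_{n-1}:

```python
from sympy import symbols, expand

q = symbols('q')

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def qint(m):
    return sum(q**i for i in range(m))

def SF(n, k):
    if n == 0 and k == 0:
        return 1
    if k < 0 or k > n:
        return 0
    return expand(q**fib(k-1)*SF(n-1, k-1) + qint(fib(k))*SF(n-1, k))

n, x = 4, 10
lhs = expand(qint(x)**n)                 # [x]_q^n
rhs = 0
for k in range(1, n + 1):
    term = qint(x)                       # [x]_q
    for i in range(1, k):
        term *= qint(x - fib(i))         # [x - F_i]_q
    rhs += SF(n, k)*term
assert expand(lhs - rhs) == 0
print("SF product formula checked for n =", n)
```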
§ IDENTITIES FOR 𝐒𝐅_N,K(Q) AND
𝐜𝐅_N,K(Q)
In this section, we shall derive various identities and special values
for the Fibonacci analogues of the Stirling numbers 𝐒𝐅_n,k(q),
\overline{𝐒𝐅}_n,k(q), 𝐜𝐅_n,k(q), and
\overline{𝐜𝐅}_n,k(q).
Note that by (<ref>),
𝐒𝐅_n,k(q) = q^∑_i=1^k-1 F_i \overline{𝐒𝐅}_n,k(q).
Then we have the following theorem.
* 𝐒𝐅_n,n(q) =1 and
𝐒𝐅̄_n,n(q) =q^∑_i=1^n-1 F_i.
* 𝐒𝐅_n,n-1(q) =∑_i=1^n-1 [F_i]_q
and
𝐒𝐅̄_n,n-1(q) =q^∑_i=1^n-2 F_i∑_i=1^n-1 [F_i]_q.
* 𝐒𝐅_n,n-2(q) =∑_i=1^n-2 [F_i]_q(∑_j=i^n-2
[F_j]_q)
and
𝐒𝐅̄_n,n-2(q) =q^∑_i=1^n-3 F_i∑_i=1^n-2 [F_i]_q(∑_j=i^n-2
[F_j]_q).
* 𝐒𝐅_n,1(q) =1 and
𝐒𝐅̄_n,1(q) =1.
* 𝐒𝐅_n,2(q) =(n-1) and
𝐒𝐅̄_n,2(q) =q(n-1).
* 𝐒𝐅_n,3(q) =((1+q)^n-1 -(q(n-1)+1))/q^2 and
𝐒𝐅̄_n,3(q) =(1+q)^n-1 -(q(n-1)+1).
For (1), it is easy to see that 𝐒𝐅_n,n(q) =1 since
the only placement in ℱ𝒯_n-n(B_n) is the empty placement.
The fact that 𝐒𝐅̄_n,n(q) =q^∑_i=1^n-1 F_i then
follows from (<ref>).
For (2), we can see that 𝐒𝐅_n,n-1(q) =
∑_i=1^n-1[F_i]_q because placements in
ℱ𝒯_n-(n-1)(B_n) = ℱ𝒯_1(B_n)
have exactly one column which
is filled with a Fibonacci tiling. If that column is
column i+1, then i ≥ 1 and
the sum of the weights of the possible tilings
in column i+1 is [F_i]_q. The fact that
𝐒𝐅̄_n,n-1(q) =q^∑_i=1^n-2 F_i∑_i=1^n-1 [F_i]_q then follows from (<ref>).
For (3), we can classify the placements in
ℱ𝒯_n-(n-2)(B_n) = ℱ𝒯_2(B_n) by the left-most
column which contains a tiling. If that column is
column i+1, then i ≥ 1 and
the sum of the weights of the possible tilings
in column i+1 is [F_i]_q. Moreover, any tiling
in column i+1 cancels one cell in each of the remaining columns, so
that the number of uncanceled cells in the columns to the right
of column i+1 will be i, …, n-2, reading from right to left.
It then follows that
𝐒𝐅_n,n-2(q) =∑_i=1^n-2 [F_i]_q(∑_j=i^n-2
[F_j]_q).
The fact that
𝐒𝐅̄_n,n-2(q) =q^∑_i=1^n-3 F_i∑_i=1^n-2 [F_i]_q(∑_j=i^n-2
[F_j]_q)
then follows from (<ref>).
For (4), note that the elements in ℱ𝒯_n-1(B_n) have
a tiling in every column. Given our cancellation scheme, there is
exactly one such configuration. For example, the unique
element of ℱ𝒯_5(B_6) is pictured in Figure
<ref>, where we have marked the cells
canceled by the tiling in column i. Thus the unique
element of ℱ𝒯_n-1(B_n) is just the Fibonacci
rook placement where there is a tiling of height one in each column. Thus
𝐒𝐅_n,1(q) =𝐒𝐅̄_n,1(q) =1 since
the rank of each tiling of height 1 is 0.
Figure [fullrooks]: The Fibonacci rook tiling in ℱ𝒯_5(B_6).
For (5), note that the elements in ℱ𝒯_n-2(B_n)
have exactly one column i ≥ 2 which does not
have a tiling. Given our cancellation scheme, if the column
without a tiling is column i ≥ 2, then any non-empty
column to the left of column i will be filled with a tiling
of height 1 and every column to the right of column i will
be filled with a tiling of height 2. For example, such an
element of ℱ𝒯_6(B_8) is pictured in Figure
<ref>, where we have marked the cells
canceled by the tiling in column i. Since the ranks of
the tilings of heights 1 and 2 are 0, it follows
that 𝐒𝐅_n,2(q) =n-1.
The fact that
𝐒𝐅̄_n,2(q) =q(n-1)
then follows from (<ref>).
Figure [2fullrooks]: A Fibonacci rook tiling in ℱ𝒯_6(B_8).
For (6), we proceed by induction. Note that we have
proved
𝐒𝐅̄_3,3(q) =q^F_1+F_2=q^2 = (1+q)^2-(2q +1).
Now assume that n ≥ 3 and
𝐒𝐅̄_n,3(q)=(1+q)^n-1-((n-1)q+1). Then
𝐒𝐅̄_n+1,3(q) = q^F_2𝐒𝐅̄_n,2(q)+[F_3]_q
𝐒𝐅̄_n,3(q)
= q(q(n-1) )+(1+q)((1+q)^n-1-((n-1)q+1))
= q^2(n-1) +(1+q)^n -(n-1)q-(n-1)q^2 -q -1
= (1+q)^n -(nq+1).
The fact that 𝐒𝐅_n,3(q)=
((1+q)^n-1-((n-1)q+1))/q^2 then follows from (<ref>).
Next we define
𝕊𝔽_k(q,t) :=
∑_n ≥ k𝐒𝐅_n,k(q) t^n
for k ≥ 1.
It follows from Theorem <ref> that
𝕊𝔽_1(q,t) =
∑_n ≥ 1𝐒𝐅_n,1(q) t^n =
∑_n ≥ 1 t^n = t/1-t.
Then for k > 1,
𝕊𝔽_k(q,t) = ∑_n ≥ k𝐒𝐅_n,k(q) t^n
= t^k + ∑_n > k𝐒𝐅_n,k(q) t^n
= t^k + t ∑_n > k( 𝐒𝐅_n-1,k-1(q) +
[F_k]_q 𝐒𝐅_n-1,k(q)) t^n-1
= t^k + t(∑_n > k𝐒𝐅_n-1,k-1(q) t^n-1)
+ [F_k]_qt(∑_n > k𝐒𝐅_n-1,k(q) t^n-1)
= t^k + t(𝕊𝔽_k-1(q,t) -t^k-1) + [F_k]_qt
𝕊𝔽_k(q,t).
It follows
that
𝕊𝔽_k(q,t) = t/(1 - [F_k]_qt)𝕊𝔽_k-1(q,t).
The following theorem easily follows from (<ref>) and
(<ref>).
For all k ≥ 1,
𝕊𝔽_k(q,t)= t^k/(1-[F_1]_qt) (1-[F_2]_qt)⋯
(1-[F_k]_qt).
Note that it follows from (<ref>) and Theorem <ref> that
𝕊𝔽̄_k(q,t)= ∑_n ≥ k𝐒𝐅̄_n,k(q) t^n =
q^∑_i=1^k-1F_i t^k/(1-[F_1]_qt) (1-[F_2]_qt)⋯
(1-[F_k]_qt).
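As a quick machine check (our addition), the Taylor coefficients of this rational function can be compared with the recursively computed polynomials; F, qint and SF are the helpers from the sketch above:

import sympy as sp

t = sp.symbols('t')
k, N = 3, 7
gf = t**k / sp.prod([1 - qint(F(i)) * t for i in range(1, k + 1)])
series = sp.series(gf, t, 0, N + 1).removeO()
for n in range(k, N + 1):
    assert sp.expand(series.coeff(t, n) - SF(n, k)) == 0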
For any formal power series f(x) = ∑_n ≥ 0f_n x^n,
we let f(x)|_x^n = f_n denote the coefficient of x^n in
f(x). Our next result gives formulas for
𝐒𝐅_n,k(q)|_q^s for small values of s.
* For all n ≥ k ≥ 1, 𝐒𝐅_n,k(q)|_q^0 = \binom{n-1}{k-1}.
* For all n > k ≥ 2, 𝐒𝐅_n,k(q)|_q = (k-2)\binom{n-1}{k}.
* For all
n ≥ s, 𝐒𝐅_n,3(q)|_q^s= \binom{n-1}{s+2}.
* For all n ≥ k ≥ 3, 𝐒𝐅_n,k(q)|_q^2 = (k-3)\binom{n-1}{k}
+ \binom{k-1}{2}\binom{n-1}{k+1}.
* For all n ≥ k ≥ 4,
𝐒𝐅_n,k(q)|_q^3 = (k-4)\binom{n-1}{k}
+ (\binom{k-1}{2} +\binom{k-2}{2} -1)\binom{n-1}{k+1}
+\binom{k}{3}\binom{n-1}{k+2}.
* For all
n ≥ k ≥ 4,
𝐒𝐅_n,k(q)|_q^4 = (k-4)\binom{n-1}{k}
+ (\binom{k-1}{2} +\binom{k-2}{2} +\binom{k-3}{2}-3)\binom{n-1}{k+1} +
(2\binom{k}{3}+\binom{k-1}{3} -k +1) \binom{n-1}{k+2} + \binom{k+1}{4}\binom{n-1}{k+3}.
For (1), note that a placement P in ℱ𝒯_n-k(B_n) must
have k-1 empty columns among columns 2, …, n. If
the weight of P equals 1, then it must be the case that all the tilings
in the columns which contain tilings in P have rank 0, so that
each tiling contains only tiles of height 1. Thus P is
completely determined by the choice of the k-1 empty columns among
columns 2, …, n. Thus
𝐒𝐅_n,k(q)|_q^0 = \binom{n-1}{k-1}.
For (3), note that by part 6 of Theorem <ref>,
we have that for any s ≥ 0,
𝐒𝐅_n,3(q)|_q^s = 𝐒𝐅̄_n,3(q)|_q^s+2 = ((1+q)^n-1-((n-1)q+1))|_q^s+2
= \binom{n-1}{s+2}.
For (2), note that
𝐒𝐅_n,2(q)|_q = 0 since
𝐒𝐅_n,2(q) = (n-1) by part 5 of Theorem <ref>.
By (3), 𝐒𝐅_n,3(q)|_q= \binom{n-1}{3}.
Thus our formula holds for k =2 and k=3.
Next fix k ≥ 4 and assume by induction that
𝐒𝐅_n,k-1(q)|_q= (k-3) \binom{n-1}{k-1}
for all n ≥ k-1. Then we shall prove by induction on
n that 𝐒𝐅_n,k(q)|_q= (k-2) \binom{n-1}{k}.
The base case n =k holds since
𝐒𝐅_k,k(q)=1. But then assuming that
𝐒𝐅_n,k(q)|_q= (k-2) \binom{n-1}{k},
we see that
𝐒𝐅_n+1,k(q)|_q = 𝐒𝐅_n,k-1(q)|_q+
((1+q+q^2+ ⋯ + q^F_k-1)𝐒𝐅_n,k(q))|_q
= (k-3) \binom{n-1}{k-1} + 𝐒𝐅_n,k(q)|_q^0 +
𝐒𝐅_n,k(q)|_q
= (k-3) \binom{n-1}{k-1} + \binom{n-1}{k-1} +(k-2) \binom{n-1}{k}
= (k-2) \binom{n}{k}.
Parts (4), (5), and (6) can easily be proved by induction.
For example, by (3),
𝐒𝐅_n,3(q)|_q^2 = \binom{n-1}{4},
so that our formula holds for k=3. Now suppose
that k ≥ 4 and our formula holds for k-1. That is,
𝐒𝐅_n,k-1(q)|_q^2 = (k-4)
\binom{n-1}{k-1} + \binom{k-2}{2}\binom{n-1}{k}.
Next observe that 𝐒𝐅_k,k(q)|_q^2 =0 since
𝐒𝐅_k,k(q) =1, so that our formula
holds for n =k. Note also that for
k ≥ 4, F_k ≥ 3.
But then for n ≥ k ≥ 4,
𝐒𝐅_n+1,k(q)|_q^2 = 𝐒𝐅_n,k-1(q)|_q^2+([F_k]_q
𝐒𝐅_n,k(q))|_q^2
= 𝐒𝐅_n,k-1(q)|_q^2+((1+q+q^2)
𝐒𝐅_n,k(q))|_q^2
= 𝐒𝐅_n,k-1(q)|_q^2+
𝐒𝐅_n,k(q)|_q^0+
𝐒𝐅_n,k(q)|_q+𝐒𝐅_n,k(q)|_q^2
= (k-4) \binom{n-1}{k-1} + \binom{k-2}{2}\binom{n-1}{k}
+ \binom{n-1}{k-1} + (k-2)\binom{n-1}{k}
+ 𝐒𝐅_n,k(q)|_q^2
= (k-3) \binom{n-1}{k-1} + \binom{k-1}{2}\binom{n-1}{k} + 𝐒𝐅_n,k(q)|_q^2.
This gives us a recursion for 𝐒𝐅_n+1,k(q)|_q^2 in terms of 𝐒𝐅_n,k(q)|_q^2 which we can iterate
to prove that
𝐒𝐅_n,k(q)|_q^2 =
(k-3) \binom{n-1}{k} + \binom{k-1}{2}\binom{n-1}{k+1}.
For (5), we first have to establish the base case k=4:
𝐒𝐅_n+1,4(q)|_q^3 = 𝐒𝐅_n,3(q)|_q^3+([F_4]_q
𝐒𝐅_n,4(q))|_q^3
= 𝐒𝐅_n,3(q)|_q^3+((1+q+q^2)
𝐒𝐅_n,4(q))|_q^3
= 𝐒𝐅_n,3(q)|_q^3+
𝐒𝐅_n,4(q)|_q+
𝐒𝐅_n,4(q)|_q^2+𝐒𝐅_n,4(q)|_q^3
= \binom{n-1}{5} + 2\binom{n-1}{4} +
(\binom{n-1}{4}+3\binom{n-1}{5})
+𝐒𝐅_n,4(q)|_q^3
= 3\binom{n-1}{4} + 4\binom{n-1}{5} +𝐒𝐅_n,4(q)|_q^3.
This gives us a recursion for 𝐒𝐅_n+1,4(q)|_q^3 in terms of 𝐒𝐅_n,4(q)|_q^3 which we can iterate
to prove that
𝐒𝐅_n,4(q)|_q^3 =
3\binom{n-1}{5} + 4\binom{n-1}{6}.
Thus our formula for (5) holds for k =4.
Next assume that k ≥ 5. First we note that
𝐒𝐅_k,k(q)|_q^3 =0 since
𝐒𝐅_k,k(q) =1, so that our formula
holds for n =k. Note also that for
k ≥ 5, F_k ≥ 5.
Now suppose our formula holds for k-1. That is,
𝐒𝐅_n,k-1(q)|_q^3 =
(k-5)
\binom{n-1}{k-1} + (\binom{k-2}{2}+\binom{k-3}{2}-1)
\binom{n-1}{k} +\binom{k-1}{3}\binom{n-1}{k+1}.
But then for n ≥ k ≥ 5,
𝐒𝐅_n+1,k(q)|_q^3 = 𝐒𝐅_n,k-1(q)|_q^3+([F_k]_q
𝐒𝐅_n,k(q))|_q^3
= 𝐒𝐅_n,k-1(q)|_q^3+((1+q+q^2+q^3)
𝐒𝐅_n,k(q))|_q^3
= 𝐒𝐅_n,k-1(q)|_q^3+
𝐒𝐅_n,k(q)|_q^0+
𝐒𝐅_n,k(q)|_q+
𝐒𝐅_n,k(q)|_q^2+
𝐒𝐅_n,k(q)|_q^3
= (k-5)
\binom{n-1}{k-1} + (\binom{k-2}{2}+\binom{k-3}{2}-1)
\binom{n-1}{k} +\binom{k-1}{3}\binom{n-1}{k+1}
+ \binom{n-1}{k-1}+(k-2)\binom{n-1}{k}+
(k-3)\binom{n-1}{k} + \binom{k-1}{2}\binom{n-1}{k+1}
+ 𝐒𝐅_n,k(q)|_q^3
= (k-4) \binom{n-1}{k-1} + (\binom{k-1}{2}+\binom{k-2}{2}-1)
\binom{n-1}{k} + \binom{k}{3}\binom{n-1}{k+1}
+ 𝐒𝐅_n,k(q)|_q^3.
This gives us a recursion for 𝐒𝐅_n+1,k(q)|_q^3 in terms of 𝐒𝐅_n,k(q)|_q^3 which we can iterate
to prove that
𝐒𝐅_n,k(q)|_q^3 =
(k-4) \binom{n-1}{k} + (\binom{k-1}{2}+\binom{k-2}{2}-1)
\binom{n-1}{k+1} + \binom{k}{3}\binom{n-1}{k+2}.
For (6), again, we first have to establish the base case k=4:
𝐒𝐅_n+1,4(q)|_q^4 = 𝐒𝐅_n,3(q)|_q^4+([F_4]_q
𝐒𝐅_n,4(q))|_q^4
= 𝐒𝐅_n,3(q)|_q^4+((1+q+q^2)
𝐒𝐅_n,4(q))|_q^4
= 𝐒𝐅_n,3(q)|_q^4+
𝐒𝐅_n,4(q)|_q^2+
𝐒𝐅_n,4(q)|_q^3+𝐒𝐅_n,4(q)|_q^4
= \binom{n-1}{6} + \binom{n-1}{4} +3\binom{n-1}{5} +
3\binom{n-1}{5}+4\binom{n-1}{6} +
𝐒𝐅_n,4(q)|_q^4
= \binom{n-1}{4} + 6\binom{n-1}{5} + 5\binom{n-1}{6}+𝐒𝐅_n,4(q)|_q^4.
This gives us a recursion for 𝐒𝐅_n+1,4(q)|_q^4 in terms of 𝐒𝐅_n,4(q)|_q^4 which we can iterate
to prove that
𝐒𝐅_n,4(q)|_q^4 = \binom{n-1}{5} + 6\binom{n-1}{6} + 5\binom{n-1}{7}.
Thus our formula for (6) holds for k =4.
Next assume that k ≥ 5. First we note that
𝐒𝐅_k,k(q)|_q^4 =0 since
𝐒𝐅_k,k(q) =1, so that our formula
holds for n =k. Note also that for
k ≥ 5, F_k ≥ 5.
Now suppose our formula holds for k-1. That is,
𝐒𝐅_n,k-1(q)|_q^4 = (k-5)
\binom{n-1}{k-1} + (\binom{k-2}{2}+\binom{k-3}{2}+\binom{k-4}{2}-3)
\binom{n-1}{k} +
(2\binom{k-1}{3}+\binom{k-2}{3}-(k-1)+1)\binom{n-1}{k+1} + \binom{k}{4}\binom{n-1}{k+2}.
But then for n ≥ k ≥ 5,
𝐒𝐅_n+1,k(q)|_q^4 = 𝐒𝐅_n,k-1(q)|_q^4+([F_k]_q
𝐒𝐅_n,k(q))|_q^4
= 𝐒𝐅_n,k-1(q)|_q^4+((1+q+q^2+q^3+q^4)
𝐒𝐅_n,k(q))|_q^4
= 𝐒𝐅_n,k-1(q)|_q^4+
𝐒𝐅_n,k(q)|_q^0+
𝐒𝐅_n,k(q)|_q+
𝐒𝐅_n,k(q)|_q^2+
𝐒𝐅_n,k(q)|_q^3
+ 𝐒𝐅_n,k(q)|_q^4
= (k-5)
\binom{n-1}{k-1} + (\binom{k-2}{2}+\binom{k-3}{2}+\binom{k-4}{2}-3)
\binom{n-1}{k} +
(2\binom{k-1}{3}+\binom{k-2}{3}-(k-1)+1)\binom{n-1}{k+1} +
\binom{k}{4}\binom{n-1}{k+2}
+ \binom{n-1}{k-1}+(k-2)\binom{n-1}{k}+
(k-3)\binom{n-1}{k} + \binom{k-1}{2}\binom{n-1}{k+1}
+ (k-4) \binom{n-1}{k} + (\binom{k-1}{2}+\binom{k-2}{2}-1)
\binom{n-1}{k+1} + \binom{k}{3}\binom{n-1}{k+2}
+ 𝐒𝐅_n,k(q)|_q^4
= (k-4)\binom{n-1}{k-1} +
(\binom{k-1}{2}+\binom{k-2}{2}+\binom{k-3}{2}-3)
\binom{n-1}{k}
+(2\binom{k}{3}+\binom{k-1}{3}-k+1) \binom{n-1}{k+1} +
\binom{k+1}{4}\binom{n-1}{k+2}
+𝐒𝐅_n,k(q)|_q^4.
This gives us a recursion for 𝐒𝐅_n+1,k(q)|_q^4 in terms of 𝐒𝐅_n,k(q)|_q^4 which we can iterate
to prove that
𝐒𝐅_n,k(q)|_q^4 =
(k-4) \binom{n-1}{k} + (\binom{k-1}{2}+\binom{k-2}{2}+\binom{k-3}{2}-3)\binom{n-1}{k+1} +
(2\binom{k}{3}+\binom{k-1}{3}-k+1)
\binom{n-1}{k+2}+ \binom{k+1}{4}\binom{n-1}{k+3}.
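All of these inductions are easy to spot-check by machine; for instance, the following sketch (our addition, reusing the helpers from the first sketch) verifies part (4) for small n and k:

import sympy as sp

for n in range(3, 9):
    for k in range(3, n + 1):
        coeff = sp.expand(SF(n, k)).coeff(q, 2)
        formula = (k - 3) * sp.binomial(n - 1, k) + sp.binomial(k - 1, 2) * sp.binomial(n - 1, k + 1)
        assert coeff == formula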
A sequence of real numbers a_0, …, a_n is said to be unimodal
if there is a 0 ≤ j ≤ n such that
a_0 ≤⋯≤ a_j ≥ a_j+1≥⋯≥ a_n, and it
is said to be log-concave if for 0 ≤ i ≤ n,
a_i^2- a_i-1a_i+1≥ 0, where we set a_-1 = a_n+1 =0.
If a sequence is log-concave, then
it is unimodal. A polynomial P(x) = ∑_k=0^n a_k x^k is said to be unimodal if a_0, …, a_n is a unimodal sequence and is said
to be log-concave if a_0, …, a_n is log-concave.
It is easy to see from Theorem <ref> that
𝐒𝐅_n,k(q) is unimodal for
all n ≥ k when k ∈{1,2,3}.
Computational evidence suggests that
𝐒𝐅_n,4(q) is unimodal for all n ≥ 4 and that
𝐒𝐅_n,5(q) is unimodal for all n ≥ 5.
However, it is not the case that
𝐒𝐅_n,6(q) is unimodal for all n ≥ 6.
For example, one can use part 3 of Theorem <ref> to compute
𝐒𝐅_8,6(q) =
21 +28q +31q^2+29q^3+30q^4+25q^5+23q^6+22q^7+15q^8+10q^9+7q^10+5q^11+3q^12+2q^13+q^14.
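The dip 31, 29, 30 among the low-order coefficients already rules out unimodality; this is easy to reproduce with the recursion (our sketch, reusing the helpers above):

import sympy as sp

p = sp.Poly(sp.expand(SF(8, 6)), q)
coeffs = list(reversed(p.all_coeffs()))   # coefficients of q^0, q^1, q^2, ...
print(coeffs)                             # [21, 28, 31, 29, 30, 25, 23, ...]
assert coeffs[2] > coeffs[3] < coeffs[4]  # 31 > 29 < 30, so not unimodal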
It is not difficult to see that for any Ferrers board
B =F(b_1,…,b_n), the coefficients that appear
in the polynomials 𝐅𝐓_k(B,q) and 𝐅𝐓̄_k(B,q)
are essentially the same. That is, we have the following theorem.
Let B=F(b_1, …, b_n) be a skyline board. Then
𝐅𝐓_k(B,q) = ( ∏_i=1^n (1+[F_b_i]_qz))|_z^k
and
𝐅𝐓̄_k(B,q) =
( ∏_i=1^n (q^F_b_i+[F_b_i]_qz))|_z^k =
q^∑_i=1^n F_b_i( ∏_i=1^n (1+1/q[F_b_i]_1/qz))|_z^k.
It is easy to see that if we are creating a Fibonacci file tiling
in ℱ𝒯_k(B), then in column i we have two choices,
namely, we can leave the column empty or put a Fibonacci tiling
of height b_i. For 𝐅𝐓_k(B,q),
the weight of an empty column is 1 and the sum of the weights of
the Fibonacci tilings of height b_i is [F_b_i]_q. Thus
( ∏_i=1^n (1+[F_b_i]_qz))|_z^k is equal
to the sum over all Fibonacci file tilings where exactly k columns
have a tiling, which is 𝐅𝐓_k(B,q).
Similarly, for 𝐅𝐓̄_k(B,q),
the weight of column i when it is empty is
q^F_b_i and the sum of the weights of
the Fibonacci tilings of height b_i is [F_b_i]_q. Thus
( ∏_i=1^n (q^F_b_i+[F_b_i]_qz))|_z^k is equal
to the sum over all Fibonacci file tilings where exactly k columns
have a tiling, which is 𝐅𝐓̄_k(B,q).
It follows that for any n, the coefficient of q^n in
( ∏_i=1^n (1+[F_b_i]_qz))|_z^k is equal to
the coefficient of 1/q^n+k in
( ∏_i=1^n (1+1/q[F_b_i]_1/qz))|_z^k.
It follows that
𝐅𝐓_k(B,q)|_q^n =
𝐅𝐓̄_k(B,q)|_q^-n-k+∑_i=1^n F_b_i.
It is easy to see from (<ref>) that
𝐜𝐅_n,n-1(q) = ∑_i=1^n-1 [F_i]_q,
so that the coefficient of q^k in 𝐜𝐅_n,n-1(q) weakly decreases as k goes from 0 to F_n-1-1. It follows that the coefficient
of q^k in 𝐜𝐅̄_n,n-1(q) weakly increases. Similarly,
it is easy to see that
𝐜𝐅_n,1(q) = ∏_i=1^n-1[F_i]_q,
so that 𝐜𝐅_n,1(q) is just the rank generating
function of a product of chains, which is known to be symmetric and unimodal,
see <cit.>.
From our computational evidence, it seems that the polynomials
𝐜𝐅_n,2(q)
are unimodal. However, it is not the case that the
𝐜𝐅_n,k(q) are unimodal for all n and k.
For example, 𝐜𝐅_9,7(q) starts out
28+42q+50q^2+53q^3+58q^4+57q^5+58q^6+60q^7+ … .
Finally, our results show that the matrices
||(-1)^n-k𝐜𝐅_n,k(q)|| and
||𝐒𝐅_n,k(q)|| are inverses of each other. One can give a combinatorial proof of this fact. Indeed, the combinatorial
proof of <cit.> which shows that matrices
||(-1)^n-k𝐜𝐟_n,k(q)|| and
||𝐒𝐟_n,k(q)|| are inverses of each other can also be applied to show
that the matrices
||(-1)^n-k𝐜𝐅_n,k(q)|| and
||𝐒𝐅_n,k(q)|| are inverses of each other.
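For small N this inverse relation is also easy to confirm by machine. The sketch below (our addition) generates 𝐜𝐅_n,k(q) from the recursion 𝐜𝐅_n,k(q) = 𝐜𝐅_n-1,k-1(q) + [F_n-1]_q 𝐜𝐅_n-1,k(q) — a reading that is consistent with the special values 𝐜𝐅_n,1(q) and 𝐜𝐅_n,n-1(q) quoted above — and multiplies the two matrices:

import sympy as sp

def cF(n, k):                          # cF_{n,k} = cF_{n-1,k-1} + [F_{n-1}]_q cF_{n-1,k}
    if n == 0 and k == 0:
        return sp.Integer(1)
    if k <= 0 or k > n:
        return sp.Integer(0)
    return sp.expand(cF(n - 1, k - 1) + qint(F(n - 1)) * cF(n - 1, k))

N = 6
S = sp.Matrix(N, N, lambda i, j: SF(i + 1, j + 1))
C = sp.Matrix(N, N, lambda i, j: (-1 if (i - j) % 2 else 1) * cF(i + 1, j + 1))
assert (S * C).applyfunc(sp.expand) == sp.eye(N)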
20
ACMS T. Amdeberhan, X. Chen, V. Moll, and B. Sagan,
Generalized Fibonacci polynomials and Fibonomial coefficients,
Ann. Comb., 18 (2014), 129-138.
BPR Q.T. Bach, R. Paudyal, and J.B. Remmel, A Fibonacci analogue
of the Stirling numbers, submitted to Discrete Math., <http://arxiv.org/abs/1510.04310v2> (2015).
C E.R. Canfield, A Sperner Property preserved by product, Linear Multilinear Algebra, 9 (1980), 151-157.
CS X. Chen and B. Sagan,
On the fractal nature of the Fibonomial triangle,
Integers, 14 (2014) A3, 12 pg.
Fon G. Fontené, Généralisation d'une formule
connue, Nouv. Ann. Math., 15 (1915), 112.
Gould H.W. Gould, The bracket function
and the Fontené-Ward generalized binomial coefficients
with applications to the Fibonomial coefficients,
Fibonacci Quart., 7 (1969), 23-40.
GS H.W. Gould and P. Schlesinger, Extensions
of the Hermite G.C.D. theorems for binomial coefficients,
Fibonacci Quart., 33 (1995), 386-391.
H V.E. Hoggatt Jr., Fibonacci numbers and
generalized binomial coefficients, Fibonacci Quart., 5 (1967), 383-400.
MR B.K. Miceli and J.B. Remmel, Augmented Rook Boards and General
Product Formulas, Electron. J. Combin., vol. 15 (1), (2008),
R85 (55 pgs).
OEIS The On-line Encyclopedia of Integer Sequences <http://oeis.org/>.
SS B. Sagan and C. Savage, Combinatorial
interpretations of binomial coefficients analogues related
to Lucas sequences, Integers, 10 (2010), A52, 697-703.
Torreto R. Torretto and A. Fuchs,
Generalized binomial coefficients, Fibonacci Quart., 2 (1964), 296-302.
Troj P. Trojovský, Discrete Appl. Math., 155 (2007),
2017-2024.
Gill S.G. Williamson, Combinatorics for Computer Science,
Computer Science Press, (1985).
Z E. Zeckendorf, Représentation des nombres naturels par une somme de nombres de Fibonacci ou de nombres de Lucas, Bull. Soc. Roy. Sci. Liège,
41 (1972), 179-182.
|
http://arxiv.org/abs/1701.07822v2 | 20170126185802 | An FPTAS for the parametric knapsack problem | [
"Michael Holzhauser",
"Sven O. Krumke"
] | cs.DS | [
"cs.DS",
"cs.CC",
"math.OC"
] |
TUKL]Michael Holzhausercor1
holzhauser@mathematik.uni-kl.de
TUKL]Sven O. Krumke
krumke@mathematik.uni-kl.de
[cor1]Corresponding author. Fax: +49 (631) 205-4737. Phone: +49 (631) 205-2511
[TUKL]University of Kaiserslautern, Department of Mathematics
Paul-Ehrlich-Str. 14, D-67663 Kaiserslautern, Germany
In this paper, we investigate the parametric knapsack problem, in which the item profits are affine functions depending on a real-valued parameter. The aim is to provide a solution for all values of the parameter. It is well-known that any exact algorithm for the problem may need to output an exponential number of knapsack solutions.
We present a fully polynomial-time approximation scheme (FPTAS) for the problem that, for any desired precision ε∈ (0,1), computes (1-ε)-approximate solutions for all values of the parameter. This is the first FPTAS for the parametric knapsack problem that does not require the slopes and intercepts of the affine functions to be non-negative but works for arbitrary integral values. Our FPTAS outputs 𝒪(n^2/ε) knapsack solutions and runs in strongly polynomial time 𝒪(n^4/ε^2). Even for the special case of positive input data, this is the first FPTAS with a strongly polynomial running time. We also show that this time bound can be further improved to 𝒪(n^2/ε· A(n,ε)), where A(n,ε) denotes the running time of any FPTAS for the traditional (non-parametric) knapsack problem.
knapsack problems parametric optimization approximation algorithms
§ INTRODUCTION
The knapsack problem is one of the most fundamental combinatorial optimization problems: Given a set of n items with weights and profits and a knapsack capacity, the task is to choose a subset of the items with a maximum profit such that the weight of these items does not exceed the knapsack capacity. The problem is known to be weakly -hard and solvable in pseudo-polynomial time. Moreover, several constant factor approximation algorithms and approximation schemes have been developed for the problem <cit.> (cf. <cit.> for an overview).
In this paper, we investigate a generalization of the problem in which the profits are no longer constant but affine functions depending on a parameter λ∈ℝ. More precisely, for a knapsack with capacity W and for each item i in the item set {1,…,n} with weight w_i ∈ℕ_> 0, the profit p_i is now of the form p_i(λ) a_i + λ· b_i with a_i,b_i ∈ℤ. The resulting optimization problem can be stated as follows:
p^*(λ) = max ∑_i=1^n (a_i + λ· b_i) · x_i
s.t. ∑_i=1^n w_i · x_i ≤ W
x_i ∈{0,1} ∀ i ∈{1,…,n}
The aim of this parametric knapsack problem is to return a partition of the real line into intervals (-∞,λ_1], [λ_1,λ_2], …, [λ_k-1,λ_k], [λ_k,+∞) together with a solution x^* for each interval such that this solution is optimal for all values of λ in the interval. The function mapping each λ∈ℝ to the profit of the corresponding optimal solution is called the optimal profit function and will be denoted by p^*(λ) in the following. It is easy to see that p^* is continuous, convex, and piecewise linear with breakpoints λ_1,…,λ_k <cit.>.
Clearly, since the parametric knapsack problem is a generalization of the traditional (non-parametric) knapsack problem, it is at least as hard to solve as the knapsack problem. In fact, it was shown that, even in the case of integral input data, the minimum number of breakpoints of the optimal profit function can be exponentially large, so any exact algorithm may need to return an exponential number of knapsack solutions <cit.>. In this paper, we are interested in a fully polynomial time approximation scheme for the parametric knapsack problem. We will show that, for any desired precision ε∈ (0,1), a polynomial number of intervals suffices in order to be able to provide a (1-ε)-approximate solution for each λ∈ℝ.
Without loss of generality, we may assume that w_i ≤ W for each i ∈{1,…,n} since otherwise we are not able to choose item i at all. However, note that we do not set any further restrictions on the profits, i.e., the parameters a_i and b_i. In particular, each profit may become negative for some specific value of λ. It is even possible that there are no profitable items at all for some values of λ.
§.§ Previous work
A large number of publications investigated parametric versions of well-known problems. This includes the parametric shortest path problem <cit.>, the parametric minimum spanning tree problem <cit.>, the parametric maximum flow problem <cit.>, and the parametric minimum cost flow problem <cit.> (cf. <cit.> for an overview). The parametric knapsack problem considered here was first investigated by <cit.>. She showed that the number of breakpoints of the optimal profit function can become exponentially large in general. If the parameters are integral, the number of breakpoints can still attain a pseudo-polynomial size. The first specialized exact algorithm for the problem was presented by <cit.>, who showed that the problem can be solved in 𝒪(knW) time, where k denotes the number of breakpoints of the optimal profit function p^*.
The first approximation scheme for the problem was recently published by <cit.>. The authors presented a generalization of the standard polynomial-time approximation scheme for the knapsack problem, resulting in a PTAS for the problem with a running time in 𝒪(1/ε^2· n^1/ε+2). In the special case of positive values of λ as well as non-negative values of a_i and b_i for each i ∈{1,…,n}, the authors show that an algorithm of <cit.> for the bicriteria knapsack problem can be used to obtain an FPTAS for the parametric knapsack problem running in 𝒪(n^3/ε^2·log^2 UB_max) time, where UB_max denotes an upper bound on the maximum possible profit with respect to both of the profit functions ∑_i=1^n a_i · x_i and ∑_i=1^n b_i · x_i.
§.§ Our contribution
We present the first FPTAS for the parametric knapsack problem without the restriction to non-negative input data. In particular, we show that we only need a total number of 𝒪(n^2/ε) intervals to approximate the problem (which, itself, may need an exponential number of intervals as described above) and that we can compute an approximate solution for each interval in 𝒪(n^2/ε) time, yielding an FPTAS with a strongly polynomial running time of 𝒪(n^4/ε^2). Our algorithm is the first FPTAS for the problem with a strongly polynomial running time, being superior to the PTAS of <cit.> for ε≤ 0.5 and, in the special case of positive input data, superior to their FPTAS for large input values. In a second step, we improve this result to a running time of 𝒪(n^2/ε· A(n,ε)), where A(n,ε) denotes the running time of any FPTAS for the traditional knapsack problem. Using the FPTAS of <cit.>, this yields a time bound of 𝒪( n^3/εlog1/ε + n^2/ε^4log^2 1/ε) for the parametric knapsack problem.
§.§ Organization
The results of this paper are divided into three main parts. In Section <ref>, we show how we can generalize the well-known greedy-like 1/2-approximation algorithm for the traditional knapsack problem to the parametric setting and how the resulting profit function can be “smoothened” such that it becomes convex and continuous without losing the approximation guarantee. This will be the key ingredient for the parametric FPTAS, which will be presented in Section <ref>. We will first recapitulate a basic FPTAS for the traditional knapsack problem in Section <ref> and then extend it to the parametric case in Section <ref>, presenting a first time bound for the resulting FPTAS. In Section <ref>, as a main result of the paper, we present an improved analysis yielding the claimed running time of the FPTAS. Finally, in Section <ref>, we show that is suffices to solve the corresponding subproblems only approximately so that we can incorporate traditional FPTASs, which improves the running time of our algorithm to the claimed one.
§ OBTAINING A PARAMETRIC 1/2-APPROXIMATION
The parametric FPTAS will rely on a convex and continuous 1/2-approximation of the optimal profit function p^*(λ), i.e., a parametric 1/2-approximation algorithm for the parametric knapsack problem. We will therefore present such an algorithm in this section and describe how we can guarantee these properties of the function.
§.§ Traditional 1/2-approximation algorithm
The basic (non-parametric) 1/2-approximation algorithm proceeds as follows: In a first step, the algorithm sorts the items in decreasing order of their ratios p_i/w_i, which can be done in 𝒪(n log n) time. It then packs the items in this ordering until the next item k with k ≥ 2 would violate the knapsack capacity (or until there are no items left), yielding a feasible solution x'. The algorithm either returns x' or, if better, the solution containing only an item with the largest profit p^(max). If x^* denotes an optimal solution to the given knapsack instance, x^A the solution returned by the above algorithm, and x^LP a solution to the LP-relaxation of the problem, we get that
p^A ≔ p(x^A) = ∑_i=1^n p_i · x^A_i = max{ p^(max), ∑_i=1^k-1 p_i }
≥1/2·∑_i=1^k p_i ≥1/2· p(x^LP) ≥1/2· p(x^*),
so x^A is a 1/2-approximation. We refer to <cit.> for further details on this standard algorithm.
§.§ Parametric 1/2-approximation algorithm
In the parametric knapsack problem, the profits are affine functions of the form p_i(λ) = a_i + λ· b_i such that the optimal profit p^* changes with λ. However, note that the solution x' of the traditional 1/2-approximation algorithm only depends on the ordering of the items and, thus, remains constant as long as this ordering does not change. Moreover, two items can only change their relative ordering if their profit functions intersect, yielding 𝒪(n^2) intervals I'_j, within which the ordering of the items remains unchanged. For all values of λ in such an interval I'_j, the algorithm computes the same solution x', which has a profit of the form p^(j)(λ) ≔α^(j) + λ·β^(j). For each λ∈ I'_j, the 1/2-approximation algorithm either returns x' with profit p^(j)(λ) or the most valuable item only. One possibility to obtain a parametric 1/2-approximation algorithm would be to consider each interval I'_j separately and to divide it into subintervals, depending on whether p^(j)(λ) or p^(max)(λ) is larger, where p^(max) denotes the profit of the most valuable item (which, again, now depends on λ). However, the resulting piecewise linear function p^A(λ) is not necessarily continuous or convex, which will be required later, though (see Figure <ref>).
Instead, we ignore the intervals I'_j and only consider the above affine functions p^(j)(λ) = α^(j) + λ·β^(j). Let S denote the set of all such functions together with the function p^(0)(λ) ≔ 0 and each profit function p_i(λ). By computing the upper envelope of the 𝒪(n^2) functions in S, we obtain a function φ, which is based on feasible solutions whose profit is not smaller than p^A(λ) at each λ∈ℝ (see the dotted curve in Figure <ref>). By standard arguments, it follows that φ is convex, piecewise linear, and continuous as it is the pointwise maximum of affine functions. For m functions, the upper envelope can be computed in 𝒪(m log m) time as shown by <cit.>. Within the same time bound, we can sort the resulting intervals by increasing values of their left boundary. Hence, we obtain a piecewise linear, continuous, and convex 1/2-approximation φ with 𝒪(n^2) breakpoints in 𝒪(n^2 log n) time. In the following, we will refer to the intervals between the breakpoints of φ as I_1,…,I_q.
Strictly speaking, the author proves the result for finite line segments and not for straight lines. However, it is easy to compute upper and lower bounds for the smallest and largest possible intersection point of two of the involved functions, respectively, and to reduce the problem to the resulting interval.
§ OBTAINING A PARAMETRIC FPTAS
Before we explain the parametric FPTAS in detail, we first recapitulate the basic FPTAS for the traditional (non-parametric) knapsack problem as introduced by <cit.> since its way of proceeding is crucial for the understanding of the parametric version.
§.§ Traditional FPTAS
Consider the case of some fixed value for λ, such that the profits have a constant, but possibly negative value p_i. The basic FPTAS for the traditional knapsack problem is based on a well-known dynamic programming scheme, which was originally designed to solve the problem exactly in pseudo-polynomial time: Let P denote an upper bound on the maximum profit of a solution to the given instance. For k ∈{0,…,n} and p ∈{0,…,P}, let w(k,p) denote the minimum weight that is necessary in order to obtain a profit of exactly p with those items in the item set {1,…,k} that have a non-negative[Note that items with a negative profit will not be present in an optimal solution.] profit. For k=0, we set w(0,p) = 0 for p=0 and w(0,p) = W+1 for p > 0. For k ∈{1,…,n} and for the case that p_k ∈{0,…,p}, we compute the values w(k,p) recursively by w(k,p) = min{w(k-1,p), w(k-1,p-p_k) + w_k}, representing the choice to either not pack the item or to pack it, respectively. Else, if p_k ∉{0,…,p}, we set w(k,p) = w(k-1,p) since we can omit negative item- and knapsack-profits. The largest value of p such that w(n,p) ≤ W then yields the optimal solution to the problem. The procedure runs in pseudo-polynomial time 𝒪(nP).
The idea of the basic FPTAS is to scale down the item profits p_i to new values p̅_i ≔⌊ p_i / M ⌋, where M ≔ε·p̂/n for some value p̂ fulfilling 1/2· p^* ≤p̂≤ p^*. Instead of setting p̂≔ p^A as it is done in the traditional FPTAS, we can alternatively use our improved 2-approximate solution φ. Since the maximum possible profit P is then given by
P≤∑_i=1^n p̅_i ≤∑_i=1^n n · p_i/(ε·φ) ≤ (n/ε)·(p^*/φ) ≤2n/ε,
the procedure runs in polynomial time 𝒪(n^2/ε). The crucial observation is that we only lose a factor of (1-ε) by scaling down the profits, so the solution obtained by the above dynamic programming scheme applied to the scaled profits yields a (1-ε)-approximate solution for the problem (see <cit.> for further details on the algorithm).
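The following sketch (our illustration; the function and variable names are ours, and phi stands for any value with p^*/2 ≤ phi ≤ p^*) implements this scaled dynamic program for one fixed λ:

import math

def scaled_knapsack_profit(p, w, W, eps, phi):
    """Return a (1-eps)-approximation of the optimal profit; p may contain negatives."""
    n = len(p)
    M = eps * phi / n
    ps = [math.floor(pi / M) if pi >= 0 else None for pi in p]  # scaled profits
    P = sum(s for s in ps if s is not None)                     # bound on scaled profit
    INF = W + 1
    wgt = [0] + [INF] * P     # wgt[pr] = minimum weight reaching scaled profit pr
    for i in range(n):
        if ps[i] is None:     # items with negative profit are never packed
            continue
        for pr in range(P, ps[i] - 1, -1):
            wgt[pr] = min(wgt[pr], wgt[pr - ps[i]] + w[i])
    best = max(pr for pr in range(P + 1) if wgt[pr] <= W)
    return best * M           # lower bound on the optimal profit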
§.§ Parametric scaling
Although the parametric FPTAS is based on the basic FPTAS, the instance parameters now depend on λ and, thus, change while λ increases. In particular, both the item profits and φ now depend on λ, so the scaled profits p̅_i(λ) ≔⌊ n · p_i(λ)/(ε·φ(λ)) ⌋ have a highly non-linear behavior. Nevertheless, similar to the parametric 1/2-approximation considered in Section <ref>, the solution returned by the FPTAS does not change as long as the profit p̅_i(λ) of each item remains constant. Hence, if I'_j denotes an interval such that the scaled profits p̅_i remain constant for each λ∈ I'_j, we can evaluate the dynamic programming scheme with the profits p̅_i to obtain a (1-ε)-approximate solution for the interval I'_j. The proof of the polynomial running time and the approximation guarantee remain unchanged.
It remains to show how we can divide the real line into a polynomial number of intervals such that the profits remain constant in each interval. The basic FPTAS will then be evaluated for each such interval subsequently (using the corresponding constant scaled profits) in order to obtain (1-ε)-approximate solutions for the whole real line.
One natural idea would be to build on those intervals described in Section <ref> for which φ(λ) behaves like an affine function: Let I_1,…,I_q with q ∈𝒪(n^2) denote the affine segments of φ such that, for each j ∈{1,…,q}, the function φ takes on some affine form φ(λ) = α^(j) + λ·β^(j) for λ∈ I_j.
Now consider one such interval I_j. If φ(λ) = 0 for λ∈ I_j, it also holds that p^*(λ) = 0 since φ(λ) ≥1/2· p^*(λ) for λ∈ℝ, so the all-zero solution is optimal. Otherwise, for the non-rounded scaled profit of each item i, it holds that
n · p_i(λ)/(ε·φ(λ)) = n · (a_i + λ· b_i)/(ε· (α^(j) + λ·β^(j))) ≕n/ε· f_i(λ).
These functions f_i are monotonous, since the first derivative fulfills
df_i/dλ (λ) = (b_i · (α^(j) + λ·β^(j)) - (a_i + λ· b_i) ·β^(j))/(α^(j) + λ·β^(j))^2
= (b_i ·α^(j) - a_i ·β^(j))/(α^(j) + λ·β^(j))^2
and, thus, does not change its sign within I_j. Hence, within the interval I_j, each scaled profit p̅_i has a monotone behavior. Moreover, it holds that 0 ≤ f_i(λ) ≤ 2 for all λ∈ I_j since each item either has a non-negative profit within the whole interval or it will be ignored and since p_i(λ) ≤ p^*(λ) ≤ 2 ·φ(λ). These observations yield that each scaled profit p̅_i changes its (integral) value at most 𝒪(n/ε) times within I_j since we only need to consider values for p̅_i between 0 and 2n/ε. Hence, the above recursive formulae only change 𝒪(n^2/ε) times within each I_j, in which case we have to repeat the computation of the values w(i,p). This yields a total computational overhead of 𝒪(n^4/ε^2) per interval and, since there are at most 𝒪(n^2) intervals, a total running time of 𝒪(n^6/ε^2) for the parametric FPTAS. This running time will be significantly improved in the next subsection.
It should be noted that we need to take care of a proper definition of the returned intervals: For example, consider two scaled profits of the forms p̅_i = 1 + λ and p̅_j = 1 - λ. For the critical value λ_1 = 1, both profits evaluate to 1. However, for λ_1' ≔λ_1 + δ and λ_1”≔λ_1 - δ for a small value of δ, one of the profits already changes its integral value and the dynamic programming scheme may behave differently. One simple solution is to add a single-point interval [λ_1,λ_1] for each critical value as well as two open intervals of the forms (λ_0,λ_1) and (λ_1,λ_2), where λ_0 and λ_2 are adjacent critical values. The returned (ordered) sequence of intervals then alternates between single-point intervals and open intervals. For an open interval, we can obtain an approximate solution by setting λ to the middle point of the interval and performing the dynamic program for the corresponding constant scaled profits.
§.§ Improved Analysis
The major drawback of the above algorithm is that we basically need to reset the whole procedure whenever the function φ changes its behavior. With this approach, we were able to guarantee that each scaled profit has a monotone behavior such that each possible integral value is only attained at most once per interval. As will be shown in this section, we are somewhat allowed to “ignore” these changes without losing the guarantee that each possible value of the scaled profits will only be attained a constant number of times.
For each item i ∈{1,…,n}, the scaled profit p̅_i(λ) attains each value in { 0, …, 2n/ε} at most three times as λ increases from -∞ to +∞.
In order to prove the claim, it suffices to show that the sign of the first derivative of each function f_i changes at most twice while λ increases. As above, let I_1,…,I_q denote the intervals for which φ takes on some affine form φ(λ) = α^(j) + λ·β^(j) such that
f_i(λ) = (a_i + λ· b_i)/(α^(j) + λ·β^(j))
for λ∈ I_j. Since φ is convex and continuous, it holds that β^(j)≤β^(j+1) for j ∈{1,…,q-1} and that there is some index h such that α^(j)≤α^(j+1) for j ∈{1,…,h} and α^(j)≥α^(j+1) for j ∈{h+1,…,q-1}. In fact, due to the construction of the intervals, these inequalities hold in the strict sense since β^(j) = β^(j+1) would also imply that α^(j) = α^(j+1) by continuity of φ, so both segments would belong to the same interval. Hence, if we plot the points (β^(j),α^(j))^T into a b-a-space, we get a picture as shown in Figure <ref>. Moreover, for each j ∈{1,…,q-1}, there is some λ_j ∈ℝ with
α^(j) + λ_j ·β^(j) = α^(j+1) + λ_j ·β^(j+1)
due to the continuity and construction of φ. Hence, since β^(j+1) = β^(j) + δ_j for some value δ_j > 0, we get that
α^(j+1) = α^(j) + λ_j ·β^(j) - λ_j ·β^(j+1)
= α^(j) - λ_j ·δ_j,
so the slope of the line that connects the points (β^(j),α^(j))^T and (β^(j+1),α^(j+1))^T evaluates to
(α^(j+1) - α^(j))/(β^(j+1) - β^(j)) = (-λ_j ·δ_j)/δ_j = -λ_j
and, thus, decreases while j increases. This yields that the piecewise linear function g connecting each of the points (β^(j),α^(j))^T in the order j=1,…,q is concave(as illustrated by the highlighted area in Figure <ref>).
This can also be seen by arguments used in the field of computational geometry: It is known that the upper envelope of a set of affine functions of the form c ·λ - d corresponds to the lower surface of a convex hull in the dual space, which is clearly convex. Such a dual space contains a point (c,d)^T for each affine function of the above form in the primal space and, conversely, an affine function λ· c - μ for each point (λ,μ)^T in the primal space. Hence, each line segment (breakpoint) of our upper envelope corresponds to a corner point (line segment) of the lower surface of a convex hull in the dual space (cf. <cit.>). In fact, Figure <ref> shows the dual space mirrored at the b-axis.
Now, for some specific item i ∈{1,…,n}, consider the first derivative of f_i, which as we have seen evaluates to
df_i/dλ(λ) = (b_i ·α^(j) - a_i ·β^(j))/(α^(j) + λ·β^(j))^2
as shown above. Since the denominator is always positive, we need to bound the number of times the sign of the numerator changes. The value b_i ·α^(j) - a_i ·β^(j) can be interpreted as the inner product of the vectors (-a_i,b_i) and (β^(j),α^(j))^T. Hence, since (-a_i,b_i) · (b_i,a_i)^T = 0, the sign of the derivative changes whenever the function g crosses the line going through the origin and the point (b_i,a_i)^T (see the dotted line in Figure <ref>). Since g is concave as shown above, this can happen at most two times while λ increases, which yields the claim.
Theorem <ref> shows that each possible profit is only attained a constant number of times per item although the involved functions f_i are rational functions whose denominator changes for increasing λ. Hence, each item only creates 𝒪(n/ε) subintervals as opposed to the 𝒪(n^2 ·n/ε) subintervals proven in Section <ref>. It remains to show that we can determine these subintervals efficiently.
As shown in the proof of Theorem <ref>, the slope of each function f_i changes at most twice, yielding for each item up to three partitions of the set of intervals I_1,…,I_q such that f_i is monotonous within each partition. By scanning through the sequence of intervals of φ, we can determine these three partitions for all items in total time 𝒪(n · n^2) = 𝒪(n^3). For each item, each partition, and each possible scaled profit in {0,…,2n/ε} (which can be attained only once in the partition), we perform a binary search on the intervals in the partition in order to find a value for λ at which the corresponding profit is attained, if such a λ exists. This can be done in 𝒪(n · 3 ·n/ε·log n^2) = 𝒪(n^2/ε·log n) time in total. Finally, we need to sort this list of critical values of λ in order to determine the subintervals of the FPTAS, which can be done in 𝒪(n ·n/ε·log (n ·n/ε)) = 𝒪(n^2/ε·log (n/ε)) time.
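Inside a single affine piece of φ and a single monotone partition, the critical values of λ can even be written down in closed form by solving f_i(λ) = ε m / n for every candidate integer m; the following sketch (our illustration, with hypothetical parameter names) does exactly this in place of the binary search:

import math

def critical_lambdas(a, b, alpha, beta, lo, hi, n, eps):
    """Critical lambda values of one item (a, b) on phi(lam) = alpha + lam*beta over [lo, hi]."""
    crits = []
    for m in range(math.ceil(2 * n / eps) + 1):   # scaled profits lie in {0, ..., 2n/eps}
        c = m * eps / n                           # solve (a + lam*b)/(alpha + lam*beta) = c
        denom = b - c * beta
        if abs(denom) > 1e-12:
            lam = (c * alpha - a) / denom
            if lo <= lam <= hi:
                crits.append(lam)
    return sorted(crits)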
In summary, we need 𝒪(n^3 + n^2/ε·log(n/ε)) time to determine the 𝒪(n^2/ε) subintervals of the FPTAS. For each of these subintervals, we need to perform the traditional FPTAS with the corresponding (constant) scaled profits, which can be done in 𝒪(n^2/ε) time each. Hence, we obtain an FPTAS for the parametric knapsack problem running in 𝒪(n^4/ε^2) time in total. This yields the main result of this paper:
There is an FPTAS for the parametric knapsack problem running in strongly polynomial time 𝒪(n^4/ε^2).
§ COMBINING FPTASS
In the previous section, we have seen that the parametric knapsack problem can be divided into 𝒪(n^2/ε) subproblems, for which we need to provide (1 - ε)-approximate solutions. These subproblems were created in a way such that the scaled profits are constant for each subproblem. Each of them can be seen as a new, independent, and non-parametric knapsack instance (albeit a special one, since the profits are now of polynomial size). In Section <ref>, we simply solved each of the resulting knapsack instances exactly in 𝒪(n^2/ε) time. The main observation of this section is that we actually do not necessarily need to solve the subproblems exactly – it suffices to solve them up to a factor of (1 - ε) using any FPTAS for the traditional knapsack problem.
Consider one fixed interval I' of the 𝒪(n^2/ε) subintervals of the problem. For each λ∈ I', the scaled profits p̅_i take on constant values. Moreover, it holds that 1/2· p^*(λ) ≤φ(λ) ≤ p^*(λ) for any λ∈ I' as shown in Section <ref>. Let x̄ denote a solution returned by some FPTAS for the traditional knapsack problem that is called on an instance with the scaled profits and let x denote an exact solution to the scaled instance (which, e.g., can be obtained by the dynamic programming scheme as above). Clearly, it holds that
p̅(x̄) ≔∑_i=1^n p̅_i ·x̄_i ≥ (1 - ε) ·∑_i=1^n p̅_i · x_i ≕p̅(x).
For any fixed λ∈ I' and an optimal solution x^* for the unscaled problem at λ, we then get the following approximation guarantee for the solution x̄:
p(x̄) = ∑_i=1^n p_i ·x̄_i ≥∑_i=1^n M ·⌊p_i/M⌋·x̄_i
= M ·p̅(x̄)
≥ (1 - ε) · M ·p̅(x)
≥ (1 - ε) · M ·p̅(x^*)
≥ (1 - ε) · M ·∑_i=1^n (p_i/M - 1 ) · x^*_i
= (1 - ε) ·(p^*(λ) - M ·∑_i=1^n x^*_i )
≥ (1 - ε) ·(p^*(λ) - ε·φ(λ) )
≥ (1 - ε) ·(p^*(λ) - ε· p^*(λ) )
≥ (1 - ε)^2 · p^*(λ)
= (1 - 2ε + ε^2) · p^*(λ)
≥ (1 - 2ε) · p^*(λ).
Setting ε' ≔ε/2 then yields the desired approximation guarantee. Hence, although the subproblems were designed in a way such that the basic dynamic programming scheme does not change its behavior, we do not necessarily need to execute it but can also use an FPTAS instead.
There is an FPTAS for the parametric knapsack problem running in 𝒪(n^2/ε· A(n,ε)) time, where A(n,ε) denotes the running time of an FPTAS for the traditional knapsack problem.
Note that it clearly holds that A(n,ε) ∈Ω(n), so the running time of the main procedure will dominate the overheads to compute φ and the set of subintervals.
At present, the best FPTAS for the traditional knapsack problem is given by <cit.> and achieves a running time of
𝒪( n ·min{log n, log1/ε} +
1/ε^2log1/ε·min{ n, 1/εlog1/ε}).
Under the commonly used assumption that n is much larger than 1/ε in practice <cit.>, this running time evaluates to
𝒪( n log1/ε + 1/ε^3log^2 1/ε),
yielding an FPTAS for the parametric knapsack problem with a strongly polynomial running time of
𝒪( n^3/εlog1/ε + n^2/ε^4log^2 1/ε).
|
http://arxiv.org/abs/1701.07824v1 | 20170126142814 | Isotope effect on filament dynamics in fusion edge plasmas | [
"Ole Hauke Heinz Meyer",
"Alexander Kendl"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Isotope effect on filament dynamics in fusion edge plasmas
^1Institut für Ionenphysik und Angewandte Physik, Universität
Innsbruck, 6020 Innsbruck, Austria
^2Department of Physics and Technology, UiT - The Arctic University of Norway,
9037 Tromsø, Norway
ole.meyer@uibk.ac.at
The influence of the ion mass on filament propagation in the scrape-off layer
of toroidal magnetised plasmas is analysed for various fusion relevant majority
species, like hydrogen isotopes and helium, on the basis of a computational
isothermal gyrofluid model for the plasma edge.
Heavy hydrogen isotope plasmas show slower outward filament propagation and
thus improved confinement properties compared to light isotope plasmas,
regardless of collisionality regimes. Similarly, filaments in fully ionised
helium move more slowly than in deuterium. Different mass effects on the
filament inertia through polarisation, finite Larmor radius, and parallel
dynamics are identified.
Keywords: isotope effect, plasma filament, blob, particle transport
O H H Meyer^1,2 and A Kendl^1
§ INTRODUCTION
In various tokamak experiments the confinement properties have been shown to
scale favourably with increasing mass of the main (fusion relevant) ion plasma
species, specifically hydrogen isotopes and helium <cit.>.
The radial cross-field transport of coherent filamentary structures (commonly
denoted “blobs”) in the scrape-off layer (SOL) of tokamaks accounts for a
significant part of particle and heat losses <cit.> to
the plasma facing components. Experimentally the ion mass effect on SOL filament
dynamics has been studied in a simple magnetised torus <cit.>.
Filamentary transport in tokamaks in general is an active subject of studies in
experiments, analytical theory, and by computations in two and three dimensions.
The basic properties of filamentary transport are reviewed in <cit.>.
Blob propagation results from magnetic drifts that polarise density
perturbations, thus yielding a dipolar electric potential ϕ whose
resulting B×∇ϕ drift in the magnetic field B
drives the filaments down the magnetic field gradient and towards the wall.
The basic physics is illustrated by accounting for the current
paths involved upon charging of the blob by the diamagnetic current:
the closure is via perpendicular polarisation currents in the drift plane and
through parallel divergence of the parallel current <cit.>.
Two-dimensional (2-d) closure schemes are discussed in Ref. <cit.>.
Depending on parallel resistivity, the dominant closure path features distinct
dynamics: if closure is mainly through the polarisation current, the 2-d
cross-field properties are dominant, leading to a mushroom-cape shaped radial
propagation. For reduced resistivity the closure parallel to the magnetic
field direction in 3-d is dominant, and Boltzmann spinning leads to a more
coherently propagating structure at significantly reduced radial velocity.
The isotope mass may have influence on the E× B shearing rate in the
edge region <cit.>, and flow shear in the edge region has been suggested
to be a main agent which controls blob formation <cit.>.
In addition, a finite ion temperature introduces poloidally asymmetric
propagation of blobs <cit.>. The underlying finite Larmor
radius (FLR) effects have been found to contribute to favourable
isotopic transport scaling of tokamak edge turbulence <cit.>.
In this work we study the isotopic mass effect on blob filament propagation by
employing an isothermal gyrofluid model so that relevant FLR contributions to
the blob evolution are effectively included, in addition to the mass
dependencies in polarisation and in parallel ion velocities.
§ GYROFLUID MODEL AND COMPUTATION
The present simulations on the isotopic dependence of 3-d filament and 2-d blob
propagation in the edge and SOL of tokamaks are based on the gyrofluid
electromagnetic model introduced by Scott <cit.>.
In the local delta-f isothermal limit the model consists of
evolution equations for the gyrocenter densities n_s and parallel velocities
u_s of electrons and ions, where the index s denotes the species with
s ∈ (e, i):
d_s n_s/d t = - ∇_∥ u_s + 𝒦(
ϕ_s + τ_s n_s),
β∂ A_∥/∂ t + ϵ_s d_s u_s /
d t = - ∇_∥( ϕ_s + τ_s n_s) + 2 ϵ_sτ_s𝒦( u_s ) - C J_∥.
The gyrofluid moments are coupled by the polarisation equation
∑_s a_s[ Γ_1 n_s + ((Γ_0 - 1)/τ_s) ϕ] = 0,
and Ampere's law
- ∇_⊥^2 A_∥ = J_∥ = ∑_s a_s u_s .
The gyroscreened electrostatic potential acting on the ions is given by
ϕ_s = Γ_1( ρ_s^2 k_⊥^ 2) ϕ_k,
where ϕ_k are the Fourier coefficients of the electrostatic potential.
The gyroaverage operators Γ_0 (b) and Γ_1 (b) = Γ_0^1/2 (b) correspond to
multiplication of Fourier coefficients by I_0(b) e^-b and
I_0(b/2) e^- b/2, respectively, where I_0 is the modified Bessel
function of zero'th order and b = ρ_s^2 k_⊥^ 2.
We here use approximate Padé forms with Γ_0 (b) ≈ (1 +
b)^-1 and Γ_1 (b) ≈ (1 + b/2)^-1 <cit.>.
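As an illustration (ours, not taken from the TOEFL code itself), the Padé form of Γ_1 amounts to a simple multiplication in Fourier space:

import numpy as np

def gyroaverage_pade(phi, dx, dy, rho_s):
    """Apply Gamma_1 ~ (1 + b/2)^(-1) with b = rho_s^2 k_perp^2 to a 2-d field."""
    ny, nx = phi.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    b = rho_s**2 * (KX**2 + KY**2)
    return np.real(np.fft.ifft2(np.fft.fft2(phi) / (1.0 + 0.5 * b)))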
The perpendicular E×B advective and the parallel derivative operators
for species s are given by
d_s/d t = ∂/∂ t + δ^-1{ϕ_s, ·},
∇_∥ = ∂/∂ z - δ^-1β{ A_∥, ·},
where we have introduced the Poisson bracket as
{ f, g } = ( ∂ f/∂ x·∂ g/∂ y - ∂ f/∂ y·∂ g/∂ x).
In local three-dimensional flux tube co-ordinates {x,y,z}, x is a (radial) flux-surface
label, y is a (perpendicular) field line label and z is the position along
the magnetic field line.
In circular toroidal geometry with major radius R, the curvature operator is given by
𝒦 = ω_B( sin z ∂/∂ x + cos z ∂/∂ y),
where ω_B = 2 L_⊥ / R,
and the perpendicular Laplacian is given by
∇_⊥^2 = ∂^2/∂ x^2 + ∂^2/∂ y^2.
Flux surface shaping effects <cit.> in more general tokamak or
stellarator geometry on SOL filaments <cit.> are here neglected for simplicity.
Spatial scales are normalised by the drift scale ρ_0 = √(T_e
m_i0)/(e B), where T_e is a reference electron temperature, B is
the reference magnetic field strength and m_i0 is a reference ion mass,
for which we use the mass of deuterium m_i0 = m_D.
The temporal scale is set by c_0 / L_⊥, where c_0 = √(T_e/m_i0),
and L_⊥ is a perpendicular normalisation length (e.g. a generalized
profile gradient scale length), so that δ = ρ_0 / L_⊥ is the drift scale.
The temporal scale may alternatively be expressed as L_⊥ / c_0 = L_⊥ /
(ρ_0 Ω_0) = (δΩ_0)^-1, with the ion-cyclotron
frequency Ω_0 = c_0 / ρ_0. In the following we employ δ = 0.01.
The main species dependent parameters are
a_s = Z_s n_s0/n_e0 , τ_s = T_s/(Z_s T_e), μ_s = m_s/(Z_s m_i0),
ρ_s^2 = μ_sτ_sρ_0^2, ϵ_s = μ_s( q R/L_⊥)^2,
setting the relative concentrations, temperatures, mass ratios and FLR scales
of the respective species. Z_s is the charge state of the species s with
mass m_s and temperature T_s.
The plasma beta parameter
β = 4 π p_e/B^2( q R/L_⊥)^2,
controls the shear-Alfvén activity, and
C = 0.51 ν_e L_⊥/c_0·m_e/m_i0( q R/L_⊥)^2,
mediates the collisional parallel electron response for Z=1 charged hydrogen
isotopes. The collisional response for other isotopes or ion species is discussed
further below.
§.§ Parallel boundary conditions
We distinguish between two settings for parallel boundary conditions in
3-d simulations. In the case of edge simulations a toroidal
closed-flux-surface (CFS) geometry is considered, and quasi-periodic globally
consistent flux-tube boundary conditions in the parallel direction
<cit.> are applied on both the state variables n_s, ϕ and the flux
variables v_∥, u_s.
For SOL simulations, the state variables assume zero-gradient Neumann (sheath)
boundary conditions at the limiter location, and the flux variables are given as
u_s |_±π = p_e|_±π = ±Γ_d n_e|_±π,
v_∥ = u_s |_±π - J_∥ |_±π = ±Γ_d
[(Λ + 1) n_e|_±π - ϕ|_±π],
at the parallel boundaries z = ±π, respectively <cit.>.
Note that in order to retain the Debye sheath mode in this isothermal model, the Debye current
J_∥ |_±π = ±Γ_d (ϕ - Λ T_e) is expressed as
J_∥ |_±π = ±Γ_d (ϕ - Λ n_e) and the electron
pressure p_e = n_e T_e is replaced by p_e = n_e <cit.>.
This edge/SOL set-up and its effects on drift wave turbulence has been
presented in detail by Ribeiro in Refs. <cit.>.
The sheath coupling constant is Γ_d = √((1 + τ_i) / (μ_i ϵ)).
The floating potential is given by Λ = Λ_0 + Λ_i, where
Λ_0 = log√(m_i0 / (2 π m_e)) and Λ_i = log√(μ_i / (1 + τ_i)).
Here terms with the index i apply only to the ion species.
The expressions presented here are obtained by considering the finite ion
temperature acoustic sound speed, c_i = √((Z_i T_i + T_e) / m_i),
instead of c_0 in Ref. <cit.>. This results in the additional
Λ_i, and the normalisation scheme yields the extra
√((1 + τ_i) / μ_i) in Γ_d.
§.§ Numerical implementation
Our code TOEFL <cit.> is based on the delta-f isothermal electromagnetic
gyrofluid model <cit.> and uses globally consistent flux-tube
geometry <cit.> with a shifted metric treatment of the coordinates
<cit.> to avoid artefacts by grid deformation. In the SOL region a
sheath boundary condition model is applied <cit.>.
The electrostatic potential is obtained from the polarisation equation by an
FFT Poisson solver with zero-Dirichlet boundary conditions in the (radial)
x-direction. Gyrofluid densities are adapted at the x-boundaries to ensure
zero vorticity radial boundary conditions for finite ion temperature.
An Arakawa-Karniadakis scheme is employed for advancing the moment
equations <cit.>.
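For reference, a minimal sketch (ours) of the standard second-order Arakawa discretisation of the Poisson bracket on a doubly periodic grid with spacing h:

import numpy as np

def arakawa_bracket(f, g, h):
    """Energy- and enstrophy-conserving discretisation of {f,g} = f_x g_y - f_y g_x."""
    def sh(a, i, j):                   # periodic shift: sh(a,i,j)[n,m] = a[n+i, m+j]
        return np.roll(np.roll(a, -i, axis=0), -j, axis=1)
    jpp = ((sh(f, 1, 0) - sh(f, -1, 0)) * (sh(g, 0, 1) - sh(g, 0, -1))
           - (sh(f, 0, 1) - sh(f, 0, -1)) * (sh(g, 1, 0) - sh(g, -1, 0)))
    jpx = (sh(f, 1, 0) * (sh(g, 1, 1) - sh(g, 1, -1))
           - sh(f, -1, 0) * (sh(g, -1, 1) - sh(g, -1, -1))
           - sh(f, 0, 1) * (sh(g, 1, 1) - sh(g, -1, 1))
           + sh(f, 0, -1) * (sh(g, 1, -1) - sh(g, -1, -1)))
    jxp = (sh(f, 1, 1) * (sh(g, 0, 1) - sh(g, 1, 0))
           - sh(f, -1, -1) * (sh(g, -1, 0) - sh(g, 0, -1))
           - sh(f, -1, 1) * (sh(g, 0, 1) - sh(g, -1, 0))
           + sh(f, 1, -1) * (sh(g, 1, 0) - sh(g, 0, -1)))
    return (jpp + jpx + jxp) / (12.0 * h * h)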
§ SCALING LAWS FROM DIMENSIONAL ANALYSIS
Blob velocity scalings are commonly deduced from the fluid vorticity
equation. We follow this approach and construct the gyrofluid vorticity
equation to deduce velocity scaling laws.
The vorticity equation can be obtained upon expressing the gyrocenter ion
density in terms of the electron density and polarisation contribution,
inserting in the ion gyrocenter density evolution equation and subtracting
the electron density evolution equation <cit.>.
Up to 𝒪(b) the ion gyrocenter density is
n_i = n_ - 1/2μ_i^2 p_i - μ_i^2 ϕ,
where the ion pressure is given in terms of the electron particle density p_i = τ_i n_. The gyroaveraged potential for species s up to 𝒪(b) is
ϕ_s = ϕ +1/2μ_iτ_i^2 ϕ.
Following Ref. <cit.> we obtain
μ_i∇·/ tϕ^* = J_ -
(1 + τ_i) 𝒦 (n_).
Here we have introduced the modified potential ϕ^* = ϕ + p_i.
The vorticity equation is equivalent to the quasi-neutrality statement of
current continuity, ∇·J = 0.
We identify the divergence of the polarisation current,
∇·J_pol = - μ_i ∇·/ tϕ^* ,
and the divergence of the diamagnetic current,
∇·J_dia = -(1 + τ_i) 𝒦 (n_).
Blob propagation has in a linearisation of the present gyrofluid model been
analytically analysed by Manz in Ref. <cit.>.
Therein the dependence of blob velocity
on the ion isotope mass is in principle present but not explicitly apparent.
To clarify, we here restate the calculations of Ref. <cit.>, but use the
vorticity eq. (<ref>) with the explicit occurence of μ_i.
Neglecting parallel currents, employing the blob correspondence
∂_x,∂_y → 1 / σ and / t →
i γ = i v_b / σ <cit.>, in terms of the blob width σ,
blob velocity v_b and linear growth rate of the instability γ and
furthermore identifying ϕ = i v_b σ (the radial component of the
electric drift), we get (in normalised units):
v_b = 1/√(2)√(√(f^2 + g^2) - f), where f =
( τ_i A/2 σ)^2, g = 1 + τ_i/μ_iω_B σ A.
A is the initial blob amplitude. In the limit of large blobs, g ≫ f, so that
v_b ≈1/√(2)√(1 + τ_i/μ_iω_B σ A),
and for smaller blobs satisfying g ≪ f we get
v_b ≈1 + τ_i/τ_iω_B σ^2/μ_i.
The correspondence with the result of Ref. <cit.> is made explicit upon
renormalising, i.e. letting v_b → v_b / δ c_0, A → A
/ δ, σ→σ / ρ_0, ω_B → 2 L_⊥ / R.
The limits then are
v_b ≈√(1 + τ_i/μ_i) c_0 √(2 σ/R A),
for σ^3 ≫μ_i τ_i^2 A/4 (1 + τ_i) ω_B,
and
v_b ≈ 2 c_0/μ_i1 + τ_i/τ_i(
σ/ρ_0)^2, for σ^3 ≪μ_i
τ_i^2 A/4 (1 + τ_i) ω_B.
For 2-d computations of sufficiently large blobs we consequently expect v_b
∼ 1 / √(μ_i), whereas for the 3-d model the expected scaling is not
a priori that clear. In Ref. <cit.> 3-d (linear) scaling laws were presented,
where the parallel dynamics was approximated by the Hasegawa-Wakatani closure,
∇_∥ J_∥ = (1/C) ∇_∥^2 (n_e - ϕ).
In the following we are going to compare reduced 2-d and full 3-d dynamical
blob simulations for various isotope species with the analytical 1 / √(μ_i)-scaling.
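For orientation, the velocity estimate of eq. (<ref>) is easily evaluated numerically; the short sketch below (ours, in normalised units) reproduces the expected ordering of the isotopes:

import numpy as np

def blob_velocity(sigma, A, tau_i, mu_i, omega_B):
    f = (tau_i * A / (2.0 * sigma))**2
    g = (1.0 + tau_i) / mu_i * omega_B * sigma * A
    return np.sqrt(np.sqrt(f**2 + g**2) - f) / np.sqrt(2.0)

# isotope scan at fixed blob size and amplitude
for name, mu in [("H", 0.5), ("D", 1.0), ("T", 1.5), ("He+", 2.0)]:
    print(name, blob_velocity(sigma=10.0, A=1.0, tau_i=1.0, mu_i=mu, omega_B=0.05))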
§ TWO-DIMENSIONAL BLOB COMPUTATIONS
In this section we numerically analyse the dependence of filament dynamics on
the normalised ion mass μ_i by
reduced 2-d blob simulations of the isothermal gyrofluid eqs. (<ref>,<ref>).
For the computations in this section we use as parameters:
curvature ω_B = 0.05, drift scale δ = 0.01, grid size L_y = 128
ρ_s, grid points N_x = N_y = 256, initial blob amplitude A = 1 and
Gaussian blob width σ = 10 ρ_s.
Fig. <ref> shows contour plots of the electron particle density at
different times of evolution of a seeded blob for several species of cold ions (τ_i=0).
The initial Gaussian density perturbation n_e (x,y, t=0) = A exp[ -
(x^2 + y^2) / σ^2 ] undergoes the familiar transition towards a
mushroom-shaped structure before the blob eventually breaks up due to
secondary instabilities. This figure illustrates the main point for
the following discussion: lighter isotopes propagate faster than heavier
isotopes. In terms of the (normalising) deuterium mass we consider
μ_H = 1/2, μ_D = 1, μ_T = 3/2 and
μ_He+ = 2 with μ_i = m_i /(Z_i m_D). The species index
He+ here denotes singly charged
helium-4 with Z_He+≡ 1. The case of (fully ionised) doubly charged helium-4 isotopes
will be discussed further below in context of 3-d simulations in Sec. <ref>.
Note that the lighter the ion species are, the further the blob is developed
in its radial propagation and evolution at a given snapshot in time.
For warm ions (τ_i >0) the blob propagation depends on the relative
initialisation of the electron and ion gyrocenter densities.
Commonly, a zero E× B vorticity blob initialisation is assumed where
n_i (x,y, t=0) = Γ_1^-1 n_e (x,y, t=0): inserting these into the polarisation
eq. (<ref>) results in vorticity Ω≡∇_⊥^2 ϕ = 0. This
initialisation for most parameters leads to an FLR induced rapid development of a
perpendicular propagation component in addition to the radial propagation of
the blob, and thus a pronounced up-down asymmetry in y direction.
Alternatively, the electron and ion gyrocenter densities can be chosen as
equal with n_i (x,y, t=0) = n_e (x,y, t=0), so that ϕ∼ n_e.
In this case the initial vorticity mostly cancels the FLR asymmetry effect,
and the blob remains more coherent and steady in its radial propagation <cit.>.
The truth may be somewhere in between: as in the experiment blobs are not
“seeded” (in contrast to common simulations), but appear near the separatrix
from E× B drift wave vortices or are sheared off from
poloidal flows, in general some phase-shifted combination of electric
potential and density perturbations will appear.
For comparison we perform simulations with both of these seeded blob density initialisations.
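A minimal sketch (ours) of the two initialisations on a doubly periodic grid, with the zero-vorticity case using the Padé form of Γ_1^-1 in Fourier space:

import numpy as np

def init_blob(nx, ny, lx, ly, sigma, amp, tau_i, mu_i, zero_vorticity=True):
    x = np.linspace(-lx / 2, lx / 2, nx, endpoint=False)
    y = np.linspace(-ly / 2, ly / 2, ny, endpoint=False)
    X, Y = np.meshgrid(x, y)
    ne = amp * np.exp(-(X**2 + Y**2) / sigma**2)
    if not zero_vorticity:
        return ne, ne.copy()           # n_i = n_e: finite initial vorticity
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
    KX, KY = np.meshgrid(kx, ky)
    b = mu_i * tau_i * (KX**2 + KY**2)
    ni = np.real(np.fft.ifft2(np.fft.fft2(ne) * (1.0 + 0.5 * b)))  # Pade Gamma_1^{-1}
    return ne, ni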
We note that the x coordinate is effectively pointing radially outwards (in
negative magnetic field gradient direction) at a low-field midplane location
in a tokamak, and the magnetic field here points into the (x,y) plane (e_z
= e_y × e_x), so that the effective electron diamagnetic
drift direction of poloidal propagation is in the present plots downwards (in
negative y direction).
Fig. <ref> shows blob propagation for warm ions with τ_i = 1,
initialised with the zero vorticity condition.
For comparison, we present in Fig. <ref> the propagation for the same
parameters but initialised with equal electron and ion gyrocenter densities.
Clearly, the latter cases with initial non-zero vorticity Ω = ∇_⊥^2
ϕ result in faster and more coherent radial propagation, whereas the
zero vorticity cases exhibit significant poloidal translation through the FLR
induced spin-up.
Regardless of initialisation, blobs of light ion species with small μ_i
travel faster and are further developed at a given time compared to heavier species.
Relevant quantities which determine the intermittent blob related transport properties
of the tokamak SOL are the maximum blob velocity and acceleration.
In Fig. <ref> we present maximum radial center-of-mass velocities
V_x, max and the average radial acceleration A as a function of
the ion mass parameter μ_i.
The different symbols/colours represent cases with cold (τ_i = 0, blue
lower curves) and warm (τ_i = 1) ions, with both types of initial
conditions used on the latter: the zero vorticity condition is depicted in red
(middle curves) and the n_e=n_i condition in green (upper curves).
It can be seen that the maximum radial blob velocity is slightly larger for n_e=n_i
initialisation due to the mainly radial propagation (left figure), but the
average acceleration is for both cases nearly equal (right figure).
The radial center-of-mass position is given by X_c = [ ∫ dx dy
x n_e ] / [ ∫ dx dy n_e ].
Taking the temporal derivative gives the radial center-of-mass
velocity, V_x = d X_c / d t.
The maximum of V_x (t), V_x, max = max{ V_x (t) },
and the corresponding time of occurrence of the maximum, t_max,
then give a measure of the average radial acceleration,
A = V_x, max / t_max.
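These diagnostics are straightforward to evaluate on simulation output. The following Python sketch is a minimal illustration (our own helper, not part of the gyrofluid code; it assumes the density is available as a NumPy array n_e(t,x,y) on a uniform grid, so the constant grid spacings cancel in X_c):

import numpy as np

def com_diagnostics(n_e, x, times):
    # n_e: array of shape (n_t, n_x, n_y); x: radial grid (n_x,); times: (n_t,)
    weights = n_e.sum(axis=2)                       # integrate over y
    X_c = (weights * x[None, :]).sum(axis=1) / weights.sum(axis=1)
    V_x = np.gradient(X_c, times)                   # V_x = dX_c/dt
    i = int(np.argmax(V_x))
    V_x_max, t_max = V_x[i], times[i]
    return V_x_max, t_max, V_x_max / t_max          # A = V_x,max / t_max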
Clearly, an inverse dependence of velocities and acceleration on effective
ion mass μ_i can be inferred for all cases.
For cold ions, the only mass dependence in the present 2-d isothermal
gyrofluid model, lies in the gyrofluid polarisation equation, carrying over
the mass dependence of the polarisation drift in a fluid model.
As deduced from the basic linear considerations in Sec. <ref>, the
maximum blob velocity scales inversely with the square root of the ion species
or isotope mass: the plotted fits are close to the expected lines V_x,
max∼μ^-0.5.
From dimensional analysis it follows that the acceleration should scale
according to A ∼γ^2 σ, where γ is the growth-rate
of the linear instability. For γ∼ 1 / √(μ_i), we expect
A ∼ 1 / μ_i for cold ions: this is confirmed in Fig. <ref>
(right) where the fitted exponents are close to -1.
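The quoted exponents are obtained from least-squares fits in log-log space; a minimal sketch (our own helper) is:

import numpy as np

def powerlaw_exponent(mu, v):
    # fit alpha in v ~ mu**alpha; e.g. mu = [0.5, 1, 1.5, 2] for H, D, T, He+
    alpha, _ = np.polyfit(np.log(mu), np.log(v), 1)
    return alpha    # close to -0.5 (velocity) and -1 (acceleration) here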
Warm ion simulations also feature μ_i species mass dependence through the FLR
operators Γ_0(b) and Γ_1(b), where b = ρ_i^2 k_⊥^2 =
μ_i τ_i ρ_0 k_⊥^2.
We find that for the parameters at hand, the maximum radial velocity for warm
ions with zero vorticity initialisation is higher compared to cold ions, with
a slightly increased isotopic dependence (seen in an exponent -0.548
compared to -0.500).
Initialising with non-zero vorticity yields approximately 50 % increased
velocities compared to cold ions, and slightly weakens the isotopic dependence
(expressed by an exponent -0.488). This can be attributed to the
mass dependence in the FLR operators, which we further discuss below in Sec. <ref>.
§ THREE-DIMENSIONAL FILAMENT COMPUTATIONS
In three dimensions, when the blob extends into an elongated filament along
the magnetic field lines, additional physics enters into the model.
The basic picture of interchange driving of filaments by charging through ∇
B and curvature drifts to produce a net outward E×B
propagation still remains valid. However, the total current continuity balance
now also involves parallel currents:
- ∇·J_pol = ∇_∥ J_∥ + ∇·J_dia.
The detailed balance among the current terms determines the overall motion of
the filament. Furthermore, blob filaments in the edge of toroidal magnetised
plasmas generally tend to exhibit ballooning in the unfavourable curvature
region along the magnetic field.
The parallel gradients in a ballooned blob structure also lead to a parallel
Boltzmann response, mediated mainly through the resistive coupling of (ϕ -
n_e) to C J_∥ in eq. (<ref>).
This tends towards (more or less phase shifted) alignment between the electric
potential and the perturbed density, which strongly depends on the
collisionality parameter C.
For low collisionality, the electric potential in the blob evolves towards
establishment of a Boltzmann relation in phase with the electron density along B, so
that n_e ∼ exp(-ϕ) ∼ ϕ. This leads to reduced
radial particle transport, and the resulting spatial alignment of the
potential with the blob density perturbation produces a rotating vortex along
contours of constant density, the so-called Boltzmann spinning <cit.>.
Large collisionality leads to a delay in the build-up of the potential within
the blob, so that the radial interchange driving can compete with the parallel
evolution, and the perpendicular propagation is similar to the 2-d scenario.
In the following we investigate how 3-d filament dynamics is depending on the
ion mass. Clearly, we expect an impact in addition to the 2-d effects found in
the previous section, since (i) the parallel ion velocity is inversely
dependent on ion mass (but is for any ion species slow compared to the
electron velocity), (ii) the sheath boundary coupling constants are mass
dependent, and (iii) the basic dependence on the ion mass in the polarisation
current will play a more complicated role compared to the 2-d model.
For our present study we chose the free computational parameters
basically identical to the 2-d case above: drift scale δ = 0.01,
curvature ω_B=0.05, blob amplitude A=1 and perpendicular blob width σ=10.
The Gaussian width of initial parallel density perturbation is given by
Δ_z=√(32), which represents a slight ballooning with some initial
sheath connection:
n_e^3D (t = 0, x, y, z) = n_e⊥ · exp[-(z - z_0)^2/Δ_z^2]
where z_0 is the parallel reference coordinate at the outboard mid-plane and
n_e ⊥(x,y) is the perpendicular Gaussian initial perturbation
introduced in Sec. <ref>.
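Read directly, eq. (<ref>) is the product of the perpendicular Gaussian with a parallel Gaussian envelope. A minimal Python sketch of this seeding (our own helper; grid conventions are an assumption):

import numpy as np

def seeded_blob_3d(x, y, z, A=1.0, sigma=10.0, Delta_z=np.sqrt(32.0), z0=0.0):
    # n_e(t=0) = A exp[-(x^2+y^2)/sigma^2] * exp[-(z-z0)^2/Delta_z^2]
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    n_perp = A * np.exp(-(X**2 + Y**2) / sigma**2)
    return n_perp * np.exp(-(Z - z0)**2 / Delta_z**2)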
In this section we first focus on zero vorticity initial conditions, non-zero
conditions will be discussed further below.
The perpendicular domain size is L_x = L_y = 128 ρ_s with a grid
resolution of N_x = N_y = 256. The number of parallel grid points is varied
between N_z=8 and 16. The filament simulations have been tested for convergence
with respect to the number of drift planes up to N_z =32: N_z = 8 yields
qualitatively and quantitatively similar results to N_z = 16.
The plots showing colour cross sections throughout this article are taken from
simulations with N_z = 8, and the presented quantitative results have been
obtained for N_z = 16.
We here set β = 0 as electromagnetic effects in the SOL are thought to
be of minor importance for the present discussion <cit.>.
The collisionality parameter is chosen in C = 0.5 - 100 to cover a
likely range of tokamak SOL values.
Typical values for the collisionality parameter for the SOL in ASDEX Upgrade
L-mode plasmas have in the literature <cit.> been reported as C ∼ 1
- 100, and a reference characteristic collisionality in Ref. <cit.>
for MAST has been given as C ∼ 2.
Fig. <ref> illustrates the dependence of filament evolution
on collisionality-dependent Boltzmann spinning for warm deuterium ions.
The case with C = 10 represents the strong Boltzmann spinning (drift wave)
regime, where density and potential perturbations are closely aligned and the
radial filament motion is strongly impeded.
Increasing the collisionality to C = 100 reduces parallel electron dynamics
and so effectively increases the lag of potential build-up within the density
blob perturbations, so that the Boltzmann spinning is reduced.
The substantial perpendicular motion component is in this case partly caused
by FLR effects like in the corresponding 2-d case for τ_i=1.
The τ_i contribution to the ion diamagnetic curvature term results in
enhanced radial driving of the blob compared to cold ion cases.
A measure for blob compactness can be introduced <cit.> by
I_c (t) = ∫ dx ∫ dy n_e (x,y,t) h(x,y,t) / ∫ dx
∫ dy n_e (x,y,t=0) h(x,y,t=0),
where the Heaviside function h(x,y,t) is defined as
h (x,y,t) = 1 if (x - x_max (t))^2 + (y -
y_max (t))^2 < σ^2,
and zero elsewhere. That is, the integral takes non-zero values for density
contributions located inside a circle of radius σ around its maximum.
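Numerically, I_c(t) amounts to summing the density inside a disc of radius σ around the instantaneous maximum. A minimal sketch (our own helper, with the same array conventions as above):

import numpy as np

def compactness(n_e, x, y, sigma=10.0):
    # n_e: (n_t, n_x, n_y); returns I_c(t) normalised to the t=0 value
    X, Y = np.meshgrid(x, y, indexing="ij")
    def masked_mass(field):
        i, j = np.unravel_index(np.argmax(field), field.shape)
        inside = (X - x[i])**2 + (Y - y[j])**2 < sigma**2
        return field[inside].sum()
    I0 = masked_mass(n_e[0])
    return np.array([masked_mass(n_e[k]) for k in range(n_e.shape[0])]) / I0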
Fig. <ref> quantifies the above observations. In the strong
Boltzmann spinning regime (C = 10, left) the blob retains much of its
initial shape, so that the compactness is higher compared to the weak
Boltzmann spinning regime (C = 100, right), where filaments feature a more
bean-shaped structure which reduces the compactness measure.
At the time of measurement (t = 5), the heavier isotopic blobs show slightly more
compactness, which is an indirect result of decreased velocity: at a given
time, the lighter isotopic blobs are further developed and thus less circular.
The observed trends are similar for cold (τ_i=0) and warm (τ_i=1) ions.
For increased collisionality, the deviation from circularity is more pronounced, as
the mushroom-cape shape is realized. Blobs in light isotopic plasmas are then
again further developed, i.e. finer scales have emerged at the time of
recording, resulting in a sharper mass dependence of blob compactness compared
to C = 10, where smaller scales are less prominent.
Fig. <ref> shows filament propagation for cold ions (τ_i=0) and weak
Boltzmann spinning. This can be compared to Fig. <ref> which shows
propagation for warm ions (τ_i=1) and also weak Boltzmann spinning.
It is observed that there is poloidal propagation also for the cold ion case,
which is a consequence of the non-vanishing Boltzmann spinning that is also present, although greatly reduced, for these high collisionality (C = 100) cases.
The resulting maximum radial center-of-mass velocities at the outboard midplane
(z = z_0) are shown in Fig. <ref> for weak (right) and strong (left)
Boltzmann spinning.
The fits of the exponent in μ^α to the simulation data in
Fig. <ref> carry evidence that the additional mass dependences
introduced by the 3-d model via parallel sheath-boundary conditions and
parallel ion velocity dynamics cause the clear deviation from a 1 /
√(μ_i) scaling.
For high collisionality (and thus reduced Boltzmann spinning), the parallel
current is impeded and the dynamics is more two-dimensional than for lower
collisionalities. The competing nature of the parallel divergence versus
current continuity via the divergence of the polarisation current with
collisionality is shown in Fig. <ref>:
for each value of collisionality C we compute the isotopic dependence of the
outboard-midplane maximum center-of-mass radial velocity,
V_max∼μ_i^α (μ_i),
contained in the scaling exponent, α (μ_i).
For large values of C the resulting dynamics strongly features
2-d propagation characteristics, since the diamagnetic current is almost
exclusively closed via the polarisation current, which gives the
1 / √(μ_i) scaling introduced in sec. <ref>.
Note that the scaling with respect to C cannot be inferred from linear models.
§.§.§ Non-zero gyrofluid vorticity initialisation
So far we have in this section applied the initial condition n_e ≡ Γ_1 n_i
associated with zero initial vorticity. Now the case of non-zero initial
vorticity given by the condition n_e = n_i is considered.
Fig. <ref> shows results for warm ion (τ_i = 1) computations in the strong
(C = 10, blue) and weak (C = 100, green) Boltzmann spinning regimes.
(Recall that for τ_i = 0 this discussion is redundant since Γ_1 (τ_i = 0)= 1.)
The left figure depicts the maximum radial center-of-mass velocity, and the right
figure shows the corresponding average acceleration.
Comparing with Fig. <ref> we notice that the resulting filament velocities
are similar to those obtained from zero initial vorticity. We also
find that the isotopic dependence ∼μ_i^α is not significantly altered.
Recalling the results from 2-d computation in sec. <ref>, we may
conclude that the initialisation is not that important for 3-d numerical
simulations with respect to the maximum radial filament velocity.
The slight impact of the initial condition on the resulting scaling exponent
for the 2-d case may then be connected to the more prominent mass
dependence in the polarisation current, which is weakened when
parallel currents are taken into account.
§.§.§ Comparison of filaments in deuterium and helium plasmas
When comparing blob filament propagation in deuterium and in fully ionised
helium-4 plasmas in the present model, the dynamical evolution is identical in
the cold ion limit: in the model parameter μ_i = m_i /(Z_i m_D) the
doubled mass of the helium nucleus exactly cancels with the doubled positive
charge, Z_He = 2.
Differences appear only in warm ion cases. The normalised
mass ratio is now identical for both species, μ_D = μ_He =
1. The only model parameter that is different is the helium temperature
ratio, τ_He = T_He / (Z_He T_e). The species mass
effect thus appears in the combination b ∼ μ_i τ_i.
In the following we consider plasmas at equal temperature, T_D =
T_He = 2 T_e such that τ_D = 2 and τ_He = 1.
The higher charge state of the helium nucleus is also indirectly evident in
the reduced electron-ion collision frequency contained in the C -parameter.
For electron-ion collisions where the ions are in charge state Z_i we have C
∼ α_e ν_ei, with <cit.>
α_e ≈1 + 1.198 Z_i + 0.222 Z_i^2/1 + 2.966 Z_i + 0.753 Z_i^2.
For Z_i = 1 we have α_e ≈ 0.51 and Z_i = 2 gives α_e
≈ 0.43.
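For reference, the two quoted values follow directly from eq. (<ref>); a one-line Python helper (ours) reproduces them:

def alpha_e(Z):
    # charge-state factor in C ~ alpha_e * nu_ei, eq. (<ref>)
    return (1 + 1.198*Z + 0.222*Z**2) / (1 + 2.966*Z + 0.753*Z**2)

# alpha_e(1) = 0.513..., alpha_e(2) = 0.430...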
To account for this dependence, we in the following consider two cases:
(1) equal non-normalised collision frequencies, i.e. C → 0.51 C
for deuterium and C → 0.43 C for helium; (2) using the same C
for both deuterium and helium computations.
In case (2), setting first the normalised collisionality parameter C = 10
identical for both D and He computations results in V_D = 1.41
δ c_s0 and V_He = 1.14 δ c_s0.
Setting C = 100 identical for both D and He gives V_D = 3.2 δ
c_s0 and V_He = 2.6 δ c_s0, respectively.
For case (1) we set the electron-ion collision frequency equal for both
species, so that different C parameters are used according to equation <ref>:
C_D = 10 corresponds to C_He = 8.43, and
C_D = 100 to C_He = 84.3.
In these cases, maximal radial He velocities are V_He (C=10) = 1.09
δ c_s0 and V_He (C=100) = 2.46 δ c_s0.
We find that regardless of how the charge-state dependency of the
relative value of the collision parameter is treated, the filaments in
deuterium plasmas move faster than in helium plasmas at identical temperature.
This is visualized in Fig. <ref> showing filament propagation at equal
electron-ion collision frequency and electron temperature.
§ CONCLUSIONS
We have investigated filament propagation in SOL conditions characteristic for
tokamak fusion devices. Quasi-2-d dynamics is restored in high resistivity
regimes, where the maximum radial blob velocity scales inversely proportional
with the square root of the ion mass.
In 2-d the diamagnetic current drive is closed solely via the polarisation current,
yielding this simple characteristic scaling.
The larger inertia through polarisation of more massive ion species
effectively slows the evolution of filaments, and the maximum radial velocity
occurs later compared to blobs in plasmas with lighter ions.
For non-zero initial vorticity condition, the 2-d warm ion blobs show compact
radial propagation, where the isotopic effect through the mass dependent
FLR terms is slightly less pronounced.
Boltzmann spinning appears in 3-d situations particularly for low
collisionality regimes, and leads to a reduced dependence on the ion isotope
mass. The exponent in the scaling V ∼μ_i^α has been found to be
typically within the range α∈ [-0.1, -0.3] for C < 10, which is a
regime relevant for the edge of most present tokamaks.
Considering current continuity, the closure via the parallel current
divergence dynamically competes with current loops being closed through the
polarisation current.
For high collisionalities the parallel current is effectively impeded and
the polarisation current characteristics dominate the blob evolution,
producing a more 2-d like velocity dependence with respect to ion mass.
The initial condition has been found to have little influence on the maximum
radial velocity when in 3-d the parallel closure of the current is taken into account.
For similar ion temperatures and electron-ion collision frequencies, it has
been found that helium filaments travel more slowly compared to deuterium
filaments in both high and low collisionality regimes.
This work was devoted to the identification of isotopic mass effects on
seeded (low amplitude) blob filaments in the tokamak SOL by means of a
delta-f gyrofluid model.
Naturally, blobs emerge near the separatrix within coupled edge/SOL
turbulence. The dependence of fully turbulent SOL transport on the ion mass
therefore will have to be further studied within a framework that consistently
couples edge and SOL turbulence, preferably through a full-f 3-d gyrofluid
(or gyrokinetic) computational model that does not make any smallness
assumption on the relative amplitude or perturbations compared to the
background <cit.>.
§ ACKNOWLEDGMENTS
We acknowledge main support by the Austrian Science Fund (FWF) project Y398.
This work has been carried out within the framework of the EUROfusion
Consortium and has received funding from the Euratom research and training
programme 2014-2018 under grant agreement No 633053. The views and opinions
expressed herein do not necessarily reflect those of the European Commission.
§ REFERENCES
00
Bessenrodt
Bessenrodt-Weberpals M 1993, 33 1205
Hawryluk
Hawryluk R J 1998, Rev. Mod. Phys. 70 537
Liu
Liu B 2016, 56 056012
Xu
Xu Y 2013, 110 265005
Boedo
Boedo J A 2003, Phys. Plasmas 10 1670
LaBombard
LaBombard B 2001, Phys. Plasmas 8 2107
Theiler
Theiler C 2009, 103 065001
DIp_review
D'Ippolito D A, Myra J R and Zweben S J 2011, Phys. Plasmas 18 060501
Krash1
Krasheninnikov S I 2001, Phys. Lett. A 283 368
Krash
Krasheninnikov S I, D'Ippolito D A and Myra J R 2008, J. Plasma Phys. 74, 679
Hahm
Hahm T S 2013 53 072002
Manz
Manz P 2015, Phys. Plasmas 22 022308
Madsen
Madsen J 2011, Phys. Plasmas 18 112504
Matthias
Wiesenberger M, Madsen J and Kendl A 2014, Phys. Plasmas 21 092301
paper1
Meyer O H H and Kendl A 2016, 58 115008
scott05b
Scott B D 2005 Phys. Plasmas 12 102307
dorland93
Dorland W 1993 B 5.3 812–835
kendl06
Kendl A and Scott B D 2006 Phys. Plasmas 13 012504
kendl03
Kendl A, Scott B D, Ball R and Dewar R L 2003 Phys. Plasmas 10 3684
riva17
Riva F, Lanti E, Jolliet S and Ricci R
2017 Plasmas Phys. Contr. Fusion 59 035001
scott98
Scott B D 1998, Phys. Plasmas 5 2334
Ribeiro05
Ribeiro TT and Scott BD
2005 Plasmas Phys. Contr. Fusion 47 1657
Ribeiro08
Ribeiro TT and Scott BD
2008 Plasmas Phys. Contr. Fusion 50 055007
kendl14
Kendl A 2014 Int. J. Mass Spectrometry 365/366 106–113
scott01
Scott B D 2001, Phys. Plasmas 8 447
arakawa66
Arakawa A 1966 J. Comput. Phys. 1 119
karniadakis91
Karniadakis G E, Israeli M and Orszag S A 1991 J. Comput. Phys. 97 414
Naulin
Naulin V and Nielsen A 2003, J. Sci. Comput. 25 104
Scott07
Scott B D 2007, Phys. Plasmas 14 102318
Manz13
Manz P 2013, Phys. Plasmas 20 102307
Krash_bc
Krasheninnikov S I 2001, Phys. Lett. A 283 368
Held16
Held 2016, 56 126005
angus12
Angus J R et al 2012 Contrib. Plasma Phys. 52 348
angus14
Angus J R and Umansky V M 2014 Phys. Plasmas 21 012514
Easy1
Easy L 2014, Phys. Plasmas 21, 122515
Hirshman
Hirshman S P 1977, Phys. Fluids 20 589
Kendl15
Kendl A 2015, 57 045012
|
http://arxiv.org/abs/1701.07689v4 | 20170126132801 | Biologically Feasible Gene Trees, Reconciliation Maps and Informative Triples | [
"Marc Hellmuth"
] | cs.DM | [
"cs.DM",
"q-bio.PE"
] |
Biologically Feasible Gene Trees, Reconciliation Maps and Informative Triples
=============================================================================

Marc Hellmuth

Dpt. of Mathematics and Computer Science, University of Greifswald, Walther-Rathenau-Strasse 47, D-17487 Greifswald, Germany
Saarland University, Center for Bioinformatics, Building E 2.1, P.O. Box 151150, D-66041 Saarbrücken, Germany
The history of gene families - which are equivalent to event-labeled gene trees - can be reconstructed from empirically estimated evolutionary event-relations containing pairs of orthologous, paralogous or xenologous genes. The question then arises as to whether inferred event-labeled gene trees are biologically feasible, that is, whether there is a possible true history that would explain a given gene tree. In practice, this problem is boiled down to finding a reconciliation map - also known as a DTL-scenario - between the event-labeled gene trees and a (possibly unknown) species tree.
In this contribution, we first characterize whether there is a valid
reconciliation map for binary event-labeled gene trees T that contain
speciation, duplication and horizontal gene transfer events and some unknown
species tree S in terms of “informative” triples that are displayed in T
and provide information on the topology of S. These informative triples are used to
infer the unknown species tree S for T.
We obtain a similar result for non-binary gene trees. To this end, however,
the reconciliation map needs to be further restricted.
We provide a
polynomial-time algorithm to decide whether there is a species tree for a given
event-labeled gene tree, and in the positive case, to construct the
species tree and the respective (restricted) reconciliation map.
However, informative triples as well as DTL-scenarios have their limitations
when they are used to explain the biological feasibility of gene trees.
While reconciliation maps imply biological feasibility,
we show that the converse is not true in general. Moreover, we show that
informative triples do not provide
enough information to characterize either “relaxed” DTL-scenarios or
non-restricted reconciliation maps for
non-binary biologically feasible gene trees.
§ BACKGROUND
The evolutionary history of genes is intimately linked with the history of the
species in which they reside. Genes are passed from generation to generation to
the offspring. Some of those genes are frequently duplicated, mutate, or get
lost - a mechanism that also ensures that new species can evolve. In particular,
genes that share a common origin (homologs) can be classified according to the
type of their “evolutionary event relationship”, namely orthologs, paralogs
and xenologs <cit.>. Two homologous genes are
orthologous if at their most recent point of origin the ancestral gene is
transmitted to two daughter lineages; a speciation event happened. They
are paralogous if the ancestor gene at their most recent point of origin
was duplicated within a single ancestral genome; a duplication event
happened. Horizontal gene transfer (HGT) refers to the transfer of genes between
organisms in a manner other than traditional reproduction and across different
species and yield so-called xenologs.
In contrast to orthology and paralogy, the definition of xenology is less
well established and by no means consistent in the biological
literature. One definition stipulates that two genes are
xenologs if their history since their common ancestor involves
horizontal transfer of at least one of them <cit.>.
The mathematical framework for evolutionary event
relations in terms of symbolic
ultrametrics, cographs and 2-structures <cit.>, on the other hand, naturally
accommodates more than two types of events associated with the internal nodes of
the gene tree. We follow the notion in
<cit.> and call two genes xenologous whenever
their least common ancestor was an HGT event.
The knowledge of evolutionary event relations such as orthology, paralogy or
xenology is of fundamental importance in many fields of mathematical and
computational biology, including the reconstruction of evolutionary
relationships across species <cit.>,
as well as functional genomics and gene organization in species
<cit.>. Intriguingly, there are methods to infer orthologs
<cit.>
or to detect HGT <cit.> without
the need to construct gene or species trees. Given empirically estimated
event-relations one can infer the histories of gene families, which are
equivalent to event-labeled gene trees <cit.>. For an
event-labeled gene tree to be biologically feasible there must be a putative
“true” history that can explain the observed gene tree. However, in practice
it is not possible to observe the entire evolutionary history as e.g. gene
losses eradicate the entire information on parts of the history. Therefore, in
practice the problem of determining whether an event-labeled gene tree is
biologically feasible is reduced to the problem of finding a valid
reconciliation map, also known as DTL-scenario, between the event-labeled gene
trees and an arbitrary (possibly unknown) species tree. Tree-reconciliation
methods have been extensively studied over the last years
<cit.>
and are often employed to identify inner vertices of the gene tree as a
duplication, speciation or HGT, given that both, the gene and the species tree
are available.
In this contribution, we assume that only the event-labeled gene tree T is available and wish
to answer the question: How much information about the species tree S and the
reconciliation between T and S is already contained in the gene tree T?
As we shall see, this question can easily be answered for binary gene trees in terms
of “informative” triples that are displayed in T and provide
information on the topology of S. The latter generalizes results established
by Hernandez et al. <cit.> for the HGT-free case.
To obtain a similar result for non-binary gene trees, we show
that the reconciliation map needs to be restricted.
Nevertheless, informative triples can then be used to characterize whether
there is a valid restricted reconciliation map for a given non-binary
gene tree and some unknown species tree S, as well as to construct S,
provided the informative triples are consistent.
However, this approach
also has some limitations. We prove that
“informative” triples are not sufficient to characterize the existence
of a possibly “relaxed” reconciliation map.
Moreover, while reconciliation maps give clear evidence of gene trees to be biologically feasible,
the converse is in general not true. We provide a simple example that shows that not all
biologically feasible gene trees can be explained by DTL-scenarios.
§ PRELIMINARIES
A rooted tree T=(V,E) (on L) is an acyclic connected simple graph
with leaf set L⊆ V, set of edges E, and set of interior vertices
V^0=V∖ L such that there is one distinguished vertex ρ_T ∈ V,
called the root of T.
A vertex v∈ V is called a descendant of u∈ V, v ≼_T u, and u is an
ancestor of v, u ≽_T v, if u lies on the path from
ρ_T to v. As usual, we write v ≺_T u and u ≻_T v to
mean v ≼_T u and u v.
If u ≼_T v or v ≼_T u then u and v
are comparable and otherwise, incomparable.
For x∈ V, we write L_T(x):={ y∈ L | y≼_T x} for the
set of leaves in the subtree T(x) of T rooted in x.
It will be convenient to use a notation for edges e that indicates which of the vertices of
e is closer to the root. Thus, the notation for edges (u,v) of a tree
is always chosen such that u≻_T v.
For our discussion below we need to extend the ancestor relation ≼_T on
V to the union of the edge and vertex sets of T. More precisely, for the
edge e=(u,v)∈ E we put x ≺_T e if and only if x≼_T v and e
≺_T x if and only if u≼_T x. For edges e=(u,v) and f=(a,b) in
T we put e≼_T f if and only if v ≼_T b. In the latter case,
the edges e and f are called comparable.
For a non-empty subset of leaves A⊆ L, we define lca_T(A), or the
least common ancestor of A, to be the unique ≼_T-minimal vertex
of T that is an ancestor of every vertex in A. In case A={x,y}, we put
lca_T(x,y):=lca_T({x,y}) and if A={x,y,z}, we put
lca_T(x,y,z):=lca_T({x,y,z}). For later reference, note that for all x∈
V it holds that x=lca_T(L_T(x)). We will also make frequent use of the fact that for two
non-empty vertex sets A,B of a tree it always holds that lca(A∪ B) =
lca(lca(A),lca(B)).
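For concreteness, lca_T can be computed by intersecting root paths; the following Python sketch (our own representation of a rooted tree as a child-to-parent map, not notation from this paper) illustrates the definition:

def lca_T(parent, root, A):
    # returns the unique <=_T-minimal common ancestor of the vertex set A
    def path_to_root(v):
        path = [v]
        while path[-1] != root:
            path.append(parent[path[-1]])
        return path
    common = set(path_to_root(next(iter(A))))
    for v in A:
        common &= set(path_to_root(v))
    # the deepest common ancestor is the <=_T-minimal one
    return max(common, key=lambda u: len(path_to_root(u)))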
A phylogenetic tree T (on L) is a rooted tree T=(V,E) (on L) such
that no interior vertex v∈ V^0 has degree two, except possibly the root
ρ_T. If L corresponds to a set of genes
or species , we call a phylogenetic tree on L gene tree
and species tree, respectively. The restriction T|_L' of of a
phylogenetic tree T to L'⊆ L is the rooted tree with leaf set L'
obtained from T by first forming the minimal spanning tree in T with leaf
set L' and then by suppressing all vertices of degree two with the exception
of ρ_T if ρ_T is a vertex of that tree.
Rooted triples are phylogenetic trees on three leaves with precisely two
interior vertices. They constitute an important concept in the context of
supertree reconstruction <cit.> and will
also play a major role here. A rooted tree T on L displays a triple
xy|z if x,y,z∈ L and the path from x to y does not intersect the
path from z to the root ρ_T; thus, lca_T(x,y)≺_T
lca_T(x,y,z). We denote by R(T) the set of all triples that are
displayed by the rooted tree T.
A set R of triples is consistent if there is a rooted tree T on L_R=
∪_r∈ R L_r(ρ_r) such that R⊆R(T) and thus, T
displays each triple in R. Not all sets of triples are consistent of
course. Nevertheless, given a triple set R there is a polynomial-time
algorithm, referred to in <cit.> as , that
either constructs a phylogenetic tree T that displays R or that recognizes
that R is not consistent <cit.>.
The runtime of BUILD is O(|L_R||R|) <cit.>.
Further practical implementations and improvements have been
discussed in <cit.>.
We will consider rooted trees
T=(V,E) from which particular edges are removed. Let ⊆ E and
consider the forest (V,E∖). We can preserve the
order ≼_T for all vertices within one connected component of and
define ≼_ as follows: x≼_y iff x≼_Ty and
x,y are in same connected component of . Since each connected component
T' of is a tree, the ordering ≼_ also implies a root
ρ_T' for each T', that is, x≼_ρ_T' for all x∈
V(T'). If L() is the leaf set of , we define L_(x) = {y∈
L() | y≺_ x} as the set of leaves in that are reachable
from x. Hence, all y∈ L_(x) must be contained in the same connected
component of . We say that the forest displays a triple r, if r
is displayed by one of its connected components. Moreover, R() denotes
the set of all triples that are displayed by the forest .
§ BIOLOGICALLY FEASIBLE AND OBSERVABLE GENE TREES
A gene tree arises through a series of events (speciation, duplication, HGT,
and gene loss) along a species tree. In a “true history” the gene tree
T̃ = (Ṽ,Ẽ) on a set of genes 𝔾̃ is equipped with an
event-labeling map t:Ṽ∪ Ẽ→ I∪{0,1} with
I={𝔰,𝔡,𝔱,⊙,x} that assigns to each
vertex v of T̃ a value t(v)∈ I indicating whether v is
a speciation event (𝔰), a duplication event (𝔡), an HGT event
(𝔱), an extant leaf (⊙) or a loss event (x).
In addition, to each edge e a value t(e)∈{0,1} is added that
indicates whether e is a transfer edge (1) or not (0).
Note, in the figures we use the symbols ∙, □ and
△ for 𝔰, 𝔡 and 𝔱, respectively.
Hence, e=(x,y) and t(e) =1 iff t(x)=𝔱 and the genetic material is
transferred from the species containing x to the species containing y.
We remark that the restriction t_|Ṽ of t to the vertex set Ṽ was introduced
as a “symbolic dating map” in <cit.> and that there is a close
relationship to so-called cographs <cit.>.
Let 𝔾⊆𝔾̃ be the set of all extant genes in T̃.
Hence, there is a map σ:𝔾→𝕊 that assigns to each extant gene the extant species in which it resides.
We assume that the gene tree and its event labels are inferred from
(sequence) data, i.e., T is restricted to those labeled trees that can be
constructed at least in principle from observable data.
Gene losses eradicate the entire information on parts of the history and thus,
cannot directly be observed from extant sequences.
Hence, in our setting the (observable) gene tree T=(V,E) is the restriction T̃_|𝔾 of the true gene tree to
the set of extant genes, see Figure <ref>.
Since all leaves of T are extant genes in 𝔾 we do not need to specially
label the leaves, and thus simplify the
event-labeling map to t:V^0∪ E→ I∪{0,1} by assigning only to the
interior vertices an event in I={𝔰,𝔡,𝔱}.
We assume here that all non-transfer edges transmit the genetic material vertically,
that is, from an ancestral species to its descendants.
We write (T;t,σ) for the tree T=(V,E) with event-labeling t
and corresponding map σ. The set ℰ = {e∈ E | t(e)=1}
will always denote the set of transfer edges in (T;t,σ).
Additionally, we consider the gene tree (T;t,σ) from which the transfer edges have been removed, resulting
in the forest T_ℰ = (V, E∖ℰ), in which we preserve the event-labeling t,
that is, we use the restriction t_|V on T_ℰ.
We call a gene tree (T;t,σ) on 𝔾 biologically feasible
if there is a true scenario such that T = T̃_|𝔾, that is,
there is a true history that can explain (T;t,σ).
By way of example, the gene tree in Figure <ref>(right)
is biologically feasible.
However, so far it is unknown whether there are gene trees (T;t,σ)
that are not biologically feasible.
Answering the latter might be a hard task, as many HGT or duplication vertices
followed by losses can be inserted into T, which may result in a putative
true history that explains the event-labeled gene tree.
Following Nøjgaard et al. <cit.>, we additionally restrict the set of observable gene trees (T;t,σ)
to those gene trees that satisfy the following observability axioms:
(O1) Every internal vertex v has degree at least 3, except
possibly the root which has degree at least 2.
(O2) Every HGT node has at least one transfer edge, t(e)=1, and at
least one non-transfer edge, t(e)=0;
(O3) (a)
If x∈ V is a speciation vertex,
then there are distinct children v,
w of x in T with
σ(L_T_ℰ(v)) ∩ σ(L_T_ℰ(w)) = ∅.
(b)
If (x,y) ∈ ℰ, then
σ(L_T_ℰ(x)) ∩ σ(L_T_ℰ(y)) = ∅.
Condition (O1) is justified by the restriction T=T̃_|𝔾
of the true binary gene tree T̃ to the set of extant genes 𝔾,
since T=T̃_|𝔾 is always a phylogenetic tree.
In particular, (O1) ensures that every event leaves a historical trace in the
sense that there are at least two children that have survived in at least
two of its subtrees.
Condition (O2) ensures that for
an HGT event a historical trace remains of both the transferred and the
non-transferred copy.
Condition (O3.a) is a consequence of (O1), (O2) and a stronger condition (O3.a') claimed in <cit.>:
If x is a speciation vertex, then there
are at least two distinct children v,w of x such that the species V
and W that contain v and w, resp., are incomparable in S.
Note, a speciation vertex x cannot be observed from data
if it does not “separate” lineages, that is, there are two leaf
descendants of distinct children of x that are in distinct
species. Condition (O3.a') is even weaker and
ensures that any “observable” speciation vertex x separates at
least locally two lineages. As a result of (O3.a') one can obtain
(O3.a) <cit.>.
Intuitively, (O3.a) is satisfied since within a connected component of T_ℰ no
genetic material is exchanged between non-comparable nodes. Thus, a gene
separated in a speciation event necessarily ends up in distinct species in
the absence of the transfer edges.
Condition (O3.b) is a consequence of (O1), (O2) and a stronger condition (O3.b') claimed in <cit.>:
If (v,w) is a transfer edge in T, then t(v)=𝔱 and the
species V and W that contain v and w, resp., are
incomparable in S.
Note, if (v,w)∈ℰ, then v signifies the transfer event
itself but w refers to the next (visible) event in the gene tree
T. In a “true
history” v is contained in a species V that transmits its genetic
material (maybe along a path of transfers) to a contemporary species Z
that is an ancestor of the species W containing w. In order to have
evidence that this transfer happened, Condition (O3.b') is used and
as a result one obtains (O3.b).
The intuition behind (O3.b) is as follows:
Observe that T_ℰ(x) and T_ℰ(y) are subtrees of distinct connected
components of T_ℰ whenever (x,y) ∈ ℰ.
Since HGT amounts to the transfer of genetic material
across distinct species, the genes x and y are in distinct
species, cf. (O3.b). However, T_ℰ does not contain transfer edges and thus, there is no
genetic material transferred across distinct species between distinct
connected components in T_ℰ. We refer to <cit.> for further details.
In what follows, we only consider gene trees (T;t,σ)
that satisfy (O1), (O2) and (O3).
We simplify the notation a bit and write σ_T_ℰ(u) := σ(L_T_ℰ(u)).
Based on Axiom (O2) the following results was established in <cit.>.
Let (T;t,σ) be an event-labeled gene tree.
Let 𝒯_1, …, 𝒯_k be
the connected components of T_ℰ with roots ρ_1, …, ρ_k,
respectively.
Then, {L_T_ℰ(ρ_1), …, L_T_ℰ(ρ_k)} forms a partition of 𝔾.
Lemma <ref> particularly implies that
σ_T_ℰ(x) ≠ ∅ for all x∈ V(T). Note,
T_ℰ might contain interior vertices (distinct from the root)
that have degree two. Nevertheless, for each x≼_T_ℰ y in
T_ℰ we have x≼_T y in T. Hence, partial information (that in
particular is “undisturbed” by transfer edges) on the partial ordering of the
vertices in T can be inferred from T_ℰ.
§ RECONCILIATION MAP
Before we define a reconciliation map that “embeds” a
given gene tree into a given species tree we need a slight modification
of the species tree.
In order to account for duplication events
that occurred before the first speciation event, we need to add an extra vertex
and an extra edge “above” the last common ancestor of all species: hence, we
add an additional vertex to W (that is now the new root ρ_S of S) and the
additional edge (ρ_S, lca_S(𝕊))∈ F. Note that strictly speaking S is
not a phylogenetic tree anymore. In case there is no danger of confusion, we
will from now on refer to a phylogenetic tree on 𝕊 with this extra edge and
vertex added as a species tree on 𝕊.
Suppose that 𝕊 is a set of species, S=(W,F) is a phylogenetic tree on
𝕊, T=(V,E) is a gene tree with leaf set 𝔾 and that σ:𝔾→𝕊 and t:V^0∪ E→{𝔰,𝔡,𝔱}∪{0,1} are the
maps described above. Then we say that S is a species tree for
(T;t,σ) if there is a map μ:V→ W∪ F such that, for all x∈
V:
(M1) Leaf Constraint. If x∈𝔾 then μ(x)=σ(x).
(M2) Event Constraint.
(i) If t(x)=𝔰, then
μ(x) = lca_S(σ_T_ℰ(x)).
(ii) If t(x) ∈{𝔡, 𝔱}, then μ(x)∈ F.
(iii) If t(x)=𝔱 and (x,y)∈ℰ,
then μ(x) and μ(y) are incomparable in S.
(M3) Ancestor Constraint.
Let x,y∈ V with x≺_T_ℰ y.
Note, the latter implies that the path connecting x and y in T
does not contain transfer edges.
We distinguish two cases:
(i) If t(x),t(y)∈{𝔡, 𝔱}, then μ(x)≼_S μ(y),
(ii) otherwise, i.e., at least one of t(x) and t(y) is a speciation 𝔰,
μ(x)≺_S μ(y).
We call μ the reconciliation map from (T;t,σ) to S.
Definition <ref> is a natural generalization of the map defined in
<cit.>, that is, in the absence of horizontal gene transfer, Condition
(M2.iii) vanishes and thus, the proposed reconciliation map precisely coincides
with the one given in <cit.>.
In case that the event-labeling of T is unknown, but a species tree S is
given, the authors in <cit.> gave an axiom set, called
DTL-scenario, to reconcile T with S. This reconciliation is then used to
infer the event-labeling t of T.
The “usual” DTL axioms explicitly refer to binary,
fully resolved gene and species trees. We therefore use a different axiom set
that is, nevertheless, equivalent to DTL-scenarios
in case the considered gene trees are binary <cit.>.
Condition (M1) ensures that each leaf of T, i.e., an extant gene in , is
mapped to the species in which it resides.
Condition (M2.i) and (M2.ii) ensure
that each vertex of T is either mapped to a vertex or an edge in S such that
a vertex of T is mapped to an interior vertex of S if and only if it is a
speciation vertex. We will discuss (M2.i) in further detail below.
Condition (M2.iii) maps the vertices of a transfer edge
in a way that they are incomparable in the species tree and is used
to satisfy axiom (O3).
Condition (M3) refers only to the connected components of T_ℰ and
ensures that the ancestor order ≼_T of T is preserved along all paths
that do not contain transfer edges.
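For concrete trees, conditions (M1)-(M3) are directly machine-checkable. The following Python sketch verifies a candidate map μ; all tree structure is passed in as precomputed helpers, which is a framing of ours and not part of the formalism, and (M2.ii) is assumed to hold by the typing of μ:

def is_reconciliation(V, leaves, t, sigma, mu, transfer_edges,
                      leq_S, leq_F, lca_S, sigma_TE):
    # leq_S(a, b): a <=_S b, extended to vertices and edges of S
    # leq_F(x, y): x <=_{T_E} y within the transfer-free forest T_E
    # lca_S(A):    last common ancestor in S of a species set A
    # sigma_TE(x): the species set sigma(L_{T_E}(x))
    for x in V:
        if x in leaves and mu[x] != sigma[x]:
            return False                                    # (M1)
        if t.get(x) == "spec" and mu[x] != lca_S(sigma_TE(x)):
            return False                                    # (M2.i)
    for (x, y) in transfer_edges:                           # (M2.iii)
        if leq_S(mu[x], mu[y]) or leq_S(mu[y], mu[x]):
            return False
    for x in V:                                             # (M3)
        for y in V:
            if x == y or not leq_F(x, y):
                continue
            if t.get(x) in ("dup", "hgt") and t.get(y) in ("dup", "hgt"):
                if not leq_S(mu[x], mu[y]):
                    return False                            # (M3.i)
            elif not (leq_S(mu[x], mu[y]) and mu[x] != mu[y]):
                return False                                # (M3.ii)
    return True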
It needs to be discussed why one should map a speciation vertex x to
lca_S(σ_T_ℰ(x)) as required in (M2.i). The next lemma shows that one
can always put μ(x) = lca_S(σ_T_ℰ(x)).
Let μ be a reconciliation map from (T;t,σ) to S
that satisfies (M1) and (M3); then
μ(u)≽_S lca_S(σ_T_ℰ(u)) for any u∈ V(T).
Condition (M2.i) implies in particular the weaker property “(M2.i') if
t(x)=𝔰 then μ(x)∈ W”. In the light of
Lemma <ref>, μ(x)=lca_S(σ_T_ℰ(x)) is the lowest possible
choice for the image of a speciation vertex.
Instead of considering the
possibly exponentially many reconciliation maps for which
μ(x)≻_S lca_S(σ_T_ℰ(x)) is allowed for speciation vertices x,
we restrict our attention to those that satisfy (M2.i) only.
In particular, as we shall see later, there is a neat characterization
of maps that satisfy (M2.i) that does, however,
not work for maps with “relaxed” (M2.i).
Moreover, we have the following result, which is a mild
generalization of <cit.>.
Let μ be a reconciliation map from a gene tree (T;t,σ) to S.
* If v,w∈ V(T) are in the same connected component of T_ℰ, then
μ(lca_T_ℰ(v,w)) ≽_S lca_S(μ(v),μ(w)).
* If (T;t,σ) is a binary gene tree and x a speciation vertex with children v,w in T,
then μ(v) and μ(w) are incomparable in S.
Let v,w∈ V(T) be in the same connected component of T_ℰ.
Assume that v and w are comparable in T_ℰ and
that w.l.o.g. v≻_T_ℰ w. Condition (M3) implies that
μ(v)≽_S μ(w). Hence,
v = lca_T_ℰ(v,w) and
μ(v) = lca_S(μ(v),μ(w)) and we are done.
Now assume that v and w are incomparable in T_ℰ.
Consider the unique path P connecting w with v in T_ℰ. This path P
is uniquely subdivided into a path P' and a path P” from
lca_T_ℰ(v,w) to v and w, respectively. Condition (M3) implies that
the images of the vertices of P' and P” under μ, resp., are ordered
in S with regards to ≼_S and hence, are contained in the intervals
Q' and Q” that connect μ(lca_T_ℰ(v,w)) with μ(v) and
μ(w), respectively. In particular, μ(lca_T_ℰ(v,w)) is the largest
element (w.r.t. ≼_S) in the union Q'∪ Q”, which contains the
unique path from μ(v) to μ(w) and hence also lca_S(μ(v),μ(w)).
Item <ref> was already proven in <cit.>.
Assume now that there is a reconciliation map μ from (T;t,σ) to S.
From a biological point of view, however, it is necessary to reconcile a gene
tree with a species tree such that genes do not “travel through time”, a
particular issue that must be considered whenever (T;t,σ) contains HGT,
see Figure <ref> for an example.
The map τ_T: V(T) →ℝ is a time map for the
rooted tree T if x≺_T y implies τ_T(x)>τ_T(y) for all
x,y∈ V(T).
A reconciliation map μ from
(T;t,σ) to S is time-consistent if there are time maps
τ_T for T and τ_S for S such that the
following conditions are satisfied for all u∈ V(T):
(T1) If t(u) ∈{𝔰, ⊙}, then
τ_T(u) = τ_S(μ(u)).
(T2) If t(u)∈{𝔡,𝔱} and, thus,
μ(u)=(x,y)∈ E(S), then
τ_S(y)>τ_T(u)>τ_S(x).
Condition (T1) is used to identify the time-points of speciation vertices
and leaves u in the gene tree with the time-points of their respective
images μ(u) in the species tree. Moreover,
duplication or HGT vertices u are mapped to edges μ(u)=(x,y) in S
and the time point of u must thus lie between the time
points of x and y which is ensured by Condition (T2).
Nøjgaard et al. <cit.> designed an O(|V(T)|log(|V(S)|))-time algorithm
to check whether a given reconciliation map μ is time-consistent, and an algorithm with the same
time complexity for the construction of a time-consistent reconciliation
map, provided one exists.
Clearly, a necessary condition for the existence of time-consistent reconciliation maps from
(T;t,σ) to S is the existence of some reconciliation map
(T;t,σ) to S.
In the next section, we first characterize the existence of reconciliation maps
and discuss open time-consistency problems.
§ FROM GENE TREES TO SPECIES TREES
Since a gene tree T is uniquely determined by its induced triple set
ℛ(T), it is reasonable to expect that a lot of information on the
species tree(s) for (T;t, σ) is contained in the images of the triples in
ℛ(T), (or more precisely their leaves) under σ. However, not
all triples in ℛ(T) are informative, see Figure <ref> for
an illustrative example. In the absence of HGT, it has already been shown by
Hernandez-Rosales et al. <cit.> that the informative triples r∈ℛ(T) are precisely those that are rooted at a speciation event and
where the genes in r reside in three distinct species. However, in the
presence of HGT we need to further subdivide the informative triples as follows.
Let (T;t,σ) be a given event-labeled gene tree with respective
set of transfer edges ℰ = {e_1,…,e_h} and T_ℰ as defined above.
We define
ℛ'(T_ℰ) = {ab|c∈R(T_ℰ) : σ(a),σ(b),σ(c)
are pairwise distinct}
as the subset of all triples displayed in T_ℰ such that the leaves are from pairwise distinct species.
Let
R_0(T_ℰ) := {ab|c∈ℛ'(T_ℰ) : t(lca_T_ℰ(a,b,c)) = 𝔰}
be the set of triples in ℛ'(T_ℰ) that are rooted at a speciation event.
For each e_i=(x,y) ∈ ℰ define
R_i(T_ℰ) := {ab|c : σ(a),σ(b),σ(c) are pairwise distinct
and either a,b∈ L_T_ℰ(x), c∈ L_T_ℰ(y)
or c∈ L_T_ℰ(x), a,b∈ L_T_ℰ(y) }.
Hence, R_i(T_ℰ) contains a triple ab|c for every
a,b∈ L_T_ℰ(x), c∈ L_T_ℰ(y) that reside in pairwise distinct species. Analogously, for any a,b∈ L_T_ℰ(y), c∈ L_T_ℰ(x) there is a triple ab|c∈R_i(T_ℰ), if
σ(a),σ(b),σ(c) are pairwise distinct.
The informative triples of T are comprised in the set
ℛ(T;t,σ) = ∪_i=0^h R_i(T_ℰ).
Finally, we define the informative species triple set
𝒮(T;t,σ) := {σ(a)σ(b)|σ(c) : ab|c∈ℛ(T;t,σ) }
that can be inferred from the informative triples of (T;t,σ).
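A brute-force extraction of 𝒮(T;t,σ) is immediate from these definitions. The Python sketch below (our own framing: components of T_ℰ expose their leaf sets, and lca/leaves_below are assumed helpers that operate within one component) enumerates R_0 and all R_i and applies σ:

from itertools import combinations, product

def informative_species_triples(components, t, transfer_edges,
                                sigma, leaves_below, lca):
    # species triple AB|C is encoded as (frozenset({A, B}), C)
    S = set()
    def add(a, b, c):
        A, B, C = sigma[a], sigma[b], sigma[c]
        if len({A, B, C}) == 3:          # pairwise distinct species only
            S.add((frozenset({A, B}), C))
    for comp in components:              # R_0: rooted at a speciation
        for a, b, c in combinations(comp.leaves, 3):
            r = lca({a, b, c})
            if t.get(r) != "spec":
                continue
            for (u, v), w in (((a, b), c), ((a, c), b), ((b, c), a)):
                if lca({u, v}) != r:     # then uv|w is displayed
                    add(u, v, w)
    for (x, y) in transfer_edges:        # R_i: across the edge e_i=(x,y)
        Lx, Ly = leaves_below(x), leaves_below(y)
        for pair_side, single_side in ((Lx, Ly), (Ly, Lx)):
            for (a, b), c in product(combinations(pair_side, 2), single_side):
                add(a, b, c)
    return S

This cubic enumeration is meant for illustration only; it makes no attempt at the faster bookkeeping a production implementation would use.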
§.§ Binary Gene Trees
In this section, we will be concerned
only with binary, i.e., “fully resolved” gene trees,
if not stated differently.
This is justified by
the fact that, generically, a speciation or duplication event
instantaneously generates exactly two offspring. However, we will also allow
non-binary species trees to model incomplete knowledge of the exact
species phylogeny. Non-binary gene trees are discussed in Section <ref>.
Hernandez et al. <cit.> established the following characterization for the HGT-free case.
For a given gene tree (T;t, σ) on 𝔾 that does not contain
HGT and 𝔖 := {σ(a)σ(b)|σ(c) : ab|c∈ℛ_0(T)}, the following statement is satisfied:
There is a species tree on 𝕊 = σ(𝔾) for (T;t, σ)
if and only if the triple set 𝔖 is consistent.
We emphasize that the results established in <cit.> are only valid
for binary gene trees, although this was not explicitly stated.
For an example that shows that Theorem <ref>
does is no always satisfied for non-binary gene trees see
Figure <ref>.
Lafond and El-Mabrouk <cit.> established
a similar result as in Theorem <ref> by using
only species triples that can be obtained directly
from a given orthology/paralogy-relation. However, they require a stronger
version of axiom (O3.a), that is, the images of all children of a
speciation vertex must be pairwisely incomparable in the species tree.
We, too, will use this restriction in Section <ref>
In what follows, we generalize the latter result and show that consistency of
𝒮(T;t,σ) characterizes whether there is a species tree S for
(T;t,σ) even if (T;t,σ) contains HGT.
If μ is a reconciliation map from a gene tree (T;t,σ) to
a species tree S
and ab|c∈ℛ(T;t,σ), then
σ(a)σ(b)|σ(c) is displayed in S.
Recall that 𝔾 is the leaf set of T=(V,E) and, by Lemma <ref>,
of T_ℰ.
Let {a,b,c} be a three-element subset of 𝔾 and assume w.l.o.g. ab|c∈ℛ(T;t,σ).
First assume that ab|c∈R_0, that is, ab|c is displayed
in T_ℰ and t(lca_T_ℰ(a,b,c)) = 𝔰. For simplicity set
u=lca_T_ℰ(a,b,c) and let x,y be its children in T_ℰ.
Since ab|c∈R_0, we can assume that w.l.o.g.
a,b∈ L_T_ℰ(x) and c∈ L_T_ℰ(y).
Hence, x≽_T_ℰ lca_T_ℰ(a,b) and y≽_T_ℰ c.
Condition (M3) implies that
μ(y)≽_S μ(c) = σ(c). Moreover, Condition (M3) and Lemma
<ref>(<ref>) imply that μ(x)≽_S
μ(lca_T_ℰ(a,b)) ≽_S lca_S(μ(a),μ(b)) =
lca_S(σ(a),σ(b)).
Since t(u)=𝔰, we can apply Lemma <ref>(<ref>)
and conclude that μ(x) and μ(y) are incomparable in S. Hence,
σ(c) and lca_S(σ(a),σ(b)) are incomparable. Thus, the
triple σ(a)σ(b)|σ(c) must be displayed in S.
Now assume that ab|c∈R_i for some transfer edge e_i = (x,y)∈ℰ.
For e_i = (x,y) we either have a,b∈ L_T_ℰ(x) and c∈ L_T_ℰ(y) or c∈
L_T_ℰ(x) and a,b∈ L_T_ℰ(y). W.l.o.g. let
a,b∈ L_T_ℰ(x) and c∈ L_T_ℰ(y). Thus, x≽_T_ℰ lca_T_ℰ(a,b) and y≽_T_ℰ c. Condition (M3) implies that
μ(y)≽_S μ(c) = σ(c). Moreover, Condition (M3) and Lemma
<ref>(<ref>) imply that μ(x)≽_S
μ(lca_T_ℰ(a,b)) ≽_S lca_S(μ(a),μ(b)) =
lca_S(σ(a),σ(b)). Since t(x)=𝔱, we can apply (M2.iii)
and conclude that μ(x) and μ(y) are incomparable in S. Hence,
σ(c) and lca_S(σ(a),σ(b)) are incomparable. Thus, the
triple σ(a)σ(b)|σ(c) must be displayed in S.
Let S=(W,F) be a species tree on 𝕊.
Then there is a reconciliation map μ from a gene tree (T;t,σ) to S
whenever S displays all triples in 𝒮(T;t,σ).
Recall that 𝔾 is the leaf set of T=(V,E) and, by Lemma <ref>, of T_ℰ. In what follows, we write L(u) instead of
the more complicated writing L_T_ℰ(u) and, for consistency and simplicity,
we also often write σ(L(u)) instead of σ_T_ℰ(u).
Put S=(W,F) and 𝒮 =
𝒮(T;t,σ). We first consider the subset U={x∈ V | x∈𝔾 or t(x) = 𝔰}
of V comprising the leaves and speciation
vertices of T.
In what follows we will explicitly construct μ: V → W∪ F
and verify that μ satisfies Conditions (M1), (M2) and (M3).
To this end, we first set for all x∈ U:
(S1) μ(x) = σ(x), if x∈𝔾,
(S2) μ(x)= lca_S(σ(L(x))), if t(x)=𝔰.
Conditions (S1) and (M1), as well as (S2) and (M2.i), are equivalent.
For later reference, we show that lca_S(σ(L(x))) ∈ W^0 =
W∖𝕊 and that there are two leaves a,b∈ L(x) such that
σ(a) ≠ σ(b), whenever t(x)=𝔰.
Condition (O3.a) implies that
x has two children v and
w in T such that σ(L(v)) ∩ σ(L(w)) = ∅. Moreover,
Lemma <ref> implies that both L(v) and L(w) are
non-empty subsets of 𝔾 and hence, neither
σ(L(v))=∅ nor σ(L(w))=∅.
Thus, there are two leaves a, b∈ L(x) such
that σ(a) ≠ σ(b). Hence, lca_S(σ(L(x))) ∈ W^0 =
W∖𝕊.
* Claim 1: For all x,y∈ U with x≺_T_ℰ y we have μ(x)≺_S μ(y).
Note, y must be an interior vertex, since x≺_T_ℰ y. Hence t(y)=𝔰.
If x is a leaf, then μ(x)=σ(x)∈𝕊. As argued above,
μ(y) ∈ W∖𝕊. Since x∈ L(y) and
σ(L(y))≠∅,
we have
σ(x) ∈σ(L(y))⊆𝕊 and thus, μ(x)≺_S μ(y).
Now assume that x is an interior vertex and hence, t(x)=𝔰.
Again, there are leaves a,b ∈ L(x) with
A = σ(a)≠σ(b)=B.
Since t(y)=𝔰, vertex y has two children in T_ℰ. Let y' denote the child of
y with x≼_T_ℰ y'.
Since L(x)⊆ L(y')⊊ L(y),
we have L(y)∖ L(y')≠∅ and, by
Condition (O3.a),
there is a gene c∈ L(y)∖ L(y') ⊆ L(y)∖ L(x) with
σ(c)=C≠ A,B.
By construction, ab|c∈R_0 and hence,
AB|C∈𝒮(T;t,σ).
Hence, lca_S(A,B)≺_S lca_S(A,B,C).
Since this holds for all triples x'x”|z with x',x”∈ L(x)
and z∈ L(y)∖ L(y'),
we can conclude that
μ(x) = lca_S(σ(L(x)))
≺_S
lca_S(σ(L(x))∪σ(L(y) ∖ L(y'))).
Since σ(L(x))∪σ(L(y) ∖ L(y')) ⊆σ(L(y))
we obtain
lca_S(σ(L(x))∪σ(L(y) ∖ L(y')))
≼_S
lca_S(σ(L(y))) = μ(y).
Hence, μ(x)≺_S μ(y).
– End Proof Claim 1 –
We continue to extend μ to the entire set V. To this end, observe first
that if t(x) ∈{𝔡, 𝔱} then we wish to map x on an edge
μ(x) = (u,v) ∈ F such that Lemma <ref>
is satisfied: v≽_S lca_S(σ(L(x))). Such an edge exists for v
= lca_S(σ(L(x))) in S by construction. Every speciation vertex y
with y≻_T_ℰ x therefore necessarily maps on the vertex u or above,
i.e., μ(y) ≽_S u must hold.
Thus, we set:
(S3) μ(x) = (u,lca_S(σ(L(x)))), if t(x)∈{𝔡, 𝔱},
which now makes μ a map from V to W∪ F.
By construction of μ, Conditions (M1), (M2.i), (M2.ii) are satisfied by μ.
* Claim 2:
For all x,y∈ V with x≺_ y, Condition (M3) is satisfied.
If both x and y are speciation vertices, then we can apply the Claim 1
to conclude that μ(x)≺_S μ(y).
If x is a leaf, then we argue similarly as in the proof of Claim 1
to conclude that μ(x)≼_S μ(y).
Now assume that both x and y are interior vertices of T and
at least one vertex of x,y is not a speciation vertex.
Since, x≺_ y we have (x) ⊆(y) and thus, σ((x)) ⊆σ((y)).
We start with the case t(y)= and t(x)∈{, }.
Since t(y)=, vertex y has two children in . Let y' be the child of y with
x≼_ y'.
If σ((x)) contains only one species A, then
μ(x) = (u,A)≺_S u≼_S _S(σ((y))) = μ(y).
If σ((x)) contains at least two species, then there are a,b∈(x) with σ(a)=A≠σ(b)=B
Moreover, since (x)⊆(y')⊊(y),
we have (y)∖(y')≠∅ and, by
Condition (O3.a),
there is a gene c∈(y)∖(y') ⊆(y)∖(x) with
σ(c)=C≠ A,B.
By construction, ab|c∈R_0 and hence
AB|C∈𝒮(T;t,σ). Now we can argue similar as in the proof of the
Claim 1, to see that
μ(x) = (u,_S(σ((x)))) ≺_S u
≼_S
_S(σ((y))) = μ(y).
If t(x)= and t(y)∈{, }, then σ((x))
⊆σ((y)) implies that
μ(x) = _S(σ((x)))≼_S _S(σ((y)))
≺_S(u,_S(σ((y)))) = μ(y).
Finally assume that t(x),t(y)∈{, }. If σ((x))
= σ((y)), then μ(x) = μ(y). Now let σ((x)) ⊊σ((y)) which implies that _S(σ((x)))≼_S
_S(σ((y))). If _S(σ((x))) =
_S(σ((y))), then μ(x) = μ(y). If
_S(σ((x)))≺_S _S(σ((y))),
then
μ(x) =(u,_S(σ((x)))) ≺_S u
≼_S _S(σ((y)))
≺ (u',_S(σ((y))))
=μ(y).
_– End Proof Claim 2 –
It remains to show (M2.iii), that is, if e_i=(x,y) is a transfer edge, then
μ(x) and μ(y) are incomparable in S. Since (x,y) is a transfer
edge, Condition (O3.b) implies that σ(L(x)) ∩σ(L(y)) = ∅.
If σ(L(x))={A} and σ(L(y))={C}, then μ(x) = (u,A)
and μ(y) = (u',C). Since A and C are distinct leaves in S, μ(x)
and μ(y) are incomparable.
Assume that |σ(L(x))|>1. Hence,
there are leaves a,b ∈ L(x) with A = σ(a)≠σ(b)=B
and c∈ L(y) with σ(c)=C≠ A,B. By construction, ab|c∈R_i and hence, AB|C∈𝒮(T;t,σ).
The latter is fulfilled for all triples x'x”|c∈R_i with x',x”∈ L(x),
and, therefore, lca_S(σ(L(x))∪{C}) ≻_S lca_S(σ(L(x))).
Set v=lca_S(σ(L(x))∪{C}).
Thus, there is an edge (v,v') in S
with v'≽_S lca_S(σ(L(x))) and an edge (v,v”) such that v”≽_S C.
Hence, either μ(x) = (v,v') or μ(x) = (u,lca_S(σ(L(x)))) with v'≽_S u.
Assume that σ(L(y)) contains only the species C
and thus, μ(y) = (u',C).
Since v”≽_S C, we have either
v” = C, which implies that μ(y) = (v,v”), or
v”≻_S C, which implies that μ(y) = (u',C) and v”≽_S u'.
Since the vertices v' and v” are incomparable in S,
so are μ(x) and μ(y).
If |σ(L(y))|>1, then we set v=lca_S(σ(L(x))∪σ(L(y)))
and we can argue analogously as above to conclude that there are edges
(v,v') and (v,v”) in S such that
v'≽_S lca_S(σ(L(x))) and
v”≽_S lca_S(σ(L(y))).
Again,
since v' and v” are incomparable in S and by construction of μ,
μ(x) and μ(y) are incomparable.
Lemma <ref> implies that consistency of the triple set
𝒮(T; t,σ) is necessary for the existence of a reconciliation
map from (T; t,σ) to a species tree on 𝕊. Lemma
<ref>, on the other hand, establishes that this is also
sufficient. Thus, we have
There is a species tree on 𝕊 = σ(𝔾) for a gene tree
(T;t, σ) on 𝔾 if
and only if the triple set 𝒮(T; t,σ) is consistent.
§.§ Non-Binary Gene Trees
Now, we consider arbitrary, possibly non-binary gene trees that might be used
to model incomplete knowledge of the exact genes phylogeny.
Consider the “true” history of a gene tree that evolves along the (tube-like)
species tree in Figure <ref> (left).
The observable gene tree (T;t,σ) is shown in
<ref> (center-left). Since
ab|c,b'c'|a'∈R_0,
we obtain a set of
species triples 𝒮(T;t,σ) that contains the pair of
inconsistent species triples AB|C, BC|A. Thus,
there is no reconciliation map for
(T;t,σ) and any species tree,
although (T;t,σ) is biologically feasible.
Consider now
the “orthology” graph G (shown below the gene trees) that
has 𝔾 as vertex set and in which two genes x,y are connected by an edge if lca(x,y) is a
speciation vertex. Such graphs can be obtained from orthology
inference methods <cit.>
and the corresponding non-binary
gene tree (T';t,σ) (center-right) is constructed from such estimates
(see <cit.> for further
details).
Still, we can see that 𝒮(T';t,σ) contains the two
inconsistent species triples AB|C,BC|A. However, there is
a reconciliation map μ according to
Definition <ref> and a species tree S, as shown in
Figure <ref> (right).
Thus, consistency of 𝒮(T';t,σ) does not characterize whether there
is a valid reconciliation map for non-binary gene trees.
In order to obtain a similar result as in Theorem <ref> for non-binary
gene trees we have to strengthen observability axiom (O3.a) to
(O3.A)
If x is a speciation vertex with children v_1,…,v_k, then
σ_T_ℰ(v_i) ∩ σ_T_ℰ(v_j) =∅ for 1≤ i<j≤ k;
and to add an extra event constraint to Definition <ref>:
(M2.iv) Let v_1,…,v_k be the children of the speciation vertex x.
Then, μ(v_i) and μ(v_j) are incomparable in S, 1≤ i<j≤ k.
We call a reconciliation map that additionally satisfies (M2.iv) a restricted reconciliation map.
Such restricted reconciliation maps
satisfy the condition as required in <cit.> for the HGT-free case.
It can be shown that restricted reconciliation maps imply Condition (O3.A),
however, the converse is not true in general, see Figure <ref>.
Hence, we cannot use the axioms (O1)-(O3) and (O3.A)
to derive Condition (M2.iv) - similar to Lemma <ref>(<ref>)
- and thus, need to claim it.
It is now straightforward to obtain the next result.
If μ is a restricted reconciliation map
from (T;t,σ) to S
and ab|c∈ℛ(T;t,σ), then
σ(a)σ(b)|σ(c) is displayed in S.
Let {a,b,c} be a three-element subset of 𝔾 and assume w.l.o.g. ab|c∈ℛ(T;t,σ).
First assume that ab|c∈R_0, that is, ab|c is displayed
in T_ℰ and t(lca_T_ℰ(a,b,c)) = 𝔰. For simplicity set
u=lca_T_ℰ(a,b,c). Hence, there are two children x,y of u in
T_ℰ such that w.l.o.g. a,b∈ L_T_ℰ(x) and c∈ L_T_ℰ(y).
Now we can argue analogously as in the proof of Lemma <ref>
after replacing “we can apply Lemma <ref>(<ref>)”
by “we can apply Condition (M2.iv)”.
The proof for ab|c∈R_i remains the same as in Lemma <ref>.
Let S be a species tree on .
Then, there is a restricted reconciliation map μ from a gene tree (T;t,σ)
that satisfies also (O3.A) to S
whenever S displays all triples in 𝒮(T;t,σ).
The proof is similar to the proof of Lemma <ref>.
However, note that a speciation vertex might have more than two children.
In these cases, one simply has to apply
Axiom (O3.A) instead of Axiom (O3.a)
to conclude that (M1),(M2.i)-(M2.iii), (M3) are
satisfied.
It remains to show that (M2.iv) is satisfied.
To this end, let x be a speciation vertex in T and the set of its children
C(x) = {v_1,…,v_k}.
By axiom (O3.A) we have (v_i) ∩(v_j) =∅ for all i≠ j.
Consider the following partition of C(x) into C_1 and C_2 that contain
all vertices v_i with |(v_i)|=1 and |(v_i)|>1, respectively.
By construction of μ, for all vertices v_i,v_j∈ C_1, i≠ j
we have that μ(v_i)∈{σ(v_i), (u,σ(v_i)) } and
μ(v_j)∈{σ(v_j), (u',σ(v_j)) } are incomparable.
Now let v_i∈ C_1 and v_j∈ C_2. Thus there are A,B∈(v_j)
and σ(v_i)=C. Hence, AB|C∈𝒮(T;t,σ).
Thus, _S(A,B) must be incomparable to C in S.
Since the latter is satisfied for all species in (v_j),
_S( (v_j)) and C must be incomparable in S.
Again, by construction of μ, we see that
μ(v_i)∈{C, (u,C) } and
μ(v_j)∈{_S( (v_j)), (u',_S( (v_j))) }
are incomparable in S.
Analogously, if v_i,v_j∈ C_2, i≠ j, then
all triples AB|C and CD|A for all A,B∈(v_i)
and C,D∈(v_j) are contained in 𝒮(T;t,σ)
and thus, displayed by S.
Hence, _S( (v_i)) and _S( (v_j))
must be incomparable in S. Again, by construction of μ,
we obtain that μ(v_i)∈{_S( (v_i)), (u,_S( (v_i))) } and
μ(v_j)∈{_S( (v_j)), (u',_S( (v_j))) }
are incomparable in S. Therefore, (M2.iv) is satisfied.
As in the binary case, we obtain
There is a restricted reconciliation map for
a gene tree (T;t, σ) on that satisfies also (O3.A) and some species tree on
= σ() if
and only if the triple set 𝒮(T; t,σ ) is consistent.
§.§ Algorithm
The proofs of Lemma <ref> and <ref> are constructive and we summarize the
latter findings in Algorithm <ref>, see Figure <ref> for an
illustrative example.
Algorithm <ref> returns a species tree S for a binary gene tree (T;t,σ)
and a reconciliation map μ in polynomial time, if one exists
and otherwise, returns that there is no species tree for (T;t,σ).
If (T;t,σ) is non-binary but satisfies Condition (O3.A),
then Algorithm <ref> returns a species tree S for (T;t,σ)
and a restricted reconciliation map μ in polynomial time, if one exists
and otherwise, returns that there is no species tree for (T;t,σ).
Theorem <ref> and the construction of μ in the proof of Lemma
<ref> and <ref> imply the correctness of the algorithm.
For the runtime observe that all tasks, computing 𝒮(T;t,σ), using the
algorithm <cit.> and the construction of the map
μ <cit.> can be done in polynomial time.
In our examples, the species tree that displays 𝒮(T; t,σ ) is
computed using the O(|L_R||R|)-time algorithm , which either
constructs a tree S that displays all triples in a given triple set R or
recognizes that R is not consistent. However, any other supertree method
is conceivable as well; see <cit.> for an overview. The tree T returned
by is least resolved, i.e., if T' is obtained from T by
contracting an edge, then T' does not display R anymore. However, the trees
generated by do not necessarily have the minimum number of
internal vertices, i.e., the trees may resolve multifurcations in an arbitrary
way that is not implied by any of the triples in R. Thus, depending on R,
not all trees consistent with R can be obtained from .
Nevertheless, in <cit.> the following result was
established.
Let R be a consistent triple set.
If the tree T obtained with applied on R is binary,
then T is a unique tree on L_R that displays R, i.e., for any tree T' on L_R
that displays R we have T'≃ T.
So-far, we have shown that event-labeled gene trees (T;t,σ) for
which a species tree exists can be characterized by a set of species triples
S(T;t,σ) that is easily constructed from a subset of triples
displayed in T. From a biological point of view, however, it is necessary to
reconcile a gene tree with a species tree such that genes do not “travel
through time”. In <cit.>, the authors gave algorithms
to check whether a given
reconciliation map μ is time-consistent, and an algorithm with the same
time complexity for the construction of a time-consistent reconciliation
maps, provided one exists.
These algorithms require as input an event-labeled gene tree and species tree.
Hence, a necessary condition for the existence of time-consistent reconciliation maps is given by
consistency of the species triple S(T;t,σ) derived from (T;t,σ).
However, there are
possibly exponentially many species trees that are consistent with S(T;t,σ) for which some
of them have a time-consistent reconciliation map with T and some do not, see Figure <ref>. The
question therefore arises as to whether there is at least one species tree S with a time-consistent
map, and if so, how to construct S.
§ LIMITATIONS OF INFORMATIVE TRIPLES AND RECONCILIATION MAPS
In Section <ref> we have already discussed that consistency of
S(T;t,σ) cannot be used to characterize whether there
is a reconciliation map that doesn't need to satisfy (M2.iv)
for some non-binary gene tree, see Figure <ref>.
In particular, Figure <ref>
shows a biologically feasible binary gene tree (center-left)
for which, however, neither a reconciliation map nor a restricted reconciliation map
exists.
Therefore, reconciliation maps provide, unsurprisingly, only a sufficient but not necessary condition
to determine whether gene trees are biologically feasible.
A further simple example is given in Figure <ref>.
Consider the “true” history of the gene tree that evolves along the (tube-like)
species tree in Figure <ref> (left). The set of extant genes
comprises a,a',b,b',c and c' and σ maps each gene in to
the species (capitals below the genes) A,B,C∈. For the observable gene
tree (T;t,σ) in Figure <ref> (center) we observe
that R_0 = {ab|c,b'c'|a'} and thus,
one obtains
the inconsistent species triples S(T;t,σ) = {AB|C,BC|A}.
Hence, Theorem <ref> implies that there is no species tree for
(T;t,σ). Note, (T;t,σ) satisfies also Condition (O3.A).
Hence, Theorem <ref> implies that no restricted reconciliation
map to any species tree exists for (T;t,σ).
Nevertheless, (T;t,σ) is biologically feasible as there is a
true scenario that explains the gene tree.
If Condition (M2.i) were relaxed, that is, if we allowed for speciation
vertices u with μ(u) ≽_S _S((u)), then there would be a relaxed
reconciliation map μ from (T;t,σ) to the species tree S shown in Figure
<ref> (right). Hence, consistency of S(T;t,σ)
does not characterize the existence of relaxed reconciliation maps.
§ CONCLUSION AND OPEN PROBLEMS
Event-labeled gene trees can be obtained by combining the reconstruction of
gene phylogenies with methods for orthology and HGT detection. We showed that
event-labeled gene trees (T;t,σ) for which a species tree exists can be
characterized by a set of species triples S(T;t,σ) that is easily
constructed from a subset of triples displayed in T.
We have shown that biological feasibility of gene trees cannot be explained
in general by reconciliation maps, that is, there are biologically feasible gene trees
for which no reconciliation map to any species tree exists.
Moreover, we showed that consistency of S(T;t,σ) does not characterize
the existence of relaxed reconciliation maps.
We close this contribution by stating some open problems that need to be
solved in future work.
(1) Are all event-labeled gene trees (T;t,σ) biologically feasible?
(2) The results established here are based on informative triples provided by the gene trees.
If it is desired to find “non-restricted” reconciliation maps (those for which Condition (M2.iv) is not required)
for non-binary gene trees
the following question needs to be answered:
How much information of a non-restricted reconciliation map and a species tree
is already contained in non-binary event-labeled gene trees (T;t,σ)?
The latter might also be generalized by considering relaxed reconciliation maps (those for which
μ(x)≻_S _S((x)) for speciation vertices x or any other relaxation is allowed).
(3) Our results depend on three axioms (O1)-(O3) on the event-labeled
gene trees that are motivated by the fact that event-labels can
be assigned to internal vertices of gene trees only if there is
observable information on the event. The question which
event-labeled gene trees are actually observable given an
arbitrary, true evolutionary scenario deserves further
investigation in future work, since a formal theory of
observability is still missing.
(4) The definition of reconciliation maps is
by no means consistent in the literature.
For the results established here we considered three types of
reconciliation maps, that is, the “usual” map as in Def. <ref>
(as used in e.g. <cit.>), a restricted version (as used in e.g. <cit.>)
and a relaxed version.
However, a unified framework for reconciliation maps is desirable and might
be linked with a formal theory of observability.
(5) “Satisfiable” event-relations R_1,…,R_k are those for which there is a
representing gene tree (T;t,σ)
such that (x,y)∈ R_i if and only if t((x,y))=i. They are equivalent to
so-called unp 2-structures <cit.>.
In particular, if event-relations consist of orthologs, paralogs and xenologs only, then
satisfiable event-relations are equivalent to directed cographs <cit.>.
Satisfiable event-relations R_1,…,R_k are “S-consistent” if there is a species tree S
for the representing gene tree (T;t,σ) <cit.>.
However, given the unavoidable noise in the input data and possible
uncertainty about the true relationship between two genes,
one might ask to what extent the work of Lafond et al. <cit.>
can be generalized to determine whether given “partial” event-relations
are S-consistent or not.
It is conceivable that subsets of the informative
species triples S(T;t,σ) that can be computed directly from such
event-relations offer an avenue to the latter problem.
Characterization and complexity results for
“partial” event-relations to be satisfiable have been addressed in <cit.>.
(6) In order to determine whether there is a time-consistent reconciliation map
for some given event-labeled gene tree and species
trees fast algorithms have been developed <cit.>.
However, these algorithms require as input a gene tree (T;t,σ)
and a species tree S.
A necessary condition to a have time-consistent (restricted) reconciliation map
to some species tree is given by the consistency of the species triples 𝒮(T;t,σ).
However, in general there might be
exponentially many species trees that display 𝒮(T;t,σ) for which
some of them may have a time-consistent reconciliation map with (T;t,σ) and some might
not (see Figure <ref> or <cit.>). Therefore, additional constraints to determine whether
there is at least one species tree S with time-consistent
map, and if so, construct S, must be established.
(7) A further key problem is the identification of horizontal transfer events.
In principle, likely genes that have been introduced into a genome by HGT
can be identified directly from sequence data <cit.>. Sequence
composition often identifies a gene as a recent addition to a genome. In the absence
of horizontal transfer, the similarities of pairs of true orthologs in the species pairs
(A,B) and (A,C) are expected to be linearly correlated. Outliers are likely candidates for
HGT events and thus can be “relabeled”. However, a
more detailed analysis of the relational properties of horizontally
transferred genes is needed.
|
http://arxiv.org/abs/1701.07513v2 | 20170125224616 | Clouds in the atmospheres of extrasolar planets. V. The impact of CO2 ice clouds on the outer boundary of the habitable zone | [
"Daniel Kitzmann"
] | astro-ph.EP | [
"astro-ph.EP"
] |
V. The impact of CO2 ice clouds on the outer boundary of the habitable zone
Clouds in the atmospheres of extrasolar planets. V.
Physikalisches Institut & Center for Space and Habitability, University of Bern,
Sidlerstr. 5, 3012 Bern, Switzerland
daniel.kitzmann@csh.unibe.ch
Clouds have a strong impact on the climate of planetary atmospheres. The potential scattering greenhouse effect of CO_2 ice clouds in the atmospheres of terrestrial extrasolar planets is of particular interest because it might influence the position and thus the extension of the outer boundary of the classic habitable zone around main sequence stars. Here, the impact of CO_2 ice clouds on the surface temperatures of terrestrial planets with CO_2 dominated atmospheres, orbiting different types of stars is studied.
Additionally, their corresponding effect on the position of the outer habitable zone boundary is evaluated. For this study, a radiative-convective atmospheric model is used to calculate the surface temperatures influenced by CO_2 ice particles. The clouds are included using a parametrised cloud model. The atmospheric model includes a general discrete ordinate radiative transfer that can describe the anisotropic scattering by the cloud particles accurately.
A net scattering greenhouse effect caused by CO_2 clouds is only obtained in a rather limited parameter range which also strongly depends on the stellar effective temperature. For cool M-stars, CO2 clouds only provide about 6 K of additional greenhouse heating in the best case scenario. On the other hand, the surface temperature for a planet around an F-type star can be increased by 30 K if carbon dioxide clouds are present.
Accordingly, the extension of the habitable zone due to clouds is quite small for late-type stars. Higher stellar effective temperatures, on the other hand, can lead to outer HZ boundaries about 0.5 au farther out than the corresponding clear-sky values.
Clouds in the atmospheres of extrasolar planets
D. Kitzmann1
Received 8 November 2016 / Accepted 25 January 2017
=======================================================
§ INTRODUCTION
Clouds can have an important impact on the climate of terrestrial planets by either trapping the infrared radiation in the lower atmosphere (greenhouse effect) or by scattering incident stellar
radiation back to space (albedo effect). The position and extension of the habitable zone (HZ) around different types of stars thus depends on the presence of clouds <cit.>.
Especially, the outer HZ boundary might be influenced by the formation of CO_2 ice clouds and their corresponding climatic impact. If a planet is located farther away from its host star, its
atmospheric and surface temperatures become cooler due to the decrease in stellar insolation. To sustain liquid water on the surface, a thick atmosphere composed of a greenhouse gas, such as CO2, is required. If the terrestrial planet is still geologically active, CO_2 can accumulate in the atmosphere by volcanic outgassing <cit.>. With decreasing atmospheric temperatures, carbon dioxide will condense at some point to form clouds composed of CO_2 ice crystals.
In contrast to other types of condensates important for habitable, terrestrial planets – such as liquid H2O or water ice – dry ice is more or less transparent in the infrared except within a few strong absorption bands <cit.> (see the corresponding refractive index in Fig. 1 in <cit.>).
Thus, as argued by <cit.> or <cit.>, a classical greenhouse effect by absorption and re-emission of thermal radiation is unlikely to occur for CO2 ice clouds.
However, as pointed out by <cit.> and <cit.>, CO_2 ice particles can efficiently scatter thermal radiation back to the planetary surface thereby creating a scattering greenhouse effect. Depending on the cloud properties a scattering greenhouse effect can outweigh the cloud's albedo effect and can, in principle, increase the surface temperature above the freezing point of water <cit.>.
Most atmospheric modelling studies on the climatic effects of CO_2 clouds so far have been limited to the early Martian atmosphere. For a fully cloud-covered early Mars with a thick CO_2 dominated
atmosphere and CO_2 clouds composed of spherical CO_2 ice particles, <cit.> determined that in contrast to the cloud-free distance of 1.67 au by <cit.>, the outer
boundary of the HZ should be located at 2.4 au.
This value has been further used by <cit.> to extrapolate the effects of CO_2 clouds on the outer HZ boundary towards other main sequence central
stars, assuming that the radiative effects of CO_2 clouds are not a function of the incident stellar radiation or the properties of the planet and its atmosphere.
The cloud-free outer HZ has recently been slightly revised by <cit.> who used an updated version of the <cit.> model.
The corresponding results of these earlier studies on the outer boundary of the classical habitable zone are summarised in Fig. <ref>.
For the super-Earth Gliese 581 d orbiting the M3V dwarf Gliese 581, a simplified description of CO2 ice particle formation was included in the one-dimensional (1D) and three-dimensional (3D) atmospheric models by <cit.>. These studies stated that the CO2 clouds contribute to the greenhouse effect by increasing the planet's surface temperature. However, they also employed a simplified treatment of the radiative transfer by using two-stream methods.
The first extensive study on the radiative effect of CO_2 clouds in atmospheres of terrestrial planets around main-sequence dwarf stars was done by <cit.>. In
that radiative transfer study, the properties of the CO_2 ice particles (particle sizes, optical depths) were varied over a large parameter range to calculate their radiative effects using a
high-order discrete ordinate radiative transfer method. The results of this study suggest, that the simplified two-stream radiative transfer methods employed in previous model studies strongly
overestimated the scattering greenhouse effect of CO_2 clouds, concluding that more accurate radiative transfer schemes are absolutely required for atmospheric models to predict the correct climatic
impact of CO_2 ice particle clouds.
Following this numerical radiative transfer study, <cit.> reinvestigated the impact of CO2 ice clouds in the atmosphere of early Mars by using a radiative-convective
atmospheric model with an accurate radiative transfer. While the results in <cit.> suggest that carbon dioxide clouds still yield a net greenhouse effect under certain
conditions, the impact on the surface temperature is much less pronounced than found in the previous studies on this topic. This reduced heating effect will also affect the previous
estimates on the extension of the HZ due to the presence of CO2 clouds.
In this work, I study the climatic effect of CO_2 ice clouds in CO_2-dominated atmospheres of terrestrial planets around different types of main-sequence dwarf stars by using a 1D
radiative-convective atmospheric model with an accurate multi-stream radiative transfer. In particular, the impact of carbon dioxide clouds on the position of the outer boundary of the classical
habitable zone is investigated.
Section <ref> gives an overview of the atmospheric model used in this study, as well as the CO_2
cloud description.
The climatic impact of CO_2 clouds is discussed in Sect. <ref>, while their effect on the outer HZ boundary is studied in Sect. <ref>.
Concluding remarks and a summary are given in Sect. <ref>.
§ MODEL DESCRIPTION
For the calculations in this publication, I use a 1D radiative-convective atmospheric model, previously used to study the climatic impact of CO2 clouds in the atmosphere of early Mars.
A detailed model description is presented in the following.
The model features a state-of-the-art radiative transfer treatment based on opacity sampling and a general discrete ordinate method, able to accurately treat
anisotropic scattering. The model currently considers N2, CO2, and H2O as atmospheric species.
In the dry atmosphere, N2 and CO2 are considered to be well mixed, that is, they have a constant mixing ratio throughout the atmosphere. For water, the relative humidity profile of
<cit.> is used, with a fixed relative humidity at the surface of 77%. <cit.> showed that the results using this relative humidity profile are a good
approximation in comparison to full 3D studies of cold atmospheres.
The atmospheric model is stationary, that is, it doesn't contain an explicit time dependence. It does, however, use the usual approach of time stepping to calculate the atmospheric temperatures
<cit.>. The radiative equilibrium temperatures are obtained via
d T(z)/d t = -( g/c_p(z) ) ( d F(z)/d p(z) ) ,
where g is the gravitational acceleration, c_p the heat capacity at constant pressure and F the wavelength-integrated radiation flux.
It should be noted that the time t in Eq. (<ref>) need not be treated as a real time variable. It is,
equilibrium.
For the present model, t is not a global constant, but is allowed to change as a function of grid point.
This allows, for example, the use of relatively large values of t at grid points with very high thermal inertia (usually at high pressures), whereas t can be smaller at the top of the atmosphere to
stabilise the convergence in this region.
In regions where the atmosphere is found to be convectively unstable, convective adjustment is performed. The convective lapse rate for regions with CO2 and H2O condensation is modelled
after <cit.>.
Approximately one hundred grid points are used to discretise the vertical extension of the atmosphere. For the surface albedo, I use the mean Earth-like value of 0.13 <cit.>.
§.§ Radiative transfer
A single radiative transfer scheme with a multi-stream discrete ordinate method is used for the entire wavelength range from 0.1 μm to 500 μm.
This method solves the transfer equation
μd I_λ (τ_λ, μ)/dτ_λ = I_λ(τ_λ, μ) - S_λ(τ_λ, μ)
at several distinct values of the angular variable μ (streams) and afterwards computes the averaged radiation field quantities (radiation flux, mean intensity) by angular integration.
The general source function S_λ(τ_λ) takes the incident radiation from the central star (S_λ,*), the local
thermal emission, described by the Planck function B_λ, and the contributions due to scattering into account. It is given by
S_λ(τ_λ) = S_λ,*(τ_λ) + (1 - ω_λ) B_λ
+ ω_λ/2∫_-1^+1 p_λ^0(μ,μ')I_λ(μ') dμ' ,
where ω_λ is the single scattering albedo and p_λ^0(μ,μ') is the azimuthally-averaged scattering phase function.
Here, the scattering phase function is represented as a series of Legendre polynomials <cit.>
p_λ^0(μ,μ') = ∑_n=0^∞ (2 n + 1) P_n(μ) P_n(μ') χ_λ,n
with the Legendre polynomials P_n(μ) and the phase function moments χ_λ,n.
The moments are defined by the integration of the full phase function p_λ(α) with respect to the scattering angle α, weighted by the Legendre polynomials P_n(μ)
χ_λ,n = 1/2∫_-1^+1 p_λ(α) P_n (cosα) dcosα .
For most phase functions, this series is infinite but is, in practice, truncated at a certain
n=N_max.
As mentioned by <cit.>, the value of N_max is equal to the number of streams used to solve the radiative transfer equation.
To treat the strong forward scattering peak of the phase function in case of large size parameters, the δ-M scaling method from <cit.> is used.
This method approximates the forward scattering peak by a δ distribution and removes it from the phase function series.
The δ-M scaling, however, produces errors in the computed intensities which are therefore corrected by using the method presented in <cit.>.
For the numerical implementation of the discrete ordinate method, I use the C-DISORT radiative transfer code <cit.>.
Throughout this study, eight computational streams are used in all calculations.
Tests by doubling the number of streams showed no changes in the resulting atmospheric and surface temperatures or radiation fluxes.
§.§.§ Opacity sampling
For the description of wavelength-dependent transport coefficients, I adopt the opacity sampling method. This method has been introduced in the context of the cool atmospheres of late-type stars
which are dominated by molecular absorption <cit.>.
It has the advantage of operating in the normal wavelength/wavenumber space, which allows the transport coefficients of different species to be directly added <cit.>.
The rationale for employing opacity sampling is based on the fact that wavelength-averaged quantities, such as the total radiation flux or the mean intensity, converge well before all spectral lines are fully resolved.
In the opacity sampling approach, the equation of radiative transfer is solved at distinct wavelength points, at which its results are identical to a line-by-line radiative transfer method. There are a number of strategies on how to distribute these distinct wavelengths, such that, for example, the wavelength-integrated flux converges for a small number of points.
As mentioned in <cit.>, the distribution of wavelengths at which the equation of radiative transfer is solved, is treated separately in three different wavelength regions. In the infrared, the points are sampled along the Planck black body curves for different temperatures, adopted from the method published by <cit.>.
Here, the points are sampled for 30 temperatures between 100 K and 400 K, covering the entire range of atmospheric temperatures encountered in the atmospheric scenarios of this study.
Between 0.3 μm and 5 μm, 20,000 wavelength points are distributed logarithmically equidistant in wavenumber. For smaller wavelengths, approximately 100 points are used to treat the smooth Rayleigh slope.
Figure <ref> shows a histogram of the distribution of opacity sampling points as a function of wavelength.
The figure nicely illustrates the high sampling rate in the thermal infrared where most of the atmospheric infrared radiation is transported and the drop-off at the flanks of the high and low temperature Planck curves.
In total approximately 40,000 distinct wavelengths are used in this study. Further increasing the spectral resolution has no impact on the atmospheric fluxes and temperature profiles.
In principle, the number of points could also be reduced significantly without any large impact on the temperatures.
For example, reducing the wavelengths in the visible and near infrared by 10,000 yields changes in the surface temperatures of only 0.1 K.
In fact, reasonable values for the surface temperatures can already be obtained by using only a couple of hundred wavelengths.
§.§.§ Absorption coefficients
The molecular absorption coefficients used in this study are calculated with the open source Kspectrum code (version 1.2.0). It should be noted that the currently available version of Kspectrum contains a bug in the calculation of the sub-Lorentzian line profiles for CO2. A fixed version of the code has been forwarded to the code's author but has not been made publicly available so far.
Using the HITRAN 2012 database <cit.>, absorption cross-sections of CO2 and H2O are obtained for pressures between 10^-6 bar and 300 bar and
temperatures between 100 K and 640 K. The cross-sections are tabulated in equidistant wavenumber steps of 0.01 cm^-1.
The opacity sampling points are always chosen to coincide with one of the tabulated wavenumbers, thus avoiding spectral interpolation of the cross-sections.
In the case of CO2, the sub-Lorentzian line profiles of <cit.> are employed. The line profiles are truncated at a distance of 500 cm^-1 from the line centre. The continuum contribution and dimer absorption between 1100 and 2000 cm^-1 are taken from
<cit.> while the descriptions in <cit.> are used to treat collision-induced absorption for wavenumbers smaller than 250 cm^-1.
For the self and foreign continuum absorption of H2O, the MT-CKD formulation <cit.> is used. Following the requirements of the MT-CKD model, the line wings of H2O are truncated at distances of 25 cm^-1.
Recently, <cit.> showed the importance of taking into account the line-coupling of the CO2 absorption lines in the infrared for the climate of the early Mars. The use of purely Lorentzian far wing line shapes can lead to an overestimation of the CO2 greenhouse effect in the infrared. The effect of line mixing is partly included in the model by using the sub-Lorentzian line profiles, though it may still overestimate the greenhouse warming by the CO2 gas under certain conditions. Thus, the HZ boundaries presented here could be an upper limit.
§.§.§ Molecular Rayleigh scattering
Molecular Rayleigh scattering <cit.> is included for CO2, H2O, and N2.
The corresponding cross-section is computed via
σ_rayleigh,ν = ( 24 π^3 ν^4/n_ref^2 ) · ( (n(ν)^2 - 1)/(n(ν)^2 + 2) )^2 · K(ν) ,
where ν is the wavenumber, n the refractive index, n_ref a reference particle number density, and K the King factor. The King
factor describes a correction to account for anisotropic molecules. It can also be written as a
function of the depolarization factor D
K(ν) = ( 6 + 3 D(ν) )/( 6 - 7 D(ν) ) .
For water, the refractive index from <cit.> and the depolarisation factor of 3· 10^-4 from <cit.> are adopted.
The refractive indices and King factors for CO2 and N2 are taken from <cit.>. Note that Eq. (13) in <cit.> for the refractive
index of CO2 contains a typographical error. The numerator of the last term should read 0.1218145· 10^-6 instead of the 0.1218145· 10^-4 factor stated in their Eq. (13).
§.§ Cloud description
The cloud description is identical to the one also used by <cit.>, <cit.>, or <cit.>.
The size distribution of the CO2 ice particles is described by a modified gamma distribution
f(a) = ( (a_eff σ)^(2 - 1/σ)/Γ( (1-2σ)/σ ) ) a^(1/σ - 3) e^(-a/(a_eff σ))
where a_eff is the effective particle radius and σ the effective variance, for which a value of 0.1 is used <cit.>.
The optical properties of the ice particles are obtained via Mie theory, thereby assuming spherical particles <cit.>. The refractive index for dry ice is taken from
<cit.>.
A plot with the optical properties for some selected values of the effective radius and an optical depth
of one can be found in <cit.>. Throughout this study, the cloud optical depth τ refers to the particular wavelength of λ = 0.1 μ m.
The scattering phase function of the CO2 ice cloud particles is approximated by the analytical Henyey-Greenstein function <cit.>
p_HG,λ(g_λ,α) = (1 - g_λ^2)/(1 + g_λ^2 - 2 g_λ cosα)^(3/2) ,
where g_λ is the asymmetry parameter obtained from Mie theory.
This function lacks the complicated structure and detailed features of the full Mie phase function, though preserves its average quantities, such as, most notably, the
asymmetry parameter g_λ, that is
g_HG,λ = 1/2∫_-1^+1 p_HG,λ(g_λ,α) cosα dcosα = g_λ .
In case of the Henyey-Greenstein function, the phase function moments required for the Legendre series (<ref>) have the simple form
χ_λ,n = g_λ^n .
§.§ Stellar spectra
Stellar spectra for four main-sequence stars with different stellar effective temperatures T_eff are used in this study.
This includes σ Boo (F2V, T_eff = 6722 K), the Sun (G2V, T_eff = 5777 K), the young K-dwarf ϵ Eri (K2V, T_eff = 5072 K), and AD Leo (M3.5eV,
T_eff = 3400 K).
The spectra are a composite of stellar atmosphere models and measurements. Details on the spectra can be found in <cit.>.
§ CLIMATIC IMPACT OF CO2 ICE CLOUDS
In this section, I first investigate the climatic effect of the CO2 clouds for planets orbiting different central stars. The aim is to test the impact of the stars' spectral energy
distribution on the efficiency of the net greenhouse effect.
The results presented in <cit.> suggest that the climatic impact is directly affected by the spectral distribution of the incident stellar radiation.
M-stars, for example, seem to result in only a very small net greenhouse effect of the CO2 ice particles. This is caused by the shift of the stellar spectrum more towards the near- and mid-infrared compared to solar-type stars, such as our Sun.
The shift of the spectrum to longer wavelengths makes Rayleigh scattering very inefficient and, thus, leads to a reduced net greenhouse effect by the cloud particles.
However, <cit.> were not able to quantify the impact on the surface temperature.
In order to be comparable, a common scenario is used for all central stars. In each case, the atmosphere is composed of six bar CO2 gas. Neither N2 nor H2O are considered for these calculations.
The planets are placed at orbital distances, where a surface temperature of 273.15 K (freezing point of water) is obtained for each central star.
For these cases, the corresponding temperature-pressure profiles are shown in Fig. <ref>, along with the saturation vapour pressure curve of CO2.
In all cases, the troposphere is separated into two different convective regimes: A dry adiabatic region in the lower troposphere and a moist CO2 adiabatic temperature profile in the upper part.
This upper part indicates the atmospheric region, where a CO2 ice cloud could potentially form.
To study the impact of the CO2 clouds, a cloud layer is inserted into these saturated regions in the following.
For the solar-type stars, the cloud layer is centred around 0.1 bar.
This corresponds roughly to the position of the cloud layer in the studies of <cit.> and <cit.>.
In the case of the M-type dwarf, the cloud layer has to be located deeper in the atmosphere (0.4 bar) because the strong absorption of stellar radiation by the atmosphere leads to a warmer middle
atmosphere and, thus, to a reduced tropopause height. The positions of the cloud layers are marked in Fig. <ref>.
A supersaturated CO2-dominated atmosphere would provide a large amount of condensible material. Thus, the CO2 ice particles could potentially reach large particle sizes.
The only detailed microphysical study by <cit.> obtained particle sizes of up to 1000 μm for an early Martian atmosphere composed of two bar of CO2.
On the other hand, the simpler cloud schemes in the 3D GCM by <cit.> resulted in particle sizes almost two orders of magnitude smaller.
One probable cause for this discrepancy could be the high, critical supersaturation (1.35, experimentally derived by <cit.>) used in <cit.> to initiate the heterogeneous nucleation. Additionally, the clouds condense from the major atmospheric constituent meaning that a substantial amount of condensible material is available at this high supersaturation. This can result in the growth of very large ice particles.
To cover this parameter space, the effective particle sizes are varied between 10 μm and 500 μm in this study.
The resulting surface temperatures for optical depths of the cloud layer between 0.1 and 20 are shown in Fig. <ref> for all four central stars. The white contour lines indicate the clear-sky temperatures of 273.15 K.
The results presented in Fig. <ref> confirm the findings from <cit.> and <cit.> with respect to the fact that CO2 clouds are only effective in producing a net greenhouse effect within a small parameter range. The extent of this parameter range, on the other hand, depends crucially on the spectral type of the central star; it is relatively large for F-type stars and very small for M-dwarfs.
On the other hand, the most effective particle radius for a net greenhouse effect is almost independent of the stellar type and is approximately 25 μm. As discussed in <cit.>, this is not too surprising because the scattering greenhouse effect occurs in the infrared wavelength region. In order to be effective scatterers in this wavelength region, the particle size must be comparable to the IR wavelengths.
The ability of the cloud layer to limit the loss of shortwave radiation to space due to molecular Rayleigh scattering, however, crucially depends on the incident stellar spectrum. The increased scattering of shortwave radiation between the lower
atmosphere and the cloud base allows more radiation to be kept within the atmosphere than in the clear-sky case. Since the molecular Rayleigh scattering is proportional to λ^-4, it is much less effective for cooler stars because the stellar spectrum is more shifted to near-infrared wavelengths. Thus, the stellar radiation of late-type stars is predominantly absorbed in the upper planetary atmosphere rather than being scattered. Consequently, the net heating effect of the CO2 ice clouds is considerably smaller in these cases.
Figure <ref> shows the resulting surface temperature as a function of the cloud layer's optical depth for a fixed effective particle radius of 25 μm.
As already discussed in <cit.> for the conditions of the early Martian atmosphere, the CO2 clouds are only effective at optical depths of approximately 5 to 8. For larger optical depths, the net heating effect becomes increasingly smaller, up to the point where the presence of the cloud layer results in atmospheric cooling.
As already apparent from Fig. <ref>, the irradiance from an F-type star produces the largest net greenhouse effect, with a temperature of up to 30 K higher than the clear-sky
case. The planet around the M-dwarf, on the other hand, doesn't benefit strongly from the cloud's greenhouse effect. Here, the largest temperature increases are at most 6 K.
On the contrary, for optical depths larger than 10, the CO2 ice cloud produces a net cooling effect that reaches temperature decreases of more than 20 K at an optical depth of 20.
These results will, of course, have a direct impact on the outer boundaries of the habitable zone around these different central stars. This is investigated further in the following section.
§ EFFECT OF CO2 CLOUDS ON THE OUTER BOUNDARY OF THE HABITABLE ZONE
§.§ Cloud-free limit
Before the impact of clouds is evaluated, I first perform calculations without the presence of clouds to obtain the cloud-free outer boundary of the habitable zone, as done by, for example, <cit.> or
<cit.>. These publications used an inverse modelling approach. This approach makes assumptions on the temperature profiles in the radiative equilibrium part of the
atmospheres, which allows them to obtain the outer HZ boundary without iterating the model into thermal equilibrium.
Here, in contrast to that, I use full atmospheric calculations, that is, no a priori assumptions of the temperature profile are made and all results represent fully converged model calculations.
Additionally, <cit.> and <cit.> used an increased surface albedo to account for the net cooling effect of water droplet and ice clouds.
The surface albedo was tuned to yield a mean Earth surface temperature for an Earth-like planet around the Sun.
Assuming that the impact of clouds neither depends on the central star nor on the atmospheric temperatures, this surface albedo was kept constant in all cases.
<cit.>, on the other hand, showed that the greenhouse effect of the water ice clouds, for example, depends on the atmospheric temperatures.
For lower temperatures, this greenhouse effect has an increased efficiency, such that the tuned surface albedo would need to be decreased to capture this effect.
However, since the outer boundary is determined for a thick CO2 atmosphere, the exact value of the surface albedo is not of great importance because most of the incident stellar radiation will
be absorbed or scattered by the CO2 molecules before it reaches the surface. Only at low CO2 surface pressures should deviations be expected to occur.
Following the assumptions of <cit.> and <cit.>, I use an atmosphere composed of N2, CO2, and H2O. The amount of N2 is fixed at one bar,
while the vertical distribution of H2O is given by the relative humidity parametrisation of <cit.>. The atmospheric CO2 content, on the other hand, is a free parameter.
By varying the amount of stellar insolation, I obtain the orbital distances, where the assumed planet possesses a surface temperature of 273.15 K for a given surface pressure of CO2. The corresponding results for all four central stars are presented in Fig. <ref>.
The results clearly show the well-known maximum greenhouse effect of CO2 <cit.>. Carbon dioxide is only an efficient greenhouse gas below a certain maximum partial pressure.
For surface pressures higher than this value, the greenhouse effect is offset by the molecular Rayleigh scattering, such that the net effect of the CO2 molecules would be a dominating cooling effect.
This maximum greenhouse effect is a function of the spectral distribution of the incident stellar radiation. It is more dominant for an F-type star and less effective for late-type stars because their
spectra are more shifted towards the near infrared, which makes Rayleigh scattering rather inefficient. The results of Fig. <ref> agree overall with the values of
<cit.>.
Most deviations can be found at lower CO2 surface pressures, where the different value of the surface albedo influences the results.
Figure <ref> allows us to obtain the cloud-free outer boundary of the habitable zone, through use of the minima of the curves, that is, the smallest possible insolations.
These critical insolations for the maximum greenhouse effect are shown in Fig. <ref> (upper panel) as a function of the stellar effective temperature.
Additionally, the figure depicts the corresponding results from <cit.>. It should be noted that <cit.> use a value of 1360 W m^-2 for
S_⊙, whereas here a solar constant of 1366 W m^-2 is employed.
The deviations found in terms of the stellar insolation at the outer boundary as a function of T_eff are rather small.
These minor differences can be explained by the use of different surface albedos and the employed radiative transfer schemes.
The outer boundary position can also be expressed in terms of orbital distance d by the usual relation d = √(S_⊙ / F_*), where F_* is the stellar insolation at the top of the atmosphere and [ d ] = au.
The resulting cloud-free distances are shown in the lower panel of Fig. <ref>, again with the outer HZ boundary determined by
<cit.> for comparison.
Figure <ref> suggests that the differences in terms of orbital distances between the two different studies are relatively small and almost negligible, considering the different modelling approaches. As noted in the introduction, the revised orbital distances by <cit.> also differ slightly from the original values obtained by <cit.>.
Following the approach of <cit.>, I perform a polynomial fit of the resulting orbital distances as a function of the stellar effective temperature.
The results for the outer HZ boundary locations in astronomical units are expressed by a 4th-order polynomial of the form:
d = c_1 T_eff + c_2 T_eff^2 + c_3 T_eff^3 + c_4 T_eff^4 , with [d] = au .
The corresponding parameters c_i are given in Table <ref>.
§.§ Impact of CO2 clouds
In this subsection the impact of the CO2 ice clouds on the position of the outer HZ boundary is calculated.
The calculations are restricted to the optimal cases, that is, the cloud optical depth and particle radii as well as the CO2 surface pressure are chosen such that the highest possible net heating effect of the CO2 ice clouds is obtained.
Additionally, in accordance with Sect. <ref>, the cloud coverage is set to 100%.
Thus, the values for the outer boundary presented in this section represent the upper limits of the impact of CO2 ice cloud on the outer HZ location.
The effective radius of particle size distribution is chosen based on the results from the previous subsection.
The results suggest that the most effective particle size is independent of the central star type and given by approximately a_eff≈ 25 μm.
This particle size will therefore be used in the following for all calculations.
For these optimal cases, the atmospheric CO2 partial pressure and the optical depths of the clouds are slightly higher than what has been presented in the previous section. For example, the maximum greenhouse effect of the clear-sky atmosphere in the G-star case occurs around a CO2 partial pressure of approximately 5.7 bar (see Fig. <ref>), while the most effective optical
depth of the cloud layer in the 6 bar pure CO2 atmosphere is approximately 6.5.
When varying both, the CO2 content and the cloud optical thickness, the highest impact on the surface temperature occurs for a carbon dioxide surface pressure of 8 bar and an optical depth of 8.
However, the difference between these two cases in terms of the stellar insolation required to obtain a surface temperature of 273.15 K is only about 0.008 S_⊙ and, thus, has no impact on the position of the outer HZ boundary.
The orbital distances of the HZ boundary for all four central stars are again fitted with the polynomial in Eq. (<ref>), with the resulting parameters given in Table <ref>.
The results of this polynomial fit are shown in Fig. <ref> along with the results from <cit.> and <cit.> for comparison.
The increase of the orbital distances compared to the cloud-free case is clearly a function of the stellar effective temperature. For a planet around a cool M-dwarf, the increase in distance is relatively small. In the best-case scenario presented in this section, the increase is only approximately 0.05 au. Given the fact that, in reality, the atmospheric and CO2 cloud properties will be less than
optimal, one shouldn't expect a strong positive effect of carbon dioxide clouds in these cases. On the contrary, for less than optimal cloud properties, the outer boundary could even be shifted to smaller orbital distances than the clear-sky case for cooler central stars.
For stars with higher effective temperatures, the increase in orbital distance can be up to 0.5 au. For the F-type star, for example, the presence of CO2 ice clouds would allow the planet to be located at 2.54 au, which is 0.44 au farther than the clear-sky value of 2.1 au.
As mentioned in the introduction, the only atmospheric modelling calculation for the impact of the carbon dioxide clouds on the position of the outer HZ boundary was done by <cit.>,
who state a distance of 2.4 au for the Sun. This result has been used by <cit.> to scale the cloud-free distances from <cit.>. It should be noted, though, that the absorption coefficients for CO2 molecules in <cit.> and <cit.> were overestimated because the continuum absorption was accounted for twice in their atmospheric model <cit.>.
The less effective greenhouse effect by the CO2 ice clouds in this study results in a much smaller value for the orbital distance in this case. According to Fig. <ref>, the
orbital distance for a G2V star is about 2 au and, thus, 0.4 au smaller than what has previously been estimated.
Compared to the results from <cit.>, the new estimates provided by this work shift the boundary towards smaller orbital distances by approximately 0.5 au for F-type stars and
approximately 0.2 au for M-dwarfs.
§ SUMMARY
In this study the climatic effects of CO_2 ice clouds in CO_2-dominated atmospheres of terrestrial planets around different types of main-sequence dwarf stars and the impact of CO2 clouds on the location of the outer boundary of the classical habitable zone are investigated.
A radiative-convective atmospheric model employing an accurate discrete ordinate radiative transfer was used to calculate surface temperatures for a broad range of particle sizes and optical depths of CO_2 ice particles.
As a first step, the climatic impact of CO2 clouds in a thick, CO2-dominated atmosphere was evaluated for different stellar spectral energy distributions and as a function of the effective particle radius and the cloud's optical depth.
As already pointed out by <cit.>, the heating and cooling effect should be a direct function of the incident stellar spectrum.
The results from this work suggest that for these thick, CO2-dominated atmospheres the cloud's radiative forcing yields, at most, temperature increases smaller than 6 K for cool M-stars. On the other hand, central stars with higher effective temperatures resulted in surface temperatures approximately 30 K higher than the corresponding clear-sky case.
Furthermore, the parameter range where a CO2 ice cloud can have a net positive effect is rather limited for late-type stars. A heating effect is only obtained in a small parameter range with respect to optical depths and effective particle sizes. Outside this parameter range, the clouds would have a dominating albedo effect, thus cooling the surface.
The most effective particle size for the scattering greenhouse effect was found to be approximately 25 μm, independent from the central star type.
Following this, the impact of dry ice clouds on the position of the classical HZ's outer boundary was investigated.
Atmospheric calculations without clouds were performed first to obtain the cloud-free habitable zone boundary. The results of this cloud-free limit are very similar to the orbital distances published by <cit.>.
The outer boundary influenced by clouds has been calculated for the optimal scenarios and, thus, represents an upper limit. The additional greenhouse effect allows the orbital distance of the outer HZ boundary to be increased by up to 0.5 au for F-stars. Stars with lower effective temperatures, on the other hand, don't benefit strongly from the CO2 ice clouds. The less effective net greenhouse effect results in an extension of only 0.05 au.
Compared to the orbital distance of the outer HZ boundary of 2.4 au for the Sun published by <cit.>, which has been obtained using a simplified two-stream method, the revised distance obtained by the more accurate radiative transfer treatment in this study is 0.4 au smaller. Thus, all parametrisations of the outer boundary which rely on that value (e.g. <cit.> or <cit.>) should be revisited.
It should be noted that the outer HZ boundary calculated in this work only considers the classical CO2-dominated atmospheres. The presence of other greenhouse gases, such as CH4, might also lead to an additional extension of the HZ. Other possible mechanisms also include, for example, the greenhouse effect provided by a H2-dominated atmosphere as studied by <cit.>.
There are also still open questions remaining regarding the climatic impact of CO2 clouds. Models including a treatment for CO2 ice cloud formation have so far strongly disagreed on the resulting particle sizes <cit.>, such that additional research in this area is warranted. Additionally, all models have, so far, assumed spherical particles and used Mie theory to obtain the optical properties of the cloud particles.
However, as shown by laboratory measurements of <cit.>, <cit.>, or <cit.> CO_2 ice crystals can have cubic or octahedral shapes.
Combinations of both (cuboctahedra) or more complicated shapes such as rhombic-dodecahedral crystals can also occur.
Since the greenhouse effect of carbon dioxide clouds is determined by the scattering properties of the ice particles, their shapes should have a large impact on the resulting optical properties (e.g. scattering phase function) and, thus, also on the climatic impact.
D.K. gratefully acknowledges the support of the Center for Space and Habitability of the University of Bern and the MERAC Foundation for partial financial assistance.
This work has been carried out within the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation. D.K. acknowledges the financial support of
the SNSF.
|
http://arxiv.org/abs/1701.08096v1 | 20170127160254 | Efficiently Summarising Event Sequences with Rich Interleaving Patterns | [
"Apratim Bhattacharyya",
"Jilles Vreeken"
] | cs.AI | [
"cs.AI",
"cs.DB"
] |
Apratim BhattacharyyaMax-Planck Institute for Informatics and Saarland University, Saarland Informatics Campus, Saarbrücken, Germany.
Jilles Vreeken[1]
=============================================================================================================================================================
Discovering the key structure of a database is one of the main goals of data mining. In pattern set mining we do so by discovering a small set of patterns that together describe the data well. The richer the class of patterns we consider, and the more powerful our description language, the better we will be able to summarise the data. In this paper we propose , a novel greedy MDL-based method for summarising sequential data using rich patterns that are allowed to interleave. Experiments show is orders of magnitude faster than the state of the art, results in better models, as well as discovers meaningful semantics in the form of patterns that identify multiple choices of values.
§ INTRODUCTION
Discovering the key patterns from a database is one of the main goals of data mining. Modern approaches do not ask for all patterns that satisfy a local interestingness constraint, such as frequency <cit.>, but instead ask for that set of patterns that is optimal for the data at hand.
There are different ways to define this optimum. The Minimum Description Length (MDL) principle <cit.> has proven to be particularly successful <cit.>.
Loosely speaking, by MDL we say that the best set of patterns is the set that compresses the data best. How well we can compress, or better, describe the data depends on the description language we use. The richer this language, the more relevant structure we can identify. At the same time, a richer language means a larger search space, and hence requires more efficient search.
In this paper we consider databases of event sequences, and are after that set of sequential patterns that together describe the data best—as we did previously with <cit.>. Like we describe a database with occurrences of patterns. Whereas requires these occurrences to be disjoint, however, we allow patterns to interleave. This leads to more succinct descriptions as well as better pattern recall. Moreover, we use a richer class of patterns. That is, we do not only allow for gaps in occurrences, but also allow patterns to emit one out of multiple events at a certain location. For example, the pattern `paper [proposes | presents] new' discovered in the JMLR abstract database matches two common forms of expressing that a paper presents or proposes something new.
With this richer language, we can obtain much better compression rates with far fewer patterns. To discover good models we propose , a highly efficient and versatile search algorithm. Its efficiency stems from re-use of information, partitioning the data, and in particular from considering only the currently relevant occurrences of patterns in the data. It is a natural any-time algorithm, and can be run for any time budget that is opportune.
Extensive experimental evaluation shows that performs very well in practice. It is much better at retrieving interleaving patterns than the very recent proposal by Fowkes and Sutton <cit.>, and obtains much better compression rates than <cit.>, while being orders of magnitude faster than both.
The choice-patterns it discovers give insight in the data beyond the state of the art, identifying semantically coherent patterns.
Moreover, is highly extendable, allowing for richer pattern classes to be considered in the future.
§ PRELIMINARIES
Here we introduce basic notation, and give a short introduction to the MDL principle.
§.§ Notation
We consider databases of event sequences. Such a database D is composed of |D| sequences. A sequence S ∈ D consists of |S| events drawn from an alphabet Ω.
The total number of events occurring in the database, denoted by ||D||, is simply the sum of lengths of all sequences ∑_S ∈ D |S|. We write S[j] to refer to the j^th event in sequence S. The support of an event e ∈Ω in a sequence S is simply the number of occurrences of e in S, i.e. supp(e | S) = |{j | S[j] = e}|. The support of e in a database D is defined as supp(e | D) = ∑_i = 1^|D| supp(e | S_i).
We consider two types of sequential patterns. A serial episode X ∈Ω^|X| is
a sequence of |X| events, and we say that a sequence S contains X if
there is a subsequence in S equal to X. We allow noise, or
gap events, within an occurrence of X.
We also consider choice episodes, or choicisodes. These are serial episodes with positions matching one out of multiple events. For example, serial episode ac matches an a followed by c, whereas choicisode [ a , b ] c matches occurrences of a or b followed by c.
§.§ Brief introduction to MDL
The Minimum Description Length principle (MDL) <cit.> is a practical version of Kolmogorov Complexity <cit.>. Both embrace the slogan Induction by Compression. We use the MDL principle for model selection.
By MDL, the best model is the model that gives the best lossless compression. More specifically, given a set of models ℳ, the best model M ∈ℳ is the one that minimizes L(M) + L(D | M), in which L(M) is the length in bits of the description of M, and L(D | M) is the length of the data when encoded with model M. Simply put, we are interested in that model that best compresses the data without loss.
MDL as described above is known as two-part MDL, or crude MDL, as opposed to refined MDL. In refined MDL model and data are encoded together <cit.>. We use two-part MDL because we are specifically interested in the model: the patterns that give the best description.
In MDL we are only concerned with code lengths, not actual code words.
Next, we formalise our problem in terms of MDL.
§ MDL FOR EVENT SEQUENCES
To use MDL we need to define a model class ℳ, and how to encode a model M ∈ℳ and data D in bits.
As models we will consider code tables <cit.>. A code table CT is a dictionary between patterns and associated codes; it consists of the singleton patterns e ∈ Ω, as well as a set 𝒫 of non-singleton patterns. We write code_p(X) to denote the pattern code that identifies a pattern X ∈ CT. Similarly, we write code_f(X) and code_g(X) for the codes identifying resp. a fill and a gap in an occurrence of pattern X.
We can encode a sequence database D using the patterns in a code table , which generates a cover C of the database. A cover C uniquely defines a pattern code stream C_p and a meta code stream C_m. The pattern stream is simply the concatenation of the codes corresponding to the patterns in the cover, in the order of their appearance. Likewise, the meta stream C_m is the concatenation of the gap and fill codes corresponding to the cover. In Fig. <ref>, we illustrate two example covers and corresponding code tables, the first using only singletons and the second cover with interleaving using patterns from a richer code table with choicisodes.
Before formalising our score, it is helpful to know how to decode a database given a code table and the code streams.
§.§ Decoding a database
To decode a database, we start by reading a pattern code code_p(X) from the pattern stream C_p. If the corresponding pattern X is a singleton, we append it to our reconstruction of the database D. If it is a non-singleton, we append its first event, X[1], to D. To allow for interleaving, we then add a new context to the context list Λ. A context is a tuple (X, i) consisting of a pattern X and a pointer i to the next event to be read from the pattern. For an example, consider Cover 2 in Fig. <ref>: we read code_p(p) from C_p, append p[1] = a to D, and add (p, 2) to the context list.
Next, if the context list is non-empty, we read as many meta codes from C_m as there are contexts in Λ.
If we read a fill code code_f(X) corresponding to one of the contexts (X, i) ∈ Λ, we append the next event of X, X[i], to the data D, and increment the pointer. If after this step we have finished reading the pattern, we remove its context from the list. If we read only gap codes code_g(X) for every pattern X in the context list, we read again from the pattern stream. We do this until we reach the end of the pattern stream C_p.
Continuing our example, we read code_g(p) from C_m, which corresponds to a gap in the occurrence of pattern p. We read code_p(q) from C_p, write q[1] = b to D, and insert context (q, 2) into Λ. Next, Λ contains two contexts, so we read two meta codes from C_m, viz. code_g(p) and code_f(q). As for context (q, 2) we read a fill code, we write q[2] = d to D and increment its pointer to 3. Etc.
§.§ Calculating Encoded Lengths
Given the above scheme we know which codes to expect when, and can now formalise our score. We build upon and extend the encoding of Tatti & Vreeken <cit.> to allow for richer covers and patterns.
§.§.§ Encoded Length of the Database
We encode the pattern stream using Shannon-optimal prefix codes. The length of the pattern code L(code_p(X)) for a pattern X depends on how often it is used in the pattern stream; we write usage(X) to denote the number of times code_p(X) occurs in C_p. The length of the optimal pattern code for X then is

L(code_p(X)) = −log( usage(X) / ∑_{Y ∈ CT} usage(Y) ) .

The encoded length of the whole pattern stream is then simply L(C_p) = ∑_{X ∈ CT} usage(X) · L(code_p(X)).
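For concreteness, the following minimal Python sketch computes these Shannon-optimal lengths from a usage table. This is illustrative only (the paper's implementation is in C++), and the helper names are ours:

    from math import log2

    def pattern_code_lengths(usage):
        # L(code_p(X)) = -log( usage(X) / sum_Y usage(Y) ), in bits
        total = sum(usage.values())
        return {X: -log2(u / total) for X, u in usage.items() if u > 0}

    def pattern_stream_length(usage):
        # L(C_p) = sum_X usage(X) * L(code_p(X))
        lengths = pattern_code_lengths(usage)
        return sum(usage[X] * lengths[X] for X in lengths)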
To avoid arbitrary choices in the model encoding, we use prequential codes <cit.> to encode the meta stream. Prequential codes are asymptotically optimal without knowing the distribution beforehand. The idea is that we start with a uniform distribution over the events in the stream and update the counts after every received event. This means we have a valid probability distribution at every point in time, and can hence send optimal prefix codes. The total encoded length of the meta stream is

L(C_m) = ∑_{X ∈ CT} ( −∑_{i=1}^{fills(X)} log( (ϵ + i − 1) / (2ϵ + i − 1) ) − ∑_{i=1}^{gaps(X)} log( (ϵ + i − 1) / (2ϵ + fills(X) + i − 1) ) ) ,

where ϵ = 0.5 is the constant by which we initialize the distribution <cit.>, and fills(X) and gaps(X) are the number of times code_f(X) resp. code_g(X) occurs in C_m.
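As an illustration, here is a small Python sketch of the prequential code length for the fill/gap stream of a single pattern (our own helper, not the paper's code). Since the prequential product is order-independent, we may emit all fills first:

    from math import log2

    def prequential_length(fills, gaps, eps=0.5):
        # plug-in prediction: P(next = fill) = cf / (cf + cg), with counts
        # initialised at the pseudo-count eps and updated after each symbol
        L, cf, cg = 0.0, eps, eps
        for _ in range(fills):
            L -= log2(cf / (cf + cg))
            cf += 1.0
        for _ in range(gaps):
            L -= log2(cg / (cf + cg))
            cg += 1.0
        return L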
For lossless decoding of database D, the number of sequences |D| and the length of each sequence S ∈ D should also be encoded. We do this using L_ℕ, the MDL optimal code for integers n ≥ 1 <cit.>.
Combining the above, for the total encoded length of a database, given a code table CT and cover C, we have

L(D | CT) = L_ℕ(|D|) + ∑_{S ∈ D} L_ℕ(|S|) + L(C_p) + L(C_m) .
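A small sketch of L_ℕ, the universal integer code used above; the constant c_0 ≈ 2.865064 normalizes the code lengths so that they satisfy the Kraft inequality (again an illustrative Python helper of ours):

    from math import log2

    def L_N(n):
        # Rissanen's MDL-optimal code length for an integer n >= 1, in bits
        assert n >= 1
        L, x = log2(2.865064), float(n)
        while True:
            x = log2(x)
            if x <= 0:
                break
            L += x
        return L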
Next we discuss how to encode a model.
§.§.§ Encoded Length of the Code Table
Note that the simplest valid code table consists of only the singletons Ω. We refer to this code table as ST, the standard code table. We use ST to encode the non-singleton patterns 𝒫 of a code table CT.
The usage of a singleton e ∈ ST is simply its support in D, and hence its code length is L(code_p(e | ST)) = −log( supp(e | D) / ||D|| ).
To use these codes the recipient needs to know the supports of the singletons. We encode these using a data-to-model code—an index over a canonically ordered enumeration of all possibilities <cit.>; here this is the number of possible supports of |Ω| events over a database of length ||D||, for a cost of log binom(||D||, |Ω|).
Given the standard code table ST, we can now encode the patterns in the code table. We first encode the length |X| of the pattern, and then the number of choice spots in the pattern, ||X|| − |X|. We encode how many choices we have per location using a data-to-model code. Finally, we encode the events X[i] using the standard code table ST. That is,

L(X | ST) = L_ℕ(|X|) + L_ℕ(||X|| − |X| + 1) + log binom(||X|| − 1, |X| − 1) + ∑_{i = 1}^{||X||} L(code_p(X[i] | ST)) .

Note that if we do not consider choicisodes, we can simplify the above, as we then only need to transmit the first and last terms: the length of the pattern and its events.
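Assuming a choicisode X is represented as a list of choice-sets (one set of events per position; our own representation, chosen for illustration), L(X | ST) can be computed as in the following sketch. Log-binomials are evaluated via lgamma for numerical stability; L_st maps events to their standard-table code lengths, and L_N is the integer code from the earlier sketch:

    from math import lgamma, log

    def log2_binom(n, k):
        # log2 of the binomial coefficient (n choose k)
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(2)

    def pattern_length(X, L_st, L_N):
        n_pos = len(X)                            # |X|
        n_ev = sum(len(pos) for pos in X)         # ||X||
        L = L_N(n_pos) + L_N(n_ev - n_pos + 1)    # length, number of choices
        L += log2_binom(n_ev - 1, n_pos - 1)      # how the choices distribute
        L += sum(L_st[e] for pos in X for e in pos)
        return L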
Recall that pattern codes in the pattern stream are optimal prefix codes, so the usages of the non-singleton patterns need to be transmitted with the model. We do this again using a data-to-model code. We encode the sum of pattern usages, usage(𝒫) = ∑_{X ∈ CT ∖ Ω} usage(X), by the MDL optimal code for integers. It is equivalent to use one pattern code per choicisode and then identify the choice-events, or to use a separate pattern code for each instantiation of the choicisode; for simplicity we make the latter choice.
The total encoded size of a code table CT, given a cover C of database D, is then given by

L(CT | D, C) = L_ℕ(|Ω|) + log binom(||D||, |Ω|) + L_ℕ(|𝒫| + 1) + L_ℕ(usage(𝒫) + 1) + log binom(usage(𝒫), |𝒫|) + ∑_{X ∈ CT} L(X | ST) .
We are interested in the set of patterns and a corresponding cover C which minimizes the total encoded length of the code table and the database, that is,

L(CT, D) = L(CT | C) + L(D | CT) .
We can now formally define our problem as follows.
Minimal Code Table Problem. Let Ω be a set of events and let D be a sequence database over Ω. Find the minimal set of serial (choice) episodes 𝒫 such that for the optimal cover C of D using 𝒫 and Ω, the total encoded cost L(CT, D) is minimal, where CT is the code-optimal code table for C.
For a given database D, we would like to find its optimal pattern set in polynomial time. However, there are exponentially many possible pattern sets, and given a pattern set, there are exponentially many possible covers.
For neither problem does there exist trivial structure such as monotonicity or sub-modularity that would allow for an optimal polynomial-time solution.
Hence, we resort to heuristics. In particular, we split our problem into two parts. We first explain our greedy algorithm to find a good cover given a set of patterns. We describe how to find a set of good patterns in Sec. <ref>.
§ COVERING A DATABASE
Given a pattern set 𝒫 and database D, we are after a cover C, with interleaving and nesting, that minimises L(CT, D).
Each occurrence of a pattern X in database D, possibly with gaps, defines a window. We denote by S[a, b] a window in sequence S that extends from position a to position b. Two windows are non-overlapping if they do not have any events in common which belong to their respective patterns. Two interleaving or nesting windows might have common events which, as we do not allow overlap, become gap events for one of the two windows. Two windows are disjoint if they do not have any events in common. For every event in the database D there can be many windows with which we can choose to cover it, and the optimal cover depends upon the pattern, fill, and gap codes of the patterns. The number of choices grows exponentially with sequence length, with no trivial sub-structure.
To find good disjoint covers, Tatti & Vreeken <cit.> use an EM-style approach. At each step, until convergence, given the pattern, gap and fill codes, they use a dynamic-programming algorithm to find a cover. This algorithm takes a set of possibly overlapping minimal windows and returns a subset of disjoint minimal windows (i.e. a cover) which maximizes the sum of the gains (a heuristic measure) of the windows; then the lengths of the codes are reset based upon the found cover. It is unclear whether this scheme can be extended to efficiently return a cover with interleaved or nested windows. Moreover, if we extend our model with a new pattern, we would have to rerun it from scratch.
We propose an efficient and easily extendible heuristic for good covers with interleaved and nested windows.
§.§ Window Lengths
For a given pattern, as we consider windows with gaps, the length of a window in the database can be arbitrarily long. Tatti & Vreeken therefore consider only minimal windows. A window w = S[i, j] is a minimal window of a pattern X if w contains X but no proper sub-window of w contains X. If no interleaving or nesting is allowed, it is optimal to consider only minimal windows. Otherwise, it is easy to construct examples where the optimal cover consists of non-minimal windows.
Consider the sequence abdccdc and a code table with the patterns abc and dc plus the singletons a, b, c and d. Two possible covers are (ab d c) c dc, using only minimal windows, and (ab(dc)c) dc, where a non-minimal window of abc is used and is nested with a window of dc. It is easy to see that the second cover leads to a lower encoded length L(abdccdc, {abc, dc, a, b, c, d}) (see Fig. <ref>), by about 2.9 bits.
Ideally, we should consider all possible windows. The number of possible windows of a pattern, however, is quadratic in the length of the database. This means that even a search for all windows is computationally inefficient. Therefore, we first search for only the shortest window from each starting position in the database. We consider longer windows when necessary. We do so as follows.
§.§ Window Search
Given a pattern X, we use the pseudo-code presented as Algorithm <ref> to search for its windows in a sequence or sub-sequence S of database D. It returns W(X), a set of candidate windows of the pattern X, considering only the first window from each starting position in the sequence S. We later choose a subset of these windows (along with those of patterns other than X in CT) to create a cover of the database. To control the ratio of gaps and fills, we maintain a budget variable: the number of extra gaps allowed overall. Ideally we would like to have more fills than gaps, as this leads to better compression.
To search for windows efficiently, we use an inverted index: index⁻¹(x) gives us the list of positions of event x ∈ Ω in the database. We use a priority queue T to store potential windows sorted by length; shorter windows mean more fills than gaps. We initialize (lines 2–4) by creating a potential window at every position where the first event of pattern X occurs in S, and pushing these potential windows onto T. Each window w in T records its starting position in S, its length, and a pointer w_i to the event of pattern X that we are currently searching for in S. At every step (line 6) we look at the potential window at the top of the queue T, check the next event in the database, and increment the length of the window w. There are now two possibilities. i) (lines 8–11) The next database event equals the event of X pointed to by w_i; if with it we have found the full pattern X, we add this window to W. We can then update our budget if we used fewer gaps than allowed: using fewer gaps in one window allows us to use more gaps in another. ii) (lines 12–13) The next database event does not equal the event of X pointed to by w_i. This means the potential window w has one extra gap; we check whether this extra gap is allowed by our budget, and otherwise drop the window.
Input: sequence S, pattern X, and gap budget b
Output: set W of candidate windows for X
1: W ← ∅; T ← empty priority queue, ordered by window length
2: for each position p ∈ index⁻¹(X[1]) do
3:    w ← (start = p, length = 1, w_i = 1)
4:    push(w, T)
5: while T is not empty do
6:    w ← top(T); (start, length, w_i) ← w
7:    length ← length + 1
8:    if S[start + length] equals X[w_i] then
9:       w_i ← w_i + 1
10:      if w_i points past the end of X then
11:         append w to W; b ← b + 2|X| − length − 1
12:   else if b + 2|X| − length − 1 < 0 then
13:      delete w from T
Algorithm 1: Window search for the candidate windows of pattern X in sequence S, given gap budget b.
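In Python, a simplified version of this search may look as follows. It is a sketch under the same budget bookkeeping, scanning one seed position at a time rather than interleaving seeds via a priority queue as Algorithm 1 does:

    def window_search(S, X, budget):
        # scan forward from every occurrence of X[0], allowing gap events
        # as long as the shared gap budget permits
        windows, b = [], budget
        for p in (i for i, e in enumerate(S) if e == X[0]):
            i, gaps, q = 1, 0, p + 1
            while i < len(X) and q < len(S):
                if S[q] == X[i]:
                    i += 1
                else:
                    gaps += 1
                    if gaps > len(X) - 1 + b:   # budget exceeded
                        break
                q += 1
            if i == len(X):
                windows.append((p, q - 1))
                b += (len(X) - 1) - gaps        # unused allowance grows budget
        return windows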
Now that we can search for windows of patterns, we describe how to choose a subset which generates a good cover C of the database D.
§.§ Candidate Order
In the first step of our greedy strategy, we sort the set of patterns in a fixed order, similar to <cit.>. We call this order the Candidate Order. We cover the database using windows of patterns in this order. This order is designed to minimize the code length. This is achieved by putting longer and more frequently occurring patterns higher up in the candidate order. This means we can cover more events while minimizing the code length.
We consider the patterns X ∈ in the order,
* Decreasing ↓ in length | X |
* Decreasing ↓ in support support(X | D)
* Decreasing ↓ in length of encoding it with the standard table.
* Increasing ↑ lexicographically.
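With patterns represented as tuples of events, this order amounts to sorting on a composite key, e.g. (illustrative Python; support and L_st are assumed supplied by the caller):

    def candidate_order(patterns, support, L_st):
        # longer first, then more frequent, then costlier under ST,
        # then lexicographically
        return sorted(patterns,
                      key=lambda X: (-len(X), -support[X], -L_st(X), X))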
§.§ Greedy Cover
We now describe, as pseudo-code in Algorithm <ref>, the greedy algorithm we use to incrementally build a good cover. We consider patterns in the candidate order, and maintain a set W_s of selected windows. The algorithm takes this set of selected windows and extends it with a subset of the candidate windows W_c(X) of pattern X found by the window search, and possibly with (longer, interleaved) windows found on the fly. We assume that both W_s and W_c(X) are sorted.
We refer to a block of windows which are interleaved or nested with each other as a window extend. For ease of notation, we refer to windows which are not interleaved or nested also as window extends (containing a single window). We begin by dividing the set W_s into a set of window extends by a linear sweep (linear time, as W_s is sorted). For patterns at the top of the candidate order W_s is empty, so we can select all candidate windows W_c(X). For any other pattern, we iterate through the list of window extends (line 4). All windows of the pattern occurring between any two extends in W_s can potentially be chosen; these are put in O (line 5), a temporary list. It is possible that some windows of X in O overlap. We consider these windows in order of decreasing length (line 6) and discard any window that overlaps with a previously chosen one. We additionally search (on the fly) for interleaved windows occurring within the window extends (line 8).
For example, consider the sequence abcdacbd, which we want to cover with the patterns ac and bd. Using the window search we get two windows for each of the two patterns. If ac is higher up in the candidate order, we first select the two windows of ac: abcdacbd. We now have two window extends in W_s. We then search for windows of bd within the first window extend of ac, finding one interleaved window, abcdacbd, and we select the second window of bd as it lies between the two window extends of ac.
Input: set W_s of selected windows and set W_c(X) of candidate windows for pattern X
Output: the selected windows combined with those in W_c(X) that do not overlap with W_s
1: A ← ∅
2: E ← the window extends of W_s
3: last ← ∅
4: for each window extend w ∈ E do
5:    O ← all candidate windows of X between last and w
6:    for each v ∈ O, in order of decreasing length do
7:       if v does not overlap with A then append v to A
8:    A ← A ∪ { windows of X found on the fly inside w in D }
9:    last ← w
10: merge W_s and A
Algorithm 2: Greedily extending the set of selected windows with candidate windows of pattern X.
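The non-overlap selection of lines 6–7 can be sketched as follows, treating windows as (start, end) intervals. This is a simplification: in our setting, gap positions of one window may be covered by another, whereas the sketch treats windows as solid intervals:

    def select_nonoverlapping(candidates):
        kept = []
        # consider candidate windows in order of decreasing length
        for (s, e) in sorted(candidates, key=lambda w: w[0] - w[1]):
            if all(e < s2 or s > e2 for (s2, e2) in kept):
                kept.append((s, e))
        return kept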
Note that the greedy cover now takes time O(|W_s| + ||D|| + |W_c| log |W_c|) in the worst case, where |W_s| is the number of already selected windows and |W_c| the number of candidate windows. Let W_max(𝒫) be the maximum number of candidate windows of any pattern in 𝒫; then constructing a cover C of database D using the patterns 𝒫 in the code table takes time O(|𝒫| (W_max(𝒫) log W_max(𝒫) + ||D||)) in the worst case, where W_max(𝒫) is bounded by the size of the database, O(||D||). Importantly, this scheme makes it computationally efficient to extend the code table with a new pattern X: we can discard from the cover the windows of patterns in 𝒫 below X in the candidate order, and re-run the covering only for W_c(X) and those patterns. This means we do not have to recompute the cover from scratch, which is very efficient if X is near the bottom of the candidate order. As we shall see, our approach is very competitive in execution time compared to SQS <cit.>.
Having presented our greedy approach to covering a database given a set of patterns, we now turn our attention to mining good sets of patterns.
§ MINING GOOD CODE TABLES
Given a pattern set 𝒫, we now have a greedy algorithm to cover the database D and so obtain the encoded length of model and data, L(CT, D). To solve the Minimal Code Table Problem we want to find the set 𝒫 of patterns which minimizes this total encoded length. As discussed before, there does not seem to be any trivial sub-structure in the problem that we could exploit to obtain an optimal set of patterns 𝒫 in polynomial time, so we resort to heuristics, building upon and extending SQS <cit.>.
§.§ Generating Candidates
We build the pattern set 𝒫 incrementally. Given a set of patterns 𝒫 and a cover C, we aim to find a pattern X and an extension Y, with X, Y ∈ 𝒫 ∪ Ω, whose combination XY would decrease the encoded length L(𝒫 ∪ XY, D). We repeat this until we cannot find any XY that, when added to 𝒫, reduces the total encoded size. Doing this exactly, however, is computationally prohibitive: at every iteration there would be O((|𝒫| + |Ω|)²) possible candidates. Thus, we again resort to heuristics, and use the estimation algorithm from <cit.>, which finds good candidates, with a likely decrease in code length if added, in O(|𝒫| + |Ω| + ||D||) time.
For readability and succinctness, we describe this estimation algorithm in Appendix <ref>.
Candidates are accepted or rejected based on the compression gain. As we can now find richer covers with interleaving and nesting, candidates are potentially more likely to be accepted. However, we want to find a succinct set of patterns which describe the data well. Choicisodes can help in this search for a succinct summary of the data.
§.§ Choicisodes
Recall from Sec. <ref> that we can encode patterns as choicisodes. We have the possibility of combining a newly discovered non-singleton pattern with a previously discovered non-singleton pattern or choicisode to create or expand a choicisode. Combining non-singleton patterns into a single choicisode may hence lead to savings in the encoded length of the code table L(CT | C), while providing a more succinct representation of the pattern set.
We use a greedy strategy based on MDL for discovering choicisodes. For each newly discovered non-singleton pattern, we consider all previously discovered non-singleton patterns or choicisodes which differ from it at exactly one position. We then calculate the increase in code length (of the model) if we encode it as a choicisode together with each of these non-singleton patterns or choicisodes, as well as the increase in code length if we encode it independently, and choose whichever option leads to the minimum increase in code length.
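A sketch of this greedy choice (the helper names and the two delta-cost functions are ours, assumed supplied by the caller):

    def best_encoding(X, existing, cost_alone, cost_merged):
        # cost_alone(X): model-cost increase of adding X independently
        # cost_merged(X, Y): cost of folding X into (choice) pattern Y
        options = [(cost_alone(X), None)]
        for Y in existing:
            if len(Y) == len(X) and sum(a != b for a, b in zip(X, Y)) == 1:
                options.append((cost_merged(X, Y), Y))
        return min(options, key=lambda t: t[0])   # (cost, partner or None)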
Next we present our algorithm for mining a succinct and representative pattern set.
§.§ The SQUISH algorithm
The present the complete algorithm as pseudo-code in Algorithm <ref>. At each iteration, it considers each pattern X ∈. It creates potential extensions XY, with Y ∈, based on estimated change in the encoded length using (line 6). then considers each of these patterns in the order of the estimated decrease in gain if added to (line 7). is used to find the candidate windows of each of these extensions (line 9). is used to cover the data with this candidate pattern XY added to . We simultaneously consider the possibility of encoding XY as a choicisode. If XY leads to a decrease in the encoded length of the database D then, we add XY to . If XY is to be added, we
(see Appendix <ref>) the code table to remove redundant patterns. Consider, for example if we decide to add abcd, the pattern ab and cd may not be required to construct an effective cover of the database. We also consider the singletons occurring in the gaps of XY, by constructing new extended patterns by using these gap alphabets as intermediate alphabets.
Input: database D
Output: pattern set 𝒫 with low L(CT, D)
1: 𝒫 ← ∅
2: C ← cover of D using 𝒫 (Algorithm <ref>)
3: while CT changes do
4:    F ← ∅
5:    for each X ∈ CT do add the estimated extensions of X (Algorithm <ref>) to F
6:    for each Z ∈ F, ordered by estimated gain do
7:       sort 𝒫 ∪ {Z} in candidate order
8:       W_c(Z) ← window search(D, Z, budget)
9:       C ← cover of D using 𝒫 ∪ {Z}
10:      if L(D, 𝒫 ∪ {Z}) < L(D, 𝒫) then
11:         𝒫 ← prune(𝒫 ∪ {Z}, D)
12: return 𝒫
Algorithm 3: SQUISH(D).
§ RELATED WORK
Discovering sequential patterns is an active research topic. Traditionally there was a focus on mining frequent sequential patterns, with different definitions of how to count occurrences <cit.>. Mining general patterns, patterns where the order of events is specified by a DAG, is surprisingly hard: even testing whether a sequence contains a pattern is NP-complete <cit.>. Consequently, research has focused on mining subclasses of episodes, such as episodes with unique labels <cit.>, strict episodes <cit.>, and injective episodes <cit.>.
Traditional pattern mining typically results in overly many and highly redundant results. One approach to counter this is mining statistically significant patterns; computing the expected frequency of a sequential pattern under a null hypothesis is very complex, however <cit.>.
SQUISH builds upon and extends SQS <cit.>. Both draw inspiration from the Krimp <cit.> and Slim <cit.> algorithms. Krimp pioneered the use of MDL for mining good patterns from transaction databases. Encoding sequential data with serial episodes is much more complicated, and hence SQS uses a much more elaborate encoding scheme. Here, we extend it to discover richer structure in the data.
The Slim algorithm <cit.> mines a code table directly from the data, iteratively seeking to improve the current model by considering as candidates joins XY of patterns X, Y ∈ CT. Whereas Slim considers the full Cartesian product and ranks candidates on the basis of estimated gain, SQS and SQUISH take a batch-based approach.
Lam et al. introduced GoKrimp <cit.> for mining sets of serial episodes. As opposed to the MDL principle, they use fixed-length codes, and do not punish gaps within patterns. This means their goal is essentially to cover the sequence with as few patterns as possible, which is different from our goal of finding patterns that succinctly summarize the data.
Recently, Fowkes and Sutton proposed the ISM algorithm <cit.>. ISM is based on a generative probabilistic model of the sequence database, and uses EM to search for the set of patterns that is most likely to generate the database; it does not explicitly consider model complexity. Like SQUISH, ISM can handle interleaving and nesting of sequences. We empirically compare to ISM in the experiments.
§ EXPERIMENTS
Next we empirically evaluate SQUISH on synthetic and real-world data. We compare against SQS <cit.> and ISM <cit.>.
All algorithms were implemented in C++. We provide the code for research purposes.
We evaluate quantitatively on the basis of achieved compression, pattern recall, and execution time. Specifically, we consider the compression gain ΔL = L(D, ST) − L(D, CT), that is, the gain in compression using the discovered patterns versus using the singleton-only code table. Higher scores are better.
All experiments were executed single threaded on quad-core Intel Xeon machines with 32GB of memory, running Linux.
§.§ Databases
We consider four synthetic, and five real databases. We give their base statistics in Table <ref>.
Indep, Plant-10, and Plant-50 are synthetic datasets consisting of a single sequence of 10 000 events over an alphabet of 1000 events. For Indep, all events are independent. For Plant-10 and Plant-50 we plant resp. 10 and 50 patterns of 5 events long, 10 times each, over an otherwise independent sequence, with a 10% probability of a gap between consecutive events. To evaluate the ability of SQUISH to discover interleaved and nested patterns, we consider the Parallel database <cit.>. Each event in this database is generated by one of five independent parallel processes chosen at random; each process i generates the events {a_i, b_i, c_i, d_i, e_i} in sequence.
We further consider five real data sets. Gazelle is click-stream data from an e-commerce website <cit.>. The Sign database is a list of American sign language utterances <cit.>. To allow for interpretability we also consider text data. Here the events are the (stemmed) words in the text, with stop words removed. Addresses contains speeches of American presidents. JMLR contains abstracts from the Journal of Machine Learning research, and Moby is the famous novel Moby Dick by Herman Melville.
§.§.§ Synthetic Data
As a sanity check we first compare SQUISH to SQS, considering only serial episodes and not allowing interleaving or nesting. We find that in this setting SQUISH performs on par with SQS in terms of recovering non-interleaving patterns from synthetic data: like SQS, it correctly discovers no patterns from Indep, it recovers all patterns from Plant-10, and it recovers 45 patterns exactly from Plant-50 plus fragments of the remaining 5, but does so approximately ten times faster than SQS.
To investigate how well SQUISH retrieves interleaving patterns, we consider the Parallel dataset, and compare to ISM. (We also considered GoKrimp, but found it did not finish within a day.) To make the comparison fair, we restrict ourselves again to serial episodes, but now do allow for interleaving and nesting. We measure success in terms of pattern recall: given a set of patterns 𝒫 and a set of target patterns 𝒯, we consider the set 𝒯 as the data and cover it with 𝒫 (not allowing for gaps). The pattern recall is the ratio of the total number of covered events in 𝒯 to the maximum of the total number of events in 𝒯 or 𝒫.
We give the results in Fig. <ref>. We find that SQUISH obtains much higher recall scores than ISM. Inspecting the results, we see that SQUISH discovers large fragments of each pattern, whereas ISM retrieves only eight small patterns, most of length 2, and hence does not reconstruct the generating set of patterns well.
§.§.§ Real data
Next we evaluate SQUISH on real data. We compare to SQS in terms of the number of patterns, achieved compression, and runtime.
We consider three different configurations, 1) disjoint covers of only serial episodes, 2) allowing interleaving and nesting of serial episodes, and 3) allowing interleaving and nesting of serial episodes and choicisodes.
We give the results in Table <ref>.
First of all, the timing columns show that in all setups SQUISH needs only a fraction of the time—up to three orders of magnitude less—to discover a model that is at least as good as what SQS returns. To fully converge, SQUISH and SQS take roughly the same amount of time in the disjoint setting, as well as when we do allow interleaving.
However, when converged, SQUISH discovers models with much better compression rates, i.e. with much higher ΔL, than SQS does. SQUISH is also significantly faster than SQS, taking only 87 minutes instead of 259 on the JMLR database, and requiring on Gazelle only 96 instead of 680 minutes.
SQUISH performs best when we consider our richest description language, allowing both interleaving and choicisodes, discovering much more succinct models that obtain much better scores than if we restrict ourselves. For example, for Gazelle, with choicisodes enabled SQUISH needs only 605 instead of 901 patterns to achieve a ΔL of 165.7k instead of 161.6k. Overall, we observe that many choicisodes form semantically coherent groups. We present a number of exemplar choicisode patterns in Table <ref>. Interesting examples include: data-set and training-set from JMLR, god-bless and god-help from Addresses, cape-horn and cape-cod from Moby.
Last, but not least, we report the convergence of L(CT, D), the total encoded length, over time for both SQUISH and SQS in Fig. <ref>. Both algorithms estimate batches of candidates, and test them one by one. We see that the initial candidates are highly effective at increasing the compression gain, while candidates generated in later iterations lead to only little further gain. This makes it possible to execute SQUISH on a time budget, as an any-time algorithm.
§ CONCLUSION
We considered summarising event sequences. Specifically, we aimed to discover sets of patterns that capture rich structure in the data, considering interleaved, nested, and partial pattern occurrences. We proposed an efficient window search for pattern occurrences, a greedy algorithm for efficiently covering the data, and the SQUISH algorithm for mining good models. Experiments show that SQUISH works well in practice, outperforming the state of the art by a wide margin in terms of scores and speed, while discovering pattern sets that are both more succinct and easier to interpret.
As future work we are considering parallel episodes: patterns where certain events are unordered, e.g. a{b, c}d <cit.>.
Discovering such structure presents significant computational challenges and requires novel scores and algorithms.
§ ACKNOWLEDGEMENTS
Apratim Bhattacharyya and Jilles Vreeken are supported by the Cluster of Excellence “Multimodal Computing and Interaction” within the Excellence Initiative of the German Federal Government.
10
achar2012discovering
A. Achar, S. Laxman, R. Viswanathan, and P. Sastry.
Discovering injective episodes with general partial orders.
Data Min. Knowl. Disc., 25(1):67–108, 2012.
agrawal:94:fast
R. Agrawal and R. Srikant.
Fast algorithms for mining association rules.
In VLDB, pages 487–499, 1994.
bertens:16:ditto
R. Bertens, J. Vreeken, and A. Siebes.
Keeping it short and simple: Summarising complex event sequences with
multivariate patterns.
In KDD, pages 735–744, 2016.
fowkes:16:ism
J. Fowkes and C. Sutton.
A subsequence interleaving model for sequential pattern mining.
In KDD, 2016.
grunwald:07:book
P. Grünwald.
The Minimum Description Length Principle.
MIT Press, 2007.
kohavi:00:kddcup
R. Kohavi, C. Brodley, B. Frasca, L. Mason, and Z. Zheng.
KDD-Cup 2000 organizers' report: Peeling the onion.
SIGKDD Explor., 2(2):86–98, 2000.
http://www.ecn.purdue.edu/KDDCUP.
lam:12:gokrimp
H. T. Lam, F. Mörchen, D. Fradkin, and T. Calders.
Mining compressing sequential patterns.
In SDM, 2012.
laxman2007fast
S. Laxman, P. Sastry, and K. Unnikrishnan.
A fast algorithm for finding frequent episodes in event streams.
In KDD, pages 410–419. ACM, 2007.
vitanyi:93:book
M. Li and P. Vitányi.
An Introduction to Kolmogorov Complexity and its Applications.
Springer, 1993.
mannila:97:discovery
H. Mannila, H. Toivonen, and A. I. Verkamo.
Discovery of frequent episodes in event sequences.
Data Min. Knowl. Disc., 1(3):259–289, 1997.
papapetrou2005discovering
P. Papapetrou, G. Kollios, S. Sclaroff, and D. Gunopulos.
Discovering frequent arrangements of temporal intervals.
In ICDM, pages 354–361. IEEE, 2005.
pei2006discovering
J. Pei, H. Wang, J. Liu, K. Wang, J. Wang, and P. S. Yu.
Discovering frequent closed partial orders from strings.
IEEE TKDE, 18(11):1467–1481, 2006.
petitjean:16:skopus
F. Petitjean, T. Li, N. Tatti, and G. I. Webb.
Skopus: Mining top-k sequential patterns under leverage.
Data Min. Knowl. Disc., 30(5):1086–1111, 2016.
rissanen:78:mdl
J. Rissanen.
Modeling by shortest data description.
Automatica, 14(1):465–471, 1978.
rissanen:83:integers
J. Rissanen.
A universal prior for integers and estimation by minimum description
length.
Annals Stat., 11(2):416–431, 1983.
smets:12:slim
K. Smets and J. Vreeken.
Slim: Directly mining descriptive patterns.
In SDM, pages 236–247. SIAM, 2012.
tatti:15:epirank
N. Tatti.
Ranking episodes using a partition model.
Data Min. Knowl. Disc., 29(5):1312–1342, 2015.
tatti:11:multievent
N. Tatti and B. Cule.
Mining closed episodes with simultaneous events.
In KDD, pages 1172–1180, 2011.
tatti:12:clsepi
N. Tatti and B. Cule.
Mining closed strict episodes.
Data Min. Knowl. Disc., 25(1):34–66, 2012.
tatti:12:sqs
N. Tatti and J. Vreeken.
The long and the short of it: Summarizing event sequences with serial
episodes.
In KDD, pages 462–470. ACM, 2012.
vereshchagin:03:kolmo
N. Vereshchagin and P. Vitanyi.
Kolmogorov's structure functions and model selection.
IEEE TIT, 50(12):3265–3290, 2004.
vreeken:11:krimp
J. Vreeken, M. van Leeuwen, and A. Siebes.
Krimp: Mining itemsets that compress.
Data Min. Knowl. Disc., 23(1):169–214, 2011.
wang:04:bide
J. Wang and J. Han.
Bide: Efficient mining of frequent closed sequences.
ICDE, 0:79, 2004.
§ APPENDIX
§.§ Estimating Candidates
Here we describe our heuristic strategy for finding new candidates of the form XY as in Sec. <ref>.
First, we need two crucial observations.
Constant Time Difference Estimation. Given a database D and a cover C, let P and Q be two patterns, and let V = {v_1, ..., v_N} and W = {w_1, ..., w_N} be two sets of windows for P and Q, respectively, all occurring in C. Each pair of windows v_i and w_i occurs in the same sequence k_i; given the start and end positions of the patterns in sequence k_i, we write them as v_i = (a_i, b_i, P, k_i) and w_i = (c_i, d_i, Q, k_i). Let U be the set of windows produced by combining them, U = {(a_1, d_1, R, k_1), ..., (a_N, d_N, R, k_N)}, with R the combined pattern. Let the windows in U be disjoint, and disjoint from the windows in C ∖ (V ∪ W). Then the difference L(D, C ∪ U ∖ (V ∪ W)) − L(D, C) depends only on N, gaps(V), gaps(W), and gaps(U), and can be computed in constant time from these values.
Shorter Windows in Optimal Cover. Given a database D and a cover C, let v = (i, j, X, k) ∈ C. Assume that there exists a window S[a, b] containing X such that w = (a, b, X, l) does not overlap with any window in C and b − a < j − i. Then C is not an optimal cover.
We refer the reader to <cit.> for detailed proofs.
Input: database D, cover C, and pattern X
Output: pattern XY with low L(D, CT ∪ XY)
1: for each Y ∈ CT do V_Y ← ∅; W_Y ← ∅; U_Y ← ∅; d_Y ← 0
2: T ← ∅
3: for each window v of X in cover C do
4:    (a, b, X, k) ← v
5:    d ← end index of the window following v in C
6:    t ← (v, d, 0); l(t) ← d − a
7:    add t into T
8: while T is not empty do
9:    t ← arg min_{u ∈ T} l(u)
10:   (v, d, s) ← t; a ← first index of v
11:   w = (c, d, Y, k) ← the active window of Y ending at d
12:   if Y = X and the event at a or at d is marked then
13:      delete t from T and continue
14:   if S_k[a, d] is a minimal window of XY then
15:      add v into V_Y; add w into W_Y; add (a, d, XY, k) into U_Y
16:      d_Y ← min( diff(V_Y, W_Y, U_Y) + s, d_Y )
17:   if |Y| > 1 then s ← s + gain(w)
18:   if Y = X then
19:      mark the events at a and d; delete t from T
20:   else if w is the last window in the sequence then
21:      delete t from T
22:   else
23:      d ← end index of the active window w′ following w
24:      update t to (v, d, s) and l(t) to d − a
Algorithm 4: Estimating, given (X, C, D), an extension XY with low L(D, CT ∪ XY).
We present our heuristic procedure as pseudo-code in Algorithm <ref>. In this algorithm, given a pattern X and a cover C, for each possible extension Y we enumerate the windows of XY from the shortest to the longest. These windows are constructed by combining two windows in the cover C. We maintain the sets V_Y, W_Y and U_Y (line 1), containing windows of X, windows of Y (to be combined together), and new windows of XY (resulting from the combination), respectively. We do this for every possible extension Y in the code table. At each step we compute the difference in code length were we to use these windows instead, and maintain d_Y to store this difference; by the Constant Time Difference Estimation observation, this can be done in constant time. We prefer patterns XY which occur frequently, with more fills than other meta stream codes, and thus want to find shorter windows of XY first: such a set of windows U could potentially lead to an estimated decrease in code length. Therefore, both to ensure that we find shorter windows first and for efficiency, we search for all windows (all possible Y) simultaneously using a priority queue T, and look only at windows in the cover C. For each window of X in the cover C, we look at the windows after it to construct windows of XY (with Y the pattern of the window following the window of X). We initialize the priority queue T with these windows (lines 3–7), sorted by length. At each step of the candidate generation algorithm, we retrieve one such window of XY from the priority queue T (line 9), add it to our list U_Y of windows of XY, and estimate the change in code length (line 16). As we do not allow overlaps, we need to ensure that the windows in U_Y are non-overlapping. If a window of XY overlaps with any other window in C, we cannot use both of these windows at the same time; we take this into account by subtracting the gain(w) of each window w overlapping with the window of XY (line 17) <cit.>. The gain(w) of a window w is an upper bound on the bits gained by encoding the events in the database with this window versus encoding them as singletons. We define the gain as in <cit.> for a window w = S_k[i, j] of a pattern X,
gain(w) = −L(code_p(X)) − (j − i + 1 − |X|) L(code_g(X)) − (|X| − 1) L(code_f(X)) + ∑_{x ∈ X} L(code_p(x)) .
Overlapping could also happen if Y = X, so we simply check whether adjacent scans have already used these two instances of X for creating a window of the pattern XX (line 12). We then extend our search by looking at the window following the currently considered window of Y in the cover C (line 23). As we allow interleaving and nesting in our covers, we also look at possible windows of Y occurring inside or interleaved with windows of other patterns; that is, we look at singletons inside gaps of windows. For each window of X in the cover C, we look at all windows following it, until we reach the next window of X or the end of the cover.
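In code, the gain is a direct transcription of the equation above (a sketch; L_p, L_g, L_f are assumed to map patterns, including singletons, resp. to their pattern, gap, and fill code lengths):

    def gain(X, i, j, L_p, L_g, L_f):
        # upper bound on bits saved by the window S_k[i, j] of pattern X
        gaps = (j - i + 1) - len(X)
        return (-L_p[X] - gaps * L_g[X] - (len(X) - 1) * L_f[X]
                + sum(L_p[x] for x in X))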
§.§ Pruning the Code Table
Here, we present the algorithm we use to prune the code table , used at line 12 of as pseudo-code in Algorithm <ref>.
Input: pattern set 𝒫, database D
Output: pruned pattern set 𝒫
1: for each X ∈ 𝒫 do
2:    CT ← the code table corresponding to the cover of D using 𝒫
3:    CT′ ← the code table obtained from CT by deleting X
4:    g ← ∑_{w = (i, j, X, k) ∈ C} gain(w)
5:    if g < L(CT) − L(CT′) then
6:       if L(D, 𝒫 ∖ X) < L(D, 𝒫) then
7:          𝒫 ← 𝒫 ∖ X
8: return 𝒫
Algorithm 5: Pruning the pattern set 𝒫 given database D.
|
http://arxiv.org/abs/1701.07904v2 | 20170126235413 | Composite Dislocations in Smectic Liquid Crystals | [
"Hillel Aharoni",
"Thomas Machon",
"Randall D. Kamien"
] | cond-mat.soft | [
"cond-mat.soft"
] |
Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, Pennsylvania 19104, USA
Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, Pennsylvania 19104, USA
kamien@upenn.edu
Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, Pennsylvania 19104, USA
Smectic liquid crystals are characterized by layers that have a preferred uniform spacing and vanishing curvature in their ground state. Dislocations in smectics play an important role in phase nucleation, layer reorientation, and dynamics.
Typically modeled as possessing one line singularity, the layer structure of a dislocation
leads to a diverging compression strain as one approaches the defect center, suggesting a large, elastically determined melted core. However, it has been observed that for large charge dislocations, the defect breaks up into two disclinations [C. E. Williams, Philos. Mag. 32, 313 (1975)]. Here we investigate the topology of the composite core. Because the smectic cannot twist, transformations between different disclination geometries are highly constrained.
We demonstrate the geometric route between them and show that despite enjoying precisely the topological rules of the three-dimensional nematic, the additional structure of line disclinations in three-dimensional smectics localizes transitions to higher-order point singularities.
Composite Dislocations in Smectic Liquid Crystals
Hillel Aharoni, Thomas Machon, and Randall D. Kamien
Dislocations are, by their nature, not only topological but geometrical: by definition, they only occur in systems with broken translational order and therefore they must induce strain in the crystal or liquid crystal that hosts them <cit.>. These strains can grow quite large and often require a cutoff at the core to keep the energy finite. In exchange, the core melts into a higher-symmetry phase, bringing with it the higher energy of the uncondensed condensate. Screw dislocations are especially troublesome because of a geometric consequence of their topology. Namely, the helicoidal layer structure that makes up the screw dislocation is not measured at its core <cit.>, that is, all the layers come together on the centerline. It follows that the compression energy must diverge there <cit.>. The symmetry of the smectic phase allows the core regions of a dislocation to be replaced by disclination pairs, for both edge and screw <cit.> dislocations.
Recall that line defects in nematics are characterized only by a ℤ_2 = π_1(ℝP²) charge; however, when the director lies in the plane perpendicular to the defect line we can assign a geometric winding charge.
In this paper we discuss this phenomenon, and elucidate the topology that allows an edge dislocation to become a screw dislocation through the conversion of a disclination with a −1/2 geometry into one with a +1/2 geometry.
Before considering composite cores, we first compare the energetic situation in smectics with the theory of superconductors. Though the harmonic theory of smectics matches the London theory of superconductors <cit.> and the Landau theories are strikingly similar <cit.>, the nonlinear elasticity of the smectic, required by rotational invariance <cit.> captures both the geometry and the diverging energy density of a screw defect. We locate the smectic layers as level sets of a three-dimensional phase field ϕ( x). This is locally the phase of a complex scalar order parameter, ψ=⟨exp{iqϕ}⟩ where 2π/q is the equilibrium smectic spacing. The elastic free energy is
F = (B/8) ∫ d³x { [(∇ϕ)² − 1]² + 16 λ² H² } ,
where B is the bulk modulus, λ is the bend penetration depth, H=1/2∇·(∇ϕ/|∇ϕ|) is the mean curvature of the level sets, and we have set q=1 for simplicity.
There is a compact three-dimensional set of ground states, ϕ = n·x + ϕ_0, parameterized by a unit vector n ∈ S² and a scalar global phase ϕ_0 ∈ S¹. Note that the ground state manifold is further reduced by the discrete nematic symmetry n → −n, to form a twisted circle bundle over ℝP². Typically, one expands around the ground state, writing ϕ = n_0·x − u in terms of the Eulerian displacement u, to find the harmonic theory <cit.>

F_harm = (B/2) ∫ d³x { (n_0·∇u)² + λ² (∇²u)² } .
It is remarkable that a screw dislocation with Burgers scalar <cit.> b, ϕ_screw = z − (b/2π) tan⁻¹(y/x), an extremal of both the full and harmonic free energy functionals, has vanishing energy density in the harmonic theory but a diverging energy density in the rotationally-invariant theory, scaling as Bb⁴/r⁴ with r² = x² + y². The linear elasticity theory is a poor starting point for an energetic description. For edge dislocations the situation is almost opposite: the linear and nonlinear theories give different layer structures but the same energy <cit.>.
However, symmetry offers a way out: the ∇ϕ→ -∇ϕ symmetry of the smectic phase means that ϕ lives in the quotient space
S^1/ℤ_2≅{ℝ: ϕ∼ϕ + 2π, ϕ∼ -ϕ} <cit.>.
It is important to note that this space is not ℝP^1 where ϕ and ϕ+π are identified and which is not simply connected – the action of the smectic symmetry on ϕ is not free, it has two fixed points: 0 and π – the layers and the “half layers”. The level sets that correspond to layers or half-layers must be at these fixed points for a single valued density field ρ∝cosϕ, so that disclinations must lie on density minima or maxima. This condition breaks the continuous symmetry, ϕ→ϕ + constant, and generates a Peierls-Nabarro barrier to dislocation glide <cit.>. This extra structure allows dislocation cores to split into disclinations: an initial phase singularity of 2π is equivalent to a phase change from 0 to π followed by the reverse change from π to 0 since the sign of ∇ϕ changes at the fixed points. This process removes the phase singularity but preserves the phase winding that signifies the dislocation.
This fact allows the system to replace the high energy cores of standard dislocations. Such composite dislocations can be described in terms of an almost-equally-spaced structure, as described by Kléman and co-workers <cit.>. The topology of a screw dislocation requires the solution ϕ_screw at large distances, a solution with vanishing mean curvature H; however, one can replace the divergent-compression core with equally spaced layers (Fig. <ref>). Such a core is built with layers specified as the normal evolution of a central helicoid (discussed in detail below). These layers are equally spaced, but are not minimal surfaces, and there are two curvature singularities created at a radius equal to the reduced Burgers scalar, ƛ = b/2π, which form a double helix. These singularities are the location of the disclinations, and indeed these double helices were observed in screw dislocations with large (giant) Burgers scalar <cit.>.
Outside the core, we attach helicoids to the helices which bound the developed layers. Each of these helices serves as seed for the helicoidal layer outside the core. We will return to details of this construction in the following. The Burgers scalar of such a split dislocation is again determined by the number of layers between the disclinations (Fig. <ref>).
This splitting into disclination pairs reveals an essential difference between edge and screw dislocations that is the central issue of this paper. Edge dislocations split into +1/2/−1/2 disclination pairs <cit.>, while screw dislocations break into a pair of +1/2 disclinations – how is the topological charge of the disclinations preserved?
In Fig. <ref> we illustrate a b=4 composite screw bending over to become a composite edge dislocation. Below the transition layers, the bottom layer structure indicates the topology of the composite screw. The transition to the edge dislocation at the top of Fig. <ref> preserves this separation of disclinations. However, because the edge dislocation is made of a +1/2/−1/2 disclination pair and the composite screw is made of two +1/2 disclinations, the transition requires the conversion of a +1/2 disclination into a −1/2. As we show in Fig. <ref>, it is possible to turn a +1/2 disclination into a −1/2. While this geometry reflects the more familiar fact that in a three-dimensional nematic there is only one kind of line defect locally (π_1(ℝP²) = ℤ₂), the existence of smectic layers implies additional structure.
The layer normal of a surface, n, must satisfy the Frobenius integrability condition n·(∇× n) = 0, so there can be no twist in a smectic. In addition, smectics must satisfy a geometric `measured' condition that follows from a finite layer thickness. As we show below, these restrictions imply that the transition from +1/2 to −1/2 must occur at an identifiable point, a `monopole' sitting on the disclination line. At a generic point p along a smectic disclination one can associate an integer, m_p, that counts the number of layers attached to the disclination at that point. This is equal to 1 for a +1/2 disclination, and 3 for a −1/2 disclination. Generally, the local winding number of the disclination at p is given by 1 − m_p/2. Because m_p ≥ 0 by construction, a smectic disclination can have a maximum geometric winding of +1, a consequence of Poénaru's result concerning the measured condition of smectics <cit.>.
Since m_p is an integer-valued function along the disclination line, it can only change discretely at specific points. These are the monopoles, and they are unique to smectics. No such structure can be defined for a nematic liquid crystal, where a +q profile may be smoothly deformed to a −q profile.
in a nematic such a configuration is not topologically protected from smearing to a smooth transition with twist (consider the standard transition through a twist disclination). This twist
prohibits the definition of a phase field and consequently the integer-valued invariant m_p is not defined in a nematic.
So how does the transition occur in a smectic? In the top row of Fig. <ref> we consider such a transition made by cutting open a toric focal conic domain. In this case the natural π/2 turn abets the transition from screw to edge but also demonstrates a key feature of the transition: it occurs at a point. Disclination lines in a smectic are where smectic layers intersect along a line (or end along a line, as in the +1/2 and +1 geometries) so, in order to make a transition, a new layer must emerge from the line. A whole layer adds two leaves, and so the geometric winding would change by −1, or, conversely, increase by 1 on removal. Note that a half-layer (density minimum) joining would amount to two disclination lines joining – a different beast altogether. Thus, in order to make the geometric transitions it is necessary to go through a more singular point defect, a critical point of the smectic layers that is also a point defect or monopole.
This minimalistic layer description in terms of whole sheets and sheets ending on disclination lines is equivalent to a full phase field model, as we show rigorously elsewhere <cit.>.
More generally, it is not necessary to use a focal conic domain segment to achieve the transition, as illustrated in the bottom images of Fig. <ref>. In order to convert a charge-m disclination to a charge-m' one, 2(m-m') layers need to be added in the cross-section perpendicular to the disclination line. Generically, disclination lines do not intersect each other, and so the layers must appear in pairs by forming a three-dimensional half-conical structure. It follows that m-m' is necessarily integer, and nematic order can be maintained. For every integer there are monopole structures carrying that charge. The complete set of rules for the topologically allowed moves is the subject of other work <cit.>.
With this discussion in mind, we return to the transition from a screw to an edge dislocation. It is fortunate that the transition can be achieved with a π/2 turn: an edge dislocation (when the defect line is perpendicular to the displacement) and a screw dislocation (when the defect line is parallel to the displacement) meet as in Fig. <ref>. We see that one disclination in the screw keeps its charge and becomes the companion for the newly created −1/2. The remaining +1/2 charge pairs up with a −1/2 generated through a pinch, or pincement, of the smectic, as illustrated in the topmost layer in Fig. <ref>. The topmost full layer merges with the +1/2 disclination and, at that point, the pincement is created.
Experimental observations <cit.> demonstrate the existence of composite screw dislocations. We determine the energetic favorability of the split-core screw dislocation, expanding upon the geometric description by Kléman and coworkers <cit.>. The topology of a screw dislocation requires ϕ_screw at large distances, a solution with vanishing mean curvature H. We replace the core with equally-spaced layers (Fig. <ref>) built with level sets of ϕ specified as the normal evolution of a central helicoid by a distance ℓ:
X_ℓ(r,θ) = [r cos θ, r sin θ, ƛθ] + ℓ N, where ƛ ≡ b/(2π) and N is the normal of the central helicoid, N(r,θ) = γ [sin θ, −cos θ, r/ƛ],
with γ = [1 + (r/ƛ)²]^{−1/2} normalizing N. On the central helicoid ℓ = 0, the mean curvature vanishes identically and the Gaussian curvature is K(r,θ) = −γ⁴/ƛ², so that the two principal curvatures are κ_± = ±γ²/ƛ. The largest value of |κ| is 1/ƛ and measures the inverse distance to the first curvature singularity generated by the normal evolution of the helicoid. Except for the central helicoid, the core layers are not minimal surfaces, i.e. H ≠ 0; however, the compression in the entire core vanishes by construction.
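These geometric facts follow from a short computation. With X(r,θ) = (r cos θ, r sin θ, ƛθ) we have X_r = (cos θ, sin θ, 0) and X_θ = (−r sin θ, r cos θ, ƛ), so X_r × X_θ = (ƛ sin θ, −ƛ cos θ, r), recovering N above after normalization. The fundamental forms are E = 1, F = 0, G = r² + ƛ², and e = 0, f = −γ, g = 0, whence
H = (eG − 2fF + gE)/[2(EG − F²)] = 0 and K = (eg − f²)/(EG − F²) = −γ²/(r² + ƛ²) = −γ⁴/ƛ² ,
using γ² = ƛ²/(r² + ƛ²).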
It is amusing to note that this dichotomy of vanishing compression in the core and vanishing curvature in the exterior is reminiscent of the structure of e.g. an Abrikosov flux line <cit.>: at large distances we have a superconducting phase with vanishing magnetic field, while in the core we have normal metal with penetrating flux. In the flux line case this is a balance between two linear terms in the London theory. In our case, the smectic free energy is nonlinear in ϕ but harmonic in the compression strain u_zz=(∇ϕ)^2-1 and the mean curvature H. This same unexpected balance between nonlinear strains was first pointed out by Brener and Marchenko in smectic edge dislocations <cit.>.
It is straightforward to calculate the curvature energy of this core structure and we find F_core = 4.66 Bλ². The core energy is independent of the reduced Burgers scalar ƛ since (<ref>) reduces to the conformally invariant Willmore energy in the case of equally spaced layers.
This adds to the energy of the exterior region, which has only compression energy since it has zero mean curvature. Substituting the expression for ϕ_screw into (<ref>), this energy is found to be F_shell = (π/8) B ƛ².
There are two things that we must check at the interface between the core and the shell: 1) whether the layers match at the cylinder of radius ƛ, and 2) whether the layers match up smoothly or there is a mismatch between layer normals where the core meets the shell. To fit onto the shell layers, the inner surfaces must intersect the circle of radius ƛ at height z = 0 at equally-spaced angles, so that the pushoff at distance ℓ from the central helicoid intersects the circle at an angle πℓ̃/2, with ℓ̃ ≡ ℓ/ƛ. However, it is straightforward but tedious to check that the intersection is in fact at an angle

α(ℓ̃) = ℓ̃ √( k/(1 + k) ) + tan⁻¹( ℓ̃ / √(k + k²) ) ,

where k² = 1 − ℓ̃². The difference between α(ℓ̃) and πℓ̃/2 is greatest at |ℓ̃| ≈ 0.73, where the two differ by 8%. Thus the core cannot have vanishing compression and also attach continuously to the other leaves, and the double helicoidal structure proposed in <cit.> requires this small tweak.
The additional compression energy can be estimated by changing the spacing of the pushoffs in the core so that the distance of the ℓ-th layer from the central helicoid is α⁻¹(πℓ̃/2) rather than ℓ. With this adjustment, the core spacing now reads |∇ϕ| = d[α⁻¹(πℓ̃/2)]/dℓ̃, with compression energy ∼ 0.015 Bƛ², a nonzero but small correction, quadratic in ƛ, to the compression in the outer region.
The bending energy stored in the mismatch between the layer normals carries a delta-function of mean curvature, which will not scale with ƛ due to the conformal invariance of the Willmore energy. Computing the angle deficit β_N(ℓ̃) = arccos(N_c · N_s), where N_c and N_s are the core and shell layer normals respectively, allows us to estimate the total bending energy in the mismatch between layer normals as [β_N(ℓ̃)]² along the core boundary, giving ∼ 0.79 Bλ², a small correction to the bending energy of the core.
Even with the modification of the core structure, we find that the energy scales as F_ composite∼ B b^2 +C, as argued in <cit.>. For large b this will be smaller than the energy of the traditional, microscopic-core dislocation F_ standard∼ Bb^4/ξ^2+χξ^2, where ξ is the core radius and χ is the smectic condensation energy density. We can also compare F_ composite to the energy of a screw dislocation with an elastically melted nematic core <cit.>; minimizing the energy over ξ gives ξ∝ b, leading to the scaling F_ standard∼√(Bχ)b^2. Deep in the smectic phase, χ>B, and therefore for large enough b the composite screw will have lower energy than the standard melted-core screw.
We have described the structure of composite-core dislocations composed of two disclination lines, and their topology. In particular, line disclinations in three-dimensional smectics carry a ℤ₂ topological charge, exactly as in three-dimensional nematics. Unlike nematics, however, "escape into the third dimension" is not allowed in smectics, and so the homotopy between different winding geometries occurs via higher-order monopoles. The work presented here is valid only for smectic-A textures, where the layers have no additional structure; smectic-C textures require the additional matching of the c-director around the defect, which we do not consider. In future work we will complete the classification of defects in smectics by studying the variety of allowable point defects <cit.>. More generally, the connection between disclinations and dislocations remains an open issue in translationally ordered systems.
It is our pleasure to acknowledge penetrating discussions with M. Kléman, O.D. Lavrentovich, and J.-F. Sadoc. This work was supported through NSF Grant DMR1262047 and by a Simons Investigator grant from the Simons Foundation to R.D.K.
klemanbook M. Kléman, Points, Lines and Walls: In Liquid Crystals, Magnetic Systems and Various Ordered Media, (John Wiley & Sons, New York, 1983).
mermin N.D. Mermin, Rev. Mod. Phys. 51, 591-648 (1979).
ack B.G. Chen, G.P. Alexander, and R.D. Kamien, Proc. Natl. Acad. Sci. 106, 15577 (2009).
poenaru V. Poénaru, Commun. Math. Phys. 80, 127 (1981).
msk E.A. Matsumoto, C.D. Santangelo, R.D. Kamien, Interface Focus 2 617 (2012).
kl R.D. Kamien and T.C. Lubensky, Phys. Rev. Lett. 82, 2892 (1999).
meyer10 C. Meyer, Y. Nastishin and M. Kléman, Phys. Rev. E 82, 031704 (2010).
achard05 M.F. Achard, M. Kléman, Y. A. Nastishin, and H. T. Nguyen, Eur. Phys. J. E 16, 37 (2005).
klbook M. Kléman and O.D. Lavrentovitch, Soft Matter Physics: An Introduction, (Springer-Verlag, New York, 2003).
williams75 C. E. Williams, Philos. Mag. 32, 313 (1975).
LL L.D. Landau, E.M. Lifshitz, ,A. M. Kosevich, and L. P. Pitaevskiĭ , Theory of Elasticity (Third Edition) (Elsevier-Butterworth Heinemann, Oxford, 1986).
London F. London and H. London, Proc. Roy. Soc. A 149, 71 (1935).
dgsmectic P.-G. de Gennes, Solid State Commun. 10, 753-756 (1972).
halseynelson T.C. Halsey and D.R. Nelson, Phys. Rev. A 26, 2840-2853 (1982).
foot Note that the term “Burgers vector” might be more common but, in a smectic, this is only a scalar quantity. It would be a vector in a crystal with periodicity in more than one direction.
BPS C.D. Santangelo and R.D. Kamien, Phys. Rev. Lett. 91, 045506 (2003).
slse M.Y. Pevnyi, J.V. Selinger, and T.J. Sluckin, Phys. Rev. E 90, 032507 (2014).
klemanburger C.E. Williams and M. Kléman, J. Phys. (Paris) 36 (C1), C1-315 (1975).
toappear T. Machon, H. Aharoni, Y. Hu, and R.D. Kamien, unpublished (2017).
Abrikosov A.A. Abrikosov, Zh. Eksp. i Teor. Fiz. 32, 1442-1452 (1957); [JETP 5, 1174-1182 (1957)].
BM E.A. Brener and V.I. Marchenko, Phys. Rev. E 59, R4752 (1999).
|
http://arxiv.org/abs/1702.00489v2 | 20170126201140 | Transport Effects on Multiple-Component Reactions in Optical Biosensors | [
"Ryan M. Evans",
"David A. Edwards"
] | q-bio.MN | [
"q-bio.MN"
] |
R. M. Evans Applied and Computational Mathematics Division
Information and Technology Laboratory
National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
ryan.evans@nist.gov
D. A. Edwards Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA
dedwards@udel.edu
Transport Effects on Multiple-Component Reactions in Optical Biosensors
This work was done with the support of the National Science Foundation under award number NSF-DMS 1312529. The first author was also partially supported by the National Research Council through an NRC postdoctoral fellowship.
Ryan M. Evans David A. Edwards
Received: date / Accepted: date
=========================================================================================================================================================================================================================================================================================================
Optical biosensors are often used to measure kinetic rate constants associated with chemical reactions. Such instruments operate in the surface-volume configuration, in which ligand molecules are convected through a fluid-filled volume over a surface to which receptors are confined. Currently, scientists are using optical biosensors to measure the kinetic rate constants associated with DNA translesion synthesis–a process critical to DNA damage repair. Biosensor experiments to study this process involve multiple interacting components on the sensor surface. This multiple-component biosensor experiment is modeled with a set of nonlinear Integrodifferential Equations (IDEs). It is shown that in physically relevant asymptotic limits these equations reduce to a much simpler set of Ordinary Differential Equations (ODEs). To verify the validity of our ODE approximation, a numerical method for the IDE system is developed and studied. Results from the ODE model agree with simulations of the IDE model, rendering our ODE model useful for parameter estimation.
§ INTRODUCTION
Note: this manuscript now appears in the Bulletin of Mathematical Biology, and may be found through the following reference: Evans, R.M. & Edwards, D.A. Bull Math Biol (2017) 79: 2215. https://doi.org/10.1007/s11538-017-0327-9
Kinetic rate constants associated with chemical reactions are often measured using optical biosensors. Such instruments operate in the surface-volume configuration in which ligand molecules are convected through a fluid-filled volume, over a surface to which receptors are immobilized. Ligand molecules are transported through the fluid onto the surface to bind with available receptor sites, creating bound ligand molecules at concentration B(x,t). Mass changes on the surface due to ligand binding are averaged over a portion of the channel floor [x_min,x_max] to produce measurements of the form
B(t)=1/(x_max-x_min)∫_x_min^x_max B(x,t) dx.
See Figure <ref> for a schematic of one such biosensor experiment.
Measuring kinetic rate constants with optical biosensors requires an accurate model of this process, and models have been successfully proposed and progressively refined throughout the years: <cit.>. Although such models are typically limited to reactions involving only a single molecule or a single step, chemists are currently using biosensor technology to measure rate constants associated with reactions involving multiple interacting components. In particular, chemists are now using biosensor experiments to elucidate how cells cope with DNA damage. Harmful DNA lesions can impair a cell's ability to replicate DNA, and its ability to survive. One way a cell may respond to a DNA lesion is through DNA translesion synthesis <cit.>. For a description of this process we refer the interested reader to the references included herein; however, for our purposes it is sufficient to know that DNA translesion synthesis involves three interacting components: a Proliferating Cell Nuclear Antigen (PCNA) molecule, polymerase δ, and polymerase η. Moreover, in order for a successful DNA translesion synthesis event to occur polymerase η must bind with the PCNA molecule. A central question surrounding DNA translesion synthesis is whether the polymerase η and PCNA complex forms through direct binding, or through a catalysis-type ligand switching process <cit.>.
The former scenario is depicted in Figure <ref>, where we have shown polymerase η directly binding with a PCNA molecule,
i.e. the reaction:
P_1: E + L_2 ⇌ EL_2   (association rate _2k_a, dissociation rate _2k_d).
Here, we have denoted the PCNA molecule and polymerase η as E and L_2 respectively. Additionally, _2k_a denotes the rate at which L_2 binds with an empty receptor E, and _2k_d denotes the rate at which L_2 dissociates from a receptor E. We will refer to this as pathway one, or simply P_1 as in (<ref>).
The catalysis-type ligand switching process is depicted in Figure <ref> and stated precisely as:
P_2: E + L_1 ⇌ EL_1   (rates _1k_a, _1k_d),
EL_1 + L_2 ⇌ EL_1L_2   (rates ^1_2k_a, ^1_2k_d),   EL_1L_2 ⇌ EL_2 + L_1   (rates ^2_1k_d, ^2_1k_a),   EL_2 → E + L_2   (rate _2k_d).
In (<ref>) and Figure <ref> we have denoted polymerase δ as L_1. This process is summarized as follows: first L_1 binds with an available receptor E; next L_2 associates with EL_1 to create the product EL_1L_2; then L_1 dissociates from EL_1L_2, leaving EL_2; finally, L_2 dissociates from EL_2. Furthermore, in (<ref>) and Figure
<ref> the rate constants _1k_a and _1k_d denote the rates at which L_1 binds and unbinds with a receptor E, ^j_ik_a denotes the rate at which ligand L_i binds with the product EL_j, and ^j_ik_d denotes the rate at which L_i dissociates from the product EL_1L_2. In the latter two expressions the indices i and j can equal one or two. We shall refer to this as pathway two, or simply P_2 as in (<ref>).
Though Zhuang et al. provided indirect evidence of the ligand switch in <cit.>, a direct demonstration of this process has not been possible with conventional techniques such as fluorescence microscopy, since such techniques introduce the possibility of modifying protein activity. Hence, scientists are using label-free optical biosensors to measure the rate constants in (<ref>). By measuring the rate constants in (<ref>), one could determine whether EL_1L_2 forms through direct binding, or the catalysis-type ligand switching process. We note that the latter manifests itself mathematically with _2k_a=0, while the former with _1^2k_a= _1^2k_d= _2^1k_a= _2^1k_d=0.
However, the presence of multiple interacting components on the sensor surface complicates parameter estimation. In the present scenario there are three species EL_1, EL_1L_2, and EL_2 at concentrations B_1(x,t), B_12(x,t), and B_2(x,t), and since optical biosensors typically measure only mass changes at the surface, lumped measurements of the form
𝒮(t)=s_1B_1(t)+(s_1+s_2)B_12(t)+s_2B_2(t)
are produced. In (<ref>)
B_i(t)=1/(x_max-x_min)∫_x_min^x_max B_i(x,t) dx
denotes the average reacting species concentration, for i=1, 12, 2, and s_i denotes the molecular weight of L_i. The lumped signal (<ref>) raises uniqueness concerns, since more than one set of rate constants may possibly correspond to the same signal (<ref>). Fortunately, through varying the uniform in-flow concentrations of the ligands, C_1(0,y,t)=C_1,u
and C_2(0,y,t)=C_2,u, one may resolve this ill-posedness in certain physically relevant scenarios (Evans, R. M. and Edwards, D. A. and Li, W., submitted). This approach to identifying the correct set of rate constants in the presence of ambiguous data is related to the “global analysis” technique in biological literature <cit.>.
The presence of multiple interacting species and the lumped signal (<ref>) complicate parameter estimation even for systems accurately described by the well-stirred kinetics approximation. However in <cit.>, Edwards has shown that transport dynamics affect ligand binding in a thin boundary layer near the sensor surface. Hence, we begin in Section <ref> by summarizing the relevant boundary layer equations, which take the form of a set of nonlinear Integrodifferential Equations (IDEs). In Section <ref>, it is shown that in experimentally relevant asymptotic limits our IDE model reduces to a much simpler set of Ordinary Differential Equations (ODEs) which can be used for parameter estimation. To verify the accuracy of our ODE approximation, a numerical method is developed in Subsection <ref>. Convergence properties are examined in Subsection <ref>, and in Section <ref> the accuracy of our ODE approximation is verified by comparing results of our ODE model with results from our numerical method described in Section <ref>. Conclusions and plans for future work are discussed in Section <ref>.
§ GOVERNING EQUATIONS
For our purposes, biosensor experiments are partitioned into two phases: an injection phase, and a wash phase. During the injection phase L_1 and L_2 are injected into the biosensor via a buffer fluid at the uniform concentrations C_1(x,y,0)=C_1,u and C_2(x,y,t)=C_2,u. Injection continues until the signal (<ref>) reaches a steady-state, at which point the biosensor is washed with the buffer fluid–this is the wash phase of the experiment. Only pure buffer is flowing through the biosensor during the wash phase, not buffer containing ligand molecules. This causes all bound ligand molecules at the surface to dissociate and flow out of the biosensor, thereby preparing the device for another experiment. We first summarize the governing equations for the injection phase.
§.§ Injection Phase
To present our governing equations we introduce the dimensionless variables:
x̃=x/L, ỹ=y/H, t̃=_1k_aC_1,u t, B̃_i(x̃,t̃)=B_i(x,t)/R_T, C̃_i(x̃,ỹ,t̃)=C_i(x,y,t)/C_i,u,
_i^jK_a=C_i,u·_i^jk_a/(C_1,u·_1k_a), _i^jK_d=_i^jk_d/(C_1,u·_1k_a), F_r=C_rD_r, C_r=C_1,u/C_2,u, D_r=D_1/D_2.
We have scaled the spatial variables with the instrument's dimensions, time with the association rate of L_1 onto an empty receptor, the bound ligand concentrations B_i with the initial free receptor concentration, and the unbound ligand concentrations with their respective uniform inflow concentrations. The rate constants _i^jK_a and _i^jK_d are the dimensionless analogs of _i^jk_a and _i^jk_d. In the latter expressions the index i=1, 2, whereas j=1, 2, or can be blank. Furthermore, F_r measures the diffusion strength of each reacting species, as characterized by the product of the input concentrations and the diffusion coefficients. Henceforth, we shall drop the tildes on our dimensionless variables for simplicity. In particular,
we denote the dimensionless sensogram reading as
S(t)=𝒮(t)/(R_T· s_1)=B_1(t)+(1+s_2/s_1)B_12(t)+(s_2/s_1)B_2(t).
Moreover, we may use (<ref>) to denote the dimensionless average concentration, as it is of the same form in both the dimensionless and dimensional contexts.
Applying the law of mass action to (<ref>) gives the kinetics equations:
∂B_1/∂t = (1-B_Σ)C_1(x,0,t)- _1K_d B_1 - ^1_2K_a B_1C_2(x,0,t) + ^1_2K_d B_12,
∂B_12/∂t =_2^1K_aB_1C_2(x,0,t)-_2^1K_dB_12+_1^2K_aB_2C_1(x,0,t)-_1^2K_dB_12,
∂B_2/∂t = _2K_a(1-B_Σ)C_2(x,0,t)- _2K_d B_2+ _1^2K_d B_12 - ^2_1K_a B_2C_1(x,0,t),
𝐁(x,0)=0,
which hold on the reacting surface when y=0 and x∈[0,1]. In (<ref>), 𝐁=(B_1, B_12, B_2)^T is a vector in ℝ^3 whose components contain the three bound state concentrations. In addition, the terms in equations (<ref>)–(<ref>) have been ordered in accordance with Figures <ref> and <ref>.
Edwards has shown <cit.> that transport effects dominate in a thin boundary layer near the reacting surface where diffusion and convection balance. Hence the governing equations for C_i are
D_r ∂^2C_1/∂η^2=η ∂C_1/∂x,
∂^2C_2/∂η^2=η ∂C_2/∂x.
In (<ref>)–(<ref>): η=Pe^1/3y is the boundary layer variable, Pe=VH^2/(LD_2)≫ 1 is the Péclet number, and V is the characteristic velocity associated with our flow.
Since C_1 is used up in the production of B_1 and B_12, and C_2 is used up in the production of B_12 and B_2, we have the diffusive flux conditions:
∂C_1/∂η(x,0,t)=Da/F_r(∂B_1/∂t+∂B_12/∂t),
∂C_2/∂η(x,0,t)=Da(∂B_12/∂t+∂B_2/∂t).
Equations (<ref>)–(<ref>) reflect the fact that in the boundary layer C is in a quasi-steady-state where change is driven solely by the surface reactions (<ref>)–(<ref>). Then, given the inflow and matching conditions
C_i(0,η,t)=1,
lim_η→∞C_i(x,η,t)=1,
the solution to (<ref>) is given by
C_1(x,0,t)=1-Da D_r^1/3/(F_rΓ(2/3)3^1/3)∫_0^x(∂B_1/∂t+∂B_12/∂t)(x-ν, t) dν/ν^2/3,
C_2(x,0,t)=1-Da/(Γ(2/3)3^1/3)∫_0^x(∂B_12/∂t+∂B_2/∂t)(x-ν, t) dν/ν^2/3.
See <cit.> for details of a similar calculation. During the injection phase, the bound state concentration is then governed by (<ref>) using (<ref>).
In (<ref>)–(<ref>) and (<ref>)
Da=_1k_aR_T(HL)^1/3/(VD^2)^1/3
is the Damköhler number–a key dimensionless parameter which measures the speed of reaction relative to the transport into the surface. In the experimentally relevant parameter regime of Da≪ 1, the time scale for transport into the surface is much faster than the time scale for reaction. In this case there is only a weak coupling between the two processes, and (<ref>) shows that the unbound concentration at the surface is only a perturbation away from the uniform inlet concentration. When Da→ 0 in (<ref>) using (<ref>), one recovers the well-stirred approximation in which transport into the surface completely decouples from reaction.
On the other hand, when Da=O(1) the two processes occur on the same time scale, and ligand depletion effects become more evident. This is a phenomenon in which ligand molecules are transported into the surface to bind with receptor sites upstream, before they bind with receptor sites downstream. Mathematically, this is reflected in the convolution integrals in (<ref>). When x≪ 1 the convolution integral influences the unbound concentration at the surface less than when x is larger.
A sample space-time curve for each of the reacting species concentrations B_i(x,t) is depicted in Figure <ref>, where we have shown the results of our numerical simulations described in Section <ref>.
The x-axis represents the sensor, and t-axis represents time. Injection begins at t=0, and ligand molecules bind with receptor sites as they are transported into the surface. Binding proceeds as the injection continues; finally each of the concentrations achieve a chemical equilibrium in which there is a balance between association and dissociation. Observe the spatial heterogeneity present in each of the bound state concentrations–the reaction proceeds faster near the inlet at x=0 than the rest of the surface. This is precisely the ligand depletion phenomenon described in the above paragraph, and is particularly evident in the surface plot of B_12. This is because in this simulation we have taken all of the rate constants equal to one, and either EL_1 or EL_2 must be present in order for EL_1L_2 to form. Thus, in this case EL_1L_2 experiences effectively twice the ligand depletion of the other reacting species.
Furthermore, one may notice an apparent discontinuity in each of the surface plots depicted in Figure <ref>–this reflects the weakly singular nature of the functions which we are attempting to approximate. When x≪ 1, one may show 𝐁 has the perturbation expansion
𝐁(x,t)= ^0𝐁(t)+ Da x^1/3· ^1𝐁(t)+ O(Da^2 x^2/3)
(this is simply (<ref>) for x≪ 1). It therefore follows that
∂𝐁/∂x(x,t)= Da ^1𝐁(t)/(3x^2/3)+O(Da^2/x^1/3).
Hence, although the function 𝐁 is well-defined and continuous near x=0, it has a vertical tangent at x=0. The weakly-singular nature of 𝐁 is magnified since Da=2. To resolve this region, one may think to
adaptively change Δ x with the magnitude of ∂𝐁/∂x. However, because the sensogram reading S(t) is computed over the region [x_min,x_max], we are not concerned with resolving this region and a uniform step size is sufficient. Moreover, our convergence results in Subsection <ref> demonstrate that a lack of resolution at x=0 does not affect our results in the region of interest [x_min,x_max].
§.§ Wash Phase
We now summarize the relevant equations for the wash phase. In practice the injection phase is run until the bound state concentration reaches a steady-state
<cit.>. This implies that because the bound ligand concentration evolves on a much slower time scale than the unbound ligand concentration <cit.>, the
unbound ligand concentration will have also reached steady-state by the time the wash phase begins. In particular, the unbound concentration on the surface will be uniform by the time the wash phase starts–i.e., C_i(x,0,0)=1. Thus, the kinetics equations are given by (<ref>), with (<ref>) replaced by the steady solution to (<ref>) during the injection phase:
𝐁(x,0)=A^-1𝐟,
A=[ 1+_1K_d+^1_2K_a, 1-^1_2K_d, 1; -^1_2K_a, ^1_2K_d+^2_1K_d, -^2_1K_a; _2K_a, _2K_a-^2_1K_d, _2K_a+_2K_d+^2_1K_a ] ,
𝐟=[ 1; 0; _2K_a ].
Equations similar to (<ref>) hold:
C_i(0,η,t)=0,
lim_η→∞C_i(x,η,t)=0.
Equation (<ref>) is the inflow condition, and (<ref>) expresses the requirement that the concentration in the boundary layer must match the concentration C_i(x,y,t)=0 in the outer region. Moreover, as in the injection phase one can use (<ref>)–(<ref>) together with (<ref>) to show:
C_1(x,0,t)=-Da D_r^1/3/(F_rΓ(2/3)3^1/3)∫_0^x(∂B_1/∂t+∂B_12/∂t)(x-ν, t) dν/ν^2/3,
C_2(x,0,t)=-Da/(Γ(2/3)3^1/3)∫_0^x(∂B_12/∂t+∂B_2/∂t)(x-ν, t) dν/ν^2/3.
Thus, during the wash phase the bound state evolution is governed by the (<ref>)–(<ref>), (<ref>), and (<ref>).
§ EFFECTIVE RATE CONSTANT APPROXIMATION
During both phases of the experiment, the bound state concentration 𝐁(x,t) obeys a nonlinear set of IDEs which is hopeless to solve in closed form. However, we are
ultimately interested in the average concentration 𝐁(t), rather than the spatially-dependent function 𝐁(x,t), since from 𝐁(t) we can construct the sensogram signal (<ref>) (the quantity of interest). Thus, we seek to find an approximation to 𝐁(t), and begin by
finding one during the injection phase. We first average each side of (<ref>), with C_1(x,0,t) and C_2(x,0,t) given by (<ref>), in the sense of (<ref>). Immediately, we are
confronted with terms such as
B_1C_2=B_1(1-Da/(3^1/3Γ(2/3))∫_0^x(∂B_12/∂t+∂B_2/∂t) dν/(x-ν)^2/3),
on the right hand side of (<ref>). In the experimentally relevant case of small Da, we are motivated to expand B(x,t) in a perturbation series:
B(x,t)= ^0B(x,t)+O(Da).
In this limit, the leading order of (<ref>) is just C_i=1. Using this result in (<ref>), we have that the governing equation for ^0 B is independent of x:
d ^0𝐁/dt=-A ^0𝐁+𝐟,
where A is given by (<ref>) and 𝐟 by (<ref>). Hence the leading-order approximation
^0𝐁(t)=A^-1(I-e^-At)𝐟
is independent of space. Substituting (<ref>) into (<ref>), the time-dependent terms may be factored out of the integrand,
leaving the spatial dependence of C_j varying as x^1/3. This is the only spatial variation in (<ref>) at O(Da); hence we may write
𝐁(x,t)= ^0𝐁(t)+ Da x^1/3· ^1𝐁(t)+O(Da^2).
As a result of (<ref>) we have the relation
B_i(x,t)= ^0B_i(t)+O(Da^2),
which may be used to show the right hand side of (<ref>) is equal to
B_1- Da h· ^0B_1(d ^0B_12/dt+d ^0B_2/dt)+O(Da^2),
h(x)=3^2/3x^1/3/Γ(2/3).
We then average (<ref>), and use the resulting relation in (<ref>) to show the right hand side of (<ref>) reduces to:
B_1 C_2=B_1[ 1- Da h(dB_12/dt+dB_2/dt)]+O(Da^2).
In this manner, we can derive a set of nonlinear ODEs for 𝐁(t) of the form:
d𝐁/dt=M^-1(𝐁)(-A𝐁+𝐟)+O(Da^2),
𝐁(0)=0,
where
M(𝐁)=I+ Da N(𝐁),
N(𝐁)=[ (D_r^1/3h/F_r)(1-B_Σ), (D_r^1/3h/F_r)(1-B_Σ)- ^1_2K_a h B_1, -^1_2K_a h B_1; ^2_1K_a(D_r^1/3h/F_r)B_2, ^1_2K_a h B_1+ ^2_1K_a(D_r^1/3h/F_r)B_2, ^1_2K_a h B_1; -^2_1K_a(D_r^1/3h/F_r)B_2, -^2_1K_a(D_r^1/3h/F_r)B_2+ _2K_a h(1-B_Σ), _2K_a h(1-B_Σ) ].
We have also derived a set of ERC equations for the wash phase; they take the form:
d𝐁/dt=M^-1(𝐁)(-𝒟𝐁)+O(Da^2),
𝐁(0)=A^-1𝐟,
𝒟=[ _1K_d, -^1_2K_d, 0; 0, ^1_2K_d+^2_1K_d, 0; 0, -^2_1K_d, _2K_d ],
where M(𝐁) is as in (<ref>).
Following <cit.>, we refer to the Ordinary Differential Equation (ODE) systems (<ref>) and (<ref>) as our Effective Rate Constant (ERC) Equations. A significant advantage of our ERC equations is that these ODEs are far easier to solve numerically than their IDE counterparts. To solve (<ref>) or (<ref>), one may simply apply their linear multistage or multistep formula of choice. This feature renders our ERC equations attractive for data analysis, since they can be readily implemented into a regression algorithm when attempting to determine the rate constants associated with the reactions (<ref>). Since experimental data is still forthcoming, we do not employ a regression algorithm to fit the rate constants in (<ref>) and (<ref>) to biosensor data. Synthetic data for the kinetic rate constants was used in our numerical simulations.
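As an illustration, the following minimal sketch (ours, not the authors' code; it assumes the reconstructed forms of A and 𝐟 displayed above, with every dimensionless rate constant set to one) integrates the leading-order injection-phase system d𝐁/dt=-A𝐁+𝐟 and checks it against the closed form ^0𝐁(t)=A^-1(I-e^-At)𝐟:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# All dimensionless rate constants set to one, as in the first example below.
K1d = K2a = K2d = K12a = K12d = K21a = K21d = 1.0

# Reconstructed A and f (an assumption; see the displayed matrices above).
A = np.array([[1 + K1d + K12a, 1 - K12d,    1.0],
              [-K12a,          K12d + K21d, -K21a],
              [K2a,            K2a - K21d,  K2a + K2d + K21a]])
f = np.array([1.0, 0.0, K2a])

sol = solve_ivp(lambda t, B: -A @ B + f, (0.0, 5.0), np.zeros(3),
                t_eval=np.linspace(0.0, 5.0, 11), rtol=1e-10, atol=1e-12)

for t, B in zip(sol.t, sol.y.T):
    B_exact = np.linalg.solve(A, (np.eye(3) - expm(-A * t)) @ f)
    assert np.allclose(B, B_exact, atol=1e-6)   # matches the closed form

print(sol.y[:, -1])   # approaches the steady state A^{-1}f = (1/4, 1/4, 1/4)

The same few lines, with -A𝐁+𝐟 replaced by -𝒟𝐁 and the steady state A^-1𝐟 used as initial condition, reproduce the leading-order wash phase.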
Solutions of our ERC equations for different parameter values are depicted in Figure <ref>.
First consider the solutions depicted on the left. Here the injection phase (<ref>) has been run from t=0 to t=5 and the wash phase (<ref>) has been run from t=5 to t=10. Furthermore, all rate constants were taken equal to one and the Damköhler number was Da=0.1. During the injection phase it is seen that B_1 and B_2 reach equilibrium after approximately one second, while B_12 takes approximately two seconds. This is not a surprise: we are injecting equal amounts of both ligands, all the rate constants are the same, and either EL_1 or EL_2 must already be present in order for EL_1L_2 to form. The equality of the rate constants is also the reason why all three species attain the same steady-state. Mathematically, the steady-state of 𝐁 during the injection phase is given by (<ref>), and one can readily verify that A^-1𝐟=(1/4, 1/4, 1/4 )^T when all of the rate constants are equal to one. Physically, each of the species ultimately achieves the same balance between association and dissociation. Furthermore, the fact that all of the rate constants are the same is the reason why B_12 decays to zero faster than the other two species: EL_1L_2 transitions to either EL_1 or EL_2 at the same rate as the latter two species transition into an empty receptor E.
Now consider the solutions depicted on the right in Figure <ref>. As with the previous case the injection phase has been run from t=0 to t=5 and the wash phase has been run from t=5 to t=10. However this time, all the rate constants have been taken equal to one except _2^1K_a, which was taken equal to _2^1K_a=10. During the injection phase it is seen that B_1 quickly reaches a local maximum, and then decreases to steady-state. Since _2^1K_a is an order of magnitude larger than the other rate constants, after a short period of time L_2 molecules bind with EL_1 at a faster rate than L_1 molecules bind with empty receptor sites. This results in the chemical equilibrium between EL_1 and EL_1L_2 depicted on the right in Figure <ref>. From these observations it is clear why the steady-state value of B_12 is larger than the previous case. However, it may be counterintuitive to observe that B_2 reaches a larger steady-state value in the solutions depicted on the right than the solutions depicted to the left. Although one may think the vast majority of L_2 molecules should be used in forming EL_1L_2, the increase in EL_1L_2 also increases the concentration of empty receptor sites. The continuous injection of L_2 therefore drives the average concentration B_2 to a larger steady-state value. During the wash phase, it is seen that B_1 reaches a global maximum after approximately t≈5.75 seconds. The increase in B_1 during the wash phase is a direct consequence of L_2 molecules dissociating from EL_1L_2. Since only pure buffer is flowing through the biosensor during the wash phase, it is seen in Figure <ref> that each of the average concentrations B_i decay to zero.
§ NUMERICS
To verify the O(^2) accuracy of our ERC approximation derived in Section <ref>, we now develop a numerical approximation to the IDE system (<ref>), where C_1(x,0,t) and C_2(x,0,t) are given by (<ref>). We focus on the injection phase, since the wash phase is similar. Our approach is based on the numerical method described in <cit.>. Semi-implicit methods have been previously used with great
success to solve reaction-diffusion equations <cit.>, as they are typically robust, efficient, and accurate. Similarly, in our problem we exploit the structure of the integrodifferential operator, which naturally suggests a semi-implicit method in time. Moreover, since our method is semi-implicit in time we avoid the expense and complication of solving a nonlinear system at each time step. Convergence properties and remarks concerning stability, are discussed in Subsection <ref>; however, we first turn our attention to deriving our numerical method in Subsection <ref>.
§.§ Semi-implicit finite difference algorithm
We discretize the spatial interval [0,1] by choosing N+1 equally spaced discretization nodes x_i=iΔ x, for i=0, …, N, and discretize time by setting t_n=nΔ t, for n=0, …. Having chosen our discretization nodes and time steps, we seek to discretize (<ref>), where C_1(x,0,t) and C_2(x,0,t) are given by (<ref>). Note that this requires discretizing both the time derivatives and the convolution integrals; we first turn our attention to the latter, and focus on C_1(x,0,t). We would like to apply the trapezoidal rule to spatially discretize (<ref>), however the integrand of C_1(x,0,t) is singular when ν=0. To handle the singularity we subtract and add
(∂B_1/∂t+∂B_12/∂t)(x-ν,t)|_ν=0
from the integrand. Doing so yields
C_1(x,0,t)= 1-Da D_r^1/3/(F_r 3^1/3Γ(2/3)) {∫_0^x[(∂B_1/∂t+∂B_12/∂t)(x-ν,t)
- (∂B_1/∂t+∂B_12/∂t)(x,t)]dν/ν^2/3+3 x^1/3(∂B_1/∂t+∂B_12/∂t)(x,t)},
where we have used the fact that (<ref>) is independent of ν. Then choosing a discretization node x=x_i and a time step t=t_n, we apply the trapezoidal rule to (<ref>) to obtain
C_1(x_i,0,t_n)= 1-Da D_r^1/3/(F_r 3^1/3Γ(2/3)) {0·Δ x/2+∑_j=1^i-1[(∂B_1/∂t+∂B_12/∂t)(x_i-x_j,t_n)
- (∂B_1/∂t+∂B_12/∂t)(x_i,t_n)]Δ x/x_j^2/3+[(∂B_1/∂t+∂B_12/∂t)(0,t_n)
-(∂B_1/∂t+∂B_12/∂t)(x_i,t_n) ]Δ x/(2 x_i^2/3)+3 x_i^1/3(∂B_1/∂t+∂B_12/∂t)(x_i,t_n)},
when x_i>0; simply evaluating (<ref>) at x=x_0 gives C_1(x_0,0,t_n)=1. The first term in the sum is zero, because in a similar manner to Appendix B of <cit.> we have
lim_ν→ 0(∂B_k/∂t(x-ν, t_n) - ∂B_k/∂t(x,t))1/ν^2/3=lim_ν→ 0ν^1/3(∂B_k/∂t(x-ν, t_n) - ∂B_k/∂t(x,t))1/ν,
which implies
lim_ν→ 0(∂B_k/∂t(x-ν, t_n) - ∂B_k/∂t(x,t))1/ν^2/3=lim_ν→ 0ν^1/3∂^2B_k/∂x∂t(x,t)=0,
for k=1, 12, or 2. The last equality follows since we expect ∂B_i/∂t to be sufficiently regular for fixed x>0. The expansion (<ref>) shows that this is certainly true when Da≪ 1, however when Da=O(1) or larger the nonlinearity in (<ref>) renders any analytic approximation to B_i beyond reach. Our results in Subsection <ref> show that our method indeed converges when Da=O(1) or larger.
We now turn our attention to discretizing the time derivatives. We denote our approximation to B_j(x_i,t_n) by
B_j(x_i,t_n)≈ B^j_i,n,
and approximate the time derivatives through the formula
∂B_j/∂t(x_i,t_n)≈(B^j_i,n-B^j_i,n-1)/Δ t:=Δ B^j_i,n/Δ t.
Our approximation (<ref>) holds for all reacting species j=1, 12, 2, each of our discretization nodes x_i, and each time step t_n. As we shall show below, we treat Δ B^j_i,n as a separate variable used to update B^j_i,n at each iteration of our algorithm.
With our time derivatives discretized as (<ref>), the fully-discretized version of C_1(x,0,t) is given by substituting (<ref>) into (<ref>):
C^1_i,n= 1-Da D_r^1/3/(F_r 3^1/3Γ(2/3)) {∑_j=1^i-1[(Δ B^1_i-j,n/Δ t+Δ B^12_i-j,n/Δ t)- (Δ B^1_i,n/Δ t+Δ B^12_i,n/Δ t)]Δ x/x_j^2/3
+[(Δ B^1_0,n/Δ t+Δ B^12_0,n/Δ t) -(Δ B^1_i,n/Δ t+Δ B^12_i,n/Δ t)]Δ x/(2 x_i^2/3)+3 x_i^1/3(Δ B^1_i,n/Δ t+Δ B^12_i,n/Δ t)},
for i>0, and C^1_0,n=1. The function C_2(x,0,t) has a similar discretization which we denote as C_2(x_i,0,t_n)≈ C^2_i,n. Thus, our numerical method takes the form:
Δ B^1_i,n+1/Δ t=(1-B^Σ_i,n)C^1_i,n+1- _1K_d B^1_i,n- ^1_2K_a B^1_i,nC^2_i,n+1+ ^1_2K_d B^12_i,n,
Δ B^12_i,n+1/Δ t= ^1_2K_a B^1_i,nC^2_i,n+1- ^1_2K_d B^12_i,n+ ^2_1K_a B^2_i,nC^1_i,n+1- ^2_1K_d B^12_i,n,
Δ B^2_i,n+1/Δ t= _2K_a(1-B^Σ_i,n)C_i,n+1^2- _2K_d B^2_i,n+ ^2_1K_d B^12_i,n- ^2_1K_a B^2_i,nC^1_i,n+1.
We enforce the initial condition (<ref>) at our N+1 discretization nodes through the condition B^j_i,0=0 for j=1, 12, 2, and i=1,…, N. Observe that our method (<ref>) is semi-implicit rather than fully-implicit. This renders (<ref>) linear in Δ B^j_i,n+1, and as a result we can write
Δ𝐁_i,n+1/Δ t=M^-1_i,n(𝐁_i,n)(-A_i,n+1𝐁_i,n+𝐟_i,n+1),
where 𝐁_i,n=(B^1_i,n, B^12_i,n, B^2_i,n)^T. Hence, by using a method which is only semi-implicit in time we avoid the expense and complication of solving a nonlinear system at each time step. Having solved for Δ𝐁_i,n+1 using (<ref>), we march forward in time at a given node x_i through the formula
𝐁_i,n+1=𝐁_i,n+1/2(3Δ𝐁_i,n+1-Δ𝐁_i,n),
which is analogous to a second-order Adams-Bashforth formula.
In addition, we chose a method that is implicit in C_1(x,0,t) and C_2(x,0,t) also due to the form of the convolution integrals. From (<ref>) we see C_1(x,0,t) and C_2(x,0,t) depend on B_j(ν,t) only for ν≤ x. Thus by choosing a method that is implicit in C_1(x,0,t) and C_2(x,0,t), we are able to use the updated values of B_j(x,t) in the convolution integrals by first computing the solution at x=0, and marching our way downstream at each time step.
To make this notion more precise we note that in (<ref>) the matrix M^-1_i,n(𝐁_i,n) depends only upon 𝐁_i,n; however, because of the convolution integrals C^1_i,n+1 and C^2_i,n+1, the matrix A_i,n+1 and vector 𝐟_i,n+1 depend upon 𝐁_l,n+1 for l<i. Thus, at each time step n+1 we first determine 𝐁_0,n+1. Next, we increment i and use the value of 𝐁_0,n+1 in (<ref>) to determine 𝐁_1,n+1. We proceed by iteratively marching our way downstream from x_2 to x_N to determine 𝐁_2,n+1, …, 𝐁_N,n+1. Intuitively, the updated information from the convolution integral flows downstream from left to right at each time step. We may repeat this procedure for as many time steps as we wish. In addition, we remark that the formula (<ref>) was initialized with one step of Euler's method.
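For concreteness, the following is a sketch (ours) of the downstream march for a scalar analogue of the scheme—a single ligand and bound species obeying ∂B/∂t=(1-B)C-K_d B, with C given by the singularity-subtracted trapezoidal convolution. The parameter values are hypothetical, and the paper's three-species method has the same structure, with a 3×3 linear solve per node in place of the scalar solve:

import numpy as np
from math import gamma

Da, Kd = 0.1, 1.0                     # hypothetical parameter values
N, dt, nsteps = 100, 2e-3, 2500       # dx = 1/N; final time = 5
dx = 1.0 / N
x = np.arange(N + 1) * dx
c = Da / (3 ** (1.0 / 3.0) * gamma(2.0 / 3.0))

B = np.zeros(N + 1)                   # B_{i,n}
Dprev = np.zeros(N + 1)               # Delta B_{i,n} / Delta t

for n in range(nsteps):
    D = np.zeros(N + 1)
    D[0] = (1.0 - B[0]) - Kd * B[0]   # C = 1 at the inlet x_0 = 0
    for i in range(1, N + 1):         # march downstream
        w = np.empty(i)               # trapezoid weights at nu = x_i, x_1, ..., x_{i-1}
        w[0] = 0.5 * dx / x[i] ** (2.0 / 3.0)
        w[1:] = dx / x[1:i] ** (2.0 / 3.0)
        S = w[0] * D[0] + np.dot(w[1:], D[i - 1:0:-1])    # known, upstream part
        wi = 3.0 * x[i] ** (1.0 / 3.0) - w.sum()          # coefficient of D[i]
        D[i] = ((1.0 - B[i]) * (1.0 - c * S) - Kd * B[i]) \
               / (1.0 + (1.0 - B[i]) * c * wi)            # scalar implicit solve
    B += dt * D if n == 0 else 0.5 * dt * (3.0 * D - Dprev)  # Euler start, then AB2
    Dprev = D

print(B[::50])   # spatially varying approach to the equilibrium 1/(1+Kd)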
Furthermore, with our finite difference approximation to 𝐁(x,t), we can determine the average quantity
B(t)=(B_1(t), B_12(t), B_2(t))^T
with the trapezoidal rule
𝐁(t_n)≈1/(x_max-x_min)(Δ x/2·𝐁_m,n+Δ x∑_i=m+1^M-1𝐁_i,n+Δ x/2·𝐁_M,n).
In (<ref>), the indices i=m and i=M correspond to x_min=mΔ x and x_max=MΔ x. Our nodes were chosen to align with x_min and x_max to avoid interpolation error.
§.§ Convergence study
§.§.§ Spatial Convergence
We now examine the spatial rate of convergence of our numerical method. Since from 𝐁 we can compute the quantity of interest (<ref>), we derive estimates for the rate at which our numerical approximation converges to 𝐁. Furthermore, because the system (<ref>), (<ref>) is nonlinear, our analysis will focus on the experimentally relevant case of Da≪ 1. In addition, we will derive estimates only for the injection phase of the experiment, since the wash phase is similar.
To proceed, we consider the average variant of (<ref>), (<ref>). Averaging (<ref>) in the sense of (<ref>) gives:
dB_1/d t = (1-B_Σ)C_1(x,0,t)- _1K_d B_1 - ^1_2K_a B_1C_2(x,0,t)+ ^1_2K_d B_12 ,
dB_12/d t = ^1_2K_a B_1C_2(x,0,t)- ^1_2K_d B_12 + ^2_1K_a B_2C_1(x,0,t)
- _1^2K_d B_12,
dB_2/d t = _2K_a(1-B_Σ)C_2(x,0,t)- _2K_d B_2+ _1^2K_d B_12 - ^2_1K_a B_2C_1(x,0,t).
𝐁(0)=0.
As in Subsection <ref>, we handle the singularity in (<ref>) by adding and subtracting (<ref>) from the integrand of (<ref>) to write C_1(x,0,t) as in (<ref>). The unbound ligand concentration C_2(x,0,t) has a representation analogous to (<ref>). In the following analysis we limit our attention to (<ref>), since the analysis for equations (<ref>)–(<ref>) is nearly identical.
We proceed by analyzing each of the terms in (<ref>):
-_1K_d B_1,
^1_2K_d B_12,
C_1(x,0,t),
-B_ΣC_1(x,0,t),
-^1_2K_a B_1 C_2(x,0,t).
Upon inspecting (<ref>) and using linearity of the averaging operator, we see that three terms contribute to (<ref>):
1
-Da D_r^1/3/(F_r3^1/3Γ(2/3))∫_0^x[(∂B_1/∂t+∂B_12/∂t)(x-ν,t)-(∂B_1/∂t+∂B_12/∂t)(x,t)] dν/ν^2/3,
3x^1/3(∂B_1/∂t+∂B_12/∂t).
In a similar manner, (<ref>) and (<ref>) each imply that we incur error from the terms:
- B_Σ
Da D_r^1/3 B_Σ/(F_r3^1/3Γ(2/3))∫_0^x[(∂B_1/∂t+∂B_12/∂t)(x-ν,t)-(∂B_1/∂t+∂B_12/∂t)(x,t)] dν/ν^2/3,
-3x^1/3B_Σ(∂B_1/∂t+∂B_12/∂t),
-^1_2K_a B_1
Da ^1_2K_a B_1/(3^1/3Γ(2/3))∫_0^x[(∂B_12/∂t+∂B_2/∂t)(x-ν,t)-(∂B_12/∂t+∂B_2/∂t)(x,t)] dν/ν^2/3,
-3 x^1/3 ^1_2K_a B_1(∂B_12/∂t+∂B_2/∂t).
Let us denote the trapezoidal rule of a function f(x) over the interval [a,b] by 𝒯(f(x),[a,b]). Then since 𝒯(1,[x_min,x_max]) is exact, the term (<ref>) does not contribute to the spatial discretization error.
Next we decompose the expansion (<ref>) into its individual components to obtain
B_j(x,t)= ^0B_j(t)+ Da x^1/3· ^1B_j(t)+O(Da^2 x^2/3),
for j=1, 12, 2. Substituting (<ref>) into (<ref>), (<ref>), (<ref>), (<ref>), and using the fact that 𝒯(x^1/3,[x_min,x_max]) converges at a rate of O(Δ x^2), shows that each of these terms converges at a rate of O(Δ x^2). Similarly, one can substitute (<ref>) into (<ref>), (<ref>), and (<ref>), and use the fact that 𝒯(x^1/3,[x_min,x_max]) converges at a rate of O(Δ x^2), to show that each of these terms converges at a rate of O(Δ x^2).
It remains to determine the error associated with (<ref>), (<ref>), and (<ref>), so we turn our attention to (<ref>) and substitute (<ref>) into (<ref>) to obtain
-Da^2 D_r^1/3/(F_r3^1/3Γ(2/3)(x_max-x_min))(d ^1B_1/d t+d ^1B_12/dt)(t)∫_x_min^x_max∫_0^x[(x-ν)^1/3-x^1/3] dν/ν^2/3 dx,
where we have used the definition of our averaging operator (<ref>). In writing (<ref>), we have neglected higher-order terms which do not contribute to the leading-order spatial discretization error. Since the coefficient of the integral in (<ref>) is a function of time alone, this coefficient does not contribute to the leading-order spatial discretization error and we neglect it in our analysis. Hence, to compute the spatial discretization error associated with (<ref>), we calculate the error associated with applying the trapezoidal rule to the double integral
∫_x_min^x_max∫_0^x[(x-ν)^1/3-x^1/3]dν/ν^2/3 dx.
Treating the inner integral as a function of x we define
f(x)=∫_0^x ((x-ν)^1/3-x^1/3)ν^-2/3 dν,
whose closed form is given by
f(x)= (x^2/3/2)(2^1/3√(π)Γ(1/3)/Γ(5/6)-6).
Towards applying the trapezoidal rule to (<ref>), we first note 𝒯(f,[0,x_i]) converges at a rate of O(Δ x^4/3). This is seen by first rewriting (<ref>) as
∫_0^Δ x [(x-ν)^1/3-x^1/3]ν^-2/3 dν+∫_Δ x^x_i [(x-ν)^1/3-x^1/3]ν^-2/3 dν.
The term on the right converges at a rate of O(Δ x^2), and the term on the left converges at a rate of O(Δ x^4/3), which follows from expanding (x-ν)^1/3 about ν=0, and using the definition of the trapezoidal rule.
Applying the trapezoidal rule to (<ref>) then gives
Da^2∫_x_min^x_max∫_0^x((x-ν)^1/3-x^1/3) ν^-2/3 dν dx
=Da^2Δ x/2·𝒯(f(x),[0,x_m])+∑_i=m+1^M-1Da^2Δ x·𝒯(f(x),[0,x_i])
+Da^2Δ x/2·𝒯(f(x),[0,x_M])+O(Da^2Δ x^2),
where we have let x_min=x_m=mΔ x, and x_max=x_M=MΔ x. Since 𝒯(f,[0,x_i]) converges at a rate of O(Δ x^4/3), the right hand side of the above is
(Da^2Δ x/2·f(x_m)+O(Da^2Δ x^7/3))+∑_i=m+1^M-1(Da^2Δ x·f(x_i)+O(Da^2Δ x^7/3))
+(Da^2Δ x/2·f(x_M)+O(Da^2Δ x^7/3))+O(Da^2Δ x^2).
To compute our results in Section <ref>, we took x_min=0.2, x_max=0.8, in accordance with the literature <cit.>. Hence, in the above sum there are approximately 0.6N=0.6Δ x^-1 terms on the order of O(Da^2Δ x^7/3), and the above sum reduces to
Da^2(Δ x/2·f(x_m)+Δ x∑_i=m+1^M-1 f(x_i)
+Δ x/2·f(x_M))+O(Da^2Δ x^4/3)
+O(Da^2Δ x^5/3).
The dominant error in (<ref>) is O(Da^2Δ x^4/3), thus the spatial discretization error associated with (<ref>) is O(Da^2Δ x^4/3). When measuring convergence we used values of x_min=.25, x_max=.75 to facilitate progressive
grid refinement; however, it is clear that these values of x_min and x_max do not change our argument. A similar argument shows the spatial discretization error associated with the nonlinear terms (<ref>) and (<ref>) is O(Da^2Δ x^4/3).
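The degraded rate can also be observed directly; the following sketch (ours) applies the composite trapezoidal rule to the weakly singular integrand defining f(x) and estimates the observed order of convergence against the closed form above:

import numpy as np
from math import gamma, sqrt, pi

x = 1.0
f_exact = 0.5 * x ** (2.0 / 3.0) * (
    2 ** (1.0 / 3.0) * sqrt(pi) * gamma(1.0 / 3.0) / gamma(5.0 / 6.0) - 6.0)

def trap(n):
    nu = np.linspace(0.0, x, n + 1)
    g = np.empty(n + 1)
    g[0] = 0.0                        # removable singularity at nu = 0
    g[1:] = ((x - nu[1:]) ** (1.0 / 3.0) - x ** (1.0 / 3.0)) / nu[1:] ** (2.0 / 3.0)
    return (x / n) * (g.sum() - 0.5 * (g[0] + g[-1]))

errs = np.array([abs(trap(2 ** k) - f_exact) for k in range(4, 12)])
print(np.log2(errs[:-1] / errs[1:]))  # observed orders, tending to 4/3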
We have depicted our spatial convergence measurements for B_1 in Figure <ref> and tabulated them in Table <ref>. To obtain these results, we first computed a reference solution, with Δ x=Δ t=1/512. We then created a series of test solutions with mesh width Δ x=1/2^j, for j=2,…, 7, keeping Δ t=1/512 constant. Next, we computed 𝐁 by averaging our reference solution and test solutions at each time step with the trapezoidal rule as in (<ref>). We then computed the error between each test solution and the reference solution by taking the maximum difference of the two over all time steps.
From our results, we see that our method converges at a rate of O(Δ x^2) when Da≪ 1, O(Δ x^4/3) when Da=O(1), and O(Δ x^3/2) when Da≫ 1. The reduction in convergence when Da increases from small to moderate may be attributed to the O(Da^2Δ x^4/3) contributions from (<ref>), (<ref>), and (<ref>). There are
two competing magnitudes of error in (<ref>): one of O(DaΔ x^2) (from terms (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>)), and one of O(Da^2Δ x^4/3) (from the integral terms (<ref>), (<ref>), and (<ref>)). When Da^2Δ x^4/3< DaΔ x^2, or Da<Δ x^2/3, the former is larger. Conversely, when Δ x^2/3< Da the latter is larger.
When Da≫ 1, the bound state evolves on a longer time scale of the form <cit.>
t_w=t/Da.
In this case, the characteristic time scale for reaction is much faster than the characteristic time scale for transport into the surface, and one typically refers to Da≫ 1 as the transport-limited regime. Substituting (<ref>) into (<ref>), (<ref>), one may find the leading-order approximation to the resulting system for Da≫ 1 by neglecting the left hand side of (<ref>)–(<ref>). Doing so one finds that even a leading-order approximation to (<ref>)–(<ref>) is nonlinear, rendering any error estimates in the transport-limited regime beyond reach. Nonetheless, our results in Figure <ref> and Table <ref> show that convergence is not an issue when Da≫ 1.
§.§.§ Temporal Convergence
Since our time stepping method (<ref>) is analogous to a second-order Adams-Bashforth formula, we expect our method to achieve second-order accuracy in time. Figure <ref> shows that this is indeed the case when Da=.01, and the dimensionless rate constants take the values 1, 1, 1, 1/2, 2, 2, and 1/2 (as in Subsection <ref> when measuring spatial convergence). Temporal convergence was measured in an analogous manner to spatial convergence.
However, we note that measuring temporal convergence when Da=O(1) is computationally prohibitive, since the spatial error is O(Da^2Δ x^4/3) in this case, so in order for the spatial and temporal errors to balance one must have O(Da^2Δ x^4/3)=O(Δ t^2), or Δ x=Δ t^3/2.
Nonetheless, our results from Section <ref> demonstrate that our finite difference approximation agrees with our ERC approximation for a wide parameter range, so we are not concerned with temporal convergence of our method when Da=O(1) or larger.
§.§.§ Stability Remarks
We now make brief remarks concerning the stability of our method. Recall from Subsection <ref> we first determine the value of B_i(x,t) upstream at x=0, and iteratively march our way downstream to x=1 at each time step. Therefore, we expect any instabilities at x=0 to propagate downstream. Requiring that there be no instabilities at x=0 is equivalent to asking that our time stepping method (<ref>) is stable for the ODE system found by replacing C_1(x,0,t) and C_2(x,0,t) with the constant function 1 in (<ref>). Though we do not have precise stability estimates for this system, numerical experimentation has shown that our time steps need to be sufficiently small in order to ensure that our numerical approximation is well behaved.
§ EFFECTIVE RATE CONSTANT APPROXIMATION VERIFICATION
With our numerical method in hand, we are now in a position to verify the accuracy of our ERC approximations (<ref>) and (<ref>). We tested the accuracy of our ERC equations when Da=0.1 and Da=0.45; the results are below in Figure <ref> and Tables <ref>–<ref>.
From these results, it is evident that our ERC equations accurately characterize 𝐁 and the sensogram reading (<ref>) not only for small Da, but for moderate Da as well. Motivated by <cit.>, we ran a series of simulations for different values of Da, ranging from Da≈ 0.02 to Da=150. We measured the maximum
absolute error for each value of Da, and created the curves shown in Figure <ref>. The error starts off small as expected, and increases at rates which compare favorably with our O(Da^2) prediction, and finally reaches an asymptote corresponding to roughly two percent absolute error. Thus, although our ERC approximations (<ref>) and (<ref>) are formally valid for only small values of Da, their solutions agree with our finite difference approximation for moderate and large values of Da.
§ CONCLUSIONS
Scientists are attempting to determine whether the polymerase η and PCNA complex (denoted EL_2 throughout) which results from DNA translesion synthesis forms through direct binding (<ref>), or through a catalysis-type ligand switching process (<ref>). Since fluorescent labeling techniques may modify protein behavior, label-free optical biosensor experiments are used. Interpreting experimental data relies on a mathematical model, and modeling multiple-component biosensor experiments results in a complicated and unwieldy set of equations. We have shown that in experimentally relevant limits this model reduces to a much simpler set of ODEs (our ERC equations), which can be used to fit rate constants using biosensor data. In contrast with the standard well-stirred kinetics approximation, our ERC equations accurately characterize binding when mass transport effects are significant. This renders our ERC equations a flexible tool for estimating the rate constants in (<ref>). In turn, estimates for the rate constants in (<ref>) will reveal whether the polymerase η and PCNA complex forms via direct binding (<ref>), or the catalysis-type ligand switching process (<ref>).
Furthermore, the consideration of both direct binding (<ref>) and the ligand switching process (<ref>) has several mathematical and physical consequences. First, due to the form of (<ref>), the species are directly coupled through the kinetics equations. This is true even in the well-stirred limit in which Da→ 0, and (<ref>) reduces to C_1(x,0,t)=C_2(x,0,t)=1. However, transport effects manifested in (<ref>) nonlinearly couple the reacting species. So we see in Figure <ref> that there is a more pronounced depletion region in B_12 than in either of the other two species. Physically, this is a consequence of the fact that either EL_1 or EL_2 must be present in order for EL_1L_2 to form, thus the latter is affected by depletion of the former two species. Additionally, the multiple-component reactions (<ref>) alter the form of the sensogram reading to the lumped signal (<ref>), thereby complicating parameter estimation.
In addition to establishing a firm foundation for studying the inverse problem of estimating the rate constants in (<ref>), the present work also opens the door for future work on modeling and simulating multiple-component biosensor experiments. This includes considering other physical effects like cross-diffusion, or steric hindrance; and comparing the finite difference method described herein to the method of lines algorithm discussed in <cit.>.
§ PARAMETER VALUES
Parameter values from the literature are tabulated below.
The variables W and Q represent the dimensional width and flow rate; the other dimensional variables are as in Section <ref>. The flow rate is related to the velocity through the formula <cit.>
V=6Q/(WH).
Using the dimensional values above, we calculated the following extremal bounds on the dimensionless variables.
Here ϵ=H/L is the aspect ratio, and Re=VH^2/(ν L) is the appropriate Reynolds number associated with our system.
The authors wish to emphasize that the bounds in Table <ref> are naïve extremal bounds calculated by using minimum and maximum values for the dimensional parameters in Table <ref>. In particular, the values for the dimensionless rate constants in Table <ref> are not estimates of their true values; they are minimum and maximum values calculated using combinations of extremal values for the parameters in Table <ref>. A large variation in the dimensionless rate constants is highly unlikely, since this scenario corresponds to one in which one of the association rate constants is very large, and another association rate constant very small. We would also like to note that a large variation in some of the parameters, such as the kinetic rate constants or , would necessitate very small values for either or both of Δ t and Δ x in our numerical method.
Furthermore, one may be concerned about the upper bound on the Reynolds number, the lower bound on the Péclet number, and the upper bound on the Damköhler number. All of these extremal bounds were calculated using a flow rate of 1 μL/min–the slowest flow rate possible on the BIAcore T200 <cit.>. Even with the fastest reactions, one can still design experiments to minimize transport effects by increasing the flow rate Q (thus the velocity), decreasing the initial empty receptor concentration R_T, and decreasing the ligand inflow concentrations C_1,u and C_2,u. In the case of the fastest reaction _1k_=3×10^9 cm^3/(mol·s), we can take:
Q=390 μL/min, V=0.75 cm/s, R_T=7.76×10^-13 mol/cm^2, C_1,u=C_2,u=2.96×10^-12 mol/cm^3.
These choices yield the dimensionless parameters Re=0.09, Pe=136.26, Da=5.16; these values are perfectly in line with our analysis, and the validity of our ERC equations.
|
http://arxiv.org/abs/1701.08155v1 | 20170127035255 | The Moser's formula for the division of the circle by chords problem revisited | [
"Carlos Rodriguez-Lucatero"
] | math.CO | [
"math.CO",
"05A15"
] |
crodriguez@correo.cua.uam.mx
Departamento de Tecnologías de la Información, Universidad Autónoma Metropolitana-Cuajimalpa,
Torre III,
Av. Vasco de Quiroga 4871,
Col.Santa Fe Cuajimalpa, México, D. F.,
C.P. 05348, México
The enumeration of the regions formed when a circle is divided by secants drawn from points on the circle is one of the examples where inductive reasoning fails, as was pointed out by Leo Moser in the Mathematical Miscellany in 1949. The formula that gives the right number of regions can be deduced by combinatorial reasoning using Euler's planar graph formula, etc. My contribution in the present work is to reformulate and solve this problem in terms of a fourth order difference equation and to obtain the formula proposed by Leo Moser.
Mathematics Subjects Classification: 05A15
Keywords: Exact Enumeration Problems; Generating Functions.
The Moser's formula for the division of the circle by chords problem revisited
Carlos Rodríguez-Lucatero
December 30, 2023
==============================================================================
§ INTRODUCTION
A problem sometimes known as Moser's circle problem asks to determine the number of pieces into which a circle is divided if m points on its circumference are joined by chords with no three internally concurrent.
The number of regions formed inside the circle when it is divided by the chords as mentioned can be sketched for the first 5 steps in the following figure:
If we label the regions formed by this division by secants, the sequence of the number of regions generated in terms of the number of points up to this point is 1,2,4,8,16. If the number of points is denoted by m and we try to guess the functional behavior of the sequence, induction tells us that it is 2^m-1. Following this procedure, if we add one more point and draw the corresponding secants we get 31 regions instead of 32. Let me tabulate this behavior for the first seven points in the following table
Points regions
m f(m)
1 1
2 2
3 4
4 8
5 16
6 31
7 57
As can be seen, the function 2^m-1 no longer describes the behavior of the sequence. That is the reason why Leo Moser in <cit.> pointed out that the inductive method for guessing the next element in a numerical sequence can fail. In fact, the title of the section in <cit.> was On the danger of induction. He leaves it to the reader as an exercise to show that the number of regions formed by joining points on a circle by chords is f(m)=∑_j=0^4\binom{m-1}{j}.
The function that describes the behavior of the sequence is:
f(m)=(m^4-6m^3+23m^2-18m+24)/24
This can be proven in many different ways. In the following sections I will describe how it was demonstrated
in <cit.> using combinatorial arguments, in <cit.> by using Euler's planar graph formula, and finally how it can be proven by using a fourth degree difference equation.
§ DEDUCTION USING A COMBINATORIAL ARGUMENTATION
In this section we will describe a proof of Moser's formula based on the article <cit.> (see also <cit.>).
In order to find the actual formula on the number of regions formed, the author of <cit.> states the following
Result: The number of regions formed is
1+\binom{m}{2}+\binom{m}{4}
From combinatorics it is known that \binom{r}{s}=r!/(s!(r-s)!), and if we apply this to <ref> we obtain Moser's formula
f(m)=(m^4-6m^3+23m^2-18m+24)/24
The proof of <ref> given in <cit.> uses the fact that the binomial coefficient \binom{r}{s} counts the ways we can choose s elements from a set of r different elements.
In order to formalize the demonstration, I will start with some basic lemmas that can be used for the proof.
The total number of chords that can be created from the m points on the circle is
\binom{m}{2}
If we have m points, each chord is formed by taking two points of the m lying on the perimeter of the circle. In other words, the number of chords is determined by the number of ways in which the m points on the circle can be taken to form a chord.
Then the total number of possible chords equals the total number of different ways in which m points can be taken in groups of 2, which is equal to \binom{m}{2}.
The total number of interior crossing points of the chords obtained from the m points on the circle is
\binom{m}{4}
For calculating the number of interior intersection points inside the circle, we must count the number of ways in which the m points on the circle can be taken such that their related chords intersect. This can be done by taking four points to form their chords and the corresponding intersection, which gives a total of \binom{m}{4} ways.
Let m be the number of points on the circle and f(m) the number of regions formed by the division of a circle by chords. The total number of regions formed is
f(m)=(m^4-6m^3+23m^2-18m+24)/24
Let me start by counting the regions formed by taking each chord one by one. Each time a new chord is drawn, it crosses a number of regions, dividing each of them into two. The number of new created regions is equal to the number of regions crossed by the new chord, which is one more than the number of chords crossed. Given that the new chord cannot pass through a previously drawn point of intersection, the number of chords crossed is equal to the number of interior points of intersection of the new chord. As a consequence, the number of new regions created by this new chord equals the number of interior points of intersection of this chord plus one.
Because of that, taking into account lemma <ref> and lemma <ref>, the total number of new regions created by drawing all the chords is equal to the number of added chords plus the number of interior points of intersection; together with the fact that at the beginning we have one region, the total number of regions formed is
f(m)=1+\binom{m}{2}+\binom{m}{4}=(m^4-6m^3+23m^2-18m+24)/24
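This result is easily checked by computer algebra; the following sketch (ours, using sympy) verifies both the expansion of 1+\binom{m}{2}+\binom{m}{4} into Moser's polynomial and the first seven values of the sequence:

from sympy import symbols, binomial, expand

m = symbols('m')
F = 1 + binomial(m, 2) + binomial(m, 4)
poly = (m**4 - 6*m**3 + 23*m**2 - 18*m + 24) / 24
assert expand(F - poly, func=True) == 0          # the two expressions agree
print([F.subs(m, k) for k in range(1, 8)])       # [1, 2, 4, 8, 16, 31, 57]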
§ DEDUCTION USING THE PLANAR GRAPH EULER'S FORMULA
I found in <cit.> an elegant proof based on the famous Euler formula for planar graphs, which I will develop in this section. Let me start with the statement of Euler's result
Let V be the number of vertices, E the number of edges and F the number of faces of a planar graph. Then V-E+F=2.
In order to use the planar graph Euler's formula for deducing Leo Moser's formula, the author of <cit.> relates the set V with the original points on the circle as well as with the interior intersection points, the set E with the chords and arcs formed by the points on the circle, and the set F with the formed regions.
We can formalize this method as follows
Let P be the set of m points on the circle, I the set of m4 interior crossing points and f(m) the number of regions formed by the division of the circle by chords.
Let G=(V,E) be the planar graph obtained from the circle division by chords, where V=P ∪ I, E the edges of the planar graph, and F the faces. The number of regions or faces formed is
f(m)=(m^4-6m^3+23m^2-18m+24)/24
There are m points on the circle and there are \binom{m}{4} intersections of the chords. Then we have a total of
|V|=|P ∪ I|=m+\binom{m}{4}
vertices. In order to count the total number of edges it must be noticed that we have m circular arcs. Given that we have \binom{m}{4} interior intersection points where four edges meet, we have 4\binom{m}{4} additional edges. Due to the fact that we have \binom{m}{2} chords corresponding to two edges that meet the circle, we have 2\binom{m}{2} more edges. Then we have 2\binom{m}{2}+4\binom{m}{4} edges generated by the chords, but due to the counting process we have counted them twice. Then this quantity must be divided by 2, giving a total of \binom{m}{2}+2\binom{m}{4}. Hence the total number of edges is
|E|=m+\binom{m}{2}+2\binom{m}{4}
The number of faces is related to the number of regions f(m) as follows
F=f(m)+1
From <ref> we know that
V-E+F=2
Replacing <ref>,<ref>,<ref> in <ref> we get
{ m+\binom{m}{4}}-{ m+\binom{m}{2}+2\binom{m}{4}}+ f(m)+1 = 2
Simplifying <ref> we get
-\binom{m}{2}-\binom{m}{4} + f(m) = 1
From <ref> we express f(m) in terms of the other elements of the expression as follows
f(m) = 1+\binom{m}{2}+\binom{m}{4}=(m^4-6m^3+23m^2-18m+24)/24
and in that way we have finally mathematically proven Moser's formula.
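As a final sanity check (ours), the bookkeeping of this proof can be verified symbolically: with |V|, |E| and F=f(m)+1 as computed above, Euler's identity holds for every m:

from sympy import symbols, binomial, expand

m = symbols('m')
V = m + binomial(m, 4)
E = m + binomial(m, 2) + 2 * binomial(m, 4)
f = 1 + binomial(m, 2) + binomial(m, 4)
assert expand(V - E + (f + 1) - 2, func=True) == 0   # V - E + F = 2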
§ MY DEDUCTION OF THE LEO MOSER'S FORMULA BY SOLVING A DIFFERENCE EQUATION
An alternative method that I propose for solving Moser's problem of the circle division by chords is to obtain a recurrence from the numerical sequence of the number of regions formed and then solve the related difference equation. To this end we have to obtain a recurrence relation to be solved. This recurrence relation can be obtained from the succession of regions, using the technique of successive differences frequently applied in problems of inductive reasoning on numerical sequences. We apply the successive differences technique to the sequence {a_0,a_1,a_2,a_3,a_4,a_5} of six elements to obtain the seventh element a_6.
The elements of the sequence 1,2,4,8,16,31,57, … that represent the growth behavior on the number of regions formed by dividing the circle by chords can be generated by a fourth degree recurrence relation
We start by applying the successive differences method for guessing the next element of the sequence to its first six elements.
a_0   a_1   a_2   a_3   a_4   a_5   a_6
 1     2     4     8    16    31    57
    1     2     4     8    15
       1     2     4     7
          1     2     3
             1     1
From the calculations it can be noticed that we stop the successive differences procedure when the differences become constant. Then, if we sum the last element of each row plus the last element of the original sequence of numbers, we can obtain the next element of the sequence. So in the example, from the summation 1+3+7+15+31 we obtain 57, which corresponds to the next value of the sequence.
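This procedure is mechanical, and can be reproduced in a few lines of code (ours, for illustration only):

seq = [1, 2, 4, 8, 16, 31]
rows, row = [seq], seq
while len(set(row)) > 1:              # stop when the differences are constant
    row = [b - a for a, b in zip(row, row[1:])]
    rows.append(row)
print([r[-1] for r in rows])          # [31, 15, 7, 3, 1]
print(sum(r[-1] for r in rows))       # 57, the next element of the sequence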
From the successive differences table we obtain the following recurrence relation
[ a_n+4 = a_n+3 +(a_n+3-a_n+2)+((a_n+3-a_n+2)-(a_n+2-a_n+1))+; (((a_n+3-a_n+2)-(a_n+2-a_n+1))-((a_n+2-a_n+1)-(a_n+1-a_n)))+1 ]
The expression <ref> is the desired fourth-order recurrence relation.
Simplifying and reordering the terms of equation <ref> we get the following expression
a_n+4-4 a_n+3 + 6 a_n+2 - 4 a_n+1 + a_n = 1
Adding the initial conditions to <ref> we get the following difference equation
a_n+4-4 a_n+3 + 6 a_n+2 - 4 a_n+1 + a_n = 1, a_0=1,a_1=2,a_2=4,a_3=8
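Before solving <ref> analytically, a direct iteration (an illustrative check of mine) confirms that it reproduces the sequence of region counts:

a = [1, 2, 4, 8]                                    # initial conditions a_0, ..., a_3
for _ in range(10):
    a.append(4*a[-1] - 6*a[-2] + 4*a[-3] - a[-4] + 1)
print(a[:8])                                        # [1, 2, 4, 8, 16, 31, 57, 99]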
Once we have obtained the difference equation <ref> we can solve it by many existing methods <cit.>,<cit.>. The purpose of solving <ref> is to relate it to the deduction of Leo Moser's formula for the number of regions formed by the division of the circle by chords. In what follows I will do this in two different ways:
* By solving <ref> by the generating functions method
* By solving <ref> by the method for linear non-homogeneous difference equations with constant coefficients.
To formalize these results I will state them as the following theorems.
Let n be the index of the recurrence <ref>, m the number of points on the circle and f(m) the number of regions formed by the division of the circle by chords. Let n=m-1. If we solve <ref> by the generating-functions method, the number of regions formed by the division of the circle by chords
is
f(m) = \frac{m^4-6m^3+23m^2-18m+24}{24}
From <ref> we have <ref>.
The application of equation <ref> for different values of n gives the following results
[ (n=0) a_4-4 a_3 + 6 a_2 - 4 a_1 + a_0 = 1; (n=1) a_5-4 a_4 + 6 a_3 - 4 a_2 + a_1 = 1; (n=2) a_6-4 a_5 + 6 a_4 - 4 a_3 + a_2 = 1; (n=3) a_7-4 a_6 + 6 a_5 - 4 a_4 + a_3 = 1; ⋮ ⋮ ]
If we multiply the first row of <ref> by x^0, the second row by x^1, the third row by x^2, the fourth row by x^3 and so on, we get
[ (n=0) a_4 x^0-4 a_3 x^0 + 6 a_2 x^0 - 4 a_1 x^0 + a_0 x^0 = x^0; (n=1) a_5 x^1-4 a_4 x^1 + 6 a_3 x^1 - 4 a_2 x^1 + a_1 x^1 = x^1; (n=2) a_6 x^2-4 a_5 x^2 + 6 a_4 x^2 - 4 a_3 x^2 + a_2 x^2 = x^2; (n=3) a_7 x^3-4 a_6 x^3 + 6 a_5 x^3 - 4 a_4 x^3 + a_3 x^3 = x^3; ⋮ ⋮ ]
Summing up the rows of <ref> we obtain
∑_n=0^∞a_n+4 x^n - 4 ∑_n=0^∞a_n+3 x^n + 6 ∑_n=0^∞a_n+2 x^n - 4 ∑_n=0^∞a_n+1 x^n + ∑_n=0^∞a_n x^n = ∑_n=0^∞ x^n
In order to match the indices of the coefficients with the powers of the variable, we rewrite <ref> as
[ x^-4∑_n=0^∞a_n+4 x^n+4 - 4 x^-3∑_n=0^∞a_n+3 x^n+3 + 6 x^-2∑_n=0^∞a_n+2 x^n+2; - 4 x^-1∑_n=0^∞a_n+1 x^n+1 + ∑_n=0^∞a_n x^n = ∑_n=0^∞ x^n ]
The generating function is defined as
f(x)=∑_n=0^∞a_n x^n
Before putting equation <ref> in terms of <ref>, it should be noticed that the right-hand side of equation <ref> is the generating function of the geometric series, whose corresponding sequence is 1,1,1,1,…. If we multiply each side of equation <ref> by x^4 we obtain
[ ∑_n=0^∞a_n+4 x^n+4 - 4 x ∑_n=0^∞a_n+3 x^n+3 + 6 x^2∑_n=0^∞a_n+2 x^n+2; - 4 x^3∑_n=0^∞a_n+1 x^n+1 + x^4∑_n=0^∞a_n x^n = x^4∑_n=0^∞ x^n ]
It can be noticed that by this operation the right-hand side of <ref> corresponds to a right shift of the corresponding sequence by four places, so that its first four entries are cancelled and the result is the sequence 0,0,0,0,1,1,1,1,…. Rewriting <ref> in terms of the generating function <ref> we get
[ (f(x)-a_0-a_1x-a_2x^2-a_3x^3)-4x(f(x)-a_0-a_1x-a_2x^2); +6x^2(f(x)-a_0-a_1x)-4x^3(f(x)-a_0) + x^4f(x)=x^4/(1-x) ]
Replacing a_0=1, a_1=2, a_2=4 and a_3=8 in <ref> we get
[ (f(x)-1-2x-4x^2-8x^3)-4x(f(x)-1-2x-4x^2); +6x^2(f(x)-1-2x)-4x^3(f(x)-1) + x^4f(x)=x^4/(1-x) ]
By algebraic simplification and factorization of <ref> we obtain
f(x)(1-x)^4+(-1+2x-2x^2)=x^4/(1-x)
From <ref> we can obtain f(x)
f(x)=x^4/(1-x)^5+1/(1-x)^4-2x/(1-x)^4+2x^2/(1-x)^4
By partial fraction decomposition <cit.> <cit.> we have that
x^4/(1-x)^5=1/(1-x)-4/(1-x)^2+6/(1-x)^3-4/(1-x)^4+1/(1-x)^5
-2x/(1-x)^4=2/(1-x)^3-2/(1-x)^4
2x^2/(1-x)^4=2/(1-x)^2-4/(1-x)^3+2/(1-x)^4
Substituting <ref>, <ref> and <ref> in <ref> we get
f(x)=1/(1-x)^5-3/(1-x)^4+4/(1-x)^3-2/(1-x)^2+1/(1-x)
From the generating functions theory it is known that <cit.> <cit.>
[ 1/(1-x)^r=\binom{-r}{0}+\binom{-r}{1}(-x)+\binom{-r}{2}(-x)^2+…; =∑_i=0^∞\binom{-r}{i}(-x)^i=∑_i=0^∞(-1)^i\binom{r+i-1}{i}(-x)^i; =∑_i=0^∞\binom{r+i-1}{i}x^i ]
Taking into account <ref> and using it in equation <ref> we can calculate the n-th coefficient of each term and obtain
f(n)=\binom{n+4}{n}-3\binom{n+3}{n}+4\binom{n+2}{n}-2\binom{n+1}{n}+\binom{n}{n}
We calculate the polynomials in n from the terms of equation <ref>:
\binom{n}{n}=1
\binom{n+1}{n}=\frac{(n+1)n!}{n!(n+1-n)!}=n+1
\binom{n+2}{n}=\frac{(n+2)(n+1)n!}{n!(n+2-n)!}=\frac{n^2+3n+2}{2}
\binom{n+3}{n}=\frac{(n+3)(n+2)(n+1)n!}{n!(n+3-n)!}=\frac{n^3+6n^2+11n+6}{6}
\binom{n+4}{n}=\frac{(n+4)(n+3)(n+2)(n+1)n!}{n!(n+4-n)!}=\frac{n^4+10n^3+35n^2+50n+24}{24}
Replacing <ref>,<ref>,<ref>,<ref> and <ref> in equation <ref> and simplifying we obtain
f(n)=\frac{n^4-2n^3+11n^2+14n+24}{24}
Recalling that the number m of points on the circle is related to n by n=m-1, we replace n by m-1 in <ref> and obtain
f(m)=\frac{m^4-6m^3+23m^2-18m+24}{24}
That is the desired result.
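As an independent check of this proof (my addition, assuming the sympy library is available), expanding the partial-fraction form of the generating function reproduces the sequence of region counts:

import sympy as sp

x = sp.symbols('x')
f = 1/(1-x)**5 - 3/(1-x)**4 + 4/(1-x)**3 - 2/(1-x)**2 + 1/(1-x)
print(sp.series(f, x, 0, 7))
# 1 + 2*x + 4*x**2 + 8*x**3 + 16*x**4 + 31*x**5 + 57*x**6 + O(x**7)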
Let n be the index of the recurrence <ref>, m the number of points on the circle and f(m) the number of regions formed by the division of the circle by chords. Let n=m-1. If we solve <ref> by the method for linear non-homogeneous difference equations with constant coefficients, the number of regions formed by the division of the circle by chords
is
f(m) = \frac{m^4-6m^3+23m^2-18m+24}{24}
From <ref> we have <ref> .
From discrete mathematics it is known that the general solution of an equation like <ref> consists of two parts, the solution of the associated homogeneous equation and a particular solution of <ref>, that is <cit.> <cit.>
a^g_n=a^p_n + a^h_n
where the superscripts g, p and h stand for the general, particular and homogeneous solutions, respectively.
The general form of the linear non-homogeneous equation is
a_n+4-4 a_n+3 + 6 a_n+2 - 4 a_n+1 + a_n = f(n)
where f(n)=1. Then our non-homogeneous equation with the corresponding four initial conditions will be
a_n+4-4 a_n+3 + 6 a_n+2 - 4 a_n+1 + a_n = 1, a_0=1,a_1=2,a_2=4,a_3=8
Let me start with the particular solution.
As will be shown later, when the homogeneous solution is obtained, the characteristic polynomial has the root r=1 with multiplicity four. This means that, in order to have the corresponding
four linearly independent solutions of the associated homogeneous equation, the homogeneous solution will be a polynomial of third degree <cit.>. In order to have a particular solution that is linearly independent of the associated homogeneous solution, the form of the particular solution will be a^p_n= An^4 for a constant A. Replacing this solution in <ref> we get
An^4-4A(n-1)^4+6A(n-2)^4-4A(n-3)^4+A(n-4)^4=1
Algebraically expanding the fourth-degree binomials in <ref> we obtain
[ A(n^4-4(n^4-4n^3+6n^2-4n+1)+6(n^4-8n^3+24n^2-32n+16); -4(n^4-12n^3+54n^2-108n+81)+(n^4-16n^3+96n^2-256n+256))=1 ]
Simplifying <ref> we get the value
A=1/24
and from <ref> we substitute this value in the particular solution and obtain
a^p_n= n^4/24
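A quick symbolic verification (mine, sympy assumed) that a^p_n = n^4/24 indeed satisfies the non-homogeneous equation, i.e. that its fourth finite difference equals 1:

import sympy as sp

n = sp.symbols('n')
p = n**4 / 24
lhs = p.subs(n, n+4) - 4*p.subs(n, n+3) + 6*p.subs(n, n+2) - 4*p.subs(n, n+1) + p
print(sp.expand(lhs))                               # 1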
The associated homogeneous equation is
a_n+4-4 a_n+3 + 6 a_n+2 - 4 a_n+1 + a_n = 0, a_0=1,a_1=2,a_2=4,a_3=8
It is known from the methodology of discrete mathematics <cit.> <cit.> for solving homogeneous recurrences like <ref> that their solutions have the form
a^h_n=Cr^n
Applying <ref> to <ref> it is obtained
Cr^n+4-4Cr^n+3+6Cr^n+2-4Cr^n+1+Cr^n=0
Dividing each member of <ref> by Cr^n we obtain the following characteristic polynomial
r^4-4r^3+6r^2-4r+1=0
By factorization of <ref> we get
(r-1)^4=0
Then the root of <ref> is r=1 with multiplicity four. In order to have four linearly independent homogeneous solutions, the homogeneous solution will have the following form
a^h_n=C_1 1^n+C_2 n 1^n+C_3 n^2 1^n+C_4 n^3 1^n=C_1+C_2n+C_3n^2+C_4n^3
where C_1, C_2, C_3 and C_4 are constants that can be determined by the application of the
initial conditions a_0=1, a_1=2, a_2=4 and a_3=8. From <ref> we know that a^g_n=a^p_n+a^h_n and then we can establish the following relation
a^g_n=a^h_n+a^p_n=C_1+C_2n+C_3n^2+C_4n^3+n^4/24
By application of the initial conditions to <ref> we establish the following relations
[ a_0=C_1+C_2(0)+C_3(0)^2+C_4(0)^3+(0)^4/24=1; a_1=C_1+C_2(1)+C_3(1)^2+C_4(1)^3+(1)^4/24=2; a_2=C_1+C_2(2)+C_3(2)^2+C_4(2)^3+(2)^4/24=4; a_3=C_1+C_2(3)+C_3(3)^2+C_4(3)^3+(3)^4/24=8 ]
From the relations <ref> we establish the following linear relations
[ C_1=1; C_2+C_3+C_4=23/24; 2C_2+4C_3+8C_4=56/24; 3C_2+9C_3+27C_4=87/24 ]
To obtain the values of C_2, C_3 and C_4 we solve the following linear system
( [ 1 1 1; 2 4 8; 3 9 27 ] ) ( [ C_2; C_3; C_4 ] ) = ( [ 23/24; 56/24; 87/24 ] )
Solving the linear system <ref> we obtain
[ C_2=14/24; C_3=11/24; C_4=-2/24 ]
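The same values follow from a numerical solve (an illustrative check, numpy assumed):

import numpy as np

A = np.array([[1., 1., 1.], [2., 4., 8.], [3., 9., 27.]])
b = np.array([23., 56., 87.]) / 24
print(np.linalg.solve(A, b) * 24)                   # [14. 11. -2.]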
Replacing the values obtained in <ref> we get the following expression
a^g_n=1+14/24n+11/24n^2-2/24n^3+n^4/24
We know that the relation between the number m of points on the circle from which the chords are traced and
n is n=m-1, so we can express <ref> in terms of m as follows
a^g_n=1+14/24(m-1)+11/24(m-1)^2-2/24(m-1)^3+(m-1)^4/24
Algebraically developing each term of <ref> we get
[ a^g_n=1+14/24m-14/24+11/24(m^2-2m+1); -2/24(m^3-3m^2+3m-1)+(m^4-4m^3+6m^2-4m+1)/24 ]
Simplifying <ref> we finally obtain
a^g_n=24-18m+23m^2-6m^3+m^4/24
Reordering the numerator terms we obtain the desired result
f(m)=m^4-6m^3+23m^2-18m+24/24
§ CONCLUSIONS
In this article, I proposed a deduction of Leo Moser's well-known formula for counting the number of regions that are formed by dividing the circle by chords, by solving a fourth-order difference equation obtained through the successive-differences method. I solved this recurrence equation by two different methods. This article illustrates how a classical problem can lead to different and creative developments. That is what makes mathematics so exciting.
Grimaldi1 Ralph P. Grimaldi: Discrete and Combinatorial Mathematics: An Applied Introduction. Addison-Wesley, 3rd Ed. (1994).
Jobbings1 Andrew Jobbings: A selection of mathematical articles and notes by Andrew Jobbings. http://www.arbelos.co.uk/Papers/Chords-regions.pdf, 27 December (2008).
Maier1 Eugene Maier: Counting Pizza Pieces and Other Combinatorial Problems. Mathematics Teacher, 81 (1988), 22–26.
Moser1 Leo Moser and W. Bruce Ross: Mathematical Miscellany. Mathematics Magazine, 23 (1949), 109–114.
Sedgewick1 Robert Sedgewick and Philippe Flajolet: Introduction to the Analysis of Algorithms. Addison-Wesley, 2nd Printing (2001).
|
http://arxiv.org/abs/1701.07685v1 | 20170126132352 | Facing the phase problem in Coherent Diffractive Imaging via Memetic Algorithms | [
"Alessandro Colombo",
"Davide Emilio Galli",
"Liberato De Caro",
"Francesco Scattarella",
"Elvio Carlino"
] | physics.comp-ph | [
"physics.comp-ph",
"cond-mat.mtrl-sci",
"math.OC"
] |
alessandro.colombo6@unimi.it
Università degli Studi di Milano, via Giovanni Celoria 16, 20133 Milano, Italy
davide.galli@unimi.it
Università degli Studi di Milano, via Giovanni Celoria 16, 20133 Milano, Italy
liberato.decaro@ic.cnr.it
Istituto di Cristallografia, Consiglio Nazionale delle Ricerche (IC-CNR), via Giovanni Amendola, 122/O, 70126 Bari, Italy
scattarella@iom.cnr.it
Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche (IOM-TASC-CNR), Strada Statale 14 km 163.5, 34149 Trieste, Italy
carlino@iom.cnr.it
Istituto Officina dei Materiali, Consiglio Nazionale delle Ricerche (IOM-TASC-CNR), Strada Statale 14 km 163.5, 34149 Trieste, Italy
Coherent Diffractive Imaging is a lensless technique that allows imaging of matter at a spatial resolution not limited by lens aberrations.
This technique exploits the measured diffraction pattern of a coherent beam scattered by periodic and non–periodic objects to retrieve spatial
information. The diffracted intensity, for weak–scattering objects, is proportional to the modulus of the Fourier Transform of the object scattering function. Any phase information, needed to retrieve its scattering function, has to be retrieved by means of suitable algorithms.
Here we present a new approach, based on a memetic algorithm, i.e. a hybrid genetic algorithm, to face the phase problem, which exploits
the synergy of deterministic and stochastic optimization methods. The new approach has been tested on simulated data and applied
to the phasing of transmission electron microscopy coherent electron diffraction data of a SrTiO_3 sample.
We have been able to quantitatively retrieve the projected atomic potential, and also image the oxygen columns, which are not directly visible
in the relevant high-resolution transmission electron microscopy images. Our approach proves to be a new powerful tool for the study of matter
at atomic resolution and opens new perspectives in those applications in which effective phase retrieval is necessary.
Facing the phase problem in Coherent Diffractive Imaging via Memetic Algorithms
Elvio Carlino
December 30, 2023
===============================================================================
§ INTRODUCTION
Full-field and scanning microscopes can be either lens-based or lensless imaging systems. Coherent Diffractive Imaging (CDI) is a lensless technique that permits imaging matter at a spatial resolution not limited by lens aberrations. The seminal idea of CDI was due to David Sayre in 1952 <cit.> but it was only experimentally demonstrated for X-rays in 1999 <cit.> and, more recently, also for electrons, using a Transmission Electron Microscope (TEM), giving rise to the Electron Diffractive Imaging (EDI) <cit.>.
The goal is to retrieve a qualitative/quantitative image of a scattering function related to a physical property of the scattering object, such as the electron density (X-ray CDI) or the atomic potential (EDI).
High Resolution TEM (HRTEM) images of the projected atomic potential are phase-contrast images limited by the high-order aberrations of the objective lens, which distort the phase of the scattered wave function, giving rise to images of the sample, which in general are not immediately interpretable in terms of its atomic structure <cit.>.
Instead, diffraction patterns of scattering objects are not affected by these aberrations. Therefore they contain, in principle, undistorted information on the scattering function at a better spatial resolution with respect to lens-based imaging systems <cit.>. The diffracted intensity, for weak-scattering objects, is proportional only to the modulus of the Fourier Transform (FT) of the scattering function. Any phase information, which is experimentally lost (phase problem <cit.>), has to be retrieved by means of suitable algorithms. The lensless image of the scattering function, obtained by means of an inverse FT of the diffraction pattern once that the correct phase has been retrieved, is characterized by a final resolution experimentally limited only by the Numerical Aperture (NA_diff) corresponding to the highest spatial frequency contained in the diffraction pattern that can be related to the atomic structure of the investigated sample <cit.>. Wavelength, noise, radiation damage, thermal and mechanical stability of the experimental setup, dynamics of the detector, etc. <cit.> could limit the spatial resolution achievable.
In order to find a unique solution to the phase problem, phase retrieval algorithms need a-priori constraints, such as fixing a region around the sample characterized by zero scattering <cit.>. The extension of the zero-scattering region has to be large enough regarding the object support S, containing the scattering object. This requirement is the so-called oversampling of the diffraction pattern, needed to satisfy the Nyquist sampling requirements <cit.>.
For extended samples, alternative approaches have been developed to have enough a-priori information to make the phasing problem overdetermined. In particular, Abbey et al. <cit.> for X-rays CDI experiments substituted the region of zero scattering with a region of zero illumination using a confined probe with a technique named keyhole CDI. The possibility to perform Keyhole coherent diffraction experiments has been demonstrated also for electrons in EDI, obtaining the KEDI approach <cit.>.
So far, several phase retrieval algorithms have been developed <cit.>, which are mostly evolutions of Fienup’s modification <cit.> of the Gerchberg-Saxton algorithm <cit.>, working with a dual-space strategy.
The common approach of many phasing algorithms is to impose constraints both in real space (the prior information on the zero scattering/illuminating region) and in Fourier space (the amplitudes are adapted to the experimental values). The imposition of a constraint in one space always causes the violation of the constraint in the other. Consequently, the standard strategy is to use iterative schemes but, in this way, the global minimum of the reconstruction errors is often reached with difficulty <cit.>. Indeed, the support S of the scattering function is either unknown (X-ray CDI) or known at a worse spatial resolution. This is the case of EDI, for example, where the support is obtained by means of a lens-based image of the scattering function (projected atomic potential), i.e. by the HRTEM image.
In case of unknown complex scattering functions the knowledge of the support S (non-zero illumination region) at the same spatial resolution of the measured diffraction pattern seems to be mandatory for the success of the phasing, especially when phasing algorithms are not suitably structured to escape stagnation in local minima <cit.>. This aspect would limit the advantages of indirect imaging based on lensless systems with respect to conventional lens-based set-ups.
Indeed, standard phasing approaches, such as the Hybrid Input-Output (HIO), are mainly deterministic iterative algorithms, which, going back and forth from the real to the Fourier space, try to optimize a specific error functional <cit.>. For this reason they are highly efficient in finding local minima, but they suffer from stagnation mainly due to the incomplete knowledge of the support S. Furthermore, the final result is highly dependent on the initial conditions <cit.>.
In order to overcome these limitations, phasing procedures usually involve a lot of parallel and independent retrieval processes, with different initial conditions, choosing the scattering function with the lowest reconstruction errors as a possible solution.
A first step toward a smarter use of information coming from multiple phase retrieval processes is the Guided Hybrid Input-Output algorithm <cit.>.
A different approach to face the phase problem in CDI could make use of purely stochastic optimization methods. However, a major limitation of these methods is their inefficiency when the number of unknowns is huge, so that, in typical CDI applications, such an approach is doomed to fail even on current supercomputing facilities.
In this article we make a step further proposing a new hybrid stochastic approach to better explore the phase solution space through a smart use of Genetic Algorithms (GAs) <cit.>. GAs have been already applied to the phase problem in different fields <cit.>.
The novelty of our new approach consists in the development of a Memetic Algorithm (MA) <cit.>
in the context of phase retrieval applied to CDI; this scheme represents a natural choice for a smart merging
of stochastic and deterministic optimization methods: the algorithm has been developed hybridizing a GA, which guarantees a wide exploration of the configuration space, with local optimization algorithms like Hybrid Input-Output and Error Reduction.
We have shown on simulated data that the MA phasing approach is able to retrieve the correct scattering function, when it is real, imposing a very loose support constraint S in the direct space.
Moreover, still on simulated data, we have shown that the new MA phasing approach is able to retrieve the correct scattering function, even if it is complex, starting from a knowledge of the support S at a resolution four times worse than the one corresponding to NA_diff, even worse than normally observed in EDI/KEDI real experiments. Our new approach shows convergence performances towards the global minimum that go well beyond those achievable by standard phasing deterministic algorithms.
Finally, we have applied the GA-based phasing approach to a KEDI experiment realized on a SrTiO_3 sample in a [100] axis orientation.
The image obtained after the phase reconstruction is a detailed structural map of the specimen atomic potential projected along the [100] direction at a sub-Ångström spatial resolution corresponding to the highest frequency measured in the experimental diffraction pattern. The intensity distribution enables one to distinguish between atomic sites containing different chemical species. Also the oxygen signals can be detected, despite the presence of heavy atoms in the crystal cell, whereas they are not visible in the relevant HRTEM image.
These results pave the way to the highest spatial resolution, accuracy and reliability achievable in lensless imaging and represent a new powerful tool for the study of matter.
§ RESULTS AND DISCUSSION
§.§ The Memetic Phase Retrieval approach
A GA <cit.> is a stochastic optimization method that imitates the survival–to–fitness typical of the natural evolution of a population. In general, this is obtained by elaborating the genetic information via three genetic operators: Selection, Crossover and Mutation.
In our algorithm we induce the genetic dynamics on a set of initial densities {ρ_i(x⃗) }_i=1 … N_p (also called population) which represents N_p possible solutions to the phase problem.
In standard GAs Mutation and Crossover induce a stochastic shift on every element of the population in the space of configurations; this improves the ability of exploring the space, but makes the GA efficient only when N_p increases with the number of unknowns <cit.>.
In the phase retrieval problem this condition makes standard GA impracticable due to the computational cost.
A practised way to overcome this issue consists in implementing hybrid GAs, known as Memetic Algorithms (MAs) <cit.>, where global optimization is boosted by local optimization procedures. This operation, in the framework of Memetic Algorithms, is known as Self (or Local) Improvement.
Using this method, we introduce local iterative deterministic phase retrieval procedures in our algorithm as Self Improvement operation.
For this reason, the new proposed phasing approach is called Memetic Phase Retrieval (MPR). Fig.<ref> shows an overview on the procedure, while a more detailed description of the method is reported in the Methods section. It is worth noting that the standard phasing approach can be seen as MPR without the genetic operators Selection, Crossover and Mutation.
MPR has to be considered as a smart framework to take advantage and improve the performances of any current iterative phase retrieval approach; it is clear, in fact, that any phase retrieval algorithm can be implemented in MPR.
In this work we use the Error Reduction and Hybrid Input-Output algorithms as Self Improvement operations because they are simple, well known and well-characterized methods. Moreover, as we will show in the following, the sole inclusion of HIO and ER inside MPR is enough to build a very powerful phase retrieval algorithm.
MPR actually includes also methods for the retrieval of the optimal support function, like the Shrinkwrap algorithm <cit.>; this feature makes MPR a Co-evolving Memetic Algorithm <cit.>, where the Self Improvement co-evolves along with the candidate solutions. This peculiar feature will be discussed in future works because it has not been used in the present application to KEDI data, where sufficient information about the support function was available, making the Shrinkwrap procedure unnecessary.
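To make the flow of one generation concrete, the following is a minimal, runnable Python skeleton of MPR (our schematic illustration only: the placeholder functions stand in for the error functional, the differential crossover and the HIO/ER Self Improvement detailed in the Methods section, and are not the actual implementation):

import random

def fitness(rho):                  # placeholder for the error functional E[rho]
    return sum(abs(v) for v in rho)

def crossover(p1, p2):             # placeholder for the differential crossover
    return [random.choice(pair) for pair in zip(p1, p2)]

def self_improve(rho):             # placeholder for N_HIO + N_ER local iterations
    return rho

def mpr_generation(pop, G=0.5):
    pop = sorted(pop, key=fitness)                 # lower error = better fitness
    n_sons = int(G * len(pop))
    sons = [crossover(random.choice(pop), random.choice(pop))
            for _ in range(n_sons)]
    survivors = random.sample(pop, len(pop) - n_sons)
    return [self_improve(r) for r in survivors + sons]

population = [[random.random() for _ in range(8)] for _ in range(16)]
population = mpr_generation(population)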
§.§ Testing MPR on simulated data
In the Supplementary Information we discuss in detail several numerical tests performed to verify the ability of the new proposed approach to reliably retrieve phase information in comparison with standard deterministic phasing approaches. Here, we summarize the main results obtained by simulations before discussing the application of MPR to true experimental data of a KEDI experiment.
In the comparison of the performances between MPR and standard phasing algorithms, applied to simulated data, we will focus on the best phase reconstructed by both methods and not on a statistical analysis of the results obtained from the set of phase retrieval processes started in parallel with different initial conditions. This is necessary because the genetic dynamics has the distinctive feature of mixing and sharing information among parallel processes during the stochastic evolution; this makes the stochastic search for a better phase smarter, but also tends to push the population towards the best candidate solution.
A statistical analysis would thus be biased in favor of MPR.
Moreover, we are going to compare MPR and standard phasing algorithms under equal conditions: the same number of candidate solutions in the population and the same kind of iterative phase retrieval algorithms.
The first comparison test between MPR and the standard phasing approach is the phase retrieval of a positive, real-valued two-dimensional (2D) scattering function, shown in Fig.<ref>a.
In the test the provided support function S (Fig.<ref>b) is a square four times smaller than the total area of the direct space, which gives a constraint ratio <cit.> Ω=2.
In this regard it is useful to remember that Ω≥ 1 is the mathematical condition ensuring the existence of a unique solution to the phase problem. Ω is defined as the ratio between the support of the autocorrelation of the scattering object and two times the object support, which are areas for the 2D phase problem and volumes for 3D ones. In this example, the support function is not updated during the phase retrieval.
Fig.<ref> shows the obtained results for a population N_p = 16384. In particular, Fig.<ref>b shows the support S, never updated during the phasing process. Fig.<ref>a shows the retrieved unknown function obtained by standard phasing algorithms (see Supplementary Information for further details). The poor quality of the retrieved phase is due to the weak constraint in real space, as the support constraint S has not been updated. Fig.<ref>c shows the retrieved unknown function obtained by MPR (see Phase retrieval of real-valued data in Supplementary Information for further details).
The performances of the new proposed stochastic approach are much better than the classic deterministic phasing methods. MPR works accurately even without a tight support constraint.
In the case of simulated data one can evaluate the true error defined as the normalized absolute difference for every pixel of the retrieved 2D function with respect to the solution (Lena image, adapted from the picture 4.2.04 in the USC–SIPI image database<cit.>) (see Supplementary information for more details).
The true error for the deterministic phasing approach is larger than 24%. Instead, MPR leads to a true error less than 1%.
A second numerical test of MPR concerns the reconstruction of a complex-valued scattering function.
In particular, we have considered a situation typically encountered in EDI, in which HRTEM allows us to obtain a lens-based image of the sample under investigation characterized by a worse spatial resolution with respect to that corresponding to the NA_diff of the measured diffraction pattern. In order to simulate this experimental situation, we have binned with a factor 4 the module of the scattering function to be retrieved (Fig.<ref>a) to obtain a rough estimation of its support, thresholding the binned image as shown in Fig.<ref>c, whereas Fig.<ref>b shows the phase in direct space that has also to be retrieved.
This data can be represented as depicted in Fig.<ref>a via the Hue-Saturation-Value (HSV) color system, where the information on the phase is stored in the hue, the modulus corresponds to the value and the saturation level is set to the maximum.
Even in this case, the standard phasing approach is far from recovering the correct complex scattering function, reported in Fig.<ref>b. Instead, MPR is able to correctly retrieve both module and phase of the complex unknown scattering function (Fig.<ref>c) (see Supplementary Information for further details).
The final true error for this test is about 10% for the standard approach and 1.5% for MPR.
§.§ Application of MPR on experimental data for Keyhole Electron Diffractive Imaging
KEDI experiments <cit.> are challenging tests for phase retrieval algorithms as the scattering function to be reconstructed is complex. SrTiO_3 was considered as a case study for the great importance of this oxide from both an applicative and a fundamental point of view. The role of the oxygen sub-lattice is of particular importance in the studies of two dimensional electron gases formed at the interface between two insulating oxides and has recently attracted great attention <cit.>. The capability to image the lattice of complex oxides at atomic resolution is necessary to understand the intriguing properties of this class of material. Moreover imaging of a light chemical element, such as oxygen, in a matrix of heavier atoms, like titanium and strontium, is not straightforward <cit.>. The samples were prepared for KEDI experiments in a [100] zone axis as this configuration enables the imaging of different atomic species in the crystal sub-lattice (see Methods section).
KEDI requires an HRTEM image (Fig.<ref>a) and a nano-diffraction pattern acquired from the same sample area with the same electron optical conditions <cit.>.
The HRTEM experiment enables us to image the sample (Fig.<ref>a and <ref>b), to complement the diffraction pattern at the lower spatial frequencies (Fig.<ref>c) and to estimate the support (Fig.<ref>d) at the resolution allowed by the experimental conditions and by the electron objective lens aberrations <cit.>. In the case of the electron-optical set-up used for these experiments, the relevant spatial resolution in the HRTEM image at optimum defocus is 0.19 nm <cit.>. The HRTEM image in Fig.<ref>b has been successfully simulated in the framework of the full dynamical Bloch-wave approach <cit.> for a thickness of 25 nm and an underfocus value of 41.3 nm (see Supplementary Information). It should be noted that the phase contrast in the HRTEM image of Fig.<ref>b does not show any evident clues that could be correlated to the presence of the oxygen atomic columns which should be seen in the [100] projection of the SrTiO_3 atomic potential, as evidenced in the simulation (see Fig. S7 of Supplementary Information).
Indeed, it is worthwhile to remark that the HRTEM image is, in general, an interference pattern of the waves scattered by the atomic potentials in the specimen, and the positions of the maxima and minima in the image cannot be straightforwardly interpreted as structural features and therefore the comparison with the simulated images is needed <cit.> (see Fig. S7 of Supplementary Information).
The KEDI diffraction pattern, shown in Fig.<ref>c has been obtained by combining the measured diffraction pattern with the modulus of the HRTEM image FT, after a suitable matching procedure requiring its rotation and scaling <cit.>.
The pattern in Fig.<ref>c is the starting point for the phase retrieval process.
It is worth noting that the MPR phasing process has been carried out without any a-priori information about the phases, information which is, instead, needed by standard phasing procedures applied to KEDI <cit.>.
Further details on MPR applied to experimental data have been reported in the Supplementary Information.
Fig.<ref> shows the retrieved scattering function obtained by using MPR, where the brightness corresponds to the modulus and the hue to the phase of the retrieved real-space complex-valued scattering function.
The long range phase variation is due to the phase variation of the illumination nano-probe <cit.>.
Fig.<ref>a shows the phase retrieved amplitude for the structure of the SrTiO_3 seen in a [100] projection.
Fig.<ref>d has been obtained by subtracting the contribution of the TEM illumination function to quantify the SrTiO_3 projected potential.
The first important point that should be emphasised, is that in the phase recovered image the positions of the maxima are correctly in correspondence with the expected positions of the atomic columns seen in the [100] projection. In other words, the phase reconstructed image is a structural image of the specimen. This is a fundamental issue that paves the way for quantitative structural imaging at atomic resolution.
Indeed, as shown in Fig.<ref>d, the Sr and Ti+O columns are precisely seated on the relevant square sublattice of the SrTiO_3 in the [100] projection (see Fig.<ref>c and Fig.<ref>b). Approximately in the center of the sublattice there is a lower signal which corresponds to the oxygen columns.
A second point concerns the retrieval of quantitative information about the atomic potential. By comparing data shown in Fig.<ref>d with the expected projected atomic potential (Fig.<ref>c) we found that the ratios between the intensities of Sr, Ti+O and O columns are correctly retrieved, providing truly quantitative information on the specimen. In particular, the expected intensity ratios are I_O / I_Sr=0.35 and I_Ti+O / I_Sr=0.96 while the experimental retrieved data give I_O / I_Sr=0.35±0.05 and I_Ti+O / I_Sr=0.89±0.10.
The possibility to do KEDI experiments by MPR reconstruction paves the way for a detailed structural characterization of the investigated samples and opens up new possibilities for the understanding of the properties of the matter at sub-Ångström resolution.
§.§ Conclusions
In this work we have discussed some of the potentialities of a new phasing approach, stochastic in the exploration of the space of solutions, based on a memetic algorithm and applicable both to X-ray and electron coherent diffractive imaging. We have tested the new phasing algorithm, named Memetic Phase Retrieval, on simulated data, obtaining the result that the knowledge of the support - which defines the boundaries in the direct space of the unknown scattering function that one wants to retrieve - is less binding than previously reported. The more efficient exploration of the space of solutions, possible thanks to the stochastic genetic procedures implemented in MPR, is the decisive advantage of the new phasing method with respect to those already available. Indeed, both the possibility to correctly retrieve a real-valued scattering function from its diffraction pattern without imposing any tight support and the possibility to reconstruct a complex-valued scattering function by using a low-resolution estimate of the support are examples of the great capabilities of MPR in facing the phase problem. Our tests on simulated data demonstrate the superior capabilities of MPR for accurate phase retrievals. Indeed, by using the same computational resources, the MPR approach has proved to be much more powerful than deterministic phasing procedures in facing the phase problem.
The application to an experimental case of Keyhole Electron Diffraction Imaging has shown that the atomic potentials of SrTiO_3 can be quantitatively imaged, representing a relevant improvement for the study of the matter.
We believe that the Memetic Phase Retrieval approach could be of interest in all the fields that require accurate phase retrievals.
§ METHODS
§.§ The phase problem as an optimization problem
The ideal solution to the phase problem is a function ρ_s (x⃗), representing the spatial distribution of the sample, whose Fourier Transform (FT), ρ̃_s (q⃗), has a square modulus equal to I(q⃗), which is proportional to the experimental diffraction pattern intensity.
ρ_s (x⃗) is also assumed to be zero outside a well-defined region of the real space, the so-called support S, in order to satisfy the oversampling condition, which assures the necessary information to retrieve ρ_s (x⃗) <cit.>.
Here, x⃗=(x,y), with x and y the cartesian components of the position vector x with respect to the reference system.
Analogously, q⃗=(u,v), where u and v are the spatial frequencies components with respect to the reference axes.
In principle, this solution can be represented as an intersection of sets.
ℳ can be defined as the set of all functions ρ (x⃗) compatible with experimental data I(q⃗), i.e:
ℳ = {ρ (x⃗) : | ρ̃ (q⃗) |^2 = I(q⃗) }.
𝒮 is, instead, the set of all functions ρ_s (x⃗) satisfying the oversampling condition, which is defined by a binary function Π (x⃗) representing the object support S. So, the set 𝒮 is described by:
𝒮 = {ρ (x⃗) : ρ(x⃗)= Π (x⃗) ρ(x⃗) }.
Thanks to (<ref>) and (<ref>), the ideal solution is:
ρ_s (x⃗) = ℳ∩𝒮.
The main issue concerning experimental measurements is the presence of noise and lack of data; this, in general, implies that 𝒮 and ℳ do not intersect:
ℳ∩𝒮 = ∅.
Due to the condition (<ref>) a different way to define what we mean by “solution” is needed.
It is useful, at this point, to introduce two projection operators, which act on the function ρ (x⃗):
P_ℳ : P_ℳρ(x⃗) = ℱ^-1 [ √(I(q⃗)) e^{i arg[ρ̃ (q⃗)]}](x⃗),
P_𝒮 : P_𝒮ρ(x⃗) = Π(x⃗) ρ(x⃗).
It is trivial to prove that P_ℳ and P_𝒮 are projectors onto the sets ℳ and 𝒮, previously defined in (<ref>) and (<ref>).
Thanks to these operators, it is now possible to give a new definition of solution in place of the one given in eq. (<ref>):
ρ_s (x⃗) = min_ρ D[P_ℳρ(x⃗), P_𝒮ρ(x⃗)],
where the functional D[A,B] represents the metric of the space. Hereafter, we will refer to the eq. (<ref>) whenever we will talk about the “solution” of the problem, ρ_s (x⃗).
It is now clear that, in this framework, finding a solution to the phase problem means minimizing the distance between the sets ℳ and 𝒮: the phase problem becomes an optimization problem for the quantity D[P_ℳρ(x⃗), P_𝒮ρ(x⃗)], which can be reinterpreted as the error of the recovered density ρ(x⃗).
Different definitions of the metric imply different definitions of the error assigned to a given ρ(x⃗) and, as consequence, different optimization targets.
We can define the error functional E[ρ] as
E[ρ] = D[P_ℳρ(x⃗), P_𝒮ρ(x⃗)],
such that the eq. (<ref>) turns into
ρ_s (x⃗) = min_ρ E[ρ].
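In numpy notation the two projectors and the error functional can be sketched as follows (our illustration, not the authors' production code; I is the measured intensity, Pi the binary support function, and the quadratic metric anticipates the one used for the Self Improvement step below):

import numpy as np

def P_M(rho, I):
    # modulus projector: impose the measured Fourier amplitudes, keep the phases
    F = np.fft.fft2(rho)
    return np.fft.ifft2(np.sqrt(I) * np.exp(1j * np.angle(F)))

def P_S(rho, Pi):
    # support projector: zero the density outside the support
    return Pi * rho

def error(rho, I, Pi):
    # discretized error functional E[rho]
    return np.sum((np.sqrt(I) - np.abs(np.fft.fft2(Pi * rho)))**2)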
Standard approaches to the phase problem are mainly deterministic iterative algorithms which, going back and forth from the real to the Fourier space, try to minimize a specific error functional <cit.>. These methods are highly efficient in finding local minima, but they suffer from stagnation and the final result is highly dependent on the initial conditions <cit.>.
In order to overcome these issues, phasing procedures usually involve a lot of parallel and independent retrieval processes with different initial conditions and then selecting the one with the lowest error.
The founding idea of the new proposed phasing method is to better perform this parallel exploration of the space, through the use of a Memetic Algorithm.
§.§ Selection as a Rigged Roulette
The Selection process is a delicate step in the Evolution process. A Selection strongly favoring only the better elements in {ρ_i(x⃗) }_i=1 … N_p (i.e., elements with the better fitness value) will improve the convergence speed, but the algorithm will suffer with stagnation in local minima.
On the other side, a selection process that weakly favors those elements will have, instead, an unstable convergence and will require an excessive length of time to find the solution.
There are several ways to select elements depending on their fitness value. The one chosen in this work is the so-called “rigged roulette”.
Once an error value E_i is assigned to every ρ_i(x⃗) in {ρ_i(x⃗) }_i=1 … N_p according to eq.(<ref>), the set {ρ_i(x⃗) }_i=1 … N_p is ordered by increasing values of E_i (which is equivalent to decreasing values of the fitness).
Whenever the algorithm has to select a ρ_i(x⃗) in {ρ_i(x⃗) }_i=1 … N_p for the Crossover operation, an index is extracted through the relation
s = ⌊{rand[0,1)}^r · N_p ⌋ + 1 ,
where r ≥ 1 is related to the “strength” of the selection process. Usual values of r range from 1.5 to 2.5.
Eq. (<ref>) maps a flat distribution on [0,1) ⊂ℝ to an unbalanced distribution on { 1, …, N_p }⊂ℤ, where the higher the value of r, the greater the probability of getting a lower index and, therefore, of selecting a better element.
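In code, the selection reduces to a one-liner (our sketch; Python's 0-based indexing absorbs the "+1" of the formula above):

import random

def rigged_roulette(N_p, r=2.0):
    # index into the error-sorted population; small indices (fitter elements) are favored
    return int(random.random()**r * N_p)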
§.§ Differential Crossover
In the Natural Evolution process, the Crossover operation is the mixing of the parents' genetic pool.
In our implementation chromosomes are represented by every single (complex) value of ρ̃ (q⃗) = ℱ[ρ] (q⃗).
This means that, given two parent functions ρ_1(x⃗) and ρ_2(x⃗) selected according to their fitness, the son function ρ_son(x⃗) is created according to
ρ̃_son(q⃗) =
ρ̃_1 (q⃗),   if rand[0,1) > C
ρ̃_2 (q⃗),   otherwise,
where rand[0,1) is a random number with flat distribution in [0,1) and C is a balancing coefficient between 0 and 1.
An improvement in performances can be obtained using the so called Differential Crossover <cit.> where, instead of selecting two parents, four parents, ρ_1(x⃗) ρ_2(x⃗) ρ_3(x⃗) and ρ_4(x⃗), are chosen.
The differential crossover acts as follows:
ρ̃_son(q⃗) =
ρ̃_1 (q⃗),   if rand[0,1) > C
ρ̃_2 (q⃗) + D_c · [ρ̃_3 (q⃗)-ρ̃_4 (q⃗)],   otherwise,
where D_c is called differential coefficient with typical values between 0.5 and 1.5.
The population of sons can be, in general, smaller than the whole population. This means that if the population of sons has N_s = G · N_p elements, the remaining N_p - N_s parents, chosen randomly, will survive to the next generation.
The parameter G, which can be called genetic fraction, has values between 0 and 1: G=1 means that all of the parent population is replaced by the sons, while G=0 means that no sons are created, the genetic operators are switched off and we get a situation equivalent to the standard deterministic approach, as depicted in Fig.<ref>.
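A compact numpy sketch of the differential crossover (our illustration; F1, ..., F4 denote the Fourier transforms of the four selected parents):

import numpy as np

def differential_crossover(F1, F2, F3, F4, C=0.5, D_c=1.0):
    mask = np.random.rand(*F1.shape) > C           # keep the chromosome of parent 1
    return np.where(mask, F1, F2 + D_c * (F3 - F4))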
§.§ Mutation
Every element in the population {ρ_i(x⃗) }_i=1 … N_p may be subjected to a stochastic modification.
In this work, the mutation operation has been switched off because it does not introduce a remarkable improvement in the performance of MPR on treated data. Different implementations of the mutation operator are under study and will be topics of future works.
§.§ Self improvement via deterministic optimization
Optimization algorithms such as Error Reduction (ER) and Hybrid Input-Output (HIO) are efficient methods to find local minima or, more precisely, minima bounded to a region of the configuration space near the starting point.
These algorithms are strictly bounded to the metric D[A(x⃗), B(x⃗)] defined as:
D[A(x⃗), B(x⃗)] = ∫ dq⃗ [ |ℱ[A](q⃗)| - |ℱ[B](q⃗)| ]^2.
This implies that the local optimization target is the functional E[ρ] defined as:
E[ρ] = D[P_ℳρ(x⃗), P_𝒮ρ(x⃗)] =
= ∫ dq⃗ [ √(I(q⃗)) - |ℱ[Πρ](q⃗)| ]^2.
In our algorithm this local optimization is carried on by elaborating every ρ_i(x⃗) with N_HIO iterations of the Hybrid Input-Output algorithm and N_ER iterations of the Error Reduction algorithm.
In this work the global optimization target, i.e., the fitness of MPR, coincides with the local optimization target of ER and HIO algorithms just shown in (<ref>).
This is not to be taken for granted because, in general, we can define any arbitrary global optimization target different from the local one (<ref>).
We are testing different fitness definitions for the global optimization, like Csiszar's Information Divergence <cit.>, and different local optimization algorithms.
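For concreteness, a sketch of the Self Improvement step in terms of the textbook HIO and ER updates (our illustration; beta is the usual HIO feedback parameter, whose value is not prescribed in the text):

import numpy as np

def P_M(rho, I):                                   # modulus projector, as above
    F = np.fft.fft2(rho)
    return np.fft.ifft2(np.sqrt(I) * np.exp(1j * np.angle(F)))

def self_improve(rho, I, Pi, n_hio=20, n_er=5, beta=0.9):
    for _ in range(n_hio):                         # Hybrid Input-Output updates
        rho_m = P_M(rho, I)
        rho = np.where(Pi.astype(bool), rho_m, rho - beta * rho_m)
    for _ in range(n_er):                          # Error Reduction updates
        rho = Pi * P_M(rho, I)
    return rho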
§.§ The choice of the initial guess
The phase retrieval process can be divided into two main steps.
The first one concerns the choice of the initial population of densities {ρ_i(x⃗) }_i=1 … N_p. In the second step, we have to choose the parameters both of the genetic and the local optimization algorithms.
Standard approaches like Hybrid Input Output and Error Reduction need a single initial guess, which represent the first estimation of the solution.
MPR approach requires, instead, a set of initial guesses.
This set is produced from a single guess, simply randomly shifting every phase.
This means that, given an initial guess ρ_init (x⃗), every ρ_i(x⃗) in {ρ_i(x⃗) }_i=1 … N_p is created via the relation
ρ̃_i(q⃗_j) = √(I(q⃗_j)) exp(i ϕ_j) with ϕ_j = arg[ρ̃_init(q⃗_j)] + R_c · rand[-π, π].
The parameter R_c, which has values between 0 and 1, depends on the accuracy of the initial guess ρ_init (x⃗).
If ρ_init (x⃗) is already a good estimation of the solution, it will be useful to set a low value (usually near to 0) for coefficient R_c in order to well explore the space near ρ_init (x⃗).
If, instead, ρ_init (x⃗) is considered to be far from the solution, it is useful to set a value of R_c near to 1, in order to explore also areas of the space far from ρ_init (x⃗).
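A sketch of this initialization in numpy (our illustration of the relation above):

import numpy as np

def initial_population(rho_init, I, N_p, R_c=0.5):
    phi0 = np.angle(np.fft.fft2(rho_init))
    pop = []
    for _ in range(N_p):
        phi = phi0 + R_c * np.random.uniform(-np.pi, np.pi, phi0.shape)
        pop.append(np.fft.ifft2(np.sqrt(I) * np.exp(1j * phi)))
    return pop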
§.§ KEDI experiment
A KEDI experiment was performed following the procedure reported in <cit.>, which enables one to deliver a low dose of electrons to the specimen. The experiment requires the acquisition of an HRTEM image and a diffraction pattern by using the same electron optical set up. The experiments were performed by using a JEOL JEM 2010 F UHR operated at 200 kV. The cathode is a high-coherence Schottky type. The microscope has an objective lens with low spherical aberration coefficient C_s=0.47 ± 0.01 mm and a relevant resolution at optimum defocus in HRTEM of 190 pm. The environment around the microscope is thermally and mechanically very stable, allowing us to achieve in the scanning TEM (STEM) high angle annular dark field (HAADF) mode a resolution of 126 pm, which is the theoretical limit for the used electron optical set up <cit.>.
In a KEDI experiment the optical setup produces an electron nano-beam. The latter defines the mathematical support of the scattering function for the illuminated nanometric region of the extended crystal. As in a microscope the field of view is proportional to the inverse of magnification, the size of the illumination function (beam size) is somehow related to the spatial resolution. The electron beam size S (which defines the support) is directly related to the final resolution to be achieved and to the size of the detector used to record HRTEM image and n-ED pattern. In fact, if the highest frequency of the diffraction signal recorded in the reciprocal space is ρ^-1pm^-1, we should have ρ at least two or three times the pixel size Δ_map of the phased map to have an electron projected potential two-dimensional map calculated with a sufficient number of points to be plotted continuously.
For example, if we reached a final resolution – after the phase retrieval process – of ρ=70 pm we should have Δ_map∼ 25 ÷ 30 pm which, multiplied by the detector pixel number along a line, N=1024 in our experimental case, would lead to a spatial region O (scattering region plus non-illuminated surrounding region) of ∼ 25 ÷ 30 nm in size. Moreover, for the Nyquist theorem’s requirement, the illuminated beam size S (the support) has to be less than 2^-1/2 O, i.e., at maximum ∼ 17 ÷ 20 nm in size. Hence, in order to properly run the phase retrieval algorithms, the illuminated region of the sample in the direct HRTEM image has to be properly chosen with respect to the whole detector area to satisfy the above KEDI oversampling condition.
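The arithmetic of this worked example can be checked directly (numbers taken from the text):

rho = 70                       # target resolution after phasing, pm
delta_map = 25                 # phased-map pixel size, pm (rho ~ 2-3 times delta_map)
N = 1024                       # detector pixels per line
O = delta_map * N / 1000       # size of the spatial region, nm  -> 25.6
S_max = O / 2**0.5             # maximum beam size from the Nyquist requirement, nm
print(O, round(S_max, 1))      # 25.6 18.1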
Here, the cathode emission condition and the electron optical illumination system of the microscope has been experimentally set up to increase the probe coherence on the smallest illuminated area achievable <cit.>. The microscope has an illumination system composed by three magneto-static lenses. These lenses were operated independently, together with the electrostatic lens of the emitter, to produce the smallest-sized probe on the focal plane of the pre-field of the objective lens and hence the smallest-sized coherent parallel beam on the specimen. The emission conditions of the microscope cathode were chosen to increase the coherence of the electron probe by decreasing the temperature of the emitting tip. We used a heating current for the filament that halves the emission current with respect to the standard operation, decreasing at the same time the electron dose delivered to the specimen. The current density on the specimen was below the detection limit of the amperometer connected to the phosphorus screen of the microscope ( < 0.1 pA cm^-2), allowing us to acquire the relevant diffraction pattern on the 1024x1024 Charge-Coupled Device (CCD) camera without using the beam stopper for the direct beam. Thus all the diffracted intensities were available for the phasing process and a very small dose is delivered to the specimen. The small electron probe, without any changes, was used to acquire both HRTEM image and diffraction from the same area of the specimen. Fig.<ref>a shows the HRTEM image. The illuminated area is 10±2nm. The interference pattern of the phase contrast HRTEM image formed in the image plane of the objective lens is shown at a higher magnification in Fig.<ref>b In Fig.<ref>c the diffraction pattern formed in the back focal plane of the objective lens is shown. The central part of the pattern has been replaced, after proper scaling and rotation, by the FFT of the HRTEM image in Fig.<ref>a, as established in EDI method <cit.>. The highest Miller’s index spot measurable in the pattern is the (5,5,0), which corresponds to a spacing of 55 pm. Thus, the expected gain in resolution of the maximum spatial frequency contained in the diffraction pattern (∼NA_diff^-1) with respect to that corresponding to the FT of the HRTEM image is about four times.
§ ACKNOWLEDGMENTS
This work was supported by the NOXSS PRIN (2012Z3N9R9) project and Progetto premiale MIUR 2013 USCEF.
We acknowledge the CINECA and Regione Lombardia LISA award LI05p-PUMAS,
for the availability of high-performance computing resources and support.
§ AUTHOR CONTRIBUTIONS STATEMENT
A.C. and D.E.G. conceived the algorithm. A.C., supervised by D.E.G. and L.D., developed, implemented and tested the algorithm, and carried out the reconstructions. E.C. designed and performed the TEM EDI experiments, contributing to the optimization of the relevant data reduction and phasing. L.D. and F.S. made data reduction. All authors have equally contributed to the preparation and the revision of the text.
§ ADDITIONAL INFORMATION
Competing financial interests: The authors declare no competing financial interests.
How to cite this article: Colombo, A. et al. Facing the phase problem in Coherent Diffractive Imaging via Memetic Algorithms. Sci. Rep. 7, 42236; doi:10.1038/srep42236 (2017).
Sayre1952 Sayre, D. Some implications of a theorem due to Shannon. Acta Crystallographica 5, 843 (1952). http://dx.doi.org/10.1107/S0365110X52002276
Miao1999 Miao, J., Charalambous, P., Kirz, J. & Sayre, D. Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens. Nature 400, 342–344 (1999). http://dx.doi.org/10.1038/22498
Zuo2003 Zuo, J., Vartanyants, I., Gao, M., Zhang, R. & Nagahara, L. Atomic resolution imaging of a carbon nanotube from diffraction intensities. Science 300, 1419–1421 (2003).
Huang2009 Huang, W., Zuo, J., Jiang, B., Kwon, K. & Shim, M. Sub-ångström-resolution diffractive imaging of single nanocrystals. Nature Physics 5, 129–133 (2009).
DeCaro2010 De Caro, L., Carlino, E., Caputo, G., Cozzoli, P. D. & Giannini, C. Electron diffractive imaging of oxygen atoms in nanocrystals at sub-angstrom resolution. Nat. Nano. 5, 360–365 (2010). http://dx.doi.org/10.1038/nnano.2010.55
Spence1988 Spence, J. C. Experimental high-resolution electron microscopy (Oxford University Press, 1988).
Decaro2013 De Caro, L., Carlino, E., Siliqi, D. & Giannini, C. Coherent diffractive imaging: From nanometric down to picometric resolution. In Handbook of Coherent-Domain Optical Methods, 291–314 (Springer, 2013).
Fienup1982 Fienup, J. R. Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758–2769 (1982). http://ao.osa.org/abstract.cfm?URI=ao-21-15-2758
Abbey2008 Abbey, B. et al. Keyhole coherent diffractive imaging. Nature Physics 4, 394–398 (2008).
DeCaro2012 De Caro, L., Carlino, E., Vittoria, F. A., Siliqi, D. & Giannini, C. Keyhole electron diffractive imaging (KEDI). Acta Crystallographica Section A 68, 687–702 (2012). http://dx.doi.org/10.1107/S0108767312031832
shechtman2015 Shechtman, Y. et al. Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Processing Magazine 32, 87–109 (2015).
Gerchberg1972 Gerchberg, R. & Saxton, W. A practical algorithm for the determination of the phase from image and diffraction plane pictures. Optik (Jena) 35, 237 (1972).
Marchesini2007 Marchesini, S. Invited article: A unified evaluation of iterative projection algorithms for phase retrieval. Review of Scientific Instruments 78 (2007). http://scitation.aip.org/content/aip/journal/rsi/78/1/10.1063/1.2403783
chen2007 Chen, C.-C., Miao, J., Wang, C. & Lee, T. Application of optimization technique to noncrystalline x-ray diffraction microscopy: Guided hybrid input-output method. Physical Review B 76, 064113 (2007).
Goldberg1989 Goldberg, D. E. Genetic Algorithms in Search, Optimization and Machine Learning, 1st edn. (Addison-Wesley Longman, Boston, MA, USA, 1989).
thust1997 Thust, A., Lentzen, M. & Urban, K. The use of stochastic algorithms for phase retrieval in high resolution transmission electron microscopy. Scanning Microscopy 11, 437–454 (1997).
nicholson1999 Nicholson, J., Omenetto, F., Funk, D. & Taylor, A. Evolving FROGs: phase retrieval from frequency-resolved optical gating measurements by use of genetic algorithms. Optics Letters 24, 490–492 (1999).
taylor2006 Taylor, J. R., King III, B. A., Steincamp, J. & Rakoczy, J. Genetic algorithm phase retrieval for the systematic image-based optical alignment test bed. Publications of the Astronomical Society of the Pacific 118, 319 (2006).
li2011 Li, N., Gao, P., Lu, Y., Yu, W. & Yu, B. Phase retrieval for hard x-ray in-line phase contrast imaging based on a parallel hybrid genetic algorithm. In Computational Sciences and Optimization (CSO), 2011 Fourth International Joint Conference on, 66–70 (IEEE, 2011).
ong2010 Ong, Y.-S., Lim, M. H. & Chen, X. Research frontier: memetic computation–past, present & future. IEEE Computational Intelligence Magazine 5, 24 (2010).
goldberg1989sizing Goldberg, D. E. Sizing populations for serial and parallel genetic algorithms. In Proceedings of the 3rd International Conference on Genetic Algorithms, 70–79 (Morgan Kaufmann, 1989).
moscato1989 Moscato, P. et al. On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. Caltech Concurrent Computation Program, C3P Report 826 (1989).
renders1996 Renders, J.-M. & Flasse, S. P. Hybrid methods using genetic algorithms for global optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 26, 243–258 (1996).
el2006 El-Mihoub, T. A., Hopgood, A. A., Nolle, L. & Battersby, A. Hybrid genetic algorithms: A review. Engineering Letters 13, 124–137 (2006).
Neri2012 Neri, F. & Cotta, C. Memetic algorithms and memetic computing optimization: A literature review. Swarm and Evolutionary Computation 2, 1–14 (2012).
Marchesini2003 Marchesini, S. et al. X-ray image reconstruction from a diffraction pattern alone. Phys. Rev. B 68, 140101 (2003). http://link.aps.org/doi/10.1103/PhysRevB.68.140101
Smith2007 Smith, J. E. Coevolving memetic algorithms: a review and progress report. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 37, 6–17 (2007).
Millane2015 Millane, R. P. & Arnal, R. D. Uniqueness of the macromolecular crystallographic phase problem. Acta Crystallographica Section A: Foundations and Advances 71 (2015).
SIPI1997 Weber, A. G. The USC-SIPI image database version 5. USC-SIPI Rep. 315, 1–24 (1997).
Banerjee2015 Banerjee, H., Banerjee, S., Randeria, M. & Saha-Dasgupta, T. Electronic structure of oxide interfaces: A comparative analysis of GdTiO3/SrTiO3 and LaAlO3/SrTiO3 interfaces. Scientific Reports 5 (2015).
Varela2006 Varela, M. et al. Atomic scale characterization of complex oxide interfaces. Journal of Materials Science 41, 4389–4393 (2006).
Muller2004 Muller, D. A., Nakagawa, N., Ohtomo, A., Grazul, J. L. & Hwang, H. Y. Atomic-scale imaging of nanoengineered oxygen vacancy profiles in SrTiO3. Nature 430, 657–661 (2004).
Carlino2014 Carlino, E. TEM for characterization of semiconductor nanomaterials. In Transmission Electron Microscopy Characterization of Nanomaterials, 89–138 (Springer, 2014).
Spence2013 Spence, J. C. High-resolution electron microscopy (OUP Oxford, 2013).
DeCaro2016 De Caro, L., Scattarella, F. & Carlino, E. Determination of the projected atomic potential by deconvolution of auto-correlation function of TEM electron nano-diffraction patterns. Crystals 6, 141 (2016).
Storn1997 Storn, R. & Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization 11, 341–359 (1997).
Csiszar1991 Csiszar, I. Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. Ann. Statist. 19, 2032–2066 (1991). http://dx.doi.org/10.1214/aos/1176348385
Carlino2005 Carlino, E. & Grillo, V. In Proceedings MCEM VII, Portoroze (Si), 159 (2005).
|
http://arxiv.org/abs/1701.07874v3 | 20170126210437 | Closing in on Resonantly Produced Sterile Neutrino Dark Matter | [
"John F. Cherry",
"Shunsaku Horiuchi"
] | hep-ph | [
"hep-ph",
"astro-ph.CO"
] |
|
http://arxiv.org/abs/1701.07902v1 | 20170126233502 | On discrete structures in finite Hilbert spaces | [
"Ingemar Bengtsson",
"Karol Zyczkowski"
] | quant-ph | [
"quant-ph",
"math-ph",
"math.MP"
] |
^1Fysikum, Stockholm University, Sweden
^2Jagiellonian University, Cracow, Poland
^3Center for Theoretical Physics,
Polish Academy of Sciences Warsaw, Poland
We present a brief review of discrete structures in a finite Hilbert space,
relevant for the theory of quantum information.
Unitary operator bases, mutually unbiased bases,
Clifford group and stabilizer states, discrete Wigner function,
symmetric informationally complete measurements,
projective and unitary t–designs are discussed. Some recent results in the
field are covered and several important open questions are
formulated. We advocate a geometric approach to the subject and
emphasize numerous links to various mathematical problems.
On discrete structures in finite Hilbert spaces
Ingemar Bengtsson^1 and Karol Życzkowski^2,3
January 27, 2017
================================================
e-mail: ingemar@physto.se karol@cft.edu.pl
§ INTRODUCTION
These notes are based on a new chapter written for the second edition
of our book Geometry of Quantum States: An Introduction to Quantum Entanglement <cit.>.
The book is written at the graduate level for a reader familiar with the
principles of quantum mechanics. It is aimed first of all at readers who
do not read the mathematical literature every day, but we hope that students
of mathematics and of the information sciences will find it useful as well,
since they may also wish to learn about quantum entanglement.
Individual chapters of the book are to a large extent independent of each
other. For instance, we hope that the new chapter presented here might become
a source of information on recent developments on discrete structures in
finite Hilbert space, also for experts working in the field.
Therefore we have compiled these notes, which aim to present
an introduction to the subject as well as an up to date
review on basic features of objects belonging to the Hilbert space
and important for the field of quantum information processing.
Quantum state spaces are continuous, but they have some intriguing
realizations of discrete structures hidden inside. We will discuss some of
them, starting from unitary operator bases, a notion of strategic importance
in the theory of entanglement, signal processing, quantum computation, and
more. The structures we are aiming at are known under strange acronyms such
as `MUB' and `SIC'. They will be spelled out in due course, but in most of
the chapter we let the Heisenberg groups occupy the centre stage. It seems
that the Heisenberg groups understand what is going on.
All references to equations or section numbers refer to the draft of the
second edition of the book.
To give the reader a better orientation on the topics
covered, we provide its table of contents in Appendix A.
The second edition of the book includes also
a new chapter 17 on multipartite entanglement <cit.>
and several other new sections.
§ UNITARY OPERATOR BASES AND THE HEISENBERG GROUPS
Starting from a Hilbert space H of dimension N we have another
Hilbert space of dimension N^2 for free, namely the Hilbert-Schmidt space
of all complex operators acting on H, canonically
isomorphic to the Hilbert space H⊗ H^*. It was introduced
in Section 8.1
and further explored in Chapter 9.
Is it possible to find an orthonormal basis in H⊗ H^*
consisting solely of unitary operators? A priori this looks doubtful, since
the set of unitary matrices has real dimension N^2, only one half the
real dimension of H⊗ H^*. But physical observables
are naturally associated to unitary operators, so if such bases exist they
are likely to be important. They are called
unitary operator bases,
were introduced by Schwinger (1960) <cit.>,
and heavily used by him <cit.>.
In fact unitary operator bases do exist, in great abundance.
And we can ask for more <cit.>.
We can insist that the elements of the basis form a group. More
precisely, let G̅ be a finite group of order N^2, with identity element e.
Let U_g be unitary operators giving a projective representation of G̅,
such that
1. U_e is the identity matrix.
2. g≠ e ⇒ TrU_g = 0.
3. U_gU_h = λ (g,h)U_gh, where |λ (g,h)| = 1.
(So λ is a phase factor.) Then this collection of unitary
matrices is a unitary operator basis.
To see this, observe that
U_g^† = λ (g^-1,g)^-1U_{g^-1} .
It follows that
g^-1h ≠ e ⇒ Tr U_g^† U_h = 0 ,
and moreover that Tr U_g^† U_g = Tr U_e = N. Hence these
matrices are orthogonal with respect to the Hilbert-Schmidt inner product from
Section 8.1.
Unitary operator bases arising from a group in this way
are known as unitary operator bases of group
type, or as nice error bases—a name that comes from the theory of
quantum computation (where they are used to discretize errors, thus making the
latter correctable—as we will see in Section 17.7).
The question of the existence of nice error bases is a question in group theory.
First of all we note that there are two groups involved in the construction,
the group G which is faithfully represented by the above formulas, and the
collineation group G̅ which is the group G with all phase
factors ignored.
The group G̅ is
also known, in this context, as the index group.
Unless N is a prime number (in which case the nice error bases are
essentially unique), there is a long list of possible index groups. An
abelian index group is necessarily of the form H× H, where H
is an abelian group. Non-abelian index groups are more difficult to
classify, but it is known that every index group must be
soluble. The classification problem has been studied by
Klappenecker and Rötteler <cit.>, making use of the
classification of finite groups. They also maintain an on-line catalogue.
Soluble groups will reappear in Section <ref>;
for the moment let us just mention that all abelian groups are soluble.
The paradigmatic example of a group G giving rise to a unitary operator basis
is the Weyl–Heisenberg group H(N).
This group appeared in many different
contexts, starting in nineteenth century algebraic geometry, and in the beginnings
of matrix theory <cit.>. In the twentieth century it took on a major role
in the theory of elliptic curves <cit.>. Weyl (1932) <cit.> studied
its unitary representations in his book on quantum mechanics.
The group H(N) can be presented as follows.
Introduce three group elements X, Z, and ω.
Declare them to be of order N:
X^N = Z^N = ω^N = 1 .
Insist that ω belongs to the centre of the group (it
commutes with everything):
ω X = X ω , ω Z = Z ω .
Then we impose one further key relation:
ZXZ^-1X^-1 = ω⇔
ZX = ω XZ .
The Weyl–Heisenberg group consists of all `words' that can be written down
using the three `letters' ω, X, Z, subject to the above relations. It requires
no great effort to see that it suffices to consider N^3 words of the form
ω^tX^rZ^s, where t,r,s are integers modulo N.
The Weyl–Heisenberg group admits an essentially unique unitary representation in
dimension N. First we represent ω as multiplication with a phase factor
which is a primitive root of unity, conveniently chosen to be
ω = e^2π i/N .
If we further insist that Z be represented by a diagonal operator we
are led to the clock-and-shift representation
Z|i⟩ = ω^i|i⟩ , X|i⟩
= |i+1⟩ .
The basis kets are labelled by integers modulo N. A very important area
of application for the Weyl–Heisenberg group is that of time-frequency analysis of
signals; then the operators X and Z may represent time delays and Doppler
shifts of a returning radar wave form. But here we stick to the language of quantum
mechanics and refer to Howard et al. <cit.> for an introduction to
signal processing and radar applications.
To orient ourselves we first write down the matrix form of the generators for N = 3,
which is a good choice for illustrative purposes:
Z = ( [ 1 0 0; 0 ω 0; 0 0 ω^2 ]) , X = ( [ 0 0 1; 1 0 0; 0 1 0 ]) .
In two dimensions Z and X become the Pauli matrices σ_z and
σ_x respectively.
We note
the resemblance between eq. (<ref>) and a special case of the equation that
defines the original Heisenberg group, eq. (6.4). This explains why Weyl
took this finite group as a toy model of the latter. We also note that although
the Weyl–Heisenberg group has order N^3 its collineation group—the group modulo
phase factors, which is the group acting on projective space—has order
N^2. In fact the collineation group is an abelian product of two cyclic groups,
Z_N× Z_N.
The slight departure from commutativity ensures an interesting representation theory.
There is a complication to notice at this point: because det Z = det X =
(-1)^{N+1}, it matters if N is odd or even. If N is odd the Weyl–Heisenberg
group is a subgroup of
SU(N), but if N is even it is a subgroup of U(N) only. Moreover, if N is
odd the Nth power of every group element is the identity, but if N is even
we must go to the 2Nth power to say as much. (For N = 2 we find X^2 = Z^2
= 1 but (ZX)^2 = (iσ_y)^2 = - 1.) These annoying
facts make even dimensions significantly more difficult to handle, and leads to
the definition
N̅ = N if N is odd , N̅ = 2N if N is even .
Keeping the complication in mind we turn to the problem of choosing suitable phase
factors for the N^2 words that will make up our nice error basis. The peculiarities
of even dimensions suggest an odd-looking move.
We introduce the phase factor
τ≡ - e^π i/N = - √(ω) ,
τ^N̅ = 1 .
Note that τ is an Nth root of unity only if N is odd.
Then we define the displacement operators
D_ p≡ D_r,s =
τ^rsX^rZ^s , 0 ≤ r,s ≤N̅-1 .
These are the words to use in the error basis. The double notation—using
either D_ p or D_r,s—emphasizes
that it is convenient to view r,s as the two components of a `vector' p.
Because of the phase factor τ the displacement operators are not actually
in the Weyl–Heisenberg group, as originally defined, if N is even. So there
is a price to pay for this, but in return we get the group law in the form
D_r,sD_u,v = ω^us-vrD_u,vD_r,s =
τ^us-vrD_r+u,s+v
⇔
D_ pD_ q = ω^Ω ( p, q)D_ qD_ p
= τ^Ω ( p, q)D_ p + q .
The expression in the exponent of the phase factors is anti-symmetric
in the `vectors' that label the displacement operators. In fact Ω ( p,
q) is a symplectic form (see Section 3.4),
and a very nice object to encounter.
A desirable by-product of our conventions is
D^-1_r,s = D_r,s^† = D_-r,-s .
Another nice feature is that the phase factor ensures that all displacement
operators are of order N.
On the other hand we have D_r+N,s = τ^NsD_r,s. This means that we have to live
with a treacherous sign if N is even, since the displacement operators
are indexed by integers modulo 2N in that case. Even dimensions are unavoidably
difficult to deal with.
(We do not know who first commented that “even dimensions
are odd”, but he or she had a point.)
Finally, and importantly, we observe that
Tr D_r,s = τ^rs∑_i=0^N-1ω^si⟨ i|i+r⟩ = Nδ_r,0δ_s,0 .
Thus all the displacement operators except the identity are represented
by traceless matrices, which means that the Weyl–Heisenberg group does indeed
provide a unitary operator basis of group type, a nice error basis. Any complex
operator A on H_N can be written, uniquely, in the form
A = ∑_r=0^N-1∑_s=0^N-1a_rsD_r,s ,
where the expansion coefficients
a_rs are complex numbers given by
a_rs = 1/N Tr D_r,s^† A = 1/N Tr D_-r,-s A .
Again we are using the Hilbert-Schmidt scalar product from Section 8.1.
Such expansions were called `quantum
Fourier transformations' in Section 6.2.
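As a concrete illustration (not part of the formal development), the following short numerical sketch, in Python with numpy and with helper names of our own, builds the displacement operators for N = 3 and verifies the group law, the trace orthogonality, and the expansion formula:

import numpy as np

N = 3                                        # an odd dimension, so tau^N = 1
omega = np.exp(2j * np.pi / N)
tau = -np.exp(1j * np.pi / N)

Z = np.diag([omega**i for i in range(N)])    # the clock matrix
X = np.roll(np.eye(N), 1, axis=0)            # the shift matrix, X|i> = |i+1>

def D(r, s):
    # the displacement operator D_{r,s} = tau^{rs} X^r Z^s
    return tau**(r * s) * np.linalg.matrix_power(X, r) @ np.linalg.matrix_power(Z, s)

# group law: D_{r,s} D_{u,v} = tau^{us - vr} D_{r+u,s+v}
r, s, u, v = 1, 2, 2, 1
assert np.allclose(D(r, s) @ D(u, v),
                   tau**(u * s - v * r) * D((r + u) % N, (s + v) % N))

# trace orthogonality: Tr D_{a,b}^dagger D_{1,2} = N delta_{a,1} delta_{b,2}
for a in range(N):
    for b in range(N):
        t = np.trace(D(a, b).conj().T @ D(1, 2))
        assert abs(t - (N if (a, b) == (1, 2) else 0)) < 1e-12

# expansion of an arbitrary operator, a_{rs} = (1/N) Tr D_{r,s}^dagger A
A = np.random.randn(N, N) + 1j * np.random.randn(N, N)
coeff = {(p, q): np.trace(D(p, q).conj().T @ A) / N
         for p in range(N) for q in range(N)}
assert np.allclose(A, sum(coeff[p, q] * D(p, q) for p in range(N) for q in range(N)))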
§ PRIME, COMPOSITE, AND PRIME POWER DIMENSIONS
The Weyl–Heisenberg group cares deeply whether the dimension N is given by a
prime number (denoted p), or by a composite number (say N = p_1p_2 or N = p^K).
If N = p then every element in the group has order N (or N̅ if N = 2),
except of course for the identity element. If on the other hand N = p_1p_2 then
(Z^p_1)^p_2 = 1, meaning that the element Z^p_1 has order p_2
only. This is a striking difference between prime and composite dimensions.
The point is that we are performing arithmetic modulo N, which means that we regard
all integers that differ by multiples of N as identical. With this understanding, we
can add, subtract, and multiply the integers freely. We say that integers modulo N
form a ring, just as the ordinary integers do.
If N is composite it can happen that xy = 0 modulo N,
even though the integers x
and y are non-zero. For instance, 2· 2 = 0 modulo 4. As Problem
12.2
should make clear, a ring is all we need to define
a Heisenberg group, so we can proceed anyway. However, things work more smoothly if
N equals a prime number p, because of the striking fact that every non-zero integer
has a multiplicative inverse modulo p. Integers modulo p form a field, which
by definition is a ring whose non-zero
members form an abelian group under multiplication, so that we can perform division
as well. In a field—the set of rational numbers is a standard example—we can
perform addition, subtraction, multiplication, and division. In a ring—such as
the set of all the integers—division sometimes fails. The distinction becomes
important in Hilbert space once the latter is being organized by the Weyl–Heisenberg group.
The field of integers modulo a prime p is denoted ℤ_p. When the
dimension N = p the
operators D_ p can be regarded as indexed by elements p of a two
dimensional vector space. (We use the same notation for arbitrary N, but in
general we need quotation marks around the word `vector'. In a true vector space
the scalar numbers must belong to a field.) Note that this vector space contains p^2
vectors only. Now, whenever we encountered a vector space
in this book, we tended to focus on the set of lines through its origin. This
is a fruitful thing to do here as well. Each such line consists of all vectors
p obeying the equation a· p = 0 for some fixed vector
a. Since a is determined only up to an overall factor we obtain
p+1 lines in all, given by
a = (a_1, a_2)^T ∈ { (0,1)^T , (1,0)^T , (1,1)^T , … , (1,p-1)^T } .
This set of lines through the origin is a projective space with only
p+1 points. Of more immediate interest is the fact that these lines through
the origin correspond to cyclic subgroups of the Weyl–Heisenberg group, and
indeed to its maximal abelian subgroups. (Choosing a = (1,x) gives the
cyclic subgroup generated by D_-x,1.)
The joint eigenbases of such subgroups are related in an interesting
way, which will be the subject of Section <ref>.
Readers who want a simple story are advised to ignore everything we say about
non-prime dimensions. With this warning, we ask:
What happens when the dimension is not prime? On the physical side this
is often the case: we may build a Hilbert space of high dimension by taking
the tensor product of a large number of Hilbert spaces which individually
have a small dimension (perhaps to have a Hilbert space suitable for describing
many atoms). This immediately suggests that it might be interesting to study the
direct product H(N_1)× H(N_2) of two Weyl–Heisenberg groups, acting
on the tensor product space H_N_1⊗ H_N_2. (Irreducible
representations of a direct product of groups always act on the tensor product of
their representation spaces.) Does this give something new?
On the group theoretical side it is known that the cyclic groups Z_N_1N_2 and
Z_N_1× Z_N_2 are isomorphic if and only if the integers N_1 and N_2
are relatively prime, that is to say if they do not contain any common factor. To see
why, look at the examples N = 2· 2 and N = 2· 3. Clearly Z_2× Z_2
contains only elements of order 2, hence it cannot be isomorphic to Z_4. On the other
hand it is easy to verify that Z_2× Z_3 = Z_6. This observation carries over
to the Weyl–Heisenberg group: the groups H(N_1N_2) and H(N_1)× H(N_2)
are isomorphic if and only if N_1 and N_2 are relatively prime. Thus, in many
composite dimensions including the physically interesting case N = p^K we have a
choice of more than one Heisenberg group to play with. They all form nice error bases.
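The order-counting argument behind the isomorphism criterion is easily mechanized; a small Python sketch of our own (math.lcm requires Python 3.9 or later) compares the multisets of element orders, consistent with the claim:

from math import gcd, lcm

def orders_cyclic(n):                  # element orders in Z_n
    return sorted(n // gcd(x, n) for x in range(n))

def orders_product(n1, n2):            # element orders in Z_n1 x Z_n2
    return sorted(lcm(n1 // gcd(x, n1), n2 // gcd(y, n2))
                  for x in range(n1) for y in range(n2))

print(orders_cyclic(6) == orders_product(2, 3))    # True:  Z_6 is Z_2 x Z_3
print(orders_cyclic(4) == orders_product(2, 2))    # False: Z_4 is not Z_2 x Z_2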
In applications to signal processing one sticks to H(N) also when
N is large and composite, but in many-body physics and in the theory of quantum
computation—carried out on tensor products of K qubits, say—it is the
multipartite Heisenberg group
H(p)^⊗ K = H(p)× H(p)×…× H(p) that comes to the fore.
There is a way of looking at the group H(p)^⊗ K which is quite analogous to
our way of looking at H(p). It employs finite fields with p^K elements, and
de-emphasizes the tensor product structure of the representation space—which in
some situations is a disadvantage, but it is worth explaining anyway, especially since
we will be touching on the theory of fields in Section <ref>.
We begin by recalling how the field of complex numbers is
constructed. One starts from the real field R, now called the
ground field, and observes that the polynomial equation P(x) ≡ x^2+1 = 0
does not have a real solution. To remedy this the number i is introduced as a
root of the equation P(x) = 0. With
this new number in hand the complex number field C is constructed as a
two-dimensional vector space over R, with 1 and i as basis vectors.
To multiply two complex numbers together we calculate modulo the polynomial P(x),
which simply amounts to setting i^2 equal to -1 whenever it occurs. The finite
fields GF(p^K) are defined in a similar way using the finite field Z_p
as a ground field. (Here `G' stands for Galois. For
a lively introduction to finite fields we refer to Arnold <cit.>. For quantum
mechanical applications we recommend the review by Vourdas <cit.>).
For an example, we may choose p = 2. The polynomial P(x) = x^2+x+1
has no zeros in the ground field Z_2, so we introduce an `imaginary'
number α which is declared to obey P(α ) = 0. Adding and multiplying in all
possible ways, and noting that α^2 = α +1 (using binary
arithmetic), we obtain a larger field having
2^2 elements of the form x_1+x_2α, where x_1 and x_2 are integers modulo 2.
This is the finite field GF(p^2). Interestingly its three non-zero elements can also
be described as α, α^2 = α + 1, and α^3 = 1. The first
representation is convenient for addition, the second for multiplication. We have a
third description as the set of binary sequences { 00, 01, 10, 11}, and consequently
a way of adding and multiplying binary sequences together. This is very useful in the
theory of error-correcting codes <cit.>. But this is by the way.
By pursuing this idea it has been proven that there exists a finite field GF(N)
with N elements if and only if N=p^K is a power of prime number
p, and moreover that this finite field is unique for a given N (which is by no means
obvious because there will typically exist several polynomials of the same degree
having no solution modulo p). These fields can be regarded as K dimensional vector
spaces over the ground field Z_p, which is useful when we do addition.
When we do multiplication it is helpful to observe the (non-obvious) fact that
finite fields always contain a primitive element in terms of which every
non-zero element of the field can be written as the primitive
element raised to some integer power, so the non-zero elements form a cyclic group. Some
further salient facts are:
* Every element obeys x^p^K = x.
* Every non-zero element obeys x^p^K-1 = 1.
* GF(p^K_1) is a subfield of GF(p^K_2) if and only if K_1 divides K_2.
The field with 2^3 elements is presented in Table <ref>,
and the field with 3^2 elements is given as Problem 12.4.
By definition the field theoretic trace is
tr x = x + x^p + x^p^2 + … + x^p^K-1 .
If x belongs to the finite field F_p^K its trace belongs to
the ground field Z_p. Like the trace of a matrix, the field theoretic
trace enjoys the properties that tr(x+y) = x + y and trax =
a x for any integer a modulo p. It is used to define the concept of
dual bases for the field. A basis is simply a set of elements such that any
element in the field can be expressed as a linear combination of this set, using
coefficients in Z_p. Given a basis e_i the dual basis ẽ_j is
defined through the equation
tr(e_iẽ_j) = δ_ij .
For any field element x we can then write, uniquely,
x = ∑_i=1^K x_ie_i , where x_i = tr(xẽ_i) .
From Table <ref> we can deduce that the basis (1,α, α^2)
is dual to (1,α^2, α), while the basis (α^3, α^5,
α^6) is dual to itself.
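Such statements are easy to check by machine. The following Python sketch implements GF(8) assuming the defining relation α^3 = α + 1 (one common choice of primitive polynomial; the Table may be based on another, but the dual-basis statements come out the same). Field elements are stored as bit triples (x_0, x_1, x_2) standing for x_0 + x_1 α + x_2 α^2:

def f_add(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

def f_mul(x, y):
    c = [0] * 5                          # multiply as polynomials in alpha
    for i in range(3):
        for j in range(3):
            c[i + j] ^= x[i] & y[j]
    c[1] ^= c[3]; c[0] ^= c[3]           # reduce with alpha^3 = alpha + 1
    c[2] ^= c[4]; c[1] ^= c[4]           # and alpha^4 = alpha^2 + alpha
    return (c[0], c[1], c[2])

def f_pow(x, k):
    r = (1, 0, 0)
    for _ in range(k):
        r = f_mul(r, x)
    return r

def tr(x):                               # tr(x) = x + x^2 + x^4, lies in Z_2
    return f_add(f_add(x, f_mul(x, x)), f_pow(x, 4))[0]

alpha = (0, 1, 0)
e  = [f_pow(alpha, k) for k in (0, 1, 2)]     # the basis (1, alpha, alpha^2)
et = [f_pow(alpha, k) for k in (0, 2, 1)]     # its dual  (1, alpha^2, alpha)
print(all(tr(f_mul(e[i], et[j])) == (i == j)
          for i in range(3) for j in range(3)))        # True
s = [f_pow(alpha, k) for k in (3, 5, 6)]      # (alpha^3, alpha^5, alpha^6)
print(all(tr(f_mul(s[i], s[j])) == (i == j)
          for i in range(3) for j in range(3)))        # True: self-dual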
Let us now apply what we have learned to the Heisenberg groups.
(For more details
see Vourdas <cit.>, Gross <cit.>, and Appleby <cit.>).
Let x, u,v be elements of the finite field GF(p^K). Introduce
an orthonormal basis and label its vectors by the field elements,
|x⟩ : |0⟩ , |1⟩ ,
|α⟩ , … ,
|α^p^K -2⟩ .
Here α is a primitive element of the field, so there are p^K
basis vectors altogether. Using this basis we define the operators X_u, Z_u by
X_u|x⟩ = |x+u⟩ , Z_u|x⟩
= ω^tr(xu)|x⟩ , ω = e^2π i/p .
Note that X_u is not
equal to X raised to the power u—this would
make no sense, while the present definition does. In particular the phase
factor ω is raised to an exponent that is just an ordinary integer modulo
p. Due to the linearity of the field trace it is easily checked that
Z_uX_v = ω^ tr(vu)X_vZ_u .
Note that it can happen that X and Z commute—it does happen for GF(2^2),
for which tr(1) = 0—so the definition takes
some getting used to.
We can go on to define displacement operators
D_ u = τ^ tru_1u_2X_u_1Z_u_2 , τ = -e^iπ/p , u = ( [ u_1; u_2 ]) .
The phase factor has been chosen so that we obtain the desirable properties
D_ uD_ v = τ^⟨ u, v⟩
D_ u+ v , D_ uD_ v =
ω^⟨ u, v⟩D_ vD_ u D_ u^† = D_- u .
Here we introduced the symplectic form
⟨ u, v⟩ = tr(u_2v_1 - u_1v_2) .
So the formulas are arranged in parallel with those used to describe H(N).
It remains to show that the resulting group is isomorphic to the one obtained by
taking K-fold products of the group H(p).
There do exist isomorphisms between the two groups, but there does not exist a
canonical isomorphism. Instead we begin by choosing a pair of dual bases for the field,
obeying tr(e_iẽ_j) = δ_ij. We can then expand a given element of the
field in two different ways,
x = ∑_i=1^K x_ie_i = ∑_i=1^K x̃_iẽ_i ⇔ x_i = tr(xẽ_i) , x̃_i = tr(xe_i) .
We then introduce an isomorphism between H_p^K and H_p⊗…⊗ H_p,
S|x⟩ = |x_1⟩⊗ |x_2⟩⊗…⊗
|x_K⟩ .
In each p dimensional factor space we have the group
H(p) and the displacement operators
D^(p)_r,s = τ^rsX^rZ^s , r,s ∈ℤ_p .
To set up an isomorphism between the two groups we expand
u = (u_1, u_2)^T , u_1 = ∑_i u_1i e_i , u_2 = ∑_i ũ_2i ẽ_i .
Then the isomorphism is given by
D_ u = S^-1( D^(p)_u_11,ũ_21⊗
D^(p)_u_12,ũ_22⊗…⊗ D^(p)_u_1K,ũ_2K) S .
The verification consists of a straightforward calculation showing that
D_ uD_ v = τ^∑_i(ũ_2iv_1i - u_1iṽ_2i) D_ u + v = τ^⟨ u,v⟩D_ u + v .
It must of course be kept in mind that the isomorphism inherits the
arbitrariness involved in choosing a field basis. Nevertheless this
reformulation has its uses, notably because we can again regard the set of displacement
operators as a vector space over a field, and we can obtain N+1 = p^K + 1 maximal
abelian subgroups from the set of lines through its origin. However, unlike in the
prime dimensional case, we do not obtain every maximal abelian subgroup from this
construction <cit.>.
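To see the field-theoretic definitions at work in the smallest case, the sketch below (Python with numpy; our own helper names) realizes GF(4) with α^2 = α + 1, builds X_u and Z_u, and confirms the commutation relation, including the at first sight surprising fact that X_1 and Z_1 commute because tr(1) = 0:

import numpy as np

els = [(0, 0), (1, 0), (0, 1), (1, 1)]        # 0, 1, alpha, alpha^2 = 1 + alpha
idx = {x: i for i, x in enumerate(els)}

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def mul(x, y):                                # uses alpha^2 = alpha + 1
    return ((x[0] * y[0] + x[1] * y[1]) % 2,
            (x[0] * y[1] + x[1] * y[0] + x[1] * y[1]) % 2)

def tr(x):                                    # tr(x) = x + x^2, an element of Z_2
    return add(x, mul(x, x))[0]

def X_op(u):                                  # X_u |x> = |x + u>
    M = np.zeros((4, 4))
    for x in els:
        M[idx[add(x, u)], idx[x]] = 1
    return M

def Z_op(u):                                  # Z_u |x> = (-1)^{tr(xu)} |x>
    return np.diag([(-1.0)**tr(mul(x, u)) for x in els])

assert all(np.allclose(Z_op(u) @ X_op(v),
                       (-1.0)**tr(mul(v, u)) * X_op(v) @ Z_op(u))
           for u in els for v in els)
print("Z_u X_v = omega^tr(vu) X_v Z_u verified; tr(1) =", tr((1, 0)))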
§ MORE UNITARY OPERATOR BASES
Do all interesting things come from groups? For unitary operator bases the answer
is a resounding `no'. We begin with a slight reformulation of the problem. Instead
of looking for special bases in the ket/bra Hilbert space H⊗ H^* we look for them in the ket/ket Hilbert space
H⊗ H. We relate the two spaces with a map that
interchanges their computational bases, |i⟩⟨ j| ↔
|i⟩ |j⟩, while leaving the components of the vectors unchanged.
A unitary operator U with matrix elements U_ij then corresponds to the state
|U⟩ = 1/√(N)∑_i,j = 0^N-1U_ij|i⟩ |j⟩ ∈ H⊗ H .
States of this form are said to be maximally entangled, and we will return
to discuss them in detail in Section 16.3.
A special example, obtained
by setting U_ij = δ_ij, appeared already in eq. (11.21).
For now we just observe that the task of finding a unitary operator basis for
H⊗ H^* is equivalent to that of finding a maximally
entangled basis for H⊗ H.
A rich supply of maximally entangled bases can be obtained using two concepts
imported from discrete mathematics: Latin squares and (complex) Hadamard matrices.
We explain the construction for N = 3, starting with the special state
|Ω⟩ = 1/√(3)(|0⟩ |0⟩ +
|1⟩ |1⟩ + |2⟩ |2⟩ ) .
Now we bring in a Latin square.
By definition this is an array of
N columns and N rows containing a symbol from an alphabet of N letters in
each position, subject to the restriction that no symbol occurs twice in any row or
in any column. The study of Latin squares goes back to Euler; Stinson
<cit.> provides a good account. If the reader has spent time on sudokos she
has worked within this tradition already. Serious applications of Latin squares,
to randomization of agricultural experiments,
were promoted by Fisher <cit.>.
We use a Latin square to expand our maximally
entangled state into N orthonormal maximally entangled states. An example with
N = 3 makes the idea transparent:
[ 0 1 2; 1 2 0; 2 0 1; ]→[ |Ω_0 ⟩ = 1/√(3)(|0⟩ |0⟩ + |1⟩ |1⟩ +
|2⟩ |2⟩ ); ; |Ω_1 ⟩ = 1/√(3)(|0⟩ |1⟩ +
|1⟩ |2⟩ + |2⟩ |0⟩ ); ; |Ω_2 ⟩ = 1/√(3)(|0⟩ |2⟩ +
|1⟩ |0⟩ + |2⟩ |1⟩ ) . ]
The fact that the three states (in H_9) are mutually orthogonal is an
automatic consequence of the properties of the Latin square. But we want 3^2
orthonormal states. To achieve this we bring in a complex Hadamard matrix,
that is to say a unitary matrix each of whose elements have the same
modulus. The Fourier matrix F, whose matrix
elements are
F_jk = 1/√(N)ω^jk =
1/√(N)(e^2π i/N)^jk , 0 ≤ j,k ≤ N - 1 .
provides an example that works for every N. For N = 3
it is an essentially unique example.
For complex Hadamard matrices in general, see Tadej and Życzkowski <cit.>, and Szöllősi <cit.>.
We use such a matrix to
expand the vector |Ω_0⟩ according to the pattern
1/√(3)[ [ 1 1 1; 1 ω ω^2; 1 ω^2 ω ]] →[ |Ω_00⟩ = 1/√(3)(|0⟩ |0⟩ + |1⟩ |1⟩ +
|2⟩ |2⟩ ); ; |Ω_01⟩ = 1/√(3)(|0⟩ |0⟩ +
ω |1⟩ |1⟩ + ω^2|2⟩ |2⟩ ); ; |Ω_02⟩ = 1/√(3)(|0⟩ |0⟩ +
ω^2|1⟩ |1⟩ + ω|2⟩ |2⟩ ) . ]
The orthonormality of these states is guaranteed by the properties of
the Hadamard matrix, and they are obviously maximally entangled. Repeating the
construction for the remaining states in (<ref>) yields a full orthonormal
basis of maximally entangled states. In fact for N = 3 we obtained nothing new; we
simply reconstructed the unitary operator basis provided by the Weyl–Heisenberg
group. The same is true for N = 2, where the analogous basis is known as the
Bell basis, and will reappear in eq. (16.1).
The generalization to any N should be clear, especially if we formalize the
notion of Latin squares a little. This will also provide some clues how the set
of all Latin squares can be classified. First of all Latin squares exist for any
N, because the multiplication table of a finite group is a Latin square. But most
Latin squares do not arise in this way. So how many Latin squares are there?
To count them one may first agree to present them in reduced form, which means
that the symbols appear in lexicographical order in the first row and the first
column. This can always be arranged by permutations of rows and columns. But there
are further natural equivalences in the problem. A Latin square can be presented as
N^2 triples (r,c,s), for `row, column, and symbol'. The rule is that in this
collection all pairs (r,c) are different, and so are all pairs (r,s) and
(s,c). So we have N^2 non–attacking rooks on a cubic chess board of size N^3.
In this view the symbols are on the same footing as the rows and columns, and can
be permuted. A formal way of saying this is to introduce a map λ : Z_N × Z_N → Z_N, where Z_N denotes
the integers modulo N, such that the maps
i →λ (i,j) , i →λ (j,i) ,
are injective for all values of j. Two Latin squares are said to be
isotopic if they can be related by permutations within the three copies of
Z_N involved in the map. The classification of Latin squares under
these equivalences was completed up to N = 6 by Fisher and his collaborators <cit.>,
but for higher N the numbers grow astronomically. See Table <ref>.
The second ingredient in the construction, complex Hadamard matrices, also raises
a difficult classification problem.
The appropriate equivalence relation for this
classification includes permutation of rows and columns, as well as acting with
diagonal unitaries from the left and from the right. Thus we adopt the equivalence relation
H' ∼ PDHD'P' ,
where D,D' are diagonal unitaries and P,P' are permutation matrices.
For N = 2,3, and 5, all complex Hadamard matrices are equivalent to the Fourier matrix.
For N = 4 there exists a one-parameter family of inequivalent examples, including a purely real Hadamard matrix –
see Table <ref>, and also Problem 12.3.
The freely adjustable phase factor was discovered by Hadamard <cit.>.
It was adjusted in experiments performed many years later <cit.>.
Karlsson wrote down a three parameter family of N = 6 complex Hadamard matrices
in fully explicit and remarkably elegant form <cit.>. Karlsson's family is qualitatively more interesting than the N = 4 example.
Real Hadamard matrices (having entries ± 1) can exist only if N = 2 or
N = 4k. Paley conjectured <cit.> that in
these cases they always exist, but his conjecture remains open.
The smallest unsolved
case is N = 668. Real Hadamard matrices have many uses, and discrete
mathematicians have spent much effort constructing them <cit.>.
With these ingredients in hand we can write down the vectors in a maximally
entangled basis as
|Ω_ij⟩ = 1/√(N)∑_k=0^N-1
H_jk|k⟩ |λ (i,k)⟩ ,
where H_ik are the entries in a complex Hadamard matrix and
the function λ defines a Latin square.
(A quantum variation on the
theme of Latin squares, giving an even richer supply, is known
<cit.>).
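To see the construction in action, here is a sketch for N = 3 (Python with numpy; we take the cyclic Latin square λ(i,k) = i + k mod 3 and the Fourier matrix with unimodular entries, so that the prefactor 1/√N does the normalizing):

import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)
H = np.array([[omega**(j * k) for k in range(N)] for j in range(N)])
lam = lambda i, k: (i + k) % N                 # a Latin square

def state(i, j):
    psi = np.zeros(N * N, dtype=complex)
    for k in range(N):
        psi[k * N + lam(i, k)] = H[j, k]       # coefficient of |k>|lam(i,k)>
    return psi / np.sqrt(N)

S = np.array([state(i, j) for i in range(N) for j in range(N)])
print(np.allclose(S.conj() @ S.T, np.eye(N * N)))    # True: orthonormal basis

M = state(1, 2).reshape(N, N)                  # the matrix of coefficients
print(np.allclose(M @ M.conj().T, np.eye(N) / N))    # True: maximally entangled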
The construction above is due to Werner <cit.>.
Since it relies on arbitrary Latin squares
and arbitrary complex Hadamard matrices we get an enormous supply
of unitary operator bases out of it.
This many groups do not exist, so most of these bases cannot
be obtained from group theory. Still some nice error bases—in
particular, the ones coming from the Weyl-Heisenberg group—do
turn up (as, in fact, one did in our N = 3 example). The converse
question, whether all nice error bases come from Werner's construction, has an
answer, namely `no'. The examples constructed in this section are all monomial,
meaning that the unitary operators can be represented by matrices having only one
non-zero entry in each row and in each column. Nice error bases not of
this form are known. Still it is interesting to observe that the operators
in a nice error basis can be represented by quite sparse matrices—they
always admit a representation in which at least one half of the matrix elements
equal zero <cit.>.
§ MUTUALLY UNBIASED BASES
Two orthonormal bases { |e_i⟩}_i=0^N-1 and
{ |f_j⟩}_j=0^N-1
are said to be complementary or unbiased if
|⟨ e_i|f_j⟩ |^2 = 1/N
for all possible pairs of vectors consisting of one vector
from each basis. If a system is prepared in a state belonging to one
of the bases, then no information whatsoever is available about the
outcome of a von Neumann measurement using the complementary basis.
The corresponding observables are complementary in the sense of Niels
Bohr, whose main concern was with the complementarity between position
and momentum as embodied in the formula
|⟨ x|p⟩ |^2 = 1/2π .
The point is that the right hand side is a constant. Its actual value
is determined by the probabilistic interpretation only when the dimension
of Hilbert space is finite.
To see why a set of mutually unbiased bases may be a desirable thing to
have, suppose we are given an unlimited supply of identically prepared
N level systems, and that we are asked to determine the N^2-1
parameters in the density matrix ρ.
That is, we are asked to perform quantum state tomography on ρ.
Performing the same von Neumann measurement on every copy will determine
a probability vector with N-1 parameters. But
N^2-1 = (N+1)(N-1) .
Hence we need to perform N+1 different measurements to fix ρ.
If—as is likely to happen in practice—each measurement can be performed
a finite number of times only, then there will be a statistical spread,
and a corresponding uncertainty in the determination
of ρ. Figure <ref> is intended to suggest (correctly as it
turns out) that the best result will be achieved if the N+1 measurements
are performed using Mutually Unbiased Bases (abbreviated MUB
from now on).
MUB have numerous other applications, notably to quantum
cryptography. The original BB84 protocol for quantum key distribution
<cit.> used a pair of qubit MUB. Going to larger sets of MUB in
higher dimensions yields further advantages <cit.>. The bottom line
is that MUB are of interest when one is trying to find or hide
information. Further applications
include entanglement detection in the laboratory <cit.>, a famous
retrodiction problem <cit.>, and more.
Our concern will be to find out how many MUB exist in a given dimension N.
The answer will tell us something about the shape of the convex body of mixed
quantum states. To see this, note that when N = 2 the pure states making up
the three MUB form the corners of a regular octahedron inscribed in the
Bloch sphere. See Figure <ref>.
Now let the dimension of Hilbert space be N. The set of mixed
states has real dimension N^2-1, and it has the maximally mixed state as its
natural origin. An orthonormal basis corresponds to a regular simplex
Δ_N-1 centred at the origin. It spans an (N-1)-dimensional plane
through the origin. Using the trace inner product we find that
|⟨ e_i|f_j⟩ |^2 = 1/N⇒ Tr ( |e_i⟩⟨ e_i| - 1/N 1)
( |f_j⟩⟨ f_j| - 1/N 1) = 0 .
Hence the condition that two such simplices represent a pair of MUB
translates into the condition that the two (N-1)-planes be totally orthogonal, in
the sense that every Bloch vector in one of them is orthogonal to every Bloch vector in
the other. But the central equation of this section, namely (<ref>), implies
that there is room for at most N+1 totally orthogonal (N-1)-planes.
It follows that there can exist at most N+1 MUB. But it does not
at all follow that this many MUB actually exist. Our collection of N+1
simplices form an interesting convex polytope with N(N+1) vertices, and what
we are asking is whether this complementarity polytope can be inscribed
into the convex body M^(N) of density matrices. In fact, given our
caricature of this body, as the stitching found on a tennis ball (Section 8.6),
this does seem a little unlikely (unless N = 2).
Anyway a set of N+1 MUB in N dimensions is referred to as a complete
set. Do such sets exist? If we think of a basis as given by the column vectors
of a unitary matrix, and if the basis is to be unbiased relative to the
computational basis, then that unitary matrix must be a complex Hadamard matrix.
Classifying pairs of MUB is equivalent to classifying such matrices. In Section
<ref> we saw that they exist for all N, often in
abundance. To be specific, let the identity matrix and the Fourier matrix
(<ref>) represent a pair of MUB. Can we find a third basis,
unbiased with respect to both?
Using N = 3 as an illustrative example we find that Figure 4.10
gives the story away. A column vector in a complex Hadamard matrix corresponds
to a point on the maximal torus in the octant picture. The twelve column vectors
[ [ 1 0 0 1 ω^2 ω^2 1 ω ω 1 1 1; 0 1 0 ω^2 1 ω^2 ω 1 ω 1 ω ω^2; 0 0 1 ω^2 ω^2 1 ω ω 1 1 ω^2 ω ]]
form four MUB, and it is clear from the picture that this is the
largest number that can be found. (For convenience we did not normalize the
vectors. The columns of the Fourier matrix F were placed on the right.)
We multiplied the vectors in the two bases in the middle with phase
factors in a way that helps to make the pattern memorable. Actually there is
a bit more to it. They form circulant matrices of size 3, meaning that
they can be obtained by cyclic permutations of their first row. Circulant
matrices have the nice property that F^† CF is a diagonal matrix for
every circulant matrix C.
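The property is easy to confirm numerically; a short sketch (Python with numpy) using the second basis above as the circulant C:

import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)
F = np.array([[omega**(j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)
C = np.array([[1, omega**2, omega**2],         # each row is a cyclic shift
              [omega**2, 1, omega**2],         # of the row above
              [omega**2, omega**2, 1]]) / np.sqrt(N)
D = F.conj().T @ C @ F
print(np.allclose(D, np.diag(np.diag(D))))     # True: F^dagger C F is diagonal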
The picture so far is summarized in Figure <ref>. The key observation
is that each of the bases is an eigenbasis for a cyclic subgroup of the
Weyl–Heisenberg group. As it turns out this generalizes straightforwardly
to all dimensions such that N = p, where p is an odd prime. We gave a
list of N+1 cyclic subgroups in eq.
(<ref>). Each cyclic subgroup consists of a complete set of commuting
observables, and they determine an eigenbasis. We denote
the a-th vector in the x-th eigenbasis as |x,a⟩,
and we have to solve
D_0,1|0,a⟩ = ω^a|0,a⟩ ,
D_x,1|x,a⟩ = ω^a|x,a⟩ , D_-1,0|∞ , a⟩
= ω^a|∞ , a⟩ .
Here x = 1, … , p-1, but in the spirit of projective geometry we can
extend the range to include 0 and ∞ as well. The solution, with
{ |e_r⟩}_r=0^p-1 denoting the computational basis, is
|0,a⟩ = |e_a⟩ ,
|x,a⟩ = 1/√(p)∑_r=0^p-1ω^(r-a)^2/2x |e_r⟩ ,
|∞ , a⟩ = 1/√(p)∑_r=0^p-1ω^ar|e_r⟩ .
It is understood that if `1/2' occurs in an exponent it denotes the
inverse of 2 in arithmetic modulo p (and similarly for `1/x'). There are p-1 bases
presented as columns of circulant matrices, and we use ∞ to label the Fourier
basis for a reason (Problem 12.5).
One can show directly (as done in 1981 by Ivanović <cit.>,
whose interest was in state tomography)
that these bases form a complete set of MUB
(Problem 12.6),
but a simple and remarkable theorem will save us from this effort.
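Readers who prefer to let a computer do it can run the following sketch for p = 5 (Python with numpy; pow(n, -1, p) for the modular inverse needs Python 3.8 or later):

import numpy as np

p = 5
omega = np.exp(2j * np.pi / p)
inv = lambda n: pow(n, -1, p)                  # inverse modulo p

bases = [np.eye(p, dtype=complex)]             # x = 0: the computational basis
for x in range(1, p):                          # the bases x = 1, ..., p-1
    bases.append(np.array([[omega**(((r - a)**2 * inv(2 * x)) % p)
                            for a in range(p)]
                           for r in range(p)]) / np.sqrt(p))
bases.append(np.array([[omega**(a * r) for a in range(p)]
                       for r in range(p)]) / np.sqrt(p))    # x = infinity

for m in range(p + 1):
    assert np.allclose(bases[m].conj().T @ bases[m], np.eye(p))    # orthonormal
    for n in range(m + 1, p + 1):
        assert np.allclose(np.abs(bases[m].conj().T @ bases[n])**2, 1 / p)
print("p + 1 = 6 mutually unbiased bases verified for p = 5")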
Interestingly there is an alternative way to construct the complete set.
When N = 2 it is clear that we can start with the eigenbasis of
the group element Z (say), choose a point on the equator, and
then apply all the transformations effected by the Weyl–Heisenberg
group. The resulting orbit will consist of N^2 points, and if the
starting point is judiciously chosen they will form N MUB, all of
them unbiased to the eigenbasis of Z. Again the construction works
in all prime dimensions N = p, and indeed the resulting complete set
is equivalent to the previous one in the sense that there exists a
unitary transformation taking the one to the other.
This
construction is due to Alltop (1980),
whose interest was in radar applications
<cit.>. Later references have made the construction more transparent
and more general <cit.>.
The central theorem about MUB existence is due to
Bandyopadhyay, Boykin, Roychowdhury, and Vatan <cit.>.
The BBRV theorem. A complete set of MUB exists
in ℂ^N if and only if there exists a unitary operator basis which
can be divided into N+1 sets of N commuting operators such that
the sets have only the unit element in common.
Let us refer to unitary operator bases of this type as flowers,
and the sets into which they are divided as petals. Note that it is eq. (<ref>)
that makes flowers possible. There can be at most N mutually commuting orthogonal
unitaries since, once they are diagonalized, the vectors defined by their diagonals
must form orthogonal vectors in N dimensions. The Weyl–Heisenberg
groups are flowers if and only if N is a prime number—as exemplified
in Figure <ref>. So the fact that (<ref>) gives a complete
set of N+1 MUB whenever N is prime follows from the theorem.
We prove the BBRV theorem one way. Suppose a complete set of MUB exists.
We obtain a maximal set of commuting Hilbert-Schmidt orthogonal unitaries by carefully
choosing N unitary matrices U_r, with r between 0 and N-1:
U_r = ∑_i ω^ri|e_i⟩⟨ e_i| ⇒ Tr U_r^† U_s = ∑_iω^i(s-r) = Nδ_rs .
If the bases { |e_i⟩}_i=0^N-1 and
{ |f_i⟩}_i=0^N-1 are unbiased we can form two such sets,
and it is easy to check that
V_r = ∑_i ω^ri|f_i⟩⟨ f_i| ⇒ Tr V_r^† U_s = 1/N∑_i,jω^-irω^js = Nδ_r,0δ_s,0 .
Hence U_r ≠ V_s unless r = s = 0. It may seem as if we
have constructed two cyclic subgroups of the
Weyl-Heisenberg group, but in fact we have not since we have said nothing about
the phase factors that enter into the scalar products ⟨ e_i|f_j⟩.
But be that as it may, it is still clear that, proceeding in this way,
we will obtain a flower from any collection of N+1 MUB. Turning this into an `if
and only if' statement is not very hard <cit.>.
We have found flowers in all prime dimensions. What about N = 4? At this
point we recall the Mermin square (5.61).
It defines (as it must, if one looks at the proof of the BBRV theorem) two distinct triplets
of MUB. It turns out, however, that these are examples of unextendible
sets of MUB, that cannot be completed to complete sets (a fact that can be
ascertained without calculations, if the reader has the prerequisites needed
to solve Problem 12.7.
A more constructive observation is that the operators occurring
in the square belong to the
2–partite Heisenberg group H(2)× H(2). This group is also a unitary
operator basis, and it contains no less than
15 maximal abelian subgroups, or petals in our language. Denoting the elements
of the collineation group of H(2) as X = σ_x, Y = σ_y, Z= σ_z,
and the elements of the 2–partite group as (say) XY = X⊗ Y, we label
the petals as
[ 1 = { 1Z, Z 1, ZZ} 2 = { X 1, 1 X, XX} 3 = { XZ, ZX, YY }; 4 = { 1Z, X 1, XZ} 5 =
{ Z 1, 1X, ZX} 6 = { ZZ, XX, YY } ]
(these are the petals occurring in the Mermin square), and
[ 7 = { 1Z, Y 1, YZ} 8 = { Z 1, 1Y, ZY } 9 = { X 1, 1Y, XY }; 10 = { 1X, Y 1, YX } 11 =
{ 1Y, Y 1, YY } 12 = { XY, YX, ZZ }; 13 = { XZ, YX, ZY } 14 =
{ XY, YZ, ZX } 15 = { XX, YZ, ZY } . ]
After careful inspection one finds that the unitary operator basis can
be divided into disjoint petals in 6 distinct ways, namely
[ { 1,2,11,13,14 } { 4,6,8,10,15 } { 2,3,7,8,12 }; { 1, 3,9,10,15} { 4,5,11,12,14 } { 5, 6, 7, 9, 13 } . ]
So we have six flowers, each of which contains exactly two Mermin petals
(not by accident, as Problem 12.7
reveals). The pattern
is summarized in Figure <ref>.
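The counting can be confirmed by brute force. The following sketch (Python with numpy and itertools; it uses its own labels for the operators) recovers the 15 petals and the 6 flowers:

import itertools
import numpy as np

paulis = {'1': np.eye(2), 'X': np.array([[0., 1.], [1., 0.]]),
          'Y': np.array([[0., -1j], [1j, 0.]]), 'Z': np.diag([1., -1.])}
labels = [a + b for a in '1XYZ' for b in '1XYZ' if a + b != '11']
op = {l: np.kron(paulis[l[0]], paulis[l[1]]) for l in labels}

def commute(a, b):
    return np.allclose(op[a] @ op[b], op[b] @ op[a])

petals = [t for t in itertools.combinations(labels, 3)
          if all(commute(a, b) for a, b in itertools.combinations(t, 2))]
print(len(petals))                 # 15 maximal abelian subgroups (petals)

flowers = [f for f in itertools.combinations(range(len(petals)), 5)
           if len(set().union(*(petals[i] for i in f))) == 15]
print(len(flowers))                # 6 ways to divide the basis into petals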
This construction can be generalized to any prime power dimension N = p^K. The
multipartite Heisenberg group H(p)^⊗ K gives many interlocking flowers,
and hence many complete sets of MUB. The finite Galois fields are useful here: the
formulas can be made to look quite similar to (<ref>), except that the
field theoretic trace is used in the exponents of ω. We do not go into details here, but mention only that
Complete sets of MUB were constructed for prime power dimensions by Wootters
and Fields (1989) <cit.>. Calderbank et al. <cit.> gave a more complete list, still relying on Heisenberg groups.
All known constructions are unitarily equivalent
to one on their list <cit.>.
The question whether all complete sets of MUB
can be transformed into each other by means of some unitary transformation arises
at this point. It is a difficult one. For N ≤ 5 one can show that all complete
sets are unitarily equivalent to each other, but for N = 2^5 this is no longer
true. In this case the 5–partite Heisenberg group can be partitioned into flowers
in several unitarily inequivalent ways.
This was noted by Kantor, who has also reviewed the full story <cit.>.
What about dimensions that are not prime powers? Unitary operator bases exist in
abundance, and defy classification, so it is not easy to judge whether there may
be a flower among them. Nice error bases are easier to deal with (because groups
can classified). There are Heisenberg groups in every dimension, and if the dimension is
N = p_1^K_1p_2^K_2… p_n^K_n one can use them to construct
min_i(p_i^K_i)+1 MUB in the composite dimension N
<cit.>.
What is more, it is known that this is the
largest number one can obtain from the partitioning of any nice error basis into
petals <cit.>. However, the story does not end there, because—making use
of the combinatorial concepts introduced in the next section—one can show that
in certain square dimensions larger, but still incomplete, sets do exist.
In particular, in dimension N = 26^2 = 2^2·13^2 group theory would suggest at most
5 MUB, but Wocjan and Beth found a set of 6 MUB. Moreover, in square dimensions
N = m^2 the number of MUB grows at least as fast as m^1/14.8 (with finitely
many exceptions) <cit.>.
This leaves us somewhat at sea. We do not know whether a complete set of MUB exists
if the dimension is N = 6, 10, 12, …. How is the question to be settled?
Numerical computer searches with carefully defined error bounds could settle the
matter in low dimensions, but the only case that has been investigated
in any depth is that of N = 6. In this case there is convincing evidence
that at most 3 MUB can be found, but still no
proof. (Especially convincing evidence was found by Brierley and Weigert
<cit.>, Jaming et al. <cit.>, and Raynal et al. <cit.>. See
also Grassl <cit.>, who studies the set of vectors unbiased relative to
both the computational and Fourier basis. There are 48 such vectors).
And N = 6 may be a very special case.
A close relative of the MUB existence problem arises in Lie algebra theory, and
is unsolved there as well.
But at least it received
a nice name: the Winnie–the–Pooh problem. The reason cannot be fully rendered
into English <cit.>.
One can imagine that it has to do with harmonic (Fourier)
analysis
<cit.>. Or perhaps with symplectic topology: there exists an elegant
geometrical theorem which says that given any two bases in ℂ^N,
not necessarily unbiased, there always exist at least 2^N-1 vectors that are
unbiased relative to both bases. But there is no information about whether
these vectors form orthogonal bases (and in non–generic cases the vectors may
coincide). (See <cit.> for a description of this theorem, and
<cit.> for a description of this area of mathematics).
We offer these
suggestions as hints, and return to a brief summary of known facts at the
end of Section <ref>. Then we will have introduced some
combinatorial ideas which—whether they
have any connection to the MUB existence problem or not—have a number of
applications to physics.
§ FINITE GEOMETRIES AND DISCRETE WIGNER FUNCTIONS
A combinatorial structure underlying the usefulness of mutually unbiased
bases is that of finite affine planes. A finite plane is just like
an ordinary plane, but the number of its points is finite. A finite affine
plane contains lines too, which by definition are subsets of the set of
points. It is required that for any pair of points there is a unique line
containing them. It is also required that for every point not contained
in a given line there exists a unique line having no point in common with
the given line. (Devoted students of Euclid will recognize this as the
Parallel Axiom, and will also recognize that disjoint lines deserve to
be called parallel.) Finally, to avoid degenerate cases, it is required
that there are at least two points in each line, and that there are at
least two distinct lines. With these three axioms one can prove that two
lines intersect either exactly once or not at all, and also that for every
finite affine plane there exists an integer N such that
(i) there are N^2 points,
(ii) each line contains N points,
(iii) each point is contained in N+1 lines,
(iv) and there are altogether N+1 sets of N disjoint lines.
The proofs of these theorems are exercises in pure combinatorics <cit.>,
and appear at first glance quite unconnected to the geometry of quantum states.
It is much harder to decide whether a finite affine plane of order N actually
exists. If N is a power of a prime number finite planes can be constructed
using coordinates, just like the ordinary plane (where lines are defined by
linear equations in the coordinates that label the points), with the difference
that the coordinates are elements of a finite field of order N. Thus a point
is given by a pair (x,y), where x, y belong to the finite field, and a
line consists of all points obeying either y = ax+b or x = c, where a,b,c
belong to the field.
This is not quite the end of the story of the finite affine planes because examples
have been constructed that do not rely on finite fields, but the order N
of all these examples is a power of some prime number. Whether there exist finite
affine planes for any other N is not known.
Let us go a little deeper into the combinatorics, before we explain what it has
to do with us. A finite plane can clearly be thought of as a grid of N^2 points,
and its rows and columns provide us with two sets of N disjoint or parallel lines,
such that each line in one of the sets intersect each line in the other set exactly
once. But what of the next set of N parallel lines? We can label its lines with
letters from an alphabet containing N symbols, and the requirement that any two
lines meet at most once translates into the observation that finding the third set
is equivalent to finding a Latin square. As we saw in Section <ref>,
there are many Latin squares to choose from. The difficulty comes in the next
step, when we ask for two Latin squares describing two different sets of parallel
lines. Use Latin letters as the alphabet for the first, and Greek letters for
the second. Then each point in the array will be supplied with a pair of letters,
one Latin and one Greek. Since two lines are forbidden to meet more than once
a given pair of letters, such as (A, α) or (B, γ ),
is allowed to occur only once in the array. In other words the letters from the
two alphabets serve as alternative coordinates for the array. Pairs of Latin
squares enjoying this property are known as Graeco-Latin or orthogonal
Latin squares. For N = 3 it is easy to find Graeco-Latin pairs, such as
( [ A B C; B C A; C A B; ] , [ α β γ; γ α β; β γ α; ] )
= [ Aα Bβ Cγ; Bγ Cα Aβ; Cβ Aγ Bα; ] .
An example for N = 4, using alphabets that may appeal to bridge players,
is
[ A♠ K♥ Q♦ J♣; K♣ A♦ J♥ Q♠; Q♥ J♠ A♣ K♦; J♦ Q♣ K♠ A♥ ] .
Graeco-Latin pairs can be found for all choices of N>2 except
(famously) N = 6.
This is so even though Table <ref> shows that there
is a very large supply of Latin squares for N = 6.
The story behind these non-existence results goes back to Euler, who was concerned with arranging 36 officers,
belonging to 6 regiments and holding 6 different ranks, in a square.
To define a complete affine plane, with N+1 sets of parallel
lines, requires us to find a set of N-1 mutually orthogonal Latin
squares, or MOLS.
For N = 6 and N = 10 this is impossible, and in fact an
infinite number of possibilities (beginning with N = 6) are ruled out by the
Bruck-Ryser theorem, which says that if an affine plane of order N exists,
and if N = 1 or 2 modulo 4, then N must be the sum of two squares. Note that
10 is a sum of two squares, but this case has been ruled out by different
means. Lam describes the computer based non-existence proof for N = 10 in a
thought-provoking way <cit.>.
If N = p^k
for some prime number p a solution can easily be found using analytic geometry
over finite fields. There remain an infinite number of instances, beginning
with N = 12, for which the existence of a finite affine plane is an open
question.
See the books by Bennett <cit.>, and Stinson <cit.>, for more
information, and for proofs of the statements we have made so far.
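For N = p a prime the required squares can be written down at once: the maps L_a(i,j) = ai + j mod p, with a = 1, …, p-1, form p-1 MOLS. A brute-force check (a Python sketch with our own helper names):

def is_latin(sq, N):
    return all(len({sq(i, j) for j in range(N)}) == N and
               len({sq(j, i) for j in range(N)}) == N for i in range(N))

def orthogonal(s1, s2, N):
    return len({(s1(i, j), s2(i, j))
                for i in range(N) for j in range(N)}) == N * N

N = 7                                          # any prime will do
squares = [lambda i, j, a=a: (a * i + j) % N for a in range(1, N)]
assert all(is_latin(s, N) for s in squares)
assert all(orthogonal(squares[m], squares[n], N)
           for m in range(N - 1) for n in range(m + 1, N - 1))
print(N - 1, "mutually orthogonal Latin squares of order", N)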
At this point we recall that complete sets of MUB exist when N = p^k, but
quite possibly not otherwise. Moreover such a complete set is naturally described
by N+1 sets of N vectors. The total number of vectors is the same as the
number of lines in a finite affine plane, so the question is if we can somehow
associate N^2 `points' to a complete set of MUB, in such a way that the
incidence structure of the finite affine plane becomes useful. One way to do this
is to start with the picture of a complete set of MUB as a polytope in Bloch space.
We do not have to assume that a complete set of MUB exists. We simply introduce
N(N+1) Hermitian matrices of unit trace, denoted P_v, obeying
Tr P_vP_v' = 1 if v = v' , Tr P_vP_v' = 1/N if v and v' belong to different simplices , Tr P_vP_v' = 0 if v ≠ v' belong to the same simplex .
The condition that TrP_v^2 = 1 ensures that P_v lies on the outsphere of the
set of quantum states. If their eigenvalues are non-negative these are projectors, and
then they actually are quantum states, but this is not needed for the definition of the
polytope. To understand the face structure of the polytope
we begin by noting that the
convex hull of one vertex from each of the N+1 individual simplices forms a face.
(This is fairly obvious, and anyway we are just about to prove it.) Using the
matrix representation of the vertices we can then form the Hermitian unit trace matrix
A_f = ∑_ face P_v - 1 ,
where the sum runs over the N+1 vertices in the face. This is called a
face point operator (later to be subtly renamed as a phase point operator).
If N = 3 we can think of it pictorially as
four triangles, each with one corner marked, say—each triangle represents a basis
and the marked corner the chosen vertex. See Figure <ref>. It is easy to see that 0 ≤ Tr ρ A_f ≤ 1
for any matrix ρ that lies in the complementarity polytope, which means that the
latter is confined between two parallel hyperplanes. There is
a facet defined by Tr ρ A_f = 0 (pictorially, the four triangles with the two
remaining corners of each marked)
and an opposing face containing one
vertex from each simplex. Every vertex is included in one of these two faces.
There are N^N+1 operators A_f altogether, and equally many facets.
The idea is to select N^2 phase point
operators and use them to represent the points of an affine plane. The N+1
vertices P_v that appear in the sum (<ref>) are to be regarded as the N+1
lines passing through the point A_f. A set of N parallel lines in the affine plane
will represent a complete set of orthonormal projectors.
To do so, recall that each A_f is defined by picking one P_v from each basis.
Let us begin by making all possible choices from the first two, and arrange
them in an array:
[ ^∙ _∙ _∙ _∙ _∙ _∙; ; ^∙ _∙ _∙ _∙ _∙ _∙; ; ^∙ ^∙ _∙ ^∙ _∙ ^∙ ]
We set N = 3—enough to make the idea come through—in this illustration.
Thus the N+1 simplices in the totally orthogonal (N-1)-planes appear as
four triangles, two of which have been used up to make the array. We use the vertices
of the remaining N-1 simplices to label the lines in the remaining N-1
pencils of parallel lines. To ensure that non-parallel lines intersect exactly
once a pair such as _∙ _∙ (picked from any two out of the four
triangles) must occur exactly once in the array. This problem can be solved
because an affine plane of order N is presumed to exist. One solution is
[the same 3 × 3 array, completed with one marked vertex from each of the two remaining triangles in every cell, so that any given pair occurs exactly once]
We have now singled out N^2 face point operators for attention,
and the combinatorics of the affine plane guarantees that any pair of them
have exactly one P_v in common. Equation (<ref>) then enables us
to compute that
TrA_fA_f' = Nδ_f,f' .
This is a regular simplex in dimension N^2-1.
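The combinatorics can be confirmed directly in the smallest case, N = 2, where the complementarity polytope can really be inscribed in the set of quantum states. The following Python sketch is our own illustration, not part of the original argument: it takes the Pauli eigenbases as the three qubit MUB, picks four triples that pairwise agree in exactly one slot (an affine plane of order 2), and checks the simplex condition.

```python
import numpy as np

# the three MUB of a qubit: eigenbases of sigma_z, sigma_x, sigma_y
proj = lambda v: np.outer(v, np.conj(v)) / np.vdot(v, v).real
bases = [[(1, 0), (0, 1)], [(1, 1), (1, -1)], [(1, 1j), (1, -1j)]]
P = [[proj(np.array(v, dtype=complex)) for v in b] for b in bases]

# four 'points': one vertex per basis, any two triples agreeing exactly once
points = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
A = [sum(P[b][t[b]] for b in range(3)) - np.eye(2) for t in points]

for f, Af in enumerate(A):
    for g, Ag in enumerate(A):
        assert np.isclose(np.trace(Af @ Ag).real, 2.0 if f == g else 0.0)
print("TrA_fA_f' = N delta_ff' holds for the four face point operators")
```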
We used only N^2 out of the N^N+1 face point operators for this construction,
but a little reflection shows that the set of all of them can be divided into
N^N-1 disjoint face point operator simplices. In effect we have inscribed
the complementarity polytope into this many regular simplices, which is an
interesting datum about the former. Each such simplex forms an orthogonal operator
basis, although not necessarily a unitary operator basis. Let us focus on one of them,
and label the operators occurring in it as A_i,j. It is easy to see that eq.
(<ref>) can be rewritten (in the language of the affine plane), and supplemented,
so that we have the two equations
P_v = 1/N∑_ line v A_i,j ,
A_i,j = ∑_ point (i,j)P_v - 1 .
The summations extend through all points on the line, respectively all lines passing
through the point. Using the fact that the phase point operators form an operator
basis we can then define a discrete Wigner function by
W_i,j = 1/N TrA_i,jρ .
Knowledge of the N^2 real numbers W_i,j is equivalent to knowledge of the
density matrix ρ. Each line in the affine plane is now associated with the number
p_v = ∑_ line vW_i,j = TrP_vρ .
Clearly the sum of these numbers over a pencil of parallel lines equals unity.
However, so far we have used only the combinatorics of the complementarity polytope, and
we have no right to expect that the operators P_v have positive spectra. They will be
projectors onto pure states if and only if the complementarity polytope has been
inscribed into the set of density matrices, which is a difficult thing to achieve. If it
is achieved we conclude that p_v ≥ 0, and then we have an
elegant discrete Wigner function—giving all the correct marginals—on
our hands <cit.>. It will receive further polish
in the next section.
Meanwhile, now that we have the concepts of mutually orthogonal Latin squares
and finite planes on the table, we
can discuss some interesting but rather abstract analogies to the MUB existence
problem. Fix N=p_1^K_1p_2^K_2… p_n^K_n. Let #_ MUB be the
number of MUB, and let #_ MOLS be the number of MOLS.
(We need only N-1 MOLS to construct a finite affine plane.) Then
[ min_i(p_i^K_i) -1 ≤#_ MOLS≤ N-1; ; min_i(p_i^K_i) + 1 ≤#_ MUB≤ N+1 . ]
The lower bound for MOLS is known as the MacNeish bound <cit.>.
Moreover, if N = 6, we know that Latin squares cannot occur in orthogonal pairs,
and we believe that there exist only three MUB. Finally, it is known that if there exist
N-2 MOLS there necessarily exist N-1 MOLS, and if there exist N MUB there
necessarily exist N+1 MUB <cit.>. This certainly encourages the speculation that
the existence problem for finite affine planes is related to the existence
problem for complete sets of MUB in some unknown way. However, the idea fades
a little if one looks carefully into the details <cit.>.
§ CLIFFORD GROUPS AND STABILIZER STATES
To go deeper into the subject we need to introduce the Clifford group.
We use the displacement operators D_ p from Section <ref> to
describe the Weyl–Heisenberg group. By definition the Clifford group consists
of all unitary operators U such that
UD_ pU^-1∼ D_f( p) ,
where ∼ means `equal up to phase factors'. (Phase factors will
appear if U itself is a displacement operator, which is allowed.) Thus we ask
for unitaries that permute the displacement operators, so that the conjugate of
an element of the Weyl-Heisenberg group is again a member of the Weyl–Heisenberg
group. The technical term for this is that the Clifford group is the
normalizer of the Weyl-Heisenberg group within
the unitary group, or that the Weyl–Heisenberg group is an invariant subgroup
of the Clifford group. If we change H(N) into a non-isomorphic multipartite
Heisenberg group we obtain another Clifford group, but at first we stick with
H(N).
(The origin of the name `Clifford group' is a little unclear to us. It
seems to be connected to Clifford's gamma matrices rather than to Clifford
himself <cit.>).
The hard part of the argument is to show that
U_GD_ pU^-1_G ∼ D_G p ,
where G is a two-by-two matrix with entries in Z_N̅, the point
being that the map p→ f( p) has to be linear <cit.>. We take
this on trust here. To see the consequences we
return to the group law (<ref>). In the exponent of
the phase factor we encounter the symplectic form
Ω ( p, q) = p_2q_1 - p_1q_2 .
When strict equality holds in eq. (<ref>) it follows from the group law that
U_GD_ pD_ qU^-1_G = τ^Ω ( p, q)
U_GD_ p + qU^-1_G ⇒
D_G pD_G q = τ^Ω ( p, q)D_G( p+ q) .
On the other hand we know that
D_G pD_G q = τ^Ω (G p,G q)D_G( p+ q) .
Consistency requires that
Ω ( p, q) = Ω (G p,G q) N̅ .
The two by two matrix G must leave the symplectic form invariant.
The arithmetic in the exponent is modulo N̅, where N̅ = N if N is odd and
N̅ = 2N if N is even. We deplored this unfortunate complication in even
dimensions already in Section <ref>.
Let us work with explicit matrices
Ω = ( [ 0 - 1; 1 0 ]) ,
G = ( [ α β; γ δ ])
, α , β , γ , δ∈ Z_N̅ .
Then eq. (<ref>) says that
( [ 0 -1; 1 0 ]) =
( [ 0 -αδ + βγ; αδ - βγ 0 ]) .
Hence the matrix G must have determinant equal to 1 modulo N̅.
Such matrices form the group SL(2, Z_N̅), where Z_N̅
stands for the ring of integers modulo N̅. It is also known as a symplectic
group, because it leaves the symplectic form invariant.
The full structure of the Clifford group C(N) is complicated
by the phase factors, and rather difficult to describe in general.
Things are much simpler when N̅ = N is odd, so let us restrict
our description to this case.
(A complete, and clear, account
of the general case is given by Appleby <cit.>).
Then the symplectic
group is a subgroup of the Clifford group. Another subgroup is evidently
the Weyl–Heisenberg group itself. Moreover, if we consider the Clifford
group modulo its centre, C(N)/I(N), that is to say if we identify group
elements differing only by phase factors—which we would naturally do if
we are interested only in how it transforms the quantum states—then we
find that C(N)/I(N) is a semi-direct product of the symplectic rotations
given by SL(2, Z_N) and the translations given by H(N)
modulo its centre.
In every case—also when N is even—the unitary representation of the Clifford group is
uniquely determined by the unitary representation of the Weyl-Heisenberg group.
The easiest case to describe is that when N is an odd prime number p. Then the
symplectic group is defined over the finite field Z_p consisting of integers
modulo p, and it contains p(p^2-1) elements altogether. Insisting that there exists
a unitary matrix U_G such that U_G D_ p U_G^-1 = D_G p we are led to the
representation
G = ( [ α β; γ δ ]) →{[ U_G = e^iθ/√(p)∑_i,jω^1/2β
(δ i^2 - 2ij + α j^2)|i⟩⟨ j| β≠ 0; ; U_G = ±∑_jω^αγ/2j^2|α j⟩⟨ j| β = 0 . ].
In these formulas `1/β' stands for the multiplicative inverse
of the integer β in arithmetic modulo p (and since 1/2 occurs it is
obvious that special measures must be taken if p=2). An
overall phase factor is left undetermined: it can be pinned down by insisting on
a faithful representation of the group <cit.>, but in many situations
it is not needed. It is noteworthy that the representation matrices are either
complex Hadamard matrices, or monomial matrices.
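The formula can be put to a direct numerical test. The sketch below is ours; it fixes the convention D_{r,s} = τ^{rs}X^rZ^s with τ = e^{iπ(p+1)/p} (other references may differ by phases, which is harmless here) and verifies that U_G conjugates every displacement operator into another displacement operator up to a phase, i.e. that U_G lies in the Clifford group.

```python
import numpy as np

p = 5                                    # an odd prime dimension
w = np.exp(2j * np.pi / p)               # omega
tau = np.exp(1j * np.pi * (p + 1) / p)   # tau^2 = omega
X = np.roll(np.eye(p), 1, axis=0)        # shift: X|j> = |j+1 mod p>
Z = np.diag(w ** np.arange(p))           # clock: Z|j> = w^j |j>
D = lambda r, s: tau ** (r * s) * np.linalg.matrix_power(X, r) @ np.linalg.matrix_power(Z, s)

alpha, beta, gamma, delta = 2, 1, 1, 1   # det G = 2*1 - 1*1 = 1 mod p
c = (pow(beta, -1, p) * pow(2, -1, p)) % p   # 1/(2 beta) mod p (Python >= 3.8)
i, j = np.meshgrid(range(p), range(p), indexing="ij")
U = w ** (c * (delta * i * i - 2 * i * j + alpha * j * j) % p) / np.sqrt(p)

def proportional(A, B):
    k = np.argmax(np.abs(B))
    return np.allclose(A, (A.flat[k] / B.flat[k]) * B, atol=1e-10)

for r in range(p):
    for s in range(p):
        M = U @ D(r, s) @ U.conj().T
        assert any(proportional(M, D(a, b)) for a in range(p) for b in range(p))
print("U_G permutes the displacement operators up to phases")
```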
It is interesting to see how they act on the
set of mutually unbiased bases. In the affine plane a symplectic transformation
takes lines to lines, and indeed parallel lines to parallel lines. If one works
this out one finds that the symplectic group acts like Möbius transformations
on a projective line whose points are the individual bases. See Problem 12.5.
Even though N = 2 is even, it is the easiest case to understand. The collineation group C(2)/I(2)
is just the group of rotations that transforms the polytope formed by the MUB states into itself,
or in other words it is the symmetry group S_4 of the octahedron. In higher prime dimensions the
Clifford group yields only a small subgroup of the symmetry group of the complementarity polytope.
When N is a composite number, and especially if N is an even composite
number, there are some significant complications for which we refer elsewhere <cit.>.
These complications do sit in the background in Section <ref>, where
the relevant group is the extended Clifford group
obtained by allowing also two-by-two matrices of determinant ± 1. In Hilbert space this
doubling of the group is achieved by representing the extra group elements by anti-unitary transformations <cit.>.
To cover MUB in prime power dimensions we need to generalize in a different
direction. The relevant Heisenberg group is the multipartite Heisenberg group.
We can still define the Clifford group as
the normalizer of this Heisenberg group. We recall that the latter contains many
maximal abelian subgroups, and we refer to the joint eigenvectors of these
subgroups as stabilizer states. The Clifford group acts by permuting the
stabilizer states, and every such permutation can be built as a sequence of
operations on no more than two qubits (or quNits as the case may be) at a time.
In one standard blueprint for universal quantum computing <cit.>, the quantum
computer is able to perform such permutations in a fault–tolerant way, and the
stabilizer states play a role reminiscent of that played by the separable
states (to be defined in Chapter 16)
in quantum communication.
The total number M of stabilizer states in N = p^K dimensions is
M = p^K∏_i=1^K(p^i+1) .
Dividing out a factor p^K we obtain the number of maximal abelian subgroups
of the Heisenberg group. In dimension N = 2^2 there are altogether 60 stabilizer
states forming 15 bases and 6 interlocking complete sets of MUB,
because there are 6 different ways in which the group H(2)× H(2)
can be displayed as a flower. See Figure <ref>.
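The count is elementary to evaluate; the interpretations recorded in the comments are the ones mentioned in the text (a small illustration of ours).

```python
def n_stabilizer(p, K):
    """Number of stabilizer states in dimension N = p^K."""
    M = p ** K
    for i in range(1, K + 1):
        M *= p ** i + 1
    return M

print(n_stabilizer(2, 1))   # 6:  the octahedron of a single qubit
print(n_stabilizer(3, 1))   # 12: four MUB of a qutrit
print(n_stabilizer(2, 2))   # 60: two qubits, 15 bases, 6 complete sets of MUB
```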
The story in higher dimensions
is complicated by the appearance of complete sets that fail to be unitarily equivalent
to each other. We must refer elsewhere for the details <cit.>, but it is
worth remarking that, for the `canonical' choice of a complete set written down by
Ivanović <cit.> and by Wootters and Fields <cit.>, there exists
a very interesting subgroup of the Clifford group leaving this set invariant.
It is known as the restricted Clifford group <cit.>, and has an elegant
description in terms of finite fields.
Moreover (with an exception in dimension 3) the set of vectors that make up this set of
MUB is distinguished by the property that it provides the smallest orbit under
this group <cit.>.
For both Clifford groups, the quotient of their collineation groups with the discrete
translation group provided by their Heisenberg groups is a symplectic group. If we start
with the full Clifford group the symplectic group acts on a
2K-dimensional vector space over Z_p, while in the case of the
restricted Clifford group it can be identified with the group
SL(2, F_p^K) acting on a 2-dimensional vector space over the
finite field F_p^K = GF(p^K) <cit.>.
Armed with these group theoretical facts we can return to the subject of discrete Wigner
functions. If we are in a prime power dimension it is evident that we can produce
a phase point operator simplex by choosing any phase point operator A_f, and act
on it with the appropriate Heisenberg group. But we can ask for more. We can ask
for a phase point operator simplex that transforms into itself when acted on by
the Clifford group. If we succeed, we will have an affine plane that behaves like
a true phase space, since it will be equipped with a symplectic structure. This turns
out to be possible in odd prime power dimensions, but not quite possible when the
dimension is even. We confine ourselves to odd prime dimensions here. Then the Clifford
group contains a unique element of order two, whose unitary representative
we call A_0,0 <cit.>. Using eq. (<ref>) it is
G = ( [ -1 0; 0 -1 ])
⇒
A_0,0 = U_G = ∑_i=0^N-1 |N-i⟩⟨ i| .
By the way we observe that A_0,0^2 = 1, and that F^2 = A_0,0, where F is the Fourier matrix.
Making use of eqs. (<ref>-<ref>) we find
A_0,0 = 1/N∑_r,sD_r,s .
To perform this sum, split it into a sum over the N+1 maximal abelian
subgroups and subtract 1 to avoid overcounting. Diagonalize
each individual generator of these subgroups, say
D_0,1 = Z = ∑_aω^a |0,a⟩⟨ 0,a| ⇒∑_i=0^N-1Z^i = N|0,0⟩⟨ 0,0| .
All subgroups work the same way, so this is enough. We conclude that
A_0,0 =
∑_z|z,0⟩⟨ z,0| - 1 .
(The range of the label z is extended to cover also the
bases that we have labelled by 0 and ∞.) Since we are picking one projector
from each of the N+1 bases this is in fact a phase point operator.
Starting from A_0,0 we can build a set of N^2 order two phase point operators
A_r,s = D_r,sAD_r,s^-1 .
Their eigenvalues are ± 1, so these operators are both Hermitian and
unitary. The dimension N is odd, so we can write N = 2m-1. Each phase
point operator splits Hilbert space into a direct sum of eigenspaces,
H_N = H_m^(+)⊕ H_m-1^(-) .
Altogether we have N^2 subspaces of dimension m, each of
which contain N+1 MUB vectors. Conversely, one can show that each of the N(N+1)
MUB vectors belongs to N such subspaces. This intersection pattern was said to be
“une configuration très-remarquable” when it was first discovered
(By Segre (1886) <cit.>, who was studying elliptic normal curves.
From the present
point of view it was first discovered by Wootters (1987) <cit.>).
The operators A_r,s form a
phase point operator simplex which enjoys the twin advantages of being both a unitary
operator basis and an orbit under the Clifford group. A very satisfactory
discrete Wigner function can be obtained from it <cit.>. The situation
in even prime power dimensions is somewhat less satisfactory since covariance
under the full Clifford group cannot be achieved in this case.
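For a small odd prime all of this can be checked in a few lines. The sketch below is ours and again uses the convention D_{r,s} = τ^{rs}X^rZ^s; it confirms that the parity operator equals both F^2 and (1/N)∑ D_{r,s}, and that the N^2 operators A_{r,s} satisfy TrA A' = Nδ.

```python
import numpy as np

N = 5
w = np.exp(2j * np.pi / N)
tau = np.exp(1j * np.pi * (N + 1) / N)
X = np.roll(np.eye(N), 1, axis=0)
Z = np.diag(w ** np.arange(N))
D = lambda r, s: tau ** (r * s) * np.linalg.matrix_power(X, r) @ np.linalg.matrix_power(Z, s)

A00 = np.zeros((N, N))
A00[(-np.arange(N)) % N, np.arange(N)] = 1      # parity: |i> -> |-i mod N>

F = w ** np.outer(np.arange(N), np.arange(N)) / np.sqrt(N)   # Fourier matrix
assert np.allclose(F @ F, A00)                               # F^2 = A_{0,0}
assert np.allclose(sum(D(r, s) for r in range(N) for s in range(N)) / N, A00)

A = [D(r, s) @ A00 @ D(r, s).conj().T for r in range(N) for s in range(N)]
for f, Af in enumerate(A):
    for g, Ag in enumerate(A):
        assert np.isclose(np.trace(Af @ Ag).real, N if f == g else 0.0, atol=1e-9)
print("the N^2 phase point operators obey TrA A' = N delta")
```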
The set of phase point operators forms a particularly interesting unitary operator
basis, existing in odd prime power dimensions only. Its symmetry group acts on it in
such a way that any pair of elements can be transformed to any other pair. This is
at the root of its usefulness: from it we obtain a discrete Wigner function
on a phase space lacking any kind of scale, just as the ordinary symplectic
vector spaces used in classical mechanics lack any kind of scale. Moreover (with
two exceptions, one in dimension two and and one in dimension eight) it is
uniquely singled out by this property <cit.>.
§ SOME DESIGNS
To introduce our next topic let us say something obvious.
We know that
1/N 1 = 1/N∑_i=1^N|e_i⟩⟨ e_i|
= ∫_ C P^n dΩ_Ψ |Ψ⟩⟨Ψ | ,
where dΩ_Ψ is the unitarily invariant Fubini–Study measure.
Let A be any operator acting on C^N. It follows that
1/N∑_i=0^N-1⟨ e_i|A|e_i⟩ =
∫_ C P^n dΩ_Ψ⟨Ψ|A|Ψ⟩ =
⟨⟨Ψ|A|Ψ⟩⟩_ FS .
On the right hand side we are averaging an (admittedly special)
function over all of C P^N-1. On the left hand side we take
the average of its values at N special points. In statistical
mechanics this equation allows us to evaluate the average expectation value
of the energy by means of the convenient fiction that the system is in an energy
eigenstate—which at first sight is not obvious at all.
To see how this can be generalized we recall the mean value theorem, which
says that for every continuous function defined on the closed
interval [0,1] there exists a point x in the interval such that
f(x) = ∫_0^1 dsf(s) .
Although it is not obvious, this can be generalized to the case of
sets of functions f_i <cit.>. Given such a set of functions one can always
find an averaging set consisting of K different points x_I such that, for
all the f_i,
1/K∑_I=1^Kf_i(x_I) = ∫_0^1 ds f_i(s) .
Of course the averaging set (and the integer K) will depend on the
set of functions { f_i} one wants to average.
We can generalize even more by replacing the interval with a
connected space, such as C P^N-1, and by replacing the real
valued functions with, say, the set Hom(t,t) of all complex valued functions
homogeneous of order t in the homogeneous coordinates and their complex
conjugates alike. (The restriction on the functions is needed in order to
ensure that we get functions on C P^N-1. Note that the expression
⟨ψ|A|ψ⟩ belongs to Hom(1,1).) This too can
always be achieved, with an averaging set being a
collection of points represented by the unit vectors |Ψ_I⟩,
1≤ I ≤ K, for some sufficiently large integer K <cit.>.
We define a complex projective t–design, or t–design for short,
as a collection of unit vectors { |Ψ_I⟩}_I=1^K such that
1/K∑_I=1^Kf(|Ψ_I⟩ ) =
∫_ C P^n dΩ_Ψ f(|Ψ⟩ )
for all polynomials f ∈ Hom(t,t) with the components
of the vector, and their complex conjugates, as arguments.
Formulas like this are called
cubature formulas, since—like quadratures—they give explicit solutions
of an integral, and they are of practical interest—for many signal processing and
quantum information tasks—provided that K can be chosen to be reasonably small.
Eq. (<ref>) shows that orthonormal bases are 1–designs. More generally,
every POVM
is a 1–design. Let us also note that functions f ∈ Hom(t-1,t-1)
can be regarded as special cases of functions in Hom(t,t), since they can be rewritten
as f = ⟨Ψ|Ψ⟩ f ∈ Hom(t,t). Hence a t–design is automatically
a (t-1)–design. But how do we recognize a t–design when we see one?
The answer is quite simple.
In eq. (7.69)
we calculated the Fubini–Study average of
|⟨Φ|Ψ⟩|^2t for a fixed unit vector |Φ⟩.
Now let { |Ψ_I⟩}_I = 1^K be a t–design. It follows that
1/K∑_J |⟨Ψ_I|Ψ_J ⟩ |^2t =
⟨ |⟨Ψ_I|Ψ⟩ |^2t⟩_ FS = t!(N-1)!/(N-1+t)! .
If we multiply by 1/K and then sum over I we obtain
1/K^2∑_I,J|⟨Ψ_I|Ψ_J⟩ |^2t =
t!(N-1)!/(N-1+t)! .
We have proved one direction of the following
Design theorem. The set of unit vectors { |Ψ_I⟩}_I=1^K
forms a t–design if and only if eq. (<ref>) holds.
In the other direction a little more thought is needed <cit.>.
Take any vector |Ψ⟩ in C^N and construct a vector in
( C^N)^⊗ t by taking the tensor product of the
vector with itself t times. Do the same with |Ψ̅⟩, a vector whose components
are the complex conjugates of the components, in a fixed basis, of the given vector. A final
tensor product leads to the vector
|Ψ⟩^⊗ t⊗ |Ψ̅⟩^⊗ t∈
( C^N)^⊗ 2t .
In the given basis the components of this vector are
(z_0… z_0z̅_0…z̅_0, z_0… z_0z̅_1…z̅_1, …… ,
z_n… z_nz̅_n…z̅_n) .
In fact the components consists of all possible monomials in Hom(t,t). Thus, to show that
a set of unit vectors forms a t–design it is enough to show that the vector
|Φ⟩ = 1/K∑_I|Ψ_I⟩^⊗ t⊗
|Ψ̅_I⟩^⊗ t - ∫ dΩ_Ψ|Ψ⟩^⊗ t⊗
|Ψ̅⟩^⊗ t
is the zero vector. This will be so if its norm vanishes. We observe preliminarily that
⟨Ψ_I^⊗ t|Ψ_J^⊗ t⟩ = ⟨Ψ_I|Ψ_J⟩^t
.
If we make use of the ubiquitous eq. (7.69)
we find precisely that
||Φ ||^2 = 1/K^2∑_I,J|⟨Ψ_I|Ψ_J⟩ |^2t -
t!(N-1)!/(N-1+t)! .
This vanishes if and only if eq. (<ref>) holds. But
|Φ⟩ = 0 is a sufficient condition for a t–design, and the
theorem is proven.
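The theorem reduces design-checking to evaluating one sum. As an illustration (ours), the six MUB states of a qubit—the vertices of the octahedron in Bloch space—pass the test for t = 1, 2, 3 and fail it for t = 4, in agreement with the octahedron being a tight 3-design but not a 4-design, as discussed below.

```python
import numpy as np
from math import factorial

vecs = [np.array(v, dtype=complex) for v in
        [(1, 0), (0, 1), (1, 1), (1, -1), (1, 1j), (1, -1j)]]
vecs = [v / np.linalg.norm(v) for v in vecs]
K, N = len(vecs), 2
G = np.abs([[np.vdot(a, b) for b in vecs] for a in vecs]) ** 2

for t in (1, 2, 3, 4):
    lhs = (G ** t).sum() / K ** 2
    rhs = factorial(t) * factorial(N - 1) / factorial(N - 1 + t)
    print(f"t = {t}: {lhs:.6f} vs {rhs:.6f} ->",
          "t-design" if np.isclose(lhs, rhs) else "not a t-design")
```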
This result is closely related to the Welch bound <cit.>,
which holds for every collection of K vectors in
ℂ^N. For any positive integer t
( [ N+t-1; t ]) ∑_I,J|⟨ x_I|x_J⟩ |^2t≥( ∑_I ⟨ x_I|x_I⟩^t)^2 .
Evidently a collection of unit vectors forms a t-design if and only if
the Welch bound is saturated.
The binomial coefficient occurring here is the number of ways in which t identical objects
can be distributed over N boxes, or equivalently it is the dimension of the symmetric
subspace H_ sym^⊗ t of the t–partite Hilbert space
H_N^⊗ t. This is not by accident. Introduce the operator
F = ∑_I|Ψ_I^⊗ t⟩⟨Ψ_I^⊗ t| .
It is then easy to see, keeping eq. (<ref>) in mind, that
TrF = ∑_I⟨Ψ_I^⊗ t|Ψ_I^⊗ t⟩ = K
TrF^2 = ∑_I,J⟨Ψ_I^⊗ t|
Ψ_J^⊗ t⟩⟨Ψ_J^⊗ t|Ψ_I^⊗ t⟩ =
∑_I,J|⟨Ψ_I|Ψ_J⟩|^2t .
Now we can minimize TrF^2 under the constraint that TrF = K. In fact this
means that all the eigenvalues λ_i of F have to be equal, namely equal to
λ_i = K/ dim( H^⊗ t_ sym) .
So we have rederived the inequality
∑_I,J|⟨Ψ_I|Ψ_J⟩|^2t = TrF^2 ≥K^2/ dim( H^⊗ t_ sym) .
Moreover we see that the operator F projects onto the symmetric subspace.
Although t–designs exist in all dimensions, for all t, it is not so easy to find
examples with small number of vectors. A lower bound on the number of vectors needed is
<cit.>
K ≥( [ N+(t/2)_+ -1; (t/2)_+ ]) ( [ N+(t/2)_- - 1; (t/2)_- ]) ,
where (t/2)_+ is the smallest integer not smaller than t/2 and
(t/2)_- is the largest integer not larger than t/2. The design is said to be
tight if the number of its vectors saturates this bound. Can the bound
be achieved? For dimension N = 2 much is known <cit.>. A tight 2–design
is obtained by inscribing a regular tetrahedron in the Bloch sphere. A tight 3–design is
obtained by inscribing a regular octahedron, and a tight 5–design by inscribing a regular
icosahedron. The icosahedron is also the smallest 4–design, so tight 4–designs do not
exist in this dimension. A cube gives a 3-design and a dodecahedron gives a 5-design.
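In dimension N = 2 the bound reproduces exactly the Platonic counts just listed; a short computation (ours) makes this explicit.

```python
from math import comb, ceil, floor

N = 2
for t in range(1, 6):
    bound = comb(N + ceil(t / 2) - 1, ceil(t / 2)) * comb(N + floor(t / 2) - 1, floor(t / 2))
    print(f"t = {t}: a t-design in dimension {N} needs at least {bound} vectors")
# t = 2 -> 4 (tetrahedron), t = 3 -> 6 (octahedron), t = 5 -> 12 (icosahedron)
```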
For dimensions N > 2 it is known that tight t–designs can exist at most for t=1,2,3.
Every orthonormal basis is a tight 1–design. A tight 2–design needs N^2 vectors, and
the question whether they exist is the subject of Section <ref>.
Meanwhile
we observe that the N(N+1) vectors in a complete set of MUB saturate
the Welch bound for t = 1,2. Hence complete sets of MUB are 2–designs, and much of
their usefulness stems from this fact. Tight 3–designs
exist in dimensions 2, 4, and 6. In general it is not known how many vectors
are needed for minimal t–designs in arbitrary dimensions, which is why the
terminology `tight' is likely to be with us for some time.
A
particularly nice account of all these matters is in the University of
Waterloo Master's thesis by Belovs (2008) <cit.>.
For more results, and references that we have omitted, see Scott <cit.>.
The name `design' is used for more than one concept. One example, closely related
to the one we have been discussing, is that of a
unitary t–design.
(Although there was a prehistory, the name seems to stem from
a University of Waterloo Master's thesis by Dankert (2005) <cit.>. The idea was further
developed in papers to which we refer for proofs, applications, and details <cit.>).
By definition this is a set of unitary
operators { U_I}_I=1^K with the property that
1/K∑_IU_I^⊗ tA(U_I^⊗ t)^† = ∫_U(N) dU
U^⊗ tA(U^⊗ t)^† ,
where A is any operator acting on the t–partite Hilbert space and dU is
the normalized Haar measure on the group manifold. In the particularly interesting case t = 2
the averaging operation performed on the right hand side is known as twirling.
Condition (<ref>) for when a collection of vectors forms a projective t–design
has a direct analogue: the necessary and sufficient condition for a collection of K unitary
matrices to form a unitary t–design is that
1/K^2∑_I,J|TrU_I^† U_J|^2t = ∫_U(N) dU
|TrU|^2t = {[ (2t)!/t!(t+1)! , N = 2; t! , N ≥ t . ].
When t > N the right hand side looks more complicated.
It is natural to ask for the
operators U_I to form a finite group. The criterion for (a projective unitary representation
of) a finite group to serve as a unitary t–design is that it should have the same number of
irreducible components in the t–partite Hilbert space as the group U^⊗ t itself.
Thus a nice error basis, such as the Weyl–Heisenberg group, is always a 1–design because
any operator commuting with all
the elements of a nice error basis is proportional to the identity matrix. When t = 2 the
group U⊗ U splits the bipartite Hilbert space into its symmetric and its anti-symmetric
subspace.
In prime power dimensions both the Clifford group and the restricted Clifford group
are unitary 2-designs <cit.>. In fact,
it is enough to use a particular subgroup
of the Clifford group <cit.>.
For qubits, the minimal unitary 2-design is the
tetrahedral group, which has only 12 elements. In even prime power dimensions 2^k
the Clifford group, but not the restricted Clifford group, is a unitary 3–design
as well <cit.>. Interestingly, every
orbit of a group yielding a unitary t–design is a projective t–design.
This gives an alternative proof that a complete set of MUB is a 2–design
(in those cases where it is an orbit under the restricted Clifford group).
In even prime power dimensions the set of all
stabilizer states is a 3–design.
In dimension 4 it consists of 60 vectors, while a tight
3–design (which actually exists in this case) has 40 vectors only.
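In the smallest case these claims can be verified by brute force. The sketch below (ours) generates the 24 collineations of the single-qubit Clifford group from the Hadamard and phase gates and evaluates the design criterion; the right hand side for N = 2 is the Catalan number (2t)!/(t!(t+1)!), and the test passes for t = 1, 2, 3 and fails for t = 4.

```python
import numpy as np
from math import factorial

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
I2 = np.eye(2, dtype=complex)

def key(U):
    """A phase-independent fingerprint, used to build the group modulo phases."""
    k = np.argmax(np.abs(U))
    V = np.round(U / (U.flat[k] / abs(U.flat[k])), 10)
    V[np.abs(V) < 1e-9] = 0
    return V.tobytes()

group, todo = {key(I2): I2}, [I2]
while todo:
    U = todo.pop()
    for g in (H, S):
        V = g @ U
        k = key(V)
        if k not in group:
            group[k] = V
            todo.append(V)
Us = list(group.values())
print(len(Us), "collineations")          # 24

for t in (1, 2, 3, 4):
    lhs = sum(abs(np.trace(U.conj().T @ V)) ** (2 * t) for U in Us for V in Us) / len(Us) ** 2
    catalan = factorial(2 * t) // (factorial(t) * factorial(t + 1))
    print(f"t = {t}: {lhs:.4f} vs {catalan} ->",
          "unitary t-design" if np.isclose(lhs, catalan) else "not")
```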
§ SICS
At the end of Section 8.4
we asked the seemingly innocent question:
Is it possible to inscribe a regular simplex of full dimension into the
convex body of density matrices? A tight 2–design in dimension N, if it exists,
has N^2 vectors only, and our question can be restated as: Do tight 2–designs
exist in all dimensions?
In Hilbert space language the question is: Can we find an informationally complete POVM
made up of equiangular vectors? Since absolute values of the scalar products are taken
the word `vector' really refers to a ray (a point in C P^n). That is,
we ask for N^2 vectors |ψ_I⟩ such that
1/N∑_I=1^N^2|ψ_I⟩⟨ψ_I| = 1
|⟨ψ_I|ψ_J⟩ |^2 = {[ 1 I = J; ; 1/N+1 I ≠ J . ].
We need N^2 unit vectors to have informational completeness (in the sense of
Section 10.1),
and we are assuming that the mutual fidelities are equal.
The precise number 1/(N+1) follows by squaring
the expression on the left hand side of eq. (<ref>), and then taking the trace. Such a collection
of vectors is called a SIC, so the final form of the question is: Do SICs
exist?
(The acronym is short for Symmetric Informationally
Complete Positive Operator Valued Measure <cit.>, and is rarely spelled out. We
prefer to use `SIC' as a noun. When pronounced as `seek' it serves to remind
us that the existence problem may well be hiding their most important message).
If they exist, SICs have some desirable properties. First of all they saturate
the Welch bound, and hence they are 2-designs with the minimal number
of vectors. Moreover, also for other reasons, they are theoretically
advantageous in quantum state tomography <cit.>, and they provide a
preferred form for informationally complete POVMs. Indeed an entire philosophy
can be built around them <cit.>.
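In the lowest dimension the answer has long been known to be yes, and it is worth seeing this concretely before stating the general conjectures: the four states whose Bloch vectors form a regular tetrahedron—a configuration that will reappear below—constitute a SIC for N = 2. A minimal numerical check (ours):

```python
import numpy as np

# four Bloch vectors forming a regular tetrahedron
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
P = [(np.eye(2) + n[0] * sx + n[1] * sy + n[2] * sz) / 2 for n in ns]

assert np.allclose(sum(P) / 2, np.eye(2))        # (1/N) sum of projectors = 1
for i in range(4):
    for j in range(4):
        assert np.isclose(np.trace(P[i] @ P[j]).real, 1.0 if i == j else 1 / 3)
print("the tetrahedron is a SIC in dimension N = 2, with overlaps 1/(N+1) = 1/3")
```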
But the SIC existence problem is unsolved.
Perhaps we should begin by noting that there is a crisp non-existence result
for the real Hilbert spaces R^N.
Then Bloch space has dimension N(N+1)/2-1, so the number of equiangular
vectors in a real SIC is N(N+1)/2. For N = 2 the rays of a real SIC
pass through the vertices of a regular triangle, and for N = 3 the
six diagonals of an icosahedron will serve. However, for N > 3 it
can be shown that a real SIC cannot exist unless N+2 is a square of
an odd integer. In particular N = 4 is ruled out. In fact SICs do not
exist in real dimension 47 = 7^2-2 either, so there are further
obstructions. The non-existence result is due to
Neumann, and reported by Lemmens and Seidel (1973) <cit.>.
Since then more has been learned <cit.>. Incidentally the SIC
in R^3 has been proposed as an ideal configuration for
an interferometric gravitational wave detector <cit.>.
In C^N exact solutions are available in all
dimensions 2≤ N ≤ 21,
and in a handful of dimensions higher than that. Numerical solutions to high
precision are available in all dimensions given by two-digit numbers and a bit
beyond that. (Most of these results, many of them unpublished, are due
to Gerhard Zauner, Marcus Appleby, Markus Grassl, and Andrew Scott. For the
state of the art in 2009, see Scott and Grassl <cit.>. The first two
parts of the conjecture are due to Zauner (1999) <cit.>, the third to
Appleby at al. (2013) <cit.>. We are restating it a little for
convenience).
The existing solutions support a three-pronged conjecture:
1. In every dimension there exists a SIC which is an orbit of the Weyl–Heisenberg group.
2. Every vector belonging to such a SIC is invariant under a Clifford group
element of order 3.
3. When N > 3 the overlaps of the SIC vectors are algebraic units in
an abelian
extension of the real quadratic field Q(√((N-3)(N+1))).
Let us sort out what this means, beginning with the easily understood part 1.
In two dimensions a SIC forms a tetrahedron inscribed in the Bloch sphere. If we orient it so that its
corners lie right on top of the faces of the octahedron whose corners are the stabilizer states it is easy to see (look at Figure <ref>a) that the Weyl–Heisenberg
group can be used to reach any
corner of the tetrahedron starting from any fixed corner. In other words, when N = 2
we can always write the N^2 SIC vectors in the form
|ψ_r,s⟩ = D_r,s|ψ_0⟩ , 0 ≤
r,s ≤ N-1 ,
where |ψ_0⟩ is known as the fiducial vector for the SIC (and has
to be chosen carefully, in a fixed representation of the Weyl–Heisenberg
group). Conjecture 1 says that it is possible to find such a fiducial vector in every
dimension. Numerical searches are based on this, and basically proceed by minimizing the function
f_ SIC = ∑_r,s( |⟨ψ_0|ψ_r,s⟩|^2
- 1/N+1)^2 ,
where the sum runs over all pairs (r,s) ≠ (0,0). The arguments of the
function are the components of the fiducial vector |ψ_0⟩. This is a
fiducial vector for a SIC if and only if f_ SIC = 0. Solutions have been found
in all dimensions that have been looked at—even though the presence
of many local minima of the function makes the task difficult. SICs arising
in this way are said to be covariant under the Weyl–Heisenberg group.
It is believed that the numerical searches for such WH-SICs are exhaustive up to
dimension N = 50 <cit.>. They necessarily fall into orbits of the
Clifford group, extended to include anti-unitary symmetries.
For N ≤ 50 there are six cases where there is only one such orbit
(namely N =2, 4, 5, 10, or 22),
while as many as ten orbits occur in two cases (N = 35 or 39).
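To give the flavour of such searches, here is a minimal sketch (ours, relying on scipy and on the convention for D_{r,s} used earlier; serious searches handle local minima and precision far more carefully) that minimizes f_SIC from random starting points in dimension N = 3.

```python
import numpy as np
from scipy.optimize import minimize

N = 3
w = np.exp(2j * np.pi / N)
tau = np.exp(1j * np.pi * (N + 1) / N)
X = np.roll(np.eye(N), 1, axis=0)
Z = np.diag(w ** np.arange(N))
disp = [tau ** (r * s) * np.linalg.matrix_power(X, r) @ np.linalg.matrix_power(Z, s)
        for r in range(N) for s in range(N) if (r, s) != (0, 0)]

def f_sic(x):
    psi = x[:N] + 1j * x[N:]
    psi = psi / np.linalg.norm(psi)
    return sum((abs(np.vdot(psi, Dm @ psi)) ** 2 - 1 / (N + 1)) ** 2 for Dm in disp)

best = min((minimize(f_sic, np.random.randn(2 * N)) for _ in range(20)),
           key=lambda res: res.fun)
print("min f_SIC found:", best.fun)   # close to zero when a fiducial vector is found
```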
Can SICs not covariant under a group exist? The only publicly available answer to
this question is that if N ≤ 3 then all SICs are orbits under the
Weyl–Heisenberg group <cit.>. Can any other group serve the purpose?
If N = 8 there exists an elegant SIC covariant under H(2)^⊗ 3
<cit.>, as well as two Clifford orbits of SICs covariant under the
Weyl–Heisenberg group H(8). No other examples of a SIC not covariant
under H(N) are known, and indeed it is known that for prime N
the Weyl–Heisenberg group is the only group that can yield SICs <cit.>.
Since the mutually unbiased bases rely on the multipartite Heisenberg group
this means that there can be no obvious connection between MUB and SICs,
except in prime dimensions. In prime dimensions it is known that the
Bloch vector of a SIC projector, when projected onto any one of the MUB
eigenvalue simplices, has the same length for all the N+1 simplices defined
by a complete set of MUB <cit.>. If N = 2, 3, every state having this
property is a SIC fiducial <cit.>, but when N ≥ 5 this is far from
being the case. The two lowest dimensions have a very special status.
The second part of the conjecture is due to Zauner. It clearly holds if N = 2.
Then the Clifford group, the group that permutes the stabilizer states, is the
symmetry group of the octahedron. This group contains elements of order 3, and
by inspection we see that such elements leave some corner of the SIC-tetrahedron
invariant. The conjecture says that such a symmetry is shared by all SIC vectors
in all dimensions, and this has been found to hold true for every solution found so far. The sizes of the Clifford orbits shrink accordingly. In many dimensions—very
much so in 19 and 48 dimensions—there are SICs
left invariant by larger subgroups of the Clifford group, but the order 3
symmetry is always present and appears to be universal.
There is no understanding of why this should be so.
To understand the third part of the conjecture, and why it is interesting,
it is necessary to go into the methods used to produce solutions in the first place.
Given conjecture 1, the straightforward way to find a SIC is to solve the
equations
|⟨ψ_0|D_rs|ψ_0⟩|^2 =
1/N+1
for (r,s) ≠ (0,0). Together with the normalization this is a set
of N^2 multivariate quartic polynomial equations in the 2N
real variables needed to specify the fiducial vector. To solve them
one uses the method of Gröbner bases to reduce the set of equations to a single
polynomial equation in one variable <cit.>. This is a task for computer
programs such as MAGMA, and a clever programmer. The number of equations greatly
exceeds the number of variables, so it would not be surprising if they did not have a
solution. But solutions do exist.
As an example, here is a fiducial vector for a SIC in 4
dimensions <cit.>:
ψ_0 = ( [ 1-τ/2√(5+√(5)/10); 1/20( i√(50-10√(5)) + (1+i)√(5(5+3√(5))); -1+τ/2√(5+√(5)/10); 1/20( i√(50-10√(5)) - (1-i)√(5(5+3√(5))) ])
(with τ = -e^iπ/4). This does not look memorable at first sight.
Note though that all components can be expressed in terms of nested square roots.
This means that they are numbers that can be constructed by means of rulers and
compasses, just as the ancient Greeks would have wished. This is not at all what
one would expect to come out from a Gröbner basis calculation, which in the end
requires one to solve a polynomial equation in one variable but of high degree.
Galois showed long ago that generic polynomial equations cannot be solved
by means of nested root extractions if their degree exceeds four.
And we can simplify the expression. Using the fact that the Weyl–Heisenberg
group is a unitary operator basis, eqs. (<ref>-<ref>) we can
write the fiducial projector as
|ψ_0⟩⟨ψ_0| = 1/N∑_r,s=0^N-1
D_r,s⟨ψ_0|D_r,s^† |ψ_0⟩ .
But the modulus of the overlaps is fixed by the SIC conditions, so
we can define the phase factors
e^iθ_r,s = √(N+1)⟨ψ_0|D_r,s|ψ_0⟩ , (r,s) ≠ (0,0) .
These phase factors are independent of the choice of basis in ℂ^N,
and if we know them we can reconstruct the SIC. The
number of independent phase factors is limited by the Zauner symmetry (and
by any further symmetry that the SIC may have). For the N = 4 example we find
[ [ × e^iθ_0,1 e^iθ_0,2 e^iθ_0,3; e^iθ_1,0 e^iθ_1,1 e^iθ_1,2 e^iθ_1,3; e^iθ_2,0 e^iθ_2,1 e^iθ_2,2 e^iθ_2,3; e^iθ_3,0 e^iθ_3,1 e^iθ_3,2 e^iθ_3,3 ]] = [
[ × u -1 1/u; u 1/u -1/u 1/u; -1 -u -1 1/u; 1/u u u u ]] ,
where
u = (√(5)-1)/(2√(2)) + (i/2)√(√(5)+1) .
The pattern in eq. (<ref>) is forced upon us by the Zauner symmetry,
so once we know the number u it is straightforward to reconstruct the entire SIC.
What is this number? To answer this question one computes (usually by means of
a computer) the minimal polynomial, the lowest degree polynomial with
coefficients among the integers satisfied by the number u. In this case it is
p(t) = t^8 -2t^6 - 2t^4 - 2t^2 +1 .
Because the minimal polynomial exists, we say that u is an
algebraic number. Because its leading coefficient equals 1, we say that u
is an algebraic integer. Algebraic integers form a ring, just as the ordinary
integers do. (Algebraic number theorists refer to ordinary integers as
`rational integers'. This is the special case when the polynomial is of first
order.) Finally we observe that 1/u is an algebraic integer too—in fact it
is another root of the same equation—so we say that u is an algebraic
unit.
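These statements about u are easy to confirm in floating point arithmetic (a check of ours):

```python
import numpy as np

u = (np.sqrt(5) - 1) / (2 * np.sqrt(2)) + 0.5j * np.sqrt(np.sqrt(5) + 1)
p = lambda t: t**8 - 2 * t**6 - 2 * t**4 - 2 * t**2 + 1
print(abs(p(u)), abs(p(1 / u)))   # both ~ 1e-15: u and 1/u are roots
print(abs(u))                      # 1.0: u is a pure phase, as it must be
```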
Neither algebraic number theory nor Galois theory (which we are
coming to) lend themselves to thumbnail sketches. For first encounters we recommend
the books by Alaca and Williams <cit.> and by Howie <cit.>, respectively.
Hence u is a very special number, and the question arises what number field it
belongs to. This is obtained by adding the roots of (<ref>) to the field of
rational numbers, ℚ. We did give a thumbnail sketch of field extensions
in Section <ref>, but now we are interested in fields with an
infinite number of elements. When we adjoined a root of the equation x^2 +1 = 0
to the real field R we obtained the complex field R( i) =
C. In fact there are two roots of the equation, and there is a group—in
this case the abelian group Z_2—that permutes them. This group is called the
Galois group
of the extension. It can also be regarded as the group of automorphisms of the extended field
C that leaves the ground field R invariant. A Galois group arises
whenever a root of an irreducible polynomial is added to a field. Galois proved that
a polynomial with rational coefficients can be solved with root extractions if and only
if the Galois group is soluble
(as is the case for a generic quartic, but not for a generic quintic). This is
the origin of the name `soluble'. For a group to be soluble it must have a particular
pattern of invariant subgroups.
What one finds in the case at hand is that the Galois group is non-abelian, but only barely
so. The field for the N = 4 SIC can be obtained by first extending ℚ to
ℚ(√(5)), that is to say to consider the real quadratic field
consisting of all numbers of the form x + √(5)y, where x,y are rational. A further
extension then leads to the field ℚ(u,r), where r is an additional (real)
root of (<ref>). The Galois group arising in the second step, considered by itself,
is abelian, and the second extension is therefore said to be an abelian extension.
Thus the mysterious number u does not only belong to the field whose construction we sketched,
it has a very special status in it, and moreover the field is of an especially important
kind, technically known as a ray class field.
Things are not quite so simple in higher dimensions, but almost so—in
terms of principles that is, not in terms of the calculations that need to be
done.
(The number theory of SICs, so far as it is known, was developed by
Appleby, Flammia, McConnell, and Yard (2016) <cit.>).
The SIC overlaps, and the SIC vectors themselves if expressed in the natural basis, are still
given by nested radicals, although no longer by square roots only so they are not
constructible with ruler and compass. Indeed the third part of the SIC conjecture
holds in all cases that have been looked at. What is more, the SIC overlaps
continue to yield algebraic units. But it is not understood why the polynomial
equations that define the SICs have this property.
A field extension is said to be abelian whenever its Galois group is abelian.
In the nineteenth century Kronecker and Weber studied
abelian extensions of the rational field Q, and they proved that all such
extensions are subfields of a cyclotomic field, which is what one obtains by
adjoining a root of unity to the rational numbers. (It will be observed that MUB vectors
have all their overlaps in a cyclotomic field.)
Kronecker's Jugendtraum was to
extend this result to much more general ground fields. For instance, there are quadratic
extensions of the rationals such as Q(√(5)), consisting of all numbers
of the form x_1 + x_2√(5), where x_1,x_2 are rational numbers. More generally
one can replace the square root of five with the square root of any integer. If that integer
is negative the extension is an imaginary quadratic extension, otherwise it is a real
quadratic extension.
For the case of abelian extensions of an imaginary quadratic extension
of ℚ Kronecker's dream led to brilliant successes, with deep connections to—among
other things—the theory of elliptic curves. Such numbers turn up for special choices of the
arguments of elliptic and modular functions, much as the numbers in a cyclotomic field turn
up when the function e^iπ x is evaluated at rational values of x.
As the 12th on his famous list of problems for
the twentieth century, Hilbert asked for the restriction to imaginary quadratic fields
to be removed.
(For an engaging account of this piece of history see
the book by Gray <cit.>). The natural first step in solving the 12th
problem would seem to be to find a framework for the abelian extensions
of the real quadratic fields. This remains unsolved, but it seems to be in these deep waters that the SICs are swimming.
* * *
This chapter may have left the reader a little bewildered, and appalled by equations
like (<ref>). In defence of the chapter, we observe that practical applications
(to signal processing, adaptive radar, and more) lie close to it. But the last word goes
to Hilbert <cit.>:
There still lies an abundance of priceless treasures hidden in this domain, belonging
as a rich reward to the explorer who knows the value of such treasures and, with love,
pursues the art to win them.
§ CONCLUDING REMARKS
The aim of these notes is literally to present
a concise introduction to the broad subject of
discrete structures in a finite Hilbert space.
We are convinced that such knowledge is useful
when investigating numerous problems motivated by
the theory of quantum information processing.
Even for small dimensions of the Hilbert space
several intriguing questions remain open,
so we are pleased to encourage the
reader to contribute to this challenging field.
We are indebted to Marcus Appleby,
Dardo Goyeneche, Marcus Grassl,
David Gross and Huangjun Zhu, for reading some fragments
of the text and providing us with valuable remarks.
Financial support by Narodowe Centrum Nauki
under the grant number DEC-2015/18/A/ST2/00274
is gratefully acknowledged.
§ CONTENTS OF THE II EDITION OF THE BOOK "GEOMETRY OF QUANTUM STATES.
AN INTRODUCTION TO QUANTUM ENTANGLEMENT"
BY I. BENGTSSON AND K. ŻYCZKOWSKI
1 Convexity, colours and statistics
1.1 Convex sets
1.2 High dimensional geometry
1.3 Colour theory
1.4 What is “distance”?
1.5 Probability and statistics
2 Geometry of probability distributions
2.1 Majorization and partial order
2.2 Shannon entropy
2.3 Relative entropy
2.4 Continuous distributions and measures
2.5 Statistical geometry and the Fisher–Rao metric
2.6 Classical ensembles
2.7 Generalized entropies
3 Much ado about spheres
3.1 Spheres
3.2 Parallel transport and statistical geometry
3.3 Complex, Hermitian, and Kähler manifolds
3.4 Symplectic manifolds
3.5 The Hopf fibration of the 3-sphere
3.6 Fibre bundles and their connections
3.7 The 3-sphere as a group
3.8 Cosets and all that
4 Complex projective spaces
4.1 From art to mathematics
4.2 Complex projective geometry
4.3 Complex curves, quadrics and the Segre embedding
4.4 Stars, spinors, and complex curves
4.5 The Fubini-Study metric
4.6 ℂ P^n illustrated
4.7 Symplectic geometry and the Fubini–Study measure
4.8 Fibre bundle aspects
4.9 Grassmannians and flag manifolds
5 Outline of quantum mechanics
5.1 Quantum mechanics
5.2 Qubits and Bloch spheres
5.3 The statistical and the Fubini-Study distances
5.4 A real look at quantum dynamics
5.5 Time reversals
5.6 Classical & quantum states: a unified approach
5.7 Gleason and Kochen-Specker
6 Coherent states and group actions
6.1 Canonical coherent states
6.2 Quasi-probability distributions on the plane
6.3 Bloch coherent states
6.4 From complex curves to SU(K) coherent states
6.5 SU(3) coherent states
7 The stellar representation
7.1 The stellar representation in quantum mechanics
7.2 Orbits and coherent states
7.3 The Husimi function
7.4 Wehrl entropy and the Lieb conjecture
7.5 Generalised Wehrl entropies
7.6 Random pure states
7.7 From the transport problem to the Monge distance
8 The space of density matrices
8.1 Hilbert–Schmidt space and positive operators
8.2 The set of mixed states
8.3 Unitary transformations
8.4 The space of density matrices as a convex set
8.5 Stratification
8.6 Projections and cross–sections
8.7 An algebraic afterthought
8.8 Summary
9 Purification of mixed quantum states
9.1 Tensor products and state reduction
9.2 The Schmidt decomposition
9.3 State purification & the Hilbert-Schmidt bundle
9.4 A first look at the Bures metric
9.5 Bures geometry for N=2
9.6 Further properties of the Bures metric
10 Quantum operations
10.1 Measurements and POVMs
10.2 Algebraic detour: matrix reshaping and reshuffling
10.3 Positive and completely positive maps
10.4 Environmental representations
10.5 Some spectral properties
10.6 Unital & bistochastic maps
10.7 One qubit maps
11 Duality: maps versus states
11.1 Positive & decomposable maps
11.2 Dual cones and super-positive maps
11.3 Jamiołkowski isomorphism
11.4 Quantum maps and quantum states
12 Discrete structures in Hilbert space
12.1 Unitary operator bases and the Heisenberg groups
12.2 Prime, composite, and prime power dimensions
12.3 More unitary operator bases
12.4 Mutually unbiased bases
12.5 Finite geometries and discrete Wigner functions
12.6 Clifford groups and stabilizer states
12.7 Some designs
12.8 SICs
13 Density matrices and entropies
13.1 Ordering operators
13.2 Von Neumann entropy
13.3 Quantum relative entropy
13.4 Other entropies
13.5 Majorization of density matrices
13.6 Proof of the Lieb conjecture
13.7 Entropy dynamics
14 Distinguishability measures
14.1 Classical distinguishability measures
14.2 Quantum distinguishability measures
14.3 Fidelity and statistical distance
15 Monotone metrics and measures
15.1 Monotone metrics
15.2 Product measures and flag manifolds
15.3 Hilbert-Schmidt measure
15.4 Bures measure
15.5 Induced measures
15.6 Random density matrices
15.7 Random operations
15.8 Concentration of measure
16 Quantum entanglement
16.1 Introducing entanglement
16.2 Two qubit pure states: entanglement illustrated
16.3 Maximally entangled states
16.4 Pure states of a bipartite system
16.5 A first look at entangled mixed states
16.6 Separability criteria
16.7 Geometry of the set of separable states
16.8 Entanglement measures
16.9 Two qubit mixed states
17 Multipartite entanglement
17.1 How much is three larger than two?
17.2 Botany of states
17.3 Permutation symmetric states
17.4 Invariant theory and quantum states
17.5 Monogamy relations and global multipartite entanglement
17.6 Local spectra and the momentum map
17.7 AME states and error–correcting codes
17.8 Entanglement in quantum spin systems
Epilogue
Appendix 1 Basic notions of differential geometry
Appendix 2 Basic notions of group theory
Appendix 3 Geometry do it yourself
Appendix 4 Hints and answers to the exercises
References
AW04
S. Alaca and K. S. Williams.
Introductory Algebraic Number Theory.
Cambridge UP, 2004.
Al80
W. O. Alltop.
Complex sequences with low periodic correlations.
IEEE Trans. Inform. Theory., 26:350, 1980.
ABBD15
D. Andersson, I. Bengtsson, K. Blanchfield, and H. B. Dang.
States that are far from being stabilizer states.
J. Phys., A 48:345301, 2015.
AB15
O. Andersson and I. Bengtsson.
Clifford tori and unbiased vectors.
preprint arXiv:1506.09062.
App09
D. M. Appleby.
Properties of the extended Clifford group with applications to
SIC-POVMs and MUBs.
preprint arXiv:0909.5233.
App05
D. M. Appleby.
Symmetric informationally complete-positive operator measures and the
extended Clifford group.
J. Math. Phys., 46:052107, 2005.
ADF14
D. M. Appleby, H. B. Dang, and C. A. Fuchs.
Symmetric informationally-complete quantum states as analogues to
orthonormal bases and minimum-uncertainty states.
Entropy, 16:1484, 2014.
AFMY16
D. M. Appleby, S. Flammia, G. McConnell, and J. Yard.
Generating ray class fields of real quadratic fields via complex
equiangular lines.
preprint arXiv:1604.06098.
AYZ13
D. M. Appleby, H. Yadsan-Appleby, and G. Zauner.
Galois automorphisms of a symmetric measurement.
Quant. Inf. Comp., 13:672, 2013.
Ar00
V. I. Arnold.
Symplectic geometry and topology.
J. Math. Phys., 41:3307, 2000.
Arn11
V. I. Arnold.
Dynamics, Statistics, and Projective Geometry of Galois
Fields.
Cambridge University Press, 2011.
Asc07
M. Aschbacher, A. M. Childs, and P. Wocjan.
The limitations of nice mutually unbiased bases.
J. Algebr. Comb., 25:111, 2007.
BBRV02
S. Bandyopadhyay, P. O. Boykin, V. Roychowdhury, and F. Vatan.
A new proof for the existence of mutually unbiased bases.
Algorithmica, 34:512, 2002.
Bel08
A. Belovs.
Welch bounds and quantum state tomography.
MSc thesis, Univ. Waterloo, 2008.
BZ06
I. Bengtsson and K. Życzkowski.
Geometry of Quantum States. An Introduction to Quantum Entanglement.
Cambridge University Press, 2006. Second Edition, CUP, 2017.
BZ16
I. Bengtsson and K. Życzkowski.
A brief introduction to multipartite entanglement.
preprint arXiv:1612.07747.
BB84
C. H. Bennett and G. Brassard.
Quantum cryptography: public key distribution and coin tossing.
In Int. Conf. on Computers, Systems and Signal
Processing, Bangalore, page 175. IEEE, 1984.
Ben95
M. K. Bennett.
Affine and Projective Geometry.
Wiley, 1995.
Bl14
K. Blanchfield.
Orbits of mutually unbiased bases.
J. Phys., 47:135303, 2014.
BRW60
B. Bolt, T. G. Room, and G. E. Wall.
On the Clifford collineation, transform and similarity groups I.
J. Austral. Math. Soc., 2:60, 1960.
BoZh15
A. Bondal and I. Zhdanovskiy.
Orthogonal pairs and mutually unbiased bases.
preprint arXiv:1510.05317.
Boy10
L. Boyle.
Perfect porcupines: ideal networks for low frequency gravitational
wave astronomy.
preprint arXiv:1003.4946.
BrW08
S. Brierley and S. Weigert.
Maximal sets of mutually unbiased quantum states in dimension six.
Phys. Rev., A 78:042312, 2008.
CCKS97
A. R. Calderbank, P. J. Cameron, W. M. Kantor, and J. J. Seidel.
Z_4-Kerdock codes, orthogonal spreads, and extremal Euclidean
line sets.
Proc. London Math. Soc., 75:436, 1997.
CBKG02
N. J. Cerf, M. Bourennane, A. Karlsson, and N. Gisin.
Security of quantum key distribution using d-level systems.
Phys. Rev. Lett., 88:127902, 2002.
Cha05
H. F. Chau.
Unconditionally secure key distribution in higher dimensions by
depolarization.
IEEE Trans. Inf. Theory, 51:1451, 2005.
Dan05
C. Dankert.
Efficient Simulation of Random Quantum States and Operators.
MSc thesis, Univ. Waterloo, 2005.
DiVLeTe02
D. DiVincenzo, D. W. Leung, and B. Terhal.
Quantum data hiding.
IEEE Trans. Inf. Theory, 48:580, 2002.
EA01
B.-G. Englert and Y. Aharonov.
The mean king's problem: Prime degrees of freedom.
Phys. Lett., 284:1, 2001.
Fis35
R. A. Fisher.
The Design of Experiments.
Oliver and Boyd, 1935.
FuSc13
C. A. Fuchs and R. Schack.
Quantum-Bayesian coherence.
Rev. Mod. Phys., 85:1693, 2013.
GoRo09
C. Godsil and A. Roy.
Equiangular lines, mutually unbiased bases, and spin models.
Eur. J. Comb., 30:246, 2009.
Go97
D. Gottesman.
Stabilizer Codes and Quantum Error Correction.
PhD thesis, California Institute of Technology, 1997.
Gr04
M. Grassl.
On SIC-POVMs and MUBs in dimension 6.
preprint quant-ph/040675.
Gray00
J. Gray.
The Hilbert Challenge.
Oxford UP, 2000.
Gro06
D. Gross.
Hudson's theorem for finite-dimensional quantum systems.
J. Math. Phys., 47:122107, 2006.
GrAuEi07
D. Gross, K. Audenaert, and J. Eisert.
Evenly distributed unitaries: on the structure of unitary designs.
J. Math. Phys., 48:052104, 2007.
Had93
J. Hadamard.
Résolution d'une question relative aux déterminants.
Bull. Sci. Math., 17:240, 1893.
HaSl96
R. H. Hardin and N. J. A. Sloane.
McLaren's improved snub cube and other new spherical designs in
three dimensions.
Discrete Comput. Geom., 15:429, 1996.
Hog82
S. G. Hoggar.
t-designs in projective spaces.
Europ. J. Combin., 3:233, 1982.
Hog98
S. G. Hoggar.
64 lines from a quaternionic polytope.
Geometriae Dedicata, 69:287, 1998.
Hor07
K. J. Horadam.
Hadamard Matrices and Their Applications.
Princeton University Press, 2007.
HCM06
S. D. Howard, A. R. Calderbank, and W. Moran.
The finite Heisenberg-Weyl groups in radar and communications.
EURASIP J. Appl. Sig. Process., 2006:85865, 2006.
Ho06
J. M. Howie.
Fields and Galois Theory.
Springer, 2006.
HuSa15
L. P. Hughston and S. M. Salamon.
Surveying points on the complex projective plane.
Adv. Math., 286:1017, 2016.
Iv81
I. D. Ivanović.
Geometrical description of quantal state determination.
J. Phys., A 14:3241, 1981.
JMMSW09
P. Jaming, M. Matolsci, P. Móra, F. Szöllősi, and M. Weiner.
A generalized Pauli problem and an infinite family of
MUB-triplets in dimension 6.
J. Phys., A 42:245305, 2009.
Kan12
W. K. Kantor.
MUBs inequivalence and affine planes.
J. Math. Phys., 53:032204, 2012.
Kar11
B. R. Karlsson.
Three-parameter complex Hadamard matrices of order 6.
Linear Alg. Appl., 434:247, 2011.
Kh08
M. Khatirinejad.
On Weyl–Heisenberg orbits of equiangular lines.
J. Algebra. Comb., 28:333, 2008.
Kla02
A. Klappenecker and M. Rötteler.
Beyond stabilizer codes I: Nice error bases.
IEEE Trans. Inform. Theory, 48:2392, 2002.
KlaRoe04
A. Klappenecker and M. Rötteler.
Constructions of mutually unbiased bases.
Lect. Not. Computer Science, 2948:137, 2004.
KlaRoe05
A. Klappenecker and M. Rötteler.
Mutually Unbiased Bases are complex projective 2–designs.
In Proc ISIT, page 1740. Adelaide, 2005.
Kla05
A. Klappenecker and M. Rötteler.
On the monomiality of nice error bases.
IEEE Trans. Inform. Theory, 51:1084, 2005.
KRBSS09
A. Klimov, J. L. Romero, G. Björk, and L. L. Sanchez-Soto.
Discrete phase space structure of n-qubit mutually unbiased bases.
Ann. Phys., 324:53, 2009.
Kni96
E. Knill.
Group representations, error bases and quantum codes.
preprint quant-ph/9608049.
KT94
A. I. Kostrikin and P. I. Tiep.
Orthogonal Decompositions and Integral Lattices.
de Gruyter, 1994.
KuGr15
R. Kueng and D. Gross.
Qubit stabilizer states are complex projective 3-designs.
preprint arXiv:1510.02767.
Lai12
A. Laing, T. Lawson, E. M. López, and J. L. O'Brien.
Observation of quantum interference as a function of Berry's phase
in a complex Hadamard network.
Phys. Rev. Lett., 108:260505, 2012.
Lam91
C. W. H. Lam.
The search for a finite projective plane of order 10.
Amer. Math. Mon., 98:305, 1991.
LeSe73
P. W. H. Lemmens and J. J. Seidel.
Equiangular lines.
J. Algebra, 24:494, 1973.
Ma21
H. F. MacNeish.
Euler squares.
Ann. Math., 23:221, 1921.
Ma12
M. Matolcsi.
A Fourier analytic approach to the problem of unbiased bases.
Studia Sci. Math. Hungarica, 49:482, 2012.
Mum83
D. Mumford.
Tata Lectures on Theta.
Birkhäuser, Boston, 1983.
MV15
B. Musto and J. Vicary.
Quantum Latin squares and unitary error bases.
preprint arXiv:1504.02715.
Neu02
M. Neuhauser.
An explicit construction of the metaplectic representation over a
finite field.
Journal of Lie Theory, 12:15, 2002.
Pal33
R. E. A. C. Paley.
On orthogonal matrices.
J. Math. and Phys., 12:311, 1933.
PPGB10
T. Paterek, M. Pawłowski, M. Grassl, and Č. Brukner.
On the connection between mutually unbiased bases and orthogonal
Latin squares.
Phys. Scr., T140:014031, 2010.
Pl82
V. Pless.
Introduction to the Theory of Error–Correcting Codes.
Wiley, 1982.
RLE11
P. Raynal, X. Lü, and B.-G. Englert.
Mutually unbiased bases in dimension 6: The four most distant
bases.
Phys. Rev., A 83:062303, 2011.
Re70
C. Reid.
Hilbert.
Springer, 1970.
ReWe07
M. Reimpell and R. F. Werner.
A meaner king uses biased bases.
Phys. Rev., A 75:062334, 2007.
RBSC04
J. M. Renes, R. Blume-Kohout, A. J. Scott, and C. M. Caves.
Symmetric informationally complete quantum measurements.
J. Math. Phys., 45:2171, 2004.
RS09
A. Roy and A. J. Scott.
Unitary designs and codes.
Des. Codes Cryptogr., 53:13, 2009.
Schw60
J. Schwinger.
Unitary operator bases.
Proc. Natl. Acad. Sci., 46:570, 1960.
Schw03
J. Schwinger.
Quantum Mechanics—Symbolism of Atomic Measurements.
Springer, 2003.
Sco06
A. J. Scott.
Tight informationally complete measurements.
J. Phys., A 39:13507, 2006.
ScGr10
A. J. Scott and M. Grassl.
SIC-POVMs: A new computer study.
J. Math. Phys., 51:042203, 2010.
Se86
C. Segre.
Remarques sur les transformations uniformes des courbes elliptiques
en elles-mêmes.
Math. Ann, 27:296, 1886.
SeZa84
P. D. Seymour and T. Zaslawsky.
Averaging sets: a generalization of mean values and spherical
designs.
Adv. Math., 52:213, 1984.
SHBAH12
C. Spengler, M. Huber, S. Brierley, T. Adaktylos, and B. C. Hiesmayr.
Entanglement detection via mutually unbiased bases.
Phys. Rev., A 86:022311, 2012.
Sti04
D. R. Stinson.
Combinatorial Designs: Constructions and Analysis.
Springer, 2004.
STDH07
M. A. Sustik, J. Tropp, I. S. Dhillon, and R. W. Heath.
On the existence of equiangular tight frames.
Lin. Alg. Appl., 426:619, 2007.
Syl82
J. J. Sylvester.
A word on nonions.
Johns Hopkins Univ. Circulars, 1:1, 1882.
Szo11
F. Szöllősi.
Construction, classification and parametrization of complex
Hadamard matrices.
PhD thesis, CEU, Budapest, 2011.
Szo12
F. Szöllősi.
Complex Hadamard matrices of order 6: a four-parameter family.
J. London Math. Soc., 85:616, 2012.
TaZ06
W. Tadej and K. Życzkowski.
A concise guide to complex Hadamard matrices.
Open Sys. Inf. Dyn., 13:133, 2006.
Vou04
A. Vourdas.
Quantum systems with finite Hilbert space.
Rep. Prog. Phys., 67:267, 2004.
Web15
Z. Webb.
The Clifford group forms a unitary 3-design.
preprint arXiv:1510.02769.
WD10
S. Weigert and T. Durt.
Affine constellations without mutually unbiased counterparts.
J. Phys., A43:402002, 2010.
We13
M. Weiner.
A gap for the maximum number of mutually unbiased bases.
Proc. Amer. Math. Soc., 141:1963, 2013.
Wel74
L. R. Welch.
Lower bounds on the maximum cross correlation of signals.
IEEE Trans. Inf. Theory., 20:397, 1974.
Wer01
R. F. Werner.
All teleportation and dense coding schemes.
J. Phys., A 34:7081, 2001.
Wey32
H. Weyl.
Group Theory and Quantum Mechanics.
E. P. Dutton, New York, 1932.
WoBe05
P. Wocjan and T. Beth.
New construction of mutually unbiased bases in square dimensions.
Quantum Inf. Comp., 5:93, 2005.
Wo87
W. K. Wootters.
A Wigner function formulation of finite-state quantum mechanics.
Ann. Phys. (N.Y.), 176:1, 1987.
WF89
W. K. Wootters and B. F. Fields.
Optimal state determination by mutually unbiased measurements.
Ann. Phys. (N.Y.), 191:363, 1989.
Zau99
G. Zauner.
Quantendesigns.
PhD thesis, Univ. Wien, 1999.
Zhu15
H. Zhu.
Multiqubit Clifford groups are unitary 3–designs.
preprint arXiv:1510.02619.
Zhu10
H. Zhu.
SIC-POVMs and Clifford groups in prime dimensions.
J. Phys., A 43:305305, 2010.
Zh15
H. Zhu.
Mutually unbiased bases as minimal Clifford covariant 2–designs.
Phys. Rev., A 91:060301, 2015.
Zh16
H. Zhu.
Permutation symmetry determines the discrete Wigner function.
Phys. Rev. Lett., 116:040501, 2016.
|
http://arxiv.org/abs/1701.07660v1 | 20170126114124 | Variations on branching methods for non linear PDEs | [
"Xavier Warin"
] | math.PR | [
"math.PR",
"65C05, 60J60, 60J85, 35K10"
] |
Variations on branching methods for non linear PDEs
Xavier Warin EDF R&D & FiME, Laboratoire de Finance des Marchés de l'Energie, xavier.warin@edf.fr
December 30, 2023
======================================================================================================
The branching methods developed in <cit.>, <cit.> are effective methods for solving some semi linear PDEs, and they have been shown
numerically to be able to solve some full non linear PDEs.
These methods are however restricted to small coefficients in the PDE and to small maturities.
This article shows numerically that these methods can be adapted to solve problems with longer maturities in the semi linear case by using a new derivation scheme and a nested method.
As for the case of full non linear PDEs, we introduce new schemes and we show numerically that they provide an effective alternative to the schemes previously developed.
§ INTRODUCTION
The resolution of low dimensional non linear PDEs is often achieved by some deterministic methods such as finite difference schemes, finite elements and finite volume.
Due to the curse of dimensionality, these methods cannot be used in dimension greater than three: both the computer time and the memory required are too large, even for supercomputers.
In the recent years the probabilistic community has developed some representation of semi linear PDE:
-∂_t u - ℒu = f(u, Du), u_T = g, t < T, x ∈ ℝ^d,
with ℒ the generator of a diffusion, by means of backward stochastic differential equations (BSDEs), as introduced by <cit.>.
Numerical Monte Carlo algorithms have been developed to solve efficiently these BSDE by <cit.>, <cit.>.
The representation of the following full non linear PDE:
-∂_t u - ℒu = f(u, Du, D^2u), u_T = g, t < T, x ∈ ℝ^d,
has been given by the mean of second order backward stochastic differential equation (SOBSDE) by <cit.>.
A numerical algorithm was developed by <cit.> to solve these full non linear PDEs by means of SOBSDEs.
The BSDE and SOBSDE schemes developed rely on the approximation of conditional expectation and the most effective implementation is based on regression methods as developed in <cit.>, <cit.>.
These regression methods develop an approximation of conditional expectations based on an expansion on basis functions. The size of this expansion has to grow exponentially with the dimension of the problem so we have to face again the curse of dimensionality.
Notice that the BSDE methodology could be used in dimension 4 or 5, as regressions have been successfully used in dimension 6 in <cit.> using local regression functions.
Recently a new representation of semi linear equations (<ref>), for a polynomial function f of u and Du, has been given by <cit.>: this representation
uses the automatic differentiation technique as used in <cit.>, <cit.>, <cit.>, and <cit.>.
The authors have shown that the representation gives a finite variance estimator only for small maturities or small non linearities, and numerical examples up to dimension 10 are given. Besides, they have shown that the given scheme using Malliavin weights cannot be used to solve the full non linear equation (<ref>).
<cit.> have introduced a re-normalization technique improving numerically the convergence of the scheme by diminishing the variance observed in the semi linear case. Besides, the authors have introduced a scheme to solve the full non linear equation (<ref>). Without a proof of convergence, they have shown numerically that the developed scheme is effective.
The aim of the paper is to provide some numerical variation on the algorithm developed in <cit.>.
In a first part we will show, with simple ideas, that it is possible to deal with longer maturities than the ones possible with the initial algorithm.
In a second part we give some alternative schemes to the one proposed in <cit.> and, testing them on some numerical examples, we show that they are superior to the scheme previously developed.
In the numerical results presented in this article, all errors are estimated as the observed standard deviation divided by the square root of the number of particles used, and the log of these errors is plotted as a function of the log of the number of particles used. As our methods are pure Monte Carlo methods, we expect to obtain lines with slope -1/2 when the numerical variance is bounded.
§ THE SEMI LINEAR CASE
Let σ_0 ∈ ℝ^{d×d} be some constant non-degenerate matrix, μ ∈ ℝ^d some constant vector, f: [0,T] × ℝ^d × ℝ × ℝ^d → ℝ
and g: ℝ^d → ℝ bounded Lipschitz functions;
we consider the semi linear parabolic PDE:
∂_t u + 1/2 σ_0 σ_0^⊤ : D^2 u + μ · Du + f(·, u, Du) = 0,
on [0,T) × ℝ^d,
with terminal condition u(T, ·) = g(·), where A : B := Tr(AB^⊤) for two matrices A, B ∈ ℝ^{d×d}.
When f is a polynomial in (u, Du) of the form
f(t,x,y,z)
= ∑_{ℓ = (ℓ_0, ℓ_1, ⋯, ℓ_m) ∈ L} c_ℓ(t,x) y^{ℓ_0} ∏_{i=1}^m (b_i · z)^{ℓ_i},
for some m ≥ 1, L ⊂ ℕ^{1+m},
where (b_i)_{i=1,…,m} is a sequence of ℝ^d-valued bounded continuous functions defined on [0,T] × ℝ^d,
and (c_ℓ)_{ℓ ∈ L} is a sequence of bounded continuous functions defined on [0,T] × ℝ^d,
<cit.> obtained a probabilistic representation of the above PDE by branching diffusion processes under some technical conditions.
In the sequel, we simplify the setting by taking f as a constant (in u, Du) plus a monomial in u and the (b_i · Du), i = 1, …, m:
f(t,x,y,z)
=
h(t,x) + c(t,x) y^{ℓ_0} ∏_{i=1}^m (b_i · z)^{ℓ_i},
for some m ≥ 1,
where (b_i)_{i=1,…,m} is a sequence of ℝ^d-valued bounded continuous functions defined on [0,T] × ℝ^d, (ℓ_i)_{i=0,…,m} ∈ ℕ^{m+1} with ∑_{i=0,…,m} ℓ_i > 0,
and c is a bounded continuous function defined on [0,T] × ℝ^d. We note L = ∑_{i=0}^m ℓ_i.
The case of a general polynomial f only complicates the notation: it can simply be treated as in <cit.> by introducing some probability mass function (p_ℓ)_{ℓ ∈ L} (i.e. p_ℓ ≥ 0 and ∑_{ℓ ∈ L} p_ℓ = 1) used to select which monomial to consider during the branching procedure. Another approach can be used: instead of sampling the monomial to use, it is possible to consider successively all the terms of f, but this doesn't give a representation as nice as the one in <cit.>.
§.§ Variation on the original scheme of <cit.>
In this section we present the original scheme of <cit.> and explain how to diminish the variance and increase the maturities of the problem.
§.§.§ The branching process
Let us first introduce a branching process with arrival times of density function ρ. At its arrival time, a particle branches into L offspring.
We introduce a sequence of i.i.d. positive random variables (τ^k)_{k = (k_1, ⋯, k_{n-1}, k_n) ∈ ℕ^n, n ≥ 1}, with all the values k_i ∈ [1, L].
We construct an age-dependent branching process using the following procedure:
* We start from a particle marked by 0, indexed by (1), of generation 1,
whose arrival time is given by T_(1) := τ^(1)∧ T.
* Let k = (k_1, ⋯, k_{n-1}, k_n) ∈ ℕ^n be a particle of generation n, with arrival time T_k, that branches into L offspring particles noted (k_1, ⋯, k_{n-1}, k_n, i) for i = 1, ..., L. We define the set of its offspring particles by
S(k) := {(k_1, ⋯, k_n, 1), ⋯, (k_1, ⋯, k_n, L) },
We first mark the first ℓ_0 particles by 0, the next ℓ_1 by 1, and so on, so that each particle has a mark i, for i = 0, ⋯, m.
* For a particle k = (k_1, ⋯, k_n, k_n+1) of generation n+1,
we denote by k- := (k_1, ⋯, k_n) the “parent” particle of k,
and the arrival time of k is given by T_k := (T_k- + τ^k) ∧ T. Let us denote Δ T_k = T_k -T_k-.
* In particular, for a particle k = (k_1, ⋯, k_n) of generation n,
T_{k-} is its birth time and also the arrival time of k-.
Moreover, for the initial particle k = (1), one has k- = ∅, and T_∅ = 0.
We denote further by θ_k the mark of k, and
𝒦^n_t := {k of generation n such that T_{k-} ≤ t < T_k}, for t ∈ [0,T),
𝒦^n_T := {k of generation n such that T_k = T},
and also
𝒦̄^n_t := ∪_{s ≤ t} 𝒦^n_s,
𝒦_t := ∪_{n ≥ 1} 𝒦^n_t
and 𝒦̄_t := ∪_{n ≥ 1} 𝒦̄^n_t.
Clearly, 𝒦_t (resp. 𝒦^n_t) denotes the set of all living particles (resp. of generation n) in the system at time t,
and 𝒦̄_t (resp. 𝒦̄^n_t) denotes the set of all particles (resp. of generation n) being alive at or before time t.
We next equip each particle with a Brownian motion in order to define a branching Brownian motion.
Let (Ŵ^k)_{k = (k_1, ⋯, k_n) ∈ ℕ^n, n ≥ 1} be a sequence of independent d-dimensional Brownian motions, which is also independent of (τ^k)_{k = (k_1, ⋯, k_n) ∈ ℕ^n, n ≥ 1}.
Define W^{(1)}_t = Ŵ^{(1)}_t for all t ∈ [0, T_{(1)}] and then, for each k = (k_1, ⋯, k_n) ∈ 𝒦̄_T ∖ {(1)},
define
W^k_t := W^{k-}_{T_{k-}} + Ŵ^k_{t - T_{k-}}, t ∈ [T_{k-}, T_k].
Then (W^k_·)_{k ∈ 𝒦̄_T} is a branching Brownian motion.
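To make this construction concrete, the following minimal Python sketch (ours, with illustrative parameters; it is not the implementation used in the paper) simulates one such tree: each particle carries its label, birth and arrival times, and its own Brownian increment, and branches into L offspring whenever its arrival time falls strictly before T.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_tree(T=1.0, L=2, kappa=0.5, theta=2.5, d=1):
        """Simulate one branching tree; returns a list of tuples
        (label k, birth time T_{k-}, arrival time T_k, increment of W-hat^k)."""
        particles, stack = [], [((1,), 0.0)]
        while stack:
            label, t_birth = stack.pop()
            tau = rng.gamma(kappa, theta)                  # lifetime tau^k
            t_arrival = min(t_birth + tau, T)
            dW = rng.normal(0.0, np.sqrt(t_arrival - t_birth), size=d)
            particles.append((label, t_birth, t_arrival, dW))
            if t_birth + tau < T:                          # branches before T
                stack.extend((label + (i,), t_arrival) for i in range(1, L + 1))
        return particles

    tree = simulate_tree()
    alive_at_T = [p for p in tree if p[2] == 1.0]          # the set of particles in K_T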
§.§.§ The original algorithm
Let us denote F̄(t) := ∫_t^∞ ρ(s) ds.
Denoting X^k_t := x + μt + σ_0 W^k_t for all k ∈ 𝒦̄_T and t ∈ [T_{k-}, T_k], and
by 𝔼_{t,x} the expectation operator conditional on the starting data X_t = x at time t,
we obtain from the Feynman-Kac formula the representation of the solution u of equation (<ref>) as:
u(0,x)
=
𝔼_{0,x}[ F̄(T) ( g(X_T)/F̄(T) ) + ∫_0^T ( f(u,Du)(t,X_t)/ρ(t) ) ρ(t) dt ]
=
𝔼_{0,x}[ ϕ(T_{(1)}, X^{(1)}_{T_{(1)}}) ],
where T_{(1)} := τ^{(1)} ∧ T, and
ϕ(t,y)
:= ( 𝟙_{t ≥ T}/F̄(T) ) g(y)
+ ( 𝟙_{t < T}/ρ(t) ) ( h + c u^{ℓ_0} ∏_{i=1}^m (b_i · Du)^{ℓ_i} )(t,y).
On the event {T_{(1)} < T}, using the independence of the (τ^k, W^k), we are left to calculate
[c u^{ℓ_0} ∏_{i=1}^m (b_i · Du)^{ℓ_i}](T_{(1)}, X_{T_{(1)}}) = c ∏_{j=1}^{ℓ_0} 𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,j)}, X^{(1,j)}_{T_{(1,j)}}) ]
∏_{i=1}^m ( b_i(T_{(1)}, X_{T_{(1)}}) · D𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] )^{ℓ_i}.
Using differentiation with respect to the heat kernel, i.e. the marginal density of the Brownian motion, we get:
[c u^{ℓ_0} ∏_{i=1}^m (b_i · Du)^{ℓ_i}](T_{(1)}, X_{T_{(1)}}) = c ∏_{j=1}^{ℓ_0} 𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,j)}, X^{(1,j)}_{T_{(1,j)}}) ]
∏_{i=1}^m ( b_i(T_{(1)}, X_{T_{(1)}}) · 𝔼_{T_{(1)}, X_{T_{(1)}}}[ ( (σ_0^⊤)^{-1} Ŵ^{(1,p)}_{ΔT_{(1,p)}} / ΔT_{(1,p)} ) ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] )^{ℓ_i}.
Using equations (<ref>) and (<ref>) recursively and the tower property, we get the following
representation
u(0,x)
= 𝔼_{0,x}[ 𝒲_{(1)} ]
where 𝒲_{(1)} is given by the backward recursion:
let 𝒲_k := ( g(X^k_T) - g(X^k_{T_{k-}}) 𝟙_{θ_k ≠ 0} ) / F̄(ΔT_k) for every k ∈ 𝒦_T, then let
𝒲_k
:= ( 1/ρ(ΔT_k) ) ( h(T_k, X^k_{T_k}) + c(T_k, X^k_{T_k})
∏_{k̃ ∈ S(k)} 𝒲_{k̃} 𝒟_{k̃} ),
k ∈ 𝒦̄_T ∖ 𝒦_T,
where
𝒟_k
= 𝟙_{θ_k = 0} + 𝟙_{θ_k ≠ 0} b_{θ_k}(T_{k-}, X^k_{T_{k-}})
· (σ_0^⊤)^{-1} Ŵ^k_{ΔT_k} / ΔT_k,
and we have used that 𝔼_{0,x}[ g(X^k_{T_{k-}}) b_{θ_k}(T_{k-}, X^k_{T_{k-}}) · (σ_0^⊤)^{-1} Ŵ^k_{ΔT_k} ] = 0.
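For concreteness, here is a self-contained Python sketch (ours) of this backward representation in the simplest purely nonlinear case f(y) = c y² (d = 1, ℓ_0 = 2, m = 0, h = 0), for which all marks are 0 and no Malliavin weight is needed; the recursion below mirrors the definition of 𝒲_k, with an exponential law for the branching dates.

    import numpy as np

    rng = np.random.default_rng(1)
    T, c, lam = 1.0, 0.1, 0.5                  # maturity, nonlinearity, rate of tau
    g = lambda x: np.cos(x)                    # terminal condition

    def W_est(t, x):
        """One sample of the branching estimator started from (t, x)."""
        tau = rng.exponential(1.0 / lam)
        if tau >= T - t:                       # particle alive at T: weight 1/Fbar
            xT = x + rng.normal(0.0, np.sqrt(T - t))
            return g(xT) / np.exp(-lam * (T - t))
        x_new = x + rng.normal(0.0, np.sqrt(tau))
        rho = lam * np.exp(-lam * tau)         # density of tau at the branching date
        # L = 2 offspring: product of two independent recursive estimates
        return (c / rho) * W_est(t + tau, x_new) * W_est(t + tau, x_new)

    u0 = np.mean([W_est(0.0, 0.5) for _ in range(100000)])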
This backward representation is slightly different from the elegant representation introduced in <cit.>.
Clearly, in our case the variance of the method will be lower than with the representation in <cit.> for a similar computational cost.
In the case where the operator f is linear and a function of the gradient (ℓ_0 = 0, m = 1 and ℓ_1 = 1), using the arguments in <cit.>, it can easily be seen, by conditioning with respect to the number of branchings, that equation (<ref>) has finite variance if 1/(x ρ(x)²) = O(x^α) as x → 0 with α ≥ 0.
When τ follows for example a gamma law with parameters κ and θ, the finite variance is obtained as soon as κ ≤ 0.5, for PDE coefficients and maturities small enough.
In the non linear case, <cit.> have shown that the variance is in fact finite for maturities and coefficients small enough as soon as κ < 0.5, but numerical results show that κ = 0.5 is optimal in terms of efficiency: for a given θ, the numerical variance is nearly the same for values of κ between 0.4 and 0.5, but a higher κ value limits the number of branchings, thus meaning a smaller computational cost.
§.§.§ Variation on the original scheme
As indicated in the introduction, the method is restricted to small maturities or small non linearities.
For a given non linearity, we are interested in adapting the methodology in order to be able to treat longer maturities. A simple idea consists in noting that the Monte Carlo method is applied by sampling each conditional expectation 𝔼_{t,x}, t > 0, appearing in equation (<ref>) only once. Using nested Monte Carlo, that is, sampling each term of equation (<ref>) more than once,
one can expect a reduction in the observed variance. A nested method of order n is defined as a method using n samples to estimate each function u or Du at each branching.
Of course the computational time grows exponentially with the number of samples taken; for example,
trying to use a gamma law with a non linearity of Burgers type, u(b·Du), with κ = 0.5 is very costly: due to the high values of the density ρ near 0, trajectories can have many branchings.
Some different strategies have been tested to be able to use this technique:
* A first possibility consists in re-sampling more at the beginning of the resolution and decreasing the number of samples as time goes by or as the number of branchings increases. This methodology works slightly better than re-sampling with a constant number of particles, but it has to be adapted to each maturity and each case, so it has been given up.
* Another observation is that the gamma law is only necessary to treat the gradient term, so it is possible to use two laws: a first one, an exponential law, is used to estimate the u terms, while a gamma law is used for the Du terms. This second technique is the most effective and is used for the results obtained in this section.
For a given dimension d, we take σ_0 = (1/√d) 𝕀_d, μ = 0,
f(t,x,y,z) = h(t,x) + y (b · z),
where b := (0.2/d) (1+1/d, 1+2/d, ⋯, 2) and
h(t,x)
:=
cos(x_1 + ⋯ + x_d)
( α + σ_0²/2 + c sin(x_1 + ⋯ + x_d) ((3d+1)/(2d)) e^{α(T-t)} )
e^{α(T-t)}.
With terminal condition g(x) = cos(x_1 + ⋯ + x_d),
the explicit solution of the semi linear PDE (<ref>) is given by
u(t,x) = cos(x_1 + ⋯ + x_d) e^{α(T-t)}.
Our goal is to estimate u at t = 0, x = 0.5 𝟙, where 𝟙 := (1, ⋯, 1).
This test case will be noted test A in the sequel.
We use the nested algorithm with two distributions for τ (see the sampling sketch below):
* an exponential law with density ρ(s) = λ e^{-λs}, with λ = 0.4, to calculate the u terms,
* a gamma law with density ρ(s) = ( s^{κ-1} e^{-s/θ} / (Γ(κ) θ^κ) ) 𝟙_{s > 0}, where Γ(κ) := ∫_0^∞ s^{κ-1} e^{-s} ds, and with parameters κ = 0.5, 1/θ = 0.4, to calculate the Du terms.
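Both laws are directly available in numpy; a minimal sampling sketch (parameter names are ours):

    import numpy as np
    rng = np.random.default_rng(2)

    lam, kappa, theta = 0.4, 0.5, 1.0 / 0.4
    tau_u  = rng.exponential(1.0 / lam)   # branching date when estimating a u term
    tau_Du = rng.gamma(kappa, theta)      # branching date when estimating a Du term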
We first give on figures <ref>, <ref> and <ref> the results obtained for test A for different maturities and dimension d = 4, for which the analytical solution is -0.508283. For each maturity we plot:
* the solution obtained while increasing the number of Monte Carlo scenarios used,
* the error calculated as explained in the introduction.
The "Nested n" curves stand for the curves using the nested method of order n, so the "Nested 1" curve stands for the original method.
On figure <ref>, for maturity 2.5, the error observed with the original method (Nested 1) is around 1000, so it has not been plotted.
Because of the number of branchings due to the gamma law, it seems difficult to use a nested method of order n > 2 for long maturities: the time needed explodes.
But clearly the nested method permits obtaining an accurate solution for longer maturities.
For a maturity of 2 we also give the results obtained in dimension 6 on figure <ref>, for which the analytical solution is -1.4769: once again the original method fails to converge while the nested one gives good results.
§.§ Adaptation of the original branching to the re-normalization technique
As introduced in <cit.>, we present a modification of the original branching process that lets us use exponential laws for the branching dates
to treat the Du terms in the method previously described.
Recall that 𝒦^1_T = {(1)}; we introduce an associated ghost particle, denoted by (1^1),
and denote 𝒦̂^1_T := {(1), (1^1)}.
Next, given the collection 𝒦̂^n_T of all particles (as well as ghost particles) of generation n,
we define the collection 𝒦̂^{n+1}_T as follows.
For every k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T, we denote by o(k) = (k̂_1, ⋯, k̂_n) its original particle,
where k̂_i := j when k_i = j or j^1.
Further, when k = (k_1, ⋯, k_n) is such that k_n ∈ ℕ, we denote k^1 := (k_1, ⋯, k_{n-1}, k_n^1).
The mark of k ∈ 𝒦̂^n_T will be the same as that of its original particle o(k), i.e. θ_k := θ_{o(k)};
and T_k := T_{o(k)}, ΔT_k := ΔT_{o(k)} and τ^k := τ^{o(k)}.
Note that a particle k ∈ 𝒦̂^n_T is alive at T exactly when o(k) ∈ 𝒦^n_T.
For every k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T such that o(k) branches before T, we still define the set of its offspring particles by
S(k) := {(k_1, ⋯, k_n, 1), ⋯, (k_1, ⋯, k_n, L)},
and the set of ghost offspring particles by
S^1(k)
:= {(k_1, ⋯, k_n, 1^1), ⋯, (k_1, ⋯, k_n, L^1)}.
Then the collection 𝒦̂^{n+1}_T of all particles (and ghost particles) of generation n+1 is
𝒦̂^{n+1}_T
:= ∪_{k ∈ 𝒦̂^n_T : o(k) ∉ 𝒦^n_T} ( S(k) ∪ S^1(k) ).
Define also
𝒦̂_T := ∪_{n ≥ 1} 𝒦̂^n_T.
§.§.§ The original re-normalization technique
We next equip each particle with a Brownian motion in order to define a branching Brownian motion.
Further, let W^∅_0 := 0, and for every k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T, let
W^k_s
:=
W^{k-}_{T_{k-}} + 𝟙_{k_n ∈ ℕ} Ŵ^{o(k)}_{s - T_{k-}},
X^k_s := μs + σ_0 W^k_s,
∀ s ∈ [T_{k-}, T_k].
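Concretely, a ghost shares the state of its parent and simply skips the Brownian increment of its original particle; a one-step sketch (ours):

    import numpy as np
    rng = np.random.default_rng(3)

    d, dT = 4, 0.3
    W_parent = np.zeros(d)
    dW = rng.normal(0.0, np.sqrt(dT), size=d)
    W_k  = W_parent + dW          # original particle, last index k_n in N
    W_k1 = W_parent.copy()        # ghost k^1: increment frozen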
On figure <ref>, we give the original Galton-Watson tree and the ghost particles associated.
The initial equation (<ref>) remains unchanged (first step of the algorithm) but equation (<ref>) is modified
by replacing the term
𝔼_{T_{(1)}, X_{T_{(1)}}}[ ( (σ_0^⊤)^{-1} Ŵ^{(1,p)}_{ΔT_{(1,p)}} / ΔT_{(1,p)} ) ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ]
by
𝔼_{T_{(1)}, X_{T_{(1)}}}[ ( (σ_0^⊤)^{-1} Ŵ^{(1,p)}_{ΔT_{(1,p)}} / ΔT_{(1,p)} ) ( ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^1)}_{T_{(1,p)}}) ) ].
Notice that since W^{(1,p^1)} has been obtained by (<ref>), Ŵ^{(1,p)}_{ΔT_{(1,p)}} and ϕ(T_{(1,p)}, X^{(1,p^1)}_{T_{(1,p)}}) are orthogonal, so that adding the second term acts as a control variate.
Recursively using the modified version of equation (<ref>) induced by the use of (<ref>), <cit.> defined the re-normalized estimator by a backward induction:
let 𝒲_k := g(X^k_T)/F̄(ΔT_k) for every k ∈ 𝒦̂_T alive at T (i.e. such that o(k) ∈ 𝒦_T), then let
𝒲_k
:=
( 1/ρ(ΔT_k) ) ( h(T_k, X^k_{T_k}) + c(T_k, X^k_{T_k})
∏_{k̃ ∈ S(k)} ( 𝒲_{k̃} - 𝒲_{k̃^1} 𝟙_{θ(k̃) ≠ 0} ) 𝒟_{k̃} ),
for the other particles k ∈ 𝒦̂_T,
where the weights 𝒟 are given by equation (<ref>),
so we have
u(0,x)
= 𝔼_{0,x}[ 𝒲_{(1)} ].
As explained in section <ref>, equation (<ref>) used in representation (<ref>) forces us to take laws for the branching dates with a high probability of low values, which leads to a high number of recursions defined by equation (<ref>).
Besides, such laws requiring a rejection algorithm to simulate, as gamma laws, are very costly to generate.
The use of (<ref>) permits us to use exponential laws, which are very cheap to simulate and have a low probability of small values.
Indeed it can easily be seen in the linear case (f function of the gradient with ℓ_0 = 0, m = 1 and ℓ_1 = 1), by conditioning with respect to the number of branchings, that the variance is bounded for small maturities
and coefficients if
𝔼_{0,x}[ ( 𝒲_k - 𝒲_{k^1} 𝟙_{θ(k) ≠ 0} )² ( b_{θ_k}(T_{k-}, X^k_{T_{k-}})
· (σ_0^⊤)^{-1} Ŵ^{o(k)}_{ΔT_k} )² / (ΔT_k)² ] < ∞.
By construction of X^{k^1}_t and the regularity of g, it is easily seen that for small time steps ΔT_k, 𝔼_{0,x,ΔT_k}[ ( 𝒲_k - 𝒲_{k^1} )² ] = O(ΔT_k) as ΔT_k → 0, and (<ref>) is satisfied for every density ρ.
§.§.§ Re-normalization techniques and antithetic
We give a version of the re-normalization technique using antithetic variables.
Equation (<ref>) is modified into:
W^k_s
:=
W^{k-}_{T_{k-}} + 𝟙_{k_n ∈ ℕ} Ŵ^{o(k)}_{s - T_{k-}} - 𝟙_{k_n ∉ ℕ} Ŵ^{o(k)}_{s - T_{k-}},
X^k_s := μs + σ_0 W^k_s,
∀ s ∈ [T_{k-}, T_k],
for every k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T, so that a ghost now follows the antithetic path of its original particle.
Then equation (<ref>) is modified by:
* first, replacing the term taking into account the powers of u, ϕ(T_{(1,j)}, X^{(1,j)}_{T_{(1,j)}}),
by
(1/2) ( ϕ(T_{(1,j)}, X^{(1,j)}_{T_{(1,j)}}) +
ϕ(T_{(1,j)}, X^{(1,j^1)}_{T_{(1,j)}}) ),
* and replacing the term taking into account the gradient,
𝔼_{T_{(1)}, X_{T_{(1)}}}[ ( (σ_0^⊤)^{-1} Ŵ^{(1,p)}_{ΔT_{(1,p)}} / ΔT_{(1,p)} ) ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ],
by
𝔼_{T_{(1)}, X_{T_{(1)}}}[ ( (σ_0^⊤)^{-1} Ŵ^{(1,p)}_{ΔT_{(1,p)}} / ΔT_{(1,p)} ) (1/2) ( ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^1)}_{T_{(1,p)}}) ) ].
Notice that with this version the variance of the gradient term is finite by the same argument as for the original re-normalized version of subsection <ref>.
By backward induction we get the re-normalized antithetic estimator, modifying (<ref>) into:
𝒲_k
:=
( 1/ρ(ΔT_k) ) ( h(T_k, X^k_{T_k}) + c(T_k, X^k_{T_k})
∏_{k̃ ∈ S(k)} (1/2) ( 𝒲_{k̃} - 𝒲_{k̃^1} 𝟙_{θ(k̃) ≠ 0} + 𝒲_{k̃^1} 𝟙_{θ(k̃) = 0} ) 𝒟_{k̃} ),
for the other particles k ∈ 𝒦̂_T,
where the weights are given by equation (<ref>).
Then we have
u(0,x)
= 𝔼_{0,x}[ 𝒲_{(1)} ].
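The effect of the ghost terms on the gradient weight can be checked on a one-step example: both estimators below are unbiased for d/dx 𝔼[g(x + W_t)], but the re-normalized antithetic combination keeps a bounded standard deviation as t → 0 (a sketch, ours):

    import numpy as np
    rng = np.random.default_rng(4)

    g = lambda x: np.cos(x)
    x, t, n = 0.5, 0.01, 200000
    W = rng.normal(0.0, np.sqrt(t), size=n)

    plain = W / t * g(x + W)                        # raw Malliavin weight
    antit = W / t * 0.5 * (g(x + W) - g(x - W))     # with the antithetic ghost
    print(plain.mean(), plain.std())                # unbiased, std ~ O(1/sqrt(t))
    print(antit.mean(), antit.std())                # same mean, bounded std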
§.§.§ Numerical result for semi linear with re-normalization
We apply our nested algorithm to the original re-normalization technique and to the re-normalization technique with antithetic variables on two test cases.
First we give some results for test case A in dimension 4.
We give the Monte Carlo error obtained by the nested method on figure <ref>.
For the maturity T = 3, without nesting, the error of the original re-normalization technique has an order of magnitude of 2000, so the curve is not given.
For the maturity T = 4, the nested original re-normalization technique of order 2 doesn't seem to converge.
As the maturity increases, nesting with a higher order becomes necessary.
Notice that with the re-normalization it is possible to use a nested method of high order because of the small number of branchings involved.
For example, for T = 2, for an accuracy of 0.0004, in dimension d = 4:
* the original method of section <ref> with a nested method of order 2 achieves an accuracy of 0.0004 for a CPU time of 1500 seconds using 28 cores,
* the re-normalized version of section <ref> with a nested method of order 4 reaches the same accuracy in 1800 seconds,
* the re-normalized version with antithetics of section <ref> without nesting reaches the same accuracy in 11 seconds.
For the same test case A we plot the error in dimension 6 on figure <ref> to show that the method converges in high dimension.
Besides, on figure <ref>, we show that the derivative is accurately calculated.
We then use a second test case B:
for a given dimension d, we take σ_0 = (1/√d) 𝕀_d, μ = 0,
f(t,x,y,z) = (0.1/d) (𝟙 · z)²,
with terminal condition g(x) = cos(x_1 + ⋯ + x_d), where 𝟙 := (1, ⋯, 1).
This test case cannot be solved by the nested method without re-normalization due to the high cost induced by the potentially high number of branchings. We give the results obtained for case B by the re-normalization methods of sections <ref> and <ref> in dimension 4 on figure <ref>.
At last we give the results obtained in dimension 6 for T = 1.5 and T = 3 on figure <ref>.
The nested method with re-normalization and antithetic variables appears to be the most effective and permits solving semi linear equations with quite long maturities.
The re-normalization technique is however far more memory consuming than the original scheme of section <ref>; this memory cost explodes for very long maturities.
The nested version of the original scheme of section <ref> isn't affected by these memory problems, but its computational time explodes for longer maturities.
§.§ Extension to variable coefficients
In the case of time and space dependent coefficients μ and σ_0 of the PDE, it is possible to use the method consisting in "freezing" the coefficients, first proposed in <cit.> for a non-constant μ and extended to the general case in <cit.>.
This method increases the variance of the estimator; therefore, for treating long maturities, it is more efficient to use an Euler scheme to take into account the variation of the coefficients.
Introducing an Euler time step δt, between the dates T_{k-} and T_k the SDE is discretized as:
X^k_{T_{k-} + iδt} = X^k_{T_{k-} + (i-1)δt} + μ(T_{k-} + (i-1)δt, X^k_{T_{k-} + (i-1)δt}) δt +
σ_0(T_{k-} + (i-1)δt, X^k_{T_{k-} + (i-1)δt}) Ŵ^{k,i}_{δt},
i = 1, ..., N,
X^k_{T_k} = X^k_{T_{k-} + Nδt} + μ(T_{k-} + Nδt, X^k_{T_{k-} + Nδt}) (ΔT_k - Nδt) +
σ_0(T_{k-} + Nδt, X^k_{T_{k-} + Nδt}) Ŵ^{k,N+1}_{ΔT_k - Nδt},
where N = ⌊ΔT_k/δt⌋, and
(Ŵ^{k,i})_{k = (k_1, ⋯, k_n) ∈ ℕ^n, n ≥ 1, i ≥ 1} is a sequence of independent d-dimensional Brownian motions.
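A direct transcription of this discretization (ours; μ and σ_0 below are arbitrary illustrative functions):

    import numpy as np
    rng = np.random.default_rng(5)

    def euler_between(x, t_prev, t_next, mu, sig, dt=0.05, d=2):
        """Euler scheme for X^k on [T_{k-}, T_k] with time step dt."""
        t = t_prev
        N = int(np.floor((t_next - t_prev) / dt))
        for _ in range(N):
            x = x + mu(t, x) * dt + sig(t, x) @ rng.normal(0.0, np.sqrt(dt), size=d)
            t += dt
        rem = t_next - t                       # last, possibly shorter, sub-step
        if rem > 1e-12:
            x = x + mu(t, x) * rem + sig(t, x) @ rng.normal(0.0, np.sqrt(rem), size=d)
        return x

    mu  = lambda t, x: 0.1 * np.cos(x)                      # illustrative drift
    sig = lambda t, x: (0.5 + 0.1 * np.sin(t)) * np.eye(2)  # illustrative volatility
    xT = euler_between(np.full(2, 0.5), 0.0, 0.7, mu, sig)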
Using an integration by parts on the first time step, in the original scheme of section <ref>, the gradient term in equation (<ref>) is replaced by
𝔼_{T_{(1)}, X_{T_{(1)}}}[ ( σ_0(T_{(1)}, X^{(1)}_{T_{(1)}})^⊤ )^{-1} ( Ŵ^{(1,p),1}_{min(δt, ΔT_{(1,p)})} / min(δt, ΔT_{(1,p)}) ) ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ].
In the case of the re-normalization technique of section <ref>, the ghost is obtained from the original particle by removing the part associated with the first Brownian increment.
Then for every k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T, the particle dynamics is given by
X^k_{T_{k-} + δt} := X^k_{T_{k-}} + μ(T_{k-}, X^k_{T_{k-}}) δt +
𝟙_{k_n ∈ ℕ} σ_0(T_{k-}, X^k_{T_{k-}}) Ŵ^{k,1}_{δt},
X^k_{T_{k-} + iδt} = X^k_{T_{k-} + (i-1)δt} + μ(T_{k-} + (i-1)δt, X^k_{T_{k-} + (i-1)δt}) δt +
σ_0(T_{k-} + (i-1)δt, X^k_{T_{k-} + (i-1)δt}) Ŵ^{k,i}_{δt},
i = 2, ..., N,
X^k_{T_k} = X^k_{T_{k-} + Nδt} + μ(T_{k-} + Nδt, X^k_{T_{k-} + Nδt}) (ΔT_k - Nδt) +
σ_0(T_{k-} + Nδt, X^k_{T_{k-} + Nδt}) Ŵ^{k,N+1}_{ΔT_k - Nδt},
if N > 0, and
X^k_{T_k} := X^k_{T_{k-}} + μ(T_{k-}, X^k_{T_{k-}}) ΔT_k +
𝟙_{k_n ∈ ℕ} σ_0(T_{k-}, X^k_{T_{k-}}) Ŵ^{k,1}_{ΔT_k}
otherwise.
The re-normalization technique of section <ref> then leads to the following estimation of the gradient in equation (<ref>):
𝔼_{T_{(1)}, X_{T_{(1)}}}[ ( σ_0(T_{(1)}, X^{(1)}_{T_{(1)}})^⊤ )^{-1} ( Ŵ^{(1,p),1}_{min(δt, ΔT_{(1,p)})} / min(δt, ΔT_{(1,p)}) ) ( ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^1)}_{T_{(1,p)}}) ) ].
The re-normalization described for the technique of section <ref> can be straightforwardly adapted to the re-normalization scheme with antithetics of
section <ref>.
Of course, using equation (<ref>) we expect the variance of the scheme to degrade as the time step decreases, and we expect the scheme
(<ref>) to correct this behaviour.
On figure <ref> we give the error estimations obtained by the original scheme and by the re-normalization technique (with antithetics of section <ref>) depending on the time step, for a case with a
Burgers non linearity in dimension 4 with 1e6 particles: as we refine the time step, the scheme (<ref>) becomes unusable while the scheme (<ref>)
gives stable results.
§ THE FULL NON LINEAR CASE
In order to treat the full non linear case, i.e. with a second order derivative D²u in f, the re-normalization technique is necessary, as no distribution can meet the finite variance requirement,
even when f is linear in D²u (see <cit.>).
Suppose that the f function is as follows:
f(t,x,y,z,γ)
:=
h(t,x) +
c(t,x) y^{ℓ_0} ∏_{i=1}^m (b_i · z)^{ℓ_i}
∏_{i=m+1}^{2m} (a_i : γ)^{ℓ_i},
for a given (ℓ_0, ℓ_1, ⋯, ℓ_m, ℓ_{m+1}, ⋯, ℓ_{2m}) ∈ ℕ^{1+2m},
m ≥ 1, where the b_i : [0,T] × ℝ^d → ℝ^d, i = 1, ⋯, m, are bounded continuous,
h : [0,T] × ℝ^d → ℝ is a bounded continuous function, and the a_i : [0,T] × ℝ^d → ℝ^{d×d}, i = m+1, ⋯, 2m, are bounded continuous functions.
We note L = ∑_{i=0}^{2m} ℓ_i.
We use an algorithm similar to the one proposed in section <ref>.
Instead of approximating f using representation (<ref>), we have to take into account the D²u term:
[c u^{ℓ_0} ∏_{i=1}^m (b_i · Du)^{ℓ_i} ∏_{i=m+1}^{2m} (a_i : D²u)^{ℓ_i}](T_{(1)}, X_{T_{(1)}}) =
c ∏_{j=1}^{ℓ_0} 𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,j)}, X^{(1,j)}_{T_{(1,j)}}) ]
∏_{i=1}^m ( b_i(T_{(1)}, X_{T_{(1)}}) · D𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] )^{ℓ_i}
∏_{i=m+1}^{2m} ( a_i : D²𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] )^{ℓ_i}.
The terms
𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,j)}, X^{(1,j)}_{T_{(1,j)}}) ]
and
( b_i(T_{(1)}, X_{T_{(1)}}) · D𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] )
are approximated by the different schemes previously seen.
It remains to give an approximation of the ( a_i : D²𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] ) term.
§.§ Ghost particles of dimension q
We extend the definition of the ghost tree given in <cit.> to the full non linear case.
For the particle (1) of generation n = 1, we introduce q associated ghost particles denoted (1^i), i = 1, ..., q.
Let 𝒦̂^1_T := {(1), (1^1), ..., (1^q)}.
Then, given the collection 𝒦̂^n_T of all particles and ghost particles of generation n,
we define 𝒦̂^{n+1}_T as follows.
Given k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T, we denote by o(k) its original particle; and when k_n ∈ ℕ, we denote k^i := (k_1, ⋯, k_{n-1}, k_n^i) for i ∈ [1,q], and i is called the order of k^i. The function κ gives the order of a particle k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T:
κ(k) = i, if k_n = p^i with p ∈ ℕ,
κ(k) = 0, if k_n = p with p ∈ ℕ.
The variables T_k as well as the mark θ_k are inherited from the original particle o(k). Similarly ΔT_k = ΔT_{o(k)}.
A particle k ∈ 𝒦̂^n_T is alive at T exactly when o(k) ∈ 𝒦^n_T.
For every k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T such that o(k) branches before T,
we define the collection of its offspring particles by
S(k) := {(k_1, ⋯, k_n, 1), ⋯, (k_1, ⋯, k_n, L)},
and, generalizing the definition in section <ref>, we introduce q collections of offspring ghost particles:
S^i(k)
:= {(k_1, ⋯, k_n, 1^i), ⋯, (k_1, ⋯, k_n, L^i)}, i = 1, ..., q.
Then the collection 𝒦̂^{n+1}_T of all particles and ghost particles of generation n+1 is given by
𝒦̂^{n+1}_T
:= ∪_{k ∈ 𝒦̂^n_T : o(k) ∉ 𝒦^n_T} ( S(k) ∪ S^1(k) ∪ ... ∪ S^q(k) ).
Define also 𝒦̂_T := ∪_{n ≥ 1} 𝒦̂^n_T.
§.§ D^2u approximations
In this section, we give some different schemes that can be used to approximate the D²u term, and we compare them on some numerical test cases.
§.§.§ The original D^2u approximation
The approximation developed in this paragraph was first proposed in <cit.> and uses ghost particles of dimension q = 2.
To obtain the position of a particle, we freeze its position if its order is 2 and invert its increment if its order is 1, so for every k = (k_1, ⋯, k_n) ∈ 𝒦̂^n_T:
W^k_s
:=
W^{k-}_{T_{k-}} + 𝟙_{κ(k)=0} Ŵ^{o(k)}_{s - T_{k-}} - 𝟙_{κ(k)=1} Ŵ^{o(k)}_{s - T_{k-}},
X^k_s := μs + σ_0 W^k_s,
∀ s ∈ [T_{k-}, T_k].
Then we use the following representation for the D²u term in equation (<ref>):
D²𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] =
𝔼_{T_{(1)}, X_{T_{(1)}}}[ (σ_0^⊤)^{-1} ( ( Ŵ^{(1,p)}_{ΔT_{(1,p)}} (Ŵ^{(1,p)}_{ΔT_{(1,p)}})^⊤ - ΔT_{(1,p)} I_d ) / (ΔT_{(1,p)})² ) σ_0^{-1} ψ ],
where
ψ = (1/2) [ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) + ϕ(T_{(1,p)}, X^{(1,p^1)}_{T_{(1,p)}}) - 2 ϕ(T_{(1,p)}, X^{(1,p^2)}_{T_{(1,p)}}) ].
Using for example equation (<ref>) for the first derivative Du, <cit.> gave the following re-normalized estimator defined by a backward induction:
let 𝒲_k := g(X^k_T)/F̄(ΔT_k) for every k ∈ 𝒦̂_T alive at T, then let
𝒲_k
:= ( 1/ρ(ΔT_k) ) ( h(T_k, X^k_{T_k}) + c(T_k, X^k_{T_k})
∏_{k̃ ∈ S(k)} ( 𝒲_{k̃} 𝟙_{θ(k̃)=0} + ( 𝒲_{k̃} - 𝒲_{k̃^2} ) 𝟙_{1 ≤ θ(k̃) ≤ m} +
(1/2) ( 𝒲_{k̃} + 𝒲_{k̃^1} - 2𝒲_{k̃^2} ) 𝟙_{m+1 ≤ θ(k̃) ≤ 2m} ) 𝒟_{k̃} ),
for the other particles k ∈ 𝒦̂_T,
where
𝒟_k
:=
𝟙_{θ_k = 0}
+ 𝟙_{θ_k ∈ {1, ⋯, m}} b_{θ_k}(T_{k-}, X^k_{T_{k-}}) · (σ_0^⊤)^{-1} Ŵ^{o(k)}_{ΔT_k} / ΔT_k
+ 𝟙_{θ_k ∈ {m+1, ⋯, 2m}} a_{θ_k} : (σ_0^⊤)^{-1} ( ( Ŵ^{o(k)}_{ΔT_k} (Ŵ^{o(k)}_{ΔT_k})^⊤ - ΔT_k I_d ) / (ΔT_k)² ) σ_0^{-1}.
Then we have
u(0,x)
= 𝔼_{0,x}[ 𝒲_{(1)} ].
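On a single time step this combination can be checked directly: taking σ_0 = 𝕀_d, the weight (ŴŴ^⊤ - ΔT I_d)/ΔT² applied to ψ gives an unbiased estimate of the Hessian of v(x) = 𝔼[ϕ(x + Ŵ_ΔT)] (a sketch, ours):

    import numpy as np
    rng = np.random.default_rng(6)

    phi = lambda x: np.cos(x.sum(axis=-1))
    x0, dT, d, n = np.full(2, 0.5), 0.1, 2, 200000
    W = rng.normal(0.0, np.sqrt(dT), size=(n, d))

    psi = 0.5 * (phi(x0 + W) + phi(x0 - W) - 2.0 * phi(x0))   # ghosts of order 1 and 2
    weight = (W[:, :, None] * W[:, None, :] - dT * np.eye(d)) / dT**2
    hess = (weight * psi[:, None, None]).mean(axis=0)         # ~ D^2 E[phi(x0 + W)]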
§.§ A second representation
This second representation uses ghost particles of dimension q = 3.
Let
(Ŵ^{k,i})_{k = (k_1, ⋯, k_n) ∈ ℕ^n, n ≥ 1, i = 1, 2}
be a sequence of independent d-dimensional Brownian motions, which is also independent of (ΔT_k)_{k = (k_1, ⋯, k_n) ∈ ℕ^n, n ≥ 1}.
The dynamics of the original particles and of the ghosts is given by:
W^k_s
:=
W^{k-}_{T_{k-}} + 𝟙_{κ(k)=0} ( Ŵ^{o(k),1}_{s - T_{k-}} + Ŵ^{o(k),2}_{s - T_{k-}} )/√2 + 𝟙_{κ(k)=1} Ŵ^{o(k),1}_{s - T_{k-}}/√2 + 𝟙_{κ(k)=2} Ŵ^{o(k),2}_{s - T_{k-}}/√2,
X^k_s := μs + σ_0 W^k_s,
∀ s ∈ [T_{k-}, T_k]
(a ghost of order 3 thus stays frozen).
We then replace (<ref>) by
D²𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] = 𝔼_{T_{(1)}, X_{T_{(1)}}}[ 2 (σ_0^⊤)^{-1} ( Ŵ^{(1,p),1}_{ΔT_{(1,p)}} (Ŵ^{(1,p),2}_{ΔT_{(1,p)}})^⊤ / (ΔT_{(1,p)})² ) σ_0^{-1} ψ ],
where
ψ = ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) + ϕ(T_{(1,p)}, X^{(1,p^3)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^1)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^2)}_{T_{(1,p)}}).
This scheme can easily be obtained by applying the differentiation rule used for semi linear equations on two successive steps of size ΔT_{(1,p)}/2.
A simple calculation shows that the original scheme has a variance bounded by (39/2) |D²u|²_∞ while this one has a variance bounded by 9 |D²u|²_∞, so we expect a diminution of the observed variance with this new scheme.
This derivation on two consecutive time steps has already been used implicitly, for example in <cit.>, and was already numerically superior to a scheme directly using a second order Malliavin weight.
Recursively, the re-normalized estimator is defined by a backward induction:
let 𝒲_k := g(X^k_T)/F̄(ΔT_k) for every k ∈ 𝒦̂_T alive at T, then let
𝒲_k
:=
( 1/ρ(ΔT_k) ) ( h(T_k, X^k_{T_k}) + c(T_k, X^k_{T_k})
∏_{k̃ ∈ S(k)} ( 𝒲_{k̃} 𝟙_{θ(k̃)=0} + ( 𝒲_{k̃} - 𝒲_{k̃^3} ) 𝟙_{1 ≤ θ(k̃) ≤ m} +
( 𝒲_{k̃} + 𝒲_{k̃^3} - 𝒲_{k̃^1} - 𝒲_{k̃^2} ) 𝟙_{m+1 ≤ θ(k̃) ≤ 2m} ) 𝒟_{k̃} ),
for the other particles k ∈ 𝒦̂_T,
where
𝒟_k
:=
𝟙_{θ_k = 0}
+ 𝟙_{θ_k ∈ {1, ⋯, m}} b_{θ_k}(T_{k-}, X^k_{T_{k-}}) · (σ_0^⊤)^{-1} Ŵ^{o(k),1}_{ΔT_k} / ΔT_k
+ 𝟙_{θ_k ∈ {m+1, ⋯, 2m}} a_{θ_k} : 2 (σ_0^⊤)^{-1} ( Ŵ^{o(k),1}_{ΔT_k} (Ŵ^{o(k),2}_{ΔT_k})^⊤ / (ΔT_k)² ) σ_0^{-1}.
Then we have
u(0,x)
= 𝔼_{0,x}[ 𝒲_{(1)} ].
§.§ A third representation
This representation is simply the antithetic version of the second one and uses ghost particles of dimension q = 6.
The dynamics of the original particles and of the ghosts is given by:
W^k_s
:=
W^{k-}_{T_{k-}} + 𝟙_{κ(k)=0} ( Ŵ^{o(k),1}_{s - T_{k-}} + Ŵ^{o(k),2}_{s - T_{k-}} )/√2 + 𝟙_{κ(k)=1} Ŵ^{o(k),1}_{s - T_{k-}}/√2 + 𝟙_{κ(k)=2} Ŵ^{o(k),2}_{s - T_{k-}}/√2 -
𝟙_{κ(k)=4} ( Ŵ^{o(k),1}_{s - T_{k-}} + Ŵ^{o(k),2}_{s - T_{k-}} )/√2 - 𝟙_{κ(k)=5} Ŵ^{o(k),1}_{s - T_{k-}}/√2 - 𝟙_{κ(k)=6} Ŵ^{o(k),2}_{s - T_{k-}}/√2,
X^k_s := μs + σ_0 W^k_s,
∀ s ∈ [T_{k-}, T_k]
(a ghost of order 3 again stays frozen).
We then replace (<ref>) by
D²𝔼_{T_{(1)}, X_{T_{(1)}}}[ ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) ] =
𝔼_{T_{(1)}, X_{T_{(1)}}}[ (σ_0^⊤)^{-1} ( Ŵ^{(1,p),1}_{ΔT_{(1,p)}} (Ŵ^{(1,p),2}_{ΔT_{(1,p)}})^⊤ / (ΔT_{(1,p)})² ) σ_0^{-1} ψ ],
where
ψ = ϕ(T_{(1,p)}, X^{(1,p)}_{T_{(1,p)}}) + 2 ϕ(T_{(1,p)}, X^{(1,p^3)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^1)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^2)}_{T_{(1,p)}}) +
ϕ(T_{(1,p)}, X^{(1,p^4)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^5)}_{T_{(1,p)}}) - ϕ(T_{(1,p)}, X^{(1,p^6)}_{T_{(1,p)}}),
and the weights are still given by equation (<ref>).
The backward induction is defined as follows:
let 𝒲_k := g(X^k_T)/F̄(ΔT_k) for every k ∈ 𝒦̂_T alive at T, then let
𝒲_k
:=
( 1/ρ(ΔT_k) ) ( h(T_k, X^k_{T_k}) + ( c(T_k, X^k_{T_k})/2 ) ∏_{k̃ ∈ S(k)} ( ( 𝒲_{k̃} + 𝒲_{k̃^4} ) 𝟙_{θ(k̃)=0} + ( 𝒲_{k̃} - 𝒲_{k̃^4} ) 𝟙_{1 ≤ θ(k̃) ≤ m} +
(1/2) ( 𝒲_{k̃} + 2𝒲_{k̃^3} - 𝒲_{k̃^1} - 𝒲_{k̃^2} + 𝒲_{k̃^4} - 𝒲_{k̃^5} - 𝒲_{k̃^6} ) 𝟙_{m+1 ≤ θ(k̃) ≤ 2m} ) 𝒟_{k̃} ),
for the other particles k ∈ 𝒦̂_T,
where the weights are given by equation (<ref>).
And as usual we have
u(0,x)
= 𝔼_{0,x}[ 𝒲_{(1)} ].
Extension to schemes for derivatives of order 3 and more is obvious with the two last schemes.
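The three one-step combinations above can be compared empirically: with σ_0 = 𝕀_d they are all unbiased for the same cross derivative of v(x) = 𝔼[ϕ(x + W_ΔT)], and their sample standard deviations can be printed side by side (a sketch, ours; the three estimators below correspond, in order, to the original, second and third representations):

    import numpy as np
    rng = np.random.default_rng(7)

    phi = lambda x: np.cos(x.sum(axis=-1))
    x0, dT, n = np.full(2, 0.5), 0.1, 200000
    W  = rng.normal(0.0, np.sqrt(dT), size=(n, 2))       # original representation
    W1 = rng.normal(0.0, np.sqrt(dT), size=(n, 2))       # second/third, Brownian 1
    W2 = rng.normal(0.0, np.sqrt(dT), size=(n, 2))       # second/third, Brownian 2
    S, r2 = (W1 + W2) / np.sqrt(2), np.sqrt(2)

    v1 = (W[:, 0] * W[:, 1] / dT**2) * 0.5 * (phi(x0 + W) + phi(x0 - W) - 2 * phi(x0))
    v2 = (2 * W1[:, 0] * W2[:, 1] / dT**2) * (phi(x0 + S) + phi(x0)
         - phi(x0 + W1 / r2) - phi(x0 + W2 / r2))
    v3 = (W1[:, 0] * W2[:, 1] / dT**2) * (phi(x0 + S) + 2 * phi(x0)
         - phi(x0 + W1 / r2) - phi(x0 + W2 / r2)
         + phi(x0 - S) - phi(x0 - W1 / r2) - phi(x0 - W2 / r2))
    for v in (v1, v2, v3):
        print(v.mean(), v.std())   # same mean (the (0,1) entry of D^2 v), compare stds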
§.§ Numerical results
For all test cases in this section we take μ = 0.2, σ_0 = 0.5, and we want to evaluate u(0, 0.5𝟙), where 𝟙 := (1, ⋯, 1).
We test the 3 schemes previously described:
* Version 1 stands for the original version of the scheme, using the backward recursion (<ref>),
* Version 2 stands for the second representation, using the backward recursion (<ref>),
* Version 3 stands for the third representation, corresponding to the antithetic version of the second representation, and using the backward recursion
(<ref>). Notice that in this case all the terms in u in f are treated with antithetic ghosts.
We give results for the non nested version, as the nested version doesn't improve the results very much.
* We first choose a non linearity
f(u,Du,D²u) = h(t,x) + (0.1/d) u (𝕀_d : D²u),
where μ = 0.2, σ_0 = 0.5 and
h(t,x) = (α + σ_0²/2) cos(x_1 + ⋯ + x_d) e^{α(T-t)} +
0.1 cos(x_1 + ⋯ + x_d)² e^{2α(T-t)} + μ sin(x_1 + ⋯ + x_d) e^{α(T-t)},
with α = 0.2.
We suppose that the terminal condition is given by g(x) = cos(x_1 + ⋯ + x_d), such that the analytical solution is
u(t,x) = cos(x_1 + ⋯ + x_d) e^{α(T-t)}.
This test case will be noted test C in the sequel.
In this example we want to evaluate u(0, 0.5𝟙).
First we take d = 4 and give the results obtained for different maturities on figures <ref> and <ref>.
We then test the different schemes in dimension 6 on figure <ref>. Besides, on figure <ref> we show that
the schemes provide a good accuracy for the computation of the derivatives by plotting (𝟙 · Du) for the three versions: as expected, the
accuracy is however slightly worse than for the function evaluation.
* At last we consider the test D, where d = 4 and
f(u,Du,D²u) = 0.0125 (𝟙 · Du) (𝕀_d : D²u).
We give the solution and error obtained for the 3 methods on figure <ref>.
On all the test cases, the last representation, using antithetic variables, gives the best results in terms of variance reduction, but at the price of an increase in memory consumption: as the order of the ghost representation increases, so does the memory needed.
§ CONCLUSION
The schemes and methods developed here extend the maturities that can be used to evaluate the solution of some semi linear and full non linear equations.
This is achieved at the price of an increase in computational time and memory consumption.
|
http://arxiv.org/abs/1701.07435v2 | 20170125190000 | Radio occultations of the Io plasma torus by Juno are feasible | [
"Phillip H. Phipps",
"Paul Withers"
] | astro-ph.EP | [
"astro-ph.EP",
"physics.space-ph"
] |
Phillip H. Phipps,1
Paul Withers,1,2
1Department of Astronomy,
Boston University, Boston, Massachusetts, USA.
2Center for Space Physics, Boston University, Boston, Massachusetts, USA.
The flow of material from Io's volcanoes into the Io plasma torus, out into the magnetosphere, and along field lines into Jupiter's upper atmosphere is not adequately understood.
The lack of observations of spatial and temporal variations in the Io plasma torus impedes attempts to understand the system as a whole.
Here we propose that radio occultations of the Io plasma torus by the Juno spacecraft can measure plasma densities in the Io plasma torus.
We find that the line-of-sight column density of plasma in each of the three regions of the Io plasma torus (cold torus, ribbon, and warm torus) can be measured with uncertainties of 10%.
We also find that scale heights describing the spatial variation in plasma density in each of these three regions can be measured with similar uncertainties.
Such observations will be sufficiently accurate to support system-scale studies of the flow of plasma through the magnetosphere of Jupiter.
§ INTRODUCTION
Volcanic eruptions on the innermost Galilean satellite, Io, are the main source of plasma in Jupiter's magnetosphere. Io orbits Jupiter in the plane of the planet's rotational equator at a distance of 5.9 R_J. Volcanic activity on Io delivers neutral gas into the Jupiter system at a rate of about 1 tonne per second <cit.>. Plasma is produced from these neutrals via electron collisions on timescales of 2–5 hours <cit.>. Plasma is also transferred into the magnetosphere as Jupiter's rapidly rotating magnetic field picks up ions from Io's ionosphere. The relative importance of these two processes is not currently known <cit.>.
Once ionized, these particles are affected by electromagnetic forces in addition to gravitational and centrifugal forces. These forces disperse the Io-genic plasma away from Io, but do not do so uniformly in all directions. Instead, the plasma is initially confined to a torus that is centered on the centrifugal equator at Io's orbital distance (5.9 R_J), called the Io plasma torus (IPT).
The centrifugal equator is the locus of points on a given field line which are located at the greatest distance from the rotation axis
<cit.>.
The torus is centered in this plane since this plane is where an ion trapped on a field line has the minimum centrifugal potential.
The axis of the centrifugal equator lies between Jupiter's rotational and magnetic axes and is therefore tilted towards the magnetic axis at 200^∘ longitude (System III).
The angle between the rotational and centrifugal axes is 2/3 the angle between the rotational and magnetic axes <cit.>.
Since Jupiter's magnetic axis is 9.6 degrees from the rotational axis, the centrifugal axis that defines the plane of the IPT is tilted by 6.4 degrees from the rotational axis and 3.2 degrees from the magnetic axis <cit.>.
The jovian magnetic field is not perfectly dipolar. Consequently, representing the centrifugal equator as a plane is an approximation.
However, doing so is sufficient for many purposes.
Plasma is lost from the IPT via flux tube interchange on timescales of around 20-80 days <cit.>. Flux tube interchange causes plasma to drift radially outward, which distributes plasma throughout Jupiter's middle and outer magnetosphere.
The dispersal of plasma from the IPT into the rest of the magnetosphere is the main process that provides plasma to the rest of Jupiter's magnetosphere.
Therefore spatial and temporal variations in the IPT can ultimately affect the distribution and dynamics of plasma throughout Jupiter's entire magnetosphere <cit.>.
There are several different ways in which remote sensing and in situ observations can measure conditions in the IPT. This article focuses on remote sensing observations of the IPT by radio occultations. These observations can monitor temporal and spatial variations in the density and temperature of the IPT.
A radio occultation occurs when an object, here the IPT, comes between the transmitter and receiver of a radio signal.
Properties of the radio signal are affected by the radio signal's propagation through the plasma in the torus.
Refraction of the radio signal as it passes through the plasma of the IPT causes a change in the frequency of the received signal due to the Doppler effect.
The line-of-sight integrated plasma density, also known as the total electron content (TEC), of the IPT can be determined from the measured shift in the received frequency <cit.>.
The most suitable scenario for a radio occultation observation of the IPT involves a spacecraft in a polar orbit around Jupiter with periapsis within Io's orbit.
A polar orbit ensures that the line of sight between the spacecraft and Earth is approximately parallel to the torus equator and sweeps through the entire cross-section of the IPT.
A periapsis within Io's orbit ensures that the line of sight between the spacecraft and Earth passes through the torus once, not twice, which simplifies analysis.
The only spacecraft currently operational at Jupiter, Juno, has such an orbit.
Launched on 5 August 2011, Juno entered orbit around Jupiter on 4 July 2016 and orbits with a near-polar inclination.
The Juno orbiter is the first spacecraft to operate in the outer solar system using solar power
and the first to have a polar orbit around Jupiter <cit.>.
The nominal Juno mission lifetime is 37 orbits.
Four of these orbits are dedicated to spacecraft checkout and instrument commissioning, leaving 33 planned science orbits <cit.>.
Juno's perijove is at equatorial latitudes and 1.06 R_J, which is about 4300 km above the planet's cloud-tops.
Opportunities to conduct radio occultation observations of the IPT occur once per orbit.
Prior to orbit insertion, Juno planned to conduct most of its mission in a 14-day orbit.
Due to anomalies encountered early in its orbital mission, the spacecraft may instead remain in a 53-day orbit for a considerable time.
The main findings of this article are not affected by the length of the orbital period as long as Juno has a near-polar orbit with periapsis inside Io's orbit, which is true at the time of writing and likely to remain true until the end of the mission.
The only significant effect of changes from the planned orbital period is in temporal resolution. Measurements will be possible once per orbital period, so every 53 days instead of every 14 days.
A major goal of the Juno mission is to map Jupiter's gravitational and magnetic fields <cit.>.
Analysis of these observations will improve understanding of the planet's interior structure and the properties of the magnetosphere out of the equatorial plane.
The gravitational mapping requires continuous radio tracking from Earth, so the Juno orbit is designed such that Juno is never occulted from view by Jupiter itself.
The Juno project plans to conduct radio tracking on about 24 orbits <cit.>, which means that radio occultations may be feasible on those 24 orbits.
With only two previous radio occultations of the IPT, the 24 possible Juno occultations offer an order of magnitude increase in the number of observations and unprecedented opportunities to explore spatial and temporal variability in the IPT.
This set of occultations will sample the full range of System III longitudes and the full range of positions relative to Io along its orbit, but only a narrow range of local times. Due to the small angular separation of Earth and the Sun as seen from Jupiter, all occultations will be near noon local time.
The aims of this article are to evaluate the feasibility of measuring properties of the IPT with radio occultations conducted by the Juno spacecraft, to estimate the likely accuracy of such observations, and to assess the contributions that such measurements could make towards key science questions concerning the IPT and its role in Jupiter's magnetosphere.
Section <ref> describes the IPT.
Section <ref> discusses the concept of a radio occultation and relevant capabilities of Juno.
Section <ref> explores radio occultations of the IPT using a simple model.
Section <ref> uses a more sophisticated model of the torus to determine the accuracy with which key torus properties can be measured.
Section <ref> discusses how plasma temperature and density can be obtained from the measured properties.
Section <ref> presents the conclusions of this work.
§ OVERVIEW OF OBSERVATIONS OF THE IO PLASMA TORUS
The IPT can be observed in a variety of ways, including ground-based optical and infrared measurements <cit.>, spacecraft in situ measurements <cit.>, spacecraft ultraviolet (UV) measurements <cit.>, and spacecraft radio occultation experiments <cit.>.
Ground-based optical and infrared observations can measure the composition, density, and temperatures of plasma within the IPT <cit.>.
The intensities of species-specific emission lines indicate the composition of the plasma.
Electron density in the IPT can be determined from the intensity ratio of S^+ emission lines at 6717 and 6731 Å or the intensity ratio of O^+ emission lines at 3726 and 3729 Å
<cit.>.
The electron temperature can be determined from intensity ratios of other pairs of S^+ lines and the
perpendicular ion temperature can be determined from the width of the S^+ 6731 Å line <cit.>.
The brightest emissions from the IPT are sodium D-line emissions due to resonant scattering of solar radiation by neutral sodium, although their behavior is different from typical torus plasma since they are neutral. We can expect the sodium emission to be a tracer of the neutrals but not of the ions.
These emissions are often used as proxy measurements for the main constituents of the IPT, ionized sulfur and oxygen.
Many ground-based surveys of spatial and temporal variability in the IPT have been conducted
<cit.>.
Valuable observations of the IPT were made by Voyager 1 during its flyby in March 1979 <cit.> and Galileo during its orbital tour in 1995–2003 <cit.>. Each spacecraft was equipped with an ultraviolet spectrometer (UVS) that covered 400–1800 Å
and an in situ plasma instrument (PLS).
The UVS experiments were able to measure electron density and temperature, ion temperature perpendicular to the magnetic field, and composition.
The PLS experiments were able to measure plasma density, velocity, and composition <cit.>.
Due to degeneracies in the interpretation of observations from each instrument,
both remote sensing UVS measurements and in situ PLS measurements were necessary to map the composition of the torus completely.
In situ measurements by Voyager 1 <cit.> and Galileo <cit.> mapped the spatial extent of the IPT.
They found that the IPT is centered at the orbital distance of Io, 5.9 R_J, and
has widths of about 2 R_J in, and 1 R_J perpendicular to, the plane of the centrifugal equator.
The Cassini Ultraviolet Imaging Spectrograph (UVIS) also observed the IPT during Cassini's Jupiter flyby in 2000–2001 <cit.>.
The spatial distribution of plasma in the IPT has also been mapped by active remote sensing experiments on spacecraft in the Jupiter system.
Prior to Juno, radio occultations through the IPT have been conducted twice, once by Voyager 1 <cit.> and later by Ulysses <cit.>.
These observations provided a time series of measurements of the TEC in the IPT between the spacecraft and Earth.
In combination with knowledge of the spacecraft trajectory, these TEC measurements constrained spatial variations in the local electron density within the IPT.
From these observations, a general picture of the IPT has been developed. From Voyager 1 measurements,
<cit.> found that the torus can be divided into three different regions: the cold torus, ribbon, and warm torus.
The innermost region, centered at 5.2 R_J, is the cold torus.
In the cold torus, densities fall off with height above the centrifugal equator with a scale height of 0.1 R_J, which is relatively small.
The cold torus peaks at around 5.23 R_J and extends from 4.9 R_J to 5.5 R_J and has a characteristic density of
∼1000 cm^-3. Its composition is mostly S^+ ions with smaller amounts of O^+ ions present.
In the cold torus, the electron temperature T_e≈ 1–2 eV and the ion temperature T_i ≈ 1–4 eV.
Beyond the cold torus lies the ribbon, whose center is at a distance of 5.6 R_J.
It has a scale height of 0.6 R_J and extends from 5.5–5.7 R_J.
The ribbon has a high characteristic density of ∼3000 cm^-3 and it is mostly O^+ ions with smaller amounts of S^+ ions present.
In the ribbon, T_e≈ 4–5 eV and T_i ≈ 10–30 eV.
The outermost region is the warm torus, whose center is at Io's orbital distance of 5.9 R_J.
It has a scale height of 1 R_J, which makes it the thickest region, and extends from 5.7–8 R_J.
The warm torus has a characteristic density of ∼2000 cm^-3 and it is composed of S^2+ and O^+ ions
with trace amounts of O^2+, S^+, and S^3+ ions.
In the warm torus, T_e ≈ 5–8 eV and T_i ≈ 60 eV.
The scale height, H, is related to plasma composition and temperature
<cit.>,
H = √( 2k(T_{i,∥} + Z_i T_{e,∥}) / (3 M_i Ω²) )
where k is the Boltzmann constant, T_i,∥ is the ion temperature, Z_i is the atomic number of the ion species,
T_e,∥ is the electron temperature, M_i is the mass of the ion species, and Ω is the rotation rate of Jupiter's magnetosphere (∼ 1.75 × 10^-4 rad s^-1).
The ∥ subscripts on T_e and T_i refer to the component of temperature parallel to the magnetic field.
Since T_e is much smaller than T_i, the scale height is effectively insensitive to T_e.
This scale height defines the extent of the IPT parallel to the magnetic field lines.
If a radio occultation can determine the scale height H, then Equation <ref> can be used to infer the ion temperature.
Doing so requires independent knowledge of the ion composition, which is summarized above.
Previous observations have revealed much about the plasma torus.
However, many key science questions still remain concerning the generation, transport, and loss of plasma in the IPT and the magnetosphere of Jupiter.
In the context of the IPT itself, outstanding questions include:
1. Over what timescales does the supply of plasma to the IPT vary?
2. How do variations in Io's volcanic activity affect major properties of the IPT?
3. How do major properties of the IPT vary with System III longitude?
Multiple radio occultations of the IPT by Juno will provide new information for answering these questions.
These radio occultations offer unparalleled spatial and temporal coverage of the IPT.
§ RADIO OCCULTATIONS
A radio occultation occurs when an object, here the IPT, comes between the transmitter and receiver of a radio signal.
On each orbit, Juno will pass through the centrifugal equator such that the IPT is between the spacecraft and Earth.
This geometry is suitable for radio occultation observations of the torus.
During radio tracking, Juno will receive a radio signal from Earth at X-band frequencies (7.3 GHz) and use multipliers to retransmit that signal back to Earth at X-band frequencies (8.4 GHz) and Ka-band frequencies (32.1 GHz) (Table <ref>) <cit.>.
This method is similar to the method used by Cassini for radio occultations after the failure of its ultrastable oscillator <cit.>
and to the method that will be used by BepiColombo for gravity science measurements <cit.>.
Since the downlink frequencies are derived from the same source, the two down-linked radio signals will be transmitted coherently.
The propagation of the radio signal is affected by plasma along its path such that
the received frequency contains information about the electron density along the path of the radio signal.
As is shown here,
the line-of-sight integrated electron density can be derived from comparison of the received frequencies of the two down-linked radio signals.
Neglecting relativistic effects, the received frequency on Earth of the downlinked X-band signal satisfies <cit.>:
f_{R,X} = f_{T,X} - (f_{T,X}/c) (d/dt) ∫ dl +
( e²/(8π² m_e ϵ_0 c f_{T,X}) ) (d/dt) ∫ N dl - (f_{T,X} κ/c) (d/dt) ∫ n dl
where f is frequency, subscripts R and T refer to received and transmitted, respectively, subscript X refers to X-band, c is the speed of light, t is time, l is distance along the ray path, -e is the electron charge, m_e is the electron mass, ϵ_0 is the permittivity of free space, N is the electron density, κ is the mean refractive volume of the neutrals, and n is the number density of neutrals.
A similar equation can be written for the received frequency on Earth of the downlinked Ka-band signal:
f_{R,Ka} = f_{T,Ka} - (f_{T,Ka}/c) (d/dt) ∫ dl +
( e²/(8π² m_e ϵ_0 c f_{T,Ka}) ) (d/dt) ∫ N dl - (f_{T,Ka} κ/c) (d/dt) ∫ n dl
The two transmitted frequencies, f_T,X and f_T,Ka, satisfy f_T,Ka / f_T,X = f_D,Ka / f_D,X, where f_D,Ka / f_D,X is a fixed ratio of 3344/880 <cit.>.
The subscript D refers to downlinked frequencies.
Accordingly, Equation <ref> can be multiplied by f_D,X / f_D,Ka and subtracted from Equation <ref> to give:
Δf = f_{R,X} - f_{R,Ka} (f_{D,X}/f_{D,Ka}) =
( e²/(8π² m_e ϵ_0 c f_{T,X}) ) ( 1 - (f_{D,X}/f_{D,Ka})² ) (d/dt) ∫ N dl
where Δ f is defined as f_R,X-f_R,Ka( f_D,X/f_D,Ka).
Terms proportional to the transmitted frequency in Equations <ref>–<ref> cancel out in this difference.
This eliminates the classical Doppler shift and effects of neutral molecules.
The quantity ∫ N dl is the line-of-sight TEC.
If time series of f_R,X and f_R,Ka are available, Equation <ref> can be used to determine the rate of change of the TEC.
Given knowledge of the spacecraft trajectory, the time rate of change of the TEC can be converted into the spatial gradient of the TEC.
Finally, this can be integrated to give the TEC for each different line of sight.
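As an illustration, the following Python sketch (ours, not flight software) implements this chain on synthetic inputs: it forms the differential residual Δf of Equation (<ref>), converts it to the rate of change of TEC, and integrates over time; the retrieved TEC is known up to the constant of integration TEC(t_0).

    import numpy as np

    # physical constants (SI)
    e, m_e, eps0, c = 1.602176634e-19, 9.1093837015e-31, 8.8541878128e-12, 2.99792458e8
    f_TX = 8.4e9                       # X-band downlink frequency (Hz)
    ratio = 880.0 / 3344.0             # f_D,X / f_D,Ka
    K = e**2 / (8 * np.pi**2 * m_e * eps0 * c * f_TX) * (1.0 - ratio**2)

    def tec_series(t, f_RX, f_RKa):
        """TEC(t) - TEC(t[0]) in m^-2 from the two received frequency series."""
        df = f_RX - ratio * f_RKa                      # Delta f of the text
        dTEC_dt = df / K
        mid = 0.5 * (dTEC_dt[1:] + dTEC_dt[:-1])       # trapezoidal integration
        return np.concatenate(([0.0], np.cumsum(mid * np.diff(t))))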
§ INITIAL ESTIMATE OF FREQUENCY SHIFTS
We wish to determine how accurately properties of the IPT can be measured by radio occultation experiments.
Before developing a sophisticated model of the IPT and sources of noise, we first explore the influence of IPT properties on observable quantities using a simple model.
§.§ Initial model of Io plasma torus
We assume that the electron density N can be represented by a single Gaussian that depends on the distance s' from the center of the torus
and that the center of the torus lies in the plane of the centrifugal equator at a distance from Jupiter equal to Io's orbital distance of 5.9 R_J
<cit.>.
Hence:
N(s') = N(0) e^{-s'²/H²}
where H is the scale height and N(0) is the density at the center of the torus. Typical values for N(0) and H are 2000 cm^-3 and 1 R_J, respectively <cit.>.
The critical quantity in Equation <ref> is ∫ N dl, the integral of the electron density along the line of sight.
We define TEC as a function of the radio signal's distance of closest approach to the center of the torus s, TEC(s).
This satisfies <cit.>:
∫ N dl = TEC(s) = 2 ∫_s^∞ ( N(s') s' / √(s'² - s²) ) ds'
where s' is the distance from the center of the IPT to a point on the ray path.
With the density N given by Equation <ref>, TEC(s) is given by <cit.>:
TEC(s) =
N(0) √π H e^{-s²/H²}
The maximum value of TEC(s) occurs at s=0, where TEC = N(0)√(π)H.
For benchmark values N(0)=2000 cm^-3 and H=1 R_J, the maximum value of the TEC is 25.5 × 10^16 m^-2.
This can be expressed as 25.5 TECU, where 1 TECU or total electron content unit equals 1 × 10^16 m^-2.
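This number is easy to check numerically (a quick sanity check, ours; R_J is taken as 71,492 km):

    import numpy as np
    RJ = 7.1492e7                          # Jupiter radius (m)
    N0, H = 2000e6, 1.0 * RJ               # N(0) = 2000 cm^-3 in m^-3, H = 1 R_J
    TEC = lambda s: N0 * np.sqrt(np.pi) * H * np.exp(-s**2 / H**2)
    print(TEC(0.0) / 1e16)                 # ~25 TECU, consistent with the value above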
§.§ Characteristic frequency shifts
Combining Equations <ref> and <ref>, the frequency shift Δ f satisfies:
Δf(s) = ( e²/(8π² m_e ϵ_0 c f_{T,X}) ) ( 1 - (f_{D,X}/f_{D,Ka})² ) (d/dt)[ N(0) √π H e^{-s²/H²} ]
Since the only time-variable quantity in Equation <ref> is the distance of closest approach s, Equation <ref> becomes:
Δf(s) = - ( e²/(8π² m_e ϵ_0 c f_{T,X}) ) ( 1 - (f_{D,X}/f_{D,Ka})² ) √π N(0) e^{-s²/H²} (2s/H) (ds/dt)
Here ds/dt is the rate of change of the distance of closest approach s.
Note that this refers to the distance of closest approach of the line of sight between the spacecraft and Earth to the center of the IPT.
It is therefore affected by the trajectory of the spacecraft and the motion of the IPT, not solely by the trajectory of the spacecraft.
For simplicity in this exploratory work, we assume that ds/dt is constant during a radio occultation observation.
However, this is a questionable assumption that would need to be revised in the analysis of real observations.
First, Juno's speed during a radio occultation observation, which is essentially a periapsis pass, changes appreciably due to the high eccentricity of Juno's orbit.
Second, since the IPT is tilted with respect to Jupiter's rotational axis, the center of the IPT moves during a radio occultation observation.
At Io's orbital distance, the center of the IPT moves up and down with a velocity of ± 9 kms^-1 over Jupiter's 9.925 hour rotational period.
We assume that |ds/dt| is 20 km s^-1, which is a representative value for the spacecraft speed during a periapsis pass
(based on the ephemeris tool at www-pw.physics.uiowa.edu/~jbg/juno.html).
This is equivalent to a change in s of one R_J in a time of one hour.
We reconsider this issue at the end of Section <ref>.
Equation <ref> provides an analytical description of the dependence of the measureable frequency shift Δ f on the central density of the torus, N(0), the torus scale height, H, and ds/dt,
which can be intepreted as the projected speed of the spacecraft.
The value and location of the maximum value of |Δ f| can be found by setting the derivative of Equation <ref> with respect to s to zero.
The maximum value of | Δ f|, | Δ f|_max, occurs at s^2 = H^2 / 2 and satisfies:
| Δ f|_max =
e^2/8 π^2 m_e ϵ_0 c f_T,X( 1- ( f_D,X/f_D,Ka)^2 ) √(π) N(0) e^-1/2√(2)ds/dt
For N(0) = 2000 cm^-3, H = 1 R_J, and |ds/dt| = 20 km s^-1, the maximum value of | Δ f| is 0.9 mHz.
This maximum occurs at s = 0.7 R_J.
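This estimate can be reproduced numerically; the sketch below assumes f_T,X ≈ 8.4 GHz as a representative X-band value.

```python
# Evaluate |Delta f|_max from the expression above. The transmitted X-band
# frequency f_TX = 8.4 GHz is an assumed representative value.
import numpy as np
from scipy.constants import e, m_e, epsilon_0, c, pi

f_TX = 8.4e9                    # transmitted X-band frequency [Hz] (assumed)
ratio = 880.0 / 3344.0          # f_D,X / f_D,Ka
N0 = 2000.0e6                   # central density [m^-3]
ds_dt = 20e3                    # |ds/dt| [m s^-1]

pref = e**2 / (8 * pi**2 * m_e * epsilon_0 * c * f_TX) * (1 - ratio**2)
df_max = pref * np.sqrt(pi) * N0 * np.exp(-0.5) * np.sqrt(2.0) * ds_dt
print(df_max * 1e3)             # ~0.9 mHz
```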
The top panel of Figure <ref> shows how Δ f depends on s for several values of N(0) and fixed H = 1 R_J and ds/dt = -20 km s^-1.
We choose a range of values for N(0) that covers the observed values in the torus.
N(0) varies between 500 cm^-3 and 2500 cm^-3 <cit.>.
The middle panel of Figure <ref> shows how Δ f depends on s for several values of H and fixed N(0) = 2000 cm^-3 and ds/dt = -20 km s^-1.
We choose a range of values for H that covers the observed values in the torus.
H varies between 0.5 R_J and 2.5 R_J <cit.>.
The bottom panel in Figure <ref> shows how Δ f depends on s for several values of |ds/dt| and fixed N(0) = 2000 cm^-3 and H = 1 R_J.
We choose a range of values for |ds/dt| that increases from 20 km s^-1 to 40 km s^-1 in increments of 5 km s^-1.
Figure <ref> illustrates how the observed shift in frequency, Δ f, depends on N(0), H, and ds/dt.
Δ f is zero at the start of an occultation, when |s| is large.
Its magnitude increases monotonically to Δ f_max = 0.9 mHz at s_crit = H/√(2), then decreases monotonically through zero at s=0.
The behavior of Δ f in the second half of the occultation is the same as in the first half, except for a change in sign.
The full width at half maximum of the local maximum in Δ f is approximately equal to H.
The effects of variations in N(0) and ds/dt are straight-forward, since the frequency shift Δ f is proportional to both factors.
Spatial and temporal changes in N(0) are likely over the course of the Juno mission, since the IPT is intrinsically variable, whereas ds/dt will not vary greatly from orbit to orbit.
The effects of variations in H are more complex.
As H increases, s_crit increases. The width of the local maximum in Δ f also increases, but the value of Δ f_max remains the same.
The timescale, τ, for the radio signal to sweep through the IPT satisfies |ds/dt| τ = 2 H.
With |ds/dt|= 20 km s^-1 and H = 1 R_J, the timescale τ is approximately 2 hours.
An integration time on the order of 10 seconds provides spatial resolution on the order of H/100.
For this integration time, it can be assumed that
the relative accuracy with which f_R,X and f_R,Ka can be measured is 3 × 10^-14.
This is based on the Allan deviation of the Deep Space Network (DSN) hydrogen masers over a 10 second integration <cit.>.
With f_R,X = 8.4 GHz and f_R,Ka = 32.1 GHz <cit.>, the corresponding uncertainty in a measurement of Δ f is 3.8 × 10^-4 Hz (Equation <ref>).
This uncertainty in Δ f, σ_Δ f, is 40 percent of the characteristic value of 0.9 mHz discussed above.
The uncertainty on the inferred TEC, σ_TEC, follows from propagating the uncertainty in Δ f through the integrated version of Equation <ref>.
Assuming a simple numerical integration method leads to:
σ_TEC = √(Σ)(e^2/8 π^2 m_e ϵ_0 c f_T,X( 1- ( f_D,X/f_D,Ka)^2))^-1σ_Δ fΔ t
where Σ is the number of data points integrated to reach the current measurement and Δ t is the integration time for an individual measurement.
Since Σ = t/Δ t, where t is the time since the start of the observation,
we obtain:
(σ_TEC/1 TECU) = 0.5
√((t/1 hr) (Δ t/10 s))
For N(0) = 2000 cm^-3, H = 1 R_J, and |ds/dt| = 20 km s^-1, Δ f_max = 0.9 mHz and s_crit = 0.7 R_J.
If the integration starts at s = 4 R_J, then t at this local maximum is 3.3 hours from the start of the observation.
Henceforth we adopt Δ t = 36 seconds to provide a resolution of 0.01 R_J.
This yields σ_TEC / (1 TECU) = 0.92 √( t / (1 hr)), which gives σ_TEC = 1.68 TECU at the local maximum.
In this example, σ_TEC / TEC = 7% at the TEC maximum.
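These values follow directly from the expression above; a minimal sketch, with the same assumed f_T,X as before:

```python
# Propagated TEC uncertainty as a function of elapsed time t and
# integration time dt. f_TX = 8.4 GHz is an assumed X-band value.
import numpy as np
from scipy.constants import e, m_e, epsilon_0, c, pi

f_TX = 8.4e9
pref = (e**2 / (8 * pi**2 * m_e * epsilon_0 * c * f_TX)
        * (1 - (880.0 / 3344.0) ** 2))
sigma_df = 3.8e-4               # uncertainty on a single Delta f [Hz]
dt = 36.0                       # integration time [s]

def sigma_tec(t):
    """TEC uncertainty [TECU] after integrating for t seconds."""
    return np.sqrt(t / dt) * sigma_df * dt / pref / 1e16

print(sigma_tec(3600.0))        # ~0.9 TECU after one hour
print(sigma_tec(3.3 * 3600.0))  # ~1.7 TECU at the local maximum
```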
Several other potential sources of error must be considered.
The effects of noise at the transmitter and receiver on the simulated measurements of frequency shift are accounted for by the stated Allan deviation.
The effects of plasma in the rest of Jupiter's environment and the interplanetary medium can be accounted for in the frequency baseline prior to and after the occultation of the IPT <cit.>.
The noise contribution due to the interplanetary medium depends strongly on solar elongation angle. It should be noted that for most of the Juno mission the solar elongation angle is relatively large and the associated noise is relatively small <cit.>.
Plasma in the regions of Jupiter's magnetosphere outside the IPT will also contribute to the measured TEC.
At the centrifugal equator, assuming a magnetospheric density 3 cm^-3 and length of 100 R_J <cit.>, this contribution is about 0.7 TECU, which is small (3%) relative to the peak TEC of the IPT, 25.5 TECU.
Juno's periapsis altitude is approximately 4000 km, which is within the ionosphere <cit.>. Hence plasma in Jupiter's ionosphere may contribute to the measured total electron content between the spacecraft and Earth. The ionospheric plasma density at this altitude is approximately 3× 10^9 m^-3 and the ionospheric scale height is on the order of 1000 km <cit.>. This results in a vertical total electron content of 3 × 10^15 m^-2 or 0.3 TECU. The line of sight total electron content will be larger by a geometric factor. This is a potentially significant perturbation to the inferred total electron content of the IPT, especially if passage through the ionosphere occurs as the line of sight to Earth passes through the centrifugal equator. However, the Juno Waves instrument is capable of measuring the local plasma density at the spacecraft <cit.>. Using its measurements of the vertical structure of the topside ionosphere, the contributions of Jupiter's ionosphere to the inferred total electron content of the IPT can be eliminated.
Since Io orbits Jupiter every 1.7 days, each occultation will measure IPT properties at a different angular separation from Io. A series of occultations over a range of separations from Io will be valuable for assessing how plasma is transported away from Io and into the IPT. It is possible, though unlikely, for an IPT occultation to also probe Io's ionosphere directly. In that event, the line-of-sight total electron content would briefly increase by 0.1 TECU or 1 × 10^15 m^-2. This follows from a surface ionospheric density of 6 × 10^3 cm^-3 and a scale height of 100 km <cit.>.
We previously noted the flaws in the assumption that ds/dt is constant.
There are two main consequences if ds/dt is not constant.
The first consequence is that it becomes harder to determine the position s associated with a given time in the measured time series of Δ f.
Yet since the Juno trajectory and the location of the centrifugal equator at Io's orbital distance are known, the required mapping from time to position is tractable.
The effects of the nodding up and down of the IPT are illustrated in Figure <ref>.
This shows how ds/dt and s(t) change for different phasings of the motion of the IPT relative to the time of the occultation. This is equivalent to occultations occurring at different System III longitudes.
From a fixed vantage point of noon local time in the rotational equator, the IPT moves up and down sinusoidally with a period equal to the planetary rotation period of 9.925 hours, a distance magnitude of 5.89 R_J, and a speed magnitude of 9 km s^-1.
Given a constant spacecraft speed of 20 km s^-1, which is itself a noteworthy simplification, |ds/dt| varies between 10 and 30 km s^-1.
The variation in ds/dt with time leads to the second consequence, which is that the numerical and graphical results based on Equations <ref> and <ref> will no longer be perfectly accurate.
Furthermore, note that a constant time resolution in the measured received frequencies will no longer correspond to a constant spatial resolution within the IPT.
The only remaining potentially significant source of error is Earth's ionosphere, which is discussed in Section <ref>.
§.§ Initial model of Earth's ionosphere
Plasma densities are much greater in Earth's ionosphere than in Jupiter's magnetosphere or the interplanetary medium.
Consequently, plasma in Earth's ionosphere can make a significant contribution to the line-of-sight column density despite the ionosphere's limited vertical extent. If plasma densities in Earth's ionosphere were constant along the line of sight over the duration of the occultation, then they would have no effect on the rate of change of the column density and would not affect the measured frequency shift.
This is commonly the case for radio occultation observations of planetary atmospheres and ionospheres, which last for minutes, not hours.
However, due to the large size of the IPT and the long duration of an IPT occultation, conditions in Earth's ionosphere along the line of sight from the ground station to the spacecraft may change appreciably over the course of the occultation.
The vertical column density, or vertical total electron content (TEC), of Earth's ionosphere varies with time of day, season, the solar cycle, and other factors <cit.>. At nighttime, it can be represented by a constant value of 10 TECU from dusk until dawn.
After dawn, it increases smoothly to a peak value of ∼30 TECU at noon, then decreases smoothly to its nighttime value by dusk.
This peak TEC of Earth's ionosphere, 30 TECU, is around 1.3 times the peak TEC of the IPT, 25.5 TECU.
Moreover, line-of-sight TEC values will be greater than vertical TEC values by a factor of sec(χ), where χ is the zenith angle <cit.>.
Figure <ref> illustrates how line-of-sight TEC through Earth's ionosphere and the IPT
varies with time of day for a line-of-sight 30 degrees away from the zenith in which the radio signal passes through the center of the IPT at 9 hours local time.
The TEC is the sum of two components. The first component is from Earth's dayside ionosphere. It is given by
A + B cos[ 2 π(LT - 12 hrs) / (24 hrs) ],
where A equals 10 TECU, B equals 20 TECU, and LT is local time.
The second component is from the IPT. It is given by
C exp[-(LT - 9 hrs)^2 / (1 hr)^2],
where C equals 25.5 TECU and 1 hr equals 1 R_J/20 km s^-1 (Equation <ref>).
We assume benchmark values of N(0) = 2000 cm^-3 and H = 1 R_J for the IPT and a 36 second integration time.
Figure <ref> also shows representative uncertainties in TEC.
For conceptual simplicity, we neglect the variation in uncertainties with time that are defined by Equation <ref> and adopt instead a constant uncertainty of 2 TECU.
This value comes from the average of Equation <ref> over the assumed duration of the occultation.
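The synthetic data behind Figure <ref> can be generated compactly; this is a sketch of the assumed model, not observed data.

```python
# Two-component line-of-sight TEC model (Earth's dayside ionosphere plus
# IPT) with a constant 2 TECU measurement uncertainty, as assumed above.
import numpy as np

rng = np.random.default_rng(0)
lt = np.arange(6.0, 12.0, 0.01)                                   # local time [hr]
tec_earth = 10.0 + 20.0 * np.cos(2 * np.pi * (lt - 12.0) / 24.0)  # [TECU]
tec_ipt = 25.5 * np.exp(-((lt - 9.0) / 1.0) ** 2)                 # [TECU]
tec_obs = tec_earth + tec_ipt + rng.normal(0.0, 2.0, lt.size)     # noisy sum
```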
The contributions of Earth's ionosphere to the measured line-of-sight TEC must be subtracted before properties of the IPT can be determined from the observations.
We consider two methods for doing so.
First, we do a linear fit to the simulated measurements of TEC at 6–8 and 10–12 hours, then subtract this fit from the simulated measurements of TEC at 7–11 hours.
The fit is shown as a red dot-dashed line in Figure <ref>.
The residual TEC, which is shown in the top panel of Figure <ref>, is the inferred contribution from the IPT.
This linear fitting method provides a baseline for the contributions of Earth's ionosphere.
As can be seen in the top panel of Figure <ref>, this method gives torus TEC values that are ∼ 1–2 TECU higher than the input torus TEC values from 7–11 hours.
Although the corrected simulated measurements of torus TEC values are larger than the input torus TEC values, the difference is less than the measurement uncertainty of 2 TECU.
Following Equation <ref>, we fit the corrected simulated measurements of torus TEC values to a function of the form of
TEC(s) = TEC(0) e^-s^2/H^2.
The fitted peak TEC value is 27.21±0.06 TECU
and the fitted scale height, H, is 1.002±0.002 R_J.
This peak TEC is 1.71 TECU larger than the input value of 25.5 TECU. Thus the fitted TEC value is 28 σ away from the input TEC value, but the difference is only 7% of the peak TEC.
The fitted scale height is 0.002 R_J larger than the input value of H.
The fitted scale height is 1 σ away from the input scale height, but the difference is only 1% of the scale height.
The fitted peak TEC value and scale height imply a central density N(0) of 2127.13 ± 6.33 cm^-3.
This inferred central density is 127.13 cm^-3 larger than the input value of 2000 cm^-3.
The fitted central density value is 20 σ away from the input central density value, but the difference is only 6% of the density.
We conclude that this method is reasonable for subtracting the effects of Earth's ionosphere as the errors in the fitted IPT properties are less than 10%.
Yet it provides a poor characterization of the uncertainty in the fitted properties.
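A self-contained sketch of this first method follows; the local-time-to-s conversion assumes |ds/dt| = 20 km s^-1, that is, 1 R_J per hour.

```python
# Linear-baseline correction and Gaussian fit (first method). The synthetic
# data are regenerated from the two-component model above.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
lt = np.arange(6.0, 12.0, 0.01)                         # local time [hr]
tec_obs = (10.0 + 20.0 * np.cos(2 * np.pi * (lt - 12.0) / 24.0)
           + 25.5 * np.exp(-(lt - 9.0) ** 2)
           + rng.normal(0.0, 2.0, lt.size))             # [TECU]

off = (lt < 8.0) | (lt > 10.0)                          # off-torus intervals
p = np.polyfit(lt[off], tec_obs[off], 1)                # linear baseline
torus = tec_obs - np.polyval(p, lt)                     # inferred IPT signal

def gauss(s, tec0, h):                                  # TEC(s) = TEC(0) e^{-s^2/H^2}
    return tec0 * np.exp(-(s / h) ** 2)

s = lt - 9.0                                            # hr -> R_J at 20 km/s
fit = (lt > 7.0) & (lt < 11.0)
popt, pcov = curve_fit(gauss, s[fit], torus[fit], p0=[25.0, 1.0])
print(popt, np.sqrt(np.diag(pcov)))                     # biased slightly high,
                                                        # as noted in the text
```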
Second, we subtract modeled direct measurements of the contributions of Earth's ionosphere from the simulated measurements of line-of-sight TEC.
The ionospheric contribution is shown as a dashed black line in bottom of Figure <ref>.
Here we assume Earth's ionospheric TEC follows the equation stated above for the dayside ionosphere, but that it is measured imperfectly.
GPS receivers at the NASA Deep Space Network (DSN) stations measure the TEC in Earth's ionosphere.
The TEC in Earth's ionosphere along the line of sight from the ground station to the spacecraft is routinely reported.
We subtract the contributions of Earth's ionosphere, which we assume to be known with an accuracy of 5 TECU <cit.>
from the simulated measurements of TEC, which as before we assume to be known with an accuracy of 2 TECU.
The residual TEC, which is shown in the bottom panel of Figure <ref>, is the inferred contribution from the IPT.
With this method, the corrected simulated measurements of TEC values match the input values well, whereas the values obtained with the first method were biased to larger values.
However, the measurement uncertainties are larger and thus the formal uncertainties on fitted parameters are also larger.
We fit the corrected simulated measurements of TEC values as above.
The fitted peak TEC value is 25.5±0.1 TECU, whereas the input value is 25.5 TECU.
The difference between fitted and input peak TEC values is <1 σ.
The fitted scale height, H, is 1.005±0.007 R_J, whereas the input value is 1 R_J.
The difference between fitted and input scale heights is <1 σ.
The fitted peak TEC value and scale height imply a central density, N(0), of 1994.18 ± 14.24 cm^-3, whereas the input central density is 2000 cm^-3. The difference between fitted and input central densities is <1 σ.
We conclude that this method is preferable. It accurately characterizes the fitted parameters.
Furthermore the formal uncertainties are consistent with differences between fitted values and input values.
Having established the principle that the IPT can be observed using radio occultations despite the effects of Earth's ionosphere,
we neglect Earth's ionosphere in the remainder of this article.
More precisely, we assume that the observations occur during the nighttime such that the vertical TEC in Earth's ionosphere is relatively constant.
The contribution of Earth's ionosphere to the line-of-sight TEC can be found using either pre- or post-occultation observations, then subtracted from the TEC measurements.
§ SOPHISTICATED MODEL OF IO PLASMA TORUS
Representing plasma densities in the IPT by a single Gaussian function is convenient and has been useful for testing the effects of changes in plasma and spacecraft parameters and effects of the Earth's ionosphere, but this representation oversimplifies the true density distribution in the IPT.
As discussed in Section <ref>, the IPT is conventionally divided into three regions: cold torus, ribbon, and warm torus.
These three regions have distinct compositions, temperatures, and densities.
To better understand temporal and spatial changes in the torus, it is desirable to measure densities in each of its constituent regions.
We therefore replace the single Gaussian function of Section <ref> with a more sophisticated function that includes contributions for each region.
§.§ Density distribution
We now represent the IPT by four functions, one each for the cold torus and ribbon, and two for the warm torus.
In the plane of the centrifugal equator, densities satisfy:
N(R < 6.1 R_J) = N_1 e^- (R-C_1)^2/(W_1)^2 +
N_2 e^- (R-C_2)^2/(W_2)^2 + N_3 e^- (R-C_3)^2/(W_3)^2
N(R > 6.1 R_J) = N_4 e^- (R-C_4)^2/(W_4)^2
where R is distance away from the center of Jupiter in the equatorial plane.
Equation <ref> contains three terms that represent the three regions of the torus: 1. cold torus, 2. ribbon, and 3. warm torus.
N_1, N_2, and N_3 correspond to the peak densities of the cold torus, ribbon, and warm torus components, respectively.
C_1, C_2, and C_3 are the central locations of the cold torus, ribbon, and warm torus components, respectively.
W_1, W_2, and W_3 are the radial widths, in R_J, of the cold torus, ribbon, and warm torus components, respectively.
Note that the total density at R = C_1, say, is the sum of the three terms. It is not simply N_1.
The warm torus is not well-represented by a single term, which is why Equation <ref> only applies at R < 6.1 R_J.
At larger radial distances, the plasma density is given by Equation <ref>.
We label this region as the extended torus. It has peak density N_4, central location C_4, and radial width W_4.
In order to extend this model beyond the plane of the centrifugal equator, we multiply each term in Equations <ref>–<ref> by factor of e^-r^2/H^2 where r is distance away from the plane of the centrifugal equator.
Therefore N(R, r) satisfies:
N(R< 6.1 R_J, r) = N_1 e^- (R-C_1)^2/(W_1)^2e^- r^2/H_1^2 +
N_2 e^- (R-C_2)^2/(W_2)^2e^- r^2/H_2^2 + N_3 e^- (R-C_3)^2/(W_3)^2e^- r^2/H_3^2
N(R > 6.1 R_J, r) = N_4 e^- (R-C_4)^2/(W_4)^2e^- r^2/H_3^2
H_1, H_2, and H_3 are the scale heights of the cold torus, ribbon, and warm torus components, respectively. Note that the extended torus component shares the warm torus scale height, H_3.
The functional form represented by Equations <ref>–<ref> was adopted in order to reproduce the radial density distribution for the centrifugal equator shown in Figure 6 of <cit.>. Numerical values of the corresponding model parameters, which were determined by a fit to the data shown in that figure, are given in Table <ref>.
Numerical values of the model scale heights, which were determined from Figure 12 in <cit.>, are given in Table <ref>.
A schematic of the model IPT and the occultation geometry is shown in Figure <ref>.
The modeled electron densities are shown in Figure <ref>.
Figure <ref> also demonstrates that this model provides a good representation of the density observations in the centrifugal equator reported in
Figure 6 of <cit.>.
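The functional form of Equations <ref>–<ref> is straightforward to implement; the parameter values below are illustrative placeholders, not the fitted values of Tables <ref> and <ref>.

```python
# Four-component electron density model of the IPT. N_i in cm^-3; C_i, W_i,
# and H_i in R_J. All numerical values are placeholders for illustration.
import numpy as np

# (N_i, C_i, W_i) for cold torus, ribbon, warm torus, and extended torus
PARAMS = [(1000.0, 5.2, 0.2), (2500.0, 5.6, 0.2),
          (2000.0, 5.8, 0.5), (1500.0, 5.9, 0.7)]
H = [0.1, 0.6, 1.0, 1.0]        # scale heights; warm and extended share H_3

def density(R, r):
    """Electron density [cm^-3] at radial distance R and height r [R_J]."""
    terms = [N * np.exp(-((R - C) / W) ** 2) * np.exp(-(r / h) ** 2)
             for (N, C, W), h in zip(PARAMS, H)]
    return np.where(R < 6.1, terms[0] + terms[1] + terms[2], terms[3])
```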
§.§ Simulated Juno radio occultation
To simulate a radio occultation through this representation of the IPT, we assume that the line of sight from Juno to Earth is parallel to the centrifugal equator.
We assume that the spacecraft velocity in the direction normal to the centrifugal equatorial plane is -20 km s^-1,
assume that nodding motion of the IPT due to Jupiter's rapid rotation can be neglected,
and use an integration time of 36 seconds, which corresponds to a sampling rate of 0.03 Hz.
Figure <ref> shows the TEC and its rate of change.
Figure <ref> shows the corresponding noise-free frequency shift Δ f (Equation <ref>) and the noisy frequency shift Δ f.
Following Section <ref>, for relative measurement uncertainties of 3 × 10^-14 on f_R,X and f_R,Ka, the uncertainty on a single measurement of Δ f is 3.8 × 10^-4 Hz.
The uncertainties are added to the frequency shifts by drawing from a random normal distribution with mean zero and standard deviation of 3.8 × 10^-4 Hz.
The simulated measurements of TEC were found by integration of the frequency shift Δ f using Equation <ref>.
Uncertainties in the TEC were derived from the uncertainty in Δ f
by repeated application of the standard error propagation formula.
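In code, the integration and error propagation reduce to a cumulative sum; the prefactor is the same combination of constants as above, with f_T,X ≈ 8.4 GHz assumed.

```python
# Recover TEC from a Delta f time series by cumulative integration, with
# random-walk error propagation for the running sum.
import numpy as np
from scipy.constants import e, m_e, epsilon_0, c, pi

f_TX = 8.4e9                              # assumed X-band frequency [Hz]
pref = (e**2 / (8 * pi**2 * m_e * epsilon_0 * c * f_TX)
        * (1 - (880.0 / 3344.0) ** 2))
dt, sigma_df = 36.0, 3.8e-4               # integration time [s], noise [Hz]

def tec_from_df(df):
    """TEC series [m^-2] and 1-sigma uncertainties from Delta f [Hz]."""
    tec = np.cumsum(df) * dt / pref
    sigma = np.sqrt(np.arange(1, df.size + 1)) * sigma_df * dt / pref
    return tec, sigma
```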
The top panel of Figure <ref> shows the simulated measurements of TEC, corresponding uncertainties, and the input TEC.
The bottom panel shows the difference between simulated measurements of TEC and the input TEC.
It is noticeable that the relative uncertainties on the TEC (Figure <ref>) are much less than those on the frequency shift (Figure <ref>) from which TEC was derived. This is an example of integration reducing the importance of random noise.
§.§ Fitted Io plasma torus parameters and their accuracy
Section <ref> explored the accuracy with which a central density and scale height could be fit to simulated TEC observations.
However, this used a simple single Gaussian model of the IPT.
Here we fit the simulated TEC measurements from Section <ref> to a model that includes multiple Gaussian contributions in order to determine the accuracy with which the central density and scale height of the cold torus, ribbon, and warm torus can be measured.
For clarity in this initial exploration of this topic, we assume that the radio occultation is observed at nighttime.
During the night, the TEC of Earth's ionosphere is relatively constant.
A constant TEC will have no effect on the measured frequency shift (Equation <ref>).
Consequently we neglect the effects of Earth's ionosphere and fit the simulated TEC observations shown in Figure <ref>.
Since the line-of-sight between the spacecraft and Earth is assumed to be parallel to the plane of the centrifugal equator, each radio ray path has a constant value of r.
The model TEC along the ray path with closest approach distance r is derived in Appendix <ref>. It satisfies:
TEC(r) =
√(π) N_1 W_1 e^- r^2/H_1^2 +
√(π) N_2 W_2 e^- r^2/H_2^2 +
√(π)/2[
N_3 W_3( 1 + ( 6.1 R_J - C_3) /W_3) +
N_4 W_4( 1 - ( 6.1 R_J - C_4) /W_4)
]
e^- r^2/H_3^2
Due to their different scale heights, the three regions of the IPT each make distinct and potentially separable contributions to the overall TEC.
We therefore fit the simulated TEC observations to a function of the form:
TEC(r) = A e^- r^2/B^2 +
Ce^-r^2/D^2 + Ee^-r^2/F^2
The parameters A, C, and E corresponds to the peak or equatorial TEC for each of the regions and the parameters B, D, and F correspond to the scale heights of the cold torus (H_1), ribbon (H_2), and warm torus (H_3), respectively.
We fit this equation to the simulated TEC observations shown in Figure <ref> using a Markov Chain Monte Carlo (MCMC) method.
This is implemented using the Python module emcee, which is an open source MCMC ensemble sampler developed by <cit.>.
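A minimal sketch of such a fit follows; the "true" parameters and uncertainties are illustrative, not the Table <ref> values.

```python
# Fit the triple-Gaussian TEC(r) model with emcee, on synthetic data.
import numpy as np
import emcee

def model(r, theta):
    A, B, C, D, E, F = theta
    return (A * np.exp(-(r / B) ** 2) + C * np.exp(-(r / D) ** 2)
            + E * np.exp(-(r / F) ** 2))

def log_prob(theta, r, tec, sigma):
    if np.any(theta <= 0.0):              # flat prior: positive parameters
        return -np.inf
    return -0.5 * np.sum(((tec - model(r, theta)) / sigma) ** 2)

truth = np.array([4.0, 0.13, 14.0, 0.53, 40.0, 0.98])   # illustrative only
r = np.linspace(-3.0, 3.0, 300)                         # [R_J]
sigma = 0.5                                             # [TECU]
tec = model(r, truth) + sigma * np.random.randn(r.size)

nwalkers, ndim = 32, truth.size
p0 = truth * (1.0 + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(r, tec, sigma))
sampler.run_mcmc(p0, 5000)
samples = sampler.get_chain(discard=1000, flat=True)
print(np.median(samples, axis=0))                       # posterior medians
```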
Figure <ref> shows the simulated measurements and fitted TEC, as well as the residuals between the simulated measurements and the fit.
Table <ref> shows the best fit parameters for each region compared to the input values.
Two of the three fitted peak electron content values are within 1 σ of their input values, and the other is within 2 σ, which demonstrates that they are reliable. All three fitted scale heights are within 10% and 1 σ of their input values.
§ DISCUSSION
The preceding sections showed how radio signals from the Juno spacecraft could be used to measure TEC profiles for the IPT, that uncertainties on measured TEC are relatively small, and that a fit to the measured TEC can determine the scale height and peak TEC for each of the three regions of IPT (cold torus, ribbon, and warm torus).
Ion temperatures can be derived from scale heights via Equation <ref>.
We assume that S^+ dominates in the cold torus, O^+ dominates in the ribbon, and S^2+ and O^+ dominate in the warm torus, such that the mean molecular mass is 24 daltons <cit.>.
We use the best fit parameters and uncertainties reported in Table <ref> to find
ion temperatures of 0.957^+0.173_-0.173 eV for the cold torus, 16.7^+1.58_-2.47 eV for the ribbon and 56.9^+6.05_-5.51 eV for the warm torus.
For reference, the ion temperatures reported by <cit.> and discussed in Section <ref> are
1–4 eV for the cold torus, 10–30 eV for the ribbon, and ≈60 eV for the warm torus.
Hence the fitted ion temperatures are reasonable for the cold torus, ribbon, and the warm torus.
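These conversions can be reproduced assuming the standard centrifugal scale-height relation, H = √(2 k T_i / (3 m̄ Ω^2)); both this specific form of the equation referenced above and the H values below are assumptions of the sketch.

```python
# Ion temperature implied by a scale height H, for mean ion mass 24 daltons
# and Jupiter's 9.925 hr rotation. Assumes H = sqrt(2 k T_i / (3 m Omega^2)).
import numpy as np
from scipy.constants import atomic_mass, eV

R_J = 7.1492e7                            # [m] (assumed)
Omega = 2.0 * np.pi / (9.925 * 3600.0)    # rotation rate [s^-1]
m_ion = 24.0 * atomic_mass                # mean ion mass [kg]

def T_ion_eV(H_in_RJ):
    """Ion temperature [eV] for scale height H given in R_J."""
    H = H_in_RJ * R_J
    return 1.5 * m_ion * Omega**2 * H**2 / eV

for H in (0.13, 0.53, 0.98):              # illustrative H_1, H_2, H_3 [R_J]
    print(H, T_ion_eV(H))                 # ~1, ~17, ~57 eV, respectively
```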
The peak or equatorial TEC of each region can be determined by fitting Equation <ref> to the observed TEC(r).
For the cold torus and ribbon, peak TEC equals √(π) N_i W_i, where N_i is the maximum density in region i and W_i is the width of region i.
For the warm torus and its extension beyond 6.1 R_J, peak TEC is more complicated (Equation <ref>). Nevertheless, it can be considered as the product of a maximum density and an effective width.
If the width of a region is known from independent measurements or models of the IPT, then the maximum density for that region can be found from the observed peak TEC.
As noted by <cit.>, the electron density in a region cannot be accurately determined from an observed peak TEC without independent knowledge of the width and central peak location of that region.
The analysis described in this article assumes that the line of sight from Juno to Earth is parallel to the plane of the centrifugal equator.
If that is not the case, then the measured TEC values would correspond to cuts through the torus at the angle between the line of sight and the centrifugal equator. This is equivalent to the IPT being tilted.
A tilted torus can be accounted for by a suitable adjustment of the assumed Gaussian profile, as in the model by <cit.>.
§ CONCLUSIONS
When the line of sight between Juno and Earth passes through the Io plasma torus, which occurs once per orbit,
radio signals from the Juno spacecraft can be used to measure total electron content profiles for the Io plasma torus.
We develop a model of densities in the Io plasma torus using values measured by the Voyager 1 spacecraft and reported in
<cit.>, then use it to simulate a dual-frequency
radio occultation performed using the telecommunication subsystem on the Juno spacecraft.
Using the modeled densities we calculate the total electron content by integrating along a line of sight parallel to the torus equator.
From the total electron content we are able to derive the frequency shift that would be measured by the Deep Space Network receiving stations.
Noise equal to the Allan deviation for an integration time on the order of 10 s is then introduced to determine a simulated profile of the measured total electron content.
Uncertainties on the measured total electron content are relatively small (∼10%).
A Markov chain Monte Carlo fit to the measured total electron content can determine the scale height and peak total electron content for each of the three regions of Io plasma torus (cold torus, ribbon, and warm torus). The ion temperature in each region can be determined from the scale height assuming independent knowledge of the ion composition.
The peak total electron content in each region is proportional to the product of the peak local electron density and the region's width in the equatorial plane. However, without independent knowledge of one of these two factors, the other cannot be determined directly.
Numerical modeling of the Io plasma torus may be useful in narrowing the range of possible peak local electron densities and widths.
To date, only two radio occultations of the Io plasma torus have been performed, Voyager 1 <cit.> and Ulysses <cit.>.
Juno has the potential to perform over 20 occultations. This series of occultations would provide a rich picture of the structure of the Io plasma torus and its temporal and spatial variability.
The Juno mission presents an unparalleled opportunity to study the flow of material from the volcanoes of Io to the auroral regions of Jupiter with simultaneous observations of all stages in this system.
Ground-based infrared observations of Io can be used to monitor the moon's volcanic activity <cit.>.
Ground-based sodium cloud observations can be used to monitor the transport of material from Io's atmosphere into the neutral clouds, since sodium can be considered as a tracer for sulfur and oxygen <cit.>.
Radio occultations can be used to monitor the ionization of neutral species and the distribution of plasma within the Io plasma torus <cit.>.
Juno's suite of plasma instruments will monitor plasma densities in the acceleration regions near Jupiter's poles <cit.>.
Together, the measurements already planned by the Juno mission, the potential radio occultations of the Io plasma torus, and Earth-based observations of the Jupiter system will reveal the complete life-cycle of plasma in Jupiter's magnetosphere.
§ TOTAL ELECTRON CONTENT
Since the line-of-sight between the spacecraft and Earth is assumed to be parallel to the plane of the centrifugal equator, each radio ray path has a constant value of r.
The total electron content along the ray path with closest approach distance r, TEC(r), satisfies:
TEC(r) =
N_1 W_1√(π)/2( erf [ C_1/W_1]
+ erf [ ( 6.1 R_J - C_1) /W_1] )
e^- r^2/H_1^2 +
N_2 W_2√(π)/2( erf [ C_2/W_2]
+ erf [ ( 6.1 R_J - C_2) /W_2] )
e^- r^2/H_2^2 +
N_3 W_3√(π)/2( erf [ C_3/W_3]
+ erf [ ( 6.1 R_J - C_3) /W_3] )
e^- r^2/H_3^2 +
N_4 W_4√(π)/2( erf [ C_4/W_4]
- erf [ ( 6.1 R_J - C_4) /W_4] )
e^- r^2/H_3^2
where erf(x) is the error function.
For all plausible conditions, C_1/W_1, C_2/W_2, C_3/W_3, and C_4/W_4 are much greater than one.
Since erf(x ≫ 1) = 1, Equation <ref> becomes:
TEC(r) =
N_1 W_1√(π)/2( 1
+ erf [ ( 6.1 R_J - C_1) /W_1] )
e^- r^2/H_1^2 +
N_2 W_2√(π)/2( 1
+ erf [ ( 6.1 R_J - C_2) /W_2] )
e^- r^2/H_2^2 +
N_3 W_3√(π)/2( 1
+ erf [ ( 6.1 R_J - C_3) /W_3] )
e^- r^2/H_3^2 +
N_4 W_4√(π)/2( 1
- erf [ ( 6.1 R_J - C_4) /W_4] )
e^- r^2/H_3^2
Furthermore, (6.1 R_J - C_1)/W_1 and (6.1 R_J - C_2)/W_2 can also be expected to be greater than one, which gives:
TEC(r) =
√(π) N_1 W_1 e^- r^2/H_1^2 +
√(π) N_2 W_2 e^- r^2/H_2^2 +
N_3 W_3√(π)/2( 1
+ erf [ ( 6.1 R_J - C_3) /W_3] )
e^- r^2/H_3^2 +
N_4 W_4√(π)/2( 1
- erf [ ( 6.1 R_J - C_4) /W_4] )
e^- r^2/H_3^2
In our model, (6.1 R_J - C_3)/W_3 = 0.66 and
(6.1 R_J - C_4)/W_4 = 0.30.
The error function erf(x) increases from 0 at x=0 to 1 at x ≫ 1.
It can be approximated as erf(x) = x for x < 1 and
erf(x) = 1 for x > 1.
The error in this approximation is less than 0.16 for all x, with the maximum of ≈0.157 occurring at x=1.
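This bound is easy to verify numerically:

```python
# Maximum error of the piecewise approximation erf(x) ~ x (x<1), 1 (x>1).
import numpy as np
from scipy.special import erf

x = np.linspace(0.0, 3.0, 3001)
approx = np.where(x < 1.0, x, 1.0)
print(np.abs(erf(x) - approx).max())      # ~0.157, attained at x = 1
```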
We therefore assume that (6.1 R_J - C_3)/W_3 < 1 and
(6.1 R_J - C_4)/W_4 < 1, which leads to:
TEC(r) =
√(π) N_1 W_1 e^- r^2/H_1^2 +
√(π) N_2 W_2 e^- r^2/H_2^2 +
√(π)/2[
N_3 W_3( 1 + ( 6.1 R_J - C_3) /W_3) +
N_4 W_4( 1 - ( 6.1 R_J - C_4) /W_4)
]
e^- r^2/H_3^2
Expanding the term in square brackets further does not provide additional insight.
PHP was supported, in part, by the Massachusetts Space Grant Consortium (MASGC).
PHP would also like to thank Mark Veyette and Paul Dalba for useful discussions.
We would like to thank the two anonymous reviewers for their suggestions.
No data were used in this article.
[Abramowitz and Stegun(1972)]1972Abramowitz
Abramowitz, M., and I. A. Stegun (1972), Handbook of Mathematical
Functions, 295-300 pp., Dover.
[Asmar et al.(2005)Asmar, Armstrong, Iess,
and Tortora]2005Asmar
Asmar, S. W., J. W. Armstrong, L. Iess, and P. Tortora (2005),
Spacecraft Doppler tracking: Noise budget and accuracy achievable in
precision radio science observations, Radio Science, 40,
RS2001, 10.1029/2004RS003101.
[Bagenal(1994)]1994Bagenal
Bagenal, F. (1994), Empirical model of the Io plasma torus: Voyager
measurements, J. Geophys. Res., 99, 11,043–11,062,
10.1029/93JA02908.
[Bagenal and Delamere(2011)]2011Bagenal
Bagenal, F., and P. A. Delamere (2011), Flow of mass and energy in the
magnetospheres of Jupiter and Saturn, J. Geophys. Res.,
116, A05209, 10.1029/2010JA016294.
[Bagenal and Sullivan(1981)]1981Bagenal
Bagenal, F., and J. D. Sullivan (1981), Direct plasma measurements in the
Io torus and inner magnetosphere of Jupiter, J. Geophys. Res.,
86, 8447–8466, 10.1029/JA086iA10p08447.
[Bagenal et al.(1997)Bagenal, Crary, Stewart,
Schneider, Gurnett, Kurth, Frank, and Paterson]1997Bagenal
Bagenal, F., F. J. Crary, A. I. F. Stewart, N. M. Schneider, D. A.
Gurnett, W. S. Kurth, L. A. Frank, and W. R. Paterson (1997),
Galileo measurements of plasma density in the Io torus, Geophys.
Res. Lett., 24, 2119, 10.1029/97GL01254.
[Bagenal et al.(2014)Bagenal, Adriani,
Allegrini, Bolton, Bonfond, Bunce, Connerney, Cowley, Ebert,
Gladstone, Hansen, Kurth, Levin, Mauk, McComas, Paranicas,
Santos-Costa, Thorne, Valek, Waite, and Zarka]2014Bagenal
Bagenal, F., A. Adriani, F. Allegrini, S. J. Bolton, B. Bonfond,
E. J. Bunce, J. E. P. Connerney, S. W. H. Cowley, R. W. Ebert, G. R.
Gladstone, C. J. Hansen, W. S. Kurth, S. M. Levin, B. H. Mauk,
D. J. McComas, C. P. Paranicas, D. Santos-Costa, R. M. Thorne,
P. Valek, J. H. Waite, and P. Zarka (2014), Magnetospheric Science
Objectives of the Juno Mission, Space Sci. Rev.,
10.1007/s11214-014-0036-8.
[Bagiya et al.(2009)Bagiya, Joshi, Iyer,
Aggarwal, Ravindran, and Pathan]2009Bagiya
Bagiya, M. S., H. P. Joshi, K. N. Iyer, M. Aggarwal, S. Ravindran,
and B. M. Pathan (2009), TEC variations during low solar activity period
(2005-2007) near the Equatorial Ionospheric Anomaly Crest region in India,
Annales Geophysicae, 27, 1047–1057,
10.5194/angeo-27-1047-2009.
[Bird et al.(1992)Bird, Asmar, Brenkle,
Edenhofer, Funke, Paetzold, and Volland]1992Bird
Bird, M. K., S. W. Asmar, J. P. Brenkle, P. Edenhofer, O. Funke,
M. Paetzold, and H. Volland (1992), Ulysses radio occultation
observations of the Io plasma torus during the Jupiter encounter,
Science, 257, 1531–1535,
10.1126/science.257.5076.1531.
[Bird et al.(1993)Bird, Asmar, Edenhofer,
Funke, Pätzold, and Volland]1993Bird
Bird, M. K., S. W. Asmar, P. Edenhofer, O. Funke, M. Pätzold, and
H. Volland (1993), The structure of Jupiter's Io plasma torus inferred
from Ulysses radio occultation observations, Planet. Space Sci.,
41, 999–1010, 10.1016/0032-0633(93)90104-A.
[Bolton et al.(2015)Bolton, Bagenal, Blanc,
Cassidy, Chané, Jackman, Jia, Kotova, Krupp, Milillo,
Plainaki, Smith, and Waite]2015Bolton
Bolton, S. J., F. Bagenal, M. Blanc, T. Cassidy, E. Chané,
C. Jackman, X. Jia, A. Kotova, N. Krupp, A. Milillo, C. Plainaki,
H. T. Smith, and H. Waite (2015), Jupiter's Magnetosphere: Plasma
Sources and Transport, Space Sci. Rev., 192, 209–236,
10.1007/s11214-015-0184-5.
[Bonfond et al.(2012)Bonfond, Grodent,
Gérard, Stallard, Clarke, Yoneda, Radioti, and
Gustin]2012Bonfond
Bonfond, B., D. Grodent, J.-C. Gérard, T. Stallard, J. T. Clarke,
M. Yoneda, A. Radioti, and J. Gustin (2012), Auroral evidence of Io's
control over the magnetosphere of Jupiter, Geophys. Res. Lett,
39, L01105, 10.1029/2011GL050253.
[Brown(1995)]1995Brown
Brown, M. E. (1995), Periodicities in the Io plasma torus, J.
Geophys. Res., 100, 21,683–21,696, 10.1029/95JA01988.
[Brown(1976)]1976Brown
Brown, R. A. (1976), A model of Jupiter's sulfur nebula, Astrophys.
J. Lett, 206, L179–L183, 10.1086/182162.
[Campbell and Synnott(1985)]1985Campbell
Campbell, J. K., and S. P. Synnott (1985), Gravity field of the Jovian
system from Pioneer and Voyager tracking data, Astron. J.,
90, 364–372, 10.1086/113741.
[Carlson and Judge(1975)]1975Carlson
Carlson, R. W., and D. L. Judge (1975), Pioneer 10 ultraviolet photometer
observations of the Jovian hydrogen torus — The angular distribution,
Icarus, 24, 395–399, 10.1016/0019-1035(75)90055-X.
[Connerney et al.(2016)Connerney, Bolton, and
Levin]2016Connerney
Connerney, J., S. Bolton, and S. Levin (2016), The Juno New Frontier
Mission: Inside and Out, in EGU General Assembly Conference
Abstracts, EGU General Assembly Conference Abstracts, vol. 18, p.
18023.
[de Kleer et al.(2014)de Kleer, de Pater,
Davies, and Ádámkovics]2014deKleer
de Kleer, K., I. de Pater, A. G. Davies, and M. Ádámkovics
(2014), Near-infrared monitoring of Io and detection of a violent outburst
on 29 August 2013, Icarus, 242, 352–364,
10.1016/j.icarus.2014.06.006.
[Dessler(2002)]2002Dessler
Dessler, A. J. (2002), Physics of the Jovian Magnetosphere, pp.
438–441, Cambridge, UK: Cambridge University Press.
[Divine and Garrett(1983)]1983Divine
Divine, N., and H. B. Garrett (1983), Charged particle distributions in
Jupiter's magnetosphere, J. Geophys. Res., 88, 6889–6903,
10.1029/JA088iA09p06889.
[Eshleman et al.(1979a)Eshleman,
Tyler, Wood, Lindal, Anderson, Levy, and Croft]1979Eshlemanb
Eshleman, V. R., G. L. Tyler, G. E. Wood, G. F. Lindal, J. D.
Anderson, G. S. Levy, and T. A. Croft (1979a), Radio
science with Voyager at Jupiter — Initial Voyager 2 results and a Voyager 1
measure of the Io torus, Science, 206, 959–962,
10.1126/science.206.4421.959.
[Eshleman et al.(1979b)Eshleman,
Tyler, Wood, Lindal, Anderson, Levy, and Croft]1979Eshlemana
Eshleman, V. R., G. L. Tyler, G. E. Wood, G. F. Lindal, J. D.
Anderson, G. S. Levy, and T. A. Croft (1979b), Radio
science with Voyager 1 at Jupiter - Preliminary profiles of the atmosphere
and ionosphere, Science, 204, 976–978,
10.1126/science.204.4396.976.
[Foreman-Mackey et al.(2013)Foreman-Mackey,
Hogg, Lang, and Goodman]2013Foreman
Foreman-Mackey, D., D. W. Hogg, D. Lang, and J. Goodman (2013), emcee:
The MCMC Hammer, Publ. A. S. P., 125, 306–312,
10.1086/670067.
[Hill et al.(1974)Hill, Dessler, and
Michel]1974Hill
Hill, T. W., A. J. Dessler, and F. C. Michel (1974), Configuration of
the Jovian magnetosphere, Geophys. Res. Lett, 1, 3–6,
10.1029/GL001i001p00003.
[Hill et al.(1981)Hill, Dessler, and
Maher]1981Hill
Hill, T. W., A. J. Dessler, and L. J. Maher (1981), Corotating
magnetospheric convection, J. Geophys. Res., 86,
9020–9028, 10.1029/JA086iA11p09020.
[Hinson et al.(1998)Hinson, Kliore, Flasar,
Twicken, Schinder, and Herrera]1998Hinson
Hinson, D. P., A. J. Kliore, F. M. Flasar, J. D. Twicken, P. J.
Schinder, and R. G. Herrera (1998), Galileo radio occultation
measurements of Io's ionosphere and plasma wake, J. Geophys. Res.,
103, 29,343–29,358, 10.1029/98JA02659.
[Howard et al.(1992)Howard, Eshleman, Hinson,
Kliore, Lindal, Woo, Bird, Volland, Edenhoffer, and
Paetzold]1992Howard
Howard, H. T., V. R. Eshleman, D. P. Hinson, A. J. Kliore, G. F.
Lindal, R. Woo, M. K. Bird, H. Volland, P. Edenhoffer, and
M. Paetzold (1992), Galileo radio science investigations, Space
Sci. Rev., 60, 565–590, 10.1007/BF00216868.
[Judge and Carlson(1974)]1974Judge
Judge, D. L., and R. W. Carlson (1974), Pioneer 10 Observations of the
Ultraviolet Glow in the Vicinity of Jupiter, Science, 183,
317–318, 10.1126/science.183.4122.317.
[Khurana et al.(2004)Khurana, Kivelson,
Vasyliunas, Krupp, Woch, Lagg, Mauk, and Kurth]2004Khurana
Khurana, K. K., M. G. Kivelson, V. M. Vasyliunas, N. Krupp, J. Woch,
A. Lagg, B. H. Mauk, and W. S. Kurth (2004), The configuration
of Jupiter's magnetosphere, pp. 593–616, Cambridge, UK: Cambridge
University Press.
[Kliore et al.(2004)Kliore, Anderson,
Armstrong, Asmar, Hamilton, Rappaport, Wahlquist, Ambrosini,
Flasar, French, Iess, Marouf, and Nagy]2004kliore
Kliore, A. J., J. D. Anderson, J. W. Armstrong, S. W. Asmar, C. L.
Hamilton, N. J. Rappaport, H. D. Wahlquist, R. Ambrosini, F. M.
Flasar, R. G. French, L. Iess, E. A. Marouf, and A. F. Nagy (2004),
Cassini Radio Science, Geophys. Res. Lett, 115, 1–70,
10.1007/s11214-004-1436-y.
[Kupo et al.(1976)Kupo, Mekler, and
Eviatar]1976Kupo
Kupo, I., Y. Mekler, and A. Eviatar (1976), Detection of ionized sulfur
in the Jovian magnetosphere, Astrophys. J. Lett, 205,
L51–L53, 10.1086/182088.
[Levy et al.(1981)Levy, Green, Royden,
Wood, and Tyler]1981Levy
Levy, G. S., D. W. Green, H. N. Royden, G. E. Wood, and G. L. Tyler
(1981), Dispersive Doppler measurement of the electron content of the torus
of Io, J. Geophys. Res., 86, 8467–8470,
10.1029/JA086iA10p08467.
[Maruyama et al.(2004)Maruyama, Ma, and
Nakamura]2004Maruyama
Maruyama, T., G. Ma, and M. Nakamura (2004), Signature of TEC storm on 6
November 2001 derived from dense GPS receiver network and ionosonde chain
over Japan, J. Geophys. Res., 109(A18), A10302,
10.1029/2004JA010451.
[Mendillo et al.(2004a)Mendillo,
Wilson, Spencer, and Stansberry]2004Mendillob
Mendillo, M., J. Wilson, J. Spencer, and J. Stansberry
(2004a), Io's volcanic control of Jupiter's extended neutral
clouds, Icarus, 170, 430–442,
10.1016/j.icarus.2004.03.009.
[Mendillo et al.(2004b)Mendillo,
Pi, Smith, Martinis, Wilson, and Hinson]2004Mendilloa
Mendillo, M., X. Pi, S. Smith, C. Martinis, J. Wilson, and
D. Hinson (2004b), Ionospheric effects upon a satellite
navigation system at Mars, Radio Science, 39, RS2028,
10.1029/2003RS002933.
[Mukai et al.(2012)Mukai, Hansen, Mittskus,
Taylor, and Danos]2012Mukai
Mukai, R., D. Hansen, A. Mittskus, J. Taylor, and M. Danos (2012),
Juno Telecommunications, in Design and Performance Summary
Series, 16, http://descanso.jpl.nasa.gov/DPSummary/summary.html.
[Nozawa et al.(2004)Nozawa, Misawa,
Takahashi, Morioka, Okano, and Sood]2004Nozawa
Nozawa, H., H. Misawa, S. Takahashi, A. Morioka, S. Okano, and
R. Sood (2004), Long-term variability of [SII] emissions from the Io
plasma torus between 1997 and 2000, J. Geophys. Res., 109,
A07209, 10.1029/2003JA010241.
[Nozawa et al.(2005)Nozawa, Misawa,
Takahashi, Morioka, Okano, and Sood]2005Nozawa
Nozawa, H., H. Misawa, S. Takahashi, A. Morioka, S. Okano, and
R. Sood (2005), Relationship between the Jovian magnetospheric plasma
density and Io torus emission, Geophys. Res. Lett., 32,
L11101, 10.1029/2005GL022759.
[Nozawa et al.(2006)Nozawa, Misawa, Kagitani,
Tsuchiya, Takahashi, Morioka, Kimura, Okano, Yamamoto, and
Sood]2006Nozawa
Nozawa, H., H. Misawa, M. Kagitani, F. Tsuchiya, S. Takahashi,
A. Morioka, T. Kimura, S. Okano, H. Yamamoto, and R. Sood (2006),
Implication for the solar wind effect on the Io plasma torus,
Geophys. Res. Lett., 33, L16103,
10.1029/2005GL025623.
[Payan et al.(2014)Payan, Rajendar, Paty, and
Crary]2014Payan
Payan, A. P., A. Rajendar, C. S. Paty, and F. Crary (2014), Effect of
plasma torus density variations on the morphology and brightness of the Io
footprint, J. Geophys. Res., 119, 3641–3649,
10.1002/2013JA019299.
[Pilcher and Morgan(1979)]1979Pilcher
Pilcher, C. B., and J. S. Morgan (1979), Detection of singly ionized
oxygen around Jupiter, Science, 205, 297,
10.1126/science.205.4403.297.
[Quémerais et al.(2006)Quémerais,
Bertaux, Korablev, Dimarellis, Cot, Sandel, and
Fussen]2006Quemerais
Quémerais, E., J.-L. Bertaux, O. Korablev, E. Dimarellis, C. Cot,
B. R. Sandel, and D. Fussen (2006), Stellar occultations observed by
SPICAM on Mars Express, J. Geophys. Res., 111, E09S04,
10.1029/2005JE002604.
[Schinder et al.(2015)Schinder, Flasar,
Marouf, French, Anabtawi, Barbinis, and Kliore]2015Schinder
Schinder, P. J., F. M. Flasar, E. A. Marouf, R. G. French,
A. Anabtawi, E. Barbinis, and A. J. Kliore (2015), A numerical
technique for two-way radio occultations by oblate axisymmetric atmospheres
with zonal winds, Radio Science, 50, 712–727,
10.1002/2015RS005690.
[Schneider and Trauger(1995)]1995Schneider
Schneider, N. M., and J. T. Trauger (1995), The Structure of the Io
Torus, Astrophys. J., 450, 450, 10.1086/176155.
[Smyth(1992)]1992Smyth
Smyth, W. H. (1992), Neutral cloud distribution in the Jovian system,
Advances in Space Research, 12, 337–346,
10.1016/0273-1177(92)90408-P.
[Smyth and Combi(1988)]1988Smyth
Smyth, W. H., and M. R. Combi (1988), A general model for Io's neutral gas
clouds. II - Application to the sodium cloud, Astrophys. J.,
328, 888–918, 10.1086/166346.
[Steffl et al.(2004a)Steffl,
Stewart, and Bagenal]2004aSteffl
Steffl, A. J., A. I. F. Stewart, and F. Bagenal (2004a),
Cassini UVIS observations of the Io plasma torus. I. Initial results,
Icarus, 172, 78–90, 10.1016/j.icarus.2003.12.027.
[Steffl et al.(2004b)Steffl,
Bagenal, and Stewart]2004bSteffl
Steffl, A. J., F. Bagenal, and A. I. F. Stewart (2004b),
Cassini UVIS observations of the Io plasma torus. II. Radial variations,
Icarus, 172, 91–103, 10.1016/j.icarus.2004.04.016.
[Thomas(1992)]1992Thomas
Thomas, N. (1992), Optical observations of Io's neutral clouds and plasma
torus, Surveys in Geophysics, 13, 91–164,
10.1007/BF01903525.
[Thomas et al.(2004)Thomas, Bagenal, Hill,
and Wilson]2004Thomas
Thomas, N., F. Bagenal, T. W. Hill, and J. K. Wilson (2004), The Io
neutral clouds and plasma torus, in Jupiter: The Planet, Satellites
and Magnetosphere, edited by F. Bagenal, T. E. Dowling, and W. B.
McKinnon, pp. 561–591, Cambridge, UK: Cambridge University Press.
[Thornton and Border(2000)]2000Thornton
Thornton, C., and J. Border (2000), Radiometric Tracking Techniques for
Deep-Space Navigation, in DEEP-SPACE COMMUNICATIONS AND NAVIGATION
SERIES, vol. 1, edited by J. Yuen, Jet Propulsion Laboratory, California
Institute of Technology. Available at
http://descanso.jpl.nasa.gov/monograph/mono.html.
[Tommei et al.(2015)Tommei, Dimare, Serra,
and Milani]2015Tommei
Tommei, G., L. Dimare, D. Serra, and A. Milani (2015), On the Juno
radio science experiment: models, algorithms and sensitivity analysis,
Mon. Not. R. Astron. Soc., 446, 3089–3099,
10.1093/mnras/stu2328.
[Wilson et al.(2002)Wilson, Mendillo,
Baumgardner, Schneider, Trauger, and Flynn]2002Wilson
Wilson, J. K., M. Mendillo, J. Baumgardner, N. M. Schneider, J. T.
Trauger, and B. Flynn (2002), The Dual Sources of Io's Sodium Clouds,
Icarus, 157, 476–489, 10.1006/icar.2002.6821.
[Withers et al.(2014)Withers, Moore, Cahoy,
and Beerer]2014Withers
Withers, P., L. Moore, K. Cahoy, and I. Beerer (2014), How to process
radio occultation data: 1. From time series of frequency residuals to
vertical profiles of atmospheric and ionospheric properties, Planet.
Space Sci., 101, 77–88, 10.1016/j.pss.2014.06.011.
[Woo and Armstrong(1979)]1979Woo
Woo, R., and J. W. Armstrong (1979), Spacecraft radio scattering
observations of the power spectrum of electron density fluctuations in the
solar wind, J. Geophys. Res., 84, 7288–7296,
10.1029/JA084iA12p07288.
[Yelle and Miller(2004)]2004Yelle
Yelle, R. V., and S. Miller (2004), Jupiter's thermosphere and
ionosphere, in Jupiter. The Planet, Satellites and Magnetosphere,
edited by F. Bagenal, T. E. Dowling, and W. B. McKinnon, pp. 185–218,
Cambridge, UK: Cambridge University Press.
[Yoneda et al.(2009)Yoneda, Kagitani, and
Okano]2009Yoneda
Yoneda, M., M. Kagitani, and S. Okano (2009), Short-term variability of
Jupiter's extended sodium nebula, Icarus, 204, 589–596,
10.1016/j.icarus.2009.07.023.
[Yoneda et al.(2010)Yoneda, Nozawa, Misawa,
Kagitani, and Okano]2010Yoneda
Yoneda, M., H. Nozawa, H. Misawa, M. Kagitani, and S. Okano (2010),
Jupiter's magnetospheric change by Io's volcanoes, Geophys. Res.
Lett., 37, L11202, 10.1029/2010GL043656.
[Yoneda et al.(2013)Yoneda, Tsuchiya, Misawa,
Bonfond, Tao, Kagitani, and Okano]2013Yoneda
Yoneda, M., F. Tsuchiya, H. Misawa, B. Bonfond, C. Tao,
M. Kagitani, and S. Okano (2013), Io's volcanism controls Jupiter's
radio emissions, Geophys. Res. Lett., 40, 671–675,
10.1002/grl.50095.
Department of Physics, Jackson State University, Jackson, MS
39217 USA
We present a unified approach to describe spasing in plasmonic systems modeled by quantum emitters interacting with resonant plasmon mode. We show that spaser threshold implies detailed energy transfer balance between the gain and plasmon mode and derive explicit spaser condition valid for arbitrary plasmonic systems. By defining carefully the plasmon mode volume relative to the gain region, we show that the spaser condition represents, in fact, the standard laser threshold condition extended to plasmonic systems with dispersive dielectric function. For extended gain region, the saturated mode volume depends solely on the system parameters that determine the lower bound of threshold population inversion.
Mode volume, energy transfer, and spaser threshold in plasmonic systems with gain
Tigran V. Shahbazyan
January 27, 2017
==================================================================================
§ INTRODUCTION
The prediction of plasmonic laser (spaser) <cit.> and its experimental realization in various systems <cit.> have been among the highlights in the rapidly developing field of plasmonics during the past decade <cit.>. First observed in gold nanoparticles (NP) coated by dye-doped silics shells <cit.>, spasing action was reported in hybrid plasmonic waveguides <cit.>, semiconductor quantum dots on metal film <cit.>, plasmonic nanocavities and nanocavity arrays <cit.>, and metallic NP and nanorods <cit.>, and more recently, carbon-based structures <cit.>. Small spaser size well below the diffraction limit gives rise to wealth of applications <cit.>.
The spaser feedback mechanism is based on the near-field coupling between resonant plasmon mode and gain medium, modeled here by an ensemble of pumped two-level quantum emitters (QE) with excitation frequency tuned to the plasmon frequency. The spaser threshold condition has been suggested as <cit.>
(4πμ^2τ_2/3ħ) (N_21/V) Q ≃ 1,
where μ and τ_2 are the QE dipole matrix element and polarization relaxation time, respectively, N_21=N_2-N_1 is the ensemble population inversion (N_2 and N_1 are, respectively, the number of excited and ground-state QEs), Q is the mode quality factor, and V is the mode volume. Equation (<ref>) is similar to the standard laser condition <cit.> that determines the threshold value of N_21, but with the cavity mode quality factor and volume replaced by their plasmon counterparts in metal-dielectric system characterized by dispersive dielectric function ε(ω,r). While the plasmon quality factor Q is well-defined in terms of the metal dielectric function ε(ω)=ε'(ω)+iε”(ω), there is an active debate on mode volume definition in plasmonic systems <cit.>. Since QEs are usually distributed outside the plasmonic structure, the standard expression for cavity mode volume, ∫ dV ε(r) |E(r)|^2 /max[ε(r) |E(r)|^2], where E(r) is the mode electric field, is ill-defined for open systems <cit.>. Furthermore, defining the plasmon mode volume in terms of field intensity at a specific point <cit.> seems impractical due to very large local field variations near the metal surface caused by particulars of system geometry, for example, sharp edges or surface imperfections: strong field fluctuations would grossly underestimate the mode volume that determines spasing threshold for gain distributed in an extended region. At the same time, while spasing was theoretically studied for several specific systems <cit.>, the general spaser condition was derived, in terms of system parameters such as permittivities and optical constants, only for two-component systems <cit.> without apparent relation to the mode volume in Eq. (<ref>). Note that the actual spasing systems can be comprised of many components, so that the extension of the laser condition (<ref>) to plasmonics implies some procedure, valid for any nanoplasmonic system, to determine the plasmon mode volume.
On the other hand, the steady state spaser action implies detailed balance of energy transfer (ET) processes between the QEs and the plasmon mode (see Fig. <ref>). Whereas the energy flow between individual QEs and plasmon can go in either direction depending on the QE quantum state, the net gain-plasmon ET rate is determined by population inversion N_21 and, importantly, distribution of plasmon states in the gain region. Since individual QE-plasmon ET rates are proportional to the plasmon local density of states (LDOS), which can vary in a wide range depending on QEs' positions and system geometry <cit.>, the net ET rate is obtained by averaging the plasmon LDOS over the gain region. Therefore, the plasmon mode volume should relate, in terms of average system characteristics, the laser condition (<ref>) to the microscopic gain-plasmon ET picture. The goal of this paper is to establish such a relation.
First, we derive the general spaser condition for any multicomponent nanoplasmonic system in terms of individual ET rates between QEs, constituting the gain, and resonant plasmon mode, providing the feedback. Second, we introduce the plasmon mode volume 𝒱 associated with a region of volume V_0 by relating 𝒱 to the plasmon LDOS, averaged over that region, and establish that the spaser condition does have the general form (<ref>). We then demonstrate, analytically and numerically, that a sufficiently extended region outside the plasmonic structure can saturate the plasmon mode volume, in which case 𝒱 is independent of the plasmon field distribution and determined solely by the system parameters,
𝒱/V_0 = Q ε_d ε''(ω_pl) / |ε'(ω_pl)|,
where ε_d is the gain region dielectric constant and ω_pl is the plasmon frequency. With saturated mode volume (<ref>), the laser condition (<ref>) matches the spaser condition for two-component systems <cit.> and, in fact, defines the lower bound of threshold N_21. Finally, we demonstrate that, in realistic systems, the threshold N_21 can significantly exceed its minimal value.
§ SPASING AND GAIN-PLASMON ENERGY TRANSFER BALANCE
We consider N_0 QEs described by pumped two-level systems, located at positions r_j near a plasmonic structure, with excitation energy ħω_21=E_2-E_1, where E_1 and E_2 are, respectively, the lower and upper level energies. Within the density matrix approach, each QE is described by polarization ρ_21^(j) and occupation n_21^(j)≡ρ_22^(j)-ρ_11^(j), so that N_21= N_2-N_1=∑_jn_21^(j) is the ensemble population inversion. In the rotating wave approximation, the steady-state dynamics of QEs coupled to alternating electric field ℰ(r)e^-iω t is described by the Maxwell-Bloch equations
(ω - ω_21 +i/τ_2) ρ_21^(j) =μ/ħ n_21^(j) n_j· ℰ(r_j),
n_21^(j) -n̅_21 =- 4μτ_1/ħ Im[ρ_21^(j)* n_j· ℰ(r_j)],
where τ_2 and τ_1 are the time constants characterizing polarization and population relaxation, μ and n_j are, respectively, the QE dipole matrix element and orientation, and n̅_21 is the average population inversion per QE due to the pump. The local field ℰ(r_j) at the QE position is generated by all QEs' dipole moments p_j=μn_jρ_21^(j) and, within semiclassical approach, has the form <cit.>
ℰ(r_j) = (4πω^2/c^2) ∑_k G̅(ω;r_j,r_k)·p_k,
where G̅(ω;r,r') is the electromagnetic Green dyadic in the presence of metal nanostructure and c is the speed of light. For nanoplasmonic systems, it is convenient to adopt rescaled Green dyadic that has direct near-field limit, D̅(ω;r,r')=-(4πω^2/c^2)G̅(ω;r,r'). Upon eliminating the electric field, the system (<ref>) takes the form
Ω_21 p_j + (μ^2/ħ) n_21^(j) n_j ∑_k n_j·D̅(ω;r_j,r_k)·p_k = 0,
δ n_21^j/τ_1 - (4/ħ) Im ∑_k [p_j^*·D̅(ω;r_j,r_k)·p_k] = 0,
where we use shorthand notations Ω_21=ω-ω_21+i/τ_2 and δ n_21^j=n_21^(j)-n̅_21. The first equation in system (<ref>), being homogeneous in p_j, leads to the spaser threshold condition. Since the Green dyadic includes contributions from all electromagnetic modes, the spaser threshold in general case can only be determined numerically. However, for QEs coupled to a resonant plasmon mode, that is, for ω_21 close to the mode frequency ω_pl, the contribution from off-resonance modes is relatively small <cit.> and, as we show below, the spaser condition can be obtained explicitly for any nanoplasmonic system.
§.§ Gain coupling to a resonant plasmon mode
For QE frequencies ω_21 close to the plasmon frequency ω_pl, we can adopt the single mode approximation for the Green dyadic <cit.>
D̅(ω;r,r') = ω_pl/4 UE(r)⊗E^*(r')/ω-ω_pl+i/τ_pl,
where E(r) is the slow envelope of plasmon field satisfying the Gauss law ∇· [ε' (ω_pl,r) E(r) ]=0 and 1/τ_pl is the plasmon decay rate. In nanoplasmonic systems, the decay rate is dominated by the Ohmic losses and has the form
1/τ_pl=W/2U,
where
U
= 1/16π∫ dV |E (r) |^2∂ [ω_plε'(ω_pl,r) ]/∂ω_pl
is the mode stored energy, and
W=ω_pl/8π∫ dV |E(r) |^2ε”(ω_pl,r)
is the mode dissipated power <cit.>. The volume integration in U and W takes place, in fact, only over the metallic regions with dispersive dielectric function. For systems with a single metallic region, one obtains the standard plasmon decay rate:
1/τ_pl=ε”(ω_pl)/∂ε'(ω_pl)/∂ω_pl.
The Green dyadic (<ref>) is valid for a well-defined plasmon mode (ω_plτ_pl≫ 1) in any nanoplasmonic system, and its consistency is ensured by the optical theorem <cit.>.
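As a numerical illustration of Eq. (<ref>), the decay rate (and the associated quality factor) can be evaluated for a Drude metal. The short Python sketch below assumes representative Drude parameters for gold (ħω_p≈9 eV, ħΓ≈70 meV, background ε_∞≈9.5) and a plasmon energy of 2.35 eV; these are illustrative assumptions, not values fitted in this work.

import numpy as np

# 1/tau_pl = eps''(w_pl) / [d eps'/d w] for a single metallic region.
hbar = 6.582e-16                               # eV*s
wp, Gam, eps_inf = 9.0/hbar, 0.07/hbar, 9.5    # assumed Drude parameters

def eps(w):                                    # eps_inf - wp^2/(w^2 + i*Gam*w)
    return eps_inf - wp**2/(w**2 + 1j*Gam*w)

w_pl = 2.35/hbar                               # assumed plasmon frequency (~528 nm)
dw = 1e-6*w_pl
slope = (eps(w_pl + dw).real - eps(w_pl - dw).real)/(2*dw)

inv_tau = eps(w_pl).imag/slope
print(f"tau_pl = {1e15/inv_tau:.1f} fs,  Q = w_pl*tau_pl/2 = {w_pl/(2*inv_tau):.1f}")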
With the plasmon Green dyadic (<ref>), the system (<ref>) takes the form
Ω_21p_j + μ^2/ħω_pln_21^(j)/4 UΩ_pl n_j[n_j·E(r_j)]∑_kE^*(r_k)·p_k
=0,
δ n_21^j/τ_1
- Im [ω_pl/ħ UΩ_pl [p_j^*·E(r_j)]
∑_kE^*(r_k)·p_k ]
=0,
where Ω_pl=ω-ω_pl+i/τ_pl. Multiplying the first equation by E^*(r_j) and summing up over j, we obtain the spaser condition
Ω_21Ω_pl+ μ^2/ħω_pl/4 U∑_jn_21^(j) |n_j·E(r_j) |^2=0.
The second term in Eq. (<ref>) describes coherent coupling between the QE ensemble and plasmon mode. Below we show that spasing implies detailed ET balance between the gain and plasmon mode.
§.§ Energy transfer and spaser condition
Let us now introduce, in the standard manner, the individual QE-plasmon ET rate as <cit.>
1/τ
=-μ^2/ħIm [
n·D̅(ω_pl;r,r)·n ]
=4πμ^2/ħ|n·E(r)|^2/∫dV ε”|E|^2,
where we used Eqs. (<ref>) and (<ref>), and implied ε≡ε(ω_pl,r) under the integral. The condition (<ref>) can be recast as
(ω-ω_21 +i/τ_2 ) (ω-ω_pl+i/τ_pl )+1/τ_ gτ_pl =0,
where we introduced net gain-plasmon ET rate,
1/τ_g
=∑_jn_21^(j)/τ_j
= 4πμ^2/ħ∑_j n_21^(j) |n_j·E(r_j)|^2/∫dV ε”|E|^2,
which represents the sum of individual QE-plasmon ET rates 1/τ_j, given by Eq. (<ref>), weighed by QE occupation numbers. Since n_21^(j) is positive or negative for QE in the excited or ground state, respectively, the direction of energy flow between the QE and the plasmon mode depends on the QE quantum state. Note that the main contribution to 1/τ_g comes from the regions with large plasmon LDOS, that is, high QE-plasmon ET rates (<ref>). The imaginary part of Eq. (<ref>) yields the spaser frequency <cit.>
ω_s=ω_plτ_pl+ω_21τ_2/τ_pl+τ_2,
while its real part, with the above ω_s, leads to
1/τ_gτ_pl=1/τ_2τ_pl + (ω_pl-ω_21)^2τ_2τ_pl/(τ_pl+τ_2)^2.
In the case when the QE and plasmon spectral bands overlap well, that is, |ω_pl-ω_21|τ_pl≪ 1 or |ω_pl-ω_21|τ_2≪ 1 depending on the relative magnitude of the respective bandwidths 1/τ_2 and 1/τ_pl, the last term in Eq. (<ref>) can be disregarded, and we arrive at the spaser condition in the form 1/τ_g=1/τ_2, or
∑_jn_21^(j)/τ_j
=1/τ_2.
Equation (<ref>) implies that the spaser threshold is reached when energy-transfer balance between the gain and the plasmon mode is established.
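For a rough numerical orientation, Eqs. (<ref>) and (<ref>) can be evaluated directly; the minimal sketch below uses illustrative (assumed) lifetimes and detuning, not values derived in this work, and shows that for well-overlapping bands the detuning correction to 1/τ_g is small, so the balance condition reduces to 1/τ_g=1/τ_2.

import numpy as np

# Illustrative (assumed) parameters: plasmon lifetime, QE dephasing, detuning.
tau_pl, tau_2 = 2e-14, 1e-13          # s
hbar = 6.582e-16                      # eV*s
w_pl = 2.35/hbar                      # rad/s
w_21 = w_pl + 0.01/hbar               # 10 meV detuning (assumed)

# Spaser frequency pulled between the plasmon and QE lines:
w_s = (w_pl*tau_pl + w_21*tau_2)/(tau_pl + tau_2)
print(f"hbar*(w_s - w_pl) = {hbar*(w_s - w_pl)*1e3:.2f} meV")

# Required net gain-plasmon ET rate from the real part of the condition:
inv_tau_g = 1/tau_2 + (w_pl - w_21)**2*tau_2*tau_pl**2/(tau_pl + tau_2)**2
print(f"detuning correction: {(inv_tau_g*tau_2 - 1)*100:.1f}% above 1/tau_2")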
§.§ System geometry and QE-plasmon ET rate
Individual ET rates in the spaser condition (<ref>) can vary in a wide range depending on the QE position and system geometry. In Fig. <ref>, we show the ET rate (<ref>) for a QE located at distance d from the tip of a gold nanorod, modeled here by a prolate spheroid with semiaxes a and b (see Appendix). In all numerical calculations, we use the experimental dielectric function for gold <cit.>. To highlight the role of system geometry, the ET rate 1/τ for the nanorod is normalized by the ET rate 1/τ_sp for a sphere of radius a. The latter ET rate has the form
1/τ_sp=12μ^2/ħε”(ω_sp)a^3/(a+d)^6,
and experiences a sharp decrease for d>a. With changing nanoparticle shape, the three degenerate dipole modes of a sphere split into a longitudinal and two transverse modes. The latter move up in energy and get damped by interband transitions in gold, whose onset lies just above the plasmon energy in spherical particles, while the longitudinal mode moves down in energy, away from the transitions onset, thereby gaining in oscillator strength <cit.>. This sharpening of the plasmon resonance, together with the condensation of plasmon states near the tips (lightning rod effect), results in up to a 100-fold rate increase with reducing b/a ratio, as shown in Fig. <ref>, indicating that spasing is dominated by QEs located in the large plasmon LDOS regions. Large variations of the 1/τ magnitude imply that the plasmon mode volume, which characterizes the spatial extent of the gain region with sufficiently strong QE-plasmon coupling, is determined by the average plasmon LDOS in that region, as we show in the next section.
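For orientation on absolute magnitudes, the sphere rate (<ref>) can be evaluated directly; the sketch below (Gaussian units; the dipole moment and ε”(ω_sp) are assumed for illustration) makes the steep distance falloff explicit.

import numpy as np

# 1/tau_sp = 12*mu^2/(hbar*eps'') * a^3/(a+d)^6  (Gaussian units)
hbar = 1.055e-27                # erg*s
mu = 1.0e-17                    # QE dipole moment, esu*cm (~10 D, assumed)
eps2 = 2.0                      # eps''(w_sp) for gold (assumed)
a = 10e-7                       # sphere radius of 10 nm, in cm

for d_nm in (1, 2, 5, 10, 20):
    d = d_nm*1e-7
    rate = 12*mu**2/(hbar*eps2)*a**3/(a + d)**6
    print(f"d = {d_nm:2d} nm:  1/tau_sp = {rate:.2e} s^-1")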
§ PLASMON MODE VOLUME AND SPASER THRESHOLD
§.§ General spaser condition
The form (<ref>) of the spaser condition reveals the microscopic origin of spaser action as the result of cooperative ET between the gain and the resonant plasmon mode, with each QE contribution depending on its position and quantum state. Below we assume that the QEs are distributed within some region of volume V_0 and that the population inversion distribution follows, on average, that of the QEs. After averaging over the QEs' dipole orientations, the gain-plasmon ET rate (<ref>) takes the form
1/τ_g= 4πμ^2/3ħ∫ dV_0 n_21 (r)|E(r) |^2/∫ dV ε”(ω_pl,r)| E(r)|^2,
where n_21 (r) is population inversion density, yielding the spaser threshold condition
4πμ^2τ_2/3ħ∫ dV_0 n_21 (r)|E(r) |^2/∫ dV ε”(ω_pl,r)| E(r)|^2 =1,
which is valid for any multicomponent system supporting a well-defined surface plasmon. In the case of uniform gain distribution, n_21=N_21/V_0, and a single metallic component with volume V_m [e.g., a metal particle with dye-doped dielectric shell (see Fig. <ref>)], the threshold condition (<ref>) takes the form
4πμ^2τ_2/3ħ n_21/ε”(ω_pl) ∫ dV_0 |E|^2/∫ dV_m |E|^2 =1.
The threshold value of n_21 is determined by the ratio of the integrated plasmon field intensities in the gain and metal regions. Evidently, the spaser threshold does depend on the gain region size and shape, which prompts us to revisit the mode volume definition for plasmonic systems in order to ensure its consistency with the general laser condition (<ref>).
§.§ Plasmon LDOS and associated mode volume
Here, we show that the mode volume in plasmonic systems can be accurately defined in terms of plasmon LDOS. The LDOS of a single plasmon mode is related to the plasmon Green dyadic (<ref>) as ρ (ω,r) =-(2π^2ω_pl)^-1 Im Tr D̅(ω;r,r), and has the Lorentzian form <cit.>,
ρ (ω,r)
=τ_pl/8π^2 U |E(r) |^2/(ω-ω_pl)^2τ_pl^2+1,
where U is given by Eq. (<ref>). The plasmon LDOS (<ref>) characterizes the distribution of plasmon states in the unit volume and frequency interval. Consequently, its frequency integral, ρ (r)=∫ dωρ (ω,r), represents the plasmon mode density that describes the plasmon states' spatial distribution:
ρ (r)= |E (r) |^2/8π U
= 2 |E (r) |^2/∫ dV [∂ (ω_plε')/∂ω_pl]|E|^2.
Introducing the mode quality factor Q=ω_pl U/W, the mode density can be written as
ρ (r)
= 1/Q |E (r) |^2/∫ dV ε” |E|^2.
Note that, in terms of ρ (r), the gain-plasmon ET rate (<ref>) takes the form
1/τ_g= 4πμ^2/3ħ Q
∫ dV_0 n_21 (r) ρ(r),
implying that the largest contribution to 1/τ_g comes from QEs located in the regions with high plasmon density.
We now relate the plasmon mode volume V associated with region V_0 to the average mode density in that region:
1/𝒱
= 1/V_0∫ dV_0ρ(r)
= 1/V_02∫ dV_0 |E |^2/∫ dV [∂ ( ω_plε')/∂ω_pl]|E|^2,
or, equivalently,
𝒱/V_0
=Q ∫ dV ε” |E|^2/∫ dV_0 |E|^2.
The expressions (<ref>) or (<ref>) are valid for nanoplasmonic systems of any size and shape and with any number of metallic and dielectric components.
It is straightforward to check that, for uniform gain distribution with n_21=N_21/V_0, the spaser threshold condition (<ref>) coincides with the laser condition (<ref>) with associated mode volume 𝒱 given by Eq. (<ref>). Equivalently, for uniform gain distribution, the gain-plasmon ET rate (<ref>) takes the form
1/τ_g= 4πμ^2/3ħ N_21/𝒱 Q,
and the laser condition (<ref>) follows from the ET balance condition 1/τ_g=1/τ_2.
For systems with single metal component, the plasmon mode volume takes the form [compare to Eq. (<ref>)]
𝒱/V_0
=ω_pl/2∂ε'(ω_pl)/∂ω_pl∫ dV_m |E|^2/∫ dV_0 |E|^2
=Q ε”(ω_pl) ∫ dV_m |E|^2/∫ dV_0 |E|^2,
where the plasmon quality factor has the form
Q = ω_pl ∂ε'(ω_pl) /∂ω_pl/2ε” (ω_pl) =ω_plτ_pl/2.
Note that, for a well-defined plasmon with Q≫ 1, the plasmon mode volume is independent of Ohmic losses in metal.
§.§ Mode volume saturation and lower bound of spaser threshold
Since the QE-plasmon ET rate rapidly falls outside the plasmonic structure (see Fig. <ref>), spasing is dominated by QEs located sufficiently close to the metal surface. In the case when a metal nanostructure of volume V_m is surrounded by an extended gain region V_0, so that the plasmon LDOS spillover beyond V_0 is negligible, the plasmon mode volume is saturated by the gain, leading to a constant value of 𝒱/V_0 that is independent of the plasmon field distribution. To demonstrate this point, we note that, in the quasistatic approximation, the integrals in Eq. (<ref>) reduce to surface terms,
∫ dV_0 |E|^2= ∫ dS Φ ^*∇_nΦ + ∫ dS_1Φ^*∇_nΦ ,
∫ dV_m |E |^2=∫ dS Φ ^*∇_nΦ ,
where S is the common interface separating the metal and dielectric regions, S_1 is the outer boundary of the dielectric region, Φ is the potential related to the plasmon field as E=-∇Φ, and ∇_nΦ is its normal derivative relative to the interface. The potentials in the first and second equations of system (<ref>) are taken, respectively, at the dielectric and metal sides of the interface S. Since the plasmon fields rapidly fall away from the metal, the contribution from the outer interface S_1 can be neglected for extended dielectric regions (see below). Then, using the standard boundary conditions at the common interface S, we obtain from Eqs. (<ref>) and (<ref>) the saturated mode volume:
𝒱/V_0
=ω_plε_d/2|ε'(ω_pl)| ∂ε'(ω_pl) /∂ω_pl
=Q ε_d ε”(ω_pl) /|ε' (ω_pl)|.
Remarkably, the saturated mode volume depends on system geometry only via the plasmon frequency ω_pl in the metal dielectric function. Combining Eqs. (<ref>) and (<ref>), we arrive at the spaser condition for saturated case,
4πμ^2τ_2/3ħε_d|ε'(ω_pl)|/ε”(ω_pl) n_21 =1,
which matches the spaser condition obtained previously for two-component systems, that is, with the gain region extended to infinity <cit.>. We stress that the condition (<ref>) provides the lower bound for the threshold value of n_21, while in real systems, where the plasmon field distribution can extend beyond the gain region, the threshold can be significantly higher, as we illustrate below.
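The saturation can be checked in closed form in the simplest geometry: for a nanosphere, the quasistatic dipole-mode field is uniform inside the metal and a point-dipole field outside, so the integrals in Eq. (<ref>) are elementary. The sketch below (Q and ε”(ω_pl) assumed; vacuum host, ε_d=1) shows 𝒱/V_0 approaching the saturated value Qε_dε”/|ε'| with |ε'|=2ε_d as the outer radius R of the gain shell grows.

import numpy as np
from scipy.integrate import quad

a = 1.0
Q, eps2, eps_d = 33.6, 2.0, 1.0        # assumed Q and eps''(w_pl); vacuum host

I_m = (4*np.pi/3)*a**3                 # integral of |E|^2 over the metal (unit field)
for R in (1.5, 2.0, 3.0, 5.0, 10.0, 100.0):
    # outside: |E|^2 = a^6*(3*cos^2 + 1)/r^6; the angular integral gives 8*pi
    I_0, _ = quad(lambda r: 8*np.pi*a**6/r**4, a, R)
    print(f"R/a = {R:5.1f}:  V/V0 = {Q*eps2*I_m/I_0:.2f}")

print("saturated limit:", Q*eps_d*eps2/(2*eps_d))   # eps' = -2*eps_d for a sphere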
In Fig. <ref>, we show the change of the threshold n_21 with expanding gain region in a nanorod-based spaser, modeled by a composite spheroidal particle with a gold core and a QE-doped silica shell. Calculations were performed using Eq. (<ref>) for confocal spheroids (see Appendix for details), and the QE frequency ω_21 was tuned to resonance with the longitudinal dipole mode frequency ω_pl. Note that the gain optical constants enter the spaser condition (<ref>) through a single parameter
n_0= 3ħ/4πμ^2τ_2,
which represents the characteristic gain concentration and sets the overall scale of the threshold n_21 for a specific gain medium. The ratio n_21/n_0, plotted in Fig. <ref>, depends only on the plasmonic system parameters and, with expanding gain region, decreases before reaching a plateau corresponding to the saturated mode volume regime described by Eq. (<ref>). Note that, in nanorods, the rapid mode volume saturation seen in Fig. <ref>, as compared to spherical particles, is caused by the condensation of plasmon states near the tips (lightning rod effect), leading to the much larger plasmon LDOS and, correspondingly, QE-plasmon ET rate (see Fig. <ref>).
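To attach a number to the scale parameter (<ref>): for an assumed dye-like emitter with μ≈10 D and τ_2≈100 fs (illustrative values only, in Gaussian units),

import numpy as np

hbar = 1.055e-27                 # erg*s (Gaussian units)
mu, tau2 = 1.0e-17, 1.0e-13      # dipole moment (esu*cm, ~10 D) and T2, assumed

n0 = 3*hbar/(4*np.pi*mu**2*tau2)     # characteristic gain concentration
print(f"n0 = {n0:.2e} cm^-3")        # ~2.5e19 cm^-3, a dye-like density scale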
§ CONCLUSIONS
In summary, we have developed a unified approach to spasing in a system of pumped quantum emitters interacting with a plasmonic structure of arbitrary shape in terms of energy transfer processes within the system. The threshold value of population inversion is determined from the condition of detailed energy transfer balance between quantum emitters, constituting the gain, and resonant plasmon mode, providing the feedback. We have shown that, in plasmonic systems, the mode volume should be defined relative to a finite region, rather than to a point of maximal field intensity, by averaging the plasmon local density of states over that region. We demonstrated that, in terms of plasmon mode volume, the spaser condition has the standard form of the laser threshold condition, thus, extending the latter to plasmonic systems with dispersive dielectric function. We have also shown that, for extended gain region, the saturated plasmon mode volume is determined solely by the system permittivities, which define the lower bound of threshold population inversion.
This work was supported in part by NSF grants No. DMR-1610427 and No. HRD-1547754.
§ CALCULATION OF QE-PLASMON ET RATE FOR SPHEROIDAL NANOPARTICLE
The ET rate between a plasmon mode in metal nanoparticle with frequency ω_pl and a QE located at the point r distanced by d from the metal surface and polarized along the normal n to the surface is given by
1/τ
=
4πμ^2/ħ|n·E(r )|^2/∫dV ε”|E|^2
=
4πμ^2/ħε”(ω_pl)|∇_nΦ(r )|^2/∫dS Φ^*∇_nΦ,
where ∇_n=n·∇ stands for the normal derivative, and the real part of the denominator is implied.
Consider a QE at distance d from the tip of a spheroidal particle with semiaxis a along the symmetry axis and semiaxis b in the symmetry plane (a>b). The potentials have the form Φ∝ R_lm(ξ)Y_lm(η,ζ), where ξ is the radial (normal) coordinate and the pair (η,ζ) parametrizes the surface (Y_lm are spherical harmonics). The surface area element is dS=h_ηh_ζ dη dζ, and normal derivative is ∇_n=h_ξ^-1(∂/∂ξ), where h_i are the scale factors (i=ξ,η,ζ) given by
h_ξ=f√(ξ^2-η^2/ξ^2-1),
h_η=f√(ξ^2-η^2/1-η^2),
h_ζ=f√((ξ^2-1)(1-η^2)),
f =√(a^2-b ^2) is half the distance between the foci, and the spheroid surface corresponds to ξ_1=a/f.
For QE located at point z=fξ on the z-axis (η=1) outside the spheroid, the radial potentials for dipole longitudinal plasmon mode (lm)=(10) have the form R(ξ)=P_1(ξ) for ξ<ξ_1 and R(ξ)=Q_1(ξ)P_1(ξ_1)/Q_1(ξ_1) for ξ>ξ_1, where P_l and Q_l are the Legendre functions of first and second kind, given by
P_1(ξ)=ξ,
Q_1(ξ)=ξ/2ln (ξ+1/ξ-1 )-1,
Q'_1(ξ)=1/2ln (ξ+1/ξ-1 )-ξ/ξ^2-1.
Using h_ξ=f along the z-axis, the ET rate equals
1/τ=
3μ^2/ħε”(ω_pl)R^' 2(ξ)/f^3ξ_1(ξ_1^2-1)
=
3μ^2/ħ ab^2ε”(ω_pl) [Q'_1(ξ)ξ_1/Q_1(ξ_1) ]^2,
with ξ=(a+d)/f, where the plasmon frequency ω_pl is determined by the boundary condition ε'(ω_pl)=ε_d ξ_1 Q'_1(ξ_1)/Q_1(ξ_1). In the limit of a spherical particle of radius a, that is, f→ 0 and ξ→∞ as b→ a, we have Q_1(ξ)≈ 1/(3ξ^2), yielding
1/τ_sp=
12μ^2/ħε”(ω_sp)a^3/(a+d)^6,
where ω_sp is the surface plasmon resonance frequency of a sphere, determined by ε'(ω_sp)+2ε_d=0. The normalized ET rate τ_sp/τ has the form
τ_sp/τ
=
a^2/4b^2 (1+d/a )^6ε”(ω_sp)/ε”(ω_pl) [Q'_1(ξ)ξ_1/Q_1(ξ_1) ]^2,
with ξ=ξ_1+d/f.
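The normalized rate (<ref>) is straightforward to evaluate numerically; a minimal implementation follows, where the two ε” values are placeholders to be taken from tabulated gold data at ω_sp and ω_pl.

import numpy as np

def Q1(x):  return 0.5*x*np.log((x + 1)/(x - 1)) - 1.0
def dQ1(x): return 0.5*np.log((x + 1)/(x - 1)) - x/(x**2 - 1)

def rate_ratio(a, b, d, eps2_sp, eps2_pl):
    """tau_sp/tau for a prolate spheroid, QE on the symmetry axis."""
    f = np.sqrt(a**2 - b**2)            # half the interfocal distance
    xi1, xi = a/f, (a + d)/f            # surface and QE radial coordinates
    return (a**2/(4*b**2)*(1 + d/a)**6*(eps2_sp/eps2_pl)
            *(dQ1(xi)*xi1/Q1(xi1))**2)

# aspect-ratio-2 nanorod, QE 5 nm from the tip (eps'' values assumed):
print(rate_ratio(a=20.0, b=10.0, d=5.0, eps2_sp=2.0, eps2_pl=1.0))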
§ CALCULATION OF POPULATION INVERSION DENSITY IN SPHEROIDAL CORE-SHELL NANOPARTICLE
We consider a core-shell nanoparticle with dielectric functions ε_c, ε_s, and ε_d in the core, shell, and outside dielectric, respectively, with inner and outer interfaces S_1 and S_2. The integrals over the core and shell regions in the condition (<ref>) reduce to surface terms
∫ dV_c |E|^2=∫ dS_1Φ^* E_n^c,
∫ dV_s |E|^2=∫ dS_2Φ^* E_n^s - ∫ dS_1Φ^* E_n^s,
where E_n^j(S_i)=-∇_jnΦ(S_i) is the outward normal component of the field at the ith interface on the jth medium side. Note that E_n^s(S_1)=(ε_c/ε_s)E_n^c(S_1) and E_n^s(S_2)=(ε_d/ε_s)E_n^d(S_2).
The ratio of integrated field intensities in the shell and core regions takes the form
L=∫ dV_s |E|^2/∫ dV_c |E|^2
=
ε_d/ε_s∫ dS_2Φ^* E_n^d/∫ dS_1Φ^* E_n^c - ε_c/ε_s,
where the potentials Φ are continuous at the interfaces. For nanostructures whose shape permits separation of variables, the potential can be written as Φ (r)=R (ξ)Σ(η,ζ), where ξ is the radial (normal) coordinate and the pair (η,ζ) parametrizes the surface. With surface area element dS=h_ηh_ζ dη dζ and normal derivative ∇_n=h_ξ^-1(∂/∂ξ), where h_i are the scale factors (i=ξ,η,ζ), the fraction of integrals takes the form
I=∫ dS_2Φ^* E_n^d/∫ dS_1Φ^* E_n^c
=
R_d (ξ_2)R_d' (ξ_2)/R_c (ξ_1)R_c' (ξ_1)
× ∫∫ dη_2 dζ_2 (h_η_2h_ζ_2/h_ξ_2) |Σ |^2/∫∫ dη_1 dζ_1 (h_η_1h_ζ_1/h_ξ_1) |Σ |^2.
Below we outline the evaluation of L for a core-shell nanoparticle described by two confocal prolate spheroids with semi-axes a and b. The two shell surfaces correspond to ξ_1=a /f and ξ_2=sa /f, where f =√(a^2-b ^2) is half the distance between the foci, and s>1 characterizes the shell thickness. Evaluation of the angular integrals yields
I
= R_d (ξ_2)R_d' (ξ_2)/R_c (ξ_1)R_c' (ξ_1) ξ_2^2-1/ξ_1^2-1.
For the longitudinal dipole mode (l=1, m=0), we have R_c = P_1(ξ) for ξ<ξ_1, R_s = AP_1(ξ)+BQ_1(ξ) for ξ_1<ξ<ξ_2, and R_d = CQ_1(ξ) for ξ>ξ_2, yielding
L=C^2 ε_d/ε_s Q_1(ξ_2)Q'_1(ξ_2)/P_1(ξ_1)P'_1(ξ_1) ξ_2^2-1/ξ_1^2-1 - ε_c/ε_s,
where C and ε_c(ω_pl) are determined from the continuity of R_i and ε_iR'_i across the interfaces.
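In practice, the coefficients entering Eq. (<ref>) follow from a 3×3 linear system: continuity of R at ξ_1 and of R and εR' at ξ_2 fixes A, B, C, after which the remaining condition at ξ_1 yields ε_c(ω_pl). A sketch (shell and host permittivities assumed for illustration):

import numpy as np

def Q1(x):  return 0.5*x*np.log((x + 1)/(x - 1)) - 1.0
def dQ1(x): return 0.5*np.log((x + 1)/(x - 1)) - x/(x**2 - 1)

def shell_to_core_ratio(a, b, s, eps_s, eps_d):
    """L = int_shell |E|^2 / int_core |E|^2 for confocal spheroids (l=1, m=0)."""
    f = np.sqrt(a**2 - b**2)
    xi1, xi2 = a/f, s*a/f
    # R_c = P1 = xi, R_s = A*P1 + B*Q1, R_d = C*Q1:
    M = np.array([[xi1,   Q1(xi1),         0.0],
                  [xi2,   Q1(xi2),        -Q1(xi2)],
                  [eps_s, eps_s*dQ1(xi2), -eps_d*dQ1(xi2)]])
    A, B, C = np.linalg.solve(M, np.array([xi1, 0.0, 0.0]))
    eps_c = eps_s*(A + B*dQ1(xi1))       # mode condition at the inner surface
    L = (C**2*(eps_d/eps_s)*Q1(xi2)*dQ1(xi2)/xi1
         *(xi2**2 - 1)/(xi1**2 - 1) - eps_c/eps_s)
    return L, eps_c

L, eps_c = shell_to_core_ratio(a=20.0, b=10.0, s=1.5, eps_s=2.1, eps_d=1.0)
print(f"L = {L:.2f},  eps_c(w_pl) = {eps_c:.2f}")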
bergman-prl03
D. Bergman and M. I. Stockman,
Phys. Rev. Lett. 90, 027402 (2003).
stockman-natphot08
M. I. Stockman,
Nature Photonics 2, 327 (2008).
stockman-jo10
M. I. Stockman,
J. Opt. 12, 024004 (2010).
noginov-nature09
M. A. Noginov, G. Zhu, A. M. Belgrave, R. Bakker, V. M. Shalaev, E. E. Narimanov, S. Stout, E. Herz, T. Suteewong and U. Wiesner,
Nature 460, 1110 (2009).
zhang-nature09 R. F. Oulton, V. J. Sorger, T. Zentgraf, R.-M. Ma, C. Gladden, L. Dai, G. Bartal, and X. Zhang,
Nature 461, 629, (2009).
zheludev-oe09 E. Plum, V. A. Fedotov, P. Kuo, D. P. Tsai, and N. I. Zheludev,
Opt. Expr. 17, 8548, (2009).
zhang-natmat10
R. Ma, R. Oulton, V. Sorger, G. Bartal, and X. Zhang,
Nature Mater. 10, 110 (2010).
ning-prb12 K. Ding, Z. C. Liu, L. J. Yin, M. T. Hill, M. J. H. Marell, P. J. van Veldhoven, R. Nöetzel, and C. Z. Ning,
Phys. Rev. B 85, 041301(R) (2012).
gwo-science12 Y.-J. Lu, J. Kim, H.-Y. Chen, C.i Wu, N. Dabidian, C. E. Sanders, C.-Y. Wang, M.-Y. Lu, B.-H. Li, X. Qiu, W.-H. Chang, L.-J. Chen, G. Shvets, C.-K. Shih, and S. Gwo,
Science 337, 450 (2012).
odom-natnano13 W. Zhou, M. Dridi, J. Y. Suh, C. H. Kim, D. T. Co, M. R. Wasielewski, G. C. Schatz, and T. W. Odom,
Nat. Nano. 8, 506 (2013).
shalaev-nl13 X. Meng, A. V. Kildishev, K. Fujita, K. Tanaka, and V. M. Shalaev,
Nano Lett. 13, 4106, (2013).
gwo-nl14 Y. Lu, C.-Y. Wang, J. Kim, H.-Y. Chen, M.-Y. Lu, Y.-C. Chen, W.-H. Chang, L.-J. Chen, M. I. Stockman, C.-K. Shih, S. Gwo,
Nano Lett. 14, 4381 (2014).
zhang-natnano14 R.-M. Ma, S. Ota, Y. Li, S. Yang, and X. Zhang,
Nat. Nano. 9, 600 (2014).
odom-natnano15 A. Yang, T. B. Hoang, M. Dridi, C. Deeb, M. H. Mikkelsen, G. C. Schatz, and T. W. Odom,
Nat. Comm. 6, 6939 (2015).
stockman-review M. I. Stockman,
in Plasmonics: Theory and Applications, edited by T. V. Shahbazyan and M. I. Stockman (Springer, New York, 2013).
apalkov-light14V. Apalkov and M. I Stockman,
Light: Science & Applications 3, e191 (2014).
premaratne-acsnano14 C. Rupasinghe, I. D. Rukhlenko, and M. Premaratne,
ACS Nano, 8 2431 (2014).
stockman-aop17 M. Premaratne and M. I. Stockman,
Adv. Opt. Phot. 9, 79 (2017).
haken H. Haken, Laser Theory (Springer, New York, 1983).
wegener-oe08 M. Wegener, J. L. Garcia-Pomar, C. M. Soukoulis, N. Meinzer, M. Ruther, and S. Linden,
Opt. Express 16, 19785 (2008).
klar-bjn13 N. Arnold, B. Ding, C. Hrelescu, and T. A. Klar,
Beilstein J. Nanotechnol. 4, 974 (2013).
li-prb13 X.-L. Zhong and Z.-Y. Li,
Phys. Rev. B 88, 085101 (2013).
lisyansky-oe13 D. G. Baranov, E.S. Andrianov, A. P. Vinogradov, and A. A. Lisyansky,
Opt. Express 21, 10779 (2013).
bordo-pra13 V. G. Bordo
Phys. Rev. A 88, 013803 (2013).
maier-oe06 S. Maier,
Opt. Express 14, 1957 (2006).
polman-nl10 M. Kuttge, F. J. Garcia de Abajo, and A. Polman,
Nano Lett. 10, 1537 (2010).
koenderink-ol10 A. F. Koenderink,
Opt. Lett. 35 4208 (2010).
hughes-ol12 P. T. Kristensen, C. Van Vlack, and S. Hughes,
Opt. Lett. 37, 1649 (2012).
russel-prb12 K. J. Russell, K. Y. M. Yeung, and E. Hu,
Phys. Rev. B 85, 245445 (2012).
lalanne-prl13 C. Sauvan, J. P. Hugonin, I. S. Maksymov, and P. Lalanne,
Phys. Rev. Lett. 110, 237401 (2013).
hughes-acsphot14 P. T. Kristensen and S. Hughes,
ACS Photon. 1, 2 (2014).
bonod-prb15 X. Zambrana-Puyalto, and N. Bonod,
Phys. Rev. B 91, 195422 (2015).
muljarov-prb16 E. A. Muljarov, W. Langbein,
Phys. Rev. B 94, 235438 (2016).
shegai-oe16 Z.-J. Yang, T. J. Antosiewicz, and T. Shegai,
Opt. Express 24, 20373 (2016).
derex-jo16 G. Colas des Francs, J. Barthes, A. Bouhelier, J.C. Weeber, A. Dereux,
J. Opt. 18, 094005 (2016).
stockman-prl11
M. I. Stockman,
Phys. Rev. Lett. 106, 156802 (2011).
shahbazyan-prl16 T. V. Shahbazyan,
Phys. Rev. Lett. 117, 207401 (2016).
novotny-book L. Novotny and B. Hecht, Principles of Nano-Optics (CUP, New York, 2012).
pustovit-prb16 V. N. Pustovit, A. M. Urbas, A. V. Chipouline, and T. V. Shahbazyan,
Phys. Rev. B 93, 165432 (2016).
petrosyan-17 L. S. Petrosyan and T. V. Shahbazyan,
arXiv:1702.04761.
landau L. D. Landau and E. M. Lifshitz, Electrodynamics of Continuous Media (Elsevier, Amsterdam, 2004).
christy
P. B. Johnson and R.W. Christy,
Phys. Rev. B 6, 4370 (1973).
mulvaney-prl02C. Sönnichsen, T. Franzl, T. Wilk, G. von Plessen, J. Feldmann, O. V. Wilson, and P. Mulvaney,
Phys. Rev. Lett. 88, 077402 (2002).
Department of Physics and BINA Center of Nano-technology, Bar-Ilan University, Ramat-Gan 52900, Israel
avi.peer@biu.ac.il
Homodyne measurement is a corner-stone of quantum optics. It measures the fundamental variables of quantum electrodynamics - the quadratures of light, which represent the cosine-wave and sine-wave components of an optical field and constitute the quantum optical analog of position and momentum. Yet, standard homodyne, which is used to measure the quadrature information, suffers from a severe bandwidth limitation: While the bandwidth of optical states can easily span many THz, standard homodyne detection is inherently limited to the electrically accessible, MHz to GHz range, leaving a dramatic gap between the relevant optical phenomena and the measurement capability. We demonstrate a fully parallel optical homodyne measurement across an arbitrary optical bandwidth, effectively lifting this bandwidth limitation completely. Using optical parametric amplification, which amplifies one quadrature while attenuating the other, we measure two-mode quadrature squeezing of 1.7dB below the vacuum level simultaneously across a bandwidth of 55THz, using just one local-oscillator - the pump. As opposed to standard homodyne, our measurement is highly robust to detection inefficiency, and was obtained with >50% loss in the detection channel. This broadband parametric homodyne measurement opens a wide window for parallel processing of quantum information.
Lifting the Bandwidth Limit of Optical Homodyne Measurement
Yaakov Shaked, Yoad Michael, Rafi Z. Vered, Leon Bello, Michael Rosenbluh and Avi Pe'er
===========================================================================================
The standard representation of a nearly monochromatic light field is either as a complex amplitude a=|a|e^iφ to reflect the amplitude and phase of the field oscillation E(t)=a e^-i Ω t+a^∗e^i Ω t=|a|cos(Ω t+φ) (Ω the carrier frequency), or as a superposition of two quadrature oscillations E(t)= xcosΩ t + ysinΩ t, where x=a+a^∗ and y=i(a-a^∗) are the real quadrature amplitudes of the cosine-wave and sine-wave components. While the quadrature representation may be just a mathematical convenience in classical electromagnetism, it is of fundamental importance in quantum optics. The two quadrature operators x=a+a^† and y=i(a-a^†) form a conjugate pair of non-commuting observables ([ x,y]=2i) analogous to position and momentum in mechanics, indicating that their fluctuations are related by quantum uncertainty Δ xΔ y≥ 1. This conjugation is most emphasized with quantum squeezed light <cit.>, where the quantum uncertainty of one quadrature amplitude is reduced (squeezed), while the uncertainty of the other is inevitably increased (stretched), i.e. Δ x < 1 <Δ y <cit.>.
Homodyne measurement, which extracts the quadrature information of the field, forms the backbone of coherent detection in physics and engineering, and plays a central role in quantum information processing, from measuring non-classical squeezing <cit.>, through quantum state tomography <cit.>, generation of non-classical states <cit.>, quantum teleportation <cit.>, quantum key distribution (QKD) and quantum computing <cit.>. To measure the field quadratures, homodyne detection compares, by correlation, the optical signal against a strong and coherent quadrature reference (local oscillator - LO), where the specific quadrature axis to be measured is selected by tuning the phase of the LO. Hence, the heart of a homodyne detector encompasses an external LO and a field multiplier. This is most evident for homodyne measurement in the radio-frequency (RF) domain, where the input radio-wave and the LO are directly multiplied using an RF frequency mixer. In optics, however, direct frequency mixers do not exist. Instead, standard optical homodyne relies on a beam splitter to superpose the optical input and the LO (see fig. <ref>a)
and on the nonlinear electrical response of square-law photo-detectors as the field multipliers that generate an electronic signal proportional to the measured x or y quadrature. Thus, measuring quadratures with standard homodyne is strongly limited to the electronic bandwidth of the photo-detectors (MHz to GHz range). In addition, homodyne detection is also highly sensitive to the noise level and quantum efficiency of the detectors, which leads to decoherence due to the addition of vacuum noise <cit.>.
Yet, optical states of light can easily span optical bandwidths of 10-100THz and more, where the quadratures x(t), y(t) vary rapidly on a time scale comparable to the optical cycle (E(t)=x(t)cosΩ t + y(t)sinΩ t). Thus, the detection method enforces an inherent distinction between nearly monochromatic and broadband fields. In the near monochromatic case, the instantaneous quadrature amplitudes vary slowly over millions of optical cycles, and can be directly observed from the time dependent electronic signal of the homodyne output. For broadband light, however, photo-detectors are too slow to follow the quadrature variations, demanding an inherently different measurement approach <cit.>.
Two examples can illuminate both the potential utility of broad bandwidth in quantum information, and the difficulty of standard methods to exploit it. One example is one-way quantum computation with a quantum frequency comb <cit.>, which forms the most promising realization of scalable quantum information to date. This approach exploits the large bandwidth of frequency mode-pairs from a single parametric oscillator (two-mode squeezed vacuum) as a set of quantum modes (Q-modes), where coupling among neighboring Q-modes has demonstrated the largest entangled cluster states to date along with a complete set of quantum gate operations <cit.>. The number of parallel Q-modes is dictated by the squeezing bandwidth of the parametric oscillator, which can extend up to a full optical octave by rather simple means (limited only by phase matching of the nonlinear interaction) <cit.>. Assuming a squeezing bandwidth of 10-100THz, the number of simultaneous Q-modes can easily exceed 10^5. The limitation of this approach to quantum computation is the bandwidth of the measurement, where each Q-mode requires a separate homodyne detection using a precise pair of phase-correlated LOs. A broad bandwidth of Q-modes requires a dense set of correlated LOs and multiple homodyne measurements, quickly multiplying the complexity to impracticality. In our experiment, we simultaneously measured the entire bandwidth of broadband two-mode squeezed vacuum with only one LO - the pump field that generated the squeezed light to begin with.
Another example is in quantum communication and quantum key distribution (QKD), where enhanced bandwidth was employed to increase the data rate by increasing the number of bits per photon. The concept here is to divide the photon readout time, which is limited by photo-detectors, into multiple short time-bins, which act as an additional time stamp for each photon (or pair) <cit.>. The time stamp (bin), which is usually detected using a Franson interferometer <cit.>, enhances the number of bits per photon to log_2N, where N is the number of time-bins. Theoretically, if the bandwidth limit of the detector could be lifted, all time (or frequency) bins could be detected independently, and an N times higher flux of photons could be used, allowing full parallelization of the communication across the available bandwidth and enhancement of the total throughput by the much larger factor N (compared to log_2N).
Here we present a different approach to optical homodyne, which resorts to a broadband optical nonlinearity - parametric amplification, as the field multiplier. Using this method we measure the entire bandwidth simultaneously with a single homodyne device and a single LO, as illustrated in figure <ref>(b). Specifically, since parametric gain only amplifies one quadrature of the input signal but attenuates the other, analysis of the output spectrum enables evaluation of the input quadratures. Due to the parametric amplification of the quadrature of interest, our measurement is insensitive to detection inefficiency (and to the added vacuum noise it introduces). Indeed, our observation of broadband squeezing was easily obtained with >50% loss in the detection channel. With sufficient parametric gain, any given x quadrature can be amplified to overwhelm the attenuated orthogonal y quadrature, even if it was originally squeezed, such that the resulting output signal is practically proportional only to the input x quadrature, as illustrated in figure <ref>(b). Even if the parametric gain in the measurement is not high enough to completely diminish the y quadrature, once the desired x quadrature is sufficiently enhanced above the vacuum level, measurement is simple. Specifically, two orthogonal measurements, one for each quadrature, provide sufficient information to easily extract both quadratures (average) over the entire optical bandwidth, as detailed hereon.
§ RESULTS
We present our results as follows: First, we describe the experimental realization of broadband parametric homodyne, which demonstrated parallel measurement of quadrature squeezing of 1.7dB simultaneously across a bandwidth of 55THz. We then discuss in detail the theoretical foundations of parametric homodyne. We present the theory of two-mode quadratures, derive the output intensity of the parametric amplifier in terms of the input quadratures and consider measurement with finite parametric gain. We then discuss tomographic reconstruction of two-mode quantum states under the constraints of incomplete experimental data, and compare parametric homodyne to standard homodyne. Finally, we consider measurement of arbitrary broadband states of light.
§.§ Experiment
A common expression for the output field of an optical parametric amplifier, which is based on a three- or four-wave mixing optical nonlinearity, is a_out=a_incosh(g) + a^†_insinh(g) = (x_ine^g - iy_ine^-g)/2, where a_in, a^†_in are the input field operators, x_in, y_in are the input quadratures and g is the parametric gain. Hence, the parametric amplification amplifies one input quadrature (x_ine^g) while attenuating the other (y_ine^-g), indicating that for sufficient amplification the output field primarily reflects one quadrature of the input without adding noise to the measured quadrature, thus offering a quadrature-selective quantum measurement. This process responds instantly to time variations of the quadrature amplitudes x(t),y(t) and the amplification bandwidth is limited only by the phase matching conditions in the nonlinear medium, which can easily span an optical bandwidth of 10-100THz (implications of the time dependence are deferred to a later discussion). In our experiment, we measured the spectral intensity of the chosen input quadrature x ^†(ω)x (ω) simultaneously across the entire bandwidth by detecting the output spectrum of a parametric amplifier with an input of broadband squeezed vacuum.
We note that the parametric amplifier of the measurement need not be ideal. Specifically, since the attenuated quadrature is not measured, it is not necessarily required to be squeezed below vacuum, only to be sufficiently suppressed compared to the amplified quadrature. Consequently, restrictions on the measurement amplifier are considerably relaxed compared to sources of squeezed light, allowing it to operate with much higher gain.
The common source for squeezed light or squeezed vacuum is also a parametric amplifier. If the amplification is spontaneous (vacuum input), the amplifier will attenuate one of the quadratures of the vacuum input state, squeezing its quantum uncertainty. For measuring the squeezing, we exploit the same non-linearity and the same pump that generated the squeezed state in the first place, thus guaranteeing a bandwidth-match of the homodyne measurement to the squeezing process. The quadrature information over a broad frequency range is obtained simultaneously by measuring the spectrum of the light at the output of the parametric amplifier. With a single LO - the pump, each individual frequency component is measured independently, and the number of accessible Q-modes (or Q-bits) that could be utilized simultaneously would be multiplied by N (the number of resolved frequency bins) rather than log_2N. As will be explained later, a single frequency component of the quadrature is actually a combination of two frequency modes, commonly termed signal ω_s and idler ω_i, symmetrically separated around the main carrier frequency Ω.
The experimental demonstration of broadband parametric homodyne (see figure <ref>) consists of two parts: First, generation of broadband squeezed vacuum, and second, parametric homodyne detection of the generated squeezing. We generate broadband squeezed vacuum by collinear four-wave mixing (FWM) in a photonic crystal fiber (PCF) that is pumped by narrowband picosecond pulses. To measure the generated squeezing, we couple the generated FWM together with the pump to another PCF, which acts as a measurement amplifier (in the experiment this was the same PCF in the backward direction). After the second (measurement) pass we record the parametric output spectrum to extract the quadrature information (see figure <ref>a).
Since squeezed vacuum is a Gaussian state, its quadrature distribution is completely defined by the second moment. We therefore measure the average spectral intensity (with averaging times of a few tens of ms) and reconstruct the average quadrature fluctuations ⟨ x ^†x ⟩,⟨ y ^†y ⟩. Measurement of the instantaneous intensity distribution is possible with a shorter integration time, but is not necessary for squeezed vacuum.
Fringes appear across the output spectrum of the measurement parametric amplifier due to chromatic dispersion in the optical components (filters, windows, etc.), which introduces a varying spectral phase with respect to the pump across the FWM spectrum. Thus for some frequencies the stretched quadrature is amplified (bright fringes) while for others the squeezed quadrature is amplified (relatively dark fringes), as seen in figure <ref>a. The specific quadrature to be amplified can be controlled by the pump phase (see 'methods' for more details on the experiment).
The extraction of the quadrature information from the measured parametric output assumes knowledge of the parametric gain. The calibration of the parametric amplifier is simple, performed by recording the output spectrum for a set of known inputs (Fig. <ref>b), when blocking various input fields (signal, idler or pump). For example: The vacuum level of the parametric amplifier is observed when both the signal and the idler input fields are blocked (I_zsi - zero signal idler). Also, the average number of photons at the input is given by the ratio of the measured output when the signal is blocked (idler only, I_zs - zero signal) to the vacuum input level ⟨ N_i⟩ =I_zs/I_zsi-1. This calibration process is fully described in the 'methods'. After calibration, we obtain the parametric homodyne results of figure <ref>c, which show ∼1.7dB squeezing across the entire 55THz bandwidth.
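Schematically, this per-frequency calibration amounts to simple ratios of the recorded spectra; in the short sketch below the intensities are placeholders standing for measured values in a single frequency bin.

# Calibration of the measurement amplifier from blocked-input spectra:
I_zsi = 1.00    # vacuum level: both signal and idler inputs blocked (placeholder)
I_zs  = 3.40    # idler-only input, signal blocked (placeholder)

N_in = I_zs/I_zsi - 1.0      # mean input photon number, <N_i> = I_zs/I_zsi - 1
print(f"<N> = {N_in:.2f} photons per mode")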
The observed squeezing in our experiment is far from ideal, primarily because the pump is pulsed, which induces an undesirable time dependence in both the magnitude and the phase of the parametric gain in the squeezing process, as well as in the parametric homodyne detection, via self-phase and cross-phase modulation (SPM and XPM). Since our pump pulses are relatively long, their time dependence can be regarded as adiabatic, indicating that the instantaneous squeezing (source) and parametric amplification (measurement) are ideal, but the quadrature axis, squeezing level and gain of the two amplifiers vary with time, not necessarily at the same rate. Thus, the measured spectrum, which represents a temporal average of the light intensity over the entire pulse, somewhat diminishes the expected squeezing (see Fig. <ref> in the extended data).
Even with a pulsed pump, however, the various homodyne and calibration measurements are consistent and unequivocal for weak enough pump intensity (see 'methods' for further details on the pulse averaging effects). With a pure CW pump, as is generally used in squeezing applications, this pulse averaging limitation would not exist. Another limitation in our measurements is the need to re-couple the FWM back into the PCF, which introduces an inevitable loss of 30% and reduces the observed squeezing. This "known" loss can either be avoided completely in other experimental configurations, or can be calibrated out to estimate the "bare" squeezing level of the measured light source (see 'methods').
We verified the properties of the parametric homodyne in several ways, which are shown in figures <ref> and <ref> in the extended data. We measured the squeezed quadrature ⟨ x ^†x ⟩, and the uncertainty area, ⟨ x ^†x ⟩×⟨ y ^†y ⟩ of the squeezed state. Ideally, the generated squeezed light should be a minimum-uncertainty state of ⟨ x ^†x ⟩×⟨ y ^†y ⟩=1, independent of the generation gain; and the average intensity of the squeezed quadrature should exponentially decrease with the gain. The results, presented in figure <ref>(a-b) in the extended data, show a clear reduction of the normalized squeezed-quadrature intensity down to ⟨ x ^†x ⟩≈ 0.68 (32% below the vacuum level), and the uncertainty area remains nearly ideal at ⟨ x ^†x ⟩×⟨ y ^†y ⟩ < 1.3, up to a pump power of 60mW. Further increase of the pump does not improve the measured squeezing due to pulse effects, and the minimum-uncertainty property deteriorates. Based on the measured squeezing, the instantaneous squeezed quadrature at the peak of the pulse was estimated to be >3dB (see 'methods'). Additional verification measurements of the broadband squeezing are presented in the extended data with figures <ref>(c-d) and <ref>.
§.§ Theoretical Foundation
The direct mathematical relation of the time varying field to broadband quadrature amplitudes is simple and illuminating in both time and frequency, and yet, it is rarely used outside the context of near monochromatic light. For a classical time-dependent field E(t)=a(t)expi Ω t+c.c., the two quadratures in time are the real and imaginary parts of the field amplitude a(t)
[ x (t)=a(t)+a^∗(t)=2Re[a(t)],; y (t)=i[a^∗(t)-a(t)]=2Im[a(t)].; ]
In frequency therefore, the quadrature amplitudes x (ω),y (ω), represent the symmetric and antisymmetric parts of the field spectral amplitude a(ω)
[ x (ω)=a(ω)+ a^*(-ω),; y (ω)=i[a^*(ω)- a(-ω)],; ]
where ω is the offset from the carrier frequency Ω, possibly of optical separation, and a(ω)=1/√(2π)∫ a(t)e^-iω tdt.
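Equation (<ref>) is easy to verify numerically: the discrete Fourier transform (whose e^-iω t kernel matches the convention above) of x(t)=2Re[a(t)] must equal the symmetric combination a(ω)+a^*(-ω) built from the transform of an arbitrary complex envelope. A minimal check:

import numpy as np

N = 1024
rng = np.random.default_rng(0)
a_t = rng.normal(size=N) + 1j*rng.normal(size=N)   # arbitrary complex envelope

a_w = np.fft.fft(a_t)                              # a(w) on the fft grid
a_minus = np.roll(a_w[::-1], 1)                    # a(-w) on the same grid
x_w = a_w + np.conj(a_minus)                       # x(w) = a(w) + a*(-w)

print(np.allclose(x_w, np.fft.fft(2*np.real(a_t))))   # True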
The fundamental quadrature oscillation - a single frequency component of a quadrature amplitude x (ω), y (ω), is therefore a two-mode combination of frequencies ω_s=Ω+ω and ω_i=Ω-ω - the signal and idler. In analogy to eq. <ref>, the quantum operators of the quadratures x (ω), y (ω) can be expressed in terms of the field operators of the signal a_s=a(ω) and the idler a_i=a(-ω) <cit.>
[ x(ω) = a_s + a_i^†; y(ω) = i(a_s^†- a_i).; ]
This definition preserves the commutation relation [ x, y]=2i and reduces in the monochromatic case to the single-mode quadratures x= a+ a^†, y=i( a^†- a). Strictly speaking, equation <ref> defines the quadrature operators of the nonlinear dipole within the medium, not of the emitted light field. Specifically, they do not include the frequency-dependence of the optical field operator E(ω)∼a(ω)√(ω), which is different for the signal and idler modes. Yet, to avoid cumbersome nomenclature we will simply refer to these operators as the 'two-mode quadratures', since they correctly represent the quantum correlation and squeezing of a two-mode field.
Figure <ref> illustrates the temporal field of a single two-mode component of a pure quadrature oscillation, which represents a beat pattern: slow sinusoidal envelope of frequency ω over a fast carrier wave at frequency Ω (cosine or sine). The temporal two-mode field can be written in terms of the two-mode quadratures as
[ E_Ω,ω(t) = [a_s e^-i(Ω+ω)t+a_i e^-i(Ω-ω)t]+c.c.=; =[x(ω)e^-iω t + x^†(ω)e^iω t]cosΩ t +; + [y(ω)e^-iω t + y^†(ω)e^iω t]sinΩ t ]
where the terms in the square brackets represent the quadrature envelopes.
§.§ Two-Mode Quadratures
The generalization of the standard quadratures to two-mode quadratures requires some attention. As opposed to the standard quadrature operators, which are hermitian and represent time-independent real values, the two-mode quadrature operators are non-hermitian x^†(ω)=x(-ω)≠x(ω) and represent time-dependent envelopes with an amplitude and phase in some similarity to the field operators a,a^†, which represent the amplitude and phase of the carrier oscillation. Yet, in contrast to the field operator a, the two-mode quadrature x is an observable quantity. Specifically, since x commutes with its conjugate [x(ω), x^†(ω)]=0 (as opposed to [a,a^†]=1), it is possible in principle to simultaneously measure both the real and imaginary part of the quadrature envelope, and thereby obtain complete information on both amplitude and phase of the single quadrature:
[ Re[x] = x +x ^† = x_s + x_i; Im[x] = i(x - x ^†) = y_s - y_i,; ]
where x_s,i,y_s,i are the standard single mode quadratures of the signal and idler modes. Our experiment aims to measure x^†(ω)x(ω).
Since the phase of the two-mode quadrature relates to commuting observables (as opposed to the carrier phase), it does not reflect a non-classical property of the quantum light field, but rather defines the classical temporal mode in which the field is measured. Specifically, the temporal mode of measurement is the two-frequency beat pattern of frequency ω (see figure <ref>), where the envelope phase defines the temporal offset of the beat. This offset, along with other mode parameters, such as polarization, spatial mode, carrier frequency, etc. define the mode of the local oscillator. Of course, quantum entanglement is possible between the two envelope modes (cosine or sine) in direct equivalence to entanglement of a single photon (or photon pair, or cat state) between polarization modes, which is widely used for quantum information. However, this "quantumness" between modes is additional and different, on top of the intra-mode quantum state, which is described by the quadratures x,y.
Due to the bandwidth limitation of standard homodyne measurement, the commonly used expression to interpret two-mode quadratures does not rely on eqs. <ref>,<ref>, but rather on eq. <ref>: two independent homodyne measurements of the signal and idler quadratures x_s,i, y_s,i relative to two correlated LOs at their respective frequencies ω_s, ω_i, such that the output homodyne signal falls within the electrical bandwidth. Thus, the standard procedure to measure just a single frequency component of the two-mode quadrature x(ω) (and its squeezing) requires two separate homodyne measurements of the independent quadratures of both the signal and the idler using a pair of phase-correlated LOs <cit.>. For a broadband spectrum, standard two-mode homodyne requires a dense set of correlated pairs of LOs for each frequency component of the measurement. As we have shown, however, in our experiment above, a single LO is sufficient to simultaneously extract a specific quadrature across the entire optical bandwidth, just as a single pump laser can simultaneously generate the entire bandwidth of quadrature-squeezed mode pairs.
§.§ Quantum Derivation of the Parametric Amplified Output Intensity
To model the parametric homodyne process quantum mechanically, we derive an expression for the parametric output intensity (photon-number) operator of the signal (or idler) mode, N_s(g) = a_s^†(g) a_s(g) (g is the parametric gain) in terms of the input complex quadratures x , y.
Mathematically, our method relies on the similarity between the quadrature operators of interest (eq. <ref>) x = a_s+ a_i^†, -i y ^†= a_s- a_i^† and the field operator at the output of a parametric amplifier:
a_s(g) = a_scosh(g) + e^iφa^†_isinh(g)≡C a_s + D a_i^†,
where the coefficients C and D are generally complex. Since field operators must fulfil [ a_s(g), a_s^†(g)] = 1, the two coefficients C and D must obey |C|^2 - |D|^2 = 1, which leads to the common description C=cosh g and D=e^iφsinh g. However, the attributed phase of the parametric process φ, which is determined by the pump phase and the phase-matching conditions in the non-linear medium, can also be expressed explicitly, leaving the two coefficients C,D real and positive (rather than complex), using a_s(g,θ) = (C a_s e^iθ + D a_i^†e^-iθ)e^iθ_0. Since the overall phase θ_0 does not affect the photon-number calculations, we may discard it as θ_0 = 0. In this expression we account for the phase of the pump as a rotation of the input quadrature axis - a_s,i→ a_s,ie^iθ. Accordingly, the rotated complex quadrature operators (equation <ref>) become x _θ = a_se^iθ + a_i^†e^-iθ and y _θ = -i( a_ie^iθ - a_s^†e^-iθ).
Parametric amplification directly amplifies one quadrature of the input and attenuates the other, as evident by expressing the field operators a_s(g) at the output using the quadrature operators x , y of the input:
[ a_s(g,θ) = (C+D)/2 x _θ + i(C-D)/2 y ^†_θ = 1/2(e^g x _θ + ie^-g y ^†_θ).; ]
Finally, the parametric photon-number operator at the output becomes:
[ N_s(g,θ) = a_s^†(g,θ) a_s(g,θ) = -1/2(N_i - N_s + 1) +; +1/4(C + D)^2 x ^†_θ x _θ + 1/4(C - D)^2 y ^†_θ y _θ; = -1/2(N_i - N_s + 1) +1/4e^2g x ^†_θ x _θ +1/4e^-2g y ^†_θ y _θ, ]
where N_s,i=a_s,i^† a_s,i represent the input photon numbers (intensities) of the signal and idler.
The first term -1/2( N_i - N_s + 1) does not depend on the pump phase and contributes only an offset to the expectation value, which is approximately -1/2 since the signal and idler photon numbers are usually identical in the absence of loss. The remaining two terms are essential to the measurement since they are proportional to the two-mode quadrature intensities. The second term 1/4(C + D)^2 x ^†_θ x _θ =1/4e^2g x ^†_θ x _θ accounts for the amplification of one quadrature, and the third term 1/4(C - D)^2 y ^†_θ y _θ=1/4e^-2g y ^†_θ y _θ accounts for the attenuation of the other.
With sufficient parametric gain, any given x quadrature at the input, even if it was originally squeezed, can be amplified above the vacuum noise to a "classical level", which allows complete freedom in measurement since vacuum fluctuations are no longer the limiting noise. If the measurement gain considerably exceeds the generation gain, such that e^2g x ^†_θ x _θ >> e^-2g y ^†_θ y _θ, the amplified quadrature will dominate the intensity of the output light allowing to neglect the intensity of the attenuated orthogonal y quadrature, and the measurement of the light intensity spectrum at the output will directly reflect (after calibration) the single-shot value of the input quadrature intensity x^†(ω) x(ω), just like the standard measurement of the electrical spectrum at the output of standard homodyne.
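Equation (<ref>) can also be verified directly as an operator identity in a truncated two-mode Fock space; in the sketch below the truncation produces spurious terms only at the highest number states, which are projected out before comparing.

import numpy as np

dim, g, theta = 12, 0.8, 0.37
a = np.diag(np.sqrt(np.arange(1, dim)), 1)      # truncated annihilation operator
I = np.eye(dim)
a_s, a_i = np.kron(a, I), np.kron(I, a)
ad_s, ad_i = a_s.conj().T, a_i.conj().T

C, D = np.cosh(g), np.sinh(g)
a_out = C*np.exp(1j*theta)*a_s + D*np.exp(-1j*theta)*ad_i
N_out = a_out.conj().T @ a_out

x = np.exp(1j*theta)*a_s + np.exp(-1j*theta)*ad_i
y = -1j*(np.exp(1j*theta)*a_i - np.exp(-1j*theta)*ad_s)
rhs = (-0.5*(ad_i @ a_i - ad_s @ a_s + np.eye(dim*dim))
       + 0.25*(C + D)**2*(x.conj().T @ x)
       + 0.25*(C - D)**2*(y.conj().T @ y))

# keep only states below the truncation edge in both modes:
keep = np.array([ns < dim - 1 and ni < dim - 1
                 for ns in range(dim) for ni in range(dim)])
print(np.max(np.abs((N_out - rhs)[np.ix_(keep, keep)])))   # ~1e-14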
§.§ Parametric Homodyne with Finite Gain
Although the concept of parametric homodyne is conveniently understood in the limit of large gain, where the quadrature of interest dominates the output light field, parametric homodyne is equally effective with almost any finite gain. When the measurement gain is not large enough and the attenuated quadrature cannot be neglected, the two quadrature intensities can be easily extracted using a pair of measurements; setting the pump phase to amplify one quadrature (θ=0) and then to amplify the other (θ=π/2), as illustrated in figure <ref>. Indeed, the output intensity in this case will not directly reflect the quadrature intensity, but it still provides equivalent information about the quadrature at any finite gain, since two light intensity measurements along orthogonal axes uniquely infer the two quadrature intensities at any finite gain, indicating that the information content of a measurement of the output intensity is the same as that of the quadrature intensity.
To derive this equivalence, let us examine a bit further the relation of the field operators at the output of the amplifier to the quadratures of the input (Eq. <ref>) a_s (θ,g)= a_s e^-iθcosh(g)+ a_i^† e^iθsinh(g) = x _θ e^g+ i y _θ^† e^-g. As mentioned, the field operator converges in the limit of large gain to an amplified single quadrature operator a_s (θ,g) → e^g x _θ, but this convergence can never be exact since the commutation relation of field operators [ a, a^†]=1 is inherently different from that of quadrature operators [ x , x ^† ]=0. To illuminate the smooth transition from a field operator to a quadrature, let us express the field operator for any finite parametric gain in the form of a “generalized” quadrature operator along an axis of a complex angle ϑ =θ + iγ,
[ a_s (g) = M( x cosϑ + y ^†sinϑ) ≡ Mx̃_ϑ, ]
where the imaginary part of the quadrature axis and the normalization factor M relate to the gain g by tanhγ=e^-2g, M^2=2/sinh2γ.
Thus, the single-shot measurement of the output light intensity with any parametric gain reflects the intensity of the "generalized" quadrature at this gain value, and not the standard (real) quadrature. The commutation relation of these “generalized” quadratures is
[ [x̃_ϑ,x̃_ϑ^† ]=1/M^2 ≈e^-2g, ]
where the approximation is valid already for moderate gain of g≥1. Consequently, the commutator of the measured generalized quadratures, converges very quickly to that of the real quadratures.
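The convergence is easy to tabulate from the stated relations tanhγ=e^-2g and M^2=2/sinh2γ:

import numpy as np

for g in (0.5, 1.0, 1.5, 2.0, 3.0):
    gamma = np.arctanh(np.exp(-2*g))
    M2 = 2.0/np.sinh(2*gamma)
    print(f"g = {g:3.1f}:  1/M^2 = {1/M2:.5f}   e^(-2g) = {np.exp(-2*g):.5f}")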
§.§ Applicability to Quantum Tomography
Quantum state tomography is a major application of homodyne measurement. It allows reconstruction of an arbitrary quantum state (or its density matrix or Wigner function) from a set of quadrature measurements along varying quadrature axes <cit.>. Unique reconstruction requires a complete measurement of the quadrature distribution function, which necessitates single-shot measurements of the instantaneous quadrature value, not just its average. Although both standard two-mode homodyne and parametric homodyne provide incomplete quadrature information in a single shot (in somewhat different ways), they still allow reconstruction of the quantum state under some assumptions. Hereon we review the different limitations of both methods and their implications for quantum tomography, leading to the conclusion that a combination of parametric homodyne followed by standard homodyne alleviates all the limitations and allows unambiguous reconstruction of arbitrary states.
Standard two-mode homodyne cannot provide a complete measurement of x (ω) in a single shot since standard homodyne is a destructive measurement. Specifically, observation of Re[x (ω)]=x_s+x_i requires a standard homodyne measurement of both frequency modes, which inevitably destroys the quantum state by photo-detection and prevents a consecutive measurement of Im[x (ω)]=y_s-y_i. Splitting the state into two measurement channels is impossible since such a splitting will inevitably introduce additional vacuum noise. Thus, although Re[x(ω)] and Im[x (ω)] commute, standard two-mode homodyne can evaluate only one of them in a single shot. In analogy to light polarization, standard homodyne acts as an absorptive polarizer that detects one polarization but absorbs the other, preventing complete analysis of the polarization state.
Our current realization of parametric homodyne suffers from a different ambiguity in a single shot (envelope phase). Since parametric homodyne measures only the instantaneous intensity of the quadrature x ^† x (across a wide spectrum), but not its phase, only the probability distribution of the intensity P(x^†x) can be measured.
Let us analyze the ambiguity that is introduced to the reconstruction of a two-mode quantum state by the incomplete measurement, for both standard homodyne (only real part) and parametric homodyne (only intensity). For standard homodyne, the interpretation of a null result is ambiguous: A zero measurement can arise either from a “true” null of the measured quadrature or from a wrong selection of the envelope phase. Thus, standard homodyne can reconstruct a two-mode quantum state only if the envelope phase is fixed and known a-priori. For two-mode squeezed vacuum however, which is the major two-mode quantum state that is experimentally accessible, the envelope phase is random, indicating that standard homodyne can provide only the average fluctuations ⟨ x ^†x ⟩=⟨(x_s+x_i)^2⟩ +⟨(y_s-y_i)^2⟩, but not the single shot value of the quadrature (or its intensity).
For parametric homodyne, where the quadrature intensity is measured, null (or any intensity) is unambiguously interpreted for any envelope phase, but the sign of the measured quadrature is ambiguous. Thus, complete reconstruction is possible (for any envelope phase) only if the symmetry of the quadratures is known, which is relevant to a large set of important quantum states. For example, photon-number states or squeezed states <cit.> that are known to be symmetric can be reconstructed, and indeed the non-classicality of a single-photon state is directly manifested by the fact that P(x_θ^†x_θ=0)=0 for any quadrature axis θ, which inevitably indicates negativity of the Wigner function at zero field. Yet, a two-mode coherent state |±α⟩ and cat states like |α⟩± |-α⟩ <cit.> can be differentiated only if the symmetry of the state is assumed a-priori. For broadband squeezed vacuum, where the envelope phase is inherently random, this measurement is ideal.
Clearly, the two methods complement each other in their capabilities, indicating that a combination of parametric homodyne with interferometric detection is the perfect solution to a complete measurement, as illustrated in figure <ref>. Specifically, parametric gain is a non-demolition process (contrary to standard homodyne) that provides a light output and allows extraction of the complete quadrature information in a single shot, including the phase. Thus, if the measured quadrature is amplified sufficiently above the vacuum, this quadrature becomes insensitive to loss, even for moderate gain values. The parametric output light can thus be split into two homodyne channels that measure both Re[x(ω)] and Im[x(ω)] simultaneously (see figure <ref>). The splitting does not hamper the measurement (contrary to standard homodyne) since the added vacuum affects primarily the attenuated quadrature, which is not measured.
In the literature, the possibility to add a parametric amplifier before electronic detection has been analyzed in several different contexts. Already the seminal paper of Caves from 1981, which introduced squeezed vacuum to the unused port of an interferometer for sub-shot-noise interferometric measurement, suggested including a parametric amplifier in the detection arm to overcome the quantum inefficiency of photo-detectors <cit.>; Leonhardt later suggested a similar use of parametric amplification for quantum tomography that is insensitive to loss <cit.>; Ralph suggested it for teleportation <cit.>; and Davis et al. for analysis of atomic spin-squeezing <cit.>. Most recently, this concept was experimentally implemented for atomic spin measurements in <cit.>, enabling phase detection down to 20dB below the standard quantum limit with inefficient detectors.
§.§ Comparison to Standard Homodyne
It is illuminating to examine on equal footing standard homodyne measurement and the parametric homodyne method. After all, the balanced detection in standard homodyne produces a down-converted RF field at the difference-frequency of the two optical inputs (LO and signal), similar to optical down-conversion, which is the core of parametric amplification. In that view, the well-known 'homodyne gain' of balanced detection (proportional to the LO field) produces an amplified electronic version of the input quantum quadrature, directly analogous to the parametric gain (proportional to the pump amplitude), which optically amplifies a single input quadrature. Thus, both the standard homodyne gain and the optical parametric gain serve the same homodyne purpose - to amplify the quantum input of interest (the optical quadrature) to a classically detectable output level <cit.>, which is sufficiently above the measurement noise (the electronic noise for standard homodyne or the optical vacuum noise for parametric homodyne). Consequently, standard and parametric homodyne are two faces of the same concept.
The difference between the two schemes is both technical and conceptual. On the technical level, the gain of standard homodyne is generally very large, allowing to a-priori neglect any effect of the unmeasured quadrature on the electrical output, whereas the optical parametric gain may not be sufficient to justify such an a-priori assumption and may require more careful analysis of the output with finite gain, as we described earlier. On the conceptual level, parametric homodyne provides an optical output, as opposed to standard homodyne that destroys the optical fields. Since the optical parametric output can be sufficiently “classical” (amplified above the vacuum level), it is far less sensitive to additional vacuum noise from optical loss or detector inefficiency. Consequently, parametric homodyne does not only preserve the optical bandwidth across the quantum-classical transition (see figure <ref>), but can also allow complete reconstruction of the two-mode quadrature in a single shot, as was explained in the previous sub-section. Hence, adding a layer of optical parametric gain before the electronic photo-detection, be it intensity detection or homodyne provides a fundamental new freedom to quantum measurement beyond the ability to preserve the optical bandwidth.
§.§ Beyond the Single-Frequency Two-Mode Field
Last, let us briefly consider broadband time-dependent states of light beyond the single-frequency two-mode state. Any classical wave-packet with spectral envelope f(ω)=|f(ω)|e^iφ(ω) around the carrier frequency Ω (normalized to ∫dω|f(ω)|^2=1) can be regarded as an electromagnetic mode with associated quantum field operators
a_f(t) = ∫dω f(ω) a(ω) e^-iω t
a_f^†(t) = ∫dω f^⋆(ω) a^†(ω) e^iω t ,
and associated temporal quadratures
x_f(t) = ∫dω e^-iω t [ f(ω) a(ω) + f^⋆(-ω) a^†(-ω) ]
y_f(t) = i ∫dω e^-iω t [ f^⋆(-ω) a^†(-ω) - f(ω) a(ω) ] ,
which is just the Fourier representation of equation <ref>.
We can express the temporal quadrature x_f(t) in terms of the two-mode quadratures x(ω),y(ω) as
x_f(t) = ∫dω e^-iω t [ (f(ω)+f^⋆(-ω))/2 · x(ω) + i (f(ω)-f^⋆(-ω))/2 · y^†(ω) ]
y_f(t) = ∫dω e^-iω t [ (f(ω)+f^⋆(-ω))/2 · y^†(ω) - i (f(ω)-f^⋆(-ω))/2 · x(ω) ] ,
where the symmetric and anti-symmetric parts of the wave-packet, (f(ω)+f^⋆(-ω))/2 and (f(ω)-f^⋆(-ω))/2, are the Fourier transforms of Re f(t) and Im f(t), the real and imaginary parts of the field envelope in time.
Equation <ref> can be simplified considerably when the spectrum of the wave-packet is symmetric |f(ω)|=|f(-ω)|, which is the major situation to employ a quadrature representation to begin with. The temporal quadrature x_f(t) is then simply a superposition of many two-mode components x_θ(ω) with a spectrally varying axis θ(ω) and envelope phase δ(ω)
x_f(t) = ∫dω e^-iω t |f(ω)| e^iδ(ω) x_θ(ω)(ω)
       = ∫dω e^-iω t |f(ω)| e^iδ(ω) [ x(ω) cosθ(ω) + y^†(ω) sinθ(ω) ]
y_f(t) = ∫dω e^-iω t |f(ω)| e^iδ(ω) y^†_θ(ω)(ω)
       = ∫dω e^-iω t |f(ω)| e^iδ(ω) [ y^†(ω) cosθ(ω) - x(ω) sinθ(ω) ] .
The quadrature axis of each two-mode component is dictated by its carrier phase θ(ω)=(φ(ω)+φ(-ω))/2 - the symmetric part of the spectral phase of the wave-packet φ(ω); and the two-mode envelope phase δ(ω)=(φ(ω)-φ(-ω))/2 relates to the antisymmetric part of φ(ω). Thus, for a transform-limited mode, where φ(ω)=0, both the envelope phase and the quadrature axis are constant across the spectrum: δ(ω)=0, θ(ω)=0. An antisymmetric modulation (φ(ω)=-φ(-ω)) will affect only the envelope phase but keep the quadrature axis constant, θ(ω)=0, as is the case for down-converted light. A purely symmetric modulation, φ(ω)=φ(-ω), as due to material dispersion, will affect only the quadrature axis but keep the envelope phase constant, δ(ω)=0.
Therefore, measurement of an arbitrary generalized quadrature of broadband light requires measurement (or knowledge) of two spectral degrees of freedom - the quadrature axis θ(ω) and the envelope phase δ(ω). Parametric homodyne with intensity measurement provides complete information of the quadrature axis θ(ω) (by measuring the output spectrum for varying pump phase), but is insensitive to δ(ω). It therefore allows measurement if δ(ω) is either unimportant (down-conversion) or known a-priori (transform limit or well defined pulse), which is relevant to all current sources of broadband quantum light in spite of the limitations. The combination of parametric gain followed by standard homodyne allows complete arbitrary measurement, as explained above.
§ DISCUSSION
It is interesting to note that the effect of two parametric amplifiers in series was deeply explored previously in the context of quantum interference <cit.>. In such a series configuration, interference occurs between two possibilities for generating bi-photons, either in the first amplifier or in the second, depending on the pump phase. The interference contrast can reach unity when the parametric gain of the two amplifiers is identical (assuming no loss), which testifies to the quantum nature of the light in both the single-photon regime <cit.> and at high-power <cit.>. Here however, we consider the second amplifier as a measurement device, independent of the source of light to be measured. This light source can be, but is certainly not limited to be, a squeezing parametric amplifier. Clearly, any other source of quantum light is relevant when homodyne measurement is of interest, such as single photons, Fock states, NOON states, Schrödinger cat states, etc.
A different optical measurement of quantum light was recently reported in <cit.>, where vacuum fluctuations of THz radiation were observed in time. There too, an optical nonlinearity (of several THz bandwidth) was utilized for a direct measurement, where the large bandwidth of the nonlinearity was key to enable time sampling of the vacuum fluctuations, well within a single optical-cycle of the measured THz mode.
To conclude, we presented a new approach to optical homodyne measurement with practically unlimited bandwidth, which adds a layer of optical parametric amplification before the photo-detection, and enables simultaneous quadrature measurement across the entire spectrum with a single LO. This measurement removes major limitations of optical homodyne and opens a wide window for efficient utilization of the bandwidth resource for parallel quantum information processing. An interesting expansion of this concept would be where the pump itself includes more than one mode, for measurement of "hyper" entanglement between different frequency pairs of the frequency comb with a multi-mode pump <cit.>.
This research was funded by the 'Bikura' (FIRST) program of the Israel science foundation (ISF grant #44/14).
SqueezedStatesMeasurementNatureMlynek1997
G. Breitenbach, S. Schiller, and J. Mlynek, “Measurement of the quantum states
of squeezed light,” Nature, vol. 387, no. 6632, pp. 471–475, 1997.
SubPoissonianReview
L. Davidovich, “Sub-poissonian processes in quantum optics,” Reviews of
Modern Physics, vol. 68, no. 1, p. 127, 1996.
ScullyBook
M. O. Scully and M. S. Zubairy, Quantum optics.
Cambridge university press, 1997.
LoudonKnight1987
R. Loudon and P. L. Knight, “Squeezed light,” Journal of modern optics,
vol. 34, no. 6-7, pp. 709–759, 1987.
HomodyneTomographyRaymer1993
D. Smithey, M. Beck, M. G. Raymer, and A. Faridani, “Measurement of the wigner
distribution and the density matrix of a light mode using optical homodyne
tomography: Application to squeezed states and the vacuum,” Physical
review letters, vol. 70, no. 9, p. 1244, 1993.
TomographySqueezingAppGrangier2006
A. Ourjoumtsev, R. Tualle-Brouri, and P. Grangier, “Quantum homodyne
tomography of a two-photon fock state,” Phys. Rev. Lett., vol. 96,
p. 213601, Jun 2006.
OpticalHomodyneTomography
A. I. Lvovsky and M. G. Raymer, “Continuous-variable optical quantum-state
tomography,” Reviews of Modern Physics, vol. 81, no. 1, p. 299, 2009.
CatStates2007generation
A. Ourjoumtsev, H. Jeong, R. Tualle-Brouri, and P. Grangier, “Generation of
optical ‘schrödinger cats’ from photon number states,” Nature,
vol. 448, no. 7155, pp. 784–786, 2007.
TeleportationSqueezedLightLam1998
T. C. Ralph and P. K. Lam, “Teleportation with bright squeezed light,” Phys. Rev. Lett., vol. 81, pp. 5668–5671, Dec 1998.
QuantumTeleportationHomodyneAppPolzik1998
A. Furusawa, J. L. Sørensen, S. L. Braunstein, C. A. Fuchs, H. J. Kimble,
and E. S. Polzik, “Unconditional quantum teleportation,” Science,
vol. 282, no. 5389, pp. 706–709, 1998.
lee2011teleportation
N. Lee, H. Benichi, Y. Takeno, S. Takeda, J. Webb, E. Huntington, and
A. Furusawa, “Teleportation of nonclassical wave packets of light,” Science, vol. 332, no. 6027, pp. 330–333, 2011.
CVQuantumInfoReviewPeter2005
S. L. Braunstein and P. Van Loock, “Quantum information with continuous
variables,” Reviews of Modern Physics, vol. 77, no. 2, p. 513, 2005.
FreqCombEntanglementHomodyneAppPfister2011
M. Pysher, Y. Miwa, R. Shahrokhshahi, R. Bloomer, and O. Pfister, “Parallel
generation of quadripartite cluster entanglement in the optical frequency
comb,” Physical review letters, vol. 107, no. 3, p. 030505, 2011.
HomodyneBandwidthHuang2015
D. Huang, D. Lin, C. Wang, W. Liu, S. Fang, J. Peng, P. Huang, and G. Zeng,
“Continuous-variable quantum key distribution with 1 mbps secure key rate,”
Optics express, vol. 23, no. 13, pp. 17511–17519, 2015.
HomodyneBandwidthAppel2007
J. Appel, D. Hoffman, E. Figueroa, and A. Lvovsky, “Electronic noise in
optical homodyne tomography,” Physical Review A, vol. 75, no. 3,
p. 035802, 2007.
HomodyneBandwidthOkubo2008
R. Okubo, M. Hirano, Y. Zhang, and T. Hirano, “Pulse-resolved measurement of
quadrature phase amplitudes of squeezed pulse trains at a repetition rate of
76 mhz,” Optics letters, vol. 33, no. 13, pp. 1458–1460, 2008.
BiChromaticLO_Boyd2007
A. M. Marino, C. Stroud Jr, V. Wong, R. S. Bennink, R. W. Boyd, et al.,
“Bichromatic local oscillator for detection of two-mode squeezed states of
light,” JOSA B, vol. 24, no. 2, pp. 335–339, 2007.
QuantumComputingFreqCombHomodyneAppPfister2008
N. C. Menicucci, S. T. Flammia, and O. Pfister, “One-way quantum computing in
the optical frequency comb,” Physical review letters, vol. 101,
no. 13, p. 130501, 2008.
ChirpCompress
S. Harris, “Chirp and compress: toward single-cycle biphotons,” Physical
review letters, vol. 98, no. 6, p. 063602, 2007.
ShakedPomerantz2014
Y. Shaked, R. Pomerantz, R. Z. Vered, and A. Peer, “Observing the nonclassical
nature of ultra-broadband bi-photons at ultrafast speed,” New Journal
of Physics, vol. 16, no. 5, p. 053012, 2014.
VeredShaked2015
R. Z. Vered, Y. Shaked, Y. Ben-Or, M. Rosenbluh, and A. Peer,
“Classical-to-quantum transition with broadband four-wave mixing,” Physical review letters, vol. 114, no. 6, p. 063902, 2015.
NJPTimeBining2
T. Zhong, H. Zhou, R. D. Horansky, C. Lee, V. B. Verma, A. E. Lita,
A. Restelli, J. C. Bienfang, R. P. Mirin, T. Gerrits, et al.,
“Photon-efficient quantum key distribution using time–energy entanglement
with high-dimensional encoding,” New Journal of Physics, vol. 17,
no. 2, p. 022002, 2015.
LargeAlphabet
I. Ali-Khan, C. J. Broadbent, and J. C. Howell, “Large-alphabet quantum key
distribution using energy-time entangled bipartite states,” Physical
review letters, vol. 98, no. 6, p. 060503, 2007.
franson1989bell
J. D. Franson, “Bell inequality for position and time,” Physical Review
Letters, vol. 62, no. 19, p. 2205, 1989.
huntington2005demonstration
E. Huntington, G. Milford, C. Robilliard, T. Ralph, O. Glöckl, U. L.
Andersen, S. Lorenz, and G. Leuchs, “Demonstration of the spatial separation
of the entangled quantum sidebands of an optical field,” Physical
Review A, vol. 71, no. 4, p. 041802, 2005.
barbosa2013beyond
F. A. Barbosa, A. S. Coelho, K. N. Cassemiro, P. Nussenzveig, C. Fabre,
M. Martinelli, and A. S. Villar, “Beyond spectral homodyne detection:
complete quantum measurement of spectral modes of light,” Physical
review letters, vol. 111, no. 20, p. 200402, 2013.
NewFormalismSqueezedStatesCaves1985
C. M. Caves and B. L. Schumaker, “New formalism for two-photon quantum optics.
i. quadrature phases and squeezed states,” Physical Review A, vol. 31,
no. 5, p. 3068, 1985.
SciencePaulDLett2008
V. Boyer, A. M. Marino, R. C. Pooser, and P. D. Lett, “Entangled images from
four-wave mixing,” Science, vol. 321, no. 5888, pp. 544–547, 2008.
caves1982quantum
C. M. Caves, “Quantum limits on noise in linear amplifiers,” Physical
Review D, vol. 26, no. 8, p. 1817, 1982.
MariaNonlinearInterferometersReview
M. Chekhova and Z. Ou, “Nonlinear interferometers in quantum optics,” Advances in Optics and Photonics, vol. 8, no. 1, pp. 104–155, 2016.
DirectSamplingScince
C. Riek, D. V. Seletskiy, A. S. Moskalenko, J. F. Schmidt, P. Krauspe,
S. Eckart, S. Eggert, G. Burkard, and A. Leitenstorfer, “Direct sampling of
electric-field vacuum fluctuations,” Science, vol. 350, no. 6259,
pp. 420–423, 2015.
SpectralNoiseCorrelationHomodyneAppTreps2014
R. Schmeissner, J. Roslund, C. Fabre, and N. Treps, “Spectral noise
correlations of an ultrafast frequency comb,” Physical review letters,
vol. 113, no. 26, p. 263906, 2014.
§ METHODS
§.§ Calibration of the Parametric Amplifier
To obtain the quadrature information from the measured output intensity, the parametric amplifier must be calibrated. The required parameters are the gain coefficients |C| and |D|, which are linked by |C|^2-|D|^2=1 (without the phases, which define the quadrature axis), the average photon-numbers of the two input modes N̄_s, N̄_i (to evaluate the offset term -1/2(N̄_i - N̄_s + 1)), and the overall detector response per single photon n_0^2. Thus, independent measurements of the four parameters |C| or |D|, N̄_s, N̄_i and n_0^2 are required. In most squeezing applications however, the offset term may be treated as just -1/2, since it is proportional to the photon-number difference, which is generally zero for squeezed light when the loss is nearly symmetric.
Using equation <ref>, the measurement output (proportional to the FWM intensity) is
I_s = n_0^2 [ |C|^2 N̄_s + |D|^2 (N̄_i + 1) + C^⋆D ⟨a_s^† a_i⟩ + C D^⋆ ⟨a_s a_i^†⟩ ] .
For calibration we use measurements that are independent of phase-coherent terms (⟨a_s^† a_i⟩ = ⟨a_s a_i^†⟩ = 0 or D = 0), allowing us to write I_s = n_0^2 [ |C|^2 N̄_s + |D|^2 (N̄_i + 1) ].
We first measure the output intensity in two scenarios: 1. I_zsi, blocking the signal and idler (vacuum input), and 2. I_zs, blocking the signal (only idler input). (In analogy to the engineering formalism for evaluating linear systems by measuring their response in various cases, termed zero input response (ZIR) and zero state response (ZSR), we use a similar index for the various parametric responses: zero signal (ZS), zero idler (ZI), zero signal and idler (ZSI) and zero pump (ZP).) These measurements provide (with the aid of equation <ref>) I_zsi = n_0^2 |D|^2 and I_zs = n_0^2 |D|^2 (N̄_i + 1), indicating that the ratio between these two measurements provides the idler average photon-number N̄_i = I_zs/I_zsi - 1. Note that these two measurements act as a simple method for acquiring the input number of photons independent of the parametric gain. A measurement of the signal photon-number N̄_s can be easily acquired by measuring the output idler intensities in the same way.
Next, we use the knowledge of the input photon-numbers to calibrate the overall detector response n_0^2. We measure: 3. I_zp, blocking the pump (zero amplification, |C|=1, |D|=0, letting the signal and idler through). Again, from equation <ref> we find n_0^2 = I_zp/N̄_s.
Once the detector response is known, we can extract the parametric gain coefficients |C|, |D| from the I_zsi measurement, since |D|^2 = I_zsi/n_0^2 (and |C|^2 = |D|^2 + 1).
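The resulting calibration chain is compact enough to state in code. The following minimal Python sketch (the function name and argument conventions are ours, not part of any published analysis code) implements the three steps above:

```python
import math

def calibrate_parametric_amplifier(I_zsi, I_zs, I_zp, N_s):
    """Recover the calibration constants from the blocked-beam measurements.

    I_zsi : output intensity with signal and idler blocked (vacuum input)
    I_zs  : output intensity with only the signal blocked (idler input only)
    I_zp  : output intensity with the pump blocked (no amplification)
    N_s   : independently measured average signal photon-number
    """
    # Ratio of the two blocked measurements gives the idler photon-number:
    # I_zs / I_zsi = N_i + 1.
    N_i = I_zs / I_zsi - 1.0
    # With the pump blocked, |C| = 1 and |D| = 0, so I_zp = n0^2 * N_s.
    n0_sq = I_zp / N_s
    # Vacuum input gives I_zsi = n0^2 |D|^2, and |C|^2 - |D|^2 = 1.
    D = math.sqrt(I_zsi / n0_sq)
    C = math.sqrt(D ** 2 + 1.0)
    return N_i, n0_sq, C, D
```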
The calibration process is needed only once for any measured input, as long as the parametric measurement gain is constant, and as long as the average photon-number difference N̅_i-N̅_s does not change (typically for squeezed input, this difference is simply zero).
§.§ Extraction of both Quadratures (Average)
The two quadratures cannot be measured simultaneously, but their average intensities can both be extracted from two measurements of the parametric output intensity, amplifying one quadrature first (I_x) and then the other (I_y), according to
⟨x^†x⟩ = 1/(r^2-q^2) [ r (I_x/n_0^2 - p) - q (I_y/n_0^2 - p) ]
⟨y^†y⟩ = 1/(r^2-q^2) [ r (I_y/n_0^2 - p) - q (I_x/n_0^2 - p) ] ,
where n_0^2 is the detector response per single photon and the coefficients p,q,r are:
p = (N̄_s - N̄_i - 1)/2
q = (|C| + |D|)^2/4
r = (|C| - |D|)^2/4 .
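As an illustration, a minimal Python sketch of this inversion (again with our own naming conventions) reads:

```python
def quadrature_intensities(I_x, I_y, n0_sq, C, D, N_s, N_i):
    """Invert the two orthogonal parametric measurements I_x, I_y into
    the average quadrature intensities <x†x> and <y†y>."""
    p = 0.5 * (N_s - N_i - 1.0)
    q = 0.25 * (abs(C) + abs(D)) ** 2
    r = 0.25 * (abs(C) - abs(D)) ** 2
    det = r ** 2 - q ** 2
    xx = (r * (I_x / n0_sq - p) - q * (I_y / n0_sq - p)) / det
    yy = (r * (I_y / n0_sq - p) - q * (I_x / n0_sq - p)) / det
    return xx, yy
```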
§.§ Details of the Experimental Setup
In our experiment (fig. <ref> and fig. <ref> in the extended data), we generate an ultra broadband two-mode squeezed vacuum via collinear four-wave mixing (FWM) in a photonic-crystal fiber (PCF), that is pumped by narrowband 12ps pulses at 786nm with up to 100mW average power. The broad bandwidth is obtained by closely matching the pump wavelength to the zero-dispersion of the fiber at 784nm <cit.>, resulting in a signal and idler bandwidth of ∼55THz each, with ∼90THz mean frequency separation between the mode centers (700nm - signal center, and 900nm - idler center). After generation, the pump is separated from the FWM field into a different optical path by a narrowband filter (NBF1 - Semrock NF03-808E-25), allowing independent control of the relative pump phase. The pump phase is actively locked to the phase of the FWM using an electro-optic modulator and a fast feedback loop. Both the FWM and pump fields are reflected back (mirrors M1, M2) towards the PCF for a second pass, which then acts as the homodyne measurement. The final parametric amplified spectrum (after the second "homodyne" pass) is filtered from the pump (NBF2 - Semrock NF03-785E-25 ) and measured with a cooled CCD-spectrograph (SpectraPro 2300i).
In order to partially compensate for the temporal pulse effects due to SPM of the pulsed pump, we used the original pump pulse from the first pass through the PCF also for the second pass. This guaranteed that the pump and the FWM accumulated nearly the same phase modulation (either SPM for the pump or XPM for the FWM light). Polarization manipulations were used to tune the effective parametric gain in the second (measurement) pass independently of the squeezing strength in the first pass: Since the phase matching conditions in the PCF are polarization dependent, the observed FWM spectrum is generated only by one polarization of the pump (this fact was extensively verified).
Thus, rotating the pump polarization before the first pass with a half wave plate (HWP) we could transfer part of the pump power through the fiber without affecting the FWM. This power could later be used in the 2nd pass by rotating its polarization back to the PCF axis with a quarter wave plate (QWP) in the pump beam path. This extra pump power accumulated almost the same phase modulation as the FWM, but without affecting the squeezing generation.
The various calibration measurements were performed by manipulating the FWM light between the passes either by physically blocking the FWM beam (vacuum input) or pump beam (zero amplification) or with a high efficiency optical long-pass filter (idler input only) (Semrock FF776-Dio1). The two orthogonal homodyne measurements (amplifying the squeezed quadrature or the stretched quadrature) were acquired by tuning the offset of the active feedback loop that locked the pump phase.
§.§ Effects of the Pulsed Pump
In our experiment the pump for both generation of the squeezed light and for the parametric homodyne measurement (2nd pass) is a pulsed laser of ≈12ps duration. Since the bandwidth of the generated FWM (55THz) is much larger than the pump bandwidth (<0.1THz), we could account for the main effect of the pulse shape as an adiabatic variation of the parametric gain and phase modulation (SPM, XPM) along the temporal profile of the pump pulse. Thus, the adiabatic variation can be discretized in time, referring to time instances within a single pulse as separate parametric events of varying gain and phase. However, since the integration time of the photo-detectors in the CCD-spectrograph is much longer (∼10ms), the measured homodyne data is averaged over the entire shape of many pulses.
The effect of the pulse on the parametric gain alone changes the generated squeezing and the measurement gain with time, measuring weak squeezing with weak parametric gain at the edges of the pulse, and strong squeezing with strong parametric gain at the peak. The phase modulation (SPM,XPM) of the FWM process has a more severe effect, since it modulates in time the quadrature axis to be amplified. As a result, due to the pump pulse shape, the amplified quadrature axis of the FWM field rotates with time. Luckily, when the pump itself experiences nearly the same phase modulation (SPM) it can still act as a near perfect LO (phase regarding) for measuring the FWM, even after passage through the fiber. The small residual difference between the pump SPM and the FWM XPM causes the amplified FWM quadrature to rotate with time, mixing different quadrature axes together in the same measurement, smearing out some of the squeezing.
Ideally, we would like to extract the maximum squeezing that occurs at the peak of the pulse from the time averaged measurements. To estimate this peak squeezing we numerically simulated the entire FWM generation and parametric amplification along the pump pulse with 50fs temporal resolution (corresponding to the coherence time of the FWM). The simulation incorporated the measured pump pulse energy, the measured loss and fiber coupling efficiencies, and an assumed hyperbolic-secant temporal shape of the pump pulse (12ps). Using the simulation we could calculate both the average and the peak outputs of the process, allowing us to estimate the squeezing at the peak of the pulse from the measured averaged homodyne output. Figure <ref> in the extended data demonstrates the relation between the peak homodyne output and the average homodyne output, as the parametric measurement gain is varied. As long as the generation pump power does not exceed a specific limit (∼60mW in our experiment), the pulse averaging only affects the absolute measured squeezing values (which can be roughly estimated) but not the expected trends of the experiment (increasing the loss, the squeezing power or the parametric power).
§.§ Expanded Results
To verify the properties of the parametric homodyne, we measured the quadrature squeezing ⟨ x ^†x ⟩, and the uncertainty area, ⟨ x ^†x ⟩×⟨ y ^†y ⟩ of the squeezed state as described in the main text.
Another important verification of our squeezing measurement is to observe the effect of loss on the measured quadrature squeezing and stretching. We measured the quadrature intensities after applying a set of known attenuations (30% - 66% loss), and reconstructed the 'bare' quadratures before loss, which indeed collapsed to the same value, as shown in figure <ref>(c,d) in the extended data.
The effect of loss on the quadrature intensity can be regarded as propagation through a beam-splitter with one open port. The relations between the operators of the two inputs (a_1, a_2) and two outputs (b_3,b_4) of the beam-splitter can be defined as b_3 = T a_1+R a_2 and b_4 = T a_1-R a_2, where T and R are the transmission and reflection (loss) amplitudes. In these terms, the quadrature operator at output port 3 is: x_3 = T x_1+R x_2, and the expectation value of the quadrature intensity is
⟨x_3^2⟩ = |T|^2 ⟨x_1^2⟩ + |R|^2 ⟨x_2^2⟩ + 2RT ⟨x_1⟩⟨x_2⟩ .
Assuming a vacuum state at the open input port 2 (zero mean field, and unit quadrature variance in our normalization), the final expression becomes
⟨x_3^2⟩ = |T|^2 ⟨x_1^2⟩ + |R|^2 .
Hence, the 'bare' quadratures, before the loss, can be reconstructed using
⟨x^†x_bare⟩ = (⟨x^†x_measured⟩ - |R|^2) / |T|^2 .
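A one-line Python version of this loss correction, assuming the loss fraction |R|^2 is known, could read:

```python
def bare_quadrature(xx_measured, loss):
    """Undo a known linear loss, modeled as a beam splitter with vacuum
    entering the open port: <x†x>_meas = |T|^2 <x†x>_bare + |R|^2,
    with |T|^2 = 1 - loss and |R|^2 = loss."""
    return (xx_measured - loss) / (1.0 - loss)
```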
As a complementary evaluation, we studied the parametric measurement-amplifier output as a function of its own gain, while maintaining the squeezing generation gain constant. For this, we gradually increased the pump power in the second pass up to 5.5 times the pump power that generated the squeezing in the first pass. When the parametric gain is strong enough, the output intensity relative to the vacuum level (without input) is directly proportional to the input quadrature. Hence, we expect the relative-output to stabilize as the parametric gain is increased, and indeed the observed reduction below the vacuum level stabilized at 5%. Figure <ref> in the extended data shows the measured results and addresses the pulse effects on this measurement.
§ EXTENDED DATA
|
http://arxiv.org/abs/1701.07542v2 | 20170126014418 | Lepton identification at particle flow oriented detector for the future $e^{+}e^{-}$ Higgs factories | [
"Dan Yu",
"Manqi Ruan",
"Vincent Boudry",
"Henri Videau"
] | physics.ins-det | [
"physics.ins-det",
"hep-ex"
] |
e2e-mail: ruanmq@ihep.ac.cn
IHEP, China
LLR, Ecole Polytechnique, France
Lepton identification at particle flow oriented detector for the future e^+e^- Higgs factories
Dan Yuaddr1, addr2
Manqi Ruane2,addr1
Vincent Boudryaddr2
Henri Videauaddr2
=====================================================================================================
The lepton identification is essential for the physics programs at the high-energy frontier, especially for the precise measurement of the Higgs boson. For this purpose, a Toolkit for Multivariate Data Analysis (TMVA) based lepton identification algorithm (LICH, for Lepton Identification in Calorimeter with High granularity) has been developed for detectors using high granularity calorimeters.
Using the conceptual detector geometry for the Circular Electron-Positron Collider (CEPC) and single charged particle samples with energy larger than 2 GeV, LICH identifies electrons/muons with efficiencies higher than 99.5% and keeps the mis-identification rate of hadrons to muons/electrons below 1%/0.5%. Even when the calorimeter granularity is reduced by 1-2 orders of magnitude, the lepton identification performance remains stable for particles with E > 2 GeV.
Applied to fully simulated eeH/μμH events, the lepton identification performance is consistent with the single particle case: the efficiency of identifying all the high energy leptons in an event is 95.5-98.5%.
§ INTRODUCTION
After the Higgs discovery, the precise determination of the Higgs boson properties becomes the focus of particle physics experiments.
Phenomenological studies show that physics at the TeV scale could be revealed if the Higgs couplings were measured with percent-level accuracy<cit.><cit.>.
The LHC is a powerful Higgs factory.
However, the precision of Higgs measurements at the LHC is limited by the huge QCD background and by large theoretical and systematic uncertainties.
In addition, the Higgs signal at the LHC is usually tagged by the Higgs decay products, making those measurements always model dependent.
Therefore, the precision of Higgs couplings at the HL-LHC is typically limited to 5-10% level depending on theoretical assumptions <cit.><cit.>.
In terms of Higgs measurements, the electron-positron colliders play a role complementary to the hadron colliders with distinguishable advantages.
Many electron-positron Higgs factories have been proposed, including the International Linear Collider (ILC), the Compact LInear Collider (CLIC), the Future e+e- Circular Collider (FCC-ee) and the CEPC <cit.><cit.><cit.>.
These proposed electron-positron Higgs factories select and reconstruct Higgs events with an efficiency close to 100%, and determine the absolute values of the Higgs couplings.
Compared to the LHC, these facilities have much better accuracy on the Higgs total width measurements and Higgs exotic decay searches, in addition the accuracies of Higgs measurements are dominated by statistic errors.
For example, the circular electron-positron collider (CEPC) is expected to deliver 1 million Higgs bosons in its Higgs operation, with which the Higgs couplings will be measured to percent or even per mille level accuracy<cit.>.
The lepton identification is essential to the precise Higgs measurements.
The Standard Model Higgs boson has roughly 10% chance to decay into final states with leptons, for example, H→ WW* →llvv/lvqq, H→ZZ*→llqq, H→ττ, H→μμ, etc.
The SM Higgs also has a branching ratio Br(H→bb) = 58%, while the lepton identification provides an important input for the jet flavor tagging and the jet charge measurement.
On top of that, the Higgs boson has a significant chance to be generated together with leptons.
For example, in ZH events, the leading Higgs production process at 240-250 GeV electron-positron collisions, about 7% of the Higgs bosons are generated together with a pair of leptons (Br(Z→ee) = Br(Z→μμ) = 3.36%).
At the electron-positron collider, ZH events with Z decaying into a pair of leptons is regarded as the golden channel for the HZZ coupling and Higgs mass measurement<cit.>.
Furthermore, leptons are intensively used as a trigger signal for the proton colliders to pick up the physics events from the huge QCD backgrounds.
The Particle Flow Algorithm (PFA) becomes the paradigm of detector design for the high energy frontier<cit.>.
The key idea is to reconstruct every final state particle in the most suited sub-detectors, and reconstruct all the physics objects on top of the final state particles.
The PFA oriented detectors have high efficiency in reconstructing physics objects such as leptons, jets, and missing energy.
The PFA also significantly improves the jet energy resolution, since the charged particles, which contribute the majority of jet energy, are usually measured with much better accuracies in the trackers than in the calorimeters <cit.>.
To reconstruct every final state particle, the PFA requires excellent separation by employing highly-granular calorimeters.
In the detector designs of the International Large Detector (ILD) or the Silicon Detector (SiD) <cit.>, the total number of readout channels in calorimeters reaches the 10^8 level.
In addition to cluster separation, detailed spatial, energy and even time information on the shower developments is provided.
An accurate interpretation of this recorded information will enhance the physics performance of the full detector <cit.>.
Using the information recorded in the high granularity calorimeter and the dE/dx information recorded in the tracker, LICH (Lepton Identification in Calorimeter with High granularity), a dedicated lepton identification algorithm for Higgs factories, has been developed. Using the CEPC conceptual detector geometry <cit.> (based on ILD) and the Arbor <cit.> reconstruction package, its performance is tested on single particles and physics events.
For single particles with energy higher than 2 GeV, LICH reaches efficiencies better than 99.5% in identifying muons and electrons, and 98% for pions. Its performance on physics events (eeH/μμH) agrees with the efficiency at the single particle level.
This paper is organized as follows. The detector geometry and the samples are presented in section 2. In section 3, the discriminant variables measured from charged reconstructed particles are summarized and the algorithm architecture is presented. In section 4, the LICH performance on single particle events is presented. In section 5, the correlations between LICH performance and the calorimeter geometry are explored. In section 6, the LICH performance on ZH events where Z decays into ee or μμ pairs is studied, the results are then compared with that of single particle events. In section 7, the results are summarized and the impact of calorimeter granularity is discussed.
§ DETECTOR GEOMETRY AND SAMPLE
In this paper, the reference geometry is the CEPC conceptual detector <cit.>, which is developed from the ILD geometry <cit.>. ILD is a PFA oriented detector meant to be used for centre of mass energies up to 1 TeV. It is equipped with a low material tracking system and a calorimeter systems with extremely high granularity.
In this CEPC conceptual detector design, the forward region and the yoke thickness have been adjusted to the CEPC collision environment with respect to the ILD detector.
The core part of this detector is a large solenoid of 3.5 Tesla. The solenoid system has an inner radius of 3.4 meters and a length of 8.05 meters, inside which both tracker and calorimeter system are installed.
The tracking system is composed of a TPC as the main tracker, a vertex system, and the silicon tracking devices. The amount of material in front of the calorimeter is kept to ∼ 5% radiation length.
Both ECAL and HCAL use sampling structures and have extremely high granularity. The ECAL uses tungsten as the absorber and silicon for the sensor.
In depth, the ECAL is divided into 30 layers, and in the transverse direction, each layer is divided into 5 by 5 mm^2 cells.
The HCAL uses stainless steel absorber and GRPC(Glass Resistive Plate Chamber) sensor layers. It uses 10 by 10 mm^2 cells and has 48 layers in total.
As a Higgs factory, the CEPC will be operated at 240-250 GeV center of mass energy.
To study the lepton identification performance, we simulated single particle samples (pion+, muon-, and electron-) over an energy range of 1-120 GeV (1, 2, 3, 5, 7, 10, 20, 30, 40, 50, 70, 120 GeV).
At each energy point, 100k events are simulated for each particle type.
These samples follow a flat distribution in theta and phi over the 4π solid angle.
These samples are reconstructed with Arbor (version 3.3). To disentangle the lepton identification performance from the effect of PFA reconstruction and geometry defects, we select those events where only one charged particle is reconstructed.
The total number of these events is recorded as N_1 Particle, and the number of these events identified with correct particle types is recorded as N_1 Particle, T.
The performance of lepton identification is then expressed as a migration matrix in Table <ref>.
Its diagonal elements ϵ^i_i are the identification efficiencies (defined as N_1 Particle, T/N_1 Particle), and the off-diagonal elements P^i_j represent the probability for a particle of type i to be mis-identified as type j.
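For illustration, a minimal Python sketch of how such a migration matrix can be tabulated from lists of true and reconstructed particle types (our own naming; not part of LICH) is:

```python
import numpy as np

def migration_matrix(true_types, reco_types, labels=("e", "mu", "pi")):
    """Row-normalized confusion matrix: entry (i, j) is the probability for
    a type-i particle to be identified as type j. The diagonal reproduces
    the efficiencies N_{1 Particle, T} / N_{1 Particle}."""
    idx = {label: k for k, label in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)))
    for t, r in zip(true_types, reco_types):
        m[idx[t], idx[r]] += 1.0
    return m / m.sum(axis=1, keepdims=True)
```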
§ DISCRIMINANT VARIABLES AND THE OUTPUT LIKELIHOODS
LICH takes individual reconstructed charged particles as input, extracts 24 discriminant variables for the lepton identification, and calculates the corresponding likelihood to be an electron or a muon.
These discriminant variables can be characterized into five different classes:
* dE/dx
For a track in the TPC, the distribution of energy loss per unit distance follows a Landau distribution.
The dE/dx estimator used here is a truncated mean: the per-hit values are averaged after discarding the tails of the Landau distribution (the lowest 7% and the highest 30%); a minimal sketch is given after this list.
The dE/dx has strong discriminating power to distinguish electron tracks from others at low energy (under 10 GeV) (Figure <ref>).
* Fractal Dimension
The fractal dimension (FD) of a shower is used to describe the self-similar behavior of shower spatial configurations. Following the original definition in <cit.>, the FD is directly linked to the compactness of the particle shower.
At a fixed energy, the EM showers are much more compact than the muon or hadron shower, leading to a large FD.
The muon shower usually takes the configuration of a 1-dimensional MIP(Minimum Ionizing Particle) track, therefore has a FD close to zero.
The FD of a hadronic shower usually lies between those of EM showers and MIP tracks, since it contains both EM and MIP components.
A typical distribution of FD for 40 GeV showers is presented in Figure <ref>.
For any calorimeter cluster, LICH calculates 5 different FD values: from its ECAL hits, its HCAL hits, the hits in the first 10 or the last 20 layers of the ECAL, and all the calorimeter hits.
* Energy Distribution
LICH builds variables out of the shower energy information, including the fraction of energy deposited in the first 10 layers of the ECAL relative to the entire ECAL, and the energy deposited in cylinders around the incident direction with radii of 1 and 1.5 Moliere radii.
* Hits Information
Hits information refers to the number of hits in ECAL and HCAL and some other information obtained from hits, such as the number of ECAL (HCAL) layers hit by the shower, number of hits in the first 10 layers of ECAL.
* Shower Shape, Spatial Information
The spatial variables include the maximum distance between a hit and the extrapolated track, the maximum and average distances between shower hits and the axis of the shower (defined by the innermost point and the center of gravity of the shower), the depth (perpendicular to the detector layers) of the center of gravity, and the depth of the shower, defined as the depth difference between the innermost and the outermost hit.
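To make two of these estimators concrete, the following Python sketch implements the truncated-mean dE/dx exactly as described above, together with a generic box-counting fractal dimension; the exact FD definition used by LICH follows <cit.> and may differ in detail from this illustration:

```python
import numpy as np

def truncated_dedx(hit_dedx, low=0.07, high=0.30):
    """Truncated-mean dE/dx: sort the per-hit energy losses and average
    after discarding the lowest 7% and the highest 30% (Landau tails)."""
    v = np.sort(np.asarray(hit_dedx, dtype=float))
    lo, hi = int(low * v.size), v.size - int(high * v.size)
    return v[lo:hi].mean()

def fractal_dimension(hit_xyz, scales=(1, 2, 4, 8)):
    """Generic box-counting estimate of a shower's fractal dimension:
    group hits into cubes of edge a (in units of the readout cell size),
    count the occupied cubes N(a), and fit the slope of log N vs log(1/a)."""
    hits = np.asarray(hit_xyz, dtype=float)
    counts = [len({tuple(c) for c in np.floor(hits / a).astype(int)})
              for a in scales]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales, dtype=float)),
                          np.log(counts), 1)
    return slope
```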
The correlations of these variables at an energy of 40 GeV are summarized in Figure <ref>; the definitions of all the variables are listed in <ref>. It is clear that the dE/dx, measured from tracks, does not correlate with any other variables, which are measured from the calorimeters. Some of the variables are highly correlated, such as FD_ECAL (FD calculated from ECAL hits) and EcalNHit (number of ECAL hits). However, all these variables are kept because their correlations change with energy and polar angle.
LICH uses TMVA<cit.> methods to summarize these input variables into two likelihoods, corresponding to electrons and muons.
Multiple TMVA methods have been tested, and the Boosted Decision Trees with Gradient boosting (BDTG) method was chosen for its superior performance.
The e-likeness (L_e) and μ-likeness (L_μ) for different particles in a 40 GeV sample are shown in Figure <ref>.
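The training step can be illustrated with a short sketch. LICH itself uses ROOT's TMVA BDTG; the following Python stand-in uses scikit-learn's gradient boosting to produce the two one-vs-rest likelihoods, with purely illustrative hyperparameters:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_likelihoods(X, y, e_label=0, mu_label=1):
    """Train two one-vs-rest gradient-boosted classifiers whose scores
    play the roles of the e-likeness L_e and the mu-likeness L_mu.
    X: (n_particles, 24) array of the discriminant variables;
    y: true particle type per row."""
    clf_e = GradientBoostingClassifier(n_estimators=500, learning_rate=0.1)
    clf_mu = GradientBoostingClassifier(n_estimators=500, learning_rate=0.1)
    clf_e.fit(X, (np.asarray(y) == e_label).astype(int))
    clf_mu.fit(X, (np.asarray(y) == mu_label).astype(int))
    return clf_e, clf_mu

def likelihoods(clf_e, clf_mu, X):
    # Probability of the positive (electron / muon) class for each particle.
    return clf_e.predict_proba(X)[:, 1], clf_mu.predict_proba(X)[:, 1]
```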
§ PERFORMANCE ON SINGLE PARTICLE EVENTS
The phase space spanned by the lepton-likelihoods (L_e and L_μ) can be separated into different domains, corresponding to different catalogs of particles.
The domains for particles of different types can be adjusted according to physics requirements.
In this paper, we demonstrate the lepton identification performance on single particle samples using the following catalogs:
* Muon: L_μ > 0.5
* Electron: L_e > 0.5
* Pion: 1-(L_μ+L_e)> 0.5
* Undefined: L_μ < 0.5 & L_e < 0.5 & 1-(L_μ+L_e) < 0.5
With the above catalog, the probability for a single particle to be classified as undefined is very low (<10^-3).
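A minimal sketch of the resulting selection (the cut values are those listed above; the ordering of the checks is our choice and is immaterial in practice, since the likelihood regions barely overlap):

```python
def classify(L_e, L_mu):
    """Apply the likelihood cuts that define the particle catalogs."""
    if L_mu > 0.5:
        return "muon"
    if L_e > 0.5:
        return "electron"
    if 1.0 - (L_mu + L_e) > 0.5:
        return "pion"
    return "undefined"
```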
Since the distribution of these variables depends on the polar angle of the initial particle (θ), the TMVA is trained independently on four subsets:
* barrel 1: middle of barrel (| cosθ |< 0.3),
* barrel 2: edge of barrel (0.3 < |cosθ| < 0.7),
* overlap: overlap region of barrel and endcap (0.7 < |cosθ| < 0.8),
* endcap: (0.8 < |cosθ| < 0.98).
Taking the 40 GeV charged particle sample as an example, the migration matrix is shown in Table <ref>.
Compared to the ALEPH results for energetic taus<cit.>, the efficiencies are improved, and the mis-identification rates from hadrons to leptons are significantly reduced.
The lepton identification efficiencies (diagonal terms of the migration matrix) at different energies are presented in Figure <ref> for the different regions.
The identification efficiencies saturate at 99.9% for particles with energy higher than 2 GeV.
For those with energy lower than 2 GeV, the performance drops significantly, especially in the barrel2 and overlap regions.
For the overlap region, the complex geometry limits the performance, while for the barrel2 region, charged particles with Pt < 0.97 GeV cannot reach the barrel; they eventually hit the endcaps at a large incident angle, hence their signal is more difficult to catalog.
Concerning the off-diagonal terms of the migration matrix, the chances of electrons being mis-identified as muons or pions are negligible (P^e_μ, P^e_π<10^-3), and the crosstalk rate P^μ_e is observed at an even lower level.
However, the chances of pions to be mis-identified as leptons (P^π_e, P^π_μ) are of the order of 1% and are energy dependent.
In fact, these mis-identifications are mainly induced by the irreducible physics effects: pion decay and π^0 generation via π-nucleon collision.
Meanwhile, the muons also have a small chance to be mis-identified as pions at energy smaller than 2 GeV.
Figure <ref> shows the significant crosstalk terms (P^π_e, P^π_μ and P^μ_π) as a function of the particle energy in the endcap region.
The green shaded band indicates the probability of pion decay before reaching the calorimeter, which is roughly comparable with P^π_μ.
§ LEPTON IDENTIFICATION PERFORMANCE ON SINGLE PARTICLE EVENTS FOR DIFFERENT GEOMETRIES
The power consumption and electronics cost of the calorimeter system scale with the number of readout channels.
It is therefore important to evaluate the physics performance for different calorimeter granularities; here this is done by analyzing the LICH performance.
The performance is scanned over certain ranges of the following parameters:
* the number of ECAL layers: 20, 26, 30;
* the number of HCAL layers: 20, 30, 40, 48;
* the ECAL cell size: 5×5 mm^2, 10×10 mm^2, 20×20 mm^2, 40×40 mm^2;
* the HCAL cell size: 10×10 mm^2, 20×20 mm^2, 40×40 mm^2, 60×60 mm^2, 80×80 mm^2.
In general, the lepton identification performance is extremely stable over the scanned parameter space.
Only for an HCAL cell size larger than 60×60 mm^2 or an HCAL layer number less than 20 is a marginal performance degradation observed:
the efficiency of identifying muons degrades by 1-2% for low energy particles (E ≤ 2 GeV), and the identification efficiency for pions degrades slightly over the full energy range, see Figure <ref>.
§ PERFORMANCE ON PHYSICS EVENTS
The Higgs boson is mainly generated through the Higgsstrahlung process (ZH) and more marginally through vector boson fusion processes at electron-positron Higgs factories.
A significant part of the Higgs bosons will be generated together with a pair of leptons (electrons and muons).
These leptons are generated in the Z boson decay of the ZH process.
Electrons can also be generated together with the Higgs boson in Z boson fusion events, see Figure <ref>.
At the CEPC, 3.6×10^4 μμH events and 3.9×10^4 eeH events are expected at an integrated luminosity of 5 ab^-1. In these events, the particles are rather isolated.
The eeH and μμH events provide excellent access to the model-independent measurement of the Higgs boson using the recoil mass method <cit.>.
The recoil mass spectrum of eeH and μμH events is shown in Figure <ref>; it exhibits a high energy tail induced by radiation effects (ISR, FSR, bremsstrahlung, beamstrahlung, etc.), though at the CEPC the beamstrahlung effect is negligible.
The bremsstrahlung effects for muons are significantly smaller than those for electrons; the muon spectrum therefore has a higher maximum and a smaller tail.
Figure <ref> shows the energy spectrum for all the reconstructed charged particles in 10k eeH/μμH events.
The leptons can be classified into two classes: the initial leptons (those generated together with the Higgs boson) and those generated in the Higgs boson decay cascade.
For the eeH events, the energy spectrum of the initial electron exhibits a small peak at low energy, corresponding to the Z fusion events.
The precise identification of these initial leptons is the key physics objective for the lepton identification performance of the detector.
Since the lepton identification performance depends on the particle energy, and most of the initial leptons have an energy higher than 20 GeV,
we focus on the lepton identification performance for these high energy particles at detectors with two different sets of calorimeter cell sizes.
The μ-likeliness and e-likeliness of electrons, muons, and pions, for eeH events and μμH events are shown in Figure <ref> and Figure <ref>.
Table <ref> summarizes the definition of leptons and the corresponding performance at different conditions.
The identification efficiencies for the initial leptons are degraded by 1-2% with respect to the single particle case.
This degradation is mainly caused by shower overlap, and it is much more significant for electrons, as electron showers are much wider than those of muons, leading to a larger chance of overlap.
The electrons in μμH events, and vice versa, are generated in the Higgs decay.
Their identification efficiency and purity still remain at a reasonable level.
For charged leptons with energy lower than 20 GeV, the performance degrades by about 10% because of the larger background and the cluster overlap.
The event identification efficiency, defined as the chance of successfully identifying both initial leptons, is presented in the last row of Table <ref>; it is roughly the square of the single-lepton identification efficiency.
Comparing the performance of both geometries shows that when the number of readout channels is reduced by a factor of 4, the event reconstruction efficiency is degraded by 1.3% and 1.7% for μμH and eeH events, respectively.
§ CONCLUSION
The high granularity calorimeter is a promising technology for detectors at high energy frontier collider facilities. It provides good separation between different final state particles,
which is essential for the PFA reconstructions. It also records the shower spatial development and energy profile to an unprecedented level of details, which can be used for the energy measurement and particle identifications.
To exploit the capability of lepton identification with high granularity calorimeters and also to provide a viable toolkit for the future Higgs factories, LICH, a TMVA based lepton identification package dedicated to high granular calorimeter, has been developed.
Using mostly shower description variables extracted from the high granularity calorimeter, together with the dE/dx information measured in the tracker, LICH calculates the e-likeness and μ-likeness for each individually reconstructed charged particle. Based on these output likelihoods, the leptons can be identified according to different physics requirements.
Applied to single particle samples simulated with the CEPC_v1 detector geometry, the typical identification efficiency for electrons and muons is higher than 99.5% for energies above 2 GeV. For pions, the efficiency reaches 98%. These efficiencies are comparable to the performance reached by ALEPH, while the mis-identification rates are significantly improved. Ultimately, the performance is limited by irreducible confusion: the chance for a muon to be mis-identified as an electron and vice versa is negligible, while the mis-identification of pions as muons is dominated by pion decay.
The tested geometry uses an ultra-high granularity calorimeter: the ECAL (HCAL) cell size is 5×5 mm^2 (10×10 mm^2) and the number of ECAL/HCAL layers is 30/48.
In order to reduce the total channel number, LICH is applied to a much more modest granularity, it is found that the lepton identification performance degrades only at particle energies lower than 2 GeV for an HCAL cell size bigger than 60×60 mm^2 or with an HCAL layer number less than 20.
The lepton identification performance of LICH is also tested on the most important physics events at CEPC. In these events, multiple final state particles could be produced in a single collision, the particle identification performance will potentially be degraded by the overlap between nearby particles.
The lepton identification on eeH/μμH events at 250 GeV collision energy has been checked. The single-lepton identification efficiency is consistent with the single particle results.
The efficiency of finding both leptons decreases by 1-2% when the cell size doubles, meaning that the detector would need 2-4% more statistics to reach the same precision.
In eeH events, the performance degrades because the clustering algorithm still needs to be optimized.
To conclude, the ultra-high granularity calorimeter designed for the ILC provides excellent lepton identification for operation close to the ZH threshold. It may be a slight overkill for the CEPC, where a slightly reduced granularity can offer a better compromise. LICH, a dedicated lepton identification toolkit for future e+e- Higgs factories, is now available.
This study was supported by National Key Programme for S&T Research and Development (Grant NO.: 2016YFA0400400), the Hundred Talent programs of Chinese Academy of Science No. Y3515540U1, and AIDA2020.
§ APPENDIX SECTION
List and meaning of variables used in the TMVA which are not mentioned in the text:
* NH_ECALF10: Number of hits in the first 10 layers of ECAL
* FD_ECALL20: FD calculated using hits in the last 20 layers of ECAL
* FD_ECALF10: FD calculated using hits in the first 10 layers of ECAL
* AL_ECAL: Number of ECAL layer groups (each five layers forms a group) with hits
* av_NHH: Average number of hits in each HCAL layer groups (each five layers forms a group)
* rms_Hcal: The RMS of hits in each HCAL layer groups (each five layers forms a group)
* EEClu_r: Energy deposited in a cylinder around the incident direction with a radius of 1 Moliere radius
* EEClu_R: Energy deposited in a cylinder around the incident direction with a radius of 1.5 Moliere radius
* EEClu_L10: Energy deposited in the first 10 layers of ECAL
* MaxDisHel: Maximum distance between a hit and the helix
* minDepth: Depth of the inner most hit
* cluDepth: Depth of the cluster position
* graDepth: Depth of the cluster gravity center
* EcalEn: Energy deposited in ECAL
* avDisHtoL: Average distance between a hit to the axis from the inner most hit and the gravity center
* maxDisHtoL: Maximum distance between a hit to the axis from the inner most hit and the gravity center
* NLHcal: Number of HCAL layers with hits
* NLEcal: Number of ECAL layers with hits
* HcalNHit: Number of HCAL hits
* EcalNHit: Number of ECAL hits
ILCTDR
T. Behnke, J.E. Brau, P.N. Burrows, et al, The International Linear Collider Technical Design Report-Volume 4: Detectors[J]. arXiv preprint arXiv:1306.6329, 2013.
peskin
M.E. Peskin, Physics goals of the linear collider[J]. arXiv preprint hep-ph/9910521, 1999.
atlas
ATLAS collaboration, Physics at a High-Luminosity LHC with ATLAS[J]. arXiv preprint arXiv:1307.7292, 2013.
cms
CMS collaboration, Projected Performance of an Upgraded CMS Detector at the LHC and HL-LHC: Contribution to the Snowmass Process[J]. arXiv preprint arXiv:1307.7135, 2013.
clic
CLIC CDR, A multi-TeV linear collider based on CLIC technology: CLIC Conceptual Design Report[J]. edited by M. Aicheler, P. Burrows, M. Draper, T. Garvey, P. Lebrun, K. Peach, N. Phinney, H. Schmickler, D. Schulte and N. Toge, CERN-2012-007, 2012.
cepcprecdr
M. Ahmad et al (The CEPC-SPPC Study Group), CEPC-SppC Preliminary Conceptual Design Report: Physics and Detector, http://cepc.ihep.ac.cn/preCDR/main preCDR.pdf, retrieved 4th May 2015
mumuh
Z. Chen,Y. Yang, M. Ruan, et al, Study of Higgsstrahlung Cross Section and Higgs Mass Measurement Precisions with ZH (Z →μ^+μ^-) events at CEPC[J]. arXiv preprint arXiv:1601.05352, 2016.
cmsupg
CMS collaboration, Technical proposal for the phase-II upgrade of the CMS detector[J]. CERN, CERN-LHCC-2015-010. LHCC-P-008, 2015.
pfa
M.A. Thomson, Particle flow calorimetry and the PandoraPFA algorithm[J]. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2009, 611(1): 25-40.
cmspfa
F. Beaudette, The CMS Particle Flow Algorithm[J]. arXiv preprint arXiv:1401.8155, 2014.
jcpfa
J.C. Brient, Improving the Jet Reconstruction with the Particle Flow Method; an Introduction[J]. arXiv preprint physics/0412149, 2004.
eepfa
J.C. Brient, H. Videau, The calorimetry at the future e+ e-linear collider[J]. arXiv preprint hep-ex/0202004, 2002.
henripfa
H. Videau, Energy flow or Particle flow-The technique of energy flow for pedestrians[C]//International Conference on Linear Colliders-LCWS04. Ecole Polytechnique Palaiseau, 2004: 105-120.
Arbor
M. Ruan, Arbor, a new approach of the Particle Flow Algorithm.
arXiv:1403.4784 (2014).
ildloi
T. Abe, ILD Concept Group-Linear Collider Collaboration. The International Large Detector: Letter of Intent, 2010[J]. arXiv preprint arXiv:1006.3396, 4(10).
FDmanqi
M. Ruan, D. Jeans, V. Boudry, J.C. Brient, & H. Videau, (2014), Fractal Dimension of Particle Showers Measured in a Highly Granular Calorimeter, Physical review letters, 112(1), 012001.
TMVA A. Hoecker, P. Speckmayer, J. Stelzer, J. Therhaag, E. von Toerne, H. Voss, ... & D. Dannheim (2007), TMVA - Toolkit for multivariate data analysis, arXiv preprint physics/0703039.
ALEPH Aleph Collaboration, Measurement of the Tau Polarisation at LEP[J]. arXiv preprint hep-ex/0104038, 2001.
|
http://arxiv.org/abs/1701.07838v2 | 20170126190025 | Cosmic evolution of stellar quenching by AGN feedback: clues from the Horizon-AGN simulation | [
"R. S. Beckmann",
"J. Devriendt",
"A. Slyz",
"S. Peirani",
"M. L. A. Richardson",
"Y. Dubois",
"C. Pichon",
"N. E. Chisari",
"S. Kaviraj",
"C. Laigle",
"M. Volonteri"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE"
] |
The observed massive end of the galaxy stellar mass function is steeper than its predicted dark matter halo counterpart in the standard ΛCDM paradigm. In this paper, we investigate the impact of active galactic nuclei (AGN) feedback on star formation in massive galaxies. We isolate the impact of AGNs by comparing two simulations from the HORIZON suite, which are identical except that one also includes supermassive black holes (SMBH) and related feedback models. This allows us to cross-identify individual galaxies between simulations and quantify the effect of AGN feedback on their properties, including stellar mass and gas outflows. We find that massive galaxies (M_*≥ 10^11 M_⊙) are quenched by AGN feedback to the extent that their stellar masses decrease by up to 80% at z=0. SMBHs affect their host halo through a combination of outflows that reduce their baryonic mass, particularly for galaxies in the mass range 10^9 M_⊙≤ M_*≤ 10^11 M_⊙, and a disruption of central gas inflows, which limits in-situ star formation. As a result, net gas inflows onto massive galaxies, M_*≥ 10^11 M_⊙, drop by up to 70%. We measure a redshift evolution in the stellar mass ratio of twin galaxies with and without AGN feedback, with galaxies of a given stellar mass showing stronger signs of quenching earlier on. This evolution is driven by a progressive flattening of the M_SMBH-M_* relation with redshift, particularly for galaxies with M_*≤ 10^10 M_⊙. M_SMBH/M_* ratios decrease over time, as falling average gas densities in galaxies curb SMBH growth.
galaxies: evolution - galaxies: high-redshift - galaxies: quasars: supermassive black holes - galaxies: star formation - galaxies: active - methods: numerical
§ INTRODUCTION
It has long been known that the hierarchical structure formation paradigm implied by the cold dark matter model, while very successful overall, overproduces objects at the bright and faint end of the luminosity function <cit.>. Observations show much more inefficient star formation in low and high mass halos, with a peak in efficiency at the luminosity turnover <cit.>. To avoid the overcooling problem, and reproduce the observed luminosity function, an energetic feedback mechanism is required <cit.>. Several different avenues have been suggested to provide the necessary energy input and quench star formation, including the extragalactic UV background, supernova (SN) feedback and feedback due to active galactic nuclei (AGN).
Photoionisation by the extragalactic UV background and the first generations of stars suppresses gas accretion at high redshift, causing a number of smaller dark matter (DM) halos to remain devoid of gas <cit.>. While this mechanism provides a possible solution to the overabundance of very low mass (≤ 10^8M_⊙) substructures in Milky-Way like halos, it has been shown to have little effect on more massive objects that collapse later <cit.>.
On the other hand, SN feedback is widely believed to play an important part in quenching star formation in halos with masses below 10^11 M_⊙ <cit.>. Their shallow potentials allow SN driven winds with velocities comparable to the escape velocity <cit.> to empty the host galaxy of a significant amount of gas, thereby efficiently suppressing star formation. However, even stellar feedback concentrated in intense, compact starbursts caused by major mergers or violent disc instabilities cannot quench massive galaxies <cit.>.
It has been suggested that star formation halts in massive objects due to a slowdown in cold flows at low redshift <cit.>. However <cit.> show that massive quiescent galaxies can have twice as much DM as star forming galaxies, indicating that cosmic inflows probably continue long after star formation has ceased. Furthermore, the slowdown in cold flows is expected to take place over Gyr timescales <cit.>, which contradicts observational evidence that quenching of massive galaxies takes place on much shorter timescales <cit.>. Therefore the reduction of star formation in massive galaxies is unlikely to occur solely because accretion is fading away.
Instead, AGN feedback could provide an effective route to quenching massive galaxies, as well as regulating the growth of supermassive black holes (SMBH) <cit.>. There are two main mechanisms through which this could proceed. One possibility is that black holes and stars feed from the same cold gas supply until it is depleted by AGN feedback, at which point both processes come to a halt. Observations of galaxies that simultaneously harbour both an AGN and an active starburst provide evidence that supports this claim <cit.>. In this scenario, AGN feedback might even accelerate star formation by further compressing the cold gas of the galaxy, in a so called positive feedback mode <cit.>. However, its main
role is to prevent the gas heated/expelled by SN winds from being re-accreted at a later stage, alongside more pristine material. This is the so-called maintenance mode, associated with powerful radio emission <cit.>.
Alternatively, AGN feedback could act directly on the gas content of the galaxy. It could expel the interstellar medium (ISM) out of galaxies in massive galactic winds, and/or prevent star formation by directly heating the ISM gas <cit.>. This view is supported by observational evidence of frequent and fast outflows in massive galaxies <cit.>, able to drive a significant gas mass <cit.> using only 5-10% of accretion power <cit.>. Whilst such outflows are common in AGN with powerful radio jets <cit.>, careful analysis of higher redshift objects provides evidence that quasars can also launch powerful energy driven winds and thus cause a rapid star formation decline <cit.>. Both modes of AGN feedback can reproduce the observed correlations between host galaxy and BH properties <cit.>, as shown by e.g. <cit.>. However, the timescale over which quenching takes place is still a matter of debate, with evidence existing for both rapid quenching <cit.> and much slower processes <cit.>. The timescales are probably dependent on galaxy type <cit.>.
In this work we use state-of-the-art cosmological simulations to investigate when and how AGN feedback affects its host galaxy. We isolate the impact of such feedback on stellar masses and large scale gas flows by comparing the evolution of a statistically representative sample of individual objects, identifying matching galaxies in two simulations, HORIZON-AGN (H-AGN) and HORIZON-noAGN (H-noAGN). As their names indicate, these simulations are identical in all aspects except one is run with and the other without AGN feedback. Following the evolution of twinned galaxies from redshift z=5 down to z=0 allows us to determine the epoch of quenching, and identify ensuing changes in the stellar masses of affected galaxies.
The paper is structured as follows: Section <ref> briefly introduces the HORIZON simulation suite and Section <ref> explains the procedure used to identify pairs of corresponding objects across both simulations. Section <ref> presents the effect of AGN feedback on galaxy stellar masses throughout cosmic time and Section <ref> determines the causes for the measured quenching by studying the evolution of the black hole population, the gas content of halos and galaxies and gas inflow/outflow rates. Section <ref> summarises and discusses our results.
§ THE SIMULATIONS
This paper presents a comparative analysis of two simulations: HORIZON-AGN (H-AGN) and HORIZON-noAGN (H-noAGN). Both simulations are run from identical initial conditions and share the same technical specifications and implementations of physics. The only difference is that HORIZON-AGN also includes a sub-grid modelling of SMBHs and the associated AGN feedback (see Section <ref>) whereas H-noAGN does not. More details can be found in <cit.>.
§.§ Cosmology and initial conditions
Both simulations were run with RAMSES <cit.>, an adaptive mesh refinement code, using a second-order unsplit Godunov scheme to solve the Euler equations. A HLLC Riemann solver with a MinMod Total Variation Diminishing scheme was used to reconstruct interpolated variables. Initial conditions were produced using MP-GRAFIC <cit.> and both simulations were carried out until z=0.0.
The initial conditions setup is a standard ΛCDM cosmology consistent with the WMAP-7 data <cit.>, with matter density Ω_ m=0.272, dark energy density Ω_Λ=0.728, baryon density Ω_ b=0.045, Hubble constant H_0=70.4 km s^-1 Mpc^-1, amplitude of the matter power spectrum σ_8=0.81, and power-law index of the primordial power spectrum n_s=0.967.
The simulated cube of L_ box=100 h^-1 Mpc on a side is initially refined uniformly down to physical Δ x = 1 kpc, which requires a root grid with 1024^3 cells at z ≃ 100. Extra refinement levels are continuously triggered using a quasi-Lagrangian criterion: a grid cell is split into eight whenever its Dark Matter (DM) or baryonic mass exceeds eight times the initial DM or baryonic mass respectively. To keep the size of the smallest cells approximately constant in physical units, cells on the most refined level are split into eight every time the expansion factor doubles, if they fulfil the refinement criteria.
There are a total of 1024^3 DM particles in each simulation, leading to a DM mass resolution of 8 × 10^7 M_⊙. All collisionless particles, i.e. DM and stars, are evolved using a multi-grid Poisson solver with a cloud-in-cell interpolation to assign particles to grid cells.
§.§ Cooling and heating
The gas is allowed to cool down to 10^4 K using H, He and atomic metal cooling, following <cit.> and accounting for photon heating
by a uniform UV background <cit.> from z_ reion=10 onwards. The ratio between elements is assumed to be solar in these cooling/heating calculations.
The gas follows a mono-atomic equation of state, with adiabatic index γ = 5/3.
§.§ Stars and Supernovae
Star formation is modelled using a Kennicutt–Schmidt law ρ̇_* = ϵ_* ρ / t_ ff, where ρ̇_* is the star formation rate density, ϵ_*=0.02 the (constant) star formation efficiency <cit.>, ρ the gas density and t_ ff the local free fall time of the gas. Stars form when the gas number density exceeds ρ_0 /(μ m_H) = 0.1 H/cm^3 where m_H is the mass of a hydrogen atom and μ the mean molecular weight, and star particles are generated according to a Poisson random process <cit.> with a stellar mass resolution of M_* ≃ 2 × 10^6 M_⊙, kept constant throughout the simulation. To avoid numerical fragmentation, and mimic the effect of stellar heating by young stars, a polytropic equation of state, T = T_0(ρ/ρ_0)^κ-1, is used for gas above the star formation density threshold, with κ=4/3.
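As a concrete illustration of this scheme, the following Python sketch samples the Schmidt law as a Poisson process on a per-cell basis. It is a minimal sketch rather than the actual RAMSES implementation: all inputs are assumed to be in cgs units, the mean molecular weight is a typical assumed value, and the free-fall time is taken to be the standard t_ff = √(3π/(32 G ρ)).

import numpy as np

G = 6.674e-8                 # gravitational constant [cgs]
M_SUN = 1.989e33             # solar mass [g]
M_H = 1.6726e-24             # hydrogen mass [g]
MU = 1.22                    # mean molecular weight (assumed)

EPS_STAR = 0.02              # star formation efficiency
N_THRESH = 0.1               # density threshold [H/cm^3]
M_STAR_RES = 2e6 * M_SUN     # stellar mass resolution [g]

def n_star_particles(rho, dx, dt, rng):
    """Number of star particles formed in a cell of gas density rho
    [g/cm^3] and size dx [cm] over a timestep dt [s]."""
    if rho / (MU * M_H) < N_THRESH:                    # below threshold: no SF
        return 0
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho))     # local free-fall time
    sfr_density = EPS_STAR * rho / t_ff                # Schmidt law
    lam = sfr_density * dx**3 * dt / M_STAR_RES        # expected particle count
    return rng.poisson(lam)

# one 1 kpc cell at ~50 H/cm^3, sampled over 1 Myr
rng = np.random.default_rng(42)
print(n_star_particles(rho=1e-22, dx=3.086e21, dt=3.156e13, rng=rng))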
For stellar feedback, a Salpeter initial mass function (IMF) <cit.> is assumed, with low and high mass cutoffs of 0.1 M_⊙ and 100 M_⊙ respectively. In an effort to account for stellar feedback as comprehensively as possible, the (sub-grid) model implemented in this work includes stellar winds, Type II and Type Ia supernovae. Mechanical feedback energies from Type II supernovae and stellar winds are computed using STARBURST99 <cit.>. Specifically, we use a Padova model <cit.> with thermally pulsating asymptotic giant branch stars <cit.>, and stellar winds are calculated as in <cit.>. The frequency of Type Ia SN is estimated from <cit.>, assuming a binary fraction of 5%. To reduce computational costs, stellar feedback is modelled as a source of kinetic energy during the first 50 Myr of the lifetime of star particles, and as a heat source after that. On top of the energy, mass and metals injected into the interstellar medium (ISM) by stellar feedback, we also keep track of a variety of chemical elements (O, Fe, C, N, Mg, Si) synthesised in stars, with stellar yields estimated according to the W7 model of <cit.>. More detailed discussions of the stellar feedback model used can be found in <cit.>.
§.§ SMBH formation and accretion
In H-AGN, black holes are seeded with an initial mass of 10^5 M_⊙ in dense, star-forming regions, i.e. when a gas cell exceeds ρ > ρ_0 and is Jeans unstable, provided such regions are located more than 50 kpc away from a pre-existing black hole <cit.>. These black holes subsequently accrete gas at the Bondi-Hoyle-Lyttleton rate:
Ṁ_ BH = 4 πα G^2 M_ BH^2 ρ̅ / (c̅_s^2 + u̅^2)^3/2 ,
where M_ BH is the black hole mass, ρ̅ is the average gas density, c̅_s is the average sound speed, and u̅ is the average gas velocity relative to the BH. We emphasize that this model is based on rather crude assumptions about the hydrodynamical processes undergone by gas surrounding an extremely small accretor. In particular, it does not take possible density or velocity gradients on the scale of the (unresolved) accretion radius into account, nor does it provide any description of important hydrodynamical instabilities which develop on yet smaller scales <cit.>. To somewhat mitigate resolution effects that make it difficult to capture cold, dense regions of the ISM, a boost factor α is used, following <cit.> and <cit.>:
α = (ρ / ρ_0)^2 if ρ > ρ_0 , and α = 1 otherwise.
Accretion is capped at the Eddington rate
Ṁ_ Edd = 4 π G M_ BH m_ p / (ϵ_ r σ_ T c) ,
where σ_ T is the Thomson cross section, m_ p is the proton mass, and c is the speed of light. A standard radiative efficiency, typical of a Shakura–Sunyaev accretion disc around the BHs, of ϵ_ r=0.1 is assumed <cit.>.
BHs are also allowed to merge with one another when they are closer than 4 kpc, and their relative velocity is smaller than the escape velocity of the binary.
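A minimal sketch of this accretion model is given below, assuming cgs units throughout; the mean molecular weight is an assumption, and the boost threshold ρ_0 is expressed through the corresponding number density of 0.1 H/cm^3 used for star formation.

import numpy as np

G = 6.674e-8            # gravitational constant [cgs]
C = 2.998e10            # speed of light [cm/s]
M_P = 1.6726e-24        # proton mass [g]
SIGMA_T = 6.652e-25     # Thomson cross section [cm^2]
MU = 1.22               # mean molecular weight (assumed)
EPS_R = 0.1             # radiative efficiency
N0 = 0.1                # boost threshold [H/cm^3]

def bondi_rate(m_bh, rho, c_s, u):
    """Boosted Bondi-Hoyle-Lyttleton rate [g/s] for a BH of mass m_bh [g] in
    gas of density rho [g/cm^3], sound speed c_s and relative velocity u [cm/s]."""
    n = rho / (MU * M_P)                           # gas number density
    alpha = (n / N0)**2 if n > N0 else 1.0         # density boost factor
    return 4.0 * np.pi * alpha * G**2 * m_bh**2 * rho / (c_s**2 + u**2)**1.5

def eddington_rate(m_bh):
    """Eddington accretion rate [g/s] for radiative efficiency EPS_R."""
    return 4.0 * np.pi * G * m_bh * M_P / (EPS_R * SIGMA_T * C)

def accretion_rate(m_bh, rho, c_s, u):
    """Bondi rate capped at the Eddington limit."""
    return min(bondi_rate(m_bh, rho, c_s, u), eddington_rate(m_bh))

As a sanity check, eddington_rate(1e7 * 1.989e33) returns ≈ 1.4 × 10^25 g/s, i.e. ≈ 0.2 M_⊙/yr, the standard Eddington rate of a 10^7 M_⊙ black hole for ϵ_r = 0.1.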
§.§ AGN feedback
Two modes of AGN feedback are implemented in H-AGN, depending on the instantaneous accretion rate of the SMBH: the so-called radio and quasar modes.
At high accretion rates, i.e. for Eddington ratios χ= Ṁ_ BH/Ṁ_ Edd > 0.01, the quasar mode deposits thermal energy isotropically into a sphere of radius Δx centred on the BH. This energy is deposited with an efficiency of ϵ_ f = 0.15 at a rate of
Ė_ AGN = ϵ_ f ϵ_ r Ṁ_ BH c^2 .
The radio mode takes over at low accretion rates, χ= Ṁ_ BH/Ṁ_ Edd≤ 0.01, and deposits kinetic energy into bipolar outflows with jet/wind velocities of 10^4 km/s, along an axis aligned with the angular momentum of the accreted material, following the model of <cit.>. The total rate of energy deposited is given by the previous equation for Ė_ AGN, albeit using a higher efficiency of ϵ_ f=1 (see <cit.> for detail).
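Schematically, the mode selection and the energy injection rate can be summarised as follows. This sketch only returns the instantaneous power and mode and does not reproduce the thermal or kinetic deposition itself; the efficiencies are the values quoted above, and the example numbers are purely illustrative.

def agn_feedback_power(mdot_bh, mdot_edd, eps_r=0.1, c=2.998e10):
    """Instantaneous AGN feedback power [erg/s] and mode for a BH accreting
    at mdot_bh [g/s], given its Eddington rate mdot_edd [g/s]."""
    chi = mdot_bh / mdot_edd                  # Eddington ratio
    if chi > 0.01:
        eps_f, mode = 0.15, "quasar"          # isotropic thermal energy dump
    else:
        eps_f, mode = 1.0, "radio"            # kinetic bipolar jets at 10^4 km/s
    return eps_f * eps_r * mdot_bh * c**2, mode

# example: accreting at 1% of Eddington puts the BH in radio mode
power, mode = agn_feedback_power(mdot_bh=1.4e23, mdot_edd=1.4e25)
# -> mode == "radio", power ≈ 1.3e43 erg/s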
The radiative efficiencies of the two AGN feedback modes were chosen to reproduce the scaling relations between BH mass and galactic properties in the local universe, M_SMBH - M_* and M_SMBH-σ_* <cit.>. More generally, we refer the reader interested in the details of how accretion onto SMBHs, and subsequent feedback injection, depend on numerical resolution and
sub-grid model parameter choices to <cit.>. Note that the two radiation efficiency parameters previously mentioned are the only ones which are tuned in the HORIZON simulations, in the sense that the
other parameters (associated with the sub-grid models of star formation and stellar feedback) were not allowed to vary in order to obtain a better match to bulk galaxy
properties. For instance, even though our star formation efficiency choice ensures that galaxies will fall on the Kennicutt observational
law by construction, it does not automatically guarantee that they will have the correct stellar/gas mass and/or size at any epoch.
§.§ Mass categories for galaxies
To facilitate the presentation of our results, we split our sample of galaxy twins into three sub-samples, distinguished by the stellar mass of the H-AGN galaxy. We define small galaxies as twins with stellar masses M_*^H-AGN < 10^9 M_⊙ in H-AGN, medium galaxies as those with 10^9 M_⊙≤ M_*^H-AGN≤ 10^11 M_⊙ and large galaxies as those with M_*^H-AGN > 10^11 M_⊙. See Table <ref> for the number of twins in each mass category. As a visual guidance, these mass categories will be annotated by solid vertical lines on all relevant plots.
§.§ Nomenclature
For the purpose of this paper, “quenching” refers to any reduction in galaxy star formation rates (SFR) when AGN feedback is included, compared to the case without it, not just a redshift dependent specific SFR threshold of 0.3 / t_Hubble, as defined in e.g. <cit.>. The “quenching mass ratio” refers to the stellar mass ratio of galaxy twins between the cases with and without AGN feedback, i.e. M_*^H-AGN/M_*^H-noAGN.
§ HALO MATCHING ACROSS SIMULATIONS
§.§ The twinning procedure
As usual, the first step consists in detecting objects of interest (halos, subhalos and galaxies) in each simulation, using the adaptahop (sub)halofinder <cit.>.
Having two simulations based on identical initial conditions allows the identification of corresponding objects between the two simulations, a procedure here referred to as “twinning” (see also e.g. <cit.>). A pair of corresponding objects is called a twin, and identifies two objects (one in each simulation) that have grown from the same overdensity in the initial conditions. These objects can either be (sub)halos, for (sub)halo twins, or galaxies, for galaxy twins. If all the algorithms implemented to describe physical processes were identical in both simulations, the twins would be identical except for minor differences introduced by the stochastic nature of the star formation algorithm. However, such seemingly innocuous differences would already prevent us from directly twinning galaxies. We therefore employ the more general method of <cit.> to perform this task, which is summarised in Fig. <ref>. Only DM (sub)halos are twinned directly; to create galaxy twins, each galaxy is first associated to a host (sub)halo in the same simulation, before being twinned to the galaxy hosted by this (sub)halo's twin in the other simulation.
More specifically, as both simulations start from identical initial conditions, with uniquely identified DM particles in identical positions, we can identify which of these particles cluster to form gravitationally bound (sub)halos as the runs proceed. (Sub)halos that grew from the same initial overdensities in both simulations should contain a large fraction of DM particles with identical identities at any time. In practice, for two DM (sub)halos to be twinned, we require that at least 75% of the DM particles present in a (sub)halo in H-AGN are also present in the H-noAGN (sub)halo. Note that in some cases, this choice will lead to a single (sub)halo in H-noAGN being associated with several (sub)halos in H-AGN, as (sub)halos mergers lead to the formation of (sub)subhalos which are not necessarily disrupted at the same time in both simulations. In these cases, the object with the most similar mass is chosen as the twin (sub)halo, and the other matches are discarded.
Star particles stochastically form over the course of a simulation and their identifiers therefore reflect the detailed star formation history of that precise simulation, so it is not possible to identify galaxy twins directly through their star particle identities as is done for DM (sub)halos. Instead, galaxies are considered to be twins if they are located within DM (sub)halo twins (see Fig. <ref>). We therefore begin by assigning a host (sub)halo to each galaxy if its centre is located within a distance R_host=0.05 × R_ vir of the centre of the (sub)halo. The centres of these (sub)halos are computed using a shrinking sphere method <cit.>, and their precise location corresponds to the position of the most dense DM particle located in the final sphere identified with this method <cit.>. In case a (sub)halo contains more than one galaxy in its central region, we select the most massive one as being hosted by this (sub)halo and discard other matching objects. Note that proceeding in this way biases our results against so-called 'orphan' galaxies, i.e. galaxies whose host (sub)halo has been disrupted to the point that it falls below the particle detection threshold used by our halofinder. However, such orphans are quite rare (less than 1% of our sample at any redshift) and almost exclusively belong to the category of small galaxies (M_* < 10^9 M_⊙), so our conclusions are unaffected by this bias. We also find that relying on DM (sub)halo host twinning to twin galaxies, rather than directly measuring the galaxy orbital properties, is a more robust process, as orbital parameters are sensitive both to internal changes in galaxy properties (in particular stellar mass) and host (sub)halo density profiles, which can differ quite significantly between simulations with and without AGN feedback (see e.g. <cit.>).
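The particle-based (sub)halo matching itself can be sketched as follows. The container layout (a dict mapping each (sub)halo id to the set of its DM particle ids and its mass) is hypothetical, but the 75% overlap criterion and the resolution of multiple matches by mass similarity follow the procedure described above; a production version would of course operate directly on the halofinder catalogues.

def twin_halos(halos_agn, halos_noagn, min_overlap=0.75):
    """Twin (sub)halos across the two runs via shared DM particle IDs.
    halos_*: dict mapping halo id -> (set of particle ids, halo mass)."""
    # invert the H-noAGN catalogue: particle id -> id of the halo containing it
    owner = {pid: nid for nid, (parts, _) in halos_noagn.items() for pid in parts}
    best = {}   # H-noAGN id -> (mass difference, H-AGN id) of best match so far
    for aid, (parts, amass) in halos_agn.items():
        shared = {}
        for pid in parts:                       # count shared particles per candidate
            nid = owner.get(pid)
            if nid is not None:
                shared[nid] = shared.get(nid, 0) + 1
        for nid, n_shared in shared.items():
            if n_shared >= min_overlap * len(parts):        # 75% criterion
                dmass = abs(amass - halos_noagn[nid][1])
                if nid not in best or dmass < best[nid][0]:
                    best[nid] = (dmass, aid)    # keep the closest mass, discard others
    return {aid: nid for nid, (_, aid) in best.items()}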
§.§ Matched fractions
As we are interested in the most massive objects, the full sample considered here includes all (sub)halos resolved by at least 500 DM particles, and all galaxies hosted in these (sub)halos that contain at least 100 star particles. Note that this latter criterion, contrary to the one for DM (sub)halos, does not correspond to a strict stellar mass threshold, as star particles can have different masses (integer multiples of the minimal stellar mass). Across all redshifts, H-AGN and H-noAGN contain a comparable, albeit not identical, number of both galaxies and (sub)halos (see Table <ref>). For this reason, the following analysis considers the H-AGN sample as the reference to analyse the effectiveness of the twinning algorithm.
Fig. <ref> shows that at high redshift, over 98% of (sub)halos present in H-AGN, corresponding to 15,818 out of 16,042 at redshift z=5, are twinned successfully, with an even distribution across all mass bins. At lower redshift, the fraction of matched (sub)halos decreases, as the increasingly divergent merger histories of the two simulations introduce larger discrepancies between individual objects. This fraction also decreases towards lower (sub)halo masses, as (sub)halos with smaller particle numbers are more sensitive to these merger history changes. However, with 75,544 (sub)halos twinned at redshift z=0.1 out of a sample of 88,171, i.e. an average matched fraction of 86%, the resulting sample remains statistically representative.
The overall rates for matched galaxies are much lower than for (sub)halos, as they require three steps to establish the twin link (galaxy to host (sub)halo, host (sub)halo to host (sub)halo twin, host (sub)halo twin to galaxy twin, see Fig. <ref>), with a number of objects dropping out of the sample at each step. Identifying a galaxy with its host (sub)halo can be challenging, especially in dense environments and at high redshift where interactions are more common. For example, increasing the size of the region within which a galaxy is associated with its host (sub)halo, R_host, from R_host=0.05 × R_ vir to R_host=0.10 × R_ vir, increases the fraction of matched galaxies at high redshift <cit.>. More specifically, for redshift z=3, the total number of galaxies with at least 100 star particles identified with a host (sub)halo with the same number of DM particles, would rise from 47,656 to 67,301, out of 76,887 galaxies in total, i.e. an increase in matched fraction from 61 % to 87 %. Selecting amongst these galaxies those hosted in (sub)halos containing more than 500 DM particles (as we do in this work), further reduces the numbers to 34,128 (given in table <ref>) and 48,354 galaxies respectively, out of 76,887. Whilst the number of galaxies excluded by the strict position and mass criteria we employ for twinning does represent a significant fraction of the sample, especially at high redshift, we have checked that relaxing them hardly alters the quantitative results presented in this work. This can be intuitively understood as the mass cuts chosen only eliminate low mass galaxies from the sample. Low mass galaxies are both (i) the most numerous and (ii) virtually unaffected by AGN feedback, at any redshift. This can be seen in Fig. <ref>, where the entire population (i.e. all 76,887 objects for H-AGN at z=3), as opposed to only twinned galaxies, is plotted. Similarly, the use of a stricter position criterion mostly affects low mass galaxies, as these have longer dynamical friction times and are more easily dislodged from the centre of their host (sub)halos during gravitational interactions. In short, the galaxy sample as defined in this section is statistically robust enough, at all redshifts and galaxy masses, for us to draw conclusions about the impact of AGN feedback in the H-AGN simulation, while allowing us to get rid of virtually all mismatch errors in the galaxy twinning process.
§.§ The effect of AGN feedback on halos
Fig. <ref> shows that no matter the redshift, the DM (sub)halo mass functions (HMF) for H-AGN and H-noAGN are so similar that they are indistinguishable on the plot. Directly comparing the DM masses of (sub)halo twins (see Fig. <ref>) shows that (sub)halos with masses below M_vir^H-AGN< 10^11 M_⊙ have identical masses in both simulations, at all redshifts. The small spread in masses is mainly caused by variations in shape, which the structure finding algorithm translates into a small variation in virial mass. At redshifts of z=1 and below, (sub)halos with masses above M_vir^H-AGN > 10^11 M_⊙ can have a dark matter mass up to 5% lower in H-AGN. This is due to the fact that in the presence of AGN feedback, the baryon content of these massive (sub)halos is strongly reduced (see Section <ref>), which translates into a reduced total (sub)halo mass, as the reduced gravitational pull slows down the cosmic inflow rate. As the (sub)halos in H-AGN systematically exhibit lower masses, it makes sense to require, as we do, that 75% of DM particles from H-AGN be present in the H-noAGN (sub)halo, and not the reverse. A more detailed analysis as to how AGN feedback affects the inner structure of DM (sub)halos is carried out in <cit.>.
§ AGN FEEDBACK & STELLAR MASS
§.§ The galaxy stellar mass function
Comparing the galaxy population in H-AGN and H-noAGN at the various redshifts presented in this work shows that AGN feedback is instrumental in bringing the high mass end of the GSMF in agreement with observations[ A more detailed comparison of the GSMF in H-AGN to individual observational datasets can be found in <cit.>. ]. As Fig. <ref> demonstrates, AGN feedback is able to suppress star formation in galaxies with masses M_* ≥ 5 × 10^10 M_⊙ at z = 3, allowing the simulation to match the number of galaxies at and above the knee of the GSMF. It is important to note that H-AGN was not tuned to reproduce this result. As previously mentioned, the only tuning done on global galaxy properties in the simulation involves the radiative efficiency of the AGN feedback modes, which were set to reproduce the local M_SMBH - M_* and M_ SMBH - σ_* relations.
In the absence of AGN feedback, the GSMF in H-noAGN agrees well with predictions that the uniform baryon mass fraction of Ω_b / Ω_m = 0.165 <cit.> is entirely converted into stars in the galaxy mass range 5 × 10^10 M_⊙ < M_* < 10^12 M_⊙ by z=1. For galaxies with masses M_* ≥ 5 × 10^12 M_⊙, a discrepancy between the GSMF in the absence of feedback (solid grey line) and expectation values from the cold dark matter model (dotted line) starts to appear because gas cooling times in host (sub)halos harbouring such massive galaxies become comparable to the Hubble time when the halos assemble, so not all the baryons enclosed have yet been able to cool and form stars. Cooling is further hampered by the fact that in the absence of AGN feedback, heavy elements do not get distributed effectively throughout the halo but remain close to the central galaxy.
The simulations systematically overproduce the number of galaxies with masses below M_* ≤ 5 × 10^10 M_⊙ and z ≤ 3. This is partly caused by the fact that observed mass functions are derived from magnitude-limited data, whereas the GSMF presented here for H-AGN and H-noAGN are raw stellar masses extracted from the simulation, with no completeness, surface brightness or luminosity cut applied. Indeed, comparing the GSMFs presented in Fig. <ref> to those plotted in Fig.7 of <cit.>, which are based on the same simulation, H-AGN, but include a magnitude cut to match observations, one realises that the effective number of galaxies with masses
M_* = 10^9 M_⊙ is reduced by about 0.1-0.2 dex depending on redshift, whilst galaxies with masses M_* ≥ 5 × 10^10 M_⊙ are completely unaffected. This does somewhat flatten the simulated GSMFs at the faint end, bringing them in better agreement with the data. However, the remaining discrepancy of about 0.3 dex with the data for galaxies with stellar masses M_* ≤ 10^10 M_⊙ can probably be attributed to the implementation of an insufficiently energetic stellar feedback model, coupled with numerical resolution effects <cit.>[ Note that a stronger stellar feedback is likely to affect black hole masses as well <cit.>.], although how efficient such a feedback can realistically be is still a matter of debate. Having said that, as AGN feedback, through the comparison of H-AGN and H-noAGN, is measured to have no effect on the low mass end of the GSMFs (as clearly visible in Fig. <ref>), and the limitations discussed above are present in both simulations, they likely have a very limited impact on the work presented here. Still, the few absolute measurements presented here, such as outflow rates, have to be examined bearing in mind that stellar feedback is probably underestimated in the simulations, and thus that values derived for galaxies with stellar masses M_* ≤ 5 × 10^10 M_⊙ are very likely too low.
Finally, both simulations systematically underproduce massive (M_* ≥ 5 × 10^10 M_⊙) galaxies at redshift z ≥ 5 (Fig. <ref> top left panel).
Given that the HMF multiplied by the universal baryon fraction (dot-dashed curve on the figure) seems to describe the observed data fairly well for galaxies in this mass range,
this suggests that our inability to resolve the progenitors of halos early enough leads to star formation being artificially postponed. Obviously, since the gas content
of these halos is still correctly estimated, galaxies will eventually catch up: their star formation rate will be slightly higher than expected, as long as more gas is present.
However, at high redshift, galaxy star formation timescales cannot be considered small in comparison to the time elapsed since their host halo formed,
so their stellar masses can be significantly underestimated. Note that this resolution effect has completely vanished, at least for massive galaxies, by z=3.
Moreover, as this issue affects H-AGN and H-noAGN in the same way, it cancels out in the comparative analysis of the two simulations that we perform in this work.
§.§ Quenching
Instead of having to rely on statistical averages, such as those presented in the mass functions in Fig. <ref>, the twinning procedure described in Section <ref> allows for a direct comparison of the stellar masses of each individual galaxy, with and without AGN feedback. Fig. <ref> shows the results of this comparison for a range of redshifts. As expected from the local GSMF in Fig. <ref>, the most massive galaxies are the most strongly quenched at all redshifts (Fig. <ref>). However the amount of quenching does not vary linearly with galaxy mass, with the function tailing off for both strongly quenched large galaxies, and barely affected small galaxies.
We also measure a redshift dependence in the maximum amount of quenching observed, ranging from 40% for galaxies with stellar masses in H-AGN of M_*^H-AGN > 10^10 M_⊙ at redshift z=5, up to a maximum of over 80% for the largest galaxies at redshift z=0.1, suggesting that galaxy quenching is a continuous process active throughout the merging history of galaxies. In general, the shape of the distribution is driven by small galaxies that show little influence of AGN feedback at any redshift, and a tailing off for massive galaxies. Large galaxies appear to converge to a constant quenching mass ratio M_*^H-AGN/ M_*^H-noAGN = 0.2, as they grow from M_* = 10^11 M_⊙ to M_* = 10^12 M_⊙ both in H-AGN and H-noAGN. This is not due to any fundamental change in the impact of AGN feedback for galaxies with stellar masses above M_*>10^11 M_⊙ but rather reflects the fact that, even in the absence of AGN feedback, the GSMF in H-noAGN steepens due to long cooling times for massive objects (see Fig. <ref> and Section <ref>) which lead to reduced star formation rates. Therefore, the constant quenching mass ratio for large galaxies is not driven by less effective AGN feedback, but rather by less effective cooling for galaxies in the absence of feedback. Note that this is not a selection effect either, as no mass ratio cut was applied to our galaxy sample.
Particularly noticeable for redshifts above z>3, the minimum mass to experience quenching decreases with redshift: a typical 10^9 M_⊙ galaxy at redshift z=5 already has its stellar mass quenched by 10%, whereas a 10^9 M_⊙ galaxy at redshift z=1 shows a median reduction in stellar mass of less than 1%.
A transition mass between star forming and quenched galaxies is somewhat difficult to define in this context, as the quenching mass ratio is a cumulative measure, not an instantaneous one such as the star formation rate or the stellar mass growth timescale often used to separate star forming and quiescent populations. However, based on Fig. <ref>, it seems natural to define the mass at which quenching due to AGN feedback becomes important as the point where the quenching mass ratio equals 0.85 ± 0.05, as this corresponds to the location of the sharp break in the quenching mass ratio versus galaxy stellar mass relation. As can be seen in Figure <ref>, this quantity shows a redshift evolution, decreasing from log(M^quench_*/[M_⊙]) = 10.35^+0.12_-0.36 at z=0.1 to log(M^quench_*/[M_⊙]) = 9.17^+0.05_-0.11 at z=5. This result is in good agreement with the value of 10.3 found for a sample of SDSS galaxies by <cit.>. At higher redshift, z ≈ 1, we find good agreement with the transition mass quoted by <cit.>, who used a large sample of SDSS and zCOSMOS galaxies to study the stellar mass at which “mass quenching”, which includes the impact of AGN, becomes important. Note that our redshift evolution also suggests that the trend they observe can be extrapolated to at least z=2. However, driven mostly by results above z>4, we find that our best fit power law for the redshift evolution,

M_*^quench (z) = 10^10.49 (1+z)^-1.50 ,

has a steeper slope than that reported in <cit.>. Comparing to other simulations <cit.>, we consistently find a lower value. This discrepancy is partially due to how the transition mass is defined in each work. The value in <cit.>, for example, is defined to be the mass at which AGN feedback alone quenches the galaxy. It is not surprising that this is higher than our definition of M^quench_*, which is the transition mass at which AGN feedback begins to be dominant, while stellar feedback might still play a role.
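Operationally, M^quench_* can be extracted from the binned median quenching mass ratios by locating where they cross the 0.85 threshold, e.g. by linear interpolation in log M_*. A minimal sketch, with made-up medians for illustration (not simulation values):

import numpy as np

def transition_mass(log_mstar, median_ratio, threshold=0.85):
    """Stellar mass [M_sun] at which the median M_*^H-AGN / M_*^H-noAGN
    first drops below `threshold`; bins sorted by increasing mass."""
    for i in range(len(median_ratio) - 1):
        r_lo, r_hi = median_ratio[i], median_ratio[i + 1]
        if r_lo >= threshold > r_hi:                   # bracket the crossing
            frac = (r_lo - threshold) / (r_lo - r_hi)  # linear in log M_*
            return 10.0 ** (log_mstar[i] + frac * (log_mstar[i + 1] - log_mstar[i]))
    return np.nan

# illustrative medians at a single redshift
print(transition_mass([9.0, 9.5, 10.0, 10.5, 11.0],
                      [0.99, 0.95, 0.90, 0.80, 0.55]))   # ~1.8e10 M_sun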
We take the fact that we consistently measure lower transition masses to be evidence that quenching is a long term process, whose effect accumulates over the evolution history of a galaxy. A galaxy whose star formation rate is reduced by 20 % would not sufficiently change colour to be counted among the “red and dead” population, but could over time show a noticeably reduced quenching mass ratio. The transition mass based on the cumulative star formation history is therefore necessarily lower than that based on a more instantaneous measure, such as the stellar mass growth timescale <cit.> or the star formation rate <cit.>.
As the evolution in the quenching mass ratio (and the minimum mass of a galaxy affected) is in our case purely driven by AGN feedback, we expect the impact of AGN feedback to evolve with time. The evolution of the instantaneous AGN power, plotted in Fig. <ref>, confirms this conjecture. Galaxies of the same stellar mass are subject to different amounts of feedback at different points in cosmic history. The median AGN power for a galaxy of a given stellar mass decreases strongly with redshift, with a galaxy with M^H-AGN_* = 10^9 M_⊙ at z=5 subject to feedback three orders of magnitude stronger than a galaxy of equivalent stellar mass at z=0.
As the sound speed of the gas, and its relative velocity with respect to the SMBH, do not vary systematically with redshift, the feedback power of an AGN, calculated according to equation <ref> is mainly a function of local gas density in the vicinity of the BH, the Eddington ratio which determines the radiative efficiency of each mode of feedback and BH mass squared (the case of BHs accreting at the Eddington limit is rare and very short lived in the simulations, see e.g. <cit.> or section <ref> below). Therefore, the decreasing importance of AGN feedback in the evolution of small galaxies could be due to (i) the decreasing gas fractions associated with galaxy evolution, (ii) a shift in AGN feedback mode from quasar (high redshift) to radio dominated (low redshift), or (iii) an evolving black hole population (different M_ SMBH vs M_* relation). We examine each of these three options in turn in the following section.
§.§ The coevolution of SMBHs and their hosts
The total AGN power is a function of the accretion rate onto the central SMBH, which in turn, in the Bondi regime, depends on the SMBH mass squared[Or only of mass when accreting at the Eddington limit, but these episodes are rare and short lived in simulations with AGN feedback as we show later in this section; see equation <ref>, using equation <ref> or equation <ref>]. Fig. <ref> shows that the median mass of the central black hole undergoes a redshift evolution between z=0 and z=5, with galaxies of a given stellar mass hosting a more massive BH at higher redshift[ The results presented here are consistent with fig. 10 of <cit.>, which reports no redshift evolution in the M_* - M_SMBH relation. This is because the redshift evolution we see is mainly driven by low mass black holes at low redshift, a sample excluded by these authors who apply a cut in host halo mass of M_halo= 8 × 10^10 M_⊙. By comparison, the sample analysed here includes all black holes identified within a host galaxy in halos with M_halo> 4 × 10^10 M_⊙. The second difference between these two pieces of work concerns the statistical analysis chosen: while <cit.> employ a linear fit as used in observational studies, we present median black hole masses and thus allow for a non-linear correlation between black hole and galaxy stellar masses.]. For example, a galaxy with M_*^H-AGN=10^10 M_⊙ at redshift z=0.1 typically hosts a SMBH with M_SMBH= 1.1 × 10^7 M_⊙, whereas a galaxy with the same stellar mass at z=5 hosts a SMBH with a median mass of M_ SMBH = 4.2 × 10^7 M_⊙. A similar evolutionary trend is reported observationally <cit.> and in other large scale cosmological simulations <cit.>. A detailed discussion of the shape of the M_SMBH-M_* relation can be found in <cit.>.
Looking at the evolution of AGN feedback mode with redshift, as determined by the Eddington ratio χ, Fig. <ref> reveals that the SMBH population transitions from quasar mode to radio mode between redshifts z=3 and z=1, as χ falls below 0.01. At high redshift, the vast majority of AGNs are found in quasar mode, namely 95.7% of the sample at z=5 and 85.5% at z=3. At lower redshift, the population is overwhelmingly in radio mode across all mass bins, with only 19.1% and 2.3% found in quasar mode at z=1 and z=0.1 respectively.
As the Eddington ratio is a measure of how efficiently a black hole of a given mass is accreting, the high Eddington ratios at redshift z>3 explain the evolution in the M_ SMBH-M_* relation in Fig. <ref>. Indeed, whilst a black hole with M_ SMBH=10^7 M_⊙ at redshift z=5 accretes with a mean Eddington ratio of χ = 6.27 × 10^-2, a SMBH with the same mass at redshift z=0.1 accretes at only χ = 1.2 × 10^-4 Eddington. This means that the latter grows about two orders of magnitude more slowly than the former.
Fig. <ref> also clearly shows that except for a few outlying objects, the bulk of the population is not accreting in an Eddington limited fashion at z ≤ 5. This evolution in the median Eddington ratio reflects an underlying evolution in the gas density of galaxies, as can be seen in Fig. <ref>, where we have divided χ by the SMBH mass to calculate the specific accretion rate of these SMBHs in Eddington units, i.e. we have removed the dependence on BH mass to be able to intercompare BHs across the whole mass range. The story which emerges from this plot is that since high redshift galaxies are more gas rich, they fuel their central black holes more efficiently, regardless of their masses. As the gas supply is depleted, accretion onto the central black hole slows down, and AGN feedback transitions from the quasar to the radio mode around z=2. Due to the different radiative efficiencies employed for the two feedback modes, ϵ_f = 0.15 for the quasar mode at χ≥ 0.01 and ϵ_f = 1.0 for the radio mode at χ < 0.01 (see Section <ref>), a larger percentage of accretion energy is converted into feedback at redshifts z<2, but nevertheless, the amount of energy available for feedback declines.
Summarising the impact of all three effects, we conclude that the decreasing AGN power for a galaxy of a given stellar mass is driven by the cumulative effect of a proportionally smaller central SMBH and the decreasing gas supply in the galaxy, for which the increasing efficiency of the feedback mode at z<2 is unable to compensate. Not only do existing BHs of a given mass accrete less efficiently in the gas poor galaxies at z<2, such that Ṁ_ BH(M_ SMBH, z>2) > Ṁ_ BH(M_ SMBH, z<2) for all M_ SMBH, but any galaxy of a given stellar mass also hosts a significantly smaller black hole than its equivalent counterpart at z>2, i.e. M_ SMBH/M_* (z<2) < M_ SMBH/M_* (z>2). The two effects combine to produce the redshift evolution of AGN power seen in Figure <ref>, despite the shift to a more efficient feedback mode around z=2.
How efficiently an AGN of a given power is able to remove gas from a galaxy depends on the depth of the gravitational potential it has to overcome in the process. Repeating the analysis of the quenching mass ratios, but plotting them against M_ SMBH / M^H-AGN_vir instead of the galaxy stellar mass, shows that the redshift evolution in the quenching mass ratio is truly driven by the evolution of the SMBH population and its feedback power (Fig. <ref>, in comparison to Fig. <ref>). Indeed, whilst the scatter of the relation increases, the different redshift curves now overlap within the quartile error bars: the redshift dependence in Fig. <ref> has been erased. In other words, independently of redshift, BHs with a black hole to halo mass ratio of less than M_SMBH / M^H-AGN_vir≤ 4 × 10^-5 quench their host galaxy by less than 20 %, whereas BHs with ratios only a factor 2-3 larger than that suppress their host galaxy stellar masses by up to 50 %.
Although there exists a clear transition of the sample from one AGN feedback mode to the other, no significant difference was found when analysing the quenching mass ratio (such as in Fig. <ref>) by splitting the sample into quasar or radio mode galaxies. All results shown here can be reproduced by assuming that the entire sample can be found in quasar mode at redshifts z>2, and in radio mode otherwise. The only notable discrepancy between the simulation data and this simplified model is that the scatter is somewhat reduced, which is expected as objects found in the opposite feedback mode to the majority of the population are statistical outliers. However, this analysis is based on an instantaneous measure of feedback mode at a specific redshift, and does not capture the accretion history of a particular object. We defer a more careful analysis of the evolution of the AGN sample, together with an analysis of the timescales on which quenching occurs in individual galaxies, to future work.
§ AGN FEEDBACK & GAS FLOWS
Fig. <ref> shows that SMBHs make up much less than 1% of the mass of their host galaxy, so the mass of baryons accreted by the SMBHs is negligible compared to the
reduction in stellar mass caused by quenching. The effect of AGN feedback on the cold gas supply of the galaxy must therefore be profound, to suppress star formation by up to an order of magnitude over the evolution of the galaxy. There are three possible channels through which AGN feedback can affect the gas content of the galaxy: (i) it can drive powerful outflows, emptying the reservoir of gas available in the ISM of the galaxy; (ii) it can prevent cosmic inflows from replenishing the gas supply in the galaxy or (iii) it can heat existing gas of the ISM and circum-galactic medium (CGM) to prevent cooling flows and the associated star formation. In this section, we investigate the relative importance of these three feedback channels.
§.§ The evolution of the baryon content
Should AGN feedback primarily suppress star formation through heating the existing gas in the halo, one would expect twinned halos to have the same total baryon mass, with that in H-AGN showing a much higher gas fraction as less gas is being turned into stars. Fig. <ref> shows that AGN feedback directly lowers the baryon (gas + stars + BH) content embedded within the virial radius of DM halos: the average baryon density ratio versus galaxy stellar mass relation follows a shape reminiscent of that of the quenching mass ratio previously discussed. Unfortunately all three major channels, through which AGN feedback is expected to affect star formation, lower the baryon density of the galaxy. Boosted outflows drive existing gas out of the galaxy, slowed down inflows prevent accretion in the first place, and heating causes the gas to expand, lowering the average density. We compare the average baryon density, as opposed to the total mass within the halo, to correct for the small differences in halo mass shown in Fig. <ref>, which translate into a difference in virial radius, and therefore a difference in the volume over which the gas mass in the halo is measured.
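For reference, the density comparison amounts to the following simple estimate for each halo twin; a sketch where consistent units are left to the caller.

import numpy as np

def mean_baryon_density(m_gas, m_star, m_bh, r_vir):
    """Average baryon density inside the virial sphere. Comparing densities
    rather than enclosed masses corrects for the slightly smaller virial
    radii of the H-AGN halos."""
    return (m_gas + m_star + m_bh) / (4.0 / 3.0 * np.pi * r_vir**3)

# baryon density ratio of one twin pair (illustrative values)
ratio = (mean_baryon_density(3e11, 8e10, 1e8, 240.0)
         / mean_baryon_density(4e11, 3e11, 0.0, 250.0))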
Two features stand out in comparison to the quenching mass ratio. First, the average baryon density within the halo of small galaxies is unaffected by AGN feedback at all redshifts. These galaxies do however show a reduced stellar mass, particularly at redshift z=5, where their stellar mass shows a median reduction of 10%, as seen in Fig. <ref>. This suggests that feedback affects the star formation efficiency of these galaxies more than it alters their gas supply. Efficiency is reduced either by locally heating the gas, redistributing the gas within the halo, or by destroying dense, star-forming clumps, but not by driving outflows or preventing gas inflows through the halo virial sphere. Secondly, for large galaxies with stellar masses above M_* > 10^11 M_⊙, the baryon content in both simulations becomes increasingly comparable again with increasing mass, despite the fact that these galaxies see a reduction in their stellar mass of around 80% for redshifts between z=1 and z=0.1. In this case, the deepening gravitational potential of the halo makes it difficult for the AGN to affect gas flows at the halo virial radius. However, a much smaller fraction of the existing gas is converted into stars, because of effects (gas heating / redistribution) similar to those which plague small galaxies. These translate into the significantly reduced galaxy masses presented earlier in Fig. <ref>. For medium galaxies at all redshifts, AGN feedback acts through depleting the gas reservoir at the halo scale, which reduces the supply of gas available for star formation. This means AGN feedback also directly influences the inflows and/or the outflows of the galaxy. We leave a detailed analysis of the interstellar and intergalactic medium under AGN feedback to future work (Beckmann 2017, in prep.).
§.§ The effect on inflows and outflows
There are two ways to decrease the total baryon mass of a galaxy: by reducing inflows or by boosting outflows. In this work, we measure flows at two different radii: halo scales, also called R95, which correspond to a radius of R95=0.95 × R_vir, and galaxy scales, also called R20, which correspond to a radius of R20=0.2 × R_vir. Flows are measured through spherical surfaces located at these radii, centred on the halo. Flow masses are calculated for all cells within a narrow shell centred on the radius in question, as Ṁ_gas = ∑_i ρ_i Δ x_i^3 v̅_i · r̅_i / ω, where ρ_i is the gas density of cell i, Δ x_i is the cell size, v̅_i is the gas velocity, r̅_i is the unit vector pointing from the halo centre to the cell centre, and ω = 2 kpc is the width of the shell. Ṁ_outflow includes all cells with v̅_i · r̅_i > 0 and Ṁ_inflow includes all cells for which v̅_i · r̅_i ≤ 0.
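A sketch of this shell measurement for arrays of gas cells is given below. The array layout is hypothetical and consistent units are left to the caller (with positions in kpc, the default width corresponds to ω = 2 kpc); as in the text, positive radial velocities count towards outflow.

import numpy as np

def shell_mass_flux(pos, vel, rho, dx, centre, radius, width=2.0):
    """Inflow and outflow rates through a spherical shell of given radius
    and width, centred on `centre`.

    pos, vel : (N, 3) arrays of gas-cell positions and velocities
    rho, dx  : (N,) arrays of cell densities and cell sizes
    """
    r_vec = pos - centre
    r = np.linalg.norm(r_vec, axis=1)
    sel = np.abs(r - radius) < 0.5 * width                          # cells in the shell
    v_rad = np.einsum("ij,ij->i", vel[sel], r_vec[sel]) / r[sel]    # v . r_hat
    mdot = rho[sel] * dx[sel]**3 * v_rad / width                    # per-cell contribution
    return mdot[v_rad <= 0].sum(), mdot[v_rad > 0].sum()            # (inflow, outflow)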
A first comparative look at the flow patterns for galaxies in H-AGN and H-noAGN (see Fig. <ref>) suggests that AGN feedback drives outflows in medium and large galaxies, particularly at halo scales. There are also some large scale pseudo flows present for large galaxies. These pseudo flows appear in Fig. <ref> because the algorithm used to extract the absolute flow values presented here assumes the halo can be accurately represented by a sphere. However, if the halo is non-spherical, pseudo flows are created. When rotating a non-spherical object through a spherical surface across which absolute mass flows are measured, parts of the object passing out of the sphere will register as outflows, while parts passing in will register as inflows. However, these contributions are not mass flows in the common sense, and cancel out when calculating net mass flows.
Small galaxies undergo no outflows at halo scales, with or without AGN feedback, which matches the conclusion from Fig. <ref> that the baryon mass of their halos is identical in H-AGN and H-noAGN. AGN feedback reduces inflows for medium and large galaxies, at both halo and galaxy scales, but the effect is more pronounced at the latter. A more quantitative analysis of the outflows driven by AGN feedback is presented in Fig. <ref>, where residual flow values for the two simulations are plotted. Residual flows are defined as the mass flow rates in H-AGN relative to those of their twin galaxies in H-noAGN, i.e. Ṁ_gas^residual = Ṁ_gas^H-AGN - Ṁ_gas^H-noAGN. This approach has the advantage of isolating the effect of AGN feedback and subtracting out any effects present in both simulations, such as the supernova driven outflows for small and medium galaxies at halo scales, and the pseudo flows seen for large objects at both radii.
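Given the twin map from the matching procedure above and per-object flow rates, the residual flows and their binned medians (the quantities plotted in the figures) reduce to a few lines; a sketch with hypothetical container layouts:

import numpy as np

def residual_flows(twins, mdot_agn, mdot_noagn):
    """Residual flow per twin: Mdot(H-AGN) - Mdot(H-noAGN).
    twins: dict H-AGN id -> H-noAGN id; mdot_*: dict object id -> flow rate."""
    return {aid: mdot_agn[aid] - mdot_noagn[nid] for aid, nid in twins.items()}

def binned_median(log_mstar, values, edges):
    """Median of `values` in bins of log stellar mass defined by `edges`."""
    log_mstar, values = np.asarray(log_mstar), np.asarray(values)
    idx = np.digitize(log_mstar, edges)
    return [np.median(values[idx == i]) if np.any(idx == i) else np.nan
            for i in range(1, len(edges))]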
The residual gas flows shown in Fig. <ref> demonstrate that AGN feedback has an approximately equal and opposite effect on outflows and inflows. Galaxies with stellar masses M_*^H-AGN≤ 10^11 M_⊙ see a similar amount of gas carried away by AGN driven outflows as that depleted in inflows. Apart from small galaxies at high redshift (z=5), where AGN feedback seems able to heat up the gas in the vicinity of the galaxy, causing a gas pile up which triggers larger inflows in H-AGN, the differences between flows at halo and galaxy scales are rather modest. There is a weak trend for boosted outflows to be less dominant at galaxy scales (especially at z ≤ 1) than at halo scales, and conversely for the suppression of inflows to be more relevant on small scales (especially for galaxies which are less massive than M_*^H-AGN≤ 10^10 M_⊙), but overall material is neither being significantly swept up in the halo and kicked out, nor preferentially deposited there. Rather, these two effects combine to reduce the net median inflow for medium galaxies into both the halo and the galaxy by up to 60% for the most massive objects, as the direct comparison of net inflows for twins in Fig. <ref> shows. This results in the reduction in baryon mass seen for the same galaxies in Fig. <ref>. In agreement with the same figure, small galaxies show no change in net flows at halo scales, which matches their identical baryon mass in the presence and absence of AGN feedback. These results are in agreement with work by <cit.>, who compare high resolution zoom simulations with and without AGN feedback of a single galaxy of M_*(z=6)=6.2×10^9 M_⊙. The authors find strong evidence for the fact that the AGN significantly heats the gas at halo scales, driving a hot super-wind and destroying cold flows. <cit.> also report AGN boosted outflows in their hydrodynamical merger simulation, and emphasize that long term quenching requires the inflows to be suppressed.
For large galaxies, the situation is different, particularly at low redshift. At halo scales, the simulation with AGN feedback actually shows boosted inflows carrying an amount of mass similar to that in boosted outflows. This means the outflows are being recycled, as AGN feedback becomes unable to gravitationally unbind the gas from the halo. It is important to note that the values plotted here represent the median value for a given mass bin, so the outflow and inflow values do not necessarily belong to the same object. It is therefore not necessarily correct that the two curves cancel out to produce no change in the net flow. Indeed, a comparative analysis of net inflows for each twin across H-AGN and H-noAGN (Fig. <ref>) shows that at halo scales, the overall inflow is boosted by up to 50% for the most massive galaxies in the presence of AGN feedback.
At galaxy scales, the gas flow patterns for the most massive objects (M_*^H-AGN > 10^11 M_⊙ at z=0.1) become harder to predict. In these galaxies, AGNs fall into maintenance feedback mode (see Section <ref>) at redshifts below the peak of star formation, z=2. This produces very bursty outflows, as SMBHs go through cycles of being fed, which triggers strong feedback episodes. The latter drive out the gas, starving the black hole, and the feedback abates until enough gas becomes available again for the SMBH to go through another accretion event. These cycles take place on timescales much shorter than the time interval between the redshift outputs we are considering in this work. Furthermore, the number of such galaxies is quite limited (≈ 1000). We illustrate the impact this has on our results in Fig. <ref>, which shows that at redshift z=0.5, the median outflow depends quite sensitively on the exact point in time at which the distribution of galaxies is sampled. As the SMBHs cycle rapidly through a wide variety of active and quiescent states, the distribution of residual outflows spans several orders of magnitude. In comparison, the outflows produced by the larger population of smaller galaxies are more steady on similar timescales, and therefore their sampling is more robust (see left panel of Fig. <ref>). We would like to point out that the variation timescales of the large scale outflows studied here do not necessarily reflect the duty cycle of the SMBH, as each burst can be driven by a series of feedback events. We postpone a more detailed analysis of the SMBH duty cycles in the simulation to future work (Beckmann et al. 2017, in prep.).
It is interesting to note that the lack of excess outflows for low mass galaxies (M^H-AGN_* < 5 × 10^10 M_⊙ at z=0.1) in Fig. <ref> does not mean that the outflows for any given matching galaxy pair in H-AGN and H-noAGN are identical. As the panel on small galaxies in Fig. <ref> demonstrates, individual objects show a variety of residual outflows (and inflows). The build-up of small differences in galaxy properties not necessarily induced by AGN feedback (e.g. stochastic star formation algorithm, seeding and growth of the central SMBH) can result in temporarily diverging residual outflow histories at low redshift. In other words, even though the precise amount of residual outflow from any specific twinned pair of galaxies does depend on the timescale of the outflow and so is sensitive to the redshift at which it is measured, the lack of any marked systematic difference in the gas flow pattern due to AGN feedback registers as a median residual outflow of zero for the whole sample.
Another note of caution concerns the residual flows in Fig. <ref>, which likely underestimate the effect of AGN on inflows and outflows. Particularly for massive galaxies at low redshift, where the H-noAGN twin has a stellar mass ∼ 5 × that of the H-AGN one, the H-noAGN twin has stronger stellar flows that obscure some of the AGN driven effects when calculating residual gas flows as Ṁ_gas^residual=Ṁ_gas^H-AGN-Ṁ_gas^H-noAGN. Finally, we come back to the asymmetry between residual inflows and outflows with respect to the zero residual line, which is stronger at galaxy than halo scales. In light of the previous remarks about outflow timescales, we can safely interpret this difference as meaning that AGN feedback does preferentially suppress inflows in the vicinity of all galaxies rather than eject gas from them, except perhaps at the very high mass end of the galaxy stellar mass function, M_*^H-AGN > 10^11 M_⊙ at low redshifts (z < 0.1).
Overall, flow patterns due to AGN feedback, made up in roughly equal parts of boosted outflows and reduced inflows at halo scales, comfortably explain the non-linear distribution of baryon masses plotted in Fig. <ref>, which combines with a reduced star formation efficiency across all galaxy mass bins to produce the quenching mass ratios shown in Fig. <ref>.
§ DISCUSSION & CONCLUSIONS
We have isolated the effect of AGN feedback on stellar quenching in massive galaxies by comparing two cosmological simulations, H-AGN and H-noAGN, which were run with and without AGN feedback respectively. More specifically, by twinning individual DM halos and galaxies across the two simulations, we have been able to quantify the effect of feedback on individual objects throughout cosmic time. In agreement with a large body of previous work <cit.> our results show that AGN feedback is instrumental in quenching the massive end of the GSMF. Whilst the stellar mass of galaxies without AGN feedback closely follows predictions based on the assumption that all baryons contained in dark matter halos end up forming stars, galaxies subject to the influence of AGN feedback end up with masses distributed according to a GSMF that shows a characteristic exponential steepening at the high mass end, in line with observations <cit.>.
The importance of AGN feedback has been emphasised in all recent large-scale simulations of galaxy evolution, but the results differ in the details. Similar to results presented here, galaxies in the MassiveBlackII simulation exhibit signs of relatively strong quenching early on but then see a reduction in the impact of AGN feedback at lower redshifts <cit.>. A similar issue is reported by Illustris, who found that despite aggressive AGN feedback that produces unrealistically low gas fractions in DM halos at low redshift, star formation is not suppressed strongly enough and the simulation overproduces massive galaxies <cit.>. As opposed to the dual AGN feedback model used in H-AGN and Illustris, the EAGLE simulation only employs a single feedback mode, tuned to reproduce the GSMF at z=0. They do also see good agreement with the GSMF at redshift z>2, which supports our conclusion that the shift in feedback mode plays a subordinate role in the importance of AGN feedback <cit.>. Overall, regardless of the hydrodynamics scheme employed and the detail of the subgrid model implementation of AGN feedback, all four recent large-scale cosmological simulations of galaxy evolution agree that AGN feedback must play a crucial role in regulating the evolution of massive galaxies.
A closer comparison of the stellar mass for individual objects in H-AGN and H-noAGN reveals a non-linear dependence of the quenching mass ratio on mass, with the most massive galaxies being the most strongly quenched, and the smallest galaxies mostly unaffected by AGN feedback. This leads to a characteristic shape for the mass ratio M_*^H-AGN/M_*^H-noAGN, which shows a linear dependence on log(M_*^H-AGN) for medium-sized galaxies with 10^9 M_⊙≲ M_*^H-AGN≲ 10^11 M_⊙ (the exact values depend on redshift), but tails off at both the low- and high-mass ends. The most massive galaxies, with M_* > 10^11 M_⊙, are the most strongly quenched and contain only 20% of the stellar mass in the presence of AGN feedback at z=0.1, in comparison to the case without feedback.
We also find a significant redshift evolution for the smallest galaxy mass to be affected by AGN feedback, with smaller galaxies being more quenched at higher redshift. This transition mass at which AGN feedback becomes important evolves from 2.24 × 10^10 M_⊙ at z=0.1 to 1.48 × 10^9 M_⊙ at z=4.9. Such a redshift dependence seems to be in good agreement with observations <cit.> but systematically leads to values at high redshift which are lower than those reported by a range of hydrodynamics simulations <cit.>. We argue that these discrepancies reflect the fact that quenching is very likely a cumulative process that builds up over the entire history of the galaxy, not just a one-off event that shuts down star formation forever when the galaxy reaches a particular stellar mass. If correct, the consequence is that differences induced by AGN feedback in star formation rates can be small, particularly for objects near the transition mass, the effect of such feedback only becoming apparent for galaxies once its integral represents a significant fraction of their total stellar mass.
In our case, this evolution is caused by the median central black hole mass of galaxies of a given stellar mass, M_ SMBH/M_*, increasing with redshift by a factor of a few between z=0 and z=5, a result for which there exists observational support <cit.>. However, it has been suggested that the observational trend might be biased <cit.>. A similar trend has been reported in other large-scale simulations, such as MassiveBlackII <cit.> and Illustris <cit.>, but not in all of them <cit.>. This evolution in black hole masses combines with higher accretion rates at high redshift, due to gas-rich galaxies, such that a galaxy of the same stellar mass is subject to AGN feedback up to three orders of magnitude stronger at redshift z=5 than at z=0.1. We also measure a shift in feedback mode, with at least 85.5% of AGN at redshift z=3 and above in quasar mode, compared with a maximum of 19.1% at lower redshifts, but the increasing radiative efficiency associated with this shift is unable to offset the trend of lower M_ SMBH/M_* and lower accretion rates.
A comparative analysis of the baryon content of halos reveals that AGN feedback quenches star formation through a combination of reducing the total gas supply within the halo by driving outflows, and preventing accretion of fresh gas by curbing inflows. Small galaxies with M_*^H-AGN≤ 10^9 M_⊙ show nearly identical baryon masses with and without AGN feedback, as the flows at halo scales remain chiefly unaffected by feedback. Note that this is not true at galaxy scales and high redshift, where gas inflows can be somewhat enhanced by feedback for these small objects. On the other hand, medium-sized galaxies with 10^9 M_⊙≤ M_*^H-AGN≤ 10^11 M_⊙ experience a significant reduction in baryon mass, caused by approximately equal contributions from AGN-driven gas outflows and a reduction of cosmic inflows. At galaxy scales, the reduction of inflows dominates. Finally, for large galaxies (10^11 M_⊙≤ M_*^H-AGN) the baryon mass rises again, as inflows at halo scales are swelled by gas expelled by AGN feedback in the inner regions, which remains gravitationally bound and falls back into the halo. At galaxy scales, outflows for the most massive objects vary on very short timescales, as the AGN enter a bursty maintenance mode. Thus the gas mass has a tendency to increase on average (as compared to medium-sized galaxies), even though the stellar mass does not, given the long characteristic timescale of star formation.
The picture that emerges seems consistent with high resolution work (20 pc instead of 1 kpc) by <cit.>, who use zoom simulations of individual objects with and without AGN feedback, and find that AGN drive hot super-winds which disrupt cold inflows on halo scales. The two effects combine to reduce the baryon content of galaxies by up to 30 %. Similar conclusions were reached by early semi-analytic work by e.g. <cit.>, who argue that strong, hot winds are necessary to reproduce the observed luminosity function, and recent simulations by <cit.> and <cit.> that also report AGN-boosted outflows, concluding that long term quenching requires gas inflows to be suppressed.
Looking at cosmic accretion specifically, <cit.> find that the simulation without feedback (neither AGN-driven nor stellar) sees much higher levels of smooth accretion onto the galaxy than the ones with (AGN and stellar) feedback. However, contrary to the results presented here, they find no evidence for recycling of gaseous material at the halo boundary.
While all current large-scale cosmological simulations include AGN feedback as an integral part of their galaxy evolution model, some authors contend that processes in massive galaxies that do not rely on AGN feedback can reproduce galaxy mass functions through e.g. cosmic quenching <cit.>, and that stellar super-bubble feedback can drive powerful outflows <cit.>. <cit.> and <cit.>, running idealised galaxy simulations, find AGN-driven outflows consistent with those we report here but a lower impact on the star formation rate of galaxies.
Overall, we conclude that AGN feedback provides an effective mechanism to reproduce the distribution of galaxies at and above the knee of the GSMF, over a redshift range spanning 90 % of the age of the Universe. For local galaxies, AGN feedback plays an important role in stifling star formation in objects above a transition mass of M_* ≥ 2×10^10 M_⊙. AGN feedback acts by reducing the stellar content of galaxies by up to 80% (for the most massive objects) through a mixture of increased outflows and reduced inflows, combined with a decreased star formation efficiency of in situ gas. We predict that the influence of AGN feedback should already be noticeable by redshift z = 5, for galaxies with relatively modest stellar masses (M_* ≈ 2 × 10^9 M_⊙) by current epoch standards, as these objects are close to the top end of the GSMF at these redshifts. This is exciting news as the James Webb Space Telescope should be able to test this prediction in the near future.
§ ACKNOWLEDGEMENTS
This work used the HPC resources of CINES (Jade supercomputer) under the allocation 2013047012 made by GENCI, and the horizon and Dirac clusters for post-processing. This work is partially supported by the Spin(e) grants ANR-13-BS05-0005 of the French Agence Nationale de la Recherche and by the National Science Foundation under Grant No. NSF PHY11-25915, and it is part of the Horizon-UK project, which used the DiRAC Complexity system, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment is funded by BIS National E-Infrastructure capital grant ST/K000373/1 and STFC DiRAC Operations grant ST/K0003259/1. DiRAC is part of the National E-Infrastructure. The research of RSB is supported by STFC, and the research of AS, MLAR and JD at Oxford is supported by the Oxford Martin School and Adrian Beecroft. NC is supported by a Beecroft postdoctoral fellowship.
|
http://arxiv.org/abs/1701.08038v2 | 20170127124759 | Cell-to-cell variability and robustness in S-phase duration from genome replication kinetics | [
"Qing Zhang",
"Federico Bassetti",
"Marco Gherardi",
"Marco Cosentino Lagomarsino"
] | q-bio.GN | [
"q-bio.GN",
"physics.bio-ph"
] |
qzhang519@gmail.com
federico.bassetti@unipv.it
gocram@gmail.com
marco.cosentino-lagomarsino@upmc.fr
^1 Sorbonne Universités, UPMC Univ Paris 06, UMR 7238, Computational and Quantitative Biology, 15 rue de l'École de Médecine, Paris, France
^2 Dipartimento di Matematica, Università di Pavia, Pavia, Italy
^3 IFOM, FIRC Institute of Molecular Oncology, Milan, Italy
^4 CNRS, UMR 7238, Paris, France
Genome replication, a key process for a cell, relies on stochastic
initiation by replication origins, causing a variability of
replication timing from cell to cell. While stochastic models of
eukaryotic replication are widely available, the link between the
key parameters and overall replication timing has not been addressed
systematically.
We use a combined analytical and computational approach to calculate how
the positions and strengths of many origins lead to a given cell-to-cell
variability of the total duration of the replication of a large region,
a chromosome, or the entire genome.
Specifically, the total replication timing can be framed as an
extreme-value problem, since it is due to the last region that
replicates in each cell.
Our calculations identify two regimes based on the spread between
characteristic completion times of all inter-origin regions of a
genome. For widely different completion times, timing is set by the
single specific region that is typically the last to replicate in
all cells.
Conversely, when the completion times of all regions are comparable,
an extreme-value estimate shows that the cell-to-cell variability of
genome replication timing has universal properties. Comparison with
available data shows that the replication program of three yeast
species falls in this extreme-value regime.
Cell-to-cell variability and robustness in S-phase duration from
genome replication kinetics
Marco Cosentino Lagomarsino ^1,3,4
===============================================================================================
§ INTRODUCTION
In all living systems, the duration of DNA replication correlates with
key cell-cycle features, and is intimately linked with transcription,
chromatin structure and genome evolution. Dysfunctional replication
kinetics is associated with cancer and found in aging cells.
Eukaryotic organisms rely on multiple discrete origins of replication
along the DNA <cit.>. These origins
are “licensed” during the G1 phase by origin recognition complexes
and MCM helicases, and can initiate replication during S
phase <cit.>. Once one origin is activated
(“fires”), a pair of replication forks are assembled and move
bidirectionally. In one cell cycle, one origin already activated or
passively replicated cannot be activated
again <cit.>.
Origins have specific firing rates, possibly connected to the
number of bound MCM helicase complexes <cit.>, and their
specificity determines the kinetics of replication during S phase, or
“replication program”.
To investigate genomic replication kinetics, DNA copy number can be
measured with microarray or sequencing, as a function of genome
position and time (see, e.g.,
<cit.>). Based
on such high-throughput replication timing data, it is possible to
infer origin positions and the key parameters for a mathematical
description of the replication process (see,
e.g., <cit.>). Recent
methods also allow one to extract the same information from free-cycling
cells <cit.>. The mathematical modeling of genome-wide
replication timing data shows that replication kinetics results from
the stochastic mechanism of origin
firing <cit.>.
In other words, replication timing originates from individual
probabilities of origin firing (and their correlations with genome
state <cit.>). In such models, the firing rates of individual origins determine the kinetic pattern of replication along the chromosomal coordinate, and fork
velocity is typically assumed to be nearly constant along the genome
(in absence of blockage).
Evidence of this stochasticity directly from single cells (which
should give access to relevant correlation patterns) is less abundant.
Importantly, replication timing patterns observed in population
studies can be explained by stochastic origin firing at the
single-cell level <cit.>.
Stochastic activation of origins leads to stochasticity of termination
and cell-to-cell variability of the total duration of replication of a
chromosome, a genomic region, or the whole
S-phase <cit.>, with possible repercussions on the
cell cycle. This raises several questions, including how the
individual rates and spatial distribution of origins cooperate to generate
variability in replication timing, the extent of such variability, and
whether it is possible to identify specific regimes or optimization
principles in terms of cell-to-cell variability.
However, such questions have not been systematically addressed in the
available models.
A series of pioneering studies <cit.> has used
techniques of extreme-value theory to derive the distribution of
replication times in the particular case where each locus of the
genome is a potential origin of replication, as in the embryonic cells
of X. laevis. These efforts allowed to clarify the possible
optimization principles underlying the replication kinetics in such
organisms.
Here, we extend this approach to the widely relevant case of discrete
origins with fixed positions <cit.>, using
a modeling framework for stochastic replication to investigate the
cell-to-cell variability of the duration of the S phase (or of the
replication of any genomic region, such as one chromosome). We
use analytical calculations based on extreme-value theory together with
simulations, employ experimental data to infer replication parameters,
and identify the main features of empirical origin strengths and positions
as well as their response to specific changes.
§ MATERIALS AND METHODS
§.§ Model
We make use of a one-dimensional nucleation-growth
model <cit.> of stochastic replication kinetics
with discrete origin locations x_i, similar to models available in
the literature <cit.>.
Activation of origins (firing) is stochastic, and is described as a
non-stationary Poisson process. The firing rate A_i(t) of the
origin located at x_i is a function of time, A_i(t)=λ_i
t^γθ(t), where θ(t) is the step function, and
λ_i and γ are
constants <cit.>.
We assume that the parameter γ and the fork velocity v are
common to all origins, whereas λ_i, which reflects the
specific strength of each origin, is origin dependent. The
probability density function (PDF) f_i(t) of the firing time t for
the i-th origin, given that the origin fires during that
replication round, can be obtained as f_i(t)=A_i(t) exp(-∫_0^t A_i(τ) dτ), which gives
f_i(t) = λ_i t^γ θ(t) exp(-λ_i t^(γ+1)/(γ+1)).
When γ>0, i.e., when the firing rate increases with time,
f_i(t) is a stretched exponential distribution.
When γ=0, the firing rates are constant and the process is
stationary, so A_i(t)=λ_i and f_i(t)=λ_i
θ(t)e^-λ_i t.
Once an origin has fired, replication forks proceed bidirectionally at
constant speed, possibly overriding other origins by passive
replication. When two forks meet in an inter-origin region,
replication of that region is terminated.
The length of the i-th region is defined as d_i = x_(i+1) - x_i;
the time when its replication is completed is T_i.
The duration of the S phase T_S
is the time needed for all inter-origin regions to be replicated.
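To make the model concrete, the following minimal Python sketch simulates replication rounds under the assumptions above; all parameter values (chromosome length, fork speed, exponent, origin number and strengths) are illustrative placeholders, not fitted values. Potential firing times are drawn by inverse-CDF sampling of f_i(t), and the replication time of each locus x is min_i [s_i + dist(x,x_i)/v], which automatically accounts for passive replication, since a passively replicated origin never realizes the minimum.

```python
import numpy as np

def sample_firing_times(lams, gamma, rng):
    # Inverse-CDF sampling of f_i(t) = lam_i t^gamma exp(-lam_i t^(gamma+1)/(gamma+1))
    u = rng.random(len(lams))
    return ((gamma + 1) / lams * (-np.log(1 - u))) ** (1.0 / (gamma + 1))

def replication_profile(x_origins, s, v, L, n_grid=5000):
    # Replication time t(x) = min_i [s_i + d(x, x_i)/v] on a circular chromosome
    x = np.linspace(0, L, n_grid, endpoint=False)
    d = np.abs(x[:, None] - x_origins[None, :])
    d = np.minimum(d, L - d)                  # circular distance
    return np.min(s[None, :] + d / v, axis=1)

rng = np.random.default_rng(0)
L, v, gamma, n = 1000.0, 2.0, 1.0, 20         # kb, kb/min, exponent, origins
x_origins = np.sort(rng.uniform(0, L, n))
lams = np.full(n, 0.01)
# S-phase duration of each simulated "cell" = latest-replicating locus
T_S = np.array([replication_profile(x_origins,
                                    sample_firing_times(lams, gamma, rng),
                                    v, L).max()
                for _ in range(1000)])
```

The inter-origin completion times T_i can be read off the same profile by taking the maximum of t(x) within each interval [x_i, x_(i+1)].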
§.§ Fits
Empirical parameters were inferred through fitting experimental data
from refs. <cit.> on DNA copy
number as a function of position and time with the model. The
positions of replication origins were obtained directly from the
literature and considered
fixed <cit.>. The fits are
performed by minimizing the distance between the replication timing
profiles in the model and in the experimental data. This is carried
out by updating the global parameters (γ and v) and the local
parameters (λ_i, i∈{1,2,...,n}) iteratively (Appendix A). The parameters from these fits are presented
in Supplementary Table S1.
§.§ Simulations
Our theoretical calculations (described below)
allow to obtain the cell-to-cell variability of T_S in
special regimes.
We compare simulations using the complete information on the locations
and strengths of all origins fitted from the data, with randomized
chromosomes having similar properties. In these randomized chromosomes
we consider the inter-origin distances d_i and the strengths
λ_i as independent random variables. They are drawn from
probability distributions recapitulating their empirical mean and
variability.
More precisely, from the fitted parameters we fix the mean <
d> and the standard deviation σ_d of the distance, and
the mean <λ> and the standard deviation
σ_λ of the strength.
The actual distances d_i and strengths λ_i are then drawn by
sampling from two gamma distributions
d_i∼Γ(<d>^2/σ_d^2,<d>/σ_d^2),
λ_i∼Γ(<λ>^2/σ_λ^2,<λ>/σ_λ^2).
The gamma distribution Γ(a,b)
(parametrized in terms of a shape parameter a and a rate parameter b)
has PDF p(x) ∝ x^(a-1) exp(-b x). It yields positive values, with mean a/b and variance
a/b^2, and it is the maximum-entropy distribution with fixed mean
and fixed mean of the logarithm. We verified that the assumption of a
gamma distribution was in line with empirical data (Fig. S1).
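As an illustration, randomized chromosomes can be drawn as follows; note that NumPy parametrizes the gamma distribution by shape and scale (the inverse of the rate b used above), and the mean and standard deviation values below are placeholders rather than fitted ones.

```python
import numpy as np

def sample_gamma(mean, sd, n, rng):
    # Gamma(a, b) with shape a = mean^2/sd^2 and rate b = mean/sd^2;
    # NumPy expects (shape, scale) with scale = 1/b = sd^2/mean
    return rng.gamma((mean / sd) ** 2, sd ** 2 / mean, size=n)

rng = np.random.default_rng(1)
d_i = sample_gamma(30.0, 10.0, 20, rng)    # inter-origin distances (kb)
lam_i = sample_gamma(0.02, 0.01, 20, rng)  # origin strengths
```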
To explore the full range of parameters, we also used stochastic
simulations, which were performed both (i) with the precise origin
locations and strengths fitted from the data, and (ii) with d_i and
λ_i drawn randomly as described above.
To avoid the boundary effects of linear chromosomes, we consider
circular chromosomes with n origins, unless specified otherwise
(boundary effects are discussed in Appendix B and Fig. S2,
and do not affect our main conclusions).
To analyze the biologically relevant regimes, we considered
replication kinetics data on different yeast species, from
refs. <cit.> and
<cit.>, ran simulations with such
parameters, and compared with the theoretical predictions using the
empirical values for σ_d, σ_λ and mean origin
positions and strengths.
§ BACKGROUND
§.§ The S-phase duration is the result of a maximum
operation on the stochastic replication times of inter-origin regions
We start by discussing how the stochastic nature of single-origin firing
affects the total replication timing of a chromosome.
Fig. <ref>ab illustrates this process. In each cell, a
chromosome is fully replicated when the last inter-origin region is
complete. In other words, the last-replicated region sets the
completion time for the whole chromosome. Consequently, the total
duration is the maximum among the replication times of all
inter-origin regions <cit.>.
For simplicity, we first consider the case of a genome with only one
chromosome.
The duration of the S phase is therefore
T_S=max(T_1,T_2,...,T_n) where n is the number of inter-origin
regions.
The stochasticity of the replication time T_i of each inter-origin
region makes the S-phase duration T_S itself stochastic,
thus giving rise to cell-to-cell variability, which can be estimated
by the model (Fig. <ref>c). In the case of multiple
chromosomes, the same reasoning applies to the last-replicated
inter-origin region over all chromosomes.
§ RESULTS
§.§ A theoretical calculation reveals the existence of two
distinct regimes for the replication program
It is possible to estimate the distribution of T_S
analytically, starting from the distribution of T_i.
Two distinct limit-case scenarios can be distinguished. In the first
scenario, a specific inter-origin region r is typically the slowest
to complete replication and thus represents a “replication
bottleneck”. In this case, T_S is dominated by T_r,
meaning that T_S≈ T_r.
T_r is identified as the one which is largest on average.
Fig. <ref>a shows an example chromosome with 10 origins
with the same strength, where one inter-origin distance (d_1) is
much larger than the others.
Owing to this disparity, T_1 is very likely the maximum among
all T_i, and is therefore the region determining T_S.
In this scenario, which we term “bottleneck estimate”, the
distribution of T_S will be approximately the same as that
of the bottleneck T_r (Fig. <ref>c).
In the second scenario, each inter-origin region has a similar
probability to be the latest to complete replication.
In this case, every inter-origin region contributes to the distribution
of T_S. Since T_S=max(T_1,T_2,…,T_n), we apply the
well-known Fisher-Tippett-Gnedenko theorem
<cit.>, which is a general
result on extreme-value distributions (EVD).
In order to use this theorem, we make the following two assumptions:
(i) T_1, T_2, …, T_n are statistically independent, i.e.,
each inter-origin replication time is an independent random variable,
incorporating the essential information about origin variability and rates;
(ii) T_i follows a stretched-exponential distribution,
independent of i, i.e.
p(T_i<t) = 1 - e^(-α (t-t_0)^β),
when t>t_0, while p(T_i<t)=0 when t⩽ t_0. The (positive)
parameters α, β and t_0, effectively describe the
consequences of the model parameters v, γ, inter-origin
distances (d_1, d_2, ..., d_n) and origin strengths
(λ_1,λ_2,...,λ_n) on completion timing of
inter-origin regions (see below and Appendix D), and can be
obtained by fitting the distribution of replication time for a typical
inter-origin region (obtained from simulations) with Eq. <ref>.
Our fits show that Eq. <ref> is a remarkably good phenomenological
approximation of the distribution of T_i (see Appendix C and
Fig. S3), thus justifying assumption (ii) above.
Note that the fitted stretched exponential form also incorporates
effectively the coupling existing between different inter-origin
regions.
Indeed, neighboring regions are correlated since they use a pair of
replication forks stemming from their common origin. Moreover, even
distant inter-origin regions can share the same fork if they are
passively replicated.
In order to justify the assumption (i), we tested the effect of the
correlation between different regions, by sampling T_1, T_2, …,
T_n from the distribution in Eq. <ref> independently and then
taking their maximum T_S^*. We verified that the difference
between the distribution of T_S^* and that of
T_S obtained from simulation (where the correlations are
present) is small. Therefore, the effect of these relatively
short-ranged correlations can be, to a first approximation,
neglected at the scale of the chromosomes and of the genome, and
described by the effective stretched-exponential form (see
Fig. S4).
Based on these assumptions, we can use the Fisher-Tippett-Gnedenko
theorem and derive the following cumulative distribution function for
T_S as a function of the number of origins n
and the parameters α, β and t_0 (the
calculation is detailed in the Appendix D):
P(T_S ≤ t) ≈ exp{-exp[β log n (1-(α/log n)^(1/β) (t-t_0))]}.
Eq. <ref> gives a direct estimate of
the distribution of the S-phase duration in this second scenario,
which we term “extreme-value” or “EVD” regime.
The resulting
distribution is universal, since it does not depend on the detailed
positions and rates of the origins, and depends in a simple way on the
parameters α, β, t_0 and n.
Although the extreme-value estimate should apply to the case of
large n, the approximation of Eq. <ref> holds to a satisfactory
extent also for realistic values, when n is of order 10 (see
Supplementary Fig. S12).
We also derived approximate analytical expressions for α,
β and t_0 as functions of the parameters v, γ, for a
“typical” region characterized by <λ> and
<d> under the assumption of negligible interference from
non-neighbour origins (see Appendix D).
The procedure by which we apply Eqs. <ref> and <ref> is the
following. Given inter-origin distances and origin strengths assigned
arbitrarily or inferred from empirical data, the simulation of the
replication of a chromosome gives the distributions of T_i and
T_S. A fit of the distribution of T_i from simulation
using Eq. <ref> gives the parameters α, β and t_0. Finally,
the EVD estimate for the distribution of T_S can be
obtained from Eq. <ref> and compared with the distribution of
T_S from simulations.
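A hedged sketch of this procedure in Python (building on the simulation sketch of the Model section; `T_i_samples` stands for completion times of inter-origin regions collected over many simulated rounds, and the initial guess passed to the fit is an arbitrary placeholder):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_cdf(t, alpha, beta, t0):
    # Eq. (3): P(T_i < t) = 1 - exp(-alpha (t - t0)^beta) for t > t0
    dt = np.clip(t - t0, 0.0, None)
    return 1.0 - np.exp(-alpha * dt ** beta)

def evd_cdf(t, alpha, beta, t0, n):
    # Eq. (4): P(T_S <= t) = exp{-exp[beta log n (1 - (alpha/log n)^(1/beta) (t - t0))]}
    ln = np.log(n)
    return np.exp(-np.exp(beta * ln * (1.0 - (alpha / ln) ** (1.0 / beta) * (t - t0))))

# Fit the empirical CDF of the inter-origin completion times with Eq. (3)
T = np.sort(T_i_samples)
ecdf = np.arange(1, len(T) + 1) / len(T)
(alpha, beta, t0), _ = curve_fit(stretched_cdf, T, ecdf, p0=(1.0, 2.0, 0.9 * T[0]))
# evd_cdf(t, alpha, beta, t0, n) is then compared with the simulated T_S
```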
This procedure can be seen as a variant of the method introduced
in refs. <cit.> applicable to the case of discrete
origins (see Discussion).
Fig. <ref>b shows one example where one circular chromosome
has 10 origins with identical strengths and identical inter-origin
distances. The estimated distribution of S-phase duration from
Eq. <ref> is well-matched with the simulated one (Fig. <ref>d).
Fig. <ref> also shows how the bottleneck estimate works for
the opposite scenario, and compares simulations with both estimates in
the two different regimes.
Similar to Fig. <ref>, Supplementary Fig. S5 shows the
existence of the two regimes in presence of a single origin affecting
the two neighboring inter-origin regions. In the bottleneck regime,
these two regions replicate much later than the others, because their
common origin is much weaker than the other origins; the S-phase
duration is then dominated by their replication time. This case also
illustrates how the bottleneck regime may not be limited to a single
inter-origin region.
Finally, Supplementary Fig. S6 shows the distribution of the
inter-origin completion times T_i in the cases presented in
Fig. <ref> and Supplementary Fig. S5. This analysis
illustrates how extra peaks in the right tail of T_i distribution
relate to the failure of the extreme-value estimate for the
distribution of S-phase duration. These examples indicate that, as
expected, the presence of outliers in the values of T_i
(exceedingly slowly-replicating regions) is responsible for the
onset of the bottleneck behavior.
§.§ The extreme-value regime is robust to perturbations
increasing the replication timing of a local region
Origin number, origin strengths and inter-origin distances can be
perturbed due to genetic change (DNA mutation or recombination), over
evolution, and due to epigenetic effects such as binding of specific
agents. We can compare the robustness of the two regimes identified
above to perturbations of these parameters. We consider in particular
the elongation of a single inter-origin distance d_i ↦ d_i +
δ_d (similar results to those reported below are obtained
for a perturbation affecting the strength of a single origin, see
Supplementary Fig. S7).
In such a case, the change of T_i is approximately equal to δ_d/(2v).
In the bottleneck regime, if the perturbed inter-origin region is
the slowest-replicating one, <T_S>
increases linearly with δ_d with slope 1/(2v), and the
distribution of T_S shifts by a delay δ_d/(2v)
(Fig. <ref>a).
In the extreme-value regime, instead, there is no single bottleneck
inter-origin region, and the change of
T_S with the perturbation turns out to be much smaller than
δ_d/(2v) (Fig. <ref>b).
Notice that in both regimes the variability of the S-phase duration
around its average is not affected sensibly (insets of
Fig. <ref>).
In summary, the bottleneck regime is “sensitive”
to the specific perturbations considered, since termination of
replication is highly dependent on a single inter-origin region, while
the EVD regime is “robust”, as the effect of small local
perturbations can be absorbed by passive replication from nearby
origins <cit.>.
§.§ Diversity between completion times of inter-origin
regions sets the regime of the replication program
The cases discussed above (Fig. <ref>) recapitulate the
expected behavior in the case of high versus low variability of
the typical completion times of different inter-origin regions.
One can expect that if the variability of the inter-origin distances
is large, or origin strengths are heterogenous, it will be more likely
to produce a bottleneck region, which in turn will trivially affect
replication timing.
Conversely, the replication program will be in the extreme-value regime
if the completion times of all regions are comparable.
In order to show this, we tested systematically how average and
variability of T_S change with the variability of
inter-origin distances and origin strengths in randomly generated
genomes. In this analysis, origin spacings and strengths are
assigned according to the prescribed probability distributions shown
in Eq. <ref>, with varying parameters (see the Methods for a
precise description of how chromosomes are generated).
Fig. <ref> shows the results. Importantly, we find that
the regimes defined above as extreme cases apply for most
parameter sets, and there is only a small region of the parameters
where we find intermediate cases.
Specifically, two parameters, the standard deviations σ_d and
σ_λ, of the inter-origin distances and the origin
strengths respectively, are sufficient to characterize the system.
Fig. <ref>a indicates that as long as σ_d is smaller
than a threshold (around 30 kb), the average
<T_S> and the standard deviation
σ(T_S) of the replication time are approximately
constant. In this regime, the extreme-value estimate matches well the
simulation results. When σ_d exceeds the threshold, the
average of T_S increases and its standard deviation
decreases with large fluctuations. In this other regime, both
<T_S> and σ(T_S) deviate from
the EVD estimate.
Fig. <ref>b shows that varying σ_λ at fixed
origin positions produces a similar behavior (although with smaller
deviations from the EVD estimates).
This analysis shows an emergent dichotomy between these two regimes,
which depends on the distribution of T_i (i.e. both inter-origin
distances and origin firing rates). In principle, more complex
situations where, e.g., a subset of many comparably “slow”
inter-origin regions dominates S-phase timing are possible, but such
situations are very rare (and negligible) if origin rates and positions
are generated with the criteria used here (given by
Eq. <ref>).
De facto, under these prescriptions, motivated by empirical
properties of origin positions and strengths, only the two regimes
defined above as extreme cases were observable. For example, one
can imagine a situation where each chromosome is, separately, in
the EVD regime, but the replication of one of the chromosomes takes
considerably longer than the others on average, which may lead the
S-phase duration to be in the bottleneck regime. However, we find
that this situation is essentially never found if origin rates and
positions have empirically relevant values (i.e. for all
realizations with empirical means and variances of inter-origin
distances and origin firing rates).
Qualitatively, this will always be the case if the distribution of
T_i shows a single mode, and there are very few, or just one
exceptional late-replicating region.
This behavior suggests to define “critical values” of σ_d
and σ_λ, separating the extreme-value regime from the
bottleneck regime, as follows.
We define σ^c_d, at fixed σ_λ, as the
value of σ_d at which <T_S> (possibly averaged
over many samples of the origin configuration too, denoted
<<T_S>>) is 20% larger than at σ_d=0
and σ_λ=0. The results presented here do not depend
appreciably on this threshold and do not change much if we define
σ^c_d as the value of σ_d at which
<T_S> is 20% off the prediction of the EVD theory.
The same definition holds for σ^c_λ at fixed
σ_d.
Surprisingly, σ^c_d turns out to be independent of
σ_λ, and σ^c_λ independent of
σ_d.
The resulting “phase diagram”, shown in
Fig. <ref>c, separates the space of parameters into an approximately
rectangular region where the EVD estimate is precise, and an
outer region where heterogeneities dominate, which is identified with
the bottleneck regime.
We can give a simple argument for why this phase diagram is approximately
rectangle-shaped. Intuitively, a large σ_d increases the
probability of extracting a very large value for d, and a large
σ_λ increases the probability of extracting a very small
λ. In a realization of a randomized chromosome, such rare
events may generate an extremely slow-replicating region acting as the
bottleneck.
Clearly, drawing an extreme value for only one of the two variables is
sufficient to generate the bottleneck region, giving rise to the two
sides of the rectangle.
For values of the variances of both variables that are below the
individual thresholds, drawing a large d and small λ
jointly makes the upper-right region of the rectangle rounded.
However, such joint extreme draws in the same
inter-origin region are very rare, because the two variables are drawn
independently, so the rounded upper-right corner is very small, as
visible in Fig. <ref>c.
§.§ The yeast replication program is just inside the EVD
regime and likely under selection for short S-phase duration
The results of the previous section indicate that the standard
deviations of the origin distances and of the strengths are the most
relevant parameters determining the regime of the distribution of the
S-phase duration across cells.
We inferred the parameters from replication timing data of the yeasts
S. cerevisiae (ref. <cit.>),
L. kluyveri (ref. <cit.>) and S. pombe (ref. <cit.>). Such fits
fully constrain the model parameters: fork velocity v, γ,
start of the S phase t_0, origin strengths λ_i and
inter-origin distances d_i, from which we calculated
<d>, <λ>, σ_d and
σ_λ, and simulated the duration of S phase and
replication time of each chromosome (see Appendix A and
Fig. S8-10).
In these simulations we consider circular chromosomes with n
origins, and boundary effects are tested in the Appendix B
and Fig. S2, and do not affect our main conclusions, indicating
that, according to the model, the partition of the genome into 16
unconnected chromosomes has little effect on the statistics of
S-phase duration.
The values of γ that were obtained as best fits of the
empirical data (Supplementary Fig. S8) were in line with previous
analyses (e.g. <cit.>). In addition,
we found that the standard deviation of the predicted S-phase duration
decreases with the parameter γ (Supplementary Fig. S9), which
agrees with the finding of previous studies focused on
X. laevis <cit.>.
This analysis indicates that the whole-genome values of σ_d and
σ_λ measured for S. cerevisiae, L. kluyveri
and S. pombe place these genomes within the extreme-value
regime. Rescaling σ_d and σ_λ by the crossover
values σ^c_d and σ^c_λ
respectively makes it possible to compare data with different mean
T_S. This comparison (Fig. <ref>a) shows that not
only the genomic but also most of the chromosomal parameters of
L. kluyveri, S. cerevisiae and S. pombe are
located in the extreme-value regime.
With the fitted parameters, most chromosomes and genomes are
found in the extreme-value regime (as an example, see Supplementary
Fig. S10).
Interestingly, all chromosomes (and the full genome) lie close to the
transition line. This may be a consequence of the presence of
competing optimization goals, such as replication speed (or
reliability) and resource consumption by the replication
machinery <cit.>.
Furthermore, we considered data of two S. cerevisiae
mutants. In one mutant, three specific origins in three different
chromosomes (6, 7, and 10) were
inactivated <cit.>.
The inactivation of a specific origin slows down the replication of
the nearby region, which might cause a bottleneck.
Our results show that this origin mutant is still in the EVD regime
(Supplementary Fig. S13).
Importantly, in this case the model should be able to make a precise
prediction for the replication profile of the chromosomes where one
origin is inactivated.
Supplementary Fig. S14 shows the prediction on the replication
profile of origin mutant strain based on the parameters fitted from
the data of wild-type strain (except that the three inactivated
origins are deleted from the origin list). The model prediction is
in fairly good agreement with data. The mismatch between prediction
and data in some regions (but not others) is an interesting feature
revealed by the model, and may result from experimental error or
gene-expression adaptation of the mutants <cit.>.
The other mutant strain that we considered is isw2/nhp10,
from the study of Vincent and coworkers <cit.>, who
analyzed the functional roles of the Isw2 and Ino80 complexes in DNA
replication kinetics under stress. This study compares the behavior
of wild type (wt) strain and a isw2/nhp10 mutant in the
presence of MMS (DNA alkylating agent methyl methanesulfonate) and
found that S-phase in isw2/nhp10 is extended compared to the
wt strain because the Isw2 and Ino80 complexes facilitate
replication in late-replicating-regions and improve replication fork
velocity. In agreement with these findings, the model fit of the
data shows that isw2/nhp10 mutant has more inactive origins
and smaller fork velocity. Such conditions may facilitate the onset
of a bottleneck regime in the mutant compared to the wt strain. We
found that S. cerevisiae wt strain treated with MMS still
falls in the extreme-value regime. Conversely, some chromosomes (e.g.
13 and 15) of the isw2/nhp10 mutant are in the bottleneck
regime, and in this case the whole genome (entire S phase) is
driven into the bottleneck regime (see Supplementary Fig. S15).
Strikingly, the model makes a good prediction on the replication
profile of the isw2/nhp10 mutant, using origin firing
strengths and the γ values fitted from the wild-type strain
experiments, adjusting just two (global) parameters: the replication
speed and an overall factor in all origin firing rates
(Supplementary Fig. S16). This provides a good cross-validation of
the applicability of the model in a predictive framework.
A further question is whether we can detect signs of optimization in
the duration of chromosome replication.
Fig. <ref>b compares the S-phase durations obtained from
simulations of the model in two cases: (i) by using the origin
positions and strengths from empirical data (see Supplementary
Fig. S10), and (ii) by using a null model with randomized parameters
(both origin strengths and inter-origin distances) drawn according to
Eq. (<ref>), and preserving the empirical mean and variance.
The results show that for some of the chromosomes the average
replication timing T_S is close to the typical one obtained
from randomized origins (e.g., chromosomes 1,3,5,6,8,11,13 in
S. cerevisiae). For other chromosomes (e.g., 2,4,7,10,12,15,16 in
S. cerevisiae) the empirical average T_S is instead
very close to the minimum reachable within their ensemble of
randomizations.
Remarkably, chromosomes with higher average replication timing in the
randomized ensemble seem to be more subject to pressure towards
decreasing their average T_S (Supplementary Fig. S11).
This result suggests that the whole replication program may be under
selective pressure for fast replication.
§ DISCUSSION
The core of our results are analytical estimates that capture the
cell-to-cell variability in S-phase duration based on the measurable
parameters of replication kinetics.
Extreme-value statistics has been applied to DNA replication
before <cit.>, but only to the case of
organisms like X. laevis, where origin positions are not
fixed and there is no spatial variability of initiation rates. To
our knowledge, this method has not been applied systematically to
fixed-origin organisms such as yeast.
More specifically, ref. <cit.> explores the case of a perfect
lattice of equally spaced discrete origins with fixed and equal firing rates,
but does not address the role of the variability of inter-origin
replication times due to randomness in firing rates and inter-origin
distance, which is relevant for fixed-origin organisms.
Another difference is that the authors of
ref. <cit.> derive the coalescence
distribution starting from their model, while here we assume a
stretched-exponential, motivated by data analysis.
Since their distribution is more complex (although the model is
simpler), EVD estimate leads to a formula linking the parameters of
the Gumbel distribution to the initiation parameters in the form of an
implicit equation, that needs to be solved numerically.
Conversely, the assumption that the shape of the distribution of T_i
is given (and estimated from data), gives an explicit relationship
between the parameters describing the T_i distribution and the
Gumbel parameters, leading to simpler formulas and applicability to
the case of discrete origins with different spacings and firing
rates. The parameters of the T_i distribution have then to be
related to the microscopic parameters (See Appendix D).
It is important to note that an approach based on extreme-value
distribution theory is general <cit.>. Simulations
(including the model used here) are based on specific assumptions that
are often not simple to test and many models on the market use
slightly different assumptions.
Instead, the extreme-value estimates are robust to different shades of
assumptions used in the models available in the literature, and thus
more comprehensive.
Our estimates reveal universal behavior in the distribution of S-phase
duration. There is a prescribed relation between mean and variance of
S-phase duration, defining a “scaling” behavior for its distribution:
in the Gumbel limit of Eq. <ref>, mean and standard deviation are tied
to the same scale parameter a_n of Appendix D, with
<T_S> ≈ b_n + γ_E a_n and σ(T_S) ≈ π a_n/√6,
γ_E ≈ 0.5772 being the Euler–Mascheroni constant.
Such universality has been observed in cell-cycle
periods and cell size <cit.>.
Qualitatively, we expect the same universality to hold in a regime
when origins have less than 100% efficiencies, and some may not fire
at all during S-phase. Origins that fire only in a fraction of the
realizations are accounted for in our simulations, but they entail
second-neighbour effects that are not currently accounted for in our
estimates.
There are hundreds of origins in a genome, but our
analysis shows that the relevant parameters to capture the overall
behavior are the means and variances of inter-origin distances and
origin firing rates.
Specifically, we find that two regimes describe most of the
phenomenology, and they depend on the values of these effective
variables.
Importantly, the regimes identified here differ from those
identified in ref. <cit.>, which just identifies a critical
spacing between discrete (equally spaced) origins, for which
replication timing starts to be linear with inter-origin distance.
The notion that the last regions to replicate may tend to be
different in every cell (our “extreme-value” regime) has been
proposed already by Hawkins and
coworkers <cit.>. The opposite regime where some
specific regions tend to always replicate last ('bottleneck
region'), has been proposed for mammalian common fragile
sites <cit.>. Such regions of slow replication,
pausing and frequent termination have also been described in
yeast <cit.>.
These studies make it plausible to think that both extreme-value and
bottleneck regimes may apply to yeast, despite our analysis based on
replication kinetics data indicating some pressure towards the
extreme-value regime.
Another important case for what concerns replication termination is
the rDNA locus, which cannot be analyzed in replication kinetics data
based on microarrays / sequencing data due to its repetitive nature
(150 identical copies in yeast). However, the large inter-origin
distances, pseudo-unidirectional replication and epigenetic control
of origin firing in this locus <cit.> make it a good
candidate for the last sequence to replicate in yeast.
Importantly the model used here is similar to a set of previous
studies, which have tested this approach and validated it with
experimental data
<cit.>. Our
analysis of S-phase duration in single cells is generic, and
expected to be robust to variations in model details.
The mutant data sets analyzed here also support the predictive power
of the model in presence of perturbations and parameter changes, and
hence validate the use of the model in a predictive framework.
Our predictions are compatible with the available values for
average S-phase duration, which can be roughly estimated through
flow cytometry <cit.>, and
corresponds well to the values obtained by the model (around 60
minutes for S. cerevisiae). Other yeast studies found smaller
values in other conditions <cit.>, which would be
interesting to study with the model.
Additionally, we provide a prediction for the cell-to-cell variability
of S-phase duration, which is an important step of the cell cycle.
Indeed, completion of replication needs to be coordinated with growth
and progression of the cell cycle
stages <cit.>. Cell-to-cell variability in
replication kinetics makes the S phase subject to inherent
stochasticity.
Experimentally, measuring the cell-to-cell variation of the S-phase duration
is a challenge.
While some studies exist using mammalian (cancer) cell
lines <cit.>, they currently do not have the precision
needed to allow a quantitative match with models.
However, we expect that such measurements will become available in the
near future, thanks to rapidly developing methods of single-cell
biology <cit.>.
Our predictions define some key properties of the replication period
that may be tested with, e.g., single-cell studies in budding yeast,
using the parameters available from replication kinetics studies. In
this model the S phase is (by itself) a “timer”, so its connection
to cell size homeostasis must rely on external
mechanisms <cit.>. S-phase duration has been measured
on single E. coli cells, and found to be unlinked to cell
size <cit.>.
Interestingly, our predictions of S-phase duration and
variability as a function of chromosome copy numbers (Supplementary
Fig. S12) might apply to cancer cell lines with different levels of
aneuploidy <cit.>.
Finally, there is the possibility of applying this framework to
describe relevant perturbations <cit.>. This
could also help elucidate how response to DNA damage affects the
replication timing and its variability across cells.
Intriguingly, we also found evidence of bias towards faster
replication in empirical chromosomes compared to randomized ones.
Thus, our overall findings support the hypothesis of a possible
selective pressure for faster replication, and against bottlenecks.
Other approaches have assumed optimization for faster replication and
looked for optimal origin placement <cit.> or
found other signs of optimality in similar
data <cit.>. Our results are in line with these
findings, and isolate a complementary direction for such optimization.
All these considerations support the biological importance of
replication timing of inter-origin regions and its variability.
However, the sources of the constraints remain an open
question. Clearly, overall replication speed can increase indefinitely
by increasing origin number and initiation rates. However, there are
likely yet-to-be-characterized tradeoffs in these quantities, that
prevent this from happening, and force the system to optimize the
duration of replication in a smaller space of parameters. The
molecular basis for such constraints likely lies at least in part in
the finite resources available for initiation
complexes <cit.>.
We are grateful to Gilles Fischer, Nicolas Agier, Alessandra Carbone
and Renaud Dessalles for useful discussions.
QZ was supported by the LabEx CALSIMLAB, public grant
ANR-11-LABX-0037-01, constituting a part of the “Investissements
d'Avenir” program (reference: ANR-11-IDEX-0004-02).
§ FITTING REPLICATION TIMING DATA FROM EXPERIMENTS USING THE
MODEL
This section describes our fitting procedure based on the model. The
fitted parameters were used in simulations of genome replication
kinetics, giving the distribution of S-phase duration and of the
replication time of one chromosome (Fig. <ref>).
We used flow cytometry (FACS) data to re-normalize replication timing
as follows. If the baseline value of the average DNA copy-number a is
noticeably larger than 1, and/or its plateau value b is noticeably smaller
than 2, we use the formula
y = a + (b-a) (t-T_0)^r / [(t-T_0)^r + (t_c-T_0)^r] θ(t-T_0)
to fit the FACS data, and we normalize the replication timing data by
ϕ_norm(x,t) = 1 + (ϕ(x,t) - a)/(b - a),
where ϕ is the replication probability function <cit.>.
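As a sketch (with all numerical values hypothetical), these two steps read:

```python
import numpy as np
from scipy.optimize import curve_fit

def facs_sigmoid(t, a, b, T0, tc, r):
    # Mean DNA content: y = a + (b-a)(t-T0)^r / ((t-T0)^r + (tc-T0)^r) for t > T0
    dt = np.clip(t - T0, 0.0, None)
    return a + (b - a) * dt ** r / (dt ** r + (tc - T0) ** r)

def normalize_profile(phi, a, b):
    # phi_norm = 1 + (phi - a)/(b - a): maps the measured range [a, b] onto [1, 2]
    return 1.0 + (phi - a) / (b - a)

# t_facs, y_facs: measured time points and mean DNA content (placeholders)
# (a, b, T0, tc, r), _ = curve_fit(facs_sigmoid, t_facs, y_facs,
#                                  p0=(1.1, 1.9, 10.0, 40.0, 2.0))
```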
We used fixed origin locations from the literature and optimized the
fit for the parameters γ, T_0, v and λ_i
iteratively. The objective function was defined as the L2 distance
(the average of squared differences) of the experimental and
theoretical replication probability timing profile (Fig. <ref>),
i.e., as √(∑_i∑_j(ϕ_model(x_i,t_j)-ϕ_exp.(x_i,t_j))^2/(N_xN_t)),
where N_x and N_t are the numbers of the measured loci and time
points respectively.
Initialization of the parameters for the fits was performed as
follows. Firing rate exponent γ and fork velocity v were
initialized at arbitrary values (typically γ at 0, v at 2
kb/min). The start of S phase T_0 was initially set when genome copy
number from the normalized FACs data (from the interval [a,b] to
[1,2]) is first larger than a fixed threshold (e.g. 1.05) and each origin
strength λ_i starts from the value fitted with the time-course
data at this origin.
Fitting was performed with the following iterative rule: 1) for a
parameter x, assume it has a step length Δ_x and a memorized
step length Δ_x^'=2Δ_x; 2) set r=Δ_x/Δ_x^'
and then Δ_x^'=Δ_x; if x+Δ_x gives a better fit than
x, let x=x+Δ_x; otherwise, (i) if |r|=1, update
Δ_x → Δ_x/2, and (ii) if |r|=0.5, set Δ_x
→ -Δ_x; 3)
repeat 2) until the termination condition is satisfied. The strengths
λ_1, λ_2, ..., λ_n of each chromosome are updated
iteratively given γ, v and T_0, and in each iteration one
λ_i is chosen randomly to be updated. T_0 is updated
iteratively given γ and v, and v is updated iteratively given
γ. For γ, we tested some discrete values between 0 and 3.
Supplementary Fig. <ref>a,b indicate the best-fit value of γ
for S. cerevisiae and L. kluyveri,
and Supplementary Fig. <ref>c shows one example of the best fit.
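A literal Python transcription of this single-parameter update rule (the loss function, initial values and tolerance are placeholders) may help parse the prose above:

```python
def fit_parameter(x, dx, loss, tol=1e-6, max_iter=200):
    # Step-halving / sign-flipping rule of Appendix A for one parameter x
    dx_mem = 2 * dx                      # memorized step length
    for _ in range(max_iter):
        r = dx / dx_mem
        dx_mem = dx
        if loss(x + dx) < loss(x):
            x = x + dx                   # better fit: accept the move
        elif abs(r) == 1:
            dx = dx / 2                  # magnitude unchanged since last step: halve
        elif abs(r) == 0.5:
            dx = -dx                     # magnitude was just halved: flip the sign
        if abs(dx) < tol:
            break
    return x
```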
§ ROLE OF CHROMOSOME BOUNDARIES IN REPLICATION TIMING
In some simulations, we used circularized chromosomes for easier
comparison with the analytical estimates, but relative to a circular
chromosome, a linear chromosome has lower symmetry because of the
boundary at both ends. To verify that this assumption does not
qualitatively affect the results, we circularized the empirical
S.cerevisiae chromosomes by linking their ends respectively,
and simulated their replication kinetics with the estimated
parameters. The results (Fig. <ref>) show that the
circularized chromosomes always replicate faster than the linear
chromosomes, but their durations do not differ much (the average
deviation is in all cases less than 15%).
§ DETERMINATION OF THE PARAMETERS Α, Β AND T_0
IN THE FORMULA FOR THE DISTRIBUTION OF T_I
Eq. 3 in the main text, describing the replication timing of one
inter-origin region, contains the parameters α, β and
t_0, which need to be related to the biologically measurable
parameters (inter-origin distance and origin rates). To estimate such
parameters for the distribution of T_i we used two methods. The
first is a fit of all the T_i data taken from the simulation of the
given chromosome, and the second is to fit the specific
T_i data (replication times of the central inter-origin region) extracted
from simulation of a linear chromosomal fragment where inter-origin distances and origin
strengths are sampled from known distributions (different samples for different runs of the simulation).
In this second
method, each run of the simulation is carried out considering
inter-origin distances and origin strengths with the same averages as
the original chromosome.
Both methods give the same distribution for T_i, which agrees
very well with Eq. 3 of the main text (See Fig. <ref>).
We mainly used the second method since it does not depend on origin
configuration of the original chromosome. The detailed procedure is
the following. First, we defined a characteristic distance
d_c = ((γ+1)/<λ> · log(1/(1-x)))^(1/(1+γ)) v,
where x<1 (e.g. 0.99) and assume n_c=min(⌊
d_c/<d>⌋+1,⌊ n/2 ⌋)+1. Then we produced
a linear chromosomal fragment with 2n_c origins, in which two origins are
always located at the ends. Next, we simulated many realizations for
the replication of this chromosome. In each simulation run, we sampled
inter-origin distance d_i, origin strength λ_j and origin
firing time t_f^(j) from
Γ(<d>^2/σ^2(d),<d>/σ^2(d)),
Γ(<λ>^2/σ^2(λ),<λ>/σ^2(λ))
and f(t) = λ_i t^γ θ(t) exp(-λ_i t^(γ+1)/(γ+1)), respectively, where
i∈{1,2,...,2n_c-1} and j∈{1,2,...,2n_c}. The statistics
over different realizations gives the distribution of the replication
time of the central inter-origin region (T_n_c), which was fitted
with Eq. <ref> to obtain α, β and t_0.
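For reference, a sketch of the d_c and n_c computation described above (with illustrative parameter values):

```python
import numpy as np

def fragment_size(mean_d, mean_lam, gamma, v, n, x=0.99):
    # d_c = [ (gamma+1)/<lambda> * log(1/(1-x)) ]^(1/(1+gamma)) * v
    d_c = (((gamma + 1) / mean_lam) * np.log(1 / (1 - x))) ** (1 / (1 + gamma)) * v
    n_c = min(int(d_c / mean_d) + 1, n // 2) + 1
    return d_c, n_c

d_c, n_c = fragment_size(mean_d=30.0, mean_lam=0.02, gamma=1.0, v=2.0, n=20)
```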
§ ANALYTICAL DERIVATION OF AN APPROXIMATE DISTRIBUTION OF S-PHASE
DURATION T_S BASED ON EXTREME VALUE THEORY.
This section gives further details on the analytical calculation for
the extreme-value estimate of the distribution of S-phase duration.
We assume that replication timing of one inter-origin region T_i
obeys the stretched exponential distribution
F(t)=P(T_i<t)=1-e^-α (t-t_0)^β ,
where t⩾ t_0 and α>0. The parameters α,
β and t_0 were obtained as described in the previous
section. We define M_n=max(T_1, T_2,...,T_n). By
taking a_n=1/(α^1/ββ(log n)^1-1/β) and
b_n=(log n/α)^1/β+t_0, and applying the
Fisher-Tippett-Gnedenko theorem, we can prove that
lim_n→∞P((M_n-b_n)/a_n≤
t)=exp(-exp(-t))≜ G(t) ,
where G(t) is the standard Gumbel distribution.
When n is sufficiently large, we can make the approximation
P((M_n-b_n)/a_n≤ t)≈ G(t). If we define t̃=a_n
t+b_n, we have P(M_n≤t̃)≈ G((t̃-b_n)/a_n).
Finally, we can represent the distribution of T_S (=M_n)
approximately as
P(T_S ≤ t) ≈ exp(-exp(-(t-b_n)/a_n))
= exp{-exp[β log n (1-(α/log n)^(1/β) (t-t_0))]}
Here n is the origin number, and α, β and t_0
are connected to the model parameters describing replication
kinetics, v, γ, inter-origin distances (d_1, d_2, ...,
d_n) and origin strengths (λ_1,λ_2,...,λ_n).
We now discuss how α, β and t_0 can be expressed
as functions of simplified parameters by numerically solving some
approximate equations. We consider a “characteristic”
inter-origin region with the distance <d> and origin
strength <λ>, and we assume that the replication
of the inter-origin region is mainly carried out by the forks
originating from the two nearest origins, both of which are typically
activated. Thus we have
T_i ≈ <d>/(2v) + (t_f^l+t_f^r)/2,
where t_f^l and t_f^r are the firing times of the left and the right
origin, respectively. Since t_0 is the minimal replication time
of an inter-origin region and the firing time has zero as a lower bound,
one has
t_0 = min(T_i) = <d>/(2v).
From equation <ref>, we can further obtain
<T_i> ≈ <d>/(2v) + <t_f>
and
σ(T_i) ≈ σ(t_f).
In addition, we have
<T_i> = α^(-1/β) Γ(1/β+1) + t_0,
σ(T_i) = α^(-1/β) √(Γ(2/β+1) - Γ^2(1/β+1)),
<t_f> = ((γ+1)/<λ>)^(1/(γ+1)) Γ((γ+2)/(γ+1)),
and
σ(t_f) = ((γ+1)/<λ>)^(1/(γ+1)) √(Γ((γ+3)/(γ+1)) - Γ^2((γ+2)/(γ+1))).
Based on equations <ref>-<ref>, α and β can be
numerically solved as functions of v, γ, <d> and
<λ>. Our simulations in the EVD regime, using
empirically realistic values of the parameters, are in line with
equations <ref>-<ref>.
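One possible numerical route (a sketch under the two-fork approximation above; SciPy's gamma function and root bracketing are used, and the bracket endpoints are heuristic):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma as G

def alpha_beta_t0(mean_d, mean_lam, gamma_exp, v):
    t0 = mean_d / (2 * v)
    g = gamma_exp
    scale = ((g + 1) / mean_lam) ** (1 / (g + 1))
    m_tf = scale * G((g + 2) / (g + 1))                         # <t_f>
    s_tf = scale * np.sqrt(G((g + 3) / (g + 1)) - G((g + 2) / (g + 1)) ** 2)
    # the coefficient of variation of T_i - t0 fixes beta ...
    cv = lambda b: np.sqrt(G(1 + 2 / b) - G(1 + 1 / b) ** 2) / G(1 + 1 / b)
    beta = brentq(lambda b: cv(b) - s_tf / m_tf, 0.3, 50.0)
    # ... and the mean then fixes alpha: alpha^(-1/beta) Gamma(1+1/beta) = <t_f>
    alpha = (G(1 + 1 / beta) / m_tf) ** beta
    return alpha, beta, t0
```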
§ SUPPLEMENTARY FIGURES AND TABLES
|
http://arxiv.org/abs/1701.07479v3 | 20170125203858 | Epidemiological modeling of the 2005 French riots: a spreading wave and the role of contagion | [
"Laurent Bonnasse-Gahot",
"Henri Berestycki",
"Marie-Aude Depuiset",
"Mirta B. Gordon",
"Sebastian Roché",
"Nancy Rodriguez",
"Jean-Pierre Nadal"
] | physics.soc-ph | [
"physics.soc-ph",
"cs.SI"
] |
As a large-scale instance of dramatic collective behavior, the 2005 French riots started in a poor suburb of Paris, then spread in all of France, lasting about three weeks. Remarkably, although there were no displacements of rioters, the riot activity did travel. Access to daily national police data
has allowed us to explore the dynamics of riot propagation. Here we show that an epidemic-like model, with just a few parameters and a single sociological variable characterizing neighborhood deprivation, accounts quantitatively for the full spatio-temporal dynamics of the riots. This is the first time that such data-driven modeling involving contagion both within and between cities (through geographic proximity or media) at the scale of a country, and on a daily basis, is performed. Moreover, we give a precise mathematical characterization to the expression “wave of riots”, and provide a visualization of the propagation around Paris, exhibiting the wave in a way not described before. The remarkable agreement between model and data demonstrates that geographic proximity played a major role in the
propagation, even though information was readily available everywhere through media. Finally, we argue that our approach gives a general framework for the modeling of the dynamics of spontaneous collective uprisings.
§ INTRODUCTION
Attracting worldwide media attention, France experienced during the autumn of 2005 the longest and most geographically extended riot in the contemporary history of Europe<cit.>. The riots involved no political claims and no leadership, and were mainly confined to the “banlieues” (suburbs of large metropolitan cities), where minority groups are largely confined. Contrary to the London “shopping riots” of 2011, rioting in France essentially consisted of car destruction and confrontations with the police. The triggering event took place in a deprived municipality to the north-east of Paris: on October 27, 2005, two youths died when intruding into a power substation while trying to escape a police patrol. Inhabitants spontaneously gathered on the streets in anger. Notwithstanding the dramatic nature of these events,
the access to detailed police data<cit.>, together with the extension in time and space – three weeks, more than 800 municipalities hit across all of France –, provide an exceptional opportunity for studying the dynamics of a large-scale riot episode. The present work aims at analyzing these data through a mathematical model that sheds new light on qualitative features of the riots as instances of collective human behavior<cit.>.
Several works<cit.>
have developed mathematical approaches to rioting dynamics, and their sociological implications have also been discussed<cit.>.
The 1978 article of Burbeck et al. <cit.> pioneered quantitative epidemiological modeling to study the dynamics of riots. Very few works followed the same route, but similar ideas have been applied to other social phenomena such as the spreading of ideas or rumors<cit.> and the viral propagation of memes on the Internet<cit.>. This original epidemiological modeling was however limited to the analysis within single cities, without spatial extension.
From the analysis of various sources, previous historical and sociological studies have discussed riot contagion from place to place
<cit.>.
However, few studies aim at quantitatively describing the spatial spread of riots, with two notable exceptions.
Studies of the 2011 London riots<cit.> describe the displacements of rioters from neighborhoods to neighborhoods. In contradistinction with the London case, media reports and case studies<cit.> show that the 2005 French rioters remained localized in a particular neighborhood of each municipality. However, the riot itself did travel.
Conceptualizing riots as interdependent events, Myers makes use of
the event history approach<cit.> to study the US ethnic riots over a period of several years. This analysis exhibits space-time correlations showing that riots diffused from city to city<cit.>. There, each rioting episode is considered as a single global event (whether the city “adopts a riot” or does not), and measures of covariances allow one to relate the occurrence of a riot in a city at a given time with the occurrence of riots in other cities at previous times.
This approach however does not describe the internal dynamics of a riot (its rise and fall within each city), nor the precise timing of the spread from city to city. Of course, going beyond this framework requires much more detailed data.
Our dataset, at a level of detail hitherto unavailable, allows us to provide the first data-driven modeling of riot contagion from city to city at the level of a whole country, coupled with contagion within each city, and with a time resolution of a day. Our work, of a different nature than that of the econometric one,
takes its root in the epidemiological approach introduced in the seminal work of Burbeck et al <cit.>, and is in the spirit of recent continuous spatio-temporal data-driven approaches in social science<cit.>.
Here we extend the notion of epidemiological propagation of riots by including spatial spreading, in a context where there is no displacement of rioters. Remarkably, the high quality of our results is achieved within the sole epidemiological framework, without any explicit modeling of, e.g., the police actions (in contrast with the 2011 London riots modeling<cit.>). For the first time, the present study provides a spatio-temporal framework that shows that, following a specific triggering event, propagation of rioting activity is analogous (but for some specificities) to the continuous propagation of epidemics.
More precisely, we introduce here a compartmental epidemic model of the Susceptible-Infected-Recovered (SIR) type<cit.>. Infection takes place through contacts within cities as well as through other short- and long-range interactions arising from either interpersonal networks or media coverage<cit.>. These influence interactions are the key to riots spreading over the discrete set of French municipalities. In particular, diffusion based on geographic proximity played a major role in generating a kind of riot wave around Paris which we exhibit here. This is substantiated by the remarkable agreement between the data and the model at various geographic scales. Indeed, one of our main findings is that less than ten free parameters together with only one sociological variable (the size of the population of poorly educated young males) are enough to accurately describe the complete spatio-temporal dynamics of the riots.
The qualitative features taken into account by our model – the role of a single triggering effect, a “social tension” buildup, a somewhat slower and rather smooth relaxation, and local as well as global spreading –, are common to many riots. This suggests that our approach gives a general framework for the modeling of the spatio-temporal dynamics of spontaneous collective uprisings.
§ RESULTS
§.§ The 2005 French riots dataset
We base our analysis here on the daily crime reports<cit.> of all incidents recorded by the French police at the municipalities (corresponding to the French “communes”) under police authority, which cover municipalities with a population of at least 20,000 inhabitants. Such data, on the detailed time course of riots at the scale of hours or days, and/or involving a large number of cities, are rare. In addition, as an output of a centralized national recording procedure applied in all national police units operating at the local level, the data are homogeneous in nature – and not subject to the selection or description biases which are frequent with media sources<cit.>. These qualities endow these data with a unique scientific value. We adopt a simple methodology for quantifying the rioting activity: we define as a single event any rioting-like act, as listed in the daily police reports, leaving aside its nature and its apparent intensity. Thus, each one of “5 burnt cars”
, “police officers attacked with stones” or “stoning of firemen”, is labeled as a single event. We thus get a dataset composed of the number of riot-like events for each municipality, every day from October 26 to December 8, 2005, a period of 44 days which covers the three weeks of riots and extends over two weeks after.
Figure <ref>a (left panel) shows at its top two typical examples of the time course of the number of events for municipalities (see also the plots for the 12 most active Île-de-France municipalities, Supplementary Fig. S1). A striking observation is that there is a similar up-and-down dynamics at every location, showing no rebound or, if any, one hardly distinguishable from the obvious stochasticity in the data. This pattern is similar to the one observed for the US ethnic riots<cit.>.
In addition, as illustrated on Fig. <ref> and Supplementary Fig. S3, we observe the same pattern across different spatial scales (municipalities, départements, régions, all country – see Materials and Methods for a description of these administrative divisions). Moreover, this pattern shows up clearly despite the difference in amplitudes (see also
section Fitting the data: the wave across the whole country).
This multi-scale property suggests an underlying mechanism for which geographical proximity matters. Finally, the rioting activity appears to be on top of a background level: as can be seen on Fig. <ref>, the number of events relaxes towards the very same level that it had at the outset of the period. Actually, in the police data, one cannot always discriminate rioting facts from ordinary criminal ones, such as the burning of cars unrelated to collective uprising. For each location, we assume that the stationary background activity corresponds to this “normal” criminal activity.
§.§ Modeling framework
We now introduce our modeling approach. Section Materials and Methods provides the full model and numerical details, as well as various quantitative statistical analyses for the fits that follow. The model features presented below are based on the analysis at the scale of municipalities. However, since aggregated data at the scale of départements present a pattern similar to the data of municipalities, we also fit the model at the département scale, as if the model assumptions were correct at the scale of each département. A “site”, below, is either a municipality or a département depending on the scale considered.
As the rioting activities are described by a discrete set of events, we assume an underlying point process<cit.> characterized by its mean value. Assuming no coupling between the dynamics of the rioting and criminal activities (see Materials and Methods for a discussion), the expected number of events at each site k (k=1,...,K, K being the number of sites) is the sum of the mean (time independent) background activity λ_b k and of the (time dependent) rioting activity, λ_k(t). In fitting the model to the data, we take the background activity λ_b k as the average number of events at the considered site over the last two weeks of our dataset. Assuming Poisson statistics (which appears to be in good agreement with the data, see Materials and Methods), the means λ_k(t) fully characterize the rioting activities. We make the assumption that this number of events λ_k(t) is proportional to the local number of rioters, I_k(t):
λ_k(t) = α I_k(t).
We model the coupled dynamics of the set of 2 × K variables, the numbers
I_k(t) of rioters (infected individuals in the terminology of the SIR model) and the numbers S_k(t) of individuals susceptible to join the riot, by writing an epidemic SIR model<cit.> in a form suited for the present study, as explained below.
This gives the coupled dynamics of the λ_k(t) and of the associated variables,
σ_k(t)≡α S_k(t).
These dummy variables can be seen as the reservoirs of events (the maximum expected numbers of events that may occur from time t onwards).
We fit the model to the data by considering a discrete time version of the equations (events are reported on a daily basis), and by optimizing the choice of the model free parameters with a maximum likelihood method.
The result of the fit is a set of K smooth curves (in time), λ_k(t), k=1,.., K. For each location k, and each time t, the corresponding empirical data point has to be seen as a probabilistic realization of the Poisson process whose mean is λ_k(t).
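As a toy illustration of this observation model (a sketch with placeholder arrays, not the authors' code), synthetic daily counts can be drawn from the fitted means:

```python
# Sketch: one Poisson realization of the daily event counts; lam_kt
# (K x T array standing for the fitted lambda_k(t)) and lam_b (length-K
# baseline rates) are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(seed=0)
lam_kt = np.zeros((3, 44)); lam_b = np.full(3, 0.2)   # placeholder values
x_synth = rng.poisson(lam_kt + lam_b[:, None])        # synthetic data array
```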
Before going into the modeling details and the fits, we now give the main characteristics of the proposed SIR model. We assume homogeneous interactions within each municipality (a hypothesis justified by the coarse-grained nature of the data, and by the absence of displacements of rioters), and influences between sites. The model thus belongs to the category of metapopulation epidemic models <cit.>. Motivated by the relative smoothness of the time course of events, we make the strong assumption that, at each site, there is a constant rate at which rioters leave the riot. This parameter aggregates the effects of different factors – arrests, stringent policing, other sources of deterrence, fear, fatigue, etc. –, none of them being here modeled explicitly. In addition, since there are almost no rebounds of rioting activity, we assume that there is no flux from recovered (those who left the riot) to susceptible (and thus we do not have to keep track of the number of recovered
individuals).
In the epidemic of an infectious disease, contagion typically occurs by dyadic interactions, so that the probability for a susceptible individual to be infected is proportional to the fraction of infected individuals – leading to equations written in terms of the fractions of infected and susceptible individuals. In the present context, contagion results from a bandwagon effect<cit.>. The probability of becoming a rioter is thus a function of the number of rioters, hence of the number of events given the above hypothesis. This function is non-linear since, being a probability, it must saturate at some value (at most 1) for large rioting activities.
§.§ Single site epidemic modeling
As a first step, following Burbeck et al., we ignore interactions between sites, and thus specify the SIR model for each site separately. We consider here one single site (and omit the site index k in the equations). Before a triggering event occurs at some time t_0, there is a certain number S_0 > 0 of susceptible individuals but no rioters. At t_0 there is an exogenous shock leading to a sudden increase in the I population, hence in λ, yielding an initial condition λ(t_0)=A >0. From then on, the rioting activity at a single (isolated) site evolves according to:
dλ(t)/dt = -ω λ(t) + β σ(t) λ(t),
dσ(t)/dt = -β σ(t) λ(t),
where β is a susceptibility parameter. Here we work within a linear approximation of the probability to become infected, which appears to provide good results for the single site modeling. The condition for the riot to start after the shock is that the reproduction number<cit.> R_0 = βσ(t_0)/ω is greater than 1. In such a case, from t=t_0 onward, the number of infected individuals increases, passes through a maximum and relaxes back towards zero.
We obtain the initial condition σ_0=σ(t_0)=α S_0 from the fitting procedure. Thus for each site, we are left with five free parameters to fit in order to best approximate the time course of the rioting events: ω, β, t_0, A and σ_0.
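As an illustration, a minimal numerical sketch of this single-site system (our Python code with placeholder parameter values; a simple explicit Euler scheme is assumed, not the authors' implementation):

```python
# Sketch: integrate the single-site (lambda, sigma) dynamics with an Euler step.
import numpy as np

def single_site(omega, beta, t0, A, sigma0, days=44, dt=0.1):
    n = int(days / dt) + 1
    t = np.linspace(0.0, days, n)
    lam = np.zeros(n)                 # lambda(t) = 0 before the shock
    sig = np.full(n, sigma0)          # sigma(t) = sigma_0 before the shock
    for i in range(n - 1):
        if t[i + 1] < t0:
            continue
        if t[i] < t0 <= t[i + 1]:     # exogenous shock at t0: lambda jumps to A
            lam[i + 1] = A
            continue
        lam[i + 1] = lam[i] + dt * (-omega * lam[i] + beta * sig[i] * lam[i])
        sig[i + 1] = sig[i] + dt * (-beta * sig[i] * lam[i])
    return t, lam, sig

# Example with R_0 = beta * sigma0 / omega = 2 > 1, so a riot develops:
t, lam, sig = single_site(omega=0.25, beta=0.01, t0=2.0, A=5.0, sigma0=50.0)
```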
By showing examples at different scales, Fig. <ref> (b, red curves) illustrates the remarkable quality of the resulting fits (see also Supplementary Fig. S1). The obvious limitation is that fitting all the 853 municipalities present in the dataset amounts to determining 853 × 5 = 4265 free parameters. The fit is very good but meaningless (overfitting) for sites with only one or two events. In addition, these single site fits cannot explain why the riot started on some particular date at each location. Fitting the single site model requires one to assume that there is one exogenous specific shock at a specific time at each location, whereas the triggering of the local riot actually results from riot events that occurred earlier elsewhere. Nevertheless, we see that everywhere the patterns are compatible with an epidemic dynamics and that through the use of the model it is possible to fill in missing data and to smooth the data (filtering out the noise). As a result of this
filtering, the global pattern of propagation becomes more apparent. Indeed, looking at the Paris area, one observes a kind of wave starting at Clichy-sous-Bois municipality, diffusing to nearby locations, spreading around Paris, and eventually dying out in the more wealthy south-west areas (see Supplementary Video 1).
§.§ Modeling the riot wave
We now take into account the interactions between sites, specifying the global metapopulation SIR model.
Among the K sites under consideration, only one site k_0, the municipality of Clichy-sous-Bois (département 93 when working at département scale), undergoes a shock at a time t_0, October 27, 2005. To avoid a number of parameters which would scale with the number of sites, we choose here all free parameters to be site-independent (in Materials and Methods we give a more general presentation of the model). The resulting system of 2× K coupled equations writes as follows: for t>t_0, for k=1,...,K,
dλ_k(t)/dt = -ω λ_k(t) + σ_k(t) Ψ(Λ_k(t)),
dσ_k(t)/dt = -σ_k(t) Ψ(Λ_k(t)).
Here ω is the site-independent value for the recovering rate.
For the interaction term we consider that at any site k the probability to join the riot is a function Ψ of a quantity Λ_k(t), the global activity as “seen” from site k. This represents how, on average, susceptible individuals feel concerned by rioting events occurring either locally, in neighboring cities, or anywhere else in France.
Whatever the means by which the information on the events is received (face-to-face interaction, phone, local or national media – TV or radio broadcasts, newspapers –, digital media, ...),
we make the hypothesis that the closer the events (in geographic terms), the stronger their influence.
We thus write that Λ_k(t) is a weighted sum of the rioting activities occurring in all sites,
Λ_k(t) = ∑_j W_kj λ_j(t),
where the weights W_kj
depend on the distance between sites k and j.
A simple hypothesis would have been to assume nearest-neighbor contagion.
We have checked that such a scenario fails to reproduce the riots dynamics, which can be easily understood: the riot would not propagate from areas with deprived neighborhoods to other similar urban areas whenever separated by cities without poor neighborhoods.
We rather consider the weights as given by a decreasing
function of the distance. We tested several ways of choosing this function
and obtained the best results for two types of parameterization. One is a power law decay with the distance, motivated by several empirical studies of interactions relying on modern technologies<cit.>. The second option is the sum of an exponential decay and of a constant term. Both involve two parameters, a proximity scale d_0 and, respectively, the exponent δ and the strength ξ of the constant term.
For the (site independent) function Ψ(.), we consider either its linear approximation, writing
Ψ(Λ_k(t)) = β Λ_k(t) = β ∑_j W_kj λ_j(t),
with the susceptibility β as a site-independent free parameter, or various non-linear cases, involving up to four parameters.
Lastly, we have to make the crucial choice of the initial values σ_k,0=σ_k(t_0), specific to each site. By definition, they must be proportional to the size of the initial susceptible population. We make the hypothesis that the latter scales with the size of a population defined by a sociological specification. Thus we assume
σ_k,0 = ζ_0 N_k,
where ζ_0 is a site-independent free parameter, and N_k is the size of a reference population provided
for each municipality by INSEE, the French national statistics and economic studies institute. The results we present below take as reference the population of males aged between 16 and 24 out-of-school with no diploma.
We find this population, whose size can be viewed as an index of deprivation, to provide the best results when comparing the model fits done with different reference populations (see Materials and Methods).
This is in line, not only with the fact that riots started and propagated in poor neighborhoods, but also with the fact that most rioters where males, young, and poorly educated<cit.> – features common to many urban riots<cit.>. One should note that, once we have chosen this specific reference population – hence setting the susceptible population in deprived neighborhoods –, the hypotheses on the structure of the interactions implicitly assume interactions between populations with similar socio-economic characteristics. In particular, a distance-independent term in the interaction weights may correspond to proximity primarily perceived in terms of cultural, socio-economic characteristics. Our model thus allows to combine spatial and socio-economic characteristics, which are both known to potentially affect riot contagion<cit.>.
Finally, for the whole dynamics (with a number of coupled equations ranging from 186 up to 2560, depending on the case, see below), in the simplest linear case we are left with only six free parameters: ω, A, ζ_0, d_0, δ or ξ, and β.
In the non-linear case, we have five parameters as for the linear case, ω, A, ζ_0, d_0, δ or ξ, and, in place of β, up to four parameters depending on the choice of the function Ψ. In the following, we will also allow for specific β values at a small number of sites, adding as many parameters.
The above model, in the case of the linear approximation, makes links to the classical spatially continuous, non-local, SIR model<cit.> (see section Links to the original spatially continuous SIR model in Materials and Methods, and Supplementary Videos 3 and 4). In dimension one, when the space is homogeneous, we know<cit.> that traveling waves can propagate, quite similar to the way the riot spread around Paris as exhibited in the previous section. The new class of models we have introduced is however somewhat different and more general, and raises several open mathematical questions. The next section shows the wave generated by our global model and the fit to the data.
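As an illustration of the full scheme, a compact sketch of the coupled system with power-law weights and linear Ψ (our Python code, not the authors'; the distance matrix, reference populations and parameter values are placeholders):

```python
# Sketch: metapopulation riot dynamics, Euler-integrated, daily values recorded.
import numpy as np

def simulate_wave(dist, N_ref, omega, beta, zeta0, d0, delta, k0, A,
                  days=44, dt=0.1):
    W = (1.0 + dist / d0) ** (-delta)              # W_kk = 1 since dist(k,k) = 0
    lam = np.zeros(dist.shape[0])
    sig = zeta0 * np.asarray(N_ref, dtype=float)   # sigma_{k,0} = zeta_0 N_k
    lam[k0] = A                                    # single shock at site k0
    daily = [lam.copy()]
    steps_per_day = int(round(1.0 / dt))
    for step in range(int(days / dt)):
        flux = beta * sig * (W @ lam)              # sigma_k * Psi(Lambda_k), linear Psi
        lam = lam + dt * (-omega * lam + flux)
        sig = sig - dt * flux
        if (step + 1) % steps_per_day == 0:
            daily.append(lam.copy())
    return np.array(daily)                         # (days+1) x K expected counts
```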
§.§ Fitting the data: the wave around Paris
We first focus on the contagion around Paris, characterized by a continuous dense urban fabric with deprived neighborhoods. There are 1280 municipalities in Île-de-France. Among the ones under police authority (a total of 462 municipalities, for all of which we have data), 287 are mentioned for at least one riot-like event. For all the other municipalities, which are under “gendarmerie” authority (a military status force with policing duties), we have no data. Since their population size is small, we expect the associated numbers of riot events to be very small if not absent, so that these sites have little influence on the whole dynamics. We choose the free parameters with the maximum likelihood method, making use of the available data, i.e. the 462 municipalities. However, the model simulations take into account all the 1280 municipalities. Results are presented for a power law decrease of the weights and a non-linear function Ψ characterized by 3 parameters
(see Materials and Methods for a quantitative comparison of different model variants). Thus, we have here a total of 8 free parameters: ω, A, ζ_0, d_0, δ, in addition to three for the non-linear function.
Figures <ref>, <ref> and the Supplementary Video 2 illustrate the main results. Figure <ref> compares the model and the data on four aspects: time course in each département (a), amplitude of the events (b), date at which the number of events is maximum (c), and spatial distribution of the riots (d). The global model with a single shock correctly reproduces the up-and-down pattern at each location, as illustrated on Fig. <ref> at département scale. One can note the preservation of the smooth relaxation at each site, despite the influence of other (still active) sites. This can be understood from the SIR dynamics: at a given location, the relaxation term (-ωλ_k) dominates when there is no more enough susceptible individuals, so that the local dynamics becomes essentially independent of what is occurring elsewhere. Quite importantly, these local patterns occur at the correct times. One sees that the date of maximum activity
spreads over several days and varies across locations, which reflects the propagation of the riot.
On the Supplementary Video 2 one can see the wave generated by the model. Figure <ref>b shows a sketch of this wave as a timeline with one image every 4 days – which corresponds to the timescale found by the parameter optimization, 1/ω∼ 4 days.
For comparison, we show side by side, Fig. <ref>a, the timeline built from the data which have been smoothed making use of the single site fits.
One can see the good agreement, except for a few locations where the actual rioting activity occurs earlier than predicted by the global model. A most visible exception is Argenteuil municipality (north-west of Paris on the map, see Fig. <ref>, second image from the top), where the Minister of the Interior made a speech (October 25) perceived as provocative by the banlieues residents. This could potentially explain the faster response to the triggering event.
The calibrated model gives the time course of the expected number of events in all municipalities, including those under “gendarmerie” authority for which we do not have any data. For all of the latter, the model predicts a value remaining very small throughout the studied period, except for one, the municipality of Fleury-Mérogis (see Fig. <ref>d, South of the map). Remarkably, searching in the media coverage, we found that a kindergarten had been burnt in that municipality at that period of time (Nov. 6).
§.§ Fitting the data: the wave across the whole country
We now show that the same model reproduces the full dynamics across the whole country. We apply our global model considering each one of the départements of metropolitan France (except Corsica and Paris, hence 93 départements) as one homogeneous site – computing at municipality scale would be too demanding (more than 36,000 municipalities). The Materials and Methods section details the comparison between various model options. We present here the results for the model version making use of the linear approximation, with
9 free parameters: ω, A, ζ_0, d_0, ξ, the same susceptibility β everywhere except for three different values, for the départements 13, 62 and 93. As for the wave around Paris, the resulting fit is very good, as illustrated on Fig. <ref> (see also Supplementary Fig. S3). Figures <ref>a and <ref>b show the results for the 12 most active départements. Figure <ref>c compares model and data on the total number of events, and Fig. <ref>d on the date of the maximum activity. For the latter, the data for the Île-de-France municipalities (Fig. <ref>c) are reported. One sees that the wave indeed spread over all France, with the dynamics in Paris area essentially preceding the one elsewhere.
Remarkably, one can see the effect of the riot wave even where few rioting events have been recorded.
The data exhibit a concentration of (weak) activities (Fig. <ref>a), a pattern which would not be expected in case of independent random events. The epidemiological model predicts these minor sites to be hit by the wave, with a small amplitude and at the correct period of time. This is apparent on Fig. <ref>b and can be shown to be statistically significant (see Materials and Methods and Supplementary Fig. S4 for more details).
Finally, we validate here the hypothesis that it is the number, and not the proportion, of individuals (susceptible individuals, rioters) that matters. The very same model, but with densities and not numbers, yields a much less good fit (see Materials and Methods). This comes as a quantitative confirmation of the hypothesized bandwagon effect, in line with previous literature<cit.>.
§ DISCUSSION
Studying the dynamics of riot propagation, a dramatic instance of large-scale social contagion, is difficult due to the scarcity of data. The present work takes advantage of the access to detailed national police data on the 2005 French riots that offer both the timescale of the day over a period of 3 weeks, and the geographic extension over the country. These data exhibit remarkable features that warrant a modeling approach. We have shown that a simple parsimonious epidemic-like model combining contagion both within and between cities, allows one to reproduce the daily time course of events, revealing the wave of contagion. The simplest model version with only 6 parameters already accounts for the wave very well, and more elaborated versions with about 10 parameters account for even finer details of the dynamics. A crucial model ingredient is the choice of a single sociological variable, taken from the census statistics as a proxy for calibrating the size of the susceptible population. It shows that the
wave propagates in an excitable medium of deprived neighborhoods.
It is interesting to put in contrast the results obtained here with a model where homogeneous weights are independent of the geographic distance, which we can consider as a null hypothesis model with regards to the geographic dependency. As discussed in Materials and Methods and illustrated on Supplementary Fig. S5, such a hypothesis fails to produce a wave, and, more unexpectedly, cannot account for the amplitudes of the riot. This confirms that diffusion by geographic proximity is a key underlying mechanism, and points towards the influence on the riots breadth of the concentration of urban areas with a high density of deprived neighborhoods (as it is the case for the départements of Île-de-France, 77, 78, 91, 92, 94 and 95). Thus, one can conclude that, having the outbreak location surrounded by a dense continuum of deprived neighborhoods made the large-scale contagion possible.
What lesson on human behavior can we draw from our analysis?
First, as we just indicated, “geography matters”<cit.>: despite the modern communication media, physical proximity is still a major feature in the circulation of ideas or behaviors, here of rioting. Second, strong interpersonal ties are at stake for dragging people into actions that confront social order. The underlying interpretation is that interpersonal networks are relevant for understanding riot participation. Human behavior is a consequence not only of individuals' attributes but also of the strength of the relation they hold with other individuals<cit.>.
Strong interpersonal connections to others who are already mobilized draw new participants into particular forms of collective action such as protest, and identity (ethnic or religious based) movements<cit.>. Third, concentration of socio-economic disadvantage facilitates formation of a sizeable group and therefore involvement in destruction: the numbers of rioters in the model (rather than proportions) can be interpreted as an indirect indication of risk assessment before participating in a confrontation with the police<cit.>. From this viewpoint, rioters seem to adopt a rational behavior and only engage in such event when their number is sufficient.
The question of parsimony is of the essence in our modeling approach: an outstanding question was to understand whether a limited number of parameters might account for the observed phenomena at various scales and in various locations. We answer this question positively here, thus revealing the existence of a general mechanism at work: general, since (i) the model is consistent with what has occurred at each location hit by the riot, and (ii) a similar up-and-down pattern is observed for the US ethnic riots in different cities<cit.>, suggesting that this process is indeed common to a large class of spontaneous riots. The wave we have exhibited has a precise meaning supported by the mathematical analysis. Indeed, it is generated by a single triggering event, with a mechanistic-like dynamics giving to the ensemble of local riots a status of a single global episode occurring at the scale of the country, with a well-defined timescale for the propagation.
Whether an initially local riot initiates a wave, and, if so, what is its geographic extension, depend on conditions similar to those at work for disease propagation: a high enough density of susceptible individuals, a suitable contact network and large enough susceptibilities. The 2005 riot propagation from place to place after a single shock is reminiscent of the spreading of other riots, such as the food riots in the late eighteenth century in the UK<cit.>, or the local propagations during the week of riots in the US in reaction to Martin Luther King's assassination. The latter series of riots has been coined a “wave within a wave”, the larger wave corresponding to the series of US ethnic riots from 1964 to 1971<cit.>. However, this larger `wave' does not appear to be of the same nature as the traveling wave discussed here. Indeed, first, most of the riots in this long time period in the US have each their own triggering event. Second, these events are separated
one from another by large time gaps and are therefore discontinuous, whereas we describe the continuous epidemiological spreading by a wave. To discuss such series of riots over long periods of time, we note that there is no conceptual difficulty in extending our model to larger timescales – by adding a weak flux from recovered to susceptible individuals, and by dealing with several shocks –, although one would also have to take into account group identity changes and the effect of policies on structural characteristics of cities. However, the main issue here is rather the access to a detailed set of data.
In any case, the modeling approach introduced here provides a generative framework, different from the statistical/econometric approach, that may be adapted to the detailed description of the propagation of spontaneous collective uprisings from a main triggering event – notably, the interaction term in our SIR model can be modified to include time delays (time for the information to travel), to take into account time integration of past events, or to be also based on non-geographic criteria (e.g. cultural, ethnic, socio-economic similarity features). We believe that such extensions will lead to interesting developments in the study of spreading of social behaviors.
§ MATERIALS AND METHODS
§.§ French administrative divisions
The three main French administrative divisions are: the “commune”, which we refer to as municipality in the paper (more than 36,000 communes in France); at a mid-level scale the “département”, somewhat analogous to the English district (96 départements in Metropolitan France, labeled from 1 to 95, with 2A and 2B for Corsica); the “région” aggregating several neighboring départements (12 in Metropolitan France, as of 2016, excluding Corsica). At a given level, geographic and demographic characteristics are heterogeneous. The typical diameter of a département is ∼ 100 km, and the one of a région, ∼ 250 km.
There are two national police forces, the “police” and the “gendarmerie” (a police force whose agents have a military status, reporting to the Ministry of the Interior and in charge of policing the rural parts of the country). Most urbanized areas (covering all municipalities with a population greater than 20,000) are under police authority. The more rural ones are under gendarmerie authority. The available data for the present study only concern the municipalities under police authority, except Paris, for which we lack data (but which was not much affected by the riots).
The full list of the municipalities is available on the French government website,
<https://www.data.gouv.fr/fr/datasets/competence-territoriale-gendarmerie-et-police-nationales/>.
§.§ Dataset
From the source to the dataset.
The present analysis is based on the daily crime reports of all incidents of civil unrest reported by the French police at the municipalities under police authority<cit.> (see above).
We have been working with this raw source, that is the set of reports as transmitted by the local police departments, before any formatting or recoding by the national police statistical unit.
The daily reports are written in natural language, and have been encoded to allow for statistical treatment. From the reports we selected only facts related to urban violence. Some facts are reported more than once (a first time when the fact was discovered, and then one or two days later e.g. if the perpetrators have been identified). We carefully tried to detect and suppress double counting, but some cases may have been missed.
In the police data, incidents in relation with the riots are mostly cases of vehicles set on fire (about 70%), but also burning of public transportation vehicles, public buildings, of waste bins, damages to buses and bus shelters, confrontations between rioters and police, etc. Facts have been encoded with the maximum precision: day and time of the fact, who or what was the target, the type of damage, the number and kind of damaged objects and the number and quality of persons involved, whenever these details are mentioned in the report. In the present work, as explained in the main text, from these details, we compute a daily number of events per municipality. We generated a dataset for these events (with a total of 6877 entries concerning 853 municipalities).
There are a few missing or incomplete data – notably in the nights when rioting was at its maximum, as the police was overwhelmed and reported only aggregated facts, instead of details city per city.
Note that if one has, e.g., a number of burnt cars only at the département scale, one cannot know the corresponding number of events, since each one of these events at municipality scale corresponds to an unknown number of burnt cars. Hence, if for a particular day the police report gives the information only aggregated at département scale, one cannot even make use of it when modeling at this scale. Quantitatively, for the analysis of the events in Île-de-France, working with the 462 municipalities under police authority, there is 1.6% of missing data (334 out of 462 × 44 days = 20,328 data values). For the analysis at the scale of the 93 départements, there is 0.2% of missing values (10 out of 93 × 44 = 4092).
Datasets for future works.
In addition, we have also built two other datasets. From the same police source, we built a dataset for the arrests (2563 entries), that in forthcoming work will serve to characterize the rioters as well as to investigate the deterrent effect of arrests on the riot dynamics. In future work we also plan to explore the rioting events beyond the sole number of events as studied here. One expects to see what has been for instance the role, if any, of curfews and other deterrent effects (in space and time). We also plan to study whether or not the intensity per event smoothly relaxes like the number of events itself. From both local newspapers and national TV and radio broadcasts, we built a specific dataset of media coverage. In ongoing work we extend our modeling framework by considering the coupling between the dynamics of riot events and the media coverage.
§.§ Background activity
The rioting activity appears to be above a constant level which most likely corresponds to criminal activities (on average, of the order of 100 vehicles are burnt every day in France, essentially due to criminal acts not related to collective uprising). Since this background activity has the same level before and after the riot, we assumed that the dynamics of the riot and of the criminal activities are independent. In addition, we also considered alternative models where the background activity and the rioting activity would be coupled, the background activity being considered as an equilibrium state, and the riot as a transient excited state. Such models would predict an undershooting of the activity just after the end of the riot – more exactly a relaxation with damped oscillations –, but the data do not exhibit such behavior.
For the fits, the background activity λ_b is taken as the mean activity over the last two weeks in our dataset (November 25 to December 8), period that we can consider as the tail of the data, for which there is no longer any riot activity (see Fig. <ref>). For the sites with a non zero number of events in the tail, we observe that this baseline rate is proportional to the size of the reference population chosen for calibrating the size of the susceptible population. For the sites where the number of events in the tail is either zero or unknown (which is the case for a large number of small municipalities, in particular the ones under gendarmerie authority), one needs to give a non zero value to the corresponding baseline rate in order to apply the maximum likelihood method (see below). We estimated it from the size of the reference population (set as 1 when it is 0), using the latter proportionality coefficient (with a maximum value of λ_b set to one over the length of
the tail, i.e., 1/14).
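In code, this baseline estimation can be sketched as follows (our illustration, not the authors' code; coeff stands for the proportionality coefficient mentioned above and is a placeholder):

```python
# Sketch: baseline rate = mean activity over the 14-day tail; for silent or
# unobserved sites, estimate from the reference population, capped at 1/14.
import numpy as np

def baseline_rates(X_tail, N_ref, coeff):
    lam_b = X_tail.mean(axis=1)                 # tail: Nov 25 - Dec 8 counts
    silent = lam_b == 0.0
    pop = np.maximum(N_ref[silent], 1)          # population size set to 1 if 0
    lam_b[silent] = np.minimum(coeff * pop, 1.0 / 14.0)
    return lam_b
```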
Statistical tests of the Poisson hypothesis are provided below, paragraph Poisson noise assumption, Stationary tails statistics.
§.§ Epidemiological modeling: Single site model
We detail here the compartmental SIR model when applied to
each site separately (each municipality, or, after aggregating the data, for each département). Let us consider a particular site (we omit here the site index k in the equations).
At each time t there is a number S(t) of individual susceptible to join the riot, and I(t) of infected individuals (rioters). Those who leave the riots become recovered individuals.
Since we assume that there is no flux from recovered to susceptible, we do not have to keep track of the number of recovered individuals. Initially, before a triggering event occurs at some time t_0, there is a certain number S_0 > 0 of susceptible individuals but no rioters, that is, I(t) = 0 for all t < t_0. At t_0 there is an exogenous shock and the number of rioters becomes positive, I(t_0) = I_0 > 0. From there on, neglecting fluctuations, the numbers of rioters and of susceptible individuals evolve according to the following set of equations:
dI(t)/dt = -ω I(t) + S(t) P(s→i,t),
dS(t)/dt = -S(t) P(s→i,t).
Let us now explain this system of equations.
In Eq. (<ref>), ω is the constant rate at which rioters leave the riot.
The second term in the right hand side of Eq. (<ref>) gives the flux from susceptible to infected as the product of the number of susceptible individuals, times the probability P(s→ i,t) for a susceptible individual to become infected. The second equation, Eq. (<ref>), simply states that those who join the riot leave the subpopulation of susceptible individuals.
We now specify the probability to join the riot, P(s→ i,t) (to become infected when in the susceptible state).
In line with accounts of other collective uprising phenomena<cit.>, testimonies from participants in the 2005 riots suggest a bandwagon effect: individuals join the riot when seeing a group of rioters in action. Threshold decision models<cit.> describe this herding behavior assuming that each individual has a threshold. When the herd size is larger than this threshold the individual joins the herd. Granovetter<cit.> has specifically applied such a model to riot formation, the threshold being then the number of rioters beyond which the individual decides to join the riot. Here we make the simpler hypothesis that the probability to join the riot does not depend on idiosyncratic factors, and is only an increasing function of the total number of rioters at the location (site) under consideration. It is worth emphasizing that this herding behavior is in contrast with the epidemic of an infectious disease, where contagion typically occurs from
dyadic interactions, in which case the probability is proportional to the fraction of infected individuals, I(t)/S_0.
Being a probability, P(s→ i,t) must saturate at some value (at most 1) for large I, and is thus a non-linear function of I. Nevertheless, we will first assume that conditions are such that we can approximate P(s→ i,t) by its linear behavior: P(s→ i,t) ∼κ I(t) (but note that κ does not scale with 1/S_0) and discuss later a different specification for this term.
Given the assumption λ(t) = α I(t), it is convenient to define
σ(t) = α S(t)
so that the riot dynamics at a single (isolated) site is described by:
dλ(t)/dt = -ω λ(t) + β σ(t) λ(t),
dσ(t)/dt = -β σ(t) λ(t),
where β≡κ/α.
Initially λ = 0, which is a fixed point of this system of equations. With σ(t_0)=σ_0 >0, the riot starts after the shock
if the reproduction number<cit.> R_0≡βσ_0/ω=κ S_0/ω
is greater than 1. In such a case, from t=t_0 onward, the number of infected individuals first increases, then goes through a maximum and eventually relaxes back towards zero.
Because κ is not of order 1/S_0, this condition seems too easy to satisfy: at any time, any perturbation would initiate a riot. One may assume that the particular parameter values allowing one to fit the data describe the state of the system at that particular period. Previous months and days of escalation of tension may have led to an increase in the susceptibility κ, or in the number of susceptible individuals S_0.
§.§ Epidemiological modeling: Non local contagion
We give here the details on the global SIR model, with interactions between sites. We have a discrete number K of sites, with homogeneous mixing within each site, and interactions between sites. At each site k, there is a number S_k of “susceptible” individuals, I_k of “infected” (rioters), and R_k of “recovered” individuals. As above, there is no flux from recovered to susceptible (hence we can ignore the variables R_k), and individuals at site k leave the riot at a constant rate ω_k. Assuming homogeneous mixing in each site, the dynamics is given by the following set of equations:
dI_k(t)/dt = -ω_k I_k(t) + S_k(t) P_k(s→i,t),
dS_k(t)/dt = -S_k(t) P_k(s→i,t),
with the initial conditions I_k(t) = 0 and S_k(t) = S_k0 > 0 for t < t_0, and, at t = t_0, a shock occurring at the single location k_0: I_{k_0}(t_0) = I_0 > 0.
In the above equations, ω_k is the local recovering rate, and P_k(s→ i,t) is the probability for a s-individual at location k to become a rioter at time t.
We now write the resulting equations for the λ_k. We assume the rioting activity to be proportional to the number of rioters:
λ_k(t)= α I_k(t)
Note that different hypotheses on the dependency of λ_k on I_k could be considered. For instance, we tested λ_k ∼ (I_k)^q with the exponent q as an additional free parameter. In that case, the optimization actually yields q close to 1.
Multiplying each side of (<ref>) by α, one gets
dλ_k(t)/dt = -ω_k λ_k(t) + σ_k(t) P_k(s→i,t),
dσ_k(t)/dt = -σ_k(t) P_k(s→i,t),
where as before we introduce σ_k(t)=α S_k(t).
Taking into account the hypothesis of a linear dependency of the number of events on the number of rioters, (<ref>), we write P_k(s→ i,t) directly in terms of the λs:
P_k(s→ i,t) = Ψ_k(Λ_k(t))
where Λ_k(t) is the activity “seen” from site k (see main text):
Λ_k(t) ≡ ∑_j W_kj λ_j(t)
where the weights W_kj are given by a decreasing function of the distance dist(k,j) between sites k and j: W_kj= W(dist(k,j)) (see below). The single site case is recovered for W_kj=δ_k,j.
In the linear approximation,
Ψ_k(Λ) = β_k Λ,
in which case one gets the set of equations
dλ_k(t)/dt = -ω_k λ_k(t) + β_k σ_k(t) ∑_j W_kj λ_j(t),
dσ_k(t)/dt = -β_k σ_k(t) ∑_j W_kj λ_j(t).
The form of these equations is analogous to the ones of the original
distributed contacts continuous spatial SIR model <cit.> (see below) but here with a discrete set of spatial locations.
In the whole paper, we take a site independent value of the recovering rate, ω_k=ω for every site k. Similarly, the susceptibility is chosen site-independent, β_k=β, except for some variants where a few sites are singularized, see section Results, details: All of France, département scale, below.
In the non-linear case, we choose parameters for Ψ_k(Λ) in order to have a function (i) being zero when there is no rioting activity; (ii) which saturates at a value (smaller or equal to 1) at large argument; (iii) with a monotonous increasing behavior giving a more or less pronounced threshold effect (e.g. a sigmoidal shape). This has to be done looking for the best compromise between quality of fit and number of parameters (as small as possible). We tested several sigmoidal functions.
For the fit of the Paris area at the scale of the municipalities, we made use of a variant with a strict threshold:
Ψ_k(Λ) = 0,                                for Λ ≤ Λ_c k,
Ψ_k(Λ) = η_k (1 - exp(-γ_k (Λ - Λ_c k))),  for Λ > Λ_c k.
The fit being done with site-independent free parameters, this function thus contributes three free parameters, Λ_c, η and γ.
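In code, this threshold function reads (a sketch, vectorized over sites or activity values):

```python
# Sketch of the thresholded, saturating contagion probability Psi.
import numpy as np

def psi(Lam, Lam_c, eta, gamma):
    Lam = np.asarray(Lam, dtype=float)
    return np.where(Lam <= Lam_c, 0.0,
                    eta * (1.0 - np.exp(-gamma * (Lam - Lam_c))))
```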
§.§ Choice of the weights
The best results are obtained for two options. One is a power law decay with the distance:
W_kj = (1+dist(k,j)/d_0)^-δ
where dist(k,j) is the distance between site k and site j (see below for its computation).
The second option is the sum of an exponential decay and of a constant term
W_kj = ξ + (1-ξ) exp(-dist(k,j)/d_0)
In both cases we normalize the weights so that for every site k, W_k k=1. Taking site-independent free parameters, both cases give two free parameters, d_0 and δ for the choice (<ref>), d_0 and ξ for the choice (<ref>).
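Both parameterizations are one-liners (a sketch built from a K × K distance matrix; note that W_kk = 1 holds automatically in both cases since dist(k,k) = 0):

```python
# Sketch of the two weight matrices.
import numpy as np

def weights_power(dist, d0, delta):
    return (1.0 + dist / d0) ** (-delta)

def weights_exp_plus_const(dist, d0, xi):
    return xi + (1.0 - xi) * np.exp(-dist / d0)
```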
§.§ Distance-independent null hypothesis model
In order to test for the possible absence of geographic dependency in the contagion process, we take as a “null hypothesis” model a version of our model where the weights W_kj do not depend on the distance between sites. In this version, a given site is concerned by what is happening at its own location, and equally by what is happening elsewhere. Mathematically, we thus consider the following weights:
W_kj = 1 if k = j, and W_kj = ξ otherwise,
where ξ is a constant term to be optimized as a free parameter. Apart from the choice of the weights, the model is the same as the one corresponding to the results shown on Fig. <ref>. Optimization is done over all the 8 free parameters. The results for this model are shown on Supplementary Fig. S5, to be compared with Fig. <ref>.
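For completeness, a sketch of the null-model weight matrix (placeholder values):

```python
# Sketch: distance-independent weights, W_kk = 1 and W_kj = xi otherwise.
import numpy as np

K, xi = 93, 0.1                       # placeholder values
W_null = np.full((K, K), xi)
np.fill_diagonal(W_null, 1.0)
```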
As one would expect, this model does not generate any wave: following the shock, all the riot curves happen to peak at the same time. Remarkably, assuming no geographic effect in the interaction term does not simply affect the timing of the events: the model also fails to account for the amplitudes of the rioting activities (see Supplementary Fig. S5b).
For what concerns the statistical significance, as expected from the comparison between the figures, the distance-independent null hypothesis model is far worse than the model with geographic dependency, despite the fact that it has one less free parameter. Making use of the Akaike Information Criterion<cit.> (AIC), the difference in AIC is ΔAIC = -438, which corresponds to a relative likelihood<cit.> of 9.0e-96. A similar conclusion is obtained from the BIC criterion<cit.>.
These results thus clearly underline the need for the interaction term to depend on the distance, supporting the view of a local contagion process.
§.§ Classic SIR model with densities
A more direct application of the classic SIR model as used for infectious diseases would have led to equations for densities of agents (instead of numbers of agents). In the linear case, this leads to the following equations for the λs and σs:
dλ_k(t)/dt = -ω λ_k(t) + β σ_k(t) ∑_j W_kj λ_j(t)/N_j,
dσ_k(t)/dt = -β σ_k(t) ∑_j W_kj λ_j(t)/N_j,
where here β = κ/ζ_0, and N_j is the size of the reference population at location j. These equations should be compared with Eq. (<ref>). Note that, the weights being given by (<ref>), the dependency on the population size N_j cannot be absorbed into the weights.
Fitting this model with densities to the data leads to a much lower likelihood value compared to the model presented here. In the case of the fit of the whole dynamics at the scale of the départements, the difference in AIC is ΔAIC = -101, which corresponds to a relative likelihood of 9.6e-23.
§.§ Links to the original spatially continuous SIR model
In the case of the linear approximation, the meta-population SIR model that we have introduced leads to the set of equations (<ref>) of a type similar to the space-continuous non-local (distributed contact) SIR model. With a view to describing the spreading of infections in spatially distributed populations, Kendall<cit.> introduced in 1957 this non-local version of the Kermack-McKendrick SIR model in the form of space-dependent integro-differential equations. Omitting the recovered population R, the system in the S, I variables reads:
dI(x,t)/dt = -ω I(x,t) + β S(x,t) ∫ K(x,y) I(y,t) dy,
dS(x,t)/dt = -β S(x,t) ∫ K(x,y) I(y,t) dy,
where x ∈ ℝ^N, with N=1, 2, and here I(x,t) and S(x,t) are densities of infected and susceptible individuals. In the particular case of dimension N=1 where the space is homogeneous, meaning here that K(x,y) is of the form K(x,y) = w(x-y), we know<cit.> that there exist traveling waves of any speed larger than or equal to some critical speed. Furthermore, this critical traveling wave speed also yields the asymptotic speed of spreading of the epidemic<cit.>. There have been many mathematical works on this system and on various extensions<cit.>. Thus, at least in dimension N=1 and for homogeneous space, this non-local system can generate traveling fronts for the density of susceptible individuals, hence the propagation of a “spike” of infected individuals. Although no proof exists in dimension N=2, numerical simulations show that the model can indeed generate waves<cit.>, as illustrated
by the Supplementary Videos 3 and 4, similar to the way the riot spread around Paris giving rise to the informal notion of a riot wave.
However, the model we introduce here is more general and differs from the Kendall model in certain aspects. Indeed, rather than continuous and homogeneous, the spatial structure is discrete with heterogeneous sites. Moreover, the set of equations here (<ref>) corresponds to the linear approximation (<ref>), whereas our general model involves a non-linear term. The understanding of generalized traveling waves and the speed of propagation in this general context are interesting open mathematical problems.
More work is needed to assess the mathematical properties of the specific family of non local contagion models introduced here, that is defined on a discrete network, with highly heterogeneous populations, and a non-linear probability of becoming infected.
§.§ Date of the maximum
Figures <ref>c and <ref>b show how well the model accounts for the temporal unfolding of the riot activity, thanks to a comparison between model and data of the date when the riot activity peaks at each location. Given the noisy nature of the data, the empirical date of that maximum itself is not well defined. For each site, we estimated this date as the weighted average of the dates of the 3 greatest values, weighted by those values. We filled in missing data values by linear interpolation.
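A sketch of this estimator (our illustration; the input is one site's daily series, with missing values already interpolated):

```python
# Sketch: peak date = average of the dates of the 3 largest daily counts,
# weighted by those counts (days indexed 0..43).
import numpy as np

def peak_date(daily_counts):
    x = np.asarray(daily_counts, dtype=float)
    top3 = np.argsort(x)[-3:]                 # dates of the 3 largest values
    return float(np.sum(top3 * x[top3]) / np.sum(x[top3]))
```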
For the contagion around Paris, considering the 12 most active municipalities shown in Fig. <ref>c, the correlation coefficient is r = 0.80, p=0.0017.
At the scale of the whole country, Fig. <ref>d, considering the départements having more than 60 events, the correlation coefficient is equal r=0.77, with a p-value of p=5.2e-6. Given the large differences in population size, the weighted correlation is more appropriate for comparing the timing of the riot activities. Using weights equal to the population sizes, this yields a weighted correlation coefficient of r=0.87, with a p-value p<1e-5 estimated with a bootstrap procedure.
§.§ Non-free parameters: Choice of the reference population
Populations statistics.
For the choice of the reference population, we compared the use of various specific populations, considering cross-linked database that involve age, sex and diploma. The source of these populations statistics is the INSEE, the French national institute carrying the national census (<http://www.insee.fr/>).
For the period under consideration, we used relevant data from 2006 since data from 2005 were not available.
When applying the model at the scale of départements, for each département the size of a given specific population is computed as the sum of the sizes of the corresponding populations of all its municipalities that are under police authority.
Choice of the reference population.
Working at the scale of départements, we found the best log-likelihood when using as reference population that of males aged between 16 and 24, out of school and with no diploma (see Supplementary Fig. S7).
We thus calibrated the susceptible population in (all variants of) the model by assuming that, for each site (municipality or département), its size is proportional to the one of the corresponding reference population.
Influence on the results. The choice of the reference population has a major influence on the results. We find that an improper choice cannot be compensated for by the optimization of the free parameters. As an example, when working at the scale of municipalities, compare Supplementary Fig. S2, for which the reference population is the total population, with Fig. <ref>a and <ref>b.
§.§ Non-free parameters: geographic data
The geographic data are taken from the collaborative project Open Street Map (<http://osm13.openstreetmap.fr/ cquest/openfla/export/>).
The distance dist(k,j) is taken as the distance (in km) between the centroids of the sites k and j. In the case of the municipalities, the centroid is taken as the geographic centroid computed with QGIS<cit.>. In the case of départements, the centroid is computed as the weighted centroid (weighted by the size of the reference population) of all its municipalities that are under police authority.
Making use of these geographic data, all the maps (Fig. <ref>d and <ref>, and Supplementary Videos 1 and 2), have been generated with the Mapping toolbox of the MATLAB<cit.> software.
§.§ Free parameters: numerical optimization
The data fit makes use of the maximum likelihood approach<cit.>. Let us call X = {x_k,i, k=1… K,i=1… 44 } the data, where each x_k,i∈ℕ corresponds to the number of events for the site k at day i (i=1 corresponding to October 26, 2005), and let θ denote the set of free parameters (e.g. θ = {ω, A, ζ_0, d_0, δ, β} in the multi-sites linear case). Assuming conditional independence, we have:
p(X|θ) = ∏_k ∏_i p(x_k,i|θ)
Under the Poisson noise hypothesis, the x_k,i are Poisson probabilistic realizations with mean (λ_k,i(θ) + λ_b k):
p(x_k,i|θ) = (λ_k,i(θ) + λ_b k)^x_k,i/x_k,i!exp(-(λ_k,i(θ) + λ_b k))
The log-likelihood, computed over all the sites under consideration and over the whole period (44 days) for which we have data, thus reads:
ℓ(θ|X) = log p(X|θ) = ∑_k,i( -λ_k,i(θ) - λ_b k + x_k,ilog(λ_k,i(θ) + λ_b k) ) - ∑_k,ilog x_k,i!
Note that the last term on the right-hand side does not depend on the free parameters, and we can thus ignore it.
We performed the numerical maximization of the log-likelihood using the interior point algorithm<cit.> implemented in the MATLAB<cit.> function fmincon.
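An equivalent Python sketch, minimizing the negative log-likelihood with SciPy's trust-region constrained optimizer (used here in place of MATLAB's interior-point method); the single-site exponential-decay model and all numerical values are hypothetical, for illustration only.

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, X, lambda_model, lambda_b):
    """X: (K sites, T days) counts; lambda_model(theta): (K, T) model means."""
    lam = lambda_model(theta) + lambda_b[:, None]
    return np.sum(lam - X * np.log(lam))            # constant sum(log x!) dropped

t = np.arange(44)
model = lambda th: th[0] * np.exp(-t / th[1])[None, :]   # hypothetical toy model
X = np.random.default_rng(1).poisson(model([5.0, 10.0]))
res = minimize(neg_log_lik, x0=[1.0, 5.0],
               args=(X, model, np.array([1e-3])),
               method="trust-constr",
               bounds=[(1e-6, None), (1e-6, None)])
print(res.x)                                        # close to the true (5, 10)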
The method developed here allows one to explore the possibility of predicting the future time course of events based on the observation of the events up to some date. Preliminary results indicate that, once the activity has reached its peak in the Paris area, the prediction in time and space of the riot dynamics for the rest of France becomes quite accurate.
§.§ Results, details: Paris area, municipality scale
For the results illustrated by the figures in the paper, we give here the free parameter values obtained from the maximum likelihood method in the case of the fit at the scale of municipalities in Île-de-France. Note that this optimization is computationally demanding: it requires generating a large number of times (of the order of tens of thousands) the full dynamics (44 days) with 2560 (2×1280) coupled equations. For the choice of the function Ψ, we tested the linear case and several non-linear choices. Results are presented for the non-linear case, the function Ψ being given by (<ref>). We find:
ω = 0.26, A = 5.5; for the power law decrease of the weights, d_0 = 8×10^-3 km, δ = 0.67; ζ_0 = 7.7/N_max, where N_max = 1174 is the maximum size of the reference populations, the maximum being taken over all Île-de-France municipalities; for the parameters of the non-linear function, η = 0.63, γ = 1.27, Λ_c = 0.06.
In Supplementary Table S1a we provide a summary of the variants that we have explored, together with a comparison according to the AIC criterion.
§.§ Results, details: All of France, département scale
We detail here the model options and the numerical results
for the global model, considering each one of the départements of metropolitan France (except Corsica and Paris, hence 93 départements) as one homogeneous site.
We have thus 186 (2 × 93) coupled equations with 6 to 12 free parameters, depending on the choice of the function Ψ and of the number of specific susceptibilities, see below and Supplementary Table S1b.
Outliers. Looking at the results for different versions, we observe some systematic discrepancy between data and model for three départements: 93, where the predicted activity is slightly too low and starts slightly too late, and 13 and 62 where it is too high (see Supplementary Fig. S6b).
Actually, if one looks at the empirical maximum number of events as a function of the size of the reference population used for calibrating the susceptible population, these three départements show up as outliers: the riot intensity is significantly different from what one would expect from the size of the poor population.
Outliers are here defined as falling outside the mean ± 3 standard deviations range when looking at the residuals of the linear regression. If one considers 4 standard deviations from the mean, one only finds the département 13.
The cases of 93 and 13 are not surprising. Département 93 is the one where the riots started, and has the highest concentration of deprived neighborhoods. Inhabitants are aware of this particularity and refer to their common fate by putting forward their belonging to the “neuf-trois” (nine-three, instead of ninety three). Events in département 13 are mainly those that occurred in the city of Marseille. Despite a high level of criminality, and large poor neighborhoods, the inhabitants consider that being “Marseillais” comes before being French, so that people might have felt less concerned. The case of 62 (notably when compared to 59) remains a puzzle for us.
We have tested the model calibration with variants having possibly one more free parameter for each of these sites, β_93, β_62 and β_13, allowing for a different value of the susceptibility than the one taken for the rest of France. The quality of fit for the different options (a single β value; a specific value β_13; and 3 specific values β_93, β_62 and β_13) is shown in Supplementary Table S1b. The best result (with a linear function Ψ) is obtained in the case of 3 specific values (with a total of 9 free parameters).
Function Ψ.
We also compared the choice of the linear function with that of a non-linear function (still with 3 specific β values). The best AIC is obtained with a non-linear Ψ, with a total of 12 free parameters. The main qualitative gains with this variant can be seen by comparing Supplementary Fig. S6c with Supplementary Fig. S6a (which, for ease of comparison, reproduces Fig. <ref>b): a slightly sharper increase at the beginning for every site, and a better value of the maximum activity in département 59. However, the maximum values for départements 94 and 95 are clearly better predicted by the variant with only 9 free parameters. Apart from these main qualitative differences, the fits are essentially equivalent. We thus choose to present the results for the simpler variant with 9 parameters (with a linear function Ψ and three specific β values).
Parameterization and results. Making use of the linear choice for Ψ, introducing
one free parameter for each one of the sites 13, 62 and 93,
and using the weights given by an exponential decrease plus a constant global value, Eq. (<ref>), optimization is thus done over the choice of 9 free parameters: ω, A, ζ_0, d_0, ξ, β, β_13, β_62 and β_93.
The numerical values of the free parameters obtained after optimization are as follows: ω = 0.41, A = 2.6; ζ_0 = 190/N_max, where here N_max = 15632 is the maximum size of the reference populations of all metropolitan départements; the susceptibility is found to be β = 2×10^-3, except for the three départements discussed above. For these three départements with a specific susceptibility, one finds about twice the common value for département 93 (where the riots started), β_93/β ≈ 1.95, and about half for départements 62 and 13, β_62/β ≈ 0.47 and β_13/β ≈ 0.42.
For the weights chosen with an exponential decrease plus a constant global value, Eq. (<ref>), d_0 = 36 km and ξ = 0.06. When using instead the power law decrease, Eq. (<ref>), one finds that the fit is almost as good. The exponent value is found to be δ = 0.80, which is similar to the value δ = 0.67 found for the fit at the scale of municipalities (restricted to the Île-de-France région). Yet, these exponent values are much smaller than the ones, between 1 and 2, found in the literature on social interactions as a function of geographical distance<cit.>. A small value of δ means a very slow decrease, a hint at the need to keep a non-zero value at very large distances. This can be seen as another indication that the alternative choice with a long-range part, Eq. (<ref>), is more relevant, meaning that both geographic proximity and long-range interactions matter.
§.§ Minor sites: Comparison with a constant rate null-hypothesis
The model predicts that the minor sites, that is, those where the number of events is very small (with a level to be chosen, as discussed below) are hit by the wave, with a very small amplitude and at the correct period of time. This can be seen clearly on Fig. <ref>b and Supplementary Fig. S4a.
When looking at these figures, one should keep in mind that the model predictions represent the mean values of stochastic Poisson point processes. Thus, for instance, for a given day, and a given site, a value smaller than 1 for the Poisson parameter λ means that the most probable situation is no event at all, and we expect, say, 0 or 1 event. Yet, one should ask whether the apparent agreement is purely the result of chance. Of course the fit for any one of these minor sites, taken alone, is not significant. What matters here is the consistency of the global model with the set of activities of all the minor sites.
In order to quantitatively evaluate the relevance of the fit even for these minor sites, we confront the model predictions with those of a null model specific to this set of sites. We consider a constant rate null-hypothesis model defined as a Poisson noise model, with a parameter for each site that is constant in time (λ_k(t)=λ_k). This parameter is chosen to be the empirical average number of events over the available period. Supplementary Figure S4b provides the comparison in terms of the difference in the AIC criterion between the two models. In this comparison, the AIC of the epidemiological model is obtained from its calibration over the full set of départements (see Fig. <ref>). In the resulting log-likelihood we only keep the terms that specifically depend on the minor sites under consideration.
We find that, when considering the minor sites as those with at most one event on any single day, the null model is preferred to the epidemiological model. This is not surprising given the level of noise in our dataset – recall in particular the presence of a criminal background not associated with the riots, as discussed above in the section Background activity. However, when considering the minor sites as those where the number of events on any day does not exceed a value as low as two, we find that the epidemiological model yields a better account of the activity than the null model (Supplementary Fig. S4b). This is all the more remarkable as these sites have only a small influence on the calibration of the full model. Indeed, their contribution to the global model likelihood is small compared to that of the major sites, which essentially drive the data fit. As Supplementary Fig. S4b also shows, the gain in AIC increases rapidly with the number of events allowed for in the definition of the minor sites.
§.§ Poisson noise assumption
In order to calibrate the model to the data, we assume Poisson statistics, although we do not claim that the underlying process is exactly of Poisson nature. However, it is a convenient working hypothesis for numerical reasons (see above Free parameters: numerical optimization). From a theoretical point of view, this choice is a priori appropriate as we deal with discrete values (often very small). In addition, we have seen that the data suggests that a same kind of model is relevant at different scales: this points towards infinitely divisible distributions, such as the Poisson distribution – the sum of several independent Poisson processes still being a Poisson process.
Stationary tails statistics.
We show here that the statistics of the background activity (data in the tails exhibiting a stationary behavior) are compatible with the Poisson hypothesis.
Under such a hypothesis, the variance is equal to the mean. Figure <ref>a shows that, for each département, if we look at the last two weeks, the variance/mean relationship is indeed in good agreement with a Poisson hypothesis.
As additional support for the Poisson noise property, Fig. <ref>b shows a Poissonness plot<cit.> for each of the 12 régions. For completeness, we recall the meaning of a Poissonness plot. One has a total number of observations n. Each particular value x is observed a certain number of times n_x, hence with an empirical frequency of occurrence n_x/n. If the underlying process is Poisson with mean λ, then one must have log(x! n_x / n) = -λ + x log(λ). Thus, in that case, the plot of the quantity log(x! n_x / n) (the blue circles in Fig. <ref>b) as a function of x should fall along a straight line with slope log(λ) and intercept -λ (the red lines in Fig. <ref>b).
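A minimal Python sketch of this diagnostic, applied here to synthetic Poisson draws rather than to the riot data:

import numpy as np
from scipy.special import gammaln

def poissonness(counts):
    """Return (x, log(x! n_x / n)) for a Poissonness plot."""
    counts = np.asarray(counts)
    xs, n_x = np.unique(counts, return_counts=True)
    phi = gammaln(xs + 1.0) + np.log(n_x / counts.size)   # gammaln(x+1) = log(x!)
    return xs, phi

xs, phi = poissonness(np.random.default_rng(0).poisson(2.0, size=500))
slope, intercept = np.polyfit(xs, phi, 1)
print(np.exp(slope), -intercept)        # both close to the true mean lambda = 2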
Poisson realizations and Highest Density Regions.
We recall that we consider the observed data as probabilistic realizations of an underlying Poisson process, whose mean λ(t) is the outcome of the model fit. To get a better grasp of the meaning of such a data fit, as a complement to Fig. <ref>b, we provide Fig. <ref>. On this figure we have plotted the 95% Highest Density Regions<cit.> (HDR, light orange areas) along with the means λ(t) (red curves) of the Poisson processes. The rationale is as follows. From fitting the model, for each site and for each date, we have a value of λ.
If one draws a large number of realizations of a Poisson process with this mean value λ, one will find that 95% of the points lie within the corresponding 95% HDR. More precisely, a 95% highest density region corresponds to the interval of shortest length with a probability coverage of 95%<cit.>.
For each value of the set of λs, outcome of the fit with the global, non-local, model, we estimated the corresponding 95% HDR thanks to a Monte Carlo procedure. These regions are shown as light orange areas in Fig. <ref>. These HDRs allow one to visualize the expected size of the fluctuations (with respect to the mean).
Next, we look at where the actual data points (gray points in Fig. <ref>) lie with respect to the HDRs.
First, one sees that the empirical fluctuations are in agreement with the sizes of the HDRs (qualitatively, the points are spread across the HDRs). Second, remarkably, one finds that the percentage of data points outside the HDRs is 9%, a value close to the expected value of 100-95=5% (expected if both the fit is good and the noise is Poisson).
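The HDR computation can be sketched as follows; for a discrete Poisson law the shortest interval can be found directly from the probability mass function (the Monte Carlo route used in the paper gives the same regions up to sampling error).

import numpy as np
from scipy.stats import poisson

def poisson_hdr(lam, coverage=0.95):
    """Shortest integer interval [a, b] with Poisson(lam) mass >= coverage."""
    support = np.arange(int(poisson.ppf(1 - 1e-9, lam)) + 2)
    c = np.concatenate([[0.0], np.cumsum(poisson.pmf(support, lam))])
    best = (support[0], support[-1])
    for a in support:
        b = np.searchsorted(c, c[a] + coverage)   # first b with P(a <= X <= b-1) >= coverage
        if b - 1 <= support[-1] and (b - 1 - a) < (best[1] - best[0]):
            best = (a, b - 1)
    return best

lo, hi = poisson_hdr(3.0)
data = np.random.default_rng(1).poisson(3.0, size=2000)
print(lo, hi, np.mean((data >= lo) & (data <= hi)))   # empirical coverage >= 0.95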
This slightly larger value could, however, be due to statistical fluctuations. Yet, a closer look at the plots suggests a few large deviations, such as day 2 in département 69, that might correspond to true idiosyncrasies – cases which cannot be reproduced by the model and show up as particular deviations from the “first order scenario” we present.
In addition to this analysis, to get more intuition on what Poisson fluctuations may produce, we generated artificial data that are Poisson probabilistic realizations given a certain underlying mean λ.
Fig. <ref> presents two illustrative cases, where the λs are taken as the outcome of the fit for départements 93 and 76. In each case, four different probabilistic realizations are shown.
§.§ Data availability
The dataset used for this work is available from the corresponding author on reasonable request, and under the condition of proper referencing.
§ ACKNOWLEDGEMENTS
This paper benefited from the critical reading of several colleagues from various fields, as well as from useful remarks of anonymous referees. All the maps (Fig. <ref>d and <ref>, and Supplementary Videos 1 and 2) have been generated thanks to the Open Street Map data © OpenStreetMap contributors (<https://www.openstreetmap.org/copyright>). This work received support from: the European Research Council advanced grant ERC ReaDi (European Union's 7th Framework Programme FP/2007-2013, ERC Grant Agreement no. 321186, held by H. Berestycki); the CNRS interdisciplinary programs PEPS Humain and PEPS MoMIS; the program SYSCOMM of the French National Research Agency, the ANR (project DyXi, grant ANR-08-SYSC-0008). N.R. was supported by the NSF Grant DMS-1516778.
§ AUTHOR CONTRIBUTIONS STATEMENT
S.R. collected the raw data and provided the sociological expertise; M.-A.D. and M.B.G. built the database; L.B-G, J.-P.N, H.B. and N.R. were involved in the mathematical modeling; L.B-G performed the numerical analyses and simulations; L.B-G, J.-P.N, H.B. and S.R. wrote the paper with input from all the authors. All authors reviewed the manuscript.
10
Waddington_etal_2009
D. Waddington, F. Jobard, and M. King, Rioting in the UK and France: A
Comparative Analysis.
Cullompton, Devon: Willan Publishing, 2009.
Roche_2010b
S. Roché, “The nature of rioting. Comparative reflections based on the French case study,” in Transnational Criminology Manual (M. Herzog-Evans, ed.), (Nijmegen), pp. 155–170, Wolf Legal Publishers, 2010.
Cazelles_etal_2007
C. Cazelles, B. Morel, and S. Roché, “Les violences urbaines de l'automne
2005: événements, acteurs: dynamiques et interactions. Essai de
synthèse,” Centre d'analyse stratégique, 2007.
Raafat_etal_2009
R. M. Raafat, N. Chater, and C. Frith, “Herding in humans,” Trends in
cognitive sciences, vol. 13, no. 10, pp. 420–428, 2009.
Gross_2011
M. Gross, “Why do people riot?,” Current Biology, vol. 21, no. 18,
pp. R673–R676, 2011.
Stark_etal_1974
M. J. A. Stark, W. J. Raine, S. L. Burbeck, and K. K. Davison, “Some empirical
patterns in a riot process,” American Sociological Review,
pp. 865–876, 1974.
Granovetter_1978
M. Granovetter, “Threshold models of collective behavior,” American
Journal of Sociology, vol. 83, no. 6, pp. 1360–1380, 1978.
Burbeck_etal_1978
S. L. Burbeck, W. J. Raine, and M. A. Stark, “The dynamics of riot growth: An
epidemiological approach,” Journal of Mathematical Sociology, vol. 6,
no. 1, pp. 1–22, 1978.
Pitcher_etal_1978
B. L. Pitcher, R. L. Hamblin, and J. L. Miller, “The diffusion of collective
violence,” American Sociological Review, pp. 23–35, 1978.
Myers_2000
D. J. Myers, “The diffusion of collective violence: Infectiousness,
susceptibility, and mass media networks,” American Journal of
Sociology, vol. 106, no. 1, pp. 173–208, 2000.
Braha_2012
D. Braha, “Global civil unrest: contagion, self-organization, and
prediction,” PloS one, vol. 7, no. 10, p. e48596, 2012.
Baudains_etal_2013
P. Baudains, A. Braithwaite, and S. D. Johnson, “Target choice during extreme
events: a discrete spatial choice model of the 2011 London riots,” Criminology, vol. 51, no. 2, pp. 251–285, 2013.
Baudains_etal_2013b
P. Baudains, S. D. Johnson, and A. M. Braithwaite, “Geographic patterns of
diffusion in the 2011 London riots,” Applied Geography, vol. 45,
no. 0, pp. 211 – 219, 2013.
Davies_etal_2013
T. P. Davies, H. M. Fry, A. G. Wilson, and S. R. Bishop, “A mathematical model
of the London riots and their policing,” Scientific reports, vol. 3,
2013.
Berestycki_etal_2015
H. Berestycki., J. Nadal, and N. Rodriguez, “A model of riots dynamics:
shocks, diffusion and thresholds,” Networks and Heterogeneous Media
(NHM), vol. 10, no. 3, pp. 443–475, 2015.
Salgado_etal_2016
M. Salgado, A. Mascareño, G. Ruz, and J.-P. Nadal, “Models of contagion:
Towards a theory of crises propagation,” submitted, 2016.
Dietz_1967
K. Dietz, “Epidemics and rumours: A survey,” Journal of the Royal
Statistical Society. Series A (General), pp. 505–528, 1967.
Wang_Wood_2011
L. Wang and B. C. Wood, “An epidemiological approach to model the viral
propagation of memes,” Applied Mathematical Modelling, vol. 35,
no. 11, pp. 5442–5447, 2011.
Midlarsky_1978
M. I. Midlarsky, “Analyzing diffusion and contagion effects: The urban
disorders of the 1960s,” American Political Science Review, vol. 72,
no. 03, pp. 996–1008, 1978.
Govea_West_1981
R. M. Govea and G. T. West, “Riot contagion in Latin America, 1949–1963,” Journal of Conflict Resolution, vol. 25, no. 2, pp. 349–368, 1981.
Bohstedt_Williams_1988
J. Bohstedt and D. E. Williams, “The diffusion of riots: the patterns of 1766, 1795, and 1801 in Devonshire,” The Journal of Interdisciplinary History, vol. 19, no. 1, pp. 1–24, 1988.
Charlesworth_1994
A. Charlesworth, “The spatial diffusion of riots: Popular disturbances in
England and Wales, 1750–1850,” Rural History, vol. 5, no. 01,
pp. 1–22, 1994.
Myers_2010
D. Myers, “Violent protest and heterogeneous diffusion processes: The spread of US racial rioting from 1964 to 1971,” Mobilization: An International Quarterly, vol. 15, no. 3, pp. 289–321, 2010.
Mazars_2007
M. Mazars, “Les violences urbaines de l'automne 2005 vues du palais de
justice. Etude de cas. Les procédures judiciaires engagées au
tribunal de grande instance de Bobigny.,” Centre d'analyse
stratégique, 2007.
Strang_Tuma_1993
D. Strang and N. B. Tuma, “Spatial and temporal heterogeneity in diffusion,”
American Journal of Sociology, vol. 99, no. 3, pp. 614–639, 1993.
Myers_1997
D. J. Myers, “Racial rioting in the 1960s: An event history analysis of local
conditions,” American Sociological Review, pp. 94–112, 1997.
Mohler_etal_2011
G. O. Mohler, M. B. Short, P. J. Brantingham, F. P. Schoenberg, and G. E. Tita,
“Self-exciting point process modeling of crime,” Journal of the
American Statistical Association, vol. 106, no. 493, pp. 100–108, 2011.
Brantingham_etal_2012
P. J. Brantingham, G. E. Tita, M. B. Short, and S. E. Reid, “The ecology of
gang territorial boundaries,” Criminology, vol. 50, no. 3,
pp. 851–885, 2012.
Gauvin_etal_2013
L. Gauvin, A. Vignes, and J.-P. Nadal, “Modeling urban housing market
dynamics: can the socio-spatial segregation preserve some social
diversity?,” Journal of Economic Dynamics and Control, vol. 37, no. 7,
pp. 1300–1321, 2013.
Kermack_McKendrick_1927
W. O. Kermack and A. G. McKendrick, “A contribution to the mathematical theory
of epidemics,” in Proceedings of the Royal Society of London A:
mathematical, physical and engineering sciences, vol. 115, pp. 700–721, The
Royal Society, 1927.
Diekmann_Heesterbeek_2000
O. Diekmann and H. Heesterbeek, Mathematical Epidemiology of Infectious
Diseases: Model Building, Analysis and Interpretation.
New York: Wiley, 2000.
Hethcote_2000
H. W. Hethcote, “The mathematics of infectious diseases,” SIAM review,
vol. 42, no. 4, pp. 599–653, 2000.
Earl_etal_2004
J. Earl, A. Martin, J. D. McCarthy, and S. A. Soule, “The use of newspaper
data in the study of collective action,” Annu. Rev. Sociol., vol. 30,
pp. 65–80, 2004.
Baddeley_2007
A. Baddeley, Spatial Point Processes and their Applications, pp. 1–75.
Berlin, Heidelberg: Springer, 2007.
Ball_etal_2015
F. Ball, T. Britton, T. House, V. Isham, D. Mollison, L. Pellis, and G. S.
Tomba, “Seven challenges for metapopulation models of epidemics, including
households models,” Epidemics, vol. 10, pp. 63–67, 2015.
Schelling_1973
T. C. Schelling, “Hockey helmets, concealed weapons, and daylight saving: A
study of binary choices with externalities,” The Journal of Conflict
Resolution, vol. XVII (3), 1973.
Liben-Nowell_etal_2005
D. Liben-Nowell, J. Novak, R. Kumar, P. Raghavan, and A. Tomkins, “Geographic
routing in social networks,” Proceedings of the National Academy of
Sciences of the United States of America, vol. 102, no. 33,
pp. 11623–11628, 2005.
Lambiotte_etal_2008
R. Lambiotte, V. D. Blondel, C. de Kerchove, E. Huens, C. Prieur, Z. Smoreda,
and P. Van Dooren, “Geographical dispersal of mobile communication
networks,” Physica A: Statistical Mechanics and its Applications,
vol. 387, no. 21, pp. 5317–5325, 2008.
Goldenberg_Levy_2009
J. Goldenberg and M. Levy, “Distance is not dead: Social interaction and
geographical distance in the internet era,” arXiv preprint
arXiv:0906.3202, 2009.
Braun_Koopmans_2010
R. Braun and R. Koopmans, “The diffusion of ethnic violence in germany: The
role of social similarity,” European Sociological Review, vol. 26,
no. 1, pp. 111–123, 2010.
Kendall_1957
D. G. Kendall, “Discussion of “Measles periodicity and community size” by
M.S. Bartlett,” Journal of the Royal Statistical Society Series
A, vol. 120, pp. 64–67, 1957.
Kendall_1965
D. G. Kendall, “Mathematical models of the spread of infection,” Mathematics and computer science in biology and medicine, vol. 213, 1965.
Emirbayer_Goodwin_1994
M. Emirbayer and J. Goodwin, “Network analysis, culture, and the problem of
agency,” American journal of sociology, pp. 1411–1454, 1994.
Gould_2003
R. V. Gould, “Why do networks matter? Rationalist and structuralist
interpretations,” in Social movements and networks: Relational
approaches to collective action (M. Diani and D. McAdam, eds.), pp. 233–57,
Oxford: Oxford University Press, 2003.
Walgrave_Wouters_2014
S. Walgrave and R. Wouters, “The missing link in the diffusion of protest:
Asking others,” American Journal of Sociology, vol. 119, no. 6,
pp. 1670–1709, 2014.
Carter_1987
G. L. Carter, “Local police force size and the severity of the 1960s black
rioting,” Journal of Conflict Resolution, vol. 31, no. 4,
pp. 601–614, 1987.
Akaike_1974
H. Akaike, “A new look at the statistical model identification,” IEEE
transactions on automatic control, vol. 19, no. 6, pp. 716–723, 1974.
Burnham_Anderson_2003
K. P. Burnham and D. R. Anderson, Model selection and multimodel
inference: a practical information-theoretic approach.
Springer Science & Business Media, 2003.
Schwarz_1978
G. Schwarz, “Estimating the dimension of a model,” The Annals of Statistics, vol. 6, no. 2, pp. 461–464, 1978.
Aronson_1977
D. Aronson, “The asymptotic speed of propagation of a simple epidemic,” in
Nonlinear diffusion, vol. 14, pp. 1–23, Pitman London, 1977.
Ruan_etal_2007
S. Ruan, “Spatial-temporal dynamics in nonlocal epidemiological models,” in
Mathematics for life science and medicine, pp. 97–122, Springer, 2007.
Bailey_1967
N. T. Bailey, “The simulation of stochastic epidemics in two dimensions,” in
Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics
and Probability, vol. 4, pp. 237–257, University of California Press
Berkeley and Los Angeles, 1967.
Rodriguez-Meza_2012
M. Rodríguez-Meza, “Spatial and temporal dynamics of infected populations:
the Mexican epidemic,” Revista Mexicana de Física, vol. 58, no. 1,
pp. 58–62, 2012.
QGIS_software
QGIS Development Team, QGIS Geographic Information System.
Open Source Geospatial Foundation, 2009.
MATLAB
MATLAB, version 9.0.0 (R2016a).
The MathWorks Inc., Natick, Massachusetts, 2016.
Myung_2003
I. J. Myung, “Tutorial on maximum likelihood estimation,” Journal of
mathematical Psychology, vol. 47, no. 1, pp. 90–100, 2003.
Byrd_etal_1999
R. H. Byrd, M. E. Hribar, and J. Nocedal, “An interior point algorithm for
large-scale nonlinear programming,” SIAM Journal on Optimization,
vol. 9, no. 4, pp. 877–900, 1999.
Hoaglin_1980_poissonness
D. C. Hoaglin, “A Poissonness plot,” The American Statistician,
vol. 34, no. 3, pp. 146–149, 1980.
Hyndman_1996
R. J. Hyndman, “Computing and graphing highest density regions,” The
American Statistician, vol. 50, no. 2, pp. 120–126, 1996.
Supplementary Information
§.§ Supplementary Figures
§.§ Supplementary Tables
§.§ Supplementary Videos
∙ Supplementary Video 1.
Riot propagation around Paris: smoothed data.
This video shows the riot propagation around Paris. The map shows the municipality boundaries, with Paris at the center. For each municipality for which data is available, a circle is drawn with an area proportional to the estimate of the size of the susceptible population (see main text, section Methods). Instead of making use of the raw data, for each municipality we replaced each day value by the one given by the fit with the single site epidemic model considered here as a tool for smoothing the data. The color represents the intensity of the rioting activity: the warmer the color, the higher the activity. The pace of the video corresponds to three days per second. In order to improve the fluidity of the video, we increased the number of frames per second by interpolating each day with 7 new frames, whose values are computed thanks to a piece-wise cubic interpolation of the original ones. The resulting frame rate is then 24 frames per second.
The maps have been generated with the Mapping toolbox of the MATLAB software, making use of the Open Street Map data © OpenStreetMap contributors (<https://www.openstreetmap.org/copyright>).
The video is encoded with the open standard H.264.
∙ Supplementary Video 2.
Riot propagation around Paris: model with non-local contagion.
This video shows the riot activity as predicted by the data-driven global epidemic-like model. Same technical details as for the SI Video 1.
The maps have been generated with the Mapping toolbox of the MATLAB software, making use of the Open Street Map data © OpenStreetMap contributors (<https://www.openstreetmap.org/copyright>).
The video is encoded with the open standard H.264.
∙ Supplementary Video 3.
Spatial SIR: wave propagation in a homogeneous medium.
We illustrate the formal continuous spatial SIR model with a video showing the propagation of a wave. The underlying medium is characterized by a uniform density of susceptible individuals. The weights w(x-y) in the interaction term are given by a decreasing exponential function of the Euclidean distance ||x-y||.
The video is encoded with the open standard H.264.
∙ Supplementary Video 4.
Spatial SIR: wave propagation in a non-homogeneous medium.
Same as SI Video 3, but with a heterogeneous density of susceptible individuals, characterized by (1) a decrease of the density towards 0 near the boundary of the image, so that the wave dies before leaving the frame, (2) a hole at the center, which is then bypassed by the wave, (3) a concentration of susceptible individuals that globally decreases on the y-axis, so that the wave dies while going downward.
The video is encoded with the open standard H.264.
arXiv:1701.07629v1 [cs.IT, math.IT], 26 Jan 2017
Non-Uniformly Coupled LDPC Codes: Better Thresholds, Smaller Rate-loss, and Less Complexity
Laurent Schmalen, Vahid Aref, and Fanny Jardel
Nokia Bell Labs, Stuttgart, Germany
We consider spatially coupled low-density parity-check codes with finite smoothing parameters. A finite smoothing parameter is important for designing practical codes that are decoded using low-complexity windowed decoders.
By optimizing the amount of coupling between spatial positions, we show that we can construct codes with excellent thresholds and small rate loss, even with the lowest possible smoothing parameter and large variable node degrees, which are required for low error floors.
We also establish that the decoding convergence speed is faster with non-uniformly coupled codes, which we verify by density evolution of windowed decoding with a finite number of iterations. We also show that by only slightly increasing the smoothing parameter, practical codes with potentially low error floors and thresholds close to capacity can be constructed. Finally, we give some indications on protograph designs.
§ INTRODUCTION
The work of L. Schmalen was supported by the German Government in the frame of the CELTIC+/BMBF project SENDATE-TANDEM.
Low-density parity-check (LDPC) codes are widely used due to their outstanding performance under low-complexity belief propagation (BP) decoding.
However, an error probability exceeding that of maximum-a-posteriori (MAP) decoding has to be tolerated with (sub-optimal) low-complexity BP decoding.
A few years ago, it has been empirically observed that the BP performance of
some protograph-based, spatially coupled (SC) LDPC ensembles (also termed convolutional LDPC codes) can improve towards the MAP performance of the underlying LDPC ensemble <cit.>. Around the same time,
this threshold saturation phenomenon has been proven rigorously in <cit.> for a newly introduced, randomly coupled SC-LDPC ensemble.
In particular, the BP threshold of that SC-LDPC ensemble tends towards its MAP threshold on any binary memoryless symmetric channel (BMS).
SC-LDPC ensembles are characterized by two parameters: the replication factor L, which denotes the number of copies of LDPC codes to be places along a spatial dimension, and the smoothing parameter w. This latter parameter indicates that each edge of the graph is allowed to connect to w neighboring spatial positions (for details, see <cit.> and Sec. <ref>).
The proof of threshold saturation was given in the context of uniform spatial coupling and requires both L→∞ and w→∞. This poses a serious disadvantage for realizing practical codes, as relatively large structures are required to build efficient codes.
In practice, the main challenges for implementing SC-LDPC codes are the rate-loss due to termination and the decoding complexity. The rate-loss, which scales with w, can be made arbitrarily small by increasing L; however, a large L can worsen the finite-length performance of SC-LDPC codes <cit.>. Known approaches to mitigate the rate-loss (e.g., <cit.>) often introduce extra structure at the boundaries, which is usually undesired. Therefore, we would like to keep the rate-loss as small as possible for a fixed, but small, L. Additionally, the decoding complexity can be managed by employing windowed decoding (WD) <cit.>; however, the window length and complexity scale with the smoothing parameter w. For both reasons, w should be as small as possible, ideally w∈{2,3}, to keep the rate-loss and complexity small, e.g., in high-speed optical communications <cit.>.
In this paper, we construct code ensembles that have excellent thresholds for small w, that have a smaller rate-loss than uniformly coupled SC-LDPC ensembles, and that can be decoded with less complexity by maximizing the speed of the decoding wave. We achieve these properties by generalizing the uniformly coupled SC-LDPC codes of <cit.> to allow for non-uniform coupling. It was already recognized in <cit.> that non-uniform protographs can lead to improved thresholds in some circumstances, at the price of sacrificing the convergence of the chain from one side, which is not problematic when using WD. A very particular, exponential coupling was used in <cit.> to guarantee anytime reliability.
We extend non-uniform coupling to randomly coupled SC-LDPC ensembles and protograph-based ensembles. We analyze their performance under message passing with and without windowed decoding. We show that we can achieve excellent close-to-capacity thresholds by optimizing the coupling, for small w and large d_v, which is required for codes with low error floors. Furthermore, we introduce a new multi-type-based non-uniform coupling that further improves the thresholds without increasing w. We find that the rate-loss is decreased by non-uniform coupling as well. We finally show that the decoding speed, which is an indicator of the complexity, can be increased by non-uniform coupling.
§ SPATIALLY COUPLED LDPC CODES
We briefly describe two construction types of
non-uniformly coupled LDPC codes: the random ensemble and the protograph-based ensemble. The former is easier to analyze and exhibits
the general advantages of non-uniform coupling
while the latter is more of practical interest.
§.§ The Random (d_v,d_c,ν,L,M) Ensemble
We now briefly review how to sample a code from a random, non-uniformly coupled (d_v,d_c,ν,L,M) SC-LDPC ensemble with regular degree distributions.
We first lay out a set of positions indexed from z=1 to L on a spatial dimension.
At each spatial position (SP) z, there are M variable nodes (VNs) and Md_v/d_c check nodes (CNs),
where Md_v/d_c∈ℕ and d_v and d_c denote the variable and check node degrees, respectively.
The non-uniformly coupled structure is based on the smoothing distribution
ν=[ν_0,…,ν_w-1] where ν_i>0,
∑_iν_i=1 and w>1 denotes the smoothing (coupling) width.
The special case of ν_i=1/w leads to the usual, well-known spatial coupling with the uniform smoothing distribution <cit.>.
For termination, we additionally consider w-1 sets of Md_v/d_c CNs at SPs L+1,…,L+w-1. Every CN is assigned d_c “sockets” and imposes an even parity constraint on its neighboring VNs.
Each VN in SP z is connected to d_v CNs in SPs z,…,z+w-1 as follows:
For each of the d_v edges of this VN, an SP z^'∈{z,…,z+w-1} is randomly selected according to
the distribution ν, and then, the edge is uniformly connected to any free socket of the Md_v sockets arising from the CNs in that SP z^'.
This graph represents the code with n=LM code bits, distributed over L SPs. Because of the additional CNs in SPs L+1,…,L+w-1, but also because of potentially unconnected CNs in SPs 1,…,w-1, the design rate is slightly decreased to r = 1 - d_v/d_c - Δ/L, where
Δ = (d_v/d_c)( w-1 - ∑_k=0^w-2[ (∑_i=0^kν_i)^d_c + (∑_i=k+1^w-1ν_i)^d_c ] ),
which increases linearly with w.
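As a quick numerical illustration of this formula (a sketch of ours), the rate-loss term Δ can be evaluated as follows; note that for w=2 a non-uniform choice indeed gives a smaller Δ than uniform coupling:

import numpy as np

def rate_loss(nu, dv, dc):
    """Delta in the design rate r = 1 - dv/dc - Delta/L."""
    nu = np.asarray(nu, float)
    c = np.cumsum(nu)                    # c[k] = nu_0 + ... + nu_k
    s = sum(c[k]**dc + (1.0 - c[k])**dc for k in range(nu.size - 1))
    return dv / dc * (nu.size - 1 - s)

print(rate_loss([0.5, 0.5], 3, 6))       # uniform w=2: ~0.484
print(rate_loss([0.2, 0.8], 3, 6))       # non-uniform: ~0.369, smaller rate-loss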
In the limit M→∞, the asymptotic performance of this ensemble on a binary erasure channel (BEC) with erasure probability ε can be analyzed using density evolution, with
x_z^(t+1) = ε( 1 - ∑_i=0^w-1ν_i( 1 - ∑_j=0^w-1ν_j x_z+i-j^(t))^d_c-1)^d_v-1,
where x_z^(t) denotes the average erasure probability of the outgoing messages from VNs in SP z at iteration t. The messages are initialized as x_z^(0) = ε if z∈[1,L], and x_z^(0) = 0 otherwise. For ν_i=1/w, (<ref>) becomes the known DE equation for SC-LDPC codes with uniform coupling <cit.>.
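The recursion can be implemented directly; the following Python sketch (ours, unoptimized but transparent) keeps a zero boundary outside [1,L] and optionally restricts the update to a sub-range of SPs, which is reused for the windowed decoder below.

import numpy as np

def de_update(x, eps, nu, dv, dc, lo=None, hi=None):
    """One DE sweep; x holds SPs -(w-1),...,L+w-2, kept at 0 outside [1, L]."""
    w = len(nu)
    L = len(x) - 2 * (w - 1)
    lo = w - 1 if lo is None else lo          # array index of SP 1
    hi = w - 1 + L if hi is None else hi
    new = x.copy()
    for z in range(lo, hi):
        inner = sum(nu[i] * (1 - sum(nu[j] * x[z + i - j] for j in range(w))) ** (dc - 1)
                    for i in range(w))
        new[z] = eps * (1 - inner) ** (dv - 1)
    return new

def de_succeeds(eps, nu, dv, dc, L=100, iters=5000, tol=1e-8):
    """True if DE drives all erasure probabilities below tol."""
    w = len(nu)
    x = np.zeros(L + 2 * (w - 1))
    x[w - 1:w - 1 + L] = eps
    for _ in range(iters):
        x = de_update(x, eps, nu, dv, dc)
        if x.max() < tol:
            return True
    return False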
§.§ Protograph-based SC-LDPC Ensembles
SC-LDPC ensembles with a certain predefined structure
can be constructed by means of protographs <cit.>. The Tanner graph of the protograph-based SC-LDPC code is an M-cover of the protograph, i.e., M copies of the protograph are bound together by randomly permuting the edges between sockets of the same type. Protograph-based SC-LDPC codes are of practical interest because of their simple hardware implementation and their excellent
finite-length performance <cit.>.
An exemplary protograph of an SC-LDPC code with non-uniform coupling is shown in Fig. <ref>-a). As the coupled protograph is a chain of repeating segments, we represent coupled protographs by their distinct elementary segment shown in Fig. <ref>-b). We use the 3-tuple (d_v,b_1,b_2) to describe the elementary segment, with d_v the regular variable node degree, b_1 the number of parallel edges between VN v_1 and CN c_1 and b_2 the number of parallel edges between VN v_2 and CN c_1.
§.§ Windowed Decoder Complexity
The decoding complexity is an important parameter for practical SC-LDPC codes.
Consider the profile of densities [x_0^(t),x_1^(t),…] in (<ref>). It has been shown in <cit.> that the profile behaves like a “wave”: it shifts along the spatial dimension with “a constant speed” as the BP decoder iterates. The wave propagation speed is analytically analyzed and bounded in <cit.>,<cit.>.
The wave-like behaviour enables efficient sliding windowed decoding <cit.>: the decoder updates the BP messages of edges lying in a window of W_D SPs I times, and then shifts the window one SP forward and repeats. Thus, the decoding complexity scales with O(W_DILMd_v) as there are 2MLd_v BP messages and each BP message is updated W_DI times.
The required window size W_D is an increasing function of the smoothing factor w <cit.>, which implies that we should keep w small. The number of iterations must satisfy I>1/v, where v is the speed of the wave. In the continuum limit of the spatial dimension, v is defined as the displacement of the profile along the spatial dimension after one iteration. For the discrete case of (<ref>), the speed can be estimated by
v ≈ v_D = D/T_D,
where T_D is the minimum number of iterations required for a displacement of the profile by more than D SPs, i.e.,
T_D = min{T∈ℕ | x_z^(t+T)≤ x_z-D^(t), for t>0 ∧ z≤⌊ L/2⌋}.
The approximation of v becomes more precise by choosing a larger D. We chose D=10 in this paper.
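A sketch of this speed estimate, reusing the de_update function above and tracking only the left-to-right wave on the left half of the chain (warm-up length and iteration caps are our illustrative choices):

def wave_speed(eps, nu, dv, dc, L=200, D=10, warmup=200, t_max=20000):
    """Estimate v ~ D / T_D for the left-to-right decoding wave."""
    w = len(nu)
    x = np.zeros(L + 2 * (w - 1))
    x[w - 1:w - 1 + L] = eps
    for _ in range(warmup):                   # let the wave form at the boundary
        x = de_update(x, eps, nu, dv, dc)
    ref, half = x.copy(), w - 1 + L // 2
    for T in range(1, t_max + 1):
        x = de_update(x, eps, nu, dv, dc)
        if np.all(x[w - 1 + D:half] <= ref[w - 1:half - D]):
            return D / T                      # profile shifted by >= D positions
    return 0.0                                # no moving wave within t_max iterations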
We quickly recapitulate the asymptotic analysis for the windowed decoder here. Instead of the windowed decoder proposed in <cit.>, we employ a slightly modified, more practical version, which updates the complete window after one decoding step.
For every windowed decoding step, indexed by c∈[1,L], we generate a copy y_c,z^(0) of the vector x=(x_1^(c-1),…,x_L+w-1^(c-1)) on which we apply the update rule (<ref>) for SPs z∈{c,c+1,…,c+W_D-1} only, for a total of I iterations. After I iterations, we update the SPs as
x_z^(c) = {[ x_z^(c-1) if z ∉[c,c+W_D); y_z-c+1^(I) otherwise ].
We use a finite number of iterations in the windowed decoder to accurately predict the performance of a practical decoder.
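The windowed decoder recursion described above then reads as follows, again reusing de_update; the window size and iteration count below are illustrative:

def windowed_de(eps, nu, dv, dc, L=100, W_D=10, I=5):
    """DE for the windowed decoder: I sweeps per window position, one-SP shifts."""
    w = len(nu)
    x = np.zeros(L + 2 * (w - 1))
    x[w - 1:w - 1 + L] = eps
    for c in range(L):
        lo = w - 1 + c
        hi = min(lo + W_D, w - 1 + L)
        y = x.copy()
        for _ in range(I):
            y = de_update(y, eps, nu, dv, dc, lo, hi)
        x[lo:hi] = y[lo:hi]                   # write back the updated window
    return x[w - 1:w - 1 + L].max()           # residual erasure after one pass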
§ NON-UNIFORM COUPLING: RANDOM ENSEMBLES
In this section, we optimize non-uniformly SC-LDPC ensembles with random coupling for the BEC. First, we consider w=2, the smallest possible smoothing parameter. This case has a high practical interest as w should be kept as small as possible in order to keep the decoding latency and window length W_D manageable when employing windowed decoding. We show numerically that non-uniform coupling improves the BP threshold and also the decoding complexity as the total number of iterations decreases. Afterwards, we show the advantages of non-uniform coupling w>2.
§.§ Non-Uniform Unit-Memory Coupling (w=2)
Consider a random (d_v,d_c,ν,L,M) SC-LDPC ensemble
with smoothing vector ν=[α,1-α]. It is enough to assume 0≤α≤1/2 because of symmetry. In the limit M→∞, the asymptotic performance of the ensemble over the BEC can be evaluated using DE. We consider the BP threshold
ε^BP(α) = sup{ε∈[0,1] : x_z^(ℓ)→ 0 as ℓ→∞, ∀ z∈[1,L]},
where x_z^(ℓ) is updated according to (<ref>).
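Numerically, ε^BP(α) can be located by bisection on ε, using the de_succeeds sketch above:

def bp_threshold(nu, dv, dc, L=100, n_bisect=25):
    lo, hi = 0.0, 1.0
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        if de_succeeds(mid, nu, dv, dc, L):
            lo = mid                          # decodable: threshold is above mid
        else:
            hi = mid
    return lo

# sweep alpha for a (5, 10) ensemble with w = 2:
# for a in [0.1, 0.2, 0.3, 0.4, 0.5]:
#     print(a, bp_threshold([a, 1 - a], 5, 10))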
Figure <ref> illustrates ε^BP(α) as a function of α for different values of d_v. Each curve has two minima and one maximum. The two minima are at α=0 and α=1/2: ε^BP(α=0) equals the BP threshold of the uncoupled (d_v,d_c) ensemble, and ε^BP(α=1/2) is the BP threshold of the SC-LDPC ensemble with uniform coupling. The respective maxima of the curves are indicated by a marker and obtained for α=α^⋆.
We can see that uniform coupling (α=1/2) does not lead to the best thresholds. In particular, if we increase d_v, which is required for constructing codes with very low error floors, uniform coupling with w=2 is not efficient anymore, and the thresholds are significantly away from the BEC capacity. With an optimized α^⋆, we can achieve thresholds that are close to capacity (and to the MAP threshold ε^MAP of the uncoupled LDPC ensemble) and that significantly outperform the uncoupled and the uniformly coupled cases. Table <ref> gives the thresholds of the optimized codes together with the unoptimized, uniformly coupled and uncoupled cases. Although coupling always improves the threshold, with w=2 uniform coupling is not a good solution, and significantly better thresholds are obtained by non-uniform coupling, especially for larger d_v. Moreover, it is easy to show that the rate-loss Δ is maximized for uniform coupling (α=1/2); hence non-uniform coupling always reduces the rate-loss.
We can see that as d_v increases, α^⋆ decreases as well. An interesting open question is whether α^⋆ saturates at some constant or converges to zero.
Non-uniform coupling can also decrease the decoding complexity of windowed decoding.
Figure <ref> illustrates the effect of non-uniform coupling on the wave propagation. While uniform coupling (α=1/2) leads to a wave propagation from both ends towards the middle, non-uniform coupling sacrifices one of those waves in favor of the other one, which will (usually) travel at a faster velocity.
We compute the speed v according to (<ref>) for different values of α∈[0,1/2] and different values of ε∈[ε^BP(α=0), ε^BP(α^⋆)], and show the contour lines of equal decoding speed v in Fig. <ref> for d_v=5 and d_v=10. Points along a contour line indicate that the decoding wave moves with the same speed. When building practical decoders, usually a hardware constraint is imposed which limits the number of operations that can be carried out; hence the decoding speed is also limited. We can see that for a fixed speed v, non-uniformly coupled codes can be operated at a much higher erasure probability ε than with uniform coupling. Note that the maxima of the speed contours practically coincide with the α^⋆ maximizing the threshold.
Figure <ref> suggests that windowed decoding also benefits from non-uniform coupling. For this reason, we use density evolution including windowed decoding, as detailed in Sec. <ref>. Figure <ref> shows, as an example, the thresholds for windowed decoding for the (5,10,[α,1-α],L=100) and the (10,20,[α,1-α],L=100) SC-LDPC ensembles for four window configurations: W_D∈{10,20} and I∈{3,9}. We see a good agreement between the speed contour lines of Fig. <ref> and the windowed decoding thresholds. Again we can see that for non-uniformly coupled codes and identical window configurations, we can significantly increase the decoding threshold.
§.§ Non-Uniform Coupling with w > 2
We have seen in the previous section that
non-uniform coupling can increase the BP threshold
if we constrain w=2. However, for d_v>5, we have to tolerate a gap to capacity.
In this case, we can relax the constraint on w. In fact, for w>2, non-uniform coupling can be more beneficial as there are more degrees of freedom for optimizing the smoothing vector ν. We numerically show in the following that it results in a faster saturation of the BP threshold to capacity even for small values of w, e.g., w=3.
Consider the DE equation (<ref>) for a random (d_v,d_c,ν,L) SC-LDPC ensemble over a BEC. Let ν=[ν_1,ν_2,1-ν_1-ν_2] with w=dim(ν)=3.
For regular ensembles with asymptotic rate r=1/2 (d_c=2d_v),
we observe that the BP threshold ε^BP(ν) depends on the choice of ν and can get very close to capacity.
We used a grid search with a fine resolution to numerically optimize the BP threshold for the ensembles with d_v∈{4,…,10}. The results are given in Tab. <ref>, where the optimized smoothing distribution is denoted by ν^⋆=[ν_1^⋆,ν_2^⋆,1-ν_1^⋆-ν_2^⋆]. We observe that the BP thresholds almost saturate to capacity (or to ε^MAP, respectively), while the BP thresholds ε^BP(ν=[1/3,1/3,1/3]) of the uniformly coupled ensembles have a gap to capacity which increases for larger d_v. Note that, especially for small d_v, many different choices of ν lead to good thresholds. In that case, we select the optimum ν^⋆ which leads to a good threshold and also yields a small rate-loss Δ. Note that, in contrast to the w=2 case, where the rate-loss was maximal for uniform coupling, it is not hard to show that the rate-loss Δ for w=3 is maximized with ν = [1/2,0,1/2]. It is an interesting open question whether it is possible to construct capacity-achieving codes with a finite w.
§.§ Non-Uniform Coupling with Different Types
Non-uniform coupling is a general concept. So far,
we presented the simplest way of non-uniform coupling in which the edges of all VNs in an SP are randomly connected
according to a distribution ν. Generally, the edges of each VN
can be connected according to a set of distributions.
Let us illustrate the benefits of such coupling by an example. Consider again a coupled LDPC ensemble with w=2 and d_c=2d_v. Inspired by the protograph structure shown in Fig. <ref>, we partition the VNs in each SP into two sets of equal size, called “upper set” and “lower set”. As described in Sec. <ref>, the edges of VNs in the upper set are randomly connected to CNs according to
the “upper” smoothing distribution ν_u=[α_u,1-α_u]. Similarly, the edges of VNs in the lower set are distributed according to the “lower” smoothing distribution ν_l=[α_l,1-α_l]. Therefore, each CN receives two types of BP messages from VNs. Let x_u,z^(t) (resp. x_l,z^(t)) denote the average erasure probability of the BP messages flowing from VNs of the upper set (resp. lower set) in SP z at iteration t. The DE equations then become
y_u,z^(t) = (1-(α_u x_u,z^(t) + (1-α_u) x_u,z-1^(t)))^d_v-1 (1-(α_l x_l,z^(t) + (1-α_l) x_l,z-1^(t)))^d_v,
y_l,z^(t) = (1-(α_u x_u,z^(t) + (1-α_u) x_u,z-1^(t)))^d_v (1-(α_l x_l,z^(t) + (1-α_l) x_l,z-1^(t)))^d_v-1,
x_u,z^(t+1) = ε(1-(α_u y_u,z^(t) + (1-α_u) y_u,z+1^(t)))^d_v-1,
x_l,z^(t+1) = ε(1-(α_l y_l,z^(t) + (1-α_l) y_l,z+1^(t)))^d_v-1,
where y_u,z^(t) (resp. y_l,z^(t)) is the probability that a message sent from a CN in SP z to an upper-set (resp. lower-set) VN is not erased.
Using DE analysis and a rough exhaustive search, we optimized α_u and α_l to find the largest BP threshold for different values of d_v.
The thresholds are summarized in Tab. <ref>. We observe that the thresholds almost saturate to capacity for d_v=6 and d_v=7 with only w=2.
§ NON-UNIFORM COUPLING: PROTOGRAPH ENSEMBLES
As most practical codes are based on protographs, we extend the findings of this paper to protograph-based codes with the elementary building segment of Fig. <ref>-b). In comparison to the random ensembles, there is less room for optimization, as there are only finitely many choices for b_1 and b_2, each requiring a separate DE analysis; the analysis is also slightly more complicated, as the BP messages come from different edge types (multi-edge-type DE).
We computed DE thresholds for all possible protographs based on a simple elementary segment with 2 VNs and 2 CNs for L=100 (r=0.495). In Tab. <ref>, we summarize the best protographs and the respective thresholds that we found for different choices of d_v. Some of the best elementary segments are shown in Fig. <ref>. Up to d_v=6, protographs with b_1=b_2=1 are optimal; interestingly, for d_v>6 the choice b_1=1 and b_2=5 becomes optimal.
In Fig. <ref>, we plot the decoding speeds for the best protographs with d_v∈{4,5,6,7}. We can see that for ε<0.488 the protograph (4,1,1) has the highest decoding speed and thus leads to the smallest decoding complexity, while for ε≥0.488 the protograph (5,1,1) has the highest speed due to its different slope. Using an exhaustive search over all possible elementary protograph segments with 2 VNs and 2 CNs and with d_v≤18, we have verified that these two protographs are indeed the ones yielding the highest overall speeds, and they are good candidates for implementation.
§ ACKNOWLEDGMENTS
The authors would like to acknowledge Rüdiger Urbanke and Shrinivas Kudekar for interesting discussions and suggestions leading to the work in this paper and its presentation.
10
url@samestyle
Lentmaier-ita09
M. Lentmaier, G. P. Fettweis, K. Zigangirov, and D. J. Costello, Jr.,
“Approaching capacity with asymptotically regular LDPC codes,” in
Proc. ITA, 2009.
Kudekar-it11
S. Kudekar, T. Richardson, and R. Urbanke, “Threshold saturation via spatial
coupling: Why convolutional LDPC ensembles perform so well over the
BEC,” IEEE Trans. Inf. Theory, vol. 57, no. 2, Feb 2011.
Kudekar-it13
——, “Spatially coupled ensembles universally achieve capacity under belief
propagation,” IEEE Trans. Inf. Theory, vol. 59, no. 12, 2013.
Olmos-it15
P. M. Olmos and R. Urbanke, “A scaling law to predict the finite-length
performance of spatially-coupled LDPC codes,” IEEE Trans. Inf.
Theory, vol. 61, no. 6, pp. 3164–3184, June 2015.
kudekarISTC
S. Kudekar, C. Méasson, T. Richardson, and R. Urbanke, “Threshold
saturation on BMS channels via spatial coupling,” in Proc. ISTC,
2010.
sanatkar2016increasing
M. R. Sanatkar and H. D. Pfister, “Increasing the rate of spatially-coupled
codes via optimized irregular termination,” in Proc. ISTC, 2016.
Iyengar-wd
A. R. Iyengar, P. H. Siegel, R. L. Urbanke, and J. K. Wolf, “Windowed decoding
of spatially coupled codes,” IEEE Trans. Inf. Theory, 2013.
schmalen2015spatially
L. Schmalen, V. Aref, J. Cho, D. Suikat, D. Rösener, and A. Leven,
“Spatially coupled soft-decision error correction for future lightwave
systems,” J. Lightw. Technol., vol. 33, no. 5, pp. 1109–1116, 2015.
SchmalenSCC13
L. Schmalen and S. ten Brink, “Combining spatially coupled LDPC codes with
modulation and detection,” in Proc. ITG SCC, 2013.
JardelNU
F. Jardel and J. J. Boutros, “Non-uniform spatial coupling,” in Proc.
ITW, Nov. 2014.
noor2015anytime
M. Noor-A-Rahim, K. D. Nguyen, and G. Lechner, “Anytime reliability of
spatially coupled codes,” IEEE Trans. Commun., vol. 63, 2015.
mitchell2015spatially
D. G. Mitchell, M. Lentmaier, and D. J. Costello, “Spatially coupled LDPC
codes constructed from protographs,” IEEE Trans. Inf. Theory,
vol. 61, no. 9, pp. 4866–4889, 2015.
stinner2016waterfall
M. Stinner and P. M. Olmos, “On the waterfall performance of finite-length
SC-LDPC codes constructed from protographs,” IEEE J. Sel. Areas
Commun., vol. 34, no. 2, pp. 345–361, 2016.
kudekar2015wave
S. Kudekar, T. J. Richardson, and R. L. Urbanke, “Wave-like solutions of
general 1-d spatially coupled systems,” IEEE Trans. Inf. Theory,
2015.
aref2013convergence
V. Aref, L. Schmalen, and S. ten Brink, “On the convergence speed of
spatially coupled LDPC ensembles,” in Proc. Allerton Conf., 2013.
el2016velocity
R. El-Khatib and N. Macris, “The velocity of the decoding wave for spatially
coupled codes on BMS channels,” in Proc. ISIT, 2016.
arXiv:1701.07633v2 [math.PR], 26 Jan 2017
Diffusion approximations via Stein's method and time changes
Mikołaj J. Kasprzak
Department of Statistics, University of Oxford, 24-29 St Giles', Oxford OX1 3LB, United Kingdom
We extend the ideas of <cit.> and use Stein's method to obtain a bound on the distance between a scaled time-changed random walk and a time-changed Brownian Motion. We then apply this result to bound the distance between a time-changed compensated scaled Poisson process and a time-changed Brownian Motion.
This allows us to bound the distance between a process whose dynamics resemble those of the Moran model with mutation and a process whose dynamics resemble those of the Wright-Fisher diffusion with mutation upon noting that the former may be expressed as a difference of two time-changed Poisson processes and the diffusive part of the latter may be expressed as a time-changed Brownian Motion.
The method is applicable to a much wider class of examples satisfying the Stroock-Varadhan theory of diffusion approximation (<cit.>).
MSC 2010: Primary 60B10, 60F17; secondary 60J70, 60J65, 60E05, 60E15.
Keywords: Stein's method, functional convergence, time-changed Brownian Motion, Moran model, Wright-Fisher diffusion.
§ INTRODUCTION
In his seminal paper <cit.>, Charles Stein introduced a method for proving normal approximations and obtained a bound on the speed of convergence to the standard normal distribution. He observed that a random variable Z has the standard normal law if and only if EZf(Z)=Ef'(Z) for all smooth functions f. Therefore, if, for a random variable W with mean 0 and variance 1, Ef'(W)-EWf(W) is close to zero for a large class of functions f, then the law of W should be approximately Gaussian. He then proposed that, instead of evaluating |Eh(W)-Eh(Z)| directly for a given function h, one can first find an f=f_h solving the following Stein equation:
f'(w)-wf(w)=h(w)-Eh(Z)
and then find a bound on |Ef'(W)-EWf(W)|. This approach often turns out to be much easier, due to some bounds on the solutions f_h, which can be derived in terms of the derivatives of h. Since then, Stein's method has been significantly developed and extended to approximations by distributions other than normal.
The aim of Stein's method is to find a bound on the quantity |E_ν_nh-E_μh|, where μ is the target (known) distribution, ν_n is the approximating law and h is chosen from a suitable class of real-valued test functions ℋ. The idea is to find an operator 𝒜 acting on a class of real-valued functions such that
(∀ f∈Domain(𝒜) E_ν𝒜f=0) ⟺ ν=μ,
where μ is our target distribution. In the next step, for a given function h∈ℋ, a solution f=f_h to the following Stein equation:
𝒜f=h-E_μh
is sought and its properties studied. Finally, using various mathematical tools (among which the most popular are Taylor expansions in the continuous case, Malliavin calculus, as described in <cit.>, and coupling methods), a bound is sought for the quantity |E_ν_n𝒜f_h|.
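As a concrete illustration of this pipeline in the classical one-dimensional normal case, the Python sketch below (ours, not taken from the papers discussed) solves the Stein equation numerically for a test function h and checks the solution; the choice h(w)=cos(w) is arbitrary.

import numpy as np
from scipy.integrate import quad

h = np.cos
phi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
Nh = quad(lambda t: h(t) * phi(t), -np.inf, np.inf)[0]   # Eh(Z) for Z ~ N(0,1)

def f_h(w):
    """Solution of the Stein equation f'(w) - w f(w) = h(w) - Nh."""
    return np.exp(w**2 / 2) * quad(lambda t: (h(t) - Nh) * np.exp(-t**2 / 2),
                                   -np.inf, w)[0]

# check the ODE at w0 via a numerical derivative
w0, d = 0.7, 1e-5
lhs = (f_h(w0 + d) - f_h(w0 - d)) / (2 * d) - w0 * f_h(w0)
print(lhs, h(w0) - Nh)                                   # the two values agree

# the quantity Stein's method bounds, for a standardized random walk W_n
rng = np.random.default_rng(0)
for n in (4, 16, 64):
    W = rng.choice([-1.0, 1.0], size=(100000, n)).sum(axis=1) / np.sqrt(n)
    print(n, abs(h(W).mean() - Nh))      # gap shrinks with n, up to Monte Carlo error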
An accessible account of the method can be found, for example, in the surveys <cit.> and <cit.> as well as the books <cit.> and <cit.>, which treat the cases of Poisson and normal approximation, respectively, in detail. <cit.> is a database of information and publications connected to Stein's method.
Approximations by infinite-dimensional laws have not been covered in the Stein's method literature very widely, with the notable exceptions of <cit.>, <cit.> and recently <cit.>. We will focus on the ideas taken from <cit.>, which provides bounds on the Brownian Motion approximation of a one-dimensional scaled random walk and some other one-dimensional processes including scaled sums of locally dependent random variables and examples from combinatorics. In the sequel, we show that the approach presented in <cit.> can be extended to time-changes of Brownian Motion, including diffusions in the natural scale.
The most important example we apply our theory to is the approximation of the Moran model with mutation by the Wright-Fisher diffusion with mutation. The former, first introduced in <cit.> as an alternative for the Wright-Fisher model (first formally described in <cit.>) is one of the simplest and most important models of the genetic drift, i.e. the change in the frequencies of alleles in a population. It assumes that the population is divided into two allelic types (A and a) and the frequency of each of the alleles is governed by a birth-death process and an independent mutation process. Specifically, in a population of size n, at exponential rate n(n-1)/2, a pair of genes is sampled uniformly at random. Then one of them is selected at random to die and the other one gives birth to another gene of the same allelic type. In addition, every gene of type a changes its type independently at rate ν_2 and every gene of type A changes its type independently at rate ν_1. The model then looks at the proportion of a-genes in the population.
On the other hand, the Wright-Fisher model is a discrete Markov chain and does not allow for overlapping generations. Specifically, each step represents a generation. In generation k each of the n individuals chooses its parent independently, uniformly at random from the individuals present in generation k-1 and inherits its genetic type. This model also then looks at the proportion of a-genes in the population.
The Moran model turns out to be easier to study mathematically. It may be proved, for instance using the Stroock-Varadhan theory of diffusion approximation (see <cit.>), that it converges weakly to the Wright-Fisher diffusion, which is also a scaling limit of the Wright-Fisher model. We show how to put a bound on the speed of this convergence. The Wright-Fisher diffusion is often used in practice in genetics for inference concerning large populations. It is given by the equation:
dM(t)=γ(M(t))dt+√(M(t)(1-M(t)))dB_t,
where γ:[0,1]→R encompasses mutation. For a discussion of probabilistic models in genetics see <cit.>.
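To fix ideas, the following minimal sketch (in Python; the function names, parameter defaults and the Euler-Maruyama discretisation are our own illustrative choices, not part of either model's definition) simulates one path of the Moran model with mutation, using the jump rates derived later in the proof of Theorem <ref>, and one path of the Wright-Fisher diffusion with γ(x)=ν_2-(ν_1+ν_2)x.

import numpy as np

def moran_path(n, x0, nu1, nu2, T=1.0, rng=None):
    """Gillespie simulation of the proportion of a-genes in the Moran model.
    Up-jumps of size 1/n occur at rate n^2*x*(1-x)/2 + n*nu2*(1-x),
    down-jumps at rate n^2*x*(1-x)/2 + n*nu1*x (cf. the rates in the proofs)."""
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    times, path = [0.0], [x0]
    while t < T:
        up = 0.5 * n**2 * x * (1.0 - x) + n * nu2 * (1.0 - x)
        down = 0.5 * n**2 * x * (1.0 - x) + n * nu1 * x
        total = up + down
        if total == 0.0:                 # absorbed (only possible without mutation)
            break
        t += rng.exponential(1.0 / total)
        x += (1.0 / n) if rng.random() < up / total else -(1.0 / n)
        times.append(min(t, T))
        path.append(x)
    return np.array(times), np.array(path)

def wright_fisher_em(x0, nu1, nu2, T=1.0, steps=10000, rng=None):
    """Crude Euler-Maruyama discretisation of the Wright-Fisher SDE;
    paths are clipped to [0,1], a pragmatic fix rather than an exact scheme."""
    rng = rng or np.random.default_rng()
    dt = T / steps
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        drift = nu2 - (nu1 + nu2) * x[k]
        diffusion = np.sqrt(max(x[k] * (1.0 - x[k]), 0.0))
        step = drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = min(max(x[k] + step, 0.0), 1.0)
    return np.linspace(0.0, T, steps + 1), x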
In Section <ref> we introduce the space of test functions we will find the bounds for. In Section <ref> we present our main results. Theorem <ref> shows how the approach in <cit.> can be extended to the approximation of a scaled, time-changed random walk by a time-changed Brownian Motion. In Theorem <ref> we apply Theorem <ref> to look at the distance between a time-changed Poisson process and a time-changed Brownian Motion. Theorem <ref> shows how this can be extended to find the speed of convergence of a process whose dynamics resemble those of the Moran model with mutation to one whose dynamics resemble those of the Wright-Fisher diffusion with mutation. The bound obtained therein makes it possible to analyse the impact the mutation rates and the number of individuals have on the quality of the approximation and the interplay between those parameters. Section <ref> provides proofs of the results presented in Section <ref> and comments on how the proof of Theorem <ref> may be adapted to find the speed of convergence in other examples satisfying the Stroock-Varadhan theory of diffusion approximation (see <cit.>).
In what follows, · will always denote the sup norm and D=D[0,1]=D([0,1],R) will be the Skorokhod space of càdlàg real-valued functions on [0,1].
§ SPACE M
Let us define:
f_L:=sup_w∈ D[0,1]|f(w)|/1+w^3,
and let L be the Banach space of the continuous functions f:D[0,1]→R such that f_L<∞. We now let M⊂ L consist of the twice Fréchet differentiable functions f, such that:
D^2f(w+h)-D^2f(w)≤ k_fh,
for some constant k_f, uniformly in w,h∈ D[0,1]. By D^kf we mean the k-th Fréchet derivative of f and the k-linear norm B on L is defined to be B=sup_{ h:h=1} |B[h,...,h]|. Note the following:
For every f∈ M, f_M<∞, where:
f_M:= sup_w∈ D[0,1]|f(w)|/1+w^3+sup_w∈ D[0,1]Df(w)/1+w^2+sup_w∈ D[0,1]D^2f(w)/1+w
+sup_w,h∈ D[0,1]D^2f(w+h)-D^2f(w)/h.
Note that for f∈ M it is possible to find a constant K_f satisfying:
A) Df(w)≤Df(w)-Df(0)+Df(0)
MVT≤ wsup_θ∈[0,1]D^2f(θ w)+Df(0)
≤ w[sup_θ∈[0,1](D^2f(θ w)-D^2f(0)+D^2f(0))]+Df(0)
(<ref>)≤ w[k_fsup_θ∈[0,1]θ w+D^2f(0)]+Df(0)
≤ k_fw^2+D^2f(0)(1∨w^2)+Df(0)< K_f(1+w^2);
B) D^2f(w)≤D^2f(w)-D^2f(0)+D^2f(0)
(<ref>)≤ k_fw+D^2f(0)<K_f(1+w);
C) |f(w+h)-f(w)-Df(w)[h]-1/2D^2f(w)[h,h]|≤ K_fh^3,
uniformly in w,h∈ D, where the last inequality follows by Taylor's theorem and (<ref>). Therefore:
f_M= sup_w∈ D|f(w)|/1+w^3+sup_w∈ DDf(w)/1+w^2+sup_w∈ DD^2f(w)/1+w
+sup_w,h∈ DD^2f(w+h)-D^2f(w)/h<∞.
We now let M^0⊂ M be the class of functionals g∈ M such that:
g_M^0:=g_M+sup_w∈ D|g(w)|+sup_w∈ DDg(w)+sup_w∈ DD^2g(w)<∞.
This is Proposition 3.1 of <cit.>:
Suppose that, for each n≥ 1, the random element Y_n of D[0,1] is piecewise constant with intervals of constancy of length at least r_n. Let (Z_n)_n≥ 1 be random elements of D[0,1] converging weakly in D[0,1], with respect to the Skorokhod topology, to a random element Z∈ C([0,1],R). If:
|Eg(Y_n)-Eg(Z_n)|≤ Cτ_ng_M^0
for each g∈ M^0 and if τ_nlog^2(1/r_n)→0, then Y_n⇒ Z in D[0,1] (weakly in the Skorokhod topology).
A similar result holds when Y_n is a continuous-time Markov chain:
Suppose that, for each n≥ 1, the random element Y_n of D[0,1] is a continuous-time Markov chain with mean holding time 1/λ_n→ 0, identically distributed for each state. Let (Z_n)_n≥ 1 be random elements of D[0,1] converging weakly in D[0,1], with respect to the Skorokhod topology, to a random element Z∈ C([0,1],R). Suppose further that:
|Eg(Y_n)-Eg(Z_n)|≤ Cτ_ng_M^0
for each g∈ M^0 and that τ_nlog^2((λ_n)^3)→0. Then Y_n⇒ Z in D[0,1] (weakly in the Skorokhod topology).
We provide a proof of Proposition <ref> in the Appendix.
§ MAIN RESULTS
Theorem <ref> below is an extension of Theorem 1 in <cit.> to the case of a time-changed scaled random walk:
Let X_1,X_2,... be i.i.d. with mean 0, variance 1 and finite third moment. Let s:[0,1]→[0,∞) be a strictly increasing, continuous function with s(0)=0. Define:
Y_n(t)=n^-1/2∑_i=1^⌊ ns(t)⌋ X_i, t∈[0,1]
and let (Z(t),t∈[0,1])=(B(s(t)),t∈[0,1]), where B is a standard Brownian Motion. Suppose that g∈ M.
Then:
|Eg(Y_n)-Eg(Z)|≤ g_M30+54· 5^1/3s(1)/√(πlog 2)n^-1/2√(log (2s(1)n))
+g_Ms(1)(1+(3/2)^3√(2/π)s(1)^3/2)E|X_1|^3n^-1/2
+g_M2160/√(π)(log 2)^3/2n^-3/2(log (2s(1)n))^3/2.
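As an illustration of the two objects compared in Theorem <ref>, the following sketch (our own construction; the choice s(t)=t^2 and all names are arbitrary) draws a path of the time-changed scaled random walk Y_n together with an independent path of Z=B∘ s on a common grid.

import numpy as np

def time_changed_paths(n, s=lambda t: t**2, grid=512, rng=None):
    """Sample Y_n(t)=n^{-1/2}*sum_{i<=floor(n*s(t))} X_i with standard
    normal steps X_i, and an independent copy of Z(t)=B(s(t))."""
    rng = rng or np.random.default_rng()
    m = int(np.ceil(n * s(1.0)))              # number of walk steps needed up to t=1
    steps = rng.standard_normal(m)            # any mean-0, variance-1 law would do
    partial = np.concatenate(([0.0], np.cumsum(steps)))
    t = np.linspace(0.0, 1.0, grid)
    Y = partial[np.floor(n * s(t)).astype(int)] / np.sqrt(n)
    ds = np.diff(s(t), prepend=0.0)           # increments of the time change
    Z = np.cumsum(np.sqrt(np.maximum(ds, 0.0)) * rng.standard_normal(grid))
    return t, Y, Z

Comparing such paths for increasing n gives a visual counterpart of the n^-1/2√(log n)-order rate in the bound above.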
We do not claim that the bound of Theorem <ref> is sharp; it is, however, of the same order as the one obtained in the original setting of <cit.>. The result can also be extended in a straightforward way to instances in which the time change is random and independent of the step sizes of the random walk, by conditioning on the time change.
Theorem <ref> below treats a time-changed Poisson process and can also be extended to random time changes, independent of the Poisson process of interest, by conditioning.
Suppose that P is a Poisson process with rate 1 and S^(n):[0,1]→[0,∞) is a sequence of increasing continuous functions, such that S^(n)(0)=0. Let S:[0,1]→[0,∞) be also increasing and continuous. Let Z(t)=B(S(t)),t∈[0,1] where B is a standard Brownian Motion and
Ỹ_n(t)=P(nS^(n)(t))-nS^(n)(t)/√(n), t∈[0,1].
Then, for all g∈ M:
|Eg(Ỹ_n)-Eg(Z)|≤g_M{(2+27√(2)/2√(π)S(1))√(S-S^(n))+27√(2)/2√(π)S-S^(n)^3/2.
+ n^-1/2[30+54· 5^1/3S(1)/√(πlog 2)√(log (2S(1)n)).
+(1+(3/2)^3√(2/π)S^(n)(1)^3/2)S^(n)(1)(1+2e^-1).+1+log(1+2e^-1)+2log n/loglog(n+2)]
+n^-19√(S^(n)(1))/2(1+3nS^(n)(1))^1/2[4+16701+128(log n)^3/(loglog(n+3))^3]^1/3
+.n^-3/2[2160/√(π)(log 2)^3/2(log (2S(1)n))^3/2+8+33402+256(log n)^3/(loglog(n+3))^3]}.
The bound in Theorem <ref> goes to 0 as n→∞, provided that the time changes S^(n) converge to S uniformly.
Theorem <ref> below gives a bound on the speed of convergence of a process whose dynamics are similar to those of the Moran model with mutation to a process whose dynamics are similar to those of the Wright-Fisher diffusion with mutation. In the Moran model with mutation, in a population of size n, each individual carries a particular gene of one of the two forms: A and a. Each individual has exactly one parent and offspring inherit the genetic type of their parent. Now, at exponential rate n(n-1)/2 a pair of genes is sampled uniformly at random from the population. One of the pair is selected at random to die and the other one splits in two. In addition, every individual of type A changes its type independently at rate ν_2 and every individual of type a changes its type independently at rate ν_1.
Let X_n(t) be the proportion of type a genes in the population at time t∈[0,1] under the Moran model with mutation rates ν_1,ν_2, as described above. Let (X(t),t∈[0,1]) denote the Wright-Fisher diffusion given by:
dX(t)=(ν_2-(ν_1+ν_2)X(t))dt+√(X(t)(1-X(t)))dB_t.
Suppose that:
P[X_n(t+Δ t)-X_n(t)=1/n]=n^2R_1^(n)(t)Δ t
P[X_n(t+Δ t)-X_n(t)=-1/n]=n^2R_-1^(n)(t)Δ t
.
and:
M_n(t)=1/nP_1(n^2R_1^(n)(t))-1/nP_-1(n^2R_-1^(n)(t)),
for t∈[0,1], where P_1 and P_-1 are i.i.d. rate 1 Poisson processes, independent of X_n.
Suppose further that
M(t)=W(∫_0^tX(s)(1-X(s))ds)+∫_0^t(ν_2-(ν_1+ν_2)X(s))ds,
where W is a standard Brownian Motion, independent of X.
Then, for any g∈ M:
|Eg(M_n)-Eg(M)|
≤ g_M{(18+ν_1^1/2+47ν_1^3/4+31ν_1^3/2+ν_2+3ν_2^2+9ν_2^3).
·(1.02· 10^6+425ν_2^1/2+623ν_2+39ν_2^3/2+7ν_2^5/2)
+ (12+3ν_2+3ν_2^2+9ν_2^3)(1.02· 10^6+425ν_1^1/2+623ν_1+39ν_1^3/2+7ν_1^5/2)
.+7(1/2(1+2ν_2)(ν_1+ν_2)+31(ν_1+ν_2)^3)} n^-1/4
+g_M2112[(18+ν_1^1/2+47ν_1^3/4+31ν_1^3/2+ν_2+3ν_2^2+9ν_2^3)(log(n^2/4+ν_2n))^3/2.
+.(12+3ν_2+3ν_2^2+9ν_2^3)(log(n^2/4+ν_1n))^3/2]n^-3.
If ν_1≥ 1 and ν_2≥ 1 then we can write:
|Eg(M_n)-Eg(M)|
≤ g_M[(18+79ν_1^3/2+13ν_2^3)(1.02· 10^6+1094ν_2^5/2).
+(12+15ν_2^3)(1.02· 10^6+1094ν_1^5/2)
+.7(31.5ν_1^3+32.5ν_2^3+ν_1ν_2(1+93ν_1+93ν_2))]n^-1/4
+g_M2112(18+79ν_1^3/2+13ν_2^3)n^-3(log(n^2/4+ν_2n))^3/2
+g_M2112(12+15ν_2^3)n^-3(log(n^2/4+ν_1n))^3/2.
The approximation gets worse as the mutation rates increase and the number of individuals decreases. Should we want to make the mutation rates depend on n and be of the same order, we will require them to be o(n^1/22) in order for the bound to converge to 0 as n→∞.
Note that the Moran model X_n jumps up by 1/n with intensity n^2R_1^(n)(t) and down by 1/n with intensity n^2R_-1^(n)(t), as defined in (<ref>). Using the ideas from <cit.> we can write X_n in the following form:
X_n(t)=1/nP̃_1(n^2R_1^(n)(t))-1/nP̃_-1(n^2R_-1^(n)(t))
for some independent rate 1 Poisson processes P̃_1 and P̃_-1 which are, however, not independent of R_1^(n) and R_-1^(n). In Theorem <ref> we consider a similar process, given by (<ref>), whose definition uses Poisson processes P_1 and P_-1 independent of R_1^(n) and R_-1^(n).
Similarly, the diffusive part of the Wright-Fisher diffusion may be expressed as a time-changed Brownian Motion W̃(∫_0^tX(s)(1-X(s))ds). However, we consider a process given by (<ref>) with the assumption that W is independent of X.
We will appeal to Theorem <ref> to obtain the bounds. The time changes we apply in this case are random and therefore we will first condition on them.
The Moran model M_n converges weakly to the Wright-Fisher diffusion M, which can be proved using, for instance, the Stroock-Varadhan theory of diffusion approximation <cit.>. However, our paper does not provide the tools necessary for treating this convergence with Stein's method and obtaining bounds on its rate.
A key idea used in the proof will be the Donnelly-Kurtz look-down construction coming from <cit.> and described in Chapter 2.10 of <cit.>. The n-particle look-down process is denoted by a vector (ψ_1(t),...,ψ_n(t)) with each index representing a "level" and each of ψ_i's representing the type of the individual at level i at time t. The individual at level k is equipped with an exponential clock with rate k-1, independent of other individuals, and at the times the clock rings it "looks down" at a level chosen uniformly at random from { 1,...,k-1} and adopts the type of the individual at that level. In addition, the type of each individual evolves according to the mutation process. A comparison of the generators of the Moran model and the look-down process shows that, as long as the two are started from the same initial exchangeable condition, they produce the same distribution of types in the population. In addition, it may be shown that the Wright-Fisher diffusion may be represented as the proportion of type a individuals in the population, in which the types are distributed according to the infinite-particle look-down process. The corresponding Moran model is then the proportion of type a individuals among the ones located on the first n levels. Due to exchangeability, we may then describe the Moran model X_n at a fixed time t, as depending on the Wright-Fisher diffusion X, in the following way: nX_n(t)∼Binomial(n,X(t)).
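For later use we record an elementary consequence of this coupling: conditionally on X(s), nX_n(s) is Binomial(n,X(s)), so that
E[X_n(s)|X(s)]=X(s), Var[X_n(s)|X(s)]=X(s)(1-X(s))/n,
and it is precisely these conditional binomial moments that are integrated in the proof of Theorem <ref> when the random time changes of the two processes are compared.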
Our bound in Theorem <ref> is sufficient to conclude that process M_n converges to process M weakly on compact intervals. This follows from Proposition <ref>. Using the notation therein, in this case, τ_n=n^-1/4 and λ_n=n(n-1)/2.
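Indeed, with these choices, τ_nlog^2((λ_n)^3)=9n^-1/4(log(n(n-1)/2))^2→0 as n→∞, so the assumption of Proposition <ref> is satisfied.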
§ SETTING UP STEIN'S METHOD
Let us first define:
A_n(t)=n^-1/2∑_i=1^⌊ ns(1)⌋ Z_i1_[i/n,s(1)](s(t))=n^-1/2∑_i=1^⌊ ns(1)⌋ Z_i1_[s^-1(i/n),1](t),
for Z_ii.i.d∼𝒩(0,1).
In preparation for the proof of Theorem <ref>, we will apply Stein's method to find the distance between A_n and Y_n.
§.§ The Stein equation
We first note that if U_1,U_2,... are i.i.d. Ornstein-Uhlenbeck processes with stationary law 𝒩(0,1), then defining:
W_n(t,u)=n^-1/2∑_i=1^⌊ ns(t)⌋ U_i(u), u≥ 0,t∈[0,1],
we obtain that the law of A_n is stationary for (W_n(·,u))_u≥ 0. Denote the generator of (W_n(·,u))_u≥ 0 by 𝒜_n. By properties of stationary distributions, E_μ𝒜_nf=0 for all f∈Domain(𝒜_n) if and only if μ=ℒ(A_n). Therefore, we can treat
𝒜_nf=g-Eg(A_n)
as our Stein equation.
In the next subsection, for any g from a suitable class of functions, we will find an f satisfying equation (<ref>). Then, in the sequel, we will find a bound on |E𝒜_nf(Y_n)|, which will readily give us a bound on |Eg(Y_n)-Eg(A_n)|.
The generator 𝒜_n of the process (W_n(·,u))_u≥ 0 acts on any f∈ M in the following way:
(𝒜_nf)(w):=-Df(w)[w]+ED^2f(w)[A_n^(2)].
Before we prove this result, we need a lemma:
Letting ℱ_n,u=σ(W_n(·,v),v≤ u), we have:
W_n(·,u+v)-e^-vW_n(·,u)𝒟=σ(v)A_n(·), where σ(v):=√(1-e^-2v).
We first note that for each i≥ 1 we can construct independent standard Brownian Motions B_i such that (U_i(u),u≥ 0)=(e^-uB_i(e^2u),u≥ 0). Then:
W_n(·,u+v)-e^-vW(·,u)= n^-1/2∑_k=1^⌊ ns(·)⌋ U_k(u+v)-e^-vn^-1/2∑_k=1^⌊ ns(·)⌋ U_k(u)
𝒟= n^-1/2e^-(u+v)∑_k=1^⌊ ns(·)⌋[B_k(e^2(u+v))-B_k(e^2u)]
𝒟= n^-1/2σ(v)∑_k=1^⌊ ns(·)⌋Z_k=σ(v)A_n(·).
Note that the semigroup of (W_n(·,u))_u≥ 0, acting on L, is defined by:
(T_n,uf)(w):=E[f(W_n(·,u)|W_n(·,0)=w]
and by Lemma <ref> we readily obtain that:
(T_n,uf)(w)=E[f(we^-u+σ(u)A_n(·)].
We can define the generator by: 𝒜_n=lim_u↘ 0T_n,u-I/u. We also have that for f∈ M:
|(T_n,uf)(w)-f(w)-EDf(w)[σ(u)A_n-w(1-e^-u)].
-.1/2ED^2f(w)[{σ(u)A_n-w(1-e^-u)}^(2)]|
(<ref>)≤ K_fEσ(u)A_n-w(1-e^-u)^3
≤ 4K_f[σ^3(u)EA_n^3+(1-e^-u)^3w^3]
≤ K_3(1+w^3)u^3/2
for a constant K_3 depending only on f, where the last inequality follows from the fact that for u≥ 0, σ^3(u)≤ 3u^3/2 and (1-e^-u)^3≤ u^3/2.
So:
|(T_n,uf-f)(w)+uDf(w)[w]-uED^2f(w)[A_n^(2)]|
≤ |(T_n,uf)(w)-f(w)-EDf(w)[σ(u)A_n-w(1-e^-u)].
-.1/2ED^2f(w)[{σ(u)A_n-w(1-e^-u)}^(2)]|+|σ(u)EDf(w)[A_n]|
+|(u-1+e^-u)Df(w)[w]|+|(σ^2(u)/2-u)ED^2f(w)[A_n^(2)]|
+|(1-e^-u)^2/2D^2f(w)[w^(2)]|+|σ(u)(1-e^-u)ED^2f(w)[A_n,w]|
≤ K_1(1+w^3)u^3/2+|σ(u)EDf(w)[A_n]|+|u-1+e^-u|Df(w)w
+|σ^2(u)/2-u|D^2f(w)EA_n^2
+(1-e^-u)^2/2D^2f(w)w^2+σ(u)(1-e^-u)D^2f(w)wEA_n
(<ref>)≤ K_1(1+w^3)u^3/2+|σ(u)EDf(w)[A_n]|+|u-1+e^-u|K_f(1+w^2)w
+|σ^2(u)/2-u|K_f(1+w)EA_n^2+(1-e^-u)^2/2K_f(1+w)w^2
+σ(u)(1-e^-u)(1+w)wEA_n
≤ 3u^3/2(K_1(1+w^3)+K_f(1+w^2)w+K_f(1+w)EA_n^2.
.+K_f(1+w)w^2+(1+w)wEA_n)+|σ(u)EDf(w)[A_n]|
≤ K_4(1+w^3)u^3/2,
for some constant K_4 depending only on f. The last inequality follows from the fact that:
EDf(w)[A_n]= EDf(w)[n^-1/2∑_k=1^⌊ ns(1)⌋Z_k
1_[s^-1(k/n),1]]
= n^-1/2∑_k=1^⌊ ns(1)⌋Df(w)[1_[s^-1(k/n),1]]E[Z_k]
= 0.
It follows that for any f∈ M:
𝒜_nf(w)=lim_u↘0T_n,uf(w)-f(w)/u=-Df(w)[w]+ED^2f(w)[A_n^(2)].
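As a classical sanity check (this observation goes back to Barbour's generator approach and is not needed for the proofs), the one-dimensional analogue of this operator is the Ornstein-Uhlenbeck generator
𝒜f(x)=-xf'(x)+f''(x), x∈R,
whose stationarity for 𝒩(0,1) yields E[f''(Z)-Zf'(Z)]=0 for Z∼𝒩(0,1); writing g=f', this is exactly Stein's characterising identity E[g'(Z)]=E[Zg(Z)] recalled in the Introduction.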
§.§ Solving Stein's equation
Suppose that g∈ M satisfies Eg(A_n)=0. Then the equation: 𝒜_nf=g
is solved by:
f=ϕ_n(g)=-∫_0^∞ T_n,ugdu
for T_n,u defined by (<ref>).
Furthermore, ϕ_n(g)∈ M and the following inequalities hold:
A) Dϕ_n(g)(w)≤(1+4/3EA_n^2)g_M(1+w^2),
B) D^2ϕ_n(g)(w)≤(1/2+EA_n/3)g_M(1+w),
C) D^2ϕ_n(g)(w+h)-D^2ϕ(g)(w)≤g_M/3h.
The proof will follow the procedure used to prove <cit.>.
Step 1: First, we show that ϕ_n(g)∈ M and that (<ref>) holds. Assume g∈ M and E[g(A_n)]=0.
Note that if, for instance:
|g(w)-g(x)|≤ C_g(1+w^2+x^2)w-x
uniformly in w,x∈ D[0,1] then:
lim_t→∞∫_0^t|T_n,ug(w)|du=lim_t→∞∫_0^t|Eg(we^-u+σ(u)A_n)|du
≤ lim_t→∞[∫_0^t|E[g(we^-u+σ(u)A_n)-g(σ(u)A_n)]|du+∫_0^t|E[g(σ(u)A_n)-g(A_n)]|du]
≤ C_glim_t→∞[∫_0^tE[(1+e^-uw+σ(u)A_n^2+σ^2(u)A_n^2)e^-uw]du.
.+∫_0^tE|(1+(σ^2(u)+1)A_n^2)|(σ(u)-1)A_ndu]
≤ C_glim_t→∞[∫_0^t[ e^-uw+2e^-3uw^3+3σ^2(u)e^-uwEA_n^2]du.
.+∫_0^t(σ(u)-1)E|(1+(σ^2(u)+1)Z^2)|A_ndu]
≤ C(1+w^3),
for some constant C. For such g,
f=ϕ_n(g)=-∫_0^∞ T_n,ugdu
exists.
Note that every g∈ M satisfies condition (<ref>) because for such g:
|g(w)-g(x)|(<ref>)≤ K_gw-x^3+|Dg(x)[w-x]+1/2D^2g(x)[w-x,w-x]|
≤ K_gw-x^3+Dg(x)w-x+1/2D^2g(x)w-x^2
(<ref>)≤ K_gw-x(w-x^2+1+x^2+1/2w-x(1+x))
≤ K_gw-x(2w^2+2x^2+1+x^2+1/2(w+x+wx+x^2))
≤ C_g(1+w^2+x^2)w-x
uniformly in w,x because w≤ 1+w^2, x≤ 1+x^2 and wx≤w^2+x^2.
Now take g∈ M, such that Eg(A_n)=0 and note that for ϕ_n defined in (<ref>) we get:
ϕ_n(g)(w+h)-ϕ_n(g)(w)(<ref>)=-E∫_0^∞[g((w+h)e^-u+σ(u)A_n)-g(we^-u+σ(u)A_n)]du
and so dominated convergence (which can be applied because of (<ref>)) gives:
D^kϕ_n(g)(w)=-E∫_0^∞e^-kuD^kg(we^-u+σ(u)A_n)du, k=1,2.
Also, observe that:
A) Dϕ_n(g)(w)
≤∫_0^∞ e^-uEDg(we^-u+σ(u)A_n)du
(<ref>)≤g_M∫_0^∞ e^-u(1+Ewe^-u+σ(u)A_n^2)du
≤g_M∫_0^∞(e^-u+2w^2e^-3u+2EA_n^2(e^-u-e^-3u))du
≤(1+4/3EA_n^2)g_M(1+w^2),
B) D^2ϕ_n(g)(w)
≤∫_0^∞ e^-2uED^2g(we^-u+σ(u)A_n)du
(<ref>)≤g_M∫_0^∞ e^-2u(1+Ewe^-u+σ(u)A_n)du
≤g_M∫_0^∞ e^-2u(1+e^-uw+σ(u)EA_n)du
≤g_M(1/2+w/3+EA_n/3)
≤(1/2+EA_n/3)g_M(1+w),
C) D^2ϕ_n(g)(w+h)-D^2ϕ_n(g)(w)
≤E∫_0^∞ e^-2uD^2g((w+h)e^-u+σ(u)A_n)-e^-2uD^2g(we^-u+σ(u)A_n)du
g∈ M,(<ref>)≤g_M∫_0^∞ e^-2ue^-uhdu=g_M/3h,
uniformly in g∈ M, which proves (<ref>). It follows by (<ref>) and (<ref>) that ϕ_n(g)∈ M. Thus, ϕ_n(g) is in the domain of the semigroup and of 𝒜_n. Also, similarly, for any t>0, ∫_0^tT_n,ugdu∈ M.
Step 2: We now show that for all t>0 and for all g∈ M:
T_n,tg-g=𝒜_n(∫_0^t T_n,ugdu).
We will prove it by following the steps of the proof of Proposition 1.5 on p. 9 of <cit.>. Observe that for all w∈ D[0,1] and h>0:
1/h[T_n,h-I]∫_0^tT_n,ug(w)du=1/h∫_0^t[T_n,u+hg(w)-T_n,ug(w)]du
= 1/h∫_t^t+hT_n,ug(w)du-1/h∫_0^hT_n,ug(w)du
(<ref>)= 1/h∫_t^t+hE[g(w e^-u+σ(u)A_n)]du-1/h∫_0^hE[g(w e^-u+σ(u)A_n)]du.
Taking h→ 0 on the left-hand side gives 𝒜_n(∫_0^t T_n,ug(w)du) because ∫_0^t T_n,ug(w)du is in the domain of 𝒜_n, as shown in Step 1. In order to analyse the right-hand side note that:
|1/h∫_0^hE[g(w e^-u+σ(u)A_n)]-g(w)du|
MVT≤ 1/h∫_0^hE[w(e^-u-1)+σ(u)A_nsup_c∈ [0,1]Dg(cw+(1-c)(we^-u+σ(u)A_n))]du
≤ g_M/h∫_0^hE[(w(1-e^-u)+σ(u)A_n)(1+3w^2+3w^2e^-2u+3σ^2(u)A_n^2)]du
= g_M/hE{(1+3w^2+3A_n^2)(w(-1+h+cosh(h)-sinh(h))..
.+A_ne^-h(-√(e^2h-1)+e^h(h+log(1+e^-h√(-1+e^2h))))
+3w(w^2-A_n^2)(e^-3h/6(e^h-1)^2(e^h+2))
+.3(w^2A_n-A_n^3)1/3(√(1-e^-2h)-√(e^-6h(e^2h-1)))}→0.
Similarly:
|1/h∫_t^t+hE[g(we^-u+σ(u)A_n)]du-E[g(we^-t+σ(t)A_n)]|→0
Therefore, as h→ 0, the right-hand side of (<ref>) converges to T_n,tg-g, which proves (<ref>).
Step 3: We note that for any h>0 and for any f∈ M:
1/h[T_n,s+hf-T_n,sf]=T_n,s[T_n,h-I/hf]
and therefore for any w∈ D[0,1], by dominated convergence (which can be applied because of (<ref>)):
(d/ds)^+T_n,sf(w) =lim_h↘ 0T_n,s[T_n,h-I/hf(w)]=lim_h↘ 0E[T_n,h-I/hf(we^-s+σ(s)A_n)]
=E[lim_h↘ 0T_n,h-I/hf(we^-s+σ(s)A_n)]=T_n,s𝒜_nf(w)
and, similarly, for s>0, (d/ds)^-T_n,sf=T_n,s𝒜_nf because:
lim_h↘01/-h[T_n,s-hf-T_n,sf](w)-T_n,s𝒜_nf(w)
= lim_h↘0T_n,s-h[(T_n,h-I/h-𝒜_n)f](w)+lim_h↘0(T_n,s-h-T_n,s)𝒜_nf(w)
= lim_h↘ 0E[(T_n,h-I/h-𝒜_n)f(we^-s+h+σ(s-h)A_n)]
+lim_h↘ 0E[𝒜_nf(we^-s+h+σ(s-h)A_n)-𝒜_nf(we^-s+σ(s)A_n)]
(<ref>)= 0
again, by dominated convergence. It can be applied because of (<ref>) and the observation that:
|𝒜_nf(we^-s+h+σ(s-h)A_n)-𝒜_nf(we^-s+σ(s)A_n)|
= |-Df(we^-s+h+σ(s-h)A_n)[we^-s+h+σ(s-h)A_n].
+∑_i=1^nD^2f(we^-s+h+σ(s-h)A_n)[1_[i/n,1]^(2)]
.-Df(we^-s+σ(s)A_n)[we^-s+σ(s)A_n]+∑_i=1^nD^2f(we^-s+σ(s)A_n)[1_[i/n,1]^(2)]|
≤ f_M(1+we^-s+h+σ(s-h)A_n^2)we^-s+h+σ(s-h)A_n
+nf_M(1+we^-s+h+σ(s-h)A_n)
+f_M(1+we^-s+σ(s)A_n)we^-s+σ(s)A_n+f_Mn(1+we^-s+σ(s)A_n)
≤ f_M(1+2w^2e^-2s+2+2σ^2(s-1)A_n^2)(we^-s+1+σ(s-1)A_n)
+nf_M(1+we^-s+1+σ(s-1)A_n)
+f_M(1+we^-s+σ(s)A_n)we^-s+σ(s)A_n+nf_M(1+we^-s+σ(s)A_n)
for all h∈[0,1].
Thus, for all w∈ D[0,1] and s>0: d/dsT_n,sf(w)=T_n,s𝒜_nf(w) and so, by the Fundamental Theorem of Calculus:
T_n,rf(w)-f(w)=∫_0^rT_n,s𝒜_nf(w)ds.
Applying (<ref>) to f=∫_0^tT_n,ugdu we obtain:
T_n,r∫_0^tT_n,ug(w)du-∫_0^tT_n,ug(w)du=∫_0^rT_n,s𝒜_n(∫_0^tT_n,ug(w)du)ds
Now, we take t→∞. We apply dominated convergence, which is allowed because of (<ref>) and the following bound for h_t,u(w)=∫_0^tT_n,ug(w)du:
|𝒜_nh_t,u(w)|
≤ ∫_0^tE|e^-uDg(we^-u+σ(u)A_n)[w]|du
+∑_i=1^n∫_0^tE|e^-2uD^2g(we^-u+σ(u)A_n)[1_[i/n,1]^(2)]|du
≤ ∫_0^∞E|e^-uDg(we^-u+σ(u)A_n)[w]|du
+∑_i=1^n∫_0^∞E|e^-2uD^2g(we^-u+σ(u)A_n)[1_[i/n,1]^(2)]|du
(<ref>)A,B≤ (1+4/3EA_n^2)g_M(1+w^2)w+n(1/2+EA_n/3)g_M(1+w),
where the first inequality follows again by dominated convergence applied because of (<ref>) in order to exchange integration and differentiation in a way similar to (<ref>). As a result, we obtain:
T_n,r∫_0^∞ T_n,ug(w)du-∫_0^∞ T_n,ug(w)du =∫_0^r T_n,slim_t→∞𝒜_n(∫_0^tT_n,ug(w)du)ds
(<ref>)=-∫_0^r T_n,sg(w)ds.
Now, dividing both sides by r and taking r→ 0, we obtain:
𝒜_n(∫_0^∞ T_n,ug(w)) =-lim_r→ 01/r∫_0^r T_n,sg(w)ds
=-lim_r→ 0[1/r∫_0^rEg(we^-s+σ(s)A_n)ds]
(<ref>)=-g(w),
which finishes the proof.
It is an easy consequence of Propositions <ref> and <ref> that for g∈ M:
𝒜_nϕ_n(g)(w)=-Dϕ_n(g)(w)[w]+ED^2ϕ_n(g)(w)[A_n^(2)].
§ PROOFS OF THEOREMS <REF>, <REF> AND <REF>
§.§ Proof of Theorem <ref>
§.§.§ Discretisation of Brownian Motion
Let A_n be as in (<ref>).
Now, note that we can first realise B and then set A_n(t)=B(⌊ ns(t)⌋/n) for t∈[0,1] so that:
sup_t∈[0,1]|A_n(t)-Z(t)| =sup_t∈[0,1]|B(⌊ ns(t)⌋/n)-B(s(t))|=sup_t∈[0,s(1)]|B(t)-B(⌊ nt⌋/n)|.
By Lemma 3 of <cit.> we get:
A) EA_n-Z ≤E[sup_t,s∈[0,s(1)],|t-s|≤1/n|B(t)-B(s)|]
≤5/√(π)·6/√(log 2)n^-1/2√(log (2s(1)n))
B) EA_n-Z^2 ≤E[(sup_t,s∈[0,s(1)],|t-s|≤1/n|B(t)-B(s)|)^2]
≤5/2·(6/√(log 2))^2n^-1log (2s(1)n)
C) EA_n-Z^3 ≤E[(sup_t,s∈[0,s(1)],|t-s|≤1/n|B(t)-B(s)|)^3]
≤5/√(π)·(6/√(log 2))^3n^-3/2(log (2s(1)n))^3/2
and therefore we obtain, for any g∈ M:
|Eg(A_n)-Eg(Z)|
MVT≤ E[sup_c∈[0,1]Dg((1-c)Z+cA_n)Z-A_n]
≤ g_ME[sup_c∈[0,1](1+Z+c(A_n-Z)^2)Z-A_n]
≤ g_ME[(1+2Z^2+2Z-A_n^2)Z-A_n]
Hölder≤ g_M{EZ-A_n+2EZ-A_n^3+2(EZ^3)^2/3(EA_n-Z^3)^1/3}
(<ref>)≤ g_M{5/√(π)·6/√(log 2)n^-1/2√(log (2s(1)n)).
+10/√(π)·(6/√(log 2))^3n^-3/2(log (2s(1)n))^3/2
.+2(EZ^3)^2/3(5/√(π))^1/3·6/√(log 2)n^-1/2√(log (2s(1)n))}
Doob's L^3≤ g_M(30/√(πlog 2)+2· 5^1/3· 6/π^1/6√(log 2)((3/2)^3· 2√(2/π))^2/3s(1))n^-1/2√(log (2s(1)n))
+g_M10/√(π)·(6/√(log 2))^3n^-3/2(log (2s(1)n))^3/2
= g_M30+54· 5^1/3s(1)/√(πlog 2)n^-1/2√(log (2s(1)n))
+g_M2160/√(π)(log 2)^3/2n^-3/2(log (2s(1)n))^3/2.
§.§.§ Applying Stein's method
Let g∈ M and g_n=g-E[g(A_n)]. Let f_n=ϕ_n(g_n), as in (<ref>). First, note that:
EDf_n(Y_n)[Y_n] =n^-1/2∑_j=1^⌊ ns(1)⌋EDf_n(Y_n)[X_j1_[s^-1(j/n),1]].
We now let Y_n^j=n^-1/2∑_k≠ j X_k1_[s^-1(k/n),1]=Y_n-n^-1/2X_j1_[s^-1(j/n),1] and observe that, by Taylor's theorem:
| n^-1/2EX_jDf_n(Y_n)[1_[s^-1(j/n),1]].-E{ n^-1/2X_jDf_n(Y_n^j)[1_[s^-1(j/n),1]].
..+n^-1(X_j)^2D^2f_n(Y_n^j)[(1_[s^-1(j/n),1])^(2)]}|
= |E[ n^-1/2X_jDf_n(Y_n^j+n^-1/2X_j1_[j/n,1])[1_[s^-1(j/n),1]]..
- n^-1/2X_jDf_n(Y_n^j)[1_[s^-1(j/n),1]]
.. -n^-1(X_j)^2D^2f_n(Y_n^j)[(1_[s^-1(j/n),1])^(2)]]|
(<ref>)C)≤ n^-3/2/6g_n_ME|X_j|^3
because, clearly, 1_[s^-1(j/n),1]=1. Also, in the last inequality we have used the fact that X_j is independent of Y_n^j. We can now sum (<ref>) over j=1,2,...,⌊ ns(1)⌋ and use the fact that X_j's are independent of Y_n^j's and that X_j's have mean 0 and variance 1 to obtain:
|EDf_n(Y_n)[Y_n]-n^-1∑_j=1^⌊ ns(1)⌋D^2f_n(Y_n^j)[(1_[s^-1(j/n),1])^(2)]|≤n^-1/2/6s(1)g_n_ME|X_1|^3.
We notice that for 𝒜_n defined in Proposition <ref>, using Remark <ref>, we obtain:
|E𝒜_nf_n(Y_n)|=|EDf_n(Y_n)[Y_n]-ED^2f_n(Y_n)[A_n^(2)]|
≤ |EDf_n(Y_n)[Y_n]-n^-1∑_j=1^⌊ ns(1)⌋ED^2f_n(Y^j_n)[(1_[s^-1(j/n),1])^(2)]|
+n^-1|∑_j=1^⌊ ns(1)⌋E{ D^2f_n(Y_n)[(1_[s^-1(j/n),1])^(2)]-D^2f_n(Y_n^j)[(1_[s^-1(j/n),1])^(2)]}|
≤ n^-1/2/6s(1)g_n_ME|X_1|^3
+n^-1|∑_j=1^⌊ ns(1)⌋E{ D^2f_n(Y_n^j+n^-1/2X_j1_[s^-1(j/n),1])[(1_[s^-1(j/n),1])^(2)]..
-..D^2f_n(Y_n^j)[(1_[s^-1(j/n),1])^(2)]}|
(<ref>)C)≤ n^-1/2/6g_n_ME|X_1|^3+n^-1g_n_M/3∑_j=1^⌊ ns(1)⌋n^-1/2EX_j1_[s^-1(j/n),1]
≤ n^-1/2/6s(1)g_n_ME|X_1|^3+n^-1/2/3s(1)g_n_MEX_11_[s^-1(j/n),1]
≤ n^-1/2s(1)g_n_M/2E|X_1|^3.
The last inequality follows by Jensen's inequality:
E|X_1|≤√(E|X_1|^2)=1=(E|X_1|^2)^3/2≤E|X_1|^3.
Now, note that this gives:
|Eg(Y_n)-Eg(A_n)|=|Eg_n(Y_n)|=|E𝒜_nf_n(Y_n)|
≤ n^-1/2/2s(1)g_n_ME|X_1|^3
≤ n^-1/2/2s(1)(g_M+Eg(A_n))E|X_1|^3
≤ n^-1/2/2s(1)(2+EA_n^3)g_ME|X_1|^3.
Also, recall that, by Doob's L^p inequality (see, e.g. Theorem 1.7 of Chapter II in <cit.>), if M is a right-continuous martingale then for every t>0 and p>1:
E[(sup_s∈[0,t]|M_s|)^p]≤(p/p-1)^pE|M_t|^p.
Note that A_n is an integrable process, adapted to its natural filtration and for any t∈ [s^-1(m/n),s^-1((m+1)/n)) and r∈ [s^-1(l/n),s^-1((l+1)/n)) and r<t:
E[A_n(t)|{ A_n(u):u≤ r}.] =n^-1/2E[.∑_k=1^mZ_k|Z_1,...,Z_l]
=n^-1/2∑_k=1^lZ_k + n^-1/2E[∑_k=l+1^m Z_k]
= A_n(r)
and so A_n is a right-continuous martingale. Applying Doob's L^3 inequality to it yields:
EA_n^3≤(3/2)^3E|A_n(1)|^3≤(3/2)^3· 2√(2/π)s(1)^3/2,
because A_n(1)∼𝒩(0,⌊ ns(1)⌋/n).
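The Gaussian constant used here is the standard third absolute moment: for any σ>0,
E|𝒩(0,σ^2)|^3=σ^3E|𝒩(0,1)|^3=σ^3·2/√(2π)∫_0^∞ x^3e^-x^2/2dx=2√(2/π)σ^3,
applied with σ^2=⌊ ns(1)⌋/n≤ s(1).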
Therefore (<ref>) gives:
|Eg(Y_n)-Eg(A_n)|≤ n^-1/2(1+(3/2)^3·√(2/π)s(1)^3/2)g_Ms(1)E|X_1|^3.
Combining this with (<ref>):
|Eg(Y_n)-Eg(Z)|≤ g_M30+54· 5^1/3s(1)/√(πlog 2)n^-1/2√(log (2s(1)n))
+g_Ms(1)(1+(3/2)^3√(2/π)s(1)^3/2)E|X_1|^3n^-1/2
+g_M2160/√(π)(log 2)^3/2n^-3/2(log (2s(1)n))^3/2,
which proves Theorem <ref>.
§.§ Proof of Theorem <ref>
Note that (P(⌊ nS^(n)(t)⌋),t∈[0,1]) can be expressed in the following way:
P(⌊ nS^(n)(t)⌋)-⌊ nS^(n)(t)⌋ =∑_i=1^⌊ nS^(n)(t)⌋ X_i,
where (X_i+1)'s are i.i.d. Poisson(1).
Therefore, we can express (Ỹ_n(t),t∈[0,1]) in the following way:
Ỹ_n(t)= n^-1/2{∑_i=1^⌊ nS^(n)(t)⌋ X_i+P(nS^(n)(t))-P(⌊ nS^(n)(t)⌋)-(nS^(n)(t)-⌊ nS^(n)(t)⌋)}.
We also define:
Y_n(t)=n^-1/2∑_i=1^⌊ nS^(n)(t)⌋ X_i.
Note that |nS^(n)(t)-⌊ nS^(n)(t)⌋|≤ 1 for all t≥ 0. Also, observe that for all t≥ 0:
|P(nS^(n)(t))-P(⌊ nS^(n)(t)⌋)|≤ P(⌊ nS^(n)(t)⌋+1)-P(⌊ nS^(n)(t)⌋).
By the independence of increments of a Poisson process:
A) EỸ_n-Y_n≤ n^-1/2[1+E[max_1≤ i≤ nP̅_i]]
B) EỸ_n-Y_n^3≤ n^-3/2[4+4E[max_1≤ i≤ nP̅_i^3]],
where P̅_1,⋯,P̅_ni.i.d∼Poisson(1). Using the trick from <cit.>, we note that, by Jensen's inequality applied to function exp(xloglog (n+2)):
exp(loglog (n+2)·E[max_1≤ i≤ nP̅_i]) ≤E[exp(loglog (n+2)·max_1≤ i≤ nP̅_i)]
=E[max_1≤ i≤ nexp(loglog (n+2)·P̅_i)]
≤ n E[exp(loglog (n+2)·P̅_1)]
=nexp(log (n+2)-1)
≤(1+2e^-1)n^2
and by Jensen's inequality applied to function exp(x^1/3loglog (n+3)), which is convex for x≥8/(loglog (n+3))^3:
exp(loglog (n+3)·{E[.max_1≤ i≤ nP̅_i^3|(max_1≤ i≤ nP̅_i^3)≥8/(loglog (n+3))^3]}^1/3)
≤ E[.exp(loglog (n+3)·max_1≤ i≤ nP̅_i)|(max_1≤ i≤ nP̅_i^3)≥8/(loglog (n+3))^3]
≤ nE[exp(loglog (n+3)·P̅_1)|(max_1≤ i≤ nP̅_i^3)≥8/(loglog (n+3))^3.]
≤ nexp(log (n+3)-1)/P[(max_1≤ i≤ nP̅_i)≥2/(loglog (n+3))]
≤ nexp(log (n+3)-1)/P[P̅_1≥ 2/loglog 4]
≤ (1+3e^-1)n^2/(1-1957/(720e)).
Now, combining (<ref>), (<ref>) and (<ref>), we obtain:
A) EỸ_n-Y_n≤ n^-1/2[1+log(1+2e^-1)+2log n/loglog(n+2)]
B) EỸ_n-Y_n^3≤ n^-3/2[4+4(log(1+3e^-1/1-1957/720e)+2log n/loglog(n+3))^3+32/(loglog(n+3))^3]
≤ n^-3/2[4+16701+128(log n)^3/(loglog(n+3))^3].
We also note that:
[EỸ_n^3]^2/3 Doob≤ n^-1(3/2)^2[E|P(nS^(n)(1))-nS^(n)(1)|^3]^2/3
≤(3/2)^2n^-1[E|P(nS^(n)(1))-nS^(n)(1)|^4]^1/2
=(3/2)^2n^-1/2√(S^(n)(1))(1+3nS^(n)(1))^1/2.
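The last equality uses the fourth central moment of a Poisson variable; since all cumulants of P(λ) are equal to λ (a standard fact),
E(P(λ)-λ)^4=κ_4+3κ_2^2=λ+3λ^2,
so that with λ=nS^(n)(1) we get n^-1[E|P(λ)-λ|^4]^1/2=n^-1/2√(S^(n)(1))(1+3nS^(n)(1))^1/2.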
Then, for every g∈ M:
|Eg(Y_n)-Eg(Ỹ_n)|MVT≤E[sup_c∈[0,1]Dg((1-c)Ỹ_n+cY_n)Y_n-Ỹ_n]
≤ g_ME[sup_c∈[0,1](1+Ỹ_n+c(Y_n-Ỹ_n)^2)Y_n-Ỹ_n]
≤ g_ME[(1+2Ỹ_n^2+2Y_n-Ỹ_n^2)Y_n-Ỹ_n]
Hölder≤ g_M{EY_n-Ỹ_n+2EY_n-Ỹ_n^3+2(EỸ_n^3)^2/3(EY_n-Ỹ_n^3)^1/3}
(<ref>),(<ref>)≤ g_M{ n^-1/2[1+log(1+2e^-1)+2log n/loglog(n+2)]+n^-3/2[8+33402+256(log n)^3/(loglog(n+3))^3].
.+9/2n^-1√(S^(n)(1))(1+3nS^(n)(1))^1/2[4+16701+128(log n)^3/(loglog(n+3))^3]^1/3}.
Let A_n(t)=n^-1/2∑_i=1^⌊ nS^(n)(t)⌋ Z_i,t∈[0,1] for Z_ii.i.d∼𝒩(0,1). By (<ref>):
|Eg(Y_n)-Eg(A_n)| ≤ n^-1/2(1+(3/2)^3√(2/π)S^(n)(1)^3/2)S^(n)(1)g_ME|X_1|^3
≤ n^-1/2(1+(3/2)^3√(2/π)S^(n)(1)^3/2)S^(n)(1)g_M(1+2e^-1)
because X_1𝒟=P(1)-1.
Now let Ã_n(t)=n^-1/2∑_i=1^⌊ nS(t)⌋ Z_i,t∈[0,1]. Then:
A) EA_n-Ã_n=n^-1/2E[sup_t∈[0,1]|∑_i=⌊ nS(t)∧ S^(n)(t)⌋+1^⌊ nS(t)∨ S^(n)(t)⌋ Z_i|]
=n^-1/2E[sup_t∈[0,1]|∑_i=1^⌊ nS(t)∨ S^(n)(t)⌋-(⌊ nS(t)∧ S^(n)(t)⌋+1) Z_i|]
Doob,Jensen≤ 2n^-1/2√(E|∑_i=1^sup_t∈[0,1](⌊ nS(t)∨ S^(n)(t)⌋-(⌊ nS(t)∧ S^(n)(t)⌋+1)) Z_i|^2)≤ 2√(S-S^(n))
B) EA_n-Ã_n^3=n^-3/2E[sup_t∈[0,1]|∑_i=⌊ nS(t)∧ S^(n)(t)⌋+1^⌊ nS(t)∨ S^(n)(t)⌋ Z_i|^3]
=n^-3/2E[sup_t∈[0,1]|∑_i=1^⌊ nS(t)∨ S^(n)(t)⌋-(⌊ nS(t)∧ S^(n)(t)⌋+1) Z_i|^3]
Doob≤(3/2)^3n^-3/2E[|∑_i=1^sup_t∈[0,1](⌊ nS(t)∨ S^(n)(t)⌋-(⌊ nS(t)∧ S^(n)(t)⌋+1)) Z_i|^3]
≤ 2√(2/π)(3/2)^3S-S^(n)^3/2
C) (EÃ_n^3)^2/3Doob≤ 2n^-1(3/2)^2π^-1/3(S(1)^3/2)^2/3=9/2π^1/3n^-1S(1).
Therefore:
|Eg(A_n)-Eg(Ã_̃ñ)|MVT≤E[sup_c∈[0,1]Dg((1-c)Ã_n+cA_n)A_n-Ã_n]
≤ g_ME[sup_c∈[0,1](1+Ã_n+c(A_n-Ã_n)^2)A_n-Ã_n]
≤ g_ME[(1+2Ã_n^2+2A_n-Ã_n^2)A_n-Ã_̃ñ]
Hölder≤ g_M{EA_n-Ã_n+2EA_n-Ã_n^3+2(EÃ_n^3)^2/3(EA_n-Ã_n)^1/3}
(<ref>)≤ g_M{ 2√(S-S^(n))+27√(2)/2√(π)S-S^(n)^3/2+27√(2)/2√(π)S(1)√(S-S^(n))}.
By (<ref>) we get for Z=B∘ S:
|Eg(Ã_n)-Eg(Z)|≤g_M30+54· 5^1/3S(1)/√(πlog 2)n^-1/2√(log (2S(1)n))
+g_M2160/√(π)(log 2)^3/2n^-3/2(log (2S(1)n))^3/2.
Theorem <ref> now follows from (<ref>), (<ref>), (<ref>), (<ref>).
§.§ Proof of Theorem <ref>
Note that X_n jumps up by 1/n with intensity 1/2n^2X_n(t)(1-X_n(t))+nν_2(1-X_n(t)) and down by 1/n with intensity 1/2n^2X_n(t)(1-X_n(t))+nν_1X_n(t). To see this observe that a jump occurs with intensity n(n-1)/2 and it is an up-jump if the first gene chosen was of type a, the second of type A and the one with type A died (which happens with probability 1/2· X_n(t)· n(1-X_n(t))/(n-1)) or if the first one chosen was of type A, the second of type a and the type A gene died (which happens with probability 1/2· (1-X_n(t))· nX_n(t)/(n-1)). In addition, there are n(1-X_n(t)) genes of type A and each of them mutates into type a at rate ν_2. Hence:
P[X_n(t+Δ t)-X_n(t)=1/n]=1/2n^2X_n(t)(1-X_n(t))Δ t+nν_2(1-X_n(t))Δ t
P[X_n(t+Δ t)-X_n(t)=-1/n]=1/2n^2X_n(t)(1-X_n(t))Δ t+nν_1X_n(t)Δ t.
Therefore:
M_n(t)= P_1(n^2R^(n)_1(t))-n^2R^(n)_1(t)/n-P_-1(n^2R^(n)_-1(t))-n^2R^(n)_-1(t)/n
+∫_0^t(ν_2-(ν_1+ν_2)M_n(s))ds,
where P_1,P_-1 are i.i.d. Poisson processes with rate 1, independent of X_n, and
R_1^(n)(t):=∫_0^t (1/2X_n(s)+ν_2/n)(1-X_n(s))ds
R_-1^(n)(t):=∫_0^t (1/2(1-X_n(s))+ν_1/n)X_n(s)ds
for t∈[0,1].
Also let:
R_1(t)=R_-1(t):=∫_0^t1/2X(s)(1-X(s))ds
I_n(t):=∫_0^t(ν_2-(ν_1+ν_2)X_n(s))ds
I(t):=∫_0^t(ν_2-(ν_1+ν_2)X(s))ds
for t∈[0,1].
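Before proceeding, we record the elementary identity that reconciles the compensated representation (<ref>) of M_n with its definition (<ref>): by the definitions of R_1^(n) and R_-1^(n),
n^2R_1^(n)(t)-n^2R_-1^(n)(t)=n^2∫_0^t[ν_2/n(1-X_n(s))-ν_1/nX_n(s)]ds=n∫_0^t(ν_2-(ν_1+ν_2)X_n(s))ds=nI_n(t),
so subtracting the two compensators in (<ref>) produces exactly the drift term I_n(t).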
Let us denote Z_1=B_1∘ R_1, Z_-1=B_-1∘ R_-1, where B_1 and B_-1 are i.i.d. standard Brownian Motions, independent of X and:
Y_n^1(·):=P_1(n^2R_1^(n)(·))-n^2R_1^(n)(·)/n, Y_n^-1(·):=P_-1(n^2R_-1^(n)(·))-n^2R_-1^(n)(·)/n.
Now, for any g∈ M:
|Eg(M_n)-Eg(M)|
≤ |E[E[g(Y_n^1-Y_n^-1+I_n)-g(Z_1-Y_n^-1+I_n)|Y_n^-1,I_n,R_1^(n),R_-1^(n),R_1]].
+E[E[g(Z_1-Y_n^-1+I_n)-g(Z_1-Z_-1+I_n)|Z_1,I_n,R_1^(n),R_-1^(n),R_1]]
.+E[E[g(Z_1-Z_-1+I_n)-g(Z_1-Z_-1+I)|Z_1,Z_-1,R_1]]|
= |E[E[g^(1)(Y_n^1)-g^(1)(Z_1)|Y_n^-1,I_n,R_1^(n),R_1]].
+E[E[g^(-1)(Y_n^-1)-g^(-1)(Z_-1)|Z_1,I_n,R_-1^(n),R_-1]]
.+E[E[g^(0)(I_n)-g^(0)(I)|Z_1,Z_-1,R_1,R_-1]]|,
where g^(1)(x)=g(x-Y_n^-1+I_n), g^(-1)(x)=g(Z_1-x+I_n), g^(0)(x)=g(Z_1-Z_-1+x). Note that, given R_1 and R_1^(n), Y_n^1 and Z_1 are independent of Y_n^-1 and I_n. Similarly, given R_-1 and R_-1^(n), Y_n^-1 and Z_-1 are independent of Z_1 and I_n. Also, given R_1 and R_-1, I_n and I are independent of Z_1 and Z_-1.
We now apply Theorem <ref> to obtain:
A) E[E[g^(1)(Y_n^1)-g^(1)(Z_1)|Y_n^-1,I_n,R_1^(n),R_1]]
≤ E[E[g^(1)_M|R_1,R_1^(n)].
·{(2+27√(2)/2√(π)R_1(1))√(R_1-R_1^(n))+27√(2)/2√(π)R_1-R_1^(n)^3/2.
+ n^-1[30+54· 5^1/3R_1(1)/√(πlog 2)√(log (2R_1(1)n^2)).+(1+(3/2)^3√(2/π)R_1^(n)(1)^3/2)
· R_1^(n)(1)(1+2e^-1)
.+1+log(1+2e^-1)+4log n/loglog(n^2+2)]
+n^-29√(R_1^(n)(1))/2(1+3n^2R_1^(n)(1))^1/2[4+16701+1024(log n)^3/(loglog(n^2+3))^3]^1/3
+.n^-3[2160/√(π)(log 2)^3/2(log (2R_1(1)n^2))^3/2+8+33402+2048(log n)^3/(loglog(n^2+3))^3]}
≤ E[E[g^(1)_M|R_1,R_1^(n)]{(2+27√(2)/16√(π))√(R_1-R_1^(n))+27√(2)/2√(π)R_1-R_1^(n)^3/2..
+ n^-1[30+27/4· 5^1/3/√(πlog 2)√(log (n^2/4)).+(1+(3/2)^3√(2/π)(1/8+ν_2/n)^3/2)
·.(1/8+ν_2/n)(1+2e^-1)+1+log(1+2e^-1)+4log n/loglog(n^2+2)]
+n^-29√(1/8+ν_2/n)/2(1+3n^2/8+3nν_2)^1/2[4+16701+1024(log n)^3/(loglog(n^2+3))^3]^1/3
+..n^-3[2160/√(π)(log 2)^3/2(log (n^2/4+ν_2n))^3/2+8+33402+2048(log n)^3/(loglog(n^2+3))^3]}]
B) |E(E{.E[.g^(-1)(Y_n^-1)-g^(-1)(Z_-1)|Z_1,I_n]|R_-1,R_-1^(n)})|
≤ E[E[.g^(-1)_M.|R_-1,R_-1^(n)].
·{(2+27√(2)/2√(π)R_-1(1))√(R_-1-R_-1^(n))+27√(2)/2√(π)R_-1-R_-1^(n)^3/2.
+ n^-1[30+54· 5^1/3R_-1(1)/√(πlog 2)√(log (2R_-1(1)n^2))+(1+(3/2)^3√(2/π)R_-1^(n)(1)^3/2).
·. R_-1^(n)(1)(1+2e^-1)+1+log(1+2e^-1)+4log n/loglog(n^2+2)]
+n^-29√(R_-1^(n)(1))/2(1+3n^2R_-1^(n)(1))^1/2[4+16701+1024(log n)^3/(loglog(n^2+3))^3]^1/3
+.n^-3[2160/√(π)(log 2)^3/2(log (2R_-1(1)n^2))^3/2+8+33402+2048(log n)^3/(loglog(n^2+3))^3]}
≤ E[E[g^(1)_M|R_-1,R_-1^(n)]{(2+27√(2)/16√(π))√(R_-1-R_-1^(n))+27√(2)/2√(π)R_-1-R_-1^(n)^3/2..
+ n^-1[30+27/4· 5^1/3/√(πlog 2)√(log (n^2/4)).+(1+(3/2)^3√(2/π)(1/8+ν_1/n)^3/2)
.·(1/8+ν_1/n)(1+2e^-1)+1+log(1+2e^-1)+4log n/loglog(n^2+2)]
+n^-29√(1/8+ν_1/n)/2(1+3n^2/8+3nν_1)^1/2[4+16701+1024(log n)^3/(loglog(n^2+3))^3]^1/3
+..n^-3[2160/√(π)(log 2)^3/2(log (n^2/4+ν_1n))^3/2+8+33402+2048(log n)^3/(loglog(n^2+3))^3]}],
where we have used the fact that R_1(1),R_-1(1)≤1/8 and R_1^(n)≤1/8+ν_2/n, R_-1^(n)≤1/8+ν_1/n. We also note that:
|E[E[g^(0)(I_n)-g^(0)(I)|Z_1,Z_-1,R_1,R_-1]]|
≤ E[E[g^(0)_M|R_1,R_-1][I_n-I+2I_n-I^3+2II_n-I]]
≤ E[E[g^(0)_M|R_1,R_-1][I_n-I+2I_n-I^3+2ν_2I_n-I]].
Now note that:
A) E[.sup_w,h∈ DD^2g(w+h-Y_n^-1+I_n)-D^2g(w-Y_n^-1+I_n)/h|R_1,R_1^(n),R_-1^(n)]
≤ sup_x,y∈ DD^2g(x+y)-D^2g(x)/y
B) E[.sup_w∈ DD^2g(w-Y_n^-1+I_n)/1+w|R_1,R_1^(n),R_-1^(n)]
≤ E[.sup_w∈ DD^2g(w-Y_n^-1+I_n)/1+w-Y_n^-1+I_n·1+w-Y_n^-1+I_n/1+w|R_1,R_1^(n),R_-1^(n)]
Doob≤ (sup_w∈ DD^2g(w)/1+w)[1+2√(E[.(Y_n^-1(1))^2| R_1,R_1^(n),R_-1^(n)])+ν_2]
= (sup_w∈ DD^2g(w)/1+w)[1+2√(R_-1^(n)(1))+ν_2]
≤ (1+2√(1/8+ν_1/n)+ν_2)(sup_w∈ DD^2g(w)/1+w)
C) E[.sup_w∈ DDg(w-Y_n^-1+I_n)/1+w^2|R_1,R_1^(n),R_-1^(n)]
≤ E[.sup_w∈ DDg(w-Y_n^-1+I_n)/1+w-Y_n^-1+I_n^2·1+w-Y_n^-1+I_n^2/1+w^2|R_1,R_1^(n),R_-1^(n)]
Doob≤ 3(sup_w∈ DDg(w)/1+w^2)[1+4E[.(Y_n^-1(1))^2|R_1,R_1^(n),R_-1^(n)]+ν_2^2]
= 3(sup_w∈ DDg(w)/1+w^2)[1+4R_-1^n(1)+ν_2^2]
≤ (9/2+12ν_1/n+3ν_2^2)(sup_w∈ DDg(w)/1+w^2)
D) E[.sup_w∈ D|g(w-Y_n^-1)|/1+w^3|R_1,R_1^(n), R_-1^(n)]
≤ E[.sup_w∈ D|g(w-Y_n^-1+I_n)|/1+w-Y_n^-1+I_n^3·1+w-Y_n^-1+I_n^3/1+w^3|R_1,R_1^(n),R_-1^(n)]
Doob≤ 9(sup_w∈ D|g(w)|/1+w^3)[1+27/8E[.|Y_n^-1(1)|^3|R_1,R_1^(n),R_-1^(n)]+ν_2^3]
≤ 9(sup_w∈ D|g(w)|/1+w^3)[1+27/8(3(R_-1^(n)(1))^2+n^-1R_-1^(n)(1))^3/4+ν_2^3]
≤ (9+243/8·(3(1/8+ν_1/n)^2+1/8n+ν_1/n^2)^3/4+9ν_2^3)(sup_w∈ D|g(w)|/1+w^3).
Therefore:
E[g^(1)_M|R_1,R_1^(n),R_-1^(n)]
≤ [9+2√(1/8+ν_1/n)+12ν_1/n+243/8·(3/64+6ν_1+1/8n+ν_1+ν_1^2/n^2)^3/4+ν_2+3ν_2^2+9ν_2^3]g_M
Similarly:
A) E[.sup_w,h∈ DD^2g(Z_1-(w+h)+I_n)-D^2g(Z_1-w+I_n)/h|R_1,R_1^(n),R_-1^(n)]
≤ sup_x,y∈ DD^2g(x+y)-D^2g(x)/y
B) E[.sup_w∈ DD^2g(Z_1-w+I_n)/1+w|R_1,R_1^(n),R_-1^(n)]
≤ E[.sup_w∈ DD^2g(Z_1-w+I_n)/1+Z_1-w+I_n·1+Z_1-w+I_n/1+w|R_1,R_1^(n),R_-1^(n)]
Doob≤ (sup_w∈ DD^2g(w)/1+w)[1+2√(E[.(Z_1(1))^2| R_1,R_1^(n),R_-1^(n)])+ν_2]
= (sup_w∈ DD^2g(w)/1+w)[1+2√(R_1(1))+ν_2]
≤ (1+√(2)/2+ν_2)(sup_w∈ DD^2g(w)/1+w)
C) E[.sup_w∈ DDg(Z_1-w+I_n)/1+w^2|R_1,R_1^(n),R_-1^(n)]
≤ E[.sup_w∈ DDg(Z_1-w+I_n)/1+Z_1-w+I_n^2·1+Z_1-w+I_n^2/1+w^2|R_1,R_1^(n),R_-1^(n)]
Doob≤ 3(sup_w∈ DDg(w)/1+w^2)[1+4E[.(Z_1(1))^2|R_1,R_1^(n),R_-1^(n)]+ν_2^2]
= 3(sup_w∈ DDg(w)/1+w^2)[1+4R_1(1)+ν_2^2]≤(9/2+3ν_2^2)(sup_w∈ DDg(w)/1+w^2)
D) E[.sup_w∈ D|g(Z_1-w+I_n)|/1+w^3|R_1,R_1^(n), R_-1^(n)]
≤ E[.sup_w∈ D|g(Z_1-w+I_n)|/1+Z_1-w+I_n^3·1+Z_1-w+I_n^3/1+w^3|R_1,R_1^(n),R_-1^(n)]
Doob≤ 9(sup_w∈ D|g(w)|/1+w^3)[1+27/8E[.|Z_1(1)|^3|R_1,R_1^(n),R_-1^(n)]+ν_2^3]
≤ 9(sup_w∈ D|g(w)|/1+w^3)[1+27√(2)/4√(π)R_1(1)^3/2+ν_2^3]
≤ (9+243/64√(π)+9ν_2^3)(sup_w∈ D|g(w)|/1+w^3).
Therefore:
E[g^(-1)_M|R_1,R_1^(n),R_-1^(n)]≤(9+243/64√(π)+ν_2+3ν_2^2+9ν_2^3)g_M.
Also:
A) E[.sup_w,h∈ DD^2g(Z_1-Z_-1+(w+h))-D^2g(Z_1-Z_-1+w)/h|R_1,R_1^(n),R_-1^(n)]
≤ sup_x,y∈ DD^2g(x+y)-D^2g(x)/y
B) E[.sup_w∈ DD^2g(Z_1-Z_-1+w)/1+w|R_1,R_1^(n),R_-1^(n)]
≤ E[.sup_w∈ DD^2g(Z_1-Z_-1+w)/1+Z_1-Z_-1+w·1+Z_1-Z_-1+w/1+w|R_1,R_1^(n),R_-1^(n)]
Doob≤ (sup_w∈ DD^2g(w)/1+w)[1+2√(E[.(Z_1(1)-Z_-1(1))^2| R_1,R_1^(n),R_-1^(n)])]
= (sup_w∈ DD^2g(w)/1+w)[1+2√(2R_1(1))]
≤ 2(sup_w∈ DD^2g(w)/1+w)
C) E[.sup_w∈ DDg(Z_1-Z_-1+w)/1+w^2|R_1,R_1^(n),R_-1^(n)]
≤ E[.sup_w∈ DDg(Z_1-Z_-1+w)/1+Z_1-Z_-1+w^2·1+Z_1-Z_-1+w^2/1+w^2|R_1,R_1^(n),R_-1^(n)]
Doob≤ 2(sup_w∈ DDg(w)/1+w^2)[1+4E[.(Z_1(1)-Z_-1(1))^2|R_1,R_1^(n),R_-1^(n)]]
= 2(sup_w∈ DDg(w)/1+w^2)[1+8R_1(1)]
≤ 4(sup_w∈ DDg(w)/1+w^2)
D) E[.sup_w∈ D|g(Z_1-Z_-1+w)|/1+w^3|R_1,R_1^(n), R_-1^(n)]
≤ E[.sup_w∈ D|g(Z_1-Z_-1+w)|/1+Z_1-Z_-1+w^3·1+Z_1-Z_-1+w^3/1+w^3|R_1,R_1^(n),R_-1^(n)]
Doob≤ 4 (sup_w∈ D|g(w)|/1+w^3)[1+27/8E[.|Z_1(1)-Z_-1(1)|^3|R_1,R_1^(n),R_-1^(n)]]
≤ 4(sup_w∈ D|g(w)|/1+w^3)[1+27√(2)/4√(π)(2R_1(1))^3/2]
≤ (4+27√(2)/8√(π))(sup_w∈ D|g(w)|/1+w^3).
So:
E[g^(0)_M|R_1,R_1^(n),R_-1^(n)]≤(4+27√(2)/8√(π))g_M.
Now, the Moran model and the Wright-Fisher diffusion can be coupled using the Donnelly-Kurtz look-down construction (see the discussion below Theorem <ref>). In this construction first the Wright-Fisher diffusion X is realised and then the Moran model X_n is constructed by describing nX_n(s) as a Binomial(n,X(s)) random variable. Note that:
A) E√(R_1-R_1^(n))
≤ E√(∫_0^1 |1/2X(s)(1-X(s))-(1/2X_n(s)+ν_2/n)(1-X_n(s))|ds)
Jensen≤ √(∫_0^1 E|1/2X(s)(1-X(s))-(1/2X_n(s)+ν_2/n)(1-X_n(s))|ds)
≤ {∫_0^1 (1/2E[E[|X_n(s)-X(s)||X(s)]]+1/2E[E[|X_n^2(s)-X^2(s)||X(s)]]..
..+ν_2/nE[E[|1-X_n(s)||X(s)]])ds}^1/2
≤ {∫_0^1 (1/2E[√(Var[.X_n(s)|X(s)])]+1/2E[√(Var[.X^2_n(s)|X(s)])]..
..+1/2E[X(s)(1-X(s))/n]+ν_2/nE[1-X(s)])ds}^1/2
= {∫_0^1 (1/2E[√(X(s)(1-X(s))/n)]+1/2E[(X(s)(1-7X(s)+6nX(s))/n^3....
+X(s)(12X^2(s)-20nX^2(s)+4n^2X^2(s)-6X^3(s)+10nX^3(s)/n^3
....-8X^4(s)/n)^1/2]+E[(X(s)+2ν_2)(1-X(s))/2n])ds}^1/2
≤ √(1/4n^-1/2+1/2√(13n^-3+16n^-2+4n^-1)+1+2ν_2/2n^-1)
B) ER_1-R_1^(n)^3/2
≤ E[(∫_0^1 |1/2X(s)(1-X(s))-(1/2X_n(s)+ν_2/n)(1-X_n(s))|ds)^3/2]
Hölder≤ E[∫_0^1 |1/2X(s)(1-X(s))-(1/2X_n(s)+ν_2/n)(1-X_n(s))|^3/2ds]
Jensen≤ √(3)∫_0^1{√(2)/4E[.E|X_n(s)-X(s)|^3/2|X(s)].
+√(2)/4E[.E|X_n^2(s)-X^2(s)|^3/2|X(s)]
.+ν_2^3/2/n^3/2E[E[|1-X_n(s)|^3/2|X(s)]]} ds
Jensen≤ √(3)∫_0^1{√(2)/4E[(Var[.X_n(s)|X(s)])^3/4].
+1/2E[.E|X_n^2(s)-(X(s)(1-X(s))/n+X^2(s))|^3/2|X(s)]
+1/2E[(X(s)(1-X(s))/n)^3/2]
.+ν_2^3/2/n^3/2E[((1-X(s))X(s)/n+(1-X(s))^2)^3/4]} ds
≤ √(3)∫_0^1{√(2)/4E[(Var[.X_n(s)|X(s)])^3/4]+(1/16+ν_2^3/2)n^-3/2.
.+1/2E[.E|X_n^2(s)-(X(s)(1-X(s))/n+X^2(s))|^3/2|X(s)]} ds
Jensen≤ √(3)∫_0^1{√(2)/4E[(Var[.X_n(s)|X(s)])^3/4].
.+1/2E[(Var[.X_n^2(s)|X(s)])^3/4]+(1/16+ν_2^3/2)n^-3/2} ds
= √(3)∫_0^1{√(2)/4E[(X(s)(1-X(s))/n)^3/4]+1/2E[(X(s)(1-7X(s))/n^3...
+X(s)(6nX(s)+12X^2(s)-20nX^2(s)+4n^2X^2(s)-6X^3(s))/n^3
...+X(s)(10nX^3(s)-8n^2X^3(s))/n^3)^3/4]+(1/16+ν_2^3/2)n^-3/2} ds
≤ √(3)(√(2)/32n^-3/4+1/2(13n^-3+16n^-2+4n^-1)^3/4+(1/16+ν_2^3/2)n^-3/2)
C) E√(R_-1-R_-1^(n))
≤ E√(∫_0^1 |1/2X(s)(1-X(s))-(1/2(1-X_n(s))+ν_1/n)X_n(s)|ds)
Jensen≤ √(∫_0^1 E|1/2X(s)(1-X(s))-(1/2(1-X_n(s))+ν_1/n)X_n(s)|ds)
≤ {∫_0^1 (1/2E[E[|X_n(s)-X(s)||X(s)]]+1/2E[E[|X_n^2(s)-X^2(s)||X(s)]]..
..+ν_1/nE[E[X_n(s)|X(s)]])ds}^1/2
≤ {∫_0^1 (1/2E[√(Var[.X_n(s)|X(s)])]+1/2E[√(Var[.X^2_n(s)|X(s)])]..
..+1/2E[X(s)(1-X(s))/n]+ν_1/nE[X(s)])ds}^1/2
= {∫_0^1 (1/2E[√(X(s)(1-X(s))/n)]+1/2E[(X(s)(1-7X(s)+6nX(s))/n^3....
+X(s)(12X^2(s)-20nX^2(s)+4n^2X^2(s)-6X^3(s)+10nX^3(s))/n^3
....-8X^4(s)/n)^1/2]+E[X(s)(1-X(s)+2ν_1)/2n])ds}^1/2
≤ √(1/4n^-1/2+1/2√(13n^-3+16n^-2+4n^-1)+1+2ν_1/2n^-1)
D) ER_-1-R_-1^(n)^3/2
≤ E[(∫_0^1 |1/2X(s)(1-X(s))-(1/2(1-X_n(s))+ν_1/n)X_n(s)|ds)^3/2]
Hölder≤ E[∫_0^1 |1/2X(s)(1-X(s))-(1/2(1-X_n(s))+ν_1/n)X_n(s)|^3/2ds]
Jensen≤ √(3)∫_0^1{√(2)/4E[.E|X_n(s)-X(s)|^3/2|X(s)].
+√(2)/4E[.E|X_n^2(s)-X^2(s)|^3/2|X(s)]
.+ν_1^3/2/n^3/2E[E[|X_n(s)|^3/2|X(s)]]} ds
Jensen≤ √(3)∫_0^1{√(2)/4E[(Var[.X_n(s)|X(s)])^3/4].
.+1/2E[.E|X_n^2(s)-(X(s)(1-X(s))/n+X^2(s))|^3/2|X(s)].
+1/2E[(X(s)(1-X(s))/n)^3/2]
.+ν_1^3/2/n^3/2E[((1-X(s))X(s)/n+X^2(s))^3/4]} ds
≤ √(3)∫_0^1{√(2)/4E[(Var[.X_n(s)|X(s)])^3/4]+(1/16+ν_1^3/2)n^-3/2.
.+1/2E[.E|X_n^2(s)-(X(s)(1-X(s))/n+X^2(s))|^3/2|X(s)]} ds
Jensen≤ √(3)∫_0^1{√(2)/4E[(Var[.X_n(s)|X(s)])^3/4]+1/2E[(Var[.X_n^2(s)|X(s)])^3/4].
+.(1/16+ν_1^3/2)n^-3/2} ds
= √(3)∫_0^1{√(2)/4E[(X(s)(1-X(s))/n)^3/4]+1/2E[(X(s)(1-7X(s))/n^3...
+X(s)(6nX(s)+12X^2(s)-20nX^2(s)+4n^2X^2(s)-6X^3(s))/n^3
...+X(s)(10nX^3(s)-8n^2X^3(s))/n^3)^3/4]+(1/16+ν_1^3/2)n^-3/2} ds
≤ √(3)(√(2)/32n^-3/4+1/2(13n^-3+16n^-2+4n^-1)^3/4+(1/16+ν_1^3/2)n^-3/2).
Also:
A) EI-I_n≤∫_0^1(ν_1+ν_2)E[E[|X_n(s)-X(s)||X(s)]]ds
≤ ∫_0^1(ν_1+ν_2)E[√(Var[X_n(s)|X(s)])]ds
≤ ∫_0^1(ν_1+ν_2)E[√(X(s)(1-X(s))/n)]ds≤1/2(ν_1+ν_2)n^-1/2
B) EI-I_n^3Hölder≤∫_0^1E|(ν_1+ν_2)(X_n(s)-X(s))|^3ds
≤ (ν_1+ν_2)^3∫_0^1[E[(X_n(s)-X(s))^4]]^3/4ds
≤ (ν_1+ν_2)^3∫_0^1 E[(X(s)(1-7X(s)+7nX(s)+12X^2(s)-18nX^2(s)/n^4..
..+X(s)(6n^2X^2(s)-6X^3(s)+11nX^3(s)-6n^2X^3(s)+n^3X^3(s))/n^4)^3/4]ds
≤ (ν_1+ν_2)^3 (13n^-4+18n^-3+6n^-2+n^-1)^3/4.
We now combine (<ref>),(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) to obtain:
|Eg(M)-Eg(M_n)|
≤ g_M[9+2√(1/8+ν_1/n)+12ν_1/n+243/8·(3/64+6ν_1+1/8n+ν_1+ν_1^2/n^2)^3/4+ν_2+3ν_2^2+9ν_2^3]
·{(2+27√(2)/16√(π))√(1/4n^-1/2+1/2√(13n^-3+16n^-2+4n^-1)+1+2ν_2/2n^-1).
+27√(2)/2√(π)√(3)(√(2)/32n^-3/4+1/2(13n^-3+16n^-2+4n^-1)^3/4+(1/16+ν_2^3/2)n^-3/2)
+ n^-1[30+27/4· 5^1/3/√(πlog 2)√(log (n^2/4))+(1+(3/2)^3√(2/π)(1/8+ν_2/n)^3/2).
·(1/8+ν_2/n)(1+2e^-1).+1+log(1+2e^-1)+4log n/loglog(n^2+2)]
+n^-29√(1/8+ν_2/n)/2(1+3/8n^2+3ν_2n)^1/2[4+16701+1024(log n)^3/(loglog (n^2+3))^3]^1/3
+.n^-3[2160/√(π)(log 2)^3/2(log (n^2/4+ν_2n))^3/2+8+33402+2048(log n)^3/(loglog (n^2+3))^3]}
+g_M(9+243/64√(π)+ν_2+3ν_2^2+9ν_2^3)
·{(2+27√(2)/16√(π))√(1/4n^-1/2+1/2√(13n^-3+16n^-2+4n^-1)+1+2ν_1/2n^-1).
+27√(2)/2√(π)√(3)(√(2)/32n^-3/4+1/2(13n^-3+16n^-2+4n^-1)^3/4+(1/16+ν_1^3/2)n^-3/2)
+ n^-1[30+27/4· 5^1/3/√(πlog 2)√(log (n^2/4))+(1+(3/2)^3√(2/π)(1/8+ν_1/n)^3/2).
·(1/8+ν_1/n)(1+2e^-1).+1+log(1+2e^-1)+4log n/loglog(n^2+2)]
+n^-29√(1/8+ν_1/n)/2(1+3/8n^2+3ν_1n))^1/2[4+16701+1024(log n)^3/(loglog (n^2+3))^3]^1/3
.+n^-3[2160/√(π)(log 2)^3/2(log (n^2/4+ν_1n))^3/2+8+33402+2048(log n)^3/(loglog(n^2+3))^3]}
+g_M(4+27√(2)/8√(π))
·[1/2(1+2ν_2)(ν_1+ν_2)n^-1/2+2(ν_1+ν_2)^3 (13n^-4+18n^-3+6n^-2+n^-1)^3/4]
≤ g_M{(18+ν_1^1/2+47ν_1^3/4+31ν_1^3/2+ν_2+3ν_2^2+9ν_2^3).
·(1.02· 10^6+425ν_2^1/2+623ν_2+39ν_2^3/2+7ν_2^5/2)
+ (12+3ν_2+3ν_2^2+9ν_2^3)(1.02· 10^6+425ν_1^1/2+623ν_1+39ν_1^3/2+7ν_1^5/2)
.+7(1/2(1+2ν_2)(ν_1+ν_2)+31(ν_1+ν_2)^3)} n^-1/4
+g_M2112[(18+ν_1^1/2+47ν_1^3/4+31ν_1^3/2+ν_2+3ν_2^2+9ν_2^3)(log(n^2/4+ν_2n))^3/2.
+.(12+3ν_2+3ν_2^2+9ν_2^3)(log(n^2/4+ν_1n))^3/2]n^-3.
The term of order n^-1/4 appearing in the bound obtained in Theorem <ref> is unexpected. It comes from the comparison of the two time changes R_1^(n) and R_1 applied at certain points in the proof to the Poisson process and to Brownian Motion respectively.
One would expect n^-1√(log n^2) to be closer to the true speed of convergence. This is because our process M_n can be expressed as a difference of two scaled Poisson processes with parameters of order n^2, which resemble scaled random walks stopped at the n^2-th step. Order n^-1√(log n^2) would therefore be in line with the results presented in <cit.> in the context of scaled random walks and indeed with Theorem <ref>.
We therefore conjecture that our bound is not sharp; a method avoiding the comparison of the aforementioned time changes would, however, be needed to improve it.
A strategy similar to the one used in the proof of Theorem <ref> may be used to obtain bounds on the distance between other continuous-time Markov chains and diffusions. A key ingredient in the proof is, however, a way of coupling the two for any fixed time.
§ APPENDIX: PROOF OF PROPOSITION <REF>
Note that the proof of Proposition 3.1 of <cit.> readily applies in this case up to and excluding (3.4) and it suffices to prove that lim inf_n→∞P[Y_n∈ B]≥P[Z∈ B] for all sets B of the form B=⋂_1≤ l≤ LB_l, where B_l={ w∈ D:w-s_l<γ_l}, s_l∈ C([0,1],R) and γ_l is such that P[Z∈∂ B_l]=0.
We will condition on the fact that the minimum holding time (interval of constancy of Y_n) is of length greater than r_n=λ_n^-3. It follows from Theorems 2.1 and 2.2 of Chapter 5 in <cit.> that if we condition on the number of holding times being equal to i, their lengths are distributed uniformly over the simplex A_i={ (x_1,...,x_i):x_j≥ 0, ∑_j=1^i x_j≤ 1}. Note that the probability of the minimum of them being greater than or equal to r_n is (1-ir_n)^i if i≤ 1/r_n and 0 otherwise. This is because Vol(A_i)=1/i! and Vol({ (x_1,...,x_i):x_j≥ r_n, ∑_j=1^i x_j≤ 1})=(1-ir_n)^i/i!. Therefore:
P[minimal waiting time≥ r_n]
= ∑_i=1^∞P[minimal waiting time≥ r_n|#waiting times=i]P[#waiting times=i]
= ∑_i=1^⌊λ_n^3⌋(1-iλ_n^-3)^ie^-λ_n(λ_n)^i-1/(i-1)!→ 1.
To see this note the following:
A)∑_i=⌈λ_n^5/4⌉^⌊λ_n^3⌋(1-iλ_n^-3)^ie^-λ_n(λ_n)^i-1/(i-1)! ≤ e^-λ_n(λ_n^3-λ_n^5/4)(1-λ_n^-7/4)^λ_n^5/4λ_n^λ_n^5/4-1/(⌈λ_n^5/4⌉-1)!
≤λ_n^2⌈λ_n^5/4⌉/e^λ_n·λ_n^-1/8⌈λ_n^5/4⌉+9/8⌈λ_n^9/8⌉→0
B)∑_i=1^⌈λ_n^5/4⌉-1(1-iλ_n^-3)^ie^-λ_n(λ_n)^i-1/(i-1)!≥ (1-λ_n^-7/4)^⌈λ_n^5/4⌉e^-λ_n∑_i=1^⌈λ_n^5/4⌉-1(λ_n)^i-1/(i-1)!→1,
where the convergence in B) holds since (1-λ_n^-7/4)^⌈λ_n^5/4⌉→ 1, e^-λ_n∑_i=1^∞(λ_n)^i-1/(i-1)!=1 and:
e^-λ_n∑_i=⌈λ_n^5/4⌉^∞(λ_n)^i-1/(i-1)!≤ e^-λ_nλ_n^⌈λ_n^5/4⌉/⌈λ_n^5/4⌉!·⌈λ_n^5/4⌉+1/⌈λ_n^5/4⌉+1-λ_n→0
for instance, by Proposition A.2.3(ii) of <cit.>.
Furthermore, note that for g_l,n^* defined by (3.6) in <cit.>:
lim inf_n→∞E[∏_l=1^L g^*_l,n(Y_n)]
= lim inf_n→∞E[.∏_l=1^L g^*_l,n(Y_n)|minimal waiting time≥ r_n]P[minimal waiting time≥ r_n]
+lim inf_n→∞E[.∏_l=1^L g^*_l,n(Y_n)|minimal waiting time< r_n]P[minimal waiting time< r_n]
= lim inf_n→∞E[.∏_l=1^L g^*_l,n(Y_n)|minimal waiting time≥ r_n]P[minimal waiting time≥ r_n]
because:
0≤ lim inf_n→∞E[.∏_l=1^L g^*_l,n(Y_n)|minimal waiting time< r_n]P[minimal waiting time< r_n]
≤ lim inf_n→∞P[minimal waiting time< r_n](<ref>)=0.
Following the same steps as in <cit.>, we obtain:
lim inf_n→∞P[Y_n∈ B]≥lim inf_n→∞P[Y_n∈ B and minimal waiting time≥ r_n]
≥lim inf_n→∞E[.∏_l=1^L g^*_l,n(Y_n)|minimal waiting time≥ r_n]P[minimal waiting time≥ r_n]
(<ref>),(<ref>)≥lim inf_n→∞{E[∏_l=1^L g^*_l,n(Z_n)]-Cτ_n∏_l=1^L g^*_l,n_M^0}
≥lim inf_n→∞{E[∏_l=1^L g^*_l,n(Z_n)]-C”τ_np_n^2(ϵγ)^-2η_n^-3}
Fatou≥E[lim inf_n→∞∏_l=1^L g^*_l,n(Z_n)]≥P[⋂_1≤ l≤ L(Z-s_l<γ_l(1-θ))]
.
§ ACKNOWLEDGEMENTS
The author would like to thank Professor Gesine Reinert, Professor Alison Etheridge and Professor Andrew Barbour for many helpful discussions and Professor Gesine Reinert for constructive comments on the early versions of the paper. The author is also grateful to Dr Sebastian Vollmer for pointing out a mistake in an early version of the proof of Proposition <ref>.
|
http://arxiv.org/abs/1701.07635v1 | 20170126100839 | Precise predictions for charged Higgs boson production | [
"Maria Ubiali"
] | hep-ph | [
"hep-ph"
] |
§ INTRODUCTION
The detection of a charged Higgs boson would inexorably point to a broader Higgs sector than the one predicted by the Standard Model (SM). Thus, its discovery would be an unmistakable sign of the presence of new physics. For this reason
extensive searches have been carried out by the ATLAS and CMS collaborations at the LHC. The analyses performed at the centre-of-mass energy of 7 TeV <cit.>, 8 TeV <cit.> and more recently 13 TeV <cit.> set stringent limits on the parameter space of the models featuring the presence of charged scalars. Experimental searches so far focussed on the detection of a light charged Higgs, with mass below the top quark mass (typically m_H^±<160 GeV), or of a heavy charged Higgs, with mass above the top quark mass (typically m_H^±>200 GeV). In the light-mass range the main charged Higgs production mechanism is via the production of top-antitop pairs, with the (anti)top quark decaying into a positively(negatively) charged Higgs and (anti)bottom quark.
This region is suitably described by the top-antitop production cross section multiplied by the branching ratio of the top decay, see for example Refs. <cit.>.
In the heavy-mass region, the charged Higgs bosons are mostly produced in association with top (anti)quark, as we discuss in more details in Section <ref>. In the intermediate region both mechanisms contribute and their interference must be consistently taken into account, as it is illustrated in Section <ref>.
In this contribution, we first summarise the main recent developments in the precise calculation of the total and differential cross sections for the production of a heavy charged Higgs boson. We then review recent progress in the precise computation of the inclusive cross section in the intermediate-mass range.
Although charged Higgs bosons appear in several BSM scenarios, here we focus on the simplest extension of the Standard Model, namely on a two-Higgs doublet model (2HDM), in which two isospin doublets are introduced
to break the SU(2)× U(1) symmetry, leading to the existence of five physical Higgs
bosons, two of which are charged particles (H^±).
Imposing flavour conservation, there are four possible ways
to couple the SM fermions to the two Higgs doublets <cit.>. Each of these four ways
gives rise to rather different phenomenological scenarios. The results reviewed in this contribution are displayed for a type-II scenario, but are easily generalisable to all other scenarios, by suitably rescaling the couplings of the charged Higgs with bottom and top quarks.
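For orientation, in one common normalisation the charged-Higgs coupling to third-generation quarks in the type-II scenario takes the form (signs and factors of √2 differ between references, so this expression should be taken as indicative only)
ℒ_H^+tb = g/(√2 m_W) V_tb H^+ t̅(m_t cotβ P_L + m_b tanβ P_R)b + h.c.,
and predictions for the other types are obtained by rescaling the cotβ and tanβ factors accordingly; in type-I models, for instance, both terms scale with cotβ.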
§ HEAVY CHARGED HIGGS BOSON PRODUCTION
Heavy charged Higgs bosons have mass larger than the top-quark mass and are dominantly produced in association with a top quark. Such a process, featuring bottom quarks in the initial state, can
be described in QCD either in a four-flavor (4FS) or five-flavor scheme (5FS). In the former,
the bottom quark mass is considered on the same footings as the other hard scales of the process
and bottom quarks do not contribute to the proton wave-function. They can only be generated as massive final states at the level of the short-distance cross section. A representative tree-level Feynman diagram in the 4FS is depicted in the right panel of Fig. <ref>.
Instead, in five-flavour scheme, the bottom quark mass is considered
to be a much smaller scale than the hard scales involved in the process and
bottom quarks are treated on the same footing as all other massless partons, thus the tree-level diagram is initiated by a bottom-quark.
Next-to-leading order (NLO) calculations for the total cross sections in the 5FS <cit.>, including super-symmetric corrections, were performed more than fifteen years ago.
Threshold resummation effects have also been computed up to NNLL accuracy <cit.>.
The NLO calculation for the total cross sections in the four-flavor scheme is more involved, featuring an additional massive final state in the leading-order matrix element. It was carried out about ten years ago <cit.> and it consistently includes electro-weak and super-symmetric corrections.
In Ref. <cit.> an up-to-date comparison of the next-to-leading-order total cross section in the 4FS and 5FS was presented. A judicious choice of factorisation scale, μ=(m_t+m_H^±)/k with k in the range 4-6, motivated by the study in Ref. <cit.>, brings the two predictions much closer to each other and reconciles them within the estimated theoretical uncertainties due to missing higher-order corrections, parton distribution functions and physical input parameters. A four- and five-flavour scheme matched prediction using the Santander Matching weighted average <cit.> is provided for the interpretation of current and future experimental searches
for heavy charged Higgs bosons at the LHC. The predictions have been recently updated by using the most recent PDF sets and parameter settings in the Higgs Cross Section Working Group Yellow Report 4 <cit.>. The results of the most recent comparison and matching are displayed in Fig. <ref>.
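As a practical illustration, the Santander-matched prediction is a simple weighted average of the two scheme results. A minimal sketch is given below; the weight w=ln(m_H^±/m_b)-2 follows the original matching proposal for bottom-quark-initiated Higgs production, which we assume to carry over to the charged-Higgs case, and the default bottom-mass value is only a placeholder assumption.

import math

def santander_matched(sigma_4fs, sigma_5fs, m_hpm, m_b=4.58):
    """Santander-matched cross section: weighted average of the 4FS and 5FS
    results with weight w = ln(m_H/m_b) - 2; scheme uncertainties can be
    combined with the same weights."""
    w = math.log(m_hpm / m_b) - 2.0
    return (sigma_4fs + w * sigma_5fs) / (1.0 + w)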
The four- and five-flavor scheme
computations of the charged Higgs production cross sections could be consistently matched, as it has been recently done in the case of bottom-fusion-initiated Higgs production in Refs. <cit.>.
Instead of an interpolation between the four and five-flavor scheme results by mean of a weighted average, as in the case of the Santander matching, one could use a more systematic approach which preserves the perturbative accuracy of both computations by expanding out the 5FS computation in powers of the strong coupling α_s, and replacing the terms which also appear in the 4FS computation with their massive-scheme counterparts. The result would then retain the accuracy of both 4FS and 5FS: at the massive level, the fixed-order accuracy corresponding to the NLO, and at the massless level, the logarithmic accuracy of the starting five-flavor scheme computation (NLL). This is a direction to be pursued in future studies.
As far as predictions at the fully differential level including parton shower (PS) effects are concerned, they were made available a few years ago in the POWHEG <cit.> and MC@NLO formalisms <cit.>. Both computations were performed in the five-flavor scheme.
Differential calculations in the four-flavor scheme were made available for the first time in Ref. <cit.>, where fully-differential 4FS results were presented using
MG5_aMC@NLO <cit.> together with Herwig++ <cit.>
or Pythia8 <cit.>.
The availability of a fully differential NLO+PS calculation in the 4FS allows a detailed comparison to the one in the 5FS.
Interestingly, Ref. <cit.> shows that a shower scale reduced by a factor of four with respect to the default one in MG5_aMC@NLO improves the matching between parton-shower and fixed-order results at large transverse momenta. Moreover, the reduction of the matching-scale choice also improves the agreement between the 4FS and 5FS calculations. It is worth noticing that also in this case, as for the choice of scales in the total cross section, a softer scale with respect to the naive hard scale is physically motivated, and its employment improves the comparison between results in the two schemes. The inclusion of NLO(+PS) corrections further improves their mutual agreement at the level of shapes.
Details of the comparison between schemes are illustrated in Fig. <ref> for two representative observables. On the left panel the transverse momentum distribution of the charged Higgs boson is plotted. In this case, as for all inclusive observables in the kinematics of the final bottom quark, the shapes of the four- and five-flavor scheme predictions agree very well. The difference in normalisation is compatible with the one observed in Ref. <cit.>, although in this case it is more significant due to a slightly different choice of scale in the 5FS. Differences remain, however, and they are particularly sizeable for observables related to b jets and B hadrons, as it is displayed in the right panel of Fig. <ref>, in which the transverse momentum of the second-hardest B hadron is plotted in the two schemes: at small p_T the 4FS prediction is suppressed with respect to the 5FS one. This is due to mass effects:
the b quark is collinear to the beam in such configurations, which are enhanced in the 5FS because of
the collinear singularities, while in the 4FS these singularities are screened by the b-quark mass.
Given these differences, it is important to assess which are the most reliable predictions for this class of observables, given that the proper simulation of the signal is crucial to fully exploit the potential of the data collected in charged Higgs searches at the LHC.
The recommendation of the authors of <cit.>, reiterated in the YR4 <cit.>, is that 4FS predictions should be adopted for any realistic
signal simulation in experimental searches. This recommendation is motivated by the fact that the 4FS prediction provides a better description of the final-state kinematics and that the dependence on the PS is smaller for the 4FS than for the 5FS predictions, probably because the 4FS carries more differential information at the matrix-element level, which reduces the effects of the shower. For the normalisation of the cross section one could use the Santander-matched predictions of Ref. <cit.>.
§ INTERMEDIATE CHARGED HIGGS BOSON PRODUCTION
The region in which the mass of the charged Higgs is close to the mass of
the top quark has not been explored so far by the experiments at the
LHC. The main reason for that is the lack of precise theoretical
predictions for the charged Higgs production in that specific mass region.
Indeed, the treatment of the intermediate region between resonant
top quark decays and the continuum contribution for large charged
Higgs masses has been an open problem for some time. This has been
recently tackled by the full NLO calculation in the four-flavor scheme
published in Ref. <cit.>. The calculation, performed in the complex-mass
scheme for the intermediate top quarks, makes it possible to fully include
double- and single-resonant top contributions. Previous calculations
performed in several schemes were either done at leading order <cit.>
or by combining two processes without including
the full interference contributions between the two <cit.>.
As shown in Fig. <ref>, the NLO QCD corrections computed in Ref. <cit.> turn out to be large in this mass regime, with K-factors of about 1.5-1.6. The central prediction in the main frame develops a prominent structure with a kink at the threshold m_H^±≃ m_t-m_b.
Results nicely interpolate between the low- and high-mass regimes.
The effect of the single-resonant contributions (pp→ t W^- and pp →t̅ H^+) is visible when comparing
the intermediate-mass result with the low-mass prediction. Indeed,
the single-resonant contributions
are missing in the low-mass prediction and amount to 10%-15% of the pp → tt̅ cross section depending on the specific
value of tanβ. Finally, looking at the matching of the intermediate-mass predictions to the heavy charged Higgs cross section, a 5%-10% gap can be observed for tanβ=8 and tanβ=30, originating from the non-resonant part of the amplitude, which, because of the chiral structure of the H^+ tb and Wtb vertices, is enhanced (suppressed) for large (small) values of tanβ.
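The tanβ pattern quoted here can be read off from the structure of the type-II 2HDM charged-Higgs Yukawa coupling which, schematically (up to conventions, and with V_tb the relevant CKM element), reads

ℒ_H^+tb = (√2/v) V_tb H^+ t̅(m_t cotβ P_L + m_b tanβ P_R) b + h.c. ,

so that the m_b tanβ (m_t cotβ) piece dominates the non-resonant amplitude at large (small) values of tanβ.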
§ CONCLUSIONS
Important progress has been made in the past few years in the simulation of heavy and intermediate mass charged Higgs boson events, also thanks to the joint work of experimentalists and theorists in the Higgs Cross Section Working Group.
For the heavy charged Higgs boson, Santander-matched predictions for a wide range of masses and tanβ in the type-II 2HDM, generalisable to other types, have been made available for experimental searches.
Furthermore, fully differential calculations at NLO and NLO+PS are available both in the 5FS and in the 4FS.
The comparison between the 4FS and the 5FS at the level of total and differential cross sections shows that compatible results can be achieved for observables inclusive in the bottom-quark kinematics, thanks also to the choice of lower shower and factorisation scales. Finally, the novel total cross section calculation for the simulation of the signal in the intermediate-mass region is the first step towards an improved simulation of the differential distributions in that specific range, for which Run-II results will soon be made available.
99
Aad:2012tj
G. Aad et al. [ATLAS Collaboration],
JHEP 1206 (2012) 039
doi:10.1007/JHEP06(2012)039
[arXiv:1204.2760 [hep-ex]].
Aad:2012rjx
G. Aad et al. [ATLAS Collaboration],
JHEP 1303 (2013) 076
doi:10.1007/JHEP03(2013)076
[arXiv:1212.3572 [hep-ex]].
Aad:2013hla
G. Aad et al. [ATLAS Collaboration],
Eur. Phys. J. C 73 (2013) no.6, 2465
doi:10.1140/epjc/s10052-013-2465-z
[arXiv:1302.3694 [hep-ex]].
Chatrchyan:2012vca
S. Chatrchyan et al. [CMS Collaboration],
JHEP 1207 (2012) 143
doi:10.1007/JHEP07(2012)143
[arXiv:1205.5736 [hep-ex]].
Khachatryan:2015uua
V. Khachatryan et al. [CMS Collaboration],
JHEP 1512 (2015) 178
doi:10.1007/JHEP12(2015)178
[arXiv:1510.04252 [hep-ex]].
Khachatryan:2015qxa
V. Khachatryan et al. [CMS Collaboration],
JHEP 1511 (2015) 018
doi:10.1007/JHEP11(2015)018
[arXiv:1508.07774 [hep-ex]].
Aad:2015typ
G. Aad et al. [ATLAS Collaboration],
JHEP 1603 (2016) 127
doi:10.1007/JHEP03(2016)127
[arXiv:1512.03704 [hep-ex]].
Aad:2014kga
G. Aad et al. [ATLAS Collaboration],
JHEP 1503 (2015) 088
doi:10.1007/JHEP03(2015)088
[arXiv:1412.6663 [hep-ex]].
Aaboud:2016dig
M. Aaboud et al. [ATLAS Collaboration],
Phys. Lett. B 759 (2016) 555
doi:10.1016/j.physletb.2016.06.017
[arXiv:1603.09203 [hep-ex]].
ATLAS:2016qiq
The ATLAS collaboration [ATLAS Collaboration],
ATLAS-CONF-2016-089.
CMS:2016szv
CMS Collaboration [CMS Collaboration],
CMS-PAS-HIG-16-031.
Branco:2011iw
G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher and J. P. Silva,
Phys. Rept. 516 (2012) 1
doi:10.1016/j.physrep.2012.02.002
[arXiv:1106.0034 [hep-ph]].
Czarnecki:1992zm
A. Czarnecki and S. Davidson,
Phys. Rev. D 48 (1993) 4183
doi:10.1103/PhysRevD.48.4183
[hep-ph/9301237]
Denner:1990ns
A. Denner and T. Sack,
Nucl. Phys. B 358 (1991) 46.
doi:10.1016/0550-3213(91)90530-B
Li:1990qf
C. S. Li, R. J. Oakes and T. C. Yuan,
Phys. Rev. D 43 (1991) 3759.
doi:10.1103/PhysRevD.43.3759
Blokland:2005vq
I. R. Blokland, A. Czarnecki, M. Slusarczyk and F. Tkachov,
Phys. Rev. D 71 (2005) 054004
Erratum: [Phys. Rev. D 79 (2009) 019901]
doi:10.1103/PhysRevD.79.019901, 10.1103/PhysRevD.71.054004
[hep-ph/0503039].
Brucherseifer:2013iv
M. Brucherseifer, F. Caola and K. Melnikov,
JHEP 1304 (2013) 059
doi:10.1007/JHEP04(2013)059
[arXiv:1301.7133 [hep-ph]].
Peng:2006wv
W. Peng, M. Wen-Gan, Z. Ren-You, J. Yi, H. Liang and G. Lei,
Phys. Rev. D 73 (2006) 015012
Erratum: [Phys. Rev. D 80 (2009) 059901]
doi:10.1103/PhysRevD.80.059901, 10.1103/PhysRevD.73.015012
[hep-ph/0601069].
Dittmaier:2009np
S. Dittmaier, M. Kramer, M. Spira and M. Walser,
Phys. Rev. D 83 (2011) 055005
doi:10.1103/PhysRevD.83.055005
[arXiv:0906.2648 [hep-ph]].
Zhu:2001nt
S. h. Zhu,
Phys. Rev. D 67 (2003) 075006
doi:10.1103/PhysRevD.67.075006
[hep-ph/0112109].
Gao:2002is
G. p. Gao, G. r. Lu, Z. h. Xiong and J. M. Yang,
Phys. Rev. D 66 (2002) 015007
doi:10.1103/PhysRevD.66.015007
[hep-ph/0202016].
Plehn:2002vy
T. Plehn,
Phys. Rev. D 67 (2003) 014018
doi:10.1103/PhysRevD.67.014018
[hep-ph/0206121].
Berger:2003sm
E. L. Berger, T. Han, J. Jiang and T. Plehn,
Phys. Rev. D 71 (2005) 115012
doi:10.1103/PhysRevD.71.115012
[hep-ph/0312286].
Kidonakis:2016eeu
N. Kidonakis,
Phys. Rev. D 94 (2016) no.1, 014010
doi:10.1103/PhysRevD.94.014010
[arXiv:1605.00622 [hep-ph]].
Weydert:2009vr
C. Weydert, S. Frixione, M. Herquet, M. Klasen, E. Laenen, T. Plehn, G. Stavenga and C. D. White,
Eur. Phys. J. C 67 (2010) 617
doi:10.1140/epjc/s10052-010-1320-8
[arXiv:0912.3430 [hep-ph]].
Klasen:2012wq
M. Klasen, K. Kovarik, P. Nason and C. Weydert,
Eur. Phys. J. C 72 (2012) 2088
doi:10.1140/epjc/s10052-012-2088-9
[arXiv:1203.1341 [hep-ph]].
Flechl:2014wfa
M. Flechl, R. Klees, M. Kramer, M. Spira and M. Ubiali,
Phys. Rev. D 91 (2015) no.7, 075015
doi:10.1103/PhysRevD.91.075015
[arXiv:1409.5615 [hep-ph]].
Maltoni:2012pa
F. Maltoni, G. Ridolfi and M. Ubiali,
JHEP 1207 (2012) 022
Erratum: [JHEP 1304 (2013) 095]
doi:10.1007/JHEP04(2013)095, 10.1007/JHEP07(2012)022
[arXiv:1203.6393 [hep-ph]].
Harlander:2011aa
R. Harlander, M. Kramer and M. Schumacher,
arXiv:1112.3478 [hep-ph].
deFlorian:2016spz
D. de Florian et al. [LHC Higgs Cross Section Working Group],
arXiv:1610.07922 [hep-ph].
Forte:2015hba
S. Forte, D. Napoletano and M. Ubiali,
Phys. Lett. B 751 (2015) 331
doi:10.1016/j.physletb.2015.10.051
[arXiv:1508.01529 [hep-ph]].
Forte:2016sja
S. Forte, D. Napoletano and M. Ubiali,
Phys. Lett. B 763 (2016) 190
doi:10.1016/j.physletb.2016.10.040
[arXiv:1607.00389 [hep-ph]].
Degrande:2015vpa
C. Degrande, M. Ubiali, M. Wiesemann and M. Zaro,
JHEP 1510 (2015) 145
doi:10.1007/JHEP10(2015)145
[arXiv:1507.02549 [hep-ph]].
Alwall:2014hca
J. Alwall et al.,
JHEP 1407 (2014) 079
doi:10.1007/JHEP07(2014)079
[arXiv:1405.0301 [hep-ph]].
Bahr:2008pv
M. Bahr et al.,
Eur. Phys. J. C 58 (2008) 639
doi:10.1140/epjc/s10052-008-0798-9
[arXiv:0803.0883 [hep-ph]].
Sjostrand:2007gs
T. Sjostrand, S. Mrenna and P. Z. Skands,
Comput. Phys. Commun. 178 (2008) 852
doi:10.1016/j.cpc.2008.01.036
[arXiv:0710.3820 [hep-ph]].
Degrande:2016hyf
C. Degrande, R. Frederix, V. Hirschi, M. Ubiali, M. Wiesemann and M. Zaro,
arXiv:1607.05291 [hep-ph].
Assamagan:2004gv
K. A. Assamagan, M. Guchait and S. Moretti,
hep-ph/0402057.
Moretti:2002eu
S. Moretti, K. Odagiri, P. Richardson, M. H. Seymour and B. R. Webber,
JHEP 0204 (2002) 028
doi:10.1088/1126-6708/2002/04/028
[hep-ph/0204123].
Alwall:2004xw
J. Alwall and J. Rathsman,
JHEP 0412 (2004) 050
doi:10.1088/1126-6708/2004/12/050
[hep-ph/0409094].
|
http://arxiv.org/abs/1701.08100v1 | 20170126164229 | The Causal Frame Problem: An Algorithmic Perspective | [
"Ardavan Salehi Nobandegani",
"Ioannis N. Psaromiligkos"
] | cs.AI | [
"cs.AI",
"q-bio.NC",
"stat.ML"
] |
The Frame Problem (FP) is a puzzle in philosophy of mind and epistemology, articulated by the Stanford Encyclopedia of Philosophy as follows: “How do we account for our apparent ability to make decisions on the basis only of what is relevant to an ongoing situation without having explicitly to consider all that is not relevant?" In this work, we focus on the causal variant of the FP, the Causal Frame Problem (CFP). Assuming that a reasoner's mental causal model can be (implicitly) represented by a causal Bayes net, we first introduce a notion called Potential Level (PL). PL, in essence, encodes the relative position of a node with respect to its neighbors in a causal Bayes net. Drawing on the psychological literature on causal judgment, we substantiate the claim that PL may bear on how time is encoded in the mind. Using PL, we propose an inference framework, called the PL-based Inference Framework (PLIF), which permits a boundedly-rational approach to the CFP to be formally articulated at Marr's algorithmic level of analysis. We show that our proposed framework, PLIF, is consistent with a wide range of findings in causal judgment literature, and that PL and PLIF make a number of predictions, some of which are already supported by existing findings.
Keywords: Causal Frame Problem; Time and Causality; Bounded Rationality; Algorithmic Level Analysis
§ INTRODUCTION
At the core of any decision-making or reasoning task, resides an innocent-looking yet challenging question: Given an inconceivably large body of knowledge available to the reasoner, what constitutes the relevant for the task and what the irrelevant? The question, as it is posed, echoes the well-known Frame Problem (FP) in epistemology and philosophy of mind, articulated by Glymour (1987) as follows: “Given an enormous amount of stuff, and some task to be done using some of the stuff, what is the relevant stuff for the task?" Fodor (1987) comments: “The frame problem goes very deep; it goes as deep as the analysis of rationality."
The question posed above perfectly captures what is really at the core of the FP, yet, it may suggest an unsatisfying approach to the FP at the algorithmic level of analysis (Marr, 1982). Indeed, the question may suggest the following two-step methodology: In the first step, out of all the body of knowledge available to the reasoner (termed the model), she has to identify what is relevant to the task (termed the relevant submodel); it is only then that she advances to the second step by performing reasoning or inference on the identified submodel. There is something fundamentally wrong with this methodology (which we term the sequential approach to reasoning) which bears on the following understanding: The relevant submodel, i.e., the portion of the reasoner's knowledge deemed relevant to the task, oftentimes is so enormous (or even infinitely large) that the reasoner—inevitably bounded in time and computational resources—would never get to the second step, had she adhered to such a methodology. In other words, in line with the notion of bounded rationality (Simon, 1957), a boundedly-rational reasoner must have the option, if need be, to merely consult a fraction of the potentially large—if not infinitely so—relevant submodel.
Recent work by Icard and Goodman (2015) elegantly promotes this insight when they write: “Somehow the mind must focus in on some “submodel" of the “full" model (including all possibly relevant variables) that suffices for the task at hand and is not too costly to use."[In an informative example on Hidden Markov Models (HMMs), Icard & Goodman (2015) present a setting wherein the relevant submodel is infinitely large—an example which makes it pronounced what is wrong with the sequential approach stated earlier.] They then ask the following question: “what kind of simpler model should a reasoner consult for a given task?" This is an inspiring question hinting to an interesting line of inquiry as to how to formally articulate a boundedly-rational approach to the FP at Marr's algorithmic level of analysis (1982).
In this work, we focus on the causal variant of the FP, the Causal Frame Problem (CFP), stated as follows: Upon being presented with a causal query, how does the reasoner manage to attend to her causal knowledge relevant to the derivation of the query while rightfully dismissing the irrelevant? We adopt Causal Bayesian Networks (CBNs) (Pearl, 1988; Gopnik et al., 2004, inter alia) as a normative model to represent how the reasoner's internal causal model of the world is structured (i.e., reasoner's mental model). First, we introduce the notion of Potential Level (PL). PL, in essence, encodes the relative position of a node (representing a propositional variable or a concept) with respect to its neighbors in a CBN. Drawing on the psychological literature on causal judgment, we substantiate the claim that PL may bear on how time is encoded in the mind. Equipped with PL, we embark on investigating the CFP at Marr's algorithmic level of analysis. We propose an inference framework, termed PL-based Inference Framework (PLIF), which aims at empowering the boundedly-rational reasoner to consult (or retrieve[The terms “consult" and “retrieve" will be used interchangeably. We elaborate on the rationale behind that in Sec. <ref>, where we connect our work to Long Term Memory and Working Memory.]) parts of the underlying CBN deemed relevant for the derivation of the posed query (the relevant submodel) in a local, bottom-up fashion until the submodel is fully retrieved. PLIF allows the reasoner to carry out inference at intermediate stages of the retrieval process over the thus-far retrieved parts, thereby obtaining lower and upper bounds on the posed causal query. We show, in the Discussion section, that our proposed framework, PLIF, is consistent with a wide range of findings in causal judgment literature, and that PL and PLIF make a number of predictions, some of which are already supported by the findings in the psychology literature.
In their work, Icard and Goodman (2015) articulate a boundedly-rational approach to the CFP at Marr's computational level of analysis, which, as they point out, is from a “god's eye" point of view. In sharp contrast, our proposed framework PLIF is not from a “god's eye" point of view and hence could be regarded, potentially, as a psychologically plausible proposal at Marr's algorithmic level of analysis as to how the mind both retrieves and, at the same time, carries out inference over the retrieved submodel to derive bounds on a causal query. We term this the concurrent approach to reasoning, as opposed to the flawed sequential approach stated earlier.[We elaborate more on this in the Discussion section.] The retrieval process progresses in a local, bottom-up fashion, hence the submodel is retrieved incrementally, in a nested manner.[The term “nested" implies that the thus-far retrieved submodel is subsumed by every later submodel (should the reasoner proceed with the retrieval process).] Our analysis (Sec. <ref>) confirms Icard and Goodman's insight (2015) that even in the extreme case of having an infinitely large relevant submodel, the portion of which the reasoner has to consult so as to obtain a “sufficiently good" answer to a query could indeed be very small.
§ POTENTIAL LEVEL AND TIME
Before proceeding further, let us introduce some preliminary notations. Random Variables (RVs) are denoted by lower-case bold-faced letters, e.g., x, and their realizations by non-bold lower-case letters, e.g., x. Likewise, sets of RVs are denoted by upper-case bold-faced letters, e.g., X, and their corresponding realizations by upper-case non-bold letters, e.g., X. Val(·) denotes the set of possible values a random quantity can take on. Random quantities are assumed to be discrete unless stated otherwise.
The joint probability distribution over x_1,⋯,x_n is denoted by ℙ(x_1,⋯,x_n). We will use the notation x_1:n to denote the sequence of n RVs x_1,⋯,x_n, hence ℙ(x_1,⋯,x_n)=ℙ(x_1:n). The terms “node" and “variable" will be used interchangeably throughout. To simplify presentation, we adopt the following notation: We denote the probability ℙ(x=x) by ℙ(x) for some RV x and its realization x∈Val(x). For conditional probabilities, we will use the notation ℙ(x|y) instead of ℙ(x=x|y=y). Likewise, ℙ(X|Y)=ℙ( X= X| Y= Y) for X ∈Val( X) and Y ∈Val( Y). A generic conditional independence relationship is denoted by ( A ⊥ B| C) where A, B, and C represent three mutually disjoint sets of variables belonging to a CBN. Furthermore, throughout the paper, we assume that ϵ is some negligibly small positive real-valued quantity. Whenever we subtract ϵ from a quantity, we simply imply a quantity less than but arbitrarily close to the original quantity. The rationale behind adopting such a notation will become clearer in Sec. <ref>.
Before formally introducing the notion of PL (unavoidably, with some mathematical jargon), we articulate in simple terms what the idea behind PL is. PL simply induces a chronological order on the nodes of a CBN, allowing the reasoner to encode the timing between cause and effect.[More precisely, PL induces a topological order on the nodes of a CBN, with temporal interpretations suggested in Def. 1.] As we will see, PL plays an important role in guiding the retrieval process used in our proposed framework. Next, PL is formally defined, followed by two clarifying examples.
Def. 1. (Potential Level (PL)) Let par( x) and child( x) denote, respectively, the sets of parents (i.e., immediate causes) and children (i.e., immediate effects) of x. Also let T_0∈ℝ∪{-∞}. The PL of x, denoted by p_l( x), is defined as follows: (i) If par( x)=∅, p_l( x)=T_0, and (ii) If par( x)≠∅, p_l( x) is a real-valued quantity selected from the interval (max_ y∈ par( x)p_l( y),min_ z∈ child( x)p_l( z)) such that p_l( x)-max_ y∈ par( x)p_l( y) indicates the amount of time which elapses between intervening simultaneously on all the RVs in par( x) (i.e., do(par( x)=par_x)) and x taking its value x in accord with the distribution (x|par_x). If child( x)=∅, substitute the upper bound of the given interval by +∞. ▪
Parameter T_0 symbolizes the origin of time, as perceived by the reasoner. T_0=0 is a natural choice, unless the reasoner believes that time continues indefinitely into the past, in which case T_0=-∞. The next two examples further clarify the idea behind PL. In both examples we assume T_0=0.
For the first example, let us consider the CBN depicted in Fig. <ref>(a) containing the RVs x, y, and z with p_l( x)=4, p_l( y)=4.7, and p_l( z)=5. According to Def. 1, the given PLs can be construed in terms of the relative time between the occurrence of cause and effect as articulated next. Upon intervening on x (i.e., do( x=x)), after the elapse of p_l( y)- p_l( x)=0.7 units of time, the RV y takes its value y in accord with the distribution ℙ(y|x). Likewise, upon intervening on y (i.e., do( y=y)), after the elapse of p_l( z)- p_l( y)=0.3 units of time, z takes its value z according to ℙ(z|y).
For the second example, consider the CBN depicted in Fig. <ref>(b) containing the RVs x, y, z, and t with p_l( x)=4, p_l( y)=4.7, p_l( z)=5, and p_l( t)=5.6. Upon intervening on x (i.e., do( x=x)) the following happens: (i) after the elapse of p_l( y)- p_l( x)=0.7 units of time, y takes its value y according to ℙ(y|x), and (ii) after the elapse of p_l( z)- p_l( x)=1 unit of time, z takes its value z according to ℙ(z|x). Also, upon intervening simultaneously on RVs y, z (i.e., do( y=y, z=z)), after the elapse of p_l( t)-max_ r∈ par( t)p_l( r)=0.6 units of time, t takes its value t according to ℙ(t|y,z).
In sum, the notion of PL bears on the underlying time-grid upon which a CBN is constructed, and adheres to Hume's principle of temporal precedence of cause to effect <cit.>. A growing body of work in psychology literature corroborates Hume's centuries-old insight, suggesting that the timing and temporal order between events strongly influences how humans induce causal structure over them <cit.>. The introduced notion of PL is based on the following hypothesis: When learning the underlying causal structure of a domain, humans may as well encode the temporal patterns (or some estimates thereof) on which they rely to infer the causal structure. This hypothesis is supported by recent findings suggesting that people have expectations about the delay length between cause and effect <cit.>. It is worth noting that we could have defined PL in terms of relative expected time between cause and effect, rather than relative absolute time. Under such an interpretation, the time which elapses between the intervention on a cause and the occurrence of its effect would be modeled by a probability distribution, and PL would be defined in terms of the expected value of that distribution. Our proposed framework, PLIF, is indifferent as to whether PL should be construed in terms of absolute or expected time. Greville and Buehner (2010) show that causal relations with fixed temporal intervals are consistently judged as stronger compared to those with variable temporal intervals. This finding, therefore, seems to suggest that people expect, to a greater extent, fixed temporal intervals between cause and effect, rather than variable ones—an interpretation which, at least to a first approximation, favors construing PL in terms of relative absolute time (see Def. 1).[There are cases, however, that, despite the precedence of cause to effect, quantifying the amount of time between their occurrences may bear no meaning, e.g., when dealing with hypothetical constructs. In such cases, PL should be simply construed as a topological ordering. From a purely computational perspective, PL is a generalization of topological sorting in computer science.]
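To fix ideas, the bookkeeping implied by Def. 1 can be captured in a few lines of code. The sketch below is our own illustration (node names and numbers are hypothetical, echoing the relative delays of the first example above); it encodes a CBN whose nodes carry PLs and asserts the ordering constraint:

```python
T0 = 0.0  # perceived origin of time

class Node:
    """A CBN node carrying a Potential Level (PL) attribute."""
    def __init__(self, name, pl, parents=()):
        self.name, self.pl, self.parents = name, pl, list(parents)

def check_pl(nodes):
    """Def. 1: roots sit at T0; any other node's PL exceeds all parental PLs,
    the gap being the cause-to-effect delay."""
    for n in nodes:
        if not n.parents:
            assert n.pl == T0, f"root {n.name} must sit at the origin of time"
        else:
            assert n.pl > max(p.pl for p in n.parents), f"{n.name} violates Def. 1"

# A chain x -> y -> z with the relative delays of the first example
# (0.7 and 0.3 time units), shifted so that the root x sits at T0.
x = Node("x", T0)
y = Node("y", T0 + 0.7, parents=[x])
z = Node("z", T0 + 1.0, parents=[y])
check_pl([x, y, z])
```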
§ INFORMATIVE EXAMPLE
To develop our intuition, and before formally articulating our proposed framework, let us present a simple yet informative example which demonstrates: (i) how the retrieval process can be carried out in a local, bottom-up fashion, allowing for retrieving the relevant submodel incrementally, and (ii) how adopting PL allows the reasoner to obtain bounds on a given causal query at intermediate stages of the retrieval process.
Let us assume that the posed causal query is ℙ(x|y) where x, y are two RVs in the CBN depicted in Fig. <ref>(a) with PLs p_l( x),p_l( y), and let p_l( x)>p_l( y). The relevant information for the derivation of the posed query (i.e., the relevant submodel) is depicted in Fig. <ref>(e).
Starting from the target RV x in the original CBN (Fig. <ref>(a)) and moving one step backwards,[Taking one step backwards from variable q amounts to retrieving all the parents of q.] t_1 is reached (Fig. <ref>(b)). Since p_l( y)< p_l( t_1), y must be a non-descendant of t_1, and therefore, of x. Hence, conditioning on t_1 d-separates x from y <cit.>, yielding ( x ⊥ y| t_1). Thus ℙ(x|y)=∑_t_1∈ Val(t_1)ℙ(x|y,t_1)ℙ(t_1|y)=∑_t_1∈ Val(t_1)ℙ(x|t_1)ℙ(t_1|y) implying: min_t_1∈ Val( t_1)ℙ(x|t_1)≤ℙ(x|y)≤max_t_1∈ Val( t_1)ℙ(x|t_1). It is crucial to note that the given bounds can be computed using the information thus-far retrieved, i.e., the information encoded in the submodel shown in Fig. <ref>(b). Taking a step backwards from t_1, t_2 is reached (Fig. <ref>(c)). Using a similar line of reasoning to the one presented for t_1, having p_l( y)< p_l( t_2) ensures ( x ⊥ y| t_2). Therefore, the following bounds on the posed query can be derived, which, crucially, can be computed using the information thus-far retrieved: min_t_2∈ Val( t_2)ℙ(x|t_2)≤ℙ(x|y) ≤max_t_2∈ Val( t_2)ℙ(x|t_2). It is straightforward to show that the bounds derived in terms of t_2 are tighter than the bounds derived in terms of t_1.[Here we are implicitly making the assumption that the CPDs involved in the parameterization of the underlying CBN are non-degenerate. Dropping this assumption yields the following result: The bounds derived in terms of t_2 are equally-tight or tighter than the bounds derived in terms of t_1.] Finally, taking one step backward from t_2, y is reached (Fig. <ref>(d)) and the exact value for ℙ(x|y) can be derived, again using only the submodel thus-far retrieved (Fig. <ref>(d)).
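The tightening and nesting of these bounds is easy to verify numerically. The sketch below (with made-up conditional probability tables; none of these numbers come from the original example) reproduces the logic above for the chain y → t_2 → t_1 → x:

```python
import numpy as np

# Toy parametrization of the chain y -> t2 -> t1 -> x (all variables binary).
# Every CPT below is made up for illustration; rows index the parent's value.
P_t2_given_y  = np.array([[0.9, 0.1], [0.3, 0.7]])   # P(t2 | y)
P_t1_given_t2 = np.array([[0.8, 0.2], [0.4, 0.6]])   # P(t1 | t2)
P_x_given_t1  = np.array([[0.7, 0.3], [0.2, 0.8]])   # P(x | t1)

y = 0  # the observed evidence

# Exact query P(x=1 | y), summing out t2 and t1.
P_t1_given_y = P_t2_given_y[y] @ P_t1_given_t2
exact = P_t1_given_y @ P_x_given_t1[:, 1]

# Bounds after one backward step (condition on t1) and two (condition on t2).
lo1, hi1 = P_x_given_t1[:, 1].min(), P_x_given_t1[:, 1].max()
P_x_given_t2 = P_t1_given_t2 @ P_x_given_t1
lo2, hi2 = P_x_given_t2[:, 1].min(), P_x_given_t2[:, 1].max()

assert lo1 <= lo2 <= exact <= hi2 <= hi1   # nested, tightening intervals
print(f"[{lo1:.2f}, {hi1:.2f}] contains [{lo2:.2f}, {hi2:.2f}] contains {exact:.3f}")
```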
We are now well-positioned to present our proposed framework, PLIF.
§ PL-BASED INFERENCE FRAMEWORK (PLIF)
In this section, we intend to elaborate on how, equipped with the notion of PL, a generic causal query of the form[We do not consider interventions in this work. However, with some modifications, the presented analysis/results can be extended to handle a generic causal query of the form ℙ( O=O| E=E, do( Z=Z)) where Z denotes the set of intervened variables.] ℙ( O=O| E=E) can be derived where O and E denote, respectively, the disjoint sets of target (or objective) and observed (or evidence) variables. In other words, we intend to formalize how inference over a CBN whose nodes are endowed with PL as an attribute should be carried out. Before we present the main result, a few definitions are in order.
Def. 2. (Critical Potential Level (CPL)) The target variable with the least PL is denoted by o^∗ and its PL is referred to as the CPL. More formally, p_l^∗:≜min_ o∈ O p_l( o) and o^∗:≜arg min_ o∈ O p_l( o). E.g., for the setting given in Fig. <ref>(a), o^∗= x, and p_l^∗=p_l( x). Viewed through the lens of time, o^∗ is the furthest target variable into the past, with PL p_l^∗.
There are two possibilities: (a) p_l^∗>T_0, or (b) p_l^∗=T_0, with T_0 denoting the origin of time; cf. Sec. <ref>. In the sequel we assume that (a) holds. For a discussion on the special case (b), the reader is referred to the Supplementary Information.
Def. 3. (Inference Threshold (IT) and IT Root Set (IT-RS)) To any real-valued quantity, T, corresponds a unique set, R_ T, obtained as follows: Start at every variable x∈ O∪ E with PL ≥ T and backtrack along all paths terminating at x. Backtracking along each path stops as soon as a node with PL less than T is encountered. Such nodes, together, compose the set R_ T. It follows from the definition that: max_ t∈ R_ Tp_l( t)< T. T and R_ T are termed, respectively, Inference Threshold (IT) and the IT Root Set (IT-RS) for T.
For example, the sets of variables circled at the stages depicted in Figs. <ref>(b-d) are, respectively, the IT-RSs for T=p_l( x)-ϵ, T=p_l( t_1)-ϵ, and T=p_l( t_2)-ϵ. Note that instead of, say T=p_l( x)-ϵ, we could have said: for any T∈(p_l( t_1),p_l( x)). However, expressing ITs in terms of ϵ liberates us from having to express them in terms of intervals thereby simplifying the exposition in the sole hope that the reader finds it easier to follow the work. We would like to emphasize that the adopted notation should not be construed as implying that the assignment of values to ITs is such a sensitive task that everything would have collapsed, had IT not been chosen in such a fine-tuned manner. To recap, in simple terms, T bears on how far into the past a reasoner is consulting her mental model in the process of answering a query, and R_ T characterizes the furthest-into-the-past concepts entertained by the reasoner in that process.
Next, we formally present the main idea behind PLIF, followed by its interpretation in simple terms.
Lemma 1. For any chosen IT T<p_l^∗ and its corresponding R_ T, define S:≜ R_ T∖ E. Then the following holds:
min_S∈ Val( S)ℙ(O|S,E)≤ℙ(O|E) ≤max_S∈ Val( S)ℙ(O|S,E).
Crucially, the provided bounds can be computed using the information encoded in the submodel retrieved in the very process of obtaining the R_ T. □
For a formal proof of Lemma 1, the reader is referred to the Supplementary Information. Mathematical jargon aside, the message of Lemma 1 is quite simple: For any chosen inference threshold T which is further into the past than o^∗, Lemma 1 ensures that the reasoner can condition on S and obtain the reported lower and upper bounds on the query by using only the information encoded in the retrieved submodel.
It is natural to ask under what conditions the exact value to the posed query can be derived using the thus-far retrieved submodel (i.e., the submodel obtained during the identification of R_ T). The following remark bears on that.
Remark 1. If for IT T, R_ T satisfies either: (i) R_ T⊆ E, or (ii) for all r∈ R_ T, p_l( r)=T_0, and min_ e∈ Ep_l( e)> T, or (iii) the lower and upper bounds given in (<ref>) are identical, then the exact value of the posed query can be derived using the submodel retrieved in the process of obtaining R_ T. Fig. <ref>(d) shows a setting wherein conditions (i) and (iii) are both met.
The rationale behind Remark 1 is provided in the Supplementary Information.
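For concreteness, Lemma 1 can be implemented by brute force on small discrete CBNs. The sketch below is our own code (the CPTs are invented); it enumerates the joint distribution to produce the min/max bounds. In PLIF proper, only the retrieved submodel would be consulted, and the enumeration would be replaced by the reasoner's preferred inference scheme (e.g., BP or MCMC):

```python
from itertools import product

def prob(cbn, fixed):
    """Marginal probability of the partial assignment `fixed`, by brute force.
    cbn[v] = (parents, cpt) with cpt[(parent values, value)] = probability."""
    free = [v for v in cbn if v not in fixed]
    total = 0.0
    for vals in product([0, 1], repeat=len(free)):
        a = {**fixed, **dict(zip(free, vals))}
        p = 1.0
        for v, (pa, cpt) in cbn.items():
            p *= cpt[tuple(a[u] for u in pa), a[v]]
        total += p
    return total

def lemma1_bounds(cbn, O, E, S_vars):
    """min/max over joint values of S of P(O | S, E), as in Lemma 1."""
    vals = []
    for s in product([0, 1], repeat=len(S_vars)):
        SE = {**dict(zip(S_vars, s)), **E}
        vals.append(prob(cbn, {**O, **SE}) / prob(cbn, SE))
    return min(vals), max(vals)

# Toy chain y -> t2 -> t1 -> x with invented CPTs; after two backward steps
# from x the IT-RS is {t2}, so S = {t2} and the evidence is E = {y}.
cbn = {
    "y":  ((),      {((), 0): 0.5, ((), 1): 0.5}),
    "t2": (("y",),  {((0,), 0): 0.9, ((0,), 1): 0.1, ((1,), 0): 0.3, ((1,), 1): 0.7}),
    "t1": (("t2",), {((0,), 0): 0.8, ((0,), 1): 0.2, ((1,), 0): 0.4, ((1,), 1): 0.6}),
    "x":  (("t1",), {((0,), 0): 0.7, ((0,), 1): 0.3, ((1,), 0): 0.2, ((1,), 1): 0.8}),
}
lo, hi = lemma1_bounds(cbn, O={"x": 1}, E={"y": 0}, S_vars=["t2"])
print(f"{lo:.3f} <= P(x=1 | y=0) <= {hi:.3f}")
```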
§.§ Case Study
Next, we intend to cast the Hidden Markov Model (HMM) studied in (Icard & Goodman, 2015, p. 2) into our framework.
The setting is shown in Fig. <ref>(left). We adhere to the same parametrization and query adopted therein. All RVs in this section are binary, taking on values from the set {0,1}; x=x indicates the event wherein x takes the value 1, and x=x̅ implies the event wherein x takes the value 0. We assume p_l( x_t+i)=i-2.[Note that the trend of the upper- and lower-bound curve as well as the size of the intervals shown in Fig. <ref>(right) are insensitive with regard to the choice of PLs for variables { x_t-i}_i=-1^+∞.] We should note that the assignment of the PLs for the variables in { y_t-i}_i=0^+∞ does not affect the presented results in any way. The query of interest is ℙ(x_t+1|y_-∞:t). Notice that after performing three steps of the sort discussed in the example presented in Sec. <ref> (for the IT T=-3-ϵ), the lower bound on the posed query exceeds 0.5 (shown by the red dashed line in Fig. <ref>(right)). This observation has the following intriguing implication. Assume, for the sake of argument, that we were presented with the following Maximum A-Posteriori (MAP) inference problem: Upon observing all the variables in { y_t-i}_i=0^+∞ taking on the value 1, what would be the most likely state for the variable x_t+1? Interestingly, we would be able to answer this MAP inference problem simply after three backward moves (corresponding to the IT T=-3-ϵ). In Fig. <ref>(right), the intervals within which the posed query falls (due to Lemma 1) in terms of the adopted IT T are depicted.
Our analysis confirms Icard and Goodman's insight (2015) that even in the extreme case of having infinite-sized relevant submodel (Fig. <ref>(left)), the portion of which the reasoner has to consult so as to obtain a “sufficiently good" answer to the posed query could happen to be very small (Fig. <ref>(right)).
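The anytime behavior just described is easy to reproduce. The sketch below uses hypothetical HMM parameters (Icard and Goodman's exact numbers are not reproduced in the text, so the transition and emission matrices are illustrative only) and computes the Lemma 1 bounds as a function of how far into the past the chain is cut:

```python
import numpy as np

# Hypothetical HMM parameters; x is the hidden chain, y the observations.
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])   # A[i, j] = P(x_next = j | x = i)
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # B[i, o] = P(y = o | x = i)

def bounds(k):
    """Lemma-1 bounds on P(x_{t+1}=1 | y_{-inf:t} = all ones) obtained by
    cutting the chain k steps into the past: condition on the cut variable
    and filter forward through the k retrieved observations y = 1."""
    vals = []
    for s in (0, 1):                 # possible values of the cut variable
        f = np.eye(2)[s]             # point mass at the cut
        for _ in range(k):
            f = f * B[:, 1]          # absorb one observation y = 1
            f = (f / f.sum()) @ A    # condition, then propagate one step
        vals.append(f[1])
    return min(vals), max(vals)

for k in range(1, 6):
    lo, hi = bounds(k)
    print(f"IT {k} steps back: {lo:.4f} <= P(x_t+1=1 | evidence) <= {hi:.4f}")
# With these made-up numbers the lower bound already exceeds 1/2 at k = 2,
# settling the MAP question after retrieving only a two-slice submodel.
```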
§ DISCUSSION
To our knowledge, PLIF is the first inference framework proposed that capitalizes on time to constrain the scope of causal reasoning over CBNs, where the term scope refers to the portion of a CBN on which inference is carried out. PLIF does not restrict itself to any particular inference scheme. The claim of PLIF is that inference should be confined within and carried out over retrieved submodels of the kind suggested by Lemma 1 so as to obtain the reported bounds therein. In this light, PLIF can accommodate all sorts of inference schemes, including Belief Propagation (BP), and sample-based inference methods using Markov Chain Monte Carlo (MCMC), as two prominent classes of inference schemes proposed in the literature.[MCMC-based methods have been successful in simulating important aspects of a wide range of cognitive phenomena, and giving accounts for many cognitive biases; cf. <cit.>. Also, work in theoretical neuroscience has suggested mechanisms for how BP and MCMC-based methods could be realized in neural circuits; cf. <cit.>.] For example, to cast BP into PLIF amounts to restricting BP's message-passing within submodels of the kind suggested by Lemma 1. In other words, assuming that BP is to be adopted as the inference scheme, upon being presented with a causal query, an IT according to Lemma 1 will be selected—at the meta-level—by the reasoner and the corresponding submodel, as suggested by Lemma 1, will be retrieved, over which inference will be carried out using BP. This will lead to obtaining lower and upper bounds on the query, as reported in Lemma 1. If time permits, the reasoner builds up incrementally on the thus-far retrieved submodel so as to obtain tighter bounds on the query.[The very property that the submodel gets constructed incrementally in a nested fashion guarantees that the obtained lower and upper bounds get tighter as the reasoner adopts smaller ITs; cf. Fig. <ref>(left).] MCMC-based inference methods can be cast, in a similar fashion, into PLIF.
The problem of what parts of a CBN are relevant and what are irrelevant for a given query, according to (Geiger, Verma, & Pearl, 1989), was first addressed by Shachter (1988). The approaches proposed for identifying the relevant submodel for a given query fall into two broad categories (cf. (Mahoney & Laskey, 1998) and references therein): (i) top-down approaches, and (ii) bottom-up approaches. Top-down approaches start with the full knowledge of the underlying CBN and, depending on the posed query, gradually prune the irrelevant parts of the CBN. In this respect, top-down approaches are inevitably from “god's eye" point of view—a characteristic which undermines their cognitive-plausibility. Bottom-up approaches, on the other hand, start at the variables involved in the posed query and move backwards till the boundaries of the underlying CBN are finally reached, only then they start to prune the parts of the constructed submodel—if any—which can be safely removed without jeopardizing the exact computation of the posed query. It is important to note that bottom-up approaches cannot stop at intermediate steps during the backward move and run inference on the thus-far constructed submodel without running the risk of compromising some of the (in)dependence relations structurally encoded in the CBN, which would yield erroneous inferences. This observation is due to the fact that there exists no local signal revealing how the thus-far retrieved nodes are positioned relative to each other and to the to-be-retrieved nodes—a shortcoming circumvented in the case of PLIF by introducing PL. Another pitfall shared by both top-down and bottom-up approaches is their sequential methodology towards the task of inference, according to which the relevant submodel for the posed query should be first constructed, and only then inference is carried out to compute the posed query.[The computation can be carried out to obtain either the exact value or simply an approximation to the query. Nonetheless, what both top-down and bottom-up approaches agree on is that the relevant submodel is to be first identified, should the reasoner intend to compute exactly or approximately the posed query.] On the contrary, PLIF submits to what we call the concurrent approach to reasoning, whereby retrieval and inference take place in tandem. The HMM example analyzed in Sec. <ref>, shows the efficacy of the concurrent approach.
Work on causal judgment provides support for the so-called alternative neglect, according to which subjects tend to neglect alternative causes to a much greater extent in predictive reasoning than in diagnostic reasoning <cit.>. Alternative neglect, therefore, implies that subjects would tend to ignore parts of the relevant submodel while constructing it. Recent findings, however, seem to cast doubt on alternative neglect <cit.>. Meder et al. (2014), Experiment 1 demonstrates that subjects appropriately take into account alternative causes in predictive reasoning. Also, Cummins (2014) substantiates a two-part explanation of alternative neglect according to which: (i) subjects interpret predictive queries as requests to estimate the probability of the effect when only the focal cause is present, an interpretation which renders alternative causes irrelevant, and (ii) the influence of inhibitory causes (i.e., disablers) on predictive judgment is underestimated, and this underestimation is incorrectly interpreted as neglecting of alternative causes. Cummins (2014), Experiment 2 shows that when predictive inference is queried in a manner that more accurately expresses the meaning of a noisy-OR Bayes net (i.e., the normative model adopted by Fernbach, Darlow, and Sloman (2011)) likelihood estimates approached normative estimates. Cummins (2014), Experiment 4 shows that the impact of disablers on predictive judgments is far greater than that of alternative causes, while having little impact on diagnostic judgments. PLIF commits to the retrieval of enablers as well as disablers. As mentioned earlier, PLIF abstracts away from the inference algorithm operating on the retrieved submodel, and, hence, leaves it to the inference algorithm to decide how the retrieved enablers and disablers should be integrated. In this light, PLIF is consistent with the results of Experiment 4.
In an attempt to explain violations of screening-off reported in the literature, Park and Sloman (2013) find strong support for the contradiction hypothesis followed by the mediating mechanism hypothesis, and finally conclude that people do conform to screening-off once the causal structure they are using is correctly specified. PLIF is consistent with these findings, as it adheres to the assumption that reasoners carry out inference on their internal causal model (including all possible mediating variables and disablers), not the potentially incomplete one presented in the cover story; see also <cit.>.
Experiment 5 in <cit.>, consistent with <cit.>, shows that causal judgments are strongly influenced by memory retrieval/activation processes, and that both number of disablers and order of disabler retrieval matter in causal judgments. These findings suggest that the CFP and memory retrieval/activation are intimately linked. In that light, next, we intend to elaborate on the rationale behind adopting the term “retrieve" and using it interchangeably with the term “consult" throughout the paper; this is where we relate PLIF to the concepts of Long Term Memory (LTM) and Working Memory (WM) in psychology and neurophysiology. Next, we elaborate on how PLIF could be interpreted through the lenses of two influential models of WM, namely, Baddeley and Hitch's (1974) Multi-component model of WM (M-WM) and Ericsson and Kintsch's Long-term Working Memory (LTWM) model (1995). The M-WM postulates that “long-term information is downloaded into a separate temporary store, rather than simply activated in LTM", a mechanism which permits WM to “manipulate and create new representations, rather than simply activating old memories" (Baddeley, 2003). Interpreting PLIF through the lens of the M-WM model amounts to the value for IT being chosen (and, if time permits, updated so as to obtain tighter bounds) by the central executive in the M-WM and the submodel being incrementally “retrieved" from LTM into M-WM's episodic buffer. Interpreting PLIF through the lens of the LTWM model amounts to having no retrieval from LTM into WM and the submodel suggested by Lemma 1 being merely “activated in LTM" and, in that sense, being simply “consulted" in LTM. In sum, PLIF is compatible with both of the narratives provided by the M-WM and LTWM models.
A number of predictions follow from PL and PLIF. For instance, PLIF makes the following prediction: Prompted with a predictive or a diagnostic query (i.e., ( e| c) and ( c| e), respectively), subjects should not retrieve any of the effects of e. Introspectively, this prediction seems plausible, and can be tested, using a similar approach to <cit.>, by asking subjects to “think aloud" while engaging in predictive or diagnostic reasoning. Also, PL yields the following prediction: Upon intervening on cause c, subjects should be sensitive to when effect e will occur, even in settings where they are not particularly instructed to attend to such temporal patterns. This prediction is supported by recent findings suggesting that people do have expectations about the delay length between cause and effect <cit.>.
There is a growing acknowledgment in the literature that, not only time and causality are intimately linked, but that they mutually constrain each other in human cognition <cit.>. In line with this view, we see our work also as an attempt to formally articulate how time could guide and constrain causal reasoning in cognition. While many questions remain open, we hope to have made some progress towards better understanding of the CFP at the algorithmic level.
§ ACKNOWLEDGMENTS
We are grateful to Thomas Icard for valuable discussions. We would also like to thank Marcel Montrey and Peter Helfer for helpful comments on an earlier draft of this work. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under grant RGPIN 262017.
§ SUPPLEMENTARY INFORMATION
§.§ S-I Proof of Lemma 1:
Simple use of the total probability lemma yields:
ℙ(O|E)=∑_S∈ Val( S)ℙ(O|S,E)ℙ(S|E) .    (S1)
Equation (S1) immediately reveals a simple fact, namely, that ℙ(O|E) is a linear combination of the members of the set {ℙ(O|S,E)}_S∈ Val( S), an observation which grants the validity of the expression given in (<ref>) in the main text.
The key point which is left to be shown is the following: (Q.1) Why can the bounds given in (<ref>) be computed using the submodel retrieved in the process of obtaining the corresponding R_ T for the adopted IT T<p_l^∗? This is where the notion of PL comes into play. To articulate the intended line of reasoning let us introduce some notations first. According to Def. 3, any chosen IT T induces an IT-RS R_ T. Let us partition the set of evidence variables E into three mutually disjoint sets E_T^+, E_T, and E_T^-, where E_T denotes the set of variables in E which belong to the IT-RS R_ T (i.e., E_T:≜ E∩ R_ T), E_T^+ denotes the set of variables in E with PLs ≥ T, and finally, E_T^- denotes the set of variables in E which are neither in E_T nor in E_T^+ (i.e., E_T^-:≜ E∖( E_T∪ E_T^+)). Note that, by construction, the PLs of the variables in E_T^- are less than the adopted IT T, hence the adopted notation. For example, for the setting depicted in Fig. <ref>(b) (corresponding to the IT T=p_l( x)-ϵ), E_T=∅, E_T^+=∅, and E_T^-={ y}. Also, for the setting depicted in Fig. <ref>(d) (corresponding to the IT T=p_l( t_2)-ϵ), E_T={ y}, E_T^+=∅, and E_T^-=∅. Next, we present a key result as a lemma.
Lemma S.1. Let ℙ(O|E) denote the posed causal query. For any chosen IT T<p_l^∗ and its corresponding IT-RS R_ T, the following conditional independence relation holds:
( O ⊥ E_T^-| R_ T∪ E_ T^+) .    (S2)
Proof. The relations between the PLs of the variables involved in the statement (S2) ensures that, according to d-separation criterion (Pearl, 1988), conditioning on the variables in R_ T∪ E_ T^+ blocks all the paths between the variables in O and E_T^-, hence follows (S2).
The following two-part argument responds to the question posed in (Q.1) in the affirmative. First, notice that:
ℙ(O|S,E) = ℙ(O|S,E_ T,E_ T^-,E_ T^+) = ℙ(O|R_ T, E_ T^-,E_ T^+) = ℙ(O|R_ T,E_ T^+) ,    (S3)
where the last equality follows from (S2).
Second, note that the process of obtaining R_ T, namely, moving backwards from the variables in O∪ E_ T^+ until R_ T is reached, ensures that the submodel retrieved in this process suffices for the derivation of ℙ(O| R_ T, E_ T^+). Using the approach introduced in <cit.> for identifying the relevant information for the derivation of a query in a Bayesian network, this follows from the following fact: Conditioned on R_ T∪ E_ T^+, the set O is d-separated from all the nodes in the set An( O∪ E)∖ R_ T whose PLs are less than the adopted IT T. Note that An( O∪ E) denotes the ancestral graph for the nodes in O∪ E. This completes the proof. ▪
§.§ S-II The Rationale behind Remark 1:
Case (i) and Case (iii) immediately follow from Lemma 1 in the main text. Case (ii) implies that all the ancestors of variables in O∪ E are retrieved, hence the sufficiency of the retrieved submodel for the exact derivation of the query; see also Sec. S-III.
§.§ S-III On the Special Case of Having p_l^∗=T_0:
In such circumstances, to derive ℙ(O|E), the set of all the ancestors of variables in O∪ E should be retrieved and then inference should be carried out on the retrieved submodel.
|
http://arxiv.org/abs/1701.07511v1 | 20170125223547 | Mini-BFSS in Silico | [
"Tarek Anous",
"Cameron Cogburn"
] | hep-th | [
"hep-th"
] |
MIT-CTP-4877
Mini-BFSS in Silico
Tarek Anous^1,2 and Cameron Cogburn^3
^1Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, B.C. V6T 1Z1, Canada
^2Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
^3Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
tarek@phas.ubc.ca, ccogburn@mit.edu
We study a mass-deformed 𝒩=4 version of the BFSS matrix model with three matrices and gauge group SU(2). This model has zero Witten index. Despite this, we give numerical evidence for the existence of four supersymmetric ground states, two bosonic and two fermionic, in the limit where the mass deformation is tuned to zero.
§ INTRODUCTION
This paper concerns itself with the supersymmetric quantum mechanics of three bosonic SU(N) matrices and their fermionic superpartners. The model in question, introduced in <cit.>, has four supercharges and describes the low energy effective dynamics of a stack of N wrapped D-branes in a string compactification down to 3+1 dimensions. When the compactification manifold has curvature and carries magnetic fluxes, the bosonic matrices obtain masses<cit.>. When the compact manifold is Calabi-Yau and carries no fluxes, the matrices are massless.
This theory has flat directions whenever the matrices are massless, and hence is a simplified version of the BFSS matrix model <cit.>, which, for the sake of comparison, has nine bosonic SU(N) matrices and 16 supercharges and describes the non-Abelian geometry felt by D-particles in a non-compact 9+1 dimensional spacetime. We hence dub the model studied here: mini-BFSS (or mini-BMN <cit.> in the massive case). The Witten index W_I has been computed for mini-BFSS <cit.> and vanishes, meaning that the existence of supersymmetric ground states is still an open question. Even the refined index, twisted by a combination of global symmetries and calculated in <cit.>, gives us little information about the set of ground states due to the subtleties associated with computing such indices in the presence of flat directions in the potential. This is in stark contrast with the full BFSS model, whose Witten index W_I=1, implying beyond doubt the existence of at least one supersymmetric ground state. The zero index result for mini-BFSS has led to the interpretation that it may not have any zero energy ground states <cit.>, and hence no holographic interpretation. The logic being that, without a rich low energy spectrum, scattering in mini-BFSS would not mimic supergraviton scattering in a putative supersymmetric holographic dual <cit.>.
Of course
a vanishing W_I does not confirm the absence of supersymmetric ground states—as there may potentially be an exact degeneracy between the bosonic and fermionic states at zero energy.
We weigh in on the existence of supersymmetric states in mini-BFSS by solving the Schrödinger equation numerically for the low-lying spectrum of the N=2 model, in the in silico spirit of <cit.>. To deal with the flat directions we numerically diagonalize the Hamiltonian of the mass-deformed mini-BMN matrix model, for which the flat directions are absent, and study the bound state energies as a function of the mass. A numerical analysis of mini-BFSS can also be found in <cit.> which use different methods.
What we uncover is quite surprising. As we tune the mass parameter m to zero, we find evidence for four supersymmetric ground states, two bosonic and two fermionic, which cancel in the evaluation of W_I. This result seems to agree with plots found in <cit.>. It must be said that our result does not constitute an existence proof for supersymmetric threshold bound states in the massless limit, but certainly motivates a further study of the low-lying spectrum of these theories.
The organization of the paper is as follows: in section <ref> we present the supercharges, Hamiltonian and symmetry generators of the mini-BMN model for arbitrary N. In section <ref> we restrict to N=2 and give coordinates in which the Schrödinger equation becomes separable. In section <ref> we provide our numerical results and in section <ref> we derive the one-loop effective theory on the moduli space in the massless theory. We conclude with implications for the large-N mini-BFSS model in section <ref>. We collect formulae for the Schrödinger operators maximally reduced via symmetries in appendix <ref> and compute the one-loop metric on the Coulomb branch moduli space in appendix <ref>.
§ SETUP
§.§ Supercharges and Hamiltonian
Let us consider a supersymmetric quantum mechanics of SU(N) bosonic matrices X^i_A and their superpartners λ_Aα. The quantum mechanics we have in mind has four supercharges:[Spinors and their conjugates transform respectively in the 2 and 2̅ of SO(3). Spinor indices are raised and lowered using the Levi-Civita symbol ϵ^αβ=-ϵ_αβ with ϵ^12=1. Thus in our conventions:
(ψ̅ϵ)_α=ψ̅^γϵ_γα , (ϵψ)^α=ϵ^αγψ_γ , ϵ_αωϵ^ωβ=δ_α^ β .
]
Q_α=(-i∂_X^i_A-i m X^i_A-i W^i_A)σ^i γ_αλ_Aγ , Q̅^β=λ̅^ γ_Aσ^i β_γ(-i∂_X^i_A+i m X^i_A+i W^i_A) .
The parameter m is simply the mass of X^i_A. The massless version of this model was introduced in <cit.> and can be derived by dimensionally reducing 𝒩=1, d=4 super Yang-Mills to the quantum mechanics of its zero-modes. The mass deformation was introduced in <cit.>, and can be obtained from a dimensional reduction of the same gauge theory on R× S^3. We direct the reader to <cit.> for an introduction to these models. This quantum mechanics should be thought of as a simplified version of the BMN matrix model <cit.> (mini-BMN for brevity). The massless limit should then be thought of as a mini-BFSS matrix model <cit.>. The lowercase index i=1,…,3 runs over the spatial dimensions (in the language of the original gauge theory), and the uppercase index A=1,…,N^2-1, runs over the generators of the gauge group SU(N). The σ^i are the Pauli matrices and greek indices run over α=1,2. In keeping with <cit.>, we have defined W^i_A≡∂ W/∂ X^i_A where
W≡g/6f_ABC ϵ_ijk X^i_A X^j_B X^k_C,
and f_ABC are the structure constants of SU(N).
The gauginos obey the canonical fermionic commutation relations {λ_Aα,λ̅^β_B}=δ_ABδ_α^ β, and hence the algebra generated by these supercharges is <cit.>
{Q_α,Q̅^β}=2(δ_α^ β H-g σ^k β_α X^k_A G_A+ m σ^k β_α J^k) , {Q̅^α,Q̅^β}={Q_α,Q_β}=0 ,
with Hamiltonian:
H≡ -1/2∂_X^i_A∂_X^i_A+1/2m^2 (X^i_A)^2+m X^i_A W^i_A+g^2/4(f_ABC X^i_B X^j_C)^2-3/4m[λ̅_A,λ_A]+ig f_ABCλ̅_A X^k_B σ^kλ_C .
The operators G_A and J^k appearing in the algebra are, respectively, the generators of gauge transformations and SO(3) rotations. These are given by:
G_A≡ -i f_ABC(X^i_B ∂_X^i_C+λ̅_Bλ_C) , J^i≡-i ϵ_ijk X^j_A ∂_X^k_A+1/2λ̅_Aσ^iλ_A .
In solving for the spectrum of this theory, we must impose the constraint G_A|ψ⟩=0 , ∀ A. In the above expressions, whenever fermionic indices are suppressed, it implies that they are being summed over.
Let us briefly note the dimensions of the fields and parameters in units of the energy [ℰ]=1. These are [X]=-1/2, [λ]=0, [g]=3/2 and [m]=1. Therefore, an important role will be played by the dimensionless quantity
ν≡m/g^2/3 .
We consider here the mass deformed gauge quantum mechanics because, in the absence of the mass parameter m, the classical potential has flat directions (see figure <ref>). Turning on this mass deformation gives us a dimensionless parameter ν, to tune in studying the spectrum of this theory, and allows us to approach the massless limit from above.
§.§ Symmetry algebra
Let us now give the symmetry algebra of the theory. The components of J⃗ satisfy:
[J^i,J^j]=i ϵ_ijk J^k , [J^i,Q_α]=-1/2σ^i γ_α Q_γ ,
[J⃗^ 2,J^i]=0 , [J^i,Q̅^α]=1/2Q̅^β σ^i α_β .
There is an additional U(1)_R generator R≡λ̅_Aλ_A which counts the number of fermions. It satisfies
[R,Q_α]=-Q_α , [R,Q̅^α]=+Q̅^α , [R,J^i]=0 .
The Hamiltonian also has a particle-hole symmetry:
λ̅^α_A→ϵ^αγλ_Aγ , λ_Aα→λ̅_A^γϵ_γα , ϵ^12=-ϵ_12=1 ,
where ϵ^αβ is the Levi-Civita symbol. This transformation leaves the Hamiltonian invariant but takes R→ 2(N^2-1)-R and effectively cuts our problem in half.
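For the numerics, it is convenient to realize the six fermionic modes λ_Aα (A=1,…,3 and α=1,2) explicitly on the 2^6=64-dimensional fermionic Hilbert space via a Jordan-Wigner construction. The numpy sketch below is our own implementation detail (the mode ordering is an arbitrary choice); it checks the canonical algebra and builds the fermion number R featuring in the particle-hole map:

```python
import numpy as np

# Jordan-Wigner realization of the six fermionic modes lambda_{A alpha}
# (A = 1..3, alpha = 1, 2) on the 2^6 = 64 dimensional fermionic Hilbert
# space; the mode ordering is an arbitrary choice made for this sketch.
sz = np.diag([1.0, -1.0])
a  = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation operator
I2 = np.eye(2)

def mode(n, total=6):
    """Annihilation operator for mode n, dressed with the JW string of sz's."""
    ops = [sz] * n + [a] + [I2] * (total - n - 1)
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

lam = {(A, al): mode(2 * A + al) for A in range(3) for al in range(2)}

# Canonical algebra {lambda, lambda-bar} = delta, and the fermion number
# R = lambda-bar lambda with spectrum 0..6; particle-hole sends R -> 6 - R.
for k1, l1 in lam.items():
    for k2, l2 in lam.items():
        anti = l1 @ l2.conj().T + l2.conj().T @ l1
        assert np.allclose(anti, np.eye(64) if k1 == k2 else 0.0)
R = sum(l.conj().T @ l for l in lam.values())
assert set(np.rint(np.diag(R)).astype(int)) == set(range(7))
```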
One peculiar feature of the mass deformed theory is that the supercharges do not commute with the Hamiltonian as a result of the vector J⃗ appearing in (<ref>). It is easy to show that
[H,Q_α]=m/2Q_α , [H,Q̅^β]=-m/2Q̅^β .
Thus, acting with a supercharge increases/decreases the energy of a state by ±m/2. This is a question of R-frames, as discussed in <cit.>. Essentially we can choose to measure energies with respect to the shifted Hamiltonian H_m≡ H+m/2R, which commutes with the supercharges, and write the algebra as:
{Q_α,Q̅^β}=2{δ_α^ β (H_m-m/2R)-g σ^k β_α X^k_A G_A+ m σ^k β_α J^k} .
§.§ Interpretation as D-particles
The ν→ 0 limit of this model can be thought of as the worldvolume theory of a stack of N D-branes compactified along a special Lagrangian cycle of a Calabi-Yau three-fold <cit.>. The X^i_A then parametrize the non-Abelian geometry felt by the compactified D-particles in the remaining non-compact 3+1 dimensional asymptotically flat spacetime. The addition of the mass parameter corresponds to adding curvature and magnetic fluxes to the compact manifold <cit.>
changing the asymptotics of the non-compact spacetime to AdS_4. This interpretation was argued in <cit.> and passes several consistency checks. Hence we should think of the mass deformed theory as describing the non-relativistic dynamics of D-particles in an asymptotically AdS_4 spacetime and the massless limit as taking the AdS radius to infinity in units of the string length.
To be more specific, it will be useful to translate between our conventions and the conventions of <cit.>. One identifies m=Ω, g^2=1/m_v, {X,λ}_ us= m_v^1/2{X,λ}_ them in units where the string length l_s=1. Reintroducing l_s, this dictionary implies that g^2=g_s/(l_s^3√(2π)), with g_s the string coupling, gets set by a combination of the magnetic fluxes threading the compact manifold and similarly ℓ_ AdS≡ 1/m gets set by a combination of these magnetic fluxes and the string length. For AdS_4×CP^3 compactifications dual to ABJM this was worked out in detail in <cit.> and they identify
g_s=(32π^2N/k^5)^1/4 , ℓ_ AdS=(N/8π^2 k)^1/4l_s ,
where k and N are, respectively, integrally quantized magnetic 2-form and 6-form flux. In this example taking ν=√(2π)(k^2/N)^1/3→ 0 while keeping g_s fixed takes the AdS radius to infinity in units of l_s.
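As a quick in silico check of this dictionary (a verification of our own, with arbitrary test values of the fluxes), one can confirm numerically that the quoted g_s and ℓ_AdS indeed combine into ν=√(2π)(k^2/N)^1/3:

```python
import numpy as np

ls = 1.0                                            # string length (units)
for N, k in [(100, 2), (10**4, 3), (10**6, 5)]:     # arbitrary test fluxes
    gs = (32 * np.pi**2 * N / k**5) ** 0.25
    lAdS = (N / (8 * np.pi**2 * k)) ** 0.25 * ls
    g2 = gs / (ls**3 * np.sqrt(2 * np.pi))          # g^2 = g_s/(l_s^3 sqrt(2 pi))
    nu = (1 / lAdS) / g2 ** (1 / 3)                 # nu = m/g^(2/3), m = 1/l_AdS
    assert np.isclose(nu, np.sqrt(2 * np.pi) * (k**2 / N) ** (1 / 3))
```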
The main focus of the next sections is on whether this stack of D-particles forms a supersymmetric bound state, particularly in the ν→ 0 limit. There the Witten index W_I≡Tr_ℋ{ (-1)^R e^-β H} has been computed <cit.> and evaluates to zero. This is in contrast with the full BFSS matrix model, whose index is W_I=1, confirming the existence of a supersymmetric ground state. We will use the numerical approach of <cit.> and verify if supersymmetry is preserved or broken in the SU(2) case. We find evidence that supersymmetry is preserved in the ν→ 0 limit, and that there are precisely 4 ground states contributing to the vanishing Witten index.
§ QUANTIZING THE SU(2) THEORY
§.§ Polar representation of the matrices
We are aiming to solve the Schrödinger problem H_m|ψ⟩=ℰ_m|ψ⟩. We will not be able to do this for arbitrary N and from here on we will restrict to gauge group SU(2) for which the structure constants f_ABC=ϵ_ABC. In this case the wavefunctions depend on 9 bosonic degrees of freedom tensored into a 64-dimensional fermionic Hilbert space. It is thus incumbent upon us to reduce this problem maximally via symmetry. In order to do so, we exploit the fact that the matrices X^i_A admit a polar decomposition as follows
X_A^i=L_AB Λ_B^j M^T ji
with
L≡ e^-i φ_1 ℒ^3e^-i φ_2 ℒ^2e^-i φ_3 ℒ^3 , M≡ e^-i ϑ_1 ℒ^3e^-i ϑ_2 ℒ^2e^-i ϑ_3 ℒ^3 ,
and [ℒ^i]_jk≡-iϵ_ijk are the generators of SO(3). The diagonal matrix
Λ≡diag(𝐱_1,𝐱_2,𝐱_3)
represents the spatial separation between the pair of D-branes in the stack. The φ_i and ϑ_i represent the (respectively gauge-dependent and gauge-independent) Euler-angle rigid body rotations of the configuration space. This parametrization is useful because the Schrödinger equation is separable in these variables, as we show in appendix <ref>.
The metric on configuration space can be re-expressed as:
∑_A,idX_A^i dX_A^i =∑_a=1^3 d𝐱^2_a+ I_a(dΩ_a^2+dω_a^2)-2K_a dΩ_a dω_a ,
I_a ≡𝐱_b 𝐱_b-𝐱_a^2 , K_a≡|ϵ_abc| 𝐱_b 𝐱_c .
The angular differentials are the usual SU(2) Cartan-Maurer differential forms defined as follows:
dω_a=-1/2ϵ_abc [L^T· dL]_bc , dΩ_a=-1/2ϵ_abc[M^T· dM]_bc .
The volume element used to compute the norm of the wavefunction is
∏_i,A dX_A^i=Δ(𝐱_a)∏_i=1^3 d𝐱_isinφ_2∏_j=1^3 dφ_j sinϑ_2∏_k=1^3 dϑ_k ,
where Δ(𝐱_a)≡(𝐱_1^2-𝐱_2^2)(𝐱_3^2-𝐱_2^2)(𝐱_3^2-𝐱_1^2) is the Vandermonde determinant with squared eigenvalues. To cover the configuration space correctly, we take the new coordinates to lie in the range <cit.>:
𝐱_3≥𝐱_1≥|𝐱_2|≥0 , π≥φ_2 ,ϑ_2≥0 , 2π≥φ_i≠2 ,ϑ_i≠2≥0 .
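As a cross-check, this decomposition can be realized numerically with a singular value decomposition. The following Python sketch is illustrative and not part of the original analysis; it reorders and sign-fixes the numpy SVD factors so that L, M ∈ SO(3) and the diagonal entries obey the ordering above:

```python
import numpy as np

def polar_decompose(X):
    """Split a real 3x3 matrix as X = L @ diag(x) @ M.T with L, M in SO(3).

    numpy's SVD returns descending singular values s0 >= s1 >= s2 >= 0, so we
    reorder to the fundamental domain x3 >= x1 >= |x2| >= 0 used in the text
    and absorb any orientation-reversing signs into x2."""
    U, s, Vt = np.linalg.svd(X)            # X = U @ diag(s) @ Vt
    perm = [1, 2, 0]                       # (x1, x2, x3) = (s1, s2, s0)
    L, x, M = U[:, perm], s[perm], Vt.T[:, perm]
    if np.linalg.det(L) < 0:               # flip a column (and x2's sign)
        L[:, 1] *= -1; x[1] *= -1          # to land in SO(3)
    if np.linalg.det(M) < 0:
        M[:, 1] *= -1; x[1] *= -1
    return L, np.diag(x), M

X = np.random.randn(3, 3)
L, Lam, M = polar_decompose(X)
assert np.allclose(X, L @ Lam @ M.T)
```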
The generators of gauge-transformations G_A and rotations J^i are given in (<ref>). These satisfy
[J^i,J^j]=i ϵ_ijk J^k , [G_A,G_B]=i ϵ_ABC G_C ,
[J⃗^ 2,J^i]=0 , [J^i,G_A]=0 .
To label the SU(2)_gauge× SO(3)_J representations of the wavefunctions, it is useful to define the “body fixed" angular momentum and gauge operators P⃗≡ M^-1·J⃗ and S⃗≡ L^-1·G⃗, which satisfy
P⃗^2=J⃗^2 , S⃗^2=G⃗^2 ,
[P^i,P^j]=-i ϵ_ijk P^k , [S_A,S_B]=-i ϵ_ABC S_C ,
[P^i,J^j]=0 , [S_A,G_B]=0 .
Unlike the generators of angular momentum, P⃗ is not conserved. However, as we explain in appendix <ref>, it is still useful for separating variables.
Let us give expressions for the bosonic parts of J⃗ and P⃗, which we call 𝒥⃗ and 𝒫⃗ respectively, in terms of the angular coordinates. These are:
𝒥^1 =-i(-cosϑ_1 cotϑ_2 ∂_ϑ_1-sinϑ_1∂_ϑ_2+cosϑ_1/sinϑ_2 ∂_ϑ_3) ,
𝒥^2 =-i(-sinϑ_1 cotϑ_2 ∂_ϑ_1+cosϑ_1∂_ϑ_2+sinϑ_1/sinϑ_2 ∂_ϑ_3) ,
𝒥^3 =-i ∂_ϑ_1 ,
and
𝒫^1 =-i(-cosϑ_3/sinϑ_2 ∂_ϑ_1+sinϑ_3∂_ϑ_2+cotϑ_2 cosϑ_3 ∂_ϑ_3) ,
𝒫^2 =-i( sinϑ_3/sinϑ_2 ∂_ϑ_1+cosϑ_3∂_ϑ_2-cotϑ_2 sinϑ_3 ∂_ϑ_3) ,
𝒫^3 =-i ∂_ϑ_3 .
Similarly let us define 𝒢_A and 𝒮_A as the bosonic parts of the G_A and S_A operators. The 𝒢_A are related to the 𝒥^i by replacing ϑ_i→φ_i. It is easy to guess that the 𝒮_A are then related to the 𝒫^i via the same replacement.
We are now ready to give expressions for the momentum operators and the kinetic energy operator in terms of the new variables. These are <cit.>:
-i∂_X^i_A =-iL_AaM^ib{δ_ab ∂_𝐱_a+iϵ_abc/𝐱_a^2-𝐱_b^2(𝐱_a 𝒫^c+𝐱_b 𝒮_c)} ,
-1/2∂_X^i_A∂_X^i_A =-1/2Δ∂_𝐱_aΔ∂_𝐱_a+1/2∑_a=1^3I_a(𝒫^a2+𝒮_a^2)+2K_a 𝒫^a𝒮_a/I_a^2-K_a^2 .
It is also straightforward to write down the bosonic potential V in terms of the new variables:
V=1/2m^2 𝐱_a 𝐱_a+3g m 𝐱_1𝐱_2𝐱_3+g^2/2(𝐱_1^2𝐱_2^2+𝐱_1^2𝐱_3^2+𝐱_2^2𝐱_3^2) .
As expected it is independent of the angular variables. We have depicted constant potential surfaces in figure <ref>.
Apart from the coordinates 𝐱_a the following non-linear coordinates will often appear in the equations below:
𝐲_a≡I_a/I_a^2-K_a^2=1/2|ϵ_abc|𝐱_b^2+𝐱_c^2/(𝐱_b^2-𝐱_c^2)^2 , 𝐳_a≡K_a/I_a^2-K_a^2=|ϵ_abc|𝐱_b 𝐱_c/(𝐱_b^2-𝐱_c^2)^2 .
With these definitions the kinetic term can be written as:
-1/2∂_X^i_A∂_X^i_A=-1/2Δ∂_𝐱_aΔ∂_𝐱_a+1/2[𝐲_a(𝒫^a2+𝒮_a^2)+2 𝐳_a 𝒫^a𝒮_a] .
Notice that the term ∑_a=1^3𝐲_a 𝒫^a2 is the kinetic energy of a rigid rotor with principal moments of inertia 𝐲_a^-1.
Unlike the c=1 matrix model, the angular-independent piece of the kinetic term can not be trivialized by absorbing a factor of √(Δ) into the wavefunction <cit.>. Instead we have:
-1/2Δ∂_𝐱_aΔ∂_𝐱_a=-1/2(1/√(Δ)∂_𝐱_a^2√(Δ)+T) ,
where
T≡∑_a=1^3𝐲_a=𝐱_1^2+𝐱_2^2/(𝐱_1^2-𝐱_2^2)^2+𝐱_1^2+𝐱_3^2/(𝐱_1^2-𝐱_3^2)^2+𝐱_2^2+𝐱_3^2/(𝐱_2^2-𝐱_3^2)^2 ,
and its appearance in the Schrödinger equation acts as an attractive effective potential between the 𝐱_a.
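The nontrivial content of this rewriting is the identity ∂_𝐱_a∂_𝐱_a√Δ = -T√Δ, which can be checked symbolically. A small sympy sketch (an independent check, not part of the original derivation):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
Delta = (x1**2 - x2**2) * (x3**2 - x2**2) * (x3**2 - x1**2)
T = ((x1**2 + x2**2) / (x1**2 - x2**2)**2
     + (x1**2 + x3**2) / (x1**2 - x3**2)**2
     + (x2**2 + x3**2) / (x2**2 - x3**2)**2)
# laplacian(sqrt(Delta)) + T * sqrt(Delta) should vanish identically
expr = sum(sp.diff(sp.sqrt(Delta), xa, 2) for xa in (x1, x2, x3)) \
       + T * sp.sqrt(Delta)
print(sp.simplify(expr))   # -> 0
```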
§.§ Gauge-invariant fermions
Because the operators G_A in (<ref>) have a nontrivial dependence on the gauginos λ_Aα it is not sufficient to suppress the wavefunction's dependence on gauge angles φ_i entirely. Instead we can write down a set of gauge-invariant fermions that will contain the entire dependence on the gauge angles <cit.>:
χ_Aα≡ L_BAλ_Bα , χ̅_A^β≡ L_BAλ̅_B^β .
These satisfy {χ_Aα,χ̅_B^β}=δ_ABδ_α^ β, but no longer commute with bosonic derivatives.
Defining σ̃^i β_α≡ M^jiσ^j β_α, we can now write the supercharges in terms of the new parametrization. These are:
Q_α=-i σ̃_α^b γχ_aγ(δ_ab{∂_𝐱_b+m 𝐱_b+g/2|ϵ_bst| 𝐱_s𝐱_t}+iϵ_abc/𝐱_a^2-𝐱_b^2(𝐱_a 𝒫^c+𝐱_b 𝒮_c)) ,
Q̅^β=-i χ̅_a^γ σ̃_γ^b β(δ_ab{∂_𝐱_b-m 𝐱_b-g/2|ϵ_bst| 𝐱_s𝐱_t}+iϵ_abc/𝐱_a^2-𝐱_b^2(𝐱_a 𝒫^c+𝐱_b 𝒮_c)) ,
where we have put the gauge-invariant fermions to the left so as to remind the reader that the bosonic derivatives are not meant to act on them in the supercharges. The Hamiltonian H (not H_m) in the new parametrization is:
H=-1/2Δ∂_𝐱_aΔ∂_𝐱_a+1/2[𝐲_a(𝒫^a2+𝒮_a^2)+2 𝐳_a 𝒫^a𝒮_a]
+1/2m^2 𝐱_a 𝐱_a+3g m 𝐱_1𝐱_2𝐱_3+g^2/2(𝐱_1^2𝐱_2^2+𝐱_1^2𝐱_3^2+𝐱_2^2𝐱_3^2)-3/4m[χ̅_A,χ_A]+ig ϵ_AkCχ̅_A 𝐱_k σ̃^kχ_C .
§ NUMERICAL RESULTS
In order to calculate the spectrum of the Hamiltonian (<ref>), we must reduce our problem using symmetry, that is we should label our states via the maximal commuting set of conserved quantities: H_m, J^3 ,J⃗^ 2, R. Because of the discrete particle-hole symmetry (<ref>) we need only consider R=0,…,3. In appendix <ref> we construct gauge-invariant highest-weight representations of SO(3)_J in each R-charge sector. This means we fix the wavefunctions' dependence on the angles ϑ_i and φ_i and provide the reduced Schrödinger operators that depend only on 𝐱_a.[We only provide a small set of these reduced Schrödinger operators, as they increase in size with increasing SO(3)_J eigenvalue j.]
Our numerical results for the lowest energy states of H_m for each R and j are presented in Table <ref> and were obtained by inputting the restricted Schrödinger equations of appendix <ref> into Mathematica's NDEigenvalues command, which uses a finite element approach to solve for the eigenfunctions of a coupled differential operator on a restricted domain. We have labeled each row by the fermion number R and each column by the SO(3)_J highest weight eigenvalue j (i.e. J⃗^2|ψ⟩=j(j+1)|ψ⟩ and J^3|ψ⟩=j|ψ⟩).
A few comments are in order:
* The most striking feature of these plots is the seeming appearance of zero energy states for (R,j)=(2,0) and (R,j)=(3,1/2) as ν→0. Since the Witten index W_I=0, and since the states in the (2,0) and (3,1/2) sectors seem to have nonzero energy for any finite ν, it must be the case that these states are elements of the same supersymmetry multiplet. This must be so by the deformation invariance of W_I.
* Since we know, by construction, that the lowest energy (R,j)=(2,0) and (R,j)=(3,1/2) states are related by supersymmetry, we can use the difference in their numerically-obtained energies as a benchmark of our numerical errors. Obtaining the (R,j)=(2,0) ground state energy required solving a coupled Schrödinger equation involving 15 functions in 3 variables. For the (R,j)=(3,1/2) state, the number of functions one is numerically solving for jumps to 40. In the latter case, it was difficult to reduce our error (either by refining the finite element mesh, or increasing the size of the domain) in a significant way without Mathematica crashing. This is despite the fact that we had 12 cores and 64 Gb of RAM at our disposal. In figure <ref> we plot the percentage error in the H_m energy difference between these two states as a function of ν. We find that the energy difference between these states is around 13% of the total energy as a function of ν. For comparison, we also do this for the lowest (R,j)=(0,0) and (R,j)=(1,1/2) states, where the numerics are more reliable as a result of solving a much simpler set of equations. There the difference between the computed energies is at most 2%.
* Our results suggest that there are 4 supersymmetric states, two of which are bosonic and two which are fermionic, which would cancel in the evaluation of the index. Explicitly, the two bosonic states are j=0 singlets in the R=2 and R=4 sectors (recall the discrete particle hole symmetry of the theory) and the two fermionic states are the j=1/2 doublet in the R=3 sector. It is interesting to note that there aren't more states in this multiplet, for example numerically studying the (R,j)=(0,1) sector reveals no evidence for a supersymmetric state in the ν→ 0 limit.
* The massless SU(2) model was studied using a different numerical approach in <cit.> and their plots for the ground state energies seem to approach ours, particularly figures 2 and 5 of <cit.>.
* Our numerical evidence for these supersymmetric states does not constitute a proof since we will never be able to numerically resolve if this state has exactly zero energy. However, the result is highly suggestive of a supersymmetry preserving set of states at ν=0 and there is no contradiction with the analytically obtained Witten index result W_I=0. It would be interesting to analyze the existence of these states analytically in future work.
§ EFFECTIVE THEORY ON THE MODULI SPACE OF THE SU(2) MODEL
In order to get a better handle on the previous section's numerical results, we will now study the ν→0 limit of the matrix model analytically. Since the full problem is clearly quite difficult even for N=2, we will study the massless model in some parametric limit. This is possible because the theory has a moduli space[Also sometimes called a Coulomb branch.]—a flat direction where the D-branes can become well separated, and along this moduli space certain fields become massive and can be integrated out. We will parametrize this moduli space by the coordinates (𝐱_3,ϑ_2,ϑ_1) and will henceforth label them (𝐱_3,ϑ_2,ϑ_1)→(r,θ,ϕ) for the remainder of this section. The parametric limit we will take is the limit of large r.
To derive the effective theory along the moduli space we will first take (r,θ,ϕ) to be slowly varying and expand H=H^(0)+H^(1)+… in inverse powers of the dimensionless quantity g r^3. We will compute the effective Hamiltonian in perturbation theory by integrating out the other fields in their ground state, in which (r,θ,ϕ) appear as parameters. Similar analysis to this was performed in <cit.>. Defining ∂⃗≡(∂_𝐱_1,∂_𝐱_2), the Hamiltonian, to lowest order, is
H^(0)≡-1/2(𝐱_1^2-𝐱_2^2)∂⃗·(𝐱_1^2-𝐱_2^2)∂⃗-1/2(𝐱_1^2-𝐱_2^2)^2[(𝐱_1^2+𝐱_2^2)(∂_ϑ_3^2+∂_φ_3^2)+4 𝐱_1 𝐱_2 ∂_ϑ_3∂_φ_3]
+g^2/2r^2(𝐱_1^2+𝐱_2^2)-i g r ϵ_3DE χ̅_D σ̃^3 χ_E ,
where σ̃^i β_α≡ M^jiσ^j β_α depends explicitly on (ϑ_1,ϑ_2,ϑ_3). It is straightforward to show that H^(0) admits a zero energy ground state given by:
Ψ^(0)=g r/π√(32) e^-g/2 r (𝐱_1^2+𝐱_2^2)∑_B=1^2{χ̅_B ϵ χ̅_B-i∑_C=1^2ϵ_3BC χ̅_B σ̃^3(χ̅_Cϵ)}|0⟩ ,
where |0⟩ is the fermionic vacuum and we have normalized Ψ^(0) with respect to
∫_0^∞ d𝐱_1∫_-𝐱_1^𝐱_1d𝐱_2∫_0^2πdϑ_3∫_0^2πdφ_3 (𝐱_1^2-𝐱_2^2) .
Similarly we can expand the supercharges Q_α= Q_α^(0)+Q_α^(1)+…, where
Q^(0)_α ≡ -i∑_a,b=1^2σ̃_α^b γχ_aγ(δ_ab{∂_𝐱_b+g/2|ϵ_bst| 𝐱_s𝐱_t}+iϵ_abc/𝐱_a^2-𝐱_b^2(𝐱_a 𝒫^c+𝐱_b 𝒮_c)) ,
Q^(1)_α ≡-i σ̃_α^b γχ_3γ(δ_3b{∂_r+g/2|ϵ_3st| 𝐱_s𝐱_t}+iϵ_3bc/r(𝒫^c+𝒮_c)) .
It is easy to check that Q^(0)_αΨ^(0)=Q̅^(0)βΨ^(0)=0. We are now tasked with finding the effective supercharges Q^ eff_α= ⟨ Q^(1)_α⟩_Ψ^(0)+… that act on the massless degrees of freedom (r,θ,ϕ) along the moduli space. At lowest order we find the supercharges (acting on gauge-invariant wavefunctions) are those of a free particle in R^3 and its fermionic superpartner:
Q_α^ eff=-i ∇_𝐱⃗·σ⃗_α^ γ ψ_γ , Q̅^β_ eff=-i ψ̅^γ∇_𝐱⃗·σ⃗_γ^ β ,
where we have labeled (r,θ,ϕ) in cartesian coordinates as well as defined (ψ_α,ψ̅^β)≡(χ_3 α,χ̅_3^β). Since the remaining gauge angles (φ_1,φ_2) have no kinetic terms in the effective theory along the moduli space, we need not consider them as dynamical variables and can treat ψ_α as a fundamental field.
Let us now compute the effective theory to next order in perturbation theory. Instead of computing this in the operator formalism, let us first invoke symmetry arguments to constrain what the answer should look like. The low energy effective theory on the moduli space should be a supersymmetric theory with four supercharges and an SO(3) R-symmetry, therefore it should fall in the class discovered in <cit.>:
ℒ=1/2f(𝐱̇⃗̇^2+i(ψ̅ψ̇-ψ̇̅̇ψ) +D^2)+1/2(∇_k f) ϵ_klm 𝐱̇^l ψ̅ σ^m ψ
-D/2 (∇_𝐱⃗f)·ψ̅ σ⃗ ψ+1/4(∇_i∇_j f)(ψ̅ σ^i ψ)(ψ̅ σ^j ψ) ,
which is invariant under
δ𝐱⃗ =iψ̅ σ⃗ ξ-iξ̅ σ⃗ ψ
δψ_α =𝐱̇⃗̇·σ⃗_α^ βξ_β+iD ξ_α
δψ̅^β =𝐱̇⃗̇·ξ̅^ασ⃗_α^ β-iD ξ̅^β
δ D =-ψ̇̅̇ ξ-ξ̅ ψ̇ .
In order to preserve the SO(3) symmetry f should be a function of r≡ |𝐱⃗|. Notice that (<ref>) reduces to the theory of a free particle and its superpartner when f=1. Therefore we should find that at 1-loop order f=1+c/g r^3, since (g r^3)^-1 is our expansion parameter, with c to be determined. A calculation <cit.> reproduced in appendix <ref> gives c=-3/2 or
f=1-3/2g r^3 .
Analytic evidence for the numerically found supersymmetric ground states can be obtained by studying the Schrödinger problem associated with (<ref>). We do not do this here, but we can gain some intuition by studying the existence of normalizable zero-modes of the Laplacian on moduli-space <cit.>:
ds^2=(1-3/2g r^3)(dr^2+r^2 dΩ_2^2) .
We can construct two normalizable zero-modes as follows. The zero-form
ω_0≡∫^r dr' 1/r'^2(1-3/2g r'^3)^-1/2
is a zero-mode of the Laplacian, but is not normalizable. To construct normalizable forms, we take
ω_1≡ dω_0 , ω_2=⋆ ω_1 .
These are normalizable within the domain r∈[(3/2g)^1/3,∞]. Since there exist zero-modes in this toy-moduli-space approximation, it would be interesting to study the set of ground states of (<ref>) in more detail.
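Numerically, the norm of ω_1 on this metric reduces, up to the angular factor 4π, to ∫_r_0^∞ dr/(r^2√f) with r_0=(3/2g)^1/3, and the following scipy sketch (g = 1 is an illustrative choice) confirms that it is finite:

```python
import numpy as np
from scipy.integrate import quad

g = 1.0                                  # illustrative coupling
r0 = (1.5 / g) ** (1.0 / 3.0)            # f(r0) = 0
f = lambda r: 1.0 - 1.5 / (g * r**3)
# the endpoint singularity ~ (r - r0)^(-1/2) is integrable
norm2, err = quad(lambda r: 1.0 / (r**2 * np.sqrt(f(r))), r0, np.inf)
print(norm2, err)                        # finite -> omega_1 is normalizable
```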
§ DISCUSSION
In this paper we have studied the mini-BFSS/BMN model with gauge group SU(2) and uncovered numerical evidence for a set of supersymmetric ground states in the massless limit of the theory. In the massless limit the matrices can become widely separated. The effective theory on the moduli space has non-trivial interactions governed by a metric that gets generated on this moduli space at one loop. Let us now discuss what may happen in the SU(N) case at large N. The quartic interaction in (<ref>) can be rewritten as a commutator-squared interaction (f_ABC X^i_B X^j_C)^2∼Tr([X^i,X^j])^2, where X^i≡ X^i_A τ_A and τ_A are the generators in the fundamental of SU(N). Therefore, at tree-level, along the moduli space there will be a set of N-1 massless, non-interacting, point particles in R^3 (and their superpartners), each one corresponding to an element of the Cartan of SU(N). At one-loop there will be a correction to the moduli-space metric, depending on the relative distances between these particles. Just like in the SU(2) case these corrections will come at order |r_a-r_b|^-3. One difference, however, is that there may be an enhancement of order N to this correction. It would certainly be interesting to see if we can isolate the |r_a-r_b|^-3 corrections to the moduli-space metric by taking a large N limit, as can be done in the D0-D4 system <cit.> and in the three-node Abelian quiver <cit.>. Perhaps we can adapt the methods in <cit.> for these purposes. The analysis in <cit.> seems to suggest that such a decoupling limit at large N is possible.
Interestingly, it was shown in <cit.> that the one-loop effective action on the Coulomb branch of the three-node Abelian quiver exhibits an emergent conformal symmetry at large N. This conformal symmetry depends on the delicate balance between the form of the interaction potential and the metric on the moduli space, which has an |r_a-r_b|^-3 form similar to that in (<ref>). It would be interesting to establish whether the SU(N) generalization of the model studied in this paper also has a non-trivial conformal symmetry at infinite N, broken by finite N effects. We save this problem for future work, but list here some reasons why this would be worth studying:
* The BFSS matrix model has a holographic interpretation <cit.>. At large N it is dual to a background of D0 branes in type IIA supergravity. In BFSS there is no correction to the moduli space metric and neither side of this duality is conformal. The BFSS matrix model is thus a theory of the 10d flat space S-matrix. It would be interesting to understand the large N version of mini-BFSS in the context of holography along similar lines. Because of the large number of coupled degrees of freedom at large N and the reduced supersymmetry, the effective theory along the moduli space of mini-BFSS has a non-trivial metric and may potentially exhibit a non-trivial conformal fixed point along this moduli space, as happens for quiver quantum mechanics models with vector rather than matrix interactions <cit.>. To answer this question definitively we will need to compute the effective theory along the Coulomb branch for N≫ 2 and check whether it is conformal.
* New results have shown that a certain class of disordered quantum mechanics models, known as SYK for Sachdev-Ye-Kitaev, exhibits phenomenology of interest for near-extremal black holes (see <cit.> and references therein, as well as <cit.> for models without disorder). These phenomena include an emergent conformal symmetry in the IR, maximal chaos <cit.>, and a linear in T specific heat. Despite the successes of these models, they are not dual to weakly coupled gravity. BFSS is a large N gauged matrix quantum mechanics dual to weakly-coupled Einstein gravity but, as we previously mentioned, it does not have an emergent conformal symmetry and remains a model of D-particles in flat space. It would certainly be interesting if mini-BFSS fell in the universality class of quantum mechanics models with emergent conformal symmetry in the IR and maximal chaos, such as the SYK model and its non-disordered cousins, while remaining dual to weakly coupled gravity. Recently <cit.> advocated the study of such matrix models for similar reasons. In the same vein <cit.> studies classical chaos in BFSS numerically.
* If this model, like SYK, is at all related to the holography of near-extremal black holes, then we can try to study its S-matrix to gain some insight into the real time dynamics of black hole microstates. A numerical implementation of such a study in the context of similar supersymmetric quantum mechanics models with flat directions can be found in <cit.>.
* The slow moving dynamics of a class of BPS multi-black hole solutions in supergravity is a superconformal quantum mechanics <cit.> with no potential, provided a near horizon limit is taken. It would be interesting to understand if there is some limit in which the multi-black hole moduli space quantum mechanics and the large N matrix quantum mechanics on the moduli space coincide. Perhaps as a consequence of non-renormalization theorems as in <cit.>.
§ ACKNOWLEDGEMENTS
It is a pleasure to thank Dionysios Anninos, Frederik Denef, Felix Haehl, Rachel Hathaway, Eliot Hijano, Jaehoon Lee, Eric Mintun, Edgar Shaghoulian, Benson Way and Mark Van Raamsdonk for helpful discussions. We are particularly indebted to Dionysios Anninos, Frederik Denef, and Edgar Shaghoulian for their comments on an early draft. We made heavy use of Matthew Headrick's grassmann.m package. C.C. would like to thank David and Gay Cogburn for their support. T.A. is supported in part by the U.S. Department of Energy under grant Contract Number DE-SC0012567, by the Natural Sciences and Engineering Research Council of Canada, and by grant 376206 from the Simons Foundation.
§ REDUCED SCHRÖDINGER EQUATION
In this appendix we construct gauge-invariant highest-weight wavefunctions of SO(3)_J in each R-charge sector (up to 3) and use these to maximally reduce the Schrödinger equation via symmetries.
§.§ R=0
This sector of the theory was studied in <cit.>, although without access to numerics. We repeat their analysis here. We wish to separate variables using the SO(3)_J symmetry. We therefore want to write down the highest weight state satisfying J^3|ψ⟩_0=j|ψ⟩_0 and J^+|ψ⟩_0=0 , with J^±≡ J^1± i J^2. The rest of the spin multiplet can be obtained by acting on |ψ⟩_0 with J^- up to 2j times. This however doesn't entirely fix the angular dependence of the wavefunction, as these two conditions only fix the dependence on up to two angles. Recall, however, that the operators P⃗ commute with J⃗ and P⃗^2=J⃗^2, but [H,P⃗]≠ 0. We will then write |ψ⟩_0 as a sum of terms with definite P^3 eigenvalue. That is, we write |ψ⟩_0 as:
|ψ⟩_0=e^i j ϑ_1sin^jϑ_2∑_p=-j^j e^i p ϑ_3^p(ϑ_2/2) f^p(𝐱_a) .
Since the number of terms in the wavefunction grows with j it will be cumbersome to give the reduced radial Schrödinger equation for arbitrary j. Instead we will give the expressions for j=0,1/2,1.
Before giving the reduced Schrödinger equations it is worth noting that it has long been known that there exists no supersymmetric states in this sector <cit.>. The reason is that the supersymmetry equations Q_α|ψ⟩_0=Q̅^β|ψ⟩_0=0 are easy to solve and give
|ψ⟩_0^ SUSY∼exp{ g 𝐱_1 𝐱_2 𝐱_3+m/2𝐱_a 𝐱_a} ,
which is non-normalizable. It is also known that the spectrum in this sector is discrete <cit.>.
For parsimony let us define
ℋ̂≡-1/2Δ∂_𝐱_aΔ∂_𝐱_a+V
with V defined in (<ref>).
Then for j=0 the reduced Schrödinger equation, obtained from H_m|ψ⟩_0=ℰ_m|ψ⟩_0, is simply
(ℋ̂+9/2m)f^0(𝐱_a)=ℰ_m f^0(𝐱_a) .
For j=1/2 there is no mixing between the f^±1/2(𝐱_a) and each satisfies
(ℋ̂+9/2m+T/8)f^±1/2(𝐱_a)=ℰ_m f^±1/2(𝐱_a) ,
where T was defined in (<ref>). Finally, for j=1 we have
{ℋ̂+9/2m+T/4+1/4[ 𝐲_3 0 𝐲_1-𝐲_2; 0 T-2 𝐲_3 0; 𝐲_1-𝐲_2 0 𝐲_3 ]}[ f^-1; f^0; f^+1 ]=ℰ_m[ f^-1; f^0; f^+1 ] .
§.§ R=1
Continuing on from the last section, we want to write down wavefunctions in the R=1 sector that are gauge invariant, and satisfy J^3|ψ⟩_1=j|ψ⟩_1 and J^+|ψ⟩_1=0. To do so, we will write our wavefunctions as
|ψ⟩_1=e^i j ϑ_1sin^jϑ_2∑_p=-j^j e^i p ϑ_3^p(ϑ_2/2) f^p_Aα χ̅_A^α|0⟩ ,
where |0⟩ is the fermionic vacuum and each term in the sum has definite P^3 eigenvalue.
The functions f^p_Aα that satisfy these conditions are:
f^p_A1 =e^-iϑ_1/2{ e^-iϑ_3/2cos(ϑ_2/2)L^p_2A-1(𝐱_a)- e^iϑ_3/2sin(ϑ_2/2)L^p_2A(𝐱_a)} ,
f^p_A2 = e^iϑ_1/2{ e^-iϑ_3/2sin(ϑ_2/2)L^p_2A-1(𝐱_a)+ e^iϑ_3/2cos(ϑ_2/2)L^p_2A(𝐱_a)} .
We remind the reader that the χ̅_A^α are the gauge-invariant fermions defined in (<ref>). The reduced Schrödinger equation for j=0 (and hence p=0) is
{ℋ̂+7/2m+5/8T+𝐀}[ L_1^0; ⋮; L_6^0 ]=ℰ_m[ L_1^0; ⋮; L_6^0 ]
where 𝐀 is a 6× 6 matrix that can be written in terms of 2× 2 blocks as follows
𝐀≡i/2[ i 𝐲_11 -(2 g 𝐱_3+𝐳_3)σ^3 (2 g 𝐱_2+𝐳_2)σ^2; (2 g 𝐱_3+𝐳_3)σ^3 i 𝐲_21 -(2 g 𝐱_1+𝐳_1)σ^1; -(2 g 𝐱_2+𝐳_2)σ^2 (2 g 𝐱_1+𝐳_1)σ^1 i 𝐲_31 ] ,
where the coordinates 𝐲_a and 𝐳_a (nonlinearly related to 𝐱_a) were defined in (<ref>).
Using the above definitions it is straightforward to write down the equations for j=1/2. These are
{ℋ̂+7/2m+3/4T+([ 𝐀+𝐁 𝐂; 𝐂^† 𝐀-𝐁 ])}[ L_1^-12; ⋮; L_6^-12; L_1^12; ⋮; L_6^12 ]=ℰ_m[ L_1^-12; ⋮; L_6^-12; L_1^12; ⋮; L_6^12 ]
with
𝐁≡1/4[ 𝐲_3 σ^3 -2i 𝐳_3 1 0; 2i 𝐳_3 1 𝐲_3 σ^3 0; 0 0 𝐲_3 σ^3 ]
and
𝐂≡1/4[ 𝐲_1 σ^1-i 𝐲_2 σ^2 0 2 𝐳_2 1; 0 𝐲_1 σ^1-i 𝐲_2 σ^2 -2i 𝐳_1 1; -2 𝐳_2 1 2i 𝐳_1 1 𝐲_1 σ^1-i 𝐲_2 σ^2 ] .
§.§ R=2
As we can see, the number of equations keeps increasing with fermion number and spin. Therefore in this section and the next, we will only give the reduced Schrödinger equations for j=0. As before the general highest weight R=2 wavefunction admits a decomposition:
|ψ⟩_2=e^i j ϑ_1sin^jϑ_2∑_p=-j^j e^i p ϑ_3^p(ϑ_2/2) f^p_ABαβ χ̅_A^αχ̅_B^β|0⟩ .
In order to avoid over-counting let us set f^p_ABαβ=0 whenever B<A and similarly f^p_AAαβ=0 (no sum on indices) whenever β≤α. Imposing that J^3|ψ⟩_2=j|ψ⟩_2, J^+|ψ⟩_2=0 and that each term in the sum have definite P^3 eigenvalue imposes that the functions f^p_ABαβ take on a particular form. These are (no sum on indices and A<B):
f_AA12^p =L^p_A(𝐱_a) ,
f^p_AB12 =e^-iϑ_3/2sinϑ_2 Y^p_AB(𝐱_a)+cos^2(ϑ_2/2) R_AB^p(𝐱_a)-sin^2(ϑ_2/2) S_AB^p(𝐱_a)-e^iϑ_3/2sinϑ_2 U^p_AB(𝐱_a)
f^p_AB21 =e^-iϑ_3/2sinϑ_2 Y^p_AB(𝐱_a)-sin^2(ϑ_2/2) R_AB^p(𝐱_a)+cos^2(ϑ_2/2) S_AB^p(𝐱_a)-e^iϑ_3/2sinϑ_2 U^p_AB(𝐱_a)
f^p_AB11 =e^-iϑ_1{ e^-iϑ_3cos^2(ϑ_2/2)Y^p_AB(𝐱_a)-1/2sinϑ_2(R_AB^p(𝐱_a)+S_AB^p(𝐱_a))+e^iϑ_3sin^2(ϑ_2/2)U^p_AB(𝐱_a)}
f^p_AB22 = e^iϑ_1{ e^-iϑ_3sin^2(ϑ_2/2)Y^p_AB(𝐱_a)+1/2sinϑ_2(R_AB^p(𝐱_a)+S_AB^p(𝐱_a))+e^iϑ_3cos^2(ϑ_2/2)U^p_AB(𝐱_a)} .
Notice that even for j=0, determining the spectrum will involve solving a set of 15 coupled partial differential equations. We will label the set of functions Y_AB^p≡ Y_6-A-B^p and so on for the remaining functions. We also define the following vector of functions:
Ψ^0_R=2≡(L_1^0,…,R_1^0,…,S_1^0,…,U_1^0,…,Y_1^0,…)^T .
The j=0 Schrödinger equation is then:
{ℋ̂+5/2m+3/4T+𝐃+𝐋+g𝐌}Ψ^0_R=2=ℰ_m Ψ^0_R=2
where 𝐃, 𝐋 and 𝐌 are 15× 15 matrices that can be written in terms of 3× 3 blocks as follows
𝐃≡[ 𝐝^1 0 0 0 0; 0 𝐝^3 𝐝^1-𝐲_3/41 0 0; 0 𝐝^1-𝐲_3/41 𝐝^3 0 0; 0 0 0 -𝐝^3 1/4(𝐲_1-𝐲_2)1; 0 0 0 1/4(𝐲_1-𝐲_2)1 -𝐝^3 ]
𝐋≡-1/2[ 2∑_a𝐲_a |ℒ^a| 0 0 0 0; 0 0 0 𝐳_1 ℒ^1+i 𝐳_2 ℒ^2 𝐳_1 ℒ^1-i 𝐳_2 ℒ^2; 0 0 0 𝐳_1 ℒ^1+i 𝐳_2 ℒ^2 𝐳_1 ℒ^1-i 𝐳_2 ℒ^2; 0 𝐳_1 ℒ^1-i 𝐳_2 ℒ^2 𝐳_1 ℒ^1-i 𝐳_2 ℒ^2 -2 𝐳_3 ℒ^3 0; 0 𝐳_1 ℒ^1+i 𝐳_2 ℒ^2 𝐳_1 ℒ^1+i 𝐳_2 ℒ^2 0 2 𝐳_3 ℒ^3; ]
𝐌≡[ 0 𝐱_3 𝐦^3 𝐱_3 𝐦^3 𝐱_1 𝐦^1+𝐱_2 𝐦^2 𝐱_2 𝐦^2-𝐱_1 𝐦^1; 𝐱_3 𝐦^3† -𝐱_3 ℒ^3 0 𝐱_2 𝐝^2 𝐱_2 𝐝^2†-𝐱_1 ℒ^1; 𝐱_3 𝐦^3† 0 𝐱_3 ℒ^3 -𝐱_1 ℒ^1-𝐱_2 𝐝^2† -𝐱_2 𝐝^2; 𝐱_1 𝐦^1†+𝐱_2 𝐦^2† 𝐱_2 𝐝^2† -𝐱_1 ℒ^1-𝐱_2 𝐝^2 𝐱_3 ℒ^3 0; 𝐱_2 𝐦^2†-𝐱_1 𝐦^1† 𝐱_2 𝐝^2 -𝐱_1 ℒ^1 -𝐱_2 𝐝^2† 0 -𝐱_3 ℒ^3; ] .
In these definitions, the ℒ^i are the 3×3 generators of SO(3) defined below (<ref>). The 𝐝^i are
𝐝^1≡(T/4-𝐲_a)δ_ab , 𝐝^2≡[ 0 0 0; 0 0 0; -1 0 0 ] , 𝐝^3≡1/2(𝐲_a-1/2𝐲_3)δ_ab ,
and the 𝐦^i are
𝐦^1≡[ 0 0 0; i 0 0; i 0 0 ] , 𝐦^2≡[ 0 -1 0; 0 0 0; 0 -1 0 ] , 𝐦^3≡[ 0 0 i; 0 0 i; 0 0 0 ] .
Whenever a matrix appears in an absolute value symbol |·|, the absolute value is to be applied to the entries of the matrix.
The Schrödinger operator for j=1/2 will be a generalization of the above operator to one acting on 30 functions. We do not provide expressions for it here, but analyze its spectrum in the main text.
§.§ R=3
The highest weight R=3 wavefunctions take the form
|ψ⟩_3=e^i j ϑ_1sin^jϑ_2∑_p=-j^j e^i p ϑ_3^p(ϑ_2/2) f^p_ABCαβγ χ̅_A^α χ̅_B^β χ̅_C^γ|0⟩ .
To avoid overcounting we set
f^p_ABCαβγ =0 if C<B or B<A
f^p_AABαβγ =0 if β≤α
f^p_ABBαβγ =0 if γ≤β .
Because of the fermionic statistics f^p_AAAαβγ=0 identically. Imposing the highest weight condition forces f^p_123αβγ to take the following form
f^p_123αβγ=∑_a,b, c=1^2 F^p_abc(𝐱_a)u_α a(ϑ⃗)u_β b(ϑ⃗)u_γ c(ϑ⃗) ,
with
u_α a(ϑ⃗)≡ e^i/2((-1)^αϑ_1+(-1)^aϑ_3){(1-|α-a|)cos(ϑ_2/2)+(α-a)sin(ϑ_2/2)} .
Furthermore
f_AABαβγ^p =U^p_AAB(𝐱_a)y^1_αβγ(ϑ⃗)+Y^p_AAB(𝐱_a)y^2_αβγ(ϑ⃗)
f_ABBαβγ^p =U^p_ABB(𝐱_a)y^1_αβγ(ϑ⃗)+Y^p_ABB(𝐱_a)y^2_αβγ(ϑ⃗)
where
y^1_αβγ(ϑ⃗) ≡e^i/2{((-1)^α+(-1)^β+(-1)^γ)ϑ_1-ϑ_3}/2[(4-α β γ)cos(ϑ_2/2)+(α β γ-2)sin(ϑ_2/2)]
y^2_αβγ(ϑ⃗) ≡e^i/2{((-1)^α+(-1)^β+(-1)^γ)ϑ_1+ϑ_3}/2[(α β γ-4)sin(ϑ_2/2)+(α β γ-2)cos(ϑ_2/2)] .
Notice that for j=0 the reduced Schrödinger equation is a set of 20 coupled partial differential equations. We will give the Schrödinger operator acting on the following vector of functions
Ψ_R=3^0≡(F^0_122,F^0_211,F^0_121,F^0_212,F^0_221,F^0_112,F^0_111,F^0_222,.
. U^0_113,U^0_223,Y^0_113,Y^0_223,U_112^0,U^0_233,Y^0_112,Y^0_233,U^0_122,U^0_133Y^0_122,Y^0_133)^T .
With Ψ_R=3^0 defined, we are tasked with solving the following set of differential equations:
{ℋ̂+3/2m+1/4(11/2T-𝐲_3)-𝐈+𝐉+𝐉^†+𝐊}Ψ_R=3^0=ℰ_m Ψ_R=3^0 ,
where 𝐈, 𝐉, and 𝐊 are 20× 20 matrices that can be written in block form as follows:
𝐈≡[ 𝐲_11_2× 2; 𝐲_21_2× 2 0; 𝐲_31_2× 2; (𝐲_1+𝐲_2)1_2× 2; 3/4(𝐲_1+𝐲_2)1_4× 4; 0 1/2(T+𝐲_1-𝐲_2/2)1_4× 4; 1/2(T-𝐲_1-𝐲_2/2)1_4× 4 ] ,
and 𝐉 and 𝐊 can be written in terms of 4×4 blocks as follows:
𝐉≡i/2[ 0 0 𝐳_3 𝐚^3+2g 𝐱_3 𝐛^3 𝐳_2 𝐚^2+2g 𝐱_2 𝐛^2 𝐳_1 𝐚^1+2g 𝐱_1 𝐛^1; 0 0 0 -𝐳_2 𝐚^2+2g 𝐱_2 𝐜^2 𝐳_1 𝐞^1+2g 𝐱_1 𝐜^1; 0 0 0 σ^1⊗(𝐳_1 1+2g 𝐱_1 σ^3) -σ^2⊗(𝐳_2 σ^1-2i g 𝐱_2 σ^2); 0 0 0 0 σ^3⊗(𝐳_3 1-2g 𝐱_3 σ^3); 0 0 0 0 0 ] ,
𝐊≡[ 1/4(T-5 𝐲_3)σ^1⊗σ^1 1/4(𝐲_1 𝐬+𝐲_2 𝐭) 0 0 0; 1/4(𝐲_1 𝐬+𝐲_2 𝐭)^† 1/4(𝐲_1-𝐲_2)σ^1⊗1 0 0 0; 0 0 -𝐲_3 1⊗σ^1 0 0; 0 0 0 -𝐲_2 1⊗σ^1 0; 0 0 0 0 -𝐲_1 1⊗σ^1; ] ,
where we have implicitly defined
𝐚^1≡ -1/2[ 0 0; 1 1 ]⊗(1-σ^1)+1/2[ 0 0; 1 -1 ]⊗(-i σ^2+σ^3) ,
𝐚^2≡ -i/2[ -1 1; 0 0 ]⊗(1-σ^1)+i/2[ 1 1; 0 0 ]⊗(-i σ^2+σ^3) ,
𝐚^3≡ i σ^2⊗[ -1 1; 0 0 ]+σ^3⊗[ 0 0; -1 1 ] ,
𝐛^1≡1/2[ -1 1; 0 0 ]⊗(1+σ^1)-1/2[ 1 1; 0 0 ]⊗(i σ^2+σ^3) ,
𝐛^2≡ -i/2[ 0 0; 1 1 ]⊗(1+σ^1)+i/2[ 0 0; 1 -1 ]⊗(i σ^2+σ^3) ,
𝐛^3≡ -1⊗[ 0 0; 1 1 ]-σ^1⊗[ 1 1; 0 0 ] ,
as well as
𝐜^1≡1/2[ 0 0; 1 -1 ]⊗(1+σ^1)+1/2[ 0 0; 1 1 ]⊗(i σ^2+σ^3) ,
𝐜^2≡ -i/2[ 0 0; 1 1 ]⊗(1+σ^1)-i/2[ 0 0; 1 -1 ]⊗(i σ^2+σ^3) ,
𝐞^1≡ -1/2[ 1 1; 0 0 ]⊗(1-σ^1)+1/2[ -1 1; 0 0 ]⊗(-i σ^2+σ^3) ,
and finally
𝐬≡[ 1 1; 0 0 ]⊗1+[ 0 0; -3 1 ]⊗σ^1 and 𝐭≡[ -3 1; 0 0 ]⊗1+[ 0 0; 1 -1 ]⊗σ^1 .
The Hamiltonian acting on the R=3, j=1/2 wavefunction will be a generalization of the above operator to one acting on 40 functions. We will not give the expression here, but we analyze the spectrum of the R=3, j=1/2 sector numerically in the main text.
§ METRIC ON THE MODULI SPACE
In order to determine the one-loop effective action for the ν=0 theory, we follow <cit.> and pass to the Lagrangian formulation of our gauge-quantum mechanics, including gauge-fixing terms and ghosts. We will use the background field method <cit.>: that is, we will expand the fields X^i_A=B^i_A+X̃^i_A, where B^i_A is a fixed background field configuration and X̃^i_A are the fluctuating degrees of freedom. We choose B^i_A=δ_A3 𝐱⃗ such that it parametrizes motion along the moduli space.
The gauge-fixed Lagrangian is:
ℒ=ℒ_ bos.+ℒ_ ferm.+ℒ_ g.f.+ℒ_ ghost
with
ℒ_ bos. =1/2(𝒟_t X^i_A)^2-g^2/4(f_ABC X^i_B X^j_C)^2
ℒ_ ferm. =i(λ̅_A 𝒟_tλ_A-g f_ABCλ̅_A X^k_B σ^kλ_C)
ℒ_ g.f. =-1/2ξ(𝒟_t^ bg A_A)^2
ℒ_ ghost =c̅_A(-δ_AB ∂_t^2-g f_ACB ∂_t(A_C·)+g^2 f_ACDf_DEBB^i_C X^i_E)c_B
and
𝒟_t X^i_A≡Ẋ^i_A+g f_ABC A_B X^i_C , 𝒟_t λ_Aα≡λ̇_Aα+g f_ABC A_B λ_Cα , 𝒟_t^ bg A_A≡-Ȧ_A+g f_ABC B^i_B X^i_C .
We further set ξ=1, corresponding to Feynman gauge.
We can obtain the correction to the metric on moduli space by choosing a background field 𝐱⃗ as follows <cit.>
𝐱⃗=(b,vt,0)
where b is to be thought of as an impact parameter for a particle moving at speed v. We now Wick rotate t→ -iτ, v→ iγ and A_A→ i A_A and expand the action to quadratic order in fluctuating fields about the background field B^i_A=δ_A3 𝐱⃗. The idea is to integrate out all fields that obtain a mass, through interaction with the background field, at one loop.
Following this procedure, it is easy to show that all fields with color index A=3 remain massless, while the rest obtain time dependent masses. After diagonalizing the mass-matrix for the bosonic fields, we find that the contributions to the Euclidean effective action coming respectively from the bosonic, fermionic and ghost determinants are:
δ S_E^ bos.= -2 Tr log(-∂_τ^2+g^2(b^2+γ^2τ^2))- Tr log(-∂_τ^2+g^2(b^2+γ^2τ^2)-2g γ)
- Tr log(-∂_τ^2+g^2(b^2+γ^2τ^2)+2g γ)
δ S_E^ ferm.= Tr log[ ∂_τ -g(γ τ+ib); -g(γ τ-ib) ∂_τ ]+Tr log[ ∂_τ g(γ τ-ib); g(γ τ+ib) ∂_τ ]
δ S_E^ ghost= 2 Tr log(-∂_τ^2+g^2(b^2+γ^2τ^2)) .
Note that the ghost determinant cancels against the contribution coming from four of the eight massive bosons. Up to a diverging constant, which will cancel between the bosonic and fermionic terms, we can replace log(λ)=-∫_0^∞ds/se^-s λ and, summing over the spectra of the above differential operators, we find
δ S_E =∫_0^∞ds/s e^-b^2g^2s(cosh(2g γ s) csch(g γ s)-(g γ s))
=∫_0^∞ds/s e^-b^2g^2ssech(g γ s/2) sinh(3g γ s/2) .
Let us now Wick rotate back to Lorentzian time and use:
e^-b^2g^2s/s=∫dt/√(π s)g v e^-s g^2 r^2 , r^2=b^2+v^2t^2 ,
to write down the Lorentzian action to O(v^2):
i S_L =i∫ dt[ v^2/2-g v ∫ds/√(π s)e^-s g^2 r^2sec(g v s/2) sin(3g v s/2)]
=i∫ dt 1/2(1-3/2 g r^3)v^2+O(v^4) ,
which is the same correction as found in <cit.>. It also resembles the correction to the moduli space metric in the D0-D4 system <cit.>, albeit with a different coefficient and sign.
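As a quick numerical sanity check of the small-v expansion above, the integral in the first line should approach 3v^2/(4gr^3) for small v, reproducing the metric factor f = 1-3/(2gr^3). A short scipy sketch with illustrative test values:

```python
import numpy as np
from scipy.integrate import quad

g, r, v = 1.0, 2.0, 1e-3      # illustrative test values, v small
integrand = lambda s: (g * v / np.sqrt(np.pi * s) * np.exp(-s * g**2 * r**2)
                       * np.sin(1.5 * g * v * s) / np.cos(0.5 * g * v * s))
val, _ = quad(integrand, 0.0, np.inf)
print(val, "vs", 3 * v**2 / (4 * g * r**3))   # should agree up to O(v^4)
```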
|
http://arxiv.org/abs/1701.07826v2 | 20170126190001 | TDE fallback cut-off due to a pre-existing accretion disc | [
"Adithan Kathirgamaraju",
"Rodolfo Barniol Duran",
"Dimitrios Giannios"
] | astro-ph.HE | [
"astro-ph.HE"
] | |
http://arxiv.org/abs/1701.08071v2 | 20170127145036 | Emotion Recognition From Speech With Recurrent Neural Networks | [
"Vladimir Chernykh",
"Pavel Prikhodko"
] | cs.CL | [
"cs.CL"
] |
Emotion Recognition From Speech With Recurrent Neural Networks
==============================================================
In this paper the task of emotion recognition from speech is considered. The proposed approach uses a deep recurrent neural network trained on a sequence of acoustic features calculated over small speech intervals. At the same time, the special probabilistic-nature CTC loss function allows the consideration of long utterances containing both emotional and neutral parts. The effectiveness of such an approach is shown in two ways. Firstly, a comparison with recent advances in this field is carried out. Secondly, human performance on the same task is measured. Both criteria show the high quality of the proposed method.
§ INTRODUCTION
Nowadays machines can successfully recognize human speech. Automatic speech recognition (ASR) services can be found everywhere. Voice input interfaces are used in many applications, from navigation systems in mobile phones to Internet-of-Things devices. Many personal assistants like Apple Siri <cit.>, Amazon Alexa <cit.>, Yandex Alisa <cit.>, or Google Duplex <cit.> were released recently and have already become an integral part of everyday life.
Nevertheless, this field is still rapidly evolving. Last year Google released its Cloud API for speech recognition <cit.>. The latest Windows 10 ships with the Cortana voice interface <cit.> integrated. Small startups all over the world as well as IT giants like Google, Microsoft and Baidu are actively doing research in this area.
The market for speech recognition hardware and software reached 55 billion dollars in 2016 and continues to grow by approximately 11% a year <cit.>.
Therefore the authors believe that this field is promising and deserves attention.
§.§ Problem
Virtually all ASR algorithms and services simply transcribe audio recordings into written words. But that is only the first level of speech understanding.
During a conversation humans receive a lot of meta-information apart from the text. Examples are the identity of the speaker, their intonation and emotion, loudness, shades of voice, etc. These factors can considerably influence the true intended meaning of a phrase, or even turn it into the opposite, which is what we call sarcasm or irony. Humans take all these elements into consideration while processing a phrase in the brain, and only after that is the final meaning formed.
Accounting for these factors in purely retrieval-based systems, e.g. search engines, may be superfluous. But it becomes crucial in more human-involved systems like voice assistants, where close communication with humans is needed. To detect the meaning of a spoken message correctly, one needs to account not only for the semantics but also for the kind of meta-information discussed above. Thus, to build a more complete human-computer interaction system it is necessary to extract these features from the audio signal.
This paper addresses only one of the questions raised above: how to correctly recognize the emotional background of the voice? The main goal of the work is to answer this question. The main obstacles that complicate the solution are:
* Emotions are subjective.
They are complex psychological and social phenomena. People understand emotion differently. Thus there are many difficulties in defining the notion of emotion <cit.>.
Altrov et al. in <cit.> collected a corpus of Estonian speech with 4 emotions included: joy, anger, sadness, and neutral. Then they asked people of different nationalities to evaluate it. Estonians, Latvians, Italians, Finns, Swedes, Danes, Norwegians and Russians took part in the experiment. Almost all of these nationalities are close to Estonians both geographically and culturally. Nevertheless, Estonians performed much better than any other nationality, showing about 69% mean class accuracy. All the others performed 10-15% worse, and the only emotion that they recognized relatively well was sadness.
The work of Altrov et al. <cit.> showed that there are significant intercultural differences in the understanding of emotions. But even within one culture this understanding may vary greatly.
* Assignment of the emotions to the audio recording.
It is not obvious how one should assign emotional labels to a long audio recording or even a continuous flow of speech. Should it be one emotion per whole recording or per utterance? If one chooses an utterance-based solution, then how should the split be done? Is it possible for an utterance to have multiple emotions? These and a few other questions put the methodology in the forefront.
* Complexity and cost of database collection.
Databases for the usual speech recognition task are relatively easy to collect: one can take dialogues from films, Youtube blogs, news, etc. and annotate them. Almost the only requirement is the high quality of the audio recording.
When it comes to emotions there is a huge problem with all of these sources: the emotions in them are dramatically biased. In the news most of the speech is neutral. In films the set of emotions depends on the genre, but the distribution is almost always biased towards one prevailing emotion.
Another way is to collect the database artificially. The following big problem arises here: how to record a predefined emotion in a natural way? Douglas-Cowie et al. suggest using professional actors <cit.>. Actors are given either a topic and asked to improvise on it, or scripted material which they should read. While performing, actors have to express the predefined emotion. Busso et al. give an overview and a comparison of these two approaches in their paper <cit.>.
The set of emotions to use is another important question. There should be enough emotions to cover all the basic human reactions, but not too many, so that they can still be played and assessed reliably. Picard et al. describe how and why the emotions should be chosen in their work <cit.>. They suggest using at least 5 basic emotions: happiness, anger, sadness, neutral, and frustration.
The other side of this coin is how the emotions should be measured and evaluated. Cowie et al. give their view on this problem in their paper <cit.>. The authors propose to use the 3D Valence-Arousal-Dominance ordinal space as well as categorical labels for the evaluation of the utterances. Moreover, several assessors per utterance are needed to evaluate it consistently.
Altogether, these peculiarities make the collection of such a database a very complicated, time-consuming and expensive task.
One good example of acquisition and annotation methodology is the IEMOCAP database presented by Busso et al. in <cit.>. IEMOCAP is used in this work and will be described in more detail later.
Some of these questions are resolved by the authors of this paper, others are tackled by the authors of the database used, and the rest are inherent to the problem and cannot be avoided.
§.§ Related works
The problem described in section <ref> has previously been considered by a few works.
The majority of the works state the emotion recognition task as a classification problem where one utterance has exactly one label.
Before the deep learning era people came up with many different methods, which mostly extract complex low-level handcrafted features out of the initial audio recording of the utterance and then apply conventional classification algorithms. One of the approaches is to use generative models like Hidden Markov Models or Gaussian Mixture Models to learn the underlying probability distribution of the features and then to train a Bayesian classifier using the maximum likelihood principle. Variations of this method were introduced by Shuller et al. in 2003 in <cit.> and by Lee et al. in 2004 in <cit.>. Another common approach is to gather global statistics over local low-level features computed over parts of the signal and apply a classification model. Eyben et al. in 2009 <cit.> and Mower et al. in 2011 <cit.> used this approach with a Support Vector Machine as the classification model. Lee et al. in 2011 in <cit.> used Decision Trees and Kim et al. in 2013 in <cit.> utilized K Nearest Neighbours instead of SVM. People also tried to adapt popular speech recognition methods to the task of emotion recognition: for more information see the works of Hu et al. in 2007 <cit.> and Nwe et al. in 2013 in <cit.>.
One of the first deep learning end-to-end approaches was presented by Han et al. in 2014 in their work <cit.>. Their idea is to split each utterance into frames and calculate low-level features as the first step. Then the authors used a densely connected neural network with three hidden layers to transform this sequence of features into a sequence of probability distributions over the target emotion labels. These probabilities are then aggregated into utterance-level features using simple statistics like maximum, minimum, average, percentiles, etc. After that an Extreme Learning Machine (ELM) <cit.> is trained to classify utterances by emotional state.
As a continuation of the Han et al. work, Lee and Tashev presented their paper <cit.> in 2015. They used the same idea and approach as Han et al. in <cit.>. The main contribution is that they replaced the simple densely connected network with a recurrent neural network (RNN) with long short-term memory (LSTM) units. Lee and Tashev also introduced a probabilistic approach to learning which is in some points similar to the approach presented in the current paper. But they continued to aggregate local probabilities into a global feature vector with an ELM on top of it.
The main drawbacks of these two approaches are the very simple and naive aggregation functions and the ELMs. The latter have been actively criticized by the research community in recent years, by Yann LeCun in particular <cit.>.
This work in its first edition was written in early 2017 <cit.> and aimed to get rid of the drawbacks discussed above by applying a fully end-to-end pipeline without handcrafted parts in the middle.
Since then a few purely deep learning, end-to-end approaches based on modern architectures have arisen. Neumann and Vu in their 2017 paper <cit.> used the currently popular attentive architecture. Attention is a mechanism that was first introduced by Bahdanau et al. in 2015 in <cit.> and is now state-of-the-art in the field of machine translation <cit.>. Xia et al. in their 2017 work <cit.> used a slightly different approach based on Deep Belief Networks (DBN) and a continuous problem statement in the 2D Valence-Arousal space. Each utterance can be assessed on an ordinal scale and then embedded into a multidimensional space. Regions in this space are associated with different emotions. The task then is to learn how to embed the utterances in this space. One of the most recent and interesting works was presented in 2018 by Lakomkin et al. in <cit.>. They suggested transfer learning from the usual speech recognition task to emotion recognition. One might anticipate this method to work well because speech corpora for speech recognition are far better developed: they are bigger and better annotated. The authors performed fine-tuning of a DeepSpeech <cit.> kind of network trained on LibriSpeech <cit.>.
Despite the existence of a few more recent papers on this topic, the quality of the model proposed in this paper is on par with them. At the same time it allows for some extensions, like a sequence of emotion labels as an output, which other approaches do not support, to the best of the authors' knowledge.
§ DATA
All experiments are carried out with audio recordings from the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database <cit.>. There are also a few more emotional speech databases, an overview of which can be found in <cit.>. IEMOCAP is chosen because it has one of the most elaborate acquisition methodologies, a free academic license, long recording durations and good markup.
§.§ Database structure
IEMOCAP <cit.> consists of approximately 12 hours of recordings. Audio, video and facial keypoint data was captured during the live sessions. Each session is a sequence of dialogues between a man and a woman. In total, 10 people split into 5 pairs took part in the process. All involved people are professional actors and actresses from the Drama Department of the University of Southern California <cit.>. The recording process took place at a professional cinema studio. Actors sat across from each other at a "social" distance of 3 meters, which enables more realistic communication.
Before the recording, actors were given the topic of the conversation and the emotional tone in which they should perform. There are two types of dialogues: scripted (actors were given the text) and improvised.
After recording these conversations, the authors divided them into utterances with speech (see figure <ref>).
Note that audio was captured using two microphones. Therefore the recordings contain two channels which correspond to the male and female voices. Sometimes the speakers interrupt each other, and in these moments the utterances might intersect. This intersection takes about 9% of the total utterance time. It might lead to undesired results because the microphones were placed relatively near each other and thus inevitably capture both voices.
After the recording, assessors (3 or 4) were asked to evaluate each utterance based on both the audio and video streams. The evaluation form contained 10 options (neutral, happiness, sadness, anger, surprise, fear, disgust, frustration, excited, other). In this work only 4 of them are taken for the analysis: anger, excitement, neutral and sadness (as some of the most common ones, <cit.>). Figure <ref> shows the distribution of the considered emotions among the utterances.
An emotion is assigned to an utterance if and only if at least half of the experts were consistent in their evaluation. About 25% of the utterances do not satisfy this condition, and no emotion label was assigned to them at all (see figure <ref>). Moreover, significantly less than half of the remaining utterances have a consistent assessment from all the experts (figure <ref>). These statistics confirm the statement from section <ref> that emotion is a subjective notion. Therefore it is reasonable to assume that there is no way to classify emotions perfectly if even humans fail to do so. A sketch of this labeling rule is given below.
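The rule can be written compactly as follows; the string codes for the emotion classes are illustrative, not the database's internal ones:

```python
from collections import Counter

TARGET = {"ang", "exc", "neu", "sad"}    # the four classes used in this work

def assign_label(votes):
    """Keep an utterance only if at least half of the assessors agree on
    one emotion and that emotion is among the four target classes."""
    label, count = Counter(votes).most_common(1)[0]
    if 2 * count >= len(votes) and label in TARGET:
        return label
    return None                          # the utterance is dropped

assert assign_label(["ang", "ang", "fru"]) == "ang"
assert assign_label(["ang", "fru", "neu", "sad"]) is None
```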
§.§ Preprocessing
The raw signal has a sample rate of 16 kHz, and thus working with it directly requires enormous computational power. There are technologies (e.g. Google Wavenet <cit.>) that deal with raw signals, but for now these algorithms can hardly work online even with Google's computational power.
The goal is to reduce the amount of computation down to an acceptable level while preserving as much information as possible. Each utterance is divided into intersecting intervals (frames) of 200 milliseconds (with an overlap of 100 milliseconds). Then acoustic features are calculated over each frame. The resulting sequence of feature vectors represents the initial utterance in a low-dimensional space and serves as an input to the model. A minimal sketch of this framing step is given below.
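The sketch assumes a 1-d numpy waveform as input and simply drops the tail shorter than one frame:

```python
import numpy as np

def frame_signal(signal, sample_rate=16000, frame_ms=200, step_ms=100):
    """Split a waveform into overlapping frames (200 ms windows with a
    100 ms step, as in the text); returns shape (n_frames, frame_len)."""
    frame_len = int(sample_rate * frame_ms / 1000)
    step = int(sample_rate * step_ms / 1000)
    n_frames = 1 + max(0, (len(signal) - frame_len) // step)
    return np.stack([signal[i * step: i * step + frame_len]
                     for i in range(n_frames)])

frames = frame_signal(np.random.randn(16000))   # 1 s of audio
assert frames.shape == (9, 3200)                # 9 frames of 200 ms each
```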
The authors also experimented with different frame durations, from 30 milliseconds to 200 milliseconds: 30 milliseconds roughly corresponds to the duration of one phoneme in the normal flow of spoken English, while 200 milliseconds is the approximate duration of one word. The experiments do not show a significant difference in terms of quality, but the computation time rises with the reduction in frame duration due to the bigger number of frames. Thus the authors decided to stay with 200 ms.
Note that labels are provided only for utterances. This means that the task is weakly labelled, in the sense that not every frame is labelled.
The key point here is the set of features to calculate. All possible features can be classified into 3 buckets:
* Acoustic
They describe the wave properties of speech. These include Fourier frequencies, energy-based features, Mel-Frequency Cepstral Coefficients (MFCC) and similar.
* Prosodic
This type of feature measures peculiarities of speech like pauses between words, prosody and loudness. These speech details depend on the speaker, and their use in speaker-independent systems is debatable. Therefore they are not used in this work.
* Linguistic
These features are based on the semantic information contained in speech. Exact transcriptions require a lot of assessors' work. In the future it is possible to include speech recognition in the pipeline and use the automatically recognized text. But for now the authors do not use linguistic features.
The current feature extraction algorithm utilizes only acoustic features. PyAudioAnalysis <cit.> library by Giannakopoulos is used. More precisely, 34 features are calculated:
* 3 Time-domain: zero crossing rate, energy, entropy of energy
* 5 Spectral-domain: spectral centroid, spectral spread, spectral entropy, spectral flux, spectral rolloff
* 13 MFCCs
* 13 Chroma: 12-dimensional chroma vector, standard deviation of chroma vector
In the future the authors plan to get rid of the handcrafted features and switch to a Convolutional Neural Network (CNN) based feature extraction algorithm.
The final output of the preprocessing step is a sequence of 34-dimensional vectors for each utterance. The length of the sequence depends on the duration of the utterance. A rough sketch of this step is given below.
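For illustration, this step can be reproduced roughly as follows. The entry point and the return signature of pyAudioAnalysis depend on the library version (recent releases expose ShortTermFeatures.feature_extraction, older ones audioFeatureExtraction.stFeatureExtraction); "utterance.wav" is a placeholder path:

```python
from pyAudioAnalysis import audioBasicIO, ShortTermFeatures

rate, signal = audioBasicIO.read_audio_file("utterance.wav")
window, step = int(0.2 * rate), int(0.1 * rate)      # 200 ms / 100 ms
features, names = ShortTermFeatures.feature_extraction(signal, rate,
                                                       window, step)
features = features[:34].T   # (n_frames, 34): keep the 34 base features
print(features.shape)
```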
§ METHOD
In this paper the Connectionist Temporal Classification (CTC) <cit.> approach is used to classify speakers by emotional state from the audio recording.
The raw input data is the sound signal, which is a high-frequency time series. After all the preprocessing steps described in section <ref>, this sound signal is represented as a sequence of multidimensional frame feature vectors. The task is to map this long input sequence into the short sequence of emotions which are present in the recording.
The major difficulty is the significant difference in input and output sequence lengths. The input sequence length might be about 100, which is about 10 seconds with the chosen preprocessing settings, while the output sequence length is usually no more than 2-4: a difference of up to two orders of magnitude. In this case the usual solutions such as padding of the output sequence or bucketing (which is used in Google Neural Machine Translation <cit.>) can hardly be applied.
CTC addresses this problem in an essential way by utilizing three main concepts:
* Introduce an additional NULL label which corresponds to the absence of any other label and extends the initial label set.
* Bijective sequence-to-sequence learning, i.e., a one-to-one mapping from the sequence of frame features to a sequence of extended labels.
* Collapse the resulting sequence by merging duplicated labels and removing the introduced extra label.
In the case of emotion recognition these features are inherently implied by the essence of the task. On the one hand, one utterance may contain several different emotions; on the other hand, there might be considerable parts of the recording without any sign of emotion.
Thus there are strong reasons to believe that one can benefit from the Connectionist Temporal Classification approach in this problem.
§.§ Notation
Let E = {0 … k-1} be the set of labels and L = E ∪{NULL} the extended label set.
Assume that 𝒟 = {(X_i, 𝐳_i)}_i=1^n is the dataset, where 𝐳_i ∈𝒵 = E^* is the true sequence of labels and X_i ∈𝒳 = ( ℝ^f )^* is the corresponding f-dimensional feature sequence. It is worth mentioning that the lengths of these sequences, |𝐳_i| = U_i and |X_i| = T_i, may not be the same in the general case; the only condition is that U_i ≤ T_i.
Next, let us introduce the set of decision functions or models ℱ = { f : 𝒳↦𝒵} in which the best model is to be found. In the case of a neural network with a fixed architecture it is natural to associate the set of functions ℱ with the network weight space 𝒲, and thus the function f and the vector of weights 𝐰 are interchangeable.
Having the set of functions, one needs to know how to choose the best one. For that purpose a probabilistic approach and maximum likelihood training are used (one can learn more in <cit.>). Assume that the model f can also calculate the probability measure p of any sequence being its output. Then one wants the likelihood of the dataset 𝒟 to be as high as possible:
∏_i=1^|𝒟|p(𝐳_i|X_i) →max.
The optimal model then can be found as:
f̂ = argmax_f ∈ℱ∑_i=1^|𝒟|logp(𝐳_i|X_i) = argmin_𝐰∈𝒲 Q( 𝐰, 𝒟).
This method can also be seen from the angle of loss functions and empirical risk minimization (see <cit.>).
In the case of neural network models the optimization is usually carried out with gradient descent type algorithms.
§.§ CTC approach
CTC is one of the sequence-to-sequence prediction methods that deal with different lengths of the input and output sequences. The main advantage of CTC is that it chooses the most probable label sequence (labeling) while accounting for the various ways of aligning it with the initial sequence. The probability of a particular labeling is added up from the probabilities of all its alignments.
In figure <ref> the pipeline of the CTC method is depicted.
A recurrent neural network (RNN) with a fixed architecture (see details in section <ref>) is chosen as the space of classifiers ℱ. The only requirement for the structure is that it outputs a sequence of the same length as its input.
Think of the RNN as a mapping from the input space 𝒳 to a sequence of probability distributions over the extended label set L:
Y = f(X) ∈ [0; 1]^(k + 1) × T,
where y_c^t is the output of the softmax layer and represents the estimate of the probability of observing class c at timestep t.
For every input X let us define a path π as an arbitrary sequence from L^T, i.e., of length T. Then the conditional probability of the path is
p(π|X) = ∏_t=1^T y_π_t^t.
The problem is that the path can contain the NULL class, which is unacceptable in the final output. First of all one needs to get rid of the NULLs. For that purpose the mapping M : L^T ↦ E^≤ T is introduced. It basically consists of two steps:
* Delete all consecutive repeated labels
* Delete all NULLs
Consider the following example: M(-aa-b-b--ccc) = M(abb--bc-) = abbc, where "-" denotes NULL. Notice that M is a surjective mapping. By means of it the paths are transformed into labelings. To compute the probability of a labeling one needs to sum up the probabilities of all paths that collapse into this particular labeling:
p(𝐥 | X) = ∑_π∈ M^-1(𝐥)p(π | X).
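The mapping M itself is a few lines of code; the following sketch reproduces the example above:

```python
from itertools import groupby

NULL = "-"

def collapse(path):
    """The mapping M: drop consecutive duplicates, then drop NULLs."""
    return "".join(label for label, _ in groupby(path) if label != NULL)

assert collapse("-aa-b-b--ccc") == "abbc"   # the example from the text
assert collapse("abb--bc-") == "abbc"
```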
The direct calculation of p(𝐥|X) requires summation over all corresponding paths, which is an intractable task: there are (k+1)^T possible paths. Graves et al. <cit.> derived an efficient forward-backward dynamic programming algorithm for that. The initial idea was taken from the HMM decoding algorithm described by Rabiner <cit.>.
Finally, the objective function is
Q(𝐰, 𝒟) = -∑_i=1^|𝒟|logp(𝐳_i|X_i) = -∑_i=1^|𝒟|log∑_π∈ M^-1(𝐳_i)p(π|X_i).
The neural network here plays the role of an evaluator of the probability measure p, and the more it trains the more accurate probability estimates it gives. To enable neural network training with standard gradient-based methods, Graves et al. <cit.> suggested a differentiation technique naturally embedded into the dynamic programming algorithm.
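For concreteness, a plain numpy sketch of the forward pass of this dynamic program is given below. Real implementations work in log space for numerical stability; the toy example at the bottom mimics the one-emotion-per-utterance setting of IEMOCAP:

```python
import numpy as np

def ctc_neg_log_likelihood(y, z, null=0):
    """Forward pass of the CTC dynamic program.

    y -- (T, k+1) per-frame softmax outputs, y[t, c] = p(label c at frame t),
         with column `null` reserved for the NULL label;
    z -- the true labeling, a non-empty sequence of non-NULL class indices.
    Returns -log p(z | X)."""
    ext = [null]                           # extended labeling with NULLs:
    for c in z:                            # NULL, z1, NULL, z2, ..., NULL
        ext += [c, null]
    T, S = y.shape[0], len(ext)
    alpha = np.zeros((T, S))
    alpha[0, 0], alpha[0, 1] = y[0, ext[0]], y[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s >= 1:
                a += alpha[t - 1, s - 1]
            if s >= 2 and ext[s] != null and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * y[t, ext[s]]
    return -np.log(alpha[-1, -1] + alpha[-1, -2])

y = np.full((5, 5), 0.2)                   # 4 emotions + NULL, uniform model
print(ctc_neg_log_likelihood(y, [2]))      # one emotion per utterance
```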
The final model chooses the labeling with the highest probability:
𝐡(X) = arg max_𝐥∈ E^≤ T p(𝐥|X)
However, there are exponentially many labelings, so exact maximization of this probability is intractable. There are two main heuristics for tackling this problem:
* Best path search
It approximates the most probable labeling with the collapsed version (after the M transformation) of the most probable path.
* Beam search
It keeps track of a fixed number of the most probable prefixes at each step in order to choose the output labeling. Best path search is a special case of beam search with a beam width of 1.
Both heuristics are tested during the experiments.
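A minimal sketch of best path decoding, reusing the collapse logic from above; beam search generalizes this by keeping the top-W prefixes per step instead of a single path. The label names are assumptions for illustration.

import numpy as np

def best_path_decode(y, labels):
    """Greedy CTC decoding: collapse the most probable path.
    y: (k+1, T) softmax outputs; labels[i] names class i, with NULL as '-'."""
    path = [labels[i] for i in np.argmax(y, axis=0)]   # best class per step
    deduped = [c for i, c in enumerate(path) if i == 0 or c != path[i - 1]]
    return [c for c in deduped if c != '-']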
§ EXPERIMENTS
In this series of experiments the authors investigate the proposed approach and compare it to different baselines for emotion recognition. All the code can be found in the GitHub repository <cit.>.
One of the main practical obstacles with the speech emotion recognition task is that it is usually weakly supervised (as described in section <ref>). Here this means that an utterance contains many frames but only one emotional label. At the same time it is obvious that, for sufficiently long periods of speech, not all frames carry emotion. The CTC loss function suggests one way to overcome this issue.
The authors choose two more methods and provide a comparison between them and CTC in the same setting. The algorithms are described in section <ref>, while the results are reported in section <ref>.
In all the methods and algorithms discussed below, the frame features are calculated as described in section <ref>.
Please also note that in the IEMOCAP database each utterance has only one emotion. Therefore, in the CTC approach, every true output sequence has length one, U_i = |𝐳_i| = 1. Thus one can consider the output sequence of emotion labels as a single emotion assigned to the utterance, and treat the vectors 𝐳_i, 𝐡(X_i) as scalars z_i, h_i.
§.§ Metrics
First of all, one needs to decide on the evaluation criteria. In this work the authors follow the suggestion from Lee et al. <cit.> and use two main metrics to evaluate and compare the models:
* Overall (weighted) accuracy
1/n∑_i=1^n[z_i=h_i]
This is the usual accuracy, calculated as the fraction of correct answers over all examples.
* Mean class (unweighted) accuracy
1/k∑_c=1^k∑_i=1^n[z_i=h_i]·[z_i=c]/∑_i=1^n[z_i=c]
The idea is to compute the accuracy within each class separately and then average these values across all k classes.
In both formulas above the square brackets denote indicator function.
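Both metrics are straightforward to compute; a minimal numpy sketch, assuming z and h are arrays of true and predicted labels:

import numpy as np

def overall_accuracy(z, h):
    z, h = np.asarray(z), np.asarray(h)
    return np.mean(z == h)                 # fraction of correct answers

def mean_class_accuracy(z, h):
    z, h = np.asarray(z), np.asarray(h)
    # per-class accuracy, averaged over the classes present in z
    return np.mean([np.mean(h[z == c] == c) for c in np.unique(z)])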
Overall accuracy is the standard metric, common in the literature and thus easy to compare with results from other papers. But it has one major drawback: it does not account for class imbalance, while in the IEMOCAP dataset, for example, the neutral class is approximately 1.7 times larger than the excitement class. Therefore the authors also use mean class accuracy, which takes the differences in class sizes into account and removes the influence of the imbalance on the metric value.
§.§ Baselines
In this subsection one can find the description and the performance report of the baseline algorithms.
§.§.§ Framewise
The core idea of this method is to classify each frame separately. Since the task is weakly supervised, the following workflow is chosen:
* Take the two loudest frames from each utterance; loudness in this context corresponds to spectral power
* Assign these frames the emotion of the utterance
* Train the frame classification model
* Label all frames in all utterances using the fitted model
* Classify utterances based on the obtained frame-level labels
The naive assumption here is that the whole utterance can be represented by its two loudest frames.
A Random Forest classifier <cit.> is used as the classification model.
To assign an emotion to an utterance, majority voting is applied to the emotion labels of its frames. A more detailed description of the algorithm, its hyperparameter settings and the code can be found in the GitHub repository <cit.>.
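A minimal sketch of this workflow using scikit-learn; the variable names and the per-frame spectral power array are assumptions, and the forest's settings here are illustrative rather than the authors' exact hyperparameters (see the repository <cit.> for those).

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed inputs: X_utt is a list of (T_i, 34) frame-feature arrays,
# power[i] holds the per-frame spectral power of utterance i, and
# y_utt[i] is its single emotion label.
def loudest_frames(X, p, n=2):
    return X[np.argsort(p)[-n:]]           # the n frames with highest power

train_X = np.vstack([loudest_frames(X, p) for X, p in zip(X_utt, power)])
train_y = np.repeat(y_utt, 2)              # utterance label on both frames

clf = RandomForestClassifier(n_estimators=100)   # illustrative setting
clf.fit(train_X, train_y)

def classify_utterance(X):
    frame_labels = clf.predict(X)          # label every frame of the utterance
    values, counts = np.unique(frame_labels, return_counts=True)
    return values[np.argmax(counts)]       # majority vote over frames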
Figure <ref> shows the results of this method for randomly chosen validation-set utterances. One can observe that it works fine for short utterances, but for longer utterances the predictions become sawtooth-like and unstable.
For the methodology and results of the overall comparison with other methods please see section <ref> and table <ref>.
§.§.§ One-label
The one-label approach assumes that every utterance has exactly one emotional label, regardless of its length. In other words, the sequence-to-label learning paradigm is used here, in contrast with sequence-to-sequence learning in CTC.
An important practical detail is that all major modern deep learning frameworks (such as TensorFlow, Keras and PyTorch) group data into batches; a batch is in fact a multidimensional tensor. Mini-batch gradient descent and its modifications are the de facto standard training methods for neural networks. The peculiarity here is that only tensors of the same dimensions can be packed into a batch. After the preprocessing steps described in section <ref>, the input data are sequences of the same feature dimension (34) but of different lengths, depending on the duration of the utterance. Thus it is impossible to pack them into a batch and train the network efficiently.
There are a couple of solutions to this problem, e.g., padding or bucketing <cit.>. Here the authors use padding. The idea is to make all sequences the same length: short sequences are appended with zeros and long sequences are truncated to a unified length. In this work the unified length equals 78, which is approximately the 90%-percentile of all sequence lengths. After this step the training can be done efficiently using mini-batch approaches; the Adam optimizer <cit.> was used for training.
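A minimal numpy sketch of the padding/truncation step, using the feature dimension (34) and unified length (78) quoted above; `sequences` stands for the list of per-utterance feature arrays and is an assumed name.

import numpy as np

MAX_LEN, N_FEAT = 78, 34   # ~90th percentile of lengths; MFCC feature size

def pad_or_truncate(seq):
    """Map a (T, 34) feature sequence to a fixed (78, 34) array."""
    out = np.zeros((MAX_LEN, N_FEAT))
    T = min(len(seq), MAX_LEN)
    out[:T] = seq[:T]                      # zero-pad short, truncate long
    return out

batch = np.stack([pad_or_truncate(s) for s in sequences])   # (batch, 78, 34)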
The one-label approach also requires a network architecture to be defined. The authors decided to use the same architecture for all approaches in order to compare them fairly. The one-label architecture is depicted in figure <ref> of Appendix A. It contains stacked bidirectional LSTM units with dense classification layers on top of them, trained with the categorical cross-entropy loss function. For a more detailed description of the network structure and training procedure, see figure <ref> in Appendix A and the code in <cit.>.
The methodology and results of the overall comparison with other methods are described in section <ref> and table <ref>.
§.§ CTC
Although the CTC approach can inherently account for more than one label per utterance, the design of the IEMOCAP database implies only one emotion per utterance (see sections <ref> and <ref>). Consequently there are four valid types of label sequences from L^* that can be generated by the network (see figure <ref>).
Each type of sequence is later collapsed by the M transformation during the CTC decoding step (see section <ref>). Note that all four valid sequence types collapse into a single "Emo" label.
When applying the CTC approach one faces the same problem with varying input sequence lengths as in the one-label approach in section <ref>. The solution here is the same: input sequences are padded or truncated to a length of 78. The only difference is that the initial sequence length is kept, so that the resulting output sequence can be decoded more accurately by ignoring the padded positions (see figure <ref> and the code <cit.> for more details).
The CTC approach also requires a network architecture. As mentioned in section <ref>, the authors use the same architecture for all approaches so that they can be compared fairly. The CTC architecture is shown in figure <ref> of Appendix A. It contains stacked bidirectional LSTM units with dense classification layers on top of them, trained with the CTC loss function. For a more detailed description of the network structure and training procedure, see figure <ref> in Appendix A and the code in <cit.>.
The methodology and results of the overall comparison with other methods are described in section <ref> and table <ref>.
§.§ Comparison
In this section we provide a comparison between the three approaches described in sections <ref>, <ref> and <ref>.
Each method is tested using a grouped cross-validation approach. In the usual k-fold cross-validation, the dataset is randomly split into k disjoint folds; at each of the k steps, one fold is used as the test set and all other folds are used as the training set.
Grouped cross-validation assumes that each data sample has an additional label showing the group of the sample. A group in this context may be any kind of common property that samples share; in this work the group is the speaker, i.e. a group contains all samples (and only those) spoken by one person. Grouped cross-validation splits the data in such a way that samples from one group cannot be in both the training and test sets simultaneously.
The grouped cross-validation technique ensures that model quality is measured in a speaker-independent way, i.e. that the model is not overfitted to the manner of the particular speakers present in the training set.
The IEMOCAP dataset contains 10 speakers, recorded in pairs, and each speaker has roughly the same number of utterances. If one were to form the groups by individual speakers, only 10% of the data would be available for testing, which might be too unstable. Thus the authors decided to form groups not by single speakers but by the pairs of speakers that were recorded together; in this way 20% of the data is held out for testing, which is more stable.
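scikit-learn ships this splitting strategy as GroupKFold; a minimal sketch, where `groups` encodes the speaker pair of each sample and the train/evaluate helpers are hypothetical placeholders:

import numpy as np
from sklearn.model_selection import GroupKFold

# X, y: padded features and utterance labels; groups[i] identifies the
# speaker pair of sample i (all three assumed prepared earlier).
scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
    model = build_and_train(X[train_idx], y[train_idx])       # hypothetical
    scores.append(evaluate(model, X[test_idx], y[test_idx]))  # helpers
print(np.mean(scores))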
The results of 5-fold grouped cross-validation, averaged across folds, are shown in table <ref>.
The first row, "Dummy", corresponds to the naive classification model that always answers with the label of the largest training class (in the IEMOCAP case, the neutral class). The "Framewise" and "One-label" rows represent the described baseline models, and "CTC" the model investigated in this paper. As one can see, CTC performs slightly better than the one-label approach and much better than Framewise and Dummy.
The last line of the table shows human performance at the same task. The authors conducted a series of experiments to measure it; the process is described in more detail in section <ref>.
§.§ Error structure
Observing the quality of the CTC model in section <ref>, the authors decided to investigate it further. Graves et al. <cit.> report a huge gap in quality over classical models, whereas here the gain is about 3-5%. For that reason the error structure is studied.
First of all, let's look at the distribution of the predictions in comparison with the real expert labels, by means of the confusion matrix shown in figure <ref>. Busso et al. <cit.> mention that the audio signal plays the main role in sadness recognition, while anger and excitement are better detected via the video signal which accompanied the audio during the assessors' work on IEMOCAP. This hypothesis appears to hold for the CTC model: the recognition percentage for sadness is much higher than for the other emotions.
In section <ref> the authors have already described that the expert answers are sometimes not fully consistent (see figure <ref>), which allows one to speak about the reliability of a label. Figure <ref> shows how the model quality depends on the degree of expert confidence. The x-axis shows the number of experts whose answer differs from the final emotion assigned to the utterance; the y-axis shows the emotion label. Each cell of the table gives the model error percentage when classifying the corresponding emotion at the corresponding confidence level; the redder the cell, the bigger the error.
In fact this matrix gives an interesting piece of information: if one takes into account only those utterances on which the experts were fully consistent, the accuracy is approximately 65%, which sounds more promising than 54%.
Going further, the authors investigate the wrong predictions themselves and not only their distribution. In inconsistent samples, some experts give answers that differ from the final emotion assigned to the utterance; these answers can be an arbitrary emotion from the full IEMOCAP list. Here the authors keep only the four considered emotions among all such divergent answers.
The first row of table <ref> gives the percentage of inconsistent answers, for utterances labeled as in the column header, that fall into the four considered emotions. For example, 17% in the "Anger" column means the following: utterances finally labeled as angry have some inconsistent expert answers, and 17% of those answers come from the set of the four considered emotions.
The second row gives the percentage of model answers that coincide with the inconsistent expert answer in such cases. Note that there cannot be more than one inconsistent answer per utterance, because otherwise half of the experts would be inconsistent and the utterance would not have been included in the dataset at all.
In other words, table <ref> shows how frequently the errors of the model coincide with the human divergence in emotion assessment. If the errors of the model were random, the second row of the table would contain approximately 33% in each cell. For the CTC model this percentage is much higher, meaning that the model makes mistakes similar to human mistakes. This topic is further discussed in section <ref>.
§.§ Human performance
Observing the inconsistency of the experts and the other problems of the markup described in sections <ref> and <ref>, the authors decided to see how well humans perform at this task.
This question has been raised in the literature before. As already described in section <ref>, Altrov et al. did similar work in <cit.>. They used almost the same 4 classes (joy, anger, sadness, neutral), so the results should be comparable: native-language speakers scored about 69% mean class accuracy, while all other people performed 10-15% worse.
In this work a simple interface (fig. <ref>) for relabelling the speech corpus was developed. The idea is to see how well humans can solve this classification task; one can consider this a humanized machine learning model.
Five people were involved in the experiment. All of them were the authors' lab colleagues (not professional actors or psychologists) and their native language is Russian. Each of them was asked to assess a random subset of the utterances, with the possibility to see the correct answer after giving their own. This allows for a positive feedback loop and a kind of "model training" in terms of the humanized machine learning model. Prior to the main experiment, a small fraction of the utterances (2 from each emotion, 8 in total) was excluded from the main dataset and given to the assessors as training examples. Through this mechanism the assessors were able to get used to the system and the way the actors talk, and to tune the volume level and other parameters; answers at this preliminary stage were not included in the final statistics. Finally, each utterance was assessed by at least 2 assessors.
Figure <ref> shows the results of the experiment.
Both overall accuracy and mean class accuracy are about 70% (see table <ref>). These numbers support the idea that emotion is a subjective notion, and it is hardly probable for any model to significantly exceed this 70%. In this light the model error structure investigated in section <ref> becomes crucial, because human errors are not random: humans make mistakes in the cases where the emotion is indeed unclear. For example, it is hard to confuse anger and sadness, but easy to confuse excitement and happiness.
This leads to the conclusion that, to see the real quality of a model, one should look not only at the accuracy numbers but also at the error structure, which should be reasonable and resemble the human one. If both criteria are satisfied (high enough accuracy and a reasonable error structure), one can say that the model is good. The error structure analysis for the CTC model carried out in section <ref> satisfies both criteria, and thus the investigated CTC model can be considered to work well.
§ CONCLUSION
In this paper the authors propose a novel algorithm for emotion recognition from audio based on the Connectionist Temporal Classification approach. There are two main advantages of the suggested method:
* It takes into account that even an emotional utterance might contain parts with no emotion
* It can predict a sequence of emotions for a single utterance
The conducted experiments lead to results comparable with the state of the art in this field. The authors provide an in-depth analysis of the model's answers and errors. Moving further, human performance on this task is measured in order to understand the possible limits of model improvement. The initial suggestion that emotion is a subjective notion is confirmed, and it turns out that the gap between humans and the proposed model is not so big. Moreover, the error structure for the humans and the model is similar, which becomes one more argument in favor of the model.
The authors have several plans for the future development of the current work. One direction is to get rid of the handcrafted MFCC feature extraction and switch to learnable methods such as convolutional neural networks. Another is to apply domain adaptation techniques and transfer knowledge from speech recognition methods to emotion detection using pretraining and fine-tuning.
§ APPENDIX A
|
http://arxiv.org/abs/1701.07892v1 | 20170126222832 | SDSS J105754.25+275947.5: a period-bounce eclipsing cataclysmic variable with the lowest-mass donor yet measured | [
"M. J. McAllister",
"S. P. Littlefair",
"V. S. Dhillon",
"T. R. Marsh",
"B. T. Gänsicke",
"J. Bochinksi",
"M. C. P. Bours",
"E. Breedt",
"L. K. Hardy",
"J. J. Hermes",
"S. Kengkriangkrai",
"P. Kerry",
"S. G. Parsons",
"S. Rattanasoon"
] | astro-ph.SR | [
"astro-ph.SR"
] |
We present high-speed, multicolour photometry of the faint, eclipsing cataclysmic variable (CV) SDSS J105754.25+275947.5. The light from this system is dominated by the white dwarf. Nonetheless, averaging many eclipses reveals additional features from the eclipse of the bright spot. This enables the fitting of a parameterised eclipse model to these average light curves, allowing the precise measurement of system parameters. We find a mass ratio of q = 0.0546 ± 0.0020 and inclination i = 85.74 ± 0.21 ^∘. The white dwarf and donor masses were found to be M_w = 0.800 ± 0.015M_⊙ and M_d = 0.0436 ± 0.0020M_⊙, respectively. A temperature T_w = 13300 ± 1100 K and distance d = 367 ± 26 pc of the white dwarf were estimated through fitting model atmosphere predictions to multicolour fluxes. The mass of the white dwarf in SDSS J105754.25+275947.5 is close to the average for CV white dwarfs, while the donor has the lowest mass yet measured in an eclipsing CV. A low-mass donor and an orbital period (90.44 min) significantly longer than the period minimum strongly suggest that this is a bona fide period-bounce system, although formation from a white dwarf/brown dwarf binary cannot be ruled out. Very few period-minimum/period-bounce systems with precise system parameters are currently known, and as a consequence the evolution of CVs in this regime is not yet fully understood.
binaries: close - binaries: eclipsing - stars: dwarf novae - stars: individual: SDSS J105754.25+275947.5 - stars: cataclysmic variables - stars: brown dwarfs
§ INTRODUCTION
Cataclysmic variable stars (CVs) are close, interacting binary systems containing a white dwarf primary star and a low-mass, Roche-lobe filling secondary star. Material from the secondary (donor) star is transferred to the white dwarf, but is not immediately accreted in those systems with a low magnetic field white dwarf. Instead, an accretion disc forms in order for angular momentum to be conserved. An area of increased luminosity is present at the point where the stream of transferred material makes contact with the disc, and is termed the bright spot. For a general review of CVs, see <cit.> and <cit.>.
For systems with inclinations greater than approximately 80^∘ to our line of sight, the donor star can eclipse all other system components. Eclipses of the individual components – white dwarf, bright spot and accretion disc – create a complex eclipse shape. These individual eclipses occur in quick succession, and therefore high-time resolution observations are required in order to separate them from each other. High-time resolution also allows the timings of white dwarf and bright spot eclipses to be precisely measured, which can be used to derive accurate system parameters <cit.>.
Steady mass transfer from donor to white dwarf is possible in CVs due to sustained angular momentum loss from the system. The gradual loss of angular momentum causes the donor star's radius – and therefore the separation and orbital period of the system – to decrease over time. During this process the donor star's thermal time-scale increases at a faster rate than its mass-loss time-scale, which has the effect of driving it further away from thermal equilibrium. Around the point where the donor star becomes substellar, it is sufficiently far from thermal equilibrium for it to no longer shrink in response to angular momentum loss. In fact, the (now degenerate) donor star's radius actually increases with further losses, resulting in the system separation and orbital period also increasing (e.g. <cit.>; <cit.>).
A consequence of CV evolution is therefore a minimum orbital period that systems reach before heading back towards longer periods. The orbital period minimum is observed to occur at an orbital period of 81.8 ± 0.9 min <cit.>, consistent with an accumulation of systems found at 82.4 ± 0.7 min <cit.> known as the `period spike' and expected to coincide with the period minimum. CVs evolving back towards longer periods are referred to as period-bounce systems or `period bouncers'.
When considering period bouncers as a fraction of the total CV population, there is a serious discrepancy between prediction and observation. Evolutionary models predict ∼ 40-70% of the total CV population to be period bouncers <cit.>. In contrast, for many years there was a distinct lack of direct evidence for any substellar donors within CVs <cit.>, and it wasn't until a decade ago that the first direct detection was claimed by <cit.>. A rough estimate of ∼ 15% for the fraction of period bouncers was made from a small sample of eclipsing CVs by <cit.>. Two characteristics of period-bounce CVs are a faint quiescent magnitude and a long outburst recurrence time <cit.>, which may have resulted in an under-sampling of the population. However, the identification of CVs from the Sloan Digital Sky Survey (SDSS; <cit.>) – which make up the majority of Savoury et al.'s sample – should not be significantly affected by either, due to being reasonably complete down to g∼19 mag and selection from spectral analysis <cit.>. We therefore expect to find a substantial population of period-bounce systems in the SDSS sample.
One such object discovered by the SDSS is SDSS J105754.25+275947.5 (hereafter SDSS 1057). A faint system at g'≃19.5, it was identified as a CV by <cit.>. The SDSS spectrum for this system is dominated by the white dwarf, and also shows double-peaked Balmer emission lines – characteristic of a high-inclination binary. <cit.> confirmed SDSS 1057 to be an eclipsing CV after finding short and deep eclipses with low-time-resolution photometry. These light curves also appear flat outside of eclipse with no obvious orbital hump before eclipses, hinting at a faint bright spot feature and therefore low accretion rate. From their photometry, <cit.> measure SDSS 1057's orbital period to be 90.44 min. Due to a low accretion rate and no sign of a secondary star in its spectrum, <cit.> highlight SDSS 1057 as a good candidate for a period-bounce system.
In this paper, we present high-time resolution ULTRACAM and ULTRASPEC eclipse light curves of SDSS 1057, which we average and model in order to obtain precise system parameters. The observations are described in Section <ref>, the results displayed in Section <ref>, and an analysis of these results is presented in Section <ref>.
§ OBSERVATIONS
SDSS 1057 was observed a total of 12 times from Apr 2012 - Jun 2015 with the high-speed cameras ULTRACAM <cit.> and ULTRASPEC <cit.>. Half of these observations are from ULTRACAM on the 4.2 m William Herschel Telescope (WHT), La Palma, with the other half from ULTRASPEC on the 2.4 m Thai National Telescope (TNT), Thailand. Eclipses were observed simultaneously in the SDSS u' g' r' filters with ULTRACAM and in a Schott KG5 filter with ULTRASPEC. The Schott KG5 filter is a broad filter, covering approximately u' + g' + r'. A complete journal of observations is shown in Table <ref>.
Data reduction was carried out using the ULTRACAM pipeline reduction software (see <cit.>). A nearby, photometrically stable comparison star was used to correct for any transparency variations during observations.
The standard stars Feige 34 (observed on 29 Apr 2012), G162-66 (25 Apr 2012) and HD 121968 (21 and 23 Jun 2015) were used to transform the photometry into the u' g' r' i' z' standard system <cit.>. The KG5 filter was calibrated using a similar method to <cit.>; see (2016, in press) for a full description of the calibration process. A KG5 magnitude was calculated for the SDSS standard star GJ 745A (01 Mar 2015), and used to find a target flux in the KG5 band.
For observations at the WHT, photometry was corrected for extinction using the typical r'-band extinction for good quality, dust free nights from the Carlsberg Meridian Telescope[<http://www.ast.cam.ac.uk/ioa/research/cmt/camc_extinction.html>], and subsequently converted into u' and g' bands using the information provided in La Palma Technical Note 31[<http://www.ing.iac.es/Astronomy/observing/manuals/ps/tech_notes/tn031.pdf>]. At the TNT, photometry was corrected using extinction measurements obtained during the commissioning phase (Nov 2013) of ULTRASPEC <cit.>.
§ RESULTS
§.§ Orbital ephemeris
Mid-eclipse times (T_mid) were determined assuming that the white dwarf eclipse is symmetric around phase zero: T_mid = (T_wi + T_we)/2, where T_wi and T_we are the times of white dwarf mid-ingress and mid-egress, respectively. T_wi and T_we were determined by locating the times of minimum and maximum in the smoothed light curve derivative. There were no significant deviations from linearity in the T_mid values and the T_mid errors (see Table <ref>) were adjusted to give χ^2 = 1 with respect to a linear fit.
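One simple way to implement this timing measurement is sketched below; the smoothing width is an assumption, as the authors' exact smoothing procedure is not specified here.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def mid_eclipse_time(t, flux, sigma=3):
    """T_mid from the extrema of the smoothed light-curve derivative:
    flux drops fastest at mid-ingress and rises fastest at mid-egress."""
    d = np.gradient(gaussian_filter1d(flux, sigma), t)
    T_wi = t[np.argmin(d)]     # white dwarf mid-ingress
    T_we = t[np.argmax(d)]     # white dwarf mid-egress
    return 0.5 * (T_wi + T_we)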
All eclipses were used to determine the following ephemeris:
HMJD = 56046.002389(8) + 0.0627919557(6) E.
This ephemeris was used to phase-fold the data for the analysis that follows.
§.§ Light curve morphology and variations
All observations listed in Table <ref> show a clear white dwarf eclipse, while only a select few show a very faint bright spot eclipse. The difficulty in locating the bright spot eclipse feature in these light curves is due to the bright spot in SDSS 1057 being significantly less luminous than the white dwarf. This is made even harder due to the low signal-to-noise of each light curve – a consequence of SDSS 1057 being a faint system (g'∼19.5). In order to increase the signal-to-noise and strengthen the bright spot eclipse features, multiple eclipses have to be averaged. As discussed in <cit.>, eclipse averaging can lead to inaccuracies if there are significant changes in disc radius. Such changes can shift the timing of the bright spot eclipse features over time and result in the broadening and weakening of these features after eclipse averaging. Not all systems exhibit significant disc radius changes, and visual analysis of the positions of the bright spot in individual eclipses show SDSS 1057 to have a constant disc radius – making eclipse averaging suitable in this case.
The eclipses selected to contribute to the average eclipse in each wavelength band are phase-folded and plotted on top of each other in Figure <ref>. These include four out of the six ULTRACAM u' g' r' eclipses and three out of the six ULTRASPEC KG5 eclipses. The 30 Dec 2013 and 23 Jun 2015 ULTRACAM observations were not included due to being affected by transparency variations, while the first three ULTRASPEC observations were not used due to a low signal-to-noise caused by overly short exposure times. As can be seen in Figure <ref>, there is no obvious flickering component in any SDSS 1057 eclipse light curve, but a large amount of white noise. Despite this, there are hints of a bright spot ingress feature around phase 0.01 and an egress at approximately phase 0.08. These features are clearest in the r' band.
The resulting average eclipses in each band are shown in Figure <ref>. All four eclipse light curves have seen an increase in signal-to-noise through averaging, and as a result the bright spot features have become clearer – sufficiently so for eclipse model fitting (see section <ref>). The sharp bright spot egress feature in the r' band eclipse is further evidence for no significant disc radius changes in SDSS 1057 and validates the use of eclipse averaging in this instance.
§.§ Simultaneous average light curve modelling
The model of the binary system used to calculate eclipse light curves contains contributions from the white dwarf, bright spot, accretion disc and donor star, and is described in detail by <cit.>. The model makes a number of important assumptions: the bright spot lies on a ballistic trajectory from the donor, the donor fills its Roche lobe, the white dwarf is accurately described by a theoretical mass-radius relation, and an unobscured white dwarf <cit.>. The validity of this final assumption has been questioned by <cit.> through fast photometry observations of the dwarf nova OY Car. However, as stated in <cit.>, we feel this is still a reasonable assumption to make due to agreement between photometric and spectroscopic parameter estimates <cit.>. Due to the tenuous bright spot in SDSS 1057, a simple bright spot model was preferred in this instance, with the four additional complex bright spot parameters introduced by <cit.> not included. The simple bright spot model was also chosen for modelling the eclipsing CV PHL 1445, another system with a weak bright spot <cit.>.
As outlined in <cit.>, our eclipse model has recently received two major modifications. First, it is now possible to fit multiple eclipse light curves simultaneously, whilst sharing parameters intrinsic to the system being modelled, e.g. mass ratio (q), white dwarf eclipse phase full-width at half-depth (Δϕ) and white dwarf radius (R_w) between all eclipses. Second, there is the option for any flickering present in the eclipse light curves to now also be modelled, thanks to the inclusion of an additional Gaussian process (GP) component. This requires three further parameters to the model, which represent the hyperparameters of the GP. For more details about the implementation of this additional GP component to the model, see <cit.>. While the SDSS 1057 average light curves do not show any obvious signs of flickering, there is evidence for slight correlation in the residuals and therefore GPs are included in the analysis.
The four average SDSS 1057 eclipses were fit simultaneously with the model – GP component included. All 50 parameters were left to fit freely, except for the four limb-darkening parameters (U_w). This is due to the data not being of sufficient quality to constrain values of U_w accurately. The U_w parameters' priors were heavily constrained around values inferred from the white dwarf temperature and log g (see end of section <ref>). These white dwarf parameters were determined through a preliminary run of the fitting procedure described throughout this section and shown schematically in Figure <ref>.
An affine-invariant MCMC ensemble sampler <cit.> was used to draw samples from the posterior probability distribution of the model parameters. The MCMC was run for a total of 30,000 steps, with the first 20,000 of these used as part of a burn-in phase and discarded. The model fit to all four average eclipses is shown in Figure <ref>. The blue line represents the most probable fit, and has a χ^2 of 1561 with 966 degrees of freedom. The lines below each eclipse represent the separate components to the model: white dwarf (purple), bright spot (red), accretion disc (yellow) and donor (green). In addition to the most probable fit, a blue fill-between region can also be seen plotted on each eclipse. This represents 1σ from the posterior mean of a random sample (size 1000) of the MCMC chain.
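A minimal sketch of such a sampling run using the emcee package, the usual implementation of this affine-invariant sampler; the walker count, initial guess and log-posterior function are assumptions, while the step and burn-in counts follow the text.

import numpy as np
import emcee

# log_posterior(theta) is assumed to combine the eclipse-model likelihood,
# the GP term and the priors; theta_guess is an initial parameter vector.
ndim, nwalkers, nsteps = 50, 200, 30000    # walker count is an assumption
p0 = theta_guess + 1e-4 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps, progress=True)
chain = sampler.get_chain(discard=20000, flat=True)   # drop the burn-in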
In all four eclipses, the model manages to fit both the white dwarf and bright spot eclipses successfully. There is no structure visible in the residuals at the phases corresponding to any of the ingresses and egresses. In general, there is some structure in the residuals, which validates our decision to include the GP component. This component can be visualised through the red fill-between regions overlaying each eclipse's residuals in Figure <ref>, and represents 2σ from the GP's posterior mean. The GPs appear to model the residuals successfully in the r' and g' bands, but struggles for u' and KG5. This may be due to differing amplitudes and timescales of the noise between eclipses, while our GP component can currently only accommodate for a shared amplitude and timescale between all eclipses.
§.§.§ White dwarf atmosphere fitting
The depths of the four white dwarf eclipses from the simultaneous fit provide a measure of the white dwarf flux at u', g', r' and KG5 wavelengths. Estimates of the white dwarf temperature, log g and distance were obtained through fitting these white dwarf fluxes to white dwarf atmosphere predictions <cit.> with an affine-invariant MCMC ensemble sampler <cit.>. Reddening was also included as a parameter, in order for its uncertainty to be taken into account, but is not constrained by our data. Its prior covered the range from 0 to the maximum galactic extinction along the line-of-sight <cit.>. The white dwarf fluxes and errors were taken as median values and standard deviations from a random sample of the simultaneous eclipse fit chain. A 3% systematic error was added to the fluxes to account for uncertainties in photometric calibration.
Knowledge of the white dwarf temperature and log g values enabled the estimation of the U_w parameters, with use of the data tables in <cit.>. Linear limb-darkening parameters of 0.427, 0.392 and 0.328 were determined for the u', g' and r' bands, respectively. A value of 0.374 for the KG5 band was calculated by taking a weighted mean of the u', g' and r' values, based on the approximate fraction of the KG5 bandpass covered by each of the three SDSS filters.
§.§.§ System parameters
The posterior probability distributions of q, Δϕ and R_w/a returned by the MCMC eclipse fit described in section <ref> were used, along with Kepler's third law, the system's orbital period and a temperature-corrected white dwarf mass-radius relationship <cit.>, to calculate the posterior probability distributions of the system parameters <cit.> (a minimal numerical sketch of this step is given after the list below), which include:
* mass ratio, q;
* white dwarf mass, M_w;
* white dwarf radius, R_w;
* white dwarf log g;
* donor mass, M_d;
* donor radius, R_d;
* binary separation, a;
* white dwarf radial velocity, K_w;
* donor radial velocity, K_d;
* inclination, i.
The most likely value of each distribution is taken as the value of each system parameter, with upper and lower bounds derived from 67% confidence levels.
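To illustrate the scale of these quantities, here is a minimal sketch of the Kepler's-law step using the fitted values from Table <ref>. It omits the Roche geometry and the temperature-corrected mass-radius relation used in the full calculation, so the printed values (a ≈ 0.63 R_⊙, K_w ≈ 26 km s^-1, K_d ≈ 479 km s^-1) are only indicative.

import numpy as np

G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8        # SI units
P = 0.0627919557 * 86400                  # orbital period [s], from ephemeris
q, M_w, i = 0.0546, 0.800 * M_sun, np.radians(85.74)  # fitted values

a = (G * M_w * (1 + q) * P**2 / (4 * np.pi**2)) ** (1 / 3)   # Kepler III
K_d = 2 * np.pi * a * np.sin(i) / (P * (1 + q))   # donor radial velocity
K_w = q * K_d                                     # white dwarf radial velocity
print(a / R_sun, K_w / 1e3, K_d / 1e3)            # a in R_sun, K in km/s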
There are two iterations to the fitting procedure (Figure <ref>), with system parameters calculated twice in total. The value for log g returned from the first calculation was used to constrain the log g prior in a second MCMC fit of the model atmosphere predictions <cit.> to the white dwarf fluxes, as described in section <ref>. The results of this MCMC fit can be found in Figure <ref>, with the measured white dwarf fluxes in each band in blue and the white dwarf atmosphere model in red. The model and fluxes are in good agreement in all wavelength bands, however it appears that the measured u' band flux is slightly underestimated. On close inspection of the u' band eclipse fit in Figure <ref>, we find a greater than expected contribution from both the disc and donor at this wavelength, opening up the possibility that a small fraction of the true white dwarf flux may have been mistakenly attributed to these components. The measured fluxes from SDSS 1057 are consistent with a white dwarf of temperature 13300 ± 1100 K and distance 367 ± 26 pc.
The posterior probability distributions of the system parameters are shown in Figure <ref>, while their calculated values are given in Table <ref>. Also included in Table <ref> are the estimates of the white dwarf temperature and distance from the white dwarf atmosphere fitting.
§.§.§ Spectral energy distribution
<cit.> use both the SDSS spectrum and GALEX fluxes <cit.> to analyse the spectral energy distribution of SDSS 1057. The model of <cit.> is able to successfully reproduce the SDSS spectrum with a white dwarf temperature of 10500 K, log g of 8.0, distance of 305 pc, accretion disc temperature of 5800 K and an L5 secondary star. However, the model does not provide a good fit to the GALEX fluxes, which <cit.> state could have been taken during eclipse.
As we arrive at a slightly different white dwarf temperature, log g and distance (Table <ref>), as well as a slightly later spectral type secondary, we investigated whether the <cit.> model with these parameters is still a good fit to the SDSS spectrum. The resulting fit is shown in Figure <ref>. While the fit is good, the white dwarf temperature used appears to produce a slope that is slightly too blue, hinting that it might be marginally overestimated, but this may be corrected with alternate disc parameters. As in <cit.>, the GALEX fluxes (red data points) are again not fit well by the model, with both the near- and far-UV fluxes much lower than predicted. Using the ephemeris in Equation <ref>, we can rule out the possibility of these fluxes being taken during eclipse. Another reason for these low UV flux measurements could be due to absorption by an “accretion veil" of hot gas positioned above the accretion disc <cit.>. This explanation consequently invalidates our prior assumption of an unobscured white dwarf (see Section <ref>). However, we can take reassurance from the agreement between photometric and spectroscopic parameter estimates for two eclipsing CVs (OY Car and CTCV J1300-3052) that both show convincing evidence for an accretion veil <cit.>.
§ DISCUSSION
§.§ Component masses
The white dwarf in SDSS 1057 is found to have a mass of 0.800 ± 0.015 M_⊙, which is close to the mean CV white dwarf mass of 0.81 ± 0.04 M_⊙ <cit.> but notably higher than both the mean post-common-envelope binary (PCEB) white dwarf mass of 0.58 ± 0.20 M_⊙ <cit.> and the mean white dwarf field mass of 0.621 M_⊙ <cit.>.
The donor has a mass of 0.0436 ± 0.0020M_⊙, which makes it not only substellar – as it is well below the hydrogen burning limit of ∼0.075M_⊙ <cit.> – but also the lowest mass donor yet measured in an eclipsing CV.
§.§ Mass transfer rate
We calculate a medium-term average mass transfer rate of Ṁ = 6.0 ^+2.9_-2.1 × 10^-11M_⊙yr^-1 using the white dwarf mass and temperature <cit.>. This is a number of times greater than the expected secular mass transfer rate of Ṁ ∼ 1.5 × 10^-11M_⊙yr^-1 for a period-bounce system at this orbital period <cit.>, and is actually consistent with the secular mass transfer rate of a pre-bounce system of the same orbital period. This is further evidence that the white dwarf temperature we derive through white dwarf atmosphere predictions may be slightly overestimated.
Recalculating the medium-term average mass transfer rate using the lower white dwarf temperature of 10500 K from <cit.> brings it much more in line with the expected secular mass transfer rate. Importantly, the system parameters we obtain are consistent within errors, regardless of whether a white dwarf temperature of 10500 K or 13300 K is used to correct the white dwarf mass-radius relationship.
§.§ White dwarf pulsations
The white dwarf's temperature and log g put it just outside the blue edge of the DAV instability strip, which opens up the possibility of pulsations <cit.>. The lack of out-of-eclipse coverage and low signal-to-noise of this data is not conducive to a search for pulsations, and therefore out-of-eclipse follow-up observations are required to determine whether this white dwarf is pulsating.
§.§ Evolutionary state of SDSS 1057
The relation between donor mass and orbital period in CVs was used to investigate the evolutionary status of SDSS 1057. Figure <ref> shows SDSS 1057's donor mass (M_d) plotted against orbital period (P_orb), along with the four other known substellar donor eclipsing systems: SDSS J150722.30+523039.8 (SDSS 1507), PHL 1445, SDSS J143317.78+101123.3 (SDSS 1433) and SDSS J103533.03+055158.4 (SDSS 1035) <cit.>. Also plotted are four evolutionary tracks: a red track representing the evolution of a CV with a main-sequence donor <cit.>, and three blue tracks as examples of evolution when systems contain a brown dwarf donor from formation <cit.>.
CV systems that follow the main-sequence track evolve from longer to shorter periods – right to left in Figure <ref> – until the orbital period minimum (vertical dashed line) is reached, at which point they head back towards longer periods. Systems that form with a brown dwarf donor instead start at shorter periods and evolve to longer periods – left to right in Figure <ref> – and eventually join up with the post-period-bounce main-sequence track. The three brown dwarf donor tracks shown in Figure <ref> all have the same initial white dwarf (0.75 M_⊙) and donor (0.07 M_⊙) masses, but have different donor ages at start of mass transfer. The dashed, dot-dashed and dotted blue lines represent donor ages of 2 Gyr, 1 Gyr and 600 Myr respectively.
Figure <ref> is similar to Figure 9 from <cit.>, but now with SDSS 1057 added in. The evolutionary status of each of the four existing substellar systems were discussed in detail in <cit.>, which we summarise here. SDSS 1507 lies significantly below the period minimum in Figure <ref> due to being metal poor as a member of the Galactic halo, inferred from SDSS 1507's high proper motion <cit.>. This is an exceptional system and therefore we do not include it in the remaining discussion. From their positions in Figure <ref>, the best apparent explanation for PHL 1445 and SDSS 1433 (and arguably also SDSS 1035) is formation with a brown dwarf donor. However, due to the observation of a “brown dwarf desert” <cit.> the progenitors of such systems – and therefore the systems themselves – are expected to be very rare and greatly outnumbered by those following the main-sequence track. This makes it unlikely for even a single one of these systems to have formed with a brown dwarf donor, never mind the majority of this (albeit small) sample. The most likely scenario is that all three systems belong to the main-sequence track, which raises concerns for the accuracy of this track (see Section <ref>).
As it has the lowest donor mass of all other systems discussed above and an orbital period significantly greater than the period minimum, we find SDSS 1057 to be positioned close to the period-bounce arm of the main-sequence donor track in Figure <ref>. Its 90.44 min period puts distance between itself and the period minimum, giving SDSS 1057 the best case for being a true period bouncer among the other currently known substellar systems. This is backed up by SDSS 1057 possessing additional period-bouncer traits: low white dwarf temperature (although at 13300 K it is at the upper end of what's expected; ), faint quiescent magnitude (g'≃19.5 at d≃367 pc) and long outburst recurrence time (no outburst recorded in over 8 years of CRTS observations; ). It must be stated that due to the merging of the brown dwarf and main-sequence donor tracks post-period minimum, the scenario of SDSS 1057 directly forming with a brown dwarf donor cannot be ruled out. However, due to the lack of potential progenitors and with 80% predicted to lie below the period minimum <cit.>, this seems unlikely to be the case.
§.§ CV Evolution at period minimum
This study of SDSS 1057 brings the total number of modelled eclipsing period-minimum/period-bounce systems – and therefore systems with precise system parameters – to seven. This includes the period minimum systems SDSS J150137.22+550123.3 (SDSS 1501), SDSS J090350.73+330036.1 (SDSS 0903) and SDSS J150240.98+333423.9 (SDSS 1502) from <cit.>, which all have periods 86 min but aren't included in Figure <ref> due to having donor masses above the substellar limit.
It is evident that none of these systems – including SDSS 1057 – lie on the main-sequence donor track itself, with some (namely PHL 1445 and SDSS 1433) located far from it. This raises questions about the accuracy of the donor track in the period minimum regime, but it may be the case that there is a large intrinsic scatter associated with the track. It is expected for a small amount of intrinsic scatter to exist due to differences in white dwarf mass, but a significant contribution may come from variations in the additional angular momentum loss (approximately 2.5× gravitational radiation) that is required in order for the donor track to conform with the observed period minimum <cit.>. In <cit.> we used the width of the observed period minimum from <cit.> as a measure of the intrinsic scatter of the main-sequence donor track, but we concluded this was too small to account for the positions of PHL 1445 and SDSS 1433.
With such a small sample of observations currently available, it is not possible to thoroughly test the validity of the main-sequence donor evolutionary track at the period minimum. Many more precise masses from period-minimum/period-bounce systems are required, and therefore every additional eclipsing system within this regime that is suitable for modelling is of great value.
§ CONCLUSIONS
We have presented high-speed photometry of the faint eclipsing CV SDSS 1057. By increasing signal-to-noise through averaging multiple eclipses, a faint bright spot eclipse feature emerged from the white dwarf-dominated eclipse profiles. The presence of bright spot eclipse features enabled the determination of system parameters through fitting an eclipse model to average eclipses in four different wavelength bands simultaneously. Multi-wavelength observations allowed a white dwarf temperature and distance to be estimated through fits of model atmosphere predictions to white dwarf fluxes.
While the white dwarf in SDSS 1057 has a mass comparable to the average for CV white dwarfs, we find the donor to have the lowest mass of any known eclipsing CV donor. A low donor mass – coupled with an orbital period significantly greater than the period minimum – is strong evidence for SDSS 1057 being a bona fide period-bounce system, although formation from a white dwarf/brown dwarf binary cannot be ruled out. Every eclipsing period-minimum/period-bounce CV is of great interest, with so few systems with precise system parameters currently known. As a consequence, the evolution of systems in this regime is not yet fully understood.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for their comments. MJM acknowledges the support of a UK Science and Technology Facilities Council (STFC) funded PhD. SPL and VSD are supported by STFC grant ST/J001589/1. TRM and EB are supported by the STFC
grant ST/L000733/1. VSD and TRM acknowledge the support of the Leverhulme Trust for the operation of ULTRASPEC at the Thai National Telescope. Support for this work was provided by NASA through Hubble Fellowship grant #HST-HF2-51357.001-A. The research leading to these results has received funding from the
European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 320964 (WDTracer). The results presented are based on observations made with the William Herschel Telescope, operated at the Spanish Observatorio del Roque de Los Muchachos of the Instituto de Astrofísica de Canarias by the Isaac Newton Group, as well as the Thai National Telescope, operated at the Thai National Observatory by the National Astronomical Research Institute of Thailand. This research has made use of NASA's Astrophysics Data System Bibliographic Services.
|
http://arxiv.org/abs/1701.08077v1 | 20170127152049 | An assessment of the two-layer quasi-laminar theory of relaminarization through recent high-Re accelerated TBL experiments | [
"Rajesh Ranjan",
"Roddam Narasimha"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
The phenomenon of relaminarization is observed in many flow situations, including that of an initially turbulent boundary layer subjected to strong favourable pressure gradients. Available turbulence models have hitherto been unsuccessful in correctly predicting boundary layer parameters for such flows. Narasimha and Sreenivasan <cit.> proposed a quasi-laminar theory (QLT) based on a two-layer model to explain the later stages of relaminarization. This theory showed good agreement with the experimental data available, which at the time was at relatively low Re. QLT, therefore, could not be validated at high Re.
Some of the more recent experiments report for the first time comprehensive studies of a relaminarizing flow at relatively high Reynolds numbers (of order 5× 10^3 in momentum thickness), where all the boundary layer quantities of interest are measured. In the present work, the two-layer model is revisited for these relaminarizing flows with an improved code in which the inner-layer equations for quasi-laminar theory have been solved exactly. It is shown that even for high-Re flows with high acceleration, QLT provides a much superior match with the experimental results than the standard turbulent boundary layer codes. This agreement can be seen as strong support for QLT, which therefore has the potential to be used in RANS simulations along with turbulence models.
U_e : External velocity
U_s : Slip velocity in the outer-layer solution
ρ : Density
ν : Kinematic viscosity
P : Pressure
δ : Boundary layer thickness
δ^⋆ : Displacement thickness
θ : Momentum thickness
H : Shape factor = δ^⋆/θ
τ : Shear stress
u_τ : Friction velocity
y^+ : Wall distance in viscous units = y u_τ/ν
κ : von Kármán constant
K : Launder's pressure gradient parameter = (ν/U_e^2) dU_e/dx
Λ : Pressure gradient parameter = -(δ/τ_w_0) dP/dx
Δ_p : Pressure gradient parameter = (ν/(ρ u_τ^3)) dP/dx
Re : Reynolds number
Re_θ : Reynolds number based on θ
Re_δ : Reynolds number based on δ
x_0 : Location of the beginning of contraction in the experimental set-up
x_1 : Location of maximum c_f
x_rt : Location of retransition
§ INTRODUCTION
Relaminarization or `reverse transition' occurs when a turbulent or transitional boundary layer reverts to a laminar-like state. This phenomenon was first suspected on a gas turbine blade <cit.> which exhibited an unexpected drop in the measured heat-transfer co-efficients. Relaminarization has since been investigated not only in the turbine blades <cit.> but also in many technological flow situations including swept wings <cit.>, nozzle contractions <cit.> and supersonic Prandtl-Meyer expansion corners <cit.>; they occur in geo- and bio-mechanical situations as well. It has been observed <cit.> that relaminarization of an initially turbulent or transitional boundary layer or duct flow can occur due to one or more of several reasons including: (a) dissipation of turbulent energy due to the action of viscosity or other molecular transport properties, (b) absorption due to an external force or agent such as buoyancy or curvature, and (c) domination of the Reynolds stresses by other imposed forces, such as due to severe acceleration of the free stream in highly favourable pressure gradient (FPG) or a strong normal magnetic field in a magnetohydrodynamic duct flow. In the first two cases, the decay of turbulent intensity leads to changing the character of the mean flow. However, in the third case, the domination of pressure forces over a nearly frozen or slowly responding Reynolds shear stress plays the major role in relaminarization <cit.> (henceforth referred to as NS73). Thus, the Reynolds stresses are not quenched, but their contributions to momentum balance become negligible compared to that of the pressure gradient. It has for this reason been called `soft' relaminarization <cit.>, in contrast to the first two cases where there is `hard' relaminarization leading to decay and eventual quenching of turbulence.
In the present work, we will limit our discussions only to relaminarizing boundary layers pertaining to the third category, where the dynamics of reversion is more intriguing. Figure <ref> is a cartoon of relaminarization due to strong FPG, where one noted characteristic feature of the relaminarizing flow, namely the thinning of the initially turbulent boundary layer, is depicted. It should also be noticed that this relaminarization, unlike direct transition to turbulence (which occurs through the relatively sudden appearance of turbulent spots at a fairly well-defined onset location), is a gradual rather than catastrophic process; e.g. there is no abrupt drop in the boundary layer thickness (δ). It is therefore difficult to define a critical value for some single parameter to identify the onset of relaminarization. Several investigators have nonetheless proposed different indicators of relaminarization and pressure gradient parameters, each with its own critical value, to predict an onset in terms of some flow variable, as listed in Table <ref> (based on <cit.> and <cit.>).
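To make these pressure-gradient parameters concrete, here is a minimal sketch computing K and Λ from a measured free-stream velocity distribution, assuming the inviscid Euler relation dP/dx = -ρ U_e dU_e/dx for the free stream; the inputs are hypothetical.

import numpy as np

def pressure_gradient_params(x, U_e, nu, delta, tau_w0, rho):
    """Launder's K and the Lambda parameter from a free-stream velocity
    distribution U_e(x), using dP/dx = -rho * U_e * dU_e/dx."""
    dUdx = np.gradient(U_e, x)
    K = nu * dUdx / U_e**2                # = (nu / U_e^2) dU_e/dx
    dPdx = -rho * U_e * dUdx
    Lam = -delta * dPdx / tau_w0          # = -(delta / tau_w0) dP/dx
    return K, Lam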
As depicted in Fig. <ref>, the turbulent intensity does not come down very much in absolute magnitude; rather its normalized value with respect to the free-stream dynamic pressure declines continuously. Because of this the later stages of the relaminarizing boundary layer tends to a state that can be called `quasi-laminar' or `laminarized', as its features are more `laminar-like' than turbulent (in the sense that the mean flow can be predicted without appeal to any turbulence quantity, including the Reynolds shear stress). Warnack & Fernholz <cit.> have compared the mean-velocity profiles with the Falkner-Skan profiles at different streamwise stations in the relaminarizing region, and have noticed that the profiles deviate from the standard `log-law' as we go downstream. Sufficiently far downstream (in the quasi-laminar region), their match with the Falkner-Skan solutions is reasonably good. NS73 have also compared the profile of a relaminarized boundary layer with a Blasius profile, and they found the agreement to be excellent. However in both cases turbulent fluctuations remained easily measurable.
Apart from the decrease in δ, the laminarized boundary layer can be easily identified by a rapid increase in the shape-factor (H) and a substantial decrease in the skin friction (c_f) and heat-transfer co-efficients. After extensive study of experimental data, NS73 have proposed a quasi-laminar theory (QLT) to predict boundary layer parameters in this region. This theory has been successfully tested against experimental data that were available before 1972. These experiments were, however, mostly at relatively low Reynolds numbers (Re of order of a few to several hundreds based on momentum thickness), and often lacked measurements of several crucial boundary layer parameters (e.g. skin-friction) in the same flow. Some of the experimental data available at that time also had considerable scatter or poor momentum balance, and were therefore found to be not sufficiently reliable <cit.> for validating the theory.
As experimental as well as DNS data are now available on many relaminarizing flows, at relatively high Re and at high acceleration, we revisit QLT for some of these flows. While these recent investigations confirm the earlier conclusions in general <cit.>, none of them attempted quantitative comparisons between the two-layer theory and the experimental data. It is our objective here to fill this gap. For this purpose, a new improved code has been developed to solve the QLT equations exactly and with higher precision. This study gains significance from the fact that the available turbulence models have not been found reliable for predicting boundary layer parameters in a relaminarizing flow, as will be shown in later sections.
§ RECENT STUDIES ON RELAMINARIZATION
Sreenivasan <cit.> has presented an excellent review of experiments on relaminarizing flows available before 1983. He has also identified experiments that can be considered `reliable' and `trustworthy'. He lamented the fact that despite many experiments whose results were available at that time, there was no single experiment that could be `recommended for turbulence modeling and further computations'.
Since the publication of that review, there have been at least 4 detailed experimental studies (to the best of our knowledge), which address several concerns raised in <cit.> through improved setup and instrumentation. We consider that among these studies, the experiment by Bourassa and Thomas <cit.>, apart from having been carried out at the highest initial Reynolds numbers Re_θ0 to date, has implemented most of the suggestions made in <cit.> for a future experiment, and could be the best single case for developing or validating a model for relaminarization in high FPG.
Table <ref> lists recent laboratory experiments including two entries from the list available in <cit.> and <cit.> which are often quoted in the recent literature. The experiments in which comprehensive data are not available, or which are otherwise considered not `reliable' (according to <cit.>), are not considered in preparing this list. This table includes a brief summary of each flow including (wherever available) the initial Reynolds number (Re_θ 0), beginning of contraction in the experiment (x_0), extent of the laminarized zone for the present analysis, defined by the region between maximum c_f near contraction (x_1) and the minimum c_f at retransition (x_rt), maximum acceleration as defined by parameters K and Λ, minimum and maximum shape factors, as well as a brief description of the set-up and measurement techniques used. Brief comments about these flows are given below.
Badri Narayanan & Ramjee <cit.> report a series of seven experiments using a tunnel wall-liner as well as wedges to achieve high acceleration. There is usually a large scatter in the data, and skin-friction measurements were made in only two experiments out of the seven. In <cit.>, these two cases were considered for the assessment of QLT. The major drawback of these experiments is the very low initial Reynolds numbers, which unfortunately do not rule out low-Re effects.
Blackwelder & Kovasznay <cit.> used a 2D contraction to achieve relaminarization and showed that the Reynolds shear stress in the outer region (away from the wall) remains nearly constant along mean streamlines in the relaminarizing region. This aspect is further discussed in the next section. The intermittency factor however decreased continuously along mean streamlines through strong FPG. Despite being the only detailed study at a relatively high initial Reynolds number (Re_θ0 = 2500) at the time of Sreenivasan's review, its usefulness is limited by poor 2D momentum balance.
Escudier et al. <cit.> used a contraction shape based on an inviscid analytical solution for a forward-facing step to achieve high acceleration. They used an improved method (compared to BK) to estimate intermittency, and argued that the change in the structure of the turbulent boundary layer can be explained by the fall in intermittency in the vicinity of the wall (from 1 to virtually zero when the flow is fully `laminarized'). In their experiment, it is noticed that as K reaches K^⋆ = 3×10^-6 (where the superscript denotes the proposed critical value) there is a steep fall in Re_θ from around 1100 to 300 in a distance of 0.1 m - steeper than in any other experiment carried out to date. Interestingly, the region highlighted by <cit.> in their plots does not correspond to the laminarized zone as defined above (x_1-x_rt); in fact, c_f keeps falling (and H rising) beyond that highlighted region. In the context of these observations, the low-Re effect cannot be ruled out as the cause of the sharp and sudden change in boundary layer parameters. Furthermore, there is a strong and local departure from two-dimensionality in the region of the steep fall of Re_θ.
Warnack & Fernholz <cit.> have performed careful experiments on an axisymmetric center-body in a wind tunnel and focussed on reducing the error in skin-friction measurements due to the use of the same instrumentation in different flow regimes (fully turbulent, laminar-like, zero-pressure gradient etc.). In order to achieve this objective, they have used Preston tubes, surface fences, wall hot-wire probes as well as oil-film interferometry for c_f measurement. Out of the four experiments they performed with this set-up, two (labelled WF2 and WF4 in Table <ref>) exhibited relaminarization, and confirmed the validity of ideas underlying the analysis of <cit.>.
Bourassa & Thomas <cit.> present an experimental investigation on a flat-plate turbulent boundary layer in a high contraction which achieves steep favourable pressure gradients over a small distance. The peak value of the pressure-gradient parameter K thus achieved is 4.5 × 10^-6, which is 1.5 times higher than the `critical' value 3 × 10^-6 often considered as necessary for relaminarization. Compared to other experiments, BT have the longest history of flow development for the `initial' turbulent boundary layer before acceleration is applied, and the highest Re_θ0 to-date. In their study, they report most of the flow quantities of interest, measured to high precision.
Figure <ref> shows the external velocity distribution for all the recent experiments with an initial Reynolds number Re_θ0 > 800 in Table <ref>. The entry with legend IY in this figure is taken from <cit.> (Re_θ0 = 799). The work of <cit.> is largely concerned with the structural changes of the relaminarizing boundary layer, but no data on mean flow properties are available. The velocity distributions for WF2 and WF4 are similar, as the same centre-body was used to generate the pressure distribution. Clearly BT stands out from the rest, with the strongest acceleration (U_e increases by a factor of about 5.14 in a contraction length of 0.61 m). The high strain rate present in this region changes the structure of turbulence enormously, and the local skin friction c_f falls by a factor of around 2.5 within the contraction.
In the light of this discussion it can be seen that among available experiments, BT-flow presents the most severe high-Re case for assessing as well as developing relaminarizing flow models. A rigorous assessment of QLT for this flow, as presented in the following sections, is a first step towards development of such models.
Here it should be pointed out that all the experiments mentioned in Table <ref>, except BT, report breakdown of the standard log-law in the relaminarized region. BT, however, argue that the logarithmic law persists even in the relaminarized region, but with substantially different values of slope and intercept. It may require a few more experiments at high Re to settle this issue.
There have also been more recent DNS studies (<cit.>, <cit.>) which confirm most of the general features of the relaminarizing flow as described here. They are not considered in the present study as the initial Reynolds numbers (Re_θ0 = 458 for <cit.>, and Re_θ0 = 1130 and 1900 for <cit.>) for these simulations are still appreciably lower than that of <cit.>.
§ QUASI-LAMINAR THEORY
NS73 have proposed a quasi-laminar theory based on their two-layer model, comprising a viscous inner layer and an outer stress-free (hence inviscid) but rotational layer, to explain the mechanics of relaminarization. As the present analysis follows <cit.>, only a summary of the method is given below.
In the region with high favourable pressure gradients, the turbulent structures in the outer layer are distorted due to rapid flow acceleration. This leads to the Reynolds shear stress being `frozen' along streamlines and hence out of step with the steep rise in the dynamic pressure. Consequently the pressure gradient dominates the slowly responding Reynolds shear stress in the outer layer, i.e. dp/dx ≫ ∂τ/∂ y, and hence the boundary layer begins drifting away from a fully turbulent state towards a turbulence-independent (quasi-laminar) state. Thus the outer layer can be treated as stress-free for calculations, with a slip velocity U_s at the surface.
In order to satisfy the no-slip boundary condition, an inner viscous sub-boundary layer develops subsequently. (In other words, the viscous sublayer already present in the turbulent boundary layer is transformed to a laminar sub-boundary layer.) This layer can be thought of as originating from the decaying upstream turbulence in the viscous sublayer of the turbulent boundary layer and is maintained in a stable state by the highly favourable pressure gradient.
§.§ Validity of QLT for recent experiments
We now discuss two aspects of QLT: frozen Reynolds stress and conservation of mean vorticity, in order to ascertain their validity for high Re flows. These aspects, described in <cit.> and <cit.>, offer further insights into the mechanics of relaminarization.
§.§ Reynolds Stress
Returning to the Reynolds stresses, the shear stress in the BK-flow was found experimentally to vary little along streamlines compared to the changes in the dynamic pressure (1/2) ρ U_e^2 in the outer layer. A similar observation was made in <cit.> regarding the BR experiments <cit.>. Among the more recent experiments, Reynolds stresses from the available data for BT and WF2 (X-wire data for WF4 are not available), calculated along the mean streamlines, are shown in Figure <ref>. The streamfunction was calculated using the mean velocity profiles given for these experiments in wall variables u^+, y^+ from ψ/ν = ∫ u^+ dy^+. It can be noted from Fig. <ref> that for BT-flow, even for streamlines away from the wall in the boundary layer (increasing ψ/ν), there is less than 20% variation in the Reynolds shear stress along a streamline, whereas the dynamic pressure increases nearly 20 times (see Fig. <ref>) over the same distance.
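The streamline-following calculation is straightforward to script; the sketch below (Python; our codes are in MATLAB) integrates ψ/ν = ∫ u^+ dy^+ by the trapezoidal rule and interpolates the measured shear stress at a fixed ψ/ν across stations. The per-station data layout and all names are assumptions of the sketch, not our actual code.

```python
import numpy as np

def streamfunction(u_plus, y_plus):
    """psi/nu at each y+ via trapezoidal integration of u+ dy+."""
    dpsi = 0.5 * (u_plus[1:] + u_plus[:-1]) * np.diff(y_plus)
    return np.concatenate(([0.0], np.cumsum(dpsi)))

def stress_along_streamline(psi_over_nu, stations):
    """-u'v' at a fixed psi/nu, one value per streamwise station.

    Each station is a dict with arrays 'y_plus', 'u_plus' and 'uv'
    (the Reynolds shear stress profile) -- a hypothetical layout.
    """
    values = []
    for st in stations:
        psi = streamfunction(st['u_plus'], st['y_plus'])
        values.append(np.interp(psi_over_nu, psi, st['uv']))
    return np.array(values)
```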
§.§ Conservation of mean-flow vorticity
One direct consequence of the outer layer being stress-free is conservation of mean angular momentum (hence mean vorticity ω ∼ (U_e - U_s)/δ). This explains the thinning of the boundary layer in the relaminarizing region, since the difference between U_e and U_s decreases downstream (shown in Fig. <ref> later). This inviscid nature of the outer layer also restricts entrainment across the layer. NS73 have shown that this was supported by the experimental evidence at the time.
In Fig. <ref>, the Reynolds numbers based on δ, δ^⋆ and θ are plotted for BT-flow. Also included is the Reynolds number based on (δ-δ^⋆), which is a measure of the boundary layer mass-flux. It is obvious from the plots that in the laminarized region there is very little variation in Re_δ and Re_(δ -δ^⋆), in contrast to Re_δ^⋆ and Re_θ. This strengthens the argument of nearly constant mass flux in the boundary layer, and hence very little entrainment.
This is confirmed in Fig. <ref>, showing data on Re_δ for other relaminarizing flow experiments. This plot also includes an entry from an experiment by Narahari Rao (labelled NR) in <cit.>. As δ^⋆ is very small compared to δ, Re_δ closely indicates the mass-flux across the layer. Barring a little scatter (for example in WF4), the value of Re_δ remains approximately constant for all streamwise locations in the laminarized region, indicating again little entrainment.
§.§ Solution of QLT
Based on the arguments presented above, a two-layer model was proposed by NS73 to solve for the flow in the quasi-laminar region. Figure <ref> depicts these two layers that constitute the quasi-laminar boundary layer. In this model the outer layer is governed by the equation
u̅∂u̅/∂x̅ + v̅∂u̅/∂y̅ = U_e dU_e/dx̅
and the inner by
ũ∂ũ/∂x̃ + ṽ∂ũ/∂ỹ = U_s dU_s/dx̃ + ν∂^2ũ/∂ỹ^2
Here the overbar and tilde represent outer and inner variables respectively.
The boundary conditions for each layer are also shown in Fig. <ref>, based on the matched asymptotic expansions in the two layers (NS73). Here the slip (or surface) velocity U_s, which the inner layer sees as the `free-stream' velocity at its edge, can be calculated after the origin of the inner laminar layer (called `virtual origin') is fixed.
§.§ Choice of virtual origin
The precise definition of the virtual origin is a grey area, as there is no single parameter to locate a precise `onset of relaminarization'. In Fig. <ref>, different pressure gradient parameters are plotted over the c_f plot for four flows. The proposed critical values for the parameters K^⋆, Δ_p^⋆, Λ^⋆ listed in Table <ref> (with K^⋆ as suggested by <cit.>) are also plotted on the abscissa. There is some scatter among these criteria, but as observed by NS73, a precise definition of the virtual origin is not compulsory for quasi-laminar calculations, as the effect on the solution is not significant unless the origin is appreciably altered.
This has been largely found to be true for the current calculations, where the beginning of the contraction (x_0) has been taken as the virtual origin (here Δ_p^⋆ ≃ -0.02). In the case of BT-flow, however, quasi-laminar calculations with x_0 as virtual origin lead to slightly over-predicted skin-friction values. When the origin is shifted further upstream (x-x_0 = -1.57m), the predictions are improved (see Figure <ref> in section <ref>). Interestingly, there are no appreciable effects of this shift of origin on the other parameters such as shape factor or boundary layer thickness.
This shift of the effective origin of the inner laminar layer could be due to the upstream effect of the flow in the BT-experiment, with its long history of flow before the sudden and large contraction compared to other relaminarizing experiments. The upstream effects of this contraction can be seen in Figure <ref>, where velocity as well as K are shown to be rising even 1.57 m before the contraction. It has been found that one satisfactory way of determining an effective virtual origin for the laminar sub-boundary layer would be to see if U_s(x) in the laminarised region belongs to the Falkner-Skan family. This can be done by putting U_s ∼ (x-x_0)^m, where both x_0 and m can be determined by a least-squares procedure. We have actually done this for the BT-flow and the fit is statistically good to r^2 = 0.99, with x_0 = -1.5m and m = 7.4328.
After the virtual origin is fixed, NS73 computed the slip velocity for the outer layer (or the `external' velocity for the inner layer) using U_s = √((U_e^2 - U_e0^2)+U_s0^2), where U_e0 and U_s0 are respectively external and slip velocities at the virtual origin. U_s0 can be taken as zero or a very small value. Figure <ref> shows U_e and U_s for the BT-flow.
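A minimal sketch of this procedure (in Python; our codes are in MATLAB) is given below: a bounded least-squares fit of U_s = C (x - x_0)^m locates the virtual origin, and the NS73 relation then yields U_s from U_e. The three-parameter form, initial guesses and bounds are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_virtual_origin(x, Us):
    """Fit U_s = C (x - x0)^m and return (x0, m, r2)."""
    f = lambda xx, C, x0, m: C * (xx - x0) ** m
    p0 = (1.0, x.min() - 1.0, 1.0)                      # keep x - x0 > 0
    bounds = ([0.0, -np.inf, 0.0], [np.inf, x.min() - 1e-6, np.inf])
    (C, x0, m), _ = curve_fit(f, x, Us, p0=p0, bounds=bounds)
    resid = Us - f(x, C, x0, m)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((Us - Us.mean()) ** 2)
    return x0, m, r2

def slip_velocity(Ue, Ue0, Us0=0.0):
    """NS73: U_s = sqrt((U_e^2 - U_e0^2) + U_s0^2), for U_e >= U_e0."""
    return np.sqrt(np.asarray(Ue) ** 2 - Ue0 ** 2 + Us0 ** 2)
```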
Once U_s is known, solutions for both the layers can be computed separately using their respective governing equations, and the uniformly valid solution can be obtained by taking the union of the two.
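In matched-asymptotics terms, `taking the union' of the two solutions can be read as the standard additive composite (our interpretation; the common part of the inner and outer expansions is the slip velocity U_s):

u_uv(x,y) = u̅(x,y̅) + ũ(x,ỹ) - U_s(x)

so that u_uv → ũ near the wall (where u̅ → U_s) and u_uv → u̅ at the outer edge (where ũ → U_s).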
§ NUMERICAL METHOD
The predictions for the relaminarizing flows have been obtained using both turbulent boundary layer (TBL) models and quasi-laminar theory (QLT). Two separate codes, written in MATLAB, solve the respective 2D equations using an implicit finite-difference method, largely following the schemes described in <cit.>.
§.§ Turbulent boundary layer
In this code, the full turbulent 2D incompressible boundary layer equations, as given below, are solved.
u∂ u/∂ x + v∂ u/∂ y = U_e dU_e/d x + 1/ρ∂/∂ y( μ∂ u/∂ y - ρu'v')
The flow is assumed to be fully turbulent from the first reported location in the experiments (which is the case for the experiments mentioned). The initial skin friction is obtained using the Blasius turbulent skin-friction law, and the initial boundary layer velocity profile is formed using the Coles velocity profile combining the law of the wall and the law of the wake. A total of 5000 points were used across the boundary layer without any grid stretching. A no-slip boundary condition was used at the wall without any wall-function model.
§.§.§ Turbulence models
The Reynolds shear stress term in Eqn. <ref> is obtained with a mixing length model (MLM) as well as an algebraic eddy viscosity model (EVM). For MLM, a length scale (l_m) based on Prandtl's mixing length idea is introduced to calculate the stress term
-ρu'v' = ρ l_m^2 ( ∂ u/∂ y)^2
whereas in the case of EVM, the eddy viscosity (μ_T) is introduced to simplify the equations to the form of the laminar flow equations, assuming that the shear-stress is given by:
-ρu'v' = μ_T ( ∂ u/∂ y)
There are numerous models available in the literature to compute l_m and μ_T. Here we follow the composite models as given in <cit.>. The turbulent boundary layer, in terms of l_m or μ_T, can be divided into two layers - wall and outer. The model parameters used in our calculations for these two layers are listed in Table <ref>.
Here κ is the von Kármán constant, taken as 0.41 for the present calculations. The wall layer MLM is the well-known Prandtl-Van Driest mixing length model, with damping length constant A^+ = 26. Cebeci and Smith <cit.> introduced modifications in this model to account for the pressure gradient in the flow, taking the constant as A^+ = 26/N, where N = (1+11.8Δ_p)^1/2.
In EVM, the outer and wall layer models are due to <cit.> and <cit.> respectively. Here y_a^+ is another dimensionless length scale, of the order of the laminar boundary layer thickness, and is taken as 9.7 (after <cit.>).
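For illustration, the wall-layer mixing length with Van Driest damping and the Cebeci-Smith pressure-gradient correction can be written as below (a Python sketch; our codes are in MATLAB). The definition of Δ_p used and the outer-layer cap of 0.09δ are assumptions of the sketch, the latter standing in for the outer-layer entry of Table <ref>.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant, as in the present calculations

def mixing_length(y, u_tau, nu, delta, dpdx=0.0, rho=1.0, pg_mod=False):
    """Prandtl--Van Driest wall-layer mixing length, optionally with
    the Cebeci--Smith correction A+ = 26/N, N = (1 + 11.8*Dp)^0.5."""
    y_plus = y * u_tau / nu
    A_plus = 26.0
    if pg_mod:
        Dp = nu * dpdx / (rho * u_tau ** 3)   # assumed definition of Delta_p
        A_plus = 26.0 / np.sqrt(1.0 + 11.8 * Dp)
    lm_wall = KAPPA * y * (1.0 - np.exp(-y_plus / A_plus))
    lm_outer = 0.09 * delta                   # assumed outer-layer cap
    return np.minimum(lm_wall, lm_outer)

def reynolds_stress_mlm(lm, dudy, rho=1.0):
    """-rho*u'v' = rho * l_m^2 * (du/dy)^2, as in Eqn. <ref>."""
    return rho * lm ** 2 * dudy ** 2
```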
Figure <ref> shows the predictions of skin friction for BT-flow with the different turbulence models described above. In the fully turbulent region where acceleration is not very high (x < x_0), both MLM and MLM-PG (MLM with the pressure-gradient modifications suggested in <cit.>) show a good match with the experimental data, but all of them fail conspicuously in the relaminarizing region. MLM-PG does only slightly better than MLM in this region.
EVM consistently overpredicts the c_f value, even in the region x<x_0. This may be because the Clauser model <cit.> was developed mainly for equilibrium pressure-gradient turbulent flows, which is not the case for the BT-flow. However, as <cit.> concludes, “there is no other known eddy viscosity model that is more generally applicable".
We have also solved the turbulent boundary layer using several momentum integral methods, including the methods proposed by Spence <cit.> (used by <cit.>), Head <cit.> and Moses <cit.>, but the results are similar to or worse than those obtained with MLM-PG. It is also known that even more sophisticated transport-equation based models such as SA, SST, k-ϵ etc. fail to predict boundary layer parameters sufficiently accurately in the relaminarizing region <cit.>.
For the comparative study between fully turbulent and quasi-laminar calculations in the following section, the results with the MLM-PG turbulence model are compared with QLT for the relaminarizing region.
§.§ Quasi-laminar layer
In the QLT code, the equations for the inner and outer layers are solved separately as in NS73.
§.§.§ Inner layer
The equation for the inner layer, Eqn. <ref>, is solved directly using an implicit finite difference method, as against the Falkner-Skan or Thwaites method used in NS73. The initial profile for this laminar boundary layer is assumed to be the Pohlhausen cubic velocity profile with the pressure gradient, and the initial skin friction is taken from the Blasius skin-friction law. To solve the equation, the numerical schemes given in <cit.> are used with 251 points along the boundary layer. The wall shear stress (τ_w) obtained from this inner layer solution is normalized with 0.5ρ U_e^2 to obtain the skin friction for the quasi-laminar region.
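As an illustration of the class of scheme used here, one implicit marching step for Eqn. <ref> may be sketched as below (Python rather than the MATLAB of our codes; the coefficients are lagged at the previous station, the edge value is held at U_s, and the continuity update for ṽ and the Pohlhausen start are omitted):

```python
import numpy as np

def march_inner_layer(u, v, dx, dy, nu, Us_dUsdx):
    """One implicit streamwise step of the inner-layer equation,
    linearized by evaluating u, v at the previous station; the
    tridiagonal system is solved with the Thomas algorithm."""
    n = len(u)
    a, b, c, d = (np.zeros(n) for _ in range(4))
    b[0], d[0] = 1.0, 0.0            # no-slip: u = 0 at the wall
    b[-1], d[-1] = 1.0, u[-1]        # edge value held at U_s
    for i in range(1, n - 1):
        a[i] = -v[i] / (2 * dy) - nu / dy ** 2
        b[i] = u[i] / dx + 2 * nu / dy ** 2
        c[i] = v[i] / (2 * dy) - nu / dy ** 2
        d[i] = u[i] ** 2 / dx + Us_dUsdx
    for i in range(1, n):            # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    unew = np.zeros(n)
    unew[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        unew[i] = (d[i] - c[i] * unew[i + 1]) / b[i]
    return unew
```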
§.§.§ Outer layer
To solve the outer layer equations, we follow the simple integral method suggested by NS73, where the momentum and energy integral equations are used to obtain a first-order differential equation in the exponent of the power-law velocity profile. This is acceptable as the overall solution of QLT is not very sensitive to the origin of the outer layer. In the present case, the initial power-law velocity profile for the outer layer is obtained using a fit of the turbulent boundary layer profile (as calculated from the TBL code) at the corresponding location. The differential equation is then solved using the ODE45 solver available in MATLAB 8.3.
Once the profiles for both the layers are known, the relevant boundary layer parameters of interest such as δ, δ^⋆, θ, H etc. can be obtained from the uniformly valid profile. Figure <ref> shows the solutions for inner and outer layers along with the uniformly valid solution at a location x-x_0 = 0.2 in the relaminarizing region of BT-flow.
§ RESULTS AND DISCUSSION
In this section, we compare the results for BT-flow as obtained using QLT and TBL. Before proceeding further, it is important to verify that the boundary layer (and hence quasi-laminar) assumptions are valid in the high FPG region, as the streamwise gradients are very high. Figure <ref> compares the streamwise gradient with the boundary layer shear for the full flow as well as the outer flow. The substantial longitudinal strain rate in the flow is obvious from the fact that the non-dimensional velocity gradient dU_e / d x goes as high as 20, whereas the maximum non-dimensional strain rate in the outer flow, which is d(U_e-U_s) / d x, is only 0.65. The shear ∂ U / ∂ y in the outer layer (of order (U_e - U_s) / δ), however, continues to be higher than this streamwise derivative d (U_e - U_s) / d x in the laminarized zone, by a factor between 4 and 20. This value can be considered sufficiently high to get approximate answers from boundary layer theory.
Figure <ref>(a) shows the solution obtained for QLT as well as TBL for this flow. The top panel shows the distribution of external velocity as well as pressure gradient parameter K for both experiment and computation. A cubic smoothing spline was used to fit U_e obtained from experiment and a slight deviation in K is expected. The deviation is unavoidable due to the necessity of using a smooth and differentiable velocity field. It should be mentioned that experimental data used here are hand-extracted from the figures given in the publication as we did not have access to digitized data.
In the second panel from the top, the experimental values of c_f are compared with the predictions of TBL and QLT. For the region (x<x_0), where the flow is fully turbulent and the pressure gradient is very mild, the predictions with the TBL are very good; however, in the strong pressure gradient region (x>x_0), c_f is very poorly predicted. On the other hand, the predictions with the QLT are unacceptable in the fully turbulent region, but they are dramatically better than TBL in the quasi-laminar region. The same trend is observed for the shape factor as well as the boundary layer thickness (bottom two panels), where predictions with QLT are much closer to experiment than TBL for the quasi-laminar region. It is expected that the c_f predictions by QLT can be further improved by using a better initial profile. This work is currently in progress.
There is a small region in the boundary layer (-0.05 < x - x_0 < 1.05) where none of these predictions, whether by TBL or QLT, seems to be valid. This region was identified as transitional in <cit.> and has been called an `island of ignorance' in <cit.>. A careful interpolation between the predictions of TBL and QLT using intermittency may improve the predictions in this region. This work is currently in progress.
The superiority of QLT over TBL in the quasi-laminar region is supported by the other experiments mentioned in Table <ref>. For the sake of completeness, the predictions for another high Re flow WF4 are presented in Fig. <ref>(b). The observations made regarding TBL and QLT above for the BT-flow can be repeated for this flow as well.
§ CONCLUSIONS
A simple but robust quasi-laminar theory (QLT) has been proposed by <cit.> to explain the later stages of relaminarization. QLT is based on a two-layer model: a sheared outer stress-free inviscid layer under the effect of acceleration, and an inner viscous sub-boundary layer that is stabilized by the FPG and develops subsequently to satisfy the no-slip boundary condition. Earlier tests of this theory were mostly against low Re experiments <cit.>.
In this paper, we assess the two-layer model based on QLT for recent experiments which have a relatively long turbulent boundary layer history (high initial Re_θ0) and have been conducted with improved instrumentation and better control. For reasons stated in section <ref>, <cit.> is considered for a detailed assessment of QLT. The basic principles behind this theory such as nearly frozen Reynolds stress and constant mass flux in the outer layer are first assessed.
In the later stages of relaminarization, QLT matches the experimental results substantially better than turbulent boundary layer codes do. The main limitation of the QLT is the choice of an effective virtual origin for the inner layer, which is due to the lack of a precise definition of the onset of relaminarization.
A new single integrated model, combining a turbulent boundary layer code with QLT but each in their respective regions of validity, has been proposed and is currently being evaluated.
Hardware Translation Coherence for Virtualized Systems

Zi Yan, Guilherme Cox, Jan Vesely, Abhishek Bhattacharjee
To improve system performance, modern operating systems (OSes) often
undertake activities that require modification of virtual-to-physical
page translation mappings. For example, the OS may migrate data
between physical frames to defragment memory and enable
superpages. The OS may migrate pages of data between heterogeneous
memory devices. We refer to all such activities as page
remappings. Unfortunately, page remappings are expensive. We show that
translation coherence is a major culprit and that systems employing
virtualization are especially badly affected by their overheads. In
response, we propose HATRIC, a readily
implementable hardware mechanism to piggyback translation coherence
atop existing cache coherence protocols. We perform detailed studies
using KVM-based virtualization, showing that HATRIC achieves up
to 30% performance and 10% energy benefits, for per-CPU area
overheads of 2%. We also quantify HATRIC's benefits on systems
running Xen and find up to 33% performance improvements.
§ INTRODUCTION
As the computing industry designs systems for big-memory workloads,
systems architects have begun embracing heterogeneous memory
architectures. For example, Intel is integrating high-bandwidth
on-package memory in its Knight's Landing chip, and 3D Xpoint memory
in several products <cit.>. AMD and Hynix are releasing
High-Bandwidth Memory or HBM <cit.>. Similarly,
Micron's Hybrid Memory Cube <cit.> and
byte-addressable persistent memories <cit.> are quickly gaining
traction. Vendors are combining these high-performance memories with
traditional high-capacity and low-cost DRAM, prompting research on
heterogeneous memory architectures <cit.>.
Fundamentally, heterogeneous memories are dependent on the concept of
page remapping to migrate data between diverse memory devices for good
performance. Page remapping is not a new concept – OSes have long
used it to migrate physical pages to defragment memory and create
superpages <cit.>, to migrate pages among NUMA sockets
<cit.>, and to deduplicate memory by enabling
copy-on-write optimizations <cit.>. However, while page remappings were used sparingly
in those scenarios, they are likely to be used more frequently for
heterogeneous memories. This is because page remapping is essential to
adapt data placement to the memory access patterns of workloads, and
to harness the performance and energy potential of memories with
different latency, bandwidth, and capacity
characteristics. Consequently, developers at IBM and Redhat are
already deploying Linux patchsets to enable page remapping amongst
coherent heterogeneous memory devices <cit.>.
Unfortunately, these efforts face an obstacle – the high performance
and energy penalty of page remapping. There are two components to this
cost. The first is the overhead of copying data. The second is the
cost of translation coherence. When privileged software remaps a
physical page, it has to update the corresponding virtual-to-physical
page translation in the page table. Translation coherence is the means
by which caches dedicated to translations (e.g., Translation Lookaside
Buffers or TLBs <cit.>, etc.) are kept up to date with the latest page table
mappings.
Past work has shown that translation coherence overheads can easily
consume 10-30% of system performance <cit.>. These overheads are even more
alarming on virtualized systems, which are used in the server and
cloud settings expected to be early adopters of heterogeneous
memories. We are the first to show that as much as 40% of their
runtime can be wasted on translation coherence. The key culprit is
virtualization's use of multiple page tables. Architectures with
hardware assists for virtualization like Intel VT-x and AMD-V use a
guest page table to map guest virtual pages to guest physical pages,
and a nested page table to map guest physical pages to system physical
pages. Changes to the guest page table and in particular, the nested
page table, prompt expensive translation coherence activity.
The problem of coherence is not restricted to translation mappings. In
fact, the systems community has studied problems posed by cache
coherence for several decades <cit.> and has developed
efficient hardware cache coherence protocols <cit.>. What
makes translation coherence challenging is that unlike cache
coherence, it relies on cumbersome software support. While this may
have sufficed in the past when page remappings were used relatively
infrequently, it is problematic for heterogeneous memories where
page remapping is more frequent. Consequently, we believe that there
is a need to architect better support for translation coherence. In
order to understand what this support should constitute, we list three
attributes desirable for translation coherence.
(1) Precise
invalidation: Processors use several hardware translation
structures – TLBs, MMU caches <cit.>, and nested TLBs (nTLBs) <cit.>
– to cache portions of the page table(s). Ideally, translation
coherence should invalidate the translation structure entries
corresponding to remapped pages, rather than flushing all the contents
of these structures.
(2) Precise target
identification: The CPU running privileged code that remaps a page
is known as the initiator. An ideal translation coherence
protocol would allow the initiator to identify and alert only CPUs
whose TLBs, MMU caches, and nTLBs actually cache the remapped page's
translation. By restricting coherence messages to only these targets, other CPUs remain unperturbed by coherence activity.
(3) Lightweight
target-side handling: Target CPUs should invalidate their
translation structures and relay acknowledgment responses to the
initiator quickly, without excessively interfering with workloads
executing on the target CPUs.
Unfortunately, translation coherence meets none of these
goals today. Consider, for example, changes to the nested page
table. Further, consider goal (1); when hypervisors
change a nested page table entry, they track guest physical and system
physical page numbers, but not the guest virtual page. Unfortunately,
as we describe in Sec. <ref>, translation structures on
architectures like x86-64 permit invalidation of individual entries
only if their guest virtual page is known. Consequently, hypervisors
completely flush all translation structures, even when only a single
page is remapped. This degrades performance since virtualized systems
need expensive two-dimensional page table walks to re-populate the
flushed structures <cit.>.
Current translation coherence protocols also fail to achieve
goal (2). Hypervisors track the subset of CPUs that a
guest VM runs on but cannot (easily) identify the CPUs used by a
process within the VM. Therefore, when the hypervisor remaps a page,
it conservatively initiates coherence activities on all CPUs that may
potentially have executed any process in the guest VM. While
this does spare CPUs that never execute the VM, it needlessly flushes
translation structures on CPUs that execute the VM but not the
process.
Finally, goal (3) is also not met. Initiators currently
use expensive inter-processor interrupts (on x86) or tlbi
instructions (on ARM, Power) to prompt VM exits on all target
CPUs. Translation structures are flushed on a VM re-entry. VM exits
are particularly detrimental to performance, interrupting the
execution of target-side applications <cit.>.
We believe that the solution to these problems is to implement
translation coherence in hardware. This view is inspired by prior work
on UNITD <cit.>, which showcased the potential
of hardware translation coherence. Unfortunately, UNITD is
energy inefficient and, like other recent proposals <cit.>, cannot support virtualized systems. In response,
we propose HATRIC, a hardware mechanism to tackle these problems and
meet goals (1)-(3). HATRIC
extends translation structure entries with coherence tags (or co-tags)
storing the system physical address where the translation entry
resides (not to be confused with the physical address stored in the
page table). This solves goal (1), since translation
structures can now be identified by the hypervisor without
knowledge of the guest virtual address. HATRIC exposes co-tags
to the underlying cache coherence protocol, achieving
goals (2) and (3).
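To make this concrete, a toy software model of the co-tag idea is sketched below; this illustrates the concept only, not HATRIC's actual microarchitecture, and all class and method names are ours.

```python
class Entry:
    """Toy translation-structure entry extended with a co-tag: the
    system physical address of the (nested) PTE it was filled from."""
    def __init__(self, gvp, spp, cotag):
        self.gvp, self.spp, self.cotag = gvp, spp, cotag
        self.valid = True

class CoTaggedStructure:
    """Stands in for a TLB, an MMU cache, or an nTLB."""
    def __init__(self):
        self.entries = []

    def fill(self, gvp, spp, pte_spa):
        self.entries.append(Entry(gvp, spp, pte_spa))

    def snoop(self, written_spa):
        """Invoked when cache coherence observes a write to a physical
        address holding a PTE: only matching entries are invalidated;
        no full flush, no IPI, no VM exit."""
        for e in self.entries:
            if e.valid and e.cotag == written_spa:
                e.valid = False
```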
We evaluate HATRIC for a forward-looking virtualized system with
a high-bandwidth die-stacked memory and a slower off-chip memory. HATRIC drastically reduces translation coherence overheads,
improving performance by 30%, saving as much as 10% of energy, while
adding less than 2% of CPU area. Overall, our contributions are:
* We perform a characterization study to quantify the overheads of
translation coherence on hypervisor-managed die-stacked
memory. While we focus on KVM in this paper, we have also studied
Xen and quantified its overheads.
* We design HATRIC to subsume translation coherence in
hardware by piggybacking on, without fundamentally changing,
existing cache coherence protocols. HATRIC goes beyond UNITD <cit.> by (a)
accommodating translation coherence for both bare-metal and
virtualized scenarios; (b) extending coherence to
not just TLBs, but also MMU caches and nTLBs; and (c)
achieving better energy efficiency.
* We perform several studies that illustrate the benefits of HATRIC's design decisions. Further, we discuss HATRIC's
advantages over purely software approaches to mitigate translation
coherence issues.
Overall, HATRIC is efficient and versatile. While
we mostly focus on the particularly arduous challenges of translation
coherence due to nested page table changes, HATRIC is applicable
to guest page tables and non-virtualized systems.
§ BACKGROUND
We begin by presenting an overview of the key hardware and software
structures involved in page remapping. Our discussion focuses on
x86-64 systems. Other architectures are broadly similar but differ in
some low-level details.
§.§ HW and SW Support for Virtualization
Virtualized systems accomplish virtual-to-physical address translation
in one of two ways. Traditionally, hypervisors have used shadow page
tables to map guest virtual pages (GVPs) to system physical pages
(SPPs), keeping them synchronized with guest OS page tables
<cit.>. However, the overheads of page table
synchronization can often be high <cit.>. As a result,
most modern systems now use two-dimensional page tables
instead. Figure <ref> illustrates
two-dimensional page table walks (see past work for more details
<cit.>). Guest page tables
map GVPs to guest physical pages (GPPs). Nested page tables map GPPs
to SPPs. x86-64 systems use 4-level forward mapped radix trees for
both page tables <cit.>. We refer to these as levels 4 (the root level) to
1 (the leaf level) as per recent work <cit.>. When a process running in a
guest VM makes a memory reference, its GVP must be translated to an
SPP. Consequently, the guest CR3 register is combined with the
requested GVP (not shown in the picture) to deduce the GPP of level 4
of the guest page table (shown as GPP Req.). However, to look up
the guest page table (gL4-gL1), the GPP must be converted into
the SPP where the page table actually resides. Therefore, we first use
the GPP to look up the nested page tables (nL4-nL1), to find SPP
gL4. Looking up gL4 then yields the GPP of the next guest
page table level (gL3). The rest of the page table walk proceeds
similarly, requiring 24 memory references in total. This presents a
performance problem as the number of references is significantly more
than the 4 references needed for non-virtualized systems. Further, the
references are entirely sequential. CPUs use three types of
translation structures to accelerate this walk:
(a) Private per-CPU TLBs cache
the requested GVP to SPP mappings, short-circuiting the entire
walk. TLB misses trigger hardware page table walkers to look up the
page table.
(b) Private per-CPU MMU caches
store intermediate page table information to accelerate parts of the
page table walk <cit.>. There are two flavors of MMU cache. The first is a
page walk cache and is implemented in AMD chips
<cit.>. Figure
<ref> shows the information cached in page
walk caches. Page walk caches are looked up with GPPs and provide SPPs
where page tables are stored. The second is called a paging
structure cache and is implemented by Intel <cit.>. Paging structure caches are looked up with GVPs
and provide the SPPs of page table locations. Paging structure caches
generally perform better, so we focus mostly on them
<cit.>.
(c) Private per-CPU nTLBs
short-circuit nested page table lookups by caching GPP to SPP
translations <cit.>. Figure
<ref> shows the information cached by nTLBs.
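For concreteness, the two-dimensional walk of Figure <ref> can be sketched as nested loops over a toy memory model, where mem maps an SPP to its 512 page-table entries; all names here are illustrative:

```python
def index(vpn, level):
    """9-bit radix index at a given level of an x86-64-style table."""
    return (vpn >> (9 * (level - 1))) & 0x1FF

def nested_walk(gpp, ncr3_spp, mem):
    """GPP -> SPP via the nested table (nL4..nL1): 4 references."""
    spp = ncr3_spp
    for level in (4, 3, 2, 1):
        spp = mem[spp][index(gpp, level)]    # one memory reference each
    return spp

def two_dimensional_walk(gvp, gcr3_gpp, ncr3_spp, mem):
    """GVP -> SPP as in Figure <ref>. Each guest level gL4..gL1 costs
    a 4-reference nested walk plus 1 guest reference; a final nested
    walk translates the requested GPP: 4*5 + 4 = 24 references."""
    gpp = gcr3_gpp                           # GPP of the guest root
    for level in (4, 3, 2, 1):
        spp = nested_walk(gpp, ncr3_spp, mem)   # nL4..nL1
        gpp = mem[spp][index(gvp, level)]       # gL4..gL1
    return nested_walk(gpp, ncr3_spp, mem)      # SPP of the data page
```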
Concomitantly, CPUs cache page table information in
private L1 (L2, etc.) caches and the shared last-level cache
(LLC). The presence of separate private translation caches poses
coherence problems. While standard cache coherence protocols ensure
that page table entries in private L1 caches are coherent, there are
no such guarantees for TLBs, MMU caches, and nTLBs. Instead,
privileged software keeps translation structures coherent with data
caches and one another.
§.§ Page Remapping in Virtualized Systems
We now detail the ways in which a virtualized system can trigger
coherence activity in translation structures. All page remappings can
be classified by the data they move, and the software agent initiating
the move.
Remapped data: Systems may remap a page
storing (i) the guest page table; (ii) the nested page table; or (iii)
non-page table data. Most remappings are from (iii) as they constitute
most memory pages. We have found that less than 1% of page remappings
correspond to (i)-(ii). We therefore highlight HATRIC's
operation using (iii); nevertheless, HATRIC also implicitly
supports the first two cases.
Remapping initiator: Pages can be remapped
by (i) a guest OS; or (ii) the hypervisor. When a guest OS remaps a
page, the guest page table changes. Past work achieves low-overhead
guest page table coherence with relatively low-complexity software
extensions <cit.>. Unfortunately, there are no such
workarounds to mitigate the translation coherence overheads of
hypervisor-initiated nested page table remappings. For these reasons,
cross-VM memory deduplication <cit.> and
page migration between NUMA memories on multi-socket systems
<cit.> are known to be
expensive. In the past, such overheads may have been mitigated by
using these optimizations sparingly. However, nested page table
remappings become frequent with heterogeneous memories, making
hypervisor-initiated translation coherence problematic.
§ SHORTCOMINGS OF CURRENT TRANSLATION COHERENCE MECHANISMS
Our goal is to ensure that translation coherence does not impede the
adoption of heterogeneous memories. We study forward-looking
die-stacked DRAM as an example of an important heterogeneous memory
system. Die-stacked memory uses DRAM stacks that are tightly
integrated with the processor die using high-bandwidth links like
through-silicon vias, or silicon interposers <cit.>. Die-stacked memory is expected to be useful for
multi-tenant and rack-scale computing where memory bandwidth is often
a performance bottleneck, and will require a combination of
application, guest OS, and hypervisor management <cit.>. We take the first steps
towards this, by showing the problems posed by translation coherence
on hypervisor management.
§.§ Translation Coherence Overheads
We quantify translation coherence overheads on a die-stacked system
that is virtualized with KVM. We modify KVM to page between the
die-stacked and off-chip DRAM. Since ours is the first work to
consider hypervisor management of die-stacked memory, we implement a
variety of paging policies. Rather than focusing on developing a
single “best” policy, our objective is to show that current
translation coherence overheads are so high that they curtail the
effectiveness of practically any paging policy.
Our paging mechanisms extend prior work that explores basic
software-guided die-stacked DRAM paging <cit.>. When off-chip
DRAM data is accessed, there is a page fault. KVM then migrates the
desired page into an available die-stacked DRAM physical page
frame. The GVP and GPP remain unchanged, but KVM changes the SPP and
hence, its nested page table entry. This triggers translation
coherence.
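In code form, the hypervisor-side path can be sketched as below; the function and field names are illustrative stand-ins of ours, not actual KVM symbols.

```python
def flush_all_translation_structures(vm):
    """Stand-in for today's software coherence sequence: flush-request
    bits, IPIs, VM exits, and full flushes on every vCPU."""
    for vcpu in vm.vcpus:
        vcpu.tlb.clear(); vcpu.mmu_cache.clear(); vcpu.ntlb.clear()

def migrate_to_stacked(vm, gpp, offchip, stacked, free_frames):
    """Migrate one faulting page into die-stacked DRAM."""
    old_spp = vm.nested_pt[gpp]
    new_spp = free_frames.pop()
    stacked[new_spp] = offchip.pop(old_spp)  # copy the page's data
    vm.nested_pt[gpp] = new_spp              # GVP and GPP unchanged
    flush_all_translation_structures(vm)     # the costly step that
                                             # HATRIC moves to hardware
```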
We run our modified KVM on the detailed cycle-accurate simulator
described in Sec. <ref>. Like prior work <cit.>,
we model a system with 2GB of die-stacked DRAM with 4× the
memory bandwidth of a slower off-chip 8GB DRAM. This is a total of
10GB of addressable DRAM. Further, we model 16 CPUs based on Intel's
Haswell architecture.
Figure <ref> quantifies the performance of
hypervisor-managed die-stacked DRAM, and translation coherence's
impact on it. We normalize all performance numbers to the runtime of a
system with only off-chip DRAM and no high-bandwidth die-stacked DRAM
(no-hbm). Further, we show an unachievable best-case scenario
where all data fits in an infinite-sized die-stacked memory (inf-hbm). After profiling several paging strategies (evaluated in
detail in Sec. <ref>), we plot the best-performing ones with
the curr-best bars. These results assume cumbersome software
translation coherence mechanisms. In contrast, the achievable
bars represent the potential performance of the best paging policies
with zero-overhead (and hence ideal) translation coherence.
Figure <ref> shows that unachievable infinite
die-stacked DRAM can improve performance by 25-75% (inf-hbm
versus no-hbm). Unfortunately, the current “best” paging
policies we achieve in KVM (curr-best) fall far short of the
ideal inf-hbm case. Translation coherence overheads are a big
culprit – when these overheads are eliminated in achievable,
system performance comes within 3-10% of the case with infinite
die-stacked DRAM capacity (inf-hbm). In fact, Figure
<ref> shows that translation coherence overheads can
be so high that they can prompt die-stacked DRAM to counterintuitively
worsen performance. For example, data caching and tunkrank actually suffer 23% and 10% performance degradations in
curr-best, respectively, despite using high-bandwidth
die-stacked memory. Though omitted to save space, we have also
profiled the Xen hypervisor and found similar trends (presented in
Sec. <ref>). Overall, translation coherence overheads threaten
the use of die-stacked, and indeed any heterogeneous, memory.
§.§ Page Remapping Anatomy
We now shed light on the sources of overheads from translation
coherence. While we use page migration between off-chip and
die-stacked DRAM as our driving example, the same mechanisms are used
today to migrate pages between NUMA memories, or to defragment memory,
etc.
When a VM is configured, KVM assigns it virtual CPU threads or
vCPUs. Figure <ref> assumes 3 vCPUs executing on physical
CPUs. Suppose vCPU 0 frequently demands data in GVP 3, which maps to
GPP 8 and SPP 5, and that SPP 5 resides in off-chip DRAM. The
hypervisor may want to migrate SPP 5 to die-stacked memory (e.g., SPP
512) to improve performance. On a VM exit (assumed to have occurred
prior in time to Figure <ref>), the hypervisor modifies the
nested page table to update the SPP, triggering translation
coherence. There are three problems with this:
All vCPUs are identified as targets: Figure
<ref> shows that the hypervisor initiates translation
coherence by setting the TLB flush request bit in every vCPU's kvm_vcpu structure. kvm_vcpu stores vCPU state; when a vCPU
is scheduled on a physical CPU, it provides register content,
instruction pointers, etc. By setting these bits, the hypervisor
signals that TLB, MMU cache, and nTLB entries need to be flushed.
Ideally, we would like the hypervisor to identify only the CPUs that
actually cache the stale translation as targets. The hypervisor does
spare physical CPUs that never executed the VM. However, it flushes
all physical CPUs that ran any of the vCPUs of the VM, regardless of
whether they cache the modified page table entries.
One might consider (similar to Linux) tracking the subset of physical CPUs on which the affected process has actually run, and restricting coherence to them.
Naturally, this may be
conservative since the translations may have been evicted from the
TLBs, etc. Unfortunately, however, hypervisors are unable to achieve
even this type of coarse-grained target identification. In a nutshell,
hypervisors (in full- rather than para-virtualized scenarios
<cit.>) cannot easily identify the subset of vCPUs
and physical CPUs that individual processes have executed on. Instead,
they conservatively identify all vCPUs as targets.
All vCPUs suffer VM exits: In the next
step, the hypervisor launches inter-processor interrupts (IPIs) to all
the vCPUs. IPIs use the processor's advanced programmable interrupt
controllers (APICs). APIC implementations vary; depending on the APIC
technology, KVM converts broadcast IPIs into a loop of individual
IPIs, or a loop across processor clusters. We have profiled the
overheads of IPIs using microbenchmarks on Haswell systems, and like
past work <cit.>, find that they are
expensive, consuming thousands of clock cycles. If the receiving CPUs
are running vCPUs, they suffer VM exits, compromising
goal (3) from Sec. <ref>. Targets then
acknowledge the initiator, which is paused waiting for all vCPUs to
respond.
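Putting the three steps together, today's software flow can be summarized as follows; the request bit mirrors KVM's per-vCPU flush requests (e.g., KVM_REQ_TLB_FLUSH), but the objects and the synchronous IPI model are simplified stand-ins of ours.

```python
def software_translation_coherence(vm, apic):
    """Every vCPU of the VM is a target, regardless of what it caches."""
    targets = list(vm.vcpus)                 # no precise identification
    for vcpu in targets:
        vcpu.requests.add('TLB_FLUSH')       # 1. set flush-request bit
    acks = [apic.send_ipi(v.physical_cpu)    # 2. IPI -> VM exit on the
            for v in targets]                #    target; ack returned
    assert all(acks)                         # 3. initiator stalls here
    # 4. each target flushes its TLB, MMU cache and nTLB on VM re-entry
```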
All translation structures are flushed: The
next step is to invalidate stale mappings in translation structure
entries. Current architectures provide ISA and microarchitectural
support for this via, for example, invlpg instructions in x86,
etc. There are two caveats however. First, these instructions need the
GVP of the modified nested page table mapping to identify the TLB
entries that need to be invalidated. This is largely because modern
TLBs maintain GVP bits in the tag. While this is a good design choice
for non-virtualized systems, it is problematic for virtualized systems
because hypervisors do not have easy access to GVPs. Instead, they
have GPPs and SPPs. Consequently, KVM, Xen, etc., flush all TLB
contents when they modify a nested page table entry, rather than
selectively invalidating TLB entries. Second, there are currently no
instructions to selectively invalidate MMU caches or nTLBs, even
though they are tagged with GPPs and SPPs. This is because the marginal
benefits of adding ISA support for selective MMU cache and nTLB
invalidation are limited when the more performance-critical TLBs are
flushed.
One might consider solving this problem by re-designing the
guest-hypervisor interface. One possibility might be to create a
communication channel for the guest OS to pass information about GVP
changes to the hypervisor and guarantee synchronization of this
information between the guest and hypervisor. Unfortunately,
synchronizing on every guest page fault, address space switch, etc.,
is expensive, and re-introduces problems similar to those with shadow
paging <cit.>. In an alternative approach, we may
create a communication channel for the hypervisor to query GVP
information from the guest. However, the guest OS must not change GVPs
while the hypervisor uses it, introducing design complexity and
constraining guest OS operation. Overall, while software solutions
might be possible, they require complex changes to existing
guest-hypervisor interfaces.
§.§ Hardware Versus Software Solutions
It is natural to ask whether translation coherence problems can be
solved with smarter software. We have studied this possibility and
have concluded that hardware solutions are superior. Fundamentally,
software solutions only partially solve the problem of flushing all
translation structures, and cannot solve the problem of identifying
all vCPUs as translation coherence targets and prompting VM exits.
Consider the problem of flushing all translation structures. One might
consider tackling this problem by modifying the guest-hypervisor
interface to enable the hypervisor to use existing ISA support (e.g.,
invlpg instructions) to selectively invalidate TLB entries. But
this only fixes TLB invalidation – no architectures today maintain
selective invalidation instructions for MMU caches and nTLBs, so these
would still have to be flushed.
Even if this problem could be solved, making target-side translation
coherence handling lightweight is challenging. Fundamentally, handling
translation coherence in software means that a context switch of the
CPUs is unavoidable. One alternative to expensive VM exits might be to
switch to lighter-weight interrupts to query the guest OS for GVP-SPP
mappings. Unfortunately, even these interrupts remain
expensive. Specifically, we profiled interrupt costs using
microbenchmarks on Intel's Haswell machines and found that they
require 640 cycles on average, which is just half of the average of
1300 cycles required for a VM exit. HATRIC, however, entirely
eliminates these costs by never disrupting the operation of the
guest OS or requiring context switching.
Then, consider the problem of imprecise target identification. One
potential workaround might be to use the hypervisor's reverse mappings
to identify which virtual machines map a modified system physical
page. Unfortunately, the reverse mapping does not maintain
information about which CPUs have actually cached the translation
mappings. In the absence of this information, the hypervisor has to
continue conservatively identifying all VM CPUs as targets.
§.§ Summary of Observations
In summary, software translation coherence a has
high overheads that jeopardize the benefits of die-stacked DRAM;
b requires comprehensive solutions that attack
the problems of translation structure flushing, imprecise target
identification, and VM exits. Further, we show (in Sec. <ref>)
that c translation coherence worsens as VMs are
assigned more vCPUs; and d because a process'
page remapping triggers coherence activity on all vCPUs of the VM,
co-running processes within the VM (with disjoint address spaces)
needlessly suffer from translation coherence.
§ MOTIVATION: DIE-STACKED DRAM
Virtualized page remapping suffers poor performance, jeopardizing the
benefits of memory defragmentation (to create superpages), memory
deduplication (with copy-on-writes), etc. Even worse, page remapping
is fundamental to emerging memory architectures with die-stacked
memory (e.g., Intel's 3D XPoint memory <cit.>, Micron's
Hybrid Memory Cube <cit.>, and AMD/Hynix's High-Bandwidth
Memory <cit.>), and byte-addressable non-volatile memories
<cit.>. We study translation
coherence in the context of die-stacked DRAM as representative of the
issues that arise more generally for heterogeneous memories.
Die-stacked memory tightly integrates DRAM stacks with the processor
die, using high-bandwidth links like through-silicon vias, or silicon
interposers <cit.>. Experts generally
believe that die-stacked DRAM will be managed by a combination of
microarchitectural innovations, and OS policies (possibly with
application-level guidance) <cit.>.
We perform the first studies on hypervisor-managed die-stacked DRAM,
going beyond past work on software-managed stacked DRAM
<cit.>. Die-stacked memory is expected to be useful for
bandwidth constrained multi-tenant and rack-scale computing
<cit.> and will likely require a
combination of application, guest OS, and hypervisor management. We
take the first steps towards achieving such a management stack, by
showing the problems posed by translation coherence on hypervisor
management.
Die-stacked DRAM paging performance:
Figure <ref> quantifies the performance of die-stacked
DRAM paging. We use the real-system emulation framework described in
Sec. <ref> and model 2GB of die-stacked memory
and 8GB of off-chip memory, totaling 10GB of DRAM. This is
consistent with industry projections, where die-stacked memories are
expected to comprise 20% of total memory <cit.>. Like past
work <cit.>, we model a bandwidth differential of 4×
between die-stacked and off-chip DRAM. We use 16 Intel Xeon Haswell
cores (though Sec. <ref> studies the impact of core counts).
Figure <ref> compares runtime with only off-chip DRAM
(slow) and when the entire workload fits in the die-stacked
memory (fast) by artificially making the die-stacked memory as
big as needed. Die-stacked DRAM is potentially useful for these
workloads – if the die-stacked DRAM services all the memory
references (fast), runtime can be reduced to 30-40% of the
off-chip (slow) baseline. Our goal is to use finite-capacity
die-stacked DRAM to approach the fast results.
We modify KVM to support paging between die-stacked and off-chip
DRAM. Like prior work on software-managed die-stacked DRAM
<cit.>, we modify KVM to page fault on access to off-chip
DRAM pages. Faults migrate the desired page into an available
die-stacked DRAM physical page frame. The GVP and GPP remain
unchanged; however, the hypervisor changes the SPP and hence, the
nested page table. This triggers translation coherence.
To accommodate incoming pages from off-chip DRAM, the hypervisor must
employ a page replacement policy to choose what to evict from
die-stacked DRAM. We have studied several page replacement policies
(detailed in Sec. <ref>) but ultimately adopt
Linux's classic CLOCK algorithm <cit.> for pseudo-LRU
replacement, as this generally performs best. We have also studied
optimizations such as prefetching pages from off-chip DRAM, etc. We
detail these studies in Sec. <ref>, but for now,
showcase the problems with translation coherence for the
best-performing paging policy. Figure <ref> quantifies
the performance of this policy with the actual bars. The (ideal) bars further add zero-overhead (and hence unachievable)
translation coherence to actual.
Figure <ref> shows that translation coherence degrades
performance. For example, canneal, graph500, and facesim see 10-15% performance improvements if translation
coherence overheads are entirely eliminated, achieving close to the
performance of fast. Moreover, some workloads (e.g., data
caching and tunkrank) actually experience performance degradations with die-stacking because of translation coherence
overheads. For example, data caching gets 23% slower with actual, despite having 2GB of high-bandwidth stacked DRAM. But when
translation coherence overheads are removed (ideal), the
performance improves by 39%. In a nutshell, translation coherence
overheads are a performance bottleneck that threaten the very use of
die-stacked (and indeed any heterogeneous) DRAM.
Translation coherence component
overheads: Figure <ref> shows how we can
attack translation coherence overheads. We first plot, as a fraction
of the total benefits of zero-overhead translation coherence, the
benefits of selective invalidation of translation structures (prec-inv or 1 from
Sec. <ref>). We then add precise target identification
(prec-targ or 2 from
Sec. <ref>), and lightweight target-side handling (no-VM-exit or 3 from
Sec. <ref>). Translation coherence can be improved by
attacking all three problems. Workloads like canneal and
graph500 especially need precise translation structure invalidation
as they have pseudo-random memory access patterns. This means that
repopulating flushed structures with two-dimensional page table walks
(which suffer cache misses) is expensive. Workloads like tunkrank and facesim benefit from eliminating VM exits. This
is partly because VM exits worsen shared lock contention among
parallel threads, when vCPUs holding locks take particularly long to
resume.
Summary: Overall, Figures
<ref>-<ref> show that
a translation coherence overheads hamper state of
the art page replacement (e.g., Linux's CLOCK LRU); and
b require comprehensive solutions that attack the
problems of translation structure flushing, imprecise target
identification, and VM exits. Further, we show (in Sec. <ref>)
that c translation coherence worsens as VMs are
assigned more vCPUs and are scheduled on more physical CPUs; and
d page remappings from any individual process
hurts all processes in a VM.
§ HARDWARE DESIGN
We now detail HATRIC's design, focusing mostly on
hypervisor-initiated paging which modifies the nested page
table. HATRIC achieves all three goals set out in
Sec. <ref>. It does so by adding co-tags to translation
structures to achieve precise invalidation. It then exposes these
co-tags to the cache coherence protocol to precisely identify
coherence targets and to eliminate VM exits.
§.§ Co-Tags
We describe co-tags by discussing what they are, what they accomplish,
how they are designed, and who sets them.
What are co-tags? Consider the page tables
of Figure <ref> and suppose that the hypervisor
modifies the GPP 2-SPP 2 nested page table mapping, making the TLB entry
caching information about SPP 2 stale. Since the TLB caches GVP-SPP
mappings rather than GPP-SPP mappings, this means that we'd like to
selectively invalidate GVP 1-SPP 2 from the TLB, and although not
shown, corresponding MMU cache and nTLB entries. Co-tags allow us to
do this by acting as tag extensions that allow precise identification
of translations when the hypervisor does not know the GVP. Co-tags
store the system physical address of the nested page table entry (nL1 from the bottom-most row in Figure
<ref>). For example, GVP 1-SPP 2 uses the
nested page table entry at system physical address 0x100c, which
is stored in the co-tag.
What do co-tags accomplish? Co-tags not
only permit precise translation information identification but can
also be piggybacked on existing cache coherence protocols. When the
hypervisor modifies a nested page table translation, cache coherence
protocols detect the modification to the system physical address of
the page table entry. Ordinarily, all private caches respond so that
only one amongst them holds the up-to-date copy of the cache line
storing the nested page table entry. With co-tags, HATRIC
extends cache coherence as follows. Coherence messages, previously
restricted to just private caches, are now also relayed to translation
structures. Co-tags are used to identify which (if any) TLB, MMU
cache, and nTLB entries correspond to the modified nested page table
cache line. Overall, this means that co-tags: a
pick up on nested page table changes entirely in hardware, without the
need for IPIs, VM exits, or invlpg instructions;
b rely on, without fundamentally changing,
existing cache coherence protocols; c permit
selective invalidation of TLBs, MMU caches, and nTLBs rather than flushes.
How are co-tags implemented? Co-tags have
one important drawback. System physical addresses on 64-bit systems
require 8 bytes. If all 8 bytes are realized in the co-tag, each TLB
entry doubles in size. MMU cache and nTLB entries triple in
size. Since address translation can account for 13-15% of processor
energy <cit.>, these area and associated energy
overheads are unacceptable.
Therefore, we decrease the resolution of co-tags, using fewer
bits. This means that groups, rather than individual TLB entries may
be invalidated when one nested page table entry is changed. However,
judiciously-sized co-tags generally achieve a good balance between
invalidation precision, and area/energy overheads. Sec. <ref>
shows, using detailed RTL modeling, that 2-byte co-tags (a per-core
area overhead of 2%) strike a good balance. We specify the exact
subset of address bits that make up the co-tag in subsequent sections.
Who sets co-tags? For good performance,
co-tags must be set by hardware without an OS or hypervisor
interrupt. HATRIC uses the page table walker to do this. On TLB,
MMU cache, and nTLB misses, the page table walker performs a
two-dimensional page table walk. In so doing, it infers the system
physical address of the page table entries and stores it in the TLB,
MMU cache, and nTLB co-tags.
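Putting these pieces together, a co-tagged TLB entry might be laid out as follows (an illustrative sketch; the field names and widths are ours):

#include <stdint.h>

/* Filled by the page table walker on a miss: the conventional GVP tag
 * and SPP payload, plus a co-tag holding low-order bits of the system
 * physical address of the nested page table entry that was used. */
struct tlb_entry {
    uint64_t gvp_tag;   /* guest virtual page: the normal lookup tag */
    uint64_t spp;       /* system physical page: translation result  */
    uint16_t co_tag;    /* 2-byte co-tag (bit selection discussed below) */
    uint8_t  valid;     /* doubles as the S/I coherence state        */
};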
§.§ Integration with Cache Coherence
Modern cache coherence protocols can integrate not only readable and
writable private caches, but also read-only instruction caches (though
instruction caches do not have to be read-only). Since TLBs, MMU
caches, and nested TLBs are fundamentally read-only structures, HATRIC integrates them into the existing cache coherence protocol
in a manner similar to read-only instruction caches. Beyond this, HATRIC has minimal impact on the cache coherence protocol. We
describe HATRIC's operation on a directory-based MESI protocol,
with the coherence directories located at the shared LLC cache
banks. Without loss of generality, we use dual-grain coherence
directories from recent work <cit.>.
Translation structure coherence states:
Since translation structures are read-only, their entries require only
two coherence states: Shared (S), and Invalid (I). These two states
may be realized using per-entry valid bits. When a translation
is entered into the TLB, MMU cache, or nTLB, the valid bit is
set, representing the S state; the translation can be accessed by the
local CPU. The translation structure entry remains in this state until
it receives a coherence message. Co-tags are compared to incoming
messages; when an invalidation request matches the co-tag, the
translation entry is invalidated.
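The resulting target-side behavior can be sketched as follows (our pseudo-C; real hardware would perform the co-tag match as a parallel CAM lookup, and make_cotag is sketched under the coherence granularity discussion below):

#include <stdint.h>

struct tlb { int nentries; struct tlb_entry *e; };  /* entry layout above */
extern uint16_t make_cotag(uint64_t sys_addr);

/* On an incoming invalidation for the cache line at 'line_addr', drop
 * every translation whose co-tag matches; no interrupt, VM exit, or
 * software involvement is needed. */
void tlb_snoop_invalidate(struct tlb *t, uint64_t line_addr)
{
    uint16_t tag = make_cotag(line_addr);
    for (int i = 0; i < t->nentries; i++)
        if (t->e[i].valid && t->e[i].co_tag == tag)
            t->e[i].valid = 0;   /* S -> I */
}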
Translation coherence initiators: Consider
Figure <ref>. Before detailing the numbered transactions,
let us consider HATRIC's components. We show a 4-CPU system,
with private L1 caches, 4 shared LLC banks, and per-bank coherence
directories. We show TLBs; though they also exist, we omit MMU
caches and nTLBs to save space. MMU caches and nTLBs interact with the
cache coherence protocol in a manner that mirrors TLBs. We show 8
cached page table entries, represented as green and black boxes.
Translation coherence is initiated by the hardware page table walker
or OS/hypervisor software.
Page table walkers: These are hardware
finite state machines that are invoked on TLB misses. Walkers traverse
the page tables and are responsible for filling translation
information into the translation structures and setting the
co-tags. Walkers cannot map or unmap pages.
OS and hypervisor: These can traverse,
map, and unmap page table entries using standard load/store
instructions. HATRIC picks up these changes, and keeps all
private cache and translation structures coherent.
Coherence directory: HATRIC minimally
changes the coherence directory. Key design considerations are:
Directory entry changes: Figure
<ref> shows that the coherence directory tracks non-page
table and page table cache lines. We make a minor change to directory
entries, adding two bits to record whether cache lines belong to a
guest page table (gPT) or nested page table (nPT). HATRIC uses these bits to identify the case when a line holding
page table data is modified in the private caches. When this happens,
coherence transactions need to be sent to the translation structures.
The nPT and gPT bits are set by the hardware page table
walkers on fills to the TLBs, MMU caches, and nTLBs. One might
initially expect this to be problematic in the case where the OS or
hypervisor reads or writes a page table cache line in software. In
reality however, this does not present correctness issues. Two
situations are possible. In the first situation, the page table walker
has previously accessed the cache line, and has already set the nPT or gPT bit in the cache line's directory entry. There are
no correctness issues in this case. In the second situation, the OS or
hypervisor reads or writes a page table cache line that has previously
never been looked up by the page table walker. In this case, there is
actually no need to set the nPT or gPT bits in the
coherence directory entry yet since no translations from this line are
cached in the TLB, MMU cache, or nTLB anyway. Modifying the cache line
at this point does not require coherence messages to be sent to the
translation structures. When the page table walker does eventually
access a translation from this cache line and fills it into the
translation structures, it checks the access bit already
maintained by x86-64 translation entries. The access bit records
whether an entry has previously been filled into the TLB or accessed
by the page table walker <cit.>. If this bit is
clear, this means that the entry (and hence the cache line it resides
in) has not been accessed by the page table walker yet. In this case,
the page table walker sends a message to the coherence directory to
update the nPT and gPT bits of the relevant cache line.
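Concretely, the two added bits might sit alongside an otherwise conventional directory entry as shown below (illustrative only; real implementations pack these fields differently):

#include <stdint.h>

/* Minimally extended dual-grain directory entry: gPT/nPT flag lines
 * holding guest or nested page table data, so that writes to such
 * lines also trigger coherence messages to translation structures. */
struct dir_entry {
    uint64_t sharers;   /* coarse-grained, pseudo-specific sharer list */
    uint8_t  state;     /* MESI state                                  */
    uint8_t  gPT : 1;   /* set by the walker: guest page table line    */
    uint8_t  nPT : 1;   /* set by the walker: nested page table line   */
};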
Coherence granularity: Figure
<ref> shows that directory entries store information at
the cache line granularity. x86-64 systems cache 8 page table entries
per 64-byte cache line. Hence, similar to false sharing in caches
<cit.>, HATRIC conservatively invalidates all
translation structure entries caching these 8 page table entries, even
if only a single page table entry is modified. For example, consider
CPU 3 in Figure <ref>, where the TLB caches two
translations mapped to the same cache line. If any CPU modifies either
one of these translations, HATRIC has to invalidate both TLB
entries. This has implications on the size of co-tags. Recall that in
Sec. <ref>, we stated that co-tags use a subset of the address
bits. We want to use the least-significant, and hence highest-entropy,
bits as co-tags. But since cache coherence protocols track groups of
8 translations, co-tags do not store the 3 least significant address
bits. Our 2 byte co-tags use bits 19-3 of the system physical address
storing the page table. Naturally, this means that translations from
different addresses in the page table may alias to the same co-tag. In
practice, this aliasing has little adverse effect on HATRIC's
performance.
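The co-tag formation this implies can be sketched as follows. We simplify by tagging the 64-byte line address, so that all 8 page table entries in a line share one value; our exact bit choice above differs in detail but aliases in the same way:

#include <stdint.h>

/* Drop the 6 line-offset bits (coherence tracks whole 64-byte lines,
 * i.e., 8 PTEs at a time) and keep the next 16 bits to fill the
 * 2-byte co-tag.  Distinct page table lines may alias to a co-tag. */
uint16_t make_cotag(uint64_t sys_addr)
{
    return (uint16_t)(sys_addr >> 6);
}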
Coherence specificity issues: To simplify
hardware, coherence directories do not track where among the private
caches, TLB, MMU cache, and nTLB the page table entries are
cached. Instead, coherence directories are pseudo-specific. For example, Figure <ref> shows that
CPU 0 caches page table entries in the TLB and L1 cache, CPU 1 only
caches them in the L1 cache, while CPU 3 only caches them in the
TLB. Nevertheless, the coherence directory's sharer list does not
capture this distinction. Therefore, when a CPU modifies page table
contents and invalidation messages need to be sent to the sharers,
they are relayed to the L1 caches and all translation
structures, regardless of which ones actually cache page tables. This
results in spurious coherence activity (e.g., CPU 3's L1 cache need not
be relayed an invalidation message for any of the page table entries
shown). In practice though, because modifications of the page table
are rare compared to other coherence activity, this additional traffic
is tolerable. Ultimately, the gains from eliminating high-latency
software TLB coherence far outweigh these relatively minor overheads
(see Sec. <ref>).
Cache and translation structure evictions:
Directories track translations in a coarse-grained and pseudo-specific
manner. This has important implications on cache line
evictions. Ordinarily, when a private cache line is evicted, the
coherence directory is relayed a message to update the line's sharer
list <cit.>. An up-to-date sharer list eliminates
spurious coherence traffic to this line in the future. We continue to
employ this strategy for non-page table cache lines but use a slightly
different approach for page tables. When a cache line holding page
table entries is evicted, its content may still be cached in the TLB,
MMU cache, nTLB. Even worse, other translations with matching co-tags
may still be residing in the translation structures. One option may be
to detect all translations with matching co-tags and invalidate
them. This hurts energy because of the additional translation
structure lookups, and performance because of unnecessary TLB, MMU
cache, and nTLB entry invalidations.
Figure <ref> shows how HATRIC handles this problem,
contrasting it with traditional cache coherence. Suppose CPU 0 evicts
a cache line with page table entries. Both approaches relay a message
to the coherence directory. Ordinarily, we remove CPU 0 from the
sharer list. However, if HATRIC sees that this message
corresponds to a cache line storing a page table (by checking the
directory entry's page table bits), the sharer list is untouched.
This means that if CPU 1 subsequently writes to the same cache line,
HATRIC sends spurious invalidate messages to CPU 0, unlike
traditional cache coherence. However, we mitigate the frequency of
spurious messages; when CPU 0 sees spurious coherence traffic, it
sends a message back to the directory to demote itself from the sharer
list. Sharer lists are hence lazily updated. For similar reasons,
evictions from translation structures also lazily update coherence
directory sharer lists.
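The lazy demotion protocol amounts to the following receiver-side check (a sketch; the helper functions are assumed interfaces, not real ones):

#include <stdbool.h>
#include <stdint.h>

extern uint16_t make_cotag(uint64_t sys_addr);
extern bool tlb_invalidate_matching(int cpu, uint16_t cotag);
extern bool l1_invalidate_line(int cpu, uint64_t line_addr);
extern void directory_demote_sharer(int cpu, uint64_t line_addr);

/* If an invalidation finds nothing to drop in either the translation
 * structures or the L1, this CPU's sharer-list entry is stale; ask
 * the directory to demote it so future spurious traffic is bounded. */
void on_invalidate(int cpu, uint64_t line_addr)
{
    bool hit = tlb_invalidate_matching(cpu, make_cotag(line_addr));
    hit |= l1_invalidate_line(cpu, line_addr);
    if (!hit)
        directory_demote_sharer(cpu, line_addr);
}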
Directory evictions: Finally, past work
shows that coherence directory entry evictions require
back-invalidations of the associated cache lines in the cores
<cit.>. This is necessary for correctness; all lines in
private caches must always have a directory entry. HATRIC
extends this approach to relay back-invalidations to the TLBs, MMU
caches, and nTLBs too.
§.§ Putting It All Together
Figure <ref> details HATRIC's overall
operation. Initially, CPU 0's TLB and L1 caches are empty. On a memory
access, CPU 0 misses in the TLB and walks the page table
1. Whenever a request is satisfied from a page
table line in the L1 cache in the M, E, or S state, there is no need
to initiate coherence transactions. However, suppose that the last
memory reference in the page table walk from Figure
<ref> is absent in the L1 cache. A read
request is sent to the coherence directory in step 2.
Two scenarios are possible. In the first, the translation may be
uncached in the private caches, and there is no coherence directory
entry. A directory entry is allocated and the gPT or nPT
bit is set. In the second scenario (shown in Figure
<ref>), the request matches an existing directory
entry. The nPT bit is already set, and HATRIC reads the
sharer list which identifies CPUs 1 and 3 as also caching the desired
translation (and the 7 adjacent translations in the cache line) in
shared state. In response, the cache line with the desired
translations is sent back to CPU 0 (from CPU 1, 3, or memory,
whichever is quicker), updating the L1 cache 3a and
TLB 3b. Subsequently, the sharer list adds CPU 0.
Now suppose that CPU 1 runs the hypervisor and unmaps the solid green
translation from the nested page table in step 4. To transition the L1 cache line into the M state, the cache
coherence protocol relays a message to the coherence directory. The
corresponding directory entry is identified in 5,
and we find that CPU 0 and 3 need to be sent invalidation
requests. However, the sharer list is (i) coarse-grained and (ii)
pseudo-specific. Because of (i), CPU 0 has to invalidate not only its
TLB entry 6a but also 8 translations in the L1 cache
6b, and CPU 3 has to invalidate the 2 TLB entries
with matching co-tags 6c. Because of (ii), CPU 1's L1
cache receives a spurious invalidation message 6d.
§.§ Other Key Observations
Scope: HATRIC is applicable to virtualized and
non-virtualized systems. For the latter, the co-tags may simply be
used to store the physical addresses of page tables. Further, while we
have focused on nested page table coherence, HATRIC can also be
trivially modified to support shadow page tables too
<cit.>. The co-tags merely have to store the memory
addresses where shadow page tables are stored.
Metadata updates: Beyond software changes
to the translations, they may also be changed by hardware page table
walkers. Specifically, page table walkers update dirty and access bits
to aid page replacement policies <cit.>. But since these
updates are picked up by the standard cache coherence protocol, HATRIC naturally handles these updates too.
Prefetching optimizations: Beyond simply
invalidating stale translation structure entries, HATRIC could
potentially directly update (or prefetch) the updated mappings into
the translation structures. Since a thorough treatment of these
studies requires an understanding of how to manage translation access
bits while speculatively prefetching into translation structures
<cit.>, we leave this for future work.
Coherence protocols: We have studied a MESI
directory based coherence protocol but we have also implemented HATRIC atop MOESI protocols too, as well as snooping protocols like
MESIF <cit.>. HATRIC requires no fundamental
changes to support these protocols.
Synonyms and superpages: HATRIC
naturally handles synonyms or virtual address aliases. This is because
synonyms are defined by unique translations in separate page table
locations, and hence separate system physical addresses. Therefore,
changing or removing a translation has no impact on other translations
in the synonym set, allowing HATRIC to be agnostic to
synonyms. Similarly, HATRIC supports superpages, which also
occupy unique translation entries and can hence be easily detected
by co-tags.
Multiprogrammed workloads: One might expect
that when an application's physical page is remapped, there is no need
for translation coherence activities to the other applications,
because they operate on distinct address spaces. Unfortunately,
however, hypervisors do not know which physical CPUs an application
executed on; all they know is the vCPUs and the physical CPUs the
entire VM uses. Therefore, the hypervisor conservatively flushes even the
translation structures of CPUs that never ran the offending
application. HATRIC completely eliminates this problem by
precisely tracking the correspondence between translations and CPUs.
Comparison to past approaches: HATRIC
is inspired by past work on UNITD <cit.>. Like
HATRIC, UNITD piggybacks translation coherence atop cache
coherence protocols. Unlike HATRIC however, UNITD cannot
support virtualized systems or MMU cache and nTLB coherence. Further,
HATRIC uses energy-frugal co-tags instead of UNITD's
large reverse-lookup CAM circuitry, achieving far greater energy
efficiency. We showcase this in Sec. <ref> where we compare
the efficiency of HATRIC versus an enhanced UNITD design
for virtualization. Beyond UNITD, past work on DiDi
<cit.> also targets translation coherence for
non-virtualized systems. Similarly, recent work investigates
translation coherence overheads in the context of die-stacked DRAM
<cit.>. While this work mitigates translation coherence
overheads, it does so specifically for non-virtualized x86
architectures, and ignores MMU caches and nTLBs. Finally, recent work
uses software mechanisms to reduce translation overheads for guest
page table modifications <cit.>, while HATRIC
also solves the problem of nested page table coherence.
§ METHODOLOGY
Our experimental methodology has two steps. First, we modify KVM to
implement paging on a two-level memory with die-stacked DRAM. Second,
we use detailed cycle-accurate simulation to assess performance and
energy.
§.§ Die-Stacked DRAM Simulation
We evaluate HATRIC's performance on a detailed cycle-accurate
simulation framework that models the operation of a 32-CPU Haswell
processor. We assume 2GB of die-stacked DRAM with 4× the
bandwidth of slower 8GB off-chip DRAM, similar to prior work
<cit.>. Each CPU maintains 32KB L1 caches, 256KB L2 caches,
64-entry L1 TLBs, 512-entry L2 TLBs, 32-entry nTLBs
<cit.>, and 48-entry paging structure MMU caches
<cit.>. Further, we assume a 20MB LLC. We model the
energy usage of this system using the CACTI framework
<cit.>. We use Ubuntu 15.10 Linux as our guest
OS. Further, we evaluate HATRIC in detail using KVM. Beyond
this, we have also run Xen to highlight HATRIC's generality with
other hypervisors.
We use a trace-based approach to drive our simulation framework. We
collect instruction traces from our modified hypervisors with 50
billion memory references using a modified version of Pin which tracks
all GVPs, GPPs, and SPPs, as well as changes to the guest and nested
page tables. In order to collect accurate paging activity, we collect
these traces on a real-system. Ideally, we would like this system to
use die-stacked DRAM but since this technology is in its infancy, we
are inspired by recent work <cit.> to modify a real-system to
mimic the activity of die-stacking. We take an existing multi-socket
NUMA platform, and by introducing contention, creates two different
speeds of DRAM. We use a 2-socket Intel Xeon E5-2450 system, running
our software stack. We dedicate the first socket for execution of the
software stack and mimicry of fast or die-stacked DRAM. The second
socket mimics the slow or off-chip DRAM. It does so by running several
instances of memhog on its cores. Similar to prior work
<cit.>, we use memhog to carefully generate memory contention to achieve the
desired bandwidth differential between the fast and slow DRAM of
4×. By using Pin to track KVM and Linux paging code on this
infrastructure, we accurately generate instruction traces to test HATRIC.
§.§ KVM Paging Policies
Our goal is to showcase the overheads imposed by translation coherence
on paging decisions rather than design the optimal paging policy,
leaving this for future work. So, we pick well-known paging policies
that cover a wide range of design options. For example, we have studied
FIFO and LRU replacement policies, finding the latter to perform
better, as expected. We implement LRU policies in KVM by repurposing
Linux's well-known pseudo-LRU CLOCK policy <cit.>. LRU
alone doesn't always provide good performance since it is expensive to
traverse page lists to identify good candidates for eviction from
die-stacked memory. Instead, performance is improved by moving this
operation off the critical path of execution; we therefore
preemptively evict pages from die-stacked memory so that a pool of
free pages is always maintained. We call this the migration daemon
and combine it with LRU. We have also investigated the benefits of
page prefetching; that is, when an application demand fetches a page
from off-chip to die-stacked memory, we also prefetch a set number of
adjacent pages. Generally, we have found that the best paging policy
uses a combination of these approaches.
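For concreteness, the core of the CLOCK sweep we repurpose looks roughly like this (a simplified sketch, not Linux's or KVM's actual code):

/* Sweep die-stacked frames with a clock hand: clear referenced bits
 * as the hand passes (a "second chance"), and evict the first frame
 * whose bit is already clear. */
struct frame { int referenced; };

int clock_pick_victim(struct frame *f, int n, int *hand)
{
    for (;;) {
        int i = *hand;
        *hand = (*hand + 1) % n;
        if (f[i].referenced)
            f[i].referenced = 0;   /* give a second chance */
        else
            return i;              /* evict this frame     */
    }
}

The migration daemon simply runs this sweep off the critical path so that a pool of free die-stacked frames is always available.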
§.§ Workloads
Our focus is on two sets of workloads. The first set comprises
applications that benefit from the higher bandwidth of die-stacked
memory. We use canneal and facesim from the Parsec suite
<cit.>, data caching and tunkrank from
Cloudsuite <cit.>, and graph500 as part of
this group. We also create 80 multiprogrammed combinations of
workloads from all the Spec applications to showcase the problem of
imprecise target identification in virtualized translation coherence.
Our second group of workloads is made up of smaller-footprint
applications whose data largely fits within the die-stacked DRAM. We
use these workloads to evaluate HATRIC's overheads in situations
where hypervisor-mediated paging (and hence translation coherence)
between die-stacked and off-chip DRAM is rarer. We use the remaining
Parsec applications, and Spec applications for these studies.
§ EVALUATION
Performance as a function of vCPU counts:
Figure <ref> shows HATRIC's runtime, normalized as a
fraction of application runtime in the absence of any die-stacked
memory (no-hbm from Figure <ref>). We compare
runtimes for the best KVM paging policies (sw), HATRIC,
and ideal unachievable zero-overhead translation coherence (ideal). Further, we vary the number of vCPUs per VM and observe the
following.
HATRIC is always within 2-4% of the ideal
performance. In some cases, HATRIC is instrumental in achieving
any gains from die-stacked memory at all. Consider data caching,
which slows down when using die-stacked memory, because of translation
coherence overheads. HATRIC cuts runtimes down to roughly 75%
of the baseline runtime in all cases.
Figure <ref> also shows that HATRIC is valuable at all
vCPU counts. In some cases, more vCPUs exacerbate translation
coherence overheads. This is because IPI broadcasts become more
expensive and more vCPUs suffer VM exits. This is why, for example,
data caching and tunkrank become slower (see sw)
when vCPUs increase from 4 to 8. HATRIC eliminates these
problems, flattening runtime improvements across vCPU counts. In other
scenarios, fewer vCPUs worsen performance since each vCPU performs
more of the application's total work. In such cases, a full TLB,
nTLB, and MMU cache flush for every page remapping is very expensive
(e.g., graph500 and facesim). Here, HATRIC again
eliminates these overheads almost entirely.
Performance as a function of paging
policy: Figure <ref> also shows HATRIC performance,
but this time as a function of different KVM paging policies. We study
three policies with 16 vCPUs. First, we show lru, which
determines which pages to evict from die-stacked DRAM. We then add the
migration daemon (&mig-dmn), and page prefetching
(&pref).
Figure <ref> shows HATRIC improves runtime substantially
for any paging policy. Performance is best when all techniques are
combined, but HATRIC achieves 10-30% performance improvements
even for just lru. Furthermore, Figure <ref> shows that
translation coherence overheads can often be so high that the paging
policy itself makes little difference to performance. Consider tunkrank, where the difference between the lru and &pref bars is barely 2-3%. With HATRIC, however, paging
optimizations like prefetching and migration daemons help.
Impact of translation structure sizes: One
of HATRIC's advantages is that it converts translation structure
flushes to selective invalidations. This improves TLB, MMU cache, and
nTLB hit rates substantially, obviating the need for expensive
two-dimensional page table walks. We expect HATRIC to improve
performance even more as translation structures become bigger (and
flushes needlessly evict more entries). Figure <ref> quantifies
the relationship. We vary TLB, nTLB, and MMU cache sizes from the
default (see Sec. <ref>) to double (2×) and
quadruple (4×) the number of entries.
Figure <ref> shows that translation structure flushes largely
counteract the benefits of greater size. Specifically, the sw
results see barely any improvement, even when sizes are
quadrupled. Inter-DRAM page migrations essentially flush the
translation structures so often that additional entries are not
effectively leveraged. Figure <ref> shows that this is a wasted
opportunity since zero-overhead translation coherence (ideal)
actually does enjoy 5-7% performance benefits. HATRIC solves
this problem, comprehensively achieving within 1% of the ideal,
thereby exploiting larger translation structures.
Multi-programmed workloads: We now focus on
multiprogrammed workloads made up of sequential applications. Each
workload runs 16 Spec benchmarks on a Linux VM atop KVM. As is
standard for multiprogrammed workloads, we use two performance metrics
<cit.>. The first is weighted runtime
improvement, which captures overall system performance. The second is
the runtime improvement of the slowest application in the workload,
capturing fairness.
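Concretely, the arithmetic we assume for these two metrics is sketched below (our formulation; see <cit.> for the standard definitions):

/* t[i]: program i's runtime in the evaluated configuration;
 * base[i]: its runtime in the no-die-stacked-DRAM baseline. */
double weighted_runtime(const double *t, const double *base, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += t[i] / base[i];        /* per-program normalized runtime  */
    return sum / n;                   /* below 1.0 means overall speedup */
}

double slowest_runtime(const double *t, const double *base, int n)
{
    double worst = 0.0;
    for (int i = 0; i < n; i++)
        if (t[i] / base[i] > worst)
            worst = t[i] / base[i];   /* fairness: the worst-off program */
    return worst;
}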
Figure <ref> shows our results. The graph on the left plots the
weighted runtime improvement, normalized to cases without die-stacked
DRAM. As usual, sw represents the best KVM paging policy. The
x-axis represents the workloads, arranged in ascending order of
runtime. The lower the runtime, the better the performance. Similarly,
the graph on the right of Figure <ref> shows the runtime
of the slowest application in the workload mix; again, lower runtimes
indicate a speedup in the slowest application.
Figure <ref> shows that translation coherence can be disastrous
to the performance of multiprogrammed workloads. More than 70% of the
workload combinations suffer performance degradation with
die-stacking. These applications suffer from unnecessary translation
structure flushes and VM exits, caused by software translation
coherence's imprecise target identification. Runtime is more than
2× for 11 workloads. Additionally, translation coherence
degrades application fairness. For example, in more than half the
workloads, the slowest application's runtime is (2×)+ with a
maximum of (4×)+. Applications that struggle are usually those
with limited memory-level parallelism that benefit little from the
higher bandwidth of die-stacked memory and instead, suffer from the
additional translation coherence overheads.
HATRIC solves all these issues, achieving improvements for every
single weighted runtime, and even for each of the slowest
applications. In fact, HATRIC entirely eliminates translation
coherence overheads, reducing runtime to 50-80% of the baseline
without die-stacked DRAM. The key enabler is HATRIC's precise
identification of coherence targets – applications that do not need
to participate in translation coherence operations have their
translation structure contents left unflushed and do not suffer VM
exits.
Performance-energy tradeoffs: Intuitively,
we expect that since HATRIC reduces runtime substantially, it
should reduce static energy sufficiently to offset the higher energy
consumption from the introduction of co-tags. Indeed, this is true for
workloads that have sufficiently large memory footprints to trigger
inter-memory paging. However, we also assess HATRIC's energy
implications on workloads that do not frequently remap pages (i.e.,
their memory footprints fit comfortably within die-stacked
DRAM).
The graph on the left of Figure <ref> plots all the workloads
including the single-threaded and multithreaded ones that benefit from
die-stacking and those whose memory needs fit entirely in die-stacked
DRAM. The x-axis plots the workload runtime, as a fraction of the
runtime of sw results. The y-axis plots energy, similarly
normalized. We desire points that converge towards the lower-left
corner of the graph.
The graph on the left of Figure <ref> shows that HATRIC
always boosts performance, and almost always improves energy
too. Energy savings of 1-10% are routine. In fact, HATRIC even
improves the performance and energy of many workloads that do not page
between the two memory levels. This is because these workloads still
remap pages to defragment memory (to support superpages) and HATRIC mitigates the associated translation coherence
overheads. There are some rare instances (highlighted in black) where
energy does exceed the baseline by 1-1.5%. These are workloads for
whom efficient translation coherence does not make up for the
additional energy of the co-tags. Nevertheless, these overheads are
low, and their instances rare.
Co-tag sizing: We now turn to co-tag
sizing. Excessively large co-tags consume significant lookup and
static energy, while small ones force HATRIC to invalidate too
many translation structure entries on a page remap. The graph on the right of
Figure <ref> shows the performance-energy implications of
varying co-tag size from 1 to 3 bytes.
First and foremost, 2B co-tags – our design choice – provide the
best balance of performance and energy. While 3B co-tags track page
table entries at a finer granularity, they only modestly improve
performance over 2B co-tags, but consume much more energy. Meanwhile
1B co-tags suffer in terms of both performance and energy. Since 1B
co-tags have a coarser tracking granularity, they invalidate more
translation entries from TLBs, MMU caches, and nTLBs than larger
co-tags. And while the smaller co-tags do consume less lookup and
static energy, these additional invalidations lead to more expensive
two-dimensional page table walks and a longer system runtime. The end
result is an increase in energy too.
Coherence directory design decisions:
Sec. <ref> detailed the nuances of modifying traditional
coherence directories to support translation coherence. Figure
<ref> captures the performance and energy (normalized to
those of the best paging policy or sw in previous graphs) of
these approaches. We consider the following options, beyond baseline
HATRIC.
EGR-dir-update: This is a design that
eagerly updates coherence directories whenever a translation entry is
evicted from a CPU's L1 cache or translation structures. While this
does reduce spurious coherence messages, it requires expensive lookups
in translation structures to ensure that entries with the same co-tag
have been evicted. Figure <ref> shows that the performance
gains from reduced coherence traffic are almost negligible, while
energy does increase, relative to HATRIC.
FG-tracking: We study a hypothetical design
with greater specificity in translation tracking. That is, coherence
directories are modified to track whether translations are cached in
the TLBs, MMU caches, nTLBs, or L1 caches. Unlike HATRIC, if a
translation is cached only in the MMU cache but not the TLB, the
latter is not sent invalidation requests. Figure <ref>
shows that while one might expect this specificity to result in
reduced coherence traffic, system energy is actually slightly higher
than HATRIC. This is because more specificity requires more
complex and area/energy intensive coherence directories. Further, since
the runtime benefits are small, we believe HATRIC remains the
smarter choice.
No-back-inv: We study an unrealistically
ideal design with infinitely-sized coherence directories which never
need to relay back-invalidations to private caches or translation
structures. We find that this does reduce energy and runtime, but not
significantly from HATRIC's dual-grain coherence directory based
on <cit.>.
All: Figure <ref> compares HATRIC to an approach which marries all the optimizations
discussed. HATRIC achieves almost exactly the same performance and
is actually more energy-efficient, largely because the eager updates
of coherence directories add significant translation structure lookup
energy.
Comparison with UNITD: We now compare HATRIC to prior work on UNITD <cit.>. To do
this, we first upgrade the baseline UNITD design in several
ways. First, and most importantly, we add support for
virtualization by storing the system physical address of nested page
table entries in the reverse-lookup CAM originally
proposed <cit.>. Second, we extend UNITD to
work seamlessly with coherence directories. We call this upgraded
design UNITD++.
Figure <ref> compares HATRIC and UNITD++ results,
normalized to results from the case without die-stacked DRAM. As
expected, both approaches outperform a system with only traditional
software-based translation coherence (sw). However, HATRIC
typically provides an additional 5-10% performance boost versus UNITD++ by also extending the benefits of hardware translation
coherence to MMU caches and nTLBs. Further, HATRIC is more
energy efficient than UNITD++ as it boosts performance (saving
static energy) but also does not need reverse-lookup CAMs.
Xen results: In order to assess HATRIC's generality across hypervisors, we have begun studying its
effectiveness on Xen. Because our memory traces require months to
collect, we have thus far evaluated canneal and data
caching, assuming 16 vCPUs. Our initial results show that Xen's
performance is improved by 21% and 33% for canneal and data caching respectively, over the best paging policy employing
software translation coherence.
§ CONCLUSION
We present a case for folding translation coherence atop existing
hardware cache coherence protocols. We achieve this with simple
modifications to translation structures (TLBs, MMU caches, and nTLBs)
and with state-of-the-art coherence protocols. Our solutions are
general (they support nested and guest page table modifications) and
readily implementable. We believe, therefore, that HATRIC will
become essential for upcoming systems, especially as they rely on page
migration to exploit heterogeneous memory systems.
|
http://arxiv.org/abs/1701.08154v1 | 20170127185632 | The Evolutionary Status of WN3/O3 Wolf-Rayet Stars | [
"Kathryn F. Neugent",
"Phil Massey",
"D. John Hillier",
"Nidia I. Morrell"
] | astro-ph.SR | [
"astro-ph.SR"
] |
The Evolutionary Status of WN3/O3 Wolf-Rayet Stars
Kathryn F. Neugent, Phil Massey, D. John Hillier, & Nidia I. Morrell
=====================================================================
As part of a multi-year survey for Wolf-Rayet stars in the Magellanic Clouds, we have discovered a new type of Wolf-Rayet star with both strong emission and absorption. While one might initially classify these stars as WN3+O3V binaries based on their spectra, such a pairing is unlikely given their faint visual magnitudes. Spectral modeling suggests effective temperatures and bolometric luminosities similar to those of other early-type LMC WNs, but with mass-loss rates that are three to five times lower than expected. They additionally retain a significant amount of hydrogen, with nitrogen at its CNO-equilibrium value (10× enhanced). Their evolutionary status remains an open question. Here we discuss why these stars did not evolve through quasi-homogeneous evolution. Instead, we suggest that, based on a link with long-duration gamma ray bursts, they may form in lower-metallicity environments. A new survey in M33, which has a large metallicity gradient, is underway.
§ INTRODUCTION
A few years ago we began a search for Wolf-Rayet (WR) stars in the Magellanic Clouds <cit.>. As I write this proceeding, we have just finished our fourth year of observations and have imaged the entire optical disks of both the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). We will likely have a few candidate WRs left to spectroscopically confirm after this season of imaging, but so far our efforts have been quite successful. We have discovered four WN-type stars, one WO-type star, eleven Of-type stars, and ten stars that appear to belong to an entirely new class of WR star.
The ten stars, as shown in Figure <ref> (left), exhibit strong WN3-like emission line features as well as O3V star absorption lines. While one might instinctively think these are WN3 stars with O3V binary companions, this cannot be the case. They are faint with M_V ∼ -2.5. An O3V by itself has an absolute magnitude of M_V ∼ -5.5 <cit.>, so these stars cannot contain an O-type star companion. We call these stars WN3/O3s, where the “slash" represents their composite spectra. As mentioned previously, we have found ten of these stars, or 6.5% of the LMC WR population. As Figure <ref> (right) shows, all of their spectra are nearly identical.
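The magnitude arithmetic behind this argument is a useful check: fluxes add while magnitudes do not, so for a putative pair M_V^comb = -2.5 log_10(10^(-0.4 M_1) + 10^(-0.4 M_2)). With M_1 = -5.5 for the O3V and even a companion as bright as the observed M_2 = -2.5, the pair would have M_V^comb ≈ -5.57. Any system containing an O3V is therefore at least ∼3 mag brighter than the WN3/O3s we observe, regardless of the companion.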
§ MODELING EFFORTS AND PHYSICAL PARAMETERS
Given that these stars are not WN3 stars with O3V companions, we were interested to see if we could model their spectra as single stars. We obtained medium dispersion optical spectra for all ten WN3/O3s in addition to high-dispersion optical spectra for one WN3/O3, UV spectra for three WN3/O3s, and NIR spectra for one WN3/O3. For LMC170-2 we obtained spectra from all four sources, giving us coverage from 1000-25,000Å. To model these stars we turned to cmfgen, a spectral modeling code that contains all of the complexities needed to model hot stars near their Eddington limits <cit.>. Using cmfgen, we began by modeling the spectrum of LMC170-2.
Using the parameters given in Table <ref>, we obtained a good fit to both the emission and the absorption lines for LMC170-2. Hainich et al. (2014) recently modeled most of the previously known WN-type WRs in the LMC using PoWR, and thus we can compare the star's physical parameters with those of more “normal" WN-type LMC WRs. We can additionally compare the parameters with those of LMC O3Vs. Table <ref> shows that the abundances and He/H ratio are comparable with those of LMC WN3s. Figure <ref> (left) shows that while the temperature is a bit on the high side of what one would expect for an LMC WN, it is still within the expected temperature range. However, the biggest surprise is the mass-loss rate. As is shown in Table <ref>, the mass-loss rate of the WN3/O3s is more similar to that of an O3V than of a normal LMC WN. This is shown visually in Figure <ref> (right).
While all the spectra look visually similar, we still wanted to determine a range of physical parameters for these WN3/O3s. Thus, we modeled all 10 of them using cmfgen. As discussed above, we obtained UV data for three of our stars. Based on these data, we determined that C iv λ1550 was not present in the spectra of our stars. This again confirms that there is not an O3V star within the system. Additionally, it points to a high temperature, as this line disappears at an effective temperature of 80,000 K. As an upper limit, as the effective temperature increases above 110,000 K, both He ii λ4200 and O vi λ1038 become too weak. Thus, the overall temperature regime is between 80,000 - 110,000 K. However, in practice, our models only varied between 100,000 - 105,000 K. Most of the other physical parameters stayed relatively consistent and are thus well constrained, as shown in Table <ref>. However, the exception is again the mass-loss rates. Figure <ref> shows the range in the mass-loss rates for our WN3/O3s. While most of them are quite low (like those of O3Vs), a few of them are on the low end of some other early-type LMC WNs. In particular, there are three LMC WNs with low mass-loss rates and similar luminosities to the WN3/O3s. However, these three stars are visually much brighter than the WN3/O3s, and thus we do not believe that they are the same type of star. We are still planning on studying them further.
§ EVOLUTIONARY STATUS
Now that we have a good handle on their physical parameters, we are currently investigating possible WN3/O3 progenitors as well as their later stages of evolution. Figure <ref> shows where the WN3/O3s are located within the disk of the LMC. If they were all grouped together, we might assume that they formed out of the same stellar nursery. Instead, they are pretty evenly spaced across the LMC.
There are other non-binary WRs with hydrogen absorption lines, denoted WNha stars. <cit.> has argued that the majority of these stars evolved through quasi-homogeneous evolution. In this case, the stars have high enough rotational velocities that the material in the core mixes with the material in the outer layers. This creates CNO abundances near equilibrium, much as we find in the WN3/O3s. However, the WN3/O3s have relatively low rotational velocities (V_rot∼ 150 km s^-1) and extremely low mass-loss rates. It is difficult (if not impossible) to imagine a scenario where a star could begin its life with a large enough rotational velocity (typically ∼ 250 km s^-1) to produce homogeneous evolution and later slow down to 150 km s^-1 given the small mass-loss rates <cit.>. Instead, a homogeneous star must have either a larger rotational velocity or a larger mass-loss rate. So, at this point we can rule out quasi-homogeneous evolution.
§ NEXT STEPS
<cit.> looked at various types of supernovae progenitors and found that WN3/O3s, with their low mass-loss rates and high wind velocities, might be the previously-unidentified progenitors of Type Ic-BL supernovae. A subset of these Type Ic-BL supernovae then turn into long-duration gamma-ray bursts, which are preferentially found in low metallicity environments <cit.>. Thus, we are currently in the process of investigating any metallicity dependence in the formation of WN3/O3s.
So far we have only found them in the low metallicity LMC. Given the high number of WRs currently known in the Milky Way, we would expect to have found at least a few of them. However, these faint WRs with strong absorption lines have not been found elsewhere. To investigate any metallicity dependence, we have decided to begin another search for WRs in M33, which has a strong metallicity gradient. We previously conducted a search for WRs in M33, but that survey simply did not go deep enough, as Figure <ref> shows. So, there may be an entire yet undiscovered population of WN3/O3s within M33.
We continue to discuss the evolutionary status of these WN3/O3s with our theoretician colleagues while also observationally attempting to determine where these stars come from and what they turn into. By searching for them in M33 we should be able to constrain their metallicity dependence and gain further insight into their evolution.
[Conti(1988)]Conti88
Conti, P. S. 1988, NASA Special Publication, 497, 119
[Drout et al.(2016)]Drout16
Drout, M. R., Milisavljevic, D., Parrent, J., Margutti, R., et al. 2016, ApJ, 821, 57
[Hainich et al.(2014)Hainich, Rühling, Todt, Oskinova, Liermann, Gräfener, Foellmi, Schnurr, & Hamann]Potsdam
Hainich, R., Rühling, U., Todt, H., et al. 2014, A&A, 565, A27
[Hillier & Miller(1998)]CMFGEN
Hillier, D. J. & Miller, D. L. 1998, ApJ, 496, 407
[Martins et al.(2013)Martins, Depagne, Russeil, & Mahy]Martins13
Martins, F., Depagne, E., Russeil, D., & Mahy, L. 2013, A&A, 554, 23
[Massey et al.(2013)Massey, Neugent, Hillier, & Puls]Ostars
Massey, P., Neugent, K. F., Hillier, D. J., & Puls, J. 2013, ApJ, 768, 6
[Massey et al.(2015)Massey, Neugent, & Morrell]MCWR15
Massey, P., Neugent, K. F., & Morrell, N. 2015, ApJ, 807, 81
[Massey et al.(2016)Massey, Neugent, & Morrell]MCWR16
Massey, P., Neugent, K. F., & Morrell, N. 2016, in prep.
[Massey et al.(2014)Massey, Neugent, Morrell, & Hillier]MCWRs
Massey, P., Neugent, K. F., Morrell, N., & Hillier, D. J. 2014, ApJ, 788, 83
[Song et al.(2016)]homogen
Song, H. F., Meynet, G., Maeder, A., Ekström, S., & Eggenberger, P. 2016, A&A, 585, A120
[Vink et al.(2011)Vink, Gräfener, & Harries]LDGRB
Vink, J. S., Gräfener, G., & Harries, T. J. 2011, A&A, 536, L10
|
http://arxiv.org/abs/1701.07661v1 | 20170126114225 | On the possible enhancement of the dark matter density distribution at the galactic center | [
"V. Gammaldi",
"V. Avila-Reese",
"O. Valenzuela",
"A. X. Gonzalez-Morales",
"P. Salucci",
"F. Nesti"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO",
"astro-ph.HE",
"hep-ph"
] |
On the possible enhancement of the dark matter density distribution at the galactic center
V. Gammaldi, V. Avila-Reese, O. Valenzuela, A. X. Gonzalez-Morales, P. Salucci, F. Nesti
=====================
An overdensity of dark matter of thermal origin at the galactic center, produced by the presence of the black hole Sgr A^*, could explain the astrophysical factor required to justify the cut-off in the gamma-ray spectrum detected by HESS.
The Dark Matter (DM) spike induced by the adiabatic growth of a massive Black Hole (BH) in a cuspy environment may explain the thermal DM density required to fit the cut-off in the HESSJ1745-290 γ-ray spectrum (F. Aharonian et al. 2009) as a TeV DM signal with a background component (Cembranos et al. 2012). The spike extension appears comparable with the HESS angular resolution. The DM density is locally enhanced in a region of radius R_sp=α_γ r_s(M_BH/(ρ_s r_s^3))^1/(3-γ), as studied by Gondolo & Silk (1999) for several profiles (α_γ). The BH mass at the GC is M_BH=4.5× 10^6 M_⊙. We use the hydrodynamic Milky Way-like simulation Garrotxa (Roca-Fàbrega et al. 2016). We fit the DM distribution to three cases (see Gammaldi et al. 2016 for details): i) a 4-parameter profile down to the nominal resolution limit of 109 pc (GARR-I), ii) a 4-parameter profile with a conservative limit of 300 pc (GARR-I300), and iii) a 5-parameter profile from 300 pc (GARR-II300). The inner slopes, γ, which we extrapolate to the very center, are 0.6, 1 and 0.02, respectively.
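As a quick numerical illustration, R_sp follows directly from the Gondolo & Silk scaling above; a minimal sketch, where the scale radius r_s and scale density ρ_s are placeholder values chosen for illustration only (not the fitted Garrotxa parameters), while M_BH is the value quoted in the text:

def r_spike(alpha_gamma, r_s_pc, rho_s_msun_pc3, m_bh_msun, gamma):
    # R_sp = alpha_gamma * r_s * (M_BH / (rho_s * r_s**3))**(1/(3-gamma))
    # consistent units: pc, Msun/pc^3, Msun; the ratio inside the power is dimensionless
    return alpha_gamma * r_s_pc * (m_bh_msun / (rho_s_msun_pc3 * r_s_pc**3)) ** (1.0 / (3.0 - gamma))

# Illustrative NFW-like halo parameters (placeholders) and the Sgr A* mass from the text.
print(r_spike(alpha_gamma=0.1, r_s_pc=2.0e4, rho_s_msun_pc3=1.0e-2, m_bh_msun=4.5e6, gamma=1.0))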
For the angular resolution of the HESS telescope (∼ 0.1^∘), the astrophysical factor associated with the BH-induced DM spike for each profile is: ⟨ J ⟩_ΔΩ^BH-GARRI=2.58×10^27GeV^2cm^-5sr^-1, ⟨ J ⟩_ΔΩ^BH-GARRI300=2.16×10^27GeV^2cm^-5sr^-1 and ⟨ J ⟩_ΔΩ^BH-GARRII=7.56×10^25GeV^2cm^-5sr^-1. These correspond to R_sp=16 pc (0.11^∘), R_sp=11 pc (0.07^∘) and R_sp=2.3 pc (0.01^∘), respectively, assuming R_⊙=8.5 kpc. The comparison of these results with the HESS data shows that the observed angular extent of the HESSJ1745-290 signal depends not only on the instrumental resolution, but also on the background normalization. In the upper panel of Fig.1 we assume that the background increases through the GC as the extrapolation of the underlying DM-halo profile without spike.
The case for GARR-I300 (γ=1) could be considered similar to the case in which the background is given by a millisecond pulsars (MSPs) population following the distribution of the GeV γ-ray emission claimed in Calore et al. (2015) (there, γ=1.2).
In this case, the DM spike would appear much more localized than if the signal were normalized to the value of the background at 0.54^∘ (≈ 80 pc from the GC).
The DM spike may help to describe the spatial tail reported by HESS II at angular scales of 0.54^∘ towards Sgr A^*. On the other hand, the different profiles of the spike may make it possible to disentangle the nature (warm or cold) of the DM particle (Gammaldi, Nesti & Salucci, in preparation).
HESS
F. Aharonian et al., A&A503,817(2009); HESS collaboration, [arXiv:1509.03425]; Phys.Rev.Lett.114, 081301 (2015); [arXiv:1603.07730].
Gammaldi
J.A.R. Cembranos, V. Gammaldi, A.L. Maroto, Phys.Rev. D86 (2012) 103506; JCAP 1304 (2013) 051.
GS
P. Gondolo, J. Silk, Phys.Rev.Lett. 83 (1999) 1719-1722.
GARR
S. Roca-Fàbrega et al., [arXiv:1504.06261].
Gamm
V. Gammaldi et al., [arXiv:1607.02012].
Calore
F. Calore et al., [arXiv:1411.4647].
|
http://arxiv.org/abs/1701.07875v3 | 20170126211029 | Wasserstein GAN | [
"Martin Arjovsky",
"Soumith Chintala",
"Léon Bottou"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Wasserstein GAN
Martin Arjovsky, Soumith Chintala, Léon Bottou
==============================================
§ INTRODUCTION
The problem this paper is concerned with is that of unsupervised learning.
Mainly, what does it mean to learn a probability
distribution? The classical answer to this is to learn a probability
density. This is often done by defining a parametric family of densities
(P_θ)_θ∈^d and finding the one that maximizes the likelihood on our data:
if we have real data examples {x^(i)}_i=1^m, we would solve the problem
max_θ∈^d1/m∑_i=1^m log P_θ(x^(i))
If the real data distribution _r admits a density and _θ is the
distribution of the parametrized density P_θ, then, asymptotically, this
amounts to minimizing the Kullback-Leibler divergence KL(_r _θ).
For this to make sense, we need the model density P_θ to exist.
This is not the case in the rather common situation where we are
dealing with distributions supported by low dimensional manifolds.
It is then unlikely that the model manifold and the true distribution's support
have a non-negligible intersection (see <cit.>),
and this means that the KL distance is not defined (or simply infinite).
The typical remedy is to add a noise term to the model
distribution. This is why virtually all generative models described in
the classical machine learning literature include a noise
component. In the simplest case, one assumes a Gaussian noise with
relatively high bandwidth in order to cover all the examples. It is
well known, for instance, that in the case of image generation models,
this noise degrades the quality of the samples and makes them
blurry.
For example, we can see in the recent paper <cit.>
that the optimal standard deviation of the noise added to the model
when maximizing likelihood is around 0.1 to each pixel in a generated image,
when the pixels were already normalized to be in the range [0, 1]. This is
a very high amount of noise, so much that when papers report the samples of
their models, they don't add the noise term on which they report likelihood numbers.
In other words, the added noise term
is clearly incorrect for the problem, but is needed to make the
maximum likelihood approach work.
Rather than estimating the density of _r which may not exist, we
can define a random variable Z with a fixed distribution p(z) and
pass it through a parametric function g_θ: 𝒵→𝒳 (typically a neural network of some kind) that directly
generates samples following a certain distribution _θ. By
varying θ, we can change this distribution and make it close to
the real data distribution _r. This is useful in two ways. First
of all, unlike densities, this approach can represent distributions
confined to a low dimensional manifold. Second, the ability to easily
generate samples is often more useful than knowing the numerical value
of the density (for example in image superresolution or semantic
segmentation when considering the conditional distribution of the
output image given the input image). In general, it is computationally
difficult to generate samples given an arbitrary high dimensional
density <cit.>.
Variational Auto-Encoders (VAEs) <cit.> and Generative
Adversarial Networks (GANs) <cit.> are well known examples
of this approach. Because VAEs focus on the approximate likelihood of
the examples, they share the limitation of the standard models and
need to fiddle with additional noise terms. GANs offer much more
flexibility in the definition of the objective function, including
Jensen-Shannon <cit.>, and all f-divergences
<cit.> as well as some exotic combinations
<cit.>. On the other hand, training GANs is well known for
being delicate and unstable, for reasons theoretically investigated in
<cit.>.
In this paper, we direct our attention on the various ways to measure
how close the model distribution and the real distribution are, or
equivalently, on the various ways to define a distance or divergence
ρ(_θ,_r). The most fundamental difference between such
distances is their impact on the convergence of sequences of
probability distributions. A sequence of distributions
(_t)_t∈ converges if and only if there is a distribution
_∞ such that ρ(_t,_∞) tends to zero,
something that depends on how exactly the distance ρ is defined.
Informally, a distance ρ induces a weaker topology when it makes
it easier for a sequence of distribution to converge.[More
exactly, the topology induced by ρ is weaker than that induced
by ρ' when the set of convergent sequences under ρ is a
superset of that under ρ'.] Section <ref>
clarifies how popular probability distances differ in that respect.
In order to optimize the parameter θ, it is of course desirable
to define our model distribution _θ in a manner that makes
the mapping θ↦_θ continuous.
Continuity means that when a sequence of parameters θ_t converges
to θ, the distributions _θ_t also converge
to _θ. However, it is essential to remember that the
notion of the convergence of the distributions _θ_t depends
on the way we compute the distance between distributions.
The weaker this distance, the easier
it is to define a continuous mapping from θ-space to
_θ-space, since it's easier for the
distributions to converge.
The main reason we care about the mapping θ↦_θ
to be continuous is as follows. If ρ is our notion of
distance between two distributions, we would like to have a loss
function θ↦ρ(_θ, _r) that is continuous,
and this is equivalent to having the mapping θ↦_θ
be continuous when using the distance between distributions ρ.
The contributions of this paper are:
* In Section <ref>, we provide a comprehensive
theoretical analysis of how the Earth Mover (EM) distance behaves in comparison to
popular probability distances and divergences used in the
context of learning distributions.
* In Section <ref>, we define a form of GAN called Wasserstein-GAN
that minimizes a reasonable and efficient approximation of the EM distance,
and we theoretically show that the corresponding optimization
problem is sound.
* In Section <ref>, we empirically show that WGANs
cure the main training problems of GANs. In particular, training
WGANs does not require maintaining a careful balance in training of
the discriminator and the generator, and does not require a careful
design of the network architecture either. The mode dropping
phenomenon that is typical in GANs is also drastically reduced. One
of the most compelling practical benefits of WGANs is the ability to
continuously estimate the EM distance by training the
discriminator to optimality. Plotting these learning curves is not
only useful for debugging and hyperparameter searches, but also
correlate remarkably well with the observed sample quality.
§ DIFFERENT DISTANCES
We now introduce our notation. Let be a compact metric set
(such as the space of images [0,1]^d) and let Σ denote the
set of all the Borel subsets of . Let Prob()
denote the space of probability measures defined on .
We can now define elementary distances and divergences
between two distributions _r,_g∈Prob():
* The Total Variation (TV) distance
δ(_r,_g) = sup_A∈Σ |_r(A)-_g(A)| .
* The Kullback-Leibler (KL) divergence
KL(_r_g) = ∫log(P_r(x)/P_g(x)) P_r(x) dμ(x) ,
where both _r and _g are assumed to be absolutely continuous,
and therefore admit densities, with respect to a same measure μ defined on .[
Recall that a probability distribution
_r∈Prob() admits a density p_r(x) with
respect to μ, that is, ∀ A∈Σ, _r(A) =
∫_A P_r(x) dμ(x), if and only it is absolutely continuous
with respect to μ, that is, ∀ A∈Σ,
μ(A)=0⇒_r(A)=0 .]
The KL divergence is famously asymmetric and possibly infinite when there
are points such that P_g(x)=0 and P_r(x)>0.
* The Jensen-Shannon (JS) divergence
JS(_r,_g) = KL(_r_m)+KL(_g_m) ,
where _m is the mixture (_r+_g)/2. This divergence
is symmetrical and always defined because we can choose μ=_m.
* The Earth-Mover (EM) distance or Wasserstein-1
W(_r, _g) = inf_γ∈Π(_r ,_g)_(x, y) ∼γ[ x - y ] ,
where Π(_r,_g) denotes the set of all joint distributions
γ(x,y) whose marginals are respectively _r and
_g. Intuitively, γ(x,y) indicates how much “mass” must
be transported from x to y in order to transform the distribution
_r into the distribution _g. The EM distance then
is the “cost” of the optimal transport plan.
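For two distributions with densities on a common finite support, the quantities above reduce to sums, and the EM distance between two equal-size empirical samples on the real line reduces to sorting. A minimal numerical sketch (note that we follow the convention above for JS, without the usual 1/2 factor):

import numpy as np

def tv(p, q):
    # on a finite support, sup_A |P(A) - Q(A)| = (1/2) * sum_x |p(x) - q(x)|
    return 0.5 * np.abs(p - q).sum()

def kl(p, q):
    # KL(P || Q); +infinity whenever q(x) = 0 < p(x)
    mask = p > 0
    return np.inf if np.any(q[mask] == 0) else np.sum(p[mask] * np.log(p[mask] / q[mask]))

def js(p, q):
    m = 0.5 * (p + q)       # the mixture (P + Q)/2; positive wherever p or q is
    return kl(p, m) + kl(q, m)

def w1_line(x, y):
    # Wasserstein-1 between two equal-size empirical samples on the real line
    return np.abs(np.sort(x) - np.sort(y)).mean()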
The following example illustrates how apparently simple sequences of
probability distributions converge under the EM distance but do not
converge under the other distances and divergences defined above.
[Learning parallel lines]
Let Z ∼ U[0,1] the uniform distribution
on the unit interval. Let _0 be the
distribution of (0, Z) ∈^2 (a 0 on the x-axis
and the random variable Z on the y-axis), uniform on a straight
vertical line passing through the origin. Now
let g_θ(z) = (θ, z) with θ
a single real parameter. It is easy to see
that in this case,
* W(_0, _θ) = |θ|,
* JS(_0,_θ) = log 2 if θ≠ 0, and 0 if θ = 0,
* KL(_θ_0) = KL(_0 _θ) = +∞ if θ≠ 0, and 0 if θ = 0,
* and δ(_0,_θ) = 1 if θ≠ 0, and 0 if θ = 0.
When θ_t→0,
the sequence (_θ_t)_t∈ converges
to _0 under the EM distance, but does not converge
at all under either the JS, KL, reverse KL, or TV divergences.
<ref> illustrates this for the case of the EM and JS distances.
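The values in Example <ref> are easy to verify numerically: for two equal-size empirical samples, the infimum over couplings becomes an assignment problem, which scipy solves exactly. A minimal sketch (the matching recovers W ≈ |θ|, while JS sits at log 2 for any θ ≠ 0 because the supports are disjoint):

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1.0, size=200)
theta = 0.3
p0 = np.stack([np.zeros_like(z), z], axis=1)         # samples of (0, Z)
pt = np.stack([np.full_like(z, theta), z], axis=1)   # samples of (theta, Z)
cost = np.linalg.norm(p0[:, None, :] - pt[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)             # exact optimal transport plan
print(cost[rows, cols].mean())                       # prints 0.3 = |theta|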
Example <ref> gives us a case where we can learn a
probability distribution over a low dimensional manifold by doing
gradient descent on the EM distance. This cannot be done with the
other distances and divergences because the resulting loss function is
not even continuous. Although this simple example features
distributions with disjoint supports, the same conclusion holds when
the supports have a non empty intersection contained in a set of
measure zero. This happens to be the case when two low dimensional
manifolds intersect in general position <cit.>.
Since the Wasserstein distance is much weaker than the JS distance[
The argument for why this happens, and indeed
how we arrived to the idea that Wasserstein is what
we should really be optimizing is displayed in Appendix
<ref>. We strongly encourage the interested
reader who is not afraid of the mathematics to go through it.],
we can now ask whether W(_r, _θ) is
a continuous loss function on θ
under mild assumptions. This, and more, is
true, as we now state and prove.
Let _r be a fixed distribution over 𝒳.
Let Z be a random variable (e.g. Gaussian) over another
space 𝒵. Let g: 𝒵×^d →𝒳
be a function, that will be denoted g_θ(z) with z the first coordinate
and θ the second. Let _θ denote the distribution of g_θ(Z).
Then,
* If g is continuous in θ, so is W(_r, _θ).
* If g is locally Lipschitz and satisfies regularity
assumption <ref>,
then W(_r, _θ) is continuous
everywhere, and differentiable almost everywhere.
* Statements 1-2 are false for the Jensen-Shannon divergence JS(_r, _θ)
and all the KLs.
See Appendix <ref>
The following corollary tells us that learning by minimizing
the EM distance makes sense (at least in theory) with neural networks.
Let g_θ be any feedforward neural network[By a
feedforward neural network we mean a function composed by
affine transformations and pointwise nonlinearities which are
smooth Lipschitz
functions (such as the sigmoid, tanh, elu, softplus, etc).
Note: the statement is also true for rectifier
nonlinearities but the proof is more
technical (even though very similar) so we omit it.] parameterized by θ,
and p(z) a prior over z such that _z ∼ p(z)[z] < ∞ (e.g.
Gaussian, uniform, etc.). Then assumption
<ref> is satisfied and therefore W(_r, _θ)
is continuous everywhere and differentiable almost everywhere.
See Appendix <ref>
All this shows that EM is a much more sensible
cost function for our problem than at least the Jensen-Shannon
divergence. The following theorem describes the relative strength of
the topologies induced by these distances and divergences, with KL the strongest,
followed by JS and TV, and EM the weakest.
Let be a distribution on a compact space and
(_n)_n ∈ be a sequence
of distributions on . Then, considering
all limits as n →∞,
* The following statements are equivalent
* δ(_n, ) → 0
with δ the total variation distance.
* JS(_n,) → 0 with
JS the Jensen-Shannon divergence.
* The following statements are equivalent
* W(_n, ) → 0.
* _n where represents
convergence in distribution for random variables.
* KL(_n ) → 0 or KL(_n) → 0 imply
the statements in (1).
* The statements in (1) imply the statements in (2).
See Appendix <ref>
This highlights the fact that
the KL, JS, and TV distances are not sensible
cost functions when learning distributions
supported by low dimensional manifolds.
However the EM distance is sensible
in that setup. This obviously leads us to the next section
where we introduce a practical approximation
of optimizing the EM distance.
§ WASSERSTEIN GAN
Again, Theorem <ref> points to the fact that
W(_r, _θ) might have nicer properties
when optimized than JS(_r,_θ).
However, the infimum in
(<ref>) is highly intractable. On the other hand,
the Kantorovich-Rubinstein duality <cit.>
tells us that
W(_r, _θ) = sup_f_L ≤ 1_x ∼_r
[f(x)] - _x ∼_θ[f(x)]
where the supremum is over all the 1-Lipschitz functions
f: →. Note that if we replace f_L ≤ 1
for f_L ≤ K (consider K-Lipschitz for some constant K), then
we end up with K · W(_r, _g). Therefore, if we have a
parameterized family of functions {f_w}_w ∈𝒲
that are all K-Lipschitz for some K, we could consider
solving the problem
max_w ∈𝒲_x ∼_r[f_w(x)] -
_z ∼ p(z) [f_w(g_θ(z)]
and if the supremum in (<ref>) is attained
for some w ∈𝒲 (a pretty strong assumption
akin to what's assumed when proving consistency of an
estimator), this process would
yield a calculation of W(_r, _θ) up to
a multiplicative constant. Furthermore, we could consider
differentiating W(_r, _θ) (again, up to a constant)
by back-proping through equation (<ref>) via
estimating _z ∼ p(z)[∇_θ f_w(g_θ(z))].
While this is all intuition, we now prove that this process
is principled under the optimality assumption.
Let _r be any distribution. Let _θ be
the distribution of g_θ(Z) with Z a random
variable with density p and g_θ
a function satisfying assumption <ref>.
Then, there is a solution f: →
to the problem
max_f_L ≤ 1_x ∼_r[f(x)] -
_x ∼_θ [f(x)]
and we have
∇_θ W(_r, _θ)
= -_z ∼ p(z)[∇_θ f(g_θ(z))]
when both terms are well-defined.
See Appendix <ref>
Now comes the question of finding the function f that
solves the maximization problem in equation (<ref>).
To roughly approximate
this, something that we can do is train a neural network
parameterized with weights w lying in a compact space
𝒲 and then backprop through
_z ∼ p(z)[∇_θ f_w(g_θ(z))], as we
would do with a typical GAN. Note that the fact that
𝒲 is compact implies that all the functions
f_w will be K-Lipschitz for some K that only depends
on 𝒲 and not the individual weights, therefore
approximating (<ref>) up to an irrelevant scaling factor
and the capacity of the `critic' f_w. In order to have parameters w
lie in a compact space, something simple we can do is clamp
the weights to a fixed box (say 𝒲 = [-0.01,0.01]^l) after each
gradient update. The Wasserstein Generative Adversarial
Network (WGAN) procedure is described in Algorithm <ref>.
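For concreteness, a minimal PyTorch sketch of one pass of this procedure (a sketch, not a reference implementation; `generator` and `critic` stand for arbitrary networks, `data_loader` is assumed to yield batches of real samples as tensors, and the hyper-parameter values follow the defaults of Algorithm 1; in practice the optimizers would persist across epochs):

import torch

def train_wgan_epoch(generator, critic, data_loader, z_dim,
                     n_critic=5, clip=0.01, lr=5e-5):
    opt_c = torch.optim.RMSprop(critic.parameters(), lr=lr)
    opt_g = torch.optim.RMSprop(generator.parameters(), lr=lr)
    for t, x in enumerate(data_loader):
        # critic step: ascend E_x[f_w(x)] - E_z[f_w(g_theta(z))]
        z = torch.randn(x.size(0), z_dim)
        loss_c = critic(generator(z).detach()).mean() - critic(x).mean()
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in critic.parameters():     # clamp w to the box [-c, c]^l
            p.data.clamp_(-clip, clip)
        if (t + 1) % n_critic == 0:       # one generator step per n_critic critic steps
            z = torch.randn(x.size(0), z_dim)
            loss_g = -critic(generator(z)).mean()
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()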
Weight clipping is a clearly terrible way to enforce a Lipschitz constraint.
If the clipping parameter is large, then it can take a long time
for any weights to reach their limit, thereby making it harder
to train the critic till optimality. If the clipping is small, this
can easily lead to vanishing gradients when the number of layers is big,
or batch normalization is not used (such as in RNNs). We experimented
with simple variants (such as projecting the weights to a sphere) with
little difference, and we stuck with weight clipping due to its simplicity
and already good performance. However, we do leave the topic of
enforcing Lipschitz constraints in a neural network setting for further
investigation, and we actively encourage interested researchers to improve
on this method.
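The two options discussed above, box clipping and projecting the weights onto a sphere, in PyTorch-style code (a sketch; `critic` is assumed to be an nn.Module, and the sphere radius is an arbitrary illustrative value):

def clip_weights(critic, c=0.01):
    # box clipping, as in Algorithm 1: w <- clip(w, -c, c) after each gradient update
    for p in critic.parameters():
        p.data.clamp_(-c, c)

def project_weights(critic, radius=1.0):
    # the "projecting the weights to a sphere" variant mentioned above
    for p in critic.parameters():
        n = p.data.norm()
        if n > radius:
            p.data.mul_(radius / n)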
The fact that the EM distance is continuous and differentiable a.e.
means that we can (and should) train
the critic till optimality. The argument is simple: the more we train
the critic, the more reliable the gradient of the Wasserstein estimate
we get, which is actually useful given that the Wasserstein distance
is differentiable almost everywhere.
For the JS, as the discriminator gets better the gradients get
more reliable but the true gradient is 0 since the JS is locally
saturated and we get vanishing gradients,
as can be seen in <ref> of this paper
and Theorem 2.4 of <cit.>. In <ref>
we show a proof of concept of this, where we train
a GAN discriminator and a WGAN critic till optimality.
The discriminator learns very quickly to distinguish between
fake and real, and as expected provides no reliable gradient
information. The critic, however, can't saturate, and converges
to a linear function that gives remarkably clean gradients everywhere.
The fact that we constrain the weights limits the possible
growth of the function to be at most linear in different parts
of the space, forcing the optimal critic to have this behaviour.
Perhaps more importantly, the fact that we can train the critic
till optimality makes it impossible to collapse modes when we do.
This is because mode collapse arises when the optimal generator
for a fixed discriminator becomes a sum of deltas on the points
to which the discriminator assigns the highest values, as observed by <cit.> and
highlighted in <cit.>.
In the following section we display the practical benefits
of our new algorithm, and we provide an in-depth comparison
of its behaviour and that of traditional GANs.
§ EMPIRICAL RESULTS
We run experiments on image generation using our Wasserstein-GAN algorithm
and show that there are significant practical benefits to using it over the
formulation used in standard GANs.
We claim two main benefits:
* a meaningful loss metric that correlates with the generator's
convergence and sample quality
* improved stability of the optimization process
§.§ Experimental Procedure
We run experiments on image generation. The target distribution to learn is the
LSUN-Bedrooms dataset <cit.> – a collection of natural images of
indoor bedrooms. Our baseline comparison is DCGAN <cit.>,
a GAN with a convolutional architecture trained with the standard GAN procedure
using the -log D trick <cit.>.
The generated samples are 3-channel images of 64x64 pixels in size.
We use the hyper-parameters specified in Algorithm <ref> for
all of our experiments.
§.§ Meaningful loss metric
Because the WGAN algorithm attempts to train the critic f (lines 2–8 in Algorithm <ref>)
relatively well before each generator update (line 10 in Algorithm <ref>),
the loss function at this point is an estimate of the EM distance, up to constant
factors related to the way we constrain the Lipschitz constant of f.
Our first experiment illustrates how this estimate correlates well
with the quality of the generated samples. Besides the convolutional
DCGAN architecture, we also ran experiments where we replace the
generator or both the generator and the critic by 4-layer
ReLU-MLP with 512 hidden units.
<ref> plots the evolution of the WGAN estimate
(<ref>) of the EM distance during WGAN training for all three
architectures. The plots clearly show that these curves correlate
well with the visual quality of the generated samples.
To our knowledge, this is the first time in the GAN literature that such a
property has been shown, with the loss of the GAN exhibiting properties of
convergence. This property is extremely useful when doing research in
adversarial networks as one does not need to stare at the generated
samples to figure out failure modes and to gain information on which
models are doing better over others.
However, we do not claim that this is a new method to quantitatively
evaluate generative models yet. The constant scaling
factor that depends on the critic's architecture means it's hard
to compare models with different critics. Moreover,
in practice the fact that the critic doesn't have
infinite capacity makes it hard to know just how
close to the EM distance our estimate really is.
This being said, we have successfully used the loss metric to
validate our experiments repeatedly and without failure, and
we see this as a huge improvement in training GANs which
previously had no such facility.
In contrast, <ref> plots the evolution of the GAN estimate
of the JS distance during GAN training. More precisely, during GAN training,
the discriminator is trained to maximize
L(D, g_θ) = _x ∼_r[log D(x)] + _x ∼_θ[log(1 - D(x))]
which is a lower bound of 2 JS(_r,_θ) - 2 log 2.
In the figure, we plot the quantity 1/2 L(D, g_θ) + log 2,
which is a lower bound of the JS distance.
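In code, the plotted quantity can be read directly off the discriminator outputs (a sketch; d_real and d_fake are assumed to be batches of D(x) and D(g_θ(z)) values in (0,1)):

import math
import torch

def js_lower_bound(d_real, d_fake):
    # (1/2) L(D, g_theta) + log 2, with L the discriminator objective above
    L = torch.log(d_real).mean() + torch.log(1.0 - d_fake).mean()
    return 0.5 * L + math.log(2.0)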
This quantity clearly correlates poorly with sample quality. Note
also that the JS estimate usually stays constant or goes up instead
of going down. In fact
it often remains very close to log2≈0.69 which is the highest
value taken by the JS distance. In other words, the JS distance
saturates, the discriminator has zero loss, and the generated samples
are in some cases meaningful (DCGAN generator, top right plot) and in
other cases collapse to a single nonsensical image <cit.>.
This last phenomenon has been theoretically explained in
<cit.> and highlighted in <cit.>.
When using the -log D trick <cit.>,
the discriminator loss and the generator loss are different.
Figure <ref> in Appendix <ref>
reports the same plots for GAN training, but using
the generator loss instead of the discriminator loss.
This does not change the conclusions.
Finally, as a negative result, we report that WGAN training becomes unstable
at times when one uses a momentum based optimizer such as Adam <cit.> (with β_1 > 0)
on the critic, or
when one uses high learning rates. Since the loss for the critic is nonstationary, momentum
based methods seemed to perform worse. We identified
momentum as a potential cause because, as the loss blew up and samples got worse,
the cosine between the Adam step and the gradient usually turned negative. The
only places where this cosine was negative were in these situations of
instability. We therefore switched to RMSProp <cit.> which is known
to perform well even on very nonstationary problems <cit.>.
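The diagnostic mentioned above can be reproduced by comparing the realized optimizer step with the plain descent direction (a sketch; params_before holds copies of the parameters snapshotted just before opt.step(), params_after the parameters afterwards, and grads the corresponding .grad tensors):

import torch

def step_gradient_cosine(params_before, params_after, grads):
    # cosine between the optimizer's step and the plain descent direction -gradient
    step = torch.cat([(a - b).flatten() for a, b in zip(params_after, params_before)])
    descent = torch.cat([-g.flatten() for g in grads])
    return torch.dot(step, descent) / (step.norm() * descent.norm() + 1e-12)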
§.§ Improved stability
One of the benefits of WGAN is that it allows us to train the critic
till optimality. When the critic is trained to completion, it simply
provides a loss to the generator that we can train as any other neural
network. This tells us that we no longer need to carefully balance the
generator's and the discriminator's capacities. The better the critic, the higher
quality the gradients we use to train the generator.
We observe that WGANs are much more robust than GANs when one varies
the architectural choices for the generator. We illustrate this
by running experiments on three generator architectures:
(1) a convolutional DCGAN generator, (2) a convolutional DCGAN generator
without batch normalization and with a constant number of filters,
and (3) a 4-layer ReLU-MLP with 512 hidden units.
The last two are known to perform very poorly with GANs.
We keep the convolutional DCGAN architecture for
the WGAN critic or the GAN discriminator.
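For reference, the third generator in PyTorch-style code (a sketch; the latent dimension 100 and the Tanh output over 3x64x64 pixels are assumptions following common DCGAN practice, not values stated here):

import torch.nn as nn

def mlp_generator(z_dim=100, out_dim=3 * 64 * 64, hidden=512):
    # 4-layer ReLU-MLP with 512 hidden units; the output is reshaped to a 64x64 RGB image
    return nn.Sequential(
        nn.Linear(z_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim), nn.Tanh(),
    )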
Figures <ref>, <ref>, and <ref> show
samples generated for these three architectures using both the WGAN
and GAN algorithms. We refer the reader to Appendix
<ref> for full sheets of generated samples.
Samples were not cherry-picked.
In no experiment did we see evidence of mode collapse for the WGAN algorithm.
§ RELATED WORK
There has been a number of works on the so-called
Integral Probability Metrics (IPMs) <cit.>.
Given a set of functions from 𝒳
to , we can define
d_(_r, _θ) = sup_f ∈_x ∼_r[f(x)] - _x ∼_θ[f(x)]
as an integral probability metric associated with the
function class . It is easily verified that
if for every f ∈ we have -f ∈
(such as all examples we'll consider), then
d_ is nonnegative, satisfies the triangular
inequality, and is symmetric. Thus, d_ is
a pseudometric over Prob().
While IPMs might seem to share a similar formula,
as we will see, different classes of functions
can yield radically different metrics.
* By the Kantorovich-Rubinstein duality <cit.>,
we know that W(_r, _θ) = d_(_r,
_θ) when is the set of 1-Lipschitz
functions. Furthermore, if is the set
of K-Lipschitz functions, we get K · W(_r,
_θ) = d_(_r, _θ).
* When is the set of all measurable
functions bounded between -1 and 1 (or all
continuous functions between -1 and 1), we
retrieve d_(_r, _θ) = δ(_r,
_θ) the total variation distance <cit.>.
This already tells us that going from 1-Lipschitz
to 1-Bounded functions drastically changes the
topology of the space, and the regularity
of d_(_r, _θ) as a loss
function (as by Theorems <ref>
and <ref>).
* Energy-based GANs (EBGANs) <cit.>
can be thought of
as the generative approach to the total variation
distance. This connection is stated and
proven in depth in Appendix <ref>.
At the core of the connection is that the discriminator
will play the role of f maximizing equation
(<ref>) while its only restriction is
being between 0 and m for
some constant m. This will yield the same
behaviour as being restricted to be between -1
and 1 up to a constant scaling factor irrelevant
to optimization. Thus, when the discriminator
approaches optimality the cost for the generator
will approximate the total variation distance
δ(_r, _θ).
Since the total variation distance displays the
same regularity as the JS, it can be seen that
EBGANs will suffer from the same problems
of classical GANs regarding not being able
to train the discriminator till optimality
and thus limiting itself to very imperfect
gradients.
* Maximum Mean Discrepancy (MMD) <cit.> is
a specific case of integral probability metrics when
= {f ∈ℋ: f_∞≤ 1} for
ℋ some Reproducing Kernel Hilbert Space (RKHS)
associated with a given kernel k: ×→.
As proved in <cit.>, we know that MMD is a proper
metric and not only a pseudometric when the kernel is universal.
In the specific case where
ℋ = L^2(, m) for m the normalized Lebesgue
measure on , we know that {f ∈ C_b(), f_∞≤ 1}
will be contained in , and therefore d_(_r, _θ)
≤δ(_r, _θ) so the regularity of the MMD distance
as a loss function will be at least as bad as the one of the total
variation. Nevertheless this is a very extreme case, since we would
need a very powerful kernel to approximate the whole L^2. However,
even Gaussian kernels are able to detect tiny noise patterns
as recently evidenced by <cit.>. This points to the
fact that especially with low bandwidth kernels, the distance
might be close to a saturating regime similar to that of the total
variation or the JS. This obviously doesn't need to
be the case for every kernel, and figuring out how and which different MMDs
are closer to Wasserstein or total variation distances is an interesting
topic of research.
The great aspect of MMD is that via the kernel trick there is no need
to train a separate network to maximize equation (<ref>) for the ball
of a RKHS. However, this has the disadvantage that evaluating the MMD distance
has computational cost that grows quadratically with the amount of samples
used to estimate the expectations in (<ref>). This last point
makes MMD have limited scalability, and is sometimes inapplicable to
many real life applications because of it. There are estimates with
linear computational cost for the MMD <cit.> which
in a lot of cases makes MMD very useful, but they also have worse sample complexity. (A minimal sketch of the quadratic-cost estimator is given after this list.)
* Generative Moment Matching Networks (GMMNs) <cit.>
are the generative counterpart of MMD. By backproping through
the kernelized formula for equation (<ref>), they directly
optimize d_MMD(_r, _θ) (the IPM when is
as in the previous item). As mentioned, this has the advantage
of not requiring a separate network to approximately maximize
equation (<ref>). However, GMMNs have enjoyed limited applicability.
Partial explanations for their lack of success are the quadratic cost as a function
of the number of samples and vanishing gradients for low-bandwidth kernels.
Furthermore, it may be possible that some
kernels used in practice are unsuitable for capturing very complex
distances in high dimensional sample spaces such as natural images.
This is properly justified by the fact that <cit.>
shows that for the typical Gaussian MMD test to be reliable (as in its power
as a statistical test approaching 1), we need the number of
samples to grow linearly with the number of dimensions. Since
the MMD computational cost grows quadratically with the number
of samples in the batch used to estimate equation (<ref>),
this makes the cost of having a reliable estimator
grow quadratically with the number of dimensions, which makes it
impractical for high dimensional problems. Indeed, for
something as standard as 64x64 images, we would need minibatches
of size at least 4096 (without taking into account the constants
in the bounds of <cit.> which would make this number
substantially larger) and a total cost per iteration of
4096^2, over 5 orders of magnitude more than a
GAN iteration when using the standard batch size of 64.
That being said, these numbers can be a bit unfair to the MMD,
in the sense that we are comparing empirical sample complexity of
GANs with the theoretical sample complexity of MMDs, which tends
to be worse. However, in the original GMMN paper <cit.> they
indeed used a minibatch of size 1000, much larger than the standard
32 or 64 (even though this incurred quadratic computational cost).
While estimates that have linear computational cost as a function of the number
of samples exist <cit.>,
they have worse sample complexity, and to the best of
our knowledge they haven't been yet applied in a generative context
such as in GMMNs.
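As promised in the MMD item above, a minimal sketch of the quadratic-cost estimator with a Gaussian kernel (the bandwidth sigma is a free parameter, and this is the biased V-statistic form; the pairwise computation below is exactly the O(n^2) cost discussed):

import numpy as np

def mmd2_gaussian(x, y, sigma=1.0):
    # biased estimate of MMD^2 with k(a, b) = exp(-||a - b||^2 / (2 sigma^2))
    def gram(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)   # O(n^2) pairwise distances
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()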
On another great line of research, the recent work of <cit.>
has explored the use of Wasserstein distances in the context of learning
for Restricted Boltzmann Machines for discrete spaces. The motivations
at a first glance might seem quite different, since the manifold setting is restricted
to continuous spaces and in finite discrete spaces the weak and strong
topologies (the ones of W and JS respectively) coincide. However, in the
end there is more in common than not about our motivations. We both
want to compare distributions in a way that leverages the geometry of the
underlying space, and Wasserstein allows us to do exactly that.
Finally, the work of <cit.> shows new algorithms for calculating
Wasserstein distances between different distributions. We believe this direction
is quite important, and perhaps could lead to new ways of evaluating generative models.
§ CONCLUSION
We introduced an algorithm that we deemed WGAN, an alternative
to traditional GAN training. In this new model, we showed that we can
improve the stability of learning, get rid of problems like mode collapse,
and provide meaningful learning curves useful for debugging and hyperparameter
searches. Furthermore, we showed that the corresponding optimization problem
is sound, and provided extensive theoretical work highlighting the
deep connections to other distances between distributions.
§ ACKNOWLEDGMENTS
We would like to thank
Mohamed Ishmael Belghazi,
Emily Denton,
Ian Goodfellow,
Ishaan Gulrajani,
Alex Lamb,
David Lopez-Paz,
Eric Martin,
Maxime Oquab,
Aditya Ramesh,
Ronan Riochet,
Uri Shalit,
Pablo Sprechmann,
Arthur Szlam,
Ruohan Wang,
for helpful comments and advice.
plain
§ WHY WASSERSTEIN IS INDEED WEAK
We now introduce our notation. Let 𝒳⊆^d
be a compact set (such as [0,1]^d the space of images). We define
Prob() to be the space of probability measures over
. We note
C_b() = {f: →, f is continuous and bounded}
Note that if f ∈ C_b(),
we can define f_∞ = max_x ∈ |f(x)|, since
f is bounded. With this norm, the space (C_b(), ·_∞)
is a normed vector space. As for any normed vector space, we can define
its dual
C_b()^* = {ϕ: C_b() →, ϕ is linear
and continuous}
and give it the dual norm ϕ = sup_f ∈ C_b(), f_∞≤ 1 |ϕ(f)|.
With these definitions, (C_b()^*, ·) is another normed space.
Now let μ be a signed measure over , and let us define
the total variation distance
μ_TV = sup_A ⊆ |μ(A)|
where the supremum is taken over all Borel sets in .
Since the total variation is a norm, then if we have _r
and _θ two probability distributions over ,
δ(_r, _θ) := _r - _θ_TV
is a distance in Prob() (called the total variation
distance).
We can consider
Φ: (Prob(), δ) → (C_b()^*, ·)
where Φ()(f) := _x ∼[f(x)] is a linear function
over C_b(). The Riesz Representation theorem (<cit.>,
Theorem 10) tells us that Φ is an isometric immersion. This
tells us that we can effectively consider Prob() with
the total variation distance as a subset of C_b()^* with
the norm distance. Thus, just to accentuate it one more time,
the total variation over Prob() is exactly
the norm distance over C_b()^*.
Let us stop for a second and analyze what all this technicality meant.
The main thing to carry is that we introduced a distance δ
over probability distributions. When looked as a distance over
a subset of C_b()^*, this distance gives the norm topology.
The norm topology is very strong. Therefore, we can expect that
not many functions θ↦_θ will be continuous
when measuring distances between distributions with δ. As
we will show later in Theorem <ref>, δ gives the same topology
as the Jensen-Shannon divergence, pointing to the fact that the
JS is a very strong distance, and is thus more prone to
give a discontinuous loss function.
Now, all dual spaces (such as C_b()^* and thus
Prob()) have a strong topology (induced by the norm),
and a weak* topology. As the name suggests, the weak* topology
is much weaker than the strong topology. In the case of
Prob(), the strong topology is given by the
total variation distance, and the weak* topology is given
by the Wasserstein distance (among others) <cit.>.
§ ASSUMPTION DEFINITIONS
Let g: 𝒵×^d →𝒳 be
locally Lipschitz between finite dimensional vector spaces.
We will denote g_θ(z) it's evaluation on coordinates
(z, θ). We say that g satisfies assumption <ref>
for a certain probability distribution p over 𝒵
if there are local Lipschitz constants L(θ, z) such
that
_z ∼ p[L(θ, z)] < +∞
§ PROOFS OF THINGS
Let θ and θ' be two parameter vectors in ^d. Then, we
will first attempt to bound W(_θ, _θ'), from where the
theorem will come easily. The main element of the proof is the use of the
coupling γ, the distribution of the joint (g_θ(Z), g_θ'(Z)),
which clearly has γ∈Π(_θ, _θ').
By the definition of the Wasserstein distance, we have
W(_θ, _θ') ≤∫_×x - yγ
= _(x, y) ∼γ [x - y ]
= _z[g_θ(z) - g_θ'(z)]
If g is continuous in θ, then g_θ(z) →_θ→θ' g_θ'(z),
so g_θ - g_θ'→ 0 pointwise as functions of z. Since
is compact, the distance of any two elements in it has to be uniformly bounded by
some constant M, and therefore g_θ(z) - g_θ'(z)≤ M for
all θ and z uniformly. By the bounded convergence theorem, we therefore
have
W(_θ, _θ') ≤_z[g_θ(z) - g_θ'(z)] →_θ→θ' 0
Finally, we have that
|W(_r, _θ) - W(_r, _θ') | ≤ W(_θ, _θ') →_θ→θ' 0
proving the continuity of W(_r, _θ).
Now let g be locally Lipschitz. Then, for a given
pair (θ, z) there is a constant L(θ, z)
and an open set U such that (θ, z) ∈ U,
such that for every (θ', z') ∈ U we have
g_θ(z) - g_θ'(z')≤ L(θ, z) (θ - θ' + z - z')
By taking expectations with z'=z we get
_z[g_θ(z) - g_θ'(z)]
≤θ - θ'_z[L(θ, z)]
whenever (θ', z) ∈ U. Therefore, we can define
U_θ = {θ' | (θ', z) ∈ U}. It's easy
to see that since U was open, U_θ is as well.
Furthermore, by assumption <ref>, we can
define L(θ) = _z[L(θ, z)] and achieve
|W(_r, _θ) - W(_r, _θ')|
≤ W(_θ, _θ')
≤ L(θ) θ - θ'
for all θ' ∈ U_θ, meaning that W(_r, _θ)
is locally Lipschitz. This obviously implies that
W(_r, _θ) is everywhere continuous, and
by Rademacher's theorem we know it has to be differentiable
almost everywhere.
The counterexample for item 3 of the Theorem is indeed
Example <ref>.
We begin with the case of smooth nonlinearities. Since g is
C^1 as a function of (θ, z) then for any
fixed (θ, z) we have L(θ, z) ≤∇_θ, zg_θ(z) + ϵ
is an acceptable local Lipschitz constant for all ϵ > 0.
Therefore, it suffices to prove
_z ∼ p(z)[∇_θ, z g_θ(z)]< +∞
If H is the number of layers we know
that ∇_z g_θ(z) = ∏_k=1^H W_k D_k where
W_k are the weight matrices and D_k is are the diagonal Jacobians
of the nonlinearities. Let f_i:j be the application
of layers i to j inclusively (e.g. g_θ = f_1:H).
Then, ∇_W_k g_θ(z) = ((∏_i=k+1^H W_i D_i
) D_k ) f_1:k-1(z).
We recall that if L is the Lipschitz constant
of the nonlinearity, then D_i≤ L and
f_1:k-1(z)≤z L^k-1∏_i=1^k-1W_i. Putting this together,
∇_z, θ g_θ(z) ≤∏_i=1^H W_i D_i
+ ∑_k=1^H ((∏_i=k+1^H W_i D_i ) D_k )
f_1:k-1(z)
≤ L^H ∏_i=H^K W_i + ∑_k=1^H z L^H
(∏_i=1^k-1W_i)
(∏_i=k+1^HW_i)
If C_1(θ) = L^H(∏_i=1^HW_i) and
C_2(θ) = ∑_k=1^H L^H
(∏_i=1^k-1W_i)
(∏_i=k+1^HW_i) then
_z ∼ p(z)[∇_θ, z g_θ(z)]
≤ C_1(θ) + C_2(θ) _z ∼ p(z)[z] < +∞
finishing the proof
*
* (δ(_n, ) → 0 ⇒ JS(_n,) → 0) — Let _m be the mixture distribution _m = 1/2_n
+ 1/2 (note that _m depends on n).
It is easily verified that δ(_m, _n)
≤δ(_n, ), and in particular this tends to 0 (as
does δ(_m, )). We now show this for completeness.
Let μ be a signed measure,
we define μ_TV = sup_A ⊆𝒳 |μ(A)|.
for all Borel sets A.
In this case,
δ(_m, _n) = _m - _n _TV
= 1/2 + 1/2_n - _n _TV
= 1/2 - _n _TV
= 1/2δ(_n, ) ≤δ(_n, )
Let f_n = d _n/d _m be the Radon-Nikodym
derivative between _n and the mixture. Note that by
construction for every Borel set A we have _n(A) ≤
2 _m(A). If A = {f_n > 3} then we get
_n(A) = ∫_A f_n _m ≥ 3 _m(A)
which implies _m(A) = 0. This means that f_n
is bounded by 3 _m(and therefore _n and
)-almost everywhere. We could have done this
for any constant larger than 2 but for our
purposes 3 will suffice.
Let ϵ > 0 fixed,
and A_n = {f_n > 1 + ϵ}. Then,
_n(A_n) = ∫_A_n f_n _m ≥ (1 + ϵ) _m(A_n)
Therefore,
ϵ_m(A_n) ≤_n(A_n) - _m(A_n)
≤ |_n(A_n) - _m(A_n)|
≤δ(_n, _m)
≤δ(_n, ).
Which implies _m(A_n) ≤1/ϵδ(_n, ). Furthermore,
_n(A_n) ≤_m(A_n) + |_n(A_n) - _m(A_n)|
≤1/ϵδ(_n, ) + δ(_n, _m)
≤1/ϵδ(_n, ) + δ(_n, )
≤(1/ϵ + 1) δ(_n, )
We now can see that
KL(_n _m) = ∫log(f_n) _n
≤log(1 + ϵ) + ∫_A_nlog(f_n) _n
≤log(1 + ϵ) + log(3) _n(A_n)
≤log(1 + ϵ) + log(3) (1/ϵ + 1) δ(_n, )
Taking limsup we get 0 ≤lim sup KL(_n _m) ≤log(1 + ϵ)
for all ϵ > 0, which means KL(_n _m) → 0.
In the same way, we can define g_n = d /d _m, and
2 _m({g_n > 3}) ≥({g_n > 3}) ≥ 3 _m({g_n > 3})
meaning that _m({g_n > 3}) = 0 and therefore
g_n is bounded by 3 almost everywhere for _n, _m
and . With the same calculation, B_n = {g_n > 1 + ϵ} and
(B_n) = ∫_B_n g_n _m ≥ (1 + ϵ) _m(B_n)
so _m(B_n) ≤1/ϵδ(, _m) → 0, and therefore
(B_n) → 0. We can now show
KL(_m) = ∫log(g_n)
≤log(1 + ϵ) + ∫_B_nlog (g_n)
≤log(1 + ϵ) + log(3) (B_n)
so we achieve 0 ≤lim sup KL(_m) ≤log(1 + ϵ)
and then KL(_m) → 0. Finally, we conclude
JS(_n,) = 1/2 KL(_n _m) + 1/2 KL(_m) → 0
* (JS(_n,) → 0 ⇒δ(_n, ) → 0) — by a simple
application of the triangular and Pinsker's inequalities we get
δ(_n, ) ≤δ(_n, _m) + δ(, _m)
≤√(1/2 KL(_n _m)) + √(1/2 KL(_m))
≤ 2 √(JS(_n,))→ 0
* This is a long known fact that W metrizes
the weak* topology of (C(), ·_∞)
on Prob(), and by definition this
is the topology of convergence in distribution.
A proof of this can be found (for example) in <cit.>.
* This is a straightforward application of Pinsker's inequality
δ(_n, ) ≤√(1/2 KL(_n ))→ 0
δ(, _n) ≤√(1/2 KL(_n))→ 0
* This is trivial by recalling the fact that δ and W give the
strong and weak* topologies on the dual of (C(), ·_∞)
when restricted to Prob().
Let us define
V(f̃, θ) = _x ∼_r[f̃(x)] - _x ∼_θ [f̃(x)]
= _x ∼_r[f̃(x)] - _z ∼ p(z) [f̃(g_θ(z))]
where f̃ lies in = {f̃: → , f̃∈ C_b(), f̃_L ≤ 1} and
θ∈^d.
Since is compact, we know
by the Kantorovich-Rubenstein duality <cit.> that
there is an f ∈ that attains the value
W(_r, _θ) = sup_f̃∈ V(f̃, θ) = V(f, θ)
Let us define X^*(θ) = {f ∈: V(f, θ) = W(_r, _θ)}. By
the above point we know then that X^*(θ) is non-empty. We know
by a simple envelope theorem (<cit.>, Theorem 1) that
∇_θ W(_r, _θ) = ∇_θ V(f, θ)
for any f ∈ X^*(θ) when both terms are well-defined.
Let f ∈ X^*(θ), which we know exists
since X^*(θ) is non-empty for all θ. Then, we get
∇_θ W(_r, _θ) = ∇_θ V(f, θ)
= ∇_θ[ _x ∼_r[f(x)] - _z ∼ p(z)[f(g_θ(z))]
= -∇_θ_z ∼ p(z)[f(g_θ(z))]
under the condition that the first and last terms are well-defined.
The rest of the proof will be dedicated to show that
-∇_θ_z ∼ p(z)[f(g_θ(z))] = -_z ∼ p(z)[∇_θ f(g_θ(z))]
when the right hand side is defined. Readers not interested in
such technicalities can skip the rest of the proof.
Since f ∈, we know that it is 1-Lipschitz.
Furthermore, g_θ(z) is locally
Lipschitz as a function of (θ, z). Therefore,
f(g_θ(z)) is locally Lipschitz on (θ, z)
with constants L(θ, z) (the same ones as g).
By Rademacher's Theorem, f(g_θ(z)) has to be
differentiable almost everywhere for (θ, z)
jointly. Rewriting this, the set A = {(θ, z):
f ∘ g is not differentiable} has
measure 0. By Fubini's Theorem, this implies that
for almost every θ the section A_θ
= {z: (θ, z) ∈ A} has measure 0.
Let's now fix a θ_0 such that
the measure of A_θ_0 is null (such
as when the right hand side of equation (<ref>)
is well defined). For this
θ_0 we have ∇_θ f(g_θ(z))|_θ_0
is well-defined for almost any z, and since p(z)
has a density, it is defined p(z)-a.e. By assumption
<ref> we know that
_z ∼ p(z) [∇_θ f(g_θ(z))|_θ_0]
≤_z ∼ p(z) [L(θ_0, z)] < + ∞
so _z ∼ p(z) [∇_θ f(g_θ(z))|_θ_0]
is well-defined for almost every θ_0. Now, we can see
_z∼ p(z)[f(g_θ(z))] - _z ∼ p(z)[f(g_θ_0(z))]
- ⟨ (θ - θ_0), _z ∼ p(z) [∇_θ f(g_θ(z))|_θ_0]
⟩/θ - θ_0
= _z ∼ p(z)[ f(g_θ(z)) - f(g_θ_0(z))
- ⟨(θ - θ_0), ∇_θ f(g_θ(z))|_θ_0⟩/θ - θ_0]
By differentiability, the term inside the integral converges p(z)-a.e. to 0
as θ→θ_0. Furthermore,
f(g_θ(z)) - f(g_θ_0(z))
- ⟨(θ - θ_0), ∇_θ f(g_θ(z))|_θ_0⟩/θ - θ_0
≤θ - θ_0 L(θ_0, z)
+ θ - θ_0∇_θ f(g_θ(z))|_θ_0/θ - θ_0
≤ 2 L(θ_0, z)
and since _z ∼ p(z)[2 L(θ_0, z)] < +∞ by assumption 1,
we get by dominated convergence that Equation <ref> converges
to 0 as θ→θ_0 so
∇_θ_z ∼ p(z) [f(g_θ(z))] = _z ∼ p(z) [∇_θ f(g_θ(z))]
for almost every θ, and in particular when the right
hand side is well defined.
Note that the mere existence of the left
hand side (meaning the differentiability a.e. of _z ∼ p(z)
[f(g_θ(z))]) had to be proven, which we just did.
§ ENERGY-BASED GANS OPTIMIZE TOTAL VARIATION
In this appendix we show that under an optimal discriminator,
energy-based GANs (EBGANs) <cit.> optimize the total variation
distance between the real and generated distributions.
Energy-based GANs are trained in a similar fashion to GANs, only under
a different loss function. They have a discriminator D who tries to
minimize
L_D(D, g_θ) = _x ∼_r[D(x)] + _z ∼ p(z)[[m - D(g_θ(z))]^+]
for some m > 0 and [x]^+ = max(0, x) and a generator network g_θ that's trained to minimize
L_G(D, g_θ) = _z ∼ p(z) [D(g_θ(z))] - _x ∼_r[D(x)]
Very importantly, D is constrained to be non-negative,
since otherwise the trivial solution for D would be to set everything to
arbitrarily low values. The original EBGAN paper used only _z ∼ p(z)[D(g_θ(z))] for
the loss of the generator, but this is obviously equivalent to our
definition since the term _x ∼_r[D(x)] does not depend
on θ for a fixed discriminator (such as when backproping to the
generator in EBGAN
training) and thus minimizing one or the other is equivalent.
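In PyTorch-style code the two losses above read as follows (a sketch; D is any network constrained to non-negative outputs, and the margin value m = 1 is an arbitrary illustrative choice):

import torch

def ebgan_losses(D, x_real, x_fake, m=1.0):
    # L_D = E_x[D(x)] + E_z[[m - D(g(z))]^+] and L_G = E_z[D(g(z))] - E_x[D(x)]
    d_real = D(x_real).mean()
    d_fake = D(x_fake).mean()
    loss_d = d_real + torch.relu(m - D(x_fake)).mean()   # [x]^+ = relu(x), per sample
    loss_g = d_fake - d_real
    return loss_d, loss_g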
We say that a measurable function D^*: 𝒳→ [0, +∞)
is optimal for g_θ (or _θ) if L_D(D^*, g_θ) ≤ L_D(D, g_θ) for all
other measurable functions D. We show that such a discriminator
always exists for any two distributions _r and _θ,
and that under such a discriminator, L_G(D^*, g_θ) is
proportional to δ(_r, _θ). As a simple corollary,
we get the fact that L_G(D^*, g_θ) attains its minimum
value if and only if δ(_r, _θ) is at its
minimum value, which is 0, and _r = _θ
(Theorems 1-2 of <cit.>).
Let _r be a the real data distribution over
a compact space 𝒳.
Let g_θ: 𝒵→𝒳 be a
measurable function (such as any neural network). Then,
an optimal discriminator D^* exists for _r and
_θ, and
L_G(D^*, g_θ) = m/2δ(_r, _θ)
First, we prove that there exists an optimal
discriminator. Let D: 𝒳→ [0, +∞)
be a measurable function, then D'(x) := min(D(x), m) is
also a measurable function, and L_D(D', g_θ) ≤ L_D(D, g_θ).
Therefore, a function D^*: 𝒳→ [0, +∞) is
optimal if and only if D^*' is. Furthermore, it is optimal if and
only if L_D(D^*, g_θ) ≤ L_D(D, g_θ) for all D: 𝒳→ [0, m]. We are then interested to see if there's an
optimal discriminator for the problem
min_0 ≤ D(x) ≤ m L_D(D, g_θ).
Note now that if 0 ≤ D(x) ≤ m we have
L_D(D, g_θ) = _x ∼_r[D(x)] + _z ∼ p(z)[[m - D(g_θ(z))]^+]
= _x ∼_r[D(x)] + _z ∼ p(z)[m - D(g_θ(z))]
= m + _x ∼_r[D(x)] - _z ∼ p(z)[D(g_θ(z))]
= m + _x ∼_r[D(x)] - _x ∼_θ[D(x)]
Therefore, we know that
inf_0 ≤ D(x) ≤ m L_D(D, g_θ) = m + inf_0 ≤ D(x) ≤ m_x ∼_r[D(x)] - _x ∼_θ[D(x)]
= m + inf_-m/2≤ D(x) ≤m/2_x ∼_r[D(x)] - _x ∼_θ[D(x)]
= m + m/2inf_-1 ≤ f(x) ≤ 1_x ∼_r[f(x)] - _x ∼_θ[f(x)]
The interesting part is that
inf_-1 ≤ f(x) ≤ 1_x ∼_r[f(x)] - _x ∼_θ[f(x)] = - δ(_r, _θ)
and there is an f^*: 𝒳→ [-1,1] such that _x ∼_r[f^*(x)] -
_x ∼_θ[f^*(x)] = - δ(_r, _θ). This is a long-known fact,
found for example in <cit.>, but we prove it later for completeness. In that case,
we define D^*(x) = m/2f^*(x) + m/2. We then have 0 ≤ D^*(x) ≤ m and
L_D(D^*, g_θ)
= m + _x ∼_r[D^*(x)] - _x ∼_θ[D^*(x)]
= m + m/2( _x ∼_r[f^*(x)] - _x ∼_θ[f^*(x)] )
= m - m/2δ(_r, _θ)
= inf_0 ≤ D(x) ≤ m L_D(D, g_θ)
This shows that D^* is optimal and L_D(D^*, g_θ) = m - m/2δ(_r, _θ). Furthermore,
L_G(D^*, g_θ) = _z ∼ p(z)[D^*(g_θ(z))] - _x ∼_r[D^*(x)]
= -L_D(D^*, g_θ) + m
= m/2δ(_r, _θ)
concluding the proof.
For completeness, we now show a proof for equation (<ref>)
and the existence of said f^* that attains the value of the infimum.
Take μ = _r - _θ,
which is a signed measure, and (P, Q) its Hahn decomposition.
Then, we can define f^* := 1_Q - 1_P.
By construction, then
_x ∼_r[f^*(x)] - _x ∼_θ[f^*(x)]
= ∫ f^* dμ = μ(Q) - μ(P)
= -(μ(P) - μ(Q)) = -‖μ‖_TV
= -‖_r - _θ‖_TV
= -δ(_r, _θ)
Furthermore, if f is bounded between -1 and 1, we get
|_x ∼_r[f(x)] - _x ∼_θ[f(x)] |
= |∫ f d_r - ∫ f d_θ|
= |∫ f dμ|
≤∫ |f| d|μ| ≤∫ 1 d|μ|
= |μ|(𝒳) = ‖μ‖_TV = δ(_r, _θ)
Since δ is positive, we can conclude _x ∼_r[f(x)] - _x ∼_θ[f(x)] ≥ -δ(_r, _θ).
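The following sketch works through equation (<ref>) on a discrete toy example, checking that the Hahn-decomposition witness f^* attains the infimum; the two distributions are arbitrary assumptions.
```python
# Discrete toy check: f* = 1_Q - 1_P attains E_r[f*] - E_theta[f*] = -||P_r - P_theta||_TV.
import numpy as np

p_r     = np.array([0.5, 0.3, 0.2])   # assumed "real" distribution P_r
p_theta = np.array([0.2, 0.3, 0.5])   # assumed "generated" distribution P_theta

mu = p_r - p_theta                    # signed measure mu = P_r - P_theta
tv = np.abs(mu).sum()                 # |mu|(X) = ||mu||_TV on a finite space

f_star = -np.sign(mu)                 # -1 on the positive set P, +1 on the negative set Q
gap = (p_r * f_star).sum() - (p_theta * f_star).sum()
print(gap, -tv)                       # the two numbers coincide
```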
§ GENERATOR'S COST DURING NORMAL GAN TRAINING
§ SHEETS OF SAMPLES
|
http://arxiv.org/abs/1701.07903v1 | 20170126234935 | Quantum dark soliton (qubits) in Bose Einstein condensates | [
"Muzzamal I. Shaukat",
"Eduardo V. Castro",
"Hugo Terças"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"quant-ph"
] |
CeFEMA, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
University of Engineering and Technology, Lahore (RCET Campus), Pakistan
muzzamalshaukat@gmail.com
CeFEMA, Instituto Superior Técnico, Universidade de Lisboa, Lisboa, Portugal
Instituto de Plasmas e Fusão Nuclear, Lisboa, Portugal
Instituto de Telecomunicações, Lisboa, Portugal
hugo.tercas@tecnico.ulisboa.pt
67.85.Hj 42.50.Lc 42.50.-p 42.50.Md
We study the possibility of using dark solitons in quasi one-dimensional
Bose-Einstein condensates to produce two-level systems (qubits) by exploiting the intrinsically nonlinear and coherent nature of the matter waves. We
calculate the soliton spectrum and the conditions for a qubit to exist. We also compute the coupling between the phonons and the solitons and investigate the emission rate of the qubit in that case. Remarkably, the qubit lifetime is estimated to be of the order of a few seconds, being only limited by the dark-soliton “death" due to quantum evaporation.
Quantum dark solitons as qubits in Bose-Einstein condensates
Hugo Terças
============================================================
§ INTRODUCTION
Quantum effects tend to disappear for macroscopic objects. Typically, quantum effects become smeared out into their classical averages, and therefore the manipulation of quantum states, relevant for quantum computation, becomes unsustainable at the macroscopic scale. However, Bose-Einstein condensates (BECs) constitute one important exception where quantum effects are perceptible on a macroscopic level. With the advent and rapid development of laser cooling and trapping of neutral atoms over the past decade, micron-sized atomic gases at ultralow temperatures are routinely formed in the laboratory <cit.>. Moreover, quantum optical techniques allow for an unprecedentedly versatile and precise control of internal degrees of freedom, making cold atoms one of the most prominent candidates to test complex aspects of strongly correlated matter and for applications in quantum information processing <cit.>.
Quantum information has been introduced in cold atom systems at various levels <cit.>. One way consists in defining a qubit (a two-state system) via two internal states of an atom. This approach, however, requires each atom to be addressed separately. A similar problem appears when the qubit is introduced via a set of spatially localized states (e.g. in adjacent wells of an optical lattice potential) of an atom or a BEC. The complication is due to the fact that the number of atoms in a BEC experiment significantly fluctuates from run to run. As a result, any qubit system dependent on the number of atoms becomes problematic. A second way of producing qubits in these systems relies on collective properties of ultracold atoms. Here, a two-state system can be formed by isolating a pair of macroscopic states that are set sufficiently far away from the multiparticle spectrum. At the same time, however, the energy gap between these lowest states must remain small enough to allow measurable dynamics <cit.>. Experiments performed in the double-well potential configuration are a pioneering example of such an approach <cit.>. Nevertheless, despite the appealing similarity with single-particle two-level states, double-well potentials fail to achieve a macroscopic superposition allowing for a measurable dynamics <cit.>. To overcome the superposition issue, a more recent proposal based on BEC superfluid current states in the ring geometry, analogous to the superconducting flux qubit <cit.>, has been discussed <cit.>. More recently, the concept of the phononic reservoir via the manipulation of the phononic degrees of freedom has pushed quantum information realizations to another level, comprising the dynamics of impurities immersed in BECs <cit.> and reservoir engineering to produce multipartite dark states <cit.>. Another important difference with respect to quantum optical systems is the possibility to use phononic reservoirs to test non-Markovian effects in many-body systems <cit.>.
gates has been recently proposed in Ref. <cit.>.
Another important family of macroscopic structures in BECs with potential applications in quantum information are the so-called dark solitons (DS). They consist of nonlinear localized depressions in a quasi-1D BEC that emerge due to a precise balance between the dispersive and nonlinear effects in the system <cit.>, being also ubiquitous in nonlinear optics <cit.>, shallow liquids <cit.>, and magnetic films <cit.>. Quasi one-dimensional BECs with repulsive interatomic interaction support the creation of dark solitons by various methods, such as imprinting a spatial phase distribution <cit.>, inducing density defects in the BEC <cit.>, and colliding two condensates <cit.>.
The stability and dynamics of DS in BECs have been a subject of intense research over the last decade <cit.>. Recent activity in the field involve studies on the collective aspects of the so-called soliton gases <cit.>, putting dark solitons as a good candidate to investigate many-body physics <cit.>.
In this paper, we exploit the intrinsic nonlinearity of quasi one-dimensional BECs to construct two-level states (qubits) with dark solitons. As we will show, thanks to the unique properties of the DS spectrum, perfectly isolated two-level states are possible to construct. As a result, a matter-wave qubit with an energy gap of a few kHz is achieved. The decoherence due to the presence of phonons (quantum fluctuations around the background density) plays the role of a proper quantum reservoir. Remarkably, due to their intrinsic slow-time dynamics, BEC phonons provide small decoherence rates of a few Hz, meaning that under typical experimental conditions DS qubits have a lifetime comparable to the lifetime of the BEC, being only limited by the soliton quantum diffusion (“evaporation"). As we show below, this effect is not critical and the qubit is still robust within the 100 ms-0.1 s time scale.
The paper is organized as follows. In Sec. II, the properties of a single DS in a quasi-1D BEC immersed in a second condensate are derived. We start with the set of coupled Gross-Pitaevskii equations and find under which conditions DSs can unequivocally define a two-level system (qubit). In Sec. III, we compute the coupling between phonons and DSs. Sec. IV discusses the Weisskopf-Wigner theory to determine the emission rate of the qubit. Some discussion and conclusions about the implications of our proposal for practical quantum information protocols are stated in Sec. V.
§ MEAN-FIELD EQUATIONS AND THE DARK-SOLITON QUBIT
We consider two-coupled quasi-1D BECs. A quasi 1D gas is produced when the transverse dimension of the trap is larger than or of the order of the s-wave scattering length and, at the same time, much smaller than the longitudinal extension <cit.>. At the mean field level, the dynamics of the system is thus governed by the time-dependent coupled Gross Pitaevskii equations
iħ∂ψ _1/∂ t=-ħ ^2/2m∂^2ψ _1/∂ x^2+g_11|ψ _1| ^2ψ
_1+g_12|ψ _2| ^2ψ _1
iħ∂ψ _2/∂ t=-ħ ^2/2m∂^2ψ _2/∂ x^2+g_22|ψ _2| ^2ψ
_2+g_21|ψ _1| ^2ψ _2
where g_11 (g_22) is the one-dimensional coupling strength between particles in BEC_1 (BEC_2) and g_12=g_21 is the inter-particle coupling constant, ħ is the Planck constant, and m is the mass of the atomic species. We restrict the discussion to repulsive interatomic interactions, g_11 (g_22)>0. In what follows, we assume that a dark soliton is present in BEC_1 and g_22≪ g_12≤ g_11, such that particles in BEC_2 do not interact with each other and can
therefore be regarded as free particles interacting only with the soliton potential (see Fig. <ref>). Thus, Eq. (<ref>) can be written as
iħ∂ψ _2/∂ t=-ħ ^2/2m
∂^2ψ _2/∂ x^2+g_21|ψ _ sol|
^2ψ _2,
where the soliton profile, which is a singular nonlinear solution to Eq. (<ref>), is given by <cit.>
ψ _ sol(x)=√(n_0)tanh( x/ξ).
Here, n_0 is the background density, which is typically of the order of 10^8 m^-1 in elongated BECs,
and the healing length ξ =ħ /√(mn_0g_11) is of the order of (0.2-0.7) μm. We also consider the experimentally accessible trap frequencies ω_r=2π× (1-5) kHz ≫ω _z=2π× (15-730) Hz, for which the
corresponding oscillator length amounts to l_z=(0.6-3.9) μm <cit.>. Notice that the previous results can be easily generalized to the case of a gray soliton (i.e. a soliton traveling with speed v) by replacing Eq. (<ref>) by
ψ _ sol(x)=√(n_0)[iθ+1/γtanh( x/ξγ)],
where θ=v/c_s, γ=(1-θ^2)^-1/2, and c_s=√(gn_0/m) is the BEC sound speed <cit.>. Therefore, the time-independent version of Eq. (<ref>) reads
E'ψ _2=-ħ ^2/2m
∂^2ψ _2/∂ x^2-g_21n_0 sech^2( x/ξ) ψ _2,
where E'=E-g_21n_0. Here, the dark soliton acts as a potential for the particles of the reservoir. Analytical solutions to Eq. (<ref>) can be obtained by casting the potential term in the form <cit.>
V(x)=-ħ ^2/2mξ ^2ν (ν +1) sech^2(
x/ξ),
where ν=(-1+√(1+4g_12/g_11))/2. The particular case of ν being a positive integer corresponds to the important case of the reflectionless potential <cit.>, for which an incident wave is totally transmitted. For the more general case considered here, the energy spectrum associated to the potential in Eq. (<ref>) reads
E_n^^'=-ħ ^2/2mξ ^2(ν -n)^2,
where n is an integer. The number of bound states is given by n_ bound=⌊ 1+ν+√(ν(ν+1))⌋, where ⌊·⌋ denotes the integer part. A two-level system (qubit) can be perfectly isolated when the value of ν ranges as
1/3≤ν <4/5.
At the critical point ν =1/2 the two energy levels merge and the qubit is ill-defined. Finally, for ν≥ 4/5, three-level systems (qutrits) can also be formed, but this case is outside the scope of the present work and will be discussed in a separate publication. The features of the spectrum (<ref>) are illustrated in Fig. <ref>.
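As a quick numerical check of this qubit window, the sketch below evaluates the bound-state count n_ bound = ⌊ 1+ν+√(ν(ν+1))⌋ for a few values of ν (a tiny tolerance guards against floating-point roundoff at the boundary values).
```python
# Count bound states of the potential (assuming the spectrum of Eq. (<ref>)):
# exactly two bound states (a qubit) for 1/3 <= nu < 4/5.
import numpy as np

def n_bound(nu, tol=1e-9):
    return int(np.floor(1.0 + nu + np.sqrt(nu * (nu + 1.0)) + tol))

for nu in [0.30, 1.0 / 3.0, 0.50, 0.79, 0.80]:
    print(f"nu = {nu:.3f} -> {n_bound(nu)} bound state(s)")
# nu = 0.300 -> 1, nu = 0.333 -> 2, nu = 0.500 -> 2, nu = 0.790 -> 2, nu = 0.800 -> 3
```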
§ QUBIT-PHONON INTERACTION
The mean-field soliton solution in Eq. (<ref>) is accompanied by quantum fluctuations (phonons). In that case, the total wavefunction is given by ψ_1(x)=ψ_ sol(x)+δψ_1(x), where
δψ_1(x)=∑_k (u_k(x) b_k e^ikx +v_k(x)^*b_k^† e^-ikx),
with b_k denoting bosonic operators satisfying the commutation relation [b_k,b_q^†]=δ_kq. u_k(x) and v_k(x) are amplitudes verifying the normalization condition | u_k(x)| ^2 -| v_k(x)| ^2=1 and are explicitly given by
<cit.>
u_k(x)=√(1/4πξ)μ/ϵ _k[ ( (kξ )^2+2ϵ _k/μ) (
kξ/2+itanh( x/ξ) ) +kξ/cosh ^2( x/ξ) ] ,
and
v_k(x)=√(1/4πξ)μ/ϵ _k[ ( (kξ )^2-2ϵ _k/μ) (
kξ/2+itanh( x/ξ) ) +kξ/cosh ^2( x/ξ) ] .
Similarly, the particles in BEC_2 are eigenstates of Eq. (<ref>), and can therefore be expanded in terms of the bosonic operators a_ℓ as
ψ_2(x)=∑_ℓ=0^1 φ_ℓ(x) a_ℓ,
where φ_0(x)= sech(x/ξ)/(√(2ξ)) and φ_1(x)=i√(3)tanh(x/ξ)φ_0(x).
The total Hamiltonian may then be written as
H=H_ qubit+H_ ph+H_ int.
The first term H_ qubit represents the dark-soliton (qubit) Hamiltonian
H_ qubit=ħω _0σ _z
where ω _0=ħ(2ν -1)/(2mξ ^2) is the qubit gap frequency and σ_z=a_1^† a_1- a_0^† a_0 is the corresponding spin operator. The second term describes the phonon (reservoir) Hamiltonian
H_ ph=∑_k ϵ _kb _k^†b _k,
where the Bogoliubov spectrum is given by ϵ _k=μξ√(k^2(ξ^2k^2+2)) and μ =gn_0 denotes the chemical potential. The interaction Hamiltonian H_ int between qubit and the reservoir is defined as
H_ int=g_12∫ dxψ _2^†ψ _1^†ψ _1ψ _2
which, with the prescriptions in Eqs. (<ref>) and (<ref>), can be decomposed as
H_ int=H_ int^(0)+H_ int^(1)+H_ int^(2),
respectivelty containing zero, first and second order terms in the operators b_k and b_k^†. Owing to the small depletion of the condensate, and consistent with the Bogoliubov approximation performed in Eq. (<ref>), we ignore the higher-order term H_ int^(2)∼𝒪(b_k^2). The first part of Eq. (<ref>) corresponds to a Stark shift term of the type
H_ int^(0)=g_12n_0δ _ℓℓ'a_ℓ^†a_ℓ'f_ℓℓ',
where f_ℓℓ'=∫ dx φ _ℓ^†(x)φ _ℓ'(x)tanh ^2(
x/ξ). The latter can be omitted by renormalizing the qubit frequency as ω_0 →ω_0+n_0g_12/ħ. In its turn, the first-order term 𝒪(b_k) is given by
H_ int^(1)=∑_k∑_ℓ,ℓ'a_ℓ^†a_ℓ'(
b_kg_ℓ,ℓ'(k)+b_k^†g_ℓ,ℓ'(k)^*) + h.c.
where
g_ℓ,ℓ'(k) =√(n_0)g_12∫ dxφ _ℓ^†(x)φ _ℓ'(x)tanh( x/ξ)e^ikx u_k
As we can observe, Eq. (<ref>) contains intraband (ℓ=ℓ') and
interband (ℓ≠ℓ') terms. However, for small values of the coupling between the system and the environment, the qubit transition can only be driven by near-resonant phonons, for which the interband coupling amplitude | g_01(k)|=| g_10(k)^*| is much larger than the intraband terms | g_00(k) | and | g_11(k)| (see Fig. <ref>). As such, within the rotating-wave approximation (RWA), we can safely drop the intraband terms to obtain
H_ int^(1)=∑_kg(k)σ_+b_k+∑_kg(k)^*σ_-b_k^† + h.c.,
where σ_+=a_1^† a_0, σ_-=a_0^† a_1 and the coupling constant g_k≡ g_0,1(k)=-g_1,0(k) is explicitly given by
g_k = ig_12k^2ξ ^3/2/80ϵ _k√(
n_0π/6)(2μ +8k^2μξ ^2+15ϵ _k)
( -4+k^2ξ ^2) csch( kπξ/2).
We notice that the implementation of the RWA also implied dropping the counter-rotating terms proportional to b_k σ_- and b_k^†σ_+, which do not conserve the total number of excitations. The accuracy of such an approximation can be verified a posteriori, provided that the emission rate Γ is much smaller than the qubit transition frequency ω_0.
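For orientation, the sketch below evaluates |g_k| from Eq. (<ref>) on a grid of wavenumbers in healing-length units ħ = m = ξ = μ = 1; the values chosen for n_0 and g_12 are illustrative assumptions, not parameters from the paper.
```python
# Evaluate the qubit-phonon coupling |g_k| of Eq. (<ref>) in units hbar = m = xi = mu = 1.
import numpy as np

xi, mu = 1.0, 1.0
n0, g12 = 100.0, 0.05                     # assumed dimensionless parameters

k = np.linspace(1e-3, 6.0, 500)
eps_k = mu * xi * np.sqrt(k**2 * (xi**2 * k**2 + 2.0))   # Bogoliubov spectrum

g_k = (g12 * k**2 * xi**1.5 / (80.0 * eps_k)
       * np.sqrt(n0 * np.pi / 6.0)
       * (2.0 * mu + 8.0 * k**2 * mu * xi**2 + 15.0 * eps_k)
       * (-4.0 + k**2 * xi**2)
       / np.sinh(k * np.pi * xi / 2.0))                  # csch(x) = 1/sinh(x)

print(np.abs(g_k).max())  # |g_k| peaks at intermediate k and decays exponentially
```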
§ SPONTANEOUS DECAY OF THE DARK-SOLITON QUBIT
Neglecting the effect of temperature and other external perturbations, the only source of decoherence of a dark-soliton qubit is the surrounding phonons. Because cold atom experiments are typically very clean, and considering that zero temperature is an excellent approximation for quasi-1D BECs <cit.>, we employ the Wigner-Weisskopf theory in order to compute the lifetime of the qubit. We assume the qubit to be initially in its excited state and the field to be in the vacuum state. Under such conditions, the total system+reservoir wavefunction can be parametrized as
|ψ (t)⟩ =α(t)e^-iω _0t|
e,0⟩ +∑_kβ_k(t)e^-iω _kt|
g,1_k⟩
where α(t) and β_k(t) are the probability amplitudes. The Wigner-Weisskopf ansatz (<ref>) then evolves under the total Hamiltonian in Eq. (<ref>), for which the Schrödinger equation yields the following evolution of the coefficients:
α̇(t) = i/ħ∑
_kg_ke^-i(ω _k-ω _0)tβ_k(t)
β_k(t) = i/ħg_k^∗∫_0^tα(t^^')e^i(ω _k-ω _0)t^^'dt^^'.
Due to the separation of time scales between the phonons and the decay process, we may assume that the coefficient α(t) evolves much more slowly than β_k(t), which allows us to invoke the Born approximation to write
∫_0^tα(t^^')e^-i(ω _k-ω _0)(t-t^^')dt^^'≃α(t)∫_0^te^-i(ω _k-ω
_0)τdτ ,
where τ =t-t^'. Moreover, since we expect α(t) to vary at a rate Γ≪ω _0, the relevant decay dynamics is expected to take place at times t≫1/ω _0, which allows us to take the upper limit of the above integral to ∞ (Markov approximation). Therefore, we have
α(t)∫_0^∞e^-i(ω _k-ω _0)τdτ = α(t)πδ (ω _k-ω _0)
- i α(t)℘( 1/(ω _k-ω _0) ) ,
where ℘ represents the Cauchy principal part describing an additional energy (Lamb) shift. Because it represents a small correction to the qubit energy ω_0, we do not compute its contribution explicitly. Therefore, the excited state amplitude
decays exponentially as
α(t)=e^-Γ t/2.
where Γ is the population decay rate given as
Γ = L/√(2)ħξ∫ dω_k √(1+ η_k)/η_k| g_k|^2 δ(ω_k-ω_0)
= π N_0g_12^2/76800ħμ ^5ξ ^2η_0 √(
μ +η_0 /μ)( -μ +η_0 ) ( -5μ +η_0
) ^2
× ( 8η_0 +3μ(-2+5ξ√(ħ ^2ω _0^2/
μ ^2ξ ^2)) ) ^2
× csch^2( π√(-μ
+η_0 )/2√(μ))
where η_0,k =√(μ ^2+ħ ^2ω _0,k^2). As depicted in Fig. <ref>, the decay rate Γ is orders of magnitude smaller than the qubit gap ω_0, confirming that invoking both the RWA and the Born-Markov approximation is also justified for phononic systems. Remarkably, for a quasi-1D BEC with a chemical potential of a few kHz, we can obtain a qubit lifetime τ_ qubit∼ 1/Γ of the order of a second, a time comparable to the lifetime of the BEC itself. Notice that the value of g_12 (and consequently the qubit natural frequency ω_0 and lifetime τ_ qubit) can be experimentally tuned with the help of Feshbach resonances. The only immediate limitation to the performance of our proposal may be related to the dark-soliton quantum diffusion <cit.>. Since they interact with the background phonons, the solitons are expected to evaporate within the time scale τ_ diffusion=8ξ/c_s √(3n_0ξ/2). For typical 1D BECs with ξ∼ 0.7-1.0 μm and c_s∼ 1.0 mm/s, we estimate τ_ diffusion∼ 0.05-0.1 s, which reduces τ_ qubit by about 20%. Finally, by putting Eqs. (<ref>) and (<ref>) together, we can evaluate the evolution of the amplitude coefficient β_k(t) as
β_k(t) =i/ħg_k^∗∫_0^te^-[Γ/2
-i(ω _k-ω _0)]t^'dt^' ,
which yields the following Lorentzian spectrum
S(ω_k)=lim_t→∞|β_k(t)| ^2 =1/ħ ^2| g_k| ^2/Γ ^2/4
+(ω _k-ω _0)^2,
as illustrated in Fig. <ref>. It is observed that the Lorentzian spectrum becomes narrower for a weaker coupling constant g_12.
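A minimal numerical check, with ħ = 1 and illustrative parameter values, confirms that the spectrum (<ref>) is a Lorentzian of full width at half maximum Γ centred at ω_0:
```python
# Verify numerically that S(omega) has FWHM = Gamma (units with hbar = 1).
import numpy as np

omega0, Gamma, g = 1.0, 1e-2, 1e-3        # assumed gap, decay rate and coupling
w = np.linspace(omega0 - 10 * Gamma, omega0 + 10 * Gamma, 4001)
S = g**2 / (Gamma**2 / 4.0 + (w - omega0) ** 2)

above_half = w[S >= S.max() / 2.0]
print(above_half[-1] - above_half[0], Gamma)  # measured FWHM ~ Gamma
```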
§ CONCLUSION
In conclusion, we have shown that a dark soliton in a quasi one-dimensional Bose-Einstein condensate can produce a well isolated two-level system, which can act as a matter-wave qubit with an energy gap of a few kHz. This feature is intrinsic to the nonlinear nature of Bose-Einstein condensates and does not require manipulation of the internal degrees of freedom of the atoms. We observe that the decoherence induced by the quantum fluctuations (phonons) produces a finite qubit lifetime. Quite remarkably, leading-order calculations provide a qubit lifetime of the order of a few seconds, a time scale comparable to the duration of state-of-the-art cold atomic traps. The only major limitation to the qubit robustness is the quantum diffusion of the soliton, which is estimated to reduce the qubit lifetime by around 20%. This puts qubits made of dark solitons as good candidates to store information for long times (∼ 0.01-1 s), offering an appealing alternative to quantum optical or solid-state platforms. While dark solitons may not compete in terms of scalability (the number of solitons in a typical elongated BEC is not expected to surpass a few tens), their unprecedented coherence and lifetime will certainly make them attractive for the design of new quantum memories and quantum gates. Moreover, due to the possibility of interfacing cold atomic clouds with solid-state and optical systems, our findings may inspire further applications in hybrid quantum computers.
§ ACKNOWLEDGEMENTS
One of the authors (H. T.) acknowledges the Security of Quantum Information Group for the hospitality and for providing the working conditions during the early stages of this work. Stimulating discussions with J. D. Rodrigues are acknowledged. The authors also thank the support from the DP-PMI programme and Fundação para a Ciência e a Tecnologia (Portugal), namely through the scholarship number SFRH/PD/BD/113650/2015 and the grant number SFRH/BPD/110059/2015. E.V.C. acknowledges partial support from
FCT-Portugal through Grant No. UID/CTM/04540/2013.
99
henderson2009 K. Henderson, C. Ryu, C. MacCormick and M. G. Boshier, New
J. Phys. 11, 043030 (2009).
donley2001 E. A. Donley, N. R. Claussen, S. L. Cornish, J. L. Roberts,
E. A. Cornell, C. E. Wieman, Nature 412, 295 (2001).
leanhardt2003 A. E. Leanhardt, T. A. Pasquini, M. Saba, A. Schirotzek, Y. Shin, D. Kielpinski, D. E. Pritchard, W. Ketterle, Science 301, 1513 (2003).
zoller98 D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner, P. Zoller, Phys. Rev. Lett. 81, 3108 (1998).
greiner2002 M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, I. Bloch, Nature 415, 39 (2002).
orth2008 P. P. Orth, I. Stanic, and K. Le Hur, Phys. Rev. A 77, 051601 (2008).
kalas2008 R. M. Kalas, A. V. Balatsky, D. Mozyrsky, Phys. Rev. B 78,184513 (2008).
santamore2008 D. H. Santamore, E. Timmermans, Phys. Rev. A 78, 013619 (2008).
solenov2008a D. Solenov and D. Mozyrsky, Phys. Rev. Lett. 100, 150402 (2008).
solenov2008bD. Solenov and D. Mozyrsky, Phys. Rev. A 78, 053611 (2008)
kalas2010 R. M. Kalas, D. Solenov, E. Timmermans, Phys. Rev. A 81, 053620 (2010).
porto2003 J. V. Porto, S. Rolston, B. Laburthe Tolra, C. J. Williams and W. D. Phillips, Phil. Trans. R. Soc. Lond. A 361, 1417 (2003).
lundblad2009 N. Lundblad, J. M. Obrecht, I. B. Spielman, J. V. Porto, Nat. Phys. 5, 575 (2009).
leggett89 A. J. Leggett, Quantum mechanics at the macroscopic level, Ecole d'été de physique théorique (Les Houches, Haute-Savoie, France) (1986).
dw_2008
F. W. Strauch, M. Edwards, E. Tiesinga, C. Williams, and C. W. Clark Phys. Rev. A 77, 050304(R) (2008).
andrews M. R. Andrews, H.-J. Miesner, D. M. Stamper-Kurn, J. Stenger, and W. Ketterle, Phys. Rev. Lett. 82, 2422 (1999).
tinkham2004M. Tinkham, Introduction to Supreconductivity, Dover Publications, New York, 2nd Edn (2004).
solenov2011 D. Solenov and D. Mozyrsky, J. Comput. Theor. Nanosci. 8, 481 (2011).
klein2007 A. Klein, M. Bruderer, S.R. Clark and D. Jaksch, New J. Phys. 9, 411 (2007).
cirone2009 M. A. Cirone, G De Chiara, G. M. Palma and A. Recati, New J. Phys. 11, 103055 (2009).
haikka2011 P. Haikka, S. McEndoo, G. De Chiara, G. M. Palma, and S. Maniscalco, Phys. Rev. A 84, 031602 (2011).
mulansky2011 F. Mulansky, J. Mumford, and D. H. J. O’Dell, Phys. Rev. A 84, 063602 (2011).
peotta2013 S. Peotta, D. Rossini, M. Polini, F. Minardi, R. Fazio, Phys. Rev. Lett. 110, 015302 (2013).
stanigel2012 K. Stannigel, P. Rabl, and P. Zoller, New J. Phys. 14, 063014 (2012).
ramos2014 T. Ramos, H. Pichler, A. J. Daley, P. Zoller, Phys. Rev. Let. 113, 237203 (2014).
petersen2014 J. Petersen, J. Volz, and A. Rauschenbeutel, Science 346, 67 (2014).
mitsch2014 R. Mitsch, C. Sayrin, B. Albrecht, P. Schneeweiss, and A. Rauschenbeutel, Nature Communication 5, 5713 (2014).
sollner2014 I. Söllner, S. Mahmoodian, A. Javadi, and P. Lodahl, Nature Nanotechnology 10, 775 (2015).
young2014 A. B. Young, A. Thijssen, D. M. Beggs, L. Kuipers, J. Rarity, and R. Oulton, Phys. Rev. Lett. 115, 153901 (2015).
ramos2016 T. Ramos, B. Vermersch, P. Hauke, H. Pichler, and P. Zoller, Phys. Rev. A 93, 062104 (2016).
vermersch2016 B. Vermersch, T. Ramos, P. Hauke, and P. Zoller, Phys. Rev. A 93, 063830 (2016).
luiz F. S. Luiz, E. I. Duzzioni, L. Sanz, Brazilian Journal of
Physics 45, 550 (2015).
kivshar Y. S. Kivshar and G. P. Agrawal, Optical Solitons:
From Fibers to Photonic Crystals (Academic Press, San Diego, USA, 2003).
burger S. Burger, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera,
G. V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. 83, 5198
(1999).
Denschlag J. Denschlag, J. E. Simsarian, D. L. Feder, C. W. Clark,
L. A. Collins, J. Cubizolles, L. Deng, E. W. Hagley, K. Helmerson, W. P.
Reinhart, S. L. Rolston, B. I. Schneider, and W. D. Phillips, Science
287, 97 (2000).
anderson B. P. Anderson, P. C. Haljan, C. A. Regal, D. L. Feder,
L. A. Collins, C. W. Clark, and E. A. Cornell, Phys. Rev. Lett. 86,
2926 (2001).
Krakel D. Krakel, N. J. Halas, G. Giuliani, and D. Grischkowsky,
Phys. Rev. Lett. 60, 29 (1988); G. A. Swartzlander, D. R. Andersen,
J. J. Regan, H. Yin, and A. E. Kaplan, ibid. 66, 1583 (1991).
Denardo B. Denardo, W. Wright, S. Putterman, and A. Larraza, Phys.
Rev. Lett. 64, 1518 (1990).
Chen M. Chen, M. A. Tsankov, J. M. Nash, and C. E. Patton, Phys.
Rev. Lett. 70, 1707 (1993).
Dutton Z. Dutton, M. Budde, C. Slowe, and L.V. Hau, Science
293, 663 (2001).
Reinhardt W. P. Reinhardt and C. W. Clark, J. Phys. B 30,
L785 (1997).
Scott T. F. Scott, R. J. Ballagh, and K. Burnett, J. Phys. B
31, L 329 (1998)
jackson B. Jackson, N. P. Proukakis, C. F. Barenghi, Phys. Rev. A
75, 051601 (2007).
dziarmaga J. Dziarmaga, Z. P. Karkuszewski, and K. Sacha, J. Phys.
B: At. Mol. Opt. Phys. 36, 1217 (2003).
gael2005 G. A. El and A. M. Kamchatnov, Phys. Rev. Lett. 95,
204101 (2005).
tercas H. Terças, D. D. Solnyshkov and G. Malpuech, Phys. Rev.
Lett. 110, 035302 (2013); ibid 113, 036403 (2014).
perez V. M. Perez-Garcia, H. Michinel and H. Herrero, Phys. Rev. A
57, 3837 (1998).
carr L. D. Carr, C. W. Clark and W. P. Reinhardt, Phys. Rev. A
62, 063611 (2000).
Muryshev A. Muryshev, G. V. Shylapnikov, W. Ertmer, K. Sengstock,
and M. Lewenstein, Phys. Rev. Lett. 89, 110401 (2002).
huang G. Huang, J. Szeftel, and S. Zhu, Phys. Rev. A 65,
053605 (2002).
zakharov72 V. E. Zakharov and A. B. Shabat, Sov. Phys. JETP
34, 62 (1972).
zakharov73 V. E. Zakharov and A. B. Shabat, Sov. Phys. JETP
37, 823 (1973).
parker N. Parker, Numerical Studies of Vortices and Dark Solitons in atomic Bose Einstein Condensates, Ph.D Thesis (2004).
pelinovsky D. E. Pelinovsky, Y. S. Kivshar, and V. V. Afanasjev, Phys. Rev. E 54, 2015 (1996).
wadkin D. C. Wadkin-Snaith and D. M. Gangardt, Phys. Rev. Lett. 108, 085301 (2012).
john J. Leknera, Am. J. Phys. 75, 12 (2007).
Dziarmaga04 J. Dziarmaga, Phys. Rev. A 70, 063616 (2004).
pitaevskii L. Pitaevskii and S. Stringari, Bose-Einstein
Condensation (Clarendon, Oxford, 2003).
pethick C. J. Pethick and H. Smith, Bose Einstein
Condensation in Dilute Gases; Second Edition (Cambridge University Press,
Cambridge, England, 2008).
|
http://arxiv.org/abs/1701.07518v4 | 20170125233439 | On The Compound MIMO Wiretap Channel with Mean Feedback | [
"Amr Abdelaziz",
"C. Emre Koksal",
"Hesham El Gamal",
"Ashraf D. Elbayoumy"
] | cs.CR | [
"cs.CR",
"cs.IT",
"math.IT"
] |
On The Compound MIMO Wiretap Channel with Mean Feedback
^†Amr Abdelaziz, ^†C. Emre Koksal and ^†Hesham El Gamal ^*Ashraf D. Elbayoumy ^†Department of Electrical and Computer Engineering ^*Department of Electrical Engineering
The Ohio State University Military Technical College
Columbus, Ohio 43201 Cairo, Egypt
======================================================================================================================================================================================================================================================================================
The compound MIMO wiretap channel with double sided uncertainty is considered under the channel mean information model. In the mean information model, channel variations are centered around the mean value, which is fed back to the transmitter. We show that the worst case main channel is anti-parallel to the channel mean information, resulting in an overall unit rank channel. Further, the worst eavesdropper channel is shown to be isotropic around its mean information. Accordingly, we provide the capacity achieving beamforming direction. We show that the saddle point property holds under the mean information model, and thus the compound secrecy capacity equals the worst case capacity over the class of uncertainty. Moreover, the capacity achieving beamforming direction is found to require matrix inversion; thus, we derive null steering (NS) beamforming as an alternative suboptimal solution that does not require matrix inversion. The NS beamformer points in the direction orthogonal to the eavesdropper mean channel that maintains the maximum possible gain in the mean main channel direction. Extensive computer simulation reveals that NS performs very close to the optimal solution. It also verifies that NS beamforming outperforms both maximum ratio transmission (MRT) and zero forcing (ZF) beamforming over the entire SNR range. Finally, an equivalence relation with the MIMO wiretap channel in a Rician fading environment is established.
MIMO Wiretap Channel, Compound Wiretap Channel, Mean Channel Information, Saddle point, Worst Case Capacity.
§ INTRODUCTION
A key consideration in determining the secrecy capacity of the MIMO wiretap channel is the amount of information available at the transmitter, not only about the eavesdropper channel, but also about the main channel. In principle, assuming perfect knowledge of the main channel, either full eavesdropper channel state information (CSI) or, at least, its distribution is required to determine the secrecy capacity. The secrecy capacity of the general MIMO wiretap channel has been studied in <cit.> assuming perfect knowledge of both channels.
In practical scenarios, having even partial knowledge of the eavesdropper channel is typically not possible, especially when dealing with strictly passive eavesdroppers. Further, in fast fading channels, it may also be unreasonable to have perfect main CSI at the transmitter. The compound wiretap channel <cit.> is a model that tackles these limitations, in which the CSI is known only to belong to a certain class of uncertainty. This assumption can be used to model uncertainty in the eavesdropper CSI only <cit.> (single sided uncertainty) or in both main and eavesdropper channels <cit.> (double sided uncertainty). Depending on the considered class of uncertainty, the secrecy capacity of the compound wiretap channel can be characterized.
Classes of uncertainty in the compound wiretap channel fall into two categories, according to the set that the main and/or eavesdropper channels belong to: 1) finite state channels; 2) continuous sets. The discrete memoryless compound wiretap channel with a countably finite uncertainty set was studied in <cit.>. Meanwhile, the corresponding compound Gaussian MIMO wiretap channel with a countably finite uncertainty set is analyzed in <cit.>. In both cases, the secrecy capacity is established only for the degraded case (i.e. the main channel is stronger in all spatial directions). Meanwhile, the secrecy capacity itself remains unknown for the general indefinite case (i.e. the main channel is stronger in a subset of the available spatial directions). A closed form solution was obtained either in the case of an isotropic eavesdropper <cit.> or in the degraded case in the high SNR regime <cit.>. Although the optimal signaling scheme for the non-isotropic non-degraded case is still not known in general, necessary conditions for optimality were derived in <cit.> and <cit.> for the deterministic known channel case. Recently in <cit.>, the compound Gaussian MIMO wiretap channel was studied under a spectral norm constraint (maximum channel gain) and a rank constraint for both single and double sided classes of uncertainty without the degradedness assumption.
In <cit.> (Theorem 3) it was shown that the secrecy capacity of the compound MIMO wiretap channel is upper bounded by the worst case capacity over the considered class of uncertainty. The worst case capacity is established by optimizing the input signal covariance for every possible pair of main and eavesdropper channels, and then taking the minimum over all main and eavesdropper channels in the considered class of uncertainty. Moreover, it was also shown that the compound secrecy capacity is lower bounded by the capacity of the worst possible main and eavesdropper channels. Here, the saddle point needs to be considered, i.e. maxmin = minmax, where the max is taken over non-negative definite input covariance matrices subject to an average power constraint and the min is taken over the classes of channel uncertainty. If the saddle point property holds, the compound capacity is fully characterized and is known to match the worst case one.
In this paper, we consider the class of channels with double sided uncertainty under the channel mean information model. In the mean information model, the channel is centered around a mean value, which is fed back to the transmitter. An example of the mean information model is the channel with a strong Line-of-Sight (LOS) component, the gain of which is known at the transmitter. While it is unlikely to expect the eavesdropper to share its CSI (even its mean channel) in some scenarios, a secure communication system may be designed in a way that puts physical restrictions on the locations of a possible attacker. These physical restrictions can be informative to the transmitter and may enable better secrecy rates to be achieved by designing its signaling scheme accordingly. We first establish the worst case secrecy capacity of the compound MIMO wiretap channel under the mean information model, then we show that the saddle point property holds. We show that the worst case main channel is anti-parallel to the channel mean information, resulting in an overall unit rank channel. Further, the worst eavesdropper channel is shown to be isotropic around its mean information. Accordingly, generalized eigenvector beamforming is known to be the optimal signaling strategy <cit.><cit.>. We show that the saddle point property holds under the mean information model, and thus the compound secrecy capacity equals the worst case capacity over the class of uncertainty. Further, as the generalized eigenvector solution requires matrix inversion, we introduce null steering (NS) beamforming, that is, transmission in the direction orthogonal to the eavesdropper mean channel direction that maintains the maximum possible gain in the mean main channel direction, as an alternative suboptimal solution. Extensive computer simulation reveals that NS performs extremely close to the optimal solution. It also verifies that NS beamforming outperforms both maximum ratio transmission (MRT) and zero forcing (ZF) beamforming approaches over the entire SNR range. Finally, an equivalence relation with the MIMO wiretap channel in a Rician fading environment is established.
§ SYSTEM MODEL AND PROBLEM STATEMENT
§.§ Notations
In the rest of this paper we use boldface uppercase letters for random matrices, uppercase letters for their realizations, boldface lowercase letters for random vectors and lowercase letters for their realizations. Meanwhile, (.)^† denotes the conjugate transpose, 𝐈_N denotes the identity matrix of size N, |·| denotes the matrix determinant and 1_m × n denotes an m × n matrix of all 1's.
§.§ System Model
We consider the MIMO wiretap channel scenario in which a transmitter 𝒜 with N_a > 1 antennas aims to transmit a confidential message to a receiver, ℬ, having N_b > 1 antennas over an insecure channel in the presence of a passive adversary, ℰ, equipped with N_e > 1 antennas. The discrete baseband equivalent channels for the signal received by each of the legitimate destination, 𝐲, and the adversary, 𝐳, are as follows:
𝐲 = 𝐇_b 𝐱 + 𝐧_b, 𝐳 = 𝐇_e 𝐱 + 𝐧_e,
where 𝐱∈ℂ^N_a× 1 is the transmitted signal vector constrained by an average power constraint 𝔼[𝐭𝐫(𝐱𝐱^†)] ≤ P. Also, 𝐇_b∈ℂ^N_b × N_a and 𝐇_e ∈ℂ^N_e × N_a are the channel coefficients matrices between message source, destination and adversary respectively. Finally, 𝐧_b ∈ℂ^N_b× 1 and 𝐧_e ∈ℂ^N_e× 1 are independent zero mean normalized to unit variance circular symmetric complex random vectors for both destination and adversary channels respectively, where, 𝐧_b∼𝒞𝒩(0,𝐈_N_b) and 𝐧_e∼𝒞𝒩(0,𝐈_N_e).
§.§ Problem Statement
In this paper, we consider the case where the transmitter does not know the exact realizations of both 𝐇_b and 𝐇_e. Rather, it only knows that they each belong to a known compact (closed and bounded) uncertainty set. Under the considered channel mean feedback model, we define the channel uncertainty sets as follows:
𝒮_b = {𝐇_b:𝐇_b = 𝐇_μ b + Δ𝐇_b, ‖Δ𝐇_b‖_2 ≤ϵ_b,
𝐇_μ b= λ_μ b^1/2v_bu_b^†} ,
𝒮_e = {𝐇_e:𝐇_e = 𝐇_μ e + Δ𝐇_e, ‖Δ𝐇_e‖_2 ≤ϵ_e,
𝐇_μ e= λ_μ e^1/2v_eu_e^† , u_e ∈𝒰} ,
where, 𝐇_μ∘ is the channel mean information which is assumed to be of unit rank and v_∘∈ℂ^N_∘× 1 , u_∘∈ℂ^N_a × 1. We assume that the transmitter knows u_b, meanwhile, it knows only that u_e ∈𝒰 where 𝒰 is the set of uncertainty about the eavesdropper mean information. In the extreme case when u_e is know exactly at the transmitter, we simply write 𝒰={u_e}.
Further, Δ𝐇_∘ is the channel uncertainty part, which is assumed to satisfy the bounded spectral norm condition ‖Δ𝐇_∘‖_2 ≤ϵ_∘. In the compound wiretap channel, channel realizations are assumed to be fixed over the entire transmission duration. Therefore, Δ𝐇_∘ is considered fixed once it has been realized. This model describes the scenario in which the eavesdropper can approach the transmitter up to a certain distance and from a limited range of directions, see Fig. (<ref>).
First, let us define
C(𝐖_b,𝐖_e,𝐐) = log|𝐈_N_a + 𝐖_b𝐐| - log|𝐈_N_a + 𝐖_e𝐐| ,
where 𝐖_∘≜𝐇_∘^†𝐇_∘, ∘∈{e,b} is the channel Gram matrix and 𝐐=𝔼[𝐱𝐱^†] is the input signal covariance matrix. The capacity of the worst case main and eavesdropper channels can be defined as follows:
C_w = min_𝐖_b: 𝐇_b∈𝒮_b, 𝐖_e: 𝐇_e∈𝒮_e max_𝐐≽0, 𝐭𝐫(𝐐) ≤ P C(𝐖_b,𝐖_e,𝐐).
The following lower bound on the compound secrecy capacity was established in <cit.>:
C_l = max_𝐐≽0, 𝐭𝐫(𝐐) ≤ P min_𝐖_b: 𝐇_b∈𝒮_b, 𝐖_e: 𝐇_e∈𝒮_e C(𝐖_b,𝐖_e,𝐐) ,
thus, the following bounds on the compound capacity hold <cit.>:
C_l ≤ C_c ≤ C_w
The problem under consideration is first to evaluate the lower bound on the compound capacity over the uncertainty sets by solving (<ref>). To solve (<ref>) we need to identify the worst case main and eavesdropper channels (i.e. the main and eavesdropper channel realizations that minimize the lower bound), and then determine the optimal signaling scheme accordingly. Further, we need to check whether the saddle point property, in the form maxmin = minmax, holds for the considered class of channels.
§ COMPOUND SECRECY CAPACITY WITH KNOWN EAVESDROPPER MEAN INFORMATION
In this section we characterize the secrecy capacity of the considered compound wiretap channel when the mean information of the eavesdropper, u_e, is known exactly at the transmitter, i.e., 𝒰={u_e}. To proceed, we first need to identify the worst main channel Gram matrix, 𝐖_bw which is evaluated in the following proposition:
For the considered compound wiretap channel, for any non negative definite matrix 𝐐 and any 𝐖_e such that 𝐇_e ∈𝒮_e we have
C(𝐖_b,𝐖_e,𝐐) ≥ C(𝐖_bw,𝐖_e,𝐐)
where 𝐖_bw = (λ_μ b^1/2 - ϵ_b)_+^2 u_bu_b^† is the worst main channel Gram matrix where (x)_+ = max (0,x).
The proof is given in Appendix <ref>.
The result of Proposition <ref> can be interpreted as follows: the worst main channel is the channel that has lost all but one of its degrees of freedom, while the remaining degree of freedom occurs with the minimum possible strength along the mean channel direction, the uncertain part being anti-parallel to the mean channel. An obvious direct consequence of Proposition <ref> is that the optimal input covariance, 𝐐^*, has to be of unit rank. That is because 𝐖_bw is shown to be of unit rank. Therefore, we conclude that beamforming is the optimal transmit strategy for the considered compound wiretap channel. Thus, we can restrict our analysis to a unit rank 𝐐. Next, we need to identify the worst eavesdropper Gram matrix.
For the considered compound wiretap channel, for any unit rank matrix 𝐐 with λ(𝐐) ≤ P, we have
C(𝐖_bw,𝐖_e,𝐐) ≥ C(𝐖_bw,𝐖_ew,𝐐)
where 𝐖_ew = (λ_μ e+ 2 λ_μ e^1/2ϵ_e) u_e u_e^† + ϵ_e^2 𝐈 is the worst eavesdropper channel Gram matrix.
The proof is given in Appendix <ref>.
Proposition <ref> states that the worst eavesdropper channel is isotropic around its mean channel. This means that the worst eavesdropper channel occurs with its maximum strength in the direction parallel to its mean channel. The main result of this paper is given in the following theorem, where we give the compound secrecy capacity of the considered class of channels and the capacity achieving input signal covariance 𝐐^*.
The secrecy capacity of the compound wiretap channel defined in (<ref>) and (<ref>) is equal to the worst case capacity, i.e. the saddle point property holds:
C_c^* = max_𝐐≽0
𝐭𝐫(𝐐) ≤ Pmin_𝐖_b : 𝐇_b∈𝒮_b
𝐖_e : 𝐇_e∈𝒮_e C(𝐖_b,𝐖_e,𝐐)
= min_𝐖_b : 𝐇_b∈𝒮_b
𝐖_e : 𝐇_e∈𝒮_emax_𝐐≽0
𝐭𝐫(𝐐) ≤ P C(𝐖_b,𝐖_e,𝐐)
= C(𝐖_bw,𝐖_ew,𝐐^*)= C_w
where 𝐖_bw and 𝐖_ew are as given in Propositions <ref> and <ref> respectively. Moreover, beamforming is the optimal signaling strategy:
𝐐^* = P q_* q_*^†,
where q_* is the eigenvector associated with the maximum eigenvalue of (𝐈_Na + P 𝐖_ew)^-1(𝐈_Na + P 𝐖_bw).
The proof is given in Appendix <ref>.
Theorem <ref> proves that the saddle point property holds for the class of channels described in (<ref>) and (<ref>), and thus the secrecy capacity of the compound wiretap channel is equal to the worst case capacity. Further, since the worst case main channel is of unit rank, generalized eigenvector beamforming is known to be the optimal signaling strategy <cit.><cit.>.
§.§ Null Steering Beamforming as an Alternative Solution
As can be seen from Theorem <ref>, the generalized eigenvector solution requires matrix inversion, which may incur a considerably high computational complexity, especially when the number of transmit antennas gets large. Therefore, we introduce null steering (NS) beamforming <cit.> as an alternative suboptimal solution. In our case, the NS beamforming matrix is given as follows:
𝐐_ns = P q_ns q_ns^†, q_ns= [𝐈-u_eu_e^†]u_b/‖[𝐈-u_eu_e^†] u_b‖.
The NS beamformer can be recognized as the projection of u_b onto the null space of u_e. In particular, q_ns maximizes the gain in the direction u_b while creating a null notch in the direction u_e. Thus, it can be understood as transmission in the direction orthogonal to the eavesdropper mean channel direction that maintains the maximum possible gain in the mean main channel direction. We give justifications for the choice of NS beamforming as a candidate suboptimal solution for our problem in Appendix <ref>. Extensive computer simulation provided in Section <ref> reveals that NS performs extremely close to the optimal solution, yet with no need for matrix inversion.
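As an illustration, the following sketch compares the NS direction of (<ref>) with the generalized-eigenvector direction of Theorem <ref> on the worst-case Gram matrices 𝐖_bw and 𝐖_ew; all numerical parameter values and the randomly drawn mean directions are assumptions chosen for illustration.
```python
# Compare NS beamforming with the optimal generalized-eigenvector beamformer.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
Na, P = 4, 10.0
lam_b = lam_e = 4.0                        # assumed mean-channel gains
eps_b = eps_e = 0.5                        # assumed uncertainty radii

def unit(v):
    return v / np.linalg.norm(v)

u_b = unit(rng.normal(size=Na) + 1j * rng.normal(size=Na))
u_e = unit(rng.normal(size=Na) + 1j * rng.normal(size=Na))

W_bw = max(np.sqrt(lam_b) - eps_b, 0.0) ** 2 * np.outer(u_b, u_b.conj())
W_ew = (lam_e + 2 * np.sqrt(lam_e) * eps_e) * np.outer(u_e, u_e.conj()) \
       + eps_e**2 * np.eye(Na)

def secrecy_rate(q):
    Q = P * np.outer(q, q.conj())
    num = np.linalg.det(np.eye(Na) + W_bw @ Q).real
    den = np.linalg.det(np.eye(Na) + W_ew @ Q).real
    return np.log2(num / den)

# Optimal: top generalized eigenvector of (I + P*W_bw) v = lambda (I + P*W_ew) v.
_, V = eigh(np.eye(Na) + P * W_bw, np.eye(Na) + P * W_ew)
q_opt = unit(V[:, -1])

# Null steering: projection of u_b onto the orthogonal complement of u_e.
q_ns = unit(u_b - (u_e.conj() @ u_b) * u_e)

print(secrecy_rate(q_opt), secrecy_rate(q_ns))  # NS lies close to the optimum
```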
§ COMPOUND SECRECY CAPACITY WITH EAVESDROPPER MEAN UNCERTAINTY
Unlike the previous section where u_e is assumed to be known at the transmitter, in this section we characterize the secrecy capacity of the considered compound wiretap channel when the eavesdropper mean direction, u_e, is known only to belong to the set 𝒰.
A key step toward the characterization of the secrecy capacity is to find the worst eavesdropper channel Gram matrix 𝐖_ew with that assumption. We give 𝐖_ew in the following proposition.
For the considered compound wiretap channel with u_e ∈𝒰, for all 𝐖_b with 𝐇_b∈𝒮_b
and any non-negative definite matrix 𝐐 we have
where 𝐖_ew = (λ_μ e+ 2 λ_μ e^1/2ϵ_e) u_* u_*^† + ϵ_e^2 𝐈 is the worst eavesdropper channel Gram matrix where:
u_* = arg max_u ∈𝒰 | u_b^†u | ,
The proof is given in appendix <ref>.
Observe that the assumption u_e ∈𝒰 does not affect 𝐖_bw; thus, the optimal covariance is again of unit rank, as in the case 𝒰={u_e}. Therefore, beamforming is still the optimal transmit strategy under the assumption u_e ∈𝒰. Again, transmission in the direction of the eigenvector of (𝐈_N_a + P 𝐖_ew)^-1(𝐈_N_a + P 𝐖_bw) associated with the maximum eigenvalue is the optimal solution, with 𝐖_ew as given in Proposition <ref>. We give the optimal 𝐐^* in the following corollary as a direct consequence of Theorem <ref>.
The saddle point property (<ref>) holds for the considered compound wiretap channel with u_e ∈𝒰.
Moreover, the optimal signaling scheme is zero mean Gaussian with covariance matrix given by
𝐐^* = P q_* q_*^†,
where q_* is the eigenvector associated with the maximum eigenvalue of (𝐈_Na + P 𝐖_ew)^-1(𝐈_Na + P 𝐖_bw), where 𝐖_bw and 𝐖_ew are as given in propositions <ref> and <ref>, respectively.
Follows immediately by Theorem <ref> while realizing that the worst eavesdropper mean channel is in the direction u_*.
Corollary <ref> extends Theorem <ref> to the case of uncertainty about the eavesdropper mean channel direction. We can conclude that, since the transmitter does not know the eavesdropper mean channel, it designs its signal assuming the worst eavesdropper mean channel. Again, we note that NS beamforming can still be introduced as an alternative solution against an eavesdropper with mean direction uncertainty. For this particular scenario, q_ns takes the same form as in (<ref>), yet in the direction u_* instead of u_e.
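When 𝒰 is available as a finite collection of candidate directions (e.g. a grid over a restricted azimuth sector), u_* of Proposition <ref> can be found by direct search; the half-wavelength ULA steering model and the sector below are illustrative assumptions in the spirit of Section <ref>.
```python
# Find u_* = argmax_{u in U} |u_b^H u| over a discretized uncertainty set U.
import numpy as np

Na = 4
def steer(phi_rad):  # half-wavelength ULA steering vector (assumed array model)
    a = np.exp(1j * np.pi * np.arange(Na) * np.sin(phi_rad))
    return a / np.linalg.norm(a)

u_b = steer(np.deg2rad(25.0))
U = [steer(np.deg2rad(phi)) for phi in np.arange(40.0, 90.0, 1.0)]  # restricted sector

u_star = max(U, key=lambda u: abs(u_b.conj() @ u))
print(abs(u_b.conj() @ u_star))  # the worst candidate is the one most aligned with u_b
```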
§ APPLICATION TO RICIAN FADING MIMO WIRETAP CHANNEL
In this section we consider a special class of MIMO wiretap channels which is well adapted to the class of compound wiretap channels considered in this paper. We study the Rician fading MIMO wiretap channel. In a Rician fading environment, the deterministic line of sight (LOS) component causes the channel variations to be centered around a mean matrix. This mean matrix is usually of unit rank, and its gain depends mainly on the distance between transmitter and receiver, the array configuration and the respective array orientation. In the next section we give the Rician fading MIMO channel model, and then, in Section <ref>, we describe the relation between the Rician MIMO wiretap channel and the compound wiretap channel described in (<ref>) and (<ref>).
§.§ Rician Fading MIMO Channel Model
A wireless MIMO channel with a dominant LOS component is best described by the Rician fading model. In the Rician fading model, the received signal can be decomposed into two components: one is the specular component originating from the LOS path and the other is the diffuse non-line-of-sight (NLOS) component. In the following, we give the mathematical model for the considered Rician MIMO wiretap channel, with the subscript ∘∈{b,e} denoting the legitimate and eavesdropper channels respectively.
𝐇_∘ = 𝐇_∘^los + 𝐇_∘^nlos,
where 𝐇_∘^los and 𝐇_∘^nlos represent the LOS and NLOS components respectively, and
𝐇_∘^los = √(γ_∘^2 k_∘/(1+k_∘))Ψ_∘, 𝐇_∘^nlos = √(γ_∘^2/(1+k_∘))𝐇̂_∘,
where γ_∘ quantifies the channel strength for both receiver and eavesdropper, k_∘ is the Rician factor that quantifies the contribution of the LOS component to the received signal, Ψ_∘ =𝐚(θ_∘)𝐚^†(ϕ_∘), 𝐚(θ_∘) and 𝐚(ϕ_∘) are the antenna array spatial signatures (steering vectors) at the receiver (eavesdropper) and transmitter respectively, and θ_∘ and ϕ_∘ are the angle of arrival (AoA) and angle of departure (AoD) of the transmitted signal respectively. Note that the AoD, ϕ, represents the azimuth angle of the receiver (eavesdropper) with respect to the transmitter antenna array. Meanwhile, 𝐇̂_∘ represents the channel coefficients matrix for the NLOS signal component.
§.§ Relation to the Compound Wiretap Channel
In the previous section we gave the mathematical description of the Rician fading MIMO wiretap channel. In this section we highlight the equivalence relation between this class of MIMO wiretap channels and the compound wiretap channel studied in this paper. Recalling the definition of the compound wiretap channel given in (<ref>) and (<ref>), it is straightforward to see that the following analogies hold:
𝐇_μ∘ ⇔𝐇_∘^los, λ_μ∘ ⇔ N_aN_∘γ_∘^2 k_∘/(1+k_∘),
v_∘ ⇔𝐚(θ_∘), u_∘ ⇔𝐚(ϕ_∘),
Δ𝐇_∘ ⇔𝐇_∘^nlos, ϵ_∘^2 ⇔ N_∘γ_∘^2/(1+k_∘).
Observe that in the setting of the Rician fading MIMO wiretap channel, the eavesdropper eigendirection, u_e, corresponds to the physical direction (in the azimuth plane) of the eavesdropper. Therefore, the assumption that u_e is known at the transmitter corresponds to the scenario in which the transmitter has prior knowledge of the eavesdropper azimuth direction, whereas the assumption that u_e ∈𝒰 corresponds to the scenario in which the transmitter does not know exactly the azimuth direction of the eavesdropper, but knows that the eavesdropper, if any, has restricted access to the communication area. That is, it can only approach the transmitter up to a certain distance and the receiver up to a certain azimuth direction.
§.§ Numerical results
For the sake of numerical evaluation, we use the established equivalence relation between the considered compound wiretap channel and the MIMO channel with Rician fading. We compare the performance of the optimal solution to our proposed NS beamforming solution. Given the azimuth directions of both eavesdropper and legitimate receiver, two other possible transmission schemes may come to mind: first, beamforming toward the intended receiver, which is well known as MRT; second, creating a deep null notch in the direction of the eavesdropper, which is well known as ZF. To evaluate the value of the mean information, we provide numerical simulation for an eavesdropper having the same parameters as the main receiver, i.e. N_a=N_b=N_e=4 and γ_b=γ_e=1, for the same Rician k factor, and thus we have ϵ_b=ϵ_e and λ_μ b=λ_μ e. We assume a uniform linear array configuration at all nodes with antenna spacing of half a wavelength. Meanwhile, we assume that the receiver and eavesdropper do not share the same azimuth direction, ϕ_b = 25^∘ and ϕ_e = 60^∘. As can be seen in Fig. (<ref>), we compare the achievable secrecy rate for the NS, MRT and ZF beamforming approaches against the optimal solution for different values of the Rician k factor. Simulation results show that NS beamforming performs extremely close to the optimum and outperforms both MRT and ZF over the entire SNR range. Although it may seem that the NS performance matches the optimal solution, we provide a zoomed-in picture at the upper left corner of Fig. (<ref>) to show that the achievable rate of NS beamforming is slightly below the secrecy capacity of the channel. It is observed that NS maintains a small gap to capacity, of order 10^-4, over the entire SNR range for all values of k.
§ DISCUSSION AND FUTURE WORK
The compound MIMO wiretap channel with double sided uncertainty was considered under the channel mean information model. The worst case main channel was shown to be anti-parallel to the channel mean information, resulting in an overall unit rank channel. Further, the worst eavesdropper channel was shown to be isotropic around its mean information. Accordingly, generalized eigenvector beamforming was shown to be the optimal signaling strategy. The saddle point property was shown to hold under the mean information model, and thus the compound secrecy capacity equals the worst case capacity over the class of uncertainty. Further, as the generalized eigenvector solution requires matrix inversion, we introduced NS beamforming, that is, transmission in the direction orthogonal to the eavesdropper mean channel direction that maintains the maximum possible gain in the mean main channel direction, as an alternative suboptimal solution. Extensive computer simulation revealed that NS performs extremely close to the optimal solution. It also verified the superiority of NS beamforming over both MRT and ZF approaches across the entire SNR range.
It is worth noting that the results for the compound wiretap channel are too conservative in general and, consequently, so are the results of this paper. That is due to the assumption that channel realizations remain constant over the entire transmission duration, leading us to the worst case optimization. While this assumption simplifies the mathematical analysis, it does not usually hold in practice. A more interesting scenario is to consider the compound wiretap channel with channel realizations allowed to change, possibly randomly, during the transmission duration.
§ PROOF OF PROPOSITION <REF>
We observe that
C(𝐖_b,𝐖_e,𝐐) = log|𝐈_N_a + 𝐖_b𝐐| - log|𝐈_N_a + 𝐖_e𝐐|
(a)=∑_i=1^N_alog(1 + λ_i(𝐖_b𝐐)) - log|𝐈_N_a + 𝐖_e𝐐|
(b)=∑_i=1^N_alog(1 + σ_i^2((𝐇_μ b+Δ𝐇_b)𝐐^1/2)) - log|𝐈_N_a + 𝐖_e𝐐|
(c)≥log(1 + (λ_μ b^1/2 - ϵ_b)_+^2λ_1(𝐐)) - log|𝐈_N_a + 𝐖_e𝐐|
(d)= C(𝐖_bw,𝐖_e,𝐐)
where (a) follows from determinant properties and (b) follows by recognizing that λ_i(𝐖_b𝐐) = σ_i^2((𝐇_μ b+Δ𝐇_b)𝐐^1/2), where σ_i(𝐀) is the i^th singular value of 𝐀. Meanwhile, (c) follows from the singular value inequality in Lemma 7 in <cit.>, that is, σ_i^2((𝐇_μ b+Δ𝐇_b)𝐐^1/2) ≥ (σ_i(𝐇_μ b) - σ_1(Δ𝐇_b))_+^2 λ_i(𝐐), where the summation reduces to a single term because σ_1(𝐇_μ b) = λ_μ b^1/2 and σ_i(𝐇_μ b) = 0 ∀ i>1.
§ PROOF OF PROPOSITION <REF>
It can be seen that
C(𝐖_bw,𝐖_e,𝐐) (a)=log|𝐈_N_a + (λ_μ b^1/2 - ϵ_b)_+^2 u_bu_b^†𝐐| - log|𝐈_N_a + 𝐖_e𝐐|
(b)=log[ (1 + λ((λ_μ b^1/2 - ϵ_b)_+^2 u_bu_b^†𝐐)) / (1 + σ^2((𝐇_μ e + Δ𝐇_e)𝐐^1/2)) ]
(c)=log[ (1 + λ((λ_μ b^1/2 - ϵ_b)_+^2 u_bu_b^†𝐐)) / (1 + σ^2(𝐇_μ e𝐐^1/2 + Δ𝐇_e 𝐐^1/2)) ]
(d)≥log[ (1 + λ((λ_μ b^1/2 - ϵ_b)_+^2 u_bu_b^†𝐐)) / (1 + (σ(𝐇_μ e𝐐^1/2) + σ (Δ𝐇_e 𝐐^1/2))^2) ]
where (a) follows by direct substitution in (<ref>) with 𝐖_bw given in Proposition <ref>, (b) follows since 𝐐 is of unit rank and λ(𝐖_b𝐐) = σ^2((𝐇_μ b+Δ𝐇_b)𝐐^1/2), and (c) is straightforward. Meanwhile, the upper bound in (d) follows since σ(A+B) ≤σ(A) + σ(B) for unit rank matrices A and B, where the inequality holds with equality when A and B have the same singular vectors. Therefore, the inequality in (d) is established with equality if 𝐇_μ e and Δ𝐇_e have the same singular vectors. Let us write 𝐇_μ e = VΣ_μ eU^†, where the first columns of V and U are v_e and u_e respectively, and Σ_μ e = diag{λ_μ e^1/2,0,..,0}. Hence, to establish (d) with equality, Δ𝐇_e needs to have V and U as its left and right singular vectors respectively. Consequently, we can write 𝐇_e= VΣ_eU^†, and hence 𝐖_e = UΣ_e^2U^†. But since ‖Δ𝐇_e‖_2 ≤ϵ_e, the singular values of Δ𝐇_e are bounded by ϵ_e; accordingly, Σ_e ≼Σ_ew, where Σ_ew = diag{λ_μ e^1/2+ϵ_e, ϵ_e,...,ϵ_e}. Noting that the function log|𝐈+𝐖𝐐| is monotonically increasing in 𝐖, we conclude that 𝐖_ew = U Σ_ew^2U^†. However, we can write 𝐖_ew as (λ_μ e+ 2 λ_μ e^1/2ϵ_e) u_e u_e^† + ϵ_e^2 𝐈, as required.
§ PROOF OF THEOREM <REF>
To establish the saddle point property, we give a proof similar to the one given in Theorem 6 in <cit.>, while keeping in mind the difference between the compound wiretap channel defined there and the one defined in (<ref>) and (<ref>). Let 𝐐^* be the optimal solution of the left hand side max-min problem; we observe that showing the saddle point property in (<ref>) is equivalent to showing that <cit.>:
C(𝐖_bw,𝐖_ew,𝐐) (a)≤ C(𝐖_bw,𝐖_ew,𝐐^*)
(b)≤ C(𝐖_b,𝐖_e,𝐐^*),
where 𝐖_ew and 𝐖_bw are as given in propositions <ref> and <ref> respectively. Note that (a) follows since 𝐐^* is optimal for 𝐖_b = 𝐖_bw and 𝐖_e = 𝐖_ew. Now we write:
C(𝐖_bw,𝐖_ew,𝐐^*) (a)=log(1 + σ_1^2((λ_μ b^1/2 - ϵ_b)_+ u_bu_b^†𝐐^*1/2))
- log|𝐈+𝐖_ew𝐐^*|
(b)≤∑_i=1^N_alog(1 + σ_i^2((𝐇_μ b+Δ𝐇_b)𝐐^*1/2))
-log|𝐈+𝐖_ew𝐐^*|
(c)= C(𝐖_b,𝐖_ew,𝐐^*)
(d)≤ C(𝐖_b,𝐖_e,𝐐^*)
where (a) follows by direct substitution of 𝐖_bw, (b) and (c) follow from (<ref>), and we used (<ref>) to write (d). Since the difference channel is at most of unit rank, beamforming toward the eigenvector associated with the largest eigenvalue of (𝐈_N_a + P 𝐖_ew)^-1(𝐈_N_a + P 𝐖_bw) follows by Corollary 1 in <cit.> and Theorem 6 in <cit.>.
§ PROOF OF PROPOSITION <REF>
Observe that 𝐇_μ e^†𝐇_μ e = λ_μ euu^† for some u ∈𝒰. Thus, the result of Proposition <ref> can be established in a fashion similar to the proof of Proposition <ref>, by realizing that
min_𝐖_e : 𝐇_e ∈𝒮_e C( 𝐖_bw,𝐖_e(u),𝐐) =
min_u ∈𝒰 C(𝐖_bw,(λ_μ e+ 2 λ_μ e^1/2ϵ_e) u u^† + ϵ_e^2 𝐈,𝐐).
Thus, taking the minimum of (<ref>) over u while dropping the constraint u ∈𝒰, the minimum is attained when u=u_b. Meanwhile, with the constraint in place, the minimum is attained at the u_* ∈𝒰 which has the minimum distance to u_b. Equivalently,
u_* = arg min_u ∈𝒰 ‖u_b-u‖
= arg max_u ∈𝒰 | u_b^† u | ,
which agrees with (<ref>).
§ JUSTIFICATION FOR NS BEAMFORMING
To understand the motivation behind introducing NS beamforming as an alternative solution, we write 𝐐 = P qq^†. Now, it can be seen that:
C_c^* = max_𝐐≽0, 𝐭𝐫(𝐐) ≤ P C(𝐖_bw,𝐖_ew,𝐐)
(a)=max_𝐐≽0, 𝐭𝐫(𝐐) ≤ Plog[ |𝐈+𝐖_bw𝐐| / |𝐈+𝐖_ew𝐐| ]
(b)=max_𝐐≽0, 𝐭𝐫(𝐐) ≤ Plog[ |𝐈+(λ_μ b^1/2 - ϵ_b)_+^2 u_bu_b^†𝐐| / |𝐈+((λ_μ e+ 2 λ_μ e^1/2ϵ_e) u_e u_e^† + ϵ_e^2 𝐈 )𝐐| ]
(c)=max_q, ‖q‖=1log[ |𝐈+P(λ_μ b^1/2 - ϵ_b)_+^2 u_bu_b^†qq^†| / |𝐈+P((λ_μ e+ 2 λ_μ e^1/2ϵ_e)u_eu_e^†qq^† + ϵ_e^2qq^† )| ]
(d)=max_q, ‖q‖=1log[ (1 +P(λ_μ b^1/2 - ϵ_b)_+^2 |u_b^†q|^2) / (1+P((λ_μ e+ 2 λ_μ e^1/2ϵ_e)|u_e^†q|^2 + ϵ_e^2)) ]
where (a) follows by direct substitution of 𝐖_bw and 𝐖_ew in (<ref>), and (b) follows by substituting the values of 𝐖_bw and 𝐖_ew. In (c) we used that 𝐐 is of unit rank, hence it has only one nonzero eigenvalue equal to P with corresponding eigenvector q; we also replaced the power constraint by the constraint ‖q‖=1. Since both the numerator and the denominator are of unit rank, (d) follows from (c) by substituting the single nonzero eigenvalue of each. Now observe that the choice of q does not affect the eigenvalue of the matrix ϵ_e^2qq^†, but it does affect the eigenvalues of the other matrices. Clearly, our objective is to find the q that simultaneously maximizes the numerator and minimizes (optimally, nulls out) the denominator in (<ref>(d)). The optimal q_* that maximizes C_c is given by Theorem <ref>. However, we note that q_ns in (<ref>) is the optimal solution to the following optimization problem
max_q: ‖q‖=1 |u_b^† q|
subject to u_e^† q = 0,
i.e., beamforming in the direction q_ns maximizes the gain in the direction u_b while creating a null notch in the direction u_e.
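To make the null-steering direction concrete, the following minimal numerical sketch (ours, not part of the paper; names and values are illustrative) computes q_ns by projecting u_b onto the orthogonal complement of u_e, which is the standard closed form q_ns ∝ (𝐈 - u_eu_e^†)u_b of the constrained problem above.

```python
import numpy as np

def ns_beamformer(u_b, u_e):
    """Null-steering beamformer: project u_b onto the orthogonal
    complement of u_e and renormalize, so that u_e^H q = 0 while the
    gain |u_b^H q| is maximized under that null constraint."""
    u_b = u_b / np.linalg.norm(u_b)
    u_e = u_e / np.linalg.norm(u_e)
    P_perp = np.eye(len(u_e)) - np.outer(u_e, u_e.conj())  # projector onto span{u_e}^perp
    q = P_perp @ u_b
    return q / np.linalg.norm(q)

# Toy check with random complex directions
rng = np.random.default_rng(0)
u_b = rng.standard_normal(4) + 1j * rng.standard_normal(4)
u_e = rng.standard_normal(4) + 1j * rng.standard_normal(4)
q_ns = ns_beamformer(u_b, u_e)
print(abs(np.vdot(u_e, q_ns)))  # ~0: null toward the eavesdropper
print(abs(np.vdot(u_b, q_ns)))  # remaining gain toward the legitimate receiver
```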
|
http://arxiv.org/abs/1701.08105v2 | 20170127163427 | Introduction to the theory of Gibbs point processes | [
"David Dereudre"
] | math.PR | [
"math.PR"
] |
Introduction to the theory of Gibbs point processes
DEREUDRE David, University Lille 1, david.dereudre@univ-lille1.fr
The Gibbs point processes (GPP) constitute a large class of point processes with interaction between the points. The interaction can be attractive, repulsive, depending on geometrical features whereas the null interaction is associated with the so-called Poisson point process. In a first part of this mini-course, we present several aspects of finite volume GPP defined on a bounded window in ^d. In a second part, we introduce the more complicated formalism of infinite volume GPP defined on the full space ^d. Existence, uniqueness and non-uniqueness of GPP are non-trivial questions which we treat here with completely self-contained proofs. The DLR equations, the GNZ equations and the variational principle are presented as well. Finally we investigate the estimation of parameters. The main standard estimators (MLE, MPLE, Takacs-Fiksel and variational estimators) are presented and we prove their consistency. For sake of simplicity, during all the mini-course, we consider only the case of finite range interaction and the setting of marked points is not presented.
§ INTRODUCTION
Spatial point processes are well-studied objects in probability theory and statistics for modelling and analysing spatial data which appear in several disciplines such as statistical mechanics, material science, astronomy, epidemiology, plant ecology, seismology, telecommunication, and others <cit.>. There exist many models of such random point configurations in space and the most popular one is surely the Poisson point process. It corresponds to the natural way of producing independent locations of points in space without interaction. For dependent random structures, we can mention for instance the Cox processes, determinantal point processes, Gibbs point processes, etc. None of them is established as the most relevant model for applications. In fact the choice of the model depends on the nature of the dataset, the knowledge of (physical or biological) mechanisms producing the pattern, and the aim of the study (theoretical, applied or numerical).
In this mini-course, we focus on Gibbs point processes (GPP), which constitute a large class of point processes, able to fit several kinds of patterns and providing a clear interpretation of the interaction between the points, such as attraction or repulsion depending on their relative position. Note that this class is particularly large since several point processes can be represented as GPP (see <cit.> for instance). The main disadvantage of GPP is the complexity of the model due to an intractable normalizing constant which appears in the local conditional densities. Therefore their analytical studies are in general based on implicit equilibrium equations which lead to complicated and delicate analysis. Moreover, the theoretical results which are needed to investigate the Gibbs point process theory are scattered across several publications or books. The aim of this mini-course is to provide a solid and self-contained theoretical basis for understanding deeply the Gibbs point process theory. The results are in general not exhaustive but the main ideas and tools are presented in accordance with modern and recent developments. The main strong restriction here involves the range of the interaction, which is assumed to be finite. The infinite range interaction requires the introduction of tempered configuration spaces and for sake of simplicity we decided to avoid this level of complexity. The mini-course is addressed to Master and PhD students and also to researchers who want to discover or investigate the domain. The manuscript is based on a mini-course given during the conference of the GDR 3477 géométrie stochastique, at the University of Nantes in April 2016.
In a first section, we introduce the finite volume GPP on a bounded window Ł⊂ℝ^d. They are simply defined as point processes in Ł whose distributions are absolutely continuous with respect to the Poisson point process distribution. The unnormalized densities are of the form z^N e^-β H, where z and β are positive parameters (called respectively activity and inverse temperature), N is the number of points and H an energy function. Clearly, these distributions favour (or penalize) configurations with low (or high) energy H. This distortion strengthens as β increases. The parameter z allows one to tune the mean number of points. This setting is relatively simple since all the objects are defined explicitly. However, the intractable normalization constant is ever a problem and most quantities are not computable. Several standard notions (DLR and GNZ equations, Ruelle's estimates, etc.) are treated in this first section as a preparation for the more complicated setting of infinite volume GPP developed in the second section. Note that we do not present the setting of marked Gibbs point processes in order to keep the notations as simple as possible. However, all the results can be easily extended to this case.
In a second section, we present the theory of infinite volume GPP in ℝ^d. There are several motivations for studying such an infinite volume regime. Firstly, the GPP are the standard models in statistical physics for modelling systems with a large number of interacting particles (around 10^23 according to Avogadro's number). Therefore, the case where the number of particles is infinite is an idealization of this setting and furnishes microscopic descriptions of gas, liquid or solid. Macroscopic quantities like the density of particles, the pressure and the mean energy are consequently easily defined by mean values or laws of large numbers. Secondly, in the spatial statistics context, the asymptotic properties of estimators or tests are obtained when the observation window tends to the full space ℝ^d. This strategy requires the existence of infinite volume models. Finally, since the infinite volume GPP are stationary (shift invariant) in ℝ^d, several powerful tools, as the ergodic theorem or the central limit theorem for mixing fields, are available in this infinite volume regime.
The infinite volume Gibbs measures are defined by a collection of implicit DLR equations (Dobrushin, Lanford and Ruelle). The existence, uniqueness and non-uniqueness are non-trivial questions which we treat in depth with self-contained proofs in this second section. The phase transition between uniqueness and non-uniqueness is one of the most difficult conjectures in statistical physics. This phenomenon is expected to occur for all standard interactions although it is proved rigorously for only a few models. The area interaction is one such model and the complete proof of its phase transition is given here. The GNZ equations and the variational principle are discussed as well.
In the last section, we investigate the estimation of parameters which appear in the distribution of GPP. For sake of simplicity we deal only with the activity parameter z and the inverse temperature β. We present several standard procedures (MLE, MPLE, Takacs-Fiksel procedure) and a new variational procedure. We show the consistency of estimators, which highlights that many theoretical results are possible in spite of the lack of explicit computations. We will see that the GNZ equations play a crucial role in this task. For sake of simplicity the asymptotic normality is not presented but some references are given.
Let us finish this introduction by giving standard references. Historically, the GPP have been introduced for statistical mechanics considerations and an unavoidable reference is the book by Ruelle <cit.>. Important theoretical contributions are also developed in two Lecture Notes <cit.> by Georgii and Preston. For the relations between GPP and stochastic geometry, we can mention the book <cit.> by Chiu et al., and for spatial statistics and numerical considerations, the book by Møller and Waagepetersen <cit.> is the standard reference. Let us mention also the book <cit.> by van Lieshout on the applications of GPP.
§ FINITE VOLUME GIBBS POINT PROCESSES
In this first section we present the theory of Gibbs point processes on a bounded set Ł⊂ℝ^d. A Gibbs point process (GPP) is a point process with interactions between the points defined via an energy functional on the space of configurations. Roughly speaking, the GPP produces random configurations for which configurations with low energy have a higher chance to appear than configurations with high energy (see Definition <ref>). In Section <ref> we recall succinctly some definitions of point process theory and we introduce the reference Poisson point process. The energy functions are discussed in Section <ref> and the definition of finite volume GPP is given in Section <ref>. Some first properties are presented as well. The central DLR equations and GNZ equations are treated in Sections <ref> and <ref>. Finally we finish the first section by giving Ruelle estimates in the setting of superstable and lower regular energy functions.
§.§ Poisson point process
In this first section, we describe briefly the setting of point process theory and we introduce the reference Poisson point process. We only give the main definitions and concepts and we suggest <cit.> for a general presentation.
The space of configurations 𝒞 is defined as the set of locally finite subsets of ℝ^d:

𝒞={γ⊂ℝ^d, γ_Ł:=γ∩Ł is finite for any bounded set Ł⊂ℝ^d}.

Note that we consider only simple point configurations, which means that the points do not overlap. We denote by 𝒞_f the space of finite configurations in ℝ^d and by 𝒞_Ł the space of finite configurations inside Ł⊂ℝ^d.

The space 𝒞 is equipped with the sigma-field 𝒮 generated by the counting functions N_Ł for all bounded measurable Ł⊂ℝ^d, where N_Ł: γ↦#γ_Ł. A point process Γ is then simply a measurable function from a probability space (Ω,ℱ,P) to (𝒞,𝒮). As usual, the distribution (or the law) of a point process Γ is defined as the image of P on (𝒞,𝒮) by the map Γ. We say that Γ has finite intensity if, for any bounded set Ł, the expectation μ(Ł):=E(N_Ł(Γ)) is finite. In this case, μ is a sigma-finite measure called the intensity measure of Γ. When μ=zλ^d, where λ^d is the Lebesgue measure on ℝ^d and z>0 a positive real, we simply say that Γ has finite intensity z.
The main class of point processes is the family of Poisson point processes, which furnish the natural way of producing independent points in space. Let μ be a sigma-finite measure on ℝ^d. A Poisson point process with intensity μ is a point process Γ such that, for any bounded Ł in ℝ^d, both of the following properties occur

* The random variable N_Ł(Γ) is distributed following a Poisson distribution with parameter μ(Ł).

* Given the event {N_Ł(Γ)=n}, the n points of Γ_Ł are independent and distributed following the distribution μ_Ł/μ(Ł).

The distribution of such a Poisson point process is denoted by π^μ. When the intensity is μ=zλ^d, we say that the Poisson point process is stationary (or homogeneous) with intensity z>0, and denote its distribution by π^z. For any measurable set Ł⊂ℝ^d, we denote by π_Ł^z the distribution of a Poisson point process with intensity zλ^d_Ł, which is also the distribution of a stationary Poisson point process with intensity z restricted to Ł. For sake of brevity, π and π_Ł denote the distributions of Poisson point processes with intensity z=1.
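The two defining properties above translate directly into a two-step sampling algorithm: first draw a Poisson number of points, then place them independently and uniformly. The following sketch (ours, a minimal illustration, not part of the original text) simulates a stationary Poisson point process of intensity z on a box in ℝ^d.

```python
import numpy as np

def sample_poisson_pp(z, a, b, rng=None):
    """Homogeneous Poisson point process of intensity z on the box
    prod_i [a_i, b_i]: draw N ~ Poisson(z * volume), then N i.i.d.
    uniform points in the box."""
    rng = np.random.default_rng() if rng is None else rng
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = rng.poisson(z * np.prod(b - a))
    return a + (b - a) * rng.random((n, len(a)))

pts = sample_poisson_pp(50, [0.0, 0.0], [1.0, 1.0])  # z = 50 on the unit square
print(len(pts), "points")
```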
§.§ Energy functions
In this section, we present the energy functions with the standard assumptions which we assume in this mini-course. The choices of energy functions come from two main motivations. First, the GPP are natural models in statistical physics for modelling continuum interacting particles systems. In general, in this setting the energy function is a sum of the energy contribution of all pairs of points (see expression (<ref>)). The GPP are also used in spatial statistics to fit as best as possible the real datasets. So, in a first step, the energy function is chosen by the user with respect to the characteristics of the dataset. Then the parameters are estimated in a second step.
An energy function is a measurable function

H: 𝒞_f → ℝ∪{+∞}

such that the following assumptions hold

* H is non-degenerate:

H(∅)<+∞.

* H is hereditary: for any γ∈𝒞_f and x∈γ,

H(γ)<+∞ ⇒ H(γ\{x})<+∞.

* H is stable: there exists a constant A such that for any γ∈𝒞_f

H(γ)≥ A N_ℝ^d(γ).
The stability implies that the energy is bounded from below by a linear function of the number of points. If the energy function H is positive then the choice A=0 works, but in the interesting cases the constant A is negative. Heredity means that the set of allowed configurations (configurations with finite energy) is stable when points are removed. The non-degeneracy is very natural. Without this assumption, the energy would be equal to infinity everywhere (by heredity).
1) Pairwise interaction. Let us start with the most popular energy function, which is based on a function (called pair potential)

φ: ℝ^+ → ℝ∪{+∞}.

The pairwise energy function is defined for any γ∈𝒞_f by

H(γ)=∑_{x,y}⊂γ φ(|x-y|).
Note that such an energy function is trivially hereditary and non-degenerate. The stability is more delicate and we refer to general results in <cit.>. However if φ is positive the result is obvious.
A standard example coming from statistical physics is the so-called Lennard-Jones pair potential where φ(r)=ar^-12+b r^-6 with a>0 and b∈. In the interesting case b<0, the pair potential φ(r) is positive (repulsive) for small r and negative (attractive) for large r. The stability is not obvious and is proved in Proposition 3.2.8 in <cit.>.
The Strauss interaction corresponds to the pair potential φ(r)=1_[0,R](r), where R>0 is a support parameter. This interaction exhibits a constant repulsion between the particles at distance smaller than R. This simple model is very popular in spatial statistics.
The multi-Strauss interaction corresponds to the pair potential
φ(r)=∑_i=1^k a_i 1_]R_i-1,R_i](r),
where (a_i)_1≤ i≤ k is a sequence of real numbers and 0=R_0<R_1<…<R_k a sequence of increasing real numbers. Clearly, the pair potential exhibits a constant attraction or repulsion at different scales. The stability occurs provided that the parameter a_1 is large enough (see Section 3.2 in <cit.>).
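For concreteness, here is a short sketch (ours; parameter values are illustrative) of how a pairwise energy H(γ)=∑_{x,y}⊂γ φ(|x-y|) can be evaluated for a finite configuration, instantiated with the Strauss pair potential, for which H simply counts the R-close pairs.

```python
import numpy as np
from scipy.spatial.distance import pdist

def pairwise_energy(points, phi):
    """H(gamma) = sum over unordered pairs {x, y} of phi(|x - y|)."""
    if len(points) < 2:
        return 0.0
    return float(np.sum(phi(pdist(points))))

def strauss(r, R=0.1):
    """Strauss pair potential phi(r) = 1_[0, R](r)."""
    return (r <= R).astype(float)

pts = np.random.default_rng(1).random((100, 2))  # 100 points in [0, 1]^2
print(pairwise_energy(pts, strauss))             # number of pairs at distance <= R
```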
2) Energy functions coming from geometrical objects. Several energy functions are based on local geometrical characteristics. The main motivation is to provide random configurations such that special geometrical features appear with higher probability under the Gibbs processes than under the original Poisson point process. In this paragraph we give examples related to the Delaunay-Voronoi diagram. Obviously other geometrical graph structures could be considered.
Let us recall that for any x∈γ∈𝒞_f the Voronoi cell C(x,γ) is defined by

C(x,γ)={w∈ℝ^d, such that ∀ y∈γ, |w-x|≤ |w-y|}.

The Delaunay graph with vertex set γ is defined by considering the edges

D(γ)={{x,y}⊂γ such that C(x,γ)∩ C(y,γ)≠∅}.
See <cit.> for a general presentation on the Delaunay-Voronoi tessellations.
A first geometric energy function can be defined by

H(γ)=∑_x∈γ 1_{C(x,γ) is bounded} φ(C(x,γ)),

where φ is any function from the space of polytopes in ℝ^d to ℝ. Examples of such functions φ are the Area, the (d-1)-Hausdorff measure of the boundary, the number of faces, etc. Clearly these energy functions are non-degenerate and hereditary. The stability holds as soon as the function φ is bounded from below.
Another kind of geometric energy function can be constructed via a pairwise interaction along the edges of the Delaunay graph. Let us consider a finite pair potential φ: ℝ^+ → ℝ. Then the energy function is defined by

H(γ)=∑_{x,y}∈ D(γ) φ(|x-y|)
which is again clearly non-degenerate and hereditary. The stability occurs in dimension d=2 thanks to Euler's formula. Indeed the number of edges in the Delaunay graph is linear with respect to the number of vertices. Therefore the energy function is stable as soon as the pair potential φ is bounded from below. In higher dimension d>2, the stability is more complicated and not really understood. Obviously, if φ is positive, the stability occurs.
Let us give a last example of geometric energy function which is not based on the Delaunay-Voronoi diagram but on a germ-grain structure. For any radius R>0 we define the germ-grain structure of γ∈𝒞 by

L_R(γ)=⋃_x∈γ B(x,R),
where B(x,R) is the closed ball centred at x with radius R. Several interesting energy functions are built from this germ-grain structure. First the Widom-Rowlinson interaction is simply defined by
H(γ)=Area(L_R(γ)),
where the "Area" is simply the Lebesgue measure λ^d. This model is very popular since it is one of a few models for which the phase transition result is proved (see Section <ref>). This energy function is sometimes called Area-interaction <cit.>. If the Area functional is replaced by any linear combination of the Minkowski functionals we obtain the Quermass interaction <cit.>.
Another example is the random cluster interaction defined by

H(γ)=Ncc(L_R(γ)),

where Ncc denotes the functional which counts the number of connected components. This energy function was first introduced in <cit.> for its relations with the Widom-Rowlinson model. See also <cit.> for a general study in the infinite volume regime.
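Since Area(L_R(γ)) has no simple closed form for overlapping balls, it is typically evaluated by Monte Carlo in practice. The sketch below (ours, with illustrative values) estimates Area(L_R(γ)∩[0,1]^2) in dimension d=2 as the fraction of uniform test points covered by the germ-grain set.

```python
import numpy as np

def area_LR_mc(points, R, n_mc=100_000, rng=None):
    """Monte Carlo estimate of Area(L_R(gamma) ∩ [0,1]^2): the fraction
    of uniform test points lying within distance R of some x in gamma."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.random((n_mc, 2))
    d2 = ((u[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return float(((d2 <= R * R).any(axis=1)).mean())

pts = np.random.default_rng(2).random((30, 2))
print(area_LR_mc(pts, 0.05))  # estimated area of the union of balls in [0,1]^2
```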
§.§ Finite Volume GPP
Let Ł⊂ℝ^d such that 0<λ^d(Ł)<+∞. In this section we define the finite volume GPP on Ł and we give its first properties.
The finite volume Gibbs measure on Ł with activity z>0, inverse temperature β≥ 0 and energy function H is the distribution
P_Ł^z,β=1/Z_Ł^z,β z^N_Ł e^-β Hπ_Ł,
where Z_Ł^z,β, called the partition function, is the normalization constant ∫ z^N_Łe^-β H dπ_Ł. A finite volume Gibbs point process (GPP) on Ł with activity z>0, inverse temperature β≥ 0 and energy function H is a point process on Ł with distribution P_Ł^z,β.
Note that P_Ł^z,β is well-defined since the partition function Z_Ł^z,β is positive and finite. Indeed, thanks to the non-degeneracy of H

Z_Ł^z,β ≥ π_Ł({∅}) e^-β H(∅) = e^-λ^d(Ł) e^-β H(∅) > 0

and thanks to the stability of H

Z_Ł^z,β ≤ e^-λ^d(Ł) ∑_n=0^+∞ (z e^-βA λ^d(Ł))^n/n! = e^λ^d(Ł)(z e^-βA - 1) < +∞.
In the case β=0, we recover that P_Ł^z,β is the Poisson point process π_Ł^z. So the activity parameter z is the mean number of points per unit volume when the interaction is null. When the interaction is active (β>0), P_Ł^z,β favours the configurations with low energy and penalizes the configurations with high energy. This distortion strengthens as β increases.
There are many motivations for the exponential form of the density in (<ref>). Historically, it is due to the fact that the finite volume GPP solves the variational principle of statistical physics. Indeed, P_Ł^z,β is the unique probability measure which realizes the minimum of the free excess energy, equal to the mean energy plus the entropy. It expresses the common idea that the equilibrium states in statistical physics minimize the energy and maximize the "disorder". This result is presented in the following proposition. Recall first that the relative entropy of a probability measure P on _Ł with respect to the Poisson point process π_Ł^ is defined by
I(P|π_Ł^z)= { ∫log(f)dP if P≪π_Ł^z with f=dP/dπ_Ł^z ; +∞ otherwise. }
Let H be an energy function, z>0, β≥ 0. Then
{P_Ł^z,β}=argmin_P∈𝒫_Ł { β E_P(H)-log(z)E_P(N_Ł)+I(P|π_Ł) },

where 𝒫_Ł is the space of probability measures on 𝒞_Ł with finite intensity and E_P(H) is the expectation of H under P, which is always defined (maybe equal to infinity) since H is stable.
First we note that

β E_P_Ł^z,β(H)-log(z)E_P_Ł^z,β(N_Ł)+I(P_Ł^z,β|π_Ł)
= β∫ H dP_Ł^z,β -log(z)E_P_Ł^z,β(N_Ł)+∫log(z^N_Ł e^-β H/Z_Ł^z,β) dP_Ł^z,β
= -log(Z_Ł^z,β).

This equality shows that the functional evaluated at P_Ł^z,β equals -log(Z_Ł^z,β). So it remains to show, for any P∈𝒫_Ł such that E_P(H)<+∞ and I(P|π_Ł)<+∞, that β E_P(H)-log(z)E_P(N_Ł)+I(P|π_Ł)≥ -log(Z_Ł^z,β), with equality if and only if P=P_Ł^z,β. Let f be the density of P with respect to π_Ł. Then
log(Z_Ł^z,β) ≥ log(∫_{f>0} z^N_Ł e^-β H dπ_Ł)
= log(∫ z^N_Ł e^-β H f^-1 dP)
≥ ∫log(z^N_Ł e^-β H f^-1) dP
= log(z)E_P(N_Ł)-β E_P(H)-∫log(f) dP.
The second inequality, due to Jensen's inequality, is an equality if and only if z^N_Ł e^-β H f^-1 is P-a.s. constant, which is equivalent to P=P_Ł^z,β. The proposition is proved.
The parameters z and β allow one to tune the mean number of points and the mean value of the energy under the GPP. Indeed, when z increases, the mean number of points increases as well, and similarly, when β increases, the mean energy decreases. This phenomenon is expressed in the following proposition. The proof is a simple computation of derivatives.
Let us note that it is not easy to tune both parameters simultaneously since the mean number of points changes when β is modified (and vice versa). The estimation of the parameters z and β is discussed in the last Section <ref>.
The function z↦ E_P_Ł^z,β(N_Ł) is continuous and differentiable, with derivative z↦ Var_P_Ł^z,β(N_Ł)/z on (0,+∞). Similarly the function β↦ E_P_Ł^z,β(H) is continuous and differentiable with derivative β↦ -Var_P_Ł^z,β(H) on ℝ^+.
Let us finish this section by explaining succinctly how to simulate such finite volume GPP. There are essentially two algorithms. The first one is based on a MCMC procedure where GPP are viewed as equilibrium states of Markov chains. The simulation is obtained by running the Markov chain for a long enough time. The simulation is not exact and the error is essentially controlled via a monitoring approach (see <cit.>). The second one is a coupling from the past algorithm which provides exact simulations. However, the computation time is often very long and these algorithms are not much used in practice (see <cit.>).
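For illustration, here is a minimal sketch (ours, with illustrative values) of the first approach: a birth-death Metropolis-Hastings chain whose equilibrium distribution is P_Ł^z,β, instantiated for the Strauss interaction on the unit square. The acceptance ratios involve the local energy h only through the Papangelou intensity z e^-β h(x,γ), so the intractable constant Z_Ł^z,β never appears.

```python
import numpy as np

def local_energy(x, points, R):
    """h(x, gamma) for the Strauss interaction: number of points of gamma
    within distance R of x."""
    if len(points) == 0:
        return 0.0
    return float(np.sum(((points - x) ** 2).sum(-1) <= R * R))

def birth_death_mh(z, beta, R, side=1.0, n_steps=50_000, rng=None):
    """Birth-death MH chain targeting the density z^N e^{-beta H} with
    respect to the unit-rate Poisson process on [0, side]^2."""
    rng = np.random.default_rng(0) if rng is None else rng
    vol = side ** 2
    pts = np.empty((0, 2))
    for _ in range(n_steps):
        n = len(pts)
        if rng.random() < 0.5:  # propose a birth at a uniform location
            x = side * rng.random(2)
            if rng.random() < z * vol * np.exp(-beta * local_energy(x, pts, R)) / (n + 1):
                pts = np.vstack([pts, x])
        elif n > 0:             # propose the death of a uniformly chosen point
            i = rng.integers(n)
            rest = np.delete(pts, i, axis=0)
            if rng.random() < n * np.exp(beta * local_energy(pts[i], rest, R)) / (z * vol):
                pts = rest
    return pts

sample = birth_death_mh(z=100, beta=1.0, R=0.05)
print(len(sample), "points in the (approximate) Strauss sample")
```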
§.§ DLR equations
The DLR equations are due to Dobrushin, Lanford and Ruelle and give the local conditional distributions of GPP in any bounded window Δ given the configuration outside Δ. We need to define a family of local energy functions (H_Δ), indexed by the bounded sets Δ⊂ℝ^d.

For any bounded set Δ and any finite configuration γ∈𝒞_f we define

H_Δ(γ) := H(γ)-H(γ_Δ^c),

with the convention ∞-∞=0.
The quantity H_Δ(γ) gives the energetic contribution of the points in γ_Δ to the energy of γ. As an example, let us compute these quantities in the setting of the pairwise interaction introduced in (<ref>):

H_Δ(γ) = ∑_{x,y}⊂γ φ(|x-y|) - ∑_{x,y}⊂γ_Δ^c φ(|x-y|) = ∑_{x,y}⊂γ, {x,y}∩γ_Δ≠∅ φ(|x-y|).

Note that H_Δ(γ) does not depend only on the points in Δ. However, trivially we have H(γ)=H_Δ(γ)+H(γ_Δ^c), which shows that the energy of γ is the sum of the energy H_Δ(γ) plus something which does not depend on γ_Δ.
Let Δ⊂Ł be two bounded sets in ℝ^d with λ^d(Δ)>0. Then for P_Ł^z,β-a.s. all γ_Δ^c

P_Ł^z,β(dγ_Δ|γ_Δ^c) = 1/Z_Δ^z,β(γ_Δ^c) z^N_Δ(γ) e^-β H_Δ(γ) π_Δ(dγ_Δ),

where Z_Δ^z,β(γ_Δ^c) is the normalizing constant ∫ z^N_Δ(γ_Δ) e^-β H_Δ(γ_Δ∪γ_Δ^c) π_Δ(dγ_Δ). In particular the right-hand term in (<ref>) does not depend on Ł.
From the definition of H_Δ and the stochastic properties of the Poisson point process we have

P_Ł^z,β(dγ) = 1/Z_Ł^z,β z^N_Ł(γ) e^-β H(γ) π_Ł(dγ)
= 1/Z_Ł^z,β z^N_Δ(γ) e^-β H_Δ(γ) z^N_Ł\Δ(γ) e^-β H(γ_Ł\Δ) π_Δ(dγ_Δ) π_Ł\Δ(dγ_Ł\Δ).

This expression ensures that the unnormalized conditional density of P_Ł^z,β(dγ_Δ|γ_Δ^c) with respect to π_Δ(dγ_Δ) is γ_Δ ↦ z^N_Δ(γ) e^-β H_Δ(γ). The normalization by Z_Δ^z,β(γ_Δ^c) is necessary and the proposition is proved.
The DLR equations give the local conditional marginal distributions of GPP. They are the main tool to understand the local description of P_Ł^z,β, in particular when Ł is large. Note that the local marginal distributions (not conditional) are in general not accessible. It is a difficult point of the theory of GPP. This fact will be reinforced in the infinite volume regime, where the local distributions can be non-unique.
The DLR equations have a major issue due to the intractable normalization constant Z_Δ^z,β(γ_Δ^c). In the next section the problem is partially solved via the GNZ equations.
§.§ GNZ equations
The GNZ equations are due to Georgii, Nguyen and Zessin and were first introduced in <cit.>. They generalize the Slivnyak-Mecke formulas for Poisson point processes. In this section we present and prove these equations. We first need to define the energy of a point inside a configuration.
Let γ∈𝒞_f be a finite configuration and x∈ℝ^d. Then the local energy of x in γ is defined by

h(x,γ)=H({x}∪γ)-H(γ),

with the convention +∞-(+∞)=0.

Note that if x∈γ then h(x,γ)=0.
For any positive measurable function f from ℝ^d×𝒞_f to ℝ,

∫∑_x∈γ f(x,γ\{x}) P_Ł^z,β(dγ) = z ∫∫_Ł f(x,γ) e^-β h(x,γ) dx P_Ł^z,β(dγ).
Let us decompose the left term in (<ref>).

∫∑_x∈γ f(x,γ\{x}) P_Ł^z,β(dγ)
= 1/Z_Ł^z,β ∫∑_x∈γ f(x,γ\{x}) z^N_Ł(γ) e^-β H(γ) π_Ł(dγ)
= e^-λ^d(Ł)/Z_Ł^z,β ∑_n=1^+∞ z^n/n! ∑_k=1^n ∫_Ł^n f(x_k,{x_1,…, x_n}\{x_k}) e^-β H({x_1,…, x_n}) dx_1… dx_n
= e^-λ^d(Ł)/Z_Ł^z,β ∑_n=1^+∞ z^n/(n-1)! ∫_Ł^n f(x,{x_1,…, x_n-1}) e^-β H({x_1,…, x_n-1}) e^-β h(x,{x_1,…, x_n-1}) dx_1… dx_n-1 dx
= z/Z_Ł^z,β ∫_Ł ∫ f(x,γ) z^N_Ł(γ) e^-β H(γ) e^-β h(x,γ) π_Ł(dγ) dx
= z ∫∫_Ł f(x,γ) e^-β h(x,γ) dx P_Ł^z,β(dγ).
As usual the function f in (<ref>) can be chosen without a constant sign. We just need to check that both terms in (<ref>) are integrable.
In the following proposition we show that the GNZ equations (<ref>) characterize the probability measure P_Ł^z,β.
Let Ł⊂ℝ^d be bounded such that λ^d(Ł)>0. Let P be a probability measure on 𝒞_Ł such that for any positive measurable function f from ℝ^d×𝒞_f to ℝ

∫∑_x∈γ f(x,γ\{x}) P(dγ) = z ∫∫_Ł f(x,γ) e^-β h(x,γ) dx P(dγ).

Then it holds that P=P_Ł^z,β.
Let us consider the measure Q=1_{H<+∞} z^-N_Ł e^β H P. Then

∫∑_x∈γ f(x,γ\{x}) Q(dγ)
= ∫∑_x∈γ f(x,γ\{x}) 1_{H(γ)<+∞} z^-N_Ł(γ) e^β H(γ) P(dγ)
= z^-1 ∫∑_x∈γ f(x,γ\{x}) 1_{H(γ\{x})<+∞} 1_{h(x,γ\{x})<+∞} z^-N_Ł(γ\{x}) e^β H(γ\{x}) e^β h(x,γ\{x}) P(dγ)
= ∫∫_Ł f(x,γ) 1_{H(γ)<+∞} 1_{h(x,γ)<+∞} e^-β h(x,γ) z^-N_Ł(γ) e^β H(γ) e^β h(x,γ) dx P(dγ)
= ∫∫_Ł f(x,γ) 1_{h(x,γ)<+∞} dx Q(dγ).

We deduce that Q satisfies the Slivnyak-Mecke formula on {γ∈𝒞_Ł, H(γ)<+∞}. It is well-known (see <cit.> for instance) that this implies that the measure Q (after normalization) is the Poisson point process π_Ł restricted to {γ∈𝒞_Ł, H(γ)<+∞}. The proposition is proved.
These last two propositions show that the GNZ equations completely characterize P_Ł^z,β. Note again that the normalization constant Z_Ł^z,β is not present in the equations.
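The GNZ equations are also easy to probe numerically. In the interaction-free case β=0 they reduce to the Slivnyak-Mecke formula, which the following quick sanity check (ours) verifies by simulation with f(x,γ)=x_1 on the unit square, where both sides equal z/2.

```python
import numpy as np

rng = np.random.default_rng(3)
z, n_rep = 50.0, 5000

# beta = 0: E sum_{x in gamma} x_1 should equal z * int_{[0,1]^2} x_1 dx = z / 2.
def one_sample_sum():
    n = rng.poisson(z)            # Poisson process on [0,1]^2 with intensity z
    return rng.random((n, 2))[:, 0].sum()

lhs = np.mean([one_sample_sum() for _ in range(n_rep)])
print(f"empirical {lhs:.3f} vs exact {z / 2:.3f}")
```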
§.§ Ruelle estimates
In this section we present Ruelle estimates in the context of superstable and lower regular energy functions. These estimates are technical and we refer to the original paper <cit.> for the proofs.
An energy function H is said to be superstable if H=H_1+H_2, where H_1 is an energy function (see Definition (<ref>)) and H_2 is a pairwise energy function defined in (<ref>) with a non-negative continuous pair potential φ such that φ(0)>0. The energy function H is said to be lower regular if there exists a summable decreasing sequence of positive reals (ψ_k)_k≥0 (i.e. ∑_k=0^+∞ ψ_k<+∞) such that for any finite configurations γ^1 and γ^2

H(γ^1∪γ^2)-H(γ^1)-H(γ^2)≥ -∑_k,k'∈ℤ^d ψ_‖k-k'‖ (N^2_k+[0,1]^d(γ^1)+N^2_k'+[0,1]^d(γ^2)).
Let us give the main example of superstable and lower regular energy function.
Let H be a pairwise energy function with a pair potential φ=φ_1+φ_2, where φ_1 is stable and φ_2 is non-negative continuous with φ_2(0)>0. Moreover, we assume that there exists a positive decreasing function ψ from ℝ^+ to ℝ such that

∫_0^+∞ r^d-1 ψ(r) dr<+∞

and such that for any x∈ℝ^d, φ(x)≥ -ψ(‖x‖).
Then the energy function H is superstable and lower regular.
In particular, the Lennard-Jones pair potential and the Strauss pair potential defined in Section <ref> are superstable and lower regular. Note also that none of the geometric energy functions presented in Section <ref> is superstable.
Let H be a superstable and lower regular energy function. Let z>0 and β>0 be fixed. Then for any bounded subset Δ⊂ℝ^d with λ^d(Δ)>0 there exist two positive constants c_1,c_2 such that for any bounded set Ł and k≥0

P_Ł^z,β(N_Δ≥ k)≤ c_1 e^-c_2 k^2.
In particular, Ruelle estimates (<ref>) ensure that the random variable N_Δ admits exponential moments of all orders under P_Ł^z,β. Surprisingly, the variate N^2_Δ admits exponential moments of small orders. This last fact is not true under the Poisson point process π_Ł^z=P_Ł^z,0. The interaction between the points improves the integrability properties of the GPP with respect to the Poisson point process.
§ INFINITE VOLUME GIBBS POINT PROCESSES
In this section we present the theory of infinite volume GPP corresponding to the case "Ł=ℝ^d" of the previous section. Obviously, a definition inspired by (<ref>) does not work since the energy of an infinite configuration is meaningless. A natural construction would be to consider a sequence of finite volume GPP (P_Ł_n^z,β)_n≥1 on bounded windows Ł_n=[-n,n]^d and let n tend to infinity. It is more or less what we do in the following Sections <ref> and <ref>, except that the convergence occurs only for a subsequence and that the field is stationarized (see equation (<ref>)). As far as we know, there does not exist a general proof of the convergence of the sequence (P_Ł_n^z,β)_n≥1 without extracting a subsequence. The stationarization is a convenient setting here in order to use the tightness entropy tools. In Sections <ref> and <ref> we prove that the accumulation points P^z,β satisfy the DLR equations, which is the standard definition of infinite volume GPP (see Definition <ref>). Let us emphasize that the main new assumption in this section is the finite range property (see Definition <ref>). It means that the points interact with each other only if their distance is smaller than a fixed constant R>0. The GNZ equations in the infinite volume regime are discussed in Section <ref>. The variational characterization of GPP, in the spirit of Proposition <ref>, is presented in Section <ref>. Uniqueness and non-uniqueness results for infinite volume GPP are treated in Sections <ref> and <ref>. These results, whose proofs are completely self-contained here, ensure the existence of a phase transition for the Area energy function presented in (<ref>). It means that the associated infinite volume Gibbs measures are unique for some parameters (z,β) and non-unique for other parameters.
§.§ The local convergence setting
In this section we define the topology of local convergence which is the setting we use to prove the existence of an accumulation point for the sequence of finite volume Gibbs measures.
First, we say that a function f from 𝒞 to ℝ is local if there exists a bounded set Δ⊂ℝ^d such that for all γ∈𝒞, f(γ)=f(γ_Δ).

The local convergence topology on the space of probability measures on 𝒞 is the smallest topology such that for any local bounded function f from 𝒞 to ℝ the map P↦∫ f dP is continuous. We denote this topology by τ_ℒ.

Let us note that the continuity of the functions f in the previous definition is not required. For instance the function γ↦ f(γ)=1_{N_Δ(γ)≥ k}, where Δ is a bounded set in ℝ^d and k any integer, is a bounded local function. For any vector u∈ℝ^d we denote by τ_u the translation by the vector u acting on ℝ^d or 𝒞. A probability measure P on 𝒞 is said to be stationary (or shift invariant) if P=P∘τ_u^-1 for any vector u∈ℝ^d.
Our tightness tool is based on the specific entropy, which is defined for any stationary probability measure P on 𝒞 and any z'>0 by

I_z'(P)=lim_n→+∞ 1/λ^d(Ł_n) I(P_Ł_n|π_Ł_n^z'),

where I(P_Ł_n|π_Ł_n^z') is the relative entropy of P_Ł_n, the projection of P on Ł_n, with respect to π_Ł_n^z' (see definition (<ref>)). Note that the specific entropy I_z'(P) always exists (i.e. the limit in (<ref>) exists); see chapter 15 in <cit.>. The tightness tool presented in Lemma <ref> below is a consequence of the following proposition.

For any z'>0 and any value K≥0, the set

{ P∈𝒫_s such that I_z'(P)≤ K }

is sequentially compact for the τ_ℒ topology, where 𝒫_s is the space of stationary probability measures on 𝒞 with finite intensity.
§.§ An accumulation point P^z,β
In this section we prove the existence of an accumulation point for a sequence of stationarized finite volume GPP. To this end we consider the Gibbs measures (P_Ł_n^z,β)_n≥1 on Ł_n:=[-n,n]^d, where P_Ł^z,β is defined in (<ref>) for any z>0, β≥0 and energy function H. We assume that H is stationary, which means that for any vector u∈ℝ^d and any finite configuration γ∈𝒞_f

H(τ_u(γ))=H(γ).

For any n≥1, the empirical field P̅_Ł_n^z,β is defined as the probability measure on 𝒞 such that for any test function f

∫ f(γ) P̅_Ł_n^z,β(dγ)= 1/λ^d(Ł_n) ∫_Ł_n ∫ f(τ_u(γ)) P_Ł_n^z,β(dγ) du.

The probability measure P̅_Ł_n^z,β can be interpreted as the Gibbs measure P_Ł_n^z,β where the origin of the space (i.e. the point {0}) is replaced by a random point chosen uniformly inside Ł_n. It is a kind of stationarization of P_Ł_n^z,β, and any accumulation point of the sequence (P̅_Ł_n^z,β)_n≥1 is necessarily stationary.
The sequence (P̅_Ł_n^z,β)_n≥1 is tight for the τ_ℒ topology. We denote by P^z,β any of its accumulation points.
Our tightness tool is the following lemma, whose proof is a consequence of Proposition <ref> (see also Proposition 15.52 in <cit.>).

The sequence (P̅_Ł_n^z,β)_n≥1 is tight for the τ_ℒ topology if there exists z'>0 such that

sup_n≥1 1/λ^d(Ł_n) I(P_Ł_n^z,β|π_Ł_n^z') <+∞.
So, let us compute I(P_Ł_n^z,β|π_Ł_n^z') and check that we can find z'>0 such that (<ref>) holds.

I(P_Ł_n^z,β|π_Ł_n^z') = ∫ log(dP_Ł_n^z,β/dπ_Ł_n^z') dP_Ł_n^z,β
= ∫ [ log(dP_Ł_n^z,β/dπ_Ł_n)+log(dπ_Ł_n/dπ_Ł_n^z') ] dP_Ł_n^z,β
= ∫ [ log(z^N_Ł_n e^-β H/Z_Ł_n^z,β) + log(e^(z'-1)λ^d(Ł_n) (1/z')^N_Ł_n) ] dP_Ł_n^z,β
= ∫ [-β H+log(z/z')N_Ł_n] dP_Ł_n^z,β+(z'-1)λ^d(Ł_n)-log(Z_Ł_n^z,β).

Thanks to the non-degeneracy and the stability of H we find that

I(P_Ł_n^z,β|π_Ł_n^z') ≤ ∫ (-Aβ+log(z/z')) N_Ł_n dP_Ł_n^z,β + z'λ^d(Ł_n)+β H(∅).

Choosing z'>0 such that -Aβ+log(z/z')≤0 we obtain

I(P_Ł_n^z,β|π_Ł_n^z') ≤ z'λ^d(Ł_n)+β H(∅)

and (<ref>) holds. Proposition <ref> is proved.
In the following, for sake of simplicity, we say that P̅_Ł_n^z,β converges to P^z,β although it occurs only for a subsequence.
Note that the existence of an accumulation point holds under very weak assumptions on the energy function H. Indeed the two major assumptions are the stability and the stationarity. The superstability or the lower regularity presented in Definition <ref> are not required here. However, if the energy function H is superstable and lower regular, then the accumulation point P^z,β inherits Ruelle estimates (<ref>). This fact is obvious since the function γ↦1_{N_Δ(γ)≥ k} is local and bounded.
Let H be a superstable and lower regular energy function (see Definition <ref>). Let z>0 and β>0 be fixed. Then for any bounded subset Δ⊂ℝ^d with λ^d(Δ)>0, there exist two positive constants c_1 and c_2 such that for any k≥0

P^z,β(N_Δ≥ k)≤ c_1 e^-c_2 k^2.
The important point now is to prove that P^z,β satisfies good stochastic properties, as for instance the DLR or GNZ equations. At this stage, without extra assumptions, these equations are not necessarily satisfied. Indeed it is possible to build energy functions H such that the accumulation point P^z,β is degenerate and charges only the empty configuration. In this mini-course our extra assumption is the finite range property presented in the following section. More general settings have been investigated for instance in <cit.> or <cit.>.
§.§ The finite range property
The finite range property expresses that beyond a certain distance R>0 the points do not interact with each other. Let us recall the Minkowski ⊕ operator acting on sets in ℝ^d. For any two sets A,B⊂ℝ^d, the set A⊕B is defined by {x+y, x∈A and y∈B}.
The energy function H has finite range R>0 if for every bounded Δ, the local energy H_Δ (see Definition <ref>) is a local function on Δ⊕B(0,R). It means that for any finite configuration γ∈𝒞_f

H_Δ(γ) := H(γ)-H(γ_Δ^c)=H(γ_Δ⊕B(0,R))-H(γ_(Δ⊕B(0,R))∩Δ^c).
Let us illustrate the finite range property in the setting of the pairwise interaction defined in (<ref>). Assume that the interaction potential φ: ℝ^+ → ℝ∪{+∞} has a support included in [0,R]. Then the associated energy function has finite range R:

H_Δ(γ)
= ∑_{x,y}⊂γ, {x,y}∩γ_Δ≠∅, |x-y|≤R φ(|x-y|)
= ∑_{x,y}⊂γ_Δ⊕B(0,R), {x,y}∩γ_Δ≠∅ φ(|x-y|).
Also the area energy function (<ref>) inherits the finite range property. A simple computation gives

H_Δ(γ) = Area( ⋃_x∈γ_Δ B(x,R) \ ⋃_x∈γ_(Δ⊕B(0,2R))\Δ B(x,R) ),

which provides a range of interaction equal to 2R.
Let us note that the energy functions defined in (<ref>), (<ref>) and (<ref>) do not have the finite range property. Similarly the pairwise energy function (<ref>) with the Lennard-Jones potential is not finite range since the support of the pair potential is not bounded. A truncated version of such a potential is sometimes considered.

Let us finish this section by noting that the finite range property allows one to extend the domain of definition of H_Δ from the space 𝒞_f to the set 𝒞. Indeed, since H_Δ(γ)=H_Δ(γ_Δ⊕B(0,R)), this equality provides a definition of H_Δ(γ) when γ is in 𝒞. This point is crucial in order to correctly define the DLR equations in the infinite volume regime.
§.§ DLR equations
In Section 1 on the finite volume GPP, the DLR equations were presented as properties of P_Ł^z,β (see Section <ref>). In the setting of infinite volume GPP, the DLR equations constitute the very definition of GPP.
Let H be a stationary and finite range energy function. A stationary probability measure P on 𝒞 is an infinite volume Gibbs measure with activity z>0, inverse temperature β≥0 and energy function H if for any bounded Δ⊂ℝ^d such that λ^d(Δ)>0, for P-a.s. all γ_Δ^c

P(dγ_Δ|γ_Δ^c) = 1/Z_Δ^z,β(γ_Δ^c) z^N_Δ(γ) e^-β H_Δ(γ) π_Δ(dγ_Δ),

where Z_Δ^z,β(γ_Δ^c) is the normalizing constant ∫ z^N_Δ(γ_Δ) e^-β H_Δ(γ_Δ∪γ_Δ^c) π_Δ(dγ_Δ). As usual, an infinite volume GPP is a point process whose distribution is an infinite volume Gibbs measure.
Note that the DLR equations (<ref>) make sense since H_Δ(γ) is well-defined for any configuration γ∈𝒞 (see the end of Section <ref>). Note also that the DLR equations (<ref>) can be reformulated in an integral form. Indeed, P satisfies (<ref>) if and only if for any local bounded function f from 𝒞 to ℝ

∫ f dP=∫∫ f(γ'_Δ∪γ_Δ^c) 1/Z_Δ^z,β(γ_Δ^c) z^N_Δ(γ'_Δ) e^-β H_Δ(γ'_Δ∪γ_Δ^c) π_Δ(dγ'_Δ) P(dγ).
The term "equation" is now highlighted by the formulation (<ref>) since the unknown variate P appears in both left and right sides. The existence, uniqueness and non-uniqueness of solutions of such DLR equations are non trivial questions. In the next theorem, we show that the accumulation point P^z,β obtained in Section <ref> is such a solution. Infinite volume Gibbs measure exist and the question of existence is solved. The uniqueness and non-uniqueness are discussed in Sections <ref> and <ref>.
Let H be a stationary and finite range energy function. Then for any z>0 and β≥ 0 the probability measure P^z,β defined in Proposition <ref> is an infinite volume Gibbs measure.
We just have to check that P^z,β satisfies, for any bounded Δ and any positive local bounded function f, the equation (<ref>). Let us define the function f_Δ by

f_Δ: γ ↦ ∫ f(γ'_Δ∪γ_Δ^c) 1/Z_Δ^z,β(γ_Δ^c) z^N_Δ(γ'_Δ) e^-β H_Δ(γ'_Δ∪γ_Δ^c) π_Δ(dγ'_Δ).

Since f is local and bounded and since H is finite range, the function f_Δ is bounded and local as well. From the convergence of the sequence (P̅_Ł_n^z,β)_n≥1 to P^z,β with respect to the τ_ℒ topology, we have

∫ f_Δ dP^z,β = lim_n→∞ ∫ f_Δ dP̅_Ł_n^z,β
= lim_n→∞ 1/λ^d(Ł_n) ∫_Ł_n ∫ f_Δ(τ_u(γ)) P_Ł_n^z,β(dγ) du
= lim_n→∞ 1/λ^d(Ł_n) ∫_Ł_n ∫∫ f(γ'_Δ∪τ_u(γ)_Δ^c) z^N_Δ(γ'_Δ)/Z_Δ^z,β(τ_u(γ)_Δ^c) e^-β H_Δ(γ'_Δ∪τ_u(γ)_Δ^c) π_Δ(dγ'_Δ) P_Ł_n^z,β(dγ) du
= lim_n→∞ 1/λ^d(Ł_n) ∫_Ł_n ∫∫ f(τ_u(γ'_τ_-u(Δ)∪γ_τ_-u(Δ)^c)) z^N_τ_-u(Δ)(γ'_τ_-u(Δ))/Z^z,β_τ_-u(Δ)(γ_τ_-u(Δ)^c) e^-β H_τ_-u(Δ)(γ'_τ_-u(Δ)∪γ_τ_-u(Δ)^c) π_τ_-u(Δ)(dγ'_τ_-u(Δ)) P_Ł_n^z,β(dγ) du.
Denoting by Ł_n^* the set of u∈Ł_n such that τ_-u(Δ)⊂Ł_n, by Proposition <ref>, P_Ł_n^z,β satisfies the DLR equation on τ_-u(Δ) as soon as τ_-u(Δ)⊂Ł_n (i.e. u∈Ł_n^*). It follows that for any u∈Ł_n^*

∫ f(τ_u(γ)) P_Ł_n^z,β(dγ)
= ∫∫ f(τ_u(γ'_τ_-u(Δ)∪γ_τ_-u(Δ)^c)) z^N_τ_-u(Δ)(γ'_τ_-u(Δ))/Z^z,β_τ_-u(Δ)(γ_τ_-u(Δ)^c) e^-β H_τ_-u(Δ)(γ'_τ_-u(Δ)∪γ_τ_-u(Δ)^c) π_τ_-u(Δ)(dγ'_τ_-u(Δ)) P_Ł_n^z,β(dγ).

By noting that λ^d(Ł_n^*) is equivalent to λ^d(Ł_n) when n goes to infinity, we obtain by combining (<ref>) and (<ref>)

∫ f_Δ dP^z,β = lim_n→∞ 1/λ^d(Ł_n) ∫_Ł^*_n ∫ f(τ_u(γ)) P_Ł_n^z,β(dγ) du
= lim_n→∞ ∫ f(γ) P̅_Ł_n^z,β(dγ)
= ∫ f dP^z,β,

which gives the expected integral DLR equation on Δ with test function f.
§.§ GNZ equations
In this section we deal with the GNZ equations in the infinite volume regime. As in the finite volume case, the main advantage of such equations is that the intractable normalization factor Z_Ł^z,β is not present.
Note first that, in the setting of a finite range interaction R>0, the local energy h(x,γ) defined in Definition <ref> is well-defined for any configuration γ∈𝒞, even if γ is infinite. Indeed, we clearly have h(x,γ)=h(x,γ_B(x,R)).

Let P be a probability measure on 𝒞. Let H be a finite range energy function and z>0, β≥0 be two parameters. Then P is an infinite volume Gibbs measure with energy function H, activity z>0 and inverse temperature β if and only if for any positive measurable function f from ℝ^d×𝒞 to ℝ

∫∑_x∈γ f(x,γ\{x}) P(dγ) = z ∫∫_ℝ^d f(x,γ) e^-β h(x,γ) dx P(dγ).
Let us start with the proof of the "only if" part. Let P be an infinite volume Gibbs measure. By standard monotonicity arguments it is sufficient to prove (<ref>) for any local positive measurable function f. So let Δ⊂ℝ^d be a bounded set such that f(x,γ)=1_Δ(x)f(x,γ_Δ). Applying now the DLR equation (<ref>) on the set Δ we find

∫∑_x∈γ f(x,γ\{x}) P(dγ)
= ∫∫∑_x∈γ'_Δ f(x,γ'_Δ\{x}) 1/Z_Δ^z,β(γ_Δ^c) z^N_Δ(γ'_Δ) e^-β H_Δ(γ'_Δ∪γ_Δ^c) π_Δ(dγ'_Δ) P(dγ).

By computations similar to those developed in the proof of Proposition <ref>, we obtain

∫∑_x∈γ f(x,γ\{x}) P(dγ) = z ∫∫_Δ∫ f(x,γ'_Δ) 1/Z_Δ^z,β(γ_Δ^c) e^-β h(x,γ'_Δ∪γ_Δ^c) z^N_Δ(γ'_Δ) e^-β H_Δ(γ'_Δ∪γ_Δ^c) π_Δ(dγ'_Δ) dx P(dγ)
= z ∫∫_ℝ^d f(x,γ) e^-β h(x,γ) dx P(dγ).
Let us now turn to the "if" part. Applying equation (<ref>) to the function f̃(x,γ)=ψ(γ_Δ^c)f(x,γ), where f is a local positive function with support Δ and ψ a positive test function, we find

∫ψ(γ_Δ^c) ∑_x∈γ_Δ f(x,γ\{x}) P(dγ) = z ∫ψ(γ_Δ^c)∫_Δ f(x,γ) e^-β h(x,γ) dx P(dγ).

This implies that for P-almost all γ_Δ^c the conditional probability measure P(dγ_Δ|γ_Δ^c) solves the GNZ equations on Δ with local energy function γ_Δ↦h(x,γ_Δ∪γ_Δ^c). Following an adaptation of the proof of Proposition <ref>, we get that

P(dγ_Δ|γ_Δ^c) = 1/Z_Δ^z,β(γ_Δ^c) z^N_Δ(γ) e^-β H_Δ(γ) π_Δ(dγ_Δ),

which is exactly the DLR equation (<ref>) on Δ. The theorem is proved.
Let us finish this section with an application of the GNZ equations which highlights that some properties of infinite volume GPP can be extracted from the implicit GNZ equations.
Let Γ be an infinite volume GPP for the hardcore pairwise interaction φ(r)=+∞ 1_[0,R](r) (see Definition (<ref>)) and the activity z>0. Then

z/(1+z v_d R^d) ≤ E(N_[0,1]^d(Γ)) ≤ z,

where v_d is the volume of the unit ball in ℝ^d.

Note that the inverse temperature β does not play any role here and that E(N_[0,1]^d(Γ)) is simply the intensity of Γ.
The local energy of such a hardcore pairwise interaction is given by

h(x,γ)=∑_y∈γ_B(x,R) φ(|x-y|)= +∞ 1_{γ_B(x,R)≠∅}.

So the GNZ equation (<ref>) with the function f(x,γ)=1_[0,1]^d(x) gives

E(N_[0,1]^d(Γ))=z∫_[0,1]^d P(Γ_B(x,R)=∅) dx=z P(Γ_B(0,R)=∅),

which provides a relation between the intensity and the spherical contact distribution of Γ. The upper bound in (<ref>) follows. For the lower bound we have

E(N_[0,1]^d(Γ)) = z P(Γ_B(0,R)=∅)
≥ z(1-E(N_B(0,R)(Γ)))
= z(1-v_d R^d E(N_[0,1]^d(Γ))).
Note also that a natural upper bound for E(N_[0,1]^d(Γ)) is obtained via the closest packing configuration. For instance, in dimension d=2, it gives the upper bound π/(2√3 R^2).
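As a quick numerical illustration (ours, with arbitrary values z=1 and R=0.1 in d=2, where v_2=π), the sandwich bounds of the proposition, together with the close-packing bound, read:

```python
import numpy as np

z, R = 1.0, 0.1                     # illustrative activity and hardcore radius
v_2 = np.pi                         # volume of the unit ball in dimension 2
lower = z / (1 + z * v_2 * R ** 2)
upper = min(z, np.pi / (2 * np.sqrt(3) * R ** 2))  # proposition + close packing
print(f"{lower:.4f} <= E N_[0,1]^2 <= {upper:.4f}")
```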
§.§ Variational principle
In this section, we extend the variational principle for finite volume GPP presented in Proposition <ref> to the setting of infinite volume GPP. For brevity we present only the result without the proof which can be found in <cit.>.
The variational principle claims that the Gibbs measures are the minimizers of the free excess energy, defined as the sum of the mean energy and the specific entropy. Moreover, the minimum is equal to minus the pressure. Let us first define all these macroscopic quantities.
Let us start by introducing the pressure with free boundary condition. It is defined as the following limit
p^z,β:=lim_n→ +∞1/|Ł_n|ln (Z^z,β_Ł_n),
The existence of such a limit is proved for instance in Lemma 1 in <cit.>.
The second macroscopic quantity involves the mean energy of a stationary probability measure P. It is also defined by a limit but, in opposition to the pressure, we have to assume that it exists. The proof of such existence is generally based on stationarity arguments and nice representations of the energy contribution per unit volume. It depends strongly on the expression of the energy function H. Examples are given below. So for any stationary probability measure P on 𝒞 we assume that the following limit exists in ℝ∪{+∞},

H(P):=lim_n→∞ 1/|Ł_n| ∫ H(γ_Ł_n) dP(γ),

and we call the limit the mean energy of P.
We need to introduce a technical assumption on the boundary effects of H. We assume that for any infinite volume Gibbs measure P

lim_n→∞ 1/|Ł_n| ∫ ∂H_Ł_n(γ) dP(γ)=0,

where ∂H_Ł_n(γ)= H_Ł_n(γ)-H(γ_Ł_n).
We assume that H is stationary and finite range. Moreover, we assume that the mean energy exists for any stationary probability measure P (i.e. the limit (<ref>) exists) and that the boundary effects assumption (<ref>) holds. Let z>0 and β≥0 be two parameters. Then for any stationary probability measure P on 𝒞 with finite intensity

I_1(P)+β H(P)-log(z)E_P(N_[0,1]^d) ≥ -p^z,β,

with equality if and only if P is a Gibbs measure with activity z>0, inverse temperature β and energy function H.
Let us finish this section by presenting the two fundamental examples of energy functions satisfying the assumptions of Theorem <ref>.
Let H be the Area energy function defined in (<ref>). Then both limits (<ref>) and (<ref>) exist. In particular, the assumptions of Theorem <ref> are satisfied and the variational principle holds.
Let us prove only that the limit (<ref>) exists. The existence of the limit (<ref>) can be shown in the same way. By definition of H and the stationarity of P,

∫ H(γ_Ł_n) P(dγ) = ∫ Area(L_R(γ_Ł_n)) P(dγ)
= λ^d(Ł_n) ∫ Area(L_R(γ)∩[0,1]^d) P(dγ)
+ ∫ (Area(L_R(γ_Ł_n))- Area(L_R(γ)∩[-n,n]^d)) P(dγ).

By geometric arguments, we get that

|Area(L_R(γ_Ł_n))- Area(L_R(γ)∩[-n,n]^d)| ≤ C n^d-1,

for some constant C>0. We deduce that the limit (<ref>) exists with

H(P)= ∫ Area(L_R(γ)∩[0,1]^d) P(dγ).
Let H be the pairwise energy function defined in (<ref>) with a superstable, lower regular pair potential with compact support. Then both limits (<ref>) and (<ref>) exist. In particular the assumptions of Theorem <ref> are satisfied and the variational principle holds.
Since the potential φ is stable with compact support, we deduce that φ≥2A and that H is finite range and lower regular. In this setting, the existence of the limit (<ref>) is proved in <cit.>, Theorem 1, with

H(P)= (1/2)∫∑_0≠x∈γ φ(x) P^0(dγ) if E_P(N^2_[0,1]^d)<∞, and H(P)=+∞ otherwise,

where P^0 is the Palm measure of P. Recall that P^0 can be viewed as the natural version of the conditional probability P(.|0∈γ) (see <cit.> for more details). It remains to prove the existence of the limit (<ref>) for any Gibbs measure P on 𝒞. A simple computation gives that, for any γ∈𝒞,

∂H_Ł_n(γ)= ∑_x∈γ_Ł_n^⊕\Ł_n ∑_y∈γ_Ł_n\Ł_n^⊖ φ(x-y),

where Ł_n^⊕=Ł_n+R_0 and Ł_n^⊖=Ł_n-R_0, with R_0 an integer larger than the range of the interaction R.
Therefore, thanks to the stationarity of P and the GNZ equations (<ref>), we obtain

|∫ ∂H_Ł_n(γ) dP(γ)| ≤ ∫ ∑_x∈γ_Ł_n^⊕\Ł_n ∑_y∈γ\{x} |φ(x-y)| dP(γ)
= z ∫∫_Ł^⊕_n\Ł_n e^-β∑_y∈γ φ(x-y) ∑_y∈γ |φ(x-y)| dx dP(γ)
= z|Ł^⊕_n\Ł_n| ∫ e^-β∑_y∈γ_B(0,R_0) φ(y) ∑_y∈γ_B(0,R_0) |φ(y)| dP(γ).

Since φ≥2A, denoting by C:=sup_c∈[2A,+∞) |c|e^-βc<∞, we find that

|∫ ∂H_Ł_n(γ) dP(γ)| ≤ zC|Ł^⊕_n\Ł_n| ∫ N_B(0,R_0)(γ) e^-2βA N_B(0,R_0)(γ) dP(γ).

Using Ruelle estimates (<ref>), the integral in the right-hand term of (<ref>) is finite. The boundary assumption (<ref>) follows.
§.§ A uniqueness result
In this section we investigate the uniqueness of infinite volume Gibbs measures. The common belief claims that the Gibbs measures are unique when the activity z or (and) the inverse temperature β are small enough (low activity, high temperature regime). The non-uniqueness phenomenon (discussed in the next section) is in general related to some issues with the energy part in the variational principle (see Theorem <ref>). Indeed, either the mean energy has several minimizers or there is a conflict between the energy and the entropy. Therefore it is natural to expect that the Gibbs measures are unique when β is small enough. When z is small, the mean number of points per unit volume is low and so the energy is in general low as well.

As far as we know, there do not exist general results which prove the uniqueness for small β or small z. In the case of pairwise energy functions (<ref>), the uniqueness for any β>0 and z>0 small enough is proved via the Kirkwood-Salsburg equations (see Theorem 5.7 in <cit.>). An extension of the Dobrushin uniqueness criterion to the continuum is developed as well <cit.>.

The uniqueness of GPP can also be obtained via the cluster expansion machinery, which provides a power series expansion of the partition function when z and β are small enough. This approach was first introduced by Mayer and Montroll <cit.> and we refer to <cit.> for a general presentation.
In this section we give a simple and self-contained proof of the uniqueness of GPP for all β≥ 0 and any z>0 small enough. We just assume that the energy function H has a local energy h uniformly bounded from below. This setting covers for instance the case of pairwise energy function (<ref>) with non-negative pair potential or the Area energy function (<ref>).
Let us start by recalling the existence of a percolation threshold for the Poisson Boolean model. For any configuration γ∈𝒞, the percolation of L_R(γ)=∪_x∈γ B(x,R) means the existence of an unbounded connected component in L_R(γ).

For any d≥2, there exists 0<z_d<+∞ such that for z<z_d, π^z(L_1/2(γ) percolates)=0 and for z>z_d, π^z(L_1/2(γ) percolates)=1.
The value z_d is called the percolation threshold of the Poisson Boolean model with radius 1/2. By scale invariance, the percolation threshold for any other radius R is simply z_d/(2R)^d. The exact value of z_d is unknown but numerical studies provide for instance the approximation z_2≃ 1.4 in dimension d=2.
Let H be an energy function with finite range R>0 such that the local energy h is uniformly bounded from below by a constant C. Then for any β≥0 and z<z_d e^Cβ/R^d, there exists a unique Gibbs measure with energy function H, activity z>0 and inverse temperature β.
The proof is based on two main ingredients. The first one is the stochastic domination by Poisson processes of Gibbs measures whose local energy h is uniformly bounded from below. This result is given in the following lemma, whose proof can be found in <cit.>. The second ingredient is a disagreement percolation result presented in Lemma <ref> below.
Let H be an energy function such that the local energy h is uniformly bounded from below by a constant C. Then for any bounded set Δ and any outside configuration γ_Δ^c, the Gibbs distribution inside Δ given by

P^z,β(dγ_Δ|γ_Δ^c) = 1/Z_Δ^z,β(γ_Δ^c) z^N_Δ(γ) e^-β H_Δ(γ) π_Δ(dγ_Δ)

is stochastically dominated by the Poisson point process distribution π_Δ^ze^-Cβ(dγ_Δ).

Thanks to Strassen's theorem, this stochastic domination can be interpreted via the following coupling (which could be the definition of the stochastic domination): there exist two point processes Γ and Γ' on Δ such that Γ⊂Γ', Γ∼P^z,β(dγ_Δ|γ_Δ^c) and Γ'∼π_Δ^ze^-Cβ(dγ_Δ).
Now the rest of the proof of Theorem <ref> consists in showing that the Gibbs measure is unique as soon as π^ze^-Cβ(L_R/2(γ) percolates)=0. Roughly speaking, if the dominating process does not percolate, the information coming from the boundary condition does not propagate into the heart of the model and the Gibbs measure is unique. To prove this phenomenon rigorously, we need a disagreement percolation argument first introduced in <cit.>. For any sets A,B⊂ℝ^d, we denote by A⊖B the set (A^c⊕B)^c.
Let γ^1_Δ^c and γ^2_Δ^c be two configurations on Δ^c. For any R'>R, there exist three point processes Γ^1, Γ^2 and Γ' on Δ such that Γ^1⊂Γ', Γ^2⊂Γ', Γ^1∼P^z,β(dγ_Δ|γ^1_Δ^c), Γ^2∼P^z,β(dγ_Δ|γ^2_Δ^c) and Γ'∼π_Δ^ze^-Cβ(dγ_Δ). Moreover, denoting by L^Δ_R'/2(Γ') the union of the connected components of L_R'/2(Γ') which are inside Δ⊖B(0,R'/2), then Γ^1=Γ^2 on the set L^Δ_R'/2(Γ').

Let us note first that, by Lemma <ref>, there exist three point processes Γ^1, Γ^2 and Γ' on Δ such that Γ^1⊂Γ', Γ^2⊂Γ', Γ^1∼P^z,β(dγ_Δ|γ^1_Δ^c), Γ^2∼P^z,β(dγ_Δ|γ^2_Δ^c) and Γ'∼π_Δ^ze^-Cβ(dγ_Δ). The main difficulty is now to show that we can build Γ^1 and Γ^2 such that Γ^1=Γ^2 on the set L^Δ_R'/2(Γ').
Let us decompose Δ via a grid of small cubes where each cube has a diameter smaller than ϵ=(R'-R)/2. We define an arbitrary numbering of these cubes (C_i)_1≤i≤m and we construct progressively the processes Γ^1, Γ^2 and Γ' on each cube C_i. Assume that they are already constructed on C_I:=∪_i∈I C_i with all the expected properties: Γ^1_C_I⊂Γ'_C_I, Γ^2_C_I⊂Γ'_C_I, Γ^1_C_I∼P^z,β(dγ_C_I|γ^1_Δ^c), Γ^2_C_I∼P^z,β(dγ_C_I|γ^2_Δ^c), Γ'_C_I∼π_C_I^ze^-Cβ(dγ_C_I) and Γ^1_C_I=Γ^2_C_I on the set L^Δ_R'/2(Γ'_C_I). Let us consider the smallest index j∈{1,…,m}\I such that one of the distances d(C_j,γ^1_Δ^c), d(C_j,γ^2_Δ^c) or d(C_j,Γ'_C_I) is smaller than R'-ϵ.

* If such an index j does not exist, by the finite range property the following Gibbs distributions coincide on Δ^I=Δ\C_I:

P^z,β(dγ_Δ^I|γ^1_Δ^c∪Γ^1_C_I)= P^z,β(dγ_Δ^I|γ^2_Δ^c∪Γ^2_C_I).

Therefore we define Γ^1, Γ^2 and Γ' on Δ^I by considering Γ^1_Δ^I and Γ'_Δ^I as in Lemma <ref> and by putting Γ^2_Δ^I=Γ^1_Δ^I. We can easily check that all the expected properties hold and the full construction of Γ^1, Γ^2 and Γ' is over.

* If such an index j does exist, we consider the double coupling construction of Γ^1, Γ^2 and Γ' on Δ^I. It means that Γ^1_Δ^I⊂Γ'_Δ^I, Γ^2_Δ^I⊂Γ'_Δ^I, Γ^1_Δ^I∼P^z,β(dγ_Δ^I|γ^1_Δ^c∪Γ^1_C_I), Γ^2_Δ^I∼P^z,β(dγ_Δ^I|γ^2_Δ^c∪Γ^2_C_I) and Γ'_Δ^I∼π_Δ^I^ze^-Cβ(dγ_Δ^I). Now we keep these processes Γ^1_Δ^I, Γ^2_Δ^I and Γ'_Δ^I only on the window C_j. The construction of the processes Γ^1, Γ^2 and Γ' is now done over C_I∪C_j and we can check again that all the expected properties hold. We go on to the construction of the processes on a new cube in (C_i)_i∈{1,…,m}\(I∪{j}) and so on.
Let us now finish the proof of Theorem <ref> by considering two infinite volume GPP Γ^1 and Γ^2 with distributions P^1 and P^2. We have to show that P^1(A)=P^2(A) for any local event A. We denote by Δ_0 the support of such an event A. Let us consider a bounded subset Δ⊃Δ_0 and three new processes Γ̃^1_Δ, Γ̃^2_Δ and Γ'_Δ on Δ constructed as in Lemma <ref>. Precisely, for any i=1,2,

Γ̃^i_Δ⊂Γ'_Δ, Γ'_Δ∼π_Δ^ze^-Cβ, the conditional distribution of Γ̃^i_Δ given Γ^i_Δ^c is P^z,β(·|Γ^i_Δ^c), and Γ̃^1_Δ=Γ̃^2_Δ on the set L^Δ_R'/2(Γ'_Δ). The parameter R'>R is chosen such that

z e^-Cβ R'^d<z_d,

which is possible by the assumption on z.

Thanks to the DLR equations (<ref>), for any i=1,2 the processes Γ̃^i_Δ and Γ^i_Δ have the same distribution and therefore P^i(A)=P(Γ̃^i_Δ∈A). Denoting by {Δ↔Δ_0} the event that there exists a connected component in L_R'/2(Γ'_Δ) which intersects (Δ⊖B(0,R'/2))^c and Δ_0, we obtain that
|P^1(A)-P^2(A)| = |P(_^1∈ A)-P(_^2∈ A)|
≤ E(_{Δ↔_0}|__^1∈ A-__^2∈ A|)+E(_{Δ↔_0}^c|__^1∈ A-__^2∈ A|)
≤ P({Δ↔_0})+E(_{Δ↔_0}^c|__^1∈ A-__^1∈ A|)
= P({Δ↔_0}).
By the choice of R' in inequality (<ref>) and Proposition <ref>, it follows that
π^ze^-Cβ(L_R'/2 percolates)=0
and we deduce, by a monotonicity argument, that the probability P({Δ↔_0}) tends to 0 when Δ tends to ^d (see <cit.> for details on equivalent characterizations of continuum percolation). The left term in (<ref>) does not depend on Δ and is therefore null. Theorem <ref> is proved.
§.§ A non-uniqueness result
In this section we discuss the non-uniqueness phenomenon for infinite volume Gibbs measures. It is believed to occur for almost all models provided that the activity z or the inverse temperature β is large enough. However, in the present continuous setting without spin, it has only been proved for a few models and several old conjectures remain open. For instance, for the pairwise Lennard-Jones interaction defined in (<ref>), it is conjectured that for β large (but not too large) there exists a unique z such that the Gibbs measures are not unique. It would correspond to a liquid-vapour phase transition. Similarly, for β very large, it is conjectured that non-uniqueness occurs as soon as z is larger than a threshold z_β. It would correspond to a crystallization phenomenon for which a symmetry breaking may occur. Indeed, it is expected, but not proved at all, that some continuum Gibbs measures would not be invariant under symmetries like translations, rotations, etc. This conjecture is probably one of the most important and difficult challenges in statistical physics. In all cases, non-uniqueness appears when the local distribution of infinite volume Gibbs measures depends on the boundary conditions "at infinity".
In this section we give a complete proof of such a non-uniqueness result for the Area energy interaction presented in (<ref>). This result was first proved in <cit.> but our proof is inspired by the one given in <cit.>. Roughly speaking, we build two different Gibbs measures which depend, via a percolation phenomenon, on the boundary conditions "at infinity". In one case, the boundary condition "at infinity" is empty and in the other case the boundary condition is full of particles. We show that the intensities of the two infinite volume Gibbs measures are different.
Let us cite another famous non-uniqueness result for attractive pair and repulsive four-body potentials <cit.>. As far as we know, this result and the one presented below on the Area interaction are the only rigorous proofs of non-uniqueness for continuum particle systems without spin.
For z=β large enough, the infinite volume Gibbs measures for the Area energy function H presented in (<ref>), the activity z and the inverse temperature β are not unique.
Throughout the proof we fix z=β. Let us consider the following finite volume Gibbs measures on Ł_n=[-n,n]^d with different boundary conditions:
dP_Ł_n()=1/Z_Ł_n_{_Ł_n\Ł_n^⊖=∅} z^N_Ł_n()e^-zArea(Ł_n∩ L_R())dπ_Ł_n(),
and
dQ_Ł_n()=1/ Z'_Ł_nz^N_Ł_n() e^-zArea(Ł_n^⊖∩ L_R())dπ_Ł_n(),
where Ł_n^⊖=Ł_n ⊖ B(0,R/2). Recall that R is the radius of balls in L_R()=∪_x∈ B(x,R) and that the range of the interaction is 2R. As in Section <ref> we consider the associated empirical fields P̅_Ł_n and Q̅_Ł_n defined by
∫ f()dP̅_Ł_n()= 1/ł^d(Ł_n)∫_ Ł_n f(τ_u()) dP_Ł_n()du
and
∫ f()dQ̅_Ł_n()= 1/ł^d(Ł_n)∫_ Ł_n f(τ_u()) dQ_Ł_n()du,
where f is any measurable bounded test function.
Following the proof of Proposition <ref> we get the existence of an accumulation point P̅ (respectively Q̅) for (P̅_Ł_n) (respectively (Q̅_Ł_n)). As in Theorem <ref>, we show that P̅ and Q̅ satisfy the DLR equations and therefore they are both infinite volume Gibbs measures for the Area energy function, the activity z and the inverse temperature β=z. Now it remains to prove that P̅ and Q̅ are different when z is large enough. Note that the difference between P̅ and Q̅ comes only from their boundary conditions "at infinity" (i.e. the boundary conditions of P_Ł_n and Q_Ł_n when n goes to infinity).
Let us start with a representation of P_Ł_n and Q_Ł_n via the two type Widom-Rowlinson model on Ł_n. Consider the following event of allowed configurations on _Ł_n^2
={ (^1, ^2)∈ _Ł_n^2, s.t. [ a) L_R/2(^1)∩ L_R/2(^2)=∅; b) L_R/2(^1)∩Ł_n^c=∅ ]}
which assumes first that the balls with radii R/2 centred at ^1 and ^2 do not overlap and secondly that the balls centred at ^1 are completely inside Ł_n.
The two type Widom-Rowlinson model on Ł_n with boundary condition b) is the probability measure P̃_Ł_n on _Ł_n^2 which is absolutely continuous with respect to the product (π^z_Ł_n)^⊗ 2 with density
1/Z̃_n_(^1,^2) z^N_Ł_n(^1) z^N_Ł_n(^2)dπ_Ł_n(^1) dπ_Ł_n(^2),
where Z̃_Ł_n is a normalization factor.
The first marginal (respectively the second marginal) distribution of P̃_Ł_n is P_Ł_n (respectively Q_Ł_n).
By definition of P̃_Ł_n, its first marginal admits the following unnormalized density with respect to π_Ł_n(d^1)
f(^1) = ∫_(^1,^2) z^N_Ł_n(^1) z^N_Ł_n(^2)dπ_Ł_n(^2)
= e^(z-1)ł^d(Ł_n)z^N_Ł_n(^1)∫_(^1,^2) dπ^z_Ł_n(^2)
= e^(z-1)ł^d(Ł_n)z^N_Ł_n(^1)_{^1_Ł_n\Ł_n^⊖=∅} e^-zArea(Ł_n∩ L_R(^1))
which is proportional to the density of P_Ł_n. A similar computation gives the same result for Q_Ł_n.
Now let us give a representation of the two type Widom-Rowlinson model via the random cluster model. The random cluster process R_Ł_n is a point process on Ł_n distributed by
1/Ẑ_n z^N_Ł_n() 2^N^Ł_n_cc()dπ_Ł_n(),
where N^Ł_n_cc() is the number of connected components of L_R/2() which are completely included in Ł_n. Then we build two new point processes _Ł_n^1 and _Ł_n^2 by splitting randomly and uniformly the connected components of R_Ł_n. Each connected component inside Ł_n is given to _Ł_n^1 or _Ł_n^2 with probability one half each. The connected components hitting Ł_n^c are given to _Ł_n^2. Rigorously, this construction is done in the following way. Let us consider (C_i())_1≤ i≤ N^Ł_n_cc() the collection of connected components of L_R/2() inside Ł_n. Let (ϵ_i)_i≥ 1 be a sequence of independent Bernoulli random variables with parameter 1/2. The processes _Ł_n^1 and _Ł_n^2 are defined by
_Ł_n^1=⋃_1≤ i≤ N^Ł_n_cc(R_Ł_n), ϵ_i=1 R_Ł_n∩ C_i(R_Ł_n) and _Ł_n^2=R_Ł_n\_Ł_n^1.
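This splitting is straightforward to implement numerically. The following Python fragment is a minimal sketch of the splitting step only (the function name, the box geometry and the use of scipy are our own choices, and the sampling of the random cluster process R_Ł_n itself, e.g. by Markov chain Monte Carlo, is not shown): given the points of a realization in the box [-L,L]^2, it builds the connected components of the germ-grain structure (two balls of radius R/2 overlap if and only if their centers are within distance R), sends every component whose ball union reaches outside the box to the second process, and splits the interior components with independent Bernoulli(1/2) marks.

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def split_random_cluster(points, R, L, rng):
    # Components of the germ-grain structure: balls B(x, R/2) overlap iff |x - y| < R.
    n = len(points)
    pairs = cKDTree(points).query_pairs(r=R, output_type='ndarray')
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    n_cc, label = connected_components(adj, directed=False)
    # A component "hits the boundary" if one of its balls B(x, R/2) leaves [-L, L]^2.
    hits_boundary = np.zeros(n_cc, dtype=bool)
    hits_boundary[label[np.abs(points).max(axis=1) > L - R / 2.0]] = True
    eps = rng.integers(0, 2, size=n_cc).astype(bool)   # independent Bernoulli(1/2) marks
    to_gamma1 = eps & ~hits_boundary                   # boundary components go to gamma2
    return points[to_gamma1[label]], points[~to_gamma1[label]]

rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(200, 2))            # placeholder for a sample of R_Lambda_n
gamma1, gamma2 = split_random_cluster(pts, R=1.0, L=5.0, rng=rng)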
The distribution of (_Ł_n^1,_Ł_n^2) is the two-type Widom-Rowlinson model with boundary condition b). In particular, _Ł_n^1∼ P_Ł_n and _Ł_n^2∼ Q_Ł_n.
For any bounded measurable test function f we have
E(f(_Ł_n^1,_Ł_n^2))
= E[f(⋃_1≤ i≤ N^Ł_n_cc(R_Ł_n), ϵ_i=1 R_Ł_n∩ C_i(R_Ł_n), R_Ł_n∩( ⋃_1≤ i≤ N^Ł_n_cc(R_Ł_n), ϵ_i=1 C_i(R_Ł_n))^c )]
= 1/Ẑ_n∫∑_(ϵ_i)∈{0,1}^N^Ł_n_cc()1/2^N^Ł_n_cc()
f(⋃_1≤ i≤ N^Ł_n_cc(), ϵ_i=1∩ C_i(), ∩(⋃_1≤ i≤ N^Ł_n_cc(), ϵ_i=1 C_i())^c ) z^N_Ł_n()2^N^Ł_n_cc() dπ_Ł_n()
= 1/Ẑ_n∫∑_(ϵ_x)∈{0,1}^ (_ f)(⋃_x∈, ϵ_x=1{x}, \⋃_x∈, ϵ_x=1{x}) z^N_Ł_n()dπ_Ł_n()
= 1/Ẑ_n∫ (_ f)(⋃_(x,ϵ _x)∈, ϵ_x=1{x}, ⋃_(x,ϵ _x)∈, ϵ_x=0{x}) (2z)^N_Ł_n() dπ̃_Ł_n()
where π̃_Ł_n is a marked Poisson point process on Ł_n×{0,1}. It means that the points are distributed by π_Ł_n and that each point x is marked independently by a Bernoulli variable ϵ_x with parameter 1/2. We obtain
E(f(_Ł_n^1,_Ł_n^2)) = e^|Ł_n|/Ẑ_n∫ (_ f)(⋃_(x,ϵ _x)∈, ϵ_x=1{x}, ⋃_(x,ϵ _x)∈, ϵ_x=0{x}) z^N_Ł_n() dπ̃_Ł_n^2()
= e^|Ł_n|/Ẑ_n∫∫ (_ f)(^1,^2) z^N_Ł_n(^1)z^N_Ł_n(^2) dπ_Ł_n(^1) dπ_Ł_n(^2),
which proves the Lemma.
Note that the random cluster process R_Ł_n is a finite volume GPP with energy function Ĥ=-N_cc^Ł_n, activity z and inverse temperature log(2). Its local energy ĥ is defined by
ĥ(x,)= N^Ł_n_cc()-N^Ł_n_cc(∪{x}).
Thanks to a geometrical argument, it is not difficult to see that ĥ is uniformly bounded from above by a constant c_d (depending only on the dimension d). For instance, in the case d=2, a ball with radius R/2 can overlap at most 5 disjoint balls with radius R/2 and therefore c_2=5-1=4 is suitable.
By Lemma <ref>, we deduce that the distribution of R_Ł_n dominates the Poisson point distribution π_Ł_n^2ze^-c_d. So we choose
z> z_de^c_d/2R^d
which implies that the Boolean model with intensity 2ze^-c_d and radii R/2 percolates with probability one (see Proposition <ref>). For any ∈, we denote by C_∞() the unbounded connected component of L_R/2() (if it exists) and we define by α the intensity of points in C_∞() under the distribution π^2ze^-c_d;
α:= ∫ N_[0,1]^d(∩ C_∞() )dπ^2ze^-c_d()>0.
We are now in a position to finish the proof of Theorem <ref> by proving that the difference in intensities between Q̅ and P̅ is larger than α.
The local convergence topology ensures that, for any local bounded function f, the evaluation P↦∫ fdP is continuous. Actually, the continuity of such evaluation holds for the larger class of functions f satisfying: i) f is local on some bounded set ii) there exists A>0 such that |f()|≤ A(1+#()). In particular, the application P↦ i(P):=∫ N_[0,1]^d()P(d) is continuous (see <cit.> for details). We deduce that
i(Q̅)-i(P̅) = ∫ N_[0,1]^d()dQ̅()- ∫ N_[0,1]^d()dP̅()
= lim_n→∞( ∫ N_[0,1]^d()dQ̅_Ł_n()- ∫ N_[0,1]^d()dP̅_Ł_n())
= lim_n→∞1/ł^d(Ł_n)∫_Ł_n( ∫ N_[0,1]^d(τ_u)dQ_Ł_n().
- .∫ N_[0,1]^d(τ_u) dP_Ł_n())du.
By the representation of P_Ł_n and Q_Ł_n given in Lemma <ref>, we find
i(Q̅)-i(P̅) = lim_n→∞1/ł^d(Ł_n)∫_Ł_n E( N_[0,1]^d(τ_u_Ł_n^2)-N_[0,1]^d(τ_u_Ł_n^1) )du
= lim_n→∞1/ł^d(Ł_n)∫_Ł_n E( N_τ_u[0,1]^d(R_Ł_n∩ C_b(R_Ł_n)) )du,
where C_b() are the connected components of L_R/2() hitting Ł_n^c. Since the distribution of R_Ł_n dominates π_Ł_n^2ze^-c_d,
i(Q̅)-i(P̅) ≥ lim_n→∞1/ł^d(Ł_n)∫_[-n,n-1]^d∫ N_τ_u[0,1]^d(∩ C_∞())dπ_Ł_n^2ze^-c_d()du,
≥ lim_n→∞1/ł^d(Ł_n)∫_[-n,n-1]^dα du=α>0.
The theorem is proved.
§ ESTIMATION OF PARAMETERS
In this section we investigate the parametric estimation of the activity z^* and the inverse temperature β^* of an infinite volume Gibbs point process . As usual the star specifies that the parameters z^*,β^* are unknown whereas the variables z and β are used in the optimization procedures. Here the dataset is the observation of through the bounded window Ł_n=[-n,n]^d (i.e. the process _Ł_n). The asymptotic regime means that the window Ł_n increases to the whole space (i.e. n goes to infinity) without changing the realization of .
For the sake of simplicity, we decide to treat only the case of two parameters (z,β), but it would be possible to consider energy functions depending on an extra parameter θ∈^p.
The case where H depends linearly on θ can be treated exactly as z and β. For the non linear case the setting is much more complicated and each procedure has to be adapted. References are given in each section.
In all the section, we assume that the energy function H is stationary and has a finite range R>0. The existence of is therefore guaranteed by Theorem <ref>.
The procedures presented below are not affected by the uniqueness or non-uniqueness of the distribution of such GPP.
In Section <ref>, we start by presenting the natural maximum likelihood estimator. Afterwards, in Section <ref>, we introduce the general Takacs-Fiksel estimator which is a mean-square procedure based on the GNZ equations. The standard maximum pseudo-likelihood estimator is a particular case of such estimator and is presented in Section <ref>. An application to an unobservable issue is treated in Section <ref>. The last Section <ref> is devoted to a new estimator based on a variational GNZ equation.
§.§ Maximum likelihood estimator
The natural method to estimate the parameters is likelihood inference. However, a practical issue is that the likelihood depends on the intractable partition function. In the case of sparse data, approximations were first proposed in <cit.>, before simulation-based methods were developed <cit.>. Here, we treat only the theoretical aspects of the MLE and these practical issues are not investigated.
The maximum likelihood estimator of (z^*,β^*) is given for any n≥ 1 by
(ẑ_n,β̂_n) = argmax_z>0,β≥ 0 1/Z_Ł_n^z,β z^N_Ł_n() e^-β H(_Ł_n).
Note that the argmax is not necessarily unique and that the boundary effects are not considered in this version of MLE. Other choices could be considered.
In this section we show the consistency of such estimators. The next natural question concerns the asymptotic distribution of the MLE but this problem is more arduous and is still partially unsolved today. Indeed, Mase <cit.> and Jensen <cit.> proved that the MLE is asymptotically normal when the parameters z and β are small enough. Without these conditions, phase transition may occur and some long-range dependence phenomenon can appear. The MLE might then exhibit a non standard asymptotic behavior, in the sense that the rate of convergence might differ from the standard square root of the size of the window and the limiting law might be non-gaussian.
The next theorem is based on a preprint by Mase <cit.>. See also <cit.> for general results on consistency.
We assume that the energy function H is stationary, finite range and not almost surely constant (i.e. there exists a subset Ł⊂^d such that H(_Ł) is not π_Ł(d_Ł) almost surely constant). We assume also that the mean energy exists for any stationary probability measure P (i.e. the limit (<ref>) exists) and that the boundary effects assumption (<ref>) holds. Moreover we assume that for any ergodic Gibbs measure P, the following limit holds for P-almost every
lim_n↦∞1/λ^d(Ł_n)H(_Ł_n)=H(P).
Then, almost surely the parameters (ẑ_n,β̂_n) converge to (z^*,β^*) when n goes to infinity.
Let us assume that the Gibbs distribution P of is ergodic. Otherwise P can be represented as a mixture of ergodic stationary Gibbs measures (see <cit.>, Theorems 2.2 and 4.1). Therefore the proof of the consistency of the MLE reduces to the case when P is ergodic, which is assumed henceforth.
Let us consider the log-likelihood contrast function
K_n(θ,β)=-log(Z_Ł_n^e^-θ,β) -θ N_Ł_n() -β H(_Ł_n)
related to the parametrization θ=-log(z). It is clear that (ẑ_n,β̂_n)=(e^-θ̃_n,β̃_n) where (θ̃_n,β̃_n) is the argmax of (θ,β)↦ K_n(θ,β). So it is sufficient to show that (θ̃_n,β̃_n) converges almost surely to (-log(z^*),β^*). The limit (<ref>), the ergodic Theorem and the assumption (<ref>) imply the existence of the following limit contrast function
K(θ,β):=-p^e^-θ,β -θ E_P(N_[0,1]^d()) -β H(P)=lim_n→∞K_n(θ,β)/λ^d(Ł_n).
The variational principle (Theorem <ref>) ensures that (θ,β)↦ K(θ,β) is lower than I_1(P) with equality if and only if P is a Gibbs measure with energy function H, activity z and inverse temperature β. Since H is not almost surely constant, it is easy to see that two Gibbs measures with different parameters z,β are different (this fact can be seen using the DLR equations in a very large box Ł). Therefore K(θ,β) is maximal, equal to I_1(P), if and only if (θ, β)=(θ^*, β^*).
Therefore it remains to prove that the maximizers of (θ,β)↦ K_n(θ,β) converge to the unique maximizer of (θ,β)↦ K(θ,β).
First note that the functions K_n are concave. Indeed, the Hessian of K_n is negative since
∂^2 K_n(θ,β)/∂^2 θ =-Var_P_Ł_n^e^-θ,β(N_Ł_n), ∂^2 K_n(θ,β)/∂^2 β =-Var_P_Ł_n^e^-θ,β(H)
and
∂^2 K_n(θ,β)/∂θ∂β =-Cov_P_Ł_n^e^-θ,β(N_Ł_n,H).
The convergence result for the argmax follows since the function (θ,β)↦ K(θ,β) is necessarily strictly concave at (θ^*,β^*) because K(θ,β) is maximal uniquely at (θ^*,β^*).
Let us finish this section with a discussion on the extra assumption (<ref>) which claims that the empirical mean energy converges to the expected value energy. This assumption is in general proved via the ergodic theorem or a law of large numbers. In the case of the Area energy function H defined in (<ref>), it is a direct consequence of a decomposition as in (<ref>) and the ergodic Theorem. In the case of pairwise interaction, the verification follows essentially the proof of Proposition <ref>.
§.§ Takacs-Fiksel estimator
In this section we present an estimator introduced in the eighties by Takacs and Fiksel <cit.>. It is based on the GNZ equations presented in Section <ref>. Let us start by explaining briefly the procedure. Let f be a test function from ^d× to . We define the following quantity for any z>0, β>0 and ∈
C_Ł_n^z,β(f,)=∑_x∈_Ł_n f(x,\{x})-z∫_Ł_n e^-β h(x,)f(x,) dx.
By the GNZ equation (<ref>) we obtain
E(C^z^*,β^*_Ł_n(f,))=0
where is a GPP with parameter z^* and β^*. Thanks to the ergodic Theorem it follows that for n large enough
C^z^*,β^*_Ł_n(f,)/λ^d(Ł_n)≈ 0.
Then the Takacs-Fiksel estimator is defined as a mean-square method based on functions C^z^*,β^*_Ł_n(f_k,) for a collection of test functions (f_k)_1≤ k ≤ K.
Let K≥ 2 be an integer and (f_k)_1≤ k ≤ K a family of K functions from ^d× to . The Takacs-Fiksel estimator (ẑ_n,β̂_n) of (z^*,β^*) is defined by
(ẑ_n,β̂_n)=argmin_(z,β)∈∑_k=1^K (C^z,β_Ł_n(f_k,))^2,
where ⊂(0,+∞)×[0,+∞) is a bounded domain containing (z^*,β^*).
In contrast to the MLE procedure, the contrast function does not depend on the partition function. This estimator is explicit except for the computation of integrals and the optimization procedure. In <cit.> the Takacs-Fiksel procedure is presented in a more general setting including the case where the functions f_k depend on the parameters z and β. This generalization may lead to a simpler procedure, by choosing f_k such that the integral term in (<ref>) is explicitly computable.
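For concreteness, here is a minimal numerical sketch of the procedure in Python (our own illustration, not part of the original references; the example pair potential, the Monte Carlo approximation of the integral over the window and the grid search are assumptions of this sketch, and edge corrections are ignored): the contrast C_Ł_n^z,β(f_k,·) is evaluated on the observed configuration for each test function and the sum of squares is minimized over a grid of (z,β) values.

import numpy as np
from itertools import product

def h_pair(x, gamma, r0=1.0):
    # Example local energy: finite-range pairwise potential h(x, gamma) = sum_y phi(|x - y|).
    if len(gamma) == 0:
        return 0.0
    r = np.sqrt(((gamma - x) ** 2).sum(axis=-1))
    return np.exp(-r[r < r0]).sum()

def tf_contrast(points, f_list, L, z, beta, n_mc=20000, rng=None):
    # Takacs-Fiksel contrast sum_k C_k^2 on the window [0, L]^2; the Lebesgue
    # integral is approximated by Monte Carlo over n_mc uniform nodes.
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(0.0, L, size=(n_mc, 2))
    total = 0.0
    for f in f_list:
        s = sum(f(x, np.delete(points, i, axis=0))      # sum over the points of gamma
                for i, x in enumerate(points))
        integrand = np.array([f(x, points) * np.exp(-beta * h_pair(x, points)) for x in u])
        total += (s - z * L ** 2 * integrand.mean()) ** 2
    return total

def tf_estimate(points, f_list, L, z_grid, beta_grid):
    # Grid search for the minimizer (z_hat, beta_hat) of the contrast.
    vals = {(z, b): tf_contrast(points, f_list, L, z, b) for z, b in product(z_grid, beta_grid)}
    return min(vals, key=vals.get)

With, for instance, f_list = [lambda x, g: 1.0, h_pair] one recovers the particular choice of test functions used by the maximum pseudo-likelihood estimator of the next section.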
In the rest of the section, we prove the consistency of the estimator. General results on consistency and asymptotic normality are developed in <cit.>.
We make the following integrability assumption: for any 1≤ k ≤ K
E( |f_k(0,)|(1+|h(0,)|) sup_(z,β)∈e^-β h(0,))<+∞.
We assume also the following identifiability condition: the equality
∑_k=1^K E(f_k(0,)(ze^-β h(0,)-z^*e^-β^* h(0,)))^2=0
holds if and only if (z,β)=(z^*,β^*).
Then the Takacs-Fiksel estimator (ẑ_n,β̂_n) presented in Definition <ref> converges almost surely to (z^*,β^*).
As in the proof of Theorem <ref>, without loss of generality, we assume that the Gibbs distribution of is ergodic. Therefore, thanks to the ergodic Theorem, almost surely for any 1≤ k ≤ K
lim_n↦∞C^z,β_Ł_n(f_k,)/λ^d(Ł_n) = E[∑_x∈_[0,1]^d f_k(x,\ x)]-zE[∫_[0,1]^d e^-β h(x,)f_k(x,)dx].
By the GNZ equation (<ref>)
E[∑_x∈_[0,1]^d f_k(x,\ x)] = z^*E[∫_[0,1]^d e^-β^* h(x,)f_k(x,)dx].
Using the stationarity and compiling (<ref>) and (<ref>), we obtain that the contrast function
K_n(z,β)= ∑_k=1^K (C^z,β_Ł_n(f_k,)/λ^d(Ł_n))^2
admits almost surely the limit
lim_n↦∞ K_n(z,β) =K(z,β):= ∑_k=1^K E(f_k(0,)(ze^-β h(0,)-z^*e^-β^* h(0,)))^2,
which is null if and only if (z,β)=(z^*,β^*). Therefore it remains to prove that the minimizers of the contrast function converge to the minimizer of the limit contrast function. In the previous section we solved a similar issue for the MLE procedure using the convexity of contrast functions. This argument does not work here and we need more sophisticated tools.
We define by W_n(.) the modulus of continuity of the contrast function K_n; let η be a positive real
W_n(η)=sup{|K_n(z,β)-K_n(z',β')|, with (z,β),(z',β')∈, ‖(z-z',β-β')‖≤η}.
Assuming that there exists a sequence (ϵ_l)_l≥ 1, which goes to zero when l goes to infinity, such that for any l≥ 1
P ( lim sup_n↦ +∞{W_n(1/l)≥ϵ_l})=0
then almost surely the minimizers of (z,β)↦ K_n(z,β) converge to the minimizer of (z,β)↦ K(z,β).
Let us show that the assertion (<ref>) holds. Thanks to equalities (<ref>), (<ref>) and assumption (<ref>), there exists a constant C_1 such that for n large enough, any 1≤ k ≤ K and any (z,β)∈
|C^z,β_Ł_n(f_k,)|/λ^d(Ł_n)≤ C_1.
We deduce that for n large enough
|K_n(z,β)-K_n(z',β')| ≤ C_1/λ^d(Ł_n)∑_k=1^K ∫_Ł_n |f_k(x,)||ze^-β h(x,)-z'e^-β' h(x,)| dx
≤ C_1|β-β'|/λ^d(Ł_n)max_1≤ k≤ K∫_Ł_n |f_k(x,)h(x,)|sup_(z,β”)∈ ze^-β” h(x,)dx
+C_1|z-z'|/λ^d(Ł_n)max_1≤ k≤ K∫_Ł_n |f_k(x,)|sup_(z,β”)∈ e^-β” h(x,)dx.
By the ergodic Theorem, the following convergences hold almost surely
lim_n↦ +∞1/λ^d(Ł_n)∫_Ł_n |f_k(x,)h(x,)|sup_(z,β”)∈ ze^-β” h(x,)dx
= E(|f_k(0,)h(0,)|sup_(z,β”)∈ ze^-β” h(0,))<+∞,
and
lim_n↦ +∞1/λ^d(Ł_n)∫_Ł_n |f_k(x,)|sup_(z,β”)∈ e^-β” h(x,)dx
= E(|f_k(0,)|sup_(z,β”)∈ e^-β” h(0,))<+∞.
This implies the existence of a constant C_2>0 such that for n large enough, any 1≤ k ≤ K and any (z,β)∈
|K_n(z,β)-K_n(z',β')| < C_2‖ (z-z',β-β')‖.
The assumption (<ref>) holds with the sequence ϵ_l=C_2/l and Theorem <ref> is proved.
The integrability assumption (<ref>) is sometimes difficult to check, especially when the local energy h(0,) is not bounded from below. For instance in the setting of the pairwise energy function H defined in (<ref>) with a pair potential φ having negative values, the Ruelle estimates (<ref>) are very useful. Indeed, by stability of the energy function, the potential φ is necessarily bounded from below by 2A and therefore
E(e^-β h(0,))<E(e^-2Aβ N_B(0,R)()))<+∞,
where R is the range of the interaction.
In the identifiability assumption (<ref>), the sum is null if and only if each term is null. Assuming that the functions are regular enough, each term is null as soon as (z,β) belongs to a 1-dimensional manifold embedded in ^2 containing (z^*,β^*). Therefore, assumption (<ref>) claims that (z^*,β^*) is the unique common element of these K manifolds. If K≤ 2, there is no special geometric argument to ensure that K 1-dimensional manifolds in ^2 have a unique intersection point. For this reason, it is recommended to choose K≥ 3. See Section 5 in <cit.> for more details and complements on this identifiability assumption.
§.§ Maximum pseudo-likelihood estimator
In this section we present the maximum pseudo-likelihood estimator, which is a particular case of the Takacs-Fiksel estimator. This procedure has been first introduced by Besag in <cit.> and popularized by Jensen and Moller in <cit.> and Baddeley and Turner in <cit.>.
The maximum pseudo-likelihood estimator (ẑ_n,β̂_n) is defined as a Takacs-Fiksel estimator (see Definition <ref>) with K=2, f_1(x,)=1 and f_2(x,)=h(x,).
This particular choice of functions f_1, f_2 simplifies the identifiability assumption (<ref>). The following theorem is an adaptation of Theorem <ref> to the present setting of the MPLE. The asymptotic normality is first investigated in <cit.> (see also <cit.> for more general results).
Assuming
E( (1+h(0,)^2) sup_(z,β)∈e^-β h(0,))<+∞
and
P(h(0,)=h(0,∅))<1,
then the maximum pseudo-likelihood estimator (ẑ_n,β̂_n) converges almost surely to (z^*,β^*).
Let us check the assumptions of Theorem <ref>. Clearly, the integrability assumption (<ref>) ensures the integrability assumptions (<ref>) with f_1=1 and f_2=h. So it remains to show that assumption (<ref>) implies the identifiability assumption (<ref>). Consider the parametrization z=e^-θ and the function
ψ(θ,β)=E(e^-θ^*-β^*h(0,)(e^U-U-1)),
with
U=β^*h(0,)+θ^*-β h(0,)-θ.
The function ψ is convex, non-negative and equal to zero if and only if U is almost surely equal to zero. By assumption (<ref>), this occurs if and only if (z,β)=(z^*,β^*). Therefore the gradient ∇ψ=0 if and only if (z,β)=(z^*,β^*). Noting that
∂ψ(θ,β)/∂θ=E(z^*e^-β^*h(0,)-ze^-β h(0,))
and
∂ψ(θ,β)/∂β=E(h(0,)(z^*e^-β^*h(0,)-ze^-β h(0,))),
the identification assumption (<ref>) holds. The theorem is proved.
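For completeness, the first of these identities follows by differentiating under the expectation (a routine verification):

∂ψ(θ,β)/∂θ = E(e^-θ^*-β^*h(0,)(1-e^U)) = E(z^*e^-β^*h(0,)) - E(ze^-β h(0,)),

since ∂ U/∂θ=-1 and e^-θ^*-β^*h(0,)e^U=e^-θ-β h(0,)=ze^-β h(0,); the second identity follows in the same way from ∂ U/∂β=-h(0,).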
§.§ Solving an unobservable issue
In this section we give an application of the Takacs-Fiksel procedure in a setting with a partially observable dataset. Let us consider a Gibbs point process for which we observe only L_R() in place of . This setting appears when Gibbs point processes are used for producing random surfaces via germ-grain structures (see <cit.> for instance). Applications to modelling micro-structures in materials or micro-emulsions in statistical physics are developed in <cit.>.
The goal is to provide an estimator of z^* and β^* in spite of this observability issue.
Note that the number of points (or balls) is not observable from L_R() and therefore the MLE procedure is not achievable, since the likelihood is not computable. When β is known and fixed to zero, it corresponds to the estimation of the intensity of the Boolean model from its germ-grain structure (see <cit.> for instance).
In the following we assume that is a Gibbs point process for the Area energy function defined in (<ref>), the activity z^* and the inverse temperature β^*. This choice is natural since the energy function depends on the observations L_R(). The more general setting of the Quermass interaction is presented in <cit.> but for the sake of simplicity, we treat here only the simpler case of the Area interaction.
We opt for a Takacs-Fiksel estimator but the main problem is that the function
C_Ł_n^z,β(f,)=∑_x∈_Ł_n f(x,\{x})-z∫_Ł_n e^-β h(x,)f(x,) dx,
which appears in the procedure, is not computable since the positions of points are not observable. The main idea is to choose the function f properly such that the sum is observable although each term of the sum is not. To this end, we define
f_1(x,)=Surface(∂ B(x,R)∩ L^c_R())
and
f_2(x,)=_{B(x,R)∩ L_R()=∅},
where ∂ B(x,R) is the boundary of the ball B(x,R) (i.e. the sphere S(x,R)) and the "Surface" means the (d-1)-dimensional Hausdorff measure in . Clearly the function f_1 gives the surface of the portion of the sphere S(x,R) outside the germ-grain structure L_R(). The function f_2 indicates if the ball B(x,R) hits the germ-grain structure L_R(). Therefore we obtain that
∑_x∈_Ł_n f_1(x,\{x})=Surface(∂ L_R(_Ł_n))
and
∑_x∈_Ł_n f_2(x,\{x})=N_iso(L_R(_Ł_n)),
where N_iso(L_R(_Ł_n)) is the number of isolated balls in the germ-grain structure L_R(_Ł_n). Let us note that these quantities are not exactly observable since, in practice, we observe L_R()∩Ł_n rather than L_R(_Ł_n). However, if we omit this boundary effect, the values C_Ł_n^z,β(f_1,) and C_Ł_n^z,β(f_2,) are observable and the Takacs-Fiksel procedure is achievable. The consistency of the estimator is guaranteed by Theorem <ref>. The integrability assumption (<ref>) is trivially satisfied since the functions f_1, f_2 and h are uniformly bounded. The verification of the identifiability assumption (<ref>) is more delicate and we refer to <cit.>, example 2 for a proof. Numerical estimations on simulated and real datasets can be found in <cit.>.
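In dimension d=2 the two test functions are easy to evaluate numerically. The sketch below (our own illustration; the discretization of the circle and the function names are assumptions) computes f_1 by discretizing the circle S(x,R) and keeping the fraction of it not covered by the balls B(y,R) centred at the points of the configuration, and f_2 as the indicator that no point of the configuration lies within distance 2R of x.

import numpy as np

def f1(x, gamma, R, n_dirs=512):
    # Length of the part of the circle S(x, R) lying outside L_R(gamma):
    # a point p of the circle is uncovered iff |p - y| > R for every y in gamma.
    theta = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    p = x + R * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    if len(gamma) == 0:
        return 2.0 * np.pi * R
    d2 = ((p[:, None, :] - gamma[None, :, :]) ** 2).sum(axis=-1)
    uncovered = (d2 > R * R).all(axis=1)
    return 2.0 * np.pi * R * uncovered.mean()

def f2(x, gamma, R):
    # Indicator that B(x, R) does not hit L_R(gamma), i.e. d(x, gamma) > 2R.
    if len(gamma) == 0:
        return 1.0
    return float((((gamma - x) ** 2).sum(axis=-1) > (2.0 * R) ** 2).all())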
§.§ A variational estimator
In this last section, we present a new estimator based on a variational GNZ equation which is a mix between the standard GNZ equation and an integration by parts formula. This equation was first introduced in <cit.> for statistical mechanics issues and used recently in <cit.> for spatial statistics considerations. In the following, we present first this variational equation and afterwards we introduce its associated estimator of β^*. The estimation of z^* is not considered here.
Let be a GPP for the energy function H, the activity z and the inverse temperature β. We assume that, for any ∈, the function x↦ h(x,) is differentiable on ^d\. Let f be a function from ^d× to which is differentiable and with compact support with respect to the first variable. Moreover we assume the integrability of both terms below. Then
E(∑_x∈∇_xf(x,\{x}))= β E(∑_x∈ f(x,\{x})∇_xh(x,\{x})).
By the standard GNZ equation (<ref>) applied to the function ∇_xf, we obtain
E(∑_x∈∇_xf(x,\{x}))= z E(∫_^d e^-β h(x,)∇_xf(x,)dx).
By a standard integration by parts formula with respect to the first variable x, we find that
E(∑_x∈∇_xf(x,\{x}))= zβ E(∫_^d∇_xh(x,)e^-β h(x,) f(x,)dx).
Using again the GNZ equation we finally obtain (<ref>).
Note that equation (<ref>) is a vectorial equation. For convenience it is possible to obtain a real equation by summing each coordinate of the vectorial equation. The gradient operator is simply replaced by the divergence operator.
The parameter z does not appear in the variational GNZ equation (<ref>). Therefore these equations do not characterize the Gibbs measures as in Proposition <ref>. Actually these variational GNZ equations characterize mixtures of Gibbs measures with random activity (see <cit.> for details).
Let us now explain how to estimate β^* from these variational equations. When the observation window Ł_n is large enough, we approximate the expectations of the sums in (<ref>) by the sums themselves. The estimator of β^* is then simply defined by
β̂_n= ∑_x∈_Ł_ndiv_x f(x,\{x})/∑_x∈_Ł_n f(x,\{x})div_xh(x,\{x}).
Note that this estimator is very simple and quick to compute in comparison to the MLE, MPLE or the general Takacs-Fiksel estimators. Indeed, in (<ref>), there are only elementary operations (no optimization procedure, no integral to compute).
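The following sketch makes this explicit (our own illustration; f and h are user-supplied differentiable functions, e.g. built from a smooth finite-range pair potential, and the divergences are approximated by central finite differences, which is an assumption of this sketch rather than part of the original procedure):

import numpy as np

def beta_hat(points, f, h, eps=1e-5):
    # Variational estimator: ratio of the two empirical sums in the display above.
    def div(g, x, gamma):
        s = 0.0
        for k in range(len(x)):                 # central difference in each coordinate
            e = np.zeros_like(x); e[k] = eps
            s += (g(x + e, gamma) - g(x - e, gamma)) / (2.0 * eps)
        return s
    num, den = 0.0, 0.0
    for i, x in enumerate(points):
        gam = np.delete(points, i, axis=0)      # gamma \ {x}
        num += div(f, x, gam)
        den += f(x, gam) * div(h, x, gam)
    return num / den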
Let us now finish this section with a consistency result. More general results for consistency, asymptotic normality and practical estimations are available in <cit.>.
Let be a GPP for a stationary and finite range energy function H, activity z^* and inverse temperature β^*. We assume that, for any ∈, the function x↦ h(x,) is differentiable on ^d\. Let f be a stationary function from ^d× to , differentiable with respect to the first variable and such that
E((|f(0,)|+|∇_xf(0,)|+|f(0,)∇_xh(0,)|)e^-β^*h(0,))<+∞
and
E(f(0,)div_xh(0,)e^-β^*h(0,))≠ 0.
Then the estimator β̂_n converges almost surely to β^*.
As usual, without loss of generality, we assume that the Gibbs distribution of is ergodic. Then by the ergodic theorem the following limits both hold almost surely
lim_n↦ +∞1/λ^d(Ł_n)∑_x∈_Ł_ndiv_xf(x,\{x})=E(∑_x∈_[0,1]^ddiv_xf(x,\{x}))
and
lim_n↦ +∞1/λ^d(Ł_n)∑_x∈_Ł_n f(x,\{x})div_xh(x,\{x})
= E(∑_x∈_[0,1]^d f(x,\{x})div_xh(x,\{x})).
Note that both expectations in (<ref>) and (<ref>) are finite since by the GNZ equations, the stationarity and assumption (<ref>)
E(∑_x∈_[0,1]^d |divf(x,\{x})|)=z^*E(|divf(0,)|e^-β^*h(0,))<+∞
and
E(∑_x∈_[0,1]^d |f(x,\{x})divh(x,\{x})|)=z^*E(|f(0,)divh(0,)|e^-β^*h(0,))<+∞.
We deduce that almost surely
lim_n↦ +∞β̂_n= E(∑_x∈_[0,1]^ddivf(x,\{x}))/E(∑_x∈_[0,1]^d f(x,\{x})divh(x,\{x})),
where the denominator is not null thanks to assumption (<ref>). Therefore it remains to prove the following variational GNZ equation
E(∑_x∈_[0,1]^d∇_xf(x,\{x}))=β^*E(∑_x∈_[0,1]^d f(x,\{x})∇_xh(x,\{x})).
Note that this equation is not a direct consequence of the variational GNZ equation (<ref>) since the function x↦ f(x,) does not have compact support. We need the following cut-off approximation. Let us consider (ψ_n)_n≥ 1 any sequence of functions from ^d to such that ψ_n is differentiable, equal to 1 on Ł_n, 0 on Ł_n+1^c and such that |∇ψ_n| and |ψ_n| are uniformly bounded by a constant C (which does not depend on n). It is not difficult to build such a sequence of functions. Applying the variational GNZ equation (<ref>) to the function (x,)↦ψ_n(x)f(x,), we obtain
E(∑_x∈ψ_n(x)∇_xf(x,\{x}))+E(∑_x∈∇_xψ_n(x)f(x,\{x}))
= β^* E(∑_x∈ψ_n(x)f(x,\{x})∇_xh(x,\{x})).
Thanks to the GNZ equation and the stationarity we get
| E(∑_x∈ψ_n(x)∇_xf(x,\{x})) -λ^d(Ł_n)E(∑_x∈_[0,1]^d∇_xf(x,\{x}))|
≤ Cz^*λ^d(Ł_n+1\Ł_n) E(|∇_xf(0,)|e^-β^*h(0,)),
and
| E(∑_x∈ψ_n(x)f(x,\{x})∇_xh(x,\{x})).
. -λ^d(Ł_n)E(∑_x∈_[0,1]^d f(x,\{x})∇_xh(x,\{x}))|
≤ Cz^*λ^d(Ł_n+1\Ł_n) E(|f(0,)∇_xh(0,)|e^-β^*h(0,)),
and finally
|E(∑_x∈∇_xψ_n(x)f(x,\{x}))|≤ Cz^*λ^d(Ł_n+1\Ł_n) E(|f(0,)|e^-β^*h(0,)).
Therefore, dividing equation (<ref>) by λ^d(Ł_n), using the previous approximations and letting n go to infinity, we find exactly the variational equation (<ref>). The theorem is proved.
Acknowledgement: The author thanks P. Houdebert, A. Zass and the anonymous referees for the careful reading and the interesting comments. This work was supported in part by the Labex CEMPI (ANR-11-LABX-0007-01), the CNRS GdR 3477 GeoSto and the ANR project PPP (ANR-16-CE40-0016).
|
http://arxiv.org/abs/1701.08091v1 | 20170127155515 | The correlation between the Nernst effect and fluctuation diamagnetism in strongly fluctuating superconductors | [
"Kingshuk Sarkar",
"Sumilan Banerjee",
"Subroto Mukerjee",
"T. V. Ramakrishnan"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.str-el"
] |
^1 Department of Physics, Indian Institute of Science, Bangalore 560 012, India
^2 Centre for Quantum Information and Quantum Computing, Indian Institute of Science,
Bangalore 560 012, India
^3 Department of Physics, Banaras Hindu University, Varanasi 221005, India
E-mail: kingshuk@physics.iisc.ernet.in, sumilan@physics.iisc.ernet.in,
smukerjee@physics.iisc.ernet.in, tvrama2002@yahoo.co.in
Keywords: Nernst effect, Transverse thermoelectric transport coefficient, Diamagnetism,
Correlation, Superconducting fluctuations, Cuprates
We study the Nernst effect in fluctuating superconductors by calculating the transport coefficient α_xy in a phenomenological model where relative importance of phase and amplitude fluctuations of the order parameter is tuned continuously to smoothly evolve from an effective XY model to more conventional Ginzburg-Landau description. To connect with a concrete experimental realization we choose the model parameters appropriate for cuprate superconductors and calculate α_xy and the magnetization M over the entire range of experimentally accessible values of field, temperature and doping. We argue that α_xy and M are both determined by the equilibrium properties of the superconducting fluctuations (and not their dynamics) despite the former being a transport quantity. Thus, the experimentally observed correlation between the Nernst signal and the magnetization arises primarily from the correlation between α_xy and M. Further, there exists a dimensionless ratio M/(T α_xy) that quantifies this correlation. We calculate, for the first time, this ratio over the entire phase diagram of the cuprates and find it agrees with previous results obtained in specific parts of the phase diagram. We conclude that there appears to be no sharp distinction between the regimes dominated by phase fluctuations and Gaussian fluctuations for this ratio in contrast to α_xy and M individually. The utility of this ratio is that it can be used to determine the extent to which superconducting fluctuations contribute to the Nernst effect in different parts of the phase diagram given the measured values of magnetization.
The correlation between the Nernst effect and fluctuation diamagnetism in strongly fluctuating superconductors

Kingshuk Sarkar^1, Sumilan Banerjee^1, Subroto Mukerjee^1,2, T. V. Ramakrishnan^1,3
§ INTRODUCTION
The Nernst effect is the phenomenon of the production of an electric field E in a direction perpendicular to an applied temperature gradient ∇ T under conditions of zero electrical current flow. This is possible only when time reversal symmetry is broken and thus in the most common setting the sample is placed in an external magnetic field B. The Nernst effect is particularly pronounced in type II superconducting systems <cit.>. Such systems possess mobile vortices for certain ranges of values of applied magnetic field and temperature. These vortices can move under the influence of a temperature gradient inducing a transverse electric field through phase slips. The vortices possess entropy which causes them to move opposite to the direction of an applied temperature gradient. However, since they carry no charge they do not produce an electric current, giving rise to the Nernst effect. The Nernst signal is proportional to the vortex entropy. In contrast, for systems in which the elementary mobile degrees of freedom are charged quasiparticles, the condition of zero electrical current implies an equal and opposite flux of particles along and against the temperature gradient. The particles moving in the two opposite directions carry different amounts of entropy giving rise to a heat current. However, if they are scattered in the same way, the transverse electric fields induced by them cancel in the presence of a magnetic field giving rise to a zero Nernst signal. This is known as the Sondheimer cancellation <cit.>. The Nernst effect in quasiparticle systems is thus typically produced by energy-dependent scattering or ambipolarity of the carriers and is generally not as strong as in superconductors. The Nernst effect has also been observed in heavy fermion systems <cit.>.
The above discussion would suggest that a pronounced Nernst signal in a superconductor is an indicator of mobile vortices. However, the Nernst effect has been observed in the cuprates at temperatures well above the transition temperature T_c <cit.>. A description of the system in terms of distinct non-overlapping vortices is not always possible at such high temperatures. In overdoped cuprates, it has been argued that the Nernst effect is most effectively described in terms of Gaussian fluctuations of the superconducting order parameter rather than distinct mobile vortices <cit.>. Calculations of the Nernst coefficient in this regime at small magnetic fields produce a good match to experimental data at low fields. At high fields and low temperatures, the Gaussian theory is not applicable. Nevertheless, a description of the system in terms of a Ginzburg-Landau theory of superconducting fluctuations with appropriate dynamics produces a good match to experimental data <cit.>. Other works along similar lines include a calculation based on a self-consistent Gaussian approximation using a Landau level basis at low temperature and finite fields <cit.> and a Coulomb gas model of vortices with the core energy related to the Nernst effect and diamagnetism <cit.>.
In the underdoped region, fluctuations are expected to be much stronger yielding a large region of temperature with dominant fluctuations in the phase of the order parameter with a largely uniform amplitude. A description of the system in terms of mobile vortices is a good one in this regime and a calculation of the Nernst effect based on a classical XY model has been performed yielding a good match to experimental data <cit.>. A systematic interpolation between these two regimes as a function of doping, temperature and magnetic field for the Nernst has been lacking, primarily due to the absence of a common theory of superconducting fluctuations across the entire superconducting phase diagram. In this paper, we address this lacuna in the literature by employing a phenomenological Ginzburg-Landau-type functional developed by two of us <cit.>. Calculations based on this functional have provided good agreement with experimental measurements of different quantities such as the specific heat, superfluid density, photoemission and the superconducting dome across the entire range of doping and temperature of the cuprate phase diagram. This functional has also recently been employed by us to obtain a fairly good agreement with measurements of fluctuation diamagnetism in the cuprates <cit.>.
The measured Nernst effect in different parts of the cuprate phase diagram has been variously attributed to Gaussian fluctuations <cit.>, phase fluctuations <cit.> and quasiparticles <cit.>. In several instances there is no consensus on exactly which mechanism is responsible for the observed signal in the same part of the phase diagram <cit.>, a situation further complicated by the observation of competing orders. In this work we calculate the coefficient α_xy, called the off-diagonal Peltier coefficient and sometimes the Ettingshausen coefficient, from a model of superconducting fluctuations. In the limit of strong particle-hole symmetry, as seen for many superconductors, the Nernst coefficient ν = 1/Hα_xy/σ_xx, where H is the magnetic field and σ_xx, the magnetoconductivity. We show that in a model of superconducting fluctuations, α_xy, despite being a transport quantity, is expected to be naturally related to equilibrium quantities. This is due to the fact that α_xy is determined by the strength of the superconducting fluctuations as opposed to their dynamics (as we explain later), which is also responsible for equilibrium phenomena. On the other hand, ν and σ_xx are given by the dynamics of the fluctuations. In particular, we argue that α_xy is naturally related to the magnetization M through a dimensionless ratio M/(T α_xy), which is a function of doping, temperature and magnetic field. Experimentally, in hole-doped cuprate superconductors above the superconducting transition temperature T_c in the pseudogap regime a large diamagnetic response has been observed concurrently with a large Nernst signal over a wide range of temperatures <cit.>. A connection between α_xy and M via the ratio M/(T α_xy) has also been proposed theoretically in the XY and Gaussian fluctuation dominated regimes of the cuprate phase diagram <cit.> and found to be consistent with experimental observations. In most superconductors, including the cuprates, superconducting fluctuations are the main source of any large observed diamagnetic signal. Thus, a concurrent measurement of α_xy along with a comparison to our calculated ratio of M/(T α_xy) can provide an indication of whether the observed Nernst signal is also due to superconducting fluctuations. We illustrate this by performing our calculations on our phenomenological model of superconducting fluctuations for the cuprates, mentioned in the previous paragraph.
The paper is organized as follows: In section 2, we discuss the model we study and various details concerning the form of the currents and transport coefficients obtained from it. Section 3 contains a discussion of the methodology and a description of the details of our numerical simulations. We present the results of our simulations in section 4 and comment on the important features seen in the data. Finally in section 5, we discuss the novel findings of our calculations and also their relation to previous theoretical and experimental work. Additionally, there are three appendices which discuss technical details pertinent to the calculations and results discussed in the main text.
§ MODEL
To study transport properties due to superconducting fluctuations we implement “model A” dynamics for a complex superconducting order parameter Ψ(r,t) given by the stochastic equation
τ D_tΨ(r,t)=-δ F{Ψ, Ψ^*}/δΨ^*(r,t)+η.
F{Ψ, Ψ^*} is a free energy functional.
In order to be able to introduce electromagnetic fields, we define a covariant time derivative D_t=(∂/∂ t+i2π/Φ_0Φ) and a covariant spatial derivative D=∇-i2π/Φ_0 A. A(r,t) and Φ(r,t) are the magnetic vector and scalar potential respectively while Φ_0=h/e^* is the flux quantum. The free energy functional is assumed to contain an energy cost for spatial inhomogeneities of the order parameter through the appearance of terms involving the covariant spatial derivative. The specific model we study is defined on a lattice, where the spatial derivative has to be appropriately discretized as we discuss later. The time scale τ, which provides the characteristic temporal response scale of the order parameter dynamics, can in general be complex. However it is required to be real under the requirement that the equation of motion for Ψ^* be the same as for Ψ under the simultaneous transformation of complex conjugation (Ψ→Ψ^*) and magnetic field inversion (H→ -H) (particle-hole symmetry). Evidence of particle-hole symmetry in the form of no appreciable Hall or Seebeck effect is seen in the experimentally accessible regime of the superconductors we study here and thus we take τ to be real in our calculations.
The thermal fluctuations are introduced through η( r,t) with the Gaussian white noise correlator
⟨η^*(r,t)η(r^',t^') ⟩ =2k_BTτδ(r-r^')δ(t-t^')
Further, the magnetic field (H=∇× A) is assumed to be uniform and not fluctuating due to a large ratio (κ) between the London penetration depth (λ) and the coherence length (ξ) for the strong type-II superconductors we study. Cuprate and iron-based superconductors are examples of these.
The dynamical model Eq. <ref> is the simplest one which yields an equilibrium state in the absence of driving potentials. It can be derived microscopically within BCS theory above and close to the transition temperature T_c. However, it has been used phenomenologically to study transport previously in situations, where the microscopic theory is not known, such as for the cuprates <cit.>. We employ the model in a similar spirit here.
§.§ Heat and electrical transport coefficients
The model described by Eq. <ref> has no conservation laws and thus currents cannot be defined in terms of continuity equations. Nevertheless, they can be defined by appealing to the microscopics of the full system and then identifying the degrees of freedom that contribute to the superconductivity. The expression for the charge current density obtained this way is <cit.>
J^ e_ tot=-δ F/δ A
An expression can also be obtained for the heat current density J^ Q along similar lines but it cannot be written as compactly as the one for the charge current density <cit.>. We provide the exact expression for the heat current for the model we study in the next subsection. For the present discussion, we only require that J^ Q exists.
In the presence of a magnetic field, these current densities are sums of transport and magnetization current densities <cit.>.
J^e_ tot( r) = J^e_ tr( r) + J^e_ mag( r)
J^Q_ tot( r) = J^Q_ tr( r) + J^Q_ mag( r) ,
where tr and mag stand for transport and magnetization respectively.
The transport coefficients we calculate are described only by the transport parts of the current densities, to obtain which the magnetization parts need to be subtracted from the total current densities. We detail the steps to do this in <ref> which follows the discussion of ref. <cit.>.
The transport current densities can be related to an applied temperature gradient ∇ T and electric field E in linear response as
[ J_tr^e; J_tr^Q ] = [ σ̂ α̂; α̂̃̂ κ̂ ][ E; -∇T ],
where σ̂, α̂, α̂̃̂, κ̂ are the electrical, thermoelectric, electro-thermal and thermal conductivity tensors respectively and are independent of the gradients in linear response. On general grounds it can be shown that σ_xy(H)=-σ_yx(H) and α_xy(H)=-α_yx(H). The Nernst coefficient (ν) under the condition J^e_tr=0 is given by <cit.>
ν=E_yH∇_x T=1/Hα_xyσ_xx-σ_xyα_xx/σ_xx^2+σ_xy^2
For systems with particle-hole symmetry α_xx and σ_xy are zero and thus
ν=α_xy/Hσ_xx
Further, the Onsager relation gives α̂̃̂=Tα̂ <cit.>.
§.§ Dimensional analysis of the transport coefficients
Eq. <ref> can be written in terms of dimensionless parameters as follows. We assume that there are basic scales, x_0, T_0 and Ψ_0 for the spatial coordinate, temperature and the order parameter arising in the equilibrium state of the system. We can then define r', T' and Ψ', which are the dimensionless spatial coordinate, temperature and order parameter respectively by scaling by the quantities x_0, T_0 and Ψ_0. Eqs. <ref> and <ref> can now be cast in dimensionless
form in terms of these quantities as
D_t'Ψ'=-δ F'/δΨ'^*+η'
and
⟨( η'( r'_1,t'_1) )^*η'( r'_2,t'_2) ⟩ =2T'δ( r'_1- r'_2)δ(t'_1-t'_2),
where t', F' and η' are the dimensionless values of the time, free energy density and noise. This is possible only if their basic scales are t_0=τ (Ψ_0)^2(x_0)^d/k_BT_0, F_0=k_BT_0/(x_0)^d and η_0=Ψ_0 (x_0)^d/k_BT_0 respectively, where d is the number of spatial dimensions. Additionally, the basic scale of the magnetic flux is Φ_0, which from gauge invariance implies that the basic scales of the electric potential V and electrical current density J^e are V_0=Φ_0/t_0 and J^e_0=k_BT_0/(x_0)^d-1Φ_0. Thus, the basic scales of the coefficients σ̂ and α̂ are J^e_0 x_0/V_0 and J^e_0 x_0/T_0. The dimensionless quantities σ̂ and α̂ can be calculated from Eqns. <ref> and <ref> using the dimensionless form of J^e. These can then be multiplied by appropriate basic scales to get their correct dimensional values.
From the above discussion, it can be seen that while σ̂ is proportional to the relaxation time τ, α̂ is independent of it. Thus, the Nernst signal is inversely proportional to τ in our model. α depends only on the parameters of F which also determine thermal equilibrium properties of the system. In particular, the ratio | M|/T α is dimensionless, where M is the magnetization, suggesting a possible relationship between M and α. In this work, we thus assert that the most meaningful comparison of fluctuation diamagnetism with the Nernst effect is a comparison of α_xy and M.
It has been shown that for a fluctuating 2D superconductor in the limit of Gaussian superconducting fluctuations and low magnetic fields | M|/T α_xy=2 <cit.>. Interestingly, in the complementary limit of very strong fluctuations with temperature much higher than T_c and weak fields, the same ratio is obtained <cit.>. In this work, we calculate this ratio without restricting ourselves to the above limits and show that it in general deviates from the value of 2.
§.§ The free energy functional
The free energy functional we use describes superconductivity on a two dimensional lattice <cit.>. It has a Ginzburg-Landau form with parameters chosen to reproduce experimental observations for the cuprates. In particular, it has been employed to successfully reproduce experimental measurements of the specific heat, superfluid density, superconducting dome and fluctuation diamagnetism <cit.>. Coupling nodal quasiparticles to the fluctuations produces Fermi arcs <cit.>. The functional essentially describes the cuprates as highly anisotropic layered materials with weakly coupled stacks of CuO_2 planes. The superconducting order parameter ψ_m=Δ_m exp (iϕ_m) is defined on the sites m of the square lattice where Δ_m and ϕ_m are the amplitude and phase respectively. The ψ_m field is microscopically related to the complex spin-singlet pairing amplitude ψ_m=1/2⟨ a_i↓ a_j↑ -a_j↓ a_i↑⟩ on the CuO_2 bonds, where m is the bond center of the nearest neighbour lattice sites i and j, and a_i (a^†_i) are annihilation (creation) operators. The form of the functional F=F_0+F_1
ℱ_0({Δ_m})=∑_m (AΔ_m^2 + B/2Δ_m^4),
ℱ_1({Δ_m,ϕ_m})=-C ∑_⟨ mn⟩Δ_m Δ_n cos(ϕ_m-ϕ_n-A_mn),
where ⟨ mn⟩ denotes pairs of nearest neighbour bond sites and A_mn(=2π/Φ_0∫_m^n A.d r) is the bond flux which incorporates the effect of an out of plane magnetic field. The motivation for these explicit forms of the parameters A, B and C from cuprate phenomenology, and the details of their temperature and doping dependence for a particular cuprate, e.g. Bi2212, are discussed in <ref>.
The form of the functional ℱ{ϕ_m,Δ_m} is such that phase fluctuations are dominant and amplitude fluctuations weak at low doping x, and the two become comparable in strength as x increases, ultimately tending towards Gaussian fluctuations of the full order parameter at large doping. The charge and heat current operators are (see <ref>)
J^ e=2π/Φ_0CΔ_mΔ_nsin(ϕ_m-ϕ_n-A_mn)
J^ Q=1/2(J^E_m→ n-J^E_n→ m)+M_z(E×ẑ)
where J^ E_m→n=-C/2{∂ψ^*_m/∂ t√(ψ_m/ψ^*_m)|ψ_n|e^iω_m,n+c.c.} with ω_m,n=ϕ_m-ϕ_n-A_mn.
In the extreme type-II limit when the penetration depth λ→∞, the out of plane magnetic field H is related to the in-plane bond flux A_mn on a square plaquette of size a_0 such that ∑_ A_mn=2πH a_0^2/Φ_0. The lattice constant a_0 introduces a field scale H_0 obtained when one flux quantum Φ_0 passes through the square plaquette and H_0=Φ_0/2π a_0^2. We also note that Δ_mΔ_ncos(ϕ_m-ϕ_n-A_mn)=-1/2(|ψ_m-ψ_n e^iA_mn|^2-Δ_m^2-Δ_n^2) and therefore the term ℱ_1 can be readily identified with the discretized version of the covariant derivative | DΨ|^2 in a standard Ginzburg-Landau theory. Thus, the lattice constant a_0 can be thought of as a suitable ultraviolet cutoff to describe the physics of the system.
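As an illustration of how the stochastic dynamics above is integrated in practice, the following Python fragment performs one Euler-Maruyama step of the lattice model-A dynamics in dimensionless units with τ=1 (a minimal sketch of our own, not the authors' production code: the Landau-gauge choice A_mn=f x on the y-bonds, with f=H/H_0 the flux per plaquette, and the open/periodic boundary conventions of the cylinder geometry of the next section are assumptions of this sketch). The noise amplitude follows from discretizing the white-noise correlator on the unit lattice with time step dt.

import numpy as np

def tdgl_step(psi, A, B, C, f, T, dt, rng):
    # psi: complex array (Lx, Ly); open along x (axial), periodic along y (azimuthal).
    # Landau gauge: bond phase A_mn = f * x on the bond (x, y) -> (x, y+1).
    Lx, Ly = psi.shape
    phase_y = np.exp(1j * f * np.arange(Lx))[:, None]
    # -dF/dpsi*: gauge-covariant sum over the four neighbours plus on-site terms
    hop = np.roll(psi, -1, axis=1) * phase_y            # neighbour at y+1
    hop += np.roll(psi, +1, axis=1) * np.conj(phase_y)  # neighbour at y-1
    hop[:-1] += psi[1:]                                 # neighbour at x+1 (open edge)
    hop[1:] += psi[:-1]                                 # neighbour at x-1 (open edge)
    force = -(A * psi + B * np.abs(psi) ** 2 * psi) + C * hop
    # complex white noise with total variance 2*T per site and unit time; T may be
    # an (Lx, 1) array, which imposes the axial temperature gradient of Section 3
    noise = rng.normal(size=psi.shape) + 1j * rng.normal(size=psi.shape)
    return psi + dt * force + np.sqrt(T * dt) * noise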
§ SIMULATION GEOMETRY AND METHODOLOGY
We simulate the model given by Eqn. <ref> numerically on a two dimensional system of size 100 × 100. We perform the simulation in dimensionless terms by scaling the relevant quantities by the units described in subsection 2.2. To compute α_xy we perform our simulations on a cylinder (Fig. <ref>) with periodic boundary conditions in one direction (ŷ) and zero current conditions along the other (x̂). The uniform magnetic flux per plaquette is in the radial direction and determined by the condition of zero flux in the axial direction. The resulting current is in the azimuthal direction and in the absence of any perturbations (temperature gradient, electric field, etc.) is maximum at the edges, falls to zero and changes direction at the center (Fig. <ref>, red line). Thus, in the absence of any perturbing fields the background magnetization of the cylinder should be zero, which can be checked by summing over the charge currents from one end to the other.
A perturbing field like the temperature gradient along the axial direction introduces a transport current in the azimuthal direction and as a result the total current density is enhanced at one end and suppressed at the other (black line). We see this effect in our simulation by setting the temperature gradient in the linear response regime. Summing the total current density over the whole sample gives only the transport current since the sum over the magnetization current continues to be zero. α_xy can be obtained from the equation
α_xy=-1/S_A∫ J^e_ tot dS_A/∇ T
where S_A is the area of the sample. The typical number of time steps chosen for equilibration and time averaging are about 1.2×10^7 and 10^6 respectively.
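In the simulation the spatial integral above becomes an average of the azimuthal bond current over sites and time. A minimal sketch of this extraction (our own; units of 2π/Φ_0 are left implicit, the snapshots are assumed to be taken after equilibration, and the gauge convention matches the integrator sketched in the previous section):

import numpy as np

def alpha_xy_estimate(psi_samples, C, f, dTdx):
    # Time- and sample-averaged azimuthal bond current
    # J^e_y = C * Delta_m Delta_n sin(phi_m - phi_n - A_mn) = C * Im[psi_m psi_n^* e^{-i A_mn}],
    # divided by -dT/dx; the magnetization current averages to zero over the cylinder.
    J, n = 0.0, 0
    for psi in psi_samples:
        Lx, Ly = psi.shape
        phase_y = np.exp(1j * f * np.arange(Lx))[:, None]
        Jy = C * np.imag(psi * np.conj(np.roll(psi, -1, axis=1)) * np.conj(phase_y))
        J += Jy.mean()
        n += 1
    return -(J / n) / dTdx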
We also compute the coefficient α̃_xy by switching off the temperature gradient and instead turning on the electric field E in the axial direction of the cylinder. E can be introduced through a time dependent magnetic vector potential (A) with E=-∂ A/∂ t, a position dependent electrostatic potential E=-∇Φ or any gauge invariant combination of the two. In this method we calculate the total heat current density. It can be shown that the appropriate subtraction of the magnetization current to yield α̃_xy gives
α̃_xy=-(1/S_A∫ J^Q_tot dS_A/E-M)
The magnetization M is obtained from J^e_mag = ∇× M by an appropriate integration in the equilibrium state (i.e. zero electric field and temperature gradient). The values obtained are in agreement with those from Monte-Carlo simulations obtained in a previous study <cit.>.
A check for whether the magnetization current subtraction has been done properly is by verifying the equality α_xy=α̃_xy/T, which is a consequence of the Onsager relations for transport coefficients. We have verified that the above equality holds to within our noise levels for all values of doping, temperature and field. We note that in the underdoped region, where the fluctuations are strong, there is a large separation between T_c and T_c^MF. Thus, fluctuations of the amplitude of Ψ are negligible even up to temperatures significantly greater than T_c (but also significantly lower than T_c^MF). This allows us to use an effective XY model with only a dynamically varying phase and amplitude frozen to the mean-field value up to fairly high temperatures at underdoping. This effective XY model seems to have a lower noise level for α_xy as compared to the full Ginzburg-Landau model. We thus employ this effective model for lower noise in the underdoped region and have verified that the results agree with those obtained from the full model to within error bars.
§ RESULTS
We plot the obtained values of α_xy as functions of doping, temperature and field. The overall features of α_xy over the phase diagram are summarized in Fig. <ref> through color map plots of the strength of α_xy in the field-temperature (H-T) plane for three different values of doping going from underdoped to overdoped. We have also compared α_xy to M. M can in turn be compared directly to experiments as was done by us in a previous study based on the model we employ here <cit.>. We found the calculated M to be in reasonably good quantitative agreement across the entire range of doping, field and temperature accessible in experiments on the cuprates <cit.>. The value of α_xy for our two dimensional system is converted to a three dimensional one by dividing by the lattice spacing of BSCCO to enable a direct comparison to the three dimensional magnetization.
Fig. <ref> shows the field dependence of α_xy at different temperatures for three representative values of doping - one each in the underdoped, optimally doped and overdoped regimes, with respective T_c values indicated in the figure panels. The magnetization M is shown alongside to enable a comparison. It can be seen that the overall dependence on temperature and field is the same for both quantities for all three values of doping. This is significant because the strength of superconducting fluctuations is different for the three regimes going from strong to weak as the value of doping increases. This similarity of the gross features in the field and temperature dependence of both quantities is a consequence of the fact that it is the strength of the superconducting fluctuations rather than their dynamics that is responsible for both the diamagnetic and off-diagonal thermoelectric responses. The color plots of α_xy in Fig. <ref> illustrate the field and temperature dependence better making it possible to identify contours of constant α_xy.
The similarity between the field and temperature dependences of α_xy and M motivates a more careful comparison of the two quantities. As argued in the previous section, the quantity | M|/(T α_xy) is dimensionless and hence a good measure of the correlations between the two quantities M and α_xy. Plots of this quantity are shown in Fig. <ref> and it can be seen that it is not a constant but has a dependence on doping x, temperature T/T_c and field H/H_0. Of particular relevance is the fact that it stays close to the value 2 for T>T_c at both underdoping and overdoping over a substantial range of field as shown in Figs.<ref>(a),(e). This is consistent with the predictions of theoretical calculations in the high temperature limit of the XY model and the Gaussian fluctuation limit respectively, as we discuss in the next section <cit.>. The dimensionless ratio has also been calculated to be 2 for a model with both superconducting and charge density wave order <cit.>. For optimal doping, the ratio approaches 2 at high fields in our numerical calculations. It should be noted that the ratio appears to be less than 2 at low fields. This is consistent with results obtained from self-consistent Gaussian fluctuations <cit.>. However, the signal to noise ratio in the simulations at low fields is small and we cannot infer anything conclusively about the ratio | M|/(T α_xy) in this regime.
A final feature of our simulation data that needs to be highlighted is shown in Fig. <ref>. In this figure contours of constant α_xy are plotted in the x-T plane for different values of the magnetic field for T>T_c. The superconducting dome obtained by calculating T_c as a function of x is also plotted. It can be seen that the contours follow the superconducting dome. This is especially significant at underdoping where the transition temperature is determined by the strength of phase fluctuations that in turn suppress the superfluid stiffness. We discuss the relevance of this feature in our data in the next section, but note that the same feature is also seen in the fluctuation diamagnetism experimentally <cit.> and in theoretical calculations <cit.>. More significantly, the same feature has also been seen in experimental data for the Nernst coefficient <cit.>.
§ DISCUSSION AND CONCLUSIONS
We have obtained α_xy and the magnetization M as functions of temperature and magnetic field from a phenomenological model of superconducting fluctuations. This model is described by a Ginzburg-Landau free energy on a lattice with the coefficients of the different terms chosen as functions of temperature and doping to reproduce several experimentally observed equilibrium properties of the cuprates. Transport is modeled by introducing simple relaxation dynamics for the superconducting order parameter. Correlations between the Nernst signal and the diamagnetism have been observed in experiments. The Nernst signal is α_xy/σ_xx for systems with small values of the Hall angle and thermopower, as is the case for the cuprates over large parts of the phase diagram. We have argued here that the correlation between the Nernst signal and the magnetization arises primarily due to a correlation between α_xy and the magnetization in a model with only superconducting fluctuations since both quantities depend only on the strength of the fluctuations and not their dynamics. The relationship between α_xy and M is quantified by calculating the dimensionless ratio M/(T α_xy). This ratio has been calculated by other authors previously for a model of superconducting fluctuations in the XY limit of strong phase fluctuations and the Gaussian limit and found to be equal to 2 in both <cit.>. These correspond to high temperature limits T ≫ T_c for the underdoped and overdoped cuprates respectively. Here, we have calculated this ratio as a function of field, temperature and doping for the entire phase diagram and found deviations from the value of 2 in regions where the high temperature approximation does not apply.
α_xy calculated as a function of temperature, field and doping is shown is Figs. <ref>,<ref> alongside M. It can be seen that the dependence of both quantities on field and temperature is very similar for the entire range of doping. This has previously been demonstrated in certain limits for very underdoped and overdoped samples <cit.>. Our calculations agree with these previous results. On the underdoped side, our model reduces to a phase only model for a large range of temperatures for which the amplitude of the superconducting order parameter is effectively constant with no spatial or temporal fluctuations. This corresponds to the XY limit which was the subject of one of the aforementioned studies <cit.>. On the overdoped side, the strength of the fluctuations is weaker resulting in a smaller difference between T_c and T_c^MF. In this limit both phase and amplitude fluctuate together and cannot be disentangled from each other. The description of the physics of the system is thus in terms of fluctuations of the full order parameter. At high temperature, the system is in the Gaussian limit and our results agree with previous calculations of α_xy in low fields in this limit <cit.>. At higher fields too in the overdoped limit, our calculations agree with previous work <cit.>.
One of the new results of our work is that we have shown that one can smoothly interpolate between these previously studied limits by employing the free energy functional (<ref>) to calculate α_xy. As a result, we are able to directly show the connection not just between α_xy and M but also between these quantities and others whose nature is primarily determined by superconducting fluctuations, across the entire phase diagram. One of these quantities is the superfluid stiffness, the disappearance of which corresponds to the destruction of superconductivity at the transition temperature T_c. The correlation between α_xy and T_c can be seen in Fig. <ref> where curves of constant α_xy in the temperature and doping plane follow the superconducting dome for different values of the magnetic field. A similar correlation also exists between M and T_c, which we have shown in an earlier work <cit.>.
The ratio 𝐌/(T α_xy) is plotted in Fig. <ref> for different values of temperature, field and doping. It has been remarked earlier that this value has been shown to be equal to 2 at high temperature for the XY model <cit.> and in the limit of Gaussian fluctuations at low field <cit.>. Our model extrapolates to both limits for appropriate choices of parameters but we have to be careful in defining what we mean by high temperature. The XY limit is obtained when the separation between T_c^MF and T_c becomes large, which corresponds to underdoping. High temperature here means temperatures large compared to T_c but small compared to T_c^MF. This defines a fairly wide range of temperatures since the two scales are well separated. On the other hand, the Gaussian limit corresponds to a small separation between T_c and T_c^MF (overdoping) and high temperature here means temperatures large compared to both. It should be emphasized that there is a Gaussian regime for any value of doping for temperatures larger than T_c^MF. However, for underdoped systems, these temperatures are much higher than the ones at which experimental measurements are performed and are thus not relevant here. Optimally doped systems lie in neither regime and our work provides the first calculation of the ratio M/(T α_xy) for them. Even in the underdoped and overdoped regime, we calculate for the first time the ratio beyond the high temperature limits discussed above. It can be seen that M/(T α_xy) agrees with the previously obtained results mentioned above.
It is interesting to note that while M/(T α_xy) obtained from our simulations does deviate from the value of 2 at low temperatures (see Fig. <ref>), it attains this “high temperature” value even at temperatures comparable to T_c. In fact for the underdoped system, it does so even at temperatures lower than T_c. Thus, it appears that in so far as this quantity is concerned, the Gaussian regime (T ≫ T_c^MF) is not distinguishable from the strongly phase fluctuating regime. We emphasize that this does not imply that the two regimes are indistinguishable for each of the two quantities M and α_xy individually. Indeed, the temperature dependence of these two quantities at low field has been shown to be distinct in the two regimes <cit.> but their ratio appears to not make that distinction since the leading temperature dependence cancels between the numerator and the denominator. Thus, there does not seem to be a very clear distinction between the underdoped, optimally doped and overdoped systems, with the temperature scale for the ratio being set only by T_c regardless of whether T_c^MF is in its vicinity. We note that the value of M/(T α_xy) appears to be less than 2 at high temperature for the lowest fields.
This could be an artifact of high noise levels in this regime and a higher precision calculation (which would be fairly time consuming) may yield a value equal to 2.
The utility of our calculation is in identifying the correlation between the magnetization M and α_xy. For a superconducting system, a strong diamagnetic signal, even above T_c, is typically due to superconducting fluctuations as opposed to other excitations like quasiparticles <cit.>. However, the Nernst signal can have substantial contributions from these other excitations in addition to those from superconducting fluctuations. In fact, the role of quasiparticles in the observed large Nernst effect of the cuprates has been discussed extensively in Refs. <cit.>. Our calculation provides a method for determining the extent of the contribution of superconducting fluctuations to the observed Nernst signal through the ratio M/(T α_xy). If the observed ratio is close to the predictions from our model then superconducting fluctuations are chiefly responsible for the Nernst effect in the particular regime of temperature, field and doping. We would like to emphasize again that the relevant transport quantity in our calculation is α_xy and not the Nernst signal ν. Experimentally, obtaining α_xy requires a concurrent measurement of the Nernst effect and the magnetoconductance. It is also possible that features in the Nernst effect unconnected to superconducting fluctuations, and hence the magnetization, arise due to the behavior of the magnetoconductance and not α_xy. An analysis of these features is beyond the scope of a calculation like ours.
To summarize, we have studied the Nernst effect in fluctuating superconductors by calculating the transport coefficient α_xy. We have employed a phenomenological model of superconducting fluctuations in the cuprates, which allows us to calculate α_xy and the magnetization M over the entire range of experimentally accessible values of field, temperature and doping. We have found fairly good agreement with experimental data, wherever available and previous theoretical calculations in specific regimes of the parameters. We have argued that α_xy and M are both determined by the equilibrium properties of the superconducting fluctuations (and not their dynamics) despite the former being a transport quantity. Consequently, there exists a dimensionless ratio M/(T α_xy) that quantifies the relation between the two quantities. We have calculated this ratio over the entire phase diagram of the cuprates and found that it agrees with previously obtained results. Further, it appears that there is no sharp distinction between phase fluctuations and Gaussian fluctuations for this ratio even though there is for α_xy and M individually. The utility of this ratio is that it can be used to determine the extent to which superconducting fluctuations contribute to the Nernst effect in different parts of the phase diagram given the measured values of magnetization.
§ ACKNOWLEDGEMENTS
K.S. would like to thank CSIR (Govt. of India) and S.M. thanks the DST (Govt. of India) for support. T.V.R. acknowledges the support of the DST Year of Science Professorship, and the hospitality of the NCBS, Bangalore. The authors would like to thank Subhro Bhattacharjee for many stimulating comments and discussions.
§ THE FREE ENERGY FUNCTIONAL
The functional form in the absence of a gauge field is defined as
ℱ_0({Δ_m})=∑_m (AΔ_m^2 + B/2Δ_m^4),
ℱ_1({Δ_m,ϕ_m})=-C ∑_⟨ mn⟩Δ_m Δ_n cos(ϕ_m-ϕ_n),
where the pairing field ψ_m=Δ_m exp(iϕ_m) is defined on the sites m of the square lattice with phase ϕ_m and amplitude Δ_m. ⟨ mn⟩ denotes nearest neighbour site pairs.
The coefficients A, B and C are given a phenomenological doping (x) and temperature (T) dependence motivated by cuprate experiments, with dimensionless numbers f, b, c and a temperature scale T_0, and are parametrized as A(x,T)= (f/T_0)^2[T-T^*(x)]e^T/T_0, B=bf^4/T_0^3 and C(x)=xcf^2/T_0 <cit.>. The quadratic term coefficient A is proportional to (T-T_lp) where T_lp is the local pairing temperature scale, and in our theory we identify it to be the pseudogap temperature scale T^* <cit.>. Cooling down from above T^*, the pairing scale ⟨Δ_m⟩ increases with noticeable change in magnitude <cit.> while A changes sign. Across the phase diagram T^* is considered to vary with doping concentration x in a simplified linear form T^*(x)=T_0(1-x/x_c) with T_0≃ 400 K at zero doping and vanishing at a doping concentration x_c=0.3. The exponential factor e^T/T_0 suppresses the average local gap magnitude ⟨Δ_m⟩ at high temperatures (T ≫ T^*(x)) with respect to its temperature-independent equipartition value √(T/A(x,T)), which would result from the simplified form of the functional (Eq.(<ref>)) being used over the entire range of temperature. In the range of temperature of our study the role of this factor is not very crucial; for a detailed discussion see Ref. <cit.>. The parameter B is chosen as a doping-independent positive number and the form of C is chosen to be proportional to x for small doping. The reason for such a choice can be understood from the Uemura correlations <cit.> where the superfluid density ρ_s ∝ x in the underdoped region of the cuprates. Further elaborate details about the functional and coefficients can be found in the appendix of Refs. <cit.>.
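For concreteness, this parametrization is straightforward to encode; a minimal Python sketch, with placeholder values for the dimensionless numbers f, b, c (the fitted values are those of the references, not the ones below):

import numpy as np

T0 = 400.0                 # pairing temperature scale at zero doping (K)
xc = 0.3                   # doping at which T*(x) vanishes
f, b, c = 1.0, 1.0, 1.0    # placeholder dimensionless numbers (not the fitted values)

def T_star(x):
    """Pseudogap / local-pairing temperature scale T*(x)."""
    return T0 * (1.0 - x / xc)

def A(x, T):
    """Quadratic GL coefficient; changes sign at T*(x)."""
    return (f / T0)**2 * (T - T_star(x)) * np.exp(T / T0)

B = b * f**4 / T0**3       # doping-independent quartic coefficient

def C(x):
    """Nearest-neighbour coupling, proportional to x (Uemura correlations)."""
    return x * c * f**2 / T0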
§ MORE ON TRANSPORT CURRENTS, COEFFICIENTS AND MAGNETIZATION
The Nernst effect is the off-diagonal component of the thermopower tensor Q̂, measured in the absence of electrical currents
J_tr= σ E + α(-∇ T)
where J_tr is transport current, E is the electric field and ∇ T is the temperature gradient.
Q̂=σ̂^-1α̂ is the thermopower tensor.
Here
σ̂=[ σ_xx σ_xy; σ_yx σ_yy ] and α̂=[ α_xx α_xy; α_yx α_yy ].
For an isotropic system, σ_xx=σ_yy and α_xx=α_yy. Further, σ_xy=-σ_yx and α_xy=-α_yx.
Therefore the thermopower tensor
Q̂ = σ^-1α
= 1/(σ_xx^2+σ_xy^2)[ σ_xx -σ_xy; σ_xy σ_xx ][ α_xx α_xy; -α_xy α_xx ]
The Nernst coefficient
Q_xy=-Q_yx=(α_xyσ_xx-σ_xyα_xx)/(σ_xx^2+σ_xy^2)≈α_xy/σ_xx-S tanΘ_H,
where Θ_H=tan^-1(σ_xy/σ_xx) is the Hall angle, S (=Q_xx=Q_yy) is the thermopower, and the last (approximate) form holds for small Hall angle.
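As a quick numerical consistency check of the tensor algebra above (with arbitrary stand-in values for the components, chosen only for illustration):

import numpy as np

# Arbitrary stand-in values for an isotropic system with a small Hall angle.
sxx, sxy = 2.0, 0.3          # conductivity components
axx, axy = 0.5, 0.1          # thermoelectric components

sigma = np.array([[sxx, sxy], [-sxy, sxx]])
alpha = np.array([[axx, axy], [-axy, axx]])

Q = np.linalg.inv(sigma) @ alpha
Q_xy = (axy * sxx - sxy * axx) / (sxx**2 + sxy**2)
assert np.isclose(Q[0, 1], Q_xy)

# Small-Hall-angle form; agrees with Q_xy up to a factor cos^2(Theta_H) ~ 1.
S = axx / sxx
theta_H = np.arctan(sxy / sxx)
print(Q_xy, axy / sxx - S * np.tan(theta_H))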
Let J_tot^e( r), J_tot^Q( r) and J_tot^E( r) be the total charge, heat and energy current densities at position r in the sample. Each of these current densities is a sum of a transport part and magnetization part. The latter exists even in equilibrium and needs to be subtracted to obtain the transport contributions.
If Φ( r) is the electric potential at r, these currents are related to each other as
J_ tot^Q( r)= J^E_ tot( r)-Φ( r) J_ tot^e (r)
The transport part of the current densities have a similar relation
J_ tr^Q( r)= J^E_ tr( r)-Φ( r) J_ tr^e (r)
The charge and energy magnetization densities M^e( r) and M^E(r) are related with their respective current counterparts such that <cit.>
J^e_ mag( r) = ∇× M^e( r)
J^E_ mag( r) = ∇× M^E( r) .
If the surrounding material is non-magnetic, both M^e( r) and M^E( r) vanish outside the material. Therefore integrating over the sample area S_A and averaging
J̅^e_ tr=1/S_A∫_S_A J^e_ tr( r)dS_A = 1/S_A∫_S_A J^e_ tot( r)dS_A
J̅^E_ tr=1/S_A∫_S_A J^E_ tr( r)dS_A = 1/S_A∫_S_A J^E_ tot( r)dS_A .
Utilizing the above relations and Eqs. (<ref>), (<ref>), we get
J̅^Q_ tr = 1/S_A(∫_S_A J^E_ tot( r)dS_A - ∫_S_AΦ( r) J^e_ tr( r)dS_A) .
and
J_ tot^Q( r)= J^Q_ tr( r)+ J_mag^E ( r)-Φ( r)(∇× M^e)
Using the identity ∇×(Φ M^e)=∇Φ× M^e + Φ(∇× M^e), this reduces to
J_ tot^Q( r)= J^Q_ tr( r)+ ∇Φ( r)× M^e+∇×( M^E-Φ( r) M^e)
We note that there is no heat magnetization density M^Q( r) such that J^Q_mag( r) = ∇× M^Q( r). In fact,
J_mag^Q( r)=∇Φ( r)× M^e+∇×( M^E-Φ( r) M^e)
and therefore
J̅^Q_ tr = 1/S_A∫_S_A( J^Q_ tot( r) - M^e× E) dS_A
and for M=Mẑ and E=Ex̂ we obtain
α̃_yx=J̅^Q(y)_ tr/E=J̅^Q(y)_ tot/E-M
§ HEAT AND CHARGE CURRENT EXPRESSIONS FOR CONTINUUM AND LATTICE MODELS
The expressions of charge and heat current <cit.> for a continuum Ginzburg-Landau theory
J^e_GL=-i C_02π/Φ_0⟨Ψ^*(∇-i2π/Φ_0 A)Ψ⟩ +c.c.
J^Q_GL = - C_0 ⟨ (∂/∂ t-i2π/Φ_0Φ)Ψ^*(∇ - i2π/Φ_0 A) Ψ⟩ + c.c.
with C_0=ħ^2/2m^* and ⟨...⟩ stands for thermal averages.
For the lattice model given by Eq. <ref>, the heat current between sites m and n is obtained by taking into account a contribution J^E_m→ n from site m to n and its reverse J^E_n→ m, and antisymmetrizing them as
J^ Q=1/2( J^ E_m→n-J^ E_n→m)+M_z(E×ẑ)
where J^ E_m→ n=-C/2{∂ψ^*_m/∂ t√(ψ_m/ψ^*_m)|ψ_n|e^iω_m,n+c.c.} with ω_m,n=ϕ_m-ϕ_n-∫_m^n A.d r is a gauge invariant quantity. The charge current expression is J^ e=2π/Φ_0CΔ_mΔ_nsin(ϕ_m-ϕ_n-A_mn)
For an XY model described by the Hamiltonian, ℋ_XY=-J∑_<mn>cos(ϕ_m-ϕ_n-A_mn), J being the XY coupling, the heat and charge current expressions <cit.> are
J^ e_XY=Jsin(ϕ_m-ϕ_n-A_mn)
J^ Q_XY=-J/2(ϕ̇_m+ϕ̇_n)sin(ϕ_m-ϕ_n-A_mn)+M_z(E×ẑ)
One can verify that the frozen amplitude limit of both charge and heat current expressions of our lattice model reduces to these expressions.
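For reference, the XY-limit bond currents translate directly into code; a minimal sketch (the magnetization-current term M_z(E×ẑ) of the heat current is omitted here, and the phases and their time derivatives would come from the relaxational dynamics of the simulation):

import numpy as np

J = 1.0   # XY coupling

def charge_current(phi_m, phi_n, A_mn):
    """Bond charge current J sin(phi_m - phi_n - A_mn) of the XY model."""
    return J * np.sin(phi_m - phi_n - A_mn)

def heat_current(phi_m, phi_n, dphi_m, dphi_n, A_mn):
    """Bond heat current -(J/2)(dphi_m + dphi_n) sin(phi_m - phi_n - A_mn)."""
    return -0.5 * J * (dphi_m + dphi_n) * np.sin(phi_m - phi_n - A_mn)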
§ EFFECTIVE XY-MODEL
On the underdoped side, where T^*=T_c^MF≫ T_c, we can integrate out the amplitude Δ_m of the pair degrees of freedom ψ_m to obtain an effective action ℱ_XY only in terms of the phase.
e^-βℱ_XY({ϕ_m})=∫_0^∞∏_m(Δ_m dΔ_m)e^-βℱ_0({Δ_m}) e^-βℱ_1({Δ_m,ϕ_m})/∫_0^∞∏_m(Δ_m dΔ_m) e^-βℱ_0({Δ_m})=⟨exp(-βℱ_1 )⟩_0
In the above, we make use of the cumulant expansion i.e.
⟨exp(-βℱ_1 )⟩_0=exp{-β⟨ℱ_1⟩_0 + β^2/2(⟨ℱ_1^2⟩_0-⟨ℱ_1⟩_0^2)+...},
where ⟨...⟩_0 denotes the thermal average obtained using ℱ_0 only, to obtain
ℱ_XY({ϕ_m})=-C∑_<m n>⟨Δ_mΔ_n⟩_0cos(ϕ_m-ϕ_n)
-β C^2/2∑_<m n>,<l k>cos(ϕ_m-ϕ_n)cos(ϕ_l-ϕ_k)[⟨Δ_mΔ_nΔ_lΔ_k⟩_0-⟨Δ_m Δ_n⟩_0⟨Δ_lΔ_k⟩_0]
+ higher order terms
By neglecting the fluctuations of amplitudes and retaining just the first of the above expression, an effective XY model is obtained, i.e.
ℱ_XY[ϕ_m]=-CΔ̅^2∑_<mn>cos(ϕ_m-ϕ_n)
with Δ̅^2 =(∫_0^∞Δ^3 exp[-β(AΔ^2+B/2Δ^4)]dΔ)/(∫_0^∞Δ exp[-β(AΔ^2+B/2Δ^4)]dΔ).
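The frozen amplitude Δ̅^2 is a one-dimensional quadrature and is immediate to evaluate; a minimal sketch:

import numpy as np
from scipy.integrate import quad

def delta_bar_sq(A, B, beta):
    """Amplitude entering the effective XY coupling (B > 0 for convergence)."""
    w = lambda d: np.exp(-beta * (A * d**2 + 0.5 * B * d**4))
    num, _ = quad(lambda d: d**3 * w(d), 0.0, np.inf)
    den, _ = quad(lambda d: d * w(d), 0.0, np.inf)
    return num / den

print(delta_bar_sq(A=-1.0, B=1.0, beta=1.0))   # below T*: A < 0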
Palstra_1990T. T. M. Palstra, B. Batlogg, L. F. Schneemeyer, and J. V.Waszczak, Phys. Rev. Lett. 64, 3090 (1990).
Ong_2000 Z. A. Xu, N. P. Ong, Y. Wang, T. Kakeshita, and S. Uchida, Nature (London) 406, 486 (2000).
Ong_2006 Y. Wang, L. Li and N. P. Ong, Phys. Rev. B 73, 024510 (2006).
Pourret_2006 A. Pourret, H. Aubin, J. Lesueur, C. A. Marrache-Kikuchi, L. Berge, L. Dumoulin, and K. Behnia Nat. Phys. 2, 683 (2006)
Sondheimer_1948 E. H. Sondheimer, Proc. R. Soc. A 193, 484 (1948).
Bel_2004 R. Bel, K. Behnia, Y. Nakajima, K. Izawa, Y. Matsuda, H. Shishido, R. Settai, and Y. Ōnuki, Phys. Rev. Lett. 92, 217002 (2004).
Luo_2016 Luo et al. Phys Rev B 93, 201102(R) (2016)
Ussishkin I. Ussishkin, S. L. Sondhi and D. A. Huse, Phys. Rev. Lett. 89, 287001 (2002).
Mukerjee S. Mukerjee and D. A. Huse, Phys. Rev. B 70, 014506 (2004).
Rosenstein_2009 B. D. Tinh and B. Rosenstein, Phys Rev B, 79 024518 (2009)
Tinh_2014 B. D. Tinh, N. Q. Hoc, and L. M. Thu Eur. Phys. J. B (2014) 87: 284
Orgad1_2014 G. Wachtel and D. Orgad, Phys. Rev. B 90, 184505 (2014)
Orgad2_2014 G. Wachtel and D. Orgad, Phys. Rev. B 90, 224506 (2014)
Orgad_2015 G. Wachtel and D. Orgad, Phys. Rev. B 91, 014503 (2015)
Podolsky D. Podolsky, S. Raghu and A. Vishwanath, Phys. Rev. Lett. 99, 117004 (2007).
Banerjee_1 S. Banerjee, T. V. Ramakrishnan, C. Dasgupta, Phys. Rev. B 83, 024510 (2011).
Banerjee_2 S. Banerjee, T. V. Ramakrishnan, C. Dasgupta, Phys. Rev. B 84, 144535 (2011).
Sarkar_2016 K. Sarkar, S. Banerjee, S. Mukerjee, T. V. Ramakrishnan, Ann. Phys, 365, (2016)
Sachdev_2010 A. Hackl, M. Vojta, and S. Sachdev, Phys Rev B 81, 045102 (2010)
Ong_2001 Y. Wang, Z. A. Xu, T. Kakeshita, S. Uchida, S. Ono, Y. Ando, and N. P. Ong, Phys. Rev. B 64,224519 (2001)
Taillefer_2010 O. Cyr-Choiniere et al., Nature (London) 458, 743 (2009); J. Chang
et al., Phys. Rev. Lett. 104, 057005 (2010); R. Daou et al., Nature (London) 463, 519 (2010)
Varlamov_2011 A. Levchenko, M. R. Norman, and A. A. Varlamov Phys Rev B 83, 020506(R) (2011)
Ong_2005 Y. Wang, L. Li, M. J. Naughton, G. D. Gu, S. Uchida and N. P. Ong, Phys. Rev. Lett. 95, 247002 (2005).
Li_2007 L. Li, J. G. Checkelsky, S. Komiya, Y. Ando, and N. P. Ong, Nat. Phys. 3, 311 (2007).
Li_2010 L. Li, Y. Wang, S. Komiya, S. Ono, Y. Ando, G. D. Gu and N. P. Ong, Phys. Rev. B 81, 054510 (2010)
Xiao Xiao et al. Phys. Rev. B, 90, 214511 (2014)
Raghu S. Raghu, D. Podolsky, A. Vishwanath, and David A. Huse Phys. Rev. B 78, 184520 (2008)
Caroli C. Caroli and K. Maki, Phys. Rev. 164, 591 (1967)
UD_1991 S. Ullah and A. T. Dorsey, Phys. Rev. B 44, 262 (1991)
A_Schmid A. Schmid, Phys. Kondens. Mat. 5, 302 (1966)
Cooper_1997 N. R. Cooper, B. I. Halperin, and I. M. Ruzin, Phys. Rev. B 55, 2344 (1997)
Ghosal_2007 A. Ghosal, P. Goswami, and S. Chakravarty, Phys. Rev. B 75, 115123 (2007).
Timsuk_1999 T. Timsuk and B. Statt, Rep. Prog. Phys. 62, 61 (1999)
Uemura1989 Y. J. Uemura et al., Phys. Rev. Lett. 62, 2317 (1989).
|
http://arxiv.org/abs/1701.08102v3 | 20170127162617 | Spacetime Spin and Chirality Operators for Minimal 4D, $\cal N$ = 1 Supermultiplets From BC${}_4$ Adinkra-Tessellation of Riemann Surfaces | [
"S. James Gates Jr"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/1701.07891v1 | 20170126222222 | Non-intuitive Computational Optimization of Illumination Patterns for Maximum Optical Force and Torque | [
"Yoonkyung E. Lee",
"Owen D. Miller",
"M. T. Homer Reid",
"Steven G. Johnson",
"Nicholas X. Fang"
] | physics.optics | [
"physics.optics"
] |
Non-intuitive Computational Optimization of Illumination Patterns for Maximum Optical Force and Torque
Yoonkyung E. Lee,1 Owen D. Miller,2,3 M. T. Homer Reid,2 Steven G. Johnson,2
and Nicholas X. Fang1
========================================================================================================
1Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
2Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
3Currently with the Department of Applied Physics, Yale University, New Haven, Connecticut 06520, USA
*nicfang@mit.edu
This paper aims to maximize optical force and torque on arbitrary
micro- and nano-scale objects using numerically optimized structured
illumination. By developing a numerical framework for computer-automated
design of 3d vector-field illumination, we demonstrate a 20-fold enhancement
in optical torque per intensity over circularly polarized plane wave
on a model plasmonic particle. The nonconvex optimization is efficiently
performed by combining a compact cylindrical Bessel basis representation
with a fast boundary element method and a standard derivative-free,
local optimization algorithm. We analyze the optimization results
for 2000 random initial configurations, discuss the tradeoff between
robustness and enhancement, and compare the different effects of multipolar
plasmon resonances on enhancing force and torque. All results are
obtained using open-source computational software available online.
OCIS codes: (350.4855) Optical tweezers or optical manipulation; (140.7010) Laser trapping; (090.1760) Computer holography; Diffraction and gratings: optical vortices; (250.5403) Plasmonics; (290.2200) Extinction
§ INTRODUCTION
We show how large-scale computational optimization <cit.>
can be used to design superior and non-intuitive structured illumination
patterns that achieve 20-fold enhancements (for fixed incident-field
intensity) of the optical torque on sub-micron particles, demonstrating
the utility of an optimal design approach for the many nanoscience
applications that rely on optical actuation of nanoparticles <cit.>.
Recent advances in nanoparticle engineering <cit.>
and holographic beam-generation via spatial light modulators (SLMs)
<cit.>
and other phase-manipulation techniques<cit.>
have created many new degrees of freedom for engineering light–particle
interactions beyond traditional optical tweezers. Enhanced and unusual
optical forces and torques can be engineered by designing material
objects <cit.>
and/or structured illumination, with the latter including “tractor
beams” <cit.> and beams
carrying optical angular momentum <cit.>.
These increased degrees of freedom pose an interesting design challenge:
for a given target object, what is the optimal illumination pattern
to produce the strongest optical force or torque? While a small number
of Gaussian beam parameters can be calibrated for optimal
performance by manual trial-and-error <cit.>,
an arbitrary 3D vector field requires a more targeted approach. Moreover,
optimization of a 3d vector field is highly nonconvex by nature, and
possesses many local optima due to wave interference and resonance.
When exploring so many parameters, a large number of scattering problems
must be solved efficiently, which requires careful design of the optimization
framework.
Research in computational optimization of optical actuation has focused
on the design of new material geometries <cit.>
and on the improvement of multiplexed optical traps for microscale
dielectric particles (holographic optical tweezers) <cit.>.
However, no computational method has been available to design structured
illumination for unconventional target objects that are nonspherical,
lossy, or nanometer-scale, thereby requiring a costly full-wave numerical
simulation for computing optical force and torque.
We present a compact and rapid computational framework to optimize
structured illumination for the mechanical actuation of an arbitrary
target object. We combine (i) a compact Bessel-basis representation
(Sec.<ref>); (ii) a numerical
solver based on boundary element method (BEM) that discretizes the
surfaces of a 3d scattering problem to form a BEM matrix, and solves
hundreds of thousands of incident-field configurations using the same
matrix (Sec.<ref>); (iii) an appropriate
optimization algorithm that exploits the smoothness of the nonconvex
and nonlinear optimization problem (Sec.<ref>);
and (iv) a suitable figure of merit (FOM) and optimization constraints
(Sec.<ref>). As a result, we rapidly attain
many-fold improvements in optical forces and torques over random field
or plane wave illuminations (Sec.<ref>),
and discuss the tradeoff between enhancement and robustness of optimization
(Sec.<ref>). Furthermore, a given material object
may have scattering resonances at various frequencies, and the choice
of frequency for the incident field has several important implications.
When comparing interactions with two different resonances, we were
able to distinguish the impact of the change in the resonant field
pattern from the change in the resonance lifetime. Controlling for
the change in lifetime, we found that torque seems to favor higher-order
(e.g. quadrupole) resonances with greater angular momentum,
while force seems to favor lower-order (e.g. dipole) resonance
with greater field intensity within the particle (Sec.<ref>).
§ OPTIMIZATION FRAMEWORK
A structured-illumination optimization aims to find the best 3d vector
field, i.e., the one that maximizes a desired FOM for a given scattering
problem (see Fig.<ref>). Our choice of the Bessel
basis expansion is described in Sec.<ref>.
Sec.<ref> and Sec.<ref>
discuss the BEM solver and the optimization algorithm. Lastly, Sec.
<ref> explains force and torque FOMs and the
corresponding optimization constraints. The entire numerical framework
is implemented with open-source computational software available online.
§.§ Analytical Representation of Structured Illumination
The computational design of structured illumination requires a compact
analytical representation of an arbitrary 3d vector field. The vector
field will contain spatial variations in both intensity and phase,
and must satisfy the vector wave equation <cit.>.
The electric field 𝐄 can be represented using a basis
expansion 𝐄=∑_i=0^Nc_iϕ_i,
where the complex scalar coefficient c_i determines the relative
intensity and phase of each mode ϕ_i.
The choice of coordinates and the basis functions ϕ_i
depends on the problem geometry. While the spherical coordinate system
is a common choice in Mie scattering <cit.>,
it requires a very large number of modes N to represent light propagating
along a linear axis and potentially interacting with flat substrates
or SLMs. The cartesian coordinate system is also ill-suited because
it requires large N to describe laser beams with a finite radius.
We find that the cylindrical coordinate system <cit.>
is well suited, requiring a small number of modes to describe structured
illumination with varying distributions of linear and angular momentum.
Among the wide menu of cylindrical basis functions (e.g., Bessel,
Laguerre-Gaussian, Hermite-Gaussian, and so forth) <cit.>,
we choose the Bessel basis <cit.>
for its compact analytical expression, derived from the scalar generating
function
ψ_m(r,φ,z)=J_m(k_tr)exp(imφ+ik_zz),
where J_m is the mth-order Bessel function, k_t is the transverse
wavevector in r̂, and k_z is the longitudinal wavevector
in ẑ, satisfying k_t^2+k_z^2=(2π/λ)^2. The
ratio between k_t and k_z specifies the numerical aperture
of the basis, NA=tan^-1(k_t/k_z). Higher NA
represents greater transverse momentum that can increase optical torque.
But the permissible range of NA is often dictated by experimental
considerations, and the range of NA in the optimization can
be set accordingly.
Taking spatial derivaties of Eq.(<ref>) gives
𝐌_i =∇×(ψ_i𝐮_z), (azimuthal polarization)
𝐍_i =1/k∇×𝐌_i, (radial polarization)
where 𝐮_z is the unit vector in ẑ, and 𝐌_i
and 𝐍_i are the ith bases for azimuthal and radial
polarizations, respectively (right inset of Fig.<ref>).
The incident electric field 𝐄_inc can be expressed
as:
𝐄_inc(r,ϕ,z)=∑_i=0^Na_i𝐌_i+b_i𝐍_i,
where a_i and b_i are the complex scalar coefficients. The
Bessel basis produces the most compact expressions for 𝐌_i
and 𝐍_i because the magnitude of ψ does not vary
with z, reducing ∂/∂ z terms in Eqs.(<ref>,<ref>).
Note that we intentionally decouple our optimization framework from
the idiosyncratic differences in the spatial resolution of the SLMs.
A wide variety of experimental methods (e.g., superposed pitch-fork
holograms) <cit.> can be used to generate
beams expressed as Eq.(<ref>), for a finite N and NA.
In this paper, we consider numerical apertures with opening angles
≤10^∘ and 12 basis functions (N=5).
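For concreteness, the mode functions can be evaluated directly; a minimal sketch (our own illustration, not the paper's implementation), valid for r>0, with the closed-form N components obtained from the curl using the Helmholtz relation ∇^2ψ=-k^2ψ:

import numpy as np
from scipy.special import jv, jvp

def bessel_modes(m, kt, kz, r, phi, z):
    """Cylindrical (r, phi, z) components of the azimuthal (M_m) and
    radial (N_m) Bessel modes generated by psi_m; valid for r > 0."""
    k = np.hypot(kt, kz)
    phase = np.exp(1j * (m * phi + kz * z))
    psi = jv(m, kt * r) * phase
    dpsi_dr = kt * jvp(m, kt * r) * phase
    M = np.array([1j * m / r * psi, -dpsi_dr, 0.0 * psi])   # curl(psi z_hat)
    N = np.array([1j * kz / k * dpsi_dr,                    # (1/k) curl M,
                  -m * kz / (k * r) * psi,                  # using the
                  kt**2 / k * psi])                         # Helmholtz equation
    return M, N

# Sample superposition at one point (arbitrary kt, kz and coefficients):
M0, N0 = bessel_modes(0, 1.0, 5.0, r=0.5, phi=0.0, z=0.0)
M1, N1 = bessel_modes(1, 1.0, 5.0, r=0.5, phi=0.0, z=0.0)
E = 0.7 * M0 + 0.3j * N1   # two terms of E_inc = sum_i a_i M_i + b_i N_i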
§.§ Numerical Solver
The optimization process itself has no restrictions on the choice
of the numerical solver, so the biggest consideration is the computational
cost: the smaller, the better. We choose the Boundary Element Method
(BEM) <cit.> for several
reasons. In comparison to other scattering methodologies such as the
finite-difference or finite-element methods, BEM is particularly well-suited
to the type of large-scale optimization problem requiring a rapid
update of the incident field for a given geometry and a given wavelength
λ_opt.
M𝐜=𝐟,
where 𝐜 is the vector of induced surface-current coefficients.
In Eq.(<ref>), the BEM matrix M remains fixed for
a given geometry and frequency, while the column 𝐟 representing
the incident field is rapidly updated at each step of the optimization
process. This allows hundreds of thousands of scattering configurations
to be computed on the order of a few hours. In addition, BEM projects
the 3d scattering problem onto a 2d surface mesh, thereby reducing
the computation volume by a factor of 1/2000 in the nanoparticle scattering
problem we consider. Lastly, recent improvements <cit.>
have significantly increased the speed with which optical force and
torque can be computed in BEM.
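The factorize-once, solve-many structure of the linear system above can be mimicked with a toy stand-in (a random, well-conditioned matrix in place of the assembled BEM matrix):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500                                           # surface-mesh unknowns
M = rng.standard_normal((n, n)) + n * np.eye(n)   # stand-in for the BEM matrix

lu, piv = lu_factor(M)                            # O(n^3), once per geometry/wavelength

for step in range(10_000):                        # optimization steps
    f = rng.standard_normal(n)                    # new incident-field column
    c = lu_solve((lu, piv), f)                    # O(n^2) back-substitution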
§.§ Optimization Algorithm
Structured-illumination optimization is nonlinear and nonconvex, such
that searching for a global optimum is prohibitively expensive. Therefore
we choose a local algorithm with random starting points. We choose
one of the simplest solutions available: constrained optimization
by linear approximation (COBLYA) <cit.>, a derivative-free
algorithm that exploits the smoothness of the problem. An open-source
implementation of COBYLA is available through NLopt <cit.>.
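For illustration, the wiring of one local run through NLopt's Python interface might look as follows; the objective and constraint bodies are toy stand-ins for the BEM-evaluated FOM_T and the off-axis constraint of the next subsection:

import numpy as np
import nlopt

n = 24   # real and imaginary parts of the 12 complex coefficients (N = 5)

def objective(x, grad):            # grad unused: COBYLA is derivative-free
    return -np.sum((x - 0.3)**2)   # toy stand-in for the BEM-evaluated FOM_T

def off_axis(x, grad):             # toy stand-in; must satisfy off_axis(x) <= 0
    return np.sum(x[:3]**2) - 0.01

opt = nlopt.opt(nlopt.LN_COBYLA, n)
opt.set_max_objective(objective)
opt.add_inequality_constraint(off_axis, 1e-8)
opt.set_xtol_rel(1e-4)

x0 = np.random.uniform(-1.0, 1.0, n)   # random starting configuration
x_opt = opt.optimize(x0)
print(opt.last_optimum_value())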
§.§ Figure of Merit and Optimization Constraints for Optical Force and
Torque
We consider two types of optical actuation with respect to the object
coordinate (left inset of Fig.<ref>); the force F_z
and torque T_z. To prevent the optimizer from increasing
the brightness of the beam indefinitely, we choose to divide the force
and torque by the average incident-field intensity on the particle
surface (I_avg=| E_inc|^2/2Z_0,
where Z_0 is the impedance of free space), which is easily computed
in BEM. We choose the incident-field rather than the total-field intensity
to avoid penalizing high extinction efficiency. I_avg
is measured on the particle surface because we want to account for
the portion of the beam that interacts with the target particle, rather
than the entire beam. We choose nondimensionalized figures of merit:
FOM_F = F_z/I_avg·(π c/3λ^2),
FOM_T = T_z/I_avg·(4π^2c/3λ^3),
where the constants in parentheses reflect ideal single-channel scattering.
The largest <cit.>
scattering cross-section into a single (spherical harmonic) channel
is 3λ^2/2π, which when multiplied by single-photon changes
in linear (2ħ k) and angular (ħ) momentum per photon,
divided by the photon energy (ħω), yields the constants
in Eqs.(<ref>,<ref>).
Optimization constraints can be added to suppress actuation in undesired
directions. We suppress actuations in directions other than ẑ
using smooth constraints: (|𝐅|^2-F_z^2)/|𝐅|^2≤0.01
and (|𝐓|^2-T_z^2)/|𝐓|^2≤0.01,
where the limiting value 0.01 is set to ensure that F_z and T_z
exceed 99% of the force and torque magnitudes |𝐅|
and |𝐓|, respectively.
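In code, the two figures of merit and the constraint values above translate directly; a sketch assuming SI units for F_z, T_z, I_avg and λ:

import numpy as np

c0 = 299_792_458.0   # speed of light (m/s)

def fom_force(Fz, I_avg, lam):
    """Axial force per intensity, normalized to ideal single-channel transfer."""
    return Fz / I_avg * (np.pi * c0 / (3.0 * lam**2))

def fom_torque(Tz, I_avg, lam):
    """Axial torque per intensity, normalized analogously."""
    return Tz / I_avg * (4.0 * np.pi**2 * c0 / (3.0 * lam**3))

def off_axis_fractions(F, T):
    """Constraint values; each must be <= 0.01 so that F_z^2, T_z^2 carry
    at least 99% of |F|^2, |T|^2."""
    F, T = np.asarray(F), np.asarray(T)
    F2, T2 = F @ F, T @ T
    return (F2 - F[2]**2) / F2, (T2 - T[2]**2) / T2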
§ RESULTS AND DISCUSSION
We demonstrate our illumination-field optimization framework on the
gold nanotriangle illustrated in Fig.<ref>. Our previous
work <cit.> analyzes the optical force and torque
on such a particle for circularly-polarized (CP) planewave illumination.
CP planewave is a common incident-field choice <cit.>
for torque generation due to its intrinsic spin angular momentum,
but we find in our computational optimization that highly optimized
field patterns can show 20x improvement of FOM_T. The
wavelength of illumination in each optimization, λ_opt,
is chosen to correspond to the plasmonic resonance wavelengths of
the model particle.
Sec.<ref> presents the distribution
of 2000 local-optimization results and discusses the optimized field-patterns.
Sec.<ref> analyzes the wavelength-dependence of optical
force and torque for optimized illuminations, based on the choice
of λ_opt, and compares the results with the reference
force and torque from CP planewave. Lastly, the tradeoff between robustness
and enhancement is discussed in Sec.<ref>.
§.§ Illumination-field Optimization from 2000 Randomly Selected Initial
Configurations
The illumination-field design space is nonconvex and littered with
local optima, due primarily to wave-optical interference effects.
We survey this broader design space by restarting our local-optimization
algorithm 2000 times with randomly selected initial configurations
that are constructed using Eq.(<ref>), where the complex
coefficients a_i and b_i are uniform random numbers bounded
by | a_i|,| b_i|≤1. The results are summarized
in Figs.<ref>-<ref>
at λ_opt=1028nm (dipole resonance) and 625nm
(quadrupole resonance), respectively. At both wavelengths, we find
that more than 50% of local optimizations from random starting points
can achieve over 5x enhancement of FOM_T compared to CP
planewave reference, and that the optimized field patterns contain
various combinations of Bessel-basis modes without a systematic convergence
to one over the others. The distributions are plotted in log-scale
to increase the visibility of small bins.
In Fig.<ref>, the median FOM_T
at 0.913 is very close to the best FOM_T at 1.01 and the
distribution is predominantly concentrated to the right: the 4 rightmost
bars represent 69% of all samples, which all achieve over 5x enhancement
of FOM_T compared to CP planewave reference at 0.169 (marked
with a red triangle). The insets show the optimization results from
two different starting points – random field (top) and CP planewave
(bottom) – that reach similar optimized field patterns. We also observe
that a variety of other patterns, dominated by different combinations
of Bessel-basis modes, can produce a nearly identical or superior
FOM.
In Fig.<ref>, the final FOM_T
distribution is more dispersed between the median at 3.934 and the
best at 10.94, which respectively achieve over 5x and 14x enhancement
compared to CP planewave reference at 0.779. As in Fig.<ref>,
the results concentrate heavily around the median; however, in Fig.<ref> a small number of samples achieve a remarkable
improvement above 14-fold. The top inset shows four different field
patterns that produce a nearly identical FOM_T above the
median, and the bottom inset shows the field-pattern with the highest
FOM_T. A comparison of all optimized field patterns at
1028nm and 625nm shows that the latter contains more higher-order
Bessel-basis contributions.
§.§ Dependence on Illumination Wavelength
We further investigate the influence of λ_opt by
plotting optical force and torque per incident-field intensity I_avg
as a function of illumination wavelength. In Figs.<ref>A-<ref>B,
the reference planewave force spectrum (black dashed line) is identical
in both plots, clearly dominated by a broad dipole resonance with
smaller peaks at higher-order resonances. Through illumination-field
optimization, we can enhance the force at the dipole mode while suppressing
higher-order modes (Fig.<ref>A) and also enhance
the force at the quadrupole mode while suppressing the dipole mode
(Fig.<ref>B).
In Figs.<ref>A-<ref>C,
the reference planewave torque spectrum (black dashed line) is identical
in all three plots and peaks at both dipole and quadrupole resonances
with nearly equal heights (explained in detail in <cit.>).
Illumination-field optimization at dipole resonance and off-resonance
achieve a similar 6x-boost at the chosen λ_opt value
without suppressing the quadrupole resonance. The optimized total
fields in Fig.<ref>D at 1028nm and 805nm both
exhibit a 4π phase change around the circumference of the particle,
resembling a quadrupole resonance.
In Fig.<ref>C, on the other hand, the best optimization
at quadrupole resonance achieves a remarkable 20x improvement while
suppressing much of the dipole resonance; the median optimization
also achieves 12x improvement while suppressing the dipole resonance
to a lesser extent. The optimized total field in Fig.<ref>D
at 625nm shows a distinct, highly resonant distribution.
When comparing interactions with two different resonances, we were
able to distinguish the impact of the change in resonant field pattern
from the change in the resonance lifetime. Controlling for the change
in lifetime, we found that torque seems to favor higher-order (e.g.
quadrupole) resonances with greater angular momentum, while force
(F_z), which closely correlates with extinction power, seems
to favor lower-order (e.g. dipole) resonance with greater field
intensity within the particle. With the use of structured illumination,
higher-order resonances can be excited more effectively, which contributes
to higher optical torque after optimization.
Our numerical optimization framework allows a systematic search of
the illumination-field design space to maximize force and torque on
lossy, non-spherical particles with multipolar scattering channels.
In addition, we think a rigorous analytical study of the fundamental
upper bounds on opto-mechanical responses, similar to the analysis
performed on light extinction <cit.>,
would be useful in the future.
§.§ Robustness of Optimization
Experimental generation of the designed illumination via SLMs may
suffer various types of manufacturing errors <cit.>.
Fig.<ref> shows the tradeoff between enhancement
and robustness to experimental errors. The fractional error is added
to the beam using
𝐄_w=∑_i=0^N(a_i+δ_ai)𝐌_i+(b_i+δ_bi)𝐍_i, (|δ|≤ w·| a,b|_∞),
where δ is a complex-valued random error bounded by the magnitude
of the largest coefficient multiplied by the fractional weight 0≤ w≤1.
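A sketch of this perturbation step; the text only bounds |δ|, so a uniform distribution over the complex disc is assumed here for illustration:

import numpy as np

def perturb(a, b, w, rng):
    """Add complex errors |delta| <= w * max(|a_i|, |b_i|) to the coefficients."""
    bound = w * max(np.abs(a).max(), np.abs(b).max())
    def delta(size):
        radius = bound * np.sqrt(rng.uniform(size=size))   # uniform in the disc
        angle = rng.uniform(0.0, 2.0 * np.pi, size=size)
        return radius * np.exp(1j * angle)
    return a + delta(a.size), b + delta(b.size)

rng = np.random.default_rng(1)
a = rng.uniform(-1, 1, 6) + 1j * rng.uniform(-1, 1, 6)
b = rng.uniform(-1, 1, 6) + 1j * rng.uniform(-1, 1, 6)
a_w, b_w = perturb(a, b, w=0.10, rng=rng)   # 10% fractional error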
Fig.<ref> shows the decrease in FOM_T
as a function of w. At λ_dip, increasing w
from 1% to 10% decreased the best FOM from 0.994 to 0.517,
and the median FOM followed a similar trend. On the other hand, FOM_T
of the best optimized field at λ_quad dropped from
8.624 to 1.433, and the median optimized field changed from
3.92 to 3.237. Fig.<ref> demonstrates a clear
tradeoff between field enhancement and error tolerance, as one might
expect due to the need to couple strongly to the underlying particle
resonances. For the tolerance requirements of a given experimental
setup, the approach we outline here could easily be adapted to a robust-optimization
framework <cit.> in which the
expected variability is included and optimized against.
§ CONCLUSION
We present a numerical framework for computer optimization of structured
illumination that maximizes optical force and torque on arbitrary
scatterers, and show a 20-fold enhancement in optical torque per intensity
on an example plasmonic nanoparticle, compared to a circularly polarized
planewave. Previously, the major bottleneck has been the cumbersome
computation. We overcome this bottleneck with a compact cylindrical
Bessel basis and a fast boundary element method. We are optimistic
that such computational framework for 3d vector fields can be generalized
and applied to other design problems in opto-mechanics, nanophotonics,
and 3d imaging.
§ ACKNOWLEDGEMENT
The authors thank George Barbastathis for helpful discussions.
§ REFERENCES
bertsekas_nonlinear_1999
D. P. Bertsekas, Nonlinear programming (Athena scientific Belmont,
1999).
powell_fast_1978
M. J. Powell, A fast algorithm for nonlinearly constrained
optimization calculations, in Numerical analysis, (Springer,
1978), pp. 144–157.
johnson_nlopt_2014
S. G. Johnson, The NLopt nonlinear-optimization package (2014).
grier_revolution_2003
D. G. Grier, A revolution in optical manipulation, Nature
424, 810–816 (2003).
dholakia_optical_2008
K. Dholakia, P. Reece, and M. Gu, Optical micromanipulation, Chem.
Soc. Rev. 37, 42–55 (2008).
agarwal_manipulation_2005
R. Agarwal, K. Ladavac, Y. Roichman, G. Yu, C. M. Lieber, and D. G. Grier,
Manipulation and assembly of nanowires with holographic optical
traps, Opt. Express 13, 8906–8912 (2005).
kelly_optical_2003
K. L. Kelly, E. Coronado, L. L. Zhao, and G. C. Schatz, The optical
properties of metal nanoparticles: the influence of size, shape, and
dielectric environment, The Journal of Physical Chemistry B 107,
668–677 (2003).
xia_shape-controlled_2005
Y. Xia and N. J. Halas, Shape-controlled synthesis and surface
plasmonic properties of metallic nanostructures, MRS bulletin 30,
338–348 (2005).
heckenberg_generation_1992
N. R. Heckenberg, R. McDuff, C. P. Smith, and A. G. White, Generation
of optical phase singularities by computer-generated holograms, Opt. Lett.
17, 221–223 (1992).
curtis_dynamic_2002
J. E. Curtis, B. A. Koss, and D. G. Grier, Dynamic holographic optical
tweezers, Optics Communications 207, 169–175 (2002).
di_leonardo_computer_2007
R. Di Leonardo, F. Ianni, and G. Ruocco, Computer generation of
optimal holograms for optical trap arrays, Optics Express 15,
1913–1922 (2007).
chen_generation_2011
H. Chen, J. Hao, B.-F. Zhang, J. Xu, J. Ding, and H.-T. Wang,
Generation of vector beam with space-variant distribution of both
polarization and phase, Optics letters 36, 3179–3181 (2011).
karimi_efficient_2009
E. Karimi, B. Piccirillo, E. Nagali, L. Marrucci, and E. Santamato,
Efficient generation and sorting of orbital angular momentum
eigenmodes of light by thermally tuned q-plates, Applied Physics Letters
94, 231124 (2009).
dolev_surface-plasmon_2012
I. Dolev, I. Epstein, and A. Arie, Surface-plasmon holographic beam
shaping, Physical review letters 109, 203903 (2012).
schulz_integrated_2013
S. A. Schulz, T. Machula, E. Karimi, and R. W. Boyd, Integrated multi
vector vortex beam generator, Opt. Express 21, 16130–16141 (2013).
chen_creating_2015
C.-F. Chen, C.-T. Ku, Y.-H. Tai, P.-K. Wei, H.-N. Lin, and C.-B. Huang,
Creating Optical Near-Field Orbital Angular Momentum in
a Gold Metasurface, Nano Letters 15, 2746–2750 (2015).
liu_radiation_2005
M. Liu, N. Ji, Z. Lin, and S. Chui, Radiation torque on a birefringent
sphere caused by an electromagnetic wave, Physical Review E 72
(2005).
liu_light-driven_2010
M. Liu, T. Zentgraf, Y. Liu, G. Bartal, and X. Zhang, Light-driven
nanoscale plasmonic motors, Nature nanotechnology 5, 570–573
(2010).
lehmuskero_ultrafast_2013
A. Lehmuskero, R. Ogier, T. Gschneidtner, P. Johansson, and M. Käll,
Ultrafast Spinning of Gold Nanoparticles in Water Using
Circularly Polarized Light, Nano Letters p. 130624122754005 (2013).
arita_laser-induced_2013
Y. Arita, M. Mazilu, and K. Dholakia, Laser-induced rotation and
cooling of a trapped microgyroscope in vacuum, Nature Communications
4 (2013).
chen_optical_2016
J. Chen, N. Wang, L. Cui, X. Li, Z. Lin, and J. Ng, Optical Twist
Induced by Plasmonic Resonance, Scientific Reports 6, 27927
(2016).
novitsky_single_2011
A. Novitsky, C.-W. Qiu, and H. Wang, Single Gradientless Light
Beam Drags Particles as Tractor Beams, Physical Review Letters
107 (2011).
sukhov_negative_2011
S. Sukhov and A. Dogariu, Negative Nonconservative Forces:
Optical “Tractor Beams” for Arbitrary Objects, Phys. Rev.
Lett. 107, 203602 (2011).
simpson_mechanical_1997
N. Simpson, K. Dholakia, L. Allen, and M. Padgett, Mechanical
equivalence of spin and orbital angular momentum of light: an optical
spanner, Optics Letters 22, 52–54 (1997).
torres_twisted_2011
J. P. Torres and L. Torner, Twisted Photons: Applications of Light
with Orbital Angular Momentum (Wiley New York, 2011).
chen_negative_2014
J. Chen, J. Ng, K. Ding, K. H. Fung, Z. Lin, and C. T. Chan, Negative
Optical Torque, arXiv:1402.0621v1 (2014).
lehmuskero_plasmonic_2014
A. Lehmuskero, Y. Li, P. Johansson, and M. Käll, Plasmonic particles
set into fast orbital motion by an optical vortex beam, Optics Express
22, 4349 (2014).
singer_three-dimensional_2000
W. Singer, S. Bernet, N. Hecker, and M. Ritsch-Marte,
Three-dimensional force calibration of optical tweezers, Journal of
Modern Optics 47, 2921–2931 (2000).
gersborg_maximizing_2011
A. R. Gersborg and O. Sigmund, Maximizing opto-mechanical interaction
using topology optimization, International Journal for Numerical Methods in
Engineering 87, 822–843 (2011).
hajizadeh_optimized_2010
F. Hajizadeh and S. N. S Reihani, Optimized optical trapping of gold
nanoparticles, Optics express 18, 551–559 (2010).
tolic-norrelykke_matlab_2004
I. M. Tolić-Nørrelykke, K. Berg-Sørensen, and H. Flyvbjerg,
MatLab program for precision calibration of optical tweezers,
Computer Physics Communications 159, 225–240 (2004).
polin_optimized_2005
M. Polin, K. Ladavac, S.-H. Lee, Y. Roichman, and D. Grier, Optimized
holographic optical traps, Optics Express 13, 5831–5845 (2005).
martin-badosa_design_2007
E. Martín-Badosa, M. Montes-Usategui, A. Carnicer, J. Andilla,
E. Pleguezuelos, and I. Juvells, Design strategies for optimizing
holographic optical tweezers set-ups, Journal of Optics A: Pure and Applied
Optics 9, S267 (2007).
bianchi_real-time_2010
S. Bianchi and R. Di Leonardo, Real-time optical micro-manipulation
using optimized holograms generated on the GPU, Computer Physics
Communications 181, 1444–1448 (2010).
cizmar_holographic_2010
T. Čižmár, O. Brzobohaty, K. Dholakia, and P. Zemánek, The
holographic optical micro-manipulation system based on counter-propagating
beams, Laser Physics Letters 8, 50 (2010).
tao_tao_3d_2011
T. T. Tao Tao, J. L. Jing Li, Q. L. Qian Long, and X. W. Xiaoping Wu,
3d trapping and manipulation of micro-particles using holographic
optical tweezers with optimized computer-generated holograms, Chinese Optics
Letters 9, 120010–120013 (2011).
lapointe_towards_2011
C. P. Lapointe, T. G. Mason, and I. I. Smalyukh, Towards total
photonic control of complex-shaped colloids by vortex beams, Optics express
19, 18182–18189 (2011).
j.d._jackson_classical_1962
J.D. Jackson, Classical Electrodynamics, vol. 3 (Wiley New York,
1962), third edition ed.
bruning_multiple_1971
J. H. Bruning and Y. T. Lo, Multiple scattering of EM waves by
spheres part I–Multipole expansion and ray-optical solutions, Antennas
and Propagation, IEEE Transactions on 19, 378–390 (1971).
bohren_absorption_2004
C. F. Bohren and D. R. Huffman, Absorption and scattering of light by
small particles (Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, 2004).
stratton_electromagnetic_2007
J. A. Stratton, Electromagnetic theory (John Wiley & Sons, 2007).
zhan_cylindrical_2009
Q. Zhan, Cylindrical vector beams: from mathematical concepts to
applications, Advances in Optics and Photonics 1, 1 (2009).
rosen_pseudo-nondiffracting_1995
J. Rosen, B. Salik, and A. Yariv, Pseudo-nondiffracting beams
generated by radial harmonic functions, JOSA A 12, 2446–2457
(1995).
volke-sepulveda_orbital_2002
K. Volke-Sepulveda, V. Garcés-Chávez, S. Chávez-Cerda, J. Arlt, and
K. Dholakia, Orbital angular momentum of a high-order Bessel light
beam, Journal of Optics B: Quantum and Semiclassical Optics 4, S82
(2002).
chew_integral_2008
W. C. Chew, M. S. Tong, and B. Hu, Integral equation methods for
electromagnetic and elastic waves, Synthesis Lectures on Computational
Electromagnetics 3, 1–241 (2008).
harrington_field_1996
R. F. Harrington and J. L. Harrington, Field computation by moment
methods (Oxford University Press, 1996).
reid_efficient_2013
M. T. Reid and S. G. Johnson, Efficient Computation of Power,
Force, and Torque in BEM Scattering Calculations, arXiv preprint
arXiv:1307.2966 (2013).
hamam_coupled-mode_2007
R. E. Hamam, A. Karalis, J. Joannopoulos, and M. Soljačić,
Coupled-mode theory for general free-space resonant scattering of
waves, Physical review A 75, 053801 (2007).
kwon_optimal_2009
D.-H. Kwon and D. M. Pozar, Optimal characteristics of an arbitrary
receive antenna, IEEE Transactions on Antennas and Propagation 57,
3720–3727 (2009).
liberal_least_2014
I. Liberal, Y. Ra'di, R. Gonzalo, I. Ederra, S. A. Tretyakov, and R. W.
Ziolkowski, Least Upper Bounds of the Powers Extracted and
Scattered by Bi-anisotropic Particles, IEEE Transactions on Antennas
and Propagation 62, 4726–4735 (2014).
lee_optical_2014
Y. E. Lee, K. H. Fung, D. Jin, and N. X. Fang, Optical torque from
enhanced scattering by multipolar plasmonic resonance, Nanophotonics
3, 343–440 (2014).
marston_radiation_1984
P. L. Marston and J. H. Crichton, Radiation torque on a sphere caused
by a circularly-polarized electromagnetic wave, Physical Review A
30, 2508 (1984).
friese_optical_1998
M. Friese, T. Nieminen, N. Heckenberg, and H. Rubinsztein-Dunlop,
Optical torque controlled by elliptical polarization, Optics
letters 23, 1–3 (1998).
miller_fundamental_2014
O. Miller, C. Hsu, M. Reid, W. Qiu, B. DeLacy, J. Joannopoulos, M. Soljačić,
and S. Johnson, Fundamental Limits to Extinction by Metallic
Nanoparticles, Physical Review Letters 112 (2014).
miller_fundamental_2016
O. D. Miller, A. G. Polimeridis, M. T. Homer Reid, C. W. Hsu, B. G. DeLacy,
J. D. Joannopoulos, M. Soljačić, and S. G. Johnson, Fundamental
limits to optical response in absorptive systems, Optics Express
24, 3329 (2016).
jesacher_wavefront_2007
A. Jesacher, A. Schwaighofer, S. Fürhapter, C. Maurer, S. Bernet, and
M. Ritsch-Marte, Wavefront correction of spatial light modulators
using an optical vortex image, Opt. Express 15, 5801–5808 (2007).
boyd_convex_2004
S. Boyd and L. Vandenberghe, Convex optimization (Cambridge university
press, 2004).
mutapcic_robust_2009
A. Mutapcic, S. Boyd, A. Farjadpour, S. G. Johnson, and Y. Avniel,
Robust design of slow-light tapers in periodic waveguides,
Engineering Optimization 41, 365–384 (2009).
|
http://arxiv.org/abs/1701.07478v3 | 20170125203617 | Third Law of Thermodynamics as a Single Inequality | [
"Henrik Wilming",
"Rodrigo Gallego"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] |
The third law of thermodynamics in the form of the unattainability principle states that exact ground-state cooling requires infinite resources. Here we investigate the amount of non-equilibrium resources needed for approximate cooling. We consider as resource any system out of equilibrium, allowing for resources beyond the i.i.d. assumption and including the input of work as a particular case. We establish in full generality a sufficient and a necessary condition for cooling and show that for a vast class of non-equilibrium resources these two conditions coincide, providing a single necessary and sufficient criterion. Such conditions are expressed in terms of a single function playing a similar role for the third law to the one of the free energy for the second law. From a technical point of view we provide new results about concavity/convexity of certain Renyi-divergences, which might be of independent interest.
The third law of thermodynamics as a single inequality
Henrik Wilming and Rodrigo Gallego
======================================================
§ INTRODUCTION
Pure quantum states are indispensable resources for any task in quantum information processing. However, the third law of thermodynamics (more precisely, the unattainability principle) states that cooling a system exactly to zero temperature requires an infinite amount of resources, be it in the form of time, space, work or some other resource <cit.>. Similarly, no-go theorems have been put forward for the task of bit erasure –which is closely related to ground-state cooling– showing that no unitary process on a system and a finite dimensional reservoir can bring the system from a mixed to a pure state <cit.>. However, these no-go results do not say much about the amount of resources needed for approximate cooling. Indeed, in recent times a sizable number of studies deal with different protocols to cool a small quantum system by unitarily acting on a heat bath and a certain number of systems out of equilibrium to be “used up” (known under the name of algorithmic or dynamical cooling) <cit.> or studying particular models of refrigerating small quantum systems <cit.>, including ones that seem to challenge the unattainability principle in terms of required time <cit.>.
In this work we will focus on quantifying in full generality the expenditure of arbitrary systems out of equilibrium that are needed for approximate cooling while having access to a heat bath. Our scenario is similar to the one considered in algorithmic cooling, but here we treat the full thermodynamics of the problem by allowing for resources with non-trivial Hamiltonians and accounting for the energy conservation of the total process.
We will do this in the resource theoretic framework of quantum thermodynamics <cit.>, which has proven useful to answer a variety of fundamental questions in quantum thermodynamics, such as establishing an infinite family of second laws <cit.>, providing fundamental bounds to single-shot thermodynamics <cit.>, providing definitions of work for quantum systems <cit.>, generalizing fluctuation theorems <cit.>, elucidating the thermodynamic meaning of negative entropies <cit.> and elucidating the role of quantum coherence in thermodynamics <cit.>.
Recently, there have also been studies from this point of view on the problem of cooling <cit.>, however mostly focusing on providing necessary conditions in terms of resources such as time, space or Hilbert-space dimension.
The task of cooling that we are considering can be phrased as finding a cooling protocol between an arbitrary resource described by the state and Hamiltonian ρ_R and H_R, respectively, and a target system described by ρ_S and H_S so that ρ_S approximates the ground-state of H_S. We will later assume for simplicity that ρ_S is a thermal state – in this case the goal is to bring its final temperature T_S to a very low value. We will assume that the density matrix of the resource has full rank because otherwise the problem trivializes, since one can, for example, simply swap with a ground-state [Similar arguments can be made for other states without full rank. For example two copies of a rank-2 state in a 4-dimensional system can be written as a pure state in tensor product with a full-rank state. Such a resource therefore already contains a pure state which can then be mapped to the ground-state.]. We furthermore assume that the target system is initially in thermal equilibrium with some environment. Then the transition, i.e. the cooling protocol, can be performed by using a thermal bath at a fixed inverse temperature β and performing a global unitary that commutes with the total Hamiltonian, so that energy conservation is properly accounted for. This kind of transitions have been extensively studied and they can be characterized by families of functions M^α, the so-called monotones, so that a transition is possible if and only if <cit.>
M^α(ρ_R,H_R) ≥ M^α(ρ_S,H_S) ∀ α.
Hence, the problem at hand is in principle hard to characterize since one needs to verify an infinite number of conditions to conclude that a given transition is possible. The main contribution of the present work is to show that in the limit where T_S is sufficiently close to zero –i.e. the regime where the (un)attainability problem is formulated– the infinite set of monotones appearing in (<ref>) can be essentially reduced to a single monotone. We call this monotone the vacancy and it is defined as
V_β(ρ,H) := S(ω_β(H) ‖ ρ),
where ω_β(H) is the Gibbs state of H at inverse temperature β and S is the relative entropy defined as
S(ρ‖σ) = Tr(ρ log ρ) - Tr(ρ log σ),
if supp(ρ)⊆supp(σ) and equal to +∞ otherwise.
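For states diagonal in the energy eigenbasis the vacancy reduces to a classical relative entropy between two probability vectors, which makes it straightforward to evaluate numerically. The following minimal Python sketch (the function names are ours, purely for illustration) computes V_β for such diagonal data:

import numpy as np

def gibbs(energies, beta):
    # Gibbs distribution of a diagonal Hamiltonian at inverse temperature beta
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    return w / w.sum()

def vacancy(p, energies, beta):
    # V_beta(rho, H) = S(omega_beta(H) || rho) for rho diagonal with eigenvalues p;
    # diverges whenever rho is not of full rank
    w = gibbs(energies, beta)
    p = np.asarray(p, dtype=float)
    if np.any(p <= 0):
        return np.inf
    return float(np.sum(w * (np.log(w) - np.log(p))))

For instance, vacancy([1.0, 0.0], [0.0, 1.0], 1.0) returns infinity, reflecting that a pure state carries a divergent vacancy.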
We find that sufficient and necessary conditions for cooling, respectively, are given by
V_β(ρ_R,H_R) - K(ρ_R,H_R,ρ_S,H_S,β) ≥ V_β(ρ_S,H_S),
V_β(ρ_R,H_R) ≥ V_β(ρ_S,H_S),
where K (ρ_R,H_R,ρ_S,H_S,β) → 0 as T_S → 0. Hence in the limit of very low temperature cooling V_β(ρ_S,H_S) is the key quantity that determines the fundamental limitations.
Importantly, V_β(ρ_S,H_S) diverges as T_S→ 0. The necessary condition (<ref>) therefore shows that an infinite amount of resources (as measured by V_β) is necessary for exact ground-state cooling. Furthermore we show that for a vast class of resource systems, for example thermal states of coupled harmonic oscillators, the function K(ρ_R,H_R,ρ_S,H_S,β) vanishes identically. Hence
V_β(ρ_R,H_R) ≥V_β(ρ_S,H_S)
becomes both a sufficient and necessary condition. That V_β plays an important role for the third law had been previously found in the setting of i.i.d. resources and qubits as target systems in the seminal work of Ref. <cit.>. Here, we extend the significance of the quantity V_β to arbitrary scenarios.
Usually, the unattainability principle is formulated with respect to time, arguing that an infinite amount of time (or infinitely many cycles of a periodically working machine) is needed to cool a system exactly to zero temperature. Our results show, for example, that if the non-equilibrium resources are simply hot thermal systems (as in the example of a thermal machine that operates between two heat baths), the system to be cooled and the cooling machine have to effectively interact with infinitely many such resource systems (or all parts of one infinitely large system). This implies that an infinite amount of time is needed, since each such interaction takes a finite time (see <cit.> for a thorough discussion of this point).
Our findings not only serve to pose limitations to protocols of algorithmic cooling, but also suggest a surprising symmetry between the second and third law of thermodynamics. The second law –in its averaged version or in the version of the Jarzynski equality <cit.>– can be expressed in terms of the free-energy difference defined as
Δ F_β(ρ,H) = 1/β S(ρ ‖ ω_β(H)).
In analogy, we show that the third law can be expressed similarly in terms of V_β(ρ,H) which simply inverts the arguments of the relative entropy in Eq. (<ref>) and drops the pre-factor.
This symmetry between the second and third law is quite surprising and hints at the fact that the second and third law can be related to the errors of first and second kind in hypothesis testing <cit.>.
We leave the investigation of this deeper relation between the two for future work.
From a technical point of view, our results rely on certain convexity properties of the function α↦ S_α(ρ ‖ σ), where the S_α are classical Renyi-divergences <cit.>. We believe that these results might be of independent interest.
§ SET-UP AND GENERAL NECESSARY CONDITION
In the following we will use the set-up of catalytic thermal operations <cit.> applied to the task of cooling. In this set-up we imagine to possess a resource given by the pair of state and Hamiltonian (ρ_R,H_R). We can then use an arbitrary thermal bath at inverse temperature β, that is, a system in a Gibbs state ω_β(H_B) of a Hamiltonian H_B, and finally an ancillary system, the so-called catalyst, with arbitrary state and Hamiltonian (σ_C,H_C), in such a way that the latter is returned in the same configuration and uncorrelated from the rest of the systems after the protocol has been implemented. The target system to be cooled is initially assumed to be in thermal equilibrium with the thermal bath and is therefore described by a Gibbs state (ω_β(H_S),H_S). The total compound RSBC is transformed by a cooling protocol, which consists simply of a unitary transformation U which commutes with the total Hamiltonian.
More formally, we say that there exists a cooling protocol to ρ_S using the resource (ρ_R,H_R) if there exists a fixed catalyst (σ_C,H_C) and for any ϵ>0 there exists a unitary U and a bath Hamiltonian H_B such that
ρ'_RS ⊗ σ^ϵ_C = Tr_B(U ρ_R ⊗ ω_β(H_S) ⊗ ω_β(H_B) ⊗ σ_C U^†)
with Tr_R(ρ'_RS)=ρ_S and ‖σ_C - σ_C^ϵ‖_1 ≤ ϵ. The only constraint on the unitary U is that it conserves the global energy, i.e.,
[U,H_R+H_S+H_B+H_C]=0.
Note that this formulation of the cooling process contains partial cooling as a particular case, in which the target does not start in a Gibbs state: if S has been partially cooled before the protocol starts, its initial state can simply be incorporated as part of the resource R.
The problem of finding conditions for the existence of transitions of the form (<ref>) has been studied in Ref. <cit.> for diagonal states, that is, with [ρ_R,H_R]=0 and [ρ_S,H_S]=0. Throughout this manuscript we will restrict to such diagonal states, but we emphasize that the necessary condition (<ref>) also holds for non-diagonal states as we will see later.
Under the assumption that ρ_R and ρ_S are diagonal, one can show that cooling to a state ρ_S is possible if and only if <cit.>
S_α(ρ_R || ω_β(H_R)) ≥ S_α(ρ_S || ω_β(H_S)) ∀α≥ 0,
where S_α are so-called Renyi-divergences. The proof of this statement relies simply on the results of Ref. <cit.> together with the additivity of the Renyi-divergences under tensor-products.
An important tool that appears in Eq. (<ref>) is the concept of a monotone of (catalytic) thermal operations <cit.>. This is any function f which can only decrease under (catalytic) thermal operations. The functions S_α appearing in (<ref>) are monotones under catalytic thermal operations and more generally under any channel that has the Gibbs state as a fixed point. Importantly, any monotone f, possibly different from S_α, allows us to construct necessary conditions for a given transition. We will now show that V_β is also a monotone under catalytic thermal operations and derive the corresponding necessary condition for cooling.
The vacancy is an additive monotone under catalytic thermal operations. This has as an implication that
for any target (ρ_S,H_S) and resource (ρ_R,H_R) –not necessarily diagonal states–, the condition
V_β(ρ_R,H_R)≥V_β(ρ_S,H_S).
is necessary for cooling.
Let us first show that V_β is a monotone under catalytic thermal operations. Consider an arbitrary transition from a state ρ to a state ρ' –both with Hamiltonian H– by catalytic thermal operations; we will show that V_β(ρ,H)≥V_β(ρ',H).
First note that the vacancy diverges for a state ρ without full rank, thus the inequality V_β(ρ,H)≥V_β(ρ',H) is satisfied trivially for those states. Let us therefore assume that ρ is a full rank state.
As was shown in <cit.>, for any 0≤α≤ 2 the Renyi-divergences
S_α(ρ‖ω_β(H)) := 1/(α-1) log Tr(ρ^α ω_β(H)^{1-α})
are monotonic under (catalytic) thermal operations for arbitrary states ρ. That is, we have the necessary condition
S_α(ρ || ω_β(H)) ≥ S_α(ρ' || ω_β(H)) ∀ 0≤α≤ 2,
By simple algebra, one can show that
.∂/∂_α|_α=0 S_α(ρω_β(H)) = V_β(ρ,H) ≥ 0.
Taylor-expanding (<ref>) on both sides for any α>0 and dividing by α then yields
V_β(ρ,H) + O(α) ≥ V_β(ρ',H) + O(α),
where O(α) indicates terms of first order in α. Taking α arbitrarily small then yields V_β(ρ,H) ≥ V_β(ρ',H). This proves monotonicity under catalytic thermal operations. Additivity follows directly from the additivity of the relative entropy under tensor products.
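The identity (<ref>) at the heart of this argument is easy to verify numerically for diagonal states by a finite-difference approximation of the derivative at α=0; a small sketch with illustrative numbers of our choosing:

import numpy as np

def renyi_div(p, q, alpha):
    # classical Renyi divergence S_alpha(p || q) for full-rank p, q
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

energies = np.array([0.0, 1.0, 2.5])
beta = 0.7
w = np.exp(-beta * energies); w /= w.sum()   # omega_beta(H)
p = np.array([0.5, 0.3, 0.2])                # some full-rank diagonal state

h = 1e-6                                     # S_0 = 0, so S_h / h approximates S'_0
print(renyi_div(p, w, h) / h)                # approx V_beta(p, H)
print(np.sum(w * (np.log(w) - np.log(p))))   # V_beta(p, H) computed directly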
Once we established that the vacancy is a monotone under catalytic thermal operations, we can now derive the necessary condition for cooling by simply applying this condition to the transition that SR undergo in the cooling process:
V_β(ρ_R ⊗ ω_β(H_S), H_R+H_S)
≥ V_β(ρ'_RS, H_R+H_S)
≥ V_β(ω_β(H_R) ⊗ ρ_S, H_R+H_S).
The last inequality follows from the fact that one can always replace the state on any system by an uncorrelated thermal state at the heat bath's temperature using a thermal operation. Using additivity of the vacancy and the fact that V_β(ω_β(H),H)=0, we obtain the necessary condition (<ref>).
We emphasize that the necessary condition (<ref>) is derived in full generality and it applies to any full-rank state ρ_R and any state ρ_S, possibly not diagonal in the eigenbasis of H_S.
The monotone V_β was first introduced in Ref. <cit.>. Its relevance for the unattainability principle is clear, since if ρ_S does not have full support, then the r.h.s. of (<ref>) diverges. Hence, exact cooling is impossible unless the resource ρ_R does not have full support either. In the particular case of ρ_R= ⊗_i=1^n ϱ^i, where each ϱ^i has full support, the condition (<ref>) already tells us that we need infinite resources –infinite n in this case– for exact cooling. Hence, such a simple analysis already suggests that the quantity V_β plays a crucial role for the limitations on cooling.
To summarize, we have seen, building upon previous literature, that the vacancy V_β establishes completely general necessary conditions for cooling. However, for necessary and sufficient conditions one should in principle verify an infinite number of inequalities given by (<ref>). Our contribution will be to show that these infinite number of inequalities can be reduced to a single one, which can also be expressed in terms of the vacancy for a sufficiently cold final state ρ_S. Furthermore, we will show that for a large family of resource systems the single sufficient condition that we find coincides with the necessary condition (<ref>). Hence, the limits on cooling are entirely ruled by the function V_β. This holds for large classes of finite systems, with possibly correlated and interacting subsystems.
§ GENERAL SUFFICIENT CONDITIONS FOR COOLING
The process of cooling laid out in the previous section can in principle be applied to any final state ρ_S. We will now assume for simplicity that the final state, as it corresponds to a cooling process, is of the form ρ_S=ω_β_S(H_S) with β_S very large. We can then derive the following completely general sufficient condition for cooling.
For every choice of β and H_S there is a critical β_cr>0 such that for any β_S > β_cr and full-rank resource (ρ_R,H_R) the condition
V_β(ρ_R,H_R) - K(β_S,β,ρ_R,H_R,H_S) ≥ V_β(ω_β_S(H_S),H_S)
is sufficient for cooling. The non-negative function K has the property K(β_S,β,ρ_R,H_R,H_S) → 0 as β_S→∞ for any fixed β, H_R, ρ_R>0 and H_S.
The proof of the theorem is given in Sec. <ref>. Nonetheless we provide a sketch of the main ideas involved in such a proof at the end of this section. The function K is given by
K(β_S,β,ρ_R,H_R,H_S) = max{0, -δ(β_S) min_{α≤δ(β_S)} ∂^2/∂α^2 S_α(ρ_R‖ω_β(H_R))},
where
δ(β_S) := log(Z_β)/V_β(ω_β_S(H_S),H_S) ≥ 0, Z_β = Tr(e^{-β H_S}).
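Numerically, K can be estimated for diagonal resources by minimising a finite-difference approximation of the second derivative of α ↦ S_α(ρ_R‖ω_β(H_R)) over (0,δ(β_S)]. A rough sketch (note that minimize_scalar only finds a local minimum, so for rigorous bounds a grid search may be preferable):

import numpy as np
from scipy.optimize import minimize_scalar

def renyi_div(p, q, alpha):
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def correction_K(p_R, w_R, delta, h=1e-4):
    # K = max(0, -delta * min_{alpha <= delta} S''_alpha), with the second
    # derivative approximated by central differences
    def second_deriv(a):
        return (renyi_div(p_R, w_R, a + h) - 2 * renyi_div(p_R, w_R, a)
                + renyi_div(p_R, w_R, a - h)) / h**2
    res = minimize_scalar(second_deriv, bounds=(h, delta), method="bounded")
    return max(0.0, -delta * res.fun)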
The bound (<ref>) applies for any possible (diagonal) resource state; however, finding K(β_S,β,ρ_R,H_R,H_S) involves a minimization which, although feasible for low-dimensional systems, might be an obstacle for practical purposes when dealing with large systems and for values of β_S at which K(β_S,β,ρ_R,H_R,H_S) cannot be neglected. That said, we will investigate kinds of resource systems ρ_R,H_R for which K(β_S,β,ρ_R,H_R,H_S)= 0. In those cases, the general sufficient condition given by (<ref>) taken together with the necessary condition (<ref>) will imply that a necessary and sufficient condition is given simply by
V_β(ρ_R,H_R) ≥ V_β(ω_β_S(H_S),H_S),
In particular, we will see in section <ref>, that this holds true for large classes of thermal non-equilibrium resources. Let us also note that K(β_S,β,ρ_R,H_R,H_S), just as the vacancy, is additive over non-interacting and uncorrelated resources. We will use this property in the next section to investigate the setting of i.i.d. resources.
In the result given above, we have focused on thermal target states. This is in fact not necessary. We show in the appendix <ref> that a completely analogous result holds for states of the form
ρ_ϵ = (1-ϵ) |0⟩⟨0| + ϵρ^⊥, ϵ≪ 1.
where ρ^⊥ is any density matrix which has full rank on the subspace orthogonal to the ground-state |0⟩ and commutes with H_S.
§.§ Sketch of the proof of Thm. <ref>
As we have seen in the previous section, a set of sufficient conditions for a transition with catalytic thermal operations is given by the infinite set of inequalities of (<ref>). The main idea behind the proof is that when the target system is sufficiently cold (β_S > β_cr) it suffices to check the conditions (<ref>) for very small α. This follows from the fact that for β_S > β_cr the r.h.s. of (<ref>), given by S_α(ρ_Sω_β(H_S)), rapidly saturates to its maximum value as we increase α and it is concave (see Fig. <ref>). Given that one only needs to consider small values of α it is possible to make a Taylor expansion around α=0 of S_α of the form
S_α(ρ_R‖ω_β(H_R)) ≈ S_0(ρ_R‖ω_β(H_R)) + ∂ S_α(ρ_R‖ω_β(H_R))/∂α|_{α=0} α + k α^2.
This reduces the infinite inequalities of (<ref>) to a single one that depends on the derivative of S_α and an error term k, which is related to the error term K appearing in Thm. <ref>. This expansion can be further simplified noting that S_{α=0}(ρ_R‖ω_β(H_R))=0. The vacancy comes into play because of the identity
∂ S_α(ρ‖σ)/∂α|_{α=0} = S_1(σ‖ρ) := S(σ‖ρ),
which inverts the arguments of the second term on the r.h.s. of (<ref>). Taking all these elements into account and accounting properly for the precision of the Taylor approximations we arrive at an inequality involving only the vacancy and a vanishing error term as determined by Thm. <ref>.
§.§ A short comment on catalysts
As laid out in Sec. <ref>, we define the catalytic thermal operations by including the possibility that the catalyst changes during the transition, as long as this change can be made arbitrarily small, as is standard in recent literature on the resource theoretic approach to thermodynamics <cit.>. This formulation is a form of exact catalysis, in the sense that, as the error has to be arbitrarily small, the catalyst is returned arbitrarily unchanged. However, it is possible to consider other forms of catalytic thermal operations which are either more restrictive about the change of the catalyst, where no error –not even an arbitrarily small one– is allowed for; or less restrictive, in the sense that the catalyst is allowed to change by a finite amount. We consider both alternatives in Appendices <ref> and <ref>, respectively. First, we study the case in which one requires that the catalyst is always returned exactly in the same state, that is, taking ϵ=0 in the definitions laid out in Sec. <ref>. In this case it is no longer true that a set of sufficient conditions is given by positive values of α in (<ref>); one also has to consider Renyi-divergences for negative α. This case is analyzed in Appendix <ref>, where we show that in this scenario i) the general necessary condition of Theorem <ref> holds and ii) a general sufficient condition similar to the one of Theorem <ref> can be derived. This general sufficient condition only differs by a multiplicative constant –independent of the final temperature to which one cools– from the one derived in Theorem <ref>.
Finally, in Appendix <ref> we furthermore discuss the case of approximate catalysts. We put forward a consistent method to allow for finite errors on the catalyst while maintaining the validity of the third law of thermodynamics.
§ I.I.D RESOURCES AND SCALING OF THE TARGET TEMPERATURE
Theorem <ref> together with the necessary condition (<ref>) provide a completely general sufficient and necessary condition, respectively, for cooling a system to target temperature T_S=1/β_S (setting k_B=1) using a given resource (ρ_R,H_R). They thus characterize the possibility of cooling in full generality. To obtain results for concrete physical situations and find out how the target temperature T_S scales with physical key quantities of the resource –such as the system-size of the resource– one has to choose a particular resource and calculate its vacancy as well as the error term K. Then one has to check how these quantities depend on the physical properties of interest.
We will now focus on the scaling between the size of the resource and the final temperature of the target system. For this we assume that the resource is given by a number of identically and independently distributed copies. Later we will also discuss other assumptions we can make about the resource.
Thus, we consider the case where the resource state is given by ρ_R=ϱ_R^⊗ n and Hamiltonian H_R=∑_i H_R^i where H_R^i=𝕀_1 ⊗⋯⊗𝕀_i-1⊗ h_R ⊗𝕀_i+1⊗⋯⊗𝕀_n.
Let us now consider the following task: Given fixed ϱ_R,H_R,β,H_S, find the minimum n so that it is possible to cool down the target state to inverse temperature β_S.
By using Thm. <ref> together with additivity of V_β we obtain that the necessary number of copies n^ nec(β_S) fulfills
n^ nec(β_S) ≥V_β (ω_β_S(H_S),H_S)/V_β(ϱ_R,h_R) .
By using Thm. <ref> in the i.i.d. case we also obtain a sufficient number of copies n^ suff. The condition (<ref>) takes the form
n [ V_β(ϱ_R,h_R) - K(β_S,β,ϱ_R,h_R,H_S) ]≥V_β (ω_β_S(H_S),H_S).
Since this condition is sufficient, but not always necessary, we obtain
n^ suff(β_S) ≤V_β (ω_β_S(H_S),H_S)/V_β(ϱ_R,h_R) - K(β_S,β,ϱ_R,h_R,H_S) .
Since the correction K goes to zero as β_S→∞ (the target temperature going to zero), we see that
lim_β_S→∞n^ suff(β_S)/n^ nec(β_S) = 1.
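As a concrete illustration (with hypothetical numbers of our choosing), the necessary number of copies can be evaluated for thermal qubit resources; the linear growth of n^ nec with β_S is visible directly:

import numpy as np

def vacancy_thermal(energies, beta, beta_p):
    # V_beta(omega_{beta_p}(H), H) for a diagonal Hamiltonian
    e = np.asarray(energies, float)
    w = np.exp(-beta * e); w /= w.sum()
    p = np.exp(-beta_p * e); p /= p.sum()
    return float(np.sum(w * (np.log(w) - np.log(p))))

beta, beta_R = 1.0, 0.1          # bath and (hot) resource inverse temperatures
gaps = [0.0, 1.0]                # qubit with unit gap, for target and resource alike
for beta_S in [10.0, 100.0, 1000.0]:
    n_nec = vacancy_thermal(gaps, beta, beta_S) / vacancy_thermal(gaps, beta, beta_R)
    print(beta_S, int(np.ceil(n_nec)))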
It is also interesting to re-express the previous conditions to obtain a more transparent relation between the final achievable temperature and the number of copies. We will see next that n^ nec(β_S) and n^ suff(β_S) scale as β_S for large β_S. Thus the target temperature approaches zero as 1/n.
§.§ Scaling of the target temperature
In the special case where the target state is a thermal state one can reformulate the vacancy in terms of non-equilibrium free energies. Indeed, the vacancy of a thermal state at inverse temperature β_S simply takes the form
V_β(ω_β_S(H_S),H_S) = β_S Δ F_β_S(ω_β(H_S),H_S),
with Δ F_β(ρ,H) = ⟨ H⟩_ρ - ⟨ H ⟩_β - (S(ρ)-S(ω_β))/β.
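This identity follows by writing out the relative entropy between the two Gibbs states: with E_β := Tr(ω_β(H_S)H_S) and S_β := S(ω_β(H_S)),
V_β(ω_β_S(H_S),H_S) = Tr[ω_β(H_S)(log ω_β(H_S) - log ω_β_S(H_S))] = -S_β + β_S E_β + log Z_β_S = β_S (E_β - S_β/β_S + (1/β_S) log Z_β_S) = β_S Δ F_β_S(ω_β(H_S),H_S),
where the last step uses that -(1/β_S) log Z_β_S is the equilibrium free energy at inverse temperature β_S.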
In this case the condition (<ref>) reads:
V_β(ρ_R,H_R) ≥β_S Δ F_β_S(ω_β(H_S),H_S).
From (<ref>) we see that for large β_S we have (assuming vanishing ground-state energy)
V_β(ω_β_S(H_S),H_S) ≃ β_S E_β - S_β, as β_S→∞.
Assuming again a resource system of n non-interacting identical particles each described by (ϱ_R,h_R), we then obtain that the minimum achievable temperature T^(n)_S scales as
T^(n)_S = 1/nE^S_β/ V_β(ϱ_R,h_R), n ≫ 1.
This result is similar to the asymptotic result of Janzing et al. <cit.>.
Lastly, let us point out that the above scaling relation implies that the probability p to find the system in the ground-state after the cooling procedure increases exponentially to 1 with n. For example, if the target system is a d+1 dimensional system with gap Δ above a unique ground-state, we have (for large n)
p ≥ 1/(1+d e^{-β_S Δ}) ≈ 1 - d e^{-n V_β(ϱ_R,h_R) Δ/E^S_β}.
Thus, while an exact third law holds in the sense that n→∞ for T_S→ 0, the ground-state probability asymptotically converges very quickly to unity.
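To make the scaling tangible, the following sketch evaluates T^(n)_S and the resulting ground-state population for a qubit target (gap Δ=1) cooled with hot thermal qubit resources; all numbers are illustrative assumptions:

import numpy as np

beta, beta_R, Delta, d = 1.0, 0.1, 1.0, 1
E_beta = Delta / (np.exp(beta * Delta) + 1)         # thermal energy E^S_beta of the target
e = np.array([0.0, Delta])
w = np.exp(-beta * e); w /= w.sum()                 # bath-temperature Gibbs weights
r = np.exp(-beta_R * e); r /= r.sum()               # one hot resource qubit
V_res = float(np.sum(w * (np.log(w) - np.log(r))))  # vacancy per resource qubit

for n in [10, 100, 1000]:
    T_S = E_beta / (n * V_res)                      # minimal temperature, as above
    p_ground = 1.0 / (1.0 + d * np.exp(-Delta / T_S))
    print(n, T_S, p_ground)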
The above relations demonstrate how one can obtain quantitative expressions of the unattainability principle from Theorems <ref> and <ref> by making assumptions about the given resources.
§.§ Scaling of the vacancy with system size
In the case of i.i.d. resources with non-interacting Hamiltonians the vacancy is an extensive quantity in the sense that it scales linearly with the number of particles. However, for arbitrary quantum systems with correlated and interacting constituents, it is in general difficult to calculate the vacancy and hence estimate directly how it scales with the number of particles. Nonetheless, we can use the relation (<ref>) to argue that the vacancy will be extensive for large classes of many-body systems.
In particular, let us assume that the resource is in a thermal state of some local Hamiltonian. That is ρ_R=ω_β̃(H̃_R), where H̃_R is any local Hamiltonian (possibly differing form H_R) and β̃ is finite. In this case one can use Eq. (<ref>) to write
V_β(ρ_R,H_R) = β̃ Δ F_β̃(ω_β(H_R),H̃_R).
From the fact that the von Neumann entropy is sub-additive and from the locality of the Hamiltonians H_R and H̃_R it then follows that the vacancy V_β(ρ_R,H_R) scales (at most) linearly with the system size. As a consequence, the minimal final temperature T_S^(n) scales (at best) inversely proportional to the volume of the resource.
In the light of the previous considerations, it seems likely that a similar scaling holds for any resource (potentially under reasonable physical assumptions, such as clustering of correlations). We leave the general characterisation of many-body systems such that the vacancy is extensive as an interesting future research direction.
§ THERMAL RESOURCES
As discussed after the statement of Theorem <ref>, it is useful to find general conditions under which the error term K disappears and the sufficient condition coincides with the general necessary condition. Naturally, it is necessary to make additional assumptions about the resources to achieve this.
We now consider as resource state ρ_R a (possibly multi-partite) thermal state of some Hamiltonian H_R at inverse temperature β_R. In the following we will derive a simple expression that allows us to check whether
K(β_S,β,ρ_R,H_R,H_S)= 0, ∀β_S>0,H_S,
and hence (<ref>) becomes a necessary and sufficient condition.
The reasoning is based on showing that
S_α(ω_β_R(H_R)‖ω_β(H_R))
is convex for a given range of α<1, which implies (<ref>). The convexity of (<ref>) can be determined by looking at the convexity of the average energy as a function of the inverse temperature of the resource,
x ↦ E_x^R := Tr(ω_x(H_R)H_R).
In particular, we will see that if β_R< β and the function x↦ E^R_x is convex for x∈[β_R,β], then the function S_α(ω_β_R(H_R)||ω_β(H_R)) is convex for all α<1.
For resources of the form (ω_β_R(H_R),H_R) that are hotter than the bath, that is with β_R ≤ β, if E^R_x = Tr(ω_x(H_R) H_R) is convex in the range x∈ [β_R,β], then (<ref>) is a sufficient and necessary condition for low temperature cooling.
This Theorem simplifies considerably the task of formulating bounds on the third law, since the average energy is a much more accessible quantity than the Renyi-divergences. In Sec. <ref> we discuss several classes of physically motivated conditions that imply that E^R_x=(ω_x(H_R) H_R) is convex. We emphasize, however, that the convexity of the energy is not a necessary condition for the correction K to vanish: There are cases for which x ↦ E^R_x is not convex for the whole range of inverse temperatures [β_R,β] and (<ref>) is nevertheless a sufficient and necessary condition for cooling.
Lastly, let us mention that condition (<ref>) is fulfilled if, for a fixed β_R, the bath's inverse temperature β is sufficiently large, without any extra assumption on the convexity of E_x. This implies, that for sufficiently cold baths, (<ref>) is also a sufficient and necessary condition. This is shown in Appendix <ref>, together with several properties of the Renyi divergences for thermal states that might be of independent interest and also include a proof of Thm. <ref>.
§.§ Systems for which the energy is convex
As implied by Cor. <ref>, Eq. (<ref>) becomes a sufficient and necessary condition for cooling if the resource is a thermal state hotter than the bath and its average energy is convex in the inverse temperature. We will now see that the convexity of the energy is fulfilled by a wide range of physical models.
We will first re-express the convexity in terms of the heat capacity. This allows one to check for general systems whether E^R_x is convex, as the heat capacity as a function of the temperature is an intensively studied quantity for many-body systems. Using the definition of the heat capacity C_x := dE^R_x/dT, with T=1/x, we find that the convexity of the energy, as formulated in the condition of Thm. <ref>, can be expressed as
d^2 E^R_x/dx^2 = 1/x^2 (2C_x/x - dC_x/dx) ≥ 0 with x∈ [β_R,β].
Equivalently, this condition can be expressed as
1/β_R^2 C_β_R - 1/β'^2 C_β'≥ 0
for all β_R≤β' ≤β.
In most thermodynamic systems, the heat capacity is monotonically increasing with the temperature, hence dC_x/dx ≤ 0 and (<ref>) is satisfied. A notable exception is given by the so-called Schottky anomaly, which is present in certain solids at very low temperatures <cit.>.
We thus see, that for thermodynamic systems the convexity of the energy is a very natural property.
Nevertheless it can fail, in particular in finite systems. We will now show that even for large classes of finite systems the energy is convex.
This is due to the following Lemma, which we prove in the appendix.
Consider any Hamiltonian with equidistant and non-degenerate energy levels. Then the function β↦ E_β is convex.
Immediate examples of Hamiltonians with equidistant energy levels are two-level systems or harmonic oscillators. But in fact, the lemma covers a much wider class of models since the vacancy is unitarily invariant and additive over non-interacting subsystems.
It follows that also any harmonic system and any system described by free fermions has a convex energy function, since free bosonic and fermionic systems can always be made non-interacting by a normal-mode decomposition. In such a normal-mode decomposition they simply correspond to a collection of non-interacting harmonic oscillators or two-level systems, respectively.
These systems include highly-correlated (even entangled) systems and no thermodynamic limit needs to be taken. A particularly interesting resource that is included by these results is that of hot thermal light, which has been considered before as a valuable resource for cooling <cit.>.
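A direct finite-difference check of the Lemma for a truncated harmonic ladder (a numerical sanity check under our illustrative parameter choices, not part of the proof):

import numpy as np

def thermal_energy(energies, beta):
    w = np.exp(-beta * np.asarray(energies, float))
    return float(np.sum(w * energies) / np.sum(w))

energies = np.arange(6) * 1.0        # six equidistant, non-degenerate levels
h = 1e-4
for beta in np.linspace(0.05, 5.0, 20):
    d2E = (thermal_energy(energies, beta + h) - 2 * thermal_energy(energies, beta)
           + thermal_energy(energies, beta - h)) / h**2
    assert d2E >= -1e-8              # beta -> E_beta is indeed convex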
It can furthermore be checked that for large but finite many-body systems whose density of states in the bulk is well approximated by μ(ϵ) ≃ e^{γϵ - αϵ^2} the average energy E_β is convex in β [Such a density of states is typical for many-body systems in the bulk of the energy, but deviations typically do appear in the tails of the distribution.].
Finally, for every finite system there is a critical β_c such that E_β is convex for all β>β_c. Thus as soon as β>β_R>β_c, the sufficient condition (<ref>) holds for small enough target temperatures. This means that if an experimenter has a mechanism to pre-cool the environment to a very low temperature, well below 1/β_c, and the resource has a temperature, which is larger than that of the environment but still smaller than 1/β_c, then condition (<ref>) holds as a sufficient and necessary condition.
§.§ A source of work
Our formalism can also incorporate a source of work as particular case of a resource for cooling. The limitations on cooling as a function of the input of work have been studied in Ref. <cit.>. There it is shown that the fluctuations of work, rather than its average value, have to diverge when the target state reaches vanishing final temperature and if the heat bath has finite heat capacity. We will here derive a result similar in spirit using Thm. <ref>, although we employ a different model for the work source. Importantly, our result implicitly allows for infinite heat capacity in the heat bath. It should thus be viewed as being complementary to the results in Ref. <cit.>.
Let us model the work source by a system R with Hamiltonian
H^w_R=∑_x=-d/2^d/2Ex/d|Ex/d⟩⟨Ex/d|.
One can see this Hamiltonian as a d-dimensional harmonic oscillator with energies bounded between -E/2 and E/2. We are interested in the limit of d→∞. In this case R is similar to the model put forward in Ref. <cit.>, with the difference that we consider here a finite value of E. We enforce the condition that the battery acts as an energy reservoir and not as an entropy sink (which would make the task of cooling trivial) by assuming the work source to be in the state ρ^w_R=𝕀/d (we can also interpret this as the work-source being at temperature +∞). These assumptions on the work source are justified by the fact that it fulfills the second law of thermodynamics.
By this statement we mean the following. Suppose we want to use a non-equilibrium state ρ of some system S with Hamiltonian H_S to extract work and put it as average energy into the work-source. To do this we implement a (catalytic) thermal operation on the heat bath, system S and the work-source R. Then the increase of energy on the work-source (i.e., the work) Δ E_R is bounded by the non-equilibrium free energy of the system as
Δ E_R ≤Δ F_β(ρ,H_S).
This is shown in Appendix <ref>.
Now we will show that the third law can be obtained, in the sense that both E and d have to diverge in order to be able to use R to cool down a system to zero final temperature. Let us first recall that by Lemma <ref> a sufficient and necessary condition for cooling for such a resource is given by (<ref>). Furthermore, the vacancy of the work source is given by
V_β(ρ_R^w,H_R^w) = S(ω_β(H_R^w) ‖ 𝕀/d)
= Tr(ω_β(H_R^w)(log ω_β(H_R^w) - log(𝕀/d)))
= -β Tr(ω_β(H_R^w) H_R^w) + log(d) - log(Z_β(H_R^w))
≤ β E/2 + log(d) - log(Z_β(H_R^w)),
where the last step uses that all energies of H_R^w lie above -E/2. The partition function can be lower bounded as
Z_β(H_R^w) = ∑_x=-d/2^d/2 e^{-β Ex/d}
≥ e^{β E/2} + (d-1) e^{-β E/2}
= e^{β E/2} d(1/d + (d-1)/d e^{-β E}).
Hence, we find that
V_β(ρ_R^w,H_R^w) ≤ -log(1/d + (d-1)/d e^{-β E}).
Combined with (<ref>), this implies that a necessary condition for cooling to a state ρ_S is given by
V_β(ρ_S,H_S) ≤ -log(1/d + (d-1)/d e^{-β E}).
Most importantly, note that in order to obtain a state ρ_S that is close to a Gibbs state at zero temperature the r.h.s. of (<ref>) has to diverge. For this to be possible both E and d have to diverge, since
lim_d→∞V_β(ρ_R^w,H_R^w) ≤ β E,
lim_E→∞V_β(ρ_R^w,H_R^w) ≤ log (d).
This implies the unattainability principle in the sense that an infinitely dense spectrum with unbounded energy is needed for cooling to absolute zero.
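The bound (<ref>) is easy to explore numerically. The sketch below discretises the ladder as d levels spaced uniformly between -E/2 and E/2 (an illustrative choice) and compares the vacancy of the maximally mixed work source with the bound:

import numpy as np

def vacancy_work_source(E, d, beta):
    # V_beta of the maximally mixed state on a d-level ladder in [-E/2, E/2]
    energies = np.linspace(-E / 2, E / 2, d)
    w = np.exp(-beta * energies); w /= w.sum()
    return float(np.sum(w * (np.log(w) + np.log(d))))

beta = 1.0
for E, d in [(5.0, 10), (5.0, 1000), (50.0, 1000)]:
    V = vacancy_work_source(E, d, beta)
    bound = -np.log(1.0 / d + (d - 1) / d * np.exp(-beta * E))
    print(E, d, round(V, 3), round(bound, 3))   # V stays below the bound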
Before coming to the proof of Thm. <ref> and our conclusions, let us briefly comment on a different model of work and a possible source of confusion that might arise. A model of work known as a work-bit has been used in the literature on thermal operations <cit.>. In this model, it is assumed that the work-source is a two-level system with energy-gap W that undergoes a transition from the excited state |W⟩ to the ground-state |0⟩ to implement a transition on a system S. Using the results of Ref. <cit.> one can show that it is possible to cool a system to the ground-state in this model as long as W > β^{-1} log Z_β, where Z_β is the partition function of the system S. This means that there exists a (catalytic) thermal operation that implements cooling in the sense that
ω_β(H_S) ⊗ |W⟩⟨W| ↦ |0⟩⟨0| ⊗ |0⟩⟨0|.
At first this seems to be in conflict with our results. However, using the vacancy, it is easy to show that the above process is extremely unstable: it only works for pure initial states of the work-bit. Indeed, one can use condition (<ref>) to establish limits on cooling if the initial state of the work-bit is any full-rank state that approximates the excited state |W⟩ to arbitrary but finite precision. A simple calculation for an initial state of the work system ρ = (1-ϵ)|W⟩⟨W| + ϵ|0⟩⟨0| with Hamiltonian H_W = W|W⟩⟨W| yields the bound
V_β(ρ,H_W) ≤ -log(ϵ(1-ϵ)) ∀ W.
This implies that perfect cooling is impossible for any value of W –even diverging– if the initial state of the work-bit is a full-rank state.
§ PROOF OF THM. <REF>
We will now prove Thm. <ref>. Before we go into the details, let us first explain the general logic behind the result. It is clear that to obtain a single necessary and sufficient condition for cooling at low temperatures, we have to show that the infinite set of second laws in eq. (<ref>) collapses to a single condition.
The first important step in the proof is the following Lemma.
Let β>0 and a Hamiltonian H_S be given. There exists a critical inverse temperature β_cr such that for all β_S>β_cr and all 0<α<δ(β_S) we have
S”_α(ω_β_S(H_S)‖ω_β(H_S)) ≤ 0
and
S_∞(ω_β_S(H_S)‖ω_β(H_S)) ≤ log Z_β.
Here, the critical value δ(β_S) is given by
δ(β_S) = log(Z_β)/V_β(ω_β_S(H_S),H_S).
See appendix <ref>.
Using this result, we can now upper bound the Renyi-divergence on the target by its linear approximation at the origin in this parameter regime.
Since S'_0(ρ ‖ ω_β(H)) = S(ω_β(H) ‖ ρ) = V_β(ρ,H), we get
S_α(ω_β_S(H_S)‖ω_β(H_S)) ≤ V_β(ω_β_S(H_S),H_S) α, ∀α≤α_c,
where α_c denotes the range of concavity from Lemma <ref>. Secondly, for small enough target temperatures we also have S_∞(ω_β_S(H_S)‖ω_β(H_S)) ≤ V_β(ω_β_S(H_S),H_S) α_c. Since α↦ S_α is monotonously increasing, the second laws in eq. (<ref>) are hence also satisfied if
S_α(ρ_R ‖ ω_β(H_R)) > V_β(ω_β_S(H_S),H_S) α, ∀α≤α_c.
For small temperatures, we can further restrict the range of α to the interval [0,δ(β_S)), where δ(β_S) is given by:
δ(β_S) = S_∞(ω_β_S(H_S)‖ω_β(H_S))/V_β(ω_β_S(H_S),H_S).
The final step is now given by bounding the Renyi-divergence of the resource S_α(ρ_R || ω_β(H_R)). In particular, if we knew that it was convex (such as in the case of a thermal resource with E^R_β being convex), we could lower bound it by its linear approximation at the origin and obtain the necessary and sufficient condition (<ref>).
In the general case, S_α(ρ_R ‖ ω_β(H_R)) will not be convex. But we can use that we only have to check small values of α<δ(β_S) and simply Taylor expand S_α(ρ_R ‖ ω_β(H_R)). Using Taylor's theorem we then obtain
S_α(ρ_R ‖ ω_β(H_R)) ≥ V_β(ρ_R,H_R)α - k(β_S,β,ρ_R,H_R)α^2.
This yields as new sufficient condition
V_β(ρ_R,H_R)α - k(β_S,β,ρ_R,H_R)α^2 ≥ V_β(ω_β_S(H_S),H_S)α,
for all 0<α≤δ(β_S). The function k(β_S,β,ρ_R,H_R)≥ 0 is given by
k(β_S,β,ρ_R,H_R) = max{0, -min_{α≤δ(β_S)} S”_α(ρ_R ‖ ω_β(H_R))}.
We can now divide the sufficient condition by α and, since k(β_S,β,ρ_R,H_R)≥ 0, replace α by δ(β_S) to arrive at the final sufficient condition
V_β(ρ_R,H_R) - K(β_S,β,ρ_R,H_R,H_S) ≥ V_β(ω_β_S(H_S),H_S),
with K(β_S,β,ρ_R,H_R,H_S) = k(β_S,β,ρ_R,H_R)δ(β_S). This finishes the proof.
§ SUMMARY
In this work we have investigated the limits on low temperature cooling when arbitrary systems out of equilibrium are used as a resource. We provide sufficient and necessary conditions that establish novel upper and lower bounds on the amount of resources that are needed to cool a system close to its ground state. We found that the limitations are ruled by a single quantity, namely the vacancy. This is remarkable, since at higher temperatures there is an infinite family of “second laws” that need to be checked to determine whether a non-equilibrium state transition is possible.
We have only focused on the amount of non-equilibrium resources, as we assume access to an infinite heat bath, and we leave considerations about the time and complexity of the cooling protocol aside. These other kinds of resources have been explored in other complementary works on the third law <cit.>. It would be interesting to see if the vacancy plays a role in expressing the limitations on the size of the heat bath or any other resources that diverge when cooling a system to absolute zero. More particularly, it is an interesting question for future work to obtain the optimal sufficient scaling of the size of the heat bath and the potential “catalyst” τ that is needed to cool the system to the final low temperature <cit.>.
In this work, we have required the catalyst to be returned exactly. The necessary condition (<ref>) and the resulting quantitative unattainability principle is, however, stable when one requires instead that the vacancy of the catalyst only changes little (see appendix <ref> for a discussion of approximate catalysts). We leave it as an open problem to study how the sufficient condition behaves in such an approximate scenario.
The results of Sec. <ref> suggest that for a large class of physically relevant systems the third law can be expressed simply as the monotonicity of the vacancy. It would be of interest to specify more general assumptions on a many-body system under which this is the case. On the other hand, there exist systems for which the vacancy is not a sufficient condition. This opens the possibility of having families of resources that, although they are out of equilibrium, are useless for cooling. We leave this as an open question for future work.
Lastly, we note that in this work we have focused on the expenditure of non-equilibrium resources for low temperature cooling, which are precisely the resources that are employed in laser cooling <cit.>. We leave as an interesting open research direction to analyse protocols of laser cooling in the light of the bounds obtained here.
Acknowledgments.
We thank Lluis Masanes, Joseph M. Renes and Jens Eisert for interesting discussions and valuable feedback. This work has been supported by the ERC (TAQ), the DFG (GA 2184/2-1) and the Studienstiftung des Deutschen Volkes.
§ PROOF OF CONCAVITY OF RENYI DIVERGENCE FOR LOW TEMPERATURES
In this section we prove Lemma <ref> about the concavity of the Renyi-divergence at low temperatures. The Lemma holds for any Hamiltonian with pure point spectrum, a gap above the ground-state and the property that the partition sum exists for any positive temperature.
Let β>0 and a Hamiltonian H_S with ground-state degeneracy g_0 be given. There exists a critical inverse temperature β_ cr such that for all β_S>β_ cr and for all 0<α<δ(β_S) we have
α↦ S”_α(ω_β_Sω_β) ≤ 0.
and
S_∞ (ω_β_Sω_β) ≤log Z_β.
Here, the critical value δ(β_S) is given by
δ(β_S) = log (Z_β)/_β(ω_β_S(H_S),H_S) <1.
Let us first prove that the max-Renyi-divergence is upper bounded by the partition function at the environment temperature. Suppose that β_S>β and let us write
S_α(ω_β_S(H_S)‖ω_β(H_S)) = 1/(α-1) log(∑_i g_i e^{-α(β_S-β)E_i} (Z_β/Z_β_S)^α e^{-β E_i}/Z_β),
where the E_i denote the different energies of H_S, with degeneracies g_i. Assuming w.l.o.g. E_0=0, we write this as
S_α(ω_β_S(H_S)‖ω_β(H_S)) = α/(α-1) log(Z_β/Z_β_S) + 1/(α-1) log(g_0/Z_β + ∑_i>0 g_i e^{-α(β_S-β)E_i} e^{-β E_i}/Z_β).
It is now obvious that in the limit we obtain
S_∞(ω_β_S(H_S)‖ω_β(H_S)) = lim_α→∞ S_α(ω_β_S(H_S)‖ω_β(H_S)) = log(Z_β) - log(Z_β_S) ≤ log Z_β,
where the last inequality uses that Z_β_S ≥ g_0 ≥ 1 since E_0=0.
As a second step let us find the condition for which δ(β_S)<1. To do that we express the vacancy as
V_β(ω_β_S(H_S),H_S) = β_S E_β - S_β + log Z_β_S,
where we write S_β := S(ω_β(H_S)). We thus need that
β_S E_β - S(ω_β(H_S)) > log Z_β - log Z_β_S.
Relaxing to the sufficient criterion β_S E_β - S_β > log Z_β = S_β - β E_β we thus obtain
β_S > (2S_β - β E_β)/E_β.
Let us now turn to the concavity. We will use the representation of S”_α proven in the next section, which is given by
S”_α(ω_β_S‖ω_β) = 2/(1-α)^3 (log Z_β_S - log Z_β̃(α) + (β_S-β̃(α))E_β̃(α) - (β_S-β̃(α))^2/2 Var(H)_β̃(α)),
where β̃(α)=β(1-α)+αβ_S. Since we are only interested in α<δ(β_S)<1, we have β≤β̃(α) < β_S. We therefore have to show that the terms in the parenthesis are negative. Let us use that the average energy is monotonically decreasing in β and that Z_β̃(α)>1 to bound these terms as
parenthesis ≤ log Z_β_S + (β_S-β̃(α))E_β̃(α) - (β_S-β̃(α))^2/2 Var(H)_β̃(α)
≤ log Z_β_S + (β_S-β)E_β - (β_S-β̃(α))^2/2 Var(H)_β̃(α)
≤ log(d) + (β_S-β)E_max - (β_S-β̃(α))^2/2 min_{x∈ [β,β̃(α)]} Var(H)_x.
Now we bound β̃(α). To do that we use that β̃(α)≤β̃(δ(β_S))=: β̃^*(β_S). It is clear that if we can bound β̃^*(β_S) by a constant, the terms in the parenthesis become negative for some β_S since the second order term in β_S dominates. To see that β̃^*(β_S) is indeed upper bounded by a constant, we again write the vacancy as
V_β(ω_β_S(H_S),H_S) = -S(ω_β) + β_S E_β + log Z_β_S
to obtain
β^* := lim_β_S→∞ β̃^*(β_S) = lim_β_S→∞ β(1-δ(β_S)) + δ(β_S)β_S
= β + lim_β_S→∞ (log Z_β) β_S/(β_S E_β + log Z_β_S - S(ω_β(H_S))) = β + log Z_β/E_β.
This finishes the proof that β_ cr as claimed in the Lemma exists. We also note that the function β̃^*(β_S) is monotonically decreasing for all β_S such that β̃^*(β_S)<1.
Finally, note that eq. (<ref>) allows to give upper bounds on β_ cr once one has lower bounds on the energy variance for inverse temperatures in the interval [β,β^*].
§ RENYI DIVERGENCE BETWEEN THERMAL STATES
Here, we will specialize to the situation where the resource states are thermal, with inverse temperature β_R. We now calculate the Renyi divergence for α<1 in this case. We first write:
S_α(ω_β_R ‖ ω_β) = -α/(α-1) log Z_β_R + log Z_β + 1/(α-1) log Tr(e^{-β_R Hα} e^{-β H(1-α)})
= -α/(α-1) log Z_β_R + log Z_β + 1/(α-1) log Z_{(β_R-β)α + β}
= -(α-1)/(α-1) log Z_β_R + log Z_β + 1/(α-1) log(Z_{(β_R-β)α + β}/Z_β_R)
= log(Z_β/Z_β_R) + 1/(α-1) log(Z_{(β_R-β)α + β}/Z_β_R).
We will now show that the function is convex provided β_R<β and that the function x↦ E_β_R+x is convex for 0≤ x ≤β-β_R. For the second derivative (with β̃=(β_R-β)α + β) we obtain:
S_α(ω_β_R ‖ ω_β)” = 2/(1-α)^3 log Z_β_R - 2/(1-α)^3 log Z_β̃ - 2/(1-α)^2 ∂_α log Z_β̃ + 1/(α-1) ∂^2_α log Z_β̃
= 2/(1-α)^3 log Z_β_R - 2/(1-α)^3 log Z_β̃ - 2/(1-α)^2 (β-β_R)E_β̃ - 1/(1-α) (β-β_R)^2 Var(H)_β̃
= 2/(1-α)^3 [log Z_β_R - log Z_β̃ - (1-α)(β-β_R)E_β̃ - (1-α)^2/2 (β-β_R)^2 Var(H)_β̃].
Utilizing (1-α)(β-β_R) = β̃ - β_R, we can write this as
S_α (ω_β_R || ω_β)” = 2/(1-α)^3[log Z_β_R - log Z_β̃ - (β̃-β_R)E_β̃ - (β̃-β_R)^2/2Var(H)_β̃].
Here, we have introduced the average energy E_β and the variance Var(H)_β = ⟨ H^2⟩_β - ⟨ H⟩_β^2, which fulfill ∂_x E_x = -Var(H)_x. With these expressions at hand, we will now show Thm. <ref> and another result about the convexity of Renyi divergences for sufficiently large reference inverse temperature β.
§.§ Proof of Thm. <ref>
We need to show that the r.h.s. of (<ref>) is positive under the premises that x ↦ E_x is convex on x∈[β_R,β], that β_R ≤ β, and that α<1. The last condition on α makes the prefactor 2/(1-α)^3 positive, so we need to show that
log Z_β_R - log Z_β̃≥ (β̃-β_R)E_β̃ + (β̃-β_R)^2/2Var(H)_β̃.
We use an integral representation of the l.h.s:
log Z_β_R - log Z_β̃ = -∫^β̃-β_R_0d/d xlog Z_β_R+x dx = ∫^β̃-β_R_0 E_β_R+xd x.
Hence, we conclude that what needs to be shown is
∫^β̃-β_R_0 E_β_R+xd x ≥ (β̃-β_R)E_β̃ + (β̃-β_R)^2/2Var(H)_β̃.
Whether this inequality is satisfied, and thus S_α(ρ_R‖ω_β(H)) is convex, is entirely determined by the function x↦ E_x. This is due to the fact that the derivative of E_x is given by -Var(H)_x, so that the right hand side can be seen as the integral of the tangent approximation of E_x at β̃. A geometrical interpretation is provided in Fig. <ref>, showing that the inequality is trivially satisfied when E_x is convex. This finishes the proof.
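The tangent-line picture can be checked numerically: for a system with convex E_x the integral of E over [β_R, β̃] dominates the integral of its tangent at β̃. A small sketch with illustrative parameters of our choosing:

import numpy as np

def E_and_var(energies, x):
    # thermal energy and energy variance at inverse temperature x
    e = np.asarray(energies, float)
    w = np.exp(-x * e); w /= w.sum()
    E = float(np.sum(w * e))
    return E, float(np.sum(w * e**2) - E**2)

energies = np.arange(4) * 1.0        # equidistant levels, so E_x is convex
beta_R, beta_t = 0.3, 1.2            # beta_R < beta_tilde
xs = np.linspace(0.0, beta_t - beta_R, 2001)
vals = np.array([E_and_var(energies, beta_R + x)[0] for x in xs])
lhs = float(np.sum((vals[:-1] + vals[1:]) / 2) * (xs[1] - xs[0]))  # trapezoid rule
E_t, var_t = E_and_var(energies, beta_t)
rhs = (beta_t - beta_R) * E_t + (beta_t - beta_R)**2 / 2 * var_t
print(lhs >= rhs, lhs, rhs)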
As a final remark, although not useful to obtain bounds on the third law, we note that a completely analogous argument implies that if α<1, E_x is convex but in contrast to the previous case β_R≥β, then it is fulfilled that
∫^β̃-β_R_0 E_β_R+xd x ≤ (β̃-β_R)E_β̃ + (β̃-β_R)^2/2Var(H)_β̃.
This shows that in the case of resources colder than the bath, the function S_α(ρ_Rω_β(H)) is concave.
§.§ Very cold heat baths
We will now show that in the case of very cold heat baths (very large β) we also have that S_α(ω_β_Rω_β) is convex, and hence (<ref>) becomes sufficient and necessary.
For any resource of the form (ω_β_R(H_R),H_R), given a fixed β_R there exist a sufficiently large value of β so that (<ref>) is a sufficient and necessary condition for low temperature cooling.
We only give a sketch and show that S_α(ω_β_R‖ω_β) is convex for values of α<α_c, where α_c<1 is chosen arbitrarily. Recalling Eq. (<ref>), we then need to show that
∫^β̃-β_R_0 E_β_R+xd x ≥ (β̃-β_R)E_β̃ + (β̃-β_R)^2/2Var(H)_β̃.
Note that in the limit of large β the scaling of the r.h.s. of (<ref>) is such that β̃-β_R=(1-α)(β-β_R) scales proportionally to β, while E_β̃ and Var(H)_β̃ scale as e^-kβ. Therefore, the r.h.s. of (<ref>) approaches zero as β→∞ whereas the l.h.s. grows monotonically with β. Hence, (<ref>) is fulfilled which concludes the proof.
§ EQUIDISTANT LEVELS
Here, we consider the particular case of a system with M+1 equidistant levels and show that the function E_β is convex. The energy-gap between subsequent levels is Δ and we set the ground-state energy to zero. The energy E_β then takes the form
E_β = Δ/(e^{βΔ}-1) - (M+1)Δ/(e^{(M+1)Δβ}-1).
In particular, for M→∞ we obtain results for the harmonic oscillator and for M=1 for a qubit. We have to prove that the second derivative is positive, i.e.,
E”_β = Δ^3/8 [sinh(βΔ)/sinh(βΔ/2)^4 - (M+1)^3 sinh((M+1)βΔ)/sinh((M+1)βΔ/2)^4] = Δ^3/8 [f(β,1) - f(β,M+1)] ≥ 0,
where f(β,γ) := γ^3 sinh(γβΔ)/sinh(γβΔ/2)^4.
For M=0 this is clearly true. We will set M+1=:γ and show that ∂_γ f(β,γ) ≤ 0. We have
∂_γ f(β,γ) = -γ^2/sinh(γβΔ/2)^4 [γβΔ(2+cosh(γβΔ)) - 3sinh(γβΔ)].
In the following, set γβΔ=x. Due to the negative pre-factor, we are done if we can show
x(2+cosh(x)) - 3sinh(x) ≥ 0.
We will show this using a Taylor-expansion:
2x + x cosh(x) - 3sinh(x) = 2x + ∑_n=0^∞ x^{2n+1}(1/(2n)! - 3/(2n+1)!)
= 2x + ∑_n=0^∞ x^{2n+1} ((2n+1)-3)/(2n+1)!
= 2x - 2x + ∑_n=1^∞ x^{2n+1} (2n-2)/(2n+1)!
≥ 0.
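The same inequality can also be confirmed numerically over a large range of x (a quick sanity check of the series argument, with an illustrative grid):

import numpy as np

x = np.linspace(0.0, 50.0, 100001)
assert np.all(x * (2.0 + np.cosh(x)) - 3.0 * np.sinh(x) >= -1e-9)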
§ THE WORK SOURCE MODEL
Here we show that a work source of the form (ρ_R^w, H_R^w) as given in the main text fulfills the second law of thermodynamics. Let us consider an arbitrary system (ρ_S,H_S) and catalytic thermal operations on SR. We will show that the maximum amount of mean energy that one can store in the work source is bounded by the initial non-equilibrium free energy of S. Let us recall, see for instance Ref. <cit.>, that the free energy difference is given by
Δ F_β(ρ,H) := 1/β S(ρ‖ω_β(H)) = F(ρ,H) - F(ω_β(H),H),
where F(ρ,H) = Tr(ρ H) - 1/β S(ρ) is the free energy.
The protocol of work extraction is a transition of the form
ρ_SR^i := ρ_S ⊗ ρ_R^w → ρ_SR^f = E(ρ_SR^i),
where E is any channel that has the Gibbs state as a fixed point. Monotonicity of Δ F_β under channels of the form E imply that
Δ F^f_β := Δ F_β(ρ_SR^f,H_S+ H_R) ≤Δ F_β(ρ_SR^i,H_S+ H_R) := Δ F_β^i.
Combining this last equation with super-additivity and additivity of the relative entropy one can easily find, following a similar reasoning as in Ref. <cit.>, that
Δ E_R ≤Δ F_β (ρ_S,H_S)
where Δ E_R = Tr((𝕀⊗ H_R^w) ρ_SR^f) - Tr((𝕀⊗ H_R^w) ρ_SR^i) is the mean energy stored in the work source R.
§ ARBITRARY TARGET-STATES CLOSE TO THE GROUND-STATE
In this section we prove a result similar to our general sufficient condition for cooling, but where we consider target states of the form
ρ_ϵ = (1-ϵ) |0⟩⟨0| + ϵρ^⊥, ϵ≪ 1.
where ρ^⊥ is a density matrix which has full rank on the subspace orthogonal to the ground-state |0⟩ and commutes with the Hamiltonian H_S.
For every choice of β, H_S and ρ^⊥ as above, there is a critical ϵ_cr>0 such that for any ϵ < ϵ_cr the condition
V_β(ρ_R,H_R) + K̃(ϵ,β,ρ_R,H_R,H_S,ρ^⊥) ≥ V_β(ρ_ϵ,H_S)
is sufficient for cooling. The function K̃ has the property K̃(ϵ,β,ρ_R,H_R,H_S,ρ^⊥) → 0 as ϵ→0 for any fixed β, H_R, ρ_R, ρ^⊥ and H_S.
The proof of this theorem is essentially identical to the one of Theorem <ref>. The only difference is that instead of Lemma <ref>, we use the following Lemma:
Let H_S be a d-dimensional Hamiltonian with ground-state |0⟩ and H_S|0⟩=0. Let β>0 be fixed and consider the state
ρ_ϵ = (1-ϵ) |0⟩⟨0| + ϵρ^⊥,
with rank(ρ^⊥)=d-1, ρ^⊥|0⟩=0 and [ρ^⊥,H_S]=0. Then there exists an ϵ_cr>0 such that for all α<δ(ϵ)
d^2/dα^2 S_α(ρ_ϵ||ω_β(H_S)) <0, ∀ϵ<ϵ_cr.
Here, δ(ϵ) fulfills
δ(ϵ) = log Z_β/_β(ρ_ϵ,H_S) < 1, ∀ϵ<ϵ_cr.
We will now prove this Lemma. Let us then express the Renyi-divergence as
S_α(ρ_ϵ‖ω_β(H_S)) = 1/(α-1) log((1-ϵ)^α + ϵ^α Tr((ρ^⊥)^α e^{-β H_S(1-α)})) + log(Z_S)
=: 1/(α-1) log(f_ϵ(α)) + log(Z_S).
As is apparent from the expression, in the following we will often encounter the functions
f̃(α) := Tr((ρ^⊥)^α e^{-β H(1-α)}),
f_ϵ(α) := Tr(ρ_ϵ^α e^{-β H(1-α)}) = (1-ϵ)^α + ϵ^α f̃(α).
It is useful to remember from the main text that ρ^⊥ is a normalized quantum state that commutes with H and has rank d-1.
In the following we will also often write S_α instead of S_α(ρ_ϵ||ω_β(H_S)) and simply f_ϵ or f_ϵ,α instead of f_ϵ(α) to simplify the notation (similarly for f̃). While f_ϵ and f̃ are structurally essentially the same, it is important to keep in mind that only f_ϵ, and not f̃, depends on ϵ.
We now have to prove that S_α is concave for small enough ϵ, i.e., have to show that there exists a ϵ_cr>0 such that its second derivative is negative for ϵ < ϵ_cr.
The second derivative of S_α can be computed straightforwardly and gives
S”_α = -2/(1-α)^3 log f_ϵ - 2/(1-α)^2 f_ϵ'/f_ϵ + 1/(1-α)((f_ϵ'/f_ϵ)^2 - f_ϵ”/f_ϵ).
To go on we need to establish a few properties of functions like f_ϵ and f̃. We will collect these properties in a series of Lemmata.
Let ρ be a quantum state and σ a positive semi-definite operator with [ρ,σ]=0. Define f(α) := Tr(ρ^α σ^{1-α}).
Then
(f')^2 - f” f ≤ 0, 0<α<1.
A simple calculation shows that
f'(α) = Tr(ρ^α (log(ρ) - log(σ)) σ^{1-α}),
f”(α) = Tr(ρ^α (log(ρ) - log(σ))^2 σ^{1-α}).
We now use the Cauchy-Schwarz inequality |Tr(A^† Bρ)|^2 ≤ Tr(A^† A ρ)Tr(B^† Bρ) with A = ρ^{-α/2}(log(ρ) - log(σ))σ^{α/2} and B = ρ^{-α/2}σ^{α/2} to obtain (note the change from α to 1-α)
f'(1-α)^2 = Tr(ρ^{-α/2}(log(ρ) - log(σ))σ^{α/2} ρ^{-α/2}σ^{α/2} ρ)^2
≤ Tr(ρ^{-α}(log(ρ) - log(σ))^2 σ^α ρ) Tr(ρ^{-α}σ^α ρ)
= Tr(ρ^{1-α}(log(ρ) - log(σ))^2 σ^α) Tr(ρ^{1-α}σ^α)
= f”(1-α) f(1-α).
Let H be a Hamiltonian with ground-state energy E_0=0 and let σ be a quantum state with [σ,H]=0. Then
f(α) := Tr(σ^α e^{-β H(1-α)}) ≤ Z, 0≤α≤ 1.
From the calculation of the previous Lemma we see that the second derivative of f is the trace of a product of positive commuting operators. Hence it is always positive and therefore f is convex. But since H≥ 0 we have f(0) = Z ≥ 1 = f(1) and from convexity we get f(α) ≤ Z.
Note that due to our assumption about the groundstate energy we have Z_S≥ 1 and from the above Lemma we know f_ϵ≤ Z_S. We will now show that for every 0 < α_c' < 1, we have 1≤ f_ϵ(α)≤ Z_S if ϵ is small enough and α < α_c'.
For any 0< α_c'< 1 there exists a critical ϵ'_cr>0, such that for all ϵ < ϵ'_cr we have
f_ϵ(α) := Tr(ρ_ϵ^α e^{-β H(1-α)}) ≥ 1, 0≤α < α_c'.
Assume some 0 < α < α_c'. Using that f̃ is independent of ϵ and positive, we can lower bound it by some f̃_min>0. Also, (1-ϵ)^α is monotonically decreasing with α for 0<α<1. We therefore get the lower bound
f_ϵ(α) = (1-ϵ)^α + ϵ^α f̃(α) ≥ (1-ϵ) + ϵ^α f̃_min = 1 + ϵ(ϵ^{α-1} f̃_min - 1).
Thus for ϵ < ϵ'_cr(α_c') := (f̃_min)^{1/(1-α_c')} we have f_ϵ(α) ≥ 1.
Due to the preceding Lemma, we will in the following take the (somewhat arbitrary) choice α_c' = 1/3 and only consider α < α_c' as well as values of ϵ < ϵ_cr'(α_c'). Since later we are anyway only interested in arbitrarily small values of ϵ and α≤δ(ϵ), this is no obstruction.
For all α < α_c' and ϵ < ϵ_cr' we have f_ϵ'(α) ≤ 0.
Follows from f_ϵ(α) ≥ 1 for all α≤α_c' together with the facts that f_ϵ(1)=1,f_ϵ(0)=Z_S and that f_ϵ is convex.
We are now in position to go on with the proof of the asymptotic concavity. First, we will further restrict the values of α by choosing arbitrarily α_c < α_c'=1/3 and restricting to α≤α_c. The reason for this will become clear later in the proof.
Considering eq. (<ref>) and using Z_S ≥ f_ϵ≥ 1 as well as Lemma <ref>, we can upper now bound the second derivative as
S”_α ≤ -2/(1-α)^2 f_ϵ'/f_ϵ + 1/((1-α)f_ϵ^2)((f_ϵ')^2 - f_ϵ”f_ϵ)
≤ -2/(1-α_c)^2 f_ϵ'/f_ϵ + 1/Z_S^2 ((f_ϵ')^2 - f_ϵ”f_ϵ).
One might be tempted to use Lemma <ref> and simply upper bound the second term by zero, but that bound would be too weak, since the first term diverges as log(1/ϵ). We will therefore now have to do a more detailed calculation. We first compute the derivatives of f_ϵ:
f_ϵ,α' = (1-ϵ)^αlog(1-ϵ) + log(ϵ)ϵ^αf̃_α + ϵ^αf̃'_α,
(f_ϵ,α')^2 = (1-ϵ)^2αlog(1-ϵ)^2 + log(ϵ)^2ϵ^2αf̃^2_α + ϵ^2α (f̃'_α)^2 + 2(1-ϵ)^αlog(1-ϵ)log(ϵ)ϵ^αf̃_α
+ 2(1-ϵ)^αlog(1-ϵ)ϵ^αf̃'_α + 2 log(ϵ)ϵ^2αf̃_αf̃'_α.
f_ϵ,α” = (1-ϵ)^αlog(1-ϵ)^2 + log(ϵ)^2ϵ^αf̃_α+2log(ϵ)ϵ^αf̃'_α + ϵ^αf̃”_α.
These give
(f_ϵ,α')^2 - f_ϵ,α”ϵ^αf̃_α = (1-ϵ)^2αlog(1-ϵ)^2 + log(ϵ)^2ϵ^2αf̃^2_α + ϵ^2α (f̃'_α)^2 + 2(1-ϵ)^αlog(1-ϵ)log(ϵ)ϵ^αf̃_α
+ 2(1-ϵ)^αlog(1-ϵ)ϵ^αf̃'_α + 2 log(ϵ)ϵ^2αf̃_αf̃'_α
- ϵ^αf̃_α((1-ϵ)^αlog(1-ϵ)^2 + log(ϵ)^2ϵ^αf̃_α+2log(ϵ)ϵ^αf̃'_α + ϵ^αf̃”_α)
= log(1-ϵ)^2 (1-ϵ)^α((1-ϵ)^α -ϵ^αf̃_α) + ϵ^2α (f̃'_α)^2
+ 2(1-ϵ)^αlog(1-ϵ)log(ϵ)ϵ^αf̃_α + 2(1-ϵ)^αlog(1-ϵ)ϵ^αf̃'_α - ϵ^2αf̃”_αf̃_α
= log(1-ϵ)^2 (1-ϵ)^α((1-ϵ)^α -ϵ^αf̃_α) + 2(1-ϵ)^αlog(1-ϵ)ϵ^α(log(ϵ) f̃_α + ϵ^αf̃'_α)
+ ϵ^2α((f̃'_α)^2- f̃”_αf̃_α).
Hence we have
(f_ϵ,α')^2 - f_ϵ,α” f_α = -(1-ϵ)^α f_ϵ,α” + log(1-ϵ)^2 (1-ϵ)^α((1-ϵ)^α -ϵ^αf̃_α)
+ 2(1-ϵ)^αlog(1-ϵ)ϵ^α(log(ϵ) f̃_α + ϵ^αf̃'_α) + ϵ^2α((f̃'_α)^2- f̃”_αf̃_α).
Note in particular that the last term is negative semi-definite due to Lemma <ref>.
Let us now also write 0< 1/k_c := (1-α_c)^2 < 1. Inserting the previous result into eq. (<ref>) we then obtain
S”_α ≤ - 2 k_c f_ϵ,α' + 1/Z^2((f_ϵ,α')^2-f_ϵ,α” f_α)
≤ -2 k_c f_ϵ,α' + 1/Z^2( -(1-ϵ)^α f_ϵ,α”
+ log(1-ϵ)^2 (1-ϵ)^α((1-ϵ)^α -ϵ^αf̃_α) .
.+ 2(1-ϵ)^αlog(1-ϵ)ϵ^α(log(ϵ) f̃_α
+ ϵ^αf̃'_α) )
≤ -2 k_c f_ϵ,α' + 1/Z^2( -(1-ϵ) f_ϵ,α”
+ log(1-ϵ)^2 (1 -(1-ϵ)ϵf̃_α) .
.+ 2log(1-ϵ)(log(ϵ) f̃_α
+ ϵ^2α(1-ϵ)^αf̃'_α) )
We now lower bound f_ϵ' and f_ϵ” as
f_ϵ,α' ≥log(1-ϵ) + log(ϵ)f̃_α + f̃_α'
f_ϵ,α” ≥ (1-ϵ)log(1-ϵ)^2 + log(ϵ)^2 ϵ^α_cf̃_α + 2log(ϵ)ϵ^αf̃'_α + ϵf̃”_α.
Note that we cannot easily bound the terms involving f̃'_α since we do not know the sign of f̃'_α. However, we emphasize again that f̃ is independent of ϵ and can hence essentially be treated as constant. Putting in the bounds then yields
S”_α ≤ -2k_c (log(1-ϵ) + log(ϵ)f̃_α + f̃_α') -1-ϵ/Z^2((1-ϵ)log(1-ϵ)^2 + log(ϵ)^2 ϵ^α_cf̃_α + 2log(ϵ)ϵ^αf̃'_α + ϵf̃”_α)
+ 1/Z^2( log(1-ϵ)^2 (1 -(1-ϵ)ϵf̃_α) + 2log(1-ϵ)(log(ϵ) f̃_α
+ ϵ^2α(1-ϵ)^αf̃'_α) )
= log(ϵ)f̃_α(2/Z^2log(1-ϵ)-1-ϵ/Z^22ϵ^αf̃'_α/f̃_α -1-ϵ/Z^2log(ϵ)ϵ^α_c - 2/(1-α_c)^2)
+ log(1-ϵ)( - 2/(1-α_c)^2 - (1-ϵ)^2/Z^2log(1-ϵ) + 1-(1-ϵ)ϵf̃_α/Z^2log(1-ϵ) + 2/Z^2ϵ^2α(1-ϵ)^αf̃'_α)
-2/(1-α_c)^2f̃'_α - (1-ϵ)ϵ/Z^2f̃”_α
≤log(ϵ)f̃_α(2/Z^2log(1-ϵ)-1-ϵ/Z^22ϵ^αf̃'_α/f̃_α -1-ϵ/Z^2log(ϵ)ϵ^α_c - 2/(1-α_c)^2) + M(ϵ, H,β,ρ^⊥) - K(α_c,H,β, ρ^⊥),
where M goes to zero as ϵ goes to zero and K is independent of ϵ. Also note that M is bounded and independent of α (due to the boundedness of f̃_α and its derivatives).
Let us define m(α_c) = max_α≤α_cf̃'_α/ f̃_α. Since α_c < 1/2 we can simplify the bound to
S”_α ≤log(ϵ)f̃_α/Z^2(2 log(1-ϵ)-2(1-ϵ)ϵ^α m(α_c) -(1-ϵ)log(ϵ)ϵ^α_c - 8 Z^2 ) + M(ϵ, H,β,ρ^⊥) - K(α_c,H,β, ρ^⊥).
Clearly S”_α can be made negative by taking ϵ and α_c arbitrarily small since the dominant term in the bracket goes as -log(ϵ). However, since our objective is to upper bound S_α by V_β(ρ_ϵ,H_S) α for all α≤α_c, we also need that V_β(ρ_ϵ,H_S) α_c ≥log Z_S and hence α_c ≥log(Z_S)/ V_β(ρ_ϵ,H_S). Hence we choose α_c = δ(ϵ) = log Z_S/ V_β(ρ_ϵ,H_S) and hope for the best. The vacancy is given by:
V_β(ρ_ϵ,H_S) = - log(1-ϵ)1/Z_S + Z_S-1/Z_Slog(1/ϵ) + C_1(ρ^⊥,β, H_S),
where C_1 does not depend on ϵ. Hence
lim_ϵ→ 0ϵ^δ(ϵ) = 1/Z_S^Z_S/Z_S-1 and lim_ϵ→ 0 - (1-ϵ)log(ϵ)ϵ^δ(ϵ) = +∞.
Since the other terms in the first bracket in S”_α go to zero as ϵ→ 0 and K is independent of ϵ, we can thus find a finite ϵ_cr such that
S”_α≤ 0, α≤δ(ϵ_cr).
This finishes the proof.
§ EXACTLY CONSERVED CATALYSTS
In this section we analyze the scenario where the catalyst always has to be returned without any error. That is, the cooling protocol considers a process like the one described in Sec. <ref> but taking ϵ=0. First, note that the vacancy is automatically also a monotone in this setting, since we are considering a subset of free operations. Hence, the inequality
V_β(ρ_R,H_R) ≥ V_β(ρ_S,H_S)
is also a necessary condition for this set of free operations.
In the following, we will consider for simplicity only the case where the target system is thermal, ρ_S = ω_β_S(H_S).
We will now prove the following theorem, which provides a sufficient condition for cooling and which coincides with that in our general Theorem <ref> up to a multiplicative factor.
Assume thermal operations with exact catalysts. Then for every choice of β and H_S there is a critical β_cr>0 such that for any β_S > β_ cr and full-rank resource (ρ_R,H_R) (diagonal in the energy eigenbasis) the condition
V_β(ρ_R,H_R) - K(β_S,β,ρ_R,H_R,H_S) ≥ r(β,H) V_β(ω_β_S(H_S),H_S)
is sufficient for cooling. The positive semidefinite function K is identical to that in Theorem <ref> and the constant r(β,H_S) is independent of ρ_R,H_R and β_S and given by
r(β,H_S) = 1 + 2(E_max - E_β)/E_β,
where E_max is the largest eigenvalue of H_S and we assume that the ground-state energy of H_S is zero.
Before going into the proof, let us briefly discuss the implications that the correction given by r(β,H_S) has for the sufficient condition of Thm. <ref>. This is best explained by looking at the scaling results of Sec. <ref>. There we showed that the sufficient condition of Thm. <ref> provides an upper bound on the number of copies of a resource that are sufficient to implement a cooling process, as given by n^ suff in (<ref>). The sufficient condition laid out in Thm. <ref> simply implies that we need r times more systems to implement the cooling protocol, where r=r(β,H_S). Note importantly that r does not depend on the final temperature, so employing r(β,H_S) × n^ suff systems is always sufficient for cooling. We emphasize that we believe that the factor r(β,H_S) can be made much closer to 1 by more elaborate proof techniques, but leave this as an open problem.
It was shown in Ref. <cit.> that a transition ρ→ρ' between two diagonal states is possible with exact preservation of the catalyst if and only if the Renyi-divergences
S_α(ρ ‖ ω_β(H)) = sign(α)/(α-1) log Tr(ρ^α ω_β(H)^{1-α})
do not increase for all α∈ (-∞,+∞). The sufficient condition in Theorem <ref> covers all α≥ 0. We thus have to check that we can fulfill all the inequalities for α<0 using the multiplicative factor r(β,H_S). To do this we provide new lower- and upper-bounds for the Renyi-divergences for negative α. We begin with a lower-bound. Consider any state ρ with eigenvalues p_i in the energy-eigenbasis. Then we have
S_-|α|(ρ ‖ ω_β(H)) = 1/(|α|+1) log(∑_i p_i^{-|α|} w_i^{1+|α|}),
where w_i = e^{-β E_i}/Z_β are the eigenvalues of the thermal state. Using concavity of the logarithm we can bound this as
S_-|α|(ρ ‖ ω_β(H)) ≥ 1/(|α|+1) ∑_i w_i log(p_i^{-|α|} w_i^{|α|}) = |α|/(|α|+1) ∑_i (w_i log(w_i) - w_i log(p_i))
= |α|/(|α|+1) V_β(ρ,H).
We can thus lower bound all the Renyi-divergences for negative α by a simple function. Later, we will apply this bound to the resource.
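This lower bound can be checked numerically on random diagonal states; the sketch below uses an arbitrary four-level Hamiltonian as a stand-in, not data from the paper.

import numpy as np

# Numerical test of S_{-|a|}(rho || omega_beta(H)) >= |a|/(|a|+1) * V_beta(rho, H)
# for diagonal states; Hamiltonian, beta and the sampled states are test values.
rng = np.random.default_rng(0)
beta = 1.0
E = np.array([0.0, 0.7, 1.3, 2.1])
w = np.exp(-beta * E)
w /= w.sum()                                      # thermal eigenvalues

def renyi(p, a):
    # S_a(p || w) = sign(a)/(a-1) * log sum_i p_i^a w_i^(1-a)
    return np.sign(a) / (a - 1.0) * np.log(np.sum(p**a * w**(1.0 - a)))

def vacancy(p):
    return float(np.sum(w * (np.log(w) - np.log(p))))

for _ in range(5):
    p = rng.dirichlet(np.ones(len(E)))            # random full-rank diagonal state
    for a in (-0.3, -1.0, -4.0):
        assert renyi(p, a) >= abs(a) / (abs(a) + 1.0) * vacancy(p) - 1e-12
print("lower bound holds on all sampled states")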
We will now derive a similar upper bound for the target system, i.e., assuming a system in a thermal state.
First, we rewrite the Renyi-divergences as
S_-|α|(ω_β_S(H_S) ‖ ω_β(H_S)) = |α|/(|α|+1) log(Z_β_S) - log(Z_β) + 1/(1+|α|) log Tr(e^{(β_S-β)|α|H_S} e^{-β H_S}),
which can be verified by direct calculation. We will now use the log-sum inequality. It states that for any two sets of d non-negative numbers {a_i} and {b_i} we have
log(a/b) ≤ ∑_i (a_i/a) log(a_i/b_i),
with a=∑_i a_i and b=∑_i b_i. Let E_i be the energy-eigenvalues of H_S. Then we set
a_i = e^{(β_S-β)|α| E_i} e^{-β E_i} = e^{-β̃(α)E_i}, b_i = e^{-β E_i}/Z_β,
where β̃(α):= β - (β_S -β)|α|. Using the log-sum inequality we then obtain
S_-|α|(ω_β_S(H_S) ‖ ω_β(H_S)) ≤ |α|/(|α|+1) log(Z_β_S) - log(Z_β) + 1/(1+|α|) ∑_i e^{-β̃(α)E_i}/Z_β̃(α) log(e^{(β_S-β)|α| E_i} Z_β)
= |α|/(|α|+1)(log(Z_β_S) - log(Z_β)) + |α|/(|α|+1)(β_S-β) ∑_i e^{-β̃(α)E_i}/Z_β̃(α) E_i.
Denoting by E_max the maximum energy, we then get the bound
S_-|α|(ω_β_S(H_S) ‖ ω_β(H_S)) ≤ |α|/(|α|+1)(log(Z_β_S) - log(Z_β) + (β_S-β) E_max).
Let us recall from Eq. (<ref>), that for thermal states, the vacancy can be expressed as a function of the non-equilibrium free energy as
V_β(ω_β_S(H_S),H_S) = β_S Δ F_β_S(ω_β(H),H) = β_S E_β - S_β + log(Z_β_S),
where E_β and S_β denote the thermal energy expectation value and von Neumann entropy at inverse temperature β. Using this together with -logZ_β = β E_β - S_β, we can rewrite the upper bound on the Renyi-divergences as
S_-|α|(ω_β_S(H_S) ‖ ω_β(H_S)) ≤ |α|/(|α|+1)( V_β(ω_β_S(H_S),H_S)[1+(E_max-E_β)/Δ F_β_S(ω_β(H),H)] - β(E_max-E_β))
≤ |α|/(|α|+1) V_β(ω_β_S(H_S),H_S)[1+(E_max-E_β)/Δ F_β_S(ω_β(H),H)].
Since we have
Δ F_β_S(ω_β(H),H) = E_β - E_β_S - 1/β_S(S_β - S_β_S),
it is always possible to find a critical inverse temperature β'_S such that Δ F_β_S(ω_β(H),H)≥ E_β/2 for all β_S>β'_S. Then, for all β_S larger than this critical temperature we can bound the Renyi-divergences as
S_-|α|(ω_β_S(H_S) ‖ ω_β(H_S)) ≤ |α|/(|α|+1) V_β(ω_β_S(H_S),H_S)[1+2(E_max-E_β)/E_β] =: |α|/(|α|+1) V_β(ω_β_S(H_S),H_S) r(β,H_S).
Using the lower bound (<ref>) for the resource, we then find that the inequalities for negative α are fulfilled if
|α|/(|α|+1) V_β(ρ_R,H_R) ≥ |α|/(|α|+1) V_β(ω_β_S(H_S),H_S) r(β,H_S).
Cancelling the prefactors, we get as sufficient condition for negative values of α the inequality
V_β(ρ_R,H_R) ≥ V_β(ω_β_S(H_S),H_S) r(β,H_S).
Combining this with the sufficient condition for positive α, which is the sufficient condition provided by Thm. <ref>, then yields the claimed sufficient condition in the theorem.
§.§.§ Catalysts can always be chosen with full rank
Before finishing this section, let us point out that in the case of exact catalysis, one can always choose the catalyst to have full rank. That is, the actual implementation of the cooling protocol, which is guaranteed to exist under the conditions of Thm. <ref>, never requires to employ a catalyst which is not full rank.
To see this, consider a bi-partite system with non-interacting Hamiltonian H_1+H_2.
Then consider the initial state ρ_SB⊗σ_C and apply an energy-preserving unitary operation U that results in the state ρ'_SBC with ρ'_C=σ_C.
Here, we imagine that ρ_SB also includes the state of the heat-bath and thus there can be a built-up of correlations between the catalyst and ρ_SB.
Furthermore assume that ρ_SB has full-rank and that σ_C is supported only on a subspace P⊂ H_2 with complement Q = 𝟙 - P (we identify the vector-space and the projector on the space). Thus P=∑_j |j⟩⟨j|, where the sum is over the eigen-states of σ_C.
Let the spectrum of ρ_SB and σ_C be {p_α} and {q_j}, respectively. Then the final state
ρ'_SBC = ∑_α,j p_α q_j U(|α⟩⟨α|⊗|j⟩⟨j|)U^†
is a convex sum of the positive semi-definite operators U(|α⟩⟨α|⊗|j⟩⟨j|)U^†. The sum has support only within H_1 ⊗ P since otherwise the reduced state ρ'_C would also be supported outside of P. Hence also every summand is supported within H_1 ⊗ P. Using 𝟙⊗ P = ∑_α,j |α⟩⟨α|⊗|j⟩⟨j| we then obtain
(𝟙⊗ Q) U(𝟙⊗ P) U^† = (𝟙⊗ Q) ∑_α,j U(|α⟩⟨α|⊗|j⟩⟨j|)U^† = 0.
In other words, we have (𝟙⊗ Q) U (𝟙⊗ P) = 0 and by a similar calculation also (𝟙⊗ P) U(𝟙⊗ Q)=0.
Thus, the unitary U is block-diagonal. In particular, the operator V=(𝟙⊗ P)U(𝟙⊗ P), considered as an operator on the Hilbert-space H_1⊗ P, is unitary.
Since U is energy-preserving by assumption, we can deduce that P=span{|E_j⟩} for some subset of energy-eigenstates |E_j⟩ of the Hamiltonian H_2 of the catalyst.
Then V commutes with the Hamiltonian H_1+ H_2|_P, where H_2|_P denotes the Hamiltonian of the catalyst, but restricted to the subspace P.
We can thus obtain an equivalent catalyst with full rank and a corresponding thermal operation by restricting σ_C and H_2 to the subspace P on which σ_C has full rank and using the thermal operation defined by V:
V (ρ_SB⊗σ_C|_P) V^† = ρ'_SBC|_H_1⊗ P.
In particular note that the above analysis also shows that, in the case of exact catalysis, pure catalysts are useless: If a transition can be done with a pure catalyst, it can also be done without a catalyst.
§ APPROXIMATE CATALYSIS
In this manuscript, we have assumed that catalysts are returned arbitrarily close to their initial state (or exactly, in the last section).
Here we will discuss possible relaxations of this assumption to include approximate catalysts.
First, we note that the problem of allowing for finite errors – in some suitable measure – between the initial and final state of the catalyst is a delicate one, especially in the context of the third law of thermodynamics.
The challenge is caused by the fact that already the statement of the unattainability principle is not stable under arbitrarily good approximations:
It compares the case where the state of the target system is exactly the ground-state with the case of approximating the ground-state to arbitrary precision.
In the former case, infinite resources are needed, while in the latter case finite resources suffice (although they diverge with the approximation precision).
This is the ultimate reason why a discontinuous measure of resources (like the vacancy) is necessary to capture the third law in the resource theoretic setting.
With this in mind, let us discuss the problem of approximate catalysts. If one demands that the catalyst is returned in approximately the same state, it is crucial how one measures “approximately”. In the context of thermal operations, this problem has been studied in Refs. <cit.>. It has been shown in Ref. <cit.>, that if one requires only that the catalyst is returned up to an arbitrarily small but fixed error in trace-distance, any transition can be implemented using a thermal operation to arbitrary precision – without any resource. In particular, this implies that perfect cooling can be achieved without using any resource state. Therefore, it is clear that stronger conditions are necessary to not trivialize problem of cooling.
A second way to define approximate catalysts is to require that the catalyst is returned up to an error ϵ/log(d) in trace-distance, where d is the dimension of the catalyst, and ϵ>0 is arbitrarily small but fixed for all catalysts.
Intuitively, this definition requires that the error is small even when multiplied by the number of particles in the catalyst.
In this case, transitions can be implemented to arbitrary precision if the non-equilibrium free energy decreases <cit.>.
This would lead to a constant amount of resources sufficing to cool a given system to arbitrarily low temperatures – hence the unattainability principle would also be violated in this case.
Because of the arguments above, allowing for a finite error – measured in trace-distance – in the catalyst seems to be too forgiving.
However, Ref. <cit.> also hints at a solution to this problem: One should measure the error in terms of a quantity that is meaningful for the problem at hand. In Ref. <cit.> the authors consider the problem of work extraction and demand in turn that the catalyst is returned with approximately the same “work distance”, where the work distance measures the potential of one state to produce work. In our case, we are concerned with the task of low-temperature cooling. Indeed, the vacancy itself plays the role of the cooling potential, since the limitations for low temperature cooling of Thm. <ref> are expressed in terms of the vacancy. We can thus require that the catalyst has to be returned with a vacancy that differs only by an amount ϵ from the initial vacancy. If we adopt this definition of approximate catalysts, the general necessary condition (<ref>) is modified only slightly.
This can be seen in the following way. First note that this notion requires that catalysts all have finite vacancy, i.e., they must have full rank. In this case, we can simply evaluate the vacancy of the resource, system and target before and after the cooling protocol has been applied. Let us assume that the initial state of the catalyst is σ, while the final state is σ'. Since the vacancy is an additive monotone of thermal operations and vanishes on thermal states, we then obtain
V_β(ρ_R⊗ω_β(H_S)⊗σ,H_R+H_S+H_C) = V_β(ρ_R,H_R) + V_β(σ,H_C)
≥ V_β(ω_β(H_R)⊗ρ_S ⊗σ', H_R+ H_S +H_C) = V_β(ρ_S,H_S) + V_β(σ',H_C).
We hence obtain as a new necessary condition
V_β(ρ_R,H_R) + ϵ≥ V_β(ρ_S,H_S),
with ϵ = V_β(σ,H_C) - V_β(σ',H_C) being the error in the catalyst measured by the vacancy.
Thus the necessary condition and hence the quantitative unattainability principle is stable under approximate catalysts if defined consistently: allowing a fixed but small error measured by the vacancy difference.
It seems plausible that under this definition of catalysts, also the sufficient condition in Theorem <ref> simplifies to (<ref>) for arbitrary resources – at least for low enough target temperatures. However, proving this statement rigorously seems to require further technical innovations beyond the scope of this work. We therefore leave this as an open problem.
|
http://arxiv.org/abs/1701.07823v1 | 20170126105848 | A continuum model of skeletal muscle tissue with loss of activation | [
"Giulia Giantesio",
"Alessandro Musesti"
] | q-bio.TO | [
"q-bio.TO",
"74B20, 74L15"
] |
A continuum model of skeletal muscle tissue with loss of activation
Giulia Giantesio and Alessandro Musesti
===================================================================
We present a continuum model for the mechanical behavior of
the skeletal muscle tissue when its functionality is reduced due to
aging. The loss of ability of activating is typical of the geriatric
syndrome called sarcopenia. The material is described by a
hyperelastic, polyconvex, transverse isotropic strain energy
function. The three material parameters appearing in the energy are
fitted by least square optimization on experimental
data, while incompressibility is assumed through a Lagrange multiplier
representing the hydrostatic pressure. The activation of the muscle
fibers, which is related to the contraction of the sarcomere, is
modeled by the so-called active strain approach. The loss of
performance of an aging muscle is then obtained by lowering the
active part of the stress by some percentage. The model is implemented
numerically and the obtained results are discussed and graphically
represented.
§ INTRODUCTION
Skeletal muscle tissue is one of the main components of the human
body, being about 40% of its total mass. Its principal role is the
production of force, which supports the body and is converted into
movement by acting on bones. The mechanism by which a muscle
produces force is called activation.
Skeletal muscle tissue is a highly ordered hierarchical structure. The
cells of the tissue are the muscular fibers, having a length up to
several centimeters; they are organized in fascicles, where every
fiber is multiply connected to nerve axons, which drive the activation
of the tissue. Connective tissue, which is essentially isotropic,
fills the spaces among the fibers. Every fiber contains a
concatenation of millions of sarcomeres, which are the fundamental unit
of the muscle. With a length of some micrometers, a sarcomere is
composed by chains of proteins, mainly actin and myosin, which can
slide on each other. This sliding movement produces the contraction of
the sarcomere and, ultimately, the contraction of the whole muscle and
the production of force and movement.
The aim of this Chapter is to propose a mathematical model of skeletal
muscle tissue with a reduced activation, which is typical of a
geriatric syndrome named sarcopenia <cit.>. About
thirty years ago, the term sarcopenia (from Greek sarx or flesh and
penia
or loss) has been introduced in order to describe the age-related
decrease of muscle mass and performance. Sarcopenia has since then
been defined as the loss of skeletal muscle mass and strength that
occurs with advancing age, which in turn affects balance, gait and
overall ability to perform even the simple tasks of daily living such
as rising from a chair or climbing steps. According to
<cit.>, sarcopenia affects more than 50 million
people today, and it will affect more than 200 million in the next 40
years. There is still no generally accepted test for its diagnosis and
many efforts are made nowadays by the medical community to better
understand this syndrome. Therefore it is desirable to build a
mathematical model of muscle tissue affected by sarcopenia. However,
to the best of our knowledge, in the biomathematical literature the
topic of loss of activation has never been addressed.
In order to use the valuable tools of Continuum Mechanics, during the
last decades the skeletal muscle tissue has been often modeled as a
continuum material <cit.>,
which is usually assumed to be transversely
isotropic and incompressible. The former assumption is motivated by
the alignment of the muscular fibers, while the latter is ensured by
the high water content of the tissue (about 75% of the total volume).
Moreover, in view of some experimental tests, the material is assumed
to be nonlinear and viscoelastic. Focusing our attention only on the
steady properties of the tissue, here we neglect the viscous effects
and we set in the framework of hyperelasticity.
In the model that we propose, there are three constitutive
prescriptions: one for the hyperelastic energy when the tissue is not
active (passive energy), one for the activation and one for the loss
of performance. As far as the passive part is concerned, we assume an
exponential stress response of the material, which is customary in
biological tissues. The particular form that we choose, being a slight
simplification of the one proposed in <cit.>, has the
advantage of being polyconvex and coercive, giving mathematical
soundness to the model and stability to the numerical
simulations.
A recent and very promising way to describe the activation is the
active strain approach, where the extra energy produced by the
activation mechanism is encoded in a multiplicative decomposition of
the deformation gradient in an elastic and an active part (see
Section <ref>). Unlike the classical
active stress approach, in which the active part of the stress
is modeled in a pure phenomenological way and a new term has to be
added to the passive energy, the active strain method does not change
the form of the elastic energy, keeping in particular all its
mathematical properties. Moreover, at least in the case of skeletal
muscles, the active strain approach seems to be more adherent to the
physiology of the tissue, in the sense that at the molecular level the
production of force is actually given by a deformation of the
material, thanks to the contraction of the sarcomeres. The active part
of the deformation gradient is a mathematical representation of such a
contraction. The multiplicative decomposition of the deformation
gradient has been applied to an active striated muscle in
<cit.>. However, this decomposition involves only a
part of the whole elastic energy, which is written as the sum of two
terms for the case of a fiber-reinforced material. As far as we know,
the active strain approach has never been previously applied to the
whole elastic energy of a skeletal muscle tissue. As a drawback, the
active strain approach can be a source of some technical difficulties;
for instance, in our case fitting the model on the experimental data
is not so simple, see eq. (<ref>).
Furthermore, we consider the loss of performance, which is one of the
novelties of our model. Unfortunately, there are no experimental data
on the elastic properties of a sarcopenic muscle tissue, at least to
our knowledge; hence we adopt the naive strategy of reducing the
active part of the stress (which is the difference between the stress
of the material with and without activation) by a given percentage, represented
by the damage parameter d (see
Section <ref>). In this way, there is a single
parameter in the model which concisely accounts for any effect of the
disease.
The proposed model can be numerically implemented using finite element
methods. In Section <ref> we present some results
obtained using FEniCS, an open source collection of Python
libraries. Actually, we consider a cylindrical geometry with radial
symmetry, so that the numerical domain is two-dimensional and the
computational cost is reduced. As far as the boundary conditions are
concerned, we prescribe the displacement on the bases of the cylinder
and leave the lateral surface traction-free. Such simulations show that
the experimental results of <cit.> on the passive and active
stress-strain healthy curves, obtained in vivo from a tetanized
tibialis anterior of a rat, can be well reproduced by our
model. Further, the behavior of the tissue when d increases is
analyzed. An ongoing task is to perform a finite element
implementation of the model when generic loads are applied, and to
consider a realistic three-dimensional muscle mesh. We are now
developing a truly hyperelastic model, where the expression of the
stress takes into account also the dependence of the activation on the
deformation gradient. Actually, in this chapter the stress is computed
as the derivative of the hyperelastic energy keeping the active part
of the deformation gradient fixed.
In the future, it will be very interesting to find some connections
between the damage percentage (the parameter d) and other
physiological quantities, such as the mass of the muscular tissue or
the neuronal activity. Another important topic will be the application
of some homogenization techniques in order to deduce an improved
constitutive equation for the skeletal muscle starting from its
microstructure.
§ CONSTITUTIVE MODEL
Skeletal muscle tissue is characterized by densely packed muscle
fibers, which are arranged in fascicles. Filling the spaces between
the fibers and fascicles, connective tissue surrounds the muscle and
it is responsible of the elastic recoil of the muscle to elongation.
Besides a large amount of water, the fibers themselves contain titin,
actin and myosin filaments. The latter two sliding elements form the
actual contractile component of the muscle, which is called
sarcomere. Since the fibers locally follow a predominant
unidirectional alignment, transverse isotropy with respect to that
main direction can be assumed. We hence begin by modelling the
skeletal muscle tissue as a transversely isotropic nonlinear
hyperelastic material with principal direction , which follows
the alignment of the muscle fibers.
§.§ Passive model
passive energy
Let denote the deformation gradient tensor, =^T the right
Cauchy-Green tensor and =⊗ the so called structural
tensor.structural tensor If Ω denotes the reference configuration occupied
by the muscle, we describe its passive behavior by choosing a
hyperelastic strain energy function
∫_Ω W()dV,
where the strain energy density is of the form
W()=μ/4{1/α[e^α(I_p-1)-1]+K_p-1},
with
I_p=w_0/3()+(1-w_0)(), K_p=w_0/3(^-1)+(1-w_0)(^-1).
Here μ is an elastic parameter and α and w_0 are positive
dimensionless material parameters. The generalized invariants I_p
and K_p are given by a weighted combination of the isotropic and
anisotropic components; in particular, w_0 measures the ratio of
isotropic tissue constituents and 1-w_0 that of muscle
fibers. Moreover, the term () represents the squared
stretch in the direction of the muscle fiber and is thus associated with
longitudinal fiber properties, while the term (^-1)
describes the change of the squared cross-sectional area of a surface
element which is normal to the direction in the
reference configuration and thus relates to the transverse
behavior of the material <cit.> (see Fig. <ref>).
One of the mathematical features of the energy density
(<ref>) is that it is polyconvex and coercive
<cit.>, hence the equilibrium problem with mixed boundary
conditions is well posed.
We remark that is the identity tensor in the reference
configuration, so that I_p=K_p=1, i.e. we have the energy- and
stress-free state of the passive muscle tissue (see <cit.>).
The high content of water is responsible of the nearly incompressible
behavior which is experimentally reported for muscle fibers, so that
we can assumeincompressibility
= 1.
As is customary in hyperelasticity, the first
Piola-KirchhoffPiola-Kirchhoff stress tensor stress
tensor, known as nominal stress tensor, can be directly
computed by differentiating the strain energy function:
= W-p^-T=2W-p^-T=
= μ/2{e^α(I_p-1)[w_0/3+(1-w_0)]-^-1[w_0/3+(1-w_0)]^-1}-p^-T,
where p is a Lagrange multiplier associated with the hydrostatic
pressure which results from the incompressibility constraint
(<ref>).
The material parameters of the model can be obtained from real data.
More precisely, concerning the elastic parameter μ, we use the
value given in <cit.>, while the other two parameters have been
obtained by least squares optimization using the experimental data by
Hawkins and Bey <cit.> about the stretch response of a
tetanized tibialis anterior of a rat (see
Fig. <ref>). In Table <ref> we furnish the
values of the parameters.
We remark that the strain energy function (<ref>) is a
slight simplification of the one proposed by Ehret, Böl and Itskov in <cit.>:
W_EBI()=μ/4{1/α[e^α(I_p-1)-1]+
1/β[e^β(K_p-1)-1]},
where α=19.69, β=1.190, w_0=0.7388. Actually, our
simplification consists in linearizing the term related to K_p,
which describes the transverse behavior. This is motivated by the
fact that the parameter β is much smaller than α. In
Fig. <ref> we can see the comparison between the nominal
stress in the direction of the stretch of the two models when the
muscle fibers are elongated in their direction.
§.§ Active model
activationOne of the main features of the skeletal muscle tissue is its ability
of being voluntarily activated. Skeletal muscles are activated
through electrical impulses from motor nerves; the activation triggers
a chemical reaction between the actin and myosin filaments which
produces a sliding of the molecular chains, causing a contraction of
the muscle fibers.
During the last decades, many authors tried to mathematically model
the process of activation, mainly with two different approaches (for a
review see <cit.>). The most famous approach followed in the
literature is called active stress and it consists in adding an
extra term to the stress, which accounts for the contribution
given by the activation (see for example
<cit.>). However, this is an ad
hoc method, usually not related to the sliding movement of the
filaments in the sarcomeres, which is the main mechanism of
contraction at the mesoscale.
More recently, the active strain
approach was proposed by Taber
and Perucchio <cit.> in order to describe the activation of
the cardiac tissue, following previous theories of growth and
morphogenesis, as well as several models of plasticity. The method for
soft living tissues is explained in <cit.>.
Differently from the active stress approach, this method does not
change the form of the strain energy function; rather, it assumes that
only a part of the deformation gradient, obtained by a multiplicative
decomposition, is responsible for the store of elastic energy. This
method is related to the biological meaning of activation and can be
reasonably adopted also in our case. To the best of our knowledge, the
active strain approach has never been followed for the skeletal muscle
tissue in literature.
We begin by rewriting the deformation gradient as 𝐅=𝐅_e𝐅_a, where
𝐅_e is the elastic part and 𝐅_a describes the active
contribution (see Fig. <ref>).
The active strain 𝐅_a represents a change of the reference
volume elements due to the contraction of the sarcomeres, so that it
does not contribute to the elastic energy.
A reference volume element, distorted by 𝐅_a, needs a further
deformation 𝐅_e to match the actual volume element, which accommodates
both the external forces and the active contraction. Notice that neither 𝐅_a nor
𝐅_e need to be the gradients of some displacement, that is, it is
not necessary that they fulfill the compatibility condition
curl 𝐅_a = 0 or curl 𝐅_e = 0.
The volume elements are modified by the internal active forces without
changing the elastic energy, hence the strain energy function of the
activated material has to be computed using 𝐂_e=𝐅_e^T𝐅_e and
taking into account 𝐅_e=𝐅𝐅_a^-1. If 𝐅_a=∇χ_a
for some displacement χ_a, then from
Fig. <ref> by a change of variables it is easy to see that
∫_χ_a(Ω) W(𝐂_e) dV = ∫_Ω W(𝐅_a^-T𝐂𝐅_a^-1) det(𝐅_a) dV.
The right-hand side of the previous equation is well defined also when
𝐅_a does not come from a global displacement, and it describes
the strain energy of the active body.
We then obtain the modified hyperelastic energy density
Ŵ(𝐂) = det(𝐅_a) W(𝐂_e) =
det(𝐅_a) W(𝐅_a^-T𝐂𝐅_a^-1).
We now have to model the active part 𝐅_a.
Since the activation of the muscle consists in a contraction along the fibers, we choose
𝐅_a = 𝐈 - γ 𝐦⊗𝐦,
where 0≤γ <1 is a dimensionless parameter representing
the relative contraction of activated fibers (γ=0 meaning no activation).
Then the modified strain energy density becomes
Ŵ(𝐂)=(1-γ)W(𝐂_e)=(1-γ)μ/4{1/α[e^α(I_e-1)-1]+K_e-1},
I_e=w_0/3 tr(𝐂_e)+(1-w_0) tr(𝐂_e𝐌),
K_e=w_0/3 tr(𝐂_e^-1)+(1-w_0) tr(𝐂_e^-1𝐌).
The corresponding first Piola-Kirchhoff stress tensor is given by
𝐏 = det(𝐅_a) ∂W/∂𝐅_e 𝐅_a^-1 - p𝐅^-T
= μ/2(1-γ) 𝐅_e{e^α(I_e-1)[w_0/3 𝐈+(1-w_0)𝐌]-𝐂_e^-1[w_0/3 𝐈+(1-w_0)𝐌]𝐂_e^-1}𝐅_a^-1
- p𝐅^-T,
where p accounts for the incompressibility constraint
det 𝐅 = 1. Notice that, since the activation (<ref>) does not
preserve volume and the material has to be globally incompressible,
one has that det 𝐅_e ≠ 1, so that the material is elastically
compressible. As far as the strain energy density is concerned, a
factor (1-γ) appears in (<ref>) which takes into
account the compressibility of 𝐅_a. It would be interesting to
study also other kinds of passive energies, involving the quantity
det 𝐅_e, in order to better describe the elastic compressibility of
the material.
In Fig. <ref> we represent, for several values of the parameter
γ, the stress-strain curve for a uniaxial tension
along the fibers. If the muscle is activated (γ >0), then (the
absolute value of) the stress increases with γ and the value of
the stretch such that the stress is zero becomes less than one.
§ MODELLING THE ACTIVATION ON EXPERIMENTAL DATA
The activation parameter γ, which was assumed constant in the
previous section, in fact usually depends on the deformation
gradient. In typical experiments on a tetanized skeletal muscle it is
apparent that the contraction of the fibers due to activation varies
with their stretch, reaching a maximum value and then
decreasing. Fig. <ref> shows the qualitative
relation between the elongation and the developed stress.
This section will be devoted to taking into account this
phenomenon. Specifically, the expression of γ will be
determined by matching an experiment-based relation between stress and
strain with our model (<ref>).
In order to find the relation between stress and strain, the
experiments in vivo are usually performed in two steps. First,
the stress-strain curve is obtained without any activation
(passive curve). Second, by an electrical stimulus the muscle
is isometrically kept in a tetanized state and the total
stress-strain curve is plotted. The last curve, which is
qualitatively represented in Fig. <ref>, depends on
the reciprocal position of actin and myosin chains. By taking the
difference of the two curves one can obtain the active curve,
describing the amount of stress due to activation. It is useful to
find a mathematical expression of such a curve, in order to take into
account the experimental behavior of the active contraction. This
issue has already been addressed in several papers, see
e.g. <cit.>.
Denoting with λ the ratio between the current length of the
muscle and its original length, we assume the active curve to be of
the form
P_act(λ) =
P_opt exp[-k(λ^2-λ_opt^2)^2/(λ-λ_min)]   if λ>λ_min,
0   otherwise,
where λ_min is the minimum stretch value after which the
activation starts (i.e. the lower bound for the stretch at which the
myofilaments begin to overlap) and k is merely a fitting
parameter. The coordinates (λ_opt,P_opt) identify the
position of the maximum of the curve. As it is explained in
<cit.>, the value of P_opt takes into account some
information at the mesoscale level, such as the number of activated
motor units and the interstimulus interval; according to the
literature <cit.>, it is set at P_opt=73.52
kPa. The numerical values of the other three parameters, deduced
through least squares optimization on the data reported in
<cit.>, are given in Table <ref>.
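For later use, the active curve is easy to transcribe into Python. In the sketch below, P_opt = 73.52 kPa is the value quoted above, while λ_min, λ_opt and k are placeholders standing in for the fitted constants of Table <ref>, which are not restated in the text.

import numpy as np

# Active nominal stress along the fibers; lam_min, lam_opt and k are
# placeholders for the fitted constants of the corresponding table.
P_opt, lam_opt, lam_min, k = 73.52, 1.17, 0.68, 0.95   # P_opt in kPa

def P_act(lam):
    lam = np.atleast_1d(np.asarray(lam, dtype=float))
    out = np.zeros_like(lam)
    m = lam > lam_min
    out[m] = P_opt * np.exp(-k * (lam[m]**2 - lam_opt**2)**2 / (lam[m] - lam_min))
    return out

print(P_act([0.6, 1.0, 1.17, 1.4]))    # the maximum P_opt is attained at lam_opt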
The expression (<ref>) has the advantage of describing the
asymmetry between the ascending and descending branches of the active
curve obtained in <cit.>. Indeed, even if the asymmetry is not
so evident in their curve, due to the fact that there are only few
data on the descending branch, it is a typical feature of several
experimentally measured sarcomere length-force relations. Moreover, as
one can easily see in Fig. <ref>, the convex behavior of the
data near λ_min is well fitted.
§.§ The activation parameter γ as a function of the elongation
Now our aim is to obtain P_act(λ) given in (<ref>)
from the model described in Section
<ref>. In order to reach our purpose, we
have to model the activation parameter γ as a function of the
stretch.
As in the experiments of Hawkins and Bey <cit.>, let us consider a uniaxial simple tension along the fibers. For simplicity, we assume that the fibers follow the direction
𝐦 = 𝐞_1.
Since the skeletal muscle tissue is modeled as an incompressible transversely isotropic material, the general form of the
deformation gradient is given by
𝐅 =
[ λ 0 0; 0 1/√(λ) 0; 0 0 1/√(λ) ].
Then using the notation introduced in Section <ref>, one has
𝐂_e =
[ λ^2/(1-γ)^2 0 0; 0 1/λ 0; 0 0 1/λ ],
I_e =w_0/3[λ^2/(1-γ)^2+2/λ]+(1-w_0)λ^2/(1-γ)^2,
K_e =w_0/3[(1-γ)^2/λ^2+2λ]+(1-w_0)(1-γ)^2/λ^2.
In this case, it is convenient to look at the strain energy as a function of the stretch λ and the activation parameter γ:
Ŵ(λ,γ)=(1-γ)W(λ,γ)=(1-γ)μ/4{1/α[e^α(I_e-1)-1]+K_e-1}.
Then the nominal stress along the fiber direction is given by
P_tot(λ,γ) := ∂Ŵ/∂λ
= (1-γ)μ/4[I_e' e^α(I_e-1) + K_e'],
where
I_e' = ∂I_e/∂λ = 2w_0/3[λ/(1-γ)^2-1/λ^2]+2(1-w_0)λ/(1-γ)^2,
K_e' = ∂K_e/∂λ = 2w_0/3[1-(1-γ)^2/λ^3]-2(1-w_0)(1-γ)^2/λ^3.
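Since γ is kept fixed in this differentiation, the expressions for I_e' and K_e' can be verified symbolically; the following sympy sketch does so.

import sympy as sp

# Symbolic check that I_e' and K_e' above are the lambda-derivatives of the
# uniaxial I_e and K_e (gamma held fixed, as in the text).
lam, gam, w0 = sp.symbols('lambda gamma w_0', positive=True)
I_e = w0/3*(lam**2/(1-gam)**2 + 2/lam) + (1-w0)*lam**2/(1-gam)**2
K_e = w0/3*((1-gam)**2/lam**2 + 2*lam) + (1-w0)*(1-gam)**2/lam**2
I_e_p = 2*w0/3*(lam/(1-gam)**2 - 1/lam**2) + 2*(1-w0)*lam/(1-gam)**2
K_e_p = 2*w0/3*(1 - (1-gam)**2/lam**3) - 2*(1-w0)*(1-gam)**2/lam**3
assert sp.simplify(sp.diff(I_e, lam) - I_e_p) == 0
assert sp.simplify(sp.diff(K_e, lam) - K_e_p) == 0
print("derivatives match")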
We can get the passive stress by setting γ=0:
P_pas(λ) := P_tot(λ,0)
= μ/2{[(1-2w_0/3)λ-(w_0/3)(1/λ^2)] e^α[(1-2w_0/3)λ^2+(w_0/3)(2/λ)-1]
- (1-2w_0/3)(1/λ^3)+w_0/3}.
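The passive curve is then immediate to evaluate numerically. In the sketch below α and w_0 are the EBI values quoted in Section <ref>, and μ = 1 kPa is a placeholder, since the fitted constants of Table <ref> are not restated here.

import numpy as np

# Passive nominal stress P_pas(lambda); mu is a placeholder, alpha and w_0
# are the EBI values quoted earlier, not the fitted ones of the table.
mu, alpha, w0 = 1.0, 19.69, 0.7388               # mu in kPa

def P_pas(lam):
    a = (1 - 2*w0/3) * lam - (w0/3) / lam**2
    e = np.exp(alpha * ((1 - 2*w0/3) * lam**2 + (w0/3) * 2/lam - 1))
    return mu/2 * (a * e - (1 - 2*w0/3) / lam**3 + w0/3)

for lam in (1.0, 1.1, 1.2):
    print(lam, P_pas(lam))                       # P_pas(1) = 0: stress-free reference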
We remark that the values of P_tot and P_pas can also be
obtained by computing the first component of the stress given by
(<ref>) and (<ref>) after finding the
hydrostatic pressure from the conditions
P_22 = P_33 = P̂_22 = P̂_33 = 0 (traction-free
lateral surface).
Our aim is to find the value of γ such that
P_tot(λ, γ)= P_act(λ)+P_pas(λ),
where P_act (λ) is given by (<ref>). Unfortunately,
this leads to an equation for γ which cannot be explicitly
solved:
(1-γ){[(1-2w_0/3)λ/(1-γ)^2-(w_0/3)(1/λ^2)] e^α[(1-2w_0/3)λ^2/(1-γ)^2+(w_0/3)(2/λ)-1]
+ w_0/3 - (1-2w_0/3)(1-γ)^2/λ^3}
= (2/μ)[P_act(λ)+P_pas(λ)].
However, one can employ standard numerical methods and plot the
solution. Fig. <ref>_1, which is obtained by a bisection
method, shows γ as a function of λ.
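A minimal version of this bisection is sketched below. It assumes, consistently with Fig. <ref>, that P_tot(λ,·) is increasing in γ on the bracket used, and it reuses the placeholder constants of the previous sketches.

import numpy as np

# Bisection for gamma(lambda) solving P_tot(lambda, gamma) = P_act + P_pas;
# all material/activation constants are placeholders, as in earlier sketches.
mu, alpha, w0 = 1.0, 19.69, 0.7388
P_opt, lam_opt, lam_min, k = 73.52, 1.17, 0.68, 0.95

def P_act(lam):
    if lam <= lam_min:
        return 0.0
    return P_opt * np.exp(-k * (lam**2 - lam_opt**2)**2 / (lam - lam_min))

def P_tot(lam, gam):
    Ie  = w0/3*(lam**2/(1-gam)**2 + 2/lam) + (1-w0)*lam**2/(1-gam)**2
    Iep = 2*w0/3*(lam/(1-gam)**2 - 1/lam**2) + 2*(1-w0)*lam/(1-gam)**2
    Kep = 2*w0/3*(1 - (1-gam)**2/lam**3) - 2*(1-w0)*(1-gam)**2/lam**3
    return (1-gam) * mu/4 * (Iep * np.exp(alpha*(Ie-1)) + Kep)

def gamma_of(lam, tol=1e-10):
    target = P_act(lam) + P_tot(lam, 0.0)        # P_pas = P_tot(., 0)
    lo, hi = 0.0, 0.6                            # bracket where P_tot grows with gamma
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P_tot(lam, mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

for lam in (0.9, 1.0, 1.1, 1.2):
    print(lam, gamma_of(lam))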
We remark that γ vanishes before λ_min, indeed in
this region there is no difference between total and passive
stress. The corresponding behavior of the stresses is plotted in
Fig. <ref>_2, which is very similar to the
representative plot of Fig. <ref>.
We emphasize that the previous model is not strictly
hyperelastic, since in the expression of the
stress (<ref>) the derivative of γ with
respect to 𝐅 has been neglected. We are now working on a
truly hyperelastic model, which can be useful for some numerical
implementations.
§.§ Loss of activation
We now want to describe from a mathematical point of view the loss of
performance of a skeletal muscle tissue. As we have already explained
in the Introduction, this is one of the main effects of
sarcopenia, which is a typical syndrome of
advanced age.
In <cit.> it is remarked that aging is
associated with changes in muscle mass, composition, activation and
material properties. In sarcopenic muscle, there is a loss of motor
units via denervation and a net conversion in slow fibers, with a
resulting loss in muscle power. Hence, the loss of performance of a
sarcopenic muscle can be described as a weakening of the activation of
the fibers.
Unfortunately, as far as we know, there are no experimental data
describing a uniaxial simple tension along the fibers of a sarcopenic
muscle. For this reason, we try to describe the loss of activation by
a parameter d which lowers the curve P_act(λ) given by
(<ref>). The parameter d describes the percentage of
disease or damage: if d=0, then the muscle is
healthy. To this end, we multiply the function
P_act(λ) by the factor 1-d, as one can see in
Fig. <ref>. Notice that such a choice may be overly simple:
for instance, it implies that the maximum is always attained at
λ_opt, even if there is no experimental evidence of
that. However, the presence of d allows to describe, at least
qualitatively, the loss of performance of a muscle, which is one of the
goals of our model.
§ NUMERICAL VALIDATION
Finally, we simulate numerically the contraction and the elongation of
a slab of skeletal muscle tissue represented by a cylinder. We assume
radial symmetry, so that the mesh is a rectangle. The ends of the
cylinder are assumed to remain perpendicular to the axial direction.
The rectangle is modeled by the hyperelastic model presented in the
previous sections. The active contractile fibers are aligned along the
length of the rectangle, which coincides with 𝐞_1. The passive
and active material parameters are given in Tables <ref> and
<ref>, respectively. Concerning the boundary conditions, the
cylinder is fixed at one end and elongated to a given length, in order
to recreate the situation of the experiments reported in
<cit.>. The lateral surface is assumed to be tension-free.
The analysis is performed by using the computing environment
FEniCS. The FEniCS Project <cit.> is a collection of
numerical software, supported by a set of novel algorithms and
techniques, aimed at the automated solution of differential equations
using finite element methods.
As it is explained in Section <ref>, one of
the main features of our model is the dependence of the activation
parameter γ on the stretch λ. The function
γ(λ) solves the implicit equation
(<ref>), which ensures that the corresponding
stress curves fit the experimental data. However, even if this
equation can be solved using numerical methods, it is interesting to
find an explicit function in order to analyze qualitatively the active
model and to run the simulations in FEniCS. Moreover, the explicit
function γ(λ) has to be very precise, since a slight
error on γ deeply affects the behavior of the total stress.
Hence, it is reasonable to relate the expression of γ to
the material parameters and the quantities involved in
(<ref>).
An idea is to isolate the exponential in (<ref>)
and to express its exponent by a first step approximation of a
fixed-point method. We then obtain the following expression of
γ:
γ(λ)= { a[√(1-2/3w_0/g(λ_min)/α+w_0/3λ_min)λ_min
-√(1-2/3w_0/g(λ)/α+w_0/3λ)λ]
if λ>λ_min,
0 otherwise,
.
g(λ)= lnα+α(1-w_0/λ)-1/2ln(1-2/3w_0/1/α+w_0/3λ)
+ln{b2/μ[P_act(λ)+P_pas(λ)]+
√(1-2/3w_0/1/α+w_0/3λ)[
(1-2/3w_0)^2/1/α+w_0/3λ-λw_0/3]},
where a and b are dimensionless fitting parameters: a is related to the magnitude
of γ, while b acts on the curves (<ref>) and
(<ref>), which are the terms of the equation not depending on
γ. Performing a least square optimization on the resulting
P_act, one gets a=1.0133 and b=0.2050.
Fig. <ref> shows the plot of the function
γ(λ) given in (<ref>) in comparison to the
numerical solution of equation (<ref>) obtained by
a bisection method.
Notice that the function defined in (<ref>) is continuous; in
particular we impose γ(λ_min)=0, so that the starting
value of activation does not change. Moreover, the function
approximates very well the numerical values of γ in the range
0.7<λ<1.5. However, the fitting is not so good when λ
becomes larger: for instance, the function is negative for
λ≥ 1.6. Nevertheless, the latter behavior of γ does not
influence too much the curve P_tot, since in that region
P_pas≫ P_act. Indeed, one can even neglect the activation for
large stretches.
The total stress response is plotted in Fig. <ref>
in comparison to the data given in <cit.>.
Finally, it is interesting to run the simulations in the case of loss
of activation, i.e. when the damage parameter d varies. In
order to find the suitable activation function γ(λ), it
is sufficient to multiply the term P_act in (<ref>) by
(1-d). As one would expect from Fig. <ref>, we have that
when d increases the activation γ decreases
(Fig. <ref>_1). This means that lowering the curve of
P_act results in a decrease of γ(λ), which leads to a
lowered total stress response. As one can see in
Fig. <ref>_2, the damage parameter mainly affects the
value of the stress in the region near λ_opt, where the
active stress reaches its maximum. However, the qualitative behavior of the stress curve
does not change, at least for d≤ 0.5. In particular, after a
plateau, the stress follows the exponential growth of the passive curve.
§ ACKNOWLEDGEMENT
This work has been supported by the project Active Ageing and
Healthy Living <cit.> of the Università Cattolica
del Sacro Cuore and partially supported by GNFM (Gruppo
Nazionale per la Fisica Matematica) of INdAM (Istituto Nazionale di
Alta Matematica).
The authors wish to thank the anonymous referees for their useful comments.
D:fenics
M. S. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg,
C. Richardson, J. Ring, M. E. Rognes, and G. N. Wells.
The FEniCS Project Version 1.5.
Archive of Numerical Software, 100:9–23, 2015.
D:asasb
D. Ambrosi and S. Pezzuto.
Active Stress vs. Active Strain in Mechanobiology:
Constitutive Issues.
Journal of Elasticity, 107:199–212, 2012.
D:blemker
S. S. Blemker, P. M. Pinsky, and S. L. Delp.
A 3D model of muscle reveals the causes of nonuniform strains in
the biceps brachii.
Journal of Biomechanics, 38:657–665, 2005.
D:bol
M. Böl and S. Reese.
Micromechanical modelling of skeletal muscles based on the finite
element method.
Computer Methods in Biomechanics and Biomedical Engineering,
11:489–504, 2008.
D:review
G. Chagnon, M. Rebouah, and D. Favier.
Hyperelastic Energy Densities for Soft Biological
Tissues: A Review.
Journal of Elasticity, 120:129–160, 2015.
D:reportSarcopenia
A. J. Cruz-Jentoft, J. P. Baeyens, J. M. Bauer, Y. Boirie, T. Cederholm,
F. Landi, F. C. Martin, J. P. Michel, Y. Rolland, S. M. Schneider,
E. Topinková, M. Vandewoude, and M. Zamboni.
Sarcopenia: European consensus on definition and diagnosis.
Age and Ageing, 39:412–423, 2010.
D:ebi
A. E. Ehret, M. Böl, and M. Itskov.
A continuum constitutive model for the active behaviour of skeletal
muscle.
Journal of the Mechanics and Physics of Solids, 59:625–636,
2011.
D:ebipcb
A. E. Ehret and M. Itskov.
A polyconvex hyperelastic model for fiber-reinforced materials in
application to soft tissues.
Journal of Materials Science, 42:8853–8863, 2007.
D:ei2009
A. E. Ehret and M. Itskov.
Modeling of anisotropic softening phenomena: Application to soft
biological tissues.
International Journal of Plasticity, 25:901–919, 2009.
D:datib
D. Hawkins and M. Bey.
A Comprehensive Approach for Studying Muscle-Tendon
Mechanics.
ASME Journal of Biomechanical Engineering, 116:51–55, 1994.
D:thomas
T. Heidlauf and O. Röhrle.
A multiscale chemo-electro-mechanical skeletal muscle model to
analyze muscle contraction and force generation for different muscle fiber
arrangements.
Frontiers in Physiology, 5:1–14, 2014.
D:hernandez
B. Hernández-Gascón, J. Grasa, B. Calvo, and J. F. Rodríguez.
A 3D electro-mechanical continuum model for simulating skeletal
muscle contraction.
Journal of Theoretical Biology, 335:108–118, 2013.
D:Johansson
T. Johansson, P. Meier, and R. Blickhan.
A Finite-Element Model for the Mechanical Analysis of
Skeletal Muscles.
Journal of Theoretical Biology, 206:131–149, 2000.
D:Lang2010
T. Lang, T. Streeper, P. Cawthon, K. Baldwin, D. R. Taaffe, and T. B. Harris.
Sarcopenia: etiology, clinical consequences, intervention, and
assessment.
Osteoporos Int, 21:543–559, 2010.
D:martins
J. A. C. Martins, E. B. Pires, R. Salvado, and P. B. Dinis.
A numerical model of passive and active behavior of skeletal muscles.
Computer Methods in Applied Mechanics and Engineering,
151:419–433, 1998.
D:giulio
A. Musesti, G. G. Giusteri, and A. Marzocchi.
Predicting Ageing: On the Mathematical Modelization of
Ageing Muscle Tissue.
In G. Riva et al., editor, Active Ageing and Healthy Living.
IOS press, 2014.
Chapter 17.
D:nardinocchiteresi
P. Nardinocchi and L. Teresi.
On the Active Response of Soft Living Tissues.
Journal of Elasticity, 88:27–39, 2007.
D:PTD
C. Paetsch, B. A. Trimmer, and A. Dorfmann.
A constitutive model for active-passive transition of muscle fibers.
International Journal of Non-Linear Mechanics, 47:377–387,
2012.
D:progettob
G. Riva, P. Ajmone Marsan, and C. Grassi.
Active Ageing and Healthy Living.
IOS press, 2014.
D:sch
J. Schröder and P. Neff.
Invariant formulation of hyperelastic transverse isotropy based on
polyconvex free energy functions.
International Journal of Solids and Structures, 40:401–445,
2003.
D:taber
L. A. Taber and R. Perucchio.
Modeling Heart Development.
Journal of Elasticity, 61:165–197, 2000.
D:vanLeeuwen1991
J. L. van Leeuwen.
Optimum power output and structural design of sarcomeres.
Journal of Theoretical Biology, 149:229–256, 1991.
D:vanLeeuwen1992
J. L. van Leeuwen.
Muscle function in locomotion.
In Advances in Comparative and Environmental Physiology.
Springer Heidelberg Berlin, 1992.
Chapter 7.
D:overviewSarco
S. von Haehling, J. E. Morley, and S. D. Anker.
An overview of sarcopenia: facts and numbers on prevalence and
clinical impact.
Journal of Cachexia, Sarcopenia and Muscle, 1:129–133, 2010.
|
http://arxiv.org/abs/1701.08023v1 | 20170127121112 | The Condorcet Principle for Multiwinner Elections: From Shortlisting to Proportionality | [
"Haris Aziz",
"Edith Elkind",
"Piotr Faliszewski",
"Martin Lackner",
"Piotr Skowron"
] | cs.GT | [
"cs.GT"
] |
The Condorcet Principle for Multiwinner Elections: From Shortlisting to Proportionality
Haris Aziz, Edith Elkind, Piotr Faliszewski, Martin Lackner, Piotr Skowron
=======================================================================================
We study two notions of stability in multiwinner elections that are based on the Condorcet criterion. The first notion was
introduced by Gehrlein: A committee is stable if each committee member is preferred to each non-member by a (possibly
weak) majority of voters. The second notion is called local stability (introduced in this paper): A size-k committee is
locally stable in an election with n voters if there is no candidate c and no group of more than n/(k+1)
voters such that each voter in this group prefers c to each committee member. We argue that Gehrlein-stable
committees are appropriate for shortlisting tasks, and that locally stable committees are better suited for applications
that require proportional representation. The goal of this paper is to analyze these notions in detail, explore their
compatibility with notions of proportionality, and investigate the computational complexity of related algorithmic tasks.
§ INTRODUCTION
The notion of a Condorcet winner is among the most important ones in
(computational) social
choice <cit.>. Consider a group of agents,
each with a preference order over a given
set of candidates. The Condorcet condition says that if
there exists a candidate c that is preferred to every other candidate
by a majority of agents (perhaps a different majority in each case),
then this candidate c should be seen as the collectively best
option. Such a candidate is known as the Condorcet winner.
In single-winner elections, that is, in settings where the goal is to choose
one candidate (presidential elections are a prime example here), there are strong arguments
for choosing a Condorcet winner whenever it exists. For example, in the case of presidential elections,
if a Condorcet winner existed but was not chosen as the country's
president, a majority of the voters might revolt.
(We note, however, that there are also arguments against rules that
choose Condorcet winners whenever they exist: for example, such
rules suffer from the no-show paradox <cit.>
and fail the reinforcement axiom <cit.>.)
In this paper, we consider multiwinner elections, that is,
settings where instead of choosing a single winner (say, the president)
we choose a collective body of a given size (say, a parliament).
The goal of our paper is to analyze generalizations of the
concept of a Condorcet winner to multiwinner elections.
There are several natural definitions of “a Condorcet committee” and we consider their merits and application
domains (we write “Condorcet committee” in quotes because several
notions could be seen as deserving this term and, thus, eventually we
do not use it for any of them).
First, we can take the approach of Gehrlein <cit.> and
Ratliff <cit.>, where we want the committee to be a
collection of high-quality individuals who do not necessarily need to
cooperate with each other (this is a natural approach, e.g., when we
are shortlisting a group of people for an academic position or for
some prize <cit.>).
In this case, each member of the “Condorcet committee” should be
preferred by a majority of voters to all the non-members.
Alternatively, there is the approach of Fishburn <cit.>
(also analyzed from an algorithmic perspective by Darmann <cit.>), where
we assume that the committee members have to work so closely with each
other that it only makes sense to consider voters' preferences over
entire committees rather than over individual candidates (this would be
natural in selecting, e.g., small working groups). In this case, a “Condorcet committee” is
a committee that is preferred to every other committee by a majority of
voters. However, this approach is of limited use when voters express
their preferences over individual candidates:
while such preferences can be lifted to preferences over committees,
e.g., by using a scoring function, in the presence of strong synergies
among the committee members the induced preferences over committees
are unlikely to offer a good approximation of voters' true preferences.
Therefore, we do not pursue this approach in our work.
Finally, there is a middle-ground approach, proposed by Elkind et
al. <cit.>, where the committee members focus
on representing the voters (this is the case, e.g., in parliamentary
elections). In this case, we compare committees against single
candidates: We say that a voter i prefers committee W to some
candidate c if there exists a candidate w in W (who can be seen as the
representative of this voter) such that i prefers w to c. Now we
could say that a “Condorcet committee” is one that is preferred to
each candidate outside the committee by a majority of voters. Indeed, Elkind et
al. <cit.> refer to such committees as
Condorcet winning sets.
Elkind et al. <cit.> were unable to find
an election with no Condorcet winning set of size three; their empirical results
suggest that such elections are very unlikely.
Thus, to use their approach in order to select large committees in a meaningful way,
we should focus on committees that are
preferred to unselected candidates by a large fraction of voters.
In particular, we argue that when n voters select k candidates,
the winning committee should be preferred to each non-member by roughly n-n/k
voters. The resulting concept, which we call
local stability, can be seen as
a translation of the notion of justified representation by
Aziz et al. <cit.> from the world of approval-based elections to ranked-ballot elections.
We also consider a stronger variant of this notion, which can be seen as an analogue
of extended justified representation from the work of Aziz et al. <cit.>.
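Local stability as defined here is straightforward to verify directly from the ballots. The sketch below is illustrative (toy preference profile only); rankings list the most preferred candidate first.

# Check local stability: no candidate outside the committee may be preferred
# to all committee members by more than n/(k+1) voters.  Toy data only.
def locally_stable(ballots, committee):
    n, k = len(ballots), len(committee)
    outside = [c for c in ballots[0] if c not in committee]
    for c in outside:
        support = sum(1 for b in ballots
                      if all(b.index(c) < b.index(w) for w in committee))
        if support > n / (k + 1):
            return False
    return True

ballots = [('a', 'b', 'c', 'd')] * 3 + [('c', 'd', 'a', 'b')]
print(locally_stable(ballots, ('a', 'c')))        # True for this profile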
The goal of our work is to contrast the approach based on the ideas of
Gehrlein <cit.> and Ratliff <cit.>, which we call Gehrlein stability,
with the approach based on Condorcet winning sets (i.e., local stability). By considering several restricted
domains (single-peaked, single-crossing, and a restriction implied by
the existence of political parties), we show that Gehrlein stable committees
are very well-suited for shortlisting (as already suggested by
Barberá and Coelho <cit.>),
whereas locally stable committees are better at providing proportional
representation.
From the point of view of the computational complexity, while in both
cases we show NP-hardness of testing the existence of a respective
“Condorcet committee”, we discover that a variant of the Gehrlein–Ratliff
approach leads to a polynomial-time algorithm.
§ PRELIMINARIES
For every natural number p, we let [p] denote the set {1, 2,
…, p}.
An election is a pair E = (C, V), where C = {c_1, …,
c_m} is a set of candidates and V = (v_1, …, v_n) is a
list of voters; we write |V| to denote the number of voters in
V. Each voter v∈ V is endowed with a linear preference
order over C, denoted by ≻_v. For ℓ∈ [m] we write
top_ℓ(v) to denote the ℓ candidates most preferred by
v. We write top(v) to denote the single most preferred candidate
of voter v, i.e., top_1(v) = {top(v)}, and for each c∈
C∖{top(v)} it holds that top(v) ≻_v c. We write
a≻_v b ≻_v … to indicate that v ranks a first and
b second, followed by all the other candidates in an arbitrary
order. Given two disjoint subsets of candidates S, T⊆ C,
S∩ T=∅, we write S≻_v T to indicate that v
prefers each candidate in S to each candidate in T.
A committee is a subset of C. A multiwinner voting rule ℛ
takes an election E=(C, V) and a positive integer k with
k≤ |C| as its input, and outputs a non-empty collection of
size-k committees. A multiwinner rule ℛ is said to be resolute if the set ℛ(E, k) is a singleton for each election
E=(C, V) and each committee size k.
Given an election E=(C, V) with |V|=n and two candidates c, d∈
C, we say that c wins the pairwise election between c and
d if more than n/2 voters in V prefer c to d; if exactly
n/2 voters in V prefer c to d, we say that the pairwise
election between c and d is tied. The majority graph of
an election E = (C,V) is a directed graph M(E) with vertex set C
and the following edge set:
{(c, d)∈ C^2 |c wins the pairwise
election between c and d}.
Observe that if the number of voters
n is odd, then M(E) is a tournament, i.e., for each pair
of candidates c, d∈ C exactly one of their connecting edges, (c, d) or (d, c),
is present in M(E). We will also consider
the weak majority graph of E, which we denote by W(E):
this is the directed graph obtained from M(E) by adding edges
(c, d) and (d, c) for each pair of candidates c, d such that
the pairwise election between c and d is tied.
A candidate c is said to be a Condorcet winner of an election E=(C, V)
if the outdegree of c in M(E) is |C|-1; c is said to be a weak Condorcet winner
of E if the outdegree of c in W(E) is |C|-1.
§ GEHRLEIN STABILITY AND LOCAL STABILITY
Gehrlein <cit.> proposed a simple, natural extension
of the notion of a weak Condorcet winner to the case of multiwinner
elections, and a similar definition was subsequently introduced by
Ratliff <cit.>. We recall and discuss Gehrlein's
definition, and then put forward a different approach to defining good
committees, which is inspired by the recent work on Condorcet winning
sets <cit.> and on justified representation in
approval-based committee elections <cit.>.
§.§ Gehrlein Stability
Gehrlein <cit.> and Ratliff <cit.>
base their approach on the following idea: a committee is unstable if
there exists a majority of voters who prefer a candidate that is not
currently in the committee to some current committee member.
Consider an election E = (C,V). A committee S ⊆ C is
weakly Gehrlein-stable if for each committee member c ∈ S
and each non-member d ∈ C ∖ S it holds that c wins or
ties the pairwise election between c and d. Committee S is
strongly Gehrlein-stable if for each c∈ S and each
d∉S the pairwise election between c and d is won by
c.
By definition, each strongly Gehrlein-stable committee is also weakly
Gehrlein-stable, and the two notions are equivalent if the majority
graph M(E) is a tournament. Further, a strongly (respectively,
weakly) Gehrlein-stable committee of size one is simply a Condorcet
winner (respectively, a weak Condorcet winner) of a given election.
More generally, each member of a strongly (respectively, weakly)
Gehrlein-stable committee would be a Condorcet winner (respectively, a
weak Condorcet winner) should the other committee members be removed
from the election. Note also that given a committee S, it is
straightforward to verify whether it is strongly (respectively, weakly)
Gehrlein-stable: it suffices to check that there is no candidate in
C∖ S that ties or defeats (respectively, defeats) some
member of S in their pairwise election.
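For concreteness, the following Python sketch (illustrative, with toy ballots ordered from most to least preferred) performs exactly this check for both variants.

from itertools import product

# Verify strong/weak Gehrlein stability of a committee from ranked ballots:
# every member must beat (strong) or at least tie (weak) every non-member.
def gehrlein_stable(ballots, committee, strong=True):
    n = len(ballots)
    outside = [d for d in ballots[0] if d not in committee]
    for c, d in product(committee, outside):
        wins = sum(1 for b in ballots if b.index(c) < b.index(d))
        if not (2 * wins > n or (2 * wins == n and not strong)):
            return False
    return True

ballots = [('a', 'b', 'c', 'd'), ('a', 'c', 'b', 'd'), ('b', 'a', 'd', 'c')]
print(gehrlein_stable(ballots, ('a', 'b')))       # True: a and b beat both c and d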
Gehrlein stability has received some attention in the literature. In
particular, Ratliff <cit.>,
Coelho <cit.>, and, very recently,
Kamwa <cit.>, proposed and analyzed a number of multiwinner
rules that satisfy weak Gehrlein stability, i.e., elect weakly
Gehrlein stable committees whenever they exist. These rules can be
seen as analogues of classic single-winner Condorcet-consistent rules,
such as Maximin or Copeland's rule (see, e.g., the survey by
Zwicker <cit.> for their definitions). Specifically,
each of these rules is based on a function that assigns non-negative
scores to committees in such a way that committees with score 0 are
exactly the weakly Gehrlein-stable committees; it then outputs the
committees with the minimum score.
Gehrlein Stability and Majority Graphs
Gehrlein stability is closely related to a classic tournament solution
concept, namely, the top cycle (see, e.g., the survey by Brandt, Brill
and Harrenstein <cit.> for an overview of tournament
solution concepts). Indeed, if the majority graph M(E) is a
tournament, then every top cycle in M(E) is a Gehrlein-stable
committee (recall that for tournaments weak Gehrlein stability is
equivalent to strong Gehrlein stability, so we use the term `Gehrlein
stability' to refer to both notions). In the presence of ties, the
relevant solution concepts are the Smith set and the Schwarz set: the
former corresponds to a weakly Gehrlein-stable committee and the
latter corresponds to a strongly Gehrlein-stable committee.
However, there is an important difference between Gehrlein committees
and each of these tournament solution concepts: When computing a
tournament solution, we aim to minimize the number of elements in the
winning set, whereas in the context of multiwinner elections our goal
is to find a weakly/strongly Gehrlein-stable committee of a given
size. This difference has interesting algorithmic implications. While
it is easy to find a Smith set for a given tournament, in
Section <ref> we show that it is NP-hard to determine
if a given election admits a weakly Gehrlein-stable committee of a
given size. On the other hand, we can extend the existing algorithm
for finding a Schwarz set to identify a strongly Gehrlein-stable
committee.
We defer most of our computational results until
Section <ref>, but we present the proof of this result
here because it implicitly provides a very useful characterization of
strongly Gehrlein-stable committees.
Given an election E=(C, V) and a positive integer k with k≤ |C|,
we can decide in polynomial time whether E admits a strongly Gehrlein-stable
committee of size k. Moreover, if such a committee exists, then it is unique.
Given an election E=(C, V), we let 𝒞 = {C_1, …, C_r}
be the list of strongly connected components of W(E); note that
a graph can be decomposed into strongly connected components in polynomial time.
Given two candidates a, b∈ C, we write a→ b if W(E) contains a directed path from a to b.
Consider two distinct sets C_i, C_j∈𝒞 and two candidates a∈ C_i, b∈ C_j.
Note that the pairwise election between a and b cannot be tied, since otherwise
a and b would be in the same set. Suppose without loss of generality
that a beats b in their pairwise election. Then for each a'∈ C_i, b'∈ C_j
we have a'→ a, a→ b, b→ b' and hence by transitivity
a'→ b'. On the other hand, we cannot have b'→ a', as this would
mean that a' and b' belong to the same connected component of W(E).
Thus, we can define a total order on 𝒞 as follows:
for C_i, C_j∈𝒞 we set C_i < C_j if i≠ j and a→ b
for each a∈ C_i, b∈ C_j. By the argument above, < is indeed a total order on 𝒞;
we can renumber the elements of 𝒞 so that C_1<…<C_r.
Then for a∈ C_i, b∈ C_j we have a→ b if and only if i≤ j.
Now, consider a strongly Gehrlein-stable committee S. Suppose that a→ b and b∈ S.
It is easy to see that a∈ S; this follows by induction on the length of the
shortest path from a to b in W(E).
Hence, every strongly Gehrlein-stable committee is of the form ⋃_i≤ s C_i
for some s∈ [r]. Thus, there is a strongly Gehrlein-stable committee of size k
if and only if ∑_i=1^s|C_i|=k for some s∈[r]. This argument also shows
that a strongly Gehrlein-stable committee of a given size is unique.
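The proof is constructive. A minimal Python sketch of the resulting algorithm follows; we assume that W(E) denotes the weak majority graph (an edge a→b whenever a ties or beats b) and use Kosaraju's algorithm, which emits the strongly connected components in the topological order of the condensation, i.e., exactly the order C_1<…<C_r from the proof:

def weak_majority_graph(profile, candidates):
    # edge a -> b whenever a ties or beats b in their pairwise election
    n = len(profile)
    g = {a: [] for a in candidates}
    for a in candidates:
        for b in candidates:
            if a != b:
                wins = sum(1 for r in profile if r.index(a) < r.index(b))
                if wins >= n - wins:
                    g[a].append(b)
    return g

def sccs_in_topological_order(g):
    # Kosaraju's algorithm: the first pass records the DFS finish order on g
    order, seen = [], set()
    for s in g:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(g[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    rev = {v: [] for v in g}
    for v in g:
        for w in g[v]:
            rev[w].append(v)
    comps, seen = [], set()
    for s in reversed(order):      # second pass on the reverse graph, sources first
        if s in seen:
            continue
        comp, frontier = [], [s]
        seen.add(s)
        while frontier:
            v = frontier.pop()
            comp.append(v)
            for w in rev[v]:
                if w not in seen:
                    seen.add(w)
                    frontier.append(w)
        comps.append(comp)
    return comps

def strong_gehrlein_committee(profile, candidates, k):
    # the unique strongly Gehrlein-stable committee of size k, or None
    committee = []
    for comp in sccs_in_topological_order(weak_majority_graph(profile, candidates)):
        committee.extend(comp)
        if len(committee) == k:
            return committee
        if len(committee) > k:
            return None
    return None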
We have argued that for tournaments the notions of weak Gehrlein stability
and strong Gehrlein stability coincide. We obtain the following corollary.
Consider an election E=(C, V). If W(E) is a tournament, we can decide
in polynomial time whether E has a weakly Gehrlein-stable committee of a given size.
Moreover, if such a committee exists, it is unique.
Gehrlein Stability and Enlargement Consistency
Interestingly, Barberá and Coelho <cit.>
have shown that weak Gehrlein stability is incompatible with enlargement
consistency: For every resolute multiwinner rule that
elects a weakly Gehrlein-stable committee whenever such a committee exists,
there exists an election E and
committee size k such that the only committee in (E,k) is not
a subset of the only committee in (E,k+1).[Enlargement
consistency is defined for resolute rules only. An analogue of this notion for
non-resolute rules was introduced by Elkind et
al. <cit.> under the name of
committee monotonicity.]
While this result means that such rules are not well-suited
for shortlisting tasks <cit.>,
it only holds for weak Gehrlein stability and not for strong Gehrlein stability.
Indeed, let us consider the following multiwinner variant of the Copeland
rule (it is very similar to the NED rule of
Coelho <cit.> and we will call it
strong-NED). Given an election E, the score of a candidate
is its outdegree in M(E). Strong-NED chooses the committee of k
candidates with the highest scores (to match the framework of
Barberá and Coelho <cit.>, the
rule should be resolute and so we break ties lexicographically). By
its very definition, strong-NED satisfies enlargement
consistency. Further, if there is a committee W that is strongly
Gehrlein-stable, then strong-NED chooses this committee (if there
are m candidates in total, then the outdegree of each candidate
from W is at least m-|W|, whereas the outdegree of each
candidate outside of W is at most m-|W|-1; Barberá and
Coelho <cit.> also gave this
argument, but assuming an odd number of voters).
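Under the same representation as above, a sketch of strong-NED (the score is the outdegree in M(E), i.e., the Copeland score; lexicographic tie-breaking makes the rule resolute):

def strong_ned(profile, candidates, k):
    n = len(profile)
    def copeland_score(a):
        # outdegree of a in the majority graph: strict pairwise wins
        return sum(1 for b in candidates if b != a and
                   sum(r.index(a) < r.index(b) for r in profile) > n / 2)
    return sorted(candidates, key=lambda a: (-copeland_score(a), a))[:k]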
§.§ Local Stability
An important feature of Gehrlein stability is that it is strongly
driven by the majority opinions. Suppose, for instance, that a group
of 1000 voters is to elect 10 representatives from the set {c_1,
…, c_20}, and the society is strongly polarized: 501 voters
rank the candidates as c_1≻…≻ c_20, whereas the
remaining 499 voters rank the candidates as c_20≻…≻
c_1. Then the unique Gehrlein-stable committee of size 10 consists
of candidates c_1, …, c_10, and the preferences of 499 voters
are effectively ignored. While this is appropriate in some settings,
in other cases we may want to ensure that candidates who are
well-liked by significant minorities of voters are also elected.
Aziz et al. <cit.> formalize this idea in the context of
approval voting, where each voter submits a set of candidates that she
approves of (rather than a ranked ballot). Specifically, they say that
committee S, |S|=k, provides justified representation in an
election (C, V) with |V|=n, where each voter i is associated
with an approval ballot A_i⊆ C, if there is no group of
voters V'⊆ V with |V'|≥⌈n/k⌉ such
that A_i∩ S=∅ for each i∈ V', yet there exists a
candidate c∈ C∖ S approved by all voters in
V'. Informally speaking, this definition requires that each
`cohesive' group of voters of size at least q=
⌈n/k⌉ is represented in the committee. The choice
of threshold q=⌈n/k⌉ (known as the Hare
quota) is natural in the context of approval voting: it ensures
that, when the electorate is composed of k equal-sized groups of
voters, with sets of candidates approved by each group being pairwise
disjoint, each group is allocated a representative.
Extending this idea to ordinal ballots and to an arbitrary threshold
q, we obtain the following definition.
Consider an election E = (C,V) with |V|=n and a positive value q∈ℚ.
A committee S violates local stability for quota q
if there exists a group V^*⊆ V with |V^*|≥ q
and a candidate c ∈ C ∖ S such that each voter from
V^* prefers c to each member of S; otherwise, S provides local stability for quota q.
Note that, while in the context of approval voting the notion of group cohesiveness
can be defined in absolute terms (a group is considered cohesive if there is a candidate
approved by all group members), for ranked ballots a cohesive group is defined relative
to a given committee (a group is cohesive with respect to S if all its members prefer
some candidate to S). Another important difference between the two settings is that,
while a committee that provides justified representation is guaranteed to exist
and can be found in polynomial time <cit.>, a committee that provides local stability
may fail to exist, even if we use the same value of the quota, i.e., q=⌈n/k⌉.
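Verifying that a given committee provides local stability is straightforward; a small Python sketch (names are ours; the quota q is passed explicitly, since several choices of quota are discussed below):

def locally_stable(profile, candidates, committee, q):
    for c in candidates:
        if c in committee:
            continue
        # voters who rank c above every committee member
        supporters = sum(1 for r in profile
                         if all(r.index(c) < r.index(s) for s in committee))
        if supporters >= q:
            return False    # c and its supporters witness a violation
    return True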
Fix an integer d≥ 2; let
X={x_1, …, x_d}, Y={y_1, …, y_d}, Z={z_1, …,
z_d}, and set C={a, b}∪ X∪ Y∪ Z. There are 4
voters with preferences a≻ b≻⋯ and 4 voters with
preferences b≻ a≻⋯. Also, for each i∈[d],
there are two voters with preferences x_i≻ y_i≻
z_i≻⋯, two voters with preferences y_i≻ z_i≻
x_i≻⋯, and two voters with preferences z_i≻
x_i≻ y_i≻⋯. Altogether, we have n=6d+8 voters.
Set k=2d+1; then for d≥ 2 we obtain ⌈n/k⌉
= 4. We will now argue that this election admits no locally
stable committee of size k for quota
q=⌈n/k⌉. Suppose for the sake of contradiction
that S is a locally stable committee of size k for this value
of the quota. Note first that for each i∈[d] we have |{x_i,
y_i, z_i}∩ S|≥ 2. Indeed, suppose that this is not the
case for some i∈[d]. By symmetry, we can assume without loss
of generality that y_i, z_i∉S. However, then there are
4 voters who prefer z_i to every member of the committee, a
contradiction with local stability. Thus, S contains at least
2d candidates in X∪ Y ∪ Z and hence |S∩{a, b}|≤
1. Thus at least one of a or b does not belong to the
committee and either the four a ≻ b ≻⋯ voters or
the four b ≻ a ≻⋯ voters witness that S is not
locally stable.
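For small d, the claim of the example can be confirmed by brute force, reusing the locally_stable checker from above (the candidate names below are ours):

from itertools import combinations

def example_profile(d):
    X = ["x%d" % i for i in range(d)]
    Y = ["y%d" % i for i in range(d)]
    Z = ["z%d" % i for i in range(d)]
    cands = ["a", "b"] + X + Y + Z
    def rank(top):                              # complete the ranking arbitrarily
        return top + [c for c in cands if c not in top]
    profile = 4 * [rank(["a", "b"])] + 4 * [rank(["b", "a"])]
    for x, y, z in zip(X, Y, Z):
        profile += 2 * [rank([x, y, z])]
        profile += 2 * [rank([y, z, x])]
        profile += 2 * [rank([z, x, y])]
    return profile, cands

profile, cands = example_profile(2)             # d = 2, so n = 20 and k = 5
n, k = len(profile), 5
q = -(-n // k)                                  # Hare quota, ceil(n/k) = 4
assert not any(locally_stable(profile, cands, set(S), q)
               for S in combinations(cands, k))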
In Section <ref>, we will use the idea from
Example <ref> to argue that it is NP-hard to decide whether a
given election admits a locally stable committee.
Definition <ref> does not specify a value of the quota
q. Intuitively, the considerations that should determine the choice
of quota are the same as for Single Transferable Vote (STV), and one
can choose any of the quotas that are used for STV (see, e.g., the
survey by Tideman <cit.>). In particular, for k=1 and the
Hare quota q=⌈n/k⌉ we obtain Pareto optimality: a
committee {a} of size k=1 is locally stable for quota
⌈n/k⌉ if there is no other candidate c such that
all voters prefer c to a. For k=1 and
q=⌈n/k+1⌉ (the Hagenbach-Bischoff
quota),
locally stable committees for quota q are those whose unique element
is a weak Condorcet winner; for k=1 and
q=⌊n/k+1⌋+1 (the Droop quota), a locally
stable committee for quota q has the Condorcet winner as its only
member.
For k=2 and q=⌈n/k⌉, locally stable committees are closely
related to Condorcet winning sets, as defined by Elkind et
al. <cit.>, and, more generally, locally
stable committees are related to θ-winning
sets <cit.>. Elkind et
al. <cit.> say that a set of candidates S is
a θ-winning set in an election (C, V) with |V|=n if
for each candidate c∈ C∖ S there are more than θ n
voters who prefer some member of S to c; a 1/2-winning
set is called a Condorcet winning set. Importantly, unlike
locally stable committees, θ-winning sets are defined in terms
of strict inequalities. If we replace `more than θ n' with `at
least θ n' in the definition of Elkind et
al. <cit.>, we obtain the definition of local
stability for quota q=(1-θ)n.
Elkind et al. <cit.> define a voting rule that
for a given election E and committee size k outputs a size-k
θ-winning set for the smallest possible θ. This rule, by
definition, outputs locally stable committees whenever they exist.
We remark that the 15-voter, 15-candidate election
described by Elkind et al. <cit.> is an
example of an election with no locally stable committee for
q=⌈n/k⌉ and k=2, thus complementing Example <ref>
(which works for odd k≥ 5).
For concreteness, from now on we fix the quota to be
q=⌊n/k+1⌋+1 (the Droop quota), and use the
expression `locally stable committee' to refer to locally stable
committees for this value of the quota. However, some of our results
extend to other values of q as well.
Full Local Stability
Aziz et al. <cit.> also proposed the notion of
extended justified representation, which deals with larger
groups of voters that, intuitively, are entitled to more than a single
representative. To apply their idea to ranked ballots, we need to
explain how voters evaluate possible deviations. We require a new
committee to be a Pareto improvement over the old one: given a committee S
and a size-ℓ set of candidates T, we say that a voter v
prefers T to S if there is a bijection μ from T to the set of the ℓ members of S that v ranks highest, such that for each c∈ T voter v weakly prefers
c to μ(c) and for some c∈ T voter v strictly prefers
c to μ(c). We now present our analogue of extended justified
representation for ranked ballots, which we call full local stability.
Consider an election (C,V) with |V|=n. We say that a committee
S, |S|=k, violates ℓ-local stability for
ℓ∈[k] if there exists a group of voters V^*⊆ V with
|V^*|≥⌊ℓ· n/k+1⌋+1 and a
set of ℓ candidates T, such that
each voter v∈ V^* prefers T to S; otherwise, S provides
ℓ-local stability. A committee S with |S|=k provides
full local stability if it provides ℓ-local stability
for all ℓ∈[k].
By construction, 1-local stability is simply local stability, and hence
every committee that provides full local stability also provides local stability.
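For complete strict rankings, testing whether a single voter prefers T to S amounts to comparing the sorted rank vectors of T and of the voter's |T| most preferred committee members position by position (the order-preserving bijection is optimal, and a strict improvement exists exactly when the two vectors differ). A brute-force ℓ-local stability check then looks as follows; it is exponential in ℓ, which the coNP-hardness result below suggests is hard to avoid:

from itertools import combinations

def prefers_set(ranking, T, S):
    t = sorted(ranking.index(c) for c in T)
    s = sorted(ranking.index(c) for c in S)[:len(T)]
    return t != s and all(a <= b for a, b in zip(t, s))

def ell_locally_stable(profile, candidates, committee, ell):
    n, k = len(profile), len(committee)
    threshold = ell * n // (k + 1) + 1
    for T in combinations(candidates, ell):     # exponential in ell
        if sum(1 for r in profile if prefers_set(r, T, committee)) >= threshold:
            return False                        # T and its supporters witness a violation
    return True

def fully_locally_stable(profile, candidates, committee):
    return all(ell_locally_stable(profile, candidates, committee, ell)
               for ell in range(1, len(committee) + 1))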
Local stability and full local stability
extend from committees to voting rules in a natural way.
A multiwinner voting rule satisfies (full) local
stability if for every election E=(C, V) and every target
committee size k such that in E the set of size-k committees
that provide (full) local stability is not empty, it holds that
every committee in (E) provides (full) local stability.
Weakly/strongly Gehrlein-stable rules can be defined in a similar manner.
Solid Coalitions and Dummett's Proportionality
Let us examine the relation between (full) local stability, and the
solid coalitions property and Dummett's proportionality. Both these
notions were used by Elkind et
al. <cit.> as indicators of voting
rules' ability to find committees that represent voters proportionally
(however, we give a slightly different definition than they give; see
explanation below).
Consider an election (C,V) with |V|=n. We say that a committee
S, |S|=k, violates the solid coalitions property if there
exists a candidate c ∉ S who is ranked first by some ⌈n/k⌉ voters. We say that a committee S, |S|=k,
violates Dummett's proportionality if there exists a set of
ℓ candidates Q with Q ∖ S ≠∅ and a set V'⊆ V of ⌈ℓ n/k⌉ voters such that each voter v∈ V'
ranks exactly the candidates of Q in the top ℓ positions.
A locally stable committee satisfies the solid coalitions property;
a fully locally stable committee satisfies Dummett's proportionality.
We present the proof for local stability; for full local stability
the same argument can be used. Consider an election E, a target
committee size k, and a committee S such that some ⌈n/k⌉ voters rank a candidate c ∈ C ∖ S
first. Since n/k > n/k+1, also ⌈n/k⌉≥⌊n/k+1⌋+1, and so the same group of voters
witnesses that S violates local stability.
The solid coalitions property and Dummett's proportionality are
usually defined as properties of multiwinner rules. In contrast,
Definition <ref> treats them as properties of
coalitions, which is essential for establishing a relation such as the
one given in Proposition <ref>. Indeed,
local stability as the property of a rule puts no restrictions on the
output of the rule for profiles for which there exists no locally
stable committees and, in particular, for such profiles local
stability does not guarantee the solid coalitions property.
§ THREE RESTRICTED DOMAINS
In Section <ref> we have argued that Gehrlein stability is a
majoritarian notion, whereas local stability is directed towards
proportional representation. Now we reinforce this intuition by
describing the structure of Gehrlein-stable and locally stable committees for three well-studied
restricted preference domains. Namely, we consider
single-crossing elections, single-peaked elections, and elections
where the voters have preferences over parties (modeled as large sets
of `similar' candidates).
The following observation will be useful in our analysis. Consider an
election E=(C, V) for which M(E) is a transitive tournament, i.e.,
if (a, b) and (b, c) are edges of M(E) then (a, c) is also an
edge of M(E). In such a case, the set of ordered pairs (a, b)
such that (a, b)∈ M(E) is a linear order on C and we refer to it
as the majority preference order. Given a positive integer k,
we let the centrist committee S_center consist of
the top k candidates in the majority preference order.
Theorem <ref> implies the following simple observation.
If M(E) is transitive then for each committee size k,
S_center is strongly Gehrlein-stable.
It is well known that if the number of voters is odd and the election
is either single-peaked or single-crossing (see definitions below),
then M(E) is a transitive tournament. Thus
Proposition <ref> is very useful in such settings.
§.§ Single-Crossing Preferences
The notion of single-crossing preferences was proposed by
Mirrlees <cit.> and Roberts <cit.>.
Informally speaking, an election is single-crossing if (the voters can
be ordered in such a way that) as we move from the first voter to the
last one, the relative order within each pair of candidates changes at
most once. For a review of examples where single-crossing preferences
can arise, we refer the reader to the work of Saporiti and
Tohmé <cit.>.
An election (C,V) with V=(v_1, …, v_n) is
single-crossing
[Our definition of single-crossing elections
assumes that the order of voters is fixed. More commonly, an election is defined to be single-crossing if voters
can be permuted so that the condition formulated in Definition <ref> holds. For our purposes, this distinction
is not important, and the approach we chose makes the presentation more compact.]
if for each pair of candidates a, b ∈ C such that
v_1 prefers a over b we have {i | a ≻_v_i b} = [t]
for some t∈[n].
Single-crossing elections have many desirable properties. In the
context of our work, the most important one is that if E=(C, V) is a
single-crossing election with an odd number of voters, then M(E) is
a transitive tournament. Moreover if |V|=2n'+1, the majority
preference order coincides with the preferences of the (n'+1)-st
voter <cit.>. By
Proposition <ref>, this means that for single-crossing
elections with an odd number of voters the centrist committee exists,
is strongly Gehrlein-stable, and consists of the top k candidates in
the preference ranking of the median voter, which justifies the term
centrist committee.
For a single-crossing election with an odd number of voters, the
centrist committee is strongly Gehrlein-stable.
Locally stable committees turn out to be very different. Let E = (C,
V) be a single-crossing election with |V|=n, and let k be the
target committee size; then the Droop quota for E is q =
⌊n/k+1⌋+1. We say that a size-k
committee S is single-crossing uniform for E if for each
ℓ∈ [k] it contains the candidate ranked first by voter
v_ℓ· q. Note that a single-crossing uniform committee need
not be unique: e.g., if all the voters rank the same candidate first,
then every committee containing this candidate is single-crossing
uniform.
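A sketch of the construction (we assume the voters are already listed in a single-crossing order and that there are enough voters for v_k· q to exist; since any superset of the pinned-down top choices qualifies, the remaining seats are filled arbitrarily):

def single_crossing_uniform_committee(profile, k):
    n = len(profile)
    q = n // (k + 1) + 1                 # Droop quota
    assert k * q <= n, "voter v_{kq} must exist"
    committee = []
    for ell in range(1, k + 1):
        top = profile[ell * q - 1][0]    # top choice of voter v_{ell * q} (1-based)
        if top not in committee:
            committee.append(top)
    for c in profile[0]:                 # pad up to size k with arbitrary candidates
        if len(committee) == k:
            break
        if c not in committee:
            committee.append(c)
    return committee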
Figure <ref>
shows a single-crossing election with 15 voters
over the candidate
set C = {a,b,c,d,e,f,g,h,i,j,k}.
The first voter ranks the candidates in the alphabetic order,
and the last voter ranks them in the reverse alphabetic order.
For readability, we list the top four-ranked candidates only.
For the target committee size 4, the centrist
committee (marked with a rectangle) is {c,d,e,f},
and the unique single-crossing uniform committee is
{b,d,g,i} (marked with dashed ellipses).
If we reorder the voters from v_15
to v_1, the unique single-crossing uniform committee is {c,d,h,j}.
We will now argue that
single-crossing uniform committees are locally stable.
For every single-crossing election E=(C, V)
and for every k∈[|C|] it holds that every size-k single-crossing
uniform committee for E is locally stable.
Fix a single-crossing election E=(C, V) with |V|=n
and a target committee size k; set
q=⌊n/k+1⌋+1.
Consider a committee S, |S|=k, that is single-crossing uniform
with respect to E. We will show that S is locally stable.
Consider an arbitrary candidate c∉S. Suppose first that some voter v_i
with i<q ranks c above all candidates in S. Let a be the candidate ranked first by voter v_q.
As a∈ S and E is single-crossing, each voter v_j with j≥ q
prefers a to c. Thus, there are at most q-1 voters who prefer c
to each member of S.
Now, suppose that some voter v_i with ℓ q < i < (ℓ+1)q for some ℓ∈[k-1]
ranks c above all candidates in S; let
a be the candidate ranked first by voter v_ℓ· q, and let b be the candidate ranked first by voter v_(ℓ+1)· q.
By construction we have a, b∈ S and by the single-crossing property a≠ b
(if a=b, then a and c would cross more than once).
Also, by the single-crossing property all voters v_j with j≤ℓ q
rank a above c and all voters v_j' with j'≥ (ℓ+1)q rank b above c.
Thus, there are at most q-1 voters who prefer c to each member of S.
Finally, suppose that some voter v_i with i>kq ranks c above all members of S;
let a be the candidate ranked first by voter v_k· q. We have a∈ S and
by the single-crossing property all voters v_j with j≤ kq rank a above c.
Thus, there are at most n-kq voters who may prefer c to a,
and q> n/k+1 implies n - qk < n - nk/k+1 = n/k+1 < q.
In each case, the number of voters who may prefer c to all members of S
is strictly less than q.
The following example shows that a single-crossing uniform committee can violate Gehrlein stability
and, similarly, that the centrist committee can violate local stability.
Let C={a, b, c}. Consider the single-crossing election
where three voters rank the candidates as a≻ b≻ c
and four voters rank the candidates as c≻ b≻ a.
Let k = 2. We have ⌊n/k+1⌋+1 = 3.
The committee {a, c} is single-crossing uniform for this election,
yet four voters out of seven prefer b to a.
The committee {b, c} is centrist, yet it is not locally stable
since there are q=3 voters who prefer a to
both b and c.
§.§ Single-Peaked Preferences
The class of single-peaked preferences,
first introduced by Black <cit.>, is perhaps the most
extensively studied restricted preference domain.
Let ◁ be a linear order over C, and for a voter v let top(v) denote v's most preferred candidate. We say that an election E=(C, V)
is single-peaked with respect to ◁ if for each voter v∈ V
and for each pair of candidates a, b ∈ C
such that top(v) ◁ a ◁ b or b ◁ a ◁ top(v) it holds that a ≻_v b.
We will refer to ◁ as a societal axis for E.
Just as in single-crossing elections, in single-peaked elections with an odd number of voters
the majority preference order is transitive and hence the centrist committee is well-defined
and strongly Gehrlein-stable.
For a single-peaked election with an odd number of voters, the
centrist committee is strongly Gehrlein-stable.
Moreover, we can define an analogue of a single-crossing uniform committee for single-peaked elections.
To this end, given an election E = (C, V) that is single-peaked with respect to the societal axis
◁, we reorder the voters so that for the new order V'=(v_1', …, v'_n)
it holds that top(v'_i) ◁ top(v'_j) implies i<j; we say
that an order of voters V' that has this property is ◁-compatible.
We can now use the same construction as in Section <ref>.
Specifically, given an election E=(C, V) with |V|=n that is single-peaked with respect to ◁
and a target committee size k, we set q=⌊n/k+1⌋+1,
and say that a committee S is single-peaked uniform for E if
for some ◁-compatible order of voters V'
we have top(v'_ℓ· q)∈ S for each ℓ∈[k].
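Since ordering the voters by the axis position of their top choice yields a ◁-compatible order, the construction reduces to the single-crossing one; a short sketch reusing the function given earlier:

def single_peaked_uniform_committee(profile, axis, k):
    pos = {c: i for i, c in enumerate(axis)}    # position of each candidate on the axis
    reordered = sorted(profile, key=lambda r: pos[r[0]])
    return single_crossing_uniform_committee(reordered, k)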
For every single-peaked election E=(C, V)
and for every k∈[|C|] it holds that every size-k single-peaked
uniform committee for E is locally stable.
Fix an election E=(C, V) with |C|=m, |V|=n that is single-peaked with respect to ◁
and a target committee size k. Assume without loss of generality that
◁ orders the candidates as c_1 ◁ … ◁ c_m and that V is ◁-compatible.
Consider a single-peaked uniform committee S of size k.
Recall that q=⌊n/k+1⌋+1.
Let c_ℓ=top(v_q) and c_r=top(v_k· q).
Consider a candidate c_j∉S.
Suppose first that j<ℓ. Then for each voter v_i with i≥ q
the candidate top(v_i) is either c_ℓ or some candidate to the right
of c_ℓ. Thus, all such voters prefer c_ℓ to c_j, and hence
there can be at most q-1 voters who prefer c_j to each member of S.
By a similar argument, if j>r, there are at most n-kq < q voters
who prefer c_j to each member of S.
It remains to consider the case ℓ<j<r. Let ℓ'=max{t| t<j, c_t∈ S},
r'=min{t| t>j, c_t∈ S}. Set i=max{i: top(v_i· q)=c_ℓ'};
then the most preferred candidate of voter v_(i+1)· q is c_r'.
Since the voters' preferences are single-peaked with respect to ◁,
v_i· q and all voters that precede her in V prefer c_ℓ'
to c_j, and v_(i+1)· q and all voters that appear after her in V prefer c_r'
to c_j. Thus, only the voters in the set V' = {v_i· q+1, …, v_i· q+q-1}
may prefer c_j to all voters in S, and |V'|≤ q-1.
Thus, for any choice of c_j∉S fewer than q voters prefer c_j to all members
of S, and hence S is locally stable.
The proof of Proposition <ref> is very similar
to the proof of Proposition <ref>; we omit it due to space constraints.
Observe that the election from Example <ref> is single-peaked
and committee {a, c} is single-peaked uniform for that election.
This shows that for single-peaked elections a single-peaked uniform committee
can violate Gehrlein stability
and the centrist committee can violate local stability.
§.§ Party-List Elections
When candidates are affiliated with political parties, it is not unusual for the voters'
preferences to be driven by party affiliations: a voter who associates herself with a political
party, ranks the candidates who belong to that party above all other candidates
(but may rank candidates that belong to other parties arbitrarily). In the presence
of a strong party discipline, we may additionally assume that all supporters of a given
party rank candidates from that party in the same way. We will call elections
with this property party-list elections.
An election E=(C, V) is said to be a party-list election
for a target committee size k if we can partition the set of candidates
C into pairwise disjoint sets C_1,…, C_p and the set of voters V into pairwise disjoint groups V_1, …, V_p
so that
[(i)]
* |C_i| ≥ k for each i ∈ [p],
* each voter from V_i prefers each candidate in C_i to each candidate in C∖ C_i,
* for each i ∈ [p]
all voters in V_i order the candidates in C_i in the same way.
Party-list elections are helpful for understanding the difference
between local stability and full local stability. Indeed, when all
voters have the same preferences over candidates, local stability only
ensures that a committee contains the unanimously most preferred
candidate. In particular, the committee that consists of the single
most preferred candidate and the k-1 least preferred candidates is
locally stable. On the other hand, full local stability imposes
additional constraints. For example, when preferences are unanimous,
only the committee that consists of the k most preferred candidates
satisfies full local stability. Generalizing this observation, we
will now show that in party-list elections a fully locally stable
committee selects representatives from each set C_i in proportion to
the number of voters in V_i.
Let E=(C, V) be a party-list election for a target committee size k,
and let (C_1, …, C_p) and (V_1, …, V_p) be the respective partitions of C and V.
Then for each i∈[p]
every committee S of size k that provides full local stability for E
contains all candidates ranked in top ⌊ k ·|V_i|/n⌋
positions by the voters in V_i.
Consider a committee S, |S|=k, that provides full local stability for E.
Fix i∈[p], let ℓ=⌊ k |V_i|/n⌋,
and let C'_i be the set of candidates ranked in top ℓ positions by each voter in V_i.
Let c be some candidate in C'_i. If c∉S, then voters in V_i prefer C'_i to S.
As
|V_i| ≥⌊|V_i| · k/k+1⌋+1 = ⌊ k|V_i|/n·n/k+1⌋+1 ≥⌊ℓn/k+1⌋+1 ,
this would mean that S violates ℓ-local stability for E,
a contradiction.
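The guaranteed seat counts are immediate to compute; a small sketch (instantiated, in the comment, with the election of the example that follows):

def party_list_lower_bounds(group_sizes, k):
    # party i with n_i supporters is guaranteed floor(k * n_i / n) of its
    # top candidates in every fully locally stable committee
    n = sum(group_sizes)
    return [k * size // n for size in group_sizes]

# e.g. groups of 8, 4 and 4 voters with k = 4:
# party_list_lower_bounds([8, 4, 4], 4)  ->  [2, 1, 1]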
Consider an election E=(C, V) with C=X∪ Y∪ Z,
X={x_1, …, x_4}, Y={y_1, …, y_4}, Z={z_1, …, z_4}
and |V|=16, where 8 voters rank the candidates as
x_1≻ x_2 ≻ x_3 ≻ x_4≻…,
4 voters rank the candidates as
y_1≻ y_2 ≻ y_3 ≻ y_4≻… and
4 voters rank the candidates as
z_1≻ z_2 ≻ z_3 ≻ z_4≻….
Let k=4. Clearly, E is a party-list election.
To provide full local stability,
a committee has to contain the top two candidates from X,
the top candidate from Y and the top candidate from Z.
Thus, {x_1, x_2, y_1, z_1} is the unique fully locally stable committee.
On the other hand, observe that in an election where two parties have
equal support, i.e., when C and V are partitioned into C_1, C_2
and V_1, V_2, respectively, and |V_1|=|V_2|, every committee S
that contains the top candidate in C_1 (according to voters in
V_1) and the top candidate in C_2 (according to voters in V_2),
provides local stability. Thus, local stability can capture the idea
of diversity to some extent, but not of fully proportional
representation.
Finally, note that Gehrlein stability does not offer any guarantees in
the party-list framework: If a party is supported by more than half of
the voters, then the top k candidates of this party form the unique
strongly Gehrlein-stable committee; if a party is supported by
fewer than half of the voters then it is possible that none of its
candidates is in a weakly Gehrlein-stable committee.
§ COMPUTATIONAL COMPLEXITY
We will now argue that finding stable committees can be computationally challenging,
both for weak Gehrlein stability and for local stability (recall that, in contrast,
for strong Gehrlein stability Theorem <ref> provides a polynomial-time algorithm).
Full local stability appears to be even more demanding:
we provide evidence that even checking whether a given committee
is fully locally stable is hard.
Given an election E = (C, V) and a target committee size k with k≤ |C|,
it is NP-complete to decide if there exists a weakly Gehrlein-stable committee
of size k for E.
It is immediate that this problem is in NP: given an election E =
(C, V), a target committee size k, and a committee S with
|S|=k, we can check that S has no incoming edges in M(E).
To show hardness, we provide a reduction from Partially
Ordered Knapsack. An instance of this problem is given by a list
of r ordered pairs of positive integers ℒ =
((s_1, w_1), …, (s_r, w_r)), a capacity bound b, a
target weight t, and a directed acyclic graph Γ=([r],
A). It is a `yes'-instance if there is a subset of indices
I⊆ [r] such that ∑_i∈ Is_i≤ b, ∑_i∈
Iw_i≥ t and for each directed edge (i, j)∈ A it holds that
j∈ I implies i∈ I. This problem is strongly NP-complete;
indeed, it remains NP-hard if s_i=w_i and w_i≤ r for all
i∈[r] <cit.>. Note that if s_i=w_i for
all i∈[r], we can assume that b=t, since otherwise we
obviously have a `no'-instance.
Given an instance ⟨ℒ, b, t, Γ⟩ of
Partially Ordered Knapsack with ℒ=((s_1,
w_1), …, (s_r, w_r)), s_i=w_i, w_i≤ r for all
i∈ [r] and b=t, we construct an
election
as follows. For each i∈[r], let C_i={c_i^1, …,
c_i^w_i} and set C= ⋃_i∈[r] C_i. We construct the
set of voters V and the voters' preferences so that the majority
graph of the resulting election (C, V) has the following
structure:
(1) for each i∈[r] the induced subgraph on C_i is a
strongly connected tournament;
(2) for each (i, j)∈ A there is an edge from each
candidate in C_i to each candidate in C_j;
(3) there are no other edges.
Using McGarvey's theorem, we can ensure that the number of voters
|V| is polynomial in |C|; as we have w_i≤ r for all i∈
[r], it follows that both the number of voters and the number of
candidates are polynomial in r. Finally, we let the target
committee size k be equal to the knapsack size t.
Let I be a witness that ⟨ℒ, b, t, Γ⟩ is a `yes'-instance of Partially Ordered Knapsack.
Then the set of candidates S = ⋃_i∈ IC_i is a weakly
Gehrlein-stable committee of size k: by construction, |S|=∑_i∈
I|C_i| = ∑_i∈ Iw_i = t, and the partial order constraints
ensure that S has no incoming edges in the weighted majority graph
of (C, V).
Conversely, suppose that S is a weakly Gehrlein-stable committee of size
k for (C, V). Note first that for each i∈ [r] it holds that
C_i∩ S≠∅ implies C_i⊆ S. Indeed, if we
have c∈ C_i∩ S, c'∈ C_i∖ S for some i∈[r] and
some c, c'∈ C_i then in the weighted majority graph of (C, V)
there is a path from c' to c. This path contains an edge that
crosses from C_i∖ S into C_i∩ S, a contradiction with
S being a weakly Gehrlein-stable committee. Thus, S=⋃_i∈ IC_i
for some I⊆ [r], and we have ∑_i∈ Iw_i=∑_i∈
I|C_i|=k = t. Moreover, for each directed edge (i, j)∈ A
such that C_j⊆ S we have C_i⊆ S: indeed, the
weighted majority graph of (C, V) contains edges from candidates
in C_i to candidates in C_j, so if C_i⊈S, at
least one of these edges would enter S, a contradiction with S
being a weakly Gehrlein-stable committee. It follows that I is a witness
that we have started with a `yes'-instance of Partially Ordered
Knapsack.
We obtain a similar result for locally stable committees.
Given an election E = (C, V) and a target committee size k, with
k≤ |C|, it is NP-complete to decide if there exists a
locally stable committee of size k for E.
It is easy to see that this problem is in NP: given an election (C,
V) together with a target committee size k and a committee S
with |S|=k, we can check for each c∈ C∖ S whether
there exist at least q=⌊n/k+1⌋+1
voters who prefer c to each member of S.
To prove NP-hardness, we reduce from 3-Regular Vertex
Cover. Recall that an instance of 3-Regular Vertex Cover is
given by a 3-regular graph G = (V, E) and a positive integer
t; it is a `yes'-instance if G admits a vertex cover of size at
most t, i.e., a subset of vertices V'⊆ V with |V'|≤
t such that {ν, ν'}∩ V'≠∅ for each {ν,
ν'}∈ E. This problem is known to be
NP-complete <cit.>.
Consider an instance (G, t) of 3-Regular Vertex Cover with
G=(V, E), V={ν_1, …, ν_r}. Note that we have
|E|=1.5r, and we can assume that t<r-1, since otherwise (G, t)
is trivially a `yes'-instance.
Given (G, t), we construct an
election
as follows. We set C=V∪ X ∪ Y ∪ Z, where X={x_1, …,
x_1.5r}, Y={y_1, …, y_1.5r}, Z = {z_1, …,
z_1.5r}. For each edge {ν, ν'}∈ E we construct one
voter with preferences ν≻ν'≻⋯ and one voter with
preferences ν'≻ν≻⋯; we refer to these voters as
the edge voters. Also, for each j ∈ [1.5r] we construct
two voters with preferences x_j≻ y_j≻ z_j≻⋯, two
voters with preferences y_j≻ z_j ≻ x_j≻⋯, and two
voters with preferences z_j≻ x_j ≻ y_j≻⋯; we refer
to these voters as the xyz-voters. We set k = t+3r. Note
that the number of voters in our instance is n = 2|E| + 6· 1.5r
= 12r. Thus, using the fact that 0<t<r-1, we can bound
n/k+1 as follows:
n/k+1 > 12r/4r= 3, and n/k+1 <
12r/3r= 4.
Thus the Droop quota is q = ⌊n/k+1⌋
+ 1 = 4.
Now, suppose that V' is a vertex cover of size at most t; we can
assume that |V'| is exactly t, as otherwise we can add arbitrary
t-|V'| vertices to V', and it remains a vertex cover. Then S =
V'∪ X∪ Y is a locally stable committee of size |S|=t+2·
1.5r= t+3r. Indeed, for each voter one of her top two candidates is
in the committee (for edge voters this follows from the fact that
V' is a vertex cover and for xyz-voters this is immediate from the
construction), so local stability can only be violated if for some
candidate c∉S there are at least q = 4 voters who rank
c first. However, by construction each candidate is ranked first
by at most three voters.
Conversely, suppose that S is a locally stable committee of size
t+3r. The argument in Example <ref> shows that |S∩{x_j, y_j, z_j}|≥ 2 for each j=1, …, 1.5r.
Hence, |S∩ V|≤ t. Now, suppose that S∩ V is not a
vertex cover for G. Consider an edge {ν, ν'} with ν,
ν'∉S. Since G is 3-regular, there are three edge voters
who rank ν first; clearly, these voters prefer ν to each
member of S. Moreover, there is an edge voter whose preference
order is ν' ≻ν≻…; this voter, too, prefers ν
to each member of S. Thus, we have identified four voters who
prefer ν to S, a contradiction with the local stability of
S. This shows that S∩ V is a vertex cover for G, and we
have already argued that |S∩ V|≤ t.
As we have observed in the proof of Theorem <ref>, it is possible to verify in
polynomial time that a given committee is locally stable. This is not
the case for full local stability, as the following theorem shows.
Given an election E=(C, V) and a committee S,
it is coNP-complete to decide whether S provides full local stability for E.
To see that this problem is in coNP, note that a certificate for a `no'-instance is an integer ℓ∈ [k], a set V^* of voters with |V^*|=⌊ℓ n/|S|+1⌋+1 and a set T of candidates with |T|=ℓ such that all voters in V^* prefer T to S.
For hardness, we reduce from the NP-complete Multicolored Clique problem <cit.> to the
complement of our problem.
An instance of Multicolored Clique is given by an undirected graph G=(U, ℰ),
a positive integer s, and a mapping (coloring) g: U→ [s];
it is a `yes'-instance if there exists a set of vertices {u_1,…,u_s}⊆ U with
g(u_i)=i for every i∈[s] such that {u_1,…,u_s} forms a clique in G.
We write N_a to denote the neighborhood of a vertex a∈ U, i.e., N_a={b∈ U|{a,b}∈ℰ}, and we write U_i to denote the set of all i-colored vertices, i.e., U_i={u∈ U:g(u)=i}.
We make a few additional assumptions, none of which affects the hardness of Multicolored Clique:
First, we assume that s>2; hardness clearly still holds in this case.
Further, we assume without loss of generality that s^2 divides |U| and that |U_i|=|U|/s; this can be achieved by adding isolated vertices.
Finally, we assume that vertices of the same color are not connected; edges within a color class are irrelevant for multicolored cliques and can be removed.
We construct an election as follows:
Let S={w_1,w_2,…,w_s+2} and C=U ∪ S.
We refer to candidates in U as vertex candidates and we say that u is an i-colored candidate if g(u)=i.
We create a voter v_a for each vertex a ∈ U;
this voter's preferences are
a≻ N_a ≻ w_1 ≻…≻ w_s+1≻ U∖ (N_a∪{a}) ≻ w_s+2,
where sets are ordered arbitrarily.
Let V_U={v_a| a∈ U}.
Furthermore, for every i,j∈[s] we create a set of voters V_i^j of size |V_i^j|=(s+1)·|U|/s^2.
For an integer z, let z denote the number (z s)+1.
Voters in V_i^j have preferences of the form
w_s+1 ≻ U_i≻ w_j≻ U_i+1≻ w_j+1≻…
≻ U_i+s-1≻ w_j+s-1≻ w_s+2.
Let V̅=⋃_i,j∈[s] V_i^j.
Finally, let V' contain |U|+s+1 voters of the form S≻ U.
We set V= V_U∪V̅∪ V' and have |V|=(s+3)· |U|+s+1.
Thus, for ℓ-local stability we have a quota of ⌊ℓ· |V|/|S|+1⌋+1=ℓ· |U|+⌊ℓ· (s+1)/s+3⌋+1.
Note that we have |V^*|≥ℓ· |U|+⌊ℓ· (s+1)/s+3⌋+1 if and only if |V^*|> ℓ· |U|+ ℓ· (s+1)/s+3; we will use this condition in the following proof.
Let us now prove that S does not provide full local stability for (C, V) if and only if (U, ℰ) has a clique of size s.
Let U' be a multicolored clique of size s in G, i.e., for all i∈ [s] it holds that U_i∩ U'≠∅.
We will show that S violates (s+1)-local stability.
Let us consider T={w_s+1}∪ U'; we claim that a sufficient number of voters prefers T to S.
Note that s voters corresponding to U' prefer T to S.
Furthermore, all voters in V̅ prefer T to S.
In total these are s+s^2· (s+1)·|U|/s^2 = (s+1)· |U| +s. We have to show that (s+1)|U| +s > (s+1)|U|+(s+1)· (s+1)/s+3. This is equivalent to s(s+3)>(s+1)^2, which holds for s ≥ 2. Hence S is not (s+1)-locally stable and thus does not provide full local stability.
For the converse direction, let us make the following useful observation: if T with |T|=ℓ contains an element that is ranked below the ℓ-th representative of voter v, then v does not prefer T to S.
Now let us first show that S provides ℓ-local stability for all ℓ∈{1,…,s,s+2}.
To see that S provides (s+2)-local stability, let T⊆ C with |T|=s+2 and let V^*⊆ V be a set of the necessary size, i.e., |V^*|> (s+2)· |U|+ (s+2)· (s+1)/s+3. Hence V^* has to contain voters from V'. But for them an improvement is not possible, since every v∈ V' ranks exactly the members of S in the top s+2 positions.
To see that S is 1-locally stable, note that |V_U| is lower than the quota for ℓ = 1, |V_U|< ⌊|V|/s+3⌋+1. Voters from V̅ and from V' have their top-ranked candidate in S. Hence no group of sufficient size can deviate.
For 1<ℓ≤ s, if T does not contain w_s+1, then voters from V̅ would not deviate. Since voters from V' would not deviate either, V^* would be too small:
|V^*| ≤ |V_U| = |U| < 2· |U|+ 2· (s+1)/s+3.
Hence we can assume that w_s+1∈ T.
Then, however, voters from V_U are excluded as w_s+1 is only their (s+1)-st representative.
Hence we only have to consider voters from V̅.
Note that T has to contain at least one vertex candidate, because otherwise T⊆ S.
Since all V_i^j are symmetric, we can assume without loss of generality that U_1∩ T≠∅, i.e., T contains a 1-colored vertex.
We distinguish whether T∩{w_1,…, w_s} is empty or not; in both cases we show that |V^*| cannot have a sufficient size.
If T∩{w_1,…, w_s}≠∅, we assume (again without loss of generality) that w_1∈ T.
Observe that if V_i^j prefers T to S, then by construction of voters in V_i^j
it has to hold that
T⊆{w_s+1}∪{w_j, w_⟨j+1⟩, …, w_⟨j+ℓ-2⟩}∪ U_i∪ U_⟨i+1⟩∪…∪ U_⟨i+ℓ-2⟩.
Since w_1∈ T, condition (<ref>) implies that 1∈{j, ⟨j+1⟩, …, ⟨j+ℓ-2⟩}.
Similarly, since T contains a 1-colored vertex, condition (<ref>) implies that 1∈{i, ⟨i+1⟩, …, ⟨i+ℓ-2⟩}.
We see that for both i and j there are ℓ-1 possible values. Similarly as before we infer that none of the voters from V_U prefers T to S—this is because w_s+1∈ T, and w_s+1 is only the (s+1)-st representative of voters in V_U. Hence T (with cardinality lower than s+1) cannot be desirable for them. Clearly, none of the voters from V' prefers T to S.
Thus, it follows that
|V^*| ≤ (ℓ-1)^2· (s+1)·|U|/s^2≤ (ℓ-1)· (s-1)(s+1)·|U|/s^2 < ℓ· |U| .
Hence the size of V^* cannot be sufficiently large.
Now we consider the case that T∩{w_1,…, w_s}=∅ and hence T⊆ U∪{w_s+1}.
If V_i^j prefers T to S, then T has to contain w_s+1, at least one element of U_i∪{w_j}, at least two elements of U_i∪ U_⟨i+1⟩∪{w_j,w_⟨j+1⟩}, …, and at least ℓ-1 elements of U_i∪…∪ U_⟨i+ℓ-2⟩∪{w_j,…,w_⟨j+ℓ-2⟩}.
Since T⊆ U∪{w_s+1}, we can assume without loss of generality that T⊆ U_1∪…∪ U_ℓ-1∪{w_s+1}.
Also, without loss of generality, we can assume that T contains a candidate from U_1. This implies that only voters from V_1^j (j arbitrary) may prefer T to S.
Now:
|V_1^1∪…∪ V_1^s|=s· (s+1)·|U|/s^2=|U|+|U|/s<2|U|≤ℓ|U|.
Similarly as before, none of the voters from V_U and V' prefers T to S.
Hence, also in this case, we have shown V^* cannot be sufficiently large.
We conclude that S satisfies ℓ-local stability for ℓ∈{2,…,s}.
We have established that S provides ℓ-local stability for all ℓ∈{1,…,s,s+2}.
Hence, if S fails full local stability, then it fails (s+1)-local stability.
Let V^*⊆ V and T⊆ C witness that S is not (s+1)-locally stable.
First, let us show that |V^*|≥ (s+1)· |U|+ s:
Since V^* witnesses that S is not (s+1)-locally stable, we know that |V^*|> (s+1)· |U|+ (s+1)· (s+1)/s+3.
Note that (s+1)^2/s+3=s-1+4/s+3. Since s>2, |V^*|≥ (s+1)· |U|+s.
Since |V^*|≥ (s+1)· |U|+ s and V^*∩ V'=∅ (no improvement is possible for voters in V'), V^* has to contain at least s voters from V_U.
First, we show that T∩ U contains a vertex of every color and |T∩ U| = s.
Then we are going to show that |V^*∩ V_U|=s.
We conclude the proof by showing that the corresponding vertices form a clique in G.
To show that T∩ U contains a vertex of every color, let us first observe that w_s+1∈ T; otherwise voters in V̅ would not prefer T over S and so V^* would not be of sufficient size.
Since T⊈S, there exists a j∈[s] such that w_j∉ T.
Now assume towards a contradiction that T∩ U contains no i-colored vertices.
Let x denote the number of colors which are not used in T; by our assumption x ≥ 1.
We are going to show that in this case V^* is not of sufficient size:
If T contains neither i-colored vertices nor w_j, then voters in V_i^j do not prefer T to S.
Thus, |V^*| contains at most (s+1)|U|-x(s+1)·|U|/s^2 voters from V̅.
Next, for i∈[s], if T contains an i-colored vertex, say vertex a, then only one i-colored voter prefers T to S and that is v_a. This follows from the fact that i-colored vertices are not connected; hence v_a is the only i-colored voter that ranks a above {w_1,…,w_s+1}, which is a necessary requirement for T (containing a) to be preferable to S.
If T does not contain an i-colored vertex, then all i-colored voters may prefer T to S; recall these are |U|/s many.
We see that |V^*| contains at most x·|U|/s+(s-x) voters from V_U. Further, |V^*| contains
no voters from V'.
This yields an upper-bound on the total number of voters in V^*:
|V^*| ≤ x·|U|/s+(s-x) + (s+1)|U|-x(s+1)·|U|/s^2
<s + (s+1)|U| ,
which yields a contradiction.
Hence T∩ U contains a vertex of every color.
Since |V^*|≥ s + (s+1)|U|, the set V^* has to contain at least s voters from V_U.
Observe that voter v_a with g(a)=i may only prefer T to S if a∈ T. This follows from the already established facts that T contains an i-colored vertex and, assuming this vertex is a, v_a is the only i-colored voter ranking a above {w_1,…,w_s+1}.
Furthermore, if v_a prefers T to S, it has to hold that T∩ U⊆ N(a)∪{a}.
Hence T∩ U is a clique. As T∩ U contains a vertex of every color, T∩ U is a multicolored clique.
Given an election E=(C, V) and a committee S,
it is W[1]-hard to decide whether S provides full local stability for E when parameterized by the committee size k.
The Multicolored Clique problem is W[1]-hard <cit.> and the reduction used in the proof of Theorem <ref> is a parameterized reduction (k=s+2).
We have not settled the complexity of finding a committee that
provides full local stability, but we expect this problem to be computationally hard as well.
More precisely, it belongs to the second level of the
polynomial hierarchy (membership verification can be expressed as
“there exists a committee such that each possible deviation by each
group of voters is not a Pareto improvement for them,” where both
quantifiers operate over objects of polynomial size);
we expect the problem to be complete for this complexity class.
§ CONCLUSIONS AND RESEARCH DIRECTIONS
We have considered two generalizations of the notion of a Condorcet
winner to the case of multi-winner elections: the one proposed by
Gehrlein <cit.> and Ratliff <cit.> and
the one defined in this paper (but inspired by the works of Aziz et
al. <cit.> and Elkind et
al. <cit.>). We have provided
evidence that the former approach is very majoritarian in spirit and
is well-suited for shortlisting tasks (in particular, we
have shown that the objection based on weakly Gehrlein-stable rules
necessarily failing enlargement consistency does not apply to strongly
Gehrlein-stable rules). On the other hand, we have given arguments
that local stability may lead to diverse committees, whereas full
local stability may lead to committees that represent the voters
proportionally. (We use qualifications such as “may lead” instead of
“leads” because, technically, (fully) locally stable rules
may behave arbitrarily on elections where (fully)
locally stable committees do not exist).
In our discussion, we have only very briefly mentioned rules
that are either Gehrlein-stable or locally stable.
Many such rules have been defined in the literature <cit.>,
and these rules call for a more detailed study,
both axiomatic and algorithmic. Our results
indicate that weakly Gehrlein-stable and locally stable rules
are unlikely to be polynomial-time computable; it would be desirable
to find practical heuristics or design efficient exponential algorithms.
|
http://arxiv.org/abs/1701.07481v3 | 20170125204056 | Learning Word-Like Units from Joint Audio-Visual Analysis | [
"David Harwath",
"James R. Glass"
] | cs.CL | [
"cs.CL",
"cs.CV"
] |
Learning Word-Like Units from Joint Audio-Visual Analysis
David Harwath and James R. Glass
==================================================================================
Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the words “lighthouse” within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.
§ INTRODUCTION
§.§ Problem Statement and Motivation
Automatically discovering words and other elements of linguistic structure from continuous speech has been a longstanding goal in computational linguistics, cognitive science, and other speech processing fields. Practically all humans acquire language at a very early age, but this task has proven to be an incredibly difficult problem for computers.
While conventional automatic speech recognition (ASR) systems have a long history and have recently made great strides thanks to the revival of deep neural networks (DNNs), their reliance on highly supervised training paradigms has essentially restricted their application to the major languages of the world, accounting for a small fraction of the more than 7,000 human languages spoken worldwide <cit.>. The main reason for this limitation is the fact that these supervised approaches require enormous amounts of very expensive human transcripts. Moreover, the use of the written word is a convenient but limiting convention, since there are many oral languages which do not even employ a writing system. In contrast, infants learn to communicate verbally before they are capable of reading and writing - so there is no inherent reason why spoken language systems need to be inseparably tied to text.
The key contribution of this paper has two facets. First, we introduce a methodology that not only discovers word-like units from continuous speech at the waveform level, with no additional text transcriptions or conventional speech recognition apparatus, but also jointly learns the semantics of those units via visual associations. Although we evaluate our algorithm on an English corpus, it could conceivably run on any language without requiring any text or associated ASR capability. Second, from a computational perspective, our method of speech pattern discovery runs in linear time. Previous work has presented algorithms for performing acoustic pattern discovery in continuous speech <cit.> without the use of transcriptions or another modality, but those algorithms are limited in their ability to scale by their inherent O(n^2) complexity, since they do an exhaustive comparison of the data against itself. Our method leverages correlated information from a second modality - the visual domain - to guide the discovery of words and phrases. This enables our method to run in O(n) time, and we demonstrate its scalability by discovering acoustic patterns in over 522 hours of audio.
§.§ Previous Work
A sub-field within speech processing that has garnered much attention recently is unsupervised speech pattern discovery. Segmental Dynamic Time Warping (S-DTW) was introduced by <cit.>, which discovers repetitions of the same words and phrases in a collection of untranscribed acoustic data. Many subsequent efforts extended these ideas <cit.>. Alternative approaches based on Bayesian nonparametric modeling <cit.> employed a generative model to cluster acoustic segments into phoneme-like categories, and related works aimed to segment and cluster either reference or learned phoneme-like tokens into higher-level units <cit.>.
While supervised object detection is a standard problem in the vision community, several recent works have tackled the problem of weakly-supervised or unsupervised object localization <cit.>. Although the focus of this work is discovering acoustic patterns, in the process we jointly associate the acoustic patterns with clusters of image crops, which we demonstrate capture visual patterns as well.
The computer vision and NLP communities have begun to leverage deep learning to create multimodal models of images and text. Many works have focused on generating annotations or text captions for images <cit.>. One interesting intersection between word induction from phoneme strings and multimodal modeling of images and text is that of <cit.>, who uses images to segment words within captions at the phoneme string level. Other work has taken these ideas beyond text, and attempted to relate images to spoken audio captions directly at the waveform level <cit.>. The work of <cit.> is the most similar to ours, in which the authors learned embeddings at the entire image and entire spoken caption level and then used the embeddings to perform bidirectional retrieval. In this work, we go further by automatically segmenting and clustering the spoken captions into individual word-like units, as well as the images into object-like categories.
§ EXPERIMENTAL DATA
We employ a corpus of over 200,000 spoken captions for images taken from the Places205 dataset <cit.>, corresponding to over 522 hours of speech data. The captions were collected using Amazon's Mechanical Turk service, in which workers were shown images and asked to describe them verbally in a free-form manner. The data collection scheme is described in detail in <cit.>, but the experiments in this paper leverage nearly twice the amount of data. For training our multimodal neural network as well as the pattern discovery experiments, we use a subset of 214,585 image/caption pairs, and we hold out a set of 1,000 pairs for evaluating the multimodal network's retrieval ability. Because we lack ground truth text transcripts for the data, we used Google's Speech Recognition public API to generate proxy transcripts which we use when analyzing our system. Note that the ASR was only used for analysis of the results, and was not involved in any of the learning.
§ AUDIO-VISUAL EMBEDDING NEURAL NETWORKS
We first train a deep multimodal embedding network similar in spirit to the one described in <cit.>, but with a more sophisticated architecture. The model is trained to map entire image frames and entire spoken captions into a shared embedding space; however, as we will show, the trained network can then be used to localize patterns corresponding to words and phrases within the spectrogram, as well as visual objects within the image by applying it to small sub-regions of the image and spectrogram. The model is comprised of two branches, one which takes as input images, and the other which takes as input spectrograms. The image network is formed by taking the off-the-shelf VGG 16 layer network <cit.> and replacing the softmax classification layer with a linear transform which maps the 4096-dimensional activations of the second fully connected layer into our 1024-dimensional multimodal embedding space. In our experiments, the weights of this projection layer are trained, but the layers taken from the VGG network below it are kept fixed. The second branch of our network analyzes speech spectrograms as if they were black and white images. Our spectrograms are computed using 40 log Mel filterbanks with a 25ms Hamming window and a 10ms shift. The input to this branch always has 1 color channel and is always 40 pixels high (corresponding to the 40 Mel filterbanks), but the width of the spectrogram varies depending upon the duration of the spoken caption, with each pixel corresponding to approximately 10 milliseconds worth of audio. The architecture we use is entirely convolutional and shown below, where C denotes the number of convolutional channels, W is filter width, H is filter height, and S is pooling stride.
* Convolution: C=128, W=1, H=40, ReLU
* Convolution: C=256, W=11, H=1, ReLU
* Maxpool: W=3, H=1, S=2
* Convolution: C=512, W=17, H=1, ReLU
* Maxpool: W=3, H=1, S=2
* Convolution: C=512, W=17, H=1, ReLU
* Maxpool: W=3, H=1, S=2
* Convolution: C=1024, W=17, H=1, ReLU
* Meanpool over entire caption
* L2 normalization
In practice during training, we restrict the caption spectrograms to all be 1024 frames wide (i.e., 10sec of speech) by applying truncation or zero padding. Additionally, both the images and spectrograms are mean normalized before training. The overall multimodal network is formed by tying together the image and audio branches with a layer which takes both of their output vectors and computes an inner product between them, representing the similarity score between a given image/caption pair. We train the network to assign high scores to matching image/caption pairs, and lower scores to mismatched pairs.
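A PyTorch sketch of the audio branch may make the architecture concrete; the padding values are our assumption (the description above does not fix them), and the layer names are ours:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioBranch(nn.Module):
    # spectrograms enter as (batch, 1, 40, frames) tensors
    def __init__(self, embed_dim=1024):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 128, kernel_size=(40, 1))   # collapses the 40 Mel bands
        self.conv2 = nn.Conv2d(128, 256, kernel_size=(1, 11), padding=(0, 5))
        self.conv3 = nn.Conv2d(256, 512, kernel_size=(1, 17), padding=(0, 8))
        self.conv4 = nn.Conv2d(512, 512, kernel_size=(1, 17), padding=(0, 8))
        self.conv5 = nn.Conv2d(512, embed_dim, kernel_size=(1, 17), padding=(0, 8))
        self.pool = nn.MaxPool2d(kernel_size=(1, 3), stride=(1, 2), padding=(0, 1))

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        x = self.pool(F.relu(self.conv4(x)))
        x = F.relu(self.conv5(x))
        x = x.mean(dim=3).squeeze(2)       # mean pool over time -> (batch, embed_dim)
        return F.normalize(x, p=2, dim=1)  # L2 normalization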
Within a minibatch of B image/caption pairs, let S_j^p, j=1, …, B denote the similarity score of the j^th image/caption pair as output by the neural network. Next, for each pair we randomly sample one impostor caption and one impostor image from the same minibatch. Let S_j^i denote the similarity score between the j^th caption and its impostor image, and S_j^c be the similarity score between the j^th image and its impostor caption. The total loss for the entire minibatch is then computed as
ℒ(θ) = ∑_j=1^B[max(0, S_j^c - S_j^p + 1)
+ max(0, S_j^i - S_j^p + 1)]
We train the neural network with 50 epochs of stochastic gradient descent using a batch size B = 128, a momentum of 0.9, and a learning rate of 1e-5 which is set to geometrically decay by a factor between 2 and 5 every 5 to 10 epochs.
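The sampled margin ranking loss can be written compactly; drawing the impostor for each pair as a random other index within the minibatch (requiring B ≥ 2) is one way to realize the description above:

import torch
import torch.nn.functional as F

def minibatch_margin_loss(image_emb, audio_emb):
    B = image_emb.size(0)
    scores = image_emb @ audio_emb.t()          # scores[i, j]: image i vs. caption j
    matched = scores.diag()                     # S_j^p
    idx = (torch.arange(B) + torch.randint(1, B, (B,))) % B   # random other index
    imp_image = scores[idx, torch.arange(B)]    # S_j^i: impostor image, own caption
    imp_caption = scores[torch.arange(B), idx]  # S_j^c: own image, impostor caption
    return (F.relu(imp_caption - matched + 1) +
            F.relu(imp_image - matched + 1)).sum()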
§ FINDING AND CLUSTERING AUDIO-VISUAL CAPTION GROUNDINGS
Although we have trained our multimodal network to compute embeddings at the granularity of entire images and entire caption spectrograms, we can easily apply it in a more localized fashion. In the case of images, we can simply take any arbitrary crop of an original image and resize it to 224x224 pixels. The audio network is even more trivial to apply locally, because it is entirely convolutional and the final mean pooling layer ensures that the output will be a 1024-dim vector no matter the extent of the input. The bigger question is where to locally apply the networks in order to discover meaningful acoustic and visual patterns.
Given an image and its corresponding spoken audio caption, we use the term grounding to refer to extracting meaningful segments from the caption and associating them with an appropriate sub-region of the image. For example, if an image depicted a person eating ice cream and its caption contained the spoken words “A person is enjoying some ice cream,” an ideal set of groundings would entail the acoustic segment containing the word “person” linked to a bounding box around the person, and the segment containing the word “ice cream” linked to a box around the ice cream. We use a constrained brute force ranking scheme to evaluate all possible groundings (with a restricted granularity) between an image and its caption. Specifically, we divide the image into a grid, and extract all of the image crops whose boundaries sit on the grid lines. Because we are mainly interested in extracting regions of interest and not high precision object detection boxes, to keep the number of proposal regions under control we impose several restrictions. First, we use a 10x10 grid on each image regardless of its original size. Second, we define minimum and maximum aspect ratios as 2:3 and 3:2 so as not to introduce too much distortion and also to reduce the number of proposal boxes. Third, we define a minimum bounding width as 30% of the original image width, and similarly a minimum height as 30% of the original image height. In practice, this results in a few thousand proposal regions per image.
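A sketch of this proposal generation (our own illustration; the grid rounding is an assumption):

    def image_crop_proposals(img_w, img_h, grid=10, min_ar=2/3, max_ar=3/2, min_frac=0.3):
        # Crops whose corners lie on a (grid x grid) lattice, subject to the
        # aspect-ratio and minimum-size constraints described above.
        xs = [round(img_w * i / grid) for i in range(grid + 1)]
        ys = [round(img_h * i / grid) for i in range(grid + 1)]
        boxes = []
        for x0 in xs:
            for x1 in xs:
                w = x1 - x0
                if w < min_frac * img_w:
                    continue
                for y0 in ys:
                    for y1 in ys:
                        h = y1 - y0
                        if h >= min_frac * img_h and min_ar <= w / h <= max_ar:
                            boxes.append((x0, y0, x1, y1))
        return boxes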
To extract proposal segments from the audio caption spectrogram, we similarly define a 1-dim grid along the time axis, and consider all possible start/end points at 10 frame (pixel) intervals. We impose minimum and maximum segment length constraints at 50 and 100 frames (pixels), implying that our discovered acoustic patterns are restricted to fall between 0.5 and 1 second in duration. The number of proposal segments will vary depending on the caption length, and typically number in the several thousands. Note that when learning groundings we consider the entire audio sequence, and do not incorporate the 10sec duration constraint imposed during training.
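The analogous audio proposals (a sketch; the boundary clipping is our assumption):

    def audio_segment_proposals(n_frames, step=10, min_len=50, max_len=100):
        # (start, end) intervals on a 10-frame grid, 0.5-1.0 s at a 10 ms shift
        return [(s, e)
                for s in range(0, n_frames, step)
                for e in range(s + min_len, min(s + max_len, n_frames) + 1, step)]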
Once we have extracted a set of proposed visual bounding boxes and acoustic segments for a given image/caption pair, we use our multimodal network to compute a similarity score between each unique image crop/acoustic segment pair. Each triplet of an image crop, acoustic segment, and similarity score constitutes a proposed grounding. A naive approach would be to simply keep the top N groundings from this list, but in practice we ran into two problems with this strategy. First, many proposed acoustic segments capture mostly silence due to pauses present in natural speech. We solve this issue by using a simple voice activity detector (VAD) which was trained on the TIMIT corpus <cit.>. If the VAD estimates that 40% or more of any proposed acoustic segment is silence, we discard that entire grounding. The second problem is that the top of the sorted grounding list is dominated by highly overlapping acoustic segments. This makes sense, because highly informative content words will show up in many different groundings with slightly perturbed start or end times. To alleviate this issue, when evaluating a grounding from the top of the proposal list we compare the interval intersection over union (IOU) of its acoustic segment against all acoustic segments already accepted for further consideration. If the IOU exceeds a threshold of 0.1, we discard the new grounding and continue moving down the list. We stop accumulating groundings once the scores fall below 50% of the top score in the “keep” list, or when 10 groundings have been added to the “keep” list. Figure <ref> displays a pictorial example of our grounding procedure.
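A sketch of this selection step (our own; the VAD silence filter is assumed to have been applied beforehand):

    def interval_iou(a, b):
        # Intersection-over-union of two (start, end) intervals.
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = (a[1] - a[0]) + (b[1] - b[0]) - inter
        return inter / union if union > 0 else 0.0

    def select_groundings(proposals, iou_thr=0.1, max_keep=10, score_frac=0.5):
        # proposals: (score, segment, box) triplets sorted by descending score.
        keep = []
        for score, seg, box in proposals:
            if keep and (len(keep) >= max_keep or score < score_frac * keep[0][0]):
                break
            if all(interval_iou(seg, k[1]) <= iou_thr for k in keep):
                keep.append((score, seg, box))
        return keep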
Once we have completed the grounding procedure, we are left with a small set of regions of interest in each image and caption spectrogram. We use the respective branches of our multimodal network to compute embedding vectors for each grounding's image crop and acoustic segment. We then employ k-means clustering separately on the collection of image embedding vectors as well as the collection of acoustic embedding vectors. The last step is to establish an affinity score between each image cluster ℐ and each acoustic cluster 𝒜; we do so using the equation
Affinity(ℐ, 𝒜) = ∑_𝐢∈ℐ∑_𝐚∈𝒜𝐢^⊤𝐚·Pair(𝐢, 𝐚)
where 𝐢 is an image crop embedding vector, 𝐚 is an acoustic segment embedding vector, and Pair(𝐢, 𝐚) is equal to 1 when 𝐢 and 𝐚 belong to the same grounding pair, and 0 otherwise. After clustering, we are left with a set of acoustic pattern clusters, a set of visual pattern clusters, and a set of linkages describing which acoustic clusters are associated with which image clusters. In the next section, we investigate these clusters in more detail.
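Since Pair(𝐢, 𝐚) is nonzero only for grounding pairs, the double sum reduces to a single pass over the groundings (a sketch, assuming NumPy):

    import numpy as np

    def cluster_affinity(img_vecs, aud_vecs, img_assign, aud_assign):
        # img_vecs, aud_vecs: (N, D) embeddings of the N grounding pairs
        # (row k of each array comes from the same grounding);
        # img_assign, aud_assign: integer cluster index per row.
        affinity = np.zeros((img_assign.max() + 1, aud_assign.max() + 1))
        for k in range(len(img_vecs)):
            affinity[img_assign[k], aud_assign[k]] += img_vecs[k] @ aud_vecs[k]
        return affinity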
§ EXPERIMENTS AND ANALYSIS
We trained our multimodal network on a set of 214,585 image/caption pairs, and vetted it with an image search (given caption, find image) and annotation (given image, find caption) task similar to the one used in <cit.>. The image annotation and search recall scores on a 1,000 image/caption pair held-out test set are shown in Table <ref>. Also shown in this table are the scores achieved by a model which uses the ASR text transcriptions for each caption instead of the speech audio. The text captions were truncated/padded to 20 words, and the audio branch of the network was replaced with a branch with the following architecture:
* Word embedding layer of dimension 200
* Temporal Convolution: C=512, W=3, ReLU
* Temporal Convolution: C=1024, W=3
* Meanpool over entire caption
* L2 normalization
One would expect that access to ASR hypotheses should improve the recall scores, but the performance gap is not enormous. Access to the ASR hypotheses provides a relative improvement of approximately 21.8% for image search R@10 and 12.5% for annotation R@10 compared to using no transcriptions or ASR whatsoever.
We performed the grounding and pattern clustering steps on the entire training dataset, which resulted in a total of 1,161,305 unique grounding pairs. For evaluation, we wish to assign a label to each cluster and cluster member, but this is not completely straightforward since each acoustic segment may capture part of a word, a whole word, multiple words, etc. Our strategy is to force-align the Google recognition hypothesis text to the audio, and then assign a label string to each acoustic segment based upon which words it overlaps in time. The alignments are created with the help of a Kaldi <cit.> speech recognizer based on the standard WSJ recipe and trained using the Google ASR hypothesis as a proxy for the transcriptions. Any word whose duration is overlapped 30% or more by the acoustic segment is included in the label string for the segment. We then employ a majority vote scheme to derive the overall cluster labels. When computing the purity of a cluster, we count a cluster member as matching the cluster label as long as the overall cluster label appears in the member's label string. In other words, an acoustic segment overlapping the words “the lighthouse” would receive credit for matching the overall cluster label “lighthouse”. A breakdown of the segments captured by two clusters is shown in Table <ref>. We investigated some simple schemes for predicting highly pure clusters, and found that the empirical variance of the cluster members (average squared distance to the cluster centroid) was a good indicator. Figure <ref> displays a scatter plot of cluster purity weighted by the natural log of the cluster size against the empirical variance. Large, pure clusters are easily predicted by their low empirical variance, while a high variance is indicative of a garbage cluster.
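A sketch of the labeling and purity computation (our own; voting over individual words is a simplifying assumption):

    from collections import Counter

    def cluster_label_and_purity(member_labels):
        # member_labels: one label string per acoustic segment, i.e. the words
        # the segment overlaps in the forced alignment.
        votes = Counter(w for s in member_labels for w in s.split())
        label = votes.most_common(1)[0][0]
        # a member counts as correct if the cluster label appears in its string,
        # so "the lighthouse" matches the cluster label "lighthouse"
        purity = sum(label in s.split() for s in member_labels) / len(member_labels)
        return label, purity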
Ranking a set of k=500 acoustic clusters by their variance, Table <ref> displays some statistics for the 50 lowest-variance clusters. We see that most of the clusters are very large and highly pure, and their labels reflect interesting object categories being identified by the neural network. We additionally compute the coverage of each cluster by counting the total number of instances of the cluster label anywhere in the training data, and then compute what fraction of those instances were captured by the cluster. There are many examples of high coverage clusters, e.g. the “skyscraper” cluster captures 84% of all occurrences of the word “skyscraper”, while the “baseball” cluster captures 86% of all occurrences of the word “baseball”. This is quite impressive given the fact that no conventional speech recognition was employed, and neither the multimodal neural network nor the grounding algorithm had access to the text transcripts of the captions.
To get an idea of the impact of the k parameter as well as a variance-based cluster pruning threshold based on Figure <ref>, we swept k from 250 to 2000 and computed a set of statistics shown in Table <ref>. We compute the standard overall cluster purity evaluation metric in addition to the average coverage across clusters. The table shows the natural tradeoff between cluster purity and redundancy (indicated by the average cluster coverage) as k is increased. In all cases, the variance-based cluster pruning greatly increases both the overall purity and average cluster coverage metrics. We also notice that more unique cluster labels are discovered with a larger k.
Next, we examine the image clusters. Figure <ref> displays the 9 most central image crops for a set of 10 different image clusters, along with the majority-vote label of each image cluster's associated audio cluster. In all cases, we see that the image crops are highly relevant to their audio cluster label. We include many more example image clusters in Appendix A.
In order to examine the semantic embedding space in more depth, we took the top 150 clusters from the same k=500 clustering run described in Table <ref> and performed t-SNE <cit.> analysis on the cluster centroid vectors. We projected each centroid down to 2 dimensions and plotted their majority-vote labels in Figure <ref>. Immediately we see that different clusters which capture the same label closely neighbor one another, indicating that distances in the embedding space do indeed carry information discriminative across word types (and suggesting that a more sophisticated clustering algorithm than k-means would perform better). More interestingly, we see that semantic information is also reflected in these distances. The cluster centroids for “lake,” “river,” “body,” “water,” “waterfall,” “pond,” and “pool” all form a tight meta-cluster, as do “restaurant,” “store,” “shop,” and “shelves,” as well as “children,” “girl,” “woman,” and “man.” Many other semantic meta-clusters can be seen in Figure <ref>, suggesting that the embedding space is capturing information that is highly discriminative both acoustically and semantically.
Because our experiments revolve around the discovery of word and object categories, a key question to address is the extent to which the supervision used to train the VGG network constrains or influences the kinds of objects learned. Because the 1,000 object classes from the ILSVRC2012 task <cit.> used to train the VGG network were derived from WordNet synsets <cit.>, we can measure the semantic similarity between the words learned by our network and the ILSVRC2012 class labels by using synset similarity measures within WordNet. We do this by first building a list of the 1,000 WordNet synsets associated with the ILSVRC2012 classes. We then take the set of unique majority-vote labels associated with the discovered word clusters for k=500, filtered by setting a threshold on their variance (σ^2 ≤ 0.65) so as to remove garbage clusters, leaving us with 197 unique acoustic cluster labels. We then look up each cluster label in WordNet, and compare all noun senses of the label to every ILSVRC2012 class synset according to the path similarity measure. This measure describes the distance between two synsets in a hyponym/hypernym hierarchy, where a score of 1 represents identity and lower scores indicate less similarity. We retain the highest score between any sense of the cluster label and any ILSVRC2012 synset. Of the 197 unique cluster labels, only 16 had a distance of 1 from any ILSVRC12 class, which would indicate an exact match. A path similarity of 0.5 indicates one degree of separation in the hyponym/hypernym hierarchy - for example, the similarity between “desk” and “table” is 0.5. 47 cluster labels were found to have a similarity of 0.5 to some ILSVRC12 class, leaving 134 cluster labels whose highest similarity to any ILSVRC12 class was less than 0.5. In other words, more than two thirds of the highly pure pattern clusters learned by our network were dissimilar to all of the 1,000 ILSVRC12 classes used to pretrain the VGG network, indicating that our model is able to generalize far beyond the set of classes found in the ILSVRC12 data. We display the labels of the 40 lowest-variance acoustic clusters along with the name and similarity score of their closest ILSVRC12 synset in Table <ref>.
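A sketch of this comparison using NLTK's WordNet interface (the synset list for the ILSVRC2012 classes is assumed to be prepared separately):

    from nltk.corpus import wordnet as wn

    def max_path_similarity(cluster_label, ilsvrc_synsets):
        # Highest path similarity between any noun sense of the label
        # and any ILSVRC2012 class synset.
        best = 0.0
        for sense in wn.synsets(cluster_label, pos=wn.NOUN):
            for target in ilsvrc_synsets:
                sim = sense.path_similarity(target)
                if sim is not None and sim > best:
                    best = sim
        return best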
§ CONCLUSIONS AND FUTURE WORK
In this paper, we have demonstrated that a neural network trained to associate images with the waveforms representing their spoken audio captions can successfully be applied to discover and cluster acoustic patterns representing words or short phrases in untranscribed audio data. An analogous procedure can be applied to visual images to discover visual patterns, and then the two modalities can be linked, allowing the network to learn, for example, that spoken instances of the word “train” are associated with image regions containing trains. This is done without the use of a conventional automatic speech recognition system and zero text transcriptions, and therefore is completely agnostic to the language in which the captions are spoken. Further, this is done in O(n) time with respect to the number of image/caption pairs, whereas previous state-of-the-art acoustic pattern discovery algorithms which leveraged acoustic data alone run in O(n^2) time. We demonstrate the success of our methodology on a large-scale dataset of over 214,000 image/caption pairs comprising over 522 hours of spoken audio data, which is to our knowledge the largest scale acoustic pattern discovery experiment ever performed. We have shown that the shared multimodal embedding space learned by our model is discriminative not only across visual object categories, but also acoustically and semantically across spoken words.
The future directions in which this research could be taken are incredibly fertile. Because our method creates a segmentation as well as an alignment between images and their spoken captions, a generative model could be trained using these alignments. The model could provide a spoken caption for an arbitrary image, or even synthesize an image given a spoken description. Modeling improvements are also possible, aimed at the goal of incorporating both visual and acoustic localization into the neural network itself. The same framework we use here could be extended to video, enabling the learning of actions, verbs, environmental sounds, and the like. Additionally, by collecting a second dataset of captions for our images in a different language, such as Spanish, our model could be extended to learn the acoustic correspondences for a given object category in both languages. This paves the way for creating a speech-to-speech translation model not only with absolutely zero need for any sort of text transcriptions, but also with zero need for directly parallel linguistic data or manual human translations.
§ ADDITIONAL CLUSTER VISUALIZATIONS
[Image grids omitted: each montage shows the nine most central image crops for one image cluster. The associated audio cluster labels, row by row, are: beach, cliff, pool, desert, field; chair, table, staircase, statue, stone; church, forest, mountain, skyscraper, trees; waterfall, windmills, window, city, bridge; flowers, man, wall, archway, baseball; boat, shelves, cockpit, girl, children; building, rock, kitchen, plant, hallway.]
|
http://arxiv.org/abs/1701.07697v5 | 20170126133819 | An Introduction to Classic DEVS | [
"Yentl Van Tendeloo",
"Hans Vangheluwe"
] | cs.OH | [
"cs.OH",
"I.6.2"
] |
Yentl Van Tendeloo University of Antwerp, Belgium
Yentl.VanTendeloo@uantwerpen.be
Hans Vangheluwe University of Antwerp - Flanders Make, Belgium; McGill University, Canada
Hans.Vangheluwe@uantwerpen.be
Discrete-Event Modelling for Queueing Systems and Performance Analysis
Yentl Van Tendeloo Hans Vangheluwe
Learning Objectives
After completing this chapter we expect you to be able to:
* Understand the difference between DEVS and other (similar) formalisms
* Explain the semantics of a given model
* Understand the relation and difference between a model and its simulator
* Apply DEVS to a simple queueing problem
* Understand the major shortcomings of DEVS and their proposed solutions
§ ACKNOWLEDGEMENT
This work was partly funded with a PhD fellowship grant from the Research Foundation - Flanders (FWO).
Partial support by the Flanders Make strategic research centre for the manufacturing industry
is also gratefully acknowledged.
|
http://arxiv.org/abs/1701.07679v2 | 20170126130027 | The detection of variable radio emission from the fast rotating magnetic hot B-star HR7355 and evidence for its X-ray aurorae | [
"P. Leto",
"C. Trigilio",
"L. Oskinova",
"R. Ignace",
"C. S. Buemi",
"G. Umana",
"A. Ingallinera",
"H. Todt",
"F. Leone"
] | astro-ph.SR | [
"astro-ph.SR"
] |
In this paper we investigate the multiwavelength properties of the magnetic early B-type star HR 7355. We present its radio light curves at several frequencies, taken with the Karl G. Jansky Very Large Array, and X-ray spectra, taken with the XMM-Newton X-ray telescope.
Modeling of the radio light curves for the Stokes I and V
provides a quantitative analysis of the HR 7355 magnetosphere.
A comparison between HR 7355 and a similar analysis for the Ap star CU Vir allows us to study how the different physical parameters of the two stars affect
the structure of the respective magnetospheres where the non-thermal electrons originate.
Our model includes a cold thermal plasma component that accumulates
at high magnetic latitudes and influences the radio regime, but does
not give rise to X-ray emission.
Instead, the thermal X-ray emission
arises from shocks generated by wind stream collisions close
to the magnetic equatorial plane.
The analysis of the X-ray spectrum of HR 7355 also suggests
the presence of non-thermal radiation.
Comparison of the spectral index of the power-law X-ray energy distribution
with that of the non-thermal electron energy distribution indicates
that the non-thermal X-ray component could be the auroral signature of
the non-thermal electrons that impact the stellar surface, the same non-thermal
electrons that are responsible for the observed radio emission.
On the basis of our analysis, we suggest a novel model that simultaneously
explains the X-ray and the radio features of HR 7355 and is likely
relevant for magnetospheres of other magnetic early type stars.
stars: early-type – stars: chemically peculiar – stars: individual: HR 7355 – stars: magnetic field – radio continuum: stars – X-rays: stars.
§ INTRODUCTION
Stellar magnetism at the top of the main sequence is not typical, but
neither is it an extremely rare phenomenon.
In fact about 10% of the OB-type stars display strong and stable magnetic fields
<cit.>.
The hot magnetic stars are mainly characterized as oblique magnetic rotators (OMR):
a dipolar magnetic field
topology with field axis misaligned with respect to the rotation
axis <cit.>. The
existence of such well-ordered magnetic fields is a cause of inhomogeneous
photospheres, giving rise to observable photometric, spectroscopic
and magnetic variability that can be explained in the framework of
the OMR. Early type magnetic stars are sufficiently
hot to produce a radiatively driven stellar wind that, in the presence of their
large-scale magnetic fields may be strongly aspherical. The wind plasma
accumulates at low magnetic latitudes (inner magnetosphere),
whereas it can freely propagate along directions near the magnetic poles <cit.>.
Observable signatures of plasma trapped inside
stellar magnetospheres can be recognized in the UV spectra
<cit.> and in the Hα line
<cit.>.
The interaction of a radiatively driven wind with the stellar
magnetosphere has been well studied. As examples, see
<cit.> for an application of the <cit.>
model to radiatively driven winds,
wind compression models with magnetic fields (WCFields) <cit.>,
and the magnetically torqued disk (MTD) model <cit.>.
Such interaction
causes an accumulation of hot material close to the magnetic
equatorial plane, as described by the magnetically confined wind
shock (MCWS) model <cit.>.
The wind plasma arising from the two opposite hemispheres collides
close to the magnetic equatorial plane, shocking the plasma
to radiate X-rays
(<cit.> recently reviewed the X-ray emission from magnetic hot stars).
In the presence of a strong magnetic field, the
stellar wind plasma is confined within the stellar magnetosphere and
forced to rigidly co-rotate with the star
<cit.>.
The prototype of a rigidly rotating magnetosphere (RRM)
is σ Ori E <cit.>.
Evidence of circumstellar matter bound to the strong stellar magnetic field
has been reported in a few other cases
<cit.>.
The stellar rotation plays an important role in establishing the size of
the RRM, and the density
of the plasma trapped inside.
The rotation works in opposition to the gravitational infall
of the magnetospheric plasma, leading to a large centrifugal
magnetosphere (CM) <cit.>.
The existence of a CM filled by stellar wind material
is a suitable condition to give rise to non-thermal radio continuum
emission that was first measured for
peculiar magnetic B and A stars <cit.>.
In accord with the OMR model, their radio
emission is cyclic owing to stellar rotation
<cit.>, suggestive of optically thick
emission arising from a stable RRM.
The radio emission features
are characterized by a simple dipolar magnetic field topology
and have been successfully reproduced using a 3D model that computes
the gyrosynchrotron emission
<cit.>.
In this model, the non-thermal electrons
responsible for the radio emission originate in magnetospheric regions
far from the stellar surface, where the kinetic energy density of
the gas is high enough to break the magnetic field lines, forming
current sheets. These regions are the sites where the mildly
relativistic electrons originate. In the magnetic equator, at
around the Alfvén radius, there is a transitional magnetospheric
“layer” between the inner confined plasma and the escaping wind.
Energetic electrons that recirculate through this layer back to the
inner magnetosphere radiate radio by the gyrosynchrotron emission
mechanism.
The analysis of radio emissions from magnetic early-type stars
is a powerful diagnostic tool for the study of the topology
of their magnetospheres. The radio radiation at different frequencies
probes the physical conditions of the stellar magnetosphere
at different depths, even when the topology is complex <cit.>.
Hence, the radio emission of the hot magnetic stars
provides a favored window to study
the global magnetic field topology, the spatial stratification of the thermal electron density,
the non-thermal electron number density, and interactions between stellar
rotation, wind, and magnetic field.
In fact, the above physical parameters can be derived by comparing
the multi-wavelength radio light curves, for the total and circularly polarized flux density,
with synthetic light curves using our 3D theoretical model.
It is then possible to study how stellar properties such as rotation, wind, and magnetic field geometry affect the efficiency of the electron
acceleration mechanism.
In particular,
it is important to apply the radio diagnostic techniques on a sample of
magnetic early-type stars that differ
in their stellar rotation periods, magnetic field strengths, and field geometries.
To this end, we conducted a radio survey of a representative sample of hot magnetic stars
using the Karl G. Jansky Very Large Array (VLA).
These stars probe different combinations of source parameters owing to their
different physical properties.
This paper presents the first results of this extensive study.
Here we present the analysis of the radio emission from the fast rotating, hot
magnetic star HR 7355. We were able to
reproduce multi-wavelength radio light curves for the total and the
circularly polarized flux density. The model simulation of the
radio light curves, along with a simulation of the X-ray spectrum
of HR 7355, are used to significantly constrain the physical
parameters of its stellar magnetosphere. On this basis, we
suggest a scenario that simultaneously explains the
behavior of HR 7355 at both radio and X-ray wavelengths.
In Section <ref> we briefly introduce the object of this study,
HR 7355. The observations used in our analyses are
presented in Section <ref>. The radio properties of HR 7355 are
discussed in detail in Section <ref>. Section <ref>
describes the model, while the stellar magnetosphere is presented in
Section <ref>. Analysis of X-ray emission of HR 7355 is
provided in Section <ref>. The considerations on auroral
radio emission in HR 7355 are given in Section <ref>, while
Section <ref> summarizes the results of our work.
§ MAGNETIC EARLY B-TYPE STAR HR 7355
The early-type main sequence star (B2V) HR 7355 (HD 182180) is
characterized by a surface overabundance of helium
<cit.>. This star also exhibits a very strong
and variable magnetic field <cit.>.
The magnetic curve of HR 7355 changes polarity twice per period,
and was modeled in the framework of the OMR by a mainly
dipolar field, with the magnetic axis significantly misaligned with respect
to the rotation axis.
Among the class of the magnetic early-type stars, HR 7355 is an
extraordinarily fast rotator. Only the B2.5V type star HR 5907
<cit.> has a shorter rotation period. The rotation
period of HR 7355 (≈ 0.52 days) sets it close to the
break-up point, giving rise to a strong deformation from spherical symmetry
<cit.>. The main stellar parameters of HR 7355
are listed in Table <ref>.
HR 7355 hosts a strong and steady magnetic field,
indicating the existence of a co-rotating magnetosphere <cit.>
suitable for giving rise to non-thermal radio continuum emission.
HR 7355 has a flux density at 1.4 GHz of 7.9 ± 0.6 [mJy] as listed by
the NVSS <cit.>.
At the tabulated stellar distance, we estimate a radio luminosity of
≈ 5× 10^17 [erg s^-1 Hz^-1], making HR 7355
one of the most luminous magnetic hot stars at radio wavelengths.
Thus, HR 7355 is an ideal target to study the effects
of fast rotation and high magnetic field strength on magnetospheric
radio emission.
To obtain information on the stellar wind parameters of HR 7355
we retrieved its archival UV spectra (sp39549, sp39596) obtained by
the International Ultraviolet Explorer (IUE). These UV spectra were
analyzed by means of the non-LTE iron-blanketed model atmosphere code PoWR,
which treats the photosphere as well as the wind, and also accounts
for X-rays <cit.>. A first
inspection of the spectra reveals a lack of asymmetric line profiles
which would be expected for spectral lines formed in a stellar wind.
In fact, <cit.> were able to fit the IUE spectra of
HR 7355 with a static model that does not include stellar wind, showing
that its contribution must be small, and at best only an upper limit could be
obtained by UV spectral line modeling.
We attempted to estimate the upper limit for the mass-loss rate that would be
still consistent with the IUE observations. We found that in our models for the
stellar parameters given by <cit.> and the assumed mass-loss
rate of about Ṁ=10^-10 M_⊙ yr^-1 only the Si iv
resonance line is sensitive to the mass-loss rate (see Fig. <ref>).
As spherical symmetry is assumed for the PoWR models while
<cit.> found different temperatures for the pole and equator
regions, we can give only a rough estimate for the mass-loss rate. For the lower
temperature of 15.7 kK at the pole region, the Si iv resonance line is
weaker than for the higher temperatures of the equator region. Therefore we
infer the upper limit of the mass-loss rate for the lower temperature and find a
value of about
Ṁ < 10^-11 M_⊙ yr^-1 for a spherically symmetric smooth wind
to be consistent with the IUE observation. We also checked for the effect of
X-rays via super-ionization but did not find a major impact on the Si iv
resonance line.
§ OBSERVATIONS AND DATA REDUCTION
§.§ Radio
Broadband multi-frequency observations of HR 7355 were carried out
using the Karl G. Jansky Very Large Array (VLA), operated by the
National Radio Astronomy Observatory[The National Radio
Astronomy Observatory is a facility of the National Science Foundation
operated under cooperative agreement by Associated Universities,
Inc.] (NRAO), in different epochs. Table <ref> reports the
instrumental and observational details for each observing epoch.
To maximize the VLA performance, the observations were done using the full array configuration at each observing band, without splitting the interferometer into sub-arrays.
To observe all the selected sky frequencies,
the observations were carried out cyclically varying the observing bands.
The data were calibrated using the standard calibration pipeline,
running in the Common Astronomy Software Applications (CASA) package,
and imaged using the CASA task CLEAN. Flux densities for the Stokes
I and V parameters were obtained by fitting a two-dimensional
gaussian at the source position in the cleaned maps. The size of
the gaussian profile is comparable with the array beam, indicating
that HR 7355 is unresolved for all the analyzed radio frequencies.
The minimum array beam size is
0.18 × 0.12 [arcsec^2], obtained with the BnA array
configuration at 44 GHz. The errors were computed as the quadratic sum
of the flux density error,
derived from the two-dimensional Gaussian fitting procedure,
and the map rms measured in a field area free of radio sources.
§.§ X-ray
We retrieved and analyzed archival X-ray observations of HR 7355
obtained with XMM-Newton on 2012-09-25 (ObsID 0690210401), lasting ≈ 2.5 hr. All
three (MOS1, MOS2, and PN) European Photon Imaging Cameras (EPICs)
were operated in the standard, full-frame mode and a thick UV filter
<cit.>. The data were reduced using
the most recent calibration. The spectra and light-curves were extracted
using standard procedures from a region with diameter ≈ 15 arcsec. The background area was chosen to be near the star
and free
of X-ray sources. To analyze the spectra we used the standard
spectral fitting software xspec <cit.>. The abundances
were set to solar values according to <cit.>. The
adopted distance to the star and interstellar reddening E(B - V)
are listed in Table <ref>.
§ THE RADIO PROPERTIES OF HR 7355
§.§ Radio light curves
The magnetosphere of HR 7355 shows evidence of strong and variable
radio emission. The VLA radio measurements were phase folded using
the ephemeris given by <cit.>:
JD = 2 454 672.80 + 0.5214404 E [days]
and are displayed in Fig. <ref>. The left panels show the
new radio data for the Stokes I (RCP+LCP, respectively Right and
Left Circular Polarization state[VLA measurements of the
circular polarization state are in accordance with the IAU and
IEEE orientation/sign convention, unlike the classical physics
usage.]), with each observing radio frequency shown individually.
In the top panel of Fig. <ref> is also shown the variability
of the longitudinal component of the magnetic field (B_e)
<cit.>.
The radio light curves for Stokes I are variable at all observed
frequencies. Relative to the median, the amplitudes of the variation,
with frequency are respectively: ≈ 60% at 6 GHz, ≈
65% at 10 GHz, ≈ 62% at 15 GHz, ≈ 77% at 22
GHz, ≈ 60% at 33 GHz and ≈ 39% at 44 GHz.
The JD_0 of the HR 7355 ephemeris refers to the minimum of
the photometric light curve <cit.>, corresponding
to a null in the effective magnetic field curve, and to a minimum
emission for the Hα <cit.>. Interestingly, the radio
light curves at ν≤ 15 GHz show an indication of a minimum
emission close to ϕ=0 (see the left panels of Fig. <ref>).
Despite not having full coverage of rotational phase,
the radio light curves at ν≤ 15 GHz show a maximum
at ϕ≈ 0.2, close to the maximum effective magnetic field
strength, followed by another minimum that becomes progressively
less deep with increasing frequency. The rotational phases covering
the negative extrema of the magnetic curve are not observed at
radio wavelengths, but the rising fluxes suggest that the radio
light curves at ν≤ 15 GHz could be characterized by another
maximum. The radio data for total intensity seem to show
two peaks per cycle, related to the two
extrema of the magnetic field curve.
Comparison between the
radio and the magnetic curves also indicates
a phase lag between the radio light curves and the magnetic one.
At the higher frequencies (ν≥ 22 GHz), the shapes of the
light curve are more complex, and any relation with variability
in B_ e
is no longer simple. Furthermore, it appears that the
average radio spectrum of HR 7355 is relatively flat from 6 to 44
GHz (c.f., top panel in Fig. <ref>). The error bar of each
point shown in the figure is the standard deviation of the measurements
performed at a given frequency. The spectral index of the mean
radio spectrum of this hot magnetic star is close to -0.1, like
free-free emission from an optically thin thermal plasma. Hence,
without any information regarding the fraction of the circularly polarized
radio emission and its variability, the total radio
intensity alone can easily be mistakenly attributed to
bremsstrahlung radiation.
Interestingly, a flat radio spectrum has already been detected in some other magnetic chemically peculiar stars <cit.>.
§.§ Circular polarization
Circularly polarized radio emission
is detected from HR 7355 above the
3σ detection level, revealing
a non-thermal origin for the radio emission. The
right panels of Fig. <ref> show the fraction
π_c of the circularly polarized flux density (Stokes
V / Stokes I, where Stokes V= RCP-LCP) as a function of the
rotational phase, and for all the observed frequencies. In the
top right panel of Fig. <ref>, the magnetic field curve is
again shown for reference.
π_c is variable as HR 7355 rotates, and the amplitude
of the variation rises as the radio frequency increases. It appears
that the amplitude of the intensity variation is larger when the
circular polarization is smaller. In particular π_c
ranges between: ≈ -5% to 5% at 6 GHz, ≈ -9%
to 8% at 10 GHz, ≈ -10% to 8% at 15 GHz, ≈
-5% to 14% at 22 GHz, ≈ -13% to 20% at 33 GHz, ≈
-16% to 26% at 44 GHz.
To parameterize the amplitude of the radio light curves for the
total and polarized intensity, the bottom panel of Fig. <ref>
shows the variation of the standard deviation (σ) of all
the measurements occurring at the same frequency, as a function of
frequency. The standard deviation of the Stokes I measurements is
largest at ν≤ 22 GHz, whereas at 33 and 44 GHz, the standard
deviations dramatically decrease, confirming the decrease of the
light curve amplitudes discussed in Sec. <ref>. By
contrast the standard deviation of the measurements of the circularly
polarized flux density increases as the frequency increases. Considering
Fig. <ref> (bottom panel), σ values
for the Stokes I and V measurements are evidently inversely related.
Comparing the curves of variation of π_c with the
magnetic field curve, a positive degree of circular
polarization is detected when the north magnetic pole is close to
the line-of-sight, and is negative when the south pole is most
nearly aligned with the viewing sightline. When
the magnetic poles are close to the direction of the line-of-sight,
we observe most of the radially oriented field lines. In this case
the gyrosynchrotron mechanism gives rise to radio emission that is partially
polarized, respectively right-handed for the north pole and left-handed
for the south pole. This behavior of the gyrosynchrotron polarized
emission has already been recognized, at ν≤ 15 GHz, in the
cases of CU Vir <cit.> and σ Ori E
<cit.>, that have magnetospheres defined mainly by a
magnetic dipole, similar to the case of HR 7355. Our new radio
measurements show, for the first time, this
behavior persisting up to ν=44 GHz for HR 7355.
Furthermore, the light curves of π_c show
that the magnetic field component close to the stellar
surface can be traced with the circularly polarized emission at high
frequency.
§ THE MODEL
In previous papers <cit.>, we developed
a 3D model to simulate the gyrosynchrotron radio emission
arising from a stellar magnetosphere defined by a dipole. In the
case of the hot magnetic stars, the scenario attributes the origin of their
radio emission to the interaction between the large-scale
dipolar magnetic field and the radiatively driven stellar wind.
Following this model, the wind plasma progressively accumulates in
the magnetospheric region where the magnetic field lines are closed
(inner magnetosphere). The plasma temperature linearly increases
outward, whereas its density linearly decreases
<cit.>.
Outside the Alfvén surface, the magnetic tension is not able to force co-rotation of the plasma.
Similar to the case of Jupiter's magnetosphere <cit.>,
the co-rotation breakdown powers a current sheet system
where magnetic reconnection accelerates the local plasma
up to relativistic energies <cit.>.
A fraction of the non-thermal electrons, assumed to have a power-law energy
spectrum and an isotropic pitch angle distribution (i.e., the angle between the
directions of the electron velocity and the local magnetic field
vector), can diffuse back to the star within a magnetic
shell that we here designate as the ”middle magnetosphere”. This
non-thermal electron population has a homogeneous spatial density
distribution within the middle magnetosphere, owing to magnetic
mirroring. A cross-section of the stellar magnetosphere model is
pictured in Fig. <ref>.
The non-thermal electrons moving within the middle magnetosphere
radiate at radio wavelengths by the gyro-synchrotron emission
mechanism. To simulate the radio emission arising from these
non-thermal electrons, the magnetosphere of the
star is sampled in a three dimensional grid, and the physical
parameters needed to compute the gyrosynchrotron emission and
absorption coefficients are calculated at each grid point.
As a first step, we set the stellar geometry: rotation
axis inclination (i), tilt of the dipole magnetic axis (β),
and the polar field strength (B_p). In the stellar
reference frame, assumed with the z-axis coinciding with the
magnetic dipole axis, the space surrounding the star is sampled
in a 3D cartesian grid, and the dipolar magnetic field vector
components are calculated at each grid point. Given the
stellar rotational phase (ϕ), the field topology is
then rotated into the observer reference frame (see the procedure described in App. A of <cit.>).
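For illustration, a minimal NumPy sketch of this first step (our own; the rotation convention is schematic and does not reproduce the exact procedure of the cited appendix):

    import numpy as np

    def dipole_field(points, Bp):
        # Dipole B at Cartesian points (N, 3), in units of R_*, with the z-axis
        # along the magnetic axis; Bp is the polar surface field strength.
        r = np.linalg.norm(points, axis=1, keepdims=True)
        m = np.array([0.0, 0.0, Bp / 2.0])       # gives |B| = Bp at the pole (r = 1)
        rhat = points / r
        return (3.0 * rhat * (rhat @ m)[:, None] - m) / r**3

    def to_observer_frame(i, beta, phase):
        # Compose dipole tilt (beta), rotational phase, and inclination (i).
        def Ry(a): return np.array([[np.cos(a), 0, np.sin(a)],
                                    [0, 1, 0],
                                    [-np.sin(a), 0, np.cos(a)]])
        def Rz(a): return np.array([[np.cos(a), -np.sin(a), 0],
                                    [np.sin(a),  np.cos(a), 0],
                                    [0, 0, 1]])
        return Ry(i) @ Rz(2.0 * np.pi * phase) @ Ry(beta)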
In the second step, we locate the magnetospheric subvolume where the unstable
electron population propagates. This spatial region is delimited by
two magnetic field lines. The inner line intercepts the magnetic
equatorial plane at a distance equal to the Alfvén radius
(R_A). The outer line intercepts the equatorial
plane at a distance R_A+l, with l being the width of the current
sheet where magnetic reconnection accelerates the local plasma
up to relativistic energies. Within each grid point of the middle
magnetosphere, the non-thermal electrons have a constant number
density (n_r). By contrast the inner magnetosphere is
filled by a thermal plasma with density and temperature that are
functions of the stellar distance as previously described.
In the third step, given the observing radio frequency ν, we
calculate the emission and absorption coefficients for the
gyro-synchrotron emission <cit.> at the grid points that
fall within the middle magnetosphere. For each grid point of the
inner magnetosphere, the free-free absorption
coefficient <cit.>, the refractive index, and the polarization
coefficient for the two magneto-ionic modes <cit.>
are computed.
We are able to solve numerically the radiative transfer equation
along the directions parallel to the line-of-sight for the Stokes
I and V (as described in App. A of <cit.>).
Scaling the result for the stellar distance, and repeating these
operations as a function of the rotational phase, ϕ,
synthetic stellar radio light curves are calculated, and
then simulations are compared with observations.
§.§ Modeling the HR 7355 radio emission
On the basis of the model described in previous section, we seek to reproduce
the multi-wavelength radio light curves of HR 7355
for Stokes I and V. The already known stellar parameters of HR 7355,
needed for the simulations are listed in Table <ref>.
The Alfvén radius (R_A)
and the length of the current sheet (l) have been assumed as free
parameters. For the sampling step, we adopt a variable grid with a
narrow spacing (0.1 R_∗) for distances smaller than 8 R_∗, an intermediate spacing (0.3 R_∗) between 8 and 12 R_∗, and a coarse spacing (1 R_∗) for
distances beyond 12 R_∗.
Following results obtained from simulations of radio
emissions of other hot magnetic stars <cit.>,
the low-energy cutoff of the power-law electron energy
distribution has been fixed at 100 keV, corresponding to a Lorentz factor γ = 1.2.
The temperature of the thermal plasma at the stellar surface has
been set equal to the photospheric one (given in Tab. <ref>),
whereas its density (n_0) has been assumed as a free parameter.
The assumed values of the model, free parameters, and the corresponding
simulation steps are listed in Table <ref>.
Adopting these
stellar parameters, we were able to simulate radio light curves for
the Stokes I and V that closely resemble the measurement of HR 7355.
The corresponding ranges of the model parameters are reported in Table <ref>.
Fig. <ref> displays the envelope of the simulated light curves,
for the Stokes I and V respectively, that closely match the observed ones.
This envelope was obtained from the simultaneous visualization of the
whole set of simulations performed using the combinations of the model free parameters
listed as model solutions in Table <ref>.
The simulations indicate that
gyro-synchrotron emission from a dipole-shaped magnetosphere can closely
reproduce the observations of HR 7355. The low-frequency Stokes I
radio emission shows a clear phase modulation, which
becomes progressively less evident as the frequency increases.
Conversely, the simulations of the light curves for the Stokes V
indicate that the circularly polarized emission is strongly
rotationally modulated, with an amplitude that increases with frequency.
Such behavior of the simulated
radio light curves is consistent with the measurements.
To highlight the close match between simulations and observations,
we also compared the simulated radio spectra with the observed spectrum.
The synthetic spectra were produced by averaging the simulated light curves at each frequency.
In the top and middle panels of Fig. <ref>,
the observed spectrum of HR 7355 is shown again, and the superimposed shaded
area represents the
envelope of the simulated spectra for the Stokes I.
In the bottom panel of Fig. <ref>
the standard deviations (σ) of the observed and simulated multi-frequency light curves (Stokes I and V) are compared.
In the case of the model simulations, more than one spectrum was
produced. The σ values pictured in the bottom panel of Fig. <ref>
are the averages of the standard deviations corresponding to the
whole set of simulated light curves.
The top panel of Fig. <ref> refers to
the model simulations with parameters δ=2.5,
whereas the middle panel is for the δ=2 case.
The σ values of the simulated HR 7355 radio emission seem to be larger than the observed ones.
Such behavior is confirmed when looking at the bottom panel of
Fig. <ref>.
In particular, the σ values of the light curves with δ=2.5 are highest. This behavior suggests that the value δ=2 for the spectral index of the non-thermal electrons could be close to the true value.
But we must also take into account that
the magnetosphere of this rapidly rotating star could be oblate,
whereas our model assumes a simple dipole.
The stretching of the magnetosphere could affect the magnetic field topology of the regions
where the radio emission at the observed frequencies originates.
The effect of the plasma inertia on the magnetic field configuration is an issue outside the limits of our model.
In any case, this mismatch between the dipolar and the true stellar magnetic topology
could explain the differences between observations and simulations.
Furthermore, it has been proven that within the HR 7355 magnetosphere there are clouds of dense plasma (with linear size ≈ 2 R_∗) co-rotating with the star <cit.>, which could affect the
rotational modulation of the stellar radio emission.
The modeling approach followed to simulate the radio light curves of HR 7355
does not account for the presence of such material.
On the basis of these considerations, we cannot exclude that the spectral
index of the non-thermal electrons could be close to δ=2.5.
In any case, the higher dispersion of the simulations
with respect to the observations can be explained as a consequence of
the coverage for the observed radio light curves not being complete.
In fact, we are missing some portions of the light curves that are expected
to be highly variable. On the other hand, the frequency dependence of the
standard deviations of the simulated Stokes I and V radio light
curves is similar to the observed ones (see bottom panel of
Fig. <ref>). This is further evidence of the
good fit of our model for describing the radio magnetosphere of
HR 7355.
§ THE MAGNETOSPHERE OF HR 7355
§.§ Radio diagnostic
Analysis of model solutions for
the observed multi-wavelength radio light curves of HR 7355,
respectively for the Stokes I and V, can be used to
constrain the physical conditions of the magnetosphere of this
hot star. The thermal electron density at the stellar
surface (n_0) is well constrained. We found acceptable
light curves for Alfvén radii greater than 10 R_∗. The
other two model free parameters are degenerate, namely
the non-thermal electron density (n_r) and the length
of the current sheet (l). The product of these two parameters is
the column density of relativistic electrons at the Alfvén
radius. We found that the column density is a function of R_A; the mathematical relationship, obtained by fitting these parameters, is provided in Table <ref> and
pictured in Fig. <ref>.
The value of the Alfvén radius
is related to the wind of HR 7355. In
<cit.> we computed R_A given the magnetic
field strength, the wind mass-loss rate, its terminal velocity
(v_∞), the stellar radius, and the rotation period. In
the present analysis, we reverse this approach: given v_∞
and the rotation period, we estimate the mass-loss rates (Ṁ)
of HR 7355 that are compatible with the values of R_A
listed in Table <ref>. We assume two values of wind terminal
velocity that are reasonable for a main sequence B type star
<cit.>: v_∞ =500 and
1000 [km s^-1]. Fig. <ref> shows the values of
Ṁ, and the corresponding pressure, as a function of R_A. The highest values of R_A require a low wind mass-loss rate.
The model simulations provide an
estimate for the density of the thermal plasma trapped within the inner
magnetosphere of HR 7355. The adopted radial dependence for the
plasma density and temperature are, respectively, n=n_0 r^-1 and T=T_eff r; hence the thermal pressure
(p=k_B n T) is constant inside the inner magnetosphere.
In steady state p=p_ram, where p_ram
is the wind ram pressure. In the bottom panel of Fig. <ref>
the grey area represents the thermal pressure of the plasma trapped
in the inner magnetosphere. Those solutions that do not satisfy
the above equality condition cannot be considered valid. The average
Alfvén radii that are physically plausible are listed in
Table <ref>. The corresponding wind mass-loss rate,
the density of the wind at the Alfvén radius, the average
thermal temperature of the plasma trapped within the inner
magnetosphere, as well as the corresponding emission measure
are also listed in Table <ref>.
In the case of a dipolar shaped magnetosphere (see Fig. <ref>),
the radiatively driven stellar wind can freely propagate only from
the northern and southern polar caps. As a
consequence, the actual mass loss rate (Ṁ_act)
is a fraction of Ṁ. The fraction of the wind that freely propagates
can be estimated from the ratio between the two polar caps area and the whole surface.
The polar caps area is derived from the relation
defining the dipolar magnetic field line: r = R_A cos^2 λ
(where λ is the magnetic latitude). In fact,
the point where the field line, with a given R_A, crosses the stellar surface
identifies the latitude of the polar cap.
The values of Ṁ_act, listed in Table <ref>,
are in good agreement with those obtained from the UV spectral
analyses of HR 7355 (see Sec. <ref>) and other B-type stars with
similar spectral types <cit.>.
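For a centered dipole this fraction follows directly from the field-line equation; a short sketch (our own):

    import numpy as np

    def open_wind_fraction(RA):
        # RA: Alfven radius in units of R_*. The last closed field line
        # r = RA cos^2(lambda) meets the surface at cos^2(lambda_c) = 1/RA,
        # and the two polar caps cover a fraction 1 - sin(lambda_c) of the surface.
        return 1.0 - np.sqrt(1.0 - 1.0 / RA)

    print(open_wind_fraction(15.5))   # ~0.03: only a few percent of the wind escapes freely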
The indirect evaluation of the linear extension of the radio emitting region
is also useful for estimating the brightness temperature of HR 7355.
The average flux density of HR 7355 is ≈ 15.5 mJy in the frequency range 6–44 GHz.
Assuming the mean equatorial diameter of the Alfvén surface (31 R_∗) for the source size,
the corresponding brightness temperature is T_b ≈ 3 × 10^10 [K].
The above estimate reinforces the conclusion that
the radio emission from HR 7355 has a non-thermal origin.
It is instructive to compare the results obtained from the
analysis of radio emission from HR 7355 and from the Ap star CU Vir, conducted using a similar approach <cit.>.
For example, the wind electron number density at the Alfvén surface,
n_w(R_A), has similar values for both stars.
The estimated column density of the relativistic electrons at the Alfvén radius lies
in the range 3.2–4.6 × 10^14 [cm^-2] in the case of CU Vir,
versus a column density that ranges between 1.9 ×10^15 and 3.0×10^15 [cm^-2] for HR 7355.
This is even higher in the case of the HR 7355 model solutions with δ=2.5:
the derived range is 1.1–1.8×10^16 [cm^-2].
For the case of δ=2, the column density of the non-thermal electrons for HR 7355 is about an order of magnitude higher than for CU Vir.
Under the reasonable assumption that HR 7355 and CU Vir have similar non-thermal
acceleration efficiencies,
the higher non-thermal electron column density of HR 7355 could be explained
if it is characterized by a more extended acceleration region as compared to CU Vir.
The magnetosphere of HR 7355 is bigger than that of CU Vir,
with R_∗=3.69 versus 2.06 R_⊙ for CU Vir
<cit.>, and so the linear size of the acceleration
region (l) will be consequently wider for HR 7355.
Furthermore, the B2 type star HR 7355 has a higher stellar mass compared to the
Ap star CU Vir, 6 versus 3.06 M_⊙ <cit.>.
Their Kepler corotation radii (R_K=(G M_∗ / ω^2)^1/3)
are respectively: 1.3 R_∗ for HR 7355, 1.9 R_∗ for CU Vir.
The above estimates of R_K can be compared with the average Alfvén radii: 15.5 R_∗ for HR 7355 and 14.5 R_∗ for CU Vir.
The ratio R_A/R_K for the HR 7355 CM magnetosphere is ≈ 11.9,
versus ≈ 7.6 in the case of CU Vir.
The above estimate highlights that HR 7355 is characterized by a larger
magnetospheric volume maintained in rigid co-rotation compared with that of CU Vir.
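These estimates are easy to verify (a sketch using the stellar parameters quoted above; the CU Vir period is approximated as 0.52 d):

    import numpy as np

    G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8    # SI units

    def kepler_radius(M, P):
        # R_K = (G M / omega^2)^(1/3), with omega = 2 pi / P
        return (G * M / (2.0 * np.pi / P)**2) ** (1.0 / 3.0)

    for name, M, R, P_days in [("HR 7355", 6.0, 3.69, 0.5214404),
                               ("CU Vir", 3.06, 2.06, 0.52)]:
        RK = kepler_radius(M * Msun, P_days * 86400.0) / (R * Rsun)
        print(name, round(RK, 1))     # ~1.3 and ~1.9 R_*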
We also compare the
magnetic field strength at the Alfvén radius for both stars.
The two stars have similar rotation periods (≈ 0.52 d), but
HR 7355 has a larger size, and a stronger polar magnetic field
strength (11.6 kG versus 3.8 kG; <cit.>).
The radial dependence for a simple magnetic dipole at the equatorial
plane is described by B_eq = (1/2) B_p (R_∗/r)^3 [Gauss], and is plotted in Fig. <ref>.
The ranges of the allowed R_A values, given in units of
solar radii, are shown for both stars. The corresponding magnetic
field strengths are derived. Fig. <ref> makes clear
that the current sheet region of HR 7355 is characterized by a
magnetic field strength roughly double that of CU Vir. From a purely qualitative point-of-view, it is reasonable
to assume the non-thermal acceleration process operates within a
thicker middle magnetosphere for HR 7355.
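The comparison is immediate from the dipole law (a sketch; the mean Alfvén radii are taken in units of the respective stellar radii):

    def B_equator(Bp, r_over_Rstar):
        # Equatorial dipole field B = (Bp/2)(R*/r)^3, in Gauss
        return 0.5 * Bp / r_over_Rstar**3

    print(B_equator(11600, 15.5))    # HR 7355: ~1.6 G at the Alfven radius
    print(B_equator(3800, 14.5))     # CU Vir:  ~0.6 G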
At the distances of the two analyzed stars, D=236 pc for HR 7355
and D=80 pc for CU Vir, their radio luminosities are
respectively, ≈ 10^18 [erg s^-1 Hz^-1],
obtained using the average radio flux density measured in this
paper; and ≈ 3× 10^16 [erg s^-1 Hz^-1],
using the mean of the measured flux densities listed in
<cit.>. As discussed above, the two stars are
characterized by different radio emitting volumes. For HR 7355 the
non-thermal electrons also travel within magnetospheric regions at
higher magnetic field strength. Using a model for gyrosynchrotron
emission, the magnetic field strength directly affects the observed
radio flux density level <cit.>.
Taking into account
the various physical differences, we are able to explain qualitatively why
HR 7355 is a brighter radio source as compared to CU Vir.
§.§ X-ray diagnostic
The X-ray flux of HR 7355 in the 0.2–10 keV band
measured by XMM-Newton is ≈ 1.6 × 10^-13 erg cm^-2
s^-1 (see Table <ref>). This is orders of
magnitude higher than would be expected if the plasma emitting in the radio regime were solely responsible for the X-ray generation –
using the average temperature and emission measure listed in Table <ref>,
the expected X-ray flux is only ≈ 10^-15 erg cm^-2 s^-1.
Thus, a cold thermal plasma component responsible for the radio emission alone
cannot explain the observed X-rays from HR 7355.
Comparing the X-ray and the radio emission of HR 7355 to that of
late-type stars reveals significant differences
(see Table <ref>). HR 7355 violates
the empirical relation coupling the X-ray and radio luminosities
of magnetically active stars
(L_X / L_ν,rad ≈ 10^15.5 Hz; <cit.>),
which is valid among stars distributed within a wide range of
spectral classes (from F to early M stars).
This is clear evidence that the physical mechanisms for the radio and X-ray
emissions operating in an early B-star like HR 7355 are
distinct from coronal mechanisms operating in
the intermediate- and low-mass main sequence stars.
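A back-of-the-envelope check of this violation (our own; the mean radio luminosity of ≈ 10^18 erg s^-1 Hz^-1 quoted earlier in this paper is used):

    import numpy as np

    pc = 3.086e18                                  # cm
    L_x = 4.0 * np.pi * (236 * pc)**2 * 1.6e-13    # ~1e30 erg/s from the observed flux
    L_rad = 1e18                                   # erg/s/Hz
    print(np.log10(L_x / L_rad))                   # ~12, i.e. ~3.5 dex below the coronal value of 15.5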
Yet, somewhat surprisingly, the deviation of the early type
HR 7355 from the Güdel–Benz relation is similar to that for the stars
at the bottom of the main sequence – the ultra cool dwarfs with
spectral type later than M7
<cit.>.
These important similarities between
active ultra cool dwarfs and a strongly magnetic B star
indicate that radio and X-ray emission in their magnetospheres
may be produced by related physical mechanisms and provide
useful hints for the latter.
According to the MCWS model, the thermal plasma responsible for
X-ray emissions from magnetic B-type stars is produced by stellar
wind streams colliding at the magnetic equator.
The radio wavelengths
are instead sensitive
to only the cold thermal plasma that accumulates at the higher magnetic
latitudes.
Consequently, the X-ray emission provides a different set of
constraints on
the physical conditions in the magnetosphere of
HR 7355.
We have analyzed the archival XMM-Newton measurements. First,
the observed spectrum in the 0.2–10.0 keV band was fit with a thermal
two-temperature
spectral model that assumes optically thin plasma in collisional
equilibrium. The fit is statistically significant, with reduced
χ^2 =0.72 for 88 degrees of freedom. The model fit parameters
are shown in Table <ref>. The two-temperature components
are well in accord with the values listed by <cit.>.
The thermal plasma is extraordinarily hot, with the bulk of the plasma
at a temperature of 3.6 keV (≈ 40 MK). This is significantly hotter
than usually found in magnetic B-stars
<cit.>.
In the framework of the MCWS model, the wind plasma streams that
collide at the magnetic equator give rise to a shock that heats the
plasma. Hence, the maximum temperature follows from a Rankine-Hugoniot
condition and cannot exceed a value determined by the maximum stellar
wind velocity.
In the analysis presented in Sect. <ref>,
we assumed two distinct wind velocities, 500 km s^-1 and
1000 km s^-1, that encompass values plausible for a main
sequence B-type star <cit.>.
Using Eq. 10 from <cit.>, we estimate the
maximum plasma temperatures that can be produced via a magnetically confined
wind shock for these two wind speeds; these temperatures are
respectively: 3.5 MK and 14 MK – significantly lower than that deduced
from the X-ray spectral analysis.
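The quoted maximum temperatures follow from the strong-shock Rankine–Hugoniot jump condition; a minimal sketch, assuming a mean molecular weight μ ≈ 0.62 (a value chosen here so as to reproduce the quoted 3.5 and 14 MK, not taken from the cited equation):

K_B = 1.381e-16   # Boltzmann constant, erg/K
M_H = 1.673e-24   # hydrogen mass, g

def shock_temperature_mk(v_wind_kms, mu=0.62):
    # Strong-shock post-shock temperature T = (3/16) mu m_H v^2 / k_B
    v = v_wind_kms * 1.0e5   # cm/s
    return (3.0 / 16.0) * mu * M_H * v**2 / K_B / 1.0e6

for v in (500.0, 1000.0):
    print(f"v_wind = {v:6.0f} km/s -> T_shock ~ {shock_temperature_mk(v):4.1f} MK")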
This led us to conclude that the assumption that the hard part
of the X-ray spectrum is produced by the hot thermal plasma
is not realistic. Therefore, as a next step,
we attempted to fit the HR 7355 spectra
with an absorbed power-law model; however, no satisfactory fit could
be obtained. Finally, we fit the observed X-ray spectrum by
combining thermal and power-law models.
The resulting fit, corresponding to the power-law plus thermal model, is shown in Fig. <ref>.
A high-quality fit with reduced χ^2 = 0.725
for 89 degrees of freedom was obtained. Based on spectral
fitting, the 2T thermal model is not statistically preferred over a model that
combines thermal and power-law (non-thermal) components. The
model fit parameters are shown in Table <ref>. The
temperature of the thermal X-ray plasma in this combined model,
≈ 10 MK, is easier to reconcile with a typical wind
velocity of a B2V star. A more complex model involving two
temperatures plus a power-law, can also be fit to the observed spectra,
yielding marginally better fitting statistics, however we choose
the simplest models.
To investigate the origin of the power-law X-ray component in
the HR 7355 spectrum, it is useful to consider the Sun.
The solar flare X-ray spectrum usually shows a hard X-ray component
with a power-law energy distribution <cit.>, well explained
as bremsstrahlung from a non-thermal electron population <cit.>.
For the non-thermal bremsstrahlung emission, the observed X-ray
spectral index (α) is related to the spectral index (δ)
of the injected non-thermal electrons. For thick-target
bremsstrahlung emission, δ and α are related as follows:
δ = α + 1.
In the case of HR 7355, if we assume that its power-law X-ray
emission is generated by the impact on the stellar surface of the
same non-thermal electron population responsible for the gyro-synchrotron
radio emission, we can estimate a spectral index δ = 2.7 for
these energetic electrons.
The simulation of the radio emission of HR 7355 indicates that the non-thermal
electrons have a spectral index of δ=2–2.5.
This range of values is not fully consistent with
the value derived from the X-ray photon spectrum, but it is close.
This suggests that the thick-target bremsstrahlung
emission from a non-thermal electron population that impacts
with the stellar surface is a plausible explanation for
the origin of the power-law component
detected in the X-ray spectrum of HR 7355.
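In code, the thick-target index relation is a one-liner; the photon index α = 1.7 used below is inferred here from the quoted δ = 2.7 rather than read from the fit table:

alpha = 1.7                 # photon spectral index (inferred, see text)
delta = alpha + 1.0         # thick-target electron index
print(f"delta = {delta}")   # 2.7, to compare with 2-2.5 from the radio model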
Similarly to the Solar case, it is likely that
not only electrons, but also protons are accelerated in HR 7355.
In some solar flares the observable effects of these two
different populations of high energy particles have been
recognized in the X- and the gamma-ray domain.
The electrons radiate hard X-rays by thick-target bremsstrahlung emission,
whereas the protons, which interact with the ions at the stellar surface,
radiate gamma-rays (see <cit.> and references therein).
These effects, if present, will open a new
high energy observational window to the hot magnetic stars.
The mechanism responsible for this power-law X-ray spectral component
differs in an important way from the case of solar flares. During a solar
flare, the energetic electrons are impulsively injected, whereas
in hot star magnetospheres the non-thermal electrons are
continuously accelerated. The latter mechanism is similar to the
auroras from the magnetized planets in the solar system. Thus, we
suggest that the X-ray emission from HR 7355 is physically analogous
to the X-rays from Jupiter's aurora as measured by XMM-Newton in November
2003. The Jovian X-ray spectrum has been modeled using a combination
of thermal and power-law components <cit.>.
The power-law X-ray component dominates at the high energies of
the spectral range (X-ray photons with energies higher than 2
keV), and is explained in terms of bremsstrahlung
emission of the precipitating
electrons with energies of ≈ 100 keV
<cit.>. This high-energy electron
population is generated far from the planet (20–30 Jupiter radii),
in the co-rotation breakdown region of Jupiter's magnetosphere.
The mechanism we propose for the non-thermal electron acceleration
in hot magnetic stars resembles the acceleration of high-energy electrons
in Jupiter's magnetosphere. For the stellar magnetospheres, the co-rotation breakdown
region coincides with the equatorial current sheet outside the
Alfvén radius (see Fig. <ref>). The non-thermal electrons
are responsible for the gyro-synchrotron radio emission
of HR 7355 and have a power-law energy distribution with a low-energy
cutoff at 100 keV. These particles can be, also,
responsible for the power-law component in the HR 7355 X-ray
spectrum, as a consequence of the bremsstrahlung at the stellar
surface. In this case, the bremsstrahlung X-ray emission arises
from an annular region around the pole. The existence of a well
defined spatial location of the hard X-ray source region could
produce a smooth modulation of the X-ray emission as the star
rotates. Unfortunately the short archival observations
did not sample the stellar rotation period, so no conclusions
can be drawn yet about the variability of the power-law component
in the HR 7355 X-ray spectrum. We intend to remedy this observational
shortcoming in future.
In Figure <ref> we present the model sketch that may
explain simultaneously the radio and the
X-ray emission of HR 7355.
The thermal X-ray component arises from the hot plasma that is
shocked by the impact
between stellar wind flows from opposing stellar hemispheres.
The colder thermal plasma that accumulates in the the inner-magnetosphere
does not significantly contribute to the X-rays, but explains
the rotational modulation of the radio emission.
The stellar radio emission originates from a non-thermal electron population moving inside
the magnetic cavity that is defined by the field lines that
intercept the magnetic equator outside the Alfvén radius, coinciding
with the magnetospheric regions where
the co-rotation breakdown take place, and where the current sheet systems
accelerate the electrons up to relativistic energies.
These precipitating non-thermal electrons could give rise to stellar auroral signatures,
like the non-thermal X-ray emission from the polar caps.
§ IS THERE AURORAL RADIO EMISSION FROM HR 7355?
The non-thermal electron population injected in the stellar magnetosphere
close to the Alfvén radius,
and which moves toward the star, is responsible for the incoherent
gyro-synchrotron radio emission of HR 7355.
The electrons with velocities almost parallel with the magnetic field lines
(i.e., low pitch angle)
can deeply penetrate within the stellar magnetosphere,
and are responsible for the X-ray auroral spectral features,
as discussed in Sec. <ref>.
Non-thermal electrons that impact the stellar surface are lost from the magnetosphere.
As a consequence,
the distribution of the non-thermal electrons reflected outward (by magnetic mirroring) will be deprived
of electrons with low pitch-angle values. This is a suitable condition
to develop the unstable electron energy distribution, known as the loss-cone distribution
<cit.>.
This inverted electron velocity distribution
can trigger the coherent Electron
Cyclotron Maser (ECM) emission mechanism.
The ECM amplifies the extraordinary magneto-ionic mode, producing
highly circularly polarized radiation (≈ 100%), at frequencies
close to the first few harmonics
of the local gyro-frequency (ν_B = 2.8 × 10^-3B/Gauss GHz);
however, the fundamental harmonic is probably suppressed by gyro-magnetic
absorption
effects <cit.>.
The ECM is the process that generates the broad-band auroral radio emission
of the magnetized planets of the solar system <cit.>,
of the two magnetic stars CU Vir <cit.>
and HD 133880 <cit.>,
and,
of the Ultra Cool Dwarfs
<cit.>.
The auroral radio emission arises from the thin density-depleted magnetic
cavity
related to the auroral oval at the polar caps.
This kind of coherent radio emission is
efficiently amplified within a narrow beam pattern
tangentially directed along the cavity wall (i.e., the laminar source model,
),
giving rise to a radio light house effect.
In the case of the Earth Auroral Kilometric Radiation (AKR), this highly
directional coherent radiation is upward refracted by the
dense thermal plasma trapped outside the auroral cavity <cit.>.
The auroral radio emission from CU Vir was first discovered at 1.4
GHz as intense (≈ 1 order of magnitude brighter then the
incoherent radio emission) radio pulses that were 100% circularly
polarized <cit.>. This amplified emission from
CU Vir has also been detected at 2.5 GHz <cit.>,
and at 600 MHz <cit.>. The pulse arrival times
are observed to be a function of the observing frequency
<cit.>, a signature of frequency-dependent refractive
effects suffered by the CU Vir auroral radio emission.
The radio pulses, related to the auroral radio emission from a
dipole-shaped magnetosphere,
occur when the magnetic dipole axis lies in the plane of the sky
(cross-over phases), with a duration of 5–10% of the rotational
period. The features of the auroral radio emission arising from
the CU Vir magnetosphere has been successfully modeled using a
simple dipole shape <cit.>.
Our source HR 7355 has a stellar geometry, rotation, and
magnetic dipole obliquity suitable for detection of its auroral
radio emission. In fact, as evident from the top panel
of Fig. <ref>, the curve for the magnetic field
shows a change of net
polarity twice per rotation. The phase coverage of the
radio observations presented in this paper has good sampling
at the cross-over phases at each observing frequency (see
bottom panel of Fig. <ref>). At the frequencies
ranging from 6 to 44 GHz, the radio measurements of HR 7355 do
not show any hint of auroral radio emission.
As previously explained, the stellar auroral radio emission at a
given frequency originates in the magnetospheric regions where the
second harmonic of the local gyro-frequency is close to the observing
frequency. In the case of CU Vir (B_p=3800 [Gauss],
), emission from the ECM
arises from magnetospheric layers located between
≈ 1 and ≈ 2.3 stellar radii above the stellar surface.
Scaling to the polar strength of HR 7355 (B_p=11600
[Gauss], ), the selected observing bands,
tuned at the frequencies of the second harmonic of the gyro-frequency,
arise from magnetospheric layers with r ranging between 1.14 and 2.21 R_∗
(see the right y-axis of the bottom panel of Fig. <ref>),
that correspond to layers that are located
at heights lower than ≈ 1.2 stellar radii
from the surface of the star.
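The quoted range of emitting layers can be reproduced with a short sketch, assuming the field along the relevant field lines scales as B(r) ≈ B_p (R_∗/r)^3 (an assumption made here for illustration; with this scaling, the 6 and 44 GHz bands map onto 2.21 and 1.14 R_∗):

def ecm_layer_rstar(nu_ghz, b_polar_gauss, harmonic=2):
    # Radius (in R*) where harmonic * nu_B matches the observing frequency,
    # with nu_B = 2.8e-3 * B GHz and B(r) = B_p * (R*/r)^3 assumed
    b_needed = nu_ghz / (2.8e-3 * harmonic)
    return (b_polar_gauss / b_needed) ** (1.0 / 3.0)

for nu in (6.0, 44.0):
    print(f"nu = {nu:4.1f} GHz -> r = {ecm_layer_rstar(nu, 11600.0):.2f} R*")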
On the other hand, from the model
simulations of the radio light curves of HR 7355 (this paper) and
CU Vir <cit.>, we note that the thermal plasma trapped
within the magnetosphere of HR 7355 has a higher density as compared
to CU Vir. The auroral radio emission arising from the deeper
magnetospheric layers of HR 7355 may suffer absorption effects.
Up to now, we have no radio measurements at frequencies
below 6 GHz, corresponding to layers further out in radius
where conditions would be more conducive for the detection of auroral
radio emission from HR 7355. Consequently, it is not possible at
this time to draw firm conclusions regarding the production of
auroral radio emission at HR 7355.
§ SUMMARY AND CONCLUSIONS
In this paper we have presented an
extensive analysis of the rigidly rotating magnetosphere of the
fast rotating B2V star, HR 7355.
This study has been made using radio (VLA) and X-ray () observations.
The radio measurements of HR 7355 cover a large frequency range, from 6 to 44 GHz.
The total (Stokes I) and the circularly polarized (Stokes V) flux density are variable.
The radio data have been phase folded using the stellar rotation period,
demonstrating that the radio variability is a consequence of the rotation.
The rotational phases have been well sampled, allowing us
to build multi-wavelength radio light curves separately in Stokes I and V.
Modeling of the stellar radio emission,
using a 3D model that simulates the radio emission from a rigidly
rotating stellar magnetosphere shaped by a simple dipole
<cit.>,
allows us to constrain the physical conditions in the magnetosphere of HR 7355.
As a result of the present analysis, the wind mass-loss rate of HR 7355 has been indirectly derived.
Independently, we obtained constraints on the
mass-loss rate from the analysis of archival UV spectra of HR 7355 by means of
the non-LTE stellar atmosphere model PoWR. The radio and UV values of mass-loss
rate are in good agreement, and are in accord with estimates of mass-loss
rates derived from the UV spectra of other stars with similar spectral types.
The average radio luminosity of HR 7355 is about
10^18 erg s^-1 Hz^-1, in the range 6–44 GHz,
making it one of the brightest radio sources among the class of the MCP stars
(mean radio luminosity ≈ 10^16.8 [erg s^-1 Hz^-1]
).
To investigate further, the magnetosphere of HR 7355 was compared
with that of CU Vir, another magnetic star studied with the same
modeling approach.
The comparison reveals that both these stars are characterized by
centrifugal magnetospheres but, at the same time, have significant differences.
The CM of HR 7355, normalized to the stellar radius, is larger,
and the regions where the non-thermal electrons are generated
are characterized by a stronger local magnetic field strength
with respect to the case of CU Vir,
with a consequent effect on the radio brightness of the two stars.
The analysis presented in this paper allows us to estimate the
average physical conditions of the thermal plasma confined within the
magnetospheric region.
Absorption effects by the trapped plasma influence the
emerging stellar radio emission and play a key role for the modeling
of the radio light curves.
The measured thermal X-ray emission from HR 7355 could be explained as a
consequence of the shock heating of the colliding wind streams arising from the
two opposite stellar hemispheres.
The fit to the X-ray spectrum of HR 7355 suggests the presence of
a non-thermal X-ray component described by a power law. The spectral index of
the non-thermal X-ray
photons is compatible with thick-target bremsstrahlung emission generated by
the same non-thermal electron population that is responsible for the observed
radio emission and that impacts the stellar surface close to the polar caps.
This could be the signature for auroral X-ray emission from HR 7355.
Stellar rotation can lead to greater X-ray emissions than predicted by the
scaling laws in the framework of the MCWS model <cit.>.
However, <cit.> point out that even taking rotation into
account, there are some hot magnetic stars that are too bright in the X-ray
band, one of them being HR 7355.
Our new model suggests that auroral
X-ray emission is a likely additional mechanism that increases X-ray production
and can account for the strong X-rays from HR 7355. We speculate that the auroral
mechanism operates in other hot magnetic stars that display hard and bright
X-ray emission.
The auroral phenomenon also gives rise to features at radio wavelengths,
such as coherent pulses with ≈ 100% circular polarization
that occur at predictable rotational phases.
This radio lighthouse phenomena has been recognized in the
magnetized planets of the solar system,
among some ultra cool dwarfs, and in two
hot magnetic stars, with the prototype CU Vir.
For the stellar auroral radio emission to be detectable,
the stellar magnetic dipole must be oriented with the axis
lying in the plane of the sky (null effective magnetic field).
The stellar geometry of HR 7355
is favorable for detecting this coherent emission;
in fact, the magnetic field curve changes sign twice per period.
We do not however find any signature of auroral radio emission from HR 7355, at least in the frequency range 6–44 GHz.
For this frequency range, we suggest that
the auroral radio emission originates deep
inside the stellar magnetosphere and is strongly absorbed.
New observations at lower frequencies, corresponding to higher magnetospheric layers,
could reveal whether auroral radio emission is in
fact produced in the magnetosphere of HR 7355.
We want to underscore that
the synergistic radio and X-ray analysis is
a powerful combination that can lead to strong constraints on the
magnetospheric conditions of hot magnetic stars.
From the results of the radio modeling simulations and the
X-ray spectral analysis of HR 7355,
we have been able to outline a physical
scenario that simultaneously explains features detected at
opposite ends of the source spectrum.
§ ACKNOWLEDGMENTS
We thank the referee for their very useful remarks that helped to improve the paper.
The National Radio Astronomy Observatory is a facility of the National Science
Foundation operated under cooperative agreement by Associated Universities, Inc.
This work has extensively used the NASA's Astrophysics Data System, and the
SIMBAD database, operated at CDS, Strasbourg, France. This publication used
data products provided by the XMM-Newton Science Archive. LMO acknowledges
support by the DLR grant 50 OR 1302.
99
[Antonova et al.2008]antonova_etal08
Antonova A., Doyle J.G., Hallinan G., Bourke S., Golden A., 2008, A&A, 487, 317
[Arnaud1996]arnaud96
Arnaud K.A., 1996, ASPC, 101, 17
[Aschwanden2002]aschwanden02
Aschwanden, M.J., 2002, SSRv, 101, 1
[Asplund et al.2009]asplund_etal09
Asplund M., Grevesse N., Sauval A.J., Scott P., 2009, ARA&A, 47, 481
[Babcock1949]babcock49
Babcock H.W., 1949, Observatory, 69, 191
[Babel & Montmerle1997]babel_montmerle97
Babel J., Montmerle T., 1997, A&A, 323, 121
[Bard & Townsend2015]bard_townsend15
Bard C., Townsend R., 2015, IAUS, 307, 449
[Benz & Güdel1994]benz_guedel94
Benz A.O., Güdel M., 1994, A&A, 285, 621
[Berger2002]berger02
Berger E., 2002, ApJ, 572, 503
[Berger et al.2010]berger_etal10
Berger E., Basri G., Fleming T.A., Giampapa M.S., Gizis J.E., Liebert J., Martín E., Phan-Bao N., et al., 2010, ApJ, 709, 332
[Bohlender & Monin2011]bohlender_monin11
Bohlender D.A., Monin D., 2011, AJ, 141, 169
[Branduardi-Raymont et al.2007]branduardi-raymont_etal_07
Branduardi-Raymont G., Bhardwaj A., Elsner R.F., Gladstone G.R., Ramsay G., et al., 2007, A&A, 463, 761
[Branduardi-Raymont et al.2008]branduardi-raymont_etal_08
Branduardi-Raymont G., Elsner R.F., Galand M., Grodent D., Cravens T.E., et al., 2008, J. Geophys. Res., 113, A2202
[Brown1971]brown_71
Brown J.C., 1971, Sol. Phys, 18, 489
[Burgasser & Putman2005]burgasser_putman05
Burgasser A.J., Putman M.E., 2005, ApJ, 626, 486
[Cassinelli et al.2002]cassinelli_etal02
Cassinelli J.P., Brown J.C., Maheswaran M., Miller N.A., Telfer D.C., 2002, ApJ, 578, 951
[Chandra et al.2015]chandra_etal15
Chandra P., Wade G.A., Sundqvist J.O., et al., 2015, MNRAS, 452, 1245
[Condon et al.1998]condon_etal98
Condon J.J, Cotton W.D., Greisen E.W., Yin Q.F., Perley R.A., Taylor G.B., Broderick J.J., 1998, AJ, 115, 1693
[Drake et al.1987]drake_etal87
Drake S.A., Abbot D.C., Bastian T.S., Bieging J.H., Churchwell E., Dulk G., Linsky J.L, 1987, ApJ, 322, 902
[Dulk1985]dulk85
Dulk G.A., 1985, ARA&A, 23, 169
[Fossati et al.2015]fossati_etal15
Fossati L., Castro N., Schöller M., Hubrig S., Langer N., Morel T., Briquet M., Herrero A., Przybilla N., Sana H., Schneider F.R.N.,
de Koter A., and BOB Collaboration, 2015, A&A, 582, 45
[Gräfener et al.2002]graf2002
Gräfener G., Koesterke L., Hamann W., 2002, A&A, 387, 244
[Groote & Hunger1997]groote_hunger97
Groote D., Hunger K., 1997, A&A, 319, 250
[Grunhut et al.2012a]grunhut_etal12a
Grunhut J.H., Rivinius Th., Wade G.A., Townsend R.H.D., et al. 2012a, MNRAS, 419, 1610
[Grunhut et al.2012b]grunhut_etal12b
Grunhut J.H., Wade G.A., and the MiMeS Collaboration, 2012b, ASPC, 465, 42
[Güdel & Benz1993]guedel_benz93
Güdel M., Benz A.O., 1993, ApJ, 405, L63
[Hallinan et al.2008]hallinan_etal08
Hallinan G., Antonova A., Doyle J.G., Bourke S., Lane C., Golden A., 2008, ApJ, 684, 644
[Hamann & Gräfener2003]hg2003
Hamann W., Gräfener G., 2003, A&A, 410, 993
[Hoffleit & Jaschek1991]hoffleit_jaschek91
Hoffleit D., Jaschek C., 1991, The Bright Star Catalogue (New Haven, CT: Yale Univ. Observatory)
[Hubrig et al.2015]hubrig_etal15
Hubrig S., Schöller M., Fossati L., Morel T., Castro N., Oskinova L.M., Przybilla N.,
Eikenberry S.S., Nieva M.-F, Langer N., and BOB collaboration, 2015, A&A, 578, L3
[Hudson & Ryan1995]hudson_ryan95
Hudson H., Ryan J., 1995, ARA&A, 33, 239
[Ignace, Cassinelli & BjorkmanIgnace et al.1998]ignace_etal98
Ignace R., Cassinelli J.P., Bjorkman J.E., 1998, ApJ, 505, 910
[Ignace, Oskinova & MassaIgnace et al.2013]ignace_etal13
Ignace R., Oskinova L.M., Massa D., 2013, MNRAS, 429, 516
[Kao et al.2016]kao_etal16
Kao M.M., Hallinan G., Pineda J.S., Escala I., Burgasser A. Bourke S., Stevenson D., 2016, ApJ, 818, 24
[Kochukhov et al.2014]kochukhov_etal14
Kochukhov O., Lülftinger T., Neiner C., Alecian E., and the MiMeS collaboration, 2014, A&A, 565, 83
[Klein & Trotter1984]klein_trotter84
Klein K.L., Trotter G., 1984, A&A, 141, 67
[Krticka2014]kritcka14
Krticka J., 2014, A&A, 564, 70
[Leone1991]leone91
Leone F., 1991, A&A, 252, 198
[Leone1993]leone93
Leone F., 1993, A&A, 273, 509
[Leone & Umana1993]leone_umana93
Leone F., Umana G., 1993, A&A, 268, 667
[Leone, Trigilio & UmanaLeone et al.1994]leone_etal94
Leone F., Trigilio C., Umana G., 1994, A&A, 283, 908
[Leone et al.2004]leone_etal04
Leone F., Trigilio C., Neri R., Umana G., 2004, A&A, 423, 10
[Leone et al.2010]leone_etal10
Leone F., Bohlender D.A., Bolton C.T., Buemi G., Catanzaro G., Hill G.M., Stift M.J., 2010, MNRAS, 401, 2739
[Leto et al.2006]leto_etal06
Leto P., Trigilio C., Buemi C.S., Umana G., Leone F., 2006, A&A, 458, 831
[Leto et al.2012]leto_etal12
Leto P., Trigilio C., Buemi C.S., Leone F., Umana G., 2012, MNRAS, 423, 1766
[Leto et al.2016]leto_etal16
Leto P., Trigilio C., Buemi C.S., Umana G., Ingallinera A., Cerrigone L., 2016, MNRAS, 459, 1159
[Lynch et al.2016]lynch_etal16
Lynch C., Murphy T., Ravi V., Hobbs G., Lo K., Ward C., 2016, MNRAS, 457, 1224
[Linsky et al.Linsky, Drake & Bastian1992]linsky_etal92
Linsky J.L., Drake S.A., Bastian S.A., 1992, ApJ, 393, 341
[Lo et al.2012]lo_etal12
Lo K.K., Bray J.D., Hobbs G., et al., 2012, MNRAS, 421, 3316
[Louarn & Le Queau1996a]louarn_lequeau96a
Louarn P., Le Queau D., 1996a, P&SS, 44, 199
[Louarn & Le Queau1996b]louarn_lequeau96b
Louarn P., Le Queau D., 1996b, P&SS, 44, 211
[Maheswaran & Cassinelli2009]maheswaran_cassinelli09
Maheswaran M., Cassinelli J.P., 2009, MNRAS, 394, 415
[Melrose & Dulk1982]melrose_dulk82
Melrose D.B., Dulk G.A., 1982, ApJ, 259, 844
[Menietti et al.2011]menietti_etal11
Menietti J.D., Mutel R.L., Christopher I.W., Hutchinson K.A., Sigwarth J.B, 2011, J. Geophys. Res., 116, A12219
[Mutel, Christopher & PickettMutel et al.2008]mutel_etal08
Mutel R.L., Christopher I.W., Pickett J.S., 2008, GeoRL, 35, L07104.
[Nazé et al.2014]naze_etal14
Nazé Y., Petit V., Rinbrand M., et al., 2014, ApJS, 215, 10
[Nichols2011]nichols11
Nichols J.D., 2011, J. Geophys. Res., 116, A10232
[Oksala et al.2010]oksala_etal10
Oksala M.E., Wade G.A., Marcolino W.L.F., et al., 2010, MNRAS, 405, L51
[Oskinova et al.2011]oskinova_etal11
Oskinova L.M., Todt H., Ignace R., Brown C.J., Cassinelli J.P.,
Hamann W.-R., 2011, MNRAS, 416, 1456
[Oskinova et al.2014]oskinova_etal14
Oskinova L.M., Nazé Y., Todt H., Huenemoerder D.P., Ignace R.,
Hubrig S., Hamann W.-R., 2014, Nature Communications, 4024
[Oksala et al.2012]oksala_etal12
Oksala M.E., Wade G.A., Townsend R.H.D., et al., 2012, MNRAS, 419, 959
[Petit et al.2013]petit_etal13
Petit V., Owocki S.P., Wade G.A., et al., 2013, MNRAS, 429, 398
[Poe, Friend & CassinelliPoe et al.1989]poe_etal89
Poe C.H., Friend D.B., Cassinelli J.P., 1989, ApJ, 337, 888
[Prinja1989]prinja89
Prinja K.P., 1989, MNRAS, 241, 721
[Ramaty1969]ramaty69
Ramaty R., 1969, ApJ, 158, 75
[Ravi et al.2010]ravi_etal10
Ravi V., Hobbs G., Wickramasinghe D., Champion D.J., Keith M., 2010, MNRAS, 408, L99
[Rivinius et al.2008]rivinius_etal08
Rivinius Th., Štefl S., Townsend R.H.D., Baade D., 2008, A&A, 482, 255
[Rivinius et al.2010]rivinius_etal10
Rivinius Th., Szeifert Th., Barrera L., Townsend R.H.D., Štefl S., Baade D., 2010, MNRAS, 405, L46
[Rivinius et al.2013]rivinius_etal13
Rivinius Th., Townsend R.H.D., Kochukhov O., et al., 2013, MNRAS, 429, 177
[Route & Wolszczan2012]route_wolszczan12
Route M., Wolszczan A., 2012, ApJ, 747, L22
[Route & Wolszczan2013]route_wolszczan13
Route M., Wolszczan A., 2013, ApJ, 773, 18
[Sander et al.(2015)Sander, Shenar, Hainich, Gímenez-García, Todt, & Hamann]Sander2015
Sander A., Shenar T., Hainich R., Gímenez-García A., Todt H., Hamann W.-R., 2015, A&A, 577, A13
[Sikora et al.2015]sikora_etal15
Sikora J., Wade G.A., Bohlender D.A., Neiner C., Oksala M.E., et al., 2015, MNRAS, 451, 1928
[Shenar et al.(2014)Shenar, Hamann, & Todt]Shenar2014
Shenar T., Hamann W.-R., Todt H., 2014, A&A, 562, A118
[Shore1987]shore87
Shore S.N., 1987, AJ, 94, 73
[Shore, Brown & SonnebornShore et al.1987]shore_etal87
Shore S.N., Brown D.N., Sonneborn G., 1987, AJ, 94, 737
[Shore & Brown1990]shore_brown90
Shore S.N., Brown D.N., 1990, ApJ, 365, 665
[Stibbs1950]stibbs50
Stibbs D.W.N., 1950, MNRAS, 110, 395
[Stevens & George2010]stevens_george10
Stevens I.R., George S.J., 2010, ASPC, 422, 135
[Strüder et al.2001]struder_etal01
Strüder L., Briel U., Dennerl K., Hartmann R., Kendziorra E., Meidinger N., Pfeffermann E., Reppin C., Aschenbach B., Bornemann W., et al., 2001, A&A, 365, L18
[Townsend & Owocky2005]townsend_owocky05
Townsend R.H.D., Owocki S.P., 2005, MNRAS, 357, 251
[Townsend et al.Townsend, Owocki & Groote2005]townsend_etal05
Townsend R.H.D., Owocki S.P., Groote D., 2005, ApJ, 630, 81
[Townsend et al.2013]townsend_etal13
Townsend R.H.D., Rivinius Th., Rowe J.F., Moffat A.F.J., Matthews J.M., 2013, ApJ, 769, 33
[Trigilio et al.2000]trigilio_etal00
Trigilio C., Leto P., Leone F., Umana G., Buemi C., 2000, A&A, 362, 281
[Trigilio et al.2004]trigilio_etal04
Trigilio C., Leto P., Umana G., Leone F., Buemi C.S., 2004, A&A, 418, 593
[Trigilio et al.2008]trigilio_etal08
Trigilio C., Leto P., Umana G., Buemi C.S., Leone F., 2008, MNRAS, 384, 1437
[Trigilio et al.2011]trigilio_etal11
Trigilio C., Leto P., Umana G., Buemi C.S., Leone F., 2011, ApJ, 739, L10
[Turner et al.2001]turner_etal01
Turner M.J.L., Abbey A., Arnaud M., Balasini M., Barbera M., Belsole E., Bennie P.J., Bernard J.P., Bignami G.F., Boer M., et al., 2001, A&A, 365, 27
[ud-Doula & Owocky2002]ud-doula_owocki02
ud-Doula A, Owocki S., 2002, ApJ 576, 413
[ud-Doula et al.ud-Doula, Townsend & Owocky2006]ud-doula_etal06
ud-Doula A., Townsend R., Owocky S., 2006, ApJ, 640, L191
[ud-Doula et al.ud-Doula, Owocky & Townsend2008]ud-doula_etal08
ud-Doula A., Owocky S., Townsend R., 2008, MNRAS, 385, 97
[ud-Doula et al.2014]ud-doula_etal14
ud-Doula A. Owocky S., Townsend R., Petit V., Cohen D., 2014, MNRAS, 441, 3600
[ud-Doula2015]ud-doula15
ud-Doula A., 2015, IAUS, 307, 321
[ud-Doula & Nazé2016]ud-doula_naze15
ud-Doula A., Nazé Y., 2016, AdSpR, 58, 680
[Usov & Melrose1992]usov_melrose92
Usov V.V., Melrose D.B., 1992, ApJ, 395, 575
[van Leeuwen2007]van_leeuwen07
van Leeuwen F., 2007, A&A, 474, 653
[Walborn1974]walborn74
Walborn, N.R., 1974, ApJ, 191, L95
[Weber & DavisWeber & Davis1967]weberdavis
Weber E.J., Davis L., 1967, ApJ, 148, 217
[Williams, Cook & BergerWilliams et al.2014]williams_etal14
Williams P.K.G., Cook B.A., Berger E., 2014, ApJ, 785, 9
[Williams et al.2015]williams_etal15
Williams P.K.G., Berger E., Irwin J., Berta-Thompson Z.K., Charbonneau D., 2015, ApJ, 799, 192
[Wu & Lee1979]wu_lee79
Wu C.S., Lee L.C. 1979, ApJ, 230, 621
[Zarka1998]zarka98
Zarka P., 1998, J. Geophys. Res., 103, 20159
|
http://arxiv.org/abs/1701.07901v1 | 20170126231858 | Deep Region Hashing for Efficient Large-scale Instance Search from Images | [
"Jingkuan Song",
"Tao He",
"Lianli Gao",
"Xing Xu",
"Heng Tao Shen"
] | cs.CV | [
"cs.CV"
] |
|
http://arxiv.org/abs/1701.07663v1 | 20170126114754 | Kinetically constrained lattice gases: tagged particle diffusion | [
"Oriane Blondel",
"Cristina Toninelli"
] | math.PR | [
"math.PR",
"cond-mat.stat-mech",
"math-ph",
"math.MP"
] |
Kinetically constrained lattice gases: tagged particle diffusion
O. Blondel
blondel@math.univ-lyon1.fr
Institut Camille Jordan CNRS-UMR 5208, Bâtiment Braconnier, Univ. Claude Bernard Lyon 1, 43 boulevard du 11 novembre 1918, 69622 Villeurbanne cedex
C. Toninelli
cristina.toninelli@upmc.fr
Laboratoire de Probabilités et Modèles Aléatoires, CNRS-UMR 7599, Univ. Paris VI-VII, 4 Place Jussieu, F-75252 Paris Cedex 05, France
This work has been supported by the ERC Starting Grant 680275 MALIG
Kinetically constrained lattice gases (KCLG) are interacting particle systems on the integer lattice ℤ^d with hard core
exclusion and
Kawasaki type dynamics. Their peculiarity is that jumps are allowed only
if the configuration satisfies a constraint
which asks for enough empty sites in a certain local neighborhood.
KCLG have been introduced and extensively studied in physics literature as models of glassy dynamics.
We focus on the most studied class of KCLG, the Kob Andersen (KA) models.
We analyze the behavior of a tracer (i.e. a tagged particle) at equilibrium. We prove that for all dimensions d≥ 2
and for any equilibrium particle density, under diffusive rescaling the motion of the tracer converges to a d-dimensional Brownian motion with non-degenerate diffusion matrix. Therefore we disprove the occurrence of a diffusive/non diffusive transition which had been conjectured in physics literature.
Our technique is flexible enough and can be extended to analyse the tracer behavior for other choices of constraints.
=====================
MSC 2010 subject classifications: 60K35, 60J27
Keywords: Kawasaki dynamics, tagged particle, kinetically constrained models
§ INTRODUCTION
Kinetically constrained lattice gases (KCLG) are interacting particle systems on the integer lattice ℤ^d with hard core
exclusion, i.e.
with the constraint that on each site there is at most one particle.
A configuration is therefore defined by giving for each site
x∈ℤ^d the occupation variable η(x)∈{0,1}, which
represents an empty or occupied site respectively.
The dynamics is given by a continuous time Markov process of Kawasaki type, which allows
the exchange of the occupation variables across a bond e=(x,y) of
neighboring sites x and y with a rate c_x,y(η) depending on
the configuration η.
The simplest case is the simple symmetric exclusion process
(SSEP) in which a jump of a particle to a neighboring empty site occurs at rate one, namely
c_x,y^SSEP()=(1-η(x))η(y)+η(x)(1-η(y)).
Instead, for KCLG the jump to a neighboring empty site can occur only if the configuration satisfies a certain local constraint
which involves the occupation variables on other sites besides the initial and final position of the particle. More precisely c_x,y(η) is of the form
c_x,y^SSEPr_x,y(η) where r_x,y(η) degenerates to zero for certain choices of {η(z)}_z∈ℤ^d∖{ x,y}.
Furthermore r_x,y does
not depend on the value of η(x) and η(y) and therefore
detailed balance w.r.t. ρ-Bernoulli product
measure μ_ρ is verified for any ρ∈[0,1]. Therefore μ_ρ is an
invariant reversible measure for the process. However, at variance with
the simple symmetric exclusion process, KCLG have several
other invariant measures. This is related to the fact that due to the degeneracy of r_x,y(η)
there exist blocked configurations, namely configurations for which all
exchange rates are equal to zero.
KCLG have been introduced in physics literature (see <cit.> for a
review) to model the liquid/glass transition that occurs when a liquid is suddenly cooled.
In
particular they were devised to mimic the fact that the motion of a
molecule in a low temperature (dense) liquid can be inhibited by the geometrical
constraints created by the surrounding molecules. Since the exchange rates are devised to
encode this local caging mechanism, they require
a minimal number of empty sites
in a certain neighborhood of e=(x,y) in order for the exchange at e
to be allowed. There exists also a non-conservative version of KCLG, the so called Kinetically Constrained Spin Models, which feature a Glauber type dynamics and have been recently studied in several works (see e.g. <cit.> and references therein).
Let us start by recalling some fundamental issues which, due to the fact that the jump to a neighboring empty site is not always allowed, require for KCLG
different techniques
from those used to study SSEP.
A first basic question is whether the infinite volume process is
ergodic, namely whether zero is a simple eigenvalue for the
generator of the Markov process in L_2(μ_ρ).
This would in turn imply relaxation to
μ_ρ in the L_2(μ_ρ) sense. Since the constraints require a minimal number of empty sites, it is possible that the process undergoes a
transition from an ergodic to a non ergodic regime at ρ_c with
0<ρ_c<1.
The
next natural issue is to establish the large time behavior of the
infinite volume process in the ergodic regime, when we start from equilibrium measure at time
zero. This in turn is related to the scaling with the
system size of the spectral gap and of the inverse of the log Sobolev
constant on a finite volume. Recall that
for SSEP decay to equilibrium occurs as 1/t^d/2 and both the
spectral gap and the inverse of the log Sobolev constant decay as 1/L^2 uniformly in
the density ρ <cit.>, where L is the linear size of the finite volume. Numerical simulations for some KCLG suggest the possibility of an anomalous slowing down at high density <cit.> which could correspond to a scaling of the spectral gap and of the log Sobolev constant different from SSEP.
Two other natural issues are the evolution of macroscopic density profiles, namely the
study of the hydrodynamic limit, and the large time behavior of a tracer particle under a diffusive rescaling.
For SSEP and d≥ 2 the tracer particle converges to a Brownian motion <cit.>, more precisely the rescaled position of the tracer at time ϵ^-2 t converges as ϵ→ 0, to a d-dimensional Brownian motion with non-degenerate diffusion matrix. Instead, for some KCLG it has been conjectured that a diffusive/non-diffusive transition occurs at a finite critical density ρ_c<1: the self-diffusion matrix would be non-degenerate only for ρ<ρ_c <cit.>. Concerning the hydrodynamic limit, the following holds for SSEP:
starting from an initial condition that has
a density profile and under a diffusive rescaling, there is a density profile at later times and it can be obtained from the initial
one by solving the heat
equation <cit.>. For KCLG a natural candidate for the
hydrodynamic limit is a parabolic equation of porous media type
degenerating when the density approaches one. Establishing this result in presence of constraints is particularly challenging.
In order to recall the previous results on KCLG and to explain the novelty of our results, we should distinguish among cooperative and non-cooperative KCLG.
A
model is said to be non-cooperative if its constraints are such that it is
possible to construct a proper finite group of vacancies, the
mobile cluster, with the following two properties:
(i) for any configuration it is possible to move the mobile cluster to
any other position in the lattice by a sequence of allowed exchanges; (ii)
any nearest neighbor exchange is allowed if the mobile cluster is in a proper position in its
vicinity.
All models which are not non-cooperative are said to be
cooperative. From the point of view of the modelization of the liquid/glass
transition, cooperative models are the most relevant ones. Indeed, very
roughly speaking, non cooperative models are expected to behave like a
rescaled SSEP with the mobile cluster playing the role of a single
vacancy and are less suitable to describe the rich
behavior of glassy dynamics. Furthermore, from a mathematical point of view, cooperative models are much more challenging. Indeed, for non-cooperative models the existence of finite mobile clusters simplifies the analysis and allows the application of some standard techniques (e.g. paths arguments) already developed for SSEP.
We can now recall
the existing mathematical results for KCLG.
Non-cooperative models.
Ergodicity in infinite volume at any ρ<1 easily follows from the fact that with probability one there exists a mobile cluster and using path arguments (see for example <cit.>).
In <cit.> it is proven in certain cases that both the inverse of the spectral gap and the log Sobolev
constant in finite volume of linear size L with boundary sources [Namely with the addition of Glauber birth/death terms at the boundary] scale as O(L^2). Furthermore for the same models
the self-diffusion matrix of the tagged particle
is proved to be non-degenerate <cit.>. The diffusive scaling of the spectral gap has been proved also for some models without boundary sources in <cit.>. Finally, the hydrodynamic limit has been successfully analyzed for a special class constraints in <cit.>. In all these cases the
macroscopic density evolves under diffusive rescaling according to a
porous medium equation of the type ∂_tρ(t,u)=∇( D ∇ρ) with D(ρ)=(1-ρ)^m and m an integer parameter.
Cooperative models.
The class of cooperative models which has been most studied in physics literature are the so-called Kob
Andersen (KA) models <cit.>.
KA actually denotes a class of models on
ℤ^d characterized by an integer parameter s with s∈[2,d]. The nearest neighbor exchange rates are defined as follows: c_x,y=c_x,y^SSEPr_x,y(η)
with r_x,y=1 if at least s-1 neighbors of x different from
y are empty and at least s-1 neighbors of y different from x
are empty too, r_x,y=0 otherwise. In other words, a particle is allowed to jump to a neighboring empty site iff it has at least s empty neighbors both in its initial and final position.
Hence s is called the facilitation parameter. The choices s=1 and s>d are discarded for the following reasons: s=1 coincides with SSEP, while for s>d at any density the model is not ergodic [This follows from the fact that if s>d there exists finite clusters of particles which are blocked. For example for s=3,d=2 if there is a 2× 2 square fully occupied by particles all these particles can never jump to their neighboring empty position.].
It is immediate to verify that KA is a cooperative model
for all s∈[2,d]. For example if s=d=2 a fully
occupied double column which spans the lattice can never be
destroyed. Thus no finite cluster of vacancies can be mobile since
it cannot overcome the double column.
In <cit.> it has
been proven that for all s∈[2,d] the infinite volume process
is ergodic at any finite density, namely ρ_c=1, thus disproving previous conjectures <cit.> on the occurrence of an ergodicity breaking transition.
In <cit.> a technique has been devised to analyze the spectral gap of
cooperative KCLG on finite volume with boundary sources. In particular, for KA model with d=s=2 it has been proved that in a box of linear size L with
boundary sources, the spectral gap scales as 1/L^2 (apart from logarithmic corrections)
at any density. By using this result it is proved that, again for the choice d=s=2, the infinite volume time auto-correlation of
local functions decays as 1/t (modulo logarithmic corrections) <cit.>.
The technique of <cit.> can be extended to prove for all choices of d and s∈[2,d] a diffusive scaling for the spectral gap and a decay of the correlation at least as 1/t. A lower bound as 1/t^d/2 follows by comparison with SSEP.
In the present paper we
analyze the behavior of a tracer (also called tagged particle) for KA models at equilibrium, namely when the infinite volume system is initialized with ρ-Bernoulli measure. We prove (Theorem <ref>) that for all d, for any choice of s∈[2,d] and for any ρ<1, under diffusive scaling the motion of the tracer converges to a d-dimensional Brownian motion with non-degenerate diffusion matrix.
Our result disproves the occurrence of a diffusive/non diffusive transition which had been conjectured in physics literature on the basis of numerical simulations <cit.>. Positivity of the self-diffusion matrix at any ρ<1 had been later claimed in <cit.>. However, the results in <cit.> do not provide a full and rigorous proof of the positivity of the self-diffusion matrix. Indeed, they rely on a comparison with the behavior of certain random walks in a random environment which is not exact.
We follow here a novel route, different from the heuristic arguments sketched in <cit.>, which allows us to obtain the first rigorous proof of positivity of the self diffusion matrix for a cooperative KCLG. In particular we prove that positivity holds for any ρ<1 for all KA models. Our
technique is flexible enough and can be extended to analyze other cooperative models in the ergodic regime.
The plan of the paper follows.
In Section <ref>, after setting the relevant notation, we introduce KA models and state our main result (Theorem <ref>).
In Section <ref> we recall some basic properties of KA models: ergodicity at any ρ<1 (Proposition <ref>); the existence of a finite critical scale above which with large probability a configuration on finite volume can be connected to a framed configuration, namely a configuration with empty boundary.
In Section <ref> we
introduce an auxiliary diffusion process which corresponds to a random walk on the infinite component of a certain percolation cluster. Then we prove that this auxiliary process has a non-degenerate diffusion matrix (Proposition <ref>). In Section <ref> we prove via path arguments that the diffusion matrix of KA is lower bounded by the one for the auxiliary process (Theorem <ref>). This allows to conclude that the self diffusion matrix for KA model is non-degenerate.
§ MODEL AND RESULTS
The models considered here are defined on the integer lattice ℤ^d
with sites x = (x_1,…,x_d) and basis vectors e_1=(1,…,0), e_2=(0,1,…,0),…, e_d=(0,…,1). Given x and y in ℤ^d we write x∼ y if they are nearest neighbors, namely
d(x,y)=1 where d(·,·) is the distance associated with the Euclidean norm.
Also, given a finite set Λ⊂ℤ^d we define its neighborhood ∂Λ as the set of sites outside Λ at distance one and its interior neighborhood ∂_-Λ as the set of sites inside Λ at distance one from Λ^c, namely
∂Λ:={x∉Λ:∃ y∈Λ d(x,y)=1}
∂_-Λ:={x∈Λ:∃ y∈Λ^c d(x,y)=1}
We denote by Ω the configuration space, Ω={0,1}^ℤ^d and by the greek letters η,ξ the configurations. Given η∈Ω we let η(x)∈{0,1} be the occupation variable at site x.
We fix a parameter ρ∈[0,1] and we denote by μ the ρ-Bernoulli product measure.
Finally, given η∈Ω for any bond e=(x,y) we denote by η^xy the configuration obtained from η by exchanging the occupation variables at x and y, namely
η^xy(z) := η(z) if z∉{x,y}; η(x) if z=y; η(y) if z=x.
The Kob-Andersen (KA) models are interacting particle systems with Kawasaki type (i.e. conservative) dynamics on the lattice ^d depending on a parameter s≤ d (the facilitation parameter) with s∈[2,d]. They are Markov processes defined through the generator which acts on local functions f:Ω→ℝ as
L_ envf(ξ)=∑_x∈ℤ^d∑_y∼ x c_xy(ξ)[f(ξ^xy)-f(ξ)],
where
c_xy(ξ) = 1 if ξ(x)=1, ξ(y)=0, ∑_z∼ y(1-ξ(z))≥ s-1 and ∑_z∼ x(1-ξ(z))≥ s; c_xy(ξ) = 0 otherwise,
where here and in the following with a slight abuse of notation we let ∑_z∼ y be the sum over sites z∈ℤ^d with z∼ y.
In words, each couple of neighboring sites (x,y) waits an independent mean one exponential time and then the values η(x) and η(y) are exchanged provided: either (i) there is a particle at x and an empty site at y, at least s-1 empty nearest neighbors of y and at least s empty nearest neighbors of x, or (ii) there is a particle at y and an empty site at x, at least s empty nearest neighbors of y and at least s-1 empty nearest neighbors of x.
We call the jump of a particle from x to y allowed if c_xy(ξ)=1. For any ρ∈ (0,1), the process is reversible w.r.t. μ, the product Bernoulli measure of parameter ρ.
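For concreteness, the rate above can be transcribed literally in code; the following Python sketch uses a periodic finite grid as a stand-in for ℤ^d (an assumption made only to keep the example self-contained):

import numpy as np

def neighbours(z, shape):
    # Nearest neighbours of site z on a periodic grid of the given shape
    d = len(shape)
    for i in range(d):
        for sgn in (+1, -1):
            yield tuple((z[j] + (sgn if j == i else 0)) % shape[j]
                        for j in range(d))

def ka_rate(eta, x, y, s):
    # Rate c_xy: particle at x, hole at y, at least s-1 empty neighbours
    # around y and at least s empty neighbours around x (y counts for x)
    if eta[x] != 1 or eta[y] != 0:
        return 0
    holes_y = sum(1 - eta[z] for z in neighbours(y, eta.shape))
    holes_x = sum(1 - eta[z] for z in neighbours(x, eta.shape))
    return 1 if holes_y >= s - 1 and holes_x >= s else 0

rng = np.random.default_rng(0)
eta = (rng.random((6, 6)) < 0.7).astype(int)   # density rho = 0.7, d = s = 2
print(ka_rate(eta, (2, 2), (2, 3), s=2))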
We consider a tagged particle in a KA system at equilibrium. More precisely, we consider the joint process (X_t,ξ_t)_t≥ 0 on ℤ^d×{0,1}^ℤ^d with generator
ℒf(X,ξ) = ∑_y∈ℤ^d∖{X} ∑_z∼ y c_yz(ξ)[f(X,ξ^yz)-f(X,ξ)]
+ ∑_y∼ X c_Xy(ξ)[f(y,ξ^Xy)-f(X,ξ)]
and initial distribution ξ_0∼μ_0:=μ(·|ξ(0)=1), X_0=0. Here and in the rest of the paper, we denote for simplicity by 0 the origin, namely site x∈ℤ^d with e_i· x=0 ∀ i∈{ 1,…,d}.
In order to study the position of the tagged particle, (X_t)_t≥ 0, it is convenient to define the process of the environment seen from the tagged particle (η_t)_t≥ 0:=(τ_X_tξ_t)_t≥ 0, where (τ_xξ)(y)=ξ(x+y). This process is Markovian, has generator
Lf(η)=∑_y∈ℤ^d∖{0} ∑_z∼ y c_yz(η)[f(η^yz)-f(η)]+∑_y∼ 0 c_0y(η)[f(τ_y(η^0y))-f(η)]
and is reversible w.r.t. μ_0. We still say that the jump of a particle from x to y is an allowed move if c_xy(η)=1. In the case x=0, this jump in fact turns η into τ_y(η^0y).
By using the fact that the process seen from the tagged particle is ergodic at any ρ<1 (see Proposition <ref>) we can apply a classic result <cit.> and obtain the following.
<cit.>[This result is proved in <cit.> for exclusion processes on ^d but the proof also works in our setting.]
For any ρ∈(0,1), there exists a non-negative d× d matrix D(ρ) such that
ε X_ε^-2 t⟶_ε→ 0√(2D(ρ)) B_t,
where B is a standard d-dimensional Brownian motion and the convergence holds in the sense of
weak convergence of path measures on D([0,∞),ℝ^d). Moreover, the matrix D(ρ) is characterized by
u· D(ρ)u=inf_f{∑_y∈ℤ^d∖{0}∑_z∼ yμ_0(c_yz(η)[f(η^yz)-f(η)]^2).
+.∑_y∼ 0μ_0(c_0y(η)[u· y+f(τ_y(η^0y))-f(η)]^2)}
for any u∈ℝ^d, where the infimum is taken over local functions f on {0,1}^ℤ^d.
Our main result is the following.
Fix an integer d and s∈[2,d] and consider the KA model on ℤ^d with facilitation parameter s. Then, for any ρ∈(0,1), any i=1,…,d, we have e_i· D(ρ)e_i>0.
In other words, the matrix D(ρ) is non-degenerate at any density.
Since the constraints are monotone in s (the facilitation parameter), it is enough to prove the above result for s=d. From now on we assume s=d.
§ ERGODICITY, FRAMEABILITY AND CHARACTERISTIC LENGTHSCALE
In this section we recall some key results for KA dynamics.
In <cit.>, following the arguments of <cit.>, it was proved that KA models are ergodic for any ρ<1. More precisely we have the following.
Fix an integer d and s∈[2,d] and consider the KA model on ℤ^d with facilitation parameter s.
Fix ρ∈(0,1) and let μ be the ρ-Bernoulli product measure. Then
0 is a simple eigenvalue of the generator L_ env defined by formula (<ref>) considered on L_2(μ).
Along the same lines one can prove that the process of the environment seen from the tagged particle is ergodic on L_2(μ_0), namely recalling that μ_0:=μ(·|ξ(0)=1) it holds
0 is a simple eigenvalue of the generator L defined by formula (<ref>) considered on L_2(μ_0).
Given Λ⊂ℤ^d and two configurations
η,σ∈Ω, a sequence of configurations
P_η,σ= (η^(1),η^(2),…,η^(n))
starting at η^(1)=η and ending at η^(n)=σ is an
allowed path from η to σ inside Λ if for any
i=1,…,n-1 there exists a bond (x_i,y_i), namely a couple of neighboring sites, with
η^(i+1)=(η^(i))^x_iy_i and c_x_iy_i(η^(i))
=1. We also require that paths do not go through the same configuration twice, namely for all i,j∈[2,n] with i≠ j it holds η^(i)≠η^(j).
We say that n is the length of the path. Of course the notion of allowed path depends on the choice of the facilitation parameter s which enters in the definition of c_xy.
It is also useful to define allowed paths for the process seen from the tagged particle.
The paths are defined as before, with the only difference that for any
i=1,…,n-1 there exists a bond (x_i,y_i), namely a couple of neighboring sites, with
c_x_iy_i(η^(i))
=1 and
* either x_i=0 and η^(i+1)=τ_y_i((η^(i))^0y_i)
* or x_i≠ 0 and η^(i+1)=(η^(i))^x_iy_i
Following the terminology of <cit.> we introduce the notion of frameable and framed configurations.
Fix a set Λ⊂ℤ^d and a configuration ω∈Ω.
We say that ω is Λ-framed
if ω (x) =0 for any x ∈∂_-Λ.
Let
ω^(Λ) be the configuration equal to ω_Λ inside Λ and
equal to 1 outside Λ.
We say that ω is Λ-frameable
if there exist a Λ-framed configuration σ^(Λ)
with at least one allowed configuration path P_ω^(Λ)→σ^(Λ) inside Λ (by definition any framed configuration is also frameable). Sometimes, when from the context it is clear to which geometric set Λ we are referring, we will drop Λ in the names and just say framed and frameable configurations. Of course the notion of frameable configurations depends on the choice of the facilitation parameter s.
The following result, proved in <cit.>, shows that on a sufficiently large lengthscale frameable configurations are typical.
<cit.>
For any dimension d, any ρ<1 and any ε>0, there exists Ξ=Ξ(ρ,ε,d)<∞ such that, for the KA process in ℤ^d with facilitation parameter d, for L≥Ξ it holds
μ(ξ is Λ_L-frameable)≥ 1-ε
where we set Λ_L=[0,L]^d.
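For very small boxes, frameability can be tested by brute force, exploring all configurations reachable through allowed exchanges; the sketch below (d=s=2, L=3, sites outside the box treated as occupied as in the definition of ω^(Λ)) also gives a crude Monte Carlo estimate of μ(frameable), illustrating that L must be taken large for the proposition to bite:

import random
from collections import deque

L = 3
sites = [(i, j) for i in range(L) for j in range(L)]
idx = {x: k for k, x in enumerate(sites)}

def nbrs(x):
    i, j = x
    return [y for y in ((i+1, j), (i-1, j), (i, j+1), (i, j-1)) if y in idx]

def allowed(cfg, x, y):
    # KA rate with s = 2; neighbours outside the box are occupied
    if not (cfg[idx[x]] == 1 and cfg[idx[y]] == 0):
        return False
    holes_x = sum(1 - cfg[idx[z]] for z in nbrs(x))   # includes y
    holes_y = sum(1 - cfg[idx[z]] for z in nbrs(y))   # x contributes 0
    return holes_x >= 2 and holes_y >= 1

def frameable(cfg0):
    def framed(c):
        return all(c[idx[x]] == 0 for x in sites if 0 in x or L - 1 in x)
    seen, queue = {cfg0}, deque([cfg0])
    while queue:
        cfg = queue.popleft()
        if framed(cfg):
            return True
        for x in sites:
            for y in nbrs(x):
                if allowed(cfg, x, y):
                    nxt = list(cfg)
                    nxt[idx[x]], nxt[idx[y]] = 0, 1
                    nxt = tuple(nxt)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
    return False

random.seed(0)
rho, trials = 0.1, 300
hits = sum(frameable(tuple(int(random.random() < rho) for _ in sites))
           for _ in range(trials))
print(f"estimated mu(frameable) ~ {hits / trials:.2f} at rho = {rho}, L = {L}")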
§ AN AUXILIARY DIFFUSION
In this section we will introduce a bond percolation process on a properly renormalized lattice and an auxiliary diffusion which corresponds to a random walk on the infinite component of this percolation. Then we will prove that this auxiliary process has a non-degenerate diffusion matrix (Proposition <ref>). This result will be the key starting point of the next section, where we will prove our main Theorem <ref> by comparing the diffusion matrix of the KA model with the diffusion matrix of the auxiliary process (Theorem <ref>).
In order to introduce our bond percolation process we need some auxiliary notation.
Fix a parameter L∈ℕ and consider the renormalized lattice (L+2)ℤ^d.
For n∈{0,…,d}, let B^(n):={0,1}^d-n×{0,…,L-1}^n. We say that B^(n) is the elementary block of L–dimension n. B^(n)_1,…,B^(n)_\binom{d}{n} are the blocks obtained from B^(n) by permutations of the coordinates. Notice that one can write the cube of side length L+2 as a disjoint union of such blocks in the following way (see Figure <ref>):
Λ_L+2:={0,…,L+1}^d=B^(0)⊔_n=1^d⊔_i=1^\binom{d}{n}(B^(n)_i+2e_j_i1+…+2e_j_in),
where the block B^(n)_i has length L in the directions e_j_i1,…,e_j_in and length 2 in the other directions.
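The cardinalities in this decomposition match by the binomial theorem, (L+2)^d = Σ_n \binom{d}{n} 2^{d-n} L^n; a two-line check:

from math import comb

d, L = 3, 5
total = sum(comb(d, n) * 2**(d - n) * L**n for n in range(d + 1))
assert total == (L + 2)**d     # the blocks tile the cube of side L + 2
print(total, (L + 2)**d)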
By first decomposing ℤ^d in blocks of linear size L+2 and then using this decomposition, we finally get a paving of ℤ^d by blocks with side lengths in {2, L}.
We will also call faces of B^(n)_i the 2^d-n (disjoint) regions of the form
x_j_i1∈[0,L-1],…,x_j_in∈[0,L-1] and x_j=c_j with c_j∈{0,1} for all j∉{j_i1,…,j_in}.
Finally, for x∈(L+2)ℤ^d, i=1,…,d, we define the block neighborhood 𝒩_x,i of the bond (x,x+(L+2)e_i) recursively in the L–dimension of the blocks (see also Figure <ref>):
* B^(0)+x and B^(0)+x+(L+2)e_i belong to 𝒩_x,i,
* each tube adjacent to B^(0)+x or B^(0)+x+(L+2)e_i belongs to 𝒩_x,i,
* recursively, each block of L–dimension n+1 adjacent to some block of L–dimension n in 𝒩_x,i is also in 𝒩_x,i.
We are now ready to define our bond percolation process.
Let ℰ((L+2)ℤ^d) be the set of bonds of (L+2)ℤ^d. Given a configuration η∈{0,1}^ℤ^d, the corresponding configuration on the bonds
η̅∈{0,1}^ℰ((L+2)ℤ^d) is defined by η̅_x,x+(L+2)e_i=1 iff
*
for all n=2,…,d, for all blocks B of L–dimension n in 𝒩_x,i,
let Λ_B,j with j∈[1,2^d-n] be its faces. The configuration should be Λ_B,j-frameable,
for every such j, for the KA process with parameter n.
In other words the edge
(x,x+(L+2)e_i) is open if (1) and (2) are satisfied, closed otherwise. See Figure <ref> for an example of an open bond.
Note that conditions (1) and (2) do not ask anything of the configuration inside B^(0)+x and B^(0)+x+(L+2)e_i. As a consequence, the distribution of η̅ is the same for η∼μ as for η∼μ_0. We denote it by μ̅.
μ̅ is a (d+2)–dependent bond percolation such that for any fixed ρ∈ (0,1), μ̅(η̅_0,(L+2)e_i=1)⟶_L→∞ 1 for all i=1,…,d. In particular, for L(ρ) large enough, there is an infinite open cluster.
To bound the dependence range, it is enough to check that for x,y∈(L+2)ℤ^d at distance at least (L+2)(d+2), 𝒩_x,i and 𝒩_y,j are disjoint for any i,j=1,…,d.
We now show that the percolation parameter goes to 1 with L. First, the number of blocks in 𝒩_x,i depends only on d and the configurations inside the different blocks in 𝒩_x,i are independent, so we just need to show that the probability for each block to satisfy condition (<ref>) or (<ref>) (depending on its L–dimension) goes to one. This is clearly true for condition (<ref>), since the probability that a given tube contains a zero is 1-ρ^2^d-1L. For condition (<ref>), consider a block of L–dimension n with n≥ 2, and notice that under either μ or μ_0, the configurations inside the 2^d-n different n-dimensional faces of the block are independent since the faces are disjoint. The conclusion therefore follows from Lemma <ref>.
Now we can define the auxiliary process (Y_t)_t≥ 0, which lives on (L+2)ℤ^d and whose diffusion coefficient we will compare with D(ρ). Fix L(ρ) so that under μ̅ the open cluster percolates. Y is the simple random walk on the infinite percolation cluster. More precisely, let μ̅^*:=μ̅(·|0↔∞), where as usual we write “0↔∞” for “0 belongs to the infinite percolation cluster”. Y_0:=0 and from x∈(L+2)ℤ^d, Y jumps to x±(L+2)e_i at rate η̅_x,x±(L+2)e_i. We write ℙ^ aux_μ̅^* for the distribution of this random walk.
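A crude simulation of such a walk, with i.i.d. Bernoulli bond percolation on a torus standing in for the dependent measure μ̅ and a discrete-time jump chain standing in for the rate-one dynamics (both assumptions, made only for illustration):

import numpy as np

rng = np.random.default_rng(0)
N, p, steps = 50, 0.7, 10_000
# open_bond[x, y, i]: the bond from (x, y) to (x, y) + e_i is open
open_bond = rng.random((N, N, 2)) < p
e = np.eye(2, dtype=int)

pos = np.zeros(2, dtype=int)
for _ in range(steps):
    i, sgn = rng.integers(2), rng.choice((-1, 1))
    base = (pos + e[i] * ((sgn - 1) // 2)) % N   # lower endpoint of the bond
    if open_bond[base[0], base[1], i]:           # move only along open bonds
        pos = pos + sgn * e[i]
print("mean squared displacement per step:", (pos @ pos) / steps)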
For L(ρ) large enough, there exists a positive (non-degenerate) d× d matrix D_ aux(ρ) such that under ℙ^ aux_μ̅^*,
ε Y_ε^-2 t⟶_ε→ 0√(2D_ aux(ρ))B_t,
weak convergence of path measures on D([0,∞),ℝ^d). The matrix D_ aux(ρ) is characterized by
u· D_ aux(ρ)u=inf_f{∑_{y: y∼ 0 in (L+2)ℤ^d}μ̅^*(η̅_0,y[u· y+f(τ_y(η̅))-f(η̅)]^2)}>0
for any u∈ℝ^d, where the infimum is taken over local functions f on {0,1}^ℰ((L+2)ℤ^d).
The convergence to Brownian motion (formula (<ref>)) was proved in <cit.> in the case of independent bond percolation, and the variational formula (the equality in (<ref>)) was established in <cit.>. As pointed out in Remark 4.16 of <cit.>, independence is only needed to show positivity of the diffusion coefficient. This property indeed relies on the fact that the effective conductivity in a box of size N is bounded away from 0 as N→∞. Therefore, to prove the positive lower bound in (<ref>) we just need to show that this property holds under μ̅ if L is large enough. Notice that we just need to prove the result in dimension d=2. In fact, in order to prove that e_1· D_ auxe_1>0, we just need to find a lower bound on the number of disjoint open paths from left to right in [1,(L+2)N]^2 (<cit.>). The other directions are similar.
More precisely, we only need to show that for L large enough there exists λ>0 such that for N large enough,
μ̅(at least λ N disjoint left-right open paths in [1,(L+2)N]^2)≥ 1- e^-λ N.
To this aim, we embed open paths in μ̅ into open paths of yet another percolation process built from μ. Let μ̂ be the independent site percolation process defined as follows. The underlying graph is ℤ̂^2 = 3(L+2)ℤ × 2(L+2)ℤ. For x∈ℤ^2, we let x̂=(L+2)(3x_1,2x_2) and 𝒩̂_x be the union of 𝒩_x̂,1 and the tubes just above and to the right (Figure <ref>). We say that x̂ is hat-open if each tube (resp. each block) in 𝒩̂_x satisfies condition (<ref>) (resp. (<ref>)). This defines a probability measure μ̂ on {0,1}^ℤ̂^2, which is an independent site percolation process since 𝒩̂_x∩𝒩̂_x'=∅ if x≠ x'. Moreover, μ̂(0 is hat-open)⟶_L→∞ 1, similarly to what we proved in Lemma <ref>. We now use <cit.> to say that (<ref>) holds with μ̅ replaced with μ̂ and “open” by “hat-open”. In order to deduce (<ref>), notice that μ̂ is naturally coupled with μ̅ since they are both constructed from μ. Moreover, it is clear that for x∈ℤ^2 the following holds
* x̂ is μ̂-open implies (x̂,x̂+(L+2)e_1) is open (for our dependent percolation process),
* x̂,x̂+3(L+2)e_1 are μ̂-open implies (x̂,x̂+(L+2)e_1),(x̂+(L+2)e_1,x̂+2(L+2)e_1),(x̂+2(L+2)e_1,x̂+3(L+2)e_1) are open,
* x̂,x̂+2(L+2)e_2 are μ̂-open implies (x̂,x̂+(L+2)e_2),(x̂+(L+2)e_2,x̂+2(L+2)e_2) are open.
Therefore, for the natural coupling between μ̅ and μ̂, the existence of disjoint μ̂-open paths implies the existence of disjoint open paths, and (<ref>) follows.
§ COMPARISON OF THE DIFFUSION COEFFICIENTS AND PROOF OF THEOREM <REF>
The main result of this section is the following Theorem, which states that the self diffusion matrix for KA is lower bounded by the self diffusion matrix for the auxiliary model introduced in the previous section.
There exists a constant C=C(d,L(ρ))>0 such that for all i=1,… d,
e_i· D(ρ)e_i≥ Ce_i· D_ aux(ρ)e_i.
This result will be proved by using the variational characterisation of the diffusion matrices and via path arguments.
More precisely, for any move (x,ξ)→ (x',ξ') which has rate >0 for the auxiliary process we construct (in Lemmata <ref>, <ref>, <ref>, <ref>, <ref>) a path of moves,
each having positive rate for the KA process and connecting (x,ξ) to (x',ξ').
Once Theorem <ref> is proved, our main result follows.
The result follows by using Proposition <ref> and Theorem <ref>.
We are therefore left with the proof of Theorem <ref>. Let us start by establishing some key Lemmata.
Let A={ξ(0)=1, ξ(x)=0 ∀ x∈{ 0,1}^d ∖ 0}. Define μ_A=μ(·|A) and denote by η^x y,□ the configuration obtained from η by exchanging the contents of the boxes x+{ 0,1}^d and y+{ 0,1}^d. Then the following holds
e_i· D_ auxe_i≤μ̅(0↔∞)^-1inf_f{∑_y∼_(L+2)ℤ^d 0μ_A(η̅_0,y[y_i+f(τ_y(η^0y,□))-f(η)]^2)},
where the infimum is taken over local functions f on { 0,1}^ℤ^d.
Let f be a local function on { 0,1}^ℤ^d. We associate with it a local function f̅ on { 0,1}^ℰ((L+2)ℤ^d), defined by f̅(η̅)=μ_A(f|η̅). Then, for y∼_(L+2)ℤ^d 0, since η̅ does not depend on the configuration η inside { 0,1}^d and y+{ 0,1}^d, we can bound
μ_A(η̅_0,y[y_i+f(τ_y(η^0y,□))-f(η)]^2)
= μ_A(η̅_0,yμ_A([y_i+f(τ_y(η^0y,□))-f(η)]^2|η̅))
≥μ̅(η̅_0,y[y_i+f̅(τ_y(η̅))-f̅(η̅)]^2)
≥μ̅^*(η̅_0,y[y_i+f̅(τ_y(η̅))-f̅(η̅)]^2)μ̅(0↔∞).
Therefore the result follows by (<ref>).
The next sequence of Lemmata will show that for all η,y such that η̅_0,y=1 and η∈ A, there exists an allowed path from η to τ_y(η^0y,□) of finite length.
In order to avoid heavy notations, we will sometimes adopt an informal description of the allowed paths in the proofs.
For simplicity, we state the results in the case y=(L+2)e_1, but the process would be the same in any direction. In the following, c(d) denotes a constant depending only on d which may change from line to line.
Let η∈{ 0,1}^ℤ^d be such that η̅_0,(L+2)e_1=1.
Choose a block of L–dimension n∈[2,d] inside 𝒩_0,1, call it Λ. Then,
using at most c(d)2^L^d allowed moves, one can empty every site on its interior boundary ∂_-Λ (see Figure <ref>).
For the blocks of L–dimension d, this follows from the condition on η̅ which implies frameability of these blocks. The number of necessary moves is bounded by the number of configurations inside one block times the number of involved blocks. Then we deal with the blocks of L–dimension d-1,… 2 iteratively. Note that the frameability condition given by the definition of η̅ is such that a block of L–dimension k∈{ 2,…,d-1} is frameable (in the sense of the KA process in dimension d) as soon as the neighboring blocks of L–dimension k+1 are framed. Indeed, for k<d, every site x in a block of L–dimension k is adjacent to a point in the interior boundary of d-k different blocks of dimension k+1 (which belong to 𝒩_0,1 by construction). Therefore the path allowed by KA model with parameter k in order to frame the configuration (which exists thanks to condition (2)), is also allowed by KA model with parameter d (since the missing d-k empty sites are found in the interior boundary of the neighboring framed block).
After this step, the tubes in 𝒩_0,1 are wrapped by zeros, namely for any site x inside a tube, any neighbor of x that belongs to a facilitating block (i.e. to a block of L–dimension ≥ 2) is empty.
Next we notice that inside a tube wrapped by zeros, the jump of a particle to a neighboring empty site is always allowed (since the wrapping guarantees an additional zero in the initial and in the final position of the particle). More precisely
the following holds
Fix i∈{1,…,d} and choose any configuration ξ such that B^(1)_i is wrapped by zeros. Fix x∼ y with x,y∈ B^(1)_i. Then c_x,y(ξ)=1.
Therefore if ξ,ξ' are two configurations that are both empty on ∂ B^(1)_i, coincide outside of B^(1)_i and have the same number of zeros inside B^(1)_i, then there is an allowed path with length c(d)L from ξ to ξ'. Moreover, if x,x'∈ B^(1)_i, and ξ,ξ' have the same positive number of zeros inside the tube and the tracer respectively at x,x', it takes at most c(d)L allowed moves inside the tube to change ξ into ξ' and take the tracer from x to x'.
One just needs to notice that the wrapping ensures the satisfaction of the constraint for any such exchange.
Fix a configuration such that: there is at least one zero in each tube inside 𝒩_0,1; each such tube is wrapped by zeros; the tracer is at the origin and the remaining sites of { 0,1}^d are empty. Then the tracer can be moved to any position in B_1^(1)+2e_1, namely for any y∈ B_1^(1)+2e_1 there is an allowed path from (0,η) to (y,η') for at least one configuration η'.
It is clear that the tracer can get to e_1, and we can bring a zero to 2e_1 thanks to Lemma <ref>. Then we can exchange the configuration in e_1 and 2e_1, take the zero in e_1+e_2 inside the tube (namely exchange the configuration in e_1+e_2 and 2e_1+e_2 thanks to the empty site in 2e_1+2e_2 guaranteed by the wrapping), use Lemma <ref> again to get to the desired position, and take the zero back to e_1+e_2 (if the desired position is 2e_1+e_2 there is no need to take the zero in e_1+e_2 inside the tube).
Fix a configuration such that all tubes adjacent to B^(0) are wrapped by zeros and they all contain a zero except possibly B_1^(1)+2e_1. If they do not contain the tracer, we can exchange the configurations in the slices { 1}×{ 0,1}^d-1 and { 2}×{ 0,1}^d-1 in at most c(d)L allowed moves.
For x∈{ 1}×{ 0,1}^d-1, x+e_1 has d-1 empty neighbors in { 2}×ℤ^d-1 thanks to the wrapping. Moreover x is adjacent to d tubes, d-1 of which are not B_1^(1)+2e_1 and therefore contain a zero that can be brought to a site adjacent to x using Lemma <ref>. The constraint for the exchange is then satisfied if the configurations differ at x, x+e_1 (else the exchange is pointless).
Fix a configuration such that: all tubes adjacent to B^(0) are wrapped by zeros and they all contain a zero except possibly B_1^(1)+2e_1; either the slice { 0}×{ 0,1} ^d-1 or the slice { 1}×{ 0,1} ^d-1 are completely empty. Then we can exchange the configurations in { 0}×{ 0,1} ^d-1 and { 1}×{ 0,1} ^d-1 in at most c(d)L allowed moves.
We describe the case { 0}×{ 0,1} ^d-1 empty. Order arbitrarily the zeros in positions x∈{ 0}×{ 0,1} ^d-1 and move them one by one to x+e_1. When attempting to move the i–th zero, initially in position x, a certain number N_i of its neighbors in slice { 0}×{ 0,1} ^d-1 have not been touched and are still empty. The other d-1-N_i zeros are now in neighboring positions of x+e_1. Moreover, there are d-1 tubes adjacent to both x and x+e_1. In N_i of those, we take the zero to the position adjacent to x+e_1, and in the other d-1-N_i to the position adjacent to x. Now the condition to exchange the variables at x,x+e_1 is satisfied.
We are now ready to prove the following key result
There exists a constant C=C(L,ρ)<∞ such that for any local function f on { 0,1}^ℤ^d, we have
μ_A(η̅_0,y[y_i+f(τ_y(η^0y,□))-f(η)]^2)
≤ C(L,ρ)[∑_y∈ℤ^d∖{ 0} ∑_z∼ yμ_0(c_yz(η)[f(η^yz)-f(η)]^2)
+∑_y∼ 0μ_0(c_0y(η)[y_i+f(τ_y(η^0y))-f(η)]^2)].
Due to Lemmata <ref>, <ref>, <ref>, <ref>, <ref>, we know that for all η such that η̅_0,y=1 and η∈ A, there exists an allowed path from η to τ_y(η^0y,□) of length upper bounded by C'2^L^d for some finite constant C'. In Figure <ref>, we give the main steps in the construction of such a path.
Denote such a path by (x^(k),y^(k),η^(k))_k, where η^(0)=η, η^(N)=τ_y(η^0y,□), and the k-th move exchanges the occupancies of the neighboring sites x^(k) and y^(k) (in the environment seen from the tracer, so that x^(k)=0 exactly when the tracer itself jumps, to y^(k)). In particular, ∑_k=0^N-11_x^(k)=0y^(k)_i=y_i.
Then we can write
y_i+f(τ_y(η^0y,□))-f(η)=∑_k=0^N-1[1_x^(k)=0y^(k)_i+f(η^(k+1))-f(η^(k))].
By the Cauchy-Schwarz inequality, we deduce that
[y_i+f(τ_y(η^0y,□))-f(η)]^2≤ C'2^L^d∑_k=0^N-1c_x^(k)y^(k)(η^(k))[1_x^(k)=0y^(k)_i+f(η^(k+1))-f(η^(k))]^2.
Therefore,
μ_A(η̅_0,y[y_i+f(τ_y(η^0y,□))-f(η)]^2)= (1-ρ)^1-2^dμ_0(1_Aη̅_0,y[y_i+f(τ_y(η^0y,□))-f(η)]^2)
≤ (1-ρ)^1-2^dC'2^L^dμ_0(η̅_0,y∑_k=0^N-1c_x^(k)y^(k)(η^(k))[1_x^(k)=0y^(k)_i+f(η^(k+1))-f(η^(k))]^2)
≤(1-ρ)^1-2^dC'2^L^d{∑_z∼ x≠ 0,η,η'μ_0(η)η̅_0,y∑_k=0^N-11_x^(k)=x,y^(k)=z,η^(k)=η'c_xz(η')[f(η'^xz)-f(η')]^2
+∑_z∼ 0,η,η'μ_0(η)η̅_0,y∑_k=0^N-11_x^(k)=0,y^(k)=z,η^(k)=η'c_0z(η')[z_i+f(τ_z(η'^0z))-f(η')]^2},
where the sums are taken over x∼ z inside 𝒩_0,i, η,η'∈{ 0,1}^𝒩_0,i with the same number of zeros, and the equality η^(k)=η' actually means η^(k)=τ_Y_kη', where Y_k=∑_m=0^k-1 1_x^(m)=0 y^(m) is the displacement of the tracer after k moves. Since η and η' have the same number of zeros and the tracer at the origin by construction, μ_0(η)=μ_0(η'), and we can bound η̅_0,y∑_k=0^N-11_x^(k)=0,y^(k)=z,η^(k)=η' by N≤ C'2^L^d to obtain
μ_A(η̅_0,y[y_i+f(τ_y(η^0y,□))-f(η)]^2)
≤ (1-ρ)^1-2^d(C'2^L^d)^3[∑_x≠ 0 ∑_z∼ xμ_0(c_xz(η)[f(η^xz)-f(η)]^2)
+∑_z∼ 0μ_0(c_0z(η)[z_i+f(τ_z(η^0z))-f(η)]^2)].
Finally, we can conclude.
The result follows from Lemma <ref>, Lemma <ref> and the variational formula for D in Proposition <ref>.
bertini-toninelli Bertini, Lorenzo; Toninelli, Cristina Exclusion processes with degenerate rates: convergence to equilibrium and tagged particle. J. Statist. Phys. 117 (2004), no. 3-4, 549–580.
orianediff Blondel, Oriane Tracer diffusion at low temperature in kinetically constrained models. Ann. Appl. Probab. 25 (2015), no. 3, 1079–1107.
CMRT Cancrini, N.; Martinelli, F.; Roberto, C.; Toninelli, C. Kinetically constrained spin models. Probab. Theory Related Fields 140 (2008), no. 3-4, 459–504.
CMRT2 Cancrini, N.; Martinelli, F.; Roberto, C.; Toninelli, C. Kinetically constrained lattice gases. Comm. Math. Phys. 297 (2010), no. 2, 299–344.
chayes Chayes, J. T.; Chayes, L. Bulk transport properties and exponent inequalities for random resistor and flow networks, Comm. Math. Phys. 105 (1986), no. 1, 133–152.
dMFGW De Masi, A.; Ferrari, P. A.; Goldstein, S.; Wick, W. D., An invariance principle for reversible Markov processes. Applications to random motions in random environments. J. Statist. Phys. 55 (1989), no. 3-4, 787–855.
parisi Franz, S.; Mulet, R.; Parisi, G., Kob-Andersen model: A nonstandard mechanism for the glassy transition, Phys. Rev. E 65, 021506.
GST Garrahan, J. P.; Sollich, P.; Toninelli, C., Kinetically constrained models, in “Dynamical heterogeneities in glasses, colloids, and granular media”, Oxford Univ. Press, Eds.: L. Berthier, G. Biroli, J-P Bouchaud, L. Cipelletti and W. van Saarloos (2011).
GLT Gonçalves, P.; Landim, C.; Toninelli, C. Hydrodynamic limit for a particle system with degenerate rates. Ann. Inst. Henri Poincaré Probab. Stat. 45 (2009), no. 4, 887–909.
kesten Kesten, Harry Percolation theory for mathematicians. Progress in Probability and Statistics, 2. Birkhäuser, Boston, Mass., 1982. iv+423 pp. ISBN: 3-7643-3107-0.
KA Kob, W.; Andersen, H. C., Kinetic lattice-gas model of cage effects in high-density liquids and a test of mode-coupling theory of the ideal-glass transition, Physical Review E, 48, 4359–4363 (1993).
kurchan2 Kurchan, J.; Peliti, L.; Sellitto, M., Aging in lattice-gas models with constrained dynamics, Europhys. Lett., 39, 365–370 (1997).
leeyau Lee, Tzong-Yow; Yau, Horng-Tzer Logarithmic Sobolev inequality for some models of random walks. Ann. Probab. 26 (1998), no. 4, 1855–1873.
MP Marinari, E.; Pitard, E., Spatial correlations in the relaxation of the Kob-Andersen model, Europhysics Lett., 69, 235–241 (2005).
nagahata Nagahata, Yukio Lower bound estimate of the spectral gap for simple exclusion process with degenerate rates. Electron. J. Probab. 17 (2012), no. 92, 19 pp.
quastel Quastel, Jeremy Diffusion of color in the simple exclusion process. Comm. Pure Appl. Math. 45 (1992), no. 6, 623–679.
Ritort Ritort, F.; Sollich, P.,
Glassy dynamics of kinetically constrained models,
Advances in Physics 52,219–342 (2003).
spohn Spohn, Herbert, Tracer diffusion in lattice gases. J. Statist. Phys. 59 (1990), no. 5-6, 1227–1239.
S Spohn, H.: Large scale dynamics of interacting particles. Berlin: Springer, 1991.
BiTo Toninelli, Cristina; Biroli, Giulio Dynamical arrest, tracer diffusion and kinetically constrained lattice gases. J. Statist. Phys. 117 (2004), no. 1-2, 27–54.
TBF Toninelli, Cristina; Biroli, Giulio; Fisher, Daniel S. Cooperative behavior of kinetically constrained lattice gas models of glassy dynamics. J. Stat. Phys. 120 (2005), no. 1-2, 167–238.
yau Yau, Horng-Tzer Logarithmic Sobolev inequality for generalized simple exclusion processes. Probab. Theory Related Fields 109 (1997), no. 4, 507–538.
arXiv:1701.07680v2 [math.NA] (26 Jan 2017)
An H^1-conforming Virtual Element Method for Darcy equations and Brinkman equations
Giuseppe Vacca
Dipartimento di Matematica e Applicazioni, Università degli Studi di Milano Bicocca, Via Roberto Cozzi 55 - 20125 Milano, Italy; E-mail: giuseppe.vacca@unimib.it.
==============================================================================================================================================================================================
The focus of the present paper is on developing a Virtual Element Method for Darcy and Brinkman equations.
In <cit.> we presented a family of Virtual Elements for the Stokes equations, defining a new Virtual Element space of velocities such that the associated discrete kernel is pointwise divergence-free. Here we use a slightly different Virtual Element space with two fundamental properties: the L^2-projection onto ℙ_k is exactly computable on the basis of the degrees of freedom, and the associated discrete kernel is still pointwise divergence-free.
The resulting numerical scheme for the Darcy equation has optimal order of convergence and an H^1-conforming velocity solution.
The same approach yields a robust virtual element method for the Brinkman equation that is stable in both the Stokes and Darcy limit cases.
We provide a rigorous error analysis of the method and several numerical tests.
§ INTRODUCTION
The Virtual Element Methods (in short, VEM or VEMs) is a recent technique for solving PDEs. VEMs were recently introduced in <cit.> as a generalization of the finite element method on polyhedral or polygonal meshes.
In the numerical analysis and engineering literature there has been a recent growth of interest
in developing numerical methods that
can make use of general polygonal and polyhedral meshes, as opposed to more standard triangular/quadrilateral (tetrahedral/hexahedral)
grids. Indeed, making use of polygonal meshes brings forth a range of advantages, including for instance automatic hanging node treatment,
more efficient approximation of geometric data features, better domain meshing
capabilities, more efficient and easier
adaptivity, more robustness to mesh deformation, and others. This interest in the literature is also reflected in commercial codes,
such as CD-Adapco, that have recently included polytopal meshes.
We refer to the recent papers and monographs
<cit.>
as a brief representative sample of the increasing list of technologies that make use of polygonal/polyhedral meshes. We mention
here in particular the polygonal finite elements, that generalize finite elements to polygons/polyhedrons by making use of
generalized non-polynomial shape functions, and the mimetic discretisation schemes <cit.>, that combine ideas from the finite difference and
finite element methods.
The principal idea behind VEM is to use approximated discrete bilinear forms that require only integration of polynomials on the (polytopal)
element in order to be computed. The resulting discrete solution is conforming and the accuracy granted by such discrete bilinear forms
turns out to be sufficient to achieve the correct order of convergence.
Following this approach, VEM is able to make use of very general polygonal/polyhedral meshes without the need to integrate
complex non-polynomial functions on the elements and without loss of accuracy. Moreover,
VEM is not restricted to low order converge and can be easily applied to three dimensions and use non convex (even non simply connected) elements.
The Virtual Element Method has been developed successfully for a large range of problems, see for instance <cit.>.
A helpful paper for the implementation of the method is <cit.>.
The focus of this paper is on developing a new Virtual Element Method for the Darcy equation that is suitable for a robust extension to the (more complex) Brinkman problem.
For such a problem, other VEM numerical schemes have been proposed, see for example <cit.>.
In <cit.> the authors developed a new Virtual Element Method for Stokes problems by exploiting the flexibility of the Virtual Element construction in a new way. In particular, they define a new Virtual Element space of velocities carefully designed to solve the Stokes problem. In connection with a suitable pressure space, the new Virtual Element space leads to an exactly divergence-free discrete velocity, a favorable property when more complex problems, such as the Navier-Stokes problem, are considered. We highlight that this feature is not shared by the method defined in <cit.> or by most of the standard mixed Finite Element methods, where the divergence-free constraint is imposed only in a weak (relaxed) sense.
In the present contribution we develop the Virtual Element Method for Darcy equations by introducing a slightly different virtual space for the velocities such that the local L^2 orthogonal projection onto the space of polynomials of degree less or equal than k (where k is the polynomial degree of accuracy of the method) can be computed using the local degrees of freedom. The resulting Virtual Elements family inherits the advantages on the scheme proposed in <cit.>, in particular it yields an exactly divergence-free discrete kernel. Thus we obtain a stable Darcy element that is also uniformly stable for the Stokes problem. A sample of uniformly stable methods for Darcy-Stokes model is for instance <cit.>.
The last part of the paper deals with the analysis of a new mixed finite element method for Brinkman equations that stems from the above scheme for the Darcy problem. Mathematically, the Brinkman problem resembles both the Stokes problem for fluid flow and the Darcy problem for flow in porous media (see <cit.>).
Constructing finite element methods to solve the Brinkman equation that are robust for both (Stokes and Darcy) limits is challenging. We will see how the above Virtual Element approach offers a natural and straightforward
framework for constructing stable numerical algorithms for the Brinkman
equations.
We remark that the proposed scheme belongs to the class of the pressure-robust method, i.e. delivers a velocity error independent of the continuous pressure.
The paper is organized as follows.
In Section <ref> we introduce the model continuous Darcy problem.
In Section <ref> we present its VEM discretisation.
In Section <ref> we detail the theoretical features and the convergence analysis of the problem.
In Section <ref> we develop a stable numerical methods for Brinkman equations.
In Section <ref> we show the numerical tests.
Finally in the Appendix we present the theoretical analysis of the extension to the Darcy equation of the scheme of <cit.>. Even though this latter method is not recommended for the Darcy problem, the numerical experiments showed an unexpected optimal convergence rate for the pressure. We theoretically prove this behaviour, developing an inverse inequality for the VEM space, which is interesting on its own.
§ THE CONTINUOUS PROBLEM
We consider the classical Darcy equation that describes the flow of a fluid through a porous medium.
Let Ω⊆^2 be a bounded polygon then the Darcy equation in mixed form is
{ find (𝐮, p) such that
𝕂^-1𝐮 + ∇ p = 0 in Ω,
div 𝐮 = f in Ω,
𝐮·𝐧 = 0 on ∂Ω,.
where 𝐮 and p are respectively the velocity and the pressure fields, f ∈ L^2(Ω) is the source term and 𝕂 is a uniformly symmetric, positive definite tensor that represents the permeability of the medium.
From (<ref>), since we have assumed no-flux boundary conditions on the whole of ∂Ω, the external force f must have zero mean value on Ω.
We consider the spaces
𝐕:= {𝐮∈ H( div, Ω), s.t 𝐮·𝐧 = 0 on ∂Ω}, Q:= L^2_0(Ω) = { q ∈ L^2(Ω) s.t. ∫_Ω q dΩ = 0 }
equipped with the natural norms
𝐯_𝐕^2 := 𝐯_[L^2(Ω) ]^2^2 + div 𝐯_L^2(Ω)^2 , q_Q := q_L^2(Ω),
and the bilinear forms a(·, ·) 𝐕×𝐕→ and b(·, ·) 𝐕× Q → defined by:
a (𝐮, 𝐯) := ∫_Ω𝕂^-1 𝐮·𝐯 dΩ, for all 𝐮, 𝐯∈𝐕
b(𝐯, q) := ∫_Ω div 𝐯 q dΩ for all 𝐯∈𝐕, q ∈ Q.
Then the variational formulation of Problem (<ref>) is
{ find (𝐮, p) ∈𝐕× Q, such that
a(𝐮, 𝐯) + b(𝐯, p) = 0 for all 𝐯∈𝐕,
b(𝐮, q) = (f, q) for all q ∈ Q,.
where
(f, q) := ∫_Ω f q dΩ for all q ∈ Q.
Let us introduce the kernel
𝐙 := {𝐯∈𝐕 s.t. b(𝐯, q) =0 for all q ∈ Q};
then it is straightforward to see that
𝐯_𝐕 = 𝐯_[L^2(Ω) ]^2 for all 𝐯∈𝐙.
It is well known that (see for instance <cit.>):
* a(·, ·) and b(·, ·) are continuous, i.e.
|a(𝐮, 𝐯)| ≤a𝐮_𝐕𝐯_𝐕 for all 𝐮, 𝐯∈𝐕,
|b(𝐯, q)| ≤b𝐯_𝐕q_Q for all 𝐯∈𝐕 and q ∈ Q;
a(·, ·) is coercive on the kernel 𝐙, i.e. there exists a positive constant α depending on 𝕂 such that
a(𝐯, 𝐯) ≥α𝐯^2_𝐕 for all 𝐯∈𝐙;
* b(·,·) satisfies the inf-sup condition, i.e.
∃ β >0 such that sup_𝐯∈𝐕 𝐯≠0b(𝐯, q)/𝐯_𝐕≥βq_Q for all q ∈ Q.
Therefore, Problem (<ref>) has a unique solution (𝐮, p) ∈𝐕× Q such that
𝐮_𝐕 + p_Q ≤ C f_L^2(Ω)
with the constant C depending only on Ω and 𝕂.
§ VIRTUAL FORMULATION FOR DARCY EQUATIONS
§.§ Decomposition and the original virtual element spaces
We outline the Virtual Element discretization of Problem (<ref>).
Here and in the rest of the paper the symbol C will indicate a generic positive constant independent of the mesh size that may change at
each occurrence. Moreover, given any subset ω in ℝ^2 and k ∈ℕ, we will denote by _k(ω) the
polynomials of total degree at most k defined on ω, with the extended notation _-1(ω)=∅.
Let 𝒯_h_h be a sequence of decompositions of Ω into general polygonal elements K with
h_K := diameter(K) ,
h := sup_K ∈𝒯_h h_K .
We suppose that for all h, each element K in 𝒯_h fulfils the following assumptions:
* (𝐀1) K is star-shaped with respect to a ball of radius ≥ γ h_K,
* (𝐀2) the distance between any two vertexes of K is ≥ c h_K,
where γ and c are positive constants. We remark that the hypotheses above, though not too restrictive in many practical cases,
can be further relaxed, as noted in <cit.>.
From now on we assume that 𝕂 is piecewise constant with respect to 𝒯_h on Ω.
Using standard VEM notation, for k ∈, let us define the spaces
* _k(K) the set of polynomials on K of degree ≤ k,
* _k(K) := {v ∈ C^0(∂ K) s.t v_|e∈_k(e) ∀ e ⊂∂ K},
* 𝒢_k(K):= ∇(_k+1(K)) ⊆ [_k(K)]^2,
* 𝒢_k(K)^⊥ := 𝐱^⊥[_k-1(K)] ⊆ [_k(K)]^2 with 𝐱^⊥:= (x_2, -x_1).
In <cit.> the authors have introduced a new family of Virtual Elements for the Stokes problem
on polygonal meshes. In particular, by a proper choice of the Virtual space of velocities, the virtual local spaces are associated to a Stokes-like variational problem on each element. The main ideas of the method are
* the Virtual space contains the space of all the polynomials of the prescribed order plus suitable non polynomial functions,
* the degrees of freedom are carefully chosen so that the H^1 semi-norm projection onto the space of polynomials can be exactly computed,
* the choice of the Virtual space of velocities and the associated degrees of freedom guarantee that the final discrete velocity is pointwise divergence-free and more generally the discrete kernel is contained in the continuous one.
In this section we briefly recall from <cit.> the notations, the main properties of the Virtual spaces and some details of the construction of the H^1 semi-norm projection.
Let k ≥ 2 be the polynomial degree of accuracy of the method; then we define on each element K ∈𝒯_h the finite dimensional local virtual space
𝐖_h^K := {𝐯∈ [H^1(K)]^2 s.t 𝐯_|∂ K∈ [_k(∂ K)]^2 , .
.
{ - Δ𝐯 - ∇ s ∈𝒢_k-2(K)^⊥,
div 𝐯∈_k-1(K),
. for some s ∈ L^2(K) }
where all the operators and equations above are to be interpreted in the distributional sense.
It is easy to check that
[_k(K)]^2 ⊆𝐖_h^K,
and that (see <cit.> for the proof) the dimension of 𝐖_h^K is
dim( 𝐖_h^K ) = dim([_k(∂ K)]^2) + dim(𝒢_k-2(K)^⊥) + ( dim(_k-1(K)) - 1)
= 2n_K k + (k-1)(k-2)/2 + (k+1)k/2 - 1.
The corresponding degrees of freedom are chosen prescribing, given a function 𝐯∈𝐖_h^K, the following linear operators 𝐃_𝐕, split into four subsets (see Figure <ref>):
* 𝐃_𝐕1: the values of 𝐯 at the vertices of the polygon K,
* 𝐃_𝐕2: the values of 𝐯 at k-1 distinct points of every edge e ∈∂ K (for example we can take the k-1 internal points of the (k+1)-point Gauss-Lobatto quadrature rule on e, as suggested in <cit.>; a small computational sketch of these points is given right after this list),
* 𝐃_𝐕3: the moments of 𝐯
∫_K 𝐯·𝐠_k-2^⊥ dK for all 𝐠_k-2^⊥∈𝒢_k-2(K)^⊥,
* 𝐃_𝐕4: the moments up to order k-1 and greater than zero of div 𝐯 in K, i.e.
∫_K ( div 𝐯) q_k-1 dK for all q_k-1∈_k-1(K)/ℝ.
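As announced in 𝐃_𝐕2, the edge points can be taken as the interior Gauss-Lobatto nodes; these are the roots of P_k', the derivative of the Legendre polynomial of degree k. The following Python snippet (a convenience sketch of ours, not part of the method) computes them with NumPy.

import numpy as np
from numpy.polynomial import legendre

def gauss_lobatto_interior(k):
    # Interior nodes of the (k+1)-point Gauss-Lobatto rule on [-1, 1]:
    # the roots of P_k'; together with -1 and +1 they give the k+1 nodes.
    Pk = legendre.Legendre.basis(k)
    return Pk.deriv().roots()

print(gauss_lobatto_interior(3))   # approximately [-0.4472, 0.4472]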
For all K ∈𝒯_h, we introduce the H^1 semi-norm projection Π_k^∇,K𝐖_h^K → [_k(K)]^2, defined by
{ ∫_K ∇ 𝐪_k : ∇ (𝐯_h - Π_k^∇,K𝐯_h) d K = 0 for all 𝐪_k ∈ [_k(K)]^2,
Π_0^0,K(𝐯_h - Π_k^∇,K𝐯_h) = 0 ,
.
where Π_0^0,K is the L^2-projection operator onto the constant functions defined on K. It is immediate to check that the energy projection is well defined and it clearly holds Π_k^∇,K𝐪_k = 𝐪_k for all 𝐪_k ∈_k(K).
Moreover the operator Π_k^∇,K is computable in terms of the degrees of freedom 𝐃_𝐕 (see equations (27)-(29) in <cit.> and the subsequent discussion).
§.§ The modified virtual space and the projection Π^0,K_k
Let n a positive integer, then for all K ∈𝒯_h, the L^2-projection Π^0,K_n𝐖^K_h → [_n(K)]^2 is defined by
∫_K 𝐪_n· (𝐯_h - Π^0,K_n𝐯_h) d K = 0 for all 𝐪_n ∈ [_n(K)]^2.
It is possible to check (see Section 3.3 of <cit.> for the proof) that the degrees of freedom 𝐃_𝐕 allow us to compute exactly the L^2-projection Π^0,K_k-2. On the other hand, we cannot compute exactly from the DoFs the L^2-projection onto the space of polynomials of degree ≤ k.
The goal of the present section is to introduce, taking the inspiration from <cit.>, a new virtual space 𝐕_h^K to be used in place of 𝐖_h^K in such a way that
* the DoFs 𝐃_𝐕 can still be used for 𝐕_h^K,
* [_k(K)]^2 ⊆𝐕_h^K,
* the projection Π_k^0,K𝐕_h^K → [_k(K)]^2 can be exactly computable by the DoFs 𝐃_𝐕.
To construct 𝐕_h^K we proceed as follows: first of all we define an augmented virtual local space 𝐔_h^K by taking
𝐔_h^K := {𝐯∈ [H^1(K)]^2 s.t 𝐯_|∂ K∈ [_k(∂ K)]^2 , .
.
{ - Δ𝐯 - ∇ s ∈𝒢_k(K)^⊥,
div 𝐯∈_k-1(K),
. for some s ∈ L^2(K) }
Now we define the enhanced Virtual Element space 𝐕_h^K as the restriction of 𝐔_h^K given by
𝐕_h^K := {𝐯∈𝐔_h^K s.t. (𝐯 - Π^∇,K_k 𝐯, 𝐠_k^⊥)_[L^2(K)]^2 = 0 for all 𝐠_k^⊥∈𝒢_k(K)^⊥/𝒢_k-2(K)^⊥} ,
where the symbol 𝒢_k(K)^⊥/𝒢_k-2(K)^⊥ denotes the polynomials in 𝒢_k(K)^⊥ that are L^2-orthogonal to all polynomials of 𝒢_k-2(K)^⊥.
We proceed by investigating the dimension and by choosing suitable DoFs of the virtual space 𝐕_h. First of all we recall from <cit.> the following facts
dim([_k(∂ K)]^2) = 2n_K k, dim(_k-1(K)) = k(k+1)/2, dim(𝒢_k(K)^⊥) = k(k+1)/2
where n_K is the number of edges of the polygon K.
The dimension of 𝐔_h^K is
The dimension of 𝐔_h^K is
dim( 𝐔_h^K ) = 2n_K k + k(k+1)/2 + (k+1)k/2 - 1.
Moreover as DoFs for 𝐔_h^K we can take the linear operators 𝐃_𝐕 and plus the moments
𝐃_𝐔∫_K 𝐯·𝐠_k^⊥ dK for all 𝐠_k^⊥∈𝒢_k(K)^⊥/𝒢_k-2(K)^⊥.
The proof is virtually identical to that given in <cit.> for 𝐖_h^K and it is based (see for instance <cit.>) on the fact that given
* a polynomial function 𝐠_b ∈ [_k(∂ K)]^2,
* a polynomial function 𝐡∈𝒢_k(K)^⊥,
* a polynomial function g ∈_k-1(K) satisfying the compatibility condition
∫_K g dΩ = ∫_∂ K𝐠_b ·𝐧 ds,
there exists a unique pair (𝐯, s)∈𝐔_h^K× L^2(K)/ℝ such that
𝐯_|∂ K = 𝐠_b, div 𝐯 = g,
- Δ𝐯 - ∇ s = 𝐡.
Moreover, since from <cit.>, rot𝒢_k(K)^⊥→_k-1(K)
is an isomorphism,
we can conclude that the map that associates a given compatible data set
(𝐠_b, 𝐡, g) to the velocity field 𝐯 that solves (<ref>) is an injective map. Then
dim( 𝐔_h^K ) = dim([_k(∂ K)]^2) + dim(𝒢_k(K)^⊥) + ( dim(_k-1(K)) - 1)
and the thesis follows from (<ref>).
The dimension of 𝐕_h^K is equal to that of 𝐖_h^K, that is, as in (<ref>),
dim( 𝐕_h^K ) = 2n_K k + (k-1)(k-2)/2 + (k+1)k/2 - 1.
As DoFs in 𝐕_h^K we can take 𝐃_𝐕.
From (<ref>) it is straightforward to check that
dim(𝒢_k(K)^⊥/ 𝒢_k-2(K)^⊥) = dim(𝒢_k(K)^⊥) - dim(𝒢_k-2(K)^⊥) = 2k -1.
Hence, neglecting the independence of the additional 2k - 1 conditions in (<ref>), it holds that
dim( 𝐕_h^K ) ≥dim( 𝐔_h^K ) - (2k -1) = 2n_K k + (k-1)(k-2)/2 + (k+1)k/2 - 1 = dim( 𝐖_h^K ).
We now observe that a function 𝐯∈𝐕_h^K such that 𝐃_𝐕(𝐯) = 0 is identically zero. Indeed, from (<ref>), it is immediate to check that in this case Π_k^∇, K 𝐯 would be zero, implying that all its moments are zero; in particular, since 𝐯∈𝐕_h^K, all the moments 𝐃_𝐔 of 𝐯 are also zero. Now, from Lemma <ref>, we have that 𝐯 is zero.
Therefore, from (<ref>), we obtain that the dimension of 𝐕_h^K is actually the same of 𝐖_h^K, and that the DoFs 𝐃_𝐕 are unisolvent for 𝐕_h^K.
The degrees of freedom 𝐃_𝐕 allow us to compute exactly the L^2-projection Π^0,K_k𝐕_h → [_k(K)]^2, i.e. the moments
∫_K 𝐯·𝐪_k d K
for all 𝐯∈𝐕_h and for all 𝐪_k∈ [_k(K)]^2.
Let us set
𝐪_k = ∇ q_k+1 + 𝐠_k-2^⊥ + 𝐠_k^⊥.
with q_k+1∈_k+1(K)/ℝ, 𝐠_k-2^⊥∈𝒢_k-2^⊥(K) and 𝐠_k^⊥∈𝒢_k^⊥(K)/𝒢_k-2^⊥(K). Therefore, using the Green formula and since 𝐯∈𝐕_h, we get
∫_K 𝐯·𝐪_k d K = ∫_K 𝐯· (∇ q_k+1 + 𝐠_k-2^⊥ + 𝐠_k^⊥) d K
= - ∫_K div 𝐯 q_k+1 d K +
∫_K 𝐯·𝐠_k-2^⊥ d K +
∫_K Π_k^∇, K 𝐯·𝐠_k^⊥ d K
+ ∫_∂ K q_k+1 𝐯·𝐧 d s.
Now, since div 𝐯 is a polynomial of degree less or equal than k-1 we can reconstruct its value from 𝐃_𝐕4 and compute exactly the first term. The second term is computable from 𝐃_𝐕3. The third term is computable from all the 𝐃_𝐕 using the projection Π_k^∇, K 𝐯. Finally from 𝐃_𝐕1 and 𝐃_𝐕2 we can reconstruct 𝐯 on the boundary and so compute exactly the boundary term.
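As a sanity check (ours, not part of the original proof), the splitting 𝐪_k = ∇ q_k+1 + 𝐠_k-2^⊥ + 𝐠_k^⊥ is consistent at the level of dimensions, since 2 dim _k(K) = (dim _k+1(K) - 1) + dim 𝒢_k(K)^⊥; the short snippet below verifies this identity for several values of k.

def dim_Pk(k):
    # dimension of the scalar polynomials of degree <= k in two variables
    return (k + 1) * (k + 2) // 2

for k in range(2, 8):
    lhs = 2 * dim_Pk(k)                            # dim [P_k]^2
    rhs = (dim_Pk(k + 1) - 1) + k * (k + 1) // 2   # grad P_{k+1} plus G_k^perp
    assert lhs == rhs
print("dimension counts match for k = 2,...,7")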
For what concerns the pressures we take the standard finite dimensional space
Q_h^K := _k-1(K)
having dimension
dim(Q_h^K) = dim(_k-1(K)) = (k+1)k/2.
The corresponding degrees of freedom are chosen defining for each q∈ Q_h^K the following linear operators 𝐃_𝐐:
* 𝐃_𝐐: the moments up to order k-1 of q, i.e.
∫_K q p_k-1 dK for all p_k-1∈_k-1(K).
Finally we define the global virtual element spaces as
𝐕_h := {𝐯∈ [H^1(Ω)]^2 s.t 𝐯·𝐧 = 0 on ∂Ω and 𝐯_|K∈𝐕_h^K for all K ∈𝒯_h}
and
Q_h := { q ∈ L_0^2(Ω) s.t. q_|K∈ Q_h^K for all K ∈𝒯_h},
with the obvious associated sets of global degrees of freedom. A simple computation shows that:
dim(𝐕_h) = n_P ( (k+1)k/2 -1 + (k-1)(k-2)/2) + 2(n_V + (k-1) n_E) + (n_V, B + (k-1) n_E, B)
and
dim(Q_h) = n_P (k+1)k/2 - 1 ,
where n_P is the number of elements, n_E, n_V (resp., n_E, B, n_V, B) is the number of internal edges and vertexes (resp., boundary edges and vertexes) in 𝒯_h.
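For concreteness, the counts above translate into the following small routine (the function names are ours); it returns the global dimensions of 𝐕_h and Q_h from the mesh counts and the degree k.

def dim_Vh(k, n_P, n_V, n_E, n_VB, n_EB):
    # per-element internal moments, two components per internal vertex/edge
    # point, and one (tangential) component per boundary point (v.n = 0)
    internal = (k + 1) * k // 2 - 1 + (k - 1) * (k - 2) // 2
    return n_P * internal + 2 * (n_V + (k - 1) * n_E) + (n_VB + (k - 1) * n_EB)

def dim_Qh(k, n_P):
    # k(k+1)/2 moments per element, minus the global zero-mean constraint
    return n_P * (k + 1) * k // 2 - 1

# e.g. a 4x4 square mesh: 16 elements, 9 internal vertices, 24 internal
# edges, 16 boundary vertices and 16 boundary edges, with k = 2
print(dim_Vh(2, 16, 9, 24, 16, 16), dim_Qh(2, 16))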
As observed in <cit.>, we remark that
div 𝐕_h⊆ Q_h .
By definition (<ref>) it is clear that our discrete velocity field is H^1-conforming; in particular we obtain continuous velocities, whereas the natural discretization is only H( div)-conforming. This property, in combination with (<ref>), will make our method suitable for a (robust) extension to the Brinkman problem.
§.§ The discrete bilinear forms
The next step in the construction of our method is to define on the virtual spaces 𝐕_h and Q_h a discrete version of the bilinear forms a(·, ·) and b(·, ·) given in (<ref>) and (<ref>).
We recall that the tensor 𝕂 is piecewise constant with respect to the decomposition 𝒯_h, i.e. 𝕂 is constant on each polygon K ∈𝒯_h.
First of all we decompose into local contributions the bilinear forms a(·,·) and b(·, ·), the norms ·_𝐕 and ·_Q by defining
a (𝐮, 𝐯) =: ∑_K ∈𝒯_h a^K (𝐮, 𝐯) for all 𝐮, 𝐯∈𝐕
b (𝐯, q) =: ∑_K ∈𝒯_h b^K (𝐯, q) for all 𝐯∈𝐕 and q ∈ Q,
and
𝐯_𝐕 =: (∑_K ∈𝒯_h𝐯^2_𝐕, K)^1/2 for all 𝐯∈𝐕, q_Q =: (∑_K ∈𝒯_hq^2_Q, K)^1/2 for all q ∈ Q.
We now define discrete versions of the bilinear form a(·, ·) (cf. (<ref>)), and of the bilinear form b(·, ·) (cf. (<ref>)). For what concerns b(·, ·), we simply set
b(𝐯, q) = ∑_K ∈𝒯_h b^K(𝐯, q) = ∑_K ∈𝒯_h∫_K div 𝐯 q dK for all 𝐯∈𝐕_h, q ∈ Q_h,
i.e. as noticed in <cit.> we do not introduce any approximation of the bilinear form. We notice that (<ref>)
is computable from the degrees of freedom 𝐃_𝐕1, 𝐃_𝐕2 and 𝐃_𝐕4, since q is polynomial in each element K ∈𝒯_h.
On the other hand, the bilinear form a(·, ·) needs to be dealt with in a more careful way.
First of all, by Proposition <ref>, we observe that for all 𝐪_k ∈ [_k(K)]^2 and for all 𝐯∈𝐕_h^K, the quantity
a^K (𝐪_k, 𝐯) = ∫_K𝕂^-1 𝐪_k ·𝐯 dK.
is exactly computable by the DoFs.
However, for an arbitrary pair (𝐮,𝐯 )∈𝐕_h^K ×𝐕_h^K, the quantity a^K(𝐮, 𝐯) is clearly not computable.
In the standard procedure of VEM framework, we define a computable discrete local bilinear form
a_h^K(·, ·) 𝐕_h^K ×𝐕_h^K →
approximating the continuous form a^K(·, ·) and satisfying the following properties:
* 𝐤-consistency: for all 𝐪_k ∈ [_k(K)]^2 and 𝐯_h ∈𝐕_h^K
a_h^K(𝐪_k, 𝐯_h) = a^K( 𝐪_k, 𝐯_h);
* stability: there exist two positive constants α_* and α^*, independent of h and K, such that, for all 𝐯_h ∈𝐕_h^K, it holds
α_* a^K(𝐯_h, 𝐯_h) ≤ a_h^K(𝐯_h, 𝐯_h) ≤α^* a^K(𝐯_h, 𝐯_h).
Let ℛ^K 𝐕_h^K ×𝐕_h^K → be a (symmetric) stabilizing bilinear form, satisfying
c_* a^K(𝐯_h, 𝐯_h) ≤ℛ^K(𝐯_h, 𝐯_h) ≤ c^* a^K(𝐯_h, 𝐯_h) for all 𝐯_h ∈𝐕_h such that Π_k^0,K𝐯_h= 0
with c_* and c^* positive constants independent of h and K.
Then, we can set
a_h^K(𝐮_h, 𝐯_h) := a^K (Π_k^0,K𝐮_h, Π_k^0,K𝐯_h ) + ℛ^K ((I -Π_k^0,K) 𝐮_h, (I -Π_k^0,K) 𝐯_h )
for all 𝐮_h, 𝐯_h ∈𝐕_h^K.
It is straightforward to check that Definition (<ref>) and properties (<ref>) imply the consistency and the stability of the bilinear form a_h^K(·, ·).
In the construction of the stabilizing form ℛ^K with condition (<ref>) we essentially require that the stabilizing term ℛ^K(𝐯_h, 𝐯_h) scales as a^K(𝐯_h, 𝐯_h).
Following the standard VEM technique (cf. <cit.> for more details), denoting with 𝐮̅_h, 𝐯̅_h ∈^N_K
the vectors containing the values of the N_K local degrees of freedom associated to 𝐮_h, 𝐯_h ∈𝐕_h^K, we set
ℛ^K (𝐮_h, 𝐯_h) = α^K 𝐮̅_h^T 𝐯̅_h ,
where α^K is a suitable positive constant that scales as |K|. For example, in the numerical tests presented in Section <ref>, we have chosen α^K as the mean value of the eigenvalues of the matrix stemming from the
term a^K (Π_k^0,K𝐮_h, Π_k^0,K𝐯_h ) in (<ref>).
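A minimal sketch of the resulting local assembly is reported below, assuming that the N_K × N_K consistency matrix M_cons (stemming from a^K(Π_k^0,K·, Π_k^0,K·)) and the matrix P collecting the DoFs of Π_k^0,K𝐯_h have been precomputed in the local DoF basis; α^K is taken as the mean eigenvalue of M_cons, as in our tests. Names and inputs are ours.

import numpy as np

def local_darcy_matrix(M_cons, P):
    # a_h^K = consistency part + dofi-dofi stabilization R^K applied to
    # the slice (I - P) of the DoF vectors; alpha^K scales like |K|
    alpha_K = np.mean(np.linalg.eigvalsh(M_cons))
    I = np.eye(M_cons.shape[0])
    return M_cons + alpha_K * (I - P).T @ (I - P)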
Finally we define the global approximated bilinear form a_h(·, ·) 𝐕_h ×𝐕_h → by simply summing the local contributions:
a_h(𝐮_h, 𝐯_h) := ∑_K ∈𝒯_h a_h^K(𝐮_h, 𝐯_h) for all 𝐮_h, 𝐯_h ∈𝐕_h.
§.§ The discrete problem
We are now ready to state the proposed discrete problem. Referring to (<ref>), (<ref>), (<ref>), and (<ref>) we consider the virtual element problem:
{ find (𝐮_h, p_h) ∈𝐕_h × Q_h, such that
a_h(𝐮_h, 𝐯_h) + b(𝐯_h, p_h) = 0 for all 𝐯_h ∈𝐕_h,
b(𝐮_h, q_h) = (f, q_h) for all q_h ∈ Q_h..
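At the algebraic level, Problem (<ref>) is a standard saddle point system. The sketch below solves it with SciPy, assuming the global matrices A (from a_h) and B (from b) and the load vector f_q have been assembled; the zero-mean pressure constraint is enforced here through an extra Lagrange multiplier row, which is one common implementation choice rather than a prescription of the method.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_darcy(A, B, f_q, mean_row):
    # Solve [A B^T; B 0](u, p) = (0, f_q); mean_row is a vector such that
    # mean_row . p equals the integral of p_h over Omega (assumed given)
    n_u, n_p = A.shape[0], B.shape[0]
    Z = sp.csr_matrix((n_p, n_p))
    c = sp.csr_matrix(np.asarray(mean_row).reshape(1, -1))
    K = sp.bmat([[A, B.T, None],
                 [B, Z, c.T],
                 [None, c, None]], format="csc")
    rhs = np.concatenate([np.zeros(n_u), f_q, [0.0]])
    sol = spla.spsolve(K, rhs)
    return sol[:n_u], sol[n_u:n_u + n_p]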
We point out that the symmetry of a_h(·, ·) together with (<ref>) easily implies that a_h(·, ·) is (uniformly) continuous with respect to the L^2 norm.
Moreover, as observed in <cit.>, introducing the discrete kernel:
𝐙_h := {𝐯_h ∈𝐕_h s.t. b(𝐯_h, q_h) = 0 for all q_h ∈ Q_h},
it is immediate to check that
𝐙_h ⊆𝐙 .
Then the bilinear form a_h(·, ·) is also uniformly coercive on the discrete kernel 𝐙_h with respect to the 𝐕 norm.
Moreover as a direct consequence of Proposition 4.3 in <cit.>, we have the following stability result.
Given the discrete spaces
𝐕_h and Q_h defined in (<ref>) and (<ref>), there exists a positive β̃, independent of h, such that:
sup_𝐯_h ∈𝐕_h 𝐯_h ≠0b(𝐯_h, q_h)/𝐯_h_𝐕≥β̃q_h_Q for all q_h ∈ Q_h.
In particular, the inf-sup condition of Proposition <ref>, along with property (<ref>), implies that:
div 𝐕_h = Q_h .
Finally we can state the well-posedness of virtual problem (<ref>).
Problem (<ref>) has a unique solution (𝐮_h, p_h) ∈𝐕_h × Q_h, verifying the estimate
𝐮_h_𝐕 + p_h_Q≤ C f_0.
§ THEORETICAL RESULTS
We begin by proving an approximation result for the virtual local space 𝐕_h. First of all, let us recall a classical result by Brenner-Scott (see <cit.>).
Let K ∈𝒯_h, then for all 𝐮∈ [H^s+1(K)]^2 with 0 ≤ s ≤ k, there exists a polynomial function 𝐮_π∈ [_k(K)]^2, such that
𝐮 - 𝐮_π_0,K + h_K |𝐮 -𝐮_π |_1,K≤ C h_K^s+1| 𝐮|_s+1,K.
We have the following approximation results (for the proof see <cit.>).
Let 𝐮∈𝐕∩ [H^s+1(Ω)]^2 with 0 ≤ s ≤ k. Under the assumption (𝐀1) and (𝐀2)
on the decomposition 𝒯_h, there exists 𝐮_int∈𝐖_h such that
𝐮 - 𝐮_int_0,K + h_K |𝐮 -𝐮_int |_1,K≤ C h_K^s+1| 𝐮|_s+1,K.
where C is a constant independent of h.
For what concerns the pressures, from classic polynomial approximation theory <cit.>, for q ∈ H^k(Ω) it holds
inf_𝐪_h∈𝐐_h q - q_h _Q≤ C h^k |q|_k.
We are ready to state the following convergence theorem.
Let (𝐮, p) ∈𝐕× Q be the solution of problem (<ref>) and (𝐮_h, p_h) ∈𝐕_h × Q_h be the solution of problem (<ref>). Then it holds
𝐮 - 𝐮_h_0≤ C h^k+1 |𝐮|_k+1 , and 𝐮 - 𝐮_h_𝐕≤ C h^k |𝐮|_k+1 ,
p - p_h_Q≤ C h^k ( |𝐮|_k+1 + |p|_k).
We begin by remarking that as a consequence of the inf-sup condition with classical arguments (see for
instance Proposition 2.5 in <cit.>), there exists 𝐮_I ∈𝐕_h such that
Π_k-1^0,K ( div 𝐮_I) = div 𝐮_I = Π_k-1^0,K( div 𝐮) for all K ∈𝒯_h,
𝐮 - 𝐮_I_0≤ C inf_𝐯_h ∈𝐕_h𝐮 - 𝐯_h_0 and 𝐮 - 𝐮_I_𝐕≤ C inf_𝐯_h ∈𝐕_h𝐮 - 𝐯_h_𝐕.
Let us set δ_h = 𝐮_I - 𝐮_h. From (<ref>) and (<ref>), we have that div δ_h = 0 and thus δ_h ∈𝐙_h. Now, using (<ref>), (<ref>), (<ref>) and introducing the piecewise polynomial approximation (<ref>) together with (<ref>), we have
α_* α δ_h^2_0 ≤α_* a(δ_h, δ_h) ≤ a_h(δ_h, δ_h) = a_h(𝐮_I, δ_h) - a_h(𝐮_h, δ_h)
= a_h(𝐮_I, δ_h) + b(δ_h, p_h) = a_h(𝐮_I, δ_h)
= ∑_K ∈𝒯_h a_h^K(𝐮_I, δ_h) =
∑_K ∈𝒯_h ( a_h^K(𝐮_I - 𝐮_π, δ_h) + a^K(𝐮_π , δ_h) )
= ∑_K ∈𝒯_h ( a_h^K(𝐮_I - 𝐮_π, δ_h) + a^K(𝐮_π - 𝐮, δ_h) ) - a( 𝐮, δ_h)
= ∑_K ∈𝒯_h ( a_h^K(𝐮_I - 𝐮_π, δ_h) + a^K(𝐮_π - 𝐮, δ_h) ) + b( δ_h , p)
= ∑_K ∈𝒯_h ( a_h^K(𝐮_I - 𝐮_π, δ_h) + a^K(𝐮_π - 𝐮, δ_h) )
≤ C ∑_K ∈𝒯_h ( 𝐮_I - 𝐮_π_0,K + 𝐮 - 𝐮_π_0,K) δ_h_0, K
≤ C ( 𝐮_I - 𝐮_π_0 + 𝐮 - 𝐮_π_0) δ_h_0
then
δ_h_0≤ C ( 𝐮_I - 𝐮_π_0 + 𝐮 - 𝐮_π_0 ).
The L^2-estimate follows easily by the triangle inequality.
It is also straightforward to see from (<ref>) and (<ref>) that
b(𝐮 - 𝐮_h, q_h) = 0 for all q_h ∈ Q_h,
then we get div 𝐮_h = Π_k-1^0,K ( div 𝐮) for all K ∈𝒯_h, and therefore
div (𝐮 - 𝐮_h)_0 = ( ∑_K ∈𝒯_h div 𝐮 - Π_k-1^0,K ( div 𝐮)_0,K^2 )^1/2≤ C h^k | div 𝐮|_k≤ C h^k |𝐮|_k+1,
from which the estimate in the 𝐕 norm.
We proceed by analysing the error on the pressure field.
Let q_h ∈ Q_h, then from the discrete inf-sup condition (<ref>), we infer:
β̃p_h - q_h_Q ≤sup_𝐯_h ∈𝐕_h 𝐯_h ≠0b(𝐯_h, p_h - q_h)/𝐯_h_V = sup_𝐯_h ∈𝐕_h 𝐯_h ≠0b(𝐯_h, p_h - p) + b(𝐯_h, p - q_h)/𝐯_h_V.
Since (𝐮,p) and (𝐮_h,p_h) are respectively the solution of (<ref>) and (<ref>), it follows that
a(𝐮, 𝐯_h) + b(𝐯_h, p) = 0 for all 𝐯_h ∈𝐕_h,
a_h(𝐮_h, 𝐯_h) + b(𝐯_h, p_h) = 0 for all 𝐯_h ∈𝐕_h.
Therefore, we get
b(𝐯_h, p_h - p) = a(𝐮, 𝐯_h) - a_h(𝐮_h, 𝐯_h) for all 𝐯_h ∈𝐕_h.
Using (<ref>), the continuity of a_h(·, ·) and the triangle inequality, we get:
b(𝐯_h, p_h - p) = a(𝐮, 𝐯_h) - a_h(𝐮_h, 𝐯_h)
= ∑_K ∈𝒯_h( a^K(𝐮, 𝐯_h) - a_h^K(𝐮_h, 𝐯_h))
= ∑_K ∈𝒯_h( a^K(𝐮 - 𝐮_π, 𝐯_h) +
a_h^K(𝐮_π -𝐮_h, 𝐯_h) )
≤∑_K ∈𝒯_h C ( 𝐮 - 𝐮_π_𝐕,K +
(𝐮_π - 𝐮_h)_𝐕,K) 𝐯_h_𝐕,K
≤∑_K ∈𝒯_h C ( 𝐮 - 𝐮_π_𝐕,K + 𝐮 -𝐮_h_𝐕,K)
𝐯_h_𝐕,K
where 𝐮_π is the piecewise polynomial of degree k defined in Lemma <ref>. Then, from estimate (<ref>) and the previous estimate on the velocity error, we obtain
|b(𝐯_h, p_h - p)| ≤ C h^k |𝐮|_k+1 𝐯_h_𝐕.
Moreover, we have
|b(𝐯_h, p - q_h) | ≤ C p - q_h_Q𝐯_h_𝐕.
Then, using (<ref>) and (<ref>) in (<ref>), we infer
p_h - q_h_Q ≤ C h^k |𝐮|_k+1 + C p - q_h_Q.
Finally, using (<ref>) and the triangular inequality, we get
p -p_h_Q ≤p -q_h_Q + p_h - q_h_Q ≤ C h^k |𝐮|_k+1 + C p - q_h_Q for all q_h ∈ Q_h.
Passing to the infimum with respect to q_h ∈ Q_h, and using estimate (<ref>), we get the thesis.
We observe that the estimates on the velocity errors in Theorem <ref> do not depend on the continuous pressure, whereas the velocity errors of the classical methods have a pressure contribution. Therefore the proposed scheme belongs to the class of the pressure-robust methods.
§ A STABLE VEM FOR BRINKMAN EQUATIONS
§.§ The continuous problem
The Brinkman equation describes fluid flow in complex porous media with a viscosity coefficient highly varying so that the flow is dominated by the Darcy equations in some regions of the domain and by the Stokes equation in others. We consider the Brinkman equation on a polygon Ω⊆^2 with homogeneous Dirichlet boundary
conditions:
{ -μ Δ𝐮 + ∇ p + 𝕂^-1𝐮 = 𝐟 in Ω,
div 𝐮 = 0 in Ω,
𝐮 = 0 on ∂Ω,.
where 𝐮 and p are the unknown velocity and pressure fields, μ is the fluid viscosity, 𝕂 denotes the permeability tensor of the porous media and 𝐟∈ [L^2(Ω)]^2 is the external source term.
We assume that 𝕂 is a symmetric positive definite tensor and that there exist two positive (uniform) constants λ_1, λ_2 > 0 such that
λ_1 η^T η≤η^T 𝕂^-1η≤λ_2 η^T η for all η∈ℝ^2.
For what concerns the fluid viscosity we consider 0 < μ≤ C; this includes the case where μ approaches zero and equation (<ref>) becomes a singular perturbation of the classic Darcy equations.
Let us consider the spaces
𝐕:= [H_0^1(Ω)]^2, Q:= L^2_0(Ω)
with the usual norms, and let A(·, ·) 𝐕×𝐕→ be the bilinear form defined by:
A (𝐮, 𝐯) := a^∇ (𝐮, 𝐯) + a(𝐮, 𝐯) , for all 𝐮, 𝐯∈𝐕
where
a^∇ (𝐮, 𝐯) := ∫_Ωμ ∇𝐮 : ∇𝐯 dx for all 𝐮, 𝐯∈𝐕
and a(·, ·) is the bilinear form defined in (<ref>).
Then the variational formulation of Problem (<ref>) is:
{ find (𝐮, p) ∈𝐕× Q, such that
A(𝐮, 𝐯) + b(𝐯, p) = (𝐟, 𝐯) for all 𝐯∈𝐕,
b(𝐮, q) = 0 for all q ∈ Q,.
where b(·, ·) 𝐕× Q → is the bilinear form in (<ref>) and, using standard notation,
(𝐟, 𝐯) = ∫_Ω𝐟·𝐯 dx.
The natural energy norm for the velocities is induced by the symmetric and positive definite bilinear form A(·, ·) and is defined by (e.g. <cit.>)
𝐯^2_𝐕, μ := A(𝐯, 𝐯) = μ ∇𝐯_0^2 + 𝕂^-1/2𝐯_0^2.
We can observe that the equivalence with the 𝐕 norm is not uniform, i.e.
c_1 √(μ) 𝐯_𝐕≤𝐯_𝐕, μ≤ c_2 𝐯_𝐕
where c_1, c_2 here and in what follows denote two positive constants independent of h and μ.
For what concerns the pressures, we consider the norm (see for instance <cit.>)
p_Q, μ := sup_𝐯∈𝐕b(𝐯, p)/𝐯_𝐕, μ
Using the inf-sup condition in the usual norm it is possible to check the equivalence between the norms for the pressure but again the equivalence is not uniform, i.e.
c_1 p_Q ≤p_Q, μ≤c_2/√(μ) p_Q.
Since, with respect to the modified norms, the bilinear form A(·, ·) is uniformly continuous and coercive, and the inf-sup condition is clearly fulfilled, Problem (<ref>) has a unique solution (𝐮, p) ∈𝐕× Q such that
𝐮_𝐕, μ + p_Q, μ≤ C 𝐟_𝐕'
where the constant C depends only on Ω.
§.§ Virtual formulation for Brinkman equations
Mathematically, Brinkman equations can be viewed as a combination of the Stokes and the
Darcy equation, that can change from place to place in the computational domain.
Therefore, numerical schemes for Brinkman equations have to be
carefully designed to accommodate both Stokes and Darcy simultaneously.
In this section we propose a Virtual Element scheme that is
accurate for both Darcy and Stokes flows.
For this goal we combine the ideas developed in the previous sections with the argument in <cit.>.
Let us consider the virtual spaces 𝐕_h and Q_h (cf. (<ref>) and (<ref>)). As usual in the VEM framework we need to define a computable approximation of the continuous bilinear forms. Using obvious notations we split the bilinear form A(·, ·) as
A(𝐮, 𝐯) =: ∑_K ∈𝒯_h A^K (𝐮, 𝐯) = ∑_K ∈𝒯_h( a^∇, K (𝐮, 𝐯) + a^K (𝐮, 𝐯)) for all 𝐮, 𝐯∈𝐕.
We begin by observing that, from <cit.> (in particular c.f. (27)-(29)) and from Section <ref>, A^K(𝐪_k, 𝐯) is computable on the basis of the DoFs 𝐃_𝐕 for all 𝐪_k ∈ [_k(K)]^2 and for all 𝐯∈𝐕_h.
Starting from this observation we can approximate the continuous form A^K(·, ·) with the bilinear form
A_h^K(·, ·) 𝐕_h^K ×𝐕_h^K →,
given by
A_h^K(𝐮, 𝐯) = a_h^∇, K(𝐮, 𝐯) + a_h^K(𝐮, 𝐯) for all 𝐮, 𝐯∈𝐕_h
where a_h^∇, K(·, ·) is the bilinear form defined in equation (35) in <cit.> and a_h^K(·, ·) is defined in (<ref>).
It is clear that the bilinear form A_h^K(·, ·) satisfies the k-consistency and the stability properties.
As usual we build the global approximated bilinear form A_h(·, ·) 𝐕_h ×𝐕_h → by simply summing the local contributions.
For what concerns the bilinear form b(·, ·), as observed in Section <ref>, it can be computed exactly.
The last step consists in constructing a computable approximation of the right-hand side (𝐟, 𝐯) in (<ref>). We define the approximated load term 𝐟_h as
𝐟_h := Π_k^0,K𝐟 for all K ∈𝒯_h,
and consider:
(𝐟_h, 𝐯_h) = ∑_K ∈𝒯_h∫_K 𝐟_h ·𝐯_h dK = ∑_K ∈𝒯_h∫_K Π_k^0,K𝐟·𝐯_h dK = ∑_K ∈𝒯_h∫_K 𝐟·Π_k^0,K𝐯_h dK.
We observe that (<ref>) can be exactly computed from 𝐃_𝐕 for all 𝐯_h ∈𝐕_h (see Proposition <ref>).
Furthermore, the following result concerning a L^2 and H^1-type norm, can be proved using standard arguments <cit.>.
Let 𝐟_h be defined as in (<ref>), and let us assume 𝐟∈ H^k+1(Ω). Then, for all 𝐯_h ∈𝐕_h, it holds
|( 𝐟_h - 𝐟, 𝐯_h ) | ≤ C h^k+1 |𝐟|_k+1𝐯_h_0 and |( 𝐟_h - 𝐟, 𝐯_h ) | ≤ C h^k+2 |𝐟|_k+1 |𝐯_h|_𝐕.
In the light of the previous definitions, we consider the virtual element approximation of the Brinkman problem:
{ find (𝐮_h, p_h) ∈𝐕_h × Q_h, such that
A_h(𝐮_h, 𝐯_h) + b(𝐯_h, p_h) = (𝐟_h, 𝐯_h) for all 𝐯_h ∈𝐕_h,
b(𝐮_h, q_h) = 0 for all q_h ∈ Q_h..
Equation (<ref>) is well posed since the discrete bilinear form A_h(·,·)
is (uniformly) stable with respect to the norm ·_𝐕, μ by construction and the inf-sup condition is fulfilled (the proof follows the guidelines of Proposition 4.2 in <cit.> and the linearity of the Fortin operator). Then we have the following result.
Problem (<ref>) has a unique solution (𝐮_h, p_h) ∈𝐕_h × Q_h, verifying the estimate
𝐮_h_𝐕, μ + p_h_Q, μ≤ C 𝐟_𝐕'.
We now notice that, if 𝐮∈𝐕 is the velocity solution to Problem (<ref>), then it is the solution
to Problem:
{ find 𝐮∈𝐙 such that
A(𝐮, 𝐯) = (𝐟, 𝐯) for all 𝐯∈𝐙 .
Analogously, if 𝐮_h ∈𝐕_h is the velocity solution to Problem (<ref>), then it is the solution to
Problem:
{ find 𝐮_h ∈𝐙_h such that
A_h(𝐮_h, 𝐯_h) = (𝐟_h, 𝐯_h) for all 𝐯_h ∈𝐙_h .
For what concerns the convergence results we state the following theorem. The proof can be derived by extending the techniques of the previous section and is therefore omitted.
Let 𝐮∈𝐙 be the solution of problem (<ref>) and 𝐮_h ∈𝐙_h be the solution of
problem (<ref>). Then
𝐮 - 𝐮_h_𝐕, μ≤ C (√(μ) h^k + 𝕂^-1/2_∞ h^k+1) |𝐮|_k+1 + C h^k+1 |𝐟|_k+1
Let (𝐮, p) ∈𝐕× Q be the solution of Problem (<ref>) and (𝐮_h, p_h) ∈𝐕_h × Q_h be the solution of Problem (<ref>). Then it holds:
p - p_h _Q, μ≤ C ( h^k |𝐮|_k+1 + h^k/√(μ) |p|_k + h^k+1 |𝐟|_k+1).
The constants C above are independent of h and μ.
In the last part of this section we present a brief discussion about the construction of a reduced virtual element method for Brinkman equations equivalent to Problem (<ref>) but involving significantly fewer degrees of freedom, especially for large k.
This construction essentially follows the guidelines of Section 5 in <cit.> (where we refer the reader for a deeper presentation).
Let us define the original reduced local virtual spaces, for k≥ 2:
𝐖_h^K := {𝐯∈ [H^1(K)]^2 s.t 𝐯_|∂ K∈ [_k(∂ K)]^2, { -Δ𝐯 - ∇ s ∈𝒢_k-2(K)^⊥,
div 𝐯∈_0(K),
. for some s ∈ H^1(K) }
As before we enlarge the virtual space 𝐖_h^K and we consider
𝐔_h^K := {𝐯∈ [H^1(K)]^2 s.t 𝐯_|∂ K∈ [_k(∂ K)]^2, { - Δ𝐯 - ∇ s ∈𝒢_k(K)^⊥,
div 𝐯∈_0(K),
. for some s ∈ H^1(K) }
Finally we define the enhanced Virtual Element space, the restriction 𝐕_h^K of 𝐔_h^K given by
𝐕_h^K := {𝐯∈𝐔_h^K s.t. (𝐯 - Π^∇,K_k 𝐯, 𝐠_k^⊥)_[L^2(K)]^2 = 0 for all 𝐠_k^⊥∈𝒢_k(K)^⊥/𝒢_k-2(K)^⊥} ,
where as before the symbol 𝒢_k(K)^⊥/𝒢_k-2(K)^⊥ denotes the polynomials in 𝒢_k(K)^⊥ that are L^2-orthogonal to all polynomials of 𝒢_k-2(K)^⊥.
For the pressures we consider the reduced space
Q_h^K := _0(K).
As sets of degrees of freedom for the reduced spaces, combining the argument in Section <ref> and <cit.> we may consider the following.
For every function 𝐯∈𝐕_h^K we take the following linear operators 𝐃_𝐕, split into three subsets (see Figure <ref>):
* 𝐃_𝐕1: the values of 𝐯 at each vertex of the polygon K,
* 𝐃_𝐕2: the values of 𝐯 at k-1 distinct points of every edge e ∈∂ K,
* 𝐃_𝐕3: the moments of 𝐯
∫_K 𝐯·𝐠_k-2^⊥ dK for all 𝐠_k-2^⊥∈𝒢_k-2(K)^⊥.
For every q ∈Q_h we consider
* 𝐃_𝐐: the moment
∫_K q dK.
Therefore we have that:
dim( 𝐕_h^K ) = dim([_k(∂ K)]^2) + dim(𝒢_k-2(K)^⊥) = 2n_K k + (k-1)(k-2)/2,
and
dim(Q_h^K) = dim(_0(K)) = 1,
where n_K is the number of vertexes in K.
We define the global reduced virtual element spaces in the standard fashion.
The reduced virtual element discretization of the Brinkman problem (<ref>) is then:
{ find 𝐮_h ∈𝐕_h and p_h ∈Q_h, such that
A_h(𝐮_h, 𝐯_h) + b(𝐯_h, p_h) = (𝐟_h, 𝐯_h) for all 𝐯_h ∈𝐕_h,
b(𝐮_h, q_h) = 0 for all q_h ∈Q_h..
Above, the bilinear forms A_h(·, ·) and b(·, ·), and the loading term 𝐟_h are the same as before.
The following proposition states the relation between Problem (<ref>) and the reduced Problem (<ref>) (the proof is equivalent to that of Proposition 5.1 in <cit.>).
Let (𝐮_h, p_h) ∈𝐕_h × Q_h be the solution of problem (<ref>) and (𝐮_h, p_h) ∈𝐕_h ×Q_h be the solution of problem (<ref>). Then
𝐮_h = 𝐮_h and p_h|K = Π_0^0, K p_h for all K ∈𝒯_h.
§ NUMERICAL TESTS
In this section we present two numerical experiments to test the practical performance of the method. The first experiment is focused on the method introduced in Section <ref> for the Darcy Problem, whereas in the second experiment we test the method in Section <ref> for the Brinkman equations.
Since the VEM velocity solution 𝐮_h is not explicitly known point-wise inside the elements, we compute the method error by comparing 𝐮 with a suitable polynomial projection of the approximated 𝐮_h.
In particular we consider the computable error quantities:
error(𝐮, H^1) := ( ∑_K ∈𝒯_h∇ u - Π_k-1^0, K (∇ u_h) _0,K^2 )^1/2
error(𝐮, H( div)) := ( ∑_K ∈𝒯_h div u - div u_h _0,K^2 + ∑_K ∈𝒯_h u - Π_k^0, K u_h _0,K^2 )^1/2
error(𝐮, L^2) := (∑_K ∈𝒯_h u - Π_k^0, K u_h _0,K^2 )^1/2
error(p, L^2) :=p - p_h_0.
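Each quantity is evaluated elementwise by quadrature, with the computable projections standing in for 𝐮_h. A sketch for error(𝐮, L^2) is given below, assuming that per-element quadrature weights and the point values of the exact and projected solutions have been precomputed (the data layout is ours).

import numpy as np

def l2_velocity_error(cells):
    # cells: list of (w, u_ex, Pi_u) with weights w of shape (n_q,) and
    # values u_ex, Pi_u of shape (n_q, 2) at the quadrature points
    err2 = 0.0
    for w, u_ex, Pi_u in cells:
        diff = u_ex - Pi_u
        err2 += np.sum(w * np.sum(diff**2, axis=1))
    return np.sqrt(err2)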
Regarding the computational domain, in our tests we always take the square domain Ω= [0,1] ^2, which is partitioned using the following sequences of polygonal meshes:
* {𝒱_h}_h: sequence of Voronoi meshes with h=1/4, 1/8, 1/16, 1/32,
* {𝒯_h}_h: sequence of triangular meshes with h=1/2, 1/4, 1/8, 1/16,
* {𝒬_h}_h: sequence of square meshes with h=1/4, 1/8, 1/16, 1/32.
* {𝒲_h}_h: sequence of WEB-like meshes with h= 4/10, 2/10, 1/10, 1/20.
An example of the adopted meshes is shown in Figure <ref>.
For the generation of the Voronoi meshes we use the code Polymesher <cit.>.
The non convex WEB-like meshes are composed of hexagons, generated starting from the triangular meshes {𝒯_h}_h and randomly displacing the midpoint of each (non boundary) edge.
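For reference, a mesh in this setting is simply a list of polygons. The snippet below builds the sequence {𝒬_h}_h in this format (a convenience sketch of ours; the Voronoi meshes come from PolyMesher, as mentioned above).

import numpy as np

def square_mesh(h):
    # the square mesh Q_h on [0,1]^2: each cell is the counterclockwise
    # array of its four vertices
    n = round(1.0 / h)
    return [np.array([[i * h, j * h], [(i + 1) * h, j * h],
                      [(i + 1) * h, (j + 1) * h], [i * h, (j + 1) * h]])
            for i in range(n) for j in range(n)]

print(len(square_mesh(1 / 8)))   # 64 elements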
In this example we consider the Darcy problem (<ref>) where we set 𝕂=I, and we choose the load term 𝐟 in such a way that the analytical solution is
𝐮(x,y) = -π [ sin(π x) cos(π y); cos(π x) sin(π y) ]
p(x,y) = cos(π x) cos(π y).
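The data of this test can be verified symbolically. The following SymPy sketch (ours) checks that 𝐮·𝐧 vanishes on ∂Ω, computes the source f = div 𝐮, and recovers the momentum forcing 𝕂^-1𝐮 + ∇ p induced by this choice (with 𝕂 = I).

import sympy as sp

x, y = sp.symbols("x y")
u = sp.Matrix([-sp.pi * sp.sin(sp.pi * x) * sp.cos(sp.pi * y),
               -sp.pi * sp.cos(sp.pi * x) * sp.sin(sp.pi * y)])
p = sp.cos(sp.pi * x) * sp.cos(sp.pi * y)

# no-flux boundary: the normal component vanishes on each side of the square
assert u[0].subs(x, 0) == 0 and u[0].subs(x, 1) == 0
assert u[1].subs(y, 0) == 0 and u[1].subs(y, 1) == 0

f = sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y))      # f = div u
print(f)                                  # -2*pi**2*cos(pi*x)*cos(pi*y)
grad_p = sp.Matrix([sp.diff(p, x), sp.diff(p, y)])
print(sp.simplify(u + grad_p))            # momentum forcing with K = I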
We analyse the practical performance of the virtual method by studying the errors versus the diameter h of the meshes. In addition we compare the results obtained with the scheme of Section <ref>, labeled as “div-free”, with those obtained with the method in the Appendix, labeled as “non div-free” (in both cases we consider polynomial degree k=2).
We notice that the "non div-free" method is a naive extension to the Darcy equation of the inf-sup stable scheme proposed in <cit.>. Since the scheme lacks a uniform ellipticity-on-the-kernel condition, it is not recommended for the problem under consideration. The purpose of the comparison is thus to underline the importance of the property Z_h⊆ Z (cf. Section 3.4) in the present context.
In Figures <ref> and <ref> we display the results for the sequence of Voronoi meshes 𝒱_h. In Figures <ref> and <ref> we show the results for the sequence of meshes 𝒯_h, while in Figures <ref> and <ref> we plot the results for the sequence of meshes 𝒬_h; finally, in Figures <ref> and <ref> we exhibit the results for the sequence of meshes 𝒲_h.
We notice that the theoretical predictions of Section <ref> and the Appendix are confirmed for both the L^2 norm and the H( div) norm. Note that for the H( div) norm we plot only the error for the “div-free” method, since such a scheme guarantees, by construction, a better approximation of the divergence. Indeed, let 𝐮_h (resp. 𝐮̃_h) be the solution obtained with the “div-free” (resp. the “non div-free”) method; then 𝐮_h satisfies
div 𝐮_h = Π^0,K_k-1 f = Π^0,K_k-1 ( div 𝐮) for all K ∈𝒯_h
whereas 𝐮̃_h satisfies the same equation only in a projected sense, i.e.
Π^0,K_k-1 ( div 𝐮̃_h) = Π^0,K_k-1 f = Π^0,K_k-1 ( div 𝐮) for all K ∈𝒯_h.
We can observe that the convergence rate of the L^2 norm for the pressure is optimal also for the “non div-free” method as proved in the Appendix.
Finally, we can observe that using a square mesh decomposition yields a convergence rate that is slightly better than predicted by the theory.
In this example we test the Brinkman equation (<ref>) with different values of the fluid viscosity μ and fixed permeability tensor 𝕂 = I. We choose the load term 𝐟 and the Dirichlet boundary conditions in such a way that the analytical solution is
𝐮(x,y) = [ sin(π x) cos(π y); -cos(π x) sin(π y) ]
p(x,y) = x^2 y^2 - 1/9.
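A short SymPy check (again ours) confirms that this velocity is exactly divergence-free and that p has zero mean on Ω, and it produces the load 𝐟 = -μΔ𝐮 + ∇ p + 𝕂^-1𝐮 used in the test (with 𝕂 = I).

import sympy as sp

x, y, mu = sp.symbols("x y mu", positive=True)
u = sp.Matrix([sp.sin(sp.pi * x) * sp.cos(sp.pi * y),
               -sp.cos(sp.pi * x) * sp.sin(sp.pi * y)])
p = x**2 * y**2 - sp.Rational(1, 9)

assert sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y)) == 0   # div u = 0
assert sp.integrate(p, (x, 0, 1), (y, 0, 1)) == 0              # zero mean

lap = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)
f = sp.simplify(-mu * u.applyfunc(lap)
                + sp.Matrix([sp.diff(p, x), sp.diff(p, y)]) + u)
print(f)   # equals (1 + 2*mu*pi**2) u + grad p, componentwise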
The aim of this test is to check the practical performance of the method introduced in Section <ref> in the reduced formulation (cf. (<ref>)).
In Table <ref> and Table <ref> we display the total amount of DoFs and the errors for the family of meshes 𝒱_h choosing k=2, respectively for the “div-free” method (cf. Section <ref>) and the “non div-free” method (cf. Remark <ref> and the Appendix).
We observe that also in the limit case, when the equation becomes a singular perturbation of the classic Darcy equations (e.g. for “small” μ), the proposed “div-free” method preserves the optimal order of accuracy.
§ ACKNOWLEDGEMENTS
The author wishes to thank L. Beirão da Veiga and C. Lovadina for several interesting discussions and suggestions on the paper.
The author was partially supported by the European Research Council through
the H2020 Consolidator Grant (grant no. 681162) CAVE, Challenges and Advancements in Virtual Elements. This support is gratefully acknowledged.
§ APPENDIX: NON DIVERGENCE-FREE VIRTUAL SPACE
We have built a new H^1-conforming (vector valued) virtual space for the velocity vector field different from the more standard one presented in <cit.> for the elasticity problem.
The topic of the present section is to analyse the extension to the Darcy equation of the scheme of <cit.>.
Even though the method should not be used for the Darcy problem (cf. Remark 6.1), the numerical experiments have shown an optimal error convergence rate for the pressure variable. In this Section, we theoretically explain such a behaviour, under a convexity assumption on Ω (essentially, a regularity assumption on the problem). To this end, we develop an inverse estimate for the VEM spaces which is interesting on its own, and can be used in other contexts.
We briefly describe the method by making use of various tools from the Virtual Element technology, and we
refer the interested reader to the papers <cit.> for a deeper presentation.
We consider the local virtual space
𝐖_h^K := {𝐯∈ [H^1(K)]^2 s.t 𝐯_|∂ K∈ [_k(∂ K)]^2 , Δ𝐯∈ [_k-2(K)]^2 }
with local degrees of freedom 𝐃_𝐕:
* 𝐃_𝐕1: the values of 𝐯 at each vertex of the polygon K,
* 𝐃_𝐕2: the values of 𝐯 at k-1 distinct points of every edge e ∈∂ K,
* 𝐃_𝐕3: the moments of 𝐯 up to order k-2, i.e.
∫_K 𝐯 ·𝐪_k-2 dK for all 𝐪_k-2∈ [_k-2(K)]^2.
As observed in <cit.>, the DoFs 𝐃_𝐕 allow us to compute the operator Π_k^∇,K𝐖_h^K → [_k(K)]^2 defined as the analogous of the H^1 semi-norm projection (c.f. (<ref>)).
For all K ∈𝒯_h, the augmented virtual local space 𝐔_h^K is defined by
𝐔_h^K = {𝐯∈ [H^1(K)]^2 s.t. 𝐯_|∂ K∈ [ℙ_k(∂ K)]^2, Δ𝐯∈ [ℙ_k(K)]^2 }.
Now we define the enhanced Virtual Element space, the restriction 𝐕_h^K of 𝐔_h^K given by
𝐕_h^K := {𝐯∈𝐔_h^K s.t. (𝐯 - Π_k^∇,K 𝐯, 𝐪_k )_[L^2(K)]^2 = 0 for all 𝐪_k∈ [ℙ_k(K)/ℙ_k-2 (K)]^2} ,
where the symbol ℙ_k(K)/ℙ_k-2 (K) denotes the polynomials of degree k living on K that are L^2-orthogonal to all polynomials of degree k-2 on K.
The enhanced space 𝐕_h^K has three fundamental properties (see <cit.> for a proof):
* [ℙ_k(K)]^2 ⊆𝐕_h^K,
* the set of linear operators 𝐃_𝐕 constitutes a set of DoFs for the space 𝐕_h^K,
* the L^2-projection operator Π^0, K_k: 𝐕_h^K → [ℙ_k(K)]^2 is exactly computable by the DoFs.
Recalling (<ref>) and from <cit.> it holds that dim( 𝐕_h^K ) = dim( 𝐖_h^K ).
For the pressures we use the space of the piecewise polynomials Q_h (c.f. (<ref>)).
For what concerns the construction of the approximated bilinear forms, it is straightforward to see that
b(𝐯, q) = ∑_K ∈𝒯_h b^K(𝐯, q) = ∑_K ∈𝒯_h∫_K div 𝐯 q dK = ∑_K ∈𝒯_h( - ∫_K 𝐯·∇ q dK + ∫_∂ K q 𝐯·𝐧)
is computable from the DoFs for all 𝐯∈𝐕_h, q ∈ Q_h. Moreover using standard arguments <cit.> we can define a computable bilinear form
a_h^K(·, ·) : 𝐕_h^K ×𝐕_h^K →ℝ
approximating the continuous form a^K(·, ·), and satisfying the k-consistency (c.f. (<ref>)) and the stability properties (c.f. (<ref>)).
Finally we define the global approximated bilinear form a_h(·, ·) : 𝐕_h ×𝐕_h →ℝ by simply summing the local contributions.
By construction (see for instance <cit.>) the discrete bilinear form a_h(·,·) is
(uniformly) stable with respect to the L^2 norm.
We are now ready to state the proposed discrete virtual element problem:
{ find (𝐮_h, p_h) ∈𝐕_h × Q_h, such that
a_h(𝐮_h, 𝐯_h) + b(𝐯_h, p_h) = 0 for all 𝐯_h ∈𝐕_h,
b(𝐮_h, q_h) = (f, q_h) for all q_h ∈ Q_h..
We shall first prove an inverse inequality for the virtual element functions in 𝐕_h.
Under the assumptions (𝐀1), (𝐀2), let K ∈𝒯_h and let 𝐯_h ∈𝐕_h^K. Then the following inverse estimate holds
|𝐯_h|_1,K≤ c_inv h_K^-1 𝐯_h_0,K
where the constant c_inv is independent of 𝐯_h, h_K and K.
We only sketch the proof, since we follow the guidelines of Lemmas 3.1 and 3.3 in <cit.>. Let 𝐯_h ∈𝐕_h^K, then
|𝐯_h|^2_1,K = ∫_K ∇𝐯_h ·∇𝐯_h = - ∫_K Δ𝐯_h 𝐯_h + ∫_∂ K𝐯_h ∇𝐯_h ·𝐧_K.
Under the assumptions (𝐀1), (𝐀2) and by Lemma 3.3 in <cit.> we get
- ∫_K Δ𝐯_h 𝐯_h ≤Δ𝐯_h_0,K𝐯_h_0,K≤ C_1 h_K^-1 |𝐯_h|_1,K𝐯_h_0,K
where the constant C_1 is independent of 𝐯_h, h_K and K. Concerning the second addend in the right side of (<ref>), under the assumptions (𝐀1), (𝐀2), and using Lemma 3.1 in <cit.>, for all 𝐰∈ [H^1/2(∂ K)]^2 the following holds: there exists an extension 𝐰̃∈ [H^1(K)]^2 of 𝐰 such that
h_K^-1 𝐰̃_0, K + |𝐰̃|_1, K≤ C 𝐰_1/2, ∂ K,
where we consider the scaled norm
𝐰_1/2, ∂ K := h_K^-1/2 𝐰_0, ∂ K + |𝐰|_1/2, ∂ K.
By definition it holds
∫_∂ K𝐯_h ∇𝐯_h ·𝐧_K ≤𝐯_h_1/2, ∂ K sup_𝐰∈ [H^1/2(∂ K)]^2⟨∇𝐯_h ·𝐧_K , 𝐰⟩/𝐰_1/2, ∂ K.
Now, using the definition (<ref>), an inverse estimate (𝐯_h is polynomial on ∂ K) and the trace theorem <cit.>, it holds that
𝐯_h_1/2, ∂ K = h_K^-1/2 𝐯_h_0, ∂ K + |𝐯_h|_1/2, ∂ K≤ C_2 h_K^-1/2 𝐯_h_0, ∂ K
≤ C_2 h_K^-1/2 ( 𝐯_h_0, K)^1/2(h_K^-1 𝐯_h_0, K + |𝐯_h|_1, K)^1/2
≤(ϵ + C_2^2/ϵ) h_K^-1𝐯_h_0, K + ϵ |𝐯_h|_1, K.
for any real ϵ >0. For the last term, using (<ref>), (<ref>) and (<ref>) we get
sup_𝐰∈ [H^1/2(∂ K)]^2⟨∇𝐯_h ·𝐧_K , 𝐰⟩/𝐰_1/2, ∂ K ≤ C sup_𝐰∈ [H^1(K)]^2⟨∇𝐯_h ·𝐧_K , 𝐰⟩/h_K^-1 𝐰_0, K + |𝐰|_1, K
≤ C ( sup_𝐰∈ [H^1(K)]^2∫_K Δ𝐯_h 𝐰/h_K^-1 𝐰_0, K + |𝐰|_1, K +
sup_𝐰∈ [H^1(K)]^2∫_K ∇𝐯_h ·∇𝐰/h_K^-1 𝐰_0, K + |𝐰|_1, K)
≤ C ( sup_𝐰∈ [H^1(K)]^2∫_K Δ𝐯_h 𝐰/h_K^-1 𝐰_0, K +
sup_𝐰∈ [H^1(K)]^2∫_K ∇𝐯_h ·∇𝐰/ |𝐰|_1, K)
≤ C ( h_K Δ𝐯_h_0, K + |𝐯_h|_1,K) ≤ C_3 |𝐯_h|_1,K.
From (<ref>) and (<ref>) we can conclude that
∫_∂ K𝐯_h ∇𝐯_h ·𝐧_K ≤( (ϵ + C_2^2/ϵ) h_K^-1𝐯_h_0, K + ϵ |𝐯_h|_1, K) C_3|𝐯_h|_1, K
Finally, choosing ϵ = 1/(2C_3) and collecting (<ref>) and (<ref>) in (<ref>) we have
1/2|𝐯_h|_1, K≤( C_1 + 1/2 + 2 C_2^2 C_3^2) h_K^-1𝐯_h_0, K
from which the thesis follows.
Let us analyse the theoretical properties of the method. We consider the discrete kernel:
𝐙_h := {𝐯_h ∈𝐕_h s.t. b(𝐯_h, q_h) = 0 for all q_h ∈ Q_h} = {𝐯_h ∈𝐕_h s.t. Π_k-1^0,K ( div𝐯_h) = 0 for all K ∈𝒯_h},
therefore the divergence-free property is satisfied only in a relaxed (projected) sense. As a consequence the bilinear form a_h(·, ·) is not uniformly coercive on the discrete kernel 𝐙_h; nevertheless the following h-dependent coercivity property holds
a_h(𝐯_h, 𝐯_h) ≥α α_* 𝐯_h_0^2 ≥ C h^2 𝐯_h^2_𝐕
that can be derived by using inverse estimate (<ref>).
Recalling that a_h(·, ·) is continuous with respect to the 𝐕 norm and that the discrete inf-sup condition is fulfilled <cit.>
sup_𝐯_h ∈𝐕_h 𝐯_h ≠0b(𝐯_h, q_h)/𝐯_h_𝐕≥β̃q_h_Q for all q_h ∈ Q_h
problem (<ref>) has a unique solution but we expect a worse order of accuracy since the bilinear form a_h(·, ·) is not uniformly stable. In fact we have the following convergence results that are, perhaps surprisingly, still optimal in the pressure variable.
Let (𝐮, p) ∈𝐕× Q be the solution of problem (<ref>) and (𝐮_h, p_h) ∈𝐕_h × Q_h be the solution of problem (<ref>). Then
𝐮 -𝐮_h _0≤ C h^k-1 (|p|_k + h^2 |u|_k+1) , and 𝐮 - 𝐮_h _𝐕≤ C h^k-2 (|p|_k + h^2 |u|_k+1).
Assuming further that Ω is convex, the following estimate holds:
p - p_h_Q≤ C h^k (|p|_k + h^2 |u|_k+1).
As observed in the proof of Theorem <ref> the inf-sup condition (<ref>) implies the existence of a function 𝐮_I ∈𝐕_h such that
Π_k-1^0,K ( div 𝐮_I) = Π_k-1^0,K( div 𝐮) for all K ∈𝒯_h,
𝐮 - 𝐮_I_𝐕≤ C inf_𝐯_h ∈𝐕_h𝐮 - 𝐯_h_𝐕
Now let us set δ_h = 𝐮_h - 𝐮_I. Concerning the L^2 norm, using the stability of the bilinear form a_h(·, ·) and (<ref>) together with (<ref>)
α_* α δ_h^2_0 ≤α_* a(δ_h, δ_h) ≤a_h(δ_h, δ_h) = a_h(𝐮_h, δ_h) - a_h(𝐮_I, δ_h)
= - b(δ_h, p_h) - a(𝐮, δ_h) + a(𝐮, δ_h) - a_h(𝐮_I, δ_h)
= b(δ_h, p - p_h) + ( a(𝐮, δ_h) - a_h(𝐮_I, δ_h)) =:
μ_1(δ_h) + μ_2(δ_h).
By (<ref>) and property (<ref>), it is straightforward to see that
Π_k-1^0,K ( div 𝐮_h) = Π_k-1^0,K ( div 𝐮) = Π_k-1^0,K ( div 𝐮_I) for all K ∈𝒯_h
so that δ_h ∈𝐙_h. Therefore
μ_1(δ_h) = b(δ_h, p - p_h)= b(δ_h, p) = b(δ_h, p - q_h)
for all q_h ∈ Q_h. Using the inverse estimate (<ref>) and standard approximation theory we get
|μ_1(δ_h)| ≤ C δ_h_𝐕inf_q_h ∈ Q_hp - q_h_Q ≤ C h^-1 δ_h_0 h^k |p|_k = C h^k-1 |p|_k δ_h_0.
By standard techniques in VEM convergence theory, it holds that
|μ_2(δ_h)| ≤ C h^k+1 |u|_k+1 δ_h_0.
Collecting (<ref>) and (<ref>) in (<ref>) we get the L^2 estimate. Whereas the 𝐕 norm estimate follows from an inverse estimate (<ref>).
Concerning the estimate on the pressure, let p_π be the piecewise polynomial with respect to 𝒯_h defined by p_π= Π_k-1^0,K p for all K ∈𝒯_h. Let us set
χ_h := 𝐮 - 𝐮_h, z:= p_π - p, ρ_h:= p_π - p_h
From (<ref>) and (<ref>), it is straightforward to see that the couple (χ_h, ρ_h) solves the Darcy problem
{ a(χ_h, 𝐯_h) + b(𝐯_h, ρ_h) = (a_h(𝐮_h, 𝐯_h) - a(𝐮_h, 𝐯_h)) + b(𝐯_h, z) for all 𝐯_h ∈𝐕_h,
b(χ_h, q_h) = 0 for all q_h ∈ Q_h..
To prove the estimate for the pressure we employ the usual duality argument. Let therefore ϕ be the solution of the auxiliary problem
{ Δϕ = ρ_h on Ω
ϕ = 0 on ∂Ω .
that, due to the convexity assumption, satisfies
ϕ_2 ≤ C ρ_h_0
where the constant C depends only on Ω.
For all 𝐯 let us denote by 𝐯_I ∈𝐕_h its interpolant defined in (<ref>) and (<ref>). Therefore the Green formula together with (<ref>) yields
ρ_h^2_0 = (ρ_h, Δϕ) = b(∇ϕ, ρ_h) = b((∇ϕ)_I, ρ_h)
= (a_h(𝐮_h, (∇ϕ)_I) - a(𝐮_h, (∇ϕ)_I)) + b((∇ϕ)_I, z) - a(χ_h, (∇ϕ)_I)
=: μ_1((∇ϕ)_I) + μ_2((∇ϕ)_I) + μ_3((∇ϕ)_I).
We analyse separately the three terms. For the first one, using the consistency property of a_h(·, ·), the polynomial approximation of 𝐮 and ∇ϕ, the estimate on the velocity error and (<ref>) we get
μ_1((∇ϕ)_I) = a_h(𝐮_h, (∇ϕ)_I) - a(𝐮_h, (∇ϕ)_I)
= ∑_K ∈𝒯_h ( a_h^K(𝐮_h, (∇ϕ)_I) - a^K(𝐮_h, (∇ϕ)_I) )
= ∑_K ∈𝒯_h( a_h^K(𝐮_h - 𝐮_π, (∇ϕ)_I - (∇ϕ)_π) - a^K(𝐮_h - 𝐮_π, (∇ϕ)_I - (∇ϕ)_π) )
≤ C ∑_K ∈𝒯_h𝐮_h - 𝐮_π_0,K(∇ϕ)_I - (∇ϕ)_π_0,K
≤ C ∑_K ∈𝒯_h (𝐮 - 𝐮_h_0,K + 𝐮 - 𝐮_π_0,K) ((∇ϕ) - (∇ϕ)_I_0,K + (∇ϕ) - (∇ϕ)_π_0,K)
≤ C h^k-1 (|p|_k + h^2|u|_k+1) h ∇ϕ_1 ≤ C h^k (|p|_k + h^2|u|_k+1) ϕ_2 ≤ C h^k (|p|_k + h^2|u|_k+1) ρ_h_0
Concerning the second term we have
μ_2((∇ϕ)_I) = b((∇ϕ)_I, z) = b((∇ϕ)_I - ∇ϕ, z) + b(∇ϕ, z)
≤ C (|∇ϕ - (∇ϕ)_I |_1 + ϕ_2) z_0 ≤ C (|∇ϕ|_1 + ϕ_2) z_0
≤ C h^k |p|_k ϕ_2 ≤ C h^k |p|_k ρ_h_0.
Finally, for the third term we begin by observing that from (<ref>)
b(χ_h, ϕ) = b(χ_h, ϕ - ϕ_π),
for all ϕ_π∈ Q_h, and by the Green formula
b(χ_h, ϕ) = - a(χ_h, ∇ϕ) = - a(χ_h, ∇ϕ - (∇ϕ)_I) - a(χ_h, (∇ϕ)_I).
Therefore, by collecting (<ref>), (<ref>), and using the previous error estimate, it holds that
μ_3((∇ϕ)_I) = - a(χ_h, (∇ϕ)_I) = a(χ_h, ∇ϕ - (∇ϕ)_I) + b(χ_h, ϕ - ϕ_π)
≤ C (χ_h_0 ∇ϕ - (∇ϕ)_I_0 + χ_h_𝐕ϕ - ϕ_π_0)
≤ C ( h^k-1 (|p|_k + h^2|u|_k+1) h ϕ_2 + h^k-2 (|p|_k + h^2|u|_k+1) h^2 ϕ_2 ) ≤ C h^k (|p|_k + h^2|u|_k+1) ρ_h_0.
Finally by collecting (<ref>), (<ref>) and (<ref>) in (<ref>) we get the thesis.
http://arxiv.org/abs/1701.08223v2 | 20170127235743 | The Python-based Simulations of Chemistry Framework (PySCF) | [
"Qiming Sun",
"Timothy C. Berkelbach",
"Nick S. Blunt",
"George H. Booth",
"Sheng Guo",
"Zhendong Li",
"Junzi Liu",
"James McClain",
"Elvira R. Sayfutyarova",
"Sandeep Sharma",
"Sebastian Wouters",
"Garnet Kin-Lic Chan"
] | physics.chem-ph | [
"physics.chem-ph"
] |
1]Qiming Sun (osirpt.sun@gmail.com)
2]Timothy C. Berkelbach
3,4]Nick S. Blunt
5]George H. Booth
1,6]Sheng Guo
1]Zhendong Li
7]Junzi Liu
1,6]James D. McClain
1,6]Elvira R. Sayfutyarova
8]Sandeep Sharma
9]Sebastian Wouters
1]Garnet Kin-Lic Chan (gkc1000@gmail.com)
[1]Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena CA 91125, USA
[2]Department of Chemistry and James Franck Institute, University of Chicago, Chicago, Illinois 60637, USA
[3]Chemical Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
[4]Department of Chemistry, University of California, Berkeley, California 94720, USA
[5]Department of Physics, King's College London, Strand, London WC2R 2LS, United Kingdom
[6]Department of Chemistry, Princeton University, Princeton, New Jersey 08544, USA
[7]Institute of Chemistry Chinese Academy of Sciences, Beijing 100190, P. R. China
[8]Department of Chemistry and Biochemistry, University of Colorado Boulder, Boulder, CO 80302, USA
[9]Brantsandpatents, Pauline van Pottelsberghelaan 24, 9051 Sint-Denijs-Westrem, Belgium
The Python-based Simulations of Chemistry Framework (PySCF)
=================
§.§.§ Abstract
PySCF is a general-purpose electronic structure platform designed from the ground
up to emphasize code simplicity, so as to facilitate new method development and enable
flexible computational workflows. The package provides a wide range of tools to
support simulations of finite-size systems, extended systems with periodic boundary
conditions, low-dimensional periodic systems, and custom Hamiltonians, using mean-field
and post-mean-field methods with standard Gaussian basis functions. To ensure ease of
extensibility, PySCF uses the Python language to implement almost all of its
features, while computationally critical paths are implemented with heavily optimized C
routines. Using this combined Python/C implementation, the package is as efficient as the
best existing C or Fortran based quantum chemistry programs. In this paper we document
the capabilities and design philosophy of the current version of the PySCF
package.
§ INTRODUCTION
The Python programming language is playing an increasingly important role in scientific
computing. As a high level language, Python supports rapid development practices and easy
program maintenance. While programming productivity is hard to measure, it is commonly
thought that it is more efficient to prototype new ideas in Python, rather than in
traditional low-level compiled languages such as Fortran or C/C++.
Further, through the use of the many high-quality numerical libraries available in Python
– such as NumPy<cit.>, SciPy<cit.>, and MPI4Py<cit.>
– Python programs can perform at competitive levels with optimized Fortran and C/C++
programs, including on large-scale computing architectures.
There have been several efforts in the past to incorporate Python into electronic structure
programs.
Python has been widely adopted in a scripting role:
the Psi4<cit.> quantum chemistry package uses a custom “Psithon” dialect to
drive the underlying C++ implementation, while general simulation environments
such as ASE<cit.> and PyMatGen<cit.> provide Python frontends to multiple
quantum chemistry and electronic structure packages, to organize complex workflows<cit.>. Python
has also proved popular for implementing symbolic second-quantized algebra and code
generation tools, such as the Tensor Contraction Engine<cit.> and the
SecondQuantizationAlgebra library<cit.>.
In the above cases, Python has been used as a supporting language, with the underlying
quantum chemistry algorithms implemented in a compiled language. However, Python has also
seen some use as a primary implementation language for electronic structure methods.
PyQuante<cit.> was an early attempt to implement a Gaussian-based
quantum chemistry code in Python, although it did not achieve speed or functionality
competitive with typical packages. Another early effort was the GPAW<cit.>
code, which implements the projector augmented wave formalism for density functional
theory, and which is still under active development in multiple groups. Nonetheless, it
is probably fair to say that using Python as an implementation language, rather than a
supporting language, remains the exception rather than the rule in modern quantum
chemistry and electronic structure software efforts.
With the aim of developing a new highly functional, high-performance computing toolbox
for the quantum chemistry of molecules and materials implemented primarily in the
Python language, we started the open-source project “Python-based Simulations of Chemistry Framework” (PySCF) in 2014.
The program was initially ported from our quantum chemistry density matrix embedding
theory (DMET) project<cit.> and contained only the Gaussian integral interface, a
basic Hartree-Fock solver, and a few post-Hartree-Fock components required by DMET.
In the next 18 months, multi-configurational self-consistent-field (MCSCF), density
functional theory and coupled cluster theory, as well as relevant modules for
molecular properties, were added into the package.
In 2015, we released the first stable version, PySCF 1.0, wherein we codified our primary
goals for further code development: to produce a package that emphasizes simplicity,
generality, and efficiency, in that order.
As a result of this choice, most functions in PySCF are written purely in Python, with a very
limited amount of C code only for the most time-critical parts.
The various features and APIs are designed and implemented in the simplest and most
straightforward manner, so that users can easily modify the source code to meet their own
scientific needs and workflow.
However, although we have favored algorithm accessibility and extensibility over
performance, we have found that the efficient use of numerical Python libraries allows
PySCF to perform at least as fast as the best existing quantum chemistry
implementations.
In this article, we highlight the current capabilities and design philosophy of the
PySCF package.
§ CAPABILITIES
Molecular electronic structure methods are typically the main focus of quantum chemistry
packages. We have put significant effort towards the production of a stable, feature-rich
and efficient molecular simulation environment in PySCF.
In addition to molecular quantum chemistry methods, PySCF also provides a wide
range of quantum chemistry methods for extended systems with periodic boundary conditions
(PBC). Table <ref> lists the main electronic structure
methods available in the PySCF package.
More detailed descriptions are presented in Section <ref> - Section <ref>.
Although not listed in the table, many auxiliary tools for method development are also part of the package.
They are briefly documented in Section <ref> - Section <ref>.
§.§ Self-consistent field methods
Self-consistent field (SCF) methods are the starting point for most electronic structure
calculations. In PySCF, the SCF module includes implementations of Hartree-Fock
(HF) and density functional theory (DFT) for restricted, unrestricted, closed-shell and
open-shell Slater determinant references. A wide range of predefined exchange-correlation (XC) functionals
are supported through a general interface to the Libxc<cit.> and
Xcfun<cit.> functional libraries. Using the interface, as shown
in Figure <ref>, one can easily customize the XC functionals in DFT calculations.
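As an illustration, a minimal sketch of such a customization in Python; the molecule and the hand-built B3LYP-type mixture below are illustrative choices, not prescribed ones.

from pyscf import gto, dft

# Minimal sketch of a user-composed functional; the mixing string follows
# the "exchange, correlation" convention of the xc attribute.
mol = gto.M(atom='H 0 0 0; F 0 0 1.1', basis='cc-pvdz')
mf = dft.RKS(mol)
mf.xc = '0.2*HF + 0.08*LDA + 0.72*B88, 0.81*LYP + 0.19*VWN'
mf.kernel()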
PySCF uses the Libcint<cit.> Gaussian
integral library, written by one of us (QS) as its integral engine.
In its current implementation, the SCF program can handle over 5000 basis functions on a
single symmetric multiprocessing (SMP) node without any approximations to the integrals.
To obtain rapid convergence in the SCF iterations, we have also developed a second order
co-iterative augmented Hessian (CIAH) algorithm for orbital optimization <cit.>.
Using the direct SCF technique with the CIAH algorithm, we are able to converge
a Hartree-Fock calculation for the open-shell molecule Fe(II)-porphine (2997
AOs) on a 16-core node in one day.
§.§ Post SCF methods
Single-reference correlation methods can be used on top of the HF or DFT references,
including Møller-Plesset second-order perturbation theory (MP2), configuration
interaction, and coupled cluster theory.
Canonical single-reference coupled cluster theory has been implemented with
single and double excitations (CCSD)<cit.> and with perturbative triples
[CCSD(T)].
The associated derivative routines include CCSD and CCSD(T) density matrices, CCSD and
CCSD(T) analytic gradients, and equation-of-motion CCSD for the ionization potentials,
electron affinities, and excitation energies
(EOM-IP/EA/EE-CCSD)<cit.>.
The package contains two complementary implementations of each of these methods.
The first set are straightforward spin-orbital and spatial-orbital implementations, which
are optimized for readability and written in pure Python, using the syntax of the
NumPy einsum function (which can use either the default NumPy implementation
or a custom BLAS-based version) for tensor contraction.
These implementations are easy for the user to modify. A second
spatial-orbital implementation has been intensively optimized to minimize
dataflow and uses asynchronous I/O and a threaded gemm function for efficient
tensor contractions. For a system of 25 occupied orbitals and 1500 virtual orbitals
(H_50 with cc-pVQZ basis), the latter CCSD implementation takes less than 3 hours to
finish one iteration using 28 CPU cores.
The configuration interaction code implements two solvers: a solver for configuration
interaction with single and double excitations (CISD), and a determinant-based full
configuration interaction (FCI) solver<cit.> for fermion, boson or coupled
fermion-boson Hamiltonians. The CISD solver has a similar program layout
to the CCSD solver. The FCI solver additionally implements the spin-squared operator,
second quantized creation and annihilation operators (from which arbitrary second
quantized algebra can be implemented), functions to evaluate the density matrices and
transition density matrices (up to fourth order), as well as a function to evaluate the
overlap of two FCI wavefunctions in different orbital bases. The FCI solver is
intensively optimized for multi-threaded performance. It can perform one matrix-vector
operation for 16 electrons and 16 orbitals using 16 CPU cores in 30 seconds.
§.§ Multireference methods
For multireference problems, the PySCF package provides the complete active space self
consistent field (CASSCF) method<cit.> and N-electron
valence perturbation theory (NEVPT2)<cit.>.
When the size of the active space exceeds the capabilities of the
conventional FCI solver, one can switch to external variational solvers
such as a density matrix renormalization group (DMRG) program<cit.>
or a full configuration interaction quantum Monte Carlo (FCIQMC) program<cit.>.
Incorporating external solvers into the CASSCF optimizer widens
the range of possible applications, while raising new challenges for an efficient CASSCF
algorithm.
One challenge is the communication between the external solver and the orbital
optimization driver; communication must be limited to quantities
that are easy to obtain from the external solver.
A second challenge is the cost of handling quantities associated with the active space;
for example, as the active space becomes large, the memory required
to hold integrals involving active labels can easily exceed available memory.
Finally, any approximations introduced in the context of the above two challenges
should not interfere with the quality of convergence of the CASSCF optimizer.
To address these challenges,
we have implemented a general AO-driven CASSCF optimizer<cit.> that
provides second order convergence and which may easily be combined with a wide
variety of external variational solvers, including DMRG, FCIQMC and their
state-averaged solvers.
Only the 2-particle density matrix and Hamiltonian integrals
are communicated between the CASSCF driver and the external CI solver.
Further, the AO-driven algorithm has a low memory and I/O footprint. The current
implementation supports
calculations with 3000 basis functions and 30–50 active orbitals
on a single SMP node with 128 GB memory, without any approximations to the AO integrals.
A simple interface is provided to use an external solver in
multiconfigurational calculations.
Figure <ref> shows how to perform a
DMRG-CASSCF calculation by replacing the fcisolver attribute of the CASSCF
method.
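A minimal sketch of this replacement is given below; it assumes the dmrgscf module (which wraps the external Block code) is installed and configured, and the CAS(10e,8o) choice for N2 is purely illustrative.

from pyscf import gto, scf, mcscf, dmrgscf

# Hedged sketch: availability and settings of dmrgscf depend on the
# local installation of the external Block program.
mol = gto.M(atom='N 0 0 0; N 0 0 1.1', basis='cc-pvdz')
mf = scf.RHF(mol).run()
mc = mcscf.CASSCF(mf, 8, 10)          # 8 active orbitals, 10 active electrons
mc.fcisolver = dmrgscf.DMRGCI(mol)    # swap the default FCI solver for DMRG
mc.kernel()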
DMRG-SC-NEVPT2<cit.>, and ic-MPS-PT2 and
ic-MPS-LCC<cit.> methods are also available through the interface to the DMRG
program package Block<cit.>, and the ic-MPS-LCC program of
Sharma <cit.>.
§.§ Molecular properties
At the present stage, the program can compute molecular properties
such as analytic nuclear gradients, analytic nuclear Hessians, and NMR
shielding parameters at the SCF level. The CCSD and CCSD(T) modules include solvers for
the Λ-equations. As a result, we also provide one-particle and two-particle density
matrices, as well as the analytic nuclear gradients, for the CCSD and CCSD(T)
methods<cit.>.
For excited states,
time-dependent HF (TDHF) and time-dependent DFT (TDDFT)
are implemented on top of the SCF module. The relevant analytic nuclear
gradients are also programmed<cit.>.
The CCSD module offers another option to obtain
excited states using the EOM-IP/EA/EE-CCSD methods. A third option
is through the multi-root CASCI/CASSCF solvers, optionally followed by the MRPT tool chain.
Starting from the multi-root CASCI/CASSCF solutions, the program can compute
the density matrices of all the states and the transition density matrices
between any two states. One can contract these density matrices with
specific AO integrals to obtain different first order molecular properties.
§.§ Relativistic effects
Many different relativistic treatments are available in PySCF.
Scalar relativistic effects can be added to all SCF and post-SCF methods through relativistic
effective core potentials (ECP)<cit.> or the all-electron spin-free X2C<cit.>
relativistic correction.
For a more advanced treatment, PySCF also provides 4-component relativistic
Hartree-Fock and no-pair MP2 methods with Dirac-Coulomb, Dirac-Coulomb-Gaunt, and
Dirac-Coulomb-Breit Hamiltonians.
Although not programmed as a standalone module, the no-pair
CCSD electron correlation energy can also be computed with the straightforward
spin-orbital version of the CCSD program.
Using the 4-component Hamiltonian, molecular properties including analytic
nuclear gradients and NMR shielding parameters are available at the mean-field level<cit.>.
§.§ Orbital localizer and result analysis
Two classes of orbital localization methods are available in the package.
The first emphasizes the atomic character of the basis functions.
The relevant localization functions can generate intrinsic atomic orbitals
(IAO)<cit.>, natural atomic orbitals (NAO)<cit.>,
and meta-Löwdin orbitals<cit.> based on orbital projection and
orthogonalization.
With these AO-based local orbitals, charge distributions can be properly
assigned to atoms in population analysis<cit.>.
In the PySCF population analysis code, meta-Löwdin orbitals are the
default choice.
The second class, represented by Boys-Foster, Edmiston-Ruedenberg, and
Pipek-Mezey localization, require minimizing (or maximizing) the dipole, the
Coulomb self-energy, or the atomic charges, to obtain the optimal localized orbitals.
The localization routines can take arbitrary orthogonal input orbitals and call
the CIAH algorithm to rapidly converge the solution.
For example, using 16 CPU cores, it takes 3 minutes to localize 1620 HF unoccupied
orbitals for the C_60 molecule using Boys localization.
A common task when analysing the results of an electronic structure calculation
is to visualize the orbitals.
Although PySCF does not have a visualization tool itself, it provides a module
to convert the given orbital coefficients to the Molden<cit.> format which can
be read and visualized by other software, e.g. Jmol<cit.>.
Figure <ref> is an example to run Boys localization for the
C_60 HF occupied orbitals and to generate the orbital surfaces of the localized
σ-bond orbital in a single Python script.
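A minimal sketch of that localization-plus-export workflow follows; a small diatomic stands in for the C_60 example of the text.

from pyscf import gto, scf, lo
from pyscf.tools import molden

# Sketch: localize the occupied orbitals and export them for visualization.
mol = gto.M(atom='C 0 0 0; O 0 0 1.13', basis='cc-pvdz')
mf = scf.RHF(mol).run()
nocc = mol.nelectron // 2
loc_orb = lo.Boys(mol, mf.mo_coeff[:, :nocc]).kernel()
molden.from_mo(mol, 'boys_loc.molden', loc_orb)  # view e.g. with Jmol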
§.§ Extended systems with periodic boundary conditions
PBC implementations typically use either plane
waves<cit.> or local atomic
functions<cit.> as the underlying orbital basis.
The PBC implementation in PySCF uses the local basis formulation, specifically
crystalline orbital Gaussian basis functions ϕ, expanded in terms of a lattice sum
over local Gaussians χ
ϕ_k,χ(r) = ∑_T
e^ik·Tχ(r - T)
where k is a vector in the first Brillouin zone and T is a lattice
translational vector.
We use a pure Gaussian basis in our PBC implementation for two reasons: to
simplify the development of post-mean-field methods for extended systems and to
have a seamless interface and direct comparability to
finite-sized quantum chemistry calculations. Local bases are favourable for
post-mean-field methods because they are generally quite compact, resulting in small
virtual spaces <cit.>, and further allow locality to be exploited.
Due to the use of local bases, various boundary conditions can be easily
applied in the PBC module, from zero-dimensional systems (molecules) to
extended one-, two- and three-dimensional periodic systems.
The PBC module supports both all-electron and pseudopotential calculations. Both
separable pseudopotentials (e.g. Goedecker-Teter-Hutter (GTH)
pseudopotentials <cit.>) and non-separable pseudopotentials (quantum chemistry ECPs and
Burkatzi-Filippi-Dolg pseudopotentials<cit.>) can be
used.
In the separable pseudopotential implementation, the associated orbitals and densities are
guaranteed to be smooth, allowing a grid-based treatment that uses discrete fast Fourier
transforms <cit.>.
In both the pseudopotential and all-electron PBC calculations,
Coulomb-based integrals are handled via density fitting as described in Section <ref>.
The PBC implementation is organized in direct correspondence to the molecular implementation.
We implemented the same function interfaces as in the molecular code,
with analogous module and function names.
Consequently, methods defined in the molecular part of the code can be seamlessly mixed
with the PBC functions without modification, especially in Γ-point calculations
where the PBC wave functions are real.
Thus, starting from PBC Γ-point mean-field orbitals, one can,
for example, carry out CCSD, CASSCF, TDDFT, etc. calculations using
the molecular implementations.
We also introduce specializations of the PBC methods to support
k-point (Brillouin zone) sampling.
The k-point methods slightly
modify the Γ-point data structures, but inherit from and reuse
almost all of the Γ-point functionality.
Explicit k-point sampling is supported at the HF and DFT level, and on top of this we
have also implemented k-point MP2, CCSD, CCSD(T) and EOM-CCSD methods<cit.>, with
optimizations to carefully distribute work and data across cores.
On 100 computational cores, mean-field
simulations including unit cells with over 100 atoms, or k-point CCSD calculations with
over 3000 orbitals, can be executed without difficulty.
§.§ General AO integral evaluator and J/K builds
Integral evaluation forms the foundation of Gaussian-based electronic
structure simulation.
The general integral evaluator library Libcint supports a wide range of
GTO integrals, and PySCF exposes simple APIs to access the Libcint integral
functions.
As the examples in Figure <ref> show,
the PySCF integral API allows users to access AO integrals either in a
giant array or in individual shells with a single line of Python code. The
integrals provided include,
* integrals in the basis of Cartesian, real-spherical and j-adapted spinor GTOs;
* arbitrary integral expressions built from r, p, and
σ polynomials;
* 2-center, 3-center and 4-center 2-electron integrals for the Coulomb
operator 1/r_12, range-separated Coulomb operator
erf(ω r_12)/r_12,
Gaunt interaction, and Breit interaction.
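A minimal sketch of the two access patterns mentioned above; the integral names follow the Libcint naming convention used by PySCF, and the diatomic is illustrative.

from pyscf import gto

mol = gto.M(atom='He 0 0 0; He 0 0 1.5', basis='cc-pvdz')
s = mol.intor('int1e_ovlp')                       # full overlap matrix
eri = mol.intor('int2e')                          # full (ij|kl) array
blk = mol.intor_by_shell('int2e', (0, 0, 1, 1))   # a single shell quadruple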
Using the general AO integral evaluator, the package provides a general
AO-driven J/K contraction function.
J/K-matrix construction involves a contraction over a high order tensor
(e.g. 4-index 2-electron integrals (ij|kl)) and a low order tensor (e.g. the 2-index
density matrix γ)
J_ij = ∑_kl (ij|kl) γ_kl
K_il = ∑_jk (ij|kl) γ_jk
When both tensors can be held in memory, the Numpy package offers a convenient
tensor contraction function, einsum,
to quickly construct J/K matrices.
However, it is common for the high order tensor to be too large to fit into the
available memory.
Using the Einstein summation notation of the NumPy einsum function,
our AO-driven J/K contraction implementation offers the capability to contract the
high order tensor (e.g. 2-electron integrals or their high order derivatives) with
multiple density matrices, with a small memory footprint.
The J/K contraction function also supports subsystem contraction, in
which the 4 indices of the 2-electron integrals are distributed over different segments
of the system which may or may not overlap with each other.
This subsystem contraction is particularly useful in two scenarios:
in fragment-based methods, where the evaluation of Coulomb or exchange energies
involves integral contraction over different fragments, and
in parallel algorithms, where one partitions the J/K contraction into
small segments and distributes them to different computing nodes.
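A minimal sketch of the J/K build for several density matrices at once; the random symmetric matrices below stand in for physical densities.

import numpy
from pyscf import gto, scf

mol = gto.M(atom='Ne 0 0 0', basis='cc-pvdz')
nao = mol.nao_nr()
dms = numpy.random.rand(2, nao, nao)
dms = dms + dms.transpose(0, 2, 1)          # symmetrize the stand-in densities
vj, vk = scf.hf.get_jk(mol, dms, hermi=1)   # J and K for both densities at once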
§.§ General integral transformations
Integral transformations are another fundamental operation found in quantum
chemistry programs. A common kind of integral transformation is to transform the 4 indices of the
2-electron integrals by 4 sets of different orbitals.
To satisfy this need, we designed a general integral transformation function to
handle the arbitrary AO integrals provided by the Libcint library and arbitrary kinds of orbitals.
To reduce disk usage, we use permutation symmetry over i and j, k and l in
(ij|kl) whenever possible for real integrals.
Integral transformations involve high computational and I/O costs.
A standard approach to reduce these costs involves precomputation to reduce integral costs and data
compression to increase I/O throughput. However, we have not adopted such an
optimization strategy in our implementation because it is against the objective of simplicity
for the PySCF package. In our implementation, initialization is not
required for the general integral transformation function. Similarly to the AO integral API, the
integral transformation can thus be launched with one line of Python code. In the
integral data structure, we store the transformed integrals by chunks in the HDF5
format without compression. This choice has two advantages. First, it allows
for fast indexing and hyperslab selection for subblocks of the integral array.
Second, the integral data can be easily accessed by other program packages
without any overhead for parsing the integral storage protocol.
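A minimal sketch of such a one-line transformation; here all four orbital sets are taken equal to the canonical MOs for brevity, and the result is written to an HDF5 file.

from pyscf import gto, scf, ao2mo

mol = gto.M(atom='H 0 0 0; H 0 0 0.74', basis='cc-pvdz')
mf = scf.RHF(mol).run()
c = mf.mo_coeff
ao2mo.general(mol, (c, c, c, c), 'mo_ints.h5')  # (pq|rs) stored in HDF5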
§.§ Density fitting
The density fitting (DF) technique is implemented for both finite-sized
systems and crystalline systems with periodic boundary conditions.
In finite-sized systems, one can use DF to approximate the J/K matrix and the MO
integrals for the HF, DFT and MP2 methods. To improve the performance of
the CIAH algorithm, one can use the DF orbital Hessian in the CIAH
orbital optimization for Edmiston-Ruedenberg localization and for the HF, DFT and
CASSCF algorithms.
In the PBC module, the 2-electron integrals are represented as the product of
two 3-index tensors which are treated as DF objects.
Based on the requirements of the system being modelled, we have developed various DF
representations.
When the calculation involves only smooth bases (typically with pseudopotentials),
plane waves are used as the auxiliary fitting functions and the DF 3-index
tensor is computed within a grid-based treatment using discrete fast Fourier transforms <cit.>.
When high accuracy in all-electron calculations is required,
a mixed density fitting technique is invoked in which the fitting functions are
Gaussian functions plus plane waves.
Besides the choice of fitting basis, different metrics (e.g. overlap, kinetic, or Coulomb) can be used
in the fitting to balance performance against computational accuracy.
The 3-index DF tensor is stored as a giant array in the HDF5 format without compression.
With this design, it is straightforward to access the 2-electron integrals
with the functions of the PySCF package.
Moreover, it allows us to supply 2-electron integrals to calculations by
overloading the DF object in cases where direct storage of the 4-index integrals in memory or on disk is infeasible
(see discussion in Section <ref>).
§.§ Custom Hamiltonians
Most quantum chemistry approximations are not tied to the details of the ab initio molecular or periodic Hamiltonian.
This means that they can also be used with arbitrary model Hamiltonians,
which is of interest for semi-empirical quantum chemistry calculations
as well as condensed-matter model studies.
In PySCF, overwriting the predefined Hamiltonian is straightforward.
The Hamiltonian is an attribute of the mean-field calculation object.
Once the 1-particle and 2-particle integral attributes of the mean-field object are
defined, they are used by the mean-field calculation and all subsequent
post-Hartree-Fock correlation treatments.
Users can thus carry out correlated calculations with model Hamiltonians in exactly the same way as
with standard ab initio Hamiltonians.
Figure <ref> displays an example of how to input a model Hamiltonian.
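A minimal sketch along these lines for a 10-site half-filled Hubbard ring; the attribute names used (get_hcore, get_ovlp, _eri) are those of the mean-field object described above.

import numpy
from pyscf import gto, scf, ao2mo

n, U = 10, 4.0
mol = gto.M()
mol.nelectron = n
mol.incore_anyway = True
h1 = numpy.zeros((n, n))
for i in range(n - 1):
    h1[i, i + 1] = h1[i + 1, i] = -1.0
h1[0, n - 1] = h1[n - 1, 0] = -1.0          # periodic boundary
eri = numpy.zeros((n, n, n, n))
for i in range(n):
    eri[i, i, i, i] = U
mf = scf.RHF(mol)
mf.get_hcore = lambda *args: h1
mf.get_ovlp = lambda *args: numpy.eye(n)
mf._eri = ao2mo.restore(8, eri, n)
mf.kernel()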
§.§ Interfaces to external programs
PySCF can be used either as the driver to execute external programs or as
an independent solver to use as part of a computational workflow involving other software.
In PySCF, the DMRG programs Block<cit.> and
CheMPS2<cit.> and the FCIQMC program NECI<cit.> can be
used as a replacement for the FCI routine for large active spaces in the
CASCI/CASSCF solver.
In the QM/MM interface, by supplying the charges and the positions of the MM
atoms, one can compute the HF, DFT, MP2, CC, CI and MCSCF energies and their
analytic nuclear gradients.
To communicate with other quantum chemistry programs,
we provide utility functions to read and write Hamiltonians in the
Molpro<cit.> format, and arbitrary orbitals in the
Molden<cit.> format.
The program also supports writing the SCF solution and CI wavefunction in the
GAMESS<cit.> format and reading orbitals from
Molpro XML output.
The real space electron density can be output on cubic grids in the
Gaussian<cit.> cube format.
§.§ Numerical tools
Although the Numpy and Scipy libraries provide a wide range of
numerical tools for scientific computing, there are some numerical components
commonly found in quantum chemistry algorithms that are not provided by these
libraries. For example, the direct inversion in the iterative
subspace (DIIS) method<cit.> is one of the most commonly used
tools in quantum chemistry
to speed up optimizations when a second order algorithm is not
available.
In PySCF we provide a general DIIS handler for an object array of
arbitrary size and arbitrary data type. In the current implementation,
it supports DIIS optimization both with or without supplying the error vectors.
For the latter case, the differences between the arrays of adjacent iterations are minimized.
Large scale eigenvalue and linear equation solvers are also
common components of many quantum chemistry methods.
The Davidson diagonalization algorithm and Arnoldi/Krylov subspace solver
are accessible in PySCF through simple APIs.
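A minimal sketch of both helpers, assuming the lib.diis.DIIS and lib.davidson entry points; the optimized and diagonalized quantities are random stand-ins.

import numpy
from pyscf import lib

diis = lib.diis.DIIS()
x = numpy.random.rand(10)
x_new = diis.update(x)          # extrapolate without an explicit error vector

a = numpy.random.rand(100, 100)
a = a + a.T                     # symmetric stand-in matrix
aop = lambda v: a.dot(v)        # matrix-vector product
precond = lambda r, e0, x0: r / (a.diagonal() - e0)   # diagonal preconditioner
e, c = lib.davidson(aop, numpy.random.rand(100), precond)  # lowest eigenpair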
§ DESIGN AND IMPLEMENTATION OF PYSCF
While we have tried to provide rich functionality for quantum chemical simulations
with the built-in functions of the PySCF package, it will
nonetheless often be the case that a user's needs are not covered by the built-in functionality.
A major design goal has been to implement PySCF in a sufficiently flexible way so
that users can easily extend its functionality.
To provide robust components for complex problems and non-trivial workflows,
we have made the following general design choices in
PySCF:
* Language: Mostly Python, with a little C. We believe that it is easiest to
develop and test new functionality in Python. For this reason, most functions in
PySCF are written in pure Python. Only a few computational hot spots have been
rewritten and optimized in C.
* Style: Mostly functional, with a little object-oriented programming (OOP).
Although OOP is a successful and widely used programming paradigm, we feel that it is
hard for users to customize typical OOP programs without learning details of the
object hierarchy and interfaces. We have adopted a functional programming style, where
most functions are pure, and thus can be invoked alone and independently of each other.
This allows users to mix functionality with a minimal knowledge of the
PySCF internals.
We elaborate on these choices below.
§.§ Input language
Almost every quantum chemistry package today uses its own custom input language.
This is a burden to the user, who must become
familiar with a new domain-specific language for every new package.
In contrast, PySCF does not have an input language. Rather, the functionality
is simply called from an input script written in the host Python language.
This choice has clear benefits:
* There is no need to learn a domain-specific language.
Python, as a general programming language, is already widely used for
numerical computing, and is taught in modern computer science courses.
For novices, the language is easy to learn and help is readily available from the
large Python community.
* One can use all Python language features in the input script.
This allows the input script to implement complex logic and computational
workflows, and to carry out tasks (e.g. data processing and plotting) in the same
script as the electronic structure simulation (see Figure <ref> for an
example).
* The computational environment is easily extended beyond that provided by the PySCF package.
The PySCF package is a regular Python module
which can be mixed and matched with other Python modules to build a personalized
computing environment.
* Computing can be carried out interactively. Simulations can be tested, debugged, and executed step by step within
the Python interpreter shell.
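A minimal sketch of such a script-driven workflow; the bond-length scan below is illustrative, and a plotting step (e.g. with matplotlib) could be appended in the same file.

from pyscf import gto, scf

energies = []
for r in [0.6, 0.7, 0.74, 0.8, 0.9, 1.0]:
    mol = gto.M(atom='H 0 0 0; H 0 0 %.2f' % r, basis='cc-pvdz')
    energies.append(scf.RHF(mol).kernel())
print(energies)   # post-process or plot in the same script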
§.§ Enabling interactive computing
As discussed above, a strength of the PySCF package is that its functionality
can be invoked from the interactive Python shell. However, maximizing its usability
in this interactive mode entails additional design optimizations.
There are three critical considerations to facilitate such interactive computations:
* The functions and data need to be easy to access;
* Functions should be insensitive to execution order (when and how many times
a function is called should not affect the result);
* Computations should not cause (significant) halts in the interactive shell.
To address these requirements, we have enforced the following design rules
wherever possible in the package:
* Functions are pure (i.e. state free). This ensures that they are insensitive to execution order;
* Method objects (classes) only hold results and control parameters;
* There is no initialization of functions, or at most a short initialization chain;
* Methods are placed at both the module level and within classes so that
the methods and their documentation can be easily accessed by the
interactive shell (see Figure <ref>).
A practical solution to eliminate halting of the interactive shell is to overlap the REPL
(read-eval-print-loop) and task execution. Such task parallelism requires
the underlying tasks to be independent of each other. Although certain dependence between methods
is inevitable, the above design rules greatly reduce function call dependence. Most functions in
PySCF can be safely placed in the background using the standard Python threading and multiprocessing libraries.
§.§ Methods as plugins
Ease-of-use is the primary design objective of the PySCF package.
However, function simplicity and versatility are difficult to balance in
the same software framework.
To balance readability and complexity,
we have implemented only the basic algorithmic features in the main methods, and placed advanced
features in additional “plugins”.
For instance, the main mean-field module implements only the basic self-consistent loop.
Corrections (such as for relativistic effects) are implemented in an independent
plugin module, which can be activated by reassigning the mean-field 1-electron Hamiltonian
method at runtime.
Although this design increases the complexity of implementation of the plugin functions,
the core methods retain a clear structure and are easy to comprehend.
Further, this approach decreases the coupling between different features: for example,
independent features can be modified and tested independently and combined in calculations.
In the package, this plugin design has been widely used, for example, to enable molecular point group symmetry,
relativistic corrections, solvation effects, density fitting approximations,
the use of second-order orbital optimization, different variational active space solvers, and many
other features (Figure <ref>).
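A minimal sketch of such plugin composition by decorating a bare mean-field object; this particular chain (scalar-relativistic X2C, density fitting, second-order solver) is one possible combination, not a prescribed workflow.

from pyscf import gto, scf

mol = gto.M(atom='O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24', basis='cc-pvdz')
mf = scf.RHF(mol).x2c().density_fit().newton()   # compose three plugins
mf.kernel()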
§.§ Seamless MPI functionality
The Message Passing Interface (MPI) is the most popular parallel protocol in the
field of high performance computing. Although MPI provides high efficiency
for parallel programming, it is a challenge to develop a simple and efficient MPI
program. In compiled languages, the program must explicitly control
data communication according to the MPI communication protocol.
The most common design is to activate MPI communication from the beginning and
to update the status of the MPI communicator throughout the program.
When developing new methods, this often leads to extra effort in code
development and debugging.
To sustain the simplicity of the PySCF package, we have designed a
different mechanism to execute parallel code with MPI.
We use MPI to start the Python interpreter as a daemon to receive both the
functions and data on the remote nodes.
When a parallel session is activated, the master process sends to the remote Python
daemons both the functions and the data. The function is decoded remotely and then
executed.
This design allows one to develop code mainly in serial mode and to switch
to the MPI mode only when high performance is required. Figure <ref> shows an
example to perform a periodic calculation with and without a parallel session.
Compared to the serial-mode invocation, we see that the user only has to change the
density fitting object to acquire parallel functionality.
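A heavily hedged sketch of this switch; the mpi4pyscf extension module and its df API are assumptions about the parallel add-on, not part of the core package, and the diamond cell is illustrative.

import numpy
from pyscf.pbc import gto, scf
from mpi4pyscf.pbc import df as mpidf   # assumed parallel add-on

cell = gto.M(atom='C 0 0 0; C 0.89 0.89 0.89',
             a=numpy.eye(3) * 3.57,
             basis='gth-szv', pseudo='gth-pade')
mf = scf.RHF(cell)
mf.with_df = mpidf.FFTDF(cell)   # only the DF object changes vs. the serial run
mf.kernel()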
§ CONCLUSIONS
Python and its large collection of third party libraries are helping to revolutionize how
we carry out and implement numerical simulations. It is potentially much more productive
to solve computational problems within the Python ecosystem because it frees researchers
to work at the highest level of abstraction without worrying about the details of complex
software implementation. To bring all the benefits of the Python ecosystem to quantum
chemistry and electronic structure simulations, we have
started the open-source PySCF project.
PySCF is a simple, lightweight, and efficient computational chemistry program
package, which supports ab initio calculations for both molecular and extended systems.
The package serves as an extensible electronic structure toolbox, providing a large number
of fundamental operations with simple APIs to manipulate methods, integrals, and wave
functions. We have invested significant effort to ensure simplicity of use and
implementation while preserving competitive functionality and performance. We believe that
this package represents a new style of program and library design that will be
representative of future software developments in the field.
§ ACKNOWLEDGMENTS
QS would like to thank Junbo Lu and Alexander Sokolov for testing functionality and for
useful suggestions for the program package. The development of different components of
the PySCF package has been generously supported by several sources. Most of the
molecular quantum chemistry software infrastructure was developed with support from the US
National Science Foundation, through grants CHE-1650436 and ACI-1657286. The
periodic mean-field infrastructure was developed with support from ACI-1657286. The
excited-state periodic coupled cluster methods were developed with support from the US
Department of Energy, Office of Science, through the grants DE-SC0010530 and DE-SC0008624.
Additional support for the extended-system methods has been provided by the Simons
Foundation through the Simons Collaboration on the Many Electron Problem, a Simons
Investigatorship in Theoretical Physics, the Princeton Center for Theoretical Science, and
startup funds from Princeton University and the California Institute of Technology.
|
http://arxiv.org/abs/1701.07876v1 | 20170126211108 | Frustrated honeycomb-lattice bilayer quantum antiferromagnet in a magnetic field: Unconventional phase transitions in a two-dimensional isotropic Heisenberg model | [
"Taras Krokhmalskii",
"Vasyl Baliha",
"Oleg Derzhko",
"Jörg Schulenburg",
"Johannes Richter"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.stat-mech"
] |
Institute for Condensed Matter Physics,
National Academy of Sciences of Ukraine,
Svientsitskii Street 1, 79011 L'viv, Ukraine
Department for Theoretical Physics,
Ivan Franko National University of L'viv,
Drahomanov Street 12, 79005 L'viv, Ukraine
Institute for Condensed Matter Physics,
National Academy of Sciences of Ukraine,
Svientsitskii Street 1, 79011 L'viv, Ukraine
Institute for Condensed Matter Physics,
National Academy of Sciences of Ukraine,
Svientsitskii Street 1, 79011 L'viv, Ukraine
Institut für theoretische Physik,
Otto-von-Guericke-Universität Magdeburg,
P.O. Box 4120, 39016 Magdeburg, Germany
Department for Theoretical Physics,
Ivan Franko National University of L'viv,
Drahomanov Street 12, 79005 L'viv, Ukraine
Abdus Salam International Centre for Theoretical Physics,
Strada Costiera 11, 34151 Trieste, Italy
Universitätsrechenzentrum,
Otto-von-Guericke-Universität Magdeburg,
P.O. Box 4120, 39016 Magdeburg, Germany
Institut für theoretische Physik,
Otto-von-Guericke-Universität Magdeburg,
P.O. Box 4120, 39016 Magdeburg, Germany
75.10.-b,
75.10.Jm
We consider the spin-1/2 antiferromagnetic Heisenberg model on a bilayer honeycomb lattice including interlayer frustration
in the presence of an external magnetic field.
In the vicinity of the saturation field,
we map the low-energy states of this quantum system onto the spatial configurations of hard hexagons on a honeycomb lattice.
As a result,
we can construct effective classical models (lattice-gas as well as Ising models) on the honeycomb lattice
to calculate the properties of the frustrated quantum Heisenberg spin system in the low-temperature regime.
We perform classical Monte Carlo simulations for a hard-hexagon model and adopt known results for an Ising model
to discuss the finite-temperature order-disorder phase transition
that is driven by a magnetic field at low temperatures.
We also discuss an effective-model description around the ideal frustration case
and find indications for a spin-flop-like transition in the considered isotropic spin model.
Frustrated honeycomb-lattice bilayer quantum antiferromagnet in a magnetic field:
Unconventional phase transitions in a two-dimensional isotropic Heisenberg model
==========================================================================================================================================================================
§ INTRODUCTION
An important class of quantum Heisenberg antiferromagnets consists of the so-called two-dimensional dimerized quantum antiferromagnets.
They can be obtained by placing strongly antiferromagnetically interacting pairs of spins 1/2 (dimers) on a regular two-dimensional lattice
and assuming weak antiferromagnetic interactions between dimers.
Among such models one may mention the J-J^' model
with the staggered arrangement of the strong J^' bonds
(defining dimers and favoring singlet formation on dimers)
on a square lattice<cit.>
(see also Ref. valenti-eggert for related dimerized square-lattice models).
Other examples are the bilayer models:
They consist of two antiferromagnets in each layer with a dominant nearest-neighbor interlayer coupling which defines dimers.<cit.>
By considering additional frustrating interlayer couplings
the bilayer model can be pushed in the parameter space to a point
which admits a rather comprehensive analysis of the energy spectrum.<cit.>
For this special set of coupling parameters, the frustrated bilayer is a system with local conservation laws
(the square of the total spin of each dimer is a good quantum number)
that explains why it is much easier to examine this specific case.
On the other hand,
the frustrated bilayer belongs to the class of so-called localized-magnon spin systems,<cit.>
which exhibit some prominent features around the saturation field,
such as
a ground-state magnetization jump at the saturation field,
a finite residual entropy at the saturation field,
and
an unconventional low-temperature thermodynamics,
for a review see Refs. review1,review2,review3.
The singlet state of the dimer is the localized-magnon state which belongs to a completely dispersionless (flat) one-magnon band.
Over the last decade a large variety of flat-band systems with unconventional physical properties was found,
see Refs. other1,other2,other3 and references therein.
For the flat-band systems at hand,
the local nature of the one-magnon states allows to construct also localized many-magnon states
and to calculate their degeneracy by mapping the problem onto a classical hard-core-object lattice gas;
the case of the frustrated bilayer was discussed in Refs. prb2006,prb2010.
In the strong-field low-temperature regime
the independent localized-magnon states are the lowest-energy ones
and therefore they dominate the thermodynamics.
The thermodynamic properties in this regime can be efficiently calculated using classical Monte Carlo simulations for a lattice-gas problem.
Even in case of small deviations from the ideal flat-band geometry
a description which is based on the strong-coupling approach<cit.> can be elaborated.<cit.>
Again the effective theory is much simpler than that for the initial problem.
From the theoretical side, frustrated bilayer systems have been studied by several authors.
Thus,
the frustrated square-lattice bilayer quantum Heisenberg antiferromagnet was studied in Refs. lin,prb2006,prb2010,chen,albuquerque,murakami,alet,
whereas the honeycomb-lattice bilayer with frustration was studied in
Refs. oitmaa,zhang,bishop (intralayer frustration)
and
Refs. classical,brenig (interlayer frustration).
For the system to be examined in our paper,
i.e.,
the spin-1/2 antiferromagnetic Heisenberg model on a bilayer honeycomb lattice including interlayer frustration,
H. Zhang et al.<cit.> have determined the quantum phase diagram at zero magnetic field
for a rather general case of an arbitrary relation between the nearest-neighbor intralayer coupling and the frustrating interlayer coupling.
Another recent study reported in Ref. classical
concerns the antiferromagnetic classical Heisenberg model on a bilayer honeycomb lattice in a highly frustrated regime
in the presence of a magnetic field.
Its main result is the phase diagram of the model in the plane “magnetic field – temperature”.
However, this analysis cannot contain any hallmarks caused by the localized magnons,
since localized-magnon features represent a pure quantum effect which disappears in the classical limit.
From the experimental side,
one may mention several layered materials, which can be viewed as frustrated bilayer quantum Heisenberg antiferromagnets.
Thus,
the compound Ba_2CoSi_2O_6Cl_2 could be described as a two-dimensionally antiferromagnetically coupled spin-1/2 XY-like spin dimer system
in which Co^2+ sites form the frustrated square-lattice bilayer.<cit.>
The interest in the frustrated honeycomb-lattice bilayers stems from experiments on Bi_3Mn_4O_12(NO_3).<cit.>
In this compound,
the ions Mn^4+ form a frustrated spin-3/2 bilayer honeycomb lattice.<cit.>
Finally, let us mention that a bilayer honeycomb lattice can be realized using ultracold atoms.<cit.>
The present study has several goals.
Motivated by the recent paper of H. Zhang et al.,<cit.> we wish to extend it
to the case of nonzero magnetic field.
On the other hand, with our study we complement the analysis of the classical case<cit.> to the pure quantum case of s=1/2.
Finally,
the present study can be viewed as an extension of our previous calculations<cit.> to the honeycomb-lattice geometry.
Although we do not intend to provide a theoretical description of Bi_3Mn_4O_12(NO_3),
our results may be relevant for the discussion of the localized-magnon effects in this and in similar materials.
The outline of the paper is as follows.
Section <ref> contains the spectroscopic study of the frustrated honeycomb-lattice bilayer spin-1/2 Heisenberg antiferromagnet:
By exact diagonalization for finite quantum systems and direct calculations for finite hard-core lattice-gas systems
we show the correspondence between the ground states in the large-S^z subspaces and the spatial configurations of hard hexagons on an auxiliary honeycomb lattice.
Based on the established correspondence,
in Section <ref>
we report results of classical Monte Carlo simulations for hard hexagons on the honeycomb lattice
and use them to predict the properties of the frustrated honeycomb-lattice bilayer spin-1/2 Heisenberg antiferromagnet
in the strong-field low-temperature regime.
The most intriguing outcome is an order-disorder phase transition which is expected at low temperatures just below the saturation
field.
This transition is related to the ordering of the localized magnons on the two-sublattice honeycomb lattice as the density of the localized magnons increases.
Section <ref> deals with some generalization of the independent localized-magnon picture:
We show how to take into account the contribution of a low-lying set of other localized states
as well as discuss the effect of deviations from the ideal frustration
case.
We end with a summarizing discussion in Section <ref>.
Several technical details are put to the appendixes.
§ INDEPENDENT LOCALIZED-MAGNON STATES
In the present paper,
we consider the spin-1/2 Heisenberg antiferromagnet with the Hamiltonian
H=∑_⟨ i j⟩J_ijs_i·s_j -hS^z,
J_ij>0,
S^z=∑_i s_i^z
defined on the honeycomb-lattice bilayer shown in Fig. <ref>.
The first sum in Eq. (<ref>) runs over all bonds of the lattice
and hence J_ij acquires three values:
J_2 (dimer bonds),
J_1 (nearest-neighbor intralayer bonds),
and
J_ (frustrating interlayer bonds),
see Fig. <ref>.
In what follows
we consider the case J_=J_1 and call it the “ideal frustration case”
(or “ideal flat-band case”).
Only in Sec. <ref> we discuss deviations from the ideal frustration case,
i.e., J_ J_1.
Since the z component of the total spin S^z commutes with the Hamiltonian
we can consider the subspaces with different values of S^z separately.
In the strong-field regime the subspaces with large S^z are relevant.
The only state with S^z=N/2 is the fully polarized state
|…↑…⟩
with the energy
E_FM=N(J_2/8+3J_1/4).
In the subspace with S^z=N/2-1 (one-magnon subspace)
N eigenstates of H (<ref>) belong to four one-magnon bands,
E_FM+Λ_k^(α), α=1,2,3,4,
with the dispersion relations:
Λ_k^(1)=Λ_k^(2)
=-J_2-3J_1,
Λ_k^(3,4)
=-3J_1∓ J_1|γ_k|,
|γ_k|
=
√(3+2[cos k_a +cos k_b +cos(k_a+k_b)]).
Here
k=(k_x,k_y),
k_a=√(3)a_0 k_x,
k_b=3a_0k_y/2-√(3)a_0k_x/2,
where a_0 is the hexagon side length,
and k acquires 𝒩/2 values from the first Brillouin zone, where 𝒩=N/2 denotes the number of vertical dimers,
see Appendix A.
The 𝒩 states from the two flat bands α=1 and α=2 can be chosen as a set of localized states
where the spin flip is located on one of the 𝒩 vertical dimers,
see Fig. <ref>.
The remaining 𝒩 states
(i.e., from the two dispersive bands α=3 and α=4)
are extended over the whole lattice.
As can be seen from Eq. (<ref>),
the two-fold degenerate dispersionless (flat) one-magnon band becomes the lowest-energy one,
if J_2> 3J_1,
i.e., if the strength of the vertical bond J_2 is sufficiently large.
In what follows,
we assume that this inequality holds.
From the one-magnon spectra (<ref>)
we can also get the value of the saturation field:
h_sat=J_2+3J_1.
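As a quick numerical cross-check, the one-magnon spectra (<ref>) and the saturation field (<ref>) can be evaluated on a grid of k points; a minimal Python sketch (an illustration only, with the example couplings J_1=1, J_2=5 used below):

import numpy as np

J1, J2 = 1.0, 5.0                            # example couplings with J2 > 3*J1
L = 200                                      # grid of k-points in the Brillouin zone
ka, kb = np.meshgrid(np.linspace(0, 2*np.pi, L, endpoint=False),
                     np.linspace(0, 2*np.pi, L, endpoint=False))
gamma = np.sqrt(3 + 2*(np.cos(ka) + np.cos(kb) + np.cos(ka + kb)))   # |gamma_k|

lam_flat = -J2 - 3*J1                        # two-fold degenerate flat bands
lam_minus = -3*J1 - J1*gamma                 # lower dispersive band
print(lam_minus.min())                       # -6*J1 = -6.0, reached at k = 0
print(lam_flat < lam_minus.min())            # True precisely when J2 > 3*J1
print("h_sat =", J2 + 3*J1)                  # saturation field: 8.0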
We pass to the many-magnon ground states.
Because the localized one-magnon states have the lowest energy in the one-magnon subspace,
the ground states in the subspaces with S^z=N/2-n, n=2,…, n_max,
n_max=𝒩/2=N/4
can be obtained by populating the dimers.
However, for the ground-state manifold a hard-core constraint is valid,
i.e.,
the neighboring vertical dimers cannot be populated simultaneously,
since the occupation of nearest neighbors leads to an increase of the energy.
Thus, we arrive at the mapping onto a classical lattice-gas model of hard hexagons on an auxiliary honeycomb lattice:
Each ground state of the quantum spin model can be visualized as a spatial configuration of the hard hexagons on the honeycomb lattice
excluding the population of neighboring sites (hard-core rule),
see Fig. <ref>.
The occupation of neighboring sites,
excluded for the ground-state manifold at S^z=N/2-2,…, N/4,
provides another class of localized states
which can be visualized as overlapping hexagons on the honeycomb lattice:
These states were completely characterized in Refs. andreas and prb2010.
Each overlapping pair of hexagons
(i.e., occupation of neighboring dimers by localized magnons)
increases the energy by J_1.
If J_2/J_1 is sufficiently large,
the overlapping hexagon states are the lowest excited states in the subspaces with S^z=N/2-2,…,N/4,
but they are the ground states in the subspaces with lower S^z.
From exact-diagonalization data for N=24, 32, 36, 48
we determined the required values of J_2/J_1 as 3.687, 3.781, 3.813, 3.874, respectively:
For these values the first excited state in the subspace with S^z=N/2-2 are the overlapping hexagon states.
We check our statements on the character of the ground states and the excited states by comparison with exact-diagonalization data.
Clearly,
exact diagonalizations are restricted to finite lattices,
which are shown in Fig. <ref>.
We use the spinpack package <cit.>
and exploit the local symmetries to perform numerical exact calculations for large sizes of the Hamiltonian matrix.
The ground-state degeneracy coincides with the number of spatial configurations of hard hexagons on the honeycomb lattice for all considered cases,
see Table <ref>.
In Table <ref>
we also report the energy gap Δ to the first excited state and the degeneracy of the first excited state.
While in the one-magnon subspace we have Δ=J_2-3J_1, see Eq. (<ref>),
the energy gap in the subspaces S^z=N/2-n, n=2,…,𝒩/2-1 agrees with the conjecture
that for large enough J_2/J_1 ≳ 4 the first excited states are other localized-magnon states
for which two of the localized magnons are neighbors
(two hard hexagons overlap), see above.
Further evidence for this picture is provided by the value Δ=2J_1 for S^z=N/4:
The first excited state with respect to the localized-magnon-crystal state corresponds to three overlapping hard hexagons
resulting in an increase of energy by 2J_1
[see also Eq. (<ref>) in Appendix B].
The zero-temperature magnetization curve is shown by the thick solid red curve in Fig. <ref>.
The magnetization curve probes the ground-state manifold and it is in a perfect agreement with
the above described picture.
There are two characteristic fields,
h_2=J_2
and
h_sat=J_2+3J_1,
at which the ground-state magnetization curve has a jump.
To demonstrate the robustness of the main features of the magnetization curve against deviations from the ideal frustration case,
we also show the curve when J_⊥ slightly differs from J_1.
A more detailed discussion of this issue is then provided in Sec. <ref>.
In the next section we use the established correspondence between the spin model and the hard-hexagon model
to calculate the thermodynamic properties of the frustrated honeycomb-lattice bilayer quantum Heisenberg antiferromagnet
in the strong-field low-temperature regime.
§ HARD HEXAGONS ON THE HONEYCOMB LATTICE
The lowest eigenstates in the subspaces with large S^z become ground states for strong magnetic fields.
Thus,
the energy of these lowest eigenstates in the subspaces with S^z=N/2-n, n=0,1,…,n_max in the presence of the field h
is
E^lm_n(h)=E_FM-hN/2-(ϵ_1-h)n,
ϵ_1=J_2+3J_1 .
At the saturation field,
i.e., at h=h_sat=ϵ_1, all these energies
become independent of n,
E^lm_n(h_sat)=E_FM-ϵ_1 N/2.
Therefore, the system exhibits a huge ground-state degeneracy at
h_sat which grows exponentially with the system size N:<cit.>
W=∑_n=0^𝒩/2g_𝒩(n)≈exp(0.218 N), see Eq. (<ref>) below.
Here g_𝒩(n) denotes the degeneracy of the ground state for the 2𝒩-site frustrated honeycomb-lattice bilayer in the subspace with S^z=N/2-n.
Furthermore,
following Refs. mike and review2,
the contribution of the independent localized-magnon states to the partition function is given by the following formula:
Z_lm(T,h,N)=∑_n=0^𝒩/2g_𝒩(n)exp[-E^lm_n(h)/T].
Since g_𝒩(n)=Z_hc(n,𝒩) is the canonical partition function of n hard hexagons on the 𝒩-site honeycomb lattice,
Eq. (<ref>) can be rewritten as
Z_lm(T,h,N)=exp[-(E_FM-hN/2)/T] Ξ_hc(T,μ,𝒩),
Ξ_hc(T,μ,𝒩)=∑_n=0^𝒩/2Z_hc(n,𝒩)exp(μ n/T),
μ=ϵ_1-h.
As a result,
we get the following relations:
F_lm(T,h,N)/N
=
E_FM/N-h/2+1/2Ω_hc(T,μ,𝒩)/𝒩,
Ω_hc(T,μ,𝒩)
=
-TlnΞ_hc(T,μ,𝒩)
for the free energy per site f(T,h),
M_lm(T,h,N)/N
=
1/2+1/2∂/∂μΩ_hc(T,μ,𝒩)/𝒩
for the magnetization per site m(T,h),
S_lm(T,h,N)/N
=
1/2S_hc(T,μ,𝒩)/𝒩
for the entropy per site s(T,h),
C_lm(T,h,N)/N
=
1/2C_hc(T,μ,𝒩)/𝒩
for the specific heat per site c(T,h).
Note that h and μ are related by μ=h_sat-h.
The hard-hexagon quantities in the r.h.s. of these equations depend on the temperature and
the chemical potential only through the activity z=exp(μ/T).
That means that for the frustrated quantum spin system at hand all thermodynamic
quantities depend on temperature and magnetic field only via x=(h_sat-h)/T=ln z,
i.e., a universal behavior emerges in this regime.
To check the formulas for thermodynamic quantities given in Eqs. (<ref>) – (<ref>)
we compare the exact-diagonalization data with the predictions based on the hard-hexagon picture.
We set J_1=1, J_2=5 and perform exact-diagonalization calculations
for thermodynamics for the frustrated quantum spin system of N=24 sites,<cit.> see Fig. <ref>,
where the total size of the Hamiltonian matrix is already
16 777 216 × 16 777 216.
We also perform the simpler calculations for the corresponding hard-hexagon systems,
see Appendix B.
Our results for temperature dependences of the specific heat around the saturation are collected in Figs. <ref> and <ref>.
As can be seen from these plots,
the hard-hexagon picture perfectly reproduces the low-temperature features of the frustrated quantum spin model around the saturation field.
Deviations from the hard-core-model predictions in the upper panel of Fig. <ref> become visible only at T=0.2.
From the middle panel of Fig. <ref> one can conclude
that the temperature profiles for specific heat at h=7.95 and h=8.05 are well described by the hard-core model again up to about T=0.1.
Using the correspondence between the frustrated quantum spin model and the classical hard-core-object lattice-gas model,
we can give a number of predictions for the former model based on the analysis of the latter one.
For example,
we can calculate the ground-state entropy at the saturation field:
S(T→ 0,h=h_sat,N)/(2𝒩)
=lnΞ_hc(z=1,𝒩)/(2𝒩)≈ 0.218.
This number follows by direct calculations for finite lattices up to 𝒩=64 sites.
On the other hand,
for the problem of hard hexagons on a honeycomb lattice
κ=exp[lnΞ_hc(z=1,𝒩)/𝒩]= 1.546…
plays
the same role as
the hard-square entropy constant
κ=1.503048082…
for hard squares on the square lattice
or
the hard-hexagon entropy constant
κ=1.395485972…
for hard hexagons on the triangular lattice.<cit.>
Such constants determine the asymptotic growth and are also of interest to combinatorialists.
A more precise value of this constant for hard hexagons on a honeycomb lattice can be found in Ref. baxter2.
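For orientation (a finite-size illustration only, not a substitute for the precise value of Ref. baxter2), the 𝒩=12 grand partition function listed in Appendix B already gives a rough estimate of these numbers; a minimal Python sketch:

import numpy as np

coeff = [1, 12, 48, 76, 45, 12, 2]   # Xi_hc(z) of the 12-site lattice (Appendix B)
Xi_1 = sum(coeff)                    # Xi_hc(z=1) = 196
print(Xi_1**(1/12))                  # ~1.553: finite-size estimate of kappa = 1.546...
print(np.log(Xi_1)/(2*12))           # ~0.220: residual entropy per spin, cf. 0.218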
The most interesting consequence of the correspondence between the frustrated quantum bilayer and the hard-core lattice gas is the existence of an order-disorder phase transition.
It is generally known
that for the lattice-gas model on the honeycomb lattice with first neighbor exclusion
the
hard hexagons spontaneously occupy one of two sublattices of the honeycomb lattice
as the activity z exceeds the critical value z_c=7.92…, see Ref. baxter2.
In the spin language,
this corresponds to the ordering of the localized magnons as their density increases.
This occurs at low temperatures just below the saturation field.
For the fixed (small) deviation from the saturation field,
h_sat-h,
the formula for the critical temperature reads:
T_c=h_sat-h/ln z_c≈ 0.48(h_sat-h).
Furthermore,
the critical behavior falls into the universality class of the two-dimensional Ising model.<cit.>
That means, the specific heat at T_c (<ref>) shows a logarithmic singularity.
Of course, the calculated T_c (<ref>) must be small,
otherwise the elaborated effective low-energy theory fails,
see Figs. <ref> and <ref>.
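A minimal numerical illustration of the critical line (<ref>) (an illustration only, with the example couplings J_1=1, J_2=5, so h_sat=8):

import numpy as np

z_c = 7.92                                   # critical activity, Ref. baxter2
h_sat = 8.0                                  # J_2 + 3*J_1 for J_1 = 1, J_2 = 5
T_c = lambda h: (h_sat - h)/np.log(z_c)      # the critical-line formula above
print(T_c(7.9), T_c(7.5))                    # ~0.048 and ~0.242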
§ BEYOND INDEPENDENT LOCALIZED-MAGNON STATES
§.§ Other localized-magnon states
Following Ref. prb2010,
in addition to the independent localized-magnon states
(which obey the hard-hexagon rule)
we may also take into account another class of localized-magnon states which correspond to overlapping hexagon states
(i.e., they violate the hard-hexagon rule),
see also our discussion in Sec. <ref>.
The corresponding lattice-gas Hamiltonian has the form:
H({n_m})
=
-μ∑_m=1^𝒩n_m
+V∑_⟨ mn⟩ n_mn_n.
Here
n_m=0,1 is the occupation number attached to each site m=1,…,𝒩 of the auxiliary honeycomb lattice,
the first (second) sum runs over all sites (nearest-neighbor bonds) of this auxiliary lattice,
and μ=h_sat-h, V=J_1.
The interaction describes the energy increase if two neighboring sites are occupied by hexagons.
In the limit V→∞ the hard-core rule is restored.
The partition function is given by
Z_LM(T,h,N)
=
exp[-(E_FM-hN/2)/T] Ξ_lg(T,μ,𝒩),
Ξ_lg(T,μ,𝒩)
=∑_n_1=0,1…∑_n_𝒩=0,1exp[-H({n_m})/T].
Since Z_LM contains not only the contribution from independent localized-magnon states,
but also from overlapping localized-magnon states,
it is valid in a significantly wider region of magnetic fields and temperatures.
Evidently,
new Ising variables σ_m=2n_m-1 may be introduced in Eqs. (<ref>) and (<ref>)
and as a result we face the antiferromagnetic honeycomb-lattice Ising model in a uniform magnetic field:
H
=
𝒩(-μ/2+3V/8)
-Γ∑_m=1^𝒩σ_m
+J∑_⟨ mn⟩σ_mσ_n,
Γ=μ/2-3V/4,
J=V/4>0.
The Ising variable σ_m acquires two values ± 1,
the nearest-neighbor interaction J=J_1/4>0 is antiferromagnetic,
and the effective magnetic field Γ=(h_sat-h)/2 -3J_1/4=(J_2+3J_1/2-h)/2 is zero when h=J_2+3J_1/2.
The zero-field case (i.e., Γ=0) is exactly solvable,
see Ref. jozef-michal and references therein.
For example,
the critical temperature is known to be
T_c/J_1=1/[2ln (2+√(3))]≈ 0.380.
The ground-state antiferromagnetic order in the model (<ref>) survives at T=0
at small fields
|Γ|<3J, i.e.,
for h_2<h<h_sat, h_2=J_2, h_sat=J_2+3J_1.
The antiferromagnetic honeycomb-lattice Ising model in a uniform magnetic field
was a subject of several studies in the past.<cit.>
In particular,
several closed-form expressions for the critical line in the plane “magnetic field – temperature”
which are in good agreement with numerical results were obtained,
see Refs. honeycomb-ising1,honeycomb-ising2 and also Refs. honeycomb-ising3,honeycomb-ising4.
On the basis of these studies we can construct the phase diagram, see Fig. <ref>.
Here we have used the two closed-form expressions for the critical line of the antiferromagnetic Ising model in a magnetic field
suggested in Refs. honeycomb-ising1 and honeycomb-ising2,
where both are indistinguishable in the scale used in Fig. <ref>.
Although the two-dimensional Ising model in a field has not been solved
analytically,
the results of Refs. honeycomb-ising1,honeycomb-ising2 are known to be very accurate.<cit.>
In Fig. <ref>
the temperature profiles for the specific heat in a wide range of magnetic fields
are shown.
The comparison with the exact-diagonalization data demonstrates a clear improvement of the hard-hexagon description
after using the lattice-gas model (<ref>).
Furthermore,
in the lower panel of Fig. <ref> we report classical Monte Carlo data for h=5.05
[lattice-gas model (<ref>)]
which shows how the temperature profile C(T,h,N)/N modifies and develops a singularity
as the system size increases
[see C(T,h,N)/N for N=288^2 in the lower panel of Fig. <ref>].
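The lattice-gas model (<ref>) is straightforward to simulate; a minimal single-spin-flip Metropolis sketch in Python (a toy illustration only, not the production code behind Fig. <ref>; here we take Γ=0, i.e., h=J_2+3J_1/2=6.5 for J_1=1, J_2=5, where T_c/J_1≈0.380 is known exactly, see Eq. (<ref>)):

import numpy as np

rng = np.random.default_rng(1)
L, V = 24, 1.0                               # L x L unit cells, V = J_1 = 1
h_sat, h = 8.0, 6.5                          # h = J_2 + 3*J_1/2, i.e. Gamma = 0
mu = h_sat - h                               # chemical potential of Eq. (<ref>)
n = rng.integers(0, 2, size=(2, L, L))       # occupations on sublattices A (0), B (1)

def nbr_sum(occ, s, x, y):
    # the three honeycomb neighbors of site (s, x, y), periodic boundaries
    if s == 0:
        return occ[1, x, y] + occ[1, (x - 1) % L, y] + occ[1, x, (y - 1) % L]
    return occ[0, x, y] + occ[0, (x + 1) % L, y] + occ[0, x, (y + 1) % L]

def sweep(occ, T):
    for _ in range(2*L*L):
        s, x, y = rng.integers(2), rng.integers(L), rng.integers(L)
        dn = 1 - 2*occ[s, x, y]
        dE = -mu*dn + V*dn*nbr_sum(occ, s, x, y)     # energy change from Eq. (<ref>)
        if dE <= 0 or rng.random() < np.exp(-dE/T):
            occ[s, x, y] += dn

for T in (0.5, 0.25):                        # above/below T_c ~ 0.380 at Gamma = 0
    m = n.copy()
    for _ in range(2000):                    # short runs, no averaging: sketch only
        sweep(m, T)
    print(T, abs(m[0].mean() - m[1].mean())) # sublattice order parameter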
§.§ Deviation from the ideal flat-band geometry
Following Ref. around,
we can consider an effective low-energy description
when the flat-band conditions are slightly violated and the former flat band acquires a small dispersion.
To this end,
we assume that the intralayer nearest-neighbor interaction J_1 and the interlayer frustrating interaction J_⊥ are different,
but the difference is small, |J_1-J_⊥|/J_2≪ 1.
Then in the strong-field low-temperature regime there are two relevant states at each dimer:
| u⟩=|↑↑⟩
and
| d⟩=(|↑↓⟩ - |↓↑⟩)/√(2).
Their energies,
ϵ_u=J_2/4-h
and
ϵ_d=-3J_2/4,
coincide at h=h_0=J_2.
Now the 2^N-fold degenerate ground-state manifold is splitted by the perturbation,
which consists of the Zeeman term -(h-h_0)∑_i s_i^z
and the interdimer interactions with the coupling constants J_1 and J_⊥.
The effective Hamiltonian acting in the ground-state manifold can be found perturbatively:<cit.>
H_eff=PHP+…,
where P=|φ_0⟩⟨φ_0| is the projector onto the ground-state manifold,
|φ_0⟩=∏_m=1^N|v⟩,
where |v⟩ is either the state | u⟩ or the state | d⟩.
After some straightforward calculations and introducing the (pseudo)spin-1/2 operators
T^z=(| u⟩⟨ u| - | d⟩⟨ d|)/2,
T^+= | u⟩⟨ d|,
T^-= | d⟩⟨ u|
at each vertical bond
we arrive at the following result:
H_eff
=
𝒩(-h/2-J_2/4+3J/8)
-h̃∑_m=1^𝒩T_m^z
+∑_⟨ mn⟩[J^z T_m^zT_n^z+J^xy(T_m^xT_n^x+T_m^yT_n^y)],
h̃=h-J_2-3J/2,
J=(J_1+J_⊥)/2,
J^z=J,
J^xy=J_1-J_⊥.
The second sum in Eq. (<ref>) runs over all 3𝒩/2 nearest-neighbor bonds of the auxiliary honeycomb lattice.
Note that the sign of the coupling constant J is not important, since the
auxiliary-lattice model (<ref>) is bipartite.
Again the effective Hamiltonian (<ref>)
which corresponds to the spin-1/2 XXZ Heisenberg model in a z-aligned field on the honeycomb lattice
is much simpler than the initial model and it can be studied further by, e.g., the
quantum Monte Carlo method.<cit.>
For the ideal flat-band geometry (ideal frustration case) the effective Hamiltonian (<ref>) transforms into the above discussed lattice-gas or Ising models.
To make this evident we have to take into account that
J=J_1=V,
h̃=h-h_sat+3J_1/2=-μ+3V/2,
J^z=J_1=V,
J^xy=0,
and replace T_m^z by -σ_m/2:
H_eff
=
𝒩(-h/2-J_2/4+3V/8)
-(μ/2-3V/4)∑_m=1^𝒩σ_m
+V/4∑_⟨ mn⟩σ_mσ_n
=E_FM-h𝒩
+𝒩(-μ/2 +3V/8)
-(μ/2-3V/4)∑_m=1^𝒩σ_m
+V/4∑_⟨ mn⟩σ_mσ_n,
cf. Eqs. (<ref>) and (<ref>).
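The mapping can be condensed into a few lines; a minimal Python sketch (an illustration only) that returns (h̃, J, J^z, J^xy) and verifies that the ideal frustration case J_⊥=J_1 yields J^xy=0, so that Eq. (<ref>) indeed reduces to the Ising model (<ref>):

def effective_couplings(J1, Jperp, J2, h):
    # bilayer couplings -> XXZ parameters of the effective Hamiltonian above
    J = (J1 + Jperp)/2
    return (h - J2 - 3*J/2,                  # effective field h~
            J,                               # J
            J,                               # J^z = J
            J1 - Jperp)                      # J^xy

# ideal frustration case: J^xy = 0 and h~ = 0 at h = J_2 + 3*J_1/2, i.e. Gamma = 0
print(effective_couplings(J1=1.0, Jperp=1.0, J2=5.0, h=6.5))   # (0.0, 1.0, 1.0, 0.0)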
To illustrate the quality of the effective description,
we compare the results for the ground-state magnetization curve obtained by exact diagonalization
for the initial model of N=32 sites
(thin solid curves in Fig. <ref>)
and
for the effective model of N=16 sites
(thin dashed curves in Fig. <ref>).
It is worth noting the symmetry present in the Hamiltonian (<ref>):
If one replaces
h=J_2+3J+δ h by h=J_2- δ h
and
all T_m^z by -T_m^z
the Hamiltonian (<ref>) (up to the constant) remains the same.
This symmetry of the effective model is also present in the exact-diagonalization
data for the initial model,
if deviations from the flat-band geometry are small,
see the thin solid black curve and the thin solid brown curve in Figs. <ref> and <ref>.
Moreover, it is also obvious in the lattice-gas Hamiltonian (<ref>):
After the replacement
μ=δμ by μ=3J_1-δμ
and
all n_m by 1-n_m
the Hamiltonian (<ref>) (up to the constant) remains the same.
As can be seen in Fig. <ref>,
the magnetization jumps survive even for moderate deviations from the ideal frustration
case.
The nature of the jump is evident from the effective model (<ref>):
It is a spin-flop transition,
which is present in a two-dimensional Ising-like XXZ Heisenberg antiferromagnet in an external field along the easy axis,
see, e.g., Refs. spin-flop-yunoki,spin-flop-selke,spin-flop.
Note that according to Eq. (<ref>)
the effective easy-axis XXZ model becomes isotropic for J_1+J_⊥=2(J_1-J_⊥),
i.e., the spin-flop transition disappears as the deviation from the ideal frustration case J_1=J_⊥ increases.
Although, we are not aware of previous studies of the spin-flop transition for
the honeycomb-lattice spin-1/2 XXZ model
(and such a study is beyond the scope of the present paper),
we may mention here that the square-lattice case was examined in Ref. spin-flop-yunoki.
In particular,
one may find there the dependences of the height of the magnetization jump and of the
transition field on the anisotropy.
Furthermore, for temperature effects, see Ref. spin-flop-selke.
Supposing, that for the honeycomb-lattice case the same scenario as for the square-lattice case is valid,
we may expect that the spin-flop-like transition in our model only disappears at the isotropic point J_1+J_⊥=2(J_1-J_⊥).
Our quantum Monte Carlo data shown in the inset of Fig. <ref> support this conclusion.
Let us complete this section with a general remark on effective models around the ideal flat-band geometry (the ideal frustration case).
Recalling the findings of Ref. around,
where several localized-magnon systems including the square-kagome model were examined,
we conclude that the effective model around the ideal flat-band geometry essentially depends on the universality class of the localized-magnon system.
(For a comprehensive discussion of the various universality classes of localized-magnon
systems, see Ref. univ_class.)
While for the square-kagome model falling into the monomer universality class we
obtained the (pseudo)spin-1/2 XXZ models with easy-plane anisotropy,<cit.>
for the considered frustrated honeycomb-lattice bilayer model,
which belongs to a hard-hexagon universality class,
we get the (pseudo)spin-1/2 XXZ models with easy-axis anisotropy.
Clearly, the magnitude of the Ising terms in the effective Hamiltonian are related to the specific hard-core rules.
§ CONCLUSIONS
In this paper
we examine
the low-temperature properties of the frustrated honeycomb-lattice bilayer spin-1/2 Heisenberg antiferromagnet in a magnetic field.
For the considered model,
when the system has local conservation laws,
it is possible to construct a subset of 2^𝒩 eigenstates
(𝒩=N/2)
of the Hamiltonian and to calculate their contribution to thermodynamics.
For sufficiently strong interlayer coupling,
these states are low-energy ones for strong and intermediate fields
and therefore they dominate the thermodynamic properties.
The most interesting features of the studied frustrated quantum spin model are:
The magnetization jumps as well as wide plateaus,
the residual ground-state entropy,
the extra low-temperature peak in the temperature dependence of the specific heat around the saturation,
and the finite-temperature order-disorder phase transition of the two-dimensional Ising-model universality class.
The phase transition occurs just below the saturation field h_sat.
However, for large enough J_2/J_1,
there is a line of phase transitions which occur below T_c/J_1=1/[2ln (2+√(3))]≈ 0.380
for h in the region between h_2=J_2 and h_sat=J_2+3J_1.
Finally,
for deviations from the ideal frustration case we observe for the
isotropic Heisenberg model at hand magnetization jumps which can be understood
as spin-flop like transitions.
There might be some relevance of our study for the magnetic compound
Bi_3Mn_4O_12(NO_3).
The most intriguing question is:
Can the phase diagram from Fig. <ref> be observed experimentally?
First, the exchange couplings for Bi_3Mn_4O_12(NO_3) are still under debate<cit.> but the relation J_2/J_1≈ 2 looks plausible.
In this case the flat band is not the lowest-energy one, see Eq. (<ref>).
Second, the spin value is s=3/2 for this compound
(each Mn^4+ ion carries a spin s=3/2)
and the localized-magnon effects are less pronounced in comparison with the s=1/2
case.
For example, the magnitude of the ground-state magnetization jump at the saturation is still
𝒩/2, but this magnitude is only 1/6 of the saturation value
(in contrast to 1/2 of the saturation value for the s=1/2 case).
Thus, further studies on this compound are needed
to clarify the relation to the localized-magnon scenario presented in our paper.
§ ACKNOWLEDGMENTS
The present study was supported by the Deutsche Forschungsgemeinschaft (project RI615/21-2).
O. D. acknowledges the kind hospitality of the University of Magdeburg in October-December of 2016.
The work of T. K. and O. D. was partially supported by Project FF-30F (No. 0116U001539) from the Ministry of Education and Science of Ukraine.
O. D. would like to thank the Abdus Salam International Centre for Theoretical Physics (Trieste, Italy)
for partial support of these studies through the Senior Associate award.
00
jjprime
R. R. P. Singh, M. P. Gelfand, and D. A. Huse,
Phys. Rev. Lett. 61, 2484 (1988);
S. E. Krüger, J. Richter, J. Schulenburg, D. J. J. Farnell, and R. F. Bishop,
Phys. Rev. B 61, 14607 (2000);
S. Wenzel, L. Bogacz, and W. Janke,
Phys. Rev. Lett. 101, 127202 (2008);
L. Fritz, R. L. Doretto, S. Wessel, S. Wenzel, S. Burdin, and M. Vojta,
Phys. Rev. B 83, 174416 (2011);
J. Richter and O. Derzhko,
to appear in Eur. J. Phys.
[arXiv:1611.09655].
valenti-eggert
U. Tutsch, B. Wolf, S. Wessel, L. Postulka, Y. Tsui, H. O. Jeschke, I. Opahle, T. Saha-Dasgupta, R. Valenti, A. Brühl,
K. Remović-Langer, T. Kretz, H.-W. Lerner, M. Wagner, and M. Lang,
Nature Communications 5, 5169 (2014);
D. Straßel, P. Kopietz, and S. Eggert,
Phys. Rev. B 91, 134406 (2015).
bilayer
K. Hida,
J. Phys. Soc. Jpn. 59, 2230 (1990);
L. Wang, K. S. D. Beach, and A. W. Sandvik,
Phys. Rev. B 73, 014431 (2006);
R. Ganesh, D. N. Sheng, Y.-J. Kim, and A. Paramekanti,
Phys. Rev. B 83, 144414 (2011);
R. Ganesh, S. V. Isakov, and A. Paramekanti,
Phys. Rev. B 84, 214412 (2011).
andreas
A. Honecker, F. Mila, and M. Troyer,
Eur. Phys. J. B 15, 227 (2000).
prl2002
J. Schulenburg, A. Honecker, J. Schnack, J. Richter, and H.-J. Schmidt,
Phys. Rev. Lett. 88, 167207 (2002).
review1
J. Richter,
Fizika Nizkikh Temperatur (Kharkiv) 31, 918 (2005)
[Low Temperature Physics 31, 695 (2005)].
review2
O. Derzhko, J. Richter, A. Honecker, and H.-J. Schmidt,
Fizika Nizkikh Temperatur (Kharkiv) 33, 982 (2007)
[Low Temperature Physics 33, 745 (2007)].
review3
O. Derzhko, J. Richter, and M. Maksymenko,
Int. J. Mod. Phys. B 29, 1530007 (2015).
other1
A. Mielke, J. Phys. A 24, L73 (1991);
A. Mielke, J. Phys. A 24, 3311 (1991);
A. Mielke, J. Phys. A 25, 4335 (1992);
A. Mielke, Phys. Lett. A 174, 443 (1993);
H. Tasaki, Phys. Rev. Lett. 69, 1608 (1992);
A. Mielke and H. Tasaki, Commun. Math. Phys. 158, 341 (1993);
H. Tasaki, J. Phys.: Condens. Matter 10, 4353 (1998);
H. Tasaki, Prog. Theor. Phys. 99, 489 (1998).
other2
D. Leykam, S. Flach, O. Bahat-Treidel, and A. S. Desyatnikov,
Phys. Rev. B 88, 224203 (2013);
S. Flach, D. Leykam, J. D. Bodyfelt, P. Matthies, and A. S. Desyatnikov,
EPL 105, 10001 (2014);
W. Maimaiti, A. Andreanov, H. C. Park, O. Gendelman, and S. Flach,
arXiv:1610.02970.
other3
L. Morales-Inostroza and R. A. Vicencio,
Phys. Rev. A 94, 043831 (2016).
prb2006
J. Richter, O. Derzhko, and T. Krokhmalskii,
Phys. Rev. B 74, 144430 (2006).
prb2010
O. Derzhko, T. Krokhmalskii, and J. Richter,
Phys. Rev. B 82, 214412 (2010);
O. Derzhko, T. Krokhmalskii, and J. Richter,
Teoret. Mat. Fiz. 168, 441 (2011)
[Theoretical and Mathematical Physics 168, 1236 (2011)].
aa
A. Honecker and A. Läuchli,
Phys. Rev. B 63, 174407 (2001).
around
O. Derzhko, J. Richter, O. Krupnitska, and T. Krokhmalskii,
Phys. Rev. B 88, 094426 (2013);
J. Richter, O. Krupnitska, T. Krokhmalskii, and O. Derzhko,
J. Magn. Magn. Mater. 379, 39 (2015).
lin
H. Q. Lin and J. L. Shen,
J. Phys. Soc. Jpn. 69, 878 (2000).
chen
P. Chen, C. Y. Lai, and M. F. Yang,
Phys. Rev. B 81, 020409(R) (2010).
albuquerque
A. F. Albuquerque, N. Laflorencie, J. D. Picon, and F. Mila,
Phys. Rev. B 83, 174421 (2011).
murakami
Y. Murakami, T. Oka, and H. Aoki,
Phys. Rev. B 88, 224404 (2013).
alet
F. Alet, K. Damle, and S. Pujari,
Phys. Rev. Lett. 117, 197203 (2016).
oitmaa
J. Oitmaa and R. R. P. Singh,
Phys. Rev. B 85, 014428 (2012).
zhang
H. Zhang, M. Arlego, and C. A. Lamas,
Phys. Rev. B 89, 024403 (2014).
bishop
R. F. Bishop and P. H. Y. Li,
arXiv:1611.03287.
classical
F. A. Gómez Albarracin and H. D. Rosales,
Phys. Rev. B 93, 144413 (2016).
brenig
H. Zhang, C. A. Lamas, M. Arlego, and W. Brenig,
Phys. Rev. B 93, 235150 (2016).
tanaka
H. Tanaka, N. Kurita, M. Okada, E. Kunihiro, Y. Shirata, K. Fujii, H. Uekusa, A. Matsuo, K. Kindo, and H. Nojiri,
J. Phys. Soc. Jpn. 83, 103701 (2014).
smirnova
O. Smirnova, M. Azuma, N. Kumada, Y. Kusano, M. Matsuda, Y. Shimakawa, T. Takei, Y. Yonesaki, and N. Kinomura,
J. Am. Chem. Soc. 131, 8313 (2009);
S. Okubo, F. Elmasry, W. Zhang, M. Fujisawa, T. Sakurai, H. Ohta, M. Azuma, O. A. Sumirnova, and N. Kumada,
J. Phys.: Conf. Ser. 200, 022042 (2010);
M. Matsuda, M. Azuma, M. Tokunaga, Y. Shimakawa, and N. Kumada,
Phys. Rev. Lett. 105, 187201 (2010).
kandpal
H. C. Kandpal and J. van den Brink,
Phys. Rev. B 83, 140412(R) (2011).
cold_atoms
S. Dey and R. Sensarma,
Phys. Rev. B 94, 235107 (2016).
spin
J. Richter and J. Schulenburg,
Eur. Phys. J. B 73, 117 (2010);
https://www-e.uni-magdeburg.de/jschulen/spin/
prb2004
O. Derzhko and J. Richter,
Phys. Rev. B 70, 104415 (2004).
mike
M. E. Zhitomirsky and H. Tsunetsugu,
Phys. Rev. B 70, 100403(R) (2004);
M. E. Zhitomirsky and H. Tsunetsugu,
Prog. Theor. Phys. Suppl. 160, 361 (2005).
baxter
R. J. Baxter, I. G. Enting, and S. K. Tsang,
J. Stat. Phys. 22, 465 (1980);
R. J. Baxter and S. K. Tsang,
J. Phys. A 13, 1023 (1980);
R. J. Baxter,
Exactly Solved Models in Statistical Mechanics
(Academic Press, New York, 1982).
baxter2
R. J. Baxter,
Ann. Comb. 3, 191 (1999)
[arXiv:cond-mat/9811264].
debierre
J.-M. Debierre and L. Turban,
Phys. Lett. A 97, 235 (1983);
L. Turban and J.-M. Debierre,
Phys. Lett. A 103, 81 (1984).
jozef-michal
J. Strečka and M. Jaščur,
Acta Physica Slovaca 65, 235 (2015).
honeycomb-ising1
F. Y. Wu, X. N. Wu, and H. W. J. Blöte,
Phys. Rev. Lett. 62, 2773 (1989); 63, 696 (1989).
honeycomb-ising2
X.-Z. Wang and J. S. Kim,
Phys. Rev. Lett. 78, 413 (1997);
X.-Z. Wang and J. S. Kim,
Phys. Rev. E 56, 2793 (1997).
honeycomb-ising3
S.-Y. Kim,
Phys. Lett. A 358, 245 (2006).
honeycomb-ising4
S. L. A. de Queiroz,
Phys. Rev. E 87, 024102 (2013).
fulde
P. Fulde,
Electron Correlations in Molecules and Solids
(Springer-Verlag, Berlin, Heidelberg, 1993),
p. 77.
mila
F. Mila and K. P. Schmidt,
in
Introduction to Frustrated Magnetism,
Springer Series in Solid-State Sciences 164,
edited by C. Lacroix, P. Mendels, and F. Mila
(Springer-Verlag, Berlin, Heidelberg, 2011),
pp. 537-559.
alps
A. F. Albuquerque et al. (ALPS collaboration),
J. Magn. Magn. Mater. 310, 1187 (2007);
B. Bauer et al. (ALPS collaboration),
J. Stat. Mech. P05001 (2011).
spin-flop-yunoki
S. Yunoki,
Phys. Rev. B 65, 092402 (2002).
spin-flop-selke
M. Holtschneider, S. Wessel, and W. Selke,
Phys. Rev. B 75, 224417 (2007).
spin-flop
K. Balamurugan, S.-H. Lee, J.-S. Kim, J.-M. Ok, Y.-J. Jo, Y.-M. Song, S.-A. Kim, E. S. Choi, M. D. Le, and J.-G. Park,
Phys. Rev. B 90, 104412 (2014).
univ_class
O. Derzhko and J. Richter,
Eur. Phys. J. B 52, 23 (2006).
§ APPENDIX A: ONE-MAGNON ENERGIES (<REF>)
In this appendix,
we present the calculation of the one-magnon energies (<ref>).
In the one-magnon subspace,
the Hamiltonian (<ref>) can be written in the following form (see Fig. <ref>):
H=∑_m_a=0^L-1∑_m_b=0^L-1(
J_2 h_m_a,m_b,1;m_a,m_b,3 +J_2 h_m_a,m_b,2;m_a,m_b,4
+J_1 h_m_a,m_b,1;m_a,m_b,2
+J_1 h_m_a,m_b,1;m_a,m_b,4
+J_1 h_m_a,m_b,3;m_a,m_b,4
+J_1 h_m_a,m_b,3;m_a,m_b,2
+J_1 h_m_a,m_b,2;m_a,m_b+1,1
+J_1 h_m_a,m_b,2;m_a,m_b+1,3
+J_1 h_m_a,m_b,4;m_a,m_b+1,3
+J_1 h_m_a,m_b,4;m_a,m_b+1,1
+J_1 h_m_a,m_b,2;m_a+1,m_b+1,1
+J_1 h_m_a,m_b,2;m_a+1,m_b+1,3
+J_1 h_m_a,m_b,4;m_a+1,m_b+1,3
+J_1 h_m_a,m_b,4;m_a+1,m_b+1,1),
h_i;j
=
1/2(s_i^-s_j^+ + s_j^-s_i^+)
-1/2(s_i^-s_i^+ + s_j^-s_j^+)
+1/4.
Recall that N is the number of sites,
𝒩=N/2 is the number of vertical bonds,
and 𝒩/2=L^2 is the number of sites of the triangular lattice which is used here:
The honeycomb-lattice bilayer is viewed as a triangular lattice with four sites in the unit cell.
We introduce the Fourier transformation,
s^+_m_a,m_b,α
=
1/L∑_k_a∑_k_bexp[i(k_am_a+k_bm_b)] s^+_k,α,
s^-_m_a,m_b,α
=
1/L∑_k_a∑_k_bexp[-i(k_am_a+k_bm_b)] s^-_k,α,
k=k_a(2/(3a_0))(√(3)/2 i+1/2 j)+k_b(2/(3a_0)) j,
k_a=2π/Lz_a, z_a=0,1,…,L -1,
k_b=2π/Lz_b, z_b=0,1,…,L -1,
a_0 is the hexagon side length.
After that, Hamiltonian (<ref>) can be cast into
H
=
𝒩/2(J_2/2+3J_1)
+
∑_k(
[ s^-_k,1 s^-_k,2 s^-_k,3 s^-_k,4 ])
(
[ H_11 H_12 H_13 H_14; H_21 H_22 H_23 H_24; H_31 H_32 H_33 H_34; H_41 H_42 H_43 H_44 ])
(
[ s^+_k,1; s^+_k,2; s^+_k,3; s^+_k,4 ])
with the following matrix H
H
=
(
[ -J_2/2-3J_1 J_1/2γ_k J_2/2 J_1/2γ_k; J_1/2γ^*_k -J_2/2-3J_1 J_1/2γ^*_k J_2/2; J_2/2 J_1/2γ_k -J_2/2-3J_1 J_1/2γ_k; J_1/2γ^*_k J_2/2 J_1/2γ^*_k -J_2/2-3J_1 ]),
γ_k=1+exp(-ik_b)+exp[-i(k_a+k_b)].
The eigenvalues of the matrix H are as follows:
{
-J_2-3J_1,
-J_2-3J_1,
-3J_1-J_1|γ_k|,
-3J_1+J_1|γ_k|},
|γ_k|
=√(3+2[cos k_a +cos k_b +cos(k_a+k_b)]).
Therefore,
in the one-magnon subspace we have
H
=
E_FM
+
∑_k∑_α=1,2,3,4Λ_k^(α)s_k,α^-s_k,α^+,
Λ_k^(1)=Λ_k^(2)
=-J_2-3J_1,
Λ_k^(3,4)
=-3J_1∓ J_1|γ_k|.
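The diagonalization is easy to verify numerically; a minimal NumPy sketch (an illustration only; J_1=1, J_2=5 are example values):

import numpy as np

def one_magnon_energies(ka, kb, J1=1.0, J2=5.0):
    g = 1 + np.exp(-1j*kb) + np.exp(-1j*(ka + kb))       # gamma_k of Eq. (<ref>)
    a, b, d = J1*g/2, J2/2, -J2/2 - 3*J1
    H = np.array([[d,          a,          b,          a],
                  [np.conj(a), d,          np.conj(a), b],
                  [b,          a,          d,          a],
                  [np.conj(a), b,          np.conj(a), d]])
    return np.linalg.eigvalsh(H)

ka, kb = 0.3, 1.1
g_abs = np.sqrt(3 + 2*(np.cos(ka) + np.cos(kb) + np.cos(ka + kb)))
print(one_magnon_energies(ka, kb))                        # numerical spectrum
print(np.sort([-8.0, -8.0, -3 - g_abs, -3 + g_abs]))      # Eq. (<ref>) with J1=1, J2=5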
§ APPENDIX B: FINITE LATTICES
In this appendix,
we collect some formulas for finite lattices.
Consider hard hexagons on the (periodic) 𝒩=12 site honeycomb lattice,
see Fig. <ref>.
Then
Ξ_hc(T,μ, 12)
=1 + 12z+ 48z^2+ 76z^3 +45z^4 +12z^5+ 2z^6,
z=exp(μ/T),
μ=h_sat-h,
see Table <ref>.
All thermodynamic quantities follow from Eq. (<ref>) according to standard prescriptions of statistical mechanics.
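For instance, the specific heat per spin site follows from Eq. (<ref>) via the fluctuation relation C_hc=μ^2[⟨ n^2⟩-⟨ n⟩^2]/T^2; a minimal Python sketch (an illustration only, assuming the couplings J_1=1, J_2=5 of the main text, so h_sat=8):

import numpy as np

coeff = np.array([1, 12, 48, 76, 45, 12, 2])   # Eq. (<ref>), powers z^0 ... z^6
n_pow = np.arange(7)

def c_per_spin(T, h, h_sat=8.0, NN=12):
    # specific heat per spin site of the 12-site hard-hexagon gas, cf. Eq. (<ref>)
    z = np.exp((h_sat - h)/T)
    w = coeff*z**n_pow
    n1 = np.sum(n_pow*w)/np.sum(w)             # <n>
    n2 = np.sum(n_pow**2*w)/np.sum(w)          # <n^2>
    return ((h_sat - h)/T)**2*(n2 - n1**2)/(2*NN)   # C = mu^2 var(n)/T^2

for T in (0.05, 0.1, 0.2):
    print(T, c_per_spin(T, h=7.9))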
Next,
consider the lattice-gas model with finite nearest-neighbor repulsion on the (periodic) 𝒩=12 site honeycomb lattice,
see Fig. <ref>.
Then
Ξ_lg(T,μ, 12)
=
∑_n_1=0,1…∑_n_12=0,1exp[
μ/T∑_m=1^12 n_m
-V/T(
n_5n_1+n_1n_2+n_2n_3+n_4n_2+n_3n_5+n_9n_4+n_4n_6+n_5n_7+n_6n_7
+n_8n_6+n_7n_9+n_12n_8+n_8n_10+n_9n_11+n_10n_11+n_1n_10+n_11n_12+n_12n_3
)
]
=
∑_n_1=0,1…∑_n_12=0,1
z^∑_m=1^12 n_mexp[
-V/T(
n_5n_1+n_1n_2+n_2n_3+n_4n_2+n_3n_5+n_9n_4+n_4n_6+n_5n_7+n_6n_7
+n_8n_6+n_7n_9+n_12n_8+n_8n_10+n_9n_11+n_10n_11+n_1n_10+n_11n_12+n_12n_3
)
],
μ=h_sat-h,
V=J_1,
z=exp(μ/T).
The partition function (<ref>) contains 4096 terms and can be easily calculated.
All thermodynamic quantities follow from Eq. (<ref>) according to standard prescriptions of statistical mechanics.
Clearly, Eq. (<ref>) implies a specific numbering of sites in Fig. <ref>.
However, it can be rewritten in the form that does not depend on the site numbering
[as Eq. (<ref>)].
If we introduce the function g(k_1,k_2) with
the integer k_1=0,…, 12 to count the number of occupied sites
and
the integer k_2=0,…,18 to count the number of bonds which connect the occupied sites,
Eq. (<ref>) can be cast into
Ξ_lg(T,μ, 12)
=
∑_k_1=0^12∑_k_2=0^18 g(k_1,k_2) z^k_1exp(-Vk_2/T).
The only non-zero values of the function g(k_1,k_2) are as follows:
g(0,0)=1;
g(1,0)=12;
g(2,0)=48, g(2,1)=18;
g(3,0)=76, g(3,1)=108, g(3,2)=36;
g(4,0)=45, g(4,1)=168, g(4,2)=207, g(4,3)=72, g(4,4)=3;
g(5,0)=12, g(5,1)=48, g(5,2)=276, g(5,3)=276, g(5,4)=168, g(5,5)=12;
g(6,0)=2, g(6,1)=0, g(6,2)=42, g(6,3)=212, g(6,4)=342, g(6,5)=264, g(6,6)=62;
g(7,3)=12, g(7,4)=48, g(7,5)=276, g(7,6)=276, g(7,7)=168, g(7,8)=12;
g(8,6)=45, g(8,7)=168, g(8,8)=207, g(8,9)=72, g(8,10)=3;
g(9,9)=76, g(9,10)=108, g(9,11)=36;
g(10,12)=48, g(10,13)=18;
g(11,15)=12;
g(12,18)=1.
This representation allows to clarify the degeneracy of the first excited state reported in Table <ref>:
According to the elaborated picture it is given by the value of g(k_1,1), k_1=2,3,4,5.
Furthermore,
g(6,1)=0,
i.e., one cannot place 6 occupied sites on the 12-site lattice in Fig. <ref> to have only 1 bond connecting the occupied sites.
The smallest number of bonds connecting occupied sites is 2 and g(6,2)=42:
This explains the value of the energy gap Δ=2J_1 and the 42-fold degeneracy of the first excited state.
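The list above can be checked and used directly; a minimal Python sketch (an illustration only) that verifies ∑_k_2 g(k_1,k_2) equals the binomial coefficient C(12,k_1) for every k_1 and recovers the hard-hexagon result, Eq. (<ref>), in the V→∞ limit:

from math import comb, exp

# the nonzero g(k1, k2) listed above
g = {(0,0):1, (1,0):12, (2,0):48, (2,1):18, (3,0):76, (3,1):108, (3,2):36,
     (4,0):45, (4,1):168, (4,2):207, (4,3):72, (4,4):3,
     (5,0):12, (5,1):48, (5,2):276, (5,3):276, (5,4):168, (5,5):12,
     (6,0):2, (6,2):42, (6,3):212, (6,4):342, (6,5):264, (6,6):62,
     (7,3):12, (7,4):48, (7,5):276, (7,6):276, (7,7):168, (7,8):12,
     (8,6):45, (8,7):168, (8,8):207, (8,9):72, (8,10):3,
     (9,9):76, (9,10):108, (9,11):36,
     (10,12):48, (10,13):18, (11,15):12, (12,18):1}

# consistency check: sum over k2 of g(k1, k2) equals binomial(12, k1)
for k1 in range(13):
    assert sum(v for (a, b), v in g.items() if a == k1) == comb(12, k1)

def Xi_lg(T, mu, V):                          # Eq. (<ref>)
    return sum(v*exp((mu*k1 - V*k2)/T) for (k1, k2), v in g.items())

# V -> infinity keeps only the k2 = 0 terms and recovers Eq. (<ref>)
print(Xi_lg(T=1.0, mu=0.0, V=1e6))            # 196 = Xi_hc(z=1, 12)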
|
http://arxiv.org/abs/1701.07743v1 | 20170126153812 | The electronics of the HEPD of the CSES experiment | [
"V. Scotti",
"G. Osteria"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.HE",
"physics.ins-det"
] |
INFN Napoli, Complesso Universitario di M. S. Angelo, Ed. 6- Via Cintia, 80126 Napoli, Italy
http://cses.roma2.infn.it/
The China Seismo Electromagnetic Satellite (CSES) aims to contribute to the monitoring of earthquakes from space. This space mission, led by a Chinese-Italian collaboration, will study phenomena of electromagnetic nature and their correlation with the geophysical activity. The satellite will be launched in 2017 and will host several instruments onboard: two magnetometers, an electric field detector, a plasma analyzer, a Langmuir probe and the High Energy Particle Detector (HEPD). The HEPD, built by the Italian collaboration, will study the temporal stability of the inner Van Allen radiation belts, investigating precipitation of trapped particles induced by magnetospheric, ionospheric and tropospheric electromagnetic emissions, as well as by seismo-electromagnetic disturbances. It consists of two layers of plastic scintillators for trigger and a calorimeter. The direction of the incident particle is provided by two planes of double-side silicon microstrip detectors. HEPD is capable of separating electrons and protons and of identifying nuclei up to Iron. The HEPD will study the low energy component of cosmic rays too. The HEPD comprises the following subsystems: detector, electronics, power supply and mechanics. The electronics can be divided into three blocks: silicon detector, scintillator detectors (trigger, energy and veto detectors) and global control and data managing. In this paper a description of the electronics of the HEPD and its main characteristics will be presented.
The electronics of the HEPD of the CSES experiment
for the CSES-Limadou Collaboration
18 November 2016
===================================================
§ INTRODUCTION
CSES is a scientific mission dedicated to the study of electromagnetic, plasma and particle perturbations of the atmosphere, ionosphere, magnetosphere and Van Allen belts induced by natural sources and anthropogenic emitters, and of their correlations with the occurrence of seismic events.
Among the possible phenomena generated by an earthquake, bursts of Van Allen belt electron fluxes in the magnetosphere have been repeatedly reported in literature by various experiments, though a statistical significance was always difficult to claim <cit.>. The CSES mission aims at measuring such particle bursts, by means of the High Energy Particle Detector (HEPD). The high inclination orbit of the satellite allows the instrument to detect particles of different nature during its revolution: galactic cosmic rays - which are modulated by the solar activity at low energies and also solar energetic particles associated to transient phenomena such as Solar Flares or Coronal Mass Ejections.
The satellite will be placed in a 98° Sun-synchronous circular orbit at an altitude of about 500 km; the launch is scheduled in July 2017 with an expected lifetime of 5 years. The satellite mass will be about 730 kg and the peak power consumption about 900 W.
CSES satellite hosts several instruments on board (see Figure <ref>):
* a Search-Coil Magnetometer, a High Precision Magnetometer and an Electric Field Detector for measuring the magnetic and electric fields;
* a Plasma Analyser Package and a Langmuir Probe for measurements of local plasma disturbances;
* a GNSS Occultation Receiver and a Transmitter for the study of profile disturbance of plasma;
* the High-Energy Particle Package and High-Energy Particle Detector (HEPD) for the measurement of the flux of energetic particles.
CSES is the first satellite of a space monitoring system proposed in order to investigate the topside ionosphere - with the most advanced techniques and equipment - and designed in order to gather world-wide data of the near-Earth electromagnetic environment.
§ THE HIGH ENERGY PARTICLE DETECTOR
The High-Energy Particle Detector has been developed by the Italian CSES collaboration, due to its long experience in cosmic ray physics. CSES will complement the cosmic ray measurements of PAMELA <cit.> at low energy, thus giving a complete picture of the cosmic ray radiation by direct measurements from the very low energies (few MeV) up to the TeV region.
The High Energy Particle Detector (HEPD) will study low energy Cosmic Rays (CR) in the energy range 3 - 300 MeV. The HEPD has to separate electrons and protons, identifying electrons within a proton background (N_e/N_p= 10^-5÷10^-3), and identify nuclei up to Iron. The high-inclination orbit allows the telescope to detect particles of different nature during its revolution: galactic CR, Solar Energetic Particles, particles trapped in the magnetosphere.
The HEPD comprises the following subsystems: detector, electronics, power supply and mechanics. The detector is contained in an aluminum box, while the electronics cards are placed outside the detector fixed on the base plate by means of a dedicated supporting structure. The outside surface is covered with aluminized polyimide layer to assure a good thermal insulation.
The detector consists of three components:
* Silicon planes: two planes of double-side silicon micro-strip detectors are placed on the top of the detector in order to track the direction of the incident particle, limiting the effect of multiple Coulomb scattering on the direction measurement;
* Trigger: a layer of thin plastic scintillator divided into six segments;
* Calorimeter: a tower of 16 layers of 1 cm thick plastic scintillator planes followed by a 3×3 matrix of an inorganic scintillator LYSO.
An organic scintillator is used in the calorimeter to optimize the energy resolution. In order to extend the electron measurement range to 100 MeV an inorganic scintillator LYSO is used for the last plane of the calorimeter. The calorimeter volume is surrounded by 5 mm thick plastic scintillator veto planes. All the scintillator detectors (trigger, calorimeter and VETO) are read out by photomultiplier tubes (PMTs).
The good energy-loss measurement of the silicon tracker, combined with the energy resolution of the scintillators and calorimeter, allows identifying electrons with acceptable proton background levels (N_e/N_p= 10^-5÷10^-3).
In table <ref> the main parameter of the detector are summarized.
The Italian collaboration developed four models of the HEPD: Electrical, Mechanical and Thermal, Qualification (QM) and Flight Models.
§ THE ELECTRONICS OF THE HEPD
The electronics of the HEPD can be divided in three blocks:
* Silicon detector;
* Scintillator detectors;
* Global control and data managing.
Each detector block includes power chain for bias distribution and a data acquisition processing chain. The main Power Supply provides the low voltages to the detector electronics and the high bias voltages for PMTs and silicon modules.
All the electronics is designed with embedded Hot/Cold redundancy, and all the components of the boards have been selected to withstand a -40 to 85 °C operating range. The maximum data transfer rate from the satellite is 50 GB per day.
The whole electronics system is schematized in figure <ref>. It is composed by front-end electronics and four main boards:
* Data Acquisition (DAQ) Board: manages all the scientific data of HEPD. The DAQ accomplies the following functionalities: acquisition of trigger signal from PMT/Trigger board, management of hybrid circuits on the silicon planes, acquisition of silicon planes data, computing of PMTs data and silicon planes data, data compression, transmission of scientific data on the scientific data link.
* Trigger Board: manages the analog signals coming from the PMTs and generates the trigger signals needed for data acquisition. The main functions of this board are to invert and attenuate the PMTs analog signal to adapt it to input requirements of the EASIROC Integrated Circuits, to convert the EASIROC readout signals into digital signals, to allow the DAQ board to read the EASIROC digital output, to allow the CPU to configure the EASIROC, to generate and transmit slow event trigger signals manipulating the fast trigger signals coming from EASIROC, to allow the CPU to configure the slow trigger generation algorithm.
* CPU Board: controls the detector and communicates with the platform of the satellite via CAN BUS interface. The CPU manages the following functionalities: communication with Satellite OBDH computer via CAN bus, storage of non volatile information, management -via internal slow control links bus- of TM/TC and LVPS control board, Trigger Board and DAQ Board, management of system diagnostic routines and of system configuration, system monitor.
* Telemetry/Telecommand board.
A scheme of the data acquisition procedure is depicted in figure <ref>. The analogical signal read out from the PMTs associated to scintillator detectors are transmitted directly to the Trigger Board. Signals of each data processing block related to scintillators are managed by an FPGA which issues the FAST trigger signal needed to start the acquisition of the tracker by DAQ. After an handshake protocol, if the trigger is confirmed by DAQ, the Trigger Board sends Scintillators data to DAQ Board. Scintillators and tracker data are processed by a dedicated DSP and the results are written on a DP-RAM waiting to be transferred to satellite via RS-422 on a CPU command.
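The flow can be summarized by the following pseudologic sketch (an illustration only: all object and method names are invented for exposition and do not correspond to the actual HEPD firmware or software interfaces):

# Illustrative pseudologic of the acquisition flow described above.
def acquisition_cycle(trigger_board, daq, dsp, dp_ram, cpu):
    fast = trigger_board.fast_trigger()        # issued by the FPGA from PMT signals
    if fast and daq.handshake_confirms(fast):  # trigger validated by the DAQ board
        daq.acquire_tracker()                  # silicon-plane readout
        scint = trigger_board.send_scintillator_data()
        dp_ram.write(dsp.process(scint, daq.tracker_data()))
    if cpu.transfer_requested():               # on a CPU command ...
        cpu.send_over_rs422(dp_ram.read())     # ... data go to the satellite link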
§ TEST AND QUALIFICATION CAMPAIGN
According to Chinese space procedures, the HEPD project involved the construction of four detector versions: the Electrical Model, the Structural and Thermal Model, the Qualification Model and the Flight Model. All the models have been built, assembled and integrated. In Spring 2016 we started the test and qualification campaign with the HEPD QM: vibration tests at the SERMS laboratory in Terni (PG) simulating launch and flight, and thermal and vacuum tests at the SERMS laboratory simulating the space environment. Finally, beam tests were carried out at the Beam Test Facility of the Laboratori Nazionali di Frascati of INFN. The detector was irradiated with electrons and positrons from 30 to 150 MeV. The objective was to study the instrument response to electrons in the energy range of interest and to perform a precise calibration of the calorimeter energy measurement for the QM. In Figure <ref> we show a preliminary plot of the total energy loss in the HEPD QM plastic scintillator calorimeter for 30 MeV electrons. The red curve is a Landau fit to the one-particle peak of the distribution.
The same tests have been performed on the HEPD FM. Beam test data are under analysis.
§ CONCLUSIONS
The HEPD has been developed by the Italian CSES-Limadou Collaboration. In this paper the main features of the HEPD have been described, with a specific focus on the electronics of the instrument. At the moment the FM of the HEPD is in China for pre-flight tests.
The launch campaign will start in July 2017. The memorandum between Italy and China foresees also the commissioning post launch at CSES Ground Segment in Beijing and beam test of the QM after the redelivery.
9
Wang Wang L. et al., Earthq Sci (2015) 28 4, 303
Zhang Zhang X. et al., Nat. Hazards Earth Syst. Sci. (2013) 13, 197
Sgrigna Sgrigna V. et al., Journal of Atmospheric and Solar-Terrestrial Physics (2005), 67 1448
pamela Adriani, O. et al., Physical Review Letters (2013), DOI: 10.1103/PhysRevLett.111.081102
|
http://arxiv.org/abs/1701.07821v1 | 20170126185550 | Gromov-Witten theory via Kuranishi structures | [
"Mohammad Farajzadeh Tehrani",
"Kenji Fukaya"
] | math.SG | [
"math.SG",
"math.AG"
] |
|
http://arxiv.org/abs/1701.08082v5 | 20170127153334 | Application of Spin-Exchange Relaxation-Free Magnetometry to the Cosmic Axion Spin Precession Experiment | [
"Tao Wang",
"Derek F. Jackson Kimball",
"Alexander O. Sushkov",
"Deniz Aybas",
"John W. Blanchard",
"Gary Centers",
"Sean R. O Kelley",
"Arne Wickenbrock",
"Jiancheng Fang",
"Dmitry Budker"
] | physics.atom-ph | [
"physics.atom-ph",
"physics.ins-det",
"physics.space-ph"
] |
1]Tao Wangmycorrespondingauthor1
[mycorrespondingauthor1]Corresponding author
taowang@berkeley.edu
4]Derek F. Jackson Kimball
3]Alexander O. Sushkov
3]Deniz Aybas
2]John W. Blanchard
2]Gary Centers
1]Sean R. O' Kelley
2]Arne Wickenbrock
5]Jiancheng Fang
2,1,6]Dmitry Budkermycorrespondingauthor2
[mycorrespondingauthor2]Corresponding author
budker@uni-mainz.de
[1]Department of Physics, University of California, Berkeley, California 94720-7300, USA
[4]Department of Physics, California State University, East Bay, Hayward, California 94542-3084, USA
[3]Department of Physics, Boston University, Boston, Massachusetts 02215, USA
[2]Helmholtz Institute Mainz, Johannes Gutenberg University, 55099 Mainz, Germany
[5]School of Instrumentation Science and Opto-Electronics Engineering, Beihang University, Beijing 100191, PRC
[6]Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
The Cosmic Axion Spin Precession Experiment (CASPEr) seeks to measure oscillating torques on nuclear spins caused by axion or axion-like-particle (ALP) dark matter via nuclear magnetic resonance (NMR) techniques. A sample spin-polarized along a leading magnetic field experiences a resonance when the Larmor frequency matches the axion/ALP Compton frequency, generating precessing transverse nuclear magnetization. Here we demonstrate a Spin-Exchange Relaxation-Free (SERF) magnetometer with sensitivity ≈ 1 fT/√(Hz) and an effective sensing volume of 0.1 cm^3 that may be useful for NMR detection in CASPEr. A potential drawback of SERF-magnetometer-based NMR detection is the SERF's limited dynamic range. Use of a magnetic flux transformer to suppress the leading magnetic field is considered as a potential method to expand the SERF's dynamic range in order to probe higher axion/ALP Compton frequencies.
Axion Dark Matter, Atomic Magnetometer, Spin-Exchange Relaxation-Free
§ INTRODUCTION
Dark matter and dark energy are the most abundant yet mysterious substances in the Universe. Axions and axion-like particles (ALPs; we do not distinguish between axions and ALPs in the following) have emerged as theoretically well-motivated dark-matter candidates <cit.>. The Cosmic Axion Spin Precession Experiment (CASPEr) searches for a time-varying axion field by using Nuclear Magnetic Resonance (NMR) techniques <cit.>. CASPEr is projected to realize a sensitivity to axions and ALPs beyond the current astrophysical and laboratory limits <cit.>.
As discussed in <cit.>, a dark-matter ALP field can cause oscillating torques on nuclear spins either by generating an oscillating nuclear electric dipole moment (EDM) that interacts with a static electric field or through an oscillating “ALP wind” that acts as a pseudo-magnetic field along the relative velocity vector between the sample and the dark matter. The oscillation frequency of the torque is given by the ALP Compton frequency ω_a. In CASPEr, a sample of nuclear spins is polarized along a leading magnetic field, and if the Larmor frequency matches ω_a, a resonance occurs and precessing transverse magnetization is generated. The initial plan for CASPEr employs Superconducting Quantum Interference Device (SQUID) magnetometers to search frequencies ≲ 1 MHz (roughly corresponding to an applied magnetic field below 0.1 T depending on the sample), and inductive detection using an LC circuit for frequencies ≳ 1 MHz.
Another possibility for NMR detection is the use of an optical atomic magnetometer <cit.>. In particular, a state-of-the-art Spin-Exchange Relaxation-Free (SERF) magnetometer has realized a sensitivity of 160 aT/√(Hz) in a gradiometer arrangement, and its quantum noise limit is 50 aT/√(Hz), which is the most sensitive magnetometer in the low-frequency region <cit.>. This motivates consideration of SERF magnetometers for NMR detection in CASPEr <cit.>. SERF magnetometers are applied in fundamental symmetry tests <cit.>; they have better sensitivity than Superconducting Quantum Interference Devices (SQUIDs) in the low-field regime <cit.>, which could in principle improve the sensitivity of the search for axion dark matter, but SERF magnetometers have a disadvantage of a smaller bandwidth than SQUIDs <cit.>. With a magnetically shielded room, a SQUID magnetometer operated inside a LOw Intrinsic NOise Dewar (LINOD) could reach a noise level of about 260 aT/√(Hz) below 100 Hz, and achieve a noise level of 150 aT/√(Hz) between 20 kHz and 2.5 MHz <cit.>. SERF magnetometers have demonstrated comparable magnetic-field sensitivities to those of SQUID magnetometers; however, they have certain advantages that may be important in specific applications. First and foremost, SERF magnetometers do not require cryogenics, they generally have the “1/f knee” at lower frequencies, and they are robust with respect to electromagnetic transients. There are also disadvantages such as generally lower dynamic range, bandwidth, the necessity to heat the sensor cells, larger sensor size, and the absence of the elegant gradiometric arrangements possible with SQUIDs.
In Sec. II, CASPEr is summarized and the corresponding estimates for the axion induced signal shown. We then explore the potential of SERF magnetometry in CASPEr where an experimental arrangement is proposed and various sources of noise are considered. In Sec. III, we introduce a modification to the quantum noise equations to account for position dependent atomic absorption by the pump beam. We then present a 1 fT/√(Hz) magnetometer and demonstrate a measurement of the modified noise described by the equations. A possible technique to significantly expand the bandwidth of the SERF axion search is also explored.
§ SERF MAGNETOMETERS FOR SPIN PRECESSION DETECTION IN CASPER
The CASPEr research program encompasses experiments employing established technology to search for an oscillating nuclear electric dipole moment (EDM) induced by axions or ALPs (CASPEr Electric) and search for direct interaction of nuclear spins with an oscillating axion/ALP field (axion wind; CASPEr Wind). The CASPEr-Wind and the CASPEr-Electric experiments have a lot of features in common. The proposal to use a SERF magnetometer for detection of spin precession may be applicable to both CASPEr-Wind and CASPEr-Electric although in the following we focus on CASPEr-Electric. The axion field can be treated as a fictitious AC-magnetic field acting on nuclear spins in an electrically polarized material <cit.>
B_a(t)=ϵ_S E^*d_n/μsin(ω_a t)
,
where ϵ_S is the Schiff factor <cit.>, E^* is the effective static electric field acting on the atoms containing the nuclear spins of interest, μ is the nuclear magnetic moment, ω_a=m_a/ħ is the frequency of the axion (we set c=1 in the paper), and m_a is the mass of the axion. Note that the field oscillates at the Compton frequency of the axion. The nuclear electric dipole moment (d_n) generated by the axion dark matter can be written as <cit.>
d_n ≈ (10^-25 e·cm)(eV/m_a)(g_d ×GeV^2)
.
where g_d is the EDM coupling.
In the CASPEr experiment, the nuclear spins in a solid sample are prepolarized by either a several tesla magnetic field generated by superconducting coils or optical polarization via transient paramagnetic centers. The experiment is then carried out in a leading magnetic field B_0; the effective electric field E^* inside the sample is perpendicular to B_0 as shown in Fig. <ref>. The time-varying moments induced by axion dark matter are collinear with nuclear spin. In the rotating frame, if there is a nucleon electric dipole moment, the nuclear spins will precess around the electric field, and this will induce a transverse magnetization, which can be measured with a sensitive magnetometer. The first generation CASPEr-Electric experiment will most likely employ a ferroelectric sample containing Pb as the active element. As mentioned in <cit.>, ^207Pb (nuclear spin I=1/2) has a nonzero magnetic dipole moment, and has a large atomic number (Z), which means it has a large Schiff factor (since the effect produced by the Schiff moment increases faster than Z^2) <cit.>. The transverse magnetization of the ferroelectric samples caused by the axion field can be written as <cit.>
M_a(t) ≈ n_Pbpμγ_Pb1/T_b/(1/T_b)^2+(ω_0-m_a/ħ)^2 B_a(t)
,
where n_Pb is the number density of nuclear spins of ^207Pb, p is the spin polarization of ^207Pb, μ=0.584μ_N is the nuclear magnetic moment of ^207Pb, γ_Pb is the gyromagnetic ratio of ^207Pb, ω_0 is the spin-precession frequency in the applied magnetic field, we define T_b=min{T_2,τ_a} as the “signal bandwidth time", T_2 is the transverse relaxation time of the nuclear spins, and τ_a= 10^6h/m_a is the axion coherence time <cit.>, which varies from 4 × 10^5 to 4 × 10^-3 s over the range of the axion masses from 10^-14-10^-6 eV.
CASPEr searches for axion dark matter corresponding to axions of different masses by sweeping the applied magnetic field from zero to several T or higher, which in turn scans the NMR resonance frequency and sets the axion Compton frequency to which the apparatus is sensitive. Much of the interesting parameter space corresponds to field values that exceed the magnetic field limit of the SERF magnetometer. The large DC field problem can be solved using a flux transformer as shown in Fig. (<ref>), which acts as a “DC magnetic filter" reducing the static magnetic field to keep the alkali metal atoms in the SERF regime. The flux transformer only picks up the time-varying component of the magnetic flux through the enclosed area.
A SERF magnetometer has a narrow bandwidth of a few Hz <cit.>; by applying a constant magnetic field along the pump-beam direction, the SERF magnetometer can be tuned to resonate at a higher frequency, which increases the detectable frequency range of the SERF magnetometer up to 200 Hz <cit.> or higher. The LOngitudinal Detection scheme (LOD) discussed in <cit.> can, in principle, fully remedy the disadvantage of the SERF magnetometer's limited bandwidth, which is discussed in the Appendix.
The alkali cell of the SERF magnetometer is heated to 373 K- 473 K in order to increase the alkali vapor density to improve the sensitivity. However, the ferroelectric sample is cooled down to a low temperature to increase the longitudinal relaxation time and the spin polarization of the sample <cit.>. Again a flux transformer <cit.> is a potential solution to this problem where, as shown in Fig. <ref>, the SERF magnetometer can be placed in a warm bore of a superconducting system containing the transformer coils and magnetic shields.
The magnetic flux through the primary coil can be written as <cit.>
Φ_p= μ_0 μ_r g N_p M_a A_p,
where μ_r is the relative permeability of the ferroelectric sample, μ_r ≈ 1 for PbTiO_3, g≈ 1 is the geometric demagnetizing factor <cit.>, A_p is the cross-section of the cylindrical sample, and N_p is the number of turns of the primary coil.
The flux transformer has an enhancement factor k_FT=B_s/B_p, where B_s and B_p are the magnetic fields in the secondary and primary coils, respectively. It can be calculated as
k_FT =N_p A_pB_s/Φ_p=N_pA_p/Φ_pμ_0N_s/l_sΦ_p/L_s+L_p=μ_0 N_s/l_sN_p A_p/L_s+L_p,
where N_s is the number of turns of the secondary coil, l_s is the coil length of the secondary coil, L_p and L_s are the inductances of the primary and the secondary coil, respectively. The inductances of long multi-turn solenoid coils can be written as
L_p≈μ_0 N_p^2 A_p/l_p,
L_s≈μ_0 N_s^2 A_s/l_s,
where A_s is the winding cross-section of the secondary coil, l_p is the coil length of the primary coil. Inserting Eq.(<ref>) into Eq.(<ref>), we have
k_FT =1/l_s/l_pN_p/N_s+A_s/A_pN_s/N_p;
with N_p/N_s=√(A_sl_p/A_pl_s), we have L_p=L_s, and the gain factor of Eq.(<ref>) has a maximum √(A_pl_p/4A_sl_s). Careful consideration of the relative coil geometries, taking into account sample size and insulation requirements for example, is required to ensure this remains an enhancement factor. [For small samples and larger amounts of insulation between the flux transformer and the SERF magnetometer, the geometry may lead to an unfavorable k_FT<1. For example, a 1.6 cm diameter sample with a 6 cm diameter secondary coil, l_p = 1.6 cm and l_s = 2 cm, yields a k_FT = 0.1, reducing the advantage of using a flux transformer. But for larger samples, the k_FT grows greater than 1, making this approach especially useful.]
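The enhancement-factor formula above is easily evaluated; the following sketch reproduces both the footnote geometry and the geometry assumed later in the text:

```python
import math

def k_FT_max(A_p, A_s, l_p, l_s):
    """Maximum enhancement sqrt(A_p*l_p / (4*A_s*l_s)), reached for the
    matched turns ratio N_p/N_s = sqrt(A_s*l_p / (A_p*l_s))."""
    return math.sqrt(A_p * l_p / (4.0 * A_s * l_s))

# Footnote geometry: 1.6 cm diameter sample, 6 cm diameter secondary coil.
A_p = math.pi * 0.8 ** 2  # cm^2
A_s = math.pi * 3.0 ** 2  # cm^2
print(round(k_FT_max(A_p, A_s, l_p=1.6, l_s=2.0), 2))  # ~0.12, i.e. k_FT ~ 0.1

# Geometry assumed later in the text: A_p = 78 cm^2, A_s = 28 cm^2.
print(round(k_FT_max(78.0, 28.0, l_p=10.0, l_s=2.0), 2))  # ~1.87, i.e. k_FT ~ 2
```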
At a finite temperature, the flux transformer induces Johnson noise. The noise of the flux transformer and the SERF magnetometer system is determined by the noise of the flux transformer (δ B_FT), the field enhancement coefficient, and the sensitivity of the SERF magnetometer (δ B_SERF)
δ B_n=√(δ B_FT^2+(δ B_SERF/k_FT)^2).
In this experiment, the flux transformer is made of zero-dissipation material, such as superconducting niobium or niobium-titanium wire and cooled with liquid helium to realize superconductivity <cit.>. Type I superconductors are the ideal choice since the magnetic field cannot penetrate, making the Johnson noise negligible. However, the sweeping magnetic field reaches the critical field of these materials around 100 mT (≈ 6 MHz for Pb). Depending on the parameter space to be explored, the material should be chosen accordingly.
If t<τ_a, the experimental sensitivity after measurement time t can be written as <cit.>
Φ_p/N_p A_p=δ B_n/√(t).
When t>τ_a, the experimental sensitivity after measurement time t can be written as <cit.>
Φ_p/N_p A_p=δ B_n/(τ_a t)^1/4.
When t<τ_a and T_2<τ_a, the axion-nucleon coupling constant can be calculated by Eqs. (<ref>-<ref>) and (<ref>). When ω_0=m_a/ħ, the transverse magnetization is enhanced at the resonant point; we find
g_d[GeV^-2]=4.5× 10^45/C· mδ B_n× m_a[eV]/μ_0 n_Pbpγ_Pb T_2 ϵ_S E^*√(t).
When t>τ_a and T_2<τ_a, the axion-nucleon coupling constant can be calculated by Eqs. (<ref>-<ref>) and (<ref>) as
g_d[GeV^-2]=4.5× 10^45/C· mδ B_n× m_a^5/4[eV]/μ_0 n_Pbpγ_Pb T_2 ϵ_S E^*(10^6h[eV· s] t)^1/4.
When T_2>τ_a, then
g_d[GeV^-2]=4.5× 10^45/C· mδ B_n× m_a^9/4[eV]/μ_0 n_Pbpγ_Pbϵ_S E^*(10^6h[eV· s])^5/4(t)^1/4.
Here we assumed that the transverse relaxation time T_2 equals 5 ms in Phase 1 of the CASPEr experiment<cit.>. The sample is paramagnetic purified PbTiO_3, which is polarized by 20 T magnetic field at 4.2 K, yielding p = 0.001 <cit.>. The E^* is assumed to be 3 × 10^10 V/m <cit.>. We assumed A_p = 78 cm^2, A_s = 28 cm^2, l_p = 10 cm, l_s = 2 cm, and enhancement factor k_FT≈ 2, and the sensitivity of the SERF magnetometer is 50 aT/√(Hz) <cit.>. In Phase 2, we assumed T_2 = 1 s, and p = 1 <cit.>. Utilizing the narrow bandwidth range, the measurement time of a single frequency point is assumed to be 36 hours. It will take approximately 1 year of continuous data acquisition to sweep the 200 Hz of parameter space. The detectable region for the SERF magnetometer is calculated with Eq.(<ref>), which is plotted as the orange region in Fig. <ref>. From the figure, we can see that CASPEr experiments employing a SERF magnetometer for NMR detection can realize a sensitivity of 10^-25 GeV^-2. Technical noise such as that due to vibrations of the apparatus is a major concern for such experiments, but we point out here that the Q-factor of the axion oscillation is ≈ 10^6, which is usually much higher than the Q-factor of the vibrational noises facilitating suppression of the latter.
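As an illustration of how the first coupling formula above is evaluated, the following sketch computes g_d for a single axion mass in the Phase-1 regime (t<τ_a, T_2<τ_a); the values of n_Pb and of the Schiff factor ϵ_S below are placeholder assumptions for illustration only, not values taken from this section:

```python
import math

mu0    = 4e-7 * math.pi   # T*m/A
gamma  = 5.6e7            # rad/(s*T), 207Pb (from mu = 0.584 mu_N, I = 1/2)
T2     = 5e-3             # s, Phase 1
p      = 1e-3             # spin polarization, Phase 1
E_star = 3e10             # V/m
t      = 36 * 3600.0      # s, dwell time per frequency point
dB_n   = 50e-18 / 2.0     # T/sqrt(Hz): 50 aT/rtHz SERF noise over k_FT ~ 2

# Placeholder assumptions (NOT specified in this section):
n_Pb  = 3.5e27            # m^-3, assumed 207Pb number density in PbTiO3
eps_S = 1.0               # assumed Schiff suppression factor

def g_d(m_a_eV):
    """Projected coupling in GeV^-2 for t < tau_a and T2 < tau_a."""
    return (4.5e45 * dB_n * m_a_eV /
            (mu0 * n_Pb * p * gamma * T2 * eps_S * E_star * math.sqrt(t)))

print(f"g_d(m_a = 1e-9 eV) ~ {g_d(1e-9):.1e} GeV^-2")
```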
§ MODEL OF SERF MAGNETOMETER NOISE LIMITATIONS
A SERF magnetometer is an alkali vapor atomic magnetometer that works in the regime where the spin-exchange rate far exceeds the frequency of Larmor precession. In this regime spin-exchange relaxation is suppressed <cit.>. A circularly polarized pump beam is used to spin polarize the alkali atoms while a linearly polarized probe beam propagates perpendicularly through the cell. If a small magnetic field is applied perpendicular to the plane of the pump and probe beams, this will cause the spins to precess by a small angle and the probe beam's plane of polarization to rotate by an angle proportional to the magnetic field due to the Faraday effect. The magnetic field can thus be determined by measuring the optical rotation angle.
The sensitivity with which spin precession can be measured determines the achievable sensitivity of the CASPEr experiment up to the point where the magnetization noise of the sample becomes dominant. To date, measurements of the sensitivity of SERF magnetometers have been limited by Johnson noise from the magnetic shield, even with a low-noise ferrite magnetic shield <cit.>. The Low Intrinsic NOise Dewar (LINOD) reported in <cit.> opens new possibilities for ultra-low magnetic noise superconducting shields, in which case the quantum noise and the technical noise could become the dominant noise sources in the future. Thus, studying the intrinsic noise of a SERF magnetometer, as we do in this work as a first step in the investigation of the possible application of SERF magnetometry to CASPEr, is essential for further improving the sensitivity of SERF magnetometry.
The SERF magnetometer has a fundamental sensitivity at the attotesla (10^-18 tesla) level <cit.> limited by the spin-projection noise (SPN) and the photon shot noise (PSN) <cit.>. In many practical implementations, the photon shot noise is a major contribution to the quantum noise limit for SERF magnetometers <cit.>.
When photon shot noise is the dominant quantum noise source, the optimum sensitivity of a SERF magnetometer is achieved when the polarization of the atoms is 50% <cit.> and the power of the pump beam is chosen accordingly. Furthermore, a detuned pump beam causes light shifts, which can be treated as a fictitious magnetic field <cit.>. This light shift is conventionally eliminated by locking the pump beam's frequency to the resonance point. However, the on-resonance pump beam is strongly absorbed by the non-fully polarized alkali atoms due to the larger optical depth. The absorption causes position-dependent polarization along the pump-beam propagation direction in the cell. A hybrid optical pumping scheme has been proposed to solve this problem <cit.>; however, to date the sensitivity of hybrid SERF magnetometers has not surpassed the sensitivity of direct-optical-pumping-based potassium SERF magnetometers <cit.>.
Here we determine the noise limit for a direct-optical-pumping-based SERF magnetometer taking into account the absorption of the pump beam by the alkali atoms. Furthermore, the sensitivity of the SERF magnetometer is usually limited by the optical rotation measurement which we experimentally demonstrate along with the analytic absorption modification to the noise limit.
The major sources of noise affecting SERF magnetometers can be divided into three categories: 1) quantum noise (spin-projection noise and photon shot noise); 2) technical noise (e.g., probe-beam polarization-rotation noise caused by the Faraday modulator and lock-in amplifier); 3) magnetic noise (Johnson noise of the magnetic shields).
The contribution to the apparent magnetic noise per root Hz measured by a SERF magnetometer associated with photon shot noise can be written as <cit.>
δ B_PSN=ħ/g_s μ _B P_z √(nV)2√(2)(R+Γ_pr+Γ_SD)/√(Γ_pr(OD)_0),
where ħ is the reduced Planck constant, g_s≈2 is the electron Lande factor, g_sμ_B/ħ=γ_e is the gyromagnetic ratio of the electron, μ_B is the Bohr magneton, P_z is the spin polarization along the pump beam, which is
P_z=R/R+Γ_pr+Γ_SD,
n is the density of the alkali atoms, V is the overlapping volume of the probe beam and the pump beam, R is the pumping rate of the pump beam, Γ_pr is the pumping rate of the probe beam, Γ_SD is the spin-relaxation rate caused by the spin destruction, and OD_0 is the optical depth on resonance.
The apparent magnetic noise per root Hz measured by a SERF magnetometer associated with spin-projection noise can be written as <cit.>
δ B_SPN=2ħ√((R+Γ_pr+Γ_SD))/g_s μ _B P_z √(nV).
Calculating the quadrature sum of Eq.(<ref>) and Eq.(<ref>), we find the expression for the total quantum noise
δ B_qt =2ħ(√(R+Γ_pr+Γ_SD))^3/g_s μ_B R √(n V)√(1+2(R+Γ_pr+Γ_SD)/Γ_pr(OD)_0).
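When the optical-depth term is negligible, the R-dependence of the photon-shot-noise expression reduces to δB_PSN ∝ (R+Γ_pr+Γ_SD)²/R, whose minimum reproduces the 50% optimum polarization quoted above; a minimal numerical check:

```python
import numpy as np

Gamma_rel = 1.0                       # Gamma_pr + Gamma_SD (arbitrary units)
R = np.linspace(0.01, 10.0, 100_000)  # pumping rate, in units of Gamma_rel

noise = (R + Gamma_rel) ** 2 / R      # R-dependence of delta B_PSN
R_opt = R[np.argmin(noise)]
P_z_opt = R_opt / (R_opt + Gamma_rel)
print(R_opt, P_z_opt)                 # R_opt ~ Gamma_rel, i.e. P_z ~ 0.5
```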
The sensitivity of the optical rotation measurement plays an important role in the atomic spin measurement, which can also be a limitation for the sensitivity of the SERF magnetometer. The optical rotation angle θ of a linearly polarized probe beam can be described as <cit.>
θ =π/2nlr_ecP_xf_D2Γ_L/2π/(ν-ν_D2)^2+(Γ_L/2)^2,
where l is the length of the optical path of the probe beam through the alkali cell, r_e is the classical radius of the electron, c is the speed of light, P_x is the spin polarization projection along the X-axis, f_D2 is the oscillator strength of the D2 line, Γ_L is the pressure broadening caused by the buffer gas and quenching gas, ν is the frequency of the probe beam, and ν_D2 is the resonance frequency of the D2 line.
Under conditions where the residual magnetic fields are well-compensated, a small magnetic field B_y applied along the Y-axis causes the net spin polarization to precess, generating a non-zero spin projection along the X-axis, given by
P_x=γ_e R B_y/(R+Γ_pr+Γ_SD)^2+(γ_e B_y)^2+(γ_e B_LS)^2,
where B_LS is the light shift. Such a Y-directed field can be used to calibrate a SERF magnetometer. If the calibration magnetic field applied along the Y-direction B_y ≪ (R+Γ_pr+Γ_SD)/γ_e, and the light shift is negligible, then Eq.(<ref>) can be simplified to
P_x=γ_e R B_y/(R+Γ_pr+Γ_SD)^2.
Combining the results of Eq.(<ref>) and Eq.(<ref>), and assuming the wavelength of the probe beam is detuned by several hundred GHz (depending on the pressure broadening Γ_L) to lower frequency from the D2 resonance frequency, we find an expression for the optical-rotation-induced apparent magnetic noise
δ B_m=4(R+Γ_pr+Γ_SD)^2 [(ν-ν_D2)^2+(Γ_L/2)^2] δθ/γ_e R nlr_ecf_D2Γ_L,
where δθ is the sensitivity of the optical rotation measurement in rad/√(Hz).
To better describe the parameters determining the SERF noise limits, one must additionally account for the absorption of the pump beam propagating through the cell: the pumping rate at depth is not simply proportional to the pump power measured before the cell. The pump beam propagates along the cell with the pumping rate decreasing according to <cit.>
R(z)=Γ_rel W [R_IN/Γ_rel e^R_IN/Γ_rel-nσ(ν)z] ,
where R_IN is the pumping rate of the pump beam entering the front of the cell, R(z) is the pumping rate after the beam has propagated a distance z in the cell, W is the Lambert W-function, which is the inverse of the function f(W)=We^W (note that at z=0 the identity W(xe^x)=x gives R(0)=R_IN), and Γ_rel=Γ_SD+Γ_pr. In a vapor with a large optical depth, the pumping rate in the center of the cell is different from the pumping rate calculated before the cell. The pumping rate R in Eqs.(<ref>), (<ref>), (<ref>) and (<ref>) should be replaced by Eq.(<ref>) evaluated at the location of the probe beam. This modifies the noise limits accordingly.
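A minimal sketch of the propagation law, with illustrative (assumed) rates only:

```python
import numpy as np
from scipy.special import lambertw

def pumping_rate(z, R_in, Gamma_rel, n_sigma):
    """R(z) = Gamma_rel * W[(R_in/Gamma_rel) exp(R_in/Gamma_rel - n*sigma*z)].
    Sanity check: at z = 0, W(x e^x) = x gives R(0) = R_in exactly."""
    x = (R_in / Gamma_rel) * np.exp(R_in / Gamma_rel - n_sigma * z)
    return Gamma_rel * np.real(lambertw(x))

R_in, Gamma_rel = 2000.0, 200.0   # s^-1, illustrative rates
n_sigma = 200.0                   # m^-1, assumed product n*sigma(nu)
for z in (0.0, 0.0125, 0.025):    # front, center, back of a 25 mm cell
    print(f"z = {1e3*z:4.1f} mm : R(z) = {pumping_rate(z, R_in, Gamma_rel, n_sigma):7.1f} s^-1")
```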
§ EXPERIMENTS AND RESULTS
The experimental setup is shown in Fig. <ref>. A spherical cell with a diameter of approximately 25 mm is placed in a vacuum chamber containing a drop of potassium, approximately 1600 torr helium buffer gas and 33 torr nitrogen quenching gas for suppressing radiation trapping <cit.>. The vacuum chamber is made of G-10 fiberglass, and the cell is heated up to 460 K with an AC heater, which is made of twisted wires to reduce magnetic field. The magnetic-shielding system includes mu-metal magnetic shields and active compensation coils. The shielding factor of the magnetic shield is approximately 10^5; supplemented by the compensation coils, the residual magnetic field at the cell position is smaller than 10 pT. The pump beam propagates along the Z-axis; its wavelength is locked to 770.1 nm (the center of the D1 resonance) to reduce the light shift. The diameter of the pump beam illuminating the cell is approximately 15 mm.
The probe beam propagates along the X-axis; it is approximately 0.5 nm (250 GHz) detuned to lower frequency from the potassium D2 line. The probe beam is linearly polarized with a Glan-Taylor polarizer. Additionally, a Faraday modulator is used to reduce the 1/f noise at low frequency by modulating the beam polarization with an amplitude of approximately 0.03 rad at a frequency of 5.1 kHz. Then the probe beam passes through another Glan-Taylor polarizer set at 90^∘ to the initial beam polarization direction. A lock-in amplifier (LIA) is used to demodulate the signal from the photodiode.
In order to precisely calibrate the coils of the SERF magnetometer, we applied the synchronous optical pumping technique. By applying a chopper to modulate the pump beam, the magnetometer can work in the Bell-Bloom mode (BB mode) <cit.>.
Calibration of the compensation coils is performed using the applied chopper at frequency (ω) and an additional bias magnetic field in the Y-direction (B_y). The response of the BB magnetometer can be written as
S_x(ω)=R S_0/4√((2πΔν)^2+(ω-γ B_y)^2),
where S_0 is the polarization in zero magnetic field, γ=γ_e/q, q is the slowing-down factor <cit.>, which is determined by the polarization of the potassium atoms, Δν is the magnetic linewidth.
In order to keep the nuclear slowing-down factor constant (≈ 6) <cit.>, the powers of the pump beam and the probe beam are adjusted to small values where the magnetic linewidth is independent of the powers of the pump beam and the probe beam. By applying different bias magnetic fields in the Y-axis, we measured the response of the BB magnetometer. The magnetic field generated by the Y coils can be calculated from the resonant point using Eq.(<ref>). The results are shown in Fig. <ref>. The measured data near 50 Hz (line frequency in China) has a relatively large error bar, because the lock-in amplifier has a notch filter near the line frequency, which attenuates the response signal. According to the linear fit, we measure the coil calibration constant to be approximately 0.177 nT/μA.
After the coil calibration experiments, the chopper is turned off, and the residual magnetic fields are well-compensated to near zero. The power of the probe beam is increased to 0.5 mW and the power of the pump beam increased to 1 mW. A calibration magnetic field oscillating at 30 Hz is applied along the Y-direction to calibrate the response of the magnetometer, whose amplitude is approximately 15.6 pT_rms. Then the sensitivity of the SERF magnetometer at 30 Hz is calibrated. There is Johnson noise of several fT/√(Hz) generated by the mu-metal magnetic shield, which under certain conditions could exceed the intrinsic sensitivity limits (spin projection noise, photon shot noise and technical noise in the optical rotation measurement) of the SERF magnetometer. In order to measure the noise limits of the SERF magnetometer, the pump beam is blocked after the calibration, and the noise floor of the response of the SERF magnetometer is measured and recorded. This procedure enables us to distinguish the noise limit related to quantum noise and technical noise of the probe beam from the Johnson noise of the magnetic shield and pump-beam related noise. Then we increase the power of the pump beam in steps of 1 mW, and repeat the experiments until the power of the pump beam reaches 10 mW. The experimental results are shown in Fig. <ref>, the peaks of the signal responses at 30 Hz are caused by the probe beam's pumping effect and the applied calibration magnetic field <cit.>. For comparison, the magnetic noise limit of the shield is also shown in the figure (single channel), which is the Johnson noise of the magnetic shield, and is approximately 7.5 fT/√(Hz), which matches well with the theoretical prediction. The magnetic noise of a finite length mu-metal magnetic shield can be written as <cit.>
δ B_mag=μ_0/r√(GkTσ t_h),
where μ_0 is the permeability of vacuum, G is a constant determined by the geometry of the magnetic shield <cit.>, k is the Boltzmann constant, T is the temperature of the magnetic shield, σ is the conductivity of the mu-metal, t_h is the thickness of the innermost magnetic shield, which is approximately 1 mm in the experiment, and r is the radius of the innermost magnetic shield, which is 0.2 m in the experiment.
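For orientation, the shield-noise formula can be evaluated with representative material parameters; the mu-metal conductivity and the geometry factor G below are assumed values, chosen to illustrate that the ~7.5 fT/√(Hz) floor is consistent with the quoted shield dimensions:

```python
import math

mu0, k_B = 4e-7 * math.pi, 1.380649e-23
T     = 300.0   # K, shield near room temperature
sigma = 1.8e6   # S/m, assumed mu-metal conductivity
t_h   = 1e-3    # m, innermost shield thickness
r     = 0.2     # m, innermost shield radius
G     = 0.19    # assumed geometry factor for this aspect ratio

dB_mag = (mu0 / r) * math.sqrt(G * k_B * T * sigma * t_h)
print(f"{dB_mag * 1e15:.1f} fT/sqrt(Hz)")  # ~7.5 fT/rtHz, as measured
```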
The frequency response of an undetuned SERF magnetometer is equivalent to a first order low-pass filter with a cutoff frequency equal to (R+Γ_rel)/q <cit.>, which means the signal response decreases as the frequency increases. However, there is flicker noise from a magnetic shield below 20 Hz <cit.>. Finally, the most sensitive frequency range of a SERF magnetometer is usually between 20 Hz and 40 Hz. In order to demonstrate the relationship between the power of the pump beam and the noise limits, we estimate the noise limits in Fig. <ref> by calculating the sensitivities around 30 Hz, and plot the results in Fig. <ref> as black dots. In order to maximize the optical path length of the probe beam propagating through the spherical cell, the probe beam is directed through the center of the cell. The overlapping volume of the probe beam and the pump beam is thus located in the center of the cell. The actual pumping rate of the pump beam should be modified based on Eq.(<ref>). The noise limits are plotted in Fig. <ref>. The technical limit is set by the optical rotation sensitivity of approximately 1× 10^-7 rad/√(Hz), which is calibrated by replacing the cell with another known Verdet constant Faraday modulator. According to Fig. <ref>, when the power of the pump beam is far from sufficient to fully polarize the alkali atoms, the power of the pump beam in the center of the cell attenuates faster than linearly. When the power of the pump beam is sufficient to nearly fully polarize the alkali atoms, the pump power attenuates linearly with propagation distance <cit.>. In the high pump power regime, the technical noise of the optical rotation measurement approaches the intrinsic noise limit of the SERF magnetometer. The experimental results are larger than the theoretical prediction of the technical limits set by the sensitivity of the optical rotation measurement in the high pumping rate region, which could be caused by the non-negligible light shift due to the large pump power and/or the pressure shifts caused by the buffer gas <cit.>. The modified model for calculating the noise limit of the SERF magnetometer will be helpful in optimizing the power of the pump beam, and in determining the bottleneck noise limit of the experimental apparatus.
The demonstrated noise limit of the apparatus is better than 1 fT/√(Hz), which is still much larger than the fundamental sensitivity of the SERF magnetometer. Compared with the most sensitive SERF magnetometer mentioned in <cit.>, our SERF apparatus does not have the innermost low-noise ferrite magnetic shield, and the quantum noise limit of our SERF apparatus could be further improved. This can be achieved by replacing our Faraday modulator and expanding both the pump and probe beams to increase the overlapping volume. (Note that if we increase the sensitivity by expanding both pump and probe beams, it is more accurate to determine the noise limit by averaging the sensitivity instead of applying a single value of z in Eq.(<ref>).)
§ CONCLUSION
Modified sensitivity limits of a SERF magnetometer are determined, which account for absorption of the pump beam by the alkali atoms. This absorption modification is demonstrated with a 1 fT/√(Hz) SERF magnetometer, where the technical limit set by optical-rotation measurement sensitivity is also identified. SERF magnetometers are currently the most sensitive magnetic sensors in the low-frequency region, whose sensitivity is competitive in the SERF-CASPEr experiments. There are several difficulties in using SERF magnetometers in CASPEr, one of which is the limited field range satisfying the condition that the spin-exchange rate exceeds the Larmor precession frequency (the SERF regime). To solve this problem, a superconducting flux transformer is introduced to the SERF-CASPEr experiments which effectively displaces the large sweeping magnetic field away from the SERF magnetometer. Another potential advantage of the flux transformer is the use of low-loss superconducting tunable capacitors to increase the enhancement factor, and corresponding spin precession measurement sensitivity, by working in the tuned mode <cit.>. Another difficulty only briefly considered above is the thermal isolation required between the secondary coil and the SERF magnetometer, because the cell of the SERF magnetometer needs to be heated to increase the vapor density of the alkali atoms, whereas the flux transformers need to be cooled to below the critical temperature. However, this can likely be overcome with careful engineering of the experiment. A ferromagnetic needle magnetometer is another potential magnetic sensor that could be applied in future versions of CASPEr experiments, which in principle has a better quantum noise limit and can operate at cryogenic temperatures <cit.>. The needle magnetometer does not have the thermal isolation issues, and could benefit from the superconducting transformer (we can easily make the enhancement factor k_FT>1) and the longitudinal detection scheme (extending the limited bandwidth of the needle magnetometer).
§ ACKNOWLEDGMENTS
The authors wish to thank Giuseppe Ruoso and Surjeet Rajendran for the useful comments. This project has received funding from the European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation programme (grant agreement No 695405). We acknowledge the support of the Simons and Heising-Simons Foundations, the DFG Reinhart Koselleck project, the National Science Foundation of the USA under grant PHY-1707875, and the National Science Foundation of China under Grant No. 61227902.
§ APPENDIX: LONGITUDINAL DETECTION SCHEME
The setup of the SERF-CASPEr-LOD is similar to the SERF-CASPEr setup (as shown in Fig. <ref>), except an additional oscillating field B_mcos(ω_m t) is applied perpendicular to B_0 and the pickup coil is oriented along the B_0 axis. ω_a and ω_m are within the linewidth (1/T_2) of the Larmor resonance of the magnetized ferroelectric samples. The primary coil, now oriented along the B_0 direction, picks up the time-varying magnetization of the sample which can be written as <cit.>
Δ M_z=M_z-M_0 ≈M_0/4γ_Pb^2T_1T_2 B_m B_a cos[(ω_m-ω_a)t] =n_Pbpμγ_PbT_2B_acos[(ω_m-ω_a)t]×γ_PbB_mT_1/4 ,
where M_z is the longitudinal magnetization of ferroelectric sample, M_0 is the static magnetization and B_m ≪ 1/(γ_Pb√(T_1T_2)) to prevent saturation, for example, for T_2=1 s, B_m ≪ 0.3 nT, for T_2=1 ms, B_m ≪ 10 nT. In order to simplify the following calculations, we assume an appropriate amplitude of the oscillating magnetic field B_m to let γ_PbB_mT_1/4=1, for T_1≈ 1 hr, B_m ≈ 20 pT.
One advantage of applying this strategy in the SERF-CASPEr is that we can keep the SERF magnetometer working in the frequency region corresponding to the optimum sensitivity by tuning the frequency of the oscillating magnetic field ω_m so that the frequency of the oscillating magnetization (ω_m-ω_a) is at the optimum. However, the technical noise should be carefully considered when using the LOD scheme. The magnetic noise of the leading field (B_0) will directly couple through the flux transformer and contribute to the noise measured by the SERF magnetometer; the state-of-the-art superconducting magnet system mentioned in <cit.> realizes a stability of 17 ppt/hour, which means that for a 10 T leading field from the superconducting magnet, the low-frequency drift of the magnetic field is approximately 170 pT/hour. If the spectrum of the leading magnetic field noise is concentrated mostly in very low frequencies, it may be possible to tune the frequency of the oscillating magnetization far enough away from the peak of the magnetic field noise spectrum to enable a sensitive measurement. The spin projection noise produced by the sample can be estimated as <cit.>
B_spin ≈μ_0μ√(n_Pb/V_Pb∫_ω_0+δ f-1/2π T_b^ω_0+δ f+1/2π T_b1/8T_2/1+T_2^2(ω_m-ω_0)^2 dω_p)
= μ_0μ√(n_Pb/8V_PbT_2)
√(ln[1+T_2^2 ( δ_f+1/2π T_b) ] -ln[1+T_2^2(δ_f-1/2π T_b) ]),
where V_Pb is the volume of the sample and δ f is the offset of the center of the axion signal from the Larmor frequency; here δ f= 100 Hz (half the bandwidth of the SERF magnetometer). For low frequencies (masses) the axion coherence is sufficiently long such that T_2 limits the “signal bandwidth time”. For V_Pb≈ 785 cm^3 and an assumed T_2=1 s, this leads to B_spin≈ 0.2 aT/√(Hz). To make sure the technical noise does not surpass the spin-projection noise, the relative amplitude noise of the pump field should be smaller than 10^-8/√(Hz).
Practically, the noise of the SERF magnetometer is far larger than the spin-projection noise of the ferroelectric sample. If the pump field has a white noise B_mn, then in order for the associated magnetic noise not to surpass the sensitivity of the SERF magnetometer, assumed to be 50 aT/√(Hz), we require
B_mn<δ B_SERF/k_FTμ_0 n_Pb p μγ_Pb T_2.
For Phase 1, p = 0.001 and T_2 = 5 ms give B_mn < 40 aT/√(Hz); for Phase 2, p = 1 and T_2 = 1 s give B_mn < 0.04 aT/√(Hz). With B_m=20 pT, the requirement on the relative amplitude noise of the pump field is approximately 4×10^-6 /√(Hz) for Phase 1 and approximately 4×10^-9 /√(Hz) for Phase 2. The sensitivity projection plot of SERF-CASPEr-LOD Phase 1 is shown in Fig. <ref>; the measurement time is reduced to 1 hour, because when T_2=5 ms it would take approximately 1 year of continuous data acquisition to sweep the axion mass up to 1 MHz. For SERF-CASPEr-LOD Phase 2, the measurement time of a single frequency point is reduced to 18 s, and it will take approximately 1 year of continuous data acquisition to sweep the axion mass up to 1 MHz.
We assumed the tilt angle caused by vibration is θ≪ 1. As shown in Fig. <ref>, in the conventional scheme, the vibrational noise of the leading field B_0 picked up by the primary coil is
B_vnc=B_0sin(θ)≈θ B_0.
In the LOD scheme, the vibrational noise of the leading field B_0 picked up by the primary coil is
B_vnl=B_0cos(θ)-B_0=-2B_0sin^2(θ/2) ≈ -θ^2 B_0/2.
According to Eqs. (<ref>) and (<ref>), the vibrational noise is quadratically suppressed in the LOD scheme which may become a distinct advantage in the event that the sensitivity of CASPEr is limited by vibrational noise.
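A one-line numerical comparison of the two pickup expressions for illustrative values of θ and B_0:

```python
import math

B0, theta = 10.0, 1e-6  # T and rad, illustrative values only

conventional = B0 * math.sin(theta)          # ~ theta * B0
lod = 2.0 * B0 * math.sin(theta / 2.0) ** 2  # ~ theta^2 * B0 / 2

print(f"conventional: {conventional:.1e} T,  LOD: {lod:.1e} T")
# 1.0e-05 T versus 5.0e-12 T: quadratic suppression of the vibrational pickup
```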
http://arxiv.org/abs/1701.07466v3 (25 January 2017)
cond-mat.mtrl-sci, cond-mat.mes-hall
The Gaussian Stiffness of Graphene
deduced from a Continuum Model
based on Molecular Dynamics Potentials
Cesare Davini^1 Antonino Favata^2 Roberto Paroni^3
December 30, 2023
================================================================================================================
^1 Via Parenzo 17, 33100 Udine
mailto:cesare.davini@uniud.itcesare.davini@uniud.it
^2 Department of Structural and Geotechnical Engineering
Sapienza University of Rome, Rome, Italy
mailto:antonino.favata@uniroma1.itantonino.favata@uniroma1.it
^3 Dipartimento di Architettura, Design e Urbanistica
University of Sassari, Alghero (SS), Italy
mailto:paroni@uniss.itparoni@uniss.it
§ ABSTRACT
We consider a discrete model of a graphene sheet with atomic interactions governed by a harmonic approximation of the 2nd-generation Brenner potential that depends on bond lengths, bond angles, and two types of dihedral angles. A continuum limit is then deduced that fully describes the bending behavior. In particular, we deduce for the first time an analytical expression of the Gaussian stiffness, a scarcely investigated parameter ruling the rippling of graphene, for which contradictory values have been proposed in the literature. We disclose the atomic-scale sources of both bending and Gaussian stiffnesses and provide for them quantitative evaluations.
Keywords: Graphene, Continuum Modeling, Gaussian stiffness.
§ INTRODUCTION
Graphene has attracted increasing interest during the past few years, and is nowadays used in a great variety of applications, taking advantage of its extraordinary mechanical, electrical and thermal conductivity properties. Nevertheless, its potentialities, and those of graphene-based materials, are far from being fully explored and exploited, and many studies are carried out by the scientific community in order to develop new technological applications <cit.>.
The understanding of the bending behavior of graphene is of paramount importance in several technological applications. It is exploited, for example, to predict the performance of graphene nano-electro-mechanical devices and ripple formation <cit.>, and it is proposed to be the key point to produce efficient hydrogen-storage devices <cit.>.
A very recent review in Materials Today by Deng & Berry <cit.> gives an overview on the hot problem of wrinkling, rippling and crumpling of graphene, highlighting formation mechanisms and applications. Indeed, these corrugations can modify its electronic structure, create polarized carrier puddles, induce pseudo-magnetic field in bilayers and alter surface properties. Although a great effort has been made on the experimental side, predictive models are still lacking. They are of crucial importance when these phenomena need to be controlled and designed.
In particular, since the bending stiffness and the Gaussian stiffness (that is, the reluctance to form non-null Gaussian curvatures) are the two crucial parameters governing the rippling of graphene, it is necessary to accurately determine them for both the design and the manipulation of graphene morphology. Although several evaluations of the bending stiffness have been proposed in the literature, the Gaussian stiffness has not been the object of intensive study. Indeed, as pointed out in a very recent review on the mechanical properties of graphene <cit.>, only two conflicting evaluations have been proposed. In <cit.>, periodic boundary conditions have been used within a quantum-mechanical framework, and the value -0.7 eV has been found, while in <cit.> the estimate of -1.52 eV has been obtained by combining the configurational energy of membranes determined by the Helfrich Hamiltonian with energies of fullerenes and single wall carbon nanotubes calculated by Density Functional Theory (DFT). At the discrete level there are two main difficulties in the evaluation of the Gaussian stiffness: on the one hand, controlling discrete double-curvature surfaces is problematic, and on the other hand, a suitable notion of Gaussian curvature at the discrete level should be introduced. Instead, when well established continuum models are adopted, such as plate theory, one has the problem of determining the equivalent stiffnesses, let alone the conceptual crux of giving a meaning to the notion of thickness (see <cit.>, <cit.> and references therein).
In this paper we deduce a continuum 2-dimensional model of a graphene sheet inferred from Molecular Dynamics (MD). In particular, looking at the 2nd-generation reactive empirical bond-order (REBO) potential <cit.>, we give a nano-scale description of the atomic interactions and then we deduce the continuum limit, avoiding the problem of postulating an “equivalent thickness” and circumventing artificial procedures to identify the material parameters that describe the mechanical response of a plate within the classical theory.
Our analysis of the atomic-scale interaction relies on the discrete mechanical model proposed in <cit.> and exploited in <cit.>, whose results are also based on the 2nd-generation Brenner potential. This potential is largely used in MD simulations for carbon allotropes; for a detailed description of its general form and that adopted in our theory we refer the reader to Appendix B of <cit.>. Here, we recall the key ingredients needed:
* the kinematic variables associated with the interatomic bonds involve first, second and third nearest neighbors of any given atom. In particular, the kinematical variables we consider are bond lengths, bond angles, and dihedral angles; from <cit.> it results that these latter are of two kinds, that we here term C and Z, as carefully described in Sec. <ref>.
* graphene suffers an angular self-stress, and the self-energy associated with the self-stress (sometimes called cohesive energy in the literature) is quantitatively relevant;
* the energetic contribution of dihedral interaction is very relevant in bending.
For the first time, we propose a continuum model able to predict both the bending and the Gaussian stiffnesses. The analytical formula we obtain for the former predicts exactly the same value as that computed with MD simulations of the last generation. The value of the Gaussian stiffness we obtain is in very good agreement with DFT computations proposed in <cit.>.
For the modeling of graphene many different approaches at different scales can be found in the literature, ranging from first principle calculations <cit.>, atomistic calculations <cit.> and continuum mechanics <cit.>. Furthermore, mixed atomistic formulations with finite elements have been reported for graphene <cit.>.
The paper is organized as follows. In Sec. <ref>, we describe the kinematics and the energetic of the graphene sheet at the nano-scale. In Sec. <ref>, we deduce the strain measures for the change of edge lengths, wedge angles and dihedral angles, approximated to the lowest order that makes the energy quadratic in the displacement. In Sec. <ref>, the total energy is split in its in-plane and out-of-plane contributions, and focus is set on the latter, having the first already been considered in <cit.>. In Sec. <ref>, we deduce a continuous energy that approximates the discrete energy and in Sec. <ref> the limit energy is rearranged in a more amenable form, able to put in evidence the equivalence with plate theory. In Sec. <ref>, quantitative results for the continuum material parameters are deduced, by means of the 2nd-generation Brenner potential, and compared with the literature. Appendix <ref>, containing some computations ancillary to Sec. <ref>, completes the paper.
§ DESCRIPTION OF KINEMATICS AND ENERGETICS OF THE GRAPHENE SHEET
At the nano-scale a graphene sheet is a discrete set of carbon atoms that, in the
absence of external forces, sit at the vertices of a
periodic array of hexagonal cells. More specifically,
atoms occupy the nodes of the 2–lattice, see Figure <ref>, generated by two simple Bravais lattices
L_1(ℓ) = {𝐱∈ℝ^2: 𝐱 = n^1ℓ𝐚_1 + n^2ℓ𝐚_2, (n^1, n^2) ∈ℤ^2 },
L_2(ℓ) = ℓ𝐩 + L_1(ℓ),
simply shifted with respect to one another. In (<ref>), ℓ denotes
the lattice size (the reference interatomic distance), while
ℓ𝐚_α and ℓ𝐩 respectively are the lattice
vectors and the shift vector, whose Cartesian components
are given by
𝐚_1 =(√(3), 0), 𝐚_2 = (√(3)/2, 3/2), 𝐩 = (√(3)/2, 1/2).
The sides of the hexagonal cells in Figure <ref> stand for the bonds
between pairs of next nearest neighbor atoms and are represented by the
vectors
𝐩_α = 𝐚_α - 𝐩 (α = 1, 2), 𝐩_3 = -𝐩.
For convenience we also set
𝐚_3=𝐚_2-𝐚_1.
As reference configuration we take the set of points 𝐱^ℓ∈ L_1(ℓ)∪ L_2(ℓ) contained in a bounded open set Ω⊂ℝ^2.
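As a quick sanity check of the lattice and bond vectors just introduced, a minimal numerical sketch (with ℓ = 1; ours, not part of the derivation) verifies the unit bond lengths and the 2π/3 wedge angles:

```python
import numpy as np

ell = 1.0  # lattice size (reference bond length)
a1 = np.array([np.sqrt(3.0), 0.0])
a2 = np.array([np.sqrt(3.0) / 2.0, 1.5])
p  = np.array([np.sqrt(3.0) / 2.0, 0.5])

p1, p2, p3 = a1 - p, a2 - p, -p  # bond vectors of Eq. (3)

for v in (p1, p2, p3):           # every bond is a unit vector (times ell)
    assert abs(np.linalg.norm(v) - 1.0) < 1e-12
for u, v in ((p1, p2), (p2, p3), (p3, p1)):
    angle = np.degrees(np.arccos(np.dot(u, v)))
    assert abs(angle - 120.0) < 1e-9  # wedge angles at ease are 2*pi/3

x = ell * p                      # a node of L_2 ...
print([list(x + ell * q) for q in (p1, p2, p3)])  # ... and its three neighbors
```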
Graphene mechanics is ruled by the interactions between
the carbon atoms given by some suitable potential. According to the 2nd-generation Brenner potential <cit.>, as detailed in <cit.>, in order to account properly for the mechanical behavior of a bended graphene sheet it is necessary to consider three types of energetic contributions, respectively coming from: binary interactions between next nearest atoms (edge bonds), three-bodies interactions between consecutive pairs of next nearest atoms (wedge bonds) and four-bodies interactions between three consecutive pairs of next nearest atoms (dihedral bonds). There are two types of relevant dihedral bonds: the Z-dihedra
in which the edges connecting the four atoms form a z-shape and the C-dihedra in which the edges form a c-shape (see Fig. <ref>).
We consider a harmonic approximation of the stored energy and assume that it is given by the sum of the following terms:
𝒰_ℓ^l = 1/2 ∑_ℰ k^l (l - l^∘)^2 ,
𝒰_ℓ^ϑ = 1/2 ∑_𝒲 k^ϑ (ϑ - ϑ^∘)^2 ,
𝒰_ℓ^Θ = 1/2 ∑_𝒵 k^𝒵 (Θ^𝒵 - Θ^∘)^2 + 1/2 ∑_𝒞 k^𝒞 (Θ^𝒞 - Θ^∘)^2 .
𝒰_ℓ^l, 𝒰_ℓ^ϑ and 𝒰_ℓ^Θ are the energies of the edge bonds, the wedge bonds and the dihedral bonds, respectively; l denotes the distance between nearest neighbor atoms, ϑ the angle between pairs of edges having a lattice point in common, and Θ^𝒵 and Θ^𝒞 the Z- and C-dihedral angles between two consecutive wedges, to be defined later (see Fig. <ref>); l^∘ is the edge length at ease,
ϑ^∘ the angle at ease between consecutive edges and
Θ^∘ the dihedral angle at ease.
The sums extend to
all edges, ℰ, all wedges, 𝒲, all Z-dihedra, 𝒵, and all C-dihedra, 𝒞, contained in the set Ω. The
bond constants k^l, k^ϑ, k^𝒵, and k^𝒞 will be deduced by making use of the 2nd-generation Brenner potential.
The graphene sheet does not have a configuration at ease (i.e. stress-free). Indeed, in <cit.> it has been shown that
Θ^∘=0, l^∘ =ℓ, ϑ^∘=2/3π + δϑ_0,
where δϑ_0 ≠ 0.
We set δΘ:=Θ, l= ℓ + δ l and ϑ=2/3π + δϑ and write (<ref>) as
𝒰_ℓ^l = 1/2 ∑_ℰ k^l (δ l)^2 ,
𝒰_ℓ^ϑ = 1/2 ∑_𝒲 k^ϑ (δϑ - δϑ_0)^2 ,
𝒰_ℓ^Θ = 1/2 ∑_𝒵 k^𝒵 (δΘ^𝒵)^2 + 1/2 ∑_𝒞 k^𝒞 (δΘ^𝒞)^2 .
In particular, up to a constant, the wedge energy takes the form
𝒰_ℓ^ϑ = τ_0∑_𝒲δϑ + 1/2 ∑_𝒲 k^ϑ (δϑ)^2,
with
τ_0 := -k^ϑ δϑ_0
the angle self-stress.
The dihedral bonds play an important role because they contribute to the stored energy by about 50%, see <cit.>, the rest is due to the angle self-stress τ_0 associated to the wedge bonds.
The energy decomposition (<ref>) is based on the choice of the set of kinematical variables {l, ϑ, , }. This choice is the most natural, if one considers the 2nd-generation Brenner potential, where all those variables appear in explicit manner. A harmonic approximation in each of those parameters is of course unique.
In the next section we shall make explicit the change of length δ l, the change of wedge angle δϑ, and the changes of the Z- and C-dihedral angles δΘ^𝒵 and δΘ^𝒞.
In Section <ref>, with the notation introduced in the next section, we shall write the energies
(<ref>) more explicitly.
§ APPROXIMATED STRAIN MEASURES
In this section we calculate the strain measures associated to a change of configuration described by a displacement field 𝐮: (L_1(ℓ)∪ L_2(ℓ))∩Ω→ℝ^3, approximated to the lowest order that makes the energy quadratic in 𝐮.
§.§ Change of the edge lengths
With δ l_i(𝐱^ℓ) we denote the change in length of the edge parallel to 𝐩_i and starting from the lattice point 𝐱^ℓ∈ (L_1(ℓ)∪ L_2(ℓ))∩Ω. Thus,
δ l_i(𝐱^ℓ) =|(𝐱^ℓ + ℓ𝐩_i + 𝐮(𝐱^ℓ+ℓ𝐩_i)) - (𝐱^ℓ +𝐮(𝐱^ℓ))|-ℓ
=|ℓ𝐩_i + (𝐮(𝐱^ℓ+ℓ𝐩_i) - 𝐮(𝐱^ℓ))|-ℓ,
and up to terms o(|𝐮|) can be rewritten as
δ l_i (𝐱^ℓ)= (𝐮(𝐱^ℓ+ℓ𝐩_i) - 𝐮(𝐱^ℓ))·𝐩_i, i=1,2,3.
In particular, the first order changes are determined by the in-plane components of 𝐮 only.
For each fixed node 𝐱^ℓ∈ (L_1(ℓ)∪ L_2(ℓ))∩Ω we denote by ϑ_i(𝐱^ℓ) the angle of the wedge delimited by the edges parallel to 𝐩_i+1 and 𝐩_i+2; that is, the wedge angle opposite to the i-th edge (see Fig. <ref>). Here, i, i+1, and i+2 take values in {1,2,3} and the sums should be interpreted mod 3: for instance, if i=2 then i+1=3 and i+2=1.
From (<ref>) we see that the change in the wedge angle enters into the energy not just quadratically but also linearly, therefore the variations of the wedge angle should be computed up to the second order approximation.
To keep the notation compact, we set
𝐮_i:=𝐮(𝐱^ℓ+ℓ𝐩_i), 𝐮_0:=𝐮(𝐱^ℓ).
Let
𝐠_i+1 :=(𝐱^ℓ + ℓ𝐩_i+1 + 𝐮(𝐱^ℓ+ℓ𝐩_i+1)) - (𝐱^ℓ +𝐮(𝐱^ℓ))
=ℓ𝐩_i+1+(𝐮_i+1 -𝐮_0),
and
𝐠_i+2 :=(𝐱^ℓ + ℓ𝐩_i+2 + 𝐮(𝐱^ℓ+ℓ𝐩_i+2)) - (𝐱^ℓ +𝐮(𝐱^ℓ))
=ℓ𝐩_i+2+(𝐮_i+2 -𝐮_0),
be the images of the edges parallel to 𝐩_i+1 and 𝐩_i+2 and starting at 𝐱^ℓ. Then, the angle ϑ_i=ϑ_i(𝐱^ℓ) is given by
cos(ϑ_i)=𝐠_i+1·𝐠_i+2/(|𝐠_i+1||𝐠_i+2|).
Calculations given in Appendix <ref> yield that
ϑ_i= 2/3π+δϑ_i^(1)+δϑ_i^(2) + o(|𝐮|^2),
where δϑ_i^(1) and δϑ_i^(2) are the first order and the second order variation, respectively, of the wedge angle with respect to the reference angle 2/3π. Therefore, keeping up to second order terms one has that
δϑ_i=δϑ_i^(1)+δϑ_i^(2).
It turns out that the first order variation takes the form
δϑ_i^(1)(𝐱^ℓ)=-1/ℓ(𝐮_i+1-𝐮_0)·𝐩_i+1^⊥ + 1/ℓ(𝐮_i+2 -𝐮_0)·𝐩_i+2^⊥,
with 𝐩_i+1^⊥ defined by
𝐩_i+1^⊥:=(𝐩_i+2 +1/2𝐩_i+1)/|𝐩_i+2 +1/2𝐩_i+1|=2/√(3)(𝐩_i+2 +1/2𝐩_i+1), i=1, 2, 3,
that is, the unit vector orthogonal to 𝐩_i+1 (cf. equation (19) in <cit.>).
Figure <ref> illustrates the geometrical meaning of formula (<ref>).
In particular,
∑_i=1^3δϑ_i^(1)(𝐱^ℓ)=0,
as it could have been deduced from geometrical considerations.
The second order variation is given by, see Appendix <ref>,
δϑ_i^(2)(^ℓ)=-1/√(3)[ -1/2(δϑ_i^(1))^2+2/ℓ^2(𝐮_i+1 - 𝐮_0)·(𝐮_i+2 - 𝐮_0)-
((𝐮_i+1 - 𝐮_0)/ℓ^2·_i+1+(𝐮_i+2 - 𝐮_0)/ℓ^2·_i+2)×
×((𝐮_i+1 - 𝐮_0)/ℓ^2·_i+2+(𝐮_i+2 - 𝐮_0)/ℓ^2·_i+1 -√(3)/2δϑ_i^(1))
-_i+1·_i+2/ℓ^4(|𝐮_i+1 - 𝐮_0|^2-21/ℓ^2( _i+1·(𝐮_i+1 - 𝐮_0) )^2
+|𝐮_i+2 - 𝐮_0|^2-21/ℓ^2( _i+2·(𝐮_i+2 - 𝐮_0) )^2 ].
By algebraic manipulation one finds that
∑_i=1^3δϑ_i(𝐱^ℓ)=∑_i=1^3δϑ_i^(2)(𝐱^ℓ)=-3√(3)/ℓ^2(1/3∑_i=1^3w(𝐱^ℓ+ℓ𝐩_i)-w(𝐱^ℓ))^2,
where w denotes the out-of-plane component of the displacement, that is
w:=𝐮·𝐞_3,
where 𝐞_3 is the unit vector perpendicular to the undeformed sheet.
Note that, by (<ref>), ∑_iδϑ_i(𝐱^ℓ) is non-positive and hence the contribution of the self-stress to the strain energy is non-negative for τ_0<0, i.e., for δϑ_0>0, see (<ref>).
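The identity above can be checked numerically by lifting a node and its three neighbors out of plane and computing the exact wedge angles; a minimal sketch (ℓ = 1, pure out-of-plane displacements assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
ell = 1.0
P = [np.array([np.sqrt(3)/2, -0.5]),
     np.array([0.0, 1.0]),
     np.array([-np.sqrt(3)/2, -0.5])]   # p_1, p_2, p_3

w0 = 1e-4 * rng.standard_normal()       # out-of-plane lift of the node ...
w  = 1e-4 * rng.standard_normal(3)      # ... and of its three neighbors

total = 0.0                             # exact sum of the three wedge angles
for i in range(3):
    e1 = np.append(ell * P[(i + 1) % 3], w[(i + 1) % 3] - w0)
    e2 = np.append(ell * P[(i + 2) % 3], w[(i + 2) % 3] - w0)
    total += np.arccos(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

lhs = total - 2.0 * np.pi                                # sum of the delta(theta_i)
rhs = -3.0 * np.sqrt(3) / ell**2 * (w.mean() - w0) ** 2  # right-hand side above
print(lhs, rhs)   # agree to leading (quadratic) order in the displacement
```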
§.§ Change of the dihedral angles
For each fixed node 𝐱^ℓ∈ L_2(ℓ)∩Ω and for each edge parallel to 𝐩_i
and starting at 𝐱^ℓ we need to define four types of dihedral angles Θ^𝒞_𝐩_i^+(𝐱^ℓ), Θ^𝒞_𝐩_i^-(𝐱^ℓ), Θ^𝒵_𝐩_i𝐩_i+1(𝐱^ℓ) and Θ^𝒵_𝐩_i𝐩_i+2(𝐱^ℓ):
cosΘ^𝒞_𝐩_i^+=(𝐠_i×𝐠_i+1)·(𝐠_i×𝐠_i^+)/|𝐠_i×𝐠_i+1||𝐠_i×𝐠_i^+|,
cosΘ^𝒞_𝐩_i^-=(𝐠_i+2×𝐠_i)·(𝐠_i^-×𝐠_i)/|𝐠_i+2×𝐠_i||𝐠_i^-×𝐠_i|,
cosΘ^𝒵_𝐩_i𝐩_i+1=(𝐠_i×𝐠_i+1)·(𝐠_i^-×𝐠_i)/|𝐠_i×𝐠_i+1||𝐠_i^-×𝐠_i|,
cosΘ^𝒵_𝐩_i𝐩_i+2=(𝐠_i+2×𝐠_i)·(𝐠_i×𝐠_i^+)/|𝐠_i+2×𝐠_i||𝐠_i×𝐠_i^+|,
where
𝐠_i^+= 𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2+𝐮(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2)-( 𝐱^ℓ+ℓ𝐩_i+𝐮(𝐱^ℓ+ℓ𝐩_i) )
= -ℓ𝐩_i+2+𝐮_i^+-𝐮_i, 𝐮_i^+:=𝐮(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2),
𝐠_i^-= 𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1+𝐮(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1)-( 𝐱^ℓ+ℓ𝐩_i+𝐮(𝐱^ℓ+ℓ𝐩_i) )
= -ℓ𝐩_i+1+𝐮_i^--𝐮_i, 𝐮_i^-:=𝐮(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1)
are the images of the vectors -ℓ𝐩_i+2 and -ℓ𝐩_i+1 (see Fig. <ref>, for i=1), parallel to 𝐩_i+2 and 𝐩_i+1 and starting at the image of the point 𝐱^ℓ+ℓ𝐩_i.
Also here, i, i+1, and i+2 take values in {1,2,3} and the sums should be interpreted mod 3: for instance, if i=3 then i+1=1 and i+2=2.
The C-dihedral angle Θ^𝒞_𝐩_i^+(𝐱^ℓ) is the angle corresponding to the C-dihedron with middle edge ℓ𝐩_i and oriented as 𝐩_i^⊥, while Θ^𝒞_𝐩_i^-(𝐱^ℓ) is the angle corresponding to the C-dihedron oriented opposite to 𝐩_i^⊥ (see Fig. <ref> for i=1).
The Z-dihedral angle Θ^𝒵_𝐩_i𝐩_i+1(𝐱^ℓ)
corresponds to the Z-dihedron with middle edge ℓ𝐩_i and the other two edges parallel to 𝐩_i+1
(see Fig. <ref> for i=1).
Then, recalling that δΘ=Θ, calculations in Appendix <ref> yield that
δΘ^𝒞_𝐩_i^+(𝐱^ℓ)=2√(3)/3ℓ[2w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐩_i+1)+w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2)-2w(𝐱^ℓ+ℓ𝐩_i)],
and
δΘ^𝒵_𝐩_i𝐩_i+1(𝐱^ℓ)=
2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1)-w(𝐱^ℓ+ℓ𝐩_i)+w(𝐱^ℓ+ℓ𝐩_i+1)-w(𝐱^ℓ)].
Analogous formulas hold for δΘ^𝒞_𝐩_i^- and δΘ^𝒵_𝐩_i𝐩_i+2:
δΘ^𝒞_𝐩_i^-(𝐱^ℓ)=-2√(3)/3ℓ[2w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐩_i+2)+w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1)-2w(𝐱^ℓ+ℓ𝐩_i)],
δΘ^𝒵_𝐩_i𝐩_i+2(𝐱^ℓ)=2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2)-w(𝐱^ℓ+ℓ𝐩_i)+w(𝐱^ℓ+ℓ𝐩_i+2)-w(𝐱^ℓ)].
§ SPLITTING OF THE ENERGY
The above calculations show that δϑ_i^(1) as well as δ l_i depend upon the in-plane components of 𝐮, cf. (<ref>) and (<ref>), while δϑ_i^(2), δΘ^𝒵, and δΘ^𝒞 depend upon the out-of-plane component of 𝐮, cf. (<ref>), (<ref>), and (<ref>). This yields a splitting of the energy into membrane and bending parts
𝒰_ℓ=𝒰_ℓ^(m)+𝒰_ℓ^(b), 𝒰_ℓ^(b):=𝒰_ℓ^(s) +𝒰_ℓ^(d),
defined by
𝒰_ℓ^(m):=1/2 ∑_ℰ k^l (δ l)^2+1/2 ∑_𝒲 k^ϑ (δϑ^(1))^2,
𝒰_ℓ^(s) :=τ_0∑_𝒲δϑ^(2),
𝒰_ℓ^(d) :=1/2 ∑_𝒵 k^𝒵 (δΘ^𝒵)^2+1/2 ∑_𝒞 k^𝒞 (δΘ^𝒞)^2,
where 𝒰_ℓ^(s) is the self-energy (corresponding to the so-called cohesive energy in the literature) and 𝒰_ℓ^(d) is the dihedral energy.
The analysis in a paper by Davini <cit.> applies here to the in-plane deformations, providing a continuum model of the graphene sheet within the framework of Γ-convergence theory. Hereafter, we concentrate on the out-of-plane deformations.
With the notation introduced in Section <ref> we now write the bending energy more explicitly.
The self-energy can be written as
𝒰^(s)_ℓ = ∑_𝐱^ℓ∈ (L_1(ℓ)∪ L_2(ℓ))∩Ωτ_0 ∑_i = 1^3δϑ_i^(2) (𝐱^ℓ),
where ∑_i =1^3δϑ_i^(2) (𝐱^ℓ) is given in (<ref>) in terms of the out-of-plane component of the displacement w.
We further split the dihedral energy 𝒰_ℓ^(d) in
𝒰_ℓ^(d):=𝒰^𝒵_ℓ+𝒰^𝒞_ℓ,
where
𝒰^𝒵_ℓ=1/2 k^𝒵∑_𝐱^ℓ∈ L_2(ℓ)∩Ω∑_i=1^3 (δΘ^𝒵_𝐩_i𝐩_i+2(𝐱^ℓ))^2+ (δΘ^𝒵_𝐩_i𝐩_i+1(𝐱^ℓ))^2
is the contribution of the Z-dihedra, and
𝒰^𝒞_ℓ=1/2 k^𝒞 ∑_𝐱^ℓ∈ L_2(ℓ)∩Ω∑_i=1^3 (δΘ^𝒞_𝐩_i^+(𝐱^ℓ))^2+ (δΘ^𝒞_𝐩_i^-(𝐱^ℓ))^2
is the contribution of the C-dihedra. The Z- and C-dihedral angles appearing in (<ref>) and (<ref>) are given in terms of w in (<ref>)-(<ref>).
In the next section we deduce, by means of a formal analysis, a continuous version of the discrete bending energy
𝒰_ℓ^(b)=𝒰_ℓ^(s) +𝒰^𝒵_ℓ+𝒰^𝒞_ℓ,
from which we shall deduce
expressions for the sheet's bending stiffnesses. A rigorous analysis based on Γ-convergence theory will be done in a forthcoming paper <cit.>.
§ THE CONTINUUM LIMIT
In this section we find a continuous energy, defined over the domain Ω, that
approximates the discrete bending energy 𝒰_ℓ^(b) defined over the lattice (L_1(ℓ)∪ L_2(ℓ))∩Ω. This is achieved by letting the lattice size ℓ go to zero so that
(L_1(ℓ)∪ L_2(ℓ))∩Ω invades Ω.
With this in mind, in place of a function w:(L_1(ℓ)∪ L_2(ℓ))∩Ω→ℝ, we consider a twice continuously differentiable function w:Ω→ℝ.
Given two vectors and , with ∂^2_w we denote the second partial derivative
of w in the directions /|| and /||, that is
∂^2_w=∇^2 w /||·/||,
where ∇^2 w denotes the Hessian of w. Clearly, we also have
∂^2_w(x_0)=lim_ℓ→ 0w(x_0+ℓ+ℓ)-w(x_0+ℓ)-w(x_0+ℓ)+w(x_0)/ℓ^2 || ||.
The change of the Z-dihedra, see (<ref>)_2, can be rewritten, after setting
𝐪_i:=𝐩_i-𝐩_i+2,
as
δΘ^𝒵_𝐩_i𝐩_i+2(𝐱^ℓ) =2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2)-w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐩_i)+w(𝐱^ℓ+ℓ𝐩_i+2)]
=2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐪_i)-w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐪_i+ℓ𝐩_i+2)+w(𝐱^ℓ+ℓ𝐩_i+2)]
=2√(3)/3ℓ [- ∂^2_𝐪_i𝐩_i+2w(𝐱^ℓ) ℓ^2 |𝐪_i| |𝐩_i+2|+o(ℓ^2)]
=-2 ℓ∂^2_𝐪_i𝐩_i+2w(𝐱^ℓ) +o(ℓ),
where the third equality follows from (<ref>) and in the fourth we used |𝐪_i|=√(3). Similarly, setting
𝐬_i:=𝐩_i-𝐩_i+1,
we have that
δΘ^𝒵_𝐩_i𝐩_i+1(𝐱^ℓ) =
2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1)-w(𝐱^ℓ+ℓ𝐩_i)+w(𝐱^ℓ+ℓ𝐩_i+1)-w(𝐱^ℓ)]
=2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐬_i)-w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐬_i+ℓ𝐩_i+1)+w(𝐱^ℓ+ℓ𝐩_i+1)]
=-2 ℓ∂^2_𝐬_i𝐩_i+1w(𝐱^ℓ) +o(ℓ).
Taking (<ref>) into account, we may rewrite the vectors 𝐪_i and 𝐬_i, defined in (<ref>) and (<ref>), in terms of the lattice vectors 𝐚_i, for instance 𝐪_1=𝐚_1 and 𝐬_1=-𝐚_3,
and then rewrite the Z-dihedral energy, see (<ref>) and rewritten below for the reader's convenience, as
𝒰^𝒵_ℓ =1/2 k^𝒵∑_𝐱^ℓ∈ L_2(ℓ)∩Ω∑_i=1^3 (δΘ^𝒵_𝐩_i𝐩_i+2(𝐱^ℓ))^2+ (δΘ^𝒵_𝐩_i𝐩_i+1(𝐱^ℓ))^2
=(1/2) 4ℓ^2 k^𝒵∑_𝐱^ℓ∈ L_2(ℓ)∩Ω (∂^2_𝐚_1𝐩_3w(𝐱^ℓ))^2+ (∂^2_𝐚_1𝐩_1w(𝐱^ℓ))^2+(∂^2_𝐚_2𝐩_2w(𝐱^ℓ))^2
+(∂^2_𝐚_2𝐩_3w(𝐱^ℓ))^2+(∂^2_𝐚_3𝐩_1w(𝐱^ℓ))^2+(∂^2_𝐚_3𝐩_2w(𝐱^ℓ))^2+o(ℓ^2)
=(1/2)(8√(3)/9) k^𝒵∑_𝐱^ℓ∈ L_2(ℓ)∩Ω( (∂^2_𝐚_1𝐩_3w(𝐱^ℓ))^2+ (∂^2_𝐚_1𝐩_1w(𝐱^ℓ))^2+(∂^2_𝐚_2𝐩_2w(𝐱^ℓ))^2
+(∂^2_𝐚_2𝐩_3w(𝐱^ℓ))^2+(∂^2_𝐚_3𝐩_1w(𝐱^ℓ))^2+(∂^2_𝐚_3𝐩_2w(𝐱^ℓ))^2)|E^ℓ(𝐱^ℓ)|+o(ℓ^2),
where |E^ℓ(𝐱^ℓ)|=(3√(3)/2)ℓ^2 is the area of the hexagon
E^ℓ(𝐱^ℓ) of side ℓ centred at 𝐱^ℓ (see Fig. <ref>).
Let χ_E^ℓ(𝐱^ℓ)(𝐱) be the characteristic function of E^ℓ(𝐱^ℓ), i.e., it is equal to 1 if 𝐱∈
E^ℓ(𝐱^ℓ) and 0 otherwise, and let
W^𝒵_ℓ(𝐱):=∑_𝐱^ℓ∈ L_2(ℓ)∩Ω ((∂^2_𝐚_1𝐩_3w(𝐱^ℓ))^2+ (∂^2_𝐚_1𝐩_1w(𝐱^ℓ))^2+(∂^2_𝐚_2𝐩_2w(𝐱^ℓ))^2
+(∂^2_𝐚_2𝐩_3w(𝐱^ℓ))^2+(∂^2_𝐚_3𝐩_1w(𝐱^ℓ))^2+(∂^2_𝐚_3𝐩_2w(𝐱^ℓ))^2)χ_E^ℓ(𝐱^ℓ)(𝐱).
Then, we may simply write
𝒰^𝒵_ℓ=(1/2)(8√(3)/9) k^𝒵∫_Ω W^𝒵_ℓ(𝐱) d𝐱+o(ℓ^2),
and since W^𝒵_ℓ converges, as ℓ goes to zero, to
(∂^2_𝐚_1𝐩_3w)^2+ (∂^2_𝐚_1𝐩_1w)^2+(∂^2_𝐚_2𝐩_2w)^2+(∂^2_𝐚_2𝐩_3w)^2+(∂^2_𝐚_3𝐩_1w)^2+(∂^2_𝐚_3𝐩_2w)^2,
we deduce that
lim_ℓ→ 0𝒰^𝒵_ℓ=(1/2)(8√(3)/9) k^𝒵∫_Ω (∂^2_𝐚_1𝐩_3w)^2+ (∂^2_𝐚_1𝐩_1w)^2
+(∂^2_𝐚_2𝐩_2w)^2
+(∂^2_𝐚_2𝐩_3w)^2
+(∂^2_𝐚_3𝐩_1w)^2+(∂^2_𝐚_3𝐩_2w)^2 d𝐱=:𝒰^𝒵_0(w).
The functional 𝒰^𝒵_0, defined in (<ref>), is the continuum limit of the Z-dihedral energy.
The functional 𝒰^𝒵_0, defined in (<ref>), is the continuum limit of the Z-dihedral energy.
Working in a similar manner, we find the continuum limit of the C-dihedral energy:
lim_ℓ→ 0𝒰^𝒞_ℓ=(1/2)(16√(3)/9) k^𝒞∫_Ω∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w)^2 d𝐱
=:𝒰^𝒞_0(w),
and the continuum limit of the self-energy:
lim_ℓ→ 0𝒰^(s)_ℓ=-4/9τ_0 ∫_Ω ( ∂^2_𝐩_1 𝐩_1w+∂^2_𝐩_2 𝐩_2w+∂^2_𝐩_1 𝐩_2w)^2 d𝐱
=:𝒰^(s)_0(w).
Detailed calculations leading to (<ref>) and (<ref>) are found in Appendix <ref>.
The total bending limit energy is therefore
𝒰^(b)_0(w):=𝒰^𝒵_0(w)+𝒰^𝒞_0(w)+𝒰^(s)_0(w).
§ THE EQUIVALENT PLATE EQUATION
In this section we rewrite the limit energies in a more amenable form.
We start by manipulating the limit C-dihedral energy. We first note that
∂^2_𝐩_2𝐩_2^⊥w =∇^2 w 𝐩_2·𝐩_2^⊥=∇^2 w (-1/2𝐩_1+√(3)/2𝐩_1^⊥)· (-√(3)/2𝐩_1-1/2𝐩_1^⊥)
=√(3)/4∂^2_𝐩_1𝐩_1w-√(3)/4∂^2_𝐩_1^⊥𝐩_1^⊥w
-1/2∂^2_𝐩_1𝐩_1^⊥w,
and similarly
∂^2_𝐩_3𝐩_3^⊥w=
-√(3)/4∂^2_𝐩_1𝐩_1w+√(3)/4∂^2_𝐩_1^⊥𝐩_1^⊥w
-1/2∂^2_𝐩_1𝐩_1^⊥w,
from which we find that
∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w)^2 =(∂^2_𝐩_1𝐩_1^⊥w)^2+
(√(3)/4∂^2_𝐩_1𝐩_1w-√(3)/4∂^2_𝐩_1^⊥𝐩_1^⊥w
-1/2∂^2_𝐩_1𝐩_1^⊥w)^2
+
(-√(3)/4∂^2_𝐩_1𝐩_1w+√(3)/4∂^2_𝐩_1^⊥𝐩_1^⊥w
-1/2∂^2_𝐩_1𝐩_1^⊥w)^2
=3/2 (∂^2_𝐩_1𝐩_1^⊥w)^2+ 3/8(∂^2_𝐩_1𝐩_1w-∂^2_𝐩_1^⊥𝐩_1^⊥w)^2
=3/2 (∂^2_𝐩_1𝐩_1^⊥w)^2+ 3/8(∂^2_𝐩_1𝐩_1w+∂^2_𝐩_1^⊥𝐩_1^⊥w)^2-3/2∂^2_𝐩_1𝐩_1w ∂^2_𝐩_1^⊥𝐩_1^⊥w
=3/8 (Δ w)^2- 3/2 det∇^2 w,
where Δ w denotes the Laplacian of w.
Hence, the C-dihedral energy defined in (<ref>) rewrites as
𝒰^𝒞_0(w) =(1/2)(16√(3)/9) k^𝒞∫_Ω∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w)^2 d𝐱=
(1/2)(2√(3)/3) k^𝒞∫_Ω (Δ w)^2- 4 det∇^2 w d𝐱.
We now tackle the Z-dihedral energy. Recalling (<ref>), by a calculation similar to that carried on in (<ref>) we find
∂^2_𝐚_1𝐩_3w=
-√(3)/2∂^2_𝐩_1𝐩_1w-1/2∂^2_𝐩_1𝐩_1^⊥w, ∂^2_𝐚_1𝐩_1w=
+√(3)/2∂^2_𝐩_1𝐩_1w-1/2∂^2_𝐩_1𝐩_1^⊥w,
where 𝐩_i^⊥=𝐞_3×𝐩_i, and from these equations we deduce that
(∂^2_𝐚_1𝐩_1w)^2+(∂^2_𝐚_1𝐩_3w)^2=3/2 (∂^2_𝐩_1𝐩_1w)^2+
1/2 (∂^2_𝐩_1𝐩_1^⊥w)^2.
Similar identities hold for 𝐚_2 and 𝐚_3. Thence, we find that the Z-dihedral energy
takes the form
𝒰^𝒵_0(w) =(1/2)(8√(3)/9) k^𝒵∫_Ω (∂^2_𝐚_1𝐩_3w)^2+ (∂^2_𝐚_1𝐩_1w)^2
+(∂^2_𝐚_2𝐩_2w)^2
+(∂^2_𝐚_2𝐩_3w)^2
+(∂^2_𝐚_3𝐩_1w)^2+(∂^2_𝐚_3𝐩_2w)^2 d𝐱
=(1/2)(8√(3)/9) k^𝒵∫_Ω3/2∑_i=1^3(∂^2_𝐩_i𝐩_iw)^2+
1/2∑_i=1^3(∂^2_𝐩_i𝐩_i^⊥w)^2 d𝐱.
The second sum is equal, as it can be checked, to the last line of (<ref>), and with a similar calculation we also find that
∑_i=1^3(∂^2_𝐩_i𝐩_iw)^2=9/8 (Δ w)^2- 3/2 det∇^2 w,
and hence
𝒰^𝒵_0(w)
=(1/2)(8√(3)/9) k^𝒵∫_Ω3/2(9/8 (Δ w)^2- 3/2 det∇^2 w)+
1/2(3/8 (Δ w)^2- 3/2 det∇^2 w) d𝐱
=(1/2)(8√(3)/9) k^𝒵∫_Ω15/8 (Δ w)^2- 3 det∇^2 w d𝐱
=(1/2)(5√(3)/3) k^𝒵∫_Ω (Δ w)^2- 8/5 det∇^2 w d𝐱.
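Both quadratic-form identities used above, ∑_i(∂^2_𝐩_i𝐩_i^⊥w)^2 = (3/8)(Δw)^2 - (3/2)det∇^2w and ∑_i(∂^2_𝐩_i𝐩_iw)^2 = (9/8)(Δw)^2 - (3/2)det∇^2w, hold for any symmetric Hessian; a minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal(3)
H = np.array([[a, b], [b, c]])                  # arbitrary symmetric Hessian

ang = np.array([-np.pi/6, np.pi/2, 7*np.pi/6])  # directions of p_1, p_2, p_3
P  = np.stack([np.cos(ang),  np.sin(ang)], axis=1)
Pp = np.stack([-np.sin(ang), np.cos(ang)], axis=1)   # p_i^perp = e_3 x p_i

mixed = sum((pi @ H @ qi) ** 2 for pi, qi in zip(P, Pp))
plain = sum((pi @ H @ pi) ** 2 for pi in P)

lap, det = np.trace(H), np.linalg.det(H)
print(np.isclose(mixed, 3/8 * lap**2 - 3/2 * det))   # C-dihedral identity
print(np.isclose(plain, 9/8 * lap**2 - 3/2 * det))   # Z-dihedral identity
```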
We now deal with the self-energy. Again with a calculation similar to that carried on in (<ref>) we find
∂^2_𝐩_1𝐩_2w
=-1/2∂^2_𝐩_1𝐩_1w+√(3)/2∂^2_𝐩_1𝐩_1^⊥w,
∂^2_𝐩_2𝐩_2w =1/4∂^2_𝐩_1𝐩_1w+3/4∂^2_𝐩_1^⊥𝐩_1^⊥w
-√(3)/2∂^2_𝐩_1𝐩_1^⊥w,
and hence
𝒰^(s)_0(w) =-4/9τ_0 ∫_Ω ( ∂^2_𝐩_1 𝐩_1w+∂^2_𝐩_2 𝐩_2w+∂^2_𝐩_1 𝐩_2w)^2 d𝐱
=-4/9τ_0 ∫_Ω ( 3/4∂^2_𝐩_1 𝐩_1w+ 3/4∂^2_𝐩_1^⊥𝐩_1^⊥w)^2 d𝐱
=-(1/2)(τ_0/2)∫_Ω ( Δ w)^2 d𝐱.
By summing (<ref>), (<ref>), and (<ref>), we find the total bending energy, defined in (<ref>):
𝒰^(b)_0(w)
= 1/2∫_Ω(5√(3)/3 k^𝒵+2√(3)/3 k^𝒞-τ_0/2)( Δ w)^2
+(- (8/5)·(5√(3)/3) k^𝒵-4·(2√(3)/3) k^𝒞) det∇^2 w d𝐱
=1/2∫_Ω 𝒟( Δ w)^2+𝒟_G det∇^2 w d𝐱,
where
𝒟:= 5√(3)/3 k^𝒵+2√(3)/3 k^𝒞-τ_0/2, 𝒟_G:=- (8/5)·(5√(3)/3) k^𝒵-4·(2√(3)/3) k^𝒞 = -8√(3)/3 (k^𝒵+k^𝒞)
are the bending and the Gaussian stiffnesses, respectively. 𝒟_G is called Gaussian because it multiplies the Gaussian curvature det∇^2w, while Δ w is twice the mean curvature. The analytical expression for the bending stiffness (<ref>)_1 coincides with the one deduced in <cit.>, within a discrete mechanical framework, if one assumes that k^𝒵≡ k^𝒞. It clearly shows that the origin of the bending stiffness is twofold: a part depends on the dihedral contribution, and a part on the self-stress. Quite surprisingly, the self-stress plays no role in the Gaussian stiffness. It is worth noticing that in the above approach there is no need of introducing any questionable effective thickness parameter.
§ NUMERICAL RESULTS
In this section, we adopt the 2nd-generation Brenner potential <cit.> to obtain quantitative results for the continuum material parameters deduced in Sec. <ref>.
The 2nd-generation REBO potentials developed for hydrocarbons by Brenner et al. in <cit.> accommodate up to third-nearest-neighbor interactions through a bond-order function depending, in particular, on dihedral angles. Following Appendix B of <cit.>, we here give a short account of the form of this potential.
The binding energy V of an atomic aggregate is given as a sum over nearest neighbors:
V=∑_i∑_j<i V_ij ;
the interatomic potential V_ij is given by
V_ij=V_R(r_ij)+b_ijV_A(r_ij),
where the individual effects of the repulsion and attraction functions V_R(r_ij) and V_A(r_ij), which model pair-wise interactions of atoms i and j depending on their distance r_ij, are modulated by the bond-order function b_ij. The repulsion and attraction functions have the following forms:
V_A(r) =-f^C(r)∑_n=1^3B_n e^-β_n r ,
V_R(r) =f^C(r)( 1 + Q/r) A e^-α r ,
where f^C(r) is a cutoff function limiting the range of covalent interactions, and where Q, A, B_n, α, and β_n are parameters chosen to fit some material-specific dataset. The remaining ingredient in (<ref>) is the bond-order function:
b_ij=1/2(b_ij^σ-π+b_ji^σ-π)+b_ij^π ,
where apexes σ and π refer to two types of bonds: the strong covalent σ-bonds between atoms in one and the same given plane, and the π-bonds responsible for interlayer interactions, which are perpendicular to the plane of σ-bonds.
The role of function b_ij^σ-π is to account for the local coordination of, and the bond angles relative to, atoms i and j; its form is:
b_ij^σ-π=(1+∑_k≠ i,j f_ik^C(r_ik)G(cosθ_ijk) e^λ_ijk+P_ij(N_i^C,N_i^H) )^-1/2 .
Here, for each fixed pair of indices (i,j), (a) the cutoff function f_ik^C limits the interactions of atom i to those with its nearest neighbors; (b) λ_ijk is a string of parameters designed to prevent attraction in some specific situations; (c) function P_ij depends on N_i^C and N_i^H, the numbers of C and H atoms that are nearest neighbors of atom i; it is meant to adjust the bond-order function according to the environment of the C atoms in one or another molecule; (d) for solid-state carbon, the values of both the string λ_ijk and the function P_ij are taken null; (e) function G modulates the contribution of each nearest neighbour of atom i in terms of the cosine of the angle between the ij and ik bonds; its analytic form is given by three sixth-order polynomial splines.
Function b_ij^π is given a split representation:
b_ij^π=Π_ij^RC+b_ij^DH,
where the first addendum Π_ij^RC depends on whether the bond between atoms i and j has a radical character and on whether it is part of a conjugated system, while the second addendum b_ij^DH depends on dihedral angles and has the following form:
b_ij^DH=T_ij(N_i^t,N_j^t,N_ij^ conj)(∑_k(≠ i,j)∑_l(≠ i,j)( 1-cos^2Θ_ijkl)f_ik^C(r_ik)f_jl^C(r_jl) ) ,
where function T_ij is a tricubic spline depending on N_i^t=N_i^C+N_i^H, N_j^t, and N_ij^ conj, a function of local conjugation, and the dihedral angle is defined as
cosΘ_ijkl=𝐞_jik·𝐞_ijl, 𝐞_jik=𝐫_ji×𝐫_ik/|𝐫_ji×𝐫_ik|, 𝐞_ijl=𝐫_ij×𝐫_jl/|𝐫_ij×𝐫_jl|.
The values of the constants k^𝒵 and k^𝒞 can be deduced by differentiating the potential twice and evaluating the result in the ground state (GS): r_ij=ℓ, θ_ijk=2/3π, Θ_ijkl=0. In particular, we find:
k^Θ:=k^𝒵=k^𝒞=∂^2_Θ_ijklV_ij|_GS=2TV_A(ℓ),
where T is the value of T_ij in the GS.
With this notation the bending stiffness becomes:
𝒟=7√(3)/3k^Θ-τ_0/2.
This expression coincides with that given in [28]:
𝒟=V_A(r_0)/2( (b_0^σ-π)'-14 T_0/√(3)),
after noticing that
V_A(r_0)(b_0^σ-π)'≡-τ_0 and -V_A(r_0)( 7 T_0/√(3))≡ 2TV_A(ℓ) 7/√(3)= 7/√(3)k^Θ.
Instead, in references [4] and [21] the dihedral energies are not contemplated and the bending stiffness found, up to notational differences, coincides with ours after setting k^Θ=0.
With the values reported in <cit.>, we get:
k^Θ=0.0282 nN nm=0.1764 eV.
From <cit.>, we take the value of the selfstress τ_0:
τ_0=-0.2209 nN nm=-1.3787 eV.
With (<ref>) and (<ref>), we obtain:
𝒟=7√(3)/3k^Θ-τ_0/2=0.2247 nN nm=1.4022 eV,
𝒟_G=-16√(3)/3k^Θ=-0.2610 nN nm=-1.6293 eV.
The value of 𝒟 is in complete agreement with the literature <cit.>; from (<ref>) and (<ref>) it is possible to check that the contributions of the self-stress and of the dihedral stiffness amount to about 49.16% and 50.84% of the total, respectively. Neither an analytical evaluation of 𝒟_G nor MD computations have been proposed so far. The value we obtain is in good agreement with the value of -1.52 eV, reported in <cit.> and determined by means of DFT.
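The numerical values above can be reproduced directly from the two stiffness formulas; a minimal sketch using the quoted inputs in eV:

```python
import math

k_theta = 0.1764   # eV, dihedral constant k^Theta
tau_0   = -1.3787  # eV, angular self-stress

D   = 7  * math.sqrt(3) / 3 * k_theta - tau_0 / 2   # bending stiffness
D_G = -16 * math.sqrt(3) / 3 * k_theta              # Gaussian stiffness

print(f"D   = {D:.4f} eV")                              # 1.4023 eV
print(f"D_G = {D_G:.4f} eV")                            # -1.6295 eV
print(f"self-stress share: {(-tau_0 / 2) / D:.2%}")     # 49.16%
print(f"dihedral share:    {1 + (tau_0 / 2) / D:.2%}")  # 50.84%
```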
§ CONCLUSIONS
Starting from a discrete model inferred from MD, we have deduced a continuum theory describing the bending behavior of a graphene sheet. Atomic interactions have been modeled by exploiting the main features of the 2nd-generation Brenner potential and adopting a quadratic approximation of the energy. The deduced continuum limit fully describes the bending behavior of graphene. To our knowledge, it is the first time that an analytical expression of the Gaussian stiffness is given and an explanation of its origins at the atomistic scale is provided. We also derived a quantitative evaluation of the related constitutive parameters.
§ ACKNOWLEDGMENTS
AF acknowledges the financial support of Sapienza University of Rome (Progetto d'Ateneo 2016 — “Multiscale Mechanics of 2D Materials: Modeling and Applications”).
§ APPENDIX
In order to compute the strain measures for small changes of configuration of the graphene foil, we write the displacements of the nodes in the form
𝐮(𝐱^ℓ)= ξ𝔲(𝐱^ℓ),
where ξ is a positive scalar measuring smallness and 𝔲:=𝐮/ξ stands for the displacement distribution normalized accordingly.
§.§ Change of the bond angle
Let us define the bond angle as
cos(ϑ_i(ξ))=(𝐦(ξ)·𝐧(ξ)/|𝐦(ξ)||𝐧(ξ)|) ,
with
𝐦(ξ):=ℓ𝐩_i+1+ξ (𝔲_i+1 - 𝔲_0) 𝐧(ξ):=ℓ𝐩_i+2+ξ (𝔲_i+2 - 𝔲_0),
where we have set
𝔲_0:=𝔲(𝐱^ℓ), 𝔲_i:=𝔲(𝐱^ℓ+ℓ𝐩_i).
Then, from Taylor's expansion we get
ϑ_i(ξ)= ϑ_i(0)+ϑ_i^'(0) ξ+1/2 ϑ_i^''(0) ξ^2 +O(ξ^3),
where the various terms can be calculated by successive differentiations of Eq. (<ref>). Thus,
-sin(ϑ_i(0)) ϑ_i^'(0)=(𝐦(ξ)·𝐧(ξ)/|𝐦(ξ)||𝐧(ξ)|)^'|_ξ=0
=(𝐦'(ξ)·𝐧(ξ)+𝐦(ξ)·𝐧'(ξ)/|𝐦(ξ)||𝐧(ξ)|-𝐦(ξ)·𝐧(ξ)/|𝐦(ξ)||𝐧(ξ)|(𝐦(ξ)·𝐦'(ξ)/|𝐦(ξ)|^2 +𝐧(ξ)·𝐧'(ξ)/|𝐧(ξ)|^2))|_ξ=0,
which yields
ϑ_i^'(0)=-1/ℓ𝐩_i+2 +1/2𝐩_i+1/|𝐩_i+2 +1/2𝐩_i+1|·(𝔲_i+1 - 𝔲_0) - 1/ℓ𝐩_i+1 +1/2𝐩_i+2/|𝐩_i+1 +1/2𝐩_i+2|·(𝔲_i+2 - 𝔲_0),
where we take into account that sin(ϑ_i(0))=√(3)/2, |𝐦(0)|=|𝐧(0)|=ℓ, |𝐩_i+1 +1/2𝐩_i+2|=|𝐩_i+2 +1/2𝐩_i+1| = √(3)/2 and 𝐩_i·𝐩_i+1=-1/2.
Moreover, by differentiating Eq. (<ref>) twice, we get
-cos(ϑ_i(0)) ϑ^'_i(0)^2-sin(ϑ_i(0)) ϑ_i^''(0)= (𝐦(ξ)·𝐧(ξ)/|𝐦(ξ)||𝐧(ξ)|)^''|_ξ=0,
which gives
ϑ_i^''(0)= -1/sinϑ_i(0)(cosϑ_i(0)ϑ^'_i(0)^2+(𝐦(ξ)·𝐧(ξ)/|𝐦(ξ)||𝐧(ξ)|)^''|_ξ=0).
Computations yield that
(𝐦·𝐧/|𝐦||𝐧|)^'' =𝐦^''/|𝐦|·𝐧/|𝐧|+2 𝐦^'/|𝐦|·𝐧^'/|𝐧|+𝐦/|𝐦|·𝐧^''/|𝐧|
-[𝐦^'/|𝐦|·𝐧/|𝐧|+𝐦/|𝐦|·𝐧^'/|𝐧|+(𝐦/|𝐦|·𝐧/|𝐧|)^' ] (𝐦^'/|𝐦|·𝐦/|𝐦|+𝐧^'/|𝐧|·𝐧/|𝐧|)
-𝐦/|𝐦|·𝐧/|𝐧|[ (|𝐦^'|/|𝐦|)^2+𝐦/|𝐦|·𝐦^''/|𝐦|-2(𝐦^'/|𝐦|·𝐦/|𝐦|)^2
+(|𝐧^'|/|𝐧|)^2+𝐧/|𝐧|·𝐧^''/|𝐧|-2(𝐧^'/|𝐧|·𝐧/|𝐧|)^2 ].
Since 𝐦^''(0)=𝐧^''(0)=0 and (𝐦·𝐧/|𝐦||𝐧|)^'|_ξ=0=-sinϑ_i(0)ϑ_i^'(0), we finally have that
(𝐦·𝐧/|𝐦||𝐧|)^''|_ξ=0 =2/ℓ^2(𝔲_i+1 - 𝔲_0)·(𝔲_i+2 - 𝔲_0)-((𝔲_i+1 - 𝔲_0)/ℓ·𝐩_i+1+(𝔲_i+2 - 𝔲_0)/ℓ·𝐩_i+2)×
×((𝔲_i+1 - 𝔲_0)/ℓ·𝐩_i+2+(𝔲_i+2 - 𝔲_0)/ℓ·𝐩_i+1 -sinϑ_i(0)ϑ_i^'(0) )
-(𝐩_i+1·𝐩_i+2)1/ℓ^2(|𝔲_i+1 - 𝔲_0|^2-2( 𝐩_i+1·(𝔲_i+1 - 𝔲_0) )^2
+|𝔲_i+2 - 𝔲_0|^2-2( 𝐩_i+2·(𝔲_i+2 - 𝔲_0) )^2).
All in all, we have that:
ϑ_i^''(0)=-2/√(3)[-1/2ϑ_i^'(0)^2+2/ℓ^2(𝔲_i+1 - 𝔲_0)·(𝔲_i+2 - 𝔲_0)
-((𝔲_i+1 - 𝔲_0)/ℓ·𝐩_i+1+(𝔲_i+2 - 𝔲_0)/ℓ·𝐩_i+2)×
×((𝔲_i+1 - 𝔲_0)/ℓ·𝐩_i+2+(𝔲_i+2 - 𝔲_0)/ℓ·𝐩_i+1 -√(3)/2ϑ_i^'(0) )
-𝐩_i+1·𝐩_i+2/ℓ^2(|𝔲_i+1 - 𝔲_0|^2-2( 𝐩_i+1·(𝔲_i+1 - 𝔲_0) )^2
+|𝔲_i+2 - 𝔲_0|^2-2( 𝐩_i+2·(𝔲_i+2 - 𝔲_0) )^2)].
Recalling that δϑ_i=ϑ_i^'(0) ξ+1/2 ϑ_i^''(0) ξ^2 +O(ξ^3) and that 𝐮=ξ𝔲, we get:
δϑ_i=δϑ_i^(1)+δϑ_i^(2)+O(ξ^3),
with
δϑ_i^(1)=ϑ_i^'(0)ξ=-1/ℓ𝐩_i+2 +1/2𝐩_i+1/|𝐩_i+2 +1/2𝐩_i+1|·(𝐮_i+1 - 𝐮_0) - 1/ℓ𝐩_i+1 +1/2𝐩_i+2/|𝐩_i+1 +1/2𝐩_i+2|·(𝐮_i+2 - 𝐮_0)
and
δϑ_i^(2)=1/2ϑ_i^''(0)ξ^2= -1/√(3)[-1/2(ξϑ_i^'(0))^2+2/ℓ^2(𝐮_i+1 - 𝐮_0)·(𝐮_i+2 - 𝐮_0)
-((𝐮_i+1 - 𝐮_0)/ℓ·𝐩_i+1+(𝐮_i+2 - 𝐮_0)/ℓ·𝐩_i+2)×
×((𝐮_i+1 - 𝐮_0)/ℓ·𝐩_i+2+(𝐮_i+2 - 𝐮_0)/ℓ·𝐩_i+1 -√(3)/2ξϑ_i^'(0) )
-𝐩_i+1·𝐩_i+2/ℓ^2(|𝐮_i+1 - 𝐮_0|^2-2( 𝐩_i+1·(𝐮_i+1 - 𝐮_0) )^2
+|𝐮_i+2 - 𝐮_0|^2-2( 𝐩_i+2·(𝐮_i+2 - 𝐮_0) )^2)].
We write (<ref>) in the simpler form
δϑ_i^(1)=-1/ℓ (𝐮_i+1 - 𝐮_0)·𝐩^⊥_i+1 + 1/ℓ (𝐮_i+2 - 𝐮_0)·𝐩^⊥_i+2,
where the unit vectors 𝐩^⊥_i+1 are defined by (<ref>).
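A quick numerical sanity check of the first-order formula is possible by comparing it with a finite-difference derivative of the bond angle; the sketch below assumes unit vectors 𝐩_i at 120° and random normalized displacements.

```python
import numpy as np

rng = np.random.default_rng(0)
ell = 1.0
ang = [np.pi / 2 + 2 * np.pi * k / 3 for k in range(3)]
p = [np.array([np.cos(a), np.sin(a), 0.0]) for a in ang]   # p_1 + p_2 + p_3 = 0
u0, u1, u2 = (rng.standard_normal(3) for _ in range(3))    # u_0, u_{i+1}, u_{i+2}

def theta(xi, i=0):
    m = ell * p[(i + 1) % 3] + xi * (u1 - u0)
    n = ell * p[(i + 2) % 3] + xi * (u2 - u0)
    return np.arccos(np.dot(m, n) / (np.linalg.norm(m) * np.linalg.norm(n)))

a = p[2] + 0.5 * p[1]   # direction in the first term of theta_i'(0)
b = p[1] + 0.5 * p[2]   # direction in the second term
analytic = (-np.dot(a / np.linalg.norm(a), u1 - u0)
            - np.dot(b / np.linalg.norm(b), u2 - u0)) / ell
xi = 1e-7
print((theta(xi) - theta(0.0)) / xi, analytic)   # agree to O(xi)
```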
§.§ Change of the dihedral angle
To fix the ideas, we focus on the dihedral angles δΘ_𝐩_1^+ and δΘ_𝐩_1𝐩_2, sketched in Fig. <ref>; the other strains can be obtained in an analogous manner.
The first order approximation of the dihedral angle is all we need to evaluate the corresponding energy contribution.
Let us denote by
𝐟_1=ℓ𝐩_1+(𝔲_1-𝔲_0)ξ, 𝐟_2=ℓ𝐩_2+(𝔲_2-𝔲_0)ξ, 𝐟_4=ℓ𝐩_4+(𝔲_4-𝔲_1)ξ
the edge vectors after the deformation. We have that
|sin( Θ_𝐩_1^+(ξ))|=|(𝐟_1×𝐟_2)×(𝐟_1×𝐟_4)|/|𝐟_1×𝐟_2||𝐟_1×𝐟_4|;
it is easy to see that
𝐟_1×𝐟_2=√(3)/2ℓ^2𝐞_3+ ( ℓ𝐩_1×(𝔲_2-𝔲_0) -ℓ𝐩_2×(𝔲_1-𝔲_0))ξ+O(ξ^2),
𝐟_1×𝐟_4=√(3)/2ℓ^2𝐞_3+( ℓ𝐩_1×(𝔲_4-𝔲_1) -ℓ𝐩_4×(𝔲_1-𝔲_0))ξ+O(ξ^2),
whence
(𝐟_1×𝐟_2)×(𝐟_1×𝐟_4)=√(3)/2ℓ^2𝐞_3 ×( ℓ𝐩_1×(𝔲_4-𝔲_1)-ℓ𝐩_4×(𝔲_1-𝔲_0) +
+ℓ𝐩_2×(𝔲_1-𝔲_0)-ℓ𝐩_1×(𝔲_2-𝔲_0))ξ+O(ξ^2).
On recalling the identity
𝐚×(𝐛×𝐜)=𝐛(𝐚·𝐜)-𝐜(𝐚·𝐛),
we obtain:
(𝐟_1×𝐟_2)×(𝐟_1×𝐟_4)=√(3)/2ℓ^2(2𝔴_1-𝔴_4+𝔴_2-2𝔴_0 )ℓ𝐩_1ξ+O(ξ^2),
where we have used the fact that 𝐩_4-𝐩_2(=-𝐩_3-𝐩_2)=𝐩_1 and set 𝔴=𝔲·𝐞_3.
Now,
|𝐟_1×𝐟_2|=|𝐟_1×𝐟_4|=√(3)/2ℓ^2+O(ξ^2),
whence
|sin( Θ_𝐩_1^+(ξ))|=|δΘ_𝐩_1^+|ξ+O(ξ^2)=2√(3)/3ℓ|2𝔴_1-𝔴_4+𝔴_2-2𝔴_0 |ξ+O(ξ^2).
Thus, on recalling that w=𝔴ξ, we conclude that
δΘ_𝐩_1^+=2√(3)/3ℓ(2w_1-w_4+w_2-2w_0 ).
For a generic C-dihedral angle centered in 𝐩_i, we get
δΘ_𝐩_i^+(𝐱^ℓ)=2√(3)/3ℓ[2w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐩_i+1)+w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2)-2w(𝐱^ℓ+ℓ𝐩_i)],
δΘ_𝐩_i^-(𝐱^ℓ)=-2√(3)/3ℓ[2w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐩_i+2)+w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1)-2w(𝐱^ℓ+ℓ𝐩_i)].
For the Z-dihedral angle Θ_𝐩_1𝐩_2, we introduce the vector
𝐟_5=ℓ𝐩_5+(𝔲_5-𝔲_1)ξ,
the image of ℓ𝐩_5 under the deformation. We have that
|sinΘ_𝐩_1𝐩_2|=|(𝐟_1×𝐟_2)×(𝐟_5×𝐟_1)|/|𝐟_1×𝐟_2||𝐟_5×𝐟_1|,
and
𝐟_5×𝐟_1=√(3)/2ℓ^2𝐞_3+( ℓ𝐩_5×(𝔲_1-𝔲_0)-ℓ𝐩_1×(𝔲_5-𝔲_1) )ξ+O(ξ^2),
whence
(𝐟_1×𝐟_2)×(𝐟_5×𝐟_1)=√(3)/2ℓ^2𝐞_3 ×( ℓ𝐩_5×(𝔲_1-𝔲_0)-ℓ𝐩_1×(𝔲_5-𝔲_1)
+ℓ𝐩_2×(𝔲_1-𝔲_0)-ℓ𝐩_1×(𝔲_2-𝔲_0) )ξ+O(ξ^2).
Again, on making use of (<ref>) and recalling that 𝐩_5=-𝐩_2, we obtain:
(𝐟_1×𝐟_2)×(𝐟_5×𝐟_1)=√(3)/2ℓ^2(𝔴_5-𝔴_1+𝔴_2-𝔴_0)ℓ𝐩_1ξ+O(ξ^2).
On noticing that |𝐟_5×𝐟_1|=√(3)/2ℓ^2+O(ξ^2), we get
|sinΘ_𝐩_1𝐩_2|=|δΘ_𝐩_1𝐩_2|ξ+O(ξ^2)=2√(3)/3ℓ|𝔴_5-𝔴_1+𝔴_2-𝔴_0 |ξ+O(ξ^2),
whence
δΘ_𝐩_1𝐩_2=2√(3)/3ℓ(w_5-w_1+w_2-w_0 ).
For a generic Z-dihedral angle centered in 𝐩_i, we get
δΘ_𝐩_i𝐩_i+1(𝐱^ℓ)=2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+1)-w(𝐱^ℓ+ℓ𝐩_i)+w(𝐱^ℓ+ℓ𝐩_i+1)-w(𝐱^ℓ)],
δΘ_𝐩_i𝐩_i+2(𝐱^ℓ)=2√(3)/3ℓ[w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2)-w(𝐱^ℓ+ℓ𝐩_i)+w(𝐱^ℓ+ℓ𝐩_i+2)-w(𝐱^ℓ)].
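Since these stencils are linear in w, they can be checked directly on a quadratic surface w(𝐱)=1/2 𝐱·𝐆𝐱, for which the Taylor expansion is exact; plain algebra gives 2√(3)/3 ℓ(𝐆𝐩_i+1·𝐩_i+1-𝐆𝐩_i·𝐩_i+1) for the first Z-stencil. A short numpy verification (arbitrary symmetric 𝐆, assumed bond length) follows.

```python
import numpy as np

ell = 0.142   # nm, assumed C-C bond length
ang = [np.pi / 2 + 2 * np.pi * k / 3 for k in range(3)]
p = [np.array([np.cos(a), np.sin(a)]) for a in ang]
G = np.array([[0.3, 0.1], [0.1, -0.2]])        # arbitrary symmetric Hessian of w
w = lambda x: 0.5 * x @ G @ x
x0, i = np.zeros(2), 0

stencil = 2 * np.sqrt(3) / (3 * ell) * (w(x0 + ell * p[i] - ell * p[i + 1])
                                        - w(x0 + ell * p[i])
                                        + w(x0 + ell * p[i + 1]) - w(x0))
taylor = 2 * np.sqrt(3) / 3 * ell * (p[i + 1] @ G @ p[i + 1] - p[i] @ G @ p[i + 1])
print(stencil, taylor)   # identical for quadratic w
```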
§.§ Deduction of the continuum limits 𝒰^𝒞_0 and 𝒰^(s)_0
In this Appendix we give a justification of (<ref>) and (<ref>) following the lines outlined in Section <ref> for the derivation of the continuum limit of the Z-dihedral energy.
As in Section <ref>, in place of a function w:(L_1(ℓ)∪ L_2(ℓ))∩Ω→ℝ, we consider a twice continuously differentiable function w:Ω→ℝ.
We first consider the contribution due to the C-dihedra.
Momentarily, to keep the notation compact, we set
𝐠:=∇ w(𝐱^ℓ), 𝐆:=∇^2 w(𝐱^ℓ).
By Taylor's expansion we have
-w(𝐱^ℓ+ℓ𝐩_i+1) =-w(𝐱^ℓ)-ℓ𝐠·𝐩_i+1-1/2ℓ^2 𝐆𝐩_i+1·𝐩_i+1+o(ℓ^2)
w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2) =w(𝐱^ℓ)+ℓ𝐠· (𝐩_i-𝐩_i+2)+1/2ℓ^2 𝐆(𝐩_i-𝐩_i+2)· (𝐩_i-𝐩_i+2)+o(ℓ^2)
-2w(𝐱^ℓ+ℓ𝐩_i) =-2w(𝐱^ℓ)-2ℓ𝐠·𝐩_i- ℓ^2 𝐆𝐩_i·𝐩_i+o(ℓ^2).
Thence, since 𝐩_1+𝐩_2+𝐩_3=0, we find from (<ref>) that
δΘ_𝐩_i^+(𝐱^ℓ) =2√(3)/3ℓ[2w(𝐱^ℓ)-w(𝐱^ℓ+ℓ𝐩_i+1)+w(𝐱^ℓ+ℓ𝐩_i-ℓ𝐩_i+2)-2w(𝐱^ℓ+ℓ𝐩_i)]
=√(3)ℓ/3[-𝐆𝐩_i+1·𝐩_i+1+ 𝐆(𝐩_i-𝐩_i+2)· (𝐩_i-𝐩_i+2)-2𝐆𝐩_i·𝐩_i]+o(ℓ)
and, by substituting 𝐩_i+2=-(𝐩_i+𝐩_i+1), we eventually get
δΘ_𝐩_i^+(𝐱^ℓ) =√(3)ℓ/3[-𝐆𝐩_i+1·𝐩_i+1+ 𝐆(2𝐩_i+𝐩_i+1)· (2𝐩_i+𝐩_i+1)-2𝐆𝐩_i·𝐩_i]+o(ℓ)
=4√(3)ℓ/3[𝐆𝐩_i· (1/2𝐩_i+ 𝐩_i+1)]+o(ℓ)
=2ℓ𝐆𝐩_i·𝐩_i^⊥+o(ℓ)=2ℓ∂^2_𝐩_i𝐩_i^⊥w(𝐱^ℓ)+o(ℓ).
where 𝐩_i^⊥ is defined in (<ref>). Similarly, we also find that
δΘ_𝐩_i^-(𝐱^ℓ)=2ℓ∂^2_𝐩_i𝐩_i^⊥w(𝐱^ℓ)+o(ℓ).
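The same kind of check works for the C-stencil: on a quadratic surface w(𝐱)=1/2 𝐱·𝐆𝐱 it equals 2ℓ𝐆𝐩_i·𝐩_i^⊥ exactly. In the numpy sketch below, 𝐩_i^⊥ is taken as the normalization of 1/2𝐩_i+𝐩_i+1, consistently with the algebra above (the sign convention of (<ref>) may differ).

```python
import numpy as np

ell = 0.142   # nm, assumed C-C bond length
ang = [np.pi / 2 + 2 * np.pi * k / 3 for k in range(3)]
p = [np.array([np.cos(a), np.sin(a)]) for a in ang]
G = np.array([[0.3, 0.1], [0.1, -0.2]])        # arbitrary symmetric Hessian of w
w = lambda x: 0.5 * x @ G @ x
x0, i = np.zeros(2), 0

stencil = 2 * np.sqrt(3) / (3 * ell) * (2 * w(x0) - w(x0 + ell * p[(i + 1) % 3])
                                        + w(x0 + ell * p[i] - ell * p[(i + 2) % 3])
                                        - 2 * w(x0 + ell * p[i]))
v = 0.5 * p[i] + p[(i + 1) % 3]
p_perp = v / np.linalg.norm(v)                 # |1/2 p_i + p_{i+1}| = sqrt(3)/2
print(stencil, 2 * ell * (p[i] @ G @ p_perp))  # coincide
```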
With the same steps taken in the study of the Z-dihedral energy we now derive the limit of the C-dihedral energy.
With the expressions of the change of the C-dihedra (<ref>) and (<ref>), we can rewrite the C-dihedral energy,
see (<ref>), as
𝒰^𝒞_ℓ =1/2 k^𝒞 ∑_𝐱^ℓ∈ L_2(ℓ)∩Ω∑_i=1^3 (δΘ_𝐩_i^+(𝐱^ℓ))^2+ (δΘ_𝐩_i^-(𝐱^ℓ))^2
=1/2 8ℓ^2 k^𝒞 ∑_𝐱^ℓ∈ L_2(ℓ)∩Ω∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w(𝐱^ℓ))^2+o(ℓ^2)
=1/2 16√(3)/9 k^𝒞 ∑_𝐱^ℓ∈ L_2(ℓ)∩Ω∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w(𝐱^ℓ))^2|E^ℓ(𝐱^ℓ)|+o(ℓ^2)
=1/2 16√(3)/9 k^𝒞∫_Ω W^𝒞_ℓ(𝐱) d𝐱+o(ℓ^2),
where the function
W^𝒞_ℓ(𝐱):=∑_𝐱^ℓ∈ L_2(ℓ)∩Ω∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w(𝐱^ℓ))^2 χ_E^ℓ(𝐱^ℓ)(𝐱)
converges to ∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w)^2, as ℓ goes to zero.
Thus,
lim_ℓ→ 0𝒰^𝒞_ℓ=1/2 16√(3)/9 k^𝒞∫_Ω∑_i=1^3 (∂^2_𝐩_i𝐩_i^⊥w)^2 d𝐱
=:𝒰^𝒞_0(w),
which is (<ref>).
We now compute the limit of the self-energy.
By Taylor's expansion, with the notation introduced in (<ref>), and taking into account that
𝐩_1+𝐩_2+𝐩_3=0, we find
∑_iδϑ_i(𝐱^ℓ) =-3√(3)/ℓ^2[1/3∑_i=1^3w(𝐱^ℓ+ℓ𝐩_i)-w(𝐱^ℓ)]^2
=-3√(3)/ℓ^2ℓ^4/36 ( 𝐆𝐩_1·𝐩_1+𝐆𝐩_2·𝐩_2+𝐆𝐩_3·𝐩_3)^2+o(ℓ^2)
=-3√(3)/ℓ^2ℓ^4/9( 𝐆𝐩_1·𝐩_1+𝐆𝐩_2·𝐩_2+𝐆𝐩_1·𝐩_2)^2+o(ℓ^2)
=-√(3)/3ℓ^2( ∂^2_𝐩_1𝐩_1w(𝐱^ℓ)+∂^2_𝐩_2𝐩_2w(𝐱^ℓ)+∂^2_𝐩_1𝐩_2w(𝐱^ℓ))^2+o(ℓ^2).
With this expression the self-energy (<ref>) takes the form
𝒰^(s)_ℓ = ∑_𝐱^ℓ∈ (L_1(ℓ)∪ L_2(ℓ))∩Ωτ_0 ∑_i=1^3δϑ_i (𝐱^ℓ)
= -√(3)/3τ_0 ℓ^2∑_𝐱^ℓ∈ (L_1(ℓ)∪ L_2(ℓ))∩Ω( ∂^2_𝐩_1𝐩_1w(𝐱^ℓ)+∂^2_𝐩_2𝐩_2w(𝐱^ℓ)+∂^2_𝐩_1𝐩_2w(𝐱^ℓ))^2+o(ℓ^2).
Since the sum is over the points of both lattices, whereas in the previous cases the sum was only over the nodes of L_2(ℓ), we cannot use the hexagons E^ℓ(𝐱^ℓ) introduced earlier. Let T^ℓ(𝐱^ℓ) be the triangle
centered at 𝐱^ℓ of side √(3)ℓ, as depicted in Figure <ref>.
Let
W^ϑ_ℓ(𝐱):=∑_𝐱^ℓ∈ (L_1(ℓ)∪ L_2(ℓ))∩Ω(∂^2_𝐩_1𝐩_1w(𝐱^ℓ)+∂^2_𝐩_2𝐩_2w(𝐱^ℓ)+∂^2_𝐩_1𝐩_2w(𝐱^ℓ))^2 χ_T^ℓ(𝐱^ℓ)(𝐱)
and note that the area of T^ℓ(𝐱^ℓ) is |T^ℓ(𝐱^ℓ)|=3√(3)/4ℓ^2.
The self-energy rewrites as
𝒰^(s)_ℓ = -4/9τ_0 ∑_𝐱^ℓ∈ (L_1(ℓ)∪ L_2(ℓ))∩Ω( ∂^2_𝐩_1𝐩_1w(𝐱^ℓ)+∂^2_𝐩_2𝐩_2w(𝐱^ℓ)+∂^2_𝐩_1𝐩_2w(𝐱^ℓ))^2|T^ℓ(𝐱^ℓ)|+o(ℓ^2)
= -4/9τ_0 ∫_Ω W^ϑ_ℓ(𝐱) d𝐱+o(ℓ^2).
Since W^ϑ_ℓ converges to ( ∂^2_𝐩_1𝐩_1w+∂^2_𝐩_2𝐩_2w+∂^2_𝐩_1𝐩_2w)^2 as ℓ goes to zero, we find
lim_ℓ→ 0𝒰^(s)_ℓ=-4/9τ_0 ∫_Ω ( ∂^2_𝐩_1𝐩_1w+∂^2_𝐩_2𝐩_2w+∂^2_𝐩_1𝐩_2w)^2 d𝐱
=:𝒰^(s)_0(w),
which is (<ref>).
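It may be worth noting that both integrands are rotation invariant: with 𝐩_i^⊥ as above, ∑_i(∂^2_𝐩_i𝐩_i^⊥w)^2=3/8[(Δ w)^2-4 det∇^2w] and ∂^2_𝐩_1𝐩_1w+∂^2_𝐩_2𝐩_2w+∂^2_𝐩_1𝐩_2w=3/4 Δ w. This is elementary algebra for unit vectors at 120°, and can be double-checked numerically:

```python
import numpy as np

ang = [np.pi / 2 + 2 * np.pi * k / 3 for k in range(3)]
p = [np.array([np.cos(a), np.sin(a)]) for a in ang]
pperp = [(0.5 * p[i] + p[(i + 1) % 3]) / (np.sqrt(3) / 2) for i in range(3)]
G = np.array([[0.3, 0.1], [0.1, -0.2]])   # arbitrary symmetric Hessian of w

lhs_C = sum((p[i] @ G @ pperp[i]) ** 2 for i in range(3))
lhs_s = (p[0] @ G @ p[0] + p[1] @ G @ p[1] + p[0] @ G @ p[1]) ** 2
print(lhs_C, 3 / 8 * (np.trace(G) ** 2 - 4 * np.linalg.det(G)))   # equal
print(lhs_s, (3 / 4 * np.trace(G)) ** 2)                          # equal
```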
§ REFERENCES
Akiwande_2016
D. Akinwande, C.J. Brennan, J.S. Bunch, P. Egberts, J.R. Felts, H. Gao,
R. Huang, J.-S. Kim, T. Li, Y. Li, K.M. Liechti, N. Lu, H.S. Park, E.J. Reed,
P. Wang, B.I. Yakobson, T. Zhang, Y.-W. Zhang, Y. Zhou, and Zhu Y.
A review on mechanics and mechanical properties of 2d materials -
graphene and beyond.
Preprint at arXiv:1609.07187.
Alessi_2016
R. Alessi, A. Favata, and A. Micheletti.
Pressurized CNTs under tension: A finite-deformation
lattice model.
Compos. Part B Eng., in press,
http://dx.doi.org/10.1016/j.compositesb.2016.10.006.
Arroyo_2002
M. Arroyo and T. Belytschko.
An atomistic-based finite deformation membrane for single layer
crystalline films.
J. Mech. Phys. Solids, 50(9):1941 – 1977, 2002.
Arroyo2004
M. Arroyo and T. Belytschko.
Finite crystal elasticity of carbon nanotubes based on the
exponential cauchy-born rule.
Phys. Rev. B, 69:115415, 2004.
Bajaj_2013
C. Bajaj, A. Favata, and P. Podio-Guidugli.
On a nanoscopically-informed shell theory of carbon nanotubes.
Europ. J. Mech. A/Solids, 42:137–157, 2013.
Brenner_2002
D.W. Brenner, O.A. Shenderova, J.A. Harrison, S.J. Stuart, B. Ni, and S.B.
Sinnott.
A second-generation reactive empirical bond order (REBO)
potential energy expression for hydrocarbons.
J. Phys. Cond. Mat., 14(4):783, 2002.
Cadelano_2009
E. Cadelano, P.L. Palla, S. Giordano, and L. Colombo.
Nonlinear elasticity of monolayer graphene.
Phys. Rev. Lett., 102:235502, 2009.
Sfyris_2014
D. Sfyris, G.I. Sfyris, and C. Galiotis.
Curvature dependent surface energy for a free standing monolayer
graphene: Some closed form solutions of the non-linear theory.
Int. J. nonl. Mech., 67:186–197, 2014.
Sfyris_2014b
D. Sfyris, G.I. Sfyris, and C. Galiotis.
Curvature dependent surface energy for free standing monolayer
graphene: Geometrical and material linearization with closed form solutions.
Int. J. Engineering Science, 85:224–233, 2014.
Davini_2014
C. Davini.
Homogenization of a graphene sheet.
Cont. Mech. Thermod., 26(1):95–113, 2014.
Davini_2017
C. Davini, A. Favata, and R. Paroni.
A homogenized continuum theory for graphene bending.
Forthcoming, 2017.
Deng_2016
S. Deng and V. Berry.
Wrinkled, rippled and crumpled graphene: an overview of formation
mechanism, electronic properties, and applications.
Materials Today, 19(4):197 – 212, 2016.
Favata_2014
A. Favata, A. Micheletti, and P. Podio-Guidugli.
A nonlinear theory of prestressed elastic stick-and-spring
structures.
Int. J. Eng. Sci., 80:4–20, 2014.
Favata_2016
A. Favata, A. Micheletti, P. Podio-Guidugli, and N.M. Pugno.
Geometry and self-stress of single-wall carbon nanotubes and graphene
via a discrete model based on a 2nd-generation REBO potential.
J. Elasticity, 125:1–37, 2016.
Favata_2016b
A. Favata, A. Micheletti, P. Podio-Guidugli, and N.M. Pugno.
How graphene flexes and stretches under concomitant bending couples
and tractions.
Meccanica, in press, 2016.
Favata_2016a
A. Favata, A. Micheletti, S. Ryu, and N.M. Pugno.
An analytical benchmark and a Mathematica
program for MD codes: testing LAMMPS on the 2nd
generation Brenner potential.
Comput. Phys. Commun., 207:426–431, 2016.
Ferrari2014
A.C. Ferrari, F. Bonaccorso, V. Fal'ko, K.S. Novoselov, S. Roche,
P. Bøggild, S. Borini, F.H.L. Koppens, V. Palermo, N.M. Pugno, J.A.
Garrido, R. Sordan, A. Bianco, L. Ballerini, M. Prato, E. Lidorikis,
J. Kivioja, C. Marinelli, T. Ryhänen, A. Morpurgo, J.N. Coleman,
V. Nicolosi, L. Colombo, A. Fert, M. Garcia-Hernandez, A. Bachtold, G.F.
Schneider, F. Guinea, C. Dekker, M. Barbone, Z. Sun, C. Galiotis, A.N.
Grigorenko, G. Konstantatos, A. Kis, M. Katsnelson, L. Vandersypen,
A. Loiseau, V. Morandi, D. Neumaier, E. Treossi, V. Pellegrini, M. Polini,
A. Tredicucci, G.M. Williams, B. Hee Hong, J.-H. Ahn, J. Min Kim, H. Zirath,
B.J. van Wees, H. van der Zant, L. Occhipinti, A. Di Matteo, I.A. Kinloch,
T. Seyller, E. Quesnel, K. Feng, X.and Teo, N. Rupesinghe, P. Hakonen,
S. R.T. Neil, Q. Tannock, T. Löfwander, and J. Kinaret.
Science and technology roadmap for graphene, related two-dimensional
crystals, and hybrid systems.
Nanoscale, 7(11):4587–5062, 2015.
Goler_2013
S. Goler, C. Coletti, V. Tozzini, V. Piazza, T. Mashoff, F. Beltram,
V. Pellegrini, and S. Heun.
Influence of graphene curvature on hydrogen adsorption: Toward
hydrogen storage devices.
Phys. Chem. C, 117(22):11506–11513, 2013.
Hajgato_2012
B. Hajgató, S. Güryel, Y. Dauphin, J.-M. Blairon, H.E. Miltner,
G. Van Lier, F. De Proft, and P. Geerlings.
Theoretical investigation of the intrinsic mechanical properties of
single- and double-layer graphene.
J. Phys. Chem. C, 116(42):22608–22618, 2012.
Hartmann_2013
M.A. Hartmann, M. Todt, F.G. Rammerstorfer, F.D. Fischer, and O. Paris.
Elastic properties of graphene obtained by computational mechanical
tests.
Europhys. Lett., 103(6):68004, 2013.
Huang2006
Y. Huang, J. Wu, and K. C. Hwang.
Thickness of graphene and single-wall carbon nanotubes.
Phys. Rev. B, 74:245413, 2006.
Kim_2012
S.M. Kim, E.B. Song, S. Lee, J. Zhu, D.H. Seo, M. Mecklenburg, S. Seo, and K.L.
Wang.
Transparent and flexible graphene charge-trap memory.
ACS Nano, 6(9):7879–7884, 2012.
Koskinen_2010
P. Koskinen and O.O. Kit.
Approximate modeling of spherical membranes.
Phys. Rev. B, 82:235420, 2010.
Kudin_2001
K.N. Kudin, G.E. Scuseria, and B.I. Yakobson.
C_2F, BN, and C
nanoshell elasticity from ab initio computations.
Phys. Rev. B, 64:235406, 2001.
Lindahl_2012
N. Lindahl, D. Midtvedt, J. Svensson, O.A. Nerushev, N. Lindvall, A. Isacsson,
and E.E. B. Campbell.
Determination of the bending rigidity of graphene via electrostatic
actuation of buckled membranes.
Nano Lett., 12(7):3526–3531, 2012.
Liu_2007
F. Liu, P. Ming, and J. Li.
Ab initio calculation of ideal strength and phonon
instability of graphene under tension.
Phys. Rev. B, 76:064120, 2007.
LuJ1997
J.P. Lu.
Elastic properties of carbon nanotubes and nanoropes.
Phys. Rev. Lett., 79:1297–1300, 1997.
Lu_2009
Q. Lu, M. Arroyo, and R. Huang.
Elastic bending modulus of monolayer graphene.
J. Phys. D, 42(10):102002, 2009.
Luhuang2009
Q. Lu and R. Huang.
Nonlinear mechanics of single-atomic-layer graphene sheets.
Int. J. Appl. Mech., 01(03):443–467, 2009.
Pacheco_2014
A.A. Pacheco Sanjuan, Z. Wang, H.P. Imani, M. Vanević, and
S. Barraza-Lopez.
Graphene's morphology and electronic properties from discrete
differential geometry.
Phys. Rev. B, 89:121403, 2014.
SakhaeePour_2009
A. Sakhaee-Pour.
Elastic properties of single-layered graphene sheet.
Sol. St. Comm., 149(1–2):91 – 95, 2009.
Scarpa_2010
F. Scarpa, S. Adhikari, A.J. Gil, and C. Remillat.
The bending of single layer graphene sheets: the lattice versus
continuum approach.
Nanotech., 21(12):125702, 2010.
Scarpa_2009
F. Scarpa, S. Adhikari, and A. Srikantha Phani.
Effective elastic mechanical properties of single layer graphene
sheets.
Nanotech., 20(6):065709, 2009.
Shi_2012
X. Shi, B. Peng, N.M. Pugno, and H. Gao.
Stretch-induced softening of bending rigidity in graphene.
Appl. Phys. Let., 100(19), 2012.
Tapaszto_2012
L. Tapaszto, T. Dumitrica, S.J. Kim, P. Nemes-Incze, C. Hwang, and L.P. Biro.
Breakdown of continuum mechanics for nanometre-wavelength rippling
of graphene.
Nat. Phys., 8(10):739–742, 2012.
Tozzini_2011
V. Tozzini and V. Pellegrini.
Reversible hydrogen storage by controlled buckling of graphene
layers.
Phys. Chem. C, 115(51):25523–25528, 2011.
Tozzini_2013
V. Tozzini and V. Pellegrini.
Prospects for hydrogen storage in graphene.
Phys. Chem., 15:80–89, 2013.
Wei_2013
Y. Wei, B. Wang, J. Wu, R. Yang, and M.L. Dunn.
Bending rigidity and Gaussian bending stiffness of
single-layered graphene.
Nano Lett., 13(1):26–30, 2013.
Yakobson_1996
B. I. Yakobson, C. J. Brabec, and J. Bernholc.
Nanomechanics of carbon tubes: Instabilities beyond linear response.
Phys. Rev. Lett., 76:2511–2514, Apr 1996.
Zakharchenko_2009
K. V. Zakharchenko, M. I. Katsnelson, and A. Fasolino.
Finite temperature lattice properties of graphene beyond the
quasiharmonic approximation.
Phys. Rev. Lett., 102:046808, 2009.
Zhang_2011
D.-B. Zhang, E. Akatyeva, and T. Dumitrică.
Bending ultrathin graphene at the margins of continuum mechanics.
Phys. Rev. Lett., 106:255503, 2011.
Zhao_2009
H. Zhao, K. Min, and N. R. Aluru.
Size and chirality dependent elastic properties of graphene
nanoribbons under uniaxial tension.
Nano Lett., 9(8):3012–3015, 2009.
|